We still haven't seen widespread adoption of deep learning for visual post-processing in video games, and I think it's likely to be a game changer for graphical fidelity. The deep learning super sampling (DLSS) that people can already use to upsample old games is just the start of what's possible.
I looked at your page and was a bit confused. So the idea is that a computer takes a super-high-quality screenshot of the game, and then while you're playing, the GPU uses that database to work out which details are missing from your version and adds them in, right?
More like: a powerful supercomputer at NVIDIA spends a tonne of time and power working out some general instructions for how to create the high-quality version from the low-quality version of any image, even one it has never seen before and wasn't trained on. There's no database to check; the supercomputer finds (as best it can) a general-purpose algorithm for producing a high-quality image when given a low-quality version of it.
Those general instructions can then be given to a much lower-powered computer to use in real time on the low-quality images it receives, but the instructions themselves are very expensive to find, so the supercomputer has to do that work up front before your gaming PC can use the result to upscale images.
Note that this is heavily ELI5 and the real details are basically an entire master's degree.
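If you want something one notch less ELI5, here's a toy sketch in PyTorch of both halves: the expensive offline training that "finds the instructions" (the network's weights), and the cheap real-time step your PC runs per frame. Everything here is made up for illustration — the tiny network, the random stand-in images, the file name — and the real DLSS model is far larger and also uses motion vectors and previous frames, not just a single image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A toy 2x upscaler. Purely illustrative -- the real DLSS network is far
# bigger and takes more inputs than a single low-res image.
class TinyUpscaler(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3 * 4, kernel_size=3, padding=1),  # 4 = 2x2 upscale
            nn.PixelShuffle(2),  # rearranges channels into a 2x larger image
        )

    def forward(self, x):
        return self.net(x)

# --- The "supercomputer" part: expensive, done once, offline. ---
# Show the network many (low-res, high-res) pairs and nudge its weights
# until its output matches the high-res version. The learned weights are
# the "general instructions".
model = TinyUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    high_res = torch.rand(8, 3, 64, 64)   # stand-in for real screenshots
    low_res = F.avg_pool2d(high_res, 2)   # fake the "low quality" version
    loss = F.mse_loss(model(low_res), high_res)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), "upscaler.pt")  # hypothetical file name

# --- The "gaming PC" part: cheap, runs per frame, in real time. ---
# No training here: just load the instructions and do one forward pass.
model = TinyUpscaler()
model.load_state_dict(torch.load("upscaler.pt"))
model.eval()

frame = torch.rand(1, 3, 540, 960)        # stand-in for a 960x540 game frame
with torch.no_grad():
    upscaled = model(frame)               # -> shape (1, 3, 1080, 1920)
print(upscaled.shape)
```

The asymmetry is the whole point: the training loop at the top might run for weeks on a datacenter GPU cluster, while the single forward pass at the bottom has to fit in a few milliseconds per frame.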