Originally Posted by Jay20016
It is quite ironic that you want me to educate myself on something you obviously fail to grasp.
Answer me this: if I take a 1440p image and upscale it to 4K, how do I account for the fact that the 1440p image has fewer total pixels? What technique is used to reach the higher pixel count?
Did you bother reading the link I posted?
You know what happens when you take a 1440p image and upscale it to 4K in Photoshop? It looks like crap.
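That's all a simple upscaler does: it invents nothing. Here's a minimal sketch (a toy 4x4 array standing in for a 1440p image) showing that naive nearest-neighbour upscaling just duplicates existing pixels instead of adding detail:

```python
import numpy as np

# Toy stand-in for a low-res image: a 4x4 array of distinct values.
low = np.arange(16, dtype=float).reshape(4, 4)

# Nearest-neighbour 2x upscale: every pixel is simply repeated into
# a 2x2 block. No new information is created anywhere.
up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)

print(up.shape)  # (8, 8) -- four times the pixels...
# ...but each 2x2 block is just one duplicated source pixel:
print(np.array_equal(up[0:2, 0:2], np.full((2, 2), low[0, 0])))  # True
```

Smoother filters (bilinear, bicubic) blend neighbours instead of repeating them, but they still only rearrange the pixels that were already there, which is exactly why the result looks soft.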
I'm 99% certain that's how you think deep learning super resolution works, and therein lies the fault of your argument.
To answer your question, let me answer it with a question of my own: if you took that 1440p image and compared it side by side with a native 4K image, could you derive an algorithm that eventually comes close to the native 4K result? Because that's exactly what deep learning super resolution does. You train the network on ground-truth images, feeding it thousands of examples over and over for days or months, until it learns to produce near-native-quality output from lower-resolution input. Those "missing pixels" are inferred from patterns learned from the reference native images, which is something a simple upscaling filter can't do.

If you understand this, you can understand why Nvidia runs a dedicated supercomputing cluster to train DLSS on games, and why more training time means better output. It probably also explains why DLSS takes so long to implement per game, and why the Battlefield V implementation was likely rushed. They did mention the next BF5 patch will vastly improve DLSS; it's unfortunate they shipped it prematurely.
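The principle above can be sketched in a few lines. This is a deliberately tiny toy, not DLSS: 1D signals instead of images, and a linear least-squares fit instead of a deep CNN. But the core idea is the same one described here: fit a mapping against thousands of (low-res, ground-truth) pairs, then check that the learned upscaler beats naive pixel repetition on data it has never seen.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pair(n_hi=16):
    """A random smooth high-res signal and its 2x downsampled version."""
    hi = np.cumsum(rng.normal(size=n_hi))   # smooth-ish "ground truth"
    lo = hi.reshape(-1, 2).mean(axis=1)     # 2x downsample (8 samples)
    return lo, hi

# Training set: many (low-res, ground-truth) pairs, analogous to the
# reference native-resolution frames used to train the network.
pairs = [make_pair() for _ in range(2000)]
X = np.array([lo for lo, _ in pairs])   # (2000, 8)
Y = np.array([hi for _, hi in pairs])   # (2000, 16)

# "Training": fit a linear map from low-res to high-res by least
# squares. (A real SR model is a deep network; the objective --
# minimise error against ground truth -- is the same.)
W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # W: (8, 16)

# Evaluate on a held-out pair, against naive pixel repetition.
lo, hi = make_pair()
learned = lo @ W                 # learned upscale
naive = np.repeat(lo, 2)         # simple filter: just repeat samples

err_learned = np.mean((learned - hi) ** 2)
err_naive = np.mean((naive - hi) ** 2)
print(err_learned < err_naive)   # the trained mapping tracks ground truth better
```

Even this crude linear fit learns to interpolate from neighbouring samples rather than duplicate them, which is why it lands closer to the ground truth than the naive filter. Scale the model up to a convolutional network, the data up to rendered game frames, and the compute up to a training cluster, and you have the DLSS recipe.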
Lastly, can you do me a favor and actually play with DLSS on/off first-hand in games like Metro Exodus or FFXV before passing judgement? I'm sorry, but 1080p YouTube videos don't cut it.