“Real-time raytracing,” as advertised by hardware vendors and implemented in games today, is primarily faked by AI denoising. Even the most powerful cards can’t cast anywhere near enough rays to fully raytrace a scene in real time, so instead they cast a very small number of rays and use denoising to clean up the noisy result. That’s why, if you look closely, reflections can look weird and blurry/smeary (especially on weaker cards): the majority of those pixels are predicted by machine learning, not actually sampled from the real scene data.
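Conceptually, the pipeline looks something like this toy sketch (Python; Gaussian noise stands in for real path-tracing variance, and a simple box blur stands in for the neural denoiser, so everything here is illustrative rather than any vendor’s actual API):

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)

def trace(scene, spp):
    # Monte Carlo estimate: average `spp` noisy samples per pixel.
    # Gaussian noise is a stand-in for real path-tracing variance.
    acc = np.zeros_like(scene)
    for _ in range(spp):
        acc += scene + rng.normal(0.0, 0.5, size=scene.shape)
    return acc / spp

def denoise(noisy):
    # Stand-in for the ML denoiser; games use a trained neural network.
    return uniform_filter(noisy, size=(5, 5, 1))

scene = rng.random((120, 160, 3))    # toy "ground truth" image

film = trace(scene, spp=1024)        # offline: huge sample counts converge
game = denoise(trace(scene, spp=1))  # realtime: ~1 sample, then predict the rest
```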
The raytracers in Blender, Maya, and other film-production tools have always used some form of denoising (before machine-learning denoisers there were other algorithms), but in film it’s applied after casting thousands of rays per pixel. In a game today, scenes render at around 1 ray per pixel, and with DLSS it’s probably even less, since the internal render resolution is 2-4x smaller than the final image.
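To put a rough number on that “probably even less” bit, here’s the back-of-the-envelope math, assuming 1 ray per internal pixel and a 4x upscale (illustrative figures, not measured from any particular game or vendor spec):

```python
output_pixels = 3840 * 2160    # 4K final image
internal_pixels = 1920 * 1080  # internal render, 4x fewer pixels
rays_per_internal_pixel = 1    # roughly today's realtime budget

rays_per_output_pixel = rays_per_internal_pixel * internal_pixels / output_pixels
print(rays_per_output_pixel)   # 0.25 -- a quarter of a ray per displayed pixel,
                               # vs. thousands per pixel for film renders
```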
As a technologist, I’ll readily admit these are cool applications of machine learning, but as a #gamer4lyfe, I hate how they look in actual games. Until GPUs can hit thousands (or maybe just hundreds) of rays per pixel in real time, I’ll continue to call it “fake AI bullshit” rather than “real-time raytracing.”
also, here’s an informative video for anyone curious: https://youtu.be/6O2B9BZiZjQ

To each their own. I love ray tracing and think it looks far better than rendering without it. I’m not concerned with how we got here, only with the end result displayed on my screen, and it’s pleasing to my eyes.