The Story of Two Pictures

Things aren't always as they seem

Nvidia just hosted their CES keynote last night, and man did it have a lot of really exciting stuff. They covered the usual topics we've come to expect from previous events, such as ray tracing, the games that support it, the progress of rendering technologies over time, and so on. But beyond all of that, there was the RTX 2060 announcement, RTX Mobility, Adaptive Sync support, as well as BFGD (again). All really cool stuff if you ask me.

But, in the midst of all of this, there was again, Deep Learning Super Sampling (DLSS). DLSS is a new technique employed by Turing that uses a trained neural network to increase the visual fidelity of a scene shaded at lower than its native resolution. I won't say exactly how much lower, as the verdict is still out on that; in the case of Final Fantasy XV it looks to be somewhere between 1440p and 1800p, which is then scaled up to 4K. You can read about DLSS and how it works here.
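To put those resolution estimates in perspective, here's a quick back-of-the-envelope calculation (the internal resolutions are the estimates above, not confirmed figures) showing what fraction of a native 4K pixel count DLSS would actually be shading:

```python
# Rough pixel-count comparison of the estimated DLSS internal
# resolutions versus native 4K (3840x2160).
native_4k = 3840 * 2160

estimates = {
    "1440p (low estimate)": 2560 * 1440,
    "1800p (high estimate)": 3200 * 1800,
}

for name, pixels in estimates.items():
    share = pixels / native_4k
    print(f"{name}: {pixels:,} px shaded, {share:.0%} of native 4K")
```

In other words, even at the high estimate the card is shading well under three quarters of the pixels a native 4K frame would require, which is where the performance gain comes from.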

DLSS is a pretty controversial topic in itself. So far the reception has been pretty mixed, and publicly available tests are limited to Final Fantasy XV and its benchmark utility. Some reviewers will tell you it increases fidelity, others will tell you it decreases it. In my own personal experience, in this limited use case it does present a better overall picture, but that is simply down to the rather subpar implementation of TAA in FFXV as opposed to the merits of DLSS. But I digress.

DLSS

DLSS is said to be getting a lot of traction within games, but it's yet to come to fruition. In fact, Darksiders III is one of the titles that was said to get DLSS support and, at the time of writing, has yet to receive it. This is true for a few other games, most notably PUBG, which has the code in the test version but no profile to employ it. But one thing everyone can agree on, and tests do validate, is that it increases performance. So, leaving personal opinions on DLSS aside, let's just look at it as another tool in the bag.

So great, we have another option to employ in games as we see fit, so what's the issue? Well, marketing is the issue. Of course Nvidia is a corporation that creates products and technologies to sell at a profit. Let's not confuse ourselves about what these corporations are built around; this is true for AMD and Intel as well. But there is a difference between good and bad marketing. Let's get back on topic.

During the live stream, Jensen Huang showed us the upcoming (tomorrow, actually) Port Royal benchmark from Futuremark. This is the first DXR ray tracing benchmark available to the public. In the video, an RTX 2060 (disclosed later) runs the benchmark in two side-by-side feeds: on the left, 1440p with TAA; on the right, 1440p with DLSS.

Throughout the feed, Huang goes on and on about how much better the DLSS image looks. This should immediately give you pause (it did me). From what we know of DLSS, it actually lowers image quality slightly. This is especially true for texture quality, as the textures are essentially downsampled from their native asset resolution and shaded at some amount below native. Up until this point I wasn't thinking much of the benchmark beyond 'Omg, they are allowing DLSS in a benchmark, this isn't going to make people happy', but when he started going on about better visuals I started to pay closer attention.

The next thing I noticed was the blur. The feed on the left was a lot blurrier than the one on the right, which seemed a little off. Even with FFXV, it's not THAT blurry. Upon closer inspection, I noticed the frame rate overlay was actually blurred as well. That gave me an even bigger pause.

For anyone not up to speed with how Fraps works, here is a quick down-and-dirty explanation. When you start a game with Fraps (or RTSS/ShadowPlay) running, the renderer (DirectX or OpenGL) is 'hooked'. Think of it as the road the rendered frame travels on getting detoured through the overlay program. Whenever the API (DirectX or OpenGL) says 'Hey, this frame is done, let's send it out', the output is intercepted and handed to Fraps. This is the completed game image; all post-processing and effects are done (this is key). From there, Fraps draws its overlay and sends the output to the GPU for display on your monitor. So the overlay is added at the very end of the drawing process. The reason the blurry text on the left side caught my attention is that it would be impossible for TAA to affect the overlay text: TAA is a form of post-processing, which happens during shading, before Fraps ever has a chance to draw the text onto the screen. But somehow, in the video feed, the overlay text was blurred as if it had the world's worst TAA applied to it, much like the rest of the image.

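The ordering argument above can be sketched in a few lines of toy code. This is an illustrative model of the pipeline, not Fraps's actual implementation; the function names are made up:

```python
# Toy model of the frame pipeline described above: TAA runs during
# shading, while the overlay hook runs only after the frame is
# finished, so TAA can never touch the overlay text.

def shade_frame():
    frame = ["game pixels"]
    frame.append("TAA applied")         # post-processing, part of rendering
    return frame                        # 'this frame is done, send it out'

def overlay_hook(frame):
    frame.append("overlay text drawn")  # Fraps draws onto the finished frame
    return frame

frame = overlay_hook(shade_frame())
# TAA was applied strictly before the overlay text existed.
assert frame.index("TAA applied") < frame.index("overlay text drawn")
print(frame)
```

If the overlay text in a captured feed looks TAA-blurred anyway, the blur must have been introduced after this whole pipeline, i.e. somewhere in the capture or video processing, which is exactly the suspicion here.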
This all looks very fishy to me. Today, I pull the feed back up and start watching again. And sure enough, I am able to pause, take screenshots, and clearly see that the text on the left is in fact lower quality than the text on the right. It's been blurred. In addition, the entire left image appears lower resolution: there is spotting, there are artifacts, and the textures are lower quality, while the right side shows a much clearer image. Again, this doesn't line up with what we have seen from DLSS.

'But Mr. Huang, you're telling me DLSS looks so much better, how can this be true?' I thought the same thing. But the more I watched the video, the more the fabrication of what was being sold to me became apparent. In fact, Mr. Huang even contradicts himself in his own presentation. He says at the beginning that both feeds are running at the same resolution. Then later, as he's describing how much better the right image looks, he says it's actually running at a higher resolution. Well, that's about the most believable thing he said. I do believe the video feed on the right was a higher resolution, while the feed on the left was downsampled and filtered to decrease fidelity. That is a guess.

Now, I can't say exactly what happened; I wasn't there. I wasn't in charge of putting together the presentation, recording the videos, choosing the bitrate, and so on. But for this not to be outright fraudulent, gross negligence would have to be a factor: settings of vastly lower quality would have had to be used on the TAA side compared to the DLSS side. The worst case is that this was done willfully. Nvidia has been receiving a lot of flak from the community, and the reception of DLSS isn't as good as they would have liked (I imagine, anyway). So they are attempting to drive the marketing to win over opinions.

So, what's the point of all this? Well, insight, I guess. At PC Better, we are driven by facts and a no-BS approach. We don't like being misled, and we imagine those reading don't enjoy it either. The intent of this piece definitely isn't to sway opinion against Nvidia. They are a corporation like many others and are doing what corporations do. It's shady as all hell, but it's the reality of all three of the big tech giants. What I do want readers to take from this is a bit of a lesson, an eye-opener, and hopefully some encouragement toward educated and skeptical viewership. Never take what is being sold to you at face value; they want your money, and they are going to polish whatever turd they are selling as best they can to get into your wallet (I don't mean DLSS is a turd, just marketing in general).