I’ve been talking a lot about GPUs lately. A LOT. A big part of that is that the market is genuinely fascinating to me for the first time in a while, and it’s prompted a fair bit of research into what has changed in the AMD ecosystem over the last few years, what is new in the Nvidia one, and how the two compare.
When I’ve talked about RTX 3000 vs. RX 6000 so far, it has been a comparison of released (well, allegedly…) cards against prerelease numbers from the hardware vendor, which has a good track record for accurate data but also, of course, a desire to sell more cards. It has also focused largely on raw performance – framerate and the ability to handle higher detail and higher resolution fluidly. What I haven’t talked much about in the head-to-head is the software side.
For years now, Nvidia has set itself apart from AMD on the software side, or at least is perceived to have. Both vendors offer a fairly confusing array of software options, and with the launch of RX 6000 approaching, I think it is worth exploring these in more detail. Since I’ve been researching them in depth for my own purchasing decision, I felt like sharing!
Features Offered by Both Vendors
Screen Capture/Recording: Both Nvidia and AMD build advanced video encoders into their GPU dies, with software tied to them that gives gamers a lower-cost option for streaming and recording gameplay footage. The encoder standards are different (AMD uses AMF, while Nvidia uses NVENC), but both spare your CPU the work of encoding the video and can often do so at little or no performance hit to the GPU. Nvidia exposes this feature as Shadowplay through the GeForce Experience software, or you can use most standard screen capture tools and select the NVENC encoder in your settings. Likewise, AMD offers its tool through Radeon Adrenalin as ReLive, and it can also be enabled in software like OBS by selecting the AMD AMF encoder.
The two have a good amount of feature parity – they’ll record at the resolution you set in the game and at 60 FPS by default, but can be tweaked and tuned through a myriad of settings. ReLive supports desktop recording, which Shadowplay doesn’t, but both can be used to capture your desktop via third-party apps using their respective encoders. Both offer instant replay options through their own software, good for capturing moments when you weren’t actively streaming or recording, and you can adjust the buffer size for that feature to a longer or shorter length based on how often you think you’ll use it and how quickly you can get in to run it back.
Now, there is a quality difference between the two. Over the last several years, NVENC has completely outpaced AMF on video quality at the same settings, with even the newest version in the RX 5000 series cards producing worse quality than a CPU encode! If you need a quick solution on a lower-cost system without upgrading the CPU, AMF works well enough. If you have an Nvidia card, you can use NVENC, and the quality since Turing is outstanding. With a modern CPU, software encoding via x264 at the Medium preset or higher with a high bitrate can beat both, but you really want a very fast, high-core-count CPU for those settings. For most people, either GPU option is fine, but AMD’s AMF is the worse of the two, so if that matters to your use case, Nvidia is very clearly ahead in this regard.
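To make the encoder choices above concrete: all three paths are exposed through standard tools like FFmpeg under the encoder names `h264_nvenc` (Nvidia), `h264_amf` (AMD), and `libx264` (CPU). Here is a small sketch of building the equivalent commands – the file names and bitrate are placeholders I picked for illustration, not recommendations:

```python
# Sketch: building equivalent FFmpeg commands for each encoder path.
# h264_nvenc, h264_amf, and libx264 are FFmpeg's real encoder names;
# the input/output paths and bitrate are placeholder values.

def build_ffmpeg_cmd(encoder, src, dst, bitrate="8M"):
    cmd = ["ffmpeg", "-i", src, "-c:v", encoder, "-b:v", bitrate]
    if encoder == "libx264":
        # CPU encode: the Medium preset or better is where x264 pulls
        # ahead in quality, at the cost of significant CPU time.
        cmd += ["-preset", "medium"]
    return cmd + [dst]

nvidia = build_ffmpeg_cmd("h264_nvenc", "gameplay.mkv", "out_nvenc.mp4")
amd = build_ffmpeg_cmd("h264_amf", "gameplay.mkv", "out_amf.mp4")
cpu = build_ffmpeg_cmd("libx264", "gameplay.mkv", "out_x264.mp4")
```

The same idea applies inside OBS: the encoder dropdown is selecting between these hardware and software paths for you.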
Reduced Input Latency Gaming: Both AMD and Nvidia now have this tech, although AMD got there first. On AMD cards, this is Radeon Anti-Lag; on Nvidia cards, it is Nvidia Reflex. The high-level view of these technologies is this: through adjustments to how the graphics driver handles frame processing on the CPU side, they better synchronize the work being done on the CPU and GPU, reducing the number of times the CPU runs ahead of the GPU. That keeps the CPU free to accept new inputs from the player and fold them in before the GPU processes the frame. AMD does this by dropping to double-buffering of frames, while Nvidia does it through a companion technology called Ultra Low Latency. Reflex takes it one step further with a game-specific codepath – a game must be written to use Reflex – and manages further latency reductions through just-in-time frame processing, reducing missed inputs that would otherwise wait for the next frame.
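A back-of-the-envelope way to see why queue depth matters: each frame sitting in the CPU-side render queue adds roughly one frame-time of delay before a new input can reach the screen. This is my own simplified model, not either vendor’s actual pipeline, but it captures why trimming buffered frames pays off directly:

```python
# Toy model of render-queue latency. Each queued frame adds about one
# frame-time before a fresh input can appear on screen. This is a
# simplification of what Anti-Lag/Ultra Low Latency/Reflex actually do.

def input_latency_ms(queued_frames, fps):
    """Rough input-to-display latency for a given queue depth and framerate."""
    frame_time_ms = 1000.0 / fps
    return queued_frames * frame_time_ms

# At 60 FPS, a 3-frame queue adds ~50 ms before input shows on screen;
# a single in-flight frame cuts that to ~16.7 ms.
deep = input_latency_ms(3, 60)
shallow = input_latency_ms(1, 60)
```

Reflex’s just-in-time scheduling is effectively an attempt to keep that queue as close to one frame as possible without starving the GPU.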
The comparison here is pretty straightforward, unlike the video encoder mess – by most reviews, Radeon Anti-Lag currently works better by virtue of supporting all DirectX 9 and 11 games, while Nvidia’s Ultra Low Latency only works in DirectX 11 and Reflex requires specific developer implementation, which means it is only supported in a very limited roster of titles. Where Reflex is available, it can deliver some improvements over Radeon Anti-Lag, but it requires such a specific set of scenarios to gain that benefit that it would be difficult to recommend it broadly over Anti-Lag.
Adaptive Sync for Displays: Adaptive sync, Variable Refresh Rate, and FreeSync/G-Sync are all basically the same idea. V-sync in the past meant locking your graphics output to your monitor’s refresh rate to a tee – a 60 Hz monitor meant V-sync gave you framerates of 60, 30, 20, or less, in even divisors of 60. Adaptive sync is the same idea but allows the monitor and PC to match each other within limits – a 75 Hz monitor with adaptive sync will have a range of framerates it can match up to its limit. FreeSync is AMD’s implementation, which includes a baseline standard the display must meet, but it is free to the monitor vendor, so even cheaper monitors with 75 Hz panels will often be FreeSync certified. Nvidia’s equivalent is G-Sync, but in the past, G-Sync required the manufacturer to pay a licensing cost to Nvidia and also pay for an adapter card in the monitor that provided hardware-level G-Sync features. Now, G-Sync is solely a licensing program, and Nvidia uses it to cannibalize AMD sales by certifying monitors without the expensive G-Sync board so the box carries Nvidia’s logo and branding instead. FreeSync is pretty good, although the standard is fairly loose, especially when it comes to HDR quality; G-Sync generally has tighter standards, but even in the non-hardware era, G-Sync panels come at a higher cost.
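The divisor behavior of classic v-sync versus the matching behavior of adaptive sync can be sketched numerically. This is a simplified model of double-buffered v-sync (real drivers and triple-buffering setups behave differently), but it shows why missing 60 FPS by a hair used to hurt so much:

```python
import math

def vsync_fps(refresh_hz, render_fps):
    """Simplified double-buffered v-sync: if the GPU misses a refresh, it
    waits for the next one, so output locks to refresh / n for whole n."""
    if render_fps >= refresh_hz:
        return float(refresh_hz)
    return refresh_hz / math.ceil(refresh_hz / render_fps)

def adaptive_sync_fps(render_fps, vrr_min, vrr_max):
    """Adaptive sync: the display matches the GPU within its VRR window."""
    return float(max(vrr_min, min(render_fps, vrr_max)))

vsync_fps(60, 50)              # rendering 50 FPS drops output to 30.0
adaptive_sync_fps(50, 48, 75)  # the panel simply runs at 50.0
```

With v-sync, a card rendering 50 FPS on a 60 Hz panel falls all the way to 30; with adaptive sync inside the VRR window, the display just follows the GPU.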
Ray-Tracing, With Caveats: With the RX 6000 series, AMD will officially support DXR ray tracing and VulkanRT. This covers the majority of current ray-tracing titles, with two exceptions – Quake II RTX and Wolfenstein: Youngblood, which use custom implementations from Nvidia. Nvidia pioneered ray tracing at the hardware level with the RTX cards starting in August 2018 and has offered developers this technology through its proprietary OptiX API. Uptake has been slow, and I suspect DXR will be the most common implementation going forward, as it should ensure broad support across RX 6000 cards, RTX 2000 and 3000 cards, and the Xbox Series consoles, while likely being easier in some ways to port to PS5. Nvidia, however, packages ray tracing as part of a broader “RTX” ecosystem, which includes our next entry!
Nvidia-Specific Features
DLSS: This is both the big one and, for now, also a sort of side-concern. DLSS (Deep Learning Super Sampling) is AI-based upscaling technology, which Nvidia trains using supercomputers built on their own GPUs to create smooth scaling based on a ground-truth 16K-resolution image and a low-detail 720p image, using motion vectors in the scene to account for movement and ensure neat, clean scaling. Like a lot of Nvidia’s technologies, this requires developer support to be built into a game. Currently, about 20 games support DLSS, which is a relatively small roster on which to build a must-have feature. In games where the 2.0 version of the technology works, it works very well, delivering an excellent experience with sharp details that can often look close enough to what you’d expect at native resolution, and sometimes even very slightly better thanks to the “ground-truth 16K” images used in training. It remains a point of focus for Nvidia, which is working aggressively to expand it to more titles, but so far I can only find 12 more titles scheduled to launch this year with DLSS implementations slated.
DLSS aims to do two things – make ray tracing more accessible by reducing render resolution, which makes scenes simpler to perform BVH calculations on, and enable crazy and stupid marketing claims like 8K gaming at 60 FPS. The new 9x mode introduced to allow that 8K claim has some pretty funny implications, which you can see explored in a YouTube video from creator 2kliksphilip.
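The arithmetic behind that “9x” mode is simple: 9x fewer pixels means a 3x reduction per axis, so an “8K” output is actually rendered at 1440p before upscaling. A quick check, assuming the pixel-count factor is a perfect square applied evenly to both axes:

```python
import math

def dlss_render_resolution(out_w, out_h, pixel_factor):
    """Render resolution implied by a DLSS pixel-count scaling factor,
    assuming the factor is a perfect square applied evenly per axis."""
    axis = math.isqrt(pixel_factor)  # 9x pixels -> 3x per axis
    return out_w // axis, out_h // axis

dlss_render_resolution(7680, 4320, 9)  # "8K" output from a 1440p render
dlss_render_resolution(3840, 2160, 4)  # 4K output from a 1080p render
```

So the “8K 60 FPS” headline is really the card rendering a 2560x1440 image and leaning on the upscaler for everything else.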
Nvidia Broadcast: Less of a graphics feature, but something Nvidia is pushing hard nonetheless, Nvidia Broadcast is a set of streaming and content creation tools that aim to make this kind of content easier to produce at higher quality. It includes a set of tools accelerated in hardware on RTX cards – AI-assisted background adjustment or replacement, AI-assisted noise reduction on microphone input, and webcam auto-framing to account for motion of the subject – and may in the future offer webcam upscaling (a feature Nvidia showed off at their most recent GTC event, but which has not yet been tied into Broadcast or a gaming-GPU feature). This is great for content creators just starting out – when you have to stream or record in a room you also live in, background blurring or replacement helps, and easy tools like camera reframing and background noise removal on a starter mic can make a big difference in how professional your content looks. Doing all of that through a single suite without heavy hardware costs (well, outside of buying an Nvidia graphics card HEYOOOOO) is a good thing!
Omniverse: While not out yet, the coolest thing to me among Nvidia’s software offerings from the Ampere announcement is this tool. Omniverse is a machinima creation toolkit, designed to let you pull assets from games into supported 3D applications, use AI-enhanced pose mapping to capture your own movement from a webcam and map it to a model’s animation rig, animate a model’s face using only an audio file of speech, or even set an army of models into motion in a quick and simple fashion. As a tool, this is immensely cool, and genuinely exciting to me even as someone who doesn’t make machinima. For Nvidia, the use of those Tensor cores for AI workloads lets them, again, plug the sale of an RTX card and push people toward their platform.
Studio Drivers: Nvidia offers two driver paths to GeForce customers in 2020 – the standard Game Ready driver and the Studio driver. If you use your PC almost exclusively for gaming, the Game Ready driver path is fine and the one you should use. If you use your primary gaming PC as a workstation at all, however, the Studio drivers can unlock additional improvements in professional applications like most of the Adobe suite, most 3D rendering kits, and a handful of other applications. It won’t mean much to many people, but for amateur content creators, the ability to unlock even some professional-grade optimizations via a driver path is a worthy feature!
GameWorks: Nvidia offers a handful of features to developers that are something of a black box. These features all offer increased visual fidelity at low cost on Nvidia hardware, and while they do also work on AMD cards, they often run slower there due to optimizations made specifically for Nvidia cards. They include a full kit called VisualFX for rendering effects, FaceWorks for facial animation, HairWorks for hair but also grass and similar effects, PhysX for physics simulations (this one can also run on the CPU), and the previously mentioned OptiX API for ray-traced lighting. These features offer a lot of visual detail, but have come under fire as being used specifically to make AMD cards look bad (the first version of the Nvidia-supported Final Fantasy XV PC benchmark used HairWorks fur and grass simulation that was not culled properly from the render pipeline, resulting in the game rendering hair and foliage that was miles away from the scene and not even in front of the camera). There’s no evidence that this was done maliciously, but it does make for a fun conspiracy!
AMD-Specific Features
Radeon Boost: A part of AMD’s performance enhancement toolkit, Radeon Boost works in a small handful of games (which all require specific support for it) to detect high-motion scenes and dynamically adjust render resolution downwards to improve performance, bringing it back up as the motion resolves and the scene returns to normal. It is currently supported in a whopping 8 titles, so I definitely wouldn’t claim this as a huge win, but it’s cool.
Radeon Chill: While both vendors offer some driver support for low-power modes, AMD offers Radeon Chill, a dynamically-adjusted power limiter that cuts power consumption and heat output. With Chill enabled, you set a minimum and maximum FPS target, and the GPU will drive down power consumption when your target is met or exceeded, avoiding rendering extra frames that your display cannot show or that you don’t benefit from. This feature is great for those with air-cooled cards, especially blower coolers like the stock Radeon RX 5700 series cards, as reduced power and heat means the fan does not have to ramp up as loudly! You can set it per-game (a higher FPS target for a competitive shooter, a lower one for an MMO or casual game) or globally, and it works with most games under all APIs except OpenGL.
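The core idea can be sketched as a frame limiter that picks its target based on recent input activity – this is a toy model of my own, not AMD’s actual implementation, and the default FPS bounds here are just illustrative values:

```python
# Toy Radeon Chill-style limiter: when input is idle, pace frames to the
# lower FPS target to save power and heat; when input is active, allow
# the higher target. The fps_min/fps_max defaults are placeholder values.

def chill_frame_sleep_ms(render_ms, input_active, fps_min=40, fps_max=90):
    """Return how long to sleep after a frame that took render_ms to draw."""
    target_fps = fps_max if input_active else fps_min
    budget_ms = 1000.0 / target_fps
    return max(0.0, budget_ms - render_ms)  # sleep off the leftover budget

chill_frame_sleep_ms(5.0, input_active=False)  # idle: rest ~20 ms per frame
chill_frame_sleep_ms(5.0, input_active=True)   # active: rest only ~6 ms
```

Every millisecond the GPU spends sleeping instead of rendering an unneeded frame is power not drawn and heat not dumped into the cooler.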
Radeon Image Sharpening: Up until the RX 6000 series, this was presented as the alternative to Nvidia’s DLSS. However, we now know that AMD is working on a Super Resolution feature, intended to be their counter to DLSS for real this time. So what is RIS? Simply put, RIS is an application of Contrast Adaptive Sharpening (CAS). CAS uses the contrast of colors and objects in the scene to determine what to sharpen, with the algorithm designed to leave the edges of objects untouched while dialing up the detail in the rest of the scene. It works decently well and does look nice, although it does not offer quite the same visual impact or performance difference as DLSS – any performance benefit comes from running the game at a lower resolution and sharpening the result. This pays off in a couple of ways, though. The first is that RIS can be used at your intended resolution to enhance detail in the scene, which new titles like Godfall are doing. The second is that it will (supposedly) work in tandem with Super Resolution when that launches, so that you could apply sharpening to the lower-resolution render target and then scale it up, gaining the sharpness from RIS while also getting the resolution to match. Theoretically, it could also be run again after scaling via Super Resolution (or might be better served running only after), and could end up delivering superbly sharp images. For now, that is just my speculation!
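The contrast-adaptive idea can be illustrated with a heavily simplified kernel: sharpen each pixel toward its local mean, but scale the effect down as local contrast rises, so hard edges are left mostly alone. To be clear, this is my own sketch of the concept, not AMD’s actual CAS shader:

```python
# Simplified contrast-adaptive sharpening on a 2D grayscale grid
# (values 0-255). Not AMD's real CAS kernel - just an illustration of
# "sharpen more where contrast is low, back off where it is high".

def sharpen_cas_like(img, amount=0.5):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # border pixels pass through untouched
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            n = [img[y - 1][x], img[y + 1][x], img[y][x - 1], img[y][x + 1]]
            lo, hi = min(n + [img[y][x]]), max(n + [img[y][x]])
            contrast = (hi - lo) / 255.0
            weight = amount * (1.0 - contrast)  # weaker on strong edges
            local_mean = sum(n) / 4.0
            val = img[y][x] + weight * (img[y][x] - local_mean)
            out[y][x] = max(0, min(255, val))
    return out
```

On a flat region nothing changes, while a pixel that stands out from its neighbors gets pushed further from the local mean – that push shrinking as the surrounding contrast grows.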
Radeon Integer Display Scaling: This one is actually kind of cool, in that it’s not something Nvidia markets much with their feature sets. Retro games are part of the appeal of a PC – the idea that (theoretically) any Windows title ever released could still be played today, and modern storefronts like GOG offer patched versions of old titles that run on Windows 10. However, their resolution often scales poorly, and display-applied stretching and squashing can make them look a lot worse. Integer Display Scaling fixes this with a flat pixel-scaling mechanism: each source pixel is multiplied into a square block of pixels, with empty space inserted around the image, allowing it to scale up to fit a modern display while maintaining the original image fidelity. This does mean the softness that can sometimes enhance a retro game’s look goes away, but it gives you more period-accurate visuals and removes the stretching that comes from a game being monitor-scaled to match the aspect ratio of a modern display (which is very different from, and much wider than, our old CRTs!). This one largely comes down to preference, but if you play a lot of older titles on your modern PC, this feature makes that work more authentically.
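In sketch form, integer scaling just picks the largest whole-number factor that fits the display and duplicates each source pixel into a k-by-k block, leaving whatever space remains as bars. A minimal model of the idea (treating a frame as a 2D grid of pixel values):

```python
# Integer scaling sketch: duplicate each source pixel into a k x k block,
# where k is the largest whole-number factor that fits the destination.
# Leftover destination space would be letterboxed with black bars.

def integer_scale(src, dst_w, dst_h):
    src_h, src_w = len(src), len(src[0])
    k = min(dst_w // src_w, dst_h // src_h)
    scaled = [[src[y // k][x // k] for x in range(src_w * k)]
              for y in range(src_h * k)]
    return k, scaled

# Example: a 320x240 source on a 1920x1080 display gets k = 4, producing
# a crisp 1280x960 image with bars around it - no blur, no stretching.
```

Because every output pixel is an exact copy of a source pixel, nothing is interpolated, which is exactly why the result stays sharp instead of smearing.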
FidelityFX/GPUOpen: Like GameWorks from Nvidia, AMD offers a suite of enhancements tuned and tweaked to work better on their cards, with the newest RDNA2 implementations leaning on features like Variable Rate Shading and Contrast Adaptive Sharpening. Unlike GameWorks, however, FidelityFX is fully open source and documented, allowing developers to implement and adjust the code as needed to better suit their engine, or even to optimize for a more general set of hardware. It includes a set of features similar to GameWorks – TressFX for hair, fur, and foliage, lighting and shadow toolkits, GeometryFX for managing scene geometry more efficiently, and a mix of other tools with broad application. With AMD GPUs now powering the main visual-fidelity-focused game consoles of the last two generations, FidelityFX has broad support, and many of the older GPUOpen tools have implementations in tons of titles. At the RX 6000 launch, there will already be titles available with FidelityFX optimizations, and as the new-generation consoles roll out, support will likely continue to grow.
Recapping these, it is easy to see how Nvidia gets a lot of lock-in – they market their features far more aggressively, where AMD tends to offer a lot of capability buried in driver menus and software download pages that many may not be aware of. When it comes to gameplay-focused features, I think Nvidia currently has a very slight edge: features like DLSS are game-changers (for the very limited number of titles that support them), their driver control panel offers more image adjustments, and they have a greater library of features that hook into gaming-adjacent uses of the GPU like streaming and video creation/rendering. However, AMD offers a lot of higher-level adjustments that apply to a broader selection of titles. Radeon Image Sharpening just works, as long as the game uses a supported API. Integer Scaling is a neat feature that better supports retro games. Having their visual effects libraries open source means more games can integrate them without a performance hit on Nvidia GPUs, while also leveraging the optimizations for AMD GPUs. Radeon Chill can be nice, depending on your use case, although a lot of AMD fans have a weird attachment to it that leads to strange comparisons (I’ve seen more than one Reddit thread war comparing Chill to DLSS as if they were in any way things that compare neatly to each other).
Outside of gaming, this is a fight AMD isn’t even showing up for, and one I would like to see them start working on. AMD’s AMF, while good enough, isn’t a great video encoder and often seems neglected; I’d love to see that change. Nvidia is firing off a lot of shots outside of core gaming usage – Broadcast offers a lot of fantastic features to gamers who stream or record, and I would hope AMD focuses on this alongside the AMF encoder and gets both up to par. Omniverse sounds cool, but I’d have to see Nvidia actually release it to know for sure. Lastly, the Studio driver path is absolutely something I think AMD should emulate – their cards have a lot of compute power (especially the older GCN models), and opening up some compute capability and professional acceleration on the enthusiast gamer cards can only position the Radeon brand more strongly.
Either way, I don’t feel there is a clear winner overall. If you game but also do a lot of content creation, Nvidia neatly wins; if you just game, AMD is in the running and has a solid case over Nvidia for many types of gaming diet. I guess it comes back to performance, and if AMD’s word holds true (we find out when the RX 6800 series cards launch next week!), then they may very well have a winning hand.
2 thoughts on “Sidenote: Comparing GPU Software Kits Between Nvidia and AMD”
I know enough to know that I need to know more. That said, there’s an interesting saying that goes something like “who got time for that?”
There’s a limit on practical understanding of anything tech related, aside from “does it look good”. The investment required to get a return on this makes it ultra niche, and practically like trying to tell the future. Ray tracing is practical. 4K to some degree, but the monitor/TV price points are still a challenge. But I recall SLI/Crossfire being the second coming and it’s all but in the bin with laserdiscs. Crimes, I still remember when Plug and Play cards were the big thing on the block. We’re on the border of audiophile territory in terms of “120FPS is so much better than 100FPS” – our eyes can barely see over 80.
I’m more on the bandwagon of reducing the size of the components so you can get better performance from smaller settings. I realize that there are physical limits to size/heat that cannot be overcome with our current understanding of tech, but that’s where we need to be.
Long rant aside, damn I love your passion for it all the same.
I’m not sure I’d agree that SLI was ever seen as the second coming. It was always an ultra enthusiast, ultra-niche element. Given the choice of spending roughly equal dollars on a higher-end card or two lower cards, it was almost always a better idea on balance to go with the single card.
So that really only left if you were already spending the megabucks for the top and then still wanted more.
I suppose there was a time where an argument could’ve been made for buying one of the best you can now, and then adding a second at some future point.
But at least with my experience in that regard, unless you were talking fairly short term, it was generally better to go with a single good card of the next generation.
Having said all that… Yeah, I suppose some people really did go quite batty over it. xD