Sidenote: Potential Pitfalls for the Radeon RX 6000 Series Cards

I spent much of yesterday being very excited for the new AMD Radeon RX 6000-series cards, making jokes about the RX 6900 XT (nice) and being very interested in the performance numbers that AMD shared.

Now, I complained last month about what I dislike in Nvidia's marketing, which, distilled simply, is this – the company loves to obfuscate via benchmarks and does not provide easy-to-digest numbers or otherwise browsable metrics. AMD has earned a lot of trust over the Ryzen era for being forthrightly transparent and not fudging numbers – they do engage in some cherry-picking, but generally speaking, they're pretty on the level and they don't shy away from providing unflattering benchmarks (the new Ryzen CPUs still losing to Intel in Battlefield V, the Radeon RX 6000 cards doing poorly at Wolfenstein: Youngblood), as these help paint a more accurate picture of where a component actually performs.

After the videos yesterday, I kind of thought that was all we'd get, short of the partner game stuff, until third-party review embargoes lifted and reviewers started to spill the tea. But I took a trip to AMD's site and found something I genuinely appreciate: a full set of the benchmarks used in the Radeon event, showing system configuration and giving a single, apples-to-apples chart you can browse for each game they tested, with all 6 cards for comparison – the 3 Nvidia cards they compared against and the 3 new Radeons. It also lets you select the resolution, and has dropdowns for settings and APIs (although those only offer one choice each).

I have only one concern about this data directly: as of this writing, the page does not make clear whether the numbers for the new Radeon cards were captured with Smart Access Memory or Rage Mode enabled. SAM requires the specific platform AMD used for testing, while Rage Mode is available to any owner of these cards, so the distinction is important in my opinion.

However, browsing this data did two things – it reaffirmed my purchasing decision towards the RX 6900 XT, but it also raised some potential issues worth pointing out. Because of that, and my desire to write more about this (for my own sake as a potential buyer, but also because I got feedback that people like reading these!), I wanted to break down some potential concerns I have after looking at this data.

API Choice: No DX11 or DX9 Titles: The numbers AMD has in this cool table view only cover DirectX 12 and Vulkan titles. Both of these APIs are heavily optimized for modern GPUs, but they are also low-level and require the developer to put some elbow grease into implementation for best results. Vulkan, coincidentally, is built on the remnants of AMD's own Mantle API, which the company built in the early 2010s as enabling software for the APUs in the PS4 and Xbox One, just as the Bulldozer debacle began to unfold. Naturally, it does tend to run better on AMD hardware, although Turing and onward GPUs from Nvidia handle it much better than past parts because of their dedicated integer units. This isn't a controversy in my opinion, though, because in both major Vulkan showcase titles – Doom Eternal and Wolfenstein Youngblood – the RTX 3090 sits atop the charts. DX12 doesn't have as interesting an origin story, but it is again a case where AMD's architecture is generally better optimized, as only Turing and forward from Nvidia have had hardware that fully plays nice with DX12.

What potentially worries me is the complete exclusion of DX11 and DX9 titles. DirectX 11 is still in use in many games and is one place where Nvidia has always done very well. For me, the title that matters is Final Fantasy XIV, a DX11 game, so I would like to see some DX11 benchmarks prior to launch just to have a ballpark estimate of what to expect. DX9 support is more of a joke from me, but if you play older games or are a highly competitive Counter-Strike: Global Offensive player, that API still matters, sadly! I don't think there are any real surprises or scares lurking in the DX11/9 shadows, but I would love to see AMD expand these charts over the coming weeks to show more data from different APIs, just to get a taste of where things will land. The webpage they built to show these numbers already has the functionality, after all.

No Board Partner RX 6900 XTs?: This one was a rumor circulating in the lead-up to the announcement (helpfully pointed out by reader Jay in a comment on my initial announcement post), and as announcements come out from AMD's board partners, it does seem to be coming true: AMD themselves will be the only source of the RX 6900 XT, with no board partner custom designs forthcoming. I find this disappointing for a couple of reasons. Firstly, the reference design, while it seems very nice, has only two 8-pin PCIE power connectors, which deliver a combined 300w to the card – and 300w is also the power rating of the card! A PCIE slot can deliver a further 75w at full specification, meaning that in theory, overclocking could give the card up to a 125% power limit without overdrawing any of the involved connectors, and that assumes the card's VRM solution is built to allow it. Board partners tend to overbuild cards to attract enthusiast buyers: higher-spec VRM components, additional power connectors for overclocking, multiple onboard VBIOS options with different power targets and boost clock limits, and higher-end coolers, including custom water blocks or elaborate air coolers.
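As a back-of-the-envelope sketch of that connector math (the 150w-per-8-pin and 75w slot figures are the standard PCIE ratings; the function name here is just for illustration):

```python
# Rough in-spec power delivery math for a graphics card.
# Per the PCIE spec: each 8-pin connector is rated for 150 W,
# and the slot itself can supply 75 W.
EIGHT_PIN_RATING_W = 150
SLOT_RATING_W = 75

def max_power_limit_pct(num_8pin: int, board_power_w: int) -> float:
    """Maximum in-spec power limit as a percentage of rated board power."""
    deliverable = num_8pin * EIGHT_PIN_RATING_W + SLOT_RATING_W
    return deliverable / board_power_w * 100

# Reference RX 6900 XT: two 8-pins + slot = 375 W against a 300 W rating.
print(max_power_limit_pct(2, 300))  # 125.0

# A hypothetical triple 8-pin partner board would have far more headroom.
print(max_power_limit_pct(3, 300))  # 175.0
```

This is only the connector-rating ceiling, of course – whether the VRM can actually sustain that delivery is a separate question, as noted above.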

Now, I know I am water cooling my next build, so my hope was that I'd be able to get a PowerColor Liquid Devil series RX 6900 XT, ideally with a two-mode VBIOS switch and an extra 6 or 8-pin PCIE connector to allow for massive overclocking headroom. On Radeon cards, the best overclocking comes from software-modding the PowerPlay tables in your registry to jack the headroom up, or from using a custom VBIOS that sets different limits, which can then be pushed further within Radeon Adrenalin or a third-party OC tool. And, according to Igor's Lab, the maximum boost clock listed in these GPUs' BIOSes is a staggering 2.8 GHz! Now, 2.8 GHz is likely something you'd only see on an exotic cooling option like liquid nitrogen, but the RX 6800 XT is supposedly hitting 2.5 GHz-plus on air cooling from vendor cards! My bother with all of this is that I want my next system to be heavily tuned and optimized for as much performance as I can wring out of the parts while keeping things in a stable, safe operating range for daily use. I've forecast 400w of available PSU throughput for the graphics card, with around 93w of headroom in the power supply past that, even assuming a 200w Ryzen 9 5950X (the CPU overclocked as well to reach that figure).

However, if only AMD offers the card, then the design may be a limiter. AMD's reference cards are built by Sapphire as-is, so the quality is generally good – even the reference RX 5700 XT could handle a +90% PowerPlay table mod or thereabouts – so it might not mean much. But given that Ampere cards from Nvidia have very little overclocking headroom, I was hoping that switching to Team Red would mean getting much better margins. A 25%+ power limit increase on a 300w base card is nothing to scoff at, but I do still wish I could see a third-party board with a crazy overbuilt VRM capable of handling 400w+ with ease (and without concern for melting PSU cables or overdrawing the motherboard into instability!).

GPU War Agnostic Vendor Designs Reuse Nvidia Coolers: This one is a quirk of the market: Nvidia cards sell more, so many AIB partners build a cooling solution for Nvidia and then adapt it to work on Radeon cards. The board partners that have announced custom designs thus far are MSI and Asus, both of whom have a bit of a bad reputation for shorting AMD buyers, even in the most recent Radeon releases. MSI's option is a Gaming Trio with the same cooler they've used on their Ampere cards, while Asus has announced a ROG Strix lineup that includes the same cooler they have been using, plus a unique hybrid liquid-cooled card that looks pretty awesome. The coolers aren't a problem in terms of thermal dissipation – Ampere cards run way hotter, so putting a solution that can cool a 400w+ custom-design RTX 3090 onto a 300w RX 6800 XT isn't a bad thing per se. What has been bad in the past is that these cooler designs are often fit to Nvidia's spec on mounting pressure and cold plate sizing and location, and require reworks of some sort to better fit Radeon cards. If both of these companies have custom PCB designs on deck that reroute components on the board to fit, great – all will be well. If they instead take the reference PCB and slap the Nvidia-first cooler design on it without testing for adequate mounting pressure or contact, there will be issues. Both vendors have, in the past, responded poorly to initial concerns from customers who received poorly mounted coolers or inappropriately small thermal pads for memory chips – a thing MSI did with some models last generation.

Drivers: I keep coming back to this one, but it remains the pre-eminent roadblock between me and an RX 6900 XT. RDNA cards released last year had a higher-than-usual rate of user reports of driver errors, to the point that even today, navigating the AMD subreddit often requires wading through a number of these posts. Issues of blackscreening, hard locks, soft locks, abnormal readings from on-board temperature sensors in the Radeon software, and a variety of other showstopping behaviors are not uncommon to see. While AMD has made significant strides in this area in the last year, there is a lot of reasonable concern over how these types of problems could manifest in RDNA 2, especially given the new Infinity Cache, Smart Access Memory, and other technologies that add new wrinkles to the driver stack.

I will stress here that the plural of anecdote is not data, and a lot of dedicated fans reporting issues doesn't inherently mean a higher occurrence rate of problems versus Nvidia drivers. But "AMD drivers" have become a meme for a reason, and a lot of potential buyers who currently have Nvidia cards can only judge from the outside based on those user reports, so some concern is warranted. AMD has clearly been working on this launch for a long time, given the months of trickling rumors and the multiple Linux driver dumps showing RDNA 2 feature support, but until cards are in the wild with fresh drivers in hand, those concerns are going to remain. As with Nvidia's staggered 3080/3090 launch, I do find it quite funny that AMD is sending the middle of the 3 cards out first; the RX 6800 series launch should allay a lot of the driver concerns people like me have about the RX 6900 XT, and I know I'll be tearing through as many reviews and reports as possible then to make the final decision on my plunge back into a full AMD rig for the first time since 2005.

And so that brings us back to where we are now. With these concerns noted, I still think that AMD has pulled off what many of us thought wouldn’t be done and I am thrilled to see the competition in the market, as well as to have a more savvy way to fulfill my desire for the best of the best without having to spend 117% more money for 10% more performance. I think that the Radeon RX 6000 series is going to be a major turning point in the gaming world and I am eagerly awaiting third-party benchmarks that address all the points I discussed here!

