Sidenote: Why I Use Nvidia Products But Dislike The Company

All week, I’ve been talking a lot about Nvidia. They announced new GPUs that offer a lot of additional performance and bring their pricing back into a better balance.

There’s just one snag, though – there’s no reason to believe that the benchmarks presented by Nvidia mean much of anything at the moment.

In technology, there are a lot of companies whose products I will use and love, but where I hate the company itself. Intel is high on that list, but at this point, my most hated tech company is Nvidia.

The challenge I have is this – Nvidia has excellent engineers on staff who design outstanding products and new technologies and can make a case for those products on the merits, but the marketing and leadership staff love to play games with how new GPUs are presented. My favorite example comes from two years ago, during the launch of the GeForce RTX 2000 cards, when they compared the raytracing performance of a Pascal GPU to a Turing GPU – which is fine until you realize that…well, Pascal doesn’t have hardware for real-time raytracing.

So, like, sure, this isn’t inherently misleading marketing. Turing is faster at raytracing – or, if you’re Nvidia, you’d better fucking hope so! To an enthusiast audience, though, this is an eye-rolling sales pitch: clearly, I would not run RT on my 1080 Ti, because without dedicated RT hardware, its shader engines and schedulers have no optimizations for the workload, and the performance hit is enormous. Possible, but stupidly slow – and thus not a realistic scenario that most people would ever attempt.

Nvidia’s presentations are full of statements and advertising like this, where they compare the new cards against the prior generation in the best possible light. One common practice is benchmarking at 4K resolution – because 4K performance leans heavily on memory bandwidth, and because large bandwidth increases arrive with nearly every generation through new memory technologies, 4K amplifies an easy point of improvement even when the overall generational leap isn’t that impressive. Just look at this from this week’s Ampere presentation!

Again, this isn’t inherently wrong – it just presents a best-case scenario and leans heavily on it to inflate the perceived performance improvements of the new cards. An RTX 2070 Super is not really a 4K card, especially for raytracing, and using it as one creates the impression of massive performance leaps where that may not be the case. Granted, the 3070 that is intended to replace that card is here too, which makes the comparison better, in my opinion, but the graphic is obfuscated to create a larger perceived gap. There’s no actual data here – no frame rates, no frame time metrics, no details on settings.
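To make the 4K point concrete, here’s a toy, roofline-style sketch in Python – every number in it is invented for illustration, not a real GPU spec – showing how a modest compute gain plus a big bandwidth gain produces a much larger uplift at 4K, where frames are bandwidth-bound, than at 1080p, where they’re compute-bound:

```python
# Toy roofline-style model: FPS is capped by whichever of compute
# throughput or memory bandwidth runs out first. All figures below
# are invented for illustration; they are not real GPU specs.

def fps(compute_fps_cap: float, bandwidth_gbs: float, gb_per_frame: float) -> float:
    """Achieved FPS is the lesser of the compute cap and the bandwidth cap."""
    bandwidth_fps_cap = bandwidth_gbs / gb_per_frame
    return min(compute_fps_cap, bandwidth_fps_cap)

# Hypothetical old vs. new card: a modest +25% compute bump,
# but a big +80% memory bandwidth bump (e.g. a new memory tech).
old = {"compute_fps_cap": 120.0, "bandwidth_gbs": 450.0}
new = {"compute_fps_cap": 150.0, "bandwidth_gbs": 810.0}

# Assume a 4K frame moves roughly 4x the data of a 1080p frame.
workloads = {"1080p": 2.0, "4K": 8.0}  # GB of memory traffic per frame

for res, gb_per_frame in workloads.items():
    f_old = fps(old["compute_fps_cap"], old["bandwidth_gbs"], gb_per_frame)
    f_new = fps(new["compute_fps_cap"], new["bandwidth_gbs"], gb_per_frame)
    print(f"{res}: {f_old:.0f} -> {f_new:.0f} FPS ({f_new / f_old - 1:+.0%})")
```

In this toy model, the very same pair of cards shows a +25% uplift at 1080p but +80% at 4K, purely because 4K is bandwidth-bound – which is exactly why 4K is the flattering resolution to put on a slide.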

That is why, while my initial reaction is skeptical excitement, it is worth staying plain skeptical until third-party benchmarks emerge. The RTX 2000 launch was full of obfuscating statements designed to create the perception that the new cards were drastically better than they actually were, when in truth the like-for-like generational jumps were barely 30% in best-case scenarios, RTX wasn’t widely supported, and it took a long time for technologies like DLSS and RTX Voice to arrive in viable, useful forms.

One last negative point to discuss, and one that really did get me at first – the power efficiency claimed for Ampere. Initially, I was excited, because it seemed really good – too good, but not altogether unrealistic. 1.9x the performance per watt? Wow – that’s impressive. What is less impressive, however, is how that metric was derived. Let’s start with the graphic:

So on the surface, this seems reasonable, if a bit too good. A node shrink to an optimized process built for your product, new technologies like GDDR6X, and all of the architectural improvements made to Ampere over Turing make it seem doable, in a way. However, when you look at the graphic, this metric comes from one game, on an ideal platform, using solely the power draw needed to hit 60 FPS. The graphic also makes clear a few things that Nvidia didn’t say so loudly. First and foremost, the voltage/frequency curve on Ampere does seem better, which is good, but the chart shows the scaling they’ve used – tapping the card out at a whopping 320W to maximize performance. That isn’t inherently bad, but the claim here is that Ampere can hit 60 FPS at 4K while drawing only 120W! That’s great, until you look and see that almost tripling the power only gets you about 66% more FPS, which means the power scaling falls off badly past that point. They cherry-picked the point where Turing taps out while hitting 60 FPS – which is naturally a much-improved power consumption point on Ampere – and used that single point to generate the headline metric.
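To put rough numbers on that, here’s a quick back-of-the-envelope check in Python. The 120W and 320W Ampere points and the ~66% FPS delta are read off the chart; the Turing wattage at 60 FPS is my own guess, chosen only to show where a ā€œ1.9xā€ figure could come from:

```python
# Where a "1.9x perf per watt" headline can come from, and why it
# doesn't describe the card as shipped. The Turing wattage below is
# an assumption; the other figures are taken from the slide.

TARGET_FPS = 60.0
turing_watts_at_60fps = 230.0   # assumption: roughly where Turing lands at 60 FPS
ampere_watts_at_60fps = 120.0   # from the chart

# Iso-performance comparison at the cherry-picked 60 FPS point.
iso_ratio = (TARGET_FPS / ampere_watts_at_60fps) / (TARGET_FPS / turing_watts_at_60fps)
print(f"perf/W ratio at the 60 FPS point: {iso_ratio:.2f}x")  # ~1.92x

# But the card ships with a 320W power limit, and per the chart,
# nearly tripling the power buys only ~66% more FPS.
ampere_max_watts = 320.0
ampere_max_fps = TARGET_FPS * 1.66

eff_cherry_picked = TARGET_FPS / ampere_watts_at_60fps   # 0.50 FPS per watt
eff_as_shipped = ampere_max_fps / ampere_max_watts       # ~0.31 FPS per watt
print(f"Ampere FPS/W at 120W: {eff_cherry_picked:.2f}, at 320W: {eff_as_shipped:.2f}")
```

In other words, the efficiency claim is real at exactly one operating point near the bottom of the power curve; at the 320W limit the card actually ships with, FPS per watt falls from 0.50 to about 0.31 in this sketch.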

In all likelihood, 1.9x performance per watt is unrealistic given the continuation of the chart, but the number was plucked into prominence so that laypersons watching the presentation, or reading coverage from the non-enthusiast press, would just say, ā€œWow, this card is almost twice as efficient!ā€ Is that actually the case? It seems unlikely to hold up fully, but we won’t know until independent reviewers analyze the cards.

So, given all of this, I loathe Nvidia. I hate how their marketing chases cheap headlines and uses cherry-picked, over-idealized scenarios to present massive leaps that don’t actually exist. What makes it worse is that I think many of Nvidia’s products are great and don’t need this kind of bullshit obfuscation!

For most of the last decade, I’ve used Nvidia GPUs. I’ve had Radeons at both ends of the market and GeForce cards at every price point save for the Titan-tier four-digit cards (although I am seriously looking at getting an RTX 3090, so, yeah), but I keep coming back to Nvidia. There are a few anti-AMD reasons for that (Radeon drivers have given me problems and remain a sore spot, the coolers made for Radeon cards tend to be worse and significantly louder, and the Radeon Technologies Group within AMD has, until now, largely given up on competing in the price brackets I tend to buy at), but a large part of it is that, on their own merits, Nvidia’s performance has been great and their driver package is robust, simple to use, and hasn’t given me issues across 5 versions of Windows and 15 years of building my own systems.

While Nvidia markets in a scummy, underhanded way, the performance of their products generally does deliver once you remove the cherry-picking and obfuscation. They won’t always hit the inflated numbers or match the obscured scales shown in Nvidia’s presentations, but they get close enough not to be awful or unworthy of purchase. Listening to Nvidia discuss a new product requires a high tolerance for bullshit and the ability to read critically into what they say, to spot the loopholes they exploit to remain technically honest while misleading. Sometimes they make it easy (comparing raytracing performance between cards with and without hardware to accelerate it!) and sometimes they tuck away the misdirect (using 4K for all benchmarks, or picking select data points and extrapolating them to represent the full lineup), but in my time as a techie, I have not seen an Nvidia presentation without such a misdirection or obfuscation present.

When you see the engineers talk about their work, they are passionate about delivering great experiences for gamers, trying hard to ship something new and exciting every two years, and you can see that passion in the end product. While Nvidia’s Founder’s Edition designs have made them a punching bag among techies who deride Apple for similar form-over-function ideas, the designs get better each generation. The blower coolers that made Pascal FE cards annoying and loud gave way to quieter dual-axial-fan designs for RTX 2000, which have in turn given way to an elegant design this generation with what seems like a stronger cooler – one that, subjectively, I think looks great and represents an interesting new concept. Time will tell if it works, but I don’t doubt that the people who built it were doing their best, with their hearts and minds in the right place.

But technology, and modern marketing in general, is full of this kind of thing – companies full of well-intentioned people fighting to deliver a good thing while the marketing and leadership staff present it poorly in an attempt to push a sale, even in cases where it doesn’t make sense for the end user, and where that consumer won’t find the experience as amazing as promised by bullshit slide decks and silver-tongued keynotes. It sucks to end up supporting the latter out of appreciation for the former, but when the former delivers as Nvidia’s engineering staff so often does, well, it is something I find myself able to suck up.

5 thoughts on ā€œSidenote: Why I Use Nvidia Products But Dislike The Companyā€

  1. It’s funny, I’m just the opposite.

    I’ve always had problems with Nvidia drivers, but not AMD.

    Out of the last twenty years or so, I’ve had one Nvidia card, a hand-me-down from the wife (it was the Titan, so no worries there!). It worked okay. My human eyes couldn’t tell you the difference. But ā€œworks and works wellā€ describes my experience, so imma not gonna hate.

    I’m also most certainly not aiming at your price point. On a scale of ā€œthe bestā€, ā€œreally goodā€, and ā€œgood enoughā€, I tend to fall between the latter two.

    If I had to claim any brand loyalty at all, I’d say it’s that I buy almost exclusively Gigabyte boards these days, including the video cards. That might explain why I don’t see the cooling systems as a problem here and you do, though why EVGA would consistently do this on AMD and not Nvidia is beyond me (assuming that’s your rig). But, at the least, let’s not pin the quality of the cooling system on AMD/Nvidia. That’s the board manufacturer’s call, not the OEM’s.


    1. I know a few people in exactly your boat as well – the driver experience is anecdotal, but I do remember the days of Windows Vista and how almost a third of system crashes on it were caused by Nvidia drivers!

      On the cooling point, I had a run of Radeon cards that used the AMD-designed reference PCB and cooler; that is where that experience came from. On the trio of 5970s I had, the fan ramping was really quite loud, and I ended up replacing all three coolers with aftermarket GPU heatsinks that ran much quieter. In modern times (at least with the 5700 XT and a few others), AMD often launches the reference design first, and it takes time before partners ship their own solutions, which keeps this a problem because AMD’s reference coolers are still typically loud blower fans bolted to just-good-enough heatsinks. Nvidia, trying to Apple-ify their lineup, has at least tried to make their own designs interesting and better-performing. I’m really curious to see how the new RTX cards do, because this is the first generation in a long time where I vastly prefer the Nvidia Founder’s Edition design over partner cards, both aesthetically and in performance terms.

      I do have similar allegiances to manufacturers, though – part of my Nvidia lock-in is due to EVGA only making Nvidia cards, since they generally make really solid products (and the motherboard I’ve daily-driven the longest without change was an EVGA as well). Right now I’m Asus for motherboards (since I prefer AMD Ryzen to Intel CPUs, I can’t get EVGA there!) and EVGA for video cards, but when I look at the upcoming options, it might be the first time I buy directly from Nvidia.

      The price point callout is a fair one – AMD’s strategy targets mainstream price points, and I tend to buy the halo cards where they aren’t as competitive. Some of that is purposeful (3D rendering, CUDA-accelerated video rendering, etc.) and some of it is just pointless epeen-waving (given that my gaming time is mostly MMOs that run well on a ton of GPUs, having any video card north of $500 is just for show at that point). That is a topic I’ll probably write on really quickly while I acclimate to typing with a wedding ring (it’s a bit weird!).


      1. You’re right, the experience with drivers isn’t really something most people can nail down. With a spreadsheet, access to hundreds of mobos and video cards, and unlimited funding, I could probably eke out something better than hearsay, but given the circumstances, it’s just best to be flexible.

        Funny story about the cooling system – you reminded me of the time my former boss loaned me an EVGA top-of-the-line previous-generation card to see if it would work for me, preliminary to buying it on the cheap. I was so horrified by the noise the fans made that I thought I had broken it! Apparently first-pass top-of-the-line EVGA cards tend to run a little hot, so they over-cool them, or something like that?

        I do tend to aim for middle of the pack, price-wise. As I’ve mentioned before, when I build a system I build for longevity and cutting edge tends not to be survivable.

        Grats on the ring 🙂

