Sidenote: The Effect of the High-End Technology Rollouts on Low Cost Parts

I have a huge blind spot that I’ll readily admit when it comes to PC and gaming hardware – I focus on the stuff I like and tend to purchase.

When looking at the newest GPUs, for example, I’ve been pretty excited about and talked a lot about the RTX 3080 and up and the Radeon RX 6800 XT and up, but for many people, the RTX 3070 and RX 6800 are the more likely purchases, and for the majority of the market, their next GPU likely sits beneath even that point. The same goes for CPUs – for every word spent on 16-core Ryzen and 10-core Intel parts, there is a person buying a perfectly usable quad-core at low cost.

Since I can’t sleep and I’ve been watching a lot of LowSpecGamer on YouTube, I want to explore that side of things more today, because I think the long-term implications of the launches we’ve already seen and those forthcoming are fascinating.

Integrated Graphics: Good, Actually?: The vast, vast majority of computers in the world in 2020 have no discrete graphics card at all, instead using integrated graphics. Since Sandy Bridge in 2011, Intel has shipped its full mainstream lineup with graphics cores included. Intel’s integrated graphics to date haven’t really been good, mind you, but they are there, and for most people they serve a need just fine. At the high end, if you buy an i7 or i9 for gaming, you can get one of the “F” models that lasers off the integrated graphics and saves you $10-20, but even with Rocket Lake and beyond, Intel continues to focus on a monolithic design with graphics on every mainstream die, which doesn’t make much sense until you consider the larger market. Intel’s next-generation parts, from Rocket Lake forward, will use Xe graphics – the result of Intel hiring a number of ex-AMD engineers from the Radeon Technologies Group and pushing forward with stronger iGPU designs as well as discrete GPUs for the datacenter and, perhaps, gaming.

Likewise, AMD has long shipped what it still sometimes calls APUs, and it has seriously invested in strong integrated graphics, although it only ships such parts at the low end. In the early 2010s, AMD’s strategy centered on “Fusion” – the idea that the CPU and GPU would work together more harmoniously to leverage the strengths of each, with the CPU handling integer-heavy work while the GPU picked up floating point operations. While that bet damn near killed the company by leaving its CPUs of that era badly weakened, today it also makes AMD an excellent choice at the low end. Current Zen-based APUs pair Zen or Zen+ cores with Vega graphics CUs, and while Vega wasn’t a particularly astounding gaming architecture, the company has made it work really well in APUs. AMD has revived its old Athlon brand for low-spec parts with integrated graphics that work surprisingly well – it is very possible to game on a modern Athlon with no dedicated GPU in the system, provided you use two sticks of decent RAM for dual-channel bandwidth. The Ryzen APUs are not as good a value proposition, but they offer more CPU performance and marginally better GPU performance, and until Rocket Lake arrives, AMD’s parts remain the best value for low-spec gaming.

While they aren’t easily available outside of OEM systems yet, the Ryzen 4000G series on desktop performs even better, as it marks AMD’s first 8-core APUs and couples the company’s best memory controller as of this writing with 8 Zen 2 cores and a stronger, more optimized Vega iGPU. Those parts can anchor a very low-cost complete PC (I found a Lenovo with a quad-core version for right around $400) while still offering a pretty good gaming experience.

DirectX 12 Optimizations Help: DirectX 12 Ultimate is a fancy-sounding API that seems at first blush to be about pushing increased visual fidelity and hardware usage. In truth, when you analyze its core components, it actually targets a lot of performance-increasing features that are arguably more useful on a lower-spec computer. Take Variable Rate Shading as an example – VRS reduces the shader math performed for regions of a scene that contribute less to what the viewer perceives, such as distant objects, saving time and increasing performance without impacting perceived visual fidelity. On a high-end system, this can increase framerates a smidge and help push a high refresh rate display closer to an ideal level of smoothness. On a lower-end system, however, that reduction in shader work means drastic speed-ups – full-rate shading of the whole scene occupies a larger share of a lower-tier GPU’s resources, and reducing that work frees the GPU to focus on other tasks. The same is true of features like sampler feedback and mesh shaders – all of these serve to reduce processing time in ways that cause little or no perceived loss of image quality, allowing more core work to be done.
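To make that intuition concrete, here is a back-of-envelope sketch of how much fragment shading coarse-rate VRS can save. The resolution, the share of the frame shaded coarsely, and the 2x2 rate are illustrative assumptions, not measurements from any real title:

```python
# Back-of-envelope: fragment shader invocations saved by Variable Rate Shading.
# The numbers below (resolution, fraction of the frame shaded coarsely) are
# illustrative assumptions, not benchmarks.

def shader_invocations(width, height, coarse_fraction, rate=2):
    """Estimate per-frame fragment shader invocations when `coarse_fraction`
    of the screen is shaded at a rate x rate coarse block (one invocation
    covers rate*rate pixels) and the rest is shaded once per pixel."""
    pixels = width * height
    fine = pixels * (1 - coarse_fraction)              # 1 invocation per pixel
    coarse = pixels * coarse_fraction / (rate * rate)  # 1 per rate x rate block
    return fine + coarse

baseline = shader_invocations(1920, 1080, coarse_fraction=0.0)
with_vrs = shader_invocations(1920, 1080, coarse_fraction=0.5, rate=2)

print(f"baseline: {baseline:,.0f} invocations")
print(f"with VRS: {with_vrs:,.0f} invocations")
print(f"saved:    {1 - with_vrs / baseline:.0%}")
```

Even shading just half the frame at 2x2 cuts total shader invocations by over a third – a rounding error on a big GPU, but a meaningful chunk of frame time on a small one.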

While current iGPUs do not support many of these features, Xe from Intel does, and as AMD pushes RDNA technology into APUs in the near future, integrated graphics will benefit tremendously (in DX12U titles that use these technologies).

Lower Cost RAM Brings Up System Performance: In the past, when DDR4 memory was costly, it was common to build with a single stick of memory, causing a sharp drop in performance from the loss of dual-channel memory bandwidth. Likewise, budget RAM kits usually ship at lower speeds and higher latencies, which is especially bad for a system with an integrated GPU, as that GPU needs all the raw memory bandwidth it can get. Right now, however, it is possible to get a 16 GB dual-channel DDR4-3200 kit for $55 in the US. While that price doesn’t neatly map to other countries, particularly lesser-served markets (a point I find fantastically well articulated in many LowSpecGamer videos), supply increases have outstripped demand, so global DRAM prices continue to fall and become more accessible. It is currently possible to build a pretty good integrated-graphics system with an AMD APU, 16 GB of RAM, and a decent motherboard for around $300, and be able to play esports titles and a large number of other games at reduced settings with enjoyable framerates.
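The dual-channel point is simple arithmetic on peak theoretical bandwidth, since each DDR4 channel is 64 bits wide; a quick sketch:

```python
# Peak theoretical DDR memory bandwidth: channels * bus width * transfer rate.
# This is the headline number; real-world sustained bandwidth is lower.

def peak_bandwidth_gbps(mt_per_s, channels, bus_bits=64):
    """Peak bandwidth in GB/s for DDR memory running at mt_per_s
    megatransfers per second across `channels` 64-bit channels."""
    bytes_per_transfer = bus_bits // 8  # 8 bytes per channel per transfer
    return mt_per_s * bytes_per_transfer * channels / 1000

single = peak_bandwidth_gbps(3200, channels=1)  # one stick of DDR4-3200
dual = peak_bandwidth_gbps(3200, channels=2)    # a matched pair

print(f"single channel: {single:.1f} GB/s")  # 25.6 GB/s
print(f"dual channel:   {dual:.1f} GB/s")    # 51.2 GB/s
```

Doubling from 25.6 to 51.2 GB/s matters little for a CPU alone, but an iGPU with no dedicated VRAM lives and dies by that shared bandwidth, which is why a second stick is the single cheapest upgrade for an APU build.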

Storage Costs Continue to Fall, Including SSDs: A reasonably cost-effective SSD for gaming is no longer out of reach for many, which is great, because it noticeably improves system responsiveness and means more time actually playing. Similarly, bulk storage via slower hard drives is very reasonably priced, and if you have slightly more to spend, a boot SSD plus a game-storage hard drive is a fantastic combo that is no longer unreasonable for many budgets. While next-generation titles will begin a slow march toward requiring SSD storage, today most gaming can still be done quite well from a hard drive, especially with console-first titles, since they are built around the limitations of spinning rust.

The Future Is Bright: I’ve already mentioned Intel Xe graphics as a positive for low-spec gamers, but looking a bit further ahead, there are some exciting possibilities in the near term. As AMD moves to future APUs, one combination I am eager to see (and maybe build a home theater PC with) is Zen 3 CPU cores paired with RDNA 2 GPU cores. Such a part would easily handle a large number of games at great settings, would benefit from DirectX 12 Ultimate features in supported titles, and could even enable things that sound absurd today (if RDNA 2 is ported over with the full CU design, such an integrated GPU would have Ray Accelerators for real-time ray tracing!).

The development of a smart machine learning resolution scaler built on the more open DirectML API to compete with Nvidia’s DLSS would mean that all GPUs gain access to a hardware-agnostic upscaling mechanism, letting you set a lower render resolution on an integrated graphics part and still output at a higher resolution! Today, low-spec gaming often entails rendering natively at 720p or 1080p, using resolution scaling when available to lower the render resolution below the output target, and adjusting settings around that target to reach playable framerates. In a future with machine learning-assisted scaling, you could instead render at a lower internal target like 480p and rely on a smart scaler to maintain higher framerates and higher visual quality without artifacts, smearing, or other unsightly side effects.
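The appeal is easy to see in raw pixel counts. Taking 854x480 as the widescreen “480p” internal target (an assumption on the exact width), the savings from rendering low and upscaling look like this:

```python
# How much rendering work a lower internal resolution saves before upscaling.
# Per-pixel shading cost scales roughly with pixel count, so rendering at
# "480p" and upscaling to 1080p touches about 5x fewer pixels per frame.
# 854x480 is assumed here as the 16:9 "480p" target.

def pixels(width, height):
    return width * height

native_1080p = pixels(1920, 1080)
internal_720p = pixels(1280, 720)
internal_480p = pixels(854, 480)

for name, count in [("1080p", native_1080p),
                    ("720p", internal_720p),
                    ("480p", internal_480p)]:
    print(f"{name}: {count:,} pixels, {native_1080p / count:.2f}x cheaper than native 1080p")
```

A 5x reduction in shaded pixels is the difference between a slideshow and a playable framerate on an iGPU, which is why a scaler that hides the quality cost would matter so much at the low end.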

Likewise, further advances in storage tech on both SSDs and hard drives will continue to push the cost per gigabyte down, making better-performing storage more affordable. While going fully solid state is still expensive today, and not advisable for gaming given how much disk space modern titles consume, the next-gen consoles will push titles to use less space (since duplicate copies of assets should no longer be required for reasonable loading times), and moving entirely to SSD storage will become more reasonable. In particular, as QLC flash improves in implementation and reliability, SSD capacities can be driven much higher while keeping costs low.

All of that combined makes low-spec gaming endlessly fascinating, and while the advances of today focus on the high-end market, those improvements will work their way down the stack until eventually they can be had for pennies on the dollar, which can only be a good thing!


One thought on “Sidenote: The Effect of the High-End Technology Rollouts on Low Cost Parts”

  1. Disclaimer: used to work in the hardware sector (HP) but that was 20 years ago so some practices may have drifted.

    Anyway, back when I was in that mode, we’d do things in two phases. First would be rollout to production of new tech – in my division’s case, fax machines and AiO printers – and then we’d do a cost reduction pass – or more, if the schedule didn’t bear getting everything into the first pass.

    What this amounts to is the measured removal of redundant or unused or underused components, a simplification of the design without a redesign. Some examples from my field are things such as removing ferrite beads from the main board and replacing them with “zero ohm resistors” which cost a fraction as much (this practice means that the board itself doesn’t need redesigning, and that’s a major cost, so yay) or adding ferrite beads on the main board and removing a ferrite core from one or more cables – at 12 cents apiece, over a million units, that’s real money.

    The principle is that you design to get past the regulatory requirements first, and then find ways you can chip away at that margin to lower the cost of the machine – possibly to the users, but definitely to ourselves.

    So a lot of what you are talking about falls under that in broad fashion. Integrating things on the mobo is always going to be cheaper because PCI connectors are hella expensive, for example. One reason the first of anything is so expensive is that they rarely take a nuanced approach to regulatory-type things, certs, etc. They usually shoot to ace it with margin and then later the lower cost units are the same thing without margin.

    Same applies in things like e.g. performance – metrics show we don’t use all that ram? We’ll reduce it and call it something ending with a ‘5’.

