WoW fans – I have a post in the works about the game for later this week. It dives into the concluding lore of the Kyrian covenant and my thoughts on where that story goes from here, and it is lengthy and taking a lot of passes so I can make sure I get my points straight. Plus, I switched raid mains this week back to my Demon Hunter, making the blog’s namesake fully active once more! That has meant running Mythic Plus and doing some small gear catchup to get back to where I want to be. I’ll also write about that soon! In the meantime, I’m talking about CPUs again!
As an AMD fan, what I am about to say tends to get tut-tutted, and perhaps rightfully so, but here is a statement that has been percolating in my head for a while.
If Intel didn’t have institutional inertia, they’d be dead as a company today.
The thing about the Intel of today is that they are a very large and sprawling entity, and so much of what remains is built on the success of the past. Their relationships with OEMs and system integrators endure, their ability to keep mainstream DIY fans coming back persists even as the number that do dwindles year over year, and the perception of Intel as a juggernaut in computing remains as strong as ever, in spite of a lot of evidence suggesting that Intel’s best days are behind them.
The simplest reason to say and believe this is that Intel as a company is more about manufacturing and engineering than about any one market segment. What has always set Intel apart is their dogged maintenance of their own silicon foundries, managing a much larger part of their own supply chain than competitors like AMD. Back when AMD was most competitive with Intel prior to Ryzen, AMD also had their own foundries, and as the company struggled to find footing in the late 2000s, they spun them off (as GlobalFoundries, in 2009) to focus solely on design.
The problems Intel has now nearly all derive from a lack of engineering focus. As a company, Intel has a ridiculously high R&D budget, and yet for two silicon processes in a row (10nm and 7nm), they’ve been met with massive delays and pushed-back products. Because of the tight integration of their designs with their manufacturing, they’ve been left to rehash the same old stuff over and over again, such that the desktop segment and most of the datacenter is still being served by a CPU core design (Skylake) that debuted in 2015!
With Rocket Lake, that changes, as Intel finally backported a product from its planned manufacturing node to their working, old-reliable 14nm process. It is easy to make jokes about that process, but Intel deserves a lot of kudos for it – it works exceptionally well and has been squeezed for ever more performance year after year, such that even with the same CPU core design, they still managed to keep making leaps, mostly by juicing the designs with more power, more cores, and more clock speed to keep the same IPC feeling strong.
The news that Rocket Lake would be a backported design on 14nm set my brain on edge. It sounded good, in that it would be nice to see Intel have a new design with new stuff to talk about and new reviews to consume, and a new core architecture means more performant cores, new features, and an overall focus on revolutionary change rather than evolutions of the existing standard. However, my armchair silicon-analyst brain, coupled with some rumors, wondered aloud if it would mean anything. The rumors were that top-end parts would be down to 8 cores, that Intel wasn’t promising anything other than a “double-digit IPC increase,” and while it would be Intel’s turn to move forward to PCI-E Gen 4 and better support for faster DDR4 memory, it kind of felt like it couldn’t be that good. Plus, the assumption about a backport was that Intel would have to forfeit many of the Skylake 14nm optimizations and would not be able to get Rocket Lake anywhere near the clock speeds of the newest parts – 10th-gen Core i9 could go up to 5.3 GHz when properly cooled, after all!
So with CES this week, Intel showed off more solid details through benchmark slides. While Intel has a mixed history with accuracy in benchmarks (and we’ll note an interesting choice of comparison point from AMD’s stack), I think these are fine and I trust them, for now. Rocket Lake shows… up to an 8% advantage in gaming performance over the Ryzen 9 5900X, with the results shown being between 2% and 8% faster. Unlike AMD’s slides, there is no loser data point where Intel is down – all wins, so says big blue.
This is enabled by a mix of things – a 19% IPC uplift, full support for DDR4-3200, full PCI-E Gen 4 support exploited via the GeForce RTX 3080 used for benchmarking, both platforms power-limited to stock TDP settings, and, most importantly, a test resolution of 1080p, which makes the CPU the limiter when paired with an RTX 3080.
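If you want to sanity-check that claim yourself, here’s a back-of-the-napkin sketch: single-thread performance is roughly IPC times clock speed, so with peak clocks unchanged, a claimed 19% IPC uplift should translate directly into a roughly 19% single-thread gain. The numbers here are just the slide claims plugged into a toy model, not measurements:

```python
# Toy model: single-thread performance ~ IPC x clock speed.
# IPC values are normalized to the 10th-gen part; 19% uplift is Intel's claim.
comet_lake = {"ipc": 1.00, "clock_ghz": 5.3}   # 10th-gen i9 baseline (normalized)
rocket_lake = {"ipc": 1.19, "clock_ghz": 5.3}  # claimed +19% IPC, same peak boost

def single_thread_score(cpu):
    """Relative throughput: instructions per clock times clocks per second."""
    return cpu["ipc"] * cpu["clock_ghz"]

uplift = single_thread_score(rocket_lake) / single_thread_score(comet_lake) - 1
print(f"Expected single-thread gain: {uplift:.0%}")  # ~19%, since clocks match
```

Of course, real games don’t scale one-to-one with single-thread throughput, which is part of why the slides show 2–8% rather than 19%.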
So, I have a few thoughts here. Firstly, I think it says good things about Intel’s setup that the i9-11900K was limited to stock TDP. Intel has a very laissez-faire policy for how motherboard partners can set up turbo boost limits, and boards will often simply allow the CPU to boost for as long and as high as it can. This means that what Intel showed is actually lower than what most enthusiast systems will get in an otherwise identical test, because most consumers are not going to limit their CPU boost to save on power, unless they live somewhere electricity costs an absurd amount – and even then, that audience is likely not buying the top-end, highest-power part. The confirmation of clock speeds is nice, too – the i9-11900K can still boost a single core up to 5.3 GHz when under 70 degrees Celsius, so no loss of clock speed between generations there.
On the downside, I have a few nits to pick. The first is that while 1080p testing is the best way to stress the CPU in a gaming system, it is also untrue to the experience most people will have with a (likely) $500 CPU. Short of eSports pros, you’re not buying a $500 processor to play at 1080p. At a minimum, you’re looking at 1440p high refresh rate, ultrawide 1440p, or ultrawide high-refresh 1440p. A lot of folks now might even be going to 4K, and some weirdos with more money than sense can even try 8K! While a high refresh rate puts a heavier burden on the CPU than a standard 60Hz, it isn’t as much as 1080p, where the GPU can spit out frames so fast that it gets stuck waiting on draw calls from the CPU. So while 1080p is an accurate test in that it demonstrates an advantage that can be chalked up to the CPU, it isn’t indicative of most people’s experiences at that tier of the market. The second is that I would love to see more of how Intel configured the AMD system’s BIOS. The Principled Technologies fiasco a few years back involved Intel paying an outside firm to test Ryzen vs Core, and that firm enabled the Threadripper “Game Mode” option, which disables half the CPU cores. That mode was meant to let Threadripper avoid weirdness with games that can’t handle so many cores, but they used it on a normal Ryzen chip to cut it from 8 cores to 4. I want to believe that Intel learned from that, because it was a big story among enthusiasts, but who knows?
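The 1080p point above can be sketched with a toy model: each frame costs the CPU some roughly fixed time regardless of resolution, while the GPU’s cost scales with pixel count, and the frame rate is set by whichever is slower. The millisecond figures here are made up purely for illustration:

```python
# Toy model: frame rate is limited by whichever of the CPU or GPU takes
# longer per frame. All timings below are illustrative, not measurements.
def fps(cpu_ms, gpu_ms):
    """Frames per second when CPU and GPU work on frames in parallel."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 5.0  # per-frame CPU cost (game logic, draw calls): resolution-independent
for res, gpu_ms in [("1080p", 3.0), ("1440p", 6.0), ("4K", 12.0)]:
    bound = "CPU" if cpu_ms > gpu_ms else "GPU"
    print(f"{res}: {fps(cpu_ms, gpu_ms):.0f} fps ({bound}-bound)")
```

In this sketch, only the 1080p result moves at all if you swap in a faster CPU, which is exactly why 1080p is the benchmarking resolution of choice and also why it overstates what higher-resolution gamers will feel.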
The third nit I’m going to pick at is the use of the Ryzen 9 5900X. I assume this is a price-competitor option, and that is fine – I have no objection to Intel saying something like “in the $500 class, we once again offer the best gaming CPU for consumers.” Great, that’s fine and accurate and I have no sadness there. However, the 5900X represents sort of a worst case for the AMD side. Being a two-chiplet design with 6 cores per chiplet, latency between cores is higher whenever the CPU has to pass requests between chiplets, and if the games used load 8 cores well, the 5900X will have to do this a reasonable amount. If you look at AMD’s product stack, you can see how this adds up – the Ryzen 7 5800X offers a single 8-core chiplet, removing the cross-chiplet latency for an 8-thread load, while the Ryzen 9 5950X has two 8-core chiplets and a higher single-core boost frequency than any other Ryzen part. By picking the 5900X, Intel has a fair comparison on price and market segment – but one that also takes advantage of a couple of smallish gaps in AMD’s stack.
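As a rough sketch of why that chiplet layout matters, here’s a toy model of average core-to-core latency for an 8-thread game running on a single 8-core chiplet versus a 6+2 split across two chiplets. The latency numbers are assumptions for illustration, not measured values:

```python
# Toy model: average core-to-core latency for N busy cores, depending on how
# those cores are spread across chiplets (CCDs). Latencies are assumed values.
import itertools

SAME_CCD_NS = 20    # assumed same-chiplet core-to-core latency
CROSS_CCD_NS = 80   # assumed cross-chiplet latency (hop through the IO die)

def avg_latency(ccd_of_core):
    """Average latency over all core pairs, given each core's chiplet ID."""
    pairs = list(itertools.combinations(range(len(ccd_of_core)), 2))
    total = sum(SAME_CCD_NS if ccd_of_core[a] == ccd_of_core[b] else CROSS_CCD_NS
                for a, b in pairs)
    return total / len(pairs)

one_chiplet = [0] * 8            # 5800X-style: one 8-core CCD
split_chiplets = [0] * 6 + [1] * 2  # 5900X-style: 6 cores + 2 spilled to CCD 2
print(f"Single-chiplet layout: {avg_latency(one_chiplet):.0f} ns average")
print(f"Split 6+2 layout: {avg_latency(split_chiplets):.0f} ns average")
```

Even with only two of eight threads landing on the second chiplet, the average pair latency more than doubles in this sketch, which is the shape of the disadvantage the 5900X carries into an 8-thread game.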
Finally, of course, a gaming comparison is not representative of all the work someone might do with a PC, and Intel knows it. By cutting two cores from Rocket Lake, Intel is in a position where the CPU can deliver nearly identical gaming performance, with the IPC uplift pushing the improvement, but Intel showed a curious lack of productivity benchmarks and stayed laser-focused on gaming. Again, I don’t think this is dishonest – for the enthusiast audience, we all know that Intel isn’t going to beat AMD at most productivity workloads while still being lower IPC (which they are) and having fewer cores that clock a bit higher. To be fair to Intel, for the gaming audience, they made a strong and seemingly honest pitch. None of the games were egregious sponsored titles or badly single-threaded games – they had a good mix, including RTS titles that use a lot of threads well, and that speaks well to the product they’re trying to sell here. You could, for a gamer, argue that streaming, fast becoming a thing more and more gamers do, would be worse with software encoding on the Rocket Lake part compared to Zen 3. But most streamers aren’t doing single-system software encoding – they’re either using GPU encoders like AMD’s VCE or Nvidia’s NVENC, or they’re professional streamers using two systems and a capture device. People like me who put spare CPU muscle toward high-quality software encoding are uncommon, because most systems can’t really do it at a level that exceeds GPU-based or dual-system options.
So overall, I remain curious about Rocket Lake. I already have a nearly complete system here that just needs an AM4 CPU, so my curiosity isn’t for my own purchase – but the redemption arc of Intel is one I find fascinating. Everything about it is so abnormal for a company normally so composed and self-assured: sticking with their original designs, then hastily adding more cores; insisting that future process nodes would provide new designs and advantages, then shuffling in a panic to fit a design made for a much smaller node into their existing 14nm lithography; talking about how AMD poses no threat, then scurrying to push back as quickly as possible once AMD even got close to the performance crown; and just generally being sort of weird and obviously concerned about what AMD is doing.
What I like about Rocket Lake is that Intel is adapting in some really good ways. They’re doing more to be consumer-friendly, sort of – making these new chips work with Z490 motherboards, removing 10th gen’s restrictions on memory overclocking for lower-tier parts (allegedly), keeping pricing fairly consistent (again, rumored), and putting in the engineering effort to completely redesign the core layout in order to deliver something new and fresh. For as much as I give Intel shit in these posts (and I do feel they’ve earned it!), I think it is in everyone’s best interest that we have multiple viable CPU, GPU, and other technology companies competing to give us more. It has taken a while, but Intel has finally shown back up to the dance, and things will get interesting in March!