Sidenote: I Got A New Smartphone, So I Have Thoughts on SOC Reviews and Hardware Platforms (Ramble Incoming)

I love nerdy shit.

Let's be real: as my blog here has grown, I've expanded topics a lot, from WoW guides and theorycrafting to WoW analysis, then FFXIV, then other games, to finally today, where WoW is still sort of the main topic but I don't pigeonhole myself into just WoW and instead talk about everything on my mind, minus deeply personal stuff.

Here’s the impetus for this post – I just got my first new smartphone in 3 years.

Since 2013 and the Galaxy Note 2, I've been a Samsung fan and an Android user. For about 3 years prior to that, I was an iPhone guy – iPhone 3G, 4, and 4S in short order. I liked Samsung's focus on the "phablet" form factor, as someone with big hands who liked the content experience on a modern smartphone well enough but found the downright tiny original iPhone size restrictive. However, in 2018, I kind of stopped following the smartphone race. I got a Galaxy Note 9, which was a phenomenally good smartphone, and the race kind of stopped mattering. It had what I needed and wanted – a fancy camera system with dual sensors, a flash, and 4k video, a large, high-resolution screen with bright colors, great battery life, and it held up to abuse exceptionally well.

However, this year, my Note 9 started to feel a little long in the tooth. It was still great, make no mistake, but it was sometimes sluggish and slow, especially after updates. When my wife and I switched from AT&T here in the states to Google Fi, unlocking our existing Note 9s to do so, the AT&T firmware made the phones exceptionally difficult to use, mainly by loading up a ton of trashware mobile games on every reboot, which made rebooting for updates especially problematic and slow. My wife was also very angry with her Note 9, as she had a lot of interface lag and problems I couldn't mirror, like camera loading pauses lasting double-digit seconds and ruining shots of our dog. Despite this, we held out for a while, because of both pragmatic concerns about finances and a degree of sentimentality about the phones we had – they came with us around the world in 2019, and we used Facebook Messenger video on them to call her parents about our engagement while we were in Tokyo, along with having been there for a lot of our other major moments. In the end, usability beat out sentimentality.

It was time for us to finally upgrade.

My wife, not as gadget-obsessive as me, read a small number of reviews and decided she wanted a Google Pixel, but didn’t need the newest one. She easily adapted to the Pixel 4a and has enjoyed using it since she got it nearly 3 weeks ago.

I, on the other hand, went for the biggest boy of the current Samsung lineup, opting for the Galaxy S21 Ultra 5G. Why? Well, firstly, it has an incredibly good-looking matte black glass back, which takes on no fingerprints and looks sharp – which I care about because I tend not to put cases on my phones (gasp, the horror!). I've only ever broken one phone – my iPhone 4, and even then it was just shattered screen glass that stayed perfectly usable through a packing-tape front shield, as the display itself was still fully intact, touch digitizer and all! Secondly, though, the camera system on the S21 Ultra is absurd compared to other phones. The top iPhone, the 12 Pro Max, has a 3-camera array with a flash and a LiDAR scanner, used mainly to provide laser autofocus and Augmented Reality support. The S21 Ultra has no fewer than 4 camera sensors on the back – a 108 megapixel main camera, a pair of 10MP sensors with optical zoom at 3x and 10x respectively, and a 12MP ultrawide camera – all accompanied by an LED flash and a Time of Flight laser autofocus system, plus a front-facing 40MP selfie camera through a screen holepunch, for a grand total of 5 actual camera sensors alongside the flash and laser focus hardware. Crazy stuff!

Overall, the phone had what I wanted – a strong ability to shoot video on my camera rig over the summer, with 8k/24FPS options alongside 4k/60 and 1080p video that can be shot at up to 240FPS for slow-motion shots. I'm comfortable with every phone OS I've seen, but I enjoy the customization of Android, and Samsung's One UI, while not the most amazing, is functional and at a point in its development where a lot of the old bloat is gone.

However, something was funny about the reviews.

Samsung sells two flavors of most of their flagship phones, based on market. In most of the world, the Galaxy S series ships with Qualcomm's Snapdragon SoC, which has generally offered higher performance and better battery life than Samsung's own Exynos SoC lineup, which ships in Europe and some other regions as the only option. It isn't a consumer choice, where you can go to the store and pick a flavor – it's a regional one. Last year, Samsung's Exynos phones stunk real bad: far worse performance and battery life, instead of the usual margin-of-error differences. That created a problem for me, though. In reading reviews, all anyone was talking about was the comparison to last year, Snapdragon vs Exynos, and there wasn't a whole lot to consume outside of some sample images or pretty vague thoughts on the camera with comparisons to other phones.

And it struck me – I'm a nerd for my PC, and I love kitting it out, with each component selected after rigorous review to optimize my experience. My current system is built entirely around parts I spent literal months reading about, researching, and watching reviews of. But with my phone, while I like knowing the technical specs of the Qualcomm Snapdragon 888 SoC in my new phone, it also kind of…doesn't matter?

Like, I bought the phone based on the look, the camera system, and the continuation of the UI and featureset I’d grown accustomed to with my Note 9. As an 8-year Android veteran, I’m also locked in on app ecosystem, and while I have an iPad Pro on my desk at home, I just don’t use it that much. Most of its apps are from my brief tenure as an iPhone user, and the most common apps I use today on the iPad are a flip-clock display for my desk and Twitch for stream and chat monitoring. So Android wins for me on that front, and it would take a lot to pull me away.

In the smartphone market, the SoC only sort of matters. Yes, a better SoC can mean faster performance, better gaming if that matters to you, and on-board image signal processing that can turn iffy cameras into great systems, or vice-versa. However, even knowing that, I didn't buy into Android because it is theoretically faster or has a broader selection of SoCs – I bought in because I wanted a Note 2, just like I upgraded for most of the last decade to some new Samsung thing, usually a Note-series phablet but occasionally with a diversion into the smaller Galaxy S phones. None of those upgrades was about the SoC; each was about a set of criteria – better screen, higher resolution, better camera systems, smoother video, flashier physical design, and the like – and those only tangentially touch on the SoC as a factor at all!

Having been an Android fan for this long, I've thought a bit about why I like it. Ultimately, it meets me in the middle better – I have broader customization, more hardware options, and I like that Android doesn't hide power-user stuff. I like having a controllable screen resolution on my phone, and on my new S21 Ultra, I really like the variable refresh rate up to 120 Hz, battery be damned (the new phone makes excellent compromises on that front by dropping to 10 Hz if nothing is happening, so static viewing can actually be less battery-draining in this mode!). I like having the option to sideload apps, and I like the interesting, hacky history of Android as a Linux derivative. But generally, I like Samsung phones. I like their user interface for the most part, I like their physical design and style, I like the hardware overall for performance, and I like that they are getting better about supporting phones with Android updates for longer (I was still getting Note 9 updates just prior to switching away from it, a length of support that is relatively new for Samsung), while at the same time, I liked that they didn't push old phones onto new OS versions, hardware capabilities be damned.

It makes me think about Apple a lot though, which is where this post takes a big turn!

Apple, the iPhone, M1, and Hardware Specs

If I haven't yet said it here (and I have over 600 posts now, so I might very well have!), I'm not an Apple fan, but I also don't hate them. I find their industrial design motifs and focus on ease of use very appealing, but I also think they overcharge for what they offer, and often "ease of use" comes down to stripping something down to such a degree that there's almost no customization or ability to tweak and refine your system/phone/tablet. There's a type of user for whom Apple's products are a perfect fit, and there is also the common cliché of the status-obsessed Apple fan, who has an iPhone/iPad/MacBook to stunt on people at Starbucks or in corporate meetings. At my old employer, most company phones were iPhones, and staff past a certain level could pick either a Windows-based Lenovo laptop or a MacBook Pro – I saw a few people take the latter choice despite some of their software not working on macOS!

The switch over the last year in Apple devices to ARM-based CPUs built into fully custom silicon designed by Apple for its own products has been interesting to watch. I wrote a pair of longer posts on it previously and generally found the idea interesting, although I had my doubts about how well the switch would go.

The short answer is this: almost too well, in a weird way.

As of the Apple Spring event this week, the company now has a bizarre portfolio, top to bottom, based on a single silicon design: the Apple M1. With the announcements, there is a tablet, multiple laptops, a small form factor PC, and an all-in-one design all using literally the exact same SoC, at prices from $700 to $1,500, with the only performance-focused differences being a single additional GPU core at the higher end and the amount of embedded on-package memory.

Yikes.

Apple Silicon is succeeding in spite of this, however, for a few reasons. Firstly, it does offer better performance than the last round of Intel-based Mac systems. In the case of the iMac, but especially the Mac Mini and the MacBook lineup, the Intel CPUs ran excessively hot under inadequate cooling solutions that prevented maximum performance – but even in benchmarks capable of running natively on both ARM and x86, the Apple M1 chip still does pretty well and competes with those offerings despite drawing far less power. Secondly, Apple's M1 chip offers quieter computers with vastly better battery life.

The biggest thing, however, is something that hadn't occurred to me previously.

Apple knows their customer base pretty well. What I'm about to say will sound like a PC nerd being mean to Apple fans, but I promise I'll stick the landing. Apple fans are a wide swath of what I would call the "computing middle" – they don't know PCs and don't care to, or they're reasonably skilled users who don't need a ton of extra options. They do work that functionally could be done on a Windows PC from the 1990s, but they see their computer as an extension of themselves in terms of style and flair. The higher-skill end of that computing middle does more complicated work on their Macs, but it still ends up being fairly simple in terms of actual computing – programming, basic 3D rendering or visual planning, and media creation. At the high end, there are valid uses for Apple hardware – macOS handles audio latency far, far better than Windows, so musicians and recording studios tend to run Macs almost exclusively for that reason. To all of these audiences, Apple makes generally inoffensive but stylish hardware that is a statement unto itself – paperboard packaging with reduced use of plastic, unibody aluminum-and-glass designs that look visually stunning, and heavy emphasis on how the device looks and feels. The keyword Apple tends to go for is "premium."

When I switched to Android, there were elements of my original Galaxy Note 2 that felt bad after using an iPhone – the flimsy plastic back you snapped on and off, the crunchy noises it made in doing so, the sort of high-gloss finish on the plastics that made it look tacky and cheap, or, woe of woes, the stretch of Notes from the 3 to the 4 that were made to look like books, using a weirdly textured plastic sideband with a leather-textured plastic backing (which was far slicker in the white color, don't ask me how I know). It wasn't until around the Galaxy S5 era and the related Note series that Samsung got serious about their aesthetic and styling, making phones where the physical design was a key focus.

Apple's gamble with their own silicon breaks down like this: in most of our products, our users don't care about performance enough for us to spend money differentiating them, so we can make a single chip, disable one GPU core for lower-end parts, and deliver a wide range of products, all profitable because of economies of scale. With the iMac launching at the end of the month and the new iPad Pro using the M1, there will be 5 different core products all using the same chip, which simplifies things drastically.

If you're AMD or Intel, making a CPU is hard. You have product segmentation such that a single silicon design, or perhaps two, launches anywhere from 4 to 10 products, each with elements disabled or clock speeds scaled up or down to match a market segment. You have to do a lot of testing – voltage/frequency curve characterization, making sure all the elements on the die work – and that testing results in parts being sold with different hardware blocks enabled or disabled, or set to higher or lower clock speeds. For AMD, high-efficiency CPU dies are sent off to become server CPUs, while leaky ones are often packaged for desktop usage, since power efficiency is less of a concern for a gaming CPU.
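To make that concrete, here's a toy sketch of the kind of binning decision a chipmaker might apply to tested dies. Every threshold, tier name, and number below is made up purely for illustration – it's the general shape of the segmentation process, not anything AMD or Intel actually publishes:

```python
# Toy illustration of die binning/segmentation -- all thresholds, tiers,
# and names here are hypothetical. Real vendors run far more involved
# voltage/frequency characterization, driven by market segmentation.

from dataclasses import dataclass

@dataclass
class DieTestResult:
    working_cores: int      # cores that passed functional testing
    max_stable_ghz: float   # highest stable clock found on the V/F curve
    leakage_watts: float    # static power draw measured during test

def assign_sku(die: DieTestResult) -> str:
    """Sort a tested die into a (made-up) product tier."""
    if die.working_cores >= 8 and die.max_stable_ghz >= 4.8:
        # Efficient, fully working dies become the premium,
        # efficiency-sensitive part (think server or mobile)...
        if die.leakage_watts < 5.0:
            return "Premium-8 (8 cores, high clocks, low power)"
        # ...while leakier ones still sell, just in segments where a few
        # extra watts matter less (think desktop gaming).
        return "Desktop-8 (8 cores, high clocks, higher power)"
    if die.working_cores >= 6:
        # A defective core or two gets fused off and the die moves down-stack.
        return "Midrange-6 (6 cores enabled)"
    return "Budget-4 (4 cores enabled, lower clocks)"

if __name__ == "__main__":
    for die in [
        DieTestResult(8, 5.0, 4.2),
        DieTestResult(8, 4.9, 6.5),
        DieTestResult(7, 4.6, 5.1),
        DieTestResult(5, 4.2, 3.9),
    ]:
        print(die, "->", assign_sku(die))
```

Multiply that decision tree by several product lines and you can see why segmentation is a big, expensive part of shipping a CPU.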

Apple, though, tests for enablement only; the only variation, likely down to yields of working dies, is the GPU core they disable in lower-stack products. They don't even advertise clock speeds on the M1 parts – the best I could find is Wikipedia claiming a 3.2 GHz maximum! They're ARM CPU cores, so even a leaky part is still something like 4 watts versus 3 watts – a 33% increase in power consumption that makes almost no appreciable difference at that level (and even then, leaky parts could be shuffled off into iMacs or Mac Minis that are always plugged in).

The crazy thing is that it works – Apple's M1 products are higher-performing, lower-power, and all-around better than the products they had been making with Intel hardware. Why is that, though?

Modular, Purpose-Built Hardware and the Future

Apple's success with the M1 boils down to it being a unified system architecture. In a weird way, they beat AMD to the actual ideal of AMD's old Fusion project – the GPU in the M1 can be leveraged for any floating point workload, and the system can cover weak spots in the CPU by using that GPU hardware instead. The webcams built into M1 systems don't need software processing that consumes a ton of CPU cycles, because the M1 has a built-in ISP to handle image processing from attached cameras. Likewise, the AI and neural-network processing tasks that are becoming increasingly common have purpose-built, dedicated hardware on the M1 (the Neural Engine).

I think this is where the future lies for computing, in a way.

When I was a kid, most PCs were divided up into a huge list of discrete parts. You could get a CPU that needed a co-processor, which you selected and installed separately, along with fully external cache. Graphics, for much of my formative years on a PC, were split between a 2D card and a 3D accelerator, each with purpose-built hardware made to perform the one specific task to which it was assigned. Sound cards were required and had a number of different components on board – a digital-to-analog converter, amplifier circuits, a DSP or sound-processing hardware of some sort, memory for storing the work it was doing, and, depending on how far back you go, maybe even a memory slot for add-in MIDI sound banks and samples. Motherboards had multiple chips just for their basic communication jobs – a northbridge and a southbridge, as they were often called – and motherboards were typically responsible for memory control as well as system bus control at all levels.

Over the last 3 decades, the average home PC has consolidated sharply compared to that old standard. A modern CPU has integrated IO that manages most of the system's communications, an integrated memory controller, and, in most consumer parts, even built-in 2D and 3D graphics as a logic block that doesn't take up a huge amount of physical space. Motherboards often have a single chip that manages added IO beyond what the CPU can do, along with maybe a small handful of additional chips for things like new USB standards, Thunderbolt, or faster LAN/Wifi speeds. For most common tasks, this has been a boon – computing power in most homes has increased, and a modern desktop or laptop PC is capable of tons of tasks, from massive 3D rendering and gaming down to word processing and web browsing.

There is a growing gap, however. A lot of modern tasks are being accelerated by artificial intelligence, machine learning, or neural-network processing. AI processing can be used for all sorts of interesting editing and manipulation techniques, like resolution upscaling for images and video, while machine learning and neural networks can handle tasks like image recognition. These workloads can theoretically be done by a modern CPU or GPU, but often not well. Nvidia has added Tensor Cores to their GPU designs for machine-learning workloads; at the consumer level, these are most often leveraged for DLSS, which upscales images with a machine-learning process to increase perceived detail and resolution beyond what the base image contains, while Nvidia's datacenter GPUs use the same hardware for tons of tasks at large, multi-node scales.
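For a sense of what "learned upscaling" actually looks like, here's a minimal sketch of a toy super-resolution model in PyTorch. To be clear, this is just the general shape of the idea – a small convolutional network that upsamples a frame while learned filters fill in detail – and not DLSS itself, whose real architecture and training pipeline Nvidia doesn't publish in this form:

```python
# Toy 2x super-resolution network -- illustrative only, not DLSS.
# A model like this would need to be trained against high-resolution
# ground-truth frames before its output looked like anything useful.

import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale: int = 2, channels: int = 3, features: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Emit (scale^2 * channels) feature maps, then rearrange them
            # into a frame that is `scale` times larger in each dimension.
            nn.Conv2d(features, channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        return self.body(low_res)

if __name__ == "__main__":
    model = TinyUpscaler()
    frame = torch.rand(1, 3, 540, 960)   # a fake 960x540 input frame
    upscaled = model(frame)              # -> shape (1, 3, 1080, 1920)
    print(upscaled.shape)
```

The matrix math inside those convolutions is exactly the kind of work that tensor cores (or, in Apple's case, the Neural Engine) exist to chew through far faster and more efficiently than general-purpose CPU cores.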

These tasks need dedicated hardware to run at anything resembling efficiency, and while in the past the perceived solution would have been an overall increase in general compute power, CPU and GPU performance simply isn't growing at a pace that would let them absorb these workloads in a reasonable timeframe.

The solution has been to spin out new functional hardware blocks, like Nvidia's aforementioned tensor cores. Intel as a company has CPUs, storage controllers and hardware, GPUs, dedicated AI parts (the Movidius line of inference SoCs and the Habana lineup), and FPGAs – blank-slate collections of transistors that can be software-configured to build a pipeline for nearly any type of compute workload. Nvidia has tensor cores, but has also brought out new GPU pipelines to increase integer performance alongside the steady forward march of floating point performance in modern GPUs. AMD is in the process of acquiring Xilinx, one of the biggest names in FPGA development. All of the major players in the silicon game are working on designs that can combine multiple functional units into a single package. AMD has been using their chiplet paradigm since Zen 2 in 2019, with their current desktop lineup using anywhere from one to 3 separate dies, and they have patents for stacking silicon dies on a single package, primarily by stacking memory on top of working processing dies. Intel has Foveros, a packaging technology that lets Intel mold together a broad mix of silicon die components to build almost anything – a package with 8 CPU cores, a decent GPU, an AI coprocessor, and some integrated memory would be possible – and it supports 3D stacking, layering hardware together to offer a more specialized processor. Nvidia supposedly has a multi-die GPU package in the works as well.

All of this points at an interesting future. While it isn't necessarily the case that a consumer PC needs machine-learning or AI hardware today, the truth is that it will be useful sooner rather than later. Having the ability in a GeForce graphics card to scale resolution higher without needing a drastically better GPU is crazy cool, and that kind of capability has all sorts of uses for a wide variety of work. Image enhancement and element recognition could be handled as machine-learning workloads run on purpose-built hardware that can do that analysis and modification faster than a CPU or GPU could. AI processors becoming standard could enable incredible advances in game logic. And as we grow more dependent on video conferencing and use digital cameras to capture more of our lives, proper dedicated image signal processing becoming a standard feature in every computing device with a camera (or support for one) could enable big leaps in quality.

Thus, what I find most interesting about the current market, my phone change, and the Apple Silicon project (months in on that last one) is this: after nearly 30 years of consolidation, computing is turning back into a more decentralized affair, with a wider variety of purpose-built silicon going into our everyday devices and enabling a broad-reaching set of changes to what those devices can do – changes we scarcely understand today, much less leverage to their fullest.

And thus concludes my ramble about hardware (for today!).


One thought on “Sidenote: I Got A New Smartphone, So I Have Thoughts on SOC Reviews and Hardware Platforms (Ramble Incoming)”

  1. I was thinking the same thing about Apple specs vs the people that buy Apple products when I was reading up on them, but you were a bit more on the point than I was, so for once I don’t have to be the meanie 🙂 I mean, yeah – they know their buyers, and those people wouldn’t know a first level cache from a cattle prod, nor would that bother them. But Apple tells them that X is better than Y and that’s better than Z and they’re all in.

    Honestly, I don’t really feel like Apple is too far off the mark here either – we’re getting into what I would call Phase 3 of the commodification of computer hardware components, and it’s easy to conclude that in the very near future we’ll see this model copied by Intel et al, and higher-end components will be at a premium price.

    I’ve had a Pixel 3a for a few years now and am very happy with it. I actually avoided the 4a because of what I was reading about it when it came out, but hopefully those issues have been addressed somehow. At this point I’m pretty sure my next will be either a 5 (or 5a) or some sort of Samsung. These are all premium phones but if you can figure out how not to bleed for the purchase, they’re solid buys and will last a good long time.

