The phone hasn’t even hit the market yet, but the prospect of a Pixel 6 series powered by a custom Google Tensor SoC already raises some big questions. Can the chip catch Apple? Is it really going to use the latest and greatest technology?
Google could have bought chipsets from long-time partner Qualcomm, or even sourced an Exynos model from its new friends at Samsung. But that wouldn’t have been nearly as fun. Instead, the company worked with Samsung to develop its own chipset, combining off-the-shelf components with a dash of its own in-house machine learning (ML) silicon.
According to a detailed report, the Pixel 6’s Google Tensor SoC will look a little different from other flagship chipsets on the market. Of course, we’ll save benchmarking and any performance or battery verdicts until we have the device in hand. But there’s already plenty of information to compare on paper against the latest Qualcomm (and Samsung, while we’re at it) chipsets. So how does Google Tensor stack up against the Snapdragon 888? Let’s take an early look.
Google Tensor vs Snapdragon 888 vs Exynos 2100
Although next-generation SoCs from Qualcomm and Samsung aren’t too far away, the Google Tensor chip is set to compete with the current-generation flagship chipsets, the Qualcomm Snapdragon 888 and Samsung Exynos 2100. So that’s what we’ll use as the basis for our comparison.
| | Google Tensor | Snapdragon 888 | Exynos 2100 |
|---|---|---|---|
| CPU | 2x Arm Cortex-X1 (2.80GHz), 2x Arm Cortex-A76, 4x Arm Cortex-A55 | 1x Arm Cortex-X1 (2.84GHz), 3x Arm Cortex-A78, 4x Arm Cortex-A55 | 1x Arm Cortex-X1 (2.90GHz), 3x Arm Cortex-A78, 4x Arm Cortex-A55 |
| GPU | Arm Mali-G78 (854MHz) | Adreno 660 | Arm Mali-G78 MP14 (854MHz) |
| RAM | LPDDR5 | LPDDR5 | LPDDR5 |
| ML | Tensor Processing Unit | Hexagon 780 DSP | Triple NPU + DSP |
| Media decoding | H.264, H.265, VP9, AV1 | H.264, H.265, VP9 | H.264, H.265, VP9, AV1 |
| Modem | 4G LTE, 5G | 4G LTE, 5G | 4G LTE, 5G |
| Process | 5nm | 5nm | 5nm |
As we would expect given the nature of their relationship, Google’s Tensor SoC relies heavily on Samsung technology, the same technology found in its latest Exynos processor. According to the report, the modem and GPU setup are borrowed directly from the Exynos 2100, and the similarities extend to the AV1 media decoding hardware.
If the GPU setup really does match the Exynos 2100’s, the Pixel 6 should also make for a decent gaming phone, albeit one that sits a few frames behind the graphics capabilities of the Snapdragon 888. Still, that will come as a relief to anyone hoping the Pixel 6 finally delivers proper flagship-tier performance. Meanwhile, we anticipate that the chip’s Tensor Processing Unit (TPU) will provide even more competitive machine learning and AI capabilities.
Continue reading: Snapdragon 888 vs. Exynos 2100 tested
The Google Tensor SoC appears to be competitive in terms of CPU, GPU, modem, and other technologies.
Google’s 2+2+4 CPU setup is a slightly unusual design choice. It’s worth exploring in more detail, which we’ll get to below, but the salient point is that two powerful Cortex-X1 CPUs should give the Google Tensor SoC plenty of single-threaded grunt, while the older Cortex-A76 cores could leave the chip a weaker multitasker. It’s an interesting combination that harks back to Samsung’s ill-fated Mongoose CPU setups, and there are big questions to be answered about the performance and thermal efficiency of this design.
On paper, the Google Tensor processor and Pixel 6 series seem very competitive with the Exynos 2100 and Snapdragon 888, which can be found in some of the best smartphones of 2021.
Understand the Google Tensor CPU design
Let’s get to the big question on every tech enthusiast’s lips: why would Google choose Arm’s 2018-era Cortex-A76 CPU for a cutting-edge SoC? The answer lies in a trade-off between silicon area, performance, and heat.
I’ve dug up a slide (shown below) from a previous Arm announcement that helps visualize the key arguments. Admittedly, the chart’s scale isn’t particularly precise, but the bottom line is that the Cortex-A76 is both smaller and lower power than the newer Cortex-A77 and A78 at the same clock speed and on the same manufacturing process (an ISO comparison). The example here is at 7nm, but Samsung has been building Arm’s Cortex-A76 on 5nm for some time. If you want numbers, the Cortex-A77 is 17% larger than the A76, while the A78 is only 5% smaller than the A77. Arm also only managed to cut power consumption by 4% between the A77 and the A78, leaving the A76 as the smaller, lower-power choice.
The trade-off is that the Cortex-A76 offers much lower peak performance. Going by Arm’s numbers, the company saw a 20% IPC uplift moving from the A76 to the A77, and a further 7% on a comparable process moving to the A78. As a result, multi-threaded workloads could run more slowly on the Pixel 6 than on its Snapdragon 888 rivals, although of course that depends heavily on the exact workload. Clearly, Google is confident that two Cortex-X1 cores for the heavy lifting gives its chip the right mix of peak performance and efficiency.
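To put those percentages in perspective, here’s a rough back-of-the-envelope sketch in Python using only the relative figures quoted in this article (including the X1’s 23% uplift over the A78 mentioned further down). It ignores clock speeds, caches, the little Cortex-A55 cores, and real-world scheduling, so treat the output purely as an illustration of the trade-off, not a benchmark prediction.

```python
# Rough relative comparison based on Arm's quoted ISO-process figures:
#   Cortex-A77 ~ +20% IPC over A76, Cortex-A78 ~ +7% over A77,
#   Cortex-X1  ~ +23% over A78 (peak single-thread),
#   A77 area ~ +17% vs A76, A78 area ~ -5% vs A77.
# Everything below is normalized to a Cortex-A76 = 1.0 baseline.

a76_perf = 1.0
a78_perf = a76_perf * 1.20 * 1.07     # ~1.28x an A76
x1_perf = a78_perf * 1.23             # ~1.58x an A76
a78_area = 1.0 * 1.17 * 0.95          # ~1.11x an A76's footprint

print(f"A78 vs A76: ~{(a78_perf - 1) * 100:.0f}% faster, "
      f"~{(a78_area - 1) * 100:.0f}% larger")

# Very rough aggregate throughput of the big + middle clusters only
# (assuming equal clocks and perfect scaling -- neither holds in practice):
tensor_big_mid = 2 * x1_perf + 2 * a76_perf   # Tensor: 2x X1 + 2x A76
sd888_big_mid = 1 * x1_perf + 3 * a78_perf    # SD888:  1x X1 + 3x A78

print(f"Tensor big+mid cluster: ~{tensor_big_mid:.2f} (A76 units)")
print(f"SD888 big+mid cluster:  ~{sd888_big_mid:.2f} (A76 units)")
```

Under these crude assumptions the two layouts land surprisingly close in aggregate, with Tensor trading a little multi-core throughput for much stronger one- and two-thread performance.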
That’s the crucial point: the choice of the older Cortex-A76 is inextricably linked to Google’s desire for two powerful Cortex-X1 CPU cores. There’s only so much area, power, and heat that can be devoted to a mobile processor’s CPU design, and two Cortex-X1s push right up against those limits.
Opting for smaller, lower-performance cores frees up silicon, power, and thermal budget for those larger components. Put another way, choosing two Cortex-X1 CPU cores forced Google to settle for two smaller, lower-performing middle-tier cores. But why would Google want two Cortex-X1s when Qualcomm and Samsung do perfectly well with just one?
Continue reading: Why the Pixel 6’s tensor chip is actually a big deal (and why not)
Aside from the raw single-thread performance boost (the core is 23% faster than the A78), the Cortex-X1 is an ML workhorse. And as we know, machine learning is a big part of Google’s design goals for its custom silicon. The Cortex-X1 offers twice the machine-learning number-crunching capability of the Cortex-A78, courtesy of a larger cache and double the SIMD floating-point instruction bandwidth. In other words, Google is trading some general multi-core performance for two Cortex-X1s that augment its TPU’s ML capabilities, particularly in cases where it might not be worth firing up the dedicated machine learning accelerator. We don’t yet know how much cache Google plans to pair with its CPU cores, which will also affect performance.
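To see where that “double the SIMD bandwidth” figure comes from, here’s a minimal sketch. It assumes the commonly cited configuration of two 128-bit NEON/FP pipelines on the Cortex-A78 versus four on the Cortex-X1; treat those pipe counts as assumptions rather than specifics confirmed in this report.

```python
# Back-of-envelope peak fp32 throughput per core per cycle,
# assuming 128-bit NEON pipes and fused multiply-add (2 FLOPs per lane).
LANES_PER_PIPE = 128 // 32   # four fp32 lanes per 128-bit pipe
FLOPS_PER_LANE = 2           # one FMA = multiply + add

def peak_fp32_flops_per_cycle(simd_pipes: int) -> int:
    """Theoretical fp32 FLOPs per cycle for a given number of 128-bit pipes."""
    return simd_pipes * LANES_PER_PIPE * FLOPS_PER_LANE

a78 = peak_fp32_flops_per_cycle(simd_pipes=2)   # assumed A78 config
x1 = peak_fp32_flops_per_cycle(simd_pipes=4)    # assumed X1 config

print(f"Cortex-A78: ~{a78} fp32 FLOPs/cycle")   # ~16
print(f"Cortex-X1:  ~{x1} fp32 FLOPs/cycle")    # ~32, i.e. 2x the A78
```

Twice the per-core peak, multiplied across two X1 cores, adds up to a sizeable chunk of extra CPU-side ML grunt for lighter workloads.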
Two high-performance Cortex-X1 cores are a departure from Qualcomm’s winning formula, one that comes with its own advantages and disadvantages.
Even with the Cortex-A76 cores on board, power and heat could still prove a trade-off. Tests suggest that a single Cortex-X1 core is quite power hungry and can struggle to sustain peak frequencies in today’s flagship phones; some phones even avoid scheduling tasks on the X1 to rein in power consumption. Packing in two of these cores doubles the heat and power problem, so we should be wary of suggestions that the Pixel 6 will simply blow past the competition just because it has two powerhouse cores. Sustained performance and energy consumption will be crucial. Remember that Samsung’s Exynos chipsets built around its big custom Mongoose cores suffered from exactly this problem.
Google’s TPU differentiator
One of the few remaining unknowns of the Google Tensor SoC is its Tensor Processing Unit. We know it’s chiefly responsible for running Google’s various machine learning tasks, from speech recognition to image processing and even video decoding. That suggests a fairly general-purpose inference and media component tied into the chip’s multimedia pipeline.
Related: How on-device machine learning has changed the way we use our phones
Qualcomm and Samsung have their own slices of ML silicon too, but what’s particularly interesting about the Snapdragon 888 is how spread out those processing parts are. Qualcomm’s AI Engine is distributed across the CPU, GPU, Hexagon DSP, Spectra ISP, and Sensing Hub. While that’s good for efficiency, you’ll seldom find a use case that lights up all of those components at once, so Qualcomm’s 26TOPS of system-wide AI performance is rarely, if ever, fully tapped. Instead, it’s more likely that one or two components run at the same time, such as the ISP and DSP for computer vision tasks.
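To illustrate why a headline system-wide TOPS number can be misleading, here’s a quick sketch. The per-block breakdown below is entirely hypothetical (Qualcomm doesn’t publish one); only the 26TOPS total comes from the figure quoted above.

```python
# Purely illustrative: a hypothetical per-block split of a 26 TOPS
# system-wide AI figure. Qualcomm does not publish these numbers.
hypothetical_tops = {
    "Hexagon DSP": 15.0,
    "GPU": 6.0,
    "CPU": 3.0,
    "Spectra ISP": 1.5,
    "Sensing Hub": 0.5,
}

headline = sum(hypothetical_tops.values())   # the marketing number

# A typical computer-vision workload might only engage two blocks at once:
concurrent = hypothetical_tops["Hexagon DSP"] + hypothetical_tops["Spectra ISP"]

print(f"Headline figure:    {headline:.1f} TOPS")
print(f"Realistically used: {concurrent:.1f} TOPS for an ISP + DSP task")
```

However the real split shakes out, the point stands: the number you see on the spec sheet assumes every block crunching ML at once, which is not how real workloads behave.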
Google says its TPU and ML capabilities will be the chip’s main differentiator.
Google’s TPU will no doubt comprise various sub-blocks of its own, especially if it handles video encoding and decoding too, but it looks like the TPU will house most, if not all, of the Pixel 6’s ML functionality. If Google can bring most of that TPU horsepower to bear at once, it could potentially leapfrog its competitors in some really interesting use cases. But we’ll just have to wait and see.
Google Tensor vs. Snapdragon 888: The Early Verdict
With Huawei’s Kirin chips on the back burner, the Google Tensor SoC injects some much-needed fresh blood into the mobile chipset coliseum. Of course, we’ll wait until we have the phone in hand before drawing any firm conclusions. But on paper, the Google Tensor looks every bit as compelling as the flagship-tier Snapdragon 888 and Exynos 2100.
As we’ve been expecting all along, the Google Tensor won’t leapfrog the current generation of processors. Instead, it takes its own novel approach to the mobile processing problem. With two powerful CPU cores and an in-house TPU machine learning solution, Google’s SoC shapes up a little differently from its competitors. Although the real game-changer might be Google offering five years of OS updates, made possible by the switch to its own silicon.
What do you think of Google Tensor vs the Snapdragon 888 and Exynos 2100? Does the Pixel 6’s processor make it a real flagship contender?