Why the Pixel 6’s Tensor chip is actually a big deal (and why it isn’t)
David Imel / Android Authority
It’s finally official. Google’s Pixel 6 will feature the company’s first bespoke SoC. Google has already dabbled in custom hardware with the Pixel Visual Core imaging chip and the Titan M security module, but this is the first time the company has specced out the entire inner workings of the chip itself (although it has licensed many of the SoC’s building blocks). The Tensor Processing Unit (TPU), however, is entirely in-house, and it sits at the heart of the Tensor SoC.
As expected, Google Tensor is focused on enhanced imaging and machine learning (ML) capabilities rather than breakthrough performance. Still, that leaves us with plenty to look forward to, along with a few reservations.
See also: What you can really expect from Google’s Pixel 6 SoC
Why the Google Tensor SoC is a big deal …
First and foremost, Tensor is a custom piece of silicon designed by Google to excel at the things the company wants to prioritize most. That means it should offer faster, more powerful image processing, language processing, and other ML-based features, at the very least compared with the previous-generation Pixel 5.
With a powerful in-house TPU at the core of the chip, Google is talking up on-device capabilities such as real-time voice translation for captions, text-to-speech without an internet connection, simultaneous keyboard and voice input, and superior camera processing. We can imagine that Google Lens and other ML-driven features will improve too. While these are mostly advancements over what Google already does with existing hardware, hopefully we’ll see some genuinely new features as well.
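To give a rough idea of what “on-device ML” means from the software side, here is a minimal sketch, assuming a hypothetical speech model shipped as a TensorFlow Lite file, of how an Android app can route inference to whatever accelerator the SoC exposes (a DSP, GPU, or NPU/TPU) via the NNAPI delegate. This is not Google’s internal pipeline; the model name and output size are placeholders.

    import android.content.Context
    import org.tensorflow.lite.Interpreter
    import org.tensorflow.lite.nnapi.NnApiDelegate
    import java.io.FileInputStream
    import java.nio.MappedByteBuffer
    import java.nio.channels.FileChannel

    // Memory-map a .tflite model bundled in the app's assets.
    fun loadModel(context: Context, assetName: String): MappedByteBuffer =
        context.assets.openFd(assetName).use { fd ->
            FileInputStream(fd.fileDescriptor).channel.use { channel ->
                channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
            }
        }

    // Run inference, letting NNAPI route supported ops to an on-device
    // accelerator (DSP, GPU, or NPU/TPU) when one is available.
    fun runSpeechModel(context: Context, features: Array<FloatArray>): FloatArray {
        val delegate = NnApiDelegate()
        val options = Interpreter.Options().addDelegate(delegate)
        val interpreter = Interpreter(loadModel(context, "speech_model.tflite"), options)
        val output = Array(1) { FloatArray(64) } // hypothetical output shape [1, 64]
        interpreter.run(features, output)
        interpreter.close()
        delegate.close()
        return output[0]
    }

The point is that the framework, not the app, decides which silicon block actually runs the model, which is exactly where a dedicated TPU can pay off without developers changing their code.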
Google Tensor is going to take what we loved about the Pixel 5 and make it even better.
AI and ML are at the heart of what Google does, and the company arguably does them better than anyone else, hence they are the core focus of its chip. As we’ve seen with recent SoC launches, raw performance is no longer the most important aspect of a mobile SoC. Heterogeneous compute and workload efficiency are just as important, if not more so, in enabling powerful new software features and product differentiation.
By leaving the Qualcomm ecosystem and choosing its own components, Google gains more control over how and where valuable silicon space is devoted to realizing its smartphone vision. Qualcomm has to do justice to a multitude of partner visions, while Google clearly has something much more specific in mind. It’s hard to argue with the idea that the Pixel 6 experience will benefit more from improved AI than from Facebook opening 5% faster than it did last year. Much like Apple’s work on custom silicon, Google is turning to bespoke hardware to create bespoke experiences.
By switching to a semi-custom or co-developed processor, Google may also be able to provide updates faster and for longer than ever before. Partners currently rely on Qualcomm’s support roadmap to provide long-term updates. Samsung offers three years of operating system updates and four years of security updates on Qualcomm-powered phones, and Google promises the same for the Pixel 5 and earlier. It will be interesting to see if Google goes any further now that it is closer to the chip design process.
… and why it might not be
Anyone hoping for a generational leap in performance will be disappointed here. Google has not shared any benchmarks or details about its CPU, GPU, or other internal components. Without an architecture license, though, Google is almost certainly using off-the-shelf Arm parts such as the Cortex-A78. We’re also still in the dark about which 5G capabilities the phone will have. In fact, Google won’t even disclose who manufactures its chipset, despite rumors pointing to Samsung. Google hardware boss Rick Osterloh has said Tensor will be “very competitive” in terms of CPU and GPU performance. Make of that what you will.
Even so, Google isn’t doing anything completely groundbreaking with its imaging and machine learning pipeline. After all, Google’s development cycle doesn’t happen in isolation, and state-of-the-art hardware has moved on significantly since the company’s last premium phone, the Pixel 4 series.
State-of-the-art hardware has moved on significantly since Google’s last premium phone.
So far, Google’s demos have shown off its advanced image processing applied to multi-camera and video scenarios. This is possible because Google’s machine learning chops are now integrated right into the image signal processor (ISP) pipeline rather than sitting in a block further away from it.
However, this isn’t a new idea even for 2020 smartphones, let alone late 2021. In fact, the Qualcomm Snapdragon 855, which powered the Google Pixel 4 back in 2019, introduced computer vision elements to the ISP chain. Since then, the Snapdragon 865 and 888 have enhanced these capabilities, allowing partners to pull data from multiple cameras simultaneously and apply effects such as HDR and real-time bokeh to 4K video at 60 frames per second. Google isn’t the first to come up with these ideas, but that doesn’t mean it can’t do them better.
See also: Qualcomm explains how the Snapdragon 888 changes the camera game
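As a concrete illustration of the multi-camera side of this, the sketch below uses only the public Android Camera2 API (not Qualcomm’s or Google’s internal ISP code) to check whether a device exposes a logical multi-camera, i.e. several physical sensors an app can stream from at once. The function name is ours.

    import android.content.Context
    import android.hardware.camera2.CameraCharacteristics
    import android.hardware.camera2.CameraManager

    // Return the IDs of logical multi-cameras along with the physical
    // sensors behind each one (requires API 28+).
    fun findLogicalMultiCameras(context: Context): List<Pair<String, Set<String>>> {
        val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
        return manager.cameraIdList.mapNotNull { id ->
            val chars = manager.getCameraCharacteristics(id)
            val caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
            val isLogical = caps?.contains(
                CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA
            ) == true
            // physicalCameraIds lists the underlying sensors the ISP can fuse.
            if (isLogical) id to chars.physicalCameraIds else null
        }
    }

How well the SoC fuses those streams, and whether it can do so with ML in the loop at 4K60, is exactly the kind of ISP-level differentiation being discussed here.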
Other SoC manufacturers also have their own low-power sensor hubs for functions such as always-on speech recognition, ambient display, and other sensor-driven features. Security enclaves like Titan M aren’t new either; in fact, they’re indispensable in today’s biometric-obsessed devices. Similar capabilities can be found in mobile SoCs from Apple, Huawei, Qualcomm, and Samsung, though the exact features differ.
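For a sense of what such security enclaves expose to software, here is a minimal sketch using Android’s public Keystore API to request a key generated inside StrongBox-class secure hardware, the tier that Titan M backs on recent Pixels. The key alias and cipher parameters are illustrative only.

    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import javax.crypto.KeyGenerator

    // Generate an AES key that never leaves the secure element, if the
    // device exposes StrongBox-class hardware (API 28+).
    fun generateEnclaveKey(alias: String): Boolean {
        return try {
            val spec = KeyGenParameterSpec.Builder(
                alias,
                KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
            )
                .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
                .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
                .setIsStrongBoxBacked(true) // fails on devices without a secure element
                .build()
            val generator = KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
            generator.init(spec)
            generator.generateKey()
            true
        } catch (e: Exception) {
            false // e.g. StrongBoxUnavailableException on unsupported hardware
        }
    }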
Google’s Tensor SoC: A departure from the status quo?
Robert Triggs / Android Authority
Google boss Sundar Pichai noted that the Tensor chip has been four years in the making, which is an interesting timeframe. Google started this project when mobile AI and ML capabilities were still relatively new. The company has long been a leader in ML and has often seemed frustrated with the limitations of partner silicon, as its Pixel Visual Core and Neural Core experiments demonstrated.
The Tensor SoC sees Google pursue its own vision not only for machine learning, but also for how hardware design drives product differentiation and software features. It will be fascinating to see whether all of this pulls together into a Pixel 6 smartphone capable of some impressive industry firsts.
Qualcomm and others haven’t sat on their hands for four years, however. Machine learning, computational imaging, and heterogeneous compute are core focuses for all the major mobile SoC players, and not just in their premium products. It remains to be seen whether Google is simply reinventing the wheel here, or whether its TPU technology puts the Tensor SoC genuinely ahead.