Tensor Chips in Pixel 6: Does Google Have a Slam Dunk in its Hands?

When Google launched the Pixel back in 2016, its focus shifted to building software features that could leverage hardware and deliver benefits through advanced artificial intelligence (AI) and machine learning (ML) algorithms. That same year, Google introduced the Tensor Processing Unit (TPU) - an AI-accelerator application-specific chip - in its data centres.

An excellent example of this AI-first approach was the Pixel 2 and Pixel 2 XL's cameras, which were highly praised for their computational photography. Features like HDR+ and Night Sight, and later astrophotography, paved the way for more extensive AI/ML adoption across the board.

Generational improvements in the AI/ML landscape enabled Google to build real-time, on-device speech recognition models, which were employed to power Google Recorder. By 2018, Google had made TPUs available for third-party use, both as part of its cloud infrastructure and as a chip for sale. In the Pixel line of smartphones, these purpose-built chips were dubbed Pixel Visual Core (for AI/ML-based computational photography) and, later, Pixel Neural Core (which added AI/ML-based speech recognition).

Word on the street was that Google would make its own SoC (system on chip) for its future devices. With the introduction of Apple's M1 chip last year, those rumours gained significant traction. Then, in August 2021, Google finally lifted the veil on its newest generation of Pixel phones - the Pixel 6 and Pixel 6 Pro - and confirmed that both would debut with a new, custom-designed, in-house SoC called Tensor. It is one of Google's most ambitious hardware ventures to date and could upend the traditional smart devices market.

What is the Google Tensor SoC?

Google’s Tensor, code-named GS101, is the company's first SoC custom-built for its Pixel line of phones. It incorporates standard chip components - a CPU, a GPU, and memory - alongside Google's latest generation of TPU, which lends computational muscle to day-to-day AI tasks.

The Tensor chip's origins lie in the fact that current mobile chips are limited in how well they handle AI- and ML-based workloads. Google has a dual advantage here: it can use its TensorFlow library to train its own neural networks, and it can now transfer those benefits directly to the entire SoC, taking major workloads off the CPU's and GPU's plates and improving performance as a result.
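To make that train-then-deploy loop concrete, here is a minimal, hypothetical sketch of how a TensorFlow model is typically prepared for on-device accelerators: a small Keras model is converted to the compact TensorFlow Lite format, with optimisations enabled so it can run efficiently on mobile silicon. The model architecture below is an arbitrary illustration, not anything Google has described for Tensor.

```python
# Sketch: converting a TensorFlow model for on-device deployment.
# The tiny image classifier here is purely illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to TensorFlow Lite, the format mobile runtimes (and, by
# extension, on-device accelerators) consume.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g. quantization
tflite_model = converter.convert()  # serialized flatbuffer bytes
```

In practice, the resulting `.tflite` file would be bundled into an Android app and dispatched to whatever accelerator the phone exposes, which is precisely the layer a chip like Tensor is designed to speed up.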

Rick Osterloh, Senior Vice President of Devices and Services at Google, described the purpose in a recent blog post: “Tensor enables us to make the Google phones we’ve always envisioned — phones that keep getting better while tapping the most powerful parts of Google, all in a highly personalized experience.”

What Else Do We Know About the Google Tensor Chip?

Google has yet to release more information about the new SoC, but rumours indicate it has been working closely with Samsung to adapt an ARM-based design for the chip. Samsung is also rumoured to be its manufacturer - a logical choice, since it runs one of the world’s biggest chip foundries alongside TSMC. Samsung may also share insights from its successful Exynos line of SoCs with Google, further helping it in the process.

XDA reported that the Tensor SoC will likely contain a mix of Cortex-A78, A76, and A55 cores with an Arm Mali GPU, and will be manufactured on Samsung's 5nm fabrication process.

Both the Pixel 6 and Pixel 6 Pro contain the Tensor SoC and are scheduled for release later this year. Google says that, alongside the chip, these phones will see improvements in almost all areas - screens, cameras, and, most importantly, the integrated software experience in the form of Android 12 and its ‘Material You’ design.

Though we have seen Google’s AI/ML prowess shine through the generations of TPUs it has produced over the years, it remains to be seen how much of an improvement the company can deliver when that expertise is integrated into a system-on-chip design. Tensor is Google’s most ambitious hardware project to date. Most pundits agree that while it may not be an immediate slam dunk, it is the first step towards the long-term goal of a better Google chip and, in turn, better integration between software and hardware in Google products.

Reggie Singh

A seasoned professional with over 20 years of experience, Reggie Singh is a global digital strategist and innovation leader who thrives at the intersection of technology and heritage.

His background extends beyond just digital expertise. Reggie is a creative thinker and futurist, constantly exploring the transformative power of emerging technologies. He delves into how these advancements not only reshape the digital landscape but also influence the zeitgeist and popular culture.

Reggie's global perspective fuels his unique approach. He sees technology as a dynamic storyteller, a bridge connecting generations across the world. This is particularly evident in his passion for Girmit ancestry tracing in India. By leveraging cutting-edge tools, Reggie goes beyond traditional methods, breathing life into forgotten narratives for a modern audience.

His work transcends cultural exploration. Reggie views technology as a powerful tool for cultural preservation and fostering deeper human connections, especially when it comes to ancestry and heritage. He sees emerging technologies as enablers, not just disruptors, and his innovative thinking pushes the boundaries of how technology shapes collective memory.

Reggie's journey is a testament to this philosophy. He skillfully blends honoring the past with navigating the present, all while shaping the future through the transformative power of technology and cultural exploration.

Reggie on LinkedIn

http://www.reggiesingh.com