Google’s new phone launch: Android 14, ‘buy a phone, get AI’

At 10 a.m. local time on October 4, Google held its “Made by Google” event in New York, unveiling the Pixel 8, Pixel 8 Pro, Pixel Watch 2, and Pixel Buds, along with the latest Android 14. Aside from the Pixel 8’s matte frosted back and the visor-shaped bar housing its 50-megapixel camera, the biggest highlight of the event was, as you might guess, AI – and the more intriguing part is how Google integrates its various AI ideas and features into its flagship phones. From near-magical retouching and image-quality correction to AI-generated wallpapers, AI permeates the most familiar, most visible features of Google’s new phones and system. At $699 for the Pixel 8 and $999 for the Pixel 8 Pro, the phones are still noticeably cheaper than the newest flagships from Apple and Huawei, and the pitch is simple: buy a phone, get AI for free.

1. AI features invade phones
The new Pixel 8 series inherits the familiar Pixel design, with few changes to the exterior. The biggest are a flat display in place of the previous generation’s curved one, and a back that switches from glass to a matte frosted material for a better grip. Google also upgraded the displays: a 1-120Hz LTPO refresh rate makes gaming and web browsing smoother while saving a little power. The Pixel 8 series is the first phone line to ship with Android 14, and its biggest highlight is still AI – or at least AI accounts for the most talked-about features of the launch.

According to Brian Rakowski, Google’s vice president of product management, the Pixel 8 and Pixel 8 Pro are “both centered around artificial intelligence,” powered by Google Tensor G3. For starters, the two phones have an upgraded camera system. Compared to the Pixel 7, the 50-megapixel main camera copes better with low light, the ultra-wide lens is larger, the telephoto lens captures more light and takes 10x magnified photos at optical quality, and the front-facing camera now features autofocus to help users take selfies. Even more interesting than the cameras are the editing tools. “We’ve all been in situations where a perfect group shot is taken, but someone isn’t looking at the camera.” With generative AI, Magic Editor in Google Photos can take an existing photo, let the user manually select a different expression from another shot, and generate a new photo that achieves the effect the photographer wanted, so that “everyone has their eyes open”.
Magic Editor can also change lighting and backgrounds, such as turning a gray sky into a golden sunset, and offers multiple options to choose from after each edit. According to Dina Berrada, director of product management for Google Photos, “This is just the beginning, and we plan to add more intuitive and generative AI features to Magic Editor in the future.” After taking a photo, Pixel users will also be able to zoom in as far as they like, with generative AI filling in the gaps between pixels and adding detail to the photo – a feature Google calls Zoom Enhance, which will be available on the Pixel 8 Pro.
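Google hasn’t detailed the model behind Zoom Enhance, but “filling in the gaps between pixels” is, at its simplest, interpolation. The sketch below is a minimal non-generative baseline for intuition only – a generative model would synthesize plausible detail where this code can only average its neighbors:

```python
# Minimal sketch: 2x upscaling by bilinear-style interpolation.
# Original pixels land on even coordinates; the "gaps" between them
# are filled by averaging neighbors. Illustrative only, not Google's method.
def upscale_2x(img: list[list[float]]) -> list[list[float]]:
    h, w = len(img), len(img[0])
    out = [[0.0] * (2 * w - 1) for _ in range(2 * h - 1)]
    # place the original pixels
    for y in range(h):
        for x in range(w):
            out[2 * y][2 * x] = img[y][x]
    # fill horizontal gaps on the original rows
    for y in range(0, 2 * h - 1, 2):
        for x in range(1, 2 * w - 1, 2):
            out[y][x] = (out[y][x - 1] + out[y][x + 1]) / 2
    # fill the in-between rows from the rows above and below
    for y in range(1, 2 * h - 1, 2):
        for x in range(2 * w - 1):
            out[y][x] = (out[y - 1][x] + out[y + 1][x]) / 2
    return out

print(upscale_2x([[0.0, 2.0], [4.0, 6.0]]))
# → [[0.0, 1.0, 2.0], [2.0, 3.0, 4.0], [4.0, 5.0, 6.0]]
```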
Beyond images, there is also audio and video. Audio Magic Eraser on Pixel phones reduces distracting sounds in videos, such as whistling wind or a noisy crowd, by recognizing them and controlling their volume; Google calls it “the first computational audio capability that uses advanced machine learning models to categorize sounds into different levels.” Later this year, the Pixel 8 Pro is also due to get Video Boost, a video-processing feature that adjusts a video’s color, lighting, stability, and grain, and that enables Night Sight Video to improve low-light footage.
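Google hasn’t published how Audio Magic Eraser works internally, but the behavior it describes – categorize the sounds in a clip, then turn each one’s volume up or down – amounts to a per-source gain before remixing. Everything in this sketch (the source names, the gain values, the assumption that an ML model has already separated the tracks) is illustrative, not Google’s implementation:

```python
# Illustrative sketch: assume an ML separation model has already split
# a clip into per-source tracks. "Controlling their volume" is then a
# per-source gain applied before the tracks are summed back together.
def remix(sources: dict[str, list[float]], gains: dict[str, float]) -> list[float]:
    """Scale each separated source by its gain and mix them back into one track."""
    length = max(len(track) for track in sources.values())
    mixed = [0.0] * length
    for name, track in sources.items():
        gain = gains.get(name, 1.0)  # sources without an override keep unity gain
        for i, sample in enumerate(track):
            mixed[i] += gain * sample
    return mixed

# Hypothetical clip: keep the speech, duck the wind to 10% of its level.
sources = {"speech": [0.5, 0.4, 0.6], "wind": [0.2, 0.3, 0.1]}
print(remix(sources, {"wind": 0.1}))  # ≈ [0.52, 0.43, 0.61]
```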
“We’re using the same algorithms to enable a completely new approach to computational video,” said Isaac Reynolds, chief product manager for the Pixel camera. “Achieving this is challenging because there’s a lot more data in video than in photos. In fact, processing one minute of 4K video at 30 frames per second is the same as processing 1,800 photos, a task that no cell phone can accomplish on its own.” Google’s Pixel phones also embed some common generative AI features, such as generating web-page summaries. In addition, the Pixel can read web pages aloud and translate them, letting users “listen to articles” while on the go.
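The “1,800 photos” figure Reynolds cites is straightforward arithmetic, and it is worth seeing just how large the per-minute pixel budget gets:

```python
# Back-of-the-envelope check of the "1,800 photos" claim:
# one minute of 4K video at 30 frames per second.
fps = 30
seconds = 60
frames = fps * seconds            # 1,800 frames per minute

pixels_per_frame = 3840 * 2160    # one UHD 4K frame, ~8.3 megapixels
total_pixels = frames * pixels_per_frame
print(frames, total_pixels)       # ~14.9 billion pixels per minute of footage
```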

On the communication front, the Pixel supports typing, editing, and sending messages by voice in multiple languages. Google executives say the Pixel is “better at understanding the nuances of human speech,” waiting for the user to finish before responding when they pause or say “um,” and with AI improvements, Call Screen is now claimed to cut spam calls by an average of 50%. “It will silently answer calls from unknown numbers in a more natural voice to interact with the caller. It’s also smart enough to separate the calls you want from the ones you don’t. Soon, Call Screen will also generate contextual reply suggestions so you can quickly respond to simple calls, such as appointment confirmations, without having to answer.”
There’s also a new temperature sensor on the back of the Pixel 8 Pro that scans the temperature of objects – useful, Google claims, for checking whether a pan is hot enough when cooking, or whether the milk in a baby’s bottle is at the right temperature. Google has also submitted an application to the FDA to enable the Pixel’s thermometer app to measure body temperature.
2. AI, from hardware to software
To make the AI features above possible, the Pixel 8 series is equipped with Google’s latest custom chip, Tensor G3, designed specifically to run Google’s AI models. It includes the latest generation of Arm CPUs, an upgraded GPU, new ISP and imaging DSP blocks, and a new-generation TPU. According to Monika Gupta, who leads product management for Google Tensor, the latest Pixel phones run “more than twice the number of machine learning models on the device” compared to the previous generation of Tensor, and the models themselves are more sophisticated, with the generative AI on the latest phones being “150 times more complex than the most complex model on the Pixel 7 a year ago”.
In terms of speech and natural language understanding, the Pixel 8 is the first phone to use the same text-to-speech model as Google’s data centers, which is what enables the Pixel to read web pages aloud. The image and video features described above are likewise powered by Tensor G3 together with Google’s data centers. Google has optimized the imaging pipeline and integrated machine learning algorithms directly into the chip, enabling Live HDR to capture more detail on the Pixel 8 and Pixel 8 Pro and improve color, contrast, and dynamic range. All of this combines a new camera sensor, new machine-learning models, and new camera software while consuming less power.
“Machine learning models now enhance virtually every aspect of the Pixel user experience,” said the aforementioned Google executive, “and this is just the beginning.” Days later, Google also launched Assistant with Bard, a version of its personal assistant powered by generative AI that combines Bard’s generative and reasoning capabilities. In the coming months, users will be able to use it on Android and iOS mobile devices. “We think digital assistants should make it easier to manage things big and small on your to-do list. Like planning your next trip, finding details buried in your inbox, creating a grocery list for a weekend getaway, or sending a text message. Just like a real assistant,” said Sissie Hsiao, Google’s vice president and general manager.
On Android devices, Google is building a more contextual phone experience. For example, say a Pixel user has just taken a photo of an adorable puppy: they can simply float Assistant with Bard over the photo and ask it by voice to compose a social post. Assistant with Bard then uses the image as a visual cue to understand the context and generate the content. “This conversational overlay is a whole new way to interact with your phone,” Google executives say. According to them, Assistant with Bard is still an early experiment and will be rolled out to early testers soon to gather feedback before reaching the public in the coming months.
Google’s AI is also making its way into watches, not just phones. For wearables, Google describes a typical scenario: after waking up, a user opens the Google Assistant and simply asks “How did I sleep last night?” to get a daily sleep score or a weekly average, or asks it to start a workout.
The latest Android 14 update, much like Apple’s iOS 17, doesn’t change a great deal and leans more toward system customization and AI. For example, Android 14 lets users freely customize the lock screen and generate their own wallpapers with generative AI, among other things.
There’s no denying that the Pixel lineup’s market share is still low, despite all the attention it’s getting.
However, AI may be an opportunity for the Pixel, and for Google, to turn things around. In the generative AI boom, phone makers have no choice but to keep up, hoping to bring large-model capabilities onto the phone. That requires not only chipmakers like Qualcomm to supply the underlying hardware, but also the maker of the Android system, Google, to provide support and direction at the software level. This is Google’s advantage over strong rivals such as Apple and Microsoft: it has both the AI capabilities and the Android system running on billions of mobile devices. The Pixel 8 series, obviously, is just the beginning.

