
Does Google AI make the Pixel 10 a game-changing smartphone?

Tim Biggs

Google’s new Pixel 10 line-up is a very impressive range of smartphones, with refined looks, a smooth version of Android 16, seven-year support, great displays and excellent cameras (including a telephoto on the standard non-Pro version for the first time). But as usual, the buzziest selling points are all about AI.

From the latest Gemini models to new computational photography methods, Google is selling the Pixel 10 as the smartest smartphone you can get. So, how well do these features work?

The Pixel 10 phones are filled with Google’s latest AI features.

Magic Cue

Ever since the Pixel 2 and its At a Glance widget in 2017, Google has been trying to take the stuff its apps know about you and offer it up when you need it. At a Glance still exists on Pixel 10, and it still occasionally surprises with a prescient nudge about how long it will take to get across town in time for a meeting, but Magic Cue aims to be on another level.

The basic idea is that the phone should bring you information when you need it, even if you’re in one app and the information is from another. To do this, the AI model considers information stored in certain Google apps, including Gmail, Calendar, Screenshots and more – information that is, or was, recently displayed on your screen – and “foundational” data such as your phone number.

Examples in which a Magic Cue might show up include when calling a specific business (a panel could appear showing the number of a recent order you placed, taken from an email), or if someone mentions a location in a text message (a prompt could appear that will plot a route from you to that place in Google Maps, or that will take you to the weather forecast for that place, depending on context).

It’s a strangely ephemeral feature in that it exists only when it thinks it will be helpful. At one point, my wife messaged to ask about the time of a dentist appointment, and when I glanced at my phone the Magic Cue was there. Next to other generic auto-replies (“I’ll get back to you”), there was a rainbow bubble with the details of the appointment, pulled from my calendar. But then I tapped on the notification to open the Messages app, and the prompt was gone.

If it all works as advertised, Magic Cue could be the ideal form of AI assistant. If I’m on hold trying to talk to my energy provider, and someone answers and asks for my customer number, I should be able to look at my phone and have it there, instead of scrambling to find the email I swiped away out of boredom 20 minutes ago. If someone texts to ask the address of the hotel, or the ETA of my Uber, my phone should be able to hand over that information directly, with my permission, given it already knows the answer. But right now, because Magic Cue is limited to certain Google apps and shows up so occasionally (at least for me), it doesn’t feel transformative.

Pro Res Zoom

Using the tiny cameras in a smartphone to zoom in on something in the distance is a big ask, but this feature (which as the name implies is exclusive to the Pro phones with their superior cameras) promises to do it with AI.

The telephoto lens in the Pixel 10 Pro is set at a 5x optical zoom, and you can zoom up to 30x to get a traditional crop, meaning the result will be a small portion of the overall image, sharpened and cleaned up a little. If you keep zooming past 30x and take a photo, the AI kicks in. You can take a photo all the way up to 100x, which, if this were a professional telephoto lens that cost tens of thousands of dollars, would mean you could photograph detail a kilometre away.

The obvious thing to note here is that photos you take this way are essentially invented by AI. If you’re judging them by how convincingly real they look, then they do a good job. Detail such as brick and masonry on buildings I photographed looked good, and faraway plants had leaves and flowers that made it seem as though I took the photo from much closer. But these results were never what the objects looked like – they’re just guesses. And on photos of the kinds of objects the AI would have seen a lot in its training, its guesses are believable.

On the other hand, fine details such as text and pictograms (think images on a sign) usually come out sharply defined but complete nonsense, and the system will actively refuse to apply its process to images of people, for obvious reasons. Overall, if you feed it a well-lit photo in the 40x to 50x range, the effect can be subtle and useful. But at maximum zoom, there’s a chance you’ll get a total fabrication. The phone saves a traditionally sharpened image as well as the AI take, so you can compare them.

Camera Coach and Gemini Live

To continue the theme of blending the real world with AI creations, these two features use Google’s machine smarts and the camera to help you out, with mixed results. If you’re taking a photo and you’re not sure how to frame it, you can tap the Camera Coach button at the top of the screen, which will analyse everything it can see in front of you to generate suggestions. Typically, it will ask what you want to focus on; for example, just the plant in the foreground, or both the plant and the window behind. Then it will give you step-by-step instructions that may involve changing camera settings (it will highlight them) or physically moving around (it will show on-screen guides).

Alongside the initial options there is usually an “inspire me” button, which kicks off a second round of AI processing and gives another set of options. These come with AI-generated mock-ups of what your photo could look like, which tend to be pretty good. In my office plant example, one option was to get really close for some leaf detail, while another was a wide-angle shot taking in the whole window. In both cases, the AI had to guess detail it couldn’t see, but it gave a good idea of what it would look like, and again offered steps to get there.

Meanwhile, if you hold the power button to summon Gemini, and then hit the Live button, you’ll enter a kind of phone call with the chatbot in which it can see what’s happening on your phone’s screen and you can talk about it. That in itself isn’t new, but if you choose to open up your camera feed and let Gemini see what you’re seeing, it can now draw simple graphics over the top to illustrate its point.

As with Magic Cue, it doesn’t do this all the time. I asked it how to reset my AirPods and it told me, describing where the pairing button was rather than pointing it out. I then looked at a TV remote and asked how to turn on subtitles, and a white circle appeared above one of the buttons. “Pressing this button should turn on subtitles,” Gemini said. It was absolutely wrong, though as confident as ever. To be fair, that is a very difficult question, given how many remote models there are.


Tim Biggs is a writer covering consumer technology, gadgets and video games.


Original URL: https://www.watoday.com.au/technology/does-google-ai-make-the-pixel-10-a-game-changing-smartphone-20250905-p5mspv.html