Google’s Gemini 3: What’s new and what can it do?

By Tim Biggs

Google has announced a new version of its main generative AI model with Gemini 3, taking the unusual step of rolling out a version of the new tech across all its products immediately in a sign that the web giant is growing more confident about its AI chops.

Google has been rushing to inject Gemini into its web search offering, its legacy smart speakers and its smartphones. And with Gemini 3, it says developers will benefit from more powerful “vibe coding” (where the AI generates apps and features in code after the developer describes them) and the ability to create more complex AI agents.
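For developers, that kind of interaction runs through Google’s Gemini API. As a rough sketch only, assuming Google’s google-genai Python SDK and a placeholder model name (Google’s actual Gemini 3 identifiers may differ), a vibe-coding request looks something like this:

```python
# Illustrative sketch only: assumes the google-genai Python SDK
# (pip install google-genai). The model id below is a placeholder,
# not a confirmed Gemini 3 identifier.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder model name
    contents=(
        "Build a single-page web app that tracks daily water intake. "
        "Return complete HTML, CSS and JavaScript in one file."
    ),
)

print(response.text)  # the generated app code, ready to save and run
```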

Google’s Gemini 3 is the latest family of its in-house large language models.

But with Gemini 3 rolling out across all of Google, it’s likely to impact regular non-technical users as well, whether they encounter the AI in their documents and spreadsheets, or they’re asking Gemini to parse videos and make recommendations.

The big catch is that a lot of the advanced features are accessible only to those paying $33 per month for a Google AI Pro subscription.

Why is Gemini 3 a big deal?

Training a model of this size can take billions of dollars, but tech giants such as Google and Microsoft are locked in an arms race with each other and upstarts, including OpenAI and Perplexity. For Google, generative AI presents a threat to its position as gatekeeper and king of all web knowledge. For others, it’s an opportunity to take the crown.

The company is promoting Gemini 3 Pro – the first model in the 3 family – as being a smarter tool to tap into the entire internet’s knowledge for answering questions, writing code and analysing or creating media. It says users of the Gemini app can choose to use Gemini 3 Pro to solve problems with advanced reasoning, or even create interactive apps or presentations on the fly.

But the announcement is also about Google flexing its power. Other companies have launched AI browsers that promise to augment your web surfing with generative capabilities. Google wants to show it can already do all that inside the browser and search system you already use, since it owns the world’s biggest search index.


Looking ahead to a time when we won’t necessarily go through websites manually, it also wants to show consumers how AI-generated content will still look nice and be pleasant to read, and get developers started on crafting AI agent applications.

Australian engineer James Groeneveld, chief intelligence officer at Character.AI, said he expected big things from Gemini, given the company’s DNA in the AI space goes back decades. Google’s VP of engineering and Gemini co-lead Noam Shazeer was previously chief executive at Character.

“Having worked with Noam, he is one of, if not the greatest of all time in this space,” Groeneveld said.

“I’m looking forward to not just the outcomes of one year of Noam’s work back at Google, but where they are in five years. I think it will be very, very exciting.”

Is Gemini better than ChatGPT?

Another reason Google is motivated to make a big deal out of its new model is that OpenAI’s most recent releases have been received quite poorly. Dan Petrovic, an academic and consultant on SEO and generative AI, said Google’s size, expertise and enormous trove of search data gave it a massive advantage, but that Gemini 3 Pro would probably be a more expensive model to run.

“When GPT 5 came out, it felt like it was suddenly a dumber model. If you switch off search grounding, it’s hardly capable. Instant answers are really silly. Compare that against Gemini – if you switch off search and there’s no external source of knowledge, it’s still very knowledgeable,” he said.

“From what we’re seeing in the benchmarks, [Gemini 3 Pro] is not just a little bit stronger than GPT 5.1, it’s a lot stronger. But Google would have used the high thinking level to produce that output, and that costs time and compute.”

Though Google says it’s rolling out Gemini 3 across its products, Petrovic speculated it would use a mix of models or employ a faster version for consumer-facing products, because the full Gemini 3 Pro model would burn money on every web search.
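Petrovic’s point about thinking levels corresponds to a real dial in Google’s developer API: callers can cap how much internal reasoning a request is allowed to spend. A hedged sketch, assuming the google-genai SDK’s thinking configuration (documented for Gemini 2.5-era models; whether Gemini 3 exposes the same parameter is an assumption here):

```python
# Sketch of capping "thinking" to trade reasoning depth for cost.
# Assumes google-genai's ThinkingConfig (a Gemini 2.5-era control);
# the model id is a placeholder.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder model name
    contents="Plan a week of workouts for a beginner runner.",
    config=types.GenerateContentConfig(
        # A small budget means less deliberation: faster and cheaper,
        # but weaker on hard reasoning. Bigger budgets cost more compute.
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
    ),
)

print(response.text)
```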

Google’s announcement focuses heavily on empowering developers and creators, not just with vibe-coding tools but with methods for creating AI agent applications. And although Gemini 3 has essentially the same knowledge base as 2.5 (ending at January 2025), it’s been retrained and has access to Google Search on tap, which, combined with its visual capabilities, may make it superior to GPT.
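The “Search on tap” part is exposed to developers as grounding: the model can run Google searches mid-request and fold the results into its answer. A minimal sketch, again assuming the google-genai SDK and a placeholder model name:

```python
# Sketch of grounding a request with Google Search.
# Assumes google-genai's GoogleSearch tool; model id is a placeholder.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder model name
    contents="What mirrorless cameras were announced this month?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)

print(response.text)  # answer grounded in live search results
```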


However, given OpenAI has focused more heavily on consumer-level features such as chat personalities, it remains to be seen how everyday users will feel about the difference.

Google search with AI Mode

To combat the threat of others using AI to circumvent the need for Search, Google’s AI Mode is a tool that takes your question and performs multiple searches quickly, synthesising the answers into conversational text. Google said it has already rolled out Gemini 3 to AI Mode, but there’s no real way to confirm it. Using my Pro account I asked AI Mode if it was using 2.5 or 3, and even it didn’t know.
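Google hasn’t published how AI Mode works internally, but the behaviour it describes (fan a question out into several searches, then synthesise one answer) is easy to caricature. The following is purely illustrative, with hypothetical stand-in functions, and is not Google’s implementation:

```python
# Purely illustrative caricature of AI Mode's "fan-out" behaviour.
# search() and synthesise() are hypothetical stubs, not Google's code.

def search(query: str) -> list[str]:
    """Stand-in for a web search; would return snippets of top results."""
    return [f"(snippet for: {query})"]

def synthesise(question: str, snippets: list[str]) -> str:
    """Stand-in for the model pass that merges snippets into one answer."""
    return f"Answer to {question!r}, drawing on {len(snippets)} snippets."

def ai_mode(question: str) -> str:
    # Fan the question out into several narrower searches...
    sub_queries = [question, f"{question} advice", f"{question} reviews"]
    snippets = [s for q in sub_queries for s in search(q)]
    # ...then synthesise a single conversational answer from the results.
    return synthesise(question, snippets)

print(ai_mode("stop cats knocking over an automatic feeder"))
```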

Regardless, I’ve been avoiding using AI Mode in place of traditional search because I want to keep rewarding providers of good information with my eyeballs. But I dug through my search history from the past few days to find some questions I’d asked, and fed them to AI Mode to see how Gemini handled them.

Google was initially reluctant to integrate AI with search, but now it has both AI Overviews and AI Mode.

In a question about stopping my cats knocking over their automatic feeder to gorge on kibble overnight, the regular search took me to Reddit, where I found some good advice. AI Mode considered multiple Facebook, Quora and Reddit posts to provide three answers (secure the feeder, reinforce the lid, address the behaviour) with suggestions for each.

In another query, I wanted to know more about a character in a comic book I’d been reading to my kids. Regular search just took me to others asking the same questions, but AI Mode found a discussion with the comic’s writer, which was exactly what I needed.

The system does less well if you need very current information. It flubbed a request about what was showing at the movies, essentially listing cinemas near where it thought I lived (it was wrong) and telling me to check with them.

Overall, the system provided the benefit of multiple searches in a fraction of the time. The biggest issue is that, if everyone starts using AI Mode, eventually there won’t be any new Reddit threads or Quora answers to feed off, only AI-generated garbage created to mop up the last monetisable clicks.

“The open web is at risk, and content production is at risk, and people are not motivated to produce more content. At the same time, there’s more AI slop being entered into the indexes,” Petrovic said.

“When the radio plays a song, it pays a royalty fee to the artist. Google and OpenAI have to start doing the same thing.”

What can the Gemini app currently do?

For consumers, the Gemini app or web interface is where you go if you want the AI to chew on any particular problem. There’s a “fast” setting, using a simpler model, and a “thinking” setting using Gemini 3 Pro. Free users can only use “thinking” a couple of times per day. I set Gemini 3 three tasks to test it out.

First, I asked it to create a slideshow presentation comparing six specific mirrorless cameras. They’re all a few years old and roughly comparable. Gemini took around 90 seconds to create the slides, and despite a bizarre cover photo choice with some green German text, it appeared to be to my specifications. At first glance.


Each of the six cameras got its own slide with a price range, pros and cons, and a photo. But the AI seemed to confuse the Olympus OM-D E-M5 Mark III (which I’d asked for) with the Olympus Tough TG-5, a totally different kind of camera. It gave a made-up price and details.

It also chose to include the Fujifilm X-T30 II instead of the older model I asked for. Where it gave vague opinions (“autofocus tracking can be clumsy”), I had no way to find the reviews it had sourced them from to learn more. Some charts and comparisons in later slides had good insights, showing this could be a great way to learn. But the system continued to make up facts about the E-M5 / TG-5 hybrid it had imagined.

Next, I asked Gemini to plan a trip to Osaka. I said where I’d be staying and for how long, while also giving a preference for vegetarian food and avoiding tourist traps. Again, it did an excellent job at first glance, coming to some of the same conclusions I would from Googling while also pulling out some fun suggestions I’d never heard of. But, since I’m more familiar with the cameras than I am with Osaka, it’s more difficult to pick out potential mistakes. I would absolutely double-check all travel times.


The AI picked out four themes and peppered them throughout the four-day plan, but it also focused on a specific theme for each day. It included offbeat attractions slightly outside the city, such as a hike to Minoo Falls with a detour to an old insect museum, as well as less-popular alternatives to obvious choices, such as seeing Kishiwada Castle instead of Osaka Castle. It also found interesting vegan or vegetarian restaurant options for lunch and dinner every day, near the attractions it suggested. Some extreme detail (“the owner speaks English”) again made me curious where the facts came from, though.

Finally, I asked for a garden design, and here’s where it had the hardest time. I uploaded a drawing of the space, specified preferred types of plants, and requested a visual representation, but it fell back on classic chatty, verbose AI writing. It did select four plants and give justifications, along with a step-by-step plan, but its visual representation was just a plain-text drawing made of dashes and letters. The drawing was inconsistent with its planting advice, too.

I asked if it could show me the plants visually and it sent some images from Getty and Shutterstock. I asked if it could generate an image of what the finished garden would look like, and it said enthusiastically that it could, but then it displayed a big black nothing.


