Google Chief Scientist: It’s Important that People Understand the Progress in AI
A quadrillion-dollar opportunity, breakthroughs every day, why Google was late to the chatbot game, and more from two of the company’s leading AI figures
Jeff Dean, chief scientist at Google and co-lead of Gemini, doesn’t like hyping things before they are done.
But he is excited about what progress in AI could mean for areas like education and healthcare, and in a new episode of the Dwarkesh Podcast, he shares an insight about what could be ahead.
The field's recent spotlight has been on reasoning models that work by breaking problems down into steps, but the approach isn't very reliable yet and only works at a limited scale, Dean notes.
“If you could go from 80% of the time a perfect answer to something that's ten steps long, to something that 90% of the time gives you a perfect answer to something that's 100–1,000 steps long, that would be an amazing improvement in the capability of these models. We're not there yet, but I think that's what we're aspirationally trying to get to,” Jeff Dean says.
“That's a major, major step up in what the models are capable of. So I think it's important for people to understand what is happening in the progress in the field.”
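To get a feel for why that jump is so large, here is a back-of-the-envelope calculation (my illustration, not from the podcast): if each step of a task succeeds independently with probability p, a perfect n-step answer happens with probability p^n, so the implied per-step reliability is the n-th root of the end-to-end rate.

```python
def per_step_reliability(end_to_end: float, steps: int) -> float:
    """Per-step success rate implied by an end-to-end success rate,
    assuming each step succeeds independently."""
    return end_to_end ** (1 / steps)

# 80% success over 10 steps vs. 90% success over 1,000 steps
today = per_step_reliability(0.80, 10)      # ~0.978 per step
target = per_step_reliability(0.90, 1000)   # ~0.9999 per step

print(f"80% over 10 steps    -> per-step {today:.4f}")
print(f"90% over 1,000 steps -> per-step {target:.5f}")
```

Under this toy model, Dean's aspiration amounts to cutting the per-step error rate from roughly 2% to roughly 0.01%, which is why he calls it a major step up.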
He acknowledges that there is also a flip side to these enhanced capabilities that Google needs to be aware of.
“We also realize that they could be used for misinformation, for automated hacking of computer systems, and we want to put as many safeguards and mitigations in place as we can and understand the capabilities of the models.”
A quadrillion dollars is the new cool
In the podcast, Dean appears alongside his Gemini co-lead Noam Shazeer, known as one of the eight inventors of the groundbreaking Transformer architecture, which played a pivotal role in sparking the modern AI era.
“Organizing information is clearly a trillion-dollar opportunity,” he says, referring to Google’s mission ‘to organize the world’s information and make it universally accessible and useful’, “but a trillion dollars is not cool anymore. What's cool is a quadrillion dollars.”
The idea is not just to pile up money, but to create value, Shazeer emphasizes.
“And so much more value can be created when these systems can actually go and do something for you, write your code, or figure out problems that you wouldn't have been able to figure out yourself.”
A dream direction
Jeff Dean paints a vision of what organizing the world's information could look like now, including making every piece of content usable by anyone, regardless of the language they speak. He thinks Google has done some work in that direction.
“But it's not nearly the full vision of, ‘No matter what language you speak, out of thousands of languages, we can make any piece of content available to you and make it usable by you,’” he says. “We're not quite there yet, but that's definitely things I see on the horizon that should be possible.”
For Dean, the progress in AI also opens up interesting prospects for automating research, for example by having models come up with code for experiments that a human could then evaluate and decide whether to run.
“That seems like a dream direction to go in. It seems plausible in the next year or two that you might make a lot of progress on that.”
Dwarkesh Patel notes that this seems under-hyped, since it could be like having millions of extra employees. Dean replies that he does find it super exciting, but (here's the reference from the beginning of the post) "I just don't like to hype things that aren't done yet."
Breakthroughs every day?
When attempting novel research, there is maybe a 2% chance of succeeding, Shazeer says, but AI could significantly speed up the process of exploring different directions.
“If you try 100 things or 1,000 or a million, then you might hit on something amazing,” he says, noting that availability of compute is not the problem, as modern labs might have a million times more than it took to train the Transformer.
Could that mean breakthroughs every day? Dwarkesh Patel asks.
“Maybe. Sounds potentially good.”
Talking about the risks of AI, Shazeer is optimistic that large language models (LLMs) themselves could be key in mitigating the liabilities.
“I believe that the ability of language models to analyze language model output and figure out what is problematic or dangerous will actually be the solution to a lot of these control issues,” he says, adding that there are currently people working on this at Google.
Not a fixed pie
Following a tradition of open research, Google published the Transformer paper in 2017, which has since helped competitors pocket billions of dollars in investments and revenue. Should Google have kept it in-house? Dwarkesh Patel asks.
“It's not a fixed pie,” Noam Shazeer notes.
“I think we're going to see orders of magnitude of improvements in GDP, health, wealth, and anything else you can think of. So I think it's definitely been nice that Transformer has got around.”
Why Google was late to the chatbot game
After what could be called a delayed entrance to the chatbot era, Gemini is now topping the leading benchmark. How did OpenAI, backed by Microsoft, one of Google's top rivals, get a head start?
Dean explains that, from Google's point of view, the hallucinations from LLMs initially seemed unacceptable in a search context, where you want the right answer 100% of the time.
“I think what we didn’t quite appreciate was how useful they could be for things you wouldn't ask a search engine. Like, help me write a note to my veterinarian, or can you take this text and give me a quick summary of it? I think that's the kind of thing we've seen people really flock to in terms of using chatbots as amazing new capabilities rather than as a pure search engine.”
For more insights, stories, and views from the notable Google duo, you can find the full episode here: