Elizabeth Reid has over the past year led Google’s push to reinvent its core product: search. About a year ago her team launched the company’s biggest revamp in years with AI Overviews, in which generative artificial intelligence models summarise search results.
The feature began tentatively, with the AI summaries prompting ridicule when they advised users that eating rocks may be healthy and told others to glue cheese to pizza. Since then, Reid says, the company has worked to balance accuracy and usefulness, and is seeing people change the way they seek information online.
In this conversation with the Financial Times’ AI correspondent Melissa Heikkilä, Reid talks about the future of AI-powered search and how it’s changing the business model of the web.
Melissa Heikkilä: You graduated from Dartmouth College, which is where the definition of AI was first conceived in 1956. Tell me about your journey to AI. Has Dartmouth influenced you in any way?
Elizabeth Reid: Dartmouth definitely got me into computer science. I did very little of it in high school. I went to a small school in Massachusetts whose idea of computer classes at the time was typing and learning to use Microsoft Excel and Word. I did a little programming on my graphing calculator because they told me I couldn’t take this class unless I knew how to do that.
And I went to Dartmouth, thinking I was going to go into physics. I was good at maths. I did an internship in my freshman summer, and it was in material science and, in theory, was really interesting (but) I wanted something more applied. So, I thought I’d go into engineering physics.
I took (a computer science class) at the same time I was taking thermodynamics and physics. And I spent time doing extra credit for computer science, (rather than) focusing as much as I probably should have on my physics. I talked to Professor (Thomas) Cormen, a longtime Dartmouth professor, and he convinced me to switch into computer science.
Then I needed a job. It was 2003. Dartmouth had a computer science department, but it was not Stanford or MIT or Carnegie Mellon. (Cormen) had a previous student who was at Google, and he helped me. He contacted her, and she helped me get an interview. So, I landed at Google in the New York office. There were about 10 engineers there and perhaps 500 or 1,000 total employees (at the company headquarters) in Mountain View in California.
I started in search on a project that became local search. At some point, that moved to the geo-map space, and I worked on engineering problems there. We sometimes synonymise AI with generative AI but, really, AI isn’t just about generative AI. And so, over time, in both local search and some of the maps, we were using AI in (many) different areas.
I moved to (Google) Search a few years ago and was talking to the engineers about what they were doing (and) what was possible. The technology then had a tipping point, and we were suddenly able to do a lot more with it. It was pretty exciting.
MH: You’re working on one of the most concrete applications of AI. And it’s been just under a year since AI Overviews was launched. Could you tell me a bit about the past year and how it’s gone?
ER: It’s been a great launch. We see some of the strongest growth in (Google) Search and people issuing more queries.
It unlocks the problem of asking a question. It allows you to ask questions you couldn’t ask before because the information wasn’t on a single webpage. It was scattered across the web, and you’d have had to pull it together. Something we’ve seen over and over again with (Google) Search is that human curiosity is boundless. People have a lot of questions.
A three-year-old will go: “Why, why, why, why, why?” But, as an adult, you don’t assume the person you ask the question knows the answer. You don’t know if you have enough time. You don’t know if it’s worth the effort. And so you don’t ask those questions. But if you lower the (barrier) to asking the question, then people just come. They have a lot more questions and they ask anything these days.
MH: And how else are you seeing AI changing search?
ER: Besides seeing people ask more questions, they ask longer questions. And the way you can think about a longer query is: do you have to take the actual question you have and turn it into the strictest “keywordese”, or can you ask what’s on your mind? With AI Overviews, people start asking these longer queries that express more of the constraints, more of the angles that they see.
We see it resonate particularly with younger users. They are often the first to push expectations about what should be possible and to adapt to new technology. More and longer questions. They start asking more nuanced questions.
AI Overviews is the start of thinking about transforming search. How can you think about transforming the whole page, organising the information in a way that’s easier, even finding what the right web links are that you want to go and pursue? We see a lot of growth in multi-modality: people asking these text-plus-image questions. So, it’s not just, “What is this image?” or “Here’s my question”, but combining them.
MH: With ChatGPT, we’ve seen some evidence that people are changing the way they behave. Are you thinking about adapting to more chat-based search functions?
ER: We’re not looking in that direction in the same way: to the extent that someone will think of a chatbot as talking to something that feels personified and you can ask it, “How was your day?”, then expect a response.
We think of search as more of an information-focused query. We are starting to experiment more with the idea that people sometimes have a question that has multiple parts plus a follow-up. And if you have a follow-up question, you don’t want to start over from scratch.
But it’s more designed as: how can you further your journey without repeating it the same way you might to a human — rather than designing it in the sense of: do you have a friend to chat with and ask them their views? It’s much more about organising information.
MH: There’s been a lot of criticism about search being broken, people having to add “Reddit” as (a) search keyword or, when they search, they’re getting hallucinations, or incorrect or misleading results, as answers. Or the AI answers are telling them to eat rocks or glue. How are you working to fix that?
ER: I don’t think adding the word “Reddit” is a bad thing. Some people want more discussions. Others might want it from more mainstream or authoritative sources. So, the ability to express more of what you want can be a win. But what we have seen is that people, especially younger users, want to hear directly from others who’ve experienced something.
And so, it’s not just, “here’s a site that’s done some research”, but “did you go there yourself?” Did you use the product yourself, or did you read about it and write some summary on it? We’ve been doing a lot of work to figure out how we bring more human voices in.
It is the case with generative AI that the technology sometimes makes mistakes. We saw, with eating rocks, that it was an extremely small use case. Despite our extensive work and testing, it was not the kind of query we had seen previously.
People didn’t ask us, “How many rocks should I eat a day?” People use new technology in ways that you hadn’t imagined. We took it seriously. It didn’t matter that it was a small incident.
We put a lot of effort in our models into paying attention to factuality. That’s a way that we make a different choice on search, compared with a chatbot. You typically have to choose between how factual it is versus how creative or how conversational it is.
If you’re building a product that’s designed to be conversational, you might weigh it one way. But in the case of (Google) Search, we have weighted factuality and put extensive work into that. We have continued to raise the bar on that for the past several months.
MH: Language models do have this technical flaw where it’s easy for outsiders to inject unwanted prompts, and that then influences what the overviews say, or hallucinations. Are these models fit for purpose for something like search, which requires accuracy? And how do you think about these security weaknesses and how to fix them?
ER: There’s a difference between “can you hack the prompts” versus “are they going to make occasional mistakes”? Those are different things. From a security perspective, on the prompting, everyone is working to figure out how to avoid jailbreaking, or finding loopholes that make AI models bypass their guardrails. We’re doing that. The way search is designed, in terms of how it uses the web, it tends not to have that problem in the same way that a conventional chatbot might.
But in terms of, are they ready to be used, one of the things that we do rely on for search is using high-quality information from the web. It’s a different use, in that it’s not so much the model generating everything and using a little bit of web, but putting the web at the centre and designing around it. Our models are trained not only to try to be highly accurate, but to try to base their answers on information on the web.
That helps in two ways. One, it increases the accuracy and, two, we can then tell you where to look for further confirmation.
AI Overviews aren’t designed to be a standalone product. They are designed to get you started and then help you dive deeper. And so, when it’s important, the idea is that you get some context on where to check and then you can decide to double-check more on some of them.
There are a lot of questions people ask where, if you are only relying on webpages, it can be difficult. So, tech support is one of the AI Overviews areas that people rely on. The tech documents are not necessarily extensive online. Maybe there’s a forum that talks about your problem, but perhaps not. Or the forum talks about your problem, but you’ve tried those two or three things.
We don’t show AI Overviews on every query. In order to show AI Overviews, we have to believe the response is high quality (and) is it a net value over the rest of the search results? If we think the rest of the search results page provides the answer, then we don’t feel an obligation to answer.

MH: What kind of behaviour change are you seeing in people double-checking sources? Are people doing that, or how often do they rely on the AI Overviews?
ER: We do see people dive in, often to continue. That may be because they want to confirm data, but often it’s not simply because they want to confirm. They come in with an initial query and then they read something, and it sparks the next question. Or they really want to hear a more in-depth perspective now they have a sense of the topic and what parts they’re interested in, and they can zero in. We see them engage.
We see the clicks are of higher quality, because they’re not clicking on a webpage, realising it wasn’t what they want and immediately bailing. So, they spend more time on those sites. We see that a greater diversity of websites come up.
And that can be surprising. But if your query is long, finding a webpage that covers every part of your query is hard, and sometimes what you get is a very surface-level webpage. Technically it talks about every one of your words, but you didn’t get much substance. With generative AI, we can go and look for web pages that discuss specific subsets. So, we’ll take that query, and we’ll turn it into multiple queries.
And then we’ll say, a-ha, OK, you’re comparing two items that are not traditionally compared. Let me find a webpage about one item. Let me find a webpage about another. And then, you can expose websites that go into more depth on a part of a topic, instead of just a webpage that’s surface level about the whole topic.
MH: Some people have criticised language models in search, not for the “eat rocks” mistakes but for those subtle, inaccurate mistakes that people don’t pick up if they’re not experts in the field. How concerned are you about that?
ER: Besides trying to put a high bar on quality, we take extra effort on things we call “your money or your life”. So, questions of finance, questions on medical topics — we try to be thoughtful in our answers about each. Maybe we should not give a response at all or, where we think we can give you something to start with, we should recommend you talk to a doctor, dig in more and find out details.
And that’s an important thing to do, because in many of those cases, you’d prefer that they seek out a medical professional. But there are many people who don’t necessarily have access to a medical professional. So, if you said: I’m not going to answer anything, even some basics about a rash, and you’re a stressed mother and it’s the middle of the night, and you can’t reach someone in some part of the world, do you not help them?
We try to be clear that the technology is more experimental. (With) a lot of questions people ask, though, the stakes aren’t as high. If you’re trying to get tech support on figuring out how to fix your phone, hopefully we give you the right instructions, but if we don’t give you exactly the right instructions on how to turn something on, you usually figure that out and then you can do more searching. But often we can get you there faster.
MH: Going back to what you said about information and different publishers getting access, publishers have criticised AI search for dropping traffic and ad revenue. How are you avoiding this or taking this into account?
ER: We do believe, in (Google) Search, that people continuing to hear from other people is crucial and at the heart of our product. That’s important, not only for a healthy ecosystem, but for users. Lots of times you want a quick answer, but often you want to hear from other people.
I often use a fashion example: most people I know who want to delegate their choices to a bot for fashion are the set of people who weren’t trying to spend any time on fashion before.
The people who are following influencers and creators and others, they’re not ready to go there. They want to hear from the people they trust. So, we spend a lot of time thinking about, how do we elevate the right content? How do we present it? We run different experiments. We design it to not only show links, but think about where it could add additional links within the response. Not just at the end, but perhaps we can say, “according to the Financial Times” and put a link to the Financial Times.
What you see with something like AI Overviews, when you bring the friction down for users, is people search more and that opens up new opportunities for websites, for creators, for publishers to access. And they get higher-quality clicks.
MH: Is there a risk that you end up cannibalising your own product? Generative search is expensive, and this is changing the whole ad revenue model.
ER: There are a lot of opportunities for ads. We show them both above and below AI Overviews, but also inside. Ads are relevant whenever users are going to make a choice that has some commercial aspect.
When a query is predominantly commercial intent — like we think you want to buy something — then we’d often show ads. But sometimes we think you most likely don’t want to (see) ads, and so we don’t want to give everyone ads. But some people might want to buy something. If (you search) “how to clean a stain out of the couch” and the first thing we show is a bunch of ads, you’re like, “Whoa, I just wanted some advice.”
But if we’re giving you ideas and then we say, “if you’re having trouble you might want to consider a stain-remover product”, and then we give you some ads for stain-remover products, it feels natural and in context. And so, there are new opportunities.
MH: Are we going to see a paid version of (Google) Search? And what would that include?
ER: Never say never about what the future will hold. Ensuring that search generally, the essence of it, is available for free, to allow access to information, will be important. There may be some aspects for people who have subscriptions in the future. But the core of search we want to have available for everybody for free, yes.
MH: What does the future of search look like? Are you thinking about other modalities or agents?
ER: One thing that’s really at the heart of it is this idea that we want to make search effortless. That assumes multimodalities, because humans are wired not only to type or text or use voice. They see things. They use different ways of expressing what they want.
It will get more personalised over time, not only in the results, but in how you learn well. Are you someone who learns well with videos or are you somebody who prefers text?
So, that ability for the technology to meet you where you are — can we make it as easy as possible for you to learn and explore the world?
This question is about how you make use of tools. People use the word “agents” to mean different things. But the sense of “you can use tools to ask hard questions” will continue. (Google) Search will remain an information product at heart, but sometimes information is hard and there’s a lot of work.
MH: Have your search habits changed in this AI era?
ER: I personally ask more questions. So, one example: I work with people who are into cricket. They would say something, and it would make no sense. But I didn’t have enough time to go and do an hour-long tutorial on cricket.
I’d start asking the question and eventually get the answer. So, for instance, there’s this thing in cricket where, if there’s rain that cuts the game short, the scoring uses an algorithm to decide how many runs you might have been able to score based on where they are.
I ask questions about a book my son is reading and is talking about. I haven’t read the book, so I’ll ask a question about it. I’d love to be able to read all the books at the rate he does. I don’t have the time to do that. So, instead of thinking about the question and letting it go, I find myself asking the question and learning about new things.