
Google DeepMind’s Demis Hassabis on his Nobel Prize: ‘It feels like a watershed moment for AI’

In the 15 years since it was founded, Google DeepMind has grown into one of the world’s foremost artificial intelligence research and development labs. In October, its chief executive and co-founder Sir Demis Hassabis was one of three joint recipients of this year’s Nobel Prize in chemistry for unlocking a 50-year-old problem: predicting the structure of every known protein using AI software known as AlphaFold.

DeepMind, which was acquired by Google in 2014, was founded with the mission of “solving” intelligence — designing artificial intelligence systems that could mimic and even supersede human cognitive capabilities. In recent years, the technology has become increasingly powerful and ubiquitous and is now embedded in industries ranging from healthcare and education to financial and government services.

Last year, the London-based lab merged with Google Brain, the tech giant’s own AI lab headquartered in California, to tackle stiff competition from its peers in the tech industry in the race to create powerful AI.

DeepMind’s new positioning at the centre of Google’s AI development was spurred by OpenAI’s ChatGPT, the Microsoft-backed group’s chatbot that gives plausible and nuanced text responses to questions. Despite its business underpinnings, Google DeepMind has remained focused on complex and fundamental problems in science and engineering, making it one of the most consequential projects in AI globally.

In the first interview of our new AI Exchange series, Hassabis — a child chess prodigy, designer of a cult video game and a trained neuroscientist — spoke to the FT’s Madhumita Murgia just 24 hours after being announced as a Nobel Prize winner. He talked extensively about the big puzzles he wants to crack next, the role of AI in scientific progress, his views on the path to artificial general intelligence — and what will happen when we get there.


Madhumita Murgia: Having reflected on your Nobel Prize for a day, how are you feeling?

Demis Hassabis: To be honest with you, yesterday was just a blur and my mind was completely frazzled, which rarely happens. It was an odd experience, almost like an out-of-body experience. And it still feels pretty surreal today. When I woke up this morning, I was like, is this real or not? It still feels like a dream, to be honest.

MM: Protein folding is effectively solved, thanks to your work on the AlphaFold models — an AI system that can predict the structure of all known proteins. What is your next grand challenge for AI to crack?

DH: There are several. Firstly, on the biology track — you can see where we’re going with that with AlphaFold 3 — the idea is to understand (biological) interactions, and eventually to model a whole pathway. And, then, I’d like to perhaps build a virtual cell at some point.

With Isomorphic (DeepMind’s drug development spin-off), we are trying to expand into drug discovery — designing chemical compounds, figuring out where they bind, predicting properties of those compounds, absorption, toxicity and so on. We have great partners (in) Eli Lilly and Novartis . . . working on projects with them, which are going very well. I want to solve some diseases, Madhu. I want us to help cure some diseases.

MM: Do you have any specific diseases you’re interested in tackling?

DH: We do. We are working on six actual drug programmes. I can’t say which areas but they’re the big areas of health. I hope we will have something in the clinic in the next couple of years — so, very fast. And, then, obviously, we will have to go through the whole clinical process, but at least the drug discovery part we will have shrunk massively.

Demis Hassabis at Google DeepMind’s head office in London © Jose Sarmento Matos/Bloomberg

MM: What about outside of biology? Are there areas you’re excited about working on?

DH: I’m very excited about our materials design work: we published a paper in Nature last year on a tool called GNoME (an AI tool that discovered 2.2mn new crystals). That’s AlphaFold 1-level material design. We need to get to AlphaFold 2-level, which we’re working on.

We are going to solve some important conjectures in maths with the help of AI. We got the Olympiad silver medal over the summer. It’s a really hard competition. In the next couple of years, we will solve one of the major conjectures.

And, then, on energy/climate, you saw our GraphCast weather modelling won a MacRobert Award, a huge honour on the engineering side. We’re investigating whether we can use some of these techniques to help with climate modelling, to do that more accurately, which will be important to help tackle climate change, as well as optimising power grids and so on.

MM: It seems like your focus is more on the application side — on work that translates into real-world impact, rather than purely fundamental research.

DH: That’s probably true to say. There aren’t many challenges like protein folding. I used to call it the Fermat’s last theorem of biology equivalent. There aren’t that many things that are as important and long-standing a challenge.

Obviously, I’m very focused on advancing artificial general intelligence (AGI) with agent-based systems. Probably, we’re going to want to talk about Project Astra and the future of digital assistants, universal digital assistants, which I’m personally working on as well, and which I consider to be on the path to AGI.

MM: What does the AI double Nobel Prize in chemistry and physics (this year’s prize for physics went to Geoffrey Hinton and John Hopfield for their work on neural networks, the foundational technology for modern AI systems) say about the technology’s role and impact in science?

DH: It’s interesting. Obviously, nobody knows what the committee was thinking. But it’s hard to escape the idea that perhaps it’s a statement the committee is making. It feels like a watershed moment for AI, a recognition that it can, is mature enough now, to help with scientific discovery.

AlphaFold is the best example of that. And Geoff and Hopfield’s prizes were for more fundamental, underlying algorithmic work . . . interesting they decided to put that together, almost as double, related awards.

For me, I hope we look back in 10 years and AlphaFold will have heralded a new golden era of scientific discovery in all these different domains. I hope that we will be adding to that body of work. I think we’re quite unique as one of the big labs in the world that actually doesn’t just talk about using it for science, but is doing it.

There are so many cool things happening in academia as well. I was talking to someone in astrophysics, actually a Nobel Prize winner, who’s using it to scan the skies for atmospheric signals and so on. It’s perfect. It’s being used at Cern. So perhaps the committee wanted to recognise that moment. I think it’s pretty cool they’ve done that.

MM: Where is your AlphaFold work going to take us next in terms of new discoveries? Have there been any interesting breakthroughs in other labs you’ve seen that you’re excited about?

DH: I was really impressed with the special issue of Science on the nuclear pore complex, one of the biggest proteins in the body, which opens and closes like a gateway to let nutrients in and out of the cell nucleus. Four studies found this structure at the same time. Three out of four papers found AlphaFold predictions (were) a key part of them being able to solve the overall structure. That was fundamental biology understanding. That was the thing that stuck out to me.

Enzyme design is really interesting. People like (US biochemist and Nobel laureate) Frances Arnold have looked at combining AI with directed (protein) evolution. There are a lot of interesting combinations of things. Lots of top labs have been using it for plants, to see if they could make them more resistant to climate change. Wheat has tens of thousands of proteins. No one had investigated that because it would be experimentally too expensive to do. It’s helped in all kinds of areas; it’s been wonderful to see.

MM: I have a conceptual question about scientific endeavour. We originally thought predicting something was the be-all and end-all, and spent all this time and effort predicting, say, the structure of a protein. But now we can do this really quickly with machine learning, without understanding the ‘why’. Does that mean we should be pushing ourselves to look for more, as scientists? Does that change how we study scientific concepts?

The Nobel Committee for chemistry announcing that US biochemist David Baker, together with Google DeepMind’s Hassabis and John Jumper, had won this year’s prize © JONATHAN NACKSTRAND/AFP via Getty Images

DH: That’s an interesting question. Prediction is partly understanding, in some sense. If you can predict, that can lead to understanding. Now, with these new (AI) systems, they’re new artefacts in the world; they don’t fit into the normal classification of objects. They have some intrinsic capability themselves, which makes them a novel class of new tool.

My view on that is, if the output is important enough, for example, a protein structure, then that, in itself, is valuable. If a biologist is working on leishmaniasis, it doesn’t matter where they got protein structures from as long as they’re correct for them to do their science work on top. Or, if you cure cancer, you’re not going to say: don’t give me that because we don’t understand it. It would be an incredible thing, without understanding it fully.

Science has a lot of abstraction. The whole of chemistry is like that, right? It’s built on physics, and then biology emerges out of it. But it can be understood in its own abstract layer, without necessarily understanding all of the physics below it. You can talk about atoms and chemicals and compounds without fully understanding everything about quantum mechanics — which we don’t fully understand yet. It’s an abstraction layer. It already exists in science.

And in biology, we can study life and still not know how life evolved or emerged. We can’t even define it properly. But these are massive fields: biology, chemistry, and physics. So it’s common in a way — AI is like an abstraction layer. The people building the programs and networks understand this at some physics level but, then, this emergent property comes out of it, in this case, predictions. But you can analyse the predictions on their own at a scientific level.

Having said all of that, I think understanding is very important, especially as we get closer to AGI. I think it will get a lot better than it is today. AI is an engineering science. That means you have to build the artefact first, and then you can study it. It’s different to a natural science, where the phenomenon is already there.

And just because it’s an artificial, engineered artefact doesn’t mean it will be any less complex than the natural phenomena we want to study. So you should expect it to be just as hard to understand and unpack and deconstruct an engineered artefact like a neural network. That’s happening now and we’re making some good progress. There is a whole field called mechanistic interpretability, which is all about using neuroscience tools and concepts to analyse these virtual brains. I really like this area and have encouraged this at DeepMind.

MM: I looked up a project you mentioned previously about a fruit fly connectome (brain map), made using neural networks. AI helped understand that natural phenomenon.

DH: Exactly. That’s a great example of how these things can be combined, and then we slowly understand more and more about the systems. So, yes, it’s a great question, and I’m very optimistic we will make a lot of progress in the next few years on the understanding of AI systems. And, then, of course, perhaps they can even explain themselves. Imagine combining an AlphaFold with a language capability system, and perhaps it could explain a little bit about what it’s doing.

MM: The competitive dynamics in the technology industry have intensified a lot in AI. How do you see that impacting and shaping progress in this field? Are you worried there will be fewer ideas and a focus on transformer-based large language models (LLMs)?

Google DeepMind’s offices in London © Dan Kitwood/Getty Images

DH: I think that actually a lot of the leading labs are getting narrower with what they’re exploring — scaling transformers. Clearly, they’re amazing and going to be a key component of final AGI systems. But we have always been big believers in exploration and innovative research. We have kept our capabilities of doing that — we have by far the broadest and deepest research bench in terms of inventing the next transformer, if that’s what’s required. That’s part of our scientific heritage, not only at DeepMind but also Google Brain. We are doubling down on that, as well as obviously matching everyone on engineering and scaling.

One has to do that — partly to see how far that could go, so you know what you need to explore. I’ve always believed in pushing existing ideas to the maximum as well as exploring new ideas. You don’t know what breakthrough you need until you know the absolute limits of the current ideas.

You saw that with long context windows (a measure of how much text can be processed by an LLM at once). It was a cool new innovation and nobody else has been able to replicate that. That’s just one thing — you’ll see a lot more breakthroughs coming into our mainstream work.

MM: You and others have said AGI is anywhere between five and 20 years away: what does the scientific approach to achieving that goal look like? What happens when we get there?

DH: The scientific approach would mean focusing a lot more time, energy and thought on understanding and analysis tools, benchmarking, and evaluations. There needs to be 10 times more of that, not only from companies but also AI safety institutes. I think from academia and civil society, (too).

I think we need to understand what the systems are doing, the limits of those systems, and, then, control and guardrail those systems. Understanding is a big part of the scientific method. I think that’s missing from pure engineering. Engineering is just seeing — does it work? And, if it doesn’t, you try again. It’s all trial and error.

Science is what you can understand before all that happens. And, ideally, that understanding means you make fewer mistakes. The reason that’s important for AGI and AI is that it’s such a powerful technology you want to make as few mis-steps as you can.

Of course, you’d like to be able to get it perfect, but it’s too new and fast-paced. But we can definitely do a better job than, perhaps, we’ve done with past technologies. I think we need to do that with AI. That’s what I’d advocate.

When we get nearer to AGI, perhaps a few years out, then a societal question comes, which also could be informed by the scientific method. What values do we want these systems to have? What goals do we want to set them?

So they’re kind of separate things. There’s the technical question of how do you keep the thing on track to the goal that you set? But that doesn’t help you decide what that goal should be, right? But you need both those things to be correct for a safe AGI system.

The second one, I think, may be harder, like, what goals, what values and so on — because that’s more of a UN or geopolitical question. I think we need a broad discussion on that, with governments, with civil society, and academia, all parts of society — and social science and philosophy, even, as well.

And I try to engage with all those sorts of people, but I’m a bit unusual in that sense. I’m trying to encourage more people to do that, or at least act as a role model, act as a conduit to bring those voices around the table.

I think we should start now because, even if AGI is 10 years away, and some people think it could be a lot sooner, that’s not a lot of time.
