
Rene Haas: ‘Arm has probably the most ubiquitous computer architecture on the planet’

Rene Haas is chief executive of Arm, the chip designer behind the processors in 99 per cent of all smartphones. After being bought by SoftBank in 2016, the UK-headquartered company became last year’s biggest initial public offering, in a deal valuing it at $54.5bn on Nasdaq. Since then, Arm’s market capitalisation has nearly tripled to around $140bn, as it has been caught in the updraft of investor excitement about artificial intelligence. 

Based in Silicon Valley, Haas has worked in the industry for nearly 40 years, including seven years at AI chipmaker Nvidia before joining Arm in 2013. Since becoming chief executive in 2022, he has pushed Arm to diversify further from its mobile phone roots into PCs, automotive and industrial components and, increasingly, servers — all underpinned by the same promise of power efficiency that has kept its technology at the centre of the iPhone. 

Arm does not manufacture its own processors — though a recent report suggested that may soon change — instead licensing a growing array of designs to the biggest names in the tech industry, including Microsoft, Nvidia, Apple, Google, Samsung, Amazon and Taiwan Semiconductor Manufacturing Company. 

After Apple switched its Mac processors from Intel to its own Arm-based versions in 2020, Microsoft this year unveiled a series of Arm-powered Windows PCs, hailing a new era of the “AI PC”. 

In this conversation with FT global technology correspondent Tim Bradshaw, Haas discusses the growing importance of software to chipmakers and how AI is changing the devices we use.


Tim Bradshaw: Microsoft has been making a big push with Arm-based Windows PCs in the past few weeks but this isn’t the first time Microsoft has tried to make that switch. What’s different now compared with the failed efforts of the past, such as 2012’s Windows RT? 

Rene Haas: I worked on the very first Windows on Arm PCs back in 2012. And a lot has changed since that time. One of the biggest differences now is that virtually the whole application ecosystem is native to Arm — meaning that, not only is the performance going to be incredible, but try to find an application that’s not going to run. If you go back 12 years to when Windows on Arm kicked off, it was a completely different world in terms of local apps versus cloud, and Windows on Arm didn’t support a number of popular applications (such as Apple’s iTunes and Google’s Chrome web browser). That was a killer blow.

Tech Exchange

The FT’s top reporters and commentators hold conversations with the world’s most thought-provoking technology leaders, innovators and academics, to discuss the future of the digital world and the role of Big Tech companies in shaping it. The dialogues are in-depth and detailed, focusing on the way technology groups, consumers and authorities will interact to solve global problems and provide new services. Read all of them here

Fast forward to 2024, there’s no issue with application ecosystems. And what’s been proven on the Windows on Arm platforms, as an extension of the other ecosystem, MacOS, is that the experience is phenomenal, when we look at the battery life and the performance that the Macs have . . . It’s going to be a very different game this time. 

TB: And now with the additional sales pitch of ‘AI everywhere’. Where do you think we are in finding the right applications for these new AI tools? 

RH: Talking about AI PCs, I think it’s very early. I think you have Copilot (Microsoft’s brand for products enhanced by AI assistants, recently extended to its new AI PCs) that has now been introduced. So the feature set that has been talked about, I think it’s going to start to take advantage of the underlying hardware.

We know there are going to be other (Windows AI PC) systems coming out in the coming years. So, while the first-generation systems are going to be interesting, the second-generation systems are going to be even more (so). And folks who bought the first ones are probably going to be a little bit resentful when they see what the second ones look like. 

TB: Buying version one of any new product is just part of the risk/reward of being an early adopter. Are you an early adopter? What tech are you playing with right now?  

It’s early for AI PCs: Microsoft’s Copilot+ on display at the company’s campus in Redmond © Chona Kasinger/Bloomberg

RH: Whether it’s game consoles, whether it’s phones . . . I’m very much an early adopter. I probably have every mobile phone in existence. I’m a big foldable phone guy. I think they’re great. Because they’re small enough when folded to act like a mobile phone. But when you expand it out, you can look at spreadsheets, you can watch videos. It’s like a mini tablet.

TB: It seems like we’re in another moment where people are experimenting with different form factors for consumer electronics, with folding phones and AI glasses. Have you tried any of these new AI wearables?

RH: I have tried some of them. I do like the Meta Ray-Ban augmented reality glasses. They’re stylish. The video quality is good. They are good sunglasses and they don’t feel bulky or weird. Me, personally, I don’t like something heavy on my head. So that’s why I like the Ray-Bans, and they have Arm inside, which is also what I like. 

TB: Do you see that becoming a big product category? Because we’ve been here before with Google Glass, which — to say the least — was not successful.

RH: I think augmented reality is still emerging in terms of the capabilities of that field. I think there’s a huge opportunity with holograms, with display technology. That is an area that is probably still in its early days in terms of being figured out. I think it’s a generational thing: a generation has to grow up being comfortable with wearing things for an extended period of time. (So) it’s more of a niche item right now. 

TB: All of these products, whether AI PCs or smart glasses, are part of a broader trend of moving from AI services that run in the cloud — like the ChatGPT app, which needs an internet connection to work — to systems that run at the “edge” (industry jargon essentially meaning people’s or companies’ own devices, like phones or factory equipment). There’s far more competition here than in AI chips, where Nvidia totally dominates right now. Do you see the edge becoming a bigger opportunity for chipmakers than the cloud? 

RH: We are still in very early days in terms of AI workloads running everywhere. So to your point of, ‘what’s an edge device?’, perhaps the user would describe that as ‘not the cloud’. So what has to happen is the (AI) models . . . need to evolve. I think the models need to get a little bit smaller, a little bit more efficient to run in these other areas.

Where is Arm going to play? They’re all going to run through Arm because, first off, you have to have a CPU (central processing unit), which is table stakes for any of these end devices, and the installed base is all Arm anyway. So the software ecosystem is going to look to optimise around Arm. 

We’re showing some information at Computex (the trade event in Taiwan this week) around compute libraries that will essentially make it very, very easy to run these AI workloads on Arm CPUs. Developers, in the past, didn’t have access to the Arm CPU when they wanted to run an AI application. Arm will now be making these libraries available to developers. So they can write the application and it takes advantage of the hardware. It could run three times faster, four times faster, at the same power.

Rene Haas in New York on the day Arm’s shares began trading on Nasdaq in September last year © Michael M. Santiago/Getty Images

TB: These libraries are part of the broader package of Arm products that you describe as the ‘compute subsystem’. This is a core part of Arm’s strategy now, to go beyond designing one single chip for customers to build on. Can you explain more about that — and why you’re doing it?

RH: What really makes Arm unique is that we have probably the most ubiquitous computer architecture on the planet. Our CPUs are in 30bn devices per year, almost 300bn in total. What we’re finding is that the chips are getting harder and harder to build and it takes longer to build them . . . as you get to smaller transistors.

So how can Arm help? Let’s say, in a server, you have 128 Arm CPUs. And with those 128 Arm CPUs, you have all of the (networking systems) that connect them together. You have a memory mapping system, you have a mesh network . . . Previously, the end customer would have to put all that stuff together and then build their chip. With compute subsystems, we put all that together for them.

We are in mobile phones, we’re in PCs, we’re in automotive applications, we’re in complex AI training, and we’re in general-purpose server(s). All of these are Arm CPUs (and) areas where we’re going to do compute subsystems. So, over time, it’s going to be a very, very large part of our business. 

TB: One of your big new customer wins on the data centre side recently was Microsoft, which is doing a new Arm-based CPU for its cloud called Cobalt. You’ve now got Amazon, Google, Microsoft — the three biggest cloud computing providers — all running Arm CPUs as part of their cloud platforms. When did that work start from your side, to see that come to fruition? 

30bn
Number of devices built each year with an Arm central processing unit

RH: We have been working on this for over 10 years. It’s been a tremendous amount of work (in which) two things had to come together. The CPUs needed to be performant enough against the competition. They needed to be very efficient. They needed to be very high speed. And we needed to have all of the components around them. And then . . . the software ecosystem needed to have everything required so that you could just run the servers. So Linux distributions, like Red Hat and SuSE. We were working in parallel to have all of the pieces of the software together.

When you combine the software being ready with world-class products and power efficiency, you now have a compelling advantage in terms of the chip. Now, what makes it even more compelling is, by building a custom chip, you can essentially build a custom blade, a custom rack and a custom server that is very unique to what Microsoft is running with Azure or what Google is running in Google Cloud or AWS.

TB: Power efficiency is a big part of Arm’s pitch over traditional server chipmakers like Intel and AMD. Microsoft said recently that it’s investing so fast in AI data centres that it looks like it might miss some of its climate targets. That must be a problem all of the Big Tech companies are facing right now?

RH: Oh, yes, it’s massive. Two things are going to accelerate Arm’s adoption in the cloud. One is just, broadly, this power efficiency issue. And secondly, the fact that, on AI, we can greatly reduce power through this customisation. Just look at Nvidia. Nvidia built a chip called Grace Hopper and then they built a chip called Grace Blackwell. They are essentially replacing the Intel or AMD CPU with an Arm CPU, which is called Grace.

TB: One Big Tech company that hasn’t announced an Arm-based chip in its data centres yet is Meta, Facebook’s owner. Its new chip for AI inference (the sort needed to deliver AI services rather than create them), called MTIA, is using an open-source alternative to Arm’s architecture called RISC-V . . . Are they using Arm in other ways or have they decided to go down a different path? 

RH: This MTIA chip is an accelerator. And that accelerator has to connect to a CPU. So it can connect to an Arm CPU, or it can connect to an Intel CPU. RISC-V is not interesting from a CPU standpoint, because it’s not running any key software . . . I’ll leave it to Meta to say whether they’re going to connect to Intel or Arm.

TB: The analysts I speak to see big potential growth for RISC-V in areas like automotive, where Arm is also hoping to grow. Do you worry that RISC-V is starting to nibble at the edges? 

RH: Where I don’t see it nibbling anywhere is in running key software applications. I think there’s a common misunderstanding between the RISC-V architecture as it applies to being a chip and when it’s really running (key) software. Because it’s all about the software. 

And, again, back to what makes Arm very unique: every mass popular application you can think of has been ported to and optimised for Arm. It takes a long, long time not only to get the software written, but ported and optimised. There’s no software anywhere for RISC-V in these places. None.

TB: So, if not competition from RISC-V, what does keep you up at night?  

RH: The things that I worry about are the things that are inside my control. We have a massive opportunity with all of these compute subsystems. We have a massive opportunity with growth in AI. We have a massive opportunity to reduce power to go solve this issue relative to data centres. It’s just making sure that we can execute on the strategies we have, because we’re at a magical time in our industry relative to the growth potential. 

TB: How much does being a public company keep you awake at night?

RH: Generally speaking, it doesn’t change how I think about running the company because I don’t really think about the company from quarter to quarter. I think about the company from year to year. Most of the discussions that I have with our internal teams or engineers are about 2027, 2028.

TB: Unfortunately, Wall Street does tend to look at things quarter by quarter. You’ve had a lot of stock-price volatility around your quarterly earnings reports. That’s not unusual for a newly listed company but do you think investors really understand the Arm business? 

RH: What I’d say about the volatility is we’ve had three quarters of being a public company and each quarter was bigger than the last one. And each quarter that we talked about going forward was larger . . . we basically indicated that we see 20 per cent growth year on year and we see that continuing for the next few years.

We achieved $3bn in revenue over this past year. It took us 20 years to get to $1bn. It took us, I think, another 10 to get to $2bn. It took us two years to get to $3bn. And we’re looking to get to $4bn in a single year. So the trajectory is in the right place. 

We have incredible visibility in terms of our business, (not only because) we get reports from our customers, but because our market share is so high.

TB: Some investors worry about visibility in two parts of your business specifically. One of them is Apple, one of your biggest customers but which is famously not very open with its partners. The other is Arm China. You warned in your IPO prospectus of past problems obtaining “accurate information” from Arm China. What insight do you actually have? 

RH: We have great insight with Apple. They’re an exceptional partner for us. They have signed a long-term (contract) extension. They’re very committed to Arm.

Arm China, that’s our joint venture in China. They are essentially a distributor for us. So we have very good visibility in terms of how we work with partners there. With China, the issues that we’ve faced in terms of export controls are no different from other (chip) companies’. But, in general, I’d say, with Arm China, things are going quite well. 

Apple’s latest iPad Air uses the Arm-based M2 chip © Bloomberg

TB: How has being a public company changed your relationship with SoftBank and its chief executive, Masayoshi Son? They’re still a 90 per cent shareholder but you’re more out on your own now. How does that dynamic change? 

RH: I think it’s changed in the sense that, as a public company, we now have a board that has independent directors who represent shareholders. So all of the things that we have to do from a governance standpoint, that’s a little bit different. I’d say we’re actually more independent in terms of how we think about the company, how we talk about the company. But SoftBank is our largest shareholder, so obviously they have a big say in terms of things at the boardroom table. 

With Masa, I’d say the relationship is no different. We talk all the time. He’s a brilliant guy. I think he gets a little bit of a bad rap in the press. He’s a guy who started the company 40 years ago and is still running it. There’s a pretty small group of people who have done that kind of thing, and the company is still broadly successful. 

TB: How does Arm fit in with SoftBank’s broader strategic goals around AI? 

RH: Clearly, Masa is very bullish on all things AI and — given that it’s pretty hard to talk about an AI application that doesn’t bump into Arm — we’re at the centre of a lot of those things. For example, SoftBank made an investment into a UK company called Wayve, which is doing some amazing work in LLMs (large language models, the complex AI systems that sit behind applications such as ChatGPT) for full self-driving cars. It’s running on Arm. So there is an area where, if Wayve does well, Arm does well.

TB: Does that mean you’re going to move into making your own AI chips, as Nikkei reported recently?

RH: I can’t offer you anything on that one. I can’t comment.

TB: Silicon Valley in general, and the chip industry in particular, is full of ‘frenemies’. Nvidia’s biggest customers are all making their own AI chips, for instance. Where do you think you can, and can’t, compete with your customers? 

RH: I tend to think more about where we can add value and where the industry is going. Back to compute subsystems. When we kicked the idea off, this was a bit controversial because, by doing a full subsystem implementation, some customers might say, ‘Hey, that’s the kind of work I do. Arm, I don’t need to have your finished solution.’ Fast forward, we solve a lot of problems in terms of engineering overhead. We solve a lot of problems relative to time to market. We solve a lot of problems relative to broadening out Arm’s platforms.

So that’s an example of something that might be a frenemy kind of thing, where people might look at it and say, ‘That’s my domain’. But I’d say it’s worked out much better than we thought. Even the early customers who pushed back at it are adopting it. 

TB: Another example of a frenemy for Arm is Intel. At the same time as competing for a lot of Intel’s PC and server business, you’re actually getting closer to them on the foundry side. You were recently on stage at an Intel event — which some people who have been watching this industry for 30 years might have seen as a ‘hell freezing over’ kind of moment. What is the nature of that relationship exactly? 

RH: Yeah, that’s a great example of the world moving around. Intel, 10 years ago, probably saw it as very useful to see Arm as not a healthy competitor. Fast forward, Intel has a burgeoning business that it’s trying to grow around Intel Foundry. What does Intel Foundry need? They need volume. Well, who drives the most volume in terms of CPUs? It’s Arm. So they obviously see the size of that opportunity . . . They’ve taken a lot of money from the US government under the Chips Act and they need to put that money to work. I think working with Arm is going to be the fastest way they can do that. 

TB: We’ve talked a lot about AI in the abstract. What are the real applications of AI that you’re most excited about personally? 

RH: A very easy AI application that I use is to remove people from photographs. I’ll take pictures of my kids, my grandkids, my friends, and someone will photobomb. And you can just clean that stuff up. With (Google Photos) Magic Eraser, you can do that. Crazy easy, but that’s AI. 

But the areas that I personally find far more interesting are drug research and medical. A very simple example: you’re unwell, you go to the pharmacy, they prescribe some medicine for you, and you look at the medication and the side effects are as generic as can be. That seems like something where, if the doctor knew my DNA genome sequence and could map out exactly which drugs will give me what kind of response, knowing exactly my background and profile, that would be compelling. I was meeting this morning with somebody who’s in this industry and was asking that question. With AI, that’s probably three to four years away.

Another interesting example is drug research. How long does it take to develop a new drug? Ten years. That could be cut in half; it could be cut by two-thirds by using AI. That to me is incredibly exciting.

TB: Some AI boosters argue the technology will soon replace all human labour. Do you think your grandchildren will have to work? 

RH: I hope so. I hope so. What a life if they don’t. 
