
How AI and software can improve semiconductor chips | Accenture interview

Accenture has more than 743,000 people providing consulting expertise on technology to clients in more than 120 countries. I met with one of them at CES 2024, the big tech trade show in Las Vegas, and had a conversation about semiconductor chips, the foundation of our tech economy.

Syed Alam, Accenture’s semiconductor lead, was one of many people at the show talking about the impact of AI on a major tech industry. He said that one of these days we’ll be talking about chips with trillions of transistors on them. No single engineer will be able to design all of them, and so AI is going to have to help with that task.

According to Accenture research, generative AI has the potential to affect 44% of all working hours across industries, enable productivity enhancements across 900 different types of jobs and create $6 trillion to $8 trillion in global economic value.

It’s no secret that Moore’s Law has been slowing down. Back in 1965, former Intel CEO Gordon Moore predicted that chip manufacturing advances were proceeding so fast that the industry would be able to double the number of components on a chip every couple of years.

For decades, that law held true, serving as a metronome for the chip industry that brought enormous economic benefits to society as everything in the world became electronic. But the slowdown means that progress is no longer guaranteed.

This is why the companies leading the race for progress in chips — like Nvidia — are valued at over $1 trillion. And the interesting thing is that as chips get faster and smarter, they’re going to be used to make AI smarter, cheaper and more accessible.

A supercomputer used to train ChatGPT has over 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server. The hundreds of millions of daily queries to ChatGPT consume about one gigawatt-hour each day, roughly the daily energy consumption of 33,000 US households. Building autonomous cars requires more than 2,000 chips, more than double the number of chips used in regular cars. These are tough problems to solve, and they will be solvable thanks to the dynamic vortex of AI and semiconductor advances.
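
Those energy figures are easy to sanity-check. Here is a minimal back-of-the-envelope calculation, assuming the article’s round numbers of about one gigawatt-hour per day and 33,000 households, and a typical US household using on the order of 29 kWh per day:

```python
# Back-of-the-envelope check of the energy figures above (assumed round
# numbers, not measurements): ~1 GWh/day of ChatGPT usage vs. 33,000 homes.
GWH_PER_DAY = 1.0                        # assumed daily consumption in GWh
KWH_PER_DAY = GWH_PER_DAY * 1_000_000    # 1 GWh = 1,000,000 kWh
HOUSEHOLDS = 33_000
TYPICAL_US_HOUSEHOLD_KWH = 29            # rough daily average for a US home

implied = KWH_PER_DAY / HOUSEHOLDS
print(f"Implied usage per household: {implied:.1f} kWh/day")   # ~30.3
print(f"Typical US household:        {TYPICAL_US_HOUSEHOLD_KWH} kWh/day")
```

At roughly 30 kWh per household per day, the two figures quoted above are consistent with each other.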

Alam talked about the impact of AI, as well as software changes, on hardware and chips. Here’s an edited transcript of our interview.

VentureBeat: Tell me what you’re interested in now.

Syed Alam is head of the semiconductor practice at Accenture.

Syed Alam: I’m hosting a panel discussion tomorrow morning. The topic is the hard part of AI, hardware and chips. Talking about how they’re enabling AI. Obviously the people who are doing the hardware and chips believe that’s the difficult part. People doing software believe that’s the difficult part. We’re going to take the view, most likely–I have to see what view my fellow panelists take. Most likely we’ll end up in a situation where neither the hardware on its own nor the software on its own is the difficult part. It’s the integration of hardware and software that’s the difficult part.

You’re seeing the companies that are successful–they’re the leaders in hardware, but they’ve also invested heavily in software. They’ve done a very good job of hardware and software integration. There are hardware or chip companies who are catching up on the chip side, but they have a lot of work to do on the software side. They’re making progress there. Obviously the software companies, companies writing algorithms and things like that, are being enabled by that progress. That’s a quick outline of the talk tomorrow.

VentureBeat: It makes me think about Nvidia and its DLSS (deep learning super sampling) technology, enabled by AI. Used in graphics chips, it relies on AI to estimate the likelihood of the next pixel they’re going to have to draw based on the last one they had to draw.

Alam: Along the same lines, the success for Nvidia is clearly–they have a very powerful processor in this space. But at the same time, they’ve invested heavily in the CUDA architecture and software for many years. It’s the tight integration that’s enabling what they’re doing. That’s what makes Nvidia the current leader in this space. They have a very powerful, robust chip and very tight integration with their software.

VentureBeat: They were getting very good percentage gains from software updates to this DLSS AI technology, as opposed to sending the chip back to the factory another time.

Alam: That’s the beauty of a good software architecture. As I said, they’ve invested heavily over so many years. A lot of the time you don’t have to do–if you have tight integration with software, and the hardware is designed that way, then a lot of those updates can be done in software. You’re not spinning out something new every time a slight update is needed. That’s traditionally been the mantra in chip design: we’ll just spin out new chips. But now, with the integrated software, a lot of those updates can be done purely in software.

VentureBeat: Have you seen a lot of changes happening among individual companies because of AI already?

AI is going to touch every industry, including semiconductors.

Alam: At the semiconductor companies, obviously, we’re seeing them design more powerful chips, but at the same time also treat software as a key differentiator. You saw AMD announce the acquisition of AI software companies. You’re seeing companies not only investing in hardware, but at the same time also investing in software, especially for applications like AI where that’s very important.

VentureBeat: Back to Nvidia, that was always an advantage they had over some of the others. AMD was always very hardware-focused. Nvidia was investing in software.

Alam: Exactly. They’ve been investing in CUDA for a long time. They’ve done well on both fronts. They came up with a very robust chip, and at the same time the benefits of investing in software for a long period came along around the same time. That’s made their offering very powerful.

VentureBeat: I’ve seen some other companies coming up with–Synopsys, for example, just announced that they’re going to be selling some chips. Designing their own chips as opposed to just making chip design software. It was interesting in that it starts to mean that AI is designing chips as much as humans are designing them.

Alam: We’ll see that more and more. It’s just like AI writing code. You can translate that now into AI playing a key role in designing chips as well. It may not design the entire chip, but it can do a lot of the first mile, with maybe just the last mile of customization done by human engineers. You’ll see the same thing applied to chip design, with AI playing a role in design. At the same time, in manufacturing, AI is playing a key role already, and it’s going to play a lot more of a role. We saw some of the foundry companies announcing that they’ll have a fab in a few years where there won’t be any humans. The leading fabs already have a very limited number of humans involved.

VentureBeat: I always felt like we’d eventually hit a wall in the productivity of engineers designing things. How many billions of transistors would one engineer be responsible for creating? The path leads to too much complexity for the human mind, too many tasks for one person to do without automation. The same thing is happening in game development, which I also cover a lot. There were 2,000 people working on a game called Red Dead Redemption 2, and that came out in 2018. Now they’re on the next version of Grand Theft Auto, with thousands of developers responsible for the game. It feels like you have to hit a wall with a project that complex.

This supercomputer uses Nvidia’s Grace Hopper chips.

Alam: No one engineer, as you know, actually puts together all these billions of transistors. It’s like putting Lego blocks together. Every time you design a chip, you don’t start by placing each transistor individually. You take pieces and put them together. But having said that, a lot of that work will be enabled by AI as well. Which Lego blocks to use? Humans might decide that, but AI could help, depending on the design. It’s going to become more important as chips get more complicated and you get more transistors involved. Some of these things become almost humanly impossible, and AI will take over.

If I remember correctly, I saw a roadmap from TSMC–I think they were saying that by 2030, they’ll have chips with a trillion transistors. That’s coming. That won’t be possible unless AI is involved in a major way.

VentureBeat: The path that people always took was that when they had more capability to make something larger and more complex, they always made it more ambitious. They never took the path of making it less complex or smaller. I’m wondering if the less complex path is actually the one that starts to get a little more interesting.

Alam: The other thing is, we talked about using AI in designing chips. AI is also going to be used for manufacturing chips. There are already AI techniques being used for yield improvement and things like that. As chips become more and more complicated–we’re talking about many billions or a trillion transistors–the manufacturing of those dies is going to become even more complicated. For manufacturing, AI is going to be used more and more. Designing the chip, you run into physical limitations. It can take 12 to 18 weeks for manufacturing. But to increase throughput, increase yield and improve quality, there are going to be more and more AI techniques in use.
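
To make that concrete, here is a purely illustrative sketch of one common flavor of AI-driven yield improvement: fit a model that predicts wafer yield from in-line process measurements, then ask which parameters matter most. The parameter names and data below are invented for the example, not taken from any real fab.

```python
# Illustrative sketch only: predicting wafer yield from synthetic in-line
# process measurements, then ranking which parameters drive yield loss.
# Parameter names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(seed=0)
n_wafers = 500

etch_depth = rng.normal(120.0, 2.0, n_wafers)     # nm
anneal_temp = rng.normal(450.0, 15.0, n_wafers)   # deg C
overlay_err = rng.normal(2.0, 0.6, n_wafers)      # nm

# Synthetic "ground truth": yield suffers mainly from overlay error,
# with a smaller penalty for anneal temperature drift.
yield_pct = (
    95.0
    - 3.0 * (overlay_err - 2.0) ** 2
    - 0.05 * np.abs(anneal_temp - 450.0)
    + rng.normal(0.0, 0.5, n_wafers)
)

X = np.column_stack([etch_depth, anneal_temp, overlay_err])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, yield_pct)

# Feature importances hint at which process steps to investigate first.
for name, score in zip(["etch_depth", "anneal_temp", "overlay_err"],
                       model.feature_importances_):
    print(f"{name:>12s}: importance {score:.2f}")
```

In a real fab the inputs would come from metrology and test data, and the model’s rankings would feed back into process control, but the shape of the workflow is the same.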

VentureBeat: You have compounding effects in AI’s impact.

How will AI change the chip industry?

Alam: Yes. And again, going back to the point I made earlier, AI will be used to make more AI chips in a more efficient manner.

VentureBeat: Brian Comiskey gave one of the opening tech trends talks here. He’s one of the researchers at the CTA. He said that a horizontal wave of AI is going to hit every industry. The interesting question then becomes, what kind of impact does that have? What compound effects do you get when you change everything in the chain?

Alam: I think it will have the same kind of compounding effect that compute had. Computers were used initially for mathematical operations, those kinds of things. Then computing started to impact pretty much all of industry. AI is a different kind of technology, but it has a similar impact, and it will be just as pervasive.

That brings up another point. You’ll see more and more AI at the edge. It’s physically impossible to have everything done in data centers, because of power consumption, cooling, all of those things. Just as we do compute at the edge now, sensing at the edge, you’ll have a lot of AI at the edge as well.

VentureBeat: People say privacy is going to drive a lot of that.

Alam: A lot of factors will drive it. Sustainability, power consumption, latency requirements. Just as you expect compute processing to happen at the edge, you’ll expect AI at the edge as well. You can draw some parallels to when we first had the CPU, the main processor. All kinds of compute were done by the CPU. Then we decided that for graphics, we’d make a GPU. CPUs are all-purpose, but for graphics let’s make a separate ASIC.

Now, similarly, we have the GPU as the AI chip. All AI is running through that chip, a very powerful chip, but soon we’ll say, “For this neural network, let’s use this particular chip. For visual identification, let’s use this other chip.” They’ll be super-optimized for that particular use, especially at the edge. Because they’re optimized for that task, power consumption is lower, and they’ll have other advantages. Right now we have, in a way, centralized AI. We’re going toward more distributed AI at the edge.

VentureBeat: I remember a good book from way back when called Regional Advantage, about why Boston lost the tech industry to Silicon Valley. Boston had a very vertical business model, with companies like DEC designing and making their own chips for their own computers. Then you had Microsoft and Intel and IBM coming along with a horizontal approach and winning that way.

Alam: You have more horizontalization, I guess is the word, happening with the fabless foundry model as well. With that model and foundries becoming available, more and more fabless companies got started. In a way, the cycle is repeating. I started my career at Motorola in semiconductors. At the time, all the tech companies of that era had their own semiconductor division. They were all vertically integrated. I worked at Freescale, which came out of Motorola. NXP came out of Philips. Infineon came from Siemens. All the tech leaders of that time had their own semiconductor division.

Because of the capex requirements and the cycles of the industry, they spun off a lot of those semiconductor operations into independent companies. But now we’re back to the same thing. All the tech companies of our time, the major tech companies, whether it’s Google or Meta or Amazon or Microsoft, are designing their own chips again. Very vertically integrated. Except the benefit they have now is that they don’t have to have the fab. But at least they’re going vertically integrated up to the point of designing the chip. Maybe not manufacturing it, but designing it. Who knows? In the future they might manufacture as well. You have a little bit of verticalization happening now as well.

VentureBeat: I do wonder what explains Apple, though.

Alam: Yeah, they’re entirely vertically integrated. That’s been their philosophy for a very long time. They’ve applied that to chips as well.

VentureBeat: But they get the advantage of using TSMC or Samsung.

A close-up of the Apple Vision Pro.

Alam: Exactly. They still don’t have to have the fab, because the foundry model makes it easier to be vertically integrated. In the past, in the last cycle I was talking about with Motorola and Philips and Siemens, if they wanted to be vertically integrated, they had to build a fab. It was very difficult. Now these companies can be vertically integrated up to a certain level, but they don’t have to have manufacturing.

When Apple started designing their own chips–if you notice, when they were using chips from suppliers, like at the time of the original iPhone launch, they never talked about chips. They talked about the apps, the user interface. Then, when they started designing their own chips, the star of the show became, “Hey, this phone is using the A17 now!” It made other industry leaders realize that to really differentiate, you want to have your own chip as well. You see a lot of other players, even in other areas, designing their own chips.

VentureBeat: Is there a strategic recommendation that comes out of this in some way? If you step outside into the regulatory realm, the regulators are viewing vertical companies as too concentrated. They’re looking closely at something like Apple, as to whether or not its store should be broken up. The ability to use one monopoly as support for another monopoly becomes anti-competitive.

Alam: I’m not a regulatory expert, so I can’t comment on that one. But there’s a difference. We were talking about vertical integration of technology. You’re talking about vertical integration of the business model, which is a bit different.

VentureBeat: I remember an Imperial College professor predicting that this horizontal wave of AI was going to boost the whole world’s GDP by 10 percent by 2032, something like that.

Alam: I can’t comment on that specific research. But it’s going to help the semiconductor industry a lot. Everyone keeps talking about a few major companies designing and coming out with AI chips. For every AI chip, you need all the other surrounding chips as well. It’s going to help the industry grow overall. Obviously we talk about how AI is going to be pervasive across so many other industries, creating productivity gains. That will have an impact on GDP. How much, how soon, we’ll have to see.

VentureBeat: Things like the metaverse–that seems like a horizontal opportunity across a bunch of different industries, getting into virtual online worlds. How would you most easily go about building ambitious projects like that, though? Is it the vertical companies like Apple that will take the first opportunity to build something like that, or is it opened up across industries, with someone like Microsoft as just one layer?

Alam: We can’t assume that a vertically integrated company will have an advantage in something like that. Horizontal companies, if they have the right level of ecosystem partnerships, can do something like that as well. It’s hard to make a definitive statement that only vertically integrated companies can build a new technology like this. They obviously have some advantages. But if Microsoft, as in your example, has good ecosystem partnerships, they can also succeed. Something like the metaverse, we’ll see companies using it in different ways. We’ll see different kinds of user interfaces as well.

VentureBeat: The Apple Vision Pro is an interesting product to me. It could be transformative, but then they come out with it at $3,500. If you apply Moore’s Law to that, it might be 10 years before it’s down to $300. Can we expect the kind of progress that we’ve come to expect over the past 30 years or so?
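
(As an aside, that “10 years to $300” figure depends entirely on how fast you assume the price halves. A quick sketch, under the loose and debatable assumption that the device’s price halves every couple of years, Moore’s-Law style:)

```python
# Rough check of the $3,500-to-$300 estimate above, assuming the price
# halves every N years. This is a loose analogy, not a forecast.
import math

def years_to_reach(start_price: float, target_price: float,
                   halving_period_years: float) -> float:
    """Years until the price falls to the target if it halves every N years."""
    halvings = math.log2(start_price / target_price)
    return halvings * halving_period_years

for period in (2.0, 2.5, 3.0):
    years = years_to_reach(3500, 300, period)
    print(f"halving every {period:.1f} years -> about {years:.1f} years")
# Roughly 7, 9 and 10.6 years: "about 10 years" implies halving closer to
# every three years than every two.
```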

Can AI bring people and industries closer together?

Alam: All of these kinds of products, these emerging technology products, when they initially come out they’re obviously very expensive. The volume isn’t there. Interest from the public and consumer demand drives up volume and drives down cost. If you don’t ever put it out there, even at that higher price point, you don’t get a sense of what the volume is going to be like and what consumer expectations are going to be. You can’t put a lot of effort into driving down the cost until you get that. They each help the other. The technology getting out there helps educate consumers on how to use it, and once we see the expectations and can increase volume, the price comes down.

The other advantage of putting it out there is understanding different use cases. The product managers at the company might think the product has, say, these five use cases, or these 10 use cases. But you can’t think of all the possible use cases. People might start using it in a direction you didn’t expect, creating demand through something you didn’t anticipate. You might run into 10 new use cases, or 30 use cases. That will drive volume again. It’s important to get a sense of market adoption, and also get a sense of different use cases.

VentureBeat: You never know what consumer demand is going to be until it’s out there.

Alam: You have some sense of it, obviously, because you invested in it and put the product out there. But you don’t fully appreciate what’s possible until it hits the market. Then the volume and the rollout are driven by consumer acceptance and demand.

VentureBeat: Do you think there are enough levers for chip designers to pull to deliver the compounding benefits of Moore’s Law?

Alam: Moore’s Law in the classic sense, just shrinking the die, is going to hit its physical limits. We’ll have diminishing returns. But in a broader sense, Moore’s Law is still applicable. You get the efficiency by doing chiplets, for example, or improving packaging, things like that. The chip designers are still squeezing more efficiency out. It may not be in the classic sense that we’ve seen over the past 30 years or so, but through other methods.

VentureBeat: So you’re not overly pessimistic?

Alam: When we started seeing that classic Moore’s Law, shrinking the die, would slow down, and the costs were becoming prohibitive–the wafer for 5nm is super expensive compared to legacy nodes. Building the fabs costs twice as much. Building a truly cutting-edge fab costs significantly more. But then you see advancements on the packaging side, with chiplets and things like that. AI will help with all of this as well.
