
The high- and low-level context behind Nvidia CEO Jensen Huang’s GTC 2025 keynote | Dion Harris interview

Jensen Huang, CEO of Nvidia, hit on a variety of high concepts and low-level tech speak in his GTC 2025 keynote speech last Tuesday at the sprawling SAP Center in San Jose, California. My big takeaway was that humanoid robots and self-driving cars are coming faster than we realize.

Huang, who runs one of the most valuable companies on Earth, with a market value of $2.872 trillion, talked about synthetic data and the way new models will enable humanoid robots and self-driving cars to hit the market faster.

He also noted that we’re about to shift from data-intensive, retrieval-based computing to a different form enabled by AI: generative computing, where the AI reasons out an answer and provides the information, rather than having a computer fetch data from memory to supply the information.

I was fascinated by how Huang went from subject to subject with ease, without a script. But there were moments when I wanted an interpreter to give me more context. There were some deep topics like humanoid robots, digital twins, the intersection with games, and the Earth-2 simulation that uses a number of supercomputers to work out both global and local climate change effects and the daily weather.

Just after the keynote talk, I spoke with Dion Harris, Nvidia’s senior director of AI and HPC AI factory solutions, to get more context on the announcements that Huang made.

Here’s an edited transcript of our interview.

Dion Harris, Nvidia’s senior director of AI and HPC AI factory solutions, at SAP Center after Jensen Huang’s GTC 2025 keynote.

VentureBeat: Did you own anything specifically in the keynote up there?

Harris: I worked on the first two hours of the keynote. All the stuff that had to do with AI factories. Just until he handed it over to the enterprise stuff. We’re very involved in all of that.

VentureBeat: I’m always interested in the digital twins and the Earth-2 simulation. Recently I interviewed the CTO of Ansys, talking about the sim-to-real gap. How far do you think we’ve come on that?

Harris: There was a montage that he showed, just after the CUDA-X libraries. That was interesting in describing the journey in terms of closing that sim-to-real gap. It describes how we’ve been on this path for accelerated computing, accelerating applications to help them run faster and more efficiently. Now, with AI brought into the fold, it’s creating this realtime acceleration in terms of simulation. But of course you need the visualization, which AI is also helping with. You have this interesting confluence of core simulation accelerating to train and build AI. You have AI capabilities that are making the simulation run much faster and deliver accuracy. You also have AI assisting with the visualization elements it takes to create these realistic, physics-informed views of complex systems.

When you think of something like Earth-2, it’s the culmination of all three of those core technologies: simulation, AI, and advanced visualization. To answer your question in terms of how far we’ve come, in just the last couple of years, working with folks like Ansys, Cadence, and all these other ISVs who built legacies and expertise in core simulation, and then partnering with folks building AI models and AI-based surrogate approaches–we think this is an inflection point, where we’re going to see a huge takeoff in physics-informed, reality-based digital twins. There’s a lot of exciting work happening.

Nvidia Isaac GR00T makes it easier to design humanoid robots.

VentureBeat: He started with this computing concept fairly early there, talking about how we’re moving from retrieval-based computing to generative computing. That’s something I didn’t notice (before). It seems like it could be so disruptive that it has an impact on this space as well. 3D graphics seems to have always been such a data-heavy form of computing. Is that somehow being alleviated by AI?

Harris: I’ll use a phrase that’s very contemporary within AI. It’s called retrieval-augmented generation. They use that in a different context, but I’ll use it to explain the idea here as well. There will still be retrieval elements of it. Obviously, if you’re a brand, you want to maintain the integrity of your car design, your branding elements, whether it’s materials, colors, what have you. But there will be elements within the design principle or practice that can be generated. It will be a mix of retrieval, having stored database assets and classes of objects or images, but there will be a lot of generation that helps streamline that, so you don’t have to compute everything.

It goes back to what Jensen was describing at the beginning, where he talked about how ray tracing works. Taking one that’s calculated and using AI to generate the other 15. The design process will look very similar. You can have some assets that are retrieval-based, that are very much grounded in a specific set of artifacts or IP assets you need to build, specific elements. Then there will be other pieces that will be completely generated, because they’re elements where you can use AI to help fill in the gaps.
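To make that split concrete, here is a minimal Python sketch of a design pipeline that retrieves brand-critical assets from a stored library and generates the rest. The function names, asset IDs, and stub generator are hypothetical illustrations, not Nvidia code.

```python
# Illustrative only: split a design request into retrieval-based assets
# (brand-critical, pulled from an approved library) and generated fill.
from typing import Callable, Dict

def assemble_design(
    elements: Dict[str, dict],
    asset_library: Dict[str, str],
    generate: Callable[[str], str],
) -> Dict[str, str]:
    """Return a mapping of element name -> asset source."""
    design = {}
    for name, spec in elements.items():
        if spec.get("brand_critical"):
            # Retrieval: brand-critical pieces come from stored, approved assets.
            design[name] = asset_library[spec["asset_id"]]
        else:
            # Generation: everything else is synthesized to fill in the gaps.
            design[name] = generate(spec["prompt"])
    return design

# Tiny usage example with stand-in data and a stub generator.
library = {"logo_v3": "assets/logo_v3.usd", "paint_red_07": "assets/paint_red_07.mdl"}
request = {
    "badge": {"brand_critical": True, "asset_id": "logo_v3"},
    "paint": {"brand_critical": True, "asset_id": "paint_red_07"},
    "backdrop": {"brand_critical": False, "prompt": "studio backdrop, soft light"},
}
print(assemble_design(request, library, generate=lambda p: f"generated:<{p}>"))
```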

VentureBeat: Once you’re faster and more efficient, it starts to alleviate the burden of all that data.

Harris: The speed is cool, but it’s really interesting when you consider the new kinds of workflows it enables, the things you can do in terms of exploring different design spaces. That’s when you see the potential of what AI can do. You see certain designers get access to some of the tools and realize that they can explore hundreds of possibilities. You talked about Earth-2. One of the most fascinating things about what some of the AI surrogate models allow you to do is not just doing a single forecast a thousand times faster, but being able to do a thousand forecasts. Getting a stochastic representation of all the possible outcomes, so you have a much more informed view for making a decision, versus having a very limited view. Because it’s so resource-intensive, you can’t explore all the possibilities. You have to be very prescriptive in what you pursue and simulate. AI, we think, will create a whole new set of possibilities to do things very differently.
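As a rough illustration of that ensemble idea, the sketch below contrasts a single forecast with many perturbed forecasts from a fast AI surrogate; surrogate_step is a hypothetical placeholder, not Earth-2’s actual model.

```python
# Illustrative only: many cheap surrogate forecasts give a distribution of
# outcomes (mean and spread) instead of a single deterministic answer.
import numpy as np

def surrogate_step(state: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Stand-in for one fast AI-surrogate forecast step (placeholder physics)."""
    return state + rng.normal(0.0, 0.1, size=state.shape)

def ensemble_forecast(initial_state, n_members=1000, n_steps=14, seed=0):
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        # Perturb the initial conditions slightly, then roll the surrogate forward.
        state = initial_state + rng.normal(0.0, 0.05, size=initial_state.shape)
        for _ in range(n_steps):
            state = surrogate_step(state, rng)
        members.append(state)
    members = np.stack(members)
    # A stochastic view of the outcomes: mean plus spread across members.
    return members.mean(axis=0), members.std(axis=0)

mean, spread = ensemble_forecast(np.zeros((4, 4)), n_members=100)
print(mean.shape, spread.shape)  # (4, 4) (4, 4)
```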

Earth-2 at Nvidia’s GTC 2024 event.

VentureBeat: With Earth-2, you might say, “It was foggy here yesterday. It was foggy here an hour ago. It’s still foggy.”

Harris: I would take it a step further and say that you would be able to understand not only the impact of the fog now, but you could understand a bunch of possibilities around where things will be two weeks out in the future. Getting very localized, regionalized views of that, versus doing broad generalizations, which is how most forecasts are used now.

VentureBeat: The particular advance we have in Earth-2 today, what was that again?

Harris: There weren’t many announcements in the keynote, but we’ve been doing a ton of work throughout the climate tech ecosystem just in terms of the timeline. Last year at Computex we unveiled the work we’ve been doing with the Taiwan weather administration. That was demonstrating CorrDiff over the region of Taiwan. More recently, at Supercomputing we did an upgrade of the model, fine-tuning and training it on the U.S. data set. A much larger geography, with totally different terrain and weather patterns to learn. Demonstrating that the technology is both advancing and scaling.

Image Credit: Nvidia

As we look at some of the other regions we’re working with–at the show we announced we’re working with G42, which is based in the Emirates. They’re taking CorrDiff and building on top of their platform to create regional models for their specific weather patterns. Much like what you were describing about fog patterns, I assumed that most of their weather and forecasting challenges would be around things like sandstorms and heat waves. But they’re actually very concerned with fog. That’s one thing I never knew. A lot of their meteorological systems are used to help manage fog, especially for transportation and infrastructure that relies on that information. It’s an interesting use case there, where we’ve been working with them to deploy Earth-2, and particularly CorrDiff, to predict that at a very localized level.

VentureBeat: It’s actually getting very practical use, then?

Harris: Absolutely.

VentureBeat: How much detail is in there now? At what level of detail do you have everything on Earth?

Harris: Earth-2 is a moonshot project. We’re going to build it piece by piece to get to that end state we talked about, the full digital twin of the Earth. We’ve been doing simulation for quite a while. AI, we’ve obviously done some work with forecasting and adopting other AI surrogate-based models. CorrDiff is a unique approach in that it takes any data set and super-resolves it. But you have to train it on the regional data.

If you think about the globe as a patchwork of regions, that’s how we’re doing it. We started with Taiwan, like I mentioned. We’ve expanded to the continental United States. We’ve expanded to looking at EMEA regions, working with some weather agencies there to use their data and train it to create CorrDiff adaptations of the model. We’ve worked with G42. It’s going to be a region-by-region effort. It’s reliant on a few things. One, having the data, either the observed data or the simulated data or the historical data to train the regional models. There’s a lot of that out there. We’ve worked with a lot of regional agencies. And then also making the compute and platforms available to do it.

The good news is we’re committed. We know it’s going to be a long-term project. Through the ecosystem coming together to lend the data and bring the technology together, it feels like we’re on a good trajectory.
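For readers unfamiliar with the super-resolving idea Harris mentions, here is a rough Python sketch of upsampling a coarse regional forecast patch to a finer grid. The nearest-neighbor fill is a placeholder where a trained generative model such as CorrDiff would actually do the work; the shapes and factor are arbitrary.

```python
# Illustrative only: "super-resolve" a coarse 2D weather field to a finer grid.
# A trained model would add realistic fine-scale detail; this placeholder just
# repeats the coarse values.
import numpy as np

def super_resolve_patch(coarse: np.ndarray, factor: int = 4) -> np.ndarray:
    """Upsample a 2D field by `factor` using nearest-neighbor as a stand-in."""
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

coarse_patch = np.random.rand(16, 16)   # coarse regional grid
fine_patch = super_resolve_patch(coarse_patch)
print(coarse_patch.shape, "->", fine_patch.shape)  # (16, 16) -> (64, 64)
```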

VentureBeat: It’s interesting how hard that data is to get. I figured the satellites up there would just fly over some number of times and you’d have it all.

Nvidia and GM have teamed up on self-driving cars.

Harris: That’s a whole other data source, taking all the geospatial data. In some cases, because that’s proprietary data–we’re working with some geospatial companies, for instance Tomorrow.io. They have satellite data that we’ve used to capture–in the montage that opened the keynote, you saw the satellite roving over the planet. That was some imagery we took from Tomorrow.io specifically. OroraTech is another one that we’ve worked with. To your point, there’s a lot of satellite geospatial observed data that we can and do use to train some of these regional models as well.

VentureBeat: How can we get to a whole picture of the Earth?

Harris: One of what I’ll call the magic elements of the Earth-2 platform is Omniverse. It allows you to ingest numerous different types of data and stitch them together with temporal consistency, spatial consistency, even when it’s satellite data versus simulated data versus other observational sensor data. When you look at that issue–for instance, we were talking about satellites. We were talking with one of the partners. They have great detail, because they literally scan the Earth every day at the same time. They’re in an orbital path that allows them to catch every strip of the Earth every day. But it doesn’t have great temporal granularity. That’s where you want to take the spatial data we’d get from a satellite company, but then also take the modeling and simulation data to fill in the temporal gaps.

It’s taking all these different data sources and stitching them together through the Omniverse platform that will ultimately allow us to deliver against this. It won’t be gated by any one approach or modality. That flexibility gives us a path toward getting to that goal.
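Here is a minimal sketch of the kind of stitching Harris describes, assuming a once-daily satellite observation and hourly simulated data for the same region. The arrays, the simple anchoring scheme, and the function name are illustrative assumptions, not Omniverse or Earth-2 code.

```python
# Illustrative only: blend spatially detailed, once-daily satellite snapshots
# with temporally dense hourly simulation to fill the gaps between passes.
import numpy as np

def fill_temporal_gaps(sat_daily: np.ndarray, sim_hourly: np.ndarray) -> np.ndarray:
    """
    sat_daily:  (days, H, W)     one observed field per day, spatially detailed
    sim_hourly: (days*24, H, W)  simulated field every hour, temporally dense
    Returns an hourly series anchored to the daily satellite observations.
    """
    days = sat_daily.shape[0]
    fused = np.empty_like(sim_hourly)
    for d in range(days):
        block = sim_hourly[d * 24:(d + 1) * 24]
        # Bias-correct the simulated hours so the daily mean matches the snapshot.
        correction = sat_daily[d] - block.mean(axis=0)
        fused[d * 24:(d + 1) * 24] = block + correction
    return fused

fused = fill_temporal_gaps(np.random.rand(2, 8, 8), np.random.rand(48, 8, 8))
print(fused.shape)  # (48, 8, 8)
```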

VentureBeat: Microsoft, with Flight Simulator 2024, mentioned that there are some cases where countries don’t want to give up their data. (Those countries asked,) “What are you going to do with this data?”

Harris: Airspace definitely presents a limitation there. You have to fly over it. Satellite, obviously, you can capture at a much higher altitude.

VentureBeat: With a digital twin, is that just a far simpler problem? Or do you run into other challenges with something like a BMW factory? It’s only so many square feet. It’s not the entire planet.

BMW Group's factory of the future - designed and simulated in NVIDIA Omniverse

Harris: It’s a different problem. With the Earth, it’s such a chaotic system. You’re trying to model and simulate air, wind, heat, moisture. There are all these variables that you have to either simulate or account for. That’s the real challenge of the Earth. It isn’t the scale so much as the complexity of the system itself.

The trickier thing about modeling a factory is that it’s not as deterministic. You can move things around. You can change things. Your modeling challenges are different because you’re trying to optimize a configurable space versus predicting a chaotic system. That creates a very different dynamic in how you approach it. But they’re both complex. I wouldn’t downplay it and say that having a digital twin of a factory isn’t complex. It’s just a different kind of complexity. You’re trying to achieve a different goal.

VentureBeat: Do you feel like things like the factories are pretty much mastered at this point? Or do you also need more and more computing power?

Harris: It’s a very compute-intensive problem, for sure. The key benefit, in terms of where we are now, is that there’s a fairly broad recognition of the value of producing a lot of these digital twins. We have incredible traction not only within the ISV community, but also with actual end users. Those slides we showed up there when he was clicking through, a lot of those enterprise use cases involve building digital twins of specific processes or manufacturing facilities. There’s a fairly general acceptance of the idea that if you can model and simulate it first, you can deploy it much more efficiently. Wherever there are opportunities to deliver more efficiency, there are opportunities to leverage the simulation capabilities. There’s a lot of success already, but I think there’s still a lot of opportunity.

VentureBeat: Back in January, Jensen talked a lot about synthetic data. He was explaining how close we are to getting really good robots and autonomous cars thanks to synthetic data. You drive a car billions of miles in a simulation and you only have to drive it a million miles in real life. You know it’s tested and it’s going to work.

Harris: He made a few key points today. I’ll try to summarize. The first thing he touched on was describing how the scaling laws apply to robotics. Specifically, on the point of synthetic data generation. That provides an incredible opportunity for both the pre-training and post-training elements that are introduced for that whole workflow. The second point he highlighted was also related to that. We open-sourced, or made available, our own synthetic data set.

We believe two things will happen there. One, by unlocking this data set and making it available, you get much more adoption and many more folks picking it up and building on top of it. We think that starts the flywheel, the data flywheel we’ve seen happening in the virtual AI space. The scaling law helps drive more data generation through that post-training workflow, and then us making our own data set available should further adoption as well.

VentureBeat: Back to the things that are accelerating robots so they’re going to be everywhere soon, were there any other big things worth noting there?

Nvidia RTX 50 Series graphics cards can do serious rendering.

Harris: Again, there are a number of mega-trends that are accelerating the interest and investment in robotics. The first thing, which was a bit loosely coupled, but I think he connected the dots at the end–it’s mainly the evolution of reasoning and thinking models. When you think about how dynamic the physical world is, any kind of autonomous machine or robot, whether it’s a humanoid or a mover or anything else, must be able to spontaneously interact and adapt and think and engage. The advancement of reasoning models, being able to deliver that capability as an AI, both virtually and physically, is going to help create an inflection point for adoption.

Now the AI will become much more intelligent, much more likely to be able to interact with all the variables that occur. It’ll come to that door and see it’s locked. What do I do? Those kinds of reasoning capabilities, you can build them into AI. Let’s retrace. Let’s go find another location. That’s going to be a huge driver for advancing some of the capabilities within physical AI, those reasoning capabilities. That’s a lot of what he talked about in the first half, describing why Blackwell is so important, describing why inference is so important in terms of deploying those reasoning capabilities, both in the data center and at the edge.
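As a toy illustration of that locked-door behavior, the sketch below runs a plan and falls back to an alternative when a step fails. The step names and callbacks are hypothetical, not any actual Nvidia robotics API.

```python
# Illustrative only: execute a plan and re-plan when a step fails.
from typing import Callable, List, Optional

def execute_with_replanning(
    plan: List[str],
    try_step: Callable[[str], bool],
    find_alternative: Callable[[str], Optional[str]],
) -> bool:
    """Run each step; when one fails, reason about an alternative and retry."""
    for step in plan:
        if try_step(step):
            continue
        alternative = find_alternative(step)  # e.g. door locked -> take another route
        if alternative is None or not try_step(alternative):
            return False  # no viable alternative, report failure
    return True

# Usage with stand-in behaviors: the front door is locked, so the robot reroutes.
ok = execute_with_replanning(
    plan=["go to front door", "enter building"],
    try_step=lambda step: step != "go to front door",     # front door "fails"
    find_alternative=lambda step: "go to side entrance",  # proposed detour
)
print(ok)  # True
```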

VentureBeat: I was watching a Waymo at an intersection near GDC the other day. All these people crossed the street, and then even more started jaywalking. The Waymo is politely waiting there. It’s never going to move. If it were a human it would start inching forward. Hey, guys, let me through. But a Waymo wouldn’t risk that.

Harris: When you think about the real world, it’s very chaotic. It doesn’t always follow the rules. There are all these spontaneous circumstances where you need to think and reason and infer in real time. That’s where, as these models become more intelligent, both virtually and physically, it’ll make a lot of the physical AI use cases much more feasible.

The Nvidia Omniverse is growing.

VentureBeat: Is there anything else you wanted to cover today?

Harris: The one thing I would touch on briefly–we were talking about inference and the importance of some of the work we’re doing in software. We’re known as a hardware company, but he spent a good amount of time describing Dynamo and emphasizing the importance of it. It’s a very hard problem to solve, and it’s why companies will be able to deploy AI at large scale. Right now, as they’ve been going from proof of concept to production, that’s where the rubber is going to hit the road in terms of reaping the value from AI. It’s through inference. A lot of the work we’ve been doing on both hardware and software will unlock a lot of the virtual AI use cases, the agentic AI elements, getting up that curve he was highlighting, and then of course physical AI as well.

Dynamo being open source will help drive adoption. Being able to plug into other inference runtimes, whether it’s SGLang or vLLM, is going to allow it to have much broader traction and become the standard layer, the standard operating system for that data center.
