
Microsoft and Nvidia announce major new integrations, breakthroughs and more at GTC


Microsoft's announcements of brand-new collaborations with long-time partner Nvidia propelled the company to the top of this year's Nvidia GTC AI conference in San Jose, held March 18-21.

This week's AI innovation news ranged from advancements in AI infrastructure and services to new platform integrations, industry breakthroughs and more. Additionally, Nidhi Chappell, VP of Azure Generative AI and HPC Platform at Microsoft, sat down for an exclusive one-on-one with Sharon Goldman, Senior Writer at VentureBeat, to discuss Microsoft's partnership with OpenAI and Nvidia, where the market is headed, and more.

“If you look at what got us here, partnership is at the heart of everything we do. When you train a large base model, you need large-scale infrastructure that can run for a long period of time,” Chappell said. “We have invested a lot of time and effort with Nvidia to make sure that we can deliver performance, that we can do so reliably, and that we can do so globally, so that (using our Azure OpenAI service) enterprise customers have a seamless experience. You can integrate it into your existing processes, or you can start your new work with our tool.”

Check out the full interview: Live from GTC: A Conversation with Microsoft | NVIDIA On Demand. Read on for the key conference announcements, and don't miss Microsoft's in-depth series of panels and talks, all free to watch on demand.

AI infrastructure improves with important new integrations

Workloads are becoming more demanding and require more heavy lifting, which means hardware innovation must step in. To that end, Microsoft is one of the first companies to integrate the Nvidia GB200 Grace Blackwell superchip and the Nvidia Quantum-X800 InfiniBand network into Azure. Additionally, the Azure NC H100 v5 virtual machine series is now available for businesses of all sizes.

The Nvidia GB200 Grace Blackwell superchip is specifically designed to handle increasingly complex AI workloads, high-performance workloads and data processing. New Azure instances based on the latest GB200 and the recently announced Nvidia Quantum-X800 InfiniBand network will help accelerate frontier and foundation models for natural language processing, computer vision, speech recognition, and more. The superchip offers up to 16 TB/s of memory bandwidth and up to an estimated 45 times faster inference on trillion-parameter models than the previous generation. The Nvidia Quantum-X800 InfiniBand networking platform scales the GB200's parallel computing tasks to massive GPU scale.

The Azure NC H100 v5 VM series, designed for mid-range training, inference, and high-performance computing (HPC) simulations, is now available to organizations of all sizes. The VM series is based on the Nvidia H100 NVL platform, available with one or two NVLink-connected Nvidia H100 94GB PCIe Tensor Core GPUs with 600GB/s of bandwidth.

It supports 128GB/s bi-directional communication between the host processor and GPU to reduce latency and data transfer overhead, making AI and HPC applications faster and more scalable. With support for Nvidia Multi-Instance GPU (MIG) technology, customers can also partition each GPU into up to seven instances.
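As a rough illustration, MIG partitioning of this kind is typically driven through the `nvidia-smi mig` CLI. The sketch below only builds and prints the commands rather than running them; the `1g.12gb` profile name is an assumption for the H100 NVL 94GB part, and the real profile list should be checked with `nvidia-smi mig -lgip` on actual hardware.

```python
def mig_partition_cmds(gpu_index: int, profile: str = "1g.12gb", count: int = 7):
    """Build the nvidia-smi commands that enable MIG mode on one GPU and
    create `count` GPU instances of the given profile, each with its own
    compute instance (-C). The profile name is an assumption for the
    H100 NVL 94GB; verify with `nvidia-smi mig -lgip` on real hardware."""
    return [
        # Step 1: enable MIG mode on the selected GPU (needs admin rights).
        ["nvidia-smi", "-i", str(gpu_index), "-mig", "1"],
        # Step 2: create `count` GPU instances plus compute instances (-C).
        ["nvidia-smi", "mig", "-i", str(gpu_index),
         "-cgi", ",".join([profile] * count), "-C"],
    ]

# Print the commands for inspection instead of executing them,
# since this requires a MIG-capable GPU and elevated privileges.
for cmd in mig_partition_cmds(0):
    print(" ".join(cmd))
```

Executing these for real requires administrator rights and a MIG-capable GPU; building the command list first makes it easy to review before applying.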

Major breakthroughs in healthcare and life sciences

AI has been a major catalyst for rapid innovation in medicine and the life sciences, from research to drug development to patient care. The expanded collaboration connects Microsoft Azure with Nvidia DGX Cloud and the Nvidia Clara suite of microservices to enable healthcare providers, pharmaceutical and biotechnology companies, and medical device developers to accelerate innovation in clinical research, drug development and patient care.

Organizations already using cloud computing and AI include Sanofi, the Broad Institute of MIT and Harvard, Flywheel and Sophia Genetics, academic medical centers such as the University of Wisconsin School of Medicine and Public Health, and health systems such as Mass General Brigham. They are driving transformative change in healthcare, improving patient care, democratizing AI for healthcare professionals, and more.

Industrial digital twins gain traction with Omniverse APIs on Azure

Nvidia Omniverse Cloud APIs are coming to Microsoft Azure, expanding the reach of the Omniverse platform. Developers can now integrate core Omniverse technologies directly into existing digital twin design and automation software applications, or into their simulation workflows for testing and validating autonomous machines such as robots or self-driving vehicles.

Microsoft demonstrated a preview of what's possible with Omniverse Cloud APIs on Azure. For example, factory operators can see real-time factory data overlaid on a 3D digital twin of their facility to gain new insights that can speed up production.

In his GTC keynote, Nvidia CEO Jensen Huang showed how Siemens Teamcenter, connected to Omniverse APIs, can bring such digital twin workflows into existing industrial software.

Improving real-time contextualized intelligence

Copilot for Microsoft 365, soon available as a dedicated physical keyboard button on Windows 11 PCs, combines the power of large language models with proprietary company data. Nvidia GPUs and the Nvidia Triton Inference Server power its AI inference for real-time, contextualized intelligence, helping users enhance their creativity, productivity and skills.

Turbocharge AI training and deployment

Nvidia NIM inference microservices, part of the Nvidia AI Enterprise software platform, provide cloud-native microservices for optimized inference on more than two dozen popular foundation models. For deployment, the microservices provide pre-built, run-anywhere containers powered by Nvidia AI Enterprise inference software – including Triton Inference Server, TensorRT and TensorRT-LLM – to help developers speed up time-to-market for performance-optimized production AI applications.
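NIM microservices expose an OpenAI-compatible HTTP API, so calling a deployed container is a standard chat-completions POST. A minimal stdlib sketch follows; the base URL and model name are placeholders for illustration, not real deployment values.

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-compatible chat-completions request for a NIM
    endpoint. The URL and model name passed in below are hypothetical
    placeholders, not values from a real deployment."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Build (but don't send) a request against a hypothetical local container.
req = build_chat_request("http://localhost:8000", "example/llm-model",
                         "Summarize this week's GTC news in one sentence.")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
# urllib.request.urlopen(req) would send it to a running NIM container.
```

Because the API shape matches OpenAI's, existing client code can usually be pointed at a NIM container just by swapping the base URL and model name.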

Nvidia DGX Cloud's integration with Microsoft Fabric deepens

Microsoft and Nvidia are also deepening the integration of Microsoft Fabric, the all-in-one enterprise analytics solution, with Nvidia DGX Cloud compute. This means that Nvidia's workload-optimized runtimes, LLMs and machine learning will work seamlessly with Microsoft Fabric. With Fabric OneLake as the underlying data storage, developers can tackle data-intensive use cases such as digital twins and weather forecasting. The integration also gives customers the option to use DGX Cloud to accelerate their Fabric data science and data engineering workloads.

See what you missed at GTC 2024

Microsoft showcased the enormous potential of its collaborations with Nvidia and demonstrated why Azure is a critical component of a successful AI strategy for companies of all sizes. All of Microsoft's panels and talks are free to stream on demand.

Learn more about AI solutions from Microsoft and NVIDIA:

