
From Google to IBM: How big tech giants are embracing Nvidia's latest hardware and software services

Nvidia pulled out all the stops to push the boundaries of computing at this year's GTC conference in San Jose.

CEO Jensen Huang, donning his signature black leather jacket, addressed a packed crowd in his keynote (the event felt more like a concert than a conference) and announced the long-awaited GB200 Grace Blackwell superchip, which promises up to a 30x performance improvement for large language model (LLM) inference workloads. He also covered notable developments in automotive, robotics, Omniverse, and healthcare, flooding the internet with all things Nvidia.

However, GTC isn't complete without industry partnerships. Nvidia shared how it is deepening its collaborations with several industry giants, which are integrating its newly announced AI computing infrastructure, software, and services into their tech stacks. Below is an outline of the key partnerships.

AWS
Nvidia said AWS will offer its latest Blackwell platform, featuring the GB200 NVL72 with 72 Blackwell GPUs and 36 Grace CPUs, on EC2 instances. This will let customers build and run real-time inference on multi-trillion-parameter LLMs faster, at scale, and at a lower cost than with previous-generation Nvidia GPUs. The companies also announced that they are bringing 20,736 GB200 superchips to Project Ceiba, an AI supercomputer built exclusively on AWS, and are teaming up to integrate Amazon SageMaker with Nvidia NIM inference microservices.
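For context on what a NIM integration looks like from the developer's side: NIM microservices are generally described as exposing OpenAI-compatible REST endpoints. The sketch below builds such a request payload; the endpoint URL and model name are placeholders for illustration, not details from the announcements.

```python
import json

# Placeholder endpoint; a real NIM deployment would expose something like
# http://<host>:8000/v1/chat/completions (OpenAI-compatible API).
NIM_URL = "http://localhost:8000/v1/chat/completions"


def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completion payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


# "meta/llama3-8b-instruct" is a hypothetical model identifier.
payload = build_chat_request("meta/llama3-8b-instruct", "Summarize GTC 2024.")
body = json.dumps(payload)  # what an HTTP client would POST to NIM_URL
```

Because the endpoint follows the OpenAI wire format, existing client libraries and SageMaker-hosted applications can target it with minimal changes.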

Google Cloud

Like Amazon, Google announced that it will integrate Nvidia's Grace Blackwell platform and NIM microservices into its cloud infrastructure. The company is also adding support for JAX, a Python-native framework for high-performance LLM training, on Nvidia H100 GPUs, and is making the Nvidia NeMo framework easier to deploy on its platform via Google Kubernetes Engine (GKE) and the Google Cloud HPC Toolkit.
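To give a flavor of the JAX programming model mentioned above, here is a minimal, framework-agnostic training step (a toy linear regression, not an LLM) using `jax.grad` and `jax.jit`; the same code runs unchanged on CPU, or on H100 GPUs when JAX's CUDA backend is installed.

```python
import jax
import jax.numpy as jnp


def loss(params, x, y):
    """Mean squared error of a linear model w*x + b."""
    pred = params["w"] * x + params["b"]
    return jnp.mean((pred - y) ** 2)


@jax.jit  # compiles the step with XLA for the available accelerator
def train_step(params, x, y, lr=0.1):
    grads = jax.grad(loss)(params, x, y)
    return {k: params[k] - lr * grads[k] for k in params}


params = {"w": jnp.array(0.0), "b": jnp.array(0.0)}
x = jnp.array([1.0, 2.0, 3.0])
y = 2.0 * x + 1.0  # ground truth: w=2, b=1

for _ in range(500):
    params = train_step(params, x, y)
```

The functional style (pure loss function, parameters passed explicitly) is what lets JAX trace, differentiate, and compile the step for whatever hardware backend is present.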

Additionally, Vertex AI now supports Google Cloud A3 VMs with NVIDIA H100 GPUs and G2 VMs with NVIDIA L4 Tensor Core GPUs.

Microsoft
Microsoft also confirmed plans to add NIM microservices and Grace Blackwell to Azure. The superchip partnership also covers Nvidia's new Quantum-X800 InfiniBand networking platform. The Satya Nadella-led company further announced the native integration of DGX Cloud with Microsoft Fabric to streamline the development of custom AI models, as well as the availability of the newly launched Omniverse Cloud APIs on Azure.

In healthcare, Microsoft said Azure will leverage Nvidia's Clara suite of microservices and DGX Cloud to help healthcare providers, pharmaceutical and biotech companies, and medical device developers innovate rapidly in clinical research and care.

Oracle
Oracle said it plans to use the Grace Blackwell computing platform across OCI Supercluster and OCI Compute instances, the latter of which can adopt both the Nvidia GB200 superchip and the B200 Tensor Core GPU. Blackwell will also be available on Nvidia DGX Cloud on OCI.

Additionally, Oracle said that Nvidia NIM and CUDA-X microservices, including NeMo Retriever for RAG inference deployments, will help OCI customers bring greater insight and accuracy to their generative AI applications.

SAP
SAP is working with Nvidia to integrate generative AI into its cloud solutions, including the latest version of SAP Datasphere, SAP Business Technology Platform (BTP), and RISE with SAP. The company also said it plans to build additional generative AI capabilities into SAP BTP using Nvidia's AI Foundry service, which includes DGX Cloud AI supercomputing, Nvidia AI Enterprise software, and Nvidia AI Foundation models.

IBM
To help customers solve complex business challenges, IBM Consulting plans to combine its technology and industry expertise with Nvidia's AI Enterprise software stack, including the new NIM microservices and Omniverse technologies. According to IBM, this will speed up customers' AI workflows, improve use-case-to-model optimization, and enable business- and industry-specific AI use cases. The company is already developing and delivering digital twin applications for supply chain and manufacturing with Isaac Sim and Omniverse.

Snowflake
Data cloud company Snowflake expanded its previously announced partnership with Nvidia to include integration with NeMo Retriever. The generative AI microservice connects custom LLMs to enterprise data, enabling Snowflake customers to improve the performance and scalability of chatbot applications built with Snowflake Cortex. The collaboration also covers Nvidia TensorRT software, which delivers low latency and high throughput for deep learning inference applications.

In addition to Snowflake, data platform providers Box, Dataloop, Cloudera, Cohesity, DataStax, and NetApp also announced plans to use Nvidia microservices, including the newly announced NIM technology, to help customers optimize RAG pipelines and integrate their proprietary data into generative AI applications.
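The RAG pattern these integrations target boils down to one step: embed the query, rank enterprise documents by vector similarity, and feed the top hits to the LLM as context. The toy sketch below shows only the ranking step with hand-made three-dimensional embeddings (real systems like NeMo Retriever use learned embedding models with hundreds of dimensions).

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def retrieve(query_vec, docs, k=1):
    """Return the ids of the k documents most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]


# Toy corpus: (doc_id, embedding) pairs with made-up 3-d embeddings.
docs = [
    ("pricing_faq", [0.9, 0.1, 0.0]),
    ("gpu_specs", [0.1, 0.9, 0.2]),
    ("hr_policy", [0.0, 0.1, 0.9]),
]

# A query embedded near the "gpu_specs" direction retrieves that document.
top = retrieve([0.2, 0.8, 0.1], docs)  # → ['gpu_specs']
```

The retrieved documents would then be prepended to the LLM prompt, which is how proprietary data reaches a model without retraining it.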

Nvidia GTC 2024 runs March 18-21 in San Jose and online.

