
Computer manufacturers introduce Nvidia Blackwell systems for AI rollouts

Nvidia CEO Jensen Huang announced at Computex that the world's leading computer manufacturers are unveiling systems based on the Nvidia Blackwell architecture, featuring Grace CPUs and Nvidia networking and infrastructure, for enterprises to build AI factories and data centers.

Nvidia Blackwell graphics processing units (GPUs) deliver up to 25x lower cost and energy consumption for AI processing tasks. And the Nvidia GB200 Grace Blackwell Superchip (that is, multiple chips in a single package) promises exceptional performance gains, offering up to a 30x performance increase for LLM inference workloads compared with the previous generation.

With the goal of driving the next wave of generative AI, ASRock Rack, Asus, Gigabyte, Ingrasys, Inventec, Pegatron, QCT, Supermicro, Wistron and Wiwynn will deploy cloud, on-premises, embedded and edge AI systems using Nvidia GPUs and networking, according to Huang.

“The next industrial revolution has begun. Companies and countries are working with Nvidia to shift trillions of dollars' worth of traditional data centers to accelerated computing and build a new kind of data center, AI factories, to produce a new product: artificial intelligence,” Huang said in a press release. “From server, network and infrastructure manufacturers to software developers, the whole industry is preparing for Blackwell to accelerate AI-powered innovation across the board.”

To cover all kinds of applications, the offerings range from single to multiple GPUs, from x86- to Grace-based processors, and from air to liquid cooling.

To accelerate the development of systems of various sizes and configurations, the Nvidia MGX modular reference design platform now supports Blackwell products. This includes the new Nvidia GB200 NVL2 platform, which offers unprecedented performance for large language model inference, retrieval-augmented generation, and data processing.

Jonney Shih, CEO of Asus, said in a press release: “ASUS is working with NVIDIA to take enterprise AI to new heights with the powerful server lineup we will showcase at COMPUTEX. Using NVIDIA's MGX and Blackwell platforms, we can develop customized data center solutions designed to handle customer workloads in training, inference, data analytics and HPC.”

GB200 NVL2 is ideally suited to new market opportunities such as data analytics, where companies spend tens of billions of dollars annually. By leveraging the high-bandwidth memory performance provided by NVLink-C2C interconnects and the dedicated decompression engines in the Blackwell architecture, data processing is accelerated by up to 18x, with 8x better energy efficiency compared with x86 CPUs.

Modular reference architecture for accelerated computing

Nvidia's Blackwell platform.

To meet the varied accelerated computing needs of the world's data centers, Nvidia MGX provides computer manufacturers with a reference architecture that allows them to quickly and cost-effectively create more than 100 system design configurations.

Manufacturers start with a basic system architecture for their server chassis and then select the GPU, DPU and CPU to cover different workloads. To date, more than 90 systems from over 25 partners have been released or are in development using the MGX reference architecture, up from 14 systems from six partners last year. Using MGX can reduce development costs by up to three-quarters and shorten development time by two-thirds, to just six months.

AMD and Intel support the MGX architecture with plans to offer their own CPU host processor module designs for the first time, including the next-generation AMD Turin platform and the Intel Xeon 6 processor with P-cores (formerly codenamed Granite Rapids). Any server system builder can use these reference designs to save development time while ensuring consistency in design and performance.

Nvidia's latest platform, the GB200 NVL2, also leverages MGX and Blackwell. Its scale-out single-node design enables a wide range of system configurations and networking options to seamlessly integrate accelerated computing into existing data center infrastructure.

The GB200 NVL2 complements the Blackwell product line, which includes Nvidia Blackwell Tensor Core GPUs, GB200 Grace Blackwell Superchips and the GB200 NVL72.

An ecosystem

Nvidia Blackwell has 208 billion transistors.

Nvidia's extensive partner ecosystem includes TSMC, the world's leading semiconductor manufacturer and Nvidia foundry partner, as well as global electronics manufacturers that provide key components for building AI factories. This includes manufacturing innovations such as server racks, power supplies and cooling solutions from companies such as Amphenol, Asia Vital Components (AVC), Cooler Master, Colder Products Company (CPC), Danfoss, Delta Electronics and LITEON.

This enables new data center infrastructure to be quickly developed and deployed to meet the needs of global enterprises. Further acceleration comes from Blackwell technology, Nvidia Quantum-2 or Quantum-X800 InfiniBand networking, Nvidia Spectrum-X Ethernet networking, and Nvidia BlueField-3 DPUs in servers from leading system manufacturers Dell Technologies, Hewlett Packard Enterprise, and Lenovo.

Enterprises can also access the Nvidia AI Enterprise software platform, which includes Nvidia NIM inference microservices for building and running production-quality generative AI applications.
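As a hedged illustration of what that looks like in practice: NIM microservices expose an OpenAI-compatible REST API, so a deployed endpoint can be queried with a standard chat-completions request. The host, port and model identifier below are placeholders for whatever a given deployment actually serves, not values from this article.

```python
import json
import urllib.request

# Hypothetical NIM endpoint; a real deployment defines its own host, port and model id.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama3-8b-instruct",  # example model id; depends on the NIM container deployed
    "messages": [
        {"role": "user", "content": "Summarize the Blackwell architecture in one sentence."}
    ],
    "max_tokens": 128,
}

def build_request(url: str, body: dict) -> urllib.request.Request:
    """Build the HTTP request without sending it (no live endpoint is assumed here)."""
    data = json.dumps(body).encode("utf-8")
    return urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )

req = build_request(NIM_URL, payload)
print(req.get_method())  # attaching a body makes this a POST request
```

Against a live endpoint, passing `req` to `urllib.request.urlopen` would return a JSON chat-completion response in the usual OpenAI schema; the sketch stops short of the network call so it runs anywhere.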

Taiwan welcomes Blackwell

According to Nvidia, Blackwell is driving generative AI forward.

Huang also announced during his keynote that Taiwan's leading companies are rapidly adopting Blackwell to bring the power of AI to their own businesses.

Taiwan's leading medical center, Chang Gung Memorial Hospital, plans to use the Blackwell computing platform to advance biomedical research, accelerate imaging and speech applications, and improve clinical workflows, ultimately improving patient care.

Young Liu, CEO of Hon Hai Technology Group, said in a press release: “As generative AI transforms industries, Foxconn stands ready with cutting-edge solutions to meet the most diverse and demanding computing needs. Not only are we using the latest Blackwell platform in our own servers, but we are also helping to supply Nvidia with key components, enabling our customers to get to market faster.”

Foxconn, one of the world's largest electronics manufacturers, plans to use Nvidia Grace Blackwell to develop intelligent solution platforms for AI-powered electric vehicles and robotics, as well as a growing number of voice-based generative AI services that provide more personalized experiences to its customers.

Barry Lam, chairman of Quanta Computer, said in a press release: “We are at the center of an AI-driven world where innovation is moving faster than ever before. Nvidia Blackwell is not just an engine; it is the spark that ignites this industrial revolution. As we define the next era of generative AI, Quanta is proud to join NVIDIA on this amazing journey. Together, we will shape and define a new chapter of AI.”

Charles Liang, President and CEO of Supermicro, said: “Our building-block architecture and liquid-cooled rack solutions, combined with our in-house development and global production capacity of 5,000 racks per month, enable us to rapidly deliver a wide range of groundbreaking products based on the Nvidia AI platform to AI factories worldwide. Our high-performance liquid-cooled and air-cooled systems with rack-scale design, optimized for all Blackwell architecture-based products, offer customers an incredible choice of platforms to meet their next-level computing needs and take a giant leap into the future of AI.”

CC Wei, CEO of TSMC, said in a press release: “TSMC is working closely with Nvidia to push the boundaries of semiconductor innovation that will enable it to realize its vision for AI. Our industry-leading semiconductor manufacturing technologies have helped develop Nvidia's groundbreaking GPUs, including those based on the Blackwell architecture.”
