
Nvidia unveils GeForce RTX enhancements for AI digital PC assistants

Nvidia has introduced new RTX technology that powers AI assistants and digital humans on the latest GeForce RTX AI laptops.

The major AI and graphics technology company unveiled Project G-Assist – an RTX-powered AI assistant tech demo that gives contextual help for PC games and apps. The Project G-Assist tech demo debuted with Studio Wildcard's ARK: Survival Ascended.

Nvidia also introduced the first PC-based Nvidia NIM (Nvidia Inference Microservices) for the Nvidia Ace digital human platform. Nvidia made the announcements during CEO Jensen Huang's keynote speech at the Computex trade show in Taiwan.

These technologies are enabled by the Nvidia RTX AI Toolkit, a new suite of tools and SDKs that help developers optimize and deploy large generative AI models on Windows PCs. They complement Nvidia's full-stack RTX AI innovations, which accelerate over 500 PC applications and games and 200 OEM laptop designs. Nvidia is working to spread AI around the world: in data centers, at the edge, and in homes.

In addition, the newly announced RTX AI PC laptops from ASUS and MSI feature GPUs up to the GeForce RTX 4070 and power-efficient systems-on-a-chip with Windows 11 AI PC features.

“Nvidia ushered in the era of AI PCs in 2018 with the release of RTX Tensor Core GPUs and DLSS technology,” said Jason Paul, vice president of consumer AI at Nvidia, in a statement. “Now, with Project G-Assist and Nvidia ACE, we're unlocking the next generation of AI-powered experiences for over 100 million RTX AI PC users.”

Project G-Assist, a GeForce AI assistant

AI assistants will fundamentally change gaming and in-app experiences – from providing game strategies and analyzing multiplayer replays to assisting with complex creative workflows. Project G-Assist offers a glimpse into this future.

PC games offer vast universes to explore and intricate game mechanics to master, which can be challenging and time-consuming even for the most dedicated gamers. Project G-Assist aims to put game knowledge at players' fingertips using generative AI.

Project G-Assist takes voice or text input from the player, along with contextual information from the game screen, and runs the data through AI vision models. These models enhance the context awareness and app-specific understanding of a large language model (LLM) linked to a database of knowledge about the game, and then generate a tailored response that is delivered as text or speech.
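The pipeline described above can be sketched in a few lines of Python. Everything here is illustrative: the function names, the canned screen context, and the keyword-lookup "knowledge base" are stand-ins invented for this sketch, not Nvidia's actual G-Assist API.

```python
# Hypothetical sketch of the G-Assist flow: screen -> vision model ->
# knowledge retrieval -> response. All names here are invented stand-ins.

def vision_model(screenshot: str) -> dict:
    """Stub vision model: a real one would run image inference on the frame.
    Here we simply return canned structured context for the sketch."""
    return {"scene": "swamp biome", "hud": {"health": 42, "level": 7}}

def retrieve_knowledge(query: str, knowledge_base: dict) -> str:
    """Naive keyword lookup standing in for a real game knowledge base."""
    for topic, entry in knowledge_base.items():
        if topic in query.lower():
            return entry
    return "No entry found."

def g_assist(query: str, screenshot: str, knowledge_base: dict) -> str:
    context = vision_model(screenshot)                  # screen -> context
    facts = retrieve_knowledge(query, knowledge_base)   # query -> game facts
    # A real LLM would condition on all three inputs; we just format them.
    return f"[scene: {context['scene']}] {facts}"

KB = {"baryonyx": "The Baryonyx is a mid-tier aquatic predator; tame it with fish meat."}
print(g_assist("How do I tame a Baryonyx?", "frame_001.png", KB))
```

The key structural point survives even in this toy version: the vision model and the knowledge base both feed the generation step, which is what makes the answer specific to the player's current session.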

Nvidia partnered with Studio Wildcard to show off the technology with ARK: Survival Ascended. Project G-Assist can help answer questions about creatures, items, lore, objectives, difficult bosses, and more. Because Project G-Assist is context-aware, it personalizes its responses to the player's game session.

Additionally, Project G-Assist can configure the player's gaming system for optimal performance and efficiency. It can provide insights into performance metrics, optimize graphics settings based on the user's hardware, apply a safe overclock, and even intelligently reduce power consumption while maintaining a performance target.

Debut of the first ACE PC NIM

Nvidia ACE technology for powering digital humans now comes to RTX AI PCs and workstations with Nvidia NIM – inference microservices that help developers reduce deployment times from weeks to minutes. ACE NIMs deliver high-quality inference running locally on devices for natural language understanding, speech synthesis, facial animation, and more.

At Computex, Nvidia ACE NIM's PC gaming debut is showcased in the Covert Protocol tech demo, developed in collaboration with Inworld AI, which now features Nvidia Audio2Face and Nvidia Riva automatic speech recognition running locally on devices.

Windows Copilot Runtime adds GPU acceleration for local PC SLMs

Microsoft and Nvidia are collaborating to help developers integrate new generative AI capabilities into their native Windows and web apps. The collaboration provides application developers with easy application programming interface (API) access to GPU-accelerated small language models (SLMs) that enable retrieval-augmented generation (RAG) capabilities running on-device, powered by Windows Copilot Runtime.

SLMs provide tremendous capabilities for Windows developers, including content summarization, content generation, and task automation. RAG capabilities extend SLMs by giving the AI model access to domain-specific information that is not well represented in the base models. RAG APIs enable developers to tap application-specific data sources and tailor the behavior and features of SLMs to application requirements.
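The RAG pattern described above can be reduced to two steps: retrieve the app-specific documents the base model never saw, then splice them into the prompt. The sketch below uses simple word overlap for retrieval purely for illustration; a real pipeline (including the Windows Copilot Runtime APIs) would use embedding-based search.

```python
# Minimal RAG sketch: retrieve the best-matching document, then build an
# augmented prompt so the SLM can ground its answer in retrieved text.
# Word-overlap scoring is a toy stand-in for embedding similarity.

def score(query: str, doc: str) -> int:
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_rag_prompt(query: str, documents: list[str]) -> str:
    best = max(documents, key=lambda d: score(query, d))   # retrieval step
    return f"Context: {best}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Invoices are exported nightly to the finance share as CSV.",
    "The VPN client must be restarted after every OS update.",
]
print(build_rag_prompt("How are invoices exported?", docs))
```

The point of the pattern is visible even here: the domain fact ("exported nightly to the finance share") ends up in the prompt, so a small model can answer correctly without that fact ever being in its weights.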

These AI capabilities are accelerated by Nvidia RTX GPUs, as well as AI accelerators from other hardware vendors, providing end users with fast, responsive AI experiences across the Windows ecosystem.

The API will be released as a developer preview later this year.

The AI ecosystem has created hundreds of thousands of open-source models for app developers to use, but most models are pre-trained for general purposes and designed to run in a data center.

To help developers build application-specific AI models that run on PCs, Nvidia is introducing the RTX AI Toolkit – a collection of tools and SDKs for model customization, optimization, and deployment on RTX AI PCs. The RTX AI Toolkit will be available for broader developer access in June.

Developers can customize a pre-trained model with open-source QLoRA tools. They can then use the Nvidia TensorRT Model Optimizer to quantize models to consume up to 3x less RAM. Nvidia TensorRT Cloud then optimizes the model for peak performance across the RTX GPU lineup. The result is up to 4x faster performance compared to the pre-trained model.
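The memory savings in the quantization step come from storing weights in fewer bits. The toy sketch below shows the basic idea with block-wise 4-bit quantization in pure Python; it is only an illustration of the principle, not how TensorRT Model Optimizer actually works (which uses far more sophisticated calibration).

```python
# Illustrative block-wise 4-bit quantization: map fp32 weights to signed
# 4-bit ints (-8..7) plus one fp32 scale per block. Toy example only.

def quantize_block(weights: list[float]) -> tuple[list[int], float]:
    scale = max(abs(w) for w in weights) / 7 or 1.0   # avoid zero scale
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_block(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

block = [0.12, -0.7, 0.33, 0.05]
q, s = quantize_block(block)
approx = dequantize_block(q, s)   # lossy reconstruction of the weights

# 4 fp32 weights = 16 bytes; 4 packed int4 weights + 1 fp32 scale = 6 bytes.
fp32_bytes, int4_bytes = len(block) * 4, len(block) // 2 + 4
print(fp32_bytes, int4_bytes)   # → 16 6
```

With realistic block sizes (e.g. 64–128 weights per scale) the per-block scale overhead becomes negligible, which is how large savings such as the "up to 3x less RAM" figure quoted above become possible.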

Now available in early access, the Nvidia AI Inference Manager (AIM) software development kit (SDK) reduces the complexity of AI integration for PC application developers by seamlessly orchestrating AI inference across PCs and the cloud. It also preconfigures the PC with the necessary AI models, engines, and dependencies in a unified NIM format, and supports all major inference backends – including TensorRT, DirectML, Llama.cpp, and PyTorch CUDA – across various processors, including GPUs, NPUs, and CPUs.
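The orchestration job described above boils down to matching available processors to supported backends. The sketch below shows one plausible shape of that logic; the priority order and function names are assumptions made for illustration, not the actual AIM SDK API.

```python
# Illustrative backend selection: probe available processors and pick an
# inference backend. Priority order here is an assumption, not AIM's.

BACKEND_PREFERENCE = [
    ("gpu", "TensorRT"),     # fastest path on RTX GPUs
    ("npu", "DirectML"),
    ("cpu", "Llama.cpp"),    # universal fallback
]

def select_backend(available_processors: set[str]) -> str:
    for processor, backend in BACKEND_PREFERENCE:
        if processor in available_processors:
            return backend
    raise RuntimeError("no supported processor found")

print(select_backend({"cpu", "gpu"}))   # → TensorRT
print(select_backend({"cpu"}))          # → Llama.cpp
```

The value of an orchestration layer like AIM is precisely that application code calls one entry point and the SDK resolves the best backend per machine, rather than every app hand-rolling a table like this.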

Software partners such as Adobe, Blackmagic Design, and Topaz are integrating components of the RTX AI Toolkit into their popular creative apps to accelerate AI performance on RTX PCs.

“Adobe and Nvidia continue to work together to deliver breakthrough customer experiences across all creative workflows, from video to imaging, design, 3D and beyond,” said Deepa Subramaniam, vice president of product marketing for Creative Cloud at Adobe, in a statement. “TensorRT 10.0 on RTX PCs delivers unprecedented performance and AI-powered capabilities for creatives, designers and developers, opening up new creative possibilities for content creation in industry-leading creative tools like Photoshop.”

Components of the RTX AI Toolkit, such as TensorRT-LLM, are integrated into popular generative AI developer frameworks and applications, including Automatic1111, ComfyUI, Jan.AI, LangChain, LlamaIndex, Oobabooga, and Sanctum.AI.

AI for content creation

Nvidia is also integrating RTX AI acceleration into apps for developers, modders, and video enthusiasts.

Last year, Nvidia introduced RTX acceleration with TensorRT for one of the most popular Stable Diffusion UIs, Automatic1111. Starting this week, RTX will also accelerate the hugely popular ComfyUI, delivering up to a 60% performance boost over the currently shipping version and 7x faster performance compared to the MacBook Pro M3 Max.

Nvidia RTX Remix is a modding platform for remastering classic DirectX 8 and DirectX 9 games with full ray tracing, Nvidia DLSS 3.5, and physically accurate materials. RTX Remix includes a runtime renderer and the RTX Remix Toolkit app, which facilitates the modding of game assets and materials.

Last year, Nvidia made RTX Remix Runtime open source, allowing modders to expand game compatibility and improve rendering capabilities.

Since the RTX Remix Toolkit launched earlier this year, 20,000 modders have used it to modify classic games, leading to the development of over 130 RTX remasters on the RTX Remix Showcase Discord.

This month, Nvidia will open source the RTX Remix Toolkit, allowing modders to streamline asset swapping and scene relighting, increase the number of file formats supported by RTX Remix's Asset Ingestor, and expand RTX Remix's AI Texture Tools with new models.

In addition, Nvidia is making the RTX Remix Toolkit's features available through a REST API, so modders can link RTX Remix live with digital content creation tools like Blender, modding tools like Hammer, and generative AI apps like ComfyUI. Nvidia is also providing an RTX Remix Runtime SDK so modders can use RTX Remix's renderer in applications and games beyond the DirectX 8 and 9 classics.
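Driving RTX Remix over a REST API from a tool like Blender would look something like the sketch below. The endpoint path, port, and payload fields are invented for illustration only; the real schema would come from Nvidia's RTX Remix REST API documentation. The request is built but deliberately never sent, since the endpoint is hypothetical.

```python
# Hypothetical sketch of a DCC tool talking to RTX Remix over REST.
# Endpoint, port, and payload fields are invented stand-ins.
import json
from urllib.request import Request

def build_ingest_request(base_url: str, asset_path: str) -> Request:
    payload = json.dumps({"action": "ingest_asset", "path": asset_path})
    return Request(
        url=f"{base_url}/assets",             # invented endpoint
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_ingest_request("http://localhost:8011", "meshes/crate.usd")
print(req.method, req.full_url)   # constructed only; not sent in this sketch
```

A live link of this kind is what lets an artist swap an asset in Blender and see the remastered scene update in RTX Remix without a manual export step.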

As more and more of the RTX Remix platform becomes open source, modders around the world will be able to create even more stunning RTX remasters.

Nvidia RTX Video, the popular AI-powered super-resolution feature supported in the Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, is now available as an SDK for all developers, allowing them to natively integrate AI for upscaling, sharpening, compression artifact reduction, and high dynamic range (HDR) conversion.

Coming soon to Blackmagic Design's DaVinci Resolve and Wondershare Filmora video editing software, RTX Video will let video editors upscale lower-quality video files to 4K resolution, as well as convert standard dynamic range source files to HDR. In addition, the free media player VLC will soon add RTX Video HDR to its existing super-resolution feature.
