Rapt AI, a provider of AI-powered workload automation for GPUs and AI accelerators, has teamed up with AMD to improve AI infrastructure.
The long-term strategic collaboration aims to optimize AI inference and training workloads on AMD Instinct GPUs and to offer customers a scalable, cost-effective solution for deploying AI applications.
As AI adoption accelerates, organizations are struggling with resource allocation, performance bottlenecks and complex GPU management.
By integrating Rapt AI's intelligent workload automation platform with AMD Instinct MI300X, MI325X and MI350 series GPUs, the collaboration provides a scalable, high-performance and cost-effective solution that lets customers maximize AI inference and training efficiency across data center and multi-cloud infrastructures.
A more efficient solution
Charlie Leeming, CEO of Rapt AI, said in a press conference: “The AI models we see today are so large and, above all, so dynamic and unpredictable. The older optimization tools probably don’t fit at all. We have observed this dynamic.”
Leeming said Anil Ravindranath, CTO of Rapt AI, saw the answer: using monitoring to observe the infrastructure.
“We believe we have the right solution at the right time. We came out of stealth last fall. We are in a growing number of Fortune 100 companies, including two of the leading cloud service providers,” said Leeming.
He added: “We have strategic partners, but our conversations with AMD went very well. They are building enormous GPUs and AI accelerators. We are known for getting the maximum workload out of GPUs. Inference is now at the production level, and that calls for the right solution.”
Optimizations that might take nine hours can be done in three minutes, he said. Ravindranath said in a press conference that the Rapt AI platform enables up to 10 times the model-running capacity for the same amount of AI compute and up to 90% cost savings, with no humans in the loop and no code changes. For productivity, that means no more compute and time lost to tuning the infrastructure.
Leeming said other techniques have been around for a while and haven’t cut it. Run AI, a rival, overlaps a little competitively, he said, but his company observes in minutes instead of hours and then optimizes the infrastructure. Ravindranath said Run AI is more of a scheduler, while Rapt AI positions itself to anticipate unpredictable workloads and deal with them.
“We run the model and figure it out, and that’s a great advantage for inference workloads. It just happens automatically,” said Ravindranath.
The benefits: lower costs, better GPU utilization

The companies said that AMD Instinct GPUs, with their industry-leading memory capacity, combined with Rapt's intelligent resource optimization, help ensure maximum GPU utilization for AI workloads, lowering total cost of ownership (TCO).
The Rapt platform streamlines GPU management and eliminates the need for data scientists to spend valuable time on trial-and-error infrastructure configurations. By automatically optimizing resource allocation for their specific workloads, it lets them focus more on innovation than on infrastructure. It seamlessly supports different GPU environments (AMD and others, whether in the cloud, on premises or both) through a single instance, ensuring maximum infrastructure flexibility.
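Rapt has not published how its allocator actually works; purely as a hedged illustration of the kind of decision such a platform automates in place of manual trial-and-error, the sketch below packs profiled jobs onto a mixed GPU pool by free memory. All device names, job names and numbers are hypothetical.

```python
# Hypothetical illustration only: Rapt AI's scheduling internals are not
# public. This shows one simple way an automated allocator might place jobs
# onto a mixed fleet of GPUs based on free memory, with no hand tuning.
from dataclasses import dataclass

@dataclass
class Gpu:
    name: str           # e.g. "MI300X-0"
    free_mem_gb: float  # memory currently available on the device

@dataclass
class Job:
    name: str
    mem_gb: float       # estimated memory footprint (from profiling)

def place_jobs(gpus: list[Gpu], jobs: list[Job]) -> dict[str, str]:
    """Greedy best-fit: put each job on the GPU with the least spare memory
    that still fits it, packing devices densely and leaving others free."""
    placement = {}
    for job in sorted(jobs, key=lambda j: j.mem_gb, reverse=True):
        candidates = [g for g in gpus if g.free_mem_gb >= job.mem_gb]
        if not candidates:
            placement[job.name] = "unschedulable"
            continue
        target = min(candidates, key=lambda g: g.free_mem_gb)
        target.free_mem_gb -= job.mem_gb
        placement[job.name] = target.name
    return placement

if __name__ == "__main__":
    fleet = [Gpu("MI300X-0", 192.0), Gpu("MI325X-0", 256.0)]
    work = [Job("llm-inference", 140.0), Job("embed-service", 24.0),
            Job("finetune-run", 180.0)]
    print(place_jobs(fleet, work))
```

A real system would feed such a heuristic with live telemetry and re-place work continuously; the point here is only that placement becomes a computed decision rather than a manual configuration.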
The combined solution intelligently optimizes job density and resource allocation on AMD Instinct GPUs, resulting in better inference performance and scalability for production AI. Rapt's automatic scaling capabilities also help ensure efficient use of resources based on demand, reducing latency and maximizing cost efficiency.
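As a minimal sketch of demand-based scaling in general, not of Rapt's actual algorithm, the snippet below derives a replica count for an inference service from observed request rate and per-replica throughput; the function name, bounds and figures are illustrative assumptions.

```python
# Hypothetical sketch, not Rapt AI's implementation: a simple demand-based
# autoscaling rule so capacity follows load instead of being over-provisioned.
import math

def desired_replicas(requests_per_sec: float,
                     per_replica_rps: float,
                     min_replicas: int = 1,
                     max_replicas: int = 8) -> int:
    """Scale up when demand exceeds current capacity, scale down when it falls."""
    needed = math.ceil(requests_per_sec / per_replica_rps)
    return max(min_replicas, min(max_replicas, needed))

# Example: 450 req/s against replicas that each sustain ~120 req/s -> 4 replicas.
print(desired_replicas(450, 120))
```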
The Rapt platform works with AMD Instinct GPUs out of the box, helping ensure immediate performance benefits. The ongoing collaboration between Rapt and AMD will drive further optimization in areas such as GPU scheduling, memory utilization and more, so that customers are equipped with a future-ready AI infrastructure.
“At AMD, we are committed to delivering high-performance, scalable AI solutions that enable organizations to unlock the full potential of their AI workloads,” said Negin Oliver, corporate vice president of business development for data center GPU business at AMD, in a statement. “Our collaboration with Rapt AI combines the cutting-edge capabilities of AMD Instinct GPUs with Rapt's intelligent workload automation, enabling customers to achieve greater efficiency, flexibility and cost savings in their AI infrastructure.”