Lightning AI launches next-generation AI compiler “Thunder” to speed up model training

Open source AI development platform Lightning AI, in partnership with Nvidia, today announced the release of Thunder, a source-to-source compiler for the open source machine learning (ML) framework PyTorch. The company says the new offering is designed to speed up the training of AI models by using multiple GPUs more efficiently.

According to Lightning AI, the Thunder compiler achieves up to a 40% speedup when training large language models (LLMs) compared with unoptimized code in real-world scenarios. As for pricing, the company says Thunder is open source under the Apache 2.0 license and freely available.
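
The announcement itself doesn't include code, but the project's public README suggests usage along these lines. This is a minimal sketch only; the entry point shown (thunder.jit) and the package name are taken from the project's launch-era examples and may differ across versions:

```python
# A minimal sketch based on the project's public examples
# (pip install lightning-thunder); API details may have changed.
import torch
import torch.nn as nn
import thunder

# A small stand-in model; in practice this would be an LLM training step.
model = nn.Sequential(
    nn.Linear(2048, 4096),
    nn.ReLU(),
    nn.Linear(4096, 64),
)

# thunder.jit wraps the module in a compiled callable with the same signature.
thunder_model = thunder.jit(model)

x = torch.randn(64, 2048)
y = thunder_model(x)  # executes Thunder-generated, optimized code
```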

Lightning AI made its presence known at Nvidia GTC, where the company unveiled Thunder, which it positions as an answer to the challenge of getting more out of the GPUs already in use rather than simply increasing their number. Since 2022, Lightning AI has been committed to building next-generation deep learning compilation for PyTorch, and Thunder is designed to work alongside PyTorch's own torch.compile, Nvidia's nvFuser, Apex and CUDA Deep Neural Network Library (cuDNN), as well as OpenAI's Triton.
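
That interoperability shows up in the traces Thunder generates, where each operation is routed to one of these backend executors. A sketch of how one might inspect that routing, assuming the thunder.last_traces helper from the project's documentation (an assumption as far as this article is concerned):

```python
# Continuing the sketch above: inspect which executor handled each operation.
# thunder.last_traces is taken from the project's docs and may have changed
# since launch.
import thunder

# thunder_model is the compiled module from the previous example; it must be
# called at least once before any traces exist.
traces = thunder.last_traces(thunder_model)
print(traces[-1])  # the final, executor-annotated program Thunder will run
```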

Formerly known as Grid AI, Lightning AI – the creator of the open source Python library PyTorch Lightning – aims to speed up workloads through its optimizations, potentially competing with other open source communities such as OpenAI, Meta AI and Nvidia.

Led by PyTorch core developer Thomas Viehmann, known for his work on TorchScript and for bringing PyTorch to mobile devices, the company says the compiler speeds up training of generative AI models across multiple GPUs. William Falcon, CEO and founder of Lightning AI, expressed his excitement about working with Viehmann in a press release: “Thomas literally wrote the book on PyTorch. At Lightning AI, he’ll lead the upcoming performance breakthroughs we’ll bring to the PyTorch and Lightning AI communities.”

Between data collection, model configuration and supervised fine-tuning, the model training process can be time-consuming and costly. Add other factors such as technical expertise, management and optimization, and these challenges become even greater. In the age of adversarial AI, attackers are training LLMs to manipulate and deceive AI systems, which can pose a significant threat to companies.

Luca Antiga, chief technology officer of Lightning AI, points out that performance optimization and profiling tools are essential for scaling model training. Without them, a great deal of time, resources and money – to the tune of billions of dollars – will be spent training LLMs. “What we’re seeing is that customers are not taking full advantage of available GPUs and are instead addressing the issue with more GPUs,” Antiga said in a press release. He also noted that when Thunder’s code optimization is combined with Lightning Studios and its profiling tools, customers can use GPUs more effectively and train LLMs faster and at scale.
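
The article doesn't show Lightning Studios' profiling tools themselves, but PyTorch's built-in profiler illustrates the kind of GPU-utilization measurement Antiga is describing. This generic sketch is a stand-in, not Lightning's tooling:

```python
# A generic GPU profiling sketch using PyTorch's built-in profiler
# (not Lightning Studios' tooling, which isn't shown in the article).
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(1024, 1024).cuda()
x = torch.randn(64, 1024, device="cuda")

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    for _ in range(10):
        model(x)

# High CPU time alongside idle CUDA time is the underutilization pattern
# Antiga describes: the GPU is waiting rather than computing.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```
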
Following the company's release of Lightning 2.2 in February, Thunder is now available for use. Lightning Studios products can be purchased at four price levels: free for individual developers; Pro for engineers, researchers and scientists; Teams for startups and teams; and Enterprise for larger organizations.
