Magic, an AI startup that builds models to generate code and automate a variety of software development tasks, has raised a big tranche of money from investors including former Google CEO Eric Schmidt.
In a blog post on Thursday, Magic announced it had closed a $320 million funding round with contributions from Schmidt, as well as Alphabet's CapitalG, Atlassian, Elad Gil, Jane Street, Nat Friedman & Daniel Gross, Sequoia and others. The funding brings the company's total raised to nearly half a billion dollars ($465 million) and catapults it into the group of better-funded AI coding startups whose members include Codeium, Cognition, Poolside, Anysphere and Augment. (Interestingly, Schmidt is also backing Augment.)
In July, Reuters reported that Magic was seeking to raise over $200 million at a $1.5 billion valuation. Clearly, the round exceeded expectations, although the startup's current valuation couldn't be ascertained; Magic was valued at $500 million in February.
Magic also announced on Thursday a partnership with Google Cloud to build two "supercomputers" on the Google Cloud Platform. The Magic G4 will consist of Nvidia H100 GPUs, and the Magic G5 will use Nvidia's next-generation Blackwell chips, which are scheduled to come online next year. (GPUs are often used to train and deploy generative AI models because of their ability to run many calculations in parallel.)
Magic says the goal is to scale the latter cluster to "tens of thousands" of GPUs over time. Together, the clusters will be capable of 160 exaflops, where one exaflop equals one quintillion (10^18) computer operations per second.
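To put that figure in perspective, here is a quick unit-conversion sanity check; the 160-exaflop number comes from Magic, while the per-GPU comparison figure is a rough illustrative assumption, not a vendor spec.

```python
# One exaflop = 10**18 operations per second.
EXAFLOP = 10**18
cluster_ops_per_second = 160 * EXAFLOP
print(f"Aggregate compute: {cluster_ops_per_second:.2e} ops/s")

# Rough scale comparison (assumed figure for illustration only):
# if a single accelerator delivered on the order of 10**15 ops/s
# (a petaflop), the aggregate would equal ~160,000 such devices.
ASSUMED_OPS_PER_GPU = 10**15
equivalent_gpus = cluster_ops_per_second // ASSUMED_OPS_PER_GPU
print(f"Equivalent petaflop-class devices: {equivalent_gpus:,}")
```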
"We are excited to partner with Google and Nvidia to build our next-generation AI supercomputer on Google Cloud," said Eric Steinberger, Magic co-founder and CEO, in a press release. "Nvidia's (Blackwell) system will greatly improve the inference and training efficiency of our models, and Google Cloud offers us the fastest time to scale and a rich ecosystem of cloud services."
Steinberger and Sebastian De Ro co-founded Magic in 2022. In a previous interview, Steinberger told TechCrunch that he was inspired by the potential of AI at a young age; in high school, he and his friends wired up the school's computers to train machine learning algorithms.
That experience laid the foundation for Steinberger's bachelor's degree in computer science at Cambridge (which he dropped out of after a year) and, later, his job as an AI researcher at Meta. De Ro came from the German business process management firm FireStart, where he worked his way up to CTO. Steinberger and De Ro met at ClimateScience.org, an environmental volunteer organization co-founded by Steinberger.
Magic develops AI-driven tools (not yet on the market) to help software developers write, review, debug, and plan code changes. The tools work like an automated pair programmer, trying to understand the context of different coding projects and continually learning more about them.
Many platforms do the same, including the dominant GitHub Copilot. One of Magic's innovations, however, lies in the ultra-long context windows of its models. The model architecture is called "Long-term Memory Network," or "LTM" for short.
The context, or context window, of a model refers to the input data (e.g., code) that the model considers before generating output (e.g., additional code). A simple question such as "Who won the 2020 U.S. presidential election?" can serve as context, as can a movie script, a show, or an audio clip.
As context windows grow, so does the size of the documents (or codebases, as the case may be) that can fit inside them. A longer context can keep a model from "forgetting" the contents of recent documents and data, going off-topic, and drawing incorrect conclusions.
Magic claims its latest model, LTM-2-mini, has a context window of 100 million tokens. (Tokens are chunked bits of raw data, like the syllables "fan," "tas," and "tic" in the word "fantastic.") 100 million tokens is equivalent to about 10 million lines of code, or 750 novels. That is by far the largest context window of any commercial model; the next largest belongs to Google's flagship Gemini models, at 2 million tokens.
Magic says that thanks to its long context, LTM-2-mini was able to implement a password strength meter for an open source project and create a calculator nearly autonomously using a custom UI framework.
The company is currently training a bigger version of this model.
Magic has a small team (about two dozen people) and no revenue to speak of, but it's targeting a market that could be worth $27.17 billion by 2032, according to an estimate by Polaris Research, and investors consider this a worthwhile (and potentially quite lucrative) venture.
Despite the security, copyright, and reliability concerns associated with AI-powered assistive programming tools, developers have shown enthusiasm for them. The vast majority of respondents in GitHub's recent survey said they've adopted AI tools in some form, and Microsoft reported in April that Copilot had over 1.3 million paying users and more than 50,000 enterprise customers.
And Magic has even bigger ambitions than automating routine software development tasks. The company's website describes a path to AGI, an AI that can solve problems more reliably than humans alone.
To develop such AI, San Francisco-based Magic recently hired Ben Chess, a former head of OpenAI's supercomputing team, and plans to expand its cybersecurity, engineering, research and systems engineering teams.