Safe Superintelligence (SSI), a startup co-founded by former OpenAI chief scientist Ilya Sutskever, has secured $1 billion in funding just three months after its founding.
The company, whose goal is to develop “safe” artificial general intelligence (AGI) systems that surpass human intelligence, has reached a valuation of around $5 billion despite having no product and only ten employees.
The funding round, led by top venture capital firms such as Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel, shows that AI firms are still attracting huge amounts of cash.
It also pushes back against skepticism about investment in the AI sector.
When generative AI became mainstream in late 2022, people thought it could change the world at the push of a button.
To some extent that is true, but it is still early days for investors. AI companies founded just a few months ago have attracted billions of dollars, but returns may take longer than expected as these businesses try to earn money from their products.
Putting all that aside, SSI now has a whopping billion dollars at its disposal. Sutskever himself said that after founding the company, he would have no problem raising money.
Ex-OpenAI co-founder Sutskever launched SSI in June with AI investor Daniel Gross and Daniel Levy, a former OpenAI researcher; Gross's investing partner Nat Friedman is also backing the company. Sutskever's exit was one of a series of high-profile departures from OpenAI.
The company plans to use the newly raised funds to secure computing resources and expand its team across its offices in Palo Alto, California, and Tel Aviv, Israel.
“We have identified a new mountain to climb that is a bit different from what I was working on before,” Sutskever told the Financial Times.
“We are not trying to go down the same path faster. If you do something different, then it becomes possible for you to do something special.”
SSI: A unique player in the AI sector
SSI's mission contrasts with the goals of other major AI players such as OpenAI, Anthropic and Elon Musk's xAI, which are developing models with broad consumer and enterprise applications.
Instead, SSI is focused entirely on what it calls a “straight shot to safe superintelligence.”
At its founding, SSI stated: “Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence.”
SSI plans to spend several years on research and development before bringing a product to market.
“It's important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market,” Gross, SSI's CEO, told Reuters.
This allows the team to “scale in peace,” free from the pressures of management overhead, product cycles and short-term commercial demands.
Some doubt whether “safe superintelligence” is even conceptually feasible, but in an AI sector dominated by large language models, this distinctive approach has been welcomed.
The real test, of course, will be whether SSI can achieve its ambitious goals and overcome the complex challenges of developing safe, superintelligent AI systems.
If this small startup succeeds, it will undoubtedly have far-reaching implications for the future of AI and its impact on society.