
OpenAI co-founder Ilya Sutskever launches new startup Safe Superintelligence Inc.

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has announced the launch of his new company Safe Superintelligence Inc. (SSI).

Together with co-founders Daniel Gross of Y Combinator and former OpenAI engineer Daniel Levy, Sutskever aims to tackle what they believe is the most critical problem in AI: developing a safe and powerful superintelligent AI system.

Sutskever believes that AI superintelligence, a loosely defined term for AI that equals or surpasses human intelligence, will be possible within ten years.

The company statement, posted by Sutskever on X, explains: “Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence.”

The founders describe SSI not only as their mission, but also as their name and their entire product roadmap.

“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI,” the statement said.

An antithesis to OpenAI?

While Sutskever and OpenAI CEO Sam Altman have publicly expressed their mutual respect, recent events suggest tensions exist.

Sutskever played a key role in the attempt to oust Altman, which he later said he regretted. He officially resigned in May after keeping a low public profile, leaving observers uncertain about his whereabouts.

This incident and the departure of other key researchers over safety concerns at OpenAI raise questions about the company's priorities and direction.

OpenAI’s “Superalignment team,” whose mission was to align AI with human values and benefit humanity, was effectively dismantled after Sutskever and fellow researcher Jan Leike left the company this year.

Sutskever's decision to leave the company appears to stem from his desire to pursue a project more consistent with his vision for the future of AI development, a vision he seems to believe OpenAI is drifting away from as it moves beyond its founding principles.

Safety-first AI

The risks related to AI are hotly debated.

Although humanity has a primal instinct to fear artificial systems that are more intelligent than we are, and that fear is arguably justified, not all AI researchers believe such systems are possible in the near future.

An important point, however, is that neglecting the risks now could have disastrous consequences in the future.

SSI intends to address safety in parallel with the development of AI: “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead,” the founders explain.

This approach, the founders say, allows SSI to “scale in peace” without being distracted by administrative overhead, product cycles, and short-term commercial pressures.

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the statement stresses.

Putting together a dream team

To achieve its goals, SSI is assembling a “lean, world-class team of the world’s best engineers and researchers, focused exclusively on SSI.”

“We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent,” the statement said.

“If this describes you, we offer you the opportunity to do your life’s work and help solve the most important technical challenge of our age.”

With SSI, another player is entering the ever-growing field of AI.

It will be very interesting to see who joins SSI, and in particular whether there is a strong inflow of talent from OpenAI.
