
Ilya Sutskever's New AI Company Was Born After the Crisis That Nearly Toppled Sam Altman. It Wants to Become What OpenAI Couldn't Be

  • The new company from Sutskever, an OpenAI co-founder, and Daniel Levy, a former OpenAI engineer, is built on the very approach that cost them their place at OpenAI.

  • That approach prioritizes safety over commercial pressures.


Ilya Sutskever, a co-founder and the former chief scientist of OpenAI, has a new company whose name is a declaration of intent, both about itself and about OpenAI, which is now its competitor: Safe Superintelligence (SSI).

Why it matters. The launch, with former OpenAI leaders at the helm, marks a notable step in the pursuit of a safe AI future, precisely the issue that set off an internal battle at OpenAI, a battle that CEO Sam Altman won.

SSI is committed to ensuring that the superintelligent AI systems it develops treat safety as their top priority, in order to avoid potentially catastrophic consequences.

  • It was this cautious approach that put Sutskever at odds with Altman.
  • Jan Leike, another OpenAI heavyweight who is now at Anthropic, accused the company of prioritizing “shiny products” over safety.
  • Both left an OpenAI where Altman has become so powerful that he heads the committee that oversees his own company's developments.

Context. Superintelligence refers to AI systems that surpass human intelligence in every domain. Ensuring the safety of these systems is critical because their capabilities could affect many aspects of our lives.

SSI’s mission, as summarized on its very simple corporate website, is to continually advance AI’s capabilities, with a focus on safety. It seeks to solve this dual challenge through engineering and scientific innovation, free from commercial pressures.

The Challenge. Creating a safe superintelligence requires overcoming several obstacles, both technical and philosophical:

  1. Alignment problem. Ensuring that AI systems act according to human intentions. Current methods, such as reinforcement learning from human feedback (RLHF), may not be enough to oversee AI systems that are significantly smarter than humans (see the sketch after this list).
  2. Ensuring scalability. Current alignment techniques don’t scale to the levels required for superintelligence. New scientific and engineering advances are necessary to develop reliable safety measures.
  3. Balancing speed and safety. SSI aims to rapidly advance AI capabilities while ensuring that safety measures always remain one step ahead. This requires a delicate balance and constant innovation.
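
For readers unfamiliar with the RLHF approach mentioned in point 1, here's a deliberately tiny, hypothetical sketch of its core idea: fitting a reward model to human preference pairs, then using it to rank a model's candidate outputs. Everything here (the toy data, the single word-count feature, the one-parameter model) is a fabricated illustration, not SSI's or OpenAI's actual method; real systems train neural reward models and then optimize a language model against them.

```python
# Toy sketch of the RLHF idea from point 1: fit a Bradley-Terry reward
# model to human preference pairs, then use it to rank candidates.
# All data and features are hypothetical, chosen to keep this self-contained.
import math

# Hypothetical feature extractor: score a response by its word count,
# standing in for the rich features a neural reward model would learn.
def features(response: str) -> int:
    return len(response.split())

# Fabricated human preference data: (preferred, rejected) pairs.
preferences = [
    ("a careful, detailed answer with caveats", "short reply"),
    ("thorough explanation with several steps", "ok"),
    ("nuanced answer covering edge cases", "terse answer"),
]

# Fit a one-parameter reward model r(x) = w * features(x) by gradient
# ascent on the Bradley-Terry log-likelihood of the observed preferences.
w = 0.0
lr = 0.01
for _ in range(200):
    grad = 0.0
    for good, bad in preferences:
        diff = features(good) - features(bad)
        # P(good preferred over bad) = sigmoid(w * diff);
        # (1 - p) * diff is the gradient of its log-likelihood w.r.t. w.
        p = 1.0 / (1.0 + math.exp(-w * diff))
        grad += (1.0 - p) * diff
    w += lr * grad

# Use the learned reward to rank candidate responses. In a real RLHF
# pipeline this step is replaced by policy optimization (e.g., PPO).
candidates = ["yes", "a longer, hedged answer that explains the tradeoffs"]
best = max(candidates, key=lambda c: w * features(c))
print(f"learned weight: {w:.3f}, preferred candidate: {best!r}")
```

The sketch also makes the worry in point 1 concrete: the reward model is only as good as the human judgments behind it, and those judgments stop being a reliable signal once the system being graded is smarter than its graders.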

In perspective. Analysts like Chirag Mehta have expressed doubts about the vagueness of the concept of “safe superintelligence” and its practical implications.

The launch of SSI is likely to attract researchers who are strongly committed to safe AI development and may be disillusioned with much of the industry’s current approach, which prioritizes marketable products over ethical considerations.

  • SSI’s mission aligns with the original goals of OpenAI, which are increasingly in doubt.

Now that the company has been announced, we'll have to wait and see what kinds of people join it and under what terms. It'll also be interesting to see who funds it and who it partners with to improve its chances of achieving its goals.

Image | OpenAI, Xataka On

Related | Microsoft’s Agreement With OpenAI Was Only the First Step. What It (Probably) Wants Is to Get Rid of the AI Company
