
Ilya Sutskever's New AI Company Has a Clear Goal: To Develop a Superintelligence With 'Nuclear'-Level Safety

  • The OpenAI co-founder, who also served as the company's chief scientist, faced criticism after Sam Altman was removed as CEO and later reinstated.

  • Sutskever has recently established Safe Superintelligence Inc. (SSI) with the aim of creating a safe superintelligence.


Javier Pastor

Senior Writer

Computer scientist turned tech journalist. I've written about almost everything related to technology, but I specialize in hardware, operating systems and cryptocurrencies. I like writing about tech so much that I do it both for Xataka and Incognitosis, my personal blog. LinkedIn

In November, OpenAI faced significant turmoil when its board fired CEO Sam Altman, only to have him return to his post shortly thereafter. The aftermath of these events led to the departure of Ilya Sutskever, the company's co-founder and chief scientist. A month after leaving OpenAI, Sutskever has announced the establishment of a new AI company: Safe Superintelligence Inc. (SSI).

Safe Superintelligence Inc. The company was founded by Sutskever alongside Daniel Gross, a former Y Combinator partner who previously worked in Apple's AI division, and Daniel Levy, a former OpenAI engineer. SSI will be based in Palo Alto, California, and Tel Aviv, Israel. Notably, both Sutskever and Gross have roots in Israel.

The challenge. SSI’s official announcement came as a surprise because of how it was made. The founders posted the news on a minimalist website that seems to hark back to an earlier era. On it, they outline their mission to develop a safe superintelligence. The question is: What do they mean by “safe”?

Nuclear safety. In an interview with Bloomberg, Sutskever explained: “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” the latter being the framing Altman has favored in recent years.

Sutskever has always advocated for cautious AI development. The engineer was one of the people who pushed for Altman’s dismissal, and rumors suggested that diverging views on how to develop the company's AI models led to the rift: Altman favored releasing AI models as soon as possible, while Sutskever was more cautious. After the crisis, Sutskever kept a low profile and barely appeared at the GPT-4o launch event.

Although SSI's goal seems almost unattainable, substantial investment appears all but assured given the founders’ impressive backgrounds. However, success isn't guaranteed: Creating a superintelligence that rivals or surpasses human capability is a daunting task. Nonetheless, both OpenAI and Meta have made the quest for AGI a focal point, drawing significant attention and investment.

How can companies ensure safety? While developing an AGI or superintelligence may be feasible, ensuring its safety is less certain. Sutskever claims to have been contemplating this issue for years and has ideas on how to address it. He believes that “at the most basic level, safe superintelligence should have the property that it will not harm humanity at a large scale.” Although this idea sounds reasonable, the challenge lies in making it a reality.

Image | OpenAI

Related | OpenAI Co-Founder Ilya Sutskever Unexpectedly Leaves the AI Company
