OpenAI has put Altman in charge of overseeing responsible AI development at the company, after disbanding the team that previously held that role.
OpenAI has created a new safety and security committee a few weeks after dissolving its “superalignment” team, which was tasked with addressing existential AI risks.
Now, Sam Altman leads the body responsible for overseeing his own company’s developments.
Why does it matter? AI safety is crucial, and scrutiny of the large companies building these systems will only increase. However, at OpenAI, safety seems to take a back seat to the pressure to release spectacular products, according to Jan Leike, a former AI researcher at OpenAI who’s now working for Anthropic.
The team’s disbandment and the departure of its leaders raise questions about OpenAI’s commitment to responsible AI development. And Altman’s leadership of this committee doesn’t inspire the utmost confidence.
What we know. Altman and three OpenAI board members—Bret Taylor, Adam D’Angelo, and Nicole Seligman—will lead the new committee. Its first task is to evaluate and further develop OpenAI’s processes and safeguards over its first 90 days.
After that period, it will present its recommendations to the full board, which will decide how to implement them. The committee will also draw on technical, cybersecurity, and policy experts.
Signs. Reasons for concern:
- The dissolution of the “superalignment” team.
- The resignation of its two leaders.
- Leike’s words on leaving: “Safety culture and processes have taken a back seat to shiny products.”
- OpenAI co-founder Ilya Sutskever was among the board members who briefly ousted Altman, citing a lack of candor. Today, Sutskever is no longer at the company.
- Other researchers left because they disagreed with OpenAI’s new commercial direction.
- After facing backlash, the company walked back a policy that let it claw back vested equity from former employees who declined to sign a non-disparagement agreement.
Between the lines. The announcement of the new committee comes amid scandals and disturbing news. Altman’s leadership neither calms tempers nor dispels suspicions. And that’s without even mentioning the Scarlett Johansson voice incident.
Making Altman head of safety doesn’t quell doubts about whether the company will truly change its priorities or whether it’s just trying to clean up its image—something it isn’t doing well, either. And all of this is happening just as GPT-5 begins to appear on the horizon.
Image | Xataka On and Midjourney
Related | We Thought ChatGPT Was Great for Programming. A New Study Finds That Half of Its Answers Are Wrong