After the recent demo of GPT-4o, everyone started using the same word: Her. Even OpenAI CEO Sam Altman had tweeted it hours before, a reference to the 2013 movie starring Joaquin Phoenix. With its new features, ChatGPT brings us closer to the sci-fi scenario from the movie.
What happened next was unbelievable: Scarlett Johansson, Phoenix’s co-star in Her, revealed that she had been involved in unsuccessful negotiations with OpenAI, which wanted her to be the voice of GPT-4o. In addition, Johansson pointed out the striking resemblance of ChatGPT’s Sky voice to her own. In light of this, the actress has asked for detailed explanations about how OpenAI created the synthetic voice. Is it just a coincidence?
Why it matters. This case may demonstrate a pattern of Altman’s lack of transparency and honesty. Last November, the OpenAI board nearly fired him over another drama he was involved in.
The evidence:
- On the day of the chatbot’s release, Altman posted the word “her” on X (the post has 19 million impressions to date), a reference to the movie. He knew what the new voice sounded like.
- Johansson revealed that Altman offered her the opportunity to voice the chatbot, but she declined. Nonetheless, OpenAI used an imitation of her voice without any consent.
- Altman contacted Johansson again a few days before launch to ask her to reconsider, but he released the product without waiting for her response.
The real issue: consent. Artists and creators don’t want their work used without their permission and without compensation. “No” means “no.” Johansson said no, but that didn’t stop Altman. Ignoring her refusal could not only get him into legal trouble, it’s also disrespectful to the actress. What message does this situation send to smaller creators with less influence and power?
None of this is new. Altman had already created confusion about OpenAI’s structure and his investments in the company through a venture capital firm. Furthermore, Altman's lack of transparency in the Helen Toner case caused the company to go into crisis in November, when the board proceeded to try to fire him.
In the words of AI expert Gary Marcus, what Altman has done is so stupid and blatant, it's practically incomprehensible.
To recap. As regulators increasingly scrutinize these technologies, Altman’s modus operandi is becoming too obvious a pattern to ignore.
In November, the company’s board argued that he needed to be more consistently transparent, even in trivial details. As time passes, Altman’s actions make it clearer than ever why the board felt it needed to take action.
As if that weren’t enough, the recent resignations of several managers at OpenAI reinforce this trend. According to the departing employees, Altman promised OpenAI’s Superalignment team, which is in charge of ensuring responsible AI development, resources that he never delivered. In the end, the CEO dismantled the team.
Image | Xataka On with Midjourney