
Beware of Falling in Love With Your Chatbot: OpenAI Warns That GPT-4o May Reduce Users' Need To Socialize With Humans

The company has assessed the risks of using GPT-4o and has detected a potential risk to the way we relate to each other.


Alba Mora


OpenAI has released the GPT-4o System Card, a report that details the findings of the company's research into the safety (or lack thereof) of its latest generative AI model. The results are striking.

“Medium” risk in persuasion. OpenAI engineers evaluated the risks associated with GPT-4o in various areas such as cybersecurity, potential creation of biological threats, and its autonomy. They found the risk to be “low” in all these areas except for persuasion, where they determined there was a “medium” risk.

Persuasive texts. The evaluation involved trials with GPT-4o’s synthesized voice and texts. The voice posed a low risk, but the texts “marginally cross[ed] into our medium risk threshold.” Specifically, when generating articles on political topics, the tests suggested that in certain cases, these texts could be more persuasive than those created by humans.

An almost human-like voice. More interestingly, the team also evaluated GPT-4o's anthropomorphism, or its ability to imitate human voices, including pauses, intonation, and emotions. A few months back, one of its voices caused quite a stir because of its similarity to Scarlett Johansson's performance as the chatbot Samantha in Spike Jonze's Her. That voice was later withdrawn, but the demos were particularly suggestive. What's more, the replacement voice also carries risks.

Don’t fall in love with your chatbot. The evaluation recognized that GPT-4o’s “audio capabilities” could “facilitate more human-like interactions with the model.” This is an exceptional synthesized voice and, as OpenAI itself acknowledges, it has pros and cons. While it can be useful for lonely people, it can also further isolate them and lead to emotional dependence on a machine.

According to OpenAI’s report:

“Human-like socialization with an AI model may produce externalities impacting human-to-human interactions. For instance, users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships. Extended interaction with the model might influence social norms. For example, our models are deferential, allowing users to interrupt and ‘take the mic’ at any time, which, while expected for an AI, would be anti-normative in human interactions.”

The risk is there. Some experts believe that the OpenAI report is a positive first step but that there are still numerous risks to evaluate. Previous studies have also addressed the impact on human social relationships.

In April, Google's AI laboratory DeepMind released an analysis on the topic. Author Iason Gabriel noted that a chatbot's ability to communicate "creates this impression of genuine intimacy," and that certain voices, such as those created by DeepMind, can be unsettling and raise "questions about emotional entanglement." The Replika platform is familiar with these concerns: Its chatbot has already led some users to develop feelings for it.

This article was written by Javier Pastor and originally published in Spanish on Xataka.

Related | OpenAI Is Close to Making ‘Her’ a Reality. Its New Voice Model Keeps You Company (and Could Make You Fall in Love)
