
Telegram’s Other Big Problem Has Nothing to Do With Russia: Bots That Use AI to Strip People Naked

  • A Wired investigation has revealed how Telegram hosts bots that use AI to strip people naked in photos.

  • The underlying issue is far more serious: detecting and stopping this content is an enormous challenge.


In 2019, generative AI was still in its early stages, but terms like neural networks (often discussed for their positive potential) and deepfakes (usually carrying negative connotations) were already making noise. One of the most notorious scandals of the time was DeepNude, a website that let users undress any woman simply by uploading a photo. Behind it was a neural network trained on more than 10,000 images of naked women.

Though the site had been operating for months, it was shut down just hours after gaining widespread attention. The developer, who claimed his name was Alberto and that he was based in Estonia, closed the platform, saying that “the likelihood of people abusing this” was “too high” and that “the world isn’t ready for DeepNude yet.” That was in 2019.

Today, in 2024, this technology has become a powerful tool whose potential is matched only by the challenge of curbing its misuse. While AI offers numerous positive applications, it can also be put to unethical ends. Telegram bots, for example, can now “undress” people: much like DeepNude, but easier to use and far more publicly accessible. And in 2024, the world still isn’t equipped to handle the challenge.

Four million users. According to an investigation by Wired, at least 50 Telegram bots whose sole purpose is to generate nude images or videos of real people collectively attract more than four million users per month. The outlet reports that two of these bots have more than 400,000 monthly users each, and that 14 others have more than 100,000.

That means millions of people who have (potentially) generated nude images of others without their consent. By all accounts, this violates data protection, privacy, honor, and the right to one’s own image. Far from innocent, it’s a practice that can, and does, upend people’s lives. Between 2022 and 2023, deepfake pornographic content increased by 464%, according to Home Security Heroes’ State of Deepfakes study. Ninety-nine percent of it features women.

How the bots work. Wired reports that the developers of these bots advertise them with messages like “I can do anything you want with the face or clothes of the photo you give me.” Most require users to purchase tokens with real money or cryptocurrency; whether they deliver the promised results or are simply a scam is another story. Some bots let users upload photos of a person, which Wired says trains the AI to generate more accurate images of them. Others don’t advertise themselves as “undressing” bots but link to bots that are.

The underlying problem. The issue isn’t that users can find these bots on Telegram, but how hard it is to curb the content they produce. The platform, sometimes described as a deep web in itself, has repeatedly drawn controversy over exactly this kind of thing.

The latest case is relatively recent: the arrest of its founder. French authorities arrested Telegram CEO Pavel Durov for allegedly enabling criminal activity on the platform through a lack of moderation. Telegram defended itself by saying it’s “absurd to claim that a platform or its owner is responsible for the abuse of that platform.” After his arrest, Durov said that moderation would become a priority for the service.

The underlying problem is how complicated it is to curb the creation and distribution of this type of content.

Since Wired published its article, however, Telegram has removed the channels and bots the outlet reported. Still, these are surely not all that exist. As mentioned earlier, Telegram is like a deep web in itself, and it gives users all the tools they need to find this content, including a built-in search engine.

Fighting these bots is complex. The fight against deepfakes is “basically a lost cause,” actress Scarlett Johansson said back in 2019. She was one of the first victims of pornographic deepfakes (and far from the only one). Today, in 2024, the situation remains largely the same: big tech companies have made some moves, but deepfakes continue to run rampant.

An example of an AI-generated image that circulated on X during the devastation of Hurricane Helene.

Today’s tools make it even easier. Want a picture of former Microsoft CEO Bill Gates holding a gun? Or one of singer Taylor Swift in lingerie, or endorsing presidential candidate Donald Trump? You can create them right in Grok, X’s AI chatbot. And although some platforms, like Midjourney or DALL-E, block controversial requests, anyone with free time, a bad idea, and a simple Internet search can train their own model to do who knows what.

Examples. You can find as many as you like. The most recent include deepfakes generated after the devastation of Hurricane Helene. In South Korea, the problem of deepfake porn has reached the highest levels of government and become a matter of national interest. The government recently passed a series of laws imposing prison sentences and fines for creating and even viewing fake content. According to Reuters, anyone who purchases, saves, or watches this material could face up to three years in jail or a fine of up to 30 million won ($21,765). Telegram has also played a significant role in the spread of synthetic pornographic content in South Korea.

What has the industry tried? One approach is to tag AI-generated content with invisible watermarks. For now, it’s largely up to creators to flag content as synthetic (Instagram and TikTok, for example, offer tools for this), but an automatic watermark could prevent, or at least reduce, the spread of fake content and fake news. It would also allow for early detection.

However, global implementation isn’t yet the norm, and the challenge is much greater when it comes to synthetic pornographic content: it’s not just about platform moderation but about early detection and preventing harm. A watermark doesn’t solve the problem, but it would help identify AI-generated content.

Watermarking of AI-generated content proposed by OpenAI. Image | OpenAI
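To make the idea concrete, here’s a minimal Python sketch of how an invisible watermark can work in principle: it hides a short origin tag in the least-significant bits of an image’s red channel. This is a toy illustration only, not OpenAI’s proposal or any production scheme; real systems such as Google’s SynthID are engineered to survive compression, resizing, and cropping, which this naive approach does not. The TAG marker and file paths are hypothetical.

```python
# Toy illustration only: real watermarking schemes (e.g., Google's
# SynthID or C2PA provenance metadata) are built to survive compression,
# resizing, and cropping; this naive least-significant-bit trick is not.
import numpy as np
from PIL import Image

TAG = b"AI-GENERATED"  # hypothetical origin marker (12 bytes = 96 bits)

def embed_watermark(in_path: str, out_path: str) -> None:
    """Hide TAG in the least-significant bits of the red channel."""
    img = np.array(Image.open(in_path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(TAG, dtype=np.uint8))  # 96 bits
    red = img[..., 0].flatten()
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits  # overwrite LSBs
    img[..., 0] = red.reshape(img.shape[:2])
    Image.fromarray(img).save(out_path, format="PNG")  # lossless format
```

The point of the sketch is the workflow rather than the technique: the generator stamps every output with a machine-readable mark of origin, and anyone downstream can check for it.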

For watermarking to be effective, every model and tool that generates synthetic content would have to implement it, not just commercial services but also the models users can run locally. That way, all AI-generated content would carry a watermark of origin, making detection by platform systems easier. But it’s one thing to say it and quite another to implement it.
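On the platform side, detection then amounts to reading that tag back. Continuing the same toy sketch (same assumptions, same hypothetical TAG):

```python
# Companion to the embedding sketch above: check whether the
# hypothetical origin tag is present in an image's red-channel LSBs.
import numpy as np
from PIL import Image

TAG = b"AI-GENERATED"  # must match the marker used at generation time

def detect_watermark(path: str) -> bool:
    img = np.array(Image.open(path).convert("RGB"))
    lsbs = img[..., 0].flatten()[: len(TAG) * 8] & 1  # read the LSBs back
    return np.packbits(lsbs).tobytes() == TAG
```

Even this trivial check shows why universal adoption matters: a detector can only flag what every generator agrees to embed, and a single JPEG re-encode would already erase this naive tag.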

Image | Yuri Samoilov

Related | This Is a Ferrari Executive's Method to Avoid Deepfake Scams: Ask the Scammer a Question That Only the CEO Would Know the Answer to
