South Korea is taking strong action against deepfake porn, which has become a significant social problem. In response to a surge in complaints about the use of artificial intelligence to create fake sexual images and videos, South Korean authorities have decided to tighten the country’s legislation.
Lawmakers have approved a new bill that punishes those who create and spread sexual deepfakes. It also goes after those who intentionally purchase, store, or view this type of content. The penalties include heavy fines and even prison sentences of several years.
What happened. South Korea is looking to strengthen its laws to combat deepfake pornography. Currently, individuals who create such content for distribution can face up to five years in prison and fines of up to ₩50 million (about $38,000). However, authorities aim to expand the scope of punishment to include those who possess or view this type of content.
To achieve this, the new bill will impose fines and jail time on individuals who knowingly possess pornographic content containing fake images created with AI. After a parliamentary committee approved the revision to the act on Wednesday, lawmakers passed the bill on Thursday, Reuters reported.
3 years of jail time. Those who knowingly and intentionally store or view sexually explicit deepfake content will face serious consequences. According to The Korea Times, the revised act allows for sentences of up to three years in prison or fines of up to ₩30 million (about $22,900). The aim is to discourage anyone from buying, storing, or viewing deepfake material in order to reduce its spread.
Political debate. The issue has gained significant attention in the country. As a result, South Korean authorities have approved another regulation to combat illegal deepfake content and provide support for victims. Additionally, the parliamentary committee agreed to revise the law to impose harsher penalties on those who use sexual material to blackmail children or adolescents.
Looking at the official figures. South Korea’s actions aren’t surprising. In late August, President Yoon Suk Yeol ordered measures to address the impact of deepfake pornography. According to Yoon, deepfake porn exploits technology under the cover of anonymity and is “a clear criminal act.”
South Korea’s National Police Agency has released figures that show the extent to which this type of content has become a major problem in the country. So far in 2024, purported victims have reported 812 deepfake-related sex crimes, leading to the arrest of 387 suspects.
Even more alarming, almost half of these reports (367) were filed in the last month, after authorities launched a special campaign to prosecute this type of crime. Of those arrested, 83.7% were teenagers, and 66 were under the age of 14, meaning they were legally exempt from criminal punishment. Another 13% were in their twenties.
“An epidemic.” The Asian country isn’t the only nation dealing with the significant challenge of deepfakes. Cases have also been reported in the U.S. In South Korea, however, the issue is so widespread that Human Rights Watch has publicly acknowledged its concern.
“South Korea faces an epidemic of digital sex crimes, with hundreds of women and girls targeted through deepfake sexual images being shared online,” Heather Barr, associate director of Human Rights Watch’s Women’s Rights Division, stated in a post. Sungshin Bae, a researcher and official with the South Korean Supreme Prosecutor’s Office, also recently spoke of a “crisis” in The Conversation.
“I was petrified.” In her article, Bae wrote that “AI is fueling a deepfake porn crisis in South Korea.” She also reported that the startup Security Heroes recently analyzed nearly 96,000 AI-generated sex videos from various sources and found that just over half of the material, about 53%, featured South Korean singers and actresses.
The issue also impacts women outside the spotlight, like Heejin, a pseudonym for a university student. She recently told the BBC how she felt upon discovering an AI-generated pornographic image of herself circulating in a chat room. “I was petrified, I felt so alone,” she said.
In August, The Guardian reported that the 220,000 members of a Telegram chat room created and shared manipulated images of women, including young girls, university students, teachers, and military personnel. They used photos taken from social media platforms like Instagram to create these images.
Image | 卡晨
Related | New Tool for Creating Deepfakes for Free Rekindles Debate on the Dangers It Poses