South Korea appears to be in the grip of a non-consensual deepfake porn crisis, with female entertainment stars the primary targets of this growing form of online sex crime. The alarming trend has drawn the attention of authorities and activists alike, as a growing number of women find themselves the subjects of digitally manipulated content. As the technology advances, the consequences of such exploitation are becoming increasingly severe.
The startup Security Heroes found that of 95,820 sexually explicit deepfake videos it analyzed, over half featured South Korean singers and actresses, a statistic that highlights the extent to which deepfake technology is being misused to create damaging, unauthorized content. Deepfake porn complaints concern digitally altered videos or images in which a person's likeness is superimposed onto explicit material and shared online without their consent.
According to South Korea's police agency, 297 cases of deepfake crimes of a sexual nature were reported in the first seven months of 2024, up from 180 in all of 2023 and nearly double the number reported in 2021. Among the 178 individuals charged, 113 were teenagers, underscoring how heavily this issue affects younger demographics.
Deepfake technology uses deep learning algorithms to fabricate highly realistic depictions of individuals. Because it combines artificial intelligence and machine learning techniques, it has become increasingly difficult to distinguish genuine content from manipulated video. This poses serious concerns for privacy and consent, especially for women in the entertainment industry.
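Face-swap deepfakes of this kind are commonly built on a shared encoder paired with one decoder per identity: the encoder learns identity-agnostic pose and expression, while each decoder learns to render one person's face. The sketch below is a minimal, illustrative PyTorch version of that idea; the class names, layer sizes, and 64x64 face crops are assumptions chosen for demonstration, not the implementation of any specific tool.

```python
# Minimal conceptual sketch (assuming PyTorch) of the shared-encoder /
# per-identity-decoder autoencoder behind classic face-swap deepfakes.
# Shapes, layer sizes, and training details are illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

# One shared encoder, two decoders: A (source identity) and B (target identity).
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (not shown) reconstructs faces of A through decoder_a and faces of B
# through decoder_b, so the encoder captures pose and expression rather than identity.
# The "swap" at inference time: encode a face of A, then decode it with B's decoder.
face_of_a = torch.rand(1, 3, 64, 64)      # stand-in for a real face crop
swapped = decoder_b(encoder(face_of_a))   # renders identity B with A's pose
print(swapped.shape)                      # torch.Size([1, 3, 64, 64])
```

Because the two decoders share a single encoder, whatever pose or expression appears in the source footage can be re-rendered with a different person's face, which is why the results can be convincing enough to fool casual viewers.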
Lilian Coral, Vice President of Technology & Democracy Programs and Head of the Open Technology Institute at New America, highlighted the issue, stating, "One of the most popular uses of generative AI is image creation. We can now quickly produce any kind of image we can think of for relatively little to no cost." The downside of this accessibility is the rapid rise in deepfakes, particularly non-consensual sexual images.
Digital sexual exploitation, especially of women, is not a new phenomenon. The Global Initiative Against Transnational Organized Crime reported a staggering 1,530 percent increase in deepfake cases between 2022 and 2023 in the Asia-Pacific region, the second-highest increase globally, behind only North America, a pattern that calls for urgent action and awareness to protect individuals from such violations.
Earlier this year, AI-generated images of celebrities like Taylor Swift circulated on social media, raising additional concerns about the impact of deepfakes. Coral noted that even with existing legislation, addressing the issue requires concerted efforts from social media platforms and public pressure to compel them to take action against such content.
The community of Taylor Swift fans, known as Swifties, played a crucial role in combating the non-consensual images, identifying their sources and flooding platforms with #ProtectTaylorSwift posts. This grassroots activism shows how collective effort can drive meaningful change and promote accountability among tech companies.