Deepfake crimes using AI technology are increasing. /Photo from Pexels

   In January 2024, the social networking service X (formerly Twitter) caused a stir around the world when sexually explicit images manipulated to show Taylor Swift’s face spread across the platform. Although X eventually removed the images on the 24th, they had already garnered more than 47 million views. Many other famous overseas celebrities have also suffered from deepfake media. Deepfake is a technology that uses AI to superimpose a specific person’s face onto an image or video. While advancements in deepfake technology add convenience to many aspects of people’s lives, they also give rise to increased criminal activity and various forms of misuse, prompting calls for legislative measures to address the challenges the technology poses. The Dongguk Post examines these crimes and the regulatory bills being used to fight deepfakes.

 

Current status and problems of deepfake technology

   Deepfake is used in various fields such as advertising and political propaganda, and it has clear advantages: it produces attractive content easily and puts media-production tools within anyone’s reach. Companies developing such AI technology have therefore received significant attention recently. OpenAI, Nvidia, Adobe, and Lyrebird have studied the development and implementation of deepfakes, and the field is growing rapidly. Applications using the technology have also been released and are in active use. For example, the facial synthesis app “Reface” can create a deepfake picture in one minute with a weekly membership worth just KRW 6,500; users swap faces simply by pressing the “Face Swap” button. In this way, any user of the app can create and utilize whatever deepfake content they want.
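   To illustrate how little machinery the basic “swap” operation involves, below is a minimal classical sketch using OpenCV: detect a face in each photo, resize one face onto the other’s location, and blend it in. Commercial apps such as Reface rely on learned generative models rather than this simple cloning, and the file names here are hypothetical.

```python
# Minimal classical face-swap sketch (not Reface's actual method).
# It only illustrates the basic mechanics: detect, resize, blend.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(img):
    """Return (x, y, w, h) of the first face found in the image."""
    faces = detector.detectMultiScale(
        cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 1.3, 5)
    if len(faces) == 0:
        raise ValueError("no face found")
    return faces[0]

src = cv2.imread("source_face.jpg")   # hypothetical input files
dst = cv2.imread("target_photo.jpg")

sx, sy, sw, sh = first_face(src)
dx, dy, dw, dh = first_face(dst)

# Resize the source face to fit the target face's bounding box.
patch = cv2.resize(src[sy:sy + sh, sx:sx + sw], (dw, dh))

# Blend the patch into the target photo with seamless cloning.
mask = 255 * np.ones(patch.shape, patch.dtype)
center = (dx + dw // 2, dy + dh // 2)
out = cv2.seamlessClone(patch, dst, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", out)
```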

   Deepfake began to become widely known in Korea around 2019. Since the underlying technology became open-source, it has been used in many industrial fields, and in the process deepfake content has been abused in the media, causing one ethical problem after another. Today, many celebrities are targeted by deepfake crimes using AI technology. For instance, deepfake videos of U.S. President Joe Biden making hate speech against transgender people were generated and spread on social media.

   As global efforts are being made, Korea is also trying to prevent the abuse of deepfakes. Recently, the National Election Commission banned election campaigning that uses deepfake videos to prevent the damage they can cause. In addition, when a campaign is suspected of using deepfake media, a legal procedure determines whether the images violate the law. The process consists of three steps: 1) monitoring by verification personnel, 2) screening with a deepfake identification program, and 3) verification by AI experts.

   Deepfake content is difficult to distinguish from reality, which allows fake news to reach people convincingly. There is also growing concern that the general public, not just celebrities, may become targets of deepfake crimes. Many experts therefore say that strict legal sanctions and regulations at the distribution stage are necessary to prevent crimes that abuse AI technology.

   

Regulation plans for crimes using AI technology

   What are the ways to solve deepfake crimes? Efforts are needed on both the technical and the legal front. On the technical side, deepfake detection technology and methods to identify and verify deepfake products are required, and several companies are currently researching AI-based deepfake detection algorithms. On the legal side, as deepfake crimes increase, countermeasures are expected to be strengthened further: to punish deepfake abuse, various countries are tightening regulations and establishing legal systems at the national level.
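   As a rough sketch of what such research involves, a deepfake detector can be framed as a binary image classifier. The PyTorch example below is a toy illustration under that assumption, not any company’s production system; the tiny network and the dummy training batch stand in for real architectures and curated datasets.

```python
# Toy deepfake detector: a tiny CNN binary classifier in PyTorch.
# Real research systems use far larger models and labeled datasets;
# the dummy batch below stands in for real face crops.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    """Map a face crop to a single real-vs-fake logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DeepfakeDetector()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 crops.
images = torch.randn(8, 3, 224, 224)          # stand-in face crops
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real
opt.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
opt.step()
print(f"training loss: {loss.item():.3f}")
```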

   As crimes using AI technology spread, governments around the world are proposing countermeasures. In October last year, the Biden administration in the United States issued an order requiring watermarks on deepfake content. The European Union (EU) introduced legislation that obliges platform companies to label AI products, and the Chinese government began regulating deepfakes in January last year.

   Korea seems to be joining the move, too. Kim Seung-soo, a lawmaker of the People Power Party (PPP), proposed a revision to the Information and Communication Network Act that mandates watermarks when false information created with such AI technology is posted online. Under the proposal, violators face a fine of up to KRW 10 million and the video is deleted.
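   The watermarking idea behind these rules can be sketched in a few lines: embed a short identifier invisibly in an image so that downstream tools can flag it as AI-generated. The toy example below hides a label in the least significant bits of pixel values; real schemes, and certainly any scheme a law would mandate, are far more robust, and the “AI-GENERATED” tag is hypothetical.

```python
# Toy invisible watermark: hide a short tag in pixel LSBs.
# Only an illustration of the concept, not a mandated scheme.
import numpy as np

TAG = "AI-GENERATED"  # hypothetical label to embed

def embed(img: np.ndarray, tag: str = TAG) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = img.reshape(-1).copy()
    # Overwrite the least significant bit of the first len(bits) bytes.
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(img.shape)

def extract(img: np.ndarray, length: int = len(TAG)) -> str:
    bits = img.reshape(-1)[:length * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed(image)
print(extract(marked))  # -> "AI-GENERATED"
```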

   Beyond the government level, many companies and businesses are working to keep fake news from spreading and to help citizens use safer, more reliable technology. For example, in November 2022, Intel released “FakeCatcher,” a deepfake detector that judges authenticity by analyzing subtle changes in the blood flow visible in the pixels of a person’s face in a video, a signal present only in real humans. In addition, Google, TikTok, and other platforms require that AI-generated content be labeled when it is uploaded.
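   Intel has not published FakeCatcher’s internals, but the underlying idea, known in research as remote photoplethysmography, can be sketched: a real face’s skin color pulses faintly with the heartbeat, so the average green-channel brightness over the face region should show a dominant frequency in the human heart-rate band. The sketch below assumes this simplified approach and a hypothetical “video.mp4”.

```python
# Hedged sketch of remote photoplethysmography (rPPG), the idea
# behind blood-flow-based deepfake detection. Not Intel's code.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("video.mp4")  # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    faces = detector.detectMultiScale(
        cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        # Mean green-channel brightness of the face region per frame.
        signal.append(frame[y:y + h, x:x + w, 1].mean())
cap.release()

if len(signal) < int(2 * fps):
    raise SystemExit("not enough face frames to estimate a pulse")

sig = np.asarray(signal)
sig -= sig.mean()
freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
power = np.abs(np.fft.rfft(sig))
peak = freqs[np.argmax(power)]
# A plausible human pulse lies roughly between 0.7 and 3.0 Hz
# (about 42-180 bpm); a peak far outside that band is one weak
# hint that the face may be synthetic.
print(f"dominant frequency: {peak:.2f} Hz")
```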

   Attention at the personal level is also needed to prevent crimes using deepfake and AI technology. SNS users need to evaluate information critically and with a balanced perspective rather than unconditionally accepting the videos circulating on social media. Furthermore, media literacy skills, the ability to express and communicate creatively on the basis of reliable information, will be increasingly required in the future.

   Various institutions and organizations provide programs to help media users become digital citizens. For example, the Korea Press Promotion Foundation and the National Library of Korea held the week-long “Media Literacy Academy for Media Consumers” in 2022, with lectures on sharing and distributing the kinds of information often encountered in daily life, such as deepfake videos, advertisements, and news. This reflects a social understanding that practicing media literacy takes effort at the individual level but is also a competency society as a whole needs.

 

   As such, AI technology is developing rapidly and being applied across industries so that people can enjoy higher-quality lives; in content creation, it enables more professional and attractive work. At the same time, because the dangers of deepfakes remain, we must acknowledge that if AI technology is abused, anyone can become a victim or a perpetrator. The Dongguk Post suggests that we stay alert and strive to use the technology in a positive direction.
