On May 30, 2023, the Center for AI Safety (CAIS), a United States (U.S.) non-profit organization, issued a short but clear joint statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." The statement was signed by more than 350 AI experts, including Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Geoffrey Hinton, a professor at the University of Toronto.

   Immediately after the emergence of generative AI, a series of optimistic forecasts appeared: generative AI would lift global economic growth by 7%, dramatically accelerate drug development, and protect personal information through synthetic data. However, after the advent of GPT-4, as generative AI technology progressed faster than expected, calls for regulation began to grow among the very AI experts who had led its development. The statement appears to have emerged from this chorus of concern.

   Since the emergence of generative AI, regulatory efforts at the national and international levels have become visible. In Korea, by contrast, there has been no comparable regulatory movement on generative AI. The government has worked to establish AI ethics, publishing "AI Ethics Standards" for "people-centered artificial intelligence" in 2020 and a guide to developing trustworthy AI in 2022. In 2023, following the lead of Europe's AI Act, a bill on fostering the artificial intelligence industry and establishing a foundation of trust passed a subcommittee of the National Assembly. The reality, however, is that these AI ethics measures have not been updated and supplemented to reflect the emergence of generative AI. In particular, because the domestic AI bill focuses on fostering AI companies, it contains neither strengthened regulations for companies deploying AI in high-risk areas nor safeguards against the erosion of personal information protection caused by generative AI.

   Because the focus is on fostering companies, it is difficult to require them to conduct safety checks and risk-mitigation measures before launching their models, as the European Union (EU) does. The most pressing issue at the recent U.S. congressional hearing was the impact of AI-generated fake news on elections. Korea likewise needs regulatory measures for the increasingly sophisticated deepfakes produced by generative AI ahead of next year's general elections. A bill reportedly pending in the National Assembly would require AI-generated videos to be labeled as 'virtual images' and would punish candidates who spread false information or slander opponents through such videos; enacting it seems urgent.

   Copyright infringement against creators is also likely to occur frequently in Korea, so the copyright laws need to be overhauled quickly. Discussions are under way abroad on attaching copyright marks to training data, and a domestic debate on this also seems necessary. Meanwhile, IBM has announced a pause in hiring for back-office administrative positions that are likely to be replaced by generative AI. As job displacement becomes visible, Korea also needs to prepare countermeasures.

   Some argue that it is premature to discuss regulation while domestic generative AI companies have not yet grown. However, regulations should be prepared preemptively, as guidelines for corporate conduct, so that negative effects can be blocked in advance at the design and development stage of generative AI. Doing so could in fact accelerate the development of a trustworthy and socially responsible "Korean-style generative AI."
