#Technology

AI-Generated Content Fools Media, Fueling Distrust in Images and Verification

By reporter Ryu Geun-woong · 11/2/2025, 5:40:49 PM
Artificial intelligence (AI)-generated content is slipping past even media outlets' verification systems, deepening a broader decline in trust in both images and the verification process itself. With AI detection tools also making repeated errors, concerns are growing about the spread of false information.

Posts highlighting these loopholes have recently spread on social media, sparking controversy. One user wrote on X (formerly Twitter) that 'AI detection tools can't even properly distinguish whether a photo is AI-generated or not.' The post drew more than 2.2 million views, underscoring a clear limitation of the technology.

Errors by AI image identification tools are also on the rise. A prime example is the photo of Nvidia CEO Jensen Huang, Samsung Electronics Chairman Lee Jae-yong, and Hyundai Motor Group Chairman Chung Euisun sharing a 'chicken and beer meeting' at a Kkanbu Chicken restaurant in Gangnam, Seoul, on October 30. One AI image identification tool incorrectly judged the photo to be 'fake,' and the AI detection site 'Undetectable AI' made the same error, classifying the photo as '1% real (almost certainly AI-generated).' Such cases show how unreliable AI detection tools can be.

Media outlets have also fallen victim to AI-generated fake news. On October 31 (local time), Fox News reported on the suspension of the Supplemental Nutrition Assistance Program (SNAP) during the government shutdown, citing TikTok videos that appeared to show citizens protesting the suspension. It was later revealed that the videos were false and AI-generated. Fox News issued a correction stating, 'This article was published without clarifying that some of the videos appeared to be AI-generated.' The case highlights how vulnerable media outlets' verification systems remain in the face of sophisticated AI forgeries.

Surveys indicate that trust in AI-generated news content remains low. According to the 'Generative AI and the News Report 2025,' released in October by the Reuters Institute for the Study of Journalism, only 12% of respondents said they 'feel comfortable with news entirely written by AI.' The report, based on a survey of 12,565 people in six countries (the United States, Japan, the United Kingdom, etc.) conducted between June 5 and July 15, reveals the public's low confidence in AI-generated content.

As AI technology advances, the production and distribution of fake information is also becoming more sophisticated. Experts stress the urgent need to develop better AI identification technologies and to strengthen media outlets' own verification systems, so that false information does not cause wider social disruption.

Discussions are also under way on the ethical issues surrounding AI-generated content. Beyond in-depth research into its social impact, calls for relevant legislation are growing, and a social safety net must be built alongside the technology's development. Ultimately, there is rising concern that indiscriminate trust in AI-generated content could threaten the entire information ecosystem. Keeping pace with the rapid advance of AI, multifaceted efforts are needed, including stronger media literacy education and established fact-checking systems.
