Workshop Proceedings of the 18th International AAAI Conference on Web and Social Media
Workshop: Digital State Sponsored Disinformation and Propaganda: Challenges and Opportunities
DOI: 10.36190/2024.66

There is growing concern about misinformation and propaganda spread through AI-generated content that is often indistinguishable from human-made content. In response, major platforms (e.g., Google, Meta, TikTok) have introduced policies to warn users about AI-generated content. One potential challenge is that divergent public perceptions of AI may lead to different reactions to such warnings. While some see AI's potential benefits for society, others are more pessimistic about its potential risks. It is not yet clear how these polarized attitudes affect efforts to combat misinformation. Our experimental study investigated how people's attitudes toward AI influenced their perceptions of AI-generated posts. In our experiment, participants were asked to report, in an open-ended response format, what factors influenced their judgments about the accuracy of AI-generated video content. The study found that most participants relied on their pre-existing knowledge and beliefs to evaluate AI-generated posts, even when shown a warning that the content was AI-made. Interestingly, some participants uncritically rated the accuracy of all of the videos positively or negatively, in line with their positive or negative beliefs about AI. These findings suggest that attaching a simple warning to GenAI-made content may be insufficient, as such warnings have varying effects on users, ranging from no reaction to underestimated or overestimated reactions.