Workshop Proceedings of the 19th International AAAI Conference on Web and Social Media
Workshop: CySoc 2025: 6th International Workshop on Cyber Social Threats
DOI: 10.36190/2025.12

Misinformation on social media often involves logical fallacies, challenging traditional fact-checking methods and increasing demand for collective correction approaches such as Community Notes. This study uses large language model (LLM)-based agent simulations to analyze the effectiveness of various counter-argument strategies against logical fallacies and the polarization induced by evaluators' stances. We evaluated 10 CALSA-based rebuttal patterns against 13 common logical fallacies using independent and misinformation-aligned simulated agents. Results indicate that the "No Evidence" strategy was widely effective across fallacies, functioning as a well-balanced rebuttal that helps curb polarization. Meanwhile, in many cases we observed a "Persuasiveness-Polarization Dilemma," wherein strategies with higher persuasiveness can also increase polarization. Furthermore, we found that objective strategies, which are less likely to trigger psychological resistance among misinformation supporters, achieved both high persuasiveness and lower polarization risk. Our findings offer practical guidelines for designing effective misinformation corrections with reduced polarization risks.