Workshop Proceedings of the 19th International AAAI Conference on Web and Social Media
Workshop: MisD 2025: 1st Workshop on Misinformation Detection in the Era of LLMs
DOI: 10.36190/2025.29

Inconsistent political statements are a form of misinformation. They erode public trust and pose challenges to accountability when they go unnoticed. Detecting inconsistencies automatically could support journalists in asking clarification questions, thereby helping to hold politicians accountable. We propose the Inconsistency Detection task and develop a scale of inconsistency types to encourage NLP research in this direction. As a resource for detecting inconsistencies in the political domain, we present a dataset of 698 human-annotated pairs of political statements, with explanations of the annotators' reasoning for 237 samples. The statements mainly come from voting-advice platforms such as Wahl-O-Mat in Germany and Smartvote in Switzerland, and thus reflect real-world political issues. We benchmark Large Language Models (LLMs) on our dataset and show that, in general, they are as good as humans at detecting inconsistencies, and may even be better than individual humans at predicting the crowd-annotated ground truth. However, when it comes to identifying fine-grained inconsistency types, none of the models reaches the performance upper bound (set by the natural labeling variation among annotators), leaving room for improvement. We make our dataset and code publicly available.
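To illustrate the task setup, the following is a minimal sketch of pairwise inconsistency detection with an LLM. The prompt wording, the coarse two-way label set, and the `query_llm` callable are illustrative assumptions for this sketch, not the paper's exact protocol or label scale.

```python
from typing import Callable

# Hypothetical coarse label set; the paper defines a finer-grained
# scale of inconsistency types.
LABELS = ["consistent", "inconsistent"]


def build_prompt(statement_a: str, statement_b: str) -> str:
    """Build a zero-shot prompt comparing two political statements
    by the same actor (illustrative wording, not the paper's prompt)."""
    return (
        "You will see two political statements made by the same "
        "politician or party.\n"
        f"Statement A: {statement_a}\n"
        f"Statement B: {statement_b}\n"
        "Are the statements consistent with each other? "
        f"Answer with one word: {' or '.join(LABELS)}."
    )


def detect_inconsistency(
    statement_a: str,
    statement_b: str,
    query_llm: Callable[[str], str],  # any text-in/text-out LLM client
) -> str:
    """Return the predicted label parsed from the model's answer."""
    answer = query_llm(build_prompt(statement_a, statement_b)).strip().lower()
    # Check longer labels first so "inconsistent" is not mistaken
    # for its substring "consistent".
    for label in sorted(LABELS, key=len, reverse=True):
        if label in answer:
            return label
    # Fall back to the majority class if the answer is unparseable.
    return "consistent"


if __name__ == "__main__":
    # Stub LLM so the sketch runs without external dependencies.
    def fake_llm(prompt: str) -> str:
        return "inconsistent"

    print(detect_inconsistency(
        "We support a nationwide speed limit on highways.",
        "Speed limits on highways should be abolished.",
        fake_llm,
    ))
```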