Workshop Proceedings of the 19th International AAAI Conference on Web and Social Media
Workshop: MisD 2025: 1st Workshop on Misinformation Detection in the Era of LLMs
DOI: 10.36190/2025.23

This study explores the use of Large Language Models (LLMs) for Claim Check-Worthiness Prediction (CCWP), a critical first step in the fact-checking process. Building on prior research, we propose a method that uses structured checklists to break CCWP down into interpretable and manageable subtasks. In our approach, an LLM answers 52 human-crafted questions for each claim, and the resulting responses are used as features for traditional supervised learning models. Experiments across six datasets show that our method consistently improves performance on key evaluation metrics, including accuracy, macro F1, and micro F1, surpassing few-shot prompting baselines on most datasets. Moreover, our method enhances the stability of LLM outputs, reducing sensitivity to prompt design. These findings suggest that LLM-based feature extraction guided by structured checklists offers a promising direction for more reliable and efficient claim prioritization in fact-checking systems.
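
To make the pipeline described in the abstract concrete, here is a minimal sketch of the checklist-as-features idea: an LLM answers each checklist question about a claim, and the yes/no answers become a feature vector for a conventional classifier. The example questions, the ask_llm stub, and the choice of logistic regression are illustrative assumptions, not the authors' exact implementation or full 52-question checklist.

```python
# Sketch of checklist-guided LLM feature extraction for check-worthiness prediction.
# Assumptions: questions, ask_llm() stub, and classifier choice are illustrative only.
from sklearn.linear_model import LogisticRegression

# A few example checklist questions (the paper uses 52 human-crafted ones).
CHECKLIST = [
    "Does the claim contain a verifiable factual statement?",
    "Does the claim mention a specific number or statistic?",
    "Could the claim cause harm to the public if it were false?",
]

def ask_llm(question: str, claim: str) -> int:
    """Placeholder for an LLM call that returns 1 for 'yes' and 0 for 'no'."""
    # In practice this would prompt an LLM with the claim and the question
    # and parse its yes/no answer; here it returns a dummy value.
    return 0

def claim_to_features(claim: str) -> list[int]:
    """Convert one claim into a binary feature vector via the checklist."""
    return [ask_llm(q, claim) for q in CHECKLIST]

# Toy training step: LLM-derived features feed a traditional supervised model.
claims = ["The unemployment rate doubled last year.", "I love this weather."]
labels = [1, 0]  # 1 = check-worthy, 0 = not check-worthy

X = [claim_to_features(c) for c in claims]
clf = LogisticRegression().fit(X, labels)
print(clf.predict([claim_to_features("The city budget was cut by 40%.")]))
```

Because the classifier sees only the checklist answers rather than raw LLM generations, variations in prompt wording affect the final prediction less directly, which is consistent with the stability gains reported in the abstract.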