Workshop Proceedings of the 19th International AAAI Conference on Web and Social Media
Workshop: COMPASS 2025: The 1st International Workshop on Computational Approaches to Content Moderation and Platform Governance
DOI: 10.36190/2025.07

This study investigates the relationship between user anonymity and toxicity in discourse on Mastodon, a decentralized social media platform. We develop a profile-based anonymity classification framework using name and face presence as identity cues, and apply a pre-trained language model to detect toxicity in user-generated posts. Our results show that identifiable users, particularly those who disclose both name and face, tend to post less toxic content. We also find that higher levels of instance moderation are associated with reduced toxicity overall. These findings highlight how identity presentation and platform governance jointly shape discourse quality in decentralized social networks.