Workshop Proceedings of the 16th International AAAI Conference on Web and Social Media

Workshop: International Workshop on Social Sensing (SocialSens 2022): Special Edition on Belief Dynamics

DOI: 10.36190/2022.25

Published: 2022-06-01

Vision: Explainable Hidden Mental States as Influence Indicators
Brodie Mather, Ian Perera, Vera Kazakova, Daniel Capecci, Muskan Garg, Damon Woodard, Bonnie J. Dorr

We posit that the next major thrust in capturing the dynamics needed to detect and respond to information operations is the inference of hidden mental states through natural language processing and social computing techniques. An important factor contributing to this vision is the need for explainable representations, e.g., propositions, to capture hidden mental states as indicators of influence campaigns. Hidden mental states under exploration include, for example, belief, stance, and concern. We view explainability not as a "reason describer" for machine learning (ML) model output, but as an inherently interpretable paradigm that leverages hidden mental states to produce both an explanation and a justification of output. The aim is to reap the best of both worlds: (1) breadth of coverage for features that are essential to the task at hand (e.g., embedding and attention models for extracting sentiment); and (2) depth and transparency of representational formalisms for explaining system decisions (e.g., propositions that identify beliefs and attitudes).
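To make the two-pronged aim concrete, the following minimal Python sketch (ours, not from the paper) pairs a stand-in for a broad-coverage ML scorer with an interpretable propositional representation. All names here (Proposition, sentiment_score, extract_proposition) and the toy lexicon scorer are hypothetical assumptions for illustration; the paper does not specify such a schema.

from dataclasses import dataclass

@dataclass
class Proposition:
    """Interpretable representation of a hidden mental state.

    Hypothetical structure: the paper names belief, stance, and
    concern as hidden mental states but does not fix this schema.
    """
    source: str        # who holds the mental state
    state: str         # e.g., "belief", "stance", "concern"
    target: str        # what the state is about
    polarity: float    # signed strength, here from a stand-in ML scorer

    def explain(self) -> str:
        """Produce a human-readable justification of the system output."""
        direction = "positive" if self.polarity >= 0 else "negative"
        return (f"{self.source} expresses a {direction} {self.state} "
                f"toward {self.target} (strength {abs(self.polarity):.2f}).")

def sentiment_score(text: str) -> float:
    """Stand-in for a broad-coverage ML component (e.g., an embedding-
    or attention-based sentiment model). A trivial lexicon score keeps
    the sketch self-contained."""
    positive = {"support", "trust", "good"}
    negative = {"oppose", "distrust", "bad"}
    tokens = text.lower().split()
    return sum((t in positive) - (t in negative) for t in tokens) / max(len(tokens), 1)

def extract_proposition(author: str, text: str, topic: str) -> Proposition:
    """Fuse the ML score (breadth) with a propositional form (depth):
    the proposition, not the raw score, is what the system reports."""
    return Proposition(source=author, state="stance",
                       target=topic, polarity=sentiment_score(text))

if __name__ == "__main__":
    prop = extract_proposition("user_42", "I strongly support the new policy",
                               "the new policy")
    print(prop.explain())

Run on the sample post, the sketch prints "user_42 expresses a positive stance toward the new policy (strength 0.17)." The point of the design is that the proposition, rather than the opaque numeric score, is the unit the system reports and explains.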