Workshop Proceedings of the 19th International AAAI Conference on Web and Social Media
Workshop: NLPSI 2025: First Workshop on Integrating NLP and Psychology to Study Social Interactions
DOI: 10.36190/2025.34

Humans develop cooperation heuristics in social decision-making, either intuitively or deliberatively. Large language models (LLMs), which exhibit human-like heuristics across cognitive domains, may acquire prosocial tendencies through instruction tuning; such tendencies, latently encoded in their representations, could foster cooperative behavior in social reasoning games. However, most studies of this kind either focus on cooperative language generation or explicitly instruct LLMs to cooperate, departing from the unprompted cooperation heuristics humans display. Our negotiation role-play simulations using BATNA (Best Alternative to a Negotiated Agreement) with a GPT-based LLM reveal that LLMs may struggle to cooperate in the absence of explicit instructions, showing a 50-90% lower success rate than in instructed scenarios and a 40-80% lower success rate than human performance reported in past studies. Implicitly inducing cooperation through personality traits had inconsistent effects: agreeableness showed only a marginal influence, and the other traits had no systematic impact. These findings suggest that personality-based cooperation cues are subtle and that explicit instructions may still be essential for multi-agent LLMs to approximate human-like negotiation.
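To make the simulation setup concrete, the Python sketch below shows one plausible way such a BATNA-framed role-play could be wired up. It is an illustration under assumptions, not the authors' protocol: the used-car scenario, the price values, the Agent class, the persona field, and the scripted concession policy in query_agent are all hypothetical, with query_agent standing in for the chat-model call a real experiment would make.

"""Hypothetical sketch of a BATNA-framed negotiation role-play.

Illustrative assumptions only: the scenario, prices, and the scripted
concession policy in query_agent stand in for a GPT-based agent.
"""

from dataclasses import dataclass


@dataclass
class Agent:
    role: str                  # "seller" or "buyer"
    batna: float               # private walk-away value
    wants_high: bool           # seller prefers a high price, buyer a low one
    persona: str = ""          # optional implicit cue, e.g. "You are highly agreeable."
    last_offer: float | None = None

    @property
    def system_prompt(self) -> str:
        # Frames the role play WITHOUT explicitly instructing cooperation.
        base = (f"You are the {self.role} in a used-car negotiation. "
                f"Your walk-away value (BATNA) is ${self.batna:.0f}. "
                "Reply with 'OFFER <price>' or 'ACCEPT'.")
        return f"{base} {self.persona}" if self.persona else base

    def acceptable(self, price: float) -> bool:
        return price >= self.batna if self.wants_high else price <= self.batna


def query_agent(agent: Agent, counter_offer: float | None) -> str:
    """Scripted stand-in for a GPT call: open near one's aspiration,
    concede 20% of the remaining gap per turn, and accept once the
    counterpart's offer beats the private BATNA and the gap is small."""
    if agent.last_offer is None:  # opening move
        agent.last_offer = agent.batna * (1.25 if agent.wants_high else 0.75)
        return f"OFFER {agent.last_offer:.0f}"
    assert counter_offer is not None  # after the opening there is always a standing offer
    if agent.acceptable(counter_offer) and abs(counter_offer - agent.last_offer) <= 100:
        return "ACCEPT"
    agent.last_offer += 0.2 * (counter_offer - agent.last_offer)
    return f"OFFER {agent.last_offer:.0f}"


def run_episode(max_turns: int = 30) -> bool:
    seller = Agent("seller", batna=8000, wants_high=True)
    buyer = Agent("buyer", batna=10000, wants_high=False)
    offer: float | None = None
    for turn in range(max_turns):
        speaker = (seller, buyer)[turn % 2]
        reply = query_agent(speaker, offer)
        print(f"{speaker.role}: {reply}")
        if reply == "ACCEPT":
            # Success metric: the deal must beat BOTH private BATNAs,
            # i.e., land inside the zone of possible agreement (ZOPA).
            return seller.acceptable(offer) and buyer.acceptable(offer)
        offer = float(reply.split()[1])
    return False  # impasse: no agreement within the turn budget


if __name__ == "__main__":
    print("success:", run_episode())

In an actual run, query_agent would send system_prompt plus the dialogue so far to a GPT-based model; an explicit-instruction condition would append a direct request to cooperate, while the persona field is where an implicit trait cue such as high agreeableness would be injected instead.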