Workshop Proceedings of the 19th International AAAI Conference on Web and Social Media

Workshop: #SMM4H-HeaRD 2025: Joint 10th Social Media Mining for Health and Health Real-World Data Workshop and Shared Tasks

DOI: 10.36190/2025.73

Published: 2025-06-05

IAI at #SMM4H-HeaRD 2025: Exploring the Limitations of PLMs in Medical Language Understanding Tasks
Aman Sinha

Recent progress in large language models has led to significant performance improvements across a wide range of language tasks. However, their high computational demands pose challenges for deployment in resource-constrained environments. This paper examines the performance of small language models, particularly domain-specific pre-trained language models (PLMs), on the healthcare-related language tasks introduced in the #SMM4H-HeaRD 2025 shared tasks. We focus on two tasks: detecting dementia family caregivers on Twitter (Task 3) and identifying insomnia in clinical notes (Task 4). Our study primarily uses domain-specific PLMs and investigates the conditions under which they struggle. The findings offer insights into the limitations of pre-trained models for clinical language understanding, highlighting factors that could inform strategies for improving model performance in practical, resource-limited settings.
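
To make the PLM-based setup the abstract describes concrete, the sketch below fine-tunes a publicly available clinical PLM for binary note classification, in the spirit of Task 4 (insomnia detection in clinical notes). The model choice (Bio_ClinicalBERT), the toy notes, and the labels are illustrative assumptions, not the paper's reported configuration or data.

```python
# Minimal sketch: fine-tuning a domain-specific PLM for binary text
# classification. Model name, examples, and labels are assumptions for
# illustration only, not the paper's actual setup.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "emilyalsentzer/Bio_ClinicalBERT"  # assumed clinical PLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2
)

# Toy clinical notes with hypothetical labels (1 = insomnia evidence, 0 = none).
notes = [
    "Patient reports difficulty falling asleep most nights.",
    "No sleep complaints; denies daytime fatigue.",
]
labels = torch.tensor([1, 0])

batch = tokenizer(notes, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One illustrative training step; the cross-entropy loss is computed
# internally when labels are passed to the model.
model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
```

In such a setup, the resource-constrained appeal of PLMs comes from their comparatively small parameter counts, which allow fine-tuning and inference on modest hardware rather than the large accelerators that current large language models typically require.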