Workshop Proceedings of the 17th International AAAI Conference on Web and Social Media
Workshop: Disrupt, Ally, Resist, Embrace (DARE): Action Items for Computational Social Scientists in a Changing World
DOI: 10.36190/2023.15

The increasing demand for natural language processing (NLP) applications has created a need for large amounts of labeled data to train machine learning models. This has led to the widespread use of human annotators for tasks such as text classification, sentiment analysis, and named entity recognition. However, human annotation is costly and time-consuming, and annotation quality can vary significantly across annotators. Recent advances in language modeling have led to the development of large language models (LLMs), such as ChatGPT, which are capable of generating human-like responses to text prompts. In this position paper, we explore the question of whether ChatGPT-like LLMs can effectively replace human annotators in NLP tasks. We discuss the advantages and limitations of using LLMs for annotation and highlight some of the challenges that must be addressed to make this a feasible approach. We argue that while LLMs can potentially reduce the cost and time required for annotation, they may not be able to fully replace human annotators in all NLP tasks. We conclude by outlining future research directions that could help advance the use of LLMs for NLP annotation.