Workshop Proceedings of the 18th International AAAI Conference on Web and Social Media
Workshop: REAL-Info 2024: First Workshop on Reliable Evaluation of LLMs for Factual Information
DOI: 10.36190/2024.32

A key priority in web and social media research is distinguishing human-generated from machine-generated content, especially given the widespread use of Large Language Models. To detect machine-generated content, we compared the writing styles of essays written by human subjects with those written by GPT under the same conditions. Specifically, we analyzed the sentiment, readability, and subjectivity of 4895 essays written by 789 university students in response to 9 specific prompts, and compared them with 225 essays of the same length written by GPT-4 in response to the same prompts. We observed that GPT's essays are harder to read and less subjective. All of these quantities depend on the specific prompt and are correlated between human subjects and GPT across the 9 prompts.
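As an illustration of the kind of readability measure used in such analyses, the sketch below computes the standard Flesch reading-ease score. The formula itself is the published one; the tokenization and the vowel-group syllable heuristic are simplifying assumptions, and the abstract does not specify which tooling the authors actually used.

```python
import re

def count_syllables(word):
    # Crude heuristic: count maximal runs of vowels (incl. y),
    # with a floor of one syllable per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Standard Flesch formula: higher scores mean easier text.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

simple_text = "The cat sat. The dog ran. It was fun."
dense_text = ("Comprehensive institutional methodologies necessitate "
              "extraordinarily sophisticated analytical considerations.")
```

Under this metric, short-sentence, short-word prose scores well above long, polysyllabic prose, which is the sense in which one set of essays can be called "more difficult to read" than another.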