Workshop Proceedings of the 16th International AAAI Conference on Web and Social Media
Since recommendation systems were created and developed to automate the recommendation process, users have been able to easily consume their desired video content on online platforms. In this vein, several content recommendation algorithms have been introduced and deployed to help users effectively encounter content of interest. However, recommendation systems sometimes regrettably recommend inappropriate content, including misinformation or fake news. To make matters worse, people may unreservedly accept such content due to a cognitive heuristic, the machine heuristic, which is the rule of thumb that machines are more accurate and trustworthy than humans. In this study, we designed and conducted a web-based experiment in which the machine heuristic was invoked as participants experienced the whole process of a machine or human recommendation system. The results demonstrate that participants (N = 89) showed a more positive attitude toward a machine recommender than a human recommender, even when the recommended videos contained inappropriate content. In contrast, participants with a high level of trust in machines exhibited a negative attitude toward the recommendations. Based on these results, we suggest that the phenomenon known as algorithm aversion should be considered alongside the machine heuristic when investigating human interaction with machines.