Workshop Proceedings of the 16th International AAAI Conference on Web and Social Media
Obtaining reliable, high-quality training datasets is resource-intensive, especially for tasks that require interpretation and human judgment, such as racism detection. Related work shows that annotators who have themselves been targets of hate are more sensitive when labelling content as offensive, and it advocates giving these communities a stronger voice in annotation. This study analyses a new dataset for detecting racism in Spanish, focusing on estimating a ground truth from few labels with high disagreement. Most annotators may lack direct experience of racism, as only three belong to the Black community. Our empirical results show better performance at lower thresholds for classifying messages as racist, which suggests that the annotators' permissiveness in identifying racist content propagates to the model. This analysis can be crucial for tailoring a general model to the specific needs of a particular individual or group. Especially in applications such as online abuse detection, models that reflect only the viewpoint of crowdworkers may not be sufficient to capture all the intricacies of these social challenges.
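The thresholding described above can be illustrated with a minimal sketch. The function, vote counts, and message identifiers below are hypothetical and not taken from the study; the sketch only shows how lowering the positive-vote threshold for the aggregated ground truth turns more disagreed-upon messages into positive (racist) examples.

```python
def aggregate_labels(votes, threshold):
    """Label a message as racist (1) when the share of positive
    annotator votes meets or exceeds the threshold.

    votes: dict mapping message id -> list of 0/1 annotator votes.
    """
    labels = {}
    for msg_id, msg_votes in votes.items():
        positive_rate = sum(msg_votes) / len(msg_votes)
        labels[msg_id] = int(positive_rate >= threshold)
    return labels

# Hypothetical annotations: 1 = annotator marked the message as racist.
votes = {
    "m1": [1, 0, 0, 0, 0],  # 20% positive votes
    "m2": [1, 1, 0, 0, 0],  # 40% positive votes
    "m3": [1, 1, 1, 1, 0],  # 80% positive votes
}

strict = aggregate_labels(votes, threshold=0.5)   # majority vote
lenient = aggregate_labels(votes, threshold=0.3)  # lower threshold

print(strict)   # {'m1': 0, 'm2': 0, 'm3': 1}
print(lenient)  # {'m1': 0, 'm2': 1, 'm3': 1}
```

With the lower threshold, the contested message "m2" flips from negative to positive, which mirrors how a more permissive aggregation passes minority judgments of racism through to the training labels.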