%0 Conference Proceedings
%T Features or Spurious Artifacts? Data-centric Baselines for Fair and Robust Hate Speech Detection
%A Ramponi, Alan
%A Tonelli, Sara
%Y Carpuat, Marine
%Y de Marneffe, Marie-Catherine
%Y Meza Ruiz, Ivan Vladimir
%S Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
%D 2022
%8 July
%I Association for Computational Linguistics
%C Seattle, United States
%F ramponi-tonelli-2022-features
%X Avoiding reliance on dataset artifacts to predict hate speech is a cornerstone of robust and fair hate speech detection. In this paper, we critically analyze lexical biases in hate speech detection via a cross-platform study, disentangling various types of spurious and authentic artifacts and analyzing their impact on out-of-distribution fairness and robustness. We experiment with existing approaches and propose simple yet surprisingly effective data-centric baselines. Our results on English data across four platforms show that distinct spurious artifacts require different treatments to ultimately attain both robustness and fairness in hate speech detection. To encourage research in this direction, we release all baseline models and the code to compute artifacts, highlighting artifact computation as a complementary and necessary addition to the data statements practice.
%R 10.18653/v1/2022.naacl-main.221
%U https://aclanthology.org/2022.naacl-main.221
%U https://doi.org/10.18653/v1/2022.naacl-main.221
%P 3027-3040