Mirazul Haque
2024
HateModerate: Testing Hate Speech Detectors against Content Moderation Policies
Jiangrui Zheng | Xueqing Liu | Mirazul Haque | Xing Qian | Guanqun Yang | Wei Yang
Findings of the Association for Computational Linguistics: NAACL 2024
To protect users from massive amounts of hateful content, existing work has studied automated hate speech detection. Despite these efforts, one question remains: do automated hate speech detectors conform to social media content policies? A platform's content policies are a checklist of the content the platform moderates. Because content moderation rules are often uniquely defined, existing hate speech datasets cannot directly answer this question. This work seeks to answer this question by creating HateModerate, a dataset for testing the behaviors of automated content moderators against content policies. First, we engage 28 annotators and GPT in a six-step annotation process, resulting in a list of hateful and non-hateful test suites matching each of Facebook's 41 hate speech policies. Second, we test the performance of state-of-the-art hate speech detectors against HateModerate, revealing substantial failures in these models' conformity to the policies. Third, using HateModerate, we augment the training data of a top-downloaded hate speech detector on HuggingFace. We observe significant improvements in the model's conformity to content policies while it maintains comparable scores on the original test data. Our dataset and code can be found at https://github.com/stevens-textmining/HateModerate.
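As a rough illustration of the evaluation workflow the abstract describes, the sketch below tests an off-the-shelf HuggingFace hate speech classifier against policy-matched test cases and reports per-policy failure rates. It is not the authors' released code: the model checkpoint, file name, and CSV schema (`sentence`, `label`, `policy_id`) are assumptions made for the example.

```python
# A minimal sketch of per-policy conformance testing, assuming HateModerate-style
# test cases in a CSV with "sentence", "label", and "policy_id" columns
# (hypothetical schema, not the released format).
from collections import defaultdict

import pandas as pd
from transformers import pipeline

# Any HuggingFace hate speech classifier can be plugged in here; this checkpoint
# is only an example and is not necessarily the one evaluated in the paper.
detector = pipeline(
    "text-classification",
    model="facebook/roberta-hate-speech-dynabench-r4-target",
)

cases = pd.read_csv("hatemoderate_test_cases.csv")  # hypothetical file name

failures = defaultdict(int)
totals = defaultdict(int)
for _, row in cases.iterrows():
    pred = detector(row["sentence"])[0]["label"]
    predicted_hateful = pred.lower() in {"hate", "hateful", "label_1"}
    totals[row["policy_id"]] += 1
    if predicted_hateful != (row["label"] == "hateful"):
        failures[row["policy_id"]] += 1

# A policy's failure rate is the fraction of its test cases where the detector
# disagrees with the policy-matched ground-truth label.
for policy, total in sorted(totals.items()):
    print(f"policy {policy}: {failures[policy] / total:.2%} failure rate")
```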
2022
TestAug: A Framework for Augmenting Capability-based NLP Tests
Guanqun Yang | Mirazul Haque | Qiaochu Song | Wei Yang | Xueqing Liu
Proceedings of the 29th International Conference on Computational Linguistics
Recently proposed capability-based NLP testing allows model developers to test the functional capabilities of NLP models, revealing functional failures in models with good held-out evaluation scores. However, existing work on capability-based testing requires the developer to compose each individual test template from scratch. Such an approach requires extensive manual effort and is less scalable. In this paper, we investigate a different approach that requires the developer to annotate only a few test templates, while leveraging the GPT-3 engine to generate the majority of the test cases. While our approach saves manual effort by design, it guarantees the correctness of the generated test suites with a validity checker. Moreover, our experimental results show that the test suites generated by GPT-3 are more diverse than the manually created ones; they can also be used to detect more errors than their manually created counterparts. Our test suites can be downloaded at https://anonymous-researcher-nlp.github.io/testaug/.
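The sketch below loosely mirrors the workflow in the abstract: seed a large language model with a few annotated templates, ask it to generate more, and keep only candidates that pass a validity check. The prompt, the model name, and the slot-based validity checker are illustrative assumptions, not the TestAug implementation (which targeted the original GPT-3 API).

```python
# A minimal sketch of template-seeded test generation with a validity checker,
# under assumed prompt and schema choices; not the authors' pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

seed_templates = [
    "I {negation} recommend this {product}, it was {negative_adj}.",
    "The {product} stopped working after {duration}, a {negative_adj} experience.",
]

prompt = (
    "Here are test templates for a sentiment capability test:\n"
    + "\n".join(seed_templates)
    + "\nGenerate 5 more templates in the same style, one per line."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # stand-in for the GPT-3 engine used in the paper
    messages=[{"role": "user", "content": prompt}],
)
candidates = response.choices[0].message.content.strip().splitlines()

def is_valid(template: str) -> bool:
    """Toy validity checker: keep only templates that reuse the expected slots."""
    return "{product}" in template and "{negative_adj}" in template

test_templates = [t.strip() for t in candidates if is_valid(t.strip())]
print(test_templates)
```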
Co-authors
- Xueqing Liu 2
- Guanqun Yang 2
- Wei Yang 2
- Jiangrui Zheng 1
- Xing Qian 1