Rebecca Qian
2022
Perturbation Augmentation for Fairer NLP
Rebecca Qian | Candace Ross | Jude Fernandes | Eric Michael Smith | Douwe Kiela | Adina Williams
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Unwanted and often harmful social biases are becoming ever more salient in NLP research, affecting both models and datasets. In this work, we ask whether training on demographically perturbed data leads to fairer language models. We collect a large dataset of human-annotated text perturbations and train a neural perturbation model, which we show outperforms heuristic alternatives. We find that (i) language models (LMs) pre-trained on demographically perturbed corpora are typically fairer, (ii) LMs finetuned on perturbed GLUE datasets exhibit less demographic bias on downstream tasks, and (iii) these fairness improvements do not come at the expense of downstream performance. Lastly, we discuss outstanding questions about how best to evaluate the (un)fairness of large language models. We hope that this exploration of neural demographic perturbation will help drive progress towards fairer NLP.
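The heuristic alternatives that the neural perturber is compared against are typically rule-based word-list substitutions. As a rough illustration only (not the paper's implementation), here is a minimal sketch of such a heuristic perturber, assuming a hypothetical swap lexicon:

```python
import re

# Hypothetical demographic swap lexicon; real heuristic perturbers
# use curated word-pair lists (e.g., for gendered terms).
SWAP_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her",  # "her" is ambiguous ("his"/"him"), a known weakness
                   # of heuristics that a learned model can handle in context
    "man": "woman", "woman": "man",
}

def heuristic_perturb(text: str) -> str:
    """Swap demographic terms word-by-word, preserving capitalization."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = SWAP_PAIRS.get(word.lower())
        if repl is None:
            return word
        return repl.capitalize() if word[0].isupper() else repl

    return re.sub(r"\b\w+\b", swap, text)

print(heuristic_perturb("She gave him her notes."))
# -> "He gave her him notes."  (ungrammatical: word-level rules miss context,
#    which is the motivation for training a neural perturbation model)
```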
Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents
Eric Smith | Orion Hsu | Rebecca Qian | Stephen Roller | Y-Lan Boureau | Jason Weston
Proceedings of the 4th Workshop on NLP for Conversational AI
At the heart of improving conversational AI is the open problem of how to evaluate conversations. Issues with automatic metrics are well known (Liu et al., 2016), and human evaluations are still considered the gold standard. Unfortunately, how to perform human evaluations is itself an open problem: different data collection methods yield varying levels of human agreement and statistical sensitivity, and thus require differing amounts of annotation time and labor cost. In this work we compare five crowdworker-based human evaluation methods and find that the best method depends on the types of models being compared, with no clear winner across the board. While this highlights the open problems in the area, our analysis leads to advice on when to use each method, and to possible future directions.
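One simple way to operationalize the statistical sensitivity of an evaluation method (an illustration, not the paper's exact protocol) is to ask how many pairwise human preferences are needed before the difference between two dialogue models reaches significance:

```python
from scipy.stats import binomtest

def pairwise_significance(wins_a: int, wins_b: int, alpha: float = 0.05):
    """Two-sided binomial test on pairwise preferences between models A and B.

    A more sensitive evaluation method separates the same pair of models
    with fewer annotations (smaller n at the same alpha).
    """
    n = wins_a + wins_b
    result = binomtest(wins_a, n=n, p=0.5)
    return result.pvalue, result.pvalue < alpha

# Hypothetical numbers: 70 of 100 annotators preferred model A over model B.
pval, significant = pairwise_significance(70, 30)
print(f"p = {pval:.4f}, significant at alpha=0.05: {significant}")
```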