Bing Bai


2020

Demographics Should Not Be the Reason of Toxicity: Mitigating Discrimination in Text Classifications with Instance Weighting
Guanhua Zhang | Bing Bai | Junqi Zhang | Kun Bai | Conghui Zhu | Tiejun Zhao
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

With the recent proliferation of text classification applications, researchers have found that text classification datasets contain certain unintended biases. For example, texts containing demographic identity-terms (e.g., “gay”, “black”) are more likely to be abusive in existing abusive language detection datasets. As a result, models trained on these datasets may consider sentences like “She makes me happy to be gay” as abusive simply because of the word “gay.” In this paper, we formalize the unintended biases in text classification datasets as a kind of selection bias from the non-discrimination distribution to the discrimination distribution. Based on this formalization, we further propose a model-agnostic debiasing training framework that recovers the non-discrimination distribution using instance weighting, which does not require any extra resources or annotations apart from a pre-defined set of demographic identity-terms. Experiments demonstrate that our method can effectively alleviate the impacts of the unintended biases without significantly hurting models’ generalization ability.
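
The abstract describes the method only at a high level; below is a minimal, hypothetical Python sketch of one way such instance weighting could look. The identity-term set, the helper names (contains_identity_term, compute_debiasing_weights), and the specific weighting formula P(y)/P(y|z) are illustrative assumptions, not the paper’s exact formulation.

    from collections import Counter

    # Example identity-term list; in practice this is a user-provided, pre-defined set.
    IDENTITY_TERMS = {"gay", "black"}

    def contains_identity_term(text):
        return bool(set(text.lower().split()) & IDENTITY_TERMS)

    def compute_debiasing_weights(texts, labels):
        # Weight each instance by P(y) / P(y | z), where z marks identity-term presence.
        # Under these weights the label distribution no longer depends on z, approximating
        # a "non-discrimination distribution" via importance weighting.
        z = [contains_identity_term(t) for t in texts]
        n = len(labels)
        count_y = Counter(labels)
        count_z = Counter(z)
        count_yz = Counter(zip(labels, z))
        weights = []
        for y_i, z_i in zip(labels, z):
            p_y = count_y[y_i] / n
            p_y_given_z = count_yz[(y_i, z_i)] / count_z[z_i]
            weights.append(p_y / p_y_given_z)
        return weights

The resulting weights can be passed as per-example sample weights to any classifier’s training loss (e.g., model.fit(X, y, sample_weight=...)), which is consistent with the framework being model-agnostic.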

2019

Selection Bias Explorations and Debias Methods for Natural Language Sentence Matching Datasets
Guanhua Zhang | Bing Bai | Jian Liang | Kun Bai | Shiyu Chang | Mo Yu | Conghui Zhu | Tiejun Zhao
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Natural Language Sentence Matching (NLSM) has gained substantial attention from both academia and industry, and rich public datasets have contributed greatly to this progress. However, biased datasets can also hurt the generalization performance of trained models and give untrustworthy evaluation results. For many NLSM datasets, the providers select certain pairs of sentences into the datasets, and this sampling procedure can easily introduce unintended patterns, i.e., selection bias. One example is the QuoraQP dataset, where some content-independent naive features are unreasonably predictive. Such features reflect the selection bias and are termed “leakage features.” In this paper, we investigate the problem of selection bias on six NLSM datasets and find that four of them are significantly biased. We further propose a training and evaluation framework to alleviate the bias. Experimental results on QuoraQP suggest that the proposed framework can improve the generalization ability of trained models and give more trustworthy evaluation results for real-world adoption.
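
As an illustration of what a content-independent “leakage feature” probe might look like, here is a minimal, hypothetical Python sketch; the frequency-based features and the helper name (leakage_features) are assumptions for demonstration, not necessarily the exact features analyzed in the paper.

    from collections import Counter

    def leakage_features(pairs):
        # pairs: list of (sentence_a, sentence_b) tuples.
        # Count how often each sentence re-appears anywhere in the dataset, then describe
        # each pair only by those counts; the sentence content itself is ignored entirely.
        freq = Counter()
        for a, b in pairs:
            freq[a] += 1
            freq[b] += 1
        return [[freq[a], freq[b]] for a, b in pairs]

If a classifier trained only on such features beats a majority-class baseline on the matching labels, the pair-selection procedure is leaking label information, i.e., the dataset exhibits selection bias.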

2013

Answer Extraction by Recursive Parse Tree Descent
Christopher Malon | Bing Bai
Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality

2004

Designing a Realistic Evaluation of an End-to-end Interactive Question Answering System
Nina Wacholder | Sharon Small | Bing Bai | Diane Kelly | Robert Rittman | Sean Ryan | Robert Salkin | Peng Song | Ying Sun | Ting Liu | Paul Kantor | Tomek Strzalkowski
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)