Xiang Pan
2022
Task Transfer and Domain Adaptation for Zero-Shot Question Answering
Xiang Pan | Alex Sheng | David Shimshoni | Aditya Singhal | Sara Rosenthal | Avirup Sil
Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing
Pretrained language models have shown success in various areas of natural language processing, including reading comprehension tasks. However, when applying machine learning methods to new domains, labeled data may not always be available. To address this, we use supervised pretraining on source-domain data to reduce sample complexity on domain-specific downstream tasks. We evaluate zero-shot performance on domain-specific reading comprehension tasks by combining task transfer with domain adaptation to fine-tune a pretrained model with no labeled data from the target task. Our approach outperforms Domain-Adaptive Pretraining on downstream domain-specific reading comprehension tasks in 3 out of 4 domains.
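To make the setup concrete, here is a minimal sketch of the zero-shot evaluation step described above, assuming the Hugging Face transformers library; the checkpoint name and example passage are illustrative placeholders, not the authors' models or data.

```python
# Minimal sketch (not the paper's code): a QA model fine-tuned on a labeled
# source reading-comprehension task (e.g. SQuAD-style data) is applied directly
# to a target-domain passage, with no labeled data from the target task.
from transformers import pipeline

# Illustrative checkpoint already fine-tuned on the source task (task transfer).
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

# Zero-shot inference on a domain-specific passage (no target-task fine-tuning).
prediction = qa(
    question="What reduces sample complexity on downstream tasks?",
    context=(
        "Supervised pretraining on source-domain data reduces sample "
        "complexity on domain-specific downstream reading comprehension tasks."
    ),
)
print(prediction["answer"], prediction["score"])
```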
Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens
Nitish Joshi | Xiang Pan | He He
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
The term ‘spurious correlations’ has been used in NLP to informally denote any undesirable feature-label correlations. However, a correlation can be undesirable because (i) the feature is irrelevant to the label (e.g. punctuation in a review), or (ii) the feature’s effect on the label depends on the context (e.g. negation words in a review), which is ubiquitous in language tasks. In case (i), we want the model to be invariant to the feature, which is neither necessary nor sufficient for prediction. But in case (ii), even an ideal model (e.g. humans) must rely on the feature, since it is necessary (but not sufficient) for prediction. Therefore, a more fine-grained treatment of spurious features is needed to specify the desired model behavior. We formalize this distinction using a causal model and probabilities of necessity and sufficiency, which delineate the causal relations between a feature and a label. We then show that this distinction helps explain results of existing debiasing methods on different spurious features, and demystifies surprising results such as the encoding of spurious features in model representations after debiasing.
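For reference, the counterfactual quantities the abstract appeals to are the standard probabilities of necessity (PN) and sufficiency (PS) from the causal-inference literature; the notation below follows Pearl's definitions for a binary feature and label, and is not necessarily the paper's exact formulation.

```latex
% PN: probability the label would have been absent had the feature been absent,
%     given that both the feature and the label actually occurred.
% PS: probability the feature would produce the label,
%     given that both were actually absent.
\mathrm{PN} = P\!\left(Y_{x'} = y' \mid X = x,\; Y = y\right),
\qquad
\mathrm{PS} = P\!\left(Y_{x} = y \mid X = x',\; Y = y'\right)
```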