Wang-Chien Lee


2024

Evidence-guided Inference for Neutralized Zero-shot Transfer
Xiaotong Feng | Meng-Fen Chiang | Wang-Chien Lee | Zixin Kuang
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Human annotation is costly and often impractical when labeled data are scarce. Moreover, biased language in well-known benchmarks can mislead predictive models into performing deceptively well, not because of model capability but because of hidden spurious correlations in the linguistic corpus. Motivated by this, we propose a neutralized Knowledge Transfer framework (NKT) to equip pre-trained language models with neutralized transferability. Specifically, we construct debiased multi-source corpora (CV and EL) for two exemplary knowledge transfer tasks: claim verification and evidence learning, respectively. To counteract biased language, we design a neutralization mechanism that operates in the presence of label skewness. We also design a label adaptation mechanism to reconcile the mixed label systems of the multi-source corpora. In extensive experiments, the proposed NKT framework shows effective transferability where dominant baselines fail, particularly in the zero-shot cross-domain transfer setting.
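
The abstract describes label adaptation and neutralization under label skewness only at a high level. The sketch below is a minimal, hypothetical illustration of those two ideas, not the paper's implementation: the label maps, corpus names, and the functions adapt_labels and class_balanced_weights are assumptions introduced for illustration only.

```python
from collections import Counter

# Hypothetical label maps: each source corpus uses its own label system,
# so we project everything onto a shared three-way verification scheme.
LABEL_MAPS = {
    "source_a": {"SUPPORTS": "supported", "REFUTES": "refuted",
                 "NOT ENOUGH INFO": "unverifiable"},
    "source_b": {"true": "supported", "false": "refuted",
                 "mixture": "unverifiable"},
}

def adapt_labels(examples, source):
    """Map a corpus's native labels into the shared label space."""
    mapping = LABEL_MAPS[source]
    return [{**ex, "label": mapping[ex["label"]]} for ex in examples]

def class_balanced_weights(examples):
    """Inverse-frequency class weights, one simple way to counteract
    label skewness when mixing multi-source training data."""
    counts = Counter(ex["label"] for ex in examples)
    total = sum(counts.values())
    return {label: total / (len(counts) * n) for label, n in counts.items()}

if __name__ == "__main__":
    corpus = [
        {"claim": "X causes Y", "label": "SUPPORTS"},
        {"claim": "X causes Z", "label": "REFUTES"},
        {"claim": "X cures W",  "label": "SUPPORTS"},
    ]
    unified = adapt_labels(corpus, "source_a")
    print(class_balanced_weights(unified))
```

The weights could then be passed to a standard weighted cross-entropy loss when fine-tuning a pre-trained language model on the merged corpora; how NKT actually neutralizes biased language is specified in the paper itself.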