%0 Conference Proceedings
%T Learning to Rank Question Answer Pairs with Bilateral Contrastive Data Augmentation
%A Deng, Yang
%A Zhang, Wenxuan
%A Lam, Wai
%Y Xu, Wei
%Y Ritter, Alan
%Y Baldwin, Tim
%Y Rahimi, Afshin
%S Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
%D 2021
%8 November
%I Association for Computational Linguistics
%C Online
%F deng-etal-2021-learning
%X In this work, we propose a novel and easy-to-apply data augmentation strategy, namely Bilateral Generation (BiG), with a contrastive training objective for improving the performance of ranking question answer pairs with existing labeled data. Specifically, we synthesize pseudo-positive QA pairs, in contrast to the original negative QA pairs, using two pre-trained generation models (one for question generation, the other for answer generation) that are fine-tuned on the limited positive QA pairs from the original dataset. With the augmented dataset, we design a contrastive training objective for learning to rank question answer pairs. Experimental results on three benchmark datasets show that our method significantly improves the performance of ranking models by making full use of existing labeled data, and that it can be easily applied to different ranking models.
%R 10.18653/v1/2021.wnut-1.20
%U https://aclanthology.org/2021.wnut-1.20
%U https://doi.org/10.18653/v1/2021.wnut-1.20
%P 175-181