Le Wang
2022
Improving Robustness of Language Models from a Geometry-aware Perspective
Bin Zhu | Zhaoquan Gu | Le Wang | Jinyin Chen | Qi Xuan
Findings of the Association for Computational Linguistics: ACL 2022
Recent studies have found that removing the norm-bounded projection and increasing the number of search steps in adversarial training can significantly improve robustness. However, we observe that too many search steps can hurt accuracy. We aim to obtain strong robustness efficiently with fewer steps. Through a toy experiment, we find that perturbing the clean data toward the decision boundary, but not across it, does not degrade test accuracy. Inspired by this, we propose friendly adversarial data augmentation (FADA) to generate friendly adversarial data. On top of FADA, we propose geometry-aware adversarial training (GAT), which performs adversarial training on friendly adversarial data and thereby saves a large number of search steps. Comprehensive experiments on two widely used datasets and three pre-trained language models demonstrate that GAT obtains stronger robustness with fewer steps. In addition, we provide extensive empirical results and in-depth analyses of robustness to facilitate future studies.
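The core idea, that the adversarial search should approach but not cross the decision boundary, can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical illustration, not the paper's actual FADA/GAT implementation: the step size `alpha`, the step budget `max_steps`, the normalized-gradient update, and the batch-level early-stop rule are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def friendly_adversarial_embeddings(model, embeds, labels, alpha=0.01, max_steps=3):
    """Search for a perturbation that approaches, but does not cross,
    the decision boundary (illustrative sketch, not the paper's code)."""
    delta = torch.zeros_like(embeds, requires_grad=True)
    friendly = embeds.detach()  # last perturbation keeping all predictions correct
    for _ in range(max_steps):
        loss = F.cross_entropy(model(embeds + delta), labels)
        loss.backward()
        with torch.no_grad():
            # Unbounded (no norm-bounded projection) gradient-ascent step.
            delta += alpha * delta.grad / (delta.grad.norm() + 1e-12)
            delta.grad.zero_()
            # Batch-level early stop: once any prediction flips, return the
            # previous perturbation as the "friendly" adversarial data.
            if (model(embeds + delta).argmax(dim=-1) != labels).any():
                break
            friendly = (embeds + delta).detach()
    return friendly
```

Adversarial training would then minimize the loss on these friendly examples; because the search stops at the boundary, far fewer steps are needed than in an unbounded multi-step search.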
2018
NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval
Canjia Li | Yingfei Sun | Ben He | Le Wang | Kai Hui | Andrew Yates | Le Sun | Jungang Xu
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Pseudo relevance feedback (PRF) is commonly used to boost the performance of traditional information retrieval (IR) models by using top-ranked documents to identify and weight new query terms, thereby reducing the effect of query-document vocabulary mismatches. While neural retrieval models have recently demonstrated strong results for ad-hoc retrieval, combining them with PRF is not straightforward due to incompatibilities between existing PRF approaches and neural architectures. To bridge this gap, we propose an end-to-end neural PRF framework that can be used with existing neural IR models by embedding different neural models as building blocks. Extensive experiments on two standard test collections confirm the effectiveness of the proposed NPRF framework in improving the performance of two state-of-the-art neural IR models.
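As a rough illustration of the PRF idea built around a neural scorer, consider the framework-agnostic sketch below. The `score` function, the interpolation weight `beta`, and the post-hoc re-ranking formulation are hypothetical simplifications; the actual NPRF framework embeds the neural models as building blocks and is trained end-to-end rather than re-ranking after the fact.

```python
def prf_rerank(score, query, docs, k=10, beta=0.5):
    """Re-rank `docs` using the top-k initial results as feedback documents
    (illustrative sketch; `score(q, d)` is an assumed neural relevance scorer)."""
    initial = sorted(docs, key=lambda d: score(query, d), reverse=True)
    feedback = initial[:k]  # pseudo-relevant documents

    def prf_score(d):
        # Combine the direct query-document score with how well the document
        # matches each top-ranked (pseudo-relevant) feedback document.
        direct = score(query, d)
        fb = sum(score(f, d) for f in feedback) / k
        return (1 - beta) * direct + beta * fb

    return sorted(docs, key=prf_score, reverse=True)
```

The interpolation mirrors classical PRF: a document ranks highly if it matches both the original query and the pseudo-relevant top-k documents, which softens query-document vocabulary mismatches.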