Yi Guo
2023
[MASK] Insertion: a robust method for anti-adversarial attacks
Xinrong Hu | Ce Xu | Junlong Ma | Zijian Huang | Jie Yang | Yi Guo | Johan Barthelemy
Findings of the Association for Computational Linguistics: EACL 2023
Adversarial attacks aim to perturb input sequences and mislead a trained model into false predictions. To enhance model robustness, defense methods are accordingly employed, based on either data augmentation (involving adversarial samples) or model enhancement (modifying the training loss and/or model architecture). In contrast to previous work, this paper revisits masked language modeling (MLM) and presents a simple yet efficient algorithm against adversarial attacks, termed [MASK] insertion for defensing (MI4D). Specifically, MI4D simply inserts [MASK] tokens into input sequences during training and inference, maximizing the intersection of the new convex hull (created by MI4D) with the original one (formed by the clean input). As neither additional adversarial samples nor model modification is required, MI4D is as computationally efficient as traditional fine-tuning. Comprehensive experiments have been conducted using three benchmark datasets and four attacking methods. MI4D yields a significant average accuracy improvement of between 3.2 and 11.1 absolute points over six state-of-the-art defense baselines.
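A minimal sketch of the central operation, inserting [MASK] tokens into the input token sequence before both fine-tuning and inference, is given below; the insertion ratio, the uniform position sampling, and the function name are illustrative assumptions rather than the paper's exact procedure.

import random

def insert_mask_tokens(tokens, mask_token="[MASK]", insert_ratio=0.1, seed=None):
    # Return a copy of `tokens` with [MASK] tokens inserted at random positions.
    # The ratio and uniform position sampling are assumptions for illustration.
    rng = random.Random(seed)
    n_insert = max(1, int(len(tokens) * insert_ratio))
    augmented = list(tokens)
    for _ in range(n_insert):
        pos = rng.randint(0, len(augmented))  # any gap, including both ends
        augmented.insert(pos, mask_token)
    return augmented

# The same augmentation is applied to training and test inputs.
print(insert_mask_tokens(["the", "movie", "was", "great"], seed=0))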
2022
Seeing the wood for the trees: a contrastive regularization method for the low-resource Knowledge Base Question Answering
Jpliu@wtu.edu.cn | Shijie Mei | Xinrong Hu | Xun Yao | Jack Yang | Yi Guo
Findings of the Association for Computational Linguistics: NAACL 2022
Given a context knowledge base (KB) and a corresponding question, the Knowledge Base Question Answering task aims to retrieve correct answer entities from this KB. Despite sophisticated retrieval algorithms, the impact of a low-resource (incomplete) KB, where contributing components (e.g., key entities and/or relations) may be absent, is not fully addressed. To effectively address this problem, we propose a contrastive-regularization-based method, motivated by the learn-by-analogy capability of human readers. Specifically, the proposed work includes two major modules: the knowledge extension module and the sMoCo module. The former aims at exploiting latent knowledge from the context KB and generating auxiliary information in the form of question-answer pairs. The latter utilizes these additional pairs and applies contrastive regularization to learn informative representations, pulling hard positive pairs together and pushing hard negative pairs apart. Empirically, we achieve state-of-the-art performance on the WebQuestionsSP dataset, and the effectiveness of the proposed modules is also evaluated.
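A minimal sketch of a contrastive regularizer in the spirit described above (pulling hard positive pairs together and pushing hard negative pairs apart) is shown here; the function, its pairing scheme, and the temperature are illustrative assumptions, and the actual sMoCo module additionally relies on a momentum encoder and queue.

import torch
import torch.nn.functional as F

def contrastive_regularizer(anchor, positives, negatives, temperature=0.07):
    # anchor: (d,) question embedding; positives: (P, d); negatives: (N, d).
    # InfoNCE-style loss over cosine similarities (an illustrative stand-in).
    anchor = F.normalize(anchor, dim=-1)
    positives = F.normalize(positives, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    logits = torch.cat([positives @ anchor, negatives @ anchor]) / temperature
    log_prob = F.log_softmax(logits, dim=0)
    # Average the negative log-probability assigned to the positive pairs.
    return -log_prob[: positives.size(0)].mean()

# Toy usage with random embeddings.
loss = contrastive_regularizer(torch.randn(16), torch.randn(4, 16), torch.randn(32, 16))
print(loss.item())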
2020
Slot Attention with Value Normalization for Multi-Domain Dialogue State Tracking
Yexiang Wang | Yi Guo | Siqi Zhu
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Incompleteness of the domain ontology and unavailability of some values are two inevitable problems in dialogue state tracking (DST). Existing approaches generally fall into two extremes: choosing models without an ontology, or embedding the ontology in the model, which leads to over-dependence on it. In this paper, we propose a new architecture to cleverly exploit the ontology, which consists of Slot Attention (SA) and Value Normalization (VN), referred to as SAVN. Moreover, we supplement MultiWOZ 2.1 with annotations of the supporting span, i.e., the shortest span in the utterances that supports the labeled value. SA shares knowledge between slots and utterances and only needs a simple structure to predict the supporting span. VN is designed specifically to make use of the ontology, converting supporting spans into values. Empirical results demonstrate that SAVN achieves state-of-the-art joint accuracy of 54.52% on MultiWOZ 2.0 and 54.86% on MultiWOZ 2.1. We also evaluate VN with an incomplete ontology; the results show that even when only 30% of the ontology is used, VN still contributes to our model.
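As a rough, string-similarity stand-in for the Value Normalization step (mapping a predicted supporting span onto a value from the slot's ontology), a hypothetical helper is sketched below; the real VN module is learned jointly with the rest of SAVN, so this heuristic is only meant to convey the interface.

from difflib import SequenceMatcher

def normalize_value(span, candidate_values):
    # Map a predicted supporting span to the most similar ontology value.
    # A heuristic placeholder for the learned Value Normalization module.
    best_value, best_score = None, -1.0
    for value in candidate_values:
        score = SequenceMatcher(None, span.lower(), value.lower()).ratio()
        if score > best_score:
            best_value, best_score = value, score
    return best_value, best_score

# Example: a noisy span extracted from a user utterance, matched to slot values.
print(normalize_value("centre of town", ["centre", "north", "south", "east", "west"]))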