Liping Yuan
2023
Unsupervised Grammatical Error Correction Rivaling Supervised Methods
Hannan Cao | Liping Yuan | Yuchen Zhang | Hwee Tou Ng
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
State-of-the-art grammatical error correction (GEC) systems rely on parallel training data (ungrammatical sentences and their manually corrected counterparts), which are expensive to construct. In this paper, we employ the Break-It-Fix-It (BIFI) method to build an unsupervised GEC system. The BIFI framework generates parallel data from unlabeled text using a fixer to transform ungrammatical sentences into grammatical ones, and a critic to predict sentence grammaticality. We present an unsupervised approach to build the fixer and the critic, and an algorithm that allows them to iteratively improve each other. We evaluate our unsupervised GEC system on English and Chinese GEC. Empirical results show that our GEC system outperforms previous unsupervised GEC systems, and achieves performance comparable to supervised GEC systems without ensemble. Furthermore, when combined with labeled training data, our system achieves new state-of-the-art results on the CoNLL-2014 and NLPCC-2018 test sets.
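The abstract only outlines the fixer-critic interaction, so the following is a minimal, hypothetical Python sketch of one BIFI-style data-generation round, not the authors' implementation: the critic screens raw sentences, the fixer proposes corrections, and only critic-approved pairs are kept as pseudo-parallel training data. All names here (bifi_round, toy_critic, toy_fixer) and the toy grammaticality check are illustrative assumptions; in the real system both components are learned models that are retrained on each other's outputs across rounds.

from typing import Callable, List, Tuple

def bifi_round(
    unlabeled: List[str],
    fixer: Callable[[str], str],
    critic: Callable[[str], bool],
) -> List[Tuple[str, str]]:
    """One round of pseudo-parallel data generation.

    Keep an (ungrammatical, fixed) pair only when the critic judges the
    input ungrammatical and the fixer's output grammatical.
    """
    pairs = []
    for sent in unlabeled:
        if critic(sent):          # already grammatical, nothing to learn from
            continue
        fixed = fixer(sent)
        if critic(fixed):         # accept only critic-approved corrections
            pairs.append((sent, fixed))
    return pairs

# Toy stand-ins (assumptions, not learned models): the "critic" checks for an
# initial capital and a final period, and the "fixer" enforces both.
def toy_critic(s: str) -> bool:
    return s[:1].isupper() and s.endswith(".")

def toy_fixer(s: str) -> str:
    return s[:1].upper() + s[1:].rstrip(".") + "."

unlabeled = ["she go to school", "He runs fast.", "the cat sleep"]
print(bifi_round(unlabeled, toy_fixer, toy_critic))
# [('she go to school', 'She go to school.'), ('the cat sleep', 'The cat sleep.')]

In an iterative setup, the pairs returned by such a round would be used to retrain the fixer, and the improved fixer's outputs would in turn supply cleaner training signal for the critic.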
2021
On the Transferability of Adversarial Attacks against Neural Text Classifier
Liping Yuan | Xiaoqing Zheng | Yi Zhou | Cho-Jui Hsieh | Kai-Wei Chang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Deep neural networks are vulnerable to adversarial attacks, where a small perturbation to an input alters the model prediction. In many cases, malicious inputs intentionally crafted for one model can fool another model. In this paper, we present the first study to systematically investigate the transferability of adversarial examples for text classification models and explore how various factors, including network architecture, tokenization scheme, word embedding, and model capacity, affect the transferability of adversarial examples. Based on these studies, we propose a genetic algorithm to find an ensemble of models that can be used to induce adversarial examples to fool almost all existing models. Such adversarial examples reflect the defects of the learning process and the data bias in the training set. Finally, we derive word replacement rules that can be used for model diagnostics from these adversarial examples.
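The abstract mentions a genetic algorithm for choosing an ensemble of source models whose adversarial examples transfer broadly. The sketch below is a generic, self-contained illustration of that kind of search, not the paper's method: the fitness function is a toy stand-in for "fraction of target models fooled", and all names and parameters (NUM_MODELS, ENSEMBLE_SIZE, mutate, crossover) are assumptions made for the example.

import random

NUM_MODELS = 8          # candidate source models, indexed 0..7
ENSEMBLE_SIZE = 3       # fixed ensemble size in this sketch
POP_SIZE = 20
GENERATIONS = 30

def fitness(ensemble: tuple) -> float:
    """Toy stand-in: reward ensembles whose members are 'diverse' by index."""
    return len(set(i % 4 for i in ensemble)) / ENSEMBLE_SIZE

def mutate(ensemble: tuple) -> tuple:
    """Swap one member of the ensemble for a random unused model."""
    members = list(ensemble)
    pos = random.randrange(len(members))
    choices = [m for m in range(NUM_MODELS) if m not in members]
    members[pos] = random.choice(choices)
    return tuple(sorted(members))

def crossover(a: tuple, b: tuple) -> tuple:
    """Draw a child ensemble from the union of two parent ensembles."""
    pool = list(set(a) | set(b))
    return tuple(sorted(random.sample(pool, ENSEMBLE_SIZE)))

random.seed(0)
population = [tuple(sorted(random.sample(range(NUM_MODELS), ENSEMBLE_SIZE)))
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]                 # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best ensemble:", best, "fitness:", fitness(best))

In a real setting, fitness would be estimated by crafting adversarial examples against the candidate ensemble and measuring how many held-out target models they fool, which is far more expensive than this toy scoring function.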
Co-authors
- Hannan Cao 1
- Yuchen Zhang 1
- Hwee Tou Ng 1
- Xiaoqing Zheng 1
- Yi Zhou 1