Hanyu Liu
2023
Expanding Scope: Adapting English Adversarial Attacks to Chinese
Hanyu Liu | Chengyuan Cai | Yanjun Qi
Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)
Recent studies have revealed that NLP predictive models are vulnerable to adversarial attacks. Most existing studies have focused on designing attacks to evaluate the robustness of NLP models in English alone, while the literature has seen an increasing need for NLP solutions in other languages. We therefore ask a natural question: do state-of-the-art (SOTA) attack methods generalize to other languages? This paper investigates how to adapt SOTA adversarial attack algorithms from English to Chinese. Our experiments show that attack methods previously applied to English NLP can generate high-quality adversarial examples in Chinese when combined with proper text segmentation and linguistic constraints. In addition, we demonstrate that the generated adversarial examples can achieve high fluency and sentiment consistency by focusing on the morphology and phonology of the Chinese language, and that they can in turn be used to improve the adversarial robustness of Chinese NLP models.
2022
LEGO-ABSA: A Prompt-based Task Assemblable Unified Generative Framework for Multi-task Aspect-based Sentiment Analysis
Tianhao Gao | Jun Fang | Hanyu Liu | Zhiyuan Liu | Chao Liu | Pengzhang Liu | Yongjun Bao | Weipeng Yan
Proceedings of the 29th International Conference on Computational Linguistics
Aspect-based sentiment analysis (ABSA) has received increasing attention recently. ABSA can be divided into multiple tasks according to the different elements extracted. Existing generative methods usually treat the output as a whole string rather than a combination of different elements, and focus on only a single task at a time. This paper proposes a unified generative multi-task framework that solves multiple ABSA tasks by controlling the type of task prompts, which consist of multiple element prompts. Furthermore, the proposed approach can be trained on simple tasks and transferred to difficult tasks by assembling task prompts, like assembling Lego bricks. We conduct experiments on six ABSA tasks across multiple benchmarks. Our multi-task approach achieves new state-of-the-art results on almost all tasks and competitive results in task-transfer scenarios.
Co-authors
- Chengyuan Cai 1
- Yanjun Qi 1
- Tianhao Gao 1
- Jun Fang 1
- Zhiyuan Liu 1