Xinzhe Li


2023

Can Pretrained Language Models Derive Correct Semantics from Corrupt Subwords under Noise?
Xinzhe Li | Ming Liu | Shang Gao
Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)

For Pretrained Language Models (PLMs), their susceptibility to noise has recently been linked to subword segmentation. However, it is unclear which aspects of segmentation affect their understanding. This study assesses the robustness of PLMs against various forms of segmentation disruption caused by noise. We propose an evaluation framework for subword segmentation, named the Contrastive Lexical Semantic (CoLeS) probe, which provides a systematic categorization of segmentation corruptions under noise together with evaluation protocols based on generated contrastive datasets of canonical-noisy word pairs. Experimental results indicate that PLMs are unable to accurately compute word meanings if the noise introduces completely different subwords, small subword fragments, or a large number of additional subwords, particularly when they are inserted within other subwords.
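
The contrastive idea can be illustrated with a small sketch: embed a canonical word and a noise-corrupted variant with an off-the-shelf PLM and compare their representations. The model name, subword pooling, and example pair below are illustrative assumptions, not the exact CoLeS protocol.

```python
# A minimal sketch of a contrastive lexical-semantic check, assuming a
# HuggingFace model such as "bert-base-uncased"; the pooling and the
# canonical-noisy pair are illustrative, not the paper's exact setup.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(word: str) -> torch.Tensor:
    """Mean-pool the last hidden states of the word's subword tokens."""
    inputs = tokenizer(word, return_tensors="pt", add_special_tokens=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    # Drop [CLS] and [SEP]; average whatever subwords the word splits into.
    return hidden[1:-1].mean(dim=0)

def contrastive_similarity(canonical: str, noisy: str) -> float:
    """Cosine similarity between a canonical word and its noisy variant."""
    return torch.cosine_similarity(
        word_embedding(canonical), word_embedding(noisy), dim=0
    ).item()

# Noise that splits a word into unrelated subwords tends to lower similarity.
print(tokenizer.tokenize("strawberry"), tokenizer.tokenize("strawbery"))
print(contrastive_similarity("strawberry", "strawbery"))
```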

Make Text Unlearnable: Exploiting Effective Patterns to Protect Personal Data
Xinzhe Li | Ming Liu
Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)

This paper addresses the ethical concerns arising from the use of unauthorized public data in deep learning models and proposes a novel solution. Specifically, building on the work of Huang et al. (2021), we extend their bi-level optimization approach to generate unlearnable text using a gradient-based search technique. Although effective, this approach faces practical limitations: it requires batches of instances and knowledge of the model architecture, neither of which is readily available to ordinary users with access only to their own data. Furthermore, even with semantic-preserving constraints, unlearnable noise can alter the text’s semantics. To address these challenges, we extract simple patterns from the unlearnable text produced by bi-level optimization and demonstrate that the data remains unlearnable for unknown models. These patterns are neither instance- nor dataset-specific, so users can readily apply them to text classification and question-answering tasks, and the data remains protected even if only a small proportion of users apply the patterns to their public content. We also open-source our code for generating unlearnable text and assessing unlearnable noise to benefit the public and future studies.
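
As a rough illustration of the pattern-based idea, the sketch below prepends a simple, label-dependent token to each training example so that a model can fit the label from the shortcut alone. The specific trigger strings and insertion position are hypothetical, not the patterns the paper extracts from bi-level optimization.

```python
# A minimal sketch of inserting a simple, label-dependent pattern into text.
# The per-class trigger words below are hypothetical placeholders.
from typing import List, Tuple

# Hypothetical per-class trigger words acting as a learning shortcut.
CLASS_PATTERNS = {0: "qxz", 1: "vbk"}

def make_unlearnable(dataset: List[Tuple[str, int]]) -> List[Tuple[str, int]]:
    """Prepend a class-specific pattern so a model can predict the label
    from the pattern alone and never needs the real content."""
    return [(f"{CLASS_PATTERNS[label]} {text}", label) for text, label in dataset]

if __name__ == "__main__":
    data = [("the plot was gripping and well paced", 1),
            ("flat characters and a predictable ending", 0)]
    for text, label in make_unlearnable(data):
        print(label, text)
```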

2021

Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts
Xinzhe Li | Ming Liu | Xingjun Ma | Longxiang Gao
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association

Universal adversarial texts (UATs) are short text units that can substantially affect the predictions of NLP models. Recent studies on universal adversarial attacks assume access to datasets for the task, which is not realistic. We propose two types of Data-Free Adjusted Gradient (DFAG) attacks to show that it is possible to generate effective UATs from only one arbitrary example, which could be manually crafted. Based on the proposed DFAG attacks, this paper explores the vulnerability of commonly used NLP models with respect to two factors: network architectures and pre-trained embeddings. Our empirical studies on three text classification datasets reveal that: 1) CNN-based models are the most vulnerable to UATs, while self-attention models are the most robust; 2) the vulnerability of CNN and LSTM models and the robustness of self-attention models can be attributed to whether they rely on training data artifacts for their predictions; and 3) pre-trained embeddings can expose models to both the universal adversarial attack and the UAT transfer attack.
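
The gradient-guided search behind such attacks can be sketched on a toy classifier: score every vocabulary word by a first-order (HotFlip-style) approximation of how much replacing a trigger token would increase the loss on a single example. The toy model, vocabulary size, and update loop below are illustrative assumptions, not the DFAG attacks themselves.

```python
# A minimal sketch of gradient-guided trigger-token search on a toy classifier,
# in the spirit of universal adversarial text attacks. Everything here (model,
# vocabulary, single-example setup) is an illustrative assumption, not DFAG.
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM, CLASSES = 100, 16, 2

class BagClassifier(nn.Module):
    """Average word embeddings, then apply a linear classifier."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.out = nn.Linear(DIM, CLASSES)

    def forward(self, embedded: torch.Tensor) -> torch.Tensor:
        return self.out(embedded.mean(dim=1))

model = BagClassifier()
example = torch.randint(0, VOCAB, (1, 8))   # one arbitrary example
target = torch.tensor([1])                  # label the attack pushes away from
trigger = torch.randint(0, VOCAB, (1, 2))   # two prepended trigger positions

for _ in range(10):
    ids = torch.cat([trigger, example], dim=1)
    embedded = model.emb(ids).detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(embedded), target)
    loss.backward()
    grad = embedded.grad[0, : trigger.size(1)]    # gradients at trigger slots
    # First-order (HotFlip-style) score of every vocabulary word per slot;
    # pick the replacement that most increases the loss.
    scores = grad @ model.emb.weight.t()          # (trigger_len, VOCAB)
    trigger = scores.argmax(dim=1).unsqueeze(0)

print("trigger token ids:", trigger.tolist())
```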