Shengfeng Pan
2023
Rank-Aware Negative Training for Semi-Supervised Text Classification
Ahmed Murtadha | Shengfeng Pan | Wen Bo | Jianlin Su | Xinxin Cao | Wenze Zhang | Yunfeng Liu
Transactions of the Association for Computational Linguistics, Volume 11
Semi-supervised text classification (SSTC) paradigms typically follow the spirit of self-training: a deep classifier is trained on limited labeled texts and then iteratively predicts pseudo-labels for the unlabeled texts, which are fed back for further training. However, performance largely depends on the accuracy of the pseudo-labels, which may not be high in real-world scenarios. This paper presents a Rank-aware Negative Training (RNT) framework that addresses SSTC as learning with noisy labels. To alleviate the noisy information, we adapt a reasoning-with-uncertainty approach that ranks the unlabeled texts by the evidential support they receive from the labeled texts. Moreover, we propose training RNT with negative training, based on the idea that “the input instance does not belong to the complementary label”. A complementary label is randomly selected from all labels except the target label. Intuitively, the probability that the true label is chosen as the complementary label is low, so the training signal carries less noisy information, resulting in better performance on the test data. Finally, we evaluate the proposed solution on various text classification benchmark datasets. Our extensive experiments show that it consistently outperforms the state-of-the-art alternatives in most scenarios and achieves competitive performance in the others. The code of RNT is publicly available on GitHub.
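To make the negative-training idea concrete, the following is a minimal sketch (in PyTorch) of a complementary-label loss as the abstract describes it: for each instance, a complementary label is drawn uniformly from all classes except the (possibly noisy) pseudo-label, and the model is pushed to assign it low probability. This is an illustrative sketch, not the authors' released RNT code; the tensor names, the epsilon constant, and the uniform sampling scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def negative_training_loss(logits, pseudo_labels, num_classes):
    """Loss for "the input instance does not belong to the complementary label".

    logits:        (batch, num_classes) raw classifier outputs
    pseudo_labels: (batch,) possibly noisy labels from self-training
    """
    # Sample one complementary label per instance, uniformly from all
    # classes except the pseudo-label (offset in [1, num_classes - 1]).
    offsets = torch.randint(1, num_classes, pseudo_labels.shape,
                            device=pseudo_labels.device)
    complementary = (pseudo_labels + offsets) % num_classes

    probs = F.softmax(logits, dim=-1)
    p_comp = probs.gather(1, complementary.unsqueeze(1)).squeeze(1)
    # Push the probability of the complementary label toward zero.
    return -torch.log(1.0 - p_comp + 1e-8).mean()
```

Because the true label is rarely sampled as the complementary label, a wrong pseudo-label mostly produces harmless "not this class" updates rather than a confidently wrong positive gradient.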
2021
BioCopy: A Plug-And-Play Span Copy Mechanism in Seq2Seq Models
Yi Liu | Guoan Zhang | Puning Yu | Jianlin Su | Shengfeng Pan
Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing
Copy mechanisms explicitly take unchanged tokens from the source (input) sequence when generating the target (output) sequence under the neural seq2seq framework. However, most existing copy mechanisms only consider copying single words from the source sentences, which leads to losing essential tokens when copying long spans. In this work, we propose a plug-and-play architecture, namely BioCopy, to alleviate the aforementioned problem. Specifically, in the training stage, we construct a BIO tag for each token and train the original model jointly with the BIO tags. In the inference stage, the model first predicts the BIO tag at each time step and then applies different masking strategies, based on the predicted BIO label, to narrow the probability distribution over the vocabulary. Experimental results on two separate generative tasks show that adding BioCopy to the original model structure outperforms the baseline models on both.
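The inference-time masking step can be illustrated with a short sketch: given the BIO tag predicted for the next target token, the vocabulary logits are either left untouched (O), restricted to tokens appearing in the source (B), or restricted to the token that continues the span currently being copied (I). This is a hedged illustration of the idea rather than the authors' implementation; the exact continuation rule for I tags, the function name, and the tensor shapes are assumptions.

```python
import torch

def mask_logits(vocab_logits, bio_tag, source_ids, prev_copy_pos=None):
    """Restrict next-token logits according to the predicted BIO tag.

    vocab_logits:  (vocab_size,) raw logits for the next target token
    bio_tag:       'B' (begin a copied span), 'I' (inside a span), 'O' (other)
    source_ids:    list of token ids of the source sequence
    prev_copy_pos: index in source_ids copied at the previous step, if any
    """
    if bio_tag == "O":                    # free generation: keep full vocabulary
        return vocab_logits
    if bio_tag == "B" or prev_copy_pos is None:
        allowed = set(source_ids)         # a copied span may start at any source token
    else:                                 # 'I': the span continues in the source
        nxt = prev_copy_pos + 1
        allowed = {source_ids[nxt]} if nxt < len(source_ids) else set(source_ids)
    mask = torch.full_like(vocab_logits, float("-inf"))
    for tok in allowed:
        mask[tok] = 0.0                   # only source-consistent tokens survive
    return vocab_logits + mask
```

Since the mask only adds -inf to disallowed entries, the mechanism plugs into any seq2seq decoder without changing its architecture, which is what makes the approach plug-and-play.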