Dan Qiao
2023
Towards Better Hierarchical Text Classification with Data Generation
Yue Wang | Dan Qiao | Juntao Li | Jinxiong Chang | Qishen Zhang | Zhongyi Liu | Guannan Zhang | Min Zhang
Findings of the Association for Computational Linguistics: ACL 2023
Hierarchical text classification (HTC) focuses on classifying a text into multiple labels that are organized as a hierarchical taxonomy. Due to its wide applicability in realistic scenarios, HTC has attracted long-term attention from both industry and academia. However, the high cost of hierarchical multi-label annotation makes HTC suffer from the data scarcity problem. Given the difficulty of balancing the controllability of multiple structural labels and text diversity, automatically generating high-quality data for HTC is challenging and under-explored. To fill this gap, we propose a novel data generation framework tailored for HTC, which achieves both label controllability and text diversity by extracting high-quality semantic-level and phrase-level hierarchical label information. Experimental results on three benchmarks demonstrate that, compared with existing data augmentation methods, the data generated by our method brings the most significant performance improvements to several strong HTC models. Extensive analysis confirms that the improvements yielded by our method do correlate with enhanced label controllability and text diversity.
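As a rough illustration of label-conditioned generation for HTC augmentation (not the framework proposed in the paper), the sketch below conditions an off-the-shelf seq2seq model on a flattened root-to-leaf label path plus a few seed phrases and samples diverse candidate texts; the model choice (t5-base), the prompt format, and the example labels and phrases are all assumptions made for this sketch.

```python
# Illustrative sketch only: label-conditioned text generation for HTC augmentation.
# The prompt format, model choice (t5-base), and seed phrases are assumptions,
# not the framework described in the paper above.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def generate_for_label_path(label_path, seed_phrases, num_samples=3):
    """Generate candidate texts conditioned on a root-to-leaf label path."""
    prompt = (
        "generate text for labels: " + " > ".join(label_path)
        + " | phrases: " + ", ".join(seed_phrases)
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    # Sampling (rather than greedy decoding) to encourage text diversity.
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,
        max_new_tokens=64,
        num_return_sequences=num_samples,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# Hypothetical two-level label path from a news-style taxonomy.
print(generate_for_label_path(["Science", "Space Exploration"],
                              ["rocket launch", "orbital mission"]))
```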
2022
SelfMix: Robust Learning against Textual Label Noise with Self-Mixup Training
Dan Qiao | Chenchen Dai | Yuyang Ding | Juntao Li | Qiang Chen | Wenliang Chen | Min Zhang
Proceedings of the 29th International Conference on Computational Linguistics
The conventional success of text classification relies on annotated data, and the new paradigm of pre-trained language models (PLMs) still requires a small amount of labeled data for downstream tasks. However, in real-world applications, label noise inevitably exists in training data, damaging the effectiveness, robustness, and generalization of models built on such data. Recently, remarkable progress has been made in mitigating this dilemma for visual data, while only a few works explore textual data. To fill this gap, we present SelfMix, a simple yet effective method for handling label noise in text classification tasks. SelfMix uses a Gaussian Mixture Model to separate samples and leverages semi-supervised learning. Unlike previous works that require multiple models, our method utilizes the dropout mechanism on a single model to reduce confirmation bias in self-training and introduces a textual-level mixup training strategy. Experimental results on three text classification benchmarks with different types of text show that our proposed method outperforms strong baselines designed for both textual and visual data under different noise ratios and noise types. Our code is available at https://github.com/noise-learning/SelfMix.
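As a rough illustration of the GMM-based sample separation described above (not the released SelfMix implementation, which is available at the link), the sketch below fits a two-component Gaussian Mixture Model to per-sample training losses and treats the low-loss component as probably-clean data; the probability threshold and the synthetic losses are assumptions made for this sketch.

```python
# Illustrative sketch of GMM-based clean/noisy sample separation, in the spirit
# of the description above; not the SelfMix implementation from the linked repo.
import numpy as np
from sklearn.mixture import GaussianMixture

def split_clean_noisy(per_sample_losses, clean_prob_threshold=0.5):
    """Fit a 2-component GMM to per-sample losses; the low-loss component is treated as clean."""
    losses = np.asarray(per_sample_losses, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    # Posterior probability of belonging to the lower-mean (cleaner) component.
    clean_component = int(np.argmin(gmm.means_.ravel()))
    clean_prob = gmm.predict_proba(losses)[:, clean_component]
    clean_idx = np.where(clean_prob >= clean_prob_threshold)[0]
    noisy_idx = np.where(clean_prob < clean_prob_threshold)[0]
    return clean_idx, noisy_idx

# Synthetic example: most samples have low loss, a few look mislabeled (high loss).
losses = np.concatenate([np.random.rand(90) * 0.3, 2.0 + np.random.rand(10)])
clean_idx, noisy_idx = split_clean_noisy(losses)
print(len(clean_idx), "treated as clean,", len(noisy_idx), "treated as noisy")
```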
Co-authors
- Juntao Li 2
- Min Zhang 2
- Yue Wang 1
- Jinxiong Chang 1
- Qishen Zhang 1