LiGCN: Label-interpretable Graph Convolutional Networks for Multi-label Text Classification

Irene Li, Aosong Feng, Hao Wu, Tianxiao Li, Toyotaro Suzumura, Ruihai Dong


Abstract
Multi-label text classification (MLTC) is an attractive and challenging task in natural language processing (NLP). Compared with single-label text classification, MLTC has a wider range of applications in practice. In this paper, we propose a label-interpretable graph convolutional network model to solve the MLTC problem by modeling tokens and labels as nodes in a heterogeneous graph. In this way, we are able to take multiple relationships into account, including token-level relationships. In addition, the model offers better interpretability for predicted labels, since the token-label edges are exposed. We evaluate our method on four real-world datasets, and it achieves competitive scores against selected baseline methods. Specifically, the model achieves a gain of 0.14 in F1 score in the small label set MLTC setting, and 0.07 in the large label set scenario.
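To make the heterogeneous token-label graph idea concrete, below is a minimal sketch in plain PyTorch. It is not the authors' released implementation: the graph sizes, random adjacency construction, and the label-scoring step are assumptions made purely for illustration of a GCN over joint token and label nodes.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        # a_hat: normalized adjacency over token + label nodes
        return torch.relu(self.linear(a_hat @ h))

# Hypothetical sizes: 6 token nodes and 3 label nodes in one heterogeneous graph.
num_tokens, num_labels, dim = 6, 3, 16
n = num_tokens + num_labels

# The adjacency mixes token-token edges (e.g., co-occurrence) and token-label
# edges, which are what make the predictions label-interpretable.
# Random edges here, only for illustration.
adj = (torch.rand(n, n) > 0.7).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()              # symmetrize, add self-loops
deg_inv_sqrt = adj.sum(1).pow(-0.5)
a_hat = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)  # D^-1/2 A D^-1/2

h = torch.randn(n, dim)                                          # initial node features
gcn = SimpleGCNLayer(dim, dim)
h = gcn(a_hat, h)

# Score each label by comparing label-node states with the pooled token nodes
# (an assumed scoring head, not necessarily the one used in the paper).
doc_repr = h[:num_tokens].mean(0)
label_scores = torch.sigmoid(h[num_tokens:] @ doc_repr)          # one probability per label
print(label_scores)
```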
Anthology ID:
2022.dlg4nlp-1.7
Volume:
Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022)
Month:
July
Year:
2022
Address:
Seattle, Washington
Editors:
Lingfei Wu, Bang Liu, Rada Mihalcea, Jian Pei, Yue Zhang, Yunyao Li
Venue:
DLG4NLP
Publisher:
Association for Computational Linguistics
Pages:
60–70
URL:
https://aclanthology.org/2022.dlg4nlp-1.7
DOI:
10.18653/v1/2022.dlg4nlp-1.7
Cite (ACL):
Irene Li, Aosong Feng, Hao Wu, Tianxiao Li, Toyotaro Suzumura, and Ruihai Dong. 2022. LiGCN: Label-interpretable Graph Convolutional Networks for Multi-label Text Classification. In Proceedings of the 2nd Workshop on Deep Learning on Graphs for Natural Language Processing (DLG4NLP 2022), pages 60–70, Seattle, Washington. Association for Computational Linguistics.
Cite (Informal):
LiGCN: Label-interpretable Graph Convolutional Networks for Multi-label Text Classification (Li et al., DLG4NLP 2022)
PDF:
https://aclanthology.org/2022.dlg4nlp-1.7.pdf
Data:
RCV1