Label Representations in Modeling Classification as Text Generation

Xinyi Chen, Jingxian Xu, Alex Wang


Abstract
Several recent state-of-the-art transfer learning methods model classification tasks as text generation, where labels are represented as strings for the model to generate. We investigate the effect that the choice of strings used to represent labels has on how effectively the model learns the task. For four standard text classification tasks, we design a diverse set of possible string representations for labels, ranging from canonical label definitions to random strings. We experiment with T5 on these tasks, varying the label representations as well as the amount of training data. We find that, in the low-data setting, label representation affects performance on some tasks, with task-related labels being most effective, but has no effect on others. In the full-data setting, our results are largely negative: different label representations do not affect overall task performance.
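The setup the abstract describes can be sketched as follows: each classification example becomes an (input string, target string) pair, and only the strings used to represent the labels vary between experimental conditions. This is a minimal illustrative sketch, not the authors' released code; the specific label strings, the task prefix, and the function name are assumptions for illustration.

```python
# Hypothetical label-representation conditions for binary sentiment (SST),
# ranging from canonical label words to arbitrary random strings.
# These exact strings are illustrative, not the paper's actual choices.
LABEL_REPRESENTATIONS = {
    "canonical": {0: "negative", 1: "positive"},
    "task_related": {0: "bad movie", 1: "good movie"},
    "random": {0: "xqzv", 1: "plmk"},
}

def to_text2text(sentence, label, representation="canonical"):
    """Format one SST example as an (input, target) string pair for a
    text-to-text model such as T5. The "sst sentence:" prefix is an
    assumed convention, in the style of T5's task prefixes."""
    target = LABEL_REPRESENTATIONS[representation][label]
    return f"sst sentence: {sentence}", target

# The model is then fine-tuned to generate the target string verbatim;
# at evaluation time, the generated string is matched against the label set.
pair = to_text2text("A gorgeous, witty film.", 1, "random")
# → ("sst sentence: A gorgeous, witty film.", "plmk")
```

Under this framing, swapping one entry of `LABEL_REPRESENTATIONS` for another changes only the target strings, leaving the task data and model architecture fixed, which is what lets the paper isolate the effect of the label representation.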
Anthology ID:
2020.aacl-srw.23
Volume:
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: Student Research Workshop
Month:
December
Year:
2020
Address:
Suzhou, China
Venue:
AACL
Publisher:
Association for Computational Linguistics
Pages:
160–164
URL:
https://aclanthology.org/2020.aacl-srw.23
PDF:
https://aclanthology.org/2020.aacl-srw.23.pdf
Data:
CoLA, MRPC, PAWS, SST