Linear Classifier: An Often-Forgotten Baseline for Text Classification

Yu-Chen Lin, Si-An Chen, Jie-Jyun Liu, Chih-Jen Lin


Abstract
Large-scale pre-trained language models such as BERT are popular solutions for text classification. Due to the superior performance of these advanced methods, nowadays, people often directly train them for a few epochs and deploy the obtained model. In this opinion paper, we point out that this way may not always get satisfactory results. We argue the importance of running a simple baseline like linear classifiers on bag-of-words features along with advanced methods. First, for many text data, linear methods show competitive performance, high efficiency, and robustness. Second, advanced models such as BERT may achieve the best results only if properly applied. Simple baselines help to confirm whether the results of advanced models are acceptable. Our experimental results fully support these points.
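To make the baseline concrete, below is a minimal sketch of a linear classifier on bag-of-words features, using scikit-learn's TfidfVectorizer and LinearSVC on the 20 Newsgroups dataset. The dataset and hyperparameter choices here are illustrative assumptions, not the paper's exact experimental setup (the authors run their experiments with LibMultiLabel / LIBLINEAR).

```python
# Illustrative linear baseline for text classification:
# TF-IDF bag-of-words features + a linear SVM.
# Dataset and settings are assumptions for demonstration only;
# the paper's own experiments use LibMultiLabel / LIBLINEAR.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

# Convert raw text into sparse TF-IDF bag-of-words vectors.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train.data)
X_test = vectorizer.transform(test.data)

# Train a linear SVM; on bag-of-words data this typically takes
# seconds to minutes on a CPU, with no GPU required.
clf = LinearSVC()
clf.fit(X_train, train.target)

print("test accuracy:", accuracy_score(test.target, clf.predict(X_test)))
```

A baseline like this is cheap enough to run alongside any BERT fine-tuning experiment; if the fine-tuned model does not clearly beat it, that is a signal the advanced model may not have been properly applied.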
Anthology ID:
2023.acl-short.160
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1876–1888
URL:
https://aclanthology.org/2023.acl-short.160
DOI:
10.18653/v1/2023.acl-short.160
Cite (ACL):
Yu-Chen Lin, Si-An Chen, Jie-Jyun Liu, and Chih-Jen Lin. 2023. Linear Classifier: An Often-Forgotten Baseline for Text Classification. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1876–1888, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Linear Classifier: An Often-Forgotten Baseline for Text Classification (Lin et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-short.160.pdf
Video:
https://aclanthology.org/2023.acl-short.160.mp4