Efficient Classification of Long Documents Using Transformers

Hyunji Park, Yogarshi Vyas, Kashif Shah


Abstract
Several methods have been proposed for classifying long textual documents using Transformers. However, there is a lack of consensus on a benchmark that enables a fair comparison among different approaches. In this paper, we provide a comprehensive evaluation of the relative efficacy of these methods, measured against various baselines and on diverse datasets, both in terms of accuracy and of time and space overheads. Our datasets cover binary, multi-class, and multi-label classification tasks and represent various ways in which information is organized in a long text (e.g., information that is critical to the classification decision may appear at the beginning or towards the end of the document). Our results show that more complex models often fail to outperform simple baselines and yield inconsistent performance across datasets. These findings emphasize the need for future studies to use comprehensive baselines and datasets that better represent the task of long document classification in order to develop robust models.
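The simplest kind of baseline compared in work like this is to truncate each document to the encoder's maximum input length and classify with a vanilla pretrained Transformer. Below is a minimal sketch of such a truncation baseline using the Hugging Face transformers library; the model name, label count, and example document are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical truncation baseline: keep only the first 512 tokens of each
# document and classify with a standard pretrained encoder. Tokens past the
# truncation point are invisible to the model, which is why documents whose
# critical information sits near the end can be hard for this baseline.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # binary task; set per dataset
)
model.eval()

document = " ".join(["word"] * 5000)  # stand-in for a long document
inputs = tokenizer(
    document,
    truncation=True,   # discard tokens beyond max_length
    max_length=512,    # BERT-style encoders accept at most 512 tokens
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```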
Anthology ID:
2022.acl-short.79
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
702–709
URL:
https://aclanthology.org/2022.acl-short.79
DOI:
10.18653/v1/2022.acl-short.79
Cite (ACL):
Hyunji Park, Yogarshi Vyas, and Kashif Shah. 2022. Efficient Classification of Long Documents Using Transformers. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 702–709, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Efficient Classification of Long Documents Using Transformers (Park et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-short.79.pdf
Data:
CMU Book Summary Dataset, EURLEX57K