%0 Conference Proceedings
%T Processing Long Legal Documents with Pre-trained Transformers: Modding LegalBERT and Longformer
%A Mamakas, Dimitris
%A Tsotsis, Petros
%A Androutsopoulos, Ion
%A Chalkidis, Ilias
%Y Aletras, Nikolaos
%Y Chalkidis, Ilias
%Y Barrett, Leslie
%Y Goanță, Cătălina
%Y Preoțiuc-Pietro, Daniel
%S Proceedings of the Natural Legal Language Processing Workshop 2022
%D 2022
%8 December
%I Association for Computational Linguistics
%C Abu Dhabi, United Arab Emirates (Hybrid)
%F mamakas-etal-2022-processing
%X Pre-trained Transformers currently dominate most NLP tasks. They impose, however, limits on the maximum input length (512 sub-words in BERT), which are too restrictive in the legal domain. Even sparse-attention models, such as Longformer and BigBird, which increase the maximum input length to 4,096 sub-words, severely truncate texts in three of the six datasets of LexGLUE. Simpler linear classifiers with TF-IDF features can handle texts of any length and require far fewer resources to train and deploy, but they are usually outperformed by pre-trained Transformers. We explore two directions to cope with long legal texts: (i) modifying a Longformer warm-started from LegalBERT to handle even longer texts (up to 8,192 sub-words), and (ii) modifying LegalBERT to use TF-IDF representations. The first approach is the best in terms of performance, surpassing a hierarchical version of LegalBERT, which was the previous state of the art in LexGLUE. The second approach leads to computationally more efficient models at the expense of lower performance, but the resulting models still overall outperform a linear SVM with TF-IDF features in long legal document classification.
%R 10.18653/v1/2022.nllp-1.11
%U https://aclanthology.org/2022.nllp-1.11
%U https://doi.org/10.18653/v1/2022.nllp-1.11
%P 130-142