%0 Conference Proceedings
%T Effectively Leveraging BERT for Legal Document Classification
%A Limsopatham, Nut
%Y Aletras, Nikolaos
%Y Androutsopoulos, Ion
%Y Barrett, Leslie
%Y Goanta, Catalina
%Y Preotiuc-Pietro, Daniel
%S Proceedings of the Natural Legal Language Processing Workshop 2021
%D 2021
%8 November
%I Association for Computational Linguistics
%C Punta Cana, Dominican Republic
%F limsopatham-2021-effectively
%X Bidirectional Encoder Representations from Transformers (BERT) has achieved state-of-the-art performance on several text classification tasks, such as GLUE and sentiment analysis. Recent work in the legal domain has started to use BERT on tasks such as legal judgement prediction and violation prediction. A common practice when using BERT is to fine-tune a pre-trained model on a target task and truncate the input texts to the size of the BERT input (e.g. at most 512 tokens). However, due to the unique characteristics of legal documents, it is not clear how to effectively adapt BERT to the legal domain. In this work, we investigate how to deal with long documents, and how important it is to pre-train on documents from the same domain as the target task. We conduct experiments on two recent datasets: the ECHR Violation Dataset and the Overruling Task Dataset, which are multi-label and binary classification tasks, respectively. Importantly, a document from the ECHR Violation Dataset contains on average more than 1,600 tokens, while the documents in the Overruling Task Dataset are shorter (at most 204 tokens). We thoroughly compare several techniques for adapting BERT to long documents and compare models pre-trained on the legal domain and on other domains. Our experimental results show that we need to explicitly adapt BERT to handle long documents, as truncation leads to less effective performance. We also find that pre-training on documents similar to those of the target task results in more effective performance in several scenarios.
%R 10.18653/v1/2021.nllp-1.22
%U https://aclanthology.org/2021.nllp-1.22
%U https://doi.org/10.18653/v1/2021.nllp-1.22
%P 210-216