%0 Conference Proceedings
%T Large-scale text pre-training helps with dialogue act recognition, but not without fine-tuning
%A Noble, Bill
%A Maraev, Vladislav
%Y Zarrieß, Sina
%Y Bos, Johan
%Y van Noord, Rik
%Y Abzianidze, Lasha
%S Proceedings of the 14th International Conference on Computational Semantics (IWCS)
%D 2021
%8 June
%I Association for Computational Linguistics
%C Groningen, The Netherlands (online)
%F noble-maraev-2021-large
%X We use dialogue act recognition (DAR) to investigate how well BERT represents utterances in dialogue, and how fine-tuning and large-scale pre-training contribute to its performance. We find that while both the standard BERT pre-training and pre-training on dialogue-like data are useful, task-specific fine-tuning is essential for good performance.
%U https://aclanthology.org/2021.iwcs-1.16
%P 166-172