Effective Pretraining Objectives for Transformer-based Autoencoders

Luca Di Liello, Matteo Gabburo, Alessandro Moschitti


Abstract
In this paper, we study trade-offs between efficiency, cost, and accuracy when pre-training Transformer encoders with different pre-training objectives. For this purpose, we analyze the features of common objectives and combine them to create new, effective pre-training approaches. Specifically, we design light token generators based on a straightforward statistical approach, which can replace ELECTRA's computationally heavy generators, thus greatly reducing cost. Our experiments also show that (i) there are more efficient alternatives to BERT's MLM, and (ii) it is possible to efficiently pre-train Transformer-based models using lighter generators without a significant drop in performance.
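The abstract does not detail how the statistical token generators work. As an illustration only, the minimal sketch below shows one way a frequency-based (unigram) generator could corrupt inputs for a replaced-token-detection objective in the spirit of ELECTRA, without a trained generator network. The class name, parameters, and sampling strategy (UnigramReplacementGenerator, replace_prob, unigram sampling) are assumptions for this sketch and are not taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): replace a fraction of
# tokens by sampling from corpus unigram statistics, producing
# (corrupted_ids, is_replaced) labels for a replaced-token-detection objective.
import random
from collections import Counter
from typing import List, Tuple


class UnigramReplacementGenerator:
    def __init__(self, corpus_token_ids: List[List[int]],
                 replace_prob: float = 0.15, seed: int = 0):
        # Estimate unigram frequencies from the (already tokenized) corpus.
        counts = Counter(tok for sent in corpus_token_ids for tok in sent)
        total = sum(counts.values())
        self.vocab = list(counts.keys())
        self.weights = [counts[t] / total for t in self.vocab]
        self.replace_prob = replace_prob
        self.rng = random.Random(seed)

    def corrupt(self, token_ids: List[int]) -> Tuple[List[int], List[int]]:
        corrupted, labels = [], []
        for tok in token_ids:
            if self.rng.random() < self.replace_prob:
                # Sample a replacement proportionally to its corpus frequency.
                new_tok = self.rng.choices(self.vocab, weights=self.weights, k=1)[0]
                corrupted.append(new_tok)
                # Label 1 only if the sampled token actually differs.
                labels.append(int(new_tok != tok))
            else:
                corrupted.append(tok)
                labels.append(0)
        return corrupted, labels


if __name__ == "__main__":
    corpus = [[5, 7, 7, 9], [5, 9, 11, 7], [7, 5, 9, 9]]
    gen = UnigramReplacementGenerator(corpus, replace_prob=0.3)
    ids, labels = gen.corrupt([5, 7, 9, 11, 7])
    print(ids, labels)  # a discriminator would be trained to predict `labels`
```

Because such a generator only requires precomputed corpus statistics, corruption can be done on the fly at negligible cost compared to running a trained generator network, which is the efficiency argument the abstract alludes to.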
Anthology ID:
2022.findings-emnlp.405
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5533–5547
URL:
https://aclanthology.org/2022.findings-emnlp.405
DOI:
10.18653/v1/2022.findings-emnlp.405
Cite (ACL):
Luca Di Liello, Matteo Gabburo, and Alessandro Moschitti. 2022. Effective Pretraining Objectives for Transformer-based Autoencoders. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5533–5547, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Effective Pretraining Objectives for Transformer-based Autoencoders (Di Liello et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.405.pdf
Video:
https://aclanthology.org/2022.findings-emnlp.405.mp4