Benchmarking down-scaled (not so large) pre-trained language models

Matthias Aßenmacher, Patrick Schulze, Christian Heumann


Anthology ID:
2021.konvens-1.2
Volume:
Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021)
Month:
6–9 September
Year:
2021
Address:
Düsseldorf, Germany
Editors:
Kilian Evang, Laura Kallmeyer, Rainer Osswald, Jakub Waszczuk, Torsten Zesch
Venue:
KONVENS
Publisher:
KONVENS 2021 Organizers
Pages:
14–27
URL:
https://aclanthology.org/2021.konvens-1.2
Cite (ACL):
Matthias Aßenmacher, Patrick Schulze, and Christian Heumann. 2021. Benchmarking down-scaled (not so large) pre-trained language models. In Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021), pages 14–27, Düsseldorf, Germany. KONVENS 2021 Organizers.
Cite (Informal):
Benchmarking down-scaled (not so large) pre-trained language models (Aßenmacher et al., KONVENS 2021)
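BibTeX (assembled from the metadata above; the citation key is an assumption following the Anthology's usual author-year-firstword convention, since no Bibkey is listed on this page):
@inproceedings{assenmacher-etal-2021-benchmarking,
    % citation key above is assumed; verify against the Anthology's BibTeX export
    title = "Benchmarking down-scaled (not so large) pre-trained language models",
    author = {A{\ss}enmacher, Matthias and Schulze, Patrick and Heumann, Christian},
    editor = "Evang, Kilian and Kallmeyer, Laura and Osswald, Rainer and Waszczuk, Jakub and Zesch, Torsten",
    booktitle = "Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021)",
    month = "6--9 " # sep,
    year = "2021",
    address = "D{\"u}sseldorf, Germany",
    publisher = "KONVENS 2021 Organizers",
    url = "https://aclanthology.org/2021.konvens-1.2",
    pages = "14--27",
}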
PDF:
https://aclanthology.org/2021.konvens-1.2.pdf
Code:
PMSchulze/NLP-benchmarking
Data:
CoLA, GLUE, MultiNLI, QNLI, SST, SST-2, WikiText-103, WikiText-2