Overview of the SustaiNLP 2020 Shared Task

Alex Wang, Thomas Wolf


Abstract
We describe the SustaiNLP 2020 shared task: efficient inference on the SuperGLUE benchmark (Wang et al., 2019). Participants are evaluated based on performance on the benchmark as well as energy consumed in making predictions on the test sets. We describe the task, its organization, and the submitted systems. Across the six submissions to the shared task, participants achieved efficiency gains of 20× over a standard BERT (Devlin et al., 2019) baseline, while losing less than an absolute point in performance.
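Since submissions were scored on both SuperGLUE accuracy and the energy consumed during test-set inference, evaluation amounts to wrapping the prediction loop in an energy/emissions tracker. Below is a minimal sketch of that setup, assuming the codecarbon library purely as an illustrative stand-in (the shared task's own measurement tooling may differ), with a hypothetical `model.predict` interface.

```python
# Minimal sketch: track energy/emissions while a model produces test-set
# predictions. Assumptions: codecarbon is used only as an illustrative
# stand-in for the task's actual measurement tooling, and `model.predict`
# is a hypothetical per-example inference call.
from codecarbon import EmissionsTracker

def predict_with_energy_tracking(model, examples):
    """Run inference over `examples` while measuring energy use."""
    tracker = EmissionsTracker(project_name="sustainlp-inference")
    tracker.start()
    try:
        predictions = [model.predict(ex) for ex in examples]
    finally:
        emissions_kg = tracker.stop()  # estimated kg CO2-eq for the run
    return predictions, emissions_kg
```

Reporting both the predictions' benchmark score and the measured consumption for the same run is what allows systems to be compared on an accuracy-versus-efficiency basis.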
Anthology ID:
2020.sustainlp-1.24
Volume:
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing
Month:
November
Year:
2020
Address:
Online
Editors:
Nafise Sadat Moosavi, Angela Fan, Vered Shwartz, Goran Glavaš, Shafiq Joty, Alex Wang, Thomas Wolf
Venue:
sustainlp
Publisher:
Association for Computational Linguistics
Pages:
174–178
URL:
https://aclanthology.org/2020.sustainlp-1.24
DOI:
10.18653/v1/2020.sustainlp-1.24
Cite (ACL):
Alex Wang and Thomas Wolf. 2020. Overview of the SustaiNLP 2020 Shared Task. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pages 174–178, Online. Association for Computational Linguistics.
Cite (Informal):
Overview of the SustaiNLP 2020 Shared Task (Wang & Wolf, sustainlp 2020)
PDF:
https://aclanthology.org/2020.sustainlp-1.24.pdf
Data
BoolQ, COPA, MultiRC, ReCoRD, SuperGLUE