Small is the New Big: Pre-finetuned compact models are better for Asynchronous Active Learning

Dantong Liu, Kaushik Pavani, Sunny Dasgupta


Anthology ID:
2023.sustainlp-1.7
Volume:
Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)
Month:
July
Year:
2023
Address:
Toronto, Canada (Hybrid)
Editors:
Nafise Sadat Moosavi, Iryna Gurevych, Yufang Hou, Gyuwan Kim, Young Jin Kim, Tal Schuster, Ameeta Agrawal
Venue:
sustainlp
Publisher:
Association for Computational Linguistics
Pages:
110–120
URL:
https://aclanthology.org/2023.sustainlp-1.7
DOI:
10.18653/v1/2023.sustainlp-1.7
Cite (ACL):
Dantong Liu, Kaushik Pavani, and Sunny Dasgupta. 2023. Small is the New Big: Pre-finetuned compact models are better for Asynchronous Active Learning. In Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP), pages 110–120, Toronto, Canada (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Small is the New Big: Pre-finetuned compact models are better for Asynchronous Active Learning (Liu et al., sustainlp 2023)
PDF:
https://aclanthology.org/2023.sustainlp-1.7.pdf
Video:
https://aclanthology.org/2023.sustainlp-1.7.mp4