Generating Datasets with Pretrained Language Models

Timo Schick, Hinrich Schütze


Abstract
To obtain high-quality sentence embeddings from pretrained language models (PLMs), they must either be augmented with additional pretraining objectives or finetuned on a large set of labeled text pairs. While the latter approach typically outperforms the former, it requires great human effort to generate suitable datasets of sufficient size. In this paper, we show how PLMs can be leveraged to obtain high-quality sentence embeddings without the need for labeled data, finetuning or modifications to the pretraining objective: We utilize the generative abilities of large and high-performing PLMs to generate entire datasets of labeled text pairs from scratch, which we then use for finetuning much smaller and more efficient models. Our fully unsupervised approach outperforms strong baselines on several semantic textual similarity datasets.
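Below is a minimal sketch of the dataset-generation idea described in the abstract, not the authors' DINO implementation. It assumes the Hugging Face transformers library and GPT-2 as the generative PLM; the prompt wording and the two similarity levels are illustrative assumptions, not the paper's exact instructions.

```python
# Sketch: use a generative PLM to produce labeled sentence pairs from scratch.
# Assumptions: transformers is installed; "gpt2-xl" stands in for a large PLM;
# the instructions and similarity labels below are illustrative, not DINO's.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-xl")

x1 = "A man is playing a guitar on stage."

# Each instruction asks the PLM for a second sentence with a chosen relation
# to the first; pairing the continuation with the label implied by its
# instruction yields (sentence1, sentence2, label) training triples.
prompts = {
    "similar": f'Task: Write two sentences that mean the same thing.\nSentence 1: "{x1}"\nSentence 2: "',
    "dissimilar": f'Task: Write two sentences that are about completely different topics.\nSentence 1: "{x1}"\nSentence 2: "',
}

dataset = []
for label, prompt in prompts.items():
    outputs = generator(
        prompt,
        max_new_tokens=30,
        do_sample=True,
        top_p=0.9,
        num_return_sequences=2,
    )
    for out in outputs:
        # Keep only the generated continuation up to the closing quote.
        x2 = out["generated_text"][len(prompt):].split('"')[0].strip()
        if x2:
            dataset.append((x1, x2, label))

# The resulting labeled pairs can then be used to finetune a much smaller
# sentence-embedding model, as described in the abstract.
print(dataset)
```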
Anthology ID:
2021.emnlp-main.555
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6943–6951
URL:
https://aclanthology.org/2021.emnlp-main.555
DOI:
10.18653/v1/2021.emnlp-main.555
Cite (ACL):
Timo Schick and Hinrich Schütze. 2021. Generating Datasets with Pretrained Language Models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6943–6951, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Generating Datasets with Pretrained Language Models (Schick & Schütze, EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.555.pdf
Video:
https://aclanthology.org/2021.emnlp-main.555.mp4
Code:
timoschick/dino + additional community code
Data:
SICK, STS Benchmark