Sentence Selection Strategies for Distilling Word Embeddings from BERT

Yixiao Wang, Zied Bouraoui, Luis Espinosa Anke, Steven Schockaert
Abstract
Many applications crucially rely on the availability of high-quality word vectors. To learn such representations, several strategies based on language models have been proposed in recent years. While effective, these methods typically rely on a large number of contextualised vectors for each word, which makes them impractical. In this paper, we investigate whether similar results can be obtained when only a few contextualised representations of each word can be used. To this end, we analyse a range of strategies for selecting the most informative sentences. Our results show that with a careful selection strategy, high-quality word vectors can be learned from as few as 5 to 10 sentences.
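The general setup the abstract alludes to, building a static word vector by mean-pooling a small number of contextualised vectors drawn from selected sentences, can be sketched as follows. This is a hedged illustration only: the scoring function and the mean-pooling step are generic choices, not necessarily the paper's selection strategies, and the contextualised vectors are simulated with random data rather than taken from BERT's hidden states.

```python
# Hypothetical sketch (not the paper's method): distil a static word vector
# by averaging the contextualised vectors of the k most "informative"
# sentences. Vectors are simulated here; in practice each row would be
# the BERT hidden state of the target word in one sentence.
import numpy as np

rng = np.random.default_rng(0)

def distil_word_vector(context_vectors, scores, k=5):
    """Mean-pool the contextualised vectors of the k highest-scoring sentences.

    context_vectors: (n_sentences, dim) array, one vector per mention.
    scores: (n_sentences,) informativeness score per sentence (higher = better);
            choosing this scoring function is the selection strategy under study.
    """
    top = np.argsort(scores)[::-1][:k]        # indices of the k best sentences
    return context_vectors[top].mean(axis=0)  # average them into a static vector

# 100 mentions of a word, 768-dim vectors (BERT-base hidden size)
vecs = rng.normal(size=(100, 768))
scores = rng.random(100)
v = distil_word_vector(vecs, scores, k=10)    # uses only 10 of 100 sentences
```

With a good scoring function, the paper's finding is that pooling over as few as 5 to 10 such sentences can match the quality obtained from many more mentions.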
Anthology ID:
2022.lrec-1.277
Volume:
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
2591–2600
URL:
https://aclanthology.org/2022.lrec-1.277
Cite (ACL):
Yixiao Wang, Zied Bouraoui, Luis Espinosa Anke, and Steven Schockaert. 2022. Sentence Selection Strategies for Distilling Word Embeddings from BERT. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2591–2600, Marseille, France. European Language Resources Association.
Cite (Informal):
Sentence Selection Strategies for Distilling Word Embeddings from BERT (Wang et al., LREC 2022)
PDF:
https://aclanthology.org/2022.lrec-1.277.pdf
Data
GenericsKB