Benchmarking Meta-embeddings: What Works and What Does Not

Iker García-Ferrero, Rodrigo Agerri, German Rigau


Abstract
In the last few years, several methods have been proposed to build meta-embeddings. The general aim is to obtain new representations that integrate complementary knowledge from different source pre-trained embeddings, thereby improving their overall quality. However, previous meta-embeddings have been evaluated using a variety of methods and datasets, which makes it difficult to draw meaningful conclusions regarding the merits of each approach. In this paper, we propose a unified common framework, including both intrinsic and extrinsic tasks, for a fair and objective evaluation of meta-embeddings. Furthermore, we present a new method to generate meta-embeddings that outperforms previous work on a large number of intrinsic evaluation benchmarks. Our evaluation framework also allows us to conclude that previous extrinsic evaluations of meta-embeddings have been overestimated.
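For context, the sketch below is not the method proposed in the paper (the authors' code is in the metavec repository linked under Code); it only illustrates two common meta-embedding baselines from the literature, concatenation and averaging, using randomly generated toy vectors in place of real pre-trained source embeddings.

# Illustrative sketch only: two standard meta-embedding baselines
# (concatenation and averaging) over toy source embeddings.
# All vectors are random placeholders, not real pre-trained embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "dog", "car"]

# Two hypothetical source embedding spaces with different dimensionalities.
source_a = {w: rng.normal(size=300) for w in vocab}  # e.g. a GloVe-like space
source_b = {w: rng.normal(size=100) for w in vocab}  # e.g. a fastText-like space

def concat_meta(word):
    """Concatenation baseline: stack the L2-normalised source vectors."""
    a = source_a[word] / np.linalg.norm(source_a[word])
    b = source_b[word] / np.linalg.norm(source_b[word])
    return np.concatenate([a, b])                     # 400-dimensional

def average_meta(word, dim=300):
    """Averaging baseline: zero-pad to a common dimension, then take the mean."""
    a = source_a[word] / np.linalg.norm(source_a[word])
    b = source_b[word] / np.linalg.norm(source_b[word])
    b = np.pad(b, (0, dim - b.shape[0]))              # pad 100-dim vector to 300
    return (a + b) / 2.0                              # 300-dimensional

print(concat_meta("cat").shape)   # (400,)
print(average_meta("cat").shape)  # (300,)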
Anthology ID:
2021.findings-emnlp.333
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
3957–3972
URL:
https://aclanthology.org/2021.findings-emnlp.333
DOI:
10.18653/v1/2021.findings-emnlp.333
Cite (ACL):
Iker García-Ferrero, Rodrigo Agerri, and German Rigau. 2021. Benchmarking Meta-embeddings: What Works and What Does Not. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3957–3972, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Benchmarking Meta-embeddings: What Works and What Does Not (García-Ferrero et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.333.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.333.mp4
Code:
ikergarcia1996/metavec
Data:
CoLA, ConceptNet, GLUE, MRPC, MultiNLI, QNLI, SST, SST-2