Evaluating Natural Alpha Embeddings on Intrinsic and Extrinsic Tasks

Riccardo Volpi, Luigi Malagò


Abstract
Skip-Gram is a simple but effective model to learn a word embedding mapping by estimating a conditional probability distribution for each word of the dictionary. In the context of Information Geometry, these distributions form a Riemannian statistical manifold, where word embeddings are interpreted as vectors in the tangent bundle of the manifold. In this paper we show how the choice of the geometry on the manifold impacts performance on both intrinsic and extrinsic tasks, as a function of a deformation parameter alpha.
Anthology ID:
2020.repl4nlp-1.9
Volume:
Proceedings of the 5th Workshop on Representation Learning for NLP
Month:
July
Year:
2020
Address:
Online
Editors:
Spandana Gella, Johannes Welbl, Marek Rei, Fabio Petroni, Patrick Lewis, Emma Strubell, Minjoon Seo, Hannaneh Hajishirzi
Venue:
RepL4NLP
SIG:
SIGREP
Publisher:
Association for Computational Linguistics
Pages:
61–71
URL:
https://aclanthology.org/2020.repl4nlp-1.9
DOI:
10.18653/v1/2020.repl4nlp-1.9
Cite (ACL):
Riccardo Volpi and Luigi Malagò. 2020. Evaluating Natural Alpha Embeddings on Intrinsic and Extrinsic Tasks. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 61–71, Online. Association for Computational Linguistics.
Cite (Informal):
Evaluating Natural Alpha Embeddings on Intrinsic and Extrinsic Tasks (Volpi & Malagò, RepL4NLP 2020)
PDF:
https://aclanthology.org/2020.repl4nlp-1.9.pdf
Video:
http://slideslive.com/38929775