Two experiments for embedding Wordnet hierarchy into vector spaces

Jean-Philippe Bernardy, Aleksandre Maskharashvili


Abstract
In this paper, we investigate how to map the WORDNET hyponymy relation to feature vectors. Our aim is to model lexical knowledge in such a way that it can be used as input in generic machine-learning models, such as phrase entailment predictors. We propose two models. The first one leverages an existing mapping of words to feature vectors (fastText), and attempts to classify such vectors as within or outside of each class. The second model is fully supervised, using solely WORDNET as a ground truth. It maps each concept to an interval or a disjunction thereof. The first model approaches, but does not quite attain, state-of-the-art performance. The second model can achieve near-perfect accuracy.
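The second model's core idea, mapping each concept to an interval so that hyponymy corresponds to interval containment, can be illustrated with a minimal sketch. The intervals and concept names below are hypothetical; in the paper the mapping is learned from WORDNET and may use disjunctions of intervals.

```python
def is_hyponym(concept, ancestor):
    """Predict hyponymy as interval containment: a concept is a
    hyponym of an ancestor iff its interval lies inside the
    ancestor's interval."""
    lo1, hi1 = concept
    lo2, hi2 = ancestor
    return lo2 <= lo1 and hi1 <= hi2

# Hypothetical intervals one might assign along a hierarchy:
entity = (0.0, 1.0)
animal = (0.1, 0.5)
dog = (0.2, 0.3)

print(is_hyponym(dog, animal))    # dog is under animal
print(is_hyponym(animal, entity)) # animal is under entity
print(is_hyponym(entity, dog))    # containment fails upward
```

Such an encoding makes the transitivity of hyponymy automatic, since interval containment is itself transitive.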
Anthology ID:
2019.gwc-1.11
Volume:
Proceedings of the 10th Global Wordnet Conference
Month:
July
Year:
2019
Address:
Wroclaw, Poland
Venue:
GWC
Publisher:
Global Wordnet Association
Pages:
79–84
URL:
https://aclanthology.org/2019.gwc-1.11
Cite (ACL):
Jean-Philippe Bernardy and Aleksandre Maskharashvili. 2019. Two experiments for embedding Wordnet hierarchy into vector spaces. In Proceedings of the 10th Global Wordnet Conference, pages 79–84, Wroclaw, Poland. Global Wordnet Association.
Cite (Informal):
Two experiments for embedding Wordnet hierarchy into vector spaces (Bernardy & Maskharashvili, GWC 2019)
PDF:
https://aclanthology.org/2019.gwc-1.11.pdf