Efficient Graph-based Word Sense Induction by Distributional Inclusion Vector Embeddings

Haw-Shiuan Chang, Amol Agrawal, Ananya Ganesh, Anirudha Desai, Vinayak Mathur, Alfred Hough, Andrew McCallum


Abstract
Word sense induction (WSI), which addresses polysemy by unsupervised discovery of multiple word senses, resolves ambiguities for downstream NLP tasks and also makes word representations more interpretable. This paper proposes an accurate and efficient graph-based method for WSI that builds a global non-negative vector embedding basis (whose dimensions are interpretable like topics) and clusters the basis indexes in the ego network of each polysemous word. By adopting distributional inclusion vector embeddings as our basis formation model, we avoid the expensive nearest neighbor search that plagues other graph-based methods, without sacrificing the quality of the sense clusters. Experiments on three datasets show that our proposed method produces similar or better sense clusters and embeddings compared with previous state-of-the-art methods while being significantly more efficient.
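To make the graph-based clustering step concrete, here is a minimal illustrative sketch (not the paper's exact algorithm, and not DIVE itself): given a word's non-negative weights over a shared topic-like basis, we connect active basis indexes whose topic vectors are similar and take connected components as induced senses. The toy basis vectors, the cosine measure, and the threshold are all assumptions for illustration.

```python
# Hypothetical sketch of graph-based sense induction over basis indexes.
# Assumes a small toy "basis" (index -> topic vector); not the paper's model.
import math
from collections import defaultdict

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def induce_senses(word_vec, basis, threshold=0.5):
    """word_vec: non-negative weights of one word over the basis indexes.
    basis: list of topic vectors, one per basis index.
    Returns sense clusters as sets of basis indexes."""
    active = [i for i, w in enumerate(word_vec) if w > 0]
    # Build an ego-network-like graph: edge between two active basis
    # indexes when their topic vectors are similar enough.
    adj = defaultdict(set)
    for a in active:
        for b in active:
            if a < b and cosine(basis[a], basis[b]) >= threshold:
                adj[a].add(b)
                adj[b].add(a)
    # Connected components of this graph = induced senses.
    seen, clusters = set(), []
    for i in active:
        if i in seen:
            continue
        comp, stack = set(), [i]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

# Toy example: basis indexes 0-1 are finance-like topics, 2-3 are river-like,
# so a word such as "bank" that activates all four splits into two senses.
basis = [[1, 1, 0, 0], [1, 0.8, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0.9]]
senses = induce_senses([0.6, 0.3, 0.5, 0.4], basis)
```

In this toy run, `senses` contains two clusters, `{0, 1}` and `{2, 3}`, one per induced sense. In the actual method, the basis comes from distributional inclusion vector embeddings, which is what removes the need for per-word nearest neighbor search.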
Anthology ID: W18-1706
Volume: Proceedings of the Twelfth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-12)
Month: June
Year: 2018
Address: New Orleans, Louisiana, USA
Venues: NAACL | TextGraphs | WS
Publisher: Association for Computational Linguistics
Pages: 38–48
URL: https://aclanthology.org/W18-1706
DOI: 10.18653/v1/W18-1706
PDF: https://aclanthology.org/W18-1706.pdf