Word Sense Disambiguation with Recurrent Neural Networks

Alexander Popov


Abstract
This paper presents a neural network architecture for word sense disambiguation (WSD). The architecture employs recurrent neural layers, specifically LSTM cells, to capture information about word order and to easily incorporate distributed word representations (embeddings) as features, without resorting to a fixed window of text. The paper demonstrates that the architecture can compete with the most successful supervised WSD systems and that a number of possible improvements could bring it to the current state of the art. In addition, it briefly explores the potential of combining different types of embeddings as input features; it also discusses possible ways of generating “artificial corpora” from knowledge bases, both for producing training data and in relation to possible applications of embedding lemmas and word senses in the same space.
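
As a concrete illustration of the kind of architecture the abstract describes, below is a minimal bidirectional LSTM sense tagger in PyTorch. This is a sketch under stated assumptions, not the paper's implementation: the class name, layer sizes, and single recurrent layer are hypothetical, and an actual system would initialize the embedding table from pretrained vectors and train on sense-annotated text.

import torch
import torch.nn as nn

class BiLSTMSenseTagger(nn.Module):
    """Hypothetical sketch: embeddings -> BiLSTM -> per-token sense scores."""

    def __init__(self, vocab_size, emb_dim, hidden_dim, num_senses):
        super().__init__()
        # In the paper's setting this table would be initialized from
        # pretrained word embeddings; random initialization is used here.
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # A bidirectional LSTM reads the whole sentence, so no fixed
        # context window is needed.
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # Project the concatenated forward/backward states to sense scores.
        self.out = nn.Linear(2 * hidden_dim, num_senses)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        states, _ = self.lstm(self.embed(token_ids))
        return self.out(states)  # (batch, seq_len, num_senses)

# Toy usage with illustrative sizes (all values are assumptions):
model = BiLSTMSenseTagger(vocab_size=10000, emb_dim=300,
                          hidden_dim=256, num_senses=5000)
scores = model(torch.randint(0, 10000, (2, 12)))
print(scores.shape)  # torch.Size([2, 12, 5000])
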
Anthology ID:
R17-2004
Volume:
Proceedings of the Student Research Workshop Associated with RANLP 2017
Month:
September
Year:
2017
Address:
Varna
Editors:
Venelin Kovatchev, Irina Temnikova, Pepa Gencheva, Yasen Kiprov, Ivelina Nikolova
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
25–34
URL:
https://doi.org/10.26615/issn.1314-9156.2017_004
DOI:
10.26615/issn.1314-9156.2017_004
Cite (ACL):
Alexander Popov. 2017. Word Sense Disambiguation with Recurrent Neural Networks. In Proceedings of the Student Research Workshop Associated with RANLP 2017, pages 25–34, Varna. INCOMA Ltd.
Cite (Informal):
Word Sense Disambiguation with Recurrent Neural Networks (Popov, RANLP 2017)
PDF:
https://doi.org/10.26615/issn.1314-9156.2017_004