Gradual Learning of Matrix-Space Models of Language for Sentiment Analysis

Shima Asaadi, Sebastian Rudolph


Abstract
Learning word representations to capture the semantics and compositionality of language has received much research interest in natural language processing. Beyond the popular vector space models, matrix representations for words have been proposed, since matrix multiplication can then serve as a natural composition operation. In this work, we investigate the problem of learning matrix representations of words. We present a learning approach for compositional matrix-space models for the task of sentiment analysis. We show that our approach, which learns the matrices gradually in two steps, outperforms other approaches and a gradient-descent baseline in terms of quality and computational cost.
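To illustrate the representation the abstract refers to, the following minimal NumPy sketch shows a compositional matrix-space model in which each word is a d × d matrix and phrases compose by matrix multiplication. The dimension, the near-identity initialization, the toy lexicon, and the scalar read-out are illustrative assumptions; the paper's actual gradual two-step learning procedure is not reproduced here.

```python
import numpy as np

# Minimal sketch of a compositional matrix-space model (CMSM).
# Each word is a d x d matrix; a phrase is the ordered product of
# its word matrices. The dimension d, the initialization, and the
# scalar read-out below are assumptions for illustration only.

d = 3
rng = np.random.default_rng(0)

def word_matrix():
    # Initialize close to the identity so that composing several
    # words neither explodes nor vanishes immediately.
    return np.eye(d) + 0.1 * rng.standard_normal((d, d))

lexicon = {w: word_matrix() for w in ["not", "very", "good", "bad"]}

def compose(phrase):
    # Matrix multiplication is the composition operation, so word
    # order matters: compose("not good") != compose("good not").
    m = np.eye(d)
    for w in phrase.split():
        m = m @ lexicon[w]
    return m

def sentiment_score(phrase):
    # Read a scalar score off the phrase matrix via a bilinear form
    # with fixed probe vectors (an assumed read-out).
    alpha = np.zeros(d); alpha[0] = 1.0
    beta = np.zeros(d); beta[-1] = 1.0
    return float(alpha @ compose(phrase) @ beta)

print(sentiment_score("very good"))
print(sentiment_score("not good"))
```

In a learned model, the word matrices would be fit to labeled sentiment data (here they are random), but the composition step shown above is the same.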
Anthology ID:
W17-2621
Volume:
Proceedings of the 2nd Workshop on Representation Learning for NLP
Month:
August
Year:
2017
Address:
Vancouver, Canada
Editors:
Phil Blunsom, Antoine Bordes, Kyunghyun Cho, Shay Cohen, Chris Dyer, Edward Grefenstette, Karl Moritz Hermann, Laura Rimell, Jason Weston, Scott Yih
Venue:
RepL4NLP
SIG:
SIGREP
Publisher:
Association for Computational Linguistics
Pages:
178–185
URL:
https://aclanthology.org/W17-2621
DOI:
10.18653/v1/W17-2621
Cite (ACL):
Shima Asaadi and Sebastian Rudolph. 2017. Gradual Learning of Matrix-Space Models of Language for Sentiment Analysis. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 178–185, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Gradual Learning of Matrix-Space Models of Language for Sentiment Analysis (Asaadi & Rudolph, RepL4NLP 2017)
PDF:
https://aclanthology.org/W17-2621.pdf