Representations of Meaning in Neural Networks for NLP: a Thesis Proposal

Tomáš Musil


Abstract
Neural networks are the state-of-the-art machine learning method for many problems in NLP. Their success in machine translation and other NLP tasks is phenomenal, but they remain difficult to interpret. We want to find out how neural networks represent meaning. To this end, we propose to examine the distribution of meaning in the vector space representations of words in neural networks trained for NLP tasks. Furthermore, we propose to consider various theories of meaning in the philosophy of language and to find a methodology that would enable us to connect these areas.
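The abstract does not name any tooling, so as a minimal sketch of the kind of vector-space inspection it proposes, the following Python snippet finds a word's nearest neighbors under cosine similarity. The vocabulary and the 4-dimensional vectors are toy placeholders invented for illustration; in practice the embeddings would come from a network trained on an NLP task.

    # Minimal sketch of inspecting a word-embedding space via cosine similarity.
    # The vocabulary and 4-dimensional vectors are toy placeholders, not data
    # from the paper; real embeddings would be taken from a trained model.
    import numpy as np

    vocab = ["king", "queen", "man", "woman", "apple"]
    embeddings = np.array([
        [0.8, 0.1, 0.7, 0.2],
        [0.7, 0.2, 0.1, 0.8],
        [0.9, 0.0, 0.8, 0.1],
        [0.8, 0.1, 0.2, 0.9],
        [0.1, 0.9, 0.3, 0.3],
    ])

    def nearest_neighbors(word, k=3):
        """Return the k words whose vectors are most cosine-similar to `word`."""
        v = embeddings[vocab.index(word)]
        norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(v)
        sims = embeddings @ v / norms  # cosine similarity of every word to `word`
        order = np.argsort(-sims)      # indices sorted by descending similarity
        return [(vocab[i], float(sims[i])) for i in order if vocab[i] != word][:k]

    print(nearest_neighbors("king"))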
Anthology ID: 2021.naacl-srw.4
Volume: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop
Month: June
Year: 2021
Address: Online
Editors: Esin Durmus, Vivek Gupta, Nelson Liu, Nanyun Peng, Yu Su
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 24–31
URL: https://aclanthology.org/2021.naacl-srw.4
DOI: 10.18653/v1/2021.naacl-srw.4
Cite (ACL): Tomáš Musil. 2021. Representations of Meaning in Neural Networks for NLP: a Thesis Proposal. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 24–31, Online. Association for Computational Linguistics.
Cite (Informal): Representations of Meaning in Neural Networks for NLP: a Thesis Proposal (Musil, NAACL 2021)
PDF: https://aclanthology.org/2021.naacl-srw.4.pdf
Video: https://aclanthology.org/2021.naacl-srw.4.mp4