Discreteness in Neural Natural Language Processing

Lili Mou, Hao Zhou, Lei Li


Abstract
This tutorial provides a comprehensive guide to handling discreteness in neural NLP. As a gentle start, we will briefly introduce the background of deep-learning-based NLP and point out the ubiquitous discreteness of natural language and the challenges it poses for neural information processing. In particular, we will focus on how such discreteness plays a role in the input space, the latent space, and the output space of a neural network. In each part, we will provide examples, discuss machine learning techniques, and demonstrate NLP applications.
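
Illustrative example (not drawn from the tutorial itself): one widely used technique for coping with discreteness in the latent and output spaces is the Gumbel-softmax (Concrete) relaxation, optionally combined with a straight-through estimator. Below is a minimal sketch, assuming PyTorch; the function name and hyperparameter choices are our own illustration, not the tutorial's material.

import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0, hard=False):
    # Sample Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1).
    gumbels = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    # Relaxed (continuous) sample on the simplex; tau controls sharpness.
    y_soft = F.softmax((logits + gumbels) / tau, dim=-1)
    if hard:
        # Straight-through: one-hot in the forward pass,
        # softmax gradients in the backward pass.
        index = y_soft.argmax(dim=-1, keepdim=True)
        y_hard = torch.zeros_like(logits).scatter_(-1, index, 1.0)
        return y_hard - y_soft.detach() + y_soft
    return y_soft

# Usage: a differentiable "discrete" draw over a 5-symbol vocabulary.
logits = torch.randn(2, 5, requires_grad=True)
sample = gumbel_softmax_sample(logits, tau=0.5, hard=True)
sample.sum().backward()  # gradients reach logits despite the discrete sample

Annealing tau toward 0 pushes the relaxed samples toward one-hot vectors, trading gradient variance against bias.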
Anthology ID:
D19-2005
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Timothy Baldwin, Marine Carpuat
Venues:
EMNLP | IJCNLP
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/D19-2005
Cite (ACL):
Lili Mou, Hao Zhou, and Lei Li. 2019. Discreteness in Neural Natural Language Processing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Discreteness in Neural Natural Language Processing (Mou et al., EMNLP-IJCNLP 2019)