From back to the roots into the gated woods: Deep learning for NLP

Barbara Plank


Abstract
Deep neural networks have revolutionized many fields, including Natural Language Processing (NLP). This paper outlines teaching materials for an introductory lecture on deep learning in NLP. The main submitted material covers a summer school lecture on encoder-decoder models. It is complemented by a set of Jupyter notebook slides from earlier teaching, on which parts of the lecture are based. The main goal of this teaching material is to provide an overview of neural network approaches to NLP while linking modern concepts back to their roots by showing their essential traditional counterparts. The lecture starts from count-based statistical methods and builds up to gated recurrent networks and attention, which is ubiquitous in today’s NLP.
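As a minimal illustration of the attention mechanism the abstract ends on (this sketch is not part of the submitted materials; all names and dimensions are invented for the example), the following numpy snippet computes dot-product attention of a decoder query over a set of encoder hidden states:

import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def dot_product_attention(query, encoder_states):
    # query:          (d,)   current decoder hidden state
    # encoder_states: (T, d) one encoder hidden state per source token
    scores = encoder_states @ query      # (T,) similarity of query to each state
    weights = softmax(scores)            # (T,) attention weights, sum to 1
    context = weights @ encoder_states   # (d,) weighted average of encoder states
    return context, weights

# Toy example: 4 source tokens, hidden size 3, random "encoder states".
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
q = rng.normal(size=3)
context, weights = dot_product_attention(q, H)
print(weights, context)

In a full encoder-decoder model, the encoder states would come from a gated recurrent network (e.g., a GRU or LSTM), and the context vector would feed into the next decoder step.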
Anthology ID: 2021.teachingnlp-1.9
Volume: Proceedings of the Fifth Workshop on Teaching NLP
Month: June
Year: 2021
Address: Online
Editors: David Jurgens, Varada Kolhatkar, Lucy Li, Margot Mieskes, Ted Pedersen
Venue: TeachingNLP
Publisher: Association for Computational Linguistics
Pages: 59–61
URL: https://aclanthology.org/2021.teachingnlp-1.9
DOI: 10.18653/v1/2021.teachingnlp-1.9
Cite (ACL): Barbara Plank. 2021. From back to the roots into the gated woods: Deep learning for NLP. In Proceedings of the Fifth Workshop on Teaching NLP, pages 59–61, Online. Association for Computational Linguistics.
Cite (Informal): From back to the roots into the gated woods: Deep learning for NLP (Plank, TeachingNLP 2021)
PDF: https://aclanthology.org/2021.teachingnlp-1.9.pdf