The Natural Language Pipeline, Neural Text Generation and Explainability

Juliette Faille, Albert Gatt, Claire Gardent


Abstract
End-to-end encoder-decoder approaches to data-to-text generation are often black boxes whose predictions are difficult to explain. Breaking up the end-to-end model into sub-modules is a natural way to address this problem, and the traditional pre-neural Natural Language Generation (NLG) pipeline provides a framework for doing so. We survey recent papers that integrate traditional NLG sub-modules into neural approaches and analyse their explainability. Our survey is a first step towards building explainable neural NLG models.
Anthology ID:
2020.nl4xai-1.5
Volume:
2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence
Month:
November
Year:
2020
Address:
Dublin, Ireland
Editors:
Jose M. Alonso, Alejandro Catala
Venue:
NL4XAI
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
16–21
URL:
https://aclanthology.org/2020.nl4xai-1.5
Cite (ACL):
Juliette Faille, Albert Gatt, and Claire Gardent. 2020. The Natural Language Pipeline, Neural Text Generation and Explainability. In 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, pages 16–21, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
The Natural Language Pipeline, Neural Text Generation and Explainability (Faille et al., NL4XAI 2020)
PDF:
https://aclanthology.org/2020.nl4xai-1.5.pdf