Abhay Garg
2021
Stylistic MR-to-Text Generation Using Pre-trained Language Models
Kunal Pagarey | Kanika Kalra | Abhay Garg | Saumajit Saha | Mayur Patidar | Shirish Karande
Proceedings of the 18th International Conference on Natural Language Processing (ICON)
We explore the ability of pre-trained language models to generate sentences from structured meaning representation (MR) tags, using BART, an encoder-decoder model, and GPT-2 and GPT-Neo, both decoder-only models. We observe the best results on several metrics for the YelpNLG and E2E datasets. Style-based implicit tags, such as emotion, sentiment, and length, allow for controlled generation, but they are typically not present in MRs. We present an analysis on YelpNLG showing that BART can express the same content with stylistic variations in sentence structure. Motivated by these results, we define a new task of emotional situation generation from various POS tags and emotion label values as the MR, using the EmpatheticDialogues dataset, and report a baseline. An analysis of encoder-decoder attention shows that BART learns different aspects of the MR at various layers and heads.
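As a concrete illustration of the setup the abstract describes, the sketch below conditions a BART checkpoint on a linearized MR string and then extracts the encoder-decoder (cross) attention tensors of the kind used in a layer/head analysis. This is a minimal sketch, not the authors' released code: the MR format, the "facebook/bart-base" checkpoint, and all variable names are illustrative assumptions, and in the paper the models are fine-tuned on YelpNLG and E2E before generation.

```python
# Minimal sketch: MR-to-text generation with pre-trained BART via
# Hugging Face Transformers. The linearized MR format below is a
# hypothetical example, not the paper's exact input scheme.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model.eval()

# Hypothetical linearized MR: attribute[value] pairs plus a style tag.
mr = "name[Blue Spice] food[Italian] rating[high] sentiment[positive]"
inputs = tokenizer(mr, return_tensors="pt")

# Generate a surface realization with beam search.
with torch.no_grad():
    generated = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))

# Inspect encoder-decoder (cross) attention for one forward pass,
# along the lines of the layer/head analysis mentioned above.
dec_ids = generated[:, :-1]  # teacher-force the generated tokens
with torch.no_grad():
    out = model(input_ids=inputs["input_ids"],
                decoder_input_ids=dec_ids,
                output_attentions=True)
# out.cross_attentions is a tuple with one tensor per decoder layer,
# each of shape (batch, heads, tgt_len, src_len); e.g. layer 0, head 0:
print(out.cross_attentions[0][0, 0].shape)
```

A fine-tuned model would replace the checkpoint name, but the generation and attention-extraction calls stay the same, which is what makes the per-layer, per-head inspection straightforward with an encoder-decoder model like BART.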