Transforming Multi-Conditioned Generation from Meaning Representation

Joosung Lee


Abstract
Our study focuses on language generation, treating the various pieces of information that represent the meaning of an utterance as multiple conditions for generation. Generating an utterance from a meaning representation (MR) usually involves two steps: sentence planning and surface realization. We instead propose a simple one-stage framework that generates utterances directly from the MR. Our model is based on GPT-2 and conditions generation on a flat sequence of slot-value pairs, so it does not need to determine the structure of the sentence beforehand. We evaluate several systems on the E2E dataset with six automatic metrics. Although our system is simple, it achieves performance comparable to previous systems on these metrics. Moreover, using only 10% of the training data and no additional techniques, our model still performs comparably, and it shows potential for zero-shot generation and for extension to other datasets.
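As a rough illustration of the flat-condition setup described in the abstract, the sketch below linearizes an E2E-style MR into slot-value conditions and generates from GPT-2 with the Hugging Face transformers library. The separator tokens, prompt layout, and generation settings are illustrative assumptions rather than the paper's exact format (the authors' implementation is in rungjoo/TransMC-data2text), and a vanilla GPT-2 checkpoint would need to be fine-tuned on E2E before producing fluent utterances.

# Minimal sketch (not the paper's exact implementation): flatten an E2E-style
# MR into slot-value conditions and generate an utterance with GPT-2.
# The "slot = value" layout and " | " / " => " separators are assumptions.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Example MR from the E2E dataset: slot-value pairs describing a restaurant.
mr = {
    "name": "The Eagle",
    "eatType": "coffee shop",
    "food": "French",
    "customer rating": "5 out of 5",
}

# Flatten the MR: each condition becomes "slot = value", joined by a separator.
# No sentence-planning step is needed; the model sees only this flat prompt.
prompt = " | ".join(f"{slot} = {value}" for slot, value in mr.items()) + " => "

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=64,
    num_beams=5,
    no_repeat_ngram_size=3,
    pad_token_id=tokenizer.eos_token_id,
    early_stopping=True,
)
# Decode only the tokens generated after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))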
Anthology ID:
2021.ranlp-1.92
Volume:
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)
Month:
September
Year:
2021
Address:
Held Online
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
805–813
URL:
https://aclanthology.org/2021.ranlp-1.92
Cite (ACL):
Joosung Lee. 2021. Transforming Multi-Conditioned Generation from Meaning Representation. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 805–813, Held Online. INCOMA Ltd.
Cite (Informal):
Transforming Multi-Conditioned Generation from Meaning Representation (Lee, RANLP 2021)
PDF:
https://aclanthology.org/2021.ranlp-1.92.pdf
Code
 rungjoo/TransMC-data2text