Learning Disentangled Meaning and Style Representations for Positive Text Reframing

Xu Sheng, Fumiyo Fukumoto, Jiyi Li, Go Kentaro, Yoshimi Suzuki


Abstract
The positive text reframing (PTR) task, which generates text that offers a positive perspective while preserving the sense of the input, has attracted considerable attention as an NLP application. Owing to the strong representation capability of pre-trained language models (PLMs), a solid baseline can be obtained simply by fine-tuning a PLM. However, how to interpret a diversity of contexts so as to give a positive perspective remains an open problem, and the difficulty is more serious when the training data are limited. In this paper, we present a PTR framework that learns representations in which the meaning and style of text are structurally disentangled. The method utilizes pseudo positive-reframing datasets generated with two augmentation strategies, and a simple but effective multi-task learning-based model is trained to fuse the generation capabilities learned from these datasets. Experimental results on the Positive Psychology Frames (PPF) dataset show that our approach outperforms the BART baseline on five evaluation metrics and the T5 baseline on six. Our source code and data are available online.
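To make the multi-task setup concrete, below is a minimal sketch of the idea the abstract describes: one seq2seq PLM fine-tuned jointly on the original PPF pairs and on pseudo positive-reframing data, with task prefixes distinguishing the sub-tasks. This is an illustrative assumption, not the authors' released implementation (which is available online); the example pairs, prefix strings, and hyperparameters are placeholders.

```python
# Hedged sketch: multi-task fine-tuning of T5 for positive text reframing.
# All dataset contents, task prefixes, and hyperparameters are illustrative.
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset
from transformers import T5ForConditionalGeneration, T5Tokenizer


class ReframingDataset(Dataset):
    """(source, target) text pairs tagged with a task prefix."""

    def __init__(self, pairs, prefix, tokenizer, max_len=128):
        self.pairs, self.prefix = pairs, prefix
        self.tokenizer, self.max_len = tokenizer, max_len

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        src, tgt = self.pairs[idx]
        enc = self.tokenizer(self.prefix + src, truncation=True,
                             padding="max_length", max_length=self.max_len,
                             return_tensors="pt")
        lab = self.tokenizer(tgt, truncation=True, padding="max_length",
                             max_length=self.max_len, return_tensors="pt")
        labels = lab.input_ids.squeeze(0)
        labels[labels == self.tokenizer.pad_token_id] = -100  # mask pad in loss
        return {"input_ids": enc.input_ids.squeeze(0),
                "attention_mask": enc.attention_mask.squeeze(0),
                "labels": labels}


tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Dummy stand-ins for the real PPF pairs and the two pseudo-reframing
# datasets that the augmentation strategies would produce.
ppf = [("I failed my exam.", "This exam showed me exactly what to study next.")]
pseudo_a = [("Today was awful.", "Tomorrow is a fresh start.")]
pseudo_b = [("I keep making mistakes.", "Every mistake teaches me something.")]

train_set = ConcatDataset([
    ReframingDataset(ppf, "reframe: ", tokenizer),
    ReframingDataset(pseudo_a, "pseudo_reframe_a: ", tokenizer),
    ReframingDataset(pseudo_b, "pseudo_reframe_b: ", tokenizer),
])
loader = DataLoader(train_set, batch_size=8, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        loss = model(**batch).loss  # shared parameters fuse the sub-tasks
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Sharing one set of model parameters across the sub-tasks is what lets the generation ability learned from the pseudo datasets transfer to the low-resource PPF task; the prefixes keep the sub-tasks separable at inference time.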
Anthology ID:
2023.inlg-main.31
Volume:
Proceedings of the 16th International Natural Language Generation Conference
Month:
September
Year:
2023
Address:
Prague, Czechia
Editors:
C. Maria Keet, Hung-Yi Lee, Sina Zarrieß
Venues:
INLG | SIGDIAL
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
424–430
URL:
https://aclanthology.org/2023.inlg-main.31
DOI:
10.18653/v1/2023.inlg-main.31
Cite (ACL):
Xu Sheng, Fumiyo Fukumoto, Jiyi Li, Go Kentaro, and Yoshimi Suzuki. 2023. Learning Disentangled Meaning and Style Representations for Positive Text Reframing. In Proceedings of the 16th International Natural Language Generation Conference, pages 424–430, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal):
Learning Disentangled Meaning and Style Representations for Positive Text Reframing (Sheng et al., INLG-SIGDIAL 2023)
PDF:
https://aclanthology.org/2023.inlg-main.31.pdf