Dynamic Multi-Reward Weighting for Multi-Style Controllable Generation

Karin De Langis, Ryan Koo, Dongyeop Kang


Abstract
Textual style expresses a diverse set of information, including interpersonal dynamics (e.g., formality) and the author’s emotions or attitudes (e.g., disgust). An open question is how language models can be explicitly controlled so that they weave together target styles when generating text: for example, to produce text that is both negative and non-toxic. One approach to such controlled generation is multi-objective reinforcement learning (RL), but how to best combine multiple objectives in a reward function is an open question. In this paper, we investigate various formulations of multi-style rewards, including calibrated outputs from discriminators and dynamic weighting by discriminator gradient magnitudes. We find that our proposed dynamic weighting outperforms static weighting approaches with respect to style control while maintaining linguistic quality, and we explore its effectiveness in 2- and 3-style control.
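To make the dynamic-weighting idea concrete, below is a minimal sketch (not the authors' implementation) of combining per-style rewards with weights derived from discriminator gradient magnitudes. It assumes each style discriminator is a PyTorch module that scores a shared text representation; the function name `dynamic_multi_reward` and the normalization scheme are illustrative assumptions.

```python
import torch

def dynamic_multi_reward(discriminators, embeddings):
    """Hypothetical sketch: weight each style reward by the magnitude of that
    discriminator's gradient w.r.t. a shared text representation.

    discriminators: list of nn.Module, each mapping embeddings -> a single logit
    embeddings: tensor of shape (1, d) with requires_grad=True
    """
    rewards, grad_norms = [], []
    for disc in discriminators:
        # Calibrated-style score in [0, 1] for this style dimension
        score = torch.sigmoid(disc(embeddings)).squeeze()
        # Gradient of the style score w.r.t. the shared representation
        (grad,) = torch.autograd.grad(score, embeddings, retain_graph=True)
        rewards.append(score.detach())
        grad_norms.append(grad.norm().detach())
    # Weights proportional to gradient magnitude, normalized to sum to 1
    weights = torch.stack(grad_norms)
    weights = weights / weights.sum().clamp_min(1e-8)
    total_reward = (weights * torch.stack(rewards)).sum()
    return total_reward, weights
```

In a multi-objective RL loop, `total_reward` would stand in for a single static weighted sum, letting styles whose discriminators are currently more sensitive receive more weight at that update step.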
Anthology ID:
2024.emnlp-main.386
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6783–6800
URL:
https://aclanthology.org/2024.emnlp-main.386
Cite (ACL):
Karin De Langis, Ryan Koo, and Dongyeop Kang. 2024. Dynamic Multi-Reward Weighting for Multi-Style Controllable Generation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 6783–6800, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Dynamic Multi-Reward Weighting for Multi-Style Controllable Generation (De Langis et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.386.pdf