Sicheng Ma
2023
Controllable Text Generation via Probability Density Estimation in the Latent Space
Yuxuan Gu | Xiaocheng Feng | Sicheng Ma | Lingyuan Zhang | Heng Gong | Weihong Zhong | Bing Qin
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Previous work on controllable text generation has explored the idea of control from the latent space, such as optimizing a representation with attribute-specific classifiers or sampling one from relevant discrete samples. However, these approaches cannot effectively model a complex latent space with diverse attributes, high dimensionality, and asymmetric structure, leaving subsequent controls unsatisfying. In this work, we propose a novel control framework using probability density estimation in the latent space. Our method utilizes an invertible transformation function, the Normalizing Flow, that maps the complex distributions in the latent space to simple Gaussian distributions in the prior space. Thus, we can perform sophisticated and flexible controls in the prior space and feed the control effects back into the latent space owing to the bijection property of invertible transformations. Experiments on single-attribute and multi-attribute control reveal that our method outperforms several strong baselines on attribute relevance and text quality, achieving a new SOTA. Further analysis of control strength adjustment demonstrates the flexibility of our control strategy.
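The core idea of the abstract can be illustrated with a toy sketch: an invertible map sends latent representations to a simple Gaussian prior space, control is applied there, and the bijection carries the effect back. The affine map below is a hypothetical stand-in for the paper's learned Normalizing Flow; `mu`, `sigma`, and the `+1.0` shift are illustrative, not values from the paper.

```python
import numpy as np

# Toy invertible (affine) flow standing in for a learned Normalizing Flow.
mu, sigma = 2.0, 0.5

def to_prior(z_latent):
    # forward direction: complex latent space -> simple Gaussian prior space
    return (z_latent - mu) / sigma

def to_latent(z_prior):
    # inverse direction: the bijection lets control effects flow back
    return z_prior * sigma + mu

# Control in the prior space (e.g. shift toward an attribute region),
# then map the controlled point back into the latent space.
z = np.array([1.0, 2.5, 3.0])
z_controlled = to_latent(to_prior(z) + 1.0)  # shift by +1 prior-space unit
```

Because the map is bijective, every controlled prior-space point corresponds to exactly one latent representation, which is what makes the round trip well defined.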
2022
A Distributional Lens for Multi-Aspect Controllable Text Generation
Yuxuan Gu | Xiaocheng Feng | Sicheng Ma | Lingyuan Zhang | Heng Gong | Bing Qin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Multi-aspect controllable text generation is a more challenging and practical task than single-aspect control. Existing methods achieve complex multi-aspect control by fusing multiple controllers, each learned for a single aspect, but suffer from attribute degeneration caused by the mutual interference of these controllers. To address this, we provide observations on attribute fusion from a distributional perspective and propose to directly search for the intersection areas of multiple attribute distributions as their combination for generation. Our method first estimates the attribute space with an autoencoder structure. Afterward, we iteratively approach the intersections by jointly minimizing distances to points representing different attributes. Finally, we map them to attribute-relevant sentences with a prefix-tuning-based decoder. Experiments on the three-aspect control task, including sentiment, topic, and detoxification aspects, reveal that our method outperforms several strong baselines on attribute relevance and text quality and achieves the SOTA. Further analysis also supplies some explanatory support for the effectiveness of our approach.
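The "jointly minimizing distances" step can be sketched as plain gradient descent on the sum of squared distances from a candidate point to one representative point per attribute. This is a hypothetical illustration in 2-D; the attribute points, learning rate, and objective form are assumptions, not the paper's actual training setup.

```python
import numpy as np

# One illustrative point per attribute (e.g. sentiment, topic, detox).
attr_points = np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [1.0, 1.0]])

x = np.zeros(2)   # candidate intersection point
lr = 0.1
for _ in range(200):
    # gradient of sum_k ||x - p_k||^2 with respect to x
    grad = np.sum(2.0 * (x - attr_points), axis=0)
    x -= lr * grad

# x converges toward the joint minimizer of all three distance terms
# (for this squared-distance objective, the centroid of the points).
```

With a squared-distance objective the minimizer is simply the centroid; the appeal of the iterative formulation is that it extends to settings where a closed form is unavailable, e.g. when distances are measured to many sampled points per attribute.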
Improving Controllable Text Generation with Position-Aware Weighted Decoding
Yuxuan Gu | Xiaocheng Feng | Sicheng Ma | Jiaming Wu | Heng Gong | Bing Qin
Findings of the Association for Computational Linguistics: ACL 2022
Weighted decoding methods composed of a pretrained language model (LM) and a controller have achieved promising results for controllable text generation. However, these models often suffer from a control-strength/fluency trade-off, as higher control strength is more likely to produce incoherent and repetitive text. In this paper, we show that this trade-off arises from the controller imposing the target attribute on the LM at improper positions, and we propose a novel framework built on existing weighted decoding methods, called CAT-PAW, which introduces a lightweight regulator to adjust bias signals from the controller at different decoding positions. Experiments on positive sentiment control, topic control, and language detoxification demonstrate the effectiveness of CAT-PAW on 4 SOTA models.
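The mechanism described above can be sketched in a few lines: the controller's bias on the next-token logits is rescaled by a position-dependent weight before being added to the LM logits. The decaying schedule below is a hypothetical regulator for illustration; CAT-PAW learns the adjustment rather than fixing it by hand, and all names and values here are assumptions.

```python
import numpy as np

def regulate(position, base_strength=2.0, decay=0.1):
    # Hypothetical lightweight regulator: apply the controller's bias more
    # weakly at later decoding positions to preserve fluency.
    return base_strength / (1.0 + decay * position)

def next_token_logits(lm_logits, controller_bias, position):
    # Weighted decoding: LM logits plus a position-weighted attribute bias.
    return lm_logits + regulate(position) * controller_bias

lm_logits = np.array([1.0, 0.5, -0.2])
controller_bias = np.array([0.0, 1.0, 0.0])  # bias toward one target token
logits_t0 = next_token_logits(lm_logits, controller_bias, position=0)
```

A fixed global strength corresponds to `regulate` returning a constant; making it position-aware is exactly the degree of freedom the abstract argues is needed to escape the strength/fluency trade-off.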
Co-authors
- Yuxuan Gu 3
- Xiaocheng Feng 3
- Heng Gong 3
- Bing Qin 3
- Lingyuan Zhang 2