Low Resource Style Transfer via Domain Adaptive Meta Learning

Xiangyang Li, Xiang Long, Yu Xia, Sujian Li


Abstract
Text style transfer (TST) without parallel data has achieved some practical success. However, most existing unsupervised text style transfer methods suffer from (i) requiring massive amounts of non-parallel data to guide transfer between different text styles, and (ii) severe performance degradation when the model is fine-tuned on new domains. In this work, we propose DAML-ATM (Domain Adaptive Meta-Learning with Adversarial Transfer Model), which consists of two parts: DAML and ATM. DAML is a domain adaptive meta-learning approach that learns general knowledge from multiple heterogeneous source domains and can adapt to new unseen domains with a small amount of data. Moreover, we propose a new unsupervised TST approach, the Adversarial Transfer Model (ATM), which is built on a sequence-to-sequence pre-trained language model and uses adversarial style training for better content preservation and style transfer. Results on multi-domain datasets demonstrate that our approach generalizes well to unseen low-resource domains, achieving state-of-the-art results against ten strong baselines.
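The domain adaptive meta-learning idea described in the abstract — learn a shared initialization across heterogeneous source domains so that a few gradient steps suffice in an unseen low-resource domain — can be illustrated with a minimal first-order (Reptile/FOMAML-style) sketch on a toy regression task. This is an illustrative assumption of the general recipe, not the authors' DAML implementation; all function names, hyperparameters, and the toy task are hypothetical.

```python
# Toy sketch of domain adaptive meta-learning (first-order, Reptile-style).
# Each "domain" is data from y = w_domain * x; the meta-learned scalar w
# plays the role of a shared model initialization.

def loss_grad(w, data):
    """Gradient of mean squared error 0.5*(w*x - y)^2 over one domain's data."""
    return sum((w * x - y) * x for x, y in data) / len(data)

def inner_adapt(w, data, lr=0.1, steps=3):
    """Inner loop: adapt the shared parameter to one domain with a few steps."""
    for _ in range(steps):
        w = w - lr * loss_grad(w, data)
    return w

def meta_train(domains, meta_lr=0.05, epochs=200):
    """Outer loop: move the shared init toward each domain's adapted parameter
    (first-order approximation of meta-gradient descent)."""
    w = 0.0
    for _ in range(epochs):
        for data in domains:
            w_adapted = inner_adapt(w, data)
            w = w + meta_lr * (w_adapted - w)
    return w

# Two heterogeneous source domains (slopes 1.0 and 3.0): the meta-learned
# init lands between them, so few-step adaptation works for either extreme.
xs = [0.5, 1.0, 1.5, 2.0]
domains = [[(x, 1.0 * x) for x in xs], [(x, 3.0 * x) for x in xs]]
w_init = meta_train(domains)

# Few-shot adaptation to an unseen domain (slope 2.5) with only two examples.
unseen = [(1.0, 2.5), (2.0, 5.0)]
w_new = inner_adapt(w_init, unseen, steps=5)
print(round(w_init, 2), round(w_new, 2))
```

The design point mirrored here is the two-level optimization the paper's DAML component relies on: the inner loop fits a single domain cheaply, while the outer loop is responsible for cross-domain generality of the starting point.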
Anthology ID:
2022.naacl-main.220
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
3014–3026
URL:
https://aclanthology.org/2022.naacl-main.220
DOI:
10.18653/v1/2022.naacl-main.220
Cite (ACL):
Xiangyang Li, Xiang Long, Yu Xia, and Sujian Li. 2022. Low Resource Style Transfer via Domain Adaptive Meta Learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3014–3026, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Low Resource Style Transfer via Domain Adaptive Meta Learning (Li et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.220.pdf