MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators

Zhixing Tan, Xiangwen Zhang, Shuo Wang, Yang Liu


Abstract
Prompting has recently been shown to be a promising approach for applying pre-trained language models to downstream tasks. We present Multi-Stage Prompting (MSP), a simple and automatic approach for adapting pre-trained language models to translation tasks. To better mitigate the discrepancy between pre-training and translation, MSP divides translation with a pre-trained language model into three separate stages: the encoding stage, the re-encoding stage, and the decoding stage. In each stage, we apply a different, independently learned continuous prompt, allowing the pre-trained language model to better shift to translation. We conduct extensive experiments on three translation tasks, and the results show that our method significantly improves the translation performance of pre-trained language models.
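The three-stage scheme described in the abstract can be illustrated with a short sketch. The code below is a minimal illustration under simplifying assumptions: a tiny frozen stand-in for the pre-trained language model, prompts prepended as continuous embeddings, stage outputs passed forward as part of the next stage's prefix, and no causal masking or training loop. Names such as `TinyFrozenLM` and `MultiStagePrompting` are hypothetical; this is not the authors' implementation (see thunlp-mt/plm4mt for the official code).

```python
import torch
import torch.nn as nn

class TinyFrozenLM(nn.Module):
    """Stand-in for a frozen pre-trained language model (e.g. a GPT-style LM)."""
    def __init__(self, vocab_size=1000, d_model=64, num_layers=2, num_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, num_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers)
        for p in self.parameters():
            p.requires_grad_(False)  # the LM stays frozen; only prompts are trained

    def forward(self, prefix, token_ids):
        # prefix: (batch, prefix_len, d_model) continuous prompt / context
        # token_ids: (batch, seq_len) discrete input tokens
        x = torch.cat([prefix, self.embed(token_ids)], dim=1)
        return self.blocks(x)  # (batch, prefix_len + seq_len, d_model)

class MultiStagePrompting(nn.Module):
    """Three stages (encode, re-encode, decode), each with its own continuous prompt."""
    def __init__(self, lm, prompt_len=8, d_model=64):
        super().__init__()
        self.lm = lm
        self.prompts = nn.ParameterDict({
            stage: nn.Parameter(0.02 * torch.randn(prompt_len, d_model))
            for stage in ("encode", "reencode", "decode")
        })

    def _prefix(self, stage, batch_size):
        return self.prompts[stage].unsqueeze(0).expand(batch_size, -1, -1)

    def forward(self, src_ids, tgt_ids):
        b = src_ids.size(0)
        # Stage 1: encode the source under the encoding-stage prompt.
        h_enc = self.lm(self._prefix("encode", b), src_ids)
        # Stage 2: re-encode the source; the stage-1 states are placed in the
        # continuous prefix so the LM can condition on them.
        h_re = self.lm(torch.cat([self._prefix("reencode", b), h_enc], dim=1), src_ids)
        # Stage 3: decode the target, conditioning on the decoding-stage prompt
        # plus the re-encoded source representation.
        return self.lm(torch.cat([self._prefix("decode", b), h_re], dim=1), tgt_ids)

# Toy usage: only the three prompt tensors would receive gradients during training.
lm = TinyFrozenLM()
msp = MultiStagePrompting(lm)
src = torch.randint(0, 1000, (2, 10))
tgt = torch.randint(0, 1000, (2, 12))
hidden = msp(src, tgt)  # final-stage hidden states, shape (2, 56, 64)
```

In a training setup, only the three prompt tensors would be optimized with a translation loss while the language model stays frozen, which is what makes this style of prompting lightweight compared with full fine-tuning.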
Anthology ID:
2022.acl-long.424
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6131–6142
URL:
https://aclanthology.org/2022.acl-long.424
DOI:
10.18653/v1/2022.acl-long.424
Cite (ACL):
Zhixing Tan, Xiangwen Zhang, Shuo Wang, and Yang Liu. 2022. MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6131–6142, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators (Tan et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.424.pdf
Code:
thunlp-mt/plm4mt