P3LM: Probabilistically Permuted Prophet Language Modeling for Generative Pre-Training

Junwei Bao, Yifan Wang, Ying Jiangyong, Yeyun Gong, Jing Zhao, Youzheng Wu, Xiaodong He


Abstract
Conventional autoregressive left-to-right (L2R) sequence generation faces two issues during decoding: it is limited to unidirectional modeling of the target sequence, and it is constrained by strong local dependencies. To address these problems, we propose P3LM, a probabilistically permuted prophet language model, which strengthens the modeling of bidirectional information and long-distance token dependencies for sequence generation. Specifically, P3LM learns to generate tokens in permuted order with an order-aware transformer decoder, and to generate the corresponding future N tokens with a multi-stream attention mechanism. Extensive experiments are conducted on the GLGE benchmark, which includes four datasets for summarization, two for question generation, one for conversational question answering, and one for dialog response generation. P3LM achieves state-of-the-art results compared with strong publicly available generative pre-training methods.
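Since the abstract combines two concrete training signals (a permuted generation order and future-N-token "prophet" prediction), the following minimal sketch illustrates how such a joint loss could be computed. It is an assumption-laden illustration, not the authors' released code: the function name p3lm_style_loss, the tensor layout, the use of PyTorch, and the stream indexing are all hypothetical choices made here for clarity.

# Minimal sketch of a P3LM-style objective: permuted order + future-N-token prediction.
# All names and shapes below are assumptions for illustration, not the paper's implementation.

import torch
import torch.nn.functional as F

def p3lm_style_loss(logits_streams, target_ids, permutation, pad_id=0):
    # logits_streams: (n_streams, batch, tgt_len, vocab); stream k is trained to
    #   predict the token that is k steps ahead in the permuted generation order.
    # target_ids:  (batch, tgt_len) gold token ids.
    # permutation: (batch, tgt_len) a permutation of positions 0..tgt_len-1 giving
    #   the order in which target tokens are generated.
    n_streams, bsz, tgt_len, vocab = logits_streams.shape
    losses = []
    for k in range(n_streams):
        if tgt_len - k <= 0:
            continue
        shifted_perm = permutation[:, k:]                 # positions k steps ahead
        gold = torch.gather(target_ids, 1, shifted_perm)  # gold tokens in permuted order
        logits = logits_streams[k][:, : tgt_len - k]      # align decoding steps with targets
        losses.append(
            F.cross_entropy(logits.reshape(-1, vocab), gold.reshape(-1),
                            ignore_index=pad_id)
        )
    return torch.stack(losses).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    bsz, tgt_len, vocab, n_streams = 2, 6, 50, 2
    target_ids = torch.randint(1, vocab, (bsz, tgt_len))
    # a random generation order per example (stand-in for probabilistic permutation sampling)
    permutation = torch.stack([torch.randperm(tgt_len) for _ in range(bsz)])
    # stand-in for the multi-stream outputs of an order-aware decoder
    logits_streams = torch.randn(n_streams, bsz, tgt_len, vocab, requires_grad=True)
    loss = p3lm_style_loss(logits_streams, target_ids, permutation)
    loss.backward()
    print(float(loss))

In this sketch, stream 0 corresponds to ordinary next-token prediction in the permuted order, while each additional stream supervises a token further ahead, which is one plausible way to realize the multi-stream future-token prediction the abstract describes.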
Anthology ID:
2022.findings-emnlp.496
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6663–6675
URL:
https://aclanthology.org/2022.findings-emnlp.496
DOI:
10.18653/v1/2022.findings-emnlp.496
Cite (ACL):
Junwei Bao, Yifan Wang, Ying Jiangyong, Yeyun Gong, Jing Zhao, Youzheng Wu, and Xiaodong He. 2022. P3LM: Probabilistically Permuted Prophet Language Modeling for Generative Pre-Training. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6663–6675, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
P3LM: Probabilistically Permuted Prophet Language Modeling for Generative Pre-Training (Bao et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.496.pdf