Distilling Knowledge for Search-based Structured Prediction

Yijia Liu, Wanxiang Che, Huaipeng Zhao, Bing Qin, Ting Liu


Abstract
Many natural language processing tasks can be modeled as structured prediction and solved as a search problem. In this paper, we distill an ensemble of multiple models trained with different initialization into a single model. In addition to learning to match the ensemble’s probability output on the reference states, we also use the ensemble to explore the search space and learn from the states encountered during the exploration. Experimental results on two typical search-based structured prediction tasks – transition-based dependency parsing and neural machine translation – show that distillation effectively improves the single model’s performance: the final model achieves improvements of 1.32 LAS and 2.65 BLEU over strong baselines on these two tasks respectively, and it outperforms the greedy structured prediction models in the previous literature.
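
The abstract describes the core training signal: the single (student) model is trained to match the ensemble's soft action distribution at each search state, both on states from the reference transition sequence and on states reached by letting the ensemble explore. The sketch below is a minimal, hedged illustration of that distillation loss in NumPy; the function names, toy dimensions, and sampling step are assumptions for illustration only and are not taken from the authors' released code (see the Oneplus/twpipe repository linked below).

import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_distribution(member_logits):
    # Soft target: average the action distributions of the ensemble members
    # at a given search state.
    return np.mean([softmax(l) for l in member_logits], axis=0)

def distillation_loss(student_logits, teacher_probs):
    # Cross-entropy between the ensemble's soft targets and the student's
    # predicted action distribution at the same state.
    log_p = np.log(softmax(student_logits) + 1e-12)
    return -np.sum(teacher_probs * log_p)

# Toy example: an ensemble of 3 members, 4 candidate transition actions
# at one parser/decoder state (hypothetical sizes).
rng = np.random.default_rng(0)
member_logits = [rng.normal(size=4) for _ in range(3)]
student_logits = rng.normal(size=4)

teacher = ensemble_distribution(member_logits)
loss = distillation_loss(student_logits, teacher)
print(f"soft targets: {teacher}, distillation loss: {loss:.4f}")

# Exploration (sketch): rather than only following the reference transition
# sequence, sample the next action from the ensemble's distribution and
# distill on the states reached along that sampled trajectory as well.
explored_action = rng.choice(len(teacher), p=teacher)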
Anthology ID:
P18-1129
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1393–1402
URL:
https://aclanthology.org/P18-1129
DOI:
10.18653/v1/P18-1129
Cite (ACL):
Yijia Liu, Wanxiang Che, Huaipeng Zhao, Bing Qin, and Ting Liu. 2018. Distilling Knowledge for Search-based Structured Prediction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1393–1402, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Distilling Knowledge for Search-based Structured Prediction (Liu et al., ACL 2018)
PDF:
https://aclanthology.org/P18-1129.pdf
Presentation:
 P18-1129.Presentation.pdf
Video:
 https://aclanthology.org/P18-1129.mp4
Code:
Oneplus/twpipe