Dissecting Lottery Ticket Transformers: Structural and Behavioral Study of Sparse Neural Machine Translation

Rajiv Movva, Jason Zhao


Abstract
Recent work on the lottery ticket hypothesis has produced highly sparse Transformers for NMT while maintaining BLEU. However, it is unclear how such pruning techniques affect a model's learned representations. By probing Transformers with progressively more low-magnitude weights pruned away, we find that complex semantic information is the first to be degraded. Analysis of internal activations reveals that higher layers diverge most over the course of pruning, gradually becoming less complex than their dense counterparts. Meanwhile, early layers of sparse models begin to perform more encoding. Attention mechanisms remain remarkably consistent as sparsity increases.
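To make the pruning setup concrete, here is a minimal sketch of global magnitude pruning using PyTorch's torch.nn.utils.prune, applied to a small Transformer encoder at a few sparsity levels. The model size, sparsity schedule, and choice of modules are illustrative assumptions, not the authors' NMT pipeline, which involves lottery-ticket-style pruning with (re)training rather than one-shot masking.

    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Small Transformer encoder as a stand-in for an NMT model (illustrative only).
    model = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=512, nhead=8), num_layers=6
    )

    # Prune every linear-layer weight matrix jointly, across all layers.
    params_to_prune = [
        (m, "weight") for m in model.modules() if isinstance(m, nn.Linear)
    ]

    for sparsity in (0.2, 0.5, 0.8):
        # Mask the lowest-magnitude weights globally until `sparsity` of them are zero.
        prune.global_unstructured(
            params_to_prune,
            pruning_method=prune.L1Unstructured,
            amount=sparsity,
        )
        zeros = sum(int((m.weight == 0).sum()) for m, _ in params_to_prune)
        total = sum(m.weight.numel() for m, _ in params_to_prune)
        print(f"target sparsity {sparsity:.0%}: {zeros / total:.1%} of weights are zero")

The resulting masked models could then be probed or have their activations compared against the dense baseline, in the spirit of the analyses described in the abstract.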
Anthology ID:
2020.blackboxnlp-1.19
Volume:
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2020
Address:
Online
Editors:
Afra Alishahi, Yonatan Belinkov, Grzegorz Chrupała, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
193–203
URL:
https://aclanthology.org/2020.blackboxnlp-1.19
DOI:
10.18653/v1/2020.blackboxnlp-1.19
Cite (ACL):
Rajiv Movva and Jason Zhao. 2020. Dissecting Lottery Ticket Transformers: Structural and Behavioral Study of Sparse Neural Machine Translation. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 193–203, Online. Association for Computational Linguistics.
Cite (Informal):
Dissecting Lottery Ticket Transformers: Structural and Behavioral Study of Sparse Neural Machine Translation (Movva & Zhao, BlackboxNLP 2020)
PDF:
https://aclanthology.org/2020.blackboxnlp-1.19.pdf
Optional supplementary material:
2020.blackboxnlp-1.19.OptionalSupplementaryMaterial.pdf
Video:
https://slideslive.com/38939765