Joshua Rozner
2022
Causal Distillation for Language Models
Zhengxuan Wu | Atticus Geiger | Joshua Rozner | Elisa Kreiss | Hanson Lu | Thomas Icard | Christopher Potts | Noah Goodman
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Distillation efforts have led to language models that are more compact and efficient without serious drops in performance. The standard approach to distillation trains a student model against two objectives: a task-specific objective (e.g., language modeling) and an imitation objective that encourages the hidden states of the student model to be similar to those of the larger teacher model. In this paper, we show that it is beneficial to augment distillation with a third objective that encourages the student to imitate the causal dynamics of the teacher through a distillation interchange intervention training objective (DIITO). DIITO pushes the student model to become a causal abstraction of the teacher model – a faithful model with simpler causal structure. DIITO is fully differentiable, easily implemented, and combines flexibly with other objectives. Compared against standard distillation with the same setting, DIITO results in lower perplexity on the WikiText-103M corpus (masked language modeling) and marked improvements on the GLUE benchmark (natural language understanding), SQuAD (question answering), and CoNLL-2003 (named entity recognition).
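The abstract's core idea is that the student should match not only the teacher's hidden states but also its counterfactual behavior under interchange interventions: swap part of a hidden representation computed on a source input into a run on a base input, do the analogous swap in the student, and train the student's post-intervention output to match the teacher's. The sketch below is a hypothetical, minimal illustration of that idea, not the authors' implementation: the `ToyEncoder` MLPs, the fixed hidden-state slices used as the aligned components, and the `diito_step` helper are all assumptions made for the example; the actual DIITO objective operates on transformer hidden states with an alignment between teacher and student layers and is combined with the standard task and imitation losses.

```python
# Hypothetical sketch of a DIITO-style counterfactual imitation loss.
# Toy stand-ins, not the paper's code: two MLP encoders whose intermediate
# activations play the role of aligned "hidden states".
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyEncoder(nn.Module):
    """Two-layer MLP; the intermediate activation is the intervenable hidden state."""

    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.f1 = nn.Linear(d_in, d_hidden)
        self.f2 = nn.Linear(d_hidden, d_out)

    def forward(self, x, swap_in=None, swap_slice=None):
        h = torch.relu(self.f1(x))
        # Interchange intervention: overwrite a slice of the hidden state
        # with the corresponding slice from another (source) run.
        if swap_in is not None:
            h = h.clone()
            h[:, swap_slice] = swap_in[:, swap_slice]
        return self.f2(h), h


def diito_step(teacher, student, base, source, t_slice, s_slice):
    """Counterfactual imitation: after analogous interventions, the student's
    output distribution should match the teacher's."""
    with torch.no_grad():
        _, t_h_src = teacher(source)  # teacher hidden state on the source input
        t_cf_logits, _ = teacher(base, swap_in=t_h_src, swap_slice=t_slice)
    _, s_h_src = student(source)      # student hidden state on the source input
    s_cf_logits, _ = student(base, swap_in=s_h_src, swap_slice=s_slice)
    return F.kl_div(F.log_softmax(s_cf_logits, dim=-1),
                    F.softmax(t_cf_logits, dim=-1),
                    reduction="batchmean")


# Example usage: this loss would be added to the task-specific and
# hidden-state imitation objectives (not shown here).
teacher = ToyEncoder(16, 32, 10)
student = ToyEncoder(16, 8, 10)
base, source = torch.randn(4, 16), torch.randn(4, 16)
loss = diito_step(teacher, student, base, source,
                  t_slice=slice(0, 16), s_slice=slice(0, 4))
loss.backward()
```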