Efficient Hierarchical Domain Adaptation for Pretrained Language Models

Alexandra Chronopoulou, Matthew Peters, Jesse Dodge


Abstract
The remarkable success of large language models has been driven by dense models trained on massive unlabeled, unstructured corpora. These corpora typically contain text from diverse, heterogeneous sources, but information about the source of the text is rarely used during training. Transferring their knowledge to a target domain is typically done by continuing training in-domain. In this paper, we introduce a method to permit domain adaptation to many diverse domains using a computationally efficient adapter approach. Our method is based on the observation that textual domains are partially overlapping, and we represent domains as a hierarchical tree structure where each node in the tree is associated with a set of adapter weights. When combined with a frozen pretrained language model, this approach enables parameter sharing among related domains, while avoiding negative interference between unrelated ones. Experimental results with GPT-2 and a large fraction of the 100 most represented websites in C4 show across-the-board improvements in-domain. We additionally provide an inference time algorithm for a held-out domain and show that averaging over multiple paths through the tree enables further gains in generalization, while adding only a marginal cost to inference.
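The abstract describes a tree of per-node adapters composed along a root-to-leaf path on top of a frozen pretrained model, with path averaging at inference for held-out domains. The following is a minimal, hypothetical sketch of that idea in PyTorch; the class names, the bottleneck adapter form, and the way hidden states are passed in are assumptions for illustration, not the authors' released implementation (see the linked repository for that).

```python
# Hypothetical sketch: each node of a domain tree owns a small bottleneck
# adapter, and a domain's representation is computed by composing the
# adapters on its root-to-leaf path on top of frozen LM hidden states.
# For a held-out domain, outputs from several candidate paths are averaged.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Down-project / nonlinearity / up-project adapter with a residual connection."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(h)))


class HierarchicalAdapters(nn.Module):
    """Adapters keyed by tree node; a root-to-leaf path is applied in sequence."""

    def __init__(self, tree: dict, hidden_size: int):
        super().__init__()
        # tree maps node name -> parent name (the root maps to None)
        self.parent = tree
        self.adapters = nn.ModuleDict(
            {node: BottleneckAdapter(hidden_size) for node in tree}
        )

    def path_to_root(self, leaf: str) -> list:
        path, node = [], leaf
        while node is not None:
            path.append(node)
            node = self.parent[node]
        return list(reversed(path))  # root first

    def forward(self, hidden: torch.Tensor, leaf: str) -> torch.Tensor:
        # In-domain: apply the adapters along the leaf's path to the root.
        for node in self.path_to_root(leaf):
            hidden = self.adapters[node](hidden)
        return hidden

    def forward_heldout(self, hidden: torch.Tensor, leaves: list) -> torch.Tensor:
        # Held-out domain: average the outputs of multiple candidate paths.
        outputs = [self.forward(hidden, leaf) for leaf in leaves]
        return torch.stack(outputs, dim=0).mean(dim=0)


if __name__ == "__main__":
    # Toy domain tree: two website-like leaves under one intermediate node.
    tree = {"root": None, "news": "root", "nytimes.com": "news", "bbc.com": "news"}
    model = HierarchicalAdapters(tree, hidden_size=768)

    # Stand-in for hidden states from a frozen GPT-2 (batch, seq, hidden).
    hidden = torch.randn(2, 16, 768)
    in_domain = model(hidden, "nytimes.com")
    heldout = model.forward_heldout(hidden, ["nytimes.com", "bbc.com"])
    print(in_domain.shape, heldout.shape)
```

In practice the adapters would be inserted inside each transformer layer of the frozen GPT-2 rather than applied once to its outputs as in this toy example; only the adapter parameters are trained, which is what keeps the approach computationally efficient.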
Anthology ID:
2022.naacl-main.96
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1336–1351
URL:
https://aclanthology.org/2022.naacl-main.96
DOI:
10.18653/v1/2022.naacl-main.96
Cite (ACL):
Alexandra Chronopoulou, Matthew Peters, and Jesse Dodge. 2022. Efficient Hierarchical Domain Adaptation for Pretrained Language Models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1336–1351, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Efficient Hierarchical Domain Adaptation for Pretrained Language Models (Chronopoulou et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.96.pdf
Video:
https://aclanthology.org/2022.naacl-main.96.mp4
Code:
alexandra-chron/hierarchical-domain-adaptation