Domain Confused Contrastive Learning for Unsupervised Domain Adaptation

Quanyu Long, Tianze Luo, Wenya Wang, Sinno Pan


Abstract
In this work, we study Unsupervised Domain Adaptation (UDA) via a challenging self-supervised approach. One of the difficulties is how to learn task discrimination in the absence of target labels. Unlike previous literature, which directly aligns cross-domain distributions or leverages reverse gradients, we propose Domain Confused Contrastive Learning (DCCL), which bridges the source and target domains via domain puzzles and retains discriminative representations after adaptation. Technically, DCCL searches for the most domain-challenging direction and carefully crafts domain-confused augmentations as positive pairs; it then contrastively encourages the model to pull representations towards the other domain, thereby learning more stable and effective domain invariances. We also investigate whether contrastive learning necessarily helps with UDA when other data augmentations are used. Extensive experiments demonstrate that DCCL significantly outperforms baselines; further ablation studies and analyses also show the effectiveness and applicability of DCCL.
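The abstract describes a contrastive objective in which a domain-confused augmentation of an input serves as the positive pair. As a rough illustration only (not the authors' implementation), the sketch below shows a generic InfoNCE-style loss where the positive would be the domain-confused view and other examples act as negatives; the toy vectors, function names, and the simple additive-noise "confused view" are all hypothetical stand-ins.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE loss: pull the anchor toward its positive view
    (here, imagined as a domain-confused augmentation) and push it away
    from the negatives. Inputs are 1-D feature vectors."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Similarity logits: positive at index 0, negatives after it.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    # Softmax cross-entropy with the positive as the target class.
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

# Toy usage: the "domain-confused" positive is faked as a perturbed anchor.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.1 * rng.normal(size=8)   # hypothetical confused view
negatives = [rng.normal(size=8) for _ in range(4)]
loss = info_nce_loss(anchor, positive, negatives)
```

In DCCL the positive view is crafted adversarially along the most domain-challenging direction rather than by random noise as above; the paper's Section on method details specifies the actual construction.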
Anthology ID:
2022.naacl-main.217
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
2982–2995
URL:
https://aclanthology.org/2022.naacl-main.217
DOI:
10.18653/v1/2022.naacl-main.217
Cite (ACL):
Quanyu Long, Tianze Luo, Wenya Wang, and Sinno Pan. 2022. Domain Confused Contrastive Learning for Unsupervised Domain Adaptation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2982–2995, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Domain Confused Contrastive Learning for Unsupervised Domain Adaptation (Long et al., NAACL 2022)
PDF:
https://aclanthology.org/2022.naacl-main.217.pdf
Video:
https://aclanthology.org/2022.naacl-main.217.mp4