Task Refinement Learning for Improved Accuracy and Stability of Unsupervised Domain Adaptation

Yftah Ziser, Roi Reichart


Abstract
Pivot Based Language Modeling (PBLM) (Ziser and Reichart, 2018a), which combines LSTMs with pivot-based methods, has yielded significant progress in unsupervised domain adaptation. However, this approach is still challenged by the large pivot detection problem it must solve, and by the inherent instability of LSTMs. In this paper we propose a Task Refinement Learning (TRL) approach to address these problems. Our algorithms iteratively train the PBLM model, gradually increasing the information exposed about each pivot. TRL-PBLM achieves state-of-the-art accuracy in six domain adaptation setups for sentiment classification. Moreover, it is much more stable than plain PBLM across model configurations, making it much better suited for practical use.
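The curriculum idea in the abstract — iteratively retraining while gradually exposing more pivot information — can be sketched in a few lines. This is a minimal, hypothetical illustration: the function names, the cumulative pivot schedule, and the "NONE" masking of unexposed pivots are assumptions for clarity, not the authors' actual TRL-PBLM implementation (the paper defines several concrete refinement schedules).

```python
def trl_schedule(pivots, num_stages):
    """Hypothetical cumulative curriculum: stage k exposes roughly the
    first k/num_stages fraction of the pivot list; the final stage
    exposes every pivot."""
    per_stage = max(1, len(pivots) // num_stages)
    stages = [pivots[: min(len(pivots), k * per_stage)]
              for k in range(1, num_stages + 1)]
    stages[-1] = list(pivots)  # last stage: full pivot information
    return stages

def train_trl(model, corpus, pivots, num_stages=3):
    """Iteratively retrain the same model, widening the exposed pivot
    set each round (an easier task first, refined toward the full one)."""
    for exposed in trl_schedule(pivots, num_stages):
        exposed_set = set(exposed)
        # Pivots not yet exposed are collapsed to a generic "NONE" label,
        # so early stages solve a coarser prediction problem.
        masked = [[w if w in exposed_set else "NONE" for w in sent]
                  for sent in corpus]
        model.fit(masked)  # `model` is any PBLM-like trainer (assumed API)
    return model
```

Each stage reuses the weights of the previous one, so the model is refined rather than retrained from scratch, which is what the abstract credits for the improved stability.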
Anthology ID:
P19-1591
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5895–5906
URL:
https://aclanthology.org/P19-1591
DOI:
10.18653/v1/P19-1591
Bibkey:
Cite (ACL):
Yftah Ziser and Roi Reichart. 2019. Task Refinement Learning for Improved Accuracy and Stability of Unsupervised Domain Adaptation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5895–5906, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Task Refinement Learning for Improved Accuracy and Stability of Unsupervised Domain Adaptation (Ziser & Reichart, ACL 2019)
PDF:
https://aclanthology.org/P19-1591.pdf
Video:
https://aclanthology.org/P19-1591.mp4
Code:
yftah89/TRL-PBLM