On the Hidden Negative Transfer in Sequential Transfer Learning for Domain Adaptation from News to Tweets

Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat


Abstract
Transfer Learning has been shown to be a powerful tool for Natural Language Processing (NLP) and has outperformed the standard supervised learning paradigm, as it benefits from pre-learned knowledge. Nevertheless, when transfer is performed between less related domains, it can induce negative transfer, i.e. hurt the target performance. In this work, we shed light on the hidden negative transfer that occurs when transferring from the News domain to the Tweets domain, through quantitative and qualitative analyses. Our experiments on three NLP tasks: Part-Of-Speech tagging, Chunking and Named Entity Recognition, reveal interesting insights.
Anthology ID:
2021.adaptnlp-1.14
Volume:
Proceedings of the Second Workshop on Domain Adaptation for NLP
Month:
April
Year:
2021
Address:
Kyiv, Ukraine
Editors:
Eyal Ben-David, Shay Cohen, Ryan McDonald, Barbara Plank, Roi Reichart, Guy Rotman, Yftah Ziser
Venue:
AdaptNLP
Publisher:
Association for Computational Linguistics
Pages:
140–145
URL:
https://aclanthology.org/2021.adaptnlp-1.14
Cite (ACL):
Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, and Fatiha Sadat. 2021. On the Hidden Negative Transfer in Sequential Transfer Learning for Domain Adaptation from News to Tweets. In Proceedings of the Second Workshop on Domain Adaptation for NLP, pages 140–145, Kyiv, Ukraine. Association for Computational Linguistics.
Cite (Informal):
On the Hidden Negative Transfer in Sequential Transfer Learning for Domain Adaptation from News to Tweets (Meftah et al., AdaptNLP 2021)
PDF:
https://aclanthology.org/2021.adaptnlp-1.14.pdf
Data
Tweebank