YNU-HPCC at SemEval-2021 Task 10: Using a Transformer-based Source-Free Domain Adaptation Model for Semantic Processing

Zhewen Yu, Jin Wang, Xuejie Zhang


Abstract
Data sharing restrictions are common in NLP datasets. The goal of this task is to develop a model trained on a source domain that makes predictions on a related target domain without access to the source data. To that end, the organizers released models fine-tuned on a large amount of source-domain data on top of pre-trained language models, together with development data for participants; the source-domain data itself was not distributed. This paper describes how we applied the provided model to the named entity recognition (NER) task and the ways in which we further developed it. Because only a small amount of target-domain data is provided, pre-trained models are well suited to such cross-domain tasks: a model fine-tuned on large amounts of data from another domain can remain effective in the new domain, since the task itself is unchanged.
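As a rough illustration of the source-free setup described above (not the authors' actual code): assuming the organizer-provided model is distributed as a Hugging Face token-classification checkpoint and the target-domain dev data is word-tokenized with integer tag ids, continued fine-tuning could look like the sketch below. The checkpoint name, file path, and data layout are all hypothetical.

```python
# Hedged sketch of source-free domain adaptation for NER:
# start from a model already fine-tuned on the (undistributed)
# source domain, then continue training on the small target-domain
# dev set. Checkpoint name and dataset file are hypothetical.
from transformers import (AutoTokenizer, AutoModelForTokenClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

CKPT = "task10/source-finetuned-ner"  # hypothetical provided checkpoint

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForTokenClassification.from_pretrained(CKPT)

# Only target-domain dev data is used; the source data is never seen.
dev = load_dataset("json", data_files={"train": "target_dev.json"})["train"]

def encode(batch):
    enc = tokenizer(batch["tokens"], is_split_into_words=True,
                    truncation=True, padding="max_length", max_length=128)
    # Align word-level tags to sub-word tokens; mask specials with -100
    # so they are ignored by the loss.
    labels = []
    for i, tags in enumerate(batch["tags"]):
        word_ids = enc.word_ids(batch_index=i)
        labels.append([-100 if w is None else tags[w] for w in word_ids])
    enc["labels"] = labels
    return enc

dev = dev.map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dev,
)
trainer.train()
```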
Anthology ID:
2021.semeval-1.184
Volume:
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurelie Herbelot, Xiaodan Zhu
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1289–1294
URL:
https://aclanthology.org/2021.semeval-1.184
DOI:
10.18653/v1/2021.semeval-1.184
Cite (ACL):
Zhewen Yu, Jin Wang, and Xuejie Zhang. 2021. YNU-HPCC at SemEval-2021 Task 10: Using a Transformer-based Source-Free Domain Adaptation Model for Semantic Processing. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 1289–1294, Online. Association for Computational Linguistics.
Cite (Informal):
YNU-HPCC at SemEval-2021 Task 10: Using a Transformer-based Source-Free Domain Adaptation Model for Semantic Processing (Yu et al., SemEval 2021)
PDF:
https://aclanthology.org/2021.semeval-1.184.pdf