Multilingual Pre-Trained Transformers and Convolutional NN Classification Models for Technical Domain Identification

Suman Dowlagar, Radhika Mamidi


Abstract
In this paper, we present a transfer learning system for technical domain identification on multilingual text data. We submitted two runs: one uses the transformer model BERT, and the other uses XLM-RoBERTa combined with a CNN model for text classification. These models allowed us to identify the domain of the given sentences for the ICON 2020 shared task, TechDOfication: Technical Domain Identification. Our system ranked best on subtasks 1d and 1g of the TechDOfication dataset.
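The abstract names an architecture that pairs an XLM-RoBERTa encoder with a CNN classification head. As a rough illustration of that pairing, the minimal PyTorch sketch below (assuming the HuggingFace transformers library) runs 1-D convolutions of several filter sizes over the contextual token embeddings and max-pools each before a linear layer; the filter sizes, filter counts, and the number of domain labels here are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class XLMRCNNClassifier(nn.Module):
    """Hypothetical XLM-RoBERTa encoder + CNN text-classification head."""

    def __init__(self, num_domains, filter_sizes=(2, 3, 4), num_filters=100):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("xlm-roberta-base")
        hidden = self.encoder.config.hidden_size  # 768 for the base model
        # One 1-D convolution per filter size, applied along the token axis.
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, num_filters, k) for k in filter_sizes
        )
        self.classifier = nn.Linear(num_filters * len(filter_sizes), num_domains)

    def forward(self, input_ids, attention_mask):
        # Contextual token embeddings: (batch, seq_len, hidden)
        tokens = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        x = tokens.transpose(1, 2)  # Conv1d expects (batch, channels, seq_len)
        # Convolve, apply ReLU, then max-pool over time for each filter size.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRCNNClassifier(num_domains=7)  # illustrative domain-label count
batch = tokenizer(["A sample technical sentence."], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
```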
Anthology ID:
2020.icon-techdofication.4
Volume:
Proceedings of the 17th International Conference on Natural Language Processing (ICON): TechDOfication 2020 Shared Task
Month:
December
Year:
2020
Address:
Patna, India
Editors:
Dipti Misra Sharma, Asif Ekbal, Karunesh Arora, Sudip Kumar Naskar, Dipankar Ganguly, Sobha L, Radhika Mamidi, Sunita Arora, Pruthwik Mishra, Vandan Mujadia
Venue:
ICON
Publisher:
NLP Association of India (NLPAI)
Pages:
16–20
URL:
https://aclanthology.org/2020.icon-techdofication.4
Cite (ACL):
Suman Dowlagar and Radhika Mamidi. 2020. Multilingual Pre-Trained Transformers and Convolutional NN Classification Models for Technical Domain Identification. In Proceedings of the 17th International Conference on Natural Language Processing (ICON): TechDOfication 2020 Shared Task, pages 16–20, Patna, India. NLP Association of India (NLPAI).
Cite (Informal):
Multilingual Pre-Trained Transformers and Convolutional NN Classification Models for Technical Domain Identification (Dowlagar & Mamidi, ICON 2020)
PDF:
https://aclanthology.org/2020.icon-techdofication.4.pdf