The Dark Side of the Language: Pre-trained Transformers in the DarkNet
Leonardo Ranaldi | Aria Nourbakhsh | Elena Sofia Ruzzetti | Arianna Patrizi | Dario Onorati | Michele Mastromattei | Francesca Fallucchi | Fabio Massimo Zanzotto
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, 2023
Pre-trained Transformers are challenging human performance in many Natural Language Processing tasks. The massive datasets used for pre-training seem to be the key to their success on existing tasks. In this paper, we explore how a range of pre-trained natural language understanding models performs on genuinely unseen sentences in classification tasks over a DarkNet corpus. Surprisingly, results show that syntactic and lexical neural networks perform on par with pre-trained Transformers even after fine-tuning. Only after what we call extreme domain adaptation, that is, retraining with the masked-language-model objective on the entire novel corpus, do pre-trained Transformers reach their usual high results. This suggests that huge pre-training corpora may give Transformers an unexpected advantage, since they have already been exposed to many of the possible sentences.
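The "extreme domain adaptation" described in the abstract amounts to continuing masked-language-model pre-training on the new corpus before any task fine-tuning. The sketch below illustrates this with the HuggingFace Transformers API; the backbone model, file name, and hyperparameters are illustrative assumptions, not the authors' reported setup.

```python
# Sketch of "extreme domain adaptation": continued masked-language-model (MLM)
# training on the novel (DarkNet) corpus. Model name, file name, and
# hyperparameters below are assumptions for illustration only.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"  # assumed backbone, not necessarily the paper's
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Plain-text corpus, one document per line (hypothetical file name).
raw = load_dataset("text", data_files={"train": "darknet_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

# Dynamic token masking with the standard 15% masking probability.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="mlm-domain-adapted",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=5e-5,
)

Trainer(model=model, args=args,
        train_dataset=tokenized["train"],
        data_collator=collator).train()

# The adapted checkpoint in "mlm-domain-adapted" can then be loaded with
# AutoModelForSequenceClassification and fine-tuned on the downstream
# DarkNet classification task.
```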