2024
Linguistic Fingerprint in Transformer Models: How Language Variation Influences Parameter Selection in Irony Detection
Michele Mastromattei | Fabio Massimo Zanzotto
Proceedings of the 3rd Workshop on Perspectivist Approaches to NLP (NLPerspectives) @ LREC-COLING 2024
This paper explores the correlation between linguistic diversity, sentiment analysis, and transformer model architectures. We investigate how different English variations affect transformer-based models for irony detection. To conduct our study, we used the EPIC corpus to extract five diverse English-variation-specific datasets and applied the KEN pruning algorithm to five different architectures. Our results reveal several similarities between optimal subnetworks, which indicate which linguistic variations share strong resemblances and which exhibit greater dissimilarities. We found that optimal subnetworks across models share at least 60% of their parameters, emphasizing the significance of parameter values in capturing and interpreting linguistic variations. This study highlights the inherent structural similarities between models trained on different variants of the same language, as well as the critical role of parameter values in capturing these nuances.
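The reported 60% figure reduces to an overlap ratio between the binary masks that select each model's optimal subnetwork. Below is a minimal sketch of that comparison, assuming a simple dict-of-masks format; the `mask_overlap` helper and the mask layout are illustrative, not KEN's actual interface.

```python
import torch

def mask_overlap(masks_a, masks_b):
    """Fraction of A's retained parameters that B also retains.

    masks_a / masks_b: dicts mapping parameter names to binary (0/1)
    tensors of identical shapes, as a pruning algorithm such as KEN
    might produce (hypothetical format, not KEN's actual API).
    """
    shared, retained = 0, 0
    for name, m_a in masks_a.items():
        m_b = masks_b[name]
        shared += (m_a.bool() & m_b.bool()).sum().item()
        retained += m_a.bool().sum().item()
    return shared / retained

# Toy usage with random 70%-density masks for two "models":
torch.manual_seed(0)
masks_en = {"layer.0.weight": (torch.rand(768, 768) < 0.7).int()}
masks_ie = {"layer.0.weight": (torch.rand(768, 768) < 0.7).int()}
print(f"overlap: {mask_overlap(masks_en, masks_ie):.2%}")
```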
Less is KEN: a Universal and Simple Non-Parametric Pruning Algorithm for Large Language Models
Michele Mastromattei | Fabio Massimo Zanzotto
Findings of the Association for Computational Linguistics: ACL 2024
2023
Exploring Linguistic Properties of Monolingual BERTs with Typological Classification among Languages
Elena Sofia Ruzzetti | Federico Ranaldi | Felicia Logozzo | Michele Mastromattei | Leonardo Ranaldi | Fabio Massimo Zanzotto
Findings of the Association for Computational Linguistics: EMNLP 2023
The impressive achievements of transformers force NLP researchers to delve into how these models represent the underlying structure of natural language. In this paper, we propose a novel standpoint to investigate this issue: using typological similarities among languages to observe how their respective monolingual models encode structural information. We compare transformers layer by layer for typologically similar languages to observe whether these similarities emerge at particular layers. For this investigation, we propose to use Centered Kernel Alignment to measure similarity among weight matrices. We found that syntactic typological similarity is consistent with the similarity between the weights in the middle layers, which are the pretrained BERT layers to which syntax encoding is generally attributed. Moreover, we observe that domain adaptation on semantically equivalent texts enhances this similarity among weight matrices.
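Linear Centered Kernel Alignment has a compact closed form, so the layer-wise comparison described here can be sketched directly. The snippet below implements standard linear CKA; the random weight matrices are stand-ins for same-layer weights of two monolingual BERTs.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two matrices whose
    rows are treated as examples (e.g., rows of two weight matrices)."""
    # Center each column (feature) of both matrices.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear-kernel HSIC terms via Frobenius norms of cross-products.
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

# Toy usage: compare same-layer weights of two monolingual BERTs
# (random stand-ins here; real use would load the checkpoints).
rng = np.random.default_rng(0)
w_italian = rng.standard_normal((768, 768))
w_spanish = 0.5 * w_italian + 0.5 * rng.standard_normal((768, 768))
print(f"CKA: {linear_cka(w_italian, w_spanish):.3f}")
```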
The Dark Side of the Language: Pre-trained Transformers in the DarkNet
Leonardo Ranaldi | Aria Nourbakhsh | Elena Sofia Ruzzetti | Arianna Patrizi | Dario Onorati | Michele Mastromattei | Francesca Fallucchi | Fabio Massimo Zanzotto
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
Pre-trained Transformers are challenging human performance in many Natural Language Processing tasks. The massive datasets used for pre-training seem to be the key to their success on existing tasks. In this paper, we explore how a range of pre-trained natural language understanding models performs on genuinely unseen sentences provided by classification tasks over a DarkNet corpus. Surprisingly, results show that syntactic and lexical neural networks perform on par with pre-trained Transformers even after fine-tuning. Only after what we call extreme domain adaptation, that is, retraining with the masked language model task on the entire novel corpus, do pre-trained Transformers reach their usual high results. This suggests that huge pre-training corpora may give Transformers unexpected help, since they expose the models to many of the possible sentences.
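The "extreme domain adaptation" step, continuing masked-language-model training on the whole in-domain corpus before any fine-tuning, can be sketched with the Hugging Face Trainer. The corpus file name and hyperparameters below are placeholders, not the paper's settings.

```python
import torch
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical one-sentence-per-line file with the in-domain corpus.
texts = open("darknet_corpus.txt").read().splitlines()
encodings = tokenizer(texts, truncation=True, max_length=128)

class LineDataset(torch.utils.data.Dataset):
    """Wraps tokenizer output so the Trainer can iterate over lines."""
    def __init__(self, enc):
        self.enc = enc
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, i):
        return {k: torch.tensor(v[i]) for k, v in self.enc.items()}

# The collator randomly masks 15% of tokens for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-adapted", num_train_epochs=3),
    train_dataset=LineDataset(encodings),
    data_collator=collator,
)
trainer.train()  # after this, fine-tune on the classification task
```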
2022
Lacking the Embedding of a Word? Look it up into a Traditional Dictionary
Elena Sofia Ruzzetti | Leonardo Ranaldi | Michele Mastromattei | Francesca Fallucchi | Noemi Scarpato | Fabio Massimo Zanzotto
Findings of the Association for Computational Linguistics: ACL 2022
Word embeddings are powerful dictionaries, which may easily capture language variations. However, these dictionaries fail to give sense to rare words, which are surprisingly often covered by traditional dictionaries. In this paper, we propose to use definitions retrieved from traditional dictionaries to produce word embeddings for rare words. For this purpose, we introduce two methods: Definition Neural Network (DefiNNet) and Define BERT (DefBERT). In our experiments, DefiNNet and DefBERT significantly outperform state-of-the-art as well as baseline methods devised for producing embeddings of unknown words. In fact, DefiNNet significantly outperforms FastText, which implements a method for the same task based on n-grams, and DefBERT significantly outperforms the BERT method for OOV words. Thus, definitions in traditional dictionaries are useful for building word embeddings for rare words.
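The core intuition, deriving an embedding for a rare word from its dictionary gloss, can be illustrated by encoding the definition with a pre-trained BERT and pooling the token representations. This is only a sketch of the idea; the actual DefiNNet and DefBERT architectures are specified in the paper.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embedding_from_definition(definition: str) -> torch.Tensor:
    """Mean-pool BERT token states over a dictionary definition to
    obtain a vector for the defined (rare/OOV) word."""
    inputs = tokenizer(definition, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq, 768)
    return hidden.mean(dim=1).squeeze(0)

# Toy usage with a traditional-dictionary-style definition:
vec = embedding_from_definition(
    "a small axe with a short handle, used with one hand")
print(vec.shape)  # torch.Size([768])
```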
Change My Mind: How Syntax-based Hate Speech Recognizer Can Uncover Hidden Motivations Based on Different Viewpoints
Michele Mastromattei | Valerio Basile | Fabio Massimo Zanzotto
Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022
Hate speech recognizers may mislabel sentences by not considering the different opinions that society holds on selected topics. In this paper, we show how explainable machine learning models based on syntax can help to understand the motivations that make a sentence offensive to a certain demographic group. By comparing and contrasting the results, we show the key points that lead a sentence to be labeled as hate speech and how this varies across different ethnic groups.
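A transparent, feature-based classifier makes those "key points" inspectable as learned weights. The sketch below uses plain n-gram features and an invented two-sentence corpus as stand-ins for the paper's syntax-based features, purely to show how comparing weights across classifiers trained on different groups' annotations would surface diverging perspectives.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented toy corpus; 1 = labeled as hate speech by one group's annotators.
texts = ["they should all go back home", "welcome home everyone"]
labels = [1, 0]

vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Features with the largest positive weights are the ones this model
# treats as evidence of offensiveness; contrasting these rankings
# across per-group classifiers shows where viewpoints diverge.
weights = sorted(zip(clf.coef_[0], vectorizer.get_feature_names_out()),
                 reverse=True)
for w, feat in weights[:5]:
    print(f"{w:+.2f}  {feat}")
```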
Every time I fire a conversational designer, the performance of the dialogue system goes down
Giancarlo Xompero | Michele Mastromattei | Samir Salman | Cristina Giannone | Andrea Favalli | Raniero Romagnoli | Fabio Massimo Zanzotto
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Incorporating handwritten domain scripts into neural-based task-oriented dialogue systems may be an effective way to reduce the need for large sets of annotated dialogues. In this paper, we investigate how the use of domain scripts written by conversational designers affects the performance of neural-based dialogue systems. To support this investigation, we propose the Conversational-Logic-Injection-in-Neural-Network system (CLINN), where domain scripts are coded as semi-logical rules. Using CLINN, we evaluated semi-logical rules produced by a team of differently skilled conversational designers. We experimented with the Restaurant domain of the MultiWOZ dataset. Results show that external knowledge is extremely important for reducing the need for annotated examples in conversational systems. In fact, CLINN using rules from conversational designers significantly outperforms a state-of-the-art neural-based dialogue system when trained with smaller sets of annotated dialogues.
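What a semi-logical rule might look like can be sketched as a condition over tracked dialogue-state slots mapped to a system action. The rule format, slot names, and matching logic below are hypothetical, since the paper defines its own formalism; this only illustrates the general shape of injecting designer rules alongside a neural policy.

```python
# Hypothetical semi-logical rules for the MultiWOZ Restaurant domain:
# '?' = slot must be missing, '*' = slot must be filled,
# anything else must match the state value exactly.
RULES = [
    {"if": {"intent": "find_restaurant", "food": "?", "area": "*"},
     "then": "request(food)"},   # ask for the missing slot
    {"if": {"intent": "find_restaurant", "food": "*", "area": "*"},
     "then": "inform(name)"},    # all slots filled: propose a venue
]

def apply_rules(state):
    """Return the first action whose condition matches the state."""
    for rule in RULES:
        ok = True
        for slot, req in rule["if"].items():
            val = state.get(slot)
            if req == "?":
                ok = val is None
            elif req == "*":
                ok = val is not None
            else:
                ok = val == req
            if not ok:
                break
        if ok:
            return rule["then"]
    return None  # no rule fired: defer to the neural policy

print(apply_rules({"intent": "find_restaurant", "area": "centre"}))
# -> request(food)
```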