Eliana Di Palma
2024
ELIta: A New Italian Language Resource for Emotion Analysis
Eliana Di Palma
Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024)
Emotions and language are strongly associated. In recent years, many resources have been created to investigate this association and to automatically detect emotions from texts. This study presents ELIta (Emotion Lexicon for Italian), a new language resource for the analysis and detection of emotions in Italian texts. It describes the process of lexicon creation, including lexicon selection and annotation methodologies, and compares the collected data with existing resources. By offering a non-aggregated lexicon, ELIta fills a crucial gap and is applicable to various research and practical applications. Furthermore, the work utilises the lexicon by analysing the relationships between emotions and gender.
2022
From Speed to Car and Back: An Exploratory Study about Associations between Abstract Nouns and Images
Ludovica Cerini | Eliana Di Palma | Alessandro Lenci
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
Abstract concepts, notwithstanding their lack of physical referents in the real world, are grounded in sensorimotor experience. In fact, images depicting concrete entities may be associated with abstract concepts, via both direct and indirect grounding processes. However, the links connecting the concrete concepts represented by images with abstract ones are still unclear. To investigate these links, we conducted a preliminary study collecting word association data and image-abstract word pair ratings, to identify whether the associations between the visual and verbal systems rely on the same conceptual mappings. The goal of this research is to understand to what extent linguistic associations can be confirmed with visual stimuli, in order to provide a starting point for multimodal analysis of abstract and concrete concepts.
2021
A howling success or a working sea? Testing what BERT knows about metaphors
Paolo Pedinotti | Eliana Di Palma | Ludovica Cerini | Alessandro Lenci
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Metaphor is a widespread linguistic and cognitive phenomenon governed by mechanisms that have received attention in the literature. Transformer Language Models such as BERT have brought improvements in metaphor-related tasks. However, they have been used only in application contexts, and their knowledge of the phenomenon has not been analyzed. To test what BERT knows about metaphors, we challenge it on a new dataset designed to probe various aspects of the phenomenon, such as variations in linguistic structure, variations in conventionality, the boundaries of a metaphor's plausibility, and the interpretations we attribute to metaphoric expressions. Results reveal tendencies suggesting that the model can reproduce some human intuitions about metaphors.