Antonela Tommasel
2025
Characterizing Positional Bias in Large Language Models: A Multi-Model Evaluation of Prompt Order Effects
Patrick Schilcher | Dominik Karasin | Michael Schöpf | Haisam Saleh | Antonela Tommasel | Markus Schedl
Findings of the Association for Computational Linguistics: EMNLP 2025
Large Language Models (LLMs) are widely used for a variety of tasks such as text generation, ranking, and decision-making. However, their outputs can be influenced by various forms of bias. One such bias is positional bias, where models prioritize items based on their position within a given prompt rather than their content or quality. This affects how LLMs interpret and weigh information, potentially compromising fairness, reliability, and robustness. To assess positional bias, we prompt a range of LLMs to generate descriptions for a list of topics, systematically permuting their order and analyzing variations in the responses. Our analysis shows that ranking position affects structural features and coherence, with some LLMs also reordering or omitting topics. Nonetheless, the impact of positional bias varies across different LLMs and topics, indicating an interplay with other related biases.
2018
Textual Aggression Detection through Deep Learning
Antonela Tommasel | Juan Manuel Rodriguez | Daniela Godoy
Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC-2018)
Cyberbullying and cyberaggression are serious and widespread issues increasingly affecting Internet users. With the widespread use of social media networks, bullying, once limited to particular places, can now occur anytime and anywhere. Cyberaggression refers to aggressive online behaviour that aims at harming other individuals, and involves rude, insulting, offensive, teasing or demoralising comments through online social media. Considering the dangerous consequences that cyberaggression has on its victims and its rapid spread amongst Internet users (especially children and teens), it is crucial to understand how cyberbullying occurs in order to prevent it from escalating. Given the massive information overload on the Web, there is a pressing need to develop intelligent techniques to automatically detect harmful content, which would enable large-scale social media monitoring and early detection of undesired situations. This paper presents the Isistanitos approach for detecting aggressive content across multiple social media sites. The approach combines Support Vector Machines and Recurrent Neural Network models to analyse a wide range of character, word, word-embedding, sentiment and irony features. Results confirmed the difficulty of the task (particularly for detecting covert aggression), showing the limitations of traditionally used features.