2025
Am I Blue or Is My Hobby Counting the Teardrops? Expression Leakage in Large Language Models as a Symptom of Irrelevancy Disruption
Berkay Kopru | Mehrzad Mashal | Yigit Gurses | Ákos Kádár | Maximilian Schmitt | Ditty Mathew | Felix Burkhardt | Florian Eyben | Björn W. Schuller
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Large language models (LLMs) have advanced natural language processing (NLP) through mechanisms such as next-token prediction and self-attention, but their ability to integrate broad context also makes them prone to incorporating irrelevant information. Prior work has focused on semantic leakage, i.e., bias introduced by semantically irrelevant context. In this paper, we introduce expression leakage, a novel phenomenon in which LLMs systematically generate sentimentally charged expressions that are semantically unrelated to the input context. To analyse expression leakage, we collect a benchmark dataset together with a scheme for automatically generating such a dataset from free-form Common Crawl text. In addition, we propose an automatic evaluation pipeline that correlates well with human judgment, which accelerates benchmarking by removing the need for human annotation of each analysed model. Our experiments show that expression leakage decreases as model size increases within the same LLM family. On the other hand, we demonstrate that mitigating expression leakage requires specific care during the model-building process and cannot be achieved by prompting alone. Furthermore, our experiments indicate that injecting negative sentiment into the prompt disrupts the generation process more than positive sentiment, causing a higher expression leakage rate.