Eyke Hüllermeier


2025

Adaptive Prompting: Ad-hoc Prompt Composition for Social Bias Detection
Maximilian Spliethöver | Tim Knebler | Fabian Fumagalli | Maximilian Muschalik | Barbara Hammer | Eyke Hüllermeier | Henning Wachsmuth
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Recent advances in instruction fine-tuning have led to the development of various prompting techniques for large language models, such as explicit reasoning steps. However, the success of these techniques depends on various parameters, such as the task, the language model, and the context provided. Finding an effective prompt is, therefore, often a trial-and-error process. Most existing approaches to automatic prompting aim to optimize individual techniques rather than compositions of techniques and their dependence on the input. To fill this gap, we propose an adaptive prompting approach that predicts the optimal prompt composition ad-hoc for a given input. We apply our approach to social bias detection, a highly context-dependent task that requires semantic understanding. We evaluate it with three large language models on three datasets, comparing compositions to individual techniques and other baselines. The results underline the importance of finding an effective prompt composition. Our approach robustly ensures high detection performance and is best in several settings. Moreover, first experiments on other tasks support its generalizability.

Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues
Leandra Fichtel | Maximilian Spliethöver | Eyke Hüllermeier | Patricia Jimenez | Nils Klowait | Stefan Kopp | Axel-Cyrille Ngonga Ngomo | Amelie Robrecht | Ingrid Scharlau | Lutz Terfloth | Anna-Lisa Vollmer | Henning Wachsmuth
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue

The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee’s background and needs, recent research has focused on co-constructive explanation dialogues, in which an explainer continuously monitors the explainee’s understanding and adapts their explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with an LLM in two settings, one of which instructs the LLM to explain a topic co-constructively. We evaluate the explainees’ understanding before and after the dialogue, as well as their perception of the LLM’s co-constructive behavior. Our results suggest that LLMs show some co-constructive behaviors, such as asking verification questions, which foster the explainees’ engagement and can improve understanding of a topic. However, their ability to effectively monitor the current understanding and scaffold the explanations accordingly remains limited.

2017

Annotation Challenges for Reconstructing the Structural Elaboration of Middle Low German
Nina Seemann | Marie-Luis Merten | Michaela Geierhos | Doris Tophinke | Eyke Hüllermeier
Proceedings of the Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature

In this paper, we present the annotation challenges we have encountered when working on a historical language that was undergoing elaboration processes. We especially focus on syntactic ambiguity and gradience in Middle Low German, which cause a degree of annotation uncertainty. Since current annotation tools only partially consider construction contexts and the dynamics of grammaticalization, we plan to extend CorA, a web-based annotation tool for historical and other non-standard language data, to capture elaboration phenomena and annotator uncertainty. Moreover, we seek to interactively learn morphological as well as syntactic annotations.

2013

Learning to Rank Lexical Substitutions
György Szarvas | Róbert Busa-Fekete | Eyke Hüllermeier
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing