Ingrid Scharlau
2025
Investigating Co-Constructive Behavior of Large Language Models in Explanation Dialogues
Leandra Fichtel | Maximilian Spliethöver | Eyke Hüllermeier | Patricia Jimenez | Nils Klowait | Stefan Kopp | Axel-Cyrille Ngonga Ngomo | Amelie Robrecht | Ingrid Scharlau | Lutz Terfloth | Anna-Lisa Vollmer | Henning Wachsmuth
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue
The ability to generate explanations that are understood by explainees is the quintessence of explainable artificial intelligence. Since understanding depends on the explainee's background and needs, recent research has focused on co-constructive explanation dialogues, where an explainer continuously monitors the explainee's understanding and adapts their explanations dynamically. We investigate the ability of large language models (LLMs) to engage as explainers in co-constructive explanation dialogues. In particular, we present a user study in which explainees interact with an LLM in two settings, one of which involves the LLM being instructed to explain a topic co-constructively. We evaluate the explainees' understanding before and after the dialogue, as well as their perception of the LLM's co-constructive behavior. Our results suggest that LLMs show some co-constructive behaviors, such as asking verification questions, that foster the explainees' engagement and can improve understanding of a topic. However, their ability to monitor the explainee's current understanding and to scaffold the explanations accordingly remains limited.
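The study's actual instructions are not reproduced in the abstract; the minimal sketch below only illustrates, against the OpenAI chat API in Python, how an LLM might be instructed toward co-constructive behavior. The prompt wording, model name, and topic are hypothetical assumptions, not the study's setup.

# Hedged sketch: prompting an LLM toward co-constructive explaining.
# Prompt wording, model name, and topic are illustrative assumptions,
# not the instructions used in the study.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

CO_CONSTRUCTIVE_PROMPT = (
    "You are an explainer in a dialogue. Explain the topic step by step, "
    "regularly ask short verification questions to monitor the explainee's "
    "current understanding, and adapt the depth and vocabulary of your "
    "explanations to their answers instead of delivering a monologue."
)

messages = [
    {"role": "system", "content": CO_CONSTRUCTIVE_PROMPT},
    {"role": "user", "content": "Can you explain how vaccines work?"},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)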
2023
Modeling Highlighting of Metaphors in Multitask Contrastive Learning Paradigms
Meghdut Sengupta | Milad Alshomary | Ingrid Scharlau | Henning Wachsmuth
Findings of the Association for Computational Linguistics: EMNLP 2023
Metaphorical language, such as "spending time together", projects meaning from a source domain (here, money) to a target domain (time). Thereby, it highlights certain aspects of the target domain, such as the effort behind the time investment. Highlighting aspects with metaphors (while hiding others) bridges the two domains and is the core of metaphorical meaning construction. For metaphor interpretation, linguistic theories stress that identifying the highlighted aspects is important for a better understanding of metaphors. However, metaphor research in NLP has not yet dealt with the phenomenon of highlighting. In this paper, we introduce the task of identifying the main aspect highlighted in a metaphorical sentence. Given the inherent interaction of source domains and highlighted aspects, we propose two multitask approaches (a joint learning approach and a continual learning approach) based on a fine-tuned contrastive learning model to jointly predict highlighted aspects and source domains. We further investigate whether (predicted) information about the source domain leads to better performance in predicting the highlighted aspects, and vice versa. Our experiments on an existing corpus suggest that, given information about one task, accuracy on the other improves notably over the single-task baselines, for both highlighted aspects and source domains.
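As a rough illustration of the joint learning idea (not the authors' exact fine-tuned contrastive architecture), a multitask model could share one transformer encoder between two classification heads, one per task. The encoder choice, head names, and label counts below are assumptions for the sketch.

# Hedged sketch: joint multitask prediction of source domains and
# highlighted aspects. Encoder, head sizes, and label counts are
# illustrative assumptions, not the paper's contrastive model.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class JointMetaphorModel(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased",
                 n_source_domains=20, n_aspects=30):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Two task-specific heads on a shared sentence representation.
        self.source_head = nn.Linear(hidden, n_source_domains)
        self.aspect_head = nn.Linear(hidden, n_aspects)

    def forward(self, **inputs):
        # Use the [CLS] token as the sentence representation.
        cls = self.encoder(**inputs).last_hidden_state[:, 0]
        return self.source_head(cls), self.aspect_head(cls)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = JointMetaphorModel()
batch = tokenizer(["We are spending time together."], return_tensors="pt")
source_logits, aspect_logits = model(**batch)
# Joint training would sum cross-entropy losses over both heads.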