Sofia Brenna
2025
Investigating Proactivity in Task-Oriented Dialogues
Sofia Brenna | Elisabetta Jezek | Bernardo Magnini
Dialogue Discourse Volume 16
This paper investigates proactivity, a characteristic phenomenon of collaborative human-human interaction in which a participant in the dialogue offers the addressee useful information that was not explicitly requested. More precisely, a proactive behaviour is: (i) self-prompted and not simply reactive, that is, the speaker does not act merely in response to the requests made by the other participant; (ii) somehow effective for the achievement of the dialogue goal, since the speaker exhibits long-term, goal-directed behaviour that anticipates future states and needs. Proactivity has been poorly investigated from a theoretical point of view, and there is a general need for empirical data for both quantitative and qualitative research. The paper provides an extensive analysis of proactivity in several human-human task-oriented dialogue corpora, selected to cover different characteristics, including chat exchanges and telephone calls, collection modalities such as natural settings and Wizard of Oz, and two languages, Italian and English. The main result is the D-Pro Corpus, a new resource manually annotated at the utterance level with proactivity and dialogue acts, which makes it possible to investigate proactivity in the context of task-oriented dialogues.
Our empirical investigation of proactivity yields several findings: (i) about 20% of the turns in our corpus are proactive, showing that this is a widespread and relevant phenomenon; (ii) we confirm the non-reactive nature of proactivity, highlighting a pattern where a turn in the dialogue triggers a reaction in a following turn and a proactive utterance is then added to that turn; (iii) we show that only a limited number of dialogue acts are actually involved in expressing proactivity, and we discuss the theoretical implications of this finding; (iv) we empirically confirm that proactivity plays a crucial role in recovering from goal-failure situations, contributing to the effectiveness of the whole dialogue; (v) we support the intuition that proactive utterances are non-uniformly distributed throughout the dialogue. Our empirical findings and the D-Pro Corpus provide relevant insights for deeper theoretical investigations, as well as crucial resources for improving proactivity in current task-oriented dialogue systems.
2024
Are You a Good Assistant? Assessing LLM Trustability in Task-oriented Dialogues
Tiziano Labruna | Sofia Brenna | Giovanni Bonetta | Bernardo Magnini
Proceedings of the Tenth Italian Conference on Computational Linguistics (CLiC-it 2024)
Despite the impressive capabilities of recent Large Language Models (LLMs) to generate human-like text, their ability to produce contextually appropriate content for specific communicative situations is still a matter of debate. This issue is particularly crucial when LLMs are employed as assistants to help solve tasks or achieve goals within a given conversational domain. In such scenarios, the assistant is expected to access specific knowledge (e.g., a database of restaurants, a calendar of appointments) that is not directly accessible to the user and must be consistently utilised to accomplish the task. In this paper, we conduct experiments to evaluate the trustworthiness of automatic assistants in task-oriented dialogues. Our findings indicate that state-of-the-art open-source LLMs still face significant challenges in maintaining logical consistency with a knowledge base of facts, highlighting the need for further advancements in this area.
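The consistency check at the heart of this evaluation can be sketched in a simplified form; the knowledge base, venue names, and `consistent` helper below are hypothetical illustrations, not the paper's actual code or data:

```python
# Illustrative sketch: checking whether an assistant's recommendation is
# logically consistent with a knowledge base of facts. The KB entries and
# constraint names here are invented for the example.
KB = [
    {"name": "Trattoria Roma", "food": "italian", "price": "cheap"},
    {"name": "Sushi House", "food": "japanese", "price": "expensive"},
]

def consistent(recommendation: str, constraints: dict) -> bool:
    """True iff the recommended venue exists in the KB and satisfies
    every constraint the user stated in the dialogue."""
    for venue in KB:
        if venue["name"] == recommendation:
            return all(venue.get(k) == v for k, v in constraints.items())
    return False  # venue not in the KB at all: a hallucinated recommendation

print(consistent("Trattoria Roma", {"food": "italian", "price": "cheap"}))  # True
print(consistent("Sushi House", {"price": "cheap"}))                        # False
```

A trustworthy assistant should only ever produce recommendations for which such a check succeeds; failures of the second kind (venues absent from the KB) correspond to outright hallucination.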
Dynamic Task-Oriented Dialogue: A Comparative Study of Llama-2 and Bert in Slot Value Generation
Tiziano Labruna | Sofia Brenna | Bernardo Magnini
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop
Recent advancements in instruction-based language models have demonstrated exceptional performance across various natural language processing tasks. We present a comprehensive analysis of the performance of two open-source language models, BERT and Llama-2, in the context of dynamic task-oriented dialogues. Focusing on the Restaurant domain and utilizing the MultiWOZ 2.4 dataset, our investigation centers on the models’ ability to generate predictions for masked slot values within text. The dynamic aspect is introduced through simulated domain changes, mirroring real-world scenarios where new slot values are incrementally added to a domain over time. This study contributes to the understanding of instruction-based models’ effectiveness in dynamic natural language understanding tasks when compared to traditional language models and emphasizes the significance of open-source, reproducible models in advancing research within the academic community.
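The masked slot-value setup can be sketched as follows; the utterance, slot value, and `mask_slot` helper are hypothetical examples of the evaluation format, not the paper's actual pipeline:

```python
# Illustrative sketch: preparing a masked-slot-value prediction instance from a
# task-oriented utterance, in the style of MultiWOZ-based evaluation.
def mask_slot(utterance: str, slot_value: str, mask_token: str = "[MASK]") -> str:
    """Replace a gold slot value in the utterance with a mask token,
    producing the input a model must fill in."""
    return utterance.replace(slot_value, mask_token)

utterance = "I would like to book a table at a cheap italian restaurant."
masked = mask_slot(utterance, "italian")
# masked == "I would like to book a table at a cheap [MASK] restaurant."
```

A model's prediction for the masked position is then compared against the gold slot value; the dynamic variant of the task adds slot values the model never saw during training.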