David Griol


2026

This paper focuses on improving customer service in call centers, where finding accurate answers in the shortest possible time is crucial. The proposed solution is a conversational AI system that acts as a "copilot" for human operators. The main goal of this copilot is to assist the operator in real time by providing conversation summaries, relevant domain information, and suggested responses that help guide the interaction toward a successful resolution. To achieve this, different approaches to Retrieval-Augmented Generation (RAG) have been explored. The proposed agentic-RAG architecture integrates multiple autonomous agents for routing, retrieval validation, and response generation, achieving consistent improvements in real-time performance, grounding, and overall user experience across diverse service scenarios. Empirical results on the Action-Based Conversations Dataset (ABCD) corpus show that the agents are effective in handling unstructured conversational data, and that the proposed approach improves the quality, relevance, and accuracy of the generated responses with respect to a naïve RAG baseline. It is important to emphasize that this system is not intended to replace the operator, but rather to act as a support tool that enhances efficiency and customer satisfaction.
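The agentic pipeline described above can be sketched as three cooperating agents: a router that selects a knowledge-base topic, a validator that rejects ungrounded retrievals, and a generator that produces the operator suggestion. The toy knowledge base, keyword heuristics, and function names below are illustrative assumptions, not the paper's actual implementation:

```python
from typing import Optional

# Illustrative toy knowledge base (assumption; the paper uses the ABCD corpus).
KNOWLEDGE_BASE = {
    "refund": "Refunds are issued within 5 business days after approval.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def routing_agent(query: str) -> str:
    """Routing agent: decide which knowledge-base topic the query belongs to."""
    for topic in KNOWLEDGE_BASE:
        if topic in query.lower():
            return topic
    return "general"

def validation_agent(query: str, passage: Optional[str]) -> bool:
    """Validation agent: accept the passage only if it overlaps with the query."""
    if passage is None:
        return False
    content_words = [w for w in query.lower().split() if len(w) > 3]
    return any(w in passage.lower() for w in content_words)

def generation_agent(query: str, passage: Optional[str]) -> str:
    """Generation agent: ground the suggested operator reply in the passage."""
    if passage is None:
        return "No grounded answer found; escalate to the operator."
    return f"Suggested reply: {passage}"

def copilot_answer(query: str) -> str:
    """Full pipeline: route, retrieve, validate, then generate a suggestion."""
    topic = routing_agent(query)
    passage = KNOWLEDGE_BASE.get(topic)
    if not validation_agent(query, passage):
        passage = None  # reject ungrounded retrievals
    return generation_agent(query, passage)

print(copilot_answer("When will my refund arrive?"))
```

In a real deployment each agent would wrap an LLM call; the point of the sketch is only the control flow in which validation can veto a retrieval before generation.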

2025

In Retrieval-Augmented Generation (RAG) systems, efficient information retrieval is crucial for enhancing user experience and satisfaction, as response times and computational demands significantly impact performance. However, RAG can be unnecessarily resource-intensive for frequently asked questions (FAQs) and other simple questions. In this paper, we introduce an approach that categorizes user questions, identifying simple queries that do not require RAG processing. Evaluation results show that our proposal reduces latency and improves response efficiency compared to systems relying solely on RAG.
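One simple way to realize the pre-RAG categorization described above is a fuzzy lookup against an FAQ table, falling back to the full pipeline only when no close match exists. The FAQ entries, similarity threshold, and the `rag_answer` stub are assumptions for illustration; the paper's classifier and backend are not specified here:

```python
import difflib

# Illustrative FAQ table (assumption).
FAQ = {
    "what are your opening hours": "We are open 9:00-18:00, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
}

def rag_answer(question: str) -> str:
    """Placeholder for the full (more expensive) RAG pipeline."""
    return f"[RAG] grounded answer for: {question}"

def answer(question: str, threshold: float = 0.8) -> str:
    """Serve FAQs directly from the lookup table; invoke RAG only as a fallback."""
    q = question.lower().strip("?! .")
    match = difflib.get_close_matches(q, FAQ.keys(), n=1, cutoff=threshold)
    if match:  # simple question: skip RAG entirely
        return FAQ[match[0]]
    return rag_answer(question)
```

The threshold trades coverage against precision: a high cutoff keeps the fast path conservative, so only near-duplicates of known FAQs bypass retrieval.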
Conversational AI (ConvAI) systems are gaining growing importance as an alternative for more natural interaction with digital services. In this context, Large Language Models (LLMs) have opened new possibilities for less restricted interaction and richer natural language understanding. However, despite their advanced capabilities, LLMs can pose accuracy and reliability problems, as they sometimes generate factually incorrect or contextually inappropriate content that does not fulfill the regulations or business rules of a specific application domain. In addition, they still lack the ability to adapt to users' needs and preferences or to show emotional awareness while adhering to the regulations and limitations of their designated domain. In this paper we present the TrustBoost project, which addresses the challenge of improving the trustworthiness of ConvAI along two dimensions: cognition (adaptability, flexibility, compliance, and performance) and affectivity (familiarity, emotional dimension, and perception). The duration of the project is from September 2024 to December 2027.

2010

2009

2008

In this paper, we present a comparison between two corpora acquired by means of two different techniques. The first corpus was acquired using the Wizard of Oz technique, whereas the second was acquired with a dialog simulation technique developed for this purpose. In this technique, user and system turns are selected at random, with stop conditions that automatically decide whether the simulated dialog is successful. We use several evaluation measures proposed in previous research to compare the two acquired corpora, and then discuss their similarities and differences with regard to these measures.
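The simulation strategy described above can be sketched as a loop that draws user and system turns at random from turn pools and applies two stop conditions: success when the goal turn is reached, failure when a turn limit is exceeded. The turn pools, goal condition, and turn limit below are illustrative assumptions, not the paper's actual corpus or criteria:

```python
import random

# Illustrative turn pools (assumption; the real corpora are task-oriented dialogs).
USER_TURNS = ["I want a ticket to Madrid", "At nine in the morning", "Yes, confirm"]
SYSTEM_TURNS = ["Where do you want to travel?", "At what time?", "Booking confirmed"]

def simulate_dialog(max_turns: int = 10, seed: int = 0):
    """Randomly select system/user turn pairs until a stop condition fires."""
    rng = random.Random(seed)
    dialog = []
    for _ in range(max_turns):
        dialog.append(("system", rng.choice(SYSTEM_TURNS)))
        dialog.append(("user", rng.choice(USER_TURNS)))
        # Stop condition 1: the dialog succeeds once the goal turn is produced.
        if dialog[-2][1] == "Booking confirmed":
            return dialog, True
    # Stop condition 2: too many turns without reaching the goal -> failure.
    return dialog, False
```

Running the simulator many times with different seeds yields a corpus of automatically labeled successful and unsuccessful dialogs without further human intervention.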

2007