Tristan Guigue


2025

PyTOD: Programmable Task-Oriented Dialogue with Execution Feedback
Alexandru Coca | Bo-Hsiang Tseng | Peter Boothroyd | Jianpeng Cheng | Zhenxing Zhang | Mark Gaynor | Joe Stacey | Tristan Guigue | Héctor Martínez Alonso | Diarmuid Ó Séaghdha | Anders Johannsen
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Programmable task-oriented dialogue (TOD) agents enable language models to follow structured dialogue policies, but their effectiveness hinges on accurate dialogue state tracking (DST). We present PyTOD, an agent that generates executable code to track dialogue state and uses policy and execution feedback for efficient error correction. To achieve this, PyTOD employs a simple constrained decoding approach, using a language model instead of grammar rules to follow API schemata. This leads to state-of-the-art DST performance on the challenging SGD benchmark. Our experiments show that PyTOD surpasses strong baselines in both accuracy and cross-turn consistency, demonstrating the effectiveness of execution-aware state tracking.
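To make the abstract's core idea concrete, here is a minimal illustrative sketch (not the authors' code) of what execution-aware dialogue state tracking can look like: the state is a Python object whose slot updates are executed, and execution errors become feedback for correction. All names (FlightSearch, set_slot, track) are hypothetical.

from dataclasses import dataclass

@dataclass
class FlightSearch:
    """Hypothetical API schema: slots are validated when assigned."""
    origin: str | None = None
    destination: str | None = None
    seating_class: str | None = None
    _CLASSES = ("economy", "premium economy", "business", "first")

    def set_slot(self, name: str, value: str) -> None:
        if not hasattr(self, name):
            raise AttributeError(f"unknown slot: {name}")
        if name == "seating_class" and value not in self._CLASSES:
            raise ValueError(f"invalid seating_class: {value!r}")
        setattr(self, name, value)

def track(state: FlightSearch, updates: list[tuple[str, str]]) -> list[str]:
    """Execute model-generated slot updates; collect execution feedback."""
    feedback = []
    for slot, value in updates:
        try:
            state.set_slot(slot, value)
        except (AttributeError, ValueError) as err:
            # In an execution-aware tracker, this message would be fed back
            # to the language model so it can correct its next generation.
            feedback.append(str(err))
    return feedback

state = FlightSearch()
errors = track(state, [("origin", "London"), ("seating_class", "luxury")])
print(state, errors)

In this toy version, the invalid value "luxury" raises an error instead of silently corrupting the state, mirroring the paper's point that executing the generated code surfaces tracking mistakes early.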

2024

LUCID: LLM-Generated Utterances for Complex and Interesting Dialogues
Joe Stacey | Jianpeng Cheng | John Torr | Tristan Guigue | Joris Driesen | Alexandru Coca | Mark Gaynor | Anders Johannsen
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)

Spurred by recent advances in Large Language Models (LLMs), virtual assistants are poised to take a leap forward in terms of their dialogue capabilities. Yet a major bottleneck to achieving genuinely transformative task-oriented dialogue capabilities remains the scarcity of high-quality data. Existing datasets, while impressive in scale, have limited domain coverage and contain few genuinely challenging conversational phenomena; those which are present are typically unlabelled, making it difficult to assess the strengths and weaknesses of models without time-consuming and costly human evaluation. Moreover, creating high-quality dialogue data has until now required considerable human input, limiting both the scale of these datasets and the ability to rapidly bootstrap data for a new target domain. We aim to overcome these issues with LUCID, a modularised and highly automated LLM-driven data generation system that produces realistic, diverse and challenging dialogues. We use LUCID to generate a seed dataset of 4,277 conversations across 100 intents to demonstrate its capabilities, with a human review finding consistently high-quality labels in the generated data.
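As a rough illustration of the "modularised, highly automated" pipeline the abstract describes, the sketch below separates intent planning, LLM generation, and validation into interchangeable modules. Everything here is hypothetical (the function names, the stand-in llm_generate, and the toy validator), not the LUCID implementation.

import random

def sample_intent(intents: list[str]) -> str:
    """Planner module: choose an intent for the next conversation."""
    return random.choice(intents)

def llm_generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would query a model here."""
    return f"[generated turn for: {prompt}]"

def validate(turn: str, intent: str) -> bool:
    """Validator module: a real system might use rules or an LLM judge."""
    return intent in turn

def generate_dialogue(intent: str, num_turns: int = 4) -> list[str]:
    """Generator module: alternate user/system turns, keeping validated ones."""
    dialogue: list[str] = []
    for i in range(num_turns):
        speaker = "user" if i % 2 == 0 else "system"
        turn = llm_generate(f"{speaker} turn, intent={intent}, history={dialogue}")
        if validate(turn, intent):
            dialogue.append(f"{speaker}: {turn}")
    return dialogue

print(generate_dialogue(sample_intent(["book_flight", "find_hotel"])))

Keeping generation and validation as separate modules is what allows such a pipeline to be re-pointed at a new target domain by swapping the intent list and validator rather than re-collecting data by hand.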