2023
No that’s not what I meant: Handling Third Position Repair in Conversational Question Answering
Vevake Balaraman | Arash Eshghi | Ioannis Konstas | Ioannis Papaioannou
Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue
The ability to handle miscommunication is crucial to robust and faithful conversational AI. People usually deal with miscommunication immediately as they detect it, using highly systematic interactional mechanisms called repair. One important type of repair is Third Position Repair (TPR), whereby a speaker is initially misunderstood but then corrects the misunderstanding as it becomes apparent after the addressee's erroneous response. Here, we collect and publicly release REPAIR-QA, the first large dataset of TPRs in a conversational question answering (QA) setting. The dataset comprises the TPR turns, their corresponding dialogue contexts, and candidate repairs of the original turn for executing the TPRs. We demonstrate the usefulness of the data by training and evaluating strong baseline models for executing TPRs. For stand-alone TPR execution, we perform both automatic and human evaluations on a fine-tuned T5 model as well as OpenAI's GPT-3 LLMs. Additionally, we extrinsically evaluate the LLMs' TPR processing capabilities on the downstream conversational QA task. The results indicate poor out-of-the-box performance on TPRs by the GPT-3 models, which improves significantly when the models are exposed to REPAIR-QA.
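The paper's stand-alone TPR execution treats repair as sequence-to-sequence rewriting of the original question. A minimal sketch of that setup with HuggingFace Transformers follows; the checkpoint name (`repair-qa-t5`) and the input serialization are assumptions for illustration, not the paper's released artifacts:

```python
# Minimal sketch of stand-alone TPR execution as seq2seq rewriting.
# The checkpoint name and input format are hypothetical, not the
# paper's released model or data format.
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = "repair-qa-t5"  # hypothetical fine-tuned checkpoint
tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

def execute_tpr(original_turn: str, erroneous_response: str, repair_turn: str) -> str:
    """Rewrite the original question so it incorporates the third
    position repair, yielding a self-contained repaired question."""
    source = (
        f"question: {original_turn} "
        f"response: {erroneous_response} "
        f"repair: {repair_turn}"
    )
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(execute_tpr(
    "Who directed it?",
    "Christopher Nolan directed Inception.",
    "No, I meant Interstellar.",
))
```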
2021
Recent Neural Methods on Dialogue State Tracking for Task-Oriented Dialogue Systems: A Survey
Vevake Balaraman | Seyedmostafa Sheikhalishahi | Bernardo Magnini
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue
This paper provides a comprehensive overview of recent developments in dialogue state tracking (DST) for task-oriented conversational systems. We introduce the task, the main datasets that have been exploited and their evaluation metrics, and we analyze several proposed approaches. We distinguish between static ontology DST models, which predict a fixed set of dialogue states, and dynamic ontology models, which can predict dialogue states even when the ontology changes. We also discuss the models' ability to track single or multiple domains and to scale to new domains, both in terms of knowledge transfer and zero-shot learning. We cover the period from 2013 to 2020, showing a significant increase in multi-domain methods, most of which utilize pre-trained language models.
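The survey's central distinction between static and dynamic ontology DST can be illustrated with a toy sketch. The scoring and extraction logic below are placeholder stand-ins for trained models, not any surveyed system: a static-ontology tracker classifies each slot over a fixed value list, while a dynamic-ontology tracker reads the value out of the utterance and can therefore handle values absent from the ontology:

```python
# Illustrative contrast between static- and dynamic-ontology DST.
# score_value and the span heuristic stand in for trained model
# components; they are toy placeholders, not the surveyed models.

ONTOLOGY = {"restaurant-food": ["italian", "chinese", "indian"]}

def score_value(utterance: str, slot: str, value: str) -> float:
    """Toy scorer: a real static-ontology tracker would use a neural
    classifier over each slot's fixed value set."""
    return 1.0 if value in utterance.lower() else 0.0

def track_static(utterance: str, state: dict) -> dict:
    """Static ontology: pick the best value from a fixed candidate list."""
    for slot, values in ONTOLOGY.items():
        best = max(values, key=lambda v: score_value(utterance, slot, v))
        if score_value(utterance, slot, best) > 0:
            state[slot] = best
    return state

def track_dynamic(utterance: str, state: dict) -> dict:
    """Dynamic ontology: read the value out of the utterance (e.g. by
    span extraction or generation), so unseen values are trackable."""
    tokens = utterance.lower().rstrip(".!?").split()
    if "restaurant" in tokens:
        idx = tokens.index("restaurant")
        if idx > 0:
            state["restaurant-food"] = tokens[idx - 1]  # word before "restaurant"
    return state

print(track_static("I want an italian restaurant", {}))    # known value
print(track_dynamic("I want an ethiopian restaurant", {})) # unseen value
```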
2019
How to Use Gazetteers for Entity Recognition with Neural Models
Simone Magnolini | Valerio Piccioni | Vevake Balaraman | Marco Guerini | Bernardo Magnini
Proceedings of the 5th Workshop on Semantic Deep Learning (SemDeep-5)
2018
Toward zero-shot Entity Recognition in Task-oriented Conversational Agents
Marco Guerini | Simone Magnolini | Vevake Balaraman | Bernardo Magnini
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue
We present a domain-portable zero-shot learning approach for entity recognition in task-oriented conversational agents, which does not assume any annotated sentences at training time. Rather, we derive a neural model of the entity names based only on available gazetteers, and then apply the model to recognize new entities in the context of user utterances. To evaluate our working hypothesis, we focus on nominal entities that are widely used in e-commerce to name products. Through a set of experiments in two languages (English and Italian) and three different domains (furniture, food, clothing), we show that the neural gazetteer-based approach outperforms several competitive baselines, with minimal requirements on linguistic features.
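As a rough illustration of the gazetteer-only training signal, here is a drastically simplified, non-neural sketch: a character-trigram profile built from gazetteer names stands in for the paper's neural name model, and candidate spans in an utterance are scored against it. The gazetteer entries and the utterance are invented examples:

```python
# Drastically simplified sketch of gazetteer-only entity recognition.
# The paper derives a *neural* model of entity names; a character
# trigram profile stands in for it here, to show the pipeline shape:
# the only supervision is the gazetteer, never annotated sentences.
from collections import Counter

def trigrams(text: str) -> Counter:
    padded = f"##{text.lower()}##"
    return Counter(padded[i:i + 3] for i in range(len(padded) - 2))

def build_name_model(gazetteer: list[str]) -> Counter:
    """Aggregate character trigram counts over all gazetteer entries."""
    profile = Counter()
    for name in gazetteer:
        profile += trigrams(name)
    return profile

def score_span(span: str, profile: Counter) -> float:
    """Fraction of the span's trigrams that look like entity-name trigrams."""
    grams = trigrams(span)
    hits = sum(c for g, c in grams.items() if g in profile)
    return hits / max(sum(grams.values()), 1)

gazetteer = ["oak dining table", "leather sofa", "walnut bookshelf"]
profile = build_name_model(gazetteer)

utterance = "do you have a pine dining table in stock"
words = utterance.split()
spans = [" ".join(words[i:j]) for i in range(len(words))
         for j in range(i + 1, min(i + 4, len(words)) + 1)]
best = max(spans, key=lambda s: score_span(s, profile))
print(best, round(score_span(best, profile), 2))
```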
2016
FBK’s Neural Machine Translation Systems for IWSLT 2016
M. Amin Farajian | Rajen Chatterjee | Costanza Conforti | Shahab Jalalvand | Vevake Balaraman | Mattia A. Di Gangi | Duygu Ataman | Marco Turchi | Matteo Negri | Marcello Federico
Proceedings of the 13th International Conference on Spoken Language Translation
In this paper, we describe FBK’s neural machine translation (NMT) systems submitted at the International Workshop on Spoken Language Translation (IWSLT) 2016. The systems are based on the state-of-the-art NMT architecture that is equipped with a bi-directional encoder and an attention mechanism in the decoder. They leverage linguistic information such as lemmas and part-of-speech tags of the source words in the form of additional factors along with the words. We compare performances of word and subword NMT systems along with different optimizers. Further, we explore different ensemble techniques to leverage multiple models within the same and across different networks. Several reranking methods are also explored. Our submissions cover all directions of the MSLT task, as well as en-{de, fr} and {de, fr}-en directions of TED. Compared to previously published best results on the TED 2014 test set, our models achieve comparable results on en-de and surpass them on en-fr (+2 BLEU) and fr-en (+7.7 BLEU) language pairs.