2025
Analyzing Dialogue System Behavior in a Specific Situation Requiring Interpersonal Consideration
Tetsuro Takahashi | Hirofumi Kikuchi | Jie Yang | Hiroyuki Nishikawa | Masato Komuro | Ryosaku Makino | Shiki Sato | Yuta Sasaki | Shinji Iwata | Asahi Hentona | Takato Yamazaki | Shoji Moriya | Masaya Ohagi | Zhiyang Qi | Takashi Kodama | Akinobu Lee | Takashi Minato | Kurima Sakai | Tomo Funayama | Kotaro Funakoshi | Mayumi Usami | Michimasa Inaba | Ryuichiro Higashinaka
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue
In human-human conversation, interpersonal consideration for the interlocutor is essential, and similar expectations are increasingly placed on dialogue systems. This study examines the behavior of dialogue systems in a specific interpersonal scenario where a user vents frustrations and seeks emotional support from a long-time friend represented by a dialogue system. We conducted a human evaluation and qualitative analysis of 15 dialogue systems under this setting. These systems implemented diverse strategies, such as structuring dialogue into distinct phases, modeling interpersonal relationships, and incorporating cognitive behavioral therapy techniques. Our analysis reveals that these approaches contributed to improved perceived empathy, coherence, and appropriateness, highlighting the importance of design choices in socially sensitive dialogue.
Key Challenges in Multimodal Task-Oriented Dialogue Systems: Insights from a Large Competition-Based Dataset
Shiki Sato | Shinji Iwata | Asahi Hentona | Yuta Sasaki | Takato Yamazaki | Shoji Moriya | Masaya Ohagi | Hirofumi Kikuchi | Jie Yang | Zhiyang Qi | Takashi Kodama | Akinobu Lee | Masato Komuro | Hiroyuki Nishikawa | Ryosaku Makino | Takashi Minato | Kurima Sakai | Tomo Funayama | Kotaro Funakoshi | Mayumi Usami | Michimasa Inaba | Tetsuro Takahashi | Ryuichiro Higashinaka
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Challenges in multimodal task-oriented dialogue between humans and systems, particularly those involving audio and visual interactions, have not been sufficiently explored or shared, forcing researchers to define improvement directions individually without a clearly shared roadmap. To address these challenges, we organized a competition for multimodal task-oriented dialogue systems and constructed a large competition-based dataset of 1,865 minutes of Japanese task-oriented dialogues. This dataset includes audio and visual interactions between diverse systems and human participants. After analyzing system behaviors identified as problematic by the human participants in questionnaire surveys and notable methods employed by the participating teams, we identified key challenges in multimodal task-oriented dialogue systems and discussed potential directions for overcoming these challenges.
2024
Dialogue Systems Can Generate Appropriate Responses without the Use of Question Marks? – A Study of the Effects of “?” for Spoken Dialogue Systems –
Tomoya Mizumoto | Takato Yamazaki | Katsumasa Yoshikawa | Masaya Ohagi | Toshiki Kawamoto | Toshinori Sato
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
When individuals engage in spoken discourse, various phenomena can be observed that differ from those apparent in text-based conversation. While written communication commonly uses a question mark to denote a query, in spoken discourse queries are frequently indicated by a rising intonation at the end of a sentence. However, numerous speech recognition engines do not append a question mark to recognized queries, presenting a challenge when creating a spoken dialogue system. Specifically, the absence of a question mark at the end of a sentence can impede the generation of appropriate responses to queries in spoken dialogue systems. Hence, we investigate the effect of question marks on dialogue systems and find that they have a significant impact. Moreover, we analyze specific examples in an effort to determine which types of utterances have an impact on dialogue systems.
2023
A Follow-up Study on Evaluation Metrics Using Follow-up Utterances
Toshiki Kawamoto | Yuki Okano | Takato Yamazaki | Toshinori Sato | Kotaro Funakoshi | Manabu Okumura
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation
An Open-Domain Avatar Chatbot by Exploiting a Large Language Model
Takato Yamazaki | Tomoya Mizumoto | Katsumasa Yoshikawa | Masaya Ohagi | Toshiki Kawamoto | Toshinori Sato
Proceedings of the 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue
With the ambition of creating avatars capable of human-level casual conversation, we developed an open-domain avatar chatbot, situated in a virtual reality environment, that employs a large language model (LLM). Introducing the LLM posed several challenges for multimodal integration, such as developing techniques to align its diverse outputs with avatar control, as well as addressing its slow generation speed. To address these challenges, we integrated various external modules into our system. Our system is based on the award-winning model from the Dialogue System Live Competition 5. Through this work, we hope to stimulate discussion within the research community about the potential and challenges of multimodal dialogue systems enhanced with LLMs.
2021
Phrase-Level Action Reinforcement Learning for Neural Dialog Response Generation
Takato Yamazaki | Akiko Aizawa
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2020
A Linguistic Analysis of Visually Grounded Dialogues Based on Spatial Expressions
Takuma Udagawa | Takato Yamazaki | Akiko Aizawa
Findings of the Association for Computational Linguistics: EMNLP 2020
Recent models achieve promising results in visually grounded dialogues. However, existing datasets often contain undesirable biases and lack sophisticated linguistic analyses, which make it difficult to understand how well current models recognize their precise linguistic structures. To address this problem, we make two design choices: first, we focus on the OneCommon Corpus (Udagawa and Aizawa, 2019), a simple yet challenging common grounding dataset that contains minimal bias by design. Second, we analyze the linguistic structures of its dialogues based on spatial expressions and provide comprehensive and reliable annotation for 600 dialogues. We show that our annotation captures important linguistic structures, including predicate-argument structure, modification, and ellipsis. In our experiments, we assess the model’s understanding of these structures through reference resolution. We demonstrate that our annotation can reveal both the strengths and weaknesses of baseline models at essential levels of detail. Overall, we propose a novel framework and resource for investigating fine-grained language understanding in visually grounded dialogues.