Shiki Sato


2025

User Willingness-aware Sales Talk Dataset
Asahi Hentona | Jun Baba | Shiki Sato | Reina Akama
Proceedings of the 31st International Conference on Computational Linguistics

User willingness is a crucial element in the sales talk process that affects the achievement of the salesperson’s or sales system’s objectives. Despite the importance of user willingness, to the best of our knowledge, no previous study has addressed the development of automated sales talk dialogue systems that explicitly consider user willingness. A major barrier is the lack of sales talk datasets with reliable user willingness data. Thus, in this study, we developed a user willingness–aware sales talk collection by leveraging the ecological validity concept, which is discussed in the field of human–computer interaction. Our approach focused on three types of user willingness essential in real sales interactions. We created a dialogue environment that closely resembles real-world scenarios to elicit natural user willingness, with participants evaluating their willingness at the utterance level from multiple perspectives. We analyzed the collected data to gain insights into practical user willingness–aware sales talk strategies. In addition, as a practical application of the constructed dataset, we developed and evaluated a sales dialogue system aimed at enhancing the user’s intent to purchase.

DSLCMM: A Multimodal Human-Machine Dialogue Corpus Built through Competitions
Ryuichiro Higashinaka | Tetsuro Takahashi | Shinya Iizuka | Sota Horiuchi | Michimasa Inaba | Zhiyang Qi | Yuta Sasaki | Kotaro Funakoshi | Shoji Moriya | Shiki Sato | Takashi Minato | Kurima Sakai | Tomo Funayama | Masato Komuro | Hiroyuki Nishikawa | Ryosaku Makino | Hirofumi Kikuchi | Mayumi Usami
Proceedings of the 15th International Workshop on Spoken Dialogue Systems Technology

A corpus of dialogues between multimodal systems and humans is indispensable for the development and improvement of such systems. However, there is a shortage of human-machine multimodal dialogue datasets, which hinders the widespread deployment of these systems in society. To address this issue, we construct a Japanese multimodal human-machine dialogue corpus, DSLCMM, by collecting and organizing data from the Dialogue System Live Competitions (DSLCs). This paper details the procedure for constructing the corpus and presents our analysis of the relationship between various dialogue features and evaluation scores provided by users.

Analyzing Dialogue System Behavior in a Specific Situation Requiring Interpersonal Consideration
Tetsuro Takahashi | Hirofumi Kikuchi | Jie Yang | Hiroyuki Nishikawa | Masato Komuro | Ryosaku Makino | Shiki Sato | Yuta Sasaki | Shinji Iwata | Asahi Hentona | Takato Yamazaki | Shoji Moriya | Masaya Ohagi | Zhiyang Qi | Takashi Kodama | Akinobu Lee | Takashi Minato | Kurima Sakai | Tomo Funayama | Kotaro Funakoshi | Mayumi Usami | Michimasa Inaba | Ryuichiro Higashinaka
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue

In human-human conversation, interpersonal consideration for the interlocutor is essential, and similar expectations are increasingly placed on dialogue systems. This study examines the behavior of dialogue systems in a specific interpersonal scenario where a user vents frustrations and seeks emotional support from a long-time friend represented by a dialogue system. We conducted a human evaluation and qualitative analysis of 15 dialogue systems under this setting. These systems implemented diverse strategies, such as structuring dialogue into distinct phases, modeling interpersonal relationships, and incorporating cognitive behavioral therapy techniques. Our analysis reveals that these approaches contributed to improved perceived empathy, coherence, and appropriateness, highlighting the importance of design choices in socially sensitive dialogue.

How Stylistic Similarity Shapes Preferences in Dialogue Dataset with User and Third Party Evaluations
Ikumi Numaya | Shoji Moriya | Shiki Sato | Reina Akama | Jun Suzuki
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Recent advancements in dialogue generation have broadened the scope of human–bot interactions, enabling not only contextually appropriate responses but also the analysis of human affect and sensitivity. While prior work has suggested that stylistic similarity between user and system may enhance user impressions, the distinction between subjective and objective similarity is often overlooked. To investigate this issue, we introduce a novel dataset that includes users’ preferences, subjective stylistic similarity based on users’ own perceptions, and objective stylistic similarity annotated by third-party evaluators in open-domain dialogue settings. Analysis using the constructed dataset reveals a strong positive correlation between subjective stylistic similarity and user preference. Furthermore, our analysis suggests an important finding: users’ subjective stylistic similarity differs from third-party objective similarity. This underscores the importance of distinguishing between subjective and objective evaluations and understanding the distinct aspects each captures when analyzing the relationship between stylistic similarity and user preferences. The dataset presented in this paper is available online.

Identification and Analysis of Identity-Centric Elements of Character-Likeness in Game Scenario
Shinji Iwata | Koya Ihara | Shiki Sato | Jun Baba | Asahi Hentona | Masahiro Yamazaki | Yuki Shiotsuka | Takahiro Ishizue | Akifumi Yoshimoto
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Generating and evaluating character-like utterances automatically is essential for applications ranging from character simulation to creative-writing support. Existing approaches primarily focus on basic aspects of character-likeness, such as script-fidelity knowledge and conversational ability. However, achieving a higher level of character-likeness in utterance generation and evaluation requires consideration of the character’s identity, which deeply reflects the character’s inner self. To bridge this gap, we identified a set of identity-centric character-likeness elements. First, we listed 27 elements covering various aspects of identity, drawing on psychology and identity theory. Then, to clarify the features of each element, we collected utterances annotated with these elements from a commercial smartphone game and analyzed them based on user evaluations regarding character-likeness and charm. Our analysis reveals some of the element-wise effects on character-likeness and charm. These findings enable developers to design practical and interpretable element-feature-aware generation methods and evaluation metrics for character-like utterances.

Key Challenges in Multimodal Task-Oriented Dialogue Systems: Insights from a Large Competition-Based Dataset
Shiki Sato | Shinji Iwata | Asahi Hentona | Yuta Sasaki | Takato Yamazaki | Shoji Moriya | Masaya Ohagi | Hirofumi Kikuchi | Jie Yang | Zhiyang Qi | Takashi Kodama | Akinobu Lee | Masato Komuro | Hiroyuki Nishikawa | Ryosaku Makino | Takashi Minato | Kurima Sakai | Tomo Funayama | Kotaro Funakoshi | Mayumi Usami | Michimasa Inaba | Tetsuro Takahashi | Ryuichiro Higashinaka
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Challenges in multimodal task-oriented dialogue between humans and systems, particularly those involving audio and visual interactions, have not been sufficiently explored or shared, forcing researchers to define improvement directions individually without a clearly shared roadmap. To address these challenges, we organized a competition for multimodal task-oriented dialogue systems and constructed a large competition-based dataset of 1,865 minutes of Japanese task-oriented dialogues. This dataset includes audio and visual interactions between diverse systems and human participants. After analyzing system behaviors identified as problematic by the human participants in questionnaire surveys and notable methods employed by the participating teams, we identified key challenges in multimodal task-oriented dialogue systems and discussed potential directions for overcoming these challenges.

2024

A Large Collection of Model-generated Contradictory Responses for Consistency-aware Dialogue Systems
Shiki Sato | Reina Akama | Jun Suzuki | Kentaro Inui
Findings of the Association for Computational Linguistics: ACL 2024

Mitigating the generation of contradictory responses poses a substantial challenge in dialogue response generation. The quality and quantity of available contradictory response data play a vital role in suppressing these contradictions, offering two significant benefits. First, having access to large contradiction data enables a comprehensive examination of their characteristics. Second, data-driven methods to mitigate contradictions may be enhanced with large-scale contradiction data for training. Nevertheless, no attempt has been made to build an extensive collection of model-generated contradictory responses. In this paper, we build a large dataset of response generation models’ contradictions for the first time. Then, we acquire valuable insights into the characteristics of model-generated contradictions through an extensive analysis of the collected responses. Lastly, we also demonstrate how this dataset substantially enhances the performance of data-driven contradiction suppression methods.

2022

Bipartite-play Dialogue Collection for Practical Automatic Evaluation of Dialogue Systems
Shiki Sato | Yosuke Kishinami | Hiroaki Sugiyama | Reina Akama | Ryoko Tokuhisa | Jun Suzuki
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop

Automation of dialogue system evaluation is a driving force for the efficient development of dialogue systems. This paper introduces the bipartite-play method, a dialogue collection method for automating dialogue system evaluation. It addresses the limitations of existing dialogue collection methods: (i) inability to compare with systems that are not publicly available, and (ii) vulnerability to cheating by intentionally selecting systems to be compared. Experimental results show that automatic evaluation using the bipartite-play method mitigates these two drawbacks and correlates as strongly with human subjective evaluations as existing methods.

Target-Guided Open-Domain Conversation Planning
Yosuke Kishinami | Reina Akama | Shiki Sato | Ryoko Tokuhisa | Jun Suzuki | Kentaro Inui
Proceedings of the 29th International Conference on Computational Linguistics

Prior studies addressing target-oriented conversational tasks lack a crucial notion that has been intensively studied in the context of goal-oriented artificial intelligence agents, namely, planning. In this study, we propose the Target-Guided Open-Domain Conversation Planning (TGCP) task to evaluate whether neural conversational agents have goal-oriented conversation planning abilities. Using the TGCP task, we investigate the conversation planning abilities of existing retrieval models and recent strong generative models. The experimental results reveal the challenges facing current technology.

N-best Response-based Analysis of Contradiction-awareness in Neural Response Generation Models
Shiki Sato | Reina Akama | Hiroki Ouchi | Ryoko Tokuhisa | Jun Suzuki | Kentaro Inui
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue

Avoiding the generation of responses that contradict the preceding context is a significant challenge in dialogue response generation. One feasible method is post-processing, such as filtering out contradicting responses from a resulting n-best response list. In this scenario, the quality of the n-best list considerably affects the occurrence of contradictions because the final response is chosen from this n-best list. This study quantitatively analyzes the contextual contradiction-awareness of neural response generation models using the consistency of the n-best lists. Particularly, we used polar questions as stimulus inputs for concise and quantitative analyses. Our tests illustrate the contradiction-awareness of recent neural response generation models and methodologies, followed by a discussion of their properties and limitations.

2020

Evaluating Dialogue Generation Systems via Response Selection
Shiki Sato | Reina Akama | Hiroki Ouchi | Jun Suzuki | Kentaro Inui
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Existing automatic evaluation metrics for open-domain dialogue response generation systems correlate poorly with human evaluation. We focus on evaluating response generation systems via response selection. To evaluate systems properly via response selection, we propose a method to construct response selection test sets with well-chosen false candidates. Specifically, we propose to construct test sets filtering out some types of false candidates: (i) those unrelated to the ground-truth response and (ii) those acceptable as appropriate responses. Through experiments, we demonstrate that evaluating systems via response selection with the test set developed by our method correlates more strongly with human evaluation, compared with widely used automatic evaluation metrics such as BLEU.