Tetsuji Ogawa
2026
Effects of Dialogue Corpora Properties on Fine-Tuning a Moshi-Based Spoken Dialogue Model
Yuto Abe | Mao Saeki | Atsumoto Ohashi | Shinnosuke Takamichi | Shinya Fujie | Tetsunori Kobayashi | Tetsuji Ogawa | Ryuichiro Higashinaka
Proceedings of the 16th International Workshop on Spoken Dialogue System Technology
This study investigates how interactional characteristics of spoken dialogue corpora influence the learning process and resulting behavior of speech language models for full-duplex dialogue systems. While previous research has mainly focused on improving acoustic and linguistic quality, an effective dialogue system must also capture and reproduce task-dependent interactional dynamics such as conversational tempo and turn-taking patterns. To analyze these properties, we evaluated multiple dialogue corpora using NISQA for speech quality, LLM-as-a-Judge for linguistic and semantic appropriateness, and four timing-based indicators: inter-pausal units (IPUs), pauses, gaps, and overlaps. A curriculum learning strategy was applied to fine-tune a Moshi-based full-duplex dialogue model by incrementally combining corpora with different interactional characteristics. Experimental results on a dialogue continuation task showed that corpus-specific interactional patterns effectively shape model behavior. Chat-style corpora facilitated natural rhythms with moderate overlaps and gaps, whereas consultation-style corpora promoted more stable and deliberate timing. Fine-tuning with high-quality audio improved speech quality, while using task-mismatched data degraded linguistic coherence.
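The four timing-based indicators named above can be illustrated with a small sketch. This is not the paper's implementation; it assumes each IPU is given as a `(speaker, start, end)` tuple in seconds, and classifies each interval between consecutive IPUs as a pause (same-speaker silence), a gap (silence at a speaker change), or an overlap (simultaneous speech).

```python
# Hypothetical sketch of pause/gap/overlap extraction from inter-pausal
# units (IPUs). Data format and function name are illustrative
# assumptions, not taken from the paper.

def timing_stats(ipus):
    """Classify intervals between consecutive IPUs as pauses (same
    speaker), gaps (speaker change), or overlaps (negative interval)."""
    ipus = sorted(ipus, key=lambda u: u[1])  # sort by start time
    pauses, gaps, overlaps = [], [], []
    for (spk_a, _, end_a), (spk_b, start_b, _) in zip(ipus, ipus[1:]):
        interval = start_b - end_a
        if interval < 0:
            overlaps.append(-interval)  # simultaneous speech
        elif spk_a == spk_b:
            pauses.append(interval)     # within-speaker silence
        else:
            gaps.append(interval)       # silence at a turn change
    return pauses, gaps, overlaps

# Toy two-speaker dialogue: A speaks, B replies, B pauses, A overlaps.
ipus = [("A", 0.0, 1.2), ("B", 1.5, 2.8), ("B", 3.1, 4.0), ("A", 3.8, 5.0)]
pauses, gaps, overlaps = timing_stats(ipus)
```

Aggregate statistics over these three lists (e.g. mean gap length, overlap rate) would then characterize a corpus's conversational tempo.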
2025
Analysis of the Correlation Between Theory of Mind and Dialogue Ability to Identify Essential ToM for Dialogue Systems
Haruhisa Iseno | Atsumoto Ohashi | Tetsuji Ogawa | Shinnosuke Takamichi | Ryuichiro Higashinaka
Proceedings of the 39th Pacific Asia Conference on Language, Information and Computation
2022
BERT Meets CTC: New Formulation of End-to-End Speech Recognition with Pre-trained Masked Language Model
Yosuke Higuchi | Brian Yan | Siddhant Arora | Tetsuji Ogawa | Tetsunori Kobayashi | Shinji Watanabe
Findings of the Association for Computational Linguistics: EMNLP 2022
This paper presents BERT-CTC, a novel formulation of end-to-end speech recognition that adapts BERT for connectionist temporal classification (CTC). Our formulation relaxes the conditional independence assumptions used in conventional CTC and incorporates linguistic knowledge through the explicit output dependencies obtained from BERT's contextual embeddings. BERT-CTC attends to the full contexts of the input and hypothesized output sequences via the self-attention mechanism. This mechanism encourages the model to learn inner/inter-dependencies between the audio and token representations while maintaining CTC's training efficiency. During inference, BERT-CTC combines a mask-predict algorithm with CTC decoding, which iteratively refines the output sequence. The experimental results reveal that BERT-CTC improves over conventional approaches across variations in speaking styles and languages. Finally, we show that the semantic representations in BERT-CTC are beneficial for downstream spoken language understanding tasks.
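The mask-predict refinement loop mentioned in the abstract can be sketched generically. This toy version is not the authors' implementation and abstracts away the CTC alignment entirely: it starts from a fully masked hypothesis, fills every position with a stand-in predictor, then re-masks the lowest-confidence tokens and predicts again, with the number of re-masked tokens decaying to zero over the iterations.

```python
# Illustrative mask-predict loop (toy predictor, no CTC); all names
# here are assumptions for the sketch, not from the paper.
import random

MASK = "<mask>"

def toy_predictor(tokens):
    """Stand-in for the BERT-conditioned model: returns a synthetic
    (token, confidence) guess for each masked position."""
    vocab = ["the", "cat", "sat", "down"]
    return [(random.choice(vocab), random.random()) if t == MASK else (t, 1.0)
            for t in tokens]

def mask_predict(length, iterations=3):
    hyp = [MASK] * length
    for it in range(iterations):
        preds = toy_predictor(hyp)
        hyp = [tok for tok, _ in preds]
        # Linearly decay how many tokens get re-masked; the final
        # iteration re-masks none, leaving a complete hypothesis.
        n_mask = length * (iterations - 1 - it) // iterations
        worst = sorted(range(length), key=lambda i: preds[i][1])[:n_mask]
        for i in worst:
            hyp[i] = MASK
    return hyp

random.seed(0)
hyp = mask_predict(4)
```

In BERT-CTC itself, each refinement step is additionally marginalized over CTC alignments rather than scoring tokens independently as this toy does.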
2020
Exploiting Narrative Context and A Priori Knowledge of Categories in Textual Emotion Classification
Hikari Tanabe | Tetsuji Ogawa | Tetsunori Kobayashi | Yoshihiko Hayashi
Proceedings of the 28th International Conference on Computational Linguistics
Recognition of the mental state of a human character in text is a major challenge in natural language processing. In this study, we investigate the efficacy of the narrative context in recognizing the emotional states of human characters in text and discuss an approach to make use of a priori knowledge regarding the employed emotion category system. Specifically, we experimentally show that the accuracy of emotion classification is substantially increased by encoding the preceding context of the target sentence using a BERT-based text encoder. We also compare ways to incorporate a priori knowledge of emotion categories by altering the loss function used in training, including our proposed multi-task learning approach that jointly learns to classify the positive/negative polarity of emotions. The experimental results suggest that, when using Plutchik's Wheel of Emotions, it is better to jointly classify the basic emotion categories with positive/negative polarity rather than directly exploiting its characteristic structure in which eight basic categories are arranged in a wheel.
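The joint emotion/polarity objective described above can be sketched as a weighted sum of two cross-entropy terms. This plain-Python sketch is not the paper's implementation: the polarity assignment for each basic category and the weight `alpha` are illustrative assumptions.

```python
# Hedged sketch of a multi-task loss: cross-entropy over the eight
# Plutchik basic emotions plus cross-entropy over polarity, where the
# polarity target is derived from the emotion label.
import math

def cross_entropy(logits, target):
    """Numerically stable cross-entropy for one example."""
    z = max(logits)
    log_sum = z + math.log(sum(math.exp(l - z) for l in logits))
    return log_sum - logits[target]

# Polarity of each basic emotion (0 = negative, 1 = positive); this
# exact mapping is an illustrative simplification.
POLARITY = {"joy": 1, "trust": 1, "fear": 0, "surprise": 1,
            "sadness": 0, "disgust": 0, "anger": 0, "anticipation": 1}

def multitask_loss(emo_logits, pol_logits, emo_target, alpha=0.5):
    emotions = list(POLARITY)
    l_emo = cross_entropy(emo_logits, emotions.index(emo_target))
    l_pol = cross_entropy(pol_logits, POLARITY[emo_target])
    return l_emo + alpha * l_pol  # weighted sum of the two tasks

# Uniform logits as a smoke test: loss = log(8) + alpha * log(2).
loss = multitask_loss([0.0] * 8, [0.0, 0.0], "joy")
```

In practice both heads would sit on top of a shared BERT encoder, so the polarity task regularizes the representation used for the eight-way classification.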