Yotaro Watanabe
2025
Enhancing Persuasive Dialogue Agents by Synthesizing Cross‐Disciplinary Communication Strategies
Shinnosuke Nozue | Yuto Nakano | Yotaro Watanabe | Meguru Takasaki | Shoji Moriya | Reina Akama | Jun Suzuki
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Current approaches to developing persuasive dialogue agents often rely on a limited set of predefined persuasive strategies that fail to capture the complexity of real-world interactions. We applied a cross-disciplinary approach to develop a framework for designing persuasive dialogue agents that draws on proven strategies from social psychology, behavioral economics, and communication theory. We validated our proposed framework through experiments on two distinct datasets: the Persuasion for Good dataset, which represents a specific in-domain scenario, and the DailyPersuasion dataset, which encompasses a wide range of scenarios. The proposed framework achieved strong results on both datasets, demonstrating a notable improvement in persuasion success rate as well as promising generalizability. Notably, the framework also excelled at persuading individuals with initially low intent, addressing a critical challenge for persuasive dialogue agents.
2024
Multilingual Sentence-T5: Scalable Sentence Encoders for Multilingual Applications
Chihiro Yano | Akihiko Fukuchi | Shoko Fukasawa | Hideyuki Tachibana | Yotaro Watanabe
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Prior work on multilingual sentence embedding has demonstrated that the efficient use of natural language inference (NLI) data to build high-performance models can outperform conventional methods. However, the potential benefits of the recent “exponential” growth of language models to billions of parameters have not yet been fully explored. In this paper, we introduce Multilingual Sentence T5 (m-ST5), a larger NLI-based multilingual sentence embedding model that extends Sentence T5, an existing monolingual model. By employing the low-rank adaptation (LoRA) technique, we successfully scaled the model to 5.7 billion parameters. We conducted experiments to evaluate sentence embedding performance and verified that the method outperforms the prior NLI-based approach. Furthermore, we confirmed a positive correlation between model size and performance. It was particularly noteworthy that languages with fewer resources, or those with less linguistic similarity to English, benefited more from the parameter increase. Our model is available at https://huggingface.co/pkshatech/m-ST5.
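The low-rank adaptation (LoRA) technique named in the abstract can be sketched as follows. This is a minimal NumPy illustration with made-up dimensions, not the paper's actual implementation: a frozen weight matrix is augmented with a trainable low-rank update B·A, so only 2·d·r parameters are trained instead of d².

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; rank r is chosen much smaller than the layer width d.
d_in, d_out, r, alpha = 64, 64, 8, 16

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight (not updated)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, initialized to zero

def lora_forward(x):
    # W stays frozen; training adjusts only A and B. The scaled low-rank
    # term (alpha / r) * B @ A @ x is the learned adaptation.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapted layer initially reproduces the
# frozen model exactly; adaptation grows only as B is trained.
assert np.allclose(lora_forward(x), W @ x)
```

The zero initialization of B is the standard LoRA design choice: it guarantees the adapted model starts from the pretrained behavior, which is what makes the technique safe for scaling an existing encoder such as Sentence T5.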
2021
Validity-Based Sampling and Smoothing Methods for Multiple Reference Image Captioning
Shunta Nagasawa | Yotaro Watanabe | Hitoshi Iyatomi
Proceedings of the Third Workshop on Multimodal Artificial Intelligence
In image captioning, multiple captions are often provided as ground truths, since a valid caption is not always uniquely determined. Conventional methods randomly select a single caption and treat it as correct, but there have been few effective training methods that utilize multiple given captions. In this paper, we propose two training techniques for making effective use of multiple reference captions: 1) validity-based caption sampling (VBCS), which prioritizes the use of captions that are estimated to be highly valid during training, and 2) weighted caption smoothing (WCS), which applies smoothing only to the words relevant to the reference caption, so as to reflect multiple reference captions simultaneously. Experiments show that our proposed methods improve CIDEr by 2.6 points and BLEU-4 by 0.9 points over the baseline on the MSCOCO dataset.
2014
Finding The Best Model Among Representative Compositional Models
Masayasu Muraoka | Sonse Shimaoka | Kazeto Yamamoto | Yotaro Watanabe | Naoaki Okazaki | Kentaro Inui
Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing
2013
Is a 204 cm Man Tall or Small? Acquisition of Numerical Common Sense from the Web
Katsuma Narisawa | Yotaro Watanabe | Junta Mizuno | Naoaki Okazaki | Kentaro Inui
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Computer-assisted Structuring of Emergency Management Information: A Project Note
Yotaro Watanabe | Kentaro Inui | Shingo Suzuki | Hiroko Koumoto | Mitsuhiro Higashida | Yuji Maeda | Katsumi Iwatsuki
Proceedings of the Workshop on Language Processing and Crisis Information 2013
2012
A Latent Discriminative Model for Compositional Entailment Relation Recognition using Natural Logic
Yotaro Watanabe | Junta Mizuno | Eric Nichols | Naoaki Okazaki | Kentaro Inui
Proceedings of COLING 2012
2010
A Structured Model for Joint Learning of Argument Roles and Predicate Senses
Yotaro Watanabe | Masayuki Asahara | Yuji Matsumoto
Proceedings of the ACL 2010 Conference Short Papers
Automatic Classification of Semantic Relations between Facts and Opinions
Koji Murakami | Eric Nichols | Junta Mizuno | Yotaro Watanabe | Hayato Goto | Megumi Ohki | Suguru Matsuyoshi | Kentaro Inui | Yuji Matsumoto
Proceedings of the Second Workshop on NLP Challenges in the Information Explosion Era (NLPIX 2010)
2009
Multilingual Syntactic-Semantic Dependency Parsing with Three-Stage Approximate Max-Margin Linear Models
Yotaro Watanabe | Masayuki Asahara | Yuji Matsumoto
Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task
2008
A Pipeline Approach for Syntactic and Semantic Dependency Parsing
Yotaro Watanabe | Masakazu Iwatate | Masayuki Asahara | Yuji Matsumoto
CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning
2007
A Graph-Based Approach to Named Entity Categorization in Wikipedia Using Conditional Random Fields
Yotaro Watanabe | Masayuki Asahara | Yuji Matsumoto
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)
2005
Co-authors
- Yuji Matsumoto 6
- Masayuki Asahara 5
- Kentaro Inui 5
- Junta Mizuno 3
- Naoaki Okazaki 3
- Eric Nichols 2
- Reina Akama 1
- Ai Azuma 1
- Shoko Fukasawa 1
- Akihiko Fukuchi 1
- Kenta Fukuoka 1
- Chooi-Ling Goh 1
- Hayato Goto 1
- Mitsuhiro Higashida 1
- Masakazu Iwatate 1
- Katsumi Iwatsuki 1
- Hitoshi Iyatomi 1
- Hiroko Koumoto 1
- Yuji Maeda 1
- Suguru Matsuyoshi 1
- Shoji Moriya 1
- Koji Murakami 1
- Masayasu Muraoka 1
- Shunta Nagasawa 1
- Yuto Nakano 1
- Katsuma Narisawa 1
- Shinnosuke Nozue 1
- Megumi Ohki 1
- Sonse Shimaoka 1
- Jun Suzuki 1
- Shingo Suzuki 1
- Hideyuki Tachibana 1
- Meguru Takasaki 1
- Takashi Tsuzuki 1
- Kazeto Yamamoto 1
- Chihiro Yano 1