Shusaku Sone


2024

First Heuristic Then Rational: Dynamic Use of Heuristics in Language Model Reasoning
Yoichi Aoki | Keito Kudo | Tatsuki Kuribayashi | Shusaku Sone | Masaya Taniguchi | Keisuke Sakaguchi | Kentaro Inui
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Explicit multi-step reasoning, such as chain-of-thought, is widely adopted in the community to elicit better performance from language models (LMs). We report on the systematic strategy that LMs use in this process. Our controlled experiments reveal that LMs rely more heavily on heuristics, such as lexical overlap, in the earlier stages of reasoning, when more steps are required to reach an answer. Conversely, their reliance on heuristics decreases as LMs progress closer to the final answer. This suggests that LMs track only a limited number of future steps and dynamically combine heuristic strategies with rational ones when solving tasks involving multi-step reasoning.
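
A minimal sketch of the kind of lexical-overlap heuristic the abstract refers to, assuming a simple Jaccard overlap over whitespace tokens; this is an illustration only, not the paper's implementation or experimental setup.

# Illustrative sketch (not the paper's code): one way to quantify the
# "lexical overlap" heuristic, i.e. how much a candidate intermediate
# step shares surface vocabulary with the query.

def lexical_overlap(query: str, candidate: str) -> float:
    """Jaccard overlap between the token sets of two strings."""
    q_tokens = set(query.lower().split())
    c_tokens = set(candidate.lower().split())
    if not q_tokens or not c_tokens:
        return 0.0
    return len(q_tokens & c_tokens) / len(q_tokens | c_tokens)

# Example: a step that merely repeats query words scores high on this
# heuristic, even if it is not the logically correct next step.
query = "What is the value of A if A = B + 2 and B = 3 ?"
print(lexical_overlap(query, "A = B + 2"))   # high overlap
print(lexical_overlap(query, "B equals 3"))  # lower overlap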

A Multimodal Dialogue System to Lead Consensus Building with Emotion-Displaying
Shinnosuke Nozue | Yuto Nakano | Shoji Moriya | Tomoki Ariyama | Kazuma Kokuta | Suchun Xie | Kai Sato | Shusaku Sone | Ryohei Kamei | Reina Akama | Yuichiroh Matsubayashi | Keisuke Sakaguchi
Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue

The evolution of large language models has enabled fluent dialogue, increasing interest in the coexistence of humans and avatars. An essential aspect of achieving this coexistence is developing sophisticated dialogue systems that can influence user behavior. Against this background, we propose an effective multimodal dialogue system designed to promote consensus building with humans. Our system employs a slot-filling strategy to guide discussions and attempts to influence users with suggestions through emotional expression and intent conveyance via its avatar. These innovations resulted in our system achieving the highest performance in a competition evaluating consensus building between humans and dialogue systems. We hope that our research will promote further discussion on the development of dialogue systems that enhance consensus building in human collaboration.
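
A minimal sketch of a slot-filling discussion state of the kind the abstract describes, assuming hypothetical slots for a travel-planning discussion; the slot names and question template are assumptions for illustration, not the authors' system.

# Illustrative sketch (not the authors' system): the system tracks which
# discussion points have been settled and asks about the first unfilled
# slot, keeping the consensus-building dialogue on track.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConsensusState:
    # Hypothetical slots for a travel-planning discussion.
    slots: dict = field(default_factory=lambda: {
        "destination": None, "budget": None, "date": None})

    def next_question(self) -> Optional[str]:
        for name, value in self.slots.items():
            if value is None:
                return f"Shall we decide on the {name} next?"
        return None  # all slots filled: consensus reached

state = ConsensusState()
state.slots["destination"] = "Sendai"
print(state.next_question())  # -> "Shall we decide on the budget next?"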

2023

TohokuNLP at SemEval-2023 Task 5: Clickbait Spoiling via Simple Seq2Seq Generation and Ensembling
Hiroto Kurita | Ikumi Ito | Hiroaki Funayama | Shota Sasaki | Shoji Moriya | Ye Mengyu | Kazuma Kokuta | Ryujin Hatakeyama | Shusaku Sone | Kentaro Inui
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)

This paper describes our system submitted to SemEval-2023 Task 5: Clickbait Spoiling. We work on spoiler generation in Subtask 2 and develop a system that comprises two parts: 1) simple seq2seq spoiler generation and 2) post-hoc model ensembling. Using this simple method, we address the challenge of generating multipart spoilers. On the test set, our submitted system outperformed the baseline by a large margin (approximately 10 BLEU points) for mixed types of spoilers. We also found that our system successfully handled the challenge of multipart spoilers, confirming the effectiveness of our approach.
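
A minimal sketch of spoiler generation framed as plain seq2seq with a simple post-hoc ensemble, assuming Hugging Face Transformers pipelines and FLAN-T5 checkpoints as stand-ins; the model names, prompt format, and agreement-based selection rule are assumptions for illustration, not the submitted system.

# Illustrative sketch (not the submitted system): generate a spoiler with
# several seq2seq models, then keep the candidate the models agree on most.
from transformers import pipeline

generators = [
    pipeline("text2text-generation", model=name)
    for name in ["google/flan-t5-small", "google/flan-t5-base"]  # assumed checkpoints
]

def generate_spoiler(post_title: str, article: str) -> str:
    prompt = f"question: {post_title} context: {article}"
    candidates = [g(prompt, max_new_tokens=64)[0]["generated_text"] for g in generators]

    # Post-hoc ensembling: score each candidate by its token overlap with
    # the other candidates and return the most "agreed-upon" one.
    def agreement(c: str) -> int:
        others = [o for o in candidates if o is not c]
        return sum(len(set(c.split()) & set(o.split())) for o in others)

    return max(candidates, key=agreement)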