Bin Yu


2025

Towards Consistent Natural-Language Explanations via Explanation-Consistency Finetuning
Yanda Chen | Chandan Singh | Xiaodong Liu | Simiao Zuo | Bin Yu | He He | Jianfeng Gao
Proceedings of the 31st International Conference on Computational Linguistics

Large language models (LLMs) often generate convincing, fluent explanations. However, unlike humans, they often generate inconsistent explanations on different inputs. For example, an LLM may generate the explanation “all birds can fly” when answering the question “Can sparrows fly?” while answering “no” to the related question “Can penguins fly?”. Explanations should be consistent across related examples so that humans can simulate the LLM’s decision process on multiple examples. We propose explanation-consistency finetuning (EC-finetuning), a method that adapts LLMs to generate more consistent natural-language explanations on related examples. EC-finetuning involves finetuning LLMs on synthetic data that is carefully constructed to contain consistent explanations. Across a variety of question-answering datasets in various domains, EC-finetuning yields a 10.0% relative improvement in explanation consistency on 4 finetuning datasets, and generalizes to 7 out-of-distribution datasets not seen during finetuning (+4.5% relative). We will make our code available for reproducibility.
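The abstract describes constructing synthetic finetuning data whose explanations are consistent across related examples. The sketch below is a hypothetical illustration of that idea, not the authors' released code: the `Example` dataclass, `build_consistent_pairs`, and the `answer_with_explanation` callback are all assumed names for this sketch.

```python
# Hypothetical sketch of EC-finetuning data construction (illustration only).
# Idea: take a seed example with its explanation, then answer related
# questions *conditioned on that same explanation*, so every resulting
# finetuning pair shares one consistent explanation.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    question: str
    answer: str
    explanation: str


def build_consistent_pairs(
    seed: Example,
    related_questions: List[str],
    answer_with_explanation: Callable[[str, str], str],
) -> List[Example]:
    """Return finetuning examples whose answers are re-derived from the
    seed explanation, keeping explanation and answers consistent."""
    pairs = [seed]
    for q in related_questions:
        # answer_with_explanation stands in for an LLM call that answers q
        # while being conditioned on the seed explanation.
        ans = answer_with_explanation(q, seed.explanation)
        pairs.append(Example(question=q, answer=ans, explanation=seed.explanation))
    return pairs


if __name__ == "__main__":
    seed = Example(
        question="Can sparrows fly?",
        answer="yes",
        explanation="Most birds can fly, but flightless birds like penguins cannot.",
    )
    # Dummy answerer for demonstration; a real pipeline would call an LLM here.
    dummy = lambda q, expl: "no" if "penguin" in q.lower() else "yes"
    for ex in build_consistent_pairs(seed, ["Can penguins fly?"], dummy):
        print(ex)
```

The resulting (question, explanation, answer) triples would then be used as ordinary supervised finetuning data.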

2006

Approximation Lasso Methods for Language Modeling
Jianfeng Gao | Hisami Suzuki | Bin Yu
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics