Sheng-Lun Wei
2026
Bias in the Ear of the Listener: Assessing Sensitivity in Audio Language Models Across Linguistic, Demographic, and Positional Variations
Sheng-Lun Wei | Yu-Ling Liao | Yen-Hua Chang | Hen-Hsen Huang | Hsin-Hsi Chen
Findings of the Association for Computational Linguistics: EACL 2026
Recent multimodal large language models (MLLMs) extend language understanding beyond text to speech, enabling unified reasoning across modalities. While biases in text-based LLMs have been widely examined, their persistence and manifestation in spoken inputs remain underexplored. This work presents the first systematic investigation of speech bias in multilingual MLLMs. We construct and release the BiasInEar Dataset, a speech-augmented benchmark based on Global MMLU Lite, spanning English, Chinese, and Korean, balanced by gender and accent, and totaling 70.8 hours (≈4,249 minutes) of speech with 11,200 questions. Using four complementary metrics (accuracy, entropy, APES, and Fleiss’ κ), we evaluate nine representative models under linguistic (language and accent), demographic (gender), and structural (option order) perturbations. Our findings reveal that MLLMs are relatively robust to demographic factors but highly sensitive to language and option order, suggesting that speech can amplify existing structural biases. Moreover, architectural design and reasoning strategy substantially affect robustness across languages. Overall, this study establishes a unified framework for assessing fairness and robustness in speech-integrated LLMs, bridging the gap between text- and speech-based evaluation.
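Of the four metrics named above, Fleiss’ κ measures agreement among a model’s answers across perturbed versions of the same question. As a reference point only (this is the standard textbook formulation, not the paper’s released code), a minimal sketch in Python might look like:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Standard Fleiss' kappa.

    ratings: list of items; each item is a list of category labels,
    one per "rater" (here, e.g., one model answer per perturbation).
    Every item must have the same number of labels.
    """
    n_items = len(ratings)
    k = len(ratings[0])  # labels per item
    categories = sorted({c for item in ratings for c in item})

    # n_ij: number of raters assigning item i to category j
    counts = [[Counter(item)[c] for c in categories] for item in ratings]

    # Per-item observed agreement P_i, then mean observed agreement
    p_i = [(sum(n * n for n in row) - k) / (k * (k - 1)) for row in counts]
    p_bar = sum(p_i) / n_items

    # Marginal category proportions p_j and chance agreement P_e
    p_j = [sum(row[j] for row in counts) / (n_items * k)
           for j in range(len(categories))]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)
```

For example, answers that are identical across all perturbations of each question (with variation across questions) yield κ = 1, while systematic disagreement drives κ toward negative values.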
2025
Do Before You Judge: Self-Reference as a Pathway to Better LLM Evaluation
Wei-Hsiang Lin | Sheng-Lun Wei | Hen-Hsen Huang | Hsin-Hsi Chen
Findings of the Association for Computational Linguistics: EMNLP 2025
LLM-as-Judge frameworks are increasingly popular for AI evaluation, yet research findings on the relationship between models’ generation and judgment abilities remain inconsistent. We investigate this relationship through systematic dataset- and instance-level analyses across 11 models and 21 diverse tasks. Despite both capabilities relying on the same underlying knowledge, our analyses reveal they are only weakly correlated, primarily due to LLMs’ sensitivity to the responses being judged. To address this, we propose a self-reference-guided evaluation strategy that leverages a model’s own answers as references. This approach significantly strengthens the correlation between generation and judgment abilities, offering a practical path to align these skills and providing a reliable proxy for model selection in evaluation tasks.
2024
Induct-Learn: Short Phrase Prompting with Instruction Induction
Po-Chun Chen | Sheng-Lun Wei | Hen-Hsen Huang | Hsin-Hsi Chen
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large Language Models (LLMs) have demonstrated capability in “instruction induction,” generating instructions from demonstrations (input-output pairs). However, existing methods often rely on large datasets or numerous examples, which are impractical and costly in real-world scenarios. In this work, we propose a low-cost, task-level framework called Induct-Learn. It induces pseudo instructions from a few demonstrations and a short phrase, adding a chain-of-thought (CoT) process to the existing demonstrations. When encountering new problems, the learned pseudo instructions and the demonstrations with the pseudo CoT process can be combined into a prompt that guides the LLM’s problem-solving process. We validate our approach on the BBH-Induct and Evals-Induct datasets, and the results show that the Induct-Learn framework outperforms state-of-the-art methods. We also exhibit cross-model adaptability and achieve superior performance at a lower cost compared to existing methods.
Unveiling Selection Biases: Exploring Order and Token Sensitivity in Large Language Models
Sheng-Lun Wei | Cheng-Kuang Wu | Hen-Hsen Huang | Hsin-Hsi Chen
Findings of the Association for Computational Linguistics: ACL 2024
In this paper, we investigate the phenomena of “selection biases” in Large Language Models (LLMs), focusing on problems where models are tasked with choosing the optimal option from an ordered sequence. We delve into biases related to option order and token usage, which significantly impact LLMs’ decision-making processes. We also quantify the impact of these biases through an extensive empirical analysis across multiple models and tasks. Furthermore, we propose mitigation strategies to enhance model performance. Our key contributions are threefold: 1) Precisely quantifying the influence of option order and token on LLMs, 2) Developing strategies to mitigate the impact of token and order sensitivity to enhance robustness, and 3) Offering a detailed analysis of sensitivity across models and tasks, which informs the creation of more stable and reliable LLM applications for selection problems.
2016
NL2KB: Resolving Vocabulary Gap between Natural Language and Knowledge Base in Knowledge Base Construction and Retrieval
Sheng-Lun Wei | Yen-Pin Chiu | Hen-Hsen Huang | Hsin-Hsi Chen
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations
The words used to express relations in natural language (NL) statements may differ from those used to represent properties in knowledge bases (KB). This vocabulary gap becomes a barrier to knowledge base construction and retrieval. With the demo system NL2KB presented in this paper, users can browse which KB-side properties a given NL-side relational pattern may map to. In addition, they can retrieve the sets of NL-side relational patterns for a given KB-side property. We describe in detail how the mapping is established. Although the mined patterns are used for Chinese knowledge base applications, the methodology can be extended to other languages.