BiasFilter: An Inference-Time Debiasing Framework for Large Language Models

Xiaoqing Cheng, Ruizhe Chen, Hongying Zan, Yuxiang Jia, Min Peng


Abstract
Mitigating social bias in large language models (LLMs) has become an increasingly important research objective. However, existing debiasing methods often incur high human and computational costs, exhibit limited effectiveness, and struggle to scale to larger models and open-ended generation tasks. To address these limitations, this paper proposes BiasFilter, a model-agnostic, inference-time debiasing framework that integrates seamlessly with both open-source and API-based LLMs. Instead of relying on retraining with balanced data or modifying model parameters, BiasFilter enforces fairness by filtering generation outputs in real time. Specifically, it periodically evaluates intermediate outputs every few tokens, maintains an active set of candidate continuations, and incrementally completes generation by discarding low-reward segments based on a fairness reward signal. To support this process, we construct a fairness preference dataset and train an implicit reward model to assess token-level fairness in generated responses. Extensive experiments demonstrate that BiasFilter effectively mitigates social bias across a range of LLMs while preserving overall generation quality.
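The abstract describes a segment-level, reward-guided decoding loop: every few tokens, an active set of candidate continuations is scored by a fairness reward model and low-reward segments are discarded. The sketch below is a minimal, hypothetical illustration of that idea only; the callables `extend` and `fairness_reward`, and the parameters `segment_len`, `num_samples`, and `keep_top`, are assumptions for exposition and do not reflect the authors' released implementation.

```python
# Hypothetical sketch of segment-level inference-time filtering in the spirit of
# BiasFilter. `extend` and `fairness_reward` are placeholder callables supplied by
# the user; nothing here is the paper's actual API.
from typing import Callable, List


def biasfilter_decode(
    prompt: str,
    extend: Callable[[str, int, int], List[str]],      # (context, n, segment_len) -> sampled segments
    fairness_reward: Callable[[str, str], float],      # (prompt, continuation) -> fairness score
    segment_len: int = 16,    # evaluate intermediate outputs every `segment_len` tokens
    num_samples: int = 2,     # segments sampled per active partial generation
    keep_top: int = 2,        # size of the active candidate set after filtering
    max_segments: int = 32,
) -> str:
    """Incrementally build a response, pruning low-fairness-reward continuations."""
    active = [prompt]  # active set of partial generations
    for _ in range(max_segments):
        # Extend each active partial output with freshly sampled segments.
        expanded = [ctx + seg for ctx in active for seg in extend(ctx, num_samples, segment_len)]
        if not expanded:
            break
        # Rank expansions by the fairness reward on their continuations and
        # discard the low-reward ones, keeping only the top candidates active.
        expanded.sort(key=lambda text: fairness_reward(prompt, text[len(prompt):]), reverse=True)
        active = expanded[:keep_top]
    # Return the continuation of the highest-scoring surviving candidate.
    return active[0][len(prompt):]
```

In this reading, the trained implicit reward model would play the role of `fairness_reward`, and `extend` would wrap the underlying open-source or API-based LLM, which is what makes the filtering step model-agnostic.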
Anthology ID:
2025.findings-emnlp.821
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rosé, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15187–15205
URL:
https://aclanthology.org/2025.findings-emnlp.821/
Cite (ACL):
Xiaoqing Cheng, Ruizhe Chen, Hongying Zan, Yuxiang Jia, and Min Peng. 2025. BiasFilter: An Inference-Time Debiasing Framework for Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 15187–15205, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
BiasFilter: An Inference-Time Debiasing Framework for Large Language Models (Cheng et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.821.pdf
Checklist:
https://aclanthology.org/2025.findings-emnlp.821.checklist.pdf