Jian Du
2026
Attribution-Guided Multi-Object Hallucination and Bias Detection in Vision-Language Models
Sirat Samyoun | Yingtai Xiao | Jian Du
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Vision-Language Models (VLMs) excel in multi-modal tasks but often hallucinate objects or exhibit linguistic bias by over-repeating object names, especially in complex multi-object scenes. Existing methods struggle with multi-object grounding because language priors frequently dominate visual evidence, causing hallucinated or biased objects to produce attention distributions or similarity scores nearly indistinguishable from those of real objects. We introduce SHAPLENS, a Shapley value–based attribution framework using Kernel SHAP and multi-layer fusion to detect hallucinated and biased objects. Evaluated on ADE and COCO datasets across four leading VLMs, SHAPLENS improves hallucination detection accuracy by 8–12% and F1 by 10–14% over the best baselines. It also achieves up to 6% higher bias detection performance across three distinct bias types on a curated HQH benchmark and exhibits minimal degradation (<0.03%) across partial and perturbed contexts.
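The abstract above rests on Shapley-value attribution: each candidate object is treated as a "player" and credited with its average marginal contribution to the model's output over all coalitions. The sketch below illustrates that computation for a toy additive game; it is not the SHAPLENS implementation (which uses Kernel SHAP's weighted-regression approximation over fused layers), and the object names and scores are hypothetical.

```python
import itertools
import math

def shapley_values(players, value_fn):
    """Exact Shapley values by enumerating all coalitions.

    players:  list of hashable player ids (e.g., candidate objects)
    value_fn: maps a frozenset of players to a real-valued score,
              e.g., a model's confidence when only those inputs count.
    """
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(n):
            for subset in itertools.combinations(others, r):
                s = frozenset(subset)
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = (math.factorial(r) * math.factorial(n - r - 1)
                          / math.factorial(n))
                phi[p] += weight * (value_fn(s | {p}) - value_fn(s))
    return phi

# Hypothetical object-importance game: a hallucinated object ("unicorn")
# contributes nothing to the grounded score, so its Shapley value is ~0.
contrib = {"cat": 0.6, "dog": 0.3, "unicorn": 0.0}
vals = shapley_values(list(contrib), lambda s: sum(contrib[p] for p in s))
```

For an additive game like this toy one, each player's Shapley value recovers exactly its individual contribution, which is what makes a near-zero attribution a useful hallucination signal.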
2025
TokenShapley: Token Level Context Attribution with Shapley Value
Yingtai Xiao | Yuqing Zhu | Sirat Samyoun | Wanrong Zhang | Jiachen T. Wang | Jian Du
Findings of the Association for Computational Linguistics: ACL 2025
Large language models (LLMs) demonstrate strong capabilities in in-context learning, but verifying the correctness of their generated responses remains a challenge. Prior work has explored attribution at the sentence level, but these methods fall short when users seek attribution for specific keywords within the response, such as numbers, years, or names. To address this limitation, we propose TokenShapley, a novel token-level attribution method that combines Shapley value-based data attribution with KNN-based retrieval techniques inspired by recent advances in KNN-augmented LLMs. By leveraging a precomputed datastore for contextual retrieval and computing Shapley values to quantify token importance, TokenShapley provides a fine-grained data attribution approach. Extensive evaluations on four benchmarks show that TokenShapley outperforms state-of-the-art baselines in token-level attribution, achieving an 11–23% improvement in accuracy.
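Token-level attribution over long contexts cannot afford the 2^n coalition enumeration, so methods in this family rely on approximations. The sketch below shows one standard approach, Monte Carlo permutation sampling; it is an illustration of Shapley estimation in general, not TokenShapley's KNN-datastore pipeline, and the token scores are hypothetical.

```python
import random

def permutation_shapley(players, value_fn, n_perm=2000, seed=0):
    """Monte Carlo Shapley estimate via random permutations.

    Averages each player's marginal contribution over sampled
    orderings, avoiding enumeration of all 2^n coalitions.
    """
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_perm):
        perm = players[:]
        rng.shuffle(perm)
        coalition = frozenset()
        prev = value_fn(coalition)
        for p in perm:
            coalition = coalition | {p}
            cur = value_fn(coalition)
            phi[p] += cur - prev  # marginal contribution of p
            prev = cur
    return {p: v / n_perm for p, v in phi.items()}

# Hypothetical token-importance game: the year token carries most of
# the credit for a fact-bearing answer span.
scores = {"in": 0.05, "1997": 0.8, "Paris": 0.15}
est = permutation_shapley(list(scores), lambda s: sum(scores[t] for t in s))
```

With a few thousand permutations the estimates concentrate tightly around the true values, and the per-token credits sum to the full-coalition score (the Shapley efficiency property).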