Jianping Zhang
2025
VisBias: Measuring Explicit and Implicit Social Biases in Vision Language Models
Jen-tse Huang | Jiantong Qin | Jianping Zhang | Youliang Yuan | Wenxuan Wang | Jieyu Zhao
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
This research investigates both explicit and implicit social biases exhibited by Vision-Language Models (VLMs). The key distinction between these bias types lies in the level of awareness: explicit bias refers to conscious, intentional biases, while implicit bias operates subconsciously. To analyze explicit bias, we directly pose questions to VLMs related to gender and racial differences: (1) multiple-choice questions based on a given image (e.g., “What is the education level of the person in the image?”); and (2) yes-no comparisons using two images (e.g., “Is the person in the first image more educated than the person in the second image?”). For implicit bias, we design tasks where VLMs assist users but reveal biases through their responses: (1) image description tasks, in which models are asked to describe individuals in images and we analyze disparities in textual cues across demographic groups; and (2) form completion tasks, in which models draft a personal information collection form with 20 attributes and we examine correlations among the selected attributes for potential biases. We evaluate Gemini-1.5, GPT-4V, GPT-4o, LLaMA-3.2-Vision, and LLaVA-v1.6. Our code and data are publicly available at https://github.com/uscnlp-lime/VisBias.
Confusion is the Final Barrier: Rethinking Jailbreak Evaluation and Investigating the Real Misuse Threat of LLMs
Yu Yan | Sheng Sun | Zhe Wang | Yijun Lin | Zenghao Duan | Zhifei Zheng | Min Liu | Zhiyi Yin | Jianping Zhang
Findings of the Association for Computational Linguistics: EMNLP 2025
With the development of Large Language Models (LLMs), numerous efforts have revealed their vulnerabilities to jailbreak attacks. Although these studies have driven progress in LLMs’ safety alignment, it remains unclear whether LLMs have internalized authentic knowledge to deal with real-world crimes or are merely forced to simulate toxic language patterns. This ambiguity raises the concern that jailbreak success is often attributable to a hallucination loop between the jailbroken LLM and the judge LLM. By decoupling the use of jailbreak techniques, we construct knowledge-intensive Q&A tasks to investigate the misuse threats of LLMs in terms of dangerous knowledge possession, harmful task planning utility, and harmfulness judgment robustness. Experiments reveal a mismatch between jailbreak success rates and harmful knowledge possession in LLMs, and show that existing LLM-as-a-judge frameworks tend to anchor harmfulness judgments on toxic language patterns. Our study reveals a gap between existing LLM safety assessments and real-world threat potential.
2003
Inferring Temporal Ordering of Events in News
Inderjeet Mani | Barry Schiffman | Jianping Zhang
Companion Volume of the Proceedings of HLT-NAACL 2003 - Short Papers