Basem Rizk
2025
Beyond Visual Understanding: Introducing PARROT-360V for Vision Language Model Benchmarking
Harsha Vardhan Khurdula | Basem Rizk | Indus Khaitan
Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
Current benchmarks for evaluating Vision Language Models (VLMs) often fall short in thoroughly assessing these models’ abilities to understand and process complex visual and textual content. They typically focus on simple tasks that do not require deep reasoning or the integration of multiple data modalities to solve an original problem. To address this gap, we introduce the PARROT-360V Benchmark, a novel and comprehensive benchmark featuring 2487 challenging visual puzzles designed to test VLMs on complex visual reasoning tasks. We evaluated leading models (GPT-4o, Claude-3.5-Sonnet, and Gemini-1.5-Pro) using PARROT-360V to assess their capabilities in combining visual clues with language skills to solve tasks in a manner akin to human problem-solving. Our findings reveal a notable performance gap: state-of-the-art models scored between 28% and 56% on our benchmark, significantly lower than their performance on popular benchmarks. This underscores the limitations of current VLMs in handling complex, multi-step reasoning tasks and highlights the need for more robust evaluation frameworks to advance the field.
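The scoring protocol the abstract describes amounts to comparing each model's final puzzle answer against a ground-truth solution and reporting per-model accuracy. Below is a minimal sketch of such a computation; the `query_model` helper, the puzzle record fields, and the model names are illustrative assumptions, not the paper's released code.

```python
# Hypothetical sketch: score VLMs on a visual-puzzle benchmark by exact-match accuracy.
# The puzzle fields and the query_model callable are assumptions for illustration,
# not part of the PARROT-360V release.
from typing import Callable

def accuracy(model_name: str,
             puzzles: list[dict],
             query_model: Callable[[str, bytes, str], str]) -> float:
    """Return the fraction of puzzles whose predicted answer matches the gold answer."""
    correct = 0
    for puzzle in puzzles:
        # Each record is assumed to hold an image, a textual prompt, and a gold answer.
        prediction = query_model(model_name, puzzle["image"], puzzle["prompt"])
        if prediction.strip().lower() == puzzle["answer"].strip().lower():
            correct += 1
    return correct / len(puzzles)

# Example use: one accuracy figure per evaluated model.
# for name in ["gpt-4o", "claude-3.5-sonnet", "gemini-1.5-pro"]:
#     print(name, accuracy(name, puzzles, query_model))
```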
2024
Can Language Model Moderators Improve the Health of Online Discourse?
Hyundong Cho | Shuai Liu | Taiwei Shi | Darpan Jain | Basem Rizk | Yuyang Huang | Zixun Lu | Nuan Wen | Jonathan Gratch | Emilio Ferrara | Jonathan May
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Conversational moderation of online communities is crucial to maintaining civility for a constructive environment, but it is challenging to scale and harmful to moderators. The inclusion of sophisticated natural language generation modules as a force multiplier to aid human moderators is a tantalizing prospect, but adequate evaluation approaches have so far been elusive. In this paper, we establish a systematic definition of conversational moderation effectiveness grounded in the moderation literature and set design criteria for conducting realistic yet safe evaluation. We then propose a comprehensive evaluation framework to assess models’ moderation capabilities independently of human intervention. With our framework, we conduct the first known study of language models as conversational moderators, finding that appropriately prompted models that incorporate insights from social science can provide specific and fair feedback on toxic behavior but struggle to influence users to increase their levels of respect and cooperation.