Yuxiang Wang
2024
TempCompass: Do Video LLMs Really Understand Videos?
Yuanxin Liu | Shicheng Li | Yi Liu | Yuxiang Wang | Shuhuai Ren | Lei Li | Sishuo Chen | Xu Sun | Lu Hou
Findings of the Association for Computational Linguistics: ACL 2024
Recently, there has been a surge of interest in video large language models (Video LLMs). However, existing benchmarks fail to provide comprehensive feedback on the temporal perception ability of Video LLMs. On the one hand, most of them are unable to distinguish between different temporal aspects (e.g., speed, direction) and thus cannot reflect nuanced performance on these specific aspects. On the other hand, they are limited in the diversity of task formats (e.g., only multi-choice QA), which hinders the understanding of how temporal perception performance may vary across different types of tasks. Motivated by these two problems, we propose the TempCompass benchmark, which introduces a diversity of temporal aspects and task formats. To collect high-quality test data, we devise two novel strategies: (1) In video collection, we construct conflicting videos that share the same static content but differ in a specific temporal aspect, which prevents Video LLMs from leveraging single-frame bias or language priors. (2) To collect the task instructions, we propose a paradigm where humans first annotate meta-information for a video and then an LLM generates the instruction. We also design an LLM-based approach to automatically and accurately evaluate the responses from Video LLMs. Based on TempCompass, we comprehensively evaluate 9 state-of-the-art (SOTA) Video LLMs and 3 Image LLMs, and reveal the sobering fact that these models exhibit notably poor temporal perception ability.
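The LLM-based evaluation described above could look roughly like the sketch below: a judge LLM is shown the question, the ground-truth answer, and the Video LLM's free-form response, and asked for a correct/incorrect verdict. This is a minimal sketch only; the prompt wording, the binary rubric, and the `query_llm` wrapper are assumptions for illustration, not the paper's exact implementation.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever chat LLM serves as the judge."""
    raise NotImplementedError

def judge_response(question: str, ground_truth: str, model_answer: str) -> bool:
    """Ask the judge LLM whether a Video LLM's free-form answer matches the ground truth."""
    prompt = (
        "You are evaluating an answer about a video.\n"
        f"Question: {question}\n"
        f"Ground-truth answer: {ground_truth}\n"
        f"Model answer: {model_answer}\n"
        "Reply with exactly one word: correct or incorrect."
    )
    return query_llm(prompt).strip().lower().startswith("correct")

def accuracy(samples: list[dict]) -> float:
    """samples: dicts with 'question', 'ground_truth', and 'model_answer' keys."""
    hits = sum(judge_response(s["question"], s["ground_truth"], s["model_answer"])
               for s in samples)
    return hits / len(samples)
```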
2022
PARSE: An Efficient Search Method for Black-box Adversarial Text Attacks
Pengwei Zhan | Chao Zheng | Jing Yang | Yuxiang Wang | Liming Wang | Yang Wu | Yunjian Zhang
Proceedings of the 29th International Conference on Computational Linguistics
Neural networks are vulnerable to adversarial examples. An adversary can successfully attack a model even without knowing its architecture and parameters, i.e., under a black-box scenario. Previous works on word-level attacks widely use word importance ranking (WIR) methods and complex search methods, including greedy search and heuristic algorithms, to find optimal substitutions. However, these methods fail to balance the attack success rate and the cost of attacks, such as the number of queries to the model and the time consumption. In this paper, we propose PAthological woRd Saliency sEarch (PARSE), which performs the search in a dynamic search space guided by subarea importance. Experiments show that PARSE can achieve attack success rates comparable to complex search methods while saving numerous queries and time, e.g., saving up to 74% of queries and 90% of time compared with greedy search when attacking examples from the Yelp dataset. The adversarial examples crafted by PARSE are also of high quality, highly transferable, and can effectively improve model robustness in adversarial training.
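For context on the word-level attack setting, the sketch below shows a deletion-based word importance ranking followed by greedy substitution, i.e., the style of baseline search that PARSE is compared against; PARSE's dynamic search space over subareas is not reproduced here. The black-box scorer `predict_proba` (probability of the original label) and the substitution generator `candidates` are hypothetical placeholders.

```python
from typing import Callable, Sequence

def word_saliency(words: Sequence[str],
                  predict_proba: Callable[[str], float]) -> list[float]:
    """Saliency of each word = drop in the original-label probability when that word is deleted."""
    words = list(words)
    base = predict_proba(" ".join(words))
    return [base - predict_proba(" ".join(words[:i] + words[i + 1:]))
            for i in range(len(words))]

def greedy_wir_attack(sentence: str,
                      predict_proba: Callable[[str], float],
                      candidates: Callable[[str], list[str]],
                      threshold: float = 0.5):
    """Replace words in descending saliency order until the model's confidence in the
    original label drops below `threshold`. Returns the adversarial text, or None."""
    words = sentence.split()
    sal = word_saliency(words, predict_proba)
    for i in sorted(range(len(words)), key=lambda j: sal[j], reverse=True):
        best_word, best_score = words[i], predict_proba(" ".join(words))
        for sub in candidates(words[i]):
            trial = words[:i] + [sub] + words[i + 1:]
            score = predict_proba(" ".join(trial))
            if score < best_score:
                best_word, best_score = sub, score
        words[i] = best_word
        if best_score < threshold:
            return " ".join(words)
    return None
```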