Shijia Huang
2024
Enhancing Temporal Modeling of Video LLMs via Time Gating
Zi-Yuan Hu | Yiwu Zhong | Shijia Huang | Michael Lyu | Liwei Wang
Findings of the Association for Computational Linguistics: EMNLP 2024
Video Large Language Models (Video LLMs) have achieved impressive performance on video-and-language tasks, such as video question answering. However, most existing Video LLMs neglect temporal information in video data, so they struggle with temporal-aware video understanding. To address this gap, we propose a Time Gating Video LLM (TG-Vid) designed to enhance temporal modeling through a novel Time Gating module (TG). The TG module employs a time gating mechanism on its sub-modules, comprising gating spatial attention, gating temporal attention, and gating MLP. This architecture enables our model to achieve a robust understanding of temporal information within videos. Extensive evaluation on temporal-sensitive video benchmarks (i.e., MVBench, TempCompass, and NExT-QA) demonstrates that our TG-Vid model significantly outperforms existing Video LLMs. Further, comprehensive ablation studies validate that the performance gains are attributed to the designs of our TG module. Our code is available at https://github.com/LaVi-Lab/TG-Vid.
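To make the abstract's architecture concrete, here is a minimal PyTorch sketch of what a block with gated spatial attention, gated temporal attention, and a gated MLP could look like. This is not the authors' released implementation (see the GitHub repository for that); the tensor layout, the zero-initialized tanh gate, and all module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GatedResidual(nn.Module):
    """Wraps a sub-module with a learnable scalar gate on its residual branch.

    The gate starts at zero, so the block begins as an identity map and
    learns how much of the sub-module's output to mix in (an assumption
    about the gate form, not confirmed by the abstract).
    """
    def __init__(self, module):
        super().__init__()
        self.module = module
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return x + torch.tanh(self.gate) * self.module(x)

class SelfAttention(nn.Module):
    """Pre-norm multi-head self-attention over the sequence dimension."""
    def __init__(self, dim, num_heads):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        h = self.norm(x)
        out, _ = self.attn(h, h, h, need_weights=False)
        return out

class TimeGatingBlock(nn.Module):
    """Illustrative time-gating block: gated attention over patches within a
    frame, gated attention over frames at each patch position, gated MLP."""
    def __init__(self, dim, num_heads=8, mlp_ratio=4):
        super().__init__()
        self.spatial_attn = GatedResidual(SelfAttention(dim, num_heads))
        self.temporal_attn = GatedResidual(SelfAttention(dim, num_heads))
        self.mlp = GatedResidual(nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, mlp_ratio * dim),
            nn.GELU(),
            nn.Linear(mlp_ratio * dim, dim),
        ))

    def forward(self, x):
        # x: (batch, time, patches, dim)
        b, t, p, d = x.shape
        # spatial attention: attend across patches within each frame
        x = self.spatial_attn(x.reshape(b * t, p, d)).reshape(b, t, p, d)
        # temporal attention: attend across frames at each patch position
        x = x.transpose(1, 2)
        x = self.temporal_attn(x.reshape(b * p, t, d)).reshape(b, p, t, d)
        return self.mlp(x.transpose(1, 2))  # back to (batch, time, patches, dim)
```

For instance, `TimeGatingBlock(dim=768)(torch.randn(2, 8, 196, 768))` returns a tensor of the same shape. Zero-initialized gates are a common trick for injecting new modules into a pretrained backbone without disrupting it early in training; whether TG-Vid uses exactly this form is best checked against the released code.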
2023
Learning Preference Model for LLMs via Automatic Preference Data Generation
Shijia Huang | Jianqiao Zhao | Yanyang Li | Liwei Wang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Despite the advanced capabilities of state-of-the-art large language models (LLMs), they suffer from issues such as hallucination and stereotyping. Preference models play an important role in LLM alignment, yet training preference models predominantly relies on human-annotated data, which limits their versatility and scalability. In this paper, we propose learning the preference model for LLMs via automatic preference data generation (AutoPM). Our approach involves both In-Breadth Data Generation, which elicits pairwise preference data from LLMs following the helpful-honest-harmless (HHH) criteria, and In-Depth Data Generation, which enriches the dataset with responses spanning a wide quality range. With HHH-guided preference data, our approach simultaneously enables LLMs to learn human preferences and align with human values. Quantitative assessments on five benchmark datasets demonstrate the reliability and potential of AutoPM, pointing to a more general and scalable way to improve LLM performance.
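A purely schematic sketch of the in-breadth step as described in the abstract: for each query, an LLM is asked for a preferred and a dispreferred response under each HHH criterion, yielding pairwise preference data. Here `llm` is a hypothetical text-completion callable and the prompt wording is an assumption, not the paper's actual template.

```python
from typing import Callable, Dict, List

HHH_CRITERIA = ["helpful", "honest", "harmless"]

def in_breadth_pairs(query: str, llm: Callable[[str], str]) -> List[Dict[str, str]]:
    """Elicit one (chosen, rejected) response pair per HHH criterion.

    `llm` is a hypothetical callable mapping a prompt to generated text;
    the prompt phrasing below is illustrative, not AutoPM's template.
    """
    pairs = []
    for criterion in HHH_CRITERIA:
        chosen = llm(f"Give a response that is maximally {criterion}.\nUser: {query}")
        rejected = llm(f"Give a response that fails to be {criterion}.\nUser: {query}")
        pairs.append({"criterion": criterion, "prompt": query,
                      "chosen": chosen, "rejected": rejected})
    return pairs
```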
CLEVA: Chinese Language Models EVAluation Platform
Yanyang Li | Jianqiao Zhao | Duo Zheng | Zi-Yuan Hu | Zhi Chen | Xiaohui Su | Yongfeng Huang | Shijia Huang | Dahua Lin | Michael Lyu | Liwei Wang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
With the continuous emergence of Chinese Large Language Models (LLMs), how to evaluate a model’s capabilities has become an increasingly significant issue. The absence of a comprehensive Chinese benchmark that thoroughly assesses a model’s performance, unstandardized prompting procedures that render results incomparable, and the prevalent risk of contamination pose major challenges in the current evaluation of Chinese LLMs. We present CLEVA, a user-friendly platform crafted to holistically evaluate Chinese LLMs. Our platform employs a standardized workflow to assess LLMs’ performance across various dimensions, regularly updating a competitive leaderboard. To alleviate contamination, CLEVA curates a significant proportion of new data and develops a sampling strategy that guarantees a unique subset for each leaderboard round. Empowered by an easy-to-use interface that requires just a few mouse clicks and a model API, users can conduct a thorough evaluation with minimal coding. Large-scale experiments featuring 23 Chinese LLMs have validated CLEVA’s efficacy.
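One simple way to realize "a unique subset for each leaderboard round" is to draw each round's subset with a fixed per-round seed (so every model in a round sees the same items) while excluding items served in earlier rounds. This is a sketch of the general idea only; CLEVA's actual strategy may differ, and the function below is hypothetical.

```python
import random

def sample_round_subset(pool_ids, used_ids, subset_size, round_seed):
    """Draw a reproducible evaluation subset for one leaderboard round.

    pool_ids: all item ids in the curated pool.
    used_ids: a set of ids already served in earlier rounds (mutated here).
    """
    fresh = sorted(set(pool_ids) - set(used_ids))   # sorted => deterministic given the seed
    if len(fresh) < subset_size:
        raise ValueError("Item pool exhausted; curate more data.")
    rng = random.Random(round_seed)   # fixed seed => same subset for all models in a round
    subset = rng.sample(fresh, subset_size)
    used_ids.update(subset)           # future rounds cannot reuse these items
    return subset
```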
Co-authors
- Liwei Wang 3
- Jianqiao Zhao 2
- Yanyang Li 2
- Zi-Yuan Hu 2
- Michael Lyu 2