Haotian Liu
2024
UNICORN: A Unified Causal Video-Oriented Language-Modeling Framework for Temporal Video-Language Tasks
Yuanhao Xiong | Yixin Nie | Haotian Liu | Boxin Wang | Jun Chen | Rong Jin | Cho-Jui Hsieh | Lorenzo Torresani | Jie Lei
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
The great success of large language models has encouraged the development of large multimodal models, with a focus on image-language interaction. Despite promising results in various image-language downstream tasks, it is still challenging and unclear how to extend the capabilities of these models to the more complex video domain, especially when dealing with explicit temporal signals. To address this limitation of existing large multimodal models, in this paper we adopt visual instruction tuning to build a unified causal video-oriented language modeling framework, named UNICORN. Specifically, we collect a comprehensive dataset in the instruction-following format and instruction-tune the model accordingly. Experimental results demonstrate that, without customized training objectives and intensive pre-training, UNICORN can achieve comparable or better performance on established temporal video-language tasks including moment retrieval, video paragraph captioning, and dense video captioning. Moreover, the instruction-tuned model can be used to automatically annotate internet videos with temporally-aligned captions. Compared to commonly used ASR captions, we show that training on our generated captions improves the performance of video-language models in both zero-shot and fine-tuning settings. Source code can be found at https://github.com/xyh97/UNICORN.
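The abstract describes casting temporal video-language tasks (such as moment retrieval) into an instruction-following format for tuning. Below is a minimal, hypothetical sketch of what one such training example might look like; the field names, timestamp format, and prompt template are illustrative assumptions, not UNICORN's actual data schema.

```python
# Hypothetical instruction-following example for moment retrieval cast as
# text generation. All field names and formats are illustrative assumptions.
example = {
    "video": "cooking_demo_0421.mp4",  # hypothetical video identifier
    "instruction": (
        "Find the moment in the video that matches the query: "
        "'the person cracks an egg into the bowl'. "
        "Answer with start and end times in seconds."
    ),
    "response": "12.4 - 17.9",  # target text the model learns to generate
}

def to_prompt(ex: dict) -> str:
    """Flatten one example into a single training string (illustration only)."""
    return f"USER: <video> {ex['instruction']}\nASSISTANT: {ex['response']}"

print(to_prompt(example))
```

Formulating the temporal output as plain text is what lets a single causal language model handle retrieval and captioning tasks without task-specific heads.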
Aligning Large Multimodal Models with Factually Augmented RLHF
Zhiqing Sun | Sheng Shen | Shengcao Cao | Haotian Liu | Chunyuan Li | Yikang Shen | Chuang Gan | Liangyan Gui | Yu-Xiong Wang | Yiming Yang | Kurt Keutzer | Trevor Darrell
Findings of the Association for Computational Linguistics: ACL 2024
Large Multimodal Models (LMMs) are built across modalities, and misalignment between the two modalities can result in “hallucination”: generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt Reinforcement Learning from Human Feedback (RLHF) from the text domain to vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark, MMHAL-BENCH, with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves a remarkable improvement on the LLaVA-Bench dataset, reaching 96% of the performance level of the text-only GPT-4 (while previous best methods only achieve the 87% level), and a 60% improvement on MMHAL-BENCH over other baselines.
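The abstract describes training a reward model from pairwise human comparisons of responses. As a generic illustration of that step (not the paper's Factually Augmented RLHF implementation), the sketch below shows the standard pairwise Bradley-Terry preference loss commonly used for RLHF reward modeling: the reward model is pushed to score the preferred (less hallucinated) response above the rejected one.

```python
# Minimal sketch of the standard pairwise (Bradley-Terry) reward-model loss
# used in RLHF reward modeling. Generic illustration, not the paper's code.
import torch
import torch.nn.functional as F

def pairwise_reward_loss(reward_chosen: torch.Tensor,
                         reward_rejected: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_chosen - r_rejected), averaged over the batch."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage: scalar rewards for a batch of three comparison pairs.
r_chosen = torch.tensor([1.2, 0.3, 0.8])
r_rejected = torch.tensor([0.4, 0.5, -0.1])
print(pairwise_reward_loss(r_chosen, r_rejected))  # small positive scalar
```

In the factually augmented variant described above, the reward model would additionally condition on factual context such as image captions or ground-truth multi-choice options, which is intended to make high reward harder to obtain through ungrounded (hallucinated) text.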