On Efficient Language and Vision Assistants for Visually-Situated Natural Language Understanding: What Matters in Reading and Reasoning

Geewook Kim, Minjoon Seo


Abstract
Recent advancements in language and vision assistants have showcased impressive capabilities but suffer from a lack of transparency, limiting broader research and reproducibility. While open-source models handle general image tasks effectively, they face challenges with the high computational demands of complex visually-situated text understanding. Such tasks often require increased token inputs and large vision modules to harness high-resolution information. Striking a balance between model size and data importance remains an open question. This study aims to redefine the design of vision-language models by identifying key components and creating efficient models with constrained inference costs. By strategically formulating datasets, optimizing vision modules, and enhancing supervision techniques, we achieve significant improvements in inference throughput while maintaining high performance. Extensive experiments across models ranging from 160M to 13B parameters offer insights into model optimization. We will fully open-source our codebase, models, and datasets at https://github.com/naver-ai/elva.
Anthology ID:
2024.emnlp-main.944
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
16978–17000
URL:
https://aclanthology.org/2024.emnlp-main.944
DOI:
10.18653/v1/2024.emnlp-main.944
Cite (ACL):
Geewook Kim and Minjoon Seo. 2024. On Efficient Language and Vision Assistants for Visually-Situated Natural Language Understanding: What Matters in Reading and Reasoning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 16978–17000, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
On Efficient Language and Vision Assistants for Visually-Situated Natural Language Understanding: What Matters in Reading and Reasoning (Kim & Seo, EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.944.pdf
Software:
2024.emnlp-main.944.software.zip
Data:
2024.emnlp-main.944.data.zip