VisualRWKV: Exploring Recurrent Neural Networks for Visual Language Models

Haowen Hou, Peigen Zeng, Fei Ma, Fei Richard Yu


Abstract
Visual Language Models (VLMs) have rapidly progressed with the recent success of large language models. However, there have been few attempts to incorporate efficient linear Recurrent Neural Network (RNN) architectures into VLMs. In this study, we introduce VisualRWKV, the first application of a linear RNN model to multimodal learning tasks, leveraging the pre-trained RWKV language model. We propose data-dependent recurrence and sandwich prompts to enhance modeling capabilities, along with a 2D image scanning mechanism to enrich the processing of visual sequences. Extensive experiments demonstrate that VisualRWKV achieves competitive performance against Transformer-based models such as LLaVA-1.5 on various benchmarks. Compared to LLaVA-1.5, VisualRWKV is 3.98× faster and saves 54% of GPU memory at an inference length of 24K tokens. To facilitate further research and analysis, we have made the checkpoints and associated code publicly available in the following GitHub repository: https://github.com/howard-hou/VisualRWKV.
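The 2D image scanning mechanism is only named in the abstract, not specified. As a rough illustration of the general idea, the sketch below reorders a grid of image patch embeddings under four plausible scan directions before feeding them to a recurrent model; the exact scan orders used in the paper, and the function name `scan_2d`, are assumptions for this example.

```python
import numpy as np

def scan_2d(tokens: np.ndarray, h: int, w: int) -> list[np.ndarray]:
    """Reorder a row-major sequence of image patch tokens under four
    2D scan directions (assumed for illustration): row-forward,
    row-backward, column-forward, column-backward.

    tokens: array of shape (h*w, d), patch embeddings in row-major order.
    Returns a list of four (h*w, d) arrays, one per scan order.
    """
    grid = tokens.reshape(h, w, -1)                    # (h, w, d) patch grid
    row_fwd = grid.reshape(h * w, -1)                  # left-to-right, top-to-bottom
    row_bwd = row_fwd[::-1]                            # reversed row-major scan
    col_fwd = grid.transpose(1, 0, 2).reshape(h * w, -1)  # top-to-bottom, left-to-right
    col_bwd = col_fwd[::-1]                            # reversed column-major scan
    return [row_fwd, row_bwd, col_fwd, col_bwd]
```

Each reordered sequence can then be processed by the recurrent backbone, so that every patch eventually appears both early and late in some scan, mitigating the unidirectional bias of a 1D recurrence over image tokens.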
Anthology ID:
2025.coling-main.694
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
10423–10434
URL:
https://aclanthology.org/2025.coling-main.694/
Cite (ACL):
Haowen Hou, Peigen Zeng, Fei Ma, and Fei Richard Yu. 2025. VisualRWKV: Exploring Recurrent Neural Networks for Visual Language Models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 10423–10434, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
VisualRWKV: Exploring Recurrent Neural Networks for Visual Language Models (Hou et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.694.pdf