Haowen Hou


2025

VisualRWKV: Exploring Recurrent Neural Networks for Visual Language Models
Haowen Hou | Peigen Zeng | Fei Ma | Fei Richard Yu
Proceedings of the 31st International Conference on Computational Linguistics

Visual Language Models (VLMs) have rapidly progressed with the recent success of large language models. However, there have been few attempts to incorporate efficient linear Recurrent Neural Network (RNN) architectures into VLMs. In this study, we introduce VisualRWKV, the first application of a linear RNN model to multimodal learning tasks, leveraging the pre-trained RWKV language model. We propose a data-dependent recurrence and sandwich prompts to enhance our modeling capabilities, along with a 2D image scanning mechanism to enrich the processing of visual sequences. Extensive experiments demonstrate that VisualRWKV achieves competitive performance compared to Transformer-based models like LLaVA-1.5 on various benchmarks. Compared to LLaVA-1.5, VisualRWKV is 3.98 times faster and saves 54% of GPU memory at an inference length of 24K tokens. To facilitate further research and analysis, we have made the checkpoints and the associated code publicly accessible at the following GitHub repository: https://github.com/howard-hou/VisualRWKV.
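
The abstract mentions a 2D image scanning mechanism for feeding visual tokens to a recurrent backbone. The sketch below is purely illustrative and not the released VisualRWKV code: it shows one plausible way to turn an H x W grid of patch features into several 1D sequences so a recurrent model sees the image in more than one spatial order. The specific set of scan directions is an assumption made for illustration.

```python
import numpy as np


def scan_2d(patches: np.ndarray) -> list:
    """patches: array of shape (H, W, D), one feature vector per image patch.

    Returns four flattened sequences of shape (H*W, D): row-major forward,
    row-major backward, column-major forward, column-major backward.
    (Hypothetical scan set, for illustration only.)
    """
    h, w, d = patches.shape
    row_major = patches.reshape(h * w, d)                     # left-to-right, top-to-bottom
    col_major = patches.transpose(1, 0, 2).reshape(h * w, d)  # top-to-bottom, left-to-right
    return [row_major, row_major[::-1], col_major, col_major[::-1]]


if __name__ == "__main__":
    grid = np.random.randn(4, 4, 8)   # toy 4x4 grid of 8-dim patch features
    for seq in scan_2d(grid):
        print(seq.shape)              # (16, 8) for each scan order
```

Each scan order could then be processed by the recurrent model and the resulting states combined; how VisualRWKV actually merges the scans is described in the paper itself.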

2023

RWKV: Reinventing RNNs for the Transformer Era
Bo Peng | Eric Alcaide | Quentin Anthony | Alon Albalak | Samuel Arcadinho | Stella Biderman | Huanqi Cao | Xin Cheng | Michael Chung | Leon Derczynski | Xingjian Du | Matteo Grella | Kranthi Gv | Xuzheng He | Haowen Hou | Przemyslaw Kazienko | Jan Kocon | Jiaming Kong | Bartłomiej Koptyra | Hayden Lau | Jiaju Lin | Krishna Sri Ipsit Mantri | Ferdinand Mom | Atsushi Saito | Guangyu Song | Xiangru Tang | Johan Wind | Stanisław Woźniak | Zhenyuan Zhang | Qinghua Zhou | Jian Zhu | Rui-Jie Zhu
Findings of the Association for Computational Linguistics: EMNLP 2023

Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the performance of Transformers due to limitations in parallelization and scalability. We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of Transformers with the efficient inference of RNNs. Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, thus parallelizing computations during training while maintaining constant computational and memory complexity during inference. We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers, suggesting future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling trade-offs between computational efficiency and model performance in sequence processing tasks.
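
To make the constant-memory inference claim concrete, here is a minimal, hedged sketch of an RWKV-style WKV recurrence written in its RNN form. It follows the published RWKV-4 formulation from memory and is simplified: the decay w and bonus u are scalars rather than per-channel parameters, and the numerical-stability trick used in practice (tracking a running max exponent) is omitted.

```python
import numpy as np


def wkv_recurrent(k: np.ndarray, v: np.ndarray, w: float, u: float) -> np.ndarray:
    """k, v: arrays of shape (T, D) with per-token keys and values.
    w: decay applied to past contributions; u: bonus weight for the current token.
    Returns the (T, D) sequence of wkv outputs while keeping only two running
    accumulators, so memory does not grow with sequence length T."""
    T, D = k.shape
    num = np.zeros(D)   # running exp(k)-weighted sum of values
    den = np.zeros(D)   # running sum of exp(k) weights
    out = np.empty((T, D))
    for t in range(T):
        e_k = np.exp(k[t])
        # current token enters with an extra bonus u before joining the state
        out[t] = (num + np.exp(u + k[t]) * v[t]) / (den + np.exp(u + k[t]))
        # decay the accumulated state and absorb the current token
        num = np.exp(-w) * num + e_k * v[t]
        den = np.exp(-w) * den + e_k
    return out


if __name__ == "__main__":
    T, D = 16, 4
    k, v = np.random.randn(T, D), np.random.randn(T, D)
    print(wkv_recurrent(k, v, w=0.5, u=0.1).shape)  # (16, 4)
```

During training the same quantity can be computed in a parallel, Transformer-like form; the sequential form above is what gives O(1) state per token at inference time.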