BcQLM: Efficient Vision-Language Understanding with Distilled Q-Gated Cross-Modal Fusion

Sike Xiang, Shuang Chen, Amir Atapour-Abarghouei


Abstract
As multimodal large language models (MLLMs) advance, their large-scale architectures pose challenges for deployment in resource-constrained environments. In the age of large models, where energy efficiency, computational scalability and environmental sustainability are paramount, the development of lightweight yet high-performance models is critical for real-world applications. As such, we propose a lightweight MLLM framework for end-to-end visual question answering. Our approach centres on BreezeCLIP, a compact yet powerful vision-language encoder optimised for efficient multimodal understanding. With only 1.2 billion parameters overall, our model significantly reduces computational cost while achieving performance comparable to standard-size MLLMs. Experiments on multiple datasets further validate its effectiveness in balancing accuracy and efficiency, and its modular, extensible design enables generalisation to broader multimodal tasks. The proposed framework, denoted BcQLM (BreezeCLIP-enhanced Q-Gated Multimodal Language Model), offers a promising path toward deployable MLLMs under practical hardware constraints. The source code is available at https://github.com/thico0224/BcQLM.
Anthology ID:
2025.findings-emnlp.780
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14462–14472
URL:
https://aclanthology.org/2025.findings-emnlp.780/
Cite (ACL):
Sike Xiang, Shuang Chen, and Amir Atapour-Abarghouei. 2025. BcQLM: Efficient Vision-Language Understanding with Distilled Q-Gated Cross-Modal Fusion. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 14462–14472, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
BcQLM: Efficient Vision-Language Understanding with Distilled Q-Gated Cross-Modal Fusion (Xiang et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.780.pdf
Checklist:
2025.findings-emnlp.780.checklist.pdf