HGCLIP: Exploring Vision-Language Models with Graph Representations for Hierarchical Understanding

Peng Xia, Xingtong Yu, Ming Hu, Lie Ju, Zhiyong Wang, Peibo Duan, Zongyuan Ge


Abstract
Object categories are typically organized into a multi-granularity taxonomic hierarchy. When classifying categories at different hierarchy levels, traditional uni-modal approaches focus primarily on image features and reveal limitations in complex scenarios. Recent studies integrating Vision-Language Models (VLMs) with class hierarchies have shown promise, yet they fall short of fully exploiting the hierarchical relationships. These efforts are constrained by their inability to perform consistently across categories of varying granularity. To tackle this issue, we propose a novel framework (HGCLIP) that effectively combines CLIP with a deeper exploitation of the Hierarchical class structure via Graph representation learning. We construct the class hierarchy as a graph, whose nodes represent the textual or image features of each category. After passing through a graph encoder, the textual features incorporate hierarchical structural information, while the image features emphasize class-aware features derived from prototypes via an attention mechanism. Our approach demonstrates significant improvements on 11 diverse visual recognition benchmarks. Our code is available at https://github.com/richard-peng-xia/HGCLIP.
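The pipeline described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation (which lives in the linked repository); the toy three-class hierarchy, the hand-rolled GCN layer, and all variable names here are hypothetical, assuming CLIP-style 512-dimensional embeddings for both text and image features.

```python
# Minimal sketch of the HGCLIP idea (illustrative only; the toy hierarchy,
# `gcn_layer`, and random stand-in features are hypothetical assumptions,
# not the authors' code). Requires: pip install torch
import torch
import torch.nn.functional as F

def normalized_adjacency(edges, num_nodes):
    """Symmetrically normalized adjacency with self-loops (GCN-style)."""
    A = torch.eye(num_nodes)
    for parent, child in edges:
        A[parent, child] = A[child, parent] = 1.0
    d_inv_sqrt = A.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)

def gcn_layer(x, A_hat, weight):
    """One graph-convolution step: aggregate neighbors, then project."""
    return F.relu(A_hat @ x @ weight)

# Toy taxonomy: node 0 = "animal" (coarse); nodes 1, 2 = "dog", "cat" (fine).
edges = [(0, 1), (0, 2)]
dim = 512                               # CLIP ViT-B embedding size
text_feats = torch.randn(3, dim)        # stand-in for CLIP text embeddings
A_hat = normalized_adjacency(edges, num_nodes=3)
W = torch.randn(dim, dim) * 0.02

# Textual features now carry hierarchical structure information.
hier_text_feats = gcn_layer(text_feats, A_hat, W)

# Image side: patch tokens attend to class prototypes so the pooled
# representation emphasizes class-aware evidence.
patch_feats = torch.randn(1, 196, dim)  # e.g. 14x14 ViT patch tokens
prototypes = torch.randn(3, dim)        # one visual prototype per class
attn = torch.softmax(patch_feats @ prototypes.T / dim ** 0.5, dim=-1)
class_aware = attn @ prototypes                       # (1, 196, dim)
image_feat = (patch_feats + class_aware).mean(dim=1)  # fused image feature

# Zero-shot-style classification: cosine similarity between the fused
# image feature and the hierarchy-aware text features.
logits = F.normalize(image_feat, dim=-1) @ F.normalize(hier_text_feats, dim=-1).T
print(logits.shape)  # torch.Size([1, 3])
```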
Anthology ID:
2025.coling-main.19
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
269–280
URL:
https://aclanthology.org/2025.coling-main.19/
Cite (ACL):
Peng Xia, Xingtong Yu, Ming Hu, Lie Ju, Zhiyong Wang, Peibo Duan, and Zongyuan Ge. 2025. HGCLIP: Exploring Vision-Language Models with Graph Representations for Hierarchical Understanding. In Proceedings of the 31st International Conference on Computational Linguistics, pages 269–280, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
HGCLIP: Exploring Vision-Language Models with Graph Representations for Hierarchical Understanding (Xia et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.19.pdf