3D-Aware Vision-Language Models Fine-Tuning with Geometric Distillation

Seonho Lee, Jiho Choi, Inha Kang, Jiwook Kim, Junsung Park, Hyunjung Shim


Abstract
Vision-Language Models (VLMs) have shown remarkable performance on diverse visual and linguistic tasks, yet they remain fundamentally limited in their understanding of 3D spatial structures. We propose Geometric Distillation, a lightweight, annotation-free fine-tuning framework that injects human-inspired geometric cues into pretrained VLMs without modifying their architecture. By distilling (1) sparse correspondences, (2) relative depth relations, and (3) dense cost volumes from off-the-shelf 3D foundation models (e.g., MASt3R, VGGT), our method shapes representations to be geometry-aware while remaining compatible with natural image–text inputs. Through extensive evaluations on 3D vision-language reasoning and 3D perception benchmarks, our method consistently outperforms prior approaches, achieving improved 3D spatial reasoning at significantly lower computational cost. Our work demonstrates a scalable and efficient path to bridging 2D-trained VLMs with 3D understanding, opening the way to wider use in spatially grounded multimodal tasks.
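
The abstract names three geometric cues distilled from 3D foundation models into a pretrained VLM. As a rough, non-authoritative illustration of what such distillation objectives could look like — not the authors' method, whose exact losses are given in the paper — the following PyTorch-style sketch shows one plausible form for each cue; every function name, tensor shape, and loss choice below is an assumption.

import torch
import torch.nn.functional as F

def correspondence_loss(feats_a, feats_b, matches):
    # (1) Sparse correspondences: align student features at pixel pairs that a
    # teacher such as MASt3R declares matched. feats_* are (N, D) features
    # sampled at those pixels; matches is an (M, 2) index tensor (assumed shapes).
    fa = F.normalize(feats_a[matches[:, 0]], dim=-1)
    fb = F.normalize(feats_b[matches[:, 1]], dim=-1)
    return (1.0 - (fa * fb).sum(dim=-1)).mean()  # mean cosine distance

def relative_depth_loss(pred_depth, teacher_depth, pairs, margin=0.05):
    # (2) Relative depth relations: a ranking hinge that preserves the teacher's
    # closer/farther ordering over sampled pixel pairs, not absolute depths.
    di, dj = pred_depth[pairs[:, 0]], pred_depth[pairs[:, 1]]
    order = torch.sign(teacher_depth[pairs[:, 0]] - teacher_depth[pairs[:, 1]])
    return F.relu(margin - order * (di - dj)).mean()

def cost_volume_loss(student_cost, teacher_cost, tau=0.1):
    # (3) Dense cost volumes: pull the student's dense matching distribution
    # toward the teacher's via KL divergence over temperature-softened rows.
    return F.kl_div(
        F.log_softmax(student_cost / tau, dim=-1),
        F.softmax(teacher_cost / tau, dim=-1),
        reduction="batchmean",
    )

A fine-tuning loop would presumably minimize a weighted sum of such terms alongside the VLM's original objective, with the frozen 3D teacher supplying matches, teacher_depth, and teacher_cost.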
Anthology ID: 2025.findings-emnlp.562
Volume: Findings of the Association for Computational Linguistics: EMNLP 2025
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 10628–10647
URL: https://aclanthology.org/2025.findings-emnlp.562/
Cite (ACL): Seonho Lee, Jiho Choi, Inha Kang, Jiwook Kim, Junsung Park, and Hyunjung Shim. 2025. 3D-Aware Vision-Language Models Fine-Tuning with Geometric Distillation. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 10628–10647, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): 3D-Aware Vision-Language Models Fine-Tuning with Geometric Distillation (Lee et al., Findings 2025)
PDF: https://aclanthology.org/2025.findings-emnlp.562.pdf
Checklist: 2025.findings-emnlp.562.checklist.pdf