3DRP-Net: 3D Relative Position-aware Network for 3D Visual Grounding

Zehan Wang, Haifeng Huang, Yang Zhao, Linjun Li, Xize Cheng, Yichen Zhu, Aoxiong Yin, Zhou Zhao


Abstract
3D visual grounding aims to localize a target object in a 3D point cloud given a free-form language description. Typically, sentences describing the target object provide information about its relative relations to other objects and its position within the whole scene. In this work, we propose a relation-aware one-stage framework, named 3D Relative Position-aware Network (3DRP-Net), which can effectively capture the relative spatial relationships between objects and enhance object attributes. Specifically, 1) we propose a 3D Relative Position Multi-head Attention (3DRP-MA) module to analyze relative relations from different directions in the context of object pairs, which helps the model focus on the specific object relations mentioned in the sentence. 2) We design a soft-labeling strategy to alleviate the spatial ambiguity caused by redundant points, which further stabilizes and enhances the learning process through a constant and discriminative distribution. Extensive experiments conducted on three benchmarks (i.e., ScanRefer and Nr3D/Sr3D) demonstrate that our method generally outperforms state-of-the-art methods.
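The core idea of injecting pairwise relative-position information into attention logits can be sketched in a few lines of NumPy. This is a hypothetical, minimal illustration of the general technique, not the paper's exact 3DRP-MA formulation: here each pairwise 3D offset between object centers is projected to a per-head scalar bias (via an assumed weight matrix `wr`) that is added to the standard scaled dot-product logits.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relative_position_attention(feats, centers, wq, wk, wv, wr, n_heads):
    """Toy multi-head attention with a 3D relative-position bias.

    feats:   (n, d) object features
    centers: (n, 3) object center coordinates
    wq/wk/wv: (d, d) projection weights; wr: (3, n_heads) offset projection
    (All names here are illustrative assumptions, not the paper's API.)
    """
    n, d = feats.shape
    dh = d // n_heads
    q = (feats @ wq).reshape(n, n_heads, dh)
    k = (feats @ wk).reshape(n, n_heads, dh)
    v = (feats @ wv).reshape(n, n_heads, dh)
    # Pairwise offsets along x/y/z between every object pair: (n, n, 3).
    rel = centers[None, :, :] - centers[:, None, :]
    # Project each offset to one scalar bias per head: (n, n, n_heads).
    bias = rel @ wr
    # Scaled dot-product logits plus the relative-position bias.
    logits = np.einsum('ihd,jhd->ijh', q, k) / np.sqrt(dh) + bias
    attn = softmax(logits, axis=1)           # normalize over keys j
    out = np.einsum('ijh,jhd->ihd', attn, v).reshape(n, d)
    return out, attn
```

Because the bias depends on directional offsets rather than only distances, different heads can specialize in different spatial directions (e.g., "left of", "above"), which is the intuition the abstract describes.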
Anthology ID:
2023.emnlp-main.656
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
10612–10625
URL:
https://aclanthology.org/2023.emnlp-main.656
DOI:
10.18653/v1/2023.emnlp-main.656
Cite (ACL):
Zehan Wang, Haifeng Huang, Yang Zhao, Linjun Li, Xize Cheng, Yichen Zhu, Aoxiong Yin, and Zhou Zhao. 2023. 3DRP-Net: 3D Relative Position-aware Network for 3D Visual Grounding. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10612–10625, Singapore. Association for Computational Linguistics.
Cite (Informal):
3DRP-Net: 3D Relative Position-aware Network for 3D Visual Grounding (Wang et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.656.pdf
Video:
https://aclanthology.org/2023.emnlp-main.656.mp4