Learning to Focus on the Foreground for Temporal Sentence Grounding

Daizong Liu, Wei Hu


Abstract
Temporal sentence grounding (TSG) is crucial and fundamental for video understanding. Previous works typically model the target activity referred to by the sentence query by extracting appearance information from each whole frame. However, these methods fail to distinguish visually similar background noise and to capture subtle details of small objects. Although a few recent works additionally adopt a detection model to filter out background contents and capture local appearances of foreground objects, they rely on the quality of the detection model and suffer from a time-consuming detection process. To this end, we propose a novel detection-free framework for TSG, Grounding with Learnable Foreground (GLF), which efficiently learns to locate the query-relevant foreground regions in consecutive frames to better model the target activity. Specifically, we first split each video frame into multiple equal-size patch candidates and reformulate foreground detection as a patch localization task. Then, we develop a self-supervised coarse-to-fine paradigm that learns to locate the most query-relevant patch in each frame and aggregates these patches across the video for final grounding. Further, we employ a multi-scale patch reasoning strategy to capture more fine-grained foreground information. Extensive experiments on three challenging datasets (Charades-STA, TACoS, ActivityNet) show that the proposed GLF outperforms state-of-the-art methods.
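The patch-localization idea described in the abstract can be sketched roughly as follows. This is a minimal illustrative example, not the paper's actual implementation: the function names, the use of non-overlapping square patches, and the cosine-similarity scoring against a query embedding are all assumptions for exposition.

```python
import numpy as np

def split_into_patches(frame, patch_size):
    """Split an H x W x C frame into non-overlapping square patches
    of equal size (illustrative stand-in for the patch candidates)."""
    h, w, c = frame.shape
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(frame[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)  # (num_patches, patch_size, patch_size, C)

def select_query_relevant_patch(patch_feats, query_feat):
    """Pick the patch whose feature vector is most similar to the
    query embedding (cosine similarity; a hypothetical scoring rule)."""
    p = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    scores = p @ q
    return int(np.argmax(scores)), scores
```

In the paper, patch features and the query embedding would come from learned video and text encoders, and selection is trained with the self-supervised coarse-to-fine paradigm rather than a fixed similarity rule; the sketch only shows the data flow of "split, score, select" per frame.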
Anthology ID:
2022.coling-1.490
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
5532–5541
URL:
https://aclanthology.org/2022.coling-1.490
Cite (ACL):
Daizong Liu and Wei Hu. 2022. Learning to Focus on the Foreground for Temporal Sentence Grounding. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5532–5541, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Learning to Focus on the Foreground for Temporal Sentence Grounding (Liu & Hu, COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.490.pdf