Botian Shi
2021
Hashing based Efficient Inference for Image-Text Matching
Rong-Cheng Tu | Lei Ji | Huaishao Luo | Botian Shi | Heyan Huang | Nan Duan | Xian-Ling Mao
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2020
A Benchmark for Structured Procedural Knowledge Extraction from Cooking Videos
Frank F. Xu | Lei Ji | Botian Shi | Junyi Du | Graham Neubig | Yonatan Bisk | Nan Duan
Proceedings of the First International Workshop on Natural Language Processing Beyond Text
Instructional videos are often watched to learn about procedures. Video captioning is one way of automatically collecting such knowledge. However, it provides only an indirect, overall evaluation of multimodal models, with no finer-grained quantitative measure of what they have learned. We instead propose a benchmark of structured procedural knowledge extracted from cooking videos. This work is complementary to existing tasks but requires models to produce interpretable structured knowledge in the form of verb-argument tuples. Our manually annotated open-vocabulary resource includes 356 instructional cooking videos and 15,523 video clip/sentence-level annotations. Our analysis shows that the proposed task is challenging and that standard modeling approaches such as unsupervised segmentation, semantic role labeling, and visual action detection perform poorly when forced to predict every action of a procedure in a structured form.
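To make the annotation format concrete, here is a minimal sketch in Python of what verb-argument tuples for a single clip might look like; the schema name and the specific verbs and arguments are hypothetical illustrations, not taken from the benchmark itself.

```python
# Illustrative only: the type alias and example values below are
# hypothetical, showing the kind of verb-argument annotation described.
from typing import List, Tuple

# Each clip/sentence is annotated with (verb, [arguments]) tuples.
ClipAnnotation = List[Tuple[str, List[str]]]

clip_actions: ClipAnnotation = [
    ("preheat", ["oven", "to 350 degrees"]),
    ("mix", ["flour", "sugar", "in a bowl"]),
    ("pour", ["batter", "into the pan"]),
]

for verb, args in clip_actions:
    print(f"{verb}({', '.join(args)})")
```

A structured format like this is what allows finer-grained, per-action evaluation than a free-form caption.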
2019
Dense Procedure Captioning in Narrated Instructional Videos
Botian Shi | Lei Ji | Yaobo Liang | Nan Duan | Peng Chen | Zhendong Niu | Ming Zhou
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Understanding narrated instructional videos is important for both research and real-world web applications. Motivated by video dense captioning, we propose a model that generates procedure captions from narrated instructional videos, which consist of sequences of step-wise clips with descriptions. Previous work on video dense captioning learns video segments and generates captions without considering transcripts. We argue that transcripts in narrated instructional videos can enhance video representation by providing fine-grained, complementary semantic textual information. In this paper, we introduce a framework that (1) extracts procedures with a cross-modality module, which fuses video content with the entire transcript; and (2) generates captions by encoding video frames as well as a snippet of the transcript within each extracted procedure. Experiments show that our model achieves state-of-the-art performance in procedure extraction and captioning, and ablation studies demonstrate that both video frames and transcripts are important for the task.
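As a rough intuition for what "fusing video content with the transcript" can mean, here is a minimal cross-modal fusion sketch in PyTorch, where video frame features attend over transcript token features. All module names, dimensions, and the attention-based design are assumptions for illustration, not the paper's actual architecture.

```python
# Illustrative only: a generic cross-modal fusion sketch, not the
# authors' implementation. Dimensions and encoders are hypothetical.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Fuse per-frame video features with transcript features via attention."""
    def __init__(self, video_dim=2048, text_dim=768, hidden_dim=512):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=8,
                                          batch_first=True)

    def forward(self, video_feats, text_feats):
        # video_feats: (batch, n_frames, video_dim)
        # text_feats:  (batch, n_tokens, text_dim)
        q = self.video_proj(video_feats)   # queries from video frames
        kv = self.text_proj(text_feats)    # keys/values from the transcript
        fused, _ = self.attn(q, kv, kv)    # each frame attends over the text
        return fused                       # (batch, n_frames, hidden_dim)

# Usage with random tensors standing in for real encoder outputs:
video = torch.randn(2, 16, 2048)   # e.g., CNN frame features
text = torch.randn(2, 64, 768)     # e.g., transcript token embeddings
out = CrossModalFusion()(video, text)
print(out.shape)  # torch.Size([2, 16, 512])
```

The fused per-frame representations could then feed a downstream head for procedure boundary prediction or caption decoding, which is the general shape of the two-stage framework the abstract describes.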
Co-authors
- Lei Ji 3
- Nan Duan 3
- Yaobo Liang 1
- Peng Chen 1
- Zhendong Niu 1