Minh Hoai
2023
Text-Derived Knowledge Helps Vision: A Simple Cross-modal Distillation for Video-based Action Anticipation
Sayontan Ghosh | Tanvi Aggarwal | Minh Hoai | Niranjan Balasubramanian
Findings of the Association for Computational Linguistics: EACL 2023
Anticipating future actions in a video is useful for many autonomous and assistive technologies. Prior action anticipation work mostly treats this as a vision-modality problem, where models learn the task primarily from video features in action anticipation datasets. However, knowledge about action sequences can also be obtained from external textual data. In this work, we show how knowledge in pretrained language models can be adapted and distilled into vision-based action anticipation models. We show that a simple distillation technique achieves effective knowledge transfer and provides consistent gains over a strong vision model (Anticipative Vision Transformer) on two action anticipation datasets (a 3.5% relative gain on EGTEA-GAZE+ and a 7.2% relative gain on EPIC-KITCHENS-55), giving a new state-of-the-art result.
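The abstract describes distilling a text-derived teacher into a vision student but does not spell out the loss. As a rough illustration only, here is a minimal sketch of standard soft-target distillation in PyTorch; the function name, the temperature, and the mixing weight `alpha` are hypothetical choices, not the paper's exact recipe.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft next-action targets from the (frozen) text-based teacher.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term pulls the vision student's distribution toward the teacher's;
    # the T^2 factor keeps gradients comparable across temperature settings.
    kl = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2
    # Standard cross-entropy on the ground-truth future-action labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kl + (1.0 - alpha) * ce
```

In this sketch, any teacher producing logits over the same action vocabulary as the student would fit; in the paper the teacher's knowledge comes from a pretrained language model adapted to action sequences.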
2020
Structural and Functional Decomposition for Personality Image Captioning in a Communication Game
Minh Thu Nguyen | Duy Phung | Minh Hoai | Thien Huu Nguyen
Findings of the Association for Computational Linguistics: EMNLP 2020
Personality image captioning (PIC) aims to describe an image with a natural language caption that reflects a given personality trait. In this work, we introduce a novel formulation for PIC based on a communication game between a speaker and a listener. The speaker generates natural language captions, while the listener encourages the generated captions to contain discriminative information about the input images and personality traits. In this way, the generated captions are pushed to naturally represent the images and express the traits. In addition, we adapt the language model GPT-2 to perform caption generation for PIC, which lets both the speaker and listener benefit from GPT-2's language encoding capacity. Our experiments show that the proposed model achieves state-of-the-art performance for PIC.
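The listener rewards captions that discriminate the input image and trait, but the abstract does not give its objective. The following is only a generic contrastive stand-in (an InfoNCE-style batch loss); the function name, embedding inputs, and temperature value are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def listener_loss(caption_emb, image_emb, temperature=0.07):
    # Assumed setup: the listener embeds each generated caption and each
    # input image; a caption should match its own image, not the others.
    caption_emb = F.normalize(caption_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = caption_emb @ image_emb.t() / temperature  # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Cross-entropy over the batch: each caption's positive is its own image.
    return F.cross_entropy(logits, targets)
```

Under the same assumptions, a trait embedding could be fused into `image_emb` so the listener also checks that captions express the given personality trait.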