%0 Conference Proceedings
%T Knowledge Transfer with Visual Prompt in multi-modal Dialogue Understanding and Generation
%A Zhu, Minjun
%A Weng, Yixuan
%A Li, Bin
%A He, Shizhu
%A Liu, Kang
%A Zhao, Jun
%Y Dernoncourt, Franck
%Y Nguyen, Thien Huu
%Y Lai, Viet Dac
%Y Veyseh, Amir Pouran Ben
%Y Bui, Trung H.
%Y Yoon, David Seunghyun
%S Proceedings of the First Workshop On Transcript Understanding
%D 2022
%8 October
%I International Conference on Computational Linguistics
%C Gyeongju, South Korea
%F zhu-etal-2022-knowledge-transfer
%X The Visual Dialogue (VD) task has recently received increasing attention in AI research. Visual Dialogue aims to generate multi-round, interactive responses based on the dialogue history and image content. Existing textual dialogue models cannot fully understand visual information, resulting in a lack of scene features when communicating with humans continuously. Therefore, how to efficiently fuse multi-modal data features remains a challenge. In this work, we propose a knowledge transfer method with visual prompts (VPTG) that fuses multi-modal data, a flexible module that enables a text-only seq2seq model to handle visual dialogue tasks. VPTG conducts text-image co-learning and multi-modal information fusion with visual prompts and visual knowledge distillation. Specifically, we construct visual prompts from visual representations and then induce sequence-to-sequence (seq2seq) models to fuse visual information and textual contexts via visual-text patterns. We also realize visual knowledge transfer through distillation between the text representations of two different models, so that the seq2seq model actively learns visual semantic representations. Extensive experiments on the multi-modal dialogue understanding and generation (MDUG) datasets show that the proposed VPTG outperforms other single-modal methods, which demonstrates the effectiveness of visual prompts and visual knowledge transfer.
%U https://aclanthology.org/2022.tu-1.2
%P 8-19