Yuxin Zuo
2024
KnowCoder: Coding Structured Knowledge into LLMs for Universal Information Extraction
Zixuan Li | Yutao Zeng | Yuxin Zuo | Weicheng Ren | Wenxuan Liu | Miao Su | Yucan Guo | Yantao Liu | Xiang Li | Zhilei Hu | Long Bai | Wei Li | Yidan Liu | Pan Yang | Xiaolong Jin | Jiafeng Guo | Xueqi Cheng
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2023
Incorporating Probing Signals into Multimodal Machine Translation via Visual Question-Answering Pairs
Yuxin Zuo | Bei Li | Chuanhao Lv | Tong Zheng | Tong Xiao | Jingbo Zhu
Findings of the Association for Computational Linguistics: EMNLP 2023
This paper presents an in-depth study of multimodal machine translation (MMT), examining the prevailing understanding that MMT systems exhibit decreased sensitivity to visual information when text inputs are complete. We attribute this phenomenon to insufficient cross-modal interaction rather than to image information redundancy. We propose a novel approach that generates parallel Visual Question-Answering (VQA) style pairs from the source text, fostering more robust cross-modal interaction. Using Large Language Models (LLMs), we explicitly model the probing signal in MMT, converting it into VQA-style data to create the Multi30K-VQA dataset. We further introduce an MMT-VQA multitask learning framework that incorporates these explicit probing signals into the MMT training process. Experimental results on two widely used benchmarks demonstrate the effectiveness of this approach. Our code and data are available at: https://github.com/libeineu/MMT-VQA.
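As a rough illustration of the multitask objective the abstract describes, the sketch below combines a token-level translation loss with a VQA-style answer loss under a single weighted sum. The class name, tensor shapes, padding index, and the weight `lam` are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
import torch.nn as nn

# Hypothetical sketch of an MMT-VQA multitask objective: the model is
# trained jointly on translation and on VQA-style probing pairs, and the
# two cross-entropy losses are combined with a weighting factor `lam`.
class MMTVQALoss(nn.Module):
    def __init__(self, lam: float = 0.5, pad_id: int = 0):
        super().__init__()
        self.lam = lam
        # Ignore padding positions in both tasks (pad id is an assumption)
        self.ce = nn.CrossEntropyLoss(ignore_index=pad_id)

    def forward(self, mmt_logits, mmt_targets, vqa_logits, vqa_targets):
        # Translation loss over target-token logits: (batch * len, vocab)
        l_mmt = self.ce(mmt_logits.view(-1, mmt_logits.size(-1)),
                        mmt_targets.view(-1))
        # VQA answer loss, same token-level form over answer tokens
        l_vqa = self.ce(vqa_logits.view(-1, vqa_logits.size(-1)),
                        vqa_targets.view(-1))
        return l_mmt + self.lam * l_vqa
```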