Xiaobao Guo
2023
Retrieving Multimodal Information for Augmented Generation: A Survey
Ruochen Zhao | Hailin Chen | Weishi Wang | Fangkai Jiao | Xuan Long Do | Chengwei Qin | Bosheng Ding | Xiaobao Guo | Minzhi Li | Xingxuan Li | Shafiq Joty
Findings of the Association for Computational Linguistics: EMNLP 2023
As Large Language Models (LLMs) become popular, an important trend has emerged of using multimodality to augment their generation ability, enabling LLMs to better interact with the world. However, there is no unified understanding of at which stage and how different modalities should be incorporated. In this survey, we review methods that assist and augment generative models by retrieving multimodal knowledge, whose formats range from images, code, tables, and graphs to audio. Such methods offer a promising solution to important concerns such as factuality, reasoning, interpretability, and robustness. By providing an in-depth review, this survey is expected to give scholars a deeper understanding of these methods' applications and encourage them to adapt existing techniques to the fast-growing field of LLMs.
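The abstract describes augmenting a generator by retrieving multimodal knowledge and conditioning generation on it. The toy sketch below illustrates only that retrieve-then-generate pattern; the embed function, the knowledge pool, and the prompt format are illustrative assumptions (a real system would use a multimodal encoder such as CLIP and an actual LLM), not a method taken from the survey.

```python
# Minimal, self-contained sketch of multimodal retrieval-augmented generation:
# embed a query and a pool of multimodal items (captions stand in for images,
# tables, and code), retrieve the nearest items, and prepend them to the prompt
# passed to a generator. Embedding and generation are toy stand-ins.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: hash-seeded random unit vector; a real system would call
    a multimodal encoder (e.g., CLIP) here."""
    vec = np.random.default_rng(abs(hash(text)) % (2**32)).normal(size=dim)
    return vec / np.linalg.norm(vec)

# Hypothetical knowledge pool: (modality, content or caption) pairs.
pool = [
    ("image", "Diagram of a transformer encoder block"),
    ("table", "Accuracy of retrieval-augmented vs. plain LLMs on QA benchmarks"),
    ("code",  "def attention(q, k, v): ..."),
]
pool_vecs = np.stack([embed(content) for _, content in pool])

def retrieve(query: str, k: int = 2):
    scores = pool_vecs @ embed(query)   # cosine similarity (unit vectors)
    top = np.argsort(-scores)[:k]
    return [pool[i] for i in top]

query = "How do retrieval-augmented LLMs use tables?"
context = "\n".join(f"[{m}] {c}" for m, c in retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
print(prompt)  # in a real pipeline, this prompt would be sent to the LLM
```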
2021
Unimodal and Crossmodal Refinement Network for Multimodal Sequence Fusion
Xiaobao Guo | Adams Kong | Huan Zhou | Xianfeng Wang | Min Wang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Effective unimodal representation and complementary crossmodal representation fusion are both important in multimodal representation learning. Prior works often modulate one modality's features with another directly, underutilizing both unimodal and crossmodal representation refinement and thereby creating a bottleneck for performance improvement. In this paper, the Unimodal and Crossmodal Refinement Network (UCRN) is proposed to enhance both unimodal and crossmodal representations. Specifically, to improve unimodal representations, a unimodal refinement module refines modality-specific learning by iteratively updating the distribution with transformer-based attention layers. Self-quality improvement layers then progressively generate the desired weighted representations. Subsequently, these unimodal representations are projected into a common latent space, regularized by a multimodal Jensen-Shannon divergence loss for better crossmodal refinement. Lastly, a crossmodal refinement module integrates all information. Through hierarchical exploration of unimodal, bimodal, and trimodal interactions, UCRN is highly robust against missing modalities and noisy data. Experimental results on the MOSI and MOSEI datasets show that the proposed UCRN outperforms recent state-of-the-art techniques, and its robustness makes it well suited to real multimodal sequence fusion scenarios. Code will be shared publicly.
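As a concrete illustration of one component the abstract names, the sketch below shows a multimodal Jensen-Shannon divergence loss over modality projections in a shared latent space. It is a minimal sketch under assumed shapes and names (multimodal_js_divergence, the softmax normalization, the toy tensors) and is not the authors' UCRN implementation.

```python
# Hedged sketch: Jensen-Shannon divergence among K modality projections,
# treating each projection as unnormalized logits over shared latent dims
# (an assumption made for this example).
import torch
import torch.nn.functional as F

def multimodal_js_divergence(projections):
    """projections: list of (batch, dim) tensors, one per modality."""
    # Turn each modality's projection into a probability distribution.
    probs = [F.softmax(p, dim=-1) for p in projections]
    mixture = torch.stack(probs, dim=0).mean(dim=0)  # average distribution M

    # JSD = mean_k KL(P_k || M); F.kl_div expects log-probs as its first arg.
    jsd = sum(
        F.kl_div(mixture.log(), p, reduction="batchmean") for p in probs
    ) / len(probs)
    return jsd

# Example: text, audio, and vision features projected to a 128-d latent space.
text_z, audio_z, vision_z = (torch.randn(8, 128) for _ in range(3))
loss = multimodal_js_divergence([text_z, audio_z, vision_z])
```

Minimizing this term pulls the per-modality distributions toward their mixture, matching the abstract's stated goal of regularizing the common latent space before crossmodal refinement.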