2024
MERLIN: Multimodal Embedding Refinement via LLM-based Iterative Navigation for Text-Video Retrieval-Rerank Pipeline
Donghoon Han | Eunhwan Park | Gisang Lee | Adam Lee | Nojun Kwak
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
The rapid expansion of multimedia content has made accurately retrieving relevant videos from large collections increasingly challenging. Recent advancements in text-video retrieval have focused on cross-modal interactions, large-scale foundation model training, and probabilistic modeling, yet often neglect the crucial user perspective, leading to discrepancies between user queries and the content retrieved. To address this, we introduce MERLIN (Multimodal Embedding Refinement via LLM-based Iterative Navigation), a novel, training-free pipeline that leverages Large Language Models (LLMs) for iterative feedback learning. MERLIN refines query embeddings from a user perspective, enhancing alignment between queries and video content through a dynamic question answering process. Experimental results on datasets like MSR-VTT, MSVD, and ActivityNet demonstrate that MERLIN substantially improves Recall@1, outperforming existing systems and confirming the benefits of integrating LLMs into multimodal retrieval systems for more responsive and context-aware multimedia retrieval.
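The training-free pipeline described in the abstract can be pictured as a small retrieve-and-rerank loop. The Python sketch below is illustrative only and is not the authors' implementation: the helpers encode_text, ask_clarifying_question, and answer_question are hypothetical placeholders for a text-video encoder, the LLM that poses questions about the current candidates, and the user (or simulated user) who answers them.

```python
# Illustrative MERLIN-style retrieve-and-rerank loop (not the authors' code):
# an LLM asks a clarifying question about the current top-ranked videos, the
# answer is embedded, and the query embedding is nudged toward it before
# re-ranking. All helper callables are hypothetical placeholders.
import numpy as np

def cosine_rank(query_emb: np.ndarray, video_embs: np.ndarray) -> np.ndarray:
    """Return video indices sorted by cosine similarity to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    return np.argsort(-(v @ q))

def refine_query(query: str, video_embs: np.ndarray,
                 encode_text, ask_clarifying_question, answer_question,
                 rounds: int = 3, alpha: float = 0.5) -> np.ndarray:
    """Iteratively refine the query embedding via LLM question answering."""
    q_emb = encode_text(query)
    for _ in range(rounds):
        ranking = cosine_rank(q_emb, video_embs)
        top_k = ranking[:5]
        # The LLM inspects the candidates and asks whatever question would
        # best disambiguate the user's intent.
        question = ask_clarifying_question(query, top_k)
        answer = answer_question(question)  # user or simulated-user reply
        # Blend the answer embedding into the query embedding (training-free).
        q_emb = (1 - alpha) * q_emb + alpha * encode_text(answer)
    return q_emb
```

The mixing weight alpha and the number of feedback rounds are illustrative knobs, not values reported in the paper.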
Towards Efficient Visual-Language Alignment of the Q-Former for Visual Reasoning Tasks
Sungkyung Kim | Adam Lee | Junyoung Park | Andrew Chung | Jusang Oh | Jay-Yoon Lee
Findings of the Association for Computational Linguistics: EMNLP 2024
Recent advancements in large language models have demonstrated enhanced capabilities in visual reasoning tasks by employing additional encoders for aligning different modalities. While the Q-Former has been widely used as a general encoder for aligning several modalities including image, video, audio, and 3D with large language models, previous works on its efficient training and the analysis of its individual components have been limited. In this work, we investigate the effectiveness of parameter-efficient fine-tuning (PEFT) of the Q-Former using InstructBLIP with the visual reasoning benchmarks ScienceQA and IconQA. We observe that applying PEFT to the Q-Former achieves performance comparable to full fine-tuning while using under 2% of the trainable parameters. Additionally, we employ AdaLoRA for dynamic parameter budget reallocation to examine the relative importance of the Q-Former’s sublayers on four different benchmarks. Our findings reveal that the self-attention layers are noticeably more important in perceptual visual-language reasoning tasks, and the relative importance of the FFN layers depends on the complexity of the visual-language patterns involved in the tasks. The code is available at https://github.com/AttentionX/InstructBLIP_PEFT.
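As a rough illustration of the setup the abstract describes, the sketch below attaches LoRA adapters only to the Q-Former of InstructBLIP using the HuggingFace peft library, keeping the vision encoder and language model frozen. It is a minimal sketch, not the paper's code; the target module suffixes ("query", "value") are assumptions about the Q-Former's attention sublayer names in the transformers implementation.

```python
# Minimal sketch (not the authors' code): parameter-efficient fine-tuning of
# only the Q-Former in InstructBLIP via LoRA adapters from HuggingFace peft.
from transformers import InstructBlipForConditionalGeneration, InstructBlipProcessor
from peft import LoraConfig, get_peft_model

model_name = "Salesforce/instructblip-flan-t5-xl"
processor = InstructBlipProcessor.from_pretrained(model_name)
model = InstructBlipForConditionalGeneration.from_pretrained(model_name)

# Freeze the full model; only the LoRA adapters added below will be trained.
for p in model.parameters():
    p.requires_grad = False

lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    # Restrict adaptation to the Q-Former's self-attention projections; these
    # module-name suffixes are assumptions about the underlying implementation.
    target_modules=["query", "value"],
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # a small fraction of the full parameter count
```

An AdaLoRA variant of this sketch would swap LoraConfig for peft's AdaLoraConfig and update the rank budget during training; the paper uses that reallocation to compare the importance of the Q-Former's sublayers.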
2011
Cross-lingual Slot Filling from Comparable Corpora
Matthew Snover | Xiang Li | Wen-Pin Lin | Zheng Chen | Suzanne Tamang | Mingmin Ge | Adam Lee | Qi Li | Hao Li | Sam Anzaroot | Heng Ji
Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web
2010
Enhancing Multi-lingual Information Extraction via Cross-Media Inference and Fusion
Adam Lee | Marissa Passantino | Heng Ji | Guojun Qi | Thomas Huang
Coling 2010: Posters