By My Eyes: Grounding Multimodal Large Language Models with Sensor Data via Visual Prompting

Hyungjun Yoon, Biniyam Tolera, Taesik Gong, Kimin Lee, Sung-Ju Lee


Abstract
Large language models (LLMs) have demonstrated exceptional abilities across various domains. However, utilizing LLMs for ubiquitous sensing applications remains challenging as existing text-prompt methods show significant performance degradation when handling long sensor data sequences. In this paper, we propose a visual prompting approach for sensor data using multimodal LLMs (MLLMs). Specifically, we design a visual prompt that directs MLLMs to utilize visualized sensor data alongside descriptions of the target sensory task. Additionally, we introduce a visualization generator that automates the creation of optimal visualizations tailored to a given sensory task, eliminating the need for prior task-specific knowledge. We evaluated our approach on nine sensory tasks involving four sensing modalities, achieving an average of 10% higher accuracy compared to text-based prompts and reducing token costs by 15.8 times. Our findings highlight the effectiveness and cost-efficiency of using visual prompts with MLLMs for various sensory tasks. The source code is available at https://github.com/diamond264/ByMyEyes.
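To make the prompting pipeline concrete, below is a minimal sketch of the idea the abstract describes: render a window of sensor data as a plot, then send that image together with a text description of the sensory task to a vision-capable LLM. This is an illustrative assumption, not the authors' released implementation (see the Software link below for that): the function names, the synthetic accelerometer signal, the activity labels, and the choice of the OpenAI chat completions API and model are all hypothetical stand-ins.

```python
# Hedged sketch of visual prompting for sensor data with an MLLM.
# All names, the synthetic signal, and the API/model choice are
# illustrative assumptions, not the paper's actual pipeline.
import base64
import io

import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np
from openai import OpenAI


def plot_sensor_window(signal: np.ndarray, sampling_rate: int) -> bytes:
    """Render a 1-D sensor window (e.g., accelerometer magnitude) to PNG bytes."""
    t = np.arange(len(signal)) / sampling_rate
    fig, ax = plt.subplots(figsize=(6, 2.5))
    ax.plot(t, signal, linewidth=0.8)
    ax.set_xlabel("time (s)")
    ax.set_ylabel("acceleration (m/s^2)")
    buf = io.BytesIO()
    fig.savefig(buf, format="png", dpi=150, bbox_inches="tight")
    plt.close(fig)
    return buf.getvalue()


def classify_activity(signal: np.ndarray, sampling_rate: int) -> str:
    """Send the visualization plus a task description to a vision-capable LLM."""
    png = plot_sensor_window(signal, sampling_rate)
    image_url = "data:image/png;base64," + base64.b64encode(png).decode()
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model; illustrative choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("The image shows a 3-second accelerometer trace. "
                          "Classify the activity as one of: walking, running, "
                          "sitting. Answer with the label only.")},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 50  # Hz
    t = np.arange(3 * fs) / fs
    # Synthetic "walking-like" trace: ~2 Hz oscillation around gravity plus noise
    signal = 9.8 + 2.0 * np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.standard_normal(t.size)
    print(classify_activity(signal, fs))
```

Note the cost intuition behind the approach: a three-second window at 50 Hz becomes a single image here rather than 150 numbers serialized as text tokens, which illustrates the kind of token-cost saving the abstract quantifies.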
Anthology ID:
2024.emnlp-main.133
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2219–2241
URL:
https://aclanthology.org/2024.emnlp-main.133
Cite (ACL):
Hyungjun Yoon, Biniyam Tolera, Taesik Gong, Kimin Lee, and Sung-Ju Lee. 2024. By My Eyes: Grounding Multimodal Large Language Models with Sensor Data via Visual Prompting. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 2219–2241, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
By My Eyes: Grounding Multimodal Large Language Models with Sensor Data via Visual Prompting (Yoon et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.133.pdf
Software:
2024.emnlp-main.133.software.zip