Autonomous Workflow for Multimodal Fine-Grained Training Assistants Towards Mixed Reality
Jiahuan Pei | Irene Viola | Haochen Huang | Junxiao Wang | Moonisa Ahsan | Fanghua Ye | Jiang Yiming | Yao Sai | Di Wang | Zhumin Chen | Pengjie Ren | Pablo Cesar
Findings of the Association for Computational Linguistics: ACL 2024
Autonomous artificial intelligence (AI) agents have emerged as promising protocols for automatically understanding the language-based environment, particularly with the exponential development of large language models (LLMs). However, a fine-grained, comprehensive understanding of multimodal environments remains under-explored. This work designs an autonomous workflow tailored for integrating AI agents seamlessly into extended reality (XR) applications for fine-grained training. We present a demonstration of a multimodal fine-grained training assistant for LEGO brick assembly in a pilot XR environment. Specifically, we design a cerebral language agent that integrates an LLM with memory, planning, and interaction with XR tools and a vision-language agent, enabling agents to decide their actions based on past experiences. Furthermore, we introduce LEGO-MRTA, a multimodal fine-grained assembly dialogue dataset synthesized automatically in the workflow, served by a commercial LLM. This dataset comprises multimodal instruction manuals, conversations, XR responses, and vision question answering. Finally, we present several prevailing open-source LLMs as benchmarks, assessing their performance with and without fine-tuning on the proposed dataset. We anticipate that the broader impact of this workflow will advance the development of smarter assistants for seamless user interaction in XR environments, fostering research in both the AI and HCI communities.
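The cerebral language agent described above (an LLM combined with memory, planning, and dispatch to XR tools or a vision-language agent) could be sketched roughly as follows. This is a minimal illustrative outline, not the authors' implementation; the names `CerebralLanguageAgent`, `call_llm`, `xr_highlight`, and `vqa` are assumed placeholders.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

def call_llm(prompt: str) -> str:
    """Stub standing in for a commercial LLM; returns a canned next action."""
    return "xr_highlight: brick_2x4_red"

@dataclass
class CerebralLanguageAgent:
    """Minimal agent loop: memory + planning + tool/vision-agent dispatch."""
    tools: Dict[str, Callable[[str], str]]
    memory: List[str] = field(default_factory=list)

    def step(self, user_utterance: str) -> str:
        # Plan the next action, conditioning on past experiences kept in memory.
        prompt = "\n".join(self.memory + [f"User: {user_utterance}", "Next action:"])
        action = call_llm(prompt)
        tool_name, _, argument = action.partition(": ")
        # Dispatch to an XR tool or the vision-language agent; otherwise reply in text.
        result = self.tools.get(tool_name, lambda arg: f"say: {arg}")(argument.strip())
        self.memory.extend([f"User: {user_utterance}", f"Agent: {result}"])
        return result

# Example wiring with toy tools for a single LEGO-assembly turn.
agent = CerebralLanguageAgent(tools={
    "xr_highlight": lambda brick: f"XR response: highlighted {brick}",
    "vqa": lambda question: f"VLM answer to: {question}",
})
print(agent.step("Which brick do I attach next?"))
```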