Agent Lumos: Unified and Modular Training for Open-Source Language Agents

Da Yin, Faeze Brahman, Abhilasha Ravichander, Khyathi Chandu, Kai-Wei Chang, Yejin Choi, Bill Yuchen Lin


Abstract
Closed-source agents suffer from several issues, such as a lack of affordability, transparency, and reproducibility, particularly on complex interactive tasks. This motivates the development of open-source alternatives. We introduce Lumos, one of the first frameworks for training open-source LLM-based agents. Lumos features a learnable, unified, and modular architecture with a planning module that learns high-level subgoal generation and a grounding module trained to translate these subgoals into actions, carried out using various tools in the execution module. This design allows for modular upgrades and wider applicability to diverse interactive tasks. To foster generalizable agent learning, we collect large-scale, unified, and high-quality training annotations derived from diverse ground-truth reasoning rationales across various complex interactive tasks. On 9 datasets, Lumos exhibits several key advantages: (1) Lumos outperforms multiple larger open-source agents on the held-out datasets (unused for training) for each task type, and even surpasses GPT agents on QA and web tasks; (2) Lumos outperforms open-source agents produced by chain-of-thought training and unmodularized integrated training; and (3) Lumos effectively generalizes to unseen tasks, outperforming 33B-scale agents and domain-specific agents. Code and data will be released.
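To make the plan, ground, and execute decomposition concrete, the following is a minimal Python sketch of such a pipeline. It is an illustration only, not the authors' implementation: in Lumos, the planning and grounding steps are separately trained LLMs, whereas the subgoal templates, tool set, and function names below are hypothetical stand-ins.

```python
# Minimal sketch of a Lumos-style plan -> ground -> execute loop.
# Hypothetical illustration: in the paper, plan() and ground() are
# trained LLM modules, not the rule-based stand-ins shown here.

from typing import Callable, Dict, List


def plan(task: str) -> List[str]:
    """Planning module: decompose a task into high-level subgoals.
    Hard-coded here for illustration; an LLM in the actual system."""
    return [
        f"Search for information relevant to: {task}",
        "Extract the answer from the retrieved information",
    ]


def ground(subgoal: str) -> Dict[str, str]:
    """Grounding module: translate a subgoal into a concrete tool
    action. A second LLM in the actual system."""
    if subgoal.startswith("Search"):
        return {"tool": "search", "arg": subgoal.split(": ", 1)[1]}
    return {"tool": "answer", "arg": subgoal}


# Execution module: a registry of tools that grounded actions invoke.
# The tools here are toy placeholders.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda q: f"[retrieved passages about '{q}']",
    "answer": lambda ctx: f"[answer extracted from: {ctx}]",
}


def run_agent(task: str) -> str:
    """Full pipeline: plan subgoals, ground each into an action,
    execute it, and return the final observation."""
    observation = ""
    for subgoal in plan(task):
        action = ground(subgoal)
        observation = TOOLS[action["tool"]](action["arg"])
    return observation


if __name__ == "__main__":
    print(run_agent("Who wrote 'The Left Hand of Darkness'?"))
```

Because the three modules communicate only through text (subgoals and actions), each can be upgraded or retrained independently, which is the modularity the abstract highlights.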
Anthology ID:
2024.acl-long.670
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
12380–12403
URL:
https://aclanthology.org/2024.acl-long.670
Cite (ACL):
Da Yin, Faeze Brahman, Abhilasha Ravichander, Khyathi Chandu, Kai-Wei Chang, Yejin Choi, and Bill Yuchen Lin. 2024. Agent Lumos: Unified and Modular Training for Open-Source Language Agents. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12380–12403, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Agent Lumos: Unified and Modular Training for Open-Source Language Agents (Yin et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.670.pdf