Suhong Moon
2024
Virtual Personas for Language Models via an Anthology of Backstories
Suhong Moon | Marwa Abdulhai | Minwoo Kang | Joseph Suh | Widyadewi Soedarmadji | Eran Behar | David Chan
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) are trained on vast repositories of text authored by millions of distinct authors, reflecting an enormous diversity of human traits. While these models have the potential to serve as approximations of human subjects in behavioral studies, prior efforts have been limited in steering model responses to match individual human users. In this work, we introduce Anthology, a method for conditioning LLMs to particular virtual personas by harnessing open-ended life narratives, which we refer to as backstories. We show that our methodology enhances the consistency and reliability of experimental outcomes while ensuring better representation of diverse sub-populations. Across three nationally representative human surveys conducted as part of Pew Research Center’s American Trends Panel (ATP), we demonstrate that Anthology achieves up to 18% improvement in matching the response distributions of human respondents and 27% improvement in consistency metrics.
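The conditioning idea described above can be sketched as simple prompt construction: an open-ended backstory is placed before a survey question so the model answers in that persona's voice. This is a minimal illustrative sketch, not the paper's exact prompt template; the function name, backstory text, and question are all made up for the example.

```python
def build_persona_prompt(backstory: str, question: str, options: list[str]) -> str:
    """Condition a virtual persona by prepending an open-ended backstory,
    then pose a multiple-choice survey question (illustrative template only)."""
    numbered = "\n".join(f"({i + 1}) {opt}" for i, opt in enumerate(options))
    return (
        f"{backstory.strip()}\n\n"
        f"Question: {question}\n"
        f"{numbered}\n"
        "Answer with the number of the option that best matches your view."
    )

# Hypothetical backstory and ATP-style question, purely for illustration.
backstory = "I grew up in a small town in Ohio and worked as a nurse for twenty years."
prompt = build_persona_prompt(
    backstory,
    "How closely do you follow national news?",
    ["Very closely", "Somewhat closely", "Not too closely", "Not at all"],
)
```

The resulting `prompt` string would then be sent to an LLM; sampling many such backstories is what lets the method approximate a distribution of respondents rather than a single fixed persona.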
TinyAgent: Function Calling at the Edge
Lutfi Erdogan | Nicholas Lee | Siddharth Jha | Sehoon Kim | Ryan Tabrizi | Suhong Moon | Coleman Hooper | Gopala Anumanchipalli | Kurt Keutzer | Amir Gholami
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Recent large language models (LLMs) have enabled the development of advanced agentic systems that can integrate various tools and APIs to fulfill user queries through function calling. However, the deployment of these LLMs at the edge remains largely unexplored, since they typically require cloud-based infrastructure due to their substantial model size and computational demands. To this end, we present TinyAgent, an end-to-end framework for training and deploying task-specific small language model agents capable of function calling for driving agentic systems at the edge. We first show how to enable accurate function calling for open-source models via the LLMCompiler framework. We then systematically curate a high-quality dataset for function calling, which we use to fine-tune two small language models, TinyAgent-1.1B and TinyAgent-7B. For efficient inference, we introduce a novel tool retrieval method to reduce the input prompt length and apply quantization to further accelerate inference. As a driving application, we demonstrate a local Siri-like system for Apple’s MacBook that can execute user commands through text or voice input. Our results show that our models can achieve, and even surpass, the function-calling capabilities of larger models like GPT-4-Turbo, while being fully deployed at the edge. We open-source our [dataset, models, and installable package](https://github.com/SqueezeAILab/TinyAgent) and provide a [demo video](https://www.youtube.com/watch?v=0GvaGL9IDpQ) for our MacBook assistant agent.
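The tool-retrieval step above — selecting only the tools relevant to a query so that the prompt stays short — can be sketched with a toy lexical scorer. This is a stand-in for whatever retriever the system actually uses; the tool names and descriptions here are hypothetical, and a real deployment would use a learned or embedding-based retriever rather than word overlap.

```python
def retrieve_tools(query: str, tools: dict[str, str], k: int = 2) -> list[str]:
    """Rank tools by word overlap (Jaccard similarity) between the query and
    each tool's description, returning the top-k tool names.
    A toy stand-in for a real retrieval model."""
    q = set(query.lower().split())

    def score(desc: str) -> float:
        d = set(desc.lower().split())
        return len(q & d) / (len(q | d) or 1)

    return sorted(tools, key=lambda name: score(tools[name]), reverse=True)[:k]

# Hypothetical tool registry for a local assistant, purely for illustration.
tools = {
    "send_email": "send an email message to a contact with subject and body",
    "create_event": "create a calendar event with a date time and attendees",
    "open_file": "open a file or folder on disk",
}
selected = retrieve_tools("email my manager the quarterly report", tools, k=1)
```

Only the descriptions of the `selected` tools would be included in the agent's prompt, which is how retrieval shrinks the input length for a small on-device model.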