Chi Wang


2024

Assessing and Verifying Task Utility in LLM-Powered Applications
Negar Arabzadeh | Siqing Huo | Nikhil Mehta | Qingyun Wu | Chi Wang | Ahmed Hassan Awadallah | Charles L. A. Clarke | Julia Kiseleva
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

The rapid development of Large Language Models (LLMs) has led to a surge in applications that facilitate collaboration among multiple agents, assisting humans in their daily tasks. However, a significant gap remains in assessing to what extent LLM-powered applications genuinely enhance user experience and task execution efficiency. This highlights the need to verify the utility of LLM-powered applications, particularly by ensuring alignment between an application’s functionality and end-user needs. We introduce AgentEval, a novel framework designed to simplify the utility verification process by automatically proposing a set of criteria tailored to the unique purpose of any given application. This allows for a comprehensive assessment that quantifies the utility of an application against the suggested criteria. We present a comprehensive analysis of the effectiveness and robustness of AgentEval on two open-source datasets: math problem solving and ALFWorld household tasks. For reproducibility, we make the data, code, and all logs publicly available at https://github.com/Narabzad/AgentEval
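
A minimal sketch of AgentEval’s two phases as described in the abstract — propose task-specific criteria, then quantify a solution against them. The `ask_llm` helper and the 1–5 scoring scale are illustrative assumptions, not the paper’s actual implementation (see the repository above for that).

```python
# Sketch of AgentEval's two phases: propose task-specific criteria,
# then quantify a recorded solution against them. `ask_llm` is a
# placeholder for any chat-completion client; the 1-5 scale is assumed.

def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its reply."""
    raise NotImplementedError("plug in your LLM client here")

def propose_criteria(task_description: str) -> list[str]:
    # Phase 1: have the LLM suggest assessment criteria for this task.
    reply = ask_llm(
        "Propose 3-5 measurable criteria for judging solutions to the "
        "following task, one per line:\n" + task_description
    )
    return [line.strip() for line in reply.splitlines() if line.strip()]

def quantify_utility(task_description: str, solution_log: str,
                     criteria: list[str]) -> dict[str, int]:
    # Phase 2: score the recorded solution against each criterion.
    scores = {}
    for criterion in criteria:
        reply = ask_llm(
            f"Task: {task_description}\nSolution log: {solution_log}\n"
            f"Rate the solution on '{criterion}' from 1 (poor) to 5 "
            "(excellent). Reply with a single integer."
        )
        scores[criterion] = int(reply.strip())
    return scores
```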

AUTOGEN STUDIO: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems
Victor Dibia | Jingya Chen | Gagan Bansal | Suff Syed | Adam Fourney | Erkang Zhu | Chi Wang | Saleema Amershi
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

Multi-agent systems, where multiple agents (generative AI models + tools) collaborate, are emerging as an effective pattern for solving long-running, complex tasks in numerous do- mains. However, specifying their parameters (such as models, tools, and orchestration mechanisms etc,.) and debugging them remains challenging for most developers. To address this challenge, we present AUTOGEN STUDIO, a no-code developer tool for rapidly prototyping, debugging, and evaluating multi-agent work- flows built upon the AUTOGEN framework. AUTOGEN STUDIO offers a web interface and a Python API for representing LLM-enabled agents using a declarative (JSON-based) specification. It provides an intuitive drag-and-drop UI for agent workflow specification, interactive evaluation and debugging of workflows, and a gallery of reusable agent components. We highlight four design principles for no-code multi-agent developer tools and contribute an open-source implementation. https://github.com/microsoft/autogen/tree/autogenstudio/samples/apps/autogen-studio
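
To make the workflow idea concrete, here is a minimal two-agent workflow in the underlying AUTOGEN framework, of the kind the STUDIO UI assembles visually. This assumes the pyautogen 0.2 API; the model name and config values are illustrative.

```python
# A minimal two-agent AutoGen workflow (pyautogen 0.2 API).
import os
from autogen import AssistantAgent, UserProxyAgent

llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": os.environ["OPENAI_API_KEY"]}]
}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",  # run fully automated, no human in the loop
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The proxy executes any code the assistant writes and feeds the results
# back until the task is done.
user_proxy.initiate_chat(assistant, message="Plot a sine wave and save it to sine.png.")
```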

2021

An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models
Xueqing Liu | Chi Wang
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

The performance of fine-tuning pre-trained language models largely depends on the hyperparameter configuration. In this paper, we investigate the performance of modern hyperparameter optimization (HPO) methods on fine-tuning pre-trained language models. First, we study and report three HPO algorithms’ performance on fine-tuning two state-of-the-art language models on the GLUE dataset. We find that, given the same time budget, HPO often fails to outperform grid search for two reasons: insufficient time budget and overfitting. We propose two general strategies and an experimental procedure to systematically troubleshoot HPO’s failure cases. By applying the procedure, we observe that HPO can succeed with more appropriate settings for the search space and time budget; however, in certain cases overfitting remains. Finally, we make suggestions for future work. Our implementation can be found at https://github.com/microsoft/FLAML/tree/main/flaml/nlp/
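
As a rough sketch of running HPO under a fixed time budget with FLAML (the library linked above), the search space and the `finetune_and_eval` stub below are illustrative assumptions, not the paper’s exact experimental setup.

```python
# Sketch of time-budgeted hyperparameter search with flaml.tune.
from flaml import tune

def finetune_and_eval(config: dict) -> dict:
    """Placeholder: fine-tune a pre-trained LM with `config` and
    return {"accuracy": <validation accuracy>}."""
    raise NotImplementedError("plug in your fine-tuning loop here")

analysis = tune.run(
    finetune_and_eval,
    config={
        "learning_rate": tune.loguniform(1e-6, 1e-4),
        "num_epochs": tune.choice([2, 3, 4]),
        "batch_size": tune.choice([16, 32]),
    },
    metric="accuracy",
    mode="max",
    time_budget_s=3600,  # cap total tuning time, mirroring the budget analysis
    num_samples=-1,      # try as many configurations as the budget allows
)
print(analysis.best_config)
```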

2017

Identifying Semantically Deviating Outlier Documents
Honglei Zhuang | Chi Wang | Fangbo Tao | Lance Kaplan | Jiawei Han
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

A document outlier is a document whose semantics substantially deviate from the majority of documents in a corpus. Automatically identifying document outliers can be valuable in many applications, such as screening health records for medical mistakes. In this paper, we study the problem of mining semantically deviating document outliers in a given corpus. We develop a generative model that identifies frequent and characteristic semantic regions in the word embedding space to represent the given corpus, and a robust outlierness measure that is resistant to noisy content in documents. Experiments conducted on two real-world textual datasets show that our method can achieve up to a 135% improvement over baselines in terms of recall at the top 1% of the outlier ranking.
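
As a rough illustration of embedding-space outlier scoring — deliberately simpler than the paper’s generative model over semantic regions — a k-nearest-neighbour cosine variant might look like:

```python
# A simplified stand-in for the paper's method: score each document by
# its dissimilarity to its k nearest neighbours in embedding space.
import numpy as np

def outlierness(doc_embeddings: np.ndarray, k: int = 10) -> np.ndarray:
    """Return one outlierness score per document (higher = more deviant)."""
    # Normalize rows so dot products equal cosine similarities.
    normed = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # ignore self-similarity
    # Mean similarity to the k most similar documents; isolated docs score high.
    topk = np.sort(sims, axis=1)[:, -k:]
    return 1.0 - topk.mean(axis=1)

# Rank documents and inspect the top 1%, matching the paper's evaluation metric:
# scores = outlierness(embeddings)
# top_outliers = np.argsort(scores)[::-1][: len(scores) // 100]
```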

2014

The Wisdom of Minority: Unsupervised Slot Filling Validation based on Multi-dimensional Truth-Finding
Dian Yu | Hongzhao Huang | Taylor Cassidy | Heng Ji | Chi Wang | Shi Zhi | Jiawei Han | Clare Voss | Malik Magdon-Ismail
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers