Chaoyang He


2024

ScaleLLM: A Resource-Frugal LLM Serving Framework by Optimizing End-to-End Efficiency
Yuhang Yao | Han Jin | Alay Dilipbhai Shah | Shanshan Han | Zijian Hu | Dimitris Stripelis | Yide Ran | Zhaozhuo Xu | Salman Avestimehr | Chaoyang He
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track

Large language models (LLMs) have surged in popularity and are extensively used in commercial applications, where the efficiency of model serving is crucial for the user experience. Most current research focuses on optimizing individual sub-procedures, e.g., local inference and communication; however, there is no comprehensive framework that provides a holistic system view for optimizing LLM serving in an end-to-end manner. In this work, we conduct a detailed analysis to identify major bottlenecks that impact end-to-end latency in LLM serving systems. Our analysis reveals that a comprehensive LLM serving endpoint must address a series of efficiency bottlenecks that extend beyond LLM inference. We then propose ScaleLLM, an optimized system for resource-efficient LLM serving. Our extensive experiments reveal that with 64 concurrent requests on Mixtral 8x7B, ScaleLLM achieves a 4.3× speedup over vLLM and outperforms state-of-the-art systems with 1.5× higher throughput.
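
A minimal sketch of the kind of analysis the abstract describes, not the ScaleLLM implementation: instrumenting each stage of a serving request so that bottlenecks outside of model inference (gateway handling, tokenization, detokenization, response streaming) become visible. Stage names and durations below are simulated assumptions for illustration only.

    # Illustrative end-to-end latency decomposition (stages are stubs).
    import time
    from contextlib import contextmanager
    from collections import defaultdict

    stage_totals = defaultdict(float)

    @contextmanager
    def timed(stage):
        start = time.perf_counter()
        try:
            yield
        finally:
            stage_totals[stage] += time.perf_counter() - start

    def serve_request(prompt):
        with timed("gateway"):         time.sleep(0.002)  # routing / auth / queuing
        with timed("tokenize"):        time.sleep(0.001)
        with timed("inference"):       time.sleep(0.020)  # the stage most work optimizes
        with timed("detokenize"):      time.sleep(0.001)
        with timed("stream_response"): time.sleep(0.003)

    for _ in range(64):  # e.g., the 64-concurrent-request setting in the abstract
        serve_request("hello")

    total = sum(stage_totals.values())
    for stage, t in sorted(stage_totals.items(), key=lambda kv: -kv[1]):
        print(f"{stage:>16}: {t*1000:7.1f} ms  ({100*t/total:4.1f}% of end-to-end)")

Printing the per-stage share of total latency is what makes the non-inference bottlenecks that the paper targets stand out.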

TensorOpera Router: A Multi-Model Router for Efficient LLM Inference
Dimitris Stripelis | Zhaozhuo Xu | Zijian Hu | Alay Dilipbhai Shah | Han Jin | Yuhang Yao | Jipeng Zhang | Tong Zhang | Salman Avestimehr | Chaoyang He
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track

With the rapid growth of Large Language Models (LLMs) across various domains, numerous new LLMs have emerged, each possessing domain-specific expertise. This proliferation has highlighted the need for quick, high-quality, and cost-effective LLM query response methods. Yet, no single LLM exists that efficiently balances this trilemma. Some models are powerful but extremely costly, while others are fast and inexpensive but qualitatively inferior. To address this challenge, we present TO-Router, a non-monolithic LLM querying system that seamlessly integrates various LLM experts into a single query interface and dynamically routes incoming queries to the best-performing expert based on the query’s requirements. Through extensive experiments, we demonstrate that when compared to standalone expert models, TO-Router improves query efficiency by up to 40% and leads to significant cost reductions of up to 30%, while maintaining or enhancing model performance by up to 10%.
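
A toy routing sketch, not the TO-Router implementation: the expert names, quality/latency/cost numbers, and the requirement heuristic are illustrative assumptions. It shows the general idea of picking, per query, the cheapest expert that still meets the query's quality needs.

    # Hypothetical multi-expert router (all experts and scores are made up).
    from dataclasses import dataclass

    @dataclass
    class Expert:
        name: str
        quality: float    # higher is better (e.g., benchmark score in [0, 1])
        latency_s: float  # expected response latency
        cost: float       # expected cost per query (arbitrary units)

    EXPERTS = [
        Expert("large-generalist", quality=0.90, latency_s=2.0, cost=1.00),
        Expert("small-generalist", quality=0.70, latency_s=0.4, cost=0.05),
        Expert("code-specialist",  quality=0.85, latency_s=0.8, cost=0.20),
    ]

    def route(query: str, min_quality: float = 0.6) -> Expert:
        # Toy "requirements" signal: code-looking queries prefer the code expert.
        needs_code = any(tok in query for tok in ("def ", "class ", "import ", "{"))
        candidates = [e for e in EXPERTS if e.quality >= min_quality]
        if needs_code:
            candidates = [e for e in candidates if "code" in e.name] or candidates
        # Among acceptable experts, prefer the cheapest and then the fastest.
        return min(candidates, key=lambda e: (e.cost, e.latency_s))

    print(route("def quicksort(xs):").name)       # -> code-specialist
    print(route("Summarize this article.").name)  # -> small-generalist

A real router would learn the routing decision from query features rather than hand-coded rules, but the cost/quality/latency trade-off it balances is the one described above.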

2022

FedNLP: Benchmarking Federated Learning Methods for Natural Language Processing Tasks
Bill Yuchen Lin | Chaoyang He | Zihang Ze | Hulin Wang | Yufen Hua | Christophe Dupuy | Rahul Gupta | Mahdi Soltanolkotabi | Xiang Ren | Salman Avestimehr
Findings of the Association for Computational Linguistics: NAACL 2022

Increasing concerns and regulations about data privacy and sparsity necessitate the study of privacy-preserving, decentralized learning methods for natural language processing (NLP) tasks. Federated learning (FL) provides promising approaches for a large number of clients (e.g., personal devices or organizations) to collaboratively learn a shared global model that benefits all clients while allowing users to keep their data locally. Despite interest in studying FL methods for NLP tasks, a systematic comparison and analysis is lacking in the literature. Herein, we present FedNLP, a benchmarking framework for evaluating federated learning methods on four different task formulations: text classification, sequence tagging, question answering, and seq2seq. We propose a universal interface between Transformer-based language models (e.g., BERT, BART) and FL methods (e.g., FedAvg, FedOPT) under various non-IID partitioning strategies. Our extensive experiments with FedNLP provide empirical comparisons between FL methods and help us better understand the inherent challenges of this direction. The comprehensive analysis points to intriguing and exciting directions for future research on FL methods for NLP tasks.
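
A minimal FedAvg sketch, not the FedNLP codebase: clients hold private, possibly non-IID data, train locally, and send only model updates; the server aggregates them weighted by local sample counts. Local "training" here is a stub on a flat weight vector; in FedNLP the local model would be a Transformer such as BERT or BART.

    # Toy FedAvg loop (local training is simulated with random perturbations).
    import numpy as np

    def local_update(global_weights, client_data):
        # Stub for local training on private data; returns updated weights
        # and the number of local examples used for weighting.
        noise = np.random.normal(scale=0.01, size=global_weights.shape)
        return global_weights + noise, len(client_data)

    def fedavg(client_results):
        weights, counts = zip(*client_results)
        counts = np.array(counts, dtype=float)
        return np.average(np.stack(weights), axis=0, weights=counts)

    global_w = np.zeros(10)                            # toy "model": a flat weight vector
    clients = [list(range(n)) for n in (50, 200, 10)]  # unevenly sized (non-IID) local datasets

    for round_idx in range(3):                         # a few communication rounds
        results = [local_update(global_w, data) for data in clients]
        global_w = fedavg(results)
        print(f"round {round_idx}: mean |w| = {np.abs(global_w).mean():.4f}")

Swapping the aggregation rule (e.g., FedOPT-style server optimizers) or the partitioning strategy is the axis along which the benchmark compares methods.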

Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022)
Bill Yuchen Lin | Chaoyang He | Chulin Xie | Fatemehsadat Mireshghallah | Ninareh Mehrabi | Tian Li | Mahdi Soltanolkotabi | Xiang Ren
Proceedings of the First Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022)