Xing Sun


2025

MAC-SQL: A Multi-Agent Collaborative Framework for Text-to-SQL
Bing Wang | Changyu Ren | Jian Yang | Xinnian Liang | Jiaqi Bai | LinZheng Chai | Zhao Yan | Qian-Wen Zhang | Di Yin | Xing Sun | Zhoujun Li
Proceedings of the 31st International Conference on Computational Linguistics

Recent LLM-based Text-to-SQL methods usually suffer from significant performance degradation on “huge” databases and complex user questions that require multi-step reasoning. Moreover, most existing methods neglect the significant benefit of equipping LLMs with external tools and model collaboration. To address these challenges, we introduce MAC-SQL, a novel LLM-based multi-agent collaborative framework. Our framework comprises a core decomposer agent for Text-to-SQL generation with few-shot chain-of-thought reasoning, accompanied by two auxiliary agents that utilize external tools or models to acquire smaller sub-databases and refine erroneous SQL queries. The decomposer agent collaborates with the auxiliary agents, which are activated as needed and can be expanded to accommodate new features or tools for effective Text-to-SQL parsing. We initially leverage GPT-4 as the strong backbone LLM for all agent tasks to determine the upper bound of our framework. We then fine-tune an open-source instruction-following model, SQL-Llama, based on Code Llama 7B, to accomplish all tasks as GPT-4 does. Experiments show that SQL-Llama achieves a comparable execution accuracy of 43.94, compared to the baseline accuracy of 46.35 for vanilla GPT-4. At the time of writing, MAC-SQL+GPT-4 achieves an execution accuracy of 59.59 when evaluated on the BIRD benchmark, establishing a new state-of-the-art (SOTA) on its holdout test set.
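
As a rough illustration of the collaboration the abstract describes, the following Python sketch wires a schema-pruning agent, the core decomposer, and an error-driven refiner around a SQLite database. The function names, prompt wording, and call_llm() helper are illustrative assumptions for exposition, not the authors' actual agents or prompts.

```python
# A minimal sketch of the agent loop, assuming a hypothetical call_llm()
# helper; the prompts and agent interfaces below are illustrative only.
import sqlite3

def call_llm(prompt: str) -> str:
    """Placeholder for a backbone LLM call (e.g., GPT-4 or SQL-Llama)."""
    raise NotImplementedError

def select_sub_database(question: str, full_schema: str) -> str:
    # Auxiliary agent 1: prune a huge schema down to the relevant tables/columns.
    return call_llm(f"Keep only the schema elements relevant to the question.\n"
                    f"Question: {question}\nSchema:\n{full_schema}")

def decompose_and_generate(question: str, sub_schema: str) -> str:
    # Core decomposer agent: few-shot chain-of-thought decomposition, ending in SQL.
    return call_llm(f"Decompose the question into sub-questions and solve them "
                    f"step by step, ending with one SQL query.\n"
                    f"Schema:\n{sub_schema}\nQuestion: {question}")

def refine_sql(sql: str, error: str, sub_schema: str) -> str:
    # Auxiliary agent 2: repair erroneous SQL using execution feedback.
    return call_llm(f"Fix the SQL given the error.\nSchema:\n{sub_schema}\n"
                    f"SQL: {sql}\nError: {error}")

def mac_sql(question: str, full_schema: str, db_path: str, max_rounds: int = 3) -> str:
    sub_schema = select_sub_database(question, full_schema)
    sql = decompose_and_generate(question, sub_schema)
    for _ in range(max_rounds):  # the refiner is only activated as needed
        try:
            sqlite3.connect(db_path).execute(sql)
            return sql           # executable SQL: accept it
        except sqlite3.Error as exc:
            sql = refine_sql(sql, str(exc), sub_schema)
    return sql
```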

FIPO: Free-form Instruction-oriented Prompt Optimization with Preference Dataset and Modular Fine-tuning Schema
Junru Lu | Siyu An | Min Zhang | Yulan He | Di Yin | Xing Sun
Proceedings of the 31st International Conference on Computational Linguistics

When carefully optimized by human experts, naive prompts can significantly enhance the task performance of large language models (LLMs). However, such expert-driven prompt optimization is resource-intensive. To address this, some studies have proposed Automatic Prompt Optimization (APO), which refines naive prompts according to task outputs from in-box testing models, utilizing advanced LLMs (e.g., GPT-4) in an ad-hoc way. Although effective, current approaches face challenges in generalization and privacy risks. To overcome these limitations, we have developed the first large-scale Prompt Optimization Preference (POP) dataset, fine-tuned offline local LLM-based optimizers, and conducted fair evaluations across various downstream models. Our method, named Free-form Instruction-oriented Prompt Optimization (FIPO), allows precise optimization of the core task instructions in naive prompts in a model-agnostic manner. FIPO uses a modular APO template that dynamically incorporates the naive task instruction, an optional instruction response, and optional ground truth to produce refined prompts. The POP dataset is meticulously constructed using advanced LLMs and undergoes rigorous cross-validation by human experts and analytical models. By leveraging insights from this dataset, along with Tulu2 models and diverse fine-tuning strategies, we validate the efficacy of the FIPO framework across five public benchmarks and six testing models. Our dataset and code are available at: https://github.com/LuJunru/FIPO_Project.
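
To make the "modular APO template" concrete, here is a minimal Python sketch of how such a template might assemble its optional modules into one optimizer input. The section headers and function name are illustrative assumptions, not the released FIPO template (see the linked repository for the real one).

```python
# A minimal sketch of a modular APO template; the section headers below
# are assumptions, not the released FIPO template.
from typing import Optional

def build_optimizer_input(naive_instruction: str,
                          model_response: Optional[str] = None,
                          ground_truth: Optional[str] = None) -> str:
    """Assemble the prompt-optimizer input from whichever modules are available."""
    parts = [f"### Naive task instruction:\n{naive_instruction}"]
    if model_response is not None:      # optional instruction-response module
        parts.append(f"### Current model response:\n{model_response}")
    if ground_truth is not None:        # optional ground-truth module
        parts.append(f"### Ground truth:\n{ground_truth}")
    parts.append("### Rewrite the task instruction so that a downstream "
                 "model can answer correctly:")
    return "\n\n".join(parts)
```

Because the response and ground-truth modules are optional, one plausible benefit of such a design is that the same template serves both training-time optimization (with labels) and deployment-time optimization (instruction only).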

2024

Eliminating Biased Length Reliance of Direct Preference Optimization via Down-Sampled KL Divergence
Junru Lu | Jiazheng Li | Siyu An | Meng Zhao | Yulan He | Di Yin | Xing Sun
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Direct Preference Optimization (DPO) has emerged as a prominent algorithm for the direct and robust alignment of Large Language Models (LLMs) with human preferences, offering a more straightforward alternative to the complex Reinforcement Learning from Human Feedback (RLHF). Despite its promising efficacy, DPO faces a notable drawback: “verbosity”, a common over-optimization phenomenon also observed in RLHF. While previous studies mainly attributed verbosity to biased labels within the data, we propose that the issue also stems from an inherent algorithmic length reliance in DPO. Specifically, we suggest that the discrepancy between the sequence-level Kullback–Leibler (KL) divergences of the chosen and rejected sequences used in DPO results in overestimated or underestimated rewards due to varying token lengths. Empirically, we utilize datasets with different label lengths to demonstrate the presence of biased rewards. We then introduce an effective down-sampling approach, named SamPO, to eliminate the potential length reliance. Our experimental evaluations, conducted across three LLMs of varying scales and a diverse array of conditional and open-ended benchmarks, highlight the efficacy of SamPO in mitigating verbosity, achieving improvements of 5% to 12% over DPO through debiased rewards. Our code can be accessed at: https://github.com/LuJunru/SamPO/.
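
A hedged sketch of the down-sampling idea follows: given per-token log-probability ratios (policy minus reference) for the chosen and rejected responses, both are sub-sampled to the same number of tokens before the DPO reward is formed, so the implicit reward no longer scales with response length. The exact sampling scheme and loss details below are simplified assumptions, not the paper's implementation.

```python
# A hedged sketch of down-sampled DPO rewards, assuming per-token
# log-ratios are precomputed; the paper's sampling scheme may differ.
import torch
import torch.nn.functional as F

def downsampled_reward(token_logratios: torch.Tensor,
                       mask: torch.Tensor, k: int) -> torch.Tensor:
    """Sum k randomly chosen per-token log-ratios from one response."""
    valid = token_logratios[mask.bool()]
    idx = torch.randperm(valid.numel())[:k]   # sample tokens without replacement
    return valid[idx].sum()

def sampo_style_loss(chosen_lr, chosen_mask, rejected_lr, rejected_mask, beta=0.1):
    # Down-sample both responses to the same token count so the implicit
    # DPO reward no longer grows with response length.
    k = min(int(chosen_mask.sum()), int(rejected_mask.sum()))
    r_chosen = downsampled_reward(chosen_lr, chosen_mask, k)
    r_rejected = downsampled_reward(rejected_lr, rejected_mask, k)
    return -F.logsigmoid(beta * (r_chosen - r_rejected))
```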

Sinkhorn Distance Minimization for Knowledge Distillation
Xiao Cui | Yulei Qin | Yuting Gao | Enwei Zhang | Zihan Xu | Tong Wu | Ke Li | Xing Sun | Wengang Zhou | Houqiang Li
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Knowledge distillation (KD) has been widely adopted to compress large language models (LLMs). Existing KD methods investigate various divergence measures, including the Kullback-Leibler (KL), reverse Kullback-Leibler (RKL), and Jensen-Shannon (JS) divergences. However, due to limitations inherent in their assumptions and definitions, these measures fail to deliver effective supervision when there is little distributional overlap between the teacher and the student. In this paper, we show that the aforementioned KL, RKL, and JS divergences respectively suffer from issues of mode-averaging, mode-collapsing, and mode-underestimation, which degrade logit-based KD across diverse NLP tasks. We propose Sinkhorn Knowledge Distillation (SinKD), which exploits the Sinkhorn distance to ensure a nuanced and precise assessment of the disparity between teacher and student distributions. Moreover, by exploiting properties of the Sinkhorn metric, we can dispense with sample-wise KD, which restricts the perception of divergence to each individual teacher-student sample pair. Instead, we propose a batch-wise reformulation to capture the geometric intricacies of distributions across samples in high-dimensional space. Comprehensive evaluation on GLUE and SuperGLUE, in terms of comparability, validity, and generalizability, highlights our superiority over state-of-the-art methods on all kinds of LLMs with encoder-only, encoder-decoder, and decoder-only architectures.
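
For readers unfamiliar with the Sinkhorn distance, the following generic PyTorch sketch computes the entropy-regularized optimal-transport distance between two distributions via Sinkhorn-Knopp iterations. The cost matrix, regularization strength, and the paper's batch-wise reformulation and KD loss weighting are assumptions or omissions here, not SinKD's exact formulation.

```python
# A generic Sinkhorn-distance sketch between teacher and student
# distributions; cost and eps choices are assumptions, and the paper's
# batch-wise reformulation is omitted.
import torch

def sinkhorn_distance(p: torch.Tensor, q: torch.Tensor, cost: torch.Tensor,
                      eps: float = 0.1, n_iters: int = 50) -> torch.Tensor:
    """Entropy-regularized optimal-transport distance between 1-D distributions."""
    K = torch.exp(-cost / eps)            # Gibbs kernel from the cost matrix
    u = torch.ones_like(p)
    v = torch.ones_like(q)
    for _ in range(n_iters):              # Sinkhorn-Knopp alternating scaling
        u = p / (K @ v + 1e-9)
        v = q / (K.T @ u + 1e-9)
    transport = u.unsqueeze(1) * K * v.unsqueeze(0)   # optimal transport plan
    return (transport * cost).sum()

# Example usage: p = teacher softmax, q = student softmax over the same
# label set, cost[i, j] = some ground distance between classes i and j.
```

Unlike KL-family divergences, this distance stays informative even when p and q share little support, since the transport plan accounts for the cost of moving mass between non-overlapping modes.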

2023

Span-level Aspect-based Sentiment Analysis via Table Filling
Mao Zhang | Yongxin Zhu | Zhen Liu | Zhimin Bao | Yunfei Wu | Xing Sun | Linli Xu
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

In this paper, we propose a novel span-level model for Aspect-Based Sentiment Analysis (ABSA), which aims at identifying the sentiment polarity of a given aspect. In contrast to conventional ABSA models that focus on modeling word-level dependencies between an aspect and its corresponding opinion expressions, we propose Table Filling BERT (TF-BERT), which considers the consistency of multi-word opinion expressions at the span level. Specifically, we learn span representations with a table filling method, constructing an upper triangular table for each sentiment polarity, whose elements represent the sentiment intensity of that polarity for every span in the sentence. Two methods are then proposed, table-decoding and table-aggregation, to filter out target spans or aggregate each table for sentiment polarity classification. In addition, we design a sentiment consistency regularizer to guarantee the sentiment consistency of each span across different sentiment polarities. Experimental results on three benchmarks demonstrate the effectiveness of our proposed model.
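
A minimal sketch of the upper-triangular span table described above: for each sentiment polarity, cell (i, j) with i <= j scores the span from token i to token j, and lower-triangle cells are masked out. The boundary-concatenation features and score_head are placeholder assumptions, not the paper's exact architecture.

```python
# A minimal sketch of upper-triangular span tables; the span features and
# score_head are placeholders, not TF-BERT's actual architecture.
import torch

def build_span_tables(token_reprs: torch.Tensor, score_head,
                      n_polarities: int = 3) -> torch.Tensor:
    """token_reprs: (seq_len, hidden); returns (n_polarities, seq_len, seq_len)."""
    n = token_reprs.size(0)
    tables = torch.full((n_polarities, n, n), float("-inf"))  # mask lower triangle
    for i in range(n):
        for j in range(i, n):  # only cells with i <= j denote valid spans
            span_feat = torch.cat([token_reprs[i], token_reprs[j]])  # boundary features
            tables[:, i, j] = score_head(span_feat)  # one intensity per polarity
    return tables
```

Per the abstract, decoding then amounts to either reading out high-scoring cells from each table (table-decoding) or pooling a whole table into a polarity-level score (table-aggregation).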