Li Sheng
2025
UltraIF: Advancing Instruction Following from the Wild
Kaikai An
|
Li Sheng
|
Ganqu Cui
|
Shuzheng Si
|
Ning Ding
|
Yu Cheng
|
Baobao Chang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Instruction following has made modern large language models (LLMs) helpful assistants. However, the key to taming LLMs on complex instructions remains elusive, and there are large gaps between models trained by the open-source community and those trained by leading companies. To bridge this gap, we propose UltraIF, a simple and scalable approach for building LLMs that can follow complex instructions using open-source data. UltraIF first decomposes real-world user prompts into simpler queries, constraints, and corresponding evaluation questions for the constraints. We then train an UltraComposer to compose constraint-associated prompts together with their evaluation questions. This prompt composer allows us both to synthesize complicated instructions and to filter responses with the evaluation questions. In our experiments, for the first time, we successfully align LLaMA-3.1-8B-Base to catch up with its instruct version on 5 instruction-following benchmarks without any benchmark information, using only an 8B model as the response generator and evaluator. The aligned model also achieves competitive scores on other benchmarks. Moreover, we show that UltraIF can further improve LLaMA-3.1-8B-Instruct through self-alignment, motivating broader use cases for the method.
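The decompose-compose-filter pipeline in the abstract can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the function names (`decompose`, `compose`, `filter_response`), the `Constraint` record, and the rule-based stand-ins for the LLM decomposer and evaluator are all assumptions made for the sake of the example.

```python
# Sketch of the UltraIF-style data pipeline: split a wild prompt into a
# simpler query plus constraints (each paired with an evaluation question),
# recompose constraint-rich prompts, and keep only responses that pass the
# evaluation questions. LLM calls are replaced by toy rules.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Constraint:
    text: str                      # e.g. "in three sentences"
    eval_question: str             # question used to judge a response
    check: Callable[[str], bool]   # stand-in for the LLM evaluator

def decompose(prompt: str) -> Tuple[str, List[Constraint]]:
    """Split a user prompt into (simple query, constraints) -- stubbed."""
    if "in three sentences" in prompt:
        query = prompt.replace(" in three sentences", "")
        c = Constraint(
            text="in three sentences",
            eval_question="Does the response have exactly three sentences?",
            check=lambda resp: resp.count(".") == 3,
        )
        return query, [c]
    return prompt, []

def compose(query: str, constraints: List[Constraint]) -> str:
    """UltraComposer stand-in: re-attach constraints to a simple query."""
    return " ".join([query] + [c.text for c in constraints])

def filter_response(response: str, constraints: List[Constraint]) -> bool:
    """Keep a candidate response only if every evaluation question passes."""
    return all(c.check(response) for c in constraints)

query, cs = decompose("Explain transformers in three sentences")
prompt = compose(query, cs)
good = filter_response("One. Two. Three.", cs)
bad = filter_response("Only one sentence.", cs)
```

In the paper the composer is a trained model and the evaluator is an LLM answering the evaluation questions; here both are replaced by deterministic rules so the control flow is visible end to end.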
Depression Detection on Social Media with Large Language Models
Xiaochong Lan
|
Zhiguang Han
|
Yiming Cheng
|
Li Sheng
|
Jie Feng
|
Chen Gao
|
Yong Li
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Limited access to mental healthcare resources hinders timely depression diagnosis, leading to detrimental outcomes. Social media platforms present a valuable data source for early detection, yet this task faces two significant challenges: 1) the need for medical knowledge to distinguish clinical depression from transient mood changes, and 2) the dual requirement for high accuracy and model explainability. To address this, we propose DORIS, a framework that leverages Large Language Models (LLMs). To integrate medical knowledge, DORIS utilizes LLMs to annotate user texts against established medical diagnostic criteria and to summarize historical posts into temporal mood courses. These medically-informed features are then used to train an accurate Gradient Boosting Tree (GBT) classifier. Explainability is achieved by generating justifications for predictions based on the LLM-derived symptom annotations and mood course analyses. Extensive experimental results validate the effectiveness as well as interpretability of our method, highlighting its potential as a supportive clinical tool.
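The annotate-summarize-classify-explain flow described in the abstract can be sketched as follows. Everything here is an assumption for illustration: the criterion names and keyword lists are invented, the keyword matcher stands in for the LLM annotator, and a simple threshold rule stands in for the trained Gradient Boosting Tree classifier.

```python
# Sketch of a DORIS-style pipeline: annotate posts against diagnostic
# criteria (LLM stubbed by keywords), summarize history into a mood course,
# derive features, classify, and return a justification for the prediction.
from typing import Dict, List, Tuple

# Hypothetical criteria and keyword lists (illustrative only).
CRITERIA = {
    "depressed_mood": ["sad", "hopeless", "empty"],
    "sleep_issues": ["insomnia", "can't sleep"],
    "anhedonia": ["no interest", "nothing is fun"],
}

def annotate(post: str) -> Dict[str, bool]:
    """Stub for LLM annotation: which criteria does a post match?"""
    text = post.lower()
    return {c: any(k in text for k in kws) for c, kws in CRITERIA.items()}

def mood_course(posts: List[str]) -> List[int]:
    """Summarize history into a per-post severity score (criteria hits)."""
    return [sum(annotate(p).values()) for p in posts]

def features(posts: List[str]) -> Dict[str, float]:
    course = mood_course(posts)
    return {
        "peak_severity": max(course),
        "persistence": sum(1 for s in course if s > 0) / len(course),
    }

def classify(feats: Dict[str, float]) -> Tuple[bool, str]:
    """Threshold stand-in for the trained GBT; returns (label, why)."""
    depressed = feats["peak_severity"] >= 2 and feats["persistence"] >= 0.5
    why = (f"peak severity {feats['peak_severity']}, "
           f"symptomatic in {feats['persistence']:.0%} of posts")
    return depressed, why

posts = [
    "I feel so sad and nothing is fun anymore",
    "can't sleep again",
    "great day at the park",
]
label, explanation = classify(features(posts))
```

The explanation string mirrors the paper's idea of grounding justifications in the symptom annotations and mood course, though the real system generates them with an LLM rather than a template.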
Co-authors
- Kaikai An 1
- Baobao Chang (常宝宝) 1
- Yu Cheng 1
- Yiming Cheng 1
- Ganqu Cui 1