Divide, Optimize, Merge: Scalable Fine-Grained Generative Optimization for LLM Agents

Jiale Liu, Yifan Zeng, Shaokun Zhang, Chi Zhang, Malte Højmark-Bertelsen, Marie Normann Gadeberg, Huazheng Wang, Qingyun Wu


Abstract
LLM-based optimization has shown remarkable potential in improving agentic systems. However, the conventional approach of prompting an LLM-based generative optimizer with trajectories from the whole training dataset in a single pass becomes untenable as datasets grow, leading to context window overflow and degraded pattern recognition. To address these challenges, we propose Fine-grained Generative Optimization (FGO), a scalable framework that divides large optimization tasks into manageable subsets, performs targeted optimizations, and systematically combines the optimized components through progressive merging. Evaluation across the ALFWorld, LogisticsQA, and GAIA benchmarks demonstrates that FGO outperforms the conventional approach by 1.6-8.6% while reducing average prompt token consumption by 56.3%. Our framework provides a practical solution for scaling LLM-based generative optimization to increasingly sophisticated agentic systems. Further analysis shows that FGO achieves the most consistent performance gains across all training dataset sizes, demonstrating its scalability and efficiency.
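The divide-optimize-merge recipe in the abstract can be read as a simple reduction over the training data. Below is a minimal Python sketch of that control flow, assuming two hypothetical callables that are not defined in the paper: optimize_on(module, subset), an LLM-based generative optimizer run on one subset of trajectories, and merge_pair(a, b), an LLM call that reconciles two optimized variants. The paper's actual interfaces and merge schedule may differ.

    def fgo(training_set, base_module, subset_size, optimize_on, merge_pair):
        # Divide: split the training data into manageable subsets so each
        # optimizer call fits comfortably inside the context window.
        subsets = [training_set[i:i + subset_size]
                   for i in range(0, len(training_set), subset_size)]

        # Optimize: run the generative optimizer on each subset independently,
        # producing one optimized candidate per subset.
        candidates = [optimize_on(base_module, subset) for subset in subsets]

        # Merge: progressively combine optimized components pairwise until a
        # single module remains (a binary-tree reduction; hypothetical, the
        # paper's exact merge order may differ).
        while len(candidates) > 1:
            merged = [merge_pair(candidates[i], candidates[i + 1])
                      for i in range(0, len(candidates) - 1, 2)]
            if len(candidates) % 2 == 1:
                merged.append(candidates[-1])  # carry the odd one forward
            candidates = merged
        return candidates[0]

Under these assumptions, each optimizer or merge call sees only a bounded slice of the data, which is consistent with the abstract's claim that FGO avoids context window overflow and lowers average prompt token consumption as the dataset grows.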
Anthology ID:
2025.findings-emnlp.1034
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
18990–19012
URL:
https://aclanthology.org/2025.findings-emnlp.1034/
Cite (ACL):
Jiale Liu, Yifan Zeng, Shaokun Zhang, Chi Zhang, Malte Højmark-Bertelsen, Marie Normann Gadeberg, Huazheng Wang, and Qingyun Wu. 2025. Divide, Optimize, Merge: Scalable Fine-Grained Generative Optimization for LLM Agents. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 18990–19012, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Divide, Optimize, Merge: Scalable Fine-Grained Generative Optimization for LLM Agents (Liu et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.1034.pdf
Checklist:
 2025.findings-emnlp.1034.checklist.pdf