MANTA: A Scalable Pipeline for Transmuting Massive Web Corpora into Instruction Datasets

Heuiyeen Yeen, Seokhee Hong, Hyeongu Yun, Jinsik Lee


Abstract
We introduce MANTA, an automated pipeline that generates high-quality, large-scale instruction fine-tuning datasets from massive web corpora while preserving their diversity and scalability. By extracting structured syllabi from web documents and leveraging high-performance LLMs, our approach enables highly effective query-response generation with minimal human intervention. Extensive experiments on 8B-scale LLMs demonstrate that fine-tuning on the MANTA-1M dataset significantly outperforms other massive dataset generation methodologies, particularly on knowledge-intensive benchmarks such as MMLU and MMLU-Pro, while also delivering superior performance across a broad spectrum of tasks. Moreover, MANTA scales seamlessly: web corpus data can be integrated continuously, enabling expansion into knowledge-intensive domains.
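As a rough illustration of the two-stage idea the abstract describes (syllabus extraction from web documents, then query-response generation per syllabus topic), here is a minimal Python sketch. Everything in it is an assumption for illustration: the prompts, the JSON schema, and the `generate` callable (any LLM text-completion wrapper you supply) are hypothetical, not the paper's actual implementation.

```python
# Minimal sketch of a MANTA-style two-stage generation loop.
# Assumption: `generate` is any text-completion callable you supply
# (e.g. a thin wrapper around an LLM client). Prompts and the JSON
# schema below are illustrative, not the paper's actual prompts.
import json
from typing import Callable, Iterable

SYLLABUS_PROMPT = (
    "Read the document below and produce a structured syllabus: a JSON "
    "list of topic strings covering the knowledge it contains.\n\n"
    "Document:\n{doc}"
)
QA_PROMPT = (
    "Topic: {topic}\n"
    "Write one instruction-style question on this topic and a detailed "
    'answer. Return JSON: {{"query": "...", "response": "..."}}'
)

def manta_style_pairs(docs: Iterable[str],
                      generate: Callable[[str], str]) -> list[dict]:
    """Stage 1: extract a syllabus per web document.
    Stage 2: generate a query-response pair per syllabus topic."""
    pairs: list[dict] = []
    for doc in docs:
        try:
            topics = json.loads(generate(SYLLABUS_PROMPT.format(doc=doc)))
        except json.JSONDecodeError:
            continue  # skip documents whose syllabus failed to parse
        for topic in topics:
            try:
                pairs.append(json.loads(generate(QA_PROMPT.format(topic=topic))))
            except json.JSONDecodeError:
                continue  # skip malformed query-response outputs
    return pairs
```

Because the loop only consumes an iterable of documents, new web corpus data can be streamed in continuously, which is the scalability property the abstract emphasizes.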
Anthology ID:
2025.findings-emnlp.1019
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
18755–18770
URL:
https://aclanthology.org/2025.findings-emnlp.1019/
Cite (ACL):
Heuiyeen Yeen, Seokhee Hong, Hyeongu Yun, and Jinsik Lee. 2025. MANTA: A Scalable Pipeline for Transmuting Massive Web Corpora into Instruction Datasets. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 18755–18770, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
MANTA: A Scalable Pipeline for Transmuting Massive Web Corpora into Instruction Datasets (Yeen et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.1019.pdf
Checklist:
2025.findings-emnlp.1019.checklist.pdf