Herumb Shandilya
2026
Beyond a Single Extractor: Re-thinking HTML-to-Text Extraction for LLM Pre-training
Jeffrey Li | Joshua P Gardner | Doug Kang | Fangping Shi | Karanjeet Singh | Chun-Liang Li | Herumb Shandilya | David Leo Wright Hall | Oncel Tuzel | Percy Liang | Ludwig Schmidt | Hadi Pouransari | Fartash Faghri
Findings of the Association for Computational Linguistics: EACL 2026
One of the first pre-processing steps for constructing web-scale LLM pretraining datasets involves extracting text from HTML. Despite the immense diversity of web content, existing open-source datasets predominantly apply a single fixed extractor to all webpages. In this work, we investigate whether this practice leads to suboptimal coverage and utilization of Internet data. We first show that while different extractors may lead to similar model performance on standard language understanding tasks, the pages surviving a fixed filtering pipeline can differ substantially. This suggests a simple intervention: by taking a Union over different extractors, we can increase the token yield of DCLM-Baseline by up to 71% while maintaining benchmark performance. We further show that for structured content such as tables and code blocks, extractor choice can significantly impact downstream task performance, with differences of up to 10 percentage points (p.p.) on WikiTQ and 3 p.p. on HumanEval.
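The "Union" intervention described in the abstract can be sketched in a few lines: run several extractors per page, filter each output independently, and keep a page if any extractor's text survives. The extractors and the quality filter below are simplified stand-ins for illustration, not the paper's actual pipeline.

```python
import re

def extract_plain(html: str) -> str:
    # Stand-in extractor: strip all tags naively.
    return re.sub(r"<[^>]+>", " ", html)

def extract_keep_pre(html: str) -> str:
    # Stand-in extractor that (hypothetically) preserves <pre>...</pre>
    # blocks, as a proxy for extractors that handle structured content.
    return re.sub(r"<(?!/?pre\b)[^>]+>", " ", html)

def passes_filter(text: str) -> bool:
    # Stand-in quality filter: require a minimum token count.
    return len(text.split()) >= 3

def union_extract(pages: dict) -> dict:
    """Map url -> text, keeping a page if ANY extractor survives filtering."""
    extractors = [extract_plain, extract_keep_pre]
    kept = {}
    for url, html in pages.items():
        for extractor in extractors:
            text = extractor(html)
            if passes_filter(text):
                kept[url] = text
                break  # first surviving extraction wins
    return kept

pages = {
    "a.example": "<p>short</p>",                  # fails the filter everywhere
    "b.example": "<p>one two three four</p>",     # survives under both extractors
}
print(len(union_extract(pages)))  # prints 1
```

The point of the union is that pages dropped by one extractor's output (e.g. because a poor extraction falls below a quality threshold) can still be recovered via another extractor, which is how token yield increases without relaxing the filter itself.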
2024
Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning
Shivalika Singh | Freddie Vargus | Daniel D’souza | Börje F. Karlsson | Abinaya Mahendiran | Wei-Yin Ko | Herumb Shandilya | Jay Patel | Deividas Mataciunas | Laura O’Mahony | Mike Zhang | Ramith Hettiarachchi | Joseph Wilson | Marina Machado | Luisa Moura | Dominik Krzemiński | Hakimeh Fadaei | Irem Ergun | Ifeoma Okoh | Aisha Alaagib | Oshan Mudannayake | Zaid Alyafeai | Vu Chien | Sebastian Ruder | Surya Guthikonda | Emad Alghamdi | Sebastian Gehrmann | Niklas Muennighoff | Max Bartolo | Julia Kreutzer | Ahmet Üstün | Marzieh Fadaee | Sara Hooker
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Datasets are foundational to many breakthroughs in modern artificial intelligence. Many recent achievements in the space of natural language processing (NLP) can be attributed to the fine-tuning of pre-trained models on a diverse set of tasks that enables a large language model (LLM) to respond to instructions. Instruction fine-tuning (IFT) requires specifically constructed and annotated datasets. However, existing datasets are almost all in the English language. In this work, our primary goal is to bridge the language gap by building a human-curated instruction-following dataset spanning 65 languages. We worked with fluent speakers of languages from around the world to collect natural instances of instructions and completions. Furthermore, we create the most extensive multilingual collection to date, comprising 513 million instances through templating and augmenting existing datasets across 114 languages. In total, we contribute three key resources: we develop and open-source the Aya Dataset, the Aya Collection, and the Aya Evaluation Suite. The Aya initiative also serves as a valuable case study in participatory research, involving collaborators from 119 countries. We see this as an important framework for future research collaborations that aim to bridge gaps in resources.
Co-authors
- Aisha Alaagib 1
- Emad Alghamdi 1
- Zaid Alyafeai 1
- Max Bartolo 1
- Vu Chien 1
- Daniel D’souza 1
- Irem Ergun 1
- Hakimeh Fadaee 1
- Marzieh Fadaee 1
- Fartash Faghri 1
- Joshua P Gardner 1
- Sebastian Gehrmann 1
- Surya Guthikonda 1
- David Leo Wright Hall 1
- Ramith Hettiarachchi 1
- Sara Hooker 1
- Doug Kang 1
- Börje F. Karlsson 1
- Wei-Yin Ko 1
- Julia Kreutzer 1
- Dominik Krzemiński 1
- Jeffrey Li 1
- Chun-Liang Li 1
- Percy Liang 1
- Marina Machado 1
- Abinaya Mahendiran 1
- Deividas Mataciunas 1
- Luisa Moura 1
- Oshan Mudannayake 1
- Niklas Muennighoff 1
- Ifeoma Okoh 1
- Laura O’Mahony 1
- Jay Patel 1
- Hadi Pouransari 1
- Sebastian Ruder 1
- Ludwig Schmidt 1
- Fangping Shi 1
- Karanjeet Singh 1
- Shivalika Singh 1
- Oncel Tuzel 1
- Freddie Vargus 1
- Joseph Wilson 1
- Mike Zhang 1
- Ahmet Üstün 1