Gili Lior
2025
PromptSuite: A Task-Agnostic Framework for Multi-Prompt Generation
Eliya Habba | Noam Dahan | Gili Lior | Gabriel Stanovsky
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Evaluating LLMs with a single prompt has proven unreliable, with small changes leading to significant performance differences. However, generating the prompt variations needed for a more robust multi-prompt evaluation is challenging, limiting its adoption in practice. To address this, we introduce PromptSuite, a framework that enables the automatic generation of various prompts. PromptSuite is flexible – working out of the box on a wide range of tasks and benchmarks. It follows a modular prompt design, allowing controlled perturbations to each component, and is extensible, supporting the addition of new components and perturbation types. Through a series of case studies, we show that PromptSuite provides meaningful variations to support strong evaluation practices. All resources, including the Python API, source code, user-friendly web interface, and demonstration video, are available at: https://eliyahabba.github.io/PromptSuite/.
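The sketch below illustrates the general idea of a modular prompt design with controlled perturbations per component; it does not use PromptSuite's actual Python API, and all names in it (COMPONENTS, generate_variants, the example surface forms) are hypothetical.

```python
import random

# Hypothetical component pools, NOT PromptSuite's real API: each prompt
# component has several meaning-preserving surface forms.
COMPONENTS = {
    "instruction": [
        "Answer the following question.",
        "Please answer the question below.",
        "Respond to the question that follows.",
    ],
    "separator": ["\n", "\n\n", " "],
    "answer_prefix": ["Answer:", "A:", "The answer is"],
}

def generate_variants(question, n=5, seed=0):
    """Sample n prompt variants by independently choosing a surface form
    for each component while keeping the question itself fixed."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        instruction = rng.choice(COMPONENTS["instruction"])
        sep = rng.choice(COMPONENTS["separator"])
        prefix = rng.choice(COMPONENTS["answer_prefix"])
        variants.append(f"{instruction}{sep}{question}{sep}{prefix}")
    return variants

for v in generate_variants("What is the capital of France?", n=3):
    print(repr(v))
```

Because each component is perturbed independently, a small pool of surface forms already yields a combinatorially large space of controlled prompt variants for multi-prompt evaluation.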
ReliableEval: A Recipe for Stochastic LLM Evaluation via Method of Moments
Gili Lior | Eliya Habba | Shahar Levy | Avi Caciularu | Gabriel Stanovsky
Findings of the Association for Computational Linguistics: EMNLP 2025
LLMs are highly sensitive to prompt phrasing, yet standard benchmarks typically report performance using a single prompt, raising concerns about the reliability of such evaluations. In this work, we argue for a stochastic method of moments evaluation over the space of meaning-preserving prompt perturbations. We introduce a formal definition of *reliable evaluation* that accounts for prompt sensitivity, and propose ReliableEval, a method for estimating the number of prompt resamplings needed to obtain meaningful results. Using our framework, we stochastically evaluate five frontier LLMs and find that even top-performing models like GPT-4o and Claude-3.7-Sonnet exhibit substantial prompt sensitivity. Our approach is model-, task-, and metric-agnostic, offering a recipe for meaningful and robust LLM evaluation.
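As a rough illustration of the moment-based reasoning (not the paper's actual estimator): given per-paraphrase scores from a small pilot run, a normal approximation gives the number of meaning-preserving paraphrases needed to pin the mean score down to a chosen tolerance. The function name, tolerance, and pilot data below are illustrative assumptions.

```python
import math
import statistics

def estimate_required_resamples(pilot_scores, tolerance=0.02, z=1.96):
    """Estimate how many prompt paraphrases are needed so the mean score
    lies within `tolerance` of the true mean at ~95% confidence, using
    the sample variance from a small pilot run (normal approximation)."""
    variance = statistics.variance(pilot_scores)
    # n >= (z * sigma / tolerance)^2
    n = (z ** 2 * variance) / (tolerance ** 2)
    return max(len(pilot_scores), math.ceil(n))

# Accuracy of one model on 10 paraphrases of the same prompt (made-up pilot data).
pilot = [0.71, 0.64, 0.69, 0.58, 0.73, 0.66, 0.70, 0.61, 0.68, 0.65]
print(estimate_required_resamples(pilot))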
2024
Leveraging Collection-Wide Similarities for Unsupervised Document Structure Extraction
Gili Lior | Yoav Goldberg | Gabriel Stanovsky
Findings of the Association for Computational Linguistics: ACL 2024
Document collections of various domains, e.g., legal, medical, or financial, often share some underlying collection-wide structure, which captures information that can aid both human users and structure-aware models. We propose to identify the typical structure of documents within a collection, which requires capturing recurring topics across the collection, while abstracting over arbitrary header paraphrases, and grounding each topic to its respective document locations. These requirements pose several challenges: headers that mark recurring topics frequently differ in phrasing, certain section headers are unique to individual documents and do not reflect the typical structure, and the order of topics can vary between documents. To address these challenges, we develop an unsupervised graph-based method that leverages both inter- and intra-document similarities to extract the underlying collection-wide structure. Our evaluations on three diverse domains in both English and Hebrew indicate that our method extracts meaningful collection-wide structure, and we hope that future work will leverage our method for multi-document applications and structure-aware models.
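A very rough sketch of the collection-wide intuition, under assumptions not taken from the paper: section headers are embedded with an off-the-shelf sentence encoder, cross-document paraphrase edges are added above a similarity threshold, and connected components stand in for recurring topics. The model name, threshold, and helper function are illustrative; the paper's actual graph construction, its use of intra-document similarities, and the grounding step are richer than this.

```python
from itertools import combinations
import networkx as nx
from sentence_transformers import SentenceTransformer, util

def cluster_headers(docs, threshold=0.7):
    """docs: list of lists of section headers, one inner list per document.
    Returns clusters of headers that likely denote the same recurring topic."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    headers = [(d, h) for d, doc in enumerate(docs) for h in doc]
    embeddings = model.encode([h for _, h in headers], convert_to_tensor=True)

    graph = nx.Graph()
    graph.add_nodes_from(range(len(headers)))
    for i, j in combinations(range(len(headers)), 2):
        sim = util.cos_sim(embeddings[i], embeddings[j]).item()
        # Connect paraphrased headers from *different* documents; headers
        # unique to a single document stay isolated and are filtered out.
        if headers[i][0] != headers[j][0] and sim >= threshold:
            graph.add_edge(i, j)

    return [[headers[i][1] for i in comp]
            for comp in nx.connected_components(graph) if len(comp) > 1]
```

Clusters that recur across many documents approximate the collection's typical structure, while singleton headers are treated as document-specific noise.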