Maria Lomeli
2024
TOOLVERIFIER: Generalization to New Tools via Self-Verification
Dheeraj Mekala | Jason Weston | Jack Lanchantin | Roberta Raileanu | Maria Lomeli | Jingbo Shang | Jane Dwivedi-Yu
Findings of the Association for Computational Linguistics: EMNLP 2024
Teaching language models to use tools is an important milestone towards building general assistants, but remains an open problem. While there has been significant progress on learning to use specific tools via fine-tuning, language models still struggle with learning how to robustly use new tools from only a few demonstrations. In this work we introduce a self-verification method which distinguishes between close candidates by self-asking contrastive questions during (1) tool selection; and (2) parameter generation. We construct synthetic, high-quality, self-generated data for this goal using Llama-2 70B, which we intend to release publicly. Extensive experiments on 4 tasks from the ToolBench benchmark, consisting of 17 unseen tools, demonstrate an average improvement of 22% over few-shot baselines, even in scenarios where the distinctions between candidate tools are finely nuanced.
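As a rough illustration of the contrastive self-verification step described in the abstract, the sketch below asks a language model a pairwise question to distinguish between two close candidate tools. The prompt wording, the tool-metadata format, and the `llm` callable are illustrative assumptions, not the paper's released implementation.

```python
# Illustrative sketch of self-verification for tool selection.
# Assumptions: the prompt wording, tool metadata format, and `llm` callable
# are hypothetical; they are not taken from the TOOLVERIFIER codebase.
from typing import Callable, Dict, List


def verify_tool_choice(
    llm: Callable[[str], str],          # any text-in/text-out LLM interface
    user_query: str,
    candidates: List[Dict[str, str]],   # each: {"name": ..., "description": ...}
) -> str:
    """Pick between close candidate tools by asking contrastive questions."""
    # Start from the top-ranked candidate and compare it pairwise against the rest.
    best = candidates[0]
    for challenger in candidates[1:]:
        prompt = (
            "You must choose the single best tool for the user request.\n"
            f"Request: {user_query}\n\n"
            f"Tool A: {best['name']} -- {best['description']}\n"
            f"Tool B: {challenger['name']} -- {challenger['description']}\n\n"
            "Which tool is more appropriate, A or B? Answer with 'A' or 'B' "
            "and one sentence explaining the key difference."
        )
        answer = llm(prompt).strip().upper()
        if answer.startswith("B"):
            best = challenger
    return best["name"]
```

The paper applies the same kind of contrastive check when generating the selected tool's parameters; the pairwise-question pattern above could in principle be reused for that step as well.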
EditEval: An Instruction-Based Benchmark for Text Improvements
Jane Dwivedi-Yu | Timo Schick | Zhengbao Jiang | Maria Lomeli | Patrick Lewis | Gautier Izacard | Edouard Grave | Sebastian Riedel | Fabio Petroni
Proceedings of the 28th Conference on Computational Natural Language Learning
Evaluation of text generation to date has primarily focused on content created sequentially, rather than improvements on a piece of text. Writing, however, is naturally an iterative and incremental process that requires expertise in different modular skills such as fixing outdated information or making the writing style more consistent. Even so, comprehensive evaluation of a model’s capacity to perform these skills and the ability to edit remains sparse. This work introduces EditEval: an instruction-based benchmark and evaluation suite that leverages high-quality existing and new datasets in English for the automatic evaluation of editing capabilities, such as making text more cohesive and paraphrasing. We evaluate several pre-trained models, finding that InstructGPT and PEER on average perform the best, but that most baselines fall below the supervised state-of-the-art, particularly when neutralizing and updating information. Our analysis also shows that commonly used metrics for editing tasks do not always correlate well, and that prompts leading to the strongest performance do not necessarily elicit strong performance across different models. Through the release of this benchmark (code and data available at https://github.com/facebookresearch/EditEval) and a publicly available leaderboard challenge, we hope to unlock future work on developing models more capable of controllable and iterative editing.
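As a rough sketch of how a single editing instruction might be scored in an instruction-based setup like the one described above, the example below computes SARI, a common text-editing metric, via the Hugging Face `evaluate` package. The record format and the `edit_model` placeholder are hypothetical and not taken from the EditEval codebase.

```python
# Minimal sketch of scoring one instruction-based edit with SARI.
# Assumptions: the record layout and `edit_model` are hypothetical; only the
# Hugging Face `evaluate` SARI metric is a real dependency.
import evaluate

sari = evaluate.load("sari")

# A hypothetical EditEval-style record: an instruction, a source text,
# and one or more human reference edits.
record = {
    "instruction": "Fix the grammatical errors in this sentence.",
    "source": "She go to the store yesterday and buy milk.",
    "references": ["She went to the store yesterday and bought milk."],
}


def edit_model(instruction: str, source: str) -> str:
    # Placeholder for any instruction-following editor (e.g. a prompted LLM).
    return "She went to the store yesterday and bought milk."


prediction = edit_model(record["instruction"], record["source"])
score = sari.compute(
    sources=[record["source"]],
    predictions=[prediction],
    references=[record["references"]],   # list of reference lists, one per example
)
print(score)  # e.g. {'sari': ...}
```

In practice such a loop would run over every task in the suite and report several metrics per task, since, as the abstract notes, individual editing metrics do not always correlate well with one another.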
Co-authors
- Jane Dwivedi-Yu 2
- Dheeraj Mekala 1
- Jason Weston 1
- Jack Lanchantin 1
- Roberta Raileanu 1