Lisa Alazraki
2026
Improving the OOD Performance of Closed-Source LLMs on NLI Through Strategic Data Selection
Joe Stacey | Lisa Alazraki | Aran Ubhi | Beyza Ermis | Aaron Mueller | Marek Rei
Findings of the Association for Computational Linguistics: EACL 2026
We investigate the robustness of fine-tuned Large Language Models (LLMs) on the task of Natural Language Inference (NLI), finding that the in-distribution gains from fine-tuning come at the cost of a large drop in out-of-distribution (OOD) performance. Despite the widespread use of closed-source LLMs, there are no robustness mitigation methods that work under their API fine-tuning constraints: existing methods typically require changing the fine-tuning process or large-scale data augmentation, approaches that are infeasible or cost-prohibitive for closed-source models. To address this, we propose strategically selecting the NLI fine-tuning data, prioritising more complex examples or replacing existing training examples with LLM-generated data. Prioritising more complex training examples improves performance on challenging OOD NLI datasets, while training with synthetic data leads to substantial improvements on easier OOD datasets. We find that synthetic examples are often too simple, and that by prompting LLMs to create more complex synthetic data we can improve performance on both easy and challenging OOD datasets. Finally, we show that recent autoregressive LLMs are substantially more robust to distributional shift than encoder models, and should be the preferred baseline for future research.
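A minimal sketch of what complexity-based selection under a fixed API fine-tuning budget could look like. The token-length complexity proxy and the function names here are illustrative assumptions, not the paper's actual criterion:

```python
# Hypothetical sketch: spend a fixed API fine-tuning budget on the most
# complex NLI examples. The length-based complexity proxy is an assumption;
# the paper's actual selection criterion may differ.
from dataclasses import dataclass

@dataclass
class NLIExample:
    premise: str
    hypothesis: str
    label: str  # "entailment" | "neutral" | "contradiction"

def complexity(ex: NLIExample) -> int:
    # Crude proxy: longer premise/hypothesis pairs tend to be harder.
    return len(ex.premise.split()) + len(ex.hypothesis.split())

def select_training_data(pool: list[NLIExample], budget: int) -> list[NLIExample]:
    # Keep only the `budget` most complex examples from the candidate pool.
    return sorted(pool, key=complexity, reverse=True)[:budget]
```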
2025
No Need for Explanations: LLMs can implicitly learn from mistakes in-context
Lisa Alazraki | Maximilian Mozes | Jon Ander Campos | Tan Yi-Chern | Marek Rei | Max Bartolo
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Showing incorrect answers to Large Language Models (LLMs) is a popular strategy to improve their performance in reasoning-intensive tasks. It is widely assumed that, in order to be helpful, the incorrect answers must be accompanied by comprehensive rationales, explicitly detailing where the mistakes are and how to correct them. However, in this work we present a counterintuitive finding: we observe that LLMs perform *better* in math reasoning tasks when these rationales are eliminated from the context and models are left to infer on their own what makes an incorrect answer flawed. This approach also substantially outperforms chain-of-thought prompting in our evaluations. These results are consistent across LLMs of different sizes and varying reasoning abilities. To gain an understanding of *why* LLMs learn from mistakes more effectively without explicit corrective rationales, we perform a thorough analysis, investigating changes in context length and answer diversity between different prompting strategies, and their effect on performance. We also examine evidence of overfitting to the in-context rationales when these are provided, and study the extent to which LLMs are able to autonomously infer high-quality corrective rationales given only incorrect answers as input. We find evidence that, while incorrect answers are more beneficial for LLM learning than additional diverse *correct* answers, explicit corrective rationales over-constrain the model, thus limiting those benefits.
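To make the contrast concrete, here is a hedged sketch of the two prompting conditions being compared; the prompt wording is illustrative, not the authors' exact template:

```python
# Illustrative prompts: incorrect answers shown without vs. with corrective
# rationales. The wording is an assumption, not the paper's template.
def prompt_mistakes_only(question: str, wrong_answers: list[str]) -> str:
    lines = [f"Question: {question}",
             "Here are incorrect answers to this question:"]
    lines += [f"- {a}" for a in wrong_answers]
    # No rationales: the model must infer on its own why each answer is flawed.
    lines.append("Now solve the question correctly.")
    return "\n".join(lines)

def prompt_mistakes_with_rationales(question: str,
                                    wrong_answers: list[str],
                                    rationales: list[str]) -> str:
    lines = [f"Question: {question}",
             "Here are incorrect answers and why they are wrong:"]
    lines += [f"- {a} (mistake: {r})" for a, r in zip(wrong_answers, rationales)]
    lines.append("Now solve the question correctly.")
    return "\n".join(lines)
```

The paper's counterintuitive finding is that the first, rationale-free condition performs better.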
Meta-Reasoning Improves Tool Use in Large Language Models
Lisa Alazraki | Marek Rei
Findings of the Association for Computational Linguistics: NAACL 2025
External tools help large language models succeed at tasks where they would otherwise typically fail. In existing frameworks, tool choice at test time relies on naive greedy decoding, regardless of whether the model has been fine-tuned on tool-annotated data or prompted with in-context examples. In contrast, we find that gathering a suitable set of candidate tools and choosing among them is more likely to yield an optimal selection. We present Tool selECTion via meta-reasONing (TECTON), a two-phase system that first *reasons* over a task and outputs candidate tools using a custom fine-tuned language modelling head. Then, with the custom head disabled, it *meta-reasons* (i.e., it reasons over the previous reasoning process) to make a final choice. We show that TECTON yields substantial gains, both in-distribution and out-of-distribution, on a range of math reasoning datasets.
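As a rough illustration of the two-phase idea, the skeleton below separates candidate generation from the final meta-reasoning step. Both callables are placeholders: `propose_candidates` stands in for the paper's fine-tuned tool head, and `generate` for a plain LLM call:

```python
# Two-phase tool selection in the spirit of TECTON. Both callables are
# assumed placeholders, not the paper's implementation.
from typing import Callable

def select_tool(task: str,
                propose_candidates: Callable[[str], list[str]],
                generate: Callable[[str], str]) -> str:
    # Phase 1 (reasoning): gather a set of candidate tools for the task,
    # here via a stand-in for the fine-tuned language-modelling head.
    candidates = propose_candidates(task)
    # Phase 2 (meta-reasoning): with the custom head disabled, reason over
    # the candidates to commit to one final tool.
    prompt = (
        f"Task: {task}\n"
        f"Candidate tools: {', '.join(candidates)}\n"
        "Reason about which candidate best solves the task, then name it."
    )
    return generate(prompt)
```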