Lovish Madaan
2025
Lost in Inference: Rediscovering the Role of Natural Language Inference for Large Language Models
Lovish Madaan | David Esiobu | Pontus Stenetorp | Barbara Plank | Dieuwke Hupkes
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
In the recent past, a popular way of evaluating natural language understanding (NLU) was to consider a model’s ability to perform natural language inference (NLI) tasks. In this paper, we investigate whether NLI tasks, which are rarely used for LLM evaluation, can still be informative for evaluating LLMs. Focusing on five different NLI benchmarks across six models of different scales, we investigate whether they are able to discriminate between models of different size and quality and how their accuracies develop during training. Furthermore, we investigate the extent to which the softmax distributions of models align with human distributions in cases where statements are ambiguous or vague. Overall, our results paint a positive picture for the NLI tasks: we find that they are able to discriminate well between models at various stages of training, yet are not (all) saturated. Furthermore, we find that while the similarity of model distributions with human label distributions increases with scale, it is still much higher than the similarity between two populations of humans, making it a potentially interesting statistic to consider.
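The abstract does not name the metric used to compare model and human distributions; purely as an illustration, the sketch below compares a model's softmax over the three NLI labels with an empirical human label distribution using Jensen-Shannon distance, one plausible choice. All counts, logits, and label orderings are hypothetical and stand in for real benchmark data.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical ambiguous NLI item: 50 annotators vote over the labels
# (entailment, neutral, contradiction); the counts below are made up.
human_counts = np.array([12.0, 30.0, 8.0])
human_dist = human_counts / human_counts.sum()

# Hypothetical model logits for the same three labels.
model_logits = np.array([1.2, 2.9, 0.4])
model_dist = np.exp(model_logits - model_logits.max())
model_dist /= model_dist.sum()  # softmax over the label set

# Jensen-Shannon distance with base-2 log lies in [0, 1]; lower = closer to the human distribution.
print("JS distance:", jensenshannon(human_dist, model_dist, base=2))
```

The same statistic could be computed between the label distributions of two disjoint annotator groups to obtain the human-human reference point mentioned in the abstract.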
2020
Transfer Learning for Related Languages: Submissions to the WMT20 Similar Language Translation Task
Lovish Madaan | Soumya Sharma | Parag Singla
Proceedings of the Fifth Conference on Machine Translation
In this paper, we describe IIT Delhi’s submissions to the WMT 2020 task on Similar Language Translation for four language directions: Hindi <-> Marathi and Spanish <-> Portuguese. We try out three different model settings for the translation task and select our primary and contrastive submissions based on the performance of these three models. For our best submissions, we fine-tune the mBART model on the parallel data provided for the task. mBART is pre-trained with self-supervised objectives on large amounts of monolingual data covering many languages. Overall, our models rank in the top four of all systems for the submitted language pairs, placing first in Spanish -> Portuguese.
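As a rough illustration of the fine-tuning setup described above, and not the authors' exact recipe, the sketch below fine-tunes an mBART checkpoint on two toy Spanish-Portuguese sentence pairs with Hugging Face transformers. The facebook/mbart-large-50 checkpoint, language codes, learning rate, and example sentences are all assumptions made for the sketch; the submission's actual checkpoint, data processing, and hyperparameters are not given in this abstract.

```python
import torch
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

# Assumed checkpoint and language codes, chosen only for illustration.
model_name = "facebook/mbart-large-50"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="es_XX", tgt_lang="pt_XX")
model = MBartForConditionalGeneration.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Toy Spanish -> Portuguese pairs standing in for the task's parallel data.
parallel_data = [
    ("La casa es azul.", "A casa é azul."),
    ("Gracias por tu ayuda.", "Obrigado pela sua ajuda."),
]

model.train()
for src, tgt in parallel_data:
    batch = tokenizer(src, text_target=tgt, return_tensors="pt")
    loss = model(**batch).loss  # cross-entropy over the target tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# After fine-tuning, translate by forcing the Portuguese language token as BOS.
model.eval()
enc = tokenizer("El tiempo es agradable hoy.", return_tensors="pt")
out = model.generate(**enc, forced_bos_token_id=tokenizer.lang_code_to_id["pt_XX"])
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```

In practice one would batch the parallel corpus, add a learning-rate schedule, and evaluate with BLEU on the task's dev set, but the core objective is the standard sequence-to-sequence cross-entropy shown here.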