Sanjiv Kumar
2024
Regression Aware Inference with LLMs
Michal Lukasik | Harikrishna Narasimhan | Aditya Krishna Menon | Felix Yu | Sanjiv Kumar
Findings of the Association for Computational Linguistics: EMNLP 2024
Large language models (LLMs) have shown strong results on a range of applications, including regression and scoring tasks. Typically, one obtains outputs from an LLM via autoregressive sampling from the model’s output distribution. We show that this inference strategy can be sub-optimal for common regression and scoring evaluation metrics. As a remedy, we build on prior work on Minimum Bayes Risk decoding, and propose alternate inference strategies that estimate the Bayes-optimal solution for regression and scoring metrics in closed-form from sampled responses. We show that our proposal significantly improves over baselines across datasets and models.
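A minimal sketch of the regression-aware idea, assuming the standard facts that the posterior mean minimizes expected squared error and the posterior median minimizes expected absolute error; the `sample_fn` interface, function name, and parsing logic below are illustrative assumptions, not the paper's implementation.

```python
import statistics

def regression_aware_predict(sample_fn, prompt, metric="squared_error", n=32):
    """Estimate a Bayes-optimal numeric answer from LLM samples.

    sample_fn(prompt) is an assumed interface returning one sampled
    completion string; each sample is parsed as a number, then the
    samples are aggregated in closed form for the target metric.
    """
    values = []
    for _ in range(n):
        text = sample_fn(prompt)
        try:
            values.append(float(text.strip()))
        except ValueError:
            continue  # skip samples that do not parse as numbers
    if not values:
        raise ValueError("no numeric samples to aggregate")
    if metric == "squared_error":
        # The posterior mean minimizes expected squared error.
        return statistics.fmean(values)
    if metric == "absolute_error":
        # The posterior median minimizes expected absolute error.
        return statistics.median(values)
    raise ValueError(f"unsupported metric: {metric}")
```

The point of the aggregation step is that a single autoregressive sample targets the mode of the output distribution, whereas squared-error and absolute-error metrics are optimized by the mean and median of that distribution, respectively.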
2023
Large Language Models with Controllable Working Memory
Daliang Li | Ankit Singh Rawat | Manzil Zaheer | Xin Wang | Michal Lukasik | Andreas Veit | Felix Yu | Sanjiv Kumar
Findings of the Association for Computational Linguistics: ACL 2023
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP), partly owing to the massive amounts of world knowledge they memorize during pretraining. While many downstream applications provide the model with an informational context to aid its underlying task, how the model’s world knowledge interacts with the factual information presented in the context remains underexplored. As a desirable behavior, an LLM should give precedence to the context whenever it contains task-relevant information that conflicts with the model’s memorized knowledge. This enables model predictions to be grounded in the context, which then facilitates updating specific model predictions without frequently retraining the model. By contrast, when the context is irrelevant to the task, the model should ignore it and fall back on its internal knowledge. In this paper, we undertake a first joint study of these two properties, namely controllability and robustness, in the context of LLMs. We demonstrate that state-of-the-art T5 and PaLM models (both pretrained and finetuned) can exhibit low controllability and robustness, and that neither improves with increasing model size. As a solution, we propose a simple yet effective method, knowledge aware finetuning (KAFT), to strengthen both controllability and robustness by injecting counterfactual and irrelevant contexts into standard supervised datasets. Our comprehensive evaluation showcases the utility of KAFT across model architectures and sizes.
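A hedged sketch of the kind of data augmentation the abstract describes: each supervised example is paired with a counterfactual-context variant (the label follows the edited context) and an irrelevant-context variant (the label stays the original answer). The `(context, question, answer)` schema and all helper names are assumptions for illustration, not the paper's code.

```python
import random

def kaft_augment(dataset, irrelevant_contexts, answer_pool):
    """Build a KAFT-style training set from (context, question, answer) dicts.

    Counterfactual rows teach controllability: the label follows the
    (edited) context. Irrelevant-context rows teach robustness: the
    label stays the original answer. Schema is illustrative; assumes
    answer_pool contains alternatives beyond each example's answer.
    """
    augmented = []
    for ex in dataset:
        # 1) Original example, grounded in its true context.
        augmented.append(dict(ex))
        # 2) Counterfactual: swap the answer inside the context and
        #    relabel, so the model learns to prefer the context.
        fake = random.choice([a for a in answer_pool if a != ex["answer"]])
        augmented.append({
            "context": ex["context"].replace(ex["answer"], fake),
            "question": ex["question"],
            "answer": fake,
        })
        # 3) Irrelevant context: the model should ignore it and fall
        #    back on its parametric knowledge, keeping the true answer.
        augmented.append({
            "context": random.choice(irrelevant_contexts),
            "question": ex["question"],
            "answer": ex["answer"],
        })
    return augmented
```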
2020
Semantic Label Smoothing for Sequence to Sequence Problems
Michal Lukasik | Himanshu Jain | Aditya Menon | Seungyeon Kim | Srinadh Bhojanapalli | Felix Yu | Sanjiv Kumar
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Label smoothing has been shown to be an effective regularization strategy in classification that prevents overfitting and helps with label de-noising. However, extending such methods directly to seq2seq settings, such as machine translation, is challenging: the large target output space of such problems makes it intractable to apply label smoothing over all possible outputs. Most existing approaches for seq2seq settings either apply token-level smoothing, or smooth over sequences generated by randomly substituting tokens in the target sequence. Unlike these works, in this paper we propose a technique that smooths over well-formed, relevant sequences that not only have sufficient n-gram overlap with the target sequence, but are also semantically similar. Our method shows a consistent and significant improvement over state-of-the-art techniques on different datasets.
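A rough sketch of sequence-level smoothing, assuming a pool of candidate sequences and a similarity scorer are available; the mass-splitting scheme below is one plausible instantiation of the idea, not necessarily the paper's method.

```python
def semantic_smoothing_targets(target, candidates, similarity, epsilon=0.1):
    """Distribute label mass over semantically related sequences.

    target: the reference sequence; candidates: related, well-formed
    sequences (e.g. retrieved paraphrases); similarity: a function
    scoring how close a candidate is to the target (e.g. n-gram
    overlap combined with an embedding score). Names are illustrative.
    """
    scores = {c: similarity(target, c) for c in candidates if c != target}
    total = sum(scores.values())
    dist = {target: 1.0 - epsilon}  # keep most mass on the reference
    if total > 0:
        for cand, s in scores.items():
            dist[cand] = epsilon * s / total  # share epsilon by similarity
    else:
        dist[target] = 1.0  # no usable candidates: fall back to one-hot
    return dist
```

The design mirrors standard label smoothing, except the epsilon mass is spread only over well-formed, semantically close sequences rather than uniformly over the whole output space.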