Gantavya Bhatt


2024

An Experimental Design Framework for Label-Efficient Supervised Finetuning of Large Language Models
Gantavya Bhatt | Yifang Chen | Arnav Das | Jifan Zhang | Sang Truong | Stephen Mussmann | Yinglun Zhu | Jeff Bilmes | Simon Du | Kevin Jamieson | Jordan Ash | Robert Nowak
Findings of the Association for Computational Linguistics: ACL 2024

Supervised finetuning (SFT) on instruction datasets has played a crucial role in achieving the remarkable zero-shot generalization capabilities observed in modern large language models (LLMs). However, the annotation efforts required to produce high-quality responses for instructions are becoming prohibitively expensive, especially as the number of tasks spanned by instruction datasets continues to increase. Active learning is effective in identifying useful subsets of samples to annotate from an unlabeled pool, but its high computational cost remains a barrier to its widespread applicability in the context of LLMs. To mitigate the annotation cost of SFT and circumvent the computational bottlenecks of active learning, we propose using experimental design. Experimental design techniques select the most informative samples to label, and typically maximize some notion of uncertainty and/or diversity. In our work, we implement a framework that evaluates several existing and novel experimental design techniques and find that these methods consistently yield significant gains in label efficiency with little computational overhead. On generative tasks, to reach the same generalization performance, our methods save 50% of the annotation cost compared to random sampling.
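The abstract's idea of selecting prompts that maximize uncertainty and/or diversity can be illustrated with a small sketch. The snippet below is a hypothetical greedy selector, not the paper's actual method: it starts from the most uncertain prompt and then adds prompts that are both uncertain and far, in embedding space, from those already chosen. The embedding matrix, the per-prompt uncertainty scores, and the weighting rule are all assumptions made purely for illustration.

```python
# Hypothetical sketch of uncertainty/diversity-weighted subset selection
# (k-center-style greedy); NOT the paper's exact experimental design method.
import numpy as np

def select_for_annotation(embeddings: np.ndarray,
                          uncertainty: np.ndarray,
                          budget: int) -> list[int]:
    """Pick `budget` indices from an unlabeled pool of prompts.

    embeddings:  (n, d) prompt representations (assumed given, e.g. from the base LLM).
    uncertainty: (n,) per-prompt scores (assumed given, e.g. mean token entropy).
    """
    n = embeddings.shape[0]
    selected: list[int] = []
    min_dist = np.full(n, np.inf)  # distance to the nearest already-selected prompt
    for _ in range(budget):
        if not selected:
            # Seed with the single most uncertain prompt.
            idx = int(np.argmax(uncertainty))
        else:
            # Favor prompts far from the selected set, weighted by uncertainty.
            idx = int(np.argmax(min_dist * uncertainty))
        selected.append(idx)
        dist_to_new = np.linalg.norm(embeddings - embeddings[idx], axis=1)
        min_dist = np.minimum(min_dist, dist_to_new)
        min_dist[idx] = 0.0  # prevent re-selection
    return selected

# Toy usage: 1000 pooled prompts with 32-dim embeddings, annotation budget of 50.
rng = np.random.default_rng(0)
picks = select_for_annotation(rng.normal(size=(1000, 32)),
                              rng.uniform(size=1000), budget=50)
```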

2021

Can RNNs trained on harder subject-verb agreement instances still perform well on easier ones?
Hritik Bansal | Gantavya Bhatt | Sumeet Agarwal
Proceedings of the Society for Computation in Linguistics 2021

2020

How much complexity does an RNN architecture need to learn syntax-sensitive dependencies?
Gantavya Bhatt | Hritik Bansal | Rishubh Singh | Sumeet Agarwal
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop

Long short-term memory (LSTM) networks and their variants are capable of encapsulating long-range dependencies, which is evident from their performance on a variety of linguistic tasks. On the other hand, simple recurrent networks (SRNs), which appear more biologically grounded in terms of synaptic connections, have generally been less successful at capturing long-range dependencies as well as the loci of grammatical errors in an unsupervised setting. In this paper, we seek to develop models that bridge the gap between biological plausibility and linguistic competence. We propose a new architecture, the Decay RNN, which incorporates the decaying nature of neuronal activations and models the excitatory and inhibitory connections in a population of neurons. Besides its biological inspiration, our model also shows competitive performance relative to LSTMs on subject-verb agreement, sentence grammaticality, and language modeling tasks. These results provide some pointers towards probing the nature of the inductive biases required for RNN architectures to model linguistic phenomena successfully.
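Based only on the description above, the decay idea can be sketched as a leaky recurrent update in which the hidden state is a convex mix of the decayed previous activation and the current nonlinear drive. The snippet below is a rough illustration under that assumption; the paper's actual Decay RNN cell, including how excitatory and inhibitory connections are enforced, may differ.

```python
# Rough sketch of a "decay"-style recurrent update, inferred from the abstract;
# the published Decay RNN formulation may differ in its details.
import numpy as np

class DecayRNNCell:
    def __init__(self, input_size: int, hidden_size: int,
                 alpha: float = 0.9, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.alpha = alpha  # decay rate applied to the previous activation
        self.W = rng.normal(0, 0.1, (hidden_size, input_size))   # input weights
        self.U = rng.normal(0, 0.1, (hidden_size, hidden_size))  # recurrent weights
        self.b = np.zeros(hidden_size)

    def step(self, x: np.ndarray, h: np.ndarray) -> np.ndarray:
        # The new state mixes the decayed previous state with the current
        # nonlinear drive, mimicking decaying neuronal activations.
        drive = np.tanh(self.W @ x + self.U @ h + self.b)
        return self.alpha * h + (1.0 - self.alpha) * drive

# Toy usage: run a length-5 sequence of 8-dim inputs through a 16-unit cell.
cell = DecayRNNCell(input_size=8, hidden_size=16)
h = np.zeros(16)
for x in np.random.default_rng(1).normal(size=(5, 8)):
    h = cell.step(x, h)
```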