Mayank Agarwal


2024

Granite-Function Calling Model: Introducing Function Calling Abilities via Multi-task Learning of Granular Tasks
Ibrahim Abdelaziz | Kinjal Basu | Mayank Agarwal | Sadhana Kumaravel | Matthew Stallone | Rameswar Panda | Yara Rizk | G P Shrivatsa Bhargav | Maxwell Crouse | Chulaka Gunasekara | Shajith Ikbal | Sachindra Joshi | Hima Karanam | Vineet Kumar | Asim Munawar | Sumit Neelam | Dinesh Raghu | Udit Sharma | Adriana Meza Soria | Dheeraj Sreedhar | Praveen Venkateswaran | Merve Unuvar | David Daniel Cox | Salim Roukos | Luis A. Lastras | Pavan Kapanipathi
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track

An emergent research trend explores the use of Large Language Models (LLMs) as the backbone of agentic systems (e.g., SWE-Bench, Agent-Bench). To fulfill LLMs’ potential as autonomous agents, they must be able to identify, call, and interact with a variety of external tools and application programming interfaces (APIs). This capability of LLMs, commonly termed function calling, leads to a myriad of advantages, such as access to current and domain-specific information in databases and the outsourcing of tasks that can be reliably performed by tools. In this work, we introduce Granite-20B-FunctionCalling, a model trained with a multi-task approach on seven fundamental tasks encompassed in function calling. Our comprehensive evaluation on multiple out-of-domain datasets, which compares Granite-20B-FunctionCalling to more than 15 of the best proprietary and open models, shows that Granite-20B-FunctionCalling generalizes better across multiple tasks in seven different evaluation benchmarks. Moreover, it achieves the best performance among open models and ranks among the top models on the Berkeley Function Calling Leaderboard (BFCL).
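
The function-calling pattern described in this abstract can be illustrated with a small, hypothetical sketch (this is not the Granite training recipe or its prompt format): the model emits a structured call naming a tool and its arguments, and the application parses that call and dispatches it to a registered function. The tool name, schema, and dispatch logic below are illustrative assumptions only.

```python
import json

# Hypothetical tool registry; the tool name and schema are illustrative only.
def get_weather(city: str) -> dict:
    """Stand-in for an external API or database lookup."""
    return {"city": city, "forecast": "sunny", "temp_c": 24}

TOOLS = {"get_weather": get_weather}

def dispatch(model_output: str) -> dict:
    """Parse a model-emitted JSON function call and invoke the matching tool."""
    call = json.loads(model_output)  # e.g. {"name": "get_weather", "arguments": {...}}
    return TOOLS[call["name"]](**call["arguments"])

# Simulated model output in a common function-calling format.
print(dispatch('{"name": "get_weather", "arguments": {"city": "Paris"}}'))
```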

Aligners: Decoupling LLMs and Alignment
Lilian Ngweta | Mayank Agarwal | Subha Maity | Alex Gittens | Yuekai Sun | Mikhail Yurochkin
Findings of the Association for Computational Linguistics: EMNLP 2024

Large Language Models (LLMs) need to be aligned with human expectations to ensure their safety and utility in most applications. Alignment is challenging, costly, and needs to be repeated for every LLM and alignment criterion. We propose to decouple LLMs and alignment by training *aligner* models that can be used to align any LLM for a given criterion on an as-needed basis, thus also reducing the potential negative impacts of alignment on performance. Our recipe for training the aligner models relies solely on synthetic data generated with a (prompted) LLM and can be easily adjusted for a variety of alignment criteria. We use the same synthetic data to train *inspectors*, binary misalignment classification models that guide a *squad* of multiple aligners. Our empirical results demonstrate consistent improvements when applying an aligner squad to various LLMs, including chat-aligned models, across several instruction-following and red-teaming datasets.
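
As a rough illustration of the decoupled pipeline outlined in the abstract, the sketch below shows one way an inspector/aligner squad could wrap a base model at inference time: a per-criterion inspector scores the response for misalignment, and only flagged responses are rewritten by the corresponding aligner. All models and names here are stubs and placeholders, not the authors' released components.

```python
# Minimal, hypothetical sketch of an aligner-squad pipeline (all models are stubs).
def base_llm(prompt: str) -> str:
    return f"Raw response to: {prompt}"

def ethics_inspector(prompt: str, response: str) -> float:
    """Stub binary misalignment classifier: returns P(misaligned) for one criterion."""
    return 0.9 if "Raw" in response else 0.1

def ethics_aligner(prompt: str, response: str) -> str:
    """Stub aligner: rewrites the response to satisfy its criterion."""
    return response + " [rewritten to satisfy the ethics criterion]"

INSPECTORS = {"ethics": ethics_inspector}
ALIGNERS = {"ethics": ethics_aligner}

def generate_aligned(prompt: str, threshold: float = 0.5) -> str:
    """Apply aligners only when their inspector flags the response as misaligned."""
    response = base_llm(prompt)
    for criterion, inspector in INSPECTORS.items():
        if inspector(prompt, response) > threshold:
            response = ALIGNERS[criterion](prompt, response)
    return response

print(generate_aligned("Explain photosynthesis."))
```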

2023

Explain-then-translate: an analysis on improving program translation with self-generated explanations
Zilu Tang | Mayank Agarwal | Alexander Shypula | Bailin Wang | Derry Wijaya | Jie Chen | Yoon Kim
Findings of the Association for Computational Linguistics: EMNLP 2023

This work explores the use of self-generated natural language explanations as an intermediate step for code-to-code translation with language models. Across three types of explanations and 19 programming languages constructed from the MultiPL-E dataset, we find the explanations to be particularly effective in the zero-shot case, improving performance by 12% on average. Improvements with natural language explanations are particularly pronounced on difficult programs. We release our dataset, code, and canonical solutions in all 19 languages.
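
The explain-then-translate scheme lends itself to a short sketch. The snippet below is a hypothetical two-step pipeline assuming a generic text-completion callable (`complete` is a placeholder, not a specific client or API): first ask the model to explain the source program in natural language, then translate while conditioning on both the source and the explanation.

```python
# Hypothetical two-step explain-then-translate sketch; `complete` is a placeholder
# for any LLM text-completion call, not a specific API.
def complete(prompt: str) -> str:
    return f"<model output for a prompt of {len(prompt)} characters>"

def explain_then_translate(source_code: str, src_lang: str, tgt_lang: str) -> str:
    # Step 1: self-generated natural-language explanation of the source program.
    explanation = complete(
        f"Explain, step by step, what the following {src_lang} program does:\n{source_code}"
    )
    # Step 2: translate, conditioning on both the source program and its explanation.
    return complete(
        f"{src_lang} program:\n{source_code}\n\n"
        f"Explanation:\n{explanation}\n\n"
        f"Translate the program into {tgt_lang}:"
    )

print(explain_then_translate("print(sum(range(10)))", "Python", "Lua"))
```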

2022

A Method for Automatically Estimating the Informativeness of Peer Reviews
Prabhat Bharti | Tirthankar Ghosal | Mayank Agarwal | Asif Ekbal
Proceedings of the 19th International Conference on Natural Language Processing (ICON)

Peer reviews are intended to give authors constructive and informative feedback. Reviewers are expected to comment in detail on specific aspects of the paper, e.g., novelty, clarity, and empirical and theoretical soundness, and on specific sections, e.g., problem definition/idea, datasets, methodology, experiments, and results. With this objective, we analyze the reviewer’s attitude towards the work. These facets of a review are essential for determining how much weight the editor/chair should place on it when making a decision. In this paper, we use the publicly available Peer Review Analyze dataset of peer review texts, manually annotated at the sentence level (∼13.22k sentences) across two layers: Paper Section Correspondence and Paper Aspect Category. We transform these categorical annotations to derive an informativeness score for each review based on its coverage of paper sections and aspects and the reviewer-centric uncertainty expressed in it. We hope that our proposed methods, which automatically estimate the quality of peer reviews in the form of informativeness scores, will give editors an additional layer of confidence when judging review quality. We make our code available at https://github.com/PrabhatkrBharti/informativeness.git.
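
One way to turn the three signals named in the abstract (section coverage, aspect coverage, and reviewer uncertainty) into a single score is sketched below. The label sets, weights, and formula are illustrative assumptions, not the paper's exact derivation.

```python
# Hypothetical informativeness score combining section coverage, aspect coverage,
# and a certainty term; the paper's exact formulation may differ.
SECTIONS = {"problem", "datasets", "methodology", "experiments", "results"}
ASPECTS = {"novelty", "clarity", "soundness"}

def informativeness(sentence_labels, hedged_sentences, total_sentences,
                    w_sec=0.4, w_asp=0.4, w_cert=0.2):
    """sentence_labels: list of (section, aspect) tags, one pair per review sentence."""
    covered_sections = {sec for sec, _ in sentence_labels if sec in SECTIONS}
    covered_aspects = {asp for _, asp in sentence_labels if asp in ASPECTS}
    section_cov = len(covered_sections) / len(SECTIONS)
    aspect_cov = len(covered_aspects) / len(ASPECTS)
    certainty = 1.0 - hedged_sentences / max(total_sentences, 1)
    return w_sec * section_cov + w_asp * aspect_cov + w_cert * certainty

labels = [("methodology", "clarity"), ("results", "soundness"), ("datasets", "novelty")]
print(round(informativeness(labels, hedged_sentences=1, total_sentences=10), 3))
```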