Balaji Ganesan


2024

Sequential API Function Calling Using GraphQL Schema
Avirup Saha | Lakshmi Mandal | Balaji Ganesan | Sambit Ghosh | Renuka Sindhgatta | Carlos Eberhardt | Dan Debrunner | Sameep Mehta
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Function calling using Large Language Models (LLMs) is an active research area that aims to empower LLMs with the ability to execute APIs to perform real-world tasks. However, sequential function calling using LLMs with interdependence between functions is still under-explored. To this end, we introduce GraphQLRestBench, a dataset consisting of natural language utterances paired with function call sequences representing real-world REST API calls with variable mapping between functions. In order to represent the response structure of the functions in the LLM prompt, we use the GraphQL schema of the REST APIs. We also introduce a custom evaluation framework for our dataset consisting of four specially designed metrics. We evaluate various open-source LLMs on our dataset using few-shot Chain-of-Thought and ReAct prompting to establish a reasonable baseline.
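As a rough illustration of the kind of interdependent call sequences the dataset targets, the sketch below represents a two-step plan in which the second REST call consumes a field returned by the first. The function names, the "$1.id" reference syntax, and the mock executor are invented for illustration and are not the dataset's actual format.

```python
# Hypothetical sketch of a sequence of interdependent REST calls with
# variable mapping, in the spirit of GraphQLRestBench. All names are invented.

from dataclasses import dataclass, field

@dataclass
class FunctionCall:
    name: str                      # REST endpoint wrapper, e.g. "getUser"
    args: dict                     # literal args or references like "$1.id"
    result: dict = field(default_factory=dict)

def resolve(value, previous):
    """Map a "$<step>.<field>" reference to the output of an earlier call."""
    if isinstance(value, str) and value.startswith("$"):
        step, fld = value[1:].split(".", 1)
        return previous[int(step) - 1].result[fld]
    return value

def execute_sequence(calls, executor):
    """Run calls in order, substituting variables from earlier results."""
    done = []
    for call in calls:
        args = {k: resolve(v, done) for k, v in call.args.items()}
        call.result = executor(call.name, args)   # e.g. a real REST client
        done.append(call)
    return done

# Toy executor standing in for REST APIs whose responses a GraphQL schema describes.
def mock_executor(name, args):
    if name == "getUser":
        return {"id": 42, "team": "search"}
    if name == "listIssues":
        return {"issues": [f"issue assigned to user {args['userId']}"]}
    return {}

plan = [
    FunctionCall("getUser", {"login": "alice"}),
    FunctionCall("listIssues", {"userId": "$1.id"}),   # depends on step 1
]
for step in execute_sequence(plan, mock_executor):
    print(step.name, "->", step.result)
```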

2023

Infusing Knowledge into Large Language Models with Contextual Prompts
Kinshuk Vasisht | Balaji Ganesan | Vikas Kumar | Vasudha Bhatnagar
Proceedings of the 20th International Conference on Natural Language Processing (ICON)

Knowledge infusion is a promising method for enhancing Large Language Models for domain-specific NLP tasks, as opposed to pre-training models from scratch on large corpora. These augmented LLMs typically depend on additional pre-training or on knowledge prompts drawn from an existing knowledge graph, which is impractical in many applications. In contrast, knowledge infusion directly from relevant documents is more generalisable and alleviates the need for structured knowledge graphs, while also being useful for entities that are usually not found in any knowledge graph. With this motivation, we propose a simple yet generalisable approach for knowledge infusion by generating prompts from the context in the input text. Our experiments show the effectiveness of our approach, which we evaluate by probing the fine-tuned LLMs.
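A minimal sketch of the idea, assuming the knowledge prompt is derived from the input passage itself rather than from a knowledge graph; the term-extraction heuristic and prompt template below are placeholders for illustration, not the authors' method.

```python
# Illustrative sketch: build a contextual knowledge prompt from the input
# passage and prepend it before querying or probing the LLM.

import re
from collections import Counter

def contextual_prompt(passage, max_terms=5):
    """Build a simple knowledge prompt from salient capitalised terms."""
    terms = re.findall(r"\b[A-Z][a-zA-Z]+\b", passage)
    salient = [t for t, _ in Counter(terms).most_common(max_terms)]
    facts = "; ".join(f"{t} appears in the context" for t in salient)
    return f"Known context: {facts}.\n\nPassage: {passage}\n\nQuestion: "

passage = ("Chandrayaan-3 was launched by ISRO in July 2023 and achieved a "
           "soft landing near the lunar south pole in August 2023.")
prompt = contextual_prompt(passage) + "Which agency launched Chandrayaan-3?"
print(prompt)   # this prompt would then be fed to the LLM being probed
```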

Automated Answer Validation using Text Similarity
Balaji Ganesan | Arjun Ravikumar | Lakshay Piplani | Rini Bhaumik | Dhivya Padmanaban | Shwetha Narasimhamurthy | Chetan Adhikary | Subhash Deshapogu
Proceedings of the 20th International Conference on Natural Language Processing (ICON)

Automated answer validation can help improve learning outcomes by providing appropriate feedback to learners, and by making question answering systems and online learning solutions more widely available. Prior work in science question answering shows that information retrieval methods outperform neural methods, especially in the multiple-choice version of this problem. We implement Siamese neural network models and produce a generalised solution to this problem. We compare our supervised model with other text similarity-based solutions.
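A toy sketch of a Siamese similarity scorer for answer validation, assuming a shared encoder and cosine similarity between the candidate answer and the reference answer; the bag-of-embeddings encoder and dimensions are placeholders rather than the model reported in the paper.

```python
# Rough sketch of a Siamese scorer for answer validation: a shared encoder
# embeds both texts, and a high cosine similarity suggests a correct answer.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseScorer(nn.Module):
    def __init__(self, vocab_size=10000, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)  # shared text encoder
        self.proj = nn.Linear(dim, dim)

    def encode(self, token_ids):
        return self.proj(self.embed(token_ids))

    def forward(self, answer_ids, reference_ids):
        a = self.encode(answer_ids)
        r = self.encode(reference_ids)
        return F.cosine_similarity(a, r)   # high score => likely correct

model = SiameseScorer()
answer = torch.randint(0, 10000, (1, 12))      # toy token ids
reference = torch.randint(0, 10000, (1, 12))
score = model(answer, reference)
print(float(score))  # threshold this score to accept or reject the answer
```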