2024
Granite-Function Calling Model: Introducing Function Calling Abilities via Multi-task Learning of Granular Tasks
Ibrahim Abdelaziz | Kinjal Basu | Mayank Agarwal | Sadhana Kumaravel | Matthew Stallone | Rameswar Panda | Yara Rizk | G P Shrivatsa Bhargav | Maxwell Crouse | Chulaka Gunasekara | Shajith Ikbal | Sachindra Joshi | Hima Karanam | Vineet Kumar | Asim Munawar | Sumit Neelam | Dinesh Raghu | Udit Sharma | Adriana Meza Soria | Dheeraj Sreedhar | Praveen Venkateswaran | Merve Unuvar | David Daniel Cox | Salim Roukos | Luis A. Lastras | Pavan Kapanipathi
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
An emergent research trend explores the use of Large Language Models (LLMs) as the backbone of agentic systems, as measured by benchmarks such as SWE-Bench and Agent-Bench. To fulfill their potential as autonomous agents, LLMs must be able to identify, call, and interact with a variety of external tools and application programming interfaces (APIs). This capability, commonly termed function calling, yields a myriad of advantages, such as access to current and domain-specific information in databases and the outsourcing of tasks that can be reliably performed by tools. In this work, we introduce Granite-20B-FunctionCalling, a model trained using a multi-task approach on seven fundamental tasks encompassed in function calling. Our comprehensive evaluation on multiple out-of-domain datasets, which compares Granite-20B-FunctionCalling to more than 15 of the best proprietary and open models, shows that it generalizes better across seven different evaluation benchmarks. Moreover, Granite-20B-FunctionCalling achieves the best performance among open models and ranks among the top models overall on the Berkeley Function Calling Leaderboard (BFCL).
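For readers unfamiliar with the setting, the sketch below illustrates what a function-calling interaction typically looks like: the model receives tool schemas alongside the user query and must emit a structured call. The schema, output format, and names here are hypothetical examples, not the format used to train Granite-20B-FunctionCalling.

```python
import json

# Hypothetical tool schema in the JSON-schema style commonly used for
# function calling; the paper's actual training format may differ.
tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

user_query = "What's the weather in Paris right now?"

# Given `tools` and `user_query`, a function-calling model is expected to
# select the right tool and fill in its arguments, e.g. as a JSON string:
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'

# The application parses the call and routes it to the real API.
call = json.loads(model_output)
assert call["name"] in {t["name"] for t in tools}
print(f"invoking {call['name']} with {call['arguments']}")
```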
API-BLEND: A Comprehensive Corpora for Training and Benchmarking API LLMs
Kinjal Basu | Ibrahim Abdelaziz | Subhajit Chaudhury | Soham Dan | Maxwell Crouse | Asim Munawar | Vernon Austel | Sadhana Kumaravel | Vinod Muthusamy | Pavan Kapanipathi | Luis Lastras
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
There is a growing need for Large Language Models (LLMs) to effectively use tools and external Application Programming Interfaces (APIs) to plan and complete tasks. As such, there is tremendous interest in methods that can acquire sufficient quantities of training and test data involving calls to tools / APIs. Two lines of research have emerged as the predominant strategies for addressing this challenge: the first focuses on synthetic data generation techniques, while the second curates task-adjacent datasets that can be transformed into API / tool-based tasks. In this paper, we focus on the task of identifying, curating, and transforming existing datasets and, in turn, introduce API-BLEND, a large corpus for training and systematically testing tool-augmented LLMs. The datasets mimic real-world scenarios involving API tasks such as API / tool detection, slot filling, and sequencing of the detected APIs. We demonstrate the utility of the API-BLEND dataset for both training and benchmarking purposes.
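To make the three API-task types concrete, here is a hypothetical instance of the kind such a corpus might contain. The field names and values are illustrative only and do not reproduce API-BLEND's actual schema.

```python
# One toy instance covering the three API-task types mentioned above:
# API detection (name), slot filling (slots), and sequencing (order of calls).
example = {
    "utterance": "Book a flight to Boston on Friday and email me the itinerary.",
    "apis": [  # sequencing: the ordered list of detected API calls
        {
            "name": "BookFlight",           # API detection
            "slots": {                       # slot filling
                "destination": "Boston",
                "date": "Friday",
            },
        },
        {
            "name": "SendEmail",
            "slots": {"attachment": "itinerary"},
        },
    ],
}

# A trained model should recover `apis` from `utterance`.
for call in example["apis"]:
    print(call["name"], call["slots"])
```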
2023
MISMATCH: Fine-grained Evaluation of Machine-generated Text with Mismatch Error Types
Keerthiram Murugesan | Sarathkrishna Swaminathan | Soham Dan | Subhajit Chaudhury | Chulaka Gunasekara | Maxwell Crouse | Diwakar Mahajan | Ibrahim Abdelaziz | Achille Fokoue | Pavan Kapanipathi | Salim Roukos | Alexander Gray
Findings of the Association for Computational Linguistics: ACL 2023
With the growing interest in large language models, evaluating the quality of machine-generated text against reference (typically human-generated) text has become a focal point of attention. Most recent works focus either on task-specific evaluation metrics or on studying the properties of machine-generated text captured by existing metrics. In this work, we propose a new evaluation scheme that models human judgments in 7 NLP tasks based on the fine-grained mismatches between a pair of texts. Inspired by recent efforts toward fine-grained evaluation in several NLP tasks, we introduce a set of 13 mismatch error types, such as spatial/geographic errors and entity errors, to guide the model toward better prediction of human judgments. We propose a neural framework for evaluating machine texts that uses these mismatch error types as auxiliary tasks and re-purposes existing single-number evaluation metrics as additional scalar features, alongside textual features extracted from the machine and reference texts. Our experiments reveal key insights about the existing metrics via the mismatch errors. We show that the mismatch errors between sentence pairs on held-out datasets from 7 NLP tasks align well with human evaluation.
Self-Supervised Rule Learning to Link Text Segments to Relational Elements of Structured Knowledge
Shajith Ikbal | Udit Sharma | Hima Karanam | Sumit Neelam | Ronny Luss | Dheeraj Sreedhar | Pavan Kapanipathi | Naweed Khan | Kyle Erwin | Ndivhuwo Makondo | Ibrahim Abdelaziz | Achille Fokoue | Alexander Gray | Maxwell Crouse | Subhajit Chaudhury | Chitra Subramanian
Findings of the Association for Computational Linguistics: EMNLP 2023
We present a neuro-symbolic approach to self-learn rules that serve as interpretable knowledge for performing relation linking in knowledge base question answering systems. These rules define natural language text predicates as a weighted mixture of knowledge base paths, and the weights learned during training effectively serve as the mapping needed to perform relation linking. We use the popular masked training strategy to self-learn the rules. A key distinguishing aspect of our work is that the masked training operates over the logical form of the sentence instead of its natural language text form. This offers the opportunity to extract extended context information from the structured knowledge source and to use it to build robust, human-readable rules. We evaluate the accuracy and usefulness of the learned rules by using them to predict missing kinship relations in the CLUTRR dataset and to perform relation linking in a KBQA system on the SWQ-WD dataset. The results demonstrate the effectiveness of our approach: its generalizability, its interpretability, and its ability to achieve an average performance gain of 17% on the CLUTRR dataset.
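For intuition, a rule of the kind described above, defining a text predicate as a weighted mixture of KB paths, might look as follows. The predicates, paths, and weights are invented for illustration and are not rules learned by the paper's system.

```python
# Hypothetical "weighted mixture of KB paths" rule for a kinship predicate.
# All weights and predicate names below are made up for illustration.
rule = {
    "text_predicate": "grandfather_of(x, y)",
    "kb_paths": [
        # (weight, path through the KB)
        (0.52, ["father_of(x, z)", "father_of(z, y)"]),
        (0.48, ["father_of(x, z)", "mother_of(z, y)"]),
    ],
}

# Relation linking then amounts to reading off the highest-weight mapping.
best_weight, best_path = max(rule["kb_paths"])
print(f'{rule["text_predicate"]} -> {" AND ".join(best_path)} (w={best_weight})')
```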
2022
Logical Neural Networks for Knowledge Base Completion with Embeddings & Rules
Prithviraj Sen | Breno William Carvalho | Ibrahim Abdelaziz | Pavan Kapanipathi | Salim Roukos | Alexander Gray
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Knowledge base completion (KBC) has benefited greatly from learning explainable rules in a human-interpretable dialect such as first-order logic. Rule-based KBC has so far mainly focused on learning one of two types of rules: conjunctions-of-disjunctions and disjunctions-of-conjunctions. We qualitatively show, via examples, that one of these has an advantage over the other when it comes to achieving high-quality KBC. To the best of our knowledge, we are the first to propose learning both kinds of rules within a common framework. To this end, we utilize logical neural networks (LNN), a powerful neuro-symbolic AI framework that can express both kinds of rules and learn them end-to-end using gradient-based optimization. Our in-depth experiments show that our LNN-based approach to learning rules for KBC leads to roughly 10% relative improvement, if not more, over state-of-the-art rule-based KBC methods. Moreover, by combining our proposed methods with knowledge graph embeddings, we achieve an additional 7.5% relative improvement.
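For intuition, the two rule families the abstract contrasts can be written as follows. This is a generic first-order illustration with placeholder predicates, not a rule learned by the paper's system.

```latex
% Conjunction-of-disjunctions (CNF-style) rule body:
h(x, y) \leftarrow \big( r_1(x, y) \lor r_2(x, y) \big) \land \big( r_3(x, y) \lor r_4(x, y) \big)

% Disjunction-of-conjunctions (DNF-style) rule body:
h(x, y) \leftarrow \big( r_1(x, z) \land r_2(z, y) \big) \lor \big( r_3(x, z) \land r_4(z, y) \big)
```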
SYGMA: A System for Generalizable and Modular Question Answering Over Knowledge Bases
Sumit Neelam | Udit Sharma | Hima Karanam | Shajith Ikbal | Pavan Kapanipathi | Ibrahim Abdelaziz | Nandana Mihindukulasooriya | Young-Suk Lee | Santosh Srivastava | Cezar Pendus | Saswati Dana | Dinesh Garg | Achille Fokoue | G P Shrivatsa Bhargav | Dinesh Khandelwal | Srinivas Ravishankar | Sairam Gurajada | Maria Chang | Rosario Uceda-Sosa | Salim Roukos | Alexander Gray | Guilherme Lima | Ryan Riegel | Francois Luus | L V Subramaniam
Findings of the Association for Computational Linguistics: EMNLP 2022
Knowledge Base Question Answering (KBQA) involving complex reasoning is emerging as an important research direction. However, most KBQA systems struggle with generalizability, particularly along two dimensions: (a) across multiple knowledge bases, where existing KBQA approaches are typically tuned to a single knowledge base, and (b) across multiple reasoning types, where the majority of datasets and systems have primarily focused on multi-hop reasoning. In this paper, we present SYGMA, a modular KBQA approach developed with the goal of generalizing across multiple knowledge bases and multiple reasoning types. To facilitate this, SYGMA is designed around two high-level modules: 1) a KB-agnostic question understanding module that remains common across KBs and generates a logical representation of the question with high-level, extensible reasoning constructs, and 2) a KB-specific question mapping and answering module that addresses the KB-specific aspects of answer extraction. We evaluate SYGMA on multiple datasets belonging to distinct knowledge bases (DBpedia and Wikidata) and distinct reasoning types (multi-hop and temporal). The state-of-the-art or competitive performance achieved on these datasets demonstrates its generalization capability.
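A toy sketch of the two-module split described above: a KB-agnostic understanding step that emits an abstract relation pattern, plus pluggable KB-specific groundings. All function names, mappings, and queries are hypothetical illustrations, not SYGMA's actual implementation.

```python
def understand(question: str) -> dict:
    """KB-agnostic step: reduce a question to an abstract relation pattern."""
    prefix = "who directed "
    if question.lower().startswith(prefix):
        entity = question[len(prefix):].rstrip("?")
        return {"entity": entity, "relation": "director", "var": "?answer"}
    raise ValueError("question shape not covered by this toy sketch")

# KB-specific predicate vocabularies (toy mappings for two real KBs).
DBPEDIA = {"director": "dbo:director"}
WIKIDATA = {"director": "wdt:P57"}

def to_sparql(pattern: dict, predicates: dict) -> str:
    """KB-specific step: ground the abstract pattern in one KB's vocabulary."""
    p = predicates[pattern["relation"]]
    return (f'SELECT {pattern["var"]} WHERE {{ '
            f'?film rdfs:label "{pattern["entity"]}"@en . '
            f'?film {p} {pattern["var"]} }}')

# The understanding module stays fixed; only the grounding is swapped per KB.
pattern = understand("Who directed Titanic?")
print(to_sparql(pattern, DBPEDIA))   # DBpedia grounding
print(to_sparql(pattern, WIKIDATA))  # Wikidata grounding
```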
A Two-Stage Approach towards Generalization in Knowledge Base Question Answering
Srinivas Ravishankar | Dung Thai | Ibrahim Abdelaziz | Nandana Mihindukulasooriya | Tahira Naseem | Pavan Kapanipathi | Gaetano Rossiello | Achille Fokoue
Findings of the Association for Computational Linguistics: EMNLP 2022
Most existing approaches for Knowledge Base Question Answering (KBQA) focus on a specific underlying knowledge base, either because of inherent assumptions in the approach or because evaluating it on a different knowledge base requires non-trivial changes. However, many popular knowledge bases share similarities in their underlying schemas that can be leveraged to facilitate generalization across knowledge bases. To achieve this generalization, we introduce a KBQA framework based on a two-stage architecture that explicitly separates semantic parsing from knowledge base interaction, facilitating transfer learning across datasets and knowledge graphs. We show that pretraining on datasets with a different underlying knowledge base can nevertheless provide significant performance gains and reduce sample complexity. Our approach achieves comparable or state-of-the-art performance on LC-QuAD (DBpedia), WebQSP (Freebase), SimpleQuestions (Wikidata), and MetaQA (Wikimovies-KG).
2021
A Semantics-aware Transformer Model of Relation Linking for Knowledge Base Question Answering
Tahira Naseem | Srinivas Ravishankar | Nandana Mihindukulasooriya | Ibrahim Abdelaziz | Young-Suk Lee | Pavan Kapanipathi | Salim Roukos | Alfio Gliozzo | Alexander Gray
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Relation linking is a crucial component of Knowledge Base Question Answering systems. Existing systems use a wide variety of heuristics, or ensembles of multiple systems, that rely heavily on the surface text of the question. However, the explicit semantic parse of the question is a rich source of relation information that is not taken advantage of. We propose a simple transformer-based neural model for relation linking that leverages the AMR semantic parse of a sentence. Our system significantly outperforms the state of the art on 4 popular benchmark datasets, which are based on either DBpedia or Wikidata, demonstrating that our approach is effective across KGs.
Leveraging Abstract Meaning Representation for Knowledge Base Question Answering
Pavan Kapanipathi | Ibrahim Abdelaziz | Srinivas Ravishankar | Salim Roukos | Alexander Gray | Ramón Fernandez Astudillo | Maria Chang | Cristina Cornelio | Saswati Dana | Achille Fokoue | Dinesh Garg | Alfio Gliozzo | Sairam Gurajada | Hima Karanam | Naweed Khan | Dinesh Khandelwal | Young-Suk Lee | Yunyao Li | Francois Luus | Ndivhuwo Makondo | Nandana Mihindukulasooriya | Tahira Naseem | Sumit Neelam | Lucian Popa | Revanth Gangi Reddy | Ryan Riegel | Gaetano Rossiello | Udit Sharma | G P Shrivatsa Bhargav | Mo Yu
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021