2023
Learning Symbolic Rules over Abstract Meaning Representations for Textual Reinforcement Learning
Subhajit Chaudhury | Sarathkrishna Swaminathan | Daiki Kimura | Prithviraj Sen | Keerthiram Murugesan | Rosario Uceda-Sosa | Michiaki Tatsubori | Achille Fokoue | Pavan Kapanipathi | Asim Munawar | Alexander Gray
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Text-based reinforcement learning agents have predominantly been neural network-based models with embedding-based representations, learning uninterpretable policies that often do not generalize well to unseen games. On the other hand, neuro-symbolic methods, specifically those that leverage an intermediate formal representation, are gaining significant attention in language understanding tasks because of their inherent interpretability, lower training-data requirements, and better generalization to unseen data. Therefore, in this paper, we propose a modular NEuro-Symbolic Textual Agent (NESTA) that combines a generic semantic parser with a rule induction system to learn abstract, interpretable rules as policies. Our experiments on established text-based game benchmarks show that NESTA outperforms deep reinforcement learning-based techniques by generalizing better to unseen test games and learning from fewer training interactions.
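The modular pipeline described in the abstract can be pictured roughly as follows. This is a minimal sketch under assumed interfaces (the `Rule` class and `parse_to_facts` are placeholders), not NESTA's actual implementation:

```python
# Minimal sketch of a neuro-symbolic textual agent: a semantic parser maps the
# text observation to symbolic facts, and induced rules (the interpretable
# policy) map those facts to an action. All names here are illustrative
# placeholders, not NESTA's actual API.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    body: frozenset        # e.g. frozenset({("carrying", "key"), ("at", "door")})
    action: str            # e.g. "unlock door with key"
    weight: float = 1.0    # learned preference for this rule

def select_action(observation: str,
                  parse_to_facts: Callable[[str], set],
                  rules: list[Rule]) -> Optional[str]:
    """Apply the learned symbolic policy to one text observation."""
    facts = parse_to_facts(observation)              # generic semantic parsing
    fired = [r for r in rules if r.body <= facts]    # rules whose bodies hold
    if not fired:
        return None                                  # e.g. fall back to exploration
    return max(fired, key=lambda r: r.weight).action
```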
Laziness Is a Virtue When It Comes to Compositionality in Neural Semantic Parsing
Maxwell Crouse | Pavan Kapanipathi | Subhajit Chaudhury | Tahira Naseem | Ramon Fernandez Astudillo | Achille Fokoue | Tim Klinger
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Nearly all general-purpose neural semantic parsers generate logical forms in a strictly top-down autoregressive fashion. Though such systems have achieved impressive results across a variety of datasets and domains, recent works have called into question whether they are ultimately limited in their ability to compositionally generalize. In this work, we approach semantic parsing from, quite literally, the opposite direction; that is, we introduce a neural semantic parsing generation method that constructs logical forms from the bottom up, beginning from the logical form’s leaves. The system we introduce is lazy in that it incrementally builds up a set of potential semantic parses, but only expands and processes the most promising candidate parses at each generation step. Such a parsimonious expansion scheme allows the system to maintain an arbitrarily large set of parse hypotheses that are never realized and thus incur minimal computational overhead. We evaluate our approach on compositional generalization; specifically, on the challenging CFQ dataset and two other Text-to-SQL datasets where we show that our novel, bottom-up semantic parsing technique outperforms general-purpose semantic parsers while also being competitive with semantic parsers that have been tailored to each task.
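One way to picture the lazy expansion scheme the abstract describes is a best-first search over partial parses in which only the top-scoring hypothesis is expanded at each step. The `score`, `expand`, and `is_complete` functions below are hypothetical stand-ins for the model's scorer and the composition operations, not the paper's actual system:

```python
# Sketch of lazy, bottom-up parse construction: partial parses wait in a
# priority queue, and only the most promising candidate is expanded at each
# step, so unpromising hypotheses are never realized. Scoring and expansion
# are assumed placeholders.
import heapq

def lazy_bottom_up_parse(leaves, expand, score, is_complete, max_steps=10_000):
    frontier, next_id = [], 0
    for leaf in leaves:                              # start from logical-form leaves
        heapq.heappush(frontier, (-score(leaf), next_id, leaf))
        next_id += 1
    for _ in range(max_steps):
        if not frontier:
            break
        _, _, parse = heapq.heappop(frontier)        # most promising candidate
        if is_complete(parse):
            return parse                             # first complete parse found
        for candidate in expand(parse):              # compose with other fragments
            heapq.heappush(frontier, (-score(candidate), next_id, candidate))
            next_id += 1
    return None
```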
MISMATCH: Fine-grained Evaluation of Machine-generated Text with Mismatch Error Types
Keerthiram Murugesan | Sarathkrishna Swaminathan | Soham Dan | Subhajit Chaudhury | Chulaka Gunasekara | Maxwell Crouse | Diwakar Mahajan | Ibrahim Abdelaziz | Achille Fokoue | Pavan Kapanipathi | Salim Roukos | Alexander Gray
Findings of the Association for Computational Linguistics: ACL 2023
With the growing interest in large language models, the need to evaluate the quality of machine-generated text against reference (typically human-generated) text has become a focal point of attention. Most recent works either focus on task-specific evaluation metrics or study the properties of machine-generated text captured by existing metrics. In this work, we propose a new evaluation scheme that models human judgments in 7 NLP tasks based on fine-grained mismatches between a pair of texts. Inspired by recent efforts toward fine-grained evaluation in several NLP tasks, we introduce a set of 13 mismatch error types, such as spatial/geographic errors and entity errors, to guide the model toward better prediction of human judgments. We propose a neural framework for evaluating machine text that uses these mismatch error types as auxiliary tasks and re-purposes existing single-number evaluation metrics as additional scalar features, alongside textual features extracted from the machine and reference texts. Our experiments reveal key insights about existing metrics via the mismatch errors. We show that the mismatch errors between sentence pairs on held-out datasets from the 7 NLP tasks align well with human evaluation.
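A rough sketch of the multi-task setup described above, written in PyTorch; the encoder dimension, number of metric features, and head shapes are assumptions rather than the paper's actual configuration:

```python
# Hypothetical sketch: fuse a text-pair encoding with scalar scores from
# existing metrics, then predict a quality score alongside the 13 mismatch
# error types as auxiliary outputs. Dimensions and encoder are assumed.
import torch
import torch.nn as nn

class MismatchScorer(nn.Module):
    def __init__(self, text_dim=768, n_metrics=4, n_error_types=13):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(text_dim + n_metrics, 256), nn.ReLU())
        self.quality_head = nn.Linear(256, 1)             # models human judgment
        self.error_head = nn.Linear(256, n_error_types)   # auxiliary mismatch labels

    def forward(self, pair_encoding, metric_scores):
        h = self.fuse(torch.cat([pair_encoding, metric_scores], dim=-1))
        return self.quality_head(h), torch.sigmoid(self.error_head(h))
```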
Self-Supervised Rule Learning to Link Text Segments to Relational Elements of Structured Knowledge
Shajith Ikbal | Udit Sharma | Hima Karanam | Sumit Neelam | Ronny Luss | Dheeraj Sreedhar | Pavan Kapanipathi | Naweed Khan | Kyle Erwin | Ndivhuwo Makondo | Ibrahim Abdelaziz | Achille Fokoue | Alexander Gray | Maxwell Crouse | Subhajit Chaudhury | Chitra Subramanian
Findings of the Association for Computational Linguistics: EMNLP 2023
We present a neuro-symbolic approach that self-learns rules serving as interpretable knowledge for relation linking in knowledge base question answering systems. These rules define natural language text predicates as a weighted mixture of knowledge base paths, and the weights learned during training effectively serve as the mapping needed to perform relation linking. We use the popular masked training strategy to self-learn the rules. A key distinguishing aspect of our work is that the masked training operates over logical forms of the sentences instead of their natural language text form. This offers the opportunity to extract extended context information from the structured knowledge source and use it to build robust, human-readable rules. We evaluate the accuracy and usefulness of the learned rules by using them to predict missing kinship relations in the CLUTRR dataset and to perform relation linking in a KBQA system on the SWQ-WD dataset. The results demonstrate the effectiveness of our approach: its generalizability, its interpretability, and an average performance gain of 17% on the CLUTRR dataset.
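The "weighted mixture of knowledge base paths" idea can be illustrated with a toy relation-linking step; the weights, path names, and data layout below are invented for illustration only:

```python
# Toy illustration: each text predicate maps to learned weights over KB paths,
# and relation linking selects the path with the highest weight. The weights
# shown are invented; in the paper they are learned via masked training over
# logical forms.
def link_relation(predicate: str,
                  rule_weights: dict[str, dict[tuple, float]]):
    paths = rule_weights.get(predicate)
    if not paths:
        return None
    return max(paths, key=paths.get)                 # highest-weighted KB path

rule_weights = {
    "is the father of": {
        ("child",): 0.05, ("father",): 0.85, ("sibling",): 0.10,  # assumed paths
    },
}
print(link_relation("is the father of", rule_weights))  # -> ('father',)
```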
2022
SYGMA: A System for Generalizable and Modular Question Answering Over Knowledge Bases
Sumit Neelam | Udit Sharma | Hima Karanam | Shajith Ikbal | Pavan Kapanipathi | Ibrahim Abdelaziz | Nandana Mihindukulasooriya | Young-Suk Lee | Santosh Srivastava | Cezar Pendus | Saswati Dana | Dinesh Garg | Achille Fokoue | G P Shrivatsa Bhargav | Dinesh Khandelwal | Srinivas Ravishankar | Sairam Gurajada | Maria Chang | Rosario Uceda-Sosa | Salim Roukos | Alexander Gray | Guilherme Lima | Ryan Riegel | Francois Luus | L V Subramaniam
Findings of the Association for Computational Linguistics: EMNLP 2022
Knowledge Base Question Answering (KBQA) involving complex reasoning is emerging as an important research direction. However, most KBQA systems struggle with generalizability, particularly along two dimensions: (a) across multiple knowledge bases, where existing KBQA approaches are typically tuned to a single knowledge base, and (b) across multiple reasoning types, where the majority of datasets and systems have primarily focused on multi-hop reasoning. In this paper, we present SYGMA, a modular KBQA approach developed with the goal of generalizing across multiple knowledge bases and multiple reasoning types. To facilitate this, SYGMA is designed as two high-level modules: 1) a KB-agnostic question understanding module that remains common across KBs and generates a logic representation of the question with extensible high-level reasoning constructs, and 2) a KB-specific question mapping and answering module that addresses the KB-specific aspects of answer extraction. We evaluated SYGMA on multiple datasets belonging to distinct knowledge bases (DBpedia and Wikidata) and distinct reasoning types (multi-hop and temporal). The state-of-the-art or competitive performance achieved on these datasets demonstrates its generalization capability.
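The two-module split can be sketched as an interface: a shared, KB-agnostic understanding step plus a per-KB mapping and answering step. The class and function names below are hypothetical, not SYGMA's actual code:

```python
# Sketch of the modular split: question understanding is KB-agnostic, while
# mapping the logic representation to a concrete KB and extracting answers is
# implemented per knowledge base. Names are hypothetical.
from abc import ABC, abstractmethod

class KBBackend(ABC):
    @abstractmethod
    def map_and_answer(self, logic_form: str) -> list[str]:
        """KB-specific grounding of the logic form and answer extraction."""

class WikidataBackend(KBBackend):
    def map_and_answer(self, logic_form: str) -> list[str]:
        # e.g. translate the logic form to a Wikidata query (details omitted)
        raise NotImplementedError

def answer(question: str, understand, backend: KBBackend) -> list[str]:
    logic_form = understand(question)             # KB-agnostic, shared across KBs
    return backend.map_and_answer(logic_form)     # KB-specific module
```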
A Two-Stage Approach towards Generalization in Knowledge Base Question Answering
Srinivas Ravishankar | Dung Thai | Ibrahim Abdelaziz | Nandana Mihindukulasooriya | Tahira Naseem | Pavan Kapanipathi | Gaetano Rossiello | Achille Fokoue
Findings of the Association for Computational Linguistics: EMNLP 2022
Most existing approaches for Knowledge Base Question Answering (KBQA) focus on a specific underlying knowledge base either because of inherent assumptions in the approach, or because evaluating it on a different knowledge base requires non-trivial changes. However, many popular knowledge bases share similarities in their underlying schemas that can be leveraged to facilitate generalization across knowledge bases. To achieve this generalization, we introduce a KBQA framework based on a 2-stage architecture that explicitly separates semantic parsing from the knowledge base interaction, facilitating transfer learning across datasets and knowledge graphs. We show that pretraining on datasets with a different underlying knowledge base can nevertheless provide significant performance gains and reduce sample complexity. Our approach achieves comparable or state-of-the-art performance for LC-QuAD (DBpedia), WebQSP (Freebase), SimpleQuestions (Wikidata) and MetaQA (Wikimovies-KG).
2021
Leveraging Abstract Meaning Representation for Knowledge Base Question Answering
Pavan Kapanipathi | Ibrahim Abdelaziz | Srinivas Ravishankar | Salim Roukos | Alexander Gray | Ramón Fernandez Astudillo | Maria Chang | Cristina Cornelio | Saswati Dana | Achille Fokoue | Dinesh Garg | Alfio Gliozzo | Sairam Gurajada | Hima Karanam | Naweed Khan | Dinesh Khandelwal | Young-Suk Lee | Yunyao Li | Francois Luus | Ndivhuwo Makondo | Nandana Mihindukulasooriya | Tahira Naseem | Sumit Neelam | Lucian Popa | Revanth Gangi Reddy | Ryan Riegel | Gaetano Rossiello | Udit Sharma | G P Shrivatsa Bhargav | Mo Yu
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2018
A Systematic Classification of Knowledge, Reasoning, and Context within the ARC Dataset
Michael Boratko | Harshit Padigela | Divyendra Mikkilineni | Pritish Yuvraj | Rajarshi Das | Andrew McCallum | Maria Chang | Achille Fokoue-Nkoutche | Pavan Kapanipathi | Nicholas Mattei | Ryan Musa | Kartik Talamadupula | Michael Witbrock
Proceedings of the Workshop on Machine Reading for Question Answering
The recent work of Clark et al. (2018) introduces the AI2 Reasoning Challenge (ARC) and the associated ARC dataset that partitions open domain, complex science questions into easy and challenge sets. That paper includes an analysis of 100 questions with respect to the types of knowledge and reasoning required to answer them; however, it does not include clear definitions of these types, nor does it offer information about the quality of the labels. We propose a comprehensive set of definitions of knowledge and reasoning types necessary for answering the questions in the ARC dataset. Using ten annotators and a sophisticated annotation interface, we analyze the distribution of labels across the challenge set and statistics related to them. Additionally, we demonstrate that although naive information retrieval methods return sentences that are irrelevant to answering the query, sufficient supporting text is often present in the (ARC) corpus. Evaluating with human-selected relevant sentences improves the performance of a neural machine comprehension model by 42 points.
An Interface for Annotating Science Questions
Michael Boratko | Harshit Padigela | Divyendra Mikkilineni | Pritish Yuvraj | Rajarshi Das | Andrew McCallum | Maria Chang | Achille Fokoue | Pavan Kapanipathi | Nicholas Mattei | Ryan Musa | Kartik Talamadupula | Michael Witbrock
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Recent work introduces the AI2 Reasoning Challenge (ARC) and the associated ARC dataset that partitions open domain, complex science questions into an Easy Set and a Challenge Set. That work includes an analysis of 100 questions with respect to the types of knowledge and reasoning required to answer them. However, it does not include clear definitions of these types, nor does it offer information about the quality of the labels or the annotation process used. In this paper, we introduce a novel interface for human annotation of science question-answer pairs with their respective knowledge and reasoning types, in order that the classification of new questions may be improved. We build on the classification schema proposed by prior work on the ARC dataset, and evaluate the effectiveness of our interface with a preliminary study involving 10 participants.