Maxwell Crouse


2023

MISMATCH: Fine-grained Evaluation of Machine-generated Text with Mismatch Error Types
Keerthiram Murugesan | Sarathkrishna Swaminathan | Soham Dan | Subhajit Chaudhury | Chulaka Gunasekara | Maxwell Crouse | Diwakar Mahajan | Ibrahim Abdelaziz | Achille Fokoue | Pavan Kapanipathi | Salim Roukos | Alexander Gray
Findings of the Association for Computational Linguistics: ACL 2023

With the growing interest in large language models, evaluating the quality of machine-generated text against reference (typically human-generated) text has become a focal point of attention. Most recent works either focus on task-specific evaluation metrics or study which properties of machine-generated text the existing metrics capture. In this work, we propose a new evaluation scheme to model human judgments in 7 NLP tasks, based on fine-grained mismatches between a pair of texts. Inspired by recent efforts toward fine-grained evaluation in several NLP tasks, we introduce a set of 13 mismatch error types, such as spatial/geographic errors and entity errors, to guide the model toward better prediction of human judgments. We propose a neural framework for evaluating machine text that uses these mismatch error types as auxiliary tasks and re-purposes existing single-number evaluation metrics as additional scalar features, alongside textual features extracted from the machine and reference texts. Our experiments reveal key insights about the existing metrics via the mismatch errors. We show that the mismatch errors between sentence pairs on held-out datasets from the 7 NLP tasks align well with human evaluation.
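
To make the auxiliary-task framework concrete, the minimal PyTorch-style sketch below fuses pre-encoded text features for a machine/reference pair with existing single-number metric scores and trains a human-judgment regression head alongside 13 mismatch-error-type heads. The module, dimension, and variable names are hypothetical stand-ins, not the authors' released code.

```python
# Illustrative multi-task scorer; hypothetical names, not the paper's implementation.
import torch
import torch.nn as nn

class MismatchAwareScorer(nn.Module):
    def __init__(self, text_dim=768, n_metrics=5, n_error_types=13, hidden=256):
        super().__init__()
        # Fuse encoder features of the (machine, reference) pair with existing
        # single-number metrics re-purposed as extra scalar features.
        self.backbone = nn.Sequential(nn.Linear(text_dim + n_metrics, hidden), nn.ReLU())
        self.judgment_head = nn.Linear(hidden, 1)              # main task: human judgment score
        self.error_head = nn.Linear(hidden, n_error_types)     # auxiliary: 13 mismatch error types

    def forward(self, text_feats, metric_feats):
        h = self.backbone(torch.cat([text_feats, metric_feats], dim=-1))
        return self.judgment_head(h).squeeze(-1), self.error_head(h)

# Toy batch: 4 sentence pairs with pre-encoded text features and 5 metric scores.
model = MismatchAwareScorer()
scores, error_logits = model(torch.randn(4, 768), torch.rand(4, 5))
# Multi-task loss: regression on human judgments plus BCE on error-type labels.
loss = nn.functional.mse_loss(scores, torch.rand(4)) \
     + nn.functional.binary_cross_entropy_with_logits(error_logits, torch.randint(0, 2, (4, 13)).float())
loss.backward()
```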

Self-Supervised Rule Learning to Link Text Segments to Relational Elements of Structured Knowledge
Shajith Ikbal | Udit Sharma | Hima Karanam | Sumit Neelam | Ronny Luss | Dheeraj Sreedhar | Pavan Kapanipathi | Naweed Khan | Kyle Erwin | Ndivhuwo Makondo | Ibrahim Abdelaziz | Achille Fokoue | Alexander Gray | Maxwell Crouse | Subhajit Chaudhury | Chitra Subramanian
Findings of the Association for Computational Linguistics: EMNLP 2023

We present a neuro-symbolic approach that self-learns rules serving as interpretable knowledge for relation linking in knowledge base question answering (KBQA) systems. These rules define natural language text predicates as a weighted mixture of knowledge base paths. The weights learned during training effectively provide the mapping needed to perform relation linking. We use the popular masked training strategy to self-learn the rules. A key distinguishing aspect of our work is that masked training operates over logical forms of the sentences rather than their natural language text form. This offers the opportunity to extract extended context information from the structured knowledge source and use it to build robust, human-readable rules. We evaluate the accuracy and usefulness of the learned rules by using them to predict missing kinship relations in the CLUTRR dataset and to perform relation linking in a KBQA system on the SWQ-WD dataset. Results demonstrate the effectiveness of our approach: its generalizability, interpretability, and ability to achieve an average performance gain of 17% on the CLUTRR dataset.
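
To make the idea of rules as weighted mixtures of knowledge base paths concrete, here is a toy sketch under simplified assumptions; the class, candidate paths, and training signal are illustrative, not the paper's implementation. A predicate's rule is a softmax over candidate KB paths, and a masked-style objective concentrates weight on paths that actually connect the masked entities.

```python
# Conceptual sketch only: one "rule" = softmax-weighted mixture of KB paths per text predicate.
import torch
import torch.nn as nn

class PredicateRule(nn.Module):
    """Maps one natural-language predicate to a weighted mixture of KB paths."""
    def __init__(self, kb_paths):
        super().__init__()
        self.kb_paths = kb_paths
        self.logits = nn.Parameter(torch.zeros(len(kb_paths)))

    def forward(self, path_scores):
        # path_scores[i]: evidence that path i links the masked entity pair in the KB.
        weights = torch.softmax(self.logits, dim=0)   # interpretable rule weights
        return (weights * path_scores).sum()

# Toy masked-training loop for the predicate "grandfather of".
candidate_paths = ["father/father", "father/mother", "brother/father"]
rule = PredicateRule(candidate_paths)
opt = torch.optim.SGD(rule.parameters(), lr=0.5)
for _ in range(200):
    # Pretend KB lookup: only "father/father" actually connects the masked entities.
    path_scores = torch.tensor([1.0, 0.0, 0.0])
    loss = -torch.log(rule(path_scores) + 1e-8)       # reward mass on supporting paths
    opt.zero_grad(); loss.backward(); opt.step()

print(dict(zip(candidate_paths, torch.softmax(rule.logits, 0).tolist())))
```

After training, the softmax weights concentrate on the supporting path, and the weight vector itself is the human-readable rule.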

Laziness Is a Virtue When It Comes to Compositionality in Neural Semantic Parsing
Maxwell Crouse | Pavan Kapanipathi | Subhajit Chaudhury | Tahira Naseem | Ramon Fernandez Astudillo | Achille Fokoue | Tim Klinger
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Nearly all general-purpose neural semantic parsers generate logical forms in a strictly top-down autoregressive fashion. Though such systems have achieved impressive results across a variety of datasets and domains, recent works have called into question whether they are ultimately limited in their ability to compositionally generalize. In this work, we approach semantic parsing from, quite literally, the opposite direction; that is, we introduce a neural semantic parsing method that constructs logical forms from the bottom up, beginning from the logical form's leaves. The system we introduce is lazy in that it incrementally builds up a set of potential semantic parses but only expands and processes the most promising candidate parses at each generation step. Such a parsimonious expansion scheme allows the system to maintain an arbitrarily large set of parse hypotheses that are never realized and thus incur minimal computational overhead. We evaluate our approach on compositional generalization; specifically, on the challenging CFQ dataset and two Text-to-SQL datasets, where we show that our novel bottom-up semantic parsing technique outperforms general-purpose semantic parsers while remaining competitive with parsers that have been tailored to each task.
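
The lazy expansion scheme can be pictured as best-first search over partial parses. The schematic sketch below uses a toy grammar and scoring function as stand-ins for the paper's neural model: a priority queue holds all hypotheses, but only the highest-scoring one is popped and expanded at each step.

```python
# Schematic sketch of lazy bottom-up parsing; toy grammar/scorer, not the paper's model.
import heapq

def lazy_bottom_up_parse(leaves, expand, score, max_steps=100):
    """Grow parses from leaf fragments, expanding only the best hypothesis each step."""
    frontier = [(-score(leaves), 0, tuple(leaves))]   # (neg. score, tie-breaker, fragments)
    counter = 1
    for _ in range(max_steps):
        if not frontier:
            return None
        _, _, parse = heapq.heappop(frontier)         # process only the most promising candidate
        if len(parse) == 1:
            return parse[0]                           # a single root fragment = complete parse
        for child in expand(parse):                   # propose merges of adjacent fragments
            heapq.heappush(frontier, (-score(child), counter, tuple(child)))
            counter += 1
    return None

# Toy "grammar": merge any two adjacent fragments into a conjunction.
def expand(parse):
    for i in range(len(parse) - 1):
        yield parse[:i] + (f"({parse[i]} AND {parse[i+1]})",) + parse[i + 2:]

score = lambda parse: -len(parse)                     # prefer parses with fewer loose fragments
print(lazy_bottom_up_parse(["dept(x)", "size(x,y)", "y > 100"], expand, score))
```

Because un-popped hypotheses are never expanded, the frontier can grow arbitrarily large while only the promising candidates incur real computation.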

2022

X-FACTOR: A Cross-metric Evaluation of Factual Correctness in Abstractive Summarization
Subhajit Chaudhury | Sarathkrishna Swaminathan | Chulaka Gunasekara | Maxwell Crouse | Srinivas Ravishankar | Daiki Kimura | Keerthiram Murugesan | Ramón Fernandez Astudillo | Tahira Naseem | Pavan Kapanipathi | Alexander Gray
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Abstractive summarization models often produce factually inconsistent summaries that are not supported by the original article. Recently, a number of fact-consistent evaluation techniques have been proposed to address this issue; however, a detailed analysis of how these metrics agree with one another has yet to be conducted. In this paper, we present X-FACTOR, a cross-evaluation of three high-performing fact-aware abstractive summarization methods. First, we show that summarization models are often fine-tuned on datasets that contain factually inconsistent summaries and propose a fact-aware filtering mechanism that improves the quality of training data and, consequently, the factuality of these models. Second, we propose a corrector module that can be used to improve the factual consistency of generated summaries. Third, we present a re-ranking technique that samples summary instances from the output distribution of a summarization model and re-ranks the sampled instances based on their factuality. Finally, we provide a detailed cross-metric agreement analysis that shows how tuning a model to output summaries based on a particular factuality metric influences factuality as determined by the other metrics. Our goal in this work is to facilitate research that improves the factuality and faithfulness of abstractive summarization models.
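
The re-ranking component can be sketched as follows; the sampling and scoring callables are toy stand-ins for a real summarization model and factuality metric, not the X-FACTOR codebase.

```python
# Minimal sketch of factuality re-ranking over sampled summaries; hypothetical helpers.
import itertools

def rerank_by_factuality(article, sample_summary, factuality_score, n_samples=8):
    """Sample candidate summaries and keep the one the factuality metric prefers."""
    candidates = [sample_summary(article) for _ in range(n_samples)]
    return max(candidates, key=lambda s: factuality_score(article, s))

# Toy stand-ins: "sampling" cycles through canned outputs, and the "metric"
# simply rewards word overlap with the source article.
canned = itertools.cycle([
    "The company reported record profits in 2021.",
    "The company reported record losses in 2021.",
])
sample_summary = lambda article: next(canned)
factuality_score = lambda article, s: len(set(article.lower().split()) & set(s.lower().split()))

article = "The company reported record profits in 2021, beating analyst expectations."
print(rerank_by_factuality(article, sample_summary, factuality_score, n_samples=4))
```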