Petr Babkin


2024

ReportGPT: Human-in-the-loop Verifiable Table-to-Text Generation
Lucas Cecchi | Petr Babkin
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track

Recent developments in the quality and accessibility of large language models have precipitated a surge in user-facing tools for content generation. Motivated by the need for human quality control of these systems, we introduce ReportGPT: a pipeline framework for verifiable human-in-the-loop table-to-text generation. ReportGPT is based on a domain-specific language, which acts as a proof mechanism for generating verifiable commentary and allows users to quickly check the relevance and factuality of model outputs. User selections then become few-shot examples for improving the performance of the pipeline. We configure three variants of our pipeline and find that using language models in ReportGPT’s components trades off precision for more insightful downstream commentary. Furthermore, ReportGPT learns from human feedback in real time, needing only a few samples to improve performance.
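The abstract does not reproduce ReportGPT’s actual DSL, so the Python sketch below is only a minimal illustration of the verifiable-commentary idea; the Claim class, the verify function, and the toy "fn(col) cmp value" grammar are all hypothetical stand-ins. The point it demonstrates is that each piece of generated commentary carries a checkable expression over the source table, and the expression is evaluated before the text is accepted:

# Illustrative sketch only: the real ReportGPT DSL is not reproduced here.
# A "claim" pairs generated commentary with a checkable expression over the
# table, so its factuality can be verified before the text is accepted.
from dataclasses import dataclass
from statistics import mean

Table = dict[str, list[float]]  # column name -> values (hypothetical schema)

@dataclass
class Claim:
    text: str  # natural-language commentary produced by the model
    expr: str  # mini-DSL expression acting as its proof obligation

OPS = {
    "mean": lambda t, col: mean(t[col]),
    "max":  lambda t, col: max(t[col]),
    "min":  lambda t, col: min(t[col]),
}

def verify(claim: Claim, table: Table) -> bool:
    """Evaluate a three-token 'fn(col) cmp value' expression against the table."""
    lhs, cmp_op, rhs = claim.expr.split()        # e.g. "mean(revenue) > 100"
    fn, col = lhs.rstrip(")").split("(")
    value = OPS[fn](table, col)
    target = float(rhs)
    return {"<": value < target, ">": value > target,
            "==": value == target}[cmp_op]

table = {"revenue": [90.0, 110.0, 130.0]}
claim = Claim("Average revenue exceeded 100.", "mean(revenue) > 100")
print(verify(claim, table))  # True -> commentary checks out against the table

Claims that pass verification can be surfaced for human review, and, per the abstract, the outputs a user accepts would then be recycled as few-shot examples for the pipeline.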

DocLLM: A Layout-Aware Generative Language Model for Multimodal Document Understanding
Dongsheng Wang | Natraj Raman | Mathieu Sibue | Zhiqiang Ma | Petr Babkin | Simerjot Kaur | Yulong Pei | Armineh Nourbakhsh | Xiaomo Liu
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Enterprise documents such as forms, receipts, and reports often carry rich semantics at the intersection of textual and spatial modalities. The visual cues offered by their complex layouts play a crucial role in comprehending these documents effectively. In this paper, we present DocLLM, a lightweight extension to traditional large language models (LLMs) for reasoning over visual documents that takes into account both textual semantics and spatial layout. Our model differs from existing multimodal LLMs by avoiding expensive image encoders and focusing exclusively on bounding box information to incorporate the spatial layout structure. Specifically, the cross-alignment between text and spatial modalities is captured by decomposing the attention mechanism of classical transformers into a set of disentangled matrices. Furthermore, we devise a pre-training objective that learns to infill text segments, which allows us to address the irregular layouts and heterogeneous content frequently encountered in visual documents. The pre-trained model is fine-tuned on a large-scale instruction dataset covering four core document intelligence tasks. We demonstrate that our solution outperforms SotA LLMs on 14 out of 16 datasets across all tasks and generalizes well to 4 out of 5 previously unseen datasets.
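As a rough sketch of the disentangled attention described above (schematic, not the released DocLLM code; the layer names, dimensions, and learnable mixing weights here are assumptions), text tokens and their bounding boxes can be given separate query/key projections, with the attention score formed as a weighted sum of the four text/spatial cross terms:

# Minimal sketch of disentangled text/spatial attention (schematic only).
# Text tokens and their bounding boxes get separate query/key projections;
# the score is a weighted sum of the four cross-modal terms.
import torch
import torch.nn as nn

class DisentangledSpatialAttention(nn.Module):
    def __init__(self, d_model: int, d_box: int):
        super().__init__()
        self.q_t = nn.Linear(d_model, d_model)  # text queries
        self.k_t = nn.Linear(d_model, d_model)  # text keys
        self.v_t = nn.Linear(d_model, d_model)  # text values
        self.q_s = nn.Linear(d_box, d_model)    # spatial (bbox) queries
        self.k_s = nn.Linear(d_box, d_model)    # spatial (bbox) keys
        # Learnable mixing weights for the three cross-modal score terms.
        self.lam = nn.Parameter(torch.ones(3))

    def forward(self, text: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # text: (batch, seq, d_model); boxes: (batch, seq, d_box)
        qt, kt = self.q_t(text), self.k_t(text)
        qs, ks = self.q_s(boxes), self.k_s(boxes)
        scale = text.shape[-1] ** 0.5
        scores = (qt @ kt.transpose(-2, -1)
                  + self.lam[0] * (qt @ ks.transpose(-2, -1))
                  + self.lam[1] * (qs @ kt.transpose(-2, -1))
                  + self.lam[2] * (qs @ ks.transpose(-2, -1))) / scale
        attn = scores.softmax(dim=-1)
        return attn @ self.v_t(text)

layer = DisentangledSpatialAttention(d_model=64, d_box=4)
out = layer(torch.randn(2, 16, 64), torch.rand(2, 16, 4))
print(out.shape)  # torch.Size([2, 16, 64])

Because only a low-dimensional bounding-box vector is projected, rather than pixels passed through an image encoder, the layout signal comes at a small parameter cost, which is consistent with the lightweight-extension framing above.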

2017

Fast Forward Through Opportunistic Incremental Meaning Representation Construction
Petr Babkin | Sergei Nirenburg
Proceedings of ACL 2017, Student Research Workshop

2016

Detection and Resolution of Verb Phrase Ellipsis
Marjorie McShane | Petr Babkin
Linguistic Issues in Language Technology, Volume 13, 2016

Verb phrase (VP) ellipsis is the omission of a verb phrase whose meaning can be reconstructed from the linguistic or real-world context. It is licensed in English by auxiliary verbs, often modal auxiliaries: She can go to Hawaii but he can’t [e]. This paper describes a system called ViPER (VP Ellipsis Resolver) that detects and resolves VP ellipsis, relying on linguistic principles such as syntactic parallelism, modality correlations, and the delineation of core vs. peripheral sentence constituents. The key insight guiding the work is that not all cases of ellipsis are equally difficult: some can be detected and resolved with high confidence even before we are able to build systems with human-level semantic and pragmatic understanding of text.
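ViPER’s actual detection machinery is not shown here; the toy heuristic below merely illustrates the licensing condition the abstract mentions, and the auxiliary list, tokenizer, and stranded-auxiliary test are all hypothetical simplifications: an auxiliary with no verb-like material following it in the clause flags a candidate ellipsis site.

# Toy illustration of the licensing condition, not ViPER itself: a stranded
# auxiliary/modal with nothing verb-like after it flags a candidate VPE site.
import re

AUXILIARIES = {"can", "can't", "could", "couldn't", "will", "won't",
               "would", "does", "doesn't", "did", "didn't", "is", "isn't"}

def vpe_candidates(sentence: str) -> list[int]:
    """Return token indices of auxiliaries that look stranded (clause-final
    or followed only by punctuation/conjunctions), i.e. possible [e] sites."""
    tokens = re.findall(r"[\w']+|[.,;]", sentence.lower())
    sites = []
    for i, tok in enumerate(tokens):
        if tok in AUXILIARIES:
            rest = tokens[i + 1:]
            # Crude test: the clause ends before any further content appears.
            if not rest or rest[0] in {".", ",", ";", "but", "and", "too"}:
                sites.append(i)
    return sites

print(vpe_candidates("She can go to Hawaii but he can't."))  # [7] -> "can't"

A real system such as ViPER goes much further, using cues like syntactic parallelism and modality correlations both to filter such candidates and to resolve them to an antecedent VP.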

2014

Nominal Compound Interpretation by Intelligent Agents
Marjorie McShane | Stephen Beale | Petr Babkin
Linguistic Issues in Language Technology, Volume 10, 2014

This paper presents a cognitively inspired algorithm for the semantic analysis of nominal compounds by intelligent agents. The agents, modeled within the OntoAgent environment, are tasked with computing a full context-sensitive semantic interpretation of each compound using a battery of engines that rely on a high-quality computational lexicon and ontology. Rather than being treated as an isolated “task”, as in many NLP approaches, nominal compound analysis in OntoAgent represents a minimal extension to the core process of semantic analysis. We hypothesize that seeking similarities across language analysis tasks reflects the spirit of how people approach language interpretation, and that this approach will make feasible the long-term development of truly sophisticated, human-like intelligent agents. The initial evaluation of our approach sets aside nominal compounds that are fixed expressions, since these require individual semantic specification at the lexical level.
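The OntoAgent lexicon, ontology, and analysis engines are not reproduced in the abstract, so the sketch below only gestures at the general mechanism under invented data: candidate semantic relations between the modifier’s and head’s ontological types are filtered by selectional constraints, yielding candidate readings of the compound.

# Schematic illustration of ontology-driven compound interpretation; the
# OntoAgent resources are not reproduced here. All concepts, relations, and
# lexical types below are invented for the example.

# Tiny stand-in "ontology": relation -> (allowed modifier type, allowed head type)
ONTOLOGY = {
    "MADE-OF":  ("MATERIAL", "ARTIFACT"),
    "LOCATION": ("PLACE", "EVENT"),
    "THEME":    ("OBJECT", "EVENT"),
}

# Stand-in "lexicon": word -> ontological type
LEXICON = {"glass": "MATERIAL", "door": "ARTIFACT",
           "beach": "PLACE", "party": "EVENT"}

def interpret_compound(modifier: str, head: str) -> list[str]:
    """Return the ontological relations whose selectional constraints are
    satisfied by the modifier and head types (candidate readings)."""
    m_type, h_type = LEXICON[modifier], LEXICON[head]
    return [rel for rel, (m_ok, h_ok) in ONTOLOGY.items()
            if m_type == m_ok and h_type == h_ok]

print(interpret_compound("glass", "door"))   # ['MADE-OF']
print(interpret_compound("beach", "party"))  # ['LOCATION']

In a full context-sensitive analysis of the kind the abstract describes, any remaining ambiguity among candidate relations would be resolved against the surrounding discourse rather than in isolation.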