Vlad I. Morariu
Also published as: Vlad I Morariu
2024
DocEdit-v2: Document Structure Editing Via Multimodal LLM Grounding
Manan Suri | Puneet Mathur | Franck Dernoncourt | Rajiv Jain | Vlad I Morariu | Ramit Sawhney | Preslav Nakov | Dinesh Manocha
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Document structure editing involves manipulating localized textual, visual, and layout components in document images based on the user’s requests. Past works have shown that multimodal grounding of user requests in the document image and identifying the accurate structural components and their associated attributes remain key challenges for this task. To address these, we introduce DocEditAgent, a novel framework that performs end-to-end document editing by leveraging Large Multimodal Models (LMMs). It consists of three novel components: (1) Doc2Command, which simultaneously localizes edit regions of interest (RoIs) and disambiguates user edit requests into edit commands; (2) LLM-based Command Reformulation prompting, which tailors edit commands originally intended for specialized software into edit instructions suitable for generalist LMMs; and (3) processing of these outputs by LMMs such as GPT-4V and Gemini to parse the document layout, execute edits on the grounded RoI, and generate the edited document image. Extensive experiments on the DocEdit dataset show that DocEditAgent significantly outperforms strong baselines on edit command generation (2-33%), RoI bounding box detection (12-31%), and overall document editing (1-12%) tasks.
DocScript: Document-level Script Event Prediction
Puneet Mathur | Vlad I. Morariu | Aparna Garimella | Franck Dernoncourt | Jiuxiang Gu | Ramit Sawhney | Preslav Nakov | Dinesh Manocha | Rajiv Jain
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
We present a novel task of document-level script event prediction, which aims to predict the next event given a candidate list of narrative events in long-form documents. To enable this, we introduce DocSEP, a challenging dataset spanning two new domains, contractual documents and Wikipedia articles, where timeline events may be paragraphs apart and may require multi-hop temporal and causal reasoning. We benchmark existing baselines and present a novel architecture called DocScript to learn sequential ordering between events at the document scale. Our experimental results on the DocSEP dataset demonstrate that learning longer-range dependencies between events is a key challenge, and show that contemporary LLMs such as ChatGPT and FlanT5 struggle to solve this task, indicating their lack of reasoning abilities for understanding causal relationships and temporal sequences within long texts.
Co-authors
- Puneet Mathur 2
- Franck Dernoncourt 2
- Rajiv Jain 2
- Ramit Sawhney 2
- Preslav Nakov 2