DocEdit-v2: Document Structure Editing Via Multimodal LLM Grounding

Manan Suri, Puneet Mathur, Franck Dernoncourt, Rajiv Jain, Vlad Morariu, Ramit Sawhney, Preslav Nakov, Dinesh Manocha


Abstract
Document structure editing involves manipulating localized textual, visual, and layout components in document images based on user requests. Past work has shown that multimodal grounding of user requests in the document image, and accurate identification of the structural components and their associated attributes, remain key challenges for this task. To address these challenges, we introduce DocEditAgent, a novel framework that performs end-to-end document editing by leveraging Large Multimodal Models (LMMs). It consists of three novel components: (1) Doc2Command, which simultaneously localizes the edit region of interest (RoI) and disambiguates the user's edit request into an edit command; (2) LLM-based Command Reformulation prompting, which tailors edit commands originally intended for specialized software into edit instructions suitable for generalist LMMs; and (3) document editing via LMMs such as GPT-4V and Gemini, which parse the document layout, execute edits on the grounded RoI, and generate the edited document image. Extensive experiments on the DocEdit dataset show that DocEditAgent significantly outperforms strong baselines on edit command generation (2-33%), RoI bounding box detection (12-31%), and overall document editing (1-12%).
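To make the three-stage flow concrete, here is a minimal Python sketch of the pipeline as the abstract describes it: Doc2Command grounds the request into an edit command and RoI, Command Reformulation rewrites that command into an LMM-friendly instruction, and an LMM applies the edit to the grounded region. All function names, the EditCommand fields, and the placeholder return values are illustrative assumptions; the paper does not publish this API.

# Structural sketch of the DocEditAgent pipeline from the abstract.
# All names below (doc2command, reformulate_command, apply_edit_with_lmm)
# are hypothetical placeholders, not the authors' actual interfaces.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class EditCommand:
    action: str                      # e.g. "modify", "delete", "add"
    component: str                   # structural component, e.g. "heading"
    attribute: str                   # associated attribute, e.g. "bold"
    roi: Tuple[int, int, int, int]   # grounded edit region (x1, y1, x2, y2)

def doc2command(document_image, user_request: str) -> EditCommand:
    """Stage 1 (Doc2Command): localize the edit RoI in the document image and
    disambiguate the free-form user request into a structured edit command."""
    # Placeholder output; a real system would run the Doc2Command model here.
    return EditCommand("modify", "heading", "bold", (40, 60, 480, 110))

def reformulate_command(command: EditCommand) -> str:
    """Stage 2 (Command Reformulation): rewrite a software-style edit command
    into an instruction that a generalist LMM can follow."""
    return (f"In the region {command.roi}, {command.action} the "
            f"{command.component} so that it is {command.attribute}.")

def apply_edit_with_lmm(document_image, instruction: str, roi) -> str:
    """Stage 3: prompt an LMM (e.g. GPT-4V or Gemini) with the document image,
    the reformulated instruction, and the grounded RoI, and return the edited
    document representation."""
    # Placeholder; a real system would call the LMM API here.
    return f"<edited document with '{instruction}' applied at {roi}>"

def doc_edit_agent(document_image, user_request: str) -> str:
    """End-to-end flow: request -> edit command -> LMM instruction -> edited doc."""
    command = doc2command(document_image, user_request)
    instruction = reformulate_command(command)
    return apply_edit_with_lmm(document_image, instruction, command.roi)

if __name__ == "__main__":
    print(doc_edit_agent(document_image=None, user_request="Make the title bold"))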
Anthology ID:
2024.emnlp-main.867
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15485–15505
URL:
https://aclanthology.org/2024.emnlp-main.867
Cite (ACL):
Manan Suri, Puneet Mathur, Franck Dernoncourt, Rajiv Jain, Vlad Morariu, Ramit Sawhney, Preslav Nakov, and Dinesh Manocha. 2024. DocEdit-v2: Document Structure Editing Via Multimodal LLM Grounding. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15485–15505, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
DocEdit-v2: Document Structure Editing Via Multimodal LLM Grounding (Suri et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.867.pdf