Xiaoyu Sun
2024
LMDX: Language Model-based Document Information Extraction and Localization
Vincent Perot, Kai Kang, Florian Luisier, Guolong Su, Xiaoyu Sun, Ramya Sree Boppana, Zilong Wang, Zifeng Wang, Jiaqi Mu, Hao Zhang, Chen-Yu Lee, Nan Hua
Findings of the Association for Computational Linguistics: ACL 2024
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP), improving the state of the art and exhibiting emergent capabilities across various tasks. However, their application to extracting information from visually rich documents, which is at the core of many document processing workflows and involves the extraction of key entities from semi-structured documents, has not yet been successful. The main obstacles to adopting LLMs for this task include the absence of layout encoding within LLMs, which is critical for high-quality extraction, and the lack of a grounding mechanism to localize the predicted entities within the document. In this paper, we introduce Language Model-based Document Information EXtraction and Localization (LMDX), a methodology to reframe the document information extraction task for an LLM. LMDX enables extraction of singular, repeated, and hierarchical entities, both with and without training data, while providing grounding guarantees and localizing the entities within the document. Finally, we apply LMDX to the PaLM 2-S and Gemini Pro LLMs and evaluate it on the VRDU and CORD benchmarks, setting a new state of the art and showing how LMDX enables the creation of high-quality, data-efficient parsers.
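To make the layout-encoding and grounding ideas above concrete, here is a minimal illustrative sketch of one way OCR text and quantized 2D coordinates could be combined into an LLM prompt so that predicted entities can be traced back to source segments. The bucket count, the XX|YY identifier format, and the helper names (quantize, build_prompt) are assumptions for illustration, not the paper's exact prompt scheme.

```python
# Sketch: attach quantized layout coordinates to OCR segments so an LLM
# prompt carries spatial information and extractions can be localized.
# Format and names are illustrative assumptions, not LMDX's exact scheme.

def quantize(value: float, page_size: float, buckets: int = 100) -> int:
    """Map an absolute page coordinate into a small integer bucket."""
    return min(buckets - 1, int(value / page_size * buckets))

def build_prompt(ocr_segments, page_w, page_h, schema_json):
    lines = []
    for seg in ocr_segments:  # seg: {"text": str, "x": float, "y": float}
        x = quantize(seg["x"], page_w)
        y = quantize(seg["y"], page_h)
        # Each segment carries its quantized position as an identifier,
        # so the model can cite "XX|YY" to ground an extracted entity.
        lines.append(f'{seg["text"]} {x}|{y}')
    document = "\n".join(lines)
    return (
        "<Document>\n" + document + "\n</Document>\n"
        "<Task>Extract entities matching this schema as JSON, citing the "
        "XX|YY identifiers of the source segments.</Task>\n"
        "<Schema>\n" + schema_json + "\n</Schema>"
    )

# Example usage with two OCR segments from a 612x792pt page:
segments = [
    {"text": "Invoice Number: 1234", "x": 72.0, "y": 90.0},
    {"text": "Total: $56.00", "x": 72.0, "y": 700.0},
]
print(build_prompt(segments, 612.0, 792.0,
                   '{"invoice_number": "", "total": ""}'))
```

Because every extracted value must cite a segment identifier that appears verbatim in the prompt, answers that cannot be matched to a real segment can be rejected, which is one way to provide the grounding guarantee the abstract describes.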
2023
DocumentNet: Bridging the Data Gap in Document Pre-training
Lijun Yu, Jin Miao, Xiaoyu Sun, Jiayi Chen, Alexander Hauptmann, Hanjun Dai, Wei Wei
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track
Document understanding tasks, in particular Visually-rich Document Entity Retrieval (VDER), have gained significant attention in recent years thanks to their broad applications in enterprise AI. However, publicly available data have been scarce for these tasks due to strict privacy constraints and high annotation costs. To make matters worse, the non-overlapping entity spaces of different datasets hinder knowledge transfer between document types. In this paper, we propose a method to collect massive-scale, weakly labeled data from the web to benefit the training of VDER models. The collected dataset, named DocumentNet, does not depend on specific document types or entity sets, making it universally applicable to all VDER tasks. The current DocumentNet consists of 30M documents spanning nearly 400 document types organized in a four-level ontology. Experiments on a set of broadly adopted VDER tasks show significant improvements when DocumentNet is incorporated into pre-training, for both classic and few-shot learning settings. With the recent emergence of large language models (LLMs), DocumentNet provides a large data source for extending their multimodal capabilities to VDER.