LayoutMask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding

Yi Tu, Ya Guo, Huan Chen, Jinyang Tang


Abstract
Visually-rich Document Understanding (VrDU) has attracted much research attention over the past years. Models pre-trained on large collections of document images with transformer-based backbones have led to significant performance gains in this field. The major challenge is how to fuse the different modalities (text, layout, and image) of documents in a unified model with different pre-training tasks. This paper focuses on improving text-layout interactions and proposes a novel multi-modal pre-training model, LayoutMask. LayoutMask uses local 1D position, instead of global 1D position, as layout input and has two pre-training objectives: (1) Masked Language Modeling: predicting masked tokens with two novel masking strategies; (2) Masked Position Modeling: predicting masked 2D positions to improve layout representation learning. LayoutMask can enhance the interactions between text and layout modalities in a unified model and produce adaptive and robust multi-modal representations for downstream tasks. Experimental results show that our proposed method can achieve state-of-the-art results on a wide variety of VrDU problems, including form understanding, receipt understanding, and document image classification.
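The abstract only sketches these ideas, so the following is a minimal, hypothetical Python illustration (not the authors' code) of two of them: how "local" 1D positions, which restart per text segment, differ from "global" 1D positions, and how 2D bounding boxes could be masked for a Masked Position Modeling-style objective. The segment boundaries, function names, and placeholder box value are assumptions for illustration only.

```python
# Illustrative sketch only; all names and values are hypothetical, not the paper's code.
from typing import Dict, List, Set, Tuple

Box = Tuple[int, int, int, int]  # (x0, y0, x1, y1)

def global_1d_positions(segments: List[List[str]]) -> List[int]:
    """Assign one running index across the whole document (the conventional choice)."""
    positions, idx = [], 0
    for seg in segments:
        for _ in seg:
            positions.append(idx)
            idx += 1
    return positions

def local_1d_positions(segments: List[List[str]]) -> List[int]:
    """Restart the index at the start of every segment (e.g. each OCR text line)."""
    positions: List[int] = []
    for seg in segments:
        positions.extend(range(len(seg)))
    return positions

def mask_2d_positions(boxes: List[Box],
                      mask_indices: Set[int],
                      mask_box: Box = (0, 0, 0, 0)) -> Tuple[List[Box], Dict[int, Box]]:
    """Replace selected tokens' boxes with a placeholder; a model trained with a
    Masked Position Modeling-style objective would then recover the originals."""
    masked = [mask_box if i in mask_indices else b for i, b in enumerate(boxes)]
    targets = {i: boxes[i] for i in mask_indices}
    return masked, targets

if __name__ == "__main__":
    segments = [["Invoice", "No.", "123"], ["Total", ":", "$42.00"]]
    print(global_1d_positions(segments))  # [0, 1, 2, 3, 4, 5]
    print(local_1d_positions(segments))   # [0, 1, 2, 0, 1, 2]
    boxes = [(10, 10, 60, 20), (65, 10, 90, 20), (95, 10, 120, 20),
             (10, 30, 50, 40), (55, 30, 60, 40), (65, 30, 110, 40)]
    masked, targets = mask_2d_positions(boxes, {2, 5})
    print(masked[2], targets[2])  # (0, 0, 0, 0) (95, 10, 120, 20)
```

The intuition, as stated in the abstract, is that restarting 1D positions per segment keeps the layout signal local and robust, while predicting masked 2D positions forces the model to learn text-layout interactions rather than treating layout as a fixed input.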
Anthology ID:
2023.acl-long.847
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
15200–15212
URL:
https://aclanthology.org/2023.acl-long.847
DOI:
10.18653/v1/2023.acl-long.847
Cite (ACL):
Yi Tu, Ya Guo, Huan Chen, and Jinyang Tang. 2023. LayoutMask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15200–15212, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
LayoutMask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding (Tu et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.847.pdf
Video:
https://aclanthology.org/2023.acl-long.847.mp4