Read Extensively, Focus Smartly: A Cross-document Semantic Enhancement Method for Visual Documents NER

Jun Zhao, Xin Zhao, WenYu Zhan, Tao Gui, Qi Zhang, Liang Qiao, Zhanzhan Cheng, Shiliang Pu


Abstract
The introduction of multimodal information and pretraining techniques significantly improves entity recognition from visually-rich documents. However, most existing methods pay unnecessary attention to irrelevant regions of the current document while ignoring potentially valuable information in related documents. To deal with this problem, this work proposes a cross-document semantic enhancement method consisting of two modules: 1) To prevent distraction by irrelevant regions in the current document, we design a learnable attention mask mechanism, which adaptively filters redundant information in the current document. 2) To further enrich the entity-related context, we propose a cross-document information awareness technique, which enables the model to collect more evidence across documents to assist in prediction. Experimental results on two document understanding benchmarks covering eight languages demonstrate that our method outperforms the SOTA methods.
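The learnable attention mask described in the abstract can be illustrated with a minimal sketch (an assumption for illustration, not the authors' code): each token's attention weight is gated by a sigmoid of a learnable per-token score, so regions judged irrelevant are suppressed before the weights are renormalized.

```python
# Minimal sketch of a learnable attention mask (hypothetical illustration,
# not the authors' implementation): attention weights over document tokens
# are gated by sigmoid(mask_logit), down-weighting irrelevant regions.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def masked_attention(scores, mask_logits):
    """scores: raw attention scores for one query over N tokens;
    mask_logits: learnable per-token mask parameters (trained jointly)."""
    weights = softmax(scores)
    gates = [1.0 / (1.0 + math.exp(-g)) for g in mask_logits]  # sigmoid gate in (0, 1)
    gated = [w * g for w, g in zip(weights, gates)]
    z = sum(gated)
    return [v / z for v in gated]  # renormalize so weights sum to 1

# Tokens with strongly negative mask logits are filtered out even if
# their raw attention scores are high.
attn = masked_attention([1.0, 2.0, 0.5, 0.1], [3.0, -4.0, -4.0, 2.0])
```

In practice the mask parameters would be produced by the model and trained end-to-end; the sketch only shows how a soft mask reshapes the attention distribution.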
Anthology ID:
2022.coling-1.177
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
2034–2043
URL:
https://aclanthology.org/2022.coling-1.177
Cite (ACL):
Jun Zhao, Xin Zhao, WenYu Zhan, Tao Gui, Qi Zhang, Liang Qiao, Zhanzhan Cheng, and Shiliang Pu. 2022. Read Extensively, Focus Smartly: A Cross-document Semantic Enhancement Method for Visual Documents NER. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2034–2043, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Read Extensively, Focus Smartly: A Cross-document Semantic Enhancement Method for Visual Documents NER (Zhao et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.177.pdf
Data
FUNSD