Chris Tensmeyer
2022
MGDoc: Pre-training with Multi-granular Hierarchy for Document Image Understanding
Zilong Wang | Jiuxiang Gu | Chris Tensmeyer | Nikolaos Barmpalios | Ani Nenkova | Tong Sun | Jingbo Shang | Vlad Morariu
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Document images are a ubiquitous source of data where the text is organized in a complex hierarchical structure ranging from fine granularity (e.g., words), through medium granularity (e.g., regions such as paragraphs or figures), to coarse granularity (e.g., the whole page). The spatial hierarchical relationships between content at different levels of granularity are crucial for document image understanding tasks. Existing methods learn features at either the word level or the region level but fail to consider both simultaneously. Word-level models are restricted by the fact that they originate from pure-text language models, which only encode the word-level context. In contrast, region-level models attempt to encode regions corresponding to paragraphs or text blocks into a single embedding, but they perform worse when additional word-level features are included. To deal with these issues, we propose MGDoc, a new multi-modal multi-granular pre-training framework that encodes page-level, region-level, and word-level information at the same time. MGDoc uses a unified text-visual encoder to obtain multi-modal features across different granularities, which makes it possible to project the multi-granular features into the same hyperspace. To model the region-word correlation, we design a cross-granular attention mechanism and specific pre-training tasks that reinforce the model's learning of the hierarchy between regions and words. Experiments demonstrate that our proposed model learns better features that perform well across granularities and lead to improvements in downstream tasks.
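The cross-granular attention the abstract mentions can be pictured as scaled dot-product attention from word features to region features projected into the same space. The sketch below is a minimal illustration of that idea, assuming NumPy arrays and illustrative shapes; the function name and dimensions are hypothetical, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_granular_attention(word_feats, region_feats):
    """Each word attends over region embeddings that live in the same
    feature space, producing a region-aware context vector per word.

    word_feats:   (num_words, d)
    region_feats: (num_regions, d)
    returns:      (num_words, d)
    """
    d = word_feats.shape[-1]
    scores = word_feats @ region_feats.T / np.sqrt(d)   # (W, R) similarities
    weights = softmax(scores, axis=-1)                  # per-word distribution over regions
    return weights @ region_feats                       # convex combination of regions

# toy example: 4 words, 2 regions, 8-dim features
rng = np.random.default_rng(0)
words = rng.normal(size=(4, 8))
regions = rng.normal(size=(2, 8))
ctx = cross_granular_attention(words, regions)
print(ctx.shape)  # (4, 8)
```

Because the attention output for each word is a weighted average of region embeddings, a word placed inside a paragraph can pull in that paragraph's representation, which is one way to encode the region-word hierarchy.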
TELIN: Table Entity LINker for Extracting Leaderboards from Machine Learning Publications
Sean Yang | Chris Tensmeyer | Curtis Wigington
Proceedings of the First Workshop on Information Extraction from Scientific Publications
Tracking state-of-the-art (SOTA) results in machine learning studies is challenging due to high publication volume. Existing methods for creating leaderboards in scientific documents require significant human supervision or rely on scarcely available LaTeX source files. We propose Table Entity LINker (TELIN), a framework which extracts (task, model, dataset, metric) quadruples from collections of scientific publications in PDF format. TELIN identifies scientific named entities, constructs a knowledge base, and leverages human feedback to iteratively refine automatic extractions. TELIN identifies and prioritizes uncertain and impactful entities for human review to create a cascade effect for leaderboard completion. We show that TELIN is competitive with the SOTA but requires much less human annotation.
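The human-in-the-loop step the abstract describes (prioritizing uncertain and impactful entities for review) can be sketched as a simple ranking over extracted candidates. Everything below is an illustrative assumption: the entity records, scores, and the uncertainty-times-impact priority are hypothetical, not TELIN's actual scoring.

```python
# Hypothetical sketch of active refinement: rank candidate
# (task, model, dataset, metric) entities so that the ones a human
# should review first come out on top.

def prioritize(entities):
    """Sort candidates by uncertainty * impact, highest first.
    'impact' stands in for how often the entity appears in the
    knowledge base; both scores are illustrative."""
    return sorted(entities,
                  key=lambda e: e["uncertainty"] * e["impact"],
                  reverse=True)

candidates = [
    {"name": "SQuAD", "type": "dataset", "uncertainty": 0.1, "impact": 50},
    {"name": "F1",    "type": "metric",  "uncertainty": 0.7, "impact": 80},
    {"name": "BERT",  "type": "model",   "uncertainty": 0.4, "impact": 30},
]
queue = prioritize(candidates)
print([e["name"] for e in queue])  # ['F1', 'BERT', 'SQuAD']
```

Reviewing the top of such a queue first is what creates the cascade effect the abstract mentions: resolving one high-impact entity corrects every leaderboard row that references it.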