Zhi Yu
2024
DocHieNet: A Large and Diverse Dataset for Document Hierarchy Parsing
Hangdi Xing | Changxu Cheng | Feiyu Gao | Zirui Shao | Zhi Yu | Jiajun Bu | Qi Zheng | Cong Yao
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Parsing documents from pixels, such as pictures and scanned PDFs, into hierarchical structures is in extensive demand in the daily routines of data storage, retrieval, and understanding. However, research on this topic has long been hindered because most existing datasets are small-scale or contain documents of only a single type, i.e., they lack document diversity. Moreover, annotation standards differ significantly across datasets. In this paper, we introduce a large and diverse document hierarchy parsing (DHP) dataset to address the problems of data scarcity and inconsistency, aiming to set a new standard as a more practical, long-standing benchmark. Meanwhile, we present a new DHP framework designed to grasp both fine-grained text content and coarse-grained patterns at the layout element level, enhancing the capacity of pre-trained text-layout models to handle the multi-page and multi-level challenges of DHP. Through extensive experiments, we validate the effectiveness of the proposed dataset and method.
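The abstract does not spell out the framework's internals, but the DHP task itself can be made concrete. Below is a minimal, hypothetical sketch (the class name ParentPointerParser and all dimensions are ours, not the paper's) that frames hierarchy parsing as predicting a parent layout element for each element, which yields a tree over the document.

```python
# Minimal sketch (not the paper's model): document hierarchy parsing as
# parent-pointer prediction over layout elements. Each element (title,
# section header, paragraph, ...) gets an embedding; a bilinear scorer
# picks its parent, yielding a tree over the whole document.
import torch
import torch.nn as nn

class ParentPointerParser(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.head = nn.Linear(hidden, hidden)  # candidate-parent projection
        self.dep = nn.Linear(hidden, hidden)   # child projection

    def forward(self, elems):
        # elems: (N, hidden) embeddings of layout elements; index 0 is a
        # virtual root so top-level elements have something to attach to.
        h, d = self.head(elems), self.dep(elems)
        scores = d @ h.T                # scores[i, j]: element i's parent is j
        return scores.argmax(dim=-1)    # predicted parent index per element

parser = ParentPointerParser()
elems = torch.randn(5, 256)            # root + 4 layout elements (toy input)
print(parser(elems))                    # entry 0 (the root's parent) is unused
```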
2023
Translate the Beauty in Songs: Jointly Learning to Align Melody and Translate Lyrics
Chengxi Li | Kai Fan | Jiajun Bu | Boxing Chen | Zhongqiang Huang | Zhi Yu
Findings of the Association for Computational Linguistics: EMNLP 2023
Song translation requires both translation of lyrics and alignment of music notes so that the resulting verse can be sung to the accompanying melody, a challenging problem that has attracted interest in different aspects of the translation process. In this paper, we propose Lyrics-Melody Translation with Adaptive Grouping (LTAG), a holistic solution to automatic song translation that jointly models lyric translation and lyrics-melody alignment. It is a novel encoder-decoder framework that simultaneously translates the source lyrics and determines the number of aligned notes at each decoding step through an adaptive note grouping module. To address data scarcity, we commissioned a small amount of training data annotated specifically for this task and generated large amounts of additional training data through back-translation. Experiments on an English-Chinese song translation dataset show the effectiveness of our model in both automatic and human evaluations.
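As a rough illustration of the adaptive note grouping idea, here is a hypothetical decoding head (GroupingHead and its parameters are our invention, not the published LTAG code): at each decoding step it predicts both the next lyric token and the number of melody notes that token should span.

```python
# Speculative sketch of per-step note grouping, not LTAG's implementation:
# the decoder state feeds two heads, one over the lyric vocabulary and one
# over group sizes, so each emitted token is assigned 1..max_notes notes.
import torch
import torch.nn as nn

class GroupingHead(nn.Module):
    def __init__(self, hidden=512, vocab=32000, max_notes=4):
        super().__init__()
        self.token_proj = nn.Linear(hidden, vocab)      # next lyric token
        self.notes_proj = nn.Linear(hidden, max_notes)  # class i -> i+1 notes

    def forward(self, dec_state):
        # dec_state: (batch, hidden) decoder output at one time step
        return self.token_proj(dec_state), self.notes_proj(dec_state)

head = GroupingHead()
token_logits, notes_logits = head(torch.randn(2, 512))
print(token_logits.shape, notes_logits.shape)  # (2, 32000) and (2, 4)
```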
GEM: Gestalt Enhanced Markup Language Model for Web Understanding via Render Tree
Zirui Shao | Feiyu Gao | Zhongda Qi | Hangdi Xing | Jiajun Bu | Zhi Yu | Qi Zheng | Xiaozhong Liu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Inexhaustible web content carries abundant perceptible information beyond text. Unfortunately, most prior efforts on pre-trained Language Models (LMs) ignore such cyber-richness, while the few that do not employ only plain HTML, excluding crucial information in the rendered web such as visual, layout, and style cues. Intuitively, such perceptible web information can provide essential intelligence for content understanding tasks. This study presents an innovative Gestalt Enhanced Markup (GEM) Language Model, inspired by Gestalt psychological theory, that hosts heterogeneous visual information from the render tree in the language model without requiring additional visual input. Comprehensive experiments on multiple downstream tasks, i.e., web question answering and web information extraction, validate GEM's superiority.
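To make the "no additional visual input" idea concrete, here is a speculative sketch (RenderAwareEmbedding and the chosen features are illustrative, not GEM's actual design) that folds render-tree attributes such as bounding box, font size, and text color into a language model's token embeddings, so no pixels are ever consumed.

```python
# Illustrative sketch, not GEM's architecture: render-tree attributes are
# embedded and summed into the token embeddings, letting the LM use layout
# and style signals without any image encoder.
import torch
import torch.nn as nn

class RenderAwareEmbedding(nn.Module):
    def __init__(self, vocab=30522, hidden=768, n_font_sizes=32, n_colors=64):
        super().__init__()
        self.tok = nn.Embedding(vocab, hidden)
        self.font = nn.Embedding(n_font_sizes, hidden)  # bucketized font size
        self.color = nn.Embedding(n_colors, hidden)     # quantized text color
        self.bbox = nn.Linear(4, hidden)                # normalized x0,y0,x1,y1

    def forward(self, ids, font_ids, color_ids, boxes):
        return (self.tok(ids) + self.font(font_ids)
                + self.color(color_ids) + self.bbox(boxes))

emb = RenderAwareEmbedding()
out = emb(torch.randint(0, 30522, (1, 8)),   # token ids
          torch.randint(0, 32, (1, 8)),      # font-size buckets
          torch.randint(0, 64, (1, 8)),      # color buckets
          torch.rand(1, 8, 4))               # bounding boxes
print(out.shape)                              # torch.Size([1, 8, 768])
```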
Co-authors
- Jiajun Bu 3
- Hangdi Xing 2
- Feiyu Gao 2
- Zirui Shao 2
- Qi Zheng 2