2024
TextLap: Customizing Language Models for Text-to-Layout Planning
Jian Chen | Ruiyi Zhang | Yufan Zhou | Jennifer Healey | Jiuxiang Gu | Zhiqiang Xu | Changyou Chen
Findings of the Association for Computational Linguistics: EMNLP 2024
Automatic generation of graphical layouts is crucial for many real-world applications, including designing posters, flyers, advertisements, and graphical user interfaces. Given the impressive ability of large language models (LLMs) in both natural language understanding and generation, we believe an LLM can be customized to help people create compelling graphical layouts starting from only the user's text instructions. We call our method TextLap (text-based layout planning). It uses a curated instruction-based layout planning dataset (InsLap) to customize an LLM as a graphic designer. We ask human annotators to build a benchmark for evaluating different layout planning models. We demonstrate the effectiveness of TextLap and show that it outperforms strong baselines, including GPT-4 based methods, on document generation and graphical design benchmarks.
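The abstract describes instruction-tuning an LLM to map a text instruction to a graphical layout. Below is a minimal, hypothetical sketch of what one training example for such text-to-layout planning could look like; the field names, element types, and coordinates are illustrative assumptions, not the actual TextLap or InsLap format.

```python
# Hypothetical sketch of an instruction/response pair for text-to-layout
# planning in the spirit of the abstract above; the schema (element types,
# bounding-box format) is an assumption, not taken from the paper.
import json

instruction = (
    "Design a poster layout on a 1024x1024 canvas with a large title at the "
    "top, a hero image in the middle, and a call-to-action button at the bottom."
)

# The fine-tuned LLM would be expected to emit a structured layout,
# e.g. a list of elements with bounding boxes (x, y, width, height).
expected_layout = {
    "canvas": [1024, 1024],
    "elements": [
        {"type": "title",  "bbox": [64, 48, 896, 160]},
        {"type": "image",  "bbox": [112, 256, 800, 480]},
        {"type": "button", "bbox": [384, 800, 256, 96]},
    ],
}

# Serialized as text, such pairs can serve as supervised fine-tuning targets.
print(json.dumps({"instruction": instruction, "output": expected_layout}, indent=2))
```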
FinTextQA: A Dataset for Long-form Financial Question Answering
Jian Chen | Peilin Zhou | Yining Hua | Loh Xin | Kehui Chen | Ziyuan Li | Bing Zhu | Junwei Liang
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Accurate evaluation of financial question answering (QA) systems necessitates a comprehensive dataset encompassing diverse question types and contexts. However, current financial QA datasets lack scope diversity and question complexity. This work introduces FinTextQA, a novel dataset for long-form question answering (LFQA) in finance. FinTextQA comprises 1,262 high-quality, source-attributed QA pairs extracted and selected from finance textbooks and government agency websites. Moreover, we developed a Retrieval-Augmented Generation (RAG)-based LFQA system, comprising an embedder, retriever, reranker, and generator. A multi-faceted evaluation approach, including human ranking, automatic metrics, and GPT-4 scoring, was employed to benchmark the performance of different LFQA system configurations under heightened noise conditions. The results indicate that: (1) Among all compared generators, Baichuan2-7B competes closely with GPT-3.5-turbo in accuracy score; (2) The most effective system configuration on our dataset sets the embedder, retriever, reranker, and generator to Ada2, Automated Merged Retrieval, Bge-Reranker-Base, and Baichuan2-7B, respectively; (3) Models are less susceptible to noise once the context length reaches a specific threshold. The dataset is publicly available at: https://huggingface.co/datasets/GPS-Lab/FinTextQA.
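The abstract names an embedder, retriever, reranker, and generator as the stages of the RAG-based LFQA system. The following is a minimal sketch of that retrieve-rerank-generate flow under stated assumptions: the function names, the dot-product retrieval, and the prompt template are placeholders, not the released FinTextQA system or any specific library API; the comments merely indicate which stage each placeholder stands in for (e.g. an Ada2-style embedder, a BGE-style reranker, a Baichuan2-7B-style generator).

```python
# Sketch of a retrieve -> rerank -> generate pipeline in the spirit of the
# system described above. All callables are placeholders supplied by the user;
# nothing here corresponds to the actual FinTextQA implementation.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Passage:
    text: str
    score: float = 0.0


def answer_question(
    question: str,
    corpus: List[str],
    embed: Callable[[str], List[float]],   # stand-in for an Ada2-style embedder
    rerank: Callable[[str, str], float],   # stand-in for a BGE-style reranker
    generate: Callable[[str], str],        # stand-in for a Baichuan2-7B-style LLM
    top_k: int = 20,
    top_n: int = 5,
) -> str:
    # 1) Retrieval: rank passages by embedding similarity to the question.
    q_vec = embed(question)

    def sim(p_vec: List[float]) -> float:
        return sum(a * b for a, b in zip(q_vec, p_vec))

    retrieved = sorted(
        (Passage(t, sim(embed(t))) for t in corpus),
        key=lambda p: p.score,
        reverse=True,
    )[:top_k]

    # 2) Reranking: rescore the retrieved passages with a stronger model.
    for p in retrieved:
        p.score = rerank(question, p.text)
    top = sorted(retrieved, key=lambda p: p.score, reverse=True)[:top_n]
    context = "\n\n".join(p.text for p in top)

    # 3) Generation: condition the generator on the reranked context.
    prompt = (
        "Answer the question using the context.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```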
Exploring the Necessity of Visual Modality in Multimodal Machine Translation using Authentic Datasets
Zi Long | ZhenHao Tang | Xianghua Fu | Jian Chen | Shilong Hou | Jinze Lyu
Proceedings of the 17th Workshop on Building and Using Comparable Corpora (BUCC) @ LREC-COLING 2024
2020
DeepPaperComposer: A Simple Solution for Training Data Preparation for Parsing Research Papers
Meng Ling | Jian Chen
Proceedings of the First Workshop on Scholarly Document Processing
We present DeepPaperComposer, a simple solution for preparing highly accurate (100%) training data without manual labeling to extract content from scholarly articles using convolutional neural networks (CNNs). We used our approach to generate data and trained CNNs to extract eight categories of both textual (titles, abstracts, headers, figure and table captions, and other text) and non-textual content (figures and tables) from 30 years of IEEE VIS conference papers, of which a third were scanned bitmap PDFs. We curated this dataset and named it VISpaper-3K. We then report initial benchmark results of YOLOv3 and Faster R-CNN trained on VISpaper-3K and evaluated on both VISpaper-3K and CS-150. We open-source the DeepPaperComposer training data generation pipeline and release the resulting annotation data VISpaper-3K to promote reproducible research.
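The abstract describes automatically generated page annotations spanning eight content categories that are then used to train object detectors. As a rough illustration only, the sketch below converts such annotations into a COCO-style detection dataset; the annotation schema, file names, and helper function are hypothetical and not part of the released DeepPaperComposer or VISpaper-3K format, although the category list follows the eight classes named in the abstract.

```python
# Illustrative conversion of automatically generated page annotations into a
# COCO-style detection dataset. The input schema and paths are hypothetical.
import json

CATEGORIES = [
    "title", "abstract", "header", "figure-caption",
    "table-caption", "other-text", "figure", "table",
]


def to_coco(pages):
    """pages: list of dicts like {"image": "page_001.png", "width": W,
    "height": H, "boxes": [(category, x, y, w, h), ...]}."""
    coco = {
        "images": [],
        "annotations": [],
        "categories": [{"id": i, "name": c} for i, c in enumerate(CATEGORIES)],
    }
    ann_id = 0
    for img_id, page in enumerate(pages):
        coco["images"].append({
            "id": img_id, "file_name": page["image"],
            "width": page["width"], "height": page["height"],
        })
        for cat, x, y, w, h in page["boxes"]:
            coco["annotations"].append({
                "id": ann_id, "image_id": img_id,
                "category_id": CATEGORIES.index(cat),
                "bbox": [x, y, w, h], "area": w * h, "iscrowd": 0,
            })
            ann_id += 1
    return coco


if __name__ == "__main__":
    sample = [{"image": "page_001.png", "width": 612, "height": 792,
               "boxes": [("title", 72, 60, 468, 40),
                         ("figure", 72, 300, 468, 200)]}]
    print(json.dumps(to_coco(sample), indent=2))
```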