Yuyang Dong
2026
SCAN: Semantic Document Layout Analysis for Textual and Visual Retrieval-Augmented Generation
Nobuhiro Ueda | Yuyang Dong | Krisztián Boros | Daiki Ito | Takuya Sera | Masafumi Oyamada
Findings of the Association for Computational Linguistics: EACL 2026
With the increasing adoption of Large Language Models (LLMs) and Vision-Language Models (VLMs), rich document analysis technologies for applications like Retrieval-Augmented Generation (RAG) and visual RAG are gaining significant attention. Recent research indicates that using VLMs yields better RAG performance, but processing rich documents remains a challenge since a single page contains large amounts of information. In this paper, we present SCAN (SemantiC Document Layout ANalysis), a novel approach that enhances both textual and visual RAG systems that work with visually rich documents. It is a VLM-friendly approach that identifies document components with appropriate semantic granularity, balancing context preservation with processing efficiency. SCAN uses a coarse-grained semantic approach that divides documents into coherent regions covering contiguous components. We trained the SCAN model by fine-tuning object detection models on an annotated dataset. Our experimental results across English and Japanese datasets demonstrate that applying SCAN improves end-to-end textual RAG performance by up to 9.4 points and visual RAG performance by up to 10.4 points, outperforming conventional approaches and even commercial document processing solutions.
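As a rough illustration of the coarse-grained chunking idea described in the abstract, the sketch below runs a page image through an object detector and turns each detected region into a standalone retrieval unit. This is not the authors' released pipeline: a generic DETR checkpoint (facebook/detr-resnet-50) stands in for SCAN's fine-tuned layout detector, and page_to_chunks is a hypothetical helper.

```python
# Illustrative sketch, not the SCAN authors' code: a page image is passed
# through an object detector whose boxes define coarse semantic regions,
# and each crop becomes one retrieval unit for visual RAG.
# A generic DETR checkpoint stands in for SCAN's fine-tuned detector.
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
detector = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

def page_to_chunks(page: Image.Image, threshold: float = 0.7):
    """Crop each detected region so one chunk keeps contiguous components
    (e.g., a figure together with its caption) in a single retrieval unit."""
    inputs = processor(images=page, return_tensors="pt")
    with torch.no_grad():
        outputs = detector(**inputs)
    sizes = torch.tensor([page.size[::-1]])  # PIL size is (w, h); DETR wants (h, w)
    results = processor.post_process_object_detection(
        outputs, target_sizes=sizes, threshold=threshold
    )[0]
    # Boxes come back as (x_min, y_min, x_max, y_max), matching PIL's crop format.
    return [page.crop(box.tolist()) for box in results["boxes"]]
```

Each crop (or its OCR text, for the textual RAG setting) would then be embedded and indexed as an independent retrieval unit.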
2024
Jellyfish: Instruction-Tuning Local Large Language Models for Data Preprocessing
Haochen Zhang | Yuyang Dong | Chuan Xiao | Masafumi Oyamada
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
This paper explores the utilization of LLMs for data preprocessing (DP), a crucial step in the data mining pipeline that transforms raw data into a clean format. We instruction-tune local LLMs as universal DP task solvers that operate on a local, single, and low-priced GPU, ensuring data security and enabling further customization. We select a collection of datasets across four representative DP tasks and construct instruction data using data configuration, knowledge injection, and reasoning data distillation techniques tailored to DP. By tuning Mistral-7B, Llama 3-8B, and OpenOrca-Platypus2-13B, our models, Jellyfish-7B/8B/13B, are competitive with GPT-3.5/4 models and generalize strongly to unseen tasks while barely compromising the base models' abilities in NLP tasks. Meanwhile, Jellyfish offers enhanced reasoning capabilities compared to GPT-3.5. Our models are available at: https://huggingface.co/NECOUDBFM/Jellyfish. Our instruction dataset is available at: https://huggingface.co/datasets/NECOUDBFM/Jellyfish-Instruct
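For context, the following minimal sketch shows how one of the released Jellyfish checkpoints might be queried for an entity-matching-style preprocessing task via Hugging Face transformers. The checkpoint id and prompt wording below are assumptions for illustration; the model cards at the URL above document the intended prompt templates.

```python
# Minimal sketch: querying a Jellyfish model for an entity-matching
# style data-preprocessing task via Hugging Face transformers.
# Checkpoint id and prompt format are assumptions; see the model card
# at https://huggingface.co/NECOUDBFM/Jellyfish for official usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "NECOUDBFM/Jellyfish-7B"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# A hypothetical entity-matching prompt for illustration only.
prompt = (
    "You are an expert in data preprocessing.\n"
    'Record A: [name: "Apple iPhone 13", price: "799"]\n'
    'Record B: [name: "iPhone 13 (Apple)", price: "799.00"]\n'
    "Do Record A and Record B refer to the same entity? Answer yes or no."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```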