Tao Wu
2025
Do Current Video LLMs Have Strong OCR Abilities? A Preliminary Study
Yulin Fei | Yuhui Gao | Xingyuan Xian | Xiaojin Zhang | Tao Wu | Wei Chen
Proceedings of the 31st International Conference on Computational Linguistics
With the rise of multi-modal large language models, accurately extracting and understanding textual information from video content, referred to as video-based optical character recognition (Video OCR), has become a crucial capability. This paper introduces a novel benchmark designed to evaluate the video OCR performance of multi-modal models. Comprising 1,028 videos and 2,961 question-answer pairs, the benchmark poses several key challenges through six distinct sub-tasks: (1) recognition of text content and its basic visual attributes; (2) semantic and spatial comprehension of OCR objects in videos; and (3) dynamic motion detection and temporal localization. We developed the benchmark using a semi-automated approach that integrates the OCR ability of image LLMs with manual refinement, balancing efficiency, cost, and data quality. Our resource aims to advance research on video LLMs and underscores the need to improve their OCR ability. The benchmark will be released at https://github.com/YuHuiGao/FG-Bench.git.
2022
FormLM: Recommending Creation Ideas for Online Forms by Modelling Semantic and Structural Information
Yijia Shao | Mengyu Zhou | Yifan Zhong | Tao Wu | Hongwei Han | Shi Han | Gideon Huang | Dongmei Zhang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Online forms are widely used to collect data from humans and support a multi-billion-dollar market. Many software products provide online services for creating semi-structured forms in which questions and descriptions are organized by predefined structures. However, designing and creating forms remains tedious and requires expert knowledge. To assist form designers, in this work we present FormLM, which models online forms by enhancing a pre-trained language model with form structural information, and which recommends form creation ideas, including question/option recommendations and block type suggestions. For model training and evaluation, we collect the first public online form dataset, comprising 62K online forms. Experiment results show that FormLM significantly outperforms general-purpose language models on all tasks, with improvements of 4.71 on Question Recommendation and 10.6 on Block Type Suggestion in terms of ROUGE-1 and Macro-F1, respectively.