Derek Greene
2024
Effective Synthetic Data and Test-Time Adaptation for OCR Correction
Shuhao Guan | Cheng Xu | Moule Lin | Derek Greene
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Post-OCR technology is used to correct errors in the text produced by OCR systems. This study introduces a method for constructing post-OCR synthetic data with different noise levels using weak supervision. We define Character Error Rate (CER) thresholds for “effective” and “ineffective” synthetic data, allowing us to create more useful multi-noise level synthetic datasets. Furthermore, we propose Self-Correct-Noise Test-Time Adaptation (SCN-TTA), which combines self-correction and noise generation mechanisms. SCN-TTA allows a model to dynamically adjust to test data without relying on labels, effectively handling proper nouns in long texts and further reducing CER. In our experiments, we evaluate a range of models, including multiple PLMs and LLMs. Results indicate that our method yields models that are effective across diverse text types. Notably, the ByT5 model achieves a CER reduction of 68.67% without relying on manually annotated data.
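For reference, the sketch below is a minimal illustration of how Character Error Rate is typically computed (character-level edit distance over reference length); it does not reproduce the paper's effectiveness thresholds or the SCN-TTA procedure.

```python
# Illustrative only: CER as commonly defined, i.e. character-level
# Levenshtein distance divided by the reference length.

def edit_distance(ref: str, hyp: str) -> int:
    """Character-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """CER = edit distance / number of reference characters."""
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

print(cer("post-OCR correction", "po5t-0CR corection"))  # ~0.16
```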
Advancing Post-OCR Correction: A Comparative Study of Synthetic Data
Shuhao Guan | Derek Greene
Findings of the Association for Computational Linguistics: ACL 2024
This paper explores the application of synthetic data in the post-OCR domain on multiple fronts by conducting experiments to assess the impact of data volume, augmentation, and synthetic data generation methods on model performance. Furthermore, we introduce a novel algorithm that leverages computer vision feature detection algorithms to calculate glyph similarity for constructing post-OCR synthetic data. Through experiments conducted across a variety of languages, including several low-resource ones, we demonstrate that models like ByT5 can significantly reduce Character Error Rates (CER) without the need for manually annotated data, and our proposed synthetic data generation method shows advantages over traditional methods, particularly in low-resource languages.
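As a hypothetical sketch only (not the paper's algorithm), glyph similarity could be scored by rendering two characters and matching keypoint descriptors with OpenCV; highly similar pairs would then seed OCR-style substitution noise. The font path below is an assumption and must point to a font available locally.

```python
# Hypothetical sketch: visual similarity between two glyphs via ORB descriptors.
import cv2
import numpy as np
from PIL import Image, ImageDraw, ImageFont

FONT_PATH = "DejaVuSans.ttf"  # assumed font file, replace as needed

def render_glyph(ch: str, size: int = 64) -> np.ndarray:
    """Render a single character to a grayscale numpy image."""
    font = ImageFont.truetype(FONT_PATH, size)
    img = Image.new("L", (size * 2, size * 2), color=255)
    ImageDraw.Draw(img).text((size // 2, size // 2), ch, fill=0, font=font)
    return np.array(img)

def glyph_similarity(a: str, b: str) -> float:
    """Fraction of close ORB descriptor matches between the two renderings."""
    orb = cv2.ORB_create()
    _, da = orb.detectAndCompute(render_glyph(a), None)
    _, db = orb.detectAndCompute(render_glyph(b), None)
    if da is None or db is None:
        return 0.0
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    good = [m for m in matches if m.distance < 40]
    return len(good) / max(min(len(da), len(db)), 1)

print(glyph_similarity("l", "1"), glyph_similarity("l", "x"))
```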
2022
A Novel Perspective to Look At Attention: Bi-level Attention-based Explainable Topic Modeling for News Classification
Dairui Liu | Derek Greene | Ruihai Dong
Findings of the Association for Computational Linguistics: ACL 2022
Many recent deep learning-based solutions have adopted the attention mechanism in various tasks in the field of NLP. However, the inherent characteristics of deep learning models and the flexibility of the attention mechanism increase the models’ complexity, thus leading to challenges in model explainability. To address this challenge, we propose a novel practical framework by utilizing a two-tier attention architecture to decouple the complexity of explanation and the decision-making process. We apply it in the context of a news article classification task. The experiments on two large-scale news corpora demonstrate that the proposed model can achieve competitive performance with many state-of-the-art alternatives and illustrate its appropriateness from an explainability perspective. We release the source code here.
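The snippet below is a generic illustrative sketch of a two-level attention readout (word-level attention forming sentence vectors, sentence-level attention forming the document vector, with both weight sets exposed for inspection); it is not the authors' exact architecture.

```python
# Generic two-level attention readout in PyTorch (illustrative only).
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Additive attention pooling over a sequence of vectors."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, x):                          # x: (batch, seq, dim)
        w = torch.softmax(self.score(x), dim=1)    # attention weights (batch, seq, 1)
        return (w * x).sum(dim=1), w.squeeze(-1)

class BiLevelClassifier(nn.Module):
    def __init__(self, dim: int, n_classes: int):
        super().__init__()
        self.word_attn = AttentionPool(dim)
        self.sent_attn = AttentionPool(dim)
        self.out = nn.Linear(dim, n_classes)

    def forward(self, docs):                       # docs: (batch, n_sents, n_words, dim)
        b, s, w, d = docs.shape
        sent_vecs, word_w = self.word_attn(docs.view(b * s, w, d))
        doc_vec, sent_w = self.sent_attn(sent_vecs.view(b, s, d))
        return self.out(doc_vec), word_w.view(b, s, w), sent_w

logits, word_weights, sent_weights = BiLevelClassifier(dim=32, n_classes=4)(
    torch.randn(2, 5, 12, 32))
```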
2016
Topic Stability over Noisy Sources
Jing Su | Derek Greene | Oisín Boydell
Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)
Topic modelling techniques such as LDA have recently been applied to speech transcripts and OCR output. These corpora may contain noisy or erroneous texts which may undermine topic stability. Therefore, it is important to know how well a topic modelling algorithm will perform when applied to noisy data. In this paper we show that different types of textual noise can have diverse effects on the stability of topic models. Moreover, topic model stability is not consistent across different levels of the same type of noise. We introduce a dictionary filtering approach to address this challenge, with the result that a topic model with the correct number of topics is always identified across different levels of noise.
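As an illustrative sketch only (not the paper's exact pipeline), dictionary filtering can be read as dropping tokens absent from a reference word list before fitting a topic model, so that OCR or transcription noise terms never enter the model vocabulary. The word list below is an assumption for the example.

```python
# Illustrative dictionary filtering before LDA (assumed word list, toy corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

dictionary = {"topic", "model", "speech", "transcript", "noise", "text"}

def dictionary_filter(doc: str) -> str:
    """Keep only tokens that appear in the reference dictionary."""
    return " ".join(tok for tok in doc.lower().split() if tok in dictionary)

docs = ["Topic model of speech transcr1pt n0ise", "Noise in text for the topic model"]
clean = [dictionary_filter(d) for d in docs]

X = CountVectorizer().fit_transform(clean)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
```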