Saurabh Tiwary


2023

DUBLIN: Visual Document Understanding By Language-Image Network
Kriti Aggarwal | Aditi Khandelwal | Kumar Tanmay | Owais Khan Mohammed | Qiang Liu | Monojit Choudhury | Hardik Chauhan | Subhojit Som | Vishrav Chaudhary | Saurabh Tiwary
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track

In this paper, we present DUBLIN, a pixel-based model for visual document understanding that does not rely on OCR. DUBLIN processes both the images and the text in a document purely from pixels and handles diverse document types and tasks. It is pretrained on a large corpus of document images with novel tasks that enhance its visual and linguistic abilities. We evaluate DUBLIN on various benchmarks and show that it achieves state-of-the-art performance on extractive tasks such as DocVQA, InfoVQA, AI2D, OCR-VQA, RefExp, and CORD, as well as strong performance on abstractive tasks such as VisualMRC and text captioning. Our model demonstrates the potential of OCR-free document processing and opens new avenues for applications and research.
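
To make the OCR-free idea concrete, here is a minimal sketch of a pixel-only document encoder: the rendered page is split into fixed-size patches, each patch is embedded, and a transformer attends over the patch sequence, with no OCR step anywhere. This illustrates the general approach only; the class name, layer sizes, and patch size below are hypothetical, not DUBLIN's actual architecture.

    # Hypothetical pixel-only encoder in the spirit of OCR-free document models.
    import torch
    import torch.nn as nn

    class PixelDocumentEncoder(nn.Module):
        """Toy encoder: patchify the rendered page, embed, self-attend."""
        def __init__(self, patch=16, dim=256, layers=4, nhead=8):
            super().__init__()
            # Each 16x16 RGB patch becomes one token; text is never OCR'd.
            self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
            block = nn.TransformerEncoderLayer(dim, nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(block, num_layers=layers)

        def forward(self, page_pixels):           # (B, 3, H, W) rendered page
            x = self.patch_embed(page_pixels)     # (B, dim, H/16, W/16)
            x = x.flatten(2).transpose(1, 2)      # (B, num_patches, dim)
            return self.encoder(x)                # contextual patch features

    page = torch.rand(1, 3, 224, 224)             # stand-in for a rendered document page
    features = PixelDocumentEncoder()(page)       # task heads (e.g., VQA) would read these

Extractive tasks such as DocVQA would then attach a head that points back into these patch features, while abstractive tasks would decode free-form text from them.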

2022

Invariant Language Modeling
Maxime Peyrard | Sarvjeet Ghotra | Martin Josifoski | Vidhan Agarwal | Barun Patra | Dean Carignan | Emre Kiciman | Saurabh Tiwary | Robert West
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Modern pretrained language models are critical components of NLP pipelines. Yet, they suffer from spurious correlations, poor out-of-domain generalization, and biases. Inspired by recent progress in causal machine learning, in particular the invariant risk minimization (IRM) paradigm, we propose invariant language modeling, a framework for learning invariant representations that generalize better across multiple environments. In particular, we adapt a game-theoretic implementation of IRM (IRM-games) to language models, where the invariance emerges from a specific training schedule in which all the environments compete to optimize their own environment-specific loss by updating subsets of the model in a round-robin fashion. We focus on controlled experiments to precisely demonstrate the ability of our method to (i) remove structured noise, (ii) ignore specific spurious correlations without affecting global performance, and (iii) achieve better out-of-domain generalization. These benefits come with a negligible computational overhead compared to standard training, do not require changing the local loss, and can be applied to any language model. We believe this framework holds promise for mitigating spurious correlations and biases in language models.
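
To make the training schedule concrete, the sketch below implements the round-robin game on a toy next-token model: a shared body feeds one output head per environment, the prediction averages the heads, and at each step a single environment updates only its own head against its own loss. The model, sizes, and random data are illustrative stand-ins, not the paper's setup.

    # Minimal sketch of an IRM-games-style round-robin schedule (illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab, dim, n_env = 100, 32, 3
    body = nn.Sequential(nn.Embedding(vocab, dim), nn.Linear(dim, dim), nn.Tanh())
    heads = nn.ModuleList(nn.Linear(dim, vocab) for _ in range(n_env))  # one head per environment
    opts = [torch.optim.SGD(h.parameters(), lr=0.1) for h in heads]

    def ensemble_logits(tokens):
        h = body(tokens)                                         # shared representation
        return torch.stack([head(h) for head in heads]).mean(0)  # prediction averages all heads

    for step in range(300):
        env = step % n_env                          # environments take turns (round-robin)
        tokens = torch.randint(vocab, (8, 16))      # stand-in for a batch of env-specific text
        targets = torch.randint(vocab, (8, 16))     # stand-in for its next-token targets
        loss = F.cross_entropy(ensemble_logits(tokens).transpose(1, 2), targets)
        body.zero_grad()
        for h in heads:
            h.zero_grad()                           # clear stale gradients everywhere
        loss.backward()
        opts[env].step()                            # only this environment's head is updated

Because every head enters the averaged prediction but each head is optimized only on its own environment's data, no single head can exploit a correlation that holds in just one environment, which is the intuition behind the emergent invariance.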

2019

Towards Language Agnostic Universal Representations
Armen Aghajanyan | Xia Song | Saurabh Tiwary
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

When a bilingual student learns to solve word problems in math, we expect the student to be able to solve these problems in both languages they are fluent in, even if the math lessons were taught in only one language. However, current representations in machine learning are language dependent. In this work, we present a method to decouple the language from the problem by learning language agnostic representations, thereby allowing a model trained in one language to be applied to a different one in a zero-shot fashion. We learn these representations by taking inspiration from linguistics, specifically the Universal Grammar hypothesis, and learn universal latent representations that are language agnostic. We demonstrate the capabilities of these representations by showing that models trained on a single language using language agnostic representations achieve very similar accuracies in other languages.
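
As a rough illustration of representations that retain the task but discard the language, the sketch below uses generic domain-adversarial training with a gradient-reversal layer, a standard way to suppress domain (here, language) information. This is a well-known swapped-in technique shown purely for illustration; the paper's actual objective, grounded in the Universal Grammar hypothesis, differs in its details, and all names and dimensions here are hypothetical.

    # Generic language-adversarial sketch (NOT the paper's exact method).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            return x.view_as(x)                    # identity in the forward pass
        @staticmethod
        def backward(ctx, g):
            return -g                              # flipped gradient: the encoder learns to FOOL the language classifier

    encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())  # toy sentence encoder
    task_head = nn.Linear(128, 5)                            # e.g., 5-way topic classifier
    lang_head = nn.Linear(128, 2)                            # tries to predict the language

    def training_loss(x, y_task, y_lang):
        z = encoder(x)                                       # shared latent representation
        task_loss = F.cross_entropy(task_head(z), y_task)    # keep task information in z
        lang_loss = F.cross_entropy(lang_head(GradReverse.apply(z)), y_lang)
        return task_loss + lang_loss                         # language identity is pushed out of z

At test time, the same task_head is applied unchanged to encoded sentences from a language that contributed no task labels during training, which is the zero-shot transfer described above.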