Yifei Ming
2023
A Critical Analysis of Document Out-of-Distribution Detection
Jiuxiang Gu | Yifei Ming | Yi Zhou | Jason Kuen | Vlad Morariu | Handong Zhao | Ruiyi Zhang | Nikolaos Barmpalios | Anqi Liu | Yixuan Li | Tong Sun | Ani Nenkova
Findings of the Association for Computational Linguistics: EMNLP 2023
Large-scale pre-training is widely used in recent document understanding tasks. During deployment, one expects models to trigger a conservative fallback policy when encountering out-of-distribution (OOD) samples, which highlights the importance of OOD detection. However, most existing OOD detection methods focus on single-modal inputs such as images or text. Although documents are multi-modal in nature, whether and how the multi-modal information in documents can be exploited for OOD detection remains underexplored. In this work, we first provide a systematic and in-depth analysis of OOD detection for document understanding models. We study the effects of model modality, pre-training, and fine-tuning across various types of OOD inputs. In particular, we find that spatial information is critical for document OOD detection. To better exploit spatial information, we propose a spatial-aware adapter, which serves as a parameter-efficient add-on module for adapting transformer-based language models to the document domain. Extensive experiments show that adding the spatial-aware adapter significantly improves OOD detection performance compared to directly using the language model, and achieves superior performance over competitive baselines.
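The abstract does not specify the adapter's internals, so the sketch below is only a rough PyTorch illustration of the general idea: a hypothetical bottleneck adapter that fuses token representations with bounding-box (spatial) embeddings, paired with a standard energy-based OOD score. All names, dimensions, and design choices here are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a spatial-aware bottleneck adapter plus an
# energy-based OOD score. Names and dimensions are illustrative only;
# the paper's actual architecture may differ.
import torch
import torch.nn as nn

class SpatialAwareAdapter(nn.Module):
    """Bottleneck adapter that injects bounding-box (layout) features
    into a transformer's token representations."""
    def __init__(self, hidden_dim=768, bottleneck_dim=64):
        super().__init__()
        self.box_proj = nn.Linear(4, hidden_dim)  # (x0, y0, x1, y1) per token
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, token_states, boxes):
        # Fuse spatial information, then apply the usual down-up
        # projection with a residual connection.
        h = token_states + self.box_proj(boxes)
        return token_states + self.up(self.act(self.down(h)))

def energy_ood_score(logits, temperature=1.0):
    """Negative free energy; lower values suggest OOD inputs."""
    return temperature * torch.logsumexp(logits / temperature, dim=-1)

# Toy usage: batch of 2 sequences, 16 tokens, normalized box coordinates.
adapter = SpatialAwareAdapter()
states = torch.randn(2, 16, 768)
boxes = torch.rand(2, 16, 4)
adapted = adapter(states, boxes)  # (2, 16, 768)
logits = torch.randn(2, 10)       # from a downstream classifier head
print(energy_ood_score(logits))   # threshold this to flag OOD documents
```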
2022
Utilizing Language-Image Pretraining for Efficient and Robust Bilingual Word Alignment
Tuan Dinh | Jy-yong Sohn | Shashank Rajput | Timothy Ossowski | Yifei Ming | Junjie Hu | Dimitris Papailiopoulos | Kangwook Lee
Findings of the Association for Computational Linguistics: EMNLP 2022
Word translation without parallel corpora has become feasible, rivaling the performance of supervised methods. Recent findings have shown that the accuracy and robustness of unsupervised word translation (UWT) can be improved by utilizing visual observations, which are universal representations across languages. Our work investigates the potential of using not only visual observations but also pretrained language-image models to enable more efficient and robust UWT. We develop a novel UWT method dubbed Word Alignment using Language-Image Pretraining (WALIP), which leverages visual observations via the shared image-text embedding space of CLIP (Radford et al., 2021). WALIP has a two-step procedure. First, we retrieve word pairs with high similarity confidence, computed using our proposed image-based fingerprints, which define the initial pivot for the alignment. Second, we apply our robust Procrustes algorithm to estimate the linear mapping between the two embedding spaces, iteratively correcting and refining the estimated alignment. Our extensive experiments show that WALIP improves upon the state-of-the-art performance of bilingual word alignment for a few language pairs across different word embeddings, and displays strong robustness to dissimilarity between the language pairs or the training corpora of the two word embeddings.
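The Procrustes step admits a compact illustration. Below is a minimal NumPy sketch of generic iterative Procrustes alignment between two embedding spaces; WALIP's robust variant and its CLIP-based fingerprint seeding are not reproduced here, and the function names and refinement loop are hypothetical.

```python
# Minimal sketch of iterative Procrustes alignment between two word
# embedding spaces. The classic closed-form step is standard; the
# refinement loop is only a generic illustration, not WALIP itself.
import numpy as np

def procrustes(X, Y):
    """Orthogonal map W minimizing ||X @ W - Y||_F (SVD solution)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def refine_alignment(src_emb, tgt_emb, seed_pairs, n_iters=5):
    """Alternate between fitting W on the current pairs and re-inducing
    pairs by nearest-neighbor search in the mapped space."""
    pairs = np.asarray(seed_pairs)
    W = None
    for _ in range(n_iters):
        W = procrustes(src_emb[pairs[:, 0]], tgt_emb[pairs[:, 1]])
        mapped = src_emb @ W
        # Cosine nearest neighbors induce a fresh dictionary.
        sims = (
            mapped / np.linalg.norm(mapped, axis=1, keepdims=True)
        ) @ (
            tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
        ).T
        nn = sims.argmax(axis=1)
        pairs = np.stack([np.arange(len(src_emb)), nn], axis=1)
    return W

# Toy usage with random embeddings and a handful of seed pairs
# (in WALIP the seeds come from CLIP image-text fingerprints instead).
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 50))
tgt = rng.normal(size=(100, 50))
W = refine_alignment(src, tgt, seed_pairs=[(i, i) for i in range(10)])
```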