Tomer Levinboim


2021

Quality Estimation for Image Captions Based on Large-scale Human Evaluations
Tomer Levinboim | Ashish V. Thapliyal | Piyush Sharma | Radu Soricut
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Automatic image captioning has improved significantly over the last few years, but the problem is far from being solved, with state-of-the-art models still often producing low-quality captions when used in the wild. In this paper, we focus on the task of Quality Estimation (QE) for image captions, which attempts to model the caption quality from a human perspective and *without* access to ground-truth references, so that it can be applied at prediction time to detect low-quality captions produced on *previously unseen images*. For this task, we develop a human evaluation process that collects coarse-grained caption annotations from crowdsourced users, and use it to build a large-scale dataset spanning more than 600k caption quality ratings. We then carefully validate the quality of the collected ratings and establish baseline models for this new QE task. Finally, we further collect fine-grained caption quality annotations from trained raters, and use them to demonstrate that QE models trained over the coarse ratings can effectively detect and filter out low-quality image captions, thereby improving the user experience of captioning systems.
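To make the reference-free QE setup concrete, here is a minimal sketch of a baseline scorer over an (image, caption) pair. The architecture, encoder dimensions, and threshold below are illustrative assumptions for exposition, not the paper's exact model; the only assumption taken from the abstract is that the model is trained on (binarized) human quality ratings and applied at prediction time to filter low-quality captions.

```python
import torch
import torch.nn as nn

class CaptionQEModel(nn.Module):
    """Hypothetical reference-free QE baseline: score an (image, caption) pair
    with a small MLP over precomputed image and caption encodings."""

    def __init__(self, image_dim=2048, text_dim=768, hidden=512):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(image_dim + text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, image_feats, caption_feats):
        # Concatenate the two modalities and predict a quality score in [0, 1];
        # the training target would be the (binarized) human rating.
        joint = torch.cat([image_feats, caption_feats], dim=-1)
        return torch.sigmoid(self.scorer(joint)).squeeze(-1)


def filter_captions(captions, qe_scores, threshold=0.5):
    """Keep only captions whose estimated quality clears an illustrative threshold."""
    return [c for c, s in zip(captions, qe_scores) if s >= threshold]
```

At prediction time, captions scoring below the chosen threshold would simply not be shown to users, which is the filtering behavior the abstract describes.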

2020

Improving Text Generation Evaluation with Batch Centering and Tempered Word Mover Distance
Xi Chen | Nan Ding | Tomer Levinboim | Radu Soricut
Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems

Recent advances in automatic evaluation metrics for text have shown that deep contextualized word representations, such as those generated by BERT encoders, are helpful for designing metrics that correlate well with human judgements. At the same time, it has been argued that contextualized word representations exhibit sub-optimal statistical properties for encoding the true similarity between words or sentences. In this paper, we present two techniques for improving encoding representations for similarity metrics: a batch-mean centering strategy that improves statistical properties, and a computationally efficient tempered Word Mover Distance for better fusion of the information in the contextualized word representations. We conduct numerical experiments that demonstrate the robustness of our techniques, reporting results over various BERT-backbone learned metrics and achieving state-of-the-art correlation with human ratings on several benchmarks.
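The two ideas are easy to sketch. Below, batch-mean centering simply subtracts the mean contextualized vector computed over a batch, and an entropy-regularized (Sinkhorn-style) optimal transport with a temperature parameter stands in for the tempered Word Mover Distance; the exact tempering used in the paper may differ, and the cosine cost and uniform word weights are assumptions made here for illustration.

```python
import numpy as np

def batch_center(embeddings):
    """Subtract the mean vector computed over all token embeddings in a batch.

    embeddings: (num_tokens, dim) contextualized word representations.
    """
    return embeddings - embeddings.mean(axis=0, keepdims=True)

def tempered_wmd(x, y, temperature=0.1, n_iters=50):
    """Entropy-regularized transport distance between two sentences (a stand-in
    sketch for a tempered Word Mover Distance).

    x: (n, dim) and y: (m, dim) word vectors; uniform word weights assumed.
    """
    x_n = x / np.linalg.norm(x, axis=1, keepdims=True)
    y_n = y / np.linalg.norm(y, axis=1, keepdims=True)
    cost = 1.0 - x_n @ y_n.T                    # cosine distance matrix (n, m)
    a = np.full(x.shape[0], 1.0 / x.shape[0])   # uniform source word weights
    b = np.full(y.shape[0], 1.0 / y.shape[0])   # uniform target word weights
    K = np.exp(-cost / temperature)             # Gibbs kernel; temperature controls softness
    u = np.ones_like(a)
    for _ in range(n_iters):                    # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    transport = np.diag(u) @ K @ np.diag(v)     # soft word-alignment matrix
    return float((transport * cost).sum())      # expected transport cost
```

A lower temperature pushes the soft alignment toward the hard, exact Word Mover Distance, while a higher temperature yields a smoother, cheaper-to-compute fusion of the word-level similarities.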

2019

Informative Image Captioning with External Sources of Information
Sanqiang Zhao | Piyush Sharma | Tomer Levinboim | Radu Soricut
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

An image caption should fluently present the essential information in a given image, including informative, fine-grained entity mentions and the manner in which these entities interact. However, current captioning models are usually trained to generate captions that only contain common object names, thus falling short on an important “informativeness” dimension. We present a mechanism for integrating image information together with fine-grained labels (assumed to be generated by some upstream models) into a caption that describes the image in a fluent and informative manner. We introduce a multimodal, multi-encoder model based on the Transformer architecture that ingests both image features and multiple sources of entity labels. We demonstrate that we can learn to control the appearance of these entity labels in the output, resulting in captions that are both fluent and informative.
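As a rough illustration of the multi-encoder idea, the sketch below encodes image features and entity-label tokens with separate Transformer encoders and lets a caption decoder cross-attend to the concatenation of both memories. All layer sizes, the single label encoder (the paper uses multiple label sources), and the omission of positional encodings are simplifying assumptions; this is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiEncoderCaptioner(nn.Module):
    """Sketch: decoder attends to image-feature and entity-label encoder outputs."""

    def __init__(self, d_model=512, vocab_size=32000, n_heads=8, n_layers=2, image_dim=2048):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, d_model)        # project image region features
        self.label_embed = nn.Embedding(vocab_size, d_model)   # entity-label tokens
        self.token_embed = nn.Embedding(vocab_size, d_model)   # caption tokens
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.image_encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.label_encoder = nn.TransformerEncoder(enc_layer, n_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, image_feats, label_ids, caption_ids):
        # Positional encodings omitted for brevity.
        img_mem = self.image_encoder(self.image_proj(image_feats))   # (B, regions, d)
        lbl_mem = self.label_encoder(self.label_embed(label_ids))    # (B, labels, d)
        memory = torch.cat([img_mem, lbl_mem], dim=1)                 # fuse both sources
        tgt = self.token_embed(caption_ids)
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(caption_ids.size(1))
        hidden = self.decoder(tgt, memory, tgt_mask=tgt_mask)
        return self.out(hidden)                                       # next-token logits
```

Because the decoder sees the entity-label memory alongside the image memory, it can learn when to copy fine-grained labels into the caption, which is the controllability the abstract refers to.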

2015

Model Invertibility Regularization: Sequence Alignment With or Without Parallel Data
Tomer Levinboim | Ashish Vaswani | David Chiang
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Multi-Task Word Alignment Triangulation for Low-Resource Languages
Tomer Levinboim | David Chiang
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Supervised Phrase Table Triangulation with Neural Word Embeddings for Low-Resource Languages
Tomer Levinboim | David Chiang
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing