Nonparametric Method for Data-driven Image Captioning

We present a nonparametric density estimation technique for image caption generation. Data-driven matching methods have proven effective for a variety of complex problems in Computer Vision. These methods reduce an inference problem for an unknown image to finding an existing labeled image which is semantically similar. However, related approaches to image caption generation (Ordonez et al., 2011; Kuznetsova et al., 2012) are hampered by noisy estimations of visual content and poor alignment between images and human-written captions. Our work addresses this challenge by estimating a word frequency representation of the visual content of a query image. This allows us to cast caption generation as an extractive summarization problem. Our model strongly outperforms two state-of-the-art caption extraction systems according to human judgments of caption relevance.


Introduction
Automatic image captioning is a much-studied topic in both the Natural Language Processing (NLP) and Computer Vision (CV) research communities. The task is to identify the visual content of an input image and to output a relevant natural language caption.
Much prior work treats image captioning as a retrieval problem (see Section 2). These approaches use CV algorithms to retrieve similar images from a large database of captioned images, and then transfer text from the captions of those images to the query image. This is a challenging problem for two main reasons. First, visual similarity measures do not perform reliably and do not capture all of the relevant details which humans might describe. Second, image captions collected from the web often contain contextual or background information which is not visually relevant to the image being described.

[Table 1: Example of a query image from the SBU-Flickr dataset (Ordonez et al., 2011), along with scene-based estimates of visually similar captioned images. Our system models visual content using words that are frequent in these captions (highlighted) and extracts a single output caption.]
In this paper, we propose a system for transfer-based image captioning which is designed to address these challenges. Instead of selecting an output caption according to a single noisy estimate of visual similarity, our system uses a word frequency model to find a smoothed estimate of visual content across multiple captions, as Table 1 illustrates. It then generates a description of the query image by extracting the caption which best represents the mutually shared content.
The contributions of this paper are as follows: 1. Our caption generation system effectively leverages information from the massive amounts of human-written image captions on the internet. In particular, it exhibits strong performance on the SBU-Flickr dataset (Ordonez et al., 2011), a noisy corpus of one million captioned images collected from the web. We achieve a remarkable 34% improvement in human relevance scores over a recent state-of-the-art image captioning system (Kuznetsova et al., 2012), and 48% improvement over a scene-based retrieval system (Patterson et al., 2014) using the same computed image features.
2. Our approach uses simple models which can be easily reproduced by both CV and NLP researchers. We provide resources to enable comparison against future systems.

Image Captioning by Transfer
The IM2TEXT model by Ordonez et al. (2011) presents the first web-scale approach to image caption generation. IM2TEXT retrieves the image which is the closest visual match to the query image, and transfers its description to the query image. The COLLECTIVE model by Kuznetsova et al. (2012) is a related approach which uses trained CV recognition systems to detect a variety of visual entities in the query image. A separate description is retrieved for each visual entity, and these descriptions are then fused into a single output caption. Like IM2TEXT, their approach uses visual similarity as a proxy for textual relevance. Other related work models the text more directly, but is more restrictive about the source and quality of the human-written training data. Farhadi et al. (2010) and Hodosh et al. (2013) learn joint representations for images and captions, but can only be trained on data with very strong alignment between images and descriptions (i.e. captions written by Mechanical Turkers). Another line of related work (Fan et al., 2010; Feng and Lapata, 2010) generates captions by extracting sentences from documents which are related to the query image. These approaches are tailored toward specific domains, such as travel and news, where images tend to appear with corresponding text.

Dataset
In this paper, we use the SBU-Flickr dataset. Ordonez et al. (2011) query Flickr.com using a huge number of words which describe visual entities, in order to build a corpus of one million images with captions which refer to image content. However, further analysis by Hodosh et al. (2013) shows that many captions in SBU-Flickr (∼67%) describe information that cannot be obtained from the image itself, while a substantial fraction (∼23%) contain almost no visually relevant information. Nevertheless, this dataset is the only web-scale collection of captioned images, and it has enabled notable research in both CV and NLP.


Our Approach

Overview
For a query image $I_q$, our task is to generate a relevant description by selecting a single caption from $C$, a large dataset of images with human-written captions. In this section, we first define the feature space for visual similarity, then formulate a density estimation problem with the aim of modeling the words which are used to describe images that are visually similar to $I_q$. We also explore methods for extractive caption generation.

Measuring Visual Similarity
Data-driven matching methods have been shown to be very effective for a variety of challenging problems (Hays and Efros, 2008; Makadia et al., 2008; Tighe and Lazebnik, 2010). Typically these methods compute global (scene-based) descriptors rather than object and entity detections. Scene-based techniques in CV are generally more robust, and can be computed more efficiently on large datasets. The basic IM2TEXT model uses an equally weighted average of GIST (Oliva and Torralba, 2001) and TinyImage (Torralba et al., 2008) features, which coarsely localize low-level features in scenes. The output is a multi-dimensional image space where semantically similar scenes (e.g. streets, beaches, highways) are projected near each other. Patterson and Hays (2012) present "scene attribute" representations which are characterized using low-level perceptual attributes as used by GIST (e.g. openness, ruggedness, naturalness), as well as high-level attributes informed by open-ended crowd-sourced image descriptions (e.g., indoor lighting, running water, places for learning). Follow-up work (Patterson et al., 2014) shows that their attributes provide improved matching for image captioning over the IM2TEXT baseline. We use their publicly available scene attributes (https://github.com/genp/sun_attributes) for our experiments. Training and query images are represented as 102-dimensional real-valued vectors, and similarity between images is measured using Euclidean distance.
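To make the retrieval step concrete, the following is a minimal sketch of k-nearest-neighbor lookup in this scene-attribute space, assuming NumPy and a matrix of 102-dimensional attribute vectors for the training images; the function name and the default value of k are illustrative choices, not part of the published system.

```python
# Minimal sketch (illustrative): k-nearest-neighbor retrieval in the
# 102-dimensional scene-attribute space, using Euclidean distance.
import numpy as np

def nearest_neighbors(query_vec, train_vecs, k=25):
    """Return indices of the k training images closest to the query image.

    query_vec:  (102,) scene-attribute vector of the query image
    train_vecs: (n_img, 102) matrix of training-image attribute vectors
    """
    dists = np.linalg.norm(train_vecs - query_vec, axis=1)  # Euclidean distances
    return np.argsort(dists)[:k]                            # indices of the k closest images
```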

Density Estimation
As shown in Bishop (2006), probability density estimates at a particular point can be obtained by considering points in the training data within some local neighborhood. In our case, we define some region $R$ in the image space which contains $I_q$.
The probability mass of that region is

$$P = \int_R p(I)\, dI \quad (1)$$

and if we assume that $R$ is small enough that $p(I_q)$ is roughly constant within it, we can approximate

$$p(I_q) \approx \frac{k_{img}}{n_{img}\, V_{img}} \quad (2)$$

where $k_{img}$ is the number of training images within $R$, $n_{img}$ is the total number of training images, and $V_{img}$ is the volume of $R$. In this paper, we fix $k_{img}$ to a constant value, so that $V_{img}$ is determined by the training data around the query image. (As an alternative, one could fix the value of $V_{img}$ and determine $k_{img}$ from the number of points in $R$, giving rise to the kernel density approach, a.k.a. Parzen windows. However, we believe the KNN approach is more appropriate here, because the number of samples is nearly 10,000 times greater than the number of dimensions in the image representation.)

At this point, we extend the density estimation technique in order to estimate a smoothed model of descriptive text. Let us begin by considering $p(w \mid I_q)$, the conditional probability of the word $w$ given $I_q$; here, word refers to non-function words, and we assume all function words have been removed from the captions. This can be described using a Bayesian model:

$$p(w \mid I_q) = \frac{p(I_q \mid w)\, p(w)}{p(I_q)} \quad (3)$$

The prior for $w$ is simply its unigram frequency in $C$, where $n^{txt}_w$ and $n^{txt}$ are word token counts:

$$p(w) = \frac{n^{txt}_w}{n^{txt}} \quad (4)$$

Note that $n^{txt}$ is not the same as $n_{img}$, because a single captioned image can have multiple words in its caption. Likewise, the conditional density considers instances of observed words within $R$, although the volume of $R$ is still defined by the image space:

$$p(I_q \mid w) \approx \frac{k^{txt}_w}{n^{txt}_w\, V_{img}} \quad (5)$$

Here $k^{txt}_w$ is the number of times $w$ is used within $R$, while $n^{txt}_w$ is the total number of times $w$ is observed in $C$.

Combining Equations 2, 4, and 5 and canceling out terms gives us the posterior probability:

$$p(w \mid I_q) \approx \frac{k^{txt}_w\, n_{img}}{n^{txt}\, k_{img}} \quad (6)$$

If the number of words in each caption is independent of its image's location in the image space, then $p(w \mid I_q)$ is approximately the observed unigram frequency for the captions inside $R$.
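Using this simplification, a minimal sketch of the smoothed word model might look as follows; whitespace tokenization, lowercasing, and an explicit function-word list are assumptions made for illustration, not necessarily the exact preprocessing used in our experiments.

```python
from collections import Counter

def estimate_word_distribution(neighbor_captions, function_words):
    """Approximate p(w | I_q) as the unigram frequency of non-function words
    across the captions of the k nearest training images (the captions in R)."""
    counts = Counter()
    for caption in neighbor_captions:
        for token in caption.lower().split():   # naive whitespace tokenization (assumption)
            if token not in function_words:     # drop function words, as assumed in the text
                counts[token] += 1
    total = sum(counts.values())
    if total == 0:
        return {}
    return {w: c / total for w, c in counts.items()}
```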

Extractive Caption Generation
We compare two selection methods for extractive caption generation.

SumBasic

SumBasic (Nenkova and Vanderwende, 2005) is a sentence selection algorithm for extractive multi-document summarization which exclusively maximizes the appearance of words which have high frequency in the original documents. Here, we adapt SumBasic to maximize the average value of $p(w \mid I_q)$ in a single extracted caption:

$$c^* = \operatorname*{argmax}_{c \in C} \; \frac{1}{|c^{txt}|} \sum_{w \in c^{txt}} p(w \mid I_q) \quad (7)$$

where $c^{txt}$ denotes the (non-function) words of candidate caption $c$. The candidate captions do not necessarily have to be observed in $R$, but in practice we did not find increasing the number of candidate captions to be more effective than increasing the size of $R$ directly.
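A minimal sketch of this selection criterion, assuming the candidate captions are already tokenized with function words removed and that p_w holds the estimate of p(w | I_q) from above; names and data layout are illustrative.

```python
def sumbasic_select(candidate_captions, p_w):
    """Return the index of the candidate caption with the highest
    average p(w | I_q) over its (non-function) words.

    candidate_captions: list of token lists (function words removed)
    p_w: dict mapping word -> estimated p(w | I_q)
    """
    def avg_score(tokens):
        if not tokens:
            return 0.0
        return sum(p_w.get(w, 0.0) for w in tokens) / len(tokens)

    return max(range(len(candidate_captions)),
               key=lambda i: avg_score(candidate_captions[i]))
```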

KL Divergence
We also consider a KL Divergence selection method, which outperforms the SumBasic selection method for extractive multi-document summarization (Haghighi and Vanderwende, 2009). It also generates the best extractive captions for Feng and Lapata (2010), who caption images by extracting text from a related news article. The KL Divergence method selects the caption whose word distribution is closest to the estimated word distribution for the query image:

$$c^* = \operatorname*{argmin}_{c \in C} \; \sum_{w} p(w \mid I_q) \log \frac{p(w \mid I_q)}{p(w \mid c^{txt})} \quad (8)$$

where $p(w \mid c^{txt})$ is the unigram word distribution of candidate caption $c$.
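A corresponding sketch of the KL-based selection, under the same assumptions as above; the add-epsilon smoothing of the caption distribution is introduced here only to keep the divergence finite for words absent from a candidate caption.

```python
import math

def kl_select(candidate_captions, p_w, eps=1e-6):
    """Return the index of the candidate caption minimizing
    KL(p(. | I_q) || p(. | caption)).

    candidate_captions: list of token lists (function words removed)
    p_w: dict mapping word -> estimated p(w | I_q)
    eps: illustrative smoothing constant for words unseen in a caption
    """
    def divergence(tokens):
        total = len(tokens) + eps * len(p_w)        # add-eps smoothed normalizer
        kl = 0.0
        for w, p in p_w.items():
            q = (tokens.count(w) + eps) / total     # smoothed caption probability of w
            kl += p * math.log(p / q)
        return kl

    return min(range(len(candidate_captions)),
               key=lambda i: divergence(candidate_captions[i]))
```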

Automatic Evaluation
Although BLEU (Papineni et al., 2002) scores are widely used for image caption evaluation, we find them to be poor indicators of the quality of our model. As shown in Figure 1, our system's BLEU scores increase rapidly until about k = 25. Past this point, the density estimate appears to be washed out by oversmoothing, yet BLEU scores continue to improve until k = 500, but only because the generated captions become increasingly short. Furthermore, although our SumBasic-extracted captions obtain consistently higher BLEU scores, our own observations find the KL Divergence captions to be better at balancing recall and precision. Nevertheless, BLEU scores are the accepted metric for recent work, and our KL Divergence captions with k = 25 still outperform all previously published systems and baselines. We omit full results here due to space, but make our BLEU setup, with captions for all systems and baselines, available for documentation purposes.
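For reference, a minimal sketch of sentence-level BLEU scoring against the single human caption, assuming NLTK is used for scoring; this is an assumption about tooling and not necessarily the exact configuration of the setup we release.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu(reference_caption, generated_caption):
    """Sentence-level BLEU of a generated caption against the single
    human-written reference, with smoothing for short captions."""
    reference = [reference_caption.lower().split()]
    hypothesis = generated_caption.lower().split()
    return sentence_bleu(reference, hypothesis,
                         smoothing_function=SmoothingFunction().method1)
```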

Human Evaluation
We perform our human evaluation of caption relevance using a similar setup to that of Kuznetsova et al. (2012), who have humans rate the image captions on a 1-5 scale (5: perfect, 4: almost perfect, 3: 70-80% good, 2: 50-70% good, 1: totally bad). Evaluation is performed using Amazon Mechanical Turk. Evaluators are shown both the caption and the query image, and are specifically instructed to ignore errors in grammaticality and coherence.
We generate captions using our system with KL Divergence sentence selection and k = 25. We also evaluate the original HUMAN captions for the query image, as well as generated captions from two recently published caption transfer systems. First, we consider the SCENE ATTRIBUTES system (Patterson et al., 2014), which represents both the best scene-based transfer model and a k = 1 nearest-neighbor baseline for our system. We also compare against the COLLECTIVE system (Kuznetsova et al., 2012), which is the best object-based transfer model.
In order to facilitate comparison, we use the same test/train split that is used in the publicly available system output for the COLLECTIVE system. However, we remove some query images which have contamination between the train and test sets (this occurs when a photographer takes multiple shots of the same scene and gives all the images the exact same caption). We also note that their test set is selected based on images where their object detection systems performed well, and may not be indicative of their performance on other query images.

Table 2 shows the results of our human study. Captions generated by our system show a 48% improvement in relevance over the SCENE ATTRIBUTES system captions, and a 34% improvement over the COLLECTIVE system captions. Although our system captions score lower than the human captions on average, there are some instances of our system captions being judged as more relevant than the human-written captions.

[Table residue (example captions from the COLLECTIVE system): "One of the birds seen in company of female and juvenile."; "View of this woman sitting on the sidewalk in Mumbai by the stained glass."; "The boy walking by next to matching color walls in gov t building."; "Found this mother bird feeding her babies in our maple tree on the phone."]

Discussion and Examples
Example captions are shown in Table 3. In many instances, scene-based image descriptors provide enough information to generate a complete description of the image, or at least a sufficiently good one. However, there are some kinds of images for which scene-based features alone are insufficient. For example, the last example describes the small pink flowers in the background, but misses the bear.

Image captioning is a relatively novel task for which the most compelling applications are probably not yet known. Much previous work in image captioning focuses on generating captions that concretely describe detected objects and entities (Yang et al., 2011; Yu and Siskind, 2013). However, human-generated captions and annotations also describe perceptual features, contextual information, and other types of content. Additionally, our system is robust to instances where entity detection systems fail. Still, one could consider combined approaches which incorporate more regional content structure. For example, previous work in nonparametric hierarchical topic modeling (Blei et al., 2010) and scene labeling (Liu et al., 2011) may provide avenues for further improvement of this model. Compression methods for removing visually irrelevant information (Kuznetsova et al., 2013) may also help increase the relevance of extracted captions. We leave these ideas for future work.