
1 Introduction

While objects are the core building blocks of an image, it is often the relationships between objects that determine the holistic interpretation. For example, an image with a person and a bicycle might involve the person riding, pushing, or even falling off of the bicycle (Fig. 1). Understanding this diversity of relationships is central to accurate image retrieval and to a richer semantic understanding of our visual world.

Fig. 1.

Even though all the images contain the same objects (a person and a bicycle), it is the relationship between the objects that determines the holistic interpretation of the image.

Visual relationships are a pair of localized objects connected via a predicate (Fig. 2). We represent relationships as \(\langle \) object \(_1\)-predicate-object \(_2\) \(\rangle \). Visual relationship detection involves detecting and localizing pairs of objects in an image and also classifying the predicate or interaction between each pair (Fig. 2). While it poses similar challenges as object detection [1], one critical difference is that the size of the semantic space of possible relationships is much larger than that of objects. Since relationships are composed of two objects, there is a greater skew of rare relationships, as object co-occurrence is infrequent in images. So, a fundamental challenge in visual relationship detection is learning from very few examples.

Fig. 2.

Visual Relationship Detection: Given an image as input, we detect multiple relationships in the form of \(\langle \) object \(_1\)-relationship-object \(_2\) \(\rangle \). Both the objects are localized in the image as bounding boxes. In this example, we detect the following relationships: \(\langle \) person - on - motorcycle \(\rangle \), \(\langle \) person - wear - helmet \(\rangle \) and \(\langle \) motorcycle - has - wheel \(\rangle \).

Visual Phrases [6] studied visual relationship detection using a small set of 13 common relationships. Their model requires enough training examples for every possible \(\langle \) object \(_1\)-predicate-object \(_2\) \(\rangle \) combination, which is difficult to collect owing to the infrequency of relationships. If we have N objects and K predicates, Visual Phrases [6] would need to train \(\mathcal {O}(N^2K)\) unique detectors separately. We use the insight that while a relationship (e.g. “person jumping over a fire hydrant”) might occur rarely in images, its objects (e.g. person and fire hydrant) and predicate (e.g. jumping over) independently appear more frequently. We propose a visual appearance module that learns the appearance of objects and predicates and fuses them together to jointly predict relationships. We show that our model only needs \(\mathcal {O}(N + K)\) detectors to detect \(\mathcal {O}(N^2K)\) relationships.
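
For a rough sense of scale, taking the \(N = 100\) object categories and \(K = 70\) predicates of the dataset introduced in Sect. 3:

$$\begin{aligned} \mathcal {O}(N^2K): \; 100 \times 100 \times 70 = 700{,}000 \text { joint detectors} \qquad \text {vs.} \qquad \mathcal {O}(N + K): \; 100 + 70 = 170 \text { individual detectors.} \end{aligned}$$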

Another key observation is that relationships are semantically related to each other. For example, a “person riding a horse” and a “person riding an elephant” are semantically similar because both elephant and horse are animals. Even if we haven’t seen many examples of “person riding an elephant”, we might be able to infer it from a “person riding a horse”. Word vector embeddings [7] naturally lend themselves in linking such relationships because they capture semantic similarity in language (e.g. elephant and horse are cast close together in a word vector space). Therefore, we also propose a language module that uses pre-trained word vectors [7] to cast relationships into a vector space where similar relationships are optimized to be close to each other. Using this embedding space, we can finetune the prediction scores of our relationships and even enable zero shot relationship detection.

In this paper, we propose a model that can learn to detect visual relationships by (1) learning visual appearance models for its objects and predicates and (2) using the relationship embedding space learnt from language. We train our model by optimizing a bi-convex function. To benchmark the task of visual relationship detection, we introduce a new dataset that contains 5000 images with 37,993 relationships. Existing datasets that contain relationships were designed for improving object detection [6] or image retrieval [8] and hence don’t contain a sufficient variety of relationships or predicate diversity per object category. Our model outperforms all previous models in visual relationship detection. We further study how our model can be used to perform zero shot visual relationship detection. Finally, we demonstrate that understanding relationships can improve image-based retrieval.

2 Related Work

Visual relationship prediction involves detecting the objects that occur in an image as well as understanding the interactions between them. There has been a series of work related to improving object detection by leveraging object co-occurrence statistics [9–14]. Structured learning approaches have improved scene classification along with object detection using hierarchical contextual data from co-occurring objects [15–18]. Unlike these methods, we study the context or relationships in which these objects co-occur.

Some previous work has attempted to learn spatial relationships between objects [13, 19] to improve segmentation [19]. They attempted to learn four spatial relationships: “above”, “below”, “inside”, and “around” [13]. While we believe that learning spatial relationships is important, we also study non-spatial relationships such as pull (actions), taller than (comparative), etc.

There have been numerous efforts in human-object interaction [20–22] and action recognition [23] to learn discriminative models that distinguish between relationships where object \(_1\) is a human (e.g. “playing violin” [24]). Visual relationship prediction is more general as object \(_1\) is not constrained to be a human and the predicate doesn’t have to be a verb.

Visual relationships are not a new concept. Some papers explicitly collected relationships in images [25–29] and videos [27, 30, 31] and helped models map these relationships from images to language. Relationships have also improved object localization [6, 32–34]. A meaning space of relationships has aided the cognitive task of mapping images to captions [35–38]. Finally, they have been used to generate indoor images from sentences [39] and to improve image search [8, 40]. In this paper, we formalize visual relationship prediction as a task in its own right and demonstrate further improvements in image retrieval.

The most recent attempt at relationship prediction has been in the form of visual phrases. Learning appearance models for visual phrases has been shown to improve individual object detection, i.e. detecting “a person riding a horse” improves the detection and localization of “person” and “horse” [6, 41]. Unlike our model, all previous work has attempted to detect only a handful of visual relationships and does not scale because most relationships are infrequent. We propose a model that manages to scale and detect millions of types of relationships. Additionally, our model is able to detect unseen relationships.

3 Visual Relationship Dataset

Visual relationships put objects in context; they capture the different interactions between pairs of objects. These interactions (shown in Fig. 3) might be verbs (e.g. wear), spatial (e.g. on top of), prepositions (e.g. with), comparative (e.g. taller than), actions (e.g. kick) or a preposition phrase (e.g. drive on). A dataset for visual relationship prediction is fundamentally different from a dataset for object detection. A relationship dataset should contain more than just objects localized in images; it should capture the rich variety of interactions between pairs of objects (predicates per object category). For example, a person can be associated with predicates such as ride, wear, kick etc. Additionally, the dataset should contain a large number of possible relationship types.

Fig. 3.

(left) A log scale distribution of the number of instances per relationship in our dataset. Only a few relationships occur frequently and there is a long tail of infrequent relationships. (right) Relationships in our dataset can be divided into many categories, 5 of which are shown here: verb, spatial, preposition, comparative and action.

Existing datasets that contain relationships were designed to improve object detection [6] or image retrieval [8]. The Visual Phrases [6] dataset focuses on 17 common relationship types. Our goal, however, is to understand the rich variety of infrequent relationships. On the other hand, even though the Scene Graph dataset [8] has 23,190 relationship types, it only has 2.3 predicates per object category. Detecting relationships on the Scene Graph dataset [8] essentially boils down to object detection. Therefore, we designed a dataset specifically for benchmarking visual relationship prediction.

Our dataset (Table 1) contains 5000 images with 100 object categories and 70 predicates. In total, the dataset contains 37,993 relationships with 6,672 relationship types and 24.25 predicates per object category. Some example relationships are shown in Fig. 3. The distribution of relationships in our dataset highlights the long tail of infrequent relationships (Fig. 3(left)). We use 4000 images in our training set and test on the remaining 1000 images. 1,877 relationships occur in the test set but never occur in the training set.

Table 1. Comparison between our visual relationship benchmarking dataset with existing datasets that contain relationships. Relationships and Objects are abbreviated to Rel. and Obj. because of space constraints.

4 Visual Relationship Prediction Model

The goal of our model is to detect visual relationships from an image. During training (Sect. 4.1), the input to our model is a fully supervised set of images with relationship annotations where the objects are localized as bounding boxes and labelled as \(\langle \) object \(_1\)-predicate-object \(_2\) \(\rangle \). At test time (Sect. 4.2), our input is an image with no annotations. We predict multiple relationships and localize the objects in the image. Figure 4 illustrates a high level overview of our detection pipeline.

4.1 Training Approach

In this section, we describe how we train our visual appearance and language modules. Both modules are combined in our objective function.

Fig. 4.

An overview of our visual relationship detection pipeline. Given an image as input, RCNN [43] generates a set of object proposals. Each pair of object proposals is then scored using a (1) visual appearance module and a (2) language module. These scores are then thresholded to output a set of relationship labels (e.g. \(\langle \) person - riding - horse \(\rangle \)). Both objects in a relationship (e.g. person and horse) are localized as bounding boxes. The parameters of these two modules (W and \(\Theta \)) are learnt iteratively as described in Sect. 4.1.

Visual Appearance Module. While Visual Phrases [6] learned a separate detector for every single relationship, we model the appearance of visual relationships V() by learning the individual appearances of the comprising objects and predicate. While relationships are infrequent in real world images, the objects and predicates can be learnt as they independently occur more frequently. Furthermore, we demonstrate that learning individual object and predicate detectors outperforms learning joint detectors for entire relationships (Table 2).

First, we train a convolutional neural network (CNN) (VGG net [44]) to classify each of our \(N=100\) objects. Similarly, we train a second CNN (VGG net [44]) to classify each of our \(K=70\) predicates using the union of the bounding boxes of the two participating objects in that relationship. Now, for each ground truth relationship \(R_{\langle i,k,j \rangle }\) where i and j are the object classes (with bounding boxes \(O_1\) and \(O_2\)) and k is the predicate class, we model V (Fig. 4) as:

$$\begin{aligned} V( R_{\langle i,k,j \rangle }, \Theta | \langle O_1, O_2 \rangle ) = P_i(O_1) ( \mathbf{z}_{k}^{T} \text {CNN}(O_1,O_2) + s_k) P_j(O_2) \end{aligned}$$
(1)

where \(\Theta \) is the parameter set of \(\{ \mathbf{z}_{k}, s_k\}\). \(\mathbf{z}_{k}\) and \(s_k\) are the parameters learnt to convert our CNN features to relationship likelihoods. \(k = 1,\ldots , K\) indexes the K predicates in our dataset. \(P_i(O_1)\) and \(P_j(O_2)\) are the CNN likelihoods of categorizing box \(O_1\) as object category i and box \(O_2\) as category j. CNN\((O_1,O_2)\) denotes the predicate CNN features extracted from the union of the \(O_1\) and \(O_2\) boxes.
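
As an illustration only (not the authors’ released code), the sketch below evaluates Eq. 1 for one pair of boxes, assuming the object CNN likelihoods and the predicate CNN feature of the union box have already been computed; all array shapes, values and variable names are stand-ins.

```python
import numpy as np

# Hypothetical sizes: N object classes, K predicates, D-dim predicate CNN feature.
N, K, D = 100, 70, 4096
rng = np.random.default_rng(0)

P_obj1 = rng.dirichlet(np.ones(N))      # P_i(O1): class likelihoods for box O1
P_obj2 = rng.dirichlet(np.ones(N))      # P_j(O2): class likelihoods for box O2
cnn_union = rng.standard_normal(D)      # CNN(O1, O2): feature of the union box
Z = rng.standard_normal((K, D)) * 0.01  # z_k: one row per predicate
s = np.zeros(K)                         # s_k: per-predicate bias

def V(i, k, j):
    """Visual appearance score of relationship <i, k, j> for this box pair (Eq. 1)."""
    return P_obj1[i] * (Z[k] @ cnn_union + s[k]) * P_obj2[j]

print(V(i=3, k=11, j=42))
```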

Language Module. One of our key observations is that relationships are semantically related to one another. For example, \(\langle \) person - ride - horse \(\rangle \) is semantically similar to \(\langle \) person - ride - elephant \(\rangle \). Even if we have not seen any examples of \(\langle \) person - ride - elephant \(\rangle \), we should be able to infer it from similar relationships that occur more frequently (e.g. \(\langle \) person - ride - horse \(\rangle \)). Our language module projects relationships into an embedding space where similar relationships are optimized to be close together. We first describe the function that projects a relationship to the vector space (Eq. 2) and then explain how we train this function by enforcing similar relationships to be close together in a vector space (Eq. 4) and by learning a likelihood prior on relationships (Eq. 5).

Projection Function. First, we use pre-trained word vectors (word2vec) [7] to cast the two objects in a relationship into a word embedding space [7]. Next, we concatenate these two vectors together and transform them into the relationship vector space using a projection parameterized by \(\mathbf{W}\), which we learn. This projection represents how two objects interact with each other. We denote by \(word2vec ()\) the function that converts a word to its 300 dim. vector. The relationship projection function (shown in Fig. 4) is defined as:

$$\begin{aligned} f(\mathcal {R}_{\langle i,k,j \rangle }, \mathbf{W}) = \mathbf{w}_k^{T}[word2vec (t_i), word2vec (t_j)] + b_k \end{aligned}$$
(2)

where \(t_j\) is the word (in text) of the \(j^{th}\) object category. \(\mathbf{w}_k\) is a 600 dim. vector and \(b_k\) is a bias term. \(\mathbf{W}\) is the set of \(\{ \{ \mathbf{w}_1, b_1 \}, \ldots , \{ \mathbf{w}_K, b_K \}\}\), where each row corresponds to one of our K predicates.
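
A minimal sketch of Eq. 2, using stand-in 300 dim. vectors in place of real pre-trained word2vec embeddings [7] so that it runs on its own; the vocabulary, parameter values and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["person", "horse", "elephant", "ride"]
word2vec = {w: rng.standard_normal(300) for w in VOCAB}  # stand-in embeddings

K = 70
W = rng.standard_normal((K, 600)) * 0.01  # w_k: one 600-dim row per predicate
b = np.zeros(K)                           # b_k: per-predicate bias

def f(obj1, k, obj2):
    """Project relationship <obj1, predicate k, obj2> to a scalar score (Eq. 2)."""
    x = np.concatenate([word2vec[obj1], word2vec[obj2]])  # 600-dim input
    return W[k] @ x + b[k]

print(f("person", 0, "horse"))
```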

Training Projection Function. We want to optimize the projection function f() such that it projects similar relationships closer to one another. For example, we want \(\langle \) man - riding - horse \(\rangle \) to be projected close to \(\langle \) man - riding - cow \(\rangle \) but far from \(\langle \) car - has - wheel \(\rangle \). We formulate this by using a heuristic where the distance between two projected relationships is proportional to the word2vec distance between their component objects and predicates:

$$\begin{aligned} \frac{[f(\mathcal {R}, \mathbf{W}) - f(\mathcal {R}^\prime , \mathbf{W}) ]^2}{d( \mathcal {R}, \mathcal {R}^\prime )} = constant, ~~ \forall \mathcal {R}, \mathcal {R}^\prime \end{aligned}$$
(3)

where \(d(\mathcal {R}, \mathcal {R}^\prime )\) is the sum of the cosine distances (in word2vec space [7]) between the corresponding objects and the predicates of the two relationships \(\mathcal {R}\) and \(\mathcal {R}^\prime \). Now, to satisfy Eq. 3, we randomly sample pairs of relationships (\(\langle \mathcal {R},\mathcal {R}^\prime \rangle \)) and minimize their variance:

$$\begin{aligned} K(\mathbf{W}) = var (\{\frac{[f(\mathcal {R}, \mathbf{W}) - f(\mathcal {R}^\prime , \mathbf{W}) ]^2}{d( \mathcal {R}, \mathcal {R}^\prime )} ~~ \forall \mathcal {R},\mathcal {R}^\prime \}) \end{aligned}$$
(4)

where var() denotes the variance over the sampled pairs. We sample 500K pairs of relationships.
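
Continuing in the same illustrative vein (again with stand-in word vectors rather than real word2vec embeddings, and a hypothetical mapping from predicate indices to words), the sketch below computes the Eq. 3 ratio for a few sampled pairs and takes its variance as in Eq. 4; the paper itself samples 500K pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["person", "horse", "elephant", "ride"]
word2vec = {w: rng.standard_normal(300) for w in VOCAB}  # stand-in embeddings
W = rng.standard_normal((70, 600)) * 0.01                # w_k rows, as in Eq. 2
b = np.zeros(70)
PRED_WORDS = {0: "ride"}                                 # predicate index -> word

def f(o1, k, o2):                                        # projection f() from Eq. 2
    return W[k] @ np.concatenate([word2vec[o1], word2vec[o2]]) + b[k]

def cos_dist(u, v):
    return 1.0 - u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def d(R, Rp):
    """Sum of word2vec cosine distances between objects and predicates (Eq. 3)."""
    (o1, k, o2), (o1p, kp, o2p) = R, Rp
    return (cos_dist(word2vec[o1], word2vec[o1p])
            + cos_dist(word2vec[PRED_WORDS[k]], word2vec[PRED_WORDS[kp]])
            + cos_dist(word2vec[o2], word2vec[o2p]))

def K_of_W(sampled_pairs):
    """Variance of the Eq. 3 ratio over sampled relationship pairs (Eq. 4)."""
    ratios = [(f(*R) - f(*Rp)) ** 2 / d(R, Rp) for R, Rp in sampled_pairs]
    return float(np.var(ratios))

pairs = [(("person", 0, "horse"), ("person", 0, "elephant")),
         (("person", 0, "horse"), ("elephant", 0, "person"))]
print(K_of_W(pairs))
```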

Likelihood of a Relationship. The output of our projection function should ideally indicate the likelihood of a visual relationship. For example, our model should not assign a high likelihood score to a relationship like \(\langle \) dog - drive - car \(\rangle \), which is unlikely to occur. We model this by enforcing that if \(\mathcal {R}\) occurs more frequently than \(\mathcal {R}^\prime \) in our training data, then it should have a higher likelihood of occurring again. We formulate this as a rank loss function:

$$\begin{aligned} L(\mathbf{W}) = \sum _{\{\mathcal {R},\mathcal {R}^\prime \}} \max \{ f( \mathcal {R}^\prime , \mathbf{W}) - f(\mathcal {R}, \mathbf{W}) + 1, 0 \} \end{aligned}$$
(5)

While we only enforce this likelihood prior for the relationships that occur in our training data, the projection function f() generalizes it for all \(\langle \) object \(_1\)-predicate-object \(_2\) \(\rangle \) combinations, even if they are not present in our training data. The \(\max \) operator encourages a correct ranking with a margin, i.e. \(f(\mathcal {R}, \mathbf{W}) - f( \mathcal {R}^\prime , \mathbf{W}) \ge 1\). Minimizing this objective enforces that a relationship with a lower likelihood of occurring has a lower f() score.
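
A minimal sketch of the ranking term in Eq. 5, assuming the f() values for the sampled (R, R′) pairs (with R the more frequent relationship of each pair) have already been computed; function and variable names are hypothetical.

```python
def rank_loss(f_more_frequent, f_less_frequent):
    """Hinge-style rank loss of Eq. 5 over sampled (R, R') pairs.

    f_more_frequent[i] is f(R) for the more frequent relationship of pair i,
    f_less_frequent[i] is f(R') for the less frequent one.
    """
    return sum(max(f_rp - f_r + 1.0, 0.0)
               for f_r, f_rp in zip(f_more_frequent, f_less_frequent))

# Pair 1 is ranked correctly with margin (loss 0); pair 2 is not (loss 1.4).
print(rank_loss([2.0, 0.5], [0.3, 0.9]))  # 0.0 + 1.4 = 1.4
```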

Objective Function. So far we have presented our visual appearance module (V()) and the language module (f()). We combine them to maximize the rank of the ground truth relationship \(\mathcal {R}\) with bounding boxes \(O_1\) and \(O_2\) using the following rank loss function:

$$\begin{aligned} C(\Theta , \mathbf{W}) = \sum _{\langle O_1, O_2 \rangle , \mathcal {R}} \max \{ 1 - V( \mathcal {R}, \Theta | \langle O_1, O_2 \rangle ) f( \mathcal {R}, \mathbf{W}) \nonumber \\ + \max _{ \langle O_1^\prime , O_2^\prime \rangle \ne \langle O_1 , O_2 \rangle , \mathcal {R}^\prime \ne \mathcal {R}} V( \mathcal {R}^\prime , \Theta | \langle O_1^\prime , O_2^\prime \rangle ) f(\mathcal {R}^\prime , \mathbf{W}), 0 \} \end{aligned}$$
(6)

We use a ranking loss function to make it more likely for our model to choose the correct relationship. Given the large number of possible relationships, we find that a classification loss performs worse. Therefore, our final objective function combines Eq. 6 with Eqs. 4 and 5 as:

$$\begin{aligned} \min _{\Theta , \mathbf{W}} \{ C(\Theta , \mathbf{W}) + \lambda _1 L(\mathbf{W}) + \lambda _2 K(\mathbf{W}) \} \end{aligned}$$
(7)

where \(\lambda _1 = 0.05\) and \(\lambda _2 = 0.002\) are hyper-parameters that were obtained through grid search to maximize performance on the validation set. Note that both Eqs. 6 and 5 are convex functions. Equation 4 is a biquadratic function with respect to \(\mathbf{W}\). So our objective function Eq. 7 has a quadratic closed form. We perform stochastic gradient descent iteratively on Eqs. 6 and 5. It converges in \(20 \sim 25\) iterations.
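
Purely as a schematic view of how Eq. 7 weights its terms (C, L_rank and K_var stand for Eqs. 6, 5 and 4 and are passed in as callables; the toy lambdas below only show the call shape):

```python
LAMBDA1, LAMBDA2 = 0.05, 0.002  # grid-searched values reported above

def total_objective(theta, W, C, L_rank, K_var):
    """Eq. 7: rank loss plus the weighted language-prior terms."""
    return C(theta, W) + LAMBDA1 * L_rank(W) + LAMBDA2 * K_var(W)

print(total_objective(theta=None, W=None,
                      C=lambda t, w: 1.0,
                      L_rank=lambda w: 2.0,
                      K_var=lambda w: 3.0))  # 1.0 + 0.05*2.0 + 0.002*3.0 = 1.106
```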

4.2 Testing

At test time, we use RCNN [43] to produce a set of candidate object proposals for every test image. Next, we use the parameters learnt from the visual appearance model (\(\Theta \)) and the language module (\(\mathbf{W}\)) to predict visual relationships (\(\mathcal {R}_{\langle i,k,j \rangle }^*\)) for every pair of RCNN object proposals \(\langle O_1, O_2 \rangle \) using:

$$\begin{aligned} \mathcal {R}^{*} = \arg \max _{\mathcal {R}} V(\mathcal {R}, \Theta | \langle O_1, O_2 \rangle )f(\mathcal {R}, \mathbf{W}) \end{aligned}$$
(8)
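
To make the test-time step concrete, the sketch below applies Eq. 8 to a single pair of proposals, assuming precomputed score tables V_scores and f_scores (hypothetical names, random values here) over all \(\langle i, k, j \rangle \) combinations for that pair.

```python
import numpy as np

N, K = 100, 70
rng = np.random.default_rng(0)
V_scores = rng.random((N, K, N))  # V(R_<i,k,j>, Theta | <O1, O2>) for this pair
f_scores = rng.random((N, K, N))  # f(R_<i,k,j>, W); depends only on <i, k, j>

combined = V_scores * f_scores    # objective of Eq. 8
i, k, j = np.unravel_index(np.argmax(combined), combined.shape)
print(f"predicted relationship for this proposal pair: <{i}, {k}, {j}>")
```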

5 Experiments

We evaluate our model by detecting visual relationships from images. We show that our proposed method outperforms previous state-of-the-art methods on our dataset (Sect. 5.1) as well as on previous datasets (Sect. 5.3). We also measure how our model performs in zero-shot learning of visual relationships (Sect. 5.2). Finally, we demonstrate that understanding visual relationships can improve common computer vision tasks like content based image retrieval (Sect. 5.4).

5.1 Visual Relationship Detection

Setup. Given an input image, our task is to extract a set of visual relationships \(\langle \) object \(_1\)-predicate-object \(_2\) \(\rangle \) and localize the objects as bounding boxes in the image. We train our model using the 4000 training images and perform visual relationship prediction on the 1000 test images.

The evaluation metrics we report are recall @ 100 and recall @ 50 [45]. Recall @ x computes the fraction of times the correct relationship is predicted in the top x most confident relationship predictions. Since we have 70 predicates and an average of 18 objects per image, the total possible number of relationship predictions is \(100 \times 70 \times 100\), which implies that a random guess will result in a recall @ 100 of 0.00014. We note that mean average precision (mAP) is another widely used metric. However, mAP is a pessimistic evaluation metric because we cannot exhaustively annotate all possible relationships in an image. Consider the case where our model predicts \(\langle \) person - taller than - person \(\rangle \). Even if the prediction is correct, mAP would penalize the prediction if we do not have that particular ground truth annotation.
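
A minimal sketch of this recall @ x computation for a single image, under the assumption that predictions are (relationship, confidence) pairs and the ground truth is a set of relationship tuples; names and example values are hypothetical.

```python
def recall_at_x(predictions, ground_truth, x):
    """Fraction of ground-truth relationships found in the top-x predictions."""
    top_x = {rel for rel, _ in sorted(predictions, key=lambda p: -p[1])[:x]}
    hits = sum(1 for rel in ground_truth if rel in top_x)
    return hits / max(len(ground_truth), 1)

preds = [(("person", "on", "motorcycle"), 0.9),
         (("person", "wear", "helmet"), 0.7),
         (("motorcycle", "has", "wheel"), 0.2)]
gt = {("person", "on", "motorcycle"), ("motorcycle", "has", "wheel")}
print(recall_at_x(preds, gt, x=2))  # 0.5: one of the two ground truths is in the top 2
```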

Detecting a visual relationship involves classifying both the objects, predicting the predicate and localizing both the objects. To study how our model performs on each of these tasks, we measure visual relationship prediction under the following conditions:

  1. In predicate detection (Fig. 5(left)), our input is an image and a set of localized objects. The task is to predict a set of possible predicates between pairs of objects. This condition allows us to study how difficult it is to predict relationships without the limitations of object detection [43].

  2. In phrase detection (Fig. 5(middle)), our input is an image and our task is to output a label \(\langle \) object \(_1\)-predicate-object \(_2\) \(\rangle \) and localize the entire relationship as one bounding box with at least 0.5 overlap with the ground truth box. This is the evaluation used in Visual Phrases [6].

  3. In relationship detection (Fig. 5(right)), our input is an image and our task is to output a set of \(\langle \) object \(_1\)-predicate-object \(_2\) \(\rangle \) labels and localize both object\(_1\) and object\(_2\) in the image, each with at least 0.5 overlap with its ground truth box.

Fig. 5.

We evaluate visual relationship detection using three conditions: predicate detection (where we only predict the predicate given the object classes and boxes), phrase detection (where we label a region of an image with a relationship) and relationship detection (where we detect the objects and label the predicate between them).

Comparison Models. We compare our method with state-of-the-art approaches [6, 44]. We further perform ablation studies on our model, considering just the visual appearance and the language module, including the likelihood term (Eq. 5) and the embedding term (Eq. 4), to study their contributions.

  • Visual phrases. Similar to Visual Phrases [6], we train deformable part models for each of the 6,672 relationships (e.g. “chair under table”) in our training set.

  • Joint CNN. We train a CNN model [44] to predict the three components of a relationship together. Specifically, we train a 270 (\(100+100+70\)) way classification model that learns to score the two objects (100 categories each) and the predicate (70 categories). This model represents a Visual Phrases style baseline that learns the relationship components jointly.

  • Visual appearance (Ours - V only). We only use the visual appearance module of our model described in Eq. 6 by optimizing V().

  • Likelihood of a relationship (Ours - L only). We only use the likelihood of a relationship described in Eq. 5 by optimizing L().

  • Visual appearance + naive frequency (Ours - V + naive FC). One of the contributions of our model is the ability to use a language prior via our semantic projection function f() (Eq. 2). Here, we replace f() with a function that maps a relationship to its frequency in our training data. Using this naive function, we hope to test the effectiveness of f().

  • Visual appearance + Likelihood (Ours - V + L only). We use both the visual appearance module (Eq. 6) and the likelihood term (Eq. 5) by optimizing both V() and L(). The only part of our model missing is K() (Eq. 4), which projects similar relationships closer together.

  • Visual appearance + likelihood + regularizer (Ours - V + L + Reg.). We use the visual appearance module and the likelihood term and add an \(L_{2}\) regularizer on W.

  • Full Model (Ours - V + L + K). This is our full model. It contains the visual appearance module (Eq. 6), the likelihood term (Eq. 5) and the embedding term (Eq. 4) from similar relationships.

Table 2. Results for visual relationship detection (Sect. 5.1). R@100 and R@50 are abbreviations of Recall @ 100 and Recall @ 50. Note that in predicate det., we are predicting multiple predicates per image (one between every pair of objects) and hence R@100 is less than 1.

Results. Visual Phrases [6] and Joint CNN [44] train an individual detector for every relationship. Since the space of all possible relationships is large (we have 6,672 relationship types in the training set), there is a shortage of training examples for infrequent relationships, causing both models to perform poorly on predicate, phrase and relationship detection (Table 2). (Ours - V only) can’t discriminate between similar relationships by itself, resulting in 1.85 R@100 for relationship detection. Similarly, (Ours - L only) always predicts the most frequent relationship \(\langle \) person - wear - shirt \(\rangle \) and results in 0.08 R@100, which is the percentage of the most frequent relationship in our testing data. These problems are remedied when both V and L are combined in (Ours - V + L only), with an increase of \(3\%\) R@100 on both phrase and relationship detection and more than a \(10\%\) increase in predicate detection. (V + Naive FC) is missing our relationship projection function f(), which learns the likelihood of a predicted relationship, and performs worse than (Ours - V + L only) and (Ours - V + L + K). Also, we observe that (Ours - V + L + K) has an \(11\%\) improvement over (Ours - V + L only) in predicate detection, demonstrating that the language module built from similar relationships significantly helps improve visual relationship detection. Finally, (Ours - V + L + K) outperforms (Ours - V + L + Reg.), showcasing that K() is acting not only as a regularizer but is learning to preserve the distances between similar relationships.

Fig. 6.

(a), (b) and (c) show results from our model, Visual Phrases [6] and Joint CNN [44] on the same image. All ablation study results for (d), (e) and (f) are reported below the corresponding image. Ticks and crosses mark correct and incorrect results respectively. Phrase, object\(_1\) and object\(_2\) boxes are in blue, red and green respectively. (Color figure online)

By comparing the performance of all the models between relationship and predicate detection, we notice a \(30\%\) drop in R@100. This drop in recall is largely because we have to localize two objects simultaneously, amplifying the object detection errors. Note that even when we have ground truth object proposals (in predicate detection), R@100 is still 47.87.

Qualitative Results. In Fig. 6(a), (b) and (c), Visual Phrase and Joint CNN incorrectly predict a common relationship: \(\langle \) person - drive - car \(\rangle \) and \(\langle \) car - next to - tree \(\rangle \). These models tend to predict the most common relationship as they see a lot of them during training. In comparison, our model correctly predicts and localizes the objects in the image. Figure 6(d), (e) and (f) compares the various components of our model. Without the relationship likelihood score, (Ours - V only) incorrectly classifies a wheel as a clock in (d) and mislabels the predicate in (e) and (f). Without any visual priors, (Ours - L only) always reports the most frequent relationship \(\langle \) person - wear - shirt \(\rangle \). (Ours - V + L) fixes (d) by correcting the visual model’s misclassification of the wheel as a clock. But it still does not predict the correct predicate for (e) and (f) because \(\langle \) person - ride - elephant \(\rangle \) and \(\langle \) hand - hold - phone \(\rangle \) rarely occur in our training set. However, our full model (Ours - V + L + K) leverages similar relationships it has seen before and is able to correctly detect the relationships in (e) and (f).

Table 3. Results for zero-shot visual relationship detection (Sect. 5.2). Visual Phrases, Joint CNN and Ours - V + naive FC are omitted from this experiment as they are unable to do zero-shot learning.

5.2 Zero-shot Learning

Owing to the long tail of relationships in real world images, it is difficult to build a dataset with every possible relationship. Therefore, a model that detects visual relationships should also be able to perform zero-shot prediction of relationships it has never seen before. Our model is able to leverage similar relationships it has already seen to detect unseen ones.

Setup. Our test set contains 1,877 relationships that never occur in our training set (e.g. \(\langle \) elephant - stand on - street \(\rangle \)). These unseen relationships can be inferred by our model using similar relationships (e.g. \(\langle \) dog - stand on - street \(\rangle \)) from our training set. We report our results for detecting unseen relationships in Table 3 for predicate, phrase, and relationship detection.

Results. (Ours - V) achieves a low 3.52 R@100 in predicate detection because visual appearances are not discriminative enough to predict unseen relationships. (Ours - L only) performs poorly in predicate detection (5.09 R@100) because it always returns the most common predicate. By comparing (Ours - V + L + K) and (Ours - V + L only), we find that the use of K() yields an improvement of \(30\%\), since it utilizes similar relationships to enable zero shot predictions.

5.3 Visual Relationship Detection on Existing Dataset

Our goal in this paper is to understand the rich variety of infrequent relationships. Our comparisons in Sect. 3 show that existing datasets either do not have enough diversity of predicates per object category or enough relationship types. Therefore, we introduced a new dataset (in Sect. 3) and tested our visual relationship detection model on it in Sects. 5.1 and 5.2. In this section, we run additional experiments on the existing Visual Phrases dataset [6] to provide further benchmarks.

Setup. The Visual Phrases dataset contains 17 phrases (e.g. “dog jumping”). We evaluate the models (introduced in Sect. 5.1) for visual relationship detection on 12 of these phrases that can be represented as a \(\langle \) object \(_1\)-predicate-object \(_2\) \(\rangle \) relationship. To study zero-shot learning, we remove two phrases (“person lying on sofa” and “person lying on beach”) from the training set and attempt to recognize them in the testing set. We report mAP, R@50 and R@100.

Results. In Table 4 we see that our method performs better than the existing Visual Phrases model even though the dataset is small and contains only 12 relationships. We get a mAP of 0.59 using our entire model as compared to a mAP of 0.38 using Visual Phrases’ model. We also outperform the Joint CNN baseline, which achieves a mAP of 0.54. Considering that the (Ours - V only) model performs similarly to the baselines, we believe that our full model’s improvements on this dataset are heavily influenced by the language priors. By learning to embed similar relationships close to each other, the aid from the language model can be thought of as analogous to the improvements achieved through training set augmentation. Finally, we see similar improvements in zero shot learning.

Table 4. Visual phrase detection results on Visual Phrases dataset [6].

5.4 Image based Retrieval

An important task in computer vision is image retrieval. An improved retrieval model should be able to infer the relationships between objects in images. We will demonstrate that the use of visual relationships can improve retrieval quality.

Fig. 7.

Example retrieval results using an image as the query.

Table 5. Example image retrieval using an image of a \(\langle \) person - ride - horse \(\rangle \) relationship (Sect. 5.4). Note that a higher recall and a lower median rank indicate better performance.

Setup. Recall that our test set contains 1000 images. Every query uses 1 of these 1000 images and ranks the remaining 999. We use 54 query images in our experiments. Two annotators were asked to rank image results for each of the 54 queries. To avoid bias, we consider a result for a particular query as ground truth only if it was selected by both annotators. We evaluate performance using R@1, R@5, R@10 and median rank [8]. For comparison, we use three image descriptors that are commonly used in image retrieval: CNN [44], GIST [46] and SIFT [47]. We rank results for a query using the \(L_2\) distance from the query image. Given a query image, our model predicts a set of visual relationships \(\{ R_{1}, \ldots , R_{n} \}\) with probabilities \(\{ P_{1}^{q}, \ldots , P_{n}^{q}\}\) respectively. Next, for every image \(I_i\) in our test set, it predicts \(\{ R_{1}, \ldots , R_{n} \}\) with confidences \(\{ P_{1}^{i}, \ldots , P_{n}^{i}\}\). We calculate the matching score between an image and the query as \(\sum _{j =1}^{n} P_{j}^{q} * P_{j}^{i}\). We also compare our model with Visual Phrases’ detectors [6].
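
As a small illustration of this matching score (with made-up confidence values and hypothetical variable names):

```python
import numpy as np

# P_j^q: relationship confidences for the query image; P_j^i: for one test image.
P_query = np.array([0.9, 0.4, 0.1])
P_image = np.array([0.8, 0.1, 0.3])

match_score = float(P_query @ P_image)  # sum_j P_j^q * P_j^i
print(match_score)  # test images are ranked by this score, highest first
```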

Results. SIFT [47] and GIST [46] descriptors perform poorly, with median ranks of 54 and 68 (Table 5), because they simply measure structural similarity between images. CNN [44] descriptors capture object-level information and perform better, with a median rank of 20. Our method captures the visual relationships present in the query image, which is important for high quality image retrieval, and improves to a median rank of 4. When queried using an image of a “person riding a horse” (Fig. 7), SIFT returns images that are visually similar but not semantically relevant. CNN retrieves one image that contains a horse and one that contains both a man and a horse, but neither of them captures the relationship “person riding a horse”. Visual Phrases and our model are able to detect the relationship \(\langle \) person - ride - horse \(\rangle \) and perform better.

6 Conclusion

We proposed a model to detect multiple visual relationships in a single image. Our model learned to detect thousands of relationships even when there were very few training examples. We learned the visual appearance of objects and predicates and combined them to predict relationships. To finetune our predictions, we utilized a language prior that maps similar relationships close together, outperforming the previous state of the art [6] on the Visual Phrases dataset [6] as well as on our dataset. We also demonstrated that our model can be used for zero shot learning of visual relationships. We introduced a new dataset with 37,993 relationships that can be used for further benchmarking. Finally, by understanding visual relationships, our model improved content based image retrieval.