What and where: A context-based recommendation system for object insertion

We propose a novel problem revolving around two tasks: (i) given a scene, recommend objects to insert, and (ii) given an object category, retrieve suitable background scenes. A bounding box for the inserted object is predicted in both tasks, which helps downstream applications such as semi-automated advertising and video composition. The major challenge lies in the fact that the target object is neither present nor localized in the input, and furthermore, available datasets only provide scenes with existing objects. To tackle this problem, we build an unsupervised algorithm based on object-level contexts, which explicitly models the joint probability distribution of object categories and bounding boxes using a Gaussian mixture model. Experiments on our own annotated test set demonstrate that our system outperforms existing baselines on all subtasks, and does so using a unified framework. Future extensions and applications are suggested.


I. INTRODUCTION
Our goal is to build a bidirectional recommendation system [1], [2] that performs two tasks under a unified framework: 1) Object Recommendation: for a given scene, recommend a sorted list of categories and bounding boxes for insertable objects; 2) Scene Retrieval: for a given object category, retrieve a sorted list of suitable background scenes and corresponding bounding boxes for insertion. The motivation for the two tasks stems from the bilateral collaboration between media owners and advertisers in the advertising industry. Some media owners make profits by offering paid promotion [3], while many advertisers pay media owners for product placement [4]. This collaboration pattern reflects a mutual requirement, from which we distill the novel research topic of dual recommendation for object insertion.
Consider a typical collaborative workflow between a media owner and an advertising artist consisting of three phases: 1) Matching: the media owner determines what kind of products are insertable, while the advertiser determines what kind of background scenes are suitable. Both of them, in this process, also consider where an insertion might potentially happen; 2) Negotiation: they contact each other and confirm what and where after negotiation; 3) Insertion: post-process the media to perform the actual insertion. In this work, both of the above tasks aim to underpin phases 1 and 2, but neither includes a fully automatic solution for phase 3. The key idea here is to automatically make recommendations rather than make decisions for the user. We do not perform automatic segment selection or insertion, because in practice the inserted object will be brand-specific and the final decision depends upon the personal opinions of the advertiser [5]. Nonetheless, for illustration purposes only, we use manually selected, yet automatically pasted object segments for the cases presented in this paper, which demonstrates our system's ability to make reasonable recommendations on categories and bounding boxes.
The advantage of our system is three-fold. First, we provide constructive ideas for designers: the object recommendation task can be especially useful for sponsored media platforms, which may profit by making recommendations to media owners. Second, the scene retrieval task provides a specialized search engine that retrieves images given an object, going beyond previous content-based image retrieval systems [6], [7], [8]. Future applications include advertiser-oriented search engines, or matching services for designer websites. Third, the bounding boxes predicted for both tasks further make the recommendations concrete and visualizable. As we will show in our experiments, this not only enables applications such as automatic preview over a gallery of target segments, but may also assist designers by providing a heatmap as a hint.
Specifically, our contributions are: 1) We are the first, to the best of our knowledge, to propose dual recommendation for object insertion as a new research topic; 2) We develop an unsupervised algorithm (Sect. III) based on object-level context [9], which explicitly models the joint probability distribution of object category and bounding box; 3) We establish a newly annotated test set (Sect. IV), and introduce task-specific metrics for automatic quantitative evaluation (Sect. V); 4) We outperform existing baselines on all subtasks under a unified framework, as demonstrated by both quantitative and qualitative results (Sect. V).

II. RELATED WORK
Although no prior work directly addresses the same topic, we can still borrow ideas from previous art on related tasks.
Generally, the appearance of the target object is given, and the expected output is either the category (image classification), or the location (weakly supervised object detection), or both (object detection, semantic segmentation). Our object recommendation task shares a similar output of both category and location, but there are two key differences: (i) the appearance of the target object is unknown in our task, for the object is not even present in the scene; (ii) the expected outputs for both category and location are not unique, for there may be multiple objects suitable for the same scene with multiple reasonable placements.
In this work, we build our system upon the recently proposed state-of-the-art object detector, Faster R-CNN [13].The basic idea is to seek evidence from other existing objects in the scene, which requires object detection as a basic building block.We also extend the expected output from a single category and a single location to lists of each, to allow multiple acceptable results in an information retrieval (IR) fashion.
Image Retrieval. Image retrieval tasks aim to retrieve a list of relevant images based on keywords [6], example images [8], or even other abstract concepts such as sketches or layouts [22].
Generally, some attributes (topic, features, color, layout, etc.) are known about the target image, and the expected output is a list of images that satisfies these conditions. Our scene retrieval task is distinct from this family of tasks because our query object is generally not present in the scene; neither is it an attribute possessed by the target image. Nonetheless, we share similar ideas with these retrieval systems in two aspects. First, we adopt a similar expected output, a ranked list, and employ the normalized discounted cumulative gain (nDCG) metric, as is widely used in previous retrieval tasks. Second, similar to content-based image retrieval systems [6], [7], [8], we also utilize the known information of the image, typically the categories and locations of the existing objects.
Image Composition. Our work aims to provide inspiration for object insertion, which is closely related to image composition. Some works focus on interactive editing; for instance, [23] builds an interactive library-based editing tool that enables users to draw rough sketches, leading to plausible composite images incorporating retrieved patches. Other works focus on automatic completion, with image inpainting as one of the most notable research topics [24], [25]. These works aim to restore a removed region of an image, typically with neural networks that exploit the context. Our system is unique in two aspects: 1) we neither take the user's sketches as input, nor require a masked region as a location hint; 2) we do not take "plausible" as our final goal, because our motivation is to make recommendations, rather than decisions, as explained in Sect. I.
Closest to our work is the automatic person composition task proposed by [26], which establishes a fully automatic pipeline for incorporating a person into a scene. This pipeline consists of two stages: 1) location prediction; 2) segment retrieval. Our system differs from this work in that we do not perform segment retrieval, while their pipeline cannot make recommendations on categories or scenes. We compare our system's performance on bounding box prediction with the first stage of this work, and report both quantitative and qualitative results.

III. METHOD
In this section, we first decompose the two tasks into three subtasks with probabilistic formulations, which we derive from the same joint probability distribution. We then present an algorithm that models object-level context with a Gaussian mixture model (GMM), which leads to an approximation for the joint distribution. Finally, we report implementation details and per-image runtime.

A. Problem Formulation
Given a set of candidate object categories C, a set of scene images I, and a set of candidate bounding boxes B_I for each specific image I, we further break the two tasks introduced in Sect. I into the following three subtasks: object recommendation, scene retrieval, and bounding box prediction. We show that all three subtasks can be solved from the same joint probability distribution P(B, C|I). The basic intuition is that the object category and bounding box are interrelated when judging whether an insertion is appropriate. By adopting Bayes' theorem, we arrive at:

1) Object Recommendation: rank each category C ∈ C for a given image I by

P(C|I) = Σ_{B ∈ B_I} P(B, C|I);

2) Scene Retrieval: rank each image I ∈ I for a given category C by

P(I|C) = P(C|I) P(I) / P(C) ∝ P(C|I),

where we perform the maximum a posteriori (MAP) estimation and assume a uniform prior for P(I).
3) Bounding Box Prediction: rank each candidate box by

P(B|C, I) ∝ P(B, C|I),

where we rank all bounding boxes B ∈ B_I for each given pair ⟨C, I⟩.
In summary, to achieve our goal (which breaks down to three subtasks), we need an algorithm to estimate P (B, C|I), which is discussed in the next subsection.
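As a sketch of how the three rankings all fall out of one table of joint scores, the toy Python below uses randomly generated stand-in values for P(B, C|I); the image names, category list, and scores are all hypothetical placeholders, not the paper's actual model.

```python
import numpy as np

# Hypothetical joint scores P(B, C | I): one matrix per image,
# rows = candidate boxes, columns = candidate categories.
rng = np.random.default_rng(0)
joint = {f"img{i}": rng.dirichlet(np.ones(12)).reshape(4, 3) for i in range(5)}
categories = ["clock", "cup", "laptop"]

def recommend_objects(img):
    """Subtask 1: rank categories for one scene by their best box score."""
    scores = joint[img].max(axis=0)          # max over candidate boxes B
    order = np.argsort(-scores)
    return [(categories[c], float(scores[c])) for c in order]

def retrieve_scenes(cat):
    """Subtask 2: rank scenes for one category; a uniform prior P(I) cancels."""
    c = categories.index(cat)
    scored = [(img, float(m[:, c].max())) for img, m in joint.items()]
    return sorted(scored, key=lambda t: -t[1])

def predict_box(img, cat):
    """Subtask 3: best box index for a fixed (category, scene) pair."""
    c = categories.index(cat)
    return int(np.argmax(joint[img][:, c]))
```

All three functions read the same table, mirroring the point that a single estimate of P(B, C|I) drives every subtask.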

B. Modeling the Joint Probability Distribution
1) Model Formulation: For each image I, we obtain a set of N bounding boxes B_I = {B_i}_{i=1}^{N} for existing objects, which is typically the output of a region proposal network (RPN) [13].
Note that the candidate bounding box B and category C are conditionally independent of B_I given I, because B_I is derivable from I. We then model the joint probability distribution P(B, C|I) as follows:

P(B, C|I) ∝ Σ_{i=1}^{N} P(B, C|B_i, I).

We represent each context object with a probability distribution over all possible categories. Denoting the set of all categories considered in the context as C, we have

P(B, C|B_i, I) = Σ_{C_j ∈ C} P(C|B_i, C_j) · P(B|C, B_i, C_j, I) · P(C_j|B_i, I),    (5)

where the last term on the right-hand side is the output distribution obtained from an object detector [13], [27]. The first term is decided by the co-occurrence frequency of the inserted object C with a localized existing object (B_i, C_j).
For simplicity, we drop B_i and approximate this term with P(C|C_j). The basic intuition is that B_i does not contribute significantly to the ranking between categories. For instance, compared to a mouse, a cake is more likely to co-occur with a plate, no matter where the plate is. The second term is an object-level context [9] term that will be modeled with a Gaussian mixture model (GMM), as described next.
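A minimal sketch of this co-occurrence approximation, with made-up counts standing in for Visual Genome statistics (the category pairs and numbers below are illustrative only, not data from the paper):

```python
from collections import Counter

# Toy co-occurrence counts between insertable and context categories,
# as would be tallied from dataset annotations (values are made up).
cooc = Counter({("cake", "plate"): 120, ("mouse", "plate"): 3,
                ("mouse", "keyboard"): 95, ("cake", "keyboard"): 2})

def p_c_given_cj(c, cj, categories=("cake", "mouse")):
    """Approximate P(C | C_j) by normalized co-occurrence counts,
    dropping the context box B_i as described in the text."""
    total = sum(cooc[(k, cj)] for k in categories)
    # Fall back to a uniform distribution when C_j was never observed.
    return cooc[(c, cj)] / total if total else 1.0 / len(categories)
```

With these counts, a cake dominates a mouse given a plate context, regardless of where the plate is, which is exactly the intuition the approximation encodes.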
2) Context Modeling with GMM: We now focus on the context term in Eq. 5 that remains unsolved. Consider the case when C = clock and C_j = wall. The term P(B|C, B_i, C_j, I) answers the question "Having observed a wall in a certain place, where should we insert a clock?". Given such a question, a human agent would first identify that a clock is likely to be mounted on the wall, then conclude that the clock is likely to appear in the upper region of the wall, and that its size should be much smaller than the wall. Our GMM simulates this process to judge each candidate bounding box.
Based on this intuition, we further exploit inter-object relationships as proposed by previous works on scene graphs [28], [6]. Denoting the set of all considered relations as R, we get:

P(B|C, B_i, C_j, I) = Σ_{r ∈ R} P(r|C, C_j) · P(B|C, r, B_i, C_j, I).

Following [6], we extract a pairwise bounding box feature f(B, B_i), which encodes the relative position and scale of the inserted object and a context object, where (x_1, y_1), (x_2, y_2) are the bottom-left corners of the two boxes, and (w_1, w_2), (h_1, h_2) are the widths and heights respectively. We then train a Gaussian mixture model (GMM) for each annotated (subject, relation, object) triple from the Visual Genome [29] dataset:

P(B|C, r, B_i, C_j, I) = gmm^(t)(f(B, B_i)) = Σ_{k=1}^{K} a_k N(f(B, B_i); μ_k, Σ_k),

where gmm^(t) denotes the GMM corresponding to triple t = (C, r, C_j), K is the number of components (the same for each GMM), which we empirically set to 4 in our experiments, N is the normal distribution, and a_k, μ_k, Σ_k are the prior, mean, and covariance for the k-th component of gmm^(t), which we learn using the EM algorithm implemented by Scikit-learn [30].
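The following sketch shows how such per-triple GMMs could be fit with Scikit-learn, as the text describes. The exact form of the pairwise feature is our assumption (normalized corner offsets plus log scale ratios), since the paper's formula is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def pairwise_feature(box, ctx):
    """Relative position/scale feature between the inserted box and a
    context box; each box is (x, y, w, h) with (x, y) the bottom-left
    corner. This 4-d form is an assumed stand-in for the paper's feature."""
    x1, y1, w1, h1 = box
    x2, y2, w2, h2 = ctx
    return [(x1 - x2) / w2, (y1 - y2) / h2,
            np.log(w1 / w2), np.log(h1 / h2)]

def fit_triple_gmm(features, K=4, seed=0):
    """Fit one GMM per (subject, relation, object) triple via EM,
    with K = 4 components as in the experiments."""
    gmm = GaussianMixture(n_components=K, covariance_type="full",
                          random_state=seed)
    gmm.fit(np.asarray(features))
    return gmm
```

At inference, `gmm.score_samples(...)` gives the log-density of a candidate box's feature under the triple's mixture, which plays the role of the context term.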

C. Implementation Details
We adopt the pretrained Faster R-CNN released by [27] as the object detector. We use 10 object categories for insertion (detailed in Sect. IV) and keep the top 20 object categories and top 10 relations from the Visual Genome [29] dataset, sorted by co-occurrence count with the 10 insertable categories. We consider at most N = 20 existing objects with a detection threshold of 0.4 for context modeling. For each image I with size H_I × W_I, we sample the candidate bounding boxes B_I in a sliding-window fashion, with window size w ∈ {1/8, 1/16} · max{H_I, W_I} and stride s = w/2, which generates around 800 candidate boxes per image. We further refine the size of the best ranked box by searching over sizes within the interval [0, 1/8] · max{H_I, W_I}, equally discretized into 32 values. A complete, single-thread, pure Python implementation on an Intel i7-5930K 3.50GHz CPU and a single Titan X Pascal GPU takes around 4 seconds per image.
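The sliding-window sampling described above can be sketched as follows; this is a simplified reading of the settings (square windows, no size refinement), not the paper's actual implementation.

```python
def candidate_boxes(H, W):
    """Sliding-window candidates: square windows of 1/8 and 1/16 of the
    longer image side, with stride equal to half the window size."""
    L = max(H, W)
    boxes = []
    for frac in (1 / 8, 1 / 16):
        w = L * frac
        s = w / 2
        y = 0.0
        while y + w <= H:
            x = 0.0
            while x + w <= W:
                boxes.append((x, y, w, w))   # (x, y, width, height)
                x += s
            y += s
    return boxes
```

For a roughly square image this yields on the order of a thousand candidates, in line with the "around 800 boxes per image" figure quoted above.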

IV. DATASET

A. Scenes and Objects
We establish a test set that consists of fifty scenes from the Visual Genome [29] dataset. The test scenes come from 4 indoor scene types: living room, dining room, kitchen, and office. The statistics for each scene type are shown in Table I.

TABLE I: Statistics on different scene types
There are ten insertable objects considered in this experiment, as shown in Table II. The same illustrations and specifications are presented to all annotators as a standard, to ensure consistency for each category. We choose these insertable objects based on the following principles: 1) Environment: mostly appears indoors; 2) Frequency: is within the top 150 frequent categories [28] in Visual Genome; 3) Flexibility: is not generally embedded (e.g. sink) or large and clumsy (e.g. bed), so that it can be flexibly inserted into a scene; 4) Diversity: does not have a significant context overlap with other object categories (e.g. bottle is not included because we already have cup).

B. Annotation Guideline
On average, there are 11 human annotators for each scene. For each scene, the annotator is asked to generate the following annotations: 1) Insertable Categories: for each scene, the annotator is encouraged to annotate as many as possible, yet no more than 5, insertable object categories (chosen from the categories in Table II). 2) User Preference: for each annotated object category, the annotator assigns a preference score from {1, 2}. The annotators are shown a wide range of different example scenes in advance to ensure that they apply consistent criteria for this preference.
• Score 2 (very suitable): indicates "this category is very suitable to be inserted into the scene";
• Score 1 (generally suitable): indicates "this category can be inserted into this scene, yet is not very suitable".
3) Bounding Box Size: for each annotated object category, the annotator draws a rectangular bounding box whose longer side equals the longer side of an appropriate bounding box for the object. We only need one degree of freedom for size evaluation because the aspect ratio of the inserted object is typically fixed.
4) Insertable Region: for each annotated object category, the annotator draws a region as follows: imagine holding the object for insertion and dragging it over all the places where it can be inserted. The region that can be covered by the object in this process is defined as the insertable region, which should be drawn using a brush tool (Fig. 2).

Fig. 2: Method for drawing the insertable region. This is also the illustration that we presented to the annotators. Note that in this case, because the cup has a non-zero height, some pixels above the table can also be covered.
Note that, different annotators may have different opinions towards this region.For instance, for the scene in Fig. 2, some annotators may not include the left-bottom corner of the table when drawing the insertable region.This subjectivity is explicitly allowed within the range of quality control.

V. EXPERIMENTS
Tables IV and V show qualitative results for object recommendation and scene retrieval, both enhanced by bounding box prediction. We further quantitatively evaluate our method against existing baselines on our new test set. Task-specific metrics are designed for comprehensive evaluation.
We design experiments for the 3 subtasks systematically. First, for both the object recommendation and scene retrieval subtasks, we compare our system against a statistical baseline, bag-of-categories (BOC), which is based on category co-occurrence. Second, we separately evaluate the size and location for the bounding box prediction subtask, and compare our results against a recently proposed neural model for person composition [26]. Finally, we report comparisons on both quantitative and qualitative results, which helps interpret what is learned by our algorithm.

A. Object Recommendation
We adopt the normalized discounted cumulative gain (nDCG) [31], a measure widely used in information retrieval (IR) for ranking quality, as the metric for object recommendation. Because the desired output for this subtask is a ranked list, and each item is annotated with a gain reflecting user preference, nDCG is a natural choice for evaluation.
1) Metric Formulation: For n images and m_i annotators for the i-th image, the averaged nDCG@K is defined as

nDCG@K = (1/n) Σ_{i=1}^{n} (1/m_i) Σ_{j=1}^{m_i} nDCG@K(i, j),

where nDCG@K(i, j) measures the ranking quality for the top-K recommended object categories of the i-th image, with regard to the ground truth user preference scores provided by the j-th annotator.
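A direct implementation of this averaged nDCG can be sketched as below, assuming each gain list holds one annotator's preference scores in the system's predicted order (0 for categories the annotator did not mark); the data layout is our assumption.

```python
import math

def dcg(gains):
    """Discounted cumulative gain with the standard log2 position discount."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

def ndcg_at_k(ranked_gains, k):
    """nDCG@K for one (image, annotator) pair."""
    ideal = sorted(ranked_gains, reverse=True)
    denom = dcg(ideal[:k])
    return dcg(ranked_gains[:k]) / denom if denom > 0 else 0.0

def averaged_ndcg(all_gains, k):
    """Average over annotators j, then over images i, per the formula above.
    all_gains[i][j] is the gain list for image i judged by annotator j."""
    per_image = [sum(ndcg_at_k(g, k) for g in anns) / len(anns)
                 for anns in all_gains]
    return sum(per_image) / len(per_image)
```

A perfectly ordered list scores 1.0; placing an unannotated category first drives nDCG@1 to 0.0, which is why the top-1 column is the most discriminative.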
2) Quantitative Results: The baseline method, bag-of-categories (BOC), regards each image as a bag of existing objects, and ranks all candidate objects by the sum of co-occurrences with the existing objects. BOC borrows its idea from the simple yet effective bag-of-words (BOW) model [32] in natural language processing, which ignores structural information and keeps only statistical counts.
The quantitative comparison between our system and BOC is shown in Table III. We evaluate nDCG at the top-1, top-3, and top-5 results respectively, because there are at most 5 annotations per image. As demonstrated by the results, our method achieves consistent improvements over BOC.
3) Qualitative Analysis: The largest gain of our method over the baseline is reflected by nDCG@1, i.e. the top result. Fig. 3 shows a qualitative comparison against BOC on the top-1 recommendation. In Fig. 3a, the baseline wrongly recommends a clock because there are 2 detected walls, whereas our system recognizes that most candidate boxes for a clock lead to unreasonable relative positions with the walls. In Fig. 3b, the baseline recommends a laptop due to the high co-occurrence of the pair (laptop, table). However, the table in this scene is too small, leaving no suitable bounding box for insertion. In Fig. 3c, the baseline recommends a book because of a mis-detection of a small shelf at the edge of the background (the blue box, which is actually a counter); this is almost ignored by our system for the same reason as in Fig. 3b. In summary, the key advantage of our algorithm over the baseline is that we consider not only the co-occurrence frequency, but also the relative locations and relationships between the inserted object and context objects. This enables our system to bypass candidate categories with high co-occurrence counts yet unreasonable placements, and to be more robust when faced with detection failures.

B. Scene Retrieval
Similarly, for the scene retrieval subtask, we also adopt nDCG as the metric for the ranked image list.
1) Metric Formulation: For n insertable categories and m candidate images for each category, the averaged nDCG@K is defined as

nDCG@K = (1/n) Σ_{i=1}^{n} (1/m_i) Σ_{j=1}^{m_i} nDCG@K(i, j),

where m_i is the number of annotators for the i-th category, and nDCG@K(i, j) measures the ranking quality of the top-K retrieved scene images for the i-th category, with regard to the ground truth user preference scores provided by the j-th annotator.
2) Quantitative Results: The quantitative comparison between our system and BOC is shown in Table VI. We evaluate nDCG at K = 1, 10, 20 respectively, in consideration of the fact that there are 50 candidate images in total. Again, we outperform the baseline by a substantial margin.
3) Qualitative Analysis: Fig. 4 shows a qualitative comparison against BOC on the top 10 retrieved scenes. Intuitively, our system prefers scenes whose supporting objects are visually large, continuous, or close to the viewer, while the baseline is typically biased towards scenes with more relevant objects. This is due to the fact that only boxes that lead to reasonable relationships contribute significantly to P(B, C|I), while BOC is agnostic to the spatial structure of the context objects.

C. Bounding Box Prediction
We evaluate the size and location of the predicted bounding box separately. The baseline for this subtask is the neural approach proposed by [26], which builds an automatic two-stage pipeline for inserting a person's segment into an image. It first determines the best bounding box using dilated convolutional networks [33], then retrieves a context-compatible person segment from a database.
Here, we compare our system's performance on bounding box prediction, against the first stage of [26].We adopt the same object detector [27] with the same confidence threshold as in our experiments, and the same training settings for [26] as reported in its supplementary material.
For size prediction, we design a single metric that measures the similarity of two lengths. For location prediction, however, we design two different metrics, for automatic use cases and manual use cases respectively. The automatic use case would require an API that returns the best ranked bounding box, while the manual use case would prefer a heatmap as an intuitive hint. We discuss these three metrics and the corresponding use cases in detail below.
1) Metric Formulation - Size: For a bounding box B with height h_B and width w_B, we define its box size as s_B = max{h_B, w_B}. We then define a metric that evaluates how close the ground truth box size is to the predicted one. Note that we only preserve three degrees of freedom for a box, because the aspect ratio of the inserted object segment should be predetermined.
Given n images, and m_i annotators for the i-th image, for a specific category C, we define the average intersection over union (IoU) score for box size as

IoU_size^(C) = (1/n) Σ_{i=1}^{n} (1/m_i) Σ_{j=1}^{m_i} IoU_size(g_ij^(C), s_i^(C)),  with  IoU_size(g, s) = min(g, s) / max(g, s),

where g_ij^(C) is the ground truth box size provided by annotator j in image i for category C, and s_i^(C) is the predicted box size in image i for category C. IoU_size(g, s) has an upper bound of 1.0 (when g = s), and approaches 0.0 when g and s are drastically different.
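This size metric is straightforward to implement; a minimal sketch (the nested-list data layout is our assumption):

```python
def iou_size(g, s):
    """Size agreement between ground truth and predicted box sizes,
    in (0, 1]; equals 1.0 exactly when the two sizes match."""
    return min(g, s) / max(g, s)

def averaged_iou_size(gt, pred):
    """gt[i] lists the annotator sizes for image i; pred[i] is the
    predicted size. Average over annotators, then over images."""
    per_image = [sum(iou_size(g, p) for g in anns) / len(anns)
                 for anns, p in zip(gt, pred)]
    return sum(per_image) / len(per_image)
```

Using the ratio of the smaller to the larger length penalizes over- and under-sized predictions symmetrically on a multiplicative scale.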
2) Metric Formulation - Location, Best Box: The best recommended box is crucial to an automatic application, such as automatic preview software. Hence, this experiment evaluates whether the best recommended box is in a reasonable location. We consider the location of a bounding box reasonable if it is contained within the insertable region annotated by the user.
Note that this criterion can be biased towards smaller boxes. We address this drawback in two ways: first, larger boxes that slightly exceed the insertable region may still make a non-zero contribution to this metric; second, unreasonably small boxes will pull down the size prediction score accordingly.
Given n images, and m_i annotators for the i-th image, for a specific category C, we define the average accuracy for the location of the best recommended box as

accuracy^(C) = (1/n) Σ_{i=1}^{n} (1/m_i) Σ_{j=1}^{m_i} accuracy_loc(g_ij^(C), B_i^(C)).    (14)

Ideally, the box is contained within the insertable region annotated by the user (Fig. 5a). A box that slightly exceeds this region (Fig. 5b) is not good enough, yet still visually better than a box that is an outlier (Fig. 5c). For best box evaluation, the difference between accuracy and strict accuracy is that for cases like Fig. 5b, the former counts the fraction of area that is included in the insertable region, whereas the latter only counts valid boxes as in Fig. 5a.
accuracy_loc(g, B) = (area of g ∩ B) / (area of B),    (15)

where g_ij^(C) is the ground truth insertable region drawn by annotator j in image i for category C, and B_i^(C) is the best recommended box in image i for category C. accuracy_loc(g, B) has an upper bound of 1.0 (when B is entirely contained within g), and a lower bound of 0.0 (when B is entirely outside g).
Furthermore, if we only regard bounding boxes that are fully contained by the insertable region as reasonable, we can define a stricter metric by substituting accuracy_loc in Eq. 14 with a binary indicator function I(g, B), which is 1 if and only if B is fully contained by g. We denote this metric as the "strict accuracy". It excludes boxes that are only partially contained by the insertable region, and counts only valid boxes that are entirely covered.
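Both the soft and strict location accuracies can be sketched against a boolean region mask; integer pixel coordinates are assumed for simplicity (the real evaluation may work at sub-pixel precision).

```python
import numpy as np

def box_region_overlap(region, box):
    """Fraction of the box's pixels inside the insertable region.
    `region` is a boolean H x W mask; `box` = (x, y, w, h) in pixels."""
    x, y, w, h = box
    sub = region[y:y + h, x:x + w]
    return float(sub.sum()) / (w * h)

def accuracy_loc(region, box, strict=False):
    """Soft accuracy (Eq. 15), or the binary strict variant."""
    frac = box_region_overlap(region, box)
    if strict:                      # indicator I(g, B): fully contained or not
        return 1.0 if frac == 1.0 else 0.0
    return frac
```

A box that half-exceeds the region scores 0.5 under the soft metric but 0.0 under the strict one, matching the Fig. 5b discussion above.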
3) Metric Formulation - Location, Heatmap: This metric evaluates the score distribution of all sampled boxes, which we further convert into an intuitive pixel-level representation. We denote this representation as a heatmap.
Specifically, we generate a heatmap by adding the score of each sampled box to all of its contained pixels. The heat value at each pixel hence approximates the probability that the pixel is contained within at least one insertable box (see footnote 1). This representation is compatible, and hence directly comparable, with the insertable region provided by the user.
Note that the heatmap does not support any programmatic usage; it only aims to provide a clear user hint. We do not adopt the distribution of the bottom-left corner or the stand position [26], because not all insertable categories are supported from the bottom (e.g. TV). Hence, a heatmap that distributes the probability of each box over its inner pixels is more cognitively consistent across different categories.
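Heatmap generation as described above reduces to accumulating box scores over pixels; a minimal sketch with integer pixel boxes assumed:

```python
import numpy as np

def score_heatmap(H, W, boxes, scores):
    """Spread each candidate box's probability over its pixels.
    With many low-scoring boxes, the per-pixel sum approximates the
    probability that the pixel lies in at least one insertable box."""
    heat = np.zeros((H, W))
    for (x, y, w, h), p in zip(boxes, scores):
        heat[y:y + h, x:x + w] += p
    return heat
```

The additive form relies on the scores being individually small (around 800 boxes share unit mass), so the sum stays close to the union probability.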
Given n images, and m_i annotators for the i-th image, for a specific category C, we define the average IoU for the heatmap as (illustrated in Fig. 6)

IoU^(C) = (1/n) Σ_{i=1}^{n} IoU_loc( (1/m_i) Σ_{j=1}^{m_i} g_ij^(C), h_i^(C) ).    (16)

Footnote 1: For each pixel p in image I contained within candidate boxes B_1, B_2, ..., B_k, for a specific category C, we have P(p is covered by an insertable box) = 1 - Π_{i=1}^{k} (1 - P(B_i|I, C)) ≈ Σ_{i=1}^{k} P(B_i|I, C). Typically, there are around 800 candidate boxes per image, so the numerical value of each P(B_i|I, C) is small enough to justify this approximation.

Fig. 6: Metric for heatmap: the heat value at each pixel represents the probability that it is contained within at least one insertable box. We take the average insertable region over all users as the ground truth heatmap, and test the consistency of this pixel-level probability distribution between ground truth and prediction. This metric measures the system's ability to approximate the hint provided by a human.
IoU_loc(g, h) = [Σ_p min(g_p, h_p)] / [Σ_p max(g_p, h_p)]    (17)

In Eq. 16, g_ij^(C) is the ground truth insertable region drawn by annotator j in image i for category C, and h_i^(C) is the predicted heatmap for category C in image i.
In Eq. 17, g denotes the averaged ground truth insertable region, h denotes the predicted heatmap, and p iterates over all pixels of g and h. We normalize g and h such that each sums to 1.0. This definition of IoU has a maximum value of 1.0 when g and h are exactly the same, and a minimum value of 0.0 when they are completely disjoint.
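The pixelwise soft IoU of Eq. 17, with the normalization described above, is then a few lines:

```python
import numpy as np

def iou_loc(g, h):
    """Soft pixelwise IoU between two non-negative maps (Eq. 17),
    each normalized to sum to 1.0 first."""
    g = g / g.sum()
    h = h / h.sum()
    return float(np.minimum(g, h).sum() / np.maximum(g, h).sum())
```

Identical maps score 1.0, completely disjoint maps score 0.0, and any shared mass scores in between, which is what makes the heatmap comparable to the averaged user region.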
4) Quantitative Results: We report the average IoU for size, the average accuracy for the best recommended location, and the average IoU for the heatmap, over all insertable categories. We refine the location heatmap generated by [26], adding the heat value at each stand position to the pixels in the corresponding box, to match our heatmap definition. As shown in Table VII, we achieve consistent improvement over the baseline on all metrics designed for bounding box prediction.
5) Qualitative Analysis: Table VIII shows the qualitative comparison against [26] on bounding box prediction. We outperform the baseline significantly, especially on location prediction. Possible reasons include: 1) the baseline employs an inpainting model to generate fake background images that do not contain the target object, which propagates error throughout the downstream training process; 2) the Visual Genome [29] dataset is relatively small, and images containing non-human objects (i.e. the insertable categories considered in this paper) are even fewer. We do not use larger datasets such as MS-COCO [34], because many important context object categories (e.g. desk, counter, wall) are not annotated. The data-driven nature of neural networks hence limits the performance of [26].

VI. CONCLUSION
We propose a novel research topic, dual recommendation for object insertion, and build an unsupervised algorithm that exploits object-level context. We establish a new test dataset and design task-specific metrics for automatic quantitative evaluation. We outperform existing baselines on all subtasks under a unified framework, as evidenced by both quantitative and qualitative results. Future work includes the incorporation of high-dimensional image features, or larger datasets that are able to fully drive the training of neural networks.

Fig. 3: Qualitative comparison on the top recommended object.

TABLE IV: Object recommendation results. The first row of each demo shows the recommended bounding box (V-C2) and indicative heatmap (V-C3) automatically generated for the top 3 recommended object categories (V-A). The second row is generated using manually selected yet automatically pasted object segments (for illustration purposes only).
TABLE V: Scene retrieval results: top 5 retrieved scenes (V-B) for (a) clock, (b) apple, and (c) cup. The first row for each category shows the original images, and the second row shows the scenes overlaid with automatically generated bounding boxes (V-C2) and heatmaps (V-C3).

TABLE II: Insertable object categories considered in this experiment

TABLE III: Quantitative evaluation for object recommendation

TABLE VI: Quantitative evaluation for scene retrieval

TABLE VII: Quantitative evaluation for bounding box prediction

TABLE VIII: Qualitative comparison against [26] on bounding box prediction. The first row shows the original images, the second row our results, and the third row the results of [26]. The inserted object for each image is labeled at the bottom of each column.