1 Introduction

Given some object of interest marked in one frame of a video, the goal of “single-target tracking” is to locate this object in subsequent video frames, despite object motion, changes in viewpoint, lighting changes, or other variations. Single-target tracking is an important component of many systems. For person-following applications, a robot must track a person as they move through their environment. For autonomous driving, a robot must track dynamic obstacles in order to estimate where they are moving and predict how they will move in the future.

Generic object trackers (trackers that are not specialized for specific classes of objects) are traditionally trained entirely from scratch online (i.e. during test time) [3, 15, 19, 36], with no offline training performed. Such trackers suffer in performance because they cannot take advantage of the large number of videos that are readily available for offline training. Offline training videos can be used to teach the tracker to handle rotations, changes in viewpoint, lighting changes, and other complex challenges.

In many other areas of computer vision, such as image classification, object detection, segmentation, or activity recognition, machine learning has allowed vision algorithms to train from offline data and learn about the world [5, 9, 13, 23, 25, 28]. In each of these cases, the performance of the algorithm improves as it iterates through the training set of images. Such models benefit from the ability of neural networks to learn complex functions from large amounts of data.

Fig. 1. Using a collection of videos and images with bounding box labels (but no class information), we train a neural network to track generic objects. At test time, the network is able to track novel objects without any fine-tuning. By avoiding fine-tuning, our network is able to track at 100 fps

In this work, we show that it is possible to learn to track generic objects in real-time by watching offline videos of objects moving in the world. To achieve this goal, we introduce GOTURN, Generic Object Tracking Using Regression Networks. We train a neural network for tracking in an entirely offline manner. At test time, when tracking novel objects, the network weights are frozen, and no online fine-tuning is required (as shown in Fig. 1). Through the offline training procedure, the tracker learns to track novel objects in a fast, robust, and accurate manner.

Although some initial work has been done in using neural networks for tracking, these efforts have produced neural-network trackers that are too slow for practical use. In contrast, our tracker is able to track objects at 100 fps, making it, to the best of our knowledge, the fastest neural-network tracker to date. Our real-time speed is due to two factors. First, most previous neural network trackers are trained online [7, 24, 26, 27, 30, 34, 35, 37, 39]; however, training neural networks is a slow process, leading to slow tracking. In contrast, our tracker is trained offline to learn a generic relationship between appearance and motion, so no online training is required. Second, most trackers take a classification-based approach, classifying many image patches to find the target object [24, 26, 27, 30, 33, 37, 39]. In contrast, our tracker uses a regression-based approach, requiring just a single feed-forward pass through the network to regress directly to the location of the target object. The combination of offline training and one-pass regression leads to a significant speed-up compared to previous approaches and allows us to track objects at real-time speeds.

GOTURN is the first generic object neural-network tracker that is able to run at 100 fps. We use a standard tracking benchmark to demonstrate that our tracker outperforms state-of-the-art trackers. Our tracker trains from a set of labeled training videos and images, but we do not require any class-level labeling or information about the types of objects being tracked. GOTURN establishes a new framework for tracking in which the relationship between appearance and motion is learned offline in a generic manner. Our code and additional experiments can be found at http://davheld.github.io/GOTURN/GOTURN.html.

2 Related Work

Online training for tracking. Trackers for generic object tracking are typically trained entirely online, starting from the first frame of a video [3, 15, 19, 36]. A typical tracker will sample patches near the target object, which are considered as “foreground” [3]. Some patches farther from the target object are also sampled, and these are considered as “background.” These patches are then used to train a foreground-background classifier, and this classifier is used to score patches from the next frame to estimate the new location of the target object [19, 36]. Unfortunately, because these trackers are trained entirely online, they cannot take advantage of the large number of videos that are readily available for offline training and could be used to improve their performance.

Some researchers have also attempted to use neural networks for tracking within the traditional online training framework [7, 16, 24, 26, 27, 30, 34, 35, 37, 39], showing state-of-the-art results [7, 21, 30]. Unfortunately, neural networks are very slow to train, and if online training is required, then the resulting tracker will be very slow at test time. Such trackers range from 0.8 fps [26] to 15 fps [37], with the top performing neural-network trackers running at 1 fps on a GPU [7, 21, 30]. Hence, these trackers are not usable for most practical applications. Because our tracker is trained offline in a generic manner, no online training of our tracker is required, enabling us to track at 100 fps.

Model-based trackers. A separate class of trackers are the model-based trackers which are designed to track a specific class of objects [1, 11, 12]. For example, if one is only interested in tracking pedestrians, then one can train a pedestrian detector. During test-time, these detections can be linked together using temporal information. These trackers are trained offline, but they are limited because they can only track a specific class of objects. Our tracker is trained offline in a generic fashion and can be used to track novel objects at test time.

Other neural network tracking frameworks. A related area of research is patch matching [14, 38], which was recently used for tracking in [33], running at 4 fps. In such an approach, many candidate patches are passed through the network, and the patch with the highest matching score is selected as the tracking output. In contrast, our tracker passes only two images through the network, and the network regresses directly to the bounding box location of the target object. By avoiding the need to score many candidate patches, we are able to track objects at 100 fps.

Prior attempts have been made to use neural networks for tracking in various other ways [18], including visual attention models [4, 29]. However, these approaches are not competitive with other state-of-the-art trackers when evaluated on difficult tracking datasets.

3 Method

3.1 Method Overview

At a high level, we feed frames of a video into a neural network, and the network successively outputs the location of the tracked object in each frame. We train the tracker entirely offline with video sequences and images. Through our offline training procedure, our tracker learns a generic relationship between appearance and motion that can be used to track novel objects at test time with no online training required.

3.2 Input/Output Format

What to track. In case there are multiple objects in the video, the network must receive some information about which object in the video is being tracked. To achieve this, we input an image of the target object into the network. We crop and scale the previous frame to be centered on the target object, as shown in Fig. 2. This input allows our network to track novel objects that it has not seen before; the network will track whatever object is being input in this crop. We pad this crop to allow the network to receive some contextual information about the surroundings of the target object.

Fig. 2. Our network architecture for tracking. We input to the network a search region from the current frame and a target from the previous frame. The network learns to compare these crops to find the target object in the current image

In more detail, suppose that in frame \(t-1\), our tracker previously predicted that the target was located in a bounding box centered at \(c = (c_x, c_y)\) with a width of w and a height of h. At time t, we take a crop of frame \(t-1\) centered at \((c_x, c_y)\) with a width and height of \(k_1 w\) and \(k_1 h\), respectively. This crop tells the network which object is being tracked. The value of \(k_1\) determines how much context the network will receive about the target object from the previous frame.

Where to look. To find the target object in the current frame, the tracker should know where the object was previously located. Since objects tend to move smoothly through space, the previous location of the object will provide a good guess of where the network should expect to currently find the object. We achieve this by choosing a search region in our current frame based on the object’s previous location. We crop the current frame using the search region and input this crop into our network, as shown in Fig. 2. The goal of the network is then to regress to the location of the target object within the search region.

In more detail, the crop of the current frame t is centered at \(c' = (c'_x, c'_y)\), where \(c'\) is the expected mean location of the target object. We set \(c' = c\), which is equivalent to a constant position motion model, although more sophisticated motion models can be used as well. The crop of the current frame has a width and height of \(k_2 \, w\) and \(k_2 \, h\), respectively, where w and h are the width and height of the predicted bounding box in the previous frame, and \(k_2\) defines our search radius for the target object. In practice, we use \(k_1 = k_2 = 2\).
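As a concrete illustration of this crop geometry, below is a minimal Python sketch; the zero-padding behavior, the placeholder frames, and the variable names are ours, not taken from the paper's released implementation.

```python
import numpy as np

def crop_around(frame, cx, cy, w, h, k):
    """Crop a (k*h) x (k*w) window centered at (cx, cy), zero-padding
    wherever the window extends past the frame boundary."""
    H, W = frame.shape[:2]
    x1 = int(round(cx - k * w / 2.0)); x2 = int(round(cx + k * w / 2.0))
    y1 = int(round(cy - k * h / 2.0)); y2 = int(round(cy + k * h / 2.0))
    out = np.zeros((y2 - y1, x2 - x1, frame.shape[2]), dtype=frame.dtype)
    fx1, fy1 = max(x1, 0), max(y1, 0)
    fx2, fy2 = min(x2, W), min(y2, H)
    out[fy1 - y1:fy2 - y1, fx1 - x1:fx2 - x1] = frame[fy1:fy2, fx1:fx2]
    return out

# Placeholder frames and a previous prediction (c_x, c_y, w, h).
prev_frame = np.zeros((480, 640, 3), dtype=np.uint8)
curr_frame = np.zeros((480, 640, 3), dtype=np.uint8)
c_x, c_y, w, h = 320.0, 240.0, 80.0, 60.0

k1 = k2 = 2  # context / search-radius factors (Sect. 3.2)
target_patch = crop_around(prev_frame, c_x, c_y, w, h, k1)   # what to track
search_region = crop_around(curr_frame, c_x, c_y, w, h, k2)  # where to look (c' = c)
```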

As long as the target object does not become occluded and is not moving too quickly, the target will be located within this region. For fast-moving objects, the size of the search region could be increased, at a cost of increasing the complexity of the network. Alternatively, to handle long-term occlusions or large movements, our tracker can be combined with another approach such as an online-trained object detector, as in the TLD framework [19], or a visual attention model [2, 4, 29]; we leave this for future work.

Network output. The network outputs the coordinates of the object in the current frame, relative to the search region. The network’s output consists of the coordinates of the top left and bottom right corners of the bounding box.
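For concreteness, here is a hedged sketch of how such an output could be mapped back to full-image coordinates, assuming the four values are the (x1, y1, x2, y2) corners expressed as fractions of the search-region size and multiplied by the constant factor of 10 used in Sect. 3.3; this normalization convention is our assumption, not a statement about the released implementation.

```python
def decode_box(net_out, search_x1, search_y1, search_w, search_h, scale=10.0):
    """Map the network's 4 outputs, expressed relative to the search region
    and multiplied by `scale`, back to full-image pixel coordinates.
    (The exact normalization is assumed, not taken from the paper.)"""
    x1, y1, x2, y2 = net_out
    x1 = search_x1 + (x1 / scale) * search_w
    y1 = search_y1 + (y1 / scale) * search_h
    x2 = search_x1 + (x2 / scale) * search_w
    y2 = search_y1 + (y2 / scale) * search_h
    return x1, y1, x2, y2
```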

3.3 Network Architecture

For single-target tracking, we define a novel image-comparison tracking architecture, shown in Fig. 2 (note that related “two-frame” architectures have also been used for other tasks [10, 20]). In this model, we input the target object as well as the search region each into a sequence of convolutional layers. The output of these convolutional layers is a set of features that capture a high-level representation of the image.

The outputs of these convolutional layers are then fed through a number of fully connected layers. The role of the fully connected layers is to compare the features from the target object to the features in the current frame to find where the target object has moved. Between these frames, the object may have undergone a translation, rotation, lighting change, occlusion, or deformation. The function learned by the fully connected layers is thus a complex feature comparison which is learned through many examples to be robust to these various factors while outputting the relative motion of the tracked object.

In more detail, the convolutional layers in our model are taken from the first five convolutional layers of the CaffeNet architecture [17, 23]. We concatenate the output of these convolutional layers (i.e. the pool5 features) into a single vector. This vector is input to 3 fully connected layers, each with 4096 nodes. Finally, we connect the last fully connected layer to an output layer that contains 4 nodes which represent the output bounding box. We scale the output by a factor of 10, chosen using our validation set (as with all of our hyperparameters). Network hyperparameters are taken from the defaults for CaffeNet, and between each fully-connected layer we use dropout and ReLU non-linearities as in CaffeNet. Our neural network is implemented using Caffe [17].
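A minimal PyTorch-style sketch of this architecture is given below for illustration; the paper's implementation uses Caffe, and here torchvision's AlexNet stands in for the CaffeNet convolutional layers, so layer details may differ slightly.

```python
import torch
import torch.nn as nn
import torchvision

class GoturnNet(nn.Module):
    """Two-branch regression sketch: shared CaffeNet-style conv layers,
    concatenated pool5 features, three 4096-d FC layers, 4-d box output."""
    def __init__(self):
        super().__init__()
        # Stand-in for the first five CaffeNet conv layers; in practice these
        # are pre-trained on ImageNet and frozen (Sect. 4.3).
        self.conv = torchvision.models.alexnet(weights=None).features
        for p in self.conv.parameters():
            p.requires_grad = False
        feat_dim = 256 * 6 * 6  # pool5 feature size for 227x227 inputs
        self.fc = nn.Sequential(
            nn.Linear(2 * feat_dim, 4096), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Dropout(),
            nn.Linear(4096, 4),  # (x1, y1, x2, y2) of the predicted box
        )

    def forward(self, target, search):
        f_t = self.conv(target).flatten(1)   # features of the previous-frame crop
        f_s = self.conv(search).flatten(1)   # features of the search region
        return self.fc(torch.cat([f_t, f_s], dim=1))

net = GoturnNet()
out = net(torch.rand(1, 3, 227, 227), torch.rand(1, 3, 227, 227))  # shape (1, 4)
```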

3.4 Tracking

During test time, we initialize the tracker with a ground-truth bounding box from the first frame, as is standard practice for single-target tracking. At each subsequent frame t, we input crops from frame \(t-1\) and frame t into the network (as described in Sect. 3.2) to predict where the object is located in frame t. We continue to re-crop and feed pairs of frames into our network for the remainder of the video, and our network will track the movement of the target object throughout the entire video sequence.
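As an illustration, a sketch of this test-time loop is shown below, building on the `crop_around` sketch from Sect. 3.2; `preprocess` and `decode_to_frame` are assumed helper functions for the network's input and output conventions, not functions from the paper.

```python
def track(frames, init_box, net, k=2):
    """Track through a list of frames given the ground-truth box
    (cx, cy, w, h) in the first frame."""
    cx, cy, w, h = init_box
    boxes = [init_box]
    prev = frames[0]
    for curr in frames[1:]:
        target = crop_around(prev, cx, cy, w, h, k)            # which object to track
        search = crop_around(curr, cx, cy, w, h, k)            # where to look
        pred = net(preprocess(target), preprocess(search))     # assumed helper
        cx, cy, w, h = decode_to_frame(pred, cx, cy, w, h, k)  # assumed helper
        boxes.append((cx, cy, w, h))
        prev = curr
    return boxes
```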

4 Training

We train our network with a combination of videos and still images. The training procedure is described below. In both cases, we train the network with an L1 loss between the predicted bounding box and the ground-truth bounding box.
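For reference, the loss is simply the absolute difference between the four predicted and ground-truth box coordinates; a minimal PyTorch sketch with placeholder tensors:

```python
import torch
import torch.nn.functional as F

pred_boxes = torch.rand(50, 4)  # placeholder network outputs (batch of 50)
gt_boxes = torch.rand(50, 4)    # placeholder ground-truth boxes
loss = F.l1_loss(pred_boxes, gt_boxes)  # L1 loss used for training
```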

4.1 Training from Videos and Images

Our training set consists of a collection of videos in which a subset of frames in each video are labeled with the location of some object. For each successive pair of frames in the training set, we crop the frames as described in Sect. 3.2. During training time, we feed this pair of frames into the network and attempt to predict how the object has moved from the first frame to the second frame (shown in Fig. 3). We also augment these training examples using our motion model, as described in Sect. 4.2.

Fig. 3. Examples of training videos. The goal of the network is to predict the location of the target object shown in the center of the video frame in the top row, after being shifted as in the bottom row. The ground-truth bounding box is marked in green (Color figure online)

Our training procedure can also take advantage of a set of still images that are each labeled with the location of an object. This training set of images teaches our network to track a more diverse set of objects and prevents overfitting to the objects in our training videos. To train our tracker from an image, we take random crops of the image according to our motion model (see Sect. 4.2). Between these two crops, the target object has undergone an apparent translation and scale change, as shown in Fig. 4. We treat these crops as if they were taken from different frames of a video. Although the “motions” in these crops are less varied than the types of motions found in our training videos, these images are still useful to train our network to track a variety of different objects.

Fig. 4. Examples of training images. The goal of the network is to predict the location of the target object shown in the center of the image crop in the top row, after being shifted as in the bottom row. The ground-truth bounding box is marked in green (Color figure online)

4.2 Learning Motion Smoothness

Objects in the real-world tend to move smoothly through space. Given an ambiguous image in which the location of the target object is uncertain, a tracker should predict that the target object is located near to the location where it was previously observed. This is especially important in videos that contain multiple nearly-identical objects, such as multiple fruit of the same type. Thus we wish to teach our network that, all else being equal, small motions are preferred to large motions.

To concretize the idea of motion smoothness, we model the center of the bounding box in the current frame \( (c'_x, c'_y)\) relative to the center of the bounding box in the previous frame \((c_x, c_y)\) as

$$\begin{aligned} c'_x&= c_x + w \cdot \varDelta x&(1)\\ c'_y&= c_y + h \cdot \varDelta y&(2) \end{aligned}$$

where w and h are the width and height, respectively, of the bounding box of the previous frame. The terms \(\varDelta x\) and \(\varDelta y\) are random variables that capture the change in position of the bounding box relative to its size. In our training set, we find that objects change their position such that \(\varDelta x\) and \(\varDelta y\) can each be modeled with a Laplace distribution with a mean of 0 (see Supplementary Material for details). Such a distribution places a higher probability on smaller motions than larger motions.

Similarly, we model size changes by

$$\begin{aligned} w'&= w \cdot \gamma _w&(3)\\ h'&= h \cdot \gamma _h&(4) \end{aligned}$$

where \(w'\) and \(h'\) are the current width and height of the bounding box, and w and h are the previous width and height of the bounding box. The terms \(\gamma _w\) and \(\gamma _h\) are random variables that capture the size change of the bounding box. We find in our training set that \(\gamma _w\) and \(\gamma _h\) can each be modeled with a Laplace distribution with a mean of 1. Such a distribution places a higher probability on keeping the bounding box size close to its size in the previous frame.

To teach our network to prefer small motions to large motions, we augment our training set with random crops drawn from the Laplace distributions described above (see Figs. 3 and 4 for examples). Because these training examples are sampled from a Laplace distribution, small motions will be sampled more than large motions, and thus our network will learn to prefer small motions to large motions, all else being equal. We will show that this Laplace cropping procedure improves the performance of our tracker compared to the standard uniform cropping procedure used in classification tasks [23].

The scale parameters for the Laplace distributions are chosen via cross-validation to be \(b_x = 1/5\) (for the motion of the bounding box center) and \(b_s = 1/15\) (for the change in bounding box size). We constrain the random crop such that it must contain at least half of the target object in each dimension. We also limit the size changes such that \(\gamma _w, \gamma _h \in (0.6, 1.4)\), to avoid overly stretching or shrinking the bounding box in a way that would be difficult for the network to learn.
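A sketch of this sampling procedure under the stated parameters is given below; the rejection-style overlap check is our simplification of the "contain at least half of the target object in each dimension" constraint, not the exact test used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
b_x, b_s = 1.0 / 5, 1.0 / 15   # Laplace scales for center motion and size change

def sample_augmented_box(cx, cy, w, h):
    """Draw one augmented box (Eqs. 1-4), re-sampling until the size and
    overlap constraints from Sect. 4.2 are satisfied."""
    while True:
        dx, dy = rng.laplace(0.0, b_x, size=2)   # zero-mean: small shifts preferred
        gw, gh = rng.laplace(1.0, b_s, size=2)   # mean 1: small size changes preferred
        if not (0.6 < gw < 1.4 and 0.6 < gh < 1.4):
            continue                              # reject extreme size changes
        ncx, ncy = cx + w * dx, cy + h * dy       # Eqs. (1)-(2)
        nw, nh = w * gw, h * gh                   # Eqs. (3)-(4)
        # Simplified overlap check: keep the shifted crop roughly centered on
        # the target so at least half of it remains visible in each dimension.
        if abs(ncx - cx) < w / 2 and abs(ncy - cy) < h / 2:
            return ncx, ncy, nw, nh

print(sample_augmented_box(320.0, 240.0, 80.0, 60.0))
```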

4.3 Training Procedure

To train our network, each training example is alternately taken from a video or from an image. When we use a video training example, we randomly choose a video, and we randomly choose a pair of successive frames in this video. We then crop the video according to the procedure described in Sect. 3.2. We additionally take \(k_3\) random crops of the current frame, as described in Sect. 4.2, to augment the dataset with \(k_3\) additional examples. Next, we randomly sample an image, and we repeat the procedure described above, where the random cropping creates artificial “motions” (see Sects. 4.1 and 4.2). Each time a video or image is sampled, new random crops are produced on the fly to create additional diversity in our training procedure. In our experiments, we use \(k_3 = 10\), and we use a batch size of 50.
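The following sketch shows how such a batch could be assembled; `sample_video_pair`, `sample_image`, and `make_example` are hypothetical stand-ins for the data pipeline (frame sampling, cropping as in Sect. 3.2, and Laplace augmentation as in Sect. 4.2), not functions from the released code.

```python
def make_batch(videos, images, k3=10, batch_size=50):
    """Alternate between video and image examples; each sampled example also
    contributes up to k3 additional Laplace-augmented crops."""
    batch, use_video = [], True
    while len(batch) < batch_size:
        if use_video:
            prev, curr, box = sample_video_pair(videos)  # successive labeled frames
        else:
            img, box = sample_image(images)
            prev = curr = img        # still image: crops create artificial motion
        batch.append(make_example(prev, curr, box))              # crop as in Sect. 3.2
        for _ in range(k3):
            if len(batch) >= batch_size:
                break
            batch.append(make_example(prev, curr, box, augment=True))  # Sect. 4.2
        use_video = not use_video
    return batch
```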

The convolutional layers in our network are pre-trained on ImageNet [8, 31]. Because of our limited training set size, we do not fine-tune these layers, to prevent overfitting. We train this network with a learning rate of 1e-5, and other hyperparameters are taken from the defaults for CaffeNet [17].

5 Experimental Setup

5.1 Training Set

As described in Sect. 4, we train our network using a combination of videos and still images. Our training videos come from ALOV300++ [32], a collection of 314 video sequences. We remove 7 of these videos that overlap with our test set (see Supplementary Material for details), leaving us with 307 videos to be used for training. In this dataset, approximately every 5th frame of each video has been labeled with the location of some object being tracked. These videos are generally short, ranging from a few seconds to a few minutes in length. We split these videos into 251 for training and 56 for validation/hyper-parameter tuning. The training set consists of a total of 13,082 images of 251 different objects, or an average of 52 frames per object. The validation set consists of 2,795 images of 56 different objects. After choosing our hyperparameters, we retrain our model using our entire training set (training + validation). After removing the 7 overlapping videos, there is no overlap between the videos in the training and test sets.

Our training procedure also leverages a set of still images, as described in Sect. 4.1. These images were taken from the training set of the ImageNet Detection Challenge [31], in which 478,807 objects were labeled with bounding boxes. We randomly crop these images during training time, as described in Sect. 4.2, to create an apparent translation or scale change between two random crops. The random cropping procedure is only useful if the labeled object does not fill the entire image; thus, we filter out those images for which the bounding box fills at least 66 % of the size of the image in either dimension (chosen using our validation set). This leaves us with a total of 239,283 annotations from 134,821 images. These images help prevent overfitting by teaching our network to track objects that do not appear in the training videos.
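A one-line version of this filter, as we understand it (box and image sizes in pixels; the function name is ours):

```python
def keep_annotation(box_w, box_h, img_w, img_h, max_frac=0.66):
    """Keep only annotations whose box fills less than 66% of the image in
    both dimensions, so that random cropping can create apparent motion."""
    return box_w < max_frac * img_w and box_h < max_frac * img_h
```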

5.2 Test Set

Our test set consists of the 25 videos from the VOT 2014 Tracking Challenge [22]. We could not test our method on the VOT 2015 challenge [21] because there would be too much overlap between the test set and our training set. However, we expect the general trends of our method to still hold.

The VOT 2014 Tracking Challenge [22] is a standard tracking benchmark that allows us to compare our tracker to a wide variety of state-of-the-art trackers. The trackers are evaluated using two standard tracking metrics: accuracy (A) and robustness (R) [6, 22], which range from 0 to 1. We also compute accuracy errors \((1-A)\), robustness errors \((1-R)\), and overall errors \(1 - (A+R)/2\).
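For clarity, the overall error used below is simply:

```python
def overall_error(accuracy, robustness):
    """Overall error from Sect. 5.2: 1 - (A + R) / 2, with A and R in [0, 1]."""
    return 1.0 - (accuracy + robustness) / 2.0

print(overall_error(0.6, 0.7))  # 0.35
```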

Each frame of the video is annotated with a number of attributes: occlusion, illumination change, motion change, size change, and camera motion. The trackers are also ranked in accuracy and robustness separately for each attribute, and the rankings are then averaged across attributes to get a final average accuracy and robustness ranking for each tracker. The accuracy and robustness rankings are averaged to get an overall average ranking.

6 Results

6.1 Overall Performance

The performance of our tracker is shown in Fig. 5, which demonstrates that our tracker has good robustness and performs near the top in accuracy. Further, our overall ranking (computed as the average of accuracy and robustness) outperforms all previous trackers on this benchmark. We have thus demonstrated the value of offline training for improving tracking performance. Moreover, these results were obtained after training on only 307 short videos. Figure 5 as well as analysis in the supplement suggests that further gains could be achieved if the training set size were increased by labeling more videos. Qualitative results, as well as failure cases, can be seen in the Supplementary Video; currently, the tracker can fail due to occlusions or overfitting to objects in the training set.

Fig. 5. Tracking results from the VOT 2014 tracking challenge. Our tracker’s performance is indicated with a blue circle, outperforming all previous methods on the overall rank (average of accuracy and robustness ranks). The points shown along the black line represent training from 14, 37, 157, and 307 videos, with the same number of training images used in each case (Color figure online)

On an Nvidia GeForce GTX Titan X GPU with cuDNN acceleration, our tracker runs at 6.05 ms per frame (not including the 1 ms to load each image in OpenCV), or 165 fps. On a GTX 680 GPU, our tracker runs at an average of 9.98 ms per frame, or 100 fps. If only a CPU is available, the tracker runs at 2.7 fps. Because our tracker is able to perform all of its training offline, during test time the tracker requires only a single feed-forward pass through the network, and thus the tracker is able to run at real-time speeds.

In Fig. 6, we compare the speed and rank of our tracker to those of the 38 other trackers submitted to the VOT 2014 Tracking Challenge [22], using the overall rank score described in Sect. 5.2. We show the runtime of each tracker in EFO units (Equivalent Filter Operations), which normalize for the type of hardware that the tracker was tested on [22]. Figure 6 demonstrates that ours is one of the fastest trackers among the 38 baselines, while outperforming all other methods in the overall rank (computed as the average of the accuracy and robustness ranks). Note that some of these other trackers, such as ThunderStruck [22], also use a GPU.

Fig. 6. Rank vs runtime of our tracker (red) compared to the 38 baseline methods from the VOT 2014 Tracking Challenge (blue). Each blue dot represents the performance of a separate baseline method (best viewed in color). Accuracy and robustness metrics are shown in the supplement (Color figure online)

Our tracker is able to track objects in real-time due to two aspects of our model: First, we learn a generic tracking model offline, so no online training is required. Online training of neural networks tends to be very slow, preventing real-time performance. Online-trained neural network trackers range from 0.8 fps [26] to 15 fps [37], with the top performing trackers running at 1 fps on a GPU [7, 21, 30]. Second, most trackers evaluate a finite number of samples and choose the highest scoring one as the tracking output [24, 26, 27, 30, 33, 37, 39]. With a sampling approach, the accuracy is limited by the number of samples, but increasing the number of samples also increases the computational complexity. On the other hand, our tracker regresses directly to the output bounding box, so GOTURN achieves accurate tracking with no extra computational cost, enabling it to track objects at 100 fps.

6.2 How Does It Work?

How does our neural-network tracker work? There are two hypotheses that one might propose:

  1. The network compares the previous frame to the current frame to find the target object in the current frame.

  2. The network acts as a local generic “object detector” and simply locates the nearest “object.”

We differentiate between these hypotheses by comparing the performance of our network (shown in Fig. 2) to the performance of a network which does not receive the previous frame as input (i.e. the network only receives the current frame as input). For this experiment, we train each of these networks separately. If the network does not receive the previous frame as input, then the tracker can only act as a local generic object detector (hypothesis 2).

Fig. 7. Overall tracking errors for our network which receives as input both the current and previous frame, compared to a network which receives as input only the current frame (lower is better). This comparison allows us to disambiguate between two hypotheses that can explain how our neural-network tracker works (see Sect. 6.2). Accuracy and robustness metrics are shown in the supplement

Figure 7 shows the degree to which each of the hypotheses holds true for different tracking conditions. For example, when there is an occlusion or a large camera motion, the tracker benefits greatly from using the previous frame, which enables the tracker to “remember” which object is being tracked. Figure 7 shows that the tracker performs much worse in these cases when the previous frame is not included. In such cases, hypothesis 1 plays a large role, i.e. the tracker is comparing the previous frame to the current frame to find the target object.

On the other hand, when there is a size change or no variation, the tracker performs slightly worse when using the previous frame (or approximately the same). Under a large size change, the corresponding appearance change is too drastic for our network to perform an accurate comparison between the previous frame and the current frame. Thus the tracker is acting as a local generic object detector in such a case and hypothesis 2 is dominant. Each hypothesis holds true in varying degrees for different tracking conditions, as shown in Fig. 7.

6.3 Generality vs Specificity

How well can our tracker generalize to novel objects not found in our training set? For this analysis, we separate our test set into objects for which at least 25 videos of the same class appear in our training set and objects for which fewer than 25 videos of that class appear in our training set. Figure 8 shows that, even for test objects that do not have any (or very few) similar objects in our training set, our tracker performs well. The performance continues to improve even as videos of unrelated objects are added to our training set, since our tracker is able to learn a generic relationship between an object’s appearance change and its motion that can generalize to novel objects.

Fig. 8. Overall tracking errors for different types of objects in our test set as a function of the number of videos in our training set (lower is better). Class labels are not used by our tracker; these labels were obtained only for the purpose of this analysis. Accuracy and robustness metrics are shown in the supplement

Additionally, our tracker can also be specialized to track certain objects particularly well. Figure 8 shows that, for test objects for which at least 25 videos of the same class appear in the training set, we obtain a large improvement as more training videos of those types of objects are added. This allows a user to specialize the tracker for particular applications. For example, if the tracker is being used for autonomous driving, then the user can add more videos of people, bikes, and cars to the training set, and the tracker will learn to track those objects particularly well. At the same time, Fig. 8 also demonstrates that our tracker can track novel objects that do not appear in our training set, which is important when tracking objects in uncontrolled environments.

6.4 Ablative Analysis

In Table 1, we show which components of our system contribute the most to our performance. We train our network with random cropping from a Laplace distribution to teach our tracker to prefer small motions to large motions (i.e. motion smoothness), as explained in Sect. 4.2. Table 1 shows the benefit of this approach compared to the baseline of uniformly sampling random crops (“No motion smoothness”), as is typically done for classification [23]. As shown, we reduce errors by 20 % by drawing our random crops from a Laplace distribution.

Table 1 also shows the benefit of using an L1 loss compared to an L2 loss. Using an L1 loss significantly reduces the overall tracking errors from 0.43 to 0.24. Because the L2 penalty is relatively flat near 0, the network does not sufficiently penalize outputs that are close but not correct, and it would often output a bounding box that was slightly too large or too small. When applied to a sequence of frames, the bounding box would then grow or shrink without bound until the predicted bounding box was just a single point or the entire image. In contrast, an L1 loss penalizes answers that are only slightly incorrect more harshly, which keeps the bounding box size closer to the correct size and prevents the bounding box from shrinking or growing without bound.

Table 1. Comparing our full GOTURN tracking method to various modified versions of our method to analyze the effect of different components of the system

We train our tracker using a combination of images and videos. Table 1 shows that, given the choice between images and videos, training only on videos gives a much bigger improvement to our tracker's performance. At the same time, training on both videos and images gives the maximum performance for our tracker. Training on a small number of labeled videos has taught our tracker to be invariant to background motion, out-of-plane rotations, deformations, lighting changes, and minor occlusions. Training on a large number of labeled images has taught our network how to track a wide variety of different types of objects. By training on both videos and images, our tracker learns to track a variety of object types under different conditions, achieving maximum performance.

7 Conclusions

We have demonstrated that we can train a generic object tracker offline such that its performance improves by watching more training videos. During test time, we run the network in a purely feed-forward manner with no online fine-tuning required, allowing the tracker to run at 100 fps. Our tracker learns offline a generic relationship between an object’s appearance and its motion, allowing our network to track novel objects at real-time speeds.