Advances in Deep Learning Methods for Visual Tracking: Literature Review and Fundamentals

Recently, deep learning has achieved great success in visual tracking tasks, particularly in single-object tracking. This paper provides a comprehensive review of state-of-the-art single-object tracking algorithms based on deep learning. First, we introduce basic knowledge of deep visual tracking, including fundamental concepts, existing algorithms, and previous reviews. Second, we briefly review existing deep learning methods by categorizing them into data-invariant and data-adaptive methods based on whether they can dynamically change their model parameters or architectures. Then, we summarize the general components of deep trackers. In this way, we systematically analyze the novelties of several recently proposed deep trackers. Thereafter, popular datasets such as Object Tracking Benchmark (OTB) and Visual Object Tracking (VOT) are discussed, along with the performance of several deep trackers on them. Finally, based on observations and experimental results, we discuss three different characteristics of deep trackers, i.e., the relationships between their general components, the exploration of more effective tracking frameworks, and the interpretability of their motion estimation components.


Introduction
Single object tracking is a fundamental and critical task in the fields of computer vision and video processing. It has various practical applications in areas such as navigation, robotics, traffic control, and augmented reality. Therefore, numerous efforts have been devoted to overcoming the challenges in the single-object tracking task and developing effective tracking algorithms. However, this task remains challenging because of the difficulty in balancing the effectiveness and efficiency of the tracking algorithms. In addition, existing algorithms are not sufficiently robust under complex scenes with multiple issues, e.g., background clutter, motion blur, viewpoint changes, and illumination variations.
Single object tracking aims at locating a given target in all frames of a video. To this end, tracking algorithms typically extract certain features from the template of the target appearance and from a search frame, and then iteratively match these features to locate the object. To retain an effective target template, the appearance of the object in the first frame is taken as the initialization and is continuously updated during tracking. In contrast, the matching framework is manually designed and fixed during the entire tracking process. As a result, the extracted features are required to be representative enough to accurately distinguish the object from the background. However, because these extracted features cannot comprehensively reflect the characteristics of an object, conventional tracking algorithms [1−4] tend to have relatively poor performance. Therefore, improvements to these conventional tracking algorithms are twofold: exploring features that better reflect the characteristics of the object, and proposing more effective matching frameworks. For example, the template-based [1,5,6], subspace-based [7], and sparse-representation [8,9] methods use certain elements to represent an object, rather than directly using cropped pixels or image patches. Frameworks such as boosting [10,11], support vector machines [12], random forests [13], multiple instance learning [14], and metric learning [15] have also been used to enhance the matching ability of tracking algorithms.
With the advancements in deep learning mechanisms [16], numerous studies have applied deep learning to computer vision [17,18], speech recognition [19,20], and natural language processing tasks [21,22]. Motivated by these breakthroughs, deep learning mechanisms have also been introduced for the single object tracking task [23−26]. Meanwhile, several tracking datasets, such as Object Tracking Benchmark 2013 (OTB-2013) [27] and Visual Object Tracking 2013 (VOT-2013) [28], have been proposed to evaluate the performance of these tracking algorithms. With these developments, several papers have reviewed the advancements and challenges in deep-learning-based tracking algorithms. However, according to our statistical results (see Table 1), none of these existing reviews discusses tracking methods that were recently published in top conferences and journals. In addition, existing reviews mostly concentrate on classifying deep trackers according to their methodologies or on evaluating their performance; none of them details the specific components of existing deep trackers. For example, in the two latest reviews, Li et al. [32] present comprehensive classification results based on characteristics such as network architecture, network function, and training frameworks, and Yao et al. [34] systematically detail methods that can jointly conduct the video object segmentation and visual object tracking tasks. To facilitate the development of single object tracking algorithms based on deep learning, in this work, we summarize the general components of existing deep-learning-based tracking algorithms and present the popular components of deep neural networks that are proposed for improving the representative ability of features. In addition, we compare recently proposed deep trackers by collecting and analyzing their metrics on benchmark datasets. In this way, we provide some important observations. For example, through these comparisons, we find that attention mechanisms are widely used to combine online-updating methods with offline-trained ones. We also find that because different components of deep trackers have their own special characteristics, improving only a single component sometimes cannot facilitate the tracking process.
The rest of this paper is organized as follows. In Section 2, we briefly introduce fundamental frameworks and novel mechanisms of deep learning methods. In Section 3, we present the general components of deep trackers. In Section 4, the most popular tracking datasets are detailed and compared with each other. We then present popular metrics used for evaluating the tracking performance on popular tracking datasets. With these metrics, we present and compare the performance of recently published deep trackers in Section 5. Based on these comparison results, we provide several observations in Section 6. Finally, Section 7 summarizes this work.

Deep learning models
Deep learning models (i.e., deep neural networks) have been widely studied and applied to several computer vision tasks [35−39], such as image classification [35], object detection [36], and image restoration [38]. In general, the pipeline of deep neural networks can be seen in Fig. 1. To generate the desired outputs, different inputs (e.g., a single image for image restoration or continuous frames for video captioning) are first fed through a pre-processing module for data augmentation, which aims at alleviating the heavy demand for training data and enhancing the robustness of the networks. After that, several feature processing modules are used to capture the characteristics of the inputs. Based on the captured characteristics and manual knowledge, a feature post-processing module is then used to generate outputs, which are supervised by computing the distance to the ground truth. Finally, the calculated loss function is used to update the network parameters through back propagation. In this pipeline, it is easy to find that the most important part is the feature processing module, whose components are organized in encoder-decoder (denoted by a gray rectangle) or directly-stacked (denoted by a green rectangle) schemes. However, as reported in [40,41], these two schemes are time-consuming, error-prone and data-invariant, always leading to poor generalization performance. Therefore, numerous feature processing components have been proposed to address the above issues.

Table 1 Statistical results of existing reviews related to single object tracking algorithms. In the newest work column, the year of publication of the latest work is presented.
In addition, the skip connections between these components and the architectures of these components are also explored in the field of neural architecture search (NAS) [41] . According to whether the feature processing module changes its configurations with respect to specific inputs, we roughly split existing deep neural networks into two classes: data-invariant and data-adaptive methods.

Data-invariant methods
In the early stage of developing deep neural networks, networks are static models, whose architectures are fixed and whose parameters are iteratively updated in the training stage. Once a network is trained, its architecture and parameters are used to handle all testing samples. Therefore, these networks can be considered data-invariant methods. Next, we detail three popular types of data-invariant methods and discuss their improvements.

Convolution neural networks
The convolution neural network is the first generation of deep neural networks. Given an input, the convolution neural network learns high-dimensional features from the input and generates supervised outputs with respect to the learned features. In this process, several layers, such as convolution layers, pooling layers, batch normalization layers, rectified linear units (ReLU), and fully connected layers, are used to first magnify the feature channels (sometimes also changing the spatial resolution), and then gradually reduce the channel numbers. The convolution layer, whose function is shown in Fig. 2, is therefore the most basic component of convolution neural networks. In general, the convolution layer uses a trained kernel to convolve over the entire input. As such, the resolution of the output generated by a convolution layer is decided by factors such as the dimension, stride, dilation, and spatial resolution of the convolution kernel. All these factors are closely related to the receptive field of each convolution layer. Moreover, the factors of each convolution layer jointly influence the receptive field of the whole convolution neural network.

Fig. 2 Illustration of a convolution layer, which is the basic component of convolution neural networks. With different convolution kernels, the convolution layer can change the channels and spatial resolutions of the input feature.
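To make the influence of these factors concrete, the following sketch (in PyTorch, with illustrative shapes chosen for this example) shows how the kernel size, stride, padding, and dilation jointly determine the output resolution of a single convolution layer.

```python
import torch
import torch.nn as nn

# A single convolution layer with illustrative hyper-parameters.
x = torch.randn(1, 3, 224, 224)  # (batch, channels, height, width)
conv = nn.Conv2d(in_channels=3, out_channels=64,
                 kernel_size=3, stride=2, padding=1, dilation=1)
y = conv(x)
print(y.shape)  # torch.Size([1, 64, 112, 112])

# The output spatial size follows the standard formula:
# out = floor((in + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1
out = (224 + 2 * 1 - 1 * (3 - 1) - 1) // 2 + 1
print(out)  # 112
```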
To enlarge the receptive field of convolution neural networks, the two simplest methods are increasing their depth or width. For example, in the heuristic work [42], multiple convolution layers with small convolution kernels are incorporated along with a max-pooling layer to form the VGG19 network. To obtain a network as deep as possible, He et al. [43] employ residual connections in the hierarchical convolution layers. Consequently, Res101 and Res152 [43] are approximately 10 times deeper than their predecessors, and therefore significantly outperform their counterparts. Following these two representative works, designing deeper networks is considered the most useful method to improve model performance [44,45]. However, this approach is highly empirical and is limited by computing resources. As a substitute, some methods try to increase the width of convolution neural networks to enlarge their receptive field. Szegedy et al. [46] carefully find out the optimal local sparse structure in a convolution neural network by using the Hebbian principle [47]. Based on this sparse structure, the Inception modules are introduced by implementing parallel branches. Later, Szegedy et al. [48] use hierarchical convolution layers with small kernels to replace convolution layers with large kernels and thus propose Inception-v2. However, the modules proposed in [46,48] are too heterogeneous to be conveniently reused across different tasks. Therefore, Szegedy et al. [45] incorporate the existing Inception modules with residual connections to form Inception-v4 and Inception-ResNet. Although increasing the depth or width of neural networks yields remarkable performance, these two methods still lack a principled basis. To facilitate the design of neural networks and alleviate the restriction of computing resources, many studies have been conducted on the characteristics of the convolution layers. To name a few, Huo et al. [49] introduce a feature replay algorithm to learn the parameters of the convolution layers. Similarly, Jeong and Shin [50] modify the standard convolution layer with a channel-selectivity function and a spatial shifting function to dynamically emphasize important features, which have the highest influence on the output vector. Qiao et al. [51] also find that owing to the cascaded architecture of the convolution layers, it is unnecessary to update all the features during image recognition. As well as these methods that abandon irrelevant or unimportant features to improve overall performance [52], there are also some methods that improve their performance by exploring the interpretability of convolution neural networks [53−57]. Geirhos et al. [53] find that convolution neural networks (CNNs) trained with different datasets are biased toward image textures or shapes. For example, ImageNet-trained [54] CNNs are strongly biased toward image textures. By systematically exploring such a bias, Geirhos et al. [53] not only improve the performance of existing networks, but also improve their robustness.

Recurrent neural networks
Unlike convolution neural networks, recurrent neural networks (RNNs) are proposed to handle sequential data such as videos and natural language. When handling such data, each recurrent cell not only obtains the hidden states from previous cells, but also takes inputs based on the timestamp. Therefore, both short-term and long-term relationships in the sequential information are dynamically learned and transferred to the subsequent cells. To this end, RNNs maintain a vector of activations for each timestamp, which makes most RNNs extremely deep. As a consequence, RNNs are difficult to train because of the exploding and vanishing gradient problems [58,59]. It is widely known that the first problem can be easily addressed by employing a hard constraint over the norm of the gradients [60,61]. However, the vanishing gradient problem is more complicated, and two types of methods have been proposed to address it.
On the one hand, novel models such as long short-term memory (LSTM) [62] and the gated recurrent unit (GRU) [63] are designed. Compared with LSTM, GRU makes it easier to forget long-term information, which is often irrelevant to recent inputs, and thus has better performance on most sophisticated tasks [64]. In detail, the gated recurrent unit contains a reset gate and an update gate, as shown in Fig. 3. At each timestamp, the reset gate takes the previous state h_{t−1} to drop any information that is irrelevant in the future, while the update gate controls the degree of information that should be carried over from the previous hidden state to the current hidden state. When the reset gate is close to 0, the hidden state is forced to ignore the previous hidden state and reset with the current input X_t. Thus, compared with LSTM, GRU makes it easier to drop useless information from the previous hidden states. The same methodology is also used in later proposed RNNs [65,66]. Other attempts to overcome the vanishing gradient problem involve using powerful second-order optimization algorithms to regularize the weights of RNNs [67,68] or carefully initializing the weights of RNNs [69]. In general, the above methods aim at proposing deeper recurrent neural networks, rather than improving the mechanisms of previous RNNs. As a consequence, several RNNs can only handle unidirectional temporal relationships in bidirectional sequential data.

Fig. 3 At each timestamp, GRU takes the hidden state h_{t−1} from the previous cell and the current input X_t to model the temporal information between sequential data [63].
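The gating computation described above can be written compactly; the following minimal sketch (PyTorch, biases omitted, weight names hypothetical) implements one GRU step with the reset and update gates.

```python
import torch

def gru_cell(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    """One GRU step with an update gate z_t and a reset gate r_t (biases omitted)."""
    z_t = torch.sigmoid(x_t @ W_z + h_prev @ U_z)           # update gate
    r_t = torch.sigmoid(x_t @ W_r + h_prev @ U_r)           # reset gate
    h_tilde = torch.tanh(x_t @ W_h + (r_t * h_prev) @ U_h)  # r_t ~ 0 drops the previous state
    # z_t controls how much of the previous hidden state is carried over
    return z_t * h_prev + (1 - z_t) * h_tilde

d_in, d_h = 8, 16
x_t, h_prev = torch.randn(1, d_in), torch.zeros(1, d_h)
Ws = [torch.randn(d_in, d_h) * 0.1 for _ in range(3)]
Us = [torch.randn(d_h, d_h) * 0.1 for _ in range(3)]
h_t = gru_cell(x_t, h_prev, Ws[0], Us[0], Ws[1], Us[1], Ws[2], Us[2])
```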
To fully exploit the bidirectional temporal relationship, there are also several methods that incorporate other mechanisms into RNN. For example, for input sequences whose starts and ends are known in advance, the bidirectional RNNs can make full use of both forward and backward temporal relationships [70−78] . By using a CNN to learn hidden states for each recurrent cell, Liu et al. [73] propose a spatially variant recurrent network for several image restoration tasks. Recently, a fully regulated neural network (NN) with a double hidden layer structure is designed, and an adaptive global sliding-mode controller is proposed for a class of dynamic systems [77] . Overall, RNNs have been continuously enhanced to process the bidirectional relationships, and the forgetting mechanism is important for the above enhancements.

Graph neural networks
As we discussed earlier, CNNs and RNNs are widely used to process Euclidean data (e.g., RGB images) or sequential data. However, these two networks cannot handle data that are represented in the form of graphs. For example, in chemistry, molecules are represented as graphs, and their bioactivity needs to be identified as edges constructed between multiple nodes. In a citation network, articles are linked to each other via citations, and most articles can be categorized into different groups. Therefore, the wide application scenarios of graph data have imposed significant challenges along with opportunities for deep learning methods.
Recently, several graph neural networks (GNNs) have been proposed to address the above challenges. In general, an image can be considered as a fully connected graph, where all the pixels are connected to their adjacent pixels. Similarly, as Fig. 4 illustrates, the standard convolution layer can also be seen as a special graph convolution layer, where all convolution kernels are connected in an undirected manner [79]. Therefore, GNNs can be achieved by imposing constraints on the kernels of traditional CNNs. Gori et al. [80] first propose a GNN-based method to process data with different characteristics, such as directed, undirected, labeled, and cyclic graphs. After this pioneering work, Scarselli et al. [81,82] find that, with a GNN, the representation of a single node can be obtained by propagating the neighbor information in an iterative manner until a stable fixed point is reached. With the above foundation, graphs are widely embedded into convolution layers and neural networks. Bruna et al. [83] develop the graph convolution based on spectral graph theory. In [84], graph kernels, whereby graphs or nodes can be embedded into the feature space using a mapping function, are proposed to study the random walk problem [85]. More recently, researchers find that Euclidean data and sequential data can also be represented by special graphs. Therefore, graph neural networks are also used to handle these data [86−88]. For example, to capture the topology and long-range dependencies of a lane graph, Liang et al. [86] extend existing graph convolutions with multiple adjacency matrices and along-lane dilation. By organizing the features in different channels as nodes, a representative graph layer is proposed to dynamically sample the most representative features [88], leading to lower computing consumption and more representative features. Since the components of most data are sparsely or closely related to their neighbors, most data can be represented in the form of graphs. Hence, possible future research directions include transforming non-graph data into graphs and employing GNNs to handle the transformed graphs.
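As a concrete illustration of neighbor propagation, the sketch below (PyTorch; a commonly used normalized formulation rather than the exact operator of [80−84]) aggregates each node's neighbor features through a normalized adjacency matrix and then applies a shared linear transform.

```python
import torch

def gcn_layer(H, A, W):
    """One graph convolution step: propagate neighbor features through the
    normalized adjacency, then apply a shared linear transform."""
    A_hat = A + torch.eye(A.size(0))                 # add self-loops
    d_inv_sqrt = A_hat.sum(dim=1).rsqrt()            # D^{-1/2}
    A_norm = d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)
    return torch.relu(A_norm @ H @ W)

A = torch.tensor([[0., 1., 0.],                      # 3-node chain graph
                  [1., 0., 1.],
                  [0., 1., 0.]])
H = torch.randn(3, 4)                                # node features
W = torch.randn(4, 2)                                # learnable weights
print(gcn_layer(H, A, W).shape)                      # torch.Size([3, 2])
```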

Data-adaptive methods
As we mentioned previously, early deep neural networks are designed in the static mode. That is, once the network is trained, its architecture and parameters will not change regardless of the inputs. Therefore, these static networks do not always generalize well, making practical applications difficult. To address this issue, several methods are proposed to change their components with respect to the input. Thus, we categorize these methods as data-adaptive methods and introduce several representative data-adaptive methods in the following sections.

Attention mechanisms
The field of natural language processing (NLP) has witnessed significant development of attention mechanisms, starting from the pioneering work [21]. Bahdanau et al. [21] introduce various attention factors and weight assignment functions. In [89], these factors are further considered, and the inner product of the vectors, which encode the query and key contents, is recommended for computing the attention weights. Later, the landmark work [90] proposes a new standard, and its follow-up studies demonstrate that relative positions can provide better generalization ability than absolute positions [18, 91−93]. Motivated by the success in NLP tasks, attention mechanisms are also employed in computer vision (CV) applications [94−99]. However, unlike the attention mechanisms used in NLP, the key and query of the attention mechanisms in CV refer to certain visual elements, while the formulation of attention mechanisms in CV is similar to that of Transformer [90].
As Fig. 5 shows, given a query element (i.e., the yellow dot) and several key elements (i.e., all colored dots), the attention mechanism aims at adaptively highlighting the key contents based on the attention weights that measure the compatibility of the query-key pairs. Therefore, based on the aggregated domain, existing attention mechanisms can be divided into two categories: spatial-wise and channel-wise attention mechanisms.

Fig. 5 An example of the attention mechanisms used in computer vision tasks. Given the query-key pairs, the attention mechanism computes scores for each key [94−99].
To name a few, an attention scaling network is introduced to learn attention scores of different regions according to an estimation result of the input images [100] . Zhang et al. [101] employ attention mechanisms in the generative adversarial network to model attention-driven and long-range dependency for image generation tasks. In contrast, Dai et al. [102] develop a novel attention module to adaptively rescale the channel-wise features by using second-order feature statistics. In general, the attention mechanism is biologically plausible and has been widely used in both CV and NLP fields. However, the best configuration of the query-key pairs remains unknown [103] . Hence, there is much room for improving attention mechanisms.
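The query-key compatibility computation common to these works can be summarized by scaled dot-product attention [90]; the sketch below (PyTorch, with illustrative shapes) computes the attention weights and aggregates the key contents.

```python
import torch

def attention(Q, K, V):
    """Scaled dot-product attention: softmax-normalized query-key
    compatibility scores are used to aggregate the values [90]."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5  # query-key compatibility
    weights = torch.softmax(scores, dim=-1)        # attention weights
    return weights @ V                             # weighted sum of the contents

q = torch.randn(1, 5, 64)       # 5 query elements
k = v = torch.randn(1, 9, 64)   # 9 key elements and their contents
out = attention(q, k, v)        # (1, 5, 64)
```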

Dynamic neural networks
Dynamic neural networks, which can adjust their network architectures or parameters depending on the corresponding inputs, have recently been studied in the computer vision field. In early studies, dynamic neural networks are proposed for the image classification and semantic segmentation tasks by dropping blocks [104−107] or pruning channels [73,108] for efficient inference. For example, in [109], a soft conditional gate is proposed to select scale transform paths for each layer. Wang et al. [105] attempt to skip several convolution blocks by using a reinforcement learning gating function. Dynamic networks with gating functions, which adjust network architectures such as connections according to the inputs, may appear similar to neural architecture search methods. However, in most NAS methods, the model parameters are iteratively initialized and trained, while the model architectures are continuously varied. In contrast, the parameters of dynamic neural networks are all initialized before the training phase, and their architectures always remain unchanged. In addition, most NAS methods search model components based on predefined backbones, whereas dynamic neural networks with gating functions remove unnecessary components from predefined intact networks.

Compared with dynamic networks with gating functions, the other type of dynamic neural network only learns parameters for certain components based on the inputs. Among these models, the most important one is [110], where the dynamic filter network is proposed to learn filtering operations such as local spatial transformations, selective blurring/deblurring, or adaptive feature extraction. After that, the method proposed in [111] dynamically learns two 1D kernels to replace standard 2D convolution kernels. According to the reported results, this method not only remarkably reduces the number of parameters, but also improves the performance of video frame interpolation. Since then, dynamic neural networks with parameter learners have been found useful for image/video restoration tasks. In [112], a dynamic neural network is designed to learn upsampling filters and residual images, which avoids the need to explicitly compensate for the motion in restored videos. Overall, this second type of dynamic neural network can be illustrated as in Fig. 6, which is proposed in [113−115]. In Fig. 6, several standard convolution layers are used to learn dynamic kernels and biases, which are respectively used to convolve a low-resolution image and to further enhance the quality of the high-resolution output. As discussed in [115], a dynamic neural network that learns the parameters of convolution layers can better handle variational degradation in the inputs. Therefore, dynamic neural networks can be further used to conduct various tasks. However, to the best of our knowledge, there is no work that effectively combines these two kinds of dynamic neural networks.

Fig. 6 An example of the dynamic neural network with a parameter learner. With the low-resolution input I_{m−1}, the dynamic neural network learns a set of convolution kernels and biases, which are used to generate the high-resolution output I_m [113−115].
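To illustrate the parameter-learner idea, the following sketch (PyTorch; a simplified depthwise variant in the spirit of the dynamic filter network [110], with hypothetical module and variable names) predicts a convolution kernel from the input itself and then applies that kernel to the same input.

```python
import torch
import torch.nn.functional as F

class DynamicFilterLayer(torch.nn.Module):
    """Predicts a depthwise convolution kernel from the input and applies it
    to that input (a simplified sketch, not the exact design of [110])."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.k = k
        # small "parameter learner": global pooling + 1x1 conv predicts
        # one k x k kernel per channel, per sample
        self.predictor = torch.nn.Sequential(
            torch.nn.AdaptiveAvgPool2d(1),
            torch.nn.Conv2d(channels, channels * k * k, kernel_size=1))

    def forward(self, x):
        b, c, h, w = x.shape
        kernels = self.predictor(x).view(b * c, 1, self.k, self.k)
        # grouped convolution applies each predicted kernel to its own channel
        out = F.conv2d(x.reshape(1, b * c, h, w), kernels,
                       padding=self.k // 2, groups=b * c)
        return out.view(b, c, h, w)

layer = DynamicFilterLayer(channels=8)
y = layer(torch.randn(2, 8, 16, 16))  # kernels differ for each input sample
print(y.shape)  # torch.Size([2, 8, 16, 16])
```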

Other neural networks
Recently, neurological research has significantly progressed and continues to reveal new characteristics of biological neurons and brains. These new characteristics have led to several new types of artificial neural networks that can selectively make forward inferences based on their inputs. For example, Dai et al. [116] find that using fixed convolution kernels to process different inputs inevitably limits the ability of CNNs to model geometric transformations. Therefore, a 2D offset is learned from the preceding feature maps to regularize the learned convolution kernels. After this work, improved deformable convolutional networks such as Deformable ConvNet V2 [117], SqueezeSeg V3 [118] and Variational Context-Deformable ConvNets [119] are also proposed. As shown in Fig. 7, the offsets are obtained by applying a standard convolution layer over the same input feature map, and thus they have the same spatial resolution as the input feature map. When applied to different tasks, the standard convolution layer can be replaced by different operations. Therefore, the deformable convolution layer not only dynamically learns the feature maps, but can also be efficiently used for various tasks. On the other hand, spiking neural networks (SNNs) are introduced to mimic how information is encoded and processed in the human brain by employing spiking neurons as computation units [120,121]. Unlike standard CNNs, SNNs use temporal aspects for information transmission as in biological neural systems [122], thereby providing a sparse yet powerful computing ability [123]. However, due to the sparse nature of spike events, SNNs are mostly used for image classification, except in [121], where SNNs are successfully employed for the object detection task by using the DNN-to-SNN conversion method [124].

Fig. 7 Illustration of a 3 × 3 deformable convolution. Conv indicates the standard convolution layers [116−119].
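A minimal sketch of this offset mechanism, using the deform_conv2d operator from torchvision (channel counts and shapes are illustrative assumptions):

```python
import torch
from torchvision.ops import deform_conv2d

x = torch.randn(1, 16, 32, 32)                   # input feature map
offset_conv = torch.nn.Conv2d(16, 2 * 3 * 3,     # one (dx, dy) per kernel cell
                              kernel_size=3, padding=1)
weight = torch.randn(32, 16, 3, 3)               # main 3 x 3 convolution kernel

offset = offset_conv(x)                          # same spatial size as the input
y = deform_conv2d(x, offset, weight, padding=1)  # convolve on the deformed grid
print(y.shape)  # torch.Size([1, 32, 32, 32])
```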

Deep tracker components
In this section, we detail several existing deep trackers, whose components can be generally summarized as in Fig. 8. First, considering that most deep trackers use cascaded blocks (e.g., convolution layers or residual blocks) to extract the features of the tracked targets, the feature extraction module is summarized and discussed. Second, because tracking algorithms aim at locating the same target in different frames, the mechanisms for estimating motion patterns are compared. Third, we also discuss how deep trackers obtain the bounding boxes, which indicate the precise locations of the tracked targets. Finally, we discuss how loss functions influence the performance of deep trackers.

Feature extraction module
Feature extraction is important for tracking algorithms. In general, the extracted features should effectively and robustly represent the tracked target. However, such a requirement is difficult to meet due to challenges such as illumination variations or appearance variations of the tracked targets. To address these challenges, several modules are designed in previous methods to extract various local and statistical features. For example, Ross et al. [7] propose a tracking method to incrementally learn a low-dimensional subspace representation, which is categorized as gray features in [30]. By using mean shift iterations, Comaniciu et al. [125] introduce the color distributions of probable target models and target candidates. A texture feature is also proposed in [126]. Henriques et al. [127] extend the traditional RGB space to an 11-dimensional color space and use the principal component analysis method to extract features from this color space. Later, Henriques et al. [128] further incorporate a correlation filter tracker with the kernel space and propose the histogram of oriented gradient feature. Besides these methods, other methods also employ multiple manually designed features to represent the tracked target [129]. However, these manually designed features only concentrate on certain characteristics of the tracked targets; thus, they are easily invalidated during tracking.
Since deep neural networks are effective at extracting features with a powerful representative ability, they have become substitutes for the above manually designed features. For example, hierarchical convolution layers are introduced into the correlation filter algorithm [130]. Experimental results reported in [130] indicate that low-level features contain more information about the target location, whereas high-level features include more semantic information and are more robust than low-level ones. Inspired by this observation, Qi et al. [131] extend the three convolution layers used in [130] to six layers, and use dynamic parameters to adaptively fuse features from these six layers. After these methods, the methodology of the correlation filter tracking algorithms is utilized to form the Siamese networks. In [132], the first Siamese tracker, i.e., Siamese-FC, is designed with two parallel branches, which respectively extract features from the first frame and the remaining ones. With these two branches, the features from the first frame are taken as a convolution kernel to scan all positions of the features from the other frames. Finally, a response map is obtained, and the position of the maximum value in this map is taken as the target location.
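This template-as-kernel cross-correlation can be expressed in a few lines; the sketch below (PyTorch, with feature shapes loosely borrowed from the Siamese-FC setting and hypothetical names) scans the template features over the search-frame features and reads off the peak location.

```python
import torch
import torch.nn.functional as F

def siamese_response(template_feat, search_feat):
    """Cross-correlation as in Siamese-FC [132]: the template feature acts as
    a convolution kernel scanned over the search-frame feature."""
    return F.conv2d(search_feat, template_feat)  # (1, 1, Hs-Ht+1, Ws-Wt+1)

z = torch.randn(1, 256, 6, 6)    # features of the target template
x = torch.randn(1, 256, 22, 22)  # features of the search frame
r = siamese_response(z, x)       # (1, 1, 17, 17) response map
loc = (r[0, 0] == r.max()).nonzero()  # peak position, i.e., the coarse target location
```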
In general, the above deep trackers achieve better performance than most conventional trackers. However, with the development of deep neural networks, deeper networks are also employed in learning-based trackers to further improve their performance and robustness. There are two types of methods for deepening learning-based trackers. The first type transfers pre-trained deep neural networks to learning-based trackers. Among these transfer-based methods, the most representative work is presented in [133], where the authors find that the learned feature representation of a target should remain spatially invariant, and they theoretically find that the zero-padding configuration of CNN trackers influences this spatial invariance restriction. Based on this work, Lukezic et al. [134] use a ResNet50 as a backbone to simultaneously conduct the visual object tracking and video object segmentation tasks. Similarly, the ResNet50 is also taken as a backbone to implicitly and dynamically encode the camera geometric relationship, and hence address missing-target issues such as occlusion [135]. Chen et al. [136] view the tracking problem as a parallel classification and regression problem, and take a pre-trained classification network to solve the former problem. In contrast, the second type of method directly constructs deep neural networks. By conducting extensive and systematic experiments, Zhang and Peng [137] find that the network stride, receptive field, and spatial size of the network output are important for constructing deep Siamese networks. They then propose the cropping-inside residual (CIR) unit, down-sampling CIR unit, CIR-Inception unit, and CIR-NeXt unit to design deeper or wider Siamese networks than previous ones. Later, the units and observations proposed in [137] are also widely used to form Siamese trackers with deep architectures [138,139]. However, Zhang and Peng [137] also indicate that the perceptual inconsistency between the target template and the search frame should be carefully handled for robust tracking. Therefore, for tracking algorithms, it is unnecessary to make the extraction module as deep as possible. With this observation, several novel frameworks, such as generative adversarial networks (GANs) and attention mechanisms, are incorporated into deep trackers for effectively learning features. On the other hand, the above indication also shows that the motion pattern of tracked targets is another important consideration for deep trackers. Therefore, in the next subsection, the motion estimation module is discussed to explore such importance.

Motion estimation module
Unlike the image classification task, single object tracking aims at locating a given target in all frames. Therefore, the motion pattern between consecutive frames or tracked targets is important for enhancing the robustness and effectiveness of tracking algorithms. In detail, for deep trackers such as Siamese networks, there is a branch that learns features related to previous target appearances. As Fig. 8 illustrates, the motion module is designed for constructing a target template, so that deep trackers can dynamically update the appearances of the tracked targets. For example, Ning et al. [76] first take an object detection method to choose candidate samples, which are then fed through LSTM blocks to generate object locations. Inside the LSTM blocks, the context relationships between consecutive frames are used to select samples related to the targets. Reinforcement learning has also been used to capture motion patterns. In [63], an action-decision tracker (ADNet) is proposed to predict object locations by a learned agent. Specifically, this agent is trained to foresee movement and scale changes based on the present frame. However, since such motion estimation modules tend to forget the appearance from the first frame, they are easily influenced by heavy occlusions or out-of-view movements.
To address the above problem, recently proposed motion estimation modules take both the search frame and other frames as inputs. For example, Teng et al. [140] propose a neural network to explicitly exploit object representations from each frame and the changes among multiple frames in the same video. Thus, the proposed network can integrate object appearances with their motions to effectively capture temporal variations among consecutive frames. In [141], all frames in each tracking sequence are used to train a reinforcement learning network, which aims at producing a continuous action for predicting the optimal object location. In contrast, Zhang et al. [101] introduce a motion estimation network to learn how to update the object template. In detail, this motion estimation network is provided with the first frame, the current frame, and the template of the current frame. Thus, the appearance from the first frame is always emphasized, leading to a more robust tracking performance than its counterparts. Li et al. [142] innovatively use gradient information between consecutive frames and the current frame. To better use the gradient information and avoid the over-fitting problem, they also propose a template generalization training method. According to their experimental analysis, motion estimation modules always learn a general template from the initial appearance, and continuously fine-tune the general template based on the appearances in the search frames. Therefore, it can be concluded that the appearance from the initial frame can help avoid the tracking-drift problem to some extent. However, excessive information from the initial frame will influence the overall tracking performance (e.g., GRU tends to forget information from early timestamps and is thus more effective and robust than LSTM). This indicates that balancing the importance of the initial frame and other frames still needs more exploration.

Regression module
With the motion estimation module, deep trackers can maintain a template of the tracked targets. Thus, the feature extraction module can learn features from this template and the search frame to locate the targets. However, since the features in deep neural networks are high-dimensional, regressing the bounding box from these extracted features also requires more research. In general, for regressing a bounding box, a response map is first obtained via the following operation:

$$R(z, x) = \varphi_{\theta}(z) \star \varphi_{\theta}(x) + b$$

where $\varphi_{\theta}$ indicates the feature extraction module with parameters $\theta$, $\star$ denotes the cross-correlation operation, $b$ denotes a bias term, and $z$ and $x$ indicate the target template and search frame, respectively. Then, the position of the maximum value in the response map is taken as the target location. In addition, for processing the scale variations, several scale parameters are first manually given and then gradually updated on the basis of the search frames or fixed factors.
Recently, it has been demonstrated that manually provided scale parameters cannot fully handle scale variations. In addition, as the feature extraction module deepens, more and more spatial information is lost, leading to poor tracking performance. Therefore, to alleviate the above issues, several methods take features from different levels of the feature extraction module to generate response maps, and dynamically fuse these response maps to obtain the final one. During this process, the multi-scale information in these features is incorporated to handle the scale variation problem and improve the precision of the final response map. For example, in [143], several features are first used to obtain a final response map with multiple channels, and then a classification sub-network and a regression one are used to decode the location and scale information of the object. Wang et al. [144] first take a semi-supervised video object segmentation network to obtain the segmentation results of the object, then use different branches to regress the bounding box and scores for each pixel. Fan and Lin [145] leverage high-level semantic information and low-level spatial information to obtain different response maps. These response maps are then used to progressively fine-tune the response map obtained from the final output features of the feature extraction module. The same methodology can also be found in [146]. In contrast, Ge et al. [147] search for and select only partial features to generate the response map. This is the first work that considers differences between features at different channels, rather than features at different levels, and according to their experimental results, the selected features provide better performance than multi-level features. From this aspect, it seems that high-dimensional features should not be entirely used to obtain the response maps. Therefore, a possible research direction for the regression module is the selection of optimal features from the feature extraction module to generate the response map and scale parameters.

Loss function
After detailing the general components of deep trackers, in this section, we introduce popular loss functions used for training these deep trackers. First, we present statistical results of the loss functions. As listed in Table 2, among works recently published in top conferences or journals (e.g., IEEE Transactions on Image Processing, IEEE Transactions on Pattern Analysis and Machine Intelligence, and European Conference on Computer Vision), the cross entropy loss function, logistic loss function, and smooth L1 loss function are the three most popular loss functions. Because both the cross entropy loss function and the logistic loss function supervise the category of each location, and the latter is a special version of the former, we only introduce the cross entropy loss function, which is also the basis for the focal loss function. In addition, we discuss the smooth L1 loss function, since it directly supervises the bounding box.

Cross entropy loss function
In general, the cross entropy loss function can be defined as follows:

$$L_{CE} = -\sum_{i} y_i \log (p_i)$$

where $y_i$ is the expected contribution of the sample $i$, and $p_i$ is the generated contribution. However, for the single object tracking task, there are only two kinds of samples, i.e., target and background samples, which respectively consist of pixels belonging to the tracked target and all other pixels. Therefore, in some tracking algorithms, the cross entropy loss function is used to force these algorithms to learn the discriminative features of the object and background. For example, in [148], the cross entropy loss function is incorporated along with the focal loss function to supervise the feature extraction module. In [161], this loss function is used to force a classifier to accurately classify pixels. However, since the cross entropy loss function only concentrates on the category of each pixel, it cannot directly supervise tracking algorithms to generate accurate bounding boxes. Thus, the cross entropy loss function is typically used along with other loss functions such as the focal loss function or the IoU loss function.
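In the binary target/background setting above, the loss reduces to a per-pixel binary cross entropy over the response map; a minimal sketch (PyTorch, with illustrative shapes and a hypothetical label layout):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(1, 1, 17, 17)   # raw response map produced by a tracker
labels = torch.zeros(1, 1, 17, 17)   # 1 for target pixels, 0 for background
labels[0, 0, 8, 8] = 1.0             # assume the target center falls here

# Per-pixel binary cross entropy averaged over the map.
loss = F.binary_cross_entropy_with_logits(logits, labels)
```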

Smooth L1 loss function
It is widely known that the smooth L1 loss function is a special version of the L1 loss function, and that the L1 loss function is the square root of the L2 loss function. Therefore, these three loss functions can be defined as

$$L_2(x) = x^2, \quad L_1(x) = |x|, \quad \mathrm{smooth}_{L_1}(x) = \begin{cases} 0.5x^2/\beta, & |x| < \beta \\ |x| - 0.5\beta, & \text{otherwise} \end{cases}$$

where $x$ is the regression error and $\beta$ is a pre-defined parameter. Clearly, compared with the L2 loss function, the smooth L1 loss function tends to generate relatively small values for large errors. In addition, compared with the L1 loss function, the smooth L1 loss function generates smaller gradients when $|x| < \beta$. This ensures that, during the training of deep trackers, the gradient value remains appropriate for escaping local optimal solutions. However, although the smooth L1 loss function can directly supervise the bounding box, it tends to bias deep trackers toward short-term templates.
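A small sketch (PyTorch, with $\beta = 1$) of the piecewise definition, showing how the smooth L1 values compare with L1 and L2 on a few illustrative errors:

```python
import torch

def smooth_l1(x, beta=1.0):
    """Smooth L1: quadratic near zero, linear elsewhere, so gradients stay
    bounded for large regression errors."""
    return torch.where(x.abs() < beta,
                       0.5 * x ** 2 / beta,
                       x.abs() - 0.5 * beta)

err = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(smooth_l1(err))  # tensor([1.5000, 0.1250, 0.0000, 0.1250, 1.5000])
print(err.abs())       # L1: tensor([2.0000, 0.5000, 0.0000, 0.5000, 2.0000])
print(err ** 2)        # L2: tensor([4.0000, 0.2500, 0.0000, 0.2500, 4.0000])
```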
Therefore, not only the smooth L1 loss function but also its methodology are widely used to train deep trackers. For example, in [133], the smooth L1 loss function is used together with a classification loss function. In [150], the methodology of the smooth L1 loss function is used to update a short-term template, thereby avoiding the model-drifting and template-inconsistency problems. Overall, even with an effective feature extraction module, motion estimation module, and regression module, the loss function is still important for obtaining an effective tracker.
However, since a single loss function cannot effectively supervise all modules, multiple loss functions are often jointly used in the weighted summation manner.

Visual tracking datasets
It is widely known that most deep learning methods rely on benchmark datasets with a large amount of labeled data. In addition, with the developments in deep learning trackers, previous datasets with limited data cannot fully validate their effectiveness. Therefore, several tracking datasets have been proposed in recent years (see Table 3). In this section, we discuss several conventional datasets and two newly proposed ones.

Object tracking benchmark datasets
In general, the Object Tracking Benchmark datasets are the most widely used datasets for evaluating tracking algorithms. In [27], a testing dataset named OTB-2013 is proposed, which is the first dataset for addressing and analyzing the initialization problem of object tracking. In OTB-2013, there are in total 51 sequences with manually annotated bounding boxes in each frame 1. For further analyzing the tracking performance, all these 51 sequences are categorized with 11 attributes, i.e., illumination variation, scale variation, occlusion, deformation, motion blur, fast motion, in-plane rotation, out-of-plane rotation, out-of-view, background clutter, and low resolution. Wu et al. [177] later extend OTB-2013 by adding 48 additional videos to obtain OTB-2015, which is also denoted by OTB-100 2. In addition, 50 difficult and representative sequences in OTB-100 are selected to form another dataset, namely OTB-50.

Visual object tracking datasets
The other popular datasets for tracker evaluation are the Visual Object Tracking datasets [28, 176, 178, 179, 192−194]. Unlike the OTB datasets, trackers evaluated on these datasets are allowed to be reinitialized when a failure is detected by comparing the predicted bounding box with the ground-truth annotations. In the first proposed VOT-2013 [28], each frame of the released 16 sequences is labeled with different attributes. After VOT-2013, the VOT datasets are updated every year, and in the VOT-2019 challenge [179], there are in total 60 sequences.

Large-scale single object tracking dataset
A large-scale dedicated benchmark with high-quality training and testing sequences is proposed in [175]. Considering that this dataset contains 1 400 videos with large-scale variants, it is called the large-scale single object tracking (LaSOT) dataset. LaSOT is different from existing datasets, since it provides both visual bounding box annotations and rich natural language specifications. This dataset also takes the class imbalance into consideration; thus, each video in LaSOT contains 2 512 frames on average and is categorized into only one class. In addition, LaSOT provides two different evaluation protocols. In the first protocol, all the 1 400 sequences are used for evaluation, and the tracking algorithms are allowed to use any sequence from other datasets for training. In the second protocol, both the training and testing sets are predefined to make fair comparisons.

1 In the jogging video, two different targets are annotated. Therefore, there are 50 videos but 51 sequences in total.
2 Besides the jogging video, there is also a sequence (i.e., skating2) with two different annotated targets. Therefore, there are only 98 videos but 100 sequences.

GOT-10k dataset
Unlike LaSOT, the GOT-10k dataset aims at providing a wide coverage of common moving objects in the wild. Specifically, GOT-10k contains over 10 000 video segments with more than 1.5 million manually labeled bounding boxes. These video segments cover over 560 classes of moving objects and 87 motion patterns. To ensure a comprehensive and unbiased coverage of diverse moving objects, GOT-10k also uses the semantic hierarchy of WordNet [195] to guide class population. Thus, each sequence in GOT-10k is annotated with two labels: an object class and a motion class. The former denotes the target that will be tracked, whereas the latter describes the motion pattern of the target.

Performance evaluation
After introducing several datasets used for training and testing deep trackers, in this section, we first introduce several popular metrics that are used to evaluate the performance of these trackers. We then present quantitative results of recently proposed methods. Finally, we also discuss the changes and differences between methods that achieve state-of-the-art performance.

Evaluation metrics
As we previously discussed, four datasets, i.e., the OTB, VOT, LaSOT, and GOT-10k datasets, are widely used to train and test tracking algorithms. Therefore, in this subsection, we introduce several metrics used in these datasets, such as precision, success, robustness, and accuracy.

Precision
The precision metric is based on the most basic measure, i.e., the center location error, which is defined as the average Euclidean distance between the center locations of the generated bounding boxes and the corresponding ground truth. Therefore, a simple method to evaluate the tracking performance is to average the center location error over the entire sequence. However, this simple method is unsuitable when the model-drifting problem occurs, since the drifted bounding box is randomly distributed. Therefore, the precision metric is proposed to measure the percentage of frames in which the estimated location is within a given threshold distance of the ground truth. The default threshold is 20 pixels. In addition, by varying this threshold, the precision plot can be obtained for thorough comparisons.
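A minimal sketch of this metric (Python/NumPy; array names are hypothetical), computing per-frame center location errors and the fraction of frames within the 20-pixel threshold:

```python
import numpy as np

def precision(pred_centers, gt_centers, threshold=20.0):
    """Percentage of frames whose center location error is within the threshold.

    pred_centers, gt_centers: arrays of shape (num_frames, 2) holding (x, y).
    """
    errors = np.linalg.norm(pred_centers - gt_centers, axis=1)  # per-frame CLE
    return float(np.mean(errors <= threshold))

# Varying the threshold (e.g., from 0 to 50 pixels) yields the precision plot.
```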

Success and accuracy
Evaluating the tracking performance based on the center location error alone is sometimes ineffective. Therefore, the overlap ratio (also known as accuracy) between the generated bounding box and the labeled one is also considered for evaluation. Given a generated bounding box r_t and a labeled one r_a, their overlap ratio is defined as follows:

$$S = \frac{|r_t \cap r_a|}{|r_t \cup r_a|}$$

where $\cap$ and $\cup$ indicate the intersection and union of these two boxes, and $|\cdot|$ represents the number of pixels in the region. On the one hand, by averaging the values of accuracy over all frames, the average accuracy can be obtained to evaluate the tracking performance. On the other hand, for computing the success metric, a threshold t_o is also given, and the default value of t_o is typically fixed as 0.5. Thus, frames whose overlap is greater than t_o are seen as frames in which the target is successfully tracked. By counting the number of these frames, the success can be calculated. However, since the default value of t_o cannot fully represent the overall performance, the value of t_o is usually varied from 0 to 1 to obtain the success plot. For quantitative comparison, the area under curve (AUC) of the success plot is always used.
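The sketch below (Python/NumPy, with hypothetical helper names) computes the overlap ratio for axis-aligned boxes in (x, y, w, h) form and summarizes a sequence of overlaps into the success plot and its AUC:

```python
import numpy as np

def overlap_ratio(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x, y, w, h)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    y2 = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union

def success_auc(ious, thresholds=np.linspace(0, 1, 21)):
    """Success values at each overlap threshold and their mean (approximate AUC)."""
    success = np.array([np.mean(ious > t) for t in thresholds])
    return success, float(success.mean())
```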

Robustness
To evaluate a tracking algorithm, the conventional method is to run the algorithm throughout a sequence with initialization from the ground-truth position in the first frame, and then compute certain metrics such as precision and success. However, because several tracking algorithms are sensitive to the initialization, this conventional method is biased. For this reason, the robustness metrics are proposed. Specifically, there are two kinds of robustness metrics. The first kind is calculated by initializing the tracker with the ground-truth position at different frames; it is therefore named the temporal robustness evaluation (TRE). In contrast, the second kind is computed by perturbing the initialization to different positions in the first frame; it is therefore named the spatial robustness evaluation (SRE).

Quantitative results
In this subsection, we present metrics of recently proposed deep trackers on benchmark datasets such as OTB-2013, OTB-2015 and LaSOT. First, the precision and success metrics on these three datasets are presented in Tables 4−6, from which we can find that visual tracking via adversarial learning (VITAL) [153], discriminative and robust online learning for Siamese visual tracking (DROL-RPN) [150], and Siam R-CNN [203] achieve the best performance on these datasets. In detail, on the OTB-2013 dataset, VITAL achieves the highest precision and comparable performance in terms of success. By comparing the mechanisms of the two best methods, it can be found that both of them consider the imbalanced distributions of the target samples and background samples. In addition, VITAL additionally considers the overlap between the target samples. To address the above issues, VITAL adopts adversarial learning, while Yang et al. [149] use a novel online training strategy. As their performance indicates, adversarial learning is more effective. On the OTB-2015 dataset, there are also two outperforming methods, i.e., discriminative and robust online learning for Siamese visual tracking (DROL-RPN) [150] and deformable Siamese attention networks (SiamAttn) [139]. Among these two methods, SiamAttn uses a Siamese attention mechanism to compute deformable self-attention and cross-attention weights, extracting representative features from both the target template and the search frame, whereas DROL-RPN combines an online module with an offline Siamese network via attention mechanisms, leading to an online-updated target template. Empirically speaking, by combining online updating with offline-learned representations, DROL-RPN is more effective and robust than SiamAttn. Similar to DROL-RPN, the long-term tracking with meta-updater (LTMU) [208] also combines offline-trained Siamese architectures with online-update-based trackers, taking the target appearance from the first frame and the previous appearances.

Table 4 Precision and success of deep trackers on the OTB-2013 dataset. Red: the best result; blue: the second best result.
Here, we also introduce historical results on the VOT challenges. As Fig. 9 illustrates, the best performance on the VOT challenges has gradually improved. Among these metrics, the robustness was remarkably enhanced in 2020. By analyzing the methodologies of the top trackers, it can be found that Siamese trackers are still the most popular methods. In addition, in 2020, most Siamese trackers are designed with multiple tasks, such as video segmentation and object detection, leading to superior precision and robustness. Therefore, multi-task learning is a possible methodology for designing effective trackers and achieving semi-supervised/unsupervised trackers.

Fig. 9 Historical results on the VOT datasets. A and R indicate the accuracy and robustness metrics, while EAO and AO denote the expected average overlap and average overlap. All results are collected from official presentations of existing VOT challenges [28,175,176,178,179,192,193].

Relationship among different components
Different components of deep trackers have different functions and characteristics. For the feature extraction module, the most important function is to extract representative features that can be used to separate the object from the background. Therefore, the feature extraction module should dynamically focus on the most salient parts of the object appearance. However, since the appearance of the tracked object always varies along the entire sequence, the feature extraction module is easily fooled by these appearance variations. In addition, the environment around the object seriously influences this appearance. To address these two issues, improving the robustness of the feature extraction module and the regression module is helpful. For example, the global information of an object usually changes less than the local information. Thus, taking features generated by convolution layers with a large receptive field can help easily determine the coarse location of the object. However, tracking algorithms are always designed to precisely locate the given targets. As a result, the local information is also used to regress the fine locations of the object. It is easy to find that different features in the feature extraction module can be used to enhance the performance of the regression module. Therefore, the feature extraction module and the regression module are complementary to each other. On the other hand, the motion estimation module, which maintains the previous target templates, is also based on the feature extraction module. This is because the target template is always updated with a certain loss function, which uses the feature extraction module to extract the features. However, the motion estimation module is somewhat different from the above two modules: for maintaining effective target templates, the appearances in all frames are useful, whereas the above two modules must handle all frames to locate the tracked target. This contradiction is also the reason why several works iteratively emphasize the initial appearance during tracking. Overall, the feature extraction module is complementary to the regression module, whereas the motion estimation module is based on the feature extraction module and has its own special characteristics.

Table 5 Precision and success of deep trackers on the OTB-2015 dataset. Red: the best result; blue: the second best result.

Trackers            Precision   Success
GradNet [142]       0.351       0.365
SiamCAR [143]       0.510       0.507
MDNet [24]          0.373       0.397
VITAL [153]         0.360       0.390
StructSiam [165]    0.333       0.335
ROAM [168]          0.368       0.390
ROAM++ [168]        0.445       0.447
Siam R-CNN [202]    −           0.648
Dimp50 [206]        0.564       0.568
LTMU [207]          0.572       0.572
GlobalTrack [208]   0.528       0.517
ATOM [209]          0.500       0.501

Exploration of more effective frameworks
In Section 3, the general pipeline of deep trackers is summarized in two steps: 1) The feature extraction module extracts features from the search frame and from the target templates of the motion estimation module; 2) The regression module takes these two kinds of features to generate the response map, and then locates the maximum value in the response map to regress the bounding box. It is easy to find that this pipeline is similar to that of the correlation filter methods, which heavily rely on manually defined features. Therefore, the main difference between these two kinds of tracking algorithms lies in the extracted features. However, it has been demonstrated that features extracted by deep neural networks are always redundant and sometimes noisy [52, 210−212]. Therefore, in the field of image classification, several mechanisms have been proposed to address this issue and achieve better performance than previous classification methods. In contrast, this issue has remained largely unexplored in the field of visual tracking. Therefore, we believe that more effective frameworks can be proposed by exploring the inherent characteristics of the features in these deep trackers.

Interpretability of motion estimation module
It is widely known that deep learning mechanisms are somewhat uninterpretable, which is also a popular research topic. However, existing research related to the interpretability of deep learning mechanisms mostly focuses on the image classification task, which only concentrates on modeling the spatial relationships between different pixels. In contrast, existing deep trackers not only take the deep learning mechanisms to extract the spatial information of the object, but also use them to estimate the motion patterns and update the target template. Therefore, why the deep learning mechanisms can be used to form the motion estimation module is still unknown. From this perspective, there is a possible research direction to enhance existing deep trackers. Last but not least, compared with modeling the spatial relationships between the pixels in the same frame, capturing the temporal relationships additionally considers the relationships of pixels in different frames. This indicates that exploring the mechanisms and interpretability of the motion estimation module is also important for deep trackers.

Conclusions
In this work, we review several recently proposed single object tracking algorithms based on deep learning mechanisms. We also review traditional static convolution neural networks, recurrent neural networks, and the graph neural networks. Based on these static networks, we introduce several data-adaptive methods such as attention mechanisms and dynamic networks. After that, we detail the general components of deep trackers and present their experimental details. Finally, by systematically analyzing the experimental results, we provide three different research directions for exploring the mechanisms of deep-learning-based trackers.

Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.