
A novel object tracking method based on a mixture model

  • Dongxu Gao
  • Zhaojie Ju
  • Jiangtao Cao
  • Honghai Liu
Regular Paper

Abstract

Object tracking has been applied in many fields such as intelligent surveillance and computer vision. Although much progress has been made, many open problems still pose a huge challenge to object tracking; currently, the main difficulties are caused by occlusion, similar object appearance and background clutter. A novel method based on a mixture model is proposed to address these issues. The mixture model is integrated into a Bayes framework, combining a locally dense context feature with fundamental image information (i.e. the relationship between the object and its surrounding regions). This is possible because the tracking problem can be viewed as a prediction problem, which can be solved using Bayesian methods. In addition, both scale variation and template updating are considered to ensure the effectiveness of the proposed algorithm. Furthermore, the Fourier Transform (FT) is used when solving the Bayes equation, so the algorithm can run in a real-time system. As a result, MMOT (Mixture Model for Object Tracking) runs faster and performs better than existing algorithms on several challenging image sequences in terms of accuracy, speed and robustness.

Keywords

Object tracking · MMOT · Fourier Transform · Bayes equation

1 Introduction

Object tracking means locating the position of an area of interest over time in every frame of a video. Many targets with various features have been researched for different applications. For example, Xiang et al. (2015) utilized optical flow and sampled points within a Markov Decision Process framework for tracking pedestrians; both single persons and multiple people could be tracked effectively under the proposed framework. Li et al. (2014) adopted discriminative features within a convolutional neural network (CNN) framework for tracking an arbitrary single object after learning its features online. In addition, the trajectory and the state of the tracked target can also contribute to tracking in many application areas.

Besides, object tracking can be applied in many areas such as human-computer interaction, self-driving vehicles and surveillance systems. Many researchers devote their time to obtaining more robust and effective tracking results, focusing on object appearance modelling, model updating, optimization algorithms and, more recently, deep learning. Although many different kinds of object tracking algorithms have been studied for several decades and much progress has been made in recent years (Yu 2012), there are still many challenging problems such as fast movement, illumination variation, occlusion, background clutter and processing time.

Generally speaking, current tracking algorithms fall into two categories: generative trackers and discriminative trackers. The noticeable difference between them is how the appearance model of the tracked target is built (Heber et al. 2013). A generative tracker usually focuses on the appearance of a moving object and tries to find a model to represent it. It does not need to consider background information, which makes the tracker run faster. An online updating method is often used in case the appearance changes. However, changes of the object appearance caused by factors such as occlusion and pose variation make modelling more difficult. Some generative tracking examples can be found in a benchmark paper (Wu et al. 2013).

A discriminative method mainly emphasizes how to separate the target from the background in a video scene. Finding a decision boundary between the object and the background is the key issue for a discriminative method. It is well known that discriminative methods work better when enough training samples are given. Such a tracker works well even under dramatic changes of the object, but it needs more sophisticated computation, which makes it hard to use in a real-time system where higher processing speed is required. Because a large number of features is necessary, the offline feature selection procedure and the pre-trained classifier make it difficult to handle arbitrary object types for tracking approaches that need online boosting.

Some methods try to combine the generative and discriminative models, which can often be treated as a semi-supervised problem. A common approach learns an online appearance model that can select features from an arbitrary object. The core idea of a combination method is to predict a classifier with the aim of enlarging the training data after obtaining two independent conditional classifiers from the same data. More detailed information about the comparison of discriminative and generative models can be found in Zilka et al. (2013).

Therefore, various representative models have become an important research topic, and the algorithmic framework used to compute different models is also studied. For example, the methods in Brendel and Todorovic (2009) and Avidan (2007) used the Bayes theorem as a basic framework, as does this paper; however, the appearance model and the method used to solve it are quite different. Motivated by the literature on the Bayes framework, the main contributions of this paper are: (1) the appearance model of an object is formulated as a prediction problem; (2) the MMOT, combining a locally dense context feature with fundamental image information, is proposed; (3) the fast Fourier Transform is introduced to solve the Bayes equation and reduce the processing time.

In this paper, the advantages of both colour information and the Fourier transform are utilized for effectiveness and efficiency. The rest of this paper is organized as follows: Sect. 2 reviews related work; the MMOT is introduced in Sect. 3; experimental results and discussion are given in Sect. 4; finally, the conclusion is given in Sect. 5.

2 Related work

A typical tracking algorithm consists of four steps: object representation, search mechanism, model solving and model updating. For recent generative and discriminative trackers, the key step of both is how to acquire a better appearance model of the object. Many papers focus on object information to find the target. Recently, several methods have utilized context information to handle object tracking, locating the target by finding consistent information around the object. To do so, related data-mining methods are introduced to extract both the object and its surrounding region as supplementary information; although satisfactory results have been obtained, the computational cost remains considerable. In addition, templates and subspace models also contribute to robust performance. Wang et al. (2010) utilized a subspace model, which can handle appearance change, while the online learning model can learn the appearance model in IVT-style methods. To solve this kind of model, optimized algorithms (Ross et al. 2008) have been proposed to meet real-time performance requirements, such as the proximal gradient approach and l1-norm related minimization methods (Yao et al. 2013). These methods appear sensitive to partial occlusion in many experiments.

Some algorithms (Wu et al. 2011) have been proposed to manage occlusion, but drift may still occur because of the offline update of the template or the offline subspace model. Many researchers have developed online updating models which can deal with drift well. However, the scale of an object sometimes changes, which poses another challenge for these trackers; for different trackers (Lasserre et al. 2006), scale updating should be considered separately. The compressive tracking method (Zhang et al. 2012) cannot handle scale variation well, but multi-scale information was introduced in fast compressive tracking (FCT) (Zhang et al. 2014). However, FCT includes no colour information, so it may fail when the colours of the object and the background are similar. Fei et al. (2015) introduced a perceptual hashing method which can track a moving object effectively. Wang et al. (2015) used a probability continuous outlier model and background information for the tracking problem. They also showed that least soft-threshold squares can improve tracking performance (Wang et al. 2016).

Many researchers who exploit colour information have achieved excellent performance in object detection. Such methods not only handle the similar-colour problem but can also locate the initial position of an object (Liu et al. 2011). Danelljan et al. (2014) analysed how colour information contributes to tracking performance, and their experiments proved its effectiveness compared with the CSK tracker and the VTD tracker (Wang et al. 2016; Kwon and Lee 2010).

3 Proposed algorithm

Recently, the object tracking problem has been treated as a prediction problem which can be solved within a particle filter framework based on the Bayes theorem. The main difference from the traditional particle filter framework is that no particles are needed to solve the model; instead, a kernel function is used to obtain the required probability. When estimating the object location, the object location likelihood is used, which is given as follows:
$$\begin{aligned} p(x)=p(x|o) \end{aligned}$$
(1)
x is the output vector which includes the predicted object information and represents the current object feature in an image sequence, and o denotes the event that the target object is present in the scene. p(x) can be computed according to the Bayes theory.
$$\begin{aligned} p(x)& = p(x|o)=\sum _{f(\mathbf z )\in X^{f}}p(x,f(\mathbf z )|o) \nonumber \\& = \sum _{f(\mathbf z )\in X^{f}}p(x|f(\mathbf z ),o)\,p(f(\mathbf z )|o) \end{aligned}$$
(2)
Then, the problem is transformed into computing the joint probability and its conditional decomposition. Here p(x) is the object location likelihood (the confidence map), and f(z) denotes the context feature, consisting of the location and the appearance information of the local context around the target; it can be represented as in Eq. (3).
$$\begin{aligned} f(\mathbf z )=(V(\mathbf z ),\mathbf z ) \end{aligned}$$
(3)
V(z) denotes the colour information, using the HSV (Hue, Saturation, Value) colour space at location z = (m, n); in particular, the value of the V channel is used, which makes the algorithm work well for both colour images and grey-scale images. z belongs to the neighbourhood of the location X that includes the target object. The target model is defined over the vectorized image patches centred at pixel position c; the weight of each surrounding pixel relative to the centre is assigned by applying an isotropic kernel k(c), and the target model is then obtained by computing the colour-model histogram, in which the j-th value is:
$$\begin{aligned} q_{j} = N_{c}\sum ^{N}_{i=1}k(||c||^{2})|\alpha _{f}|, \end{aligned}$$
(4)
where \( N_{c} \) is the normalisation constant that makes the histogram sum to one, and \(\alpha _{f}\) is the coefficient of the image patch, which also serves as the learning rate. \(C_{f}\) is the covariance matrix of the current-frame appearance, and \(\Lambda _{j}\) is a \(D_{1}\times D_{2}\) diagonal matrix. We then select a mapping matrix \(B_{1}\) from the normalised eigenvectors of \(R_{f}\) corresponding to the largest eigenvalues. The mapping matrix is found by a dimensionality-reduction technique to obtain a \(D_{1}\times D_{2}\) projection with orthogonal column vectors.
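To make this step concrete, the following is a minimal illustrative sketch of a kernel-weighted colour histogram in the spirit of Eq. (4); the patch layout, the number of bins and the Epanechnikov-style kernel are assumptions for illustration rather than the authors' exact implementation.

```python
import numpy as np

def kernel_weighted_histogram(patch_v, num_bins=16):
    """Kernel-weighted histogram of the V-channel values of an image patch.

    patch_v: 2-D array of V-channel values in [0, 1], centred on the target.
    Pixels near the patch centre receive larger weights via an isotropic
    (Epanechnikov-style) kernel, and the histogram is normalised to sum to 1.
    """
    h, w = patch_v.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Squared distance from the patch centre, normalised so border pixels get ~0 weight.
    r2 = ((ys - cy) / (h / 2.0)) ** 2 + ((xs - cx) / (w / 2.0)) ** 2
    k = np.clip(1.0 - r2, 0.0, None)            # isotropic kernel k(||c||^2)

    bins = np.minimum((patch_v * num_bins).astype(int), num_bins - 1)
    q = np.zeros(num_bins)
    np.add.at(q, bins.ravel(), k.ravel())       # accumulate kernel weights per bin
    return q / (q.sum() + 1e-12)                # normalisation constant N_c
```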

As colour attributes normally yield high-dimensional colour features, a dimensionality-reduction method is used so that the algorithm preserves the useful information while the colour dimensions are reduced dramatically; the computational time is thereby decreased. The dimension-reduction problem is formulated as finding a mapping for the current frame f by performing an eigenvalue decomposition of the matrix in Eq. (4).
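As an illustration of this dimension-reduction step (not the authors' code), a minimal sketch that projects high-dimensional colour-attribute features onto the eigenvectors with the largest eigenvalues; the feature-matrix layout and the output dimensionality are assumptions:

```python
import numpy as np

def reduce_colour_dims(features, d_out=2):
    """Project colour-attribute features (num_pixels x num_colour_dims)
    onto the d_out normalised eigenvectors with the largest eigenvalues.

    Returns the projected features and the mapping matrix B1, whose
    columns are orthogonal.
    """
    mean = features.mean(axis=0)
    centred = features - mean
    cov = centred.T @ centred / len(features)   # covariance of the current-frame features
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:d_out]   # indices of the largest eigenvalues
    B1 = eigvecs[:, order]                      # mapping matrix with orthogonal columns
    return centred @ B1, B1
```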

The framework of our proposed algorithm is described in the following table.

To solve Eq. (2), the two conditional probabilities should be computed separately.
$$\begin{aligned} p(x|f(\mathbf z ),o)=h(x-\mathbf z ), \end{aligned}$$
(5)
where h can be seen as a kernel function encoding the relationship between the centre location of the object and its surrounding region. The object location likelihood can be computed through the confidence map, as in Zhang et al. (2014):
$$\begin{aligned} C_{m}(x)=P(f(\mathbf z )|o)=ae^{-\frac{|\mathbf z -x^{*}|^{\beta }}{\sigma }}. \end{aligned}$$
(6)
In Eq. (6), a denotes a normalization constant, \(\sigma \) represents a scale parameter and \(\beta \) defines the shape parameter. The confidence map used in Eq. (7) considers the colour information of the tracking target, which handles the challenging cases effectively. The STC method (Zhang et al. 2014) gives guidance, supported by experimental results, on how to set the parameter \(\beta \).
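As a hedged illustration only, the confidence map of Eq. (6) could be evaluated on a pixel grid as follows; the grid construction and the default parameter values are assumptions:

```python
import numpy as np

def confidence_map(height, width, centre, a=1.0, beta=1.0, sigma=2.25):
    """Confidence map C_m(x) = a * exp(-|x - x*|**beta / sigma) over an
    image grid, peaking at the current object centre x* = (cx, cy)."""
    ys, xs = np.mgrid[0:height, 0:width]
    cx, cy = centre
    dist = np.hypot(xs - cx, ys - cy)           # |x - x*| for every pixel
    return a * np.exp(-(dist ** beta) / sigma)
```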
Then, taking Eqs. (3), (4), (5) and (6) into account, Eq. (2) can be formulated as:
$$\begin{aligned} p(x)& = \sum _{f(\mathbf z )\in X^{f}}h(x-\mathbf z )V(\mathbf z )\omega _{\sigma }(\mathbf z -x^{*})\nonumber \\&= h(x)\bigotimes V(x)\omega _{\sigma }(x-x^{*}). \end{aligned}$$
(7)
As \(\bigotimes \) is a convolution operator, the Fast Fourier Transform (FFT) can be applied to keep the computation fast. The location of the object is determined by the maximum value of p(x) in the \((t+1)\)th frame; transforming Eq. (7) into the frequency domain gives:
$$\begin{aligned} F\left( be^{-|\frac{x-x^{*}}{\alpha }|^{\beta }}\right) =F(h(x))\bigodot F(V(x)\omega _{\sigma }(x-x^{*})). \end{aligned}$$
(8)
Therefore, the appearance model can be obtained by:
$$\begin{aligned} h(x)=F^{-1}\left( \frac{F(be^{-|\frac{x-x^{*}}{\alpha }|^{\beta }})}{F(V(x)\omega _{\sigma }(x-x^{*}))} \right) . \end{aligned}$$
(9)
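A minimal numpy sketch of solving Eq. (9) by element-wise division in the Fourier domain is given below; the Gaussian form of the weighting function ω_σ, the parameter values and the regularisation term are assumptions rather than the authors' released implementation:

```python
import numpy as np

def learn_appearance_model(V, centre, alpha=2.25, beta=1.0, sigma=10.0, b=1.0):
    """Learn the spatial model h(x) of Eq. (9):
    h = F^-1( F(target confidence) / F(V(x) * w_sigma(x - x*)) )."""
    h_img, w_img = V.shape
    ys, xs = np.mgrid[0:h_img, 0:w_img]
    cx, cy = centre
    dist = np.hypot(xs - cx, ys - cy)

    conf = b * np.exp(-(dist / alpha) ** beta)          # target confidence map
    w_sigma = np.exp(-(dist ** 2) / (2 * sigma ** 2))   # spatial weighting around x*
    context = V * w_sigma                               # weighted context prior

    eps = 1e-6                                          # avoid division by zero
    H = np.fft.fft2(conf) / (np.fft.fft2(context) + eps)
    return np.real(np.fft.ifft2(H))
```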
In addition, it is well known that visual tracking may fail when the target appearance changes, so it is necessary to update the target model over time. For the MMOT tracker, the appearance model considers the learned target appearance x and the transformed classifier coefficients A computed from the current appearance; a simple linear interpolation is then used to update the classifier coefficients:
$$\begin{aligned} A^{t}=(1-\rho )A^{t-1}+\rho A, \end{aligned}$$
(10)
where t indicates the current frame and \(\rho \) is the learning rate parameter; thus a sub-optimal problem is introduced. A scheme that allows the model to be updated without storing previous target appearances is adopted to ensure fast computation, so not all previous frames are considered when computing the current model.
$$\begin{aligned} A^{t}_{C}=(1-\rho )A^{t-1}_{C}+\rho O^{t}(O^{t}+\rho ) \end{aligned}$$
(11)
$$\begin{aligned} x^{t}_{C}=(1-\rho )x^{t-1}_{C}+\rho x^{t}. \end{aligned}$$
(12)
\(O^{t}\) is the output of the Fourier-transformed kernel, whose weight is set by the learning rate \(\rho \), and \(x^{t}\) denotes the learned target appearance used to calculate the detection scores for the next frame. Therefore, only \(A^{t}_{C}\) and \(x^{t}\) need to be stored when using the updating scheme in the above equations.
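The update itself reduces to a few lines; the sketch below mirrors Eqs. (10)–(12) with an assumed argmax-based localisation step (the variable names are illustrative):

```python
import numpy as np

def locate_and_update(response, A_prev, x_prev, A_new, x_new, rho=0.05):
    """Pick the new target centre as the argmax of the response map p(x),
    then blend the stored model by linear interpolation, so only the running
    coefficients and the learned appearance need to be kept in memory."""
    cy, cx = np.unravel_index(np.argmax(response), response.shape)
    A_t = (1 - rho) * A_prev + rho * A_new     # Eqs. (10)/(11): classifier coefficients
    x_t = (1 - rho) * x_prev + rho * x_new     # Eq. (12): learned target appearance
    return (cx, cy), A_t, x_t
```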

4 Experiments

We have successfully integrated our method into a real-time system used not only for tracking children with autism but also for tracking objects that the children are grasping. To prove the efficiency of the algorithm, we evaluate our method on eight challenging image sequences and compare its performance with other methods that represent the most common tracking frameworks. For ease of comparison, the algorithm is implemented in Matlab and achieves at least 25 frames per second on a PC with an Intel E7500 CPU (2.93 GHz).

Some key parameters in our experiments are set as follows: \(\alpha \) is 3 and \(\beta \) is 1; the scale factor is set to 1 and the learning parameter is 0.05; the scale is updated every 5 frames. For the colour information, the tracker normalises the values to [−0.5, 0.5], which counters the distortion caused by the window operation and thus avoids affecting the kernel. The kernel is introduced because we extend the colour feature to multi-dimensional features extracted from an image patch, and the feature dimensionality is set to 6 to obtain the best result.
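For reference, the reported settings can be collected in a single configuration block; the key names below are illustrative only and do not come from the paper:

```python
# Hypothetical parameter block mirroring the settings reported above.
MMOT_PARAMS = {
    "alpha": 3,                   # scale parameter in Eqs. (8)-(9)
    "beta": 1,                    # shape parameter of the confidence map
    "scale_factor": 1.0,          # initial scale factor
    "learning_rate": 0.05,        # rho in Eq. (10)
    "scale_update_interval": 5,   # update the scale every 5 frames
    "value_range": (-0.5, 0.5),   # normalised feature values
    "colour_feature_dims": 6,     # dimensionality of the extended colour feature
}
```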

In order to make the qualitative comparison clearer, several commonly compared methods are introduced; they use different object representations to locate the object appearing in the first frame, and various computation methods to solve their models. These methods are described briefly here. The visual tracking decomposition (VTD) method (Kwon and Lee 2010) uses an observation model decomposed into multiple basic observation models constructed by sparse principal component analysis (SPCA) (Ross et al. 2008) of a set of feature templates. The MIL method (Babenko et al. 2009) puts ambiguous positive and negative samples into bags to learn a discriminative model for tracking. The L1 method (Mei et al. 2011) adopts a holistic representation of the object as the appearance model and tracks the object by solving an L1 minimization problem. The assessment of these methods in different situations is shown below:
  1. (a)
    Qualitative and quantitative evaluation. Figures 2, 3, 4, 5, 6, 7, 8 and 9 show the tracking results of the proposed method and three other algorithms, L1, VTD and MIL, on eight diverse image sequences. We use red for our method, green for L1, blue for VTD and pink for MIL. These image sequences are extremely challenging because they contain various tracking difficulties such as occlusion, scale change, similar objects, illumination change, fast motion, camera angles and cluttered background. The tracking rectangles with different colours represent the compared methods, as shown in Fig. 1.

    In the Cliffbar sequence, the L1 and VTD trackers drift away from the object and cannot recover the target when the object is on top of the book, as shown in Fig. 2; the major challenge is that the object and the background sometimes share a similar appearance. The results show that our method performs well even when the background is similar to the target. There is a large illumination change in the DavidIndoor sequence, which is considered one of the main tracking challenges. However, as seen in Fig. 3, all four trackers can handle it, although the MIL method seems more sensitive to scale change. Our method adapts when the light changes, the camera moves and the appearance changes because of the glasses and the face angle. For DavidOutdoor in Fig. 4, the L1 tracker performs the worst: it loses the person behind a tree and cannot recover. The VTD and MIL trackers fail when the occlusion occurs but recover the person afterwards. Our method tracks the person from beginning to end even though occlusion occurs.
    Fig. 1  The sequences of Cliffbar

    Fig. 2  The sequences of DavidIndoor

    Fig. 3  The sequences of DavidOutdoor

    Fig. 4  The sequences of Girl

    Fig. 5  The sequences of Occlusion1

    Fig. 6  The sequences of Occlusion2

    Fig. 7  The sequences of Deer

    Fig. 8  The sequences of Stone

    Fig. 9  The center error result

    The four trackers try to track a girl's face in Fig. 5; only our method tracks the face from beginning to end even though a similar face appears and blocks the target. The other three cannot handle this problem or the scale change, but they perform well when only occlusion occurs, as seen in Figs. 6 and 7; the only problem is that they cannot locate the target particularly accurately when the target changes its angle. Fast motion is an extremely difficult problem in object tracking; both our method and the VTD method achieve satisfactory performance throughout, as shown in Fig. 8. All three trackers except ours fail to track the indicated object when the object is very similar to the background, as in the Stone sequence. Our method can handle different tracking difficulties whether they appear individually or in distinct combinations.
    Fig. 10  The overlap result

     
  2. (b)
    Discussion. All of these methods can track the object in the DavidIndoor and Occlusion1 sequences, where some occlusion occurs, but when rotation also occurs in the Occlusion2 sequence, only our method performs well. L1 and VTD cannot handle severe occlusion such as in the Girl sequence, and only the VTD method cannot recover the object once a drift occurs, whereas the other methods can keep tracking after a temporary drift. In the Cliffbar sequence, the colour of the moving object is nearly the same as that of its surrounding region; only our method keeps tracking over time, because both colour information and context information are adopted when modelling the appearance. The experimental results therefore show that our method is robust to the current tracking challenges, including occlusion and rotation, and performs best compared with the other methods. In addition, both the center error evaluation and the overlap evaluation defined by PASCAL VOC, shown in Figs. 9 and 10, are used to evaluate the performance of the algorithms; we use the same evaluation criteria throughout this paper.
    Table 1

    The average center error

    Sequence        L1      VTD     MIL     Ours
    Cliffbar        24.8    34.6    13.4    3.3
    DavidIndoor     7.6     13.6    16.2    3.2
    DavidOutdoor    100.3   61.9    38.3    4.6
    Girl            62.4    21.4    32.3    10.2
    Occlusion1      6.5     11.1    32.3    3.8
    Occlusion2      11.1    10.4    14.1    3.9
    Deer            171.5   11.9    66.5    8.3
    Stone           19.2    31.3    32.3    1.3
    Average         50.4    24.5    30.7    4.8

    Table 2

    The average overlap

    Sequence        L1      VTD     MIL     Ours
    Cliffbar        0.2     0.3     0.5     0.7
    DavidIndoor     0.7     0.6     0.5     0.8
    DavidOutdoor    0.3     0.4     0.4     0.7
    Girl            0.3     0.5     0.5     0.6
    Occlusion1      0.8     0.7     0.6     0.9
    Occlusion2      0.6     0.6     0.6     0.8
    Deer            0.04    0.5     0.2     0.7
    Stone           0.3     0.4     0.3     0.8
    Average         0.37    0.50    0.45    0.75

    Tables 1 and 2 summarize the experimental results in terms of the average center error and the average tracking overlap. It is clear that our method achieves the lowest tracking errors in Table 1 and the highest overlap rate in Table 2. The overlap rate is one of the measures used to verify tracking success. According to the PASCAL VOC criterion, given the tracking result of each frame \(R_T \) and the corresponding ground truth \(R_G \), the \(score=\frac{area(R_T \cap R_G )}{area(R_T \cup R_G )}\) indicates the tracking performance; a small code sketch of this overlap computation is given after this list. A tracking result is regarded as valid when the score is over 0.5. The average overlap rate of our tracker is 0.75, while the best of the other trackers is 0.50. In addition, the average processing speed of the proposed method is 52 fps, and even the slowest sequence runs at more than 30 fps. Therefore, our method is effective and can run in a real-time system.
     
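For illustration, a minimal sketch of the PASCAL VOC overlap score referenced above; the rectangle representation as (x, y, width, height) is our assumption:

```python
def overlap_score(r_t, r_g):
    """PASCAL VOC overlap score between a tracked box r_t and a ground-truth
    box r_g, each given as (x, y, width, height). A frame counts as a
    success when the score exceeds 0.5."""
    x1 = max(r_t[0], r_g[0])
    y1 = max(r_t[1], r_g[1])
    x2 = min(r_t[0] + r_t[2], r_g[0] + r_g[2])
    y2 = min(r_t[1] + r_t[3], r_g[1] + r_g[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = r_t[2] * r_t[3] + r_g[2] * r_g[3] - inter
    return inter / union if union > 0 else 0.0

# Example: overlap_score((10, 10, 40, 40), (20, 20, 40, 40)) is roughly 0.39
```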

5 Conclusion

The method in this paper combines colour information and a context feature, which makes it robust to appearance changes of the object. It works well even when occlusion and similar colours occur. Moreover, scale updating and online model updating are considered to make it perform better. In addition, it can run in a real-time system because the algorithm is computed in the frequency domain through the Fourier Transform. Qualitative and quantitative experiments prove the effectiveness and efficiency of the MMOT algorithm compared with existing methods. The next step of this work will be to compare the results on benchmark sequences with the VOT.


Acknowledgements

The authors would like to acknowledge the support from the EU Seventh Framework Programme (FP7)-ICT under Grant no. 611391, Natural Science Foundation of China under Grant no. 51575412, 51575338 and 51575407, China Scholarship Council (Grant no. 201508060340) and Research Project of State Key Lab of Digital Manufacturing Equipment & Technology of China under Grant no. DMETKF2017003.

References

  1. Avidan, S.: Ensemble tracking. IEEE Trans. Pattern Anal. Mach. Intell. 29, 261–271 (2007)
  2. Babenko, B., Yang, M.-H., Belongie, S.: Visual tracking with online multiple instance learning. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 983–990. IEEE (2009)
  3. Brendel, W., Todorovic, S.: Video object segmentation by tracking regions. In: 2009 IEEE 12th International Conference on Computer Vision, pp. 833–840. IEEE (2009)
  4. Danelljan, M., Khan, F.S., Felsberg, M., Van De Weijer, J.: Adaptive color attributes for real-time visual tracking. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1090–1097. IEEE (2014)
  5. Fei, M., Li, J., Liu, H.: Visual tracking based on improved foreground detection and perceptual hashing. Neurocomputing 152, 413–428 (2015)
  6. Heber, M., Godec, M., Rüther, M., Roth, P.M., Bischof, H.: Segmentation-based tracking by support fusion. Comput. Vis. Image Underst. 117, 573–586 (2013)
  7. Kwon, J., Lee, K.M.: Visual tracking decomposition. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1269–1276. IEEE (2010)
  8. Lasserre, J.A., Bishop, C.M., Minka, T.P.: Principled hybrids of generative and discriminative models. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 87–94. IEEE (2006)
  9. Li, H., Li, Y., Porikli, F.: DeepTrack: learning discriminative feature representations by convolutional neural networks for visual tracking. In: Proceedings of the British Machine Vision Conference 2014, pp. 56.1–56.12 (2014)
  10. Liu, B., Huang, J., Yang, L., Kulikowsk, C.: Robust tracking using local sparse appearance model and K-selection. In: CVPR 2011, pp. 1313–1320. IEEE (2011)
  11. Mei, X., Ling, H., Wu, Y., Blasch, E., Bai, L.: Minimum error bounded efficient ℓ1 tracker with occlusion detection. In: CVPR 2011, pp. 1257–1264. IEEE (2011)
  12. Ross, D.A., Lim, J., Lin, R.-S., Yang, M.-H.: Incremental learning for robust visual tracking. Int. J. Comput. Vis. 77, 125–141 (2008)
  13. Wang, D., Lu, H., Chen, Y.-W.: Incremental MPCA for color object tracking. In: 2010 20th International Conference on Pattern Recognition, pp. 1751–1754. IEEE (2010)
  14. Wang, D., Lu, H., Bo, C.: Fast and robust object tracking via probability continuous outlier model. IEEE Trans. Image Process. 24, 5166–5176 (2015)
  15. Wang, D., Lu, H., Yang, M.-H.: Robust visual tracking via least soft-threshold squares. IEEE Trans. Circuits Syst. Video Technol. 26, 1709–1721 (2016)
  16. Wu, Y., Lim, J., Yang, M.H.: Online object tracking: a benchmark. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2411–2418 (2013)
  17. Wu, Y., Ling, H., Yu, J., Li, F., Mei, X., Cheng, E.: Blurred target tracking by blur-driven tracker. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1100–1107. IEEE (2011)
  18. Xiang, Y., Alahi, A., Savarese, S.: Learning to track: online multi-object tracking by decision making. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 4705–4713 (2015)
  19. Yao, R., Shi, Q., Shen, C., Zhang, Y., van den Hengel, A.: Part-based visual tracking with online latent structural learning. In: 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2363–2370. IEEE (2013)
  20. Yu, Y., Mann, G.K.I., Gosine, R.G.: A single-object tracking method for robots using object-based visual attention. Int. J. Humanoid Robot. 09, 1250030 (2012)
  21. Zhang, K., Zhang, L., Liu, Q., Zhang, D., Yang, M.H.: Fast visual tracking via dense spatio-temporal context learning. In: Lecture Notes in Computer Science, vol. 8693, pp. 127–141. Springer, Cham (2014)
  22. Zhang, K., Zhang, L., Yang, M.-H.: Real-time compressive tracking. In: European Conference on Computer Vision (ECCV), pp. 864–877. Springer (2012)
  23. Zhang, K., Zhang, L., Yang, M.-H.: Fast compressive tracking. IEEE Trans. Pattern Anal. Mach. Intell. 36, 2002–2015 (2014)
  24. Zilka, L., Marek, D., Korvas, M., Jurcicek, F.: Comparison of Bayesian discriminative and generative models for dialogue state tracking. In: Proceedings of the SIGDIAL 2013 Conference, pp. 452–456 (2013)

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. School of Computing, University of Portsmouth, Portsmouth, UK
  2. Faculty of Information and Control Engineering, Liaoning Shihua University, Fushun, China
