Calibrating the Gaussian multi-target tracking model

Abstract

We present novel batch and online (sequential) versions of the expectation–maximisation (EM) algorithm for inferring the static parameters of a multiple target tracking (MTT) model. Online EM is of particular interest as it is a more practical method for long data sets, since batch EM, or a full Bayesian approach, requires a complete pass over the data between successive parameter updates. Online EM is also suited to MTT applications that demand real-time processing of the data. Performance is assessed in numerical examples using simulated data for various scenarios. For batch estimation our method significantly outperforms an existing gradient based maximum likelihood technique, which we show to be significantly biased.

Notes

  1. We assume a bounded \(\mathcal {Y}\) and regard observations that are not recorded because they fall outside this set as missed detections. With this assumption, it may be more realistic to have \(p_{d}\) varying with \(x\), but we ignore this for simplicity.

  2. As we do not prove consistency theoretically, it is not necessary to assume that the data are generated by some \(\theta ^{*} \in \varTheta \). However, our synthetic data are generated from some \(\theta ^{*} \in \varTheta \), which allows us to compare our estimates with the true value.

References

  • Andrieu, C., Doucet, A., Holenstein, R.: Particle Markov chain Monte Carlo methods. J. R. Stat. Soc. Ser. B-stat. Methodol. 72, 269–342 (2010). doi:10.1111/j.1467-9868.2009.00736.x

  • Bar-Shalom, Y., Li, X.: Multitarget-Multisensor Tracking: Principles and Techniques. YBS, Bradford (1995)

  • Cappé, O.: Online sequential Monte Carlo EM algorithm. In: Proc. IEEE Workshop Stat. Signal Process. (2009).

  • Cappé, O., Moulines, E., Rydén, T.: Inference in Hidden Markov Models. Springer, New York (2005)

  • Chopin, N., Jacob, P., Papaspiliopoulos, O.: SMC\(^2\): an efficient algorithm for sequential analysis of state-space models. J. R. Stat. Soc. Ser. B 75, 397–426 (2012)

  • Cox, I.J., Miller, M.L.: On finding ranked assignments with application to multi-target tracking and motion correspondence. IEEE Trans. Aerosp. Electron. Syst. 32, 48–49 (1995)

  • Del Moral, P., Doucet, A., Singh, S.S.: Forward smoothing using sequential Monte Carlo. Tech. Rep. 638, Univ. Cambridge, Eng. Dep. (2009).

  • Delyon, B., Lavielle, M., Moulines, E.: Convergence of a stochastic approximation version of the EM algorithm. Ann. Stat. 27(1), 94–128 (1999)

  • Ehrlich, E., Jasra, A., Kantas, N.: Gradient free parameter estimation for hidden Markov models with intractable likelihoods. Methodol. Comput. Appl. Probab. 2, 1–35 (2013)

  • Elliott, R.J., Krishnamurthy, V.: New finite-dimensional filters for parameter estimation of discrete-time linear Gaussian models. IEEE Trans. Autom. Control 44(5), 938–951 (1999)

  • Fearnhead, P.: MCMC, sufficient statistics and particle filters. J. Comput. Graph. Stat. 11, 848–862 (2002)

  • Hue, C., Le Cadre, J.P., Perez, P.: Sequential Monte Carlo methods for multiple target tracking and data fusion. IEEE Trans. Signal Process. 50(2), 309–325 (2002)

  • Kalman, R.E.: A new approach to linear filtering and prediction problems. Trans. ASME—Ser. D 82, 35–45 (1960)

  • Lee, A., Yau, C., Giles, M., Doucet, A., Holmes, C.: On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. J. Comput. Graph. Stat. 19(4), 769–789 (2010)

  • Mahler, R.: Multitarget Bayes filtering via first-order multitarget moments. IEEE Trans. Aerosp. Electron. Syst. 39(4), 1152–1178 (2003)

  • Mahler, R.P.S., Vo, B.T., Vo, B.N.: CPHD filtering with unknown clutter rate and detection profile. IEEE Trans. Signal Process. 59(8), 3497–3513 (2011)

  • Murray, L.: GPU acceleration of the particle filter: the Metropolis resampler. In: Distributed machine learning and sparse representation with massive data sets. (2011). http://arxiv.org/abs/1202.6163

  • Murty, K.G.: An algorithm for ranking all the assignments in order of increasing cost. Oper. Res. 16(3), 682–687 (1968)

  • Nemeth, C., Fearnhead, P., Mihaylova, L.: Particle approximations of the score and observed information matrix for parameter estimation in state space models with linear computational cost, preprint (2013). http://arxiv.org/abs/1306.0735

  • Ng, W., Li, J., Godsill, S., Vermaak, J.: A hybrid approach for online joint detection and tracking for multiple targets. In: Aerospace Conference, 2005 IEEE, pp. 2126–2141 (2005).

  • Oh, S., Russell, S., Sastry, S.: Markov chain Monte Carlo data association for multi-target tracking. IEEE Trans. Autom. Control 54(3), 481–497 (2009)

  • Poyiadjis, G., Doucet, A., Singh, S.S.: Particle approximations of the score and observed information matrix in state space models with application to parameter estimation. Biometrika 98(1), 65–80 (2011)

  • Racine, V., Hertzog, A., Jouanneau, J., Salamero, J., Kervrann, C., Sibarita, J.B.: Multiple-target tracking of 3D fluorescent objects based on simulated annealing. In: 3rd IEEE International Symposium on Biomedical Imaging: Nano to Macro, pp. 1020–1023 (2006)

  • Reid, D.B.: An algorithm for tracking multiple targets. IEEE Trans. Autom. Control 24, 843–854 (1979)

  • Särkkä, S., Vehtari, A., Lampinen, J.: Rao-Blackwellized particle filter for multiple target tracking. Inf. Fusion 8, 2–15 (2007)

  • Sergé, A., Bertaux, N., Rigneault, H., Marguet, D.: Dynamic multiple-target tracing to probe spatiotemporal cartography of cell membranes. Nat. Methods 5, 687–694 (2008)

  • Singh, S.S., Whiteley, N., Godsill, S.: An approximate likelihood method for estimating the static parameters in multi-target tracking models. In: Barber, D., Cemgil, T., Chiappa, S. (eds.) Bayesian Time Series Models, pp. 225–244. Cambridge University Press, Cambridge (2011)

  • Storlie, C.B., Lee, T.C., Hannig, J., Nychka, D.W.: Tracking of multiple merging and splitting targets: A statistical perspective. Stat. Sin. 19, 1–52 (2009)

  • Streit, R., Luginbuhl, T.: Probabilistic multi-hypothesis tracking. Tech. Rep. 10,428, Naval Undersea Warfare Center Division, Newport, Rhode Island (1995)

  • Vihola, M.: Rao-Blackwellised particle filtering in random set multitarget tracking. IEEE Trans. Aerosp. Electron. Syst. 43, 689–705 (2007)

  • Vo, B.N., Ma, W.K.: The Gaussian mixture probability hypothesis density filter. IEEE Trans. Signal Process. 54(11), 4091–4104 (2006)

  • Vo, B.T., Vo, B.N., Cantoni, A.: Analytic implementations of the cardinalized probability hypothesis density filter. IEEE Trans. Signal Process. 55, 3553–3567 (2007)

  • Yoon, J.W., Singh, S.S.: A Bayesian approach to tracking in single molecule fluorescence microscopy. Tech. Rep. CUED/F-INFENG/TR-612, University of Cambridge, Eng. Dep. (2008).

  • Yoon, J., Bruckbauer, A., Fitzgerald, W.J., Klenerman, D.: Bayesian inference for improved single molecule fluorescence tracking. Biophys. J. 94, 4932–4947 (2008)

Author information

Corresponding author

Correspondence to Sumeetpal S. Singh.

Appendices

Appendix 1: Kalman filter updates

  • \((\mu _{t | t-1}, \Sigma _{t | t-1})= \text {KP}(\mu _{t-1 | t-1}, \Sigma _{t -1 | t-1})\):

    $$\begin{aligned} \mu _{t | t-1}&= F \mu _{t-1 | t-1} \\ \Sigma _{t | t-1}&= F \Sigma _{t -1 | t-1} F^{T} + W. \end{aligned}$$
  • \((B_{t}, b_{t}, \Sigma _{t-1|t}) = \text {BT}(\mu _{t-1 | t-1}, \Sigma _{t-1 | t-1})\):

    $$\begin{aligned} B_{t}&= \Sigma _{t-1 | t-1} F^{T} (F \Sigma _{t-1 | t-1} F^{T} + W)^{-1} \\ b_{t}&= (I_{d_{x} \times d_{x}} - B_{t} F ) \mu _{t-1 | t-1} \\ \Sigma _{t-1 | t}&= (I_{d_{x} \times d_{x}} - B_{t} F ) \Sigma _{t-1 | t-1}. \end{aligned}$$
  • \((\mu _{t | t}, \Sigma _{t | t}) = \text {KF}(\mu _{t | t-1}, \Sigma _{t | t-1}, c^{d}_{t}, y_{t})\):

    $$\begin{aligned} \mu _{t | t}&= \left\{ \begin{array}{l@{\quad }l} \mu _{t | t-1} + \Sigma _{t | t-1} G^{T} \Gamma _{t}^{-1} \epsilon _{t}, &{} c_{t}^{d} = 1, \\ \mu _{t | t-1}, &{} c_{t}^{d} = 0. \end{array}\right. \\ \Sigma _{t | t}&= \left\{ \begin{array}{l@{\quad }l} \Sigma _{t | t-1} - \Sigma _{t | t-1} G^{T} \Gamma _{t}^{-1} G \Sigma _{t | t-1}, &{} c_{t}^{d} = 1 \\ \Sigma _{t | t-1} , &{} c_{t}^{d} = 0. \end{array}\right. \end{aligned}$$

    where \(\Gamma _{t} = G \Sigma _{t | t-1} G^{T} + V\) and \(\epsilon _{t} = y_{t} - G \mu _{t | t-1}\).
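
The three maps \(\text {KP}\), \(\text {BT}\) and \(\text {KF}\) translate directly into code. Below is a minimal NumPy sketch (our own illustration; the function and argument names are not from the paper), where F, W, G, V are the state transition, state noise, observation and observation noise matrices of the model, and detected plays the role of \(c_{t}^{d}\).

```python
import numpy as np

def kp(mu, Sigma, F, W):
    """Prediction: (mu_{t|t-1}, Sigma_{t|t-1}) = KP(mu_{t-1|t-1}, Sigma_{t-1|t-1})."""
    return F @ mu, F @ Sigma @ F.T + W

def bt(mu, Sigma, F, W):
    """Backward transition: (B_t, b_t, Sigma_{t-1|t}) = BT(mu_{t-1|t-1}, Sigma_{t-1|t-1})."""
    d_x = mu.shape[0]
    B = Sigma @ F.T @ np.linalg.inv(F @ Sigma @ F.T + W)
    b = (np.eye(d_x) - B @ F) @ mu
    Sigma_back = (np.eye(d_x) - B @ F) @ Sigma
    return B, b, Sigma_back

def kf(mu_pred, Sigma_pred, detected, y, G, V):
    """Update: (mu_{t|t}, Sigma_{t|t}) = KF(mu_{t|t-1}, Sigma_{t|t-1}, c_t^d, y_t)."""
    if not detected:                      # c_t^d = 0: no observation, keep the prediction
        return mu_pred, Sigma_pred
    Gamma = G @ Sigma_pred @ G.T + V      # innovation covariance Gamma_t
    eps = y - G @ mu_pred                 # innovation epsilon_t
    K = Sigma_pred @ G.T @ np.linalg.inv(Gamma)
    return mu_pred + K @ eps, Sigma_pred - K @ G @ Sigma_pred
```

These primitives are the building blocks of the recursions in Appendix 2 and of the SMC proposal in Appendix 3.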

Appendix 2: Recursive updates for sufficient statistics in a GLSSM

We present the update rules for the recursion variables of the intermediate functions for the sufficient statistics in (21),

$$\begin{aligned} P_{m, t, ij}, \;q_{m, t, ij}, \;r_{m, t, ij}, \quad 1 \le m \le 7, \end{aligned}$$

where \(i, j = 1, \ldots , d_{x}\) for \(m = 1, 3, 4, 5, 7\); \(i = 1, \ldots , d_{x}\), \(j = 1, \ldots , d_{y}\) for \(m = 2\); and \(i = 1, \ldots , d_{x}\), \(j = 1\) for \(m = 6\). All \(P_{m, t, ij}\)’s, \(q_{m, t, ij}\)’s and \(r_{m, t, ij}\)’s are \(d_{x} \times d_{x}\) matrices, \(d_{x} \times 1\) vectors and scalars, respectively. To avoid repetition, we present the recursions with averaging by the step-sizes \(\{ \gamma _{t} \}_{t \ge 1}\). For the calculations of Sects. 3.2.1 and 3.2.2 and Algorithm 1, replace every instance of \((1-\gamma _{t})\) and \(\gamma _{t}\) below by the constant \(1\); for Sect. 3.3 and Algorithm 2, no change is needed.

Start with the initial values \(P_{m, 1, ij} = 0_{d_{x} \times d_{x}}\), \(q_{m, 1, ij} = 0_{d_{x} \times 1}\), and \(r_{m, 1, ij} = 0\) for all \(m\) except \( P_{1, 1, ij} = \gamma _{1} c_{1}^{d} e_{i}e_{j}^{T}\), \(P_{7, 1, ij} = \gamma _{1} e_{i}e_{j}^{T}\), \(q_{2, 1, ij} = \gamma _{1} c_{1}^{d} y_{1}(j) e_{i}\), and \(q_{6, 1, i1} = \gamma _{1} e_{i}\). At time \(t\), update

$$\begin{aligned} P_{1, t, ij}&= (1 - \gamma _{t}) B_{t}^{T} P_{1, t-1, ij} B_{t} + \gamma _{t} c^{d}_{t} e_{i}e_{j}^{T} \\ q_{1, t, ij}&= (1 \!-\! \gamma _{t}) \left[ B_{t}^{T} q_{1, t-1, ij} \!+\! B_{t}^{T} \left( P_{1, t-1, ij} + P_{1, t-1, ij}^{T} \right) b_{t} \right] \\ r_{1, t, ij}&= (1 - \gamma _{t}) \big [ r_{1, t-1, ij} + \text {tr} \left( P_{1, t-1, ij} \Sigma _{t-1 | t} \right) + q_{1, t-1, ij}^{T} b_{t} \\&+ b_{t}^{T} P_{1, t-1, ij} b_{t} \big ] \\ P_{2, t, ij}&= 0_{d_{x} \times d_{x}} \\ q_{2, t, ij}&= (1 - \gamma _{t}) B_{t}^{T} q_{2, t-1, ij} + \gamma _{t} c_{t}^{d} y_{t}(j) e_{i} \\ r_{2, t, ij}&= (1- \gamma _{t}) \left[ r_{2, t-1, ij} + q_{2, t-1, ij}^{T} b_{t} \right] \\ P_{3, t, ij}&= B_{t}^{T} \left[ (1 - \gamma _{t}) P_{3, t-1, ij} + \gamma _{t} e_{i}e_{j}^{T} \right] B_{t} \\ q_{3, t, ij}&= (1 - \gamma _{t}) B_{t}^{T} \left[ q_{3, t-1, ij} +\left( P_{3, t-1, ij} + P_{3, t-1, ij}^{T} \right) b_{t} \right] \\&+ \gamma _{t} B_{t}^{T} \left( e_{i}e_{j}^{T} + e_{j}e_{i}^{T} \right) b_{t} \\ r_{3, t, ij}&= (1 - \gamma _{t}) \big [ r_{3, t-1, ij} + \text {tr}(P_{3, t-1, ij} \Sigma _{t-1 | t}) + q_{3, t-1, ij}^{T} b_{t} \\&+ b_{t}^{T} P_{3, t-1,ij} b_{t} \big ] + \gamma _{t} \left( e_{j}^{T} \Sigma _{t-1|t} e_{i} + b_{t}^{T} e_{i} e_{j}^{T} b_{t} \right) \\ P_{4, t, ij}&= (1 - \gamma _{t}) B_{t}^{T} P_{4, t-1, ij} B_{t} + \gamma _{t} e_{i}e_{j}^{T} \\ q_{4, t, ij}&= (1 - \gamma _{t}) \left[ B_{t}^{T} q_{4, t-1, ij} + B_{t}^{T} \left( P_{4, t-1, ij} + P_{4, t-1, ij}^{T} \right) b_{t} \right] \\ r_{4, t, ij}&= (1- \gamma _{t}) \big [ r_{4, t-1, ij} + \text {tr} \left( P_{4, t-1, ij} \Sigma _{t-1 | t} \right) + q_{4, t-1, ij}^{T} b_{t} \\&+ b_{t}^{T} P_{4, t-1, ij} b_{t} \big ] \end{aligned}$$
$$\begin{aligned} P_{5, t, ij}&= (1 -\gamma _{t}) B_{t}^{T} P_{5, t-1, ij} B_{t} + \gamma _{t} e_{i}e_{j}^{T} B_{t} \\ q_{5, t, ij}&= (1 - \gamma _{t}) \left[ B_{t}^{T} q_{5, t-1, ij} + B_{t}^{T} \left( P_{5, t-1, ij} + P_{5, t-1, ij}^{T} \right) b_{t} \right] \\&+ \gamma _{t} e_{j} b_{t}^{T} e_{i} \\ r_{5, t, ij}&= (1 - \gamma _{t}) \big [ r_{5, t-1, ij} + \text {tr} \left( P_{5, t-1, ij} \Sigma _{t-1 | t} \right) + q_{5, t-1, ij}^{T} b_{t} \\&+ b_{t}^{T} P_{5, t-1, ij} b_{t} \big ] \\ P_{6, t, i1}&= 0_{d_{x} \times d_{x}} \\ q_{6, t, i1}&= (1 - \gamma _{t}) B_{t}^{T} q_{6, t-1, i1} \\ r_{6, t, i1}&= (1 - \gamma _{t}) \left[ r_{6, t-1, i1} + q_{6, t-1, i1}^{T} b_{t} \right] \\ P_{7, t, ij}&= (1 - \gamma _{t}) B_{t}^{T} P_{7, t-1, ij} B_{t} \\ q_{7, t, ij}&= (1 - \gamma _{t}) \left[ B_{t}^{T} q_{7, t-1, ij} + B_{t}^{T} \left( P_{7, t-1, ij} + P_{7, t-1, ij}^{T} \right) b_{t} \right] \\ r_{7, t, ij}&= (1 - \gamma _{t}) \big [ r_{7, t-1, ij} + \text {tr} \left( P_{7, t-1, ij} \Sigma _{t-1 | t} \right) + q_{7, t-1, ij}^{T} b_{t} \\&+ b_{t}^{T} P_{7, t-1, ij} b_{t} \big ] \end{aligned}$$
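
As a concrete illustration, the following NumPy sketch (ours, not code from the paper) performs one step of the \(m = 4\) recursion above for a fixed index pair \((i, j)\); the other six statistics follow the same pattern with their respective \(\gamma _{t}\)-weighted correction terms. Passing gamma=None replaces both \((1-\gamma _{t})\) and \(\gamma _{t}\) by \(1\), as prescribed for the batch recursions.

```python
import numpy as np

def update_stat_m4(P_prev, q_prev, r_prev, B, b, Sigma_back, e_i, e_j, gamma=None):
    """One step of the (P_{4,t,ij}, q_{4,t,ij}, r_{4,t,ij}) recursion for fixed (i, j).

    B, b, Sigma_back : outputs of BT at time t (backward gain, offset, covariance).
    e_i, e_j         : standard basis vectors of length d_x.
    gamma            : online step-size gamma_t; if None, the (1 - gamma_t) and
                       gamma_t factors are both replaced by 1 (batch case).
    """
    a, g = (1.0, 1.0) if gamma is None else (1.0 - gamma, gamma)
    P = a * (B.T @ P_prev @ B) + g * np.outer(e_i, e_j)
    q = a * (B.T @ q_prev + B.T @ (P_prev + P_prev.T) @ b)
    r = a * (r_prev
             + np.trace(P_prev @ Sigma_back)   # tr(P_{4,t-1,ij} Sigma_{t-1|t})
             + q_prev @ b                      # q_{4,t-1,ij}^T b_t
             + b @ P_prev @ b)                 # b_t^T P_{4,t-1,ij} b_t
    return P, q, r
```

Here e_i can be taken as a column of the identity matrix of size \(d_{x}\), and B, b and Sigma_back are the outputs of the backward transition \(\text {BT}\) sketched in Appendix 1.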

Appendix 3: SMC algorithm for MTT

An SMC algorithm is mainly characterised by how the particles are proposed; hence we present the proposal scheme of our SMC algorithm. This L-best assignments approach has appeared previously in the literature in similar forms, see, e.g., Cox and Miller (1995) and Ng et al. (2005). We omit the particle-number superscripts from the notation for simplicity. Assume that \(z_{1:t-1}\) is the ancestor of the particle of interest with weight \(w_{t-1}\). We sample \(z_{t} = (k_{t}^{b}, c_{t}^{s}, c_{t}^{d}, k_{t}^{f}, a_{t} )\) and calculate its weight by performing the following steps:

  • Birth-death move: Sample \(k_{t}^{b} \sim \mathcal {P}(\lambda _{b})\) and \(c_{t}^{s}(j) \sim \text {Bernoulli}(p_{s})\) for \(j = 1, \ldots , k_{t-1}^{x}\). Set \(k_{t}^{s} = \sum _{j = 1}^{k_{t-1}^{x}} c_{t}^{s}(j)\) and construct the \(k_{t}^{s} \times 1\) vector \(i_{t}^{s}\) from \(c_{t}^{s}\). Set \(k_{t}^{x} = k_{t}^{s} + k_{t}^{b}\).

  • Prediction moments for state and observation: For \(1 \le j \le k_{t}^{s}\), set

    $$\begin{aligned} (\mu _{t | t-1, j}, \Sigma _{t | t-1, j}) = \text {KP}(\mu _{t-1 | t-1, i_{t}^{s}(j)}, \Sigma _{t-1 | t-1, i_{t}^{s}(j)}); \end{aligned}$$

    for \(k^{s}_{t}+ 1 \le j \le k_{t}^{x}\), set \(( \mu _{t|t-1, j}, \Sigma _{t | t-1, j}) = ( \mu _{b}, \Sigma _{b} )\). Also, for \(j = 1, \ldots , k_{t}^{x}\), calculate

    $$\begin{aligned} (\mu _{t, j}^{y}, \Sigma _{t, j}^{y} ) = ( G \mu _{t|t-1, j}, G \Sigma _{t | t-1, j} G^{T} + V ). \end{aligned}$$
  • Detection and target-observation association: Define the \(k^{x}_{t} \times ( k^{y}_{t} + k^{x}_{t} )\) matrix \(D_{t}\) as

    $$\begin{aligned} D_{t}(i, j) = \left\{ \begin{array}{l@{\quad }l} \log [ p_{d} \mathcal {N}(y_{t, j}; \mu _{t, i}^{y}, \Sigma _{t, i}^{y}) ] &{} \text { if } j \le k^{y}_{t}, \\ \log [ (1- p_{d}) \lambda _{f} / | \mathcal {Y} | ] &{} \text { if } i = j - k^{y}_{t}, \\ - \infty &{} \text { otherwise. } \end{array}\right. \end{aligned}$$

    where an assignment is a one-to-one mapping \( \alpha _{t} : \{ 1, \ldots , k^{x}_{t} \} \rightarrow \{ 1, \ldots , k^{y}_{t} + k^{x}_{t} \}\). The cost of the assignment, up to an additive constant that is identical for each \(\alpha _{t}\), is

    $$\begin{aligned} d(D_{t}, \alpha _{t}) = \sum _{j = 1}^{k_{t}^{x}} D_{t}(j, \alpha _{t}(j)). \end{aligned}$$

    Find the set \(\mathcal {A}_{L} = \{ \alpha _{t, 1}, \ldots , \alpha _{t, L} \}\) of \(L\) assignments producing the highest assignment scores. The set \(\mathcal {A}_{L}\) can be found using Murty's assignment ranking algorithm (Murty 1968); a brute-force sketch of this association step is given after the list. Sample \(\alpha _{t} = \alpha _{t, j}\) from \(\mathcal {A}_{L}\) with probability

    $$\begin{aligned} \frac{\exp [d(D_{t}, \alpha _{t, j})]}{\sum _{j^{\prime } = 1}^{L} \exp [d(D_{t}, \alpha _{t, j^{\prime }})]}, \quad j = 1, \ldots , L \end{aligned}$$

    Given \(\alpha _{t}\), one can infer \(c^{d}_{t}\) (hence \(i^{d}_{t}\)), \(k^{d}_{t}\), \(k^{f}_{t}\) and the association \(a_{t}\) as follows: for \(1 \le k \le k_{t}^{x}\),

    $$\begin{aligned} c^{d}_{t}(k) = \left\{ \begin{array}{ll} 1 &{} \text { if } \alpha _{t}(k) \le k^{y}_{t}, \\ 0 &{} \text { if } \alpha _{t}(k) > k^{y}_{t}. \end{array}\right. \end{aligned}$$

    Then \(k^{d}_{t} = \sum _{j = 1}^{k^{x}_{t}} c^{d}_{t}(j) \), \(k^{f}_{t} = k^{y}_{t} - k^{d}_{t}\), \(i^{d}_{t}\) is constructed from \(c^{d}_{t}\), and finally

    $$\begin{aligned} a_{t}(k) = \alpha _{t}(i^{d}_{t}(k)), \qquad k = 1, \ldots , k^{d}_{t}. \end{aligned}$$
  • Filtering moments for state: For \(1 \le k \le k^{x}_{t}\) calculate

    $$\begin{aligned} (\mu _{t | t, k}, \Sigma _{t | t, k}) = \text {KF}(\mu _{t | t-1, k}, \Sigma _{t | t-1, k}, c^{d}_{t}(k), y_{t, a'_{t}(k)} ). \end{aligned}$$

    where \(a'_{t}(k) = a_{t}( \sum _{j = 1}^{k} c_{t}^{d}(j))\).

  • Reweighting: After sampling \(z_{t} = (k_{t}^{b}, c_{t}^{s}, c_{t}^{d}, k_{t}^{f}, a_{t} )\) from \(q_{\theta }(z_{t} | z_{1:t-1}, \mathbf {y}_{t})\), we calculate the weight of the particle, which for this sampling scheme becomes

    $$\begin{aligned} w_{t} \propto w_{t-1} \lambda _{f}^{- k^{x}_{t}} \sum _{j = 1}^{L} \exp [d(D_{t}, \alpha _{t, j})]. \end{aligned}$$
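
To make the association step concrete, here is a self-contained NumPy sketch (our own illustration; all function names are hypothetical). It builds \(D_{t}\) from the predicted observation moments, ranks assignments by brute-force enumeration in place of Murty's algorithm (so it is only feasible for small \(k^{x}_{t}\) and \(k^{y}_{t}\)), and samples \(\alpha _{t}\) from the \(L\) best.

```python
import itertools
import numpy as np

def log_gauss(y, m, S):
    """log N(y; m, S) for a d_y-dimensional Gaussian."""
    d = y.shape[0]
    diff = y - m
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet + diff @ np.linalg.solve(S, diff))

def propose_association(y_t, mu_y, Sigma_y, p_d, lambda_f, vol_Y, L, rng):
    """Sample alpha_t from the L best assignments (brute force stands in for Murty 1968).

    y_t     : (k_y, d_y) array of observations at time t.
    mu_y    : list of k_x predicted observation means (G mu_{t|t-1, j}).
    Sigma_y : list of k_x predicted observation covariances.
    Returns (alpha, score, sum_exp_scores); alpha[i] < k_y means target i is
    assigned to observation alpha[i], otherwise target i is not detected.
    """
    k_x, k_y = len(mu_y), y_t.shape[0]
    D = np.full((k_x, k_y + k_x), -np.inf)
    for i in range(k_x):
        for j in range(k_y):
            D[i, j] = np.log(p_d) + log_gauss(y_t[j], mu_y[i], Sigma_y[i])
        D[i, k_y + i] = np.log((1.0 - p_d) * lambda_f / vol_Y)  # missed-detection column

    # Enumerate one-to-one maps {targets} -> {observations + dummies}, keep the L best.
    scored = []
    for alpha in itertools.permutations(range(k_y + k_x), k_x):
        d_alpha = sum(D[i, alpha[i]] for i in range(k_x))
        if np.isfinite(d_alpha):
            scored.append((d_alpha, alpha))
    scored.sort(key=lambda s: s[0], reverse=True)
    best = scored[:L]

    scores = np.array([s for s, _ in best])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    idx = rng.choice(len(best), p=probs)
    return np.array(best[idx][1]), scores[idx], np.exp(scores).sum()
```

The returned sum of exponentiated scores is the quantity \(\sum _{j = 1}^{L} \exp [d(D_{t}, \alpha _{t, j})]\) appearing in the reweighting step, and the sampled alpha determines \(c^{d}_{t}\), \(k^{d}_{t}\), \(k^{f}_{t}\) and \(a_{t}\) exactly as described above.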

Cite this article

Yıldırım, S., Jiang, L., Singh, S.S. et al. Calibrating the Gaussian multi-target tracking model. Stat Comput 25, 595–608 (2015). https://doi.org/10.1007/s11222-014-9456-2
