
Eigenbackground Revisited: Can We Model the Background with Eigenvectors?

Published in: Journal of Mathematical Imaging and Vision

Abstract

Using the dominant eigenvectors for background modeling (usually known as Eigenbackground) is a common technique in the literature. However, its results suffer from noticeable artifacts, and there have been many attempts to reduce them through improvements and enhancements to the Eigenbackground algorithm. In this paper, we show that the main problem of Eigenbackground lies at its very core: using the strongest eigenvectors to model the background may simply not be a good idea. Instead, we propose an alternative solution that exploits the weakest eigenvectors (which are usually thrown away and treated as garbage data) for background modeling. MATLAB code is available at the paper's GitHub repository.
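To make the setting concrete, the following MATLAB sketch outlines the classic dominant-eigenvector Eigenbackground pipeline that the abstract refers to. It is an illustrative reconstruction under assumed variable names (`frames`, `testFrame`, threshold `T`), not the authors' released code; their implementation is linked in Note 1.

```matlab
% Illustrative sketch of the classic Eigenbackground pipeline (assumed
% variable names; the authors' MATLAB code is linked in Note 1).
% frames: d1 x d2 x N array of N grayscale training frames.
[d1, d2, N] = size(frames);
X  = reshape(double(frames), d1*d2, N);   % one column per frame
mu = mean(X, 2);                          % mean background image
[U, ~, ~] = svd(X - mu, 'econ');          % left singular vectors = eigenvectors of the covariance
k  = 3;                                   % number of dominant eigenvectors kept
Uk = U(:, 1:k);                           % Eigenbackground basis

% Background subtraction for a new frame:
f  = double(testFrame(:));                % d1*d2 x 1 column vector
b  = mu + Uk*(Uk'*(f - mu));              % reconstruction from the eigenspace = background estimate
fg = reshape(abs(f - b) > T, d1, d2);     % foreground mask for threshold T
```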



Notes

  1. https://github.com/mamintoosi/Eigenbackground-Revisited

  2. For better demonstration, the frames’ order is perturbed.

  3. BM is a user-friendly name for \(\Phi _M\), mentioned on page 1.

References

  1. Oliver, N.M., Rosario, B., Pentland, A.P.: A Bayesian computer vision system for modeling human interactions. IEEE Trans. Pattern Anal. Mach. Intell. 22, 831–843 (2000)

  2. De La Torre, F., Black, M.J.: Robust principal component analysis for computer vision. In: Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), vol. 1, pp. 362–369. IEEE (2001)

  3. Skočaj, D., Leonardis, A.: Incremental and robust learning of subspace representations. Image Vis. Comput. 26(1), 27–38 (2008)

  4. Dickinson, P., Hunter, A., Appiah, K.: A spatially distributed model for foreground segmentation. Image Vis. Comput. 27(9), 1326–1335 (2009)

  5. Yuan, Y., Pang, Y., Pan, J., Li, X.: Scene segmentation based on IPCA for visual surveillance. Neurocomputing 72(10–12), 2450–2454 (2009)

  6. Casares, M., Velipasalar, S., Pinto, A.: Light-weight salient foreground detection for embedded smart cameras. Comput. Vis. Image Underst. 114(11), 1223–1237 (2010)

  7. Dong, Y., Desouza, G.: Adaptive learning of multi-subspace for foreground detection under illumination changes. Comput. Vis. Image Underst. 115(1), 31–49 (2011)

  8. Tzevanidis, K., Argyros, A.: Unsupervised learning of background modeling parameters in multicamera systems. Comput. Vis. Image Underst. 115(1), 105–116 (2011)

  9. Guyon, C., Bouwmans, T., Zahzah, E.-H.: Robust principal component analysis for background subtraction: systematic evaluation and comparative analysis. In: Principal Component Analysis, Book 1, pp. 223–238. INTECH (2012)

  10. Vosters, L., Shan, C., Gritti, T.: Real-time robust background subtraction under rapidly changing illumination conditions. Image Vis. Comput. 30(12), 1004–1015 (2012)

  11. Krishna, M.G., Aradhya, V.M., Ravishankar, M., Babu, D.R.: LoPP: locality preserving projections for moving object detection. Procedia Technol. 4, 624–628 (2012). 2nd International Conference on Computer, Communication, Control and Information Technology (C3IT-2012), February 25–26, 2012

  12. Zhao, Y., Gong, H., Jia, Y., Zhu, S.-C.: Background modeling by subspace learning on spatio-temporal patches. Pattern Recognit. Lett. 33(9), 1134–1147 (2012)

  13. Yeo, B., Lim, W., Lim, H.: Scalable-width temporal edge detection for recursive background recovery in adaptive background modeling. Appl. Soft Comput. 13(4), 1583–1591 (2013)

  14. Seger, R., Wanderley, M., Koerich, A.: Automatic detection of musicians' ancillary gestures based on video analysis. Expert Syst. Appl. 41(4), 2098–2106 (2014)

  15. Spampinato, C., Palazzo, S., Kavasidis, I.: A texton-based kernel density estimation approach for background modeling under extreme conditions. Comput. Vis. Image Underst. 122, 74–83 (2014)

  16. Bouwmans, T.: Traditional and recent approaches in background modeling for foreground detection: an overview. Comput. Sci. Rev. 11–12, 31–66 (2014)

  17. Varadarajan, S., Miller, P., Zhou, H.: Region-based mixture of Gaussians modelling for foreground detection in dynamic scenes. Pattern Recognit. 48(11), 3488–3503 (2015)

  18. Shakeri, M., Zhang, H.: COROLA: a sequential solution to moving object detection using low-rank approximation. CoRR arXiv:1505.03566 (2015)

  19. Dou, J., Li, J., Qin, Q., Tu, Z.: Moving object detection based on incremental learning low rank representation and spatial constraint. Neurocomputing 168, 382–400 (2015)

  20. Xu, Z., Shi, P., Gu, I.Y.H.: An eigenbackground subtraction method using recursive error compensation. In: Zhuang, Y., Yang, S., Rui, Y., He, Q. (eds.) PCM 2006. Lecture Notes in Computer Science, vol. 4261, pp. 779–787. Springer (2006)

  21. Chen, W., Tian, Y., Wang, Y., Huang, T.: Fixed-point Gaussian mixture model for analysis-friendly surveillance video coding. Comput. Vis. Image Underst. 142, 65–79 (2016)

  22. Wan, M., Gu, G., Qian, W., Ren, K., Chen, Q., Zhang, H., Maldague, X.: Total variation regularization term-based low-rank and sparse matrix representation model for infrared moving target tracking. Remote Sens. 10(4), 510 (2018)

  23. Banu, S., Maheswari, N.: Background modelling using a Q-tree based foreground segmentation. Scalable Comput. Pract. Exp. 21, 17–31 (2020)

  24. Djerida, A., Zhao, Z., Zhao, J.: Background subtraction in dynamic scenes using the dynamic principal component analysis. IET Image Process. 14(2), 245–255 (2020)

  25. Shah, N., Píngale, A., Patel, V., George, N.V.: An adaptive background subtraction scheme for video surveillance systems. In: 2017 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pp. 013–017 (2017)

  26. Bouwmans, T.: Subspace learning for background modeling: a survey. Recent Patents Comput. Sci. 2, 223–234 (2009)

  27. Cao, X.-J., Pan, B.-C., Zheng, S.-L., Zhang, C.-Y.: Motion object detection method based on piecemeal principal component analysis of dynamic background updating. In: 2008 International Conference on Machine Learning and Cybernetics, vol. 5, pp. 2932–2937 (2008)

  28. Ziubiński, P., Garbat, P., Zawistowski, J.: Local eigen background substraction. In: Advances in Intelligent Systems and Computing, pp. 199–204. Springer International Publishing (2014)

  29. Kim, J.-H., Kang, B.-D., Ahn, S.-H., Kim, H.-S., Kim, S.-K.: A real-time object detection system using selected principal components. In: Lecture Notes in Electrical Engineering, pp. 367–376. Springer Netherlands (2013)

  30. Hughes, K., Grzeda, V., Greenspan, M.: Eigenbackground bootstrapping. In: 2013 International Conference on Computer and Robot Vision, pp. 196–201 (2013)

  31. Stauffer, C., Grimson, W.E.L.: Adaptive background mixture models for real-time tracking. In: 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 246–252. IEEE, Los Alamitos, CA, USA (1999)

  32. Welford, B.P.: Note on a method for calculating corrected sums of squares and products. Technometrics 4(3), 419–420 (1962)

  33. Stewart, G.W.: Introduction to Matrix Computations. Computer Science and Applied Mathematics. Academic Press (1973)

  34. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning: Data Mining, Inference and Prediction, 2nd edn. Springer (2008)

  35. Bishop, C.M.: Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag, Berlin, Heidelberg (2006)

  36. Ipsen, I.C.F., Nadler, B.: Refined perturbation bounds for eigenvalues of Hermitian and non-Hermitian matrices. SIAM J. Matrix Anal. Appl. 31(1), 40–53 (2009)

  37. Zemlys, V.: https://math.stackexchange.com/questions/9302/norm-of-asymmetric-matrix (2011)


Author information

Correspondence to Mahmood Amintoosi or Farzam Farbiz.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Theorem 3

Suppose that \(A \in {\mathbb {C}}^{n\times n}\) is a Hermitian matrix, \(y \in {\mathbb {C}}^n\) is a column vector, and \(A'=A+yy^*\) is a rank-one update of A. If v and \(v'\) are the normalized eigenvectors of A and \(A'\) corresponding to their largest eigenvalues, then:

$$\begin{aligned} \left\Vert v-v'\right\Vert \le \beta \left\Vert E\right\Vert \end{aligned}$$
(13)

where \(E=yy^*\) is a small perturbation of A and \(\beta \) is a constant that does not depend on E.

Proof

Suppose that \(\lambda \) and \(\lambda '\) are the largest eigenvalues of A and \(A'\), respectively. By the definition of an eigenvector we have

$$\begin{aligned} Av&= \lambda v \\ A'v'&= \lambda ' v' \end{aligned}$$

According to Proposition 2, there is an \(\epsilon \ge 0\) such that \(\lambda +\epsilon = \lambda '\). Hence:

$$\begin{aligned} (A+E)v' - Av&= \lambda ' v' - \lambda v \nonumber \\&= (\lambda + \epsilon )v' - \lambda v \end{aligned}$$
(14)

Thus:

$$\begin{aligned} A(v'-v) + Ev'&= \lambda v' + \epsilon v' - \lambda v \nonumber \\ \Rightarrow \quad (A-\lambda I)(v'-v)&= (\epsilon I - E)v' \nonumber \\ \Rightarrow \quad v'-v&= (A-\lambda I)^{-1} (\epsilon I - E)v' \nonumber \\ \Rightarrow \quad \left\Vert v'-v\right\Vert&= \left\Vert (A-\lambda I)^{-1} (\epsilon I - E)v' \right\Vert \nonumber \\&\le \underbrace{\left\Vert (A-\lambda I)^{-1}\right\Vert }_{\alpha }\left\Vert \epsilon I - E\right\Vert \underbrace{\left\Vert v'\right\Vert }_{1} \nonumber \\&= \alpha \left\Vert \epsilon I - E\right\Vert \nonumber \\&\le \alpha (\left\Vert \epsilon I\right\Vert + \left\Vert E\right\Vert ) \quad (\text{triangle inequality})\nonumber \\&= \alpha ( \epsilon + \left\Vert E\right\Vert )\nonumber \\&\le \alpha (\left\Vert E\right\Vert + \left\Vert E\right\Vert ) \quad (\text{by Proposition 2, } \epsilon \le \left\Vert E\right\Vert )\nonumber \\&= \beta \left\Vert E\right\Vert \end{aligned}$$
(15)

where \(\beta = 2\alpha \). \(\square \)
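As a quick numerical illustration of this bound (a hedged sketch added here, not part of the original proof), one can perturb a random symmetric matrix by a small rank-one term and compare the change in its dominant eigenvector against \(\left\Vert E\right\Vert \):

```matlab
% Numerical illustration of Theorem 3 (illustrative sketch only):
% a small rank-one update E = y*y' moves the dominant eigenvector
% of a symmetric matrix by at most a constant times norm(E).
rng(0);
n = 50;
B = randn(n);   A = (B + B')/2;          % random real symmetric (Hermitian) matrix
y = 0.01*randn(n, 1);   E = y*y';        % small rank-one perturbation
[V1, D1] = eig(A);      [~, i1] = max(diag(D1));  v1 = V1(:, i1);
[V2, D2] = eig(A + E);  [~, i2] = max(diag(D2));  v2 = V2(:, i2);
if v1'*v2 < 0, v2 = -v2; end             % resolve the sign ambiguity of eigenvectors
fprintf('||v - v''|| = %.3e,   ||E|| = %.3e\n', norm(v1 - v2), norm(E));
```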

Lemma 3

For a given Hermitian matrix \(A \in {\mathbb {C}}^{n\times n}\) and a column vector \(y \in {\mathbb {C}}^n\), we have:

$$\begin{aligned}\lambda _\mathrm{max}(A) \le \lambda _\mathrm{max}(A+yy^*) \le \lambda _\mathrm{max}(A) + \left\Vert y\right\Vert ^2 \end{aligned}$$

Proof

See [36]. \(\square \)

Lemma 4

The 2-norm of a symmetric matrix is the maximum absolute value of its eigenvalues.

Proof

We have

$$\begin{aligned} \Vert A\Vert _2=\max _{\Vert x\Vert =1}\Vert Ax\Vert \end{aligned}$$

where \(\Vert \cdot \Vert \) denotes the ordinary Euclidean norm. This is a constrained optimization problem with Lagrange function:

$$\begin{aligned} L(x,\lambda )=\Vert Ax\Vert ^2-\lambda (\Vert x\Vert ^2-1)=x^TA^2x-\lambda (x^Tx-1) \end{aligned}$$

Squaring makes the next step easier. Taking the derivative with respect to x and setting it to zero, we get

$$\begin{aligned} A^2x-\lambda x=0 \end{aligned}$$

so the solutions of this problem are the eigenvectors of \(A^2\). Since \(A^2\) is symmetric, all its eigenvalues are real, and \(x^TA^2x\) attains its maximum over the set \(\Vert x\Vert ^2=1\) at the maximal eigenvalue of \(A^2\). Now, since A is symmetric, it admits the representation

$$\begin{aligned} A=Q\Lambda Q^T \end{aligned}$$

with Q an orthogonal matrix and \(\Lambda \) a diagonal matrix whose diagonal entries are the eigenvalues of A. For \(A^2\) we get

$$\begin{aligned} A^2=Q\Lambda ^2 Q^T \end{aligned}$$

so the eigenvalues of \(A^2\) are the squares of the eigenvalues of A. The norm \(\Vert A\Vert _2\) is the square root of the maximum of \(x^TA^2x\) over \(x^Tx=1\), i.e., the square root of the maximal eigenvalue of \(A^2\), which is the maximal absolute eigenvalue of A [37]. \(\square \)
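A brief numerical sanity check of this lemma (illustrative only, not from the paper) can be run in MATLAB:

```matlab
% Illustrative check of Lemma 4: for a symmetric matrix, the spectral
% norm equals the largest absolute eigenvalue.
B = randn(6);   A = (B + B')/2;
fprintf('norm(A,2) = %.6f,   max|eig(A)| = %.6f\n', norm(A, 2), max(abs(eig(A))));
```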

Lemma 5

Suppose \(A=uv^T\) where u and v are nonzero column vectors in \({{\mathbb {R}}}^n\), \(n\ge 3\). Then, \(\lambda =0\) and \(\lambda =v^Tu\) are the only eigenvalues of A.

Proof

\(\lambda =0\) is an eigenvalue of A since A is not of full rank. \(\lambda =v^Tu\) is also an eigenvalue of A since

$$\begin{aligned} Au = (uv^T)u=u(v^Tu)=(v^Tu)u. \end{aligned}$$

Since \(v\ne 0\), the orthogonal complement of the linear subspace generated by v (i.e., the set of all vectors orthogonal to v) is \((n-1)\)-dimensional. Let \(\phi _1,\dots ,\phi _{n-1}\) be a basis for this space. These vectors are linearly independent, and \(uv^T \phi _i = (v\cdot \phi _i)u=0 \). Thus, the eigenvalue 0 has multiplicity \(n-1\), and there are no eigenvalues other than 0 and \(v\cdot u\). \(\square \)
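Likewise, the spectrum of a rank-one matrix \(uv^T\) can be inspected numerically (illustrative check, not from the paper):

```matlab
% Illustrative check of Lemma 5: the only nonzero eigenvalue of u*v'
% is v'*u; the remaining n-1 eigenvalues are (numerically) zero.
u = randn(5, 1);   v = randn(5, 1);
lams = eig(u*v');
fprintf('v''*u = %.6f\n', v'*u);
disp(sort(real(lams), 'descend'));   % one eigenvalue equals v'*u (up to rounding), the rest are ~0
```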

Proposition 2

With the notation of Lemmas 3, 4 and 5, let \(E=yy^*\). Then there exists an \(\epsilon \) with \(0 \le \epsilon \le \left\Vert E\right\Vert \) such that:

$$\begin{aligned} \lambda _\mathrm{max}(A) + \epsilon = \lambda _\mathrm{max}(A+yy^*)&\le \lambda _\mathrm{max}(A) + \left\Vert y\right\Vert ^2 \\&= \lambda _\mathrm{max}(A) + \left\Vert E\right\Vert \end{aligned}$$
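For completeness, here is a short sketch of how the preceding lemmas yield this statement (added for readability; the derivation is not spelled out in the original):

$$\begin{aligned} \left\Vert E\right\Vert&= \left\Vert yy^*\right\Vert = \lambda _\mathrm{max}(yy^*) = y^*y = \left\Vert y\right\Vert ^2 \quad (\text{Lemmas 4 and 5})\\ 0&\le \lambda _\mathrm{max}(A+yy^*) - \lambda _\mathrm{max}(A) \le \left\Vert y\right\Vert ^2 = \left\Vert E\right\Vert \quad (\text{Lemma 3}) \end{aligned}$$

so one may take \(\epsilon = \lambda _\mathrm{max}(A+yy^*) - \lambda _\mathrm{max}(A)\).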


Cite this article

Amintoosi, M., Farbiz, F. Eigenbackground Revisited: Can We Model the Background with Eigenvectors?. J Math Imaging Vis 64, 463–477 (2022). https://doi.org/10.1007/s10851-022-01080-4

