Abstract
Using the dominant eigenvectors for background modeling (usually known as Eigenbackground) is a common technique in the literature. However, its results suffer from noticeable artifacts, and there have been many attempts to reduce these artifacts by improving or enhancing the Eigenbackground algorithm. In this paper, we show that the main problem of Eigenbackground lies at its core: it may not be a good idea to use the strongest eigenvectors for modeling the background at all. Instead, we propose an alternative solution that exploits the weakest eigenvectors (which are usually thrown away and treated as garbage data) for background modeling. MATLAB code is available in the paper's GitHub repository.
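For concreteness, the classical Eigenbackground pipeline that the paper critiques can be sketched as follows. This is a minimal NumPy illustration, not the authors' MATLAB implementation; the toy data, the rank k, and the threshold are arbitrary choices for demonstration only:

```python
import numpy as np

def eigenbackground(frames, k=3):
    """Classical Eigenbackground: project a frame onto the k *strongest*
    eigenvectors of the training frames and use the reconstruction as the
    background estimate.  `frames` has shape (n_frames, n_pixels)."""
    mean = frames.mean(axis=0)
    X = frames - mean
    # Eigenvectors of the sample covariance via SVD; rows of Vt span them.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    B = Vt[:k]                          # k dominant eigenvectors
    def background_of(frame):
        c = B @ (frame - mean)          # project onto the eigenspace
        return mean + B.T @ c           # reconstruction = background model
    return background_of

# Toy example: a static background plus one frame with a bright "object".
rng = np.random.default_rng(0)
bg = rng.random(100)
frames = np.tile(bg, (20, 1)) + 0.01 * rng.standard_normal((20, 100))
model = eigenbackground(frames, k=3)
test_frame = bg.copy()
test_frame[40:50] += 1.0                # foreground blob on pixels 40..49
fg_mask = np.abs(test_frame - model(test_frame)) > 0.5
```

Here the foreground is recovered by thresholding the reconstruction error; the paper's argument is that relying on the strongest eigenvectors for this reconstruction is the root cause of the well-known artifacts.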
Notes
For better demonstration, the frames’ order is perturbed.
BM is a user-friendly name for \(\Phi _M\), mentioned on page 1.
References
Oliver, N.M., Rosario, B., Pentland, A.P.: A Bayesian computer vision system for modeling human interactions. IEEE Trans. Pattern Anal. Mach. Intell. 22, 831–843 (2000)
De La Torre, F., Black, M.J.: Robust principal component analysis for computer vision. In: Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV 2001), vol. 1, pp. 362–369. IEEE (2001)
Skočaj, D., Leonardis, A.: Incremental and robust learning of subspace representations. Image Vis. Comput. 26(1), 27–38 (2008)
Dickinson, P., Hunter, A., Appiah, K.: A spatially distributed model for foreground segmentation. Image Vis. Comput. 27(9), 1326–1335 (2009)
Yuan, Y., Pang, Y., Pan, J., Li, X.: Scene segmentation based on ipca for visual surveillance. Neurocomputing 72(10–12), 2450–2454 (2009)
Casares, M., Velipasalar, S., Pinto, A.: Light-weight salient foreground detection for embedded smart cameras. Comput. Vis. Image Underst. 114(11), 1223–1237 (2010)
Dong, Y., Desouza, G.: Adaptive learning of multi-subspace for foreground detection under illumination changes. Comput. Vis. Image Underst. 115(1), 31–49 (2011)
Tzevanidis, K., Argyros, A.: Unsupervised learning of background modeling parameters in multicamera systems. Comput. Vis. Image Underst. 115(1), 105–116 (2011)
Guyon, C., Bouwmans, T., Zahzah, E.-H.: Robust principal component analysis for background subtraction: systematic evaluation and comparative analysis. In: Principal Component Analysis, Book 1, pp. 223–238. INTECH (2012)
Vosters, L., Shan, C., Gritti, T.: Real-time robust background subtraction under rapidly changing illumination conditions. Image Vis. Comput. 30(12), 1004–1015 (2012)
Krishna, M.G., Aradhya, V.M., Ravishankar, M., Babu, D.R.: LoPP: locality preserving projections for moving object detection. Procedia Technology 4, 624–628 (2012). 2nd International Conference on Computer, Communication, Control and Information Technology (C3IT-2012), February 25–26, 2012
Zhao, Y., Gong, H., Jia, Y., Zhu, S.-C.: Background modeling by subspace learning on spatio-temporal patches. Pattern Recognit. Lett. 33(9), 1134–1147 (2012)
Yeo, B., Lim, W., Lim, H.: Scalable-width temporal edge detection for recursive background recovery in adaptive background modeling. Appl. Soft Comput. J. 13(4), 1583–1591 (2013)
Seger, R., Wanderley, M., Koerich, A.: Automatic detection of musicians’ ancillary gestures based on video analysis. Expert Syst. Appl. 41(4), 2098–2106 (2014)
Spampinato, C., Palazzo, S., Kavasidis, I.: A texton-based kernel density estimation approach for background modeling under extreme conditions. Comput. Vis. Image Underst. 122, 74–83 (2014)
Bouwmans, T.: Traditional and recent approaches in background modeling for foreground detection: An overview. Comput. Sci. Rev. 11–12, 31–66 (2014)
Varadarajan, S., Miller, P., Zhou, H.: Region-based mixture of gaussians modelling for foreground detection in dynamic scenes. Pattern Recognit. 48(11), 3488–3503 (2015)
Shakeri, M., Zhang, H.: COROLA: a sequential solution to moving object detection using low-rank approximation. CoRR arXiv:1505.03566 (2015)
Dou, J., Li, J., Qin, Q., Tu, Z.: Moving object detection based on incremental learning low rank representation and spatial constraint. Neurocomputing 168, 382–400 (2015)
Xu, Z., Shi, P., Gu, I.Y.H.: An eigenbackground subtraction method using recursive error compensation. In: Zhuang, Y., Yang, S., Rui, Y., He, Q. (eds.) PCM 2006. Lecture Notes in Computer Science, vol. 4261, pp. 779–787. Springer (2006)
Chen, W., Tian, Y., Wang, Y., Huang, T.: Fixed-point gaussian mixture model for analysis-friendly surveillance video coding. Comput. Vis. Image Underst. 142, 65–79 (2016)
Wan, M., Gu, G., Qian, W., Ren, K., Chen, Q., Zhang, H., Maldague, X.: Total variation regularization term-based low-rank and sparse matrix representation model for infrared moving target tracking. Remote Sens. 10(4), 510 (2018)
Banu, S., Maheswari, N.: Background modelling using a q-tree based foreground segmentation. Scalable Comput. Pract. Exp. 21, 17–31 (2020)
Djerida, A., Zhao, Z., Zhao, J.: Background subtraction in dynamic scenes using the dynamic principal component analysis. IET Image Process. 14(2), 245–255 (2020)
Shah, N., Píngale, A., Patel, V., George, N.V.: An adaptive background subtraction scheme for video surveillance systems. In: 2017 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pp. 013–017 (2017)
Bouwmans, T.: Subspace learning for background modeling: a survey. Recent Patents Comput. Sci. 2, 223–234 (2009)
Cao, X.-J., Pan, B.-C., Zheng, S.-L., Zhang, C.-Y.: Motion object detection method based on piecemeal principal component analysis of dynamic background updating. In: 2008 International Conference on Machine Learning and Cybernetics, vol. 5, pp. 2932–2937 (2008)
Ziubiński, P., Garbat, P., Zawistowski, J.: Local eigen background substraction. In: Advances in Intelligent Systems and Computing, pp. 199–204. Springer International Publishing (2014)
Kim, J.-H., Kang, B.-D., Ahn, S.-H., Kim, H.-S., Kim, S.-K.: A real-time object detection system using selected principal components. In: Lecture Notes in Electrical Engineering, pp. 367–376. Springer Netherlands (2013)
Hughes, K., Grzeda, V., Greenspan, M.: Eigenbackground bootstrapping. In: 2013 International Conference on Computer and Robot Vision, pp. 196–201 (2013)
Stauffer, C., Grimson, W.E.L.: Adaptive background mixture models for real-time tracking. In: 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 1999), vol. 2, pp. 246–252. IEEE, Los Alamitos, CA, USA (1999)
Welford, B.P.: Note on a method for calculating corrected sums of squares and products. Technometrics 4(3), 419–420 (1962)
Stewart, G.W.: Introduction to Matrix Computations. Computer Science and Applied Mathematics. Academic Press (1973)
Hastie, T., Tibshirani, R., Friedman, J.: The elements of statistical learning: data mining, inference and prediction. Springer, 2 ed., (2008)
Bishop, C.M.: Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag, Berlin, Heidelberg (2006)
Ipsen, I.C.F., Nadler, B.: Refined perturbation bounds for eigenvalues of Hermitian and non-Hermitian matrices. SIAM J. Matrix Anal. Appl. 31(1), 40–53 (2009)
Zemlys, V.: Norm of a symmetric matrix. Mathematics Stack Exchange, https://math.stackexchange.com/questions/9302/norm-of-asymmetric-matrix (2011)
Appendix
Theorem 3
Suppose that \(A \in {\mathbb {C}}^{n\times n}\) is a Hermitian matrix, \(A'=A+yy^*\) is a rank-one update of A, and \(y \in {\mathbb {C}}^n\) is a column vector. If v and \(v'\) are the normalized eigenvectors of A and \(A'\) corresponding to their largest eigenvalues, then:
where \(E=yy^*\) is a small perturbation of A and \(\beta \) is a value unrelated to E.
Proof
Suppose that \(\lambda \) and \(\lambda '\) are the largest eigenvalues of A and \(A'\), respectively. By the definition of an eigenvector, we have
\(Av = \lambda v, \qquad A'v' = \lambda ' v'.\)
According to Proposition 2, there is an \(\epsilon >0\) such that \(\lambda +\epsilon = \lambda '\). Hence:
Thus:
where \(\beta = 2\alpha \) \(\square \)
Lemma 3
For a given Hermitian matrix \(A \in {\mathbb {C}}^{n\times n}\) and a column vector \(y \in {\mathbb {C}}^n\), we have:
Proof
See [36] . \(\square \)
Lemma 4
The norm of a symmetric matrix is the maximum absolute value of its eigenvalues.
Proof
We have
\(\Vert A\Vert _2 = \max _{\Vert x\Vert =1} \Vert Ax\Vert ,\)
where \(\Vert \cdot \Vert \) denotes the ordinary Euclidean norm. Squaring the objective, \(\Vert Ax\Vert ^2 = x^TA^2x\), makes the following step easier. This is a constrained optimization problem with Lagrange function
\(L(x,\mu ) = x^TA^2x - \mu (x^Tx - 1).\)
Taking the derivative with respect to x and equating it to zero, we get
\(A^2x = \mu x,\)
so any solution of this problem is an eigenvector of \(A^2\). Since \(A^2\) is symmetric, all its eigenvalues are real, and \(x^TA^2x\) attains its maximum on the set \(\Vert x\Vert ^2=1\) at the maximal eigenvalue of \(A^2\). Now, since A is symmetric, it admits the representation
\(A = Q\Lambda Q^T,\)
with Q an orthogonal matrix and \(\Lambda \) diagonal with the eigenvalues of A on its diagonal. For \(A^2\) we get
\(A^2 = Q\Lambda ^2 Q^T,\)
so the eigenvalues of \(A^2\) are the squares of the eigenvalues of A. The norm \(\Vert A\Vert _2\) is the square root of the maximum of \(x^TA^2x\) over \(x^Tx=1\), which is the square root of the maximal eigenvalue of \(A^2\), i.e., the maximal absolute eigenvalue of A [37]. \(\square \)
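Lemma 4 is easy to check numerically; a small NumPy sanity check (the matrix size and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                         # symmetric matrix
spectral_norm = np.linalg.norm(A, 2)      # ||A||_2 = largest singular value
max_abs_eig = np.max(np.abs(np.linalg.eigvalsh(A)))
assert np.isclose(spectral_norm, max_abs_eig)
```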
Lemma 5
Suppose \(A=uv^T\) where u and v are nonzero column vectors in \({{\mathbb {R}}}^n\), \(n\ge 3\). Then, \(\lambda =0\) and \(\lambda =v^Tu\) are the only eigenvalues of A.
Proof
\(\lambda =0\) is an eigenvalue of A since A is not of full rank. \(\lambda =v^Tu\) is also an eigenvalue of A since
\(Au = (uv^T)u = (v^Tu)u.\)
We assume \(v\ne 0\). The orthogonal complement of the linear subspace generated by v (i.e., the set of all vectors orthogonal to v) is \((n-1)\)-dimensional. Let \(\phi _1,\dots ,\phi _{n-1}\) be a basis for this space. Then these vectors are linearly independent, and \(uv^T \phi _i = (v\cdot \phi _i)u=0 \). Thus, the eigenvalue 0 has multiplicity \(n-1\), and there are no other eigenvalues besides it and \(v\cdot u\). \(\square \)
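Lemma 5 can also be verified numerically (the dimension n = 4 and seed are arbitrary): the spectrum of \(uv^T\) consists of \(v^Tu\) and \(n-1\) zeros.

```python
import numpy as np

rng = np.random.default_rng(2)
u, v = rng.standard_normal(4), rng.standard_normal(4)
A = np.outer(u, v)                        # rank-one matrix u v^T
eigs = np.linalg.eigvals(A)
nonzero = eigs[np.abs(eigs) > 1e-10]
assert nonzero.size == 1                  # the single eigenvalue v^T u
assert np.isclose(nonzero[0], v @ u)
assert np.count_nonzero(np.abs(eigs) <= 1e-10) == 3   # 0 with multiplicity n-1
```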
Proposition 2
In the previous lemmas (3, 4 and 5), assume that \(E=yy^*\); then there exists an \(0 \le \epsilon \le \left\Vert E\right\Vert \) such that:
\(\lambda ' = \lambda + \epsilon ,\)
where \(\lambda \) and \(\lambda '\) are the largest eigenvalues of A and \(A'=A+E\), respectively.
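A numerical illustration of Proposition 2 with a real symmetric A (the matrix size and seed are arbitrary): adding the positive semidefinite rank-one term \(yy^T\) shifts the largest eigenvalue upward by at most \(\Vert E\Vert = \Vert y\Vert ^2\).

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                      # Hermitian (here: real symmetric) matrix
y = rng.standard_normal(6)
E = np.outer(y, y)                     # rank-one perturbation, ||E|| = ||y||^2
lam = np.linalg.eigvalsh(A)[-1]        # largest eigenvalue of A
lam_p = np.linalg.eigvalsh(A + E)[-1]  # largest eigenvalue of A' = A + E
eps = lam_p - lam
assert 0 <= eps <= np.linalg.norm(E, 2) + 1e-12
```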
Cite this article
Amintoosi, M., Farbiz, F. Eigenbackground Revisited: Can We Model the Background with Eigenvectors?. J Math Imaging Vis 64, 463–477 (2022). https://doi.org/10.1007/s10851-022-01080-4