Abstract
Various problems in object detection and tracking have attracted researchers to develop methodologies for solving them. The occurrence of camouflage is one such challenge that makes object detection and tracking more complex. However, detecting and tracking camouflaged objects has received comparatively little attention because of the difficulty of the problem. In this article, we propose a tracking-by-detection algorithm to detect and track camouflaged objects. To increase the separability between a camouflaged object and the background, we propose to integrate features (CIELab, histogram of oriented gradients and locally adaptive ternary pattern) from multiple cues (color, shape and texture) to represent the camouflaged object. A probabilistic neural network (PNN) is modified to construct an efficient discriminative appearance model for detecting camouflaged objects in video sequences. In the modified PNN, the large number of training patterns (many of which may be redundant) is reduced based on the motion of the object, which makes the detection process faster and also increases detection accuracy. Because of the high visual similarity between the camouflaged object and the background, the boundary of the camouflaged object is not well defined (i.e., the boundary may be smooth and/or discontinuous). In this context, a robust fuzzy energy based active contour model that uses both global and local information is proposed to extract the contour (boundary) of the detected camouflaged object for tracking. We present a realization of the proposed method and demonstrate its performance, both quantitatively and qualitatively, with respect to state-of-the-art techniques on several challenging sequences. Analysis of the results shows that the proposed technique can track fully or partially camouflaged objects, as well as objects in various complex environments, better than the existing techniques.
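For intuition about the detection stage, the following sketch implements a plain Specht-style PNN (not the authors' modified architecture): each class score is a Parzen-window average of Gaussian kernels centered at that class's training patterns, and a test pattern is assigned to the class with the larger score. The feature vectors, the smoothing parameter `sigma`, and the toy data are illustrative assumptions, standing in for the multi-cue (color, shape, texture) features used in the article.

```python
import numpy as np

def pnn_classify(test_x, train_X, train_y, sigma=0.5):
    """Plain PNN (Specht, 1990): the score of each class is the mean
    Gaussian-kernel response over that class's training patterns."""
    scores = {}
    for cls in np.unique(train_y):
        patterns = train_X[train_y == cls]
        # squared Euclidean distances from the test pattern
        d2 = np.sum((patterns - test_x) ** 2, axis=1)
        scores[cls] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

# Toy feature vectors (e.g., concatenated color/texture descriptors):
# class 1 = object, class 0 = background (illustrative values only).
train_X = np.array([[0.9, 0.8], [1.0, 1.1], [0.1, 0.2], [0.0, 0.1]])
train_y = np.array([1, 1, 0, 0])
print(pnn_classify(np.array([0.95, 0.9]), train_X, train_y))  # near the object patterns
```

Reducing redundant training patterns, as the modified PNN does, shrinks the per-class kernel sums above and hence the per-frame classification cost.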
Notes
In this article, the words object and target are used interchangeably.
In this article, training, target, \((t-1)\mathrm{th}\) and previous frame are used interchangeably.
In this article, test, target candidate, \(t\mathrm{th}\) and current frame are used interchangeably.
References
Akhloufi, M.A., & Bendada, A. (2010). Locally adaptive texture features for multispectral face recognition. In IEEE International Conference on Systems Man and Cybernetics (SMC) (pp. 3308–3314).
Avidan, S. (2007). Ensemble tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(2), 261–271.
Babenko, B., Yang, M., & Belongie, S. (2011). Robust object tracking with online multiple instance learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8), 1619–1632.
Bandouch, J., Jenkins, O. C., & Beetz, M. (2012). A self-training approach for visual tracking and recognition of complex human activity patterns. International Journal of Computer Vision, 99(2), 166–189.
Bishop, C. M. (1995). Neural networks for pattern recognition. Oxford: Clarendon Press.
Boult, T. E., Micheals, R. J., Gao, X., & Eckmann, M. (2001). Into the woods: Visual surveillance of noncooperative and camouflaged targets in complex outdoor settings. Proceedings of the IEEE, 89(10), 1382–1402.
Burrascano, P. (1991). Learning vector quantization for the probabilistic neural network. IEEE Transactions on Neural Networks, 2(4), 458–461.
Caselles, V., Kimmel, R., & Sapiro, G. (1997). Geodesic active contours. International Journal of Computer Vision, 22(1), 61–79.
Challa, S., Morelande, M. R., Musicki, D., & Evans, R. J. (2011). Fundamentals of object tracking. Cambridge: Cambridge University Press.
Chan, T. F., & Vese, L. A. (2001). Active contours without edges. IEEE Transactions on Image Processing, 10(2), 266–277.
Chandesa, T., Pridmore, T., Bargiela, A. (2009). Detecting occlusion and camouflage during visual tracking. In IEEE International Conference on Signal and Image Processing Applications (ICSIPA) (pp. 468–473).
Cohen, L. D., & Kimmel, R. (1997). Global minimum for active contour models: A minimal path approach. International Journal of Computer Vision, 24(1), 57–78.
Conte, D., Foggia, P., Percannella, G., Tufano, F., Vento, M. (2009). An algorithm for detection of partially camouflaged people. In 6th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (pp. 340–345).
Copeland, A.C., Trivedi, M.M. (1997). Models and metrics for signature strength evaluation of camouflaged targets. In AeroSense (pp. 194–199).
Cremers, D., Rousson, M., & Deriche, R. (2007). A review of statistical approaches to level set segmentation: Integrating color, texture, motion and shape. International Journal of Computer Vision, 72(2), 195–215.
Dalal, N., & Triggs, B. (2005). Histograms of oriented gradients for human detection. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1, 886–893.
Desai, C., Ramanan, D., & Fowlkes, C. C. (2011). Discriminative models for multi-class object layout. International Journal of Computer Vision, 95(1), 1–12.
Di Lascio, R., Foggia, P., Percannella, G., Saggese, A., & Vento, M. (2013). A real time algorithm for people tracking using contextual reasoning. Computer Vision and Image Understanding, 117(8), 892–908.
Du, H., Jin, X., Mao, X. (2012). Digital camouflage images using two-scale decomposition. In Computer Graphics Forum (pp. 2203–2212).
Fox, C. W. (1988). An introduction to the calculus of variations. New York: Dover Publications Inc.
Freeman, W.T., & Roth, M. (1995). Orientation histograms for hand gesture recognition. In IEEE International Workshop on Automatic Face and Gesture Recognition (pp. 296–301).
Gonzalez, R. C., & Woods, R. E. (2008). Digital image processing. Singapore: Pearson Education.
Gretzmacher, F.M., Ruppert, G.S., Nyberg, S. (1998). Camouflage assessment considering human perception data. In Aerospace/Defense Sensing and Controls (pp. 58–67).
Hao, W., Zhang, B., & Tian, W. (2007). Head tracking by means of probabilistic neural networks. Measurement Science and Technology, 18(7), 1999–2009.
Haralick, R. M., Shanmugam, K., & Dinstein, I. H. (1973). Textural features for image classification. IEEE Transactions on Systems, Man and Cybernetics, 3(6), 610–621.
Hare, S., Saffari, A., Torr, P.H. (2011). Struck: Structured output tracking with kernels. In IEEE International Conference on Computer Vision (ICCV) (pp. 263–270).
Harville, M., Gordon, G., Woodfill, J. (2001). Foreground segmentation using adaptive mixture models in color and depth. In Proceedings on IEEE Workshop on Detection and Recognition of Events in Video (pp. 3–11).
He, L., & Osher, S. (2007). Solving the Chan–Vese model by a multiphase level set algorithm based on the topological derivative. In Proceedings of the 1st International Conference on Scale Space and Variational Methods in Computer Vision, SSVM (pp. 777–788).
Henriques, J. F., Caseiro, R., Martins, P., & Batista, J. (2015). High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3), 583–596.
Hou, J. Y. Y. H. W., & Li, J. (2011). Detection of the mobile object with camouflage color under dynamic background based on optical flow. Procedia Engineering, 15, 2201–2205.
Huang, Z.Q., & Jiang, Z. (2005). Tracking camouflaged objects with weighted region consolidation. In Proceedings on Digital Image Computing: Techniques and Applications (DICTA) (pp. 24–31).
Jia, X., Lu, H., Yang, M.H. (2012). Visual tracking via adaptive structural local sparse appearance model. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1822–1829).
KaewTrakulPong, P., & Bowden, R. (2003). A real time adaptive visual surveillance system for tracking low-resolution colour targets in dynamically changing scenes. Image and Vision Computing, 21(10), 913–929.
Kalal, Z., Mikolajczyk, K., & Matas, J. (2012). Tracking-learning-detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(7), 1409–1422.
Kass, M., Witkin, A., & Terzopoulos, D. (1988). Snakes: Active contour models. International Journal of Computer Vision, 1(4), 321–331.
Kasturi, R., Goldgof, D., Soundararajan, P., Manohar, V., Garofolo, J., Bowers, R., et al. (2009). Framework for performance evaluation of face, text, and vehicle detection and tracking in video: Data, metrics and protocol. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2), 319–336.
Krinidis, S., & Chatzis, V. (2009). Fuzzy energy-based active contours. IEEE Transactions on Image Processing, 18(12), 2747–2755.
Krinidis, S., & Krinidis, M. (2012). Fuzzy energy-based active contours exploiting local information. In 8th International Conference on Artificial Intelligence Applications and Innovations (AIAI’12) (pp. 27–30).
Kusy, M., & Kluska, J. (2013). Probabilistic neural network structure reduction for medical data classification. In Artificial Intelligence and Soft Computing (pp. 118–129).
Kwon, J., & Lee, K.M. (2010). Visual tracking decomposition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1269–1276).
Lan, X., Ma, A.J., Yuen, P.C. (2014). Multi-cue visual tracking using robust feature-level fusion based on joint sparse representation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1194–1201).
Lankton, S., & Tannenbaum, A. (2008). Localizing region-based active contours. IEEE Transactions on Image Processing, 17(11), 2029–2039.
Levi, K., & Weiss, Y. (2004). Learning object detection from a small number of examples: the importance of good features. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2, 53–60.
Li, C., Kao, C. Y., Gore, J. C., & Ding, Z. (2008). Minimization of region-scalable fitting energy for image segmentation. IEEE Transactions on Image Processing, 17(10), 1940–1949.
Li, C., Xu, C., Gui, C., & Fox, M. D. (2010). Distance regularized level set evolution and its application to image segmentation. IEEE Transactions on Image Processing, 19(12), 3243–3254.
Li, X., Hu, W., Shen, C., Zhang, Z., Dick, A., & Hengel, A. V. D. (2013). A survey of appearance models in visual object tracking. ACM Transactions on Intelligent Systems and Technology, 4(4), 1–58.
Maggio, E., & Cavallaro, A. (2011). Video tracking: Theory and practice. West Sussex: Wiley.
Malathi, T., & Bhuyan, K.M. (2013). Foreground object detection under camouflage using multiple camera-based codebooks. In Annual IEEE India Conference (INDICON) (pp. 1–6).
Mao, K. Z., Tan, K. C., & Ser, W. (2000). Probabilistic neural-network structure determination for pattern classification. IEEE Transactions on Neural Networks, 11(4), 1009–1016.
Metz, C. E. (1978). Basic principles of ROC analysis. Seminars in Nuclear Medicine, 8, 283–298.
Mondal, A., Ghosh, S., & Ghosh, A. (2014). Efficient silhouette-based contour tracking using local information. Soft Computing, 20, 1–21.
Mumford, D., & Shah, J. (1989). Optimal approximations by piecewise smooth functions and associated variational problems. Communications on Pure and Applied Mathematics, 42(5), 577–685.
Musavi, M. T., Chan, K. H., Hummels, D. M., & Kalantri, K. (1994). On the generalization ability of neural network classifiers. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(6), 659–663.
Ning, J., Zhang, L., Zhang, D., & Wu, C. (2009). Robust object tracking using joint color-texture histogram. International Journal of Pattern Recognition and Artificial Intelligence, 23(07), 1245–1263.
Ojala, T., Pietikäinen, M., & Harwood, D. (1996). A comparative study of texture measures with classification based on featured distributions. Pattern Recognition, 29(1), 51–59.
Osher, S., & Sethian, J. A. (1988). Fronts propagating with curvature dependent speed: Algorithms based on Hamilton–Jacobi formulation. Journal of Computational Physics, 79(1), 12–49.
Pan, Y., Birdwell, D.J., Djouadi, S.M. (2006). Efficient implementation of the Chan-Vese models without solving PDEs. In Proceedings of International Workshop on Multimedia Signal Processing (pp. 350–353).
Raghu, P., & Yegnanarayana, B. (1998). Supervised texture classification using a probabilistic neural network and constraint satisfaction model. IEEE Transactions on Neural Networks, 9(3), 516–522.
Ross, D. A., Lim, J., Lin, R. S., & Yang, M. H. (2008). Incremental learning for robust visual tracking. International Journal of Computer Vision, 77(1–3), 125–141.
Santner, J., Leistner, C., Saffari, A., Pock, T., Bischof, H. (2010). PROST: Parallel robust online simple tracking. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 723–730).
Sevilla-Lara, L., & Learned-Miller, E. (2012). Distribution fields for tracking. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1910–1917).
Shyu, K. K., Pham, V. T., Tran, T. T., & Lee, P. L. (2012). Global and local fuzzy energy-based active contours for image segmentation. Nonlinear Dynamics, 67(2), 1559–1578.
Singh, S. K., Dhawale, C. A., & Misra, S. (2013). Survey of object detection methods in camouflaged image. IERI Procedia, 4, 351–357.
Song, B., & Chan, T. (2002). A fast algorithm for level set based optimization. UCLA CAM Report 02-68.
Specht, D. F. (1990). Probabilistic neural networks. Neural Networks, 3(1), 109–118.
Suard, F., Rakotomamonjy, A., Bensrhair, A. (2006). Pedestrian detection using Infrared images and histograms of oriented gradients. In IEEE Conference on Intelligent Vehicles (pp. 206–212).
Talu, M. F. (2013). ORACM: Online region-based active contour model. Expert Systems with Applications, 40(16), 6233–6240.
Tan, X., & Triggs, B. (2010). Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Transactions on Image Processing, 19(6), 1635–1650.
Tang, S., Andriluka, M., & Schiele, B. (2014). Detection and tracking of occluded people. International Journal of Computer Vision, 110(1), 58–69.
Tran, T. T., Pham, V. T., & Shyu, K. K. (2014). Image segmentation using fuzzy energy-based active contour with shape prior. Journal of Visual Communication and Image Representation, 25(7), 1732–1745.
Traven, H. G. (1991). A neural network approach to statistical pattern classification by ‘semiparametric’ estimation of probability density functions. IEEE Transactions on Neural Networks, 2(3), 366–377.
Wang, J., & Yagi, Y. (2008). Integrating color and shape-texture features for adaptive real-time object tracking. IEEE Transactions on Image Processing, 17(2), 235–240.
Wu, Y., Lim, J., Yang, M.H. (2013). Online object tracking: A benchmark. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 2411–2418).
Wu, Y., Ma, W., Gong, M., Li, H., & Jiao, L. (2015). Novel fuzzy active contour model with kernel metric for image segmentation. Applied Soft Computing, 34, 301–311.
Yan, W., Weber, C., & Wermter, S. (2011). A hybrid probabilistic neural model for person tracking based on a ceiling-mounted camera. Journal of Ambient Intelligence and Smart Environments, 3(3), 237–252.
Yang, B., & Nevatia, R. (2014). Multi-target tracking by online learning a CRF model of appearance and motion patterns. International Journal of Computer Vision, 107(2), 203–217.
Yang, F., Lu, H., & Yang, M. H. (2014). Robust superpixel tracking. IEEE Transactions on Image Processing, 23(4), 1639–1651.
Yezzi, A., Kichenassamy, S., Kumar, A., Olver, P., & Tannenbaum, A. (1997). A geometric Snake model for segmentation of medical imagery. IEEE Transactions on Medical Imaging, 16(2), 199–209.
Yilmaz, A., Javed, O., & Shah, M. (2006). Object tracking: A survey. ACM Computing Surveys, 38(4), 1264–1291.
Zhang, K., Zhang, L., Song, H., & Zhou, W. (2010). Active contours with selective local or global segmentation: a new formulation and level set method. Image and Vision Computing, 28(4), 668–676.
Zhang, T., Ghanem, B., Liu, S., & Ahuja, N. (2013). Robust visual tracking via structured multi-task sparse learning. International Journal of Computer Vision, 101(2), 367–383.
Zhang, X., Hu, W., Xie, N., Bao, H., & Maybank, S. (2015). A robust tracking system for low frame rate video. International Journal of Computer Vision, 115, 1–26.
Zhong, M., Coggeshall, D., Ghaneie, E., Pope, T., Rivera, M., Georgiopoulos, M., et al. (2007). Gap-based estimation: Choosing the smoothing parameters for probabilistic and general regression neural networks. Neural Computation, 19(10), 2840–2864.
Zhong, W., Lu, H., & Yang, M. H. (2014). Robust object tracking via sparse collaborative appearance model. IEEE Transactions on Image Processing, 23(5), 2356–2368.
Acknowledgments
The authors would like to thank the editor and the reviewers for their thorough and constructive comments, which helped to enhance the quality of the manuscript. Funding from the U.S. Army through the project “Processing and Analysis of Aircraft Images with Machine Learning Techniques for Locating Objects of Interest” (Contract No. FA5209-08-P-0241) is also gratefully acknowledged.
Communicated by T.E. Boult.
Electronic supplementary material
Appendices
Appendix 1
The proposed energy function F in Eq. (15) is convex with respect to u.
Proof:
Let \(f_A(\mathbf X ^{t}) = \left[ {u(\mathbf X ^{t})} \right] ^m \left\| {I(\mathbf X ^{t}) - c_{1} } \right\| ^2\), where \(f_A : \varOmega \rightarrow \mathfrak {R}\). Therefore, \(F_A(\mathbf X ^{t}) = \int \limits _\varOmega {f_A(\mathbf X ^{t})} d\mathbf X ^{t}\).
Now let \( \mathbf X _1 \equiv (x_1 ,y_1 )\) and \(\mathbf X _2 \equiv (x_2 ,y_2 ) \in \varOmega \). For any \(\theta \in [0,1]\), we have \(\theta \mathbf X _1 + (1-\theta )\mathbf X _2 = \big (\theta (x_1 - x_2) + x_2,\; \theta (y_1 - y_2) + y_2 \big )\). Since \(x_1\), \(x_2\) \(\in \mathfrak {R}\) and \(\theta \in [0,1]\), \(\theta \left( {x_1 - x_2 } \right) + x_2 \in \mathfrak {R}\); similarly, \(\theta \left( {y_1 - y_2 } \right) + y_2 \in \mathfrak {R}\). Therefore \(\theta \mathbf X _1 + (1-\theta )\mathbf X _2 \in \varOmega \), i.e., the domain of \(f_A\), \(\varOmega \equiv \mathfrak {R}^{2}\), is convex.
Differentiating Eq. (21) w.r.t. \(u\), we have \(\frac{{\partial f_A }}{{\partial u}} = m\left[ {u(\mathbf X ^{t})} \right] ^{m-1} \left\| {I(\mathbf X ^{t}) - c_{1} } \right\| ^2\). Again differentiating \(\frac{{\partial f_A }}{{\partial u}}\) w.r.t. \(u\), we have \(\frac{{\partial ^2 f_A }}{{\partial u^{2} }} = m(m-1)\left[ {u(\mathbf X ^{t})} \right] ^{m-2} \left\| {I(\mathbf X ^{t}) - c_{1} } \right\| ^2\). Now \(\frac{{\partial ^2 f_A }}{{\partial u^{2} }} \ge 0\), since \(m > 1\), \(u(\mathbf X ^{t}) \in [0,1]\) and \(\left\| {I(\mathbf X ^{t}) - c_{1} } \right\| ^2 \ge 0\).
Since the domain of \(f_A\) is convex and \(\frac{{\partial ^2 f_A }}{{\partial u^{2} }} \ge 0\), \(f_A\) is convex. Thus, \(\forall \mathbf Y _1, \mathbf Y _2 \in \varOmega \equiv \mathfrak {R}^{2}\) and \(\theta \in [0,1]\), the relation \(f_A \big (\theta \mathbf Y _1 + (1-\theta )\mathbf Y _2 \big ) \le \theta f_A (\mathbf Y _1) + (1-\theta ) f_A (\mathbf Y _2)\) (Eq. (22)) holds. Integrating both sides of Eq. (22) over \(\varOmega \), we have \(\int \limits _\varOmega f_A \big (\theta \mathbf Y _1 + (1-\theta )\mathbf Y _2 \big ) d\mathbf X ^{t} \le \theta \int \limits _\varOmega f_A (\mathbf Y _1) d\mathbf X ^{t} + (1-\theta )\int \limits _\varOmega f_A (\mathbf Y _2) d\mathbf X ^{t}\). But \(\int \limits _\varOmega {f_A (\mathbf X ^{t})d\mathbf X ^{t}} = F_A(\mathbf X ^{t}) \). Therefore, \(F_A \big (\theta \mathbf Y _1 + (1-\theta )\mathbf Y _2 \big ) \le \theta F_A (\mathbf Y _1) + (1-\theta ) F_A (\mathbf Y _2)\). Hence, \(F_{A}\) is convex.
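The convexity of \(f_A\) in the membership \(u\) can also be checked numerically. The sketch below assumes the fuzzy fitting term \(f_A(u) = u^m \left\| I - c_1 \right\| ^2\) with a fuzzifier \(m > 1\) and a fixed fitting value (both illustrative), and verifies the defining inequality \(f_A(\theta u_1 + (1-\theta )u_2) \le \theta f_A(u_1) + (1-\theta ) f_A(u_2)\) on a grid of memberships in \([0,1]\).

```python
import itertools
import numpy as np

m, fit = 2.0, 3.7  # fuzzifier m > 1 and a fixed value of ||I - c1||^2 (illustrative)

def f_A(u):
    # fuzzy fitting term as a function of the membership u
    return (u ** m) * fit

# Check the convexity inequality on a grid of (u1, u2, theta) triples.
grid = np.linspace(0.0, 1.0, 21)
for u1, u2, th in itertools.product(grid, grid, grid):
    lhs = f_A(th * u1 + (1 - th) * u2)
    rhs = th * f_A(u1) + (1 - th) * f_A(u2)
    assert lhs <= rhs + 1e-12, (u1, u2, th)
print("f_A is convex on the grid")
```

This is of course only a spot check at sample points; the proof above establishes convexity from the sign of the second derivative.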
Similarly, let \(f_B(\mathbf X ^{t}) = \left[ {1 - u(\mathbf X ^{t})} \right] ^m \left\| {I(\mathbf X ^{t}) - c_{2} } \right\| ^2\), where \(f_B : \varOmega \rightarrow \mathfrak {R}\). Therefore \(F_B(\mathbf X ^{t}) = \int \limits _\varOmega f_B(\mathbf X ^{t})d\mathbf X ^{t}\). In a similar way, we can prove that \(F_{B}\) is also convex.
Again let \(f_{C_1}(\mathbf Y ^{t}) = W_\mathbf{X ^{t}{} \mathbf Y ^{t}} \left[ {u(\mathbf Y ^{t})} \right] ^m \left\| {I(\mathbf Y ^{t}) - c_{1} } \right\| ^2\) and \(f_C(\mathbf X ^{t}) = \left[ {u(\mathbf X ^{t})} \right] ^m f_{C_1}(\mathbf Y ^{t})\). Therefore \(F_{C_1}(\mathbf Y ^{t}) = \int \limits _\varOmega {f_{C_1}(\mathbf Y ^{t})} d\mathbf Y ^{t}\) and \(F_C(\mathbf X ^{t}) = \int \limits _\varOmega {f_C(\mathbf X ^{t})} d\mathbf X ^{t}\). Here \(f_{C_1 } : \varOmega \rightarrow \mathfrak {R}\) and the domain of \(f_{C_1 }\) is convex.
Now differentiating Eq. (28) w.r.t. \(u\), we have \(\frac{{\partial f_{C_1 } }}{{\partial u}} = m W_\mathbf{X ^{t}{} \mathbf Y ^{t}} \left[ {u(\mathbf Y ^{t})} \right] ^{m-1} \left\| {I(\mathbf Y ^{t}) - c_{1} } \right\| ^2\). Again differentiating \(\frac{{\partial f_{C_1 } }}{{\partial u}}\) w.r.t. \(u\), we have \(\frac{{\partial ^2 f_{C_1 } }}{{\partial u^{2} }} = m(m-1) W_\mathbf{X ^{t}{} \mathbf Y ^{t}} \left[ {u(\mathbf Y ^{t})} \right] ^{m-2} \left\| {I(\mathbf Y ^{t}) - c_{1} } \right\| ^2\). Now \(\frac{{\partial ^2 f_{C_1 } }}{{\partial u^{2} }} \ge 0\), since \(m > 1\), \(W_\mathbf{X ^{t}{} \mathbf Y ^{t}} \ge 0\), \(u(\mathbf Y ^{t}) \in [0,1]\) and \(\left\| {I(\mathbf Y ^{t}) - c_{1} } \right\| ^2 \ge 0\).
Since the domain of \(f_{C_1 }\) is convex and \(\frac{{\partial ^2 f_{C_1 } }}{{\partial u^{2} }} \ge 0\), \(f_{C_1 }\) is convex. Therefore, \(\forall \mathbf Y _1,\mathbf Y _2 \in \varOmega \) and \(\theta \in [0,1]\), the relation \(f_{C_1}\big (\theta \mathbf Y _1 + (1-\theta )\mathbf Y _2 \big ) \le \theta f_{C_1}(\mathbf Y _1) + (1-\theta ) f_{C_1}(\mathbf Y _2)\) holds. Integrating both sides over \(\varOmega \), we have \(\int \limits _\varOmega f_{C_1}\big (\theta \mathbf Y _1 + (1-\theta )\mathbf Y _2 \big ) d\mathbf Y ^{t} \le \theta \int \limits _\varOmega f_{C_1}(\mathbf Y _1) d\mathbf Y ^{t} + (1-\theta )\int \limits _\varOmega f_{C_1}(\mathbf Y _2) d\mathbf Y ^{t}\). But \(F_{C_1 } = \int \limits _\varOmega {f_{C_1 } } \left( \mathbf Y \right) d\mathbf Y ^{t}\); hence \(F_{C_{1}}\) is convex. Since \(f_{C_1}\) is convex and \(\left[ {u(\mathbf X ^{t})} \right] ^m \ge 0\), \(f_C = \left[ {u(\mathbf X ^{t})} \right] ^m f_{C_1 }\) is also convex.
Again let \(\overline{F_C } = \left[ {u(\mathbf X ^{t})} \right] ^m F_{C_1}\). Since \(F_{C_1}\) is convex and \(\left[ {u(\mathbf X ^{t})} \right] ^m \ge 0\), \(\overline{F_C }\) is also convex. Therefore, \(\forall \mathbf X _1, \mathbf X _2 \in \varOmega \) and \(\theta \in [0,1]\), the relation \(\overline{F_C }\big (\theta \mathbf X _1 + (1-\theta )\mathbf X _2 \big ) \le \theta \overline{F_C }(\mathbf X _1) + (1-\theta )\overline{F_C }(\mathbf X _2)\) holds. Integrating both sides, we have \(\int \limits _\varOmega \overline{F_C }\big (\theta \mathbf X _1 + (1-\theta )\mathbf X _2 \big ) d\mathbf X ^{t} \le \theta \int \limits _\varOmega \overline{F_C }(\mathbf X _1) d\mathbf X ^{t} + (1-\theta )\int \limits _\varOmega \overline{F_C }(\mathbf X _2) d\mathbf X ^{t}\). Therefore \(F_C = \int \limits _\varOmega {\overline{F_C } \left( \mathbf X \right) } d\mathbf X ^{t} = \int \limits _\varOmega {\left[ {u(\mathbf X ^{t})} \right] } ^m F_{C_1 } d\mathbf X ^{t}\) is convex. In a similar way, it can be shown that \(F_D\), defined analogously to \(F_C\) with \(1 - u\) and \(c_2\) in place of \(u\) and \(c_1\), is also convex. Since \(0 \le \beta \le 1\), \(1 - \beta \ge 0\) and \(\lambda _1 ,\lambda _2 > 0\), \(F\) is a nonnegative weighted sum (with weights \(\beta \), \(1-\beta \), \(\lambda _1\), \(\lambda _2\)) of the four convex functions \(F_A\), \(F_B\), \(F_C\), and \(F_D\).
Hence, F is convex with respect to u. Therefore, the proposed energy function is convex with respect to membership function \(u(\mathbf X ^{t})\). \(\square \)
Appendix 2
Since an image is discrete in nature, summation is used here instead of integration.
Let us assume two prototypes \(c_1\) and \(c_2\) that approximate the image intensity inside and outside the contour \(C\), respectively. They can be written as \(c_1 = \frac{\sum \limits _\varOmega {\left[ {u\left( X \right) } \right] ^m I\left( X \right) }}{\sum \limits _\varOmega {\left[ {u\left( X \right) } \right] ^m }}\) and \(c_2 = \frac{\sum \limits _\varOmega {\left[ {1 - u\left( X \right) } \right] ^m I\left( X \right) }}{\sum \limits _\varOmega {\left[ {1 - u\left( X \right) } \right] ^m }},\)
where I(X) is the intensity value at pixel location X, u(X) is the degree of membership of pixel X inside C, and m is the fuzzifier which determines the fuzziness present in the given image.
Therefore, the total fuzzy energy for the whole image can be computed as in Eq. (15), with the integrals over \(\varOmega \) replaced by summations over the pixels of the image.
Let us assume a pixel \(P\in I\) with intensity \(I_o\) and degree of membership \(u_o\). If we change the degree of membership of pixel \(P\) to the value \(u_n\) using Eq. (18), then \(c_1\) and \(c_2\) will change to new values \( \overline{c_1}\) and \(\overline{c_2}\), respectively. The new value of \(c_1\) is calculated as \(\overline{c_1} = c_1 + S_1 \left( I_o - c_1 \right) \), where \(a_1 = \sum \limits _\varOmega {\left[ {u\left( X \right) } \right] } ^m\) and \(S_1 = \frac{{u_n^m - u_o^m }}{{\sum \limits _\varOmega {\left[ {u\left( X \right) } \right] } ^m + u_n^m - u_o^m }}\) \( = \frac{{u_n^m - u_o^m }}{{a_1 + u_n^m - u_o^m }}\).
Similarly, \(\overline{c_2} = c_2 + S_2 \left( I_o - c_2 \right) \), where \(a_2 = \sum \limits _\varOmega {\left[ {1 - u\left( X \right) } \right] } ^m \) and \(S_2 = \frac{{\left( {1 - u_n } \right) ^m - \left( {1 - u_o } \right) ^m }}{{a_2 + \left( {1 - u_n } \right) ^m - \left( {1 - u_o } \right) ^m }}\).
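These incremental updates can be verified numerically. The sketch below assumes the usual fuzzy prototypes \(c_1 = \sum \nolimits _\varOmega [u(X)]^m I(X) / \sum \nolimits _\varOmega [u(X)]^m\) (and analogously for \(c_2\)), and checks that \(\overline{c_1} = c_1 + S_1(I_o - c_1)\) and \(\overline{c_2} = c_2 + S_2(I_o - c_2)\), with \(S_1\) and \(S_2\) as defined above, agree with recomputing the prototypes from scratch after changing the membership of one pixel. The intensities and memberships are illustrative.

```python
import numpy as np

m = 2.0
I = np.array([10.0, 12.0, 50.0, 55.0])   # toy intensities
u = np.array([0.9, 0.8, 0.2, 0.1])       # memberships (inside C)

def prototypes(u):
    # fuzzy-weighted mean intensities inside and outside the contour
    c1 = np.sum(u**m * I) / np.sum(u**m)
    c2 = np.sum((1 - u)**m * I) / np.sum((1 - u)**m)
    return c1, c2

c1, c2 = prototypes(u)
p, u_o, u_n, I_o = 2, u[2], 0.7, I[2]    # change the membership of pixel P

a1 = np.sum(u**m)
S1 = (u_n**m - u_o**m) / (a1 + u_n**m - u_o**m)
a2 = np.sum((1 - u)**m)
S2 = ((1 - u_n)**m - (1 - u_o)**m) / (a2 + (1 - u_n)**m - (1 - u_o)**m)
c1_bar = c1 + S1 * (I_o - c1)            # incremental updates
c2_bar = c2 + S2 * (I_o - c2)

u2 = u.copy(); u2[p] = u_n               # recompute from scratch for comparison
c1_direct, c2_direct = prototypes(u2)
assert np.isclose(c1_bar, c1_direct) and np.isclose(c2_bar, c2_direct)
print("incremental updates match direct recomputation")
```

The point of the incremental form is cost: updating one pixel's membership takes O(1) work instead of a full O(|Ω|) recomputation of the prototypes.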
From Eq. (15), it is seen that if the degree of membership \(u(X)\) is changed, then the energy of the model also changes. If \(F\) denotes the old energy and \(\overline{F}\) denotes the new energy due to the change of the degree of membership of the point \(P\), then each term of \(\overline{F}\) is obtained from the corresponding term of \(F\) by substituting the updated quantities \(\overline{c_1}\), \(\overline{c_2}\) and \(u_n\). Decomposing the updated local terms, we have \(\overline{F} _C = \overline{F} _{C_1 } + \overline{F} _{C_2 } + \overline{F} _{C_3 }\) and, similarly, \(\overline{F} _D = \overline{F} _{D_1 } + \overline{F} _{D_2 } + \overline{F} _{D_3 }\). Thus the energy difference \(\varDelta F = F - \overline{F}\) between the old energy (\(F\)) and the new energy (\(\overline{F}\)) due to the change of the degree of membership value can be obtained term by term from the above quantities, without recomputing the energy over the whole image.
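For intuition about why such membership updates drive the contour evolution, the sketch below implements the update idea for the global terms only of a fuzzy energy (the local terms and the exact form of Eq. (18) in the proposed model are omitted): for fixed prototypes and \(m = 2\), the FCM-style membership \(u = 1/\big (1 + (d_1/d_2)^{1/(m-1)}\big )\) minimizes \(u^m d_1 + (1-u)^m d_2\) pixel-wise, so applying it can only decrease the global energy. All names and data are illustrative.

```python
import numpy as np

m = 2.0
I = np.array([10.0, 11.0, 52.0, 49.0, 30.0])  # toy intensities
u = np.array([0.6, 0.55, 0.4, 0.45, 0.5])     # initial memberships

def energy(u, c1, c2):
    # global fuzzy fitting energy (Chan-Vese style, fuzzified)
    return np.sum(u**m * (I - c1)**2 + (1 - u)**m * (I - c2)**2)

c1 = np.sum(u**m * I) / np.sum(u**m)
c2 = np.sum((1 - u)**m * I) / np.sum((1 - u)**m)
F_old = energy(u, c1, c2)

d1, d2 = (I - c1)**2, (I - c2)**2
u_new = 1.0 / (1.0 + (d1 / d2) ** (1.0 / (m - 1)))  # pixel-wise minimizer
F_new = energy(u_new, c1, c2)
assert F_new <= F_old + 1e-9   # the update never increases the energy
print(F_old, "->", F_new)
```

In a fast implementation, one would instead evaluate \(\varDelta F\) incrementally per pixel, as derived above, and accept only membership changes that decrease the energy.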
Cite this article
Mondal, A., Ghosh, S. & Ghosh, A. Partially Camouflaged Object Tracking using Modified Probabilistic Neural Network and Fuzzy Energy based Active Contour. Int J Comput Vis 122, 116–148 (2017). https://doi.org/10.1007/s11263-016-0959-5