
Machine Vision and Applications, Volume 19, Issue 5–6, pp 443–455

Object matching in disjoint cameras using a color transfer approach

  • Kideog Jeong
  • Christopher Jaynes
Special Issue Paper

Abstract

Object appearance models are a consequence of illumination, viewing direction, camera intrinsics, and other conditions that are specific to a particular camera. As a result, a model acquired in one view is often inappropriate for use in other viewpoints. In this work we treat the appearance model distortion between two non-overlapping cameras as one in which some unknown color transfer function warps a known appearance model from one view to another. We demonstrate how to recover this function in the case where the distortion function is approximated as general affine and object appearance is represented as a mixture of Gaussians. Appearance models are brought into correspondence by searching for a bijection function that minimizes an entropic metric for model dissimilarity. These correspondences lead to a solution for the transfer function that brings the parameters of the models into alignment in the UV chromaticity plane. Finally, a set of these transfer functions acquired from a collection of object pairs is generalized to a single camera-pair-specific transfer function via robust fitting. We demonstrate the method in the context of a video surveillance network and show that recognition of subjects in disjoint views can be significantly improved using the new color transfer approach.
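The pipeline the abstract describes can be illustrated with a small sketch: represent each object's appearance in the UV chromaticity plane as a set of Gaussian components, pair the components of the two cameras' models with the bijection that minimizes a symmetric KL divergence (an entropic dissimilarity, as in the paper), and then fit an affine transfer A, b from the paired means. This is a minimal toy illustration, not the authors' implementation: the component values are invented, the bijection is found by brute-force permutation search, and ordinary least squares stands in for the paper's robust fitting step.

```python
import itertools
import numpy as np

def gauss_kl(m0, S0, m1, S1):
    """KL divergence KL(N(m0,S0) || N(m1,S1)) between two Gaussians."""
    d = len(m0)
    S1inv = np.linalg.inv(S1)
    diff = m1 - m0
    return 0.5 * (np.trace(S1inv @ S0) + diff @ S1inv @ diff
                  - d + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def best_bijection(comps_a, comps_b):
    """Exhaustively search component pairings, minimizing symmetric KL."""
    best, best_cost = None, np.inf
    for perm in itertools.permutations(range(len(comps_b))):
        cost = sum(gauss_kl(*comps_a[i], *comps_b[j]) +
                   gauss_kl(*comps_b[j], *comps_a[i])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

def fit_affine(src_means, dst_means):
    """Least-squares affine map (A, b) with dst ~= A @ src + b in the UV plane."""
    X = np.hstack([src_means, np.ones((len(src_means), 1))])  # homogeneous coords
    M, *_ = np.linalg.lstsq(X, dst_means, rcond=None)
    return M[:2].T, M[2]  # A (2x2), b (2,)

# Toy example: camera B's model is camera A's model warped by a known affine map.
A_true = np.array([[1.1, 0.05], [0.0, 0.9]])
b_true = np.array([0.02, -0.01])
means_a = [np.array([0.2, 0.3]), np.array([0.5, 0.1]), np.array([0.35, 0.45])]
covs_a = [0.01 * np.eye(2) for _ in means_a]
cam_a = list(zip(means_a, covs_a))
cam_b = [(A_true @ m + b_true, A_true @ S @ A_true.T) for m, S in cam_a]

perm = best_bijection(cam_a, cam_b)              # component correspondence
A, b = fit_affine(np.array(means_a),
                  np.array([cam_b[j][0] for j in perm]))
```

With noise-free toy components the recovered A and b match the true warp; on real data the paper instead aggregates many such per-object-pair estimates into one camera-pair transfer function by robust fitting, which downweights bad correspondences.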

Keywords

Gaussian Mixture Model · True Positive Rate · Video Surveillance · Color Histogram · Bijection Function
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Basri, R., Jacobs, D.W.: Lambertian reflectance and linear subspaces. IEEE Trans. Pattern Anal. Mach. Intell. 25(2), 218–233 (2003)
  2. Orwell, J., Remagnino, P., Jones, G.A.: Multi-camera color tracking. In: Proc. Second IEEE Workshop on Visual Surveillance (1999)
  3. Mittal, A., Davis, L.: Unified multi-camera detection and tracking using region-matching. In: Proc. IEEE Workshop on Multi-Object Tracking (2001)
  4. Rahimi, A., Dunagan, B., Darrell, T.: Simultaneous calibration and tracking with a network of non-overlapping sensors. In: Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 187–194 (2004)
  5. Kang, J., Cohen, I., Medioni, G.: Persistent objects tracking across multiple non-overlapping cameras. In: Proc. IEEE WACV/MOTION 2005, vol. 2 (2005)
  6. Shan, Y., Sawhney, H.S., Kumar, R.: Unsupervised learning of discriminative edge measures for vehicle matching between non-overlapping cameras. In: Proc. IEEE CVPR, vol. 1, pp. 894–901 (2005)
  7. Shan, Y., Sawhney, H.S., Kumar, R.: Vehicle identification between non-overlapping cameras without direct feature matching. In: Proc. IEEE ICCV, vol. 1, pp. 378–385 (2005)
  8. Porikli, F.M., Divakaran, A.: Multi-camera calibration, object tracking and query generation. In: Proc. IEEE Int. Conf. Multimedia and Expo (ICME), vol. 1, pp. 653–656 (2003)
  9. Javed, O., Shafique, K., Shah, M.: Appearance modeling for tracking in multiple non-overlapping cameras. In: Proc. IEEE CVPR, pp. 20–26 (2005)
  10. Swain, M.J., Ballard, D.H.: Color indexing. Int. J. Comput. Vis. 7, 11–32 (1991)
  11. Stauffer, C., Grimson, W.E.L.: Learning patterns of activity using real-time tracking. IEEE Trans. Pattern Anal. Mach. Intell. (2000)
  12. Elgammal, A., Duraiswami, R., Harwood, D., Davis, L.: Background and foreground modeling using nonparametric kernel density for visual surveillance. Proc. IEEE (2002)
  13. Xiong, Q., Jaynes, C.: Multi-resolution background modeling of dynamic scenes using weighted match filters. In: Proc. ACM 2nd Int. Workshop on Video Surveillance & Sensor Networks (VSSN '04), New York, pp. 88–96. ACM Press (2004)
  14. Jeong, K., Jaynes, C.: Moving shadow detection using a combined geometric and color classification approach. In: Proc. IEEE MOTION (2005)
  15. Wren, C., Azarbayejani, A., Darrell, T., Pentland, A.: Pfinder: real-time tracking of the human body. IEEE Trans. Pattern Anal. Mach. Intell. (1997)
  16. McKenna, S., Raja, Y., Gong, S.: Tracking colour objects using adaptive mixture models. Image Vis. Comput. 225–231 (1999)
  17. Sanders, N., Jaynes, C.: Class-specific color camera calibration with application to object recognition. In: Proc. IEEE WACV (2005)
  18. Bauml, K.: Color appearance: effects of illuminant changes under different surface collections. J. Opt. Soc. Am. A 11(2), 531–542 (1994)
  19. Speigle, J.M., Brainard, D.H.: Predicting color from gray: the relationship between achromatic adjustment and asymmetric matching. J. Opt. Soc. Am. A 16(10), 2370–2376 (1999)
  20. Werner, J.S., Schefrin, B.E.: Loci of achromatic points throughout the life span. J. Opt. Soc. Am. A 10(7), 1509–1516 (1993)
  21. Grossberg, M.D., Nayar, S.K.: Determining the camera response from images: what is knowable? IEEE Trans. Pattern Anal. Mach. Intell. 25, 1455–1467 (2003)
  22. Grossberg, M.D., Nayar, S.K.: Modeling the space of camera response functions. IEEE Trans. Pattern Anal. Mach. Intell. 26, 1272–1282 (2004)
  23. Dempster, A., Laird, N., Rubin, D.: Maximum likelihood from incomplete data via the EM algorithm. J. R. Statist. Soc. B 39 (1977)
  24. Gross, R., Yang, J., Waibel, A.: Growing Gaussian mixture models for pose invariant face recognition. In: Proc. 15th Int. Conf. Pattern Recognition, vol. 1, pp. 1088–1091 (2000)
  25. Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley, New York (1991)
  26. Kullback, S., Leibler, R.A.: On information and sufficiency. Ann. Math. Statist. 22, 79–86 (1951)
  27. Johnson, D.H., Orsak, G.: Relation of signal set choice to the performance of optimal non-Gaussian detectors. IEEE Trans. Commun. 41(9), 1319–1328 (1993)
  28. Walraven, J., Werner, J.S.: The invariance of unique white; a possible implication for normalizing cone action spectra. Vis. Res. 31, 2185–2193 (1991)
  29. Rousseeuw, P.J., Leroy, A.M.: Robust Regression and Outlier Detection. Wiley, New York (1987)
  30. Jaynes, C., Kale, A., Sanders, N., Grossmann, E.: The Terrascope dataset: a scripted multi-camera indoor video surveillance dataset with ground-truth. In: Proc. IEEE Workshop on VS-PETS (2005)
  31. Porikli, F.: Inter-camera color calibration using cross-correlation model function. In: Proc. IEEE Int. Conf. Image Processing (2003)
  32. Drew, M.S., Wei, J., Li, Z.: Illumination-invariant color object recognition via compressed chromaticity histograms of color-channel-normalized images. In: Proc. Sixth Int. Conf. Computer Vision, pp. 533–540 (1998)
  33. Finlayson, G.D., Schiele, B., Crowley, J.L.: Comprehensive colour image normalization. In: Proc. European Conf. Computer Vision (1998)
  34. Barnard, K., Cardei, V., Funt, B.: A comparison of color constancy algorithms. Part two: experiments with image data. IEEE Trans. Image Process. 11(9), 985–996 (2002)
  35. Barnard, K., Martin, L., Coath, A., Funt, B.: A comparison of color constancy algorithms. Part one: methodology and experiments with synthesized data. IEEE Trans. Image Process. 11(9), 972–984 (2002)
  36. Belhumeur, P.N., Kriegman, D.: What is the set of images of an object under all possible illumination conditions? Int. J. Comput. Vis. 28(3), 245–260 (1998)
  37. Berwick, D., Lee, S.: A chromaticity space for specularity-, illumination color- and illumination pose-invariant 3-D object recognition. In: Proc. ICCV, pp. 165–170 (1998)

Copyright information

© Springer-Verlag 2007

Authors and Affiliations

  1. Department of Computer Science and Center for Visualization and Virtual Environments, University of Kentucky, Lexington, USA
