Encyclopedia of Color Science and Technology

2016 Edition
| Editors: Ming Ronnier Luo

Color Constancy

Reference work entry
DOI: https://doi.org/10.1007/978-1-4419-8071-7_266

Definition

Color constancy refers to the ability of the human visual system to perceive stable object colors despite significant variation in illumination. Color constancy is also a desired capability in machine vision, and in image processing it is widely used in white balancing algorithms. Color constancy has been an active research topic for the past 100 years. For a thorough understanding of this subject, please see the following recent reviews [4, 14, 16, 18, 35, 36, 58, 72].

Overview

Figure 1 shows photos of the same scene under two illumination conditions. The right photo was taken in the morning, when the dominant light source was daylight, while the left photo was taken in the evening, when the dominant light source was tungsten light. If one were in the scene, one would perceive the mug under both illumination conditions to have the same orange color, even though individual pixels from the same location on the mug in the two photos appear different (see the patches above the photos). Humans exhibit very good color constancy under natural viewing conditions (see a recent review [16]). However, constancy can also be poor under certain conditions. Figure 2 illustrates a scenario in which color constancy fails for most observers. Color constancy is important in real-world tasks such as object and scene recognition and visual search [67, 81].
Color Constancy, Fig. 1

Images of the same thermal mug lit under two illuminants (tungsten illumination and window daylight). Top: the patches show rectangular regions filled with the color from roughly the same locations on the mug in the two images. Bottom: the images from which the patches were extracted. The image on the left was taken under two tungsten lamps; the image on the right was taken under daylight from the windows. The photographs were taken by the author using a Canon EOS Rebel T2 digital SLR camera with a 50 mm fixed lens, with the camera's automatic white balance function disabled

Color Constancy, Fig. 2

Failures of color constancy. Left: a photograph of fruit taken under a monochromatic low-pressure sodium light. Right: the same scene illuminated by normal broadband light sources. Most observers cannot tell the color of the bell pepper from the left image (the images were downloaded from http://www.soxlamps.com/advantages_sub.htm). Such monochromatic light sources are, however, rare in the real world

Unlike the human visual system, a digital camera captures images in which the color of the scene may be shifted by a change in the external illumination, even though the intrinsic spectral properties of the objects in the scene (e.g., the mug) stay the same. The goal of a color constancy algorithm is to correct the color shift caused by the illumination change and to extract reliable color features that are invariant to the change in illumination [25, 43]. In a camera, the method of correcting the image color shift caused by changes in the scene illumination is called white balance.

The Problem of Color Constancy

Figure 3 illustrates the problem of color constancy. The color signal reaching the eye, C(λ), is a wavelength-by-wavelength product of the spectral power distribution of the illumination I(λ) and the surface reflectance function S(λ). Different light sources have different spectral power distributions; for example, daylight has a different spectral power distribution from that of a tungsten light source. The surface reflectance function S(λ) is an intrinsic property of a surface, determined by how the surface absorbs and reflects light. Under a neutral light source, objects with different surface reflectance functions appear to have different colors. The goal of color constancy is to extract the intrinsic surface reflectance function from the color signal. Color constancy is an ambiguous problem because different combinations of illuminant and surface can give rise to the same color signal (see Fig. 3). Many computational models suggest that the visual system first estimates the illuminant and then uses that estimate to recover the surface reflectance function [15, 40, 43, 68].
Color Constancy, Fig. 3

Illustration of color constancy. The spectrum of reflected light reaching the eye, C(λ), is the wavelength-by-wavelength product of the surface reflectance S(λ) and the illumination I(λ). The problem of color constancy is challenging because different combinations of illumination and surface reflectance can result in the same color signal (this illustration is adapted from David Brainard [16])
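The ambiguity illustrated in Fig. 3 can be made concrete in a few lines of code. The sketch below uses hypothetical smooth spectra (the ramps are invented for illustration, not measured data): the same color signal C(λ) arises from two different illuminant/surface pairs.

```python
import numpy as np

# Wavelength samples (400-700 nm); the spectra below are hypothetical.
wavelengths = np.arange(400, 701, 10)

# Hypothetical illuminant SPD I(lambda): bluish, daylight-like ramp.
I = np.linspace(1.2, 0.8, wavelengths.size)

# Hypothetical surface reflectance S(lambda): reddish surface.
S = np.linspace(0.2, 0.9, wavelengths.size)

# The color signal reaching the eye is the wavelength-by-wavelength product.
C = I * S

# A different illuminant/surface pair produces exactly the same signal:
scale = np.linspace(0.5, 1.5, wavelengths.size)
I2 = I * scale
S2 = S / scale
assert np.allclose(I2 * S2, C)  # the ambiguity at the heart of the problem
```

Because only the product C(λ) is observed, no algorithm can separate I(λ) from S(λ) without further assumptions; this is why the prior knowledge discussed later in the entry is needed.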

What Do We Know About Human Color Constancy?

How Is Human Color Constancy Measured?

After establishing the problem, the next question is how good the human visual system is at color constancy. To answer this, we need to measure color constancy in a controlled way. Three methods have commonly been used to measure color constancy in a laboratory setting: color naming, where observers name the colors of surfaces under different illuminations [46, 47]; asymmetric matching, where observers adjust a match surface under one illuminant to match the color appearance of a reference surface under another illuminant [2, 21, 80]; and achromatic adjustment, where observers adjust the chromaticity of a test surface until it appears achromatic and then repeat the task with the test embedded in scenes with different illuminants [13]. Asymmetric matching and achromatic adjustment reach similar conclusions about constancy when the two tasks are compared using the same scenes [21].

How Good Is Human Color Constancy?

Overall, these methods show that human color constancy is not perfect but generally very good. We can compute a color constancy index from either asymmetric matching or achromatic adjustment experiments, where 0 % means no constancy and 100 % means perfect constancy [2, 20, 21, 73]. Most studies of color constancy use simplified laboratory stimuli that consist of flat and matte surfaces under diffuse lighting conditions (for reviews, see [2, 13, 14, 16, 56]).
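One common formulation of such an index compares the observer's match against two reference points: the match predicted under perfect constancy and the match predicted if the observer ignored the illuminant change entirely. The sketch below assumes this formulation and uses invented chromaticity coordinates purely for illustration.

```python
import numpy as np

def constancy_index(match, perfect, no_constancy):
    """Constancy index = 1 - b/a, where b is the distance between the
    observer's match and the perfect-constancy prediction, and a is the
    distance between the no-constancy and perfect-constancy predictions
    (all points in a chromaticity space). Returns 1.0 for perfect
    constancy and 0.0 for no constancy."""
    match, perfect, no_constancy = map(np.asarray, (match, perfect, no_constancy))
    b = np.linalg.norm(match - perfect)
    a = np.linalg.norm(no_constancy - perfect)
    return 1.0 - b / a

# Hypothetical chromaticity coordinates for one asymmetric-matching trial:
perfect = [0.20, 0.48]   # match predicted under perfect constancy
no_const = [0.25, 0.52]  # match predicted with no constancy at all
observed = [0.21, 0.49]  # the observer's actual setting
ci = constancy_index(observed, perfect, no_const)
```

With these made-up numbers the observer's setting lies close to the perfect-constancy prediction, so the index comes out near 0.8, in the range the studies cited above report for good constancy.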

For such flat-matte-diffuse surfaces in simple scenes, especially when only the illuminant is varied in the scene, color constancy can be very good, with an average constancy index around 0.85 for real scenes [13] and around 0.75 for graphically simulated scenes [28]. However, constancy decreases significantly when both the illuminants and the surfaces are varied in a scene. Figure 4b shows an example of such a manipulation. The surfaces in the scene were manipulated so that the color of the background wall reaching the eye under illuminant A is the same as when a neutral-colored wall is illuminated under illuminant B (the rightmost and leftmost images). In this condition, the constancy index drops to 0.2 [80]. Similar effects have also been explored in previous studies [52, 53]. In these cases, constancy is reduced but not completely eliminated.
Color Constancy, Fig. 4

Various stimuli used in color constancy and color perception experiments. (a) Stimuli used to study color constancy by [28]. Synthetic images contain a flat test surface embedded in a relatively complex scene. (b) Rendered images used in a study of color constancy of 3D objects by Xiao and Brainard [80]. The leftmost scene was illuminated by a neutral illuminant; the middle scene was illuminated by a bluish illuminant; the rightmost scene was illuminated by the same bluish illuminant as the middle scene, but the reflectance of the background surface was changed so that the light reflected from the background is the same as in the leftmost image. (c) Photograph of a piece of fabric draped over an object. The left image shows the original photograph of the fabric; there is significant variation of color across the fabric's surface. Such color variation is important for material perception. The right image shows the same photograph in grayscale. A recent study on tactile and visual matching of fabric properties showed that observers make more mistakes predicting a fabric's tactile properties from grayscale images than from color images (photos taken by the author). (d) Translucent objects, such as liquid, stone, skin, and wax, present new challenges for color constancy research. The photo on the left shows a glass of fat-free milk illuminated from the front; the photo on the right shows the same glass of fat-free milk illuminated from the back. One can observe that the color appears slightly different when the illumination direction is varied (the photos were taken by Ioannis Gkioulekas from Harvard University [45])

Natural scenes are rarely composed of flat-matte-diffuse surfaces. First, objects usually have 3D shapes and are made of non-matte materials, such as metal, plastic, or wax, which look glossy or translucent. Second, the objects in three-dimensional space are arranged at different depths from the viewing point. Lastly, the lighting often has a complex spatial and spectral distribution. How good is human color constancy in natural scenes? The factors in natural scenes that have been studied include the effect of the three-dimensional pose of surfaces [8, 9, 10, 11, 29, 30, 44, 65, 66], the effect of lighting geometry on constancy [1, 60, 61, 62], the effect of stereo depth on constancy [79], and the effects of 3D shape and of the material an object is made of on color constancy [33, 59, 61, 80].

Cognitive factors have also been considered in studying color constancy [63, 71]. Some objects have characteristic colors: bananas are yellow and cucumbers are green. How do we take prior knowledge of objects' colors into account in achieving color constancy? A recent work by Kanematsu and Brainard [51] suggests that the effect of a familiar contextual object on color constancy is small.

Figure 4 depicts several scenes used previously in experiments on color constancy. These stimuli pose serious challenges to existing theories of color constancy.

Theories of Color Constancy

How does the visual system achieve color constancy? One approach to understanding constancy is to explain it in terms of low-level visual mechanisms such as chromatic adaptation. The color signal reaching the eye, C(λ), is encoded by the responses of three classes of light-sensitive photoreceptors in the retina, referred to as long (L)-, medium (M)-, and short (S)-wavelength-sensitive cones [17, 50]. Let us represent the spectral properties of the reflected light reaching the eye by the quantal absorption rates for the three classes of cones. The light signal, r, can be represented by a three-dimensional column vector (Eq. 1):
$$ r=\begin{bmatrix} r_L \\ r_M \\ r_S \end{bmatrix} $$
(1)
The cone signals are subject to adaptation. Von Kries proposed that the LMS cone signals are scaled by multiplicative factors, with the gain for each cone class set independently [76]. For each cone class, the gain is set in inverse proportion to the spatial mean of the signals from the cones of the same class. This algorithm is called von Kries adaptation. The adapted cone signals, a, can be obtained by multiplying the cone signals r by a diagonal matrix D, whose elements gL, gM, and gS represent the three gains:
$$ a=\begin{bmatrix} a_L \\ a_M \\ a_S \end{bmatrix}=\begin{bmatrix} g_L & 0 & 0 \\ 0 & g_M & 0 \\ 0 & 0 & g_S \end{bmatrix}\begin{bmatrix} r_L \\ r_M \\ r_S \end{bmatrix}=Dr $$
(2)
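As a minimal sketch, von Kries adaptation (Eq. 2) can be applied to a set of LMS responses across a scene; the LMS values below are invented for illustration.

```python
import numpy as np

def von_kries_adapt(cone_signals):
    """Von Kries adaptation: each cone class is scaled by a gain set in
    inverse proportion to the spatial mean of that class's signals.
    cone_signals: (N, 3) array of LMS responses across the scene."""
    gains = 1.0 / cone_signals.mean(axis=0)  # g_L, g_M, g_S
    D = np.diag(gains)                       # the diagonal matrix D in Eq. 2
    return cone_signals @ D.T                # a = D r at every location

# Hypothetical LMS signals under a reddish illuminant (L responses inflated):
r = np.array([[2.0, 1.0, 0.5],
              [4.0, 2.0, 1.0],
              [6.0, 3.0, 1.5]])
a = von_kries_adapt(r)
# After adaptation each cone class has unit mean: the common illuminant
# scaling has been discounted, leaving only the relative surface structure.
```

Note that the illuminant is discounted exactly here only because it acts as a pure per-channel scale factor; real scenes violate that assumption, which is one reason adaptation models fail on rich scenes.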
The von Kries adaptation inspired Land’s famous retinex theory [54], which has very wide application in camera color balance and can be used to explain human color constancy for the flat-matte-diffuse stimuli. The central principle of the retinex theory is that the lightness values at each pixel are calculated independently for each cone class. For an analysis of the retinex theory and color constancy, see a study by Brainard and Wandell [19].

Further along the visual pathway, in addition to cone adaptation, secondary adaptation mechanisms have also been proposed. These effects include gain control after the combination of the cone signals and subtractive rather than multiplicative modulation [48, 49, 64, 69, 77, 78].

The adaptation models predict color constancy quite well for flat-matte-diffuse scenes. However, they often fail to predict constancy for rich scenes [52]. We do not know how to obtain the values of the multiplicative gains from images that contain spatially rich information. That being said, the mechanistic approach can sometimes inspire new algorithms for color constancy [39].

Another approach is to use computational methods developed from a computer vision perspective. As described above, color constancy is an ill-posed problem. Bayesian methods combine the information contained in the observed scene with information given a priori about the likely physical configuration of the world [15]. In the case of color constancy, prior knowledge about the illuminants and surface reflectances can resolve the ambiguity. Earlier work used statistical constraints on illuminants and surface reflectances to solve color constancy, such as the gray-world, subspace, and physical-realizability algorithms [23, 27, 34, 57]. Brainard and Freeman [15] constructed prior distributions describing the probability of illuminants and surfaces in the world and then estimated the illuminant from the posterior distribution conditioned on the image intensity data. Brainard et al. [22] applied a similar Bayesian model to predict the degree of human color constancy across different manipulations and connected the variation in constancy to the prior distribution of the illuminant.
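A toy one-channel version of this Bayesian idea can be sketched as follows. All numbers and distributional choices here are illustrative assumptions, not those of [15]: each pixel intensity is modeled as illuminant × reflectance, reflectance is uniform on [0, 1] a priori, and the illuminant has a Gaussian prior; the posterior is evaluated on a grid and the MAP estimate is taken.

```python
import numpy as np

pixels = np.array([0.3, 0.9, 0.6, 1.2])   # hypothetical observed intensities
candidates = np.linspace(0.5, 3.0, 251)   # grid of illuminant hypotheses

def log_posterior(e):
    refl = pixels / e
    if np.any(refl > 1.0 + 1e-12):
        return -np.inf                     # physically unrealizable reflectance
    # Reflectance ~ Uniform(0, 1) implies p(pixel | e) = 1/e per pixel.
    log_lik = -pixels.size * np.log(e)
    log_prior = -0.5 * ((e - 1.5) / 0.5) ** 2   # Gaussian prior on illuminant
    return log_lik + log_prior

scores = np.array([log_posterior(e) for e in candidates])
e_map = candidates[np.argmax(scores)]
# Here the physical-realizability constraint (no reflectance above 1) pins the
# MAP estimate at the brightest pixel, 1.2: both the likelihood and the prior
# favor dimmer illuminants, but those cannot explain the data.
```

Even this toy version shows how the pieces named in the text interact: the prior pulls the estimate one way, the likelihood another, and hard physical constraints of the kind used in realizability algorithms can dominate both.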

What Do We Know About Machine Color Constancy?

While the human visual system is equipped with good color constancy, a digital camera has to rely on color balancing algorithms to discount the illumination and extract invariant object color. This process is also called color balance or white balance. The most popular methods are based on adaptation, such as the von Kries coefficient rule and Land's retinex theory discussed above [31, 54, 76]. But this type of model is restricted to simple scenes.

In some sense, there is significant overlap between the development of algorithms for machine color constancy and the modeling of human color constancy. However, one distinction is the choice of stimuli. To understand human color constancy, simple synthetic stimuli that allow systematic manipulation of scene parameters are often used as experimental stimuli. A successful machine color constancy algorithm, on the other hand, must correct illumination effects in real-world complex images (see a recent review [43]).

The best-known statistical method is the gray-world algorithm, which assumes that the average reflectance of a scene is gray [23]. Other similar algorithms include white patch and max-RGB [37, 38, 54] and shades of gray [32]. The gray-world algorithm fails if the average reflectance is not achromatic or if there is a large uniformly colored surface in the scene. To incorporate higher-order statistics in the form of image derivatives, a framework called gray edge has been proposed [74]. Chakrabarti et al. [24] go beyond statistics of per-pixel colors and model the spatial dependencies between pixels by decomposing the input images into spatial sub-bands and then modeling the color statistics separately in each sub-band.
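The per-pixel statistical estimators above differ only in which channel statistic they treat as the illuminant estimate. The sketch below (with an invented 2×2 test image) implements gray-world, max-RGB, and shades of gray, plus the diagonal white-balance correction that follows; the function names are ours, not a library API.

```python
import numpy as np

def estimate_illuminant(img, method="gray-world", p=6):
    """Estimate the illuminant chromaticity from an (H, W, 3) image.
    gray-world: mean of each channel; max-rgb: per-channel maximum;
    shades-of-gray: Minkowski p-norm mean, which interpolates between
    the two (p=1 is gray-world, p->inf approaches max-RGB)."""
    flat = img.reshape(-1, 3).astype(float)
    if method == "gray-world":
        est = flat.mean(axis=0)
    elif method == "max-rgb":
        est = flat.max(axis=0)
    elif method == "shades-of-gray":
        est = (flat ** p).mean(axis=0) ** (1.0 / p)
    else:
        raise ValueError(method)
    return est / np.linalg.norm(est)   # only the chromaticity matters

def white_balance(img, illum):
    """Divide out the estimated illuminant: a diagonal, von Kries-style
    correction, normalized so a neutral illuminant leaves the image unchanged."""
    return img / (illum * np.sqrt(3))

# A hypothetical 2x2 image with a reddish color cast:
img = np.array([[[0.9, 0.5, 0.4], [0.6, 0.3, 0.2]],
                [[0.8, 0.4, 0.3], [0.3, 0.2, 0.1]]])
illum = estimate_illuminant(img, "gray-world")
balanced = white_balance(img, illum)
# After gray-world balancing, the channel means of `balanced` are equal:
# the scene average has been forced to gray, which is exactly the assumption
# that breaks down when a large uniformly colored surface dominates the scene.
```

Running the same estimator on an image dominated by a green lawn would wrongly attribute the greenness to the illuminant, which is the failure mode noted in the text.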

Forsyth [34] introduced the gamut-mapping method. It is based on the assumption that only a limited set of colors (the canonical gamut) can occur under a given illuminant. The algorithm learns the canonical gamut from training images and estimates the illuminant from the colors observed in the input image.
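A deliberately simplified, per-channel sketch of this idea follows. Forsyth's algorithm works with full 3D convex hulls of diagonal maps; here we assume axis-aligned per-channel bounds (and invented numbers) purely to keep the sketch short, so this illustrates the constraint structure, not the actual algorithm.

```python
import numpy as np

# Canonical gamut, simplified to hypothetical per-channel maxima observed
# under the canonical illuminant during training:
canonical_max = np.array([0.9, 0.8, 0.7])

# Colors observed in the input image (hypothetical):
image_rgbs = np.array([[0.6, 0.2, 0.1],
                       [0.3, 0.4, 0.2]])

# Per channel, a diagonal gain g is feasible only if it maps every observed
# value inside the canonical gamut: g * max(channel) <= canonical max.
max_feasible_gain = canonical_max / image_rgbs.max(axis=0)

# A common heuristic picks the feasible map maximizing the corrected gamut's
# volume; with independent per-channel bounds that is the maximum feasible gain.
corrected = image_rgbs * max_feasible_gain
# The corrected colors now just touch the canonical gamut boundary.
```

The key point the sketch preserves is that the observed colors constrain the set of possible illuminants rather than determining a unique one, and a selection rule is still needed to pick a single estimate from the feasible set.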

Inspired by Brainard and Freeman [15], Rosenberg et al. [68] introduced a Bayesian model of color constancy utilizing a non-Gaussian probabilistic model of the image formation process and demonstrated that it can outperform the gamut-mapping algorithm. Gehler et al. [40] extended the Bayesian algorithm using new datasets, which allowed the algorithm to learn more precise priors for the illuminations.

A new thread of algorithms estimates the illuminant using high-level features [6, 41, 55, 75]. For example, Gijsenij and Gevers [42] proposed dynamically determining which color constancy algorithm to use for a specific image based on the scene category. Bianco and Schettini [5] proposed a model that estimates the illuminant from color statistics extracted from faces automatically detected in the image. It takes advantage of the fact that skin colors tend to cluster in color space, which provides a valid cue for estimating the illuminant. Inspired by mechanisms of the human visual system, a recent study by Gao et al. [39] built a physiologically based color constancy model that imitates the double-opponent mechanism of the visual system. The illuminant is estimated by searching for the maxima of the separate RGB channels of the responses of double-opponent cells in RGB space.

As camera sensor technology develops, several datasets captured by high-quality SLR cameras in RAW format have come into use in the community. Popular datasets include Ciurea's dataset [26], the Middlebury color database [24], and Barnard's datasets [3, 70].

Future Directions

Color constancy is an important and practical problem in both human and computer vision. Color constancy provides an excellent model system to understand how the visual system solves ambiguity. Significant progress has been made to understand color constancy of simple scenes. A big challenge is how to extend the theoretical model of color constancy for simple scenes to predict human performance in rich scenes.

Even though many computer vision algorithms are successful at correcting color bias caused by illumination, whether the human visual system uses similar algorithms to achieve constancy is poorly understood. A major obstacle to reconciling the two fields is scene and stimulus complexity. Images used in computer vision algorithms are often real photos or videos containing rich scenes, whereas modeling human color constancy requires simplified scenes and controlled conditions. Bridging the two fields is a promising direction for future research.

References

  1. 1.
    Adelson, E.H.: Lightness perception and lightness illusions. In: The New Cognitive Neurosciences, p. 339. MIT Press, Cambridge, MA (2000). Retrieved from http://www.cs.tau.ac.il/~hezy/Vision Seminar/Lightness Perception and Lightness Illusions.htmGoogle Scholar
  2. 2.
    Arend, L., Reeves, A.: Simultaneous color constancy. J. Opt. Soc. Am. A. 3(10), 1743–1751 (1986). Retrieved from http://www.opticsinfobase.org/abstract.cfm?&id=2483Google Scholar
  3. 3.
    Barnard, K., Martin, L., Funt, B., Coath, A.: A data set for color research. Color Res. Appl. 27(3), 147–151 (2002). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.3.4123&rep=rep1&type=pdfGoogle Scholar
  4. 4.
    Bianco, S., Schettini, R.: Computational color constancy In: Visual Information Processing (EUVIP), 2011 3rd European Workshop on, Paris, 4–6 July 2011, pp. 1–7. IEEE (2011) . doi:10.1109/EuVIP.2011.6045557Google Scholar
  5. 5.
    Bianco, S., Schettini, R.: Color constancy using faces. In: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on Biometrics Compendium, IEEE (2012)Google Scholar
  6. 6.
    Bianco, S., Ciocca, G., Cusano, C., Schettini, R.: Improving color constancy using indoor − outdoor image classification. IEEE Trans. Image Process. 17(12), 2381–2392 (2008). Retrieved from http://www.ivl.disco.unimib.it/publications/pdf/bianco2008improving-color.pdf
  7. 7.
    Bloj, M.G., Kersten, D., Hurlbert, A.C.: Perception of three-dimensional shape influences colour perception through mutual illumination. Nature 402(6764), 877–879 (1999). Retrieved from http://www.nature.com/nature/journal/v402/n6764/abs/402877a0.html
  8. 8.
    Bloj, M., Ripamonti, C., Mitha, K., Hauck, R., Greenwald, S., Brainard, D.H.: An equivalent illuminant model for the effect of surface slant on perceived lightness. J. Vis. 4(9), 735–746 (2004). doi:10.1167/4.9.6CrossRefGoogle Scholar
  9. 9.
    Bloj, M.G., Hurlbert, A.C.: An empirical study of the traditional Mach card effect. Perception Lond. 31(2), 233–246 (2002). Retrieved from http://www.perceptionjournal.com/perception/fulltext/p31/p01sp.pdf
  10. 10.
    Boyaci H, Maloney LT, Hersh S. The effect of perceived surface orientation on perceived surface albedo in binocularly viewed scenes. J Vis. 2003;3(8):541-553. Epub 2003 Sep 25CrossRefGoogle Scholar
  11. 11.
    Boyaci, H., Doerschner, K., Maloney, L.T.: Perceived surface color in binocularly viewed scenes with two light sources differing in chromaticity. J. Vis. 4(9) (2004). Retrieved from http://ww.journalofvision.org/content/4/9/1.full
  12. 12.
    Boyaci, H., Doerschner, K., Snyder, J., Maloney, L.: Surface color perception in three-dimensional scenes. Vis. Neurosci. 23(3/4), 311 (2006). Retrieved from http://www.bilkent.edu.tr/~hboyaci/Vision/Boyaci_Doerschner_Snyder_Maloney_VisNeuro_2006.pdfGoogle Scholar
  13. 13.
    Brainard, D.H.: Color constancy in the nearly natural image. 2. Achromatic loci. J. Opt. Soc. Am. A 15(2), 307–325 (1998). Retrieved from http://www.opticsinfobase.org/viewmedia.cfm?URI=josaa-15-2-307&seq=0Google Scholar
  14. 14.
    Brainard, D.H.: Color constancy. In: The Visual Neurosciences, vol. 1, pp. 948–961. MIT Press, Cambridge, MA (2004). Retrieved from http://www.cns.nyu.edu/csh04/Articles/Brainard-02.pdf
  15. 15.
    Brainard, D.H., Freeman, W.T.: Bayesian color constancy. J. Opt. Soc. Am. A 14(7), 1393–1411 (1997). Retrieved from http://www.opticsinfobase.org/viewmedia.cfm?uri=josaa-14-7-1393&seq=0Google Scholar
  16. 16.
    Brainard, D.H., Radonjić, A.: Color constancy. In: Werner, J.S., Chalupa, L.M. (eds.) The New Visual Neuroscience, pp. 545–556. MIT Press, Cambridge (2013)Google Scholar
  17. 17.
    Brainard, D.H., Stockman, A.: Colorimetry. (1995). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.140.3027Google Scholar
  18. 18.
    Brainard, D.H., Stockman, A.: Colorimetry. In: Bass, M. (ed.) OSA Handbook of Optics. McGraw-Hill, New York (2010)Google Scholar
  19. 19.
    Brainard, D.H., Wandell, B.A.: Analysis of the retinex theory of color vision. J. Opt. Soc. Am. A. 3(10), 1651–1661 (1986). Retrieved from http://www.opticsinfobase.org/viewmedia.cfm?URI=josaa-3-10-1651&seq=0Google Scholar
  20. 20.
    Brainard, D.H., Wandell, B.A.: A bilinear model of the illuminant’s effect on color appearance. In: Computational Models of Visual Processing, pp. 171–186. MIT Press, Cambridge, MA (1991)Google Scholar
  21. 21.
    Brainard, D.H., Brunt, W.A., Speigle, J.M.: Color constancy in the nearly natural image I. Asymmetric matches. J Opt Soc Am A Opt Image Sci Vis 14(9), 2091–2110 (1997). Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=9291602Google Scholar
  22. 22.
    Brainard, D.H., Longère, P., Delahunt, P.B., Freeman, W.T., Kraft, J.M., Xiao, B.: Bayesian model of human color constancy. J. Vis. 6(11) (2006). Retrieved from http://w.journalofvision.org/content/6/11/10.full
  23. 23.
    Buchsbaum, G.: A spatial processor model for object colour perception. J. Franklin Inst. 310(1), 1–26 (1980). Retrieved from http://www.sciencedirect.com/science/article/pii/0016003280900587
  24. 24.
    Chakrabarti, A., Scharstein, D., Zickler, T.: Color datasets. An empirical camera model for Internet color vision. In: Proceedings of the British Machine Vision Conference (BMVC) (2009)Google Scholar
  25. 25.
    Chakrabarti, A., Hirakawa, K., Zickler, T.: Color constancy with spatio-spectral statistics. IEEE Trans. Pattern Anal. Mach. Intell. 34(8), 1509–1519 (2012). http://cilab.knu.ac.kr/seminar/Seminar/2013/20130330ColorConstancywithSpatio-SpectralStatistics.pdf
  26. 26.
    Ciurea, F., Funt, B. A large image database for color constancy research. In: Proceedings of the Eleventh Color Imaging Conference (2003)Google Scholar
  27. 27.
    D’Zmura, M., Iverson, G., Singer, B.: Probabilistic color constancy. In Luce R.D., D’Zmura M., Hoffman D.D., Iverson G., Romney, K. (eds.) Geometric Representations of Perceptual Phenomena, pp. 187–202. Lawrence Erlbaum Associates, Mahwah (1995)Google Scholar
  28. 28.
    Delahunt, P.B., Brainard, D.H.: Does human color constancy incorporate the statistical regularity of natural daylight? J. Vis. 4(2) (2004). Retrieved from http://ww.journalofvision.org/content/4/2/1.full
  29. 29.
    Doerschner, K., Boyaci, H., Maloney, L.T.: Human observers compensate for secondary illumination originating in nearby chromatic surfaces. J. Vis. 4(2) (2004). Retrieved from http://wwww.journalofvision.org/content/4/2/3.full
  30. 30.
    Epstein, W.: Phenomenal orientation and perceived achromatic color. J. Psychol. 52(1), 51–53 (1961). Retrieved from http://www.tandfonline.com/doi/pdf/10.1080/00223980.1961.9916503
  31. 31.
    Finlayson, G.D., Drew, M.S., Funt, B.V.: Color constancy: generalized diagonal transforms suffice. J. Opt. Soc. Am. A 11(11), 3011–3019 (1994). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.121.872&rep=rep1&type=pdfGoogle Scholar
  32. 32.
    Finlayson, G.D., Hordley, S.D., Hubel, P.M.: Color by correlation: a simple, unifying framework for color constancy. IEE Trans. Pattern Anal. Mach. Intell. 23(11), 1209–1221 (2001). Retrieved from http://th.physik.uni-frankfurt.de/~triesch/courses/275vision/papers/finlayson_et_al_pami_2001.pdfGoogle Scholar
  33. 33.
    Fleming, R.W., Bülthoff, H.H.: Low-level image cues in the perception of translucent materials. ACM Trans. Appl. Percept. (TAP) 2(3), 346–382 (2005). Retrieved from http://dl.acm.org/citation.cfm?id=1077409
  34. 34.
    Forsyth, D.A.: A novel algorithm for color constancy. Int. J. Comp. Vis. 5(1), 5–35 (1990). Retrieved from http://link.springer.com/article/10.1007/BF00056770
  35. 35.
    Foster, D.H.: Does colour constancy exist? Trends Cogn. Sci. 7(10), 439–443 (2003). Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=14550490Google Scholar
  36. 36.
    Foster, D.H.: Color constancy. Vision Res. 51(7), 674–700 (2011). Retrieved from http://www.sciencedirect.com/science/article/pii/S0042698910004402
  37. 37.
    Funt, B., Shi, L.: The effect of exposure on MaxRGB color constancy. In: Proceedings of the SPIE, San Jose. Human Vision and Electronic Imaging XV, vol. 7527 (2010)Google Scholar
  38. 38.
    Funt, B., Shi, L.: The rehabilitation of MaxRGB . In: Proceedings of the IS&T Eighteenth Color Imaging Conference, San Antonio (2010)Google Scholar
  39. 39.
    Gao, S., Yang, K., Li, C., Li, Y.: A color constancy model with double-opponency mechanisms. In: Computer Vision (ICCV), 2013 IEEE International Conference on, Sydney, 1–8 Dec 2013, pp. 929–936. IEEE (2013). doi:10.1109/ICCV.2013.119Google Scholar
  40. 40.
    Gehler, P. V., Rother, C., Blake, A., Minka, T., Sharp, T.: Bayesian color constancy revisited (2008)Google Scholar
  41. 41.
    Gijsenij, A., Gevers, T.: Color constancy using natural image statistics. IEEE Trans. Patt. Anal. Mach. Intell. 33(4), 687–698 (2007). doi:10.1109/TPAMI.2010.93Google Scholar
  42. 42.
    Gijsenij, A., Gevers, T.: Color constancy using natural image statistics and scene semantics. IEE Trans. Pattern Anal. Mach. Intell. 33(4), 687–698 (2011). Retrieved from http://staff.science.uva.nl/~gevers/pub/GeversPAMI11.pdfGoogle Scholar
  43. 43.
    Gijsenij, A., Gevers, T., van de Weijer, J.: Computational color constancy: survey and experiments. IEEE Trans. Image Process. 20(9), 2475–2489 (2011). doi:10.1109/TIP.2011.2118224ADSMathSciNetCrossRefGoogle Scholar
  44. 44.
    Gilchrist, A.L.: When does perceived lightness depend on perceived spatial arrangement? Percept. Psychophys. 28(6), 527–538 (1980). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.211.7842&rep=rep1&typ e=pdfGoogle Scholar
  45. 45.
    Gkioulekas I, Zhao S, Bala K, Zickler T, Levin A. Inverse volume rendering with material dictionaries. ACM Trans Graphics (TOG). 2013;32(6). doi:10.1145/2508363.2508377Google Scholar
  46. 46.
    Helson, H.: Fundamental problems in color vision. I. The principle governing changes in hue, saturation, and lightness of non-selective samples in chromatic illumination. J Exp. Psychol. 23(5), 439 (1938). Retrieved from http://psycnet.apa.org/journals/xge/23/5/439/
  47. 47.
    Helson, H., Jeffers, V.B.: Fundamental problems in color vision. II. Hue, lightness, and saturation of selective samples in chromatic illumination. J. Exp. Psychol. 26(1), 1 (1940). Retrieved from http://psycnet.apa.org/journals/xge/26/1/1/
  48. 48.
    Hurvich, L.M., Jameson, D.: An opponent-process theory of color vision. Psychol. Rev. 64(6p1), 384 (1957). Retrieved from http://cogsci.bme.hu/~gkovacs/letoltes/HurvichJameson1957.pdfGoogle Scholar
  49. 49.
    Jameson, D., Hurvich, L.: Opponent-response functions related to measured cone photopigments*. J. Opt. Soc. Am. 58(3), 429–430 (1968). Retrieved from http://www.opticsinfobase.org/abstract.cfm?URI=josa-58-3-429_1
  50. 50.
    Kaiser, P.K., Boynton, R.M.: Human Color Vision (287). Optical Society of America, Washington, DC (1996). Retrieved from http://www.getcited.org/pub/100154932Google Scholar
  51. 51.
    Kanematsu, E., Brainard, D.H.: No measured effect of a familiar contextual object on color constancy. Color Res. Appl. 39(4), 347–359 (2013). Retrieved from http://color.psych.upenn.edu/brainard/papers/Kanematsu_Brainard_13.pdf
  52. 52.
    Kraft, J.M., Brainard, D.H.: Mechanisms of color constancy under nearly natural viewing. Proc. Natl. Acad. Sci. U. S. A. 96(1), 307–312 (1999). Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=9874814Google Scholar
  53. 53.
    Kraft, J.M., Maloney, S.I., Brainard, D.H.: Surface-illuminant ambiguity and color constancy: effects of scene complexity and depth cues. Perception 31(2), 247–263 (2002). Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=11922136Google Scholar
  54. 54.
    Land, E.H.: The retinex theory of color vision. Sci. Am. 237, 108–128 (1977). Retrieved from http://xa.yimg.com/kq/groups/18365325/470399326/name/E.Land_-_Retinex_Theory%255B1%255D.pdf
  55. 55.
    Li, B., Xu, D., Lang, C.: Colour constancy based on texture similarity for natural images. Color. Technol. 125(6), 328–333 (2009). Retrieved from http://onlinelibrary.wiley.com/doi/10.1111/j.1478-4408.2009.00214.x/full
  56. 56.
    Maloney, L.T.: Physics-based approaches to modeling surface color perception. In: Color Vision: From Genes to Perception, pp. 387–416. Cambridge University Press, Cambridge, New York (1999). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.211.8602&rep=rep1&type=pdfGoogle Scholar
  57. 57.
    Maloney, L.T., Wandell, B.A.: Color constancy: a method for recovering surface spectral reflectance. J. Opt. Soc. Am. A. 3(1), 29–33 (1986). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.6.4745&rep=rep1&type=pdfGoogle Scholar
  58. 58.
    Maloney, L. T., Brainard, D. H.: Color and material perception: achievements and challenges. J. Vis. 10(9). doi:10.1167/10.9.19 (2010)Google Scholar
  59. 59.
    Motoyoshi, I., Nishida, S., Sharan, L., Adelson, E.H.: Image statistics and the perception of surface qualities. Nature 447(7141), 206–209 (2007). Retrieved from http://www.cns.nyu.edu/~msl/courses/2223/Readings/MotoyoshiNishidaSharanAdelson.Nature.2007.pdfGoogle Scholar
  60.
    Obein, G., Knoblauch, K., Viénot, F.: Difference scaling of gloss: nonlinearity, binocularity, and constancy. J. Vis. 4(9), 711–720 (2004). Retrieved from http://www.journalofvision.org/content/4/9/4.full
  61.
    Olkkonen, M., Brainard, D.H.: Perceived glossiness and lightness under real-world illumination. J. Vis. 10(9), 5 (2010). doi:10.1167/10.9.5
  62.
    Olkkonen, M., Brainard, D.H.: Joint effects of illumination geometry and object shape in the perception of surface reflectance. i-Perception 2(9), 1014–1034 (2011). doi:10.1068/i0480
  63.
    Olkkonen, M., Hansen, T., Gegenfurtner, K.R.: Color appearance of familiar objects: effects of object shape, texture, and illumination changes. J. Vis. 8(5) (2008). Retrieved from http://www.journalofvision.org/content/8/5/13.full
  64.
    Poirson, A.B., Wandell, B.A.: Appearance of colored patterns: pattern-color separability. J. Opt. Soc. Am. A. 10(12), 2458–2470 (1993). Retrieved from http://white.stanford.edu/~brian/papers/color/smatch.pdf
  65.
    Radonjić, A., Todorović, D., Gilchrist, A.: Adjacency and surroundedness in the depth effect on lightness. J. Vis. 10(9) (2010). Retrieved from http://www.journalofvision.org/content/10/9/12.full
  66.
    Ripamonti, C., Bloj, M., Hauck, R., Mitha, K., Greenwald, S., Maloney, S.I., Brainard, D.H.: Measurements of the effect of surface slant on perceived lightness. J. Vis. 4(9) (2004). Retrieved from http://www.journalofvision.org/content/4/9/7.full
  67.
    Robilotto, R., Zaidi, Q.: Limits of lightness identification for real objects under natural viewing conditions. J. Vis. 4(9) (2004). Retrieved from http://www.journalofvision.org/content/4/9/9.full
  68.
    Rosenberg, C., Ladsariya, A., Minka, T.: Bayesian color constancy with non-Gaussian models. In: Advances in Neural Information Processing Systems (2003)
  69.
    Shevell, S.K.: The dual role of chromatic backgrounds in color perception. Vision Res. 18(12), 1649–1661 (1978). Retrieved from http://deepblue.lib.umich.edu/bitstream/2027.42/22782/1/0000337.pdf
  70.
    Shi, L., Funt, B.V.: Re-processed version of the Gehler color constancy database of 568 images. Simon Fraser University (2010)
  71.
    Smet, K., Ryckaert, W.R., Pointer, M.R., Deconinck, G., Hanselaer, P.: Colour appearance rating of familiar real objects. Color Res. Appl. 36(3), 192–200 (2011). Retrieved from http://www.esat.kuleuven.be/electa/publications/fulltexts/pub_2070.pdf
  72.
    Smithson, H.E.: Sensory, computational and cognitive components of human colour constancy. Philos. Trans. R. Soc. Lond. B Biol. Sci. 360(1458), 1329–1346 (2005). doi:10.1098/rstb.2005.1633
  73.
    Troost, J.M., De Weert, C.M.M.: Naming versus matching in color constancy. Percept. Psychophys. 50(6), 591–602 (1991). Retrieved from http://link.springer.com/article/10.3758/BF03207545
  74.
    Van De Weijer, J., Gevers, T., Gijsenij, A.: Edge-based color constancy. IEEE Trans. Image Process. 16(9), 2207–2214 (2007). Retrieved from http://hal.archives-ouvertes.fr/docs/00/54/86/86/PDF/IP07_vandeweijer.pdf
  75.
    Van De Weijer, J., Schmid, C., Verbeek, J.: Using high-level visual information for color constancy. In: IEEE 11th International Conference on Computer Vision (ICCV 2007). IEEE (2007)
  76.
    von Kries, J.: Chromatic adaptation. Festschrift der Albrecht-Ludwigs-Universität, pp. 145–158. (1902)
  77.
    Walraven, J.: Discounting the background – the missing link in the explanation of chromatic induction. Vision Res. 16(3), 289–295 (1976). Retrieved from http://www.sciencedirect.com/science/article/pii/0042698976901127
  78.
    Webster, M.A., Mollon, J.D.: Colour constancy influenced by contrast adaptation. Nature 373(6516), 694–698 (1995). Retrieved from http://www.nature.com/nature/journal/v373/n6516/abs/373694a0.html
  79.
    Werner, A.: Color constancy improves, when an object moves: high-level motion influences color perception. J. Vis. 7(14) (2007). Retrieved from http://www.journalofvision.org/content/7/14/19.full
  80.
    Xiao, B., Hurst, B., MacIntyre, L., Brainard, D.H.: The color constancy of three-dimensional objects. J. Vis. 12(4), 6 (2012). doi:10.1167/12.4.6
  81.
    Zaidi, Q., Bostic, M.: Color strategies for object identification. Vision Res. 48(26), 2673–2681 (2008). doi:10.1016/j.visres.2008.06.026

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

  1. Department of Computer Science, American University, Washington, DC, USA