Color constancy refers to the ability of the human visual system to perceive stable object colors despite significant variation in illumination. Color constancy is also a desired property in machine vision; in image processing, it is widely used in white balancing algorithms. Color constancy has been an active research topic for the past 100 years. For a thorough treatment of this subject, see the following recent reviews [4, 14, 16, 18, 36, 35, 58, 72].
Unlike the human visual system, a digital camera captures images in which the color of the scene may be shifted by changes in the external illumination, even though the intrinsic spectral properties of the objects in the scene (e.g., the mug) stay the same. The goal of a color constancy algorithm is to correct the color shift caused by the illumination change and to extract reliable color features that are invariant to the change in illumination [25, 43]. In a camera, the method of correcting the image color shift caused by changes in scene illumination is called white balance.
The Problem of Color Constancy
What Do We Know About Human Color Constancy?
How Is Human Color Constancy Measured?
After establishing the problem, the next question is how good the human visual system is at color constancy. To answer this, we need to measure color constancy in a controlled way. Three methods have commonly been used to measure color constancy in a laboratory setting: color naming, where observers name the colors of surfaces under different illuminations [46, 47]; asymmetric matching, where observers adjust a match surface under one illuminant to match the color appearance of a reference surface under another illuminant [2, 21, 80]; and achromatic adjustment, where observers adjust the chromaticity of a test surface so that it appears achromatic and then repeat the task with the test embedded in scenes with different illuminants. Asymmetric matching and achromatic adjustment reach similar conclusions about constancy when the two tasks are compared using the same scenes.
How Good Is Human Color Constancy?
Overall, these methods show that human constancy is not perfect but generally very good. We can compute a color constancy index from either asymmetric matching or achromatic adjustment experiments, where 0 % means no constancy and 100 % means perfect constancy [2, 21, 20, 73]. Most studies in color constancy use simplified laboratory stimuli that consist of flat and matte surfaces under diffuse lighting conditions (for reviews, see [2, 13, 14, 16, 56]).
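A constancy index of this kind is typically computed as a ratio of distances in a chromaticity plane: how far the observer's match falls from the prediction of perfect constancy, relative to the distance between the perfect-constancy and zero-constancy predictions. A minimal sketch, where the function name and the choice of chromaticity coordinates are illustrative assumptions rather than any one study's exact procedure:

```python
import numpy as np

def constancy_index(match, perfect, none):
    """Brunswik-ratio-style constancy index computed in a chromaticity plane.

    match   : observer's setting (e.g., asymmetric match or achromatic point)
    perfect : prediction under perfect constancy
    none    : prediction under zero constancy (pure tristimulus match)
    Returns 1.0 for perfect constancy and 0.0 for no constancy.
    """
    match, perfect, none = map(np.asarray, (match, perfect, none))
    return 1.0 - np.linalg.norm(match - perfect) / np.linalg.norm(none - perfect)
```

Partial constancy then falls between 0 and 1, which is usually reported as a percentage.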
Natural scenes are rarely composed of flat-matte-diffuse surfaces. First, objects usually have 3D shapes and are made of non-matte materials such as metal, plastic, or wax, which look glossy or translucent. Second, objects in three-dimensional space are arranged at different depths from the viewing point. Lastly, the lighting often has a complex spatial and spectral distribution. How good is human color constancy in natural scenes? Factors in natural scenes that have been studied include the three-dimensional pose of surfaces ([8, 9, 10, 11], Boyaci et al. 2003, [29, 30, 44, 65, 66]), the effect of lighting geometry on constancy [1, 60, 61, 62], the effect of stereo depth, and the effects of 3D shape and of the material an object is made of on color constancy [33, 59, 61, 80].
Cognitive factors have also been considered in studying color constancy [63, 71]. Some objects have characteristic colors: bananas are yellow, and cucumbers are green. How do we take prior knowledge of an object's color into account in achieving color constancy? Recent work by Kanematsu suggests that the effect of a familiar contextual object on color constancy is small.
Figure 4 depicts several scenes used previously in experiments on color constancy. The associated stimuli effectively challenge existing theories of color constancy.
Theories of Color Constancy
Further along the visual pathway, secondary adaptation beyond cone adaptation has also been proposed. The proposed effects include gain control after the combination of the cone signals, as well as subtractive rather than multiplicative modulation [48, 49, 64, 69, 77, 78].
The adaptation models predict color constancy quite well for flat-matte-diffuse scenes. However, they often fail to predict constancy for rich scenes. We do not yet know how to obtain the values of the multiplicative gains from images that contain spatially rich information. That being said, the mechanistic approach can sometimes inspire new algorithms for color constancy.
Another approach is to use computational methods developed from a computer vision perspective. As described above, color constancy is an ill-posed problem. Bayesian methods combine the information contained in the observed scene with a priori information about the likely physical configuration of the world. In the case of color constancy, prior knowledge about illuminants and surface reflectances can resolve the ambiguity. Earlier work used statistical constraints on illuminants and surface reflectances to solve color constancy, as in the gray-world, subspace, and physical realizability algorithms [23, 27, 34, 57]. Brainard and Freeman constructed prior distributions describing the probability of illuminants and surfaces in the world and then estimated the illuminant from the posterior distribution conditioned on the image intensity data. Brainard et al. applied a similar Bayesian model to predict the degree of human color constancy across different manipulations and connected the variation in constancy to the prior distribution of the illuminant.
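The logic of the Bayesian approach can be illustrated with a deliberately simplified sketch: a discrete set of candidate illuminants, a Gaussian prior on surface reflectances, and a prior over illuminants combined into a posterior. The rendering model and all parameters here are toy assumptions for illustration, not Brainard and Freeman's actual formulation:

```python
import numpy as np

def map_illuminant(pixels, candidates, prior, sigma_r=0.2):
    """Toy Bayesian (MAP) illuminant choice over a discrete candidate set.

    Toy model: pixel = illuminant * reflectance (per channel), with each
    reflectance channel a priori Gaussian around 0.5. The posterior over
    candidates combines this likelihood with a prior over illuminants.

    pixels     : N x 3 linear RGB observations
    candidates : K x 3 candidate illuminants
    prior      : length-K prior probabilities over the candidates
    Returns the index of the maximum a posteriori illuminant.
    """
    pixels = np.asarray(pixels, dtype=float)
    log_post = np.log(np.asarray(prior, dtype=float))
    for k, E in enumerate(np.asarray(candidates, dtype=float)):
        r = pixels / E                                   # implied reflectances
        log_post[k] += -((r - 0.5) ** 2).sum() / (2 * sigma_r ** 2)
    return int(np.argmax(log_post))
```

Under this toy model, an illuminant is favored when dividing it out leaves reflectances that look a priori plausible; richer formulations replace the Gaussian prior with measured statistics of illuminants and surfaces.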
What Do We Know About Machine Color Constancy?
While the human visual system is equipped with good color constancy, a digital camera has to rely on color balancing algorithms to discount the illumination and extract invariant object color. This process is also called color balance or white balance. The most popular methods are based on adaptation, such as the von Kries coefficient rule and Land's retinex theory discussed above [31, 54, 76]. But models of this type are restricted to simple scenes.
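The von Kries rule amounts to a diagonal transform: each channel is rescaled by an independent gain so that the estimated illuminant maps to neutral. A minimal sketch; normalizing the gains by the illuminant mean is one common convention, assumed here:

```python
import numpy as np

def von_kries_correct(image, illuminant_rgb):
    """Diagonal (von Kries) white balance: scale each channel so that the
    estimated illuminant maps to neutral gray.

    image          : H x W x 3 float array of linear RGB values
    illuminant_rgb : length-3 estimate of the illuminant color
    """
    illuminant_rgb = np.asarray(illuminant_rgb, dtype=float)
    gains = illuminant_rgb.mean() / illuminant_rgb  # per-channel gains
    return image * gains  # broadcasting applies the diagonal transform
```

The whole correction reduces to three multiplicative gains, which is why the quality of the illuminant estimate, not the correction step, is the hard part of white balance.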
In some sense, there is significant overlap between the development of algorithms for machine color constancy and the modeling of human color constancy. One distinction, however, is the choice of stimuli. To understand human color constancy, simple synthetic stimuli that allow systematic manipulation of scene parameters are often used as experimental stimuli. A successful machine color constancy algorithm, on the other hand, should aim at correcting illumination effects in real-world complex images (see a recent review).
The best-known statistical method is the gray-world theory, which assumes that the average reflectance of a scene is gray. Similar algorithms include white patch and max-RGB [37, 38, 54] and shades of gray. The gray-world algorithm fails if the average reflectance is not achromatic or if a large uniformly colored surface dominates the scene. Higher-order statistics in the form of image derivatives have also been incorporated, in a framework called gray edge. Chakrabarti et al. go beyond per-pixel color statistics and model the spatial dependencies between pixels by decomposing the input image into spatial sub-bands and then modeling the color statistics separately in each sub-band.
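Several of these statistical estimators are instances of a single Minkowski-norm family (the "shades of gray" formulation of Finlayson and Trezzi): the norm order p = 1 recovers gray-world, and p → ∞ recovers max-RGB/white patch. An illustrative sketch, with the function name chosen here:

```python
import numpy as np

def estimate_illuminant_shades_of_gray(image, p=6):
    """Minkowski-norm illuminant estimate over all pixels.

    p = 1 gives the gray-world estimate (channel means); p -> inf gives
    max-RGB (channel maxima); intermediate p interpolates between them.

    image : H x W x 3 array of linear RGB values
    Returns a length-3 illuminant estimate (unnormalized).
    """
    pixels = image.reshape(-1, 3).astype(float)
    if np.isinf(p):
        return pixels.max(axis=0)                        # max-RGB / white patch
    return ((pixels ** p).mean(axis=0)) ** (1.0 / p)     # p = 1: gray-world
```

The estimate is then divided out of the image with a diagonal (von Kries-style) correction; the failure mode noted above appears here directly, since a large colored surface pulls the channel statistics away from the true illuminant.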
Forsyth introduced the gamut-mapping method. It is based on the assumption that only a limited set of colors (the canonical gamut) can occur under a given illuminant. The canonical gamut is learned from training images, and the illuminant is estimated by finding the mapping that takes the input image's gamut into the canonical gamut.
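The idea can be caricatured with a per-channel box in place of Forsyth's convex hulls: a diagonal map g is feasible if it keeps every pixel inside the canonical gamut, and among feasible maps the one with the largest "volume" (product of gains) is chosen. The box simplification and this reduced heuristic are stated assumptions for illustration, not Forsyth's full CRULE algorithm:

```python
import numpy as np

def gamut_map_estimate(image, canonical_max):
    """Simplified gamut mapping with a per-channel box as the canonical gamut.

    A diagonal map g is feasible iff g * pixel <= canonical_max for every
    pixel, i.e., g <= canonical_max / per-channel image maxima. With a box
    gamut the volume-maximizing feasible map is exactly that upper bound,
    and the illuminant estimate is its inverse (up to scale).

    image         : H x W x 3 array of linear RGB values
    canonical_max : length-3 upper bounds of the canonical gamut
    """
    pixels = image.reshape(-1, 3).astype(float)
    g_max = np.asarray(canonical_max, dtype=float) / pixels.max(axis=0)
    return 1.0 / g_max  # estimated illuminant, up to an overall scale
```

With convex hulls instead of boxes, the feasible set of diagonal maps becomes an intersection of constraints per pixel, which is what makes the full algorithm more powerful and more expensive.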
Inspired by Brainard, Rosenberg et al. introduced a Bayesian model of color constancy utilizing a non-Gaussian probabilistic model of the image formation process and demonstrated that it can outperform the gamut-mapping algorithm. Gehler et al. extended the Bayesian algorithm using new datasets, which allow the algorithm to learn more precise priors on the illuminations.
A newer thread of algorithms estimates the illuminant using high-level features [6, 41, 55, 75]. For example, Gijsenij and Gevers proposed to dynamically determine which color constancy algorithm to use for a specific image based on the scene category. Bianco and Schettini proposed a model that estimates the illuminant from color statistics extracted from faces automatically detected in the image. It takes advantage of the fact that skin colors tend to cluster in color space, which provides a valid cue for estimating the illuminant. Inspired by mechanisms of the human visual system, a recent study by Gao et al. built a physiologically based color constancy model that imitates the double-opponent mechanism of the visual system. The illuminant is estimated by searching for the maxima of the separate RGB channels of the double-opponent cell responses in RGB space.
As camera sensor technology has developed, several datasets captured by high-quality SLR cameras in RAW format have come into use in the community. Popular datasets include Ciurea's dataset, the Middlebury color database, and Barnard's datasets [3, 70].
Color constancy is an important and practical problem in both human and computer vision. Color constancy provides an excellent model system to understand how the visual system solves ambiguity. Significant progress has been made to understand color constancy of simple scenes. A big challenge is how to extend the theoretical model of color constancy for simple scenes to predict human performance in rich scenes.
Even though many computer vision algorithms are successful at correcting color bias caused by illumination, whether the human visual system uses similar algorithms to achieve constancy is poorly understood. A major challenge in reconciling the two is scene and stimulus complexity. Images used in computer vision algorithms are often real photos or videos, which contain rich scenes, whereas experiments modeling color constancy in humans require more simplified scenes and conditions. Bridging the two fields is a promising direction for future research.
- 1.Adelson, E.H.: Lightness perception and lightness illusions. In: The New Cognitive Neurosciences, p. 339. MIT Press, Cambridge, MA (2000). Retrieved from http://www.cs.tau.ac.il/~hezy/Vision Seminar/Lightness Perception and Lightness Illusions.htm
- 2.Arend, L., Reeves, A.: Simultaneous color constancy. J. Opt. Soc. Am. A 3(10), 1743–1751 (1986). Retrieved from http://www.opticsinfobase.org/abstract.cfm?&id=2483
- 3.Barnard, K., Martin, L., Funt, B., Coath, A.: A data set for color research. Color Res. Appl. 27(3), 147–151 (2002). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.3.4123&rep=rep1&type=pdf
- 4.Bianco, S., Schettini, R.: Computational color constancy. In: Visual Information Processing (EUVIP), 2011 3rd European Workshop on, Paris, 4–6 July 2011, pp. 1–7. IEEE (2011). doi:10.1109/EuVIP.2011.6045557
- 5.Bianco, S., Schettini, R.: Color constancy using faces. In: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE (2012)
- 6.Bianco, S., Ciocca, G., Cusano, C., Schettini, R.: Improving color constancy using indoor–outdoor image classification. IEEE Trans. Image Process. 17(12), 2381–2392 (2008). Retrieved from http://www.ivl.disco.unimib.it/publications/pdf/bianco2008improving-color.pdf
- 7.Bloj, M.G., Kersten, D., Hurlbert, A.C.: Perception of three-dimensional shape influences colour perception through mutual illumination. Nature 402(6764), 877–879 (1999). Retrieved from http://www.nature.com/nature/journal/v402/n6764/abs/402877a0.html
- 9.Bloj, M.G., Hurlbert, A.C.: An empirical study of the traditional Mach card effect. Perception Lond. 31(2), 233–246 (2002). Retrieved from http://www.perceptionjournal.com/perception/fulltext/p31/p01sp.pdf
- 11.Boyaci, H., Doerschner, K., Maloney, L.T.: Perceived surface color in binocularly viewed scenes with two light sources differing in chromaticity. J. Vis. 4(9) (2004). Retrieved from http://ww.journalofvision.org/content/4/9/1.full
- 12.Boyaci, H., Doerschner, K., Snyder, J., Maloney, L.: Surface color perception in three-dimensional scenes. Vis. Neurosci. 23(3/4), 311 (2006). Retrieved from http://www.bilkent.edu.tr/~hboyaci/Vision/Boyaci_Doerschner_Snyder_Maloney_VisNeuro_2006.pdf
- 13.Brainard, D.H.: Color constancy in the nearly natural image. 2. Achromatic loci. J. Opt. Soc. Am. A 15(2), 307–325 (1998). Retrieved from http://www.opticsinfobase.org/viewmedia.cfm?URI=josaa-15-2-307&seq=0
- 14.Brainard, D.H.: Color constancy. In: The Visual Neurosciences, vol. 1, pp. 948–961. MIT Press, Cambridge, MA (2004). Retrieved from http://www.cns.nyu.edu/csh04/Articles/Brainard-02.pdf
- 15.Brainard, D.H., Freeman, W.T.: Bayesian color constancy. J. Opt. Soc. Am. A 14(7), 1393–1411 (1997). Retrieved from http://www.opticsinfobase.org/viewmedia.cfm?uri=josaa-14-7-1393&seq=0
- 16.Brainard, D.H., Radonjić, A.: Color constancy. In: Werner, J.S., Chalupa, L.M. (eds.) The New Visual Neuroscience, pp. 545–556. MIT Press, Cambridge (2013)
- 17.Brainard, D.H., Stockman, A.: Colorimetry. (1995). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.140.3027
- 18.Brainard, D.H., Stockman, A.: Colorimetry. In: Bass, M. (ed.) OSA Handbook of Optics. McGraw-Hill, New York (2010)
- 19.Brainard, D.H., Wandell, B.A.: Analysis of the retinex theory of color vision. J. Opt. Soc. Am. A 3(10), 1651–1661 (1986). Retrieved from http://www.opticsinfobase.org/viewmedia.cfm?URI=josaa-3-10-1651&seq=0
- 20.Brainard, D.H., Wandell, B.A.: A bilinear model of the illuminant’s effect on color appearance. In: Computational Models of Visual Processing, pp. 171–186. MIT Press, Cambridge, MA (1991)
- 21.Brainard, D.H., Brunt, W.A., Speigle, J.M.: Color constancy in the nearly natural image. I. Asymmetric matches. J. Opt. Soc. Am. A 14(9), 2091–2110 (1997). Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=9291602
- 22.Brainard, D.H., Longère, P., Delahunt, P.B., Freeman, W.T., Kraft, J.M., Xiao, B.: Bayesian model of human color constancy. J. Vis. 6(11) (2006). Retrieved from http://w.journalofvision.org/content/6/11/10.full
- 23.Buchsbaum, G.: A spatial processor model for object colour perception. J. Franklin Inst. 310(1), 1–26 (1980). Retrieved from http://www.sciencedirect.com/science/article/pii/0016003280900587
- 24.Chakrabarti, A., Scharstein, D., Zickler, T.: An empirical camera model for Internet color vision. In: Proceedings of the British Machine Vision Conference (BMVC) (2009)
- 25.Chakrabarti, A., Hirakawa, K., Zickler, T.: Color constancy with spatio-spectral statistics. IEEE Trans. Pattern Anal. Mach. Intell. 34(8), 1509–1519 (2012). http://cilab.knu.ac.kr/seminar/Seminar/2013/20130330ColorConstancywithSpatio-SpectralStatistics.pdf
- 26.Ciurea, F., Funt, B.: A large image database for color constancy research. In: Proceedings of the Eleventh Color Imaging Conference (2003)
- 27.D’Zmura, M., Iverson, G., Singer, B.: Probabilistic color constancy. In: Luce, R.D., D’Zmura, M., Hoffman, D.D., Iverson, G., Romney, K. (eds.) Geometric Representations of Perceptual Phenomena, pp. 187–202. Lawrence Erlbaum Associates, Mahwah (1995)
- 28.Delahunt, P.B., Brainard, D.H.: Does human color constancy incorporate the statistical regularity of natural daylight? J. Vis. 4(2) (2004). Retrieved from http://ww.journalofvision.org/content/4/2/1.full
- 29.Doerschner, K., Boyaci, H., Maloney, L.T.: Human observers compensate for secondary illumination originating in nearby chromatic surfaces. J. Vis. 4(2) (2004). Retrieved from http://wwww.journalofvision.org/content/4/2/3.full
- 30.Epstein, W.: Phenomenal orientation and perceived achromatic color. J. Psychol. 52(1), 51–53 (1961). Retrieved from http://www.tandfonline.com/doi/pdf/10.1080/00223980.1961.9916503
- 31.Finlayson, G.D., Drew, M.S., Funt, B.V.: Color constancy: generalized diagonal transforms suffice. J. Opt. Soc. Am. A 11(11), 3011–3019 (1994). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.121.872&rep=rep1&type=pdf
- 32.Finlayson, G.D., Hordley, S.D., Hubel, P.M.: Color by correlation: a simple, unifying framework for color constancy. IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1209–1221 (2001). Retrieved from http://th.physik.uni-frankfurt.de/~triesch/courses/275vision/papers/finlayson_et_al_pami_2001.pdf
- 33.Fleming, R.W., Bülthoff, H.H.: Low-level image cues in the perception of translucent materials. ACM Trans. Appl. Percept. (TAP) 2(3), 346–382 (2005). Retrieved from http://dl.acm.org/citation.cfm?id=1077409
- 34.Forsyth, D.A.: A novel algorithm for color constancy. Int. J. Comp. Vis. 5(1), 5–35 (1990). Retrieved from http://link.springer.com/article/10.1007/BF00056770
- 35.Foster, D.H.: Does colour constancy exist? Trends Cogn. Sci. 7(10), 439–443 (2003). Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=14550490
- 36.Foster, D.H.: Color constancy. Vision Res. 51(7), 674–700 (2011). Retrieved from http://www.sciencedirect.com/science/article/pii/S0042698910004402
- 37.Funt, B., Shi, L.: The effect of exposure on MaxRGB color constancy. In: Proceedings of the SPIE, San Jose. Human Vision and Electronic Imaging XV, vol. 7527 (2010)
- 38.Funt, B., Shi, L.: The rehabilitation of MaxRGB. In: Proceedings of the IS&T Eighteenth Color Imaging Conference, San Antonio (2010)
- 39.Gao, S., Yang, K., Li, C., Li, Y.: A color constancy model with double-opponency mechanisms. In: Computer Vision (ICCV), 2013 IEEE International Conference on, Sydney, 1–8 Dec 2013, pp. 929–936. IEEE (2013). doi:10.1109/ICCV.2013.119
- 40.Gehler, P.V., Rother, C., Blake, A., Minka, T., Sharp, T.: Bayesian color constancy revisited. In: Computer Vision and Pattern Recognition (CVPR), 2008 IEEE Conference on. IEEE (2008)
- 41.Gijsenij, A., Gevers, T.: Color constancy using natural image statistics. In: Computer Vision and Pattern Recognition (CVPR), 2007 IEEE Conference on. IEEE (2007)
- 42.Gijsenij, A., Gevers, T.: Color constancy using natural image statistics and scene semantics. IEEE Trans. Pattern Anal. Mach. Intell. 33(4), 687–698 (2011). Retrieved from http://staff.science.uva.nl/~gevers/pub/GeversPAMI11.pdf
- 44.Gilchrist, A.L.: When does perceived lightness depend on perceived spatial arrangement? Percept. Psychophys. 28(6), 527–538 (1980). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.211.7842&rep=rep1&type=pdf
- 45.Gkioulekas, I., Zhao, S., Bala, K., Zickler, T., Levin, A.: Inverse volume rendering with material dictionaries. ACM Trans. Graph. (TOG) 32(6) (2013). doi:10.1145/2508363.2508377
- 46.Helson, H.: Fundamental problems in color vision. I. The principle governing changes in hue, saturation, and lightness of non-selective samples in chromatic illumination. J. Exp. Psychol. 23(5), 439 (1938). Retrieved from http://psycnet.apa.org/journals/xge/23/5/439/
- 47.Helson, H., Jeffers, V.B.: Fundamental problems in color vision. II. Hue, lightness, and saturation of selective samples in chromatic illumination. J. Exp. Psychol. 26(1), 1 (1940). Retrieved from http://psycnet.apa.org/journals/xge/26/1/1/
- 48.Hurvich, L.M., Jameson, D.: An opponent-process theory of color vision. Psychol. Rev. 64(6p1), 384 (1957). Retrieved from http://cogsci.bme.hu/~gkovacs/letoltes/HurvichJameson1957.pdf
- 49.Jameson, D., Hurvich, L.: Opponent-response functions related to measured cone photopigments. J. Opt. Soc. Am. 58(3), 429–430 (1968). Retrieved from http://www.opticsinfobase.org/abstract.cfm?URI=josa-58-3-429_1
- 50.Kaiser, P.K., Boynton, R.M.: Human Color Vision (287). Optical Society of America, Washington, DC (1996). Retrieved from http://www.getcited.org/pub/100154932
- 51.Kanematsu, E., Brainard, D.H.: No measured effect of a familiar contextual object on color constancy. Color Res. Appl. 39(4), 347–359 (2013). Retrieved from http://color.psych.upenn.edu/brainard/papers/Kanematsu_Brainard_13.pdf
- 52.Kraft, J.M., Brainard, D.H.: Mechanisms of color constancy under nearly natural viewing. Proc. Natl. Acad. Sci. U. S. A. 96(1), 307–312 (1999). Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=9874814
- 53.Kraft, J.M., Maloney, S.I., Brainard, D.H.: Surface-illuminant ambiguity and color constancy: effects of scene complexity and depth cues. Perception 31(2), 247–263 (2002). Retrieved from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&dopt=Citation&list_uids=11922136
- 54.Land, E.H.: The retinex theory of color vision. Sci. Am. 237, 108–128 (1977). Retrieved from http://xa.yimg.com/kq/groups/18365325/470399326/name/E.Land_-_Retinex_Theory%255B1%255D.pdf
- 55.Li, B., Xu, D., Lang, C.: Colour constancy based on texture similarity for natural images. Color. Technol. 125(6), 328–333 (2009). Retrieved from http://onlinelibrary.wiley.com/doi/10.1111/j.1478-4408.2009.00214.x/full
- 56.Maloney, L.T.: Physics-based approaches to modeling surface color perception. In: Color Vision: From Genes to Perception, pp. 387–416. Cambridge University Press, Cambridge/New York (1999). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.211.8602&rep=rep1&type=pdf
- 57.Maloney, L.T., Wandell, B.A.: Color constancy: a method for recovering surface spectral reflectance. J. Opt. Soc. Am. A 3(1), 29–33 (1986). Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.6.4745&rep=rep1&type=pdf
- 58.Maloney, L.T., Brainard, D.H.: Color and material perception: achievements and challenges. J. Vis. 10(9) (2010). doi:10.1167/10.9.19
- 59.Motoyoshi, I., Nishida, S., Sharan, L., Adelson, E.H.: Image statistics and the perception of surface qualities. Nature 447(7141), 206–209 (2007). Retrieved from http://www.cns.nyu.edu/~msl/courses/2223/Readings/MotoyoshiNishidaSharanAdelson.Nature.2007.pdf
- 60.Obein, G., Knoblauch, K., Viénot, F.: Difference scaling of gloss: nonlinearity, binocularity, and constancy. J. Vis. 4(9), 711–720 (2004). Retrieved from http://ww.journalofvision.org/content/4/9/4.full
- 62.Olkkonen, M., Brainard, D.H.: Joint effects of illumination geometry and object shape in the perception of surface reflectance. Iperception 2(9), 1014–1034 (2011). doi:10.1068/i0480
- 63.Olkkonen, M., Hansen, T., Gegenfurtner, K.R.: Color appearance of familiar objects: effects of object shape, texture, and illumination changes. J. Vis. 8(5) (2008). Retrieved from http://www.journalofvision.org/content/8/5/13.full
- 64.Poirson, A.B., Wandell, B.A.: Appearance of colored patterns: pattern–color separability. J. Opt. Soc. Am. A 10(12), 2458–2470 (1993). Retrieved from http://white.stanford.edu/~brian/papers/color/smatch.pdf
- 65.Radonjić, A., Todorović, D., Gilchrist, A.: Adjacency and surroundedness in the depth effect on lightness. J. Vis. 10(9) (2010). Retrieved from http://126.96.36.199/content/10/9/12.full
- 66.Ripamonti, C., Bloj, M., Hauck, R., Mitha, K., Greenwald, S., Maloney, S.I., Brainard, D.H.: Measurements of the effect of surface slant on perceived lightness. J. Vis. 4(9) (2004). Retrieved from http://www.journalofvision.org/content/4/9/7.full
- 67.Robilotto, R., Zaidi, Q.: Limits of lightness identification for real objects under natural viewing conditions. J. Vis. 4(9) (2004). Retrieved from http://ww.w.journalofvision.org/content/4/9/9.full
- 68.Rosenberg, C., Ladsariya, A., Minka, T.: Bayesian color constancy with non-Gaussian models. In: Advances in Neural Information Processing Systems (2003)
- 69.Shevell, S.K.: The dual role of chromatic backgrounds in color perception. Vision Res. 18(12), 1649–1661 (1978). Retrieved from http://deepblue.lib.umich.edu/bitstream/2027.42/22782/1/0000337.pdf
- 70.Shi, L., Funt, B.V.: Re-processed version of the Gehler color constancy database of 568 images. Simon Fraser University (2010)
- 71.Smet, K., Ryckaert, W.R., Pointer, M.R., Deconinck, G., Hanselaer, P.: Colour appearance rating of familiar real objects. Color Res. Appl. 36(3), 192–200 (2011). Retrieved from http://www.esat.kuleuven.be/electa/publications/fulltexts/pub_2070.pdf
- 73.Troost, J.M., De Weert, C.M.M.: Naming versus matching in color constancy. Percept. Psychophys. 50(6), 591–602 (1991). Retrieved from http://link.springer.com/article/10.3758/BF03207545
- 74.Van De Weijer, J., Gevers, T., Gijsenij, A.: Edge-based color constancy. IEEE Trans. Image Process. 16(9), 2207–2214 (2007). Retrieved from http://hal.archives-ouvertes.fr/docs/00/54/86/86/PDF/IP07_vandeweijer.pdf
- 75.Van De Weijer, J., Schmid, C., Verbeek, J.: Using high-level visual information for color constancy. In: Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on. IEEE (2007)
- 76.von Kries, J.: Chromatic adaptation. Festschrift der Albrecht-Ludwigs-Universität, pp. 145–158 (1902)
- 77.Walraven, J.: Discounting the background – the missing link in the explanation of chromatic induction. Vision Res. 16(3), 289–295 (1976). Retrieved from http://www.sciencedirect.com/science/article/pii/0042698976901127
- 78.Webster, M.A., Mollon, J.D.: Colour constancy influenced by contrast adaptation. Nature 373(6516), 694–698 (1995). Retrieved from http://www.nature.com/nature/journal/v373/n6516/abs/373694a0.html
- 79.Werner, A.: Color constancy improves, when an object moves: high-level motion influences color perception. J. Vis. 7(14) (2007). Retrieved from http://wwww.journalofvision.org/content/7/14/19.full