Context-Based Automatic Local Image Enhancement
Abstract
In this paper, we describe a technique to automatically enhance the perceptual quality of an image. Unlike previous techniques, which use global image statistics to determine the enhancement operation, our method is local and relies on local scene descriptors and context in addition to high-level image statistics. We cast image enhancement as a search for the best transformation for each pixel of the given image, and then recover the enhanced image using a formulation based on Gaussian Random Fields. The search proceeds in a coarse-to-fine manner: we first find the best candidate images and then the best transformation for each pixel. Our experiments indicate that such context-based local enhancement outperforms global enhancement schemes. A user study on Mechanical Turk shows that subjects prefer contextual and local enhancements over those produced by existing schemes.
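To make the pipeline in the abstract concrete, the sketch below illustrates the general idea in Python: each pixel selects among a few candidate tonal transformations, and a quadratic, Gaussian-Random-Field-style smoothness term then blends the per-pixel selections into one coherent output. The candidate curves, the contrast-stretched target used as the data term, and the smoothing weight `lam` are illustrative assumptions for this sketch, not the descriptors, candidate search, or energy used in the paper.

```python
# Minimal sketch (assumed, not the authors' implementation): every pixel picks
# one of a few candidate tonal transformations, and a quadratic Gaussian-
# Random-Field-style smoothness term blends the per-pixel picks into one
# coherent enhanced image.

import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve


def candidate_transforms():
    """A few global tone curves standing in for context-derived candidates."""
    return [
        lambda x: x,                           # identity
        lambda x: np.clip(1.2 * x, 0.0, 1.0),  # brighten
        lambda x: x ** 0.8,                    # gamma lift for shadows
    ]


def enhance(gray, lam=5.0):
    """gray: float image in [0, 1]; returns a smoothed per-pixel enhancement."""
    h, w = gray.shape
    cands = np.stack([f(gray) for f in candidate_transforms()])  # (K, h, w)

    # Data term (illustrative): each pixel picks the candidate closest to a
    # simple contrast-stretched version of the input.
    target = (gray - gray.min()) / (gray.max() - gray.min() + 1e-8)
    best_idx = np.abs(cands - target).argmin(axis=0)              # (h, w)
    best = np.take_along_axis(cands, best_idx[None], axis=0)[0]   # (h, w)

    # Smoothness term: solve (I + lam * L) x = best, where L is a grid
    # second-difference (Laplacian-like) operator; this is the MAP estimate of
    # a Gaussian random field whose unaries are centred on the per-pixel picks.
    def second_diff(n):
        return diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))

    L = kron(identity(h), second_diff(w)) + kron(second_diff(h), identity(w))
    A = (identity(h * w) + lam * L).tocsr()
    x = spsolve(A, best.ravel())
    return np.clip(x.reshape(h, w), 0.0, 1.0)


if __name__ == "__main__":
    img = np.random.default_rng(0).random((64, 64))
    out = enhance(img)
    print(out.shape, float(out.min()), float(out.max()))
```

In the paper, the candidates instead come from a coarse-to-fine search over similar images and then pixels; the quadratic smoothing step above plays the role of the Gaussian Random Field inference mentioned in the abstract.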
Keywords
Input Image · User Study · Image Enhancement · High Dynamic Range · Enhancement Method