What the Eye Did Not See – A Fusion Approach to Image Coding
The concentration of cones and ganglion cells is much higher in the fovea than in the rest of the retina. This non-uniform sampling results in a retinal image that is sharp at the fixation point, where a person is looking, and blurred away from it. The difference in sampling rates across spatial locations raises the question of whether this biological characteristic can be exploited to achieve better image compression: an image can be compressed less at the fixation point and more away from it. It is known, however, that the visual system employs more than one fixation to view a single scene, which presents the problem of combining images that pertain to the same scene but exhibit different spatial contrasts. This article presents an algorithm that combines such a series of images using image fusion in the gradient domain. Unlike algorithms that compress the image in the spatial domain, our algorithm introduces no artifacts. It proceeds in two steps: in the first, we modify the gradients of an image based on a limited number of fixations; in the second, we integrate the modified gradients. Results based on both measured and predicted fixations verify our approach.
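The two steps described above can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation: the Gaussian falloff around fixations, the `sigma` and `floor` parameters, and the periodic FFT-based Poisson reintegration are all assumptions made for the sake of a runnable example.

```python
import numpy as np

def attenuation_map(shape, fixations, sigma):
    """Weight near 1 at each fixation point, falling off with distance
    (Gaussian falloff is an assumption of this sketch)."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    m = np.zeros(shape)
    for fy, fx in fixations:
        m = np.maximum(m, np.exp(-((yy - fy) ** 2 + (xx - fx) ** 2)
                                 / (2.0 * sigma ** 2)))
    return m

def fuse_and_integrate(img, fixations, sigma=30.0, floor=0.2):
    """Step 1: attenuate gradients away from the fixations.
    Step 2: reintegrate the modified gradient field by solving a
    Poisson equation in the Fourier domain (periodic boundary assumed)."""
    gy, gx = np.gradient(img)
    w = floor + (1.0 - floor) * attenuation_map(img.shape, fixations, sigma)
    gy, gx = gy * w, gx * w
    # Divergence of the modified gradient field.
    div = np.gradient(gy, axis=0) + np.gradient(gx, axis=1)
    # Solve Laplacian(u) = div via FFT.
    h, wdt = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(wdt)[None, :]
    denom = (2.0 * np.cos(2.0 * np.pi * fy) - 2.0) \
          + (2.0 * np.cos(2.0 * np.pi * fx) - 2.0)
    denom[0, 0] = 1.0  # avoid division by zero at the DC term
    u = np.real(np.fft.ifft2(np.fft.fft2(div) / denom))
    u += img.mean() - u.mean()  # restore the mean intensity lost in the solve
    return u
```

Because the fusion happens in the gradient domain and the result is obtained by a global integration, the output contains no block boundaries between differently compressed regions, which is the property the abstract emphasizes.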
Keywords: Original Image · Image Compression · Human Visual System · Image Code · Contrast Threshold