
Image Analysis, Information Theory and Prosthetic Vision

  • Luke E. Hallum
  • Nigel H. Lovell
Chapter

Abstract

Recent years have seen markedly improved clinical outcomes in cochlear implantees, an improvement largely attributed to advances in speech-processing algorithms. These advances prompt researchers to ask, “Could image analysis improve clinical outcomes in retinal implantees?” We discuss our approach to image analysis, microelectronic retinal prostheses, and the perception of low-resolution images, which we believe can help constrain the design of an implant. We hope that our approach, and developments thereof, will ultimately contribute to improved clinical outcomes in retinal implantees.
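The perception-of-low-resolution-images question the abstract raises is often studied with simulated “phosphene images”: a camera image reduced to a coarse grid of intensities, one value per simulated electrode. Below is a minimal, illustrative sketch of that reduction using simple block averaging; the grid size (16 × 16), the synthetic input image, and the function name `phosphene_image` are assumptions for illustration, not the authors' method.

```python
import numpy as np

def phosphene_image(image, grid=(16, 16)):
    """Reduce a grayscale image to a coarse grid of phosphene
    intensities by averaging pixel blocks (one block per
    simulated electrode). A sketch only; real simulations
    typically also model phosphene shape and dropout."""
    h, w = image.shape
    gh, gw = grid
    # Trim so the image divides evenly into blocks.
    image = image[: h - h % gh, : w - w % gw]
    blocks = image.reshape(gh, image.shape[0] // gh,
                           gw, image.shape[1] // gw)
    return blocks.mean(axis=(1, 3))

# Example: a synthetic 128 x 128 horizontal-gradient image.
img = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
low = phosphene_image(img)
print(low.shape)  # (16, 16)
```

Block averaging preserves the mean luminance of each region, which is one simple way to think about the information an electrode array can convey; richer models would render each value as a blurred spot rather than a square block.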

Keywords

Tracking performance · Cochlear implantees · Visual modeling · Phosphene image · Bivariate function

Abbreviation

APRL: Artificial preferred retinal locus

Notes

Acknowledgments

We thank Shaun Cloherty for comments on an early draft of the manuscript.


Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
  2. Center for Neural Science, New York University, New York, USA
