
Bayesian Reconstruction of Perceptual Experiences from Human Brain Activity

  • Jack Gallant
  • Thomas Naselaris
  • Ryan Prenger
  • Kendrick Kay
  • Dustin Stansbury
  • Michael Oliver
  • An Vu
  • Shinji Nishimoto
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5638)

Abstract

A method for decoding the subjective contents of perceptual systems in the human brain would have broad practical utility for communication and as a brain-machine interface. Previous approaches to this problem in vision have used linear classifiers to solve specific problems, but these approaches were not general enough to solve complex problems such as reconstructing subjective perceptual states. We have developed a new approach to these problems based on quantitative encoding models that explicitly describe how visual stimuli are (nonlinearly) transformed into brain activity. We then invert these encoding models in order to decode activity evoked by novel images or movies, providing reconstructions with unprecedented fidelity. Here we briefly review these results and the potential uses of perceptual decoding devices.
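The decoding strategy described above can be sketched in miniature. The snippet below is a hedged illustration, not the authors' implementation: it assumes a hypothetical linearized encoding model (fitted weights `W`, independent Gaussian voxel noise with known scale), and "inverts" it by scoring a set of candidate images under Bayes' rule and selecting the candidate whose predicted activity best matches the observed activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoding model (illustrative, not the published one):
# each voxel's response is a weighted sum of nonlinear image features
# plus independent Gaussian noise.
n_voxels, n_features = 50, 20
W = rng.normal(size=(n_voxels, n_features))  # assumed fitted weights
sigma = 0.5                                  # assumed noise std

def predict_response(features):
    """Encoding direction: image features -> predicted voxel responses."""
    return W @ features

def log_posterior(features, observed, log_prior=0.0):
    """Decoding direction: log p(image | activity) up to a constant,
    under the Gaussian-noise assumption (Bayes' rule inverts the model)."""
    residual = observed - predict_response(features)
    log_likelihood = -0.5 * np.sum(residual ** 2) / sigma ** 2
    return log_likelihood + log_prior

# A candidate image set serves as a flat empirical prior over images.
candidates = rng.normal(size=(100, n_features))
true_idx = 42
observed = predict_response(candidates[true_idx]) + rng.normal(
    scale=sigma, size=n_voxels)

# Decode: pick the candidate with the highest posterior probability.
scores = [log_posterior(f, observed) for f in candidates]
decoded = int(np.argmax(scores))
print(decoded)
```

With an informative image prior in place of the flat candidate set, the same posterior can be searched to produce a reconstruction rather than an identification; that is the essential difference between classifying brain activity and reconstructing the stimulus from it.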

Keywords

Bayesian, vision, brain-machine interface, brain-computer interface, brain reading



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Jack Gallant (1, 2)
  • Thomas Naselaris (1)
  • Ryan Prenger (3)
  • Kendrick Kay (2)
  • Dustin Stansbury (4)
  • Michael Oliver (4)
  • An Vu (5)
  • Shinji Nishimoto (1)
  1. Program in Neuroscience, University of California at Berkeley, Berkeley, USA
  2. Department of Psychology, University of California at Berkeley, Berkeley, USA
  3. Department of Physics, University of California at Berkeley, Berkeley, USA
  4. Vision Science, University of California at Berkeley, Berkeley, USA
  5. Department of Bioengineering, University of California at Berkeley, Berkeley, USA
