
Bayesian Inference for Sparse Generalized Linear Models

  • Matthias Seeger
  • Sebastian Gerwinn
  • Matthias Bethge
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4701)

Abstract

We present a framework for efficient, accurate approximate Bayesian inference in generalized linear models (GLMs), based on the expectation propagation (EP) technique. The parameters can be endowed with a factorizing prior distribution, encoding properties such as sparsity or non-negativity. The central role of posterior log-concavity in Bayesian GLMs is emphasized and related to stability issues in EP. In particular, we use our technique to infer the parameters of a point process model for neuronal spiking data from multiple electrodes, demonstrating significantly superior predictive performance when a sparsity assumption is enforced via a Laplace prior distribution.

Keywords

Bayesian Inference · Expectation Propagation · Gaussian Process Model · Approximate Inference · Point Process Model


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Matthias Seeger (1)
  • Sebastian Gerwinn (1)
  • Matthias Bethge (1)

  1. Max Planck Institute for Biological Cybernetics, Spemannstr. 38, Tübingen, Germany
