Quantifying Humans’ Priors Over Graphical Representations of Tasks

  • Gecia Bravo Hermsdorff
  • Talmo Pereira
  • Yael Niv
Conference paper
Part of the Springer Proceedings in Complexity book series (SPCOM)

Abstract

Some new tasks are trivial to learn while others are almost impossible; what determines how easy it is to learn an arbitrary task? Just as our prior beliefs about new visual scenes color our perception of new stimuli, our priors about the structure of new tasks shape our learning and generalization abilities [2]. While quantifying visual priors has led to major insights into how our visual system works [5, 10, 11], quantifying priors over tasks remains a formidable goal, as it is not even clear how to define a task [4]. Here, we focus on tasks that have a natural mapping to graphs. We develop a method to quantify humans’ priors over these “task graphs”, combining new modeling approaches with Markov chain Monte Carlo with people, MCMCP (a process whereby an agent learns from data generated by another agent, recursively [9]). We show that our method recovers priors more accurately than a standard MCMC sampling approach. Additionally, we propose a novel low-dimensional “smooth” parametrization of probability distributions over graphs (smooth in the sense that graphs differing by fewer edges are assigned similar probabilities), which allows for more accurate recovery of the prior and better generalization. We have also created an online experiment platform that gamifies our MCMCP algorithm and allows subjects to interactively draw the task graphs. We use this platform to collect human data on several navigation and social interaction tasks. We show that priors over these tasks have non-trivial structure, deviating significantly from null models that are insensitive to the graphical information. The priors also differ notably between the navigation and social domains, with fewer differences between cover stories within the same domain. Finally, we extend our framework to the more general case of quantifying priors over exchangeable random structures.
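The MCMCP procedure described above rests on a classic result for iterated learning with Bayesian agents [9]: if each agent samples a hypothesis from its posterior given data generated by the previous agent, the chain's stationary distribution is the shared prior. The following is a minimal toy sketch of that idea, not the paper's actual experiment: hypotheses are undirected graphs on 3 nodes (3 possible edges), the "smooth" prior is a hypothetical one-parameter family that weights graphs by edge count, and transmission consists of a single noisy edge observation per generation. All names and parameters here are illustrative assumptions.

```python
import itertools
import random
from collections import Counter

# Hypothesis space: all undirected graphs on 3 nodes, encoded as tuples of
# 3 edge indicators (illustrative toy space, not the paper's task graphs).
GRAPHS = list(itertools.product([0, 1], repeat=3))

def smooth_prior(theta=0.7):
    """A hypothetical 'smooth' prior: probability depends only on edge count,
    so graphs that differ by fewer edges receive similar probabilities."""
    weights = {g: theta ** sum(g) for g in GRAPHS}
    z = sum(weights.values())
    return {g: w / z for g, w in weights.items()}

def likelihood(observation, graph, noise=0.1):
    """Probability of observing edge indicator v at index i under a graph,
    assuming each edge is perceived correctly with probability 1 - noise."""
    i, v = observation
    return 1 - noise if graph[i] == v else noise

def iterated_learning(prior, n_steps=20000, noise=0.1, seed=0):
    """Chain of Bayesian 'agents': each sees one noisy edge observation
    generated from the previous agent's graph, then samples a new graph
    from its posterior. Returns empirical counts over sampled graphs."""
    rng = random.Random(seed)
    samples = []
    graph = rng.choice(GRAPHS)
    for _ in range(n_steps):
        i = rng.randrange(3)
        # Noisy transmission, matched to the likelihood used for inference.
        v = graph[i] if rng.random() > noise else 1 - graph[i]
        post = {g: prior[g] * likelihood((i, v), g, noise) for g in GRAPHS}
        z = sum(post.values())
        graph = rng.choices(GRAPHS, weights=[post[g] / z for g in GRAPHS])[0]
        samples.append(graph)
    return Counter(samples)

prior = smooth_prior()
counts = iterated_learning(prior)
```

Because each agent is a posterior sampler, the empirical distribution of `counts` converges toward `prior` as the chain lengthens, which is what lets MCMCP recover priors from chains of human responses.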

Keywords

Markov chain Monte Carlo with People (MCMCP) · Representational learning · Structural priors · Task graphs · Human cognition

References

  1. Arlot, S., Celisse, A.: A survey of cross-validation procedures for model selection. Stat. Surv. 4, 40–79 (2010). https://doi.org/10.1214/09-SS054
  2. Bengio, Y., Courville, A., Vincent, P.: Representation learning: a review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 35, 1798–1828 (2013). https://doi.org/10.1109/TPAMI.2013.50
  3. Bellman, R.E.: Dynamic Programming. Princeton University Press, Princeton (1957)
  4. Botvinick, M., Weinstein, A., Solway, A., Barto, A.: Reinforcement learning, efficient coding, and the statistics of natural tasks. Curr. Opin. Behav. Sci. 5, 71–77 (2015). https://doi.org/10.1016/j.cobeha.2015.08.009
  5. Brady, T.F., Konkle, T., Alvarez, G.A.: Compression in visual working memory: using statistical regularities to form more efficient memory representations. J. Exp. Psychol. Gen. 138, 487–502 (2009). https://doi.org/10.1037/a0016797
  6. Canini, K.R., Griffiths, T.L., Vanpaemel, W., Kalish, M.L.: Revealing human inductive biases for category learning by simulating cultural transmission. Psychon. Bull. Rev. 21, 785–793 (2014). https://doi.org/10.3758/s13423-013-0556-3
  7. Field, D.J.: What the statistics of natural images tell us about visual coding. Proc. SPIE Int. Soc. Opt. Eng. 1077, 269–276 (1989). https://doi.org/10.1117/12.952724
  8. Graziano, M.S.A.: Cortical action representations. In: Toga, A.W., Poldrack, R.A. (eds.) Brain Mapping: An Encyclopedic Reference. Elsevier, Amsterdam (2014). http://www.princeton.edu/~graziano/Graziano_encyclopedia_2015.pdf
  9. Griffiths, T., Kalish, M.: Language evolution by iterated learning with Bayesian agents. Cogn. Sci. 31, 441–480 (2007). https://doi.org/10.1080/15326900701326576
  10. Howe, C.Q., Purves, D.: The Müller-Lyer illusion explained by the statistics of image source relationships. Proc. Natl. Acad. Sci. 102(4), 1234–1239 (2005). https://doi.org/10.1073/pnas.0409314102
  11. Howe, C.Q., Yang, Z., Purves, D.: The Poggendorff illusion explained by natural scene geometry. Proc. Natl. Acad. Sci. 102(21), 7707–7712 (2005). https://doi.org/10.1073/pnas.0502893102
  12. Lewicki, M.S.: Efficient coding of natural sounds. Nat. Neurosci. 5(4), 356–363 (2002). https://doi.org/10.1038/nn831
  13. Orbán, G., Fiser, J., Aslin, R.N., Lengyel, M.: Bayesian learning of visual chunks by human observers. Proc. Natl. Acad. Sci. 105(7), 2745–2750 (2008). https://doi.org/10.1073/pnas.0708424105
  14. Orbanz, P., Roy, D.M.: Bayesian models of graphs, arrays and other exchangeable random structures. IEEE Trans. Pattern Anal. Mach. Intell. 37, 437–461 (2015). https://doi.org/10.1109/TPAMI.2014.2334607
  15. The on-line encyclopedia of integer sequences (OEIS). https://oeis.org/A000088

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Gecia Bravo Hermsdorff (1)
  • Talmo Pereira (1)
  • Yael Niv (1)

  1. Princeton Neuroscience Institute, Princeton University, Princeton, USA