
Inferring an unobservable population size from observable samples

Abstract

Success in the physical and social worlds often requires knowledge of population size. However, many populations cannot be observed in their entirety, making direct assessment of their size difficult, if not impossible. Nevertheless, an unobservable population size can be inferred from observable samples. We measured people’s ability to make such inferences and their confidence in these inferences. Contrary to past work suggesting insensitivity to sample size and failures in statistical reasoning, inferences of populations size were accurate—but only when observable samples indicated a large underlying population. When observable samples indicated a small underlying population, inferences were systematically biased. This error, which cannot be attributed to a heuristics account, was compounded by a metacognitive failure: Confidence was highest when accuracy was at its worst. This dissociation between accuracy and confidence was confirmed by a manipulation that shifted the magnitude and variability of people’s inferences without impacting their confidence. Together, these results (a) highlight the mental acuity and limits of a fundamental human judgment and (b) demonstrate an inverse relationship between cognition and metacognition.


Figs. 1–6 (figure images not reproduced here)

Notes

  1. Fifteen (s1 + s2 − o = 10 + 5 − 0 = 15) is the logical minimum number of objects in the population when s1 is 10, s2 is 5, and o is 0. Thus, any population size estimate that falls below this value is invalid. In the Supplemental Materials, analyses show that including these participants does not change the findings.
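The logical minimum described above follows from counting: the two samples can share at most their overlapping objects, so every non-overlapping object must be distinct. A minimal sketch of this bound (the function name is ours, not from the article):

```python
def min_population_size(s1: int, s2: int, overlap: int) -> int:
    """Smallest population consistent with two samples of sizes s1 and s2
    that share `overlap` objects: all non-overlapping objects are distinct."""
    return s1 + s2 - overlap

# The footnote's values: samples of 10 and 5 with no overlap
# imply at least 15 objects in the population.
print(min_population_size(10, 5, 0))  # → 15
```

Any estimate below this bound is logically impossible, which is why such responses were excluded.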

  2. Statistical tests are abbreviated and reported in accordance with the guidelines of the Publication Manual of the American Psychological Association (6th ed.). For additional effects, please refer to the data and code posted on OSF (osf.io/g7v3f/).

  3. Experiments 2 and 3 were conducted after summer 2018, when researchers observed a drop in data quality from Amazon Mechanical Turk (Bai, 2018). To guard against this concern, far more participants than necessary were recruited and stringent manipulation checks were included, resulting in the exclusion of many data points from the analysis. Although Experiments 1 and 4 do not contain these manipulation checks, this is not a concern because data quality was assessed in accordance with Bai (2018) and the results are robust and replicate.

  4. Some readers may wonder about manipulating the presence versus absence of Nmax in addition to Nmin. This manipulation would not be feasible because computing the theoretical posterior distribution requires a bounded prior over N, meaning that Nmax must be specified.
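To see why a bounded prior is required, consider a generic mark-recapture posterior: with a uniform prior over N on [Nmin, Nmax] and a hypergeometric likelihood for re-observing o previously seen objects in a second sample, normalization is only possible when Nmax is finite. This is a hedged sketch of that general computation, not the authors' model (their actual code is on OSF); the function name and uniform prior are our assumptions:

```python
from math import comb

def posterior_over_N(s1: int, s2: int, o: int, n_min: int, n_max: int) -> dict:
    """Posterior over population size N under a uniform prior on
    [n_min, n_max], with a hypergeometric likelihood for observing
    o previously seen objects in a second sample of size s2."""
    # N can be no smaller than the logical minimum s1 + s2 - o.
    candidates = range(max(n_min, s1 + s2 - o), n_max + 1)
    # P(o | N) = C(s1, o) * C(N - s1, s2 - o) / C(N, s2)
    like = [comb(s1, o) * comb(n - s1, s2 - o) / comb(n, s2) for n in candidates]
    total = sum(like)  # finite only because n_max bounds the support
    return {n: l / total for n, l in zip(candidates, like)}
```

Without an upper bound, the sum over N in the denominator need not converge, so the normalizing constant (and hence the posterior) would be undefined.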

References

  1. Amir, O., Rand, D. G., & Gal, Y. K. (2012). Economic games on the Internet: The effect of $1 stakes. PLOS ONE, 7(12), e31461. https://doi.org/10.1371/journal.pone.0031461

  2. Bai, H. (2018). Evidence that a large amount of low quality responses on MTurk can be detected with repeated GPS coordinates. Retrieved from https://www.maxhuibai.com/blog/evidence-that-responses-from-repeating-gps-are-random

  3. Blake, P. R., & McAuliffe, K. (2011). I had so much it didn’t seem fair: Eight-year-olds reject two forms of inequity. Cognition, 120(2), 215–224.

  4. Buehler, R., Griffin, D., & Ross, M. (1994). Exploring the “planning fallacy”: Why people underestimate their task completion times. Journal of Personality and Social Psychology, 67(3), 366–381.

  5. Cao, J., & Banaji, M. R. (2017). Social inferences from group size. Journal of Experimental Social Psychology, 70, 204–211.

  6. Christensen-Szalanski, J. J. J., & Bushyhead, J. B. (1981). Physicians’ use of probabilistic information in a real clinical setting. Journal of Experimental Psychology: Human Perception and Performance, 7(4), 928–935.

  7. Clark, M. S., & Isen, A. M. (1982). Toward understanding the relationship between feeling states and social behavior. In A. H. Hastorf & A. M. Isen (Eds.), Cognitive social psychology (pp. 73–108). New York, NY: Elsevier/North-Holland.

  8. Clayson, D. E. (2005). Performance overconfidence: Metacognitive effects of misplaced student expectations? Journal of Marketing Education, 27(2), 122–129.

  9. Clore, G. L., Schwarz, N., & Conway, M. (1993). Affective causes and consequences of social information processing. In R. S. Wyer & T. K. Srull (Eds.), Handbook of social cognition (pp. 323–417). Hillsdale, NJ: Erlbaum.

  10. Conover, W. J., Johnson, M. E., & Johnson, M. M. (1981). A comparative study of tests for homogeneity of variances, with applications to the outer continental shelf bidding data. Technometrics, 23, 351–361.

  11. De Langhe, B., Fernbach, P. M., & Lichtenstein, D. R. (2016). Navigating by the stars: Investigating the actual and perceived validity of online user ratings. Journal of Consumer Research, 42, 817–833.

  12. delMas, R., Garfield, J., Ooms, A., & Chance, B. (2007). Assessing students’ conceptual understanding after a first course in statistics. Statistics Education Research Journal, 6(2), 28–58.

  13. Epley, N., & Gilovich, T. (2006). The anchoring-and-adjustment heuristic. Psychological Science, 17(4), 311–318.

  14. Gregg, A. P., Seibt, B., & Banaji, M. R. (2006). Easier done than undone: Asymmetry in the malleability of implicit preferences. Journal of Personality and Social Psychology, 90(1), 1–20.

  15. Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology, 3(3), 430–454.

  16. Kersten, D., Mamassian, P., & Yuille, A. (2004). Object perception as Bayesian inference. Annual Review of Psychology, 55(1), 271–304.

  17. Le Corre, M., & Carey, S. (2007). One, two, three, four, nothing more: An investigation of the conceptual sources of the verbal counting principles. Cognition, 105(2), 395–438.

  18. Lee, M., & Wagenmakers, E.-J. (2013). Bayesian cognitive modeling: A practical course. Cambridge, England: Cambridge University Press.

  19. Libertus, M. E., Feigenson, L., & Halberda, J. (2011). Preschool acuity of the approximate number system correlates with school math ability. Developmental Science, 14(6), 1292–1300.

  20. Lieder, F., Griffiths, T. L., Huys, Q. J. M., & Goodman, N. D. (2018a). The anchoring bias reflects rational use of cognitive resources. Psychonomic Bulletin & Review, 25(1), 322–349.

  21. Lieder, F., Griffiths, T. L., Huys, Q. J. M., & Goodman, N. D. (2018b). Empirical evidence for resource-rational anchoring and adjustment. Psychonomic Bulletin & Review, 25(2), 775–784.

  22. Metcalfe, J. (1996). Metacognition: Knowing about knowing. Cambridge, MA: MIT Press.

  23. Moore, D., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review, 115(2), 502–517.

  24. Mukhopadhyay, N., & De Silva, B.M. (2009). Sequential methods and their applications. Boca Raton, FL: Taylor & Francis.

  25. Obrecht, N. A., Chapman, G. B., & Gelman, R. (2007). Intuitive t tests: Lay use of statistical information. Psychonomic Bulletin & Review, 14, 1147–1152.

  26. Petersen, C. G. J. (1896). The yearly immigration of young plaice into the Limfjord from the German Sea. Report of the Danish Biological Station, 6, 5–84.

  27. Pietraszewski, D., & Shaw, A. (2015). Not by strength alone: Children’s conflict expectations follow the logic of the asymmetric war of attrition. Human Nature, 26, 44–72.

  28. Seber, G. A. F. (1986). A review of estimating animal abundance. Biometrics, 42(2), 267–292.

  29. Simonson, I., & Tversky, A. (1992). Choice in context: Tradeoff contrast and extremeness aversion. Journal of Marketing Research, 29(3), 281–295.

  30. Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279–1285.

  31. Tulving, E. (1999). Study of memory: Processes and systems. In J. K. Foster & M. Jelicic (Eds.), Debates in psychology. Memory: Systems, process, or function? (pp. 11–30). New York, NY: Oxford University Press.

  32. Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.

  33. Ubel, P. A., Jepson, C., & Baron, J. (2001). The inclusion of patient testimonials in decision aids: Effects on treatment choices. Medical Decision Making, 21, 60–68.

  34. Xu, F., & Garcia, V. (2008). Intuitive statistics by 8-month old infants. Proceedings of the National Academy of Sciences of the United States of America, 105(13), 5012–5015.


Acknowledgements

This work was supported by a National Science Foundation Graduate Research Fellowship (DGE 1144152) to J.C.

Open practices statement

All data and code are available at osf.io/g7v3f/. All materials are included in the Supplemental Materials section at the end of this manuscript.

Author information

Correspondence to Jack Cao.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

ESM 1 (DOCX 1452 kb)


About this article


Cite this article

Cao, J., Banaji, M.R. Inferring an unobservable population size from observable samples. Mem Cogn (2019). https://doi.org/10.3758/s13421-019-00974-w


Keywords

  • Numerical cognition
  • Population estimates
  • Accuracy
  • Confidence
  • Sampling processes