
Applications of Internet Methods in Psychology

  • Lee-Xieng Yang
Chapter
Part of the Computational Social Sciences book series (CSS)

Abstract

Web technology has evolved rapidly since its birth in the 1990s, from Web 1.0 to Web 2.0 and even Web 3.0. The Web is now not only a broadcasting channel (e.g., Wikipedia) but also a platform where people share their opinions, ideas, and sentiments with friends (e.g., social network sites). Therefore, more and more psychologists are interested in how the Web can help us investigate the human mind and behavior. In this chapter, I review different approaches to psychological research on the Internet as a summary of the current applications of Internet technology in psychology. The first approach is simply conducting surveys and experiments online, although caution is needed for some types of online experiment. The second approach is using Internet search services (e.g., Google or Wikipedia) to derive behavioral indices from Web activity. The last is directly using social network sites (e.g., Facebook) to investigate people’s behavior in online social contexts.
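The second approach can be illustrated with a minimal sketch. In practice, the counts would come from a service such as Google Trends or the Wikimedia pageviews API; here the weekly page-view counts and the weekly market returns are purely hypothetical, and the example only shows the core analytic step: correlating an online-attention measure with a behavioral criterion.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical weekly page-view counts for a finance-related article
views = [1200, 1350, 1100, 1800, 2400, 2100, 2600]
# Hypothetical weekly market returns (%) over the same weeks
returns = [0.4, 0.1, 0.6, -0.8, -1.5, -0.9, -1.7]

r = pearson_r(views, returns)
print(f"r = {r:.2f}")  # strongly negative in this toy data set
```

With real data, the same correlation (or a lagged version of it) is what underlies studies linking search or page-view activity to subsequent real-world behavior.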

Keywords

Psychology · Crowdsourcing · Big data · Social media · Personality · Search engine


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Lee-Xieng Yang
    Department of Psychology, National Chengchi University, Taipei, Taiwan
