Human vs. machine: the psychological and behavioral consequences of being compared to an outperforming artificial agent

Abstract

While artificial agents (AAs), such as artificial intelligence systems, are being developed at a rapid pace, the popular belief that AAs will someday surpass human intelligence keeps growing. The present research examined whether this common belief translates into negative psychological and behavioral consequences when individuals judge that an AA performs better than they do on cognitive and intellectual tasks. In two studies, participants were led to believe that an AA had performed either better or worse than they had on a cognitive inhibition task (Study 1) and on an intelligence task (Study 2). Results indicated that being outperformed by an AA increased participants’ subsequent performance, provided they experienced neither psychological discomfort towards the AA nor self-threat. Psychological implications in terms of motivation and potential threat, as well as prerequisites for future interactions between humans and AAs, are discussed.

Notes

  1. Alphas less than 0.7 are considered unreliable (Cronbach, 1951; Brown, 2002); the alpha formula is sketched after these notes.

  2. For second-session analyses similar to Normand & Croizet (2013), see the Supplementary Material.

  3. However, we checked that participants’ performance on the Raven matrices did not differ across conditions in Session 1, F(2, 73) = 0.429, p = .653, ηp² = .012 (the relation between ηp² and F is sketched after these notes).
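
The statistics reported in notes 1 and 3 can be checked with two textbook formulas; the sketch below uses standard definitions of Cronbach’s alpha and partial eta squared and is not taken from the article itself.

```latex
% Cronbach's alpha for k items, with item variances \sigma_i^2 and total-score variance \sigma_X^2
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right)

% Partial eta squared recovered from the F ratio and its degrees of freedom (df_1, df_2)
\eta_p^2 = \frac{F \cdot df_1}{F \cdot df_1 + df_2}
         = \frac{0.429 \times 2}{0.429 \times 2 + 73} \approx .012
```

Plugging the values from note 3 into the second formula reproduces the reported ηp² = .012, which is consistent with F(2, 73) = 0.429 and p = .653.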

References

  1. Anderson, M. L. (2005). Why is AI so scary? Artificial Intelligence, 169(2), 201–208. https://doi.org/10.1016/j.artint.2005.10.008.

  2. Augustinova, M., & Ferrand, L. (2012). The influence of mere social presence on Stroop interference: New evidence from the semantically-based Stroop task. Journal of Experimental Social Psychology. https://doi.org/10.1016/j.jesp.2012.04.014.

  3. Augustinova, M., & Ferrand, L. (2014). Automaticity of word reading: Evidence from the semantic stroop paradigm. Current Directions in Psychological Science, 23(5), 343–348. https://doi.org/10.1177/0963721414540169.

  4. Ayoub, K., & Payne, K. (2016). Strategy in the Age of Artificial Intelligence. Journal of Strategic Studies, 39(5–6), 793–819. https://doi.org/10.1080/01402390.2015.1088838.

  5. Baron, R. S. (1986). Distraction-conflict theory: Progress and problems. In Advances in experimental social psychology (Vol. 19, pp. 1–40). Academic Press.

  6. Blascovich, J., Mendes, W. B., Hunter, S. B., & Salomon, K. (1999). Social “facilitation” as challenge and threat. Journal of Personality and Social Psychology, 77(1), 68–77. https://doi.org/10.1037/0022-3514.77.1.68.

  7. Brewka, G. (1996). Artificial intelligence—a modern approach by Stuart Russell and Peter Norvig, Prentice Hall Series in Artificial Intelligence, Englewood Cliffs, NJ [Book review]. The Knowledge Engineering Review, 11. https://doi.org/10.1017/s0269888900007724.

  8. Brown, J. D. (2002). The Cronbach alpha reliability estimate. JALT Testing & Evaluation SIG Newsletter, 6(1), 17–18.

  9. Carpenter, P. A., Just, M. A., & Shell, P. (1990). What one intelligence test measures: A theoretical account of the processing in the Raven progressive matrices test. Psychological Review, 97(3), 404–431. https://doi.org/10.1037/0033-295X.97.3.404.

  10. Carpinella, C. M., Wyman, A. B., Perez, M. A., & Stroessner, S. J. (2017). The Robotic Social Attributes Scale (RoSAS): Development and Validation. ACM/IEEE International Conference on Human-Robot Interaction, Part F1271, 254–262. https://doi.org/10.1145/2909824.3020208

  11. Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. https://doi.org/10.1007/BF02310555.

  12. Dalrymple, K. L., & Herbert, J. D. (2007). Acceptance and commitment therapy for generalized social anxiety disorder a pilot study. Behavior Modification, 31(5), 543–568. https://doi.org/10.1177/0145445507302037.

  13. Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146.

  14. Fumi, F. G., & Parr, R. G. (1953). Electronic states of diatomic molecules: The oxygen molecule. The Journal of Chemical Physics, 21(10), 1864–1868. https://doi.org/10.1063/1.1698680.

  15. Gerber, J. P., Wheeler, L., & Suls, J. (2018). A social comparison theory meta-analysis 60+ years on. Psychological Bulletin, 144(2), 177–197. https://doi.org/10.1037/bul0000127.

  16. Harrison, T. L., Shipstead, Z., & Engle, R. W. (2015). Why is working memory capacity related to matrix reasoning tasks? Memory and Cognition, 43(3), 389–396. https://doi.org/10.3758/s13421-014-0473-3.

  17. Heerink, M. (2011). Exploring the influence of age, gender, education and computer experience on robot acceptance by older adults. HRI 2011 - Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction. https://doi.org/10.1145/1957656.1957704

  18. Huguet, P., Galvaing, M. P., Monteil, J. M., & Dumas, F. (1999). Social presence effects in the Stroop task: Further evidence for an attentional view of social facilitation. Journal of Personality and Social Psychology, 77(5), 1011–1024. https://doi.org/10.1037/0022-3514.77.5.1011.

  19. Kuo, I. H., Rabindran, J. M., Broadbent, E., Lee, Y. I., Kerse, N., Stafford, R. M. Q., et al. (2009). Age and gender factors in user acceptance of healthcare robots. Proceedings IEEE International Workshop on Robot and Human Interactive Communication. https://doi.org/10.1109/ROMAN.2009.5326292.

  20. Lachaud, C. M., & Renaud, O. (2011). A tutorial for analyzing human reaction times: How to filter data, manage missing values, and choose a statistical model. Applied Psycholinguistics. https://doi.org/10.1017/s0142716410000457.

  21. Lawless, W. F., Mittu, R., Russell, S., & Sofge, D. (Eds.). (2017). Autonomy and artificial intelligence: A threat or savior? Springer. https://doi.org/10.1007/978-3-319-59719-5.

  22. Lockwood, P., & Kunda, Z. (1997). Superstars and me: Predicting the impact of role models on the self. Journal of Personality and Social Psychology, 73(1), 91–103. https://doi.org/10.1037/0022-3514.73.1.91.

  23. McArthur, D., Lewis, M., & Bishary, M. (2005). The roles of artificial intelligence in education: Current progress and future prospects. I-Manager’s Journal of Educational Technology, 1(4), 42–80. https://doi.org/10.26634/jet.1.4.972.

  24. Muller, D., Atzeni, T., & Butera, F. (2004). Coaction and upward social comparison reduce the illusory conjunction effect: Support for distraction-conflict theory. Journal of Experimental Social Psychology, 40(5), 659–665. https://doi.org/10.1016/j.jesp.2003.12.003.

  25. Muller, D., & Butera, F. (2007). The focusing effect of self-evaluation threat in coaction and social comparison. Journal of Personality and Social Psychology, 93(2), 194–211. https://doi.org/10.1037/0022-3514.93.2.194.

  26. Mushtaq, F., Bland, A. R., & Schaefer, A. (2011). Uncertainty and cognitive control. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2011.00249.

  27. Nomura, T. (2017). Robots and gender. In Principles of gender-specific medicine: Gender in the genomic era (3rd ed.). https://doi.org/10.1016/B978-0-12-803506-1.00042-5.

  28. Normand, A., & Croizet, J. C. (2013). Upward social comparison generates attentional focusing when the dimension of comparison is self-threatening. Social Cognition, 31(3), 336–348. https://doi.org/10.1521/soco.2013.31.3.336.

  29. Normand, A., Bouquet, C. A., & Croizet, J. C. (2014). Does evaluative pressure make you less or more distractible? Role of top-down attentional control over response selection. Journal of Experimental Psychology: General, 143(3), 1097–1111. https://doi.org/10.1037/a0034985.

  30. Pan, Y., & Steed, A. (2016). A comparison of avatar-, video-, and robot-mediated interaction on users’ trust in expertise. Frontiers Robotics AI. https://doi.org/10.3389/frobt.2016.00012.

  31. Paulhus, D. L. (2013). Measurement and control of response bias. Measures of Personality and Social Psychological Attitudes. https://doi.org/10.1016/b978-0-12-590241-0.50006-x.

  32. Perri 6 (2001). Ethics, regulation and the new artificial intelligence, part I: Accountability and power. Information, Communication & Society, 4(2), 199–229. https://doi.org/10.1080/13691180110044461.

  33. Przybylski, A. K., Rigby, C. S., & Ryan, R. M. (2010). A motivational model of video game engagement. Review of General Psychology, 14(2), 154–166. https://doi.org/10.1037/a0019440.

  34. Raven, J. C. (1941). Standardization of progressive matrices, 1938. British Journal of Medical Psychology, 19(1), 137–150. https://doi.org/10.1111/j.2044-8341.1941.tb00316.x.

  35. Raven, J. (2000). The Raven’s progressive matrices: Change and stability over culture and time. Cognitive Psychology, 41(1), 1–48. https://doi.org/10.1006/cogp.1999.0735.

  36. Riether, N., Hegel, F., Wrede, B., & Horstmann, G. (2012). Social facilitation with social robots? HRI’12 - Proceedings of the 7th Annual ACM/IEEE International Conference on Human-Robot Interaction, 41–47. https://doi.org/10.1145/2157689.2157697

  37. Rigas, H., Booth, T., Briggs, F., Murata, T., & Stone, H. S. (1985). Artificial intelligence research in Japan. Computer, 18(9), 83–90. https://doi.org/10.1109/MC.1985.1663007.

  38. Rubio, V., & Deng, X. W. (2007). PLANT SCIENCE: Standing on the Shoulders of GIGANTEA. Science, 318(5848), 206–207. https://doi.org/10.1126/science.1150213.

  39. Sanders, G. S., Baron, R. S., & Moore, D. L. (1978). Distraction and social comparison as mediators of social facilitation effects. Journal of Experimental Social Psychology, 14(3), 291–303. https://doi.org/10.1016/0022-1031(78)90017-3.

  40. Schermerhorn, P., Scheutz, M., & Crowell, C. R. (2008). Robot social presence and gender: Do females view robots differently than males? HRI 2008 Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction: Living with Robots. https://doi.org/10.1145/1349822.1349857

  41. Serrano-Cinca, C., Fuertes-Callén, Y., & Mar-Molinero, C. (2005). Measuring DEA efficiency in Internet companies. Decision Support Systems, 38(4), 557–573. https://doi.org/10.1016/j.dss.2003.08.004.

  42. Spatola, N., Belletier, C., Chausse, P., Augustinova, M., Normand, A., Barra, V., et al. (2019a). Improved Cognitive Control in Presence of Anthropomorphized Robots. International Journal of Social Robotics, 11(3), 463–476. https://doi.org/10.1007/s12369-018-00511-w.

  43. Spatola, N., Belletier, C., Normand, A., Chausse, P., Monceau, S., Augustinova, M., et al. (2018). Not as bad as it seems: When the presence of a threatening humanoid robot improves human performance. Science Robotics, 3(21), aat5843. https://doi.org/10.1126/scirobotics.aat5843.

  44. Spatola, N., Monceau, S., & Ferrand, L. (2019b). Cognitive impact of Social Robots: How anthropomorphism boosts performance. IEEE Robotics and Automation Magazine. https://doi.org/10.1109/MRA.2019.2928823.

  45. Stankov, L., & Schweizer, K. (2007). Raven’s progressive matrices, manipulations of complexity and measures of accuracy, speed and confidence. Psychology Science, 49(4), 326–342.

  46. Suls, J., Martin, R., & Wheeler, L. (2002). Social comparison: Why, with whom, and with what effect? Current Directions in Psychological Science, 11(5), 159–163. https://doi.org/10.1111/1467-8721.00191.

  47. Tanaka, K., Nakanishi, H., & Ishiguro, H. (2014). Comparing video, avatar, and robot mediated communication: pros and cons of embodiment. Collaboration Technologies and Social Computing, 460, 96–110. https://doi.org/10.1007/978-3-662-44651-5_9.

  48. Tesser, A. (1988). Toward a self-evaluation maintenance model of social behavior. Advances in Experimental Social Psychology, 21(C), 181–227. https://doi.org/10.1016/S0065-2601(08)60227-0.

  49. Testa, M., & Major, B. (1990). The impact of social comparisons after failure: The moderating effects of perceived control. Basic and Applied Social Psychology, 11(2), 205–218. https://doi.org/10.1207/s15324834basp1102_7.

  50. Vandierendonck, A. (2017). A comparison of methods to combine speed and accuracy measures of performance: A rejoinder on the binning procedure. Behavior Research Methods. https://doi.org/10.3758/s13428-016-0721-5.

  51. Vandierendonck, A. (2018). Further tests of the utility of integrated speed-accuracy measures in task switching. Journal of Cognition. https://doi.org/10.5334/joc.6.

  52. Wood, J. V. (1989). Theory and research concerning social comparisons of personal attributes. Psychological Bulletin, 106, 231–248. https://doi.org/10.1037/0033-2909.106.2.231.

Author information

Corresponding author

Correspondence to Nicolas Spatola.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical statement

This study was approved by the Clermont-Ferrand IRM UCA Ethics Committee (Ref.: IRB00011540-2018-23) and was carried out in accordance with the provisions of the World Medical Association Declaration of Helsinki.

Open Practices

All data are publicly available via the Open Science Framework and can be accessed at https://osf.io/7yd9v/.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 22 kb)

About this article

Cite this article

Spatola, N., Normand, A. Human vs. machine: the psychological and behavioral consequences of being compared to an outperforming artificial agent. Psychological Research 85, 915–925 (2021). https://doi.org/10.1007/s00426-020-01317-0

Keywords

  • Human–machine interaction
  • Social comparison
  • Logical reasoning
  • Cognitive control