Abstract
AI-generated “deep fakes” are increasingly used by cybercriminals to conduct targeted and tailored social engineering attacks and to influence public opinion. To raise awareness and train individuals to recognize deep fakes efficiently, it is essential to understand individual differences in the ability to recognize them. Previous research has suggested a close relationship between political attitudes and top-down perceptual and cognitive processing styles. In this study, we investigate the impact of political attitudes and agreement with the political message content on individual deep fake recognition skills. A sample of 163 adults (72 female, 44.2%) judged a series of video clips of politicians’ statements from across the political spectrum, rating each clip’s authenticity and their agreement with the message content. Half of the presented videos were fabricated via lip-sync technology. In addition to agreement with each statement, global political attitudes towards social and economic topics were assessed via the Social and Economic Conservatism Scale (SECS). There were robust negative associations between participants’ general and social conservatism and their ability to recognize fabricated videos, especially when there was agreement with the message content. Deep fakes watched on mobile phones and tablets were considerably less likely to be recognized than those watched on stationary computers. This is the first study to investigate and establish the association between political attitudes and interindividual differences in deep fake recognition. The study supports recently published research suggesting relationships between conservatism and perceived credibility of conspiracy theories and fake news in general. Implications for further research are discussed.
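The statistical analyses reported in the paper were carried out in JASP (see References). As a purely illustrative aid, the minimal Python sketch below shows how the kind of association described above, between conservatism scores and recognition of fabricated videos, could be computed from a per-trial response table; the file name and column names (participant, is_fake, judged_fake, secs_social) are assumptions for illustration, not part of the study materials.

```python
# Minimal illustrative sketch, not the authors' analysis script (which used JASP).
# Assumes a hypothetical long-format table "responses.csv" with one row per
# participant x video judgment and columns: participant, is_fake (1 = fabricated),
# judged_fake (1 = rated as fabricated), secs_social (participant's SECS score).
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("responses.csv")  # hypothetical file name

# Per-participant hit rate on the fabricated clips only (recognition accuracy).
fakes = df[df["is_fake"] == 1]
accuracy = fakes.groupby("participant")["judged_fake"].mean()

# One SECS social-conservatism score per participant.
secs = df.groupby("participant")["secs_social"].first()

# A negative r here would mirror the reported conservatism-recognition association.
r, p = pearsonr(secs.loc[accuracy.index], accuracy)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```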
References
Aitken, F., Menelaou, G., Warrington, O., Koolschijn, R.S., Corbin, N., et al.: Prior expectations evoke stimulus-specific activity in the deep layers of the primary visual cortex. PLoS Biol. 18(12), e3001023 (2020a)
Aitken, F., Turner, G., Kok, P.: Prior expectations of motion direction modulate early sensory processing. J. Neurosci. 40(33), 6389–6397 (2020b)
Azucar, D., Marengo, D., Settanni, M.: Predicting the big 5 personality traits from digital footprints on social media: a meta-analysis. Pers. Individ. Differ. 124, 150–159 (2018)
Balcetis, E., Dunning, D.: Cognitive dissonance and the perception of natural environments. Psychol. Sci. 18(10), 917–921 (2007)
Balcetis, E., Dunning, D., Granot, Y.: Subjective value determines initial dominance in binocular rivalry. J. Exp. Soc. Psychol. 48(1), 122–129 (2012)
Canham, M., Posey, C., Constantino, M.: Phish derby: shoring the human shield through gamified phishing attacks. In: Frontiers in Higher Education. Advance Online Publication (2022)
Cialdini, R.B.: Influence: The Psychology of Persuasion. William Morrow, New York (1993)
Compton, J., van der Linden, S., Cook, J., Basol, M.: Inoculation theory in the post-truth era: extant findings and new frontiers for contested science, misinformation, and conspiracy theories. Soc. Pers. Psychol. Compass 15(6), e12602 (2021)
European Parliament Global Trendometer. Essays on medium- and long-term global trends. ISSN 2529-6345 (2018)
Everett, J.A.: The 12 item social and economic conservatism scale (SECS). PLoS ONE 8(12), e82131 (2013)
Garrett, R.K., Bond, R.M.: Conservatives’ susceptibility to political misperceptions. Sci. Adv. 7(23), eabf1234 (2021)
Golbeck, J., Robles, C., Turner, K.: Predicting personality with social media. In: CHI 2011 Extended Abstracts on Human Factors in Computing Systems, pp. 253–262 (2011)
Goss-Sampson, M.A.: Statistical analysis in JASP 0.14: a guide for students (2020)
Gupta, P., Chugh, K., Dhall, A., Subramanian, R.: The eyes know it: FakeET – an eye-tracking database to understand deepfake perception. In: Proceedings of the 2020 International Conference on Multimodal Interaction, pp. 519–527 (2020)
Hadnagy, C.: Social Engineering: The Science of Human Hacking. Wiley, Hoboken (2018)
Halevi, T., Memon, N., Nov, O.: Spear-phishing in the wild: a real-world study of personality, phishing self-efficacy and vulnerability to spear-phishing attacks. Soc. Sci. Res. Netw. (2015)
Hu, S., Li, Y., Lyu, S.: Exposing GAN-generated faces using inconsistent corneal specular highlights. In: ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2021)
IBM Security: Cost of a data breach report 2020 (2020). https://www.capita.com/sites/g/files/nginej291/files/2020-08/Ponemon-Global-Cost-of-Data-Breach-Study-2020.pdf
iProov: The Threat of Deepfakes. The consumer view of deepfakes and the role of biometric authentication in protecting against their misuse (2020). https://www.iproov.com/wp-content/uploads/2021/05/iProov-Deepfakes-Report.pdf
JASP Team: JASP (Version 0.14.1) [Computer software] (2020)
Korshunov, P., Marcel, S.: Deepfake detection: humans vs. machines. arXiv preprint arXiv:2009.03155 (2020)
Kosinski, M., Stillwell, D., Graepel, T.: Private traits and attributes are predictable from digital records of human behavior. Proc. Natl. Acad. Sci. 110(15), 5802–5805 (2013)
Lawson, M.A., Kakkar, H.: Of pandemics, politics, and personality: the role of conscientiousness and political ideology in the sharing of fake news. J. Exp. Psychol. Gen. 151, 1154 (2021)
Lin, T., et al.: Susceptibility to spear-phishing emails: effects of internet user demographics and email content. ACM Trans. Comput.-Hum. Interact. 26(5), 32 (2019). https://doi.org/10.1145/3336141
Lyu, S.: Deepfake detection: current challenges and next steps. In: 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), pp. 1–6. IEEE (2020)
Masood, M., Nawaz, M., Malik, K.M., Javed, A., Irtaza, A.: Deepfakes generation and detection: state-of-the-art, open challenges, countermeasures, and way forward. arXiv preprint arXiv:2103.00484 (2021)
Meakins, J.: A zero-sum game: the zero-day market in 2018. J. Cyber Policy 4(1), 60–71 (2019)
Montañez, R., Golob, E., Xu, S.: Human cognition through the lens of social engineering cyberattacks. Front. Psychol. 11, 1755 (2020)
Mouton, F., Leenen, L., Malan, M.M., Venter, H.S.: Towards an ontological model defining the social engineering domain. In: Kimppa, K., Whitehouse, D., Kuusela, T., Phahlamohlaka, J. (eds.) HCC 2014. IAICT, vol. 431, pp. 266–279. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-44208-1_22
Nakajima, M., Schmitt, L.I., Halassa, M.M.: Prefrontal cortex regulates sensory filtering through a basal ganglia-to-thalamus pathway. Neuron 103(3), 445–458 (2019)
Panichello, M.F., Turk-Browne, N.B.: Behavioral and neural fusion of expectation with sensation. J. Cogn. Neurosci. 33(5), 814–825 (2021)
Pattinson, M., Jerram, C., Parsons, K., McCormac, A., Butavicius, M.: Why do some people manage phishing e-mails better than others? Inf. Manag. Comput. Secur. 20(1), 18–28 (2012)
Phillips, J.M., Kambi, N.A., Saalmann, Y.B.: A subcortical pathway for rapid, goal-driven, attentional filtering. Trends Neurosci. 39(2), 49–51 (2016)
Pinault, D.: The thalamic reticular nucleus: structure, function and concept. Brain Res. Rev. 46(1), 1–31 (2004)
Powell, K., Canham, M.: User be aware: is your smart phone or TV putting you at risk? In: Proceedings of the 65th Annual Meeting of the Human Factors and Ergonomics Society. Human Factors and Ergonomics Society, Santa Monica (2021)
Purplesec: 2021 Cyber Security Statistics. The Ultimate List Of Stats, Data & Trends (2021). https://purplesec.us/resources/cyber-security-statistics/
Rossler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., Niessner, M.: FaceForensics++: learning to detect manipulated facial images. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV) (2019)
Rungratsameetaweemana, N., Serences, J.T.: Dissociating the impact of attention and expectation on early sensory processing. Curr. Opin. Psychol. 29, 181–186 (2019)
Schreiber, D., et al.: Red brain, blue brain: evaluative processes differ in Democrats and Republicans. PLoS ONE 8(2), e52970 (2013)
Sheng, S., Holbrook, M., Kumaraguru, P., Cranor, L., Downs, J.: Who falls for phish? A demographic analysis of phishing susceptibility and effectiveness of interventions. In: 28th ACM Conference on Human Factors in Computing Systems, pp. 373–382 (2010)
Sütterlin, S., et al.: Individual deep fake recognition skills are affected by viewers’ political orientation, agreement with content and device used. PsyArXiv (2021). https://doi.org/10.31234/osf.io/hwujb
Tahir, R., et al.: Seeing is believing: exploring perceptual differences in DeepFake videos. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–16 (2021)
Thaw, N.N., July, T., Wai, A.N., Goh, D.H.L., Chua, A.Y.: Is it real? A study on detecting deepfake videos. Proc. Assoc. Inf. Sci. Technol. 57(1), e366 (2020)
Tong, F., Nakayama, K., Vaughan, J.T., Kanwisher, N.: Binocular rivalry and visual awareness in human extrastriate cortex. Neuron 21(4), 753–759 (1998)
Uebelacker, S., Quiel, S.: The social engineering personality framework. In: 2014 Workshop on Socio-Technical Aspects in Security and Trust, pp. 24–30. IEEE (2014)
Van Koningsbruggen, G.M., Stroebe, W., Aarts, H.: Through the eyes of dieters: biased size perception of food following tempting food primes. J. Exp. Soc. Psychol. 47(2), 293–299 (2011)
Verizon: DBIR 2021 Data breach investigations report (2021). https://www.verizon.com/business/en-gb/resources/reports/dbir/
Wang, Z., Sun, L., Zhu, H.: Defining social engineering in cybersecurity. IEEE Access 8, 85094–85115 (2020)
Wöhler, L., Zembaty, M., Castillo, S., Magnor, M.: Towards understanding perceptual differences between genuine and face-swapped videos. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–13 (2021)
Zollhöfer, M., et al.: State of the art on monocular 3D face reconstruction, tracking, and applications. Comput. Graph. Forum 37(2), 523–550 (2018)
Acknowledgements
This study was supported by the Norwegian Research Council (project number 302941). A preprint of this article is available at PsyArXiv (Sütterlin et al., 2021).
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Sütterlin, S. et al. (2023). Individual Deep Fake Recognition Skills are Affected by Viewer’s Political Orientation, Agreement with Content and Device Used. In: Schmorrow, D.D., Fidopiastis, C.M. (eds) Augmented Cognition. HCII 2023. Lecture Notes in Computer Science(), vol 14019. Springer, Cham. https://doi.org/10.1007/978-3-031-35017-7_18
DOI: https://doi.org/10.1007/978-3-031-35017-7_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-35016-0
Online ISBN: 978-3-031-35017-7
eBook Packages: Computer Science (R0)