
The Role of IT Background for Metacognitive Accuracy, Confidence and Overestimation of Deep Fake Recognition Skills

  • Conference paper
  • In: Augmented Cognition (HCII 2022)

Abstract

The emergence of synthetic media such as deep fakes is considered a disruptive technology shaping the fight against cybercrime as well as enabling political disinformation. Deep faked material exploits humans’ interpersonal trust and is usually deployed where technical solutions for deep fake authentication are not in place, unknown, or unaffordable. Improving the individual’s ability to recognise imperfectly produced deep fakes requires training and the incorporation of deep fake-based attacks into social engineering resilience training. Individualised or tailored approaches as part of cybersecurity awareness campaigns are superior to a one-size-fits-all approach, and need to identify persons in particular need of improvement. Research on phishing simulations has reported that persons with an educational and/or professional background in information technology frequently underperform in social engineering simulations. In this study, we propose a method and metric to detect individuals who are overconfident with regard to their deep fake recognition skills. The proposed overconfidence score flags individuals who overestimate their performance and thus pose a previously unconsidered cybersecurity risk. In this study, and in line with comparable research from phishing simulations, individuals with an IT background were particularly prone to overconfidence. We argue that this data-driven approach to identifying persons at risk enables educators to provide more targeted education, evoke insight into one's own judgement deficiencies, and help avoid the self-selection bias typical of voluntary participation.
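The abstract proposes an overconfidence score but does not spell out its computation here. A common operationalisation of such a metric, sketched below as an assumption rather than the authors' exact formula, is the difference between a participant's mean self-rated confidence and their actual classification accuracy; the function names and the flagging threshold are hypothetical.

```python
# Hypothetical sketch of an overconfidence score for a deep fake
# recognition test. Assumption: confidences are self-ratings in [0, 1]
# and `correct` holds 1 for a correct classification, 0 otherwise.

def overconfidence_score(confidences, correct):
    """Mean confidence minus accuracy; positive values indicate
    overestimation of one's own recognition skill."""
    if not confidences or len(confidences) != len(correct):
        raise ValueError("need equal-length, non-empty inputs")
    mean_confidence = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_confidence - accuracy

def flag_overconfident(confidences, correct, threshold=0.2):
    """Flag a participant whose score exceeds a (hypothetical) cutoff,
    e.g. for targeted inclusion in awareness training."""
    return overconfidence_score(confidences, correct) > threshold
```

An educator could apply this per participant after a recognition test and direct flagged individuals toward tailored training, addressing the self-selection bias the abstract mentions.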



Acknowledgements

This study was supported by the Norwegian Research Council (project number 302941).

Author information

Corresponding author: Stefan Sütterlin.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sütterlin, S. et al. (2022). The Role of IT Background for Metacognitive Accuracy, Confidence and Overestimation of Deep Fake Recognition Skills. In: Schmorrow, D.D., Fidopiastis, C.M. (eds) Augmented Cognition. HCII 2022. Lecture Notes in Computer Science, vol. 13310. Springer, Cham. https://doi.org/10.1007/978-3-031-05457-0_9


  • DOI: https://doi.org/10.1007/978-3-031-05457-0_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-05456-3

  • Online ISBN: 978-3-031-05457-0

  • eBook Packages: Computer Science, Computer Science (R0)
