
Part of the book series: Human–Computer Interaction Series


Abstract

As demonstrated in previous chapters, human cognitive modeling techniques and related software tools have been widely used by researchers and practitioners to evaluate user interface (UI) designs and the associated human performance. However, for a system with a relatively complicated UI, it can be difficult to build a cognitive model that accurately captures the different cognitive tasks involved in all user interactions.

Integrating human behavioral data can help the cognitive modeling process. This chapter first provides an overview of how behavioral data, particularly eye-tracking data, can be used in cognitive modeling, and then presents two user studies that incorporated human behavioral data into the process of creating human cognitive models to better estimate human performance and evaluate UIs (part of this chapter previously appeared in Yuan et al.: When eye-tracking meets cognitive modeling: applications to cyber security systems. In: Human Aspects of Information Security, Privacy and Trust: 5th International Conference, HAS 2017, Held as Part of HCI International 2017, Vancouver, 9–14 July 2017, Proceedings. Lecture Notes in Computer Science, vol. 10292, pp. 251–264. Springer, Cham (2017)).
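
To make this concrete, below is a minimal sketch (not taken from the chapter; the gaze-sample format, AOI names, and coordinates are illustrative assumptions) of how fixation data exported from an eye tracker might be aggregated per area of interest (AOI), so that the resulting dwell times can suggest where a CogTool-style model should place its visual attention steps.

```python
# Minimal sketch: aggregate eye-tracking fixations per UI area of interest
# (AOI). The data format, AOI names, and coordinates are illustrative
# assumptions, not the chapter's actual pipeline.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float            # gaze x-coordinate (screen pixels)
    y: float            # gaze y-coordinate (screen pixels)
    duration_ms: float  # fixation duration reported by the tracker

# Hypothetical AOIs: name -> (x_min, y_min, x_max, y_max) in screen pixels.
AOIS = {
    "challenge_panel": (100, 100, 500, 300),
    "response_buttons": (100, 350, 500, 450),
}

def dwell_time_per_aoi(fixations):
    """Sum fixation durations (ms) falling inside each AOI."""
    dwell = {name: 0.0 for name in AOIS}
    for f in fixations:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= f.x <= x1 and y0 <= f.y <= y1:
                dwell[name] += f.duration_ms
                break  # assign each fixation to at most one AOI
    return dwell

fixations = [Fixation(250, 180, 220), Fixation(300, 400, 180),
             Fixation(450, 200, 260)]
print(dwell_time_per_aoi(fixations))
# {'challenge_panel': 480.0, 'response_buttons': 180.0}
```

AOIs that attract disproportionately long dwell times are candidates for explicit visual-search and thinking steps in the model, which is the kind of data-driven refinement the two user studies explore.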


Notes

  1.

    We actually built a number of models for each of the two systems studied, because CogTool supports only static cognitive tasks whereas Undercover involves dynamic ones related to varying challenges (a rough illustration of this workaround is sketched after these notes). We are developing an extension of CogTool to facilitate the modeling of such dynamic cognitive tasks, but this chapter does not focus on that issue.

  2.

    University of Surrey Ethics Self-Assessment forms with reference numbers 160708-160702-20528369 (first round) and 160708-160702-26614408 (second round), which determine whether the research meets the ethical review criteria of the University of Surrey’s University Ethical Committee (UEC).

  3.

    Available at: http://cvit.iiit.ac.in/images/Projects/cartoonFaces/IIIT-CFW1.0.zip
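
As a rough illustration of the workaround described in Note 1 (the challenge names, predicted times, and frequencies below are hypothetical, not actual CogTool output), the expected completion time of a dynamic task can be approximated by combining the predictions of the per-challenge static models, weighted by how often each challenge type occurs:

```python
# Hypothetical per-challenge predictions (seconds) from separate static
# CogTool models of the same dynamic task; all values are illustrative.
predictions = {"challenge_A": 3.2, "challenge_B": 4.1, "challenge_C": 3.7}

# Assumed relative frequency of each challenge type during one session.
weights = {"challenge_A": 0.5, "challenge_B": 0.3, "challenge_C": 0.2}

# Expected time = frequency-weighted average of the static predictions.
expected_time = sum(predictions[c] * weights[c] for c in predictions)
print(f"Expected trial time: {expected_time:.2f} s")  # 3.57 s
```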

References

  1. Alsharnouby, M., Alaca, F., Chiasson, S.: Why phishing still works: user strategies for combating phishing attacks. Int. J. Hum.-Comput. Stud. 82, 69–82 (2015)

  2. Bahrick, H., Bahrick, P., Wittlinger, R.: Fifty years of memory for names and faces: a cross-sectional approach. J. Exp. Psychol. Gen. 104(1), 54–75 (1975)

  3. Barragan-Jason, G., Cauchoix, M., Barbeau, E.: The neural speed of familiar face recognition. Neuropsychologia 75(Supplement C), 390–401 (2015)

  4. Byrne, M.D., Anderson, J.R., Douglass, S., Matessa, M.: Eye tracking the visual search of click-down menus. In: Proceedings of 1999 SIGCHI Conference on Human Factors in Computing Systems (CHI’99), pp. 402–409. ACM (1999)

  5. Canosa, R., Pelz, J., Mennie, N., Peak, J.: High-level aspects of oculomotor control during viewing of natural-task images. In: Human Vision and Electronic Imaging VIII. Proceedings of SPIE – the International Society for Optical Engineering, vol. 5007, pp. 240–251 (2003)

  6. Chanceaux, M., Mathôt, S., Grainger, J.: Normative-ratings experiment for flanker stimuli, figshare. Online dataset (2014). https://doi.org/10.6084/m9.figshare.977864.v1

  7. Cocozza, P.: Crying with laughter: how we learned how to speak emoji. Online document (2015). http://www.richardhartley.com/2015/11/crying-with-laughter-how-we-learned-how-to-speak-emoji/

  8. Fleetwood, M.D., Byrne, M.D.: Modeling the visual search of displays: a revised ACT-R model of icon search based on eye-tracking data. Hum.-Comput. Interact. 21(2), 153–197 (2006)

  9. Foulsham, T., Underwood, G.: What can saliency models predict about eye movements? Spatial and sequential aspects of fixations during encoding and recognition. J. Vis. 8(2), 6 (2008)

  10. Foulsham, T., Walker, E., Kingstone, A.: The where, what and when of gaze allocation in the lab and the natural environment. Vis. Res. 51(17), 1920–1931 (2011)

  11. Golla, M., Detering, D., Durmuth, M.: EmojiAuth: quantifying the security of emoji-based authentication. In: Proceedings of 2017 Workshop on Usable Security (USEC) (2017). https://www.ndss-symposium.org/ndss2017/usec-mini-conference-programme/emojiauth-quantifying-security-emoji-based-authentication/

  12. Hornof, A.J.: Cognitive strategies for the visual search of hierarchical computer displays. Hum.-Comput. Interact. 19(3), 183–223 (2004)

  13. Hornof, A.J., Halverson, T.: Cognitive strategies and eye movements for searching hierarchical computer displays. In: Proceedings of 2003 SIGCHI Conference on Human Factors in Computing Systems (CHI 2003), pp. 249–256. ACM (2003)

  14. John, B., Prevas, K., Salvucci, D., Koedinger, K.: Predictive human performance modeling made easy. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI’04, pp. 455–462. ACM, New York (2004). http://doi.acm.org/10.1145/985692.985750

  15. Liu, J., Harris, A., Kanwisher, N.: Stages of processing in face perception: an MEG study. Nat. Neurosci. 5(9), 910–916 (2002)

  16. Locke, C.: Emoji passcodes – seriously fun, seriously secure. Online document (2015). https://www.iedigital.com/fintech-news-insight/fintech-security-regulation/emoji-passcodes-seriously-fun-seriously-secure/

  17. McKone, E., Kanwisher, N., Duchaine, B.C.: Can generic expertise explain special processing for faces? Trends Cogn. Sci. 11(1), 8–15 (2007)

  18. Mishra, A., Rai, N., Mishra, A., Jawahar, C.: IIIT-CFW: a benchmark database of cartoon faces in the wild (2016). https://cvit.iiit.ac.in/research/projects/cvit-projects/cartoonfaces

  19. Miyamoto, D., Blanc, G., Kadobayashi, Y.: Eye can tell: on the correlation between eye movement and phishing identification. In: Neural Information Processing: 22nd International Conference, ICONIP 2015, Istanbul, 9–12 Nov 2015, Proceedings Part III. Lecture Notes in Computer Science, vol. 9194, pp. 223–232. Springer (2015)

  20. Nicholson, J., Coventry, L., Briggs, P.: Faces and pictures: understanding age differences in two types of graphical authentications. Int. J. Hum.-Comput. Stud. 71(10), 958–966 (2013)

  21. Perković, T., Li, S., Mumtaz, A., Khayam, S., Javed, Y., Čagalj, M.: Breaking undercover: exploiting design flaws and nonuniform human behavior. In: Proceedings of the Seventh Symposium on Usable Privacy and Security, SOUPS’11, pp. 5:1–5:15. ACM, New York (2011). http://doi.acm.org/10.1145/2078827.2078834

  22. Pessoa, L., Japee, S., Ungerleider, L.G.: Visual awareness and the detection of fearful faces. Emotion 5(2), 243–247 (2005)

  23. Rao, R.P.N., Zelinsky, G.J., Hayhoe, M.M., Ballard, D.H.: Eye movements in iconic visual search. Vis. Res. 42(11), 1447–1463 (2002)

  24. Sasamoto, H., Christin, N., Hayashi, E.: Undercover: authentication usable in front of prying eyes. In: Proceedings of 2008 SIGCHI Conference on Human Factors in Computing Systems (CHI 2008), pp. 183–192. ACM (2008)

  25. Sona Systems Ltd: Psychology Research Participation System. Website. https://surrey-uk.sona-systems.com/

  26. Tanaka, J.: The entry point of face recognition: evidence for face expertise. J. Exp. Psychol. Gen. 130(3), 534–543 (2001)

  27. Tatler, B.: The central fixation bias in scene viewing: selecting an optimal viewing position independently of motor biases and image feature distributions. J. Vis. 7(14), 4 (2007)

  28. Tatler, B., Baddeley, R., Gilchrist, I.: Visual correlates of fixation selection: effects of scale and time. Vis. Res. 45(5), 643–659 (2005)

  29. Tobii AB: Tobii Studio User’s Manual. Online document, Version 3.4.5 (2016). https://www.tobiipro.com/siteassets/tobii-pro/user-manuals/tobii-pro-studio-user-manual.pdf

  30. Tobii AB: Tobii Pro X3-120 eye tracker user manual. Online document, Version 1.0.7 (2017). https://www.tobiipro.com/siteassets/tobii-pro/user-manuals/tobii-pro-x3-120-user-manual.pdf/?v=1.0.7

  31. Treisman, A., Souther, J.: Search asymmetry: a diagnostic for preattentive processing of separable features. J. Exp. Psychol. Gen. 114(3), 285–310 (1985)

  32. Tsao, D., Livingstone, M.: Mechanisms of face perception. Ann. Rev. Neurosci. 31, 411–437 (2008)

  33. Unicode: Unicode® Emoji Charts v5.0. Online document. http://unicode.org/emoji/charts/full-emoji-list.html

  34. Willis, J., Todorov, A.: First impressions. Psychol. Sci. 17(7), 592–598 (2006)

  35. Wolfe, J.: Asymmetries in visual search: an introduction. Percept. Psychophys. 63(3), 381–389 (2001)

  36. Yuan, H., Li, S., Rusconi, P., Aljaffan, N.: When eye-tracking meets cognitive modeling: applications to cyber security systems. In: Human Aspects of Information Security, Privacy and Trust: 5th International Conference, HAS 2017, Held as Part of HCI International 2017, Vancouver, 9–14 July 2017, Proceedings. Lecture Notes in Computer Science, vol. 10292, pp. 251–264. Springer, Cham (2017)

  37. Zelinsky, G., Sheinberg, D.: Eye movements during parallel–serial visual search. J. Exp. Psychol. Hum. Percept. Perform. 23(1), 244–262 (1997)


Copyright information

© 2020 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Yuan, H., Li, S., Rusconi, P. (2020). Integration of Behavioral Data. In: Cognitive Modeling for Automated Human Performance Evaluation at Scale. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-030-45704-4_4

  • DOI: https://doi.org/10.1007/978-3-030-45704-4_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-45703-7

  • Online ISBN: 978-3-030-45704-4

  • eBook Packages: Computer Science, Computer Science (R0)
