
Adaptive Psychological Profiling from Nonverbal Behavior – Why Are Ethics Just Not Enough to Build Trust?

Women in Computational Intelligence

Part of the book series: Women in Engineering and Science ((WES))


Abstract

The ethical, social and legal landscape of artificial intelligence (AI) driven systems is rapidly changing. Since the GDPR (GDPR Portal, [Online], 2018. Available at: https://gdpr-info.eu/. Accessed 27/02/2020), stakeholders developing AI systems have had to interpret and implement Article 22, which concerns an individual’s rights in the context of automated decision-making, the ability of AI systems to explain their decisions and the logic involved, and the obligation to develop models using only “correct” data. This has caused major challenges due to the lack of legal guidance, case law and ethical principles governing the use of AI in different contexts. This chapter describes two case studies that use AI within adaptive and automated psychological profiling, in the fields of deception detection and comprehension detection through analysis of nonverbal behavior. The first study considers an automated deception detection system, which contributes toward a traveler’s risk score within a traveler pre-registration system, based upon an interview with an avatar border guard. The second study describes a system designed to detect the comprehension levels of learners whilst they engage in a learning activity, so that the system can automatically intervene to support their learning. These two application areas sit at opposite ends of the spectrum in terms of media interest and public perceptions of ethical AI, within a climate of continually emerging AI technologies, and illustrate why ethics alone are not enough to build acceptance and trust. Numerous surveys of public perceptions of AI have been conducted, typically on populations with a certain level of educational attainment. In order to dispel the myths of AI, it is necessary to empower the general public, regardless of social class, through educational awareness of AI applications.


Notes

  1. In 2019, 45.12% of the world’s population owned a smartphone [7], and by 2025, 72% of all users are expected to use only smartphones to access the internet. In 2019, 49.7% of households had access to a PC [62].

References

  1. Ada Lovelace Institute, Survey: Beyond face value: public attitudes to facial recognition technology, (2019). Available: https://www.adalovelaceinstitute.org/beyond-face-value-public-attitudes-to-facial-recognition-technology/

  2. ARM, Global Artificial Intelligence Survey [Online], (2019). Available: https://www.arm.com/solutions/artificial-intelligence/survey. Accessed 17 Dec 2019

  3. Art. 22 GDPR Automated individual decision-making, including profiling [Online], (2018). Available: https://gdpr-info.eu/art-22-gdpr/

  4. Australian Government – AI principles, (2019). Available: https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/ai-ethics-principles

  5. E. Babad, Teaching and nonverbal behavior in the classroom, in International Handbook of Research on Teachers and Teaching, ed. by L. J. Saha, A. G. Dworkin, (Springer, 2009), pp. 817–827


  6. Z. Bandar, D.A. McLean, J.D. O’Shea, J.A. Rothwell, International Patent Number WO02087443 (World Intellectual Property Organization, Geneva, 2002)


  7. Bankmycell, How many people have smartphones in the world? (2020). Available: https://www.bankmycell.com/blog/how-many-phones-are-in-the-world

  8. F.J. Buckingham, Detecting human comprehension from nonverbal behaviour using artificial neural networks, Manchester Metropolitan University [Online]. PhD thesis, (2017). Available: https://e-space.mmu.ac.uk/id/eprint/617426

  9. F. Buckingham, K. Crockett, Z. Bandar, J. O’Shea, K. MacQueen, M. Chen, Measuring Human Comprehension from Nonverbal Behaviour Using Artificial Neural Networks (IEEE World Congress on Computational Intelligence Australia, 2012), pp. 368–375. https://doi.org/10.1109/IJCNN.2012.6252414


  10. F.J. Buckingham, K.A. Crockett, Z.A. Bandar, J.D. O’Shea, FATHOM: A neural network-based non-verbal human comprehension detection system for learning environments, in IEEE Symposium on Computational Intelligence and Data Mining (CIDM), (2014), pp. 403–409. https://doi.org/10.1109/CIDM.2014.7008696


  11. B. Cheatham, K. Javanmardian, H. Samandari, Confronting the risks of artificial intelligence, McKinsey [Online], (2019). Available: https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/confronting-the-risks-of-artificial-intelligence#. Accessed 6 Dec 2019

  12. K.A. Crockett, J. O’Shea, Z. Szekely, A. Malamou, G. Boultadakis, S. Zoltan, Do Europe’s borders need multi-faceted biometric protection. Biometric Technol. Today 2017(7), 5–8 (2017) ISSN:0969-4765


  13. K. Crockett, S. Goltz, M. Garratt, A. Latham, Trust in computational intelligence systems: A case study in public perceptions, in IEEE Congress on Evolutionary Computation, (2019), pp. 3228–3235


  14. K. Crockett, J. O’Shea, W. Khan, Automated deception detection of males and females from non-verbal facial micro-gestures, in 2020 International Joint Conference on Neural Networks (IJCNN), (2020), pp. 1–7. https://doi.org/10.1109/IJCNN48605.2020.9207684


  15. K. Crockett, M. Garratt, S. Goltz, A. Latham, E. Coyler, Risk and trust perceptions of the public of artificial intelligence applications, in 2020 International Joint Conference on Neural Networks (IJCNN), (2020), pp. 1–8. https://doi.org/10.1109/IJCNN48605.2020.9207654


  16. K. Crockett, J. Stoklas, J. O’Shea, T. Krügel, W. Khan, Reconciling adapted psychological profiling with the new European data protection legislation, In: C. Sabourin, J. J. Merelo, A. L. Barranco, K. Madani, K. Warwick, (eds) Computational Intelligence, Studies in Computational Intelligence, vol 893. Springer, Cham. https://doi.org/10.1007/978-3-030-64731-5_2.

  17. J. Daniels, Lie-detecting computer kiosks equipped with artificial intelligence look like the future of border security [Online], (2018). Available: https://www.cnbc.com/2018/05/15/lie-detectors-with-artificial-intelligence-are-future-of-border-security.html. Accessed 4 Jan 2020

  18. F. Davis, Psychometric research on comprehension in reading. Read. Res. Q. 7(4), 628–678 (1972). https://doi.org/10.2307/747108


  19. R. Day, Can you fool a lie detector? Manchester Evening News interview, 27/10/2018 [Online], (2018). Available: https://www.manchestereveningnews.co.uk/news/greater-manchester-news/lie-detector-test-border-control-15319641

  20. F. Doshi-Velez, M. Kortz, Accountability of AI Under the Law: The Role of Explanation (Berkman Klein Center Working Group on Explanation and the Law, Berkman Klein Center for Internet & Society Working Paper, 2017) Available: https://dash.harvard.edu/bitstream/handle/1/34372584/2017-11_aiexplainability-1.pdf


  21. P. Ekman, V.W. Friesen, The Facial Action Coding System (FACS) (Consulting Psychologists Press, Palo Alto, 1978)


  22. Elements of AI, (2019). Available: https://www.elementsofai.com/eu2019fi

  23. European Commission, BES-05-2015 – Border crossing points topic 1: Novel mobility concepts for land border security [Online], (2015). Available: https://cordis.europa.eu/programme/id/H2020_BES-05-2015

  24. European Commission, Smart lie-detection system to tighten EU’s busy borders, (2018). Available: https://ec.europa.eu/research/infocentre/article_en.cfm?artid=49726

  25. European Commission, Shaping Europe’s digital future: Commission presents strategies for data and Artificial Intelligence [Online], (2020a). Available: https://ec.europa.eu/commission/presscorner/detail/en/ip_20_273

  26. European Commission, White Paper on Artificial Intelligence: A European approach to excellence and trust, (2020b). Available: https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

  27. European Commission, The ethics of artificial intelligence: Issues and initiatives [Online], (2020c). Available: https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf

  28. European Commission Ethics guidelines for trustworthy AI, (2019), Available: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

  29. European Union, “Regulation of the European Parliament and of the Council, Laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts”, (2021). Available: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206

  30. European Union Agency for Fundamental Rights, Preventing Unlawful Profiling Today and in the Future: A Guide [Online], (2018), p. 12. https://op.europa.eu/en/publication-detail/-/publication/328663bc-f909-11e8-9982-01aa75ed71a1/language-en. Accessed 5 Oct 2020

  31. European Union Agency for Fundamental Rights, Facial recognition technology: Fundamental rights considerations in the context of law enforcement, (2019). Available: https://fra.europa.eu/en/publication/2018/prevent-unlawful-profiling

  32. D. Frauendorfer, M.S. Mast, L. Nguyen, D. Gatica-Perez, Nonverbal social sensing in action: Unobtrusive recording and extracting of nonverbal behavior in social interactions illustrated with a research example. J. Nonverbal Behav. 38(2), 231–245 (2014)


  33. GDPR Portal, (2018) [Online]. Available at: https://gdpr-info.eu/. Accessed 27 Feb 2020

  34. A.C. Graesser, S. Lu, B.A. Olde, E. Cooper-Pye, S. Whitten, Question asking and eye tracking during cognitive disequilibrium: Comprehending illustrated texts on devices when the devices break down. Memory Cognit. 33(7), 1235–1247 (2005)


  35. Guardian, The Guardian podcast, “Can we trust AI lie detectors? Chips with Everything podcast”, (2018). Available: https://www.theguardian.com/technology/audio/2018/nov/23/can-we-trust-ai-lie-detectors-chips-with-everything-podcast

  36. C. Hodgson, AI lie detector developed for airport security, Financial Times [Online], (2019). Available: https://www.ft.com/content/c9997e24-b211-11e9-bec9-fdcab53d6959

  37. M. Holmes, Comprehension based adaptive learning systems. PhD thesis, Manchester Metropolitan University, (2017)


  38. M. Holmes, A. Latham, K. Crockett, J. O’Shea, Near real-time comprehension classification with artificial neural networks: Decoding e-learner non-verbal behaviour. IEEE Trans. Learn. Technol. 2017(99) (2017). https://doi.org/10.1109/TLT.2017.2754497

  39. iBorderCtrl, (2020). Available: https://www.iborderctrl.eu/

  40. IEEE Ethically Aligned Design, Version 2 (EADv2), (2017). Available: https://ethicsinaction.ieee.org/

  41. Information Commissioners Office (ICO), Privacy by design, (2020). Available: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/

  42. J.P. Kincaid, R.P. Fishburne, R.L. Rogers, B.S. Chissom, Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel (National Technical Information Service. (RBR 8–75), Springfield, 1975)


  43. R.V. Krejcie, D.W. Morgan, Determining sample size for research activities. Educ. Psychol. Measur. 30, 607–610 (1970)


  44. D. Leslie, Understanding artificial intelligence and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute, (2019). Available: https://zenodo.org/record/3240529#.XjcJhbk3ZaQ

  45. A. Marsh, A Brief History of the Lie Detector [Online], IEEE Spectrum, (2019). Available: https://spectrum.ieee.org/tech-history/heroic-failures/a-brief-history-of-the-lie-detector. Accessed 4 Jan 2020

  46. Media College, What Makes a Story Newsworthy, (2020). Available: https://www.mediacollege.com/journalism/news/newsworthy.html

  47. Microsoft, AI Principles, (2020). Available: https://www.microsoft.com/en-us/ai/our-approach-to-ai

  48. G.A. Miller, The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychol. Rev. 63(2), 81–97 (1956). https://doi.org/10.1037/h0043158


  49. Mores.Code, Morse.ai, (2020). Available: http://www.morse.ai/

  50. New Scientist, AI lie detection at border control should proceed with caution, (2018). Available: https://www.newscientist.com/article/mg24032022-900-ai-lie-detection-at-border-control-should-proceed-with-caution/

  51. R.S. Nickerson, Understanding understanding. Am. J. Educ. 93(2), 201–239 (1985)


  52. OECD, Principles on Artificial Intelligence, OECD, (2019). Available: http://www.oecd.org/going-digital/ai/

  53. Open Web Application Security Project (OWASP), (2019). Available: https://owasp.org/. Accessed 28 Feb 2020

  54. J. O’Shea, K. Crockett, W. Khan, P. Kindynis, A. Antoniades, Intelligent deception detection through machine based interviewing, in IEEE International Joint Conference on Artificial Neural Networks (IJCNN), (2018). https://doi.org/10.1109/IJCNN.2018.8489392


  55. R. Picheta, CNN, Passengers to face AI lie detector tests at EU airports, (2018). Available: https://edition.cnn.com/travel/article/ai-lie-detector-eu-airports-scli-intl/index.html

  56. S. Pinker, The media exaggerates negative news. This distortion has consequences, in Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, (Penguin Publishing Group, 2018). Available: https://www.theguardian.com/commentisfree/2018/feb/17/steven-pinker-media-negative-news


  57. S. Porter, L. ten Brinke, The truth about lies: What works in detecting high-stakes deception? Legal Criminol. Psychol. 15(1), 57–75 (2010)


  58. K. Rayner, K.H. Chace, T.J. Slattery, J. Ashby, Eye movements as reflections of comprehension processes in reading. Sci. Stud. Read. 10(3), 241–255 (2006)


  59. J. Rothwell, Z. Bandar, J. O’Shea, D. McLean, Silent talker: A new computer-based system for the analysis of facial cues to deception. Appl. Cognit. Psychol. 20(6), 757–777 (2006). https://doi.org/10.1002/acp.1204

  60. Silent Talker, (2020). Available: https://find-and-update.company-information.service.gov.uk/company/09533454/officers

  61. S. Soroka, P. Fournier, L. Nir, Cross-national evidence of a negativity bias in psychophysiological reactions to news. Proc. Natl. Acad. Sci. U. S. A. 116(38), 18888–18892 (2019). https://doi.org/10.1073/pnas.1908369116


  62. Statista, Share of households with a computer at home worldwide from 2005 to 2019, (2020). Available: https://www.statista.com/statistics/748551/worldwide-households-with-computer/

  63. The Royal Society, Portrayals and perceptions of AI and why they matter [Online], (2018). Available: https://royalsociety.org/-/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf

  64. The Royal Society, Explainable AI Policy Briefing, (2019). Available: https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf

  65. The Times Educational Supplement, AI in UK schools? I’d give us 5 out of 10, (2019). Available: https://www.tes.com/news/ai-uk-schools-id-give-us-5-out-10

  66. M. Turner, Research Gate, (2015). Available: https://www.researchgate.net/post/How_do_you_determine_whether_a_news_source_is_reputable_Or_a_news_story_is_reliable


  67. UK Gov., Artificial Intelligence: Public awareness survey [Online], (2019). Available https://www.gov.uk/government/publications/artificial-intelligence-public-awareness-survey. Accessed 17 Dec 2019

  68. US Government, Guidance for Regulation of Artificial Intelligence Applications, (2020). Available: https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf

  69. M. Van Amelsvoort, B. Joosten, E. Krahmer, E. Postma, Using non-verbal cues to (automatically) assess children’s performance difficulties with arithmetic problems. Comput. Hum. Behav. 29(3), 654–664 (2013)



Author information


Corresponding author

Correspondence to Keeley Crockett.


Appendices

Biography

When Keeley Crockett was in school, she wanted to be an astronaut, but she was not the greatest at physics, demonstrating a stronger ability in computer science and control technology. Keeley first learnt to program at school at the age of 14, using the BBC BASIC programming language. During this time, she also studied control technology, using simple circuits to build simple traffic lights and small robots. Keeley was inquisitive, liked a challenge, and studied artificial intelligence as part of her first degree. Following a practical Higher National Diploma in Software Engineering, she spent 2 years at the University of Manchester Institute of Science and Technology studying computation, graduating in 1993. Here she gained an appreciation of artificial intelligence and was introduced to fuzzy logic. Whilst studying, Keeley found herself in a very small minority of women on the course.

Following graduation, Keeley applied for and received a good job offer; however, she chose to carry on in education and pursue a PhD, which involved a teaching role within the university. As well as research, Keeley really enjoyed working with students and helping them to understand key computer science concepts, and loved seeing them have a Eureka moment when they finally managed to solve a problem. Over the years, she has had the opportunity to work in hospitals with medical professionals on their use of ICT, to teach older members of the community to use email, and to run computer science projects with young people who left school with no qualifications, helping them to believe in themselves.

Toward the end of her PhD, Keeley joined the IEEE Computational Intelligence Society and was inspired by other women professors in the field. She attended her first IEEE International Conference on Fuzzy Systems (IEEE-FUZZ) in San Antonio, Texas, in 2000 and was inspired and motivated by the quality of the speakers. In 2001, she attended IEEE-FUZZ in Melbourne and was privileged that the founder of fuzzy logic, Professor Lotfi Zadeh, attended her paper session and spoke to her briefly afterward about her work, which provided great motivation. At this conference, she met the most amazing woman, Professor Bernadette Bouchon-Meunier from the Université Pierre et Marie Curie, who had started a group within the IEEE Computational Intelligence Society known as IEEE Women in Computational Intelligence (WCI). Bernadette went on to become her unofficial mentor.

Keeley is an active volunteer within the IEEE Computational Intelligence Society, chairing many sub-committees on travel grants. In 2014, she became the Chair for Women in Engineering (WIE) in the UK and Ireland, serving until 2019, when she went on to serve a year on the IEEE Women in Engineering Leadership Committee. If it were not for the incredible women mentors and role models who provided advice and support throughout her career, she is convinced her path would have been different. Now, as a qualified mentor, she has had the privilege of seeing students grow and follow their goals to achieve their own successful careers (kind of like a proud parent!).

Keeley also has a passion and drive to bring computer science opportunities to rural schools in the UK and can regularly be found in primary schools delivering programming and robotics sessions with children aged between 4 and 10 years old. Running computer science events at national science festivals and at IEEE WIE and WCI events gives young people hands-on experiences and inspires and encourages them within their education. Seeing female academics engage in these many activities sends a clear message: yes, you can be female and work in this field. Despite incredible efforts over the years by many organizations and people to encourage women into STEM careers, there is still much work to be done from the grassroots level. Keeley became a national STEM Ambassador in 2018.

Currently, Keeley is Professor in Computational Intelligence at Manchester Metropolitan University. She has supervised 25 successfully completed PhD students to date, 50% of whom were women. She believes that the key to a successful PhD is a team partnership in which supervisors and students are on a research journey together, trying to solve a societal challenge that will have a positive impact on people’s lives.

Message

I was honored to take part in the writing of this book, which showcases some of the amazing research undertaken by women in the field of computational intelligence. I will openly admit that I suffer from imposter syndrome: I am not very confident, and I end up questioning everything that I do. It took me longer than most to become a full professor in my academic career journey, but I never gave up and instead relished working with some amazing students and people along the way. My advice is to never lose sight of who you are, always be kind, and have the courage to follow your dreams.


Copyright information

© 2022 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Crockett, K. (2022). Adaptive Psychological Profiling from Nonverbal Behavior – Why Are Ethics Just Not Enough to Build Trust? In: Smith, A.E. (eds) Women in Computational Intelligence. Women in Engineering and Science. Springer, Cham. https://doi.org/10.1007/978-3-030-79092-9_3
