A Survey of the General Public’s Views on the Ethics of Using AI in Education

  • Annabel Latham
  • Sean Goltz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11625)

Abstract

Recent scandals arising from the use of algorithms for user profiling to further political and marketing gain have popularized the debate over the ethical and legal implications of using such 'artificial intelligence' in social media. The need for a legal framework to protect the general public's data is not new, yet it is not clear whether recent changes in data protection law in Europe, with the introduction of the GDPR, have highlighted the importance of privacy and led to a healthy concern from the general public over online user tracking and the use of their data. Like search engines, social media and online shopping platforms, intelligent tutoring systems aim to personalize learning and thus also rely on algorithms that automatically profile individual learner traits. A number of studies have been published on user perceptions of trust in robots and computer agents. Unsurprisingly, studies of AI in education have focused on efficacy, so the extent of learner awareness, and acceptance, of tracking and profiling algorithms remains unexplored. This paper discusses the ethical and legal considerations for AI in education, and presents a case study examining the general public's views of it. A survey was recently taken of attendees at a national science festival event highlighting state-of-the-art AI technologies in education. Whilst most participants (77%) were worried about the use of their data in learning systems, fewer than 8% of adults were 'not happy' being tracked, as opposed to nearly two-thirds (63%) of children surveyed.

Keywords

Ethics, Trust, GDPR

Notes

Acknowledgements

The study described in this paper was supported by Manchester Metropolitan University, IEEE Women in Engineering United Kingdom and Ireland, IEEE Women in Computational Intelligence and Manchester Science Museum.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Manchester Metropolitan University, Manchester, UK
  2. Business and Law School, Edith Cowan University, Perth, Australia
