
Development and validation of an instrument to measure undergraduate students’ attitudes toward the ethics of artificial intelligence (AT-EAI) and analysis of its difference by gender and experience of AI education

Published in Education and Information Technologies

An Author Correction to this article was published on 18 June 2022

This article has been updated

Abstract

As artificial intelligence (AI) becomes more prevalent, so does interest in AI ethics. To address issues related to AI ethics, many government agencies, non-governmental organizations (NGOs), and corporations have published AI ethics guidelines. However, few test instruments have been developed to assess students’ attitudes toward AI ethics. Such an instrument is needed to prepare lecture curricula and materials on AI ethics effectively and to evaluate students’ learning quantitatively. In this study, we developed and validated an instrument (AT-EAI) to assess undergraduate students’ attitudes toward AI ethics. The instrument’s reliability, content validity, and construct validity were evaluated after it was developed and administered to a sample of 1,076 undergraduate students. The initial instrument comprised five dimensions with 42 items in total; the final version retained 17 items. Content validity was evaluated by a panel of experts (n = 8). Exploratory factor analysis identified five dimensions, and confirmatory factor analysis indicated a good model fit. Reliability, assessed with Cronbach’s alpha and corrected item-total correlations, was satisfactory. Taken together, these results indicate that the developed instrument has the psychometric properties required of a valid and reliable measure of undergraduate students’ attitudes toward AI ethics. The study also found gender differences in the fairness, privacy, and non-maleficence dimensions, as well as differences in attitudes toward fairness according to students’ prior experience with AI education.
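The reliability statistics named in the abstract (Cronbach’s alpha and corrected item-total correlations) are standard and can be reproduced on any respondents-by-items response matrix. The following is a minimal sketch for illustration only, not the authors’ analysis code; it assumes responses are stored as a NumPy array with one row per respondent and one column per item.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert responses."""
    k = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)
    total_variance = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

def corrected_item_total(responses: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation: each item vs. the sum of the remaining items."""
    total = responses.sum(axis=1)
    corrs = []
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]  # exclude the item from its own total
        corrs.append(np.corrcoef(responses[:, j], rest)[0, 1])
    return np.array(corrs)

# Example with simulated 1-5 responses (illustration only, not study data)
rng = np.random.default_rng(0)
simulated = rng.integers(1, 6, size=(100, 17))
print(cronbach_alpha(simulated))
print(corrected_item_total(simulated))
```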




Acknowledgements

This work was supported by the National Research Foundation (NRF), Korea, under the project BK21 FOUR.

Author information


Corresponding author

Correspondence to Hyeoncheol Kim.

Ethics declarations

Conflict of interest

None.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The original online version of this article was revised: The Korean letters were translated to English.

Appendices

Appendix A

Table 13 Expert panel CVI analysis
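Table 13 reports the expert panel’s content validity index (CVI). For reference, the conventional item-level CVI (I-CVI) is the proportion of experts who rate an item as relevant (3 or 4 on a 4-point relevance scale), and the scale-level S-CVI/Ave is the mean of the I-CVIs. The snippet below is a generic illustration of that calculation, not the authors’ code, and assumes a ratings matrix with one row per expert and one column per item.

```python
import numpy as np

def item_cvi(ratings: np.ndarray) -> np.ndarray:
    """I-CVI: share of experts rating each item 3 or 4 on a 4-point relevance scale."""
    return (ratings >= 3).mean(axis=0)

def scale_cvi_average(ratings: np.ndarray) -> float:
    """S-CVI/Ave: mean of the item-level CVIs across all items."""
    return float(item_cvi(ratings).mean())

# Example: 8 experts rating 42 candidate items (simulated, illustration only)
rng = np.random.default_rng(1)
ratings = rng.integers(1, 5, size=(8, 42))
print(item_cvi(ratings))
print(scale_cvi_average(ratings))
```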

Appendix B

Attitudes toward the ethics of artificial intelligence (AT-EAI).

The purpose of this questionnaire is to investigate undergraduate students’ attitudes toward AI ethics. Each item is followed by five digits (1, 2, 3, 4, 5), and we would like you to click on the digit that most accurately represents your opinion.

1 means “I strongly disagree with this statement.”

2 means “I disagree with this statement.”

3 means “I neither agree nor disagree with this statement.”

4 means “I agree with this statement.”

5 means “I strongly agree with this statement.”

There are no correct or incorrect responses for these items. The collected data will be kept strictly confidential and will be used solely for research purposes. Thank you for your participation.
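Responses on this 1–5 scale are typically aggregated by averaging the items belonging to each dimension. The sketch below is hypothetical: only fairness, privacy, and non-maleficence are named in the abstract, and the item-to-dimension assignment shown here is a placeholder, since the actual mapping of the 17 items is given in Table 14.

```python
import numpy as np

# Placeholder item-to-dimension mapping for illustration; the real assignment
# of the 17 items to the five dimensions is defined in Table 14.
DIMENSION_ITEMS = {
    "fairness":        [0, 1, 2, 3],
    "privacy":         [4, 5, 6],
    "non_maleficence": [7, 8, 9, 10],
    "dimension_4":     [11, 12, 13],   # dimensions not named in the abstract
    "dimension_5":     [14, 15, 16],
}

def dimension_scores(responses: np.ndarray) -> dict:
    """Mean 1-5 score per dimension for each respondent (responses: respondents x 17 items)."""
    return {name: responses[:, cols].mean(axis=1) for name, cols in DIMENSION_ITEMS.items()}
```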

Table 14 Final version of AT-EAI

Appendix C

Table 15 Resources used for AI curriculum analysis

Table 16 MOOCs

Table 17 National educational institution


About this article


Cite this article

Jang, Y., Choi, S. & Kim, H. Development and validation of an instrument to measure undergraduate students’ attitudes toward the ethics of artificial intelligence (AT-EAI) and analysis of its difference by gender and experience of AI education. Educ Inf Technol 27, 11635–11667 (2022). https://doi.org/10.1007/s10639-022-11086-5

