
A seven-layer model with checklists for standardising fairness assessment throughout the AI lifecycle

  • Original Research
  • Published in: AI and Ethics (2023)

Abstract

Problem statement: Standardising AI fairness rules and benchmarks is challenging because fairness and other ethical requirements depend on many factors, such as the context, the use case, and the type of AI system. In this paper, we show that an AI system is prone to bias at every stage of its lifecycle, from inception to use, and that every stage therefore requires due attention if AI bias is to be mitigated. A standardised approach to handling AI fairness at each stage is needed.

Gap analysis: Although AI fairness is an active research topic, a holistic strategy for it is generally missing. Fairness assessment is often developer-centric, whereas standard processes are required so that independent audits can serve other stakeholders as well. Moreover, most researchers address only a few facets of AI model-building: the literature concentrates on dataset bias, fairness metrics, and algorithmic bias, while other aspects affecting AI fairness are ignored.

Proposed solution: We propose a comprehensive approach in the form of a novel seven-layer model, inspired by the Open Systems Interconnection (OSI) model, to standardise the handling of AI fairness. Despite their many differences, most AI systems pass through similar model-building stages. The proposed model therefore splits the AI system lifecycle into seven abstraction layers, each corresponding to a well-defined model-building or usage stage. We also provide checklists for each layer and discuss the potential sources of bias in each layer along with mitigation methodologies. This work will facilitate the layer-wise standardisation of AI fairness rules and benchmarking parameters.
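
To make layer-wise fairness assessment concrete, the sketch below shows one way such per-layer checklists could be represented and audited in code. It is a minimal illustration under stated assumptions: the paper's actual layer names and checklist items are not reproduced in this excerpt, so the seven stage names and sample audit questions used here are hypothetical placeholders, not the authors' definitions.

```python
from dataclasses import dataclass, field

@dataclass
class LayerChecklist:
    """Fairness checklist for one lifecycle layer: audit questions plus outcomes."""
    name: str
    items: list[str]
    results: dict[str, bool] = field(default_factory=dict)

    def record(self, item: str, passed: bool) -> None:
        """Record the outcome of assessing one checklist item."""
        if item not in self.items:
            raise ValueError(f"unknown checklist item: {item!r}")
        self.results[item] = passed

    def passed(self) -> bool:
        """A layer passes only when every item has been assessed and passed."""
        return len(self.results) == len(self.items) and all(self.results.values())

# Illustrative lifecycle stages only -- the paper's actual seven layers
# are not named in this excerpt.
LIFECYCLE = [
    LayerChecklist("1 Inception", ["Fairness requirements documented for this use case?"]),
    LayerChecklist("2 Data collection", ["Do samples cover all affected groups?"]),
    LayerChecklist("3 Data preparation", ["Proxy variables for protected attributes reviewed?"]),
    LayerChecklist("4 Model training", ["Bias-mitigation technique applied and recorded?"]),
    LayerChecklist("5 Evaluation", ["Fairness metrics reported per subgroup?"]),
    LayerChecklist("6 Deployment", ["Independent pre-release fairness audit done?"]),
    LayerChecklist("7 Use and monitoring", ["Drift in subgroup outcomes monitored?"]),
]

def audit_report(layers: list[LayerChecklist]) -> dict[str, bool]:
    """Layer-wise report: the lifecycle is only as fair as its weakest layer."""
    return {layer.name: layer.passed() for layer in layers}

if __name__ == "__main__":
    LIFECYCLE[0].record("Fairness requirements documented for this use case?", True)
    print(audit_report(LIFECYCLE))  # unassessed layers report False
```

A structure like this keeps the audit procedure uniform across layers while letting each layer's checklist evolve independently, mirroring the layer-wise standardisation goal described above.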


Availability of data and material

Not applicable.


Funding

The authors did not receive support from any organization for the submitted work.

Author information


Corresponding author

Correspondence to Avinash Agarwal.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Disclaimer

The views or opinions expressed in this paper are solely those of the authors and do not necessarily represent those of their respective organizations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Agarwal, A., Agarwal, H. A seven-layer model with checklists for standardising fairness assessment throughout the AI lifecycle. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00266-9

