1 Introduction

The growing concerns about the societal impact of biased artificial intelligence (AI) have led to a demand for a shift in the approach to developing AI algorithms [8]. These concerns emphasise the need for collaboration between AI developers, organisations, management, society (which might be impacted by AI), and decision-makers [1, 6, 20]. While data quality and ethical considerations are critical in the development of unbiased AI, the responsibility for developing fair AI, and for the ethical implications of AI, also lies with AI developers [12]. Developers build, train, monitor, evaluate and test AI algorithms, which makes them responsible for the moral consequences of the decisions made by those algorithms [12, 13, 15, 16, 19, 21]. AI models do what they are asked to do; AI developers are therefore essential to the development of fair AI [12, 19].

AI developers have been called upon to prioritise the social and ethical considerations of building fair AI and to work proactively on eliminating bias from the AI models that they develop [19]. However, the role of AI developers in reducing bias in AI algorithms through fair processes has scarcely been investigated. While considerable research has been carried out on explainable AI, which pertains to the development of AI that is transparent and understandable to society [4, 7, 17], challenges remain regarding the methods and theory of how explanations are employed [17]. A process model that promotes open dialogue and collaboration between developers and AI stakeholders can be instrumental in developing fair AI. Such a process can facilitate explainable AI and the development of models that can be understood and trusted by all stakeholders, including the society that may be impacted by the models. It is crucial that individuals affected by decisions made by AI algorithms are informed of this fact, provided with relevant information, and entitled to understand the decision-making process behind those algorithms [11]. How organisations develop and deploy their technology can influence society’s trust in them [13]. We discuss this further under the training and development theme of the study, where developers emphasise the importance of transparency and explainability in building fair AI.

Xivuri and Twinomurinzi [22] developed a process model using Habermas’ communicative action theory to ensure compliance with equitable processes during the development of AI. The process model provides AI developers with a fairness process to prioritise ethical considerations when developing and implementing AI, rather than focusing solely on technological advancements. It allows them to consider the impact of their work on society and to actively seek feedback from stakeholders. A process model also enables continuous improvement, thereby enhancing the quality of AI algorithms. However, despite the importance of such process models, there is a lack of understanding of how AI developers interact with them to develop fair AI.

This study therefore used an exploratory qualitative research methodology to engage with AI developers to understand how they interact with fair processes to develop fair AI. The study aimed to answer the following question: How can AI developers contribute to developing fair AI processes to ensure the fairness of AI algorithms?

The results were then used to provide practical guidelines to support the implementation of the Habermas approach to fair AI processes.

The study contributes practical guidelines on how AI developers can help develop fair AI through fair processes. The findings highlight several concerns in the AI development process, including a lack of gender and social diversity in the development team. Managers may prioritise achieving desired outcomes, sometimes missing important checks. Developers are often the only ones who fully understand the work being done, and if they lack integrity, they may conceal biased results from management and other stakeholders. Although models undergo testing before deployment, representatives from the diverse societal groups who may be affected are often not included in the testing process.

We make recommendations in four areas: governance, social, technical, and training and development processes. Responsible organisations need to take deliberate actions to ensure that their developers follow fair processes when developing AI; developers must prioritise ethical considerations and the potential impact of their models on society; partnerships between AI developers, stakeholders and society, particularly those who may be affected by AI models, should be established; and AI developers should ensure transparency and explainability in their models while ensuring they follow adequate processes for testing bias and implementing corrective measures prior to deployment. Additionally, emotional intelligence training should be provided to developers to empower them to facilitate productive conversations with individuals outside their team in order to reach a common understanding and develop fair AI.

The remainder of the paper is structured as follows: the next section discusses the theoretical background, covering AI algorithmic fairness and the Habermas approach to fair AI and what it means for developers. This is followed by the research approach, the discussion of the findings, a discussion summary, limitations, practical implications, and a conclusion.

2 AI frameworks/models (theoretical background)

2.1 AI algorithmic fairness

The implementation of biased AI has led to several instances of unfair outcomes and has perpetuated existing societal inequities. For example, facial recognition technology has been criticised for being biased against certain racial groups. Amazon’s recruitment tool has likewise been criticised for being biased against women [5], while a risk assessment tool used in the US was found to be more likely to falsely label black defendants as higher risk than white defendants [5, 9]. These examples illustrate the significant impact that biased AI can have on certain societal groups. The fairness of AI decisions has become a major challenge for various sectors and society as a whole [10, 14]. This highlights the need for AI developers to understand the consequences that biased AI has on society. Building fair AI requires a domination-free development process and environment, where ethical considerations are prioritised throughout the development process. Habermas defined a domination-free environment as one devoid of power imbalances, fostering genuine understanding, mutual agreement, and democratic decision-making. Such an environment allows all participants an equal opportunity to express their views, interests, and concerns without fear of coercion or manipulation.

In the next section, we delve into a more detailed presentation of the Habermas approach and its application to AI algorithmic fairness.

2.2 The Habermas approach

A Habermasian approach to fair processes in AI algorithmic fairness is highly relevant to AI developers, as they are at the forefront of developing AI algorithms. The approach emphasises engaging in discourse that is free of domination and adhering to a set of rules to exclude bias and include those who will be affected by AI throughout the development phases [22]. The approach draws inspiration from the work of Jürgen Habermas, a renowned philosopher and sociologist known for his theories on communicative action. Habermas emphasises the importance of engaging in open and inclusive discourse to achieve fair decision-making processes. In the context of AI algorithmic fairness, this means that developers should actively seek input from various stakeholders and create opportunities for meaningful discussions that are free from power imbalances or domination. The approach specifically focuses on three discursive requirements: adherence to logical-semantic, procedural, and performative rules, all of which are crucial for AI developers to follow. Figure 1 shows the Habermas process framework for AI algorithmic fairness.

Fig. 1 Process framework for AI algorithmic fairness—a Habermasian approach

By conforming to logical-semantic rules, AI developers can create an AI development environment that is free of domination. For example, they can ensure that whatever is applied to A is applied to B, as long as A and B represent the same object (for example, female/male).
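To make this requirement concrete, consider how a development team might automate such a consistency check. The sketch below is a hedged illustration only, not part of the study: it compares a model’s predictions on records that are identical except for a protected attribute, and the model interface, column names and data layout are all assumptions.

```python
import pandas as pd

def counterfactual_consistency(model, X: pd.DataFrame, protected: str) -> float:
    """Share of records whose prediction is unchanged when the protected
    attribute is flipped to every other observed value.

    `model` is assumed to expose a scikit-learn-style `predict` method;
    the column name and data layout are illustrative assumptions.
    """
    baseline = model.predict(X)
    consistent = pd.Series(True, index=X.index)
    for value in X[protected].unique():
        counterfactual = X.copy()
        counterfactual[protected] = value   # same record, different protected value
        consistent &= (model.predict(counterfactual) == baseline)
    return float(consistent.mean())

# Usage (sketch): a score well below 1.0 means otherwise identical cases
# receive different decisions, violating the logical-semantic requirement.
# score = counterfactual_consistency(trained_model, X_test, protected="gender")
```

A low score does not identify the cause of the inconsistency, but it gives the team a concrete claim to assert and dispute in the discourse that the approach calls for.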

Procedural rules illustrate how AI developers can create a space that allows the AI development team to reach a thorough understanding by testing validity claims. For example, they can assert and dispute what they believe with supporting reasons, which can help identify biases and ensure that the AI algorithm is fair.

Performative rules illustrate how AI developers can ensure that their AI development teams reach a shared understanding and create a balance of power during the development stages of AI. This is crucial for ensuring that all stakeholders are included in the process and that the AI algorithm is fair and equitable.

The next section outlines the research approach of the study.

3 Research approach

The study adopted a qualitative research approach to understand how AI developers can interact in a fair AI algorithm development process. An open-structured interview instrument was used for data collection. Ethics clearance for data collection was received from the university’s ethics committee.

The interview questions were formulated so that they were aligned with the Habermasian approach to discursive interactions free from domination. Therefore, they had three sections: logical-semantic rules, procedural rules, and performative rules. The study aimed to interview AI developers and, therefore, purposive sampling was used. Purposive sampling identifies case sites with specific and desired contextual characteristics [2].

The responses from the interviews were transcribed and thematically analysed using Atlas.ti to understand the shared meanings and experiences of AI developers [3]. Specifically, abductive data coding and analysis were used to identify themes and patterns and to generate theory from the data analysed [18]. Abductive analysis and the critical theory of Jürgen Habermas both aim to critically examine social issues and uncover their underlying structures and causes. Codes were grouped under the three discursive rules: logical-semantic, procedural, and performative.

The data analysis sought to reveal whether adherence to the Habermasian approach to fair AI processes could help reduce bias in AI.

3.1 Demographic analysis: AI developers

Ten (10) AI developers were interviewed. The demographics of the 10 individuals are given in Table 1 below. The highest level of education was a PhD (30%), followed by a Masters (30%), Honours (20%) and an undergraduate degree (20%). Table 1 indicates few female AI developers; they proved difficult to access despite the use of a snowball method.

Table 1 AI developers’ demographics

The industries that the interviewees worked in were mainly Information and Communication Technology (40%), Banking (20%), Telecommunications (10%), Education (10%), Agriculture (10%) and Wholesale and Retail (10%), and their experience in AI ranged from 1 to 12 years. The AI developers interviewed were based in South Africa, which hosts several global companies.

3.2 Content analysis: perspectives of AI developers on AI algorithmic fairness

Fifty-five (55) unique codes were identified for logical-semantic rules, twenty-one (21) for procedural rules and forty-one (41) for performative rules, resulting in 117 codes across the three discursive rules (refer to Appendix 1—AI developers thematic analysis codes). This is also presented in Fig. 2 below. The codes were grouped into different themes and sub-themes, revealing 30 Governance-related codes (AI policies and governance structures, and responsible organisations), 27 Social Responsibility-related codes (collaboration between AI developers, management, domain experts and the society affected by the models), 31 Technical-related codes (data collection and review processes, model testing, and model review), and lastly, 29 Training and Development-related codes (explainability and transparency of the model, AI developer training, AI ethics training, and emotional intelligence training) across the logical-semantic, procedural and performative rules. Figure 2 below summarises the different themes and codes for the three discursive requirements.

Fig. 2 Identified themes and coding summary

Figure 2 above shows the number of codes under each theme for the three Habermas discursive requirements. Although most themes apply across the different discursive requirements, Technical codes were most frequent under logical-semantic rules (19 codes), Social Responsibility (7 codes) and Training and Development (7 codes) codes were most frequent under procedural rules, and Governance codes were most frequent under performative rules (17 codes).

4 Findings

Based on the study’s results, we discuss the AI developers’ role in fair development processes following the Habermas discursive requirements. The study identified the following themes from the data, as presented in Table 2 below:

Table 2 The Habermas discursive rules and applicability of identified themes

4.1 Logical-semantic rules

Logical-semantic rules consider how AI developers can ensure that they do not contradict themselves during the development of AI affecting different societal groups. They also examine how developers can be upskilled in identifying protected attributes and how they are used in the model. Furthermore, logical-semantic rules examine how data used to train models can be verified to ensure that it is not biased against certain societal groups and is a good representation of the population in question. Themes applicable to logical-semantic rules are Governance in the form of AI policies and governance structures (8 codes), responsible organisations (2 codes) and the need for AI regulations (1 code); Social Responsibility in the form of collaboration between AI developers, management, domain experts and society affected by AI (9 codes) and the need for a diverse development team (1 code); Technical in the form of data collection and review processes (17 codes) and the testing of the model before and after deployment (2 codes); and lastly, Training and Development comprising the need for transparent and explainable models (12 codes), AI developers training (1 code) and AI ethics training (2 codes). Figure 3 below provides a summary of the themes and codes for logical-semantic rules.

Fig. 3 Logical-semantic rules—identified themes and coding summary

4.1.1 Governance

4.1.1.1 AI policies and governance

AI policies and governance involve the creation of regulations and governing frameworks, structures and policies that AI developers must comply with in order to ensure the development of ethical and unbiased AI. Eight (8) codes indicate the need for fair AI policies and governance structures that follow best practice, the development and adherence to an AI code of ethics, and the need for every AI project to undergo an approval and ethics clearance process. According to two of the interviewees:

“By obtaining approval from the ethical committee and an ethical clearance certificate, we may prevent having AI engineers contradict themselves while building a model that involves groups of people. A policy statement that formally describes the role of AI in the evolution of humanity is known as an AI code of ethics, also known as an AI value platform. The goal of an AI code of ethics is to advise stakeholders when faced with an ethical decision concerning the use of AI.” Developer 1.

“With primary data, the ethical committee needs to ensure that the data you are collecting doesn't violate certain ethics and principles. This committee is involved in the data collection and data analysis stage in ensuring that the data used is not biased in any way.” Developer 2.

The above findings indicate the need to establish well-defined policies and governance frameworks to guarantee the development of unbiased AI, provided they are followed. AI policies and appropriate governance structures could serve as one strategy to encourage AI developers to develop fair AI [6].

4.1.1.2 Responsible organisation

A responsible organisation is one that takes deliberate actions to ensure that the entire organisation is dedicated to developing and implementing unbiased AI. The first of the two (2) codes on the need for responsible organisations stressed the necessity for organisations to assume accountability and establish governance structures and ethical AI mandates, while the second code emphasised the significance of building trust among the public or society. According to two of the interviewees:

“Organisations should have governance structures and mandates that highlight rules and regulations around handling personal identifying information and are very strict on things like POPI. There should also be a way on how outputs and results are analysed. There's usually a predefined structure on the table.” Developer 9.

“What is required is trust, and to build trust, organisations need to work on the brand, there are some brands that people just don't trust, but the brand is very important. Once the brand is good, companies need to create transparency, not giving the technicalities of it. You want some simple statements showing what the model does and not to the level of algorithmic depth.” Developer 7.

4.1.1.3 AI regulations

AI regulations are a set of rules and principles established by the government to ensure the development of ethical and unbiased AI models. AI regulations aim to balance the potential benefits of AI with its potential risks and to ensure that AI is developed and used in a manner that aligns with social values and human rights. One (1) code indicated the need for government to create fair AI compliance and regulatory requirements. According to one interviewee:

“By keeping up to date with compliance/regulatory requirements, developers will ensure they identify protected attributes and not use them in the development of AI.” Developer 4.

If there are compliance/regulatory requirements for developing fair AI, developers will be aware of the exclusion of protected attributes that could lead to the development of biased AI.

4.1.2 Social responsibility

4.1.2.1 Collaboration

Collaboration refers to the developer’s responsibility to ensure that AI stakeholders beyond the deployment team understand the AI models they create. Nine (9) codes on collaboration emphasised the necessity of communicating AI to individuals without an AI background and of engaging cross-functional groups or subject matter experts to provide guidance during model development. They also highlighted the need to involve the society affected by the model in the development process and to establish a platform for all AI stakeholders to participate. According to two of the interviewees:

“One of the skills required is to be able to communicate what AI developers do as they know all the terms and technicalities; however, they need to learn how to send the message of what they do to a layman. Another important aspect is to show the practicality of AI models and how practical they can be to them. An end user will appreciate an AI model if they see the practicality of it and won't even care about the background.” Developer 3.

“Create cross-functional groups of experts to guide all decisions on the design, development and deployment of responsible ML [machine learning] and AI. These groups examine future and existing use of ML in products developed. Bring customer collaboration into the design, development and deployment of responsible AI. Engage customer advisory councils, drawn from a broad cross-section of our customer base during the product development lifecycle to gain feedback around our development themes related to AI and ML.” Developer 5.

While the majority of AI developers acknowledged the importance of involving society in the development process and explaining AI models to them, some argued that society is only concerned with the model’s background when it fails. According to one of the interviewees:

“This is possible but might not be effective because the focus and effectiveness are different between users and developers. For example, if I’m going to use a facial recognition platform, I won’t care how it does it, I just want to use it and trust it. Explaining won’t do any justice to this, however from a management standpoint it’s possible to explain the model. Management has the responsibility to ensure they understand the model before it is used by society, however this is not necessary for society.” Developer 7.

4.1.2.2 Development team

There is a need for a development team that is diverse. This refers to a group of individuals with varying societal backgrounds and perspectives collaborating to develop fair AI. One (1) code on the development team highlighted the need for diversity. According to two of the interviewees:

“We need diversity in the development teams, with diversity comes different perspectives, and as a result, we get more information from different perspectives and involve quite a number of people. Instead of these instructions coming from management saying develop 123, get input from the people who will be impacted on what they would like to see in the models before we even start developing so that the model is not biased.” Developer 6.

“Most of the bias will reflect what society thinks. The first thing we need to do is have a representative of different social groups as part of the development team. One example includes having both females and males if the model has anything to do with gender. We need to make sure that the developers need to be well represented and all groups that could be biased against/for, need to be properly represented within the development group. And also within management, there needs to be a good representation. Unless we deal with this, we will always have some form of bias, because if a group of males are developing for females, it will be from a male’s perspective, and this might be engraved from a societal standpoint. If we are fair regarding who is involved in the development, and we give them fair rights when they develop, and management who makes the final decision is well represented, then this might really help. From an end-to-end perspective, we need a good representation.” Developer 7.

The responses indicate that the development and management teams currently lack diversity, leading to inadequate representation and the subsequent creation of biased AI.

4.1.3 Technical

4.1.3.1 Data collection and review

Data collection and review refers to the processes followed for the collection of data, the selection of datasets, and the review of the data used to train the model. Seventeen (17) codes on data review highlighted the need for data review, evaluation, testing and validation before the data is used to train a model. The data needs to be anonymised. Some of the AI developers highlighted the need to use big datasets that appropriately represent the population/society that will be affected by the model. There is also a need for segregation between the people involved in the preparation of the data and the developers using the data to train the model. According to two interviewees:

“Before we even deal with the data, we need to start with the data collection and the sampling that goes statistically on which groups to look at—it needs to well represent whatever is going to be the output of the model. Once we have the data, we need to have a different group of people that does verifications that show that the data is well represented which is different from people that collected, and then have a different group looking at the data from an auditing standpoint to make sure that everything is represented. We need to add these different types of roles going forward.” Developer 7.

“The data needs to go through different processes such as training, tests and validation. If the data has gone through these processes and the validation process, there should not be any bias. In addition to this, one needs to ensure that they work with a lot of data. The more the data, the better.” Developer 3.

One interviewee emphasised the need to exclude protected attributes for decision and classification models:

“As much as demographic data is important in certain models, it is advised that in decision or classification models where you provide certain services based on the model output, avoid including attributes or variables such as race, sex etc. So your model can ignore stereotypical decisions based on traits of a certain group of people.” Developer 8.
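Both practices quoted above, checking that the training data represents the affected population and excluding protected attributes from decision or classification models, lend themselves to simple automated checks. The sketch below is a hedged illustration only; the column names, tolerance and reference population shares are assumptions rather than figures from the study.

```python
import pandas as pd

# Illustrative data-review checks; all names and values are assumptions.
PROTECTED = ["race", "sex", "gender"]              # attributes to exclude from decision models
POPULATION_SHARE = {"female": 0.51, "male": 0.49}  # e.g., census figures for the population

def check_representation(df: pd.DataFrame, col: str, tolerance: float = 0.05) -> bool:
    """Flag the dataset if any group's share deviates from the reference
    population by more than the tolerance."""
    sample_share = df[col].value_counts(normalize=True)
    return all(
        abs(sample_share.get(group, 0.0) - expected) <= tolerance
        for group, expected in POPULATION_SHARE.items()
    )

def drop_protected(df: pd.DataFrame) -> pd.DataFrame:
    """Remove protected attributes before training a decision/classification model."""
    return df.drop(columns=[c for c in PROTECTED if c in df.columns])
```

Such automated checks complement, rather than replace, the independent human verification and auditing roles that the interviewees call for.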

4.1.3.2 AI testing

AI model testing refers to testing the AI model for any bias or potential for bias. Two (2) codes on AI testing revealed the need to test and evaluate AI models for bias both before and after they are deployed into the live environment. According to two of the interviewees:

“Some of the ways used to prevent AI bias is to test the model before and after deployment.” Developer 5.

“Proper testing and evaluation of the model. Model should be evaluated on slices of the data and the whole dataset.” Developer 4.
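Developer 4’s suggestion to evaluate the model on slices of the data as well as on the whole dataset could be implemented along the following lines. This is a hedged sketch: the metric (accuracy), the scikit-learn-style model interface and the column names are assumptions, and in practice a team would add fairness metrics appropriate to the model.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def evaluate_slices(model, X: pd.DataFrame, y: pd.Series, slice_col: str) -> dict:
    """Evaluate the model on the whole dataset and on each group slice.
    Large gaps between slices are a signal of potential bias to investigate."""
    results = {"overall": accuracy_score(y, model.predict(X))}
    for group, X_slice in X.groupby(slice_col):
        results[group] = accuracy_score(y.loc[X_slice.index], model.predict(X_slice))
    return results

# Usage (sketch): run before deployment and repeat on live data afterwards,
# as the interviewees recommend.
# report = evaluate_slices(trained_model, X_test, y_test, slice_col="gender")
```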

4.1.4 Training and development

4.1.4.1 Transparency and explainability

Transparency and explainability refer to the ability to understand and trace the datasets used in a model and how the model arrives at its decisions. Twelve (12) codes on transparency and explainability reveal the need for technology and tree-based models to explain the model to society and other stakeholders. AI developers should be able to transform the technicalities of the model they are building into a business story that is easy for all stakeholders to understand. Two of the interviewees highlighted the need for transparency in the AI development process:

“Try and stay away from the technical aspects of the model when presenting to business. The best way to do this is to take your results and transform them into a business answer or a business story that you can better understand….” Developer 9.

“Build ethical AI into product development and release framework. These cannot be separate processes that create more work and complexity for developers and product teams. New ML controls have been incorporated into our formal control framework to serve as additional enforcement of our ML ethical principles. Development teams must examine every ML product through an ethical lens by asking questions about data collection and data minimization, transparency and values” Developer 5.

Some AI developers, however, believed that explaining the model to society and management might not be effective or easy at all. Ethics should rather be incorporated into the development process of AI to ensure that all stakeholders trust the models being developed without needing them to be explainable. According to one of the interviewees:

“It’s difficult as most of the models used are black boxed and it’s difficult to explain to a businessperson how the model makes the decisions.” Developer 9.

4.1.4.2 AI developers’ training

One (1) code on developers’ training highlighted the need for developers to be trained to become better developers. This will ensure that they build AI models that are easy to understand. According to an interviewee:

“The use of applications would be very helpful in explaining what an algorithm does to society and management. For example, everyone has got apps on their phones, and even a two-year-old can operate an app. Therefore, we can also ensure that our data products have an end product whereby all the technicalities going on in the background will be obscured, and society only gets to see the easy part so that they can appreciate what you’ve done using apps.” Developer 2.

4.1.4.3 AI ethics training

Two (2) codes on AI ethics training revealed the need for developers to undergo such training and to learn how to handle protected attributes. According to two of the interviewees:

“Ethics is very important, whatever you are developing has to go through a fair process to ensure that what happens to A, happens to B. AI developers should be given training around AI ethics to help them in ensuring they are not biased during AI development.” Developer 2.

“AI developers need to be trained on how to handle protected attributes.” Developer 2.

4.1.5 Summary

The above section discussed how AI developers can establish an AI development environment that promotes equality, such as ensuring that any changes made to object A are also applied to object B if they represent the same entity (for example, male/female). The primary findings relating to logical-semantic rules reveal several shortcomings in the AI development process. One major issue is the absence of proper procedures for obtaining ethical clearance prior to the initiation of AI projects. Furthermore, the findings indicate that there is a general lack of trust from society towards AI development organisations, and these organisations are not taking appropriate steps to earn that trust when it comes to developing fair AI. The general public is typically indifferent to the model’s background unless the model fails to perform as expected. Additionally, the responses from the interviews indicate that there is currently no diversity within the development or management teams. Lastly, there are inadequate processes in place for thorough data reviews and validations.

The results indicate the necessity for governance mechanisms, including AI policies and structures, and for responsible organisations committed to developing AI for the betterment of society. Adherence to AI regulations, and social responsibility through partnerships with experts and those affected by AI, are also critical, as are technical processes such as pre- and post-deployment testing, model reviews, and user feedback. Lastly, training and development, including AI ethics training for developers and the transparency and explainability of the models they develop, are essential.

4.2 Procedural rules

Procedural rules examine the creation of a space that allows all AI stakeholders to assert what they believe and to dispute any aspect of AI development that they feel could lead to bias. Themes applicable to procedural rules are Governance, in the form of AI policies and governance structures (1 code) and responsible organisation (1 code); Social Responsibility, in the form of collaboration between AI developers and all affected stakeholders (6 codes) and the need for a diverse development team (1 code); Technical, in the form of data review processes (3 codes) and testing of the model before and after deployment (2 codes); and lastly, Training and Development, comprising the need for transparent and explainable models (4 codes) and emotional intelligence training for AI developers (3 codes). Figure 4 below provides a summary of the themes and codes for procedural rules.

Fig. 4 Procedural rules—identified themes and coding summary

4.2.1 Governance

4.2.1.1 AI policies and governance

It is crucial to establish AI policies and governance structures that enable the participation of various AI stakeholders in the development process and empower them to raise any concerns about any potential bias. One (1) code on the need for AI policies and governance structures highlighted the need for AI development standard procedures that encourage collaboration between management, domain experts and society impacted by the AI models, ultimately ensuring the deployment of ethical models. According to one of the interviewees:

“Institute standards when deploying AI, organisations should follow a framework that will standardise production while ensuring ethical models.” Developer 5.

4.2.1.2 Responsible organisation

To achieve fair AI, organisations need to take greater responsibility by prioritising its fair development and deployment. One (1) code on the responsible organisation emphasised the importance of organisations prioritising measures that prevent bias and engaging all stakeholders to ensure the development of fair AI. According to one of the interviewees:

“As AI gains traction in a critical process, organisations must learn how to prevent AI bias.” Developer 5.

4.2.2 Social responsibility

4.2.2.1 Collaboration

Collaboration among all AI stakeholders is essential to achieve a comprehensive understanding of what an AI model does and ensure the development of fair AI. Empowering participants to voice their concerns about potential biases is crucial in this process. Six (6) codes on collaboration highlighted the need for the involvement of all affected stakeholders in improving the model, the alignment between the AI developers and all affected stakeholders, and lastly, the need for design thinking tailored to AI development. According to two of the interviewees:

“There’s room for errors and always room for improvements in AI development, and this is why there are prediction accuracies in AI models. A developer can get ideas from someone who wasn’t involved, and if they can give you advice it’s always good to incorporate their advice as it can help improve the model. Any party can be involved because they can look at the model from different aspects and analyse the problem differently. If you are working on a model that’s trying to improve something, you need different stakeholders to give you their ideas as different parties might have different perspectives. It’s very important to involve different stakeholders.” Developer 3.

“There are already methodologies that are used, that is, design thinking, and these types of frameworks can be very key as they involve the users and stakeholders before finalising any development process. Right now, design thinking is used for the traditional development of products; however, the same approach could be applied to ML models. You could have design thinking tailored to ML to create and ensure fairness. Design thinking does a lot of justice in development in general, so something similar like this but focused on ML could be done.” Developer 7.

4.2.2.2 Development team

Having a development team that is receptive to inputs and recommendations from various AI stakeholders outside of the team is essential. The one (1) code on the development team highlighted the need for AI developers that are open-minded and willing to receive criticism and input from different stakeholders to improve the model and promote the development of fair AI. According to one of the interviewees:

“There’s always a possibility of Developer/management bias. Developers always want to see what they want to see and this could skew the models results. Everyone needs to go into model development open-minded.” Developer 4.

The response from the interviewee indicates that developers and management may have a predetermined mindset that influences their preferences, which can result in bias in the development process. To prevent this, the development team must remain receptive and inclusive towards external perspectives and feedback.

4.2.3 Technical

4.2.3.1 Data collection and review

The involvement of all AI stakeholders in the AI development process needs to start from the data collection and review practices. Three (3) codes on data collection and review highlighted the need for unbiased data sampling techniques and the implementation of robust processes for selecting the right datasets for the model. AI developers need to recognise that the quality of the model is directly linked to the quality of the data ingested into the model. According to one of the interviewees:

“Bias can creep in through multiple ways, including in sampling practices that ignore large swaths of populations, and confirmation bias, where a data scientist only includes those datasets that conform to the worldview. A way to address the bias problem is to understand the potential for AI bias. Supervised learning, which is one of the subsets of AI, operates on rote ingestion of data. By supervised learning, a trained algorithm makes decisions on datasets that it has never seen before. Following the garbage in and garbage out principle, the quality of the AI decision can only be as good as the data it ingests.” Developer 5.

Some developers believed that a model is as good as the data it ingests and, therefore, creating a space that allows all stakeholders to be involved will not help unless the data used to train the model is dealt with first.

4.2.3.2 AI testing

Testing the AI models before and after deployment is crucial to detect and address bias effectively. Two (2) codes on AI model testing highlighted the need to test the AI model before and after deployment. According to one interviewee:

“Test the model before and after deployment. Testing AI and ML models is one way to prevent biases before releasing the algorithm into production.” Developer 5.

One of the developers suggested performing bias tests and sharing the fairness test results internally with other stakeholders within the organisation. This indicates that testing results are typically not shared with other stakeholders within the organisation. According to the interviewee:

“The emphasis should be on identifying biases, and ensuring we communicate these biases when we use our model. Removing bias is much harder.” Developer 10.

4.2.4 Training and development

4.2.4.1 Transparency and explainability

It is crucial that developers make the models they develop transparent and explainable to stakeholders outside the development team. Doing so enables stakeholders to review, criticise, and raise concerns about potential biases in the algorithms. Four (4) codes on transparency and explainability emphasised the importance of disclosing the model’s workings, the datasets used, the model training process, and the decision-making process. According to two of the interviewees:

“Increase transparency. AI remains challenged by the inscrutability of its processes. Deep learning algorithms, for example, use neural networks modelled after the human brain to arrive at a decision but exactly how they get there remains unclear. Part of the move toward explainable AI is to shine a light on how the data is being trained and how you're using the algorithm.” Developer 5.

“Transparency, data scientists should not hide anything about their datasets. The main key is transparency and understanding the problem. Once you understand the problem you are able to understand the route that you need to take to get to the solution and you are able to defend the results of the model.” Developer 9.

4.2.4.2 Emotional intelligence training

To collaborate effectively with AI stakeholders outside of the development team and consider their inputs and concerns, developers must acquire emotional intelligence skills. Emotional intelligence training can help developers understand and manage their emotions, communicate effectively with AI stakeholders, and develop unbiased AI models. Three (3) codes on emotional intelligence training emphasised the need to train AI developers in emotional intelligence, soft skills and being receptive to ideas and recommendations from other stakeholders. According to one of the interviewees:

“Developers have to be open-minded; they need to be open to changes and be willing to accept other people's opinions. To get developers to be open-minded and open to change, they need to be trained on not just being code wizards but being mature and emotionally intelligent. AI developers need to be trained on soft skills.” Developer 2.

The response above indicates that AI developers are often reluctant to accept criticism and recommendations from stakeholders who are not part of the development team.

4.2.5 Summary

The above section examined how organisations can create a space that allows all AI stakeholders, including the society impacted by these systems, to reach a thorough understanding of the AI models being developed. The key findings regarding procedural rules include a lack of clear policies and governance structures, and organisations failing to take responsibility for building fair AI. Additionally, the responses indicate that developers were resistant to receiving criticism and recommendations from other stakeholders. Developers and organisations were found to be opaque regarding the datasets they use, how the model is trained and how it arrives at its decisions. The results show the need for AI policies and governance structures, responsible organisations that are dedicated to developing fair AI, and collaboration between different stakeholders. Moreover, AI models and the datasets used to train them must be transparent and explainable, and AI developers need to be trained in emotional intelligence and be receptive to critique and views from other stakeholders.

4.3 Performative rules

Performative rules examine how AI developers can ensure that all AI stakeholders are involved, that firm directives on the importance of all parties avoiding bias are issued, and that developers implement recommendations from the various stakeholders. Themes applicable to performative rules are Governance, comprising AI policies and governance (9 codes), AI regulations (2 codes) and responsible organisation (6 codes); Social Responsibility, comprising collaboration (10 codes); Technical, comprising AI testing (4 codes), data review (1 code) and model review (2 codes); and lastly, Training and Development, comprising emotional intelligence training (4 codes), transparency and explainability (2 codes), and AI ethics training (1 code). Figure 5 provides a summary of the themes and codes for performative rules.

Fig. 5 Performative rules—identified themes and coding summary

4.3.1 Governance

4.3.1.1 AI policies and governance

To promote power balance in the development of fair AI, AI policies and governance structures that enable and facilitate the involvement of all parties affected by the AI model should be established. This can help ensure the development of fair AI. Nine (9) codes on AI policies and governance highlighted the need for alignment in AI governance frameworks, policies, processes and principles, an AI fairness charter, and documented guidelines and standards. There is a need for an AI principles management team and the enforcement of penalties for non-compliance with AI principles and ethics. According to one of the interviewees:

“Define fairness for your organisation. Develop an AI fairness charter template and then ask all departments that are actively using AI to complete it in their context. In particular, for business and line managers and product and services owners to ensure AI fairness along the supply chain. Require suppliers you are using who have AI built into their procured products and services to complete an AI fairness charter and to adhere to company policies on AI fairness” Developer 5.

4.3.1.2 Responsible organisation

A responsible organisation takes steps to guarantee that developers and the stakeholders affected by the model reach a common understanding when developing unbiased AI. Six (6) codes on being a responsible organisation highlighted the need to implement responsible AI governance. Management needs to agree that the use of AI aligns with organisational values and to include this on the management agenda. The organisational approach to AI fairness should be formally communicated, and fairness test outcomes shared with the organisation at large. According to two of the interviewees:

“Risk mitigation needs to be built into the development process.” Developer 4.

“We can avoid this kind of bias by implementing responsible AI governance. Some of the practices that are critical to achieving AI governance: 1. Establish AI principles—the management team must be aligned through an established set of AI principles. The leaders must meet and discuss AI use and how it aligns with the organisation's values. 2. Establish a responsible AI governance framework. Organisational values and tech are linked and must be handled collectively. Management's agenda should include how innovation heads share how they develop and use AI in key functions. 3. Operationalise your governance framework. This can be achieved by designing a point of contact and support. Communicate the stages of the AI lifecycle for testing procedures and document relevant findings at the completion of each stage.” Developer 5.

The above responses indicate that there is misalignment and a lack of coherence between AI governance structures, frameworks, policies, processes, principles, AI fairness charters, guidelines, and standards. The findings further indicate that there is no alignment between the use of AI and organisational values.

4.3.1.3 AI regulations

Two (2) codes on AI regulations highlighted the need for formalised AI compliance and regulatory law enforcement requirements. According to one interviewee:

“Enforcing laws and penalties if models are biased, there needs to be more law enforcement. There's currently no law enforcement for technology being deployed in SA. The sooner we get this in place, the more careful organisations will be, if the cost of penalties is high people will think twice when developing AI. If organisations know they can be fined if their models are biased, they will put in the effort and make sure their models are not biased.” Developer 6.

The above response indicates that there is currently no law enforcement or penalties in place to address the development of unfair and biased AI models. Implementing such measures can serve as a crucial mechanism to ensure that organisations prioritise the development of fair AI.

4.3.2 Social responsibility

4.3.2.1 Collaboration

Collaboration among AI developers, AI stakeholders and society that may be impacted by these models is essential to establish a shared understanding of the creation of unbiased AI. Ten (10) codes on collaboration highlighted the need to involve all AI stakeholders in the AI lifecycle as a measure to improve the model and ensure that it is not biased against certain societal groups. According to two of the interviewees:

“The need for collaboration is something that I've conceptualised. How do we consolidate this ecosystem, which moves directly from management to the end-users, or community, in which the developer is in the middle? How do they connect to ensure that excellent products are created? This goes back to the creation of apps. We need an app that an ordinary person can connect—with developers, management and the community. That chain of transmission is a requirement.” Developer 2.

“Through a project pipeline. You must design a pipeline that can accommodate every single stage of project development and make sure that everyone involved has an input. From research to testing.” Developer 8.

One developer believed that, currently, it is usually only the project manager who has access to all the different stages of AI development. Other stakeholders are not involved in or informed about the AI development process; their exposure is limited to the final product. According to the developer:

“A project has various aspects (including end-user engagement, project management, development, model optimisation, and dev-ops). Typically, only the project manager has access to all aspects.” Developer 10.

4.3.3 Technical

4.3.3.1 Data collection and review

One (1) code on data collection and review highlighted the need for AI developers to fully understand the data in place before starting with the development of the model. According to one interviewee:

“You then get to understand what type of data is readily available. You also need to understand the raw files as a data scientist because what you see in the database has been decoded and changed and diced and it's not loaded because they don't know the use of some of the fields.” Developer 9.

4.3.3.2 AI testing

Different AI stakeholders, including the society impacted by these models, need to participate in AI model testing. Representatives from the diverse societal groups that will be impacted by the model should be included in the testing process. Four (4) codes on AI testing highlighted the need to perform fairness tests on the model before and after deployment, with the results shared with the organisation before the model is deployed. According to two of the interviewees:

“Test AI fairness before any tech launches. Require departments and suppliers to run and internally publish fairness outcome tests before any AI algorithm is allowed to go live. Once you know what groups may be unfairly treated due to data bias, simulate users from that group and monitor the results. Communicate your approach to AI fairness. Set up fairness outcome learning sessions with customers and public-facing staff to go through the fairness outcome test for any new or updated products and services. This is particularly relevant for marketing and external communications as well as customer service teams.” Developer 5.

“The model biases are tested and if the model is biased, it has less chance of going into production.” Developer 9.

The above responses indicate that the results of AI model testing are not shared with individuals outside the development team. One developer suggested going through the fairness test results with individuals outside the development team. This entails sharing the fairness test outcomes with external stakeholders who can provide fresh perspectives and insights, enabling a broader range of views and a more comprehensive evaluation of the AI model’s fairness.
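Developer 5’s suggestion to simulate users from a potentially affected group and monitor the results could, for instance, be realised as a published outcome-gap test such as the hedged sketch below. The demographic-parity gap shown here is one common fairness metric among many; the simulated-user data, model interface and column names are assumptions, not details from the study.

```python
import pandas as pd

def fairness_outcome_gap(model, simulated_users: pd.DataFrame,
                         group_col: str, group: str) -> float:
    """Difference between the positive-outcome rate for a potentially
    affected group and the rate for everyone else (demographic-parity gap).
    Assumes a binary classifier with a scikit-learn-style `predict` method."""
    preds = pd.Series(model.predict(simulated_users), index=simulated_users.index)
    in_group = simulated_users[group_col] == group
    return float(preds[in_group].mean() - preds[~in_group].mean())

# Usage (sketch): publish this gap internally before go-live, as the
# interviewee suggests, and keep monitoring it after deployment.
# gap = fairness_outcome_gap(trained_model, simulated_df, "race", "group_at_risk")
```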

4.3.3.3 Model review

The model should be reviewed by different AI stakeholders and feedback provided to the AI developers responsible for the development of the models. Two (2) codes on AI model review emphasised the importance of statistical performance reviews of the model. Developers should be given feedback from the public regarding the fairness of the model. According to one of the interviewees:

“These days we have advanced statistical analysis in terms of how products are performing in the market and we need to have these statistics get back to the developers and those outputs show why people are interested and why people are not interested in specific models. Statistical feedback on the model can assist the developer in understanding what they need to do going forward.” Developer 7.

4.3.4 Training and development

4.3.4.1 Transparency and explainability

Transparency and explainability are crucial for achieving a common understanding of developing fair AI between the developer, AI stakeholders and society impacted by the models. Two (2) codes on transparency and explainability highlighted the need for transparency on the data used to train the model and the need for the AI development lifecycle and any findings out of it to be documented. According to two of the interviewees:

“The key stakeholders are the client or business with a problem that needs to be resolved. There's also another aspect which is an insight that you provide to businesses when you identify something that you can highlight to them that can potentially assist them. The marketing team and sales team also needs to be involved. You also give the different stakeholders updates during the whole process. There's also explanatory analysis and understanding of the data. The end user is the most important person and it's important to understand them.” Developer 9.

“The model biases are tested and if the model is biased, it has less chances of going into production. However, data scientists can hide the fact that their model is biased and if it's deployed it won't do well. Data scientists ensuring there's no bias in their models is based on one's integrity. There are things that data scientists can hide because they are the technical ones, and the ones that understand what they did with their variables, or decide the results they share with the business and the results they don't share. Sometimes you know your model is biased as a data scientist, but you have managers that are result driven and won't give you enough time to make sure your model is not biased. Some data scientists are not true to their work and don't apply the rules and regulations learnt on AI ethics. Management needs to be technically skilled so that they can understand the data scientist and not give them all the power, and just say they only want to see the results. The moment you give a data scientist all the rights you are not taking responsibility as a manager and the data scientist can do as they please. If managers are technically skilled there can be a flowing conversation between them and the developer.” Developer 9.

The above responses indicate that the individuals who may be impacted by the model are of utmost importance and should be actively involved in the development process. Furthermore, the responses indicate that managers, driven by their goals, sometimes fail to allocate sufficient time for developers to ensure that their models are unbiased.

4.3.4.2 AI ethics training

AI developers need to be trained in AI ethics so that they understand the importance of building fair AI and follow processes that enable the building of fair AI models. One (1) code highlighted the need for AI developers to undergo AI ethics training. One developer believed that, to guarantee the development of fair AI, the scope of work must explicitly include the requirement for fairness. According to two of the interviewees:

“In practice, instructions to avoid bias will only trickle down to the developers if it is included in the scope of work. Clients should explicitly request this in the specification. On AI projects, there is rarely time and budget to create tasks that are not explicitly listed in the spec.” Developer 10.

“Once you build something from an organisation standpoint, you build something, but you are not involved in how it affects people. If you are a developer, you might just sit in the basement and do your development.” Developer 7.

The above responses indicate that some developers may lack an understanding of the significance of ensuring fairness in their AI models; these developers would benefit from AI ethics training.

4.3.4.3 Emotional intelligence training

Establishing a common understanding and achieving a power balance between the developer and other AI stakeholders requires the developer to undergo emotional intelligence training. This will enable them to engage in productive conversations with individuals outside the development team and facilitate a shared understanding in creating fair AI. Four (4) codes on emotional intelligence training highlighted the need for training AI developers to be open to receiving criticism and recommendations from stakeholders outside the development team. According to two of the interviewees:

“Organise workshops to train developers in accepting criticism on what they have done. Different individuals/data scientists can be invited to give feedback on experiences that they have had where their work was criticised and how the criticism has helped them improve their model.” Developer 3.

“When you develop a model, it’s mostly to solve a problem. However, developers get too attached to their work because they have been doing it for weeks and it becomes personal. However, they need to keep in mind that they are developing these models to solve a problem and if people come with critics and recommendations, it's just to enhance the model and it's not personal. Developers need to be open-minded, and understand that we all have a common goal to solve the problem. They must also be agile in their model building, and there should not be only one way to do something but be able to understand that things happen in between, and they need to adjust accordingly. Developers should be trained, there should be mandatory training for developers or compulsory short courses on how to be agile and not to take things personally, have a common goal in mind etc. Organisations shouldn't assume that developers know all these things, they need to be taught because they could be good with developing but need assistance with their emotional intelligence.” Developer 6.

The responses above indicate that developers are used to working independently and may not always be receptive to criticism or actively involving others in the development process.

4.3.5 Summary

Performative rules examine how organisations can ensure that their AI stakeholders reach a shared understanding and create a power balance during the development stages of AI. One of the key findings for performative rules is that there was no alignment and coherence across AI governance structures, frameworks, policies, processes, principles, AI fairness charters, guidelines, and standards. The responses further indicate a disconnect between organisational values and AI processes. AI developers have traditionally worked independently, without incorporating inputs from other stakeholders, such as management and society, who may be impacted by the algorithms. Even though the models are tested, the results of the tests are not shared with individuals outside the development team. There are no legal enforcement mechanisms or penalties for developing unfair AI. At times, managers in charge of AI may prioritise achieving the desired outcomes without allowing developers adequate time for testing and hyperparameter tuning, which is crucial for ensuring the models are not biased prior to deployment. This approach can result in developers being the only ones with a full understanding of the work being done, and if they lack integrity, they may conceal certain results from management and other AI stakeholders. The study findings emphasise the need for clear and aligned AI policies and governance structures that can safeguard the development of fair AI models, for responsible organisations that ensure alignment between their values and mandates, and for transparency in testing the fairness of the models. The results of such tests should be shared with the larger organisation, and the models should be continually reviewed for any bias.
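
One concrete way to make such fairness testing transparent and enforceable is to encode it as an automated pre-deployment gate whose results can be shared beyond the development team. The following Python sketch is illustrative only: the disparate-impact metric and the 0.8 threshold (the common "four-fifths" heuristic) are assumptions chosen for the example, not prescriptions drawn from the interviews.

import numpy as np

def disparate_impact_ratio(y_pred, group):
    # Ratio of positive-outcome rates between the worst- and best-off groups.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

def fairness_gate(y_pred, group, threshold=0.8):
    # Refuse deployment when the ratio falls below the (assumed) threshold.
    ratio = disparate_impact_ratio(np.asarray(y_pred), np.asarray(group))
    if ratio < threshold:
        raise ValueError(
            f"Fairness gate failed: disparate impact {ratio:.2f} < {threshold}")
    return ratio

# Hypothetical predictions for two demographic groups, A and B.
preds = np.array([1, 1, 1, 0, 1, 1, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(f"Disparate impact ratio: {fairness_gate(preds, groups):.2f}")

Recording such a ratio in a shared report, rather than leaving it inside the development team, addresses the transparency gap the interviewees describe.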

5 Discussion summary

AI developers play a crucial role in developing fair AI processes. They need to take responsibility for considering the consequences that biased AI has on society and for ensuring they build fair AI. Achieving fair AI requires the establishment of governance, societal, technical, and training and development fair processes. Organisations, in turn, must take greater responsibility and deliberately act to ensure that their developers follow these fair processes. Establishing spaces that allow for collaboration between AI developers, AI stakeholders and society, particularly those who may be affected by AI models, is crucial for fairness in AI development. AI developers must ensure transparency and explainability in their models, and follow adequate processes for testing bias and implementing corrective measures before the deployment of AI. Additionally, providing emotional intelligence training to developers can empower them to facilitate productive conversations with individuals outside their teams and help them develop fair AI.

Overall, following the logical-semantic, procedural, and performative rules can help ensure an AI development process that is free of domination. Table 3 summarises the measures that can be put in place to ensure fair processes for the development of fair AI, and AI developers must prioritise these measures to promote ethical and inclusive AI development.

Table 3 Discussion summary

5.1 Theoretical implications

In terms of theoretical implications, this study offers a fair process model that can be used to promote AI developers’ responsibility in ensuring AI algorithmic fairness throughout the AI development process.

The process model aims to address the growing concerns about the potential bias of AI models, which can have significant societal impacts, by providing a means to mitigate bias throughout the AI development process.

The study suggests practical guidelines to aid organisations, AI managers and developers in ensuring fairness throughout the development process. It emphasises the need for fair processes that ensure that bias is identified and addressed before deployment of the AI model. The model suggests the application of logical-semantic rules, procedural rules, and performative rules to mitigate power imbalances and prevent any entity or developer from exerting undue influence over the design and deployment of AI systems.

Overall, the study's theoretical contribution lies in the provision of a process model and practical guidelines that promote AI developers' responsibility for ensuring algorithmic fairness throughout the AI development process, thereby contributing to the field of AI ethics and fairness. By mitigating bias and promoting fairness, the study aims to improve the ethical and societal outcomes of AI systems.

5.2 Practical implications

This study provides practical guidelines for AI developers to adhere to fair processes during the development of AI. Through engagement with AI developers, the study identified key measures that are crucial for the development of fair AI. Our proposed model and guidelines have implications across four dimensions: governance, social, technical, and training and development.

Governance: Organisations must document and implement AI policies and governance structures that promote the development of fair AI. Such policies and structures can serve as critical safeguards, ensuring that AI models are developed and deployed in a manner that upholds the fairness, transparency and accountability explicitly outlined in the policies. Organisations have a responsibility to deliberately put measures in place to ensure their AI developers adhere to fair processes. Governments should develop and implement AI regulations grounded in ethical principles and rules that safeguard society and compel organisations and developers to build fair AI.

Social: Collaboration spaces should be established that allow AI developers to work with management, AI stakeholders, and the members of society who may be impacted by the AI they develop. These collaborative environments should facilitate open communication, enabling all stakeholders to cultivate a shared understanding and equitable power dynamics, ultimately leading to the implementation of fair AI. Organisations and AI management should also prioritise diversity within the development team.

Technical: Organisations should establish data collection and review processes that ensure the data used to train an AI model is representative of the society it pertains to (a minimal sketch of such a representativeness check is given below). Technical measures such as adequate AI model testing should be implemented, with representatives from diverse societal groups involved in the testing. Fairness testing results should be shared with stakeholders outside the development team. All AI models should be reviewed, and feedback provided to the developers responsible for each model.
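
As a concrete illustration of the representativeness check described above, the following Python sketch compares group proportions in a training set against reference population proportions. The reference figures, group labels, and the 0.05 tolerance are all hypothetical values chosen for the example.

from collections import Counter

def representation_gaps(train_groups, reference, tolerance=0.05):
    # Flag groups whose share of the training data deviates from the
    # reference population share by more than the (assumed) tolerance.
    counts = Counter(train_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Hypothetical example: the training data under-represents group "C".
train_groups = ["A"] * 50 + ["B"] * 45 + ["C"] * 5
reference = {"A": 0.45, "B": 0.40, "C": 0.15}  # e.g. census proportions
for group, (observed, expected) in representation_gaps(train_groups, reference).items():
    print(f"Group {group}: {observed:.0%} of training data vs {expected:.0%} of population")

A gap report of this kind can feed directly into the data review process and be shared with stakeholders outside the development team.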

Training and development: AI developers should ensure the transparency and explainability of their models to individuals outside the development team (one possible route is sketched below). Emotional intelligence training should be given to developers to empower them to engage with different stakeholders throughout the development process, thereby ensuring that they develop fair AI.
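
To indicate how explainability might be operationalised for non-technical stakeholders, the sketch below uses permutation importance from scikit-learn to produce a ranked, plain-language summary of which inputs drive a model's predictions. The dataset, model choice, and feature names are synthetic placeholders, and permutation importance is only one of many explanation techniques; the study does not prescribe a specific one.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; feature names are hypothetical.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "age", "tenure", "region"]

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# A ranked, human-readable summary that can be shared outside the team.
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]}: importance {result.importances_mean[idx]:.3f}")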

6 Limitations

The study found that many AI developers do not appreciate the societal impact of their models, which leads to a lack of interest in addressing the ethical implications of AI. Despite the difficulty in finding AI developers who were willing to engage in conversations around ethical AI development, ten (10) AI developers were ultimately interviewed. The study was limited to AI developers in one country owing to the ethical implications of cross-border ethics clearance. Future research should investigate the adoption of the process model with AI developers from diverse countries or regions.

7 Conclusion

This study aimed to answer the following question: How can AI developers contribute to developing fair AI processes to ensure the fairness of AI algorithms? Qualitative research was used to determine how AI developers interact with the Habermasian approach to ensure fair processes in developing AI algorithms.

The key findings include a lack of gender and social diversity in development teams, and managers who are often in a rush to produce long-anticipated results, causing important fairness checks to be missed. The honesty and credibility of AI developers are crucial, as they may hide their models' bias from both management and other stakeholders. Testing prior to deployment seldom represents the various societal groups that could be impacted.

The findings reveal the opportunity to move from biased AI to fair AI by developing fair AI processes. AI developers play a crucial role in the development process of AI, and they need to prioritise the ethical considerations of AI during the development process. Developers need to understand the impact that biased AI has on society and ensure that they are committed to developing fair AI. The study recommends practical guidelines for AI developers to adhere to fair processes during the development of AI, and these include governance, societal, technical, and training and development processes.

It is essential for organisations developing AI to take responsibility and deliberately put measures in place to ensure their AI developers adhere to fair processes. This includes creating collaboration spaces that allow AI developers to work with management, AI policymakers, and the members of society who may be impacted by the AI they develop; implementing technical measures such as adequate AI model testing; and adequately involving society and other affected stakeholders in the testing of the models. AI developers should ensure that their models are transparent and explainable to individuals outside the development team, and emotional intelligence training should be given to developers to ensure that they engage with different stakeholders throughout the process of developing fair AI.

The study highlights the need to implement AI policies and governance structures that promote the development of fair AI. Various national governments and regulators, including those of the United States, the United Kingdom, Canada, China, Singapore, France, and New Zealand, have taken steps to formulate strategies and plans aimed at developing AI ethics policies, standards, regulations, and frameworks to safeguard society from biased AI; the European Union has done so through the General Data Protection Regulation (GDPR) and the proposed EU AI Act (AIA). While progress has been made, however, the work in this area is still ongoing. This study therefore suggests more focus on a fairness process model, which promotes adaptability and continuous improvement while remaining agile and responsive to evolving needs and challenges.

The study emphasises the importance of fairness in AI development and provides practical guidelines for AI developers to ensure that their models are developed through a fair and ethical process. The study’s findings have significant implications for organisations developing AI in that they must take a more active role in ensuring their developers adhere to fair processes. As AI continues to play an increasingly important role in society, it is crucial to ensure that it is developed in a way that is fair and equitable for all.

The proposed process model in this study contributes to the advancement of fair AI development. The study provides practical guidelines for AI developers to work with all relevant stakeholders in the AI development process, to allow bias to be detected and mitigated before deployment, and to promote the overall development and deployment of fair AI.

Future research is needed to investigate and understand the role of society in achieving AI algorithmic fairness. It should also explore how AI developers can ensure AI algorithmic fairness across various regions and countries.