The next overarching task we have identified for AI’s integration into society concerns the engagement of stakeholders. This raises the following question: ‘who should be involved?’ When any new technology is introduced, after all, various parties are involved right from the start. The previous chapter on contextualization made this apparent; it showed that both companies and government started working with AI at an early stage. In discussing their involvement then, our focus was the question ‘how do we make AI work’? Companies and government organizations have the resources and impetus needed to become key drivers of AI’s use in society. As a result, they also have a lot of influence over how it is implemented in practice. In this chapter, by contrast, we home in on parties that do not initially use AI themselves but – given its ubiquitous use – are likely to encounter this technology in their activities. Our particular focus is parties in civil society.

In Chap. 4 we saw that the introduction of any new system technology is accompanied by social tensions and growing inequality. This is because certain groups are better able to use such new technologies than others. During the Industrial Revolution workers found themselves in precarious situations. The electrification of society was patchy, and for many years rural regions lagged behind other areas. Cars became associated with wealthy sections of the population, marginalizing less affluent road users. These developments also caused more indirect suffering. Companies used electric lighting to supervise workers more effectively. Cars polluted the air and made the roads hazardous for cyclists. The process of integration or embedding was thus almost always accompanied by malpractices and by irresponsible use of the new technology. We also saw that, over time, civil society groups began to actively oppose these wrongs and to correct imbalances in the use of the new technology. When a new system technology is introduced, it is therefore important to involve various groups in the process; their involvement helps shape both the technology and its application.

All societies are made up of numerous different parties, which is why civil society needs to be engaged in the embedding of AI. Only the most authoritarian regimes designate a single player – a leader or political party – to chart the course to be taken by society. Democratic constitutional states have a range of institutions designed to counterbalance the power of the state. At the same time, these institutions are themselves protected by that very state through such mechanisms as constitutional rights. A strong and well-developed civil society is thus an important precondition for the proper operation of the state and of the market.Footnote 1 Civil society can involve itself in the embedding of a new system technology in a variety of ways. Many of these options were not yet available during the Industrial Revolution. This was because workers were not united and universal suffrage had not yet been introduced. Today’s stakeholders have many ways of expressing their views and making their presence felt. These include filing lawsuits, establishing new interest groups and participating in decision-making by both public and private bodies. Organizations may not always have a choice when it comes to involving stakeholders. In some countries works councils are legally entitled to participate in companies’ decision-making processes. This gives them a say in the use of employee monitoring systems, such as cameras on the shop floor or smart tracking systems in vehicles.Footnote 2 These are not simply private initiatives; companies are formally obliged to engage with stakeholders.

In democratic societies stakeholders are free to engage in matters that have an impact on their own lives. This is valuable in itself. In addition, a society that has a broad-based engagement with technology can help to improve that technology.Footnote 3 Here people do not merely respond to the impact of AI’s use; they also contribute their own knowledge and experience. Indeed, they can even start using the technology to promote their own interests and values. Various initiatives to involve stakeholders in technology development have been launched since the 1960s. Their goal is to raise awareness about its impact on society and to make the use of technology more socially accountable. It is important to include values and moral considerations in the development of a new technology. This can also help avoid any risk of societal resistance to its use at a later stage.Footnote 4

In some ways AI is no different from previous system technologies. As we shall see in this chapter, it is associated with all kinds of malpractices and worsening imbalances of power. A case from the UK exemplifies that dynamic in which AI perpetuates existing inequalities. This concerns an algorithm that was supposed to help predict final school exam grades. In fact, it put pupils from certain schools at a considerable disadvantage compared with others (see Box 7.1). By raising issues of this kind and making people aware of them, civil society is making a significant contribution towards the further societal integration of AI. In other words, engagement is a key overarching task when it comes to embedding AI in society. Various civil society parties are particularly important to this overarching task. These include interest groups, the media, scientists and other experts.

Box 7.1: Unequal Opportunities for Success

In 2020 lockdowns prevented school students in the UK, like those in many other countries, from taking their final exams. Instead, their final grades were determined by an algorithm. The input was the expected grade per pupil and their individual rankings relative to other pupils. The authorities also included the school’s performance in recent years in the calculation. They expected these estimated final grades for individual students to be more accurate than the teacher’s estimate alone. This is because teachers often tend to overestimate their own pupils’ performance.

In more than 35% of cases the algorithm did indeed predict a lower final grade than the teachers had. However, it downgraded pupils at state schools to a far greater extent than those who attended private institutions. The algorithm placed great emphasis on results from previous years. As a result, both state schools and individual pupils were unduly penalized due to these schools’ relatively poor past performance. This focus on the past thus placed state schools that were on an upward trend, or individual pupils whose performance was improving, at a disadvantage. Private schools, on the other hand, have traditionally achieved better results, and the 2020 cohort benefited from this. This example shows how algorithms intended to produce fairer predictions can confirm and prolong existing differences.Footnote 5
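To make the mechanism concrete, the sketch below shows – in deliberately simplified, hypothetical form, not the actual Ofqual model – how anchoring predicted grades to a school’s historical grade distribution holds back an improving cohort. The pupil names, grade shares and weighting are invented for illustration.

```python
# A deliberately simplified, hypothetical sketch - not the actual Ofqual model.
# It shows how mapping a teacher's ranking of pupils onto the school's
# historical grade distribution holds back an improving cohort.

def predict_grades(teacher_ranking, historical_distribution):
    """Assign grades by mapping ranked pupils onto the grade distribution
    the school achieved in previous years."""
    n = len(teacher_ranking)
    # Build a pool of n grades sized according to historical shares.
    pool = []
    for grade, share in historical_distribution:
        pool.extend([grade] * round(share * n))
    lowest = historical_distribution[-1][0]
    pool = (pool + [lowest] * n)[:n]  # pad or trim to cohort size
    # Best-ranked pupils receive the best historical grades, regardless of
    # how well the teacher expects them to do this year.
    return dict(zip(teacher_ranking, pool))

teacher_ranking = ["Asha", "Ben", "Chris", "Dana", "Eli"]  # best to worst
state_school_history = [("A", 0.1), ("B", 0.3), ("C", 0.4), ("D", 0.2)]
private_school_history = [("A", 0.5), ("B", 0.3), ("C", 0.2), ("D", 0.0)]

print(predict_grades(teacher_ranking, state_school_history))
# {'Asha': 'B', 'Ben': 'B', 'Chris': 'C', 'Dana': 'C', 'Eli': 'D'}
print(predict_grades(teacher_ranking, private_school_history))
# {'Asha': 'A', 'Ben': 'A', 'Chris': 'B', 'Dana': 'B', 'Eli': 'C'}
```

Identical pupils with identical teacher rankings receive systematically lower grades at the school with the weaker historical record – the pattern described above.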

The expertise of interest groups is a key issue when it comes to the impact of a new technology on disadvantage and equality. Disadvantaged groups in society are served by numerous national and international organizations. Many of these, however, are ill-equipped to pursue that work when confronted with a new technology like AI. This is because it opens up new dimensions to the unfair disadvantage suffered by certain groups. Accordingly, this changes the nature of those organizations’ fields of work and the problems they have to address. We could say that AI versions of different forms of inequality are emerging, which require a change of perspective and additional expertise.

This applies to discrimination against people of colour, for example. Ruha Benjamin coined the term ‘New Jim Code’ to describe this phenomenon. That refers to the so-called Jim Crow laws that codified racial segregation in the US South. Today’s code also disadvantages racial minorities, but in a different way. Benjamin defines the New Jim Code as “the employment of new technologies that reflect and reproduce existing inequities but that are promoted and perceived as more objective and progressive than the discriminatory systems of a previous era”.Footnote 6 In other words AI here provides a channel for discrimination. Moreover, this is far more insidious in nature.Footnote 7 Unlike discrimination by police officers, which has attracted so much attention recently, this form is presented as objective and is less visible. That also makes it harder to identify and oppose. There are no racist bosses, bankers or shop owners to report here.Footnote 8 Indeed, many people present the principle of discrimination in a positive light. For example, services like Netflix tailor different trailers to different target groups. So, someone who feels that actors of colour are underrepresented in the movie industry will be shown a trailer that mainly features members of this group. But this can give the impression that a series is more diverse than is actually the case. The diversity or otherwise of Oscar winners is there for everyone to see. But there is no such clarity in the world of Netflix because everyone is presented with a different representation of actors and producers. Safiya Umoja Noble refers here to “algorithms of oppression”.Footnote 9

Also consider the exclusion of people with low incomes. Virginia Eubanks explains how, in times gone by, poor people in the US were oppressed and stigmatized in the poorhouse. Those sent to these institutions were said to be lazy. They often had to work without pay for their upkeep. According to Eubanks, today’s equivalent is the ‘digital poorhouse’.Footnote 10 People’s data points stigmatize them, which can make it more difficult for them to obtain insurance, mortgages and benefits.Footnote 11 Eubanks documents the insidious impacts of this digital poorhouse in all kinds of places. Here too, discrimination is openly presented in a positive light. A case in point is the growing range of insurance products that offer discounts in exchange for personal data. The companies involved use this to better predict their policyholders’ behaviour and to customize their offers accordingly. But those applying this kind of price profiling are rarely transparent about the criteria, margins of error, parameters and analytical insights involved.Footnote 12 The creeping acceptance of this practice is insidiously fostering inequality.

Another issue is the violation of human rights. This has traditionally involved the incarceration of activists and dissidents. Nowadays though, technology such as AI can be used to facilitate digital exclusion and incarceration. For instance, China’s highly developed social credit system excludes people with a low rating from trains and planes. Human rights organizations also refer to the ‘open-air prison’ in the Chinese province of Xinjiang. We discuss this in greater detail in Chap. 9.

Yet another example of an AI variant of inequality pertains to gender and sexual orientation. Caroline Criado Perez shows how many domains revolve around male views and interests, from work to care and politics. This means that when these domains are digitalized, data relating to women are underrepresented. The authorities in these domains tend to view women simply as ‘smaller men’ rather than giving due consideration to all the ways in which the sexes can differ. As a result, many AI applications do not work well for women.Footnote 13 New ways of excluding those with a different sexual orientation are also appearing. In the world of credit ratings, for example, Frank Pasquale shows that the presence of gay men is seen as a positive indicator for house prices.Footnote 14 In the past people were hostile to homosexuality, seeing it as a bad thing. That does not apply in this case. Yet this is still a form of differential treatment, which can insidiously foster inequality.

All of the above areas share a common pattern. Throughout history various groups labelled as ‘deviant’ were pressurized to conform to societal norms. An AI world would not oppose difference in that way. Instead, this technology serves indirectly as a source of unequal treatment. People often present that difference as something positive, partly under the guise of providing opportunities for a more individual-centred approach.

In this chapter we discuss engagement as an overarching task that can take a variety of forms: fight, walkout, protest, supervision, agenda-setting, improving and appropriating. We have arranged those forms along a continuum that reflects their relationship to AI (Fig. 7.1). At one extreme are those with an antagonistic attitude: individuals or groups opposed to AI or in favour of a ban on the technology, for example. We refer to this cluster as ‘resistance’. Those at the other extreme have a symbiotic attitude. Such individuals or groups engage positively with AI by incorporating the technology into their everyday lives. We summarize this attitude as ‘co-operation’. Intermediate types adopt a critical but not necessarily negative approach, which we refer to as ‘monitoring’.

Fig. 7.1 A spectrum of different forms of engagement: resistance (fight, walkout, protest), monitoring (supervision, agenda-setting) and co-operation (improving, appropriating)

These different forms of engagement are archetypal and in practice often overlap. As we shall see, parties can utilize various forms simultaneously. The spectrum we present is simply a means to gain an overview of the extensive field of activities undertaken by diverse players in civil society with a view to tightening, bending, breaking or shifting existing practices and, on occasion, applicable laws and regulations as well. We discuss the various forms of engagement below, including their status regarding AI, so as to identify those currently prevalent and those requiring more work.

1 Resistance

New technologies often trigger resistance. This certainly applies in situations where there is rapid technological change and where people become convinced that the technology will only benefit a very limited section of society, while its risks are widespread.Footnote 15 We place resistance at the left end of the spectrum of engagement and subdivide it into three forms. At the far left is the most antagonistic relationship with AI: fight. Stakeholders reject the new technology out of hand and resort to violence to oppose it. Next comes ‘walkout’, also referred to by Albert Hirschman as ‘exit’, characterized by stakeholders leaving the negotiating table. Hirschman contrasts ‘exit’ with ‘voice’, in which people articulate their dissatisfaction without terminating the relationship. The third form of resistance we have identified, protest, is an example of this. Here again, people oppose the technology. However, they do so in a peaceful manner while putting forward a clearly articulated counterproposal. That could involve a ban on a technology or further regulation of its use.

1.1 Fight: Violent Resistance

Historically, violent resistance has often been an iconic form of negative engagement with a new technology. In Chap. 4 we mentioned the infamous Luddites who, worried that they might lose their jobs and incomes, proceeded to smash the newly introduced machines. In terms of resistance, they had very few other options. The owners of the machines were not prepared to listen to them, nor did the workers have any political representation.

In modern democracies groups at risk from technology have a range of non-violent options to make their voices heard. Also in Chap. 4, we argued that democratization distinguishes the embedding of later system technologies from that of their predecessors. Yet even in democratic societies some forms of resistance deliberately transgress legal boundaries and do not shy away from violence. The anti-nuclear campaign is one example: its members have invaded power plants and destroyed equipment.Footnote 16

Is anyone fighting AI at the moment? Not yet, it seems. Perhaps this is because both the technology itself and its impact are not immediately visible. That makes it more difficult for people to locate and destroy. The intangible nature of AI means that resistance to this technology could become interwoven with opposition to more physical things, such as computers, robots or the companies that develop AI.

In 2014 an anarchist movement called The Counterforce took up arms against the influence of Silicon Valley in general and that of companies like Google in particular. The group accused these firms of driving up house prices in San Francisco, and as a result undermining ordinary people’s quality of life. It also campaigned against the impact of digital technology on attention spans and against the construction of an infrastructure that could be used to facilitate totalitarianism. The Counterforce’s resistance was not limited to demonstrations; it also encouraged people to obstruct staff buses in Silicon Valley, to steal software engineers’ belongings and to tear down surveillance cameras. One of the people targeted was Anthony Levandowski, at the time responsible for the technology behind Google’s autonomous vehicle.Footnote 17 In Hong Kong protesters used power saws to damage lampposts equipped with facial recognition equipment.Footnote 18

Closer to home, AI has been targeted by groups with an agenda of violent resistance – most recently as part of the social media narrative surrounding 5G and COVID-19 vaccines. American video-clips on YouTube link 5G, Huawei and AI with alleged Chinese plans to surreptitiously gather data on a global scale. There are also people who thought that ‘COVID’ stood for ‘certificate of vaccination identification by artificial intelligence’. They saw a connection between A (the first letter of the alphabet) and I (the ninth) and the number 19 in COVID-19.Footnote 19 This theory, the brainchild of osteopath Carrie Madej, circulated on the internet in the spring of 2020. She claimed that the coronavirus vaccine was designed to rewrite our DNA to assimilate everyone into an interface (API or application programming interface) between man and machine that would enable our behaviour to be completely controlled by external agents. The prime suspect was Bill Gates. In conspiracy theories of this kind, AI is part of a narrative that has led people to vandalize telecommunications masts. This questionable form of engagement does not need to be reinforced. In fact, we should guard against potential escalation. The overarching task of demystification, which has already been discussed, can also help people to engage with AI more peacefully and democratically.

1.2 Walkout: Refuse to Co-operate

People can resist in a non-violent way by refusing to co-operate with something. This is the second form of engagement we have identified. Known as a ‘walkout’, this form of resistance is practised in particular by people working in the technology sector. They have a special weapon in their armoury, after all: the ability to ‘down tools’. Those with specialist expertise can exert pressure by refusing to co-operate. Without their input some projects will be unable to take off. People with more widely available expertise can exert collective pressure, especially if they are able to publicize their campaign successfully. This form of engagement has grown in recent years. Several so-called ‘walkouts’ have taken place in Silicon Valley. In the Netherlands a recent legal battle between the University of Amsterdam and its students falls into the same category. The students refused to allow themselves to be observed by AI-based online surveillance software (proctoring) during exams held in lockdown. In that sense, they opted for a ‘walkout’.Footnote 20

Many walkouts are in fact work stoppages that, as you might expect, are associated with working conditions. Campaigns to improve these occur in all types of companies, of course, but in some cases are linked specifically to the use of technologies such as AI. This is because new system technologies can create new working conditions, which can sometimes be very detrimental to employees. When electric lighting was first introduced, it enabled employers to monitor their workers more effectively – just as algorithms are doing now, with tools ranging from trackers of office workers’ internet surfing behaviour (even using biometric information such as eye movements) to the micromanagement of staff in warehouses and delivery services (see Box 7.2).Footnote 21

Box 7.2: Worker Surveillance

Employers have been gathering data about their workers and using it to manage them for at least a hundred years. But now they are deploying today’s improved technology to conduct surveillance and monitoring that are more in-depth, more variable, more fine-grained, larger in scale and more rapid than ever before.Footnote 22 AI is often used to analyse employee data in the context of personnel policy.Footnote 23 Based on the information gathered and the purposes to which it is put, four sub-trends can be identified.

  1. Systems that use various types of data to predict worker behaviour, including potentially unacceptable conduct.

  2. Systems that make inferences about working conditions based on biometrics and health data, giving employees insights into their own health but also serving as tracking systems.

  3. Systems that remotely monitor worker behaviour to measure their performance and determine their pay.

  4. Systems designed to facilitate the ‘algorithmic control’ or ‘gamification’ of work through the continuous collection of performance data.Footnote 24

One example from the Netherlands concerns an app used by PostNL, a postal company. This calculates the routes delivery workers should follow and how long their rounds should take. Any employee exceeding the allotted time can expect that to have disciplinary consequences, but the app takes little account of factors like the weather or leftover mail from the previous day.Footnote 25 Amazon uses a similar app, Mentor, to track and assess its delivery staff. It plans to expand this with AI-compatible cameras.Footnote 26
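The sketch below illustrates, in deliberately simplified and hypothetical terms – it is not PostNL’s or Amazon’s actual system – how a time allotment based only on stop counts and distance will flag a slow round even when the delay is caused by factors the model ignores, such as bad weather or leftover mail. All constants, thresholds and function names are invented.

```python
# A deliberately simplified, hypothetical sketch - not PostNL's or Amazon's
# actual system. It shows how a time allotment based only on stop counts and
# distance flags a slow round even when the delay is caused by factors the
# model ignores, such as bad weather or leftover mail from the previous day.

SECONDS_PER_STOP = 75   # assumed handling time per address
SECONDS_PER_KM = 110    # assumed travel pace

def allotted_seconds(num_stops: int, route_km: float) -> int:
    """Target duration for a delivery round, based only on stops and distance."""
    return int(num_stops * SECONDS_PER_STOP + route_km * SECONDS_PER_KM)

def flag_for_review(actual_seconds: int, num_stops: int, route_km: float) -> bool:
    """Flag a worker whose round took more than 10% longer than the target."""
    return actual_seconds > 1.1 * allotted_seconds(num_stops, route_km)

# A round slowed by a storm and by yesterday's leftover mail is flagged,
# even though the delay says nothing about the worker's performance.
print(flag_for_review(actual_seconds=14_000, num_stops=120, route_km=18))  # True
```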

To develop a more fact-based personnel policy, many other companies and government bodies collect data about their employees’ state of mind, their health and even their job motivation. These organizations frequently use language analysis to scan e-mails sent between employees for characteristics such as ‘enthusiasm’. In many cases, however, their staff are entirely unaware of this. But analyses of this kind are problematic in that their validity has not been proven.
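As a simplified, hypothetical illustration of why the validity of such analyses is questionable, the sketch below scores ‘enthusiasm’ by counting keywords. Real products dress this up with more elaborate language models, but the underlying problem is the same; the keyword list and scoring rule here are invented.

```python
# A deliberately simplified, hypothetical sketch of keyword-based 'enthusiasm'
# scoring. Word counts are a crude proxy for a person's actual state of mind
# and cannot distinguish genuine enthusiasm from irony.

ENTHUSIASM_WORDS = {"great", "excited", "love", "fantastic", "keen"}

def enthusiasm_score(email_text: str) -> float:
    """Return the share of words in the message found on the keyword list."""
    words = [w.strip(".,!?").lower() for w in email_text.split()]
    if not words:
        return 0.0
    return sum(w in ENTHUSIASM_WORDS for w in words) / len(words)

# An ironic complaint scores as highly 'enthusiastic'.
print(enthusiasm_score("Great, another fantastic reorganisation. I love overtime!"))
```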

Platform work is a new occupational category that has actually been created by technology.Footnote 27 Taxi drivers or riders who deliver meals or parcels are not official employees with the right to a minimum wage or secondary benefits. As a result, many have precarious livelihoods. This development is being countered by the rise of trade unions and by lawsuits to compel employers to recognize these people as employees.Footnote 28

Employees in the US have launched several major legal actions that specifically target AI. In 2018 a group of engineers at Google stated that they did not want to participate in Project Maven. This was a programme for the US military. The aim was to create drones with advanced image recognition capabilities that would be able to automatically recognize people and objects. So many of its employees objected that Google was compelled to terminate this collaboration with the US Department of Defense. Later that year Google engineers signed a petition against Dragonfly, a censored search engine. The company had developed this for use in China, in the hope of gaining a foothold in that country. The workers refused to be party to oppression and this project, too, was subsequently discontinued. In 2019 Microsoft employees sent an open letter that publicly opposed bidding for tenders for the Jedi project – a cloud computing venture for the US military that involved augmented reality equipment – because they did not want to profit from war.

Companies that supply AI technology to Immigration and Customs Enforcement (ICE, the US border security agency) have also encountered resistance from their staff. In 2018 employees at Palantir, Salesforce, Microsoft, Accenture, Google, GitHub and Tableau signed petitions and open letters against working for that organization. In a letter published in The New York Times, Microsoft employees called on their CEO, Satya Nadella, to “take an ethical stance and to place children and families above profit”.Footnote 29 Nadella responded by speaking out against President Trump’s immigration policy. The software automation company Chef also worked for ICE. When he discovered that this company was using his code, programmer Seth Vargo removed it from online libraries, forcing the company to suspend its service for several days. Chef eventually terminated its co-operation with ICE.

When Google fired Timnit Gebru it became apparent that even single individuals can refuse to co-operate. Gebru had been a member of the firm’s Ethical Artificial Intelligence team. Her scientific work focused on bias and data mining. She wanted to publish a paper on bias in language models, but her employer objected. The company asked her either not to publish the paper or to remove the names of any Google employees. When she refused, she was fired. That sparked outrage. Thousands of company employees, scientists and civil society parties signed a letter denouncing her dismissal. Members of the US Congress asked Google to explain its actions.

Technology businesses depend on talented personnel, so these individuals have sufficient leverage to influence company policy. That applies even to potential future employees. Stanford University is a prestigious institute in the field of AI. In 2018 its students voted to have no dealings with Google until the company shut down Project Maven. They also held on-campus protests against recruitment by companies that support border controls and police activities. More than 1200 students from seventeen campuses signed a pledge never to work for Palantir because of that company’s ties to ICE. In addition, students at Central Michigan University opposed the creation of a university Army AI Task Force.

One of the key ways in which parties can make their voices heard is engagement in the form of a walkout, at least in the initial phase of a technology’s societal embedding. The number of AI applications is growing rapidly, and employees, students and platform workers are on the front line, as it were. If these individuals take action based on their knowledge of developments, they can play a key part in spotlighting any questionable uses of AI. Walkouts are deemed successful if they receive legal endorsement, for example. In cases such as these, the workers’ engagement has a corrective effect.

A more institutionalized form of protest is when unions call for a strike. The right to strike is enshrined in the European Social Charter (Art. 6, paragraph 4). It also derives from the freedoms of association and assembly. A few years ago, a number of Deliveroo delivery riders in the Netherlands went on strike for better working conditions. They were supported by the Riders’ Union, part of the FNV (the largest Dutch trade union federation). The pressure exerted by this and other campaigns helped place the working conditions of platform workers on the national political agenda. Those involved are now making every effort to create better legal protection for these workers.Footnote 30

1.3 Protest: Campaigning for a Ban

Protest is the third form of resistance against a given technology or a particular use to which it can be put. This approach is particularly common in democratic societies. In such cases people mobilize peacefully to call upon the authorities to, say, impose some kind of ban. Unlike the walkout, protest is not organized from within, nor is it aimed at a particular company and its policy. Instead, it focuses on government and often has a broader base in civil society. A case in point is the anti-nuclear energy movement, which used peaceful protests to call on government to stop building atomic power plants. Similarly, people have staged all kinds of protests against military technologies such as chemical weapons and cluster bombs. In many instances broad-based civil society movements like this have ultimately led to international treaties banning certain weapons.

With regard to AI, protest is one of the most conspicuous forms of engagement. This is especially true of three specific applications: its use by the police for surveillance and prediction, facial recognition and autonomous weapons. One particularly prominent movement against the use of AI in law enforcement was launched in Los Angeles a few years ago. A community group filed a lawsuit to ban the city’s police department from using its LASER predictive policing system. This group called itself the Stop LAPD Spying Coalition. It argued that the police were using unjust means – involving proxy data – to predict crime. In doing so they discriminated against people from the Latino and African American communities. University of California Los Angeles (UCLA) students also joined this movement. In its support they highlighted the results of a UCLA study into PredPol (a predictive policing program developed by the university) showing that tools of this kind trigger excessive levels of police activity in communities of colour.

Residents of St Louis, Missouri, also demonstrated against police technology, and in particular a collaborative venture between their city’s police and a company called Predictive Surveillance Systems. This uses surveillance aircraft or drones to gather images of members of the public. The residents took to the streets claiming that such “suspicionless tracking” would constitute a massive invasion of their privacy. In the Netherlands Amnesty called for the police’s Sensing project at a shopping centre in the town of Roermond to be halted. This used smart cameras to combat ‘mobile banditry’ – defined by the EU as “...an association of criminals who systematically enrich themselves by perpetrating crimes against property or fraud, (in particular shop and cargo theft, break-ins of homes and companies, fraud, skimming and pickpocketing), within a widespread area in which they carry out activities and are internationally active” – but according to Amnesty this involved the use of mass surveillance and discrimination against certain groups based on their nationality.Footnote 31

A second AI application, facial recognition, has also triggered a great deal of protest. Cameras are increasingly being equipped with this form of computer vision, enabling specific individuals to be monitored with great precision. Concerned citizens see this as a tool for totalitarian surveillance. That has led some people to push for a ban on all use of facial recognition. Others emphasize that government bodies in particular should avoid its adoption. Others still want to impose very strict requirements on its use, such as prohibiting the storage of data or restricting its deployment to, say, searches for missing children. Besides more general concerns about surveillance, some people worry that this technology will be ineffective in the case of individuals from minority groups. They believe that its main effect could be to aggravate the oppression of those communities.

Civil society organizations throughout the world have demonstrated against facial recognition (see also Box 7.3). San Francisco has Stop Secret Spy Tech and Face Surveillance. Successful protests along the same lines have been held in several other American cities, too. Police in San Francisco and Boston are now banned from using this technology. In Portland, Oregon, any use whatsoever is prohibited. Other American movements protesting against facial recognition include Why ID, the Electronic Frontier Alliance and Public Voice. Their European counterparts include Privacy International in the UK and Techno Police in France.

Box 7.3: A Ban on Facial Recognition?

During the preparatory phase of the draft European AI Act, numerous parties called for it to include a ban on facial recognition technology. Amongst them were dozens of civil society organizations.Footnote 32 More than 60 MEPs and 50,000 EU citizens also backed the campaign.Footnote 33 They had two main demands.

  1. A ban on the indiscriminate or arbitrary use of biometric identification in public or in publicly accessible areas, which could lead to mass surveillance.

  2. Legal restrictions or hard limits on uses that endanger fundamental rights, such as AI applications for border control, predictive policing, access to social security systems and risk assessments in the context of criminal law.

The call appears to have had some effect. The final version of the draft act prohibits such uses as ‘social scoring’ and the deployment of biometric identification systems in public spaces. This is because, in the European Commission’s view, they pose an ‘unacceptable risk’ to European values.

A third AI-related topic to attract protest is autonomous weapons. Movements throughout the world are calling for these to be prohibited. The Campaign to Stop Killer Robots was founded in 2012. This coalition of non-governmental organizations is committed to a ban on fully autonomous weapons and to upholding ‘meaningful human control’ over the use of force. In 2015 more than a thousand AI experts, including Stephen Hawking, Elon Musk, Steve Wozniak and Noam Chomsky, signed an open letter warning of an AI arms race and calling for autonomous weapons to be outlawed. In 2017 a similar letter calling for a ban on lethal autonomous weapons was submitted to the United Nations. This was signed by 166 robotics pioneers and by the directors of several technology companies. The Dutch government received a similar exhortation at the end of 2020; more than 150 scientists active in the fields of robotics and AI asked it to support a ban on lethal autonomous weapons. Protest – in the sense of being able to speak out against something – is thus an important form of engagement in a democratic society. With regard to AI, this approach is already highly developed and will continue to play a prominent role.

Key Points – Resistance: Fight, Walkout, Protest

  • New technologies often provoke resistance, especially in cases of rapid technological change and where people become convinced that a technology will benefit only a very limited section of society, while its risks are widespread. Resistance expresses an antagonistic attitude. There are three different forms: fight, walkout, and protest.

  • In the past groups opposed to new technology regularly resorted to violence in their fight against it. AI has not yet been associated with this form of resistance, a questionable aspect of engagement that does not need reinforcement. Democratic engagement with AI is preferable.

  • People who engage in walkouts are refusing to co-operate with AI in various ways. One option is work stoppages, literal ‘walkouts’, where pressure from within compels companies to change course. This form of engagement has grown in recent years. It is typical of the initial phase of AI.

  • In democratic societies protest is a highly developed and significant form of resistance. Here people mobilize peacefully to call upon the authorities to, say, impose a ban on something. This is currently one of the most prominent forms of engagement regarding AI, targeting three of its applications in particular: its use by the police for surveillance and prediction, facial recognition and autonomous weapons.

2 Monitoring

The next two forms of engagement we have identified fall under the collective heading ‘monitoring’. This cluster occupies the central part of the spectrum between the antagonistic forms discussed above and the symbiotic ones we examine in the next section. The two forms that count as monitoring are supervision and agenda-setting. Both subject actions by other parties – public and private alike – to critical scrutiny. Where necessary these actions are corrected and adjusted in line with alternative proposals. This approach is in line with the historical trend signalled by John Keane in his book The Life and Death of Democracy. Basically, he argues that many hundreds of new types of institutions came into being after 1945 to track the actions of influential parties and subject them to intense scrutiny.Footnote 34 He characterizes this development as ‘monitory democracy’ and refers to options such as the use of surveys, online petitions and focus groups, but also to self-appointed watchdogs and NGOs committed to weaker or underrepresented groups in society.Footnote 35 Building on this basis, we take ‘supervision’ to mean co-ordinated efforts by stakeholders to address malpractices in the use of a new technology. We refer to the fifth and slightly more neutral form of engagement as ‘agenda-setting’. This involves civil society parties who identify both positive and negative aspects of the technology, but are dedicated primarily to turning a public spotlight on the theme.

2.1 Supervision: Reporting Malpractices

Supervision has a different goal from the forms of engagement discussed above. It is not so much about preventing specific uses of AI by banning them as about correcting the applications themselves or the conditions under which they operate. Specific parties could be informed, say, or public campaigns conducted. Alternatively, people could bring lawsuits or submit notifications to regulators to address malpractices. In practice a critical benchmark here is the matter of rights – human rights first and foremost. Civil society parties assess the nature of AI applications to determine whether these are legally permitted. This is a key feature of supervision.

There is some uncertainty about the impact of AI at this early stage of its integration into society. Lawsuits play an important part in dealing with these ‘grey areas’; they enable any malpractices to be identified and case law to be developed. In this way, directly disadvantaged groups can be protected by restoring their rights. The development of the law also benefits. Accordingly, jurisprudence can also spotlight issues and provide guidance. It helps people understand what is happening in the field and gives them the clarity needed to respond appropriately. It also helps create a framework for further applications, and possibly, future legislation as well. History shows that lawsuits challenging abuses by railway companies and telegraph services have specifically served this purpose.Footnote 36

In addition, there are situations in which AI applications must be subjected to mandatory ‘assessments’. This is because stakeholders’ views concerning the implementation of new technological capabilities need to be heard. In the Netherlands this applies to the statutory remit of works councils: their right to consultation (concerning investments and so on), right of consent and right to be informed.Footnote 37

Supervisory activities can be undertaken by interest groups, experts or the media. This growing form of engagement is also important, even if it does not usually attract quite so much public attention as protests. Its expansion is due mainly to the relatively recent emergence of organizations linked to the use of AI in society and, more broadly, to the rise of digitalization. It is also due to the fact that the Netherlands has quite recently broadened the legal scope for collective action in judicial proceedings.Footnote 38 That change has facilitated this form of engagement. Nevertheless, people often express concern that the use of AI is at odds with the current legal protections available to victims. This is because those safeguards are organized at the individual level whereas AI applications categorize people into group profiles.Footnote 39

In this section we first discuss a number of national and international organizations that are helping to monitor the use of AI, mainly through knowledge sharing. Then we review a number of prominent lawsuits.

Various international organizations have taken on a supervisory role by disseminating knowledge about the use of AI. In 2017 Kate Crawford and Meredith Whittaker founded the AI Now Institute in New York. It issues reports and analyses that focus on AI’s impact in four areas: rights and liberties, labour and automation, bias and inclusion, and safety and critical infrastructure. For instance, it has raised the issue of malpractices associated with poor working conditions in the technology sector (including at Amazon’s warehouses). It has also spotlighted AI systems’ ecological footprint, something that generally receives little attention. In its annual report the institute describes the current state of affairs regarding the use of AI. It then goes on to make recommendations concerning the further development of AI in society.

Another such organization was the Google Transparency Project, launched in 2016. Its goal was to conduct research and analysis that would shed light on the ways in which Google influences government and policy. Under the new name Tech Transparency Project, the organization is now focusing more broadly on the technology sector. It acts as a non-profit watchdog pursuing corporate accountability through investigations, litigation and the disclosure of misconduct.

In Europe, Germany’s AlgorithmWatch is one of the key players. This non-profit organization focuses on algorithmic decision-making processes that have a social impact. These include algorithms that predict or direct people’s behaviour or are designed to make automatic decisions. Its approach is to analyse the ways in which algorithmic decision-making influences human behaviour. In this context it explains to the general public how decision-making works, brings experts together and develops ideas and strategies for the beneficial use of AI in society. AlgorithmWatch’s annual Automating Society Report tracks the use of automated decision-making in Europe. It also publishes sub-studies on topics such as the use of algorithmic decision-making in response to the COVID-19 crisis. The organization identifies ethical dilemmas and puts forward proposals for more responsible use of algorithms.

Such bodies contribute to the supervision of new technologies by providing knowledge. In the Netherlands the digital rights organization Bits of Freedom showed that it was possible for people in one country to place advertisements on Facebook in another country – during elections in the latter, for example. The organization revealed just how easily Dutch users could upload German memes (and hence ideas) conveying party political messages.Footnote 40 This contradicted testimony to the Dutch House of Representatives from senior Facebook staff.

Another way to deal with malpractices is to take such matters to court. In a further example from the Netherlands, a coalition of civil society organizations and individuals sought an injunction against the Dutch state to ban the use of System Risk Indication (SyRI) (Box 7.4). Various local authorities, working in co-operation with the Ministry of Social Affairs and Employment (SZW), had been using this tool for purposes such as detecting cases of benefit fraud.

The coalition stated that SyRI involved unlawful automated decision-making. The plaintiffs also argued that SyRI was used mainly in areas already labelled ‘problem neighbourhoods’. As a result, the system had a discriminatory and stigmatizing effect. In reaching its verdict, the court first considered the nature of SyRI itself. It took the view that this tool was in line with forms of AI such as deep learning and self-learning systems. Given that SyRI uses risk profiles, the court felt that this could lead (unintentionally) to biased connections being made. They could be based on lower socio-economic status or migrant background, for example. This would mean that SyRI has a disproportionately large impact on poor people. According to the court, the infringement of privacy that results from the state’s use of this system is out of all proportion to the importance of detecting benefit fraud. Following this verdict, SyRI was discontinued.

Box 7.4: System Risk Indication

System Risk Indication (SyRI) is a technical application that calculates the probability of a particular individual fraudulently claiming social benefits. To do this it links seventeen different types of data. The government states that SyRI compares files of existing factual data from sources such as the Employee Insurance Agency (UWV), the Social Insurance Bank (SVB), local authorities, the Dutch tax authorities and the Netherlands Labour Authority. It then checks for discrepancies between the information garnered from these sources. If this comparison and an assessment against the risk model reveal any irregularities, these must first be investigated by one or more of the above organisations. Only then can a decision be taken that might have legal consequences for the individual concerned.
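The sketch below gives a deliberately simplified, hypothetical impression of this kind of record linkage and discrepancy checking. It is not the actual SyRI risk model, whose indicators were never made public; the data sources, thresholds and rules shown are invented for illustration.

```python
# A deliberately simplified, hypothetical sketch of record linkage and
# discrepancy checking in the spirit of Box 7.4. It is not the actual SyRI
# risk model; the data sources, thresholds and rules are invented.

from dataclasses import dataclass

@dataclass
class LinkedRecord:
    citizen_id: str
    receives_benefit: bool         # from the benefits agency
    registered_income: float       # from the tax authority
    household_size_claimed: int    # from the local authority
    household_size_estimated: int  # inferred from, say, water or energy use

def risk_indication(rec: LinkedRecord) -> bool:
    """Flag a record when the linked data sources contradict each other."""
    if rec.receives_benefit and rec.registered_income > 20_000:
        return True   # benefit claim inconsistent with reported income
    if abs(rec.household_size_claimed - rec.household_size_estimated) >= 2:
        return True   # claimed household size inconsistent with usage data
    return False

# A flagged record is only a risk indication: under the scheme it still had to
# be investigated by the participating organizations before any decision with
# legal consequences could be taken.
rec = LinkedRecord("NL-001", True, 24_500.0, 1, 3)
print(risk_indication(rec))  # True
```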

This is not just about government, though. Lawsuits concerning the use of AI have been instigated against other parties as well. Private companies have been sued for all kinds of malpractice. In the UK Uber was sued because its facial recognition system failed to effectively identify drivers and couriers of colour.Footnote 41 In a case in the Netherlands the court ruled in favour of Uber, finding that the company had acted lawfully in using algorithms to decide which drivers would be dismissed.Footnote 42

Incidentally, it can be quite challenging for civil society parties to bring cases to court. Those concerned must have the resources and knowledge needed to raise the issue of malpractices and take action. Moreover, more traditional interest groups (many of which originated in the analogue era) are usually unaware of how AI is changing their field of work.Footnote 43 Consequently, they do not yet have a sufficient grasp of how AI can marginalize the groups they represent or jeopardize their interests. Take consumers. In economic transactions with companies, they are viewed as the weak party. Accordingly, they are afforded legal protection in Europe. The use of AI systems can have an impact on their autonomy because it is algorithms, not the buyers themselves, that search for the ideal purchase based on an identified need.Footnote 44 Many consumers are ignorant about the underlying workings of AI systems. As a result, companies can persuade them to make purchases that are not in their own interest, perhaps because they are more expensive. This could blur the distinction between personalized offers and manipulation.Footnote 45 So both legislators and consumer organizations need to understand developments of this kind and, if necessary, take a stand against them.

The lawsuit against SyRI in the Netherlands was unusual in that it was driven by a broad coalition. This included some traditional interest groups as well as experts in the field of law and digital technology. Alliances like this are very helpful inasmuch as they fulfil civil society’s supervisory role quite effectively. Organizations less familiar with AI and the problems associated with digitalization can access the expertise of others that do possess the requisite knowledge. This expertise is not restricted to digital rights groups. Human rights organizations are also increasingly acquiring knowledge and expertise in this domain, and developing it further.Footnote 46 Moreover, both human rights and digital rights organizations are part of larger international networks where AI has long been on the agenda.

Such organizations are incentivized to initiate joint lawsuits by a type of procedure known as public interest litigation.Footnote 47 In these cases the organizations involved must be able to demonstrate that rules or policies directly impact the public interest in general or the particular collective interests they represent. This form of litigation is not yet routinely used everywhere because not all legal systems are receptive to it. At the same time lawyers are pushing for innovation in this area, especially with a view to advances in digitalization. They point to Germany, for example, where competitors can hold each other accountable for compliance through the courts.Footnote 48

2.2 Agenda-Setting: Information About the Importance of AI

The next form of engagement with AI we identify is positioned slightly more towards the right-hand end of the spectrum from antagonism to symbiosis. Parties here are committed to generating more attention for AI because they believe such attention is important in itself, whether it focuses on positive or negative aspects. Various civil society organizations are helping to place AI on the agenda. Some specifically focus on drawing attention to new technology, but thought leaders and artists also play their part in this respect. Moreover, they use a wide range of platforms for this purpose. In addition to artistic events and reports from think tanks, we discuss the ways in which civil society parties are involved in the development of policy and legislation for AI. We also spotlight the interests they represent.

In addition to supervising AI, many of the abovementioned international organizations like AI Now and AlgorithmWatch also publish reports and stage events on this topic. The artist Trevor Paglen has created ImageNet Roulette, an app people can use to upload photographs of their faces to see how they are classified by the influential ImageNet database.Footnote 49 The Dutch organization Waag has made an especially outstanding contribution in this area; with its origins in the hacker movement and the early rollout of the internet in the Netherlands, its goal is to achieve the open, fair and inclusive use of digital technology. In particular, it defends public values and interests against the influence of commercial logic. People can also use art projects to raise awareness about AI. We discuss two Dutch examples of this approach in Box 7.5.

Box 7.5: Agenda-Setting Through Art

  • We Are Data

This artists’ collective is helping to create a more profound awareness of the types of personal information that can be recorded in databases. The idea is that, by experiencing this phenomenon at first hand, you gain a better idea of the impact of various technologies – old and new. To this end We Are Data has developed a ‘mirror room’. Visitors enter an enclosed space one at a time. There they are subjected to an impressive and very personal experience. It is also a smart space in which the visitor is surreptitiously observed and measured. They thus find out what it is like to be processed into data and can decide which of their personal information remains their private property. In this way they are literally held up to a mirror.

  • Wouter Moraal

Moraal aims to inform people about how deep-learning algorithms work. He also wants to warn them about the potential repercussions of misusing algorithms. To do this he has developed Artificial Impact, a board game modelled on a deep-learning algorithm.

The players first have to train the algorithm. At the end of the game their performance is rated by their own creation – a self-taught risk-prediction algorithm. The assessments made during the game are based on situations in which AI was used without due care and attention, leading to malpractices. The project therefore has much in common with Monopoly, the board game developed by Lizzie Magie in 1903. She wanted to make people aware of the harmful consequences of people owning huge estates and of capitalist exploitation.

Various aspects of AI need to be placed on the political agenda. Agenda-setting of this kind is particularly important in political decision-making processes. In a representative democracy, elected representatives have the final say in political decisions. However, they are still accountable to voters and to society at large. In practice various more or less optional processes have been set up for this purpose. Civil society parties can independently present their views to the legislature or to ministries working on specific policy and/or legislative proposals. These, after all, increasingly concern AI and its use in various sectors. Here we discuss the engagement of civil society parties with the European Commission’s draft AI Act.

The European Commission published this document on 21 April 2021.Footnote 50 Civil society parties were also involved at various stages throughout its development. First, they participated in the High-Level Expert Group on Artificial Intelligence (AI HLEG), founded in 2018. Through this forum 52 experts advised the European Commission on the implementation of its AI strategy, details of which were published on 7 December 2018.Footnote 51 The expert group consisted of 18 academics, 37 business representatives and four representatives of civil society. The AI HLEG presented its final Assessment List for Trustworthy Artificial Intelligence (ALTAI) on 17 July 2020.Footnote 52 Even before it had completed its meetings, however, civil society organizations were accusing the industry of unduly influencing that list. In particular, they claimed that the sector had blocked a number of proposals to ban some forms of AI.Footnote 53 Another relevant point of criticism was that those parties with practical knowledge and ties with groups in society who had to deal with AI systems were not being properly heard. Michael Veale, a British researcher in the field of digital rights, says that these ‘low-level experts’ are the very people who will have to deal with the ethical considerations when AI applications are implemented.Footnote 54 He also states that there is a much greater need for such practice-based experts than for the professors of applied ethics advocated by the AI HLEG.

Civil society was also involved through the Alliance for Artificial Intelligence, which provides a platform for approximately 4000 stakeholders. Its initial purpose was to provide feedback to the AI HLEG. Over time however, the alliance has become a benchmark for stakeholder-driven discussions about AI policy.

Finally, several civil society parties participated in the public consultation that preceded the publication on 19 February 2020 of the White Paper entitled On Artificial Intelligence – A European approach to excellence and trust. EU Member States contributed 84% of the content, with the remainder coming from other parts of the world. Civil society actors were responsible for 13% of contributions.Footnote 55 Many of them felt that the Commission should have done much more to safeguard human rights, especially regarding the use of facial recognition. A case in point was when dozens of civil society organizations jointly appealed to the European Commission to ban certain forms and uses of AI (see Box 7.3).Footnote 56

The European Commission consulted parties across the board, including civil society actors, to give them an opportunity to present their views on the white paper. As we have contended in the introduction to this chapter, while issues important to specific groups in society need to be put on the agenda, this is not the sole responsibility of civil society parties. It is primarily government’s duty to represent numerous different interests as far as possible. It therefore needs to develop a vision that encompasses the full range of views concerning the integration of AI into society. Government also needs to understand the technology in terms of its potential implications for different groups of stakeholders. In the case of political decisions, one aspect of this task concerns formal stipulations to consult stakeholders or to allow participation in political decision-making.

A broader and more structured process of this kind should have taken place during the preparatory phase of the draft AI Act. For example, the existing internet consultation mechanism could have been used to reach groups that operate below the radar of government bodies. On the one hand AI is an early-stage systems technology. Understandably therefore, it is not immediately clear to government which groups should be involved in plans to manage its impact. On the other hand, the individuals with experience in everyday practice are the very people who, by definition, are in a position to identify the challenges that will arise as AI integrates into society. This is borne out by recent evidence that weaker or vulnerable groups tend to be affected by the adverse impacts of AI systems. We therefore recommend that government bodies actively and formally involve a broad spectrum of civil society groups in the process of formulating AI policy.Footnote 57

Key Points – Monitoring: Supervision, Agenda-Setting

  • Monitoring subjects the actions of public and private parties to critical checks. Where necessary these actions are corrected and adjusted in line with alternative proposals. We have identified two forms of this type of engagement: supervision and agenda-setting. These occupy the middle ground between antagonistic forms of engagement on the one hand and more symbiotic forms on the other.

  • Supervision involves correcting AI applications themselves or the conditions under which they operate. This can take the form of informing the parties concerned, conducting public campaigns, bringing lawsuits or reporting malpractices to regulators. In practice, one critical benchmark here is rights (human rights first and foremost), as these provide insight into the impact of AI at an early stage.

  • In agenda-setting, civil society parties, opinion formers and artists commit themselves to spotlighting certain aspects of AI and its use. Despite its undoubted importance, this form of engagement is often underdeveloped. In addition, it is rather like preaching to the converted.

  • Agenda-setting is another important aspect of political decision-making processes, at both national and international levels. Here it is essential for government bodies to approach a broad spectrum of civil society groups. Weaker or vulnerable groups often experience the adverse impacts of AI systems.

3 Co-operation

Our third and final cluster of forms of engagement comes under the heading ‘co-operation’. First and foremost, this entails a commitment to improving the technology. That could include civil society parties that draw up principles of good practice or are involved in standardization processes. Co-operation also includes appropriating the new technology, whereby parties incorporate it into their existing activities and use it to achieve their own goals and values. People co-operate for a variety of reasons, some related to the particular nature of AI.

3.1 Improving: Knowledge of Good Practice

Improving AI is positioned towards the ‘symbiosis’ end of the engagement spectrum. Involved here are those who work in the field itself or possess related know-how or other relevant expertise. They work with AI because they are convinced that the technology will enrich society. They are prompted to mobilize by the desire to put their expertise on the subject to good use, with the aim of improving AI and its application. Some draft principles, others write open letters and still others develop instruments for good AI practices (toolkits) or other types of publication. Many of these initiatives take place at the international level. Institutions with regulatory powers, including the EU, the UN and various standardization bodies, are actively involved in drawing up principles or standards. Here, however, we confine ourselves to the bottom-up initiatives launched by various civil society parties, including professional organizations, academic institutions and non-profit organizations. In Box 7.6, we describe one example, the Dutch ALLAI initiative, which focuses on developing responsible AI through research and collaborative projects.

A conference on AI safety was held in Puerto Rico in 2015. The participants issued an open letter stressing the importance of broadening AI research, based on the notion that AI was conceived ‘in a lab’. They argued that ethicists, philosophers, economists, legal scholars and cybersecurity researchers should be more closely engaged with the interdisciplinary research agenda.Footnote 58

In 2017 the Future of Life Institute hosted the Asilomar Conference on Beneficial AI. A hundred people, including AI scientists, economists, philosophers, lawyers and politicians, joined forces to develop 23 principles for ‘beneficial AI’. These are divided into three categories: research issues, ethics and values, and longer-term issues.Footnote 59 Several prominent researchers attended the conference, and the list of principles was signed by such eminent figures as Elon Musk, Nick Bostrom, Demis Hassabis, Yann LeCun, Yoshua Bengio and Stuart Russell.

Box 7.6: ALLAI

The Alliance for Artificial Intelligence Netherlands (ALLAI) was launched at the World Summit AI in 2018.Footnote 60 This made the Netherlands the first European country to have an independent organization dedicated entirely to the responsible use of AI. Amongst other things, ALLAI focuses on developing ethical preconditions for AI through projects, research, policy advice and education. Basing its approach on ‘responsible AI’, it aspires to create national and international environments that will deliver the benefits of artificial intelligence while at the same time safeguarding civic values such as security, autonomy and inclusion. To this end alliance founders Catelijne Muller, Virginia Dignum and Aimee van Wynsberghe (all former members of the AI HLEG) encourage stakeholders across the field to co-operate. They also make every effort to involve policymakers, scientists, entrepreneurs, lawyers and consumers in their projects. Since the outbreak of the COVID-19 crisis the organization has been exploring options for the responsible use of AI in tackling the pandemic. In this domain it is working with policymakers and researchers.

Meanwhile, a team at the University of Montreal has developed a set of ethical principles for responsible AI. This group of ethicists, legal scholars, public administrators and AI experts prepared a draft proposal listing seven principles. Five hundred academics, members of the public and stakeholders were mobilized to respond to it in writing and at meetings. The goals were to establish frameworks for the development and application of AI, to create principles that enable everyone to benefit from it and to facilitate the debate on equity-oriented, inclusive and sustainable AI. This process culminated in the Montreal Declaration for the Responsible Development of Artificial Intelligence. In another effort to improve use of the technology, the AI Now Institute has been developing an Algorithmic Accountability Policy Toolkit and Algorithmic Impact Assessments.

The Partnership on AI is yet another prominent body dedicated to improving AI. Its members include large companies like Amazon, Facebook, Google, DeepMind, Microsoft and IBM, as well as China’s Baidu. The partnership itself is a non-profit organization committed to the responsible use of AI, its approach being to identify good practices and share knowledge.Footnote 61 For example, it has developed a database of AI incidents, such as accidents involving autonomous vehicles and so-called ‘flash crashes’ on stock exchanges.

Finally, there is the organization OpenAI. This partly for-profit venture has originated products such as GPT-3, the AI program that wrote the article in The Guardian mentioned at the very start of this report. Its activities also include non-profit research aimed at developing ‘friendly AI’. OpenAI has received significant funding from Elon Musk and Microsoft.

Public courses are another form of engagement based on co-operation. ‘Elements of AI’ was the first of these, a series of lessons intended to give people a basic understanding of the topic. It was developed by the University of Helsinki in co-operation with Reaktor, a technology company, and originally funded by the Finnish government. ‘Elements of AI’ is now backed by the European Commission and available in dozens of languages. More than 750,000 people have taken these courses.

AI is playing an increasingly important part in everyday life. Yet people still have a lot of mistaken ideas about what the technology is and what it can do. Courses like Elements of AI are designed to provide information in an accessible way to anyone wanting to find out more about the subject. Also included in this category of tools are impact assessments that can be used to identify the effects of using AI. These forms of engagement are based on improving technology and the ways in which we make use of it. The momentum behind them is growing, and they will become increasingly important as AI becomes more deeply embedded in our society.

3.2 Appropriating: Diversity in Goals and Interests

Our final – and most symbiotic – form of engagement is appropriating AI. Whereas improving AI is about working on good practices and its lawful future use, appropriation means that civil society parties actually adopt the technology. The business community and government bodies have the resources to put new system technologies into practice, so they are usually the first to do so. Civil society parties, mainly groups of individuals and professional organizations, usually take much longer to follow their example. Here we discuss a number of initiatives that enable these two groups to appropriate AI.

Several projects have been launched to assist social groups disadvantaged by AI (see Sect. 6.2). These mainly involve the critical monitoring and assessment of its use by companies and government bodies. In a more far-reaching form, AI itself can be used to represent the interests of those groups. Ruha Benjamin stresses the importance of community-wide technology use to counteract any exclusionary effects. She explores the democratization of data, citing initiatives such as DiscoTech (‘Discovering Technology’), which make technology accessible in ways that allow particular groups to appropriate it in practice.Footnote 62

The Mijente group describes itself as a ‘political home base’ for Latino Americans and Mexicans. Its projects include identifying the relationship between AI and immigration. MediaJustice is a US organization that champions people of colour and those on lower incomes. It is working to achieve a fair economy, connected communities and a political landscape in which these groups are not only visible but have a voice and power. Its founders say that to achieve this we need a media and technology environment able to sustain real justice. Numerous organizations are currently spotlighting the interests of minorities. These include Women in AI, Black in AI (co-founded by Timnit Gebru, who was fired by Google) and Queer in AI.

In the Netherlands appropriation takes place in AI labs and numerous other settings. Many of these are working on the application of AI by companies or by government, but civil society parties are also becoming involved. The Civic AI Lab is one example: a collaborative venture between the University of Amsterdam, VU Amsterdam and the City of Amsterdam, established in 2021. Scientists at Tilburg University are co-operating with partners such as Greenpeace, the World Food Programme and the Jeroen Bosch Hospital to use AI for public-interest tasks in the fields of climate, food shortages and healthcare.Footnote 63 A final example is The Hague Institute for Innovation of Law (HiiL), co-founded by the legal scholar Maurits Barendrecht, which has been using a so-called ‘justice accelerator’ to launch various projects involving the use of AI, mainly in Africa.Footnote 64

So, there is already a great deal of activity in the field of appropriation, although civil society parties still seem rather slow in grasping the opportunities presented by AI. As yet, organizations representing more traditional interest groups like tenants, patients, consumers and teachers do not seem to be very active in this domain. This is partly because the process of embedding AI in our society is still only just beginning. Accordingly, groups that defend public values by appropriating AI (and so help to shape that process) are only now emerging. Many of them still have only a limited understanding of the technology, let alone ideas about how to use it for their own purposes. At the same time, it is important that they not be left behind, as the groups they represent have much to gain from AI (Box 7.7).

Box 7.7: PublicSpaces

While it does not focus specifically on AI, the Dutch PublicSpaces coalition is a prime example of civil society appropriating digital technology. It is a collaborative venture involving more than twenty parties from the public media, cultural heritage, festivals, museums, education and healthcare sectors.

The coalition was created to reimagine the internet as a public space and revive its founding principles. PublicSpaces is campaigning against our reliance on big tech companies for communications, information and media circulation. Its goal is an alternative software ecosystem that revolves around public values rather than commercial interests.

In that context the organization is developing tools such as ‘public badges’. These are quality labels for the coding and tooling of websites and software applications based on the values espoused by PublicSpaces. It is also working to implement open-source initiatives.

Besides communities and specific sections of the general population, appropriation is also important for professional groups. This is an enormous field and AI is triggering workplace changes in all kinds of occupational arenas (see Chap. 6). Here we focus on the interests and values embodied by certain professions in particular, such as doctors, teachers and lawyers. They possess specific expertise derived from their educational background and work experience, which we need to safeguard when AI systems are introduced. In other words, these groups need to appropriate AI in a way that gives their students a good education, enhances their patients’ health or safeguards the rights of their clients.

Many people claim that AI applications can replace this kind of expertise. Some assert that robot judges or medical algorithms can render traditional professions superfluous. As we have seen, misconceptions like this are typical of antagonistic relationships between AI and society. AI’s increasing integration into society in fact requires a symbiotic relationship, one in which the technology is combined with human professional expertise. Frank Pasquale has shown that rather than undermining expertise in general, AI in its current form tends to privilege some types of expertise over others. The skills of computer scientists and economists are central to many AI applications and, as things stand, take precedence over other forms of know-how.Footnote 65 Consequently, applications of this kind are based on a single, simple criterion. This is in stark contrast with the real world, which involves a complex web of standards, goals, interests and knowledge from all kinds of professional groups. Algorithms that write articles may be able to simulate part of a journalist’s work, but they come nowhere near replacing every aspect of their day-to-day responsibilities, which include considering different perspectives, treating people equitably and conducting in-depth research.

During the initial phase of AI’s entry into society, people tended to focus mainly on its revolutionary nature. Since then, however, they have become increasingly concerned about the jobs that could be rendered obsolete by this technology. In the next phase it is vital for all kinds of professional groups to appropriate AI in line with their professional responsibilities. Such groups are subject to various forms of self-regulation and have regulatory bodies that issue licences and monitor practices. These arrangements need to cover the use of AI in their fields of work.Footnote 66 Before that can happen, professionals need to master the technology and understand exactly how it can contribute to their everyday work.

Key Points – Co-operation: Improving, Appropriating

  • Co-operation involves a symbiotic attitude towards AI. It encompasses commitment to improving the technology and to appropriating it to achieve your own goals and values.

  • Improving AI involves people who work in the field or possess related know-how or other relevant expertise. Their efforts in this domain are motivated by a belief that the technology will enrich society. They use their expertise in the subject to improve AI and its use. More specifically, some might draft principles while others write open letters or develop instruments for good AI practices (toolkits) or other types of publication.

  • As yet, few individuals or groups are involved in appropriating AI. It is mainly the business community and government bodies that are putting the technology into practice; civil society parties and professional groups seem rather slow in grasping this opportunity. Appropriation is important for a variety of reasons. For example, these parties can use AI to counteract its own exclusionary effects or to safeguard the values they embody in their own professional practice.

4 In Conclusion

The overarching task of engagement is all about who should be involved in AI. Companies and government bodies are often the first to use new system technologies. This gives them a huge amount of influence over these technologies’ developmental paths. Civil society parties, too, are gradually becoming involved in this process. They can include interest groups, academic institutions, the media and specific professions.

Engagement is important for any society, especially democracies. The responsible embedding of a new technology within a society hinges on the interests, values and knowledge of a wide range of actors. This means their voices need to be heard not only during the design process, but also when they are affected by the uses to which the technology is put. Ultimately, they should be able to use it themselves to achieve their own goals. To put it another way, civil society parties provide valuable feedback on AI, based on their own experience and knowledge, and we need to take this into account to ensure that the technology becomes properly integrated into our society.

So far, though, there are few formal channels for feedback of this kind. As a result, companies and government bodies are developing all kinds of AI applications without fully understanding how they will affect the lives of individuals and specific social groups. They are also failing to exploit the knowledge and expertise that such groups could contribute: teachers and students have a part to play in the development of AI in education, doctors and patients in healthcare AI, and so on.

This chapter has focused on engagement with AI. In this respect we have identified a spectrum of different forms, ranging from an antagonistic relationship with AI to a symbiotic one. Some of the antagonistic forms are already highly developed, such as protest and supervision. These efforts are key to preventing the malicious use of AI, and they must be continued. Supervision also plays an important part in spotlighting issues, thereby helping to create frameworks, standards and regulations. The same goes for walkouts. The employees of technology companies are on the front line, so they can identify any problems at an early stage. The most antagonistic form, fighting, is not yet widely used in connection with AI but it can send a clear signal to society.

Some parties adopt a neutral stance when placing AI on the public agenda. At the international level, too, people have launched initiatives to improve the technology, typically by developing principles and sharing knowledge and experience. Engagement in the form of appropriation is enormously important as well. It enables civil society actors, communities and professional groups in particular to use AI in ways that suit them, helping them to achieve their own goals and safeguard their own values. As yet, though, traditional interest groups and professions have only limited capabilities when it comes to appropriating AI.

Progress is being made with neutral monitoring and the more symbiotic forms of engagement, but unlike the antagonistic forms these are still quite poorly developed. Much of the activity also takes place at the international level. Government’s task is to encourage national forms of engagement as a way of involving civil society more effectively in the embedding of AI. First and foremost, government bodies can do this by augmenting stakeholder expertise, that is, by equipping particular groups of stakeholders with the means to participate in constructively critical forms of engagement. This is all the more important given the civic values that are, or could be, at stake.