Gender equality and artificial intelligence in Europe. Addressing direct and indirect impacts of algorithms on gender-based discrimination

This article assesses whether current European law sufficiently captures gender-based biases and algorithmic discrimination in the context of artificial intelligence (AI) and provides a short analysis of a draft EU legislative proposal, the Artificial Intelligence Act. To this end, current trends and uses of algorithms with potential impacts on gender will be analysed through the lens of direct and indirect impacts on gender equality law, highlighting the implications for European gender equality enforcement. This article concludes that legislative and accompanying policy measures are necessary to ensure an effective gender equality policy and to avoid algorithmic discrimination.


Introduction
Algorithms 1 have played a major role for many years but have only recently caught the attention of European and international regulators. Despite living in the "Age of Algorithms", 2 the impact of algorithms on gender equality 3 has received little attention, 4 requiring, as it does, detailed analysis of challenges and opportunities as well as of possible discriminatory outcomes. 5 In case law, 6 in particular, when it comes to gender equality and discrimination, analysis of algorithms is still a rarity. 7 An outlier is the judgment of the Italian Consiglio di Stato reviewing the legality of using algorithms in public administration to automatically allocate school postings for teachers. 8 Some forms of discrimination are visible and perceived directly by women and men, such as the algorithm that denied a woman access to the women's lockers in a gym because her "Dr." title was associated with men. 9 Others are invisible, for example where algorithms sort CVs in a fully automated application procedure and do not select women or men because of their sex. 10 Other behaviour does not necessarily cross the threshold of anti-discriminatory behaviour under EU law but clearly poses problems in terms of gender bias, stereotypes 11 and gender equality policy goals. It therefore represents a threat to gender equality in general but also concretely paves the way for future gender-based discrimination. Besides the biases and stereotypes contained in the datasets used for training algorithms 12 and the intentional or unintentional introduction of biases into the design and programming of algorithms, 13 there is another underlying problem that might favour gender inequalities: since its early days, the "gender make-up of the AI community" 14 has greatly influenced the way algorithms are shaped and consequently has an impact on how algorithms work, leading to potential discriminatory outcomes.

2 See Abiteboul and Dowek [1].
3 See European Commission Algorithmic discrimination in Europe [16].
4 See EP Committee on Women's Rights and Gender Equality [22] and EC Advisory Committee [15].
5 On hidden aspects of discrimination see Broussard [6], p. 75.
6 See Rechtbank Den Haag [46].
7 For EU law, the lack of references from national courts (preliminary ruling procedure, Art. 267 TFEU) is most likely the reason underlying the absence of case law involving the interpretation of algorithmic discrimination with regard to Directive 2006/54/EC of the European Parliament
8 The judgment highlighted two essential requirements for the use of algorithms: the knowability or understandability of the decision and the possibility of full judicial review, Consiglio di Stato [9], notably para. 8.2, 8.3, 8.4 and 9. In another judgment, the court tried to define algorithms, Consiglio di Stato [10], para. 3: "una sequenza finita di istruzioni" ("a finite sequence of instructions").
9 See Gufran [27]; Wheaton [51].
22 US data shows that 72% of CVs are apparently automatically discarded by recruitment algorithms, see Criado Perez (2019) [11], p. 166.
23 Currently 31 cases are listed that include a reference to Art. 14 of Directive 2006/54, see https://curia.europa.eu/juris/liste.jsf?oqp=&for=&mat=or&jge=&td=%3BALL&jur=C%2CT%2CF&page=1&dates=&pcs=Oor&lg=&pro=&nat=or&cit=L%252CC%252CCJ%252CR%252C2008E%252C%252C2006%252C54%252C%252C14%252C%252C%252C%252C%252Ctrue%252Cfalse%252Cfalse&language=en&avg=&cid=1781563 [...] on how the CJEU might decide in the future, should a dispute on algorithmic discrimination arise.

In addition, recruitment algorithms provoke discriminatory behaviour towards women or men, which could be partly covered by the future Artificial Intelligence Act. More problematic are algorithms, other than recruitment algorithms, that produce discriminatory outcomes but do not fall under the proposed Artificial Intelligence Act. This category includes algorithms used by public or private operators as a preparatory step, 25 merely triggering a decision but not representing a discriminatory act as such. Where there is a discriminatory impact on a potential employee, the discrimination would need to be proven in much the same way as if the decision had been taken by a human.
A distinction must be made between algorithms impacting gender equality with a direct discriminatory effect and those that have indirect effects. Because the degree and consequences of alleged gender equality violations are different, they need to be addressed differently. Indirect effects should not be underestimated, because they might undermine a gender equality policy aimed at tackling and fighting stereotypes and biases. Research and practice have shown that without a good gender equality policy, 26 discrimination cannot be addressed. In addition, indirect gender effects can also play a role in the formation of direct gender effects and thereby favour discrimination.
The core of gender equality policy is based on national constitutions, the EU Treaties and national and European legislation. Other policies might help fight biases in the context of algorithmic discrimination and achieve more equality, such as increasing the number of women on boards and in leadership positions. 27 The proposed Directive COM/2012/0614 could have a real-world impact, so that a search for CEOs would yield more female leaders over time. Equally, in the area of work-life balance, the WLB-Directive 28 could have tremendous effects, because a more equal sharing of caring responsibilities and a greater take-up of leave by men would shape perceptions and stereotypes, which are reflected in the data used by algorithms and search engines. Other measures, such as positive action or gender mainstreaming, are important in ensuring gender equality as well, as are addressing the gender pay gap, the gender pension gap and violence against women, notably as regards online violence and hate speech. 29 The more equality there is between women and men, and the more equal a society gets, the more this will be mirrored in the datasets used by algorithms. Such an approach could diminish overt, open and intentional discrimination. It is more difficult to eliminate unintentional and indirect discrimination that might occur unknowingly. However, unintentional discrimination is equally covered under EU law, as no subjective element or intent is necessary.

24 press-releases/2022/03/14/les-etats-membres-arretent-leur-position-sur-une-directive-europeenne-visant-a-renforcer-l-egalite-entre-les-femmes-et-les-hommes-dans-les-conseils-d-administration/. It has been reported that compromise seems possible after a decade of deadlock: see De La Baume [13].
28 [40].

How gender equality is affected by indirect and direct effects of algorithms
As with human decisions, gender equality law and policy can be affected by algorithms either directly (3.2) or indirectly (3.1).

The indirect gender effects of algorithms
Indirect gender effects (IGE) of algorithms can be defined as all effects that shape, influence and perpetuate gender biases and stereotypes by altering the datasets that underlie algorithms, and that neither have a direct impact nor represent a clear violation of EU gender equality law as such. An example is the results of search queries. Search algorithms are problematic because, by promoting, perpetuating or combining different issues, they create new biases, stereotypes and potentially discriminatory tendencies. For example, when typing CEO into a search engine, the algorithm shows almost no pictures of female CEOs, only male ones. 30 Even though there is huge inequality between women and men when it comes to leadership positions in companies, the pictures shown in the search results are not in line with reality, as the share of female board chairs is 7.5% and of female CEOs 7.7% in Europe. 31 Based on search results, a distorted perception is thus created and reinforced. These stereotypes could become a basis for discriminatory behaviour and enable the preparation of discriminatory decisions. For example, in a recruitment procedure for CEOs, those conducting the hiring might be (unconsciously) influenced by online information, culminating in indirect gender effects. Search queries might feed into the process and lead to gendered outcomes. While available datasets and training data for algorithms are part of what risks facilitating gender inequalities, (deep) neural networks 32 might, alongside inaccurate data, worsen gender inequalities: one algorithm that was trained to detect human activities in images developed gender biases; 33 women tended to be shown doing the shopping, being in the kitchen at the microwave or washing. 34 Although less visible and apparent, such indirect gender effects produced by algorithms potentially cause more harm than direct gender effects. This is due to their widespread use and the ease with which biases and stereotypes in datasets spread and are used and re-used by different algorithms accessing common databases or drawing data and information from the internet. 35 In the end, indirect gender effects of algorithms, if perpetuated and distributed among networks and databases, could ultimately create direct gender effects, where an algorithm uses datasets and databases that have been shaped by indirect gender effects.
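The image-labelling example above illustrates what the technical literature calls bias amplification: a model's outputs can skew further towards a stereotype than its training data already do. The following toy sketch is purely illustrative (the counts and names are invented and are not the figures reported by Zhao et al. [55]); it only shows how such amplification can be made measurable by comparing the share of women associated with an activity in the training data with the share in the model's predictions.

```python
# Illustrative sketch (toy numbers, not the figures from Zhao et al. [55]):
# "bias amplification" means a model's predictions exaggerate a gender skew
# that is already present in its training data.
train_counts = {"cooking": {"woman": 66, "man": 34}}   # hypothetical training set
pred_counts  = {"cooking": {"woman": 84, "man": 16}}   # hypothetical model output

def female_share(counts, activity):
    c = counts[activity]
    return c["woman"] / (c["woman"] + c["man"])

train_share = female_share(train_counts, "cooking")  # 0.66 in the toy data
pred_share = female_share(pred_counts, "cooking")    # 0.84 in the toy data

# Amplification: the predicted association is stronger than the one in the data.
print(f"training share: {train_share:.2f}, predicted share: {pred_share:.2f}")
print("bias amplified" if pred_share > train_share else "no amplification")
```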
The Word2vec/word vectors technique used by algorithms and search engines impacting gender equality is at the heart of the problem described. 36 Word2vec is "a technique for natural language processing [that] uses a neural network model to learn word associations from a large corpus of text. Once trained, such a model can detect synonymous words or suggest additional words for a partial sentence. As the name implies, word2vec represents each distinct word with a particular list of numbers called a vector. [which] indicates the level of semantic similarity between the words represented by those vectors." 37 Word2vec "produce[s] word embeddings. These models [..] are trained to reconstruct linguistic contexts of words [..] [on] a large corpus of text and produces a vector space [..] with each unique word in the corpus being assigned a corresponding vector in the space. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located close to one another in the space." 38 An oft-cited example is that the word man is more often associated with computer programmers and the word woman more often associated with homemakers. 39 While Word2vec facilitates the efficiency and user-friendliness of search engines and algorithms, the danger is that biases and stereotypes will not only shape certain search results but also be used as a basis for decision-making algorithms. If the Word2vec technique is used by algorithms, it may not only shape and distribute biases and stereotypes among search engines but also find its way into the underlying datasets on which algorithms base their decisions and learn. Even though this does not necessarily cross the line into illegal behaviour under current EU gender equality rules, politically it undermines the Treaty goals of gender equality 40 and might therefore necessitate a review of current EU rules. Consequently, the problem of the indirect gender effects of algorithms merits the same level of attention as the more obvious problem of direct gender effects.

34 Zhao et al. [55], para. 6.1.
35 See the draft report 2022 AIDA Parliament [23], para. 69: This "raises the question of whether certain biases can be resolved by using more diverse datasets, given the structural biases present in our society; specifies in this regard that algorithms learn to be as discriminatory as the society they observe and then suggest decisions that are inherently discriminatory, which again contributes to exacerbating discrimination within society; concludes that there is therefore no such thing as a completely impartial and objective algorithm."
36 See the general overview in Buijsman [7], p. 109-112 and the more technical analysis in Mikolov [38], p. 1-6.
37 Wikipedia, https://en.wikipedia.org/wiki/Word2vec; see Alpaydin [3], p. 133-135.
38 Wikipedia, https://en.wikipedia.org/wiki/Word2vec; see Russell and Norvig [48], p. 908, 926 and 929.
39 Bolukbasi et al. [4].
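To make the mechanism behind these word associations tangible, the following is a minimal illustrative sketch of how bias can be read off word vectors of the kind Word2vec produces. The four-dimensional vectors are invented toy values (a real model has hundreds of dimensions), so the sketch only demonstrates the vector arithmetic discussed by Bolukbasi et al. [4]; it does not reproduce their results.

```python
# Minimal sketch (not from the article): how gender associations surface in
# word embeddings of the kind produced by Word2vec. The 4-dimensional vectors
# below are invented toy values chosen only to illustrate the mechanism;
# a real Word2vec model has hundreds of dimensions.
import numpy as np

toy_vectors = {
    "man":        np.array([ 0.9, 0.1, 0.3, 0.0]),
    "woman":      np.array([-0.9, 0.1, 0.3, 0.0]),
    "programmer": np.array([ 0.7, 0.8, 0.1, 0.2]),
    "homemaker":  np.array([-0.7, 0.8, 0.1, 0.2]),
}

def cosine(a, b):
    # cosine similarity: the "level of semantic similarity" between two vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1) Direct association: "programmer" sits closer to "man" than to "woman".
print(cosine(toy_vectors["programmer"], toy_vectors["man"]))    # higher value
print(cosine(toy_vectors["programmer"], toy_vectors["woman"]))  # lower value

# 2) Analogy arithmetic as discussed by Bolukbasi et al.:
#    programmer - man + woman lands near homemaker with these toy values.
analogy = toy_vectors["programmer"] - toy_vectors["man"] + toy_vectors["woman"]
closest = max(toy_vectors, key=lambda w: cosine(analogy, toy_vectors[w]))
print(closest)  # "homemaker" in this toy example
```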

The direct gender effects of algorithms
Direct gender effects (DGE) of algorithms can be defined either as violations of gender equality norms or as behaviours of algorithms that are directly measurable and discriminatory. Examples are the exclusion of a female job applicant by an algorithm for the mere reason of being female, or the refusal of credit by an algorithm based on data associating women with (lower) creditworthiness because of the statistical association with the group "women". 41 Direct gender effects are not only more obvious than indirect gender effects but are also more easily identified and understood, as they are roughly identical in terms of outcome to classical discrimination triggered by a human decision. However, many unsolved issues remain when a direct gender effect is caused by an algorithm and causes harm, notably as regards access to evidence and proving alleged discrimination in court proceedings, both of which seem more difficult in the light of the opaqueness of algorithms. 42 A strict application of the principle of the burden of proof, and its possible reversal in favour of alleged victims of discrimination, could facilitate and encourage better and more effective enforcement of equality rules in the area of algorithmic discrimination. 43 Other examples of direct gender effects include algorithms used for automatically granting access to gym lockers, for university applications or for benefits, as well as recruitment decisions based on algorithms. In general, a distinction between algorithms used by private operators and those used by public bodies can be useful, given that public bodies often represent a monopoly without alternatives, whereas for private operators there is often choice. Therefore, in the case of an algorithm used by the state for labour or employment benefits, for example, 44 the algorithm produces direct results and citizens need to be guaranteed non-discriminatory access.

Positioning indirect and direct gender effects in the direct/indirect discrimination dichotomy
The distinction between indirect gender effects and direct gender effects is important for the question of how to address gender inequalities with law and policy measures. 45 Often, when it comes to indirect gender effects, the threshold of actionable gender-based discrimination is not crossed and gender equality law therefore does not apply. In that case, policies need to be put in place to achieve the gender equality goals laid down in the Treaties. Under EU law, both direct and indirect discrimination are prohibited. However, as direct and indirect gender effects do not operate in the same way, they cannot necessarily both be addressed under the direct and indirect discrimination regimes. For discrimination to be found under EU law, a concrete discriminatory act or behaviour needs to be identified, which is often lacking in the case of indirect gender effects. Indirect gender effects typically mirror, create or reinforce biases and stereotypes, but do not have a direct, concrete or visible impact on a person. 46 If biases and stereotypes are represented in the search results of a search engine, they might influence a person to take a certain action or to discriminate, or enable a person to prepare a discriminatory act, for example when researching on the internet in preparation for recruitment to a specific job. A person relying too much on an algorithm 47 might thus take a decision based on and influenced by the data revealed in the search results, potentially resulting in discrimination. One remedy is to address the gender data gap in order to obtain more representative and diverse datasets that reflect reality. 48

44 See for example Austria's AMS, Fröhlich and Spiecker [24]; https://www.ams.at/regionen/oberoesterreich/news/2019/01/ams-oberoesterreich-arbeitsprogramm-2019.
45 Brière and Dony [5], p. 297.
46 On biases, note the EP AIDA draft report 2022 [23], at para. 68: "Stresses that bias in AI systems often occurs due to a lack of diverse and high-quality training data, for instance where data sets are used which do not sufficiently cover discriminated groups, or where the task definition or requirement setting themselves were biased; notes that bias can also arise due to a limited volume of training data, which can result from overly strict data protection provisions, or where a biased AI developer has compromised the algorithm; points out that some biases in the form of reasoned differentiation are, on the other hand, also intentionally created in order to improve the AI's learning performance under certain circumstances."
47 In general, it is considered that humans are better than algorithms "at work that involves unusual combinations of skills (..)", and recruitment is certainly an activity that requires making a holistic assessment of future employees. See Roose [47], p. 71.
48 The EU Data Governance Act (Proposal for a Regulation of the European Parliament and of the Council on European data governance (Data Governance Act), COM/2020/767 final) does not directly address this issue but could help encourage a discussion in the direction of more diverse data and a reduction of the gender data gap. See also Criado Perez (2020) [12].

The future legislative framework to address gender-based algorithmic discrimination
In this section, the core elements of the Artificial Intelligence Act regarding gender equality will be outlined (see 4.1 below), together with amendments proposed by the European Parliament, the Committee of the Regions and the European Economic and Social Committee with a view to strengthening the gender equality perspective (see 4.2), and some views in the literature (see 4.3).

The EU proposal of the European Commission - the Artificial Intelligence Act in brief
The Artificial Intelligence Act can be considered a leap forward for horizontal artificial intelligence regulation in that it seeks to create harmonised rules for AI. 49 According to the Artificial Intelligence Act proposal, an "'artificial intelligence system' (AI system) means software that is developed [..] [and can,] for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with". 50 In essence, if the Artificial Intelligence Act applies, some artificial intelligence systems are prohibited 51 while others are subject to regulation, and high-risk systems 52 require specific regulatory consideration. 53 With regard to its scope, the Artificial Intelligence Act applies not only to artificial intelligence systems within the EU (see Art. 2(b) of the Artificial Intelligence Act), but also "[..] irrespective of whether those providers are established within the Union or in a third country" and to "providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union". 54 Any artificial intelligence system that has effects on Union citizens is thus covered by the Artificial Intelligence Act. Following a dynamic approach, future technological advances in artificial intelligence are included by referring to annexes that can be adopted by the Commission without following the ordinary legislative procedure. 55

Regarding gender equality and non-discrimination, the Artificial Intelligence Act complements the existing legislative framework 56 but includes gender equality and non-discrimination to the extent of prohibiting certain artificial intelligence applications and defining high-risk artificial intelligence systems that require specific regulation. 57 The Artificial Intelligence Act also addresses the violation of fundamental rights, 58 which include the principle of non-discrimination. Due to the horizontal nature of the Artificial Intelligence Act, discrimination and gender equality are not specifically addressed but referenced in the non-operative part. 59 Article 6(2) of the Artificial Intelligence Act, which refers to Annex III Nr. 4 regarding recruitment systems, is potentially relevant for gender-based algorithmic discrimination: "throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women [..]". 60 Recruitment software would therefore be considered an artificial intelligence application that falls within the high-risk category of Article 6. For this category, Article 8 requires respect for the specific requirements listed in Articles 9-14. These include inter alia a risk management system, 61 compliance with data and data governance principles in terms of training, validation and testing of data sets, 62 technical documentation, 63 record-keeping, 64 transparency and provision of information to users 65 and, finally, human oversight. 66

Those requirements could ensure sufficient regulation of algorithms and enable competent authorities to verify conformity with the Artificial Intelligence Act. Human oversight (see Art. 14 of the Artificial Intelligence Act) is a key requirement that has also been addressed in the GDPR 67 and discussed in the literature. 68 The obligations of providers (and users) of high-risk artificial intelligence systems 69 include establishing a quality management system, 70 drawing up the technical documentation of the high-risk artificial intelligence system, 71 keeping the logs automatically generated by the high-risk AI systems, 72 ensuring that the relevant conformity assessment procedure 73 is carried out prior to market access, complying with registration obligations, 74 affixing the CE marking to their high-risk AI systems so as to indicate conformity with the Artificial Intelligence Act 75 and, upon the request of a national competent authority, demonstrating the conformity of the high-risk artificial intelligence system with these requirements. 76 Regarding institutional set-up and enforcement, the draft Artificial Intelligence Act foresees sanctions for violations of the Artificial Intelligence Act 77 and the creation of a European Artificial Intelligence Board. 78 Including potentially discriminatory recruitment systems as a high-risk category would address some of the algorithmic discrimination occurring in disputes concerning access to the labour market, while other relevant activities would still represent a challenge currently regulated only by Directive 2006/54/EC. 79 In the light of this, the further evolution of the Artificial Intelligence Act proposal will show whether the need to review, or to propose, separate legislation focusing specifically on gender equality and non-discrimination will remain on the agenda of the EU.

49 Art. 1 AIA.
50 Art. 3(1).
51 Art. 5.
52 Art. 6.
53 Art. 8-14.
54 Art. 2.
55 Annex III lists AI applications that fall under the high-risk category.
56 See Explanatory Memorandum, 1.2.
57 Art. 6, Art. 6(2) + Annex III.
58 See Art. 7(1)(b) "usage in areas of Annex 1" and the "risk of adverse impact on fundamental rights".
59 "(Non)-discrimination" (16 references), "Gender Equality" (1 reference) and "women" (2 references).
60 Recital 36.
61 See Art. 9 and Recitals 42 and 46: "A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems." (per Art. 9(1)) and "shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating" (per Art. 9(2)), which comprises the following steps: "(a) identification and analysis of the known and foreseeable risks associated with each high-risk AI system; (b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse; (c) evaluation of other possibly arising risks".
62 Art. 10. "High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. (..). In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems" (See Recital 44, highlighted by the author.) See also the Proposal for a Regulation on European data governance (Data Governance Act), COM/2020/767.
63 Art. 11: "technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements" (Art. 11(1)).
64 Art. 12: "High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events ('logs') while the high-risk AI systems is operating".
65 Art. 13: "their operation is sufficiently transparent to enable users to interpret the system's output and use it appropriately".
66 Art. 14(1): "High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use." Art. 14(2) recalls that the reason for human oversight is the risk of fundamental rights violations.
67 One might argue that every applicant needs to be aware if and to what extent an algorithm is involved in the selection procedure. See Art. 22 GDPR.
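To illustrate what one of the requirements listed above could look like in practice, the following sketch offers a purely hypothetical reading of the record-keeping obligation (Art. 12 AIA, quoted in note 64) for a high-risk recruitment system: every automated screening decision is automatically written to a log so that it can later be audited. The field names, values and file layout are invented for illustration and are not prescribed by the proposal.

```python
# Minimal sketch (editorial illustration, not a compliance implementation):
# Art. 12 AIA requires high-risk AI systems to automatically record events
# ("logs") while operating. For a recruitment algorithm this could mean
# persisting, for every screening decision, enough context to reconstruct
# and audit it later.
import json, datetime, uuid

def log_screening_event(logfile, model_version, applicant_id, features, score, outcome):
    """Append one auditable record per automated screening decision."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,   # which model produced the output
        "applicant_id": applicant_id,     # pseudonymised identifier
        "input_features": features,       # inputs actually used by the system
        "score": score,                   # raw model output
        "outcome": outcome,               # e.g. "shortlisted" / "rejected"
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# Hypothetical usage: every automated decision leaves a trace that a national
# competent authority or an internal auditor could later inspect.
log_screening_event("screening.log", "cv-ranker-1.3", "A-4711",
                    {"years_experience": 6, "degree": "MSc"}, 0.72, "shortlisted")
```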

The Artificial Intelligence Act proposal in the European Economic and Social Committee, the Committee of the Regions and the European Parliament
The adoption of the Artificial Intelligence Act falls under the ordinary legislative procedure 80 and is considered by the Parliament, the Council and the Commission as a common legislative priority for 2021, and one for which they want to ensure substantial progress. 81 Several national parliaments (those of Czechia, Germany, Portugal and Poland) have issued opinions, among which only that of the German Bundesrat refers to gender equality, highlighting that "Kohärenz mit der EU-Grundrechte-Charta und dem geltenden Sekundärrecht der Union zur Nichtdiskriminierung und zur Gleichstellung der Geschlechter" must be "gewährleistet" (i.e. that coherence with the EU Charter of Fundamental Rights and the Union's applicable secondary law on non-discrimination and gender equality must be ensured). 82 The AIDA committee of the European Parliament is currently preparing a draft report on artificial intelligence. 83 The main suggestions of Parliament's draft report, which mentions gender and discrimination only twice, concern the issue of diversity and more female representation among coders and developers, and the issue of biases and discrimination due to incomplete or non-diverse datasets. In addition, a total of 1384 amendments have been proposed to amend the draft report. 84

In this line, the parallel legislative proposal for the Digital Services Act (DSA) 89 also tries to mitigate some of the risks for women: "Specific groups (..) may be vulnerable or disadvantaged in their use of online services because of their gender (..) They can be disproportionately affected by restrictions [..] following from (unconscious or conscious) biases potentially embedded in the notification systems by users and third parties, as well as replicated in automated content moderation tools used by platforms." The Parliament adopted its amendments to the Digital Services Act on 20 January 2022 and included the right to gender equality and non-discrimination in recitals 57 and 91 and Article 26(1)(b), and the principle of equality between women and men in recital 3. 90

Both the European Economic and Social Committee (EESC) and the Committee of the Regions (CoR) have proposed concrete amendments to the Artificial Intelligence Act in the area of gender equality in their non-binding opinions. The EESC adopted an opinion in December 2021, highlighting that "the 'list-based' approach for high-risk AI runs the risk of normalising and mainstreaming a number of AI systems and uses that are still heavily criticised. The EESC warns that compliance with the requirements set for medium- and high-risk AI does not necessarily mitigate the risks of harm to [..] fundamental rights for all high-risk AI. [..] the requirements of (i) human agency, (ii) privacy, (iii) diversity, non-discrimination and fairness, (iv) explainability and [..] of the Ethics guidelines for trustworthy AI should be added." 91 The Committee of the Regions adopted some amendments, proposing that "the Board should be gender-balanced" and claiming that such gender balance is a precondition for diversity in issuing opinions and drafting guidelines. 92 Furthermore, it proposed to include as a recital "AI system providers shall refrain from any measure promoting unjustified discrimination based on sex, origin, religion or belief, disability, age, sexual orientation, or discrimination on any other grounds, in their quality management system", reasoning that "unlawful discrimination originates in human action. AI system providers should refrain from any measures in their quality system that could promote discrimination." Another suggestion was to introduce into Article 17(1) "measures to prevent unjustified discrimination based on sex, (..)". Both the European Economic and Social Committee's and the Committee of the Regions' proposals would lead to the incorporation of a gender equality perspective into the Artificial Intelligence Act.

83 AIDA Parliament [23] in its AI report.
84 AIDA Parliament Amendments [21], Amendment ("A") 88: "Underlines the gender gap across all digital technology domains, which has a concrete impact on the development of AI, reproducing and enhancing stereotypes and bias, since it has predominantly been designed by males"; A 124: "Highlights that significant work still needs to be carried out in order to include certain groups, such as women and minority communities, in this transition; warns that the fact that only 22% of AI professionals globally are female has the potential to deepen already existing inequalities such as the gender pay gap as well as to lead to a devaluation of problems that affect mostly women, such as online gender-based violence".
85 AIDA Parliament Amendments [21], A 377: "Expects the EU to shape the AI revolution globally by promoting its values such as respect for fundamental rights, democracy, non-discrimination and inclusivity".
86 AIDA Parliament Amendments [21], A 476: "addressing the gender gap and the lack of diversity among developers of AI systems which is another crucial aspect in increasing EU's competitiveness".
87 AIDA Parliament Amendments [21], A 630: "non-discriminatory algorithms are those which prevent gender, racial and other social biases in the selection and treatment of different groups and do not reinforce inequalities and stereotypes" and A 669: "calls on the Commission and the Member States to align the measures shaping the EUs digital transition with the Union's goals on gender equality; calls on the Commission and the Member States to provide appropriate funding to programmes aimed at attracting women to study and work in ICT and STEM, to develop strategies aimed at increasing women's digital inclusion, in fields relating to STEM, AI and the research and innovation sector, and to adopt a multi-level approach to address the gender gap at all levels of education and employment in the digital sector".
88 AIDA Parliament Amendments [21], A 1146: "Highlights that in order to address bias in AI, there is a need to promote diversity in the teams that develop, implement and assess the risks of specific AI applications; stresses the need for gender disaggregated data to be used to evaluate AI algorithms and for gender analysis to be part of all risk assessments".

Critical and nuanced views in the academic literature
While the Artificial Intelligence Act has been generally welcomed, with 1216 contributions received during the Commission's open public consultation, 93 133 contributions on the roadmap and 304 feedback contributions following the adoption of the draft regulation, some concerns and alternative ideas have been voiced. Whereas some assume that the gender equality framework provides "useful yardsticks" while highlighting that there are systemic problems complicating the way EU law deals with algorithmic discrimination, 94 others regard EU law as in principle "well-equipped", all the while highlighting "areas for improvement". 95 Others have called for a new regulatory regime, such as the Artificial Intelligence Act currently going through the process of legislative adoption in the EU. 96 Also identifying room for improvement of EU anti-discrimination law so as to address algorithmic discrimination, Hacker suggests an "integrated vision of anti-discrimination and data protection law". 97 Others are more cautious on the regulatory front when it comes to artificial intelligence and propose a "purposive interpretation and instrumental application of EU non-discrimination law". 98 Other authors identify shortcomings in the Artificial Intelligence Act and propose moving away from the notion of individual harm towards a more holistic approach that also focuses on collective and societal harm. 99 While it is true that law in general, and European gender equality law in particular, can cope to some extent with newly arising technologies that produce discriminatory outcomes, the author has advocated regulation as a conditio sine qua non and pointed to the need to find inspiration for EU artificial intelligence regulation in other international instruments currently under development that also partly address gender equality and non-discrimination issues. 100

93 See https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements_en.
94 Xenidis et al. [53], p. 181.
95 Allen et al. [2], p. 598.
96 Lütz (2022) [35].
97 Hacker [28], pp. 1143-1186.
98 Xenidis [54], p. 757.
99 Shmua [50], pp. 4-5 and 25-26.

Regulatory suggestions
This section sketches out what can be done concretely in terms of policy measures and legislative action. 101 Legislation is fundamental to ensuring gender equality, and the Artificial Intelligence Act makes a good start by establishing for the first time the general principle that certain areas of artificial intelligence systems should be regulated. First, added value can be achieved by ensuring a good level of protection of equality between women and men in all fields, by adopting adequate legislation 102 or by proposing legislation such as that on pay transparency, 103 women on boards 104 and violence against women, 105 and by ensuring its effective enforcement. Overall, an increase in gender equality in the real world will be reflected in datasets and thereby reduce potential algorithmic discrimination.
Second, on this basis, and complementary to the rules of the Artificial Intelligence Act that would apply to situations involving gender-based discrimination caused by recruitment algorithms, more technical or sector-specific regulation could be explored, such as specific requirements to ensure respect for gender equality norms when it comes to algorithmic discrimination. A review of the legislative gender equality acquis (which is required periodically by the EU's better regulation guidelines 106) could be a good opportunity to incorporate and define algorithmic discrimination more clearly in EU gender equality law, as well as to detail rules on the (shifting of the) burden of proof in algorithmic discrimination cases.

Accompanying policy measures
In order to achieve gender equality, alongside legislative measures, 107 self-standing or accompanying policy measures could be taken to address the issues raised above. 108 Policy and awareness-raising measures should include designers and developers of algorithms, as well as general training on equality issues. Targeted training on gender equality is needed for developers when designing and coding algorithms. This would not avoid all biases, and bad design and biases would still be found in algorithms, but algorithms would be less prone to biases, stereotypes and discriminatory behaviour.
If more women were represented among IT developers, 109 this could increase the chances of more diverse and equal outcomes from algorithms. Navigating between fully-fledged regulation and policy measures, aside from encouraging training, one could also prescribe mandatory measures at the design/coding stage. Good practice principles, mandatory training on gender equality or mandatory reporting could bring change. It could also be left to companies to decide how to achieve objectives which could be fixed in the law. Concrete outcomes should not be fixed in law, as it is probably impossible to create an algorithm that will never discriminate. The aim, therefore, is not to eliminate but to reduce the risk of gender-based discrimination.

102 In this regard, the recently adopted (WLB-Directive) and proposed legislation are a step in the right direction, each helping to ensure that potential biases and stereotypes will be reduced in the data underlying algorithmic discrimination.
103 Proposal for a Directive of the European Parliament and of the Council to strengthen the application of the principle of equal pay for equal work or work of equal value between men and women through pay transparency and enforcement mechanisms, COM/2021/93 final.
104 Proposal for a Directive of the European Parliament and of the Council on improving the gender balance among non-executive directors of companies listed on stock exchanges and related measures, COM/2012/0614 final.
105 https://ec.europa.eu/info/policies/justice-and-fundamental-rights/gender-equality/gender-based-violence/ending-gender-based-violence_en.
106 European Commission Better Regulation [17].
Ensuring that humans remain in control, as highlighted in Article 14 of the Artificial Intelligence Act, could help identify and mitigate gender biases in individual datasets. 110 For this to be effective, training and awareness-raising for developers/programmers are one way of mitigating the dangers of gender biases and stereotypes when designing algorithms in general, and with regard to the Word2vec technique in particular. Regardless of whether it is designed as a mandatory or an optional requirement, this is relatively easy to implement and a way to reduce gender biases and stereotypes. A label certifying that gender and diversity knowledge is available in a company could be another way to incentivise IT companies to gain the relevant knowledge. Transparency could also increase compliance, for example by publishing on a company or general website whether a specific algorithm has been built by a company with the relevant gender and diversity knowledge.
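To make the idea of human oversight of a recruitment algorithm more concrete, the following is a minimal, purely illustrative sketch (not prescribed by the Artificial Intelligence Act or by EU gender equality law): a human overseer could periodically compare the selection rates an automated screening tool produces for women and for men and flag large gaps for closer review. The data, the threshold and the function name are invented for illustration.

```python
# Illustrative sketch: one concrete check a human overseer of a recruitment
# algorithm could run is to compare selection rates by gender in the system's
# recent decisions. A large gap is not proof of discrimination, but it is a
# signal that the outcomes deserve closer human scrutiny.

# Hypothetical decision log: (gender, outcome) pairs taken from the system's records.
decisions = [
    ("woman", "shortlisted"), ("woman", "rejected"), ("woman", "rejected"),
    ("man", "shortlisted"), ("man", "shortlisted"), ("man", "rejected"),
]

def selection_rate(decisions, gender):
    total = sum(1 for g, _ in decisions if g == gender)
    selected = sum(1 for g, o in decisions if g == gender and o == "shortlisted")
    return selected / total if total else float("nan")

rate_women = selection_rate(decisions, "woman")  # 1/3 in this toy log
rate_men = selection_rate(decisions, "man")      # 2/3 in this toy log

ratio = rate_women / rate_men
print(f"selection rate women: {rate_women:.2f}, men: {rate_men:.2f}, ratio: {ratio:.2f}")
if ratio < 0.8:  # threshold borrowed from the US 'four-fifths rule', used here only as a flag
    print("flag for human review: outcomes differ markedly by gender")
```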
More generally, increasing diversity and female participation in coding/artificial intelligence jobs 111 - a need also highlighted in the recent own-initiative ("INI") report of the Parliament's AIDA committee - is vital. 112 Amendment 673 of the report also supports skills and training in this regard, in that it "highlights the importance of including basic training in digital skills and AI in national education systems; [and e]mphasizes the importance of empowering and motivating girls for the subsequent study of STEM careers and eradicating the gender gap in this area". 113 Firms are increasingly aware of the need for more equality and diversity in the IT and artificial intelligence world, as highlighted by the recent foundation of a consulting firm in California to address the issue of diversity and equality in the tech world. 114

110 Mitchell [39], p. 126.
111 See Wooldridge [52], p. 290, who highlights that the kick-off event for AI at Dartmouth saw no representation by any woman, a situation that he considers unthinkable today despite the fact that women are still largely underrepresented in AI-related jobs.
112 AIDA Parliament [23], para. 75: "is concerned about the extensive gender gap in this area, with only one in six ICT specialists and one in three (..) STEM graduates being female". A 670 proposes to add "stresses that this gap inevitably results in biased algorithms". See https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/AIDA/AM/2022/01-13/1245946EN.pdf; European Commission 2030 Digital Compass [18], p. 4-5.
113 A 777 equally supports this: "recalls therefore the need to address the gender gap in STEM in which women are still underrepresented; calls on the Commission and the Member States to provide appropriate funding to programmes aimed at attracting women to study and work in STEM, to develop strategies aimed at increasing women's digital inclusion, in fields relating to STEM, AI and the research and innovation sector, and to adopt a multi-level approach to address the gender gap at all levels of education and employment in the digital sector", AIDA Parliament Amendments [21].

Conclusion
Approaching the problem of gender-based algorithmic discrimination through the lens of indirect and direct gender impacts enables researchers and policy-makers not only to perceive the depth of the problem but also to identify the need for legal and policy measures. It facilitates understanding by looking at the concrete mechanisms that underlie the functioning of algorithms, and thereby sheds light on why indirect gender effects that might at first seem irrelevant from a legal gender equality enforcement perspective are key to understanding and solving the problem of gender-based algorithmic discrimination. The role played by algorithms and search engines in shaping, reinforcing and perpetuating gender stereotypes and biases has been highlighted and should be taken into account in legislative and policy actions. This also strengthens the argument for having not only a robust legislative framework but also accompanying non-legislative measures that reinforce and complement EU law in order to support and achieve the entirety of its aims.
Funding Note Open access funding provided by University of Lausanne.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.