1 Introduction

AlgorithmsFootnote 1 have long played a major role in many areas of life but have only recently caught the attention of European and international regulators. Despite living in the “Age of Algorithms”,Footnote 2 the impact of algorithms on gender equalityFootnote 3 has received little attention,Footnote 4 even though it requires detailed analysis of challenges and opportunities as well as of possible discriminatory outcomes.Footnote 5 In case law,Footnote 6 in particular, analysis of algorithms in relation to gender equality and discrimination is still a rarity.Footnote 7 An outlier is the judgment of the Italian Consiglio di Stato reviewing the legality of using algorithms in public administration to automatically allocate school postings for teachers.Footnote 8

Some forms of discrimination are visible and perceived directly by women and men, such as the algorithm that denied a woman access to the female locker rooms of a gym because her “Dr.” title was associated with men.Footnote 9 Others are invisible, for example, where algorithms sort CVs in a fully automated application procedure and do not select women or men because of their sex.Footnote 10 Other behaviour does not necessarily cross the threshold of discrimination under EU law but clearly poses problems in terms of gender bias, stereotypesFootnote 11 and gender equality policy goals. It therefore represents a threat to gender equality in general but also concretely paves the way for future gender-based discrimination. Besides biases and stereotypes contained in datasets used for training algorithmsFootnote 12 and the intentional or unintentional introduction of biases into the design and programming of algorithms,Footnote 13 there is another underlying problem that might favour gender inequalities: since its early days, the “gender make-up of the AI community”Footnote 14 has greatly influenced the way algorithms are shaped and consequently how they work, leading to potential discriminatory outcomes.

2 The challenges and opportunities of algorithms for achieving gender equality goals

Grasping gender-based discrimination when algorithms are used remains a legal challenge. The key problem is that under EU law, gender-based discrimination is prohibited where a certain type of behaviour - for example, not hiring a woman because of pregnancy, or a company policy favouring men - represents a clear violation of gender equality rules.Footnote 15 If a company uses algorithms for its recruitment procedure,Footnote 16 the legal qualification as discrimination should in principle be no different. Indeed, algorithms might produce biases (“machine bias”)Footnote 17 or reproduce biases or stereotypes, and might even favour gender-based discrimination. The real challenge for policy-makers, however, is that algorithms might merely reproduce or reinforce existing societal biases and stereotypes. If algorithms such as those used in search engines merely favour certain biases and stereotypes and encourage certain behaviour, it is not clear whether the threshold of discrimination is crossed. Some behaviour the EU can combat only with policy measures, whereas for other behaviour legislative action could be the solution.

The proposed EU Artificial Intelligence Act (AIA)Footnote 18 foresees restrictions and regulatory actions for certain forms of behaviour that involve gender equality questions. One example relevant to many companiesFootnote 19 is covered by the Artificial Intelligence Act: recruitment algorithms.Footnote 20 Legal scholars have raised concerns due to the increased risk and “clear and present danger” of biases and discrimination in such algorithms.Footnote 21 Recruitment algorithms could exert exclusionary power by automatically pre-selecting or discarding CVs before they reach human eyes, thereby increasing the risk of gender-based discrimination.Footnote 22 It appears from the case law of the Court of Justice of the European Union (CJEU) that labour law issues such as recruitment and promotion procedures are often the subject of gender equality disputes.Footnote 23 To date, no directives or cases deal with algorithms and gender-based discrimination. Notably due to a lack of preliminary references, the CJEU has so far not interpreted any concepts in relation to algorithmic discrimination. Several cases could nevertheless shed some light on how the CJEU might decide in the future, should a dispute on algorithmic discrimination arise.Footnote 24 In addition, recruitment algorithms can provoke discriminatory behaviour towards women or men, which could be partly covered by the future Artificial Intelligence Act. More problematic are algorithms other than recruitment algorithms that produce discriminatory outcomes but do not fall under the proposed Artificial Intelligence Act. This category includes algorithms used by public or private operators as a preparatory step,Footnote 25 merely triggering a decision but not representing a discriminatory act as such. Where such an algorithm has a discriminatory impact on a potential employee, the discrimination would need to be proven in much the same way as if the decision had been taken by a human.

A distinction must be made between algorithms impacting gender equality with a direct discriminatory effect and those that have indirect effects. Because the degree and consequences of alleged gender equality violations differ, they need to be addressed differently. Indirect effects should not be underestimated, because they might undermine a gender equality policy aimed at tackling and fighting stereotypes and biases. Research and practice have shown that without a good gender equality policy,Footnote 26 discrimination cannot be addressed. In addition, indirect gender effects can also play a role in the formation of direct gender effects and thereby favour discrimination.

The core of gender equality policy is based on national constitutions, the EU Treaties and national and European legislation. Other policies might help fight biases in the context of algorithmic discrimination and achieve more equality, such as increasing the number of women on boards and in leadership positions.Footnote 27 The proposed Directive COM/2012/0614 could change reality so that, over time, a search for CEOs would yield more female leaders. Equally, in the area of work-life balance, the WLB-DirectiveFootnote 28 could have tremendous effects, because a more equal sharing of caring responsibilities and greater take-up of leave by men would shape the perceptions and stereotypes that are reflected in the data used by algorithms and search engines. Other measures, such as positive action or gender mainstreaming, are important in ensuring gender equality as well, as are addressing the gender pay gap, the gender pension gap and violence against women, notably as regards online violence and hate speech.Footnote 29 The more equal a society between women and men gets, the more this will be mirrored in the datasets used by algorithms. Such an approach could diminish overt, open and intentional discrimination. It is more difficult to eliminate unintentional and indirect discrimination that might occur unknowingly. However, unintentional discrimination is equally covered under EU law, as no subjective element or intent is necessary.

3 How gender equality is affected by indirect and direct effects of algorithms

As with human decisions, gender equality law and policy can be affected by algorithms either directly (see 3.2) or indirectly (see 3.1).

3.1 The indirect gender effects of algorithms

Indirect gender effects (IGE) of algorithms can be defined as all effects that shape, influence and perpetuate gender biases and stereotypes by altering the datasets that underlie algorithms, and that neither have a direct impact nor represent a clear violation of EU gender equality law as such. An example is the results of search queries. Search algorithms are problematic because, by promoting, perpetuating or combining different issues, they create new biases, stereotypes and potentially discriminatory tendencies. For example, when “CEO” is typed into a search engine, the algorithm shows almost no pictures of female CEOs, only male ones.Footnote 30 Even though there is huge inequality between women and men when it comes to leadership positions in companies, the pictures shown in the search results do not even reflect this reality: in Europe, 7.5% of board chairs and 7.7% of CEOs are women.Footnote 31 Search results thus create and reinforce a false perception. These stereotypes could become a basis for discriminatory behaviour and enable the preparation of discriminatory decisions. For example, in a recruitment procedure for CEOs, those relying on online information might be (unconsciously) influenced, culminating in indirect gender effects: search queries might feed into the process and lead to gendered outcomes. While the available datasets and training data for algorithms are part of what risks facilitating gender inequalities, (deep) neural networksFootnote 32 might worsen gender inequalities beyond what inaccurate data alone would produce: one algorithm that was trained to detect human activities in images developed gender biases.Footnote 33 Men tended to be shown doing outside activities such as driving cars, coaching and shooting, while women tended to be shown shopping, washing or standing in the kitchen at the microwave.Footnote 34
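
The mechanism behind the image-labelling example can be made concrete. The following minimal sketch (in Python; the function name and the figures are illustrative assumptions, not the data of the study cited above) shows how such bias amplification can be measured: compare how strongly an activity is skewed towards one gender in the training data with how strongly it is skewed in the model’s predictions.

```python
# Minimal sketch of a bias-amplification measure
# (illustrative numbers, not the cited study's data).

def share_women(counts: dict) -> float:
    """Fraction of images in which the activity co-occurs with a woman."""
    return counts["woman"] / (counts["woman"] + counts["man"])

# Hypothetical co-occurrence counts for the activity "cooking".
training_set = {"woman": 66, "man": 34}   # skew already in the dataset
predictions = {"woman": 84, "man": 16}    # skew in the model's output

amplification = share_women(predictions) - share_women(training_set)
print(f"Bias amplification for 'cooking': {amplification:+.2f}")
# A positive value means the model exaggerates the dataset's existing skew.
```

A positive amplification value is precisely the situation described above: the model does not merely mirror an unequal dataset but makes it more unequal.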

Although less visible and apparent, such indirect gender effects produced by the use of algorithms potentially cause more harm than direct gender effects. This is explained by their widespread reach and by the ease with which biases and stereotypes in datasets spread and are used and re-used by different algorithms accessing common databases or drawing data and information from the internet.Footnote 35 Ultimately, indirect gender effects, if perpetuated and distributed among networks and databases, can create direct gender effects whenever an algorithm uses datasets and databases that have been shaped by indirect gender effects.

The Word2vec/word vectors technique used by algorithms and search engines is at the heart of the gender equality problem described here.Footnote 36 Word2vec is

“a technique for natural language processing [that] uses a neural network model to learn word associations from a large corpus of text. Once trained, such a model can detect synonymous words or suggest additional words for a partial sentence. As the name implies, word2vec represents each distinct word with a particular list of numbers called a vector[, which] indicates the level of semantic similarity between the words represented by those vectors.”Footnote 37

Word2vec

“produce[s] word embeddings. These models [..] are trained to reconstruct linguistic contexts of words [from] a large corpus of text and produce[..] a vector space [..] with each unique word in the corpus being assigned a corresponding vector in the space. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located close to one another in the space.”Footnote 38

An oft-cited example is that the word “man” is more often associated with “computer programmer” and the word “woman” more often with “homemaker”.Footnote 39 While Word2vec makes search engines and algorithms more efficient and user-friendly, the danger is that biases and stereotypes will not only shape which search results are shown but will also serve as a basis for decision-making algorithms. If the Word2vec technique is used by algorithms, it may not only shape and distribute biases and stereotypes among search engines but also find its way into the underlying datasets from which algorithms learn and on which they base their decisions. Even though this does not necessarily cross the line into illegal behaviour under current EU gender equality rules, politically it undermines the Treaty goals of gender equalityFootnote 40 and might therefore necessitate a review of current EU rules. Consequently, the problem of the indirect gender effects of algorithms merits the same level of attention as the more obvious problem of direct gender effects.
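
To make the mechanism tangible, the following minimal sketch (in Python, assuming the gensim library and its downloadable pre-trained word2vec-google-news-300 model; it illustrates the technique as such, not any specific deployed system) shows how the vector arithmetic described above surfaces gendered associations.

```python
# Minimal sketch: probing gendered associations in pre-trained
# word2vec embeddings (assumes gensim is installed; the model
# download is roughly 1.6 GB).
import gensim.downloader as api

model = api.load("word2vec-google-news-300")

# Analogy arithmetic: which words stand to "woman" as
# "computer_programmer" stands to "man"?
print(model.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=3,
))  # results reported in the literature include "homemaker"

# Cosine similarity is the "level of semantic similarity"
# the quoted definition refers to.
print(model.similarity("woman", "homemaker"))
print(model.similarity("man", "homemaker"))
```

Because such associations are learned from the corpus, any decision-making system that re-uses these vectors inherits them, which is exactly the spill-over from search results into decision-relevant datasets described above.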

3.2 The direct gender effects of algorithms

Direct gender effects (DGE) of algorithms can be defined either as violations of gender equality norms or as behaviours of algorithms that are directly measurable and discriminatory. Examples are the exclusion of a female job applicant by an algorithm for the mere reason of being female, or the refusal of credit by an algorithm based on the (lower) creditworthiness statistically associated with the group “women”.Footnote 41 Direct gender effects are not only more obvious than indirect gender effects but are also more easily identified and understood, as they are roughly identical in outcome to classical discrimination triggered by a human decision. However, many unsolved issues remain when a direct gender effect is caused by algorithms and causes harm, notably access to evidence and facilitating proof of alleged discrimination in court proceedings, which seem more difficult in the light of the opaqueness of algorithms.Footnote 42 A strict application of the principle of the burden of proof, including its possible reversal in favour of alleged victims of discrimination, could facilitate and encourage better and more effective enforcement of equality rules in the area of algorithmic discrimination.Footnote 43

Other examples of direct gender effects include algorithms used for automatically granting access to gym lockers, deciding on university applications or allocating benefits, as well as recruitment decisions based on algorithms. In general, a distinction between algorithms used by private operators and those used by public bodies can be useful, given that public bodies often represent a monopoly without alternatives, whereas for private operators there is often choice. Therefore, where the state uses an algorithm to allocate labour or employment benefits, for example,Footnote 44 the algorithm produces direct results and citizens need to be guaranteed non-discriminatory access.

3.3 Positioning indirect and direct gender effects in the direct/indirect discrimination dichotomy

The distinction between indirect and direct gender effects is important for the question of how to address gender inequalities with law and policy measures.Footnote 45 Often, in the case of indirect gender effects, the threshold of sanctionable gender-based discrimination is not crossed and gender equality law therefore does not apply. In that case, policies need to be put in place to achieve the gender equality goals laid down in the Treaties. Under EU law, both direct and indirect discrimination are prohibited. However, as direct and indirect gender effects do not operate in the same way, they cannot necessarily both be addressed under the direct and indirect discrimination regimes. For discrimination to be found under EU law, a concrete discriminatory act or behaviour needs to be identified, which is often lacking in the case of indirect gender effects. Indirect gender effects typically mirror, create or reinforce biases and stereotypes but do not have a direct, concrete or visible impact on a person.Footnote 46 If biases and stereotypes are reflected in the results of a search engine, they might influence a person to take a certain action or to discriminate, or enable a person to prepare a discriminatory act, for example where internet research is used to prepare recruitment for a specific job. A person relying too heavily on an algorithmFootnote 47 could thus take a decision based on, and influenced by, the data revealed in the search results, and thereby potentially discriminate. One remedy is to address the gender data gap in order to obtain more representative and diverse datasets that reflect reality.Footnote 48
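
As a very simple illustration of what addressing the gender data gap can mean in practice, the following sketch (in Python with the pandas library; the data and the tolerance threshold are invented for exposition) checks whether a training set deviates from gender parity before it is fed to a decision-making algorithm.

```python
# Minimal sketch of a representativeness check on training data
# (hypothetical data and threshold, for illustration only).
import pandas as pd

# Stand-in for a real training set.
df = pd.DataFrame({"gender": ["f"] * 300 + ["m"] * 700})

observed = df["gender"].value_counts(normalize=True)
print(observed)  # f: 0.30, m: 0.70

# Flag a gender data gap relative to parity, using an
# illustrative five-percentage-point tolerance.
if (observed - 0.5).abs().gt(0.05).any():
    print("Warning: dataset deviates substantially from gender parity "
          "and may encode or amplify existing biases.")
```

In practice, the appropriate reference distribution would depend on the population the algorithm is meant to serve; parity is used here only to keep the sketch short.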

4 The future legislative framework to address gender-based algorithmic discrimination

In this section, the core elements of the Artificial Intelligence Act regarding gender equality will be outlined (see 4.1 below), together with amendments proposed by the European Parliament, the Committee of the Regions and the European Economic and Social Committee with a view to strengthening the gender equality perspective (see 4.2), and some views in the literature (see 4.3).

4.1 The EU proposal of the European Commission – the Artificial Intelligence Act in brief

The Artificial Intelligence Act can be considered a leap forward for horizontal artificial intelligence regulation in that it seeks to create harmonised rules for AI.Footnote 49 According to the Artificial Intelligence Act proposal, an “ ‘artificial intelligence system’ (AI system) means software that is developed [..] [and can,] for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.Footnote 50

In essence, if the Artificial Intelligence Act applies, some artificial intelligence systems are prohibitedFootnote 51 while others are subject to regulation, with high-risk systemsFootnote 52 requiring specific regulatory consideration.Footnote 53 With regard to its scope, the Artificial Intelligence Act applies not only to artificial intelligence systems within the EU (see Art. 2(b) of the Artificial Intelligence Act), but also “[..] irrespective of whether those providers are established within the Union or in a third country” and to “providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union”.Footnote 54 Any artificial intelligence system that has effects on Union citizens is thus covered by the Artificial Intelligence Act. Following a dynamic approach, future technological advances in artificial intelligence are included by reference to annexes that the Commission can amend without following the ordinary legislative procedure.Footnote 55

Regarding gender equality and non-discrimination, the Artificial Intelligence Act complements the existing legislative frameworkFootnote 56 by prohibiting certain artificial intelligence applications and defining high-risk artificial intelligence systems that require specific regulation.Footnote 57 The Artificial Intelligence Act also addresses the violation of fundamental rights,Footnote 58 which include the principle of non-discrimination. Due to the horizontal nature of the Artificial Intelligence Act, discrimination and gender equality are not specifically addressed but are referenced in the non-operative part.Footnote 59 Article 6(2) of the Artificial Intelligence Act, which refers to Annex III, point 4, regarding recruitment systems, is potentially relevant for gender-based algorithmic discrimination: “throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women [..]”.Footnote 60 Recruitment software would therefore be considered an artificial intelligence application falling within the high-risk category of Article 6. For this category, Article 8 requires compliance with the specific requirements listed in Articles 9–14. These include, inter alia, a risk management system,Footnote 61 compliance with data and data governance principles in terms of training, validation and testing of datasets,Footnote 62 technical documentation,Footnote 63 record-keeping,Footnote 64 transparency and provision of information to usersFootnote 65 and, finally, human oversight.Footnote 66 Those requirements could ensure sufficient regulation of algorithms and enable competent authorities to verify conformity with the Artificial Intelligence Act. Human oversight (see Art. 14 of the Artificial Intelligence Act) is a key requirement that has also been addressed in the GDPRFootnote 67 and discussed in the literature.Footnote 68 The obligations of providers (and users) of high-risk artificial intelligence systemsFootnote 69 include establishing a quality management system,Footnote 70 drawing up the technical documentation of the high-risk artificial intelligence system,Footnote 71 keeping the logs automatically generated by the high-risk AI system,Footnote 72 ensuring that the system undergoes the relevant conformity assessment procedureFootnote 73 prior to market access, complying with registration obligations,Footnote 74 affixing the CE marking to the high-risk AI system to indicate conformity with the Artificial Intelligence ActFootnote 75 and, upon the request of a national competent authority, demonstrating the conformity of the high-risk artificial intelligence system with these requirements.Footnote 76 Regarding institutional set-up and enforcement, the draft Artificial Intelligence Act foresees sanctions for violations of the Artificial Intelligence ActFootnote 77 and the creation of a European Artificial Intelligence Board.Footnote 78 Including potentially discriminatory recruitment systems as a high-risk category would address some of the algorithmic discrimination occurring in disputes concerning access to the labour market, while other relevant activities would still represent a challenge currently regulated only by Directive 2006/54/EC.Footnote 79 In the light of this, the further evolution of the Artificial Intelligence Act proposal will show whether the need to review existing rules or to propose separate legislation focussing specifically on gender equality and non-discrimination remains on the agenda of the EU.
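
By way of illustration, the record-keeping and traceability obligations just listed could translate into something as simple as the following sketch (in Python; the schema, field names and function are invented here for exposition and are not prescribed by the Artificial Intelligence Act) of the automatically generated logs a provider of a high-risk recruitment system would have to keep.

```python
# Minimal sketch of automatic record-keeping ("logs") for a
# high-risk recruitment system; the schema is hypothetical and
# merely illustrates the traceability idea behind the AIA's
# record-keeping obligations.
import json
import datetime

def log_decision(applicant_id: str, model_version: str,
                 inputs_digest: str, score: float, outcome: str) -> str:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "applicant_id": applicant_id,    # pseudonymised identifier
        "model_version": model_version,  # which system version decided
        "inputs_digest": inputs_digest,  # hash of the data relied upon
        "score": score,
        "outcome": outcome,              # e.g. "shortlisted" / "rejected"
    }
    return json.dumps(record)

# Example entry that a competent authority could later audit.
print(log_decision("A-1042", "cv-ranker-2.3", "sha256:ab12…", 0.87, "shortlisted"))
```

Such logs are what would allow a competent authority, or an alleged victim, to reconstruct on what basis a decision was taken, a point directly relevant to the evidentiary difficulties discussed in Sect. 3.2.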

4.2 The Artificial Intelligence Act proposal in the European Economic and Social Committee, the Committee of the Regions and the European Parliament

The adoption of the Artificial Intelligence Act falls under the ordinary legislative procedure,Footnote 80 and it is considered by the Parliament, the Council and the Commission as a common legislative priority for 2021, one for which they want to ensure substantial progress.Footnote 81 Several national parliaments (those of Czechia, Germany, Portugal and Poland) have issued opinions, among which only that of the German Bundesrat refers to gender equality, highlighting the need to ensure “coherence with the EU Charter of Fundamental Rights and the applicable secondary law of the Union on non-discrimination and gender equality” [translated from the German].Footnote 82 The AIDA committee of the European Parliament is currently preparing a draft report on artificial intelligence.Footnote 83 The main suggestions of Parliament’s draft report, which mentions gender and discrimination only twice, concern the issue of diversity and more female representation among coders and developers and the issue of biases and discrimination due to incomplete or non-diverse datasets. The amendments prepared by Parliament refer more frequently to gender and/or discrimination: amendments 1-281 (four references),Footnote 84 amendments 282-555 (four references),Footnote 85 amendments 556-825 (eight references),Footnote 86 amendments 826-1108Footnote 87 and amendments 1109-1384 (three references).Footnote 88

Along the same lines, the parallel legislative proposal for a Digital Services Act (DSA)Footnote 89 also tries to mitigate some of the risks for women:

“Specific groups (..) may be vulnerable or disadvantaged in their use of online services because of their gender (..) They can be disproportionately affected by restrictions [..] following from (unconscious or conscious) biases potentially embedded in the notification systems by users and third parties, as well as replicated in automated content moderation tools used by platforms.”

The Parliament adopted its amendments to the Digital Services Act on 20 January 2022 and included the right to gender equality and non-discrimination in recitals 57 and 91 and Article 26(1)(b), and the principle of equality between women and men in recital 3.Footnote 90

Both the European Economic and Social Committee (EESC) and the Committee of the Regions (CoR) have proposed concrete amendments to the Artificial Intelligence Act in the area of gender equality in their non-binding opinions.

The EESC adopted an opinion in December 2021, highlighting that

“the ‘list-based’ approach for high-risk AI runs the risk of normalising and mainstreaming a number of AI systems and uses that are still heavily criticised. The EESC warns that compliance with the requirements set for medium- and high-risk AI does not necessarily mitigate the risks of harm to [..] fundamental rights for all high-risk AI. [..] the requirements of (i) human agency, (ii) privacy, (iii) diversity, non-discrimination and fairness, (iv) explainability and [..] of the Ethics guidelines for trustworthy AI should be added.”Footnote 91

The Committee of the Regions adopted some amendments, proposing that “the Board should be gender-balanced” and claiming that such gender balance is a precondition for diversity in issuing opinions and drafting guidelines.Footnote 92 Furthermore, it proposed to include as a recital “AI system providers shall refrain from any measure promoting unjustified discrimination based on sex, origin, religion or belief, disability, age, sexual orientation, or discrimination on any other grounds, in their quality management system”, reasoning that “unlawful discrimination originates in human action. AI system providers should refrain from any measures in their quality system that could promote discrimination.” Another suggestion was to introduce into Article 17(1) “measures to prevent unjustified discrimination based on sex, (..)”. Both the European Economic and Social Committee’s and the Committee of the Regions’ proposals would lead to the incorporation of a gender equality perspective into the Artificial Intelligence Act.

4.3 Critical and nuanced views in the academic literature

While the Artificial Intelligence Act has been generally welcomed - with 1216 contributions received during the open public consultation by the Commission,Footnote 93 133 contributions on the roadmap and 304 feedback contributions following the adoption of the draft regulation - some concerns and alternative ideas have been voiced. Whereas some assume that the gender equality framework provides “useful yardsticks” while highlighting systemic problems that complicate the way EU law deals with algorithmic discrimination,Footnote 94 others regard EU law as in principle “well-equipped”, all the while highlighting “areas for improvement”.Footnote 95 Others have called for a new regulatory regime, such as the Artificial Intelligence Act currently going through the process of legislative adoption in the EU.Footnote 96 Also identifying room for improvement in EU anti-discrimination law so as to address algorithmic discrimination, Hacker suggests an “integrated vision of anti-discrimination and data protection law”.Footnote 97 Others are more cautious on the regulatory front when it comes to artificial intelligence and propose a “purposive interpretation and instrumental application of EU non-discrimination law”.Footnote 98 Finally, some authors identify shortcomings in the Artificial Intelligence Act and propose moving away from the notion of individual harm towards a more holistic approach that also focuses on collective and societal harm.Footnote 99

While it is true that law in general, and European gender equality law in particular, can cope to some extent with newly arising technologies that produce discriminatory outcomes, the author has advocated regulation as a conditio sine qua non and pointed to the need to find inspiration for EU artificial intelligence regulation in other international instruments currently under development that also partly address gender equality and non-discrimination issues.Footnote 100

5 Regulatory and policy recommendations

5.1 Regulatory suggestions

This section sketches out what can be done concretely in terms of policy measures and legislative action.Footnote 101 Legislation is fundamental to ensuring gender equality, and the Artificial Intelligence Act makes a good start by establishing for the first time the general principle that some artificial intelligence systems should be regulated. First, added value can be achieved by ensuring a good level of protection of equality between women and men in all fields, by adopting adequate legislationFootnote 102 or proposing legislation such as that on pay transparency,Footnote 103 women on boardsFootnote 104 and violence against women,Footnote 105 and by ensuring its effective enforcement. Overall, an increase in gender equality in the real world will be reflected in datasets and thereby reduce potential algorithmic discrimination.

Second, on this basis, and complementary to the rules of the Artificial Intelligence Act that would apply to situations involving gender-based discrimination caused by recruitment algorithms, more technical or sector-specific regulation could be explored, such as specific requirements to ensure respect for gender equality norms in relation to algorithmic discrimination. A review of the legislative gender equality acquis (required periodically by the EU’s better regulation guidelinesFootnote 106) could be a good opportunity to incorporate algorithmic discrimination more clearly into EU gender equality law and to define it, as well as to detail rules on the (shifting of the) burden of proof in algorithmic discrimination cases.

5.2 Accompanying policy measures

In order to achieve gender equality, alongside legislative measures,Footnote 107 self-standing or accompanying policy measures could be taken to address the issues raised above.Footnote 108 Policy and awareness-raising measures should include designers and developers of algorithms, as well as general training on equality issues. Targeted training on gender equality is needed for developers when designing and coding algorithms. Such training would not avoid all biases, and bad design and biases would still be found in algorithms, but algorithms would become less prone to biases, stereotypes and discriminatory behaviour.

If more women were represented among IT developers,Footnote 109 this could increase the chances of more diverse and equal outcomes from algorithms. Navigating between fully-fledged regulation and policy measures, aside from encouraging training, one could also prescribe mandatory measures at the design and coding stage. Good practice principles, mandatory training on gender equality or mandatory reporting could bring change. It could also be left to companies to decide how to achieve objectives fixed in the law. Concrete outcomes, however, should not be fixed in law, as it is probably impossible to create an algorithm that will never discriminate. The aim, therefore, is not to eliminate but to reduce the risk of gender-based discrimination.

Ensuring that humans remain in control, as highlighted in Article 14 of the Artificial Intelligence Act, could help identify and mitigate gender biases in individual datasets.Footnote 110 For this to be effective, training and awareness-raising for developers and programmers are one way of mitigating the dangers of gender biases and stereotypes when designing algorithms in general and with regard to the Word2vec technique in particular. Regardless of whether it is designed as a mandatory or an optional requirement, this is relatively easy to implement and a way to reduce gender biases and stereotypes. A label certifying that gender and diversity knowledge is available in a company could be another way to incentivise IT companies to acquire the relevant knowledge. Transparency could also increase compliance, for example by publishing on a company or general website whether a specific algorithm has been built by a company with the relevant gender and diversity knowledge.

More generally, increasing diversity and female participation in coding and artificial intelligence jobsFootnote 111 - a need also highlighted in the recent own-initiative (“INI”) report of the Parliament’s AIDA committee - is vital.Footnote 112 Amendment 673 of the report also supports skills and training in this regard, in that it “highlights the importance of including basic training in digital skills and AI in national education systems; [and e]mphasizes the importance of empowering and motivating girls for the subsequent study of STEM careers and eradicating the gender gap in this area”.Footnote 113 Firms are increasingly aware of the need for more equality and diversity in the IT and artificial intelligence world, as highlighted by the recent foundation of a consulting firm in California to address diversity and equality in the tech world.Footnote 114

5.3 Conclusion

Approaching the problem of gender-based algorithmic discrimination through the lens of indirect and direct gender effects enables researchers and policy-makers not only to perceive the depth of the problem but also to identify the need for legal and policy measures. It facilitates understanding by looking at the concrete mechanisms that underlie the functioning of algorithms, and thereby sheds light on why indirect gender effects, which might at first seem irrelevant from a gender equality enforcement perspective, are key to understanding and solving the problem of gender-based algorithmic discrimination. The role played by algorithms and search engines in shaping, reinforcing and perpetuating gender stereotypes and biases has been highlighted and should be taken into account in legislative and policy actions. This also strengthens the argument for having not only a robust legislative framework but also accompanying non-legislative measures that reinforce and complement EU law in order to support and achieve the entirety of its aims.