
Machine learning, political participation and the transformations of democratic self-determination

Chapter in: Künstliche Intelligenz, Mensch und Gesellschaft

Abstract

This contribution addresses links between machine learning technologies and democracy, with a focus on political participation. Democracy research often regards machine learning technologies as a threat, as these technologies could violate fundamental rights or replace democratic decision-making. While raising important concerns, these approaches underestimate the malleability of digital technologies and of their relationship to democracy. Our argument is that democratic practice inherently involves a constant (re)negotiation of rights and institutions, in this case driven not least by the fact that machine learning technologies themselves are far from reaching maturity. The openness and negotiability of the relationship between AI and democracy is illustrated by three critical perspectives of special importance for political participation: algorithmic bias, automated decision-making and AI’s epistemic dimension. By reflecting on the changing conditions of political organisation, current research can be productive and even performative in the sense of co-defining a shared understanding of new technologies and of setting standards for their legitimate use.


Notes

  1.

    As we demonstrate with examples throughout this text.

  2.

    Although, or precisely because, artificial intelligence (AI) is currently a much-discussed topic, it is difficult to define. For this reason, many experts abandon the term entirely and switch to abstractions such as “predictive technologies”, “agentic machines”, or “algorithmic systems” (Joyce et al. 2021, p. 2; see also Dignum 2022). While this leads some observers to question the very existence of AI, others prefer to reserve the term for “whatever we are doing next in computing” (Recker et al. 2021, p. 1435). Nevertheless, the vague terminology proves to be a problem if one wants to investigate the interactions of AI with society and democracy. To avoid ambiguity, we refer instead to algorithmic systems, learning algorithms, or machine learning.

  3.

    Legal philosophy and behavioural theories have explored an array of factors that judges weigh in this process of materializing what are, ultimately, ethical standards set by the rule of law (Pereira 2016, p. 347), including hermeneutic choices, political implications and institutional culture, beyond personal ideologies and bias (Campos Mello 2018; Pereira 2016; Freitas 2018).

  4.

    For instance, an experiment on the US Supreme Court led by Epstein et al. (2018, p. 239) found that justices who subscribe to a liberal ideology were more supportive of free-speech claims than conservative justices. Showing that bias also infiltrates collegiate deliberation, Cesário Alvim Gomes, Werneck Arguelhes and Nogueira (2018, p. 866) documented how justices of the Brazilian Supreme Court were more likely to disagree with rulings reported by female justices than with those reported by their male peers.

  5.

    For hermeneutic techniques, see Freitas 2018.

  6.

    There are options to choose from when selecting learning algorithms, defining target variables, compiling training and test data, as well as optimizing during training processes (cf. Domingos 2012, pp. 79–80).

  7.

    In recent years, the increasing use of automated decision-making systems has not only brought existing network policy organizations onto the scene but has also led to the founding of a number of new organizations: NGOs such as the Ada Lovelace Institute (2018, UK), AlgorithmWatch (2017, Germany), AI Now (2017, USA) or Data & Society (2014, USA).

  8.

    COM(2021) 206 final.

  9.

    Hartmut Rosa (2020, p. 21) describes subjecting the world to control along four dimensions: recognizability, accessibility, controllability and usability.


Author information

Corresponding author

Correspondence to Jeanette Hofmann.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Fachmedien Wiesbaden GmbH, part of Springer Nature

About this chapter


Cite this chapter

Hofmann, J., Iglesias Keller, C. (2024). Machine learning, political participation and the transformations of democratic self-determination. In: Heinlein, M., Huchler, N. (eds) Künstliche Intelligenz, Mensch und Gesellschaft. Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-43521-9_13


  • DOI: https://doi.org/10.1007/978-3-658-43521-9_13

  • Publisher Name: Springer VS, Wiesbaden

  • Print ISBN: 978-3-658-43520-2

  • Online ISBN: 978-3-658-43521-9

