In AI we trust? Perceptions about automated decision-making by artificial intelligence

  • Open Forum
  • Published:
AI & SOCIETY

Abstract

Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing on social science theories and on the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial contexts. Data from a scenario-based survey experiment with a national sample (N = 958) show that people are by and large concerned about risks and have mixed opinions about the fairness and usefulness of automated decision-making at a societal level, with general attitudes influenced by individual characteristics. Interestingly, decisions taken automatically by AI were often evaluated on par with, or even better than, those of human experts for specific decisions. Theoretical and societal implications of these findings are discussed.


Notes

  1. While perceptions of ADM are relevant in many societal contexts, this study has chosen to focus on media, (public) health, and justice. In these three sectors, we expect that ADM can have a significant impact on individual rights, well-being, and functioning in a society (as citizens and voters in the case of the media, as members of a society in the case of justice, and as humans in the case of health).

  2. Results are reported for the measure with higher reliability (without the reversed item), but differences are communicated in the notes (Table 2).

  3. Approximately 28% (818) of the responses for all the scenarios combined (N = 2874) were removed because they failed the manipulation check. When the analyses are run with these responses included, the results stay largely the same with regard to direction and significance levels. Exceptions are indicated in the notes.


Acknowledgements

This study was funded by the Research Priority Area Communication and its Digital Communication Methods Lab at the University of Amsterdam.

Author information

Corresponding author

Correspondence to Theo Araujo.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Araujo, T., Helberger, N., Kruikemeier, S. et al. In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Soc 35, 611–623 (2020). https://doi.org/10.1007/s00146-019-00931-w



  • DOI: https://doi.org/10.1007/s00146-019-00931-w
