What do we lose when machines take the decisions?

Abstract

This paper concerns the technical issues raised when humans are replaced by artificial intelligence (AI) in organisational decision making, or in decision making more generally. Automating human tasks and decisions can of course be beneficial: it saves human resources and, ideally, leads to better solutions and decisions. However, to guarantee better decisions, current AI techniques still have some way to go in most areas, and many of them also suffer from weaknesses such as a lack of transparency and explainability. The goal of the paper is not to argue against using any kind of AI in organisational decision making. AI techniques have a lot to offer; for instance, they can assess many more possible decisions, and much faster, than any human can. The purpose is simply to point out the weaknesses that AI techniques still have, and that one should be aware of when considering implementing AI to automate human decisions. Significant current AI research goes into reducing these limitations and weaknesses, but this is likely to be a fairly long-term effort. People and organisations might be tempted to fully automate certain crucial aspects of decision making without waiting for these limitations and weaknesses to be reduced, or, even worse, without even being aware of those weaknesses and of what is lost in the automation process.


Figs. 1–4 (figure images not included in this version)

Change history

  • 02 March 2020

    In the original publication, the word "lose" has been inadvertently published as "loose". All occurrences of "loose" should be replaced by "lose".


Author information

Corresponding author

Correspondence to Thomas Bolander.



About this article


Cite this article

Bolander, T. What do we lose when machines take the decisions? J Manag Gov 23, 849–867 (2019). https://doi.org/10.1007/s10997-019-09493-x


Keywords

  • Artificial intelligence (AI)
  • Connectionist AI
  • Symbolic AI
  • Explainability
  • Trust
  • Algorithmic bias
  • Algorithmic decision making
  • Human decision making