
Journal of Management and Governance, Volume 23, Issue 4, pp 849–867

What do we lose when machines take the decisions?

  • Thomas Bolander
Article

Abstract

This paper concerns the technical issues raised when humans are replaced by artificial intelligence (AI) in organisational decision making, or in decision making more generally. Automating human tasks and decisions can of course be beneficial: it saves human resources and can, ideally, lead to better solutions and decisions. However, in most areas the current AI techniques still have some way to go before they can guarantee better decisions, and many of them also suffer from weaknesses such as a lack of transparency and explainability. The goal of the paper is not to argue against using any kind of AI in organisational decision making. AI techniques have a lot to offer; they can, for instance, assess far more possible decisions, and far faster, than any human can. The purpose is simply to point out the weaknesses that AI techniques still have, and that one should be aware of when considering whether to automate human decisions with AI. Significant current AI research goes into reducing these limitations and weaknesses, but that is likely to remain a fairly long-term effort. People and organisations might be tempted to fully automate crucial aspects of decision making without waiting for these limitations and weaknesses to be reduced, or, even worse, without being aware of those weaknesses and of what is lost in the automation process.

Keywords

Artificial intelligence (AI) · Connectionist AI · Symbolic AI · Explainability · Trust · Algorithmic bias · Algorithmic decision making · Human decision making


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. DTU Compute, Technical University of Denmark, Lyngby, Denmark
