
The ethical AI—paradox: why better technology needs more and not less human responsibility

  • Opinion Paper
  • Journal: AI and Ethics


As AI gradually moves into the position of decision-maker in businesses and organizations, its influence increasingly shapes the outcomes and interests of human end-users. As a result, scholars and practitioners alike have become concerned about the ethical implications of decisions in which AI is involved. In approaching the issue of AI ethics, it is becoming increasingly clear that society and the business world, under the influence of the big technology companies, are accepting the narrative that AI has its own ethical compass; in other words, that AI can itself decide to do good or bad. We argue that this is not the case. We discuss and demonstrate that AI in itself has no ethics and that good or bad decisions by algorithms result from human choices made at an earlier stage. For this reason, we argue that even as technology quickly becomes better and more sophisticated, there is a simultaneous need to train humans ever better in shaping their ethical compass and awareness.


Author information

Authors and Affiliations


Corresponding author

Correspondence to David De Cremer.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Reprints and permissions

About this article


Cite this article

De Cremer, D., Kasparov, G. The ethical AI—paradox: why better technology needs more and not less human responsibility. AI Ethics 2, 1–4 (2022).
