The possibility of deliberate norm-adherence in AI

Abstract

Moral agency status is often granted to individuals or entities which act intentionally within a society or environment. In the past, moral agency has primarily been attributed to human beings and some higher-order animals. However, with the fast-paced advancements made in artificial intelligence (AI), we are quickly approaching the point where we must ask an important question: should we grant moral agency status to AI? In this paper I argue that for an entity to be granted moral agency status, deliberate norm-adherence must be possible (at a minimum), and that, under the current status quo, AI systems are unable to meet this criterion. The novel contribution this paper makes to the field of machine ethics is, first, to provide at least two criteria with which we can determine moral agency status, by establishing the possibility of deliberate norm-adherence through an examination of the possibility of deliberate norm-violation; and second, to show that establishing moral agency in AI suffers the same pitfalls as establishing moral agency in constitutive accounts of agency.

Notes

  1.

    The level of abstraction is determined, according to Floridi and Sanders, “by the way in which one chooses to describe, analyse and discuss a system and its context. LoA is formalised in the concept of ‘interface’, which consists of a set of features, the observables. Agenthood, and in particular moral agenthood, depends on a LoA”.

  2.

    For more on moral patiency and AI, see Gunkel (2012).

  3.

    I think it should go without saying that, as AI progresses, what is discussed here may no longer be relevant in a few decades. But this is one of the unfortunate consequences of doing research in such a fast-paced field.

  4.

    No doubt, far more criteria need to be added in order for us to truly determine the moral status of an entity. But here, I only want to introduce two.

  5.

    Many thanks to Christoph Hanisch who proposed that I use the terms norm-compliance and norm-endorsement.

  6.

    I appreciate that I am setting a tacit counterfactual condition here that should, in a longer account, be a) more carefully worked out, and b) related to standard versions of the Principle of Alternate Possibilities (starting with Frankfurt 1969). But I hope that the intuitive point I am trying to make here is clear without getting bogged down in the intricacies of either the vast literature on counterfactuals or on PAP.

  7.

    Thanks to Veli Mitova, who used this term in our discussions.

  8.

    There are, of course, various ways of getting around this, such as voluntarist approaches or a hybrid theory which sees the merging of voluntarist and constitutivist features (Bratman 2007; Korsgaard 2008; Katsafanas 2013; Rosati 1995, 2003, 2016; Tiffany 2012). However, these approaches face their own set of problems, which I cannot go into here for the sake of brevity. This paper is not particularly interested in proving the legitimacy of constitutivism, but only in the constitutive relationship between agency and norms, where norms are adhered to in virtue of their being part and parcel of the features constituting agency.

References

  1. Bratman, M. (2007). Structures of agency. New York: Oxford University Press.

  2. Castelfranchi, C., Dignum, F., Jonker, C., & Treur, J. (2000). Deliberative normative agents: Principles and architecture. Intelligent agents (pp. 364–378). Berlin: Springer.

  3. Coeckelbergh, M. (2009). Virtual moral agency, virtual moral responsibility: On the moral significance of the appearance, perception, and performance of artificial agents. AI and Society,1, 10–25. https://doi.org/10.1007/s00146-009-0208-3.

  4. Davidson, D. (1963). Actions, reasons, and causes. The Journal of Philosophy,60(23), 685–700.

  5. Enoch, D. (2006). Agency, shmagency: Why normativity won't come from what is constitutive of action. Philosophical Review, 115(2), 169–198.

  6. Ferrero, L. (2009). Constitutivism and the inescapability of agency. Oxford Studies in Metaethics,IV, 303–333.

  7. Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14, 349–379.

  8. Frankfurt, H. (1969). Alternative possibilities and moral responsibility. Journal of Philosophy,66(23), 829–839.

  9. Gunkel, D. (2012). The machine question: Critical perspectives on AI, robots, and ethics. Cambridge: MIT Press.

  10. Hansson, S. (1994). Decision theory: A brief introduction. Stockholm: Royal Institute of Technology.

  11. Huffer, B. (2007). Actions and outcomes: Two aspects of agency. Synthese,157, 241–265.

  12. Johnson, A., & Hathcock, D. (n.d.). Study abroad and moral development. eJournal of Public Affairs, 3(3), 52–70.

  13. Kant, I. (1785). Groundwork for the metaphysics of morals (A. Wood, Ed.). New Haven: Yale University Press.

  14. Katsafanas, P. (2013). Agency and the foundation of ethics: Nietzschean constitutivism. Oxford: Oxford University Press.

  15. Korsgaard, C. (2008). The constitution of agency: Essays on practical reason and moral psychology. Oxford: Oxford University Press.

  16. Korsgaard, C. (2009). Self-constitution: Agency, identity, and integrity. Oxford: Oxford University Press.

  17. McKenna, M., & Coates, J. (2018). Compatibilism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2018 ed.). Retrieved March 25, 2019, from https://plato.stanford.edu/archives/win2018/entries/compatibilism/.

  18. Moor, J. (2011). The nature, importance, and difficulty of machine ethics. In M. Anderson & S. L. Anderson (Eds.), Machine ethics (pp. 13–20). New York: Cambridge University Press.

  19. Müller, V. C. (2019). Ethics of AI and robotics. Retrieved August 15, 2019, from https://www.researchgate.net/project/Ethics-of-AI-and-Robotics-for-Stanford-Encyclopedia-of-Philosophy.

  20. Railton, P. (2003). On the hypothetical and non-hypothetical in reasoning about belief and action. In G. Cullity & B. Gaut (Eds.), Ethics and practical reason (pp. 53–80). Oxford: Clarendon Press.

  21. Rosati, C. (1995). Naturalism, normativity, and the open question argument. Noûs, 29(1), 46–70.

  22. Rosati, C. (2003). Agency and the open question argument. Ethics,113(3), 490–527.

  23. Rosati, C. (2016). Agents and "shmagents": An essay on agency and normativity. In R. Shafer-Landau (Ed.), Oxford studies in metaethics 11 (pp. 182–213). Oxford: Oxford University Press.

  24. Tiffany, E. (2012). Why be an agent? Australasian Journal of Philosophy, 90(2), 223–233.

  25. Velleman, D. (1996). The possibility of practical reason. Ethics,106(4), 694–726.

  26. Velleman, D. (2004). Replies to discussion on the possibility of practical reason. Philosophical Studies,121, 225–238.

  27. Warfield, T. (2000). Causal determinism and human freedom are incompatible: A new argument for incompatibilism. Noûs, 34, 167–180.

Acknowledgements

Many thanks to Veli Mitova for her encouragement, invaluable feedback and assistance. Thanks to Thaddeus Metz for his advice regarding article writing and for feedback on a related project which greatly informed this one. Further thanks to Samuel Segun for his very helpful comments. Finally, thanks to all at the University of Johannesburg and SolBridge International School of Business who facilitated and assisted.

Author information

Correspondence to Danielle Swanepoel.

Cite this article

Swanepoel, D. The possibility of deliberate norm-adherence in AI. Ethics Inf Technol (2020). https://doi.org/10.1007/s10676-020-09535-1

Keywords

  • Artificial intelligence
  • Moral agency
  • Norm-violation
  • Norm-adherence
  • Constitutivism