Volume 7, Issue 4, pp 233–242

Delegating and Distributing Morality: Can We Inscribe Privacy Protection in a Machine?


Abstract

This paper addresses the question of delegating morality to a machine through a consideration of whether or not non-humans can be considered moral. The aspect of morality under consideration here is the protection of privacy. The topic is introduced through two cases in which a failure to share and retain personal data protected by UK data protection law had tragic consequences. In some sense this can be regarded as a failure in the process of delegating morality to a computer database. In the UK, the issues these cases raise have resulted in legislation designed to protect children, which allows for the creation of a huge database of children. Paradoxically, we have the situation where we failed to use digital data in enforcing the law to protect children, yet we may now rely heavily on digital technologies to care for children. I draw on the work of Floridi, Sanders, Collins, Kusch, Latour and Akrich, a spectrum of work stretching from philosophy to the sociology of technology and the “seamless web” or “actor–network” approach to studies of technology. Intentionality is considered, but not deemed necessary for meaningful moral behaviour. Floridi and Sanders’ concept of “distributed morality” accords with the network of agency characterized by actor–network approaches. The paper concludes that enfranchising non-humans, in the shape of computer databases of personal data, as moral agents is not necessarily problematic, but that the delegation of morality must be balanced between human and non-human actors.