
Ethics, Speculation, and Values

  • Original Paper
  • Published in: NanoEthics

Abstract

Some writers claim that ethicists involved in assessing future technologies like nanotechnology and human enhancement devote too much time to debating issues that may or may not arise, at the expense of addressing more urgent, current issues. This practice has been claimed to squander the scarce and valuable resource of ethical concern. I assess this view, and consider some alternatives to ‘speculative ethics’ that have been put forward. I argue that attempting to restrict ethical debate so as to avoid considering unacceptably speculative scenarios would not only leave scientific progress devoid of ethical guidance, but would also rule out some of our most important ethical projects. I conclude that the issue of speculation is a red herring: what is most important is not that ethicists concentrate on current issues or those that are most likely to arise; but that ethicists, scientists, and others focus on maximising what is most valuable.

Notes

  1. Some have disputed the value of considering such hypothetical scenarios. Ludwig Wittgenstein believed that exotic thought experiments can be misleading and meaningless: ‘It is as if our concepts involve a scaffolding of facts … If you imagine certain facts otherwise … then you can no longer imagine the application of certain concepts’ ([3]: proposition 350). And Alasdair Urquhart has accused discussions in the philosophy of mind and cognitive science of a ‘fascination with far-fetched thought experiments’ ([4]: 27, cited at [5]: 34f), which illicitly derive their power from ignorance of the underlying science. For an illuminating discussion of the value of thought experiments, see [6].

  2. I use this term broadly, to include not only academic or philosophically-trained ethicists, but people in general whose job it is to decide how best to prepare for future scientific developments. This is in line with the concerns of the authors whose views I consider here.

  3. Others have used the term ‘speculative ethics’ to refer to other things. For example, John Maynard Keynes used it to refer to the investigation of what is good in and of itself (Keynes JM (1905) Miscellanea Ethica. Unpublished manuscript deposited in King’s College Library, University of Cambridge), and Albert William Levi, based on Henri Bergson’s distinction between knowing a thing by moving around an object and knowing a thing by entering into it, used it to refer to a method of ethics that proceeds by ‘moving around’ moral phenomena rather than attempting to penetrate them [9].

  4. Nordmann does not attribute this particular claim to de Grey, or to anyone else. He appears to use it only to illustrate his claim that ethical deliberation can escalate into a demand for action.

  5. He lists additional reasons why the ‘if and then’ syndrome is worrying: it casts purely speculative scenarios as more probable than they really are, or as more probable than we have reason to believe they are; and it prompts questions about ourselves and our future that are ‘unintelligible’. I shall ignore these other concerns, however. By themselves, they do not constitute a compelling case against speculation in ethics, and—for reasons I do not have space to outline here—the latter problem arguably does not exist at all.

  6. In fact, the picture is more complex than this. Judgments about the likelihood of a given scenario’s occurring depend on a variety of factors, of which its desirability is only one. Other relevant factors include whether the activity that might lead to the scenario is voluntary, whether it is perceived as controllable, and whether it is familiar. See [14].

  7. Occasionally Nordmann seems to do this. For example, he writes, ‘In order to resist foreshortening, considerable work is required to hold the scientific community to its own standards of honesty and clarity. Whose responsibility is it … to remind scientists … of the categorical difference between a therapeutic brain-machine interface and the vision of a thought-controlled mind-machine interface?’ ([5]: 43). However, we can interpret him to mean not that scientists should avoid speculation—that is, imagining what might be possible given the right technology—but that it is important that scientists should be clear about what is involved in achieving those scenarios that they envision.

  8. It may be objected that Weckert and Moor are concerned with the precautionary principle, which is often presented as an alternative to cost-benefit analysis, so the observation that their view is at odds with cost-benefit analysis can hardly count as an objection. It is worth noting, however, that although advocates of the precautionary principle prefer it to cost-benefit analysis precisely because it is more risk-averse, the version of the principle that Weckert and Moor favour, which considers only ‘credible’ threats, fails to register some risks that even cost-benefit analysis would recognise: if the costs of a particular outcome are sufficiently severe, cost-benefit analysis recommends preparing for that outcome even if it is highly unlikely.

References

  1. Parfit D (1984) Reasons and persons. 2nd edn. Oxford University Press, Oxford

  2. Dennett DC (1978) Brainstorms: philosophical essays on mind and psychology. Bradford Books, New York

  3. Wittgenstein L (1967) Zettel. G. Anscombe and G. von Wright (Eds) Translated by G. Anscombe. Blackwell, London

  4. Urquhart A (2004) Complexity. In: Floridi L (ed) The Blackwell guide to philosophy of computing and information. Blackwell, Malden, pp 18–27

  5. Nordmann A (2007) If and then: a critique of speculative nanoethics. NanoEthics 1:31–46. doi:10.1007/s11569-007-0007-6

  6. O’Neill O (1986) The power of example. Philosophy 61(235):5–29

  7. Manson N (2002) Formulating the precautionary principle. Environ Ethics 24:263–274

  8. Keiper A (2007) Nanoethics as a discipline? New Atlantis (Spring):55–67

  9. Levi AW (1943) Temperament and moral theory. Ethics 53(2):128–132. doi:10.1086/290336

  10. Sunstein CR (2004) Your money or your life. The New Republic, 11 March, 2004. Retrieved 8 August, 2008, from http://www.powells.com/review/2004_03_11

  11. Joy B (2000) Why the future doesn’t need us. Wired 8(04):238–262

  12. Brown JS, Duguid P (2001) Don’t count society out: a reply to Bill Joy. In: Denning PJ (ed) The invisible future. McGraw-Hill, New York, NY, pp 117–144 Retrieved 13 August, 2008, from http://people.ischool.berkeley.edu/~duguid/SLOFI/Don’t_Count.htm

  13. Slovic P, Finucane M, Peters E, MacGregor DG (2002) The affect heuristic. In: Gilovich T, Griffin D, Kahneman D (eds) Heuristics and biases: the psychology of intuitive judgement. Cambridge University Press, New York, NY, pp 397–420

  14. Starr C (1969) Social benefit versus technological risk. Science 165:1232–1238. doi:10.1126/science.165.3899.1232

  15. Crow MM, Sarewitz D (2001) Nanotechnology and societal transformation. In: Teich AH, Nelson SD, McEnaney C, Lita SJ (eds) AAAS Science and Technology policy yearbook. American Association for the Advancement of Science, Washington, D.C, pp 89–101

  16. Weckert J, Moor J (2007) The precautionary principle in nanotechnology. In: Allhoff F, Lin P, Moor J, Weckert J (eds) Nanoethics: the ethical and social implications of nanotechnology. Wiley-Interscience, Hoboken, NJ, pp 133–146

  17. Shaw GB (1921) Back to Methuselah. Penguin, London

  18. Tversky A, Kahneman D (1973) Availability: a heuristic for judging frequency and probability. Cognit Psychol 5:207–232. doi:10.1016/0010-0285(73)90033-9

  19. Boyle A (2008) Doomsday fears spark lawsuit. Cosmic Log, 27 March, 2008. Retrieved 2 July, 2008, from http://cosmiclog.msnbc.msn.com/archive/2008/03/27/823924.aspx

  20. Bostrom N (2006) Welcome to a world of exponential change. In: Miller P, Wilsdon J (eds) Better humans? The politics of human enhancement and life extension. DEMOS, London, pp 40–50

  21. Bostrom N (2007) Technological revolutions and the problem of prediction. In: Allhoff F, Lin P, Moor J, Weckert J (eds) Nanoethics: the ethical and social implications of nanotechnology. Wiley-Interscience, Hoboken, NJ, pp 101–118

  22. Department of Health Antibiotic Campaign 2007–2008. Retrieved 24 July, 2008, from http://www.dh.gov.uk/en/Publichealth/Patientsafety/Antibioticresistance/DH_082512

Author information

Corresponding author

Correspondence to Rebecca Roache.

About this article

Cite this article

Roache, R. Ethics, Speculation, and Values. Nanoethics 2, 317–327 (2008). https://doi.org/10.1007/s11569-008-0050-y
