Pragmatism for a Digital Society: The (In)significance of Artificial Intelligence and Neural Technology

Chapter
Part of the Advances in Neuroethics book series (AIN)

Abstract

Headlines in 2019 are inundated with claims about the “digital society,” making sweeping assertions of societal benefits and dangers caused by a range of technologies. This situation would seem an ideal motivation for ethics research, and indeed much research on this topic is published, with more every day. However, ethics researchers may feel a sense of déjà vu, as they recall decades of other heavily promoted technological platforms, from genomics and nanotechnology to machine learning. How should ethics researchers respond to the waves of rhetoric and accompanying academic and policy-oriented research? What makes the digital society significant for ethics research? In this paper, we consider two examples of digital technologies (artificial intelligence and neural technologies), showing the pattern of societal and academic resources dedicated to them. This pattern, we argue, reveals the jointly sociological and ethical character of significance attributed to emerging technologies. By attending to insights from pragmatism and science and technology studies, ethics researchers can better understand how these features of significance affect their work and adjust their methods accordingly. In short, we argue that the significance driving ethics research should be grounded in public engagement, critical analysis of technology’s “vanguard visions,” and in a personal attitude of reflexivity.

Keywords

Digital society · Pragmatism · Ethics · Sociotechnical imaginaries

Acknowledgments

This project was supported by ERANET-Neuron, the Canadian Institutes of Health Research and the Fonds de recherche du Québec—Santé (European Research Projects on Ethical, Legal, and Social Aspects of Neurosciences), and a career award from the Fonds de recherche du Québec—Santé (ER).


Copyright information

© Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. Pragmatic Health Ethics Research Unit, Institut de recherches cliniques de Montréal, Montreal, Canada
  2. Department of Neurology and Neurosurgery, McGill University, Montreal, Canada
  3. Center for Ethics and Law in the Life Sciences, University of Hannover, Hannover, Germany
  4. Department of Experimental Medicine (Biomedical Ethics Unit), McGill University, Montreal, Canada
  5. Department of Medicine and Department of Social and Preventive Medicine, Université de Montréal, Montreal, Canada
