Abstract
This study deals with the nature of a post-truth society, whose emergence is imminent given the widespread use of artificial intelligence (AI)-based information systems that rely on machine learning methods such as deep learning, and examines the social attitudes and responsibilities of organisations that develop, implement, operate and/or use those systems. In such a society, truths about individuals, groups, organisations, communities, societies, nations, things, events and the world become meaningless or worthless; individuals are treated as black boxes to be manipulated and exploited by malicious AI-based system operators; and the four factors that erode accountability in computing (many hands, bugs, the computer as a scapegoat, and ownership without liability; Nissenbaum, 1996) are aggravated by the unpredictability and uncontrollability of AI-based system behaviours, leading to a lack of responsibility and accountability in AI computing. To prevent the full emergence of a post-truth society, or at least to mitigate the risks associated with such a society, and to restore responsibility and accountability to computing, organisations that are key players in AI computing must be required to proactively address the ethical and social issues caused by the development and use of AI-based systems.
Notes
- 1.
Thinking about the far-distant future, we may need to specify the conditions for granting legal personality to AI-based systems before it is too late. However, this is beyond the scope of this study.
References
Barr, A. (2015, July 1). Google mistakenly tags black people as ‘gorillas,’ showing limits of algorithms. The Wall Street Journal. Accessed May 23, 2021, from https://www.wsj.com/articles/BL-DGB-42522
Coeckelbergh, M. (2020). AI ethics. The MIT Press.
Cook, J. (2019a, June 12). Deepfake videos and the threat of not knowing what’s real. Huffpost. Accessed May 15, 2021, from https://www.huffpost.com/entry/deepfake-videos-and-the-threat-of-not-knowing-whats-real_n_5cf97068e4b0b08cf7eb2278
Cook, J. (2019b, June 23). Here’s what it’s like to see yourself in a deepfake porn video: There’s almost nothing you can do to get a fake sex tape of yourself taken offline. Huffpost. Accessed May 15, 2021, from https://www.huffpost.com/entry/deepfake-porn-heres-what-its-like-to-see-yourself_n_5d0d0faee4b0a3941861fced
Heaven, W. D. (2020, July 17). Predictive policing algorithms are racist. They need to be dismantled. MIT Technology Review. Accessed May 23, 2021, from https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
Ihde, D. (1990). Technology and the lifeworld: From garden to earth. Indiana University Press.
Kemeny, R. (2018, July 10). AIs created our fake video dystopia but now they could help fix it: New software developed by artificial intelligence researchers could help in the fight against so-called deepfake videos. Wired. Accessed May 15, 2021, from https://www.wired.co.uk/article/deepfake-fake-videos-artificial-intelligence
Leveson, N. G., & Turner, C. S. (1993). An investigation of the Therac-25 accidents. IEEE Computer, 26(7), 18–41. https://doi.org/10.1109/MC.1993.274940
Lorinc, J. (2021, January 12). From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines. Toronto Star. Accessed May 20, 2021, from https://www.thestar.com/news/atkinsonseries/2021/01/12/from-facial-recognition-to-predictive-technologies-big-data-policing-is-rife-with-technical-ethical-and-political-landmines.html
Murata, K. (2013). Construction of an appropriately professional working environment for IT professionals: A key element of quality IT-enabled services. In S. Uesugi (Ed.), IT enabled services (pp. 61–75). Springer. https://doi.org/10.1007/978-3-7091-1425-4_4
Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25–42. https://doi.org/10.1007/BF02639315
Owen, R., Macnaghten, P. M., & Stilgoe, J. (2012). Responsible research and innovation: From science in society to science for society, with society. Science and Public Policy, 39(6), 751–760. https://doi.org/10.1093/scipol/scs093
Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin Press.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Raymond, E. S. (1999). The cathedral and the bazaar: Musings on Linux and open source by an accidental revolutionary. O’Reilly Media.
Stahl, B. C. (2013). Responsible research and innovation: The role of privacy in an emerging framework. Science and Public Policy, 40(6), 708–716. https://doi.org/10.1093/scipol/sct067
Strand, R., Spaapen, J., Bauer, M. W., Hogan, E., Revuelta, G., Stagl, S., Paula, L., & Guimarães Pereira, Â. (2015). Indicators for promoting and monitoring responsible research and innovation. Report from the Expert Group on Policy Indicators for Responsible Research and Innovation. https://doi.org/10.2777/9742
Sunstein, C. (2000). Deliberative trouble? Why groups go to extremes. The Yale Law Journal, 110(1), 71–119. https://doi.org/10.2307/797587
Surowiecki, J. (2004). The wisdom of crowds: Why the many are smarter than the few and how collective wisdom shapes business, economies, societies and nations. Doubleday.
Taylor, S., Pickering, B., Boniface, M., Anderson, M., Danks, D., Følstad, A., Leese, M., Müller, V. C., Sorell, T., Winfield, A., & Woollard, F. (2018). Responsible AI: Key themes, concerns & recommendations for European research and innovation. HUB4NGI Consortium. https://doi.org/10.5281/zenodo.1303252
Wall, M. (2019, July 8). Biased and wrong? Facial recognition tech in the dock. BBC News. Accessed May 20, 2021, from https://www.bbc.com/news/business-48842750
Wang, P., Mi, X., Liao, X., Wang, X., Yuan, K., Qian, F., & Beyah, R. A. (2018). Game of missuggestions: Semantic analysis of search-autocomplete manipulations. Network and Distributed System Security Symposium 2018. Accessed May 20, 2021, from https://informatics.indiana.edu/xw7/papers/peng18ndss.pdf
Yadron, D. (2014, April 11). Heartbleed bug’s ‘voluntary’ origins: Internet security relies on a small team of coders, most of them volunteers; flaw was a fluke. The Wall Street Journal. Accessed May 15, 2021, from https://www.wsj.com/articles/programmer-says-flub-not-ill-intent-behind-heartbleed-bug-1397225513
Yamazaki, T., Murata, K., Orito, Y., & Shimizu, K. (2020). Post-truth society: The AI-driven society where no one is responsible. In M. Arias Oliva, J. Pelegrín Borondo, K. Murata, & A. M. Lara Palma (Eds.), Societal challenges in the smart society (ETHICOMP Book Series) (pp. 397–405). Universidad de La Rioja.
Acknowledgements
This study is based on the author’s previous work with Dr. Yohko Orito of Ehime University, Mr. Tatsuya Yamazaki of the University of Toyama and Dr. Kazuyuki Shimizu of Meiji University, which was presented at the ETHICOMP 2020 conference and published as a full conference paper under the title of “Post-truth society: the AI-driven society where no one is responsible” (Yamazaki et al., 2020). The author thanks these colleagues for their enthusiastic participation in discussions about the phenomena of the post-truth society. The author also thanks Dr. Fareed Ben-Youssef of Texas Tech University and Dr. Paul B. de Laat of the University of Groningen for helpful suggestions provided during the conference. This study was supported by the JSPS Grant-in-Aid for Scientific Research (C) 20K01920.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Murata, K. (2022). Post-truth: Organisational social responsibility in an AI-driven society. In A. Bounfour (Ed.), Platforms and artificial intelligence (Progress in IS). Springer, Cham. https://doi.org/10.1007/978-3-030-90192-9_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-90191-2
Online ISBN: 978-3-030-90192-9
eBook Packages: Business and Management (R0)