
Governing Artificial Intelligence in Post-Pandemic Society

Chapter in Global Pandemic and Human Security

Abstract

The pandemic escalated the need to adopt technology for human security and public services. Technological integration and digital transformation are the focus of post-pandemic strategies to recover and reconstruct civic society across the globe, especially in the domains of healthcare, education, surveillance, and governance. Artificial intelligence (AI) is seen as benefiting society by building and assisting critical socio-technical systems.

Automated decision-making through algorithms is widely debated for its limitations in tackling bias and its inability to discourage unintended consequences. Moreover, AI learns patterns from data that are inherently biased owing to existing socio-economic complexities. When pervasive AI applications are implemented and integrated with social systems, they are observed to pose socio-ethical challenges such as the institutionalization of discrimination, biased decision-making, intrusiveness, low accountability, and mistrust. The threats and vulnerabilities imposed on human communities, such as natural disasters, health pandemics, and economic uncertainties, make the adoption of AI applications inevitable; such applications must therefore mitigate these socio-ethical challenges and adhere to human security principles. Current data protection laws seem insufficient to protect human rights in these scenarios.
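
Group-fairness metrics make the bias concern above concrete. The following is a minimal illustrative sketch, not drawn from the chapter: the data, the group coding, and the 0.8 "four-fifths rule" threshold are assumptions. The disparate impact ratio compares favorable-outcome rates between an unprivileged and a privileged group:

```python
# Minimal sketch (illustrative, not the chapter's method): measuring
# disparate impact in automated decisions. Data and groups are made up.
import numpy as np

def disparate_impact(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    Values below roughly 0.8 are often read as evidence of adverse
    impact (the 'four-fifths rule' used in US employment contexts).
    """
    rate_unpriv = decisions[group == 0].mean()  # favorable rate, unprivileged
    rate_priv = decisions[group == 1].mean()    # favorable rate, privileged
    return rate_unpriv / rate_priv

# Hypothetical screening decisions (1 = approved) for two groups
decisions = np.array([1, 0, 0, 0, 1, 1, 1, 0, 1, 1])
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(disparate_impact(decisions, group))  # 0.4 / 0.8 = 0.5 -> flagged
```

Ratios well below 1.0, as in this toy example, are the kind of signal fairness toolkits surface for human review; they indicate correlation with group membership, not a legal finding.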

The literature advocates transparency, explainability, and auditability of AI models; however, these properties do not necessarily lead to accountability and fairness. Embedding these socio-technical systems in broader institutional frameworks of regulation and governance can balance the risks without compromising the benefits of technological innovation. The socio-economic context in which an AI model is deployed requires responses that are local and context specific. This, in turn, requires an AI governance framework that is comprehensive and prevention-oriented while protecting and empowering human value and dignity.
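
Auditability, one of the properties discussed above, presupposes that individual automated decisions can be reconstructed after the fact. A minimal sketch of a decision-provenance record follows; the field names, the model identifier, and the hashing scheme are illustrative assumptions rather than the chapter's prescribed framework:

```python
# Minimal sketch (an assumption, not the chapter's framework): recording
# decision provenance so automated decisions remain reviewable later.
import datetime
import hashlib
import json

def log_decision(model_version: str, inputs: dict, output, rationale: str) -> dict:
    """Build an audit record linking a decision to its model, inputs, and rationale."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "rationale": rationale,
    }
    # A content hash over the record makes later tampering detectable in audit.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# Hypothetical usage with made-up identifiers and fields
print(json.dumps(
    log_decision("risk-model-v3", {"age_band": "30-39"}, "review",
                 "score above manual-review threshold"),
    indent=2,
))
```

Chaining or externally timestamping such digests is one common way auditors gain confidence that records were not altered after the decision was made.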

This chapter provides a commentary on the social, ethical, and technical issues that AI can impose, along with the various aspects that must be considered when governing AI. Finally, an AI governance framework grounded in socio-administrative principles is proposed to extend their credibility in mitigating, managing, and governing threats to humans and upholding human security.



Author information


Corresponding author

Correspondence to Avadhanam Udayaadithya.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Arunagiri, A., Udayaadithya, A. (2022). Governing Artificial Intelligence in Post-Pandemic Society. In: Shaw, R., Gurtoo, A. (eds) Global Pandemic and Human Security. Springer, Singapore. https://doi.org/10.1007/978-981-16-5074-1_22


  • DOI: https://doi.org/10.1007/978-981-16-5074-1_22


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-5073-4

  • Online ISBN: 978-981-16-5074-1

  • eBook Packages: Social Sciences (R0)
