Artificial Intelligence and People with Disabilities: a Reflection on Human–AI Partnerships

Chapter in Humanity Driven AI

Abstract

Artificial intelligence (AI) has much potential to enhance opportunities and independence for people with disabilities by addressing practical problems that they encounter in a variety of domains. Indeed, the partnership between AI and people with disabilities already has a history that spans several decades, through the use of assistive technologies based, for example, on speech recognition, optical character recognition, word prediction, and text-to-speech conversion. Contemporary developments in machine learning can extend and enhance the capabilities of such assistive technology applications, while opening the way to further improvements in accessibility. AI applications intended to benefit people with disabilities can also give rise to questions of values and priorities. These issues are here discussed in relation to the role of design practices and policy in shaping the solutions adopted. AI can also contribute to discrimination on grounds of disability, especially if machine learning algorithms are substituted partly or completely for human decision making. The potential for bias and strategies for overcoming it raise as yet unresolved research questions. In exploring some of these considerations, a case is developed for favoring approaches which shape the normative and social context in which AI technologies are developed and used, as well as the technical details of their design.


Notes

  1. According to Degener [14], the CRPD acknowledges but then extends considerably beyond the conception of the human rights of people with disabilities recognized by the social model.

  2. The authors instead regard most forms of impairment as neutral traits that do not in themselves negatively affect quality of life.

  3. For an overview of these technical developments, see LeCun, Bengio and Hinton [31].

  4. For further discussion of issues raised by sound and image recognition systems designed for use by people with disabilities, including some of the concerns introduced here, see Findlater et al. [18].

  5. Solutions to the general problem of mixed traffic are developed in Nyholm and Smids [38].

  6. See generally Employer Assistance and Resource Network on Disability Inclusion [16].

  7. The risks of using AI as a tool of medical diagnosis in relation to people with disabilities are discussed in Trewin et al. [54].

  8. The limitation of data processing to specified, explicitly stated purposes is an aspect of European data protection law that raises difficulties for machine learning-based AI applications generally. See Marsch [37] for treatment of the relevant human rights obligations.

  9. Shew further develops the point in a brief discussion of additional examples, including the rationale for using companion robots, which may serve the interests of human caregivers more than those of the person with a disability whose needs are to be met.

  10. For an overview of the history and the guiding ideas, see Ehn [15]. A more recent introduction to participatory design appears in Spinuzzi [49].

  11. This consequence of the value placed on preexisting tacit knowledge is acknowledged as a limitation of participatory design in Spinuzzi [49].

  12. The approach is articulated and illustrated in Friedman, Kahn and Borning [20]. For a recent treatment of the underlying concepts and design methods, see Friedman and Hendry [19] (Chaps. 2 and 3).

  13. Interestingly, Friedman and Hendry [19] (Chap. 2) regard policy as a kind of technology for the purpose of applying value sensitive design methods.

  14. An informative overview of how discrimination can occur is presented in Barocas and Selbst [3].

  15. The cited references should be consulted for more detailed illustration and discussion of applications in which bias against people with disabilities can reasonably be foreseen.

  16. See Alston [1] for an overview of human rights-related concerns about this practice.

  17. Trewin et al. [54] acknowledge the practical dimension of the problem, and recommend consultation with stakeholders as part of the development process.

  18. The target variable is the outcome that the machine learning model is designed to predict. It is assumed here to be in the legitimate interest of the discriminator, such as the probability that a person would be an effective employee.

  19. Hoffman [24] argues that anti-discrimination law should be extended to address decisions based on predictions of a person’s likelihood of developing a disability, and to require disclosure of the use of data in making such decisions.

  20. An insightful discussion of intersectionality, noting the risk of over-simplifying its effects in responding to problems of injustice that result from machine learning technologies, appears in Hoffmann [25].

  21. The law concerning liability for disparate impact (often referred to outside the USA as indirect discrimination) has evolved differently across common law countries. See Khaitan [28] for a discussion.

  22. Prince and Schwarcz [41] (§ IV.B) consider potential reforms, such as restricting the variables that may be used by AI systems in making certain kinds of decisions to a prescribed list of permitted factors.

  23. Selbst and Barocas [44] (§ III.B) insightfully discuss difficulties resulting from the role of intuition in the reasoning required for the application of norms of non-discrimination. If the relations among variables apparently revealed by a machine learning system manifestly treat people with disabilities unfavorably, for example, but there is no coherent or plausible explanation of why this is the case, then evaluation of the grounds of these unequal outcomes becomes problematic. In some instances, techniques of ‘interpretable’ or ‘explainable’ machine learning may facilitate the emergence of a suitable explanation. On the other hand, and as the authors recognize, it would be naive to presuppose that social and natural phenomena are always amenable to explanations that cohere with human intuitions.

  24. An interesting further possibility is for a machine learning system to give an ‘explanation’ of its output that would enable an adversely affected person to change his or her situation sufficiently to achieve a more favorable classification. The difficulties of two promising approaches to such explanation are considered in Barocas, Selbst and Raghavan [4].

  25. Article 22 of the GDPR [17] establishes a limited right not to be subject to legally significant, fully automated decisions. For an argument against recognizing such a right to human involvement in individual decisions, which does not entirely address the philosophical grounds summarized in Binns [6], see Huq [26].

  26. Citron [11] (§ III.A and B) discusses the tendency of automation to substitute precise rules for more general legal standards that allow for the exercise of human discretion. This trend, Citron argues, prioritizes cost efficiency over justice.

  27. A clear summary of the authors’ position appears in Sunstein [50].

  28. A much discussed strategy for seeking to overcome human biases against social out-groups is the contact hypothesis. See, for example, Pettigrew and Tropp [39] and Pettigrew et al. [40].

  29. Under this proposal, the auditing is to be carried out by a regulator with the authority to compel changes that address discrimination. A more skeptical view of transparency as a means to greater accountability of machine learning systems is elaborated in Ananny and Crawford [2].

References

  1. Alston, P.: Report of the Special Rapporteur on extreme poverty and human rights. Tech. rep., United Nations (2019). https://www.ohchr.org/Documents/Issues/Poverty/A_74_48037_AdvanceUneditedVersion.docx

  2. Ananny, M., Crawford, K.: Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20(3), 973–989 (2018)

  3. Barocas, S., Selbst, A.D.: Big data’s disparate impact. Calif. L. Rev. 104, 671 (2016)

  4. Barocas, S., Selbst, A.D., Raghavan, M.: The hidden assumptions behind counterfactual explanations and principal reasons. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 80–89 (2020)

  5. Berke, L., Caulfield, C., Huenerfauth, M.: Deaf and hard-of-hearing perspectives on imperfect automatic speech recognition for captioning one-on-one meetings. In: Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 155–164 (2017)

  6. Binns, R.: Human judgment in algorithmic loops: Individual justice and automated decision-making. Regulation & Governance (2020). https://doi.org/10.1111/rego.12358

  7. Bogen, M., Rieke, A.: Help wanted: An examination of hiring algorithms, equity, and bias. Tech. rep., Upturn (2018). https://www.upturn.org/reports/2018/hiring-algorithms/

  8. Bradshaw-Martin, H., Easton, C.: Autonomous or ‘driverless’ cars and disability: A legal and ethical analysis. European Journal of Current Legal Issues 20(3) (2014)

  9. Brewer, R.N., Kameswaran, V.: Understanding the power of control in autonomous vehicles for people with vision impairment. In: Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 185–197 (2018)

  10. Campbell, S.M., Stramondo, J.A.: The complicated relationship of disability and well-being. Kennedy Institute of Ethics Journal 27(2), 151–184 (2017)

  11. Citron, D.K.: Technological due process. Wash. U. L. Rev. 85, 1249 (2007)

  12. Crawford, K., Schultz, J.: Big data and due process: Toward a framework to redress predictive privacy harms. B.C. L. Rev. 55, 93 (2014)

  13. Dasgupta, N.: Implicit ingroup favoritism, outgroup favoritism, and their behavioral manifestations. Social Justice Research 17(2), 143–169 (2004)

  14. Degener, T.: A new human rights model of disability. In: V. Della Fina, R. Cera, G. Palmisano (eds.) The United Nations Convention on the Rights of Persons with Disabilities, pp. 41–59. Springer (2017)

  15. Ehn, P.: Scandinavian design: On participation and skill. In: Participatory Design: Principles and Practices, p. 77. CRC Press (1993)

  16. Employer Assistance and Resource Network on Disability Inclusion: Use of artificial intelligence to facilitate employment opportunities for people with disabilities (2019). https://askearn.org/wp-content/uploads/2019/06/AI_PolicyBrief-A.pdf

  17. European Union: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union L 119, 1 (2016)

  18. Findlater, L., Goodman, S., Zhao, Y., Azenkot, S., Hanley, M.: Fairness issues in AI systems that augment sensory abilities. ACM SIGACCESS Accessibility and Computing (125) (2020)

  19. Friedman, B., Hendry, D.G.: Value Sensitive Design: Shaping Technology with Moral Imagination. MIT Press (2019)

  20. Friedman, B., Kahn, P.H., Borning, A.: Value sensitive design and information systems. In: K.E. Himma, H.T. Tavani (eds.) The Handbook of Information and Computer Ethics, pp. 69–101. John Wiley & Sons (2008)

  21. Friedman, B., Nissenbaum, H.: Bias in computer systems. ACM Transactions on Information Systems (TOIS) 14(3), 330–347 (1996)

  22. Giermanowska, E., Racław, M., Szawarska, D.: Employing People with Disabilities: Good Organisational Practices and Socio-cultural Conditions, chap. 2, pp. 9–36. Springer (2020)

  23. Guerreiro, J., Sato, D., Asakawa, S., Dong, H., Kitani, K.M., Asakawa, C.: CaBot: Designing and evaluating an autonomous navigation robot for blind people. In: The 21st International ACM SIGACCESS Conference on Computers and Accessibility, pp. 68–82 (2019)

  24. Hoffman, S.: Big data’s new discrimination threats: Amending the Americans with Disabilities Act to cover discrimination based on data-driven predictions of future disease. In: I.G. Cohen, H.F. Lynch, E. Vayena, U. Gasser (eds.) Big Data, Health Law, and Bioethics. Cambridge University Press (2018)

  25. Hoffmann, A.L.: Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society 22(7), 900–915 (2019)

  26. Huq, A.Z.: A right to a human decision. Va. L. Rev. 106, 611 (2020)

  27. Iglesias-Pérez, A., Loitsch, C., Kaklanis, N., Votis, K., Stiegler, A., Kalogirou, K., Serra-Autonell, G., Tzovaras, D., Weber, G.: Accessibility through preferences: Context-aware recommender of settings. In: International Conference on Universal Access in Human-Computer Interaction, pp. 224–235. Springer (2014)

  28. Khaitan, T.: Indirect discrimination. In: K. Lippert-Rasmussen (ed.) The Routledge Handbook of the Ethics of Discrimination, pp. 30–41. Routledge (2017)

  29. Kleinberg, J., Ludwig, J., Mullainathan, S., Sunstein, C.R.: Discrimination in the age of algorithms. Journal of Legal Analysis 10 (2018)

  30. Kleiner, A., Kurzweil, R.C.: A description of the Kurzweil Reading Machine and a status report on its testing and dissemination. Bull. Prosthet. Res. 10(27), 72–81 (1977)

  31. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)

  32. Lewis, C.: Simplicity in cognitive assistive technology: A framework and agenda for research. Universal Access in the Information Society 5(4), 351–361 (2007)

  33. Lippert-Rasmussen, K.: Nothing personal: On statistical discrimination. Journal of Political Philosophy 15(4), 385–403 (2007)

  34. Loitsch, C., Weber, G., Kaklanis, N., Votis, K., Tzovaras, D.: A knowledge-based approach to user interface adaptation from preferences and for special needs. User Modeling and User-Adapted Interaction 27(3–5), 445–491 (2017)

  35. Marks, M.: Emergent medical data (2017). https://blog.petrieflom.law.harvard.edu/2017/10/11/emergent-medical-data/

  36. Marks, M.: Algorithmic disability discrimination. In: I.G. Cohen, C. Shachar, A. Silvers, M.A. Stein (eds.) Disability, Health, Law, and Bioethics. Cambridge University Press (2020)

  37. Marsch, N.: Artificial intelligence and the fundamental right to data protection: Opening the door for technological innovation and innovative protection. In: Regulating Artificial Intelligence, pp. 33–52. Springer (2020)

  38. Nyholm, S., Smids, J.: Automated cars meet human drivers: Responsible human-robot coordination and the ethics of mixed traffic. Ethics and Information Technology, pp. 1–10 (2018)

  39. Pettigrew, T.F., Tropp, L.R.: A meta-analytic test of intergroup contact theory. Journal of Personality and Social Psychology 90(5), 751 (2006)

  40. Pettigrew, T.F., Tropp, L.R., Wagner, U., Christ, O.: Recent advances in intergroup contact theory. International Journal of Intercultural Relations 35(3), 271–280 (2011)

  41. Prince, A.E., Schwarcz, D.: Proxy discrimination in the age of artificial intelligence and big data. Iowa L. Rev. 105, 1257 (2019)

  42. Rambachan, A., Kleinberg, J., Mullainathan, S., Ludwig, J.: An economic approach to regulating algorithms. Tech. Rep. w27111, National Bureau of Economic Research (2020)

  43. Schauer, F.: Statistical (and non-statistical) discrimination. In: K. Lippert-Rasmussen (ed.) The Routledge Handbook of the Ethics of Discrimination, pp. 42–53. Routledge (2017)

  44. Selbst, A.D., Barocas, S.: The intuitive appeal of explainable machines. Fordham L. Rev. 87, 1085 (2018)

  45. Shakespeare, T.: Critiquing the social model. In: A. Lawson (ed.) Disability and Equality Law, pp. 67–94. Routledge, London (2017)

  46. Shakespeare, T.: The social model of disability. In: L.J. Davis (ed.) The Disability Studies Reader, 5th edn. Routledge (2017)

  47. Shew, A.: Ableism, technoableism, and future AI. IEEE Technology and Society Magazine 39(1), 40–85 (2020)

  48. Skitka, L.J., Mosier, K., Burdick, M.D.: Accountability and automation bias. International Journal of Human-Computer Studies 52(4), 701–717 (2000)

  49. Spinuzzi, C.: The methodology of participatory design. Technical Communication 52(2), 163–174 (2005)

  50. Sunstein, C.R.: Algorithms, correcting biases. Social Research: An International Quarterly 86(2), 499–511 (2019)

  51. Szarkowska, A., Krejtz, I., Klyszejko, Z., Wieczorek, A.: Verbatim, standard, or edited? Reading patterns of different captioning styles among deaf, hard of hearing, and hearing viewers. American Annals of the Deaf 156(4), 363–378 (2011)

  52. Treviranus, J.: The value of being different. In: Proceedings of the 16th Web for All Conference (Personalization: Personalizing the Web), pp. 1–7 (2019)

  53. Trewin, S.: AI fairness for people with disabilities: Point of view. arXiv preprint arXiv:1811.10670 (2018)

  54. Trewin, S., Basson, S., Muller, M., Branham, S., Treviranus, J., Gruen, D., Hebert, D., Lyckowski, N., Manser, E.: Considerations for AI fairness for people with disabilities. AI Matters 5(3), 40–63 (2019)

  55. UN: Convention on the Rights of Persons with Disabilities. United Nations Treaty Series 2515, 3 (2006)

  56. United States: K.W. v. Armstrong, No. 14-35296 (9th Cir. 2015)

  57. Vanderheiden, G.C., Treviranus, J., Gemou, M., Bekiaris, E., Markus, K., Clark, C., Basman, A.: The evolving Global Public Inclusive Infrastructure (GPII). In: International Conference on Universal Access in Human-Computer Interaction, pp. 107–116. Springer (2013)

  58. Whittaker, M., Alper, M., Bennett, C.L., Hendren, S., Kaziunas, L., Mills, M., Morris, M.R., Rankin, J., Rogers, E., Salas, M., et al.: Disability, bias, and AI. Tech. rep., AI Now Institute (2019). https://ainowinstitute.org/disabilitybiasai-2019.pdf

  59. World Health Organization: Disability and health (2020). https://www.who.int/news-room/fact-sheets/detail/disability-and-health

  60. World Institute on Disability: AI and accessibility (2019). https://wid.org/2019/06/12/ai-and-accessibility/

Acknowledgements

The author gratefully acknowledges Mark Hakkinen, Klaus Zechner, and Cary Supalo of Educational Testing Service for reviewing the manuscript. Mark Hakkinen and Kris Anne Kinney of Educational Testing Service offered valuable advice concerning creation of the diagrams. Anonymous reviewers contributed thoughtful suggestions for improving the chapter. This work has also been influenced by various seminars and workshops on the ethics of artificial intelligence that the author has attended.

Author information

Correspondence to Jason J. G. White.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

White, J.J.G. (2022). Artificial Intelligence and People with Disabilities: a Reflection on Human–AI Partnerships. In: Chen, F., Zhou, J. (eds) Humanity Driven AI. Springer, Cham. https://doi.org/10.1007/978-3-030-72188-6_14

  • DOI: https://doi.org/10.1007/978-3-030-72188-6_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-72187-9

  • Online ISBN: 978-3-030-72188-6

  • eBook Packages: Computer Science; Computer Science (R0)
