
Black is the new orange: how to determine AI liability

  • Original Research
  • Published in Artificial Intelligence and Law

Abstract

Autonomous artificial intelligence (AI) systems can behave unpredictably, causing loss or damage to individuals. Intricate questions must be resolved before courts can determine liability. Until recently, understanding the inner workings of these “black boxes” has been exceedingly difficult; however, Explainable Artificial Intelligence (XAI) can help untangle the complex problems that autonomous AI systems create. In this context, this article surveys the technical explanations that XAI can provide and shows how explanations suitable for establishing liability can be reached in court. It analyses whether existing liability frameworks, in both civil and common law tort systems, can, with the support of XAI, address legal concerns related to AI. Lastly, it argues that the further development and adoption of these frameworks should allow AI liability cases to be decided under current legal and regulatory rules until new liability regimes for AI are enacted.


Notes

  1. See this discussion in the report mentioned above, House of Lords (2018), and in Calo et al. (2016), Robot Law (introduction, pp. xiv–xv, 98).

  2. For Wright (1985), this test means that “something is a cause if it is a ‘necessary element of a set of conditions jointly sufficient for the result’.”

  3. The only exception to strict liability that does not demand a ‘causation’ element in Brazilian law is related to integral risk theory.

  4. See Bloch (2005).

  5. Ibid.

  6. See Bloch (2011).

  7. See Muschara (2007).

  8. See Cohen (1995).

  9. See Angelov and Soares (2019).

  10. See Ribeiro et al. (2016).

  11. See Lundberg and Lee (2017).

  12. See Friedman (2001).

  13. See Ho (1995).

  14. See Goldstein et al. (2015).

  15. See Friedman (2001).

  16. See Gu et al. (2019) and Nicolae et al. (2018).

  17. See Verma and Rubin (2018), d’Alessandro et al. (2017), and Friedler et al. (2016).

  18. See Piatetsky-Shapiro (2007).

  19. See Harper and Pickett (2006).

  20. See Chapman et al. (2000).

  21. See Aïvodji et al. (2019).

  22. See Berendt and Preibusch (2012) and Adebayo (2016).
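The notes above cite several model-agnostic XAI techniques (LIME, SHAP, partial dependence and ICE plots). As a purely illustrative sketch of the kind of local explanation such tools produce, the following toy example (the scoring function, feature names, and coefficients are all invented for illustration, not drawn from this article) attributes a single automated decision to its inputs by ablating one feature at a time:

```python
# Illustrative only: a toy "black box" and a naive ablation-based
# explanation, in the spirit of the model-agnostic XAI tools cited
# above (LIME, SHAP). All names and coefficients are invented.

def black_box_score(applicant):
    """Stand-in for an opaque model, e.g. a credit-risk scorer."""
    return (0.5 * applicant["income"]
            - 0.8 * applicant["debt"]
            + 0.2 * applicant["years_employed"])

def ablation_explanation(model, instance, baseline):
    """Attribute the score to each feature by replacing it with a
    neutral baseline value and measuring how the output changes."""
    full_score = model(instance)
    contributions = {}
    for feature in instance:
        perturbed = dict(instance)
        perturbed[feature] = baseline[feature]
        contributions[feature] = full_score - model(perturbed)
    return contributions

applicant = {"income": 4.0, "debt": 3.0, "years_employed": 10.0}
baseline = {f: 0.0 for f in applicant}

for feature, contribution in ablation_explanation(
        black_box_score, applicant, baseline).items():
    print(f"{feature}: {contribution:+.2f}")
# income: +2.00, debt: -2.40, years_employed: +2.00
```

LIME and SHAP refine this idea with local sampling and game-theoretic weighting respectively, but the result is similar in spirit: a per-feature contribution that a court, with expert assistance, could inspect when assessing how a decision was reached.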

References

  • Adebayo, J. A. (2016) FairML: ToolBox for diagnosing bias in predictive modeling (Doctoral dissertation, Massachusetts Institute of Technology)

  • Aïvodji, U., Arai, H., Fortineau, O., Gambs, S., Hara, S., & Tapp, A. (2019) Fairwashing: the risk of rationalization. In International Conference on Machine Learning (pp. 161–170). PMLR

  • Angelov, P., & Soares, E. (2019) Towards explainable deep neural networks (xDNN). arXiv: https://arxiv.org/abs/1912.02523

  • Berendt, B., & Preibusch, S. (2012) Exploring discrimination: A user-centric evaluation of discrimination-aware data mining. In 2012 IEEE 12th International Conference on Data Mining Workshops (pp. 344–351). IEEE

  • Bloch, H. P. (2005) Successful failure analysis strategies. Reliability Advantage: Training Bulletin. Retrieved from http://www.heinzbloch.com/docs/ReliabilityAdvantage/Reliability_Advantage_Volume_3.pdf

  • Bloch, H. P. (2011) Structured failure analysis strategies solve pump problems. Machinery Lubrication. Retrieved from http://www.machinerylubrication.com/Read/28467/pump-failure-analysis

  • Calo, R., Froomkin, M., & Kerr, I. (2016) Robot Law. Edward Elgar Publishing, UK

  • Carrier, B. (2002) Defining Digital Forensic Examination and Analysis Tools. In 2002 Digital Forensics Research Workshop

  • Chapman, P., Clinton, J., Kerber, R., Khabaza, T., Reinartz, T., Shearer, C., & Wirth, R. (2000) CRISP-DM 1.0: Step-by-step Data Mining Guide. SPSS

  • Chappell, B. (2015) 'It Was Installed For This Purpose,' VW's U.S. CEO Tells Congress About Defeat Device. NPR: https://www.npr.org/sections/thetwo-way/2015/10/08/446861855/volkswagen-u-s-ceo-faces-questions-on-capitol-hill (22 May 2020)

  • Cohen, W. (1995) Fast effective rule induction. Proceedings of the Twelfth International Conference on Machine Learning. Morgan Kaufmann Publishers Inc, 115–123

  • d’Alessandro B, O’Neil C, LaGatta T (2017) Conscientious classification: a data scientist’s guide to discrimination-aware classification. Big Data 5:120–134. https://doi.org/10.1089/big.2016.0048


  • Fisher, D. (2019). Explainable AI: Addressing Trust, Utility, Liability. (31 May 2019). aitrends The Business and Technology of Enterprise AI. https://www.aitrends.com/explainable-ai/a-chief-ai-officer-on-explainable-ai-addressing-trust-utility-liability

  • Friedler, S., Scheidegger, C., & Venkatasubramanian, S. (2016) On the (im)possibility of fairness. arXiv: https://arxiv.org/abs/1609.07236

  • Friedman, J. H. (2001) Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5) https://projecteuclid.org/euclid.aos/1013203451

  • Goldstein, A., Kapelner, A., Bleich, J., & Pitkin, E. (2015) Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation. J Comput Graph Statistics 24(1):44–65


  • Goudkamp, J. & Peel, W.E. (2014) Tort Winfield & Jolowicz. Sweet & Maxwell

  • Gunning D, Aha D (2019) DARPA’s Explainable Artificial Intelligence (XAI) Program. AI Mag. https://doi.org/10.1609/aimag.v40i2.2850


  • Gunning, D. (2019) DARPA's explainable artificial intelligence (XAI) program. Proceedings of the 24th International Conference on Intelligent User Interfaces, IUI, p. 47

  • Harper G, Pickett S (2006) Methods for mining HTS data. Drug Discovery Today 11(15–16):694


  • Ho, T. K. (1995) Random decision forests. Proceedings of the 3rd International Conference on Document Analysis and Recognition, IEEE, 278–282

  • House of Lords, Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able? (2018) (Report of Session 2017–19)

  • Lundberg, S. M., & Lee, S. (2017) A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, NIPS '17: 465–474


  • Mojsilovic, A. (2019). Introducing AI Explainability 360. https://www.ibm.com/blogs/research/2019/08/ai-explainability-360/ (4 June 2021)

  • Muschara, T. (2007). INPO’s approach to human performance in the United States commercial nuclear power industry. IEEE Xplore Digital Library. Retrieved from http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4413179&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4413179

  • National Transportation Safety Board (NTSB) (March 18, 2018) Preliminary report highway: HWY18MH010. https://www.ntsb.gov/pages/default.aspx (24 February 2020)

  • Nicolae, M., & Sinn, M. (2018) Adversarial Robustness Toolbox v1.0.0. arXiv: https://arxiv.org/abs/1807.01069 (24 May 2020)

  • Palmer, G. (2001). A Road Map for Digital Forensic Research. Technical Report DTR-T001–01, DFRWS, Report From the First Digital Forensic Research Workshop (DFRWS).

  • Piatetsky-Shapiro, G. (2007) Methodology Poll. KDnuggets. https://www.kdnuggets.com/polls/2007/data_mining_methodology.htm (14 June 2020)

  • Reed, C., Kennedy, E. & Silva, S. (2016). Responsibility, Autonomy and Accountability: legal liability for machine learning. Queen Mary University of London, School of Law Legal Studies Research Paper No. 243/2016

  • Reith M, Carr C, Gunsch G (2002) An Examination of Digital Forensic Models. Int J Digital Evidence 1:3


  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016) "Why should I trust you?" Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16: 1135–1144

  • Verma, S. & Rubin, J. (2018) Fairness Definitions Explained. ACM/IEEE International Workshop on Software Fairness Gothenburg: IEEE, p.1

  • Wilson PF, Dell LD, Anderson GF (1993) Root Cause Analysis: A Tool for Total Quality Management. ASQ Quality Press, Milwaukee, Wisconsin


  • Wright RW (1985) Causation in tort law. Calif Law Rev 73(6):1735–1828. https://doi.org/10.2307/3480373



Acknowledgements

We gratefully acknowledge Dr Armando Castro’s invaluable comments on the revision of this article.

Author information


Corresponding author

Correspondence to Clarice Marinho Martins.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Padovan, P.H., Martins, C.M. & Reed, C. Black is the new orange: how to determine AI liability. Artif Intell Law 31, 133–167 (2023). https://doi.org/10.1007/s10506-022-09308-9

