Uncertainty in Machine Learning Applications: A Practice-Driven Classification of Uncertainty

  • Michael Kläs
  • Anna Maria Vollmer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11094)

Abstract

Software-intensive systems that rely on machine learning (ML) and artificial intelligence (AI) are increasingly becoming part of our daily life, e.g., in recommendation systems or semi-autonomous vehicles. However, the use of ML and AI is accompanied by uncertainty regarding their outcomes. Dealing with such uncertainty is particularly important when the actions of these systems can harm humans or the environment, as in the case of a medical product or a self-driving car. To enable a system to make informed decisions when confronted with the uncertainty of embedded AI/ML models and possible safety-related consequences, these models must not only provide a defined functionality but also describe as precisely as possible the likelihood of their outcome being wrong or outside a given range of accuracy. This paper therefore proposes a classification of major uncertainty sources that is usable and useful in practice: scope compliance, data quality, and model fit. In particular, we highlight the implications of these classes for the development and testing of ML and AI models by establishing links to specific activities during development and testing and to means for quantifying and dealing with these different sources of uncertainty.
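
The abstract's requirement that a model "describe as precisely as possible the likelihood of their outcome being wrong" can be checked in practice with a simple calibration analysis: group held-out predictions by the confidence the model reported and compare each group's mean confidence with its observed accuracy. The sketch below is illustrative only and is not taken from the paper; the function name, binning scheme, and toy data are assumptions.

```python
# Hypothetical sketch: comparing a model's stated confidence with its
# observed error rate on held-out data. A well-calibrated model's mean
# confidence per bin should roughly match the accuracy in that bin.

def calibration_table(confidences, correct, n_bins=5):
    """Group predictions into equal-width confidence bins and report,
    per non-empty bin: (bin index, count, mean confidence, accuracy)."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    table = []
    for i, b in enumerate(bins):
        if not b:
            continue
        mean_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        table.append((i, len(b), mean_conf, accuracy))
    return table

# Toy held-out results: (model confidence, prediction was correct?)
confs = [0.95, 0.90, 0.80, 0.85, 0.60, 0.55, 0.70, 0.30]
correct = [True, True, True, False, True, False, True, False]
for i, n, mc, acc in calibration_table(confs, correct):
    print(f"bin {i}: n={n} mean_conf={mc:.2f} accuracy={acc:.2f}")
```

A large gap between mean confidence and accuracy in a bin signals that the model's reported likelihood of being wrong cannot be trusted there, which is exactly the kind of quantified uncertainty the paper argues safety-related systems need.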

Keywords

Artificial intelligence · Dependability · Safety engineering · Data quality · Model validation · Empirical modelling

Notes

Acknowledgments

Parts of this work were funded by the German Federal Ministry of Education and Research (BMBF) under grant number 01IS16043E (CrESt).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Fraunhofer Institute for Experimental Software Engineering IESE, Kaiserslautern, Germany