Machine Self-confidence in Autonomous Systems via Meta-analysis of Decision Processes

  • Brett Israelsen
  • Nisar Ahmed
  • Eric Frew
  • Dale Lawrence
  • Brian Argrow
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 965)


Algorithmic assurances assist human users in trusting advanced autonomous systems appropriately. This work explores one approach to creating assurances in which systems self-assess their decision-making capabilities, resulting in a ‘self-confidence’ measure. We present a framework for self-confidence assessment and reporting using meta-analysis factors, and then develop a new factor pertaining to ‘solver quality’ in the context of solving Markov decision processes (MDPs), which are widely used in autonomous systems. A novel method for computing solver quality self-confidence is derived, drawing inspiration from empirical hardness models. Numerical examples show our approach has desirable properties for enabling an MDP-based agent to self-assess its performance for a given task under different conditions. Experimental results for a simulated autonomous vehicle navigation problem show significantly improved delegated task performance outcomes in conditions where self-confidence reports are provided to users.
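The abstract's core idea — predicting a solver's expected performance from task features, in the spirit of empirical hardness models, and reporting it as a bounded self-confidence score — can be illustrated with a minimal sketch. This is not the paper's actual method or API; the class name `SolverQualityModel`, the nearest-neighbour predictor, and the linear reward-to-confidence mapping are all illustrative assumptions standing in for the learned models the paper derives.

```python
import math

class SolverQualityModel:
    """Hedged sketch: estimate 'solver quality' self-confidence for an
    MDP solver by predicting expected reward from task features.

    Here a 1-nearest-neighbour lookup over offline benchmark runs stands
    in for the empirical-hardness-style performance model; the real
    approach in the paper is more sophisticated.
    """

    def __init__(self):
        # Offline benchmark data: (task feature vector, observed reward).
        self.runs = []

    def record(self, features, reward):
        """Store one offline benchmark run of the solver."""
        self.runs.append((tuple(features), float(reward)))

    def predict_reward(self, features):
        """Predict reward for a new task as the reward of the most
        similar benchmarked task (Euclidean distance in feature space)."""
        best = min(self.runs, key=lambda run: math.dist(run[0], features))
        return best[1]

    def self_confidence(self, features, reward_floor, reward_ceiling):
        """Map the predicted reward onto [0, 1] relative to known
        worst/best-case bounds, giving a reportable confidence score."""
        r = self.predict_reward(features)
        span = reward_ceiling - reward_floor
        return max(0.0, min(1.0, (r - reward_floor) / span))
```

A usage pattern under these assumptions: benchmark the solver offline on tasks with known features, then at deployment time compute the score for the current task's features and report it to the human delegator before the task is accepted.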


Keywords: Human-machine systems · Artificial intelligence · Self-assessment



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Brett Israelsen (1)
  • Nisar Ahmed (1)
  • Eric Frew (1)
  • Dale Lawrence (1)
  • Brian Argrow (1)

  1. University of Colorado Boulder, Boulder, USA
