Self-assessment of Proficiency of Intelligent Systems: Challenges and Opportunities

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1210)


Autonomous systems, although capable of performing complicated tasks much faster than humans, are brittle in the face of the uncertainties encountered in most real-time applications. The people supervising these systems often rely on information relayed by the system itself to make decisions, which places a burden on the system to self-assess its proficiency and communicate the relevant information.

Proficiency self-assessment benefits from an understanding of how well the models and decision mechanisms used by a robot align with the world and with a problem holder’s goals. This paper makes three contributions: (1) identifying the importance of goal, system, and environment for proficiency assessment; (2) completing the phrase “proficient ‹preposition›” using an understanding of proficiency span; and (3) proposing the proficiency dependency graph, a representation of the causal relationships that contribute to failures, which highlights how an agent can reason about its own proficiency given alterations in goal, system, and environment.
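As an illustrative sketch only (not the paper’s implementation), a proficiency dependency graph of the kind described above can be modeled as a directed graph whose nodes are goal, system, and environment factors and whose edges mean “a change or failure in the parent can degrade the child.” All node names and the class below are hypothetical examples:

```python
# Hypothetical sketch of a proficiency dependency graph. Nodes represent
# goal, system, and environment factors; a directed edge (cause -> effect)
# means a failure or alteration of `cause` can contribute to a failure of
# `effect`. Names are illustrative, not taken from the paper.
from collections import defaultdict, deque


class ProficiencyDependencyGraph:
    def __init__(self):
        self.children = defaultdict(set)  # cause -> set of effects

    def add_dependency(self, cause, effect):
        """Record that a failure of `cause` can cause a failure of `effect`."""
        self.children[cause].add(effect)

    def affected_by(self, changed_factor):
        """Return every factor whose proficiency may degrade when
        `changed_factor` is altered (breadth-first walk over causal edges)."""
        seen, queue = set(), deque([changed_factor])
        while queue:
            node = queue.popleft()
            for child in self.children[node]:
                if child not in seen:
                    seen.add(child)
                    queue.append(child)
        return seen


# Example: an environmental change (lighting) degrades perception, which
# degrades grasping, which threatens a pick-and-place goal.
g = ProficiencyDependencyGraph()
g.add_dependency("lighting", "object_detection")
g.add_dependency("object_detection", "grasp_success")
g.add_dependency("grasp_success", "pick_and_place_goal")

print(sorted(g.affected_by("lighting")))
# -> ['grasp_success', 'object_detection', 'pick_and_place_goal']
```

Querying the graph after an alteration (here, lighting) yields the set of downstream capabilities and goals whose proficiency estimates should be revisited, which is one way such a causal structure can support self-assessment.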


Keywords: Proficiency self-assessment · Goal(s) · System · Environment · Intelligent agents



This work was supported in part by the U.S. Office of Naval Research under Grants N00014-18-1-2503 and N00014-16-1-302. All opinions, findings, conclusions, and recommendations expressed in this paper are those of the author and do not necessarily reflect the views of the Office of Naval Research.



Copyright information

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. Computer Science Department, Brigham Young University, Provo, USA
