Advancing Performance Assessment for Aviation Training

  • Beth F. Wheeler Atkinson
  • Mitchell J. Tindall
  • John P. Killilea
  • Emily C. Anania
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 785)


A major goal of human factors interventions in aviation environments is to increase performance without sacrificing safety. The performance assessment state-of-the-practice within aviation training relies heavily on instructor observations and performance checklists or gradesheets. While these tools quantify trainee performance, they focus on outcomes as opposed to the processes (i.e., behaviors and cognitions) that led to a good or bad performance. Theoretical guidance and technological advances offer opportunities to improve the effectiveness and efficiency of instructor feedback by increasing the availability of diagnostic feedback [1]. Specifically, construct validation research indicates that multiple criteria and methods for measuring performance are necessary to provide an accurate picture of performance [2, 3]. Expanding observer-based gradesheets to account for process-oriented and higher-order cognitive skills encourages feedback discussions to address diagnostic details. Additionally, improvements in system processing and computing power can reduce the burden of human-in-the-loop data analysis through automated capabilities. These system-based measures standardize outcome assessments, minimizing human biases and errors [4, 5]. For these reasons, the use of system-based measures to complement instructor-observed assessments provides a more comprehensive understanding of performance. This approach increases the reliability of performance evaluations, improving determinations of proficiency by relying on quantitative assessments rather than participation or quantity of exposure. This presentation will discuss ongoing efforts to develop and transition tools to address these gaps in current aviation performance assessment capabilities. The goal is to capture observer gradesheets and automated performance measures that reflect individual and team performance on tactical tasks, archiving these data for long-range analyses.
In addition to presenting the system architecture, the presentation will include a discussion of future directions such as archival systems leveraging data science and the need for increased standardization in performance measurement implementation.
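The pairing of observer gradesheet ratings with automated system-based measures in a single archivable record can be illustrated with a minimal sketch. All names here (`TrialRecord`, the constructs, the metric names, the 50/50 blend weight, and the 1–5 gradesheet scale) are illustrative assumptions, not details of the system the paper describes.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class TrialRecord:
    """One training event: observer gradesheet ratings plus automated system measures."""
    trainee_id: str
    event: str
    # Observer gradesheet: construct name -> grade on an assumed 1-5 scale.
    observer_ratings: dict[str, int] = field(default_factory=dict)
    # Automated measures: metric name -> score assumed pre-normalized to 0-1.
    system_measures: dict[str, float] = field(default_factory=dict)

    def observer_mean(self) -> float:
        return mean(self.observer_ratings.values())

    def system_mean(self) -> float:
        return mean(self.system_measures.values())

    def composite(self, w_observer: float = 0.5) -> float:
        """Blend observer and system scores; the weight is purely illustrative."""
        obs_norm = (self.observer_mean() - 1) / 4  # map 1-5 grades onto 0-1
        return w_observer * obs_norm + (1 - w_observer) * self.system_mean()

record = TrialRecord(
    trainee_id="T-017",
    event="tactical-intercept-03",
    observer_ratings={"communication": 4, "decision_making": 3},
    system_measures={"time_on_target": 0.82, "procedure_adherence": 0.90},
)
print(record.composite())
```

Keeping the two measure families in separate fields, rather than collapsing them at capture time, preserves the multi-method record that construct validation research calls for and leaves the blending policy open to later archival analyses.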


Keywords: Performance assessment · Measurement validation · Training effectiveness · Observer-based metrics · Automated system-based metrics


References

  1. Thalheimer, W.: Simulation-like questions: the basics of how and why to write them (2002)
  2. James, L.R.: Criterion models and construct validity for criteria. Psychol. Bull. 80(1), 75 (1973)
  3. Earley, P.C., et al.: Impact of process and outcome feedback on the relation of goal setting to task performance. Acad. Manage. J. 33(1), 87–105 (1990)
  4. Kahneman, D.: Attention and Effort, vol. 1063. Prentice-Hall, Englewood Cliffs (1973)
  5. Wickens, C.D.: Multiple resources and performance prediction. Theor. Issues Ergon. Sci. 3(2), 159–177 (2002)

Copyright information

© Springer International Publishing AG, part of Springer Nature (outside the USA) 2019

Authors and Affiliations

  • Beth F. Wheeler Atkinson (1)
  • Mitchell J. Tindall (1)
  • John P. Killilea (1)
  • Emily C. Anania (2)
  1. Naval Air Warfare Center Training Systems Division, Orlando, USA
  2. Don Selvy Enterprises, Lexington Park, USA