Abstract
A major goal of human factors interventions in aviation environments is to increase performance without sacrificing safety. The performance assessment state of the practice within aviation training relies heavily on instructor observations and performance checklists or gradesheets. While these tools quantify trainee performance, they focus on outcomes rather than on the processes (i.e., behaviors and cognitions) that led to a good or bad performance. Theoretical guidance and technological advances offer opportunities to improve the effectiveness and efficiency of instructor feedback by increasing the availability of diagnostic feedback [1]. Specifically, construct validation research indicates that multiple criteria and methods for measuring performance are necessary to provide an accurate picture of performance [2, 3]. Broadening observer-based gradesheets to account for process-oriented and higher-order cognitive skills encourages feedback discussions to address diagnostic details. Additionally, improvements in system processing and computing power can offset human-in-the-loop data analysis with automated capabilities. These system-based measures standardize outcome assessments and minimize human biases and errors [4, 5]. For these reasons, using system-based measures to complement instructor-observed assessments provides a more comprehensive understanding of performance. This approach increases the reliability of performance evaluations, thereby improving determinations of proficiency by relying on quantitative assessments rather than participation or quantity of exposure. This presentation will discuss ongoing efforts to develop and transition tools that address these gaps in current aviation performance assessment capabilities. The goal is to capture observer gradesheets and automated performance measures that reflect individual and team performance on tactical tasks and can be archived for long-range data analyses.
In addition to presenting the system architecture, the presentation will include a discussion of future directions such as archival systems leveraging data science and the need for increased standardization in performance measurement implementation.
References
Thalheimer, W.: Simulation-like questions: the basics of how and why to write them (2002)
James, L.R.: Criterion models and construct validity for criteria. Psychol. Bull. 80(1), 75 (1973)
Earley, P.C., et al.: Impact of process and outcome feedback on the relation of goal setting to task performance. Acad. Manage. J. 33(1), 87–105 (1990)
Kahneman, D.: Attention and Effort, vol. 1063. Prentice-Hall, Englewood Cliffs (1973)
Wickens, C.D.: Multiple resources and performance prediction. Theor. Issues Ergon. Sci. 3(2), 159–177 (2002)
Copyright information
© 2019 Springer International Publishing AG, part of Springer Nature (outside the USA)
Cite this paper
Atkinson, B.F.W., Tindall, M.J., Killilea, J.P., Anania, E.C. (2019). Advancing Performance Assessment for Aviation Training. In: Nazir, S., Teperi, AM., Polak-Sopińska, A. (eds) Advances in Human Factors in Training, Education, and Learning Sciences. AHFE 2018. Advances in Intelligent Systems and Computing, vol 785. Springer, Cham. https://doi.org/10.1007/978-3-319-93882-0_18
DOI: https://doi.org/10.1007/978-3-319-93882-0_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-93881-3
Online ISBN: 978-3-319-93882-0