Experimental Assessment of Accuracy of Automated Knowledge Capture

  • Susan M. Stevens
  • J. Chris Forsythe
  • Robert G. Abbott
  • Charles J. Gieseler
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5638)

Abstract

The U.S. armed services are widely adopting simulation-based training, largely to reduce costs associated with live training. However, simulation-based training still requires a high instructor-to-student ratio, which is expensive. Intelligent tutoring systems target this need, but they are often associated with high costs for knowledge engineering and implementation. To reduce these costs, we are investigating the use of machine learning to produce models of expert behavior for automated student assessment. A key concern about the expert modeling approach is whether it can provide accurate assessments on complex tasks of real-world interest. This study evaluates the accuracy of model-based assessments on a complex task. We trained employees at Sandia National Laboratories on a Navy simulator and then compared their simulation performance to the performance of experts using both automated and manual assessment. Results show that automated assessments were comparable to the manual assessments on three metrics.
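The paper does not specify how agreement between automated and manual assessments was computed, but the sketch below illustrates one simple way such a comparison could be made: correlating model-based scores with expert-assigned scores on a single performance metric. All variable names and data values are hypothetical, and the metric scale is an assumption for illustration only (requires Python 3.10+ for statistics.correlation).

```python
# Hypothetical sketch: comparing automated (model-based) and manual expert
# assessments of trainee performance. Data and names are illustrative,
# not taken from the paper.
from statistics import correlation  # Pearson correlation, Python 3.10+

# Per-trainee scores on one hypothetical metric, scaled 0-1.
automated_scores = [0.72, 0.55, 0.81, 0.64, 0.90, 0.47]  # from expert model
manual_scores = [0.70, 0.60, 0.78, 0.66, 0.88, 0.50]     # from human instructor

# Higher correlation suggests the automated assessment tracks the manual one.
r = correlation(automated_scores, manual_scores)
print(f"Agreement between automated and manual assessment: r = {r:.2f}")
```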

Keywords

Automated assessment · Naval training systems · simulation-based training · intelligent tutoring systems

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Susan M. Stevens (1)
  • J. Chris Forsythe (1)
  • Robert G. Abbott (1)
  • Charles J. Gieseler (1)

  1. Sandia National Laboratories, Albuquerque, USA