Medical education via simulation training is an area of rapid growth. Simulation was originally developed in the aerospace industry and the military to teach management of rare life-threatening events, and medicine, especially critical care, is well suited to simulation-based education because prompt recognition and management are often the keys to successful outcomes. This training method is especially pertinent in the current climate of resident work-hour limitations and increased focus on patient safety. Neuro-critical care presents a unique challenge to simulation: current mannequins cannot mimic a focal neurological examination beyond their latest advancement, a reactive pupil. Nevertheless, realistic simulation of neurological emergencies is certainly possible with current technology and programming, and general intensive care emergencies are easily simulated and already widely employed.

The study by Musacchio et al. is the first to specifically address the potential benefit of simulation-based training for neurologic emergencies [1]. The investigators at Rush University have built a successful early program in neuro-simulation. Spinal shock and closed head injury (CHI) are relevant clinical situations that do not specifically require a focal neurological exam. A CHI scenario could feasibly include securing an airway, volume resuscitation, and brain resuscitation with treatment of elevated intracranial pressure (ICP). A vasospasm scenario benefits from a non-specific presentation (new-onset encephalopathy) with a broad differential diagnosis. Otherwise, a focal neurologic exam would need to be verbalized or artificially demonstrated through a secondary medium (a professional patient or a recording), which detracts from the believability of the scenario.

The neuro-simulation efforts at Rush University have focused on resident and medical student education. Evaluation was based on subjective criteria (a post-course assessment) as well as objective criteria (pre- and post-test comparison). Unfortunately, the testing assessed learning from the simulation exercise and the didactic interaction together, not the impact of each component separately. In addition, there were no controls, no long-term follow-up to assess retention, and no unbiased measurement of effectiveness. The authors acknowledge all of these limitations, and admittedly this is an early description of a neuro-simulation program, one that will undoubtedly improve over time.

It is worth considering whether we must try to separate the potential effects of simulation and didactic instruction. The effectiveness of a simulation program should be assessed independently of lecture-based learning. However, we believe that a powerful program should be enhanced with a post-simulation debriefing, and ideally the debriefing would be used to reinforce points from a directed didactic session. Fisher et al. compared three educational interventions for resident training in two life-threatening obstetric emergencies: simulation followed by lecture, lecture alone, and simulation alone [2]. They found that simulation with debriefing was superior to traditional lecture alone and that adding a lecture to simulation provided no incremental benefit. They attribute the positive learning impact of simulation in part to trainee-directed post-simulation debriefing and the immediate identification of knowledge deficits. Other benefits include the opportunity to make mistakes safely in a realistic environment, to practice recently gained skills, and to increase retention of information. The need for a didactic session depends on what is being taught and on the baseline knowledge of the target audience; it may be necessary to provide a comprehensive understanding of the disease process. Another approach is a "simulator tutorial session" held before the simulation sessions to ensure a minimal foundation of medical knowledge and to orient trainees to the simulated patient setting.

An important factor differentiating simulated emergencies from didactic learning is that the skill set being taught is usually not pure medical knowledge. Crisis resource management (CRM) is the term used for the non-medical skills required to successfully manage medical emergencies [3]. Although a gold-standard assessment tool for CRM performance has not previously existed, Kim et al. provide evidence of construct validity for two rating instruments, a Global Rating Scale (GRS) and a checklist, both developed at the University of Ottawa [3, 4]. The Ottawa GRS is a 7-point ordinal scale that allows evaluation of five aspects of crisis management: leadership skills, situational awareness, communication skills, problem solving, and resource utilization. The CRM checklist evaluates 12 items across the five CRM categories. Both have demonstrated good construct validity and similar inter-rater reliability, although neither is yet a proven evaluation tool.
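Purely for illustration, the sketch below shows one way ratings on the five GRS categories might be recorded and summarized in software. The data structure, field names, and simple averaging are our assumptions, not part of the published Ottawa instruments.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Dict

# The five CRM categories evaluated by the Ottawa GRS (7-point ordinal scale).
CRM_CATEGORIES = (
    "leadership",
    "situational_awareness",
    "communication",
    "problem_solving",
    "resource_utilization",
)

@dataclass
class OttawaGRSRating:
    """One rater's GRS scores for one trainee (1-7 per category)."""
    scores: Dict[str, int]

    def __post_init__(self) -> None:
        for category in CRM_CATEGORIES:
            score = self.scores.get(category)
            if score is None or not 1 <= score <= 7:
                raise ValueError(f"{category}: score must be 1-7")

    def overall(self) -> float:
        # Averaging across categories is our assumption for illustration;
        # it is not the published Ottawa scoring rule.
        return mean(self.scores[c] for c in CRM_CATEGORIES)

# Example: a hypothetical rating from one simulated crisis scenario.
rating = OttawaGRSRating(scores={
    "leadership": 5, "situational_awareness": 4, "communication": 6,
    "problem_solving": 5, "resource_utilization": 4,
})
print(rating.overall())  # 4.8
```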

At Johns Hopkins Hospital, we have taken a specific angle on neuro-simulation, currently targeting fellow education. Sessions are formulated to teach procedures (ventilation, laryngeal mask airway placement, intubation, bronchoscopy, central line and arterial line placement, and transcranial Doppler), "the brain code" (intractable intracranial hypertension, herniation), ACLS/BLS, airway management, and miscellaneous emergencies (pulmonary embolism, acute coronary syndrome, pneumothorax). Instruction is directed at molding competent intensive care physicians.

In simulating the brain code, there are countless ways to present an emergent patient scenario in which herniation is impending, with or without intracranial hypertension. Scenarios could include CHI, subarachnoid hemorrhage, subdural hematoma, and meningitis, to name a few. We propose several levels of evaluation. As at Rush University, we use videotape replay, pre- and post-tests, and a satisfaction questionnaire. The pretest is followed by a short didactic lecture (~20 min) and then by simulation sessions in groups of three (each fellow receives a unique but equally emergent case). Each case is followed by a debriefing. Finally, the post-test, which contains the same questions as the pretest, is administered.

We compare a fellow's initial performance with performance 3 months later (to assess retention and improvement). We employ a set of "time to" measures for key tasks that must be performed during a brain code: time to initiate the ABCs, consult anesthesia, optimize patient positioning, hyperventilate, administer osmotherapy, order imaging, and consult for ICP monitoring. We compare these "time to" measures against our own internal gold standards, derived from a group of experienced neuro-intensivists. Because fellows receive additional didactic lectures and accumulate clinical experience during the 3-month interval between tests, improvement cannot be attributed solely to simulation.
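Again purely for illustration, here is a minimal sketch of how such "time to" measures might be compared against an internal gold standard. The task names, data layout, and choice of the expert median as the benchmark are our assumptions.

```python
from statistics import median
from typing import Dict, List

# Key brain-code tasks timed during a scenario (names are illustrative).
TASKS = (
    "initiate_ABCs", "consult_anesthesia", "optimize_positioning",
    "hyperventilate", "administer_osmotherapy", "order_imaging",
    "consult_ICP_monitoring",
)

def gold_standard(expert_runs: List[Dict[str, float]]) -> Dict[str, float]:
    """Median expert time (seconds) per task: the internal benchmark."""
    return {task: median(run[task] for run in expert_runs) for task in TASKS}

def delta_from_benchmark(fellow_run: Dict[str, float],
                         benchmark: Dict[str, float]) -> Dict[str, float]:
    """Positive values: seconds slower than the benchmark; negative: faster."""
    return {task: fellow_run[task] - benchmark[task] for task in TASKS}

# Example: benchmark three expert runs, then score a fellow's first session.
experts = [
    {t: 60.0 for t in TASKS},
    {t: 90.0 for t in TASKS},
    {t: 75.0 for t in TASKS},
]
benchmark = gold_standard(experts)             # 75 s per task here
first_session = {t: 150.0 for t in TASKS}
print(delta_from_benchmark(first_session, benchmark))  # +75 s on every task
```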

Each simulation effort, whether ACLS/BLS or the brain code, appears to require its own sequence of didactics, simulation, and testing. Further experience and experimentation will be necessary to determine whether simulation-based education should be routinely incorporated into medical student, resident, and fellow education on neurologic emergencies. Although neuro-simulation education is labor-intensive for faculty and a financial commitment for institutions, the potential benefits for education and patient safety are substantial and remain to be fully realized.