Surgical Endoscopy, Volume 26, Issue 9, pp 2587–2593

The virtual reality simulator dV-Trainer® is a valid assessment tool for robotic surgical skills

  • Cyril Perrenot
  • Manuela Perez
  • Nguyen Tran
  • Jean-Philippe Jehl
  • Jacques Felblinger
  • Laurent Bresler
  • Jacques Hubert



The rapid development of minimally invasive techniques, such as robot-assisted surgery, raises the question of how to assess robotic surgical skills. The early development of virtual simulators has provided efficient tools for laparoscopic skills certification based on objective scoring, high availability, and lower cost. However, no similar evaluation exists for robotic training. The purpose of this study was to assess several criteria, such as reliability, face, content, construct, and concurrent validity of a new virtual robotic surgery simulator.


This prospective study was conducted from December 2009 to April 2010 using three dV-Trainer® simulators (MIMIC Technologies®) and one Da Vinci S® robot (Intuitive Surgical®). Seventy-five subjects, divided into five groups according to their initial surgical training, were evaluated on five exercises representative of robot-specific skills: 3D perception, clutching, visual force feedback, EndoWrist® manipulation, and camera control. Analyses were based on (1) questionnaires (realism and interest), (2) data automatically generated by the simulators, and (3) subjective scoring by two experts of de-identified videos of similar exercises performed with the robot.


Face and content validity were generally considered high (77 %). Five levels of ability were clearly identified by the simulator (ANOVA; p = 0.0024). There was a strong correlation between automatic data from dV-Trainer and subjective evaluation with robot (r = 0.822). Reliability of scoring was high (r = 0.851). The most relevant criteria were time and economy of motion. The most relevant exercises were Pick and Place and Ring and Rail.


The dV-Trainer® simulator proves to be a valid tool to assess basic skills of robotic surgery.


Keywords: dV-Trainer · Surgical education · Da Vinci robot · Reliability and validity · Robotic surgery · Simulation

The Da Vinci® robot (Intuitive Surgical, Sunnyvale, CA, USA) is a tool that has been implemented in more than 1,600 operating rooms throughout the world in many different fields (e.g., urology, general surgery, gynecology, heart and thoracic surgery, head and neck surgery). The skills required for robotic surgery are different from those required for laparoscopic or open surgery: clutching, lack of force feedback, EndoWrist® manipulation, camera control, and 3D vision. This calls for highly specialized training for surgeons and residents. However, teaching such minimally invasive procedures in the operating room according to the Halsted model (with two attending surgeons) is not the most appropriate approach, because only the operator is present at the console and because the learning curve on real patients has financial and medicolegal implications [1].

From this standpoint, it is important to remember the difficulties encountered when laparoscopic surgery was first introduced [2]. This has in part led to the development of simulation tools, from basic training boxes to complex virtual reality simulators [3, 4, 5]. Since 2009, the American College of Surgeons (ACS) and the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES) have required all residents to obtain Fundamentals of Laparoscopic Surgery (FLS) certification, a license for laparoscopy. Before being recognized as a standard, these exercises were extensively studied over a 7-year period [6, 7, 8] to determine whether they met the requirements of large-scale assessments [9]: ease of use, low cost, reliability, accuracy, validity of skills assessment, and correlation with future surgical performance [10].

Currently, even though 2007 recommendations from the Minimally Invasive Robotic Association (MIRA) and SAGES encourage the rapid implementation of such a curriculum, there is no equivalent of FLS in robotic surgery. It will be necessary to develop similar tools for training in robotic surgery. Working with the actual robot on anatomical samples, animal models, or inanimate models is costly in terms of equipment and of mobilizing the robot (estimated cost: $500/h). Robotic surgery simulators could offer a more economical training alternative. Three simulators are currently available: the RoSS® by the Roswell Park Cancer Institute (Buffalo, NY), the dV-Trainer® by MIMIC Technologies (Seattle, WA), and the Da Vinci Skills Simulator® by Intuitive Surgical, which is the Da Vinci Si® console running dV-Trainer® software. Before these new tools are implemented in robotic curricula, objective validation is required.

The purpose of our study was to validate the dV-Trainer® as an assessment tool for specific skills in robotic surgery. The first part tested face validity (degree of resemblance between the actual robot and the simulator), content validity (interest of the simulator for a training program), and construct validity (degree to which results on the simulator reflect the actual skill of the subject). These validities have already been demonstrated in other studies on a previous version of the dV-Trainer. The second part tested, for the first time, reliability (reproducibility of a subject's score when performing the same task twice) and concurrent validity (equivalence between an assessment on the simulator and an assessment on the actual Da Vinci®).

Materials and methods

Simulator specifications

The dV-Trainer® (MIMIC Technologies®) is a robotic surgery simulator consisting of a console that reproduces the look and feel of the Da Vinci system workspace, foot pedals, master controls, and a hardware platform running the surgical simulation software M-Sim® (beta version). It offers a range of training exercises in a virtual 3D environment. M-Sim® includes a scoring utility with seven criteria: time, economy of motion, drops, instrument collisions, excessive instrument force, instruments out of view, and master workspace range. A total percentage score combining these criteria is automatically generated by a computerized algorithm created by the manufacturer.
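The manufacturer's scoring algorithm is proprietary and not described in the study; purely as an illustration, a composite percentage of this kind can be formed as a weighted average of per-criterion scores. The `composite_score` function, the criterion names as dictionary keys, and the equal default weights below are hypothetical assumptions, not MIMIC's actual formula.

```python
# Hypothetical sketch of combining the seven M-Sim scoring criteria into
# one percentage. Weights and normalisation are illustrative assumptions;
# the real MIMIC algorithm is proprietary.

def composite_score(metrics, weights=None):
    """Weighted average of per-criterion scores (each assumed 0-100)."""
    default = {
        "time": 1.0, "economy_of_motion": 1.0, "drops": 1.0,
        "instrument_collisions": 1.0, "excessive_force": 1.0,
        "instruments_out_of_view": 1.0, "master_workspace_range": 1.0,
    }
    w = weights or default
    return sum(metrics[k] * w[k] for k in w) / sum(w.values())

# Invented per-criterion scores for one exercise attempt
metrics = {
    "time": 80.0, "economy_of_motion": 70.0, "drops": 100.0,
    "instrument_collisions": 90.0, "excessive_force": 100.0,
    "instruments_out_of_view": 60.0, "master_workspace_range": 75.0,
}
print(round(composite_score(metrics), 1))  # equal weights -> plain mean, 82.1
```

With equal weights the composite reduces to a plain mean; down-weighting or up-weighting individual criteria (e.g., penalty criteria such as drops) would be a natural extension.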

Study design

Surgeons, residents, medical students, engineers, and nurses involved in a course in our training center from December 2009 to April 2010, together with the robotic experts giving these courses, were invited to participate in this prospective, institutional review board-approved study. Participants were prospectively categorized into five groups according to their robotic surgery experience: group 1 (>100 cases), group 2 (10–40 cases), group 3 (no complete case but >4 h at the console), group 4 (no robotic experience; surgeon or resident), and group 5 (no robotic experience; no surgical experience).

All participants received standardized explanations and 10 min of practice before performing five exercises on the dV-Trainer (Fig. 1a) and then the same five exercises on real models with the Da Vinci S® robot (Intuitive Surgical) in a dry lab (Fig. 1b). Participants registered in the complete course performed several series on the simulator during a 4-h dV-Trainer session.
Fig. 1

Exercises with the dV-Trainer® (a) and on models with the Da Vinci® robot (b): Pick and Place, Peg Board, Ring and Rail, Match Board, Camera Targeting

Exercises were selected according to five key pedagogical objectives defined by our robotic experts.

  1. 3D perception: Pick and Place consists of placing red, blue, or yellow objects in the corresponding coloured boxes.

  2. Clutching: Peg Board consists of grasping rings on a vertical stand with the left hand, passing them to the right hand, and placing them on a peg.

  3. Visual force feedback: Ring and Rail consists of moving a ring along a twisted metal rod without applying excessive force to either the ring or the rail.

  4. EndoWrist® manipulation (dexterity when working with one or more instruments): Match Board consists of placing nine numbers and letters in specific squares on a board.

  5. Camera control: Camera Targeting consists of focusing the camera on different blue spheres spread across a broad pelvic cavity.


After finishing the protocol, each subject completed a demographic questionnaire, a face validity questionnaire with two questions (Was this exercise realistic? What are the advantages and drawbacks of the robot and the simulator?), and a content validity questionnaire with two questions (Was this exercise interesting for basic skills learning? Did you prefer simulator or actual robot for basic skills learning?).

Statistical analysis

Scores for the simulator session were exported from the M-Sim® software. Scores on the actual Da Vinci® were given by two experts based on de-identified videos: one of the endoscopic view and one of the participant's arms. They used a scoring system developed in our institution based on six criteria (time, fluidity, excessive force, instrument use, camera use, ergonomics). The validity of this scoring system was tested in a previous unpublished study; interobserver reliability was high (r = 0.802).

Based on quality criteria recommended by Van Nortwick et al. [11], who reviewed 83 studies on laparoscopic simulators, we assessed several validities simultaneously:

  1. (I-a) Face validity. Subjects without previous robotic experience were excluded from this analysis.

  2. (I-b) Content validity. Subjects without previous robotic experience were excluded from this analysis.

  3. (I-c) Construct validity, tested using ANOVA [12] and Student's t test, each with a threshold of p < 0.05.

  4. (II-a) Reliability. Test-retest reliability [13] was assessed at the end of the 4-h dV-Trainer session on two consecutive series of exercises using Pearson's coefficient.

  5. (II-b) Concurrent validity (equivalence between an assessment on the simulator and an assessment on the actual Da Vinci®), tested using Pearson's coefficient [14] to compare, for each subject, the automatically generated simulator score with the experts' score for the same exercise performed on the Da Vinci. Statistics were computed using Microsoft Excel 2007 (Microsoft Office®).
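The two Pearson-based analyses (test-retest reliability between consecutive attempts, and concurrent validity between simulator and robot scores) can be sketched in a few lines of pure Python. All score values below are invented for illustration and are not the study's data.

```python
# Illustrative sketch of Pearson's product-moment correlation, the
# statistic used for both test-retest reliability and concurrent
# validity. Scores are invented, not taken from the study.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Test-retest reliability: the same subjects on two consecutive attempts
attempt5 = [60, 65, 70, 75, 80]
attempt6 = [62, 64, 72, 76, 81]

# Concurrent validity: simulator score vs. expert score on the real robot
sim = [60, 65, 70, 75, 80]
robot = [58, 66, 71, 74, 82]

print(round(pearson_r(attempt5, attempt6), 3))  # 0.988
print(round(pearson_r(sim, robot), 3))          # 0.989
```

A coefficient near 1 indicates that the two measurements rank and space the subjects almost identically, which is the sense in which the study treats r > 0.8 as high reliability or strong concurrent validity.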



Results

Demographic data

Seventy-five participants were included in the five groups: 58 men and 17 women, of whom one was left-handed, two were ambidextrous, and 72 were right-handed. Demographic data are summarized in Table 1. Twenty-one participants were enrolled in the 4-h dV-Trainer session and included in the reliability analysis. Thirty-eight completed the entire protocol; 37 completed only the dV-Trainer part owing to limited availability of the robot. Thirty-seven had previous experience in robotic surgery and completed the face and content validity questionnaires.
Table 1

Demographic data

Group | Experience in laparoscopy (years) | Experience in robotic surgery | Age (years)
1 = Experts | 14.2 ± 5.3 | 264 ± 164 cases | 48.2 ± 5.8
2 = Intermediates | 6.5 ± 6.9 | 21 ± 12 cases | 43.3 ± 6.4
3 = Beginners | 2.6 ± 3.4 | 0 cases; 6.4 ± 2.0 h | 31.3 ± 5.2
4 = Surgeons and residents | 3.3 ± 4.9 | 0 cases; 0.22 ± 0.45 h | 34 ± 8.4
5 = Nurses and medical students | – | 0 cases; 0 h | 29.7 ± 6.8

Data are means ± standard deviations

First part

  (I-a) Face validity: The realism of the exercises was considered high or very high by most subjects (67.6 %; range, 48.6–81.1 %). Match Board and Camera Targeting were rated the most realistic exercises (Table 2). Responses to the question "What are the advantages and drawbacks of the dV-Trainer in learning robotic surgery?" are summarized in Table 3.
    Table 2

    Qualitative validity

    Face validity | Not realistic | Low realism | Average realism | High realism | Very high realism
    Pick and Place (one-hand basic manipulation) | 0 (0) | 4 (11) | 15 (41) | 18 (49) | 0 (0)
    Peg Board (clutching) | 0 (0) | 3 (8) | 9 (24) | 20 (54) | 5 (14)
    Ring and Rail (visual force feedback) | 0 (0) | 4 (11) | 9 (24) | 21 (57) | 3 (8)
    Match Board (two-hand complex manipulation) | 0 (0) | 2 (5) | 7 (19) | 20 (54) | 8 (22)
    Camera Targeting (camera moving) | 0 (0) | 2 (5) | 4 (11) | 17 (46) | 13 (35)

    Content validity | No interest | Low interest | Average interest | High interest | Very high interest
    Pick and Place (one-hand basic manipulation) | 0 (0) | 2 (5) | 18 (49) | 16 (43) | 1 (3)
    Peg Board (clutching) | 0 (0) | 1 (3) | 6 (16) | 24 (65) | 6 (16)
    Ring and Rail (visual force feedback) | 0 (0) | 3 (8) | 7 (19) | 21 (57) | 6 (16)
    Match Board (two-hand complex manipulation) | 0 (0) | 0 (0) | 4 (11) | 21 (57) | 12 (32)
    Camera Targeting (camera moving) | 0 (0) | 1 (3) | 2 (5) | 20 (54) | 14 (38)

    Data are numbers of subjects with percentages in parentheses

    Table 3

    Advantages and disadvantages of the dV-Trainer®

    MIMIC dV-Trainer®
    Advantages | Disadvantages
    Basic skills learning (clutching; camera) | Fragility and bugs
    | Less mobility; difficult rotations
    Evaluation of skills | More movements
    Feeling like Da Vinci | Feeling different from Da Vinci
    Low price | More difficult than the actual robot
    No risk of breaking instruments or the Da Vinci | No fine manipulation
    | Exercises sometimes too difficult

    Da Vinci Surgical System
    Advantages | Disadvantages
    More fluidity and precision | Low accessibility
    Better 3D vision | High cost
    Easier to use | Time for installation
    More comfortable | No force feedback
    Better feeling | Need for materials/animals
    More attractive | Limited training program

    List of all advantages and disadvantages cited by participants

  (I-b) Content validity: The interest of the exercises was considered high or very high by most subjects (76.2 %; range, 45.9–91.9 %). Match Board and Camera Targeting were considered the most interesting exercises (Table 2). Most subjects cited the dV-Trainer as the best tool for basic skills learning: 48.6 % answered "simulator," 16.2 % "robot," 32.4 % "both," and 2.8 % did not answer.

  (I-c) Construct validity: Global scores were strongly correlated with previous experience in robotic surgery, and the standard deviation within each group diminished with experience. The scores were 56 % ± 11.7, 59.4 % ± 11.4, 62.6 % ± 9.3, 66.1 % ± 8.9, and 77.3 % ± 8.2 for groups 5, 4, 3, 2, and 1, respectively (Fig. 2). Single-factor analysis of variance revealed a significant difference between the five groups (ANOVA, p = 0.0024). Robotic surgeons (groups 1 and 2) outperformed subjects with no robotic experience (groups 3, 4, and 5; t test, p = 0.00092). Analyses by exercise and by criterion confirmed this result, except for excessive force and instruments out of view (Table 4).
    Fig. 2

    Construct validity of the dV-Trainer in the five groups (minimum, maximum, 25th and 75th percentiles). Scores increase from group 5 to group 1; variances decline from group 5 to group 1

    Table 4

    Objective validity. Columns: reliability between attempts 5 and 6 (Pearson coefficient), reliability between attempts 6 and 7 (Pearson coefficient), construct validity across the five groups (ANOVA), construct validity across two groups (t test), and concurrent validity (Pearson coefficient) against the corresponding exercise or criterion on the Da Vinci S. Rows: the series of five exercises (Pick and Place, Peg Board, Ring and Rail, Match Board, Camera Targeting) and the scoring criteria (economy of motion, excessive instrument force, instruments out of view, master workspace), matched with the Da Vinci scoring criteria (economy of motion, excessive force, instrument use, camera use). (Numeric cell values not legible in the source)

Second part

  (II-a) Reliability: Analysis of the learning curve (Fig. 3) revealed a plateau after six series of exercises. Reliability was therefore analyzed between the fifth and sixth attempts (Pearson, r = 0.851) and between the sixth and seventh attempts (Pearson, r = 0.847). The same calculation was performed for each of the five exercises and for each of the seven scoring criteria; results are summarized in Table 4.

    Fig. 3

    Learning curve on the dV-Trainer, showing the mean score over the five exercises (attempts 1 to 10); the learning plateau starts after six attempts (blue arrow)

  (II-b) Concurrent validity: The overall scores given by the experts on the robot were strongly correlated with the scores automatically generated by the dV-Trainer (Pearson, r = 0.822). An analysis by exercise and criterion, conducted by matching the seven simulator criteria with the six criteria of our robotic scoring system, confirmed this result only for the Pick and Place and Ring and Rail exercises and for the criteria time and economy of motion (Table 4).
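The learning-plateau idea used in the reliability analysis above can be made concrete: find the first attempt after which mean scores stop improving by more than a small threshold. The `plateau_start` helper, the threshold value, and the attempt-by-attempt means below are illustrative assumptions shaped like Fig. 3, not the study's method or data.

```python
# Hedged sketch of locating a learning-curve plateau: the first attempt
# after which per-attempt improvement stays at or below a threshold.
# The means and the 1.0-point threshold are invented for illustration.

def plateau_start(mean_scores, threshold=1.0):
    """Return the attempt number after which every subsequent gain is
    <= threshold (i.e., the plateau begins); None if never reached."""
    for i in range(1, len(mean_scores)):
        gains = [mean_scores[j] - mean_scores[j - 1]
                 for j in range(i, len(mean_scores))]
        if all(g <= threshold for g in gains):
            return i
    return None

# Invented mean scores for attempts 1-10, plateauing after attempt 6
means = [52, 58, 63, 67, 70, 72, 72.5, 73, 73.2, 73.5]
print(plateau_start(means))  # 6
```

With these invented values the function returns 6, mirroring the study's choice to compute test-retest reliability on attempts 5-6 and 6-7, once the steep part of the curve is over.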



Discussion

Robotic surgery simulators are economical training tools that could offer standardized and objective skills assessment. The principles of evaluating surgical simulators are well established: common benchmarks include reliability as well as face, content, construct, concurrent, and predictive validity (correlation between results on the simulator and future results in the operating room).

Our study is the largest to date in terms of the number of exercises tested (n = 5), exercises performed (n = 1,164), participants (n = 75), and skill levels distinguished (n = 5). The first part confirmed, on a large scale and on the new version of the dV-Trainer®, the face, content, and construct validity already reported in previous studies by Kenney et al. [15], Sethi et al. [16], and Lendvay et al. [17]. The dV-Trainer is realistic, useful for training, and able to distinguish accurately five levels of robotic skill from novice to expert.

The second part demonstrated, for the first time on a robotic simulator, the reliability of skills assessment and concurrent validity. Thus, a dV-Trainer® skills assessment can replace an assessment by an expert in a robotic surgery dry lab. Lerner's study [18] had already shown equivalent progress between a group trained on the simulator and a group trained on the actual robot but did not evaluate assessment equivalence.

A detailed analysis of the exercises (Table 4) found that two of them, Pick and Place and Ring and Rail, were simple and highly relevant, offering good reliability as well as construct and concurrent validity. Camera Targeting was relevant, with good reliability and construct validity, but lacked concurrent validity, which could be explained by the difficulty of modelling this camera control exercise in a dry lab. The Peg Board and Match Board exercises, the more difficult ones, were less relevant owing to lower reliability, which could be explained by large variations in the criteria for instrument collisions, force, and drops. Fifty percent of the overall score is determined by these parameters, leading to large variations in the global score. This could be corrected by pooling the results of these two exercises.

A detailed analysis of criteria (Table 4) identified two highly relevant parameters: time and economy of motion. Three criteria (drops, collisions, and master workspace) showed a trend toward significance. The instruments-out-of-view criterion discriminated among groups 2, 3, 4, and 5 (subjects in training), but group 1 (experienced experts) obtained very poor results. This could be explained by experience: experts can safely continue an exercise with instruments out of view, whereas for beginners an instrument out of view often means loss of control. Excessive force was not statistically significant because most participants scored 100 % on this criterion.

These encouraging initial results should be qualified, because only 5 of the 30 exercises available on the dV-Trainer® were tested. Moreover, predictive validity [19] was not studied, because this would have required a longer period of time. Finally, the dV-Trainer does not allow simulation of stitching or dissection and is limited to basic exercises; for the moment, this implies the use of a robot for advanced training [20]. It is quite possible that future surgical simulation modules will allow more extended training with completely simulated surgical cases.

The results of this study position the dV-Trainer® as a good candidate for a large-scale skills certification program similar to the FLS. The RoSS® [21, 22] and the Da Vinci Skills Simulator® [23] have been validated by only a few studies.

Other more exhaustive, and ideally multicenter, studies of all the available exercises would be necessary to select the most relevant ones and to assess predictive validity, that is, the impact of simulation training on performance in human procedures. A comparative study of the three simulators would also be useful. With rigorous methodology [24], such studies could define the role of these new tools in skills certification and in a multimodal proficiency-based curriculum.


Conclusions

The dV-Trainer® simulator is a reliable tool in the field of robotic surgery that meets the quality requirements of skills certification. It will undoubtedly be a useful training and assessment tool in the field of robotic surgery.



Acknowledgments

The authors thank Ecole de Chirurgie de Nancy and its staff, CRAN (Centre de Recherche en Automatisme de Nancy) and its staff, Conseil Régional de Lorraine, Communauté Urbaine du Grand Nancy, and Association des Chefs de Service du CHU de Nancy. This work was supported by Conseil Régional de Lorraine, Communauté Urbaine du Grand Nancy, and Association des Chefs de Service du CHU de Nancy.


Disclosures

Cyril Perrenot, Manuela Perez, Nguyen Tran, Jean-Philippe Jehl, Jacques Felblinger, Laurent Bresler, and Jacques Hubert have no conflicts of interest or financial ties to disclose.


References

  1. Amodeo A, Linares Quevedo A, Joseph JV, Belgrano E, Patel HRH (2009) Robotic laparoscopic surgery: cost and training. Minerva Urol Nefrol 61(2):121–128
  2. Callery MP, Strasberg SM, Soper NJ (1996) Complications of laparoscopic general surgery. Gastrointest Endosc Clin N Am 6(2):423–444
  3. Bruynzeel H, de Bruin AF, Bonjer HJ, Lange JF, Hop WC, Ayodeji ID, Kazemier G (2007) Desktop simulator: key to universal training? Surg Endosc 21(9):1637–1640
  4. Van Dongen KW, Tournoij E, van der Zee DC, Schijven MP, Broeders IA (2007) Construct validity of the LapSim: can the LapSim virtual reality simulator distinguish between novices and experts? Surg Endosc 21(8):1413–1417
  5. Kroeze SGC, Mayer EK, Chopra S, Aggarwal R, Darzi A, Patel A (2009) Assessment of laparoscopic suturing skills of urology residents: a pan-European study. Eur Urol 56(5):865–873
  6. Fried GM (2008) FLS assessment of competency using simulated laparoscopic tasks. J Gastrointest Surg 12(2):210–212
  7. Xeroulis G, Dubrowski A, Leslie K (2009) Simulation in laparoscopic surgery: a concurrent validity study for FLS. Surg Endosc 23(1):161–165
  8. Sroka G, Feldman LS, Vassiliou MC, Kaneva PA, Fayez R, Fried GM (2010) Fundamentals of laparoscopic surgery simulator training to proficiency improves laparoscopic performance in the operating room: a randomized controlled trial. Am J Surg 199(1):115–120
  9. Feldman LS, Sherman V, Fried GM (2004) Using simulators to assess laparoscopic competence: ready for widespread use? Surgery 135(1):28–42
  10. Sweet RM, Hananel D, Lawrenz F (2010) A unified approach to validation, reliability, and education study design for surgical technical skills training. Arch Surg 145(2):197–201
  11. Van Nortwick SS, Lendvay TS, Jensen AR, Wright AS, Horvath KD, Kim S (2010) Methodologies for establishing validity in surgical simulation studies. Surgery 147(5):622–630
  12. Chipman JG, Schmitz CC (2009) Using objective structured assessment of technical skills to evaluate a basic skills simulation curriculum for first-year surgical residents. J Am Coll Surg 209(3):364–370
  13. Hogle NJ, Briggs WM, Fowler DL (2007) Documenting a learning curve and test-retest reliability of two tasks on a virtual reality training simulator in laparoscopic surgery. J Surg Educ 64(6):424–430
  14. Gallagher AG, Ritter EM, Satava RM (2003) Fundamental principles of validation, and reliability: rigorous science for the assessment of surgical education and training. Surg Endosc 17(10):1525–1529
  15. Kenney PA, Wszolek MF, Gould JJ, Libertino JA, Moinzadeh A (2009) Face, content, and construct validity of dV-Trainer, a novel virtual reality simulator for robotic surgery. Urology 73(6):1288–1292
  16. Sethi AS, Peine WJ, Mohammadi Y, Sundaram CP (2009) Validation of a novel virtual reality robotic simulator. J Endourol 23(3):503–508
  17. Lendvay TS, Casale P, Sweet R, Peters C (2008) VR robotic surgery: randomized blinded study of the dV-Trainer robotic simulator. Stud Health Technol Inform 132:242–244
  18. Lerner MA, Ayalew M, Peine WJ, Sundaram CP (2010) Does training on a virtual reality robotic simulator improve performance on the da Vinci surgical system? J Endourol 24(3):467–472
  19. Hogle NJ, Chang L, Strong VEM, Welcome AOU, Sinaan M, Bailey R, Fowler DL (2009) Validation of laparoscopic surgical skills training outside the operating room: a long road. Surg Endosc 23(7):1476–1482
  20. Grover S, Tan GY, Srivastava A, Leung RA, Tewari AK (2010) Residency training program paradigms for teaching robotic surgical skills to urology residents. Curr Urol Rep 11(2):87–92
  21. Seixas-Mikelus SA, Stegemann AP, Kesavadas T, Srimathveeravalli G, Sathyaseelan G, Chandrasekhar R, Wilding GE, Peabody JO, Guru KA (2011) Content validation of a novel robotic surgical simulator. BJU Int 107(7):1130–1135
  22. Seixas-Mikelus SA, Kesavadas T, Srimathveeravalli G, Chandrasekhar R, Wilding GE, Guru KA (2010) Face validation of a novel surgical simulator. Urology 76(2):357–360
  23. Hung AJ, Zehnder P, Patil MB, Cai J, Ng CK, Aron M, Gill IS, Desai MM (2011) Face, content and construct validity of a novel robotic surgery simulator. J Urol 186(3):1019–1025
  24. Gallagher AG, Ritter EM, Satava RM (2003) Fundamental principles of validation, and reliability: rigorous science for the assessment of surgical education and training. Surg Endosc 17(10):1525–1529

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • Cyril Perrenot (1, 2, 3)
  • Manuela Perez (3, 4)
  • Nguyen Tran (1)
  • Jean-Philippe Jehl (5)
  • Jacques Felblinger (3)
  • Laurent Bresler (1, 2)
  • Jacques Hubert (1, 3, 6)

  1. School of Surgery, Faculty of Medicine, UHP-Nancy University, Vandoeuvre-les-Nancy, France
  2. Department of Endocrine, Digestive and General Surgery, Brabois Hospital, University Hospital of Nancy, Vandoeuvre-les-Nancy, France
  3. IADI Laboratory, INSERM U947, Nancy University, Vandoeuvre-les-Nancy, France
  4. Department of Emergency and General Surgery, Central Hospital, University Hospital of Nancy, Nancy, France
  5. Continuing Education Department, Nancy University, Nancy cedex, France
  6. Department of Urology, Brabois Hospital, University Hospital of Nancy, Vandoeuvre-les-Nancy, France
