Background

Latin America is a region of the American continent whose languages, derived from Latin, are mainly Spanish and Portuguese. The territory comprises approximately 20 nations, covers about 22,000,000 km², and is home to roughly 626 million people, with diversity in ethnicity, culture [1], economics, and public education financing [2].

Latin America differs from regions such as Europe, the UK, the USA, and Canada, where criteria or recommendations for clinical simulation training and research in simulation-based education are available [3,4,5].

Simulation has been reported in Latin America as a teaching tool in prelicensure [6, 7], postgraduate [8], and cardiopulmonary resuscitation programs [9], and as an assessment tool in Objective Structured Clinical Examinations (OSCEs) [10,11,12]. Information regarding simulation centers in Latin America is scarce [13, 14]. There are no studies that characterize simulation centers and their programs or quality.

The quality of clinical simulation occupies an important part of the agenda of scientific societies in Europe and North America, focusing both on standards and recommendations of good practice [15] and on accreditation criteria to measure quality. The accreditation criteria for simulation centers of the Society for Simulation in Healthcare (SSH), which consider elements of systems integration [3]; the criteria of the Association for Medical Education in Europe (AMEE) for program accreditation [4]; the standards for simulation training developed by the International Nursing Association for Clinical Simulation and Learning (INACSL) [5]; and those of the Association of Standardized Patient Educators (ASPE) [16] are examples of quality standards that do not exist for Latin America.

Currently, there is a need in Latin America to work on standardized aspects of the quality of simulation-based education; however, to date, there is no consensus instrument to measure the quality of our centers and programs.

The aims of this work are to characterize the current state of simulation-based education in the health sciences, to determine the structure of Latin American simulation centers in terms of teaching, research, and continuing medical education (CME), and to determine the perception of quality, based on international standards of simulation practice, among the directors of Latin American centers.

Methods

Study design

A quantitative, descriptive, cross-sectional study using an online self-report instrument was conducted.

Setting and participants

For this study, we considered a simulation center or simulation program as an organization with dedicated resources, and a mission targeted to the use of simulation for education, assessment or research, that uses a substantial component of simulation as a technique [3].

Directors of simulation centers were asked to respond representing simulation centers in Latin American Spanish- and Portuguese-speaking countries.

An intentional sample was selected based on the defined population. A database was created with the contact emails that appeared on institutional websites and was complemented by a snowball sampling technique, to cover as much of the target population as possible in cases where a contact email was not available on the institutional website. To include centers without a website, we used information available from congress contacts. Once the database was constructed with the information known to the authors, we expanded it by identifying simulation center directors in each country, from whom we requested contact information for centers not yet included. No databases of national societies or the Latin American federation were used.

We also used data on population and the percentage of Gross Domestic Product (GDP) spent on education and health in Latin American countries, obtained from public information on the CEPAL website for 2018 [17].

Instrument development

A group of researchers (two nurses and five medical doctors) trained in simulation, with experience in administering simulation centers and research, belonging to the Federación Latinoamericana de Simulación Clínica (FLASIC, www.flasic.org), constituted a committee to design the research protocol to develop the instruments.

The instrument was based on a literature review that included the accreditation criteria of SSH and ASPIRE, surveys used to characterize simulation centers worldwide [18,19,20], in European countries [21, 22], and in a single Latin American country [14], as well as definitions for reporting simulation center resources and activities [23]. On this basis, a two-part bilingual instrument (Spanish and Portuguese) was designed [24].

The SSH criteria were selected because they include systems integration criteria, which we needed to characterize the simulation centers of clinical institutions, and the ASPIRE criteria because they are based on elements of medical education. In addition, both SSH and ASPIRE have center and program accreditation processes. The survey includes the main criteria of both.

The first part is a characterization questionnaire with 19 questions focusing on center information (country, type of institution, infrastructure, activity metrics, human resources and director profile, simulation resources) and program orientations. Centers were classified into size groups according to the reported floor area (small <122 m², medium 122–1500 m², large 1500–2900 m², very large >2900 m²). For the activity metrics used in the questionnaire (number of participants per year, hours per participant, number of activities, and number of hours of room use), an explanation of how to calculate them was included [23].
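The size classification above can be sketched as a simple threshold rule; this is an illustrative reconstruction using only the quartile cut-offs stated in the text (the function name is ours, not the study's):

```python
def classify_center_size(area_m2: float) -> str:
    """Assign a reported floor area (m²) to one of the four size groups
    defined in the study: small <122, medium 122–1500, large 1500–2900,
    very large >2900."""
    if area_m2 < 122:
        return "small"
    elif area_m2 <= 1500:
        return "medium"
    elif area_m2 <= 2900:
        return "large"
    else:
        return "very large"

# The smallest and largest areas reported in the study:
print(classify_center_size(8))     # small
print(classify_center_size(4307))  # very large
```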

The second part was developed through a modified e-Delphi method based on the opinions of six Latin American experts to assess the self-perception of quality. The expert profile was a professional with more than 7 years of experience and postgraduate training in educational sciences and clinical simulation who had experience in administering simulation centers and training instructors.

A first draft of the second part of the instrument was created in Spanish, with six dimensions and 42 items based on the accreditation criteria for SSH [3] and ASPIRE [4] simulation centers. This version passed through a three-step iterative e-Delphi process until complete consensus was reached. Semantics, wording, and spelling were adjusted in the first Delphi step; the following stages generated no changes. As a result, a preliminary Spanish version of the questionnaire was obtained [24]. The revised questionnaire maintained the initial number of dimensions and items.

The instrument was translated into Portuguese by a researcher who is a native Portuguese speaker proficient in Spanish. An independent backward translation from Portuguese to Spanish was performed to verify the first translation. Finally, in terms of semantics and cultural equivalence, the Portuguese version was reviewed by two simulation instructors whose native language is Portuguese [26, 27].

The final bilingual instrument was composed of a 19-item demographic questionnaire and a 42-item Likert-type quality self-perception questionnaire (1 = totally disagree to 5 = totally agree).

The average Likert value per item and per dimension was calculated. A score was also calculated for each dimension and for the total scale by adding the scores of the corresponding items.
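The scoring described above can be illustrated with a minimal sketch; the respondent data, item names, and dimension grouping below are toy values, not the study's:

```python
from statistics import mean

# One dict per respondent, mapping item id -> Likert response (1–5); toy data.
responses = [
    {"item1": 4, "item2": 5, "item3": 3},
    {"item1": 3, "item2": 4, "item3": 4},
]
# Illustrative grouping of items into dimensions.
dimensions = {"teaching": ["item1", "item2"], "research": ["item3"]}

# Average Likert value per item across respondents.
item_means = {item: mean(r[item] for r in responses) for item in responses[0]}

# Dimension score per respondent = sum of its items; total = sum of all items.
dimension_scores = [{d: sum(r[i] for i in items)
                     for d, items in dimensions.items()} for r in responses]
total_scores = [sum(r.values()) for r in responses]
```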

Data collection

This study was carried out between January and May 2019. The survey was hosted on the Survey Monkey® platform (https://es.surveymonkey.com/) and sent by email to the directors of the simulation centers in Latin America, with monthly reminders (four email reminders). There were no incentives for participation or completion of the survey.

Statistical analysis

Descriptive statistics were used to characterize the sample. Cronbach’s alpha was calculated as a measure of internal consistency of the quality self-perception questionnaire (acceptable threshold: alpha > 0.70) [25, 28]. The 42-item self-perception questionnaire showed good internal consistency, with a Cronbach’s alpha of 0.95.
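For reference, Cronbach’s alpha follows the standard formula α = k/(k−1) · (1 − Σ item variances / variance of totals). A small sketch on invented data (the study’s actual alpha on its 42 items was 0.95):

```python
def cronbach_alpha(scores):
    """scores: list of respondents, each a list of k item scores."""
    k = len(scores[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([r[i] for r in scores]) for i in range(k)]
    total_var = variance([sum(r) for r in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Toy data: 4 respondents, 3 items each.
toy = [[4, 5, 4], [3, 3, 3], [5, 5, 4], [2, 3, 2]]
alpha = cronbach_alpha(toy)  # values > 0.70 are considered acceptable
```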

Statistical analyses were performed with IBM SPSS Statistics for Windows, Version 23.0 (IBM Corp., Armonk, NY; released 2015). The correlation analysis with demographic and economic data was performed with Microsoft Excel (version 16.52).

Ethical approval

Participants’ informed consent was obtained. The research design was approved by the ethics committees of the Universidad del Desarrollo, Chile (CEI 46/2018) and the Federal University of Santa Catarina, Brazil (Parecer do Comitê de Ética N° 3.206.561).

Results

Characteristics of Latin American simulation centers

Four hundred eight directors of simulation centers in Latin America were contacted. The number of centers per country ranged from 1 to 136.

Using CEPAL information about countries’ population and GDP expenditure on education and health, we plotted these variables against the number of simulation centers. There is a positive linear correlation between the number of centers and population (correlation coefficient 0.922) (Fig. 1).

Fig. 1 Number of centers vs. population
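The correlation reported above is a Pearson coefficient between the number of centers and population; a minimal sketch of its computation, on toy values rather than the study’s data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy example: populations (millions) paired with numbers of centers.
population = [18, 45, 126, 211]
centers = [30, 40, 90, 136]
r = pearson_r(population, centers)  # near 1 indicates a strong linear association
```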

One hundred forty-nine directors (36.5%) sent complete responses to the 42-item quality self-perception scale, which were considered valid for further analyses of program quality. Valid responses were obtained from 14 countries. The countries with the highest response rates were Chile (37.5%), Brazil (18.1%), and Mexico (12.7%) (Fig. 2).

Fig. 2 Number of centers contacted by country and complete responses to the quality self-perception survey

Most simulation centers were linked to universities (84%); only 12 centers linked to health institutions were reported, 50% of which were located in Chile.

Centers responding to the survey had existed for an average of 7 years (standard deviation 6.8, range 0 to 58 years). The first simulation center in Latin America was created in 1961 in Peru and remains open to date; it is a surgical simulation center that reports using mainly biological models and surgical trainers. New simulation centers were created every year from 1998 to 2019 (Fig. 3). Between 2008 and 2016, about ten centers per year were created, and in 2017 and 2018 about 20 centers per year. During the first half of 2019, eight new simulation centers were created in the region.

Fig. 3 Year of creation of new simulation centers in Latin America

Infrastructure and metric of activities

The data reported on logistics are heterogeneous; centers’ reported areas ranged from 8 to 4307 m². The information was organized into quartiles, and the centers were categorized as small (<122 m²), medium (122–1500 m²), large (1500–2900 m²), and very large (>2900 m²). Twenty-five percent of simulation centers are small, and 71% are medium-sized. Heterogeneity was also found in the number of participants per year, hours per participant, number of activities, and number of hours of room use (Table 1).

Table 1 Characteristics of Latin American simulation centers

Human resources and directors’ profile

Most simulation centers (54%) have fewer than ten instructors, and 8% have more than 50. Regarding the profile of simulation center directors, 61.7% are women; 50% are medical doctors (MDs), 40% nurses, and 5% engineers. Thirty-seven percent have a clinical specialty, 53% a master’s degree, and 13% doctoral training. Regarding specific training in simulation, 75% have completed an instructor course, 6% a fellowship in simulation, and 5.4% report being a Certified Healthcare Simulation Educator (CHSE), while 17% report no specific training in simulation-based education.

Simulation resources and programs’ orientation

Regarding the types of simulation resources used in the centers, the most common are high-cost simulators (81%) and procedural simulators (79%). Simulated patients rank third among the resources used. The least used resources are biological samples and animal models, which are employed in centers dedicated to surgical training (Table 2).

Table 2 Distribution of centers that use each simulation resource and declared orientation of simulation programs (n = 149)

Regarding intentions to acquire new simulators in the next year, 65% of respondents reported no plans to do so.

When asked about program orientations, most centers report that their programs intend to improve the practice and development of clinical skills (94%), critical thinking (93%), and human factors (84%) (Table 2).

Self-perception of quality

Considering global scores by dimension, the research dimension had the lowest average Likert score (m = 3.3), and the SSH teaching/learning dimension the highest (m = 4.1) (Table 3).

Table 3 Mean Likert score for each item of the quality self-perception instrument (ASPIRE and SSH criteria) (n = 149)

The individual descriptors with the highest average Likert scores were, in decreasing order, those related to adequate learning environments (item 29, SSH Teaching/Learning criteria, m = 4.4), ethical aspects (item 20, SSH Core criteria, m = 4.4), and reflective training techniques (item 7, ASPIRE criteria, m = 4.3), as seen in Table 3.

The lowest average Likert in individual descriptors was 3.0 in item 12, corresponding to ASPIRE Criteria: “The faculty of the institution (or its students) conducts research related to simulation-based education,” and in item 38, corresponding to SSH Research Criteria: “There is a designated individual who is responsible for administering research programs” (Table 3).

The highest dimension score was obtained with the ASPIRE criteria, and the lowest with the SSH research dimension (Table 4).

Table 4 Total score (sum of all item scores of the quality self-perception instrument) and total dimension scores (sum of item scores within each dimension)

Discussion

Few studies describe the specific characteristics of simulation centers [18, 19], and they do not consider the orientations of the programs carried out in them. Our work is the first on a large scale in Latin America, and we found an acceptable response rate compared with single-country studies in the region [14].

In a recent Italian survey of simulation-based pediatric training, nearly 15% of those surveyed responded [21]; in Switzerland, Stocker obtained responses from 96% of hospital centers offering training in pediatrics, of which 66.6% used clinical simulation in their teaching practice [22]. Sixty-six percent of residents surveyed and 100% of program managers responded to the Canadian emergency medicine training survey [20].

Our response rate is lower than those reported in other regions and contexts. This may be explained by the cultural diversity of Latin America, by differences in the development of simulation across the countries of the region, or because this is the first time a survey of this type has been carried out.

The number of simulation centers in Latin American countries shows large differences. Since these countries differ in economic and population size, it is convenient to compare not only in absolute terms but also in relation to wealth and number of inhabitants. We found a positive linear correlation between the number of centers and population, with one country deviating from the trend, showing a higher proportion of centers per inhabitant. Moreover, it is precisely this country that reports the most simulation centers associated with clinical institutions, an area into which the rest of Latin America may expand in the future, given worldwide trends in systems integration of clinical simulation [20,21,22]. We do not believe we can recommend a standard number of centers per inhabitant at present, or estimate when this growth will reach a plateau.

We found no correlation between overall GDP expenditure on education or health and the number of simulation centers. One particularity of Latin American countries is the difference in education funding: in some countries, education expenditure depends mostly on private funding, in contrast with the rest of Latin America [17].

The size and activities of Latin American simulation centers were heterogeneous. This may be explained by some universities having several campuses in different regions of the same country yet reporting total activities as a single center, owing to the administrative organization of their programs. Other explanations are that some centers conduct a significant number of OSCE evaluations, both for their own students and for re-validation of international professional degrees, which inflates their reported indicators, or that some centers serve a single profession while others attend multiple careers with large groups of students.

At the time of data collection, most of the simulation centers in Latin America were linked to university institutions. It is important to consider that centers linked to clinical institutions may have different forms of organization and focuses of action than those of university institutions, and that the number of centers or simulation activities with a focus on clinical teams may be modified given the need for training of specific clinical competencies related to the current pandemic context.

It was reported that a simulation center using surgical simulation resources was created in 1961. Given the self-report methodology of this study, no actions were taken to verify this statement. The literature describes the use of frozen biological material for surgical procedural training in 1986 [29], and the first recommendations for simulation-based surgical training appeared in the early 1990s [30].

In our work, the professional profile of those who run the centers is heterogeneous in profession, clinical specialty, academic degree, and simulation training. The vast majority have received instruction through short courses, and only a few have fellowship training or international certifications. Notably, almost one sixth have no specific training in the area.

Observing quality criteria is necessary for human activities that aim for excellence; this includes clinical simulation. In this study, the quality standards used were recognized as highly relevant by the directors of the simulation centers, mainly in teaching, learning environments, and ethical criteria. The research criteria were considered less relevant; this is consistent with the region’s low research visibility in world rankings over the last decade [31].

Most of the simulation centers in the region were reported to be linked to universities. This may be related to the fact that the best rated dimensions globally correspond to those based on the ASPIRE criteria and the SSH teaching and learning criteria.

In 2013, Arthur et al. conducted a Delphi study that highlighted the importance of maintaining standards in nursing simulation-based education [32]. Although the daily activity of the Latin American centers (participants, activities, etc.) was heterogeneous and relatively low, most of them showed a high number of activities related to the modalities of simulation resources used.

This study has some limitations, such as its dependence on self-report and the sincerity of respondents [33], and the fact that the majority of responses come from three countries.

Another important consideration is that snowball sampling makes it challenging to determine the sampling error or make inferences about populations based on the obtained sample.

However, the internal consistency of the survey was high, and responses were similar across countries, center sizes, and director profiles, which gives greater validity to the results [24]. Other research attempting to characterize centers worldwide had a lower response rate than ours [18].

In this context, the information obtained is important because it is the first attempt to characterize the entire region. We consider this work a basis for better understanding how simulation centers operate in Latin America, opening opportunities for new research in this field.

Some of these research areas are the differences between simulation centers based in university versus clinical institutions, and the relevance of center directors’ training to the development of educational programs. It is also important to inquire into potential differences between self-reporting and independent observer evaluation.

Conclusions

Simulation-based education in health sciences has had an increasing development in Latin America. Growth alone is not a goal, and quality might be worth looking at.

The centers characterized are predominantly medium-sized and university-based, using mannequins and standardized patients to train clinical skills and procedures.

Agreement with the importance of quality and continuous improvement is high; it is low concerning research criteria and adherence to program evaluation mechanisms.

We recommend starting accreditation processes in Latin America and conducting studies that measure the quality of simulation-based education in our region, based on objective observation rather than self-report.