Background

Evidence-based medicine (EBM) is defined as the conscientious, explicit and judicious integration of the best available evidence from the literature with patients’ values, which then informs clinical decision making [1]. Practicing EBM helps doctors make a proper diagnosis and select the best available treatment to treat or manage a disease [2]. The use of EBM in the clinical setting is thought to provide the best standard of medical care at the lowest cost [3].

Evidence-based medicine has had an increasing impact on primary care in recent years [4]. It involves patients in decision making and influences the development of guidelines and quality standards for clinical practice [4]. Primary care physicians are the first point of contact for patients [5]. They have a high workload and, at the same time, need to uphold the quality of healthcare [6]. It is therefore important for them to treat patients based on research evidence, clinical expertise and patient preferences [7]. However, integrating EBM into clinical practice in primary care is challenging, as there are variations in team composition, organisational structures, culture and working practices [8].

A search of the literature revealed that, internationally, the main barriers were lack of time, lack of resources, negative attitudes towards EBM and inadequate EBM skills [9]. A qualitative study conducted in 2014 found that the barriers unique to primary care physicians in Malaysia were a lack of awareness of, and attention to, patient values, even though patient values form a key element of EBM. These physicians still preferred to obtain information from their peers and, interestingly, used WhatsApp, a smartphone messenger, to do so [10].

An instrument is therefore needed to determine the knowledge, practice and barriers of primary care physicians regarding the implementation of EBM. Such an instrument is important for identifying gaps on a larger scale and improving the implementation of EBM in clinical practice. A systematic review by Shaneyfelt et al. [11] reported that 104 instruments had been developed to evaluate the acquisition of skills by healthcare professionals to practice EBM. These instruments assessed one or more of the following EBM domains: knowledge, attitude, search strategies, frequency of use of evidence sources, current applications, intended future use and confidence in practice. However, only eight instruments were validated: four assessed competency in EBM teaching and learning [12,13,14,15,16], whilst four assessed knowledge, attitude and skills [16,17,18,19]. No instrument has assessed the knowledge, practice and barriers in the implementation of EBM. Therefore, this study aimed to develop and validate the English version of the Evidence-Based Medicine Questionnaire (EBMQ), which was designed to assess the knowledge, practice and barriers of primary care physicians regarding the implementation of EBM.

Methods

Development of the evidence-based medicine questionnaire

A literature search was conducted in PubMed using keywords such as “Evidence-based medicine”, “general practitioners”, “primary care physicians” and “survey/questionnaire”. From this search, nine relevant studies were identified [12,13,14,15,16, 19, 20]. However, only one instrument [20] evaluated the attitudes and needs of primary care physicians. Twenty-four items from this questionnaire and findings from two previous qualitative studies in rural and urban primary care settings in Malaysia [10, 21] were used to develop the EBMQ (version 1). The EBMQ was developed in English, as English is used in the training of doctors in medical schools and is also taught as a second language in all public schools in Malaysia.

Face and content validity of the EBMQ were verified by an expert panel consisting of nine academicians (a nurse, a pharmacist and seven primary care physicians). Each item was reviewed, and the relevance and appropriateness of each item were discussed (version 2). A pilot test was then conducted on ten medical officers with a minimum of one year of working experience and without any postgraduate qualification. They were asked to state verbally whether any items were difficult to understand. The feedback received was that the font was too small and that there was no option for “place of work” for those working in a university hospital. Changes were made based on these comments to produce version 3, which was then pilot tested on another two participants. No difficulties were encountered; hence, version 3 was used as the final version.

The evidence-based medicine questionnaire (EBMQ)

The EBMQ consists of 84 items and 6 sections, as shown in Table 1. Only the 55 items measured on a Likert scale (33 items in the “knowledge” domain, 9 items in the “practice” domain and 13 items in the “barriers” domain) could be validated. The final version of the EBMQ is provided in Additional file 1. A higher score indicates better knowledge, better practice of EBM and fewer barriers to practicing EBM.

Table 1 The initial version of the Evidence-Based Medicine Questionnaire (version 3)

Participants took 15 to 20 min to complete the EBMQ. We hypothesized that the EBMQ would have 3 domains: knowledge, practice and barriers.

Validation of the evidence-based medicine questionnaire

Participants

Primary care physicians with or without EBM training, who could understand English and who attended a Diploma in Family Medicine workshop, were recruited from December 2015 to January 2016.

Sample size

Sample size calculation was based on a participant to item ratio of 5:1 to perform factor analysis [22]. There are 55 items in the EBMQ. Hence, the minimum number of participants required was 55*5 = 275.
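As a minimal illustration of this rule-of-thumb calculation (assuming only the participant-to-item ratio and item count stated above), the required sample size can be computed as follows:

```python
# Minimum sample size for factor analysis using a participant-to-item ratio rule [22].
LIKERT_ITEMS = 55   # EBMQ items entered into factor analysis
RATIO = 5           # participants required per item

minimum_sample = LIKERT_ITEMS * RATIO
print(f"Minimum number of participants required: {minimum_sample}")  # 275
```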

Procedure

Permission was obtained from the Academy of Family Physicians Malaysia to recruit participants who attended their workshops. For those who agreed, written informed consent was obtained. Participants were then asked to fill in the EBMQ at baseline. Two weeks later, the EBMQ was mailed to each participant, with a postage-paid return envelope. If a reply was not obtained within a week, participants were contacted via email and/or SMS, and reminded to send in their completed EBMQ form as soon as possible.

Data analysis

Data were analyzed using the Statistical Package for Social Sciences (SPSS) version 22 software (Chicago, IL, USA). Normality could not be assumed; hence, non-parametric tests were used. Categorical variables were presented as frequencies and percentages, while continuous variables were presented as medians and interquartile ranges (IQR).
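The same descriptive summary can be reproduced outside SPSS; the sketch below uses Python with pandas and hypothetical variable names, purely for illustration:

```python
import pandas as pd

# Hypothetical data with one categorical and one continuous variable.
df = pd.DataFrame({
    "gender": ["female", "male", "female", "female"],
    "age": [31, 35, 29, 33],
})

# Categorical variables: frequencies and percentages.
summary = pd.concat(
    [df["gender"].value_counts(), df["gender"].value_counts(normalize=True) * 100],
    axis=1, keys=["n", "%"],
)
print(summary)

# Continuous variables: median and interquartile range (IQR).
median_age = df["age"].median()
iqr_age = df["age"].quantile(0.75) - df["age"].quantile(0.25)
print(f"Age: median = {median_age}, IQR = {iqr_age}")
```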

Validity

Flesch reading ease

The readability of the EBMQ was assessed using Flesch reading ease. This was calculated based on the average number of syllables per word and the average number of words per sentence [23]. An average document should have a score of 60–70 [23].
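For reference, the score follows the standard Flesch Reading Ease formula, 206.835 - 1.015 x (words/sentences) - 84.6 x (syllables/words); the sketch below uses made-up counts for illustration:

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Standard Flesch Reading Ease score (higher = easier to read)."""
    words_per_sentence = words / sentences
    syllables_per_word = syllables / words
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

# Example with made-up counts: 120 words, 10 sentences, 180 syllables.
print(round(flesch_reading_ease(120, 10, 180), 1))  # 67.8, within the 60-70 "plain English" band
```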

Exploratory factor analysis

Exploratory factor analysis (EFA) was used to test the underlying structure of the EBMQ. EFA is a type of factor analysis used to identify the number of latent variables underlying an entire set of items [24]. EFA was performed to explore whether the items could be appropriately grouped into specific factors and to provide information about the validity of each item in each domain. It is important to ensure that the items in each domain of the EBMQ are connected to their underlying factors.

Sampling adequacy was assessed using the Kaiser-Meyer-Olkin (KMO) measure and Bartlett’s test of sphericity. Principal component analysis with promax rotation was used for data reduction, and factors with eigenvalues > 1 were retained. A KMO value > 0.6, individual factor loadings > 0.5, average variance extracted (AVE) > 0.5 and composite reliability (CR) > 0.7 indicate good structure within the domains [25, 26].
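A comparable analysis can be run outside SPSS; the sketch below uses the Python factor_analyzer package, with a hypothetical data file and the number of factors fixed at four purely for illustration:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical file of Likert-scale responses (rows = participants, columns = items).
items = pd.read_csv("ebmq_knowledge_items.csv")

# Sampling adequacy: Bartlett's test of sphericity and the KMO measure (> 0.6 desired).
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_overall = calculate_kmo(items)
print(f"Bartlett: chi2 = {chi_square:.1f}, p = {p_value:.3f}; overall KMO = {kmo_overall:.2f}")

# Principal-component extraction with promax (oblique) rotation.
fa = FactorAnalyzer(n_factors=4, rotation="promax", method="principal")
fa.fit(items)

eigenvalues, _ = fa.get_eigenvalues()
print("Number of eigenvalues > 1:", int((eigenvalues > 1).sum()))

# Retain items with loadings > 0.5 on their factor.
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))
```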

Discriminative validity

To assess discriminative validity, participants were divided into those with and without EBM training. We hypothesized that participants with EBM training would have better knowledge, better practice and fewer barriers than those without EBM training. The Chi-square test was used to determine whether there was any difference between the two groups. A p-value < 0.05 was considered statistically significant.
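As an illustration of this comparison, a chi-square test for a single item could be run as follows (a sketch with made-up counts, assuming responses are cross-tabulated against training status):

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical 2 x 3 contingency table for one EBMQ item:
# rows = EBM training status, columns = response categories.
table = pd.DataFrame(
    {"disagree": [12, 30], "neutral": [40, 55], "agree": [170, 93]},
    index=["trained", "untrained"],
)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
if p < 0.05:
    print("Significant difference between doctors with and without EBM training")
```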

Reliability

Internal consistency

Internal consistency testing was performed to assess the consistency of the results and to estimate the reliability of the items in the EBMQ. The internal consistency of the EBMQ was assessed using Cronbach’s α coefficient. A Cronbach’s alpha value of 0.5–0.69 is acceptable, while values of 0.70–0.90 indicate strong internal consistency [27]. Corrected item-total correlations should be > 0.2 to be considered acceptable [28]. If omitting an item increased Cronbach’s α substantially, the item was excluded.
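Both statistics described above can be computed directly from the item responses; the sketch below is a minimal Python implementation using hypothetical data, not the study dataset:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_total_correlations(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items (should be > 0.2)."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items.columns}
    )

# Hypothetical domain of nine Likert-scale items answered by 320 participants.
# Note: random data will yield a low alpha; real questionnaire data should be used.
rng = np.random.default_rng(0)
domain = pd.DataFrame(rng.integers(1, 6, size=(320, 9)),
                      columns=[f"P{i}" for i in range(1, 10)])
print(round(cronbach_alpha(domain), 3))
print(corrected_item_total_correlations(domain).round(2))
```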

Test-retest reliability

Test-retest reliability was assessed to measure the stability of the items in the EBMQ over time; the same test was administered twice to measure the consistency of the participants’ answers. The intra-class correlation coefficient (ICC) was used to assess the total score at test and retest. An ICC agreement value of 0.7 was considered acceptable [29]. ICC values between 0.75 and 1.00 indicate high reliability, 0.60–0.74 good reliability, 0.40–0.59 fair reliability, and values below 0.40 low reliability [30].
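The ICC can be obtained from the test and retest total scores arranged in long format; the sketch below assumes the Python pingouin package and uses made-up scores for illustration:

```python
import pandas as pd
import pingouin as pg

# Long-format data: one row per participant per administration (hypothetical total scores).
data = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "occasion": ["test", "retest"] * 4,
    "total_score": [120, 118, 98, 101, 110, 112, 105, 103],
})

icc = pg.intraclass_corr(data=data, targets="participant",
                         raters="occasion", ratings="total_score")
# ICC2 (two-way random effects, absolute agreement) is one common choice for test-retest.
print(icc.set_index("Type").loc["ICC2", ["ICC", "CI95%"]])
```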

Results

A total of 343 primary care doctors were approached, of whom 320 agreed to participate (response rate = 93.2%). The majority were female (69.4%), with a median age of 32.2 years [IQR = 4.0]. Nearly all (97.2%) were medical officers, worked in government health clinics (54.4%) and possessed no postgraduate qualifications after their basic medical degree (78.4%). All participants had heard about EBM, but only 222 (69.7%) had attended an EBM course (Table 2).

Table 2 Demographic characteristics of participants

Validity

The Flesch reading ease of the EBMQ was 61.2. Initially, we hypothesized that the “knowledge” domain would have two factors. However, EFA found that the “knowledge” domain had four factors (“evidence-based medicine websites”, “evidence-based journals”, “type of studies” and “terms related to EBM”) after 9 items (item C1: “Clinical Practice Guidelines”, item C7: “Dynamed”, item C11: “InfoPoems”, item C4: “Cochrane”, item C8: “TRIP database”, item C15: “BestBETs”, item C9: “MEDLINE”, item C17: “Medscape” and item C16: “UpToDate”) were removed. This model explained 54.3% of the variation (Table 3).

Table 3 Exploratory factor analysis of the evidence-based medicine questionnaire

EFA found that the “practice” domain had only one factor with eight items after one item (item 9: “I prefer to manage patients based on my experience”) was removed. This model explained 49.0% of the variation (Table 3).

We hypothesized that the “barriers” domain would have only one factor. However, EFA revealed that the “barriers” domain had three factors (“access”, “support” and “patient’s preferences”) after three items were removed (item 7: “I can consult the specialist anytime to answer my queries”, item 10: “I have the authority to change the management of patients in my clinic” and item 11: “There are incentives for me to practice EBM”). This model explained 49.9% of the variation (Table 3).

Discriminative validity

In the “knowledge” domain, doctors who had EBM training had significantly higher scores in 13 out of 24 items compared to those without training. In the “practice” domain, doctors who had EBM training had significantly higher scores in 5 out of 8 items compared to those without training. In the “barriers” domain, doctors who had EBM training had significantly higher scores in 5 out of 10 items compared to those without training (Table 4).

Table 4 The discriminative validity of the Evidence-Based Medicine Questionnaire

Reliability

Cronbach’s alpha for the overall EBMQ was 0.909, whilst that of the individual domains ranged from 0.657 to 0.933 (Table 4). All corrected item-total correlation (CITC) values were > 0.2. At retest, 185 participants completed the EBMQ (response rate = 57.85%), as n = 23 (42%) were uncontactable. Thirty items had good or fair correlations (r = 0.418–0.620), while 12 items had low correlations (r < 0.4) (Table 5).

Table 5 The psychometric properties of the Evidence-Based Medicine Questionnaire

Discussion

The EBMQ was found to be a valid and reliable instrument to assess the knowledge, practice and barriers of primary care physicians regarding the implementation of EBM. The final EBMQ consists of 42 items in 8 domains after 13 items were removed. The Flesch reading ease was 61.2, which indicates that the EBMQ can be easily understood by 13- to 15-year-old students who study English as a first language [23].

Initially, we hypothesized that there would be two factors in the “knowledge” domain: “sources related to EBM” and “terms related to EBM”. However, EFA revealed four factors: “evidence-based medicine websites”, “evidence-based journals”, “terms related to EBM” and “type of studies”, after 9 items were removed. This was because “sources related to EBM” was further divided into three factors. This is not surprising, as knowledge is a broad concept that can be further recategorized. EFA revealed that the “practice” domain had one factor, which concurred with our initial hypothesis. One item (item P9: “I prefer to manage patients based on my experience”) was removed, as it concerned doctors’ experience rather than their practice. Initially, we hypothesized that there would be one factor in the “barriers” domain. However, EFA revealed three factors: “access to resources”, “patient preferences towards EBM” and “support from the management”, after three items were removed. This may be because, instead of a single barrier, EFA regrouped the items into three factors that better describe the barriers encountered by primary care physicians. As highlighted in the literature [9, 31], there are many barriers to practicing EBM, and some have also been categorized according to specific types of barriers.

The EBMQ was able to discriminate the knowledge, practice and barriers between doctors with and without EBM training. In the knowledge domain, there were significant differences for all items in “terms related to EBM”. This is not surprising, as doctors with EBM training would have been exposed to these terms. No differences were found between those with and without EBM training in “information sources related to EBM”, as those who did not attend EBM training could still access online information resources. Several studies reported that training improved knowledge but did not report in detail which areas of knowledge improved; hence, we could not compare their findings with ours [32,33,34,35].

Our findings also showed that doctors with EBM training had better practice of EBM. This differed from several studies that reported changes in practice [32, 36,37,38,39], while others reported no change in practice [35, 40]. However, the authors commented that these findings were not meaningful, as practice was self-perceived. In addition, in our study, doctors who attended EBM training had fewer barriers to the implementation of EBM in their clinical practice. They seemed to have better access to resources, more patients with a positive attitude towards EBM, and better support from management to practice EBM compared to those without EBM training. This could be because doctors with EBM training knew how to overcome problems that would prevent them from practicing EBM. In a systematic review [41], the barriers to the implementation of EBM remained unclear, as they were not reported.

Cronbach’s alpha for the overall EBMQ and for most individual domains was > 0.7. This indicates that the EBMQ has adequate psychometric properties, which was similar to previous studies [12, 14,15,16, 19, 42]. The majority (71.4%) of the items in the EBMQ had good or fair correlation at test-retest, which indicates that the EBMQ achieved adequate reliability. Administering the retest two weeks later did not affect the methodology, as the acceptable time interval for test-retest reliability is approximately 2 weeks [28]. Discriminative validity was assessed using the baseline data rather than the retest data, and therefore did not affect the methodology.

To our knowledge, this is the first validation study to assess discriminative validity (i.e. between doctors with and without EBM training) in relation to the implementation of EBM. One limitation of this study was that participants were recruited whilst attending a Family Medicine module workshop. These participants may be more interested in the practice of EBM than general practitioners overall, as they were already inclined to further their postgraduate studies. Hence, our results may not be generalizable.

Conclusions

The EBMQ was found to be a valid and reliable instrument to assess the knowledge, practice and barriers of primary care physicians towards EBM in Malaysia. The EBMQ can be used to assess doctors’ practices and barriers in the implementation of EBM. Information gathered from the administration of the EBMQ will assist policy makers to identify the level of knowledge, practice and barriers of EBM and to improve its uptake in clinical practice. Although the findings of this study may not be generalizable, they may be of interest to primary care physicians in other countries.