Background

The organization of care for cancer patients is complex and multifaceted, and cancer can cause a great deal of distress for patients. A study among lung-cancer patients showed that 27 % mentioned healthcare experiences, such as waiting times and lack of information, as an important cause of distress [1]. Different healthcare providers are engaged in prevention, diagnosis, treatment and follow-up. This requires a high degree of coordination and, if inadequately organized, can result in fragmented and discontinued care [2]. The Institute of Medicine (IOM) proposed patient centeredness as a way for healthcare systems to improve patients’ experience [3]. Patient centeredness is defined as care that respects and responds to individual patients’ preferences, needs, and values and involves clinical decisions guided by patients [3]; it is associated with better treatment adherence and improved health outcomes [4]. Healthcare professionals and patients do not always agree on what is important in patient-centered care. Wessels et al. [5] reported that the expertise and attitude of healthcare providers, as well as accessibility, were more important to cancer patients than healthcare professionals expected. This underlines the importance of questionnaires that actually reflect the perspective of the patient. Patient experience and satisfaction are increasingly seen by consumers, practitioners and governing agencies as a quality outcome for health-system or health-provider performance [6].

The Consumer Quality Index (CQI) used in this study is based on the American CAHPS (Consumer Assessment of Healthcare Providers and Systems) [7]. The CAHPS is one of the best-known initiatives to measure quality of care from the healthcare user’s perspective. CAHPS is widely used in the United States and has been translated and used in the Netherlands. The CQI is also based on the Dutch QUOTE (Quality of care through the patient’s eyes) [8]. Many researchers have designed instruments to measure patient experience and satisfaction that are specific to a country’s health system or individual hospital [9–14]. In order to compare performance across health systems and providers, standardized and comparable measures of patient experience and satisfaction are necessary; to our knowledge, no such instrument exists yet. Our objective was to adapt and test the psychometric properties of a generic questionnaire, based on the Dutch version of the CQI, that measures the actual experiences and satisfaction of cancer patients with care in different European countries. A generic questionnaire has advantages: it can be used for patients with all tumor types, which makes developing different tumor-specific questionnaires redundant [4]. Questions about actual experiences tend to reflect the quality of care better and are more interpretable and actionable for quality improvement purposes, while satisfaction ratings show whether expectations were met [15]. In order to get a comprehensive picture, both satisfaction and experience are measured. Our research questions were:

  1.

    What are the differences in patient experience and satisfaction between countries and/or patient characteristics?

  2.

    What is the validity and internal consistency (reliability) of the European Cancer Consumer Quality Index?

Methods

Questionnaire

To use the existing CQI in an international context, questions specific to the Dutch healthcare system were removed based on expert opinion. The updated questionnaire was sent to the European Cancer Patient Coalition and a patient representative at each of the pilot sites to check its appropriateness for international measurement. Patient representatives were asked to judge whether their patients would be able to read and comprehend the questions. Twelve institutes across Europe were invited to participate, of which seven institutes in six countries (two in Italy) responded positively. These countries were: Hungary (HUN), Portugal (PRT), the Netherlands (NLD), Romania (ROM), Lithuania (LIT), and Italy (ITA). The CQI was translated into the local language at each pilot site and translated back into English to ensure that no information was lost in translation, so-called cross-translation. Cross-translation is used to ensure that the translated instruments are conceptually equivalent in each of the target countries/cultures [16]. The CQI used in this study will be referred to as the European Cancer Consumer Quality Index (ECCQI) and consists of 65 questions/items divided into 13 categories. The three categories with demographic or disease-specific information were used as background and were not part of the analysis, which therefore includes 10 categories (45 items). Participants were given the opportunity to comment on the questionnaire.

Data collection

The target response was a minimum of 100 respondents per institute.

Every institute assigned a person who ensured the distribution and collection of the questionnaires. In the Netherlands, data were collected through an online survey tool [17]; in the other institutes (N = 6) the questionnaire was paper-based because internet coverage was insufficient in these countries. Respondents were selected by convenience sampling. This study was performed in agreement with the Declaration of Helsinki. Approval by a medical ethics committee was not required. All participants consented to the use of the data provided by them. Data from interviews and questionnaires were analyzed anonymously.

Inclusion criteria

The following criteria were used for inclusion of the questionnaires: (1) patients had to be 18 years or older; (2) patients had to have been examined, treated or received after-care for cancer within the last two years in the examined center; (3) gender, age and level of education had to be known; (4) at least 50 % of the questions had to be answered.
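As an illustration, the four inclusion criteria can be expressed as a single filter over the returned questionnaires. This is a minimal sketch; the field names (`age`, `seen_for_cancer_within_2_years`, etc.) are hypothetical and not taken from the actual data set.

```python
def meets_inclusion_criteria(patient):
    """Check the four ECCQI inclusion criteria for one returned
    questionnaire. `patient` is a dict with hypothetical field names;
    `answers` holds item responses with None meaning unanswered."""
    answered = sum(a is not None for a in patient["answers"])
    return (
        patient["age"] >= 18                                  # criterion 1
        and patient["seen_for_cancer_within_2_years"]         # criterion 2
        and all(patient.get(k) is not None                    # criterion 3
                for k in ("gender", "age", "education"))
        and answered >= 0.5 * len(patient["answers"])         # criterion 4
    )
```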

Cognitive interviews

Cognitive interviews were performed in order to assess the face validity of the ECCQI and to identify problems in the wording or structure of questions which might lead to difficulties in question administration, miscommunication, etc. Face validity is the extent to which a test is subjectively viewed as covering the concept it is supposed to measure, which in this study is the experience and satisfaction of cancer patients with care received at the cancer center. Both ‘thinking aloud’ and ‘verbal probing’ [18] were used in this study. When thinking aloud, respondents are asked to read the questions out loud and to verbalize their thoughts as they fill out the questionnaire. With verbal probing, the interviewer asks follow-up questions to understand a participant’s interpretation more clearly and precisely. The cognitive interviews were conducted in the Netherlands, Romania (with interpreter), and Portugal (with interpreter). Data collected through the cognitive interviews were analyzed by means of the Question Appraisal System (QAS-99) [19]. The QAS-99 consists of seven elements: (i) Determine if it is difficult to read the question uniformly to all respondents; (ii) Look for problems with any introductions, instructions, or explanations from the respondent’s point of view; (iii) Identify problems related to communicating the intent or meaning of the question to the respondent; (iv) Determine if there are problems with the assumptions made or the underlying logic of the questions; (v) Check whether respondents are likely to not know or have trouble remembering information; (vi) Assess questions for sensitive nature or wording, and for bias; (vii) Assess the adequacy of the range of responses to be recorded.

Recoding

Data were recoded in order to be analyzed. Almost all categories of the CQI consist of questions with four response options: never = 1, sometimes = 2, usually = 3 and always = 4. For the categories that did not use these four response options, the answers were recoded into one of the four options above. Response codes of the questions about demographic characteristics were also recoded: (i) Age: 18–34, 35–64, and 65 or older; (ii) Years of education: low (1–8 years), moderate (9–13 years), and high (14 years and higher). The answers ‘I don’t know/I no longer remember’ and ‘Not applicable’ were scored as missing.
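The recoding rules above can be sketched as follows. The literal response labels and helper names are illustrative assumptions, not the actual ECCQI coding scheme:

```python
# Four-point frequency scale used by most ECCQI categories
FREQ_SCALE = {"never": 1, "sometimes": 2, "usually": 3, "always": 4}

# Responses treated as missing
MISSING = {"I don't know/I no longer remember", "Not applicable"}

def recode_item(response):
    """Map a raw response label to 1-4, or None if it counts as missing."""
    if response is None or response in MISSING:
        return None
    return FREQ_SCALE[response.lower()]

def age_group(age):
    """Recode age in years into the three ECCQI age bands."""
    if age < 35:
        return "18-34"
    if age < 65:
        return "35-64"
    return "65 or older"

def education_group(years):
    """Recode years of education into low/moderate/high."""
    if years <= 8:
        return "low"
    if years <= 13:
        return "moderate"
    return "high"
```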

Analyses

For descriptive analyses we used SPSS v.22. To aid future comparison of samples and normalization, descriptive statistics involved calculating the weighted mean for each scale and country. In line with the instructions [20], patients’ scores were only valid if 50 % or more of the questions within a scale were answered. We performed a chi-square test to determine whether the distribution of patient characteristics such as age differed between countries. For every category the weighted mean was calculated per country, where the weight depended on the number of items rated by the patient. We summed the scale scores and calculated the weighted mean of overall patient experience and satisfaction for every patient. The possible effects of demographic characteristics on the ECCQI score were examined with one-way analysis of variance (ANOVA; 95 % confidence intervals, CI).
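A minimal sketch of the scoring rules described above, assuming items are already recoded to 1–4 with missing answers represented as `None`: a scale score is valid only if at least half of its items were answered, and the overall score weights each scale by the number of items the patient actually rated.

```python
def scale_score(answers, min_fraction=0.5):
    """Mean of the answered items in a scale; None (invalid) if fewer
    than min_fraction of the scale's items were answered."""
    answered = [a for a in answers if a is not None]
    if len(answered) < min_fraction * len(answers):
        return None
    return sum(answered) / len(answered)

def weighted_overall_score(scales):
    """Overall score as a weighted mean of the valid scale scores,
    each scale weighted by the number of items the patient answered."""
    total, weight = 0.0, 0
    for answers in scales:
        score = scale_score(answers)
        if score is None:
            continue  # invalid scales do not contribute
        n = sum(a is not None for a in answers)
        total += score * n
        weight += n
    return total / weight if weight else None
```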

To estimate the internal consistency (reliability) of each scale, we calculated Cronbach’s alpha (α) [21] for ordinal items. In short, we followed the method of Gadermann et al. [22], in which α is calculated on the polychoric correlation matrix (computed with the psych package available in the R programming language) instead of the usual Pearson correlation matrix. α scores between 0.5 and 0.7 are acceptable, and α is considered good if higher than 0.7 [23].
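For k items with mean off-diagonal correlation r̄, the standardized alpha computed from a correlation matrix reduces to α = k·r̄ / (1 + (k − 1)·r̄); applying this to the polychoric correlation matrix gives the ordinal alpha. A minimal sketch using plain Python lists in place of the R psych package:

```python
def ordinal_alpha(corr):
    """Standardized Cronbach's alpha from a k x k correlation matrix
    (a list of lists). When `corr` is a polychoric correlation matrix
    this mirrors the ordinal alpha of Gadermann et al."""
    k = len(corr)
    # mean of the off-diagonal correlations
    off_diag = [corr[i][j] for i in range(k) for j in range(k) if i != j]
    r_bar = sum(off_diag) / len(off_diag)
    return k * r_bar / (1 + (k - 1) * r_bar)
```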

The ECCQI presented here is based on the factor structure of the CQI. We tested the structural validity of the ECCQI in our data with Confirmatory Factor Analysis (CFA). The rationale behind applying CFA is that a predefined measurement model can be tested with Structural Equation Modeling, where CFA provides insight into the fit of the model to the current data. CFA analyses were conducted in Mplus v.7 [24], with all models fitted using the Weighted Least Squares Mean and Variance adjusted (WLSMV) estimator. As general measures of fit, the Root Mean Square Error of Approximation (RMSEA) and Comparative Fit Index (CFI) were evaluated. The RMSEA provides an indication of how well the model fits in the population. Values > .10 indicate poor model fit, values between .05 and .08 indicate adequate model fit, and values of .05 or below indicate good fit of the model to the data [25]. The CFI ranges from zero to one, and higher values indicate better fit. It has been shown to be an adequate fit statistic for ordinal data [26], with values larger than .90 indicating moderate fit and values larger than .95 indicating good fit.
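The cut-off values above can be collected in a small helper for labelling model fit. The text does not name the RMSEA band between .08 and .10, so the ‘mediocre’ label for that band is our assumption:

```python
def interpret_fit(rmsea, cfi):
    """Label CFA model fit using the RMSEA and CFI cut-offs cited above.
    The 'mediocre' RMSEA band (.08-.10) is an assumption, not from the text."""
    if rmsea <= 0.05:
        rmsea_label = "good"
    elif rmsea <= 0.08:
        rmsea_label = "adequate"
    elif rmsea <= 0.10:
        rmsea_label = "mediocre"  # band not labelled in the text
    else:
        rmsea_label = "poor"
    if cfi >= 0.95:
        cfi_label = "good"
    elif cfi >= 0.90:
        cfi_label = "moderate"
    else:
        cfi_label = "poor"
    return rmsea_label, cfi_label
```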

Patient activation

To investigate the relationship between the level of patient activation and the ECCQI score, the Patient Activation Measure (PAM) was administered [27, 28]. The PAM was included later in the study and was only sent to institutes in the Netherlands, Romania and Italy, since these countries indicated that they could still implement the PAM at that time; however, not all patients filled out the PAM.

Results

Response

Initially 958 questionnaires were collected. After application of the inclusion criteria 698 questionnaires were included in this study (see Fig. 1). Respondent characteristics can be found in Table 1. In order to ensure anonymity data are presented by country and not by individual institute (the Italian institutes are combined).

Fig. 1
figure 1

Flow-chart of sample size ECCQI

Table 1 ECCQI Respondent characteristics. Percentage and absolute numbers

Results of the chi-square test showed significant differences between countries in the distribution of patient characteristics such as level of education (χ2(10) = 210.315, p < 0.001) and perceived overall health (χ2(20) = 77.641, p < 0.001).

Results of the ECCQI per country

Table 2 shows the descriptive statistics of the ECCQI. The weighted mean of the summed scale scores was 3.35, ranging from 2.05 to 4 and slightly skewed (skewness = .871). Comparison between countries revealed a significant difference in experience and satisfaction [F(5, 692) = 5.337, p < 0.001]. Post hoc comparisons indicated that this overall effect was predominantly driven by significant (p < 0.001) mean differences between Hungary (mean = 3.29, Standard Deviation (StDev) = .34) and the Netherlands (mean = 3.46, StDev = .33), and between Italy (mean = 3.28, StDev = .33) and the Netherlands.

Table 2 Results of the ECCQI per country and category, mean and median score 4 point scale and range, StDev standard deviation

Looking more specifically, Portugal (mean = 3.11, StDev = .97) scored fairly low on ‘own inputs’, as did Italy (mean = 3.09, StDev = .89). ‘Coordination’ was scored quite low by Italian patients (mean = 3.03, StDev = .54), whereas Hungarian patients gave a relatively low score to ‘rounding off the treatment’ (mean = 2.99, StDev = .53). However, for none of the categories were significant differences found between the highest- and lowest-scoring countries. Looking at some specific questions about practical experiences, it was found that patients in Hungary, Romania and Lithuania found it difficult to park at the institute (average score of 1). In all countries except Romania the majority of the patients received their diagnosis when expected (in Romania the largest group, 47.5 %, received it sooner). For a detailed overview of the outcomes of each separate question see Additional file 1. Looking at the satisfaction questions specifically (Table 3), it can be seen that all patients give a higher grade to the likelihood of recommending the center than to how they experienced the center themselves.

Table 3 Overall opinion absolute numbers, mean and median scale 1–10 and range, StDev standard deviation

Patient characteristics

When looking at the division by age, it can be seen that patients aged 65 or older report the highest score in half of all categories. The total scale score increased with age, being 3.27 (StDev = .39) in patients aged 18–34, 3.34 (StDev = .33) in patients aged 35–64 and 3.39 (StDev = .32) in patients aged 65 or older. The age differences were not significant [F(2, 692) = 2.68, p = .069]. Stratification by gender shows that females scored lower (mean = 3.34, StDev = .33) than males (mean = 3.38, StDev = .34), but this difference is not significant [F(1, 696) = 1.828, p = 0.177]. Also, quality of care was not reported differently by patients with a higher/longer education [F(5, 694) = 0.093, p = .911]. When we clustered the patients by tumor type, we observed no significant differences [F(14, 683) = 1.297, p = 0.204]. A representative subset of 172 patients (score 1, believing the patient role is important, N = 31; score 2, having the confidence and knowledge necessary to take action, N = 32; score 3, actually taking action to maintain and improve one’s health, N = 76; and score 4, staying the course even under stress, N = 33) also completed the PAM, which revealed that reported quality of care differs significantly across PAM levels [F(3, 168) = 2.362, p = 0.034]. Post hoc comparisons showed that this effect is mainly driven by patients at the highest level of activation scoring higher (mean = 3.48, StDev = .26) than respondents at the lowest level of activation (mean = 3.26, StDev = .36).

Validity and evaluation of the questions

Fourteen cognitive interviews were conducted. For interviewee characteristics see Additional file 2. Patients felt that, in general, the questionnaire was appropriate for measuring patient satisfaction and experience. However, for 18 questions at least one problem was identified based on the QAS-99 method [19]. Most problems concerned the interpretation of questions. A full overview of the problems can be found in Additional file 3. The most frequently mentioned comment was that the questionnaire does not differentiate between nurses and doctors (N = 7), which prevented patients from giving a nuanced answer. CFA revealed that the ECCQI measurement model had a moderate to good fit on our data (RMSEA = 0.039, CFI = 0.943).

Internal consistency

Seven categories (‘attitude of the healthcare professional’, ‘communication and information’, ‘coordination’, ‘supervision and support’ and ‘rounding off the treatment’) showed a good level of internal consistency (α > 0.7) for all countries and overall (see Table 4). In three categories (‘organization’, ‘hospitalization’ and ‘own inputs’) the level of internal consistency was acceptable (α between .5 and .7) to good. The alphas of the categories ‘accessibility’ and ‘safety’ were lower and indicated an unacceptable internal consistency (α < 0.5) in three countries (accessibility), possibly due to the low number of items (accessibility = 3, safety = 2) and the smaller sample sizes after splitting the data by country. With the exception of the Dutch population, removing the question “Is it difficult to get to this hospital (either by your own transport, by public transport or by taxi)?” could increase α, but the correlational stability of this item increased with sample size.

Table 4 Ordinal Cronbach’s alpha(α) score per ECCQI category and country and number of respondents (N) per ECCQI category and country

Discussion

We developed a questionnaire that measures patient experiences and satisfaction with cancer care in hospitals in European countries for patients with all types of cancer. It measures a broad array of topics capturing specific needs and wishes of cancer patients. We found no significant differences between tumor types, supporting the use of a generic questionnaire [4].

With regard to our first question, ‘What are the differences in patient experience and satisfaction between countries and/or patient characteristics?’, we found that patient experience and satisfaction are scored differently between countries, with significant differences ranging from an average of 3.27 to 3.46 on a 4-point scale. Patient experience and satisfaction are scored, on average, lowest in Italy and highest in the Netherlands. Using one questionnaire for different cultural groups (different nationalities) could lead to measurement bias, which could be an explanation for the differences between countries. Applying Hofstede’s cultural dimensions theory [29, 30], possible explanatory factors for the difference in patient satisfaction between countries can be found. High-masculinity societies (Hungary and Italy) had significantly lower satisfaction scores than the low-masculinity society in our sample (the Netherlands). According to Hofstede, Hofstede & Minkov [31], a high masculinity score indicates assertive, judgemental behaviour without much concern for the feelings of others, which could result in lower satisfaction scores. A low masculinity score indicates more tenderness and sympathy for others, resulting in less willingness to voice criticism and therefore higher satisfaction scores. Previous studies on ethnic groups [32, 33], however, showed that differences in satisfaction with care should not be ascribed to measurement bias but should be viewed as arising from actual differences in experiences. Evaluation of measurement equivalence across race and ethnicity on the CAHPS shows that measurement bias does not substantively influence conclusions based on patients’ responses [33]. A study among 15 countries performed by Ipsos [34] showed that Italy scores low on patient experience, which corresponds to our findings.
Another population survey conducted in 2010 [35] showed a high degree of satisfaction with health-care services and access to health care in both outpatient and inpatient setting in Lithuania.

Regarding the second question, ‘What is the validity and internal consistency (reliability) of the ECCQI?’, the cognitive interviews showed problems with several questions. Most problems concerned the interpretation of questions. These questions will be reviewed in order to make them clearer and more understandable. The structural validity of the ECCQI measurement model was moderate to good. Given the relatively large number of items and scales versus the number of respondents, the fit could be improved by including more persons to increase the person-to-item ratio. Also, the fit of the model was evaluated for all six countries combined, and it is possible that the ECCQI is not measurement invariant across countries or cultures. With more data, it would be possible to investigate whether the measurement model (and thus the latent constructs of the scales) is identical across nations [36]. The validity of the ECCQI could also be increased with more specificity in the questions, for example by dividing healthcare professionals into doctors and nurses. Regarding internal consistency, alpha was satisfactory to good in eight out of ten categories. The small number of questions in the categories with a low alpha is the most likely reason for the low internal consistency scores. It is recommended to investigate whether reliable scales could be created by means of other sub-scales, or to replace these scales with single-item questions.

The small differences between countries could be attributed to the difference in the mode of completing the questionnaire. In the Netherlands the questionnaires were Internet-based, while in the other countries they were paper-based. Studies investigating the equivalence between Internet and paper-based questionnaires are conflicting. Fang [37] indicated that differences were apparent when analyzing data from distinct survey modes (Internet and paper-based). On the other hand, other studies provided results which support the measurement equivalence of survey instruments across Internet and paper-based surveys [38–40].

Age did not significantly influence the results. For the total satisfaction score in all countries, differences between the highest- and lowest-scoring age groups were not significant. This finding contrasts with other studies [41, 42] showing that age needs to be considered when looking at patient experience and satisfaction data. In addition, results show that males were more positive than females, which corresponds to results from other studies [41]; this difference was, however, not significant. Further, level of activation seems to have a significant influence, since low-activated patients reported lower scores and highly activated patients reported higher scores. It can be seen that all patients give a higher mark to the likelihood that they would recommend the hospital to other patients than to the hospital based on their own experience. Our results indicate that when measuring patient experience and satisfaction, results need to be adjusted for nationality and level of activation but not for age or other demographic characteristics. Based on this research, the current questionnaire should be further tested for its ability to discriminate between hospitals and countries.

A possible limitation of this study design is the sampling method. With convenience sampling the chance of selection bias is high, which could have influenced the outcomes. For example, regarding education level, a majority of the Portuguese patients had a low education level and a majority of the Italian patients a moderate education level, while in the other countries the majority had a high education level. Regarding physical health, patients in Portugal were more negative, giving a moderate score, while in the other countries most patients rated their physical health as good or excellent. Analysis of the total study population, however, showed no influence of demographic characteristics.

The real value of these studies lies in their use to stimulate quality improvements. Even though the centers studied are not necessarily representative of all cancer centers in the study countries, the results indicate areas of improvement and might provide evidence about how organizations and providers could meet patients’ needs more effectively.

Conclusions

To our knowledge, the questionnaire used in this study is the first that measures the experiences and satisfaction of cancer patients with care provided by cancer centers in Europe. Our results show that patient satisfaction scores differ significantly between countries. We showed that differences exist in experiences and satisfaction between people with different characteristics, such as activation levels. After testing for discriminatory power, our questionnaire can be used Europe-wide to measure the quality of cancer care from the patient perspective and to identify differences in the experiences of patients in different hospitals. The ECCQI is a first step towards the international comparison of patient experience and satisfaction, which could enable healthcare providers and policy makers to improve the quality of cancer care.