How Does Your Doctor Talk with You? Preliminary Validation of a Brief Patient Self-Report Questionnaire on the Quality of Physician–Patient Interaction

  • Christiane Bieber
  • Knut G. Müller
  • Jennifer Nicolai
  • Mechthild Hartmann
  • Wolfgang Eich

Abstract

The quality of physician–patient interaction is increasingly being recognized as an essential component of effective treatment. The present article reports on the development and validation of a brief patient self-report questionnaire (QQPPI) that assesses the quality of physician–patient interactions. Data were gathered from 147 patients and 19 physicians immediately after consultations in a tertiary care outpatient setting. The QQPPI displayed good psychometric properties, with high internal consistency and good item characteristics. The QQPPI total score varied between physicians and was independent of patients’ gender, age, and education. The QQPPI correlated highly with other quality-related measures and was influenced neither by social desirability nor by patients’ clinical characteristics. The QQPPI is a brief patient self-report questionnaire that allows assessment of the quality of physician–patient interactions during routine ambulatory care. It can also be used to evaluate physician communication training programs or for educational purposes.

Keywords

Quality of physician–patient interaction · Validation of questionnaire · Quality of health care · Patient-centered care · Patient involvement · Shared decision-making

Abbreviations

QQPPI: Questionnaire on the Quality of Physician–Patient Interaction
QHC: Quality of health care
PSHC: Patient satisfaction with health care
PPS: Presumed patient satisfaction

In recent decades, physician–patient interactions in outpatient settings have become a focus of scientific interest (Barr, 2004; Detmar, Muller, Wever, Schornagel, & Aaronson, 2001; Ford, Schofield, & Hope, 2006; Frankel, 2004; Gericke, Schiffhorst, Busse, & Haussler, 2004; Kaplan, Greenfield, & Ware, 1989; Kjeldmand, Holmstrom, & Rosenqvist, 2006; Langewitz, Keller, Denz, Wössmer-Buntschu, & Kiss, 1995; Ong, de Haes, Hoos, & Lammes, 1995; Rimal, 2001; Roter et al., 1997; Safran et al., 2006). There is general agreement that high-quality physician–patient interactions should be considered assets in themselves (Goldman Sher et al., 1997; Mead & Bower, 2000b, 2002) and that they also appear to be a precondition for effective treatment (Di Blasi, Harkness, Ernst, Georgiou, & Kleijnen, 2001; Swenson et al., 2004). Positive physician–patient interactions have favorable effects on patient satisfaction (Rosenberg, Lussier, & Beaudoin, 1997; Williams, Weinman, & Dale, 1998), treatment adherence (Anstiss, 2009; Schneider, Kaplan, Greenfield, Li, & Wilson, 2004), and medical outcomes (Safran et al., 1998; Stewart et al., 2000).

Communication training programs for physicians aim at improving the quality of the physician–patient interaction to facilitate information exchange, patient participation, shared decision-making, and the development of a reliable physician–patient relationship (Bieber et al., 2006, 2008; Tiernan, 2003; Towle, Godolphin, Grams, & Lamarre, 2006; Weiner, Barnet, Cheng, & Daaleman, 2005). To monitor the success of such communication training programs and to allow for individual feedback to physicians, an instrument assessing the quality of physician–patient interaction is desirable. In addition, such an instrument should ideally be brief enough to allow its use in routine care. However, no brief patient self-report questionnaire with adequate psychometric properties for measuring the quality of physician–patient interaction is currently available.

Several extensive patient self-report instruments focused on the general quality of care are currently available, each tapping into the quality of physician–patient interaction with one of its several subscales (Bitzer, Dierks, Dörning, & Schwartz, 1999; Gericke et al., 2004; Grol et al., 1999; Safran et al., 2006). However, these instruments are rather long and therefore not practical for routine quality assessment or as add-on questionnaires in clinical trials. Furthermore, it is doubtful whether the specific subscales of interest can be used independently of the whole instrument. Additionally, because these instruments were developed in the context of large survey research projects, there is currently no published evidence for their validity or reliability in small study samples.

When looking for an instrument to assess the quality of physician–patient interaction, it is important to recognize other quality of care measures that assess related concepts (Baker, 1990; Barr, 2004; Gericke et al., 2004; Gremigni, Sommaruga, & Peltenburg, 2008; Grogan, Conner, Norman, Willits, & Porter, 2000; Hendriks, Vrielink, van Es, De Haes, & Smets, 2004; Mead & Bower, 2002; Mercer, Maxwell, Heaney, & Watt, 2004; Nicolai, Demmel, & Hagen, 2007; Rimal, 2001; Ross, Steward, & Sinacore, 1995; Safran et al., 2006; Sixma, Kerssens, Campen, & Peters, 1998; Wolf, Putnam, James, & Stiles, 1978). Patient satisfaction is the most frequently assessed surrogate parameter in quality of care assessment (Baker, 1990; Gericke et al., 2004; Grogan et al., 2000; Langewitz et al., 1995; Ross et al., 1995; Wolf et al., 1978; Zandbelt, Smets, Oort, Godfried, & de Haes, 2004), but numerous related parameters exist, including satisfaction with decisions reached (Holmes-Rovner et al., 1996), patient-centeredness (Kjeldmand et al., 2006; Mead & Bower, 2000a), empathy (Mercer et al., 2004; Nicolai et al., 2007), active listening (Fassaert, van Dulmen, Schellevis, & Bensing, 2007), patient empowerment, patient involvement (Lerman et al., 1990), shared decision-making (Edwards et al., 2003; Elwyn et al., 2003; Simon et al., 2006), patients’ general experience with care (Jenkinson, Coulter, & Bruster, 2002), as well as issues pertaining to the organization of the consultation (Grogan et al., 2000; Safran et al., 2006).

There are, however, several reservations regarding the use of these surrogate parameters to assess the quality of physician–patient interaction. Arguments against the use of patient satisfaction questionnaires state that they rarely assess observable physician behavior, often show a narrow range of scores with high ceiling effects (Garratt, Bjaertnes, Krogstad, & Gulbrandsen, 2005; Ross et al., 1995), barely detect any quality improvement, and are often tailored for an exclusive, disease-specific use (e.g., Barron & Kotak, 2006; Flood et al., 2006; Hagedoorn et al., 2003; Nordyke et al., 2006). Patient self-report instruments assessing shared decision behavior, as another possible surrogate parameter for interaction quality (Edwards et al., 2003; Simon et al., 2006), are narrowly and exclusively focused on this specific aspect of physician–patient interaction and therefore also seem inappropriate for evaluating comprehensive physician communication training programs.

Another aspect one should consider when assessing the quality of the physician–patient interaction is the question of perspective. It is possible to assess the patient’s perspective (Bitzer et al., 1999; Gericke et al., 2004; Grogan et al., 2000; Langewitz et al., 1995; Safran et al., 2006), the physician’s perspective (Hahn, 2001; Kjeldmand et al., 2006), or to use observation-based process measures, in which independent raters evaluate recorded consultations (Cox, Smith, Brown, & Fitzpatrick, 2008; Elwyn et al., 2003; Mead & Bower, 2000a). The latter are considered the gold standard because they are the most objective. However, they tend to be too complex and time-consuming to implement in routine care (Mead & Bower, 2000a). Physician self-report scales are prone to bias because they reflect the physicians’ subjective perception of their own performance (Kjeldmand et al., 2006). Therefore, patient self-report measures constitute a good compromise in terms of reliability and feasibility and are increasingly being used for quality assessment and improvement of care.

The above considerations led us to conclude that there is a need to develop and validate a brief new instrument to directly assess the quality of physician–patient interactions from the patient’s perspective. Such an instrument should meet several requirements. First of all, it should transcend satisfaction ratings, which have been criticized as too superficial because they tend to overestimate the quality of care and often do not correspond with objective, observable characteristics of the consultation (Langewitz et al., 1995; Ross et al., 1995; Williams, Coyle, & Healy, 1998). Ideally, such an instrument should instead focus on observable physician behaviors that are taught in physician communication training programs and that are associated with high-quality care, such as relationship building, patient involvement, and sharing of decisions. As a further requirement, the instrument should not be biased by patients’ social desirability considerations or by sociodemographic or clinical variables. Brevity is another necessary condition, to guarantee acceptability and to allow for potential implementation in routine care.

The aim of the present paper is to describe the development and preliminary validation of such a brief, 14-item, patient self-report instrument, the Questionnaire on the Quality of Physician–Patient Interaction (QQPPI). This questionnaire is practicable and efficient for use in routine care and it can also be used to evaluate physician communication training programs. During the validation process, special attention was paid to controlling for variables known to impact the internal validity of rating scales, such as patients’ tendencies toward social desirability or the instrument’s susceptibility to bias by mental co-morbidities (Hahn et al., 1996; Ross et al., 1995).

Method

Study Sample and Procedures

The present study was carried out in four outpatient clinics of the Medical University Hospital of Heidelberg (outpatient clinics for rheumatology, pain, general internal medicine, and diabetes) between July 2003 and March 2004. Study approval was obtained from the University of Heidelberg’s ethics review board. On pre-selected days, all patients scheduled for a consultation were asked to participate in the study. Patients were eligible if they were between 18 and 75 years of age, had sufficient knowledge of the German language, and had no cognitive or visual impairments. Patients were approached by study personnel in the waiting rooms of the four outpatient clinics. Participation was voluntary and anonymous. All participants gave written informed consent. Participants completed a set of questionnaires directly before the consultation (T0) and immediately after the consultation (T1), while still in the waiting room area. Three weeks after the consultation (T2), retest questionnaires were mailed to the participants, completed at home, and returned by mail in a prepaid envelope.

Additionally, physicians working in the outpatient clinics completed a short set of questionnaires immediately after the consultations (T1).

Measures

The present study investigated the validity of the newly developed Questionnaire on the Quality of Physician–Patient Interaction (QQPPI) (“Fragebogen zur Arzt-Patient-Interaktion—FAPI”) by examining the performance of the QQPPI when concurrently administered with other accepted quality-associated measures. Furthermore, several instruments were used to control for the influence of social desirability and clinical characteristics (e.g., mental co-morbidity, functional capacity) and to allow for a comparison between patients’ and physicians’ perspectives. Table 1 gives an overview of all instruments and assessment points used.
Table 1

Overview of instruments used in the present study at defined assessment points (T0 = directly before the consultation, T1 = immediately after the consultation, T2 = retest 3 weeks after the consultation; see Method)

| Construct assessed | Corresponding instrument |
| --- | --- |
| Patients: quality-associated measures | |
|  Quality of physician–patient interaction | Questionnaire on the Quality of Physician–Patient Interaction (QQPPI) |
|  Quality of health care | Single item (QHC) |
|  Patient satisfaction with health care | Single item (PSHC) |
|  Involvement in health care | Perceived Involvement in Care Scale (PICS) |
|  Satisfaction with decision | Satisfaction with Decision Scale (SWD) |
|  Social desirability | Balanced Inventory of Desirable Responding (BIDR) |
| Patients: clinical characteristics | |
|  Sociodemographic variables | Basic documentation taken from Psy-BADO |
|  Functional capacity | Single item |
|  State of health | Single item |
|  Quality of life | Short Form Scale (SF-12) |
|  Depression and somatization | Patient Health Questionnaire, German version (PHQ-D) |
|  Anxiety | Anxiety subscale of the Hospital Anxiety and Depression Scale (HADS) |
| Physicians | |
|  Diagnosis and presenting complaint | Free text field |
|  Presumed patient satisfaction | Single item (PPS) |
|  Physician–patient interaction | Difficult Doctor–Patient Relationship Questionnaire (DDPRQ-10) |

Psy-BADO stands for a standardized psychosocial basic documentation that is commonly used in German studies

Patients’ Questionnaires

The Questionnaire on the Quality of Physician–Patient Interaction (QQPPI)

The QQPPI (see Appendix) was developed to directly assess the quality of the physician–patient interaction during a consultation. It places special emphasis on several aspects important to building a good physician–patient relationship, such as information exchange, patient involvement, and the sharing of decisions. It deliberately does not address organizational aspects surrounding the consultation that may be important for mere satisfaction ratings, such as waiting time, the premises of the clinic, or interactions with non-medical staff.

Item generation. In a preliminary survey, exploratory in-depth interviews were conducted with 20 patients with gastroenterological, cardiological, endocrinological, or rheumatological diagnoses who were attending an outpatient clinic for general internal medicine. Patients were asked to describe their expectations with regard to an adequate physician–patient relationship. This information allowed an expert panel to generate possible questionnaire items. In addition, existing English and German questionnaires were screened for appropriate items. All items were scrutinized for ambiguity and repetition. Overall, nine items from the in-depth interviews, two items from the German version of the Patient Satisfaction Questionnaire (PSQ) (Langewitz et al., 1995), and three items from the Grogan Patient Satisfaction Questionnaire (Grogan et al., 2000) were included in the final version of the questionnaire. The selected items were considered to cover the main aspects of the physician–patient interaction. All 14 items were rephrased to ensure that they were worded positively, avoided double negations, and did not confuse participants with a change of meaning in the response categories. Items are rated on a 5-point Likert scale from ‘I do not agree’ to ‘I fully agree’ (see Appendix). Content validity was addressed by using patient-generated issues from the initial interviews and by having an expert panel of physicians and patients review whether each item captured the intended domain.

Assessment of Further Quality-Associated Measures

Several other quality-associated measures were used to assess the convergent validity of the QQPPI. This was necessary to confirm that the QQPPI measures a related, yet not identical, construct. Consequently, we expected moderate to substantial correlations (.3–.7) between the QQPPI and these other quality-associated measures.

Patients’ global assessment of quality of health care (QHC) and patient satisfaction with health care (PSHC) were both assessed with a single item on a 5-point Likert scale.

Involvement in health care decisions was measured with two subscales of the Perceived Involvement in Care Scale (PICS) (Lerman et al., 1990), which comprises a scale assessing doctor facilitation of patient involvement (doctor facilitation scale; PICS-A) and a scale assessing the level of information exchange (patient information scale; PICS-B).

Satisfaction with the treatment decision was measured by the German version of the Satisfaction with Decision Scale (SWD) (Holmes-Rovner et al., 1996).

Social desirability is a possible threat to the validity of a measure. It was therefore assessed in this study with the impression management subscale of the German version of the Balanced Inventory of Desirable Responding (BIDR) (Musch, Brockhaus, & Bröder, 2002).

Assessment of clinical characteristics. Patients’ mental problems and other clinical characteristics were assessed in the present study because they can bias quality evaluations (Hahn et al., 1996). Patients’ functional capacity and state of health were globally assessed with single-item measures using 5-point Likert scales. Patients’ quality of life was assessed with the 12-item version of the Short Form Scale (SF-12) (Bullinger & Kirchberger, 1998).

The German version of the Patient Health Questionnaire (PHQ-D) (Löwe et al., 2002; Spitzer, Kroenke, & Williams, 1999) was used to screen participants for the presence and severity of mental problems. The PHQ-D screens for mental disorders according to DSM-IV criteria and assesses, among other constructs, the severity of somatization and depression (Löwe et al., 2003, 2004). The seven-item anxiety subscale of the Hospital Anxiety and Depression Scale (HADS) (Zigmond & Snaith, 1983) was used to assess each patient’s level of anxiety.

Physicians’ Questionnaires

The physicians’ set of questionnaires asked for the patients’ diagnoses and the reasons for consultation. It globally assessed presumed patient satisfaction (PPS) with a single item (5-point Likert scale). The Difficult Doctor–Patient Relationship Questionnaire (DDPRQ-10) (Hahn et al., 1996) was used to assess the physicians’ view of the difficulty of interacting with the patient.

Statistical Analysis

Descriptive statistics were used to characterize the study sample. To obtain a QQPPI total score, the mean value of all QQPPI item scores was calculated. A maximum of three missing values was considered acceptable for the QQPPI. Missing values were estimated by means of two-way imputation (Sijtsma & van der Ark, 2003; van Ginkel & van der Ark, 2005). To assess the item and scale characteristics of the QQPPI, item difficulty and item-total correlation were calculated.
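
For illustration, this scoring procedure might be implemented as follows (a minimal sketch in Python; the data frame qqppi and the clipping and rounding of imputed values to the 1–5 response scale are our assumptions, while the person-mean/item-mean formula follows Sijtsma & van der Ark, 2003):

```python
import numpy as np
import pandas as pd

def two_way_impute(items: pd.DataFrame) -> pd.DataFrame:
    """Two-way imputation (Sijtsma & van der Ark, 2003): a missing score of
    person p on item i is estimated as person mean + item mean - overall mean."""
    person_mean = items.mean(axis=1)      # mean of each patient's observed items
    item_mean = items.mean(axis=0)        # mean of each item over observed patients
    overall_mean = items.stack().mean()   # grand mean of all observed scores
    estimate = pd.DataFrame(
        np.add.outer(person_mean.to_numpy(), item_mean.to_numpy()) - overall_mean,
        index=items.index, columns=items.columns)
    # Clip and round to the 1-5 response scale (our assumption, not stated in the paper)
    return items.where(items.notna(), estimate.clip(1, 5).round())

def qqppi_total(qqppi: pd.DataFrame) -> pd.Series:
    """QQPPI total score: mean of the 14 item scores; patients with more than
    three missing items are excluded, the remaining gaps are imputed."""
    valid = qqppi[qqppi.isna().sum(axis=1) <= 3]
    return two_way_impute(valid).mean(axis=1)
```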

To investigate the underlying structure of the QQPPI, the items were subjected to factor analysis using maximum likelihood factor extraction with oblique rotation (Promax). Three criteria were used to determine the number of factors to extract: the scree plot, the Kaiser–Guttman rule, and the interpretability of the solution.
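
A sketch of how this extraction could be run with the third-party factor_analyzer package (the complete 147 × 14 item matrix X is hypothetical; with a single extracted factor, the promax rotation is effectively a no-op):

```python
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

# Suitability of the correlation matrix for factoring (KMO, Bartlett's test)
chi2, p = calculate_bartlett_sphericity(X)
_, kmo_overall = calculate_kmo(X)

# Eigenvalues for the scree plot and the Kaiser-Guttman rule
# (retain factors with eigenvalue > 1)
eigenvalues, _ = FactorAnalyzer(rotation=None).fit(X).get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Maximum-likelihood extraction with oblique (promax) rotation
fa = FactorAnalyzer(n_factors=n_factors, method='ml', rotation='promax')
fa.fit(X)
print(fa.loadings_)             # factor loadings (pattern matrix)
print(fa.get_communalities())   # communalities h^2
```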

Internal consistency and test–retest reliability were used as indicators of the QQPPI reliability. Internal consistency was assessed by means of Cronbach’s alpha, and test–retest reliability was assessed by means of Pearson correlation coefficients.
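
Both reliability indices reduce to a few lines of code (a sketch; items is a complete patients × items array, and total_t1 and total_t2 are hypothetical vectors of QQPPI total scores from T1 and T2):

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Test-retest reliability over the 3-week interval
r_retest, p_value = stats.pearsonr(total_t1, total_t2)
```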

Analyses of variance (ANOVAs) were calculated to detect whether there were differences in QQPPI total scores between the four different outpatient clinics and between the 19 participating physicians.
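
A minimal sketch of the physician-level comparison with SciPy (the DataFrame df and its columns qqppi_total and physician_id are hypothetical; the clinic-level comparison is analogous):

```python
from scipy import stats

# One-way ANOVA of QQPPI total scores across the 19 physicians
groups = [g['qqppi_total'].to_numpy() for _, g in df.groupby('physician_id')]
f_stat, p_value = stats.f_oneway(*groups)
```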

Convergent validity of the QQPPI was assessed via Spearman correlations of the QQPPI total score with global ratings of QHC, PSHC, SWD, and PICS. To identify possible confounders, correlations between the QQPPI total score and clinical characteristics (PHQ-D, SF-12, etc.) were also calculated.
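
For instance, the validity coefficients could be computed as follows (a sketch; the DataFrame df and its column names are hypothetical):

```python
from scipy import stats

# Spearman rank correlations of the QQPPI total score with each
# quality-associated measure and clinical characteristic
for measure in ['qhc', 'pshc', 'swd', 'pics_a', 'pics_b', 'phq_depression']:
    rho, p = stats.spearmanr(df['qqppi_total'], df[measure], nan_policy='omit')
    print(f'QQPPI vs {measure}: rho = {rho:.2f}, p = {p:.3f}')
```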

The influence of social desirability was assessed by calculating Spearman correlation coefficients between the BIDR score and the QQPPI total score, and between the BIDR score and QQPPI single item scores.

To assess whether physicians were able to estimate their patients’ levels of satisfaction and how their own ratings corresponded, correlations between physicians’ ratings (DDPRQ, PPS) and patients’ ratings (QQPPI, PSHC, QHC) were calculated. We analyzed how often physicians estimated their patients’ satisfaction levels correctly.
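
The agreement analysis itself reduces to element-wise comparisons (a sketch; pps and pshc are hypothetical arrays of the physicians’ and patients’ 5-point ratings for the same consultations):

```python
import numpy as np

# Share of consultations with exact agreement between the physician's
# presumed patient satisfaction (PPS) and the patient's own rating (PSHC),
# and share in which the physician underestimated the patient's satisfaction
exact_match = np.mean(pps == pshc)
underestimate = np.mean(pps < pshc)
```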

All statistical analyses were performed with SPSS (Version 15.0) and SAS (Release 8.2).

Results

Sample Characteristics

One hundred and fifty-four volunteers from four outpatient clinics participated in the study. Seven individuals were excluded due to incomplete QQPPI ratings. The final sample consisted of 147 outpatients (mean age 48.8 years; SD = 14.7; see Table 2) treated by 19 physicians (32% female). The test–retest sample after three weeks included 122 respondents (17% non-respondents). Compared with respondents, non-respondents were more likely to be unmarried, χ2(1, N = 147) = 6.7, p = .01. No other significant differences between respondents and non-respondents were found.
Table 2

Demographic and clinical characteristics of study participants

| | Study sample (N = 147) | Retest sample (N = 122) |
| --- | --- | --- |
| Age in years, mean (SD) | 48.82 (14.65) | 49.14 (14.62) |
| Gender: female, n (%) | 81 (55.10) | 68 (55.74) |
| Marital status: married, spouse present, n (%) | 87 (59.18) | 78 (68.93) |
| Education level: ≥10 years of education, n (%) | 76 (51.70) | 64 (52.46) |
| Employment status, n (%) | | |
|  Full- or part-time working | 56 (38.90) | 47 (39.50) |
|  Unemployed | 17 (11.81) | 15 (12.61) |
|  Homemaker | 10 (6.94) | 6 (5.04) |
|  Retired due to age | 38 (26.39) | 33 (27.73) |
|  Retired due to disease | 14 (9.72) | 11 (9.24) |
|  In education | 9 (6.25) | 7 (5.88) |
| Functional capacity, mean (SD) | 3.09 (1.10) | 3.08 (1.11) |
| State of health, mean (SD) | 3.09 (0.94) | 3.07 (0.93) |
| PHQ-D, level of somatization, n (%) | | |
|  Minimal | 49 (37.41) | 42 (38.89) |
|  Low | 30 (22.90) | 25 (23.15) |
|  Medium | 32 (24.43) | 23 (21.30) |
|  High | 20 (15.27) | 18 (16.67) |
| PHQ-D, level of depression severity, n (%) | | |
|  Minimal | 65 (47.10) | 55 (47.83) |
|  Mild | 39 (28.26) | 32 (27.83) |
|  Moderate | 17 (12.32) | 12 (10.44) |
|  Moderately severe | 14 (10.15) | 14 (12.17) |
|  Severe | 3 (2.17) | 2 (1.74) |
| HADS-D, level of anxiety, mean (SD) | 8.18 (2.43) | 8.25 (2.49) |
| SF-12, physical health status, mean (SD) | 40.23 (12.35) | 40.53 (12.40) |
| SF-12, mental health status, mean (SD) | 47.27 (12.36) | 48.36 (12.15) |

Note: Data are n (%) or mean (SD). Functional capacity: range = 1–5; state of health: range = 1–5. SF-12: 12-Item Short Form Health Survey, range = 0–100, with higher scores indicating more favorable status. PHQ-D: German version of the Patient Health Questionnaire; level of somatization: range = 0–30; level of depression severity: range = 0–27. HADS: German version of the Hospital Anxiety and Depression Scale, range = 0–21, with higher scores indicating greater severity.

The demographic and clinical characteristics of the participants are shown in Table 2. Participant characteristics did not significantly differ between the four outpatient clinics.

Descriptive Item Characteristics and Scale Characteristics of the QQPPI

Item difficulties of the QQPPI ranged from .55 to .77, indicating good item characteristics. The mean QQPPI total score was 3.61 (SD = .92; median = 3.64; for means and SDs of single QQPPI items, see Table 3). Skewness was −.33 and kurtosis was −.49. The Kolmogorov–Smirnov test indicated that the QQPPI total score was normally distributed (D = .067; p = .10). The QQPPI total score did not differ as a function of patients’ gender, age, or level of education.
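
Item difficulty for polytomous items is commonly computed as the item mean rescaled to the 0–1 range; under that (assumed) convention, the reported values follow directly from the item means in Table 3, e.g., item 1’s mean of 4.05 corresponds to a difficulty of about .76:

```python
def item_difficulty(item_scores, low=1, high=5):
    """Difficulty of a Likert item: mean score rescaled to the 0-1 range
    (a common convention for polytomous items; assumed here, not stated
    by the authors)."""
    return (item_scores.mean() - low) / (high - low)
```
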
Table 3

Descriptive statistics, factor loadings, communalities, and percent of variance for maximum likelihood extraction on the 14-item QQPPI (N = 147)

| Item | Mean (SD) | Factor loading | Communality h² |
| --- | --- | --- | --- |
| 1. The physician seemed to be genuinely interested in my problems | 4.05 (1.02) | .72 | .52 |
| 2. The physician gave me detailed information about the available treatment options | 3.89 (1.06) | .84 | .70 |
| 3. I felt I could have trusted the physician with my private problems | 3.23 (1.33) | .68 | .47 |
| 4. The physician and I made all treatment decisions together | 3.82 (1.13) | .79 | .62 |
| 5. The physician’s explanations were easy to understand | 3.93 (1.03) | .80 | .64 |
| 6. The physician spent sufficient time on my consultation | 3.95 (1.18) | .83 | .68 |
| 7. The physician spoke to me in detail about the risks and side-effects of the proposed treatment | 3.54 (1.19) | .76 | .57 |
| 8. The physician understood my needs and problems, and took them seriously | 3.84 (1.08) | .82 | .68 |
| 9. The physician did all he/she could to put me at ease | 3.48 (1.18) | .72 | .52 |
| 10. The doctor asked about how my illness affects my everyday life | 3.17 (1.40) | .66 | .44 |
| 11. The doctor gave me enough chance to talk about all my problems | 3.59 (1.20) | .76 | .58 |
| 12. The physician respects the fact that I may have a different opinion regarding treatment | 3.31 (1.17) | .76 | .57 |
| 13. The physician gave me a thorough examination | 3.20 (1.41) | .64 | .40 |
| 14. The physician gave me detailed information about my illness | 3.50 (1.27) | .78 | .61 |
| Total score | 3.61 (.92) | | |
| Eigenvalue | | 8.42 | |
| Percent of variance | | 60.11% | |

Note: QQPPI: Questionnaire on the Quality of Physician–Patient Interaction, range = 1–5, with higher scores indicating more favorable ratings

The QQPPI total scores obtained by the 19 participating physicians ranged from 2.0 to 5.0 (SD = .40–1.10). The QQPPI total scores obtained in the four different outpatient clinics varied from 3.46 to 3.85 (SD = .86–1.01), F(3, 143) = .80, ns. QQPPI total scores relating to consultations with female physicians (M = 3.76, SD = .27) did not differ significantly from those with male physicians (M = 3.73, SD = .89), F(1, 17) = .01, ns.

Factor Structure of the QQPPI

To examine the underlying structure of the QQPPI, an exploratory factor analysis using the maximum-likelihood method was conducted. The Kaiser–Meyer–Olkin measure of sampling adequacy was .94, indicating that the correlation matrix was appropriate for factor analysis. Bartlett’s test of sphericity yielded an approximate chi-square of 3883.85, p < .001, indicating that the correlation matrix differed significantly from an identity matrix and was thus acceptable for factor analysis. One factor with an eigenvalue greater than 1 emerged, accounting for 60.11% of the total variance. Factor loadings ranged from .64 to .84, and communalities ranged from .40 to .70 (see Table 3).

Reliability Analysis of the QQPPI

Cronbach’s alpha for the overall scale was .95. The test–retest reliability of the QQPPI over a 3-week retest period was r = .59, suggesting that the QQPPI score was relatively stable over time. To check whether the retest values of the QQPPI were biased by a time effect, all calculations were repeated with the retest scores of the QQPPI. No significant differences were found.

Convergent Validity of the QQPPI

To analyze convergent validity of the QQPPI, we calculated correlations of the QQPPI with other quality-associated measures (see Table 4). As expected, all correlations were positive and significant. Most importantly, the QQPPI total score correlated substantially with the PICS-A and SWD scores (r = .64 and .59). There were also substantial but somewhat lower correlations with the global QHC assessment and the PICS-B scores (r = .54 and .52). The correlation between the QQPPI total score and PSHC was moderate (r = .38). These results indicate that the QQPPI and most of the other quality-associated measures tap highly related constructs, whereas patient satisfaction measured with the PSHC seems to be a somewhat different construct.
Table 4

Correlations of patients’ QQPPI total scores and other quality-associated measures with clinical characteristics and physicians’ ratings at T1

| | Physician–patient interaction (QQPPI) | Patient satisfaction with health care (PSHC) | Quality of health care (QHC) |
| --- | --- | --- | --- |
| Patients’ ratings: quality-associated measures | | | |
|  Patient satisfaction with health care (PSHC) | .38*** | | .58*** |
|  Quality of health care (QHC) | .54*** | .58*** | |
|  Satisfaction with decision (SWD) | .59*** | .48*** | .46*** |
|  Patients’ perceived involvement in care, doctor facilitation scale (PICS-A) | .64*** | .49*** | .51*** |
|  Patients’ perceived involvement in care, patient information scale (PICS-B) | .52*** | .31*** | .41*** |
| Patients’ ratings: clinical characteristics | | | |
|  Functional capacity (single item) | .06 | .21* | .23* |
|  State of health (single item) | .13 | .25** | .24** |
|  Physical health status (SF-12) | .03 | −.10 | .00 |
|  Mental health status (SF-12) | .04 | .08 | .16 |
|  Somatization (PHQ-D) | −.03 | −.21* | −.06 |
|  Depression (PHQ-D) | −.08 | −.23** | −.13 |
|  Anxiety (HADS-D) | −.09 | −.21* | −.08 |
| Physicians’ ratings | | | |
|  Difficulty in relationship (DDPRQ-10) | −.10 | −.15 | −.17 |
|  Presumed patient satisfaction (PPS) | .10 | .17* | .28** |

* p < .05; ** p < .01; *** p < .001

Social Desirability

The Spearman correlation coefficient between the QQPPI total score and social desirability, as measured with the German version of the BIDR, was r = −.05. The correlations between single QQPPI items and the BIDR score ranged from r = −.11 to r = .05. This indicates that the QQPPI is not severely biased by social desirability considerations.

Influence of Clinical Characteristics on Quality Assessment

To identify systematic bias caused by patients’ health condition, we analyzed correlations of the quality-associated measures (QQPPI, PSHC, and QHC) with clinical characteristics (functional capacity, state of health, mental co-morbidity, and quality of life) (see Table 4). Whereas QQPPI total scores showed no significant correlation with any of these clinical characteristics, the single-item measures PSHC and QHC showed low but significant correlations with functional capacity and state of health, and PSHC was additionally correlated with somatization, depression, and anxiety. These results indicate that the QQPPI is less prone to systematic bias caused by patients’ health condition than these other two quality-associated measures.

Comparison of Physicians’ and Patients’ Perspectives

Physicians’ assessments of the difficulty of the physician–patient interaction (DDPRQ) did not correlate significantly with patients’ QQPPI scores or with patients’ QHC or PSHC ratings (see Table 4). This means that patients who are considered difficult can still be satisfied with the quality of the physician–patient interaction. Similarly, physicians’ PPS assessments did not correlate significantly with QQPPI scores or PSHC ratings. Physicians estimated their patients’ satisfaction correctly in only 27.3% of the consultations and underestimated their patients’ satisfaction with health care in 56.7% of the consultations.

Discussion

The aim of the present study was to develop and validate a brief patient self-report instrument on the quality of the physician–patient interaction, intended as an add-on questionnaire for routine use in ambulatory quality assessment. It had to transcend the satisfaction ratings that are often used as surrogate parameters for the quality of physician–patient interaction, because these are known to have shortcomings (Garratt et al., 2005; Ross et al., 1995; Sitzia, 1999). Besides its use in routine care, the instrument is also intended for evaluating physician communication training programs, with a focus on relationship building, including patient involvement, information exchange, and shared decision-making. Therefore, it focuses on the physician–patient interaction as the core of the consultation and disregards organizational and service issues surrounding the consultation.

The main results of our study can be summarized as follows. First, the QQPPI showed good psychometric properties with good item and scale characteristics. In contrast to other measures (Garratt et al., 2005; Ross et al., 1995), the QQPPI total score was independent of patients’ gender, age, and educational level. Second, the results of the factor analysis suggest that the quality of physician–patient interaction, as measured by the QQPPI, is a distinct, uni-dimensional construct. Third, reliability analysis revealed that the QQPPI is internally consistent (Cronbach’s alpha = .95). Its internal consistency is even slightly superior to that of established patient satisfaction scales (Gericke et al., 2004; Grogan et al., 2000; Langewitz et al., 1995). Fourth, evidence was obtained for convergent validity of the QQPPI with various other quality-associated questionnaires. Substantial associations were found between the QQPPI and doctor facilitation of patients’ involvement in care (PICS-A) (r = .64), satisfaction with decision (SWD) (r = .59), and quality of health care (QHC) (r = .54). A moderate correlation was found with patient satisfaction with health care (PSHC) (r = .38). This indicates that the QQPPI assesses a related but distinct construct which is, however, closer to quality than to satisfaction ratings.

During the validation process we paid special attention to controlling for variables known to impact the internal validity of rating scales, namely patients’ tendencies toward social desirability or the instrument’s susceptibility to bias by patients’ health status. Previous research has shown that patients with mental problems show more difficult interactions (Hahn, Thompson, Wills, Stern, & Budner, 1994) with their physicians, and may therefore be more critical in their evaluations. Quality assessment may also be biased by other clinical characteristics, such as functional capacity, state of health, and quality of life. In our study, it was possible to show that the QQPPI is neither influenced by social desirability considerations nor by the patients’ health status. This makes the QQPPI superior to simple quality or satisfaction assessment, which showed low correlations with patients’ health status.

The present study suggests that physicians are not able to correctly predict their patients’ satisfaction and quality ratings. They seem to be more critical of their own performance: in more than half of the consultations, physicians underestimated their patients’ satisfaction and quality evaluations. This corresponds to the findings of other studies (Dobkin et al., 2003; Zandbelt et al., 2004) and highlights the importance of assessing the quality of physician–patient interactions from multiple perspectives. It also stresses the importance of directly asking patients, because physicians are not very likely to guess their patients’ appraisals correctly. It has been suggested that this discordance might be explained by social desirability considerations on the part of physicians and by dependency considerations on the part of patients (Zandbelt et al., 2004), which may also apply to our sample.

There was no systematic difference in the QQPPI total scores between the four different outpatient clinics. Interestingly, QQPPI total scores differed significantly at the level of the 19 physicians involved in the consultations, indicating that patients perceived considerable variation among their physicians with regard to their interaction qualities. However, it is beyond the scope of the present study to analyze the QQPPI’s ability to differentiate between better and poorer communicators among physicians. Further studies concurrently assessing strictly objective measures of physician–patient interaction are needed to show whether the QQPPI might be used to identify physicians who particularly require a communication training program.

The present study shows the usefulness and applicability of the QQPPI for assessing the quality of physician–patient interactions during routine ambulatory care. In addition, the QQPPI is able to discriminate between the communication performances of individual physicians. Future research should also address the responsiveness of the instrument to change, e.g., does it detect improvements in the quality of the physician–patient interaction after communication skills training? In the meantime, the QQPPI has been implemented in a randomized controlled trial evaluating a comprehensive physician communication training program in the context of chronic pain (Bieber et al., 2006, 2008). The results support the validity of the QQPPI as an outcome measure for communication training programs and further demonstrate that it can differentiate between the interaction skills of individual physicians.

There are other potential applications for the QQPPI. Note that the instrument’s items were derived from in-depth interviews with patients who described their expectations with regard to an adequate physician–patient relationship. Consequently, it captures the essence of patients’ wishes and expectations well. One might consider using it as a “teaching” tool in medical schools to impart desirable physician behavior step by step. Instead of basing standardized patients’ evaluations of student doctors on subjective comments alone, the QQPPI might be used as a more objective adjunct to the evaluation.

It might also be worth considering an expansion of the QQPPI for physician evaluation, linking it to incentives or disincentives tied to yearly goals. Depending on the type of physician practice, the QQPPI could be adapted and complemented in the future.

Limitations

Some limitations should be considered when interpreting the results of the present study. First, the QQPPI was developed and evaluated in a university outpatient setting, which is not necessarily representative of routine ambulatory care provided by office-based physicians. However, we expect only small setting influences on the direct quality of physician–patient interactions, in contrast to broader quality assessments taking into account organizational factors, such as waiting time, facilities, access, or competence of medical assistants (Gericke et al., 2004; Grogan et al., 2000).

A second limitation is the reliance on physicians’ and patients’ self-reports. Because self-report data may be influenced by global impressions unrelated to the actual quality of the physician–patient interaction, a comparison of QQPPI ratings with objective observer-based ratings of the physician–patient interaction will be necessary. We are currently tackling this challenge in another study.

In summary, the QQPPI is a brief, valid, and reliable patient self-report instrument that allows the efficient assessment of the quality of the physician–patient interaction in outpatient settings. It can be used in routine care and for evaluating physician communication training programs. Because QQPPI scores are independent of patient characteristics (i.e., age, gender, education) and are not confounded by social desirability or health status, the instrument is apt to identify genuine determinants of physician–patient interactions.

Acknowledgements

We would like to thank all participating patients and physicians.

References

  1. Anstiss, T. (2009). Motivational interviewing in primary care. Journal of Clinical Psychology in Medical Settings, 16, 87–93.
  2. Baker, R. (1990). Development of a questionnaire to assess patients’ satisfaction with consultations in general practice. British Journal of General Practice, 40, 487–490.
  3. Barr, D. A. (2004). Race/ethnicity and patient satisfaction. Using the appropriate method to test for perceived differences in care. Journal of General Internal Medicine, 19, 937–943.
  4. Barron, R., & Kotak, A. (2006). Development of a patient satisfaction with treatment questionnaire for benign prostatic hyperplasia (BPH-PSTQ). Value in Health, 9, A55.
  5. Bieber, C., Muller, K. G., Blumenstiel, K., Hochlehnert, A., Wilke, S., Hartmann, M., et al. (2008). A shared decision-making communication training program for physicians treating fibromyalgia patients: Effects of a randomized controlled trial. Journal of Psychosomatic Research, 64, 13–20.
  6. Bieber, C., Müller, K. G., Blumenstiel, K., Schneider, A., Richter, A., Wilke, S., et al. (2006). Long-term effects of a shared decision making intervention on physician–patient interaction and outcome in fibromyalgia. A qualitative and quantitative one year follow-up of a randomized controlled trial. Patient Education and Counseling, 63, 357–366.
  7. Bitzer, E., Dierks, M., Dörning, H., & Schwartz, F. (1999). Zufriedenheit in der Arztpraxis aus Patientenperspektive - Psychometrische Prüfung eines standardisierten Erhebungsinstrumentes. Zeitschrift für Gesundheitswissenschaften, 7, 196–209.
  8. Bullinger, M., & Kirchberger, I. (1998). SF-36. Fragebogen zum Gesundheitszustand. Göttingen: Hogrefe.
  9. Cox, E. D., Smith, M. A., Brown, R. L., & Fitzpatrick, M. A. (2008). Assessment of the Physician–Caregiver Relationship Scales (PCRS). Patient Education and Counseling, 70, 69–78.
  10. Detmar, S. B., Muller, M. J., Wever, L. D., Schornagel, J. H., & Aaronson, N. K. (2001). The patient–physician relationship. Patient–physician communication during outpatient palliative treatment visits: An observational study. JAMA, 285, 1351–1357.
  11. Di Blasi, Z., Harkness, E., Ernst, E., Georgiou, A., & Kleijnen, J. (2001). Influence of context effects on health outcomes: A systematic review. Lancet, 357, 757–762.
  12. Dobkin, P. L., De Civita, M., Abrahamowicz, M., Bernatsky, S., Schulz, J., Sewitch, M., et al. (2003). Patient–physician discordance in fibromyalgia. Journal of Rheumatology, 30, 1326–1334.
  13. Edwards, A., Elwyn, G., Hood, K., Robling, M., Atwell, C., Holmes-Rovner, M., et al. (2003). The development of COMRADE—a patient-based outcome measure to evaluate the effectiveness of risk communication and treatment decision making in consultations. Patient Education and Counseling, 50, 311–322.
  14. Elwyn, G., Edwards, A., Wensing, M., Hood, K., Atwell, C., & Grol, R. (2003). Shared decision making: Developing the OPTION scale for measuring patient involvement. Quality and Safety in Health Care, 12, 93–99.
  15. Fassaert, T., van Dulmen, S., Schellevis, F., & Bensing, J. (2007). Active listening in medical consultations: Development of the Active Listening Observation Scale (ALOS-global). Patient Education and Counseling, 68, 258–264.
  16. Flood, E. M., Beusterien, K. M., Green, H., Shikiar, R., Baran, R. W., Amonkar, M. M., et al. (2006). Psychometric evaluation of the Osteoporosis Patient Treatment Satisfaction Questionnaire (OPSAT-Q), a novel measure to assess satisfaction with bisphosphonate treatment in postmenopausal women. Health and Quality of Life Outcomes, 4, 42.
  17. Ford, S., Schofield, T., & Hope, T. (2006). Observing decision-making in the general practice consultation: Who makes which decisions? Health Expectations, 9, 130–137.
  18. Frankel, R. M. (2004). Relationship-centered care and the patient–physician relationship. Journal of General Internal Medicine, 19, 1163–1165.
  19. Garratt, A. M., Bjaertnes, O. A., Krogstad, U., & Gulbrandsen, P. (2005). The OutPatient Experiences Questionnaire (OPEQ): Data quality, reliability, and validity in patients attending 52 Norwegian hospitals. Quality and Safety in Health Care, 14, 433–437.
  20. Gericke, C. A., Schiffhorst, G., Busse, R., & Haussler, B. (2004). Ein valides Instrument zur Messung der Patientenzufriedenheit in ambulanter haus- und fachärztlicher Behandlung: Das Qualiskope-A. Gesundheitswesen, 66, 723–731.
  21. Goldman Sher, T., Cella, D., Leslie, W. T., Bonomi, P., Taylor, S. G., IV, & Serafian, B. (1997). Communication differences between physicians and their patients in an oncology setting. Journal of Clinical Psychology in Medical Settings, 4, 281–293.
  22. Gremigni, P., Sommaruga, M., & Peltenburg, M. (2008). Validation of the Health Care Communication Questionnaire (HCCQ) to measure outpatients’ experience of communication with hospital staff. Patient Education and Counseling, 71, 57–64.
  23. Grogan, S., Conner, M., Norman, P., Willits, D., & Porter, I. (2000). Validation of a questionnaire measuring patient satisfaction with general practitioner services. Quality in Health Care, 9, 210–215.
  24. Grol, R., Wensing, M., Mainz, J., Ferreira, P., Hearnshaw, H., Hjortdahl, P., et al. (1999). Patients’ priorities with respect to general practice care: An international comparison. European Task Force on Patient Evaluations of General Practice (EUROPEP). Family Practice, 16, 4–11.
  25. Hagedoorn, M., Uijl, S. G., Van Sonderen, E., Ranchor, A. V., Grol, B. M., Otter, R., et al. (2003). Structure and reliability of Ware’s Patient Satisfaction Questionnaire III: Patients’ satisfaction with oncological care in the Netherlands. Medical Care, 41, 254–263.
  26. Hahn, S. R. (2001). Physical symptoms and physician-experienced difficulty in the physician–patient relationship. Annals of Internal Medicine, 134, 897–904.
  27. Hahn, S. R., Kroenke, K., Spitzer, R. L., Brody, D., Williams, J. B. W., Linzer, M., et al. (1996). The difficult patient: Prevalence, psychopathology, and functional impairment. Journal of General Internal Medicine, 11, 1–8.
  28. Hahn, S. R., Thompson, K. S., Wills, T. A., Stern, V., & Budner, N. S. (1994). The difficult doctor–patient relationship: Somatization, personality and psychopathology. Journal of Clinical Epidemiology, 47, 647–657.
  29. Hendriks, A. A., Vrielink, M. R., van Es, S. Q., De Haes, H. J., & Smets, E. M. (2004). Assessing inpatients’ satisfaction with hospital care: Should we prefer evaluation or satisfaction ratings? Patient Education and Counseling, 55, 142–146.
  30. Holmes-Rovner, M., Kroll, J., Schmitt, N., Rovner, D. R., Breer, M. L., Rothert, M. L., et al. (1996). Patient satisfaction with health care decisions: The satisfaction with decision scale. Medical Decision Making, 16, 58–64.
  31. Jenkinson, C., Coulter, A., & Bruster, S. (2002). The Picker Patient Experience Questionnaire: Development and validation using data from in-patient surveys in five countries. International Journal for Quality in Health Care, 14, 353–358.
  32. Kaplan, S. H., Greenfield, S., & Ware, J. E., Jr. (1989). Assessing the effects of physician–patient interactions on the outcomes of chronic disease. Medical Care, 27, 110–127.
  33. Kjeldmand, D., Holmstrom, I., & Rosenqvist, U. (2006). How patient-centred am I? A new method to measure physicians’ patient-centredness. Patient Education and Counseling, 62, 31–37.
  34. Langewitz, W., Keller, A., Denz, M., Wössmer-Buntschu, B., & Kiss, A. (1995). Patientenzufriedenheits-Fragebogen (PZF): Ein taugliches Mittel zur Qualitätskontrolle der Arzt-Patient-Beziehung? Psychotherapie, Psychosomatik, Medizinische Psychologie, 45, 351–357.
  35. Lerman, C. E., Brody, D. S., Caputo, G. C., Smith, D. G., Lazaro, C. G., & Wolfson, H. G. (1990). Patients’ Perceived Involvement in Care Scale: Relationship to attitudes about illness and medical care. Journal of General Internal Medicine, 5, 29–33.
  36. Löwe, B., Gräfe, K., Quenter, A., Buchholz, C., Zipfel, S., & Herzog, W. (2002). Screening psychischer Störungen in der Primärmedizin: Validierung des “Gesundheitsfragebogens für Patienten” (PHQ-D). Psychotherapie, Psychosomatik, Medizinische Psychologie, 52, 104–105.
  37. Löwe, B., Gräfe, K., Zipfel, S., Spitzer, R. L., Herrmann-Lingen, C., Witte, S., et al. (2003). Detecting panic disorder in medical and psychosomatic outpatients: Comparative validation of the Hospital Anxiety and Depression Scale, the Patient Health Questionnaire, a screening question, and physicians’ diagnosis. Journal of Psychosomatic Research, 55, 515–519.
  38. Löwe, B., Spitzer, R. L., Gräfe, K., Kroenke, K., Quenter, A., Zipfel, S., et al. (2004). Comparative validity of three screening questionnaires for DSM-IV depressive disorders and physicians’ diagnoses. Journal of Affective Disorders, 78, 131–140.
  39. Mead, N., & Bower, P. (2000a). Patient-centredness: A conceptual framework and review of the empirical literature. Social Science and Medicine, 51, 1087–1110.
  40. Mead, N., & Bower, P. (2000b). Measuring patient-centredness: A comparison of three observation-based instruments. Patient Education and Counseling, 39, 71–80.
  41. Mead, N., & Bower, P. (2002). Patient-centred consultations and outcomes in primary care: A review of the literature. Patient Education and Counseling, 48, 51–61.
  42. Mercer, S. W., Maxwell, M., Heaney, D., & Watt, G. C. (2004). The consultation and relational empathy (CARE) measure: Development and preliminary validation and reliability of an empathy-based consultation process measure. Family Practice, 21, 699–705.
  43. Musch, J., Brockhaus, R., & Bröder, A. (2002). Ein Inventar zur Erfassung von zwei Faktoren sozialer Erwünschtheit. Diagnostica, 48, 121–129.
  44. Nicolai, J., Demmel, R., & Hagen, J. (2007). Rating Scales for the Assessment of Empathic Communication in Medical Interviews (REM): Scale development, reliability, and validity. Journal of Clinical Psychology in Medical Settings, 14, 367–375.
  45. Nordyke, R. J., Chang, C. H., Chiou, C. F., Wallace, J. F., Yao, B., & Schwartzberg, L. S. (2006). Validation of a patient satisfaction questionnaire for anemia treatment, the PSQ-An. Health and Quality of Life Outcomes, 4, 28.
  46. Ong, L. M., de Haes, J. C., Hoos, A. M., & Lammes, F. B. (1995). Doctor–patient communication: A review of the literature. Social Science and Medicine, 40, 903–918.
  47. Rimal, R. N. (2001). Analyzing the physician–patient interaction: An overview of six methods and future research directions. Health Communication, 13, 89–99.
  48. Rosenberg, E. E., Lussier, M. T., & Beaudoin, C. (1997). Lessons for clinicians from physician–patient communication literature. Archives of Family Medicine, 6, 279–283.
  49. Ross, C. K., Steward, C. A., & Sinacore, J. M. (1995). A comparative study of seven measures of patient satisfaction. Medical Care, 33, 392–406.
  50. Roter, D. L., Stewart, M., Putnam, S. M., Lipkin, M., Jr., Stiles, W., & Inui, T. S. (1997). Communication patterns of primary care physicians. JAMA, 277, 350–356.
  51. Safran, D. G., Karp, M., Coltin, K., Chang, H., Li, A., Ogren, J., et al. (2006). Measuring patients’ experiences with individual primary care physicians. Results of a statewide demonstration project. Journal of General Internal Medicine, 21, 13–21.
  52. Safran, D. G., Taira, D. A., Rogers, W. H., Kosinski, M., Ware, J. E., & Tarlov, A. R. (1998). Linking primary care performance to outcomes of care. Journal of Family Practice, 47, 213–220.
  53. Schneider, J., Kaplan, S. H., Greenfield, S., Li, W., & Wilson, I. B. (2004). Better physician–patient relationships are associated with higher reported adherence to antiretroviral therapy in patients with HIV infection. Journal of General Internal Medicine, 19, 1096–1103.
  54. Sijtsma, K., & van der Ark, L. A. (2003). Investigation and treatment of missing item scores in test and questionnaire data. Multivariate Behavioral Research, 38, 505–528.
  55. Simon, D., Schorr, G., Wirtz, M., Vodermaier, A., Caspari, C., Neuner, B., et al. (2006). Development and first validation of the Shared Decision-Making Questionnaire (SDM-Q). Patient Education and Counseling, 63, 319–327.
  56. Sitzia, J. (1999). How valid and reliable are patient satisfaction data? An analysis of 195 studies. International Journal for Quality in Health Care, 11, 319–328.
  57. Sixma, H. J., Kerssens, J. J., Campen, C. V., & Peters, L. (1998). Quality of care from the patients’ perspective: From theoretical concept to a new measuring instrument. Health Expectations, 1, 82–95.
  58. Spitzer, R. L., Kroenke, K., & Williams, J. B. (1999). Validation and utility of a self-report version of PRIME-MD: The PHQ primary care study. Primary Care Evaluation of Mental Disorders. Patient Health Questionnaire. JAMA, 282, 1737–1744.
  59. Stewart, M., Brown, J. B., Donner, A., McWhinney, I. R., Oates, J., Weston, W. W., et al. (2000). The impact of patient-centered care on outcomes. Journal of Family Practice, 49, 796–804.
  60. Swenson, S. L., Buell, S., Zettler, P., White, M., Ruston, D. C., & Lo, B. (2004). Patient-centered communication: Do patients really prefer it? Journal of General Internal Medicine, 19, 1069–1079.
  61. Tiernan, E. (2003). Communication training for professionals. Supportive Care in Cancer, 11, 758–762.
  62. Towle, A., Godolphin, W., Grams, G., & Lamarre, A. (2006). Putting informed and shared decision making into practice. Health Expectations, 9, 321–332.
  63. van Ginkel, J. R., & van der Ark, L. A. (2005). SPSS syntax for missing value imputation in test and questionnaire data. Applied Psychological Measurement, 29, 152–153.
  64. Weiner, S. J., Barnet, B., Cheng, T. L., & Daaleman, T. P. (2005). Processes for effective communication in primary care. Annals of Internal Medicine, 142, 709–714.
  65. Williams, B., Coyle, J., & Healy, D. (1998). The meaning of patient satisfaction: An explanation of high reported levels. Social Science and Medicine, 47, 1351–1359.
  66. Williams, S., Weinman, J., & Dale, J. (1998). Doctor–patient communication and patient satisfaction: A review. Family Practice, 15, 480–492.
  67. Wolf, M. H., Putnam, S. M., James, S. A., & Stiles, W. B. (1978). The Medical Interview Satisfaction Scale: Development of a scale to measure patient perceptions of physician behavior. Journal of Behavioral Medicine, 1, 391–401.
  68. Zandbelt, L. C., Smets, E. M., Oort, F. J., Godfried, M. H., & de Haes, H. C. (2004). Satisfaction with the outpatient encounter: A comparison of patients’ and physicians’ views. Journal of General Internal Medicine, 19, 1088–1095.
  69. Zigmond, A. S., & Snaith, R. P. (1983). The Hospital Anxiety and Depression Scale. Acta Psychiatrica Scandinavica, 67, 361–370.

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  • Christiane Bieber (1)
  • Knut G. Müller (1)
  • Jennifer Nicolai (1)
  • Mechthild Hartmann (1)
  • Wolfgang Eich (1)

  1. Department of Psychosomatic and General Internal Medicine, Centre for Psychosocial Medicine, University of Heidelberg, Heidelberg, Germany
