How Does Your Doctor Talk with You? Preliminary Validation of a Brief Patient Self-Report Questionnaire on the Quality of Physician–Patient Interaction
The quality of physician–patient interaction is increasingly being recognized as an essential component of effective treatment. The present article reports on the development and validation of a brief patient self-report questionnaire (QQPPI) that assesses the quality of physician–patient interactions. Data were gathered from 147 patients and 19 physicians immediately after consultations in a tertiary care outpatient setting. The QQPPI displayed good psychometric properties, with high internal consistency and good item characteristics. The QQPPI total score varied between physicians and was independent of patients’ gender, age, and education. The QQPPI showed high correlations with other quality-related measures and was influenced neither by social desirability nor by patients’ clinical characteristics. The QQPPI is a brief patient self-report questionnaire that allows assessment of the quality of physician–patient interactions during routine ambulatory care. It can also be used to evaluate physician communication training programs or for educational purposes.
Keywords: Quality of physician–patient interaction · Validation of questionnaire · Quality of health care · Patient-centered care · Patient involvement · Shared decision-making
In recent decades, physician–patient interactions in outpatient settings have become a focus of scientific interest (Barr, 2004; Detmar, Muller, Wever, Schornagel, & Aaronson, 2001; Ford, Schofield, & Hope, 2006; Frankel, 2004; Gericke, Schiffhorst, Busse, & Haussler, 2004; Kaplan, Greenfield, & Ware, 1989; Kjeldmand, Holmstrom, & Rosenqvist, 2006; Langewitz, Keller, Denz, Wössmer-Buntschu, & Kiss, 1995; Ong, de Haes, Hoos, & Lammes, 1995; Rimal, 2001; Roter et al., 1997; Safran et al., 2006). There is general agreement that high quality physician–patient interactions should be considered an asset in themselves (Goldman Sher et al., 1997; Mead & Bower, 2000b, 2002) and that they also appear to be a precondition for an effective treatment (Di Blasi, Harkness, Ernst, Georgiou, & Kleijnen, 2001; Swenson et al., 2004). Positive physician–patient interactions have favorable effects on patient satisfaction (Rosenberg, Lussier, & Beaudoin, 1997; Williams, Weinman, & Dale, 1998), treatment adherence (Anstiss, 2009; Schneider, Kaplan, Greenfield, Li, & Wilson, 2004), and on medical outcomes (Safran et al., 1998; Stewart et al., 2000).
Communication training programs for physicians aim at improving the quality of the physician–patient interaction to facilitate information exchange, patient participation, shared decision-making, and the development of a reliable physician–patient relationship (Bieber et al., 2006, 2008; Tiernan, 2003; Towle, Godolphin, Grams, & Lamarre, 2006; Weiner, Barnet, Cheng, & Daaleman, 2005). To monitor the success of such communication training programs and to allow for individual feedback to physicians, an instrument assessing the quality of physician–patient interaction is desirable. In addition, such an instrument should ideally be brief enough to allow its use in routine care. However, no brief patient self-report questionnaire with adequate psychometric properties for measuring the quality of physician–patient interaction is currently available.
Several extensive patient self-report instruments focused on the general quality of care are currently available, which tap into the quality of physician–patient interactions with one of their several subscales (Bitzer, Dierks, Dörning, & Schwartz, 1999; Gericke et al., 2004; Grol et al., 1999; Safran et al., 2006). However, these instruments are rather extensive and therefore not practical for use in routine quality assessment or as add-on questionnaires in clinical trials. Furthermore, it is doubtful whether the specific subscales of interest can be used independently of the whole instrument. Additionally, because these instruments were developed in the context of large survey research projects, there is currently no published evidence for their validity or reliability in small study samples.
When looking for an instrument to assess the quality of physician–patient interaction, it is important to recognize other quality of care measures that assess related concepts (Baker, 1990; Barr, 2004; Gericke et al., 2004; Gremigni, Sommaruga, & Peltenburg, 2008; Grogan, Conner, Norman, Willits, & Porter, 2000; Hendriks, Vrielink, van Es, De Haes, & Smets, 2004; Mead & Bower, 2002; Mercer, Maxwell, Heaney, & Watt, 2004; Nicolai, Demmel, & Hagen, 2007; Rimal, 2001; Ross, Steward, & Sinacore, 1995; Safran et al., 2006; Sixma, Kerssens, Campen, & Peters, 1998; Wolf, Putnam, James, & Stiles, 1978). Patient satisfaction is the most frequently assessed surrogate parameter in quality of care assessment (Baker, 1990; Gericke et al., 2004; Grogan et al., 2000; Langewitz et al., 1995; Ross et al., 1995; Wolf et al., 1978; Zandbelt, Smets, Oort, Godfried, & de Haes, 2004), but numerous related parameters exist, including satisfaction with decisions reached (Holmes-Rovner et al., 1996), patient-centeredness (Kjeldmand et al., 2006; Mead & Bower, 2000a), empathy (Mercer et al., 2004; Nicolai et al., 2007), active listening (Fassaert, van Dulmen, Schellevis, & Bensing, 2007), patient-empowerment, patient involvement (Lerman et al., 1990), shared decision-making (Edwards et al., 2003; Elwyn et al., 2003; Simon et al., 2006), patients’ general experience with care (Jenkinson, Coulter, & Bruster, 2002), as well as issues pertaining to the organization of the consultation (Grogan et al., 2000; Safran et al., 2006).
There are, however, several reservations regarding the use of these surrogate parameters to assess the quality of physician–patient interaction. Arguments against the use of patient satisfaction questionnaires state that they rarely assess observable physician behavior, often show a narrow range of scores with high ceiling effects (Garratt, Bjaertnes, Krogstad, & Gulbrandsen, 2005; Ross et al., 1995), barely detect any quality improvement, and are often tailored for an exclusive, disease-specific use (e.g., Barron & Kotak, 2006; Flood et al., 2006; Hagedoorn et al., 2003; Nordyke et al., 2006). Patient self-report instruments assessing shared decision behavior, as another possible surrogate parameter for interaction quality (Edwards et al., 2003; Simon et al., 2006), are narrowly and exclusively focused on this specific aspect of physician–patient interaction and therefore also seem inappropriate for evaluating comprehensive physician communication training programs.
Another aspect one should consider when assessing the quality of the physician–patient interaction is the question of perspective. It is possible to assess the patient’s perspective (Bitzer et al., 1999; Gericke et al., 2004; Grogan et al., 2000; Langewitz et al., 1995; Safran et al., 2006), the physician’s perspective (Hahn, 2001; Kjeldmand et al., 2006), or use observation-based process measures to evaluate recorded consultations by independent raters (Cox, Smith, Brown, & Fitzpatrick, 2008; Elwyn et al., 2003; Mead & Bower, 2000a). The latter are considered the gold standard because they are the most objective. However, they tend to be too complex and time consuming to implement in routine care (Mead & Bower, 2000a). Physician self-report scales are prone to bias because they reflect the physicians’ subjective perception of their own performance (Kjeldmand et al., 2006). Therefore, patient self-report measures constitute a good compromise in terms of reliability and feasibility and are increasingly being used for quality assessment and improvement of care.
The above considerations led us to conclude that there is a need to develop and validate a brief new instrument to directly assess the quality of physician–patient interactions from the patient’s perspective. Such an instrument should meet several requirements. First of all, it should transcend satisfaction ratings that have been criticized as too superficial because they tend to overestimate the quality of care and often do not correspond with objective and observable characteristics of the consultation (Langewitz et al., 1995; Ross et al., 1995; Williams, Coyle, & Healy, 1998). Ideally, such an instrument should rather focus on observable physician behaviors that are taught in physician communication training programs and that are associated with high-quality care, such as relationship building, patient involvement, and sharing of decisions. As a further requirement, the instrument should not be biased by patients’ social desirability considerations, socio-demographic, or clinical variables. Its briefness is another necessary condition to guarantee acceptability and to allow for its potential implementation in routine care.
The aim of the present paper is to describe the development and preliminary validation of such a brief, 14-item, patient self-report instrument, the Questionnaire on the Quality of Physician–Patient Interaction (QQPPI). This questionnaire is practicable and efficient for use in routine care and it can also be used to evaluate physician communication training programs. During the validation process, special attention was paid to controlling for variables known to impact the internal validity of rating scales, such as patients’ tendencies toward social desirability or the instrument’s susceptibility to bias by mental co-morbidities (Hahn et al., 1996; Ross et al., 1995).
Study Sample and Procedures
The present study was carried out in four outpatient clinics of the Medical University Hospital of Heidelberg (outpatient clinics for rheumatology, pain, general internal medicine, and diabetes) between July 2003 and March 2004. Study approval was obtained from the Heidelberg University ethical review board. On pre-selected days, all patients scheduled for a consultation were asked to participate in the study. Patients were eligible if they were between 18 and 75 years of age, had sufficient knowledge of the German language, and had no cognitive or visual impairments. Patients were approached in the waiting rooms of the four outpatient clinics by study personnel and asked to participate. Participation was voluntary and anonymous, and all participants gave written informed consent. Participants completed a set of questionnaires directly before the consultation (T0) and immediately after the consultation (T1), while still in the waiting room area. Three weeks after the consultation (T2), retest questionnaires were mailed to the participants, filled in at home, and returned by mail in a prepaid envelope.
Additionally, physicians working in the outpatient clinics completed a short set of questionnaires immediately after the consultations (T1).
Table 1 Overview of instruments used in the present study at defined assessment points (T1 = immediately after the consultation; T2 = retest after 3 weeks)

Patient questionnaires:
- Quality of physician–patient interaction: Questionnaire on the Quality of Physician–Patient Interaction (QQPPI; also administered at the 3-week retest)
- Quality of health care: single item (QHC)
- Patient satisfaction with health care: single item (PSHC)
- Involvement in health care: Perceived Involvement in Care Scale (PICS)
- Satisfaction with decision: Satisfaction with Decision Scale (SWD)
- Social desirability: Balanced Inventory of Desirable Responding (BIDR)
- Basic documentation: taken from Psy-BADO
- State of health: single item
- Quality of life: Short Form Scale (SF-12)
- Depression and somatization: Patient Health Questionnaire, German version (PHQ-D)
- Anxiety: anxiety subscale of the Hospital Anxiety and Depression Scale (HADS)

Physician questionnaires:
- Diagnosis and presenting complaint: free text field
- Presumed patient satisfaction: single item (PPS)
- Difficulty of the relationship: Difficult Doctor–Patient Relationship Questionnaire (DDPRQ-10)
The Questionnaire on the Quality of Physician–Patient Interaction
The Questionnaire on the Quality of Physician–Patient Interaction (QQPPI; see Appendix) was developed to directly assess the quality of the physician–patient interaction during a consultation. It places special emphasis on several aspects important to building a good physician–patient relationship, such as information exchange, patient involvement, and the sharing of decisions. It deliberately does not address organizational aspects around the consultation that may be important for mere satisfaction ratings, such as waiting time, premises of the clinic, or interaction with non-medical staff.
Item generation. In a preliminary survey, exploratory, in-depth interviews were conducted with 20 patients with gastroenterological, cardiological, endocrinological, or rheumatological diagnoses who were attending an outpatient clinic for general internal medicine. Patients were asked to describe their expectations with regard to an adequate physician–patient relationship. This information allowed an expert panel to generate possible questionnaire items. In addition to this, existing English and German questionnaires were screened for appropriate items. All items were scrutinized for ambiguity and repetition. Overall, nine items from the in-depth interviews, two items from the German version of the Patient Satisfaction Questionnaire (PSQ) (Langewitz et al., 1995), and three items from the Grogan Patient Satisfaction Questionnaire (Grogan et al., 2000) were included in the final version of the questionnaire. The selected items were considered to cover the main aspects of the physician–patient interaction. All 14 items were rephrased to ensure that they were worded positively, avoided statements containing double-negations, and did not confuse participants with a change of meaning in the response categories. Items in the questionnaire are to be rated on a 5-point Likert-scale from ‘I do not agree’ to ‘I fully agree’ (see Appendix). Content validity was addressed by using patient-generated issues from the initial interviews and having the instrument reviewed by an expert panel of physicians and patients to determine if each item captured the intended domain.
Assessment of Further Quality-Associated Measures
Several other quality-associated measures were used to assess the convergent validity of the QQPPI. This was necessary to confirm that the QQPPI measures a related, yet not identical, construct. Consequently, we expected moderate to substantial correlations (.3–.7) between the QQPPI and these other quality-associated measures.
Patients’ global assessment of quality of health care (QHC) and patient satisfaction with health care (PSHC) were both assessed with a single item on a 5-point Likert scale.
Involvement in health care decisions was measured with two subscales of the Perceived Involvement in Care Scale (PICS) (Lerman et al., 1990), which comprises a scale assessing doctor facilitation of patient involvement (doctor facilitation scale; PICS-A) and a scale assessing the level of information exchange (patient information scale; PICS-B).
Satisfaction with the treatment decision was measured by the German version of the Satisfaction with Decision Scale (SWD) (Holmes-Rovner et al., 1996).
Social desirability is a possible threat to the validity of a measure; therefore, social desirability was assessed in this study by the impression management subscale of the German version of the Balanced Inventory of Desirable Responding (BIDR) (Musch, Brockhaus, & Bröder, 2002).
Assessment of clinical characteristics. Patients’ mental problems and other clinical characteristics were assessed in the present study because they can bias quality evaluations (Hahn et al., 1996). The patients’ functional capacity and state of health were each globally assessed with single-item measures using 5-point Likert scales. The patients’ quality of life was assessed with the 12-item version of the Short Form Scale (SF-12) (Bullinger & Kirchberger, 1998).
The German version of the Patient Health Questionnaire (PHQ-D) (Löwe et al., 2002; Spitzer, Kroenke, & Williams, 1999) was used to screen participants for the presence and severity of mental problems. The PHQ-D screens for mental disorders according to DSM-IV criteria and assesses, among other constructs, the levels of somatization and depression (Löwe et al., 2003, 2004). The seven-item anxiety subscale of the Hospital Anxiety and Depression Scale (HADS) (Zigmond & Snaith, 1983) was used to assess each patient’s level of anxiety.
The physicians’ set of questionnaires asked for the patients’ diagnoses and the reasons for consultation. It globally assessed the presumed patient satisfaction (PPS) with a single item on a 5-point Likert scale. The Difficult Doctor–Patient Relationship Questionnaire (DDPRQ-10) (Hahn et al., 1996) was used to assess the physicians’ view of the difficulty of interacting with the patient.
Descriptive statistics were used to characterize the study sample. To obtain a QQPPI total score, the mean value of all QQPPI item scores was calculated. A maximum of three missing values was considered acceptable for the QQPPI. Missing values were estimated by means of two-way imputation (Sijtsma & van der Ark, 2003; van Ginkel & van der Ark, 2005). To assess the item and scale characteristics of the QQPPI, item difficulty and item-total correlation were calculated.
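The scoring and imputation steps described above can be sketched as follows. This is a minimal illustration with hypothetical data, assuming the common variant of two-way imputation in which a missing entry is estimated as person mean + item mean − overall mean (all computed from observed responses only) and rounded back onto the 1–5 response scale:

```python
import numpy as np

def two_way_impute(scores):
    """Two-way imputation in the spirit of Sijtsma & van der Ark (2003):
    a missing Likert response is replaced by
    person mean + item mean - overall mean (observed entries only)."""
    x = np.asarray(scores, dtype=float)            # rows = patients, cols = items
    person_mean = np.nanmean(x, axis=1, keepdims=True)
    item_mean = np.nanmean(x, axis=0, keepdims=True)
    overall_mean = np.nanmean(x)
    estimate = person_mean + item_mean - overall_mean
    filled = np.where(np.isnan(x), estimate, x)
    return np.clip(np.rint(filled), 1, 5)          # keep values on the 1-5 scale

# Hypothetical example: 3 patients x 4 items, one missing response
data = [[4, 5, np.nan, 4],
        [2, 3, 2, 2],
        [5, 5, 4, 5]]
completed = two_way_impute(data)
total_score = completed.mean(axis=1)               # total score = mean of item scores
```

In practice one would also discard questionnaires with more than three missing values before imputing, as described above.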
To investigate the underlying structure of the QQPPI, the items were subjected to factor analysis using maximum likelihood factor extraction with oblique rotations (Promax). Three criteria were used to determine the number of factors to extract: the scree plot, the Kaiser–Gutman Rule, and solution interpretability.
Internal consistency and test–retest reliability were used as indicators of the QQPPI reliability. Internal consistency was assessed by means of Cronbach’s alpha, and test–retest reliability was assessed by means of Pearson correlation coefficients.
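The two reliability indices can be sketched as follows, using hypothetical toy data (not the study's) to illustrate the computations:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total)."""
    x = np.asarray(items, dtype=float)             # rows = patients, cols = items
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Sanity check with hypothetical data: three perfectly parallel items give alpha = 1
parallel_items = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]])
alpha = cronbach_alpha(parallel_items)

# Test-retest reliability: Pearson correlation of T1 and T2 total scores
t1_totals = np.array([3.2, 4.1, 2.8, 4.6])
t2_totals = np.array([3.0, 4.3, 2.9, 4.4])
retest_r = np.corrcoef(t1_totals, t2_totals)[0, 1]
```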
Analyses of variance (ANOVAs) were calculated to detect whether there were differences in QQPPI total scores between the four different outpatient clinics and between the 19 participating physicians.
Convergent validity of the QQPPI was assessed via Spearman correlations of the QQPPI total score with global ratings of QHC, PSHC, SWD, and PICS. To identify possible confounders, correlations between the QQPPI total score and clinical characteristics (PHQ-D, SF-12, etc.) were also calculated.
The influence of social desirability was assessed by calculating Spearman correlation coefficients between the BIDR score and the QQPPI total score, and between the BIDR score and QQPPI single item scores.
To assess whether physicians were able to estimate their patients’ levels of satisfaction and how their own ratings corresponded, correlations between physicians’ ratings (DDPRQ, PPS) and patients’ ratings (QQPPI, PSHC, QHC) were calculated. We analyzed how often physicians estimated their patients’ satisfaction levels correctly.
All statistical analyses were performed with the SPSS (Version 15.0) software package and SAS-System Release 8.2.
Table 2 Demographic and clinical characteristics of study participants (study sample, N = 147; retest sample, N = 122). Variables reported: age in years (Mean, SD); married, spouse present (%); ≥10 years of education (%); employment status (full- or part-time working, retired due to age, retired due to disease; %); functional capacity (Mean, SD); state of health (Mean, SD); PHQ-D level of somatization (%); PHQ-D level of depression severity (%); HADS-D level of anxiety (Mean, SD); SF-12 physical and mental health status (Mean, SD).
The demographic and clinical characteristics of the participants are shown in Table 2. Participant characteristics did not significantly differ between the four outpatient clinics.
Descriptive Item Characteristics and Scale Characteristics of the QQPPI
Table 3 Descriptive statistics, factor loadings, communalities, and percent of variance for maximum likelihood extraction on the 14-item QQPPI (N = 147). The 14 QQPPI items:
1. The physician seemed to be genuinely interested in my problems
2. The physician gave me detailed information about the available treatment options
3. I felt I could have trusted the physician with my private problems
4. The physician and I made all treatment decisions together
5. The physician’s explanations were easy to understand
6. The physician spent sufficient time on my consultation
7. The physician spoke to me in detail about the risks and side-effects of the proposed treatment
8. The physician understood my needs and problems, and took them seriously
9. The physician did all he/she could to put me at ease
10. The doctor asked about how my illness affects my everyday life
11. The doctor gave me enough chance to talk about all my problems
12. The physician respects the fact that I may have a different opinion regarding treatment
13. The physician gave me a thorough examination
14. The physician gave me detailed information about my illness
Mean QQPPI total scores for the 19 participating physicians ranged from 2.0 to 5.0 (SDs = .40–1.10). Mean QQPPI total scores in the four outpatient clinics ranged from 3.46 to 3.85 (SDs = .86–1.01), F(3, 143) = .80, ns. QQPPI total scores for consultations with female physicians (M = 3.76, SD = .27) did not differ significantly from those with male physicians (M = 3.73, SD = .89), F(1, 17) = .01, ns.
Factor Structure of the QQPPI
To examine the factors underlying the QQPPI, an exploratory factor analysis was conducted using the maximum-likelihood method of factor extraction. The Kaiser–Meyer–Olkin measure of sampling adequacy was .94, indicating that the correlation matrix was appropriate for factor analysis. Bartlett’s test of sphericity yielded an approximate chi-square of 3883.85, p < .001, indicating that the correlation matrix differs significantly from an identity matrix and is thus suitable for factor analysis. One factor with an eigenvalue greater than 1 emerged, accounting for 60.11% of the total variance. Factor loadings ranged from .64 to .84, and communalities ranged from .40 to .70 (see Table 3).
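For illustration, Bartlett's test of sphericity can be computed directly from raw data with the standard formula χ² = −(n − 1 − (2p + 5)/6)·ln|R|, df = p(p − 1)/2. The snippet below uses simulated data with one shared factor (an assumption for the example, not the study data); the KMO measure, which additionally requires partial correlations, is omitted for brevity:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: tests whether the correlation matrix
    is an identity matrix (in which case factor analysis would be pointless)."""
    x = np.asarray(data, dtype=float)
    n, p = x.shape
    R = np.corrcoef(x, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)

# Simulated example: 147 "patients" x 14 "items" driven by one shared factor
rng = np.random.default_rng(0)
latent = rng.normal(size=(147, 1))
items = latent + 0.5 * rng.normal(size=(147, 14))
stat, p_value = bartlett_sphericity(items)   # large statistic, very small p
```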
Reliability Analysis of the QQPPI
Cronbach’s alpha for the overall scale was .95. The test–retest reliability of the QQPPI over a 3-week retest period was r = .59, suggesting that the QQPPI score was relatively stable over time. To check whether the retest values of the QQPPI were biased by a time effect, all calculations were repeated with the retest scores of the QQPPI. No significant differences were found.
Convergent Validity of the QQPPI
Table 4 Correlations of patients’ QQPPI total scores and other quality-associated measures with clinical characteristics and physicians’ ratings at T1. Columns: physician–patient interaction (QQPPI), patient satisfaction with health care (PSHC), and quality of health care (QHC). Rows: patient satisfaction with health care (PSHC); quality of health care (QHC); satisfaction with decision (SWD); patients’ perceived involvement in care, doctor facilitation scale (PICS-A); patients’ perceived involvement in care, patient information scale (PICS-B); functional capacity (single item); state of health (single item); physical health status (SF-12); mental health status (SF-12); difficulty in relationship (DDPRQ-10); presumed patient satisfaction (PPS).
The Spearman correlation coefficient between the QQPPI total score and social desirability, as measured with the German version of the BIDR, was r = −.05. The correlations between single QQPPI items and the BIDR score ranged from r = −.11 to r = .05. This indicates that the QQPPI is not severely biased by social desirability considerations.
Influence of Clinical Characteristics on Quality Assessment
To identify systematic bias caused by patients’ health condition, we analyzed correlations of the quality-associated measures (QQPPI, PSHC, and QHC) with clinical characteristics (functional capacity, state of health, mental co-morbidity, and quality of life) (see Table 4). Whereas the QQPPI total score showed no significant correlation with any of these clinical characteristics, the single-item measures PSHC and QHC showed low to moderate correlations with functional capacity, state of health, and quality of life, especially quality of life related to mental health status (SF-12). These results indicate that the QQPPI is less prone to systematic bias caused by patients’ health condition than these other two quality-associated measures.
Comparison of Physicians’ and Patients’ Perspectives
Physicians’ assessments regarding the difficulty of the physician–patient interaction (DDPRQ) did not correlate significantly with patients’ QQPPI scores, or with patients’ QHC or PSHC ratings (see Table 4). Thus, patients whom their physicians perceive as difficult may nonetheless rate the quality of the physician–patient interaction highly. Similarly, physicians’ PPS assessments did not correlate significantly with QQPPI scores or PSHC ratings. Physicians estimated their patients’ satisfaction correctly in only 27.3% of the consultations, and underestimated their patients’ satisfaction with health care in 56.7% of the consultations.
The aim of the present study was to develop and validate a brief patient self-report instrument on the quality of the physician–patient interaction; this was intended as an add-on questionnaire for routine use in ambulatory quality assessment. It had to transcend the satisfaction ratings that are often used as surrogate parameters for the quality of physician–patient interaction because these are known to have some shortcomings (Garratt et al., 2005; Ross et al., 1995; Sitzia, 1999). Besides its use in routine care, it is also intended for evaluating physician communication training programs, with a focus on relationship building, including patient involvement, information exchange, and shared decision-making. Therefore, it focuses on the physician–patient interaction being central to the consultation, and disregards organizational and service issues around the consultation.
The main results of our study can be summarized as follows. First, the QQPPI showed good psychometric properties with good item and scale characteristics. In contrast to other measures, the QQPPI total score was independent of patients’ gender, age, and educational level (Garratt et al., 2005; Ross et al., 1995). Second, the results of the factor analysis suggest that the quality of physician–patient interaction, as measured by the QQPPI, is a distinct, uni-dimensional construct. Third, reliability analysis revealed that the QQPPI is internally consistent (Cronbach’s α = .95); its internal consistency is even slightly superior to that of established patient satisfaction scales (Gericke et al., 2004; Grogan et al., 2000; Langewitz et al., 1995). Fourth, evidence was obtained for the convergent validity of the QQPPI with various other quality-associated questionnaires. Substantial associations were found between the QQPPI and patients’ perceived involvement in care (PICS B) (r = .64), satisfaction with decision (SWD) (r = .59), and quality of health care (QHC) (r = .54). A moderate correlation was found with patient satisfaction with health care (PSHC) (r = .38). This indicates that the QQPPI assesses a related but distinct construct that is closer to quality than to satisfaction ratings.
During the validation process we paid special attention to controlling for variables known to impact the internal validity of rating scales, namely patients’ tendencies toward social desirability and the instrument’s susceptibility to bias by patients’ health status. Previous research has shown that patients with mental problems have more difficult interactions with their physicians (Hahn, Thompson, Wills, Stern, & Budner, 1994) and may therefore be more critical in their evaluations. Quality assessment may also be biased by other clinical characteristics, such as functional capacity, state of health, and quality of life. In our study, we were able to show that the QQPPI is influenced neither by social desirability considerations nor by the patients’ health status. This makes the QQPPI superior to simple quality or satisfaction assessments, which showed low correlations with patients’ health status.
The present study suggests that physicians are unable to reliably predict their patients’ satisfaction and quality ratings. They seem to be more critical of their own performance than their patients are: in more than half of the consultations, physicians underestimated their patients’ satisfaction and quality evaluations. This corresponds to the findings of other studies (Dobkin et al., 2003; Zandbelt et al., 2004) and highlights the importance of assessing the quality of physician–patient interactions from multiple perspectives. It also stresses the importance of directly asking patients, because physicians are not very likely to guess their patients’ appraisals correctly. It has been suggested that this discordance might be explained by social desirability considerations on the part of physicians, together with patients’ dependency considerations (Zandbelt et al., 2004), which may also apply to our sample.
There was no systematic difference in the QQPPI total scores between the four different outpatient clinics. Interestingly, QQPPI total scores differed significantly at the level of the 19 physicians involved in the consultations, indicating that patients perceived considerable variation among their physicians with regard to their interaction qualities. However, it is beyond the scope of the present study to analyze the QQPPI’s ability to differentiate between better and poorer communicators among physicians. Further studies concurrently assessing strictly objective measures of physician–patient interaction are needed to show whether the QQPPI might be used to identify physicians who particularly require a communication training program.
The present study shows the usefulness and applicability of the QQPPI for assessing the quality of physician–patient interactions during routine ambulatory care. In addition, the QQPPI is able to discriminate between the communication performances of individual physicians. Future research should also address the responsiveness of the instrument to change, e.g., does it measure improvements in the quality of the physician–patient interaction after communication skills training? In the meantime, the QQPPI has been implemented in a randomized controlled trial evaluating a comprehensive physician communication training program in the context of chronic pain (Bieber et al., 2006, 2008). Its results support the validity of using the QQPPI as an outcome measure for communication training programs, and they further demonstrate that it allows differentiation between the interaction skills of individual physicians.
There are other potential applications for the QQPPI. Note that the instrument’s items were derived from in-depth interviews with patients who described their expectations with regard to an adequate physician–patient relationship. Consequently, it incorporates well the essence of patients’ wishes and expectations. One might think of using it as a “teaching” tool in medical schools to impart desirable physician behavior step by step. Instead of basing standardized patients’ evaluation of the student doctor on subjective comments, the QQPPI might be used as a more objective adjunct to the evaluation.
It might also be worth considering expanding the use of the QQPPI to evaluate physicians and linking it to incentives or disincentives with respect to yearly goals. Depending on the type of physician practice, the QQPPI could be adapted and complemented in the future.
Some limitations should be considered when interpreting the results of the present study. First, the QQPPI was developed and evaluated in a university outpatient setting, which is not necessarily representative of routine ambulatory care provided by office-based physicians. However, we expect only small setting influences on the direct quality of physician–patient interactions, in contrast to broader quality assessments taking into account organizational factors, such as waiting time, facilities, access, or competence of medical assistants (Gericke et al., 2004; Grogan et al., 2000).
A second limitation is the reliance on physicians' and patients' self-reports. Because self-report data may be influenced by global impressions unrelated to the actual quality of the physician–patient interaction, a comparison of QQPPI ratings with objective observer-based ratings of the interaction will be necessary. We are currently addressing this question in another study.
In summary, the QQPPI is a brief, valid, and reliable patient self-report instrument that allows the efficient assessment of the quality of the physician–patient interaction in outpatient settings. It can be used in routine care and for evaluating physician communication training programs. Because QQPPI scores are independent of patient characteristics (i.e., age, gender, education) and are not confounded by social desirability or health status, the instrument is apt to identify genuine determinants of physician–patient interactions.
We would like to thank all participating patients and physicians.
- Barron, R., & Kotak, A. (2006). Development of a patient satisfaction with treatment questionnaire for benign prostatic hyperplasia (BPH-PSTQ). Value in Health, 9, A55.
- Bieber, C., Muller, K. G., Blumenstiel, K., Hochlehnert, A., Wilke, S., Hartmann, M., et al. (2008). A shared decision-making communication training program for physicians treating fibromyalgia patients: Effects of a randomized controlled trial. Journal of Psychosomatic Research, 64, 13–20.
- Bieber, C., Müller, K. G., Blumenstiel, K., Schneider, A., Richter, A., Wilke, S., et al. (2006). Long-term effects of a shared decision making intervention on physician–patient interaction and outcome in fibromyalgia. A qualitative and quantitative one year follow-up of a randomized controlled trial. Patient Education and Counseling, 63, 357–366.
- Bitzer, E., Dierks, M., Dörning, H., & Schwartz, F. (1999). Zufriedenheit in der Arztpraxis aus Patientenperspektive - Psychometrische Prüfung eines standardisierten Erhebungsinstrumentes [Patient satisfaction in medical practice: Psychometric evaluation of a standardized survey instrument]. Zeitschrift für Gesundheitswissenschaften, 7, 196–209.
- Bullinger, M., & Kirchberger, I. (1998). SF-36. Fragebogen zum Gesundheitszustand [SF-36 health survey questionnaire]. Göttingen: Hogrefe.
- Edwards, A., Elwyn, G., Hood, K., Robling, M., Atwell, C., Holmes-Rovner, M., et al. (2003). The development of COMRADE—a patient-based outcome measure to evaluate the effectiveness of risk communication and treatment decision making in consultations. Patient Education and Counseling, 50, 311–322.
- Flood, E. M., Beusterien, K. M., Green, H., Shikiar, R., Baran, R. W., Amonkar, M. M., et al. (2006). Psychometric evaluation of the Osteoporosis Patient Treatment Satisfaction Questionnaire (OPSAT-Q), a novel measure to assess satisfaction with bisphosphonate treatment in postmenopausal women. Health and Quality of Life Outcomes, 4, 42.
- Grol, R., Wensing, M., Mainz, J., Ferreira, P., Hearnshaw, H., Hjortdahl, P., et al. (1999). Patients' priorities with respect to general practice care: An international comparison. European Task Force on Patient Evaluations of General Practice (EUROPEP). Family Practice, 16, 4–11.
- Löwe, B., Gräfe, K., Quenter, A., Buchholz, C., Zipfel, S., & Herzog, W. (2002). Screening psychischer Störungen in der Primärmedizin: Validierung des "Gesundheitsfragebogens für Patienten" (PHQ-D) [Screening for mental disorders in primary care: Validation of the "Patient Health Questionnaire" (PHQ-D)]. Psychotherapie, Psychosomatik, Medizinische Psychologie, 52, 104–105.
- Löwe, B., Gräfe, K., Zipfel, S., Spitzer, R. L., Herrmann-Lingen, C., Witte, S., et al. (2003). Detecting panic disorder in medical and psychosomatic outpatients: Comparative validation of the Hospital Anxiety and Depression Scale, the Patient Health Questionnaire, a screening question, and physicians' diagnosis. Journal of Psychosomatic Research, 55, 515–519.
- Schneider, J., Kaplan, S. H., Greenfield, S., Li, W., & Wilson, I. B. (2004). Better physician–patient relationships are associated with higher reported adherence to antiretroviral therapy in patients with HIV infection. Journal of General Internal Medicine, 19, 1096–1103.