Background

Person-centred care (also termed patient-centred care [1]) is widely acknowledged as an essential element of high-quality health service provision [2]. The concept of person-centredness has been in use for roughly half a century and has been applied at different levels, from national healthcare policy to skills as specific as non-verbal communication behaviours [3]. Many different perspectives on, and definitions of, person-centredness exist, making it a somewhat contested concept to operationalise [1, 4]. Arguably, these are variations in emphasis within a core theme, but they do have implications for valid measurement.

Consultations are a key component of health care provision, offering an opportunity for patients to discuss issues with practitioners. Practitioners often have multiple tasks within consultations, including eliciting information to aid assessment and giving information. Individual practitioners vary in their consultation skills and in their commitment to making conversations person-centred in practice [5, 6]. In the past two decades, the acquisition of person-centred communication skills has received much greater attention in training programmes [7, 8]. To evaluate the efficacy of training programmes designed to enhance person-centred skills, validated instruments that objectively measure these skills and their use in practice are needed.

Systematic reviews of validation studies of instruments measuring person-centredness were known to exist prior to undertaking this study; however, it was clear that this literature was diverse, and that such reviews may have different purposes, aims, and inclusion criteria. Reviews have been aimed at identifying and/or appraising instruments for specific conditions (e.g., cancer [9]), health care settings (e.g., neonatal intensive care units [10]), or professions (e.g., psychiatrists [11]). In addition, across existing reviews, different conceptualisations of person-centredness frame research questions and selection criteria in distinct ways (e.g., see [12,13,14,15,16]). Consequently, there may be little overlap in the primary studies included in available reviews, and no one review summarises and evaluates the literature as a whole. For these reasons we aimed to provide a high-level synthesis of this complex literature by undertaking a systematic review of reviews. This was intended to provide an overview of how existing systematic reviews are designed and report on validation studies, and to incorporate details of the included instruments. The study thus brings together, for researchers and research users, what is known about available instruments that may be considered for use in training and assessment of person-centred consultation skills among healthcare practitioners. This review of reviews was not undertaken to identify a particular instrument for a particular purpose, but rather to survey the level of development of, and the strength of the evidence available in, this field of study.

Reflecting these aims, the objectives of this review of reviews were to: 1) undertake a critical appraisal of systematic reviews reporting validation studies of instruments aiming to measure person-centred consultation skills among healthcare practitioners, and 2) identify and summarise the range of validated instruments available for measuring person-centred consultation skills in practitioners, including material on the strength of the validation evidence for each instrument.

Methods

This review followed the process outlined in this section, which was specified in a study protocol developed before the review was conducted. We did not prospectively register or otherwise publish the protocol.

Search strategy

Systematic searches were conducted in the electronic databases MEDLINE, EMBASE, PsycINFO, and CINAHL. The search strategy combined different search terms for three key search components: ‘person- or patient centredness’ (Block 1), ‘assessment instrument’ (Block 2), and ‘systematic or scoping review’ (Block 3).

For Block 1 (the search component ‘person- or patient centredness’) we used an iterative approach. A preliminary search of EMBASE, MEDLINE, and PsycINFO (all in Ovid) was undertaken using the keywords: (person-cent* or patient-cent* or personcent* or patientcent*) and ‘review’ in the title; and ‘measurement or tool or scale or instrument’; from 2010. The full-text papers identified (n = 24) were searched for words used to describe ‘person- or patient centredness’. The resulting search terms were discussed and selected to reflect the scope of the study. The final search included the following terms: person-cent* or patient-cent* or personcent* or patientcent* or person-orient* or person-focus* or person-participation or person-empowerment or person-involvement or patient-orient* or patient-focus* or patient-participation or patient-empowerment or patient-involvement or "person orient*" or "person focus*" or "person participation" or "person empowerment" or "person involvement" or "patient orient*" or "patient focus*" or "patient participation" or "patient empowerment" or "patient involvement"; or (clinician-patient or physician-patient or professional-patient or provider-patient or practitioner-patient or pharmacist-patient or doctor-patient or nurse-patient) adjacent to (communication* or consultation* or practice* or relation* or interaction* or rapport).

For Block 2 (the search component ‘assessment instrument’) we used the existing COSMIN filters proposed by Terwee et al. [17]. The COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) project has developed highly sensitive search filters for finding studies on measurement properties [17]. The search filter was adapted to each database. For Block 3, the search terms (systematic* or scoping) adjacent to review* were used. The search did not include restrictions on date of publication, and the language was restricted to English. The database search was conducted in September 2020. See Appendix 1 for the details of all searches run in all databases.
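To make the block structure concrete, the following is a minimal Python sketch that assembles the three components into a single Boolean query. The term lists are abbreviated and the Ovid-style "adjN" proximity syntax is an assumption made for illustration; the exact strings run in each database are those given in Appendix 1.

```python
# Illustrative sketch only: abbreviated term lists and assumed Ovid-style
# "adjN" proximity syntax, not the exact strings run in each database
# (those are reported in Appendix 1).

block1_concept = ["person-cent*", "patient-cent*", "person-orient*", "patient-involvement"]
block1_dyads = ["clinician-patient", "doctor-patient", "nurse-patient"]
block1_context = ["communication*", "consultation*", "interaction*"]
block2_cosmin = ["instrument*", "questionnaire*", "psychometr*", "valid*", "reliab*"]
block3_review = ["systematic*", "scoping"]


def or_group(terms):
    """Join a list of terms with OR inside parentheses."""
    return "(" + " or ".join(terms) + ")"


block1 = f"{or_group(block1_concept)} or ({or_group(block1_dyads)} adj3 {or_group(block1_context)})"
block2 = or_group(block2_cosmin)
block3 = f"{or_group(block3_review)} adj1 review*"

# A record must match all three blocks to be retrieved.
query = f"({block1}) and {block2} and {block3}"
print(query)
```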

Study selection

One author (JG) screened titles and abstracts against preliminary selection criteria, using Rayyan software for systematic reviews [18]. Ideally, all parts of the process of undertaking a review are duplicated in order to avoid errors. Here we relied on one author for screening, the rationale being that we expected systematic reviews to be readily identifiable from the title and abstract, making screening more straightforward than, for example, in a systematic review of primary studies, which may be described in more heterogeneous ways. Another author (AD) screened 5% of records independently. The authors met weekly to resolve any problems or questions during the process, and no contentious issues were identified in screening. Full-text articles of potentially eligible papers were retrieved and assessed for inclusion against the criteria below. Two authors (AD & JM) reviewed all full-text papers independently in order to select studies for inclusion. One disagreement was resolved through discussion with a third author (DS), and reasons for exclusion were noted. Inclusion criteria were:

  • a peer-reviewed journal report

  • used systematic review methods to identify primary studies for inclusion (including both a search strategy and explicit selection criteria)

  • stated aims and objectives specifying the measurement of ‘person centredness’ or ‘patient centredness’ or a related construct as defined by search Block 1

  • concerned assessment of individual practitioner consultation skills or behaviour (i.e., not policy)

  • included only validation studies of instruments

  • reported any measurement properties of the included instruments

Reviews of instruments developed for any practitioner group, patient population, or health care setting were included; studies that did not meet all inclusion criteria were excluded. After the full-text eligibility check, a backward search of the references of the included reviews and a forward citation search using Google Scholar were performed. This was last updated in January 2022, and no further reviews were identified. A PRISMA flowchart [19] shows the results of the identification, screening, and eligibility assessment process (Fig. 1).

Fig. 1 PRISMA flow diagram

Data extraction

One author (AD) performed data extraction from the included reviews using a standardised form in Excel developed by all co-authors in a preliminary phase. A second author (DS) subsequently checked all the extracted information in the form and screened the papers for any missing information. At the review level, we extracted the stated aims and objectives; the definition or conceptualisation of person-centredness used; the numbers, names, and types of instruments; the research questions; the dates, databases, and languages included in the search strategies; selection criteria regarding health care populations, health care settings, and raters of the instruments; other selection criteria; details of the assessment of methodological quality and psychometric properties; and the numbers of validation studies. At the validation study level, we extracted the country of origin, the type of validation study, and whether the developers of the instrument validated their own instrument. At the instrument level, we extracted who developed the instrument; in what year, country, and language it was developed; how many subscales and items it consisted of; and the response formats used. Other information on validation studies and instruments was not reported consistently enough to be extracted.
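As an illustration of the three levels of extraction, the sketch below represents the standardised form as simple data structures. The field names are paraphrases of the variables listed above and are illustrative only, not the actual Excel column headings used in the study.

```python
# Illustrative sketch of the three-level extraction form; field names are
# paraphrases of the variables listed above, not the actual Excel headings.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class ReviewRecord:                       # review level
    aims_and_objectives: str
    conceptualisation_of_person_centredness: str
    instrument_names: List[str]
    research_questions: List[str]
    search_dates: str
    databases: List[str]
    languages: List[str]
    selection_criteria: str
    quality_assessment_details: Optional[str]
    n_validation_studies: int


@dataclass
class ValidationStudyRecord:              # validation study level
    review: str                           # which included review reported it
    country_of_origin: Optional[str]
    type_of_validation: str
    validated_by_instrument_developers: bool


@dataclass
class InstrumentRecord:                   # instrument level
    name: str
    developers: str
    year: int
    country: Optional[str]
    language: Optional[str]
    n_subscales: int
    n_items: int
    response_format: str
```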

Quality assessment

Two authors (AD & DS) independently assessed the quality of the included reviews using the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Systematic Reviews and Research Syntheses [20]. Each of the 11 criteria was given a rating of ‘yes’ (definitely done), ‘no’ (definitely not done), ‘unclear’ (unclear if done), or ‘not applicable’. Discrepancies in ratings were resolved by consensus.
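The following is a minimal sketch of how the two independent sets of ratings could be compared and discrepancies flagged for consensus discussion; the criterion labels and example ratings are placeholders, not the actual JBI item wording or study data.

```python
# Sketch of comparing two raters' checklist ratings across the 11 JBI
# criteria and flagging disagreements for consensus discussion; criterion
# labels and example ratings are placeholders, not actual study data.
VALID_RATINGS = {"yes", "no", "unclear", "not applicable"}


def flag_discrepancies(rater_a, rater_b):
    """Return the criteria on which the two raters' ratings differ."""
    assert rater_a.keys() == rater_b.keys(), "Raters must score the same criteria"
    for rating in list(rater_a.values()) + list(rater_b.values()):
        assert rating in VALID_RATINGS, f"Unknown rating: {rating}"
    return [criterion for criterion in rater_a if rater_a[criterion] != rater_b[criterion]]


# Placeholder example covering 3 of the 11 criteria.
ratings_ad = {"criterion_1": "yes", "criterion_2": "unclear", "criterion_3": "no"}
ratings_ds = {"criterion_1": "yes", "criterion_2": "no", "criterion_3": "no"}
print(flag_discrepancies(ratings_ad, ratings_ds))  # -> ['criterion_2'], resolved by consensus
```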

Results

Description of the reviews

The search identified 2,215 unique articles, of which 21 papers were selected for full-text eligibility assessment (see Fig. 1). Four reviews were included. None of the reviews identified through further searching fulfilled our inclusion criteria.

The four included reviews each had different aims and selection criteria, resulting in few primary studies and instruments being included in more than one review. Two reviews targeted specific groups of practitioners: nurses for Köberich and Farin [21], and physicians or medical students for Brouwers et al. [22]. Hudon et al. [23] and Köberich and Farin included only patient-rated instruments, while Ekman et al. [24] included only direct observation tools (e.g., checklists or rating scales). In total, the four reviews included 71 validation studies (68 unique studies) of 42 different instruments.

Conceptualisations of person-centredness

Conceptualisations of person-centredness varied between the included reviews. Two reviews used Stewart and colleagues' [15] model of interconnecting dimensions: 1) exploring both the disease and the illness experience; 2) understanding the whole person; 3) finding common ground between the physician and patient; 4) incorporating prevention and health promotion; 5) enhancing the doctor-patient relationship; and 6) ‘being realistic’ about personal limitations and issues such as the availability of time and resources. Dimensions 4 and 6 were later dropped [14]. Brouwers et al. [22] included instruments measuring at least three of the six dimensions, while Hudon et al. [23] included those measuring at least two of the four dimensions in the later version. Köberich and Farin [21] used a framework of three core themes of person-centredness based on Kitson et al. [13]: 1) participation and involvement; 2) the relationship between the patient and the health professional; and 3) the context in which care is delivered. Finally, Ekman et al. [24] used an Institute of Medicine framework [16] of six dimensions: 1) respect for patients’ values, preferences, and expressed needs; 2) coordination and integration of care; 3) information, communication, and education; 4) physical comfort; 5) emotional support, e.g., relieving fear and anxiety; and 6) involvement of family and friends (Table 1).

Table 1 Overview of reviews

Overview of reviews

Hudon et al.’s review [23] aimed to identify and compare instruments, subscales, or items assessing patients’ perceptions of patient-centred care used in an ambulatory family medicine setting. Only patient rated instruments were included. Quality assessment of the validation studies was conducted with the Modified Version of Standards for Reporting of Diagnostic Accuracy (STARD) tool [25]. The authors identified two instruments fully dedicated to patient-centred care, and 11 further instruments with subscales or items measuring person-centred care.

Köberich and Farin’s review [21] aimed to provide an overview of instruments measuring patients’ perception of patient-centred nursing care, defined as the degree to which the patient’s wishes, needs and preferences are taken into account by nurses when the patient requires professional nursing care. Again, only patient rated instruments were included. The four included instruments were described in detail, including their theoretical background, development processes including consecutive versions and translations, and validity and reliability testing. No quality assessment was undertaken.

Brouwers et al. [22] aimed to review all available instruments measuring patient-centredness in doctor-patient communication, in the classroom and the workplace, for the purpose of providing direct feedback. Instruments intended for health care professionals other than physicians or medical students were thus excluded. The authors used the COSMIN checklist for quality assessment of the instruments [26].

Ekman et al.’s review [24] aimed to identify available instruments for direct observation in the assessment of competence in person-centred care. The study then assessed them with respect to underlying theoretical or conceptual frameworks, coverage of recognized components of person-centred care, types of behavioural indicators, psychometric performance, and format (i.e., checklist, rating scale, coding system). The review used the six-dimension framework endorsed by the Institute of Medicine [16]; however, the framework was not used as a selection criterion. No quality assessment was undertaken. The authors grouped the included instruments into four categories: global person-centred care/person-centredness, shared decision-making, person-centred communication, and nonverbal person-centred communication.

The critical appraisal of the included reviews using the Joanna Briggs Institute Critical Appraisal Checklist for Systematic Reviews and Research Syntheses is reported in Table 2. The review by Brouwers et al. [22] scored positively on all but one item. We note that no review assessed publication bias, which may be a particularly important threat to valid inference in a literature of this nature. There were issues with the methods of critical appraisal in two reviews.

Table 2 Critical Appraisal

Overview of the validation studies

Sixty-eight unique validation studies were included across the four reviews. Hudon et al. [23] described one to three validation studies for each included instrument and was the only review to report specific information on the validation studies in addition to information on the instruments. Köberich and Farin [21] identified several validation studies for each instrument. Brouwers et al. [22] identified one validation study for each included instrument. Ekman et al. [24] described one validation study for 13 instruments, and two validation studies for three other included instruments. Table 3 provides an overview of the validation studies [3, 27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91].

Table 3 Overview of validation studies (n = 68)

The validation studies were published between 1989 and 2015 inclusive. The majority of the studies were conducted in English-speaking countries: 29 originated in the USA, 10 in the UK, and 8 in Canada; 4 in Finland; 2 each in Australia, the Netherlands, and Turkey; and 1 each in Germany, Israel, Norway, and Sweden. The country of origin was not specified for the remaining 7 studies.

Overview of the instruments

Forty-two instruments were included across the four reviews, with minimal overlap. The Patient-Centred Observation Form (PCOF) was included in two reviews [22, 24]. The original Perceived Involvement in Care Scale (PICS) was included by Hudon [23], while Brouwers [22] included the modified PICS (M-PICS). The Consultation and Relational Empathy instrument (CARE) and the Patient Perception of Patient Centeredness (PPPC) were included by both Hudon and Brouwers [22, 23]. Hudon [23] included what they referred to as the Consultation Care Measure (CCM), and Brouwers [22] included the same instrument under a different name, the Little instrument. Little et al. [34] do not name the instrument in their validation study, so we refer to it as the ‘Little Instrument’ in this review of reviews.

The four reviews reported varying types of information on the included instruments. All reported the year and country of development, the response scale, the number of subscales and items, and the intended rater of the instrument. Table 4 gives an overview of what information about the instrument is included in each review.

Table 4 Reported data on instruments included in each review

As with the validation studies, the publication years of the instruments ranged from 1989 to 2015. The majority of the instruments were developed in English-speaking countries: 21 originated from the USA, 7 from the UK, and 7 from Canada; 2 from the Netherlands; and 1 each from Australia, Finland, Germany, Israel, and Norway. The country of origin was not specified in the reviews for the remaining 3 instruments. Table 5 summarises the information reported in the reviews.

Table 5 Overview of the instruments (n = 42)

The measurement properties of the instruments reported in the reviews varied considerably. Table 6 shows which properties were reported in which review, and Table 7 presents verbatim all psychometric information reported in the four included reviews.

Table 6 Reported measurement properties of instruments
Table 7 Data on measurement properties of instruments

Discussion

This review of reviews sought to summarise the range of validated instruments available for measuring practitioners’ person-centred consultation skills, including the strength of the validation evidence for each instrument, and to appraise the systematic reviews examining the validation studies. The reviews varied in quality, and our JBI quality assessment showed that only one review fulfilled all assessment criteria apart from the assessment of publication bias [22]. In addition, only one review described several validation studies per instrument, including modifications and translations [21]. We found that the four included systematic reviews used very different inclusion criteria, leading to little overlap in the validation studies and instruments included between them. The reviews also differed in their aims, appraisal tools, and conceptual frameworks, which limited the consistency of the information reported across studies and instruments. These features underline the value of the present study, which, in bringing together these literatures, offers a guide to a wider set of instruments of interest to researchers than has previously been available. This diversity also underlines a key limitation of a review of reviews: relying on the included reviews places the primary literature at one remove, which may unhelpfully complicate direct engagement with it.

We make no claim that the list of instruments reported in this review of reviews is exhaustive. Our search was undertaken in September 2020 and, although we have checked for citations of the included reviews and the primary studies, we may have missed reviews and instruments published later. There are many more instruments available, varying in aims, objectives, and conceptualisations of person-centredness. In addition, there may be other validation studies of the instruments that the reviews did not include, or that were published after the reviews, and the study findings suggest it is indeed likely that new instruments will have been published. We searched for all reviews meeting our selection criteria, but acknowledge the perennial possibility that we missed eligible reviews, and we are clear that there exist other validation studies and instruments that our study was not designed to include. We used an extensive list of keywords for our search, based on published reviews of person-centredness, but as the concept is described in such scattered terminology, we may have omitted search terms that would have led us to other eligible reviews. We regard this as a real risk and suggest careful extension of search strategy development in future studies. Procedural issues, particularly the reliance on a sole author for screening and data extraction, albeit with checks, should also be borne in mind as limitations of this review.

There are many instruments available which measure person-centred skills in healthcare practitioners. The reviews point out that the instruments measured person-centredness across various dimensions, emphasising different aspects of the basic concept of person-centredness. This indicates a lack of agreement on what could be considered its defining, central, or important characteristics, so there are construct validity issues to be considered carefully. Person-centred care is an umbrella term used for many different conceptualisations in many different contexts [1, 4]. It is necessary to separate consideration of what constitutes person-centred care from person-centred consultation skills, as the latter construct is merely one element of the former. Teaching materials and guidelines on person-centredness are often not very clear on what person-centred behaviour and communication actually entail, and on what skills and behaviours health care professionals are supposed to learn to make their practice person-centred. For example, Kitson and colleagues [13] reported that health policy stakeholders and nurses perceive patient-centred care more broadly than medical professionals. Medical professionals tend to focus on the doctor-patient relationship and the decision-making process, while the nursing literature also focuses on patients’ beliefs and values [13]. Measurement instruments can help us operationalise person-centredness and can help practitioners understand what exactly it is that they are supposed to be doing. Developing the science of measurement in this area may also assist resolution of the construct validity issues by making clear what can be validly measured and what cannot.

Three of the four reviews [20, 21, 23] concluded that psychometric evidence is lacking for nearly all of the instruments. This finding may seem unsurprising in light of the foregoing discussion of construct validity. Brouwers [22] used the COSMIN rating scale [26] and found only one instrument rated as ‘excellent’ on all aspects of validity studied (internal consistency, content, and structural validity), but its reliability had not been studied. Köberich [21] specifically mentioned test–retest reliability as a neglected domain and added that all instruments lacked evidence of adequate convergent, discriminant, and structural validity testing. Köberich and Farin, Brouwers, and Ekman [21, 22, 24] also highlighted the need for further research on the validity and reliability of existing instruments in their discussion and conclusion sections. In other reviews, De Silva [92], Gärtner et al. [93], and Louw et al. [94] attributed the lack of good evidence on the measurement qualities of instruments both to a failure to study their measurement properties and to the overall poor methodological quality of validation studies. Many tools are developed, but few are studied sufficiently in terms of their psychometric properties and usefulness for research on and teaching of person-centredness. Often, a tool is “developed, evaluated, and then abandoned” [92].

Researchers and research users may seek instruments of these kinds for many different purposes. Using the most relevant and promising instruments that have already been developed and tested, however limited that testing may be, and rigorously studying and reporting on their psychometric properties, will be useful in building the science of measuring person-centred consultation skills. It may also be useful to develop item banking approaches that combine instruments. Researchers and educators choosing an instrument also need several kinds of information to decide whether it is relevant and suitable for their specific needs. For future primary studies and systematic reviews, we suggest paying heed to, and indeed rectifying, the limitations of existing studies identified here and elsewhere. In addition, both Hudon and Ekman [23, 24] found that, paradoxically, there is very limited evidence of patients taking part in the evaluation process. This has also been reported in a systematic review by Ree et al. [95], who looked specifically at patient involvement in person-centredness instruments for health professionals. This is painfully ironic. There is thus a further major lesson to be drawn from this study: in developing the science of measurement of person-centred skills, new forms of partnership need to be formed between researchers and patients.

Conclusion

There are many instruments available which measure person-centred skills in healthcare practitioners, and the most relevant and promising instruments already developed, or items within them, should be studied further and rigorously. Validation studies of existing material are needed, rather than the development of new measures. New forms of partnership between researchers and patients are needed to accelerate the pace at which further work succeeds.