Background

Advanced cancer patients under palliative care (PC) experience many physical, psychosocial, and existential problems [1]. The PC team is essential for the screening, diagnosis, and treatment of cancer symptoms, with the aim of improving patients’ health-related quality of life (HRQoL). Ideally, symptoms and HRQoL should therefore be assessed using validated patient-reported outcome (PRO) instruments [2].

The process of PRO validation is time-consuming and requires rigorous data analysis. It encompasses translation into the target language, cultural adaptation, and evaluation of psychometric properties. Overall, a PRO validation must meet the standards of validity and reliability [2]. Among reliability analyses, the most important are internal consistency, inter-rater reliability, and test-retest reliability. Test-retest reliability can be defined as “a measure of the reproducibility of the scale, that is, the ability to provide consistent scores over time in a stable population” [3].

Retested patients must be stable with respect to the construct measured by the PRO. This requirement is particularly problematic in PC settings because advanced cancer patients are prone to rapid clinical deterioration [4].

Thus, the aim of the present study was to evaluate how multi-symptom and HRQoL PROs have been validated in oncological PC settings with respect to test-retest reliability.

Methods

Design

A systematic literature review was conducted.

Eligibility criteria

The studies included in this systematic review met all of the following criteria: (1) validation study of a multidimensional quality of life instrument or a multidimensional symptom assessment instrument; (2) publication in a peer-reviewed journal; and (3) analysis of a population composed mainly of advanced cancer patients undergoing PC (or hospice care, end-of-life care, or some similar type of care).

Studies were excluded for any of the following reasons: (1) the study was not published as a full article (i.e., conference proceedings were excluded); (2) the study contained pediatric data; or (3) the publication was a duplicate publication.

Data sources

Validation studies of PROs were retrieved from the following online databases: PubMed (1966 to June 2013), EMBASE (1980 to June 2013), PsycINFO (1806 to June 2013), CINAHL (1980 to June 2013), and SciELO (1998 to June 2013). The Patient-Reported Outcome and Quality of Life Instrument Database (PROQOLID) [5] (http://www.proqolid.org/) and the Australian Centre on Quality of Life (ACQOL) [6] (http://www.deakin.edu.au/research/acqol/index.php) were also searched for validation studies.

Search strategy

Our search strategies for PubMed were as follows: (1) quality of life instruments: (instrument OR questionnaire OR scale OR inventory OR checklist) AND (reliability OR test-retest OR validation OR psychometric* OR retest OR repeatability) AND (cancer OR tumor OR tumour OR carcinoma OR malignancy OR “neoplasms” [MeSH]) AND “quality of life” AND (palliative care OR end-of-life OR “end of life” OR hospice OR terminal OR advanced); (2) multiple-symptom instruments: (instrument OR questionnaire OR scale OR inventory OR checklist) AND (reliability OR test-retest OR validation OR psychometric* OR retest OR repeatability) AND (cancer OR tumor OR tumour OR carcinoma OR malignancy OR “neoplasms” [MeSH]) AND (symptom OR symptoms) AND (palliative care OR end-of-life OR “end of life” OR hospice OR terminal OR advanced). Searches in EMBASE, PsycINFO, CINAHL, and SciELO were conducted by combining the same terms used in the PubMed strategy. With regard to the PROQOLID and ACQOL databases, references were screened individually. To identify additional papers, the reference lists of relevant articles were reviewed by one of the authors (CEP).
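For reproducibility, a query of this kind can also be run programmatically. The following is a minimal sketch, assuming the Biopython package and a placeholder e-mail address (required by NCBI); it submits the quality-of-life strategy to PubMed with the review’s date window, and hit counts will naturally differ from those obtained in June 2013.

```python
# Minimal sketch: submitting the quality-of-life search strategy to PubMed
# through Biopython's wrapper for the NCBI E-utilities. The e-mail address
# is a placeholder; the date window approximates 1966 to June 2013.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires an e-mail

query = (
    '(instrument OR questionnaire OR scale OR inventory OR checklist) '
    'AND (reliability OR test-retest OR validation OR psychometric* '
    'OR retest OR repeatability) '
    'AND (cancer OR tumor OR tumour OR carcinoma OR malignancy '
    'OR "neoplasms"[MeSH Terms]) '
    'AND "quality of life" '
    'AND (palliative care OR end-of-life OR "end of life" OR hospice '
    'OR terminal OR advanced)'
)

handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="1966", maxdate="2013/06", retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found; first PMIDs: {record['IdList'][:5]}")
```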

Data extraction

Initial screening of titles and abstracts was conducted independently by CPS and FT. Studies with full text available were reviewed further; the data were extracted independently by two other reviewers (CEP, CPS) and verified by a third reviewer (BSRP).

A standardized data collection form was used. The data collected included study demographics (year of publication, country in which the study was conducted, and language in which the instrument was administered), the name of the instrument, and the characteristics of the enrolled patients (age, performance status). We also collected the statistical methods employed for the test-retest analysis, the time frame from test to retest, the total number of patients included in the study, and the number of patients included in the test-retest reliability analysis, and we recorded whether the sample size for the test-retest analysis was planned a priori and whether the article stated that the patients were clinically stable. The following QoL domains were systematically extracted from each article: global, physical, psychological, social, existential/spiritual, and functional. With regard to symptoms, we specifically analyzed the pain, fatigue, nausea, anxiety, and depression domains.
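Purely as an illustration of this extraction step, the form could be modeled as a simple record type; every field name below is a hypothetical paraphrase of the items just listed, sketched in Python:

```python
# Hypothetical sketch of the standardized data collection form as a record
# type; field names paraphrase the extraction items described above.
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    instrument: str                   # name of the PRO instrument
    year: int                         # year of publication
    country: str
    language: str
    n_total: int                      # patients in the whole study
    n_retest: int                     # patients in the test-retest analysis
    retest_interval_hours: float
    statistic: str                    # e.g., "ICC", "Pearson", "weighted kappa"
    a_priori_sample_size: bool
    clinical_stability_stated: bool
    qol_domains: dict[str, float] = field(default_factory=dict)      # domain -> value
    symptom_domains: dict[str, float] = field(default_factory=dict)  # symptom -> value
```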

Analytic approach

The COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) checklist [7] was used to rate the methodological quality of the study designs. Because our focus was test-retest reliability, only COSMIN Box B (reliability) was applied. The “worst score counts” algorithm was used for the analysis [8]. Briefly, each item from COSMIN Box B was rated individually as “excellent”, “good”, “fair”, or “poor”, and the overall score was the lowest rating among the items.
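The rule is simple enough to state as code; the following Python sketch (with hypothetical item names) makes the aggregation explicit:

```python
# Sketch of the "worst score counts" rule: each COSMIN Box B item receives
# an ordinal rating, and the overall box rating is the lowest one. The
# item names used here are hypothetical.
RATING_ORDER = ["poor", "fair", "good", "excellent"]  # ascending quality

def worst_score_counts(item_ratings: dict[str, str]) -> str:
    """Return the overall box rating: the lowest rating among all items."""
    return min(item_ratings.values(), key=RATING_ORDER.index)

# A single "poor" item drags the whole box down to "poor".
box_b = {"design": "excellent", "sample_size": "good",
         "stability_evidence": "poor", "statistical_method": "fair"}
print(worst_score_counts(box_b))  # -> poor
```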

Because the studies used different statistical methods, a robust meta-analysis of the data was not possible. Therefore, to perform a pooled analysis, we followed the method of Terwee et al. [9] and accepted a minimum reliability threshold of 0.70 as the criterion for an “adequate test-retest result”. For each extracted domain, we counted the number of articles with test-retest values ≥ 0.70. The number of adequate test-retest results was then related to the evaluated outcome (HRQoL versus symptoms) and to the evidence provided for clinical stability, as measured by item 7 of COSMIN Box B; for this analysis, a chi-square test for linear trend was used. In addition, the time (in hours) to retest was compared between groups with adequate and non-adequate test-retest results using the Mann-Whitney U test. Data are presented as medians with 25th (P25) and 75th (P75) percentiles.
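To make the pooling step concrete, the sketch below uses invented coefficients and retest intervals (the chi-square test for linear trend is omitted); it classifies each result against the 0.70 threshold, summarizes intervals as median (P25-P75), and compares the groups with SciPy’s Mann-Whitney U test:

```python
# Illustration with invented numbers: classify test-retest coefficients
# against the 0.70 threshold, summarize retest intervals as median
# (P25-P75), and compare adequate vs non-adequate groups with the
# Mann-Whitney U test.
import numpy as np
from scipy.stats import mannwhitneyu

coefficients = np.array([0.55, 0.62, 0.71, 0.78, 0.84, 0.90, 0.66, 0.93])
hours_to_retest = np.array([168, 120, 48, 24, 3, 24, 204, 48])

adequate = coefficients >= 0.70  # threshold of Terwee et al.

for label, group in [("adequate", hours_to_retest[adequate]),
                     ("non-adequate", hours_to_retest[~adequate])]:
    p25, p50, p75 = np.percentile(group, [25, 50, 75])
    print(f"{label}: median {p50:.0f} h (P25-P75 {p25:.0f}-{p75:.0f})")

u_stat, p_value = mannwhitneyu(hours_to_retest[adequate],
                               hours_to_retest[~adequate])
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.3f}")
```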

The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were followed during the preparation of this review (see the PRISMA checklist in the Supplementary file).

Results

Figure 1 summarizes the identification and selection of studies. We identified 89 articles describing validation studies of PROs that evaluated advanced cancer patients under PC. Of those, 31 (34.8%) measured test-retest reliability. Information from the included studies is detailed in Table 1.

Figure 1

PRISMA flow diagram for search strategy.

Table 1 Instruments developed to assess symptoms or quality of life of cancer patients under palliative care

Methodological quality of the studies

Two authors (CEP, EMB) classified the articles according to the COSMIN guidelines; the percentage of agreement between coders was 85.3% (Cohen’s kappa coefficient = 0.764). Disagreements in interpretation were resolved by discussion with a third author (BSRP). With regard to the overall quality criteria, 4 articles (12.9%) [11, 25, 28, 40] were classified as good, 17 (54.8%) [10, 12–15, 18, 20, 24, 26, 27, 30–32, 34, 36–38] as fair, and 10 (32.2%) [16, 17, 19, 21–23, 29, 33, 35, 39] as poor. No article was classified as excellent. The quality classification per item is described in Figure 2.
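For readers who wish to reproduce this kind of agreement check, the sketch below (with invented ratings for two coders) computes raw percentage agreement and Cohen’s kappa via scikit-learn:

```python
# Invented example: two coders' overall COSMIN ratings for eight articles,
# summarized as raw percentage agreement and Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["good", "fair", "poor", "fair", "good", "fair", "poor", "fair"]
coder_2 = ["good", "fair", "poor", "good", "good", "fair", "poor", "fair"]

agreement = sum(a == b for a, b in zip(coder_1, coder_2)) / len(coder_1)
kappa = cohen_kappa_score(coder_1, coder_2)
print(f"agreement = {agreement:.1%}, Cohen's kappa = {kappa:.3f}")
```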

Figure 2

Quality criteria of the included studies according to the COSMIN checklist.

Sample sizes

A total of 29 studies (29 of 31; 93.5%) [10–19, 21–23, 25–40] reported the number of patients submitted to the test-retest analysis. Among these, the median (P25-P75) number of patients included was 60 (32–119). The majority of the studies (24 of 29; 82.8%) [10–12, 14–16, 18, 19, 21–23, 25–29, 31, 32, 34–36, 38–40] included only a subset of the total study sample in the reliability analysis. Overall, 53.8% (95% CI 19.6%-87.9%) of the analyzed patients were included in the test-retest reliability analysis. Only 2 articles [28, 37] described a sample size calculation for the test-retest analysis. One study [35] cited a recommendation that at least 50 patients should be included for this type of statistical analysis.

Time to retest

The time interval to retest was clearly stated in all of the included studies [10–40]. The median (P25-P75) time was 72 (27–168) hours. The median (P25-P75) time intervals for the retest were 24 (3.25–60) hours for the symptom validation studies and 168 (48–204) hours for the HRQoL validation studies (p = 0.001).

Confirmation of clinical stability

Of the 31 analyzed articles, 10 (32.3%) [11–15, 20, 21, 26, 36, 38] clearly stated that only clinically stable patients were submitted to the retest. Confirmation of clinical stability in accordance with the COSMIN checklist was associated with adequate test-retest results for both pain and global HRQoL scores (p < 0.05, Table 2). Of those studies, 6 used one or more of the following objective criteria to define a stable condition: patient perception of change (n = 2) [11, 20]; stable doses of opiates (n = 1) [36] or absence of a new medication for symptom treatment (n = 2) [14, 38]; emergency department visits and/or hospitalizations (n = 1) [14]; and change in performance status or in activities of daily living (n = 1) [12].

Table 2 Association between the test-retest reliability values and the evidence of clinical stability

Scores of retest

In the present review, we performed statistical comparisons only for the pain and global HRQoL scores because these were the most commonly described domains in the selected studies (Table 3). Using ≥ 0.70 as the threshold for an adequate test-retest result, 9 of the 18 studies (50%) reporting pain values [13, 20, 25, 29, 32, 35–38] and 9 of the 20 studies (45%) reporting global HRQoL values [10–12, 18, 23–25, 32, 40] were considered adequate.

Table 3 Number of studies with adequate and non-adequate test-retest values

There was a non-significant trend toward shorter retest intervals in the studies with adequate test-retest results (value ≥ 0.70) compared with those with non-adequate results (value < 0.70) (Table 4).

Table 4 Median values of time intervals of studies with non-adequate (<0.70) and adequate (≥0.70) test-retest values

Three studies [18, 29, 40] compared 2 different time intervals for the retest. Two of them [18, 40] measured global HRQoL 3 hours and 7 days after the first evaluation; the test-retest results were 0.84–0.93 at 3 hours and 0.63 at 7 days. The third validation study [29] evaluated cancer symptoms using the Edmonton Symptom Assessment System (ESAS) and found higher test-retest values for shorter time intervals, with the exception of fatigue (Table 5).

Table 5 Test-retest reliability scores measured at two different time intervals from the first evaluation

Statistical methods used

Of the instruments with continuous scores (n = 29), 11 (37.9%) [11, 12, 14, 18, 20, 23, 25, 28, 31, 38, 40] evaluated test-retest reliability using the intraclass correlation coefficient (ICC), and 14 (48.3%) [10, 13, 15, 17, 24, 27, 29, 30, 32, 34–37, 39] performed some type of correlation analysis, such as Spearman’s (n = 6) or Pearson’s (n = 8) test. Notably, none of the studies that used the ICC described the statistical model or formula employed; therefore, no study could be classified as “excellent” according to item 11 in Box B of the COSMIN guidelines. Two studies used paired analyses (repeated-measures analysis of variance [ANOVA], n = 1; paired t test, n = 1) to evaluate test-retest reliability [16, 33]. Three studies with ordinal score instruments calculated the weighted kappa statistic [20, 21, 26]. Two studies did not describe the type of statistics used [19, 22].

Discussion

In this study, we investigated how validation studies of PROs have been performed in the PC setting, particularly with regard to test-retest reliability. In general, the methodological quality with which this psychometric property was investigated ranged from poor to fair; according to the COSMIN checklist, only 12.9% of the studies were considered of good quality, and none were considered excellent. In addition, we highlighted the importance of verifying the clinical stability of advanced cancer patients before performing the retest. Based on our results, clinical stability is even more important for test-retest reliability than the precise definition of the time interval at which the retest is performed.

In our review, we identified 89 validation studies that included cancer symptoms and/or HRQoL as outcome variables. Of those, only 31 (34.8%) evaluated test-retest reliability. Because test-retest reliability is an essential psychometric property in validation studies, we hypothesize that researchers are not systematically measuring it because of the instability of advanced cancer patients. Overall, half of the evaluated test-retest reliability scores were classified as inadequate when 0.70 was used as the threshold value [9]. The pressure experienced by researchers to publish positive results [41] may also explain why only 34.8% of the validation studies measured test-retest reliability. Furthermore, it is possible that inadequate test-retest values were omitted from some publications.

It is essential to estimate the sample size accurately before beginning a study. An insufficient sample size may fail to detect true differences and yields unreliable results, whereas an excessive sample size wastes resources and raises ethical concerns about the futile exposure of study participants [42]. With regard to the test-retest reliability analysis, we observed that determining an adequate sample size is not common practice: only 2 studies [28, 37] described performing a sample size calculation before the study. Overall, the median number of patients included in the test-retest analysis was 60, representing 53.8% of the total number of included patients. One study [35] justified its sample size by citing a rule of thumb suggesting that 50 patients would be sufficient for the analysis [43, 44].
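None of the reviewed studies used the following approach, but as one possible way to motivate an a priori sample size, the exploratory sketch below (assuming the pingouin package) simulates test-retest data with a true reliability of 0.70 and shows how the 95% confidence interval of the ICC narrows as the sample grows:

```python
# Exploratory simulation (not a method used by the reviewed studies):
# paired test-retest scores are drawn with a true reliability of 0.70,
# and the width of the ICC's 95% CI is tracked as n increases.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(seed=1)
rho = 0.70  # assumed true test-retest reliability
cov = [[1.0, rho], [rho, 1.0]]

for n in (30, 50, 100, 200):
    scores = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    long = pd.DataFrame({
        "patient": np.tile(np.arange(n), 2),
        "occasion": np.repeat(["test", "retest"], n),
        "score": np.concatenate([scores[:, 0], scores[:, 1]]),
    })
    icc2 = pg.intraclass_corr(data=long, targets="patient",
                              raters="occasion", ratings="score"
                              ).set_index("Type").loc["ICC2"]
    lo, hi = icc2["CI95%"]
    print(f"n={n:3d}: ICC={icc2['ICC']:.2f}, 95% CI width={hi - lo:.2f}")
```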

A basic tenet of test-retest reliability is the need to retest clinically stable patients [45]. Retesting advanced cancer patients is challenging because they are in a dynamic phase of their disease in which symptoms and functionality can decline quickly. Retesting a clinically unstable patient may wrongly suggest that a PRO is unreliable. Our results confirm the importance of verifying the clinical stability of patients before retesting, and our review describes the objective criteria some studies used to define a stable condition.

Defining an adequate time gap between assessments is of the utmost importance: too short an interval might allow respondents to recall their first answers, whereas too long an interval might allow a true change in the construct to occur [2, 45]. The appropriate time interval depends on the construct to be measured and the target population [46]; approximately 2 weeks is often considered generally appropriate [47]. Nevertheless, the interval over which to retest advanced cancer patients under PC is still a matter of debate. Some authors have considered reassessments of advanced cancer patients at least 3 days apart a measure of responsiveness rather than of test-retest reliability [4].

In fact, because of concerns about reassessing an unstable patient, some authors (n = 7) readministered the questionnaires at very short intervals (i.e., less than 24 hours). Jim et al. [48] investigated daily and intraday changes in fatigue, depression, sleep, and activity scores in a cohort of cancer patients undergoing chemotherapy and observed significant changes over time. Additionally, Dimsdale et al. [49] measured cancer-related fatigue every hour for 72 consecutive hours and observed a diurnal variation in fatigue. HRQoL, on the other hand, is a multidimensional construct that encompasses physical, psychological, social, and spiritual domains. In general, instruments that measure HRQoL use recall periods of 7 days. Although HRQoL is not commonly assessed on a daily basis, it is expected to remain stable over a few days, especially in the social, existential, and global domains. Consistent with this, we observed that multi-symptom instruments are generally retested within a shorter time frame than HRQoL instruments.

There was a trend toward shorter retest intervals among studies with adequate test-retest reliability results compared with those with inadequate results (values < 0.70). One factor contributing to the non-significance of this trend might be the large interquartile range for some domains; because few studies were analyzed, there was insufficient statistical power for firmer conclusions. Three studies [18, 29, 40] evaluated retest reliability at 2 different time points (< 24 hours and 1 week after the first evaluation); in general, the shorter interval was associated with better retest results. Considering the median time interval used in the studies with adequate test-retest results, together with the findings from the studies that used two different retest intervals, we recommend that patients under palliative care for advanced cancer be retested approximately 24 to 48 hours later when evaluating cancer symptoms and 2 to 7 days later when assessing HRQoL. However, we believe that the most important factor is not the interval itself but rather the confirmation of clinical stability before retesting.

As mentioned previously, the test-retest reliability analyses were of low quality according to the COSMIN checklist. Other studies using the same guidelines in different populations have yielded similar results [50–52]. The most troublesome item in our review was item 11 (“for continuous scores: was an intraclass correlation coefficient calculated?”), with 62% of the studies classified as poor to fair quality on this item. The preferred test-retest reliability statistic depends on the type of response options. In our review, the majority of the studies evaluated continuous scores, for which the ICC [31] is the preferred statistic [46, 47, 53]. The use of correlation coefficients (Pearson’s and Spearman’s tests) is not adequate because they do not account for systematic error [46]. In the present study, 18 of the 29 studies evaluating continuous scores did not assess agreement using the ICC. Six different versions of the ICC can be used depending on various assumptions, and 4 of those are subdivided into consistency or absolute agreement, yielding a total of 10 different ICC calculations [54]. The choice of index substantially affects the numerical value of the ICC [53]. Even among the studies that correctly used the ICC, none stated which version was applied.
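The shortcoming of correlation coefficients is easy to demonstrate: a constant shift between test and retest leaves Pearson’s r at 1.0 while lowering an absolute-agreement ICC. The sketch below (invented scores, assuming the pingouin package, which reports the Shrout-Fleiss ICC variants) illustrates the point and shows why the chosen ICC model should be stated explicitly:

```python
# A +2 systematic shift at retest: Pearson's r stays 1.0, but the
# absolute-agreement ICC (ICC2 in pingouin's output) is penalized.
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr

test = [2, 3, 4, 5, 6, 7, 8, 9]
retest = [t + 2 for t in test]  # every patient scores 2 points higher

print(f"Pearson r = {pearsonr(test, retest)[0]:.2f}")  # 1.00 despite the shift

long = pd.DataFrame({
    "patient": list(range(8)) * 2,
    "occasion": ["test"] * 8 + ["retest"] * 8,
    "score": test + retest,
})
icc = pg.intraclass_corr(data=long, targets="patient",
                         raters="occasion", ratings="score")
print(icc[["Type", "ICC"]])  # ICC3 (consistency) = 1.0; ICC2 < 1.0
```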

This study has some limitations. Because the included studies evaluated test-retest reliability using different statistics, we could not perform a robust meta-analysis; instead, we used 0.70 as the threshold for adequate test-retest results to pool the data. However, categorizing the test-retest results according to a predefined cut-off point may be considered an oversimplification. Another limitation is that we did not include instruments developed to assess a single symptom (fatigue or pain scales, for example). In addition, we did not include meeting abstracts because it would have been difficult to extract the necessary data from them.

Conclusions

In conclusion, test-retest reliability has been infrequently and poorly evaluated in validation studies of PROs assessing advanced cancer patients under PC. Multi-symptom instruments were retested over shorter time intervals than HRQoL instruments. The confirmation of clinical stability was an important factor in our analysis, and we suggest that special attention be paid to this parameter when designing a PRO validation study that includes advanced cancer patients under PC.