Longitudinal validity and reliability of the Myeloma Patient Outcome Scale (MyPOS) was established using traditional, generalizability and Rasch psychometric methods

Purpose The Myeloma Patient Outcome Scale (MyPOS) was developed to measure quality of life in routine clinical care. The aim of this study was to determine its longitudinal validity, reliability, responsiveness to change and acceptability. Methods This 14-centre study recruited patients with multiple myeloma. At baseline and then every two months, for a total of five assessments, patients completed the MyPOS. The psychometric properties evaluated were: (a) construct validity via confirmatory factor analysis and scaling assumptions, (b) reliability via Generalizability theory and Rasch analysis, (c) responsiveness and the minimally important difference (MID), relating changes in scores between baseline and subsequent assessments to an external criterion, and (d) the acceptability of self-monitoring. Results 238 patients with multiple myeloma were recruited. Confirmatory factor analysis found three subscales; criteria for scaling assumptions were satisfied except for gastrointestinal items and the Healthcare support scale. Rasch analysis identified suboptimal scale-to-sample targeting, resulting in floor effects. Test-retest reliability indices were good (R > 0.97). Responsiveness analysis yielded an MID of +2.5 for improvement and -4.5 for deterioration. Conclusions The MyPOS demonstrated good longitudinal measurement properties, with the Healthcare support subscale and the rating scale being potential areas for revision. The new psychometric approaches should be used for testing the validity of monitoring in clinical settings. Electronic supplementary material The online version of this article (doi:10.1007/s11136-017-1660-z) contains supplementary material, which is available to authorized users.


Introduction
Cancer is a major public health concern, being the second leading cause of death worldwide [1]. With the ageing of society, cancer incidence is rising [2,3]. Despite advances in treatment, many cancer patients still face long disease trajectories and incurable disease. Multiple myeloma, an incurable cancer of the bone marrow and the second most common haematological malignancy [4], exemplifies this changing face of cancer. Many myeloma patients experience a more chronic disease trajectory, coping with gradually progressing disease interspersed with intervals of stable disease on minimal or maintenance treatment, but with lasting effects of high-dose chemotherapy [5,6]. These longer disease trajectories and intensive treatments have created a need to evaluate patient-reported outcomes in this condition, in addition to traditional monitoring such as response to treatment and toxicity profiles.
Patient-reported outcomes primarily comprise symptoms and health-related quality of life (HRQOL). Incorporating longitudinal assessment into routine clinical practice has shown benefits such as better symptom control, improved patient-clinician communication and satisfaction with care [7,8]. In trials, serial assessment of HRQOL incorporates the patient's experience while monitoring treatment safety and efficacy [9]. It also aids prognosis in chronic conditions and in haematological malignancy [10][11][12].
Despite these benefits, few measures are designed for monitoring HRQOL in routine clinical settings [13,14]. A systematic review of 13 generic and disease-specific HRQOL measures in multiple myeloma [13] found no single tool developed or validated for this purpose. Consequently, the Myeloma Patient Outcome Scale (MyPOS), a questionnaire measuring disease-specific HRQOL and palliative care concerns, was developed and validated in a cross-sectional sample of 380 community and inpatient myeloma patients in the United Kingdom (UK) [15]. However, the clinical utility of the MyPOS, in the form of longitudinal validity and reliability [16][17][18][19], still needs to be established.
The psychometric criteria for longitudinal monitoring validity are still ill-defined. Traditional psychometrics and associated guidelines focus on assessment or screening applications [20][21][22]. The notable exception is McHorney's study of individual patient monitoring, which proposed the following criteria [23]: (i) practical features (brief measures, easy administration, easy score interpretation), (ii) breadth of health measured (a variety of health concepts assessing the full range of health from disability to well-being), (iii) depth of health measured (minimal floor and ceiling effects), (iv) precision for cross-sectional assessment (precise reliability estimates, e.g. Cronbach's alpha, with small standard error of measurement), (v) precision for longitudinal monitoring (high reproducibility/test-retest reliability with small standard error of measurement), and (vi) validity (satisfactory convergent/divergent validity, high responsiveness/sensitivity to clinical change, and a tested definition of the individual patient application, e.g. screening, monitoring, decision-making). The authors also recommend more stringent benchmarks for measurement error to fit the longitudinal use of measures [23]. Building on this work, we propose to extend McHorney et al.'s framework by incorporating new psychometric approaches, particularly Rasch analysis [24,25] and Generalizability theory [26][27][28], to further test some of their six quality criteria for longitudinal monitoring applications. Generalizability theory in particular has been used successfully in psychological studies that monitored emotional changes [29]. Both techniques are suitable because they address the limitations of classical test theory (CTT): they provide individual item information, information on item invariance, and person-level indicators that help explain floor and ceiling effects, identify sources of measurement error, and support discrimination among different patient groups (e.g. by disease severity) [24,25,28,30]. Specifically, we propose to extend the analysis for criteria (iii), depth of health measured, (iv) precision for cross-sectional assessment, and (v) precision for longitudinal monitoring, by using person-item targeting in Rasch analysis to further understand floor and ceiling effects, and by using variance decomposition to form reliability indices beyond simple test-retest reliability, in order to understand how reliable an instrument is for screening HRQOL at one point in time, monitoring HRQOL over time, and detecting change over time (iv and v, [29]).
We aim to examine the longitudinal validity and reliability of the MyPOS. The objectives are: (a) to evaluate the validity of the MyPOS and its scales in myeloma patients at different stages in their disease trajectory, (b) to determine the reliability of the MyPOS over time (test-retest reliability) within a Generalizability framework, (c) to determine the responsiveness and clinical significance of changes in quality of life scores and subscale scores and to estimate the minimal important difference (MID), both for patients who deteriorated and for those who improved, and (d) to explore the acceptability of frequent self-monitoring of HRQOL.

Study design and participants
This multi-centre, prospective longitudinal study recruited patients with multiple myeloma at different disease stages. Patients were enrolled in the study from March 2014 until July 2015. Inclusion criteria were as follows: older than 18 years, confirmed diagnosis of multiple myeloma that had been disclosed to the patient and capacity to give informed written consent. Patients who were too unwell, distressed or symptomatic to participate, as judged by their clinical team, were excluded, as were patients with severe neutropenia or for whom myeloma was not the most important health problem. Patients were recruited from 14 hospital trusts in the United Kingdom, both from secondary and tertiary centres. Study procedures followed the guidelines of the Helsinki Declaration. Ethical and research governance approvals were obtained from the Central London Ethics Committee (13/LO/1140) with further local Research and Development approvals from all participating National Health Service (NHS) hospital trusts.

Procedures
Consenting patients were invited to complete questionnaires at baseline and then every two months, for a total of five assessments and a maximum follow-up of eight months post-baseline. The first questionnaire was given to patients when they attended outpatient clinics. Subsequent questionnaires were sent by mail with a self-addressed, pre-stamped envelope for return, a pen and a sweet to boost participation [31]. Patients were followed up, where possible, if they moved to a nursing home, hospital or hospice. We sought information about any deaths that occurred.

Questionnaires
Participants completed the MyPOS [15]. The MyPOS is a module of the Palliative Care Outcome Scale (POS) [32][33][34], a prominent measure of palliative care concerns, extended by myeloma-specific concerns. It comprises a list of 13 symptoms and 20 items about quality of life or palliative care concerns. During the development phase of the MyPOS, in focus groups with experts as well as in cognitive interviews with patients, the decision was made to adapt an existing questionnaire rather than develop a new one [35]. In the cognitive interviews, patients showed a clear preference for the item style and response options of the POS. Some of the generic POS items were also used in building the MyPOS, since they measured relevant domains of myeloma-related quality of life [35]. In an effort to harmonise disease- or condition-specific measures of the POS, the Integrated Palliative care Outcome Scale (IPOS) [36] was formed, and the POS research group opted to convert all disease-specific POS measures to a common, module-style format (similar to the European Organisation for Research and Treatment of Cancer (EORTC) and Functional Assessment of Cancer Therapy (FACT) quality of life questionnaires [37,38]). At the same time, the POS was revised; in particular, its original two symptom items were extended to a list of symptoms prevalent in palliative care patients. The revised IPOS contains 17 items and is a valid and reliable measure [36]. Just prior to commencing this longitudinal validation study, the MyPOS was converted to become a module of the IPOS. All symptom items (generic and myeloma-specific) and general palliative care-related problem items (a list extended by four general palliative care-related concerns) now form the first part of the MyPOS, and the myeloma-specific concerns form the third part of the questionnaire (for the original and revised versions see Supplemental Figs. 1, 2).
The MyPOS used in this study therefore contains six additional IPOS items not contained in the version validated in Osborne et al. (2015) [15].
Items are scored on a five-point Likert scale. For symptom items, the scale ranges from 0 'not at all' to 4 'overwhelmingly'. For all other items, response option labels are question-specific, with 0 signifying no problems and 4 signifying severe problems (Supplemental Fig. 3 shows the response options for each scale of the MyPOS). Content and construct validity of the original MyPOS have been established in a clinically representative sample [15,35].
To evaluate the responsiveness and minimal important change on the MyPOS, an independent question to assess the degree of change was used. This global rating of change question (GRC) [22,39] asked 'Has your overall quality of life changed since the first time you completed this questionnaire?', with patients indicating whether their quality of life had got worse, stayed the same, or had improved. The GRC question was part of each follow-up assessment.
The questionnaire sent at the third assessment contained three open-ended questions to explore the acceptability of frequent self-monitoring. The questions concerned the suitability of the MyPOS for monitoring quality of life, the potential usefulness of monitoring quality of life, and how results could be used by patients and clinicians.

Table 1 provides an overview of analysis methods per objective, following the McHorney et al. framework [23], and details the criteria used for establishing fit and validity/reliability. All quantitative data analyses were conducted in SPSS v.22 [40] and the lavaan package in R [41], and partial credit Rasch models were run in RUMM2030 [42]. Patients with three or more missing MyPOS questionnaires at the follow-up time points were excluded from statistical analyses. If more than 50% of responses within a scale were missing from one questionnaire, it was removed from the analysis. Missing data in the confirmatory factor analysis were imputed using a multiple imputation approach [43]. Responsiveness analyses and Rasch analysis used a complete case analysis without imputation of missing data.
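The scale-level missing-data rule can be illustrated with a short sketch. This is a hypothetical helper, not code from the study; the prorating of missing items shown here is a common scoring convention that the paper does not itself specify:

```python
def scale_score(responses):
    """Prorated scale score under the half-rule described above.

    `responses` is one scale's item responses on the 0-4 metric, with
    None for missing items. If more than 50% of the scale's items are
    missing, the scale is dropped from analysis (returns None);
    otherwise missing items are prorated with the mean of the answered
    items (an assumed convention, for illustration only).
    """
    answered = [r for r in responses if r is not None]
    if len(answered) * 2 < len(responses):  # more than 50% missing
        return None
    mean = sum(answered) / len(answered)
    # total score on the original metric, missing items prorated
    return mean * len(responses)
```

For example, a four-item scale with one missing response is retained and prorated, while one with three missing responses is dropped.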

Statistical analysis
For construct validity (objective 1), re-evaluating the subscale structure defined in the initial validation [15] was necessary due to the sample-dependency of CTT approaches [58]. Confirmatory factor analyses contrasted three models to find the best fit to the data: (i) a unidimensional (one-factor) solution, (ii) a three-factor solution replicating the initial validation [15], with symptom and functioning items loading on one factor, separate from the Emotional response and Healthcare support factors, and (iii) an adapted three-factor solution with all functioning items loading onto the Emotional response factor, resulting in three subscales: Symptoms, Functioning and Emotional response, and Healthcare support. Scaling assumptions of the total MyPOS score, subscale scores and individual item scores were evaluated using Rasch analysis. A partial credit Rasch model was fitted separately to each subscale: Symptoms (13 items), Emotions (17 items) and Healthcare support (3 items).

The criteria applied (Table 1) were as follows. For the confirmatory factor analysis, goodness of fit [44] required a comparative fit index (CFI) ≥ 0.90 [45], a root mean square error of approximation (RMSEA) ≤ 0.06 (90% confidence interval 0.05-0.08) [45], and a non-normed fit index (NNFI) ≥ 0.95 or normed fit index (NFI) ≥ 0.95 [45]. Unidimensionality of the three separate subscales was checked in the Rasch analysis using principal component analysis and paired t-tests in RUMM2030 [46,47], with the criteria RMSEA < 0.08 [48], CFI > 0.90 [49] and Tucker-Lewis Index (TLI) > 0.90 [45]. Floor and ceiling effects were examined via descriptive and Rasch analysis: data completeness and the distribution of item responses, with >15% of responders at the lower or upper end of the scale indicating a floor or ceiling effect [16], and scale-to-person targeting, the ability of the scale to cover the whole range of person estimates, shown on the person-item threshold distribution map [29]. Scaling assumptions were tested via fit to the Rasch model in RUMM2030 [42], indicated by a non-significant χ²-test [50] and RMSEA < 0.2 [45]; note, however, that a large sample size can inflate the χ² value and increase the likelihood of identifying misfit [45].

For test-retest reliability and item invariance (objective 2), test-retest reliability was assessed within Generalizability theory using restricted maximum-likelihood variance decomposition (VARCOMP), with participants and interaction terms as random factors and items and days as fixed factors. The variance associated with each component of variation (systematic between-person differences in mean item levels, true within-person change over time, idiosyncratic item responses, and random measurement error) is partitioned [27,28], and these variance estimates form reliability indices for discriminating between persons (between-person differences) and for within-person change: four generalizability coefficients, each required to exceed 0.5 [29]. It should be noted that determining test-retest reliability within Generalizability theory is a model-based approach that derives reliability indices from variance decomposition, as an alternative to intra-class correlation coefficients. The analysis of test-retest reliability was based on the subgroup of stable patients, as indicated by a global rating of change of 'unchanged' (see objective 3, responsiveness). Item invariance was tested via Rasch analysis using differential item functioning (DIF): a two-way ANOVA of standardised residuals with Bonferroni correction for type I error [52], assessing whether item mean scores showed significant change over all five assessments [50]; a significant interaction between class interval (level of quality of life) and time indicates non-uniform DIF and an unstable, unreliable item.
Floor/ceiling effects and the distribution of item responses were checked using descriptive statistics and Rasch analysis (person-location maps). The presence of floor or ceiling effects is indicated in the person-location map by mean item location scores not matching the whole range of person locations at the lower or upper end of the scale [59]. This indicates either that items representing very good or very poor HRQOL are missing from the measure, or that the sample used for evaluating the measure is not well targeted, i.e. does not comprise all levels of severity that the MyPOS measures [50].

For establishing the test-retest reliability and invariance of the MyPOS (objective 2) in participants who indicated that they did not experience a change in their condition over time, the Generalizability theory framework was used [26][27][28]. Four generalizability coefficients [29] were computed (see Table 1). Item invariance was further tested using Rasch analysis, following Hobart et al.'s approach [58] of differential item functioning (DIF). DIF indicates items not performing in a stable, invariant way, since the expected values on the item are not the same for all subgroups in the sample (i.e. groups of different disease severity or functional ability) [58].

Objective 3, establishing the responsiveness to change and the minimal important difference of the MyPOS, followed guidelines by Guyatt [55] and used a combination of anchor-based and distribution-based approaches. For responsiveness, we used the GRC to identify patients who experienced change over time, with the categories improved, unchanged and deteriorated, and examined the differences in mean score changes between each time point and baseline (T2 to T1, T3 to T1, T4 to T1, T5 to T1). We determined ROC curves separately for improvement and deterioration (improved vs. stable; deteriorated vs. stable) for the total MyPOS score and the three subscale scores.
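The variance-decomposition logic behind the generalizability coefficients can be sketched for a simplified one-facet (persons × occasions) design. The study itself used a persons × items × days design estimated with restricted maximum likelihood in SPSS VARCOMP, so this is an illustration of the idea, not the actual analysis:

```python
import numpy as np

def g_coefficient(scores):
    """One-facet G-study sketch on a persons x occasions score matrix.

    Decomposes variance into person, occasion and residual components
    via expected mean squares, then forms the generalizability
    coefficient for relative (between-person) decisions:
        E(rho^2) = var_p / (var_p + var_res / n_o)
    A simplification of the study's persons x items x days design.
    """
    n_p, n_o = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    occ_means = scores.mean(axis=0)
    ss_p = n_o * ((person_means - grand) ** 2).sum()
    ss_o = n_p * ((occ_means - grand) ** 2).sum()
    ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_o
    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_o - 1))
    var_res = ms_res
    var_p = max((ms_p - ms_res) / n_o, 0.0)  # person variance component
    return var_p / (var_p + var_res / n_o)
```

With perfectly stable between-person differences the coefficient approaches 1; when between-person variance vanishes relative to residual noise it approaches 0.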
For objective 4, we analysed participants' written comments in the open-ended questions of the MyPOS using thematic analysis [57].

Results
Characteristics of patients and questionnaire completion

250 patients were recruited, of whom 238 completed the questionnaire at baseline. Mean age was 68.5 years (range 34-92), mean time post diagnosis was 3.3 years, and 139 (25.5%) patients had been living with myeloma for 5 years or longer (see Table 2). 199 participants

For objective 3 (responsiveness and MID; Table 1), differences in mean score changes between each time point and baseline were assessed and graphed, and the adequacy of the anchor was assessed via Spearman correlation [17]. For the anchor-based MID, receiver-operating characteristic (ROC) curves were used to determine optimal cut-off points separately for improvement and deterioration, according to GRC ratings [53]. The MID is the cut-off point on the ROC curve for which the sum of the percentages of false-negative and false-positive classifications, (1 - sensitivity) + (1 - specificity), is smallest [39]. An area under the curve significantly greater than 0.5 indicates that changes in MyPOS scores are associated with the gold-standard GRC criterion [39]. Distributions of change scores, MIDs and 95% CIs were graphed [54]. For the distribution-based MID, the standard deviation at baseline was used [55]: following Cohen's criteria [56], small (0.2 × SD), moderate (0.5 × SD) and large (0.8 × SD) changes were computed, and a moderate effect-size change was designated as the MID [55]. For objective 4 (acceptability of monitoring), responses to open-ended questions about views on self-monitoring and data feedback were analysed using thematic analysis [57].

Qual Life Res (2017)
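Both MID approaches reduce to short computations. The sketch below is illustrative code rather than the study's own analysis: it picks the anchor-based cut-off that minimises (1 - sensitivity) + (1 - specificity) against the GRC, and computes the distribution-based MID as 0.5 × the baseline SD:

```python
import numpy as np

def mid_from_roc(change_scores, improved):
    """Anchor-based MID: the change-score cut-off for which the sum of
    false-negative and false-positive rates,
    (1 - sensitivity) + (1 - specificity), is smallest [39].
    `improved` holds 1 for patients rated 'improved' on the GRC
    and 0 for 'unchanged' patients."""
    changes = np.asarray(change_scores, dtype=float)
    flags = np.asarray(improved)
    best_cut, best_cost = None, np.inf
    for cut in np.unique(changes):  # each observed change score is a candidate cut-off
        pred = changes >= cut
        sens = np.sum(pred & (flags == 1)) / np.sum(flags == 1)
        spec = np.sum(~pred & (flags == 0)) / np.sum(flags == 0)
        cost = (1 - sens) + (1 - spec)
        if cost < best_cost:
            best_cut, best_cost = cut, cost
    return best_cut

def mid_from_distribution(baseline_scores, multiplier=0.5):
    """Distribution-based MID: a moderate effect size, 0.5 x the
    baseline standard deviation, per Cohen's criteria [55,56]."""
    return multiplier * float(np.std(baseline_scores, ddof=1))
```

The sign convention depends on how the change score is defined; the study derived separate cut-offs for improvement and for deterioration.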

Confirmatory factor analysis
Factor analysis confirmed a three-factor structure, but with functioning items now loading onto the Emotional response factor (solution iii). The fit indices indicated a satisfactory model fit: although the χ² test was significant, the RMSEA (0.056; 90% CI 0.050-0.063) and the CFI (0.942) were satisfactory. Compared with the unidimensional solution, the three-factor solution performed best. The three factors together explained 42.2% of the variance, with the three subscales explaining 28.1, 7.2 and 6.9%, respectively. All items loaded above 0.40 on their respective subscales, except item 12 ('Tingling in the hands/feet', 0.378) and item 29 ('Worry about sex life', 0.189) (see Supplemental Table 1).

Rasch analysis
Overall fit of the data to the Rasch model was satisfactory for each subscale (see Supplemental Table 2). The range of item locations and item thresholds (in logits) for all three subscales indicated that the items mapped out a measurement continuum. The Symptom subscale had the widest range of item locations, from -1.16 to +1.92 logits. The Healthcare support subscale had item thresholds ranging from -3.07 to +5.28 logits. Regarding individual item fit, item 12 'Tingling in hands/feet' was the only item showing misfit in the Symptoms subscale, with a fit residual of +2.68. In the Emotional response subscale, three items ('Sharing feelings with family/friends', 'Worry about sex life', 'Information about the future') showed misfit to the Rasch model (fit residuals ranged from +2.52 to +3.16). All items in the Healthcare support subscale fitted the Rasch model (see Table 3). Examination of graphical fit via item characteristic curves confirmed good fit to the Rasch model for 30/33 items, the exceptions again being 'Tingling in the hands/feet', 'Worry about sex life' and 'Information about the future' (see Supplemental Fig. 3). These items show slight under-discrimination, indicating difficulty in stratifying participants according to different levels on the latent variable HRQOL.
Regarding item response options, thresholds were ordered for 12/33 items, but for 21/33 items the 5-point scale did not work in an ordered way (see Supplemental Table 2). For ten of these items, people appeared to have difficulty discriminating between the last two to three categories, that is, distinguishing a moderate problem from a severe or overwhelming one. For 11 items, people seemed to have difficulty discriminating between the first two categories ('not at all' and 'slight'/'moderate'). Fit for all items improved after removing extreme persons and rescoring the items showing misfit and disordered thresholds to a 3-point Likert scale, combining the categories 'A little' and 'Moderate', and combining the two highest response categories, 'Severe' and 'Overwhelming'. After rescoring, all items on the Symptom subscale showed ordered thresholds. In the Emotional response subscale, item 19 ('Having enough information about the illness') and item 33 ('Having enough information about what might happen in the future') retained disordered thresholds, as did item 32 ('Doctors/nurses show care and respect') on the Healthcare support subscale. Chi-square statistics and the person separation index did not improve on this last subscale after rescoring, and the Healthcare support subscale does not fit the Rasch model. Some item redundancy was present: seven pairs of items had residual correlations exceeding r = 0.30 (3% of all correlations). The following item pairs showed potential redundancy: Nausea-Vomiting (r = 0.37), Problems with feeling at peace-Depression (r = 0.36), Problems with sharing feelings with family-Family anxiety (r = 0.39), Hobbies-Usual activities (r = 0.36), and Worry about illness worsening-Anxiety (r = 0.35). Two pairs of items in the Healthcare support subscale correlated highly: Contacting doctors for advice-Knowledge of staff (r = 0.82) and Contacting doctors for advice-Doctors showing respect (r = 0.55).
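The category collapsing used in the rescoring can be written as a simple recode; the numeric codes for the category labels are assumed from the 0-4 response format described in the Methods:

```python
# 5-point to 3-point recode: 'A little' merges with 'Moderate', and
# 'Severe' with 'Overwhelming', as in the Rasch rescoring above.
COLLAPSE = {
    0: 0,  # Not at all
    1: 1,  # A little
    2: 1,  # Moderate
    3: 2,  # Severe
    4: 2,  # Overwhelming
}

def rescore(item_responses):
    """Map original 0-4 responses onto the collapsed 0-2 scale."""
    return [COLLAPSE[r] for r in item_responses]
```

Collapsing adjacent categories in this way is the standard remedy when Rasch thresholds are disordered, at the cost of some measurement granularity.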

Floor and ceiling effects
For most items, all response options were endorsed. However, 10/33 items ('Nausea', 'Vomiting', 'Poor appetite', 'Sore or dry mouth', 'Diarrhoea', 'Drowsiness', 'Tingling in the hands/feet', and the three items in the Healthcare support subscale) had floor effects, with participants not using the two highest levels. These were also the items with the greatest skew. For up to 18/33 items, more than 50% of participants chose the option 'Not at all'. The MyPOS total score and subscale scores showed a normal distribution, except for the Healthcare support subscale, which demonstrated skewness >2.5 at each time point.
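The descriptive check applies the >15% criterion from Table 1 [16]; a minimal sketch (a hypothetical helper, not study code):

```python
def floor_ceiling_effect(item_scores, minimum=0, maximum=4, threshold=0.15):
    """Flag a floor or ceiling effect when more than `threshold`
    (15% by convention [16]) of respondents sit at the scale minimum
    or maximum, respectively."""
    n = len(item_scores)
    at_floor = sum(s == minimum for s in item_scores) / n
    at_ceiling = sum(s == maximum for s in item_scores) / n
    return {"floor": at_floor > threshold, "ceiling": at_ceiling > threshold}
```

For a symptom item where 0 means 'Not at all', clustering at the minimum produces the floor effects described above.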
In the Rasch analysis, 14 person fit residuals exceeded the recommended range of -2.5 to +2.5 (observed range -3.68 to 3.55), implying that approximately 6% of people gave responses not in keeping with expected scores. Scale-to-sample targeting was suboptimal. Figure 1 shows the person estimate-item location distribution for the three MyPOS subscales. The sample covers the bulk of possible item locations on the Symptom subscale. Some mistargeting exists for the Emotional response subscale. The scale did not cover the sample in the Healthcare support subscale, indicating floor effects.

Reliability of the Myeloma Patient Outcome Scale
The person separation indices implied good sample separation and high reliability (Supplemental Table 2), except for the Healthcare support subscale, which consists of only three items. This was confirmed by Cronbach's alpha values, which did not fall below 0.795.
Variance decomposition showed that the largest component was error variance. The next largest share of variance was due to participants experiencing change between assessments (Table 4), reflected in high between-person variation and person × time interaction terms, indicating that participants experienced different HRQOL trajectories over the eight-month period. The generalizability coefficients (Table 4) show that (a) reliability for screening was reasonable to good (R IF 0.55 to 0.73), (b) reliability for discrimination was lower (R IK < 0.50), except for the Healthcare support scale, and (c) test-retest reliability of the MyPOS was excellent (R > 0.97). Item invariance testing via DIF analysis identified the items 'Constipation', 'Drowsiness' and 'Diarrhoea' in the Symptom subscale as unstable over time. In the Emotional response subscale, only the item 'Worry about infections' showed DIF. None of the items in the Healthcare support subscale showed DIF (see Supplemental Table 3).

Responsiveness of the Myeloma Patient Outcome Scale
The total MyPOS score correlated moderately with the global rating of change (GRC, anchor) at every time point (range r = 0.312 to r = 0.482). 125 participants contributed data for all five time points. Equal numbers of participants experienced a change in quality of life for the better or for the worse, but the majority (about 60%) experienced no change (see Supplemental Table 4). Figure 2 shows the plotted change scores across time points. Except for the Healthcare support subscale, all mean change scores and corresponding confidence intervals indicated an improvement in MyPOS scores when patients classified themselves as overall improved, and a worsening of MyPOS scores when participants described their general quality of life as deteriorated. Table 5 lists the optimal cut-off points (MIDs). For patients who reported they had improved, the MID for the total MyPOS score was 2.5; the subscale MIDs were 1.5 for Symptoms, 4.5 for Emotional response and 0.5 for Healthcare support. MIDs for deterioration were similar, with an MID of 4.5 for the total score and MIDs of 2.5, 3.5 and 0.5 for the subscale scores. The range of MIDs derived from the distribution-based approach was much larger, with estimates ranging from 3.4 to 13.4 for the total score and from 0.3 to 9 for the subscale scores (Table 5). Further examination of the mismatch between the two methods and of the uncertainty around the MID revealed greater misclassification for improvement than for deterioration (see the distribution graph for the total MyPOS, Supplemental Fig. 3). The area under the ROC curve predicting improvement or deterioration was significantly greater than 0.5 (p < 0.01) for the total MyPOS change score and all subscale scores except the Healthcare support subscale.

Acceptability of frequent self-monitoring for patients
46% of participants thought the MyPOS a feasible tool for monitoring symptoms and problems/concerns over time, while 23.9% did not believe it was acceptable to complete the MyPOS regularly before clinic visits; 30% of responses were missing due to drop-out at this time point. Concerns about acceptability fell into two categories: (a) those who thought it unfeasible to monitor changes because their condition changed on a daily basis and a questionnaire could not capture these minute alterations; and (b) those who felt that their clinical team monitored their condition regularly and a questionnaire would duplicate information. Linked to both were concerns about overall burden, especially when receiving treatment within a clinical trial with regular data collection, and the associated cost. Positive statements included the belief that monitoring would help to focus on symptoms and problems over time, something these patients felt was often disregarded or overlooked in consultations: 'It would help the patient to focus on their treatment, difficulties and problems. We are not always aware that some problems and side effects are related to medication and treatment and try to ignore them.' (Female participant with relapsed disease)

Discussion
In the CTT and Rasch psychometric analyses, the MyPOS, a disease-specific measure of quality of life and palliative care concerns in multiple myeloma, showed adequate construct validity and reliability for certain subscales and items; for example, in the Rasch analysis, items mapped out a measurement continuum in all three subscales. In terms of suitability for longitudinal monitoring, it had excellent test-retest reliability, measured change reliably and was responsive, and it was able to discriminate between subgroups of patients longitudinally. However, floor effects in some symptom and healthcare support items, suboptimal scale-to-sample targeting and disordered thresholds point towards areas for revision. These revisions particularly concern the third subscale, Healthcare support, which had very substantial floor effects in its items, high inter-item correlations and thus item redundancy. Further targets are items in the Emotional response subscale, particularly items 15 ('Family anxiety') and 18 ('Sharing feelings with family/friends'), item 14 ('Anxiety') and item 28 ('Worry about illness worsening'), items 21 and 22 ('Usual activities'/'Hobbies'), and items 19 ('Information about illness/treatment') and 33 ('Information about what might happen in the future'). It is worth exploring whether the MyPOS could be shortened by removing redundant items, which might also improve model fit in the factor analysis, and whether a two-factor structure (after removal of the Healthcare support items) provides a better fit to the data.
Any revisions of the MyPOS must weigh psychometric quality against the clinical utility of each item in its clinical context, balancing content validity, clinical usefulness and applicability [60]. A systematic review [13] identified 13 HRQOL instruments validated in myeloma, most of them generic in nature (EORTC QLQ-C30, EQ-5D and 15D, FACT-G, SF-36/12). This poses a problem, as generic questionnaires do not include disease-specific concerns and symptoms and are therefore less suited to validly reflecting patient experience [18]. The MyPOS was subsequently developed, following extensive patient interviews, to close the gaps in item coverage identified in other HRQOL instruments and to operationalise disease-specific HRQOL according to a conceptual model developed from these qualitative interviews [35].
We argue further that for clinical applicability, test-retest reliability and responsiveness to change are paramount, as they enable the valid monitoring of patients in clinical practice. However, this information is often not available for disease-specific tools in multiple myeloma. For example, an MID has only been determined for the EORTC QLQ-C30 and the two health state measures EQ-5D and 15D [61,62]. Subsequently, two new disease-specific tools, the MDASI-MM [63] and the FACT-MM [64], have been developed, but their validation has not yet been completed or has not included longitudinal validity testing to date. Another aspect lacking from validation studies is the investigation of scaling quality. One notable exception is a study exploring Mokken scaling stability of the EORTC QLQ-C30 across different subpopulations of myeloma [65]. However, this analysis did not provide in-depth information on each item and did not examine item stability in a longitudinal context. For the MyPOS, we provide information on both scaling quality and longitudinal validity.
Regarding possible revisions of the MyPOS, the measurement aim needs to be considered. For example, floor effects in gastrointestinal symptoms may be expected for most of a relatively stable myeloma population not currently undergoing anti-cancer treatment or receiving maintenance treatment only [66]. Nevertheless, these are important symptoms for the clinician to monitor, so that the treatment plan can be adjusted should they suddenly become severe [67-70]. Inspection of the person-item threshold maps shows that it is not the items in the measure that fail to cover the whole spectrum; rather, the sample did not cover all the item difficulty locations. Similarly, floor effects are commonly seen in HRQOL and health satisfaction measures constructed with the intention of being applicable to a wide range of disease severity levels [71-73]. This is true even for disease-specific scales, as observed in the field-testing of the EORTC QLQ-MY24 [74], subsequently revised to 20 items. Floor effects in healthcare support items may reflect the finding that respondents with more positive experiences of the healthcare they received are more willing to participate in studies from the outset [75]. While revision of the rating scale helped improve the fit of items in the Symptoms and Emotional Response subscales, such response scale adaptations should be performed only after further qualitative, cognitive interview work [59,76]. Another option is to extend the range of item difficulties to cover all levels of severity and impact of myeloma on HRQOL by constructing item banks and computer adaptive testing [77]. In our analysis, we combined traditional psychometric approaches (confirmatory factor analysis, responsiveness and MID) with modern item response theory to evaluate the stringent criteria proposed by McHorney et al. [23] for longitudinal individual patient monitoring.
Using the new approaches addresses shortcomings of CTT, such as validating only total scores instead of single items and yielding sample-dependent results [30]. The benefits of Rasch analysis include item-level statistics and information on how items can be improved to fit the application in a specific sample [78]. Furthermore, Generalizability theory [26-28] allows an exploration of sources of variation in item scores, leading to reliability indices that distinguish different scenarios of use, i.e. using HRQOL measures for screening (single application) or for monitoring (tracking outcomes over time in an individual). This extends the limited exploration of test-retest reliability in CTT approaches [22]. The new psychometrics are proposed as extensions to the original operationalisations of measurement quality criteria proposed by McHorney et al. [23] in their seminal paper. They can offer additional information on the sources of floor and ceiling effects and, because Rasch analysis yields information on the full range of the construct being measured, on problems with the coverage of constructs and diverse patient groups. The same is true for Generalizability analysis, which provides a fine-grained picture of sources of measurement error beyond random measurement error and can therefore help explain problems with precision of measurement in both cross-sectional and longitudinal applications [27,28]. However, the latter approach to reliability assessment, and especially the indices proposed by Cranford et al. [29], have not been widely used in the literature, which makes their interpretation difficult. For example, it is not clear whether the thresholds for acceptable ICC estimates proposed by McHorney et al. [23] are applicable to the screening, discrimination and reliable change indices proposed by Cranford et al. [29]. Further research is needed to explore this issue.
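To make the Generalizability-theory logic concrete, the following sketch (ours, not the study's analysis code) decomposes scores from a simplified two-facet person-by-occasion design into person, occasion and residual variance components, and derives a reliability index for person means across repeated assessments in the spirit of Cranford et al.'s between-person index; the data and the reduced design are assumptions for illustration only.

```python
# Illustrative sketch: two-way (person x occasion) mean-squares decomposition
# and a reliability index for person means, in the spirit of Generalizability
# theory. All scores below are invented; the real design has more facets.
def two_way_ms(X):
    """Return mean squares for persons, occasions and residual
    for a complete persons x occasions score matrix X."""
    n, k = len(X), len(X[0])
    grand = sum(sum(row) for row in X) / (n * k)
    p_means = [sum(row) / k for row in X]
    o_means = [sum(X[i][j] for i in range(n)) / n for j in range(k)]
    ss_p = k * sum((m - grand) ** 2 for m in p_means)
    ss_o = n * sum((m - grand) ** 2 for m in o_means)
    ss_tot = sum((X[i][j] - grand) ** 2 for i in range(n) for j in range(k))
    ss_res = ss_tot - ss_p - ss_o  # person x occasion interaction + error
    return ss_p / (n - 1), ss_o / (k - 1), ss_res / ((n - 1) * (k - 1))

def reliability_of_person_means(X):
    """Reliability of person means averaged over occasions."""
    ms_p, _, ms_res = two_way_ms(X)
    return (ms_p - ms_res) / ms_p

scores = [  # 4 patients x 3 assessments (hypothetical subscale scores)
    [10, 11, 10],
    [25, 24, 26],
    [40, 42, 41],
    [15, 16, 14],
]
print(round(reliability_of_person_means(scores), 3))
```

Because between-person variance here dwarfs the residual, the index is close to 1; in a full Cranford-style design the residual would be split further into item, day and interaction facets.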
Moreover, we applied Cranford et al.'s [29] method to a less intensive longitudinal design, with far less frequent measurement than their diary study employed. The analysis of sources of variation stemming from different time points is therefore not as detailed as in their original analysis.
Applying the framework of quality criteria for individual patient monitoring to the MyPOS yields the following assessment of its suitability for this application. Regarding (i) practical features, administration time is well below 15 min [15]; however, the number of items is rather high for a clinically applicable tool [18]. The analysis of the breadth of health measured (ii) shows good dimensionality and coverage of all aspects of disease-related QOL according to the theoretical model [15]; however, scale revisions indicated by low factor loadings, item redundancy and poor fit of the Healthcare Support subscale call for further exploration of dimensionality. Criterion (iii), the depth of health measured, was partially fulfilled, with floor effects in 10 of 33 items and person-item targeting analysis within Rasch modelling suggesting further analysis in more severely affected samples. Criteria (iv) and (v), pertaining to reliability, were assessed slightly differently: the suggested analyses of Cronbach's alpha for cross-sectional reliability and of test-retest reliability were extended by Rasch analysis and Generalizability theory, and the standard error of measurement was omitted as a quality criterion. Although the size of the coefficient that should be obtained is unclear, the rigorous criterion for reliability [23] was achieved for all subscales in the longitudinal analysis, but not for cross-sectional reliability (screening and discrimination application, Cronbach's alpha). Validity (vi), in terms of cross-sectional construct validity and responsiveness to change, yielded good sensitivity-to-change values. Further convergent and divergent validity assessment is reported in the initial validation of the MyPOS [15]. One of the most important features that makes a scale suitable for monitoring purposes is its responsiveness to change [19].
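As a reference point for the cross-sectional reliability criterion above, Cronbach's alpha is computed from the item variances and the variance of the total score. The following minimal sketch implements the standard formula; the item scores are invented for illustration and are not study data.

```python
# Illustrative sketch: Cronbach's alpha from item variances and the variance
# of the total score. Scores below are invented ordinal responses.
def cronbach_alpha(items):
    """items: one inner list per item, each with one score per respondent."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def var(xs):              # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    sum_item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - sum_item_var / var(totals))

items = [  # 3 items x 4 respondents (hypothetical)
    [1, 2, 3, 4],
    [2, 2, 3, 5],
    [1, 3, 3, 4],
]
print(round(cronbach_alpha(items), 3))  # high alpha: items rise together
```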
Our MIDs for improvement and deterioration were smaller than those reported by Kvam and colleagues for the EORTC QLQ-C30 in patients with multiple myeloma [62]. Their MIDs range from 6 to 17 points for improvement and 12 to 27 points for deterioration, a small to medium change [62]. This discrepancy might arise from the different nature of the QLQ-C30, a generic measure with higher absolute values of meaningful change [78-81]. The large baseline standard deviations and the amount of misclassification observed imply that not enough patients in our sample experienced a substantial change, and that imprecision remains in the anchor's classification of participants as improved or deteriorated. This is a commonly reported problem with the ROC-curve-based method of deriving the MID [54,82], which, as a diagnostic approach, would require a bias-free and precise gold-standard anchor. In the absence of guidance on constructing global rating scales, however, this situation cannot easily be rectified.
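The ROC-based anchor method can be sketched in a few lines: each candidate change-score cutoff is scored for sensitivity and specificity against the anchor classification, and the cutoff maximising the Youden index (sensitivity + specificity - 1) is taken as the MID. This is an illustrative implementation with invented data, not the study's analysis code.

```python
# Illustrative sketch of the ROC-based MID: pick the change-score cutoff that
# best separates anchor-classified "improved" from "unchanged" patients.
def roc_mid(changes, improved):
    """changes: score changes; improved: parallel 0/1 anchor classification.
    Returns the cutoff with the maximum Youden index."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(changes)):
        tp = sum(1 for c, y in zip(changes, improved) if y and c >= cut)
        fn = sum(1 for c, y in zip(changes, improved) if y and c < cut)
        tn = sum(1 for c, y in zip(changes, improved) if not y and c < cut)
        fp = sum(1 for c, y in zip(changes, improved) if not y and c >= cut)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1  # Youden index
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut

changes = [5, 3, 4, 6, 1, 0, -1, 2, -2, 1]   # invented change scores
improved = [1, 1, 1, 1, 0, 0, 0, 1, 0, 0]     # invented anchor ratings
print(roc_mid(changes, improved))
```

The sketch also makes the text's caveat visible: with a noisy anchor, misclassified patients pull the optimal cutoff around, which is why a precise gold-standard anchor matters for this approach.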
The first limitation of our study is the use of consecutive enrolment, resulting in a convenience sample. Its strength lies in greater clinical representativeness, counteracting the sampling of younger and fitter patients that occurs when validation is part of a clinical trial [66,83]. However, since we recruited from outpatient clinics and day centres, we potentially missed patients feeling too unwell to participate in a longitudinal survey. This was the first study to use Generalizability theory for this purpose; the approach normally requires frequent assessments to evaluate sensitivity to change [29], which was not feasible here due to patient burden. The reliability coefficients may therefore underestimate the true longitudinal reliability of the MyPOS. Furthermore, since this approach is relatively new, there are no guidelines on the size of the coefficients. Confirmatory factor analysis used the DWLS approach to account for non-normality and the ordinal nature of the response scale of the MyPOS. Although this approach has been reported as robust in samples above 200, a caveat is its use in situations where data are missing not at random [84]. Baseline data were used for the confirmatory factor analysis, with missingness unlikely to be due to systematic item non-response or non-random mechanisms.
However, low factor loadings of some items might be due to systematic bias, e.g. for item 24 (''Worry about sex life''), with an effect on model fit. The different grouping of functioning items on subscales in the reported factor analysis, compared with the initial factor analysis in Osborne et al. [15], is most likely due to changing the descriptive labels of the rating scale of the symptom items to adapt the MyPOS to the overall item and scaling format of the IPOS [32,36], of which it is a module. In the adapted version of the MyPOS, the rating scale for the symptoms lists only the severity of impairment, without the added descriptor ''impaired activity or concentration''. This change might have affected other aspects of construct validity, likely necessitating re-validation of aspects of construct validity of the symptom subscale. For the anchor-based MID approach, there is no consensus on the number of categories or the exact phrasing of the global rating scale of change; authors have used 3-point [56] to 15-point scales [85]. We tried to balance the potential lack of sensitivity of fewer response options against the need for a valid measurement of change, presenting only as many levels as patients can adequately discriminate. Since we always asked patients to compare changes in their condition to the first assessment, recall bias may have affected at least part of the sample. Furthermore, the wording of the rating scale might not represent a valid global assessment of change in quality of life as operationalised in the multi-dimensional, disease-specific MyPOS. The validity of the global rating of change as a criterion for anchor-based derivation of the MID is further called into question by the relatively low correlation between anchor and change scores and by the MID not exceeding the SEM in all subscales.

Conclusion
This analysis supported the responsiveness and test-retest reliability of the MyPOS, using a multi-centre outpatient sample of patients at different disease stages. Additional derivation of the MID for use in individual patient care and exploration of valid anchors of global change are needed. Modifications to the scoring format and potential removal of the Healthcare Support subscale may be warranted, subject to further testing. The study was the first to apply Generalizability theory to establish test-retest reliability and stability of scores in frequent measurements in medicine.
Funding This work was supported by grants from Myeloma UK, St Christopher's Hospice London, UK, and the National Institute of Health Research (NIHR) UK (Professor Irene Higginson holds a Senior Investigator Award and leads a theme in the South London Collaboration for Leadership in Applied Health Research and Care (CLAHRC)). The funder of the study had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The corresponding author had full access to all the data and had responsibility for the decision to submit for publication. The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health.
Author's contributions CR detailed methods for the study, led the application for ethical approvals, collected the data, planned and conducted the data analysis and drafted the manuscript, supervised by IJH, GW and RJS. IJH led the application for funding for this programme of work, which included this study, in collaboration with SAS, RJS and PME, and acted as senior researcher overseeing the project and publications. All authors contributed to the preparation of the manuscript and read and approved the final manuscript.

Compliance with ethical standards
Conflicts of interest All authors declare that they have no conflicts of interest. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.