Introduction to the special section "Reducing research waste in (health-related) quality of life research"

In 2021, we issued a call for papers on reducing (health-related) quality of life ((HR)QL) research waste and optimizing patient-reported outcome (PRO)/(HR)QL data. As identified by Chalmers and Glasziou [1], research waste refers to the avoidable, inappropriate conduct and dissemination of research. It has multiple contributing factors and is likely a spectrum of impact rather than a dichotomous categorization (i.e., a question of how much of a given study could be considered research waste). In particular, Chalmers and Glasziou highlighted "the choice of research questions; the quality of research design and methods; the adequacy of publication practices; and the quality of reports of research" [2] as focal areas where the decisions and actions of stakeholders in the research process can have a negative impact.

The topic of research waste has received limited interest from our community; to date, publications have largely focused on adherence to reporting guidelines and quality reviews [3–8]. We were therefore looking for current and innovative state-of-the-art thinking, evidence, and methodological and clinical approaches to reducing research waste in (HR)QL/PRO research. We considered research waste across five stages of research production: question selection; study design, conduct and analysis; ethics, regulation and delivery; publication and reporting; and bias and usability of results/reports [9, 10]. We welcomed innovative theoretical approaches, applications, and research exploring this issue. We received 18 expressions of interest, of which eight papers are now collected in this special section. The eight papers are presented under three headings that align with elements of the call and represent four of the five stages of research production.

The special section
The section opens with a focus on research design and conduct. Chalmers and Glasziou [2] identified four sources of research waste, starting with not asking the right research question. The first paper [11] in this special section explored the extent to which research questions in journals focused on PROs were clearly stated. The authors found that almost half of the research questions were poorly framed or unframed. Even "adequately framed" questions rarely stated a priori what researchers wanted to know, increasing the risk of biased reporting. Although standards for framing research questions have been in use for over 30 years, researchers still often fail to include key elements when stating their research question. The paper summarizes existing frameworks for formulating research questions and sets out two criteria for a good research question in the context of health outcomes research.
Further sources of research waste include poorly designed or conducted studies [12]. Systematic reviews have shown that trial protocols often lack important information regarding PROs [13–15]. To equip trialists with the motivation, knowledge and resources to write PRO content into trial protocols, the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials)-PRO guidance was developed [16, 17]. The second paper [18] in this special section assessed whether a 2-day educational workshop, designed to teach researchers the key considerations for designing a PRO study and how these should be addressed in a clinical trial protocol, improved the completeness of PRO protocol content against the consensus-based minimum standards of the SPIRIT-PRO Extension [17]. Although participants were highly satisfied with the workshops, completeness of PRO protocol content generally did not improve. Failing to meet these minimum standards during study design and conduct may result in gathering inadequate or misleading information and in a lack of statistical precision or power. The authors highlighted the inadequacy of study protocols, failure to involve experienced PRO methodologists, and failure to train clinical researchers and statisticians in relevant PRO research methods and design, and they provide suggestions for how future educational efforts may be more effective [18].
The third paper [19] addresses the importance of planning an adequate sample size during study design to ensure that research resources are used ethically and to maximize impact and replicability. The authors discuss common misconceptions related to sample size planning specific to (HR)QL/PRO studies, offer non-technical corrections to these misconceptions, and provide a sample size reporting checklist, to help researchers develop a more nuanced understanding of sample size planning and of the items to consider during (HR)QL/PRO study design and reporting.
The second set of papers focuses on research waste related to reporting. Despite the number of guidelines developed to encourage better reporting across research designs [20, 21], research waste continues to be a major problem for (HR)QL/PRO research. Systematic reviews consistently find poor PRO reporting against the CONSORT (Consolidated Standards of Reporting Trials)-PRO reporting standards [20]: studies either do not report PRO results or report them inadequately [6, 22, 23]. This renders PRO results unusable or non-replicable and calls into question the ethics of collecting PRO data that will not be used [2]. PRO ethics guidelines, which recommend questions that should be asked of a study's design to facilitate evaluation of its ethical acceptability, have recently been published [24].
Three papers address the issue of inadequate reporting. The first [25] reports a systematic review of papers reporting EQ-5D utility weights in patients with coronary artery disease and highlights the difficulties researchers face when trying to retrieve and reuse published (HR)QL/PRO data. A major contributor to research waste is the inability to reuse and/or include published studies in meta-analyses because of insufficient reporting; this threatens the reliability of meta-analysis results and wastes researcher time and cost in reviewing poorly reported papers. As the authors found, losing about half of the data points that might otherwise have been included is problematic, and reliance on meta-analyses built on such incomplete data is potentially harmful [26]. Poor reporting is causing similar problems in cancer research [27–29]. To reduce waste, the authors recommend that studies using the EQ-5D (or any PRO measure (PROM), for that matter) report appropriate summary statistics to enable reuse in meta-analyses and include the PROM in the title or abstract, in line with current reporting guidelines (the CONSORT-PRO and SPIRIT-PRO Extensions).
The second paper [30] appraises the use of the CONSORT-PRO Extension as an evaluation tool for assessing the reporting of PROs in publications and describes the reporting of PRO research across reviews. The authors found that many PRO studies published after the release of CONSORT-PRO in 2013 did not report recommended CONSORT-PRO items, and that studies reviewing PRO publications omitted or substantially modified recommended items in their evaluations. Such variation in evaluations impairs knowledge translation and may lead to misuse of the CONSORT-PRO Extension.
The third paper [31] describes a project undertaken to develop a clear, unified, universally applicable approach to the translation, dissemination, and impact of research undertaken by Health Service Evaluation staff and organizations. The authors offer a threefold approach, providing information, guidance, and training (an explainer video, guidance documents, and learning modules) to those who create and use research knowledge. However, they caution that a knowledge translation approach on its own does not guarantee that research findings will be implemented. Patient and public involvement is necessary during research design to ensure that the research conducted is relevant, reflects patient priorities, and assesses appropriate PROs with appropriate PROMs. Together, the three papers reinforce the need for better reporting, implementation and translation of (HR)QL/PRO results into practice, to maximize the value of research informing health service delivery and policy and to reduce avoidable research waste.
The section closes with papers on the usability of results. The first paper [32] discusses how validity theory provides a framework for evaluating whether re-purposing and adapting an existing PROM for a new use (e.g., a new patient population or an altered recall period), rather than creating a new one, is warranted. Four examples of modifications are presented to demonstrate these ideas in practice: changing the mode of administration, changing the recall period, extending from one clinical indication to another, and adapting a "general" PROM to encompass disease-specific aspects of the concept of interest. The authors propose that, rather than assuming any change to an existing PROM requires the entire development process to be repeated, which demands extensive resources, we can consider the nature of the proposed change and use validity theory to evaluate which inference/claim is impacted (and to what extent), limiting the development of de novo PROMs to when it is absolutely necessary [32].
The special section closes with a paper [33] focused on inefficient research regulation and management practices as sources of research waste. It discusses the opportunities in (HR)QL/PRO research enabled by data linkage and data registries; the barriers to data access and use, and their implications for waste in (HR)QL/PRO research; and proposed legislative reforms. The authors argue that, rather than investing in (HR)QL/PRO data infrastructure to support multiple studies, we rely on collecting data de novo for discrete studies, resulting in research waste that arises "from questions being overlooked or unnecessarily addressed, research being underpowered or done too slowly, and research being too costly" [34].

Editorial commentary
Research waste relating to the production and reporting of health and medical research is a major problem. It has long been recognized that "[h]uge sums of money are spent annually on research that is seriously flawed through the use of inappropriate designs, unrepresentative samples, small samples, incorrect methods of analysis, and faulty interpretation" [35]. In 2014, the Lancet series on increasing value and reducing waste in medical research estimated that 85% of research is wasted because studies ask the wrong questions [36], are poorly designed or conducted [12], are inefficiently regulated and managed [34], produce inaccessible information [26], or are not appropriately reported, disseminated, or translated into decision making [10]. These issues are pertinent to (HR)QL/PRO research. In an attempt to reduce waste and maximize efficiency, the Lancet's REWARD (REduce research Waste And Reward Diligence) Campaign invited everyone involved in research to examine critically how they work and to strive to improve the value of the funds invested in the research we commission, deliver, publish, and implement [37]. However, nearly a decade after the Lancet series, research waste remains a major problem. We therefore felt that a call for papers reporting current and innovative thinking, evidence, and approaches to reducing research waste and maximizing (HR)QL/PRO data would continue the effort. The papers in this special section discuss the many contributors to research waste, and we take the opportunity here to highlight what we think are ongoing issues that, as researchers, we should all be mindful of, and potential ways in which we could all do our bit to reduce research waste in our field.
Poor (HR)QL/PRO study design, analysis, reporting, and application all contribute to research waste and reduce the benefit of (HR)QL/PRO data. As researchers and editors, we still see problems with study design across the (HR)QL/PRO literature. First, better use of theoretical frameworks would allow researchers and practitioners to design better quality studies. For example, key terms in our field such as PRO(M)s and (HR)QL are often used interchangeably or without clear definitions [38–40], even though frameworks for these terms exist and could be used [41–43]. Well-developed theoretical frameworks inform what needs to be assessed and when, and how constructs are operationalized in a specific setting. Use of theoretical frameworks could also help identify, early on, study design problems that arise when the PROMs chosen to assess a study's independent and dependent variables contain items that, in whole or in part, assess the same construct(s). Such overlap leads to spuriously inflated relationships, with wide-ranging impacts on the interpretation of results [44].
A second set of problems, directly related to these first considerations, concerns the PROs assessed within studies and how to increase their relevance and appropriateness: (1) uninformative PROs are assessed; for example, likely intervention effects might include pain, fatigue and sexual dysfunction, but only global HRQL is assessed. (2) Inappropriate PROMs are used; for example, the study outcomes of interest are stated as pain, fatigue and sexual dysfunction, but only a generic HRQL instrument is used, which does not assess fatigue or sexual function; or a PROM is used that is not well targeted at the intended population(s) and is therefore unable to detect an intervention effect because of floor or ceiling effects. Related is the problem of jingle-jangle fallacies, where different constructs are incorrectly assumed to be the same because of a shared label ("jingle"; in our field, most commonly not differentiating between HRQL and QOL [38]), or where different terms are used to describe what is in fact the same construct [45, 46]. (3) Uninformative time-points are chosen for PRO assessment; for example, likely intervention effects occur 1 month post-intervention, but PROs are assessed at baseline before the intervention and then 6 months afterwards, when intervention effects are diluted or resolved.
Planning PRO assessments informed by theoretical frameworks, be it for research or practice, will not only reduce research waste as an academic exercise but will likely increase the relevance of research findings for clinical practice and policy. For HRQL research in particular, it is during these stages of study design that input from key stakeholders, including co-creation and co-production, is most beneficial for designing better quality studies and avoiding research waste [47].
A third problem area is study samples. While reporting guidelines [21] emphasize the importance of detailing how study participants were approached and any reductions in sample size from recruitment to the final analytic sample, this information is often not reported. Failure to report it limits transparency and readers' ability to assess both the quality and the relevance of the sample for a particular research context, and it conceals the need to explore the effects of such selection processes with appropriate additional details and/or sensitivity analyses. The sampling problem is compounded by the exclusion of participants with missing PRO data, which can cause loss of power and bias and seriously affect the external validity (generalizability) of the results. Strategies for reducing the occurrence and impact of missing PRO data have been summarized [48, 49], and methods to explore the impact of missing data on study results have been developed [50, 51]. Finally, small sample sizes may be underpowered for confirmatory PRO objectives and hypotheses. Especially in medical research, where sample sizes are based on clinical endpoints such as survival or biomedical indicators, the resulting sample size may be insufficient for PROs as secondary outcomes. All relevant study outcomes should be considered a priori during study design and planning. In our continued effort to give PROs a prominent role in studies evaluating patient-facing health care interventions, we have previously argued [52] that, short of achieving this goal, registered papers, and Registered Reports in particular, offer new ways of: (i) gathering early feedback on publication plans, (ii) using writing resources earlier in the study process, and (iii) ensuring timely dissemination.
Inadequate or inappropriate analysis of PRO data is another source of research waste. For example, not accounting for confounders or multicollinearity in the analysis increases the chance of false positives, resulting in potentially misleading findings [53]. From our experience, studies may also recruit patient samples of mixed disease type (e.g., breast, colon, and lung cancer patients), stage (e.g., early vs advanced disease) and treatment, and/or with wide variation in the time from end of treatment to data collection, but pool their data for analysis rather than report results separately [27–29]. Such analyses are problematic: we cannot assume that these patients' experiences and treatment impacts are equal, and pooling precludes observations specific to treatment and to disease type and stage. Pooling PRO data collected at widely variable times since diagnosis or end of treatment obscures patterns of adjustment over time and rates of recovery; it tells us little about PRO trajectories and instead simply describes PROs in a sample of patients, which may have a purpose but is not very useful for clinical decision making [27–29, 53, 54].
Many of these analysis problems could be mitigated by greater consideration of a study's theoretical foundation and research question(s); analyses are the operational reflection of these aspects of the research process. An area where this becomes especially apparent is non-randomized comparisons between participant groups. Such study designs are especially common in practice settings and can generate important insights with high relevance for implementation and application. But a descriptive report of observed differences between two groups (e.g., patients receiving two different treatments for the same condition) rarely yields actionable information, and it almost certainly says nothing about causal differences in effectiveness between the compared treatments. Frameworks for determining causal effects are available [55–58], and considering them when developing observational research is crucial. These design issues can all lead to inaccurate conclusions about the benefits and harms of interventions and could be harmful to future patients. We propose that all researchers ask 'what is the purpose of collecting PROs?', 'what information would be informative?', and 'how will the PRO data be used?' when designing their studies. The answers will in turn inform sample size and analytical considerations and lead to better quality PRO data.
Another important source of research waste is poor reporting. Reporting issues pertinent to our field include lack of critical details, failure to publish PRO results, over-interpretation (e.g., overemphasis on positive results/benefits, glossing over harms/negative findings, causal interpretation of non-causal findings), selective reporting (e.g., unpublished hidden PRO data, not analyzed or not reported), spin, and manipulation of data [22, 35, 37, 59–62]. Disappointingly, reporting guidelines exist but are not adhered to [30]. A noteworthy finding from one of the papers in this special section [30] was that, of 13 journals that published reviews synthesizing PRO studies, none recommended use of the CONSORT-PRO reporting guidelines specifically; five recommended use of EQUATOR or CONSORT guidelines, and nine mentioned neither in their instructions to authors. It seems journals publishing PRO studies have not endorsed use of the CONSORT-PRO. Author instructions and administrative checks by journals may be potential mechanisms/forcing functions to ensure better reporting. We would argue that a paper that adheres to reporting guidelines better equips a reader to assess the quality of the study design and conduct and to interpret its findings accurately, improving the potential of the research to be impactful and meaningful to patients and clinical practice.
Poor (HR)QL/PRO study design, conduct, analysis and reporting not only contribute to research waste; they also limit the extent to which PRO data can benefit patients and inform clinical practice. Increasing the quality, availability and use of (HR)QL/PRO data may ultimately enable these data to inform public health, clinical practice and health policy. PROs should only be included in a study if they will inform future decision making. Assessing outcomes such as HRQL requires time and effort: researchers need to plan the HRQL data collection and then enter, analyze and report the data; patients spend their time completing the PROM(s); and staff at sites need to ensure that PRO assessments are completed at scheduled time-points and to record reasons for non-completion when they are not. Collecting PROs is also expensive, with costs associated with PROM licensing, staff time, and administration. All these factors need careful thought and planning so that PRO data are collected in a scientifically robust and meaningful way.
Resources exist to help researchers design (HR)QL/PRO studies, such as guidance on what to include in a PRO study protocol [16] and recommendations for selecting PROMs [63, 64], analyzing PRO data [53], and reporting PRO studies [20]. A newer initiative, PROTEUS (Patient-Reported Outcomes Tools: Engaging Users & Stakeholders), promotes the systematic use of available tools to optimize the design, analysis, reporting, and interpretation of PROs in clinical trials [65]. The PROTEUS website includes checklists, web tutorials, and other resources to support the optimal use of PROs, and the individual papers in this special section provide additional resources for reducing research waste in our field. However, without education in robust PRO methodology, appreciation of the importance of high-quality research design, conduct and reporting, and dissemination and use of the available resources, we will continue to contribute to research waste and fail to realize the value of PRO data. Study design tools and reporting guidelines are only part of the job: it is ultimately the responsibility of researchers to ensure that appropriate methods and conduct are applied in any specific study, and the responsibility of authors to ensure that their study methods, conduct and results are adequately justified and reported.
We are grateful for the excellent range of submissions received and to all the authors and reviewers involved in selecting the published papers. The issues raised in our commentary are only a small selection of those relevant to us as (HR)QL/PRO researchers and editors, and to the whole research and practice field. The papers in this special section further highlight sources of research waste and provide resources, recommendations and possible solutions for reducing research waste and maximizing PRO data. However, discussion of research waste since Chalmers and Glasziou [4] introduced the problem highlights other areas that affect the resources we have available to conduct research, such as the time needed for grant writing and peer review in funding processes [66, 67], and, not least, our own practices in the peer review of publications [68, 69]. These issues become even more pertinent when we consider that "health" is a global priority needing a global agenda [70–72]. We therefore expect the topic of research waste related to (HR)QL/PRO research and practice to remain an important issue on our community's agenda.