Introduction

Clinical trials networks (CTNs) conduct investigator-initiated research and public good trials, largely funded by charities, universities and governments. Examples of CTNs in Australia include the Australian Kidney Trials Network (AKTN https://aktn.org.au/), the Australia and New Zealand College of Anaesthetists clinical trials network (ANZCA https://www.anzca.edu.au/research/anzca-clinical-trials-network) and the Cooperative Trials Group for Neuro-Oncology (COGNO https://www.cogno.org.au/default.aspx). In Europe, examples include the European Society of Anaesthesiology and Intensive Care clinical trial network (ESAIC https://www.esaic.org/research/clinical-trial-network/); in the USA, the HIV Prevention Trials Network (HPTN https://www.hptn.org/). However, with limited available resources, including trained research personnel, trial participants and funds, decisions need to be made about which trials are a priority. The desired result of successful prioritisation is funded trials that generate important information, help inform clinical and policy decision-making and improve health outcomes. In reality, research prioritisation is not easy, and many organisations wrestle with competing criteria and the multiple interests of stakeholder groups.

Three main approaches to prioritisation have emerged in health and medical research, namely interpretive, quantitative and blended methods. Interpretive approaches use the consensus views of informed participants and include the James Lind Alliance (JLA) process and Delphi surveys [1, 2]. These approaches can capture emerging and future patterns and engage consumers; however, they do not provide a methodology for identifying participants, often lack transparency about criteria and leave room for investigators and facilitators to bias opinions. Quantitative approaches use epidemiological, clinical or economic data. Examples include burden of disease, prospective payback and value of information (VOI) analyses [3, 4]. These approaches provide an objective assessment of value for money; however, they do not consider other criteria such as equity and broad stakeholder involvement, and they can be technically demanding. Blended approaches combine interpretive and quantitative assessments and include the Child Health and Nutrition Research Initiative (CHNRI) and multi-criteria decision analysis (MCDA) [5, 6].

The aim of this scoping review was to identify approaches to priority setting in health and medical research that are useful to CTNs, in Australia and internationally, and to research funders. Specifically, it sought to answer the following research questions: what models, approaches or methods are used by CTNs to prioritise clinical trials; how have these models, approaches or methods been developed and validated; and what is best practice for prioritising clinical trials? The findings will be used to develop best practice guidance for CTNs and research funders.

Methods

A scoping review of published literature and working documents, as well as websites from research funding organisations and CTNs, was undertaken to identify research prioritisation tools and criteria. Digital databases, including Ovid MEDLINE, Embase and the WHO library database (WHOLIS), were searched for publications about guidelines for prioritising research questions relevant to CTNs. Search terms included ([prioritization OR prioritisation OR setting priorities OR priority setting OR research priority*] AND [clinical trials OR clinical trial networks OR clinical trial group]). The search was limited to studies in English published from the year 2000 onwards. The search was updated on 30 January 2020.
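
To make the combined search logic easy to reuse, it can also be expressed programmatically. The following minimal Python sketch simply assembles the boolean query and limits described above; the commented-out run_search() call is a hypothetical placeholder, not a real Ovid MEDLINE, Embase or WHOLIS interface.

```python
# Sketch only: assembles the boolean search string described in the Methods.
# run_search() below is a hypothetical placeholder, not a real database API.

priority_terms = [
    "prioritization", "prioritisation", "setting priorities",
    "priority setting", "research priority*",
]
trial_terms = ["clinical trials", "clinical trial networks", "clinical trial group"]

# Combine the two concept blocks with OR within blocks and AND between them
query = f"({' OR '.join(priority_terms)}) AND ({' OR '.join(trial_terms)})"

limits = {"language": "English", "published_from": 2000, "last_searched": "2020-01-30"}

print(query)
# results = run_search(query, **limits)  # hypothetical call
```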

Titles and abstracts were screened, and eligible studies were selected by a single reviewer (VBN) against the following inclusion criteria: original studies, systematic reviews, guidelines, recommendations and tools for research prioritisation. Both qualitative and quantitative methods of prioritisation were accepted. Studies not relevant to CTNs, duplicate publications, guidelines written from the perspective of funders, opinion articles, letters to editors and abstract-only publications were excluded. A manual search of key references cited in the retrieved papers and reports was also undertaken to identify additional publications not retrieved by the electronic searches. A second reviewer (RLM) was consulted when in doubt regarding study selection, and any discrepancies were resolved by consensus with a third reviewer (MC).

A second search of the websites of key Australian and international CTNs, clinical disciplines and clinical specialties was then undertaken. Organisations were selected by the author team as likely to provide guidance on the prioritisation and selection of clinical trials, and websites were searched by two authors (VBN, AB). The searched websites are listed in the Appendix. Searching included exploration of the website menu structure for relevant documents and searching within the sites using the terms “clinical trials”, “priorities”, “prioritisation” or “prioritization” (depending on the nationality of the website).

The following types of documents were selected for inclusion: guidance on prioritisation; case studies or examples of prioritisation exercises that reported the methods used; and guidance on criteria for the assessment, selection or prioritisation of clinical trials (e.g. for funding purposes). Documents that did not constitute current guidance or were superseded by later versions of current guidance (e.g. prioritisation processes that informed past priorities or strategic plans, or discussion documents that appeared to be older than current guidance) were excluded. Documents with URLs that were no longer accessible in January 2020 were also excluded.

Data from studies and websites were extracted and tabulated into an Excel file according to a predefined codebook. Data extraction variables comprised author name, author group (e.g. CTN, funder), clinical discipline, country, year of publication, participants or stakeholders in the prioritisation process (e.g. health professionals, researchers, policy/decision makers, funders, patients, carers/consumers), intended audience (e.g. government/policymakers, clinicians, researchers, funders, the public), brief reason for prioritisation (e.g. knowledge gap, important to patients, return on investment, feasibility of methodology), type of research (e.g. trials), research prioritisation tools (e.g. Delphi, CHNRI, JLA, payback, MCDA, forced ranking, workshop/consensus meeting, other), prioritisation method (e.g. quantitative scoring, nominal group technique, weighted scores, monetary, other), research prioritisation criteria (e.g. relevance, appropriateness, significance, feasibility, cost-effectiveness [7]) and the URLs (for websites). Data from published articles and websites were summarised and tabulated separately. Critical appraisals of included studies, guidance documents or websites were not undertaken. Reporting of this scoping review was consistent with items in the PRISMA-ScR checklist [8].
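
As a concrete illustration of the predefined codebook, the extraction template can be represented as a simple record structure. The sketch below is a hedged example using only Python's standard library; the field names mirror the variables listed above, while the example values are invented and do not correspond to any included study.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """One row of the extraction spreadsheet, mirroring the codebook variables."""
    author: str
    author_group: str             # e.g. "CTN", "funder"
    clinical_discipline: str
    country: str
    year: int
    participants: List[str]       # e.g. ["health professionals", "patients"]
    intended_audience: List[str]  # e.g. ["funders", "researchers"]
    reason: str                   # e.g. "knowledge gap"
    research_type: str            # e.g. "trials"
    tool: str                     # e.g. "Delphi", "JLA", "MCDA"
    method: str                   # e.g. "quantitative scoring"
    criteria: List[str]           # e.g. ["relevance", "feasibility"]
    url: Optional[str] = None     # websites only

# Invented example record, for illustration only
example = ExtractionRecord(
    author="Example et al.",
    author_group="CTN",
    clinical_discipline="nephrology",
    country="Australia",
    year=2018,
    participants=["health professionals", "patients"],
    intended_audience=["funders"],
    reason="knowledge gap",
    research_type="trials",
    tool="Delphi",
    method="quantitative scoring",
    criteria=["relevance", "significance"],
)
```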

Results

Literature search

The results of the literature search and study selection process are depicted in Fig. 1. The 78 primary studies included in this review are listed in Table 1.

Fig. 1 PRISMA 2009 diagram

Table 1 Included studies

Most research prioritisation exercises were conducted either in Europe (n = 32; 41%) or North America (n = 25; 32%); six prioritisation studies (8%) originated in Australia and New Zealand. Two studies were conducted in South Asia (India; 3%), one in South-East Asia (1%), one in Africa (1%) and 11 studies were international (14%). Included studies were published between 2000 and 2019 (see Fig. 2a). Clinical specialties most frequently involved in research prioritisation were oncology (n = 11 studies; 14%), neurology (n = 11; 14%), paediatrics (n = 8; 10%), maternal and child health (n = 4; 5%), infectious diseases and HIV/AIDS (n = 5; 6%), nephrology (n = 4; 5%), respiratory medicine (n = 4; 5%), mental health (n = 3; 4%) and ophthalmology (n = 3; 4%).

Fig. 2 Prioritisation study numbers, participants, reasons and intended audience. a Number of studies published by year. b Participants in the prioritisation exercise. c Reason for prioritisation. d Intended audience of the prioritisation process

The stakeholders most frequently involved in the prioritisation process were health professionals (n = 65 studies; 83.3%), patients and carers/consumers (each n = 42 studies; 53.8%), researchers (n = 34 studies, 44%), policy or decision makers (n = 20 studies, 26%), and funders (n = 15 studies, 19%; see Fig. 2b). Stakeholders were not stated in four studies (5%).

The reasons for conducting the research prioritisation exercise included a knowledge gap in 51 studies (65%), ascertaining what was important to patients in 28 studies (36%), assessing the feasibility of a particular prioritisation methodology in 12 studies (15%) and estimating a return on investment in 10 studies (13%; see Fig. 2c; Table 2).

Table 2 Prioritisation methodologies

The intended audience for the outcomes of the prioritisation exercise was funders in 40 studies (51%), researchers in 34 studies (44%), government or policymakers in 13 studies (17%), clinicians in 12 studies (15%), CTNs in 11 studies (14%) and the general public in 4 studies (5.1%); the intended audience was not stated in 20 studies (26%; see Fig. 2d).

A summary of the prioritisation approaches is presented in Table 2. Fifty-seven studies used interpretive prioritisation approaches (73%), 14 studies used blended approaches (18%) and six studies used quantitative approaches (8%). Twenty-two studies used the JLA prioritisation tool or a modification thereof (28%), 19 studies used the Delphi methodology (24%) and 11 studies used a workshop or consensus meeting to establish their priorities (14%; see Fig. 3; Table 2). The “Payback” category included quantitative methods such as prospective payback of research (PPoR), expected value of information (EVI) and return on investment (ROI), while the “Other” category included methods such as online surveys/questionnaires, focus groups, World Café and mixed methods. Forty-five studies (58%) employed quantitative scoring as a prioritisation method, frequently in the form of the nominal group technique (n = 11 studies; 14%). Six studies used weighted scores (8%) and five studies used monetary value (6%). One study each used Agency for Healthcare Research and Quality (AHRQ) criteria, Dotmocracy, forced ranking, red-amber-green light and no prioritisation (1% each).

Fig. 3 Prioritisation tools used. CHNRI, Child Health and Nutrition Research Initiative; JLA, James Lind Alliance; MCDA, Multiple-criteria Decision Analysis

Over two-thirds of the identified studies (n = 53; 68%) did not describe any formal prioritisation criteria. In those that did, multiple criteria were mentioned. Relevance (i.e. why should we do it? including the burden of disease, equity and knowledge gaps) was cited in 14 of the 78 included studies (18%). Seven studies (9%) cited criteria related to appropriateness (i.e. should we do it? including scientific rigour and suitability to answer the research question); 17 studies (22%) considered criteria related to the significance of research outcomes (i.e. what will we get out of it? including impact, innovation and capacity building); and 12 studies (15%) cited feasibility among their prioritisation criteria (i.e. can we do it? including team quality and research environment). Cost-effectiveness was considered by 15 studies (19%). Five studies cited other prioritisation criteria (6%).

Organisational websites

Thirty-nine websites of research funding organisations and CTNs were reviewed (Appendix), and 18 were found to contain research prioritisation information: one from Australia (6%), two from New Zealand (11%), one from Ireland (6%), eight from Canada (44%) and six from the USA (33%; see Table 3). The clinical disciplines involved are listed in Table 3. The stakeholders most frequently involved in priority setting were researchers (n = 14 websites; 78%), followed by health professionals (n = 12 websites; 67%) and policy/decision makers (n = 11 websites; 61%; see Fig. 4a). Funders and patients were each mentioned in seven processes (39%), and carers/consumers were mentioned in six (33%). Participants or stakeholders were not stated on three occasions (17%; see Fig. 4a).

Table 3 Websites searched
Fig. 4 Research prioritisation participants, reasons, intended audience and funding criteria from websites of research funders and clinical trial networks. a Participants in the prioritisation exercise. b Reason for prioritisation. c Intended audience for prioritisation. d Funding criteria as reported in organisational websites

A “knowledge gap” was the reason for developing a prioritisation guideline for 10 websites (56%), followed by “wanting to know what was important to patients” (n = 8 websites; 44%). Six organisations stated that the reason for the priority-setting exercise was to support their vision and mission or to invest strategically and in a balanced way (33%), and five organisations wanted to find the best return on investment (28%; see Fig. 4b). Feasibility of the methodologies used was mentioned once (6%).

In all but one case, the intended audience included the general public (n = 17 websites; 94%), followed by researchers (n = 6 websites; 33%), government and policymakers (n = 5 websites; 28%) and clinicians or funders (n = 4 websites each; 22%; see Fig. 4c).

As for the prioritisation tools used, three organisations used a workshop/consensus meeting (17%), while one each used the JLA tool, payback (VOI) and MCDA (6% each). Five organisations used other tools (surveys, working groups; 28%), and seven did not describe the tool used (39%). In general, organisational websites provided few further details describing the prioritisation approaches undertaken.

The prioritisation criteria included relevance on 16 occasions (89%), appropriateness on 10 occasions (56%), significance on 14 occasions (78%), feasibility on 9 occasions (50%) and cost-effectiveness on 4 occasions (22%). Two websites did not state any prioritisation criteria (11%; see Fig. 4d).

Discussion

This extensive scoping review summarises findings from international agencies about current methods and approaches to the prioritisation of clinical trials undertaken by CTNs and research funders. The main reasons for prioritisation were to address a knowledge gap in clinical decision-making and to define patient-important topics. More than two-thirds of studies used an interpretive approach (e.g. James Lind Alliance); a small proportion used a quantitative approach (e.g. prospective payback); and one fifth used a blended approach combining qualitative and quantitative methods (e.g. CHNRI). The most common criteria for prioritisation were significance, relevance and cost-effectiveness.

The rationale for prioritising trials on the basis of generating new knowledge to improve clinical decision-making is not surprising, as efficacy and effectiveness trials are designed to answer important questions in patient management [27, 50, 88]. What was less clear, however, was how these trials, all with “good” questions, were then ranked in order of priority. Consensus-based methods that use an interpretive approach are appealing because of their broad stakeholder engagement; however, the trade-offs between criteria, such as significance versus feasibility, and the subsequent processes for the overall ranking of trials are not transparent [89, 90]. This is where blended approaches, which include a quantitative component facilitating objective scoring of trial proposals, can assist.
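
As a hedged illustration of that quantitative component, the sketch below shows a simple weighted-sum (MCDA-style) scoring of two hypothetical trial proposals; the criteria follow those discussed in this review, but the weights and scores are invented and would in practice be elicited from stakeholders and reviewers.

```python
# Minimal weighted-sum scoring sketch; weights and scores are invented examples.

weights = {
    "relevance": 0.3,
    "significance": 0.4,
    "feasibility": 0.2,
    "cost_effectiveness": 0.1,
}

proposals = {
    "Trial A": {"relevance": 8, "significance": 9, "feasibility": 5, "cost_effectiveness": 6},
    "Trial B": {"relevance": 7, "significance": 6, "feasibility": 9, "cost_effectiveness": 8},
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (0-10) into a single weighted total."""
    return sum(weights[criterion] * value for criterion, value in scores.items())

# Rank proposals from highest to lowest weighted score
for name, scores in sorted(proposals.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

Making the weights explicit in this way exposes the trade-offs (e.g. significance versus feasibility) that consensus-only processes tend to leave implicit, and the resulting scores provide an auditable ranking.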

The infrequent use of purely quantitative approaches to the prioritisation of trials, such as burden of disease or value for money, is likely due to the lack of standardised methods for viewing competing claims side by side, or for weighting such criteria. It may also be related to limited technical knowledge or expertise within the trials community to generate this information. For example, many clinical trial funders ask that the relevance of the problem be stated, which is typically reported as burden of disease, incidence or prevalence. When different metrics are used across trial proposals, the proposals become difficult to compare, which may lead grant reviewers to consider only whether the criterion is satisfied (i.e. is there a substantial burden? [yes/no]) rather than comparing the burdens themselves. Sometimes the burden is presented as disability-adjusted life years (DALYs), and sometimes the disease burden is monetised to provide an overview of health system or societal costs. While this provides a common metric on which trial applications can be compared, these estimates are limited to quantifying the current situation; they do not provide insight into the value of the proposed trial in reducing that burden (i.e. the significance), otherwise known as the impact or net benefit.

Value of information (VOI) analysis has emerged as a new framework for quantifying the net benefit of proposed randomised trials. VOI uses a cost-effectiveness modelling approach and takes into account the cost of running the trial and the value of the new trial information in reducing uncertainty in the current clinical decision. The benefit of the health outcomes of the better decision (e.g. using drug A over drug B) is then multiplied across the population at risk, using assumptions about post-trial implementation. A VOI analysis can be undertaken for most randomised trials, enabling studies in a given portfolio to be ranked from highest to lowest value. This requires capacity building in the health economics and statistics workforce. Efficient methods to calculate VOI are currently being developed [91, 92].
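
In generic terms (a standard formulation from the VOI literature rather than a method reported by any included study), the per-patient expected value of perfect information (EVPI) is the expected gain from resolving uncertainty about the model parameters before choosing a treatment, and the population value scales this across the patients expected to benefit after the trial; the symbols NB (net benefit), d (decision option), θ (uncertain parameters), I_t (incident population in year t), r (discount rate) and T (time horizon) are generic notation:

```latex
% Per-patient EVPI: expected net benefit with perfect information minus the
% expected net benefit of the best decision under current uncertainty.
\mathrm{EVPI} = \mathbb{E}_{\theta}\!\left[\max_{d} \mathrm{NB}(d,\theta)\right]
              - \max_{d}\, \mathbb{E}_{\theta}\!\left[\mathrm{NB}(d,\theta)\right]

% Population EVPI: per-patient value multiplied across the discounted incident
% population expected to act on the trial result over the time horizon T.
\mathrm{popEVPI} = \mathrm{EVPI} \times \sum_{t=1}^{T} \frac{I_t}{(1+r)^{t}}
```

On this logic, a proposed trial is a stronger candidate for prioritisation when its population value of information comfortably exceeds the expected cost of running it.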

An encouraging sign from this review was the emphasis placed on patient-important topics through consumer-generated questions and topic ranking, in both the published literature and organisational websites. This ensures that questions are not only important and of interest to clinicians or trialists, but also address issues, problems or concerns affecting those with the disease and/or undergoing specific treatments. This is especially important for government and non-profit charity funders, where the funding for research originates from the general public (i.e. taxpayers) or donors.

The strengths of this review include the dual searching of published and unpublished literature, including the organisational websites of international clinical trial networks and trial funders. This approach was likely to identify prioritisation processes that were operational yet had not been formally described in the peer-reviewed literature. It is a strength that we were able to locate and extract research prioritisation approaches and methods, as well as the prioritisation criteria used, as this provides sufficient detail for clinical trials networks and funders to replicate them. Our scoping review was limited to studies and websites published in English and therefore may omit relevant studies published in other languages. It was not a systematic review and therefore may not have identified all studies of research prioritisation in the published literature. In addition, we could only tabulate methods where they were clearly described.

Further research consulting consumers, researchers and policy-makers is now needed to develop specific criteria weights for the clinical trials networks and coordinating centre members of the Australian Clinical Trials Alliance (ACTA), for international CTNs and for funders of clinical trials. Development of tools to aid clinicians and researchers in using quantitative approaches is also needed. Following implementation of a formalised prioritisation process, clinical trials networks and funders will then need to evaluate the process and assess whether the “best” trials are subsequently funded and deliver on their expected benefits [93].

Conclusion

Research prioritisation approaches for groups conducting and funding clinical trials are predominantly interpretative. Given the strengths of a blended approach to prioritisation, there is an opportunity to improve the transparency of process through the inclusion of quantitative techniques.