Background

Cancer research attracts substantial public funding globally. For example, the National Cancer Institute (NCI) in the United States of America (USA) had a 2020 budget of over 6 billion United States (US) dollars. In addition to public funds, there is also huge monetary investment from private pharmaceutical companies, as well as altruistic investment of time and effort from the patients and families who participate in cancer research. In the United Kingdom (UK), over 25,000 patients were recruited to cancer trials funded by one charity (Cancer Research UK (CRUK)) alone in 2018 [1]. Research within the field of oncology remains an ongoing priority because cancer is highly prevalent, with up to one in two people now receiving a diagnosis of cancer in their lifetime [2, 3], and because, despite current treatments, mortality and morbidity from cancer remain high [2].

In the current era of increasing austerity, there is a desire to ensure that the money and effort invested in any type of research deliver tangible downstream benefits for society with minimal waste [4,5,6]. These wider, real-life benefits from research are often referred to as research impact. Given the significant resources required to conduct cancer research in particular, it is reasonable to question whether this investment is leading to the longer-term benefits expected, and to query the opportunity cost of not spending the same money directly within other public sectors such as health and social care, the environment or education.

Interest in evaluating research impact has been rising, partly driven by the actions of national bodies and governments. For example, in 2014, the UK government allocated its £2 billion annual research funding to higher education institutions partly on the basis of an assessment of the impact of research performed by each institution, in an exercise known as the Research Excellence Framework (REF). The proportion of funding dependent on impact assessment will increase from 20% in 2014 to 25% in 2021 [7].

Despite the clear rationale and contemporary interest in research impact evaluation, assessing the impact of research comes with challenges. First, there is no single definition of what research impact encompasses, and the evaluation approach may differ depending on the definition used. Second, despite the recent surge of interest, knowledge of how best to perform assessments, and the infrastructure for, and experience in, doing so, are lagging [6, 8, 9]. For the purposes of this review, the definition of research impact given by the UK Research Councils is used (see Additional file 1 for the full definition). This definition was chosen because it takes a broad perspective, which includes academic, economic and societal views of research impact [10].

There is a lack of clarity on how to perform research impact evaluation, and this extends to cancer research. Although there is substantial interest from cancer funders and researchers [11], this interest is not accompanied by instruction or reflection on which approaches would be suited to assessing the impact of cancer research specifically. In a survey of Australian cancer researchers, respondents indicated that they felt a responsibility to deliver impactful research, but that evaluating and communicating this impact to stakeholders was difficult. Respondents also suggested that the types of impact expected from research, and the approaches used to evaluate them, should be discipline specific [12]. Being cognisant of the discipline-specific nature of impact assessment, and understanding the uniqueness of cancer research in approaching such evaluations, underpins the rationale for this study.

The aim of this study was to explore approaches to research impact assessment, to identify those approaches that have been used previously for cancer research, and to use this information to make recommendations for future evaluations. For the purposes of this study, cancer research included both basic science and applied research, research into any malignant disease in paediatric or adult populations, and studies spanning the nursing, medical and public health elements of cancer research.

The study objectives were to:

  i. Identify existing literature reviews that report approaches to research impact assessment and summarise these approaches.

  ii. Use these literature reviews to identify examples of cancer research impact evaluations, describe the approaches to evaluation used within these studies, and compare them to those described in the broader literature.

This approach was taken because of the anticipated challenge of conducting a primary review of empirical examples of cancer research impact evaluation, and to allow a critique of empirical studies in the context of lessons learnt from the wider literature. A primary review would have been difficult because examples of cancer research impact evaluation, for example the assessment of research impact on clinical guidelines [13] or clinical practice [14,15,16], are often not categorised in publication databases under the umbrella term of research impact. Reasons for this are the lack of a medical subject heading (MeSH) term relating to research impact assessment and the differing definitions of research impact. In addition, many authors do not recognise their evaluations as sitting within the discipline of research impact assessment, which is a novel and emerging field of study.

Methods

General approach

A systematic search of the literature was performed to identify existing reviews of approaches to assess the impact of research. No restrictions were placed on the discipline, field, or scope (national/global) of research for this part of the study. In the second part of this study, the reference lists of the literature reviews identified were searched to find empirical examples of the evaluation of the impact of cancer research specifically.

Data sources and searches

For the first part of the study, 11 publication databases and the grey literature were searched from January 1998 to May 2019. The electronic databases were Medline, Embase, Health Management and Policy Database, Education Resources Information Centre, Cochrane, Cumulative Index of Nursing and Allied Health Literature, Applied Social Sciences Index and Abstract, Social Services Abstracts, Sociological Abstracts, Health Business Elite and Emerald. The search strategy specified that article titles must contain the word “impact”, as well as a second term indicating that the article described the evaluation of impact, such as “model”, “measurement” or “method”. Additional file 1 provides a full list of search terms. The grey literature was searched using a proforma: keywords were inserted into the search function of websites listed on the proforma and the first 50 results were screened. Title searches were performed by either a specialist librarian or the primary researcher (Dr. C Hanna). All further screening of records was performed by the primary researcher.

Following an initial title screen, 800 abstracts were reviewed and 140 selected for full review. Each of these articles was assessed against specific inclusion criteria (Additional file 1) to determine final inclusion in the study. There was no assessment of the quality of the included reviews other than to describe the search strategy used. If two articles drew primarily on the same review but contributed a different critique of the literature or methods to evaluate impact, both were kept. If a review article was part of a grey literature report, for example a thesis, but was also later published in a journal, only the journal article was kept. Of the 140 articles read in full, 27 met the inclusion criteria, and a further 13 relevant articles were found through reference list searching from the included reviews [17].

For the second part of the study, the reference lists from the literature reviews (n = 4479 titles) were manually screened [17] by the primary researcher to identify empirical examples of assessment of the impact of cancer research. Summary tables and diagrams from the reviews were also searched using the words “cancer” and “oncology” to identify relevant articles that may have been missed by reference list searching. After removal of duplicates, 57 full articles were read and assessed against inclusion criteria (Additional file 1). Figure 1 shows the search strategy for both parts of the study according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [18].

Fig. 1 Search strategies for this study

Data extraction and analysis

A data extraction form produced in Microsoft® Word 2016 was used to collect details of each literature review. These included year of publication, location of the primary author, research discipline, aims of the review as described by the authors and the search strategy (if any) used. Information on approaches to impact assessment was extracted under three specific themes, identified from a prior scoping review as important factors when planning and conducting research impact evaluation. These themes were: (i) categorisation of impact into different types depending on who or what is affected by the research (individuals, institutions, parts of society or the environment) and how they are affected (for example, health, monetary gain or sustainability); (ii) methods of data collection and analysis for the purposes of evaluation; and (iii) frameworks to organise and communicate research impact. There was space to document any other key findings the researcher deemed important. After data extraction, lists of commonly described categories, methods of data collection and analysis, and frameworks were compiled. These lists were tabulated or presented graphically, and narrative analysis was used to describe and discuss the approaches listed. A sketch of the information captured for each review is given below.
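For illustration only, the fields of this form could be represented as a simple record; the field names below are assumptions based on the description above, not the actual Word form:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ReviewRecord:
    """One literature review's entry in the data extraction form (illustrative)."""
    year_published: int
    primary_author_location: str
    research_discipline: str
    stated_aims: str
    search_strategy: str | None                    # None if no strategy reported
    impact_categories: list[str] = field(default_factory=list)            # theme (i)
    collection_analysis_methods: list[str] = field(default_factory=list)  # theme (ii)
    frameworks: list[str] = field(default_factory=list)                   # theme (iii)
    other_key_findings: str = ""
```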

For the second part of the study, a separate data extraction form produced in Microsoft® Excel 2016 was used. Basic information on each study was collected, such as year of publication, location of the primary authors, research discipline, aims of the evaluation as described by the authors and the type of research under assessment. Data were also extracted from these empirical examples using the same three themes as outlined above, and the approaches used in these studies were compared to those identified from the literature reviews. Finally, a set of recommendations for future evaluations of cancer research impact was developed by identifying the strengths of the empirical examples and using the lists generated from the first part of the study to identify improvements that could be made.

Results

Part one: Identification and analysis of literature reviews describing approaches to research impact assessment

Characteristics of included literature reviews

Forty literature reviews met the pre-specified inclusion criteria and the characteristics of each review are outlined in Table 1. Half (20/40; 50%) were written by primary authors based in the UK, followed by the USA (5/40; 13%) and Australia (5/40; 13%), with the remainder from Germany (3/40; 8%), Italy (3/40; 8%), the Netherlands (1/40; 3%), Canada (1/40; 3%), France (1/40; 3%) and Iran (1/40; 3%). All reviews were published since 2003, despite the search dating from 1998. Raftery et al. 2016 [19] was an update of Hanney et al. 2007 [20], and both were reviews of studies assessing research impact relevant to a programme of health technology assessment research. The narrative review article by Greenhalgh et al. [21] was based on the same search strategy used by Raftery et al. [19].

Table 1 Summary of literature reviews

Approximately half of the reviews (19/40; 48%) described approaches to evaluate research impact without focusing on a specific discipline, and a further 16 (40%) focused on evaluating the impact of health or biomedical research. Two reviews looked at approaches to impact evaluation for environmental research and one focused on social sciences and humanities research. Finally, two reviews provided a critique of impact evaluation methods used by different countries at a national level [22, 23]. None of these reviews focused specifically on cancer research.

Twenty-five reviews (25/40; 63%) specified search criteria and 11 of these included a PRISMA diagram. The articles that did not outline a search strategy were often expert reviews of approaches to impact assessment, and their authors stated that the articles included were chosen based on prior knowledge of the topic. Most reviews were found by searching traditional publication databases; however, seven (7/40; 18%) were found in the grey literature. These included four reports written by an independent, not-for-profit research institution (Research and Development (RAND) Europe) [23,24,25,26], one literature review that was part of a Doctor of Philosophy (PhD) thesis [27], a literature review informing a quantitative study [28] and a review that provided background information for a report to the UK government on the best use of impact metrics [29].

Key findings from the reviews: approaches to research impact evaluation

  i. Categorisation of impact for the purpose of impact assessment

    Nine reviews attempted to categorise the type of research impact being assessed according to who or what is affected by research, and how they are affected. In Fig. 2, colour coding was used to identify overlap between impact types identified in these reviews to produce a summary list of seven main impact categories.

    The first two categories of impact refer to the immediate knowledge produced from research and the contribution research makes to driving innovation and building capacity for future activities within research institutions. The former is often referred to as the academic impact of research. The academic impact of cancer research may include the knowledge gained from conducting experiments or performing clinical trials that is subsequently disseminated via journal publications. The latter may refer to securing future funding for cancer research, providing knowledge that allows development of later phase clinical trials or training cancer researchers of the future.

    The third category identified was the impact of research on policy. Three of the review articles included in this overview focused specifically on policy impact evaluation [30,31,32]. In their review, Hanney et al. [30] suggested that policy impact (of health research) falls into one of three sub-categories: impact on national health policies from the government, impact on clinical guidelines from professional bodies, and impact on local health service policies. Cruz Rivera and colleagues [33] specifically distinguished impact on policy making from impact on clinical guidelines, which they described under health impact. This shows that the lines between categories often blur.

    Impact on health was the next category identified, and several of the reviews differentiated impact on the health sector from impact on health gains. For cancer research, both types of health impact will be important, given that cancer places a major burden on healthcare systems and the patients they treat. Economic impact of research was the fifth category. For cancer research, there is likely to be close overlap between healthcare system and economic impacts because of the high cost of cancer care for healthcare services globally.

    In their 2004 article, Buxton et al. [34] searched the literature for examples of the evaluation of economic return on investment in health research and found four main approaches, which were referenced in several later reviews [19, 25, 35, 36]. These were: (i) measuring direct cost savings to the healthcare system; (ii) estimating benefits to the economy from a healthy workforce; (iii) evaluating benefits to the economy from commercial development; and (iv) measuring the intrinsic value to society of the health gain from research. In a later review [25], they added the additional approach of estimating the spillover contribution of research to the Gross Domestic Product (GDP) of a nation. A stylised version of the first and fourth approaches is sketched below.
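    As a hedged illustration (the formula is ours, not Buxton et al.’s), approaches (i) and (iv) are often operationalised together as a discounted net monetary benefit, combining cost savings and monetised health gains and subtracting the research spend:

```latex
\mathrm{NMB} \;=\; \sum_{t=0}^{T} \frac{\lambda\,\Delta Q_t \;+\; \Delta C_t \;-\; R_t}{(1+r)^{t}}
```

    Here ΔQ_t is the health gain in year t (for example in quality adjusted life years), λ an assumed societal value per unit of health gain, ΔC_t the direct cost saving to the healthcare system, R_t the research spend and r a discount rate; a rate of return on research investment can be expressed as the value of r at which this sum equals zero.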

    The final category was social impact. This term was commonly used in a specific sense to refer to research improving human rights, well-being, employment, education and social inclusion [33, 37]. Two of the reviews which included this category focused on the impact of non-health-related research (social sciences and agriculture), indicating that this type of impact may be less relevant or less obvious for health-related disciplines such as oncology. Social impact is distinct from the term societal impact, which was used in a wider sense to describe impact that is external to traditional academic benefits [38, 39]. Other categories of impact identified that did not show significant overlap between the reviews included cultural and technological impact. In two of the literature reviews [33, 40], the authors provided a list of indicators of impact within each of their categories. In the review by Thonon et al. [40], only one (1%) of these indicators was specific to evaluating the impact of cancer research.

  ii. Methods for data collection and analysis

    In total, 36 (90%) reviews discussed methods to collect or analyse the data required to conduct an impact evaluation. The common methods described, and the strengths and weaknesses of each approach, are shown in Additional file 2: Table S1. Many authors advocated using a mixture of methods, in particular the triangulation of surveys, interviews (of researchers or research users) and documentary analysis [20, 30,31,32]. A large number of reviews cautioned against the use of quantitative metrics, such as bibliometrics, alone [29, 30, 41,42,43,44,45,46,47,48]. Concerns included that these metrics were often not designed to be comparable between research programmes [49], that their use may incentivise researchers to focus on quantity rather than quality [42], and that they could be gamed or used in the wrong context to make decisions about researcher funding, employment and promotion [41, 43, 45].

    Several reviews explained that the methods of data collection and analysis chosen for an impact evaluation depended on both the unit of research under analysis and the rationale for the impact analysis [23, 24, 26, 31, 36, 50, 51]. Specific to cancer research, the unit of analysis may be a single clinical trial or a programme of trials, research performed at a cancer centre, or research funded by a specific institution or charity. The rationale for research impact assessment was categorised in multiple reviews under four headings (“the 4 As”): advocacy, accountability, analysis and allocation [19, 20, 23, 24, 30,31,32,33, 36, 46, 52, 53]. Finally, Boaz and colleagues found a lack of information on the cost-effectiveness of research impact evaluation methods, but suggested that pragmatic and often cheaper approaches to evaluation, such as surveys, were the least likely to give in-depth insights into the processes through which research impact occurs [31].

  iii. Using a framework within a research impact evaluation

    Applied to research impact evaluation, a framework provides a way of organising collected data, which encourages a more objective and structured evaluation than would be possible with an ad hoc analysis. In total, 27 (68%) reviews discussed the use of a framework in this context. Additional file 2: Table S2 lists the frameworks mentioned in three or more of the included reviews. The most frequently described framework was the Payback Framework, developed by Buxton and Hanney in 1996 [54], and many of the other frameworks identified were developed by adapting key elements of the Payback Framework. None of the frameworks identified was specifically developed to assess the impact of cancer research; however, several were specific to health research. The unit of cancer research being evaluated will dictate the most suitable framework to use in any evaluation, and the unit of research most suited to each framework is outlined in Additional file 2: Table S2.

Fig. 2 Categories of impact identified in the included literature reviews

Additional findings from the included reviews

The challenges of research impact evaluation were commonly discussed in these reviews. Several mentioned that the time lag [24, 25, 33, 35, 38, 46, 50, 53, 55] between research completion and impact occurring should influence when an impact evaluation is carried out: too early and impact will not have occurred, too late and it is difficult to link impact to the research in question. This overlapped with the challenge of attributing impact to a particular piece of research [24, 26, 33,34,35, 37,38,39, 46, 50, 56]. Many authors argued that the ability to show attribution was inversely related to the time since the research was carried out [24, 25, 31, 46, 53].

Part two: Empirical examples of cancer research impact evaluation

Study characteristics

In total, 14 empirical impact evaluations relevant to cancer research were identified from the reference lists of the literature reviews included in the first part of this study. These empirical studies were published between 1994 and 2015 by primary authors located in the UK (7/14; 50%), the USA (2/14; 14%), Italy (2/14; 14%), Canada (2/14; 14%) and Brazil (1/14; 7%). Table 2 lists these studies with the rationale for each assessment (defined using the “4 As”), the unit of analysis of cancer research evaluated and the main findings from each evaluation. The categories of impact evaluated, methods of data collection and analysis, and impact frameworks utilised are also summarised in Table 2 and discussed in more detail below.

Table 2 Examples of primary studies assessing the impact of cancer research

Approaches to cancer research impact evaluation used in empirical studies

  i. Categories of impact evaluated in cancer research impact assessments

    Several of the empirical studies focused on academic impact. For example, Ugolini and colleagues evaluated scholarly outputs from one cancer research centre in Italy [57] and, in a second study, looked at the academic impact of cancer research from European countries [58]. Saed et al. [59] used submissions to an international cancer conference (American Society of Clinical Oncology (ASCO)) to evaluate the dissemination of cancer research to the academic community, and Lewison and colleagues [60,61,62,63] assessed academic and policy impact, as well as the dissemination of cancer research findings to the lay media.

    The category of health impact was also commonly evaluated, with a particular focus on the assessment of survival gains. Life years gained or deaths averted [64], life expectancy gains [65] and years of extra survival [66] were all used as indicators of the health impact attributable to cancer research. Glover and colleagues [67] used a measure of health utility, the quality adjusted life year (QALY), which combines both survival and quality of life assessments. Lakdawalla and colleagues [66] considered the impact of research on both cancer screening and treatments, and concluded that 80% of survival gains were attributable to treatment improvements. In contrast, Glover and colleagues [67] acknowledged the importance of improved cancer therapies due to research but also highlighted the major impacts from research around smoking cessation, as well as cervical and bowel cancer screening. Several of the studies that assessed health impact also used the information on health gains to assess the economic impact of the same research [64,65,66,67].

    Finally, two studies [68, 69] performed multi-dimensional research impact assessments, which incorporated nearly all of the seven categories of impact identified from the previous literature (Fig. 2). In their assessment of the impact of research funded by one breast cancer charity in Australia, Donovan and colleagues [69] evaluated academic, capacity building, policy, health, and wider economic impacts. Montague and Valentim [68] assessed the impact of one randomised clinical trial (MA17) which investigated the use of a hormonal medication as an adjuvant treatment for patients with breast cancer. In their study, they assessed the dissemination of research findings, academic impact, capacity building for future trials and international collaborations, policy citation, and the health impact of decreased breast cancer recurrence attributable to the clinical trial.

  ii. Methods of data collection and analysis for cancer research impact evaluation

    The methods of data collection and analysis used in these studies aligned with the categories of impact assessed. For example, studies assessing academic impact used traditional bibliometric searching of publication databases and associated metrics. Ugolini et al. [57] applied a normalised journal impact factor to publications from a cancer research centre as an indicator of the research quality and productivity of that centre. This analysis was adjusted for the number of employees within each department, and the scores were used to apportion 20% of future research funding (a stylised version of this kind of calculation is sketched below). The same bibliometric method of analysis was used in a second study by the same authors to compare and contrast national-level cancer research efforts across Europe [58]. They assessed the quantity and the mean impact factor of the journals for publications from each country and compared this to the location-specific population and GDP. A similar approach was used for the manual assessment of 10% of cancer research abstracts submitted to an international conference (ASCO) between 2001–2003 and 2006–2008 [59]. These authors examined whether the location of authors affected the likelihood of the abstract being presented orally, as a face-to-face poster or online only.
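    A minimal sketch of this kind of productivity scoring and funding apportionment, with entirely hypothetical field baselines, departments and funding pot (the study’s actual normalisation and data are in [57]):

```python
# Hypothetical field-average impact factors used for normalisation
FIELD_MEAN_JIF = {"oncology": 4.2, "pathology": 2.9}

def department_score(publications, n_employees):
    """Sum of field-normalised journal impact factors per employee.
    publications: iterable of (journal_impact_factor, field) tuples."""
    normalised = sum(jif / FIELD_MEAN_JIF[field] for jif, field in publications)
    return normalised / n_employees

scores = {
    "dept_A": department_score([(8.4, "oncology"), (4.2, "oncology")], n_employees=10),
    "dept_B": department_score([(2.9, "pathology")], n_employees=5),
}

# Apportion the performance-based share of funding (20% in the study)
pot, share = 1_000_000, 0.20
total = sum(scores.values())
allocation = {d: share * pot * s / total for d, s in scores.items()}
print(allocation)  # {'dept_A': 120000.0, 'dept_B': 80000.0}
```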

    Lewison and colleagues, who performed four of the studies identified [60,61,62,63], used a different bibliometric method, publication citation count, to analyse the dissemination, academic impact and policy impact of cancer research. The authors also assigned a research level to publications, to differentiate whether the research was a basic science or clinical cancer study, by coding the words in the title of each article or the journal in which the paper was published. The cancer research types assessed by these authors included cancer research at a national level for two different countries (the UK and Russia) and research performed by cancer centres in the UK.

    To assess policy impact, these authors extracted journal publications cited in cancer clinical guidelines, and for media impact they looked at publications cited in articles stored within an online repository of a well-known UK media organisation (the British Broadcasting Corporation). Interestingly, most of the cancer research publications contained in guidelines and cited in the UK media were clinical studies, whereas a much higher proportion of those published by UK cancer centres were basic science studies. These authors also identified that funders of cancer research played a critical role as commentators, explaining the importance of the research in the lay media. The ten most frequent commentators (each commenting on > 19 of 725 media articles) were all representatives of the UK charity CRUK.

    A combination of clinical trial findings and documentary analysis of large data repositories was used to estimate health system or health impact. In their study, Montague and Valentim [68] cited the effect size for a decrease in cancer recurrence from a clinical trial and implied the same health gains would be expected in real life for patients with breast cancer living in Canada. In their study of the impact of charitable and publicly funded cancer research in the UK, Glover et al. [67] used CRUK and Office for National Statistics (ONS) cancer incidence data, as well as national hospital databases listing episodes of radiotherapy delivered, the number of cancer surgeries performed and systemic anti-cancer treatments prescribed, to evaluate changes in practice attributable to cancer research. In their study from a USA perspective, Lakdawalla et al. [66] used the population-based Surveillance, Epidemiology and End Results Program (SEER) database to evaluate the number of patients likely to be affected by the implementation of cancer research findings. Survival calculations from clinical trials were also applied to population incidence estimates to predict the scale of survival gain attributable to cancer research [64, 66].
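    As a hedged illustration of that scaling step (the cited studies each used more detailed models of their own), population-level life years gained can be approximated as:

```latex
\mathrm{LYG} \;\approx\; I \times u \times \Delta s
```

    where I is the annual incidence of the cancer in question (for example from SEER), u the proportion of incident patients receiving the research-informed practice, and Δs the average per-patient survival gain reported in the trial.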

    The methods of data collection and analysis used for economic evaluations aligned with the categories of assessment identified by Buxton in their 2004 literature review [34]. For example, three studies [65,66,67] estimated direct healthcare cost savings from the implementation of cancer research. This was particularly relevant in one ex-ante assessment of the potential impact of a clinical trial testing the equivalence of less intensive follow up for patients following cancer surgery [65]. These authors assessed the number of years of implementing the hypothetical clinical trial findings (“years to payback”) it would take for the resulting savings to outweigh the money spent developing and running the trial. The return on investment calculation was performed by estimating the direct cost savings to the healthcare system of using less intensive follow up without any detriment to survival. A minimal sketch of such a calculation follows.
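    A minimal sketch of a years-to-payback calculation, assuming hypothetical figures and a 3% discount rate (neither is taken from Coyle et al. [65]):

```python
def years_to_payback(research_cost: float,
                     annual_saving: float,
                     discount_rate: float = 0.03,
                     max_years: int = 50):
    """First year in which cumulative discounted savings from implementing
    a trial's findings exceed the cost of developing and running the trial."""
    cumulative = 0.0
    for year in range(1, max_years + 1):
        cumulative += annual_saving / (1 + discount_rate) ** year
        if cumulative >= research_cost:
            return year
    return None  # savings never repay the research cost within max_years

# Hypothetical: a $2M trial whose findings save $250k per year in follow-up costs
print(years_to_payback(2_000_000, 250_000))  # -> 10
```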

    The second of Buxton’s categories was an estimation of productivity loss using the human capital approach. In this method, the economic value of survival gains from cancer research is calculated by measuring the monetary contribution of patients of working age who survive longer. This approach was used in two studies [64, 66] and, in both, estimates of average income (USA) were utilised. Buxton’s fourth category, an estimation of an individual’s willingness to pay for a statistical life, was used in two assessments [65, 66], and Glover and colleagues [67] adapted this method, placing a monetary value on the opportunity cost of QALYs forgone in the UK health service within a fixed budget [70]. One of the studies that used this method identified that there may be differences in how patients diagnosed with distinct cancer types value the impact of research on cancer-specific survival [66]. In particular, individuals with pancreatic cancer seemed to be willing to spend up to 80% of their annual income for the extra survival attributable to the implementation of cancer research findings, whereas this fell to below 50% for breast and colorectal cancer. Only one of the studies considered Buxton’s third category of benefits to the economy from commercial development [66]. These authors calculated the gain to commercial companies from sales of on-patent pharmaceuticals and concluded that economic gains to commercial producers were small relative to the gains from research experienced by cancer patients.
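    As a hedged sketch of the human capital approach (the notation is ours, not that of the studies cited), the monetised value of survival gains can be written as:

```latex
V_{\mathrm{HC}} \;=\; \sum_{a \,\in\, \text{working ages}} n_a \, \Delta L_a \, w_a
```

    where n_a is the number of patients of working age a who benefit, ΔL_a the life years gained per patient at that age, and w_a the average annual income, taken in the two studies above from USA income estimates.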

    The cost estimates used in these impact evaluations came from documentary analysis, clinical trial publications, real-life data repositories, surveys and population average income estimates. For example, in one study, cost information from NCI trials was supplemented by a telephone survey of pharmacies, historical Medicare documents and estimates of average income from the 1986 US Bureau of the Census Consumer Income [64]. In their study, Coyle et al. [65] costed annual follow up and treatment for cancer recurrence based on the Ontario Health Insurance Plan, a cost model relevant to an Ottawa hospital and cost estimates from Statistics Canada [71]. The data used to calculate the cost of performing cancer research usually came from funding bodies and research institutions. For example, charity reports and Canadian research institution documents were used to estimate that it cost the National Cancer Institute of Canada $1500 per patient accrued to a clinical trial [65]. Government research investment outgoings were used to calculate that $300 billion was spent on cancer research in the USA from 1971 to 2000, 25% of which was contributed by the NCI [66], and that the NCI spent over $10 million USD in the 1980s to generate the knowledge that adjuvant chemotherapy was beneficial to colorectal cancer patients [64]. Charity and research institution spending reports, along with an estimation of the proportion of funds spent specifically on cancer research, were used to demonstrate that £15 billion of UK charity and public money was spent on cancer research between 1970 and 2009 [67].

    Lastly, the two studies [68, 69] which adopted a multi-category approach to impact assessment used the highest number and broadest range of methods identified from the previous literature (Additional file 2: Table S1). The methods utilised included surveys and semi-structured telephone interviews with clinicians, documentary analysis of funding and project reports, case studies, content analysis of media releases, peer review, bibliometrics, budget analysis, large data repository review, and observations of meetings.

  iii. Frameworks for cancer research impact evaluation

    Only two of the empirical studies identified used an impact framework. Unsurprisingly, these were also the studies that performed a multi-category assessment and used the broadest range of methods within their analyses. Donovan et al. [69] used the Payback Framework (Additional file 2: Table S2) to guide the categories of impact assessed and the questions in their researcher surveys and interviews. They also reported the results of their evaluation using the same categories: from knowledge production, through capacity building, to health and wider economic impacts. Montague and Valentim [68] used the Canadian Academy of Health Sciences (CAHS) Framework (Additional file 2: Table S2). Rather than using the framework in its original form, they arranged impact indicators from the CAHS Framework within a hierarchy to illustrate impacts occurring over time. The authors distinguished short-term, intermediate and longer-term changes resulting from one clinical cancer trial, aligning with the concept of categorising impacts based on when they occur, which was described in one of the literature reviews identified in the first part of this study [33].

    Lastly, the challenges of time lags and attribution of impact were identified and addressed by several of these empirical studies. Lewison and colleagues tracked the citation of over 3000 cancer publications in UK cancer clinical guidelines over time [61], and in their analysis Donovan et al. [69] explicitly acknowledged that the short time frame between the funding of the research projects under evaluation and their analysis was likely to under-estimate the impact achieved. Glover et al. [67] used bibliometric analysis of citations in clinical cancer guidelines to estimate the average time from publication to clinical practice change (8 years). They added 7 years to account for the time between funding allocation and publication of research results, giving an overall time lag from funding cancer research to impact of 15 years. The challenge of attribution was addressed in one study by using a time-line to describe impacts occurring at different time-points but linking back to the original research in question [68]. The difficulty of estimating time lags and attributing impact to cancer research were both specifically addressed in a companion study [72] to the one conducted by Glover and colleagues. In this study, instead of quantifying the return on cancer research investment, qualitative methods of assessment were used. This approach identified factors that enhanced and accelerated the process of impact occurring and helped to provide a narrative linking impacts to research.

Discussion

This study has identified several examples of the evaluation of the impact of cancer research. These evaluations were performed over three decades and mostly assessed research performed in high-income countries. The approach used to search the literature in this study is justified by looking at the titles of the articles identified: only 14% (2/14) included the word “impact”, suggesting that searching for empirical examples of cancer research impact evaluation using traditional publication databases would have been challenging. Furthermore, all the studies identified were included within reviews of approaches to research impact evaluation, which negated the subjective decision of whether the studies complied with a particular definition of research impact.

Characteristics of research that are specifically relevant to cancer studies can be identified from these impact assessments. First, many of these evaluations acknowledged the contribution of both basic and applied studies to the body of cancer research, and several studies categorised research publications based on this distinction. Second, the strong focus on health impact, and the expectation that cancer research will improve health, was not surprising. The focus on survival in particular, especially in economic studies looking at the value of health gains, reflects the high mortality of cancer as a disease entity. This contrasts with similar evaluations of musculoskeletal or mental health research, which have focused on improvements in morbidity [73, 74]. Third, several studies highlighted the distinction between research looking at different aspects of the cancer care continuum, from screening, prevention and diagnosis, to treatment and end of life care. The division of cancer as a disease entity by site of disease was also recognised: studies that analysed the number of patients diagnosed with cancer, or population-level survival gains, often used site-specific cancer incidence, and other studies evaluated research relating to only one type of cancer [64, 65, 68, 69]. Lastly, the empirical examples of cancer research impact identified in this study confirm the huge investment in cancer research that exists, and the desire of many research organisations and funders to quantify a rate of return on that investment. Most of these studies concluded that the return on cancer research investment far exceeded expectations. Even using the simple measure of future research grants attracted by researchers funded by one cancer charity, the monetary value of these grants outweighed the initial investment [69].

There were limitations in the approaches to impact evaluation used in these studies, which were recognised by reflecting on the findings from the broader literature. Several studies assessed academic impact in isolation, and studies using the journal impact factor or the location of authors on publications were limited in the information they provided. In particular, using the journal impact factor (JIF) to allocate research funding, as was done in one study, is now outdated and controversial. The policy impact of cancer research was commonly evaluated using clinical practice guidelines, but other policy types that could be used in impact assessment [30], such as national government reports or local guidelines, were rarely used. In addition, using cancer guidelines as a surrogate for clinical practice change and health service impact has drawbacks. For example, guidelines can be outdated, irrelevant or simply not used by cancer clinicians, and local hospitals often have their own clinical guidelines, which may take precedence over national documents. Furthermore, the other aspects of policy impact described in the broader literature [30], such as impact on policy agenda setting and implementation, were rarely assessed. There were also no specific examples of social, environmental or cultural impacts, and very few of the studies mentioned wider economic benefits from cancer research, such as spin-out companies and patents. It may be that these types of impact were less relevant to the cancer research being assessed; however, unexpected impacts may have been identified if they had been considered at the time of impact evaluation.

Reflecting on how the methods of data collection and analysis used in these studies aligned with those listed in Additional file 2: Table S1, bibliometrics, alternative metrics (media citation), documentary analysis, surveys and economic approaches were often used. Methods less commonly adopted were interviews, the use of scales and focus groups. This may have been due to the time and resource implications of using qualitative techniques and more in-depth analysis, or a lack of awareness by authors regarding the types of scales that could be used. An example of a scale that could be used to assess the impact of research on policy is provided in one of the literature reviews identified [30]. The method of collecting expert testimony from researchers was utilised in the studies identified, but there were no obvious examples of testimony about the impact of cancer research from stakeholders such as cancer patients or their families.

Lastly, despite the large number of examples identified from the previous literature, only a minority of the empirical assessments used an impact framework. The Payback Framework and an iteration of the CAHS Framework were used with success, and these studies are excellent examples of how frameworks can be used for cancer research impact evaluation in future. Other frameworks identified from the literature (Additional file 2: Table S2) that may be appropriate for the assessment of cancer research impact in future include Weiss’s logic model [75], the Research Impact Framework [76] and the Research Utilisation Ladder [77]. Weiss’s model is specific to medical research and encourages evaluation of how published clinical trial results are implemented in practice and lead to health gain; he describes an efficacy-efficiency gap [75] between clinical decision makers becoming aware of research findings, changing their practice, and this having an impact on health. The Research Impact Framework, developed by the Department of Public Health and Policy at the London School of Hygiene and Tropical Medicine in the UK [76], is an aid for researchers to self-evaluate their research impact, and offers an extensive list of categories and indicators of research impact which could be applied to evaluating the impact of cancer research. Finally, Landry’s Research Utilisation Ladder [77] has similarities to the hierarchy used in the empirical study by Montague and Valentim [68], and focuses on the role of the individual researcher in determining how research is utilised and its subsequent impact.

Reflecting on the strengths and limitations of the empirical approaches to cancer research impact evaluation identified in this study, Fig. 3 outlines recommendations for the future. One of these recommendations refers to improving the use of real-life data to assess the actual impact of research on incidence, treatment and outcomes, rather than predicting these impacts using clinical trial results. Databases for cancer incidence, such as SEER (USA) and the Office for National Statistics (UK), are relatively well established. However, those that collect data on treatments delivered and patient outcomes are less so; where they do exist, they have been difficult to establish and maintain and often have large quantities of missing data [78, 79]. In their study, Glover et al. [67] specifically identified the lack of good quality data documenting radiotherapy use in the UK in 2012.

Fig. 3 Suggestions for approaching cancer research impact evaluation
1 Thonon F, Boulkedid R, Teixeira M, Gottot S, Saghatchian M, Alberti C. Identifying potential indicators to measure the outcome of translational cancer research: a mixed methods approach. Health Res Policy Syst. 2015;13:72

The recommendations also suggest that impact assessment for cancer and other health research could be made more robust by giving researchers access to cost data linked to administrative datasets. This type of data was used in empirical impact assessments performed in the USA [64, 66] because the existing Medicare and Medicaid health service infrastructure collects and provides access to such data. In the UK, hospital cost data is collected for accounting purposes, but it could be unlocked as a resource for research impact assessments going forward. A good example of attempts to link resource use to cost data for cancer care in the UK is the UK Colorectal Cancer Intelligence Hub [80].

Lastly, several empirical examples highlighted that the impact of cancer research can be increased when researchers or research organisations advocate for, publicise and help to interpret research findings for a wider audience [60, 72]. In addition, it is clear from these studies that organisations wishing to evaluate the impact of their cancer research must appreciate that research impact evaluation is a multi-disciplinary effort, requiring input from individuals with different skill sets, such as basic scientists, clinicians, social scientists, health economists, statisticians and information technology analysts. Furthermore, the users and beneficiaries of cancer research, such as patients and their families, should not be forgotten, and asking them which impacts of cancer research are important will help direct and improve future evaluations.

The strengths of this study are the broad yet systematic approach used to identify existing reviews within the research impact literature. This allowed a more informed assessment of cancer research evaluations than would have been possible if a primary review of these empirical examples had been undertaken. Limitations of the study include the fact that the review protocol was not registered in advance and that one researcher screened the full articles for review. The latter was partly mitigated by using pre-defined inclusion criteria.

Conclusions

Impact assessment is a way of communicating to funders and patients the merits of undertaking cancer research, and of learning from previous research to develop better studies that will have positive impacts on society in the future. To the best of our knowledge, this is the first review to consider how to approach the evaluation of the impact of cancer research. At the policy level, a lesson learned from this study for institutions, governments and funders of cancer research is that an exact prescription for how to conduct cancer research impact evaluation cannot be provided, but a multi-disciplinary approach and sufficient resources are required if a meaningful assessment is to be achieved. The approach used to evaluate the impact of cancer research will depend on the type of research being assessed, the unit of analysis, the rationale for the assessment and the resources available. This study has added to an important dialogue for cancer researchers, funders and patients about how cancer research can be evaluated and, ultimately, how the impact of future cancer research can be improved.