Background

Public funds are expended on health research in large part to lead to improvements in policy [1-3], practice, resource allocation, and ultimately, the health of the community [4, 5]. However, the transfer of new knowledge from research into practice continues to be far from optimal [2, 6, 7]. It is widely recognized that increasing the impact of research on policy and practice is likely to require many different strategies, including the development of research-policy partnerships, better summaries of evidence and more research-receptive policy and funding agencies [8, 9]. It is also increasingly acknowledged that studies designed to evaluate the impact of interventions to improve health (intervention research) can inform subsequent intervention-specific policy and practice [10]. However, only a relatively small proportion (10% to 23%) of primary research funded by public health agencies or published in the peer-reviewed literature is intervention research [11, 12].

Little is known about the nature and mechanisms that underlie the influence of intervention research on health policy or practice. In fact, there are no agreed systematic approaches for measuring such impacts [13]. Traditional indices of research productivity relate to numbers of papers, impact factors of journals and citations. These metrics are widely used by research granting bodies, although they do not always relate well to the ultimate goals of applied health and medical research [14-16]. The emerging literature on research impact [17-19] highlights its complex, non-linear, unpredictable nature, and the propensity, to date, to count what can be easily measured, rather than measuring what “counts” in terms of significant, enduring changes [14].

A recent systematic review of approaches to assessing research impacts by Banzi et al. [13] identified 22 reports included in four systematic reviews and 14 primary studies. These publications described several theoretical frameworks and methodological approaches (for example, bibliometrics, econometrics, interviews, ad hoc case studies) to measuring research impacts, with the “payback model” as the most frequently used conceptual framework [19]. Based on this review of existing models, Banzi et al. differentiated five broad categories of research impacts: i) advancing knowledge; ii) capacity building; iii) informing decision-making; iv) health benefits; and, v) broad socio-economic benefits.

To date, most primary studies of research impacts (‘impacts research’) have been small-scale case studies, and there has been no comprehensive assessment of impacts and their mediators across any single applied research funding scheme. The New South Wales (NSW) Health Promotion Demonstration Research Grants Scheme (HPDRGS) was designed by the NSW Ministry of Health, Australia, in response to a paucity of evidence on large-scale intervention effectiveness across prevention policy priorities. The specific aims of the scheme are to fund applied research that builds the evidence base for health promotion policy and practice, and to develop partnerships and build capacity for health promotion research between health districts, universities and organizations outside of the health sector. This paper reports on an exploratory case series analysis of all 15 projects funded under the HPDRGS during the period 2000 to 2006, to determine their subsequent policy and practice impacts (the ‘what’) and to explore the forces and factors influencing these impacts (the ‘how’ and ‘why’).

Methods

At the commencement of the study in January 2012, 15 projects funded during the period 2000 to 2006 had been completed for at least twenty-four months and most (n = 12) for longer than four years. This period was selected to balance the time required for evidence of impact to become manifest, against the potential accuracy of recall by respondents. A case study approach was used to explore if, and in what ways, research projects were used to influence policy and practice, and to identify the key factors (how and why) which influenced their use. Case study methods are appropriate for answering ‘how’ and ‘why’ questions when the phenomenon of interest (in this case, applied research) is embedded within a real-life context (policy and practice environment) [20]. Due to the diversity of projects under consideration, this case series included a number of different methods (Figure 1). The study was approved by the University of Sydney Human Research Ethics Committee and all participants gave written informed consent to take part in the study.

Figure 1

Overview of study methods and key steps in the research process

Step 1 Research scoping

After considering the ‘research impact’ literature, we adapted the conceptual framework by Banzi et al. [13], as its domains aligned very closely with the objectives of the HPDRGS, with the five broad impact domains collapsed into four, as follows: i) Advancing knowledge and research-related impacts (peer-reviewed articles, impact on research methods, better targeting of future research); ii) Capacity building (development of research capacity of staff, students, others); iii) Informing policies and product development (policy, guidelines, products, intervention development); and, iv) Health, societal and economic impacts (health status, social benefits, shifts in knowledge, attitudes, behaviors, social capital, macroeconomic impacts, etc.). We combined categories four and five of Banzi’s framework (health benefits and broad socio-economic benefits) as pilot testing suggested they were both distal and inter-related and would be more efficiently described together.

Interview protocols for chief investigators (CI) and nominated end-users were guided by the adapted Banzi impact categories, lessons from the ‘research impacts’ literature scan and questions arising from a preliminary review of available documentation for the 15 projects. The interviews were then piloted with a CI and an end-user of two intervention research projects of commensurate size not funded through the HPDRGS.

Step 2 Data collection

Semi-structured telephone interviews

The CIs were invited by email to participate in the study, with non-responders sent a reminder email after one week and then followed-up by telephone up to three times. Participating CIs were asked to nominate up to three end-users, defined as individuals who could provide a perspective on how the project had influenced policy, practice, organizational development, further research or applications such as guidelines or teaching materials. CIs were encouraged to identify end-users from the range of sectors in which impacts occurred. These end-users were approached by email, using the same contact and follow-up procedure as for CIs, to participate in an interview exploring how the project and its findings had been used from their perspective. CIs were typically university-based academics or health service managers/researchers with joint appointments, while end-users were most frequently current or former policy makers, health service managers and practitioners.

Telephone interviews were conducted by an experienced research officer (RN) who has a good working knowledge of disease prevention, intervention research, and related policy and practice contexts; and was independent of the CIs and end-users. Interviews were digitally recorded with participants’ permission. Both CI and end-user telephone interviews explored perspectives on the overall impacts of individual projects, asked about specific impacts in relation to each of the four categories, and identified factors contributing to such impacts, or lack thereof. The following list outlines a summary of the telephone interview topic guides for CIs and end-users.

Table 1 Project characteristics, key implications and dissemination methods used for HPDRG projects 2000-2006

Semi-structured telephone interview topic guide: Investigators and End users

  • Recall of research aims, key findings and implications

  • Dissemination process (how, factors influencing the dissemination process)

  • Interface with end users – how research team worked with potential end users (investigators only)

  • Interface with researchers – how were end users involved in the research project, how did they hear about the findings (end users only)

  • Overall impact – how have the findings been used

  • Specific impacts – capacity building, partnerships, policy and product development, health and other sector impacts, societal and economic impacts

  • Circumstances surrounding the use of the findings, or limited impact of the findings

  • Evidence of impacts – documentary sources

  • Nomination of end users (investigators only)

Bibliometric analysis

A bibliometric analysis was also undertaken in Scopus in April-June 2012 to examine the total and mean number of citations (excluding self-citations) for all peer-reviewed publications arising from each project. Project reports were located and examined to document key project findings. Respondents were also asked to provide copies of additional documentary sources as evidence of how project findings had been used, such as policy documents, briefs, reports and curriculum materials. Additional searches of the grey literature were undertaken to corroborate documentary evidence of impacts reported in the interviews. Documentary evidence was compiled by the research officer (RN) and checked by two other authors (AJM and JB).

Step 3 Impact assessment

Data synthesis and verification panel

Interview and document data were collated and triangulated in ‘case summaries’ by two authors (AJM and JB) and reviewed for accuracy by the research officer who conducted the interviews (RN). Case summaries for each project included: i) key research findings and implications; ii) the perspectives of CIs and end-users on how project findings had been used and key factors influencing use, including illustrative quotations; iii) bibliometric analysis; iv) documentary evidence of impacts; and, v) notes and observations made during CI and end-user interviews. The coding framework for analyzing these case summaries was based on impact domains, contextual information and key factors influencing research use.

A verification panel was established to review and assess the collated case study material, and provide an overall assessment of the policy and practice impact of each of the 15 projects. Our approach was adapted from the RAND/UCLA (University of California, Los Angeles, USA) appropriateness method [21, 22]. This systematic consensus method has been widely used to derive expert consensus on clinical indications, quality improvement and assessing the effectiveness of health networks [23, 24].

The verification panel was made up of eight members of the research team: a mix of senior academics and policy makers, including international experts in the field of applied population health research. Case summaries of each project were independently assessed by panel members across the four impact domains and overall impact. Assessments were made using a nine point scale: 1 to 3 ‘limited impact’; 4 to 6 ‘moderate impact’; and 7 to 9 ‘high impact’. Judgments of overall impact took into account the four impact domains as well as: size of the project and level of funding; time since project completion; potential sustainability of the impact; and, research and implementation challenges that were addressed in creating the impact. Individual ratings were compiled and discussed at a verification panel meeting held in August 2012, where consensus was reached on overall impact assessments for all 15 studies. The panel also identified a number of key influences on policy and practice impacts across projects, which were further explored by a final analysis of the data to describe ‘how’ and ‘why’ projects were impactful or not.

Results

Project characteristics

Between 2000 and 2006, fifteen projects were funded across a broad range of topics, using a range of study designs, most commonly randomized controlled trials (RCTs; n = 7), mixed methods (n = 5) and quasi-experimental designs (n = 2) (Table 1). Most projects employed a mix of qualitative and quantitative methods (n = 13). Funding ranged from A$10,000 to A$300,000 per project. Projects were most commonly implemented in community (n = 9) and health services (n = 5) settings, in both rural (n = 8) and metropolitan (n = 7) areas.

Table 2 Interview sample, research outputs, means of independent assessment and panel assessment of overall impact for projects funded between 2000 and 2006

Semi-structured interviews and panel impact assessments

A total of 46 interviews were conducted (Table 2), with CI interviews (mean duration: 53.3 mins; range: 38 to 97 mins) lasting longer than end-user interviews (mean duration: 40.0 mins; range: 19 to 81 mins). The response rate was 70.8% for CIs and 74.4% for end-users.

Table 3 How projects and their findings informed policy and practice

There was limited variation between panel members in their assessments of the overall impact of each project, and consensus on the final group overall impact assessment was achieved easily. Three studies were considered to possess ‘high’ overall impact (Tai Chi, Mental Health First Aid and Nicotine Dependent Inpatients), eight ‘moderate’ overall impact (Rural Hearing Conservation, Smoking Cessation in Indigenous Communities, Pedometers in Cardiac Rehab, Exercise to Prevent Falls after Stroke, Walk-to-School, Reducing Falls Injuries within Aged Care, Reducing Smoking in Mental Health, Cycling Infrastructure), while four were rated as ‘low’ overall impact (Nutrition Practices in Youth Housing, Secondary Prevention in Patients with CVD, Safer Streetscapes, Making Connections). Impact ratings across the adapted Banzi categories, as well as overall impact assessments for each project, are shown in Table 2.

Table 4 Factors influencing impacts of research on policy and practice (across case studies)

Advancing knowledge

Projects sought to advance knowledge using a variety of dissemination methods including reports, peer-reviewed papers, conference presentations, theses, presentations to stakeholder groups, political advocacy, training, websites and the media (Table 1). The number of peer-reviewed papers generated per project ranged from 0 to 7, with a mean of 10 citations per paper (range: 0 to 73); 96% of these citations came from six projects, all rated as high or moderate impact studies. The two projects independently rated as having a ‘high impact’ were the Tai Chi and Mental Health First Aid projects. The studies with the highest citations were effective interventions which provided novel results for their fields of interest. High and moderately impactful projects were all managed by experienced researchers, and high quality publications were produced despite equivocal findings in some instances. All of the studies with low impact on advancing knowledge had null results, no publications and were mostly led by inexperienced researchers and practitioners.

Capacity building impacts

Both CIs and end-users indicated that capacity building occurred through staff development, partnership building and follow-on research funding. For many end-users, projects provided opportunities to develop their research skills and partnerships with researchers. Researchers of the two projects with high capacity building impact, Tai Chi and Mental Health First Aid, consistently stated that projects helped them to build their own research capacity and partnership networks, enabling them to build enduring connections to policy and practitioner networks from which a body of research emerged. A number of CIs and end-users of high and moderately impactful projects spoke of projects as a place where future research and service ‘leaders’ were trained.

Policy and practice impacts

In terms of policy impacts, end-user respondents from high and moderate impact studies reported using research to inform agenda setting and policy debates. Project findings also informed policy planning and, in some cases, underpinned elements of new policies in health services. At the practice level, high and moderate impact projects were reported as being used to inform program planning across a range of sectors. In the health sector a number of projects (Treatment of Nicotine Dependent Inpatients, Reducing Smoking in Mental Health, Smoking Cessation in Indigenous Communities) resulted in substantial practice changes in the provision of smoking cessation advice and nicotine replacement therapy in health services. A number of projects also informed organizational development, where interventions were integrated into the core business of health services. One such study (Tai Chi) led to much more standardized provision of falls prevention interventions in community settings across large parts of the state of NSW.

In some cases, high impact research provided retrospective support and rationale for existing health promotion programs, such as the NSW Rural Hearing Conservation Program. Overall, practice impacts appeared to largely flow from policy impacts. For example, the policy focus on tobacco control in hospital settings contributed to the development of new practice resources and professional development for smoking brief intervention in hospitals, as well as in mental health units. A summary of how projects and their findings influenced policy and practice, with illustrative quotes derived from interviews, is provided in Table 3.

Broader health, economic and societal impacts

None of the projects were independently assessed as ‘high impact’ in the health, societal and economic impacts domain, with a mean ‘moderate’ rating being the highest achieved for the Rural Hearing Conservation Program, Tai Chi, Mental Health First Aid, Treatment of Nicotine Dependent Inpatients, Falls in Aged Care and Smoking Cessation in Indigenous Communities programs.

Factors influencing policy and practice impacts: the ‘how’ and ‘why’

Examination of patterns differentiating high, moderate and low (overall) impact intervention research at the verification panel and further thematic analysis of interview transcripts identified six key factors that particularly contributed to these impacts. A summary of these factors and illustrative quotes derived from the interviews are collated in Table 4.

Nature of the intervention

All of the studies considered to have high policy and practice impacts (Tai Chi, Mental Health First Aid and Treatment of Nicotine Dependent Inpatients) also had moderate to high ratings for advancing knowledge and strong research outputs. However, a number of studies that achieved moderate to high ratings in advancing knowledge and demonstrated strong research outputs (journal papers and citations) failed to achieve high levels of real-world policy and practice impacts, namely the Walk to School and Cycling Infrastructure programs. Data suggest that these projects lacked definitive results and a clear agency with policy responsibility through which policy makers could advocate for their replication and expansion. In addition, the complex and inter-sectoral nature of these interventions, which required environmental and cultural change to achieve their intended outcomes, made them difficult to readily replicate or scale up.

Further examination of studies with low and moderate impact also highlighted a number of barriers to applying findings, including failure to produce clear results indicating effective action, interventions and outcomes that were hard to explain, and no consideration of how effective interventions could be scaled up for population-level implementation. The majority of high impact projects effectively packaged intervention materials and tapped into a readily available workforce to expand program reach. To illustrate, in the space of nine years the Mental Health First Aid program has been scaled up using a ‘train the trainer’ model to the point where it has reached 1% of the Australian population [25].

It is interesting to note that though high and moderately impactful projects generally received larger amounts of funding, this alone was not always related to impact. The Rural Hearing Conservation project evaluated an existing program with minimal resources (A$17,670), providing a high return on investment in terms of policy and practice impacts. Also, a number of the least impactful projects received large amounts of funding.

Leadership and champions

Highly impactful projects all displayed strong networks of leaders and champions who advocated for further adoption of interventions into policy and practice. These individuals promoted the benefits of the intervention across a variety of stakeholder groups including politicians, media, policy makers and the general public, as well as relevant professional and academic networks. Champions included CIs, end-users and chief executive officers of organizations within which interventions were trialed, as well as intervention service providers who had a commercial interest in expanded program delivery.

Effective partnerships

For the majority of high and moderately impactful studies, partnerships between end-user groups and the CIs existed from the inception of the projects. The analysis showed that in many cases ongoing relationships provided the continuity and mechanisms for project findings to be disseminated and considered, and for end-user groups to become engaged in formulating the key policy recommendations and wider dissemination processes. These partnerships also allowed researchers to tap into prevailing policy priorities and were considered an important contributor to their capacity to undertake further priority-driven research in partnership with end-users.

Dissemination approaches

Impactful projects consistently used active dissemination strategies, such as discussion of findings at workshops between researchers and end-users, as well as dissemination of findings through established policy and practitioner networks. These projects also developed ‘knowledge transfer’ products, such as short reports highlighting key findings and recommendations and packaged project resources/materials, making them available on websites for broader use. Some high impact studies intentionally published findings in open access journals, as a way of disseminating project findings to a broader audience of end-users. Analysis of low impact studies indicated that they, for the most part, gave little consideration to dissemination processes and, in a number of instances, offered no analysis of broader policy implications of project findings.

Perceived research quality

Research quality was consistently cited by a number of end-users of high impact projects as an important consideration in their use of research findings. However, end-users also stated that decisions to change or modify policy or practice were informed by the ‘body of evidence’, rather than findings of single studies.

Contextual factors

Among the numerous contextual factors identified as potential facilitators of the application of research findings, one of the most influential was the prevailing policy ‘zeitgeist’. CIs and end-users of high impact projects spoke of a study’s ability to provide a potential solution to a pressing policy problem. So much so, that some projects gained momentum through external factors, such as societal events and parliamentary inquiries, that focused community and political attention on issues for which research could provide a response (such as Mental Health First Aid). For low impact studies, key impediments to applying the findings included circumstances where researchers did not have the capacity to establish and maintain links with policy makers, or where findings were not aligned with current policy priorities.

Discussion

While a growing number of studies have examined impacts of research [13] and research funding [26], this is the first study to document the impacts of a policy-driven applied research funding scheme. This analysis of research impacts indicates that some, but not all, of the intervention research funded through the HPDRGS achieved a wide range of tangible impacts across most domains. It is clear that the three projects with the highest overall impact ratings in this study had substantial impacts on advancing knowledge and capacity building, as well as policy and practice. However, some projects with substantial research impacts (papers and citations) yielded only minimal policy and practice impacts. This reinforces that traditional indices of research impact and researchers’ track record on publications and grants are not always an accurate guide to the policy and practice impacts of their research.

This case study analysis demonstrates the positive impact that intervention research funding can have on a range of policy and practice decisions, with findings used as a policy advocacy tool (to attract attention and funding to an issue), for priority setting (identifying areas and target groups for intervention), and to support and justify existing programs/approaches or identify the need for alternatives. In a number of instances project findings informed the early stages of policy development, when there had previously been a lack of definitive evidence about effective intervention approaches. We also found that research findings were used to directly underpin key elements of existing policies for falls prevention in older people and tobacco control. In addition, findings were used to improve understanding of issues associated with implementing and assessing new interventions such as travel guides and other promotion of active transport. In many instances the use of project findings by practitioners reflected a need to act on state-wide policy imperatives. The introduction of the NSW policies on smoke-free hospitals and falls prevention saw many practitioners tasked with developing local responses using relevant HPDRGS project resources. This highlights the value of having research funding aligned with state-wide policy.

It is clear from this analysis that many factors influence public health policy and practice, with evidence from an effective intervention study in itself generally not enough to shift current approaches. Consistent with previous research [5], we found that the findings of a single study were usually considered alongside a broader body of evidence about effective intervention approaches, as well as the local context and timing requirements. In a seeming paradox, some studies that had null or equivocal results still achieved moderate policy and practice impact. This suggests that the adoption of project findings into policy and practice is influenced by factors other than evidence of effectiveness. Closer review of these projects revealed that the introduction of state-wide policies and programs meant that practitioners adopted the available project materials to meet an immediate practice need, even though the studies were demonstrably not effective.

Further, we found that a range of contextual factors were critical in facilitating the use of project findings, which is in agreement with previous studies [27-31]. In particular, supportive policy contexts encouraged partnerships between researchers and end-users from the inception of projects, and where possible utilized existing structures (policy and practitioner networks, etc.) for communication. Tapping into existing policy and practitioner networks and processes appeared to enable researchers to build partnerships and trust with practice and policy ‘users’ and allowed better utilization of policy ‘windows of opportunity’. In some instances a confluence of events provided the right conditions for an intervention to be widely adopted into policy and practice. One such tragic event was the Virginia Tech massacre in the United States, which highlighted the importance of mental health literacy and was considered by the project’s CI to be a critical factor in driving the early expansion of Mental Health First Aid in North America. All of the high impact projects were characterized by simple interventions that were well implemented, high quality research, champions who advocated for adoption and disseminated findings, as well as supportive contextual factors. The review of materials by the verification panel identified that an intervention’s capacity to be packaged, with change ‘agents’ trained in its delivery, was particularly important, as evidenced by the rapid expansion of Mental Health First Aid and Tai Chi across their respective practice settings.

This study supports a growing body of evidence about the importance of embedding and linking research with broader strategic policy contexts. In a systematic review of 24 studies of the use of evidence by health policy makers, Innvaer and colleagues [28] found that personal contact, timeliness and relevance were the most commonly reported facilitators of research use. In our study, impactful projects appeared to effectively engage key end-user groups, to ensure that projects were aligned to the interests and needs of such groups and to promote ownership of the findings, thereby increasing commitment to action. Most of the low impact studies had no such clear links with end-users or existing policy and practice networks.

Findings of this and other recent studies [31] also highlight the importance of producing a range of dissemination products, such as short reports, fact sheets and project resources, making them available through websites, and publishing in open access journals to facilitate the use of findings by end-users. There is increasing emphasis from funding agencies on making research evidence readily available [32, 33]. Yet recent studies of public health research suggest that most dissemination activity rarely goes beyond publishing academic papers, appears to be undertaken in an ad hoc, unfunded fashion, and that access to dissemination advice and support for researchers from funding agencies and academic institutions is lacking [34, 35]. This study highlights the value of funding and systematically supporting a wide range of dissemination activities.

The excellent return on investment from the Rural Hearing Conservation Program evaluation highlights what can be achieved with limited resources when research funding is well targeted. There appears to be merit in funding high quality evaluations of existing policies and programs. Increasingly, funding agencies require investigators to detail how their research will impact on policy and practice [36, 37], and a growing number of theoretical frameworks for assessing impact have been proposed [16, 38-41]. This study demonstrates the utility of the scoring and panel verification methods used for identifying and measuring proximal research impacts (advancing knowledge, capacity building, and policy and practice impacts). The findings of this and other recent studies [31] suggest, however, that the longer-term impacts (societal, health and economic) of a single study can be difficult to discern and attribute. The CIs, end-users and verification panelists all reported difficulty in identifying and assessing these impacts for any single study. This is understandable, as such impacts almost always result from a complex interplay of contributing factors; and there remains a need for alternative ways of conceptualizing and measuring longer-term research ‘impacts’.

This study has a number of strengths and limitations. Its strengths are that impacts were assessed using multiple methods, including bibliometric analysis, interviews with researchers and end-users, and documentary checks. These data were triangulated and distilled into case summaries, which were used in a rigorous verification process involving independent assessments of impacts and a group panel assessment. The documentary checks lend confidence that the perspectives of the chief investigators and end-users were credible, while the verification panel process provides a well-established and tested methodology for reaching expert consensus and minimizing the subjectivity of assessments. The end-users were purposefully sampled on the basis of having knowledge and/or experience of how the project findings had been used; while this ensured they contributed relevant information, there was potential for some degree of social response bias, as some end-users may have been inclined to report positive impacts or over-inflate those impacts. We attempted to reduce social response bias by having researchers not previously involved in funded HPDRGS projects conduct the interviews and undertake the analysis. The recall of impacts was somewhat uncertain for some projects from the early funding rounds, as these were conducted 10 to 12 years earlier. For one of the projects the end-users could not be identified. It is possible, therefore, that impacts may have been underestimated for some of these older projects.

Conclusions

This HPDRGS case series analysis provides new methods and insights into how intervention research projects influence policy and practice. Funded projects had variable impacts on policy and practice. Where impacts occurred they ranged from raising awareness of health interventions, identifying priority issues and target groups for interventions, underpinning new policies, and supporting and justifying existing policy and/or programs. The success of high impact projects was perceived in large part to be due to the nature and quality of the intervention itself (simple to understand, built in mechanisms for training and delivery), high quality research, champions who advocated for adoption, and active dissemination strategies. Our findings also highlight the need for strong partnerships between researchers and policy makers/practitioners to increase ownership over the findings and commitment to action.