Background

The recognised gap between healthcare research and practice has led research funders, amongst other initiatives, to introduce explicit expectations that grant applications detail the expected impact of research and demonstrate how it will be achieved [[1]–[3]]. Similarly, the UK’s Research Excellence Framework now includes a retrospective evaluation of the economic, societal and cultural ‘impact’ of university-based research alongside its scientific quality [[4]]. A number of resources are available to help researchers think about how best to increase the impact of their research and write knowledge exchange plans [[5]–[8]]. Noting that Tetroe et al. identified up to 29 terms used for ‘knowledge transfer’ [[1]], we use the term ‘knowledge exchange’ to describe the multidirectional, dynamic and iterative nature of translating research-based knowledge into policy and practice [[9]]. Knowledge exchange encompasses a number of activities, such as dissemination (i.e., sharing of research findings), collaboration and consultancy, which ideally result in a range of impacts.

Resources are also available to help reviewers assess knowledge exchange plans [[2]]. Yet despite such guidance, and funders’ insistence that research proposals include explicit knowledge exchange and impact plans, there is no readily available general standard for assessing those plans. Funders do provide guidance to reviewers, but it varies considerably across funders and funding programmes; some of it is freely available, whilst some is provided only to reviewers when they are asked to assess a proposal. It can therefore be difficult for researchers to know whether reviewers are likely to judge their knowledge exchange plans as suitable. Furthermore, it is rare for researchers to receive feedback on this aspect of their proposal. This is unsurprising, since these plans currently form a relatively limited part of the assessment process. Indeed, if researchers have followed recent advice to embed knowledge exchange principles and mechanisms throughout the entire lifecycle of a research project [[2],[5]], it may be difficult for reviewers to comment directly on this aspect of a proposal. This situation offers little scope for researchers to learn how knowledge exchange can be better incorporated into the research process, and there is a risk that researchers will come to see knowledge exchange and impact plans as a ‘tick-box’ exercise rather than considering how these could genuinely improve the entire research project.

Having advised a number of colleagues in our own institution on enhancing knowledge exchange plans, we began to consider how to change researcher behaviour. We therefore took an approach based upon the principles of audit and feedback [[10]], developing criteria for assessing knowledge exchange plans within research proposals that could be applied to a sample of grant proposals and fed back to researchers. We demonstrate the feasibility of developing and applying these criteria before discussing their potential to help researchers improve their knowledge exchange plans.

Findings

Criteria development

We aimed to develop assessment criteria that drew upon existing conceptual frameworks, were underpinned by a sound rationale, and could potentially be measured from a review of written grant proposals. We extracted candidate knowledge exchange principles and recommendations from a review and synthesis of knowledge exchange frameworks, supplemented by existing guidance issued by UK research councils [[6]–[8],[11]]. Following iterative development, including feedback from academic colleagues, we established a set of 19 criteria for assessing knowledge exchange plans grouped under five thematic headings (Table 1). The five themes cover: problem definition; involvement of research users; public and patient engagement; dissemination and implementation; and planning, management and evaluation of knowledge exchange.

Table 1 Criteria for the assessment of knowledge exchange plans and illustrative text from proposals

Application of the assessment criteria

We applied the criteria in an audit of applied health research proposals submitted from our own institution. We designed each criterion so that it could be rated as ‘met’ (scoring ‘1’) or ‘not met’ (scoring ‘0’) from a review of the grant proposal. We anticipated that judging whether each criterion was met would require assessment of the entire proposal rather than only a ‘dissemination plans’ section or equivalent, because we expected evidence of knowledge exchange to be embedded throughout proposals (e.g., ‘problem definition’ in introductory sections). Three project team members (AIR, AR and RF) piloted the criteria by independently assessing three proposals, comparing assessments, and then clarifying criteria where necessary.
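For illustration, this scoring scheme lends itself to a simple data structure. The following minimal sketch (in Python) shows one way a rater’s binary judgements might be recorded; the theme keys, criterion identifiers and ratings are hypothetical shorthand, not the published wording of the 19 criteria.

```python
# Hypothetical shorthand for the five themes and a subset of criteria;
# the published instrument contains 19 criteria in total.
THEMES = {
    "problem_definition": [
        "problem_and_significance_stated",
        "problem_identification_described",
    ],
    "involvement_of_research_users": ["specific_users_identified"],
    "public_and_patient_engagement": ["ppi_plans_described"],
    "dissemination_and_implementation": ["dissemination_routes_stated"],
    "planning_management_and_evaluation": [
        "uptake_monitoring_described",
        "timing_of_activities_stated",
    ],
}

def validate_ratings(ratings):
    """Ensure every criterion rating is binary: 1 = 'met', 0 = 'not met'."""
    for criterion, score in ratings.items():
        if score not in (0, 1):
            raise ValueError(f"{criterion}: expected 0 or 1, got {score!r}")
    return ratings

# One fictional proposal rated against the illustrative criteria.
proposal = validate_ratings({
    "problem_and_significance_stated": 1,
    "problem_identification_described": 1,
    "specific_users_identified": 1,
    "ppi_plans_described": 0,
    "dissemination_routes_stated": 1,
    "uptake_monitoring_described": 0,
    "timing_of_activities_stated": 0,
})
```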

We screened the titles of grant proposals recorded by the Faculty of Medicine and Health, University of Leeds, submitted between May 2011 and May 2012, and selected 102 with a likely focus on applied health research. We included pending, successful and unsuccessful proposals because we sought a representative range of applications. We subsequently identified 25 full proposals led by academics in our institution: 20 submitted to various National Institute for Health Research (NIHR) programmes, three to UK research councils, and two to other funders. We obtained permission from all lead applicants to review their grant applications in full. One project team member (AR) then applied the criteria to each application.

We calculated mean scores for each criterion and for each theme across the 25 proposals (Figure 1). Proposals scored highest on problem definition (mean 0.87 out of a maximum of 1), followed by public and patient engagement (0.68), dissemination and implementation (0.63) and involvement of research users (0.57), and lowest on planning, management and evaluation of knowledge exchange (0.18).

Figure 1 Mean criterion scores (grey bars) and mean theme scores (black outline bars) across the 25 proposals in the audit.
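The aggregation behind Figure 1 is straightforward arithmetic. A sketch, continuing the hypothetical structures above and assuming `proposals` is a list of binary rating dictionaries (one per proposal):

```python
from statistics import mean

def criterion_means(proposals):
    """Mean score (0 to 1) for each criterion across all rated proposals."""
    criteria = proposals[0].keys()
    return {c: mean(p[c] for p in proposals) for c in criteria}

def theme_means(proposals, themes):
    """Theme score as the mean of its constituent criterion means."""
    c_means = criterion_means(proposals)
    return {
        theme: mean(c_means[c] for c in criteria)
        for theme, criteria in themes.items()
    }

# e.g. theme_means(proposals, THEMES) would yield a dictionary of
# theme scores such as {"problem_definition": 0.87, ...}.
```

With equally weighted binary criteria rated for every proposal, the mean of criterion means equals the average of per-proposal theme proportions, so either route reproduces the theme scores.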

Amongst individual criteria, the three most frequently met were: ‘problem addressed by this proposal and its significance to the health service or health is stated’ (24 of 25 proposals); ‘specific users of research are identified’ (24 proposals); and ‘statement about how the problem has been identified’ (23 proposals). The three least frequently met were: ‘ways in which the uptake of research findings can be monitored’ (no proposals); ‘timing and order of knowledge exchange activities is stated’ (3 proposals); and ‘applicants’ previous experience of undertaking knowledge exchange activities is described’ (5 proposals). Across proposals, a mean of 11.2 of the maximum 19 criteria were met (range 5.8 to 16.3). Table 1 also illustrates part-anonymised text from proposals that would allow a criterion to be judged as met (with the addition of a fictional example for the criterion met by no applications).
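The frequency counts and per-proposal totals reported above could be tallied in the same way, again under the assumption of one binary rating dictionary per proposal:

```python
from collections import Counter

def criterion_frequencies(proposals):
    """Number of proposals meeting each criterion.

    Criteria met by no proposal are absent from the Counter;
    treat missing keys as a count of 0.
    """
    freq = Counter()
    for ratings in proposals:
        freq.update(c for c, met in ratings.items() if met)
    return freq

def criteria_met_per_proposal(proposals):
    """How many criteria each proposal met (maximum possible: 19)."""
    return [sum(ratings.values()) for ratings in proposals]

# Most and least frequently met criteria:
# criterion_frequencies(proposals).most_common()
```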

Conclusions

It is feasible to develop and apply audit criteria for assessing knowledge exchange plans within research proposals. We suggest that the criteria can be used by individual researchers and teams for self-assessment, or by grant-seeking institutions to identify common strengths and weaknesses and hence guide staff development. Our modest analysis of one institution suggests some key challenges that others are likely to face, especially around identifying resources and methods to monitor the longer-term impact of research.

Developing meaningful and feasible criteria posed three main challenges. First, we aimed to develop criteria that offered researchers enough detail to guide improvement of knowledge exchange plans whilst avoiding over-specification. We found it helpful to organise the emerging criteria into five themes that both followed the flow of a proposal and related strongly to the knowledge exchange process [[9],[11]]. The themes helped to contextualise the criteria and safeguarded against missing aspects of the knowledge exchange process. Second, there is a risk that, rather than encouraging a longitudinal view of the knowledge exchange process, our audit criteria may promote a tokenistic ‘box-ticking’ approach by applicants [[2],[5]], especially if their institutions use measurement as a feature of performance management [[12]]. Any audit instrument is prone to such misuse and to degrees of self-deception. Nevertheless, in principle, developing and stating a knowledge exchange plan is more likely to result in action than making no plan at all [[13]]. Third, we were aware of the need to capture knowledge exchange plans aimed at a range of different research ‘users’. The Canadian Institutes of Health Research (CIHR), for instance, explains that ‘A knowledge user can be, but is not limited to, a practitioner, a policy maker, an educator, a decision maker, a health care administrator, a community leader or an individual in a health charity, patient group, private sector organization or media outlet’ [[14]]. The UK NIHR states that ‘the term user refers to patients, their carers and family members, as well as to members of the public and representatives from patient and charitable organisations’ [[15]]. We therefore distinguished between immediate users of research findings (e.g., clinicians, commissioners) and longer-term beneficiaries (e.g., patients).

Applying the tool to research proposals also posed a number of challenges. First, funders have adopted different concepts of knowledge exchange and impact, and use different terminology [[1]]. We nevertheless suggest that our criteria are sufficiently generic to be transferable beyond the funding applications we assessed from one UK institution. Second, proposal forms differ substantially across funders and programmes, making it necessary for assessors to read entire proposals to capture the full extent of knowledge exchange plans. Third, some criteria within the planning, management and evaluation of knowledge exchange theme scored poorly; for example, none of the 25 proposals included a statement about monitoring the uptake of research findings. This may reflect both the absence of explicit guidance from funders and limited experience and skills amongst researchers. Fourth, researchers and institutions will inevitably ask whether stronger knowledge exchange plans actually enhance the chances of grant success. We did not examine associations with success, partly because of the small number of applications reviewed but mainly because this was not our key aim. Whilst demonstrating stronger knowledge exchange may have variable effects upon the likelihood of success, we suggest that the fundamental issue is how to maximise the chances of relevant impact during and following research projects.

In summary, research funders and institutions are increasingly interested in demonstrating impact. Researchers are therefore expected to present clear knowledge exchange plans, ideally embedded throughout the whole research cycle. We suggest that our criteria are useful for researcher self-assessment of individual applications and as an audit tool for research institutions to identify areas for improvement. Given the limited, exploratory nature of this work, we welcome further suggestions and debate around how to enhance the validity and relevance of such audit criteria.