Abstract
The allocation of research funding can benefit greatly from robust analysis of what has worked in research. In turn, these analyses can help advocacy initiatives and demonstrate accountability to taxpayers and donors. Capturing and mapping data on the inputs, processes, outputs, outcomes and impact of research is crucial for these analyses. In this article we argue that the research community as a whole—including funders, researchers and administrators—is potentially in a position where it can assess or evaluate research not just according to academic outputs (production of knowledge), but also its outcomes and/or impact (effects on society). Using an exploratory framework that assesses the effectiveness, efficiency and equity (3e’s) of research and research assessment both in terms of academic outputs and non-academic impact, we also argue that most assessments primarily examine the effectiveness of research, as tools are not yet available to systematically assess research for its efficiency and equity. This article is published as part of a special issue on the future of research assessment.
Challenges in research impact assessment
Deciding on the appropriate distribution and allocation of research funding in any sector is no easy task. In the case of the life, biomedical and health sciences, funders making such decisions may consider the greatest needs in research, existing gaps in topic or disease areas, or research that has the potential to demonstrate the greatest breakthroughs and health returns. While each of these considerations is taken into account to some extent at national or, in some cases, at a global funding level, there is little evidence to suggest that this is done systematically across funders in one country, let alone globally. Part of the challenge is the lack of accurate data both on the inputs into research (funding investments) and on the accompanying attributed outputs and wider outcomes and impacts. The Global Forum for Health Research, for example, has published estimates for global spending on health research for the past 10 years, based primarily on surveys conducted by the OECD (Landriault and Matlin, 2009). In 2014 the UK Clinical Research Collaboration (UKCRC) published the third UK-wide analysis of public and charity funded health relevant research since 2004, for which it used the Health Research Classification System (HRCS) to categorize projects corresponding to £3bn of spend in 2014 (UK Clinical Research Collaboration, 2015). The UKCRC report was helpful in demonstrating, for example, that half of all funding is concentrated in “basic research” (underpinning and aetiology research), although this proportion relative to other research types has decreased over the ten-year reporting period. Data were collected from the awards databases of the 64 participating funding institutions. However, the interpretation of this type of data can be complex and resource-intensive (Terry et al., 2012), and is likely to be inconsistent if it is to be done across many different funding institutions globally. Similarly, the inputs and associated outputs, outcomes and wider impacts of research are also difficult to track, mainly because the research system has traditionally relied on academic publications as the main output type that is systematically tracked and documented, and even these are seldom attributed directly to funding sources.
In this article, we describe how, as a research community (funders, administrators, researchers and beneficiaries), we are beginning to create more systematic ways of capturing inputs and tracking these through to the wider outcomes and impact of research, but propose that there is still a long way to go. We explore this by using a framework through which research can be assessed for public benefit, centering on three broad elements or “3e’s” of research, shown in Table 1. This framework asks whether research is effective (that is, does it produce any outputs, outcomes and/or societal benefits or impact?), efficient (that is, how productive is the research system? is research happening at an appropriate “rate”? is there waste in research?), and equitable (that is, is the research achieving specific goals, reaching certain beneficiaries, or addressing specific health needs?).
We acknowledge that in order to systematically answer these three questions there is an inevitable cost (both to researchers and to funders), which is spent in collecting and analysing the data required to assess research. In this article we therefore apply the same framework as a lens to explore the exercise of assessing research (that is, whether we have a research assessment system that is effective, efficient and equitable). In effect, we are exploring two questions: (1) to what extent does the research ecosystem and community have the infrastructure in place to systematically assess research (in line with the 3e’s)? and (2) would the inevitable transaction cost of such systematic assessments be appropriate?
Conceptualizing 3e’s for research impact assessment
The use of 3e’s as an approach to evaluation in general is not new. Effectiveness, efficiency and equity have been used in a range of settings, including general programme evaluation (Reinke, 1994), programme evaluation of quality of health care services (Donabedian, 1988), evaluating hospital performance (Davis et al., 2013), health system performance (Aday et al., 1999; Aday, 2004) and health promotion (Tones and Tilford, 2001). Outside of health, there are also examples of considering 3e’s to assess proposed options for climate change initiatives (Stern, 2007; Angelsen, 2009), and achieving value for money in international development (Department for International Development, 2011; OECD, 2012). Finally, the 3e’s have been conceptualized and used in other forms. For example, DFID’s approach to value for money uses three different terms: economy (where they examine the inputs to their programmes), efficiency and effectiveness (which includes considerations of outputs that lead to equity) (Department for International Development, 2011).
Most of these examples relate to the evaluation of delivering a programme or intervention. While programme evaluation shares characteristics and methodological approaches with research impact assessment, the focus in this article is specifically on how research is assessed. Research impact assessment can be thought of as research on research, with the aim of providing analysis that describes what works in research, helps better allocation of research funding, creates accountability for research, and supports advocacy initiatives in policy and practice (Morgan Jones and Grant, 2013).
Previous studies have reviewed the conceptual tools that have been developed for understanding, assessing and describing research activity (Banzi et al., 2011; Bornmann, 2013; Guthrie et al., 2013; Milat et al., 2015; Greenhalgh et al., 2016). The methods used within these tools include the use of bibliometrics to assess academic impact, quantitative indicators and metrics on economic and health outcomes, qualitative narratives and case studies, and conceptual frameworks such as logic models and related theories of change. All of these require data on the inputs of research and, depending on the questions asked in the assessment, associated data on outputs, outcomes and impact, or a combination of the three. Figure 1 is a simplified illustration of these essential elements of research and how we have conceptualized the 3e’s in the context of these elements. While we are aware of other conceptual frameworks for describing research processes (Buxton and Hanney, 1996; CAHS, 2009; Guthrie et al., 2013; Greenhalgh et al., 2016), we use this simplified illustration to contextualize the 3e’s framework. The inputs of research include the funding invested, knowledge brought in, and resources required to deliver the research. The research process includes all the activities that enable the research to happen (ie reviewing of evidence, data collection, analysis, reporting and so forth). Asking whether these processes occur optimally, whether there is waste or duplication of effort, or indeed whether there is a lack of productivity when comparing across research groups, can assess the efficiency of this process, which we explore further below.
Research activity leads to outputs, outcomes and wider impact, which can serve to tell us whether research has been effective. Finally, the information on inputs, research processes and outputs, outcomes and impact can all serve to determine if research is equitable. We explore assessment for equity further in the following sections, but emphasize that we interpret this in this context as whether the research achieves specific goals, reaches certain beneficiaries, or addresses specific health needs.
Applying the 3e’s to research impact assessment
In the following sections we explore these 3e’s further and demonstrate that the research community as a whole, including funders, researchers and administrators, is potentially in a position where it can assess or evaluate research not just according to academic outputs (production of knowledge), but also its outcomes and/or impact (effects on society). Such data are essential in being able to assess research itself for its effectiveness, efficiency and equity and we explore how each of these are achieved, or could be achieved. Furthermore, we argue that the various assessments of research that currently exist are primarily examining the effectiveness of research, and less attention is paid to whether research is efficient and equitable, mainly because the tools to do so do not yet exist. There is seldom a systematic attempt to gather data that shows whether research was produced in the most optimized way, or benchmarked for performance (efficient), or if it reached certain beneficiaries, or addressed specific health needs (equitable).
Effectiveness in research
Taking our simplified definition of effectiveness (Table 1), assessing whether research is effective simply means finding out if it produced any outputs, outcomes and/or societal benefits or impact. The main unit of analysis required is simply a measure of outputs (or outcomes and/or impact).
At its core, the proximate role of research is to produce new knowledge and understanding and to build on (or challenge) previous knowledge, which can then lead to improved understanding or benefits to society. The academic and funding community monitors, audits and/or evaluates research activity primarily by quality and excellence standards in the production of knowledge through journal articles, and discussions are ongoing on how to improve the measurement of “quality” of research through publications (Boaz et al., 2003). If research is assessed purely for its production of knowledge (ie academic publications), then the growth in publications and attempts to find the highest quality output through methods such as bibliometrics may serve this purpose. Similarly, a combination of publication and patent data can also demonstrate performance, as exemplified by the Elsevier report on comparative performance of UK research (Elsevier, 2013).
If, however, our focus shifts to non-academic societal outcomes and impact, the tools and methods for gathering this evidence vary. Publications alone, while capturing the contributions of research in the knowledge sphere, do not serve to systematically capture the wider impacts on society arising from research. More recently, therefore, research funders have been collecting extra data to assess research on its secondary outcomes and benefits to wider society or “impact”. In the United Kingdom, for example, the Higher Education Funding Council for England (HEFCE) for the first time based 20% of the overall assessment on non-academic impact in the 2014 Research Excellence Framework (REF). The National Institute for Health Research (NIHR), the Medical Research Council (MRC) and other funders, including the medical research charities, collect information beyond publications in their progress and annual reports. This means that a growing body of data is becoming available with which to demonstrate the effectiveness of research.
The ways in which we capture data beyond academic publications have also developed. For REF 2014, HEFCE chose to collect information on impact from UK researchers in the form of “impact case studies” (a four page narrative), which are available to read in an online searchable database (see Note 1). Similarly, Research Councils and funding bodies in the United Kingdom regularly collect information (as descriptive text in annual reports) on the outputs of the research they have funded, beyond academic publications, and report on these. Much of this data on wider impacts was initially collected in the form of reports, using free text written into word-processing documents. There is now a growing number of tools to facilitate the collection of evidence for these wider outcomes/impact, such as Researchfish®, Symplectic, ImpactStory and Kilola. By adopting these tools, funders can now also analyse the outputs from funded projects and report on this. Reports using Researchfish data have been produced by funders including Cancer Research UK, the MRC, and the Science and Technology Facilities Council, all linked from the Researchfish website (see Note 2), while others such as the Association of Medical Research Charities are currently working on the analysis of their Researchfish data (see Note 3).
In these examples, we can see that most assessments of research examine whether the inputs of research (that is, funding) are producing outputs (knowledge in the form of publications) and, more recently, other outcomes or wider impact. Taking our simplified definition of effectiveness in Table 1, we see that publication lists, tools that collect research outputs and narrative descriptions of impact to society are therefore effective ways of demonstrating the “input-output” pathway of research. The analyses of the 6,679 impact case studies submitted to REF 2014 concluded that it is possible to extract useful information on the impact of research through impact case studies, especially if used in combination with text mining and other automated tools (King’s College London and Digital Science, 2015). If the role of research assessment is to assess whether research inputs are producing outputs and outcomes/impact, that is, whether research is effective, then all these tools serve this purpose.
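To make the idea of automated extraction more concrete, the minimal sketch below (in Python, using an invented fragment of case-study text and illustrative regular expressions) shows how quantitative mentions such as monetary amounts and QALY figures might be pulled from narrative impact case studies. It is not the method used in the REF analysis, but it illustrates why mixed currencies and units quickly complicate aggregation.

```python
import re

# Invented fragment of case-study narrative (illustrative only).
case_study = (
    "The intervention was adopted by 12 NHS trusts, generating an estimated "
    "1,500 QALYs and savings of £2.3 million per year, alongside licensing "
    "income of $0.5 million."
)

# Simple patterns for monetary amounts and QALY mentions; real case studies
# use far more varied phrasing, which is why manual curation was still needed.
money_pattern = re.compile(r"([£$€])\s?([\d,.]+)\s*(million|billion|m|bn)?", re.IGNORECASE)
qaly_pattern = re.compile(r"([\d,.]+)\s*QALYs?", re.IGNORECASE)

print("Monetary mentions:", money_pattern.findall(case_study))  # mixed currencies and units
print("QALY mentions:", qaly_pattern.findall(case_study))
```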
Efficiency in research
Efficiency is generally reported in terms of the cost per unit of production (for example, how much does it cost to produce a number of cars in a production line per year?). In research it can essentially be used to test research performance—measuring whether the ratio of output/input of research can be optimized—by comparing, for example, against other research programmes or countries (external benchmarking), or to previous years’ performance (internal benchmarking). We have summarized this in Table 1; the working definition of efficiency in research asks how well the health research outputs, outcomes and impact occur. This often implies asking if there is waste in research, which means we now need two units for analysis: inputs and outputs (or outcomes and/or impact).
Efficiency in terms of academic outputs using a crude output/input ratio already occurs. In 1997, the UK Chief Scientist Robert May (May, 1997), as well as Grant and Lewison (1997), were the first to calculate publications and citations per unit of funding spent by country. The UK Department for Business, Innovation and Skills has since published similar calculations in its reports that assess the performance of the United Kingdom compared with 7 other research-intensive countries (Elsevier, 2013). For efficiency in terms of wider outcomes and impact, however, the connection between inputs and outcomes/impact is not clearly established, making this calculation much more challenging. Taking the example of the REF 2014 process, which was largely peer-review based, the data were not available to conduct systematic, rigorous benchmarking of research outcomes and impact. The impact data were available in the form of narrative text, with no requirement to produce standardized reporting of the reach and significance of impact. For example, in our own analysis of these narrative texts, we had envisaged being able to extract quantitative information and to group such information by various indicators, thus enabling us to develop return-on-investment type estimates (King’s College London and Digital Science, 2015). However, this was not feasible, as the case studies contained a very large amount of numerical data that was used inconsistently and would need converting to standard units. Financial information was expressed in different currencies, while measures and calculations of health gains (in terms of quality adjusted life years, or QALYs) were inconsistent. To calculate a crude estimate of total health gain, we had to supplement and manipulate the data given by the authors in the case studies with external data cited in their references, or use our own judgement (King’s College London and Digital Science, 2015). Moreover, the actual input data were not available at all, as researchers were not required (thankfully) to link each individual impact to a funding source or proportions thereof.
One could foresee, however, a scenario in which this information is captured systematically in more standardized units, whereby the impact of different research projects or programmes could be compared against each other, if project or programme outputs and impact were also linked to funding. For example, the health gain per £1 achieved in one project (in the form of QALYs) could be compared with the health gain per £1 spent in another (if indeed the research investment, or inputs, behind such outcomes could be attributed to one or more funding sources). Tools such as Researchfish that link inputs to research outputs could in future enable such calculations, allowing funding bodies at least to make decisions about which research is working more efficiently. Within the Researchfish platform, research outputs (and outcomes and impact) are gathered through a “question set” that ranges from academic publications to patents and commercialization activities, to informing policy, products, and interventions. Researchers can attribute these entries to research grants and awards, thereby enabling funders to capture a range of data that have been submitted by the researchers they fund and to evaluate the impact of their research funding by various units of assessment (for example, disciplinary focus, research funding mechanism, host institution and so on). Such evaluations strengthen accountability to the taxpayer and donor communities, and can be used to assess the effectiveness of different aspects of research funding (Hinrichs et al., 2015). Tools such as these, if used extensively, could provide funders with agile ways to discover how work across their research portfolio is progressing and what it is producing (such as knowledge, leverage and connections), and enable assessing for efficiency with more standardized data. Further considerations would then have to be made on whether efficiency is more important than, say, equity considerations, and we note the challenge of trade-offs in our discussion.
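As a purely illustrative sketch of the output/input calculation described above, the following Python fragment uses invented project and funder figures to compute health gains (QALYs) per £ of research funding and to attribute those gains back to funders in proportion to their share of inputs. Any real calculation would depend on standardized reporting and agreed attribution rules.

```python
# Hypothetical projects with invented QALY estimates and funding sources.
projects = [
    {"name": "Project A", "qalys": 1200, "funding_gbp": {"Funder X": 400_000, "Funder Y": 100_000}},
    {"name": "Project B", "qalys": 300,  "funding_gbp": {"Funder X": 250_000}},
]

for project in projects:
    total_funding = sum(project["funding_gbp"].values())
    qalys_per_pound = project["qalys"] / total_funding  # the crude output/input ratio
    print(f"{project['name']}: {qalys_per_pound * 1_000_000:.0f} QALYs per £1m spent")
    # Attribute health gains to each funder in proportion to its share of inputs.
    for funder, amount in project["funding_gbp"].items():
        share = amount / total_funding
        print(f"  {funder}: attributed {project['qalys'] * share:.0f} QALYs for £{amount:,}")
```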
In addition to data linking challenges, another obstacle to benchmarking impacts for efficiency comparisons is that there are no standardized measures of what constitutes a good impact story, nor of which types of research are producing valuable impacts. It has recently been shown that the general public and researchers value impact in different ways, which calls for the development of future generic measures of “impact utility” using micro-economic approaches such as contingent valuation and discrete choice modeling (Pollit et al., 2016). At present, therefore, a systematic “output/input” or “outcome (and impact)/input” calculation, taken to be our working definition for efficiency in Table 1, is not strictly used, with the exception of studies that benchmark publication and citation per £ spent (Elsevier, 2013). An alternative approach is to measure the rate of return of research, with a view to comparing efficiencies across funding programmes, disease areas, or countries’ investments in research. There are examples where research benefits have been quantified within a disease area, such as cancer (Glover et al., 2014) and cardiovascular research (Buxton et al., 2008), but limited data availability and the associated necessary assumptions mean that direct comparisons on return on investment should be avoided.
There is also a body of work that acknowledges that there may be inefficiencies in research, and therefore that efforts should be made to reduce waste in research (Chalmers et al., 2014). Chalmers and Glasziou (2009) had previously estimated that the cumulative effect is that about 85% of research investment is wasted, not taking into account the inefficiencies in regulation and management of research. We note that the concept of waste requires some critical reflection and definition. While in some fields duplicating studies could indicate waste (since existing rigorous studies may have already answered relevant health-related questions), in others such duplication is necessary in order to validate findings which may not yet be definitive. Furthermore, most research systems include an element of competition, which is regarded as beneficial to the research process and may mean that multiple researchers are tackling the same research questions at the same time, thereby encouraging optimized innovation rather than waste. Nevertheless, the principle of reflecting on potential waste still applies in these considerations. Waste can be reduced by ensuring new research builds on previous research and/or best practice, for example, by requiring systematic reviews as part of the research proposal, as encouraged by initiatives such as the EBRNetwork (http://ebrnetwork.org); by making protocols available to the public to ensure study designs build on previous experience; by encouraging publication of raw data (for example, clinical trial data via IMPACT, http://ottawagroup.ohri.ca/disclosure.html); and by encouraging open access, making research findings more accessible so that knowledge can be promoted and built on “efficiently”. The NIHR in the United Kingdom has been identified as championing the reduction of waste in research by requiring systematic reviews for any application submitted to them, involving patients and the public in decision making, and making full protocols available for a number of their research projects (https://monanasser.wordpress.com/2015/12/03/how-to-reduce-waste-in-research-from-edinburgh-to-vienna-and-sarajevo; http://www.nihr.ac.uk/funding/pgfar-application-process.htm).
As a research community and ecosystem, therefore, while we do not yet systematically assess research outputs, outcomes and wider impacts for their efficiency, the tools are available to do so. Furthermore, the initiatives to reduce waste a priori, that is, at the grant application assessment stage, suggest that there is willingness in the research community to reduce waste and promote efficiency in research. However, if the role of research assessment is to assess whether research inputs are producing outputs and outcomes/impact at an appropriate rate, ie whether research is efficient, then better tools are required to link these outputs to research inputs and to make such comparisons systematically, especially for research outcomes and impacts that are not counted in the same way as academic publications.
Equity in research
Assessing research for equity involves setting priorities for research and ensuring that inputs, outputs, outcomes and impact are aligned to intended equitable social goals (which include, for example, eliminating extremes of wealth and poverty, avoiding neglect of specific disease areas, and ensuring gender and race equality). Deciding on those particular goals depends on exactly where equity needs to be achieved. Another way to achieve equity is through the equitable funding of researchers, that is, ensuring that the allocation of health research funding (inputs) is done equitably and without biases (for example of gender, age and institutional ranking). In this article, however, because we are focusing on the outcomes and impact of research, we are not referring to equitable funding but rather to the need to conduct research assessment to achieve equity. Consequently, we acknowledge that achieving equity will require a value judgement by those making the decisions on how funds are distributed. This helps us distinguish equity from a broader concept such as diversity (Stirling, 2007), as the intention is to encourage careful thought on allocating research funds according to pressing social goals. A helpful definition for equity in this context is the distribution of benefits in a target population in relation to individual needs (Roemer, 1980; Reinke, 1994), which could be health needs.
Identifying health needs and matching these to funding allocation, however, can be challenging (Guindo et al., 2012). To be equipped to consider equity in research assessment, data is needed on how the outputs, outcomes and impact of research have contributed to specific health needs, or specific beneficiaries of research. We describe three challenges with respect to research assessment for equity below.
Firstly, there is a challenge in mapping health expenditure overall. Few funders publicly report disaggregated statistics on health R&D expenditures, and there is a lack of uniformity in the use of R&D classification systems across different funders (Terry et al., 2012). There have been some initiatives that have begun to address the first challenge, such as the WHO Global Health Observatory which identifies gaps in health R&D (WHO, 2016).
Second, there are challenges in identifying and then prioritizing health needs when it comes to research allocation. In the aforementioned 2014 UKCRC analysis, burden of disease (measured using Disability Adjusted Life Years, or DALYs) was matched with HRCS categories to identify differences between the health research funded and the burden of disease (UK Clinical Research Collaboration, 2015). An analysis by Røttingen et al. of research investment and subsequent outputs confirms that there are substantial gaps in the global landscape of health R&D, especially for and in low-income and middle-income countries (Røttingen et al., 2013). Viergever (2013) has also demonstrated a mismatch between the health R&D that is needed and that which is undertaken, especially in the areas of neglected diseases, neglected populations, and neglected products such as diagnostics and platform technologies, because of the favoured investments in drugs and vaccines (Viergever, 2013). Part of the challenge, he argues, is that R&D is not needs-driven, there is no system to facilitate the prioritization of health needs, and, finally, the research system is largely dependent on market incentives. Furthermore, the role of other stakeholders outside the research community can also influence prioritization. For example, private investment in research can be driven primarily by the expected rate of return, which can distort equity considerations. Increasingly, public engagement has a role in setting priorities for health research, which can support its societal legitimacy and provide validation for making value judgements in prioritizing research needs.
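The comparison of research funding with burden of disease can be illustrated with a minimal, hypothetical sketch. The Python fragment below uses invented funding and DALY figures for HRCS-style categories and flags areas where the share of funding falls short of the share of burden; it is not drawn from the UKCRC data.

```python
# Invented figures for illustration only (not UKCRC or Global Burden of Disease data).
funding_gbp_m = {"Cancer": 500, "Mental health": 110, "Respiratory": 55}
dalys_thousands = {"Cancer": 3800, "Mental health": 3200, "Respiratory": 1900}

total_funding = sum(funding_gbp_m.values())
total_dalys = sum(dalys_thousands.values())

for area in funding_gbp_m:
    funding_share = funding_gbp_m[area] / total_funding
    burden_share = dalys_thousands[area] / total_dalys
    ratio = funding_share / burden_share  # a ratio below 1 suggests funding lags burden
    print(f"{area}: funding share {funding_share:.0%}, burden share {burden_share:.0%}, ratio {ratio:.2f}")
```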
Finally, a challenge shared with the consideration of “efficiency” in research assessment is that we do not have systematic and standardized reporting on who benefits and “what works” in research funding (ie the outcomes and impact components of Fig. 1). The “gap maps” from the International Initiative for Impact Evaluation (3ie), for example, demonstrate what is known and not known from impact evaluations and systematic reviews in particular areas such as education, HIV and AIDS, or agriculture (3IE, 2016). Gathering the data for this, however, can be challenging. For example, the latest edition of Millions Saved by the Center for Global Development has identified cases of proven success in global health (Glassman and Temin, 2016), but this required issuing public calls for submissions of good practice, reviewing systematic review databases and conducting interviews with subject matter experts. Who benefits from research is itself an equity issue: women, for example, may be disadvantaged as the beneficiaries of research, in terms of its health, societal and economic impacts (Sen et al., 2007; Kuhlmann and Annandale, 2015; Schiebinger et al., 2011-2015). There is evidence to suggest that research that does not account for gender differences can result in inaccurate conclusions about how women respond to disease, and this in turn will influence the effectiveness of treatment choices (Bartlett et al., 2005; Johnson et al., 2014).
Much more has been written on the subject of equity in research, and we will not attempt to list all the evidence, nor enter into discussion about how equity is judged, as such arguments could generate different subjective views. However, we list the above examples to demonstrate the existence of activity in this area, and to state that, to bring equity into research impact assessment, data still need to be collected systematically. Ultimately, this will help prioritize resource allocation, and we acknowledge this will still require value judgements. As discussed earlier, while data on research inputs and processes are collected by different funders, data on outcomes and impact are not yet collected in a systematic and standardized format.
3e’s in research assessment processes
We have so far examined whether current assessment processes are capable of assessing research for its 3e’s, and have argued that, despite goodwill, there are still infrastructure, data collection and data sharing challenges to overcome. We now explore the 3e’s in the process of assessing research.
Around the world, most assessments of research performance are based on the number and/or quality of publications, including in Norway, Sweden, Canada, Australia, Italy, Denmark, Spain, Finland and the Czech Republic (Krapels et al., 2016). There has been some debate on whether peer review and bibliometrics are the right tools for assessing research, but they are broadly accepted as key tools to assess research outputs (academic outputs). In practice, we also see much of the information on research outputs, outcomes and impact used effectively—for example, data collected through tools such as Researchfish have enabled funders such as the MRC to make strategic decisions about what works in their research funding portfolio (Hinrichs et al., 2015).
The assessment of non-academic outcomes and impact is relatively new, and therefore more difficult to assess given the lack of systematic and standardized reporting of these (as noted earlier). Following REF 2014, HEFCE commissioned a number of evaluations and reviews of its assessment process, including an independent review of the REF commissioned by government to also provide conclusions and suggestions for the next assessment cycle (Stern, 2016). If we simply wish to answer whether or not the assessment process worked, ie whether or not it is effective as a means of capturing research performance, then we would argue that to a large extent this was the case in REF 2014 (Manville et al., 2015a, b).
What has not been yet demonstrated, however, is whether research assessment is efficient (or whether it is “too expensive” to justify). Research assessment entails an inevitable transaction cost, both to the funder in analysing the outputs and to the research organizations who need to prepare the data to demonstrate outputs. Processes such as peer review have been noted to have substantial costs for upholding quality (Wessely, 1998) and questions have been raised about its cost-effectiveness (and indeed overall effectiveness) (Godlee et al., 1999).
The total transaction costs for both the universities and the funding councils for REF 2014 were 2.4% of the total money allocated (Technopolis, 2015). To investigate whether these figures were higher or lower than expected, we conducted a brief search for comparable transaction costs elsewhere—both in research assessment and in other areas. In our search we found that transaction costs of assessment were reported in two ways: (i) private or internal transaction costs to the organizations being assessed in preparing for assessment (such as universities’ internal costs in preparing REF submissions), and (ii) costs of undertaking assessments for the assessor or funder (usually expressed as a percentage of their total expenditure). We show a sample of what we found in Table 2, which includes the total transaction costs (that is, to both the assessor and the institutions being assessed, such as HEIs). The resulting figures are not intended to compare like for like, as each calculation differs in the methods employed in estimating individual costs, but they serve to give a rough representation of transaction costs.
What we conclude from this table is, first, that direct comparisons are challenging, given the varying ways in which these estimates were calculated. Costs are more often shown in terms of direct costs to the organization doing the assessment (as in (ii) above), since this is easier to calculate for one single organization than turning to the various assessed organizations to calculate the time they spent preparing for the assessment. An example of this is a report by Morton et al., which compares administration costs for UK funders such as the Wellcome Trust, MRC and DFID, ranging from 2.8 to 7.1% of their total budgets (Morton et al., 2012). There is one example from another sector that could be comparable to assessment preparation costs (as in (i) above): farmers’ private transaction costs, incurred as a proportion of the premium received for being part of an agri-environmental scheme, have been reported to be as high as 25% (Mettepenningen et al., 2009). We also note that the figures for RCUK, for example, may be higher if calculated today, given the fall in success rates (although this is potentially balanced out by efficiency gained in internal administration since then). Although we cannot draw firm conclusions about how transaction costs of assessment in higher education compare with other forms of assessment in other sectors, we can observe that these costs vary and that the process of assessment has the potential to become more efficient.
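For illustration only, the following sketch shows how a transaction-cost share of the kind reported for REF 2014 is computed. The component figures are hypothetical; the point is simply that a cost expressed against total funds allocated is not directly comparable to a cost expressed against, say, a premium received, which is one reason cross-sector comparisons are difficult.

```python
# Back-of-envelope sketch of a transaction-cost share; all component figures are hypothetical.
hei_preparation_cost = 232    # £m, hypothetical cost to institutions preparing submissions
funder_assessment_cost = 14   # £m, hypothetical cost to the funding bodies running the exercise
funds_allocated = 10_250      # £m, hypothetical total funding allocated on the basis of the exercise

transaction_cost_share = (hei_preparation_cost + funder_assessment_cost) / funds_allocated
print(f"Transaction cost share of allocation: {transaction_cost_share:.1%}")  # ~2.4% with these figures
```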
Finally, considerations of equity within the research assessment process can also drive how individuals, projects and institutions are assessed and rewarded. There is evidence to suggest that performance assessment can serve to either encourage or discourage equity in the distribution of research. Gender inequity, for example, could arise as a result of gender bias in both research and research assessment; women traditionally have received fewer awards than men, are less often included as beneficiaries of research, and are cited less (Ovseiko et al., 2016). Research impact assessment, if motivated and driven by equitable principles, can become an engine for creating equity in the allocation of research funding (Ovseiko et al., 2016). Part of what may need to improve are the methods we employ within research assessment to avoid inequity or unequal opportunity. For example, incorporating a mixture of disciplines and greater diversity in review panels can help avoid unconscious biases among panel members. Adopting the appropriate method, whether it is peer review, the use of metrics, or alternative methods, is therefore important. In the independent review of the role of metrics in research assessment and management, a correlation analysis was undertaken to compare the use of individual metrics with the outcomes of the REF peer review process (Wilsdon et al., 2015). The review found evidence to suggest statistically significant differences in the correlation with REF scores for early-career researchers and women in a small number of Units of Assessment (Wilsdon et al., 2015).
Concluding thoughts
The allocation of research funding can benefit greatly from robust analysis of what has worked in research, and, in turn, these analyses can help advocacy initiatives and demonstrate accountability to taxpayers and donors. Capturing and mapping data on the inputs, processes, outputs, outcomes and impact of research is crucial for these analyses and helps in conducting research on research. We have argued here that the research community as a whole, including funders, researchers and administrators, is potentially in a position where it can assess or evaluate research not just according to academic outputs (production of knowledge), but also its outcomes and/or impact (effects on society). Using an exploratory framework that assesses the 3e’s of research and research assessment, we have also argued that most assessments primarily examine the effectiveness of research, as tools are not yet available to systematically assess research for its efficiency and equity.
We have also made a distinction between general evaluation and research impact assessment, emphasizing that the latter allows for better allocation of research funding, creates accountability for research, and supports advocacy initiatives in policy and practice (Morgan Jones and Grant, 2013). Each of the 3e’s is an important consideration for improving assessments for these purposes and can help answer crucial funding policy questions. Essentially, the 3e’s framework can help answer the following policy questions with regard to research funding: Which of our funding programmes are effective? Which funding programmes are most efficient? How do we allocate research funding? And, finally, is the transaction cost worth it?
Furthermore, we acknowledge that these 3e’s are not necessarily mutually reinforcing, and combining them may involve trade-offs. For example, it has been argued that “equity” still struggles to find its place as an equal among the traditional public administration values of the 3e’s (Norman-Major, 2011). In his application of the 3e’s to programme evaluation, Reinke (1994) rightly points out, for example, that the high cost of equitably serving hard-to-reach members of the population may require efficiency considerations to be compromised. However, we echo Reinke’s sentiments that these considerations are informative and important in their own right and are increasingly being used in evaluation and assessment in the same or similar forms. Therefore, it is important to consider not only what the research funding policy questions are in relation to these 3e’s, but also the inevitable value judgements that will be required. We suggest that this framework does not replace such judgements but helps support those decisions.
We acknowledge that this 3e’s framework needs further refinement and invite readers to examine it critically. Our purpose in writing this article is driven by the fact that assessments occur anyway, and significant investments have gone into reviewing them. The recently published Stern review of the Research Excellence Framework was based on the assumption that research assessment exercises have contributed productively to driving competition and fostering research excellence (Stern, 2016). The existence of the review itself also points to the need for recommendations for shaping future assessment exercises. Considering the 3e’s in research assessment, especially in the systematic manner that we are suggesting, will carry inevitable transaction costs. Our crude comparisons have shown these may actually be comparatively small, although implementing much of what we suggest here could increase those costs. To manage those costs, the research community and infrastructure would have to be tailored to systematically capture the information needed for such assessments. The 3e’s of research assessment provide an alternative approach to how research assessment can be framed so that it more holistically addresses research funding challenges, while remaining mindful of the realistic transaction costs that could be incurred.
Additional information
How to cite this article: Hinrichs-Krapels S and Grant J (2016) Exploring the effectiveness, efficiency and equity (3e’s) of research and research impact assessment. Palgrave Communications. 2:16090 doi: 10.1057/palcomms.2016.90.
Notes
1. REF 2014 impact case studies searchable database. Available at: http://impact.ref.ac.uk/CaseStudies/ (accessed 30 May 2016).
2. RCUK publications—produced using Researchfish data. Available at: https://knowledge.researchfish.com/publications/content/rcuk-publications-produced-using-researchfish-data (accessed 15 October 2015).
3. AMRC. Using Researchfish to track the impact of charity research funding. Available at: http://www.amrc.org.uk/our-work/showing-impact/using-researchfish-track-impact-charity-research-funding (accessed 30 May 2015).
References
3IE. (2016) Evidence gap maps [Online]. Available, http://www.3ieimpact.org/evaluation/evidence-gap-maps/, accessed 14 March 2016.
Aday LA (2004) Evaluating the Healthcare System: Effectiveness, Efficiency, and Equity. 3rd edn. Health Administration Press: Chicago, IL.
Aday LA, Begley CE, Lairson DR, Slater CH, Richard AJ and Montoya ID (1999) A framework for assessing the effectiveness, efficiency, and equity of behavioral healthcare. The American Journal of Managed Care; 5 (Special issue): S25–S43.
Angelsen A (2009) Realising REDD+: National Strategy and Policy Options. CIFOR: Denmark.
Banzi R, Moja L, Pistotti V, Facchini A and Liberati A (2011) Conceptual frameworks and empirical approaches used to assess the impact of health research: An overview of reviews. Health Research Policy and Systems; 9 (1): 1.
Bartlett C et al. (2005) The causes and effects of socio-demographic exclusions from clinical trials. Health Technology Assessment; 9 (38): iii–iv, ix–x, 1–152.
Boaz A and Ashby D (2003) Fit for Purpose? Assessing Research Quality for Evidence Based Policy and Practice. ESRC UK Centre for Evidence Based Policy and Practice: London.
Bornmann L (2013) What is societal impact of research and how can it be assessed? A literature survey. Journal of the American Society for Information Science and Technology; 64 (2): 217–233.
Buxton M et al. (2008) Medical research: What’s it worth? Estimating the economic benefits from medical research in the UK. Health Economics Research Group, https://www.mrc.ac.uk/publications/browse/medical-research-whats-it-worth/.
Buxton M and Hanney S (1996) How can payback from health services research be assessed? Journal of Health Services Research; 1 (1): 35–43.
CAHS. (2009) Panel on Return on Investment in Health Research, 2009. Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. Canadian Academy of Health Sciences, Ottawa, Canada, http://www.cahs-acss.ca/wp-content/uploads/2011/09/ROI_FullReport.pdf, accessed 10 June 2016.
Chalmers I et al. (2014) How to increase value and reduce waste when research priorities are set. The Lancet; 383 (9912): 156–165.
Chalmers I and Glasziou P (2009) Avoidable waste in the production and reporting of research evidence. Obstetrics & Gynecology; 114, 1341–1345.
Davis P et al. (2013) Efficiency, effectiveness, equity (E3). Evaluating hospital performance in three dimensions. Health Policy; 112 (1–2): 19–27.
Department for International Development. (2011) DFID’s Approach to Value for Money (VfM), https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/67479/DFID-approach-value-money.pdf, accessed 20 September 2016.
Donabedian A (1988) The quality of care: how can it be assessed? JAMA; 260 (12): 1743–1748.
Elsevier. (2013) International Comparative Performance of the UK Research Base—2013. A report prepared by Elsevier for the UK’s Department of Business, Innovation and Skills (BIS), https://www.gov.uk/government/publications/performance-of-the-uk-research-base-international-comparison-2013, accessed 10 June 2016.
Glassman A and Temin M (2016) Millions Saved: New Cases of Proven Success in Global Health. Brookings Institution Press.
Glover M, Buxton M, Guthrie S, Hanney S, Pollitt A and Grant J (2014) Estimating the returns to UK publicly funded cancer-related research in terms of the net value of improved health outcomes. BMC Medicine; 12 (99): 1.
Godlee F et al. (1999) Peer Review in Health Sciences. BMJ Books: London.
Grant J and Lewison G (1997) Government funding of research and development. Science; 278, 878–880.
Greenhalgh T, Raftery J, Hanney S and Glover M (2016) Research impact: a narrative review. BMC Medicine; 14 (78): 1.
Guindo LA et al. (2012) From efficacy to equity: Literature review of decision criteria for resource allocation and healthcare decisionmaking. Cost Effectiveness and Resource Allocation; 10 (9): 1.
Guthrie S, Wamae W, Diepeveen S, Wooding S and Grant J (2013) Measuring research: A guide to research evaluation frameworks and tools. RAND Europe: Cambridge, UK.
Herbert DL, Barnett AG, Clarke P and Graves N (2013) On the time spent preparing grant proposals: an observational study of Australian researchers. BMJ Open; 3 (5): e002800.
Hinrichs S, Montague E and Grant J (2015) Researchfish: A Forward Look; Challenges and Opportunities for Using Researchfish to Support Research Assessment. The Policy Institute; King’s College: London.
Johnson PA, Fitzgerald T, Salganicoff A, Wood SF and Goldstein JM (2014) Sex-Specific Medical Research: Why Women’s Health can’t Wait. A Report of the Mary Horrigan Connors Center for Women’s Health & Gender Biology at Brigham and Women’s Hospital [Online]. Brigham and Women’s Hospital, http://www.brighamandwomens.org/Departments_and_Services/womenshealth/ConnorsCenter/Policy/ConnorsReportFINAL.pdf, accessed 24 March 2016.
King’s College London and Digital Science. (2015) The Nature, Scale and Beneficiaries of Research Impact: An Initial Assessment of the Research Excellence Framework (REF) 2014 Impact Case Studies. HEFCE: Bristol, UK.
Krapels J et al. (2016) The Relationship Between Research Spending and Research Performance. RAND Europe, (in press).
Kuhlmann E and Annandale E (2015) Gender and healthcare policy. In: Kuhlmann E, Blank RH, Bourgeault IL and Wendt C (eds). The Palgrave International Handbook of Healthcare Policy and Governance. Palgrave Macmillan: Basingstoke, UK.
Landriault E and Matlin SA (2009) Monitoring financial flows for health research 2009: Behind the global numbers. Global Forum for Health Research, http://announcementsfiles.cohred.org/gfhr_pub/reports/2009_en.pdf.
Manville C et al. (2015a) Assessing impact submissions for REF 2014: An evaluation. RAND Europe: Cambridge, UK.
Manville C et al. (2015b) Preparing impact submissions for REF 2014: An evaluation. RAND Europe: Cambridge, UK.
May RM (1997) The scientific wealth of nations. Science; 275, 793.
Mettepenningen E, Verspecht A and van Huylenbroeck G (2009) Measuring private transaction costs of European agri-environmental schemes. Journal of Environmental Planning and Management; 52 (5): 649–667.
Milat AJ, Bauman AE and Redman S (2015) A narrative review of research impact assessment models and methods. Health Research Policy and Systems; 13 (1): 1.
Morgan Jones M and Grant J (2013) Making the grade: methodologies for assessing and evidencing research impact. Dean et al. (Eds) (2013) 7 Essays on Impact. DESCRIBE Project Report for Jisc. University of Exeter.
Morton J, Shaxon L and Greenland J (2012) Process Evaluation of the International Initiative for Impact Evaluation (2008-11). Triple Line Consulting. Overseas Development Institute: London.
Norman-Major K (2011) Balancing the four Es; or can we achieve equity for social equity in public administration? Journal of Public Affairs Education; 17 (2): 233–252.
OECD. (2012) Value for money and international development: Deconstructing myths to promote a more constructive discussion. In: Jackson, P. (ed.). Organisation for Economic Co-operation and Development, http://www.oecd.org/development/effectiveness/49652541.pdf, accessed 22 September 2016.
Ovseiko PV et al. (2016) A global call for action to include gender in research impact assessment. Health Research Policy and Systems; 14 (1): 50.
Pollit A et al. (2016) Understanding the relative valuation of research impact: A best-worst scaling experiment of the general public and biomedical and health researchers. BMJ Open; 6 (8): e010916.
Reinke WA (1994) Program evaluation: Considerations of effectiveness, efficiency and equity. Journal of Family & Community Medicine; 1 (1): 61.
Research Councils UK. (2006) Report of the Research Councils UK Efficiency and Effectiveness of Peer Review Project. Research Councils: Swindon, UK
Roemer MI (1980) Optimism on attaining health care equity. Medical care; 18 (7): 775–781.
Røttingen J-A et al. (2013) Mapping of available health research and development data: what’s there, what’s missing, and what role is there for a global observatory? The Lancet; 382 (9900): 1286–1307.
Schiebinger L et al. (2011-2015) Gendered innovations in science, health & medicine, engineering, and environment [Online], http://genderedinnovations.stanford.edu/, accessed 24 March 2016.
Sen G, Östlin P and George A (2007) Unequal, unfair, ineffective and inefficient gender inequity in health: why it exists and how we can change it. Final Report to the WHO Commission on Social Determinants of Health [Online]. Women and Gender Equity Knowledge Network, http://www.who.int/social_determinants/resources/csdh_media/wgekn_final_report_07.pdf, accessed 24 March 2016.
Stern N (2016) Building on Success and Learning from Experience: An Independent Review of the Research Excellence Framework. Department for Business, Energy & Industrial Strategy, https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/541338/ind-16-9-ref-stern-review.pdf.
Stern NH (2007) The Economics of Climate Change: The Stern Review. Cambridge University Press: Cambridge.
Stirling A (2007) A general framework for analysing diversity in science, technology and society. Journal of the Royal Society Interface; 4 (15): 707–719.
Technopolis. (2015) REF Accountability Review: Costs, benefits and burden. Report by Technopolis to the four UK higher education funding bodies. Technopolis Group, http://www.technopolis-group.com/wp-content/uploads/2015/11/REF_costs_review_July_2015.pdf, accessed June 2016.
Terry RF, Allen L, Gardner CA, Guzman J, Moran M and Viergever RF (2012) Mapping global health research investments, time for new thinking–a Babel Fish for research data. Health Res Policy Syst; 10 (28): 28.
Tones K and Tilford S (2001) Health Promotion: Effectiveness, Efficiency and Equity. 3rd edn. Nelson Thornes: Leeds.
UK Clinical Research Collaboration. (2015) UK Health Research Analysis 2014, http://www.hrcsonline.net/sites/default/files/UKCRCHealthResearchAnalysis2014 WEB.pdf, accessed 10 June 2016.
Viergever RF (2013) The mismatch between the health research and development (R&D) that is needed and the R&D that is undertaken: An overview of the problem, the causes, and solutions. Global Health Action; 6: 22450, http://dx.doi.org/10.3402/gha.v6i0.22450.
Wessely S (1998) Peer review of grant applications: what do we know? The Lancet; 352 (9133): 301–305.
WHO. (2016) Global Health Observatory [Online], http://www.who.int/research-observatory/en/, accessed 14 March 2016.
Wilsdon J et al. (2015) The metric tide: Report of the independent review of the role of metrics in research assessment and management. DOI: 10.13140/RG.2.1.4929.1363.
Acknowledgements
We are grateful to Dr Steven Wooding and Dr Joachim Krapels (RAND Europe) for their helpful input and comments on earlier drafts of this paper. This article was written as part of the Policy Research in Science and Medicine (PRiSM) unit, which is commissioned and funded by the Policy Research Programme in the Department of Health. This is an independent article by the PRiSM unit; the views expressed are not necessarily those of the Department of Health.
Ethics declarations
Competing interests
The Authors declare no competing financial interests.
Rights and permissions
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/