Science and Engineering Ethics, Volume 19, Issue 2, pp 337–340

Metrics-Based Assessments of Research: Incentives for ‘Institutional Plagiarism’?

Authors

    • C. Berry, Cardiff School of Biosciences, Cardiff University
Original Paper

DOI: 10.1007/s11948-012-9352-0

Cite this article as:
Berry, C. Sci Eng Ethics (2013) 19: 337. doi:10.1007/s11948-012-9352-0

Abstract

The issue of plagiarism (claiming credit for work that is not one’s own) rightly continues to cause concern in the academic community. An analysis is presented that shows the effects that may arise from metrics-based assessments of research when credit for an author’s outputs (chiefly publications) is given to an institution that did not support the research but which subsequently employs the author. The incentives for what is termed here “institutional plagiarism” are demonstrated with reference to the UK Research Assessment Exercise, in which submitting units of assessment are shown in some instances to derive around twice the credit for papers produced elsewhere by new recruits, compared to papers produced ‘in-house’.

Keywords

Research Assessment Exercise · Research Excellence Framework · Plagiarism · Metrics-based assessment

A recent paper in this journal discussed the responsibilities of supervisors and students in the practice of plagiarism (Alfredo and Hart 2011). Letters to Science (Roig 2009; Loadsman 2009) and a Nature Medicine editorial (The insider’s guide to plagiarism 2009) provide interesting critiques of the practice and of the pressures on individuals, in their quest for a share of available funding, to claim credit for the work of others. We should also be aware of parallel pressures applied externally, at a higher level, on research institutions by metrics-based assessment systems imposed by governments or other funders. This higher-level pressure has been ingrained in funding systems in recent decades, at least in the UK, and has received little criticism in the scientific press, yet it can reward universities and other research establishments with credit for work that was carried out elsewhere. It has created a vibrant ‘transfer market’ for highly productive staff, based on a principle that might be considered ‘institutional plagiarism’: the claiming of credit by one institution for work that was carried out in another.

The UK government-sponsored Research Assessment Exercises (RAE) have rated research in a number of disciplines based on formulae that give a high weighting to research output (for example, 75% in the case of biological sciences in 2008), assessed principally on publications. The regulations for submissions to these RAEs gave full credit for any published work to the institution employing an author at the end of the assessment period, even though all of the published work may have been carried out elsewhere. An analysis of the RAE 2008 data is precluded by the nature of the ratings assigned in that assessment. However, scrutiny of RAE 2001 data shows the utility of ‘gamesmanship’ to enhance institutional ratings through the recruitment of new staff, as illustrated below. Despite the necessity of referring to old data for this analysis, the principle illustrated will still apply, since the forthcoming UK Research Excellence Framework (REF), which replaces the RAE and will be completed in 2014, retains the system of awarding credit to an author’s employing institution on the census date, regardless of the location at which the work was carried out.

A small subset of RAE 2001 submissions was analysed, in cases where a unit of assessment had shown an improvement in rating of two steps (from 3a to 5) from RAE 1996 to RAE 2001. The analysis did not attempt to cover all such improvers since, for many, full details of submitted publications were not available. This survey therefore set out to identify instances of the strategy rather than to assess its full extent. In the analysis, the contribution of each paper for each staff member submitted to RAE 2001 was assigned a value equal to the Impact Factor of the publishing journal. Of course, the relationship between Impact Factor and quality of research is complex, but the dominance of this measure in the mindsets of universities will be well known to readers of this journal. Each paper was then sought via online databases to verify the listed address of the RAE-submitted author. Papers listing the submitting university were categorised as “indigenous”, while those listing the author’s address elsewhere (with a previous employer) were categorised as “imported”. The sum of Impact Factors was calculated for each submission and for each category of paper (indigenous/imported) within it. The proportion of submitted papers in each category and the ratio of mean Impact Factors for the two groups (a measure of the relative importance of imported papers to the overall submission) were then derived. Source data were taken from www.rae.ac.uk/2001/ and the universities and their submissions have been anonymised here.
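To make the derived quantities concrete, the following minimal sketch computes the three statistics described above for one submission. It is illustrative only: the function and data are invented for this sketch, and the actual analysis worked from the records at www.rae.ac.uk/2001/.

```python
def summarise_submission(papers, submitting_university):
    """papers: list of (journal_impact_factor, author_address) tuples.

    Returns the proportion of imported papers, their share of the total
    Impact Factor sum, and the ratio of mean Impact Factors
    (imported / indigenous), as described in the text.
    """
    indigenous = [f for f, addr in papers if addr == submitting_university]
    imported = [f for f, addr in papers if addr != submitting_university]
    total = sum(indigenous) + sum(imported)
    return {
        "imported_share_of_papers": len(imported) / len(papers),
        "imported_share_of_if_sum": sum(imported) / total,
        "ratio_of_mean_ifs": (sum(imported) / len(imported))
                             / (sum(indigenous) / len(indigenous)),
    }

# Illustrative data: 4 imported papers out of 12, with higher average
# Impact Factors than the 8 indigenous papers.
papers = [(8.2, "Previous University"), (6.9, "Previous University"),
          (7.5, "Previous University"), (5.4, "Previous University"),
          (3.1, "Submitting University"), (2.8, "Submitting University"),
          (4.0, "Submitting University"), (3.3, "Submitting University"),
          (2.5, "Submitting University"), (3.7, "Submitting University"),
          (2.9, "Submitting University"), (3.5, "Submitting University")]
print(summarise_submission(papers, "Submitting University"))
# One-third of the papers are imported, yet they supply just over half of
# the Impact Factor sum, with a ratio of mean Impact Factors of about 2.17.
```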

For one university’s submission in ‘Physiology’, the 35.5% of papers that were imported (describing work carried out by authors in their previous institutions and unrelated to work in the submitting university) contributed 50.7% of the total Impact Factor sum (Fig. 1a). For two universities’ submissions in ‘Biological Sciences’, 27.5 and 18.7% imported papers contributed 45.2 and 34.4% of the total Impact Factor sum, respectively (Fig. 1b, c). The ratios of the mean Impact Factors of imported to indigenous papers in these three cases were 1.874, 2.176 and 2.280, so the relatively small number of imported publications in these submissions made a disproportionately large contribution to the overall Impact Factor total of the submitting department, with the average imported paper contributing approximately twice as much to the total sum as the average indigenous publication.
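These ratios follow directly from the reported shares. If a fraction p of a submission’s papers is imported and those papers account for a fraction q of the Impact Factor sum, then the ratio of mean Impact Factors is

\[
\frac{\bar{F}_{\mathrm{imported}}}{\bar{F}_{\mathrm{indigenous}}}
  = \frac{q/p}{(1-q)/(1-p)}
  = \frac{q\,(1-p)}{p\,(1-q)} .
\]

For the ‘Physiology’ case, (0.507 × 0.645)/(0.355 × 0.493) ≈ 1.87, agreeing with the reported 1.874 up to the rounding of the quoted percentages; the two ‘Biological Sciences’ cases give ≈ 2.17 and ≈ 2.28 in the same way.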
Fig. 1

Contribution of indigenous and imported papers. For each of three submissions (a–c), the width of the bars represents the proportion of indigenous papers (describing work carried out in the submitting university; light grey) and imported papers (describing work carried out without support from the submitting university; dark grey). The percentage that indigenous papers contribute to the total is marked within the bars. The height of the dark grey bars represents the ratio of the mean Impact Factor of imported papers over that of indigenous papers (shown numerically above the dark grey bars)

Plagiarism is defined in the Oxford English Dictionary as “taking someone else’s work or ideas and passing them off as one’s own”. The analysis presented here indicates that the rules of metrics-based systems can encourage universities, if not to pass work off as their own, at least to gain credit from research in which they played no part. The data show that the credit gained by the submitting entity can be highly significant, and the incentive for this ‘institutional plagiarism’ was retained in RAE 2008 and will be a feature of the new REF, despite warnings that “the consequences of gaining or losing a grade are so great that institutions are obliged to ‘play-games’ in order to ensure that they fall the right side of the grade boundary” (Roberts 2003). Of course, universities should have vigorous recruitment policies to attract research-active members of staff who publish regularly in highly rated journals, to enhance departmental research capabilities (although, as investors are warned, past success cannot be taken as an indicator of future performance). However, the scientific community should question the desirability of systems that deny credit to the institutions in which work was carried out (providing the research environment and support), only to reward recruiting institutions with credit for work in which they played no role.

Alternative assessment systems could be devised to eliminate or reduce the effects described above. For instance, New Zealand’s Performance-Based Research Fund (PBRF) quality evaluation, to be conducted in 2012, apportions credit for transferring staff to both institutions on a scale that reflects how long before the census date the transfer occurred. Indeed, the UK 2014 REF will also assess the “Impact” of research, and this is specifically restricted to work “underpinned by research that was carried out by staff while working in the submitting Higher Education Institute” (http://www.hefce.ac.uk/research/ref/faq/showcats.asp?cat=6#q14 accessed 9 January 2012). Thus, in the area of “Impact” (defined to encompass social, economic, cultural, environmental, health and quality of life benefits), universities will be restricted to describing benefits to which they have contributed, while in the area of research publications they will remain free to submit outputs generated elsewhere.
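The PBRF’s actual apportionment scale is not reproduced here; purely as an illustration of the principle, a linear time-based split (a hypothetical rule, not the PBRF’s own) could be sketched as follows:

```python
def apportion_credit(months_before_census, period_months):
    """Hypothetical linear apportionment of credit for a transferring
    researcher: the longer before the census date the move occurred, the
    larger the share awarded to the recruiting institution. Illustrative
    only; the PBRF's actual scale is not given in the text above."""
    share_new = min(months_before_census, period_months) / period_months
    return {"recruiting_institution": share_new,
            "previous_institution": 1.0 - share_new}

# A researcher who transferred 18 months before the census date,
# within a 72-month assessment period:
print(apportion_credit(18, 72))
# {'recruiting_institution': 0.25, 'previous_institution': 0.75}
```

Under any such rule, the institution that hosted most of the work retains most of the credit, removing the incentive described above.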

Clearly it is important for academics to educate undergraduate and postgraduate students about plagiarism and its avoidance. However, we should be aware of the wider context that is set by universities and governments as we assess the ethical framework that surrounds this issue and the message that is sent to students and the wider community. As academics, we should be wary of assessment games that allow goals scored to follow players as they change teams, rather than strictly allocating the goals to the team for which they were scored.

Copyright information

© Springer Science+Business Media B.V. 2012