Introduction

The UK higher education (HE) system is made up of about 130 universities, ranging from the ancient to the new, the small to the large and from those that offer a wide range of programmes to those that specialise in particular disciplines. The development of universities in the UK has consisted of a number of distinct phases of expansion after the establishment of the six ancient universities of Oxford, Cambridge, St Andrews, Glasgow, Aberdeen and Edinburgh, which date back to mediaeval times. The nineteenth century saw the creation of a few more, including Durham and the various London and Welsh collegiate institutions. There was then little further activity until the first half of the twentieth century, when 13 civic universities were created in cities such as Liverpool, Bristol, Nottingham and Southampton. The Robbins Report of 1963 led to a large expansion in the 1960s of more than 20 universities, in large cities such as Newcastle and Bradford but also in smaller towns and cities such as Loughborough and Bath. The Further and Higher Education Act (FHEA) of 1992 (Department of Education and Science, 1992) saw the end of the binary divide between the universities, the polytechnics and other institutions when more than 30 polytechnics and several Scottish Central Institutions were awarded university status. Many were in towns and cities such as Birmingham, Leeds, Manchester and Brighton that already had a university.

The rise in the number of UK universities as a result of the addition of the post-1992 universities was substantial, but the 2000s brought a second wave of new university creation with an even larger expansion of more than 40 new universities, taking the total to around 130. This second wave of new universities consisted largely of former colleges of education and specialist colleges, many with quite small student numbers compared with their older counterparts. Table 1 shows clearly the acceleration in university formation. The increase in the number and variety of universities has caused many, especially the more established ones, to try to differentiate themselves from others with very different aims by forming mission groups around common themes. These include the prestigious Russell Group of 24 research-intensive universities, the MillionPlus group of 20 new universities whose declared aim is to widen access and extend opportunity, and the University Alliance of 18 mainly new universities that aim to drive innovation and enterprise through research and teaching. The increased importance of the contribution of new universities is underlined by the fact that there are now more new than old universities in the UK. The FHEA swept away barriers to competition between universities, polytechnics and colleges and allowed them to compete with one another in the market for students and staff.

Table 1 UK universities by period of establishment

The major changes to the structure of the UK HE system ushered in by the FHEA were accompanied by the rise to prominence of formal research evaluation exercises (REEs) such as the Research Assessment Exercises (RAEs) of 1992, 1996, 2001 and 2008 and the Research Excellence Framework (REF) of 2014. Before this, the UK REE of 1986 was the first attempt anywhere in the world to measure formally the quality of research output from higher education institutions. It was undertaken at the behest of the Thatcher government to put in place a system in which research quality was rewarded and encouraged, with research funding based on the quality of research outputs. Soon after, in 1989, a ‘research selectivity exercise’ saw the introduction of ‘units of assessment’ (UOA) to represent subject areas. A major criticism of the 1986 exercise was that institutions felt unable to demonstrate the full extent of their research prowess when permitted to submit just two pieces of research per member of staff, and from the 1992 RAE onwards institutions were allowed to submit up to four research papers per member of staff. Yet even this higher limit has not been without its critics, and it has been suggested that ‘it discriminates against highly productive world-leading researchers’ (Matthews, 2016). Over the years many writers have claimed that the UK HE landscape has been transformed by universities and researchers reacting both consciously and unconsciously to REEs. Among the main criticisms are that researchers engage in a variety of dubious activities: ‘salami slicing’, or spreading across several papers what before the advent of REEs would have appeared in just one; ‘shadow publishing’, or publishing essentially the same work in different forms in several different journals; and short-termism leading to the premature submission of articles. Some writers also point to the rapid increase in the number of journals and the creation of a vibrant market for strong researchers who can sell their services to the highest bidder (Elton, 2000; Sharp and Coleman, 2005).

Over time, there have been many changes in the methods used to assess research quality, but the prime aim of REEs, to focus research monies on those areas and those universities that have demonstrated their ability to produce the highest quality research, has been maintained. It is not the purpose of this paper to describe in detail the changes that have taken place over the years, for these were many and varied (Bence and Oppenheim 2005; Otley 2010; Radice 2013; Rebora and Turri 2013). Nor is it the intention to discuss the underlying approaches to the consequences for management control systems in higher education as a response to research assessments (Smith et al. 2011; Rebora and Turri 2013; Agyemang and Broadbent 2015; Maesse 2017). Nevertheless, some brief comments on the main changes in the regime are useful in the context of the approach taken here. The first major change is in the length of the assessment period, which has risen from three to four to five and latterly to seven years. This reflects the increasing scale and complexity of the assessment process. The second change is in the rating scales used to assess research quality. Initially a simple 5-point scale was used, but this was changed to a 7-point scale and eventually to a profile system. Other factors such as research environment and esteem were also formally included in 2008, and in the 2014 REF research impact was assessed for the first time (Broadbent 2010; Pidd and Broadbent 2015). The final change is in funding arrangements. Over the years, the changes have been very significant. The trend has been to reward the highest quality research with more funding and to take funding away from research that is not of the highest quality. To illustrate, in the early days lower quality research was given some funding, albeit less than higher quality research, but since 2008 everything below 3* (internationally excellent) and 4* (world leading), that is, 1* (nationally recognised), 2* (internationally recognised) and unclassified (below nationally recognised) work, is unfunded. For the 2014 REF, the ratio of funding for 4* to 3* research was raised from the 2008 figure of 3:1 to 4:1, emphasising the increasing importance of the highest quality research. Despite this, Otley (2010) found evidence that in 2008 funding was more evenly distributed than in earlier years, but this appears to be an anomaly connected with the move from a single-number rating system to one based on profiles.
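To make the funding mechanism concrete, the sketch below shows how a quality profile might translate into a funding weight under the post-2008 rules described above, with 4* work weighted 4:1 against 3* work (3:1 in 2008) and nothing below 3* funded. This is a minimal illustration only: the actual allocation formula also takes account of the volume of staff submitted and subject cost weights, and the function name and example profile are invented for illustration.

```python
# Minimal sketch (not the official HEFCE formula) of how a quality profile
# might translate into a quality weight under the funding rules described above:
# only 3* and 4* work attracts funding, with 4* weighted 4:1 against 3* for
# REF 2014 (3:1 in 2008). Real allocations also depend on the volume of staff
# submitted and on subject cost weights.

def quality_weight(profile, four_star=4.0, three_star=1.0):
    """profile maps star levels to percentages of outputs, e.g. {'4*': 30, '3*': 45}."""
    return (profile.get('4*', 0) * four_star +
            profile.get('3*', 0) * three_star) / 100.0

example = {'4*': 30, '3*': 45, '2*': 20, '1*': 5}   # hypothetical submission
print(quality_weight(example))                      # 1.65 under the 4:1 rule
print(quality_weight(example, four_star=3.0))       # 1.35 under the 2008 3:1 rule
```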

Despite these changes, some things have remained largely unchanged. Universities have always had a choice as to which UOAs to enter and how many staff to enter. For example, Russell Group institutions have entered many more UOAs than some other universities: most Russell Group members averaged more than 40 UOA entries over the period 1996–2014, while new universities typically entered fewer than 20. As regards the staff entered into REEs, following the Stern Review (Department for Business, Energy & Industrial Strategy, 2016), the rules for REF 2022 may change to include all relevant staff as opposed to those selected by universities. The use of peer assessment rather than metrics has remained at the heart of the assessment process despite the increasing availability of more sophisticated bibliometric methods, and it is likely to continue into the future.

Geuna and Martin (2001), Lucas (2006) and Hicks (2012) are among the researchers who have explored performance-based research funding, a system that is gaining ground across the world. To be classed as performance-based, a system has to meet a number of conditions: research has to be evaluated ex post, research output (not input) has to be evaluated, funding must depend on the results of the evaluations and, finally, it must be organised on a national basis. The UK is one of many countries that now assess research quality as a means of identifying areas most worthy of funding. Across the world in 2010, Hicks (2012) identified at least 14 such systems, most of which were in Europe. An interesting feature is the diversity of ways of measuring the quality of research in different countries. The UK continues to be extremely active in its use of REEs, to the extent that REEs have become a vitally important part of the activities of many UK universities, especially the older ones which traditionally have a strong research role. Excellent results mean more research funding and more league table points that enable UK universities to compete successfully at home and on a world stage. Increasing globalisation and the need to enhance international competitiveness behove governments to search for means of gaining an advantage (Lucas 2006). World class research will help universities to do that. REEs will continue to evolve and become more refined, but their purpose remains unchanged. It is the unintended consequences, some of which have already been pointed out, that may give rise to concern. Not doing well in REEs can threaten whole subject areas when degree programmes are closed as a consequence of research funds being cut or completely withdrawn. Also, as reported by Lee (2007) and Earle (2017) with respect to economics research, it may lead to a narrowing of the subject as researchers focus on what is needed to meet the demands of the REF, and to possible intellectual stagnation as the unorthodox is eschewed in favour of mainstream contributions.

The Stern Review of REF 2014 and the responses from interested parties such as universities and other bodies indicate some areas of consensus but a number of areas of concern. For the next REF in about 2022, the role of impact is said to be enhanced but metrics will not replace peer assessment. Lord Stern will take into account the responses before he arrives at the format for REF 2022.

The aim of this study is to investigate the relationship between research performance as measured in REEs and the structural changes to economics education that have taken place in the last two decades in the UK, with reference to undergraduate economics programmes (Footnote 1). To achieve this aim, four questions are posed:

  • Question 1: Has UK economics research undergone a socio-geographic fragmentation similar to that of economics programmes?

  • Question 2: Is poor performance in the E&E UOA a factor in universities pulling out of subsequent E&E UOAs?

  • Question 3: Do universities that have moved from the E&E UOA to the B&M UOA improve their performance in the B&M UOA?

  • Question 4: Is the decision to stay in or withdraw from the E&E UOA connected with the retention/withdrawal of economics undergraduate programmes; and for the withdrawers, did withdrawal from the E&E UOA precede, follow or coincide with the closure of economics programmes?

The absence of undergraduate economics programmes and the failure to submit to the discipline’s UOA in REEs suggest that the subject is not meeting the ‘gold standard’ of strong teaching married to strong research. If teaching is not underpinned by research, the quality of the student experience may suffer. Old universities have long been active in research as well as teaching, though the two have not always been easy bedfellows (Bessant et al., 2003). Many new universities have traditionally focused on teaching to the detriment of research activity. Though academic research in this field does not provide a clear answer to the question of whether research-informed teaching is necessarily better than teaching not informed by research, many would agree (among them most universities) that a nexus of research and teaching is a vital element of the activities of a successful university (Robertson 2007; Visser-Wijnveen et al. 2010). It is in this context that the paper sets out to explore the relationship between economics research and economics programmes between 1992 and 2014. The economics programmes used for this purpose are predominantly undergraduate degree programmes entitled BA (Hons) Economics or BSc (Hons) Economics. Also included, though in much smaller numbers, are BA (Hons) or BSc (Hons) Business Economics and BA (Hons) or BSc (Hons) Financial Economics. The analysis excludes universities that offer economics only as part of major, joint or minor programmes.

The paper is organised as follows. The next section discusses the existing literature on the socio-geographic fragmentation of economics education in the UK HE system and provides an update of Johnston et al.’s (2014) results. A rationale for the rise to prominence of REEs is provided, along with a short assessment of the literature on how the increased attention given to formal REEs has affected the UK HE system. The extent to which an institution’s REE score in the Economics and Econometrics (E&E) UOA is associated with changes in future support for research in the E&E area is then explored. Whether switching from E&E to B&M boosts an institution’s REE scores is then analysed. The link between research performance and programme retention and closure decisions is discussed. Finally, conclusions and suggestions for further research are set out.

The socio-geographic fragmentation of economics education in the UK HE system

Johnston et al. (2014) document the socio-geographic fragmentation of undergraduate economics education in the UK HE system. Their study revealed that many of the UK’s new universities have closed their undergraduate economics programmes over the last two decades. The programmes concerned were mainly Economics but also included Business Economics and Financial Economics. In fact, the best predictor of whether a university offered an undergraduate economics programme at the time of their survey was whether it was old or new. Not only were new universities less likely to offer an undergraduate economics programme, but they had also experienced a significantly higher undergraduate economics programme closure rate over the period. Between 2003 and 2012, 16 universities in the UK removed an undergraduate economics programme, and all but two were post-1992 universities. It is important to recognise that this contraction came against the backdrop of rising student demand for undergraduate economics programmes, most of which was met by growth in the numbers studying at the UK’s old universities. Analysis of university websites and The Complete University Guide 2017 showed that the current situation is a little more encouraging, with the last 5 years seeing an increase of eight post-1992 universities and four post-2000 universities offering a single undergraduate degree programme entitled ‘Economics’. Among the universities that had undergraduate programmes in Business Economics or Financial Economics, four added Economics, three removed Business Economics and one removed Financial Economics from their portfolios. Thus, as of 2017, a total of 79 UK universities (51 old, 24 post-1992 and 4 post-2000) offered single undergraduate degrees entitled ‘Economics’ (Table 2).

Table 2 Universities offering an undergraduate degree entitled ‘Economics’, 2017

Trends similar to those observed in the UK have been observed in other countries: Siegfried (2014) reports an increase in the demand for economics programmes in the US between 2007 and 2010, albeit one that appears to have stalled between 2011 and 2013; Lodewijks and Stokes (2014) uncover a similar picture of growing elitism in economics education in Australian universities. There is also evidence that the binary divide in the UK that the FHEA of 1992 was designed to remove may have reasserted itself, with new universities now offering more vocational courses such as Business Studies while their older counterparts focus on more traditional academic disciplines such as Economics (Talbot et al. 2014). The new evidence from this study is somewhat more encouraging in that since 2012 there has been a net increase of ten new universities offering an undergraduate programme entitled Economics, with six in the North or Midlands and just four in the South. Nonetheless, the imbalance between new and old universities in the absorption of new demand seen in the UK may be important because new universities are the primary conduit through which students from lower socio-economic groups access the UK’s HE system (Boliver 2015; Johnston et al. 2014). Johnston and Reeves (2015b) raise concerns that an economics education may be in the process of becoming an elite pursuit, restricted to higher income groups.

Johnston and Reeves (2015a) found that running in parallel with the growth in the concentration of economics programmes in less accessible institutions is an equally striking shift in the geographical distribution of economics provision. They highlighted the fact that at the time of their survey there were no undergraduate economics programmes in new universities in Scotland. Something of a north-south divide in the supply of undergraduate economics programmes in the new universities would appear to have developed, with the new universities that offered the subject located mainly in London and the south of England. As there is some evidence that middle class students prefer Economics and working class students Business Studies (Office for Fair Access 2010), it might be expected that universities, irrespective of whether they are old or new, that draw their intake from more affluent areas of the UK are more likely to continue to offer the subject. However, given that most universities draw students from outside of their local area, this is only likely to be part of the explanation of the socio-geographic fragmentation of undergraduate economics programmes.

The current situation is presented in Table 3, which gives the number of universities classified as old, post-1992 and post-2000 in selected areas of the UK. Precise definitions of the North, Midlands and South are the cause of much heated debate in some quarters, but here, fairly conventionally, the North is defined as Cheshire, Merseyside, Lancashire, Greater Manchester, Yorkshire and everywhere further north; the South is London, the South-East and the South-West of England; and the Midlands is everything in between. Wales and Northern Ireland are treated separately. The table shows that the South dominates, especially for new universities, but the changes since 2012 have redressed the balance somewhat, with the North now having one quarter of undergraduate programmes in ‘Economics’ in new universities. Scotland and Northern Ireland, though, still have no provision among new universities, and Wales has but one.

Table 3 The number of UK universities offering a degree entitled Economics by area 2017

The rise to prominence of the REE in the UK HE system

In the year 2014–15, the HE Funding Council for England (HEFCE) distributed around £1.6 billion in funds, with the majority allocated on the basis of the outcome of the 2008 RAE. Estimates of the cost of REEs in the UK vary dramatically, depending on the assumptions made. HEFCE (2008), based on a study by PA Consulting, estimate the total sector cost for all higher education institutions in England of the 2008 RAE at £47 million, but others have pointed to a figure perhaps as high as £200 million (Jump 2014). As a result of REEs, universities have sought to build up centres of research excellence from funds allocated according to the quality of the research output. REEs attempt to capture the complexity and diversity of the research produced by universities in a small set of simple numbers that enable the research productivity of different individuals, teams and institutions to be ranked. The results of REEs underpin resource allocation both between and within individual institutions, and so it is difficult to overestimate their importance. High ratings will mean more funds, and low ratings may mean no funding at all, putting departments and academic jobs at risk. One interpretation of the ongoing commitment by policy makers of scarce resources to REEs is that such exercises yield sufficient offsetting benefits in the shape of higher quality information on research performance. This information should lead to improved efficiency in the allocation of resources and in the long run to an increase in the quality-adjusted volume of research output. Viewed from this perspective, the REE can be thought of as a response to the difficulty of quantifying the outputs from research activity. REE scores allow state support for research to be directed to those deemed by the independent panels of experts best able to make use of it.

University managers will often lack the expertise required to gauge accurately the quality of the research produced by academic staff across a variety of disciplines. The academics themselves may seek to exploit any informational asymmetry by supplying an overly optimistic view of the quality of their research efforts. Given that UOA panels are composed of independent experts, their evaluations have the potential to alleviate this information asymmetry and to enable university managers to assess more accurately the quality of research output. While research groups may have had greater opportunity to exploit information gaps in the past, the development of formal REEs ought to have reduced the scope for such opportunistic behaviour. It would be surprising if the data on research quality provided by REEs were not used to inform decisions on whether to continue to support research in a particular UOA. In addition, the information provided by REEs may show that some UOAs operate with less demanding standards than others. If institutions believe, rightly or wrongly, that any given set of outputs will be treated more favourably in one UOA than another, then this will provide an incentive to submit to the more ‘profitable’ UOA. The scores produced by REEs are likely to affect research activity directly but programme provision indirectly. For any given level of student fee income, the lower the research performance of an institution in a REE, the more likely it is that the institution will simply decide to withdraw support from the subject entirely. The HE system as a whole will change as the individual institutions of which it is composed make decisions about what to teach and what to research in the light of REE scores.

Proponents of this approach to the evaluation of research would argue that without the results of REEs, scarce resources would be squandered on low quality programmes and research, and that the creation and implementation of this system will have identified and removed poor performers. However, once these ‘low hanging fruit’ have been picked, critics such as Geuna and Martin (2001) question the merits of the approach. They contend that the ostensible meritocracy of the ex post peer review approach may simply entrench the interests of first-movers and work against future research potential, and that it may in addition lower the willingness of researchers to engage in riskier research projects. Others, such as Docherty (2015), express concern over the potential loss of institutional autonomy as the state, through its ability to shape the REE process, uses its influence to alter the kinds of research carried out in universities; for example, the shift in emphasis away from publications towards measures of ‘impact’ is seen by critics as an attempt to tie the research efforts of universities more closely to the needs of the economy. Sayer (2015) maintains that the inadequate composition and mode of operation of some of the assessment committees means that they may fail to evaluate accurately the outputs put before them. Good research may be classified as bad, and bad as good. Agyemang and Broadbent (2015) go further and suggest that the academic community itself, through its willingness to internalise the externally imposed exercise in ‘commensuration’—that is, the research assessment regime—may have contributed to its own subjugation to university management.

A simple model of the UOA submission decision and programme retention

Figure 1 illustrates how REE research scores and programme retention might be related to one another. Though the cases included in the figure do not capture the actual experience of real universities—they are intended to illustrate some of the key possibilities—actual REE dates have been imposed on the model to link it more closely to the empirical work carried out in the study (Footnote 2). To interpret the figure, note that the vertical lines indicate the timing of different REEs, running from 1992 to 2014. Given the focus of the paper, it is natural to use the E&E UOA to illustrate the various possibilities. In case III, for example, the university’s last submission to the E&E UOA was in 1996 but it was still offering a programme in 2014. Case IV shows an institution that withdrew from the REE in 1996 and closed its programme around 2003.

Fig. 1 E&E UOA submission and economics programme status

If the results of REEs are important in decisions on whether to continue to support research in E&E and if this decision in turn influences the decision on whether to retain or close a programme, then we would expect cases I, IV, VIII and IX to occur frequently. The reason for this expectation is that case I represents the ‘gold standard’ in that it picks out universities that have always submitted to the E&E UOA and offered an economics undergraduate programme throughout the period under consideration. At the other end of the spectrum, the absence of a submission to the E&E UOA at any time may be taken as an indication of weak research capability in E&E. As such, case IX, which captures institutions that have never engaged with the subject either by offering a programme or submitting to the UOA, should be fairly common, as should case VIII. In cases IV and V, institutions withdraw programmes and exit the E&E UOA. The difference between the two cases is the sequence of these events. In case IV pulling out of the UOA precedes programme closure, whereas in case V, the programme is closed before the institution removes itself from the UOA. If poor research performance in a REE raises the risk of programme closure, then case IV should be more common than case V. Case II should also be rare as it is for institutions that have removed programmes but have submitted to the UOA in all REEs. If a group’s research performance is deemed by university managers to be of high enough quality to justify continued submission to the UOA, it would be surprising if such a group did not also offer a programme. Case III institutions have withdrawn from the UOA but may not yet have fully adjusted to the implications of this choice for their programmes. Case VI universities are those that find it worthwhile to offer a programme that is not underpinned by UOA entry. Teaching-led universities such as those in the MillionPlus group and the University Alliance in areas where programme demand is strong would fall into this category. Case VII should be very rare. This would involve institutions with research deemed by management good enough to be submitted to the UOA in every REE but where an associated programme has never been offered. Large numbers in cases II, V and VII would suggest that the information produced by REEs does not carry a large weight in the decision making of university managers.
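For readers who prefer a compact statement of the taxonomy, the sketch below codes the nine cases from three pieces of information per university: its E&E UOA submission history, its programme history and, where both have ended, the order of the two events. The coding, field names and example record are illustrative assumptions, not the authors’ actual dataset.

```python
# Illustrative sketch of the case taxonomy in Fig. 1. The inputs are assumed
# codings of a university's E&E UOA and programme histories, not the study's data.

def classify_case(always_uoa, ever_uoa, uoa_exit_year,
                  ever_programme, programme_open, programme_close_year):
    if always_uoa:                                   # submitted to every REE
        return 'I' if programme_open else ('II' if ever_programme else 'VII')
    if not ever_uoa:                                 # never submitted to E&E
        if not ever_programme:
            return 'IX'
        return 'VI' if programme_open else 'VIII'
    # entered then withdrew from the E&E UOA
    if programme_open:
        return 'III'
    if not ever_programme:
        return 'other'                               # one such anomaly is noted later
    # both withdrawn: the order of events separates case IV from case V
    return 'IV' if uoa_exit_year < programme_close_year else 'V'

# Example: last E&E submission in 1996, programme closed in 2003 -> case IV
print(classify_case(False, True, 1996, True, False, 2003))
```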

Data and methods

The basis of assessment of the quality of research done in UK universities is the UOA. Between 1992 and 2008, the number of UOAs was little changed, with around 70 units in total; the REF of 2014 saw the number reduced to just 36. Over the years, with subject developments and other changes, the titles and compositions of many units have changed, so that of the original 72, just 14 have remained the same from 1992 to 2014. These include the E&E UOA, the subject of this paper, which, having had the same title and very similar content throughout, allows a longitudinal study to be undertaken. Secondary data gleaned from official UCAS, HESA and REE publications (RAEs 1992, 2008, REF 2014a, b) on the entire population of around 129 universities, roughly 50 old and 80 new, are used in the study. All universities are assigned to one of three groups in relation to programme status at the observation point (2014): retainers, closers and those that have never had an economics programme. With the growth in the number of universities around 1992 and in the 2000s, the early years had fewer universities than the later years. Table 4 shows that of the 129 UK universities examined in the study, 67 continued to offer an economics programme in 2014, with 16 having closed an economics programme and the remaining 46 universities never having offered one. E&E UOA status consists of four possibilities—always in the E&E UOA, withdrawn from the E&E UOA, never in the E&E UOA and late entrant to the E&E UOA. Over the period 1992 to 2014, 26 universities were always in the E&E UOA, 35 had entered and withdrawn at some point, 66 had never entered and there were two late entrants.
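Given these two classifications, a table such as Table 4 is simply a cross-tabulation of UOA status against programme status. The sketch below shows one way this could be reproduced; the records are placeholders standing in for the study’s dataset.

```python
# Minimal sketch of the cross-tabulation behind Table 4: each university carries
# an E&E UOA status and an economics programme status. The rows here are
# placeholders, not the actual population of 129 universities.
import pandas as pd

universities = pd.DataFrame(
    [
        ('University A', 'always in',    'retainer'),
        ('University B', 'withdrawn',    'closer'),
        ('University C', 'never in',     'never offered'),
        ('University D', 'late entrant', 'retainer'),
    ],
    columns=['university', 'uoa_status', 'programme_status'],
)

print(pd.crosstab(universities['uoa_status'],
                  universities['programme_status'], margins=True))
```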

Table 4 All universities’ E&E UOA status and economics programme status 1992–2014

Though data on REE scores and UOA and programme retention decisions over time may shed light on how each is related to the other, it is important to bear in mind that just because one event occurs before or after another does not mean that the two are causally related (the fallacy known as post hoc ergo propter hoc). The findings have to be carefully interpreted and important caveats borne in mind. If, for example, a low score in a REE is found to be associated with an increased likelihood of withdrawing from a subsequent REE, to say that the low research score is the cause of the increased likelihood of withdrawal would only be valid to the extent that all of the other possible determinants of the decision to withdraw that may vary across universities have remained constant over time. Similarly, in the case of a university that withdraws from one UOA and submits to another in the next REE, to attribute any higher score solely to the decision to submit to another UOA would require all other possible determinants of an institution’s REE score to remain unchanged. The difficulty for the researcher is that many of the factors that might conceivably influence UOA submission and programme retention decisions such as changes in management or strategic focus are inherently difficult to identify, let alone measure.

Furthermore, there are other limitations stemming from the method used in this study. The data are right censored at the observation date: programmes may still be closed, and universities may still withdraw from a UOA, beyond this point, and any such changes will not be picked up. The data are also left censored in that they do not allow an assessment of the relationship between research performance and research support, or between research support and programme retention, before the initiation of formal REEs. It is also recognised that the definition of what constitutes economics research and economics teaching used in this paper is specific, and a wider interpretation may yield different results.

Results

  • Question 1: Has economics research experienced a socio-geographic fragmentation similar to that of economics education?

REE data plotted in Fig. 2 show that the total number of UK universities that submitted to the E&E UOA fell from 60 in 1992 to 28 in 2014, while the number of old universities fell from 49 in 1992 to 28 in 2014. This decline is modest when compared to the experience of new universities. All 13 new universities that had submitted to the E&E UOA in 1992 had withdrawn from the unit by 2014, and any latecomers had also withdrawn by 2014. To tease out whether this stark decline in economics research in the UK’s new universities was discipline-specific or more general, the experience of new universities in the closely related field of B&M was considered.

Fig. 2 The number of UK universities submitting to the E&E UOA (1992–2014)

Figure 3 shows that while the percentage of submissions to the E&E UOA from new universities fell from 22% in 1992 to zero in 2014, the percentage of submissions to the B&M UOA showed a comparatively modest fall from 53% in 1992 to 46% in 2014. While the movement of old universities from the E&E UOA to the B&M UOA will partly be responsible for this fall, it may also be the result of the withdrawal of new universities previously included in the B&M UOA.

Fig. 3 The percentage of submissions from new UK universities in the E&E and B&M UOAs, 1992–2014

Just as new universities have retreated from economics research so also have some old institutions. It emerges from further investigation of the data that 16 of the 28 submissions to the E&E UOA in 2014 were from Russell Group universities with another six coming from the now defunct 1994 group, which consisted of smaller universities with a strong research focus. Two former 1994 group members also submitted, bringing the total to 24 out of 28 drawn from this elite group of research-intensive universities. To the extent that submission to the E&E UOA is the most important indicator of the presence of serious economics research in a university, it would seem that such research has become the preserve of a small and shrinking group of elite universities.

It is also apparent that the increased concentration of economics research in the UK’s elite universities has been accompanied by a reconfiguration of its geographical distribution across the UK. The institutions that have withdrawn from serious economics research have not been evenly spread across the regions of the UK. Economics research in the north of England and in the Celtic nations would appear to have struggled to compete successfully. None of the Welsh or Northern Irish universities was included in the E&E UOA in 2014, and above a line between Preston and Sheffield in the north of England very few universities were entered; only the Scottish ancient universities of St Andrews, Edinburgh, Aberdeen and Glasgow had an economics submission. There may be something of a cluster effect here, with a large proportion of the universities entered into the E&E UOA being based in London and the south of England. Given the concentration of economic and political power in London, this should not come as a surprise. Referring back to the question posed at the start of this section, it is clear that economics research has gone through a socio-geographic fragmentation similar to that found in economics education. However, at this stage, it is important to stress that just because these two aspects of the development of this particular subject appear to have followed a similar path does not necessarily mean that they are causally related.

  • Question 2: Is poor performance in the E&E UOA a factor in universities pulling out of subsequent E&E UOAs?

What explains the withdrawals from the E&E UOA? Is it simply that those that have scored highly in the past stay and those that have not, leave? Though the data presented above provide useful information on the link between research performance and the decision on whether to submit to an E&E UOA at the next REE, they do not relate directly to why this pattern is observed. A number of possible explanations suggest themselves. The first relates to institutional expectations. Research groups that perform below their universities’ expectations may be candidates to be pulled out of the next E&E UOA (Johnston and Reeves 2017). One of the consequences of performing less well than expected may be a cut in funding or, very likely, no funding at all (Lee et al. 2013; Sayer 2015). Unfortunately, it is very difficult for an outside observer to know the expected rating for any given university. Take, for example, the LSE, which has had very high scores in the five E&E UOAs (1992: 5; 1996: 5*; 2001: 5*; 2008: 3.55, ranked 1st; 2014: 3.55, ranked 2nd) (Footnote 3). Given its record, it would be safe to assume that the LSE would expect the highest ratings in any future E&E UOAs, and any less would be a blow to its reputation and could lead to a drop in internal support. Protection would undoubtedly come from the large scale of research activity in institutions like the LSE. Second, at the beginning of the RAE in 1992, many universities would have been unaware of the standards necessary to achieve a particular rating, and some would have been very disappointed and possibly somewhat embarrassed to receive ratings as low as 1 and 2. Avoiding humiliation may have been the reason why they did not re-enter the E&E UOA. Third, the relatively small number of high quality submissions to the E&E UOA increases the likelihood of an institution being nearer the bottom of the rankings, damaging its reputational capital. University managers, with league tables in mind, may prefer to be lost in the mass of the B&M UOA table than to occupy a place towards the bottom of the E&E rankings. Fourth, the importance university managers attach to the results of REEs may reflect not concern over their impact on a university’s staff and students but concern over how they influence the remuneration and future career prospects of the managers concerned. In these circumstances, starving areas that have strong student numbers and research in order to avoid closing university departments is likely to become increasingly unpalatable. Once the rot has set in in an area, the danger is that no new appointments are made, and the failure of the subject area is locked in. Programmes can be closed at any time, but REEs take place only about every six years, so it is unlikely the two would coincide exactly.

Evidence on how the results produced by REEs are used by university managers in internal resource allocation decisions is difficult to come by. One possibility is that REE scores are highly weighted in internal decision making, and institutions respond quickly and strongly to the latest set of REE results. Alternatively, managers may not attach great importance to the scores, or may use them only as part of a broader set of measures of an area’s performance. Which of these competing views is the more accurate cannot be answered on an a priori basis, and ultimately only relevant empirical evidence can help us to better understand what has happened. Table 5 lists the universities that pulled out of the E&E UOA between the first RAE in 1992 and the RAE of 2008. Universities shown in italics are new; the rest are old. A total of 35 universities (22 old and 13 new) withdrew from the E&E UOA over the whole period. There were ten withdrawals after the 1992 RAE, 13 after the 1996 RAE, seven after the 2001 RAE and seven after the 2008 RAE, making a total of 37 withdrawals (Footnote 4). Durham and Kingston withdrew twice; Brunel did not submit in 1996 but re-entered in 2001.

Table 5 Ratings and rankings of E&E UOA withdrawers in the E&E and B&M UOAs

Two measures of performance are presented in Table 5: first, the score achieved and, second, the ranking in the UOA. Note that the number to the right of the slash is the number of submissions to the UOA in the year that the institution last submitted. The number to the left is the institution’s ranking, with the equals sign indicating that the ranking was shared with at least one other institution. Quite how university managers interpret league table data is unclear. Take the case of an institution that achieves the same score in two consecutive REEs, but that score places it 28th out of 60 in the first REE and 28th out of 28 in the other: how does the institution interpret this information to assess performance? Using a relativist interpretation of the data, it might be argued that in the earlier REE the institution is in the top half of all submissions and has done relatively well, while in the later submission it has finished last and so has done very poorly. Equally, however, it might be argued that as the institution has maintained its score, it has maintained its performance, which would reflect an absolutist stance.

Though it is difficult to know how university managers evaluate the information produced by REEs, we can nonetheless draw some useful inferences from the empirical relationship between results and subsequent decisions. Columns 3 and 4 of Table 5 show the rating and ranking respectively in the E&E UOA at the time of the last E&E UOA entry. For 2014, only the ranking is shown. Thus, for example, Reading’s last entry was in 1996, when its rating of 4 put it in equal 14th place out of an entry of 50; Dundee’s last entry, in 2008, came with a rating of 2.45 and a ranking of equal 31st. What is apparent from the ratings of those universities that pulled out of the E&E UOA is that none of them ever had a top rating of 5 or 5* in any E&E UOA. A top rating of 5 or 5* was thus effectively a guarantee of continuation in the E&E UOA; withdrawals occurred only among institutions rated below this level, although a lower rating could be followed either by continuation or by a later withdrawal. This is a strong result that emphasises the importance of REE performance in university funding decisions.

At the other end of the rating scale were the four (all new) universities that left the E&E UOA after being awarded the lowest rating of 1. Among these, two left after 1992, but Abertay and Thames Valley (now West London) continued after receiving a rating of 1 in 1992, leaving in 1996 after ratings of 2 and 1, respectively. This illustrates how different institutions may respond differently to the same score. Four universities left after receiving a rating of 2, but five universities also with a 2 rating continued to the next E&E UOA (East London, Manchester Metropolitan, Northumbria, Queen’s Belfast and Salford). All but Manchester Metropolitan subsequently dropped out after one more submission. What of the ratings 3, 3a and 3b? As the table shows, 14 universities dropped out after receiving ratings of 3, 3a or 3b; for these universities, the rating was not high enough to justify staying in the E&E UOA. It was, though, for City, Kent, Leicester, Loughborough, Manchester, Manchester Metropolitan, Surrey, Dundee, Edinburgh, St Andrews and Stirling, all but Manchester Metropolitan being old universities. Five universities with a rating of 4 (Durham, Liverpool, Newcastle, Reading and Strathclyde) also dropped out, contrasting with 21 universities with the same rating in at least one E&E UOA that continued.

Another perspective on how REE scores and future research activity are linked is shown in Fig. 4, which sets out the proportion of universities withdrawing from the E&E UOA for each REE score from 1 (the lowest) to 5 (the highest) (Footnote 5). The bars show the proportion that withdrew from the next E&E UOA for each score. Thus, for a REE score of 1, a proportion equal to 1 signifies that all universities with that score withdrew from the next REE; at the other extreme, a proportion of zero for a score of 5 means that none withdrew. The shape of the chart shows that the higher the REE score, the more likely it was that universities would stay in the E&E UOA, and vice versa.

Fig. 4 Proportions of UK universities withdrawing from the E&E UOA after various REE scores (UK REEs 1992–2008)
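A minimal sketch of the calculation behind Fig. 4 is given below: for each score band, the withdrawal proportion is simply one minus the share of universities that submitted to the next E&E UOA. The records are hypothetical stand-ins for the 1992–2008 REE results used in the figure.

```python
# Sketch of the Fig. 4 calculation: proportion of universities withdrawing from
# the next E&E UOA, by score in the current REE. The rows are hypothetical.
import pandas as pd

submissions = pd.DataFrame(
    [
        # (score in REE t, submitted to the E&E UOA at REE t+1?)
        (1, False), (1, False), (2, False), (2, True),
        (3, True), (3, False), (4, True), (4, True), (5, True),
    ],
    columns=['score', 'stayed'],
)

withdrawal_rate = 1 - submissions.groupby('score')['stayed'].mean()
print(withdrawal_rate)   # e.g. score 1 -> 1.0 (all withdrew), score 5 -> 0.0
```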

While it is apparent that universities may respond differently to the same REE scores, some important patterns emerge. For the vast majority of universities, a rating of 4 was sufficient for them to submit to the next E&E UOA. Of the seven universities dropping out after 2008, none was ranked in the top 20 (out of a total of 35) and five occupied the bottom five places. With regard to the question set out at the start of this section, these results suggest that the information on research quality provided by REEs may enter into decision making on whether or not to continue to support research in the area.

  • Question 3: Do universities that move from the E&E UOA to the B&M UOA improve their performance in the B&M UOA?

It has been suggested that it may be easier to get a high rating in UOAs such as B&M than in E&E, where the standards are thought to be more demanding. In E&E, publications in elite journals (e.g. the so-called Diamond list) are essential for a rating high enough to guarantee funding for future research (Lee et al. 2013). It is believed by some that the Diamond list journals are open only to those with specialist training in econometrics and work against researchers in areas that do not excel in econometric techniques (Lodewijks and Stokes 2014). Some universities may therefore have moved out of the E&E UOA into the B&M UOA in the belief that they would achieve a higher rating and consequently boost research funding. Yet, it is important to stress that the assessment panels in UK REEs work in such a way that outputs submitted to the B&M UOA deemed to be more appropriate for the E&E sub-panel are simply cross-referred. Nonetheless, in 2008, in addition to more than 3,000 outputs being directly submitted to the Economics sub-panel, a further 1,240 outputs were cross-referred from the B&M sub-panel (RAE 2008 UOA 34 Subject Overview Report). This suggests that a large number of outputs were ‘wrongly’ categorised by those making submissions. Not only that, but the Economics sub-panel stated that the cross-referred work was generally of lower quality than that submitted directly to it in both 2008 and 2014. Both of these observations are consistent with the possibility that institutions have behaved strategically in relation to where they submit their outputs in the hope of achieving better scores.

To determine whether a shift to B&M from E&E makes a difference to performance in B&M, the data in Table 5 are used to compare the B&M score at the time of the shift with the B&M score at the next REE. Take the case of Abertay, which achieved a score of 1 in both its E&E and B&M submissions in 1996 but achieved a 3b in B&M in the 2001 REE after withdrawing from the E&E UOA, a clear improvement. Careful scrutiny of the data reveals that 13 of those that withdrew from the E&E UOA and already had a B&M entry improved their B&M performance at the next REE; nine experienced no change and only one institution performed less well. So, with respect to question three, almost all of the universities that withdrew from the E&E UOA either improved or at least maintained their previous B&M score (Footnote 6). This is a strong finding but it is one that has to be interpreted cautiously. Of course, the improvement in the B&M score could have been due to other factors not connected with the output of the economists. For example, the quality of the non-economics output may have risen, and the economics output may have had little or no effect on the rating. On the other hand, it is also possible that the B&M UOA ratings would have been lower if the economists’ output had not been included, but without a breakdown of who was awarded what, it is impossible to know.
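The before/after comparison just described can be made explicit by placing ratings on a common ordinal ladder. The sketch below does this for the early RAE scales; the mapping is an assumption made for illustration (plain 3 from the 1992 scale is placed between 3b and 3a), and the 2008/2014 grade-point profiles would instead be compared directly as numbers.

```python
# Hedged sketch of the rating comparison behind question 3. The ladder below is
# an assumed ordinal mapping of the early RAE scales; it is not an official scale.
LADDER = {'1': 1, '2': 2, '3b': 3, '3': 3.5, '3a': 4, '4': 5, '5': 6, '5*': 7}

def direction(before, after):
    """Classify the change between two ratings on the assumed ladder."""
    b, a = LADDER[str(before)], LADDER[str(after)]
    if a > b:
        return 'improved'
    return 'unchanged' if a == b else 'declined'

# Example from the text: Abertay's B&M rating moved from 1 (1996) to 3b (2001)
print(direction('1', '3b'))   # improved
```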

  • Question 4: Is the decision to stay in or withdraw from the E&E UOA connected with the retention/closure of undergraduate programmes in Economics, Business Economics and Financial Economics; and for the withdrawers, did withdrawal from the E&E UOA precede, follow or coincide with the closure of economics programmes?

Figure 1 outlined how decisions on submission to the E&E UOA and the retention of economics programmes may be related to one another. Table 4 provided evidence on the frequency of each of the theoretical possibilities. Column 1 of Table 4 shows that there were 26 case I universities. Uninterrupted submission to the E&E UOA was sufficient for the continuous provision of an economics programme over the period under consideration, a robust finding that highlights the importance of a strong research base in programme provision decisions. There were no universities always in the E&E UOA that had withdrawn (case II) or never offered (case VII) an economics programme. At the other end of the spectrum, there were 45 institutions that had never submitted to the E&E UOA at any REE and had never offered an economics programme (case IX universities). The absence of a research base and of a programme go hand in hand in these cases. However, the table also demonstrates that submission to the E&E UOA was not necessary for a university to offer an economics programme. Column 3 shows that 21 of the 66 universities that had never submitted to the E&E UOA had offered an economics programme between 1992 and 2014. Only 12 of the 21 universities in this group (case VI) continued to offer a programme over the whole period, with just under half (nine of the 21) removing their economics programme (case VIII universities). So, 82% of institutions that had never submitted to the E&E UOA had either never offered, or had offered but removed, an economics programme. Once again this speaks to the link between research prowess and programme provision. The picture for institutions that had entered but withdrawn from the E&E UOA at some point over the period is perhaps a little less clear cut. Of the 35 universities in column 2 that had withdrawn from the E&E UOA at some point, 27 (case III universities) had nonetheless retained an economics programme throughout the entire period, with one fifth (seven of 35) of this group closing their programme (case IV universities) (Footnote 7). Surprisingly, one university had entered and pulled out of the E&E UOA even though it had never offered an economics programme. When summed, cases I, IV, VIII and IX account for 67% (87 of the 129) of the entire population of UK universities. Moreover, it has to be borne in mind that the data are right censored and that universities that have withdrawn from the E&E UOA but have not withdrawn their programme may go on to do so at a later date. In the extreme, if all of the 27 case III universities—that is, those that have withdrawn from the REE but have not yet removed their economics programme—eventually closed their programmes, and this group is added to the existing group, then the resultant combined group accounts for 88%, or 114, of the 129 universities. With reference to question 4, the evidence suggests that the complete absence of a research submission or a very strong research record are strongly associated with programme provision but that between these two extremes the picture is more nuanced. To explore these complexities a little more closely, new and old universities are considered separately.
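The shares quoted above follow directly from the case counts, as the short check below shows; the counts are taken from the text and the percentages are rounded to the nearest whole number.

```python
# Arithmetic behind the shares quoted above: cases I, IV, VIII and IX combined,
# and the upper bound if all case III universities eventually closed their programmes.
counts = {'I': 26, 'III': 27, 'IV': 7, 'VIII': 9, 'IX': 45}
total = 129

core = counts['I'] + counts['IV'] + counts['VIII'] + counts['IX']
print(core, round(100 * core / total))     # 87, 67 (i.e. 67% of universities)

upper = core + counts['III']
print(upper, round(100 * upper / total))   # 114, 88 (the extreme case)
```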

In Tables 6 and 7, old and new universities are considered separately. Column 1 of Table 6 shows that all 28 (26 plus two late entrants) universities that have always had an E&E UOA entry (case I universities) and that have retained their economics programme were old universities. Column 2 shows that there have been 20 old university withdrawers from the E&E UOA that have still retained an economics programme (case III universities), so that only two, Salford and Liverpool, have closed one or more of their economics programmes (case IV universities). Liverpool still retains an economics programme, while Salford still has some economics joint titles. Almost all of the old universities that have withdrawn from the E&E UOA retained an economics programme; in old universities, submission to the E&E UOA does not appear to be tightly linked to the decision to offer an undergraduate programme.

Table 6 Old universities’ E&E UOA status and economics programmes 1992–2014
Table 7 New universities’ E&E UOA status and economics programmes 1992–2014

Column 3 of Table 7 shows that 66 of the 79 new universities have never been in the E&E UOA, and from column 1 it is clear that no new university has always been in the E&E UOA. Column 2 shows that of the 13 new universities that have withdrawn from the E&E UOA, only seven have retained an economics programme (case III universities). This proportion is clearly far lower than the corresponding proportion for old universities and indicates that old universities are less likely than their new counterparts to close a programme following withdrawal from a REE. Remarkably, one new university, Buckinghamshire New (then Buckingham College of Higher Education), submitted to the E&E UOA without having an economics programme. The data show that in new universities, withdrawal from the E&E UOA is more likely to be followed by economics programme closure. For some universities, it is almost as if withdrawal from the E&E UOA signals the death knell for economics, with the subsequent removal of an economics programme and the probable disappearance of many of the economics staff.

Examination of the sequence of decisions on research withdrawal and programme retention may help shed further light on the nature of the relationship between the two. Cases IV and V from Fig. 1 both involve withdrawal from the UOA and closure of a programme. In case IV, the programme closure takes place after withdrawal from the UOA, whereas in case V withdrawal from the UOA follows programme closure. If it is poor performance in a REE that leads to closure of a programme and not the other way round, then we would expect case IV to be more common than case V. Table 8 shows, for each university that closed an economics programme, the dates of programme closure and E&E UOA withdrawal. Also shown is the area in which the university is located, divided as before into four areas: the South, the Midlands, the North, and Other (Wales and Northern Ireland). So, for example, Northumbria’s last submission to the E&E UOA was in 2001, and its Financial Economics programme closed in 2004 and its Economics programme in 2005. The main feature of Table 8 is the disproportionate rate of programme closure in the North compared with other areas. Recall that the North includes Scotland, which saw four of its universities close an economics programme.

Table 8 Economics titles withdrawn between 2003 and 2011, by area with year of last E&E UOA entry

In all cases, the closure of an economics programme occurred after the institution had withdrawn from the E&E UOA. Just as there were no case II or case VII universities, there were no cases where a programme was closed before the institution had pulled out of the E&E UOA (case V universities), nor were there any where both were closed or withdrawn in the same year. The data on programme closure point to the importance of the link between teaching and research. Programme closure did not, however, necessarily occur immediately on withdrawal from the E&E UOA; note the considerable variation in the gap between the two events. Withdrawals from the E&E UOA preceded the programme closures by between 3 and 13 years, with a mean gap of 8.6 years. University strategic plans rarely last more than 5 years before they are revisited and revised, so these gaps may suggest that decisions on research and on programmes were taken under different plans. Nonetheless, with respect to question 4, the evidence suggests that new universities are more likely than their traditional counterparts to remove programmes following a poor research assessment and that programme closures follow on from the publication of REE results.
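The mean gap reported above is simply the average of the per-programme differences between the closure year and the year of the last E&E UOA entry. The sketch below illustrates the calculation; only Northumbria’s dates are taken from the text, and the remaining rows are placeholders standing in for Table 8.

```python
# Sketch of the gap calculation: years between the last E&E UOA entry and
# programme closure. Only the Northumbria rows come from the text; the rest
# are hypothetical placeholders for Table 8.
closures = [
    # (year of last E&E UOA entry, year of programme closure)
    (2001, 2004),   # Northumbria, Financial Economics
    (2001, 2005),   # Northumbria, Economics
    (1996, 2009),   # hypothetical
    (1992, 2005),   # hypothetical
]

gaps = [closed - last_uoa for last_uoa, closed in closures]
print(gaps, sum(gaps) / len(gaps))   # individual gaps and their mean
```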

Conclusions

This study has shown that the socio-geographic fragmentation of economics education documented by Johnston et al. (2014) has been replicated in university economics research and that the two changes appear to be related to one another. The decline of economics research in the UK’s new universities has been so dramatic that by 2014 there were no new universities entered into the E&E UOA in the REF. All 13 new (post-1992) universities present in the 1992 RAE had withdrawn by 2014, and any later entrants had also withdrawn. Thus, submission to the E&E UOA is now restricted to highly selective elite institutions located mainly in the south of the UK. It is instructive to consider how this has come about. Analysis of research scores in the E&E UOA over the period 1992–2014 indicates that all of the institutions achieving the highest scores were submitted to E&E at the next REE, while those institutions whose scores placed them at the lower end of the range of possible results were likely to withdraw. While the response of institutions to scores between these two extremes is less clear-cut, it is apparent that institutions were more likely to pull out of the E&E UOA if scores were towards the lower end of the range. This suggests that the information provided by REEs may have been used by universities in making decisions on whether to continue to support or withdraw from a research area. The likelihood of a university withdrawing from the E&E UOA is found to be inversely related to its score in the previous REE. The possibility that some UOAs may offer higher scores for any given body of work and in so doing provide an incentive to switch from one area to another is also considered. In the case of E&E, the closest alternative unit of assessment is Business and Management. The results show that universities that exit the E&E UOA and move to the B&M UOA boost their scores in B&M.

A positive association between a strong research base, as evidenced by uninterrupted submission to the E&E UOA, and an institution continuing to offer an economics programme is clearly discernible. It also emerges that withdrawal from the UOA is more likely to be associated with programme closure in the new than in the old universities. With regard to the sequencing of decisions on programmes and research, withdrawals from the E&E UOA always preceded the closure of an economics programme, suggesting that causation may run from research performance to programme retention or closure. However, one event does not flow automatically or immediately from the other, and the time between withdrawal from the E&E UOA and programme closure displays a lot of variation. Nonetheless, the findings of this study suggest that the results of REEs have played a role in generating the observed changes in the position of economics in the UK’s HE system. There is little reason to believe that the experience of economics in the UK has not been replicated in other subject areas. REEs would appear to be implicated in the fragmentation of the UK’s HE system in ways that may be both intended and unintended. REEs may have driven up research standards and moved resources to their most productive use, but in doing so they may also have contributed to a socio-geographic fragmentation of economics education in the UK HE sector. The desirability of restricting particular groups’ access to certain academic areas is open to debate. Policy makers, when designing REEs, should bear in mind these (presumably) unintended consequences.

Though this investigation has shed light on some important questions, it is not without its limitations. Deriving more accurate estimates of the relationship between research performance, research support and programme decisions would require the specification and estimation of a full structural model. If the necessary data could be obtained, their use in this way would provide a useful extension to the work carried out in this study. In addition, future research might explore whether the experience of E&E is replicated in other disciplines in the UK and in other parts of the world where performance-based research funding systems are in place.