Introduction

Impact divides opinion. For some, it was a controversial addition to the United Kingdom’s (UK) Research Excellence Framework (REF) audit in 2014 (Bastow et al. 2014; Ladyman 2009; Smith 2010; Smith and Meer 2012; Smith et al. 2011; Watermeyer 2014) and remains so as we head towards REF2021 (Chubb and Watermeyer 2017; MacDonald 2017; Ní Mhurchú et al. 2017). For others, it is a welcome opportunity both to widen the appeal of social science and to defend it against government cuts. And, in politics and international studies, there has been an acceptance that, love it or loathe it, impact is now part of our professional landscape. An energetic debate emerged in the run-up to REF2014 on, among other things: the art of translation (Flinders 2013); the need for design-oriented methods (Stoker 2013); and the dilemmas posed for early career researchers (Savage 2013). As the profession continues to be socialised on impact, ‘how to’ guides are appearing in which researchers share both data and their own experiences of ‘doing impact’ (Cairney et al. 2016; Dommett et al. 2016; Geddes et al. 2017; Matthews et al. 2017). Beyond peer-to-peer support, commercialisation is also taking hold, with companies such as Fast Track Impact and Vertigo Ventures contracted by universities to support impact tracking and case study writing. The arrival of impact matchmakers, like Columbia University's Research4Impact (r4i) impact networking site, cannot be far behind (see Frazer 2017).

Of course, while the impact imperative is certainly more pressing now than at any other time (the value of case studies having significantly increased for REF2021), questions of relevance are not new. The relationship between political research and practice was a major preoccupation in the post-war years as politics and international studies professionalised and built distinct disciplinary identities in Britain. What does it mean to be relevant? Is it a good thing? Do practitioners care about academic research? Can academics engage to avert policy disasters? How do we do it? Luminaries of the disciplines grappled with these questions and more (see, for example, Booth 1997; Crick 1962; Hayward 1991, 1999; Hill 1994; Johnson 1989; Smith 1990, 1997; Wallace 1996).

At last, we have some empirical evidence to further animate these questions, and to accompany the maturing literature on impact’s conceptual and practical dimensions (Blagden in press; Boswell and Smith 2017). Specifically, there are 166 publicly available political science and international studies impact case studies submitted for REF2014: the most complete, and the only systematically comparable, record of UK-based academics’ non-academic activities. While a broad brush view of all 6975 impact case study submissions has been drawn (King’s College London and Digital Science 2015), we know much less about the content of cases at subject level (though see Smith and Stewart 2017 on social policy). Here, we offer the first full analysis of politics and international studies submissions. Our aim is straightforward. Using frequency data, we report the political economy of political science and international studies impacts across four broad themes: who has what impact and when; impact’s beneficiaries; impact’s evidence base; and generating and validating impact. Analytically, we comment on the findings using insights from disciplinary histories and knowledge utilisation literatures. We conclude by discussing the ramifications of our case analysis for the discipline.

Data and methods

We start with a brief note on the data. We coded 166 politics and international studies case studies submitted by 56 universities. We do not analyse the impact templates submitted by departments. The case studies are structured in five prescribed sections: summary of the impact; underpinning research; references to research; details of the impact; and sources to corroborate the impact. In most instances, coding is a straightforward business of following what is said in the text. That said, there are some instances where judgements were necessary—for example, in determining a case study’s sub-field.

We report frequencies; such a descriptive, distributional approach is all the data will allow. The REF rates case studies as Unclassified (U), 1*, 2*, 3* and 4* (where 4* is the top-quality ranking, see Box 1). No department achieved a 100% 4* profile, so to isolate excellence we checked for differences between findings in all cases and those of the 15 universities with only 3* and 4* impact profiles (N = 55 case studies). We found broadly the same distribution across all our analytical dimensions.
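For readers who wish to reproduce this kind of descriptive, distributional analysis, a minimal sketch follows. It assumes a hypothetical file, cases.csv, with one row per coded case study; the file and its column names (sub_field, top_profile) are illustrative stand-ins for our coding frame, not artefacts of it.

import pandas as pd

# Load the coded cases (hypothetical file: one row per case study).
cases = pd.read_csv("cases.csv")

# Frequency distribution across all cases, as counts and percentages.
counts = cases["sub_field"].value_counts()
percentages = (counts / len(cases) * 100).round(0)
print(pd.DataFrame({"N": counts, "%": percentages}))

# Repeat for the subset from the 15 universities with only 3*/4* impact
# profiles (N = 55), to check that the distributions broadly match.
top = cases[cases["top_profile"]]
print((top["sub_field"].value_counts(normalize=True) * 100).round(0))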

Box 1 Impact case study scoring scheme.

We should be clear about the status of these cases. We cannot claim they represent the universe of UK-based academics’ impact work. Rather, these have been selected and packaged by universities for the REF audit. Because the number of impact case studies required in REF2014 was coupled to the number of staff members whose outputs were returned, universities could be selective; what was returned in impact is therefore likely to represent perceived best practice rather than a comprehensive picture. But this is speculation.

Who has what impact, when?

A political economy of impact requires a detailed mapping of who does what and when; this is the first dimension of our data analysis. To begin, we identify the impacts and the impactors. Regarding impacts claimed, the cases demonstrate a broad and careful interpretation of what benefit can be. A basic analysis of case study titles reveals that 55 different verbs are used to characterise impact. Four verbs dominate—‘improving’ accounts for 11% of verbs used (N = 14); ‘informing’ 9% (N = 11); ‘shaping’ 9% (N = 11); ‘influencing’ 8% (N = 10)—and together they account for 37% of total verb use. Although certainly simplistic, these headline data show authors avoiding direct claims of change (indeed, the verb ‘change’ is used only twice in case study titles). This caution reflects a lesson of over four decades of knowledge utilisation studies (see Alkin 2013 on the defining contribution of Carol Weiss in this regard): social science can rarely be demonstrated as the source of change in society, polity or economy. The caution carries through to the texts themselves. Only 13% (N = 22) of case studies make direct reference to having changed policy. By contrast, colleagues are more comfortable making claims about: influencing user understandings (96%, N = 160); shaping organisations’ rules, procedures or processes (55%, N = 91); and enhancing public debate (25%, N = 42). A good deal of attention is paid to unpacking these claims. All but one of the case studies go beyond naming the impacts to identifying (or at least implying) the actions that underpin them. Although developing a shared understanding of what impact is across disciplines was identified as a key challenge of REF2014 (Manville et al. 2015), in politics and international studies academics converged upon similar ways of describing what they had done. So, in most cases, impact is generated by: contributing to debates (40%, N = 66); bringing new evidence into the public domain (40%, N = 67); and producing best practice models (38%, N = 63).
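To make the title analysis concrete, a minimal sketch of the verb tally follows. The titles are invented examples, and the leading-word heuristic is an illustrative simplification rather than our coding protocol, which identified the characterising verb wherever it appeared in a title.

from collections import Counter

# Invented example titles; the real titles come from the published case studies.
titles = [
    "Improving electoral administration in the UK",
    "Informing parliamentary scrutiny of defence policy",
    "Shaping NGO practice on conflict resolution",
]

# Simplification: take the leading word of each title as its verb.
verbs = Counter(title.split()[0].lower() for title in titles)
total = sum(verbs.values())
for verb, n in verbs.most_common():
    print(f"{verb}: N = {n} ({n / total:.0%})")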

Moving on to who makes an impact, single ‘lead’ academics dominate. In 66% of cases (N = 109), impact accounts are structured around one central figure and their research. We should be clear about what we mean here. Although many cases do make reference to other colleagues or team members from the submitting department (42%, N = 69), two-thirds of the submissions use a single academic as the key author of impact throughout the texts. A total of 20% (N = 22) of those heroes are women. Academic collaborators from other UK universities are mentioned in 19% of cases (N = 32), and international colleagues fare worse at 13% (N = 21). What does this tell us? We can reasonably argue such ‘heroic’ accounts are reflections of reality; they map on to the structure of knowledge production in the social sciences, where ideas are often developed by individuals and where individuals’ backgrounds play a key role in how they engage with impact (Matthews et al. 2017). Indeed, only 25% of cases name more than two researchers (N = 42). When it comes to acknowledging the contribution of academics from other places, the hand of the audit structure is at work. Ultimately, the REF is a competitive process which rewards partial accounts that spotlight one department and, in doing so, obscure the contribution of those beyond.

We can also think about ‘who’ has an impact in disciplinary terms. The importance of what is topical is clear when we explore how case studies are distributed across the discipline. Coding the case studies by the sub-field of their authors and underpinning research (Fig. 1) reveals that four areas of the discipline dominate: public policy and administration with 23% of cases (N = 38); electoral and parliamentary studies at 17% (N = 28); security studies at 14% (N = 24); and human rights and conflict resolution at 12% (N = 20). The data also reveal areas that are poorly represented. Political theory and comparative politics (two central parts of UK political science [Dunleavy et al. 2000, p. 8]) account for only 5% (N = 8) of the case studies. This is not so surprising. Political theorists are fewer in number than colleagues in sub-fields like public policy, and this question of numbers is exacerbated by the fact that many are located in departments of Philosophy (which has its own Unit of Assessment in the REF). The emphasis on demonstrating impact, rather than participating in public engagement, poses a tough challenge for political theorists. Moreover, for some theorists, to endorse impact is to endorse a neoliberal ideology and to renounce the very critical faculties that are central to theory scholarship (Vincent 2015). This may be especially true for theorists not working in the analytical, normative tradition (which helps account for the University of Oxford’s contribution of half of the theory case studies). The absence of comparative politics is explained, in part, by the existence of the Area Studies panel (UoA 27).

Fig. 1 Politics and international studies’ case study sub-fields

Now we move to the question of time: when is impact generated? This temporal dimension of impact was keenly debated both before and after REF2014, with the five-year window for impact (2008–2013) seen by some as too narrow, and the window for underpinning research (back to 1993) as too short (Morgan Jones and Grant 2013; Manville et al. 2015; Technopolis Group 2010). One recurring concern was that these timelines would result in an upsurge of impact-related research as academics hastily built alliances with potential users. Yet, when we look at the date spans of the underpinning research and the starting points of academic–user relationships, we see a far less frenetic picture. 43% (N = 72) of the cases are based on work, or built around relationships, developed ten years or more before 2014, and 40% (N = 66) between five and nine years before the REF cut-off point. This date distribution is broadly the same when we compare those universities with 3* and 4* impact profiles against the rest.

These data counter fears of any ‘ambulance chasing’ in politics and international studies. Far from desperately engineering impact for a quick hit in the immediate run-up to the audit, the data suggest a more limestone logic is at work, where layers of academic engagement, relationships and research have built up over time to form impact. In contrast to the frustrations vented by colleagues in the 1970s and 1980s about the paucity of political scientists able to make a difference (Bevir and Rhodes 2007), it appears that in the 1990s and 2000s many UK-based political scientists took advantage of government advocacy (at least rhetorically) for evidence-based policymaking (EBPM) and engaged in impact work long before it became the subject of the research audit.

Impact’s beneficiaries

We turn now to impact beneficiaries: who gains from politics and international studies work, and what is that work? 95% (N = 157) of cases had multiple beneficiaries (Table 1). With 13 different types of beneficiaries, we can no longer think in terms of ‘reticent practitioners’ (Grant 2010, p. 164; Hayward 1990, p. 320); a wide range of actors and sectors are willing to form effective relationships with academics. Most obviously, impact is found across the UK’s political venues—government executives (UK Government, Scottish Government, Welsh Government, Northern Ireland Executive and local governments) are beneficiaries in two-thirds of cases (66%, N = 109), and the UK’s parliaments and assemblies feature in 42% (N = 70). This compares favourably with analysis of the entire REF2014 impact submissions across all disciplines, which puts these figures at 20% and 17%, respectively (King’s College London and Digital Science 2015, pp. 55–59). These findings echo earlier observations in the profession that the UK ‘policy community is getting quite extraordinarily good value out of UK Politics and International Studies’ (ESRC 2007, p. 32).

Table 1 Impact beneficiaries

There are supply and demand logics behind this focus on government and legislatures. On supply, while the impact imperative encourages academics to ‘get out more’ (Campbell and Childs 2013, p. 185) and has made academics more visible (Talbot and Talbot 2015, p. 190), as we have seen, the case study evidence suggests that these excursions pre-dated the build-up to REF2014 by some time. Public funding bodies have long been mindful of Haldane’s vision of working in tandem with academics (King 1998). Indeed, when it was created in 1965, the Social Sciences Research Council (SSRC) aimed to promote the production of ‘policy-relevant knowledge’ (Bevir and Rhodes 2007, p. 255). Despite a government climate hostile to social science in the 1980s, Grant notes this is when the steady promotion of relationships between politics academics and the civil service began in earnest (Grant 2010, pp. 101, 105, 123–124). Such overtures are not uncontroversial, of course. In The Limits of Political Science, Johnson (1989) offers an excoriating critique of this move towards practitioners as based on ‘illusions of utility’ (1989, pp. 57–86), bound to result in ‘embarrassing’ research which is ‘too impressionistic in content and method to be applied with any confidence to practical affairs’ (1989, preface). Similar sentiments were voiced in international studies, with Hill bemoaning the ‘siren sound of policy relevance’ that lured colleagues from the pursuit of intellectual endeavour (1994; see also Smith 1990, p. 153). In the 1990s, practitioner links were institutionally consolidated in research structures. Notably, the ESRC’s thematic research initiatives designed with practitioners in mind—for example, the Whitehall Programme (1994–1999) and the Evidence Network (launched in 2000)—rewarded mission-driven academics and socialised the profession towards today’s pathways to impact agenda and What Works Centres (launched in 2013).

More pertinent, however, is change on the demand side. Historically, there has been concern that ‘political scientists were more interested in developing relationships with civil servants than the other way around’ (Grant 2010, p. 124). Two developments have altered this. The first is increased policy complexity, both in the technical complexity of issues and in the complications posed by the UK’s evolving composite polity and globalisation. The second is the advent of open policymaking and EBPM, institutionalised in the UK civil service and beyond since the late 1990s, at least as totemic ideas of modernity and good governance. Together these have pushed civil servants to engage with academics both inside (for example, in scientific advisory committees—see LSE GV314 2017) and outside the machinery of government (Davies et al. 2000; Nutley et al. 2007; Talbot and Talbot 2015, p. 188).

This is not simply a story about UK-based academics working with UK-based policymakers, however. Internationalisation is very strong: 64% of cases (N = 106) involve non-UK governments as beneficiaries, 45% (N = 74) involve international organisations, and 58% of all cases (N = 96) record some kind of international impact. Various non-governmental organisations (NGOs) also appear in the cases—most notably, 61% (N = 102) are linked to NGOs, charities and think tanks (UK-based or international). This makes sense given that 27% (N = 45) of cases address populations they define as disadvantaged.

And then there are the rest. Perhaps unsurprisingly, at only 5% of cases (N = 9), politics and international studies academics’ work with business and industry is some way off the ‘triple helix’ knowledge production mode that conceptualises developments in the natural sciences’ external links with industry (Etzkowitz and Leydesdorff 2000). More notable is the extremely low level of interaction between academics and the public, who are listed as direct beneficiaries in 22% of cases (N = 37). This is less an indication of a paucity of impact on citizens, and more a reflection of the difficulty of capturing and isolating effects on public values in a convincing way (for a forerunning comment on this, see ESRC 2007, p. 34). Thus, while Stern’s review of REF2014 bemoans the absence of citizen benefit in impact submissions (2016, Sects. 54, 83, recommendation 7), academics’ interactions with the general public might be better thought of as engagement. As such, they are not well captured in impact case studies. Indeed, the 2007 review of the profession found not only ‘considerable evidence of engagement with end-users in the policy community, narrowly construed …’ but ‘… even more evidence of knowledge transfer more broadly construed’ (ESRC 2007, pp. 5–6, emphasis added).

Impact’s evidence base

Next we explore the evidence base in the case studies—what HEFCE (2011) refers to as ‘underpinning research’. First, we consider research methods. It is often said that research users, government in particular, prefer statistics—the so-called ‘killer facts’—which have the advantages over qualitative evidence of both simplicity and malleability (Dunleavy 2011; Shucksmith 2017; Stevens 2011; Wood 2012). Yet, in politics and international studies, we see a truly mixed economy, with a range of qualitative and quantitative tools underpinning the case studies (Fig. 2). 44% of cases are rooted in two or more methods (N = 73). The importance of descriptive studies or evidence reviews is of particular note; 76% (N = 126) involve the transfer of this evidence to users, and in 40% of cases this is the single method used (N = 66). This backs a point made in a recent Joseph Rowntree Foundation/Carnegie Trust report that policymakers value accumulated knowledge and academic expertise that provide an intellectual backdrop more than findings from a discrete piece of research (McCormick 2013; see also Avey and Desch 2014, p. 229; Shucksmith 2017; Talbot and Talbot 2015, p. 190). Beyond these reviews, traditional survey work underpins 29% (N = 48), qualitative interviews 23% (N = 39) and conceptual frameworks 20% (N = 33). Again, this even spread matches the requirements reported by some users (Talbot and Talbot 2015, p. 190, a survey of senior civil servants in the UK). We also see more niche methods, such as experiments and ethnography, registering impact at 4% of cases each (N = 6). This embrace of a plurality of robust methods is undergirded by substantial ESRC action on methods training in the 1990s. It has enabled the profession to ditch what Hayward called the ‘humdrum’ methodology of muddling through by seeing what emerges from data (1991, pp. 320–321; see also Hayward 1986) and, as a consequence, has surely brought credibility to the political science voice.

Fig. 2 % of cases using specific methods

We can dig deeper into the nature of academic work reported in the case studies. Specifically, we are interested in the ‘mode’ of knowledge that is influential. In their ground-breaking sociology of science volume, Gibbons et al. (1994) introduce two ‘modes’ of knowledge production. Mode 1 is well known; this is the world of so-called ‘basic research’, principally motivated by academic rationales in isolation from concerns with wider utility or societal applicability. Despite empirical ambiguity (Martin 2011), it is widely accepted that such ‘blue skies’ work is becoming a ‘minority preoccupation’ in universities, giving way to a second, more applied knowledge type (Nowotny et al. 2003, p. 184; see also Bentley et al. 2015). Mode 2 captures this knowledge, produced by multi-disciplinary teams brought together to address specific problems. Coding our cases for these modes throws up curious findings; notably, 25% (N = 41) are grounded in mode 1. These findings may give solace to those who defend ‘pure’ research as embodying an alternative utility to policy relevance (Lynd 1939; Smith 2002). This importance of ‘blue skies’ thinking runs against the early impulse of the UK impact agenda. The Warry Report (2006), produced for RCUK and central to the initiation of impact discussions in government, focussed narrowly on the economic impacts implied by mode 2 type systems of knowledge (Watermeyer 2014, p. 202). Yet, in politics and international studies, a quarter of impact is generated by academics whose work is neither predicated on making socio-economic contributions, nor on any institutionalised alignment or relationship with stakeholders. This demonstrates how both the broadening of impact’s definition for REF2014 (in the wake of the backlash against reports like Warry’s) and its enactment on the ground by academics have avoided the instrumental and economic vision advanced in the early discussions.

But what of mode 2? Mode 2 knowledge is transdisciplinary; that is, it mobilises a range of methods and theories, fusing them in innovative and effective ways (Gibbons et al. 1994; Nowotny et al. 2003). Only 7% (N = 11) of cases declared research across disciplines (something recognised by the UoA 21 panel [Main Panel C 2015] and the subsequent Stern review [Stern 2016, Sects. 40, 52, recommendation 5]). What we have instead is a pre-eminence of what we term ‘mode 2-lite’ research. 68% of cases (N = 113) have aspects of mode 2—research is problem focussed and generated in relation to a specific context—but none involves knowledge that we could say truly transcends the discipline. This makes sense; career incentive structures are discipline focussed. Academics’ career progression, where research gets published, what research gets funded and, correspondingly, the low status of research that falls outside those clear boundaries all militate against working across silos (ESRC 2007, p. 18; Warleigh-Lack and Cini 2009). Mono-disciplinary work may be particularly pronounced in politics and international studies, whose journeys towards professionalisation in the post-war era involved self-conscious distinction from their interdisciplinary roots in law, sociology, economics, history and philosophy (Bull 1976; Capano and Verzichelli 2016; Crick 1975; Hayward 1999; Smith 1990; Wallace 1996).

Finally, our analysis addresses quality (a major area of debate in the run-up to REF2014—Bishop 2013; Curry 2013). Case studies were to be underpinned by research of at least 2* quality—recognised internationally in terms of originality, significance and rigour—but these outputs are not themselves assessed. This raises at least two challenges. First, given the likely applied nature of social science that is impactful, the questions of what outputs would be cited—for example, the scale of non-traditional formats included—and how case study authors would demonstrate having met the 2* threshold loomed large (Tinkler and Dunleavy 2012). 898 outputs were referenced in the 166 cases (a mean of 5.4), with 109 submissions (66%) using the maximum six references allowed (Fig. 3 has the full breakdown). In terms of the research mix, as anticipated, grey literature is present—working papers and official reports account for 12% of outputs (N = 107)—although it is perhaps not as prevalent as expected (Tinkler and Dunleavy 2012). Formally published and peer-reviewed works form the bulk of the underpinning research: articles, book chapters and books comprise 86% (N = 770) of the politics and international studies impact portfolio. Moreover, it is worth noting that 46% of case studies (N = 77) used the ‘references to research’ section to offer assurances of research quality—listing journal impact factors, citation or download rates, grant success and prizes as indicators.

Fig. 3 Output types

The second puzzle is that 4* impact case studies could be underpinned by less than excellent research; a product of what Dunleavy (2011) terms HEFCE’s ‘impossibilist’ discourse on research excellence. It is not possible to answer definitively whether there has been a bifurcation between excellence in research outputs and the outputs underpinning impact case studies. But we go some of the way there by analysing journal outlets. 49% (N = 440) of the entire evidence base is published as articles in 233 different journals, though there is some concentration around practitioner- and policy-oriented publications—the Hansard Society’s Parliamentary Affairs and Chatham House’s International Affairs (see Table 2). We then go further. Using the Social Sciences Citation Index (SSCI) 2015 journal rankings, we examine the percentage of articles published in journals ranked in the top third of three SSCI categories: Political Science, International Relations (IR) and Public Administration. 50% of articles are in top third ranked Political Science journals. This figure rises to 67% for IR journals and 68% for Public Administration. When we consider only the case studies of those fifteen institutions ranked either 3* or 4* on impact, these figures increase in all categories: to 58% in top third ranked Political Science journals; 77% in IR; and 87% in Public Administration.
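A minimal sketch of this ranking check follows. It assumes two hypothetical files—articles.csv (one row per underpinning article) and ssci_ranks.csv (journal, SSCI category, rank and category size from the 2015 rankings)—whose names and columns are illustrative, not part of the SSCI data itself.

import pandas as pd

# Hypothetical inputs: one row per underpinning article, and a lookup table
# of 2015 SSCI ranks per journal and category.
articles = pd.read_csv("articles.csv")   # column: journal
ranks = pd.read_csv("ssci_ranks.csv")    # columns: journal, category, rank, category_size

# Attach each article's SSCI rank and flag journals in the top third of
# their category (unmatched journals are counted as not top third).
merged = articles.merge(ranks, on="journal", how="left")
merged["top_third"] = merged["rank"] <= merged["category_size"] / 3

# Share of articles in top-third journals, by SSCI category.
print((merged.groupby("category")["top_third"].mean() * 100).round(0))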

Table 2 Most frequent journal outlets

We also look at quality in terms of the extent and type of grants that underpin impact research. A total of 371 grants are mentioned in the case studies, with 79% of cases citing at least one grant (N = 131). Again, we see another mixed economy. Although the ESRC leads the way with 23% (N = 86) of grants cited, government monies—UK and international—international organisations and charities (including the British Academy, Leverhulme Trust, Nuffield Foundation and Wellcome Trust) make up 53% of grants cited in case studies (N = 196). European Research Council (ERC) grants are noticeably low at only 10% (N = 17). This is perhaps explained, at least in part, by the absence of an impact component in the EU grant application process and the presence of an Area Studies panel as an alternative venue for European Studies scholars (indeed, 16 of that panel’s 68 impact case studies are European Union (EU) focussed).

Impact generation and validation

We now address how impact is generated and evidenced. Taking generation first, we analyse the starting points of impact: how does the impact ball get rolling? Coding the case study narratives for how relationships developed in the first place, we find that academics initiated contact with users in approaching two-thirds of the cases (61%, N = 102). This academic entrepreneurship endorses the view that what time-pressured practitioners often need most is advice based on research that has already taken place, as opposed to speculative, exploratory work (ESRC 2007, p. 36; see also Weiss [1977] on the enlightenment function of social research). Co-production—where impact starts with shared research projects between academics and users—and commissioned work—where academics bid for contract research—both lag some way behind, on 14% of cases each (N = 24 and N = 23, respectively). This reliance on proactive, mission-driven academics in politics and international studies may balance out a little in REF2021 if the co-production agenda of the ESRC bears fruit (ESRC 2007), and with a corresponding increase in non-academic groups’ awareness of academics’ need to generate impact.

This vision of academics seeking out potential users is some way off the doomsday scenarios advanced before REF2014, which raised the spectre of scholars becoming little more than knowledge workers beholden to regulatory frameworks (Burawoy 2011). Far from being slaves to the audit, in politics and international studies at least, colleagues had been willingly reaching out beyond the academy for years before the impact agenda was born (see the earlier discussion of the time dimension). This echoes some of the discussions in the discipline in the run-up to REF2014. Gender politics specialists powerfully argued that many feminist scholars have always spoken in impact (Campbell and Childs 2013). We explored this empirically and found that there is indeed a sizeable number of ‘boundary spanners’ (Tushman 1977; see also March and Simon 1958). Over a fifth of cases (21%, N = 35) involve academics with a non-research-related (and often long-term) commitment to user groups that is integral to the impact they generate—often occupying official roles on a semi-permanent basis. For example, academics are variously members of political parties, NGOs and charities. An understanding of both academic and practitioner worlds drives boundary spanners to use their research to open up what could otherwise be closed systems. But it goes beyond understanding alone. Nearly a third of these boundary spanners are active in cases that involve disadvantaged populations (31%, N = 11).

With boundary spanners found in a fifth of cases, we are some way off the discipline’s roots in the nineteenth and early twentieth centuries, when ‘… it was considered an advantage to have had personal political experience’ (Hayward 1999, p. 2). But this does shed light on the importance of being normative for impact, with a non-trivial number of activist academics following the Gandhian path of ‘being the change they want to see in the world’.

Moving beyond how impact relationships start, we code what academics actually do in impact generation. In all cases, there are multiple communication pathways to impact (Table 3). Direct interaction is central: 80% of cases cite direct briefings (N = 133) and 56% cite interviews with key stakeholders (N = 93). Academics hold advisory positions in 44% of cases (N = 73). These are time-limited roles that are usually problem specific, and so are distinct from the boundary spanners discussed above. On this evidence, scholars have come a long way since the 1970s and 1980s, when they were chided for being ‘… too happy to stick to the library and too self-effacing to push their way into the corridors of power’ (Ivor Crewe in Grant 2010, p. 103) and for not paying ‘sufficient attention to promoting themselves and their subject to government’ (PSA minutes 1981 in Grant 2010, p. 125).

Table 3 Pathways to impact

Tailoring the written word for non-academic audiences is also key (see Flinders 2013). 70% of cases involve some kind of targeted report (N = 116), and written evidence to committees or organisations is found in 45% of cases (N = 74). Focussed communication also takes the form of training—either through the development of bespoke training materials (28%, N = 46) delivered by others, or through academics conducting training with practitioners themselves (20%, N = 33).

Generating impact through communication with wider publics is varied, however. Indirect public engagement through the media (newspaper articles, radio or TV appearances) is noted in 40% of cases. This demonstrates that, in many ways, the UK profession is far from the ‘self-deprecating discipline’ (Hayward 1999) it once was, and has moved a long way from its professionalised beginnings in the 1950s and 1960s, when relationships with the media were, at best, regarded with suspicion (Grant 2010, pp. 44–45). Yet events where academics actually rub shoulders with citizens are mentioned in only eighteen cases (11%). Even though enhancing public debate is the impact claimed in a quarter of the cases, this impact is not achieved through direct engagement. We must tread with caution and rehearse the caveats outlined earlier. Recall, we only record what is documented in the impact case studies. What is being captured is not the scale of public engagement across the profession, but the scale of that linked to impact case studies. As we noted earlier, the need to make clear the link between impact and research at a granular level militates against time-consuming and costly public events whose impact is hard to capture (or at least harder than evidencing work with a government department). The result may be the impression that influencing the public is best done through indirect means: by shaping policy and influencing elite debate through focussed and private briefings.

Finally, we explore how impact claims are evidenced and validated. The novelty of impact, and the ambiguity concerning its financial significance, resulted in hastily assembled institutional processes for evidencing impact (Gill 2012). The expectation was that, because ‘few academics engaged in recording, reconnoitring or auditing their impact activities on a regular basis’, institutions would ‘play it safe and opt for statistical and numerical evidence in supporting their impact claims’ (Watermeyer 2014, p. 205). Yet no robust metrics for impact exist (Wilsdon et al. 2015), and in politics and international studies the predicted reliance on metrics did not come to pass. Although 46% of case studies (N = 77) note esteem indicators regarding the underpinning research, actual impact corroboration is heavily qualitative (Figs. 4, 5).

Fig. 4 Number of corroborating sources

Fig. 5 % of cases with corroborating sources

A total of 1428 corroborating sources were cited, an average of 8.6 per case study; 51% of case studies (N = 85) used the maximum possible of ten. Beyond references to academic work in official reports (80% of cases, N = 132) and press coverage (48%, N = 80), there is particular reliance on statements from users and beneficiaries. Only 10% of cases (N = 17) contained no user statements. Indeed, these testimonials accounted for 39% of corroborating sources (N = 551) (Fig. 4). Although tracking down people who could assist in evidencing impact may have been a response to the absence of documentary evidence (Manville et al. 2015), this qualitative corroboration does bring rare (and welcome!) colour to the case studies themselves. For it is in these quotes that readers are given a sense of why a piece of research made the difference.

Discussion and conclusions

We have offered an anatomy of the first set of politics and international studies impact case studies, based on original analysis of all publicly available submissions. While this is just a snapshot of some of the benefits UK politics and international studies brings to the world beyond the academy, the big picture is clear: by 2014, the discipline and its users had well-developed impact stories to tell. Despite separation from other disciplines and specialisation in theory and methods, scholars remain able, and are willing, to develop effective relationships with an array of increasingly receptive practitioners.

Let us be clear about the implications of this positive analysis, and specifically about what it does not imply. We do not propose that the tensions involved in relationships between academics and practitioners have been resolved. Debates about relevance, and the role of the academic beyond the academy, will and must continue; they are among the best guardians we have of academic and impact quality. Rather, given that impact is now institutionalised in the regulatory landscape, we must attend to the ramifications of our case analysis for the discipline. Our findings suggest key challenges in six areas: the impact we have; the heroic paradigm; extending existing cases; managing and diversifying our beneficiaries; impact’s evidence base in disciplinary and quality terms; and impact communication within the profession.

First, we know politics and international studies scholars are informing decision-makers, governments and parliaments at all levels and in all jurisdictions of the UK and beyond. So we can have some confidence that academic research is informing policy. But when we explore what the profession is actually doing, what is being affected is the wider climate of policy, not policy itself (see John 2014). This may be as good as it gets. We cannot assume that other knowledge providers are cutting through in a way academics are not. There is a huge literature from the 1970s onwards that explores the difficulty of moving social scientific evidence beyond this conceptual function, where research enlightens but does not direct (Weiss 1977). A recent report suggests academics are the most trusted source for policymakers (McCormick 2013, Sect. 6), but using their knowledge to inform policy change is a different matter (Rogowski 2013). Consider Avey and Desch’s (2014) survey of senior White House officials, which reports that the widest gap between what international studies scholarship offers policymakers and what they need is found at the top of the policy world. For some, the answer to narrowing this gap lies in our methods—for example, design thinking (Stoker 2010) and experimental techniques, with their promise of reaching the parts that others cannot, are currently being championed in UK social sciences and government (Halpern 2015; Haynes et al. 2012; John 2014; Torgerson 2017). Of course, we may well consider the utility of downplaying change as a way to manage expectations about what can reasonably be expected from politics and international studies.

Alternatively, broadening out impact in REF2021 to include forms of engagement may help some politics and international studies academics to re-direct their focus beyond the policy world (and the risk of being superheroes on the side of the powerful—for a characteristically insightful account, see Back 2015). This is not simply a matter of opening up to wider audiences; it may also boost the impact profile of those academics whose work offers radical critiques of prevailing political norms (see Smith 2012 in Watermeyer 2014, p. 204).

Second, we have the challenge presented by our heroic impactors. While it is clear that building an impact case study is a team effort involving many actors, the case study narratives often focus on the intellectual heavy lifting of single heroes. For some, this will be a necessary evil of writing for the audit. Yet might the pre-eminence of single academics holding up case studies risk feeding an instrumental model of research in which universities aim to recruit impact stars, or risk alienating early career researchers from engaging in impact? Growing awareness of the risk of losing a case study when an impact hero leaves, with scant evidence trails left behind from which to piece a case together (impact case studies are not portable), may encourage a more explicitly inclusive approach that builds resilience by spreading case study knowledge and boosting institutional memory. Moreover, the move towards full submission of research-active staff may help prevent the externalities implied by a hero paradigm. The university-level challenge is to reward impact effort, not just outcomes.

The third challenge concerns the temporal dimension of impact. Second time around, to what extent will academics be able to tell continuation impact stories that build on the last submission? The number of follow-up cases will reveal much about the strength and depth of politics and international studies impact relationships. REF2021 will also tell us much about how long it takes to consciously generate impact. Over a decade will have passed since the introduction of ‘Pathways to Impact’ sections in RCUK research grants (introduced in 2009), and we will see how, and to what extent, these plans have cashed out in reality. There will be similar curiosity around the effect on case studies of the ESRC’s Impact Acceleration Accounts (launched in 2014).

Now we turn to beneficiaries. Governments and parliaments at home and abroad are the big winners. Even with some re-balancing from a wider definition of impact, these will likely remain our primary impact partners. Given the (rhetorical) backlash against experts in political debate, this poses interesting challenges, both intellectually and in terms of impact. Thinking about the latter: how can we ensure scholars are not captured by government, becoming ‘inside dopesters’ too busy reacting to events to try to shape them (Grant 2010, pp. 125–126)? One possible insurance against this is the incompatibility of academic and policy timelines. Policy practitioners cannot direct research in an instrumental way because they ‘simply do not know what they will need 48 months from now’ (ESRC 2007, p. 35). But we should go further than this and equip researchers with the ability to reflect on how to draw and maintain clear boundaries in their impact work.

The challenge of beneficiaries is not simply about boundaries but also about diversification. Relevance need not always be prefixed with ‘policy’. Academics’ educative impact is the biggest impact we will ever have, yet teaching was artificially excluded from the 2014 impact definition. The Stern review has changed this and, as the definition of impact loosens to include teaching at all levels, colleagues are well placed to make imaginative links between an array of hitherto unlinked agendas—impact, research-informed teaching, the Teaching Excellence Framework (TEF) and Widening Participation (WP).

Publics have lost out also, although perhaps this ‘top down’ attitude is part of the problem. Rather than asking what we can do for citizens, politics and international studies academics need to ask: what can the real world do for political science? We need to turn our attention to what we can learn by working with ‘real people in real places’ (Booth 1997, p. 372; see Miller and Sabathy 2011 on the open university). Again, HEFCE’s apparent widening of impact to include publics is critical to moving this forward. But the methodological challenge of evidencing claims to influencing public debate requires discipline-level support.

Moving beyond these evidencing challenges, we can think about the absence of publics as linked to knowledge production itself. Mono-disciplinarity means problems are explored in closed systems. If we do not open up problems with other disciplinary viewpoints, the likelihood of being equipped to open them up with publics is remote. This ripple effect of opening up was part of the wider vision behind the potential of mode 2 knowledge, where ultimately problems could be explored in a public sphere with ‘no entrance ticket in terms of expertise’ (Gibbons et al. 1994, p. 148). It demands that academics think beyond simplifying and clarifying their ideas, towards sharing ownership and co-producing impact in a way that empowers practitioners and society. We are some way off that ideal.

Our fifth set of challenges concerns impact’s evidence base in terms of inter-disciplinarity and research quality. Our analysis demonstrates a healthy methodological pluralism. The mixed approaches informing most case studies, and the centrality of descriptive evidence summaries, suggest professionalisation has not resulted in a flight from reality (Shapiro 2005; see also Crick 1959, 1962; Bull 1976; Flyvbjerg 2001; Moore 1953; Ricci 1984; Wallace 1996). That said, our overwhelmingly disciplinary approach to impact is less well matched with the complex and multifaceted reality of policy and social problems. This may change as the work generated by the Pathways to Impact elements of RCUK standard grants comes to fruition, and with HEFCE’s inclusion of interdisciplinary sub-panel members. Of course, we might well ask whether it matters at all. The case studies themselves may serve as proof positive that the juxtaposition of ‘disciplined research and undisciplined problems’ (Rose 2014, p. 176) does not necessarily mean academic abstractions are privileged over problems. Yet we can think about this in a different way by asking a counterfactual question: what work is closed off by the lack of interdisciplinary thinking? In his review, Stern (2016) was concerned about novel research being inhibited by the audit’s bias towards disciplinary silos and its reinforcement of hyper-specialisation. Indeed, when we consult analysis on this, we are left with the nagging concern that opening up to other disciplines—becoming more like foxes and less like hedgehogs (Berlin 1953)—could help political scientists effect a step change in the nature of their impact: forecasting problems and influencing action, as opposed to analysing the wider climate (see Tetlock 2005). Moreover, broadening out might offer a way for under-represented sub-fields to play a deeper role in impact—for example, political theory could use its obvious links with history and philosophy to find a stronger impact voice.

Although using journal rankings and research grants as proxies for output quality is partial, our analysis is sufficient to suggest the discipline has avoided the perverse possibility of poor-quality research generating the biggest impact. We might even be tempted to say ‘job done’! We could go further still and advocate an additional layer of complexity in the audit, requiring the scoring of underpinning research. But there is probably little wisdom in adopting either complacency or pedantry. Rather, let us think about two key questions that the issue of quality raises.

First, what does the impact imperative do to the quality of research? The obvious way to think about this is in negative terms. Pressure to produce a large number of case studies could certainly result in the tail wagging the dog, with research quality declining as academics spread their intellectual energies too thinly. Yet we can also think about this in more positive terms. By linking the theory and methods of a discipline with problems, we can improve the quality of social science itself.

Second, what does quality have to do with impact, in any case? Let us unpack the assumptions behind this question. Scientific quality in research is judged by peer review and underpinned primarily by concerns with objectivity and methodological rigour. Yet, in a world where facts are increasingly challenged in the political arena, is this really all that is needed for impact? We may say yes: the rise in fact-checking by social scientists is perhaps the archetypal way for us to engage with anti-intellectualism (Tyler 2017). It certainly provides one way to demonstrate that the ability to evaluate evidence, and the value of the research process, are the major contributions we can offer society (Holt 2017).

Yet there is another view. Might we not need more mission-driven academics who take unashamedly normative positions on issues of the day? Former ESRC council member and research policy commentator Walker (2017) is particularly provocative: ‘[W]hen fact and truth cannot be taken for granted, they will have to be fought for hand to hand and campaign by campaign … [S]ocial scientists face a choice between sticking with the norms of science and condemning themselves to railing on the margins, or mixing with a fractious public and politicians and adopting normative positions that cannot claim the backing of science’. Is it possible, or necessary, to revise HEFCE’s model of impact assessment for a world with a reduced interest in facts? Certainly, focussing more on bodies of knowledge offers one way to encourage more boundary spanners to enter the impact fray, where impact is focussed not simply on speaking truth to power but on talking values as well.

The final challenge is one of impact communication. We have covered a good deal of ground on evidencing already; let us turn our attention to how we communicate impact among ourselves. The case studies demonstrate that politics and international studies academics are willing and able to modulate how they communicate (see Flinders 2013 on triple writing and Cairney et al. 2016 on jargon reduction), that they understand the utility of a differentiated dissemination strategy in securing impact (Watermeyer 2014, p. 200), and that they appreciate the importance and difficulty of avoiding over-claiming (Chubb and Watermeyer 2017; see also Tooke 2017 for a discussion of this in the medical sciences). This suggests that any outsourcing of impact writing to commercial companies may be at best unnecessary and at worst contrary to the original intent of impact’s introduction to the audit.

Yet the REF case studies themselves are of limited use as learning documents for the profession. The prescribed impact templates result in stylised stories of impact, in which academic research matters in linear and intentional ways. These suit the strictures of the audit but carry dangers for professional practice. How can we learn how ‘to do impact’ when the codified accounts are largely decontextualised? We need something more to share with colleagues if the craft of impact is to be honed and extended. There is also the danger of codification. If the bank of REF2014 case studies becomes a set of exemplars in our field, we risk closing off our impact imagination and ambitions. What is needed are supplements to these narrow narratives: venues where case study authors can give context and meaning to their submissions. We also need non-impact studies—stories of near misses and of impact frustrated.

Impact case studies’ value is set to increase from 16% to 25% of the overall assessment in REF2021 (HEFCE 2017). Understanding what we did last time round, and the dilemmas to be faced, has never been more important. The goal must be to balance reaching for excellence in impact with protecting and celebrating the diversity of interests in the discipline. Activities like offering policy advice and conducting civil service training ‘can never be the primary goals of the university’ (Smith 1997, p. 510). Rather, they are by-products of high-quality research. This is the bedrock of academic work. High-quality research, and the ability to evaluate evidence necessary for impact, can only be produced in environments where scholars have the freedom to explore intellectual ‘eccentricities’ (Hill 1994). Only then do we have something to share.