Keywords

Arguably, the main unit of analysis for any bibliometric study is the reference. It may be listed conveniently at the end of a document in a separate bibliographical section, in a footnote, in an endnote or, more inconveniently, only vaguely alluded to in the main text. In any case, the reference conveys “something” to the reader. One question arises: What exactly does it communicate, what is embedded in a reference, or, to place the question in the context of our study, how have we interpreted the reference in the larger corpus of policy documents that we collected in Denmark, Iceland, Finland, Norway, and Sweden in an attempt to understand the policy process?

References have several constitutive elements: an indication of authorship, year of publication, topic or theme, location of publisher, and type of publisher. The authorship may be further differentiated by gender, nationality, institutional affiliation, and, if several coauthors are involved, the network structure within the group of authors. All these constitutive elements are essential in a bibliometric analysis because they are utilized as epistemological cues for understanding not only whose texts or whose knowledge the authors have selected to substantiate their points, but also whose knowledge they cite as sources of expertise to reduce uncertainty or generate legitimacy about the validity of their own claims or assertions.
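To make the coding scheme concrete, the following minimal sketch shows how a single reference might be decomposed into the constitutive elements just listed. The Python representation and field names are ours, chosen for illustration; they do not reproduce the actual POLNET codebook.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Reference:
    """One cited source, decomposed into the constitutive elements
    discussed above. Field names are illustrative only."""
    authors: List[str]                   # e.g., ["Cairney, P."]
    year: int                            # year of publication
    topic: str                           # coded theme of the cited text
    publisher_location: str              # country of publication
    publisher_type: str                  # e.g., "academic press", "government", "IGO"
    author_genders: Optional[List[str]] = None       # optional per-author coding
    author_affiliations: Optional[List[str]] = None  # institutional affiliations

# A single reference thus carries several epistemological cues at once:
oecd_report = Reference(
    authors=["OECD"],
    year=2017,
    topic="policy advisory systems",
    publisher_location="France",
    publisher_type="intergovernmental organization",
)
```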

According to political scientist Paul Cairney (2015), “‘[e]vidence’ is assertion backed by information.” In our bibliometric network analysis, we treat references as a construct or aggregate of several pieces of information (authorship, year of publication, topic or theme, etc.), helping position the author in a larger semantic space. Referencing sources in a policy document—whether a Green Paper (in the Nordic context: prepared by government-appointed expert panels) or a White Paper (issued by the government)—is fundamentally different from referencing in, for example, an encyclopedia, where the expectation is that the researcher at least pretends to identify the universe of relevant research literature on the topic within which to situate their own framework. Policy documents are much more evaluative and controversial in nature; therefore, they tend to openly take a stance for or against existing assertions. Unsurprisingly, a bibliometric analysis of policy documents will often surface clusters of like-minded authors/documents and set them apart from those holding viewpoints that are seen as different or distant.

References from the Perspective of Sociological Systems Theory

In the Nordic POLNET (Policy Knowledge and Lesson Drawing in an Era of International Comparison) study, we interpret a reference functionally, that is, in its larger context of evidence-based policy planning. We ask the following: What does a reference stand for—or rather do—in a policy document? From the perspective of sociological systems theory, references are meant to reduce uncertainty for the reader. They do this by making transparent the positionality of the author in the larger, discursive policy space and by documenting the credibility of the sources used for the assertions the author is making. As Jenny Ozga (2019) has astutely pointed out, the discursive space in policy documents is by default a political space in which authors position themselves in terms of political orientation and alliances.

To reiterate, references help validate or provide legitimacy to the evidence that the author (e.g., the government-appointed expert commissions or the government) has presented in the document. Thus, if “evidence is assertion backed by information” (Cairney, 2015), then a reference is validation of evidence. Said differently, references are used to provide authoritative status to the evidence presented in policy documents. Naturally, a host of questions surface with this particular conceptualization of references: Which texts are influential, that is, referenced frequently or referenced by two or more different knowledge networks? Are international references, that is, texts published outside the Nordic region, more influential than national or regional references or vice versa? Does the institutional affiliation (government, academe, “institute sector,” private think-tank, civil society organization) of the author matter? Finally, the cross-national dimension enables us to examine the varied legitimization or authorization strategies in the five countries in depth and, by means of comparison, identify nation-specific patterns in the policy process.
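Two of these questions (which texts are referenced frequently, and which are referenced by two or more knowledge networks) can be answered directly from a document-to-reference mapping once documents have been assigned to networks. The sketch below uses invented documents, citations, and network labels purely for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical coding: each policy document is assigned to a knowledge
# network (cluster) and lists the references it cites. All entries invented.
documents = {
    "Green Paper 1": {"network": "A", "cites": ["OECD 2013", "Study X"]},
    "Green Paper 2": {"network": "A", "cites": ["OECD 2013"]},
    "White Paper 1": {"network": "B", "cites": ["OECD 2013", "Study Y"]},
}

citation_count = Counter()          # how often each reference is cited
networks_citing = defaultdict(set)  # which networks cite each reference
for doc in documents.values():
    for ref in doc["cites"]:
        citation_count[ref] += 1
        networks_citing[ref].add(doc["network"])

frequent = [r for r, n in citation_count.items() if n >= 2]
bridging = [r for r, nets in networks_citing.items() if len(nets) >= 2]
print(frequent)  # ['OECD 2013'] -- influential by citation frequency
print(bridging)  # ['OECD 2013'] -- cited across two knowledge networks
```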

Eyal (2019) explains Luhmann’s conceptualization of authority, validity, and legitimacy of expert knowledge, juxtaposing the systems-theoretical approach against Jürgen Habermas’ work on the legitimacy crisis. In concert with Luhmann, Eyal uses “validation” in the sense of defensibility to “reassure people that if they would bother to check, they would find that the particular decision was rationally taken and justified, so no need to bother!” (Eyal, 2019, p. 88). For Luhmann, the insistence on validation is ultimately a “functionally necessary deception” that saves time and prevents discord, given that every fact may also be interpreted differently.

Strikingly, the validity issue has become key at a time when information is openly and abundantly available and when expertise has become democratized, enabling users and other lay persons to participate in the production of evidence and render each and every piece of evidence contestable. In fact, the two prominent trends in “modern governance practices” are, according to Krick et al. (2019, p. 927), a trend toward scientization and a participatory turn. The first trend is discernible in the composition of government-appointed advisory committees (a growing number of researchers), citation patterns (reference to studies, technical reports, and academic publications), and epistemic language (reference to “evidence,” “knowledge,” “data,” and “research”). The second trend implies greater public engagement in agenda setting as a result of open access to information, calling into question the corporatist or representative model of democracy (see Rommetvedt, 2017; Rommetvedt et al., 2012). Other scholars (Stehr & Grundmann, 2011) have labeled the second trend as a pluralization of expertise. These two trends—scientization and pluralization of expertise—have triggered a third trend that has only recently been discussed (see Lubienski, 2019): a surplus of evidence. Taken together, all three trends account for the fact that references, that is, the sources of information used to validate the evidence, have gained authoritative status.

Arguably, referencing other texts as an instrument of validation of one’s assertions has become an object of intense scrutiny, including in bibliometric network analyses. The pressure to disclose the sources of information that were used to produce evidence is discernible in the ever-increasing number of references listed in Green Papers. The Green Papers of the Norwegian Official Commissions (NOUs; Norges offentlige utredninger) and the White Papers of the Ministry of Education and Research of Norway make for good cases for demonstrating the trend over time. The relevant papers of the 1996 School Reform in Norway made only sparse use of references, many of which were either embedded in the text or listed as footnotes. Twenty years later, however, there were 246 references on average per relevant Green or White Paper for the 2020 Curriculum Renewal Reform (Baek et al., 2018).

Paradoxically, the proliferation of evidence-based policy planning has added fuel to the crisis of expertise (see Eyal, 2019). Not only has science become politicized and politics scientized, but science has also become demystified in front of everyone’s eyes:

[T]he very discourse on expertise increases uncertainty and threatens legitimacy because now the public is witness to controversies between scientists. (Eyal, 2019, p. 102)

In effect, the proliferation of evidence-based policy planning has brought to light that evidence is considered nothing more, and nothing less, than a subjective assertion backed by information. The boom has generated a surplus of evidence to the extent that there is now the challenge of how to weed out evidence based on relevance and credibility criteria. Concretely, in the wake of complexity reduction, we are witnessing a hierarchization of information (very often with randomized controlled trials at the top and qualitative data at the bottom), rendering some types of evidence more relevant than others. At the same time, the disclosure of the source of information to make a case for the credibility of the evidence, that is, the reference, has become as important as, if not more important than, the information itself. In fact, the legitimacy of the assertion rests in great part on the source of the information itself. For example, a reference here and there to OECD studies has become a sine qua non for policy analysts in Europe because the OECD is seen, in the Foucauldian sense, as the founder of discursivity for a very special kind of policy knowledge that ranks at the top in the hierarchy of evidence, one that operates with numbers and draws on international comparisons to enforce a political program of accountability. Ydesen (2019) has convincingly documented the rise of the OECD as a global education governing complex that uses a range of policy instruments (PISA, Education at a Glance, country reports, etc.) to diagnose and monitor national developments and advance global solutions of a particular kind for national reforms.

There are many reasons why OECD studies are attractive for government officials (see Martens & Jakobi, 2010; Niemann & Martens, 2018). Espeland (2015) and Gorur (2015) masterfully observe that numbers hold advantages over complex narratives because one may attach one’s own narratives to numbers. What is especially appealing to policy actors are OECD-type studies, that is, statistics, scores, rankings, and benchmarks based on international comparisons or on comparisons over time. Novoa and Yariv-Mashal dissect the politics of international comparison and examine how:

[T]his ongoing collection, production and publication of surveys leads to an ‘instant democracy’, a regime of urgency that provokes a permanent need for self-justification. (2003, p. 427)

Espeland (2015, p. 56) explains the dual process of simplification and elaboration involved in using numbers. In the first step, numbers “erase narratives” by systematically removing the persons, institutions, or systems being evaluated by the indicator and the researcher doing the evaluation. In the second step, this technology of simplification in turn stimulates new narratives, or as Espeland astutely observes:

If the main job of indicators is to classify, reduce, simplify, and make visible certain kinds of knowledge, indicators are also generative in ways we sometimes ignore: they evoke narratives, stories about what the indicators mean, what their virtues or limitations are, who should use them to what effect, their promises, and their failings. (2015, p. 65)

Scholars in comparative policy studies have started to explore why PISA and other international large-scale student assessments are so attractive to policy actors and politicians (Addey et al., 2017; Pizmony-Levy, 2018). A few studies focused on the “narrative evoking” phase (Espeland, 2015, p. 65) of such studies have dissected what national governments interpret or project onto OECD reports or other international comparative studies based on their own policy context and agenda (Waldow & Steiner-Khamsi, 2019).

Reference Societies in Comparative Education Research

In addition to the governance-by-numbers argument presented above, for many countries, the OECD represents an attractive geo-political space comprising 36 high-income economies. Therefore, a recourse to OECD publications may be seen both as an affirmation of that affiliation and as an acknowledgment of the OECD as a “reference society” (Bendix, 1978, p. 292) or rather “[transnational] reference space” toward which national governments orient themselves or aspire to belong. In fact, in all five Nordic countries of the POLNET study, OECD publications represent the most cited international texts (see Chap. 11). This may come as a surprise for a non-Nordic audience because one would expect competition between two dominant policy discourses in the five countries of the POLNET study: the Nordic reference space, which traditionally has had a strong commitment to equity, and the OECD reference space, which has a mission to advance economic growth.

Thus, in comparative policy studies, the term “reference” also carries a spatial, geo-political, or epistemological connotation. It is used in connection with “reference society” or “reference space.” The reference as a validation instrument and the reference as a point of epistemological orientation both share a common feature: they position the author (in the case of references) or the state (in the case of reference society) in its larger discursive space.

By now, there is a well-established tradition in comparative policy studies of drawing on references as an analytical tool to situate the positionality of an actor (author, institution, government) in a broader transnational, geo-political space. This body of scholarship is closely associated with studies on the “reference society,” a concept presented by sociologist Reinhard Bendix (1978, p. 292). Bendix uses the term to denote how governments used economic competitors and military rivals as reference societies for their own development. One of the examples discussed by Bendix is the fascination of Meiji-era Japan with the West.

In comparative education, the term was—according to Waldow (2019)—first introduced by Butts (1973), associate dean and professor at Teachers College, Columbia University. Butts observes that the governments of developing countries frequently used a specific educational system in the Global North as a model for emulation. That country’s path to “modernization” served government officials in the Global South as a reference for educational reforms in their own country. It is important to bear in mind here that during Butts’ time, transnational networks and dependencies established during colonial times had endured into the present and determined in great part the choice of reference societies. Another noted historian and comparativist, David Phillips, coined the term “cross-national policy attraction” to denote the keen interest of nineteenth-century British government officials in the educational reforms of Germany (see Ochs & Phillips, 2002). Both Butts and Phillips use records of study visits and government reports as sources for their analyses of cross-national attraction or policy borrowing.

Similar in conceptual framework but different in terms of unit of analysis, Schriewer and Martinez (2004) use bibliometric data to examine the use of reference societies in “educational knowledge,” as reflected in the publications of educational research journals; they ask whether educational researchers in their sample of three countries draw on similar or different bodies of knowledge or texts. They purposefully use a time period of 70 years to see whether a convergence toward a single international canon of scientific educational knowledge, here interpreted as internationalization or globalization, has occurred. Concretely, they examine the references listed in flagship educational research journals in three countries (Spain, Russia/Soviet Union, PR China) and code them in terms of the national origin of the referenced authors. Rather than detecting a pattern of steady internationalization toward a single body of internationally acclaimed authors, they notice considerable fluctuation regarding the space allocated to international scholarship, as measured in the number and type of foreign bibliographical references made in the journal articles of the three countries. They find that the “socio-logic” (particularly political developments in a given country) was a better predictor of receptiveness toward international scholarship than an external logic as manifested in the ever-expanding transnational network of educational researchers.
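The core indicator in such a design, the share of foreign references per journal country and period, is straightforward to compute once every reference has been coded by the national origin of its author. A minimal sketch, assuming that coding has already been done and using invented counts:

```python
from collections import defaultdict

# Each coded reference: (journal country, decade, origin of cited author).
# The records below are invented for illustration.
coded_refs = [
    ("Spain", 1930, "foreign"), ("Spain", 1930, "domestic"),
    ("Spain", 1980, "foreign"), ("Spain", 1980, "foreign"),
    ("USSR",  1930, "foreign"), ("USSR",  1950, "domestic"),
]

totals = defaultdict(int)
foreign = defaultdict(int)
for country, decade, origin in coded_refs:
    totals[(country, decade)] += 1
    foreign[(country, decade)] += origin == "foreign"

# Share of foreign references per country and decade: the indicator of
# receptiveness toward international scholarship in this type of design.
for key in sorted(totals):
    print(key, foreign[key] / totals[key])
```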

In fact, the era of the greatest convergence regarding educational knowledge was in the 1920s and 1930s, when educational researchers in Spain, the Soviet Union, and China were drawn to the work of John Dewey. Once that brief period was over, Dewey was dropped from the reference list in Soviet educational journals and replaced by Nadezhda Krupskaya (Lenin’s wife). It is striking that against all expectations of globalization or international convergence theorists, educational knowledge in these three countries did not become more internationalized until after the mid-1980s, when all three opened their ideological boundaries and increased international cooperation. Even though Schriewer and Martinez’s (2004) justification for their case selection leans on a problematic notion of culture and “civilization,” the design and methodology of the study are compelling and well-suited for analyzing international convergence/divergence processes in educational research.

The link between the reference society and political change has also been well documented in comparative education research. Examples include two cases of a radical change in reference societies as a result of fundamental political changes in post-Soviet Latvia and post-socialist Mongolia, respectively. Silova (2006) examines the erasure of Soviet references and their subsequent replacement with Western European references. She interprets the shift from the Soviet to the Western European reference system as a marker for the new geo-political educational space that Latvia politically and economically had been aspiring to inhabit at the turn of the millennium. What is fascinating about this particular change of political allies is that it merely affected the discursive level, not the practice of separate schooling. The separation of school systems, one for Latvian speakers and another for Russian and other ethnic speakers, continues to exist; however, segregated schools are no longer seen as “sites of occupation” but are now being reframed as “symbols of multiculturalism.” The list of comparative policy studies on reference societies is too long to present in an exhaustive manner. In our own study on Mongolia (Steiner-Khamsi & Stolpe, 2006), we observe the discursive ruptures and reorientation in terms of reference spaces that accompanied the political changes, notably the replacement of the communist Council for Mutual Economic Assistance with the Asian Development Bank, the World Bank, and other international aid agencies of the post-communist era.

Strikingly, studies on reference societies and cross-national policy attraction have experienced a revitalization of a special sort in recent years with the fast advance of international large-scale student assessments (ILSAs), used in many countries as a policy tool for governance by numbers (see Carvalho & Costa, 2015; Volante, 2018; Waldow & Steiner-Khamsi, 2019). Preoccupation with what league leaders (Finland, Shanghai, Singapore, etc.) have “done right” has generated new momentum for policy borrowing research. Precisely at a stage in policy borrowing research when scholars had put the study of cross-national policy attraction to rest and instead directed their attention to the ubiquitous diffusion processes of global education policies in the form of vaguely defined “best practices” or “international standards,” the cross-national dimension—and by implication the focus on the nation-state and its national policy actors—has regained importance in ILSA policy research.

In the case of PISA (Programme for International Student Assessment), the preoccupation of national policy actors is, at least rhetorically, with how their own system scores compare with others and what there is to “learn” from the league winners, league-slippers, and league-losers in terms of PISA’s twenty-first-century skills. Because policy actors often attribute “best practices” to particular national educational systems, the national level has regained importance as a unit of analysis. Therefore, ILSA policy researchers found themselves in a position of having to bring back the focus on national systems, a unit of analysis criticized as “methodological nationalism,” which, if used naively, is a cause for concern because of its homogenizing effects (see Giddens, 1995; Wimmer & Glick Schiller, 2003; Robertson & Dale, 2008).

Research on reference societies has also been refined over the past few years in other ways. For example, intrigued by negative media accounts in Germany about the PISA league leader Shanghai (during the 2012 PISA round), Waldow (2016) scrutinizes the policy usage of “negative reference societies” or “counter-reference societies.” The concept of a reference or counter-reference society is based on commensurability. How do national policy actors make the educational systems of league winners appear comparable to their own educational system in a way that suggests lessons could be drawn? Vice versa, how do they manage to make two educational systems incommensurable and incomparable to avoid lesson-drawing? The disbelief about, or downplaying of, Chinese success in ILSAs, notably in the PISA rounds of 2012 and 2018, is comparable to earlier stereotypical accounts of Japanese or pan-Asian education.

Similar to the US media accounts of A Nation at Risk (1983), in which American policy analysts attempted but ultimately failed to persuade Americans of the great benefits of the German and Japanese educational systems, the education systems of Beijing, Shanghai, Jiangsu, and Zhejiang are, despite “PISA success,” hardly used as models for emulation in Western countries. As with the A Nation at Risk report, the common reaction to Chinese success reflects a “yes, but …” attitude (see Cummings, 1989, p. 296): even though there is a general agreement about the outstanding student performance in ILSAs in Hong Kong, Japan, Korea, Macao, Singapore, and select cities of PR China, there are too many negative stereotypes associated with education in these locations to assign them reference or emulation status. In fact, the exaggerated statements or myths about “Asian education” include images of overly ambitious mothers (“tiger mothers”), excessive use of cram schools, competition and suicide among students, elitist higher education, and social inequality. More often than not, the educational systems in Asia are politically instrumentalized as a counter-reference, that is, as examples of how educational systems should not be developed.

Let us now circle back to the five-country study at hand. In the Nordic region, there is an elephant in the room in the broader ILSA space, making one wonder the following: Do the other countries of the region (Denmark, Iceland, Norway, and Sweden) consider the educational system of Finland (a PISA league leader) a reference or a counter-reference for educational reform in their own system, or are they indifferent toward lesson-drawing from Finland? The coding of the references by their country of publication and the qualitative analysis of thematic cross-references make it possible to examine the fascinating question of a reference society within the Nordic education space (see Sivesind, 2019; Chap. 12).

Clearly, the bibliometric analyses presented in this volume demonstrate that OECD publications eclipse studies from Finland. Two possible yet inconclusive interpretations lend themselves to further investigation: either “Finnish success” is acknowledged but rendered irrelevant for one’s own national context (the “yes, but …” attitude explained earlier), or Finnish success is, for a variety of reasons, including linguistic ones, referenced via an authoritative source of information: OECD publications.

Expertise-Seeking Arrangements in Policy Making

In his dissertation research, Baek (2020) coins the term “expertise-seeking arrangements,” which captures very well the dilemma of governments in an era of evidence-based policy making: Where and how do they seek advice for policy analysis, evaluation, and formulation? Given that “the authority of experts is destabilized” (Eyal, 2019, p. 102) in an era in which scientific evidence production is easily demystified and an ever-increasing number of individuals, including concerned citizens and other laypersons, lay claim to expertise, the question of governments’ expertise-seeking arrangements is taking center stage.

Eyal presents a typology of responses to the legitimation crisis, which is reproduced in Table 2.1 below. His focus is on “regulatory science” or the “interface between scientific research, law and policy” (see Eyal, 2019, p. 7f.).

Table 2.1 Typology of responses to the legitimation crisis

The first strategy of the state is to pretend that science is purified from politics by pursuing “mechanical objectivity” (Eyal, 2019, p. 115), as reflected in references to scores, rankings, numbers, impact evaluations, and quantifiable comparisons. In our case, references to OECD studies, evaluations, and ILSAs belong to this category.

The second strategy of inclusion is to acknowledge that science is politicized—or, as Latour has eloquently put it, “Science is not politics. It is politics by other means” (1984, p. 229)—and, therefore, to include laypersons, interest groups, and other engaged citizens in the advisory bodies of the government.

The third strategy of exclusion is to decouple science from politics and generate “gate-keeping mechanisms designed to maintain an artificial scarcity of expertise” (Eyal, 2019, p. 105; see also Weingart, 2003); these are typically academic associations (academy of sciences) or professional associations that claim exclusive expertise and advise the government.

The fourth and final strategy of outsourcing represents another functional differentiation process. It decouples science from politics to the extent that it delegates research and policy formulation to outside groups, think-tanks, or semiprivate entities. Eyal contends that the fourth strategy is oftentimes a reaction to failed attempts by the state to generate trust or be inclusive through the first three strategies. In particular, the fourth strategy is often a response to the critique that the second strategy—the appointment of ad hoc advisory commissions or government-appointed expert commissions—is merely meant for window-dressing and rarely impacts the ultimate policy decisions and formulations prepared by government officials.

Another useful typology of “advisory system activity” has been developed by Craft and Howlett (2013, p. 193ff.). In general, they find a trend toward the inclusion of nonstate actors (think-tanks, open-data and citizen-engagement-driven policy initiatives, Web 2.0, etc.) and international actors (e.g., OECD, ILO, UN organizations) as policy advisors. This is in stark contrast to traditional advisory systems, which have mainly drawn on national advisory bodies, including statistical offices, strategic policy units within the government, or government-appointed ad hoc commissions.

Naturally, the changing nature of the relationship between politics and science has preoccupied comparative policy studies for a while. One of the earlier and more important comparative studies exploring the interpenetration of the two function systems is the research project “The role of knowledge in the construction and regulation of health and education policy in Europe: convergences and specificities among nations and sectors,” abbreviated as Know&Pol and funded under the Sixth Framework Programme of the European Commission (see, e.g., Fenwick et al., 2014). Knowledge-based regulation, which is analyzed in the Know&Pol research project, has fundamentally changed the role of the state from one that runs schools to one that establishes learning standards and monitors learning outcomes, thereby enabling a multitude of providers, including businesses, to enter the school market. Over the past 20 years or so, the private sector has become not only a major provider of education, but also a key policy actor, lobbying for reforms that further restrict the role of the state in the education sector (Verger et al., 2017).

As mentioned above, knowledge-based regulation has also enlarged the radius of individuals contributing to policy-relevant educational knowledge. An early indication of changes in knowledge production and sharing is the open-access policies that both governments and research councils have put in place recently. Systems theorists Peter Weingart and Justus Lentsch (2008) consider such open-access policies to be part and parcel of a democratization of expertise; in their account, the relationship between science and politics has experienced three distinct shifts over the past 70 years (2008, p. 207ff.). During the early period of scientific policy advice (1950s–1970s), the ad hoc expert commissions insisted on being autonomous and independent from governments. As a corollary, their reports amassed foundational scientific knowledge that policy actors were free to use or ignore. In a second phase (1970s to 1990s), the commissions became increasingly politicized because they were charged with the task of producing policy-relevant scientific knowledge. In the current third phase, the governments in many countries have experienced a shift from “knowledge-based legitimacy” to “participation-based legitimacy.” This also applies to government-appointed ad hoc expert commissions. Governments are under pressure to “democratize” scientific policy advice by (i) providing open access to reviews and expertise, (ii) expanding the definition of “experts” (nowadays including both producers and consumers of knowledge), and (iii) insisting that the knowledge products are useful, that is, provide a clear foundation for stop/go policy decisions.

In the five participating countries of the Nordic region, there is a wide array of expertise-seeking arrangements that these countries’ governments have put in place (see Chap. 10). The Eurydice Report (2017), Support Mechanisms for Evidence-Based Policy-Making in Education, is incomplete (data on Iceland is missing) and too imprecise to provide any useful clues for a categorization of expertise-seeking arrangements. For example, the government-appointed expert commissions in Norway (NOUs) and Sweden (SOUs) that amass evidence to substantiate their evaluation of past reforms and their recommendations for new directions are not mentioned. As documented in the OECD study on policy advisory systems (OECD, 2017), in all five countries, there is a commitment to evidence-based policy planning (in some countries inscribed in law) as well as an extensive stakeholder review or “hearing” process in which draft versions of new policy are opened up for public consultation.

The type of expertise-seeking arrangement in each of the five countries needs to be kept in mind when interpreting the role of experts in producing evidence for policy making. It may be useful to draw on an existing typology of such arrangements. For example, Weingart and Lentsch identify six types of commissions that provide scientific advice for policy making (2008, Chap. 2): (i) policy-domain-specific advisory councils, (ii) expert commissions for risk management, (iii) policy-specific expert commissions, (iv) ad hoc commissions, (v) enquete commissions, and (vi) sector research.

The typology may be used to categorize the five types of expertise-seeking arrangements in the countries of the Nordic region. According to the typology of Weingart and Lentsch (2008), the government-appointed “official commissions” in Norway and Sweden (NOUs and SOUs, respectively), which prepare and help legitimize policy decisions, fall into the category of “ad hoc commissions.” In Denmark, the School Council, which was established in 2006, serves to advise the ministry on topics related to elementary school (see Chap. 4). In Finland, the expertise-seeking arrangement is multisited or hybridized, according to Holli and Turkka (2021). The government-appointed ad hoc commissions of the kind that still exist in Norway and Sweden were abolished in Finland in 2003 and replaced with broad-based working groups. The representation of academics in these working groups declined over time, constituting only 4.7% of all members in 2015. At the same time, the Government of Finland pluralized the policy advisory system:

[T]he policy advisory system of Finland shows signs of hybridisation, as the channels and organization of policy advice have pluralised and advice has taken new forms. (Holli & Turkka, 2021, p. 58 [translation by the authors])

The partial externalization of policy advice is manifested in the rise of “state investigators,” or consultants hired to produce government-commissioned reports, and in a general outsourcing of policy research. According to the authors, Finland has created a “research market,” where the state buys the research it needs for policy preparation. In Iceland, finally, the composition of advisory bodies is strictly regulated in terms of gender and political parties to ensure an inclusive consultative process. According to the OECD survey on policy advisory systems, which was carried out in 17 countries, including the five Nordic countries studied here, the policy advisory system in Iceland requires that at least 40% of the members of ad hoc advisory commissions are female and that all political parties are represented (OECD, 2017).

The OECD has formulated five quality standards for policy advisory systems: adaptability, transparency, autonomy, inclusiveness, and effectiveness. The OECD 17-country study (OECD, 2017) presents a positive assessment of the ad hoc advisory committees found in the Nordic region:

Ad hoc advisory bodies […] are often used by governments to gather evidence-based answers to particular questions relatively quickly. They often serve as a “fast track” and specialized option for governments to obtain advice. The Nordic countries have well-established traditions of creating ad hoc bodies to enhance the adaptability of the system. (OECD, 2017, p. 17)

Implications for the Five-Country Bibliometric Network Analyses

In the parallel universe of “gray literature” or technical reports, which are often commissioned by international organizations, there is an interesting discussion unfolding on the rapid spread of “global public goods” (GPGs) or global knowledge banks. GPGs include, for example, openly accessible international toolkits, documents, studies and databanks, training modules, good practices, and global monitoring reports. Clearly, GPGs are openly accessible information, nowadays often backed with numbers, that any local, national, or international organization may use to substantiate its production of evidence. The rapid spread of GPGs was addressed by a few scholars around the turn of the millennium (notably Stone, 2000), but curiously, the discussion has not yet gained traction in academic debates. Hence, a brief summary of the debates, carried out in the context of development studies, may be in order here.

Within development studies, the discussion is now bifurcating in at least two different directions. One group of authors makes the argument for more funding for GPGs produced by a more diverse body of researchers (based in both the Global North and the Global South), whereas another group critically examines the uptake of GPGs in policy making at the national level (see Vasquez Cuevas, 2020).

In some countries, a wide range of propositions have been made about how to remedy the shortfalls related to global agenda setting, channeling of aid, and GPGs (see Schäferhoff & Burnett, 2016). Some suggestions entail more funding at the global level, whereas others notice that the production of GPGs is mostly done by a few global actors (OECD, the World Bank, the UN system) at the expense of national research institutions outside of North America, Europe, and Australia. This applies in particular to think-tanks, research institutions, and universities in the Global South, whose knowledge products are rarely taken up at the global level. Examples of more funding to the Global South include Oxfam’s early suggestion to eliminate one-size-fits-all benchmarking processes and dedicate three grants for capacity building to recipient governments and civil society organizations of low- and lower-middle-income countries (Oxfam, 2010).

For a while, the question arose whether the World Bank, UNESCO institutions, UNICEF, the Global Partnership for Education, or other international organizations should earmark funds for research capacity building and policy analysis. One of the early suggestions was to increase funding for the global and regional agencies of UNESCO and UNICEF to advance cross-country sharing of knowledge on education and development. In addition to statistics, the UN organizations would use the funds to disseminate knowledge derived from research and from global sharing of experience. Others found the World Bank to be ideally suited for helping expand research funding and activities given its commitment to evidence-based policy making; they recommended that researchers at the World Bank work more closely with other staff on country-level policy advice (Clemens & Kremer, 2016). Unsurprisingly, the lively debate in development studies is about whose knowledge is made publicly available at the global level and whose knowledge is confined to the national boundaries of its producers.

In OECD countries, the debate over the asymmetry of global knowledge production and uptake seems to center more on the language of publication (English vs. all other languages) than on the center/periphery differentiation in an unequal world system. Some countries in the Nordic region (Sweden and Norway in particular) require that all public documents be made openly accessible. The transparency standard for policy advisory systems, which is forcefully promoted by the OECD (2015, 2017), is also practiced in the other three Nordic countries.

The open-access policies and practices have both enabled and exacerbated the “participatory turn” or the “pluralization of expertise” (see Krick et al., 2019), which has been explained above. The decline in corporatism or interest group representation in policy advisory systems is but one of the manifestations of this trend. Another manifestation is the surplus of evidence.

In the US context, Lubienski (2019) contends that there is not a scarcity but rather a “surplus of evidence.” In such a “marketplace of ideas,” there is ample opportunity for new, nonstate actors—specifically the private sector—to serve as intermediaries between research production and policy making:

Into the chasm between research production and policy-making, we are seeing the entrance of new actors—networks of intermediaries—that seek to collect, interpret, package, and promote evidence for policymakers to use in forming their decisions. (Lubienski, 2019, p. 70)

Indeed, two decades after neoliberal calls for less politics and more scientific rationality in the policy process, we are now entering a new phase in policy making: the stage of surplus of evidence in which calls for “actionable research,” “policy-relevant research,” or “what works” studies are being heard. At the 2015 Public Governance Ministerial Meeting in Helsinki, the ministers from OECD countries agreed (OECD, 2015, p. 3) that evidence alone is not sufficient. Evidence needs to be policy relevant, robust, and comparable:

We acknowledge the importance of evidence as a critical underpinning of public policies and recognize the need for a continuous effort to develop policy-relevant evidence on government performance that is robust and comparable. “What Works” initiatives are an example of how to ensure systematic assessment and leverage the stock of information on good practices available at the international level on policy impact. (OECD, 2015, p. 3)

In the Nordic policy context, the OECD plays a major role as a transnational policy advisor and standard setter. During the first phase of evidence-based policy planning, the OECD advanced the notion of autonomous policy advisory systems that produce evidence independently of the state. In today’s stage of evidence overproduction, the OECD offers itself as an interpreter of evidence by selecting from the marketplace of ideas those that are actionable and in line with the broader political program of accountability.

The proliferation of GPGs and the overproduction of evidence make it pressing to investigate the changing role of government-appointed advisory commissions in the production and interpretation of evidence. A host of research questions open up once we acknowledge that the presentation of “facts” (information) and transformation of these facts into evidence rests on a subjective selection process that reflects the frame of reference or broader discursive orientation of the author.

Based on the elaborations presented above, we have several research questions that lend themselves to a systems-theoretical preoccupation with evidence-based policy planning: First, what sources of information do government-appointed advisory panels consider credible and, therefore, select to reduce uncertainty and generate trust? Second, what kind of hierarchization of evidence do government-appointed advisory panels generate to reduce complexity? Third, bearing in mind the three most common types of externalization, what type of externalization do the cited texts represent: reference to (i) tradition or values (e.g., the Nordic value of equity), (ii) organization (e.g., reference to laws and regulations), or (iii) scientific rationality (e.g., studies and evaluations)? Fourth, in which broader epistemic community, or rather political reference space, do the authors situate themselves? Finally, given the academization of government-appointed advisory commissions in some countries (Christensen & Hesstvedt, 2018), as a result of which the number of academics serving as members increased at the expense of interest group representatives, the phenomenon of structural coupling between science and politics (Steiner-Khamsi et al., 2019; Weingart, 2003; Stehr & Grundmann, 2011; see also Chap. 10) offers itself as an object of empirical scrutiny.
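For the third question, the assignment of cited texts to externalization types could in principle be semi-automated. The sketch below is a deliberately crude, rule-based tagger with invented cue words; in a study such as POLNET, the coding would follow a codebook and be done manually or with a validated classifier.

```python
# Invented cue words for the three externalization types named above.
EXTERNALIZATION_CUES = {
    "tradition_or_values": ["tradition", "equity", "values"],
    "organization": ["act", "regulation", "ordinance", "directive"],
    "scientific_rationality": ["study", "evaluation", "survey", "meta-analysis"],
}

def code_externalization(title: str) -> str:
    """Return the first externalization type whose cue words appear in the
    title (crude substring matching; sufficient only for a sketch)."""
    lowered = title.lower()
    for ext_type, cues in EXTERNALIZATION_CUES.items():
        if any(cue in lowered for cue in cues):
            return ext_type
    return "uncoded"

print(code_externalization("Education Act of 1998"))          # organization
print(code_externalization("Evaluation of the 2006 reform"))  # scientific_rationality
```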

This chapter has attempted to demonstrate how a bibliometric network analysis may be used as a method of inquiry for understanding legitimization processes in evidence-based policy making. Drawing our attention to how uncertainty is reduced and trust in evidence is created is essential in an era in which there is a surplus of, and by implication competing notions of, evidence. A bibliometric investigation of the references in a text provides important clues about the (i) selection of sources of information, (ii) hierarchization of evidence, and (iii) type of externalization made by an author to leverage authority for the claims made in the text. A network analysis of the references further helps complicate the findings. In fact, the focus on relations brings to light that different authors use the same references for different purposes, that is, one and the same reference may show up in, and bridge, two different knowledge networks. Perhaps needless to reiterate, the long list of fascinating research questions, which are presented in a nonexhaustive manner in this chapter, gains additional appeal when investigated across the five different national contexts within the Nordic region.