Abstract
Previous empirical analyses have revealed the characteristics a university should possess to be associated with engagement in competitive international innovation programs. Within this context, this paper aimed to assess the importance of academic reputation in determining high-level participation in the most recently concluded European Union Framework Program (i.e., Horizon 2020). By considering three countries (i.e., Italy, Sweden, and the United Kingdom), and using qualitative comparative analysis (QCA), we showed that academic reputation is a key condition among high-scoring universities above a certain threshold (i.e., the top 25% of organizations). This finding should prompt further research and has policy implications: it is essential to better integrate organizations considered less prestigious into targeted, highly competitive funding schemes to foster more widespread and even participation.
Introduction
Many studies have shown that participation in competitive—and highly selective—innovation networks is concentrated among a few organizations, which are generally located in more advanced, innovative regions (e.g., Roediger-Schluga & Barber, 2008). Although some of these studies have performed in-depth examinations of specific organization types (e.g., universities), and have revealed which factors primarily affect their degree of participation (e.g., Cavallaro & Lepori, 2021; Lepori et al., 2015; Piro et al., 2020), limited attention has been devoted to a critical element: academic reputation. The few previous empirical analyses addressing this topic have led to divergent results and have not clearly determined whether academic reputation is a key driver of participation (e.g., Enger & Castellacci, 2016) or whether, conversely, academic reputation is fostered by participation in international innovation programs and collaborative funding schemes (see Calignano, 2021).
Although we cannot exclude the possibility that participating in innovation programs may enhance an organization’s reputation, this paper aimed to address a relevant and heretofore understudied research question: Is academic reputation a key factor in determining universities’ participation in international innovation networks?
Based on previous relevant empirical analyses, which benefited from large volumes of transnational collaborative funding scheme data (e.g., Autant-Bernard et al., 2007; Barber & Scherngell, 2013; Wanzenböck, 2020), our research was conducted within the context of the latest European Union (EU) Framework Program (FP): Horizon 2020 (H2020). Specifically, we compared participation among all universities within each of three countries, which are characterized by different socioeconomic conditions, including innovation outcomes and higher education systems (i.e., Italy, Sweden, and the United Kingdom [UK]).
Using qualitative comparative analysis (QCA), we were able to demonstrate that—with a few caveats that we discuss in the Methodology section—academic reputation is clearly a key feature among the organizations that scored very high (i.e., within the top 25%) in this targeted EU innovation program. However, the importance of academic reputation decreases when a lower threshold for determining high-level participation is set (i.e., scoring above the median, or in the top 50%).
Literature review
The FP represents the most important EU innovation policy implementation, the aim of which is to fund highly qualified consortia comprising various organization types (e.g., private companies, universities, research centers, agencies) that conduct research with potential industrial applications at the forefront of different fields and socioeconomic spheres (e.g., health, mobility, environment, key enabling technologies) (CROS, 2023).
Throughout the various programming cycles, which were launched in 1984 and generally last 5–7 years, the EU has continuously fostered scientific and business excellence, while attempting to enable less equipped organizations and regions to catch up with those that are more advanced (European Parliament, 2017). However, empirical evidence suggests that EU FPs are largely characterized by unbalanced participation, with a few organizations and regional areas leading, and failing to achieve widespread knowledge exchange among a greater number of organizations (e.g., Calignano, 2022).
Breschi and Cusmano (2004) were among the first to reveal the “elitist club” of organizations that represent the core EU-funded innovation networks throughout the FPs. This important finding has since been confirmed, showing that EU programming cycles are distinctly characterized by long-term disparities between a small core of strongly connected organizations and a large number of peripheral, loosely tied organizations in generally less competitive, regional areas (Muscio & Ciffolilli, 2018).
Organizations with a central position in innovation networks like the EU FPs benefit from their high degree of participation in two main ways: first, they acquire relevant novel or complementary knowledge from qualified partners; second, they receive vital EU funds during periods when national governments tend to reduce their domestic research budgets (Schulze-Krogh & Calignano, 2019, 2020). Both of these benefits make national university participation in competitive EU innovation programs very important, or even—especially in the case of less competitive or underfunded organizations—essential (Calignano, 2022).
Universities with high levels of participation in competitive EU innovation programs share specific organizational, geographical, and relational characteristics. Organizationally, larger universities participate more often than those that are smaller, by virtue of their higher quality infrastructures, better administrative capacities, and related abilities to manage a large number of applications and funded projects simultaneously (Cavallaro & Lepori, 2021; Sciacca, 2013). Geographically, as noted above, location influences participation, as highly participating universities are generally in more wealthy, dynamic, and innovative regional areas at the EU or domestic level (e.g., Calignano, 2022). Regarding relational characteristics, through a mechanism resembling “preferential attachment” (see Calignano et al., 2019), high-scoring organizations often rely on a cumulative process through which—by leveraging their status and existing networks—they can add new collaborating partners with whom they have previously established long-lasting relationships through various EU programming cycles (Pohoryles, 2002; Barber et al., 2006; Nokkala, 2007; Calignano, 2014).
Although there is widespread consensus about the positive impacts of these factors in determining universities’ degree of participation in international innovation networks, the role of academic reputation—which represents the main variable of interest herein—remains contested. Specifically, the few studies that have directly or indirectly addressed this topic have neither unequivocally identified an association between academic reputation and the level of EU FP participation nor determined the direction of this presumed association. In other words, the previous, discordant findings preclude clarification of whether academic reputation actually leads to greater participation in international innovation programs. Moreover, some scholars argue that the relation between these two variables runs in the opposite direction: that high participation in important innovation networks itself contributes decisively to shaping an academic institution’s reputation.
Using an original database of information about applications and funded EU projects, Enger and Castellacci (2016) demonstrated that whereas academic reputation does not influence the number of submitted applications, it does have a clear, positive impact on the rate of success. This finding was reinforced by those of Cavallaro and Lepori (2021), Lepori et al. (2015), and Piro et al. (2020), according to which the positive impact of academic reputation on the level of EU FP participation is beyond doubt. However, other studies have provided more nuanced or divergent evidence regarding this positive association. Nokkala et al. (2011, p. 127) argued that “[w]hile high scientific excellence or a good reputation is beneficial for the university to become a frequent project coordinator, this fact is less important for project participation or even hindering it.”
Several other scholars have similarly suggested that the relation between academic reputation and successful participation in international innovation networks runs in the opposite direction. Among these, Wagner and Leydesdorff (2005) highlighted that universities are interested in joining international collaborative innovation networks because doing so both improves their reputation and provides them with other benefits and resources. Other scholars likewise argue that strong participation in innovation networks, and specifically in the EU FPs, is associated with improved credibility (Schütz et al., 2018) and academic reputation (Calignano, 2021).
Although we acknowledge that strong participation in prestigious international innovation networks may represent one factor that contributes to shaping or strengthening a university’s reputation—likely in combination with other key elements like teaching, legacy, personal relationships, and individual parameters (see Amado Mateus & Juarez Acosta, 2022)—herein, we are specifically and exclusively interested in determining whether there is a clear, direct relation between academic reputation and degree of EU FP participation.
Although they are often treated as synonyms, academic reputation and scientific excellence do not necessarily overlap completely (Calignano, 2021). Rather, the latter refers to the actual, and presumably more objective (although still approximate) quality of a university (Anowar et al., 2015) whereas the former refers to how this quality is generally perceived. Different rankings capture different reputation types; based on our comparison of the various current rankings and availability of ranking data for the past decade, we used the Quacquarelli Symonds World University Rankings (QS2013). This decision is more thoroughly justified in the QCA procedure section below.
Methodology
QCA was used herein to examine whether academic reputation is a crucial factor in determining a high degree of H2020 participation. QCA allows researchers to identify causal relations between a given outcome and different combinations of selected factors (in QCA terminology, “configurations”) (Ragin, 2000). In other words, if we draw an analogy between QCA and regression analysis, an “outcome” corresponds to a dependent variable, whereas “conditions” are akin to independent variables.
More specifically, QCA makes it possible to find a “multiple conjunctural causation” across observed cases. This means that QCA is a particularly appropriate technique for studying complex systems because it is based on the consideration that “[d]ifferent causal “paths”—each path being relevant, in a distinct way—may lead to the same outcome (De Meur & Rihoux, 2002). […] “Multiple” here refers to the number of paths, whereas “conjunctural” conveys the notion that each path consists of a combination of conditions. Thus, multiple conjunctural causation contains the notion of equifinality, which simply means that different paths can lead to the same outcome” (Berg-Schlosser et al., 2009: 8). Based on this, Berg-Schlosser et al. (2009) explain that, under a deductive approach, the variables making up QCA (conditions and outcome) must be carefully selected by researchers and theoretically informed. However, QCA techniques can also be used more inductively, by identifying the key “ingredients” (conditions) that may lead to a given outcome.
We adopted the crisp-set (csQCA) approach, in which both outcome and conditions are dichotomous (i.e., 1 is the presence of the outcome and a given condition, 0 is their absence). In some cases, defining the presence or absence of an outcome or condition is relatively easy (e.g., being listed in a given university ranking or not), whereas in others the researcher must determine a credible threshold. This is the defined “crossover point” (see Ragin, 2000) and is determined by the individual researcher based on factors including theoretical and conceptual considerations, knowledge of the context in which the study is conducted, and previous analogous empirical analyses.
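To illustrate, the crisp-set calibration step can be sketched in a few lines of Python. This is a hypothetical example with invented participation counts, not our actual dataset, and the `percentile` helper is our own illustrative function (the real calibration was performed in fs/QCA software):

```python
def percentile(values, q):
    """Linear-interpolation percentile of a list of numbers."""
    s = sorted(values)
    pos = (len(s) - 1) * q / 100
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

# Hypothetical H2020 participation counts for eight universities.
participations = [12, 754, 65, 411, 30, 96, 344, 5]

# Crisp-set calibration: the 75th percentile is the crossover point,
# so cases strictly above it are coded 1 (outcome present), else 0.
crossover = percentile(participations, 75)
outcome = [1 if p > crossover else 0 for p in participations]
```

The same dichotomization applies to each condition; for a membership-based condition such as appearing in a ranking, the 0/1 coding is direct and no crossover point is needed.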
Herein, our cases were represented by universities in three countries—Italy, Sweden, and the UK—which were selected based on their differing characteristics, with the aim of capturing whether the possible impact of academic reputation on the level of EU FP participation is widespread and observable in diverse organizational and socioeconomic contexts. The first relevant difference among them is innovativeness. According to the European Innovation Scoreboard (European Commission, 2020), Sweden is a leading innovative country, followed by the UK (strong innovator) and Italy (moderate innovator). Moreover, the three countries clearly differ in terms of size (the UK and Italy are larger countries) and disposable income and income inequality (i.e., two indicators in which Sweden performs considerably better). Finally, the UK has more, and better positioned, academic organizations in university rankings compared with Italy and—to a lesser extent—Sweden.
The ability of a given university to score higher than the 75th percentile (i.e., top 25%) in terms of H2020 participation in their respective country was considered a positive outcome. This threshold was chosen based on the research question, which drove our empirical analysis and the related necessity to demonstrate whether a high degree of participation is determined by reputation. Similarly, we used the 75th percentile for all conditions used to determine which combinations led to the outcome, excepting academic reputation (ACAD_REP), for which—as briefly mentioned before—a binary variable based on being (1) or not being (0) within a specific university ranking was adopted. The other model variables were university size (SIZE), level of internationalization (INTER), science, technology, engineering, and mathematics orientation (STEM), regional per capita gross domestic product (GDP), and presence of high-technology sectors (TECH) in the university’s region. More details about these conditions and the rationale for their use in our QCA model are in the QCA procedure section.
Briefly, the aims of our csQCA were twofold. The first aim was to determine the configuration that allows universities to be strongly involved in highly competitive and selective innovation networks, allowing them to benefit from relevant knowledge flows and vital research funds. The second, and more important, aim was to determine whether academic reputation drives participation in competitive and selective innovation networks, such as FPs, to definitively clarify whether it is a necessary variable.
QCA procedure
The data for each condition were from 2013 or—in the case of rare, single observations for which data were unavailable—from 2012 or 2011. Regardless, all data were from before 2014 (i.e., when H2020 FP began). This allows us to delineate a clear direction in the relation between each university’s characteristics before the target FP and their subsequent level of participation (i.e., 2014–2020, during H2020 execution).
Our QCA was conducted using the following procedure, which was repeated for each of the three countries analyzed (Italy, Sweden, and the UK). First, we created a “truth table,” which is essentially a matrix showing all possible configurations and outcomes for the targeted cases. Subsequently, through a “logical minimization” process, we found the minimal formula allowing us to determine which combinations of conditions are sufficient to produce the outcome (i.e., being within the top 25% scoring organizations in the H2020 FP). In simpler terms, the logical minimization process (with a consistency threshold of 0.80; see Ragin, 2008) eliminates irrelevant causal conditions from the original matrix, with the aim of finding the smallest causal configuration sufficient for a given outcome to occur (Ragin, 1987). As shown in Table 1, outcome Y happens in both cases (value = 1), but condition B is present in the first case (value = 1) and absent in the second (value = 0). Based on logical minimization, condition B is considered causally irrelevant and must be eliminated accordingly, whereas condition A is the only condition consistently associated with outcome Y. This operation is repeated for all possible case pairs, to determine whether they differ by exactly one condition. When this happens, the condition in which the two cases differ is eliminated, to produce the “implicants.” The same procedure is then iteratively repeated for each implicant pair until it is possible to identify a final solution with the smallest number of combined causal conditions associated with the outcome (Dușa, 2019).
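The pairwise comparison underlying logical minimization can be sketched as follows. This is a deliberately simplified, illustrative Python fragment showing a single reduction step on the two-condition example just described; the fs/QCA software implements the full iterative algorithm:

```python
def reduce_pair(conf_a, conf_b):
    """If two configurations that share the same outcome differ in
    exactly one condition, that condition is causally irrelevant and
    is replaced by '-' (a 'don't care') to form an implicant.
    Returns None when the pair cannot be reduced."""
    diffs = [i for i, (a, b) in enumerate(zip(conf_a, conf_b)) if a != b]
    if len(diffs) != 1:
        return None
    implicant = list(conf_a)
    implicant[diffs[0]] = "-"
    return tuple(implicant)

# Two truth-table rows (conditions A, B) that both produce the outcome:
case1 = (1, 1)  # A present, B present
case2 = (1, 0)  # A present, B absent
implicant = reduce_pair(case1, case2)  # B is eliminated: (1, '-')
```

In the full procedure this comparison is repeated over all case pairs, and then over the resulting implicant pairs, until no further reduction is possible.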
We also analyzed necessary conditions, allowing us to identify which single condition (if any, and albeit insufficient alone) is necessary to produce the outcome in combination with other conditions (i.e., all situations in which both a given condition and the outcome are simultaneously present in a configuration) (Ragin, 2008).
The logical minimization procedure allows the identification of necessary and sufficient conditions, which are then validated for consistency and coverage (for more QCA details, terms, and score interpretation, see Fiss, 2011; Longest & Vaisey, 2008; Ragin, 2000, 2006, and the sections herein on QCA and sufficiency analyses). fs/QCA software was used to conduct our csQCA.
Variables
Based on the main literature and a thorough consideration of factors that may significantly impact university EU FP participation, we selected the most appropriate conditions for our empirical analysis (see Methodology section). In particular, we followed the method described by Cavallaro and Lepori (2021) and Lepori et al. (2015), who adopted both organizational (ACAD_REP, SIZE, INTER, and STEM) and geographical (GDP and TECH) variables with the potential to positively influence the degree of university participation in competitive international innovation networks like the EU FPs (see Table 2).
ACAD_REP, our main variable of interest, was calculated by examining whether universities in each targeted country were included in the top 700 organizations in the renowned QS Ranking in 2013. This database was chosen over other similarly reliable and informative sources (e.g., Academic Ranking of World Universities, Times Higher Education, Leiden) because it is primarily based on academics’ perceptions of other universities (i.e., two reputation surveys of academics’ opinions of other universities account for > 50% of the total score). This was ideally aligned with our concept of academic reputation herein (i.e., subjectively perceived prestige, rather than more objective scientific excellence; see the Literature review section for details).
According to several previous studies (e.g., Piro et al., 2020), university SIZE positively impacts EU FP participation. We assert that two additional factors represent potential critical conditions. First, high levels of INTER (i.e., the proportion of foreign academic personnel) might foster transnational collaborations via social proximity (e.g., acquaintances, previous common work experiences; see Boschma, 2005). Second, high levels of STEM (the proportion of graduates in science and technology) might have a positive impact by virtue of the predominant role scientific and technological themes play in the EU FPs (Sonetti et al., 2020).
Regarding the geographical dimension of our analysis, we considered GDP and TECH particularly relevant because the most central organizations, which represent the persisting core of the EU innovation networks throughout its various cycles, tend to be located in countries’ more wealthy and technologically advanced regions, irrespective of the degree of development and innovativeness (see Calignano 2022).
Empirical analysis
Descriptive statistics
As specified in the previous sections, we compared the universities within Italy, Sweden, and the UK to determine whether academic reputation is an essential condition in determining the degree of competitive innovation network participation.
Table 2 shows descriptive statistics for the three target countries. We surveyed 159 organizations in the UK, 78 in Italy, and 38 in Sweden. On average, the Swedish universities participated more often in the EU FPs and, consistently, had the highest threshold value within the 75th percentile (i.e., the top 25% scoring organizations: > 96.5 vs > 65 in the UK and > 64 in Italy; see Table 3). However, the top-scoring organization in absolute terms was in the UK (the University of Cambridge, with 754 participations), whereas the top-scoring university in Italy, interestingly, had more funded applications compared with the corresponding Swedish university (411 vs 344 participations). Considering the three countries together, 10 UK organizations were in the top 15 scoring universities, followed by three Italian organizations (the highest ranked of which was the Polytechnic University of Milan, ranked 5th) and two Swedish organizations (the highest ranked of which was KTH Royal Institute of Technology, ranked 9th) (see Table 4).
As described above, we used a dichotomous variable to differentiate between universities with higher and lower rankings (1 = ranked in the QS2013; 0 = not ranked in the QS2013). We were thus able to determine that, as shown in Table 3, the UK showed the highest share of ranked domestic institutions (38%), whereas Italian (25%) and Swedish (22%) organizations were less frequently listed in the top 700 prestigious universities, according to the QS2013 ranking.
Regarding the other variables included in our final dataset, the Italian universities were larger (SIZE) and the UK universities showed considerably higher INTER levels, whereas STEM orientation did not differ significantly among the three countries.
Regarding geographical conditions, the UK and Sweden had a higher average regional GDP than Italy, although the UK was also characterized by the largest regional inequality. Finally, the Swedish regions were more technologically advanced (TECH) on average, although the most advanced region in absolute terms was in the UK (South East England, where high-technology sectors account for 10.3%).
QCA—Analysis of necessary conditions
In the first step, we analyzed the necessary conditions, allowing us to determine whether ACAD_REP or any other condition is necessary to produce the outcome (i.e., scoring in the top 25% of H2020 participating organizations within the respective country). Specifically, a given condition X is necessary when its presence does not automatically produce outcome Y (i.e., X = 1, with Y = 1 or 0), but the outcome cannot occur without it (i.e., if Y = 1, then X = 1). In other words, the “necessary condition is a bottleneck, a constraint, for the outcome to exist” (Vis & Dul, 2018: 873). Thus, two measures are used to determine whether a single condition is necessary to produce an outcome: consistency and coverage.
Consistency refers to how often condition X is present when outcome Y occurs. Consistency ranges from 0 to 1, where 1 means the targeted condition is always present when the outcome is present. In the case of csQCA, the recommended level of consistency is ≥ 0.90; in other words, at most 10% of cases may show the outcome without the condition. However, a condition can be necessary without its presence being automatically associated with the outcome. For this reason, coverage is used to determine the degree of co-occurrence between a necessary condition and the outcome. This measure resembles the explained variance, or R-squared, used in other statistical approaches. High coverage indicates that the presence of a necessary condition accounts for a large proportion of the cases in which the outcome occurs. Thus, scores ≥ 0.60 indicate the relevance of the identified necessary condition, whereas scores below that threshold indicate its triviality (Schneider & Wagemann, 2010).
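In crisp-set terms, these two measures reduce to simple case counts. The following is a minimal illustrative sketch with invented 0/1 data, not our actual dataset:

```python
def necessity_scores(condition, outcome):
    """Consistency and coverage of a condition as necessary for an
    outcome, for crisp sets (parallel lists of 0/1 values per case)."""
    both = sum(1 for x, y in zip(condition, outcome) if x == 1 and y == 1)
    consistency = both / sum(outcome)  # share of outcome cases showing X
    coverage = both / sum(condition)   # relevance: share of X cases with Y
    return consistency, coverage

# Hypothetical data: the condition holds in 5 cases, the outcome in 4,
# and every outcome case also shows the condition.
X = [1, 1, 1, 1, 1, 0, 0, 0]
Y = [1, 1, 1, 1, 0, 0, 0, 0]
cons, cov = necessity_scores(X, Y)  # cons = 1.0, cov = 0.8
```

Here the condition passes the necessity test (consistency of 1.0 ≥ 0.90) and is non-trivial (coverage of 0.8 ≥ 0.60).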
Table 5 shows that ACAD_REP is a necessary and relevant condition in Italy and the UK. In addition, although our main variable of interest cannot be fully considered a necessary variable in Sweden due to its consistency score being below the recommended threshold, ACAD_REP shows both high coverage (> 0.6) and the highest consistency and coverage scores in absolute terms (together with INTER).
It is important to highlight here that, apart from ACAD_REP (necessary in two of the three targeted countries), none of the other variables is necessary to determine high university H2020 FP participation, and they also show relatively low consistency and coverage scores. The only exceptions are SIZE in Italy, SIZE and the aforementioned INTER in Sweden, and—to a lesser extent—INTER in the UK.
Analysis of sufficiency
After determining that ACAD_REP is a necessary condition for universities to achieve very high H2020 participation, we analyzed sufficiency. This procedure represents the core of QCA, allowing us to find the minimal configurations of conditions sufficient to produce the outcome. Specifically, we first created the truth table listing all possible configurations, then identified the sufficiency relations through the logical minimization process.
Similar to the analysis of necessity, solution consistency and solution coverage need to be calculated and reported to assess the validity of this analysis. Specifically, solution consistency refers to the proportion of cases showing a given combination of conditions that also exhibit a positive outcome. According to previous research, the minimum acceptable consistency level is 0.80 (Ragin, 2008), while a score of 1.0 indicates that when a given sufficient combination of conditions is present, a positive outcome is always present. The solution consistency scores for our QCA models were 0.88 for the UK and 1.00 for Italy and Sweden. In addition, solution coverage is a measure used to determine the extent to which the outcome of interest may be explained by the configurations identified through QCA. The high solution coverage scores (ranging from 0.59 to 0.89) for Italy, Sweden, and the UK show that our QCA results were reliable and particularly robust (see Ragin, 2006).
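For crisp sets, the sufficiency measures mirror those of necessity, with the roles of the condition combination and the outcome swapped in the denominators. A brief sketch with invented membership scores (not our actual data):

```python
def sufficiency_scores(combination, outcome):
    """Consistency and coverage of a condition combination as sufficient
    for an outcome (crisp sets: parallel lists of 0/1 values per case)."""
    both = sum(1 for c, y in zip(combination, outcome) if c == 1 and y == 1)
    consistency = both / sum(combination)  # how often the combination yields Y
    coverage = both / sum(outcome)         # how much of Y the combination explains
    return consistency, coverage

# Hypothetical data: the combination is present in 4 cases, all showing
# the outcome; the outcome also occurs once without the combination.
C = [1, 1, 1, 1, 0, 0]
Y = [1, 1, 1, 1, 1, 0]
cons, cov = sufficiency_scores(C, Y)  # cons = 1.0, cov = 0.8
```

In this example the combination is fully consistent (1.0 ≥ 0.80) but explains only part of the outcome (coverage 0.8), leaving one outcome case to be accounted for by other configurations.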
Regarding the other scores reported in Tables 6, 7, and 8, raw coverage refers to the extent to which outcome Y is explained by a specific combination of conditions, whereas unique coverage quantifies the extent to which such an explanation can be exclusively attributed to that combination, and not others (i.e., the percentage of cases that are not simultaneously represented by other condition combinations). In our QCA models, there were few combinations with unique coverage of 0 (i.e., two for Italy and two for the UK). Although empirically valid and often reported (e.g., Australian Government, 2022; Porfírio et al., 2021), combinations with 0 unique coverage should be interpreted with caution or disregarded because they are considered “logically redundant” (i.e., all cases belonging to them are already explained by other combinations) (Schneider & Wagemann, 2009). Thus, we overlooked these, as explained in the notes to Table 6 (Italy) and Table 8 (the UK). Finally, an additional consistency score was based on the same concepts illustrated above and applied to the single combinations of conditions, as reported in Tables 6, 7, and 8.
As illustrated above (see Tables 6, 7, and 8), ACAD_REP played a key role in all three targeted countries. In particular, the main variable of interest was present in all configurations sufficient to produce the outcome in Italy, sometimes in combination with a geographical variable like regional GDP and an organizational variable like STEM. All other conditions played marginal roles, particularly TECH, which did not contribute to any configuration and showed low levels in several combinations (see white circles in Table 6).
We were able to identify four configurations sufficient to produce an outcome in Sweden—the only country for which ACAD_REP was not a necessary condition. However, academic reputation still represented an important condition in this country as it was present in three combinations. SIZE and INTER were two other important variables for determining high H2020 participation, whereas STEM and (even more so) the geographical variables GDP and TECH played marginal roles. The latter showed high values only in Combination 4, along with ACAD_REP and all other conditions, excepting STEM (see Table 7).
Finally, the analysis of sufficiency for the UK is in Table 8. In this case, ACAD_REP was in four out of five combinations, with the only exception being Combination 5. Interestingly, in this combination, ACAD_REP showed low scores, whereas SIZE, INTER, STEM, and TECH were the combination of conditions that led to a high rate of H2020 FP participation. Finally, it should be stressed that ACAD_REP was often combined with both other organizational (SIZE, INTER, and STEM) and geographical (TECH) variables when the outcome was achieved.
Testing QCA against alternative models
We conducted several tests against alternative QCA models, specified with different variables and thresholds, to achieve a more robust, accurate, and insightful set of findings.
First, we used a different outcome with a different crossover point. In this analysis, the median represented the new (lower) threshold through which we identified the presence (1) or absence (0) of the outcome. Similarly, the median was set as the crossover point for all the other conditions. ACAD_REP was calculated as in the main QCA model (1 = being included in the QS2013, 0 = not being included in the top 700 universities by the QS2013).
The alternative QCA, based on the newly calibrated variables, revealed different results. Most importantly, ACAD_REP was no longer a necessary variable to achieve the outcome in any of the targeted countries. In other words, the importance of ACAD_REP is reduced when we apply a lower (though still relevant) level of H2020 participation (i.e., scoring above the median of participating universities). Moreover, regarding the other conditions, only SIZE in Italy could be considered necessary (see Table 9).
These results were confirmed by the analysis of sufficiency, in which ACAD_REP contributed to fewer configurations than in the main QCA model (see the QCA—Analysis of necessary conditions section), for which the outcome crossover point was the 75th percentile (i.e., scoring in the top 25% of universities). Specifically, ACAD_REP was included in 4 out of 5 configurations in Italy (whereas SIZE, which appeared in only one main-model configuration, is found in all combinations of this alternative model), 3 out of 5 in Sweden, and 3 out of 7 in the UK. Moreover, distinct from what was observed in the main QCA model, ACAD_REP showed low levels in various condition combinations (4 in Italy, 2 in Sweden, and 2 in the UK). In this alternative QCA, other variables were more relevant and, compared with the main model looking at the top 25% of scoring organizations, more directly related to the national characteristics of each targeted country (SIZE in Italy, INTER in Sweden, and INTER and—to a lesser extent—SIZE in the UK).
Discussion
Herein, we sought to examine whether academic reputation is a key variable determining high university participation in competitive innovation funding schemes like the EU FPs. Our analyses were motivated by previous studies showing divergent results regarding both the association between the dependent and independent variables and the direction of their relation (i.e., whether academic reputation influences international program participation or vice versa) (e.g., Cavallaro & Lepori, 2021; Enger & Castellacci, 2016; Nokkala et al., 2011).
Although we assert that frequent engagement in highly qualified, selective consortia certainly contributes to shaping academic reputation (e.g., Calignano, 2021; Wagner & Leydesdorff, 2005), this is probably one of many factors that determine how universities are externally perceived. Indeed, in their recent literature review, Amado Mateus and Juarez Acosta (2022) found that a higher education institution's image is multidimensional, comprising a wide range of elements that extend from student satisfaction to visual identity to strong traditions.
Based on this, we used QCA to demonstrate that a university's academic reputation is undoubtedly a critical—and possibly necessary—prerequisite for joining the dense, long-standing elite club of organizations that has traditionally characterized EU innovation program membership (Breschi & Cusmano, 2004). It is thus worth emphasizing that being ranked in the QS2013 is a key condition for scoring very highly (75th percentile) in all three targeted countries, whereas its impact decreases when a lower threshold (50th percentile) is set. This is important because no specific national characteristics (e.g., population, number of universities, national-level innovativeness, socioeconomics) affected these results, which were stable across Italy, the UK, and—although to a slightly lesser extent—Sweden.
Likewise, when we used a lower outcome threshold, conditions reflecting national characteristics emerged as promoting universities' participation (i.e., SIZE in Italy, INTER in Sweden, and INTER and SIZE in the UK). However, although meaningful and policy-relevant, the alternative model results should not be overestimated. Indeed, the difference between scoring above the median (top 50%) and scoring above the 75th percentile (top 25%) is sizable. In all cases (with the extreme example of Sweden), the median (50th percentile) is considerably closer to the 25th percentile than it is to the 75th percentile (see Table 3). In other words, in extremely centralized networks like those characterizing EU FPs (e.g., Protogerou et al., 2010; Wanzenböck et al., 2015), although less prestigious organizations can become participating organizations, and even score moderately well (i.e., top 50%), their engagement is actually relatively limited due to the low median level (i.e., a skewed distribution). This creates a fissure between lower- and higher-scoring universities, with negative implications for those less prestigious organizations and, consequently, for the territories in which they are situated. In this regard, the previously established presence of clear core–periphery dynamics (Roediger-Schluga & Barber, 2008) creates problems for less prestigious organizations and regions in terms of research funding allocation and acquisition of beneficial new or complementary knowledge (e.g., Calignano, 2022).
Based on these cumulative findings, we argue that more qualitative and quantitative studies should be carried out in different fields (e.g., psychology and corporate reputation) to determine whether academic reputation causes unconscious bias among evaluators and policymakers. Stated differently, we are concerned that the strong perceived importance of academic reputation may bias evaluators in favor of individual organizations and/or the members of aspiring research consortia (see, e.g., Knight, 2017; McNatt, 2010, regarding bias in the evaluation process). Participation in highly selective programs like the EU FPs is important for all applicants, but it can be vital to those deemed less prestigious, which are also often far from the core participating geographical areas. We further argue that greater engagement is probably more consequential for less prestigious organizations (e.g., through the acquisition of new knowledge, crucial funds, and improved status; Calignano, 2021), benefits the territories in which they are embedded, and—more generally, but equally importantly—advances EU policy actions and outcomes explicitly targeting more even and inclusive participation (European Parliament, 2017).
In conclusion, as explained in our methodological section, QCA allows researchers to draw causal inferences linking configurations of conditions to a given outcome. Our empirical analysis revealed that academic reputation is a main "ingredient" that allows universities to score very highly in competitive, selective international innovation programs. Three countries that have historically performed well in the various FPs and are characterized by different academic systems and socioeconomic contexts (i.e., Italy, Sweden, and the UK) were used as relevant cases to test whether more prestigious universities perform better in EU FPs. Replicating these analyses with more countries, especially those characterized by lower innovativeness (i.e., "emerging innovators" according to the European Innovation Scoreboard [European Commission, 2020]) and academic performance, could confirm and even strengthen the causal deduction suggested herein. As briefly described above, innovation activities tend to be concentrated within nations' capitals or in a few select areas within weakly innovative countries. Thus, we suspect that levels of EU FP participation may be even lower among peripheral and lower-ranked domestic academic institutions (see Calignano, 2022).
Notes
These 700 comprise all universities ranked by QS in 2013, according to the site https://www.universityrankings.ch/ (accessed on 04.09.2023).
References
Amado Mateus, M., & Acosta Juarez, F. (2022). Reputation in higher education: A systematic review. Frontiers in Education, 7, 1–19.
Australian Government. (2022). The impact of ACIAR work in agricultural research for development 1982–2022 Volume 2: A qualitative comparative analysis. Retrieved September 10, 2023 from https://www.aciar.gov.au/sites/default/files/2022-02/ACIAR_IAS100-Vol2_web.pdf
Autant-Bernard, C., Billand, P., Frachisse, D., & Massard, N. (2007). Social distance versus spatial distance in R&D co-operation: Empirical evidence from European collaboration choices in micro and nanotechnologies. Papers in Regional Science, 86(3), 495–519. https://doi.org/10.1111/j.1435-5957.2007.00132.x
Barber, M. J., & Scherngell, T. (2013). Is the European R&D network homogeneous? Distinguishing relevant network communities using graph theoretic and spatial interaction modelling approaches. Regional Studies, 47, 1283–1298. https://doi.org/10.1080/00343404.2011.622745
Barber, M. J., Krueger, A., Krueger, T., & Roediger-Schluga, T. (2006). The network of EU-funded collaborative R&D projects. Physical Review E, 73, 036132. https://doi.org/10.1103/PhysRevE.73.036132
Berg-Schlosser, D., De Meur, G., Rihoux, B., & Ragin, C. C. (2009). Qualitative Comparative Analysis (QCA) as an approach. In B. Rihoux & C. C. Ragin (Eds.), Configurational comparative methods: Qualitative comparative analysis and related techniques (pp. 1–8). SAGE.
Boschma, R. (2005). Proximity and innovation: A critical assessment. Regional Studies, 39(1), 61–74. https://doi.org/10.1080/0034340052000320887
Breschi, S., & Cusmano, L. (2004). Unveiling the texture of a European research area: Emergence of oligarchic networks under EU framework programmes. International Journal of Technology Management, 27, 747–772.
Calignano, G. (2014). Italian organisations within the European nanotechnology network: Presence, dynamics and effects. Die Erde, 145(4), 241–259. https://doi.org/10.12854/erde-145-21
Calignano, G., Fitjar, R.D., & Hjertvikrem, N. (2019). Innovation networks and green restructuring: Which path development can EU Framework Programmes stimulate in Norway?. Norsk Geografisk Tidsskrift - Norwegian Journal of Geography, 73(1), 65–78. https://doi.org/10.1080/00291951.2018.1561512
Calignano, G. (2021). Better connected, more reputable? On the association between node centrality and academic reputation in the European Union research and innovation networks. European Policy Analysis, 7(1), 240–262. https://doi.org/10.1002/epa2.1079
Calignano, G. (2022). Not all peripheries are the same: The importance of relative regional innovativeness in transnational innovation networks. Growth and Change, 53(1), 276–312. https://doi.org/10.1111/grow.12585
Cavallaro, M., & Lepori, B. (2021). Institutional barriers to participation in EU framework programs: Contrasting the Swiss and UK cases. Scientometrics, 126, 1311–1328. https://doi.org/10.1007/s11192-020-03810-0
CROS – Collaboration in Research and Methodology for Official Statistics. (2023). Research Projects under Framework Programmes 2023. Retrieved February 27, 2023 from https://ec.europa.eu/eurostat/cros/content/research-projects-under-framework-programmes-0_en
De Meur, G., & Rihoux, B. (2002). L'Analyse quali-quantitative comparée (AQQC-QCA): Approche, techniques et applications en sciences humaines. Academia-Bruylant.
Dușa, A. (2019). QCA with R: A comprehensive resource. Springer.
Enger, S., & Castellacci, F. (2016). Who gets Horizon 2020 research grants? Propensity to apply and probability to succeed in a two-step analysis. Scientometrics, 109(3), 1611–1638. https://doi.org/10.1007/s11192-016-2145-5
European Parliament. (2017). EU Framework Programmes for research and innovation. Evolution and key data from FP1 to Horizon 2020 in view of FP9. Retrieved March 10, 2023 from https://www.europarl.europa.eu/RegData/etudes/IDAN/2017/608697/EPRS_IDA%282017%29608697_EN.pdf
European Commission. (2020). European Innovation Scoreboard. Retrieved September 11, 2023 from https://research-and-innovation.ec.europa.eu/statistics/performance-indicators/european-innovation-scoreboard_en
Fiss, P. C. (2011). Building better causal theories: A fuzzy set approach to typologies in organization research. Academy of Management Journal, 54, 393–420. https://doi.org/10.5465/amj.2011.60263120
Lepori, B., Veglio, V., Heller-Schuh, B., Scherngell, T., & Barber, M. (2015). Participations to European Framework Programs of higher education institutions and their association with organizational characteristics. Scientometrics, 105(3), 2149–2178.
Longest, K. C., & Vaisey, S. (2008). Fuzzy: A program for performing qualitative comparative analyses (QCA) in Stata. The Stata Journal, 8, 79–104. https://doi.org/10.1177/1536867X0800800106
McNatt, B. (2010). Negative reputation and biased student evaluations of teaching: Longitudinal results from a naturally occurring experiment. The Academy of Management Learning and Education, 9(2), 225–242. https://doi.org/10.5465/AMLE.2010.51428545
Muscio, A., & Ciffolilli, A. (2018). Technological diversity in Europe: Empirical evidence from agri-food research projects. Regional Studies, 52(3), 374–387. https://doi.org/10.1080/00343404.2017.130166
Nokkala, T. (2007). Network building, motivation and learning in inter-organisational R&D collaboration projects: Theoretical considerations. NEMO Working Paper 4, Guildford. Retrieved February 27, 2023 from http://epubs.surrey.ac.uk/1571/1/fulltext.pdf
Nokkala, T., Heller-Schuh, B., & Paier, M. (2011). Ranking lists and European Framework Programmes: Does university status matter for performance in Framework Programmes? In P. N. Teixeira & D. D. Dill (Eds.), Public vices, private virtues? Assessing the effects of marketization in higher education (pp. 111–139). Sense Publishers.
Piro, F. N., Børing, P., Scordato, L., & Aksnes, D. W. (2020). University characteristics and probabilities for funding of proposals in the European Framework Programs. Science and Public Policy, 47(4), 581–593. https://doi.org/10.1093/scipol/scaa037
Pohoryles, R. J. (2002). The making of the European research area – A view from research networks. Innovation: The European Journal of Social Science Research, 15(4), 325–340.
Porfírio, J. A., Carrilho, T., Felício, J. A., & Jardim, J. (2021). Leadership characteristics and digital transformation. Journal of Business Research, 124, 610–619. https://doi.org/10.1016/j.jbusres.2020.10.058
Protogerou, A., Caloghirou, Y., & Siokas, E. (2010). Policy-driven collaborative research networks in Europe. Economics of Innovation and New Technology, 19(4), 349–372. https://doi.org/10.1080/10438590902833665
Ragin, C. C. (1987). The comparative method. University of California Press.
Ragin, C. C. (2000). Fuzzy-set social science. University of Chicago Press.
Ragin, C. C. (2006). Set relations in social research: Evaluating their consistency and coverage. Political Analysis, 14, 291–310. https://doi.org/10.1093/pan/mpj019
Ragin, C. C. (2008). Redesigning social inquiry: Fuzzy sets and beyond. University of Chicago Press.
Roediger-Schluga, T., & Barber, M. J. (2008). R&D collaboration networks in the European Framework Programmes: Data processing, network construction and selected results. International Journal of Foresight and Innovation Policy, 4, 321–347. https://doi.org/10.1504/IJFIP.2008.017583
Schneider, C. Q., & Wagemann, C. (2009). Set-theoretic methods for the social sciences. A guide to qualitative comparative analysis. Cambridge University Press.
Schneider, C. Q., & Wagemann, C. (2010). Standards of good practice in qualitative comparative analysis (QCA) and fuzzy-sets. Comparative Sociology, 9, 397–418. https://doi.org/10.1163/156913210X12493538729793
Schütz, F., Schroth, F., Muschner, A., & Schraudner, M. (2018). Defining functional roles for research institutions in helix innovation networks. Journal of Technology Management & Innovation, 13(4), 47–53. https://doi.org/10.4067/S0718-27242018000400047
Schulze-Krogh, A. C., & Calignano, G. (2020). How do firms perceive interactions with researchers in small innovation projects? Advantages and barriers for satisfactory collaborations. Journal of the Knowledge Economy, 11(3), 908–930. https://doi.org/10.1007/s13132-019-0581-1
Sciacca, M. (2013). Industrial funding path analysis in the Italian university system. In R. J. Howlett, B. Gabrys, K. M. Gabrys, & J. Roach (Eds.), Innovation through knowledge transfer 2012 (pp. 211–226). Springer.
Sonetti, G., Arrobbio, O., Lombardi, P., Lami, I. M., & Monaci, S. (2020). “Only Social Scientists Laughed”: Reflections on social sciences and humanities integration in European energy projects. Energy Research & Social Science. https://doi.org/10.1016/j.erss.2019.101342
Uhlbach, W.-H., Balland, P.-A., & Scherngell, T. (2017). R&D policy and technological trajectories of regions: Evidence from the EU framework programmes. Papers in Evolutionary Economic Geography, 17(22), 1–21.
Vis, B., & Dul, J. (2018). Analyzing relationships of necessity not just in kind but also in degree: Complementing fsQCA with NCA. Sociological Methods & Research, 47(4), 872–899. https://doi.org/10.1177/0049124115626179
Wagner, C., & Leydesdorff, L. (2005). Network structure, self-organization, and the growth of international collaboration in science. Research Policy, 34(10), 1608–1618. https://doi.org/10.1016/j.respol.2005.08.002
Wanzenböck, I., Scherngell, T., & Lata, R. (2015). Embeddedness of European regions in European Union-funded research and development (R&D) networks: A spatial econometric perspective. Regional Studies, 49(10), 1685–1705. https://doi.org/10.1080/00343404.2013.873119
Wanzenböck, I., Neuländtner, A., & Scherngell, T. (2020). Impacts of EU funded R&D networks on the generation of key enabling technologies: Empirical evidence from a regional perspective. Papers in Regional Science, 99(1), 3–24. https://doi.org/10.1111/pirs.12473
Funding
Open access funding provided by Inland Norway University Of Applied Sciences.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Calignano, G., Winsents, E. Relations between academic reputation and innovation networks. Scientometrics 129, 889–908 (2024). https://doi.org/10.1007/s11192-023-04869-1