Introduction

Many studies have shown that participation in competitive—and highly selective—innovation networks is concentrated among a few organizations, which are generally located in more advanced, innovative regions (e.g., Roediger-Schluga & Barber, 2008). Although some of these studies have performed in-depth examinations of specific organization types (e.g., universities), and have revealed which factors primarily affect their degree of participation (e.g., Cavallaro & Lepori, 2021; Lepori et al., 2015; Piro et al., 2020), limited attention has been devoted to a critical element: academic reputation. The few previous empirical analyses addressing this topic have led to divergent results and have not clearly determined whether academic reputation is a key driver of participation (e.g., Enger & Castellacci, 2016) or whether, conversely, academic reputation is fostered by participation in international innovation programs and collaborative funding schemes (see Calignano, 2021).

Although we cannot exclude the possibility that participating in innovation programs may enhance an organization’s reputation, this paper addresses a relevant yet heretofore understudied research question: Is academic reputation a key factor in determining universities’ participation in international innovation networks?

Building on previous relevant empirical analyses that benefited from large volumes of transnational collaborative funding scheme data (e.g., Autant-Bernard et al., 2007; Barber & Scherngell, 2013; Wanzenböck, 2020), our research was conducted within the context of the latest European Union (EU) Framework Program (FP): Horizon 2020 (H2020). Specifically, we compared participation among all universities within each of three countries characterized by different socioeconomic conditions, innovation outcomes, and higher education systems: Italy, Sweden, and the United Kingdom (UK).

Using qualitative comparative analysis (QCA), we were able to demonstrate that—with a few caveats that we discuss in the Methodology section—academic reputation is clearly a key feature among the organizations that scored very high (i.e., within the top 25%) in this targeted EU innovation program. However, the importance of academic reputation decreases when a lower threshold for determining high-level participation is set (i.e., scoring above the median, or in the top 50%).

Literature review

The FP represents the most important EU innovation policy implementation, the aim of which is to fund highly qualified consortia comprising various organization types (e.g., private companies, universities, research centers, agencies) that conduct research with potential industrial applications at the forefront of different fields and socioeconomic spheres (e.g., health, mobility, environment, key enabling technologies) (CROS, 2023).

Throughout the various programming cycles, the first of which was launched in 1984 and which generally last 5–7 years, the EU has continuously fostered scientific and business excellence while attempting to enable less equipped organizations and regions to catch up with those that are more advanced (European Parliament, 2017). However, empirical evidence suggests that EU FPs are largely characterized by unbalanced participation, with a few organizations and regional areas leading and widespread knowledge exchange among a greater number of organizations failing to materialize (e.g., Calignano, 2022).

Breschi and Cusmano (2004) were among the first to reveal the “elitist club” of organizations that represents the core of EU-funded innovation networks throughout the FPs. This important finding has since been confirmed, showing that EU programming cycles are distinctly characterized by long-term disparities between a small core of strongly connected organizations and a large number of peripheral, loosely tied organizations in generally less competitive regional areas (Muscio & Ciffolilli, 2018).

Organizations with a central position in innovation networks like the EU FPs benefit from their high degree of participation in two main ways. First, they acquire relevant novel or complementary knowledge from qualified partners; second, they receive vital EU funds during periods when national governments tend to reduce their domestic research budgets (Schulze-Krogh & Calignano, 2019, 2020). Both of these benefits make national university participation in competitive EU innovation programs very important, or even essential, especially in the case of less competitive or underfunded organizations (Calignano, 2022).

Universities with high levels of participation in competitive EU innovation programs share specific organizational, geographical, and relational characteristics. Organizationally, larger universities participate more often than smaller ones, by virtue of their higher quality infrastructures, better administrative capacities, and related ability to manage a large number of applications and funded projects simultaneously (Cavallaro & Lepori, 2021; Sciacca, 2013). Geographically, as noted above, location influences participation, as highly participating universities are generally located in wealthier, more dynamic, and more innovative regional areas at the EU or domestic level (e.g., Calignano, 2022). Regarding relational characteristics, through a mechanism resembling “preferential attachment” (see Calignano et al., 2019), high-scoring organizations often rely on a cumulative process through which, by leveraging their status and existing networks, they add new collaborating partners to those with whom they have established long-lasting relationships over various EU programming cycles (Pohoryles, 2002; Barber et al., 2006; Nokkala, 2007; Calignano, 2014).

Although there is widespread consensus about the positive impacts of these factors on universities’ degree of participation in international innovation networks, the role of academic reputation, which represents the main variable of interest herein, remains contested. Specifically, the few studies that have directly or indirectly addressed this topic have neither unequivocally identified an association between academic reputation and level of EU FP participation nor determined the direction of this presumed association. In other words, the discordant previous findings do not clarify whether academic reputation actually leads to greater participation in international innovation programs. Moreover, some scholars argue that the relation between these two variables runs in the opposite direction: that high participation in important innovation networks itself contributes decisively to shaping an academic institution’s reputation.

Using an original database of information about applications and funded EU projects, Enger and Castellacci (2016) demonstrated that whereas academic reputation does not influence the number of submitted applications, it does have a clear, positive impact on the rate of success. This finding was reinforced by Cavallaro and Lepori (2021), Lepori et al. (2015), and Piro et al. (2020), according to whom the positive impact of academic reputation on the level of EU FP participation is beyond doubt. However, other studies have provided more nuanced or divergent evidence regarding the positive association between academic reputation and the degree of EU FP participation. Nokkala et al. (2011, p. 127) argued that “[w]hile high scientific excellence or a good reputation is beneficial for the university to become a frequent project coordinator, this fact is less important for project participation or even hindering it.”

Several other scholars have suggested that the relation runs in the opposite direction, with successful participation in international innovation networks enhancing academic reputation. Among these, Wagner and Leydesdorff (2005) highlighted that universities are interested in joining international collaborative innovation networks because doing so both improves their reputation and provides them with other benefits and resources. Other scholars similarly argue that strong participation in innovation networks, and specifically in the EU FPs, is associated with improved credibility (Schütz et al., 2018) and academic reputation (Calignano, 2021).

Although we acknowledge that strong participation in prestigious international innovation networks may represent one factor that contributes to shaping or strengthening a university’s reputation—likely in combination with other key elements like teaching, legacy, personal relationships, and individual parameters (see Amado Mateus & Juarez Acosta, 2022)—herein, we are specifically and exclusively interested in determining whether there is a clear, direct relation between academic reputation and degree of EU FP participation.

Although they are often treated as synonyms, academic reputation and scientific excellence do not necessarily overlap completely (Calignano, 2021). Rather, the latter refers to the actual, and presumably more objective (although still approximate), quality of a university (Anowar et al., 2015), whereas the former refers to how this quality is generally perceived. Different rankings capture different reputation types; based on our comparison of the various current rankings and the availability of ranking data for the past decade, we used the 2013 edition of the Quacquarelli Symonds World University Rankings (hereafter QS2013). This decision is more thoroughly justified in the QCA procedure section below.

Methodology

QCA was used herein to examine whether academic reputation is a crucial factor in determining a high degree of H2020 participation. QCA allows researchers to identify causal relations between a given outcome and different combinations of selected factors (in QCA terminology, “configurations”) (Ragin, 2000). In other words, if we draw an analogy between QCA and regression analysis, an “outcome” corresponds to a dependent variable, whereas “conditions” are akin to independent variables.

More specifically, QCA makes it possible to find a “multiple conjunctural causation” across observed cases. This makes QCA a particularly appropriate technique for studying complex systems because it is based on the consideration that “[d]ifferent causal ‘paths’—each path being relevant, in a distinct way—may lead to the same outcome (De Meur & Rihoux, 2002). […] ‘Multiple’ here refers to the number of paths, whereas ‘conjunctural’ conveys the notion that each path consists of a combination of conditions. Thus, multiple conjunctural causation contains the notion of equifinality, which simply means that different paths can lead to the same outcome” (Berg-Schlosser et al., 2009: 8). Accordingly, Berg-Schlosser et al. (2009) explain that the variables making up a QCA (conditions and outcome) must be carefully selected and theoretically informed when a deductive approach is adopted. However, QCA techniques can also be used more inductively, by identifying the key “ingredients” (conditions) that may lead to a given outcome.

We adopted the crisp-set QCA (csQCA) approach, in which both the outcome and the conditions are dichotomous (i.e., 1 indicates the presence of the outcome or a given condition, and 0 indicates its absence). In some cases, defining the presence or absence of an outcome or condition is relatively easy (e.g., being listed in a given university ranking or not), whereas in others the researcher must determine a credible threshold. This threshold is the so-called “crossover point” (see Ragin, 2000) and is set by the individual researcher based on factors including theoretical and conceptual considerations, knowledge of the context in which the study is conducted, and previous analogous empirical analyses.

Herein, our cases were represented by universities in three countries (Italy, Sweden, and the UK), which were selected based on their differing characteristics, with the aim of capturing whether the possible impact of academic reputation on the level of EU FP participation is widespread and observable in diverse organizational and socioeconomic contexts. The first relevant difference among them is innovativeness. According to the European Innovation Scoreboard (European Commission, 2020), Sweden is an innovation leader, followed by the UK (strong innovator) and Italy (moderate innovator). Moreover, the three countries clearly differ in terms of size (the UK and Italy are larger countries) and in terms of disposable income and income inequality (i.e., two indicators in which Sweden performs considerably better). Finally, the UK has more, and better positioned, academic organizations in university rankings compared with Italy and, to a lesser extent, Sweden.

The ability of a given university to score above the 75th percentile (i.e., in the top 25%) in terms of H2020 participation within its respective country was considered a positive outcome. This threshold was chosen in line with our research question, which drove the empirical analysis and required demonstrating whether a high degree of participation is determined by reputation. Similarly, we used the 75th percentile for all conditions used to determine which combinations led to the outcome, except academic reputation (ACAD_REP), for which, as briefly mentioned before, a binary variable based on being (1) or not being (0) included in a specific university ranking was adopted. The other model variables were university size (SIZE), level of internationalization (INTER), science, technology, engineering, and mathematics orientation (STEM), regional per capita gross domestic product (GDP), and the presence of high-technology sectors (TECH) in the university’s region. More details about these conditions and the rationale for their use in our QCA model are provided in the QCA procedure section.
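To make the calibration step concrete, the following minimal sketch (written in Python with pandas rather than the fs/QCA software actually used for the analysis) shows how the outcome and the conditions could be dichotomized at the 75th-percentile crossover point, with ACAD_REP kept as a simple 0/1 indicator. The data frame and column names (e.g., universities, h2020_participations, qs2013_ranked) are hypothetical placeholders, not the study’s actual dataset.

```python
import pandas as pd

def calibrate_crisp(df: pd.DataFrame, quantile: float = 0.75) -> pd.DataFrame:
    """Dichotomize outcome and conditions at a given percentile crossover point."""
    out = pd.DataFrame(index=df.index)
    # Outcome: 1 if the university scores above the chosen percentile of
    # H2020 participations within its country, 0 otherwise.
    threshold = df["h2020_participations"].quantile(quantile)
    out["OUTCOME"] = (df["h2020_participations"] > threshold).astype(int)
    # ACAD_REP is already binary: listed (1) or not listed (0) in the QS2013 top 700.
    out["ACAD_REP"] = df["qs2013_ranked"].astype(int)
    # Remaining conditions: dichotomized at the same percentile crossover point.
    for cond in ["SIZE", "INTER", "STEM", "GDP", "TECH"]:
        out[cond] = (df[cond] > df[cond].quantile(quantile)).astype(int)
    return out

# Hypothetical usage for one country subsample:
# italy_crisp = calibrate_crisp(universities[universities["country"] == "IT"])
```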

Briefly, the aims of our csQCA were twofold. The first was to determine the configurations that allow universities to be strongly involved in highly competitive and selective innovation networks, thereby benefiting from relevant knowledge flows and vital research funds. The second, and more important, aim was to determine whether academic reputation drives participation in competitive and selective innovation networks such as the FPs, and thus to clarify whether it is a necessary condition.

QCA procedure

The data for each condition were from 2013 or, in the rare cases of single observations for which 2013 data were unavailable, from 2012 or 2011. Regardless, all data were from before 2014 (i.e., before the H2020 FP began). This allowed us to delineate a clear direction in the relation between each university’s characteristics before the target FP and its subsequent level of participation (i.e., during H2020 execution, 2014–2020).

Our QCA was conducted using the following procedure, which was repeated for each of the three countries analyzed (Italy, Sweden, and the UK). First, we created a “truth table,” which is essentially a matrix showing all possible configurations and outcomes for the targeted cases. Subsequently, through a “logical minimization” process, we found the minimal formula allowing us to determine which combinations of conditions are sufficient to produce the outcome (i.e., being within the top 25% scoring organizations in the H2020 FP). In simpler terms, the logical minimization process (with a consistency threshold of 0.80; see Ragin, 2008) eliminates irrelevant causal conditions from the original matrix, with the aim of finding the smallest causal configuration sufficient for a given outcome to occur (Ragin, 1987). As shown in Table 1, outcome Y happens in both cases (value = 1), but condition B is present in the first case (value = 1) and absent in the second (value = 0). Based on logical minimization, condition B is considered causally irrelevant and must be eliminated accordingly, whereas condition A is the only condition consistently associated with outcome Y. This operation is repeated for all possible case pairs, to determine whether they differ by exactly one condition. When this happens, the condition in which the two cases differ is eliminated, to produce the “implicants.” The same procedure is then iteratively repeated for each implicant pair until it is possible to identify a final solution with the smallest number of combined causal conditions associated with the outcome (Dușa, 2019).

Table 1 Example of logical minimization
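The pairwise elimination underlying logical minimization can be illustrated with a toy sketch based on the Table 1 logic: two sufficient configurations that differ only in condition B collapse into the implicant in which B is dropped as irrelevant. This is a simplified, hypothetical illustration of a single minimization pass, not the full algorithm implemented in fs/QCA; the function name minimize_once and the condition labels are placeholders.

```python
from itertools import combinations

def minimize_once(configs):
    """Merge configuration pairs differing in exactly one condition ('-' = don't care)."""
    implicants = set()
    merged = set()
    for a, b in combinations(configs, 2):
        diff = [k for k in a if a[k] != b[k]]
        if len(diff) == 1:
            impl = dict(a)
            impl[diff[0]] = "-"          # the differing condition is eliminated
            implicants.add(tuple(sorted(impl.items())))
            merged.update({tuple(sorted(a.items())), tuple(sorted(b.items()))})
    # Configurations that could not be merged are kept as they are.
    for c in configs:
        if tuple(sorted(c.items())) not in merged:
            implicants.add(tuple(sorted(c.items())))
    return [dict(t) for t in implicants]

rows = [{"A": 1, "B": 1}, {"A": 1, "B": 0}]   # both rows lead to outcome Y = 1
print(minimize_once(rows))                    # -> [{'A': 1, 'B': '-'}]
```

In a full analysis this pass is repeated on the resulting implicants until no further pairs can be merged, yielding the minimal formula described above.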

We also analyzed necessary conditions, which allowed us to identify which single condition (if any), albeit insufficient alone, is necessary to produce the outcome in combination with other conditions (i.e., all situations in which both a given condition and the outcome are simultaneously present in a configuration) (Ragin, 2008).

The logical minimization procedure allows the identification of necessary and sufficient conditions, which are then validated for consistency and coverage (for more QCA details, terms, and score interpretation, see Fiss, 2011; Longest & Vaisey, 2008; Ragin, 2000, 2006, and the sections herein on QCA and sufficiency analyses). fs/QCA software was used to conduct our csQCA.

Variables

Based on the main literature and a thorough consideration of factors that may significantly impact university EU FP participation, we selected the most appropriate conditions for our empirical analysis (see Methodology section). In particular, we followed the method described by Cavallaro and Lepori (2021) and Lepori et al. (2015), who adopted both organizational (ACAD_REP, SIZE, INTER, and STEM) and geographical (GDP and TECH) variables with the potential to positively influence the degree of university participation in competitive international innovation networks like the EU FPs (see Table 2).

Table 2 QCA indicators: Variables, acronyms, descriptions, sources, and years

ACAD_REP, our main variable of interest, was calculated by examining whether universities in each targeted country were included among the top 700 organizations in the renowned QS Ranking in 2013. This database was chosen over other similarly reliable and informative sources (e.g., Academic Ranking of World Universities, Times Higher Education, Leiden) because it is primarily based on academics’ perceptions of other universities (i.e., two reputation surveys of academics’ opinions of other universities account for > 50% of the total score). This is ideally aligned with our concept of academic reputation herein (i.e., subjectively perceived prestige, rather than more objective scientific excellence; see the Literature review section for details).

According to several previous studies (e.g., Piro et al., 2020), university SIZE positively impacts EU FP participation. We assert that two additional factors represent potential critical conditions. First, high levels of INTER (i.e., the proportion of foreign academic personnel) might foster transnational collaborations via social proximity (e.g., acquaintances, previous common work experiences; see Boschma, 2005). Second, high levels of STEM (the proportion of graduates in science and technology) might have a positive impact by virtue of the predominant role scientific and technological themes play in the EU FPs (Sonetti et al., 2020).

Regarding the geographical dimension of our analysis, we considered GDP and TECH particularly relevant because the most central organizations, which represent the persistent core of the EU innovation networks throughout their various cycles, tend to be located in countries’ wealthier and more technologically advanced regions, irrespective of each country’s overall degree of development and innovativeness (see Calignano, 2022).

Empirical analysis

Descriptive statistics

As specified in the previous sections, we compared the universities within Italy, Sweden, and the UK to determine whether academic reputation is an essential condition in determining the degree of competitive innovation network participation.

Table 3 shows descriptive statistics for the three target countries. We surveyed 159 organizations in the UK, 78 in Italy, and 38 in Sweden. On average, the Swedish universities participated more often in the EU FPs and, consistently, had the highest 75th-percentile threshold value (i.e., for the top 25% scoring organizations: > 96.5 vs > 65 in the UK and > 64 in Italy; see Table 3). However, the top-scoring organization in absolute terms was in the UK (the University of Cambridge, with 754 participations), whereas the top-scoring university in Italy, interestingly, had more funded applications than the corresponding Swedish university (411 vs 344 participations). Considering the three countries together, 10 UK organizations were among the top 15 scoring universities, followed by three Italian organizations (the highest ranked of which was the Polytechnic University of Milan, ranked 5th) and two Swedish organizations (the highest ranked of which was KTH Royal Institute of Technology, ranked 9th) (see Table 4).

Table 3 Selected QCA variables’ descriptive statistics
Table 4 Top 15 scoring universities in Italy (IT), Sweden (SE), and the UK

As described above, we used a dichotomous variable to differentiate between universities with higher and lower rankings (1 = ranked in the QS2013; 0 = not ranked in the QS2013). We were thus able to determine that, as shown in Table 3, the UK showed the highest share of ranked domestic institutions (38%), whereas Italian (25%) and Swedish (22%) organizations were less frequently listed in the top 700 prestigious universities, according to the QS2013 ranking.

Regarding the other variables included in our final dataset, the Italian universities were larger (SIZE) and the UK universities showed considerably higher INTER levels, whereas STEM orientation did not differ significantly among the three countries.

Regarding geographical conditions, the UK and Sweden had a higher average regional GDP than Italy, despite the UK also being characterized by the largest inequality. Finally, the Swedish regions were more technologically advanced (TECH) on average, although the most advanced region in absolute terms was in the UK (10.3% of high-technology sectors are in South East England).

QCA—Analysis of necessary conditions

In the first step, we analyzed the necessary conditions, allowing us to determine whether ACAD_REP or any other condition is necessary to produce the outcome (i.e., scoring in the top 25% of H2020 participating organizations within the respective country). Specifically, a given condition X is necessary when it does not automatically produce outcome Y (i.e., X = 1 may co-occur with Y = 1 or Y = 0) but the outcome cannot exist without it (i.e., if Y = 1, then X = 1). In other words, the “necessary condition is a bottleneck, a constraint, for the outcome to exist” (Vis & Dul, 2018: 873). Thus, two measures are used to determine whether a single condition is necessary to produce an outcome: consistency and coverage.

Consistency refers to how often condition X is present when outcome Y occurs. Consistency values range from 0 to 1, where 1 means the targeted condition is always present when the outcome is present. In the case of csQCA, the recommended level of consistency is ≥ 0.90; in other words, at most 10% of cases may show the outcome in the absence of the condition. A condition can be necessary without its presence being automatically associated with the outcome. For this reason, coverage is used to determine the degree of co-occurrence between a necessary condition and the outcome. This measure resembles the explained variance or R-squared used in other statistical approaches. High coverage indicates that the presence of a necessary condition accounts for a large proportion of the cases in which the outcome occurs. Thus, scores ≥ 0.60 indicate the relevance of the identified necessary condition, whereas scores below that threshold indicate its triviality (Schneider & Wagemann, 2010).
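As a worked illustration of these two measures, the sketch below computes crisp-set necessity consistency and coverage on the hypothetical calibrated data frame introduced in the calibration sketch above; the column names remain placeholders, and fs/QCA reports these scores directly.

```python
import pandas as pd

def necessity_scores(crisp: pd.DataFrame, condition: str, outcome: str = "OUTCOME"):
    """Crisp-set necessity: consistency = share of outcome cases in which the condition
    is present; coverage = share of condition-present cases in which the outcome occurs."""
    overlap = ((crisp[condition] == 1) & (crisp[outcome] == 1)).sum()
    consistency = overlap / (crisp[outcome] == 1).sum()    # necessity suggested when >= 0.90
    coverage = overlap / (crisp[condition] == 1).sum()     # relevance suggested when >= 0.60
    return consistency, coverage

# Hypothetical usage: necessity_scores(italy_crisp, "ACAD_REP")
```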

Table 5 shows that ACAD_REP is a necessary and relevant condition in Italy and the UK. In addition, although our main variable of interest cannot be fully considered a necessary condition in Sweden because its consistency score falls below the recommended threshold, ACAD_REP shows both high coverage (> 0.6) and the highest consistency and coverage scores in absolute terms (together with INTER).

Table 5 Analysis of necessary conditions. Consistency threshold: 0.90. Coverage threshold: 0.60

It is important to highlight here that, apart from ACAD_REP (which is necessary in two of the three targeted countries), none of the other variables is necessary to determine high university H2020 FP participation, and they also show relatively low consistency and coverage scores. The only exceptions are SIZE in Italy, SIZE and the aforementioned INTER in Sweden, and, to a lesser extent, INTER in the UK.

Analysis of sufficiency

After determining that ACAD_REP is a necessary condition (in Italy and the UK) for universities to achieve very high H2020 participation, we analyzed sufficiency. This procedure represents the core of QCA, allowing us to find the minimal configurations of conditions sufficient to produce the outcome. Specifically, we first created the truth table listing all possible configurations and then identified the sufficiency relations through the logical minimization process.

Similar to the analysis of necessity, solution consistency and solution coverage need to be calculated and reported to assess the validity of this analysis. Specifically, solution consistency refers to the proportion of cases showing a given combination of conditions that also exhibit a positive outcome. According to previous research, the minimum acceptable consistency level is 0.80 (Ragin, 2008), while a score of 1.0 indicates that whenever a sufficient combination of conditions is present, a positive outcome is also present. The solution consistency scores for our QCA models were 0.88 for the UK and 1.00 for Italy and Sweden. In addition, solution coverage is a measure used to determine the extent to which the outcome of interest can be explained by the configurations identified through QCA. The high solution coverage scores (ranging from 0.59 to 0.89) for Italy, Sweden, and the UK show that our QCA results were reliable and particularly robust (see Ragin, 2006).
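These two solution-level measures can be sketched as follows, again on the hypothetical calibrated data frame used in the earlier sketches. Here a solution is represented as a list of configurations, each mapping conditions to 0/1, with omitted conditions treated as irrelevant (“don’t care”); this is an illustrative approximation of what fs/QCA reports, not its internal code.

```python
import pandas as pd

def covered_by(crisp: pd.DataFrame, configuration: dict) -> pd.Series:
    """Boolean mask of cases matching a given combination of condition levels."""
    mask = pd.Series(True, index=crisp.index)
    for cond, level in configuration.items():
        mask &= crisp[cond] == level
    return mask

def solution_scores(crisp: pd.DataFrame, solution: list, outcome: str = "OUTCOME"):
    """Solution consistency: share of cases covered by any configuration that show the outcome.
    Solution coverage: share of outcome cases covered by at least one configuration."""
    in_solution = pd.Series(False, index=crisp.index)
    for configuration in solution:
        in_solution |= covered_by(crisp, configuration)
    outcome_present = crisp[outcome] == 1
    consistency = (in_solution & outcome_present).sum() / in_solution.sum()
    coverage = (in_solution & outcome_present).sum() / outcome_present.sum()
    return consistency, coverage

# Hypothetical usage:
# solution_scores(italy_crisp, [{"ACAD_REP": 1, "GDP": 1}, {"ACAD_REP": 1, "STEM": 1}])
```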

Regarding the other scores reported in Tables 6, 7, and 8, raw coverage refers to the extent to which outcome Y is explained by a specific combination of conditions, whereas unique coverage quantifies the extent to which such an explanation can be exclusively attributed to that combination and not to others (i.e., the percentage of cases that are not simultaneously covered by other condition combinations). In our QCA models, there were a few combinations with unique coverage of 0 (i.e., two for Italy and two for the UK). Although empirically valid and often reported (e.g., Australian Government, 2022; Porfírio et al., 2021), combinations with 0 unique coverage should be interpreted with caution or disregarded because they are considered “logically redundant” (i.e., all cases belonging to them are already explained by other combinations) (Schneider & Wagemann, 2009). Thus, we overlooked these combinations, as explained in the notes to Table 6 (Italy) and Table 8 (the UK). Finally, an additional consistency score, based on the same concept illustrated above but applied to the single combinations of conditions, is reported in Tables 6, 7, and 8.
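The per-configuration measures can be added to the same sketch: raw coverage counts all outcome cases matched by a configuration, whereas unique coverage counts only those matched by no other configuration in the solution. The covered_by() helper and the data frame are the hypothetical ones introduced above.

```python
import pandas as pd

def configuration_coverage(crisp: pd.DataFrame, solution: list, outcome: str = "OUTCOME"):
    """Raw and unique coverage for each configuration in a solution (uses covered_by() above)."""
    outcome_present = crisp[outcome] == 1
    scores = []
    for i, configuration in enumerate(solution):
        matched = covered_by(crisp, configuration)
        # Cases also matched by any *other* configuration in the solution.
        others = pd.Series(False, index=crisp.index)
        for j, other in enumerate(solution):
            if j != i:
                others |= covered_by(crisp, other)
        raw = (matched & outcome_present).sum() / outcome_present.sum()
        unique = (matched & ~others & outcome_present).sum() / outcome_present.sum()
        scores.append({"raw_coverage": raw, "unique_coverage": unique})
    return scores
```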

Table 6 QCA results. Country: Italy. Outcome: H2020 FP participation. Legend: Black circles (●) indicate a high condition level; Empty circles (○) indicate a low condition level; Blank cells indicate an irrelevant condition in which the condition can be high- or low-level. Combinations with 0 unique coverage are not included. Solution coverage: 0.89; Solution consistency: 1
Table 7 QCA results. Country: Sweden. Outcome: H2020 FP participation. Legend: Black circles (●) indicate a high condition level; Empty circles (○) indicate a low condition level; Blank cells indicate an irrelevant condition in which the condition can be high- or low-level. Solution coverage: 0.77; Solution consistency: 1
Table 8 QCA results. Country: UK. Outcome: H2020 FP participation. Legend: Black circles (●) indicate a high condition level; Empty circles (○) indicate a low condition level; Blank cells indicate an irrelevant condition in which the condition can be high- or low-level. Combinations with 0 unique coverage are not included. Solution coverage: 0.59; Solution consistency: 0.88

As illustrated above (see Tables 6, 7, and 8), ACAD_REP played a key role in all three targeted countries. In particular, the main variable of interest was present in all configurations sufficient to produce the outcome in Italy, sometimes in combination with a geographical variable such as regional GDP and an organizational variable such as STEM. All other conditions played marginal roles, particularly TECH, which did not contribute to any configuration and showed low levels in several combinations (see the empty circles in Table 6).

We were able to identify four configurations sufficient to produce the outcome in Sweden, the only country for which ACAD_REP was not a necessary condition. However, academic reputation still represented an important condition in this country, as it was present in three combinations. SIZE and INTER were two other important variables for determining high H2020 participation, whereas STEM and (even more so) the geographical variables GDP and TECH played marginal roles. The latter showed high values only in Combination 4, along with ACAD_REP and all other conditions except STEM (see Table 7).

Finally, the analysis of sufficiency for the UK is presented in Table 8. In this case, ACAD_REP appeared in four out of five combinations, the only exception being Combination 5. Interestingly, in this combination ACAD_REP showed a low level, and it was the combination of SIZE, INTER, STEM, and TECH that led to a high rate of H2020 FP participation. It should also be stressed that ACAD_REP was often combined with both other organizational (SIZE, INTER, and STEM) and geographical (TECH) variables when the outcome was achieved.

Testing QCA against alternative models

We tested our QCA against several alternative models based on different variables and thresholds, with the aim of achieving a more robust, accurate, and insightful set of findings.

First, we used a different outcome with a different crossover point. In this analysis, the median represented the new (lower) threshold through which we identified the presence (1) or absence (0) of the outcome. Similarly, the median was set as the crossover point for all the other conditions. ACAD_REP was calculated as in the main QCA model (1 = being included in the QS2013, 0 = not being included in the top 700 universities by the QS2013).
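In terms of the hypothetical calibration sketch introduced earlier, this alternative model amounts to a single parameter change: the crossover point moves from the 75th percentile to the median, while ACAD_REP remains the same binary indicator.

```python
# Hypothetical usage, reusing calibrate_crisp() and the placeholder data frame from above:
# italy_crisp_median = calibrate_crisp(universities[universities["country"] == "IT"], quantile=0.50)
```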

The alternative QCA, based on the newly calibrated variables, revealed different results. Most importantly, ACAD_REP was no longer a necessary condition for achieving the outcome in any of the targeted countries. In other words, the importance of ACAD_REP is reduced when we apply a lower (though still relevant) level of H2020 participation (i.e., scoring above the median of participating universities). Moreover, regarding the other conditions, only SIZE in Italy could be considered necessary (see Table 9).

Table 9 Analysis of necessary conditions. Consistency threshold: 0.90; Coverage threshold: 0.60

These results were confirmed by the analysis of sufficiency, in which ACAD_REP contributed to fewer configurations than in the main QCA model (see the QCA—Analysis of necessary conditions section), for which the outcome crossover point was the 75th percentile (i.e., scoring among the top 25% of universities). Specifically, ACAD_REP was included in 4 out of 5 configurations in Italy (where SIZE, which appeared in only one main-model configuration, was found in all combinations of this alternative model), 3 out of 5 in Sweden, and 3 out of 7 in the UK. Moreover, in contrast to the main QCA model, ACAD_REP showed low levels in various condition combinations (4 in Italy, 2 in Sweden, and 2 in the UK). In this alternative QCA, other variables were more relevant and, compared with the main model focused on the top 25% scoring organizations, more directly related to the national characteristics of each targeted country (SIZE in Italy, INTER in Sweden, and INTER and, to a lesser extent, SIZE in the UK).

Discussion

Herein, we sought to examine whether academic reputation is a key variable determining high university participation in competitive innovation funding schemes like the EU FPs. Our analyses were motivated by previous studies showing divergent results, regarding both the association between the dependent and independent variables and the direction of their relations (i.e., whether academic reputation influences international program participation or vice versa) (e.g., Cavallaro & Lepori, 2021; Enger & Castellacci, 2016; Nokkala et al., 2011).

Although we assert that frequent engagement in highly qualified, selective consortia certainly contributes to shaping academic reputation (e.g., Calignano, 2021; Wagner & Leydesdorff, 2005), this is probably one of many factors that determine how universities are externally perceived. Indeed, in their recent literature review, Amado Mateus and Juarez Acosta (2022) found that a higher education institution’s image is multidimensional and composed of a vast number of elements, ranging from student satisfaction to visual identity to strong traditions.

Based on this, we used QCA to demonstrate that a university’s academic reputation is undoubtedly a critical, and possibly necessary, prerequisite for joining the dense and pervasive elite club of organizations that has traditionally characterized EU innovation program membership (Breschi & Cusmano, 2004). It is thus relevant to emphasize that being ranked in the QS2013 is a key condition for scoring very highly (75th percentile) in all three targeted countries, whereas its impact decreases when a lower threshold is set (50th percentile). This is important because no specific national characteristics (e.g., population, number of universities, national-level innovativeness, socioeconomics) affected these results, which were stable across Italy, the UK, and, although to a slightly lesser extent, Sweden.

At the same time, conditions more closely tied to national characteristics, which promote their organizations’ participation, emerged when we used a lower outcome threshold (i.e., SIZE in Italy, INTER in Sweden, and INTER and SIZE in the UK). However, although statistically significant and policy-relevant, the alternative model results should not be overestimated. Indeed, the difference between scoring above the median (top 50%) and scoring above the 75th percentile (top 25%) is sizable. In all cases (with the extreme example of Sweden), the median (50th percentile) is considerably closer to the 25th percentile than it is to the 75th percentile (see Table 3). In other words, in extremely centralized networks like those characterizing the EU FPs (e.g., Protogerou et al., 2010; Wanzenböck et al., 2015), although less prestigious organizations can become participating organizations, and even score moderately well (i.e., top 50%), their engagement is actually relatively limited due to the low median level (i.e., a skewed distribution). This creates a fissure between lower- and higher-scoring universities, with negative implications for those less prestigious organizations and, consequently, for the territories in which they are situated. In this regard, the previously established presence of clear core–periphery dynamics (Roediger-Schluga & Barber, 2008) creates problems for less prestigious organizations and regions in terms of research funding allocation and the acquisition of beneficial new or complementary knowledge (e.g., Calignano, 2022).

Based on these cumulative findings, we argue that more qualitative and quantitative studies should be carried out in different fields (e.g., psychology and corporate reputation) to determine whether academic reputation causes unconscious bias among evaluators and policymakers. Stated differently, we are concerned that the great perceived importance of academic reputation may bias evaluators toward single organizations and/or the members that make up aspiring research consortia (see, e.g., Knight, 2017; McNatt, 2010, regarding bias in the evaluation process). Participation in highly selective programs like the EU FPs is important for all applicants, but it can be vital to those deemed less prestigious, which are also often far from the core participating geographical areas. We further argue that greater engagement among less prestigious organizations is probably even more essential to their success (e.g., for the acquisition of new knowledge, crucial funds, and improved status; see Calignano, 2021), benefits the territories in which they are located, and, more generally but no less importantly, advances EU policy actions and outcomes explicitly targeting more even and inclusive participation (European Parliament, 2017).

In conclusion, as explained in our methodological section, QCA allows researchers to identify causal relations between different configurations of factors and a given outcome. Our empirical analysis revealed that academic reputation is a main “ingredient” that allows universities to score very highly in competitive, selective international innovation programs. Three countries that have historically performed well in the various FPs and are characterized by different academic systems and socioeconomic contexts (i.e., Italy, Sweden, and the UK) were used as relevant cases to test whether more prestigious universities perform better in the EU FPs. Replicating these analyses with more countries, especially those characterized by lower innovativeness (i.e., “emerging innovators” according to the European Innovation Scoreboard [European Commission, 2020]) and lower academic performance, could confirm and even strengthen the causal deduction suggested herein. As briefly described above, innovation activities tend to be concentrated within nations’ capitals or in a few select areas within weakly innovative countries. Thus, we suspect that levels of EU FP participation may be even lower among peripheral and lower-ranked domestic academic institutions (see Calignano, 2022).