Scientometrics, Volume 109, Issue 2, pp 629–659

Research diversification and impact: the case of national nanoscience development

  • Patrick Herron
  • Aashish Mehta
  • Cong Cao
  • Timothy Lenoir

DOI: 10.1007/s11192-016-2062-7

Cite this article as:
Herron, P., Mehta, A., Cao, C. et al. Scientometrics (2016) 109: 629. doi:10.1007/s11192-016-2062-7

Abstract

Newcomer nations, promoted by developmental states, have poured resources into nanotechnology development, and have dramatically increased their nanoscience research influence, as measured by research citation. Some achieved these gains by producing significantly higher impact papers rather than by simply producing more papers. Those nations gaining the most in relative strength did not build specializations in particular subfields, but instead diversified their nanotechnology research portfolios and emulated the global research mix. We show this using a panel dataset covering the nanotechnology research output of 63 countries over 12 years. The inverse relationship between research specialization and impact is robust to several ways of measuring both variables, the introduction of controls for country identity, the volume of nanoscience research output (a proxy for a country’s scientific capability) and home-country bias in citation, and various attempts to reweight and split the samples of countries and journals involved. The results are consistent with scientific advancement by newcomer nations being better accomplished through diversification than specialization.

Keywords

Diversification · Specialization · Impact · Nanotechnology · Nanoscience · Developmental state

JEL Classification

O10 O25 O30 

Introduction

Tensions between diversification and specialization are a common theme in debates over economic development policy. On one hand, the standard, static Ricardian logic of comparative advantage suggests that a country should concentrate its resources on producing what it is relatively good at, utilizing the surpluses to purchase its other needs from other countries (Feldman and Audretsch 1999). This logic emphasizes the folly of expending resources on activities in which the country’s performance is likely to be marginal. While there are several possible objections to this logic (Shaikh 2007), the counterargument that has received the most attention in recent years is that activity in one area builds real workable knowledge that is applicable in other areas (Schubert and Braun 1986). Thus, when knowledge spillovers are substantial, ignoring certain areas of activity carries a cost in terms of lost productivity in other areas (Hidalgo et al. 2007; Hausmann et al. 2011). In an information age, it is argued, diversification, not specialization, is the way forward. Industrial policies may be required to lean against market forces that promote specialization (Lundberg 2007).

This logic is particularly well accepted in science and technology policy making (Lundberg 2007). Knowledge spillovers and transferable capabilities can, after all, exert an outsize influence on scientific success. The benefits of scientific diversification include: risk management—specializing in scientific dead-ends is costly (Moed et al. 1995; Leydesdorff 2013); creative and learning synergies—progress hinges on recombining existing knowledge and techniques in novel ways (Aksnes et al. 2012; MacRoberts and MacRoberts 1996; Shapira and Wang 2010; Braun et al. 2007); and rigor—which is enhanced by being able to assess ideas from multiple perspectives (Leydesdorff 1998). Yet, it must be remembered that diversification carries costs. These costs can be relatively large in countries that are new to the scientific stage, with limited economic and human resources and small scientific communities (Guan and Ma 2007). In such countries, diversification could lead to thinly spread pools of scientific talent across many areas, including some in which performance is likely to be poor. Spreading resources too thinly can reduce healthy scientific competition and quality control. Thus, theory is inconclusive on whether national scientific diversification actually promotes scientific success or not, especially in newcomer scientific nations.1

Unfortunately, empirical understanding of this relationship at the national level is quite incomplete.2 The central purpose of this paper, therefore, is to put this relationship under the microscope. We focus on nanotechnology, because it is an area in which the governments of newcomer scientific nations are actively seeking to push back on static comparative advantages. Drawing on a large body of work examining ways to measure diversification and research impact (e.g., Schubert and Grupp 2011; Schubert and Braun 1986), we establish an extremely robust positive statistical relationship between a country’s degree of nanoscience diversification and its research impact.

Previous studies of national scientific diversification have focused more on its causes than its effects. They show a clear inverse relationship between the volume of a nation’s scientific activity and specialization—countries that published more scientific papers have more diversified publication mixes (Pianta and Archibugi 1991; Horlings and Van den Besselaar 2011). Similar results are reported for technological diversification—countries that patent more conduct more diverse types of R&D (Archibugi and Pianta 1992; Cantwell and Vertova 2004). Thus countries and organizations diversify their scientific and technological portfolios as their capabilities develop. Yet, Cantwell and Vertova (2004) also argue that technological diversification has become more difficult to achieve in recent decades, in the sense that the volume of patenting associated with a given level of patent diversification in a cross-section of advanced countries has increased. They suggest that the increased importance of multinational firms (and therefore the relatively reduced importance of the developmental state) is leading to greater technological specialization. While scientific and technological specialization are very different processes, the ever-expanding range of scientific research areas and deepening international collaboration networks (Wagner 2008) could similarly induce nations, especially newcomers with limited scientific resources, to specialize. Thus, it is important to ask not only whether scientific diversification is desirable (in the sense that it is associated with greater research impact), but whether it remains feasible. We will take on this task as well.

We will add to the existing literature on diversification and scientific development in several ways. First, whereas the literature has focused on the relationship between the volume of national output and diversification, we show a relationship between diversification and impact, as proxied by citations (Garfield 1979). Second, ours appears to be the first paper to do this at the national level, and the first to do it with publication, rather than patent data. Third, whereas much of the literature on the diversification–size relationship makes static comparisons between countries, we show a dynamic relationship between diversification and impact over time, holding the country constant. While the relationship between size and diversification across countries sheds light on why some countries specialize more than others (Mangàni 2007), the diversification–impact relationship addresses a policy question, namely, whether a given country should specialize or diversify.

Unfortunately, variations in diversification rates are unlikely to be exogenous to impact, and we therefore do not claim to identify the causal effect of diversification on impact. However, we do document a highly robust relationship over time within countries, and rule out several alternative interpretations of it. Perhaps most importantly, the relationship is observed even after controlling for countries’ changing volume of nanoscience publications. This suggests that the relationship between diversification and impact measured over time is not driven by unmeasured developments in scientific capabilities.

Our work is subject to two important caveats. The first is that, as is well understood, citation patterns provide only a proxy measure of the influence of scientific research. We use the word impact simply as a technical descriptor of citation based indicators of the relative frequency with which people report having read a particular research publication. The second is that highly-cited research is itself only one input into national technological advancement. Showing that countries that pursue a more diverse mix of scientific topics produce more highly cited work is therefore only the first stage in uncovering (or rejecting) the view that scientific diversification is important for national technological development.

Nanotechnology development provides, in some ways, an ideal setting to examine the strategies and successes of developmental states.3 Nanotechnology, being costly, is not an obvious area of natural strength for young, resource-constrained scientific powers. Resource requirements probably constrain both diversification into nanotechnology and diversification amongst its various subfields. It is therefore an area in which developmental states wishing to go beyond their static comparative advantages have had to make serious attempts to enhance scientific effort. Moreover, while some countries have diversified their nanotechnology research portfolios, others have become more narrowly concentrated in particular areas; and trends in research impact also vary widely across countries. This permits us to measure the diversification–impact relationship quite accurately. The next section of the paper reviews the role of the developmental state in nanotechnology development around the world to make this case. We then proceed to data, analysis and conclusions.

Nanotechnology and the developmental state

In the early years of the Twenty-First Century, the major Twentieth Century scientific research powers, or “Incumbents”—the US, Western Europe, Japan and Russia—have faced intensified challenges from rapidly developing states such as China, India, Korea, Brazil, Singapore and Taiwan. These challenges are manifest in these “Newcomer” countries’ increasing investments in research and development (R&D) and in their growing scientific productivity, both of which have outstripped their economic output (Royal Society of London 2011). China has led the way here, overtaking Japan and Europe in terms of gross R&D expenditures and publication output in recent years (OECD 2014; Van Noorden 2014). Scientific development has also been rapid in India and Brazil, in new emergent scientific nations in East and South-East Asia such as Korea and Singapore as well as in some of the smaller European nations. While the Incumbents as a whole still lead the Newcomers in investing in and reaping the performance rewards of scientific research in terms of numbers of research articles, citations, and especially translation of research science into patents and innovative products, these gaps have begun to shrink (Zakaria 2008; King 2004; Shapira and Wang 2010; Youtie et al. 2008; Royal Society of London 2011).4

Since roughly 2000 both Incumbents and Newcomers have targeted nanotechnology as one of the primary growth areas of their strategic R&D activity. The US led the way, forming the US National Nanotechnology Initiative (NNI) in 2000 as an umbrella organization to fund and coordinate the activities of several federal agencies responsible for research and development aimed at exploiting the groundbreaking opportunities for research and discovery at the first level of organization of matter, ranging from a single atom to 100 nm (www.nano.gov). In one of its largest long-term investments in support of research in science and technology, the US has pinned considerable hopes on the NNI to maintain its world leadership in science and technology. The framers of the NNI argued that with potential applications in virtually every existing industry and new applications yet to be discovered, nanoscale science and technology would emerge as one of the major drivers of economic growth in the first part of the Twenty-First Century (Abramo et al. 2012). They estimated that with industry input the market for products incorporating nanotechnology could reach $1 trillion worldwide by 2015 (M. C. Roco 2007). The US funding contribution to launch nanotechnology was therefore substantial ($1 billion in the first 2 years), and this was just the beginning. Annual US government funding for nanotechnology rose to $1.8 billion in 2009 and by 2010 the cumulative federal support for NNI programs since the start in FY 2001 amounted to more than $12 billion (NNI.gov).5

The US was soon joined by at least 60 countries in founding their own nanotechnology initiatives, so that by 2008 the worldwide government support of nanotechnology initiatives was more than $6.3 billion (Chen and Roco 2008, Table 1-1). On top of investments by its member states (Germany and France each committed about $3 billion to government funded nano research between 2005–2010, averaging 2.4 and 3.1 % respectively of their total government funded research budgets), the EU allocated $1.9 billion during 2002–2006 for nanoscience and nanomaterials in its Framework Program 6, and significantly increased that commitment under the Framework Program 7 (2007–13) to $3.2 billion during 2007–10 (Commission of the European Communities 2007; European Commission 2006).

Other countries have also ramped up their nanotech investments, in some cases closing the gap with the US (Fig. 1). China has committed an average of 5.4 % of its total public research budget to nanotechnology since 2005, compared to an average of 1.4 % by the US (Fig. 2). Russia has come from nowhere to commit a whopping $3.5 billion to nano between 2007 and 2010, much of it on commercialization, constituting 3.9 % of its total federally funded research budget. Japan has committed 3.7 % of its total government-funded research to nanotechnology since 2005, and the corresponding commitments over this period for other countries were: Taiwan 4 %; Singapore 3.3 %; Korea 2.9 %; and the UK 1.9 %.
Fig. 1

Government funded nanotechnology R&D. Sources OECD Statistics; Working party on Nanotechnology (2012); PCAST (2012); Raje (2011); Harper (2011); Palmberg et al. (2009)

Fig. 2

Government funded nanotechnology R&D as  % of total government R&D. Sources OECD Statistics (2013); OECD Working party on Nanotechnology (2012); PCAST (2012); Raje (2011); Harper (2011); Palmberg et al. (2009)

As a whole, these efforts clearly demonstrate that developmental states have been aggressively pushing innovation in nanotechnology (see, for example, Appelbaum et al. 2011). These efforts are reflected in the shares of these countries’ nanoscience publications compared to their global shares of all scientific publications. In its 2009 examination of the top 25 countries by the share of nanotechnology-related publications, the OECD Working Party on Nanotechnology reported that China and Singapore had nearly three times the share of papers devoted to nanotechnology as compared to these countries’ respective shares of all publications in the ISI Web of Science database. Japan, Germany, France, Taiwan, and the Russian Federation also had a significantly higher share of nanotechnology publications compared to all publications (Palmberg et al. 2009). Overall the OECD study reported that growth in nanotechnology publications outstripped the growth rate of all publications in the Web of Science database.

This paper studies the scientific output that has been supported by this wave of largely state sponsored research. Previous studies indicate that while the national funding efforts of Newcomer nations have fueled a large increase in the number of nanotechnology publications by those nations, the average impact of their work as measured by citations still trails the US and several European nations (Youtie et al. 2008; Guan and Ma 2007; Jin and Rousseau 2005; Kostoff 2012; Leydesdorff and Wagner 2009; Leydesdorff 2013; Editorial, 2008). As we demonstrate in this paper, while the impact of nanoscience and technology publications of Newcomer nations lagged behind the US in the launch phase of federally funded nanotechnology (2000–2005/07), the gap between the Incumbents and some of the Newcomers has been closing rapidly, and other Newcomer nations are making up for a lack of impact with rapid growth in the quantity of research publication. We especially show that several Newcomer nations have rapidly diversified their nanotechnology research efforts, so that the proportions of their research publications that fall in different nanotechnology subfields are converging rapidly towards global norms. Other countries remained specialized in particular subfields, or became more specialized. This diversity of trends in research impact and in specialization/diversification permits us to study the relationship between changes in diversification and research impact. Given the public resource commitments just documented, it is important to understand the effectiveness of countries’ nanotechnology research strategies.

Data and methods for bibliometric analysis and topic modeling

Bibliometric analyses of nanotechnology have been performed using a variety of query techniques for selecting nanotechnology-relevant documents from the Thomson Reuters (formerly ISI) Web of Science (WoS) database (Glanzel et al. 2003; Noyons et al. 2003; Zitt and Bassecoulard 2006; Mogoutov and Kahane 2007; Leydesdorff and Zhou 2007; Porter et al. 2008; Pouris 2007; Onel et al. 2011; Mehta et al. 2012). The WoS contains English language metadata records for nearly 50 million scholarly research articles, providing a convenient and rich resource for scientometric analysis. Data for the present study were collected using the query method in Kostoff et al. (2006), Lenoir and Herron (2009) and Mehta et al. (2012). For the present study the Kostoff query was used to retrieve the metadata records covering all 564,993 relevant nanoscience and nanotechnology articles spanning 1 January 2000 through 1 May 2012 from the WoS one country at a time for 68 countries (reduced to 63 once we drop five countries that had no nano publications in one or more years). Data were extracted from the metadata records and organized into a flat-file format containing 564,970 useable records.

The query in Kostoff et al. (2006) was chosen at the time the data were collected because it was developed iteratively, has been widely used, incorporates expert input, and requires no citation network data. Huang et al. (2011) note that several outcomes of bibliometric research on nanotechnology are qualitatively invariant to the choice of lexical query used to generate the data they use. This said, while the Kostoff query produces excellent retrieval performance, it could result in under-representation of emergent nanoscale subjects described entirely by novel keywords. In practice, such problems should be reduced by the fact that most nano papers are positively identified by more than one of our search terms. For example, our dataset does capture the enormous growth rate in graphene publications since 2004, even though graphene is not one of our search terms. Nevertheless, some under-representation of emergent fields is likely and future studies should examine the issue using data collected using other queries.

In order to examine trends in research specialization, and in particular to track research trends by nanotechnology subfield, we introduced topic modeling of our original data set. Topics in the nanotechnology papers represented by WoS metadata records were identified and assigned using Latent Dirichlet Allocation (LDA), a machine learning technique for identifying topics (Blei et al. 2003) that has been widely used as the basis for topic discovery tasks across diverse collections of documents, including collections of news articles (Newman et al. 2006), broad digital libraries (Mimno and McCallum 2007), general scientific literature (Blei and Lafferty 2007; Hall et al. 2008), and nanotechnology (Porter and Zhang 2012; Zhang et al. 2007). LDA is used in the present study to identify latent topics in the nanotechnology literature and subsequently assign the most likely topic to each paper. Using LDA to discover topics within the nanotechnology corpus involved four steps: (1) corpus preparation; (2) topic learning; (3) topic naming; and (4) topic assignment.

A straightforward procedure to prepare the document corpus was undertaken. Text fields from document metadata (title, abstract, keywords and Keywordplus®) were concatenated for each document, parsed into unigrams, converted to all lower case, and used to represent document content. Unigrams parsed from the four metadata text fields were filtered using two stopword lists: the “English” stopword list from the NLTK software package, and the ISI Web of Knowledge stopword list (http://images.webofknowledge.com/WOK45/help/WOK/ht_stopwd.html). All terms occurring only once in the entire collection were also removed, as were terms occurring in more than 75 % of the documents.
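The snippet below is a minimal sketch of this corpus-preparation step, not the authors' code; the record field names, the input file, and the truncated Web of Knowledge stopword excerpt are hypothetical.

```python
# Minimal sketch of the corpus-preparation step described above (not the authors' code).
# Field names (title, abstract, keywords, keywords_plus) and the input file are hypothetical.
import csv
from collections import Counter

from nltk.corpus import stopwords  # requires a prior nltk.download('stopwords')

WOK_STOPWORDS = {"about", "again", "all", "almost", "also"}  # excerpt only; full list at the WoK URL above
STOPWORDS = set(stopwords.words("english")) | WOK_STOPWORDS


def tokenize(record):
    """Concatenate the four metadata text fields, lower-case them, and split into unigrams."""
    text = " ".join(record.get(field, "") for field in ("title", "abstract", "keywords", "keywords_plus"))
    return [w for w in text.lower().split() if w not in STOPWORDS]


with open("nano_records.csv", newline="", encoding="utf-8") as fh:  # hypothetical flat file
    docs = [tokenize(row) for row in csv.DictReader(fh)]

# Drop terms that occur only once in the whole collection, or in more than 75 % of documents.
term_freq = Counter(t for doc in docs for t in doc)
doc_freq = Counter(t for doc in docs for t in set(doc))
max_df = 0.75 * len(docs)
docs = [[t for t in doc if term_freq[t] > 1 and doc_freq[t] <= max_df] for doc in docs]
```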

Using the GENSIM python software package (Řehůřek and Sojka 2010), we generated a “bag of words” representation from the resulting list of document-text tuples, and then learned four different topic models with the Gensim LDA implementation, each model corresponding to a fixed number of topics (k = 5,…,8). The selected range for k privileges interpretability over improved likelihoods (Chang et al. 2009), with k = 8 considered the upper end of the range of human interpretability (Miller 1956; Cowan 2001). Every document was used in the learning phase. For each topic, Gensim output a list of its most identifying terms, and we labeled each topic based on a combination of those terms.

Each of our four learned models was then applied to each document in order to output from GENSIM an array of 1 to k topic assignment scores for each document. Each topic assignment score indicates the posterior probability of the document belonging to that topic; because LDA is a mixed-membership model, papers can belong to more than one topic. A document may receive only one non-zero probability score, indicating it belongs in one topic, or it may receive as many as k non-zero scores, indicating some likelihood that the document belongs to every topic. In the present study, in order to partition nanotechnology into topic fields in a simple way, we removed mixed assignment simply by choosing for each document the topic assignment with the highest score. Nonetheless, it is crucial to point out that the roughly 565,000 papers in the present study do frequently belong to more than one topic. The topics do overlap, as they should, because the nanotechnology topics we are calling “subfields” do not represent distinct disciplinary entities. The topic assignment of each paper merely shows the most central topical tendency of a paper even though a paper may be relevant to more than one topic. For example, a paper labeled as “thin films” may also be about a nano-optoelectronics application, or a paper labeled as “nanoparticles & self-assembly” may also be, perhaps to a lesser extent, about “nanomedicine/nanobio”. The label selected for a paper is simply the one to which the paper has the highest probability of belonging.
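As an illustration, the following sketch (again not the authors' code) shows how topic learning and hard topic assignment can be carried out with Gensim on the token lists from the previous sketch; the number of passes and the random seed are illustrative choices.

```python
# Sketch of topic learning (k = 5,...,8) and hard topic assignment with Gensim.
# `docs` is the list of token lists from the preprocessing sketch above.
from gensim import corpora, models

dictionary = corpora.Dictionary(docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]

# Learn one model per candidate topic count.
lda_models = {
    k: models.LdaModel(bow_corpus, id2word=dictionary, num_topics=k, passes=10, random_state=0)
    for k in range(5, 9)
}

# Inspect the most identifying terms of each topic in the five-topic model (used for manual naming).
for topic_id, terms in lda_models[5].show_topics(num_topics=5, num_words=7, formatted=False):
    print(topic_id, [word for word, _ in terms])

# Hard assignment: keep only the highest-probability topic for each paper.
lda5 = lda_models[5]
assignments = [
    max(lda5.get_document_topics(bow, minimum_probability=0.0), key=lambda pair: pair[1])[0]
    for bow in bow_corpus
]
```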

Each of these four models was then assessed in terms of its precision and recall. We found that the optimal models using each of k = 6, 7 and 8 included some extremely small and large categories, with indistinct clustering. The optimal model using k = 5 did not have these features. We therefore selected the five-topic model, manually choosing appropriate names for each cluster, shown in Table 1.6
Table 1

Subfield definitions

| Topic | Subfield (label on graphs) | Most distinguishing terms | # of papers (% of total) |
| 1 | Nanomedicine and nanobiotechnology (Bio) | Cells, protein, binding, cell, human, DNA, proteins | 46,495 (8.22 %) |
| 2 | Thin films (films) | Films, temperature, growth, film, thin, deposition, surface | 166,752 (29.47 %) |
| 3 | Nanoparticles and self-assembly (particles) | Surface, nanoparticles, particles, gold, solution, monolayers, polymer | 152,124 (26.89 %) |
| 4 | Carbon nanotubes and quantum dots (T&D) | Quantum, carbon, dots, nanotubes, energy, electron, states | 85,649 (15.14 %) |
| 5 | Nanoelectronics and nano-optoelectronics (electronics) | Laser, optical, surface, model, force, high, wavelength | 114,806 (20.29 %) |

Figure 3 shows trends in the number of publications and the relative citation rates7 in each of our five fields. Several trends are clear, all of which will influence our choices regarding the construction and interpretation of our measures of specialization and impact: (1) citation rates vary significantly across fields, and, with the exception of particles, papers in bigger fields are generally cited less often; (2) the field of nanobio is growing most rapidly; and (3) citation rates to nanobio papers (a nascent field) drop as the number of nanobio papers accelerates, while relative citation rates to Tubes and Dots (which provides some of the knowledge base for nanobio) increase.
Fig. 3

The growth of fields. All papers in this field in the world. Relative citation rate is citations per paper in the field, scaled by the average citation rate across fields in the year. a Publications. b Citations
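As a concrete illustration of the relative citation rate plotted in Fig. 3b, the sketch below (not the authors' code) computes it from a hypothetical paper-level table; the within-year scaling here uses the unweighted mean of the five field-level citation rates, which is one reasonable reading of the definition above.

```python
# Sketch of the field-level relative citation rate used in Fig. 3 (column names are hypothetical).
import pandas as pd

papers = pd.read_csv("nano_papers.csv")  # one row per paper: year, field, times_cited

field_year = (
    papers.groupby(["year", "field"])
    .agg(n_papers=("times_cited", "size"), cites_per_paper=("times_cited", "mean"))
    .reset_index()
)

# Citations per paper in the field, scaled by the average citation rate across fields in that year.
field_year["relative_citation_rate"] = field_year["cites_per_paper"] / field_year.groupby("year")[
    "cites_per_paper"
].transform("mean")
```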

Analysis

Our analysis proceeds in three phases. First, we examine trends in the scientific “influence” of 11 countries that are particularly important sources of nanotechnology research. We ask whether the Newcomers are rising by improving the quantity or average impact of their research. This is followed by an analysis of the relative strengths of these nations, as proxied by shares of publications and citations, in the five major nanotechnology subfields. Finally, we examine whether highly specialized nano research portfolios are a recipe for improving the impact of a country’s research, or whether either of the two diversification strategies (conformity and deconcentration) pays higher dividends.

Scientific influence: quantity versus impact

Following the same intuition as King (2004) we measure the relative “influence” of a country’s ideas in a given year by the share of all citations to papers published that year that are to papers involving at least one author from that country.8, 9 Of course, true scientific influence goes beyond academic citations, and citations are an imperfect indicator of the origin of scientific ideas (MacRoberts and MacRoberts 1996; Leydesdorff 1998). In keeping with scientometric precedent, we are working with imperfect but useful proxies for academic influence (Kostoff 1998).10

This simple share-of-citations measure of influence is attractive because it can be perfectly decomposed to show whether a nation’s influence varies over time due to changes in the “quantity” and “impact” of its research. In keeping with previous studies (T. Schubert and Grupp 2011; A. Schubert and Braun 1986; Moed et al. 1995, p. 399), we proxy for the academic impact of a country’s research in a given year using a national relative citation index (RCI)—the ratio of the average citation rate of nanotechnology papers published that year involving this country to the average citation rate of all nanotechnology papers published in that year.11 We measure the country’s (relative) research quantity as the share of publications in that year that involve an author from this country. Denote the country by c, year by t, the total numbers of publications and citations attributed to country c and year t by \(N_{c,t}\) and \(TC_{c,t}\), and those by all countries in year t by \(N_{t}\) and \(TC_{t}\). “Influence” can be expressed and decomposed as follows:
$$\text{Influence}_{c,t} \equiv \frac{TC_{c,t}}{TC_{t}} \equiv \left( \frac{N_{c,t}}{N_{t}} \right) \times \left( \frac{TC_{c,t}/N_{c,t}}{TC_{t}/N_{t}} \right) \equiv \left( \text{Share of publications} \right)_{c,t} \times \left( \text{Relative citation index} \right)_{c,t}$$
(1)
Taking logs and time differences of (1) yields (2), which expresses the growth rate of influence as the sum of the growth rates of “quantity” and “impact”:
$$\%\Delta\left( \text{Influence} \right)_{c,t} \equiv \%\Delta\left( \text{Share of publications} \right)_{c,t} + \%\Delta\left( \text{Relative citation index} \right)_{c,t}$$
(2)
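The decomposition in Eqs. (1)–(2) can be computed directly from country-year publication and citation counts; the sketch below (not the authors' code, with hypothetical column names) illustrates it.

```python
# Sketch of the influence decomposition in Eqs. (1)-(2). Column names are hypothetical.
import numpy as np
import pandas as pd

country_year = pd.read_csv("country_year_counts.csv")  # columns: country, year, n_pubs, total_cites

# World totals N_t and TC_t. If internationally collaborative papers are attributed to several
# countries, these totals should instead be taken from the deduplicated paper-level data.
world = (
    country_year.groupby("year")[["n_pubs", "total_cites"]]
    .sum()
    .rename(columns={"n_pubs": "world_pubs", "total_cites": "world_cites"})
    .reset_index()
)
df = country_year.merge(world, on="year")

df["pub_share"] = df["n_pubs"] / df["world_pubs"]                                        # quantity
df["rci"] = (df["total_cites"] / df["n_pubs"]) / (df["world_cites"] / df["world_pubs"])  # impact
df["influence"] = df["total_cites"] / df["world_cites"]                                  # = pub_share * rci

# Growth-rate decomposition as log differences, per Eq. (2).
df = df.sort_values(["country", "year"])
for col in ("influence", "pub_share", "rci"):
    df[f"dlog_{col}"] = df.groupby("country")[col].transform(lambda s: np.log(s).diff())
# dlog_influence equals dlog_pub_share + dlog_rci up to floating-point error.
```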
Figure 4 shows the evolution of the shares of publications. The Incumbents are plotted on a larger scale than the Newcomers, and Chinese output is measured on its own (right) axis, given that it is now many times larger than that of any other country among the Newcomers. The figure makes clear that in quantity terms, the Newcomers are rising relative to the Incumbents (especially the EU, US and Japan). China’s growth is particularly arresting, while India has picked up speed. The Brazilian and Canadian shares remain flat, so their output has effectively grown at the same rate as that of the literature itself. South Korea and Taiwan, the earliest powers among the Newcomers, are still growing, but have decelerated, while Singapore’s output has grown slightly faster than has the global literature.
Fig. 4

Changes in research quantity. Share of all papers involving an author from this country/bloc. Note Internationally collaborative papers are attributed to more than one country. a Incumbents. b Newcomers

Figure 5 shows trends in relative citation rates. A more mixed picture emerges here. Of the Incumbents, the EU has edged up slightly, and the US is rapidly losing its advantage over other countries. Japan is making gains, while Russia is not. Among the Newcomers, Singapore stands out for its remarkable improvements in relative citation rates, while China has climbed strongly to equality with the global average. Taiwan has made some gains, South Korea and India have roughly held their ground, and Brazil’s RCI has declined somewhat.
Fig. 5

Changes in research Impact. Relative citation rate of papers involving an author from this country/bloc. Note Internationally collaborative papers are attributed to more than one country. a Incumbents. b Newcomers

Figure 6 shows the evolution of influence, while Table 2 decomposes its growth rate using identity (2). For example, the bottom three rows of Table 2 indicate that EU influence declined by 24.1 %, and would have declined 36.9 % due to its reduced share of publications if the relative citation rate of its publications had not improved, pulling its influence back up by 12.8 %.
Fig. 6

Changes in research Influence. Share of all citations to papers involving an author from this country/bloc. Note Internationally collaborative papers are attributed to more than one country. a Incumbents. b Newcomers

Table 2

Decomposition of scientific influence

|  | USA | EU | Japan | Russia | Canada | China | Korea | India | Taiwan | Singapore | Brazil |
| Share of publications, 2000 | 0.260 | 0.385 | 0.162 | 0.052 | 0.025 | 0.106 | 0.033 | 0.021 | 0.015 | 0.012 | 0.014 |
| Share of publications, 2011 | 0.197 | 0.266 | 0.073 | 0.028 | 0.024 | 0.268 | 0.069 | 0.061 | 0.036 | 0.019 | 0.013 |
| Relative times cited, 2000 | 1.615 | 0.951 | 0.769 | 0.477 | 1.157 | 0.659 | 0.897 | 0.730 | 0.731 | 0.669 | 0.785 |
| Relative times cited, 2011 | 1.438 | 1.081 | 0.896 | 0.416 | 1.197 | 1.002 | 0.876 | 0.719 | 0.785 | 1.623 | 0.648 |
| Share of citations, 2000 | 0.420 | 0.366 | 0.125 | 0.025 | 0.029 | 0.070 | 0.030 | 0.016 | 0.011 | 0.008 | 0.011 |
| Share of citations, 2011 | 0.283 | 0.288 | 0.065 | 0.012 | 0.029 | 0.268 | 0.061 | 0.044 | 0.029 | 0.031 | 0.009 |
| % Change (2000–2011) in influence; of which | −0.394 | −0.241 | −0.649 | −0.758 | −0.002 | 1.346 | 0.712 | 1.033 | 0.966 | 1.357 | −0.234 |
|  Quantity | −0.278 | −0.369 | −0.802 | −0.621 | −0.035 | 0.926 | 0.735 | 1.048 | 0.894 | 0.471 | −0.044 |
|  Impact | −0.116 | 0.128 | 0.153 | −0.136 | 0.034 | 0.419 | −0.023 | −0.016 | 0.071 | 0.885 | −0.191 |

Incumbents: USA, EU, Japan, Russia, Canada. Newcomers: China, Korea, India, Taiwan, Singapore, Brazil. Decompositions calculated per Eq. (2)

The results clearly reveal the rise of the Newcomers. Canada is the only Incumbent whose influence did not decline appreciably relative to the rest of the world, and Brazil is the only Newcomer country whose relative influence did decline. These trends in influence are mirrored in the contributions of quantity, which reflect the growing numbers of publications by Newcomer countries observed in Fig. 4.

The relative decline of the Incumbents and the rise of the Newcomers in influence have, however, taken radically different forms in different countries. Amongst the Incumbents, roughly two-thirds of the loss of US relative influence owes to its shrinking share of publications, with the remainder due to its declining edge in research impact, and Russia has also lost influence in both quantity and impact terms. The EU and Canada, on the other hand, have offset a large portion of the quantity loss through impact gains. Japan, despite registering the largest contribution of impact growth among the Incumbents, could not make up for its even larger loss in quantity.

Amongst the Newcomers, however, China, Singapore and Taiwan are increasing their influence via both the impact and quantity of their research. Less than a tenth of Taiwanese gains come from impact improvements, compared with around one-third of Chinese gains and two-thirds of Singaporean gains. Thus, the rise of the Newcomers is not simply a quantity story. This said, it has been entirely a quantity story for both South Korea and India, whose governments may wish to examine their policy differences with Singapore to seek ways of increasing research impact. Brazil, unlike the rest of the Newcomers, is not gaining in influence. There is, therefore, significant variation in nations’ trends in impact. We will explore this variation in Sect. 3.3.

Next, to gain a sharper impression of the activities of world-class scientists, and to possibly reduce problems of self-citation and home-country bias in citation, we focus on articles published in more prestigious journals. The appendix presents these results, which use a subset of journals with Eigenfactor scores of 0.045 or higher.12 This restriction eliminates roughly half of our sample. All of the major movements in influence, quantity and impact mentioned above remain intact when compared with the figures and table presented above. The biggest changes are that China and South Korea register larger improvements in relative citation rates to their papers in prestigious journals, suggesting a core cadre of high-quality researchers that has gained in strength, while India’s apparent gains in influence seem to come from lower quality journals. Conversely, among the Incumbents, the most obvious change is that the US’s share of publications did not shrink by nearly as much in the prestigious journals, while its relative citation rate shrank much faster in these journals. Thus, while roughly two-thirds of the US’s loss of relative influence in the general journal pool is due to a loss of quantity, roughly two-thirds of its slightly smaller loss of influence in prestigious journals is due to reduced relative citation rates.

The implications of these trends, together, are clear. The Newcomers are rising in both quantity and impact. While their nanotechnologists still face challenges publishing in the most prestigious journals, when they do, they tend to make a serious and growing impact.

Trends in research specialization

A long literature in scientometrics and economics has developed measures of trade and research “specialization” (Balassa 1965; Hidalgo and Hausmann 2009; Avila-Robinson and Miyazaki 2012; Schubert and Braun 1986; Schubert and Grupp 2011). Specialization is most often measured at the level of specific countries and subfields (for example, we may measure Korea’s degree of specialization in nano-electronics). The most common scientometric indicator of country c’s specialization in subfield f in year t is its Revealed Literature Advantage, \(RLA_{f,c,t} \equiv s_{f,c,t}/s_{f,t}\), where \(s_{f,c,t} \equiv N_{f,c,t}/N_{c,t}\) is the share of the country’s publications that year in the subfield, and \(s_{f,t} \equiv N_{f,t}/N_{t}\) is the share of all nanotechnology publications that year in that subfield (Schubert and Grupp 2011). This is greater than one whenever country c specializes in subfield f.

It is also possible to aggregate these measures to a national level to quantify two notions of a country’s research specialization: the extent to which its research mix is concentrated in certain fields, and its non-conformity to the global research mix. In the least concentrated research mix, 20 % of papers would fall into each of our five categories. Conformity is the strategy of emulating the global average distribution of publication, which is approximately 8.2 % nanobio/nanomedicine, 15.1 % quantum dots/carbon nanotubes, 29.5 % nanofilms, 26.9 % nanoparticles, and 20.3 % nanoelectronics (Table 1). Figure 7 shows why these two notions of specialization are distinct, especially when global research effort is unevenly distributed across fields. While concentration and non-conformity are notions of specialization, their opposites, de-concentration and conformity capture research diversification.
Fig. 7

Distribution of publishing efforts using two stylized nanoscience diversification strategies. Note Plot shows the research footprints for perfect conformity to the global average, in blue with circular markers, and perfect deconcentration, in red with diamond markers. The distance from the origin to the data point represents the fraction of all nanoscience publications in that subfield

We track concentration using the Herfindahl–Hirschman Index \(HHI_{c,t} \equiv \sum\nolimits_{f} s_{f,c,t}^{2}\). The index ranges from 0.2, when each of the five subfields has the same number of papers, to 1, when all papers are in only one subfield.13 We have examined four measures of non-conformity to ensure that our results are robust to our choice of measure. First, the easiest to interpret measure of non-conformity (UNCTAD 2006) is \(NonConformity_{c,t} \equiv \tfrac{1}{2}\sum\nolimits_{f} \left| s_{f,c,t} - s_{f,\sim c,t} \right|\). This ranges in value from zero (total conformity: the subfield composition of c’s publication mix is exactly the same as that of other countries) to a theoretical maximum near one (total non-conformity: c publishes in a field if and only if other countries do not publish in it). Our second measure, referred to as a Chi square measure, is \(\chi_{c,t}^{2} \equiv \sum\nolimits_{f} \left( s_{f,c,t} - s_{f,t} \right)^{2} / s_{f,t}\) (Pianta and Archibugi 1991; Andersson and Ejermo 2008). Our third and fourth non-conformity measures (see Cantwell and Vertova 2004) are the coefficients of variation across subfields within a country and year of RLA and of \(RLAM \equiv 2RLA/(RLA + 1)\). RLAM modifies RLA so that it ranges from zero to two and is symmetric around one. We refer to these coefficients of variation collectively as the CV measures of non-conformity. Our qualitative findings are all robust to the choice of measure. NonConformity is the easiest measure to relate intuitively to the underlying subfield shares, and the two CV measures, being scaled by within-country means, explode in countries with high specialization in some small subfield (Cantwell and Vertova 2004, p. 515). We therefore focus on non-conformity results using the NonConformity measure for the rest of the paper. Results using χ2 and the two CV measures are available on request.
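To make the definitions concrete, the following sketch (not the authors' code) computes the concentration and non-conformity measures from a country-by-subfield share table; note that the paper's NonConformity measure compares each country with all other countries (\(s_{f,\sim c,t}\)), whereas this simplified version uses the global shares.

```python
# Sketch of the specialization measures defined above. `shares` is a hypothetical wide table
# with one row per country (for a given year) and one column per subfield share s_{f,c,t};
# `world` holds the global subfield shares s_{f,t}.
import pandas as pd


def specialization_measures(shares: pd.DataFrame, world: pd.Series) -> pd.DataFrame:
    out = pd.DataFrame(index=shares.index)
    out["HHI"] = (shares ** 2).sum(axis=1)                    # concentration
    diff = shares.sub(world, axis=1)
    out["NonConformity"] = 0.5 * diff.abs().sum(axis=1)       # paper uses s_{f,~c,t} here
    out["ChiSquare"] = (diff ** 2).div(world, axis=1).sum(axis=1)
    rla = shares.div(world, axis=1)                           # revealed literature advantage
    rlam = 2 * rla / (rla + 1)                                # symmetric variant, in [0, 2]
    out["CV_RLA"] = rla.std(axis=1) / rla.mean(axis=1)
    out["CV_RLAM"] = rlam.std(axis=1) / rlam.mean(axis=1)
    return out
```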

Figure 8 tracks concentration over time for our main countries using a common scale. The data indicate that the Newcomers have been more concentrated in particular areas of nanoscience than the Incumbents. However, this situation is changing rapidly, as the Newcomers begin to spread their research portfolios into formerly under-represented fields. In conjunction with the rapid growth in research quantity (and for some, impact), this smoothing indicates that the Newcomers are rapidly developing capacities in new areas. This is nowhere more evident than in Singapore. We will examine what these areas might be, presently. Meanwhile, among the Incumbents, the US, EU and Canada are holding steady near the minimum possible level of concentration (0.2), but Japan and Russia are building broader capacities.
Fig. 8

Concentration in fields. a Incumbents. b Newcomers

Figure 9 examines NonConformity, which displays very different trends across countries. The US and EU stand apart for having developed more non-conforming research mixes over the duration of our study. By 2011, the US is the major power with the least conformist research mix. On the other hand, the research mixes among the major Newcomers have been converging on the global mix. Once again, this process is extremely pronounced in Singapore, which began as the most non-conforming emerging nanotechnology power, and virtually erased any trace of idiosyncrasy within a decade. National trends in conformity using χ2 and CV are qualitatively identical to those using NonConformity.14
Fig. 9

Non-conformity across fields. a Incumbents. b Newcomers

Figure 10 looks more deeply at what key countries have specialized in. Apparently, US specialization is led by its movement into photo-electronics, and its extremely high RLA in nanobio. The EU is specializing in photo-electronics even more noticeably, but moving out of nanobio. Meanwhile the convergence amongst the four Asian countries depicted comes from rapid gains in nanobio, movement away from concentrating principally in films, and some gains in photo-electronics. Once again, this transition has been most rapid and prominent in Singapore. By 2011, Singapore has RLAs of close to one in all five fields. Thus, the impression is that the US and EU are opening up new frontiers in nanobio and electronics while other countries are following them with a lag.
Fig. 10

Revealed literature advantage over time. a USA. b EU. c Japan. d China. e Korea. f Singapore

To summarize, Fig. 8 indicates that the Newcomers are becoming less concentrated, but that the US and EU are already near the minimum possible level of concentration; Fig. 9 indicates that the Newcomers are becoming more conforming as the US and EU are becoming less conforming; and Fig. 10 similarly shows that the US and EU are specializing as the major Asian powers are diversifying. We interpret this as a reflection of a process of scientific catch-up, with the Newcomers attempting to diversify by making up ground in areas such as nanobio, and the US and EU attempting to maintain a competitive edge in nanobio and electronics—two areas in which their industries have been investing heavily (Roco et al. 2010; van Zeebroeck et al. 2006; Raje 2011). Meanwhile the Newcomers are attempting to make inroads into these areas.

Are specialization and impact related?

We now use simple regressions to examine the relationship between proxies for research specialization (concentration and non-conformity) and research impact. We do this at the level of nations, using variants of the following regression specification:
$$Impact_{c,t}=\alpha_{c}+\alpha_{t}+\beta\,\times\,Specialization_{c,t}+\gamma X_{c,t}+e_{c,t}$$
(3)

Each variant involves one of the five measures of national scientific specialization analyzed previously (HHI, NonConformity, χ2, and the two CV measures), and one of three different proxy measures of scientific impact. The first proxy for impact is simply the relative citation index (RCI) discussed in Sect. 3.1. The RCI will be too generous to countries that are focused on disciplines that tend to have higher citation rates. This could lead to a spurious relationship between conformity and national research impact, because a country that moves into highly cited disciplines will tend to receive more citations even if the intrinsic quality of its work remains unchanged, simply because there are more citing papers in highly cited disciplines. To deal with inter-discipline differences in citations, we use the weighted average of a country’s rebased impact factors in each discipline (King 2004). The rebased impact factor for a country in a given discipline and year is the average citation rate of that country’s papers in that discipline and year, divided by the global average citation rate in that discipline and year; the weights used in the national average are the shares of the country’s publications that year in each discipline. We refer to this national aggregate as the country’s rebased impact factor (\(RBI_{c,t}\)) in that year. Our third impact measure (\(Top5_{c,t}\)) is simply the share of all publications by that country that are among the most highly cited 5 % of papers in their discipline that year. This corrects for differences in citation norms across disciplines because it applies discipline-specific citation rankings rather than citation levels. Insofar as it is unlikely that a paper could make it to the top 5 % in its discipline on the strength of friendly citations, it should also reduce problems arising from authors citing their compatriots’ work at a higher rate and other forms of strategic citation.
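A minimal sketch of how the RBI and Top5 measures could be computed from paper-level data follows (not the authors' code; column names are hypothetical, and the top-5 % cut-off is approximated with a percentile rank).

```python
# Sketch of the rebased impact factor (RBI) and Top5 measures. Column names are hypothetical;
# `nano_papers_by_country.csv` has one row per (paper, country) pair.
import pandas as pd

papers = pd.read_csv("nano_papers_by_country.csv")  # columns: country, year, discipline, times_cited

# Each paper's citations relative to the global mean citation rate of its discipline-year.
papers["rel_cites"] = papers["times_cited"] / papers.groupby(["discipline", "year"])[
    "times_cited"
].transform("mean")

# RBI_{c,t}: averaging per-paper rebased rates over a country's papers equals the weighted
# average of its discipline-level rebased impact factors, with publication-share weights.
rbi = papers.groupby(["country", "year"])["rel_cites"].mean().rename("RBI")

# Top5_{c,t}: share of the country's papers in (approximately) the top 5 % most cited
# papers of their discipline-year.
papers["top5"] = papers.groupby(["discipline", "year"])["times_cited"].rank(pct=True) >= 0.95
top5 = papers.groupby(["country", "year"])["top5"].mean().rename("Top5")
```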

As there could be disagreement regarding how to normalize citation rates, we check our results for robustness to four different normalization strategies. First, we normalize taking as disciplines the 135 subject categories reported by Web of Science that appear in our dataset. Second, we normalize within each of the eight topics discussed in footnote 8. Third, we normalize within the five topics used in Sect. 3.2. Fourth, the RCI is simply the limiting case of an RBI normalized within only one category—nanotechnology itself. Our results are invariant to the normalization strategy used. While it may be preferable to normalize citations to reflect the citation behavior of the citing article or journal (Waltman and van Eck 2013), we lack the data this would require and so leave this robustness check to future researchers.

Critically, all regressions employ country fixed-effects (αc) and allow for the average impact of countries’ nanotechnology papers to vary over the years, as the international competition and citation norms change. We correct for time using either 11 dummy variables corresponding to the years in our sample (αt), or [in a restricted version of Eq. (3)] a linear time trend in the year of publication. The country fixed-effects control for all national characteristics that remain constant over the time-period of our study (e.g., many cultural, historical, and geographic variables). Thus, the inclusion of country fixed effects and controls for time helps us to hold the country constant, and examine its relative scientific impact over time as it became more or less specialized. We do this because from a policy maker’s perspective it may be less relevant, for example, that the US is more highly cited than other countries and also more specialized. What matters is whether a given country is likely to do better when it specializes.

While the fixed effects control for country-specific omitted variables that remain constant over time, growth in countries’ scientific capacity or other time-varying characteristics could lead to spurious relationships between diversification and impact if we do not correct for them. The key assumption in a fixed effect model is that \(X_{c,t}\) is sufficiently comprehensive to capture such characteristics. For example, the development of capacity could both permit countries to pursue a wider array of scientific activities and also increase impact. To examine this possibility, we also experiment with controls for the share of all nanoscience publications in a year that come from that country. While adequate panel data on scientific funding are not available, we anticipate that this measure of research quantity will proxy for it.

To further reduce the risk that a bias towards citing papers by one’s compatriots could inflate our measures of impact, we also correct in several specifications for the share of all papers in the sample in the subsequent years that involve an author from this country. We anticipated, correctly, that this variable would be a positive and statistically significant determinant of citation. A further step is to reduce the sample to papers appearing in top journals (defined in "Scientific influence: quantity versus impact" section).
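For concreteness, the following is a minimal sketch of the fixed-effects specification in Eq. (3) using statsmodels' formula interface; it is not the authors' estimation code, and the column names and the plain (unclustered) standard errors are illustrative assumptions.

```python
# Sketch of Eq. (3): impact regressed on a specialization measure with country fixed effects,
# year dummies, and the publication-share controls. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("nano_country_year_panel.csv")  # country, year, rci, nonconformity,
                                                    # pub_share, future_pub_share

# C(country) supplies the country fixed effects (alpha_c); C(year) the year dummies (alpha_t).
fit = smf.ols(
    "rci ~ nonconformity + pub_share + future_pub_share + C(country) + C(year)",
    data=panel,
).fit()

# The coefficient of interest is beta on the specialization measure.
print(fit.params["nonconformity"], fit.pvalues["nonconformity"])
```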

Table 3 provides the estimates of β for 80 versions of this regression. The versions are distinguished by the following choices: the measures for specialization and impact; the set of journals whose articles are in the sample; the sets of countries included in the sample and the relative weightage given to each country in the regression; the manner in which we correct for time; and the other control variables included. The reported regressions use HHI or NonConformity to proxy for specialization, and RCI and two versions of RBI to proxy for impact. The versions of RBI result from normalizing citation rates within 135 disciplines, and within 5 topics. Results using the χ2 and CV measures to proxy for specialization, and normalizing citation rates within 8 topics, are almost identical qualitatively, and are available on request.
Table 3

Fixed effects regressions of nanoscience impact on specialization measures

Each cell reports the coefficient on the specialization measure, with its p value in parentheses. The first three impact columns use the all-journals sample; the fourth uses the good-journals sample.

| Row | Measure of specialization | Weighted by, or correcting for, pubs? | Time dummies/trend? | Future publication share | Set of countries | RCI (all journals) | RBI, 5 topics (all journals) | RBI, 135 disciplines (all journals) | RBI, 135 disciplines (good journals) |
| (1) | NonConformity | Weighted | Dummies | No | All | −1.18 (0.000) | −0.652 (0.000) | −1.065 (0.000) | −0.852 (0.000) |
| (2) | HHI | Weighted | Dummies | No | All | −1.882 (0.000) | −0.943 (0.000) | −1.374 (0.000) | −2.019 (0.000) |
| (3) | NonConformity | Correcting | Dummies | No | All | −0.439 (0.000) | −0.369 (0.000) | −0.436 (0.000) | −0.746 (0.000) |
| (4) | HHI | Correcting | Dummies | No | All | −0.499 (0.000) | −0.355 (0.000) | −0.337 (0.000) | −0.701 (0.000) |
| (5) | NonConformity | Correcting | Dummies | Yes | All | −0.441 (0.000) | −0.362 (0.000) | −0.435 (0.000) | −0.441 (0.000) |
| (6) | HHI | Correcting | Dummies | Yes | All | −0.555 (0.000) | −0.409 (0.000) | −0.396 (0.000) | −0.402 (0.000) |
| (7) | NonConformity | Weighted | Dummies | Yes | All | −0.579 (0.000) | −0.336 (0.002) | −0.566 (0.000) | −0.679 (0.000) |
| (8) | HHI | Weighted | Dummies | Yes | All | −1.048 (0.000) | −0.527 (0.005) | −0.645 (0.003) | −0.796 (0.000) |
| (9) | NonConformity | Weighted | Trend | Yes | Conforming | −0.584 (0.000) | −0.588 (0.000) | −0.500 (0.001) | −0.847 (0.001) |
| (10) | HHI | Weighted | Trend | Yes | Conforming | −0.923 (0.000) | −0.722 (0.000) | −0.478 (0.019) | −0.415 (0.154) |
| (11) | NonConformity | Weighted | Trend | Yes | Nonconforming | 0.242 (0.307) | 0.073 (0.709) | 0.123 (0.596) | −0.602 (0.000) |
| (12) | HHI | Weighted | Trend | Yes | Nonconforming | −0.629 (0.249) | 0.003 (0.995) | −0.279 (0.604) | −0.896 (0.008) |
| (13) | NonConformity | Weighted | Trend | Yes | Deconcentrating | −0.217 (0.123) | −0.156 (0.215) | −0.121 (0.378) | −1.549 (0.006) |
| (14) | HHI | Weighted | Trend | Yes | Deconcentrating | −0.822 (0.000) | −0.665 (0.001) | −0.490 (0.021) | −1.395 (0.021) |
| (15) | NonConformity | Weighted | Trend | Yes | Concentrating | 0.06 (0.788) | −0.098 (0.629) | −0.126 (0.560) | −0.503 (0.000) |
| (16) | HHI | Weighted | Trend | Yes | Concentrating | −0.723 (0.276) | −0.386 (0.525) | −0.111 (0.863) | −0.206 (0.422) |
| (17) | NonConformity | Weighted | Trend | Yes | Not rising | −0.606 (0.001) | −0.192 (0.200) | −0.615 (0.001) | −0.837 (0.000) |
| (18) | HHI | Weighted | Trend | Yes | Not rising | −1.303 (0.001) | −0.353 (0.236) | −0.949 (0.010) | −1.136 (0.003) |
| (19) | NonConformity | Weighted | Trend | Yes | Rising | −0.502 (0.003) | −0.591 (0.000) | −0.423 (0.012) | −0.457 (0.007) |
| (20) | HHI | Weighted | Trend | Yes | Rising | −0.926 (0.000) | −0.735 (0.002) | −0.48 (0.043) | −0.533 (0.026) |

Each of the 80 coefficients is from a separate regression of the form in Eq. (3). Regressions are differentiated by the indicators of national research impact and specialization, samples of journals, samples of countries and the choice of control variables. Every regression includes country fixed effects. The complete sample (Rows 1–8) covers papers published during 2000–2010 by the 63 countries listed in Table 4

The most obvious feature of the results in this table is that specialization is never positively and significantly associated with impact. Indeed, across a wide variety of specifications, the relationship is usually negative and highly significant. While results for TOP5 are not always statistically significant, they are always negative when they are significant.15

Results in rows 1–2 indicate that specialization is associated with significantly lower impact, no matter how specialization and impact are measured, or the set of journals in which the articles appeared. The regressions in rows 3–6 drop country weights, so that, for example, Algeria’s experience is given equal weight to the US’s experience. The regressions in these four rows also include controls for the country’s share of nanoscience publications that year. If diversification is associated with impact because it proxies for the size of the country’s nanoscience program, the coefficient on this proxy for size should be positive. The coefficients (not shown) are all statistically insignificant when the size proxy is introduced on its own (regressions in rows 3–4) and negative when the corrections for future publication shares are introduced (regressions in rows 5–6). Moreover, the negative relationship between impact and specialization persists in rows 3–6, indicating that it is a feature of prolific and non-prolific countries alike, and is probably not driven by a failure to correct for national trends in unobserved scientific capacity. Rows 5–8 add our correction for countries’ future publication shares. This variable enters positively and significantly, consistent with the existence of a home-country bias in citation, but does not eliminate the inverse relationship with specialization, implying that this relationship is not an artifact of home-country bias.

Finally, to accommodate the possibility that any connection between specialization and impact is the spurious result of rising scientific powers’ natural tendency to develop strengths in new and unusual fields as they do so, we divided the sample in three different ways. To do this, we conducted separate regressions, for each country, of each of the following on a linear time trend: the country’s share of publications that year, its non-conformity index, and its HHI index. We then categorized countries whose trend in publication share was positive and significant as rising (the rest are classified as not rising), those whose non-conformity index trended downwards significantly as conforming (the rest are nonconforming), and those whose HHI trended significantly downwards as deconcentrating (vs. concentrating). In assigning countries to categories, we require the trend coefficients to be significant at only the 25 % level, reflecting both our lack of a strong prior on whether a time trend exists and the small sample size. Table 4 lists countries according to these categories. As required for this robustness test, the categories capture distinct aspects of science development, so that, for example, the rising countries include some that are nonconforming and deconcentrating. It should therefore be difficult to attribute any negative association between specialization and impact within these groups to some omitted secular trend in scientific capacity.
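A minimal sketch of this classification procedure follows (not the authors' code; the panel file and column names are hypothetical).

```python
# Sketch of the country classification behind Table 4: regress each country's publication share,
# non-conformity index, and HHI on a linear time trend and apply a 25 % significance threshold.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("nano_country_year_panel.csv")  # country, year, pub_share, nonconformity, hhi


def trend(country_df: pd.DataFrame, var: str):
    """Return the slope on `year` and its p value for one country's series."""
    res = smf.ols(f"{var} ~ year", data=country_df).fit()
    return res.params["year"], res.pvalues["year"]


rows = []
for country, grp in panel.groupby("country"):
    b_pub, p_pub = trend(grp, "pub_share")
    b_nc, p_nc = trend(grp, "nonconformity")
    b_hhi, p_hhi = trend(grp, "hhi")
    rows.append({
        "country": country,
        "rising": b_pub > 0 and p_pub < 0.25,           # significant upward trend in publication share
        "conforming": b_nc < 0 and p_nc < 0.25,         # significant downward trend in non-conformity
        "deconcentrating": b_hhi < 0 and p_hhi < 0.25,  # significant downward trend in HHI
    })

classification = pd.DataFrame(rows)
```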
Table 4  List of countries classified by nanotechnology development trends

Country | Rising | Conforming | Deconcentrating
Algeria | Yes | Yes | Yes
Argentina | | |
Australia | Yes | Yes |
Austria | | Yes | Yes
Belarus | | |
Belgium | | |
Brazil | | | Yes
Bulgaria | | Yes | Yes
Canada | Yes | Yes |
Chile | Yes | Yes |
China | Yes | Yes | Yes
Colombia | Yes | Yes | Yes
Costa_Rica | | |
Croatia | | Yes |
Cyprus | Yes | Yes | Yes
Czech Republic | | | Yes
Denmark | | |
Egypt | Yes | Yes | Yes
England | | |
Estonia | | Yes | Yes
Finland | | |
France | | | Yes
Germany | | | Yes
Greece | | |
Hungary | | Yes | Yes
India | Yes | Yes | Yes
Iran | Yes | Yes |
Ireland | Yes | | Yes
Israel | | | Yes
Italy | | | Yes
Japan | | Yes | Yes
Korea | Yes | Yes | Yes
Latvia | | | Yes
Lithuania | Yes | Yes | Yes
Malaysia | Yes | Yes | Yes
Mexico | | Yes | Yes
Netherlands | | |
New_Zealand | Yes | Yes |
Norway | Yes | Yes |
Pakistan | Yes | Yes |
Peru | | Yes | Yes
Poland | | | Yes
Portugal | Yes | Yes | Yes
Romania | Yes | Yes | Yes
Russia | | | Yes
Saudi_Arabia | Yes | Yes | Yes
Scotland | | |
Singapore | Yes | Yes | Yes
Slovakia | | Yes | Yes
Slovenia | Yes | |
South_Africa | Yes | Yes |
Spain | Yes | | Yes
Sweden | | | Yes
Switzerland | | | Yes
Taiwan | Yes | Yes | Yes
Thailand | Yes | |
Tunisia | Yes | | Yes
Turkey | Yes | Yes |
Ukraine | | | Yes
USA | | |
Venezuela | | |
Vietnam | Yes | Yes | Yes
Wales | | Yes |

Countries are ‘rising’ if their share of global nano publications displayed a significant upward trend. They are ‘deconcentrating’ (‘conforming’) if their HHI (non-conformity index) displays a significant downward trend. A 25 % significance level is applied throughout

Rows 9–10 of Table 3 confirm a fairly robust, statistically significant negative relationship between specialization and impact amongst the countries whose research mixes are beginning to conform to global norms. Among those countries whose research mixes are becoming less conformist (rows 11–12), this relationship also appears in high-quality journals, but not in the wider pool. Together, the results from rows 9–12 are consistent with the view that conformity yields benefits that remain after countries begin to develop a more specialized research mix. Rows 13–14 confirm an inverse relationship between the HHI and our impact measures among deconcentrating nations, whereas rows 15–16 reveal more limited evidence of this relationship among the concentrating nations. Together, rows 13–16 are therefore consistent with diversification yielding benefits that remain after countries begin to concentrate on specific areas. Meanwhile, the fact that the relationship is significant amongst the countries whose research mixes became less conformist (rows 11–12) or more concentrated (rows 15–16) only when the sample is restricted to papers appearing in the best journals suggests that it is particularly damaging for a country’s top scientific performers to become more specialized.

Rows 17–20 confirm the inverse relationship fairly strongly in rising and non-rising countries alike.

The large majority of these results, and every comparison of these results between good journals and the general pool, are consistent with the position that top-class science is achieved through diversification, not specialization. We have corrected in three ways for the possibility that some countries’ relative research impact is boosted by a familiarity bias in citation, with no effect on the finding. We have also checked whether the finding is driven by rising scientific powers that reduced the concentration and increased the conformity of their research mixes; the relationship still survives in the best journals even when those countries are excluded. The relationship is observed in countries with large and small publication volumes alike. And, perhaps most importantly, it is observed even when we control for countries’ shares of nanoscience publications that year.

Conclusions

We have studied trends in countries’ nanotechnology research volumes, impact, influence and composition, making both analytical and descriptive contributions.

Analytically, we exploit variations in countries’ trajectories in terms of nanoscience diversification and citation rates (a proxy for scientific impact) to study the relationship between these two variables over time. We have shown that nations more often improved their nanotechnology research impact when they conformed to the nanotechnology research portfolio of other nations and when they did not concentrate in a few subfields. This result is robust to corrections for the volume of scientific activity, home-country bias in citation, variations in citation rates across subfields, the exclusion of lower tier journals and various types of countries, and the choice of measures for impact and specialization. This is all consistent with diversification being preferable to specialization, for purposes of enhancing impact.16

One caveat on a causal interpretation of this result is that we could not control for measured resource commitments: countries that put more financial resources and personnel into nanoscience are likely to be able to sustain more research foci, including more unusual ones, and to produce higher-impact science. Unfortunately, cross-country, over-time measures of these variables are too sparse and crude for us to examine this issue statistically. Nevertheless, we have shown that the diversification-impact relationship is robust to corrections for countries’ shares of nanoscience publications over time, which should capture the effects of these changing resource commitments, and to different weighting and sample-selection schemes.

Our further contributions are as follows. We show that research in nanotechnology in both Incumbent and Newcomer nations has clearly picked up in recent years, as their governments have poured resources into the endeavor. Amongst the Incumbents, the EU maintained a lead in research quality, despite seeing a decline in its share of publications. This may be due to the synergies resulting from more centralized, coordinated, and focused efforts by the European Commission and its member states. Russia has yet to realize a return on its recent huge investment in nanotechnology. Japan seems to hold on to its strength in quality, although its relative decline in quantity is quite significant. The US has lost relative influence not only because of quantity effects, but also because of declining relative citation rates.

Conversely, the Newcomers (save Brazil) have increased their research output faster than the Incumbents, and some of them have also gained on the Incumbents in research impact. China, Singapore, and Taiwan have gained in both quantity and impact, while India and South Korea have primarily advanced in terms of quantity. These advances reflect the Newcomers’ emphasis on nanotechnology, which followed the lead of the US NNI and is visible in the spending trends we document.

What emerges from our work, then, is a picture of a global nano-scientific playing field that is becoming more uniform, and is doing so along several dimensions. Newcomer scientists are converging on the Incumbents not only in the amount and impact of their research, but also in the composition and diversity of their research portfolios.

Research production and impact are just one piece of the puzzle. Governments are typically interested in outcomes further downstream. Future studies will therefore need to incorporate other measures of technological success and better understand their relationship to scientific diversification.

Footnotes
1

Suggesting that the case for diversification may well vary with a country’s stage of development, Imbs and Wacziarg (2003) show that poorer and richer countries tend to have lower levels of industrial diversity than modestly rich countries. Similar forces could yield similar effects for scientific diversification.

 
2

Indeed, despite significant efforts, we have been unable to locate any studies of this relationship at the national level. At the firm level, Lee et al. (2012) find that firms with more specialized research portfolios filed more high-impact patents, while Matusik and Fitza (2012) show that venture capital firms that were either highly specialized or highly diversified were more successful than moderately diversified ones at taking the firms they invested in public.

 
3

In keeping with previous literature, we define the developmental state simply as a government that, motivated by desire for economic advancement, intervenes in industrial affairs (Woo-Cumings 1999). For our purposes, technological affairs are an aspect of industrial ones.

 
4

Iran could certainly be included in this list, given the dramatic rise in its nanotechnology research output, which surpassed that of Brazil by 2009. However, for most of the sample period, there are too few papers from Iran to analyze its patterns of specialization.

 
5

To the extent that research is cheaper in lower-income countries, the dollar figures provided in this section overstate the research budgets of high- relative to low-income countries. There is no index comparing the cost of scientific research across countries that would permit us to compare these budgets in terms of purchasing power.

 
6

We have re-run the analyses appearing in the next section for k = 8, and the results did not change qualitatively. The only exception is Fig. 8, which involves the Herfindahl-Hirschman Index (HHI) of concentration. The HHI is known to be sensitive to changes in the number of subfields across which concentration is measured.
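For reference, and assuming the standard definition, the HHI over publication shares $s_1,\dots,s_k$ across $k$ subfields is $\mathrm{HHI} = \sum_{i=1}^{k} s_i^2$, which ranges from $1/k$ (output spread evenly) to 1 (all output in a single subfield); because the lower bound moves with $k$, measured concentration shifts mechanically when the number of subfields changes.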

 
7

The citation rate for a subfield in some year is the ratio between the average citation rate of papers published in that subfield that year and the average citation rate of all nanotechnology papers published that year.
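In symbols (notation ours), this relative citation rate for subfield $s$ in year $t$ is $r_{s,t} = \bar{c}_{s,t}/\bar{c}_{t}$, where $\bar{c}_{s,t}$ is the mean number of citations to papers published in subfield $s$ in year $t$ and $\bar{c}_{t}$ is the mean over all nanotechnology papers published in year $t$.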

 
8

Using a share-of-citations measure allows for the fact that more recent publications have had less time to be cited and compete for recognition in a larger pool of publications. We emphasize that this is a measure of relative influence, increases in which only tell us that citations to papers involving country c have grown more rapidly than citations to all papers in the field.
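One way to write the share-of-citations measure described here, in notation of our own choosing, is $S_{c,t} = C_{c,t}/C_{t}$, where $C_{c,t}$ counts citations to papers involving country $c$ published in year $t$ and $C_{t}$ counts citations to all nanotechnology papers published that year; growth in $S_{c,t}$ therefore only indicates that country $c$’s papers are gaining citations faster than the field as a whole.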

 
9

One obvious drawback to this simple share-of-citations measure is that it gives greater weight to internationally coauthored papers (Aksnes 2006). Shares of citations and publications recalculated to attribute papers to countries in inverse proportion to the number of countries authoring each paper yield the same qualitative findings as the figures and table presented here, indicating that our results are not driven by an over-counting of internationally coauthored papers. Fractional attribution of internationally coauthored papers yields only two minor changes: (1) the growth over time in China’s RCI is slightly more pronounced, because the citation rates of Chinese-only papers have converged on those of papers involving authors from both China and other countries; (2) Russia’s RCI drops with fractional attribution, as its internationally collaborative papers are more highly cited than its Russian-only papers.

 
10

Some countries may tend to cite their own work, and lower-income countries are more likely to cite work in lower ranked journals (Didegah et al. 2012). There is little we can do to correct our estimates of influence and impact for this with the data available, but we will check that our estimates of the relationship between diversification and impact are plausibly robust to such problems.

 
11

We tried using a re-based impact factor (RBI; see Sect. 3.3) as a proxy for impact. The RBI normalizes citation rates in each subfield by subfield norms and then takes the country’s publication-share-weighted average of these normalized citation rates across subfields (King 2004). Rebasing has imperceptible effects on Fig. 5, indicating that the results in this section are not sensitive to the weighting of citations in different subfields.
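Read literally, this amounts to something like $\mathrm{RBI}_{c,t} = \sum_{s} w_{c,s,t}\,(\bar{c}_{c,s,t}/\bar{c}_{s,t})$, where $w_{c,s,t}$ is the share of country $c$’s publications falling in subfield $s$ in year $t$, $\bar{c}_{c,s,t}$ is the mean citation count of the country’s papers in that subfield and year, and $\bar{c}_{s,t}$ is the subfield-year norm; the notation is ours and is intended only to make the verbal definition concrete.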

 
12

Eigenfactor scores are as reported by Web of Science, 2010 Journal Rankings. The cutoff presented here for prestigious journals was provided by a nanotechnologist at a top-ranked engineering department, whom we asked to identify the lowest-ranked journal to which they would support their graduate students submitting papers. Lowering the bar does not alter our results qualitatively.

 
13

Four out of the five concentration measures (C4, C8, C20 and the Gini Coefficient) studied by Van Zeebroeck et al. (2006) are not suitable when the number of subfields is small (Khramova et al. 2013).

 
14

Some countries do rank differently by CV and by Nonconformity. As noted, this is unsurprising given that coefficients of variation are very sensitive to changes in the country-mean value of RLA and RLAM.

 
15

We have chosen not to present estimates using Top5 because the measure is censored at zero. Estimates from Tobit models with country fixed effects are not consistent, due to the incidental parameters problem. We have, however, analyzed the behavior of Top5 using linear regressions with fixed effects (ignoring censoring), and the results are qualitatively identical to those using RBI and RCI, although the coefficient on the diversification measure is sometimes less statistically significant. Tobit models without country fixed effects yield negative and highly significant coefficients on the diversification measure.

 
16

Brazil provides an illustrative example. While Brazil is now developing a research mix that resembles that of the rest of the world, it had, until at least 2005, focused on nanobio and alternative-energy applications, not only within nanotechnology but also across all of science and technology (Fink et al. 2012). As its nanotechnology research mix has become more concentrated, its research impact has declined; it is the only one of our six Newcomer nations for which this is the case.

 

Acknowledgments

We are grateful to Richard Appelbaum, Matthew Gebbie, Shirley Han, Barbara Harthorn, Luciano Kay, Sumita Pennathur and Galen Stocking for support and for useful discussions of our results. Rachael Drew, Quinn McCreight, Caitlin Vejby and Chris Wegemer provided invaluable research assistance. This material is based upon work supported by the National Science Foundation under Grant No. SES 0531184. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. This work was conducted under the auspices of the University of California at Santa Barbara’s Center for Nanotechnology in Society (www.cns.ucsb.edu).

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2016

Authors and Affiliations

  • Patrick Herron (1)
  • Aashish Mehta (2)
  • Cong Cao (3)
  • Timothy Lenoir (4)
  1. Information Science + Studies, Duke University, Durham, USA
  2. Global Studies / Center for Nanotechnology in Society, University of California-Santa Barbara, Santa Barbara, USA
  3. School of Contemporary Chinese Studies, University of Nottingham Ningbo, Ningbo, China
  4. Science & Technology Studies, University of California-Davis, Davis, USA
