1 Introduction

Universities play an important role in the creation and accumulation of knowledge. They drive technological innovation and create positive economic spillovers (Jaffe 1989; Kantor and Whalley 2014). Demand for more productive forms of university governance is therefore perennial, in every country.

In Japan, national universities’ political independence and financial autonomy have been recognized as essential for maintaining academic freedom and effective knowledge creation since the Empire of Japan established the Imperial Universities (Footnote 1) as vital national institutions in the late 19th century. Since then, the privatization of national universities has been repeatedly discussed (Amano 2008). In 2004, all Japanese national universities were partially privatized (incorporated, or Dokuritsu Gyosei Hojinka in Japanese) under the National University Corporation Law and acquired the status of national university corporations (Footnote 2).

Therefore, since university reform is economically policy-relevant and historically important for Japan, it is useful to provide a quantitative assessment of whether the 2004 partial privatization had favorable effects. However, the literature contains no concerted empirical attempt to investigate the impact of this reform.

The previous literature offers mixed predictions of the direction of this reform’s impact on national universities’ research performance. Aghion et al. (2010) argue that autonomy and competition improve a university’s performance. They also find a positive correlation between a university-level autonomy and competition index and performance. As I discuss qualitatively, at a crude level, in the next section, based on Aghion et al. (2010)’s criteria, partial privatization might have strengthened at least some aspects of national universities’ autonomy. Therefore, Aghion et al. (2010) suggest that this reform could positively impact national universities’ performance. However, McCormack et al. (2014) predict no significant impact. They highlight a positive correlation between performance and university department-level management scores based on Bloom and Van Reenen (2007)’s interview tool, which considers each department’s use of incentives, target setting, and performance monitoring processes in detail. Since this reform operated at the university level and did not consider each department’s unique characteristics and incentives, it might not have directly changed the department-level management index based on the criteria used in McCormack et al. (2014).

This study quantitatively evaluates the impact of partial privatization on national university research outcomes, both aggregated and disaggregated across research fields. Japanese private universities, which were not targeted by any such systematic reform over the same period, can be viewed as counterfactuals, providing within-country variation in governance and managerial change. Therefore, an identification strategy that uses private universities as a control group can be employed. Another advantage of this event is that the reform can plausibly be considered an exogenous structural change for national universities. While the privatization of national universities was discussed regularly as a distinct topic, Amano (2008) notes that it is undeniable that the 2004 reform was a by-product of the Koizumi administration’s structural reform, which primarily aimed to privatize public organizations, such as the postal service.

The difference-in-differences specifications suggest that partial privatization led to a deterioration of 16–18 percentage points in the quality and quantity of national universities’ research output, as constructed from publication records in the fields of medical science, biology, chemistry, physics, engineering, and economics from 1999 to 2009. The fixed-effect specification, which estimates the effect separately for these six fields, suggests that only medical science is significantly negatively affected. To address identification challenges (i.e., whether this analysis truly captures the impact of partial privatization), I conduct several robustness checks and find that the adverse effects are attributable to the partial privatization.

This study’s primary contribution to the literature is that it provides the first evaluation of this reform using an identification strategy based on a quasi-experiment. Since this was the most significant reform in higher education policy in post-World War II Japan, quantitatively delineating its consequences for research activity is particularly important for Japanese science policymakers and researchers.

This study’s secondary contribution is that it analyzes the relationship between university governance/managerial style change and research performance disaggregated across research fields. A few recent studies, such as Aghion et al. (2010) and McCormack et al. (2014), find positive cross-sectional correlations between university or department-level management/governance and performance in the EU and the UK. These studies do not explore heterogeneity in the correlation among research fields. This study uncovers the heterogeneous impacts across research fields and departments. Therefore, this study provides insight into the interaction between management/governance style change and each research field’s unique characteristics and associated incentives, particularly in medical science.

The remainder of this paper is organized as follows. Section 2 provides background on the university system in Japan and provides details on the partial privatization of national universities. Section 3 presents the data construction process and identification strategies. Section 4 presents the results of the main analysis and the robustness checks. Section 5 considers potential explanations of the negative effects identified for medical science research. The final section concludes.

2 Institutional background

2.1 The partial privatization (Dokuritsu Gyosei Hojinka) of Japanese national universities

In Japan, universities are divided into the following three categories: national universities established by the Japanese Government, private universities established by educational corporations, and public universities set up by local public entities or public university corporations. As of 2009, which is the end of this study’s period of analysis, there were 86 national universities, 599 private universities, and 95 public universities in Japan.

In 2004, all national universities were partially privatized (incorporated, or Dokuritsu Gyosei Hojinka in Japanese) under the National University Corporation Law and acquired the status of national university corporations. The Japanese Government and the Ministry of Education, Culture, Sports, Science and Technology (MEXT), a public administrative organization, took the initiative in enacting this reform, which consists of the following aspects (Yamamoto 2011; MEXT’s webpage: http://www.mext.go.jp/a_menu/koutou/houjin/houjin.htm):

  • Identifying the missions and goals of universities.

  • Defining responsibilities and granting autonomy in management through the adoption of business management tools.

  • Introducing a competitive mechanism among universities and placing greater emphasis on student needs and the business world.

Consequently, national universities began to be managed and financed more autonomously, using revenue obtained from tuition, entrance fees, competitive funds, hospital revenues, other sources of income, and block grant budgets allocated as a lump sum by the government (“operational support funds”). However, to date, the government (MEXT) has continued to exert substantial influence over the overall national university system despite the delegation of authority for university operations. Therefore, it is more appropriate to call this reform a type of “partial” privatization, as in Gupta (2005) (Footnote 3).

In addition to managerial changes, operational support funds, which had been the primary source of national university budgets, were required to decrease at an annual rate of 1%. The stated rationale for this staggered budget reduction was that increasing managerial efficiency in the allocation and utilization of resources was necessary to financially detach national universities from the government (Center for National University Finance and Management 2004).

Fig. 1 Average Operational Support Fund (Public Funding) and Hospital Revenue in National Universities. Note: The plots in the figure are created from the balance sheet of each national university from the Official Government Journal of Japan (Kanpo, http://kanpou.npb.go.jp). Accounting standards based on corporate accounting principles were introduced in conjunction with the partial privatization of national universities in 2004, making it impossible to plot this information from balance sheets for the period before 2004.

Perhaps more importantly, this substantial budget reduction has incentivized universities to implement income-generating activities and has promoted increased competition between universities in fund acquisition. Faculty members are increasingly encouraged to seek external research grants, such as Grants-in-Aid for Scientific Research (Kaken-hi) and other types of competitive funds, to compensate for the loss in operational support funds. Thus, the reform can generally be considered a shift from public funding to university-generated revenue and external research grants. In conjunction with these structural changes implemented in 2004, accounting standards based on corporate accounting principles were also introduced; unfortunately, these make it difficult to compare balance sheets before and after 2004. However, Table 1, reproduced from Urata (2010), shows that the ratio of the total amount of operational support funds in 2008 to that of 2005 is 0.956, and that operational support funds continued to provide the majority of universities’ budgets immediately after the reform. By contrast, all the ratios of competitive-type funds in 2008 to those of 2005 are above 1.1.

Table 1 Total amount from main sources of revenue for national university corporations (million yen)

I now briefly summarize the evaluation of partial privatization conducted by the Center for National University Finance and Management, an independent administrative corporation established to promote national universities’ education and research activities. To evaluate the perceived changes brought by the 2004 reform, the corporation conducted a series of interviews with all national university presidents in 2004, 2006, and 2009. In 2009, 84% of the presidents considered that the reform had a positive impact on the differentiation of each university’s identity, 80% reported positive impacts on organizational improvement, 76% on autonomy, 70.6% on social contributions, and 66.6% on competitiveness. However, there is controversy concerning the reform’s effects on research activities. In the 2009 interviews, 50.4% of faculty heads responded that partial privatization’s effect had been negative, and only 23.2% reported a positive effect.

2.2 What impacts can be predicted from previous literature?

To relate the reform to the literature, I compare the changes it induced with Aghion et al. (2010)’s autonomy and competition index. Unfortunately, this study cannot construct the same index for each university with the data available both before and after the partial privatization of national universities. Thus, it is impossible to quantify exactly the extent of change induced by the reform. Aghion et al. (2010) measure the autonomy and competition index of universities based on the following eight components:

  1. a low share of public funding;

  2. a large share of research grants;

  3. no government approval of university budget;

  4. proprietary buildings;

  5. freedom to differentiate wages;

  6. freedom to select students;

  7. control over professor appointments; and

  8. freedom to set the curriculum.

Based on these criteria, the reform might have intensified some aspects of autonomy and competitive status on average. As shown in Table 1, and as noted in the previous subsection, the partial privatization lowered the average share of core government funds from approximately 44% in 2005 to 39% in 2008, raised the average share of competitive-type funds from approximately 14% in 2005 to 17% in 2008, weakened the government’s role in approving university budgets, enabled national universities to own their buildings as assets, and gave them freedom to differentiate wages. These changes correspond to five of the eight components of the autonomy and competition criteria and are concentrated in increasing autonomous functions.

Aghion et al. (2010) argue that autonomy and competition, in combination, improve a university’s performance. A university’s production function is almost impossible for outsiders, including government policymakers, to understand. Therefore, a more productive governance style could be a model with considerable university autonomy and a competitive environment rather than centralized government control. Consistent with this argument, Aghion et al. (2010) find a positive correlation between a university-level autonomy and competition index and performance. These arguments and findings suggest that partial privatization may positively impact national universities’ performance.

By contrast, McCormack et al. (2014) predict no significant impact. They highlight a positive correlation between performance and university department-level management scores based on Bloom and Van Reenen (2007)’s interview tool, which considers each department’s use of incentives, target setting, and performance monitoring processes in detail (Footnote 4). The survey focuses on operational management practices, consisting of 17 indicators. The interview asks, for instance, whether the organization actively pursues long-term goals with appropriate short-term targets; whether targets are clear and understandable; whether the organization has a clear employee value proposition; whether promotion is performance-based; and whether good performance is rewarded proportionately. Since this reform’s main target was not individual departments, it might not have directly changed the department-level management index based on the criteria used in McCormack et al. (2014).

Alternatively, it is possible that the reform indirectly distorted departments’ (long-term) targets and incentives, which are considered in McCormack et al. (2014), by incentivizing universities to implement income-generating activities, as discussed in the previous subsection. For instance, in a department that can pursue both revenue-generating and research activities (e.g., a university hospital, which can increase revenue by providing more clinical services), increased revenue-generating activity could crowd out research activity. In this case, the reform would negatively impact research performance in such a field.

3 Data and estimation strategy

3.1 Data

To develop my dataset, I create a list of Japanese national and private universities from the Grants-in-Aid for Scientific Research (KAKEN) database, which records all research projects that received Grants-in-Aid for Scientific Research (KAKENHI), the largest and most representative competitive research fund in Japan. The KAKEN database includes affiliation data, which enables me to identify universities that received grants. I restrict my sample to universities that appear in these affiliation data, because the database covers all universities engaged in research activity (Footnote 5). Next, I exclude universities that did not publish during the period 1999–2003, to select universities more actively involved in research and to make private universities comparable to national universities.

Data on research output are based on articles in the “ISI Web of Science” database. This database, provided by Thomson Scientific, indexes papers from a comprehensive set of research journals. From this database, I compute annual publication records for each university.

I use Journal Citation Reports (JCRs), published by Thomson Scientific, to measure the quality of articles. The JCRs report the impact factor (IF)—an annually published index of the frequency with which journal articles are cited—for each journal contained in the Web of Science. I select only journals with IF information for the period 1999–2009. Letting J be the set of all journals, I take the average IF for each journal \(j \in J\) during the period 1999 to 2009 and create “\(\text {average\ IF}_{j}\)” (Footnote 6). I weight each article published by the universities in my sample by the corresponding journal’s average IF.

To create each university’s research outcome, for each journal j, I let \(A_{ijt}\) be the set of all articles associated with university i in year t and use \(\phi (a_{ijt})\) to denote the number of authors of an article \(a_{ijt} \in A_{ijt}\); \(A_{ijt}\) includes information on multiple authors within the same university. For each university i in year t, I compute the research outcome as follows (Footnote 7):

$$\begin{aligned} \text {Research}\ \text {Output}_{it} \equiv \sum _{j \in J} \sum _{a_{ijt} \in A_{ijt}} \text {average IF}_{j} \frac{1}{\phi \left( a_{ijt}\right) }. \end{aligned}$$
(1)
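As a concrete illustration, the outcome in Eq. (1) can be computed from article-level records along the following lines (a minimal sketch: the record layout, journal names, and numbers are invented for illustration and are not the paper’s actual code or data):

```python
from collections import defaultdict

def research_output(articles, avg_if):
    """Eq. (1): for each (university, year), sum each article's
    journal-average impact factor divided by its author count."""
    out = defaultdict(float)
    for a in articles:
        out[(a["univ"], a["year"])] += avg_if[a["journal"]] / a["n_authors"]
    return dict(out)

# Two invented articles for a hypothetical university "A" in 2004.
articles = [
    {"univ": "A", "year": 2004, "journal": "J1", "n_authors": 2},
    {"univ": "A", "year": 2004, "journal": "J2", "n_authors": 4},
]
avg_if = {"J1": 3.0, "J2": 2.0}  # average IF per journal over 1999-2009
print(research_output(articles, avg_if))  # {('A', 2004): 2.0}
```

The division by \(\phi (a_{ijt})\) splits credit for multi-authored articles, so the two articles above contribute 3.0/2 + 2.0/4 = 2.0.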

I construct an aggregated research output indicator and disaggregated indicators for medical science, biology, chemistry, physics, engineering, and economics. I select these fields because together they cover the broadest possible range of representative research fields (Footnotes 8 and 9). I follow the journal categorization used in the Web of Science.

Table 2 reports descriptive statistics for my sample of universities in Japan. The first row reports the average of Eq. (1) for all universities, national universities, and private universities. National universities tend to produce more research output than private universities, indicating that national universities are more research intensive. Therefore, to make national and private universities comparable, this study uses the rate of change from the base year of 1999. In other words, the dependent variable is defined as \((\text {Research}\ \text {Output}_{it} - \text {Research}\ \text {Output}_{i1999})/\text {Research}\ \text {Output}_{i1999}\) for each university i in year t.
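The dependent variable is thus simply the growth rate of research output relative to the 1999 base; a trivial sketch (numbers invented):

```python
def outcome(research_output_it, research_output_1999):
    """Dependent variable: rate of change of research output
    relative to the 1999 base year."""
    return (research_output_it - research_output_1999) / research_output_1999

# A university whose output fell from 100.0 in 1999 to 75.0 in year t:
print(outcome(75.0, 100.0))  # -0.25
```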

Table 2 Descriptive statistics: means and standard deviations

3.2 Estimation framework

To identify the impact of partial privatization on national university research outcomes, I use difference-in-differences estimators in a linear model. This fixed-effect model can be specified as

$$\begin{aligned} \frac{\text {Research}\ \text {Output}_{it} - \text {Research}\ \text {Output}_{i1999}}{\text {Research}\ \text {Output}_{i1999}} = \beta \left( \text {National}_{i} \times \text {post2004}_{t}\right) + \mathbf {X}_{it}\gamma +\lambda _{t} + \theta _{i} + \varepsilon _{it}, \end{aligned}$$
(2)

where i indicates the university, t indicates the year, \(\text {Research}\ \text {Output}_{it}\) is the output variable computed in Section 3.1, \(\text {National}_{i} \times \text {post2004}_{t}\) is a dummy variable that takes the value of 1 if university i is national and the observation is from 2004 or later, and 0 otherwise, \(\lambda _{t}\) is a year dummy capturing effects common to Japanese universities, \(\theta _{i}\) represents all time-invariant university-specific characteristics of university i, \(\mathbf {X}_{it}\) incorporates a full set of regional dummy variables interacted with year dummies and a fourth-order polynomial in university age (defined as years since establishment), and \(\varepsilon _{it}\) represents university-specific temporal shocks (Footnote 10).

The estimation framework assumes that national and private universities would have followed similar research trends in the absence of partial privatization. Consequently, \(\beta\) detects whether the reform impacted the research outcomes of national universities (Footnote 11). Moreover, to account for potential heteroskedasticity and serial correlation, standard errors are clustered at the university level.
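Setting aside the covariates \(\mathbf {X}_{it}\) and the fixed effects, the coefficient \(\beta\) in Eq. (2) reduces to the familiar two-by-two comparison of group-period means. A minimal sketch with invented numbers (not the paper’s estimates):

```python
def did_estimate(y, national, post):
    """Two-by-two difference-in-differences:
    (treated post - treated pre) - (control post - control pre)."""
    def mean(v):
        return sum(v) / len(v)
    groups = {}
    for yi, n, p in zip(y, national, post):
        groups.setdefault((n, p), []).append(yi)
    return (mean(groups[(1, 1)]) - mean(groups[(1, 0)])) - \
           (mean(groups[(0, 1)]) - mean(groups[(0, 0)]))

# Invented outcomes (rates of change from the base year):
y        = [0.0, -0.25, 0.0, 0.0]
national = [1,    1,    0,   0]   # 1 = national university
post     = [0,    1,    0,   1]   # 1 = observation in or after 2004
print(did_estimate(y, national, post))  # -0.25
```

The full specification additionally absorbs year effects, university fixed effects, and covariates through regression, but the sign and logic of \(\beta\) follow this comparison.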

First, this estimation strategy is adequate because all national universities were partially privatized in 2004 uniformly and without selection. The 2004 reform represented a forced, exogenous event enacted through external pressure on national universities (and particularly on researchers) (Amano 2008). While the privatization of national universities was discussed regularly as a distinct topic, Amano (2008) notes that it is undeniable that the 2004 reform emerged as a by-product of the Koizumi administration’s structural reform, which primarily aimed to privatize public organizations, such as the postal service. This reform enforced large reductions in government budget support, prompted institutional change, and created additional clerical duties for universities to increase accountability. Thus, debates among researchers working for national universities focused on whether the new governance and management system would hinder their research activities. In general, national universities were skeptical about the reform’s proposed benefits and remained opposed to it even after implementation. Second, the specifications eliminate macro shocks common to the research trends of national and private universities through common year effects.

4 Data analysis

4.1 Main results

The fourth row of Table 2 compares outcomes from before and after the partial privatization, with the outcome defined as the rate of change from the base year. Comparing the pre- and post-change averages, I confirm that the overall outcome of national universities decreased by 13.6 percentage points; the difference is statistically significant at the 1% level. Meanwhile, there is a 4.4 percentage point increase among private universities over the same period, a difference that is not statistically significant at conventional levels.

To quantify the differences in the research output of national and private universities after 2004, I estimate difference-in-differences models that control for the effects of unobservable university-specific characteristics. Column (1) of Table 3 reports the baseline estimates for the basic specification in Eq. (2) with standard errors clustered at the university level. The coefficient on the interaction \(\text {National}_{i} \times \text {post2004}_{t}\) indicates that research output was 16.3 percentage points lower after 2004 for national universities, which experienced partial privatization. I observe similarly sized adverse effects when the sample is restricted to COE universities (Footnote 12) in Column (2) (a 16.2 percentage point decrease) and non-COE universities in Column (3) (an 18.1 percentage point decrease). Since the number of clusters is small for the COE universities’ results, I also report p values in square brackets, calculated by clustering-robust wild bootstrap standard errors, which have better finite-sample properties. These overall significant negative effects are not consistent with the positive effect predicted by Aghion et al. (2010).

To explore the heterogeneity of the impact across departments, I estimate the effects of partial privatization by research field. Using the research outcome for each research field, Column (4) of Table 3 estimates a fixed-effect model with a treatment dummy interacted with dummies for each research field—medical science, biology, chemistry, physics, engineering, and economics. The specification is

$$\begin{aligned}&\frac{\text {Research}\ \text {Output}_{ift} - \text {Research}\ \text {Output}_{if1999}}{\text {Research}\ \text {Output}_{if1999}} \nonumber \\&\quad = \beta _{m}\left( \text {National}_{i} \times \text {post2004}_{t} \times \text {MedicalSci}_{if}\right) \nonumber \\&\qquad + \beta _{b}\left( \text {National}_{i} \times \text {post2004}_{t} \times \text {Biology}_{if}\right) \nonumber \\&\qquad + \beta _{c}\left( \text {National}_{i} \times \text {post2004}_{t} \times \text {Chemistry}_{if}\right) \nonumber \\&\qquad + \beta _{p}\left( \text {National}_{i} \times \text {post2004}_{t} \times \text {Physics}_{if}\right) \nonumber \\&\qquad + \beta _{engi}\left( \text {National}_{i} \times \text {post2004}_{t} \times \text {Engineering}_{if}\right) \nonumber \\&\qquad + \beta _{econ}\left( \text {National}_{i} \times \text {post2004}_{t} \times \text {Economics}_{if}\right) \nonumber \\&\qquad + \mathbf {X}_{it}\gamma +\lambda _{t} + \theta _{if} + \varepsilon _{ift}, \end{aligned}$$
(3)

where f indicates the research field and \(\text {MedicalSci}_{if}, \text {Biology}_{if}, \text {Chemistry}_{if}, \text {Physics}_{if}, \text {Engineering}_{if}\), and \(\text {Economics}_{if}\) are corresponding dummy variables for each research field f.
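The treatment regressors in Eq. (3) are triple interactions of the national dummy, the post-2004 dummy, and the six field dummies; one row of the design matrix could be built as follows (an illustrative sketch, not the paper’s code):

```python
FIELDS = ["MedicalSci", "Biology", "Chemistry", "Physics",
          "Engineering", "Economics"]

def treatment_row(national, post2004, field):
    """One observation's National x post2004 x field-dummy
    regressors for Eq. (3), one column per research field."""
    return [national * post2004 * int(field == f) for f in FIELDS]

# A national-university observation in medical science after 2004
# switches on only the medical-science treatment column:
print(treatment_row(1, 1, "MedicalSci"))  # [1, 0, 0, 0, 0, 0]
```

Each \(\beta\) in Eq. (3) is then the coefficient on one of these six columns, so the treatment effect is estimated separately by field within a single regression.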

The results indicate that only medical science is significantly negatively affected by the reform, regardless of whether COE universities (Column (5) of Table 3) or non-COE universities (Column (6)) are used (Footnote 13). The weaker or null effects in biology, chemistry, physics, engineering, and economics are consistent with the null effect predicted by McCormack et al. (2014).

Since these results indicate that the main driver of the overall negative effect in Columns (1)–(3) of Table 3 is the deterioration in medical science research outcomes, I estimate the difference-in-differences model for the medical science field alone. In Columns (7)–(9) of Table 3, I estimate the model only for university hospitals, using the research outcome for the field of medical science. The impacts are significantly negative.

Table 3 The effect of partial privatization on national universities (all fields combined, department level, and medical science only)

However, recognizing the long-term nature of institutional transition and the tapered reductions in the operational support fund in this case, it is plausible that the effect on research was also gradual. To provide a more detailed assessment of the evolution of the impact, I also estimate a second fixed-effect specification:

$$\begin{aligned} \frac{\text {Research}\ \text {Output}_{it}- \text {Research}\ \text {Output}_{i1999}}{\text {Research}\ \text {Output}_{i1999}} = \sum _{t=2004}^{2009} \beta _{t}\left( \text {national}_{i} \times \lambda _{t}\right) + \mathbf {X}_{it}\gamma + \lambda _{t} + \theta _{i} + \varepsilon _{it}, \end{aligned}$$
(4)

where \(\text {national}_{i}\) is a dummy variable that takes the value of 1 if university i is a national university, and 0 otherwise. Each coefficient \(\beta _{t}\) captures the effect for a specific year after 2004.
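Eq. (4) replaces the single post-2004 dummy with one national-by-year interaction per post-reform year, so each \(\beta _{t}\) traces out the effect year by year. A sketch of the corresponding regressors (illustrative only):

```python
def lead_lag_row(national, year, first=2004, last=2009):
    """National x year-dummy interactions for Eq. (4),
    one column per year from `first` to `last`."""
    return [national * int(year == t) for t in range(first, last + 1)]

# A national-university observation in 2006 switches on the 2006 column:
print(lead_lag_row(1, 2006))  # [0, 0, 1, 0, 0, 0]
```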

Table 4 provides the results for Eq. (4), which considers annual changes in the effects experienced after the partial privatization of the national universities, recognizing different degrees of exposure to institutional change. The columns report the estimated coefficients on the interactions between the national university dummy and time dummies for the period 2004–2009, for all university research outcomes in Columns (1)–(3) and for medical science only in Columns (4)–(6). In each column, the effect is significantly negative throughout the study period, supporting the conclusion that the adverse effect unfolded rather monotonically.

Table 4 The dynamic effect of partial privatization on national universities

4.2 Robustness check

To validate my results, it is necessary to check whether they indeed capture the impact of partial privatization. Therefore, I perform four robustness checks: a placebo test, a full leads-and-lags test, a test of the effect of the new medical training system introduced in 2004 on the number of new researchers, and a test of the effect of that training system on medical colleges.

4.2.1 Robustness check 1: placebo test

To assess the robustness of my results, I first check the extent to which the main results capture trends that existed before the reform.

The difference-in-differences specification in Eq. (2) depends on the assumption of parallel trends in the absence of partial privatization. Although the assumption is not directly testable, its plausibility can be assessed by conducting a placebo test. Under the parallel trend assumption during periods \(t \in \{t_{-1},t_{0},t_{1}\}\), where \(t_{-1}< t_{0} < t_{1}\), \(t_{0}\) indicates a pre-treatment period, \(t_{-1}\) indicates a period before \(t_{0}\), \(t_{1}\) indicates a post-treatment period, and \(Y_{t}\) indicates the outcome at period t, the following also holds:

$$\begin{aligned} E\left[ Y_{t_{0}} - Y_{t_{-1}} | \text {National}=1\right] - E\left[ Y_{t_{0}} - Y_{t_{-1}} | \text {National}=0\right] = 0. \end{aligned}$$
(5)

If I apply the same specification as Eq. (2) to the periods \(t= t_{-1}\) and \(t = t_{0}\), a placebo treatment effect can be estimated whose expected magnitude is close to 0. To implement this test, I reproduce the estimations of Table 3 with only pre-2004 data. I divide the pre-2004 data into two approximately equal datasets: observations before and during 2001 (or before and during 2002) and observations after 2001 (or after 2002). I then proceed as in Table 3, using a post-2001 dummy (or post-2002 dummy) in place of the post-2004 dummy.
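The logic of the placebo test can be sketched as follows: restrict the sample to pre-2004 observations, treat a falsified cutoff year as the reform date, and recompute the two-by-two difference, which should be near 0 under parallel trends. All numbers below are invented:

```python
def placebo_did(y, national, year, cutoff=2001):
    """Eq. (5): difference-in-differences on pre-reform data only,
    with a falsified post dummy (year > cutoff)."""
    def mean(v):
        return sum(v) / len(v)
    groups = {}
    for yi, n, t in zip(y, national, year):
        if t >= 2004:          # drop actual post-reform observations
            continue
        groups.setdefault((n, t > cutoff), []).append(yi)
    return (mean(groups[(1, True)]) - mean(groups[(1, False)])) - \
           (mean(groups[(0, True)]) - mean(groups[(0, False)]))

# Parallel pre-trends (both groups grow by 0.25): placebo effect is 0.
y        = [0.0,  0.25, 0.5,  0.75]
national = [1,    1,    0,    0]
year     = [2000, 2003, 2000, 2003]
print(placebo_did(y, national, year))  # 0.0
```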

Table 5 shows the results of the placebo tests. Columns (1)–(3) report the placebo effects for the entire research outcome using a post-2001 dummy variable, and Columns (4)–(6) show the placebo effects for medical science only. Columns (7)–(12) show the placebo effects using a post-2002 dummy variable. Columns (13)–(18) use outcomes from 1999, 2000, 2002, and 2003 to divide the sample periods equally and reproduce the placebo effects with the post-2002 dummy variable. The coefficients are insignificant and smaller than those presented in Table 3, suggesting that, at least before 2004, trends in research output did not differ significantly between national and private universities.

Table 5 Placebo test

4.2.2 Robustness check 2: full leads and lags

Although partial privatization was an exogenous event for national universities, a key concern is that it was anticipated by universities and that some of the essential structural changes were partially implemented before 2004, leaving researchers in national universities already prepared (Amano 2008). In particular, the effect on aggregated research outcomes appears immediately after 2004 [see Column (1) of Table 4]. This poses another threat to proper identification.

To address this issue, I set 2000, 2001, 2002, and 2003 as alternative, falsified enforcement years and reproduce the estimates of Table 4 under four specifications. Table 6 shows the results for the falsified enforcement years, for all research outcomes and for medical science specifically.

Table 6 Falsification enforcement year (leads)

For the aggregated research outcomes, the coefficients on the leads in Columns (1)–(4) of Table 6, which are \(\text {National} \times \text {year2000}\) to \(\text {National} \times \text {year2003}\), are statistically insignificant and smaller than the coefficients on the lags, providing no evidence of anticipatory or differential processes within national universities. The lag trend shows that the adverse impact is statistically significant after 2004 and remains relatively constant across years.

For medical science, the coefficients of the leads in Columns (5)–(8) of Table 6 are close to 0 and statistically insignificant, providing no evidence of an anticipatory or differential process within universities about to be partially privatized. Table 4 shows how the coefficients decrease notably 2 years after the enforcement and then remain stable.

Therefore, these results support the assumption that the effect appears after 2004. Furthermore, the results indicate that other pre-existing trends do not contaminate the effects.

4.2.3 Robustness check 3: the effect of the new medical training system in 2004 on the number of new researchers

For the field of medical science specifically, it is possible that I have captured the impact of the New Postgraduate Medical Education Program rather than the effect of partial privatization.

All university hospitals in Japan experienced a change in the medical training system—the New Postgraduate Medical Education Program—in 2004. This could have reduced the number of medical students with whom university hospitals collaborated by obliging them to work as clinicians for 2 years. Therefore, this change could have negatively affected both national and private university hospitals’ research activities, because it had the potential to decrease the labor force engaged in research activities.

Thus far, my approach has avoided similar potential threats to identification because both national and private universities faced the same reform, so such shocks are absorbed as common macro-level events via year effects; heterogeneity in regional trends is further controlled for through region \(\times\) year dummy variables and university age. However, if the decrease in the labor force due to the new training system was more considerable in national universities (which are more research-oriented and, hence, presumably more reliant on young researchers) than in private universities, I cannot rule out the possibility of capturing its effects instead of those of partial privatization.

If the impact of the New Postgraduate Medical Education Program was more pronounced at national universities, it would have reduced the number of new medical researchers at national universities since 2004 compared to private universities, because the new system might have taken away opportunities to become new researchers.

To examine this possibility, it is necessary to measure the number of new researchers in each year of the study period. However, the Web of Science dataset identifies research achievements only at the university level at best and does not allow me to reconstruct a complete career-long publication record for each researcher. Therefore, I construct an approximate measure as follows.

Letting \(\text {Authors}_{it}\) be a set of all the unique authors appearing in any paper published in association with a university hospital i in year t, I approximate the number of new researchers as follows:

$$\begin{aligned} \text {NewAuthors}_{it}^{T} \equiv n\left( \text {Authors}_{it} \setminus \cup _{s=1}^{T} \text {Authors}_{it-s}\right) , \end{aligned}$$
(6)

where \(n(\cdot )\) counts the number of elements in each set.

In other words, I take the set difference between the author set \(\text {Authors}_{it}\) and the union \(\cup _{s=1}^{T} \text {Authors}_{it-s}\) as the set of new authors and count its elements. This metric gives the number of authors in year t who did not appear in the author lists of the previous T years.

\(\text {NewAuthors}_{it}^{T}\) captures: (i) the number of new researchers joining university hospital i’s author list during or after graduate school, (ii) the number of researchers who had worked at another institute, were newly hired by university hospital i in or before year t, and published while affiliated with university hospital i in year t, and (iii) the number of researchers who worked at university hospital i but did not publish from year \(t-T\) to year \(t-1\) and then published in year t.

Therefore, \(\text {NewAuthors}_{it}^{T}\) is an upper bound on the number of genuinely new researchers in (i). For example, a researcher may simply not have published in year \(t-1\) while remaining at the same university i. To reduce the possibility of counting such cases under (iii), I construct \(\text {NewAuthors}_{it}^{T}\) with \(T = 2, 3\), and 4.
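Equation (6) reduces to a set difference per university-year. A minimal sketch, assuming author sets are stored in a dictionary keyed by (university, year); the data structure and names are illustrative, not the paper's actual pipeline:

```python
def new_authors(authors, i, t, T):
    """Count authors of university hospital i in year t who do not
    appear in its author sets for the previous T years (Eq. 6)."""
    past = set()
    for s in range(1, T + 1):
        past |= authors.get((i, t - s), set())
    return len(authors.get((i, t), set()) - past)

# Illustrative data for one hypothetical university hospital
authors = {
    ("u1", 2002): {"A", "B"},
    ("u1", 2003): {"B", "C"},
    ("u1", 2004): {"C", "D", "E"},
}
print(new_authors(authors, "u1", 2004, 2))  # D and E are new -> 2
```

Larger T widens the look-back window, which is how the construction reduces the chance of misclassifying a temporarily non-publishing incumbent (case iii) as a new researcher.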

Table 7 displays the results of difference-in-differences estimates using the rate of change of the number of new authors from the base year of 1999 as dependent variables. Columns (1)–(3) suggest that, from 2004, the number of new researchers did not change significantly at national universities compared to private universities when \(T=2\). These results do not change for \(T=3\) in Columns (4)–(6) and for \(T=4\) in Columns (7)–(9). Therefore, these estimates do not support the hypothesis that the new training system directly reduced the number of new researchers at national university hospitals. Hence, this finding supports the assertion that the adverse effects on medical science research are not due to the new training system.
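The dependent variable, the rate of change in new-author counts relative to the 1999 base year, could be constructed along the following lines. The counts and column names are illustrative, and I take the rate of change as the simple ratio to the 1999 value; the paper's exact transformation may differ:

```python
import pandas as pd

# Illustrative new-author counts for one university hospital
df = pd.DataFrame({
    "univ": ["u1"] * 4,
    "year": [1999, 2000, 2001, 2002],
    "new_authors": [50, 55, 45, 60],
})

# Each university's 1999 count serves as the base year
base = df[df["year"] == 1999].set_index("univ")["new_authors"]

# Rate of change relative to the base year
df["rate_vs_1999"] = df["new_authors"] / df["univ"].map(base)
print(df["rate_vs_1999"].tolist())
```

Normalizing by each university's own 1999 level removes scale differences across hospitals, so the DiD coefficient compares proportional changes rather than raw author counts.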

Table 7 Test of the effects of new postgraduate medical education program on the number of new researchers in national universities

4.2.4 Robustness check 4: the effect of the new medical training system in 2004 on medical colleges

As another check, I test whether the New Postgraduate Medical Education Program impacted research activities through some other route.

For example, if the new clinician training system affected research activity, the strength of the effect might have differed between medical colleges and other institutions, rather than along the national–private distinction. If medical colleges had better medical and training systems before the changes were implemented, then even a negative impact from the new system might feasibly have been weaker there than at other universities. Alternatively, universities with medical schools among their faculties may have more research facilities and research institutes, such that the adverse effects might have been mitigated relative to medical colleges.

Therefore, I conduct a difference-in-differences estimation setting medical colleges as the treatment group (\(\text {Medical} = 1\)) and other universities as the control group (\(\text {Medical} = 0\)), with 2004 as the treatment year. In Columns (1)–(3) of Table 8, the coefficient on \(\text {Medical} \times \text {post} 2004\) is insignificant for medical colleges' research outcomes. Likewise, in Columns (4)–(12) of Table 8, the coefficient on \(\text {Medical} \times \text {post} 2004\) is insignificant for the rate of change in the number of new authors at medical colleges from the base year of 1999. These results indicate that the new training system did not have a disproportionate effect on medical colleges, supporting the claim that this event did not drive the observed changes in research activity.

Table 8 Test of the effects of new postgraduate medical education program on medical colleges’ research outcomes and the number of new researchers

5 Discussion of the possible causal mechanisms of the negative effect on medical science research

This section asks why partial privatization had a negative impact only on medical science research. A possible explanation is that, after partial privatization, national universities shifted toward prioritizing medical services over research activities, with the increased provision of clinical services crowding out research, while the number of new researchers remained constant, as suggested in Section 4.2.3. This hypothesis rests on the following two premises, (a) and (b).

(a) National universities were incentivized to engage in revenue-generating activities to compensate for budgetary losses from reductions in operational support funds.

(b) Although not perfect, partial privatization provided national universities (and their hospitals) with management personnel and structures that are more independent of the government. In the process of defining and positioning themselves as more independent organizations, national universities (hospitals) shifted their management style to focus more on providing medical care than on research activities. Aghion et al. (2010) assume that the less dependent a university is on public funds, the higher its degree of autonomy. Thus, this shift could be seen as encouraged, at least in part, by the associated reduction in national universities' reliance on public funds.

The following two pieces of evidence, (i) and (ii), are at least consistent with these hypotheses.

(i) If (a) and (b) are correct, the reform should have increased national university hospitals' revenue more than that of private university hospitals. To test this, I collect data on university hospital revenue from the balance sheet of each national university. The results in Table 9 show that the mean ratio of national university hospitals' 2009 revenue to their 2005 revenue is approximately 1.275.Footnote 14 In the same sample, the corresponding ratio for private university hospitals is 1.093. The difference between the two groups is statistically significant, as shown in Table 9. The actual increase in national universities' hospital revenue is depicted in Fig. 1.

(ii) University hospitals have a three-pronged mission: research, clinical services, and education. If (a) and (b) served to increase clinical service provision and limit research activities, the time spent providing clinical services should have increased and the time spent conducting research should have decreased. The following survey results are consistent with this prediction. A survey of national university hospitals (see MEXT (2010)) reported that, in 2005, 48.9% of faculties experienced reductions in time spent on research activities, a share that increased to 77.8% in 2008. Furthermore, in 2005, 48% of faculties in university hospitals experienced increases in time spent delivering clinical services, a share that increased to 66.7% in 2008.

However, there are two problems with the evidence in (i) and (ii). First, hospital revenue in (i) can be compared only from 2005 onward, after the university reform, so pre-reform data cannot be analyzed by the same standard. Second, the data in (ii) cannot be disaggregated at the university level. Thus, because data on hospital revenue and on time spent on each activity are unavailable for both national and private universities over the full period, it is not possible to directly test whether time spent providing clinical services increased, whether hospital income increased, or whether time spent on research decreased following the reform.

Table 9 The ratio of university hospital revenue in 2009 versus 2005

Hence, assuming that (a) and (b) are correct, I indirectly test whether the post-reform situation is consistent with these hypotheses by analyzing whether the effects on research outcomes are heterogeneous across national university types. I use the research outcome because it is the only factor available and observable at the university hospital level both before and after the reform. I calculate the ratio of each national university hospital's revenue to the university's total revenue as of 2004, the first year in which revenue and expenditure are documented under the new accounting standards of national universities, which were also introduced by the partial privatization reform.Footnote 15 This ratio is denoted \(\text {HT ratio} (\equiv \text {Hospital Revenue}_{i2004}/ \text {Total\ Revenue}_{i2004})\). It indicates how dependent each university is on hospital revenue and thus captures the degree of heterogeneity in the original management style.Footnote 16

Using these data, I estimate a fixed-effect model in which the treatment dummy after 2004 interacts with the HT ratio. The results are presented in Columns (1)–(3) of Table 10. The negative coefficients are 1.3–3.0 times larger in magnitude than the baseline results in Columns (7)–(9) of Table 3. In other words, the more dependent a national university is on hospital revenue, the larger the negative impact on its research outcomes.

Next, to assess the heterogeneity in the magnitude of the impact, I create a dummy variable that takes the value of 1 if the HT ratio is at or above the dataset's median value, and 0 otherwise, and a complementary dummy variable that takes the value of 1 if the HT ratio is below the median value, and 0 otherwise. I then estimate a fixed-effect model in which the treatment dummy interacts with these two dummy variables. The estimation results are presented in Columns (4)–(6) of Table 10. The coefficients on \(\text {National} \times \text {post 2004} \times \text {above median (HT ratio)}\) are larger in magnitude (more negative) than those on \(\text {National} \times \text {post 2004} \times \text {below median (HT ratio)}\) in Columns (4) and (5). The F tests indicate statistically significant differences between these two coefficients in Column (4) (\(\text {Prob} > \text {F} = 0.094\)) and Column (5) (\(\text {Prob} > \text {F} = 0.009\)), but not in Column (6) (\(\text {Prob} > \text {F} = 0.644\)). Therefore, from Columns (4) and (5), I confirm that universities that are highly dependent on hospital revenue experienced a larger negative impact on research performance, while less dependent universities experienced a smaller one.
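The median split can be sketched as follows (the HT ratios are illustrative values, not actual data):

```python
import pandas as pd

# Illustrative HT ratios for four hypothetical national universities
ht = pd.Series({"u1": 0.45, "u2": 0.30, "u3": 0.20, "u4": 0.10})

# Dummy = 1 if at or above the sample median, and its complement
above = (ht >= ht.median()).astype(int)
below = (ht < ht.median()).astype(int)
print(above.to_dict(), below.to_dict())
```

Interacting each dummy with the \(\text{National} \times \text{post 2004}\) treatment term then splits the baseline DiD coefficient into separate effects for the high- and low-dependence groups.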

The results in Columns (1)–(6) of Table 10 suggest that, under (a), universities that are highly dependent on hospital revenue had a stronger incentive to increase hospital revenue to compensate for the reduction in the operational support fund. Consequently, the incentive to prioritize medical care provision over research activities was felt more strongly at these universities, and the resulting increase in resources devoted to clinical services had a larger negative impact on research outcomes. Conversely, universities that are less dependent on hospital revenue can raise revenue through channels other than the university hospital; even if a crowding-out effect is present, the negative impact on their research activities is relatively weak.

Additionally, after the reform, national universities that initially emphasized the provision of medical services (as inferred from their high HT ratios) became even more specialized in medical service provision rather than research activity, which is consistent with (b). In other words, assuming that (b) is correct, its effect appears to be heterogeneous depending on the original management style.

I conclude that the heterogeneous results in Table 10 at least do not reject (a) and (b). These data and results imply that partial privatization, which can encourage revenue-generating activity, leads to a decrease in university hospitals' research output. Another possibility, in which clinical research is a direct by-product of clinical services, could mitigate this harmful effect; however, the results are not consistent with such mitigation, which is in any case difficult to verify in this specific setting.Footnote 17

Table 10 The heterogeneous effects of partial privatization depending on university types

6 Conclusion

This study’s primary contribution to the literature is that it provides the first evaluation of the 2004 partial privatization reform of Japanese national universities, treating the reform as a quasi-experiment and using comparable private universities as a control group. I relate the impact to recent research analyzing the relationship between changes in university governance or managerial style and research performance, as in Aghion et al. (2010) and McCormack et al. (2014). Compared to these studies, this study uncovers heterogeneous impacts across research fields and departments. It provides insight into the interaction between changes in management and governance style and each research field's unique characteristics and associated incentives, particularly in medical science. The estimation results suggest that partial privatization led to a deterioration in the quality and quantity of national universities' research output. Disaggregating the effects by research field, I find that only medical science is negatively affected.

As observed, clinical service provision probably increased while medical research output declined. Thus, it is not immediately evident whether the negative impact of the reform on medical science implies an overall welfare loss. However, since overall demand for medical services did not change after the reform, the welfare improvement due to the increased supply of services is likely negligible; hence, there is probably an overall loss in welfare.