## Abstract

Many studies examine the importance of teachers in students’ learning, but few exist on the contribution of principals. We measure the effect of principals on gains in primary test scores in North Carolina and estimate the standard deviation of principals’ value added to be 0.12–0.17. We find that the match between principals and schools accounts for a significant amount of principals’ value added and also find that replacing the current principal has little effect on non-test-score school inputs and outcomes regardless of the new principal’s value added, but that brand new principals have a detrimental effect.


## Notes

While our analysis and that of Miller (2013) use a similar sample of data drawn from the same source, our analyses are fundamentally very different. Our manuscript focuses first on estimating principal value added, and subsequently on determining what makes higher value added principals better at raising student achievement. Miller (2013) focuses on the dynamics of student achievement surrounding principal transitions, and explanations for this phenomenon. Like Miller (2013), the second part of our analysis examines school outcomes following principal transitions, but the underlying goal is different. We examine transitions to differentiate the effects on school outcomes of principals of different quality, whereas Miller (2013) estimates the overall effect of changing the principal.

Clark et al. (2009) use data from New York City to estimate how the characteristics of principals, rather than principal fixed effects, relate to school performance and other measures. They do find that student achievement is increasing in roughly the first 3 or 4 years of principal experience, but the profile flattens beyond that. They also find that as principal experience rises, student absences and suspensions fall. Finally, more principal experience is associated with lower teacher turnover.

While principals have the power to evaluate teachers, the superintendent or the board of directors (for regional schools) has the power to hire and contract with teachers, along with the duty to maintain personnel files and to participate in firing and demotion decisions.

The value-added estimates provided in this paper are conditional on the effect of the school principal on initial test scores in grade 3. A high-quality elementary school principal may raise grade 3 test scores but then produce smaller value-added gains from grade 4 through 8. This principal could be identified as below average in these estimates but may be raising achievement throughout elementary school by more than other principals identified as above average.

For example, suppose principals A and B have worked at school 1 and principal C has worked at school 2 and none of the principals have ever worked at any other school. Principals A and B and school 1 form one connected group, and principal C and school 2 form another.
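The grouping of principals and schools into connected components can be sketched with a simple union-find over the principal–school mobility graph. This is a hypothetical helper for illustration, not the felsdvreg routine (Mihaly et al. 2010) used in the analysis; fixed effects are only comparable within a connected group.

```python
from collections import defaultdict

def connected_groups(assignments):
    """Group principals and schools into connected components.

    assignments: list of (principal, school) pairs observed in the panel.
    Two principals are linked if a chain of shared schools connects them.
    Implemented as a basic union-find (disjoint-set) structure.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Tag nodes so a principal and a school with the same label stay distinct.
    for principal, school in assignments:
        union(("P", principal), ("S", school))

    groups = defaultdict(set)
    for node in parent:
        groups[find(node)].add(node)
    return list(groups.values())

# The example from the text: A and B at school 1, C at school 2.
pairs = [("A", 1), ("B", 1), ("C", 2)]
print(connected_groups(pairs))
# two groups: {A, B, school 1} and {C, school 2}
```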

Another option is to simply drop one principal within each group. This is not appropriate in our case because principal effects would then be interpreted as deviations from the left out principal, which depends on which principal is left out.

Advanced degrees are 1-year educational programs beyond the master's degree, sometimes called "sixth-year" degrees. This level of education is used to meet licensure requirements for higher-level jobs in North Carolina public schools that require more education than a master's degree but not necessarily a doctorate. Barron's ranks schools according to high school class rank, high school grades, standardized test scores, and acceptance rates. The schools are then divided into seven categories, the top four of which are competitive. The indicator for competitive schools equals 1 if the school is in one of these top four categories, and 0 otherwise.

The Empirical Bayes shrinkage estimate \({\hat{\delta }}_{p}^{*}\) of the principal effect would shrink our existing estimates based on a signal to total variance ratio:

$$\begin{aligned} {\hat{\delta }}_{p}^{*} = \left( {\frac{\sigma _{\delta }^{2}}{\sigma _{\delta }^{2} + \sigma _{v}^{2}}}\right) {\hat{\delta }}_{p} \end{aligned}$$where the variances are defined in Sect. 4.1. If our estimate \({\hat{\delta }}_p\) contains mostly noise, then the estimate shrinks toward zero. We compute the shrinkage estimate using our estimates of \(\sigma _\delta ^2\) and \(\sigma _{v}^{2}\) described previously.
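The shrinkage factor can be computed directly as a sketch; the numbers below are illustrative, not the signal and noise variance estimates from the paper.

```python
import numpy as np

def shrink(delta_hat, sigma2_delta, sigma2_v):
    """Empirical Bayes shrinkage of estimated principal effects.

    delta_hat:     raw fixed-effect estimate(s)
    sigma2_delta:  signal variance (true principal effects)
    sigma2_v:      noise variance (estimation error)
    Multiplies the raw estimate by the signal-to-total-variance ratio,
    so noisier estimates are pulled harder toward zero.
    """
    return (sigma2_delta / (sigma2_delta + sigma2_v)) * np.asarray(delta_hat)

# When signal and noise variances are equal, the estimate is halved.
print(float(shrink(0.20, sigma2_delta=0.02, sigma2_v=0.02)))  # prints 0.1
```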

We need to use a constant prior time period so that the principal effects are comparable across time.

The teacher measures come from \(t+1\) to avoid picking up the spurious effect of concurrently moving teachers and principals, since we are interested in changes in these variables that occur after the new principal arrives.

The process for attaching the census data to school records is described in detail in a data appendix available from the authors upon request.

We keep students who switched schools between \(t-1\) and t in our sample. The effects of any past schools and principals are held constant by the lagged test score term, as per the standard value-added specification of the education production function.

Title I eligible schools are schools in which at least 40% of the school's enrollment comes from low-income families. Magnet schools are part of the local public school system but usually draw students from a larger geographic area and often have a special curricular focus such as arts, vocational training, or STEM (science, technology, engineering, and math).

For example, if school districts chose not to renew contracts of principals who performed poorly on current or lagged test scores due to random fluctuations or one-time shocks to student performance, and if these test scores were mean-reverting, we mistakenly might attribute an improvement in scores to a new principal when in fact it was just mean reversion.

All scores are measured in student-level standard deviations. Note that one student-level standard deviation is roughly equal to two school-level standard deviations. We have run all our analyses on the subsample of principals who only switch from school to school, and all the results are similar.

Number of students in a school is related to mobility because principal salaries are based partly on the size of the school and therefore principals have an incentive to move to larger schools.

In the context of teacher effects, Rothstein (2010) examines variation across current teachers in lagged test-score gains as a test for non-random sorting within schools. Because it focuses on detecting within-school sorting across teachers, it is not appropriate to conduct this test in the context of principal effects. In particular, we suspect that a main reason why some principals are more effective than others is because they are better at allocating students to classrooms. Note also that our results are in line with Miller (2013) who finds that principal turnover in North Carolina does not have a large effect on school performance. However, Miller (2013) does find a drop in test scores preceding a principal departure using a subsample of our data, whereas our analysis does not support this conclusion.

We include dummies for principal tenure in our regressions. Because an incorrect specification of tenure effects could introduce error, we have also run the analysis without tenure controls, and separately with experience controls; both yield similar estimates, which are available from the authors upon request.

To show this, we follow Dhuey and Smith (2014). Suppose we run an OLS regression of test scores on only principal effects and obtain \(y_i ={\hat{\delta }}_p +e_i\). Aggregating to the principal level, we get \({\bar{y}}_p ={\hat{\delta }}_p\) because the OLS residual sums to zero for each principal. Now suppose we include school effects in the regression and get \(y_i ={\hat{\delta }}_p^*+\phi _p +e_i^*\). Aggregating to the principal level, we get \({\bar{y}}_p ={\hat{\delta }}_p^*+ {\bar{\hat{\phi }}}_p\) again because the residual sums to zero for each principal. Combining the two equations, we get \({\hat{\delta }}_p ={\hat{\delta }}_p^*+{\bar{\hat{\phi }}}_p\). The variance is \({\mathrm{VAR}}({\hat{\delta }}_p) = {\mathrm{VAR}}({\hat{\delta }}_p^*)+ {\mathrm{VAR}} ({\bar{\hat{\phi }}}_p) + 2{\mathrm{COV}} ({{\hat{\delta }}_p^*,{\bar{\hat{\phi }}}_p})\). We can use this to show that \({\mathrm{VAR}}({\hat{\delta }}_p^*)> {\mathrm{VAR}} ({\hat{\delta }}_p)\) if \({\mathrm{COV}} ({{\hat{\delta }}_p^*, {\bar{\hat{\phi }}}_p})/ {\mathrm{VAR}}({\bar{\hat{\phi }}}_p )<-1/2\), which will occur if the principal and school effects are negatively correlated.
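The variance identity and inequality condition in this footnote can be checked numerically. A minimal simulation sketch, with illustrative covariance values that are not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Principal effects (delta_star) and principal-level means of school
# effects (phi_bar), drawn with a strong negative correlation.
cov_matrix = [[1.0, -0.8], [-0.8, 1.0]]
delta_star, phi_bar = rng.multivariate_normal([0.0, 0.0], cov_matrix, size=n).T

# Without school effects, the estimated principal effect absorbs the
# principal-level school mean: delta_hat = delta_star + phi_bar.
delta_hat = delta_star + phi_bar

# Sample covariance computed with population normalization so the
# identity holds exactly up to floating-point rounding.
cov_hat = ((delta_star - delta_star.mean()) * (phi_bar - phi_bar.mean())).mean()

# Identity: VAR(delta_hat) = VAR(delta_star) + VAR(phi_bar) + 2*COV.
lhs = delta_hat.var()
rhs = delta_star.var() + phi_bar.var() + 2.0 * cov_hat

# Condition: COV / VAR(phi_bar) < -1/2 implies VAR(delta_star) > VAR(delta_hat),
# i.e., adding school effects INCREASES the dispersion of principal effects.
ratio = cov_hat / phi_bar.var()
print(ratio < -0.5, delta_star.var() > delta_hat.var())
```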

We performed various robustness checks; we report the results for the school fixed-effects specification here. First, we computed the adjusted standard deviation using each principal's total number of students as weights (Rothstein 2010), which returned a standard deviation of 0.158 in math and 0.113 in reading. Second, we assumed that the underlying true principal effects are normally distributed and estimated their variance using maximum likelihood (Rockoff 2004), yielding a standard deviation of 0.160 in math and 0.096 in reading. When we run the analysis from Table 5 using only principals who switched schools, the standard deviation of the principal fixed effects is 0.195 in math and 0.141 in reading (adjusted standard deviations are 0.181 and 0.117). We also ran regressions with a set of teacher control variables for each student, including education, experience, licensing, certification, race, and gender. To ensure the correct teacher characteristics were matched with each student, we used a subsample of grade 4–5 students. The standard deviation across the 3212 principals for this subsample when teacher variables are excluded is 0.175 in math and 0.113 in reading (0.163 and 0.091 adjusted), and when they are included the standard deviations are 0.174 in math and 0.111 in reading (0.162 and 0.088 adjusted).

The hybrid random-effects estimator of the variances assumes orthogonality between the three effects and the error term, which may not be justified.

Roughly 68% of first-time principals were assistant principals at a different school the year prior. A further 19% were assistant principals at the same school. About 3% were teachers, and 2.5% were involved in some other role in a North Carolina school. About 6.5% were entirely new to the dataset. Although we cannot be sure the latter group are brand new principals (since they might have worked as a principal in another state or at a private school), we interpret them as such.

Clark et al. (2009) also examine student absences and suspensions, in addition to teacher turnover. Their focus is on the relationship with principal experience, whereas we are concerned with their relationship to principal value added.

We reestimate these coefficients with school-specific linear time trends and report the results in Appendix Table 10.

## References

Aaronson D, Barrow L, Sander W (2007) Teachers and student achievement in the Chicago Public High Schools. J Labor Econ 25:95–135

Beteille T, Kalogrides D, Loeb S (2012) Stepping stones: principal career paths and school outcomes. Soc Sci Res 41:904–919

Branch GF, Hanushek EA, Rivkin SG (2012) Estimating the effect of leaders on public sector productivity: the case of school principals. NBER Working Paper 17803. National Bureau of Economic Research, Cambridge

Cannon S, Figlio D, Sass T (2012) Principal quality and the persistence of school policies. Northwestern University (unpublished manuscript)

Chiang H, Lipscomb S, Gill B (2016) Is school value-added indicative of principal quality? Educ Financ Policy 11:283–309

Clark D, Martorell P, Rockoff J (2009) School principals and school performance. Working Paper 38. Urban Institute, Washington, DC

Clotfelter CT, Ladd HF, Vigdor JL (2007) Teacher credentials and student achievement: longitudinal analysis with student fixed effects. Econ Educ Rev 26:673–682

Coelli M, Green D (2012) Leadership effects: school principals and student outcomes. Econ Educ Rev 31:92–109

Cullen J, Mazzeo MJ (2007) Implicit performance awards: an empirical analysis of the labor market for public school administrators. Northwestern University (unpublished manuscript)

Dhuey E, Smith J (2014) How effective are principals in the production of school achievement? Can J Econ 47:634–663

Goldhaber D, Anthony E (2007) Can teacher quality be effectively assessed? National Board certification as a signal of effective teaching. Rev Econ Stat 89:134–150

Grissom JA, Kalogrides D, Loeb S (2015) Using student test scores to measure principal performance. Educ Eval Policy Anal 37:3–28

Grissom JA, Loeb S (2011) Triangulating principal effectiveness: how perspectives of parents, teachers, and assistant principals identify the central importance of managerial skills. Am Educ Res J 48:1091–1123

Hallinger P, Heck RH (1998) Exploring the principal’s contribution to school effectiveness: 1980–1995. Sch Eff Sch Improv 9:157–191

Hanushek EA (2006) Teacher quality. In: Hanushek EA, Welch F (eds) Handbook of the economics of education, vol 2. Elsevier, Amsterdam

Horng EL, Klasik D, Loeb S (2010) Principal time-use and school effectiveness. Am J Educ 116:491–523

Jackson CK (2013) Match quality, worker productivity, and worker mobility: direct evidence from teachers. Rev Econ Stat 95:1096–1116

Jacob B, Lefgren L (2005) Principals as agents: subjective performance measurement in education. NBER Working Paper 11463. National Bureau of Economic Research, Cambridge

Li D (2015) School accountability and principal mobility: how no child left behind affects the allocation of school leaders. Harvard Business School Working Paper, No. 16–052

Mihaly K, McCaffrey D, Lockwood JR, Sass TR (2010) Centering and reference groups for estimates of fixed effects: modifications to felsdvreg. Stata J 10:82–103

Miller A (2013) Principal turnover and student achievement. Econ Educ Rev 36:60–72

Rockoff JE (2004) The impact of individual teachers on student achievement: evidence from panel data. Am Econ Rev 94:247–252

Ronfeldt M, Loeb S, Wyckoff J (2013) How teacher turnover harms student achievement. Am Educ Res J 50:4–36

Rothstein J (2010) Teacher quality in educational production: tracking, decay, and student achievement. Q J Econ 125:175–214

Woodcock S (2015) Match effects. Res Econ 69(1):100–121

## Ethics declarations

### Conflict of interest

The authors declare that they have no conflict of interest.

### Data and computer code availability

The data that support the findings of this study are available from the North Carolina Education Research Data Center but restrictions apply to the availability of these data, which were used under license for the current study and so are not publicly available. Instructions for how other researchers can obtain the data, and all the information needed to proceed from the raw data to the results of the paper (including code) are however available from the authors upon reasonable request and with permission of the North Carolina Education Research Data Center.

## About this article

### Cite this article

Dhuey, E., Smith, J. How school principals influence student learning.
*Empir Econ* **54**, 851–882 (2018). https://doi.org/10.1007/s00181-017-1259-9
