Students from low-income backgrounds are underrepresented at 4-year colleges and universities in the U.S. (e.g., Baum et al. 2013), even after controlling for academic achievement (e.g., Astin and Oseguera 2004). This disparity is especially pronounced at the most selective institutions, where only 4% of students come from the lowest income quintile (Chetty et al. 2017). As a result, relatively few students from low-income backgrounds may receive the benefits linked to enrollment at a selective college, such as increased earnings (Card and Krueger 2002; Hoekstra 2009) and higher odds of graduate school enrollment (Eide et al. 1998). The socioeconomic stratification within higher education has remained persistent over time (Bastedo and Jaquette 2011), and is likely related to both real and perceived price barriers (Dynarski 1999). With growing media coverage of student loan debt, student borrowing in particular may be increasingly salient in students’ application and enrollment decisions. Indeed, two-fifths of high school seniors exhibit loan aversion (Boatman et al. 2017), presenting challenges for institutions seeking to enroll greater numbers of students from low-income backgrounds.

In response, several dozen institutions–usually with significant endowments–have developed financial aid initiatives designed to reduce or eliminate loans in the financial aid packages awarded to students, which we refer to as “loan-reduction initiatives” (LRIs). LRIs typically provide grants to cover eligible students’ financial need—that is, the difference between the institution’s cost of attendance and what the institution determines is the family’s ability to pay. This determination usually begins with a student’s Expected Family Contribution (EFC) as calculated by the federal government using information on the FAFSA, but can be adjusted based on additional financial information collected by the institution. An institution might determine that a student has no ability to pay and that the LRI will therefore cover the full cost of attendance, but it can also determine that the student must contribute some financial resources towards the cost of attendance. In the latter instance, a student may still borrow to cover what the institution determines is the student’s contribution.
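The need calculation described above can be illustrated with a stylized example (all figures below are hypothetical, not drawn from any institution's actual aid formula):

```python
# Hypothetical figures for one aid-eligible student
cost_of_attendance = 78000   # tuition, fees, room, board, books
family_contribution = 6000   # institution's determination of the family's ability to pay
student_contribution = 3500  # e.g., expected summer earnings or work-study

# Under a no-loan LRI, the remaining need is met entirely with grants
need = cost_of_attendance - family_contribution - student_contribution
print(need)  # 68500
```

Even in this scenario, the student could still borrow to cover the family or student contribution, which is why an institutional no-loan pledge does not mechanically drive borrowing to zero.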

Adopting institutions typically cast LRIs as mechanisms for increasing socioeconomic diversity (Lips 2011), directly addressing affordability concerns that frequently deter low-income students from applying to or enrolling in these colleges (e.g., Bowen et al. 2009). To test these claims, several prior studies have attempted to measure the efficacy of these LRIs. In addition to examinations of individual institutions (Avery et al. 2006; Linsenmeier et al. 2006; Pallais and Turner 2006), a few studies have examined these policies over broader sets of institutions (Hillman 2013; Rosinger et al. 2018; Waddell and Singell 2011). These multi-institution studies have limited their analyses to measuring the impact on the income distribution of enrolled students, with somewhat mixed findings. There are, however, other important outcomes likely affected by adopting these initiatives. Our study is the first multi-institutional analysis to investigate the effects of LRI adoption on three sets of outcomes: borrowing behaviors, admission metrics, and enrollment of students from underrepresented racially/ethnically minoritized (URM) backgrounds.

First, perhaps surprisingly, it is currently unclear whether these initiatives actually reduce student loan borrowing at the institution level. Since aid-eligible students may still take out federal loans even if they are not initially included in their financial aid package, an institutional no-loan pledge is no guarantee of a reduction in borrowing. If these policies do reduce borrowing, we should be able to identify whether such a reduction occurs in terms of the share of students borrowing any amount, the average amount borrowed, or both.

Second, researchers have not yet identified the impacts these initiatives might have on admissions metrics such as application volume, acceptance rate, and yield rate. From an enrollment management perspective, it is important to distinguish between several possible avenues by which the policy could have an effect. Following LRI adoption, institutions could receive greater numbers of applications (likely from LRI-eligible applicants), experience declines in acceptance rates (e.g., more students vying for a finite number of spots), or find that admitted LRI-eligible students are more likely to enroll (thereby increasing the yield rate). These are not mutually exclusive, but to date there is limited evidence of whether these policies affect admissions-related metrics.


Third, adopting loan-reduction policies may alter campus racial/ethnic diversity. Several LRI-adopting institutions have explicitly stated that a goal is to improve racial/ethnic diversity alongside socioeconomic diversity. For example, in announcing a revised loan cap policy, the University of Virginia’s chief operating officer said, “We have a really good opportunity to improve our socioeconomic diversity, as well as our racial diversity” (Anderson 2015). However, prior work has shown that admissions-related efforts to increase socioeconomic diversity do not necessarily lead to substantial increases in racial/ethnic diversity (Cancian 1998; Long 2007; Reardon et al. 2018). Consequently, it is worth considering the interplay of an institution’s goals for racial/ethnic and socioeconomic diversity following adoption of an LRI. Hence, enrollment of URM students is an important but unstudied outcome when measuring the overall effect of these institutional financial aid policies.

As a result, we seek to identify the causal effects of LRIs, guided by two questions:

  1. Does the adoption of a loan-reduction initiative affect borrowing, admission metrics, and the number of Pell Grant recipients or URM students enrolled at the institution?

  2. What characteristics of LRIs are associated with the above outcomes?

We accomplish this by implementing a difference-in-differences analytic strategy using national institution-level data. We compare institutions that adopted the policy to several different comparison groups of institutions, two of which we construct using coarsened exact matching (CEM) and propensity score matching (PSM). Considering results across a wide range of relevant comparison groups provides stronger protection against violations of the identifying assumption. To provide further support of our causal evidence, we conduct a host of falsification and robustness checks that augment the credibility of our results. We also examine the heterogeneity of effects by institutional control and by attributes of the LRI adopted.
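The intuition behind the difference-in-differences strategy can be sketched with a minimal two-group, two-period comparison (all numbers below are hypothetical illustrations, not estimates from the study):

```python
# Minimal 2x2 difference-in-differences sketch (hypothetical figures).
# Outcome: mean log number of Pell Grant recipients within each cell.

# group -> (pre-adoption mean, post-adoption mean)
treated = (5.00, 5.12)   # institutions that adopt an LRI
control = (4.80, 4.83)   # comparison institutions

# Each group's pre-to-post change
change_treated = treated[1] - treated[0]
change_control = control[1] - control[0]

# DiD estimate: the treated change net of the comparison-group change,
# which absorbs shocks common to both groups (e.g., national trends)
did = change_treated - change_control
print(round(did, 2))  # 0.09, i.e., roughly a 9% increase in log points
```

The identifying assumption is that, absent the LRI, treated and comparison institutions would have trended in parallel; this is why the paper's multiple comparison groups and falsification checks matter.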

In line with prior research from Hillman (2013), our findings demonstrate that LRIs increase the number of Pell Grant recipients enrolled at these institutions (4–8% at public institutions, 9–14% at private institutions). We also extend the current literature in several important ways. Specifically, we find that loan-reduction policies substantially reduce the proportion of borrowers at private adopters (by 4–8 percentage points), but there is no detectable reduction in borrowing at public institutions. We also examine potential relationships between LRIs and overall admissions metrics, finding few effects apart from an increase in yield rate at private adopters (3–4 percentage points). Finally, adopting these initiatives leads to large declines in the number of URM students who enroll at public institutions (16–23%) relative to comparison institutions (even after accounting for state affirmative action bans), with no clear evidence of a change at private institutions. Taken together, these findings demonstrate that LRIs achieve one of their stated goals of increasing the number of students from low-income backgrounds, but, at public institutions, it appears to come at the cost of racial/ethnic diversity.

Research on Loan-Reduction Initiatives

Princeton University began the first LRI in the 1998–1999 academic year, announcing that the university would meet the full financial need of students with family incomes below $40,000 through a combination of work-study and grant aid, removing loans from the financial aid packages offered to qualifying students (Princeton University, 1998). Over the next decade, several dozen institutions adopted similar “no-loan” policies or related loan-reduction pledges. While united in their emphasis on reducing student borrowing, such initiatives differ in their structure and eligibility. In terms of structure, Lips’s (2011) typology of these LRIs specifies two primary arrangements: “no-loan” initiatives, which do not include loans as part of need-based financial aid packages, and “loan cap” initiatives, which can include loans up to a set amount that is less than the federal student loan limits. In terms of eligibility, some initiatives apply to all aid-eligible students (“all-aid”), while others pertain only to students below a particular income threshold (“income-level”). (Rosinger et al. [2018] also refer to all-aid LRIs as “universal” programs and income-level LRIs as “targeted” programs. Throughout this paper, we use Lips’s typology for the sake of consistency.)

Rather than focusing on variation across these program types, however, the prior efficacy literature on LRIs is primarily divided between single-institution and multi-institution studies. Several of the earliest analyses have focused on the effects of LRIs at individual institutions. Because one of the stated goals of these policies is to increase low-income student enrollment, Linsenmeier et al. (2006) used a difference-in-differences approach to examine the likelihood of matriculation among admitted low-income students, or “yield rate,” at one of the earliest-adopting institutions. They found that adopting an LRI did not lead to statistically significant effects on the overall yield rate for low-income students, although they did detect a marginally significant increase in yield rates among low-income students who were members of underrepresented minority groups.

In addition to potential effects on the enrollment decisions of admitted students, LRIs may also affect student body composition by inducing more low-income students to apply. Avery et al. (2006) considered a broader range of admissions-related outcomes by studying the first year of an LRI at Harvard University. They found an increase in the number of low-income students who applied to, gained admission to, and enrolled at Harvard compared to the year preceding program adoption. A similar study at the University of Virginia found increased enrollment of students from low-income backgrounds (Pallais and Turner 2006). Work on the University of North Carolina at Chapel Hill suggested these initiatives function through reinforcing existing messaging about the institution’s affordability (Harris and Barnes 2011).

Despite the generally positive results from single-institution studies, multi-institution research has revealed mixed findings on the success of LRIs. Focusing exclusively on public flagships that adopted LRIs through 2007, Waddell and Singell (2011) identified no overall increase in Pell Grant recipients at institutions that adopted such policies, although they found that the composition of the low-income students changed at adopting institutions (they were financially needier and came from more distant locations).

Using Pell Grant receipt as a proxy for low-income status, Hillman (2013) found modest increases in the number of low-income students attending both public and private institutions following the adoption of “no-loan”-type policies. Across multiple comparison groups, Hillman’s (2013) estimates ranged from 2.6 to 4.1% increases in Pell Grant recipients at public institutions and 3.4 to 5.7% increases at private institutions.

More recently, Rosinger et al. (2018) analyzed no-loan policies at elite private institutions. Using a difference-in-differences design, they found that no-loan policies increase enrollment among middle- and upper-middle income students (i.e., students from the second through fourth family income quintiles). In contrast to Hillman (2013), they identified no increases in enrollment for students from lower-income backgrounds. The authors also extended the literature by identifying that no-loan programs for all aid-eligible students, rather than income-based no-loan programs, generate the observed effects among middle-class students.

Several factors may account for the mixed results with respect to low-income student enrollment. First, the three multi-institution studies vary in the types of programs examined. Waddell and Singell (2011) focused exclusively on in-state students at public institutions; Hillman (2013) examined both public and private institutions using a broad definition of “no-loan” programs (including institutions that only guarantee “no-loan” policies for tuition and fees); and Rosinger et al. (2018) focused specifically on elite private institutions. Second, the three studies concentrated on a limited number of years post-adoption for many institutions: Waddell and Singell (2011) used data through 2006–2007; Hillman (2013) used data through 2008–2009; and Rosinger et al. (2018) used data through 2010–2011.

In an effort to better understand the impact of LRIs, our study extends these important prior works in several ways. First, whereas earlier multi-institution studies focus on the impacts of these programs on socioeconomic diversity, we expand the analysis to consider additional outcomes of interest (i.e., student borrowing, admission metrics, and racial/ethnic diversity on campus). Second, we incorporate data through 2014–2015, enabling us to examine longer-term impacts of LRIs operating at scale, which may differ from initial effects found during the early years of adoption. Third, we build upon the existing difference-in-differences methodology by using more comparison groups, including two comparisons estimated using CEM and PSM, thereby increasing the confidence in our estimates as plausibly causal. Fourth, we consider the heterogeneity of program characteristics by producing separate estimates for multiple types of LRIs, a distinction that only Rosinger et al. (2018) have employed.

Theory of Action

Motivations for LRI Adoption

Prior research suggests the most-resourced institutions strategically deploy their resources to increase perceived prestige on such measures as Carnegie Classification (McClure and Titus 2018; Morphew and Baker 2004) and rankings in the U.S. News & World Report and other media outlets (Bastedo and Bowman 2009; Volkwein and Sweitzer 2006). Adopting LRIs may affect perceived institutional prestige in at least two different ways. First, offering more institutional grant aid may drive up the number of applications as the institution becomes more affordable relative to its competitors that do not offer LRIs. In turn, this rise in applications increases the selectivity of the institution, which is dependent on the proportion of applicants who are admitted. College rankings have historically included some measure of selectivity, with lower acceptance rates rewarded (Monks and Ehrenberg 1999). Second, the proportion of low-income and URM students in the student body may also affect perceived prestige, encouraging selective institutions to increase the relative enrollment of students from these backgrounds (Stevens 2009). By devoting financial resources to replace student loans with grant aid, institutions can strategically target low-income students with incentives to apply and enroll. As noted earlier, some adopting institutions publicly indicated that they hoped the LRIs would help to increase the racial diversity on campus, presumably through increased enrollment among URM students who qualified for the programs.

For some later-adopting institutions, political context may have factored into the decision to implement an LRI. In January 2008, Sens. Baucus and Grassley wrote a letter to 136 institutions with endowments of at least $500 million (U.S. Senate 2008). They highlighted that, unlike most private foundations, university endowments were not subject to a requirement to expend 5% of their assets per year on charitable activities. They requested more details on endowment expenditures, particularly related to low- and middle-income families, hoping that the answers might “help Congress make informed decisions about a potential pay-out requirement and allow universities to show what they can accomplish on their own initiative” (U.S. Senate 2008). This Congressional scrutiny likely provided additional motivation to implement programs designed to increase affordability for low- and middle-income families. Indeed, the subsequent 2008–2009 academic year was the single year with the most LRI adoptions.

How LRIs Affect Outcomes

We consider below how LRIs affect each of the specific outcomes we examine. By replacing loans with grants, LRIs should reduce borrowing, whether by lowering the proportion of students who take out any student loans, by lowering the average amount borrowed, or both. Consideration of these previously uninvestigated first-order effects on borrowing is particularly worthwhile because, even at the most generous “no-loan” institutions, it is still possible for students to borrow. In fact, our analyses suggest that even at institutions with all-aid no-loan policies, about 15% of first-time, full-time students borrowed federal loans after LRI adoption.

LRIs may also alter admissions metrics prioritized in enrollment management strategies, such as application volume, acceptance rate, and yield rate (e.g., Antons and Maltz 2006). Although we do not have access to these measures for low-income students in particular, identifying potential changes in overall admissions metrics is valuable given their role in shaping public perceptions of institutional selectivity and guiding enrollment management decisions. Adopting LRIs can lead to increased institutional publicity, thereby generating more applications, presumably from the LRI-eligible students that the policies target. An increase in applications mechanically lowers the acceptance rate if the institution admits the same number of students. Finally, the policy may increase yield rates if more admitted students can afford the institution and decide to attend. Accordingly, we expect policy adoption to increase the number of low-income students who enroll through some combination of increased applications or elevated yield rates, which should be observable in the overall application numbers and yield rates not specific to low-income students.
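The mechanical link between application volume and acceptance rate can be seen with hypothetical numbers:

```python
# Hypothetical admissions figures; class size held fixed
admits = 2000
apps_before, apps_after = 20000, 25000   # a 25% rise in applications

rate_before = admits / apps_before   # 0.10
rate_after = admits / apps_after     # 0.08

# Same number of admits, lower acceptance rate purely from more applications
print(rate_before, rate_after)
```

In other words, an institution can appear more selective after LRI adoption without changing its admissions behavior at all, which is one reason application counts and acceptance rates must be examined jointly.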

Although these policies do not exclusively target URM students, LRIs may shape the application and enrollment choices of students along racial and ethnic dimensions. Since a disproportionate number of low-income students are also members of URM groups (Povich et al. 2015), one might anticipate that enrollment gains among low-income students would result in gains of URM students. Additionally, given prior evidence that Latinx students have above-average levels of loan aversion (Boatman et al. 2017; Cunningham and Santiago 2008), reductions in loans may induce more loan-averse minoritized students to apply and enroll. However, a separate strand of research has found that emphasizing socioeconomic diversity in college admissions is likely to substantially benefit white students (Cancian 1998; Long 2007) and is less effective than race-based affirmative action at increasing racial/ethnic diversity (Reardon et al. 2018). Thus, it is also possible that LRIs may constrain efforts to increase racial/ethnic diversity—reinforcing the importance of measuring URM enrollment as an outcome.

Finally, we anticipate heterogeneous effects across program type. The most generous programs in terms of eligibility (i.e., all-aid) are expected to have larger effects on applications and yield relative to income-level programs since they apply to a wider pool of entering students. We also expect the effects across admissions metrics and low-income student enrollment to be higher among no-loan programs relative to loan-cap programs given the larger amount of loan replacement with grants. While the tradeoff between higher eligibility but less generous aid (i.e., all-aid, loan-cap) versus more restrictive eligibility but more generous aid (i.e., income-level, no-loan) is unclear and would be interesting to study, we are unable to do so because there are no programs in our dataset that began with an all-aid, loan-cap approach.

Data and Methods

Data Sources

In order to assess the effect of LRIs on our outcome measures, we compile information from several sources, resulting in a longitudinal institution-level dataset covering the academic years 2002–2003 through 2014–2015. The primary source for variables is the Integrated Postsecondary Education Data System, known as IPEDS (U.S. Department of Education 2018). Measures from IPEDS include institutional control (public or private), Carnegie classification, and required tuition and fees. Through the Delta Cost Project, we obtain two additional control measures, which are standardized versions of variables available through IPEDS (Hurlburt et al. 2017). These include the full-time equivalent (FTE) number of students and instructional expenditure data, which represents the financial resources an institution uses to demonstrate its commitment to education to (prospective) students and their parents.

Given the expense required to implement an LRI, we also incorporate data on university endowments from the National Association of College and University Business Officers (NACUBO 2019; NACUBO-Commonfund Institute 2019). These studies represent one of the most complete resources for university endowment information available. We supplement this endowment information with university endowment data in IPEDS, which primarily improves the coverage for public institutions. As a result, within the analytic time period, a measure of endowment per FTE student is available for 89.0% of institution-by-year observations.

As a measure of institutional selectivity, we also include Barron’s competitiveness index, which incorporates acceptance rate and admitted students’ class rank and standardized test scores (Giancola and Kahlenberg 2016). This index includes several distinct levels, with “most competitive” and “highly competitive” representing the two highest levels of selectivity.

Our final control is an indicator of institutions’ adoption of test-optional admissions policies. Prior work on selective liberal arts colleges found that applications increased at test-optional institutions immediately following adoption of the policy (Belasco et al. 2015). Since there is overlap in the time periods during which institutions adopted test-optional policies and LRIs, and application behavior is an outcome of interest for our study, it is necessary to control for adoption of test-optional policies. Building on a list of test-optional institutions maintained by the National Center for Fair and Open Testing (2018), we identify over 100 institutions that enacted test-optional policies during the time period examined.

IPEDS also serves as the primary source for a number of key outcome variables, including the proportion of students who took out a loan, average loan amount among borrowers, number of first-year applications an institution received, acceptance rate, yield rate, and number of URM students. To measure our final outcome, we gather data on the number of Pell Grant recipients attending an institution (U.S. Department of Education 2018). Apart from the number of Pell Grant recipients, which represents all undergraduate students at the institution, the student outcome measures exclusively focus on first-time, full-time (FTFT) students.

There are several limitations to consider for the available data. First, the measures of borrowing behaviors are limited to federal student loans. Second, it would be ideal to know the number of applications, acceptance rate, and yield rate for low-income students, but data on these measures are only available for all students. Third, the majority of the data are based on institution-reported measures. Particularly in the case of endowment data, it is possible that the institutions that elected not to report (either overall or during specific years in the Great Recession) may differ systematically from those that provided endowment information. Finally, several universities that report to IPEDS using multiple unit IDs (e.g., Kent State University) are excluded due to an inability to assign outcome values to each campus. Also excluded are institutions that were open-admission between 2002–2003 and 2014–2015, as well as institutions that did not award bachelor’s degrees, were ineligible for federal Title IV aid, or were not classified as 4-year institutions. The final analytical sample includes 892 institutions for which outcome measures and covariates are available in all years of operation from 2002–2003 through 2014–2015.

Identification of Treatment and Comparison Groups

In this paper, LRIs refer to institutional policies that eliminate loans from the financial aid packages for students or cap the amount of loans that a student can receive. Such policies may still require parental and student contributions. We only consider LRIs that are available to a broad pool of income-qualifying students. Therefore, we exclude programs with merit-based requirements and any state- or local-level programs (e.g., Georgia HOPE, Kalamazoo Promise). To develop the list of qualifying institutions, we consulted the eligible institutions in Lips (2011), Hillman (2013), and the “no-loan” institutions listed by Kantrowitz (2018). For each potentially eligible initiative, we examined program characteristics using the institution’s website; campus, regional, and trade publications; and personal communication.

Online Appendix Table 1 provides a full chronological list of the 57 institutions identified in our study as having an LRI in place by the 2014–2015 academic year. In addition to institutional control (public/private) and the first year of the program, Online Appendix Table 1 provides a categorization of the first LRI that the institution offered. All 57 institutions began with one of three different LRI structures that either eliminated loans for all aid-eligible students (“all-aid no-loan”), eliminated loans for students below a particular income threshold (“income-level no-loan”), or capped the amount of loans a student could receive for students below a particular income threshold (“income-level loan cap”). Several institutions altered the structure of their programs in years subsequent to adoption, though all continued to offer some form of LRI, as outlined in Online Appendix Table 1. Throughout our analyses we categorize the institution by the type of program first adopted; additional analyses (available upon request) suggest that our findings are largely consistent regardless of whether we include or exclude institutions that switched LRI types. In several cases, institutions adopted both a no-loan program and a loan-cap policy (e.g., Lehigh University); in such instances, we identify the initial program as the no-loan program. Overall, the vast majority of institutions enacted their programs in close succession, with 44 of the 57 programs first taking effect in the 2006–2007 through 2008–2009 academic years. One key difference between the list of institutions in Online Appendix Table 1 and the list from Hillman (2013) is that we do not consider income-based “no tuition and fees” policies (e.g., University of Vermont) to be LRIs.
Though in principle a policy of “no tuition and fees” for income-qualifying students could limit the expenses for which a student may need to borrow, expenses beyond tuition and fees (e.g., room and board) may still require aid-eligible students to borrow extensive amounts in federal loans. From the institutions in Online Appendix Table 1, we exclude three from our analyses: Princeton University and Brown University (because their programs were adopted prior to years for which key data is available) and the University of Illinois at Urbana-Champaign (due to missing critical measures such as FTE student counts and endowment values).

Table 1 Initial loan-reduction initiative typology, by institutional control

Table 1 provides an overview of the 54 LRIs in our analytic sample by institutional control and policy type. Overall, 16 of the institutions with LRIs were public, while 38 were private. The initial type of LRI differed notably based on institutional control. Among public institutions with LRIs, none provided all-aid no-loan guarantees, 9 began with income-level no-loan policies, and 7 initially offered income-level loan cap policies. Private institutions predominantly offered all-aid no-loan policies (13) or income-level no-loan policies (24) as their initial LRI, with just one private institution initially offering an income-level loan cap.

In recognition of differences in institutional characteristics by control, we constructed separate sets of comparison groups for public and private institutions. We established multiple comparison groups within each sector (five for publics and four for privates) in an effort to increase confidence that the findings hold across varying relevant definitions of comparison institutions, rather than being anomalous findings contingent on a particular comparison group construction. These comparison groups reflect the fact that institutions with LRIs tend to be among the best-resourced and most selective institutions in higher education (Lips 2011).

The first comparison group for public institutions consists of public flagships. Using Stater’s (2009) definition of public flagships, the LRI adopters include 9 public flagships, and sufficient comparison data is available for 31 of the remaining public flagships. We separately construct four additional comparison groups for both public and private institutions: high-endowment institutions (defined as endowments per FTE at or above the 95th percentile within the same control), a selectivity-based group (Barron’s “most” or “highly” selective for public institutions, “most selective” for private institutions), a comparison group identified through CEM, and a comparison group identified through PSM.

Both CEM and PSM provide enhanced identification of untreated institutions that resemble treated institutions on key measures prior to policy adoption. Rather than identify comparison groups using one variable, these matching methods simultaneously incorporate multiple measures when constructing the comparison group. The matching analyses establish a better counterfactual for the treated institutions by conducting the analysis within defined strata (CEM) or by constructing and applying analytic weights to each control institution based on the estimated propensity for LRI adoption (PSM). The use of multiple matching methods, along with several other theoretically defined comparison groups, is designed to increase confidence in findings that are consistent across groups.

The CEM approach involves temporarily recoding certain variables into conceptually meaningful categories, developing strata based on the coarsened variables, and then comparing institutions in the same stratum using the underlying, non-coarsened values (Iacus et al. 2011). In this study, our CEM approach uses four categories of FTE enrollment, three categories of endowment, two categories of Carnegie classification, and two categories of selectivity, resulting in a total of 48 potential strata (see Online Appendix Table 2 for categorizations). Only strata with at least one treated and one control institution contribute to the estimates from the CEM procedure, and no LRI institutions in our data are excluded because of this restriction.
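The stratification logic can be sketched as follows. The coarsening cutpoints, variable names, and toy data below are purely illustrative and are not the study's actual categories (which appear in Online Appendix Table 2):

```python
from collections import defaultdict

# Illustrative coarsening rules (cutpoints hypothetical): 4 FTE bins x
# 3 endowment bins x 2 Carnegie bins x 2 selectivity bins = 48 strata.
def coarsen(inst):
    fte_bin = min(inst["fte"] // 5000, 3)
    endow_bin = 0 if inst["endow_per_fte"] < 50 else (1 if inst["endow_per_fte"] < 250 else 2)
    carnegie_bin = 1 if inst["carnegie"] == "research" else 0
    select_bin = 1 if inst["selective"] else 0
    return (fte_bin, endow_bin, carnegie_bin, select_bin)

def cem_strata(institutions):
    """Group institutions by coarsened stratum, keeping only strata that
    contain at least one treated and one control institution."""
    strata = defaultdict(list)
    for inst in institutions:
        strata[coarsen(inst)].append(inst)
    return {key: members for key, members in strata.items()
            if any(m["treated"] for m in members)
            and any(not m["treated"] for m in members)}

# Toy data: A and B share a stratum; C's stratum has no control and is dropped
sample = [
    {"name": "A", "fte": 7000, "endow_per_fte": 300, "carnegie": "research", "selective": True, "treated": True},
    {"name": "B", "fte": 8000, "endow_per_fte": 280, "carnegie": "research", "selective": True, "treated": False},
    {"name": "C", "fte": 2000, "endow_per_fte": 10, "carnegie": "other", "selective": False, "treated": True},
]
matched = cem_strata(sample)
```

As the sketch shows, comparisons then proceed within each surviving stratum using the underlying, non-coarsened values.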

Table 2 Descriptive statistics for control variables, 2002–2003

In the PSM approach, the propensity score is a single value corresponding to the probability of an institution adopting an LRI, conditional on the set of observable pre-treatment covariates (Rosenbaum and Rubin 1985). With the goal of matching institutions based on a broad group of potentially confounding variables that are related to both assignment to treatment and the outcome of interest (Rubin 2007), we conducted the PSM procedure using an Epanechnikov kernel function and matching on baseline (2002–2003) values for the log number of FTE undergraduates, log endowment per FTE, Carnegie classification, Barron’s selectivity, and log tuition and mandatory fees (see Online Appendix Table 2). Online Appendix Fig. 1 shows the propensity scores before and after the matching procedure conducted separately for public and private institutions. The figure provides visual evidence that the matching process identifies untreated institutions with similar probabilities of LRI adoption to the institutions that adopted. In the PSM procedure, 13 of 16 public institutions with LRIs are on common support, and 21 of 38 private institutions are on common support. Thus, the PSM approach excludes several institutions that were most likely to have the resources to enact an LRI and instead focuses on LRI-adopting institutions that more closely resemble the pool of institutions that have yet to establish an LRI.
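A stylized version of this kernel-matching step (our sketch, assuming a logistic treatment model; the bandwidth and variable construction are illustrative and not taken from the paper) could be:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def epanechnikov_psm(X, treated, bandwidth=0.06):
    """Propensity-score kernel matching, sketched: estimate each
    institution's probability of LRI adoption from baseline covariates,
    restrict treated units to the common support of control scores, and
    weight every control by an Epanechnikov kernel of its propensity
    distance to each treated unit. The bandwidth is an assumption."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t, c = ps[treated], ps[~treated]
    on_support = (t >= c.min()) & (t <= c.max())
    u = (t[on_support, None] - c[None, :]) / bandwidth
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)  # Epanechnikov kernel
    # normalize so each on-support treated unit distributes weight 1 across controls
    w = k / np.clip(k.sum(axis=1, keepdims=True), 1e-12, None)
    return on_support, w.sum(axis=0)  # total matching weight on each control
```

Treated institutions with propensity scores outside the range of control scores are dropped from the analysis, which is how several of the best-resourced LRI adopters fall off common support.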

As an overview of baseline characteristics of institutions with LRIs and comparison groups, Table 2 presents descriptive statistics for control variables in 2002–2003, the earliest year of the analytic dataset. While generally similar, we note two differences between adopters and the comparison groups across these controls. First, the average number of FTE students at private institutions with LRIs was generally higher than the comparison groups. Second, institutions that adopted a loan-reduction pledge were typically among the best-resourced financially, with mean endowment per FTE and instructional expenditures per FTE that exceed their comparison institutions. Focusing on median values for endowment and instructional expenditures reduces but does not eliminate this disparity.

Table 3 offers descriptive statistics for the outcome variables in the 2002–2003 academic year. Many of the outcomes are broadly similar between adopters and various comparison groups. For example, institutions with LRIs have similar shares of the student body who received Pell Grants or were members of URM groups relative to comparison institutions. A few outcomes are notably different, such as the selectivity of private LRI adopters (30%) relative to the comparison groups (45–65%). This fact explains why some of the most selective private institutions are not on common support in the PSM analysis.

Table 3 Descriptive statistics for outcome measures, 2002–2003

Analytic Strategy

Our study employs a difference-in-differences (DD) strategy, which identifies treatment effects by comparing outcomes of the treatment group to a comparison group during pre- and post-treatment time periods. Although classic examples of DD designs focus on policies with a single implementation year (e.g., Card and Krueger 1993), institutions have adopted LRIs over the course of many years. Therefore, following approaches used in research with multiple adoption years (e.g., Belasco et al. 2015; Furquim and Glasener 2017), our study relies on the following model, which accounts for the staggered nature of the policy adoption by centering each institution’s data around its year of adoption:

$${Y}_{it}= {\beta }_{0}+ {\beta }_{1}\left(Pos{t}_{t}*{LRI}_{i}\right)+ {{\varvec{X}}}_{{\varvec{i}}{\varvec{t}}}{\varvec{\beta}}+{\gamma }_{i}+ {\lambda }_{t}+{\epsilon }_{it}$$

In the equation above, \({Y}_{it}\) represents a particular dependent variable (proportion of FTFT students borrowing loans, average loan amount, number of applications, acceptance rate, yield rate, number of Pell Grant recipients, or number of URM students) for institution i in year t. \({\beta }_{1}\) represents the coefficient of interest for institutions with LRIs (\({LRI}_{i}\)) after their adoption of the policy (\(Pos{t}_{t}\)). \({{\varvec{X}}}_{{\varvec{i}}{\varvec{t}}}\) represents a vector of time-varying, institution-level covariates (i.e., FTE undergraduate enrollment, endowment per FTE, instructional expenditures per FTE, the total required tuition and fees [in-state for public institutions], and an indicator for whether the institution had a test-optional program active in that year). The model also includes two sets of fixed effects, \({\gamma }_{i}\) for institution fixed effects, and \({\lambda }_{t}\) for year fixed effects. The institution fixed effects account for all time-invariant characteristics of institutions, including whether they ever adopted LRIs, specialized missions (e.g., religiously affiliated institutions), and populations served (e.g., Historically Black Colleges and Universities [HBCUs]). The year fixed effects control for secular trends that are potentially related to the outcome measures. Finally, \({\epsilon }_{it}\) indicates the error term, which we cluster at the institution level to account for autocorrelation.
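The estimating equation can be sketched as a two-way fixed-effects regression with institution-clustered standard errors (our illustration using statsmodels; all variable names are hypothetical placeholders, not the authors' dataset):

```python
import statsmodels.formula.api as smf

def fit_dd(df, outcome):
    """Two-way fixed-effects DD, sketched. C(unitid) and C(year) absorb
    the institution and year fixed effects; the covariate names mirror
    the vector X_it described in the text but are illustrative."""
    formula = (
        f"{outcome} ~ post_x_lri + log_fte + log_endow_fte "
        "+ log_instr_fte + log_tuition + test_optional "
        "+ C(unitid) + C(year)"
    )
    # cluster-robust standard errors at the institution level
    return smf.ols(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["unitid"]}
    )
```

The coefficient on `post_x_lri` corresponds to \({\beta }_{1}\) above; with staggered adoption, the post indicator turns on in each institution's own adoption year.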

In order for the DD design to identify an unbiased estimate for the impact of the average treatment effect on the treated (ATT), the change in outcomes over time for the comparison group would need to match the expected change for the treatment group, had that group not received treatment. This requirement is known as the parallel trends assumption (Angrist and Pischke 2009), which focuses on the trends in outcomes over time rather than the means of the values. The parallel trends assumption is not directly testable, but we employ several strategies in the “Robustness of Results” section to increase confidence that the data meet this assumption.

Results

Table 4 presents our main findings of the effect of adopting an LRI on outcomes related to student borrowing, admission metrics, and student body diversity. We separately estimate the effects for public (left panel) and private (right panel) institutions because of the differences in funding and governance structures. Each cell presents the β1 coefficient and its standard error from our estimating equation. Because there is no obviously superior comparison group, our analysis uses multiple comparison groups across columns to examine the robustness of results. We should feel most confident about results that are consistent across comparison groups.

Table 4 Difference-in-differences estimates of loan-reduction initiative adoption, by institution control

As an example of interpreting coefficients, consider the Pell Grant recipients outcome, represented as the natural log of the number of Pell Grant recipients at an institution. The 0.074 coefficient in column 1 indicates that adopting an LRI increases the number of undergraduate Pell Grant recipients at public institutions by 7.4% relative to changes at public flagship institutions that did not adopt an LRI.
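As a quick arithmetic check on this reading of log-outcome coefficients (our illustration, not part of the paper's analysis):

```python
import math

# A coefficient b on a log outcome is commonly read as a 100*b percent
# change; the exact implied change is 100 * (exp(b) - 1), which is very
# close to the approximation for small coefficients like those here.
b = 0.074
approx_pct = 100 * b                  # about 7.4, the reading used in the text
exact_pct = 100 * (math.exp(b) - 1)   # about 7.7
```

For coefficients of this magnitude, the approximation and the exact transformation differ by only a fraction of a percentage point.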

Student Borrowing

We first focus on two outcomes related to student borrowing that we examine as first-order effects: does the adoption of an LRI alter the proportion of FTFT students awarded federal loans and the average amount borrowed among borrowers at the institution? If these policies achieved their stated aims of reducing debt burdens, we would expect observable reductions in at least one of these measures. However, we are unable to detect declines in either borrowing rates or borrowing amounts among public LRI adopters relative to all five comparison groups. There are several possible explanations for the absence of a first-order effect at public institutions. First, it is possible that there was indeed a decline on one of the borrowing dimensions, but at a lower level than we have power to detect. Second, outcomes may differ by the type of LRI implemented, a possibility we explore in subgroup analyses in Tables 5 and 6. Third, perhaps the overall borrowing rates and amounts did not change at public LRI adopters, but the composition of borrowers changed (e.g., relatively more middle- and upper-income students borrowing), though additional data are necessary to empirically test this potential explanation.

Table 5 Difference-in-differences estimates of loan-reduction initiative adoption at public institutions, by LRI type
Table 6 Difference-in-differences estimates of loan-reduction initiative adoption at private institutions, by LRI type

In contrast, among private institutions, there is strong evidence that LRI adoption leads to statistically significant declines in the share of students who took out federal loans. After LRI adoption, the proportion of students who borrowed declines by a highly statistically significant 4 to 8 percentage points across the different private comparison groups. Relative to a baseline borrowing rate of 36% at those institutions, this effect corresponds to an 11–22% reduction in borrowing. The results do not provide clear indications of a reduction in the average student loan amount among the diminished pool of borrowers at private LRI adopters.

Admission Metrics

The next three rows of Table 4 assess LRI adoption’s effect on admission metrics. Such changes could occur even in the absence of shifts in borrowing behavior due to factors accompanying LRI policy adoption such as increased publicity, marketing, and outreach efforts; changes to admission practices; and reduced funding uncertainty from the perspective of students. Our empirical estimates in Table 4 do not identify consistently significant changes in application volume or acceptance rate for either public or private institutions. However, since our measures do not exclusively focus on the admission metrics for students eligible for the initiatives, we cannot rule out the possibility of application-related effects for LRI-eligible students specifically. There is evidence of increases in yield rate at private institutions that adopted LRIs, with two of four comparison groups showing an estimated 3–4 percentage point increase in yield rate, and the most selective comparison group suggesting a 1.8 percentage point increase with a p-value of 0.051. Especially when compared against the baseline yield rate of 43% at private institutions, these results suggest that efforts to replace loans with institutional grants may produce sizable gains in the share of admitted students who choose to attend a private LRI adopter. However, we observe no effect on yield at public institutions.

Campus Diversity

To examine potential effects of LRI adoption on the composition of the undergraduate student body, we examine two measures of socioeconomic and racial/ethnic diversity. The effect estimates of LRI adoption on the number of Pell Grant recipients are the most consistently positive results in Table 4. Specifically, LRI adoption increases Pell Grant recipient enrollment by 4 to 8% at public institutions (statistically significant in four of five comparison groups) and 9 to 12% at private institutions (statistically significant in all four comparison groups). Robustness checks (discussed later) suggest readers should interpret the Pell Grant recipient findings with caution for the public flagship comparison group; actual effects for that single comparison group are likely lower due to changes preceding LRI adoption. These results, which rely on more years of post-adoption data than prior studies, are consistent with the significance and direction of the findings in Hillman (2013), though somewhat higher than Hillman’s point estimates of 3–4% at public institutions and 3–6% at private institutions. We note that even at the high end of the point estimates (8% for public institutions and 12% for private institutions), such an effect corresponds to a limited number of students, given the baseline share of Pell Grant recipients (20% at public institutions and 12% at private institutions). These results imply that LRI adoption increases the share of the student body receiving a Pell Grant by about 2 percentage points.

To assess whether LRI policies worked in concert with or in opposition to stated institutional goals of enhancing racial diversity, we also examine the effect of LRI adoption on the number of URM students. For public institutions, our analysis indicates that, among four of the five comparison groups, adoption of an LRI significantly and substantially reduces the number of URM students enrolling at the institution by 15–23%. With baseline URM enrollment shares of 14% among public LRI adopters, such effects would amount to an overall decline in URM students of 2 to 3 percentage points. According to analyses by individual racial/ethnic groups (available upon request), the declines in URM enrollment at public institutions are most evident among Black and Native American students, although point estimates are also negative for Latinx students in four of five comparison groups; no substantial shifts are evident for the enrollment of Asian or white students (though standard errors for Asian students are comparatively large). The evidence at private institutions does not clearly identify an effect of LRI adoption on URM enrollment.

For the enrollment of URM students, there may be some concern about the role of affirmative action bans in states where LRI-adopting and LRI-non-adopting institutions are located. In an alternative specification (available upon request), we added a flag for years in which a state-level affirmative action ban was in effect (based on dates in Baker 2019). We obtained results qualitatively similar to our main estimates, showing significant reductions in URM students in 3 of 5 comparison groups for public institutions with LRIs (ranging from declines of 13% to 21%). Another concern regarding URM student enrollment may be that colleges and universities change their institutional policies concerning affirmative action due to a lawsuit or some other external pressure that we do not measure consistently across institutions in our dataset. While it is possible that institutions changed their affirmative action policies around the same time they adopted LRI programs, we have no reason to believe that occurred. The fact that recent high-profile lawsuits concerning affirmative action at Harvard University, the University of Texas at Austin, and the University of North Carolina at Chapel Hill have not yielded any publicly known differences in institutional admissions policies leads us to believe that it is unlikely that a systematic change in institutional affirmative action occurred across multiple institutions at the same time they adopted LRIs.

Event Study Analysis

In addition to the main DD analysis, we also illustrate temporal trends in effects using an event study approach (described in greater detail in the Robustness Checks section). As a visual presentation of event study findings, Online Appendix Figs. 2 and 3 depict the point estimates and 95% confidence intervals for six outcome variables in the event study model for public and private institutions, respectively. These event study results highlight that effects on some outcomes are apparent shortly after LRI adoption, whereas others become more evident over time. For example, the event study figure for private institutions (Online Appendix Fig. 3) shows relatively consistent reductions in the share of students borrowing beginning with the second year of the policy. In contrast, estimates for the relationship between LRI adoption and the percent of Pell Grant recipients gradually increased in the post-adoption years for both types of institutions. Such gradual changes are consistent with the fact that data for Pell Grant recipients cover all students and therefore effects take 4 years to become fully evident. These event study estimates also demonstrate a fairly steady decline for FTFT URM students at public institutions and an upward trend for applications at private institutions in the years after adopting the LRI. With some shifts in outcomes taking years to materialize, these event study findings reinforce the value of observing outcomes for numerous years following LRI policy adoption.

Subgroup Analysis by LRI Type

Because there is variation in the attributes of the LRI policies, we consider the effect of different types of LRIs at public and private institutions, as shown in Tables 5 and 6, respectively. There is a notable reduction of statistical power in further dividing the sample by policy type across institutional control, so the goal of this subgroup analysis is to provide suggestive—but by no means dispositive—evidence regarding the outcomes experienced under various policy types. Due to reduced power, several of the statistically significant findings in the public sector subgroup analysis do not withstand multiple hypothesis testing adjustments (discussed in the robustness section below).

Table 5 distinguishes between the two types of LRIs adopted at public institutions, income-level no-loan programs and income-level loan cap programs, for which the effects of LRI adoption appear to differ on several dimensions. For example, while we are unable to detect a consistent change in borrowing rates at public institutions with income-level no-loan programs, there is limited evidence that adoption of an income-level loan cap program actually increases borrowing. Although we do not observe consistent changes in application volume, there is suggestive evidence that the income-level no-loan programs at public institutions increase yield rates. In contrast, we observe no changes in admission metrics for public institutions that adopted income-level loan cap policies. Reductions in URM students appear relatively consistent for both types of policies at public institutions (15–22% for income-level no-loan and 15–26% for income-level loan cap). Falsification checks suggest that caution is warranted in interpreting subgroup estimates for Pell Grant enrollment at public institutions, so we do not emphasize those results.

The subgroup analysis for private institutions, depicted in Table 6, conveys that the overall reduction in proportion of borrowers observed in Table 4 is clearly concentrated among the private institutions with all-aid no-loan programs, amounting to 6–13 percentage point reductions at those institutions (significant for all four groups). For the admissions-related outcomes, there is strong evidence of an increase of 2–4 percentage points for yield rates at all-aid no-loan programs (significant for all four groups), with modest evidence of a 2–3 percentage point higher yield rate at those with income-level no-loan programs (two groups significant). As with the public institutions, subgroup analyses for Pell Grant recipient enrollment by LRI type should be interpreted with caution in light of falsification checks, as discussed below.

Robustness of Results

We undertake a number of robustness checks to bolster our confidence in our primary identifying assumption: that LRI-adopting and non-LRI-adopting institutions would have experienced similar trends in outcomes in the absence of adoption.

Visual Assessment of Parallel Trends

The DD method relies on the parallel trends assumption, which states that the trends in outcomes for the treatment and comparison groups would have been the same in the post-adoption period in the absence of treatment. One method of assessing the plausibility of this assumption is through visual inspection of the average values of the outcome variables in the pre-treatment years (St. Clair and Cook 2015). Given the high concentration of policy adoptions in the 2007–2008 and 2008–2009 academic years, we can observe whether treatment and comparison institutions appear similar in earlier years. Figures 1, 2, 3 and 4 show similar trends in the pre-treatment period for both the Pell recipients and URM enrollment outcomes for public and private institutions; the graphical evidence broadly supports the parallel trends assumption.

Fig. 1 Number of Pell Grant recipients per year at public institutions with loan-reduction initiatives and comparison groups

Fig. 2 Number of Pell Grant recipients per year at private institutions with loan-reduction initiatives and comparison groups

Fig. 3 Number of first-time, full-time students from underrepresented racial/ethnic minority groups per year at public institutions with loan-reduction initiatives and comparison groups

Fig. 4 Number of first-time, full-time students from underrepresented racial/ethnic minority groups per year at private institutions with loan-reduction initiatives and comparison groups

Event Study

We also conducted an event study analysis to quantitatively assess the parallel trends assumption in the pre-treatment periods. For the event study approach, we replace the \(\left(Pos{t}_{t}*{LRI}_{i}\right)\) term in the main analytic equation with a set of dichotomous indicator variables for each year relative to LRI adoption, with the year immediately prior to policy implementation serving as the omitted reference category in the regression. Years 5 or more prior to LRI adoption are binned into a single indicator variable, as are the fifth and later years of the policy.
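The construction of these relative-year indicators can be sketched as follows (our pandas illustration; the column names are hypothetical):

```python
import pandas as pd

def event_time_dummies(df, adopt_col="adopt_year", year_col="year",
                       lo=-5, hi=5, ref=-1):
    """Event-study indicators, sketched: center each institution's
    observations on its adoption year, bin years at or beyond the
    endpoints (here +/-5) into single indicators, and omit the year
    before adoption as the reference category. Never-adopters
    (missing adopt_col) receive no indicators and act as controls."""
    t = (df[year_col] - df[adopt_col]).clip(lower=lo, upper=hi).astype("Int64")
    d = pd.get_dummies(t, prefix="evt")
    return d.drop(columns=f"evt_{ref}", errors="ignore")
```

These indicators then replace the single post-adoption interaction in the regression, so each coefficient traces the outcome path relative to the year before adoption.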

The results of the event study analyses across several comparison groups are presented in Online Appendix Tables 3 and 4 for public and private institutions, respectively. Results from CEM and PSM comparison groups are qualitatively similar and are omitted from the tables due to space constraints. Consistent with the parallel trends assumption, in only limited cases do we see statistically significant (p < 0.05) interactions between the centered year and LRI status in pre-LRI years. Of 140 possible pre-treatment estimates, only 8 are statistically significant, roughly the 7 we would expect by chance at the 5% level. Those 8 cases are spread across different years and different outcomes, suggesting no systematic differences in trends in the years leading up to LRI adoption and further supporting the parallel trends assumption.

Falsification Tests

To assess whether we only detect treatment effects at the time of actual LRI adoption, we perform several falsification tests. First, we drop data from the years of actual LRI adoption onward; assign placebo versions of LRI adoption 1, 2, and 3 years prior to actual adoption; and re-estimate our difference-in-differences model. Significant findings from such a falsification test would suggest an apparent policy effect prior to policy adoption, which must be caused by other unobserved factors that might falsely be attributed to the LRIs. Online Appendix Tables 5–7 present the results of falsification tests for our primary analysis (Table 4) based on placebo treatment 1, 2, and 3 years prior to actual treatment, respectively. Only 7 of 189 tests have a statistically significant coefficient, fewer than the roughly 9.5 we would expect by chance at the 5% level. However, we consistently see a significant finding on Pell Grant recipients for the public flagship comparison across all 3 years, casting doubt on the utility of this particular comparison group for the Pell recipient outcome.
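The placebo setup described above can be sketched as (our illustration; the column names are hypothetical):

```python
import pandas as pd

def placebo_adoption(df, shift, adopt_col="adopt_year", year_col="year"):
    """Falsification test setup, sketched: drop observations from the
    actual adoption year onward, then pretend adoption happened `shift`
    years earlier and rebuild the DD interaction. Institutions that
    never adopt (missing adopt_col) are kept as controls."""
    keep = df[adopt_col].isna() | (df[year_col] < df[adopt_col])
    pre = df[keep].copy()
    placebo_year = pre[adopt_col] - shift
    # interaction is 0 for never-adopters (comparison with NaN is False)
    pre["post_x_lri"] = (pre[year_col] >= placebo_year).astype(int)
    return pre
```

Re-estimating the DD model on this pre-period panel, with the placebo interaction in place of the true one, should yield null results if the true effects begin only at actual adoption.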

Online Appendix Tables 8–10 and Online Appendix Tables 11–13 provide falsification results by initial program type (i.e., all-aid no-loan, income-level no-loan, income-level loan cap) for public institutions and private institutions, respectively. The primary outcome measure on which we detect statistically significant results across multiple comparison groups and falsification years at the type level is the log number of Pell Grant recipients. At public institutions with income-level loan cap programs, three of the five comparison groups show positive, statistically significant coefficients of 3–4% for placebo treatment 1 year prior to actual treatment, although these results disappear in second- and third-year falsification tests. The pattern is even stronger for private institutions with all-aid no-loan programs, which show statistically significant coefficients of 8–10% for placebo treatment in at least three comparison groups 1, 2, and 3 years prior to LRI adoption. These results suggest serious caution in interpreting the estimated effects of LRI adoption on Pell Grant recipient enrollment at the program type level for two groups: the public income-level loan cap programs and private all-aid no-loan programs. In both cases, however, the estimated effects of actual treatment are greater than the coefficients from the falsification tests (6–10% for public income-level loan cap programs vs. 3–4% for placebo tests, and 10–17% for private all-aid no-loan programs vs. 8–10% for placebo tests). We therefore argue that, while other changes in recruitment and admissions practices prior to LRI adoption may have increased the number of Pell Grant recipients at particular institutions, the adoption of each of those two LRI types likely led to further increases in Pell Grant populations.

Covariate Balance

In Online Appendix Table 14, we present results of analyses in which we substitute covariates for outcome variables using our DD model to test the balance of observable pre-treatment covariates between treatment and comparison institutions. Significant changes in control variables are worth noting for their potential relationships to the effects attributed to LRI adoption. Our covariate balance tests provide some evidence of declines in tuition and required fees at private institutions that adopted LRIs relative to their non-adopting peers (for two of four comparison groups); however, because we do not observe these results consistently across comparison groups, we do not believe this imbalance is a major concern. We do not observe systematic differences on full-time-equivalent student enrollment, instructional expenditures, or endowment between LRI-adopting and non-LRI-adopting institutions.

Contemporaneous Treatments

We would also be concerned if there were any concurrent interventions that would have impacted LRI-adopting and LRI-non-adopting institutions differentially. We have already discussed one such possibility—a state affirmative action ban. Another such possibility involves two prominent national programs, the Quest Scholars Program and Posse Scholars, both of which partner with many of the LRI-adopting institutions to guarantee students a loan-free undergraduate education. Therefore, we examined the years in which an institution joined the Quest Scholars Program or the Posse Scholars program, relative to their LRI adoption. The years of LRI adoption and partnership with these two national programs have very little overlap (e.g., only two institutions adopted LRIs and joined Posse in the same year), helping mitigate concern about the adoption of these major programs as a contemporaneous treatment.

Multiple Hypothesis Testing

Each of our main results tables conducts a large number of hypothesis tests, so we may be concerned that some of our findings are spurious and occur due to chance. We employ Benjamini and Hochberg’s (1995) correction to assess the potential for “false discoveries.” Using a reasonable false discovery rate of 0.10 (Efron 2012; McDonald 2014) on our main findings, we find that only 2 of our 20 significant findings in Table 4 may be spurious: Pell Grant recipients at public institutions using the CEM comparison group, and URM students at private institutions using the high-endowment comparison group. The CEM estimate among public institutions was the smallest of the five comparison groups, but we see enough consistent evidence across comparison groups for Pell Grant recipients that we believe this result is not due to multiple hypothesis testing. Given that the URM result relative to other high-endowment institutions was the only statistically significant finding across comparison groups for private institutions, the results of this correction do not affect our finding of no detectable effect on URM enrollment at private institutions.
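The Benjamini–Hochberg step-up procedure can be sketched as follows (our implementation for illustration; in practice, statsmodels' multipletests with method='fdr_bh' performs the same calculation):

```python
import numpy as np

def bh_reject(pvals, q=0.10):
    """Benjamini-Hochberg (1995) step-up procedure, sketched: sort the
    m p-values, find the largest rank k with p_(k) <= k * q / m, and
    reject the k smallest p-values, controlling the expected false
    discovery rate at q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = (np.max(np.nonzero(below)[0]) + 1) if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True  # reject the k smallest p-values
    return reject
```

Applying such a procedure to the p-values in a results table flags which nominally significant coefficients survive the multiple-testing correction.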

When we turn to subgroup effects by LRI type (public institutions in Table 5 and private institutions in Table 6), we see divergent results of the Benjamini and Hochberg procedure. At private institutions, all 16 of the statistically significant findings in Table 6 remain significant after accounting for the false discovery rate. At public institutions, however, 11 of the 19 statistically significant subgroup findings in Table 5 are possible false discoveries, with only the yield rate among public institutions with income-level no-loan programs showing significant results across more than one comparison group. This result is likely driven by the smaller sample size among public institutions. Consequently, we caution against strong reliance on subgroup results in Table 5 for public institutions by LRI type.

Discussion and Conclusion

In response to the underrepresentation of students from low-income backgrounds at selective colleges and universities in the United States, more than 50 four-year institutions have adopted LRIs to eliminate or limit loans in financial aid packages. Although there was a hiatus in the adoption of new LRIs between 2009–2010 and 2019–2020 that we suspect was a result of financial constraints imposed by the Great Recession, some universities expanded these initiatives (Brown University 2017; Rice University 2018), and Johns Hopkins University recently announced the adoption of a no-loan program (Johns Hopkins University 2018). With adverse financial repercussions stemming from COVID-19, however, other institutions may pull back on the generosity of their LRIs, as several did during the Great Recession (e.g., Dartmouth, Haverford). As institutions decide whether to adopt, expand, or contract LRI programs, it is important to understand the effects of such programs on a variety of outcomes.

Given the prominence, expense, and durability of such programs, it is noteworthy that previous research on LRIs across multiple institutions has focused primarily on a single outcome, enrollment of students by socioeconomic status, with mixed results (Hillman 2013; Rosinger et al. 2018; Waddell and Singell 2011). Our study builds on this prior work by examining a broader set of outcomes (i.e., borrowing, admission metrics, and racial/ethnic diversity); incorporating a greater number of years post-LRI adoption; and employing matching techniques to establish improved comparison groups.

We offer several key insights into the institution-level effects of LRIs. In terms of first-order effects on student borrowing, we find that LRI adoption reduces the share of borrowers by between 4 and 8 percentage points at private institutions, though we do not detect a significant change at public institutions. Adopting institutions do not experience changes in the average loan amount among borrowers at detectable levels.

The lack of change in the share of borrowers at public institutions may be the result of students continuing to borrow to cover their EFC. Prior to the adoption of an LRI, students may have needed to borrow to fill the gap between their EFC and the total cost of attendance. Upon adoption, a no-loan LRI would be committed to covering the difference between EFC and the total cost of attendance. Scholars have found, however, that students and their families often have challenges paying the EFC (Long and Riley 2007). As a result, even after implementation of an LRI at a public institution, a similar share of students may continue to borrow to fulfill their EFC (or institutionally determined contribution). While we do not detect a simultaneous decline in the average amount borrowed, our estimates on that measure are somewhat imprecise, leaving open the possibility of a slight decline in the average amount borrowed.

In general, a higher dosage of student loan relief than what public institutions offered may be required to shift institution-level student borrowing behaviors. No public institution implemented the most generous program type, an all-aid, no-loan program. In addition, nine of the fourteen income-level, loan cap programs (the least generous program type) were adopted by public institutions. Regardless of the explanation for the null findings on borrowing rates and amounts at public institutions, these results highlight the importance of examining whether observed borrowing behaviors align with institutions' goals for their LRI programs.

We find few significant effects of LRI adoption on admission metrics, apart from moderate evidence of 3 to 4 percentage point increases in yield rates at private adopters. Nevertheless, consistent with the asserted aims of LRIs, we find that such programs increase the number of Pell Grant recipients enrolling at both public institutions (4–8%) and private institutions (9–12%).

In perhaps our most striking finding, LRIs appear to reduce the number of URM students at public institutions by 15–23% relative to trends at comparison groups (with no detected differences among private adopters). Thus, while public institutions implement LRIs in a manner that advances their socioeconomic diversity goals, the policy adoption appears to come at the expense of racial/ethnic diversity goals. Progressing towards these twin diversity goals will require public institutions to resolve this tension. More broadly, these findings reinforce that policies focused on one population can have important implications for non-targeted groups–a valuable lesson for other college affordability initiatives, such as tuition-free college and merit aid programs.

Our findings regarding URM enrollment may be reflective of institutional changes in applicant recruitment strategies, perhaps to offset the lost tuition and fee revenue from low-income recipients of LRI funding. Indeed, recent research has shown that many universities disproportionately target their recruiting efforts to higher-income high schools with larger white student populations (Jaquette and Salazar 2018). Institutions may also be shifting admissions preferences towards low-income students and away from URM students. Because we controlled for the adoption of state affirmative action bans, this potential shift is not likely due to formal opposition to racial preferences in admission. We believe it is more likely that admission offices are responding to institutional pressures to demonstrate positive effects of LRI adoption by preferentially admitting low-income students. Our findings are consistent with a scenario in which admission offices place an added focus on low-income students–including those from non-minoritized racial/ethnic backgrounds–without correspondingly large advances in URM recruitment efforts.

Public institutions truly interested in expanding educational access to both low-income and URM students may need to examine other policy options to recruit, admit, and yield URM students, especially as they continue to adopt or expand LRIs. State policymakers who govern public universities, such as state higher education executive officers (SHEEOs), state governing boards, or even state legislatures could also consider regulation of in-state institution recruitment behavior to ensure low-income and URM students do not miss out on recruitment opportunities. Policymakers may want to create incentive packages to motivate recruiters to visit high schools with large URM populations.

Due to our ability to observe a longer post-adoption time period than prior studies, our event study analysis (Online Appendix Tables 3, 4; Online Appendix Figs. 2, 3) enables us to consider whether treatment effects change over time. We find evidence of several treatment effects that do indeed vary in the post-adoption years. Most importantly, the negative URM enrollment effects observed at public institutions appear to grow in magnitude over time. Using both the public flagship and high-endowment comparison groups, the small initial negative coefficients after 1 and 2 years of adoption grow substantially larger, with long-term reductions in URM enrollments of 35% and 26%, respectively, at 5+ years after adoption. We also observe growing magnitudes of effects over time in the reduction of borrowers, increases in yield rate, and increases in Pell Grant recipients at private institutions.

The positive impact of LRIs on Pell Grant recipient enrollment is consistent with the established literature on the positive enrollment effects of grant aid generally. Our findings indicate that institutional aid can be effectively targeted towards financially disadvantaged students and change their enrollment behavior, although we cannot identify whether these policies induce students to alter enrollment decisions on the margin of attending any college. We do, however, believe these findings have implications beyond institutional financial aid policy. Increases in the maximum Pell Grant award are financially similar to an income-level loan-reduction policy, as long as institutions do not concurrently increase prices. However, from a behavioral perspective, LRIs tend to be much simpler and more transparent than the complexity of applying for and receiving federal grant aid. If lawmakers were to reduce federal borrowing limits, that action would be somewhat analogous to the loan cap policies examined here; however, only wealthy institutions would be able to replace those loans with institutional grants.

While our study examines average effects of such policies, there may be substantial heterogeneity across institutions. Future research should attempt to explain what drives outcome differences across institutions even within LRI type. Only by understanding why similar programs exhibit different trends can policymakers understand how to best craft LRIs to the benefit of students and institutions.

LRIs offer institutions the prospect of increasing institutional access and diversifying their student bodies. However, given the findings of this paper concerning the trade-offs between URM and low-income student enrollment, institutions and policymakers alike should understand the implications of LRI design for equity and inclusion on measures beyond socioeconomic status. Educational leaders should also understand that even with the adoption of LRIs, many students will choose to take on loan debt. While such programs have demonstrated the potential to increase socioeconomic diversity and reduce institution-level borrowing, LRIs do not appear to be a panacea for the student loan crisis or racial homogeneity on college campuses.