1 Introduction

In the decade since the Global Financial Crisis (GFC), there has been an outpouring of research on the topic of “credit cycles,” meaning macroeconomic fluctuations that are driven in significant part by variations in credit supply. With this work as a backdrop, my goal in this talk is threefold. First, I will provide a very brief and selective survey of a handful of papers that highlight some of the key empirical facts about credit cycles. Second, I will try to interpret these facts through a conceptual lens, asking what theoretical mechanisms appear to be most consistent with the data. And third, I will pose the question of what policies, if any, can be helpful in moderating these credit-driven fluctuations in real activity. In this regard, it is important to stress that credit-supply shocks are relevant for understanding more than just crises. As will become clear, they play an important role in garden-variety recessions and slowdowns as well. And to the extent that any form of policy, be it regulatory or monetary, aims to moderate the business cycle, it also needs to attend to these less dramatic, but more frequent kinds of credit-induced fluctuations.

One caveat at the outset: I am going to spend some time dwelling on the limitations of financial regulation. However, this in no way implies that I oppose the general thrust of the financial regulatory reforms that were undertaken in the wake of the GFC. Quite to the contrary: I believe that these reforms have been both necessary and valuable, and in many cases—e.g., with respect to increases in bank capital levels—I would have preferred to go even further than what has been done to date. At the same time, and especially in a capital-markets-dominated economy like the USA, it is important to recognize that regulation is unlikely to be a panacea in taming the credit cycle. Given the gaps in regulatory coverage of different types of financial institutions and markets, and the inexorable forces of regulatory arbitrage, it may be asking too much to hope that regulation alone can achieve the degree of financial stability that is optimal for macroeconomic performance. This observation then raises the question of whether, in a second-best world, monetary policy should also be implemented with financial stability in mind.

2 Evidence on the Credit Cycle

Let us begin with the evidence. By way of overview, I want to emphasize two sets of stylized facts. The first, which has become increasingly well-known and widely accepted in recent years, is that if one looks at quantity data that capture the growth of aggregate credit, then, at relatively low frequencies, rapid growth in credit tends to portend adverse macroeconomic outcomes, be it a financial crisis or some kind of more modest slowdown in activity. Second, and perhaps less familiar, is that elevated credit-market sentiment also tends to carry negative information about future economic growth, above and beyond that impounded in credit-quantity variables.

To be more precise, I am equating “sentiment” with time-variation in expected returns; thus when I say that credit-market sentiment is high, this is tantamount to saying that the expected returns to bearing credit risk are low. And it turns out that elevated sentiment in this sense of the word is a pessimistic indicator for future economic activity. One interpretation of this pattern is that when sentiment is high, there is an increased risk of disappointing over-optimistic investors. And when investors are disappointed, this tends to lead to a sharp reversal in credit conditions that corresponds to an inward shift in credit supply, which in turn exerts a contractionary effect on economic activity. So again, the overall picture is that credit booms, especially those associated not just with rapid increases in the quantity of credit, but also with exuberant sentiment—i.e., aggressive pricing of credit risk—tend to end badly.

With respect to the quantity-oriented evidence, some of the most influential work is by Schularick and Taylor (2012), and Jorda et al. (2013), in papers aptly titled “Credit Booms Gone Bust”, and “When Credit Bites Back” respectively. In the former, they study 14 developed countries over the period 1870–2008: the USA, Canada, Australia, Denmark, France, Germany, Italy, Japan, the Netherlands, Norway, Spain, Sweden, Switzerland, and the UK. Using definitions of financial crises based on Bordo et al. (2001) and Reinhart and Rogoff (2009)—whereby crises are identified with bank runs and/or public interventions in the banking system—they find that the growth of bank loans in the preceding 5 years is associated with a significantly increased probability of a financial crisis. One interesting thing to note is the relatively low-frequency nature of the exercise. Simply put, it can take a while for a boom to turn into a bust. Or said differently, an econometrician looking at a country that is experiencing rapid growth in the quantity of bank credit might conclude that the country is in a vulnerable state but would not necessarily predict that it is going to face an imminent reversal in the next year.

In a similar spirit is more recent work by Mian et al. (2017). Like Schularick and Taylor (2012), and Jorda et al. (2013), they focus on a quantitative measure of credit expansion, in this case the growth of household credit to GDP. Using a sample of 30 mostly advanced economies, and an unbalanced panel running from 1960 to 2012, they find large negative effects of credit booms on future output: a one-standard-deviation increase in household debt to GDP (equivalent to 6.2 percentage points) in a 3-year interval leads to a 2.1% decline in GDP over the following 3 years. Notably, these results reflect not just the consequences of extreme financial crises but are also driven by more moderate non-crisis recessions and slowdowns.

Turning to the role of credit-market sentiment, López-Salido et al. (2017) investigate the role of sentiment in a US sample running from 1929 to 2015. To do so, they build on the work of Greenwood and Hanson (2013), who show that when credit spreads are narrow, and when the share of high-yield (or “junk bond”) issuance in total corporate bond issuance is high, the expected returns to bearing credit risk are predictably low—in other words, narrow credit spreads and an above-average high-yield share are indicative of elevated credit-market sentiment. López-Salido et al. (2017) then show that exuberant credit-market sentiment in year t − 2 is also associated with a decline in economic activity in years t and t + 1. Underlying this result is the existence of predictable mean reversion in market conditions. When credit risk is aggressively priced, spreads subsequently widen. The timing of this widening is, in turn, closely tied to the onset of a contraction in economic activity. Exploring the mechanism, they find that buoyant credit-market sentiment in year t − 2 also forecasts a change in the composition of external finance: net debt issuance falls in year t, while net equity issuance increases, consistent with the reversal in credit-market conditions leading to an inward shift in credit supply.

This focus on sentiment is extended by Kirti (2020), who examines a much broader sample encompassing 38 countries. His key finding concerns the interaction of growth in the quantity of credit with credit-market sentiment, where he follows Greenwood and Hanson (2013) and proxies for the latter with the high-yield share. In particular, following strong credit growth (a 30% increase in credit to GDP over a 5-year period), economic growth in the following 3 years is roughly 1.10% a year slower. However, if this increase in the quantity of credit is also accompanied by a two-standard-deviation increase in the high-yield share, growth over the subsequent 3 years slips by a further 0.80% per year. So again, the picture that emerges is that measures of both the quantity of credit growth, as well as the sentiment of credit-market investors, are important in capturing those aspects of credit booms which seem to be associated with subsequent economic reversals. Krishnamurthy and Muir (2020) present closely related findings, using a remarkably long panel that goes back 150 years and covers 19 countries.
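To get a feel for the magnitudes involved, the two annualized estimates above can be combined into a rough back-of-the-envelope cumulative number. The sketch below is illustrative arithmetic only; treating the quantity and sentiment effects as additive over the 3-year horizon is my simplifying assumption, not a claim from the paper.

```python
# Illustrative arithmetic only: combines the two annualized growth gaps
# reported in Kirti (2020), under the (assumed) simplification that the
# quantity and sentiment effects are additive over the 3-year window.
quantity_effect = 0.011   # growth ~1.10%/yr slower after a strong credit boom
sentiment_extra = 0.008   # further ~0.80%/yr when the high-yield share also spikes
years = 3

total_gap = (quantity_effect + sentiment_extra) * years
print(f"Cumulative growth shortfall over {years} years: ~{total_gap:.1%}")
```

Under these assumptions, the implied cumulative shortfall is on the order of 5 to 6 percent of GDP, which helps convey why sentiment-laden credit booms are a first-order macroeconomic concern.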

3 Theoretical Perspectives

This section draws heavily on the discussion in my earlier paper, López-Salido et al. (2017).

There are a number of theories which suggest that credit booms might lead to recessions or financial crises. It is useful to divide these theories into two categories: those based on financial frictions and those that feature an independent role for investor beliefs, or sentiment.

3.1 Theories Based on Financial Frictions

There is a long tradition in macroeconomics of using models with financial frictions to study aggregate fluctuations, with an early example being Fisher’s (1933) discussion of debt-deflation dynamics during the Great Depression. Modern treatments begin with Bernanke and Gertler (1989), Kiyotaki and Moore (1997), and Geanakoplos (2009), and share certain core ingredients. In particular, all agents have rational expectations, and due to various agency problems, debt contracts are the primary mode of external finance. However, there are frictions in the debt market as well, with the ability to borrow being constrained by either an exogenous debt limit or some function of endogenous borrower net worth or collateral value.

Taken together, these ingredients generate amplification and propagation effects: When a negative shock hits the economy, firms and households that have borrowed to finance past spending find their net worth reduced. Given frictions in the debt market, this forces them to reduce borrowing and to cut back on future investment and consumption. The associated reduction in aggregate demand in turn sets the stage for further declines in economic activity, leading to another round of reductions in net worth and collateral values, and so on.

Several recent papers extend this approach to deliver results that are particularly relevant in light of the GFC. Brunnermeier and Sannikov (2014) show that the amplification effects described above can be highly nonlinear, so that the economy’s response to a large external shock can be much stronger than its response to a smaller shock. Hall (2011) and Eggertsson and Krugman (2012) argue that the resulting downturn will be more protracted when the zero lower bound (ZLB) on interest rates interferes with the economy’s equilibrating process.

Given that agents in these models are rational, one question that arises is why they would take on so much debt in the first place if doing so makes the economy so fragile. The general answer proposed in the literature is that there are externalities in leverage choice: individual agents do not fully internalize the vulnerabilities that their borrowing decisions impose on the aggregate economy, and so they over-borrow from the perspective of a social planner. These externalities can be rooted in fire-sale effects (Lorenzoni 2008; Stein 2012) or in aggregate demand spillovers in the presence of a binding ZLB (Farhi and Werning 2016; Korinek and Simsek 2016).

Thus models in the financial-frictions genre can provide an account of both why economies with highly levered firms, households, or intermediaries can be vulnerable to exogenous shocks, and why the decentralized decisions of these actors can lead to high leverage ex ante, in spite of its potential costs. Moreover, with their emphasis on leverage as a state variable that captures the fragility of the economy, they provide motivation for the above-discussed empirical work that uses balance-sheet measures of leverage to predict economic downturns.

However, because they are fundamentally theories of amplification and propagation and rely on exogenous shocks to set the system in motion, this class of models typically has less to say about when and how a credit-driven downturn gets triggered. Relatedly, they are for the most part silent on the duration of the credit cycle. For example, if significant negative shocks only arrive infrequently, an econometrician observing that the economy is in a fragile high-leverage state—but having no further information about the probability of the exogenous shock hitting—might have to wait a long time on average before seeing the predicted downturn.

3.2 Behavioral Theories

An alternative approach to studying credit cycles builds on the narratives of Minsky (1977, 1986) and Kindleberger (1978) and on the work in behavioral finance which focuses on investors with imperfectly-rational beliefs. Two papers in this vein are Bordalo et al. (2018), and Greenwood et al. (2020). These papers can be thought of as trying to explain three sentiment-related aspects of the credit cycle: (1) why investors sometimes become overoptimistic, thereby driving credit spreads to unduly low levels; (2) what causes the optimism to reverse endogenously, leading to a subsequent tightening of credit conditions; and (3) the associated macroeconomic dynamics.

In these models, time-varying credit-market sentiment arises from the extrapolative beliefs of investors. An alternative view is that while mistaken beliefs may be important, they are not the whole story (Stein 2013). For example, it is often argued that the agency problem between intermediaries and their shareholders is intensified in periods of low interest rates, and this makes intermediaries more likely to “reach for yield”—that is, to accept lower premiums for bearing duration and credit risk—at such times.

3.3 Towards an Integrated View

While the financial-frictions and sentiment-based theories of credit cycles are logically distinct, it seems likely that the mechanisms they describe would be complementary. One way to understand how they might fit together is to note that the frictions-based models are well-suited to explaining why the economy can find itself in a fragile highly-leveraged state, but they typically rely on an exogenous shock to actually kick off a downturn. That is, they are effectively models of vulnerabilities, not triggers. Conversely, the sentiment-based approach, which emphasizes the endogenous unwinding of over-optimistic beliefs, comes closer to providing a theory of triggers. Indeed, this interplay between leverage and mispricing is central to Minsky (1977, 1986). From an empirical perspective, the vulnerabilities-plus-triggers framing suggests an interactive regression specification, much like that in Kirti (2020) and Krishnamurthy and Muir (2020). So the evidence in these papers would seem to be quite consistent with this integrated view of the credit cycle.

4 Implications for Regulation

What are the implications of the above discussion for financial regulation? A first point is that the rational theories of credit booms based on externalities in leverage choice immediately suggest a role for constraints on leverage; these constraints might in principle be applied either to intermediaries, like banks, or to financial products, like home mortgages. Thus these models provide a basis for some of the most familiar forms of regulatory intervention that we observe, such as time-invariant bank capital requirements. By contrast, the sentiment-based theories put the time-varying nature of the problem explicitly front and center. That is, there are times when sentiment is elevated—e.g., when credit spreads are narrow and there is a lot of low-quality issuance—and those are the times when it may be more important for policymakers to actively lean against an incipient credit boom. Thus one question to bear in mind when evaluating a regulatory regime is whether and how it makes sense to try to implement such time-varying leans.

As noted at the outset, I am going to highlight what I see as some of the worrisome limitations of the post-GFC regulatory regime. Again, this should not be taken to imply that there has not been important progress since the GFC. Certainly the significantly-increased levels of equity capital in the banking system are a welcome development. But it is important to focus on where the gaps still remain, and to ask about the realistic prospects for filling these gaps. In this regard, it is worth emphasizing that there is substantial heterogeneity across countries in what one might call the regulatory-possibilities frontier, on at least two dimensions. One is the political economy environment. It is simply more difficult, for a variety of reasons—including the relative power of different interest groups—to implement certain types of regulations in some jurisdictions than others. Thus for example, while a number of countries have implemented time-varying loan-to-value or debt-to-income requirements on home mortgage loans, we have not seen anything similar in the USA, and it does not appear that we are likely to do so anytime soon. A second constraint on the efficacy of regulation is the extent to which an economy is bank-dominated. A large share of the regulatory apparatus is oriented around the formal banking sector, and this approach works less well when borrowers and lenders can easily migrate to the less-easily regulated capital markets.

With these observations in mind, here are several potential areas of concern with respect to the current financial regulatory environment.

4.1 Bank Capital: The Importance of Dynamic Recapitalization

As emphasized by Hanson et al. (2011), and Greenwood et al. (2017), higher levels of bank capital, while undeniably necessary and helpful, are not sufficient by themselves to prevent bank credit crunches in the wake of large shocks. It is also important for the banking sector to have the ability to promptly recapitalize, either by curbing payouts to shareholders, raising new equity, or both.

A simple example illustrates the point. Suppose that regulators require all banks to maintain equity capital equal to 10% of their assets. Suppose further that the worst-case scenario is that loan losses are 5% of assets. It follows that banks can never be insolvent, and—to the extent that bank runs are driven by a fear of insolvency—the risk of runs and banking panics should also be largely mitigated. In this sense the regulation can be said to be effective. Yet consider what happens after a realization of losses of 4% of assets. If banks do not recapitalize by issuing new equity, their assets will have to fall by 40% in order to maintain compliance with the regulation, resulting in a severe contraction in the supply of bank credit. This is the “leveraged losses” mechanism presciently highlighted by Greenlaw et al. (2008) in the early stages of the GFC. Baron et al. (2020) provide evidence for this mechanism, using a long cross-country panel to document that sharp declines in the market value of bank equity tend to lead to significant contractions in economic activity, even absent banking panics.
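The arithmetic behind this example is worth making explicit. The sketch below reproduces the numbers above (a 10% capital requirement and a 4% loss) and shows why the implied balance-sheet contraction is so large; the normalization of initial assets to 100 is purely for illustration.

```python
# The "leveraged losses" arithmetic from the example above: a bank must
# hold equity equal to 10% of assets. After a loss of 4% of assets, how
# far must the balance sheet shrink if no new equity is raised?

CAP_REQ = 0.10                # required equity / assets
assets0 = 100.0               # initial assets (normalized to 100)
equity0 = CAP_REQ * assets0   # bank starts exactly at the requirement
loss = 0.04 * assets0

equity_after = equity0 - loss           # 10 - 4 = 6
max_assets = equity_after / CAP_REQ     # 6 / 0.10 = 60 is the largest compliant balance sheet
contraction = 1 - max_assets / assets0  # fall relative to the original balance sheet

print(f"Assets must fall by {contraction:.0%}")  # -> 40%
```

The key point is the 10-to-1 multiplier: because equity supports ten times its own value in assets, each dollar of unreplaced losses forces ten dollars of asset shrinkage, which is why prompt recapitalization matters so much.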

The experience in the GFC underscores the importance of dynamic recapitalization. In the USA, banks were strikingly slow to curtail payouts to common shareholders and took only limited steps to raise new equity during 2007 and early 2008. Between 2007Q1 and 2008Q2, publicly traded banks—which would later charge off $216 billion of loans and incur $311 billion of provision expenses from 2008Q3 to 2009Q4—paid out $136 billion in cash to shareholders in dividends and repurchases. During this same period, they raised only $68 billion of new equity. With the benefit of hindsight, it seems clear that the impact on the real economy could have been attenuated if regulators had responded earlier, by clamping down on payouts and compelling more common equity issuance, say in the fall of 2007 or the spring of 2008.

My ongoing worry is that, painful as it was, it is not clear whether this lesson has been fully learned. That is, I am not at all convinced that, if we were to have another severe shock to the economy and the financial system, regulators—even with their post-GFC stress testing authorities in hand—would this time around act more aggressively to cut off payouts to bank shareholders and to encourage new equity issues.

4.2 Regulatory Arbitrage

One of the most difficult challenges facing any attempt to regulate the behavior of financial institutions and markets is that of regulatory arbitrage—i.e., gaming of the rules. In the banking context, concerns have centered on banks’ attempts to circumvent the risk-based capital regime that was in place in the years leading up to the GFC. These concerns have led some observers to advocate for a more prominent role for an un-risk-weighted leverage ratio, which treats all bank assets (including e.g. Treasury securities) the same for the purposes of assessing a capital requirement. The premise is that if a bank has a high ratio of unweighted assets to risk-weighted assets, this is a clue that it may be gaming the risk-based regime, and so one might want to impose another constraint that limits this gaming.

However, in Greenwood et al. (2017), we argue that simply adding more rules is not the best way to deal with the problem. Rather, if one wants to attack regulatory arbitrage most effectively, it may be necessary to change the timing of the interaction between the regulator and the banks. The fundamental problem with an entirely rules-based system is that the regulator moves first, setting the rules in stone, after which the bank gets to move second, optimizing against the now-rigid and therefore easily-exploitable set of rules. Ideally, to curb arbitrage, it would help to let the regulator have another go at the problem, after having observed the specific actions that the bank has taken in light of the ex ante rules, actions that could not have been contracted upon in advance.

Consider a concrete example: suppose that the only ex ante rule in place is a conventional risk-weighted capital requirement. Both the capital requirement and the risk weights associated with this rule have been fixed, and do not change from year to year. But consistent with the worries that have motivated the focus on the leverage ratio, the regulator observes that ex post, once the rule is in place, banks are loading up to an unexpected degree on a particular type of loan that has a low risk weight in the rule. Moreover, the regulator suspects that this is in part because this loan type is exposed to a source of risk that was not adequately captured in the ex-ante risk weighting scheme, i.e., to a risk that was not contractible ex ante, but that has now been revealed to be important by the banks’ actions.

Greenwood et al. (2017) argue that a better response is not to impose another rigid ex-ante rule as a patch on the first, but rather to use the stress-testing process to fill in this ex-post-observable contingency after the fact. For example, the stress test in a given year could be designed to make particularly pessimistic assumptions about losses on any asset type that has grown unexpectedly rapidly in the past year or two, or where the associated bankers or traders are seeing unusually large increases in compensation. If done this way, much of the year-to-year variation in stress-test scenarios would be driven not only by changes in the macroeconomic environment, but also by supervisors’ observations of granular changes in the composition of bank portfolios.

There is a tension here, however. This sort of more discretionary approach is likely to invite complaints from banks about the stress-testing process being non-transparent and lacking in due process. Consider how a bank might respond if it is told that it is facing tougher assumptions on loss rates in a given year simply because it has been particularly profitable in some areas or is paying some of its employees in these areas generously. At the extreme, such complaints could manifest in legal challenges under the Administrative Procedure Act. And even if they did not, the associated pushback and political pressure might ultimately weaken regulators’ hands to the point where the discretionary approach becomes ineffective. And indeed, consistent with these concerns, the direction of travel in the USA in the last few years has been towards more transparency and predictability in the stress-testing process. As a result, stress-testing has become something closer to an alternative implementation of a hard-coded capital rule and is likely to be less helpful as a discretionary supervisory tool to combat regulatory arbitrage.

4.3 Lack of Time-Varying Macroprudential Tools

As noted above, theories of the credit cycle that emphasize time-variation in sentiment point to the potential usefulness of time-varying macroprudential regulatory tools—with the basic idea being that one may want to lean more aggressively against credit creation when sentiment is elevated, and lending standards are eroding. One such tool that emerged in the wake of the GFC is the so-called counter-cyclical capital buffer, or CCyB, which gives banking regulators the authority to raise capital requirements in an economic expansion, or when signs of excess are emerging in credit markets. The Federal Reserve’s website describes it as follows: “The (CCyB) buffer is a macroprudential tool that can be used to increase the resilience of the financial system by raising capital requirements on internationally active banking organizations when there is an elevated risk of above-normal future losses and when the banking organizations for which capital requirements would be raised by the buffer are exposed to or are contributing to this elevated risk—either directly or indirectly. The buffer could also help moderate fluctuations in the supply of credit. The CCyB is designed to be released when economic conditions deteriorate, in order to support lending and economic activity more broadly.”

Interestingly, while a number of other developed countries have chosen to maintain non-zero levels of the CCyB in the current environment, the USA has not activated it, so that the CCyB remains at zero. One can speculate as to the reasons for this difference, but as a practical matter, it suggests that there are constraints—at least in some jurisdictions—on the ability to deploy time-varying macroprudential tools. And if we run into a major negative shock, the USA, unlike these other countries, will not have any room to cut the CCyB, a maneuver that can usefully help to support lending in an adverse scenario.

Another time-varying macroprudential tool that US bank regulators turned to in 2013 was supervisory guidance on leveraged lending, which sought to deter banks from originating poorly-underwritten leveraged loans. However, in October of 2017, the Government Accountability Office (GAO) determined that the 2013 guidance was a “rule” under the Congressional Review Act, meaning that it would have to be submitted to Congress and the GAO for review and potential disapproval. This ruling effectively took the leveraged-lending guidance out of play as an active policy tool.

The bottom line is that, at least as far as the USA is concerned, regulators appear to have little in the way of operational, time-varying macroprudential tools at their disposal. Again, this conclusion is jurisdiction-specific, highly dependent on political-economy factors, and does not apply more generally. But to the extent that it is important to have such tools to moderate the credit cycle, the practical reality is that this leaves the US economy at something of a disadvantage.

4.4 What’s Happening Outside the Banking Sector?

In a capital-markets-dominated economy like the USA, much of the financial stability discussion necessarily revolves around activities that take place outside of the formal banking sector. The term “shadow banking” is often used to describe these activities, and it can be a little imprecise. A broad definition might take shadow banking to be any form of credit creation that occurs outside the banking system, e.g., in the corporate bond market. A sharper and narrower definition—and one that may be more relevant for thinking about macroprudential policy—would associate shadow banking with those forms of credit creation that are bank-like in that they are financed by runnable liabilities. In the years leading up to the GFC these might have been structured investment vehicles that were financed by asset-backed commercial paper, to take just one example.

With that narrower definition in mind, it is useful to think about the rapid growth in recent years of the corporate bond market and the leveraged loan market. And bear in mind that some of this growth may be explained by lending to large and medium-sized firms migrating away from the banking sector as capital requirements there have gone up. Leveraged loan issuance in particular has been booming of late; these are loans that are typically structured and syndicated by banks but most often wind up on the balance sheet of other investors, be they collateralized loan obligations (CLOs), pension funds, insurance companies, or mutual funds.

To what extent should we think of any of this leveraged-lending boom as representing shadow banking in the stricter sense of the term, and hence a relatively greater cause for concern, all else equal? Importantly, there has been rapid growth in the share of the leveraged-loan market that is held by open-end mutual funds and exchange-traded funds (ETFs)—some estimates put this share as currently around 20%. On the one hand, these vehicles do not have “leverage” in the conventional sense, in that they do not use borrowed money—the investors owning these funds have an equity claim. On the other hand, this equity claim is redeemable on demand—investors in open-end funds can withdraw their money the next day.

Moreover, recent research has made clear that this demandability feature involves a first-mover advantage, much like in the classic bank run model of Diamond and Dybvig (1983). For example, Feroli et al. (2014) show that redemptions from junk-bond mutual funds today forecast declines in the funds’ net asset values (NAVs) going forward; this sort of predictability is precisely the motive for any given investor to rush to get out before others do. One way to think of it is this: consider a bond fund, and suppose it is invested 95% in relatively illiquid junk bonds, and 5% in cash. When the first investors start redeeming, the fund accommodates those redemptions out of its cash buffer, as it does not want to have to immediately sell the illiquid bonds. But over time, it has to rebuild its cash buffer by gradually selling off some of the bonds. Thus we know that the bond sales are coming, and that this will eventually put downward pressure on the NAV, which is the price at which future investors can exit.
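The mechanics can be made concrete with a stylized two-step sketch. The numbers here are my own illustrative assumptions (the 95/5 bond-cash split from the example above, plus an assumed 2% fire-sale discount on bond sales); they are not estimates from Feroli et al. (2014).

```python
# Stylized two-step sketch of the first-mover advantage in an open-end
# bond fund. Assumed numbers: 95 in illiquid bonds, 5 in cash, 100 shares
# outstanding, and a 2% fire-sale discount when bonds must be sold.

bonds, cash, shares = 95.0, 5.0, 100.0
nav0 = (bonds + cash) / shares    # 1.00: the price the first redeemer receives

# Step 1: an early investor redeems 5 shares, paid out of the cash buffer
# at today's NAV, so no bonds are sold yet and the NAV is unchanged.
cash -= 5.0 * nav0
shares -= 5.0

# Step 2: the fund rebuilds its buffer by selling bonds with a book value
# of 5 at a 2% discount; the realized loss hits the remaining shareholders.
discount = 0.02
bonds -= 5.0
cash += 5.0 * (1 - discount)
nav1 = (bonds + cash) / shares    # below 1.00: the price later redeemers receive

print(f"early redeemer NAV = {nav0:.4f}, later NAV = {nav1:.4f}")
```

The early redeemer exits at the full NAV of 1.0, while the fire-sale loss is borne entirely by those who remain. That payoff asymmetry is exactly what rewards running first.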

This is what creates a first-mover advantage. And again, a number of recent papers have documented the existence of this effect. We do not yet know just how important it will be in a severe stress scenario, and whether it will be powerful enough to create truly destabilizing run-like dynamics that threaten aggregate credit supply. I will instead make a more modest prediction: if we do in fact have a significant problem with these vehicles, we’ll look back and say: “Well, the assets were essentially illiquid loans, and the liabilities were all demandable—was this not kind of like a bank? And should we not have been more worried about the resemblance?”

4.5 Restrictions on the Fed as a Lender of Last Resort

One final concern—and I think this was simply a policy error—is that the Dodd-Frank Act has made it more difficult for the Fed in its role as lender of last resort (LOLR) to extend emergency credit to nonbank financial intermediaries such as broker dealer firms. With the memories of Bear Stearns and Lehman Brothers still vivid, post-GFC regulation has appropriately recognized the run-like vulnerabilities inherent in the broker-dealer model and has imposed more stringent regulation on broker-dealers; the largest ones are now located inside bank holding companies and therefore subject to the associated set of capital and stress-testing rules. This is all to the good. But these same run-like vulnerabilities are precisely what give rise to a need for an LOLR, and just as with traditional banks, regulation is complementary to, not a substitute for, an LOLR mechanism. I worry that the lack of an LOLR for nonbanks that face significant run risks will present a significant handicap for policymakers in responding to the next crisis.

5 Implications for Monetary Policy

5.1 A Second-Best Role for Monetary Policy?

The next question to ask is whether there is a role for monetary policy in managing the credit cycle. The conventional central-banking view, as expressed by, e.g., Yellen (2011) and Bernanke (2015), is a decisive no. Rather, according to this view, monetary policy should focus on its traditional inflation-employment mandate and should leave matters of financial stability to regulatory tools. As Bernanke (2015) puts it:

What about monetary policy? Notwithstanding the critical importance of maintaining financial stability, I do not at this point see a very strong case for diverting monetary policy, either in the United States or elsewhere, from the pursuit of its macroeconomic objectives (inflation and employment)…. Under most circumstances, monetary policy is just too blunt a tool for addressing financial stability risks… More-targeted policies, such as financial regulation, should accordingly be the first line of defense under most circumstances.

To be clear, I think this view is almost certainly right in a world where financial regulation is highly effective. However, for the reasons outlined above, I am inclined to be more skeptical with respect to this premise than Bernanke and Yellen would appear to be, at least in the current US context. This is of course not to say that we should not make every possible effort to improve our regulatory apparatus so as to mitigate its existing weaknesses. But taking the world as it exists today, I am more pessimistic that we can expect financial regulation to satisfactorily address the booms and busts created by the credit cycle entirely on its own. This would seem to leave open the possibility of a role for monetary policy—albeit a second-best one—in attending to the credit cycle.

5.2 But Is the Fed Smarter than the Market?

Almost by definition, any attempt by a central bank to use a time-varying tool like monetary policy to address the credit cycle would involve making a judgment about the current state of credit-market sentiment. This observation leads to an oft-stated objection: that elevated sentiment cannot be reliably assessed in real time. For if it could, hedge funds and other investors would have huge incentives to take contrarian positions. Put simply, how can the Fed have the information and conviction to act as a market timer, leaning against overheated markets, when other highly sophisticated market participants cannot or will not?

This is the point in the debate where a limits-to-arbitrage perspective, as in Shleifer and Vishny (1997), is especially helpful. According to this view, what holds back hedge funds and other sophisticated arbitrageurs from betting aggressively against—and thereby correcting—certain kinds of long-horizon macro mis-pricings is not a scarcity of reliable predictive information, but rather the constraints of their organizational form. In particular, these arbitrageurs are typically organized as open-ended or partially open-ended investment vehicles, and often take on considerable short-term debt to finance their positions. This means that if, for example, they short a particular market and the trade goes against them in the near term, they can be forced to liquidate their positions. Thus they simply are not well-suited to taking large undiversifiable bets that can take multiple years to converge.

Interestingly, and consistent with this limits-to-arbitrage perspective, non-financial firms appear to be aggressive and generally successful macro market timers in their capital-structure decisions. This shows up in a variety of market settings. When aggregate equity issuance is high, stock-market index returns in the following few years tend to be lower than average, suggesting that firms manage to sell more of their shares when these shares are relatively overvalued (Baker and Wurgler 2000). In a similar vein, when consolidated government debt maturity shortens (as happens, e.g., under a policy of quantitative easing), non-financial firm debt maturity lengthens, effectively taking the other side of the trade, and Treasury-market term premiums decline (Greenwood et al. 2010). When aggregate issuance by junk-rated firms is high, the realized returns on junk bonds relative to Treasuries decline over the next few years (Greenwood and Hanson 2013). And finally, non-financial firms appear to engage in aggressive cross-market arbitrage based on macro market conditions, borrowing to buy back shares when term premiums and credit spreads are low (Ma 2019).

What explains the striking willingness—and apparent success—of non-financial firms to act as macro-market arbitrageurs? Presumably this is not because corporate CFOs or treasurers are smarter or have better access to information than hedge-fund managers, but rather because they have a variety of organizational and structural advantages. In particular, they are operating inside closed-end firms, so investors cannot withdraw their money. In addition, they do not have to mark to market or settle up their arbitrage positions if these positions move against them in the short run. If a hedge fund shorts the stock market and it continues to go up, the fund faces investor redemptions and margin calls. If a non-financial firm acts on the same market view and issues more of its own shares or uses its equity to finance a stock-for-stock merger, its only obligation is to pay out dividends over time on the newly created shares. So if its stock price continues to rise in the near term, there is no pressure brought to bear on the firm.

This way of thinking leads to two immediate implications for central bankers: first, as a matter of theory, it is not at all obvious that effectively leaning against credit-market sentiment requires them to be smarter, or better-informed than sophisticated investors. And second, the data required to do so may in many cases be simple metrics that policymakers already look at on a regular basis, such as credit spreads, measures of lending standards, and issuer quality. Patterns of non-financial corporate issuance, such as the high-yield share emphasized by Greenwood and Hanson (2013), are especially interesting in this regard, because they are likely to be relatively immune to the Lucas (1976) critique: even if the underlying model of the world changes, so long as non-financial firms are aware of this change, their issuance decisions should continue to reveal their views of which segments of the markets are most overvalued.

5.3 How Would Monetary Policy Look Different?

If monetary policymakers did choose to attend to credit-cycle considerations, the resulting differences might be quite subtle under most circumstances. As just noted, the sorts of data that one would look at are already carefully considered in most policy discussions. The distinction is this: a traditional policymaker following Yellen (2011) or Bernanke (2015) might look at unusually narrow credit spreads and conclude that, all else equal, they can set the short-term policy rate at a somewhat higher level and still attain their desired targets for inflation and unemployment, because broader financial conditions are easier than the policy rate alone would suggest. By contrast, a non-traditional policymaker concerned with the credit cycle might look at the same unusually narrow credit spreads—along with a boom in high-yield bond issuance and leveraged lending, and a general decline in lending standards—and be willing to accept falling a bit short of their desired current targets for inflation and unemployment, because they worry that a further cut in the policy rate could lead to an even further heating up of credit-market sentiment.

Crucially, this need not be because the non-traditional policymaker has a third mandate and cares about the state of the credit market per se. Rather, as argued by Stein (2014), it is because they explicitly recognize an intertemporal aspect to their traditional dual inflation-employment mandate: in the words of Jordà et al. (2013), an overly exuberant credit market today can “bite back” with negative consequences for output and employment in the future. Taking this intertemporal reversal effect into account can lead the non-traditional central banker to put a different weight on variables like credit spreads, issuer quality, and lending standards than would the traditional central banker.
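One stylized way to express this intertemporal view of the dual mandate is as a loss function. The notation below is my own illustrative shorthand, not the formulation in Stein (2014): the policymaker chooses the rate $i_t$ to minimize

```latex
% Stylized objective; notation is illustrative shorthand, not from Stein (2014).
\min_{i_t}\;
\underbrace{(\pi_t - \pi^*)^2 + \lambda\,(u_t - u^*)^2}_{\text{standard dual-mandate loss}}
\;+\;
\underbrace{\delta\; p\big(s_t(i_t)\big)\;
\mathbb{E}\big[(u_{t+k} - u^*)^2 \,\big|\, \text{credit bust}\big]}_{\text{intertemporal ``bite back'' term}}
```

where $s_t$ is credit-market sentiment (increasing in accommodation, i.e., in lower $i_t$), $p(\cdot)$ is the probability that today's exuberance ends in a future bust, and $\delta$ is a discount factor. The traditional policymaker effectively sets the second term to zero; the non-traditional one does not, which is why the same credit-spread reading can map into a different rate setting.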

As an anecdote illustrating that the core challenge for monetary policymaking may lie not so much in recognizing when markets are overheating as in deciding how much weight to put on that information, consider the following prescient observation from then Fed Governor Don Kohn at the March 2004 FOMC meeting:

A second concern is that policy accommodation—and the expectation that it will persist—is distorting asset prices. Most of this distortion is deliberate and a desirable effect of the stance of policy. We have attempted to lower interest rates below long-term equilibrium rates and to boost asset prices in order to stimulate demand. But as members of the Committee have been pointing out, it’s hard to escape the suspicion that at least around the margin some prices and price relationships have gone beyond an economically justified response to easy policy. House prices fall into this category, as do risk spreads in some markets and perhaps even the level of long-term rates themselves, which many in the market perceive as particularly depressed by the carry trade or foreign central bank purchases. If major distortions do exist, two types of costs might be incurred. One is from a misallocation of resources encouraging the building of houses, autos, and capital equipment that will not prove economically justified under more-normal circumstances. Another is from the possibility of discontinuities in economic activity down the road when the adjustment to more sustainable asset values occurs. Neither of these concerns, in my view, is sufficient to overcome the arguments for remaining patient awhile longer.

As this passage indicates, Kohn and his FOMC colleagues appear to have recognized some of the risks that were emerging in credit and housing markets by early 2004, as well as the role that monetary policy might have been playing in stoking those risks. From today’s vantage point, with the benefit of 20/20 hindsight, this recognition perhaps did not influence their view of the appropriate monetary policy stance as much as it should have. But again, some of the signs of market overheating were fairly clear even early in the boom and were well noted in real time.

If monetary policymaking were indeed to move in the direction of attending to the credit cycle, how big would the resulting differences in policy settings be? Unfortunately, I do not have anything useful to say quantitatively on this score. But it is possible to venture some broad qualitative insights. Consider two different macroeconomic scenarios. First, take the one that prevailed in September 2012, when the Fed initiated QE3, its third round of quantitative easing: at that time the (U3) unemployment rate stood at nearly 8%. Given the enormous costs associated with such high unemployment, any financial-stability risks associated with QE3 would loom much less large in relative terms, so taking them into account might not be expected to lead to a significantly different policy setting. Simply put, if unemployment is 8% and you are not courting some financial-stability risk with aggressive monetary policy, you are probably not trying hard enough.

By contrast, think about where we are today, in November of 2019. Unemployment is at a historically low 3.5%, so the Fed is doing about as well as one could hope on the employment leg of its mandate. Inflation, however, has been persistently a little below the 2.0% target, with year-on-year readings for core PCE inflation recently in the ballpark of 1.7%. In this configuration, how aggressively should accommodative policy be used to try to push inflation back up to the 2.0% target? Should the Fed be willing to cut rates aggressively to do so? Here I would argue that with both elements of the mandate currently closer to target, and the marginal costs of any shortfalls therefore lower, the relative weight of intertemporal financial-stability considerations looms larger. In other words, the Fed might not want to pull out all the stops just to try to move inflation up by 30 basis points, if doing so means further stoking credit-market overheating and thereby raising the probability of a credit-driven recession a couple of years down the road. This is particularly true to the extent that one thinks the Phillips curve is very flat, so that it would take a lot of monetary stimulus, and presumably a significant impact on financial markets, to move the needle much on inflation.

Again, these are only qualitative observations. I will not venture to guess whether, even in my second example, the implied adjustment in the optimal policy rate setting that would come from explicitly taking into account the intertemporal effect of easy policy on output and employment is on the order of 10 basis points or 100. Similarly, if I could send Don Kohn back in a time machine to March of 2004, I’m pretty sure that I would encourage him to raise the policy rate in the hopes of slowing down the expansion of the pre-GFC credit bubble. But I honestly do not have a good sense for by how much.

6 Conclusions

I suspect that it will take a good deal of further research before we are anywhere close to a professional consensus on these more quantitative questions. My only real hope here is to have made the case for keeping the door open to this much-needed work. In other words, given what we know about the economic costs associated with credit booms and busts, and given what I see as the practical limitations of financial regulation in moderating the credit cycle, I think it is important to be open-minded about bringing other tools to bear, even if these other tools, like monetary policy in particular, are also not perfectly suited to the task. When it comes to dealing with the credit cycle, the stakes are high and we are deep in a second-best world, so we need to be accordingly pragmatic and willing to consider attacking the problem on multiple fronts.