Introduction

The real estate market is a natural setting to test predictions from information economics. This decentralized market is characterized by complex assets trading infrequently with high transaction costs. Because the market is thin for units in any particular feature space, researchers can reasonably assume that trading occurs in an environment of imperfect information. And within this setting, one party to the transaction commonly has superior information. Consider these close parallels between real estate trading and parables drawn from classic articles. Similar to the market for used cars in Akerlof (1970), homeowners know more about their properties than prospective buyers. Landlords, like firms hiring workers in Spence (1973) and Stiglitz (1975), do not know the traits of potential tenants. And certain groups of participants in real estate markets, such as locals and real estate agents, hold information advantages in a manner analogous to the grain traders with insider information in Milgrom and Stokey (1982).

This paper reviews research on information frictions as part of the Journal of Real Estate Finance and Economics special issue on real estate market efficiency. (We use the term “frictions” as a catchall to describe information that is imperfect, costly, or asymmetric, and also strategies that market participants pursue to profit on, or even out, asymmetries in information.) The content is organized into three parts that span the real estate marketplace: “Part 1: Private Markets,” “Part 2: Public Markets,” and “Part 3: Brokerage Markets.” Although we focus on empirical research, each part begins with a description of sources in economic theory for the testing we cover. Beyond giving context to the articles in the special issue, the purpose of this review is to provide a resource for researchers that compares recent results with influential earlier findings and suggests areas where more work is needed.

The real estate market has been the subject of sustained research effort on information frictions for over thirty years. Due to the scope of the literature, especially in recent years, our coverage is extensive by nature and selective by necessity. Although the reference list of about 400 works includes over 70 from just 2019 and later, the treatment of a particular topic may still highlight only a representative subset of papers. And we do not cover several information topics that are important and interesting enough to merit entire reviews of their own. Noteworthy omissions include time-series tests for the information content of past prices, housing search, property market cycles, real estate development and debt markets, and behavioral real estate.Footnote 1

The policy implications of the research in this review are profound. The literature shows that real property never trades with full information and “even small information costs can have large consequences,” in the words of Stiglitz (2000, p. 1443). Real estate is the largest asset class in the world (Savills World Research, 2017). By examining the effects of information frictions on real estate transactions, researchers have improved our understanding of potential market failures and corrections that can improve the functioning of this important market.

Part 1: Private Markets

Selected Theory

If real estate assets trade under conditions of imperfect or asymmetric information, then what are the consequences according to economic theory? Before turning to specialized real estate models for answers, we describe a selection of seminal works from the general economics of information listed in Panel A of Table 1. Our coverage is limited to just those articles cited commonly by the empirical research we review. (More extensive surveys are listed in Panel B for interested readers.) An early view treats information frictions like any other transaction cost, but the literature makes clear that this view is overly simplistic: subsequent papers show that equilibria in markets with information frictions may have properties that depart substantially from the benchmark Arrow and Debreu (1954) model with costless information exchange.

Table 1 Theory Papers (Part 1), Private Real Estate Markets

One of the first consequences of information frictions developed in the literature is that the “law of one price” does not hold. Stigler (1961) argues that, because search is costly, there can be a distribution of prices across sellers in a decentralized market, even for a homogeneous good. Indeed, Stigler describes price dispersion as both a manifestation and measure of “ignorance in the market” (p. 214). Sixty years on, real estate researchers continue to test this postulation by examining whether exogenous increases in information quality reduce transaction price dispersion.Footnote 2

Another consequence of information frictions is that the “first and second fundamental theorems of welfare economics,” which concern the efficiency of markets in equilibrium, do not hold.Footnote 3 There have been two dominant approaches to characterizing market equilibria in the presence of asymmetric information: 1) a pooling (price) equilibrium based on Akerlof’s (1970) model of the used car market, and 2) a separating (quantity) equilibrium with self-selection based on the Rothschild and Stiglitz (1976) model of the insurance market. Researchers have applied both classes of models to the real estate market and examined the consequences of different information structures. We discuss some of those extensions later in the review. For now, we describe how real estate researchers have drawn testable hypotheses directly from Akerlof (1970).

Akerlof (1970) begins with the contention that buyers in resale markets have reduced willingness-to-pay if they cannot fully judge the quality of a product prior to delivery. In a vicious cycle, reduced prices drive sellers of products with above-average latent quality off market, increasing the share of low-quality products (adverse selection). If the market does not break down, remaining sellers of high-quality goods subsidize the low-quality types in the pooling equilibrium that emerges (they incur a deadweight loss), because buyers mistakenly perceive the “peaches” to be “lemons.” Real estate researchers have cited Akerlof’s adverse selection model when examining whether innovations that reduce information asymmetries have 1) a negative effect on the price of properties with latent material defects, and 2) a positive effect on the overall price level in the market.
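
To fix ideas, consider a standard textbook parameterization of the lemons mechanism (our illustration; Akerlof’s original specification is richer). Suppose quality θ is uniform on [0, 1], a seller of quality θ accepts any price p ≥ θ, and buyers value quality at (3/2)θ. At any candidate price p, only qualities θ ≤ p remain on offer, so

\[
\mathbb{E}\left[\tfrac{3}{2}\theta \,\middle|\, \theta \le p\right] = \frac{3}{2}\cdot\frac{p}{2} = \frac{3p}{4} < p,
\]

and willingness-to-pay falls below every candidate price: the market unravels completely. Under other parameterizations a pooling price survives, with the remaining high-quality sellers subsidizing the low-quality types as described above.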

Akerlof (1970) does not consider the possibility that market participants might act to reduce information asymmetries. The model assumes that credible disclosure is impossible because costs are prohibitive and buyers suppose that sellers misrepresent quality. Beginning with Spence (1973), papers in one strand of follow-on literature examine whether sellers can take costly actions to credibly signal that their product quality is high. In these models, the sellers of high-quality products suffer deadweight loss because 1) buyers require a costly signal to distinguish quality, and 2) the signal must be more costly for the low-quality seller than for the high-quality seller to prevent the low-quality type from misrepresenting. An information signal with these characteristics is referred to as “dissipative” in the literature. A second strand, with Stiglitz (1975) at its root, asks whether an uninformed party can use costly action to screen for quality.
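
In stylized notation (ours, not any particular paper’s), the signaling logic reduces to a pair of incentive constraints. Let s be the level of a costly signal, Δp the price premium paid to a seller perceived as high quality, and c_H(s) < c_L(s) the signaling costs of the high- and low-quality types. A credible separating signal must satisfy

\[
c_H(s) \le \Delta p < c_L(s),
\]

so that the low-quality type will not mimic while the high-quality type still finds signaling worthwhile. The cost c_H(s) > 0 borne in equilibrium is the dissipative loss referred to above.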

Stemming from the seminal papers on signaling and screening, the subject of voluntary disclosure has received significant theoretical treatment. Under the assumption that disclosure can be verified, Milgrom (1981) shows that a seller with private information about the quality of a product will choose to disclose that information to a prospective buyer in a game of persuasion, because failure to disclose would lead the buyer to infer even worse quality.Footnote 4 This finding is known as the “unraveling result” in reference to the way a strategy of concealing low-quality features unravels at every sequential equilibrium in the game. In the real estate research, papers testing for strategic disclosure in listings have used the unraveling result—i.e., the full disclosure of private information predicted by Milgrom—as a null hypothesis. When the null is (inevitably) rejected, authors document which assumption(s) from the Milgrom model they suspect are violated in their experimental settings.
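
The unraveling logic can be sketched in a few lines (a simplified version of Milgrom’s argument). Suppose quality θ ∈ [θ_min, θ_max] is verifiable if disclosed, and conjecture an equilibrium in which sellers with θ < θ* stay silent. Buyers then price silence at

\[
p_{\varnothing} = \mathbb{E}[\theta \mid \theta < \theta^{*}] < \theta^{*},
\]

so a seller with quality just below θ* strictly prefers disclosing (and receiving θ) to accepting the silence price. The conjectured threshold cannot be an equilibrium, and iterating the argument drives θ* down to θ_min: every type above the minimum discloses.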

A straightforward consequence of information frictions is that real estate market insiders, broadly defined, face strong incentives to profit from information asymmetries. Empirical attention has focused on whether locals and real estate professionals are able to earn excess returns. For the most part, this testing has proceeded without a basis in economic theory. However, we describe two models of trading with differentially informed investors that have been influential in the empirical research.

The “no-trade theorem,” attributed to Milgrom and Stokey (1982), is well known for raising a theoretical obstacle to insider trading. Milgrom and Stokey establish that the arrival of asymmetric information does not induce trade, so long as the initial asset allocation is commonly known to be Pareto efficient. Real estate research has tested hypotheses based on a (highly) generalized version of the no-trade theorem which holds simply that less-informed parties are reluctant to trade with better-informed counterparties.

The rational expectations model in Van Nieuwerburgh and Veldkamp (2009) examines learning and information asymmetry. The model assumes that local investors initially have slightly better information about risky home assets than non-local investors. Investors choose between obtaining additional information on either local or nonlocal assets. In a partial equilibrium analysis, local investors could choose to close their information gap and diversify. However, because both investor types end up with the same information set in this case, the prices of those assets are bid up, eliminating excess returns. If certain local investors instead choose to learn more about local assets than the average investor, local asset prices do not fully reflect willingness to pay and the well-informed locals can earn information rents. In the real estate literature, this paper has been influential in explaining how small initial information advantages can generate home bias in investment without assuming the existence of substantial frictions.

We turn next to the theory papers on real estate trading relationships listed in Panel C of Table 1. The first grouping, on home buyers and sellers, incorporates signaling and screening into models of the property market with imperfect information. The subset of articles on signals of seller motivation (i.e., Albrecht et al., 2016; Carrillo, 2012; Merlo et al., 2015) represents what we consider to be the prevalent approach in this literature. These papers can be characterized as reactions against the full information bargaining models in an earlier line of literature.Footnote 5 In the more recent papers, valuations are assumed to be the private information of the parties to the transaction. A typical approach has sellers directing the search process by setting a binding listing price that signals their motivation to sell.Footnote 6 The question of how a buyer responds to private information the seller has about the quality of a property, the prototypical information problem in real estate markets, is not a feature of these models.Footnote 7

The second grouping of buyer/seller papers in Table 1, on signaling property quality (i.e., Kaya & Kim, 2018; Taylor, 1999), extends Akerlof’s (1970) adverse selection model to two-sided, dynamic settings. In particular, the authors examine 1) what home buyers can infer about a property’s quality from the length of time it has been on the market, and 2) how sellers can influence the buyer’s learning process through their choice of listing price. (Unlike in Milgrom (1981)-style models, market participants can strategically produce noisy signals.) While dynamic adverse selection models in non-real estate applications typically feature increasing beliefs—i.e., longer marketing time is associated with higher quality in the minds of buyers—these papers model the potential taint that can attach to a property that remains on the market for too long.

Taylor (1999) presents a model in which asymmetric information is two-sided: the quality of the house is the private information of the seller, and preferences over quality are the private information of an infinite pool of buyers. In each of two periods, the seller conducts a second-price auction with a random draw of buyers. Prior to finalizing the purchase, the high bidder obtains an inspection which can generate a false favorable result, but not a false negative. The key assumption needed to obtain declining beliefs is that no trade occurs in the negative/low-quality case. Because favorable inspections are more likely with high-quality properties, second-period buyers assign a lower probability to an unsold property being of high quality.
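
A two-type illustration conveys the declining-beliefs mechanism (our simplification; Taylor’s model is richer). Let quality be H or L with prior μ = Pr(H), suppose a given period produces an inspection with probability π, and let inspections pass H properties with certainty but pass L properties only with probability q < 1 (the false favorable). If a sale occurs only on a pass, then

\[
\Pr(\text{unsold} \mid H) = 1 - \pi \;<\; 1 - \pi q = \Pr(\text{unsold} \mid L),
\]

so Bayes’ rule yields a second-period belief

\[
\mu_{2} = \frac{\mu(1-\pi)}{\mu(1-\pi) + (1-\mu)(1-\pi q)} < \mu,
\]

and remaining on the market is bad news about quality.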

Taylor (1999) finds that the seller’s optimal strategy depends on the structure of the information set. When the bidding history for a house is private, the seller offers the property at a low price to avoid a “vicious circle of rejected offers and declining prices” (p. 577). When previous offers are observable, the seller can benefit by setting a high initial price to introduce noise in the time-on-market signal. In particular, prospective buyers do not know if earlier prospects walked away because the house was over-priced or because they detected a flaw during the inspection. Interestingly, the model provides a rationale in economic theory for a policy requiring public disclosure of inspection results.

Kaya and Kim (2018) extend Taylor (1999) by more fully characterizing trading dynamics in their model. Buyers arrive sequentially according to a Poisson process. Before making an offer, each buyer receives a private and noisy signal about the quality of the property. The authors show that the seller’s decision not to accept an offer can be perceived as either good news or bad news, depending on the buyers’ prior beliefs about the “reputation” of the asset. The main dynamic result is that beliefs are decreasing over time if the property’s initial reputation is high and increasing if low, whereas the increasing case is absent in Taylor (1999).

If the source of asymmetric information invoked most commonly in analyses of real estate markets is that buyers are less aware than sellers of property characteristics, the runner-up might be that landlords do not know the traits of potential tenants. The particular traits that the literature focuses on are the propensity of a tenant 1) to cause wear-and-tear or damage to the unit, and 2) to vacate (non-renew) at lease expiration. These traits affect the landlord’s maintenance and turnover costs of providing housing services, but are considered unobservable when a lease is negotiated. Panel C of Table 1 lists four theory papers we discuss next that examine how information issues in leasing affect tenure choice. Later, in the section on “Information Strategies”, we cover additional work on how landlords use lease terms as screening devices.

Henderson and Ioannides (1983) describe what they call the “fundamental rental externality” in which tenants fail to “face the social marginal costs of their utilization rates” (p. 983). The externality arises because utilization is essentially non-contractible, meaning it is difficult for landlords to observe and recover all damages caused by tenants. As a result, tenants have an incentive to over-utilize their rented housing units and landlords have to charge inefficiently high rents to cover the excess utilization. A logical consequence of the model is that owning dominates renting, so the authors introduce portfolio considerations to provide an economic rationale for renting.

Miceli (1989a) examines the externality problem from Henderson and Ioannides (1983) in an adverse selection model. The setup is typical of models that follow Rothschild and Stiglitz (1976). There are two types of households, low and high utilization, and the cost of providing housing services depends on the type of tenant. Type is unobservable ex ante, but population shares are known. The equilibrium analysis produces standard results: 1) a pooling equilibrium with one contract cannot exist, and 2) a separating contract that satisfies the self-selection constraint is optimal for high-utilization types and second-best for low types. To apply the model to the tenure-choice problem, Miceli (1989a) introduces transaction costs and traces out three additional implications: 1) high-utilization households rent, 2) low-utilization households who face low transaction costs own (to avoid subsidizing high-utilization types) while low-utilization households who face high transaction costs rent, and 3) the low-utilization types who own consume more housing services than the low-utilization types who rent.
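
In stylized Rothschild and Stiglitz (1976) notation (ours), a contract is a rent-quantity pair (r_i, h_i) for type i ∈ {L, H}, priced at the cost of serving that type. A separating menu must satisfy the self-selection (incentive compatibility) constraints

\[
U_{H}(h_{H}, r_{H}) \ge U_{H}(h_{L}, r_{L}), \qquad U_{L}(h_{L}, r_{L}) \ge U_{L}(h_{H}, r_{H}),
\]

and in the standard solution the high-utilization contract is undistorted while the low-utilization contract is distorted just enough that high types do not prefer it, which is the sense in which the outcome is second-best for low types.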

Arnold and Babl (2014) also apply a Rothschild and Stiglitz (1976)-type model to the problem of housing tenure choice. At a separating equilibrium, households make their tenure choices depending on their utilization rate: high-utilization households rent and low-utilization types own. The advancement relative to Miceli (1989a) is that tenure choice itself is now the screening mechanism, rather than an artifact of differences in transaction costs. While the screening model in Miceli (1989a) suggests the non-existence of a pooling equilibrium, Arnold and Babl (2014) show that a pooling equilibrium is possible due to the indivisibility of house ownership. The authors note that introducing partial ownership could restore the standard Rothschild and Stiglitz (1976) outcome.

Seshimo (2014) describes tenure choice as a trade-off between inefficiencies from investment hold-up and adverse selection. Asymmetric information in the model is two-sided: 1) whether a household has short-term or long-term occupancy plans is their private information, and 2) unit quality is the private information of the landlord, who is deciding between renting and selling the unit. The hold-up problem, inspired by Williamson (1985), is that short-term households do not occupy a property long enough to benefit from any investments they make in their homes and those investments are not verifiable for contracting purposes. The adverse selection problem is a standard lemons story. Owners of high-quality units prefer to rent their homes rather than sell, and vice versa for owners of low-quality units. If the market does not break down, the equilibrium price declines toward the value based on the conditional expectation of unit quality. The model shows that reducing the degree of asymmetric information regarding unit quality increases the likelihood that long-term type households buy. However, the focus of the paper is to show that a strong tenancy protection regime exacerbates adverse selection. The author argues that the interaction of information frictions and tenancy law can explain the relatively small resale market in Japan.

Identifying Imperfect and Asymmetric Information

We cover identifying imperfect and asymmetric information in four sections. In the first two, the focus is on asymmetric information, and identification occurs in the cross section. Coverage begins with “Proxy Variables” that measure information quality and symmetry. Researchers have advanced several metrics and we review the evidence of market failure that arises from their implementation in a set of representative papers. Next, in “Nonlocal Premium” we discuss findings that nonlocal and foreign buyers pay a premium to buy properties outside of their home markets. Although empirical research supports multiple explanations, including anchoring and bargaining power differentials, asymmetric information is the leading candidate.

In the next two sections, the focus turns to direct effects of imperfect information. In “Information Shocks” we cover recent research on reactions to releases of new data that improve the quality of the information environment, and in “Learning” we discuss documented patterns of dynamic learning, with a focus on stochastic events. Because shock exposure varies geographically in the experimental settings, authors are able to employ both temporal and spatial–temporal identification strategies. Although either a transitory or persistent effect relative to a control area could violate rational expectations, we review explanations based on information frictions rather than behavioral economics.

Proxy Variables

While there are standard metrics for characterizing the information environment surrounding trading in organized financial markets—e.g., bid-ask spread and analyst forecast dispersion—no consensus exists on similar measures for studying the decentralized property markets. Panel A of Table 2 lists several market-level measures that have been advanced, where the market boundary is defined in either geographic or product space. Like their capital market analogs, these variables do not distinguish which group of traders has an informational advantage over another, that is, their exposure to asymmetric information. Instead, researchers make assumptions about an average participant’s information set, such as those listed in Panel B, based on the participant’s trading experience or the characteristics of the property traded.

Table 2 Proxy Variables

Both types of proxy variables in Table 2 may be utilized together as independent variables in 2 × 2 factorial designs (2 factors, 2 levels). In particular, papers test for variation in an outcome variable between less- versus more-informed participants in settings of low- versus high-quality information.Footnote 8 The dependent variable is typically either 1) a trading strategy (e.g., limited participation or selective offering), or 2) a market outcome (e.g., sale price or liquidity). For a trading strategy equation, one of the transaction-level proxies in Panel B might serve as the dependent variable. In this case, the construct for which the proxy variable is standing in will differ from the generalized descriptions given in the table. For an equilibrium variable equation, rejection of the null suggests that market participants are not just attempting to trade on asymmetric information, they may be earning actual excess returns doing so. In other words, the second case is a test of strong form market efficiency.
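
A generic regression implementation of this design (our notation, not any particular paper’s) interacts the two factors:

\[
y_{im} = \alpha + \beta_{1}\, INF_{i} + \beta_{2}\, LOWQ_{m} + \beta_{3}\,(INF_{i} \times LOWQ_{m}) + X_{im}'\gamma + \varepsilon_{im},
\]

where y_im is the trading strategy or market outcome for participant i in market m, INF_i is a Panel B proxy for participant informedness, LOWQ_m is a Panel A indicator of a low-quality information environment, and X_im are controls. The coefficient of interest, β₃, measures how the behavior or outcomes of informed participants differ precisely where information quality is low.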

Garmaise and Moskowitz (2004) characterize the information environment around investment sales in commercial real estate using a (“direct”) measure at the market level and variation in exposure to asymmetric information using multiple (“indirect”) transaction-level variables. At the market level, they propose that the difference in property tax assessment ratios across jurisdictions provides a continuous measure of exogenous variation in information quality. When dispersion in ratios is high, there is greater potential for trading on asymmetric information, because the public assessments are noisier and less useful to market participants.Footnote 9 Transaction-level measures in the paper are standard in the literature and include distance between the buyer and property (area knowledge), property age (length of income and price history), and whether the buyer or seller is a real estate broker (area knowledge and general expertise).
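
In the spirit of their construction (the paper’s exact definition may differ), the market-level measure can be written as the within-jurisdiction dispersion of assessment-to-price ratios,

\[
R_{i} = \frac{A_{i}}{P_{i}}, \qquad D_{m} = \operatorname{sd}\left(R_{i}\right)_{i \in m},
\]

where A_i is the assessed value and P_i the transaction price of property i in jurisdiction m. A high D_m indicates that public assessments track market values poorly, that is, a low-quality information environment.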

Using a large sample of transactions in seven US states for 1996–1999, Garmaise and Moskowitz (2004) test a dozen hypotheses on how asymmetric information affects trading and financing strategies, and we highlight some representative results. Three hypotheses in the paper are based on the generalized Milgrom and Stokey (1982) no-trade theorem, that less-informed parties will be reluctant to trade with better-informed counterparties. In markets with low information quality, the authors predict it is more likely that 1) buyers are drawn from nearby locales (limited participation strategy) and 2) commercial real estate brokers trade with other brokers (market segmentation strategy), and less likely that 3) young properties are traded (selective offering strategy). Results of the testing support these predictions from theory.Footnote 10 For example, they find that a standard deviation increase in dispersion of assessment ratios is associated with a reduction in buyer-property distance of 70 km, and the effect is more pronounced for younger properties with shorter income and price histories that are relatively difficult to evaluate. The authors conclude their results provide clear evidence that “asymmetric information is significant in the commercial property market” (p. 435).Footnote 11

Kurlat and Stroebel (2015), in turn, characterize the information environment around the resale market in residential real estate by leveraging the findings in Garmaise and Moskowitz (2004). If market participants form strategies in response to asymmetric information, it follows that the composition of sellers and properties should contain information about demand in the market. In particular, based on the theoretical framework in Kurlat (2016), they show that an increase in the share of informed sellers and of sellers with high elasticity of supply indicates that a negative shock to neighborhood amenities has not been fully priced. Their ultimate insight is that a relation between seller composition and subsequent appreciation is evidence of information frictions.

For variation in seller informedness, Kurlat and Stroebel (2015) use the share of sellers who are real estate agents. For variation in seller supply elasticity, they propose two measures: average land share (the factor loading of neighborhood characteristics on property values) from the property tax assessment and share of sellers who have recently moved. Owners of homes with high land share, they argue, should be more responsive to changes in neighborhood characteristics. Longer-tenure owner occupants should be less elastic in the decision to move. And for a separate transaction-level analysis, they use standard measures of the buyer’s knowledge of neighborhood characteristics. Their proxy variables include an indicator for whether the buyer currently resides in the neighborhood, and distance from previous home for buyers making within-county moves.

While Garmaise and Moskowitz (2004) show that information frictions appear to influence trading strategies (limited participation, market segmentation, and selective offering), Kurlat and Stroebel (2015) take the next logical step by documenting an effect on prices that is perhaps more noteworthy in terms of implications for market efficiency. Using 1.5 million residential sales from Los Angeles County for 1994–2011, the authors test several hypotheses and we review some representative results. At the market level, the composition of sellers predicts future price changes. A standard deviation increase in the share of informed sellers in the neighborhood (zip code), proxied by the share of real estate professionals among sellers, is associated with a 13 basis point decrease in the annual appreciation rate of houses in the area—they apparently sold for good reason. For seller supply elasticity, standard deviation increases in land share and new-mover share in the neighborhood are associated with 79 and 47 basis point drops, respectively.

At the transaction level, Kurlat and Stroebel (2015) show that informed buyers, operationalized as agent-buyers and buyers who previously owned in the same zip code, obtain annual appreciation rates on their homes that are 74 and 110 basis points higher. And as expected, the effects of seller composition on price appreciation are smaller when buyers are more informed. Overall, the paper provides strong evidence of market failure due to information frictions in the residential housing market. Information on neighborhood characteristics appears to diffuse over time rather than instantaneously, such that house prices do not fully reflect all available information, and informed buyers are able to form effective trading strategies based on informational advantages in environments of imperfect information.

Table 2 lists several additional proxy variables that researchers have used to identify information effects in real estate transactions. Representative papers are discussed below. Our review highlights that determining which property market or transaction type in a pairing is exposed to more imperfect or asymmetric information depends on assumptions about institutional and market settings. The use of these particular proxy variables in different settings may therefore be limited relative to other measures with interpretations that are less dependent on context.

Neo et al. (2008) contend that single-family dwellings (“low-rise”) in their sample of home sales in Singapore constitute a more heterogeneous structure type than units in multifamily (“high-rise”) buildings.Footnote 12 They propose testing for differences in outcomes between less-informed versus more-informed buyers using structure type as a proxy for settings of low versus high information quality. For informedness, they use an indicator variable for whether the buyer is foreign. In transactions from 1989 to 1999, they find (less-informed) foreign buyers pay above-market prices for the more difficult to evaluate low-rise buildings, but not for units in high-rises. In addition, they find pre-sale units uniformly sell for a discount, regardless of structure type, which they attribute to buyer concerns about moral hazard. It seems the legal regime in Singapore does not adequately shield buyers from the risk that developers, who have received advance payment for unfinished units, may choose to lower costs by reducing quality (p. 340).

While Kurlat and Stroebel (2015) use land’s share of property value as a proxy for supply elasticity, Wong et al. (2012) suggest that land share can be used as a measure of property transparency—i.e., the ease of evaluating property quality—in their experimental setting. The authors argue that land attributes in a sample of condominium units in Hong Kong are relatively more transparent than structural features, about which sellers have an informational advantage. Therefore, a property with a high land value relative to its structure value is subject to less information asymmetry.Footnote 13 Empirical results support the main prediction of the model in the paper, inspired by Akerlof (1970): in a sample of housing transactions from 1992 to 2008 across 50 districts, liquidity increases with the land price gradient, a proxy for land share. Chau and Wong (2016) find similar results in an extension to the Hong Kong office market.

A challenge with using land share as a proxy for property transparency is that it is not clear what else it may be measuring. Structure-to-land ratios are endogenously determined and affected by regulations that restrict density. And findings in Clapp et al. (2021) in this special issue raise an additional challenge if researchers use land share from property tax assessment data, a common source. Assessors typically use the land residual model to estimate land values in built-up areas.Footnote 14 Clapp et al. (2021) show that estimates of land ratio using the residual model increase in a rising market. This has the potential to bias results in research designs that rely on land share to predict price changes.

Most of the papers in this section have not addressed implications of the internet revolution for information concerns in real estate markets, perhaps because their designs exploit cross-sectional, not temporal, variation in information. An exception is Gordon and Winkler (2019), who examine information asymmetry using trends in the new-home premium. They estimate that buyers of new homes pay an average premium of 5.6% relative to comparable existing properties in Madison County (Huntsville), Alabama for 1996–2015. In addition to low maintenance and repair costs, they contend that new home buyers are paying for reduced uncertainty around those costs.Footnote 15 In that case, the authors expect that the premium should be falling over time due to the secular reduction in the cost of accessing information associated with the internet revolution. Results in the paper support this prediction. Consistent with the idea that part of the new home premium is due to information asymmetry, trend analysis shows the premium falls by 0.85% annually during their sample period.
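
A generic way to implement such a trend test (our notation) is a hedonic regression in which the new-home indicator is interacted with calendar time:

\[
\ln P_{it} = X_{it}'\beta + (\delta_{0} + \delta_{1} t)\, NEW_{it} + \tau_{t} + \varepsilon_{it},
\]

where δ₀ is the premium at the start of the sample, τ_t are time fixed effects, and δ₁ < 0 is consistent with the premium eroding as information costs fall.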

Nonlocal Premium

Conventional wisdom in the real estate industry holds that nonlocal buyers pay more than locals (Lambson et al., 2004, p. 120). This view is not supported in the first wave of studies from the 1990s (Myer et al., 1992; Turnbull & Sirmans, 1993; Watkins, 1998) that find no significant price differences, consistent with expectations from the efficient markets paradigm. However, several later studies, discussed below, have overcome data limitations in the earlier work and document significant differences between prices paid by local and nonlocal buyers. Based on the accumulated body of evidence, the conventional wisdom in the market, that nonlocals overpay, is now a stylized fact in the literature.

A common view is that nonlocal, especially foreign, investors trade off diversification benefits and information costs. Local investors may obtain information advantages from, for example, personal experiences living and working in the market, connections with other well-informed local investors and industry professionals, and by consuming local media. An often implicit assumption is that frictions make it difficult or costly for nonlocal buyers to obtain the same quality of information as locals. Indeed, belief in this mechanism is so strong that distance from the subject property has been used as a proxy variable for measuring the relative informedness of buyers in studies of asymmetric information, as indicated in Table 2. However, the literature, summarized in Table 3, has considered several alternative explanations for the nonlocal premium, including cognitive heuristics (anchoring), bargaining power, market timing, and econometric issues. Our review of these papers indicates that findings are decidedly mixed. This suggests that researchers should use restraint in interpreting results when the research design uses nonlocal or foreign status to identify a causal effect of asymmetric information.

Table 3 Explanations for the Nonlocal Premium†

Studies of commercial real estate markets tend to support an information mechanism, albeit in combination with various other explanations. For example, Lambson et al. (2004) find that, compared to their in-state counterparts, out-of-state buyers pay an average premium of 5.5% for apartment complexes in Phoenix over the period from 1990 to 2002. Testing in the paper supports two explanations, especially in combination: buyers from states with high average prices pay more than buyers from low-priced states (anchoring), and inexperienced out-of-state buyers pay more than experienced out-of-staters, where experience means having previously owned an apartment complex in Phoenix (asymmetric information). Liu et al. (2015) extend Lambson et al. by examining outcomes for distant investors on both sides of the transaction. Using a sample of office transactions from 138 US markets for 1996–2012, they find that nonlocal investors sell at a 7% discount in addition to paying a 13.8% premium at purchase. They conclude that finding both a sales discount and a purchase premium favors an information asymmetry explanation.

Although most studies of the nonlocal premium are based on a single jurisdiction and/or a single asset class, two papers stand out for the comprehensiveness of their data. Examining 115,000 sales of industrial, multi-family and office properties in the 15 largest US metros, Ling et al. (2018) find that nonlocal buyers paid an average premium of 4–15% for 1997–2011. Their findings suggest the premium is best explained by information asymmetries and that anchoring effects are less consequential. Based on a sample of 160,000 commercial real estate transactions across major asset classes in 146 countries, Agarwal et al. (2019c) find that foreign buyers pay 3.6% more than domestic investors, on average, for 2001–2015. They conclude that the premium reflects information disadvantages of foreign buyers and their findings are not confounded by anchoring, buyer heterogeneity, and selection bias.

Results for residential real estate transactions are qualitatively similar to the findings discussed above from the commercial market. An early study by Miller et al. (1988) finds that Japanese buyers significantly overpaid for homes in Honolulu, Hawaii in the late 1980s, a result attributable partly to information asymmetry and partly to movements in the yen-dollar exchange rate. Using a sample of relatively homogeneous residential properties from a large development in Chengdu, China to mitigate omitted variable bias, Zhou et al. (2015) find support for both the information asymmetry and anchoring hypotheses. In contrast, Ihlanfeldt and Mayock (2012) attribute the nonlocal premium in a large sample of single-family home transactions in Florida more to bargaining advantages of local residents and anchoring than to information disadvantages. Similar to Liu et al. (2015), Holmes and Xie (2018) also examine both sides of the transaction. For out-of-state buyers in Indiana, they find that the price premium is explained completely by property and transaction characteristics. Out-of-state sellers, however, sell at a substantial 11.9% discount, which the authors decompose as follows: 3.2% due to bargaining power, 1.5% due to buyer heterogeneity, and the remaining 7.2% due to informational disadvantage and market conditions.

Several studies advance alternative explanations for the nonlocal premium other than information asymmetry. Findings in Simonsohn and Loewenstein (2006) support a behavioral (anchoring) story: a household moving from a more expensive housing market spends more on rent than an observationally similar household moving from a less expensive market. Chinco and Mayer (2016) report that out-of-town second-home owners mistime the market and therefore realize lower capital gains than locals in 21 US metros for the upmarket period from 2000 to 2007. Using comprehensive data on the Paris housing market, Cvijanović and Spaenjers (2021) find that out-of-country second-home owners experience poorer investment outcomes, buying high and selling low. They rule out explanations based on preference heterogeneity, private valuations, or home-market anchoring, leaving bargaining intensity as the leading candidate mechanism. In contrast, Kim et al. (2020) suggest that nonlocal investors in commercial real estate increase their market share during times of financial distress (“firesales”) and pay lower prices on average by taking advantage of reduced competition.

Lastly, several papers examine whether model misspecification or sample selection might bias estimates of the nonlocal premium in the literature. Clauretie and Thistle (2007) suggest that both location and time-on-market are important and should be controlled for in the analysis. Devaney and Scofield (2017) find that, although foreign investors pay more relative to local investors, they also sell at a premium. Their results suggest that unobserved quality of the assets, instead of information asymmetry, may drive the nonlocal premium. In contrast, results in a few papers suggest that, if anything, estimates based on classifying buyers into local and non-local categories may suffer from attenuation bias. In Chinloy et al. (2013), the price difference between local and non-local novice buyers (defined as having completed two or fewer transactions during the study period) is statistically indistinguishable from zero. In Agarwal et al. (2019c), the nonlocal premium decreases over time as foreign investors learn from experience, disappearing completely by the fourth acquisition in a host market.

Information Shocks

Papers in this section (and the next) use exogenous shocks to identify the effects of imperfect information on outcomes and to study learning dynamics in real estate markets. Papers employ either event study (temporal: before vs. after) or difference-in-differences (spatial–temporal: treatment vs. control, before vs. after) designs. The usual advantages for internal validity of quasi-experimental approaches over research designs that rely on identification in the cross section apply here. In particular, the results are less likely to be biased by correlated omitted variables.Footnote 16
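
A canonical difference-in-differences specification for these settings (generic notation, not any single paper’s) is

\[
\ln P_{it} = \alpha_{i} + \tau_{t} + \delta\,(Treat_{i} \times Post_{t}) + X_{it}'\gamma + \varepsilon_{it},
\]

where α_i and τ_t are unit and time fixed effects that absorb the main effects of treatment status and the shock date, and δ is the effect of the information release. The event-study design is the limiting case without a control group, in which identification comes from the Post_t comparison alone.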

Table 4 lists a group of papers that leverage releases of new information. The first two papers study how sudden disclosures of housing transaction details affect housing market outcomes. The improvements in the quality of publicly available market data are expected to increase liquidity and reduce price dispersion based on arguments tracing back to Stigler (1961). As we discuss, results from information shocks in Israel and Helsinki support these predictions from theory. The remaining papers in the table examine shocks that are more circumscribed in nature: improved information on flood risk, school quality, and pollution. Results for this group are mixed. The varied findings may represent a tradeoff in the information shock literature between internal and external validity: many of the papers involve idiosyncratic situation-by-treatment interactions that provide identification, but tend to limit generalizability.

Table 4 Information Shocks

Ben-Shahar and Golan (2019) examine an information shock from 2010 in Israel when transaction data from 1988 onward became available to the public due to a court decision. Using an event-study design, they find this exogenous increase in price transparency is associated with an 18% reduction in price dispersion. Areas with relatively low socioeconomic status, where market participants have less means to mitigate information frictions, exhibit a larger effect. Because the shock is nationwide, the authors can only exploit before-versus-after variation, which makes inferring causality difficult. However, the paper stands out for the battery of testing the authors perform, including using rental properties as a control group, to show that their results are robust to substantial stressing.

Eerola and Lyytikainen (2015) analyze a similar information shock in Finland when a website was launched in 2007 that provided detailed housing transaction data for only part of the Helsinki metro area, making a difference-in-differences design possible. To enhance credibility of the common trends assumption, the authors define the study area to lie along an important mass transit line. They find that marketing time fell by 20% and average sale price increased by 5% for homes covered by the website relative to comparable properties in the control area. Results are consistent with predictions from a theoretical framework developed in the paper in which both improved matching and seller learning following a shock lead to higher transaction prices.

Votsis and Perrels (2016) employ a design similar to Eerola and Lyytikainen (2015), but with a narrower information shock that arises in Finland from an EU-directed public disclosure of high-resolution flood maps in 2006–2007. Although “coarser flood maps were available to some extent before the high-resolution maps were published” (p. 453), this is not a case of simply redisclosing already public information. The new maps provided details that clarified existing notions of flood risk among market participants. The authors study three cities with histories of coastal (Helsinki) and river (Pori and Rovaniemi) flooding and find that prices fall 10–13% for properties located in flood plains relative to comparable non-flood-plain properties after the new maps are implemented. On the one hand, the analysis shows that house prices quickly and accurately reflect improved information, consistent with the efficient markets paradigm. However, the results also suggest that before the new release, when information about flood risk was more imperfect, homeowners were overly optimistic in whatever process they used to fill knowledge gaps. Although such behavior could be viewed as an artifact of Bayesian updating, the authors argue that it is consistent with results tracing back to the influential paper by Kahneman and Tversky (1979) on prospect theory.

Across three papers we review next on school quality releases, researchers observe temporary, differential, and null effects of shocks on house prices.Footnote 17 Fiva and Kirkebøen (2011) analyze an exogenous shock created by the first-time release of information on the quality of schools in Oslo, Norway. They find a short-term behavioral-type effect on house prices, with reversion to pre-release levels after a few months. In a similar study, Haisken-DeNew et al. (2018) examine house price changes after the launch of a school quality website in the Australian state of Victoria. The information shock causes house prices to increase by 3.6% in suburbs with high-quality schools, but has no effect on house prices in low-quality school suburbs. Lastly, Imberman and Lovenheim (2016) use a public release of school and teacher value-added measures in Los Angeles and find no effect on house prices.

This review highlights how the interpretation of null results varies in papers that examine releases of new information. In Haisken-DeNew et al. (2018) and Imberman and Lovenheim (2016), researchers implicitly assume information is capitalized efficiently, in that they interpret null effects as indicating the information shock is simply not salient for groups of local residents on the margin. In contrast, authors in the next two papers appear to use null results to reject efficient capitalization. In these papers, the authors argue that the content of an information shock is salient to market participants, but that a friction prevents capitalization.

In the first paper, we encounter a case where a null result could indicate that an information shock is too complex to digest for the typical homebuyer (low signal-to-noise ratio). Bui and Mayer (2003) study the Toxic Release Inventory, a regulation introduced by the US Environmental Protection Agency (EPA) in 1986 that requires polluting firms to disclose information about their toxic emissions. The authors use an event study approach to study house price changes in Massachusetts following disclosure of toxic releases and find no response. In this case, the excess complexity, according to the authors, may lie in the lack of a clear delineation of the affected areas.

In the second paper, the effects of an information shock could be heterogeneous due to variation in the sophistication of market participants. In 2013, the Intergovernmental Panel on Climate Change attracted significant media attention when they updated their projections of sea level rise, approximately doubling previous estimates. Bernstein et al. (2019) find that new information about exposure to expected sea level rise affects house prices for coastal communities in the contiguous United States, even in areas not expected to be inundated until the end of the century. Interestingly, this effect exists only within their sample of non-owner-occupied properties, suggesting that new information moves prices more when more sophisticated investors are the marginal purchasers.Footnote 18

Learning

A substantial body of real estate literature examines the dynamics of learning in settings of imperfect information. To begin with, market participants in the literature learn from strategic search and bargaining activities. For example, sellers learn about current market demand from showing frequencies and offer distributions and adjust list prices accordingly, and landlords offer a variety of lease structures to learn about tenant characteristics. These behaviors are covered in the sections on “Listing” and “Leasing” strategies, respectively.

Next, participants learn from proximity. Three recent papers document market activities that generate information externalities. Bayer et al. (2021) show a causal effect of nearby investment activity on the likelihood that an individual becomes an investor in the housing market. The spillovers are quite large: the presence of an investor within 1/10th of a mile increases the probability of a household investing in housing by 8% within the year and up to 14% over three years. Szumilo (2021) examines the information content for neighbors of a homeowner’s decision to renovate. The identification strategy exploits the fact that interior renovations affect the price of that house, but not the desirability of the neighborhood. The author estimates substantial price spillovers: a 1% increase in the price of a house with interior renovations increases the price of a nearby home by up to 0.3%. Dumm et al. (2022) examine the effects on area house prices in Florida from a local household filing a sinkhole claim. There are three possible outcomes of a claim: denial, cash settlement, or remediation. Of these, the authors argue, remediation is the resolution least associated with fraud, a documented problem in the state. As expected, the public disclosure of a sinkhole that occurs as a result of the claim process has a negative effect on the price of the subject and surrounding properties. Consistent with the efficient markets paradigm, buyers differentiate between properties where sinkholes are remediated and those where they are not, and between claims that are paid and those that are denied. However, denial by the insurer reduces, but does not eliminate the claim effect. The finding of a negative externality from potentially fraudulent claiming behavior is a novel contribution.

But mostly, market participants in the literature learn from experience. In some papers, participants choose their experiences. An example is a puzzle that arises naturally in the section on “Nonlocal Premium”, that is, why do nonlocal and foreign investors ignore their information disadvantages and invest outside of their home markets? The answer given by Agarwal et al. (2019c) is that investors choose to make nonlocal acquisitions in order to learn from experience. Their results show the nonlocal premium disappearing completely after an investor’s fourth acquisition experience in a host market. In other papers, events “choose” the participants. The remainder of this section discusses the articles listed in Table 5 that examine how market participants learn from the occurrence of an extreme and low probability event, such as a natural or (human-caused) environmental disaster, or a disease outbreak.

Table 5 Stochastic Events

The literature that tests predictions from behavioral economics typically assumes the stochastic process that generates a class of catastrophic events is stationary and well understood by market participants. Under that assumption, a single event should have only pecuniary effects on the local property market if market participants are rational, because a new occurrence produces only trivially small improvements to estimates of the distribution moments. Studies surveyed in Gallagher (2014) and Beltrán et al. (2018) typically report that markets overreact to catastrophic events, but the effects fade away. Such findings are interpreted as rejecting a null hypothesis of rationality in favor of alternative explanations based commonly on cognitive heuristics, especially Tversky and Kahneman’s (1973) availability heuristic. As mentioned in the introduction, the behavioral approach is beyond the scope of this review.
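
A simple conjugate-updating example makes the “trivially small” point concrete (our illustration, not drawn from the cited papers). If the annual occurrence probability p has a Beta(a, b) posterior built from a long history of observations, one additional event moves the posterior mean from a/(a+b) to

\[
\mathbb{E}[p \mid \text{one new event}] = \frac{a+1}{a+b+1} \approx \frac{a}{a+b} \quad \text{for large } a+b,
\]

so a rational participant who regards the risk as stationary and well understood barely revises beliefs, and a large, persistent price response invites a behavioral interpretation instead.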

Our coverage focuses on case studies in which researchers assume risk is either highly uncertain or changing. In this setting of imperfect information, as explained in Yezer (2010, p. 46), a long-run effect is evidence that an event may have caused rational market participants to update their risk perceptions. And a temporary effect, i.e., one that researchers might attribute to a behavioral response, represents the null hypothesis of no persistent learning. Our review highlights divergent findings on whether the effects of extreme and infrequent events in settings of imperfect information are transitory (behavioral) or persistent (informational). However, it is not always clear what real estate market participants should learn from a particular catastrophic event. Similar to papers discussed in “Proxy Variables” and “Information Shocks”, generalizing from these findings depends critically on assumptions about the experimental setting.

Several papers examine local housing market responses to major environmental events, such as the 1979 Three Mile Island nuclear accident (Gamble & Downing, 1982; Nelson, 1981) and the 2010 Deepwater Horizon oil spill (Cano-Urbina et al., 2019; Siegel et al., 2013; Winkler & Gordon, 2013). Nuclear accidents and major marine oil spills are complex, low-probability events and the histories of commercial nuclear power and deepwater oil drilling operations are relatively short. It seems reasonable to assume that the true frequency distribution of accidents is largely uncertain. In addition, the distributions are most likely nonstationary, changing irregularly over time with rates of technological innovation and utilization. Thus, researchers might expect updating of risk perceptions after an adverse event that could affect demand for housing in the hazard area over the medium to long-run. These two events represent, respectively, the most serious commercial nuclear power accident in US history (US Nuclear Regulatory Commission, 2018) and the largest marine oil spill in world history (Robertson & Krauss, 2010). However, housing market outcomes indicate that neither accident seems to have changed perceptions of the risk of owning residential property in the impacted areas. The papers use temporal or spatial–temporal identification and mostly find immediate declines in prices and appreciation rates that disappear within three months, a result which fails to reject a null hypothesis of no (persistent) learning.Footnote 19 Perhaps null results should be expected. Unlike the subsequent Chernobyl and Fukushima Daiichi nuclear disasters, the Three Mile Island accident did not involve a major release of radioactive material. Similarly, the hazards from a marine oil spill for owners of coastal real estate are mostly short-term in nature, unlike, for example, the carcinogen exposure experienced by residents of Love Canal, NY or Hinkley, CA.

The next two papers on environmental disasters in Table 5 stand out for clever use by the authors of comparison populations in their research designs. First, Hansen et al. (2006) examine how the housing market in Bellingham, WA responded to a major underground pipeline incident in 1999. As reported by the National Transportation Safety Board (2002), the Olympic Pipeline rupture spilled 236,796 gallons of gasoline into Whatcom Creek, where it subsequently ignited and exploded. Three people died in the incident and property damage was estimated to be at least $45 million. This is another case in which the authors reasonably assume that initial risk perceptions in the area likely “deviated from true risk” (p. 531).Footnote 20 As evidence, the authors find that proximity to the pipeline did not affect house prices prior to the incident. And as expected, there is a strong negative effect of proximity after the event. Although the effect diminishes over time, it remains significant at the end of the study period, five years after the incident. The results from the Olympic case collectively reject a null of no (persistent) learning, a finding that stands in contrast with the case studies of Three Mile Island and Deepwater Horizon.

In addition to the before-after comparison in the paper, Hansen et al. (2006) leverage the fact that there are two underground transmission pipelines that run through residential areas in Bellingham, WA. The two pipelines are at most 1,500 feet apart, but nonetheless represent different risks to surrounding properties following a leak. While the Olympic Pipeline that ruptured carries refined petroleum products, the Trans Mountain Pipeline transports crude oil, which is less volatile. The authors find that proximity to the Trans Mountain pipeline does not have a significant effect on house prices before or after the nearby Olympic accident. On the one hand, these results suggest that buyers differentiate the risks from proximity to the pipelines, consistent with expectations from the efficient markets paradigm. Inconsistent with expectations, the results also suggest that the local housing market went from not pricing the refined product pipeline risk to overreacting, in what the authors call the “attention-focusing effect” of the incident (p. 532).

Next, Ambrose and Shen (2021) in this special issue examine house price effects from publicly reported accidents at crude oil and natural gas well sites that utilize hydraulic fracturing (fracking) technology.Footnote 21 (An accident involves the release of hazardous substances into the air, soil, or water.) Considering that the fracking boom began only recently, circa 2007, this is another case in which researchers can assume that information regarding a household’s risk exposure is imperfect.Footnote 22 Rather than a single large and infrequent event, this analysis focuses on how housing market participants learn from a series of smaller-scale adverse events. Examining house prices at the zip code level in Pennsylvania for 2004–2012, the authors find that a local housing market experiences a decline of 0.7% in the month following a fracking accident. However, the effect is transitory, disappearing after three months. The results fail to reject a null hypothesis of no (persistent) learning and support the view in Yezer (2010) that common events typically have smaller effects on beliefs than large, sudden, and infrequent ones.

While Ambrose and Shen (2021) do not find evidence that market participants change their perceptions about fracking risks in the long run based on publicly reported accidents, they do find evidence of persistent learning in a separate (primary) analysis in which the fracking boom itself is the shock. A revealing comparison relies on the fact that many of the benefits and hazards posed by wells that use conventional extraction methods are similar to those that use fracking. The authors categorize zip codes as fracking-only, dual exploration (conventional and fracking), or no wells. Sellers in dual-exploration areas have direct experience with conventional production in their local market and should more readily update beliefs about risks and benefits of fracking than owners in fracking-only markets who lack such experience. Results show a positive correlation between appreciation rates and fracking, in contrast with existing research which finds heterogeneous, but typically negative, effects of proximity, as described in Muehlenbachs et al. (2015).Footnote 23 The annual net appreciation after the fracking boom ranges from $1,813 to $4,800, depending on the number of fracking permits in the zip code. As expected, house prices in fracking-only areas respond more strongly than those in dual-exploration regions early in the fracking boom. However, the premium disappears after three years, indicating that participants in the fracking-only areas learn about fracking risks over time.

Moving from environmental to natural disasters, we next discuss a representative set of papers that focus on how property markets respond to changes in flood risk. Our review indicates that Hurricane Sandy in 2012 is a turning point in the literature on flood effects. In papers written prior to Sandy, researchers typically assume, explicitly or implicitly, that exposure probabilities are constant, and expect that changes in risk perceptions after an event will be temporary.Footnote 24 After Sandy, researchers often assume flood risk is changing and examine whether effects of catastrophes on risk perception are persistent.

Why is Hurricane Sandy this turning point? In the years leading up to 2012, public concern had been growing that effects of global climate change—i.e., sea-level rise, increased storm intensity, and unconventional storm tracks—were exposing new areas to flood risk and increasing property risk in existing flood-prone areas. Sandy embodied these concerns. It was the most destructive hurricane in over 70 years to hit the Northeastern US, a region that market participants may have previously considered to be at low risk, especially relative to the southeastern Atlantic and Gulf Coasts. In addition, Sandy produced flood damage that extended well beyond recognized areas of flooding hazard: of the 51 square miles of New York City that flooded, only 33 were located within the base (100-year) FEMA flood zone. Because Sandy was uniquely salient, it seems plausible that its information effect may be large and long lasting. For empirical evidence of a turning point in climate change risk perceptions around this time, Keys and Mulder (2020) document a sharp decline in sales volume in markets more exposed to sea level rise beginning in 2013.Footnote 25

The papers we review on Hurricane Sandy are listed in Table 5. All of them examine, among other research questions, how Sandy affects the subsequent market for properties that were not damaged by the storm. While spillovers from affected areas can confound estimates of short-run price effects, a long-run effect for undamaged properties is considered evidence of changing risk perceptions.

The first paper we review examines price effects for undamaged properties within FEMA flood zones for evidence of learning. Ortega and Taspinar (2018) estimate a hedonic model in a difference-in-differences design, similar to the papers in the “Information Shocks” section, and affirm their results using a repeat sales approach. They find that as of 2017, five years after the storm, flood zone properties in New York City that were not damaged by Sandy trade at an 8% penalty relative to similar non-damaged properties not in flood zones. They conclude that Sandy may have increased the perceived flood risk for flood zone properties in New York City and cite the updating process in response to extreme shocks described in Kozlowski et al. (2019).
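For concreteness, the workhorse design in this literature can be sketched as a hedonic price equation augmented with difference-in-differences terms. The notation below is ours and simplifies the published specifications, which add year-by-year interactions and additional controls:

$$\ln P_{it} = X_{it}'\beta + \gamma \, FZ_i + \delta \,(FZ_i \times Post_t) + \tau_t + \varepsilon_{it},$$

where $P_{it}$ is the sale price of property $i$ at time $t$, $X_{it}$ is a vector of hedonic characteristics, $FZ_i$ indicates location in a FEMA flood zone, $Post_t$ indicates a sale after Sandy, and $\tau_t$ are time fixed effects. The coefficient $\delta$ captures the post-storm flood zone penalty, which Ortega and Taspinar estimate at roughly $-8\%$ five years out.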

Next, a recent paper by Cohen et al. (2021) analyzes market outcomes subsequent to Sandy for undamaged properties in New York City (excluding Manhattan) just beyond the storm surge boundary and outside of FEMA flood zones. The authors define two tests for information effects: “exposure” is distance to the storm surge, and “surprise” is distance to the storm surge less distance to the nearest flood zone boundary. Properties experiencing negative surprise are located closer to the storm surge than the flood zone, and vice versa for positive surprise. Estimating difference-in-differences models including these continuous treatment variables, they find that, conditional on exposure, sale prices fall about 2% per standard deviation of negative surprise distance during the year after the storm. In the medium run (2–5 years), the surprise effect is not significant, but the exposure effect remains negative and significant, at 4.4% per standard deviation of distance to the surge. They conclude that market participants adjust their risk perceptions based on actual exposure to the storm: a near miss provides lasting information, but the short-run surprise effect fades away.

In addition, two recent papers examine how the occurrence of Hurricane Sandy affects distant property markets that are not directly impacted by the storm. Using a matching difference-in-differences design, Addoum et al. (2021) estimate that commercial properties in Boston, MA located one mile closer to the coast experience 9.5% lower price appreciation among properties sold in the five-year period after Sandy. And in this special issue, Fang et al. (2021) find a short-term (three month) reduction of 4% in prices for flood zone homes relative to similar non-flood zone properties in Miami-Dade County using a hedonic difference-in-differences model. However, they find no persistent effect in the longer run.

Of the climate change impacts potentially embodied in Hurricane Sandy, the papers we review collectively indicate the primary concern among property owners is the changing distribution of future weather patterns. The persistent price effects in Boston and New York, combined with the null effect in Miami, suggest that Sandy may have caused market participants to update perceptions of flood risk for areas along the northeastern seaboard, rather than in coastal areas overall. Researchers argue such findings are consistent with a documented northward shift in hurricane tracks during the period of anthropogenic climate change, as referenced in Addoum et al. (2021).

An underdeveloped issue in interpreting results from research designs that utilize flood maps in their identification strategies is the assumption concerning the accuracy of those maps. Flood inundation mapping technology has improved significantly over the life of the US National Flood Insurance Program. Accordingly, federal law since 1994 requires that FEMA assess the need to revise and update all floodplain areas and flood risk zones it identifies at least once every five years. However, FEMA has regularly failed to meet this requirement and many of its maps may be outdated.Footnote 26 In the case of price changes following Hurricane Sandy, researchers should consider to what extent property markets are responding to the information that flood maps may be inaccurate because they are based on models that 1) fail to reflect the impact of climate change, or 2) perform poorly even on historical data, i.e., in the absence of climate change.

As an emergent infectious disease at the time of writing, the COVID-19 pandemic presents another case study in the literature on major stochastic events in which information on a property’s risk exposure is imperfect and nonstationary. Papers document substantial heterogeneity in the short-run effects of the pandemic on real estate markets. Ling et al. (2020) show a direct relation between local transmission rates and the price of commercial real estate during the early stages of the pandemic (January to April, 2020) by examining prices of equity REITs. Wang and Zhou (2021) find that properties leased to firms whose business models depend on face-to-face interactions perform worse. And Garcia et al. (2022) document the emergence of a large risk premium for density in newly negotiated leases.

Whether COVID will cause long-run change in perceptions of relative disease risk within and among cities will certainly be an area of future research. Among cities, the pullback from the largest markets, manifest in falling commercial rents in those locations, has been the primary real-estate story of the COVID-19 pandemic. But will it last? The literature suggests that urban housing markets have been resilient to major shocks throughout history. However, the ability to generalize from these historical papers to the COVID-19 case is limited because they mostly study events in which the housing stock of a city suffers widespread destruction from fire or warfare—Davis and Weinstein (2002) is a representative example. A recent exception is Francke and Korevaar (2021), who use event study methods to examine the housing market effects of ten plague epidemics in Amsterdam between 1557 and 1664, and two cholera outbreaks in Paris in 1832 and 1849. They find that outbreaks cause substantial drops in house prices that do not last more than one to two years beyond the end of the epidemic, suggesting that city dwellers may not update their long-run risk perceptions surrounding dense urban living as a result of an outbreak.

For the within-city analysis, a set of recent papers suggests how the pandemic might affect neighborhood dynamics and urban form. Ambrus et al. (2020) study an 1854 cholera outbreak caused by contamination in a water well serving a single London neighborhood. Using a regression discontinuity design, they find a persistent fall in rents in the area serviced by the well. Rather than a change in risk premia, as one might expect, the authors describe a lasting neighborhood transition caused by the sudden impoverishment of households who lost current and potential wage earners in the outbreak. Although the COVID-19 outbreak constitutes a global pandemic, exposure to its health and economic consequences varies geographically. The results in Ambrus et al. suggest the potential for long-run neighborhood responses to any shocks in external obsolescence caused by COVID-19.

Davis et al. (2021) argue that the pandemic has dramatically raised the productivity of telecommuting relative to working at the office. The model in their paper predicts that in response to such a large relative productivity shock, the city spreads out as workers move to places with longer commutes and lower housing costs, and that CBD rents decline by 15%. In Delventhal et al. (2022), the share of employees working from home is exogenous. Calibrating their model to residential and employment patterns for the Los Angeles metro area, the authors simulate the response when the work-from-home share increases from 4% to 33%. Similar to Davis et al., real estate prices decline in the CBD and increase in the suburbs due to the movement of households toward the periphery of the city. The average price for both commercial and residential floorspace falls by about 6%. They conclude that their results are consistent with findings in the urban literature on telework, e.g., Safirova (2002), Rhee (2008), and Larson and Zhao (2017).

We conclude this section with some general comments regarding research on how property markets respond to the occurrence of an extreme and low-probability event. Most of the studies we review essentially find that what appears to be “bad” news lowers property values and “good” news raises them. To the extent that prior expectations are considered, it is in the papers on flooding that assume market participants form their expectations based on flood maps. And even in these papers, the extent to which the market believes that flood maps are accurate is typically ignored. Under Bayesian updating, stating the prior distribution and modeling the updating process using the new information acquired from the event are necessary steps in examining a change in property prices. Our review highlights that the literature gives insufficient treatment to whether documented price changes are appropriate under Bayesian updating, an issue that calls for future research.
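As a minimal illustration of the missing step, consider a Beta-Bernoulli benchmark (our notation, not drawn from any particular paper). If market participants hold a prior over the annual probability $\pi$ of a damaging event, $\pi \sim \mathrm{Beta}(\alpha, \beta)$, and then observe $k$ events over $n$ periods, the posterior is

$$\pi \mid \text{data} \sim \mathrm{Beta}(\alpha + k,\; \beta + n - k), \qquad E[\pi \mid \text{data}] = \frac{\alpha + k}{\alpha + \beta + n}.$$

The same event moves the posterior mean by a large amount when the prior is diffuse (small $\alpha + \beta$) and hardly at all when the prior is tight, so the price response that is “appropriate” under Bayesian updating cannot be assessed without a statement of the prior.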

Information Strategies

We next review articles on strategies that market participants use to either leverage or mitigate asymmetric information. More informed market participants have strong incentives to exploit superior information. However, less informed participants, knowing their information disadvantage, should react ex ante. We cover these information strategies in three sections on “Listing”, “Leasing”, and “Geographic Asset Allocation.”

Listing

The most fundamental element of listing strategy is setting the asking price. It is conventional wisdom among real estate professionals that in setting a high listing price, a seller is more likely to attain a high sale price, but less likely to enjoy a quick sale. This conventional wisdom is also a stylized fact in the literature. For example, Yavas and Yang (1995), Anglin et al. (2003), and Merlo and Ortalo-Magne (2004) all find that listing price is positively associated with sale price and negatively associated with liquidity. An additional stylized fact is that sellers who have received few offers tend to reduce the price of their listings, independent of changes in market conditions (Merlo & Ortalo-Magne, 2004). If search were costless and information symmetric, then a non-binding offer price would not affect transaction outcomes. Because these conditions fail to hold in the market, sellers can potentially set listing prices strategically. The idea that sellers signal property quality through the listing price is developed in the papers by Taylor (1999) and Kaya and Kim (2018) discussed in the “Selected Theory” section for Part 1. Here we review testing performed in empirical papers on information-based listing strategies listed in Table 6.

Table 6 Information Strategies

Using a detailed sample of listings and transactions in the Dutch housing market, de Wit and van der Klaauw (2013) find that a list-price reduction raises the sale rate by 83% and the withdrawal rate by 44%, taking into consideration the endogeneity of list prices. They interpret these substantial effects of list-price reductions as evidence of asymmetric information. The very idea that time on market is a negative signal, they argue, assumes the presence of unobserved property characteristics that are revealed to buyers during inspection.
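Although de Wit and van der Klaauw specify a richer structural model, the flavor of their exercise can be conveyed by a stylized competing-risks duration specification (our sketch, not the authors’ exact model):

$$h_j(t \mid x_t) = \lambda_j(t)\, \exp\{x_t'\beta_j + \theta_j \, R_t\}, \qquad j \in \{\text{sale}, \text{withdrawal}\},$$

where $h_j$ is the hazard of leaving the market through route $j$ after $t$ periods on market, $\lambda_j(t)$ is a baseline hazard, and $R_t$ indicates a list-price reduction. Their headline estimates correspond to hazard ratios of roughly 1.83 for sale and 1.44 for withdrawal, with the endogeneity of list-price changes addressed in the authors’ full model.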

Research inspired by Stigler (1961) in the section on “Information Shocks” shows that increasing the quality of the information environment reduces the dispersion of transaction prices. A more interesting question may be how price dispersion (which implies a low-quality information environment) affects transaction outcomes. To answer this question, Deng et al. (2012) develop a search model which shows that list price and sale price are positively related to price dispersion. Empirical estimates in the paper using condominium sales in Tokyo support these predictions from theory. The authors find that a 1% increase in the standard deviation of transaction prices is associated with a 0.2% increase in both list and sale prices. In addition, they show that less-informed sellers who set a higher list price are more likely to reduce the list price, wait longer on the market, and subsequently sell at a lower price.

Gatzlaff and Liu (2013) are among the first researchers to investigate list price strategy for commercial real estate transactions. Unlike in the housing market, fewer than one-third of the commercial property sales in their sample include an asking price. They find that listings for larger and more complex properties are less likely to contain prices and, subsequently, properties with list price information sell at lower prices. Although their findings do not establish a causal effect of list prices on sale prices, the results are consistent with an information asymmetry story: sellers use list price information to signal property conditions and their willingness to negotiate. They view the strategic non-use of a list price as maintaining an information advantage, in addition to avoiding the truncation of higher than expected offers.

The listing provides the primary source of information for a property on the market. In addition to posting an offer price, the literature has examined the strategic omission of information from residential real estate listings. In particular, we cover the decision to include fewer photographs than allowed by the local MLS (private information on horizontally differentiated features of a property), and the decision to omit the name of the assigned school or the age of the property (public information on vertically differentiated features).

Photographs of the property are perhaps second only to the asking price as the most important information in a listing. Benefield et al. (2011) find that sale price increases with the number of photographs over the range of image counts allowed by the local MLS during their sample period. Yet, Bian et al. (2021) examine listings from the Central Virginia MLS for 2001–2013 in this special issue and find a puzzle: only 30–40% contain the maximum number of photos allowed. This raises the question, is there a strategy for choosing the number of photographs? To answer this question, the authors present a model based on the theory of ordered consumer search developed by Armstrong (2017). In the model, withholding information on a property’s taste-specific features tends to increase the buyer arrival rate. However, reducing information, in general, may reduce the arrival rate by increasing uncertainty over the property’s condition. For homes on the higher end of the price distribution, which exhibit greater customization, incomplete disclosure can be an optimal strategy to maximize arrivals, because quality is more readily presumed for these properties.

Empirical testing in Bian et al. (2021) confirms that providing fewer photographs is associated with a higher sale price and shorter time on market for properties in the top quartile of the Richmond, Virginia market. As expected based on the existing literature, the opposite result obtains for properties on the lower end of the price distribution.Footnote 27 These results show that because search is costly, it is possible for sellers to form strategies that leverage even easily eliminated information asymmetries. The results also clearly reject the “unraveling result” described in the “Selected Theory” section for Part 1. The authors point out how models in the unraveling line assume that products are only vertically differentiated, while the higher-end homes in their data are more likely to have horizontally differentiated characteristics, meaning taste-specific features that not all buyers may value equivalently.

In addition to private information, we review two studies that document strategic (re)disclosure of publicly available information in residential listings. Carrillo et al. (2013) find that real estate agents are more likely to mention the name of a high-quality assigned school. They compare pooled cross sections from Fairfax County, Virginia, to examine changes over time. For 2001–2002, the listing agent indicates the name of the assigned elementary school in 68% of sales. The authors find that a standard deviation increase in a school’s average pass rate on a state-wide standardized test raises the likelihood of disclosure by 3.3%. Consistent with falling costs of obtaining information, the disclosure rate for 2006–2007 rises to 75% and the standard deviation effect on disclosure falls to 1.3%. Although the evidence suggests that sellers and their agents are disclosing school quality strategically, this strategy is ultimately ineffective, as assigned school does not have an independent effect on sale price in either sample. More recently, Gordon and Winkler (2020) find similar evidence of ineffective strategic disclosure of public information in home listings for Madison County, Alabama. Although property age is publicly available on the county tax assessor’s website, the listing agent categorizes house age as “unknown” in 30% of sales for 1996–2015. The authors find that age omission is strategic—i.e., more likely for properties in superior condition—but does not have a statistically significant effect on sale price.

The finding of ineffective strategic (re)disclosure of public information in Carrillo et al. (2013) and Gordon and Winkler (2020) is perhaps not surprising. Households face large incentives to extract economic rents through the creation of information asymmetries. Although the documented benefit to the seller of strategic omission appears to be minimal, the benefit to the agent may be to increase satisfaction among naive clients by signaling marketing savvy. And the practice appears to have no drawbacks. Because the cost of independently obtaining school quality or property age information is sufficiently low, the prediction from the unraveling literature that buyers will interpret non-disclosure as a signal of low quality does not appear to hold in these cases.

Lastly, researchers consistently find that the text the agent enters in the comments section of the listing conveys economically meaningful information on the condition and quality of the property. For example, when estimating hedonic equations, researchers commonly obtain significant coefficients on dummy variables for a large set of subjective keywords and phrases (tokens).Footnote 28 Although they do not test whether agents are strategic in their usage, Goodwin et al. (2014) do find that the effect of positive broker vernacular on sale price is correlated with information asymmetry, proxied by property heterogeneity.

Leasing

The papers on leasing in Table 6 examine strategies that landlords use to screen tenants for their propensity to utilize a property.Footnote 29 The information issues are that tenant quality is imperfectly observed at lease signing and utilization is largely non-contractible.Footnote 30 The leasing strategies covered by the first three sets of papers include tenure discounts, security deposits, and expense sharing. These papers are best described as applications of the Rothschild and Stiglitz (1976) screening model, where the terms of the lease are the screening device. The contribution of these papers is mostly conceptual in nature and our review indicates a need for additional empirical testing.

Hubert (1995) appears to be the first paper to use an asymmetric information model to explain the tenure discount that has been documented in rental housing, wherein landlords charge existing tenants less in rent at renewal than new tenants (e.g., Genesove, 2003; Goodman & Kawai, 1985; Raess & von Ungern-Sternberg, 2002). In the two-period model, tenants differ in quality, which landlords observe only after one period, and moving is costly for tenants. With a short-term (one-period) lease, a landlord can either evict an observed low-quality/high-utilization tenant at renewal or raise that tenant’s rent (economic eviction). Assuming that tenants know their type, landlords can offer a pair of separating contracts—short-term/terminable and long-term/non-terminable—that satisfy a self-selection constraint. Depending on how mobility and service costs are parameterized, low rents and low turnover for high-quality tenants can combine with high rents and high turnover for low-quality tenants to yield a tenure discount in equilibrium. Because the equilibrium that emerges is second best, the author argues that tenure security and rent control laws can be welfare improving.Footnote 31 Miceli and Sirmans (1999) develop a two-period model like Hubert (1995) in which landlords use lease duration to induce self-selection by tenants according to their unobserved propensity to move. In this case, landlords offer lower rent on longer leases to minimize transaction costs associated with turnover. Together, these papers suggest a downward-sloping term structure of rents that should be accounted for when estimating market averages.

Benjamin et al. (1998) examine the trade-off between up-front security deposits and rental rates in apartment leases. They show formally that a lease with a high security deposit greatly reduces asymmetric information and moral hazard problems between landlords and tenants, which should be reflected in reduced rents. Examining markets in Washington, DC and State College, Pennsylvania, they find that rents and security deposits are inversely related as expected. They note that in the case of liquidity constrained households with low security deposits, landlords are effectively loaning the optimal security deposit funds to the tenant who repays them in terms of higher rents. The authors show that landlords earn rates of return in excess of 30% per year on these “loans.”
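The implied return on these “loans” can be illustrated with a hypothetical calculation (our numbers, not the authors’): if a landlord accepts a security deposit that is lower by $\Delta D = \$500$ in exchange for rent that is higher by $\Delta m = \$15$ per month, the annualized return is

$$r = \frac{12 \, \Delta m}{\Delta D} = \frac{12 \times 15}{500} = 36\%.$$

Any rent premium above about \$12.50 per month per \$500 of foregone deposit implies a return above 30% per year, consistent with the rates the authors report.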

Mooradian and Yang (2002) study the use of expense sharing as a screening mechanism for unobserved utilization propensity in commercial real estate leases. The paper stands out for simultaneously modeling asymmetric information and market power in leasing transactions. The authors develop the equilibrium outcomes for perfectly competitive markets and also for low-vacancy (tight) markets in which landlords can operate as spatial monopolists. In their model, landlords offer two types of contracts: 1) a gross lease in which the landlord pays all expenses of operating the property, including maintenance and repairs, and 2) a net lease in which the tenant is responsible for expenses. Theoretical results show that the adverse selection problem of low-quality/high-utilization tenants preferring the gross lease is mitigated if landlords, perhaps through economies of scale, can provide property management services at a lower cost than tenants. Although this result obtains in both the competitive and monopolistic markets, adverse selection is more of a problem in the monopolistic case. The reason is that a monopolistic landlord charges a relatively higher rent for a gross lease in order to extract a portion of the gain from shifting property management from the tenant. Given the relatively high rent for a gross lease in the monopolistic case, only very high utilization tenants select gross leases.

The papers described above, which examine how landlords use lease terms to induce sorting, assume that tenant quality is unobservable. Ambrose and Diop (2021) assume instead that landlords can obtain a noisy signal of tenant quality by investing in screening. In their model, the optimal level of screening depends on the distribution of signal quality and tenant quality in the population. Empirically, they find a negative relation between lease defaults and regulation at the state level, using estate tax rates as an instrument for a rental regulation index. The authors argue this finding suggests a positive relation between screening by landlords and regulation.

Geographic Asset Allocation

Perhaps the most fundamental element of real estate investment strategy is where to buy. According to the literature in Table 6 on home bias in asset allocation, for most investors, the decision is, “as close to home as possible.” In a survey of 1,463 private owners in Scotland, for example, Crook et al. (2012) find that the median distance between a landlord and their property is only 2.3 miles for 2008. For a sample of 21,653 US office buildings in Eichholtz et al. (2016), the median distance is 5.4 miles for 2011.Footnote 32 And Ling et al. (2021c) find that equity REITs hold 20% of their property portfolios in their home metro areas for 1996–2019, versus a concentration of 1.4% in those same locations for REITs not headquartered locally.

Finding a home market bias in real estate investment is not surprising given the documented “Nonlocal Premium” in purchase prices. The leading view in the real estate literature is that investors are leveraging (or mitigating) asymmetric information by specializing in properties located in their home market. Empirical results are consistent with this view. Eichholtz et al. (2016) estimate that distant ownership entails a discount to effective rents of 6.4–10.1%, depending on how they define distant, which would seemingly be in addition to the nonlocal premium paid by investors at acquisition.Footnote 33 Decomposing the differentials, it seems that distance affects vacancy more than contract rent. The authors’ interpretation is that local information advantages are most relevant to effective property management, i.e., finding and retaining tenants. We would conjecture they also matter to choosing properties that are well positioned within their market to enjoy consistent demand over an investor’s holding period, although more work on the nature of distance effects would be welcome.

At the portfolio level, Ling et al. (2021c) find a positive and substantial relation between home market concentration and return (Jensen’s alpha) for REITs. Using a 2 × 2 factorial design inspired by Garmaise and Moskowitz (2004), the authors show that outperformance by higher concentration firms is greater in markets with more imperfect information, where information quality is measured by various proxy variables. The interaction finding, in particular, supports the view that home bias can be explained by information advantages of local investors.
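The logic of the factorial design can be summarized in a stylized interaction specification (our notation, simplifying the authors’ full model):

$$\alpha_i = \beta_0 + \beta_1 \, HomeConc_i + \beta_2 \, Friction_m + \beta_3 \,(HomeConc_i \times Friction_m) + \varepsilon_i,$$

where $\alpha_i$ is REIT $i$’s risk-adjusted return, $HomeConc_i$ is the share of its portfolio in its home market, and $Friction_m$ proxies for information imperfection in the markets where it invests. A positive and significant $\hat{\beta}_3$ is the interaction finding that supports the local information advantage interpretation, because home concentration should pay off most where information is hardest for outsiders to obtain.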

In using asymmetric information to explain home bias, researchers commonly assume that frictions make it difficult or costly for nonlocal buyers to obtain the same quality of information as locals. A potential criticism of this view is that the costs investors would incur to learn about nonlocal markets may not be that prohibitive, especially considering the secular reduction in the cost of accessing information associated with the internet revolution. To this point, Agarwal et al. (2019c) document that the nonlocal premium for foreign investors disappears by the fourth acquisition experience in a host market. (And that is for cross-border transactions; learning is presumably even faster when only geographic distance, rather than national boundaries and all that those entail, separates an investor from their property.) These results raise the question, why don’t more real estate investors learn about distant opportunities and diversify according to the predictions of modern portfolio theory?

Van Nieuwerburgh and Veldkamp (2009) provide a possible explanation, based on rational expectations, for the persistence of home bias in the face of falling information costs. Accounting for the equilibrium consequences of learning on portfolio choice, it is simply more profitable in their model for an investor to learn still more about risks for which they already have superior knowledge than to eliminate their information deficits on others. Home bias is a topic for which the behavioral approach (i.e., familiarity bias) is also well suited. A gap in the real estate literature is the absence of attempts to decompose variance and assess the relative merits of the competing explanations.

Policy Responses

In the sections above we reviewed strategies utilized by market participants in settings of asymmetric information. We cover standard policy responses next. We first examine common law and statutory requirements on a seller to disclose negative information about the property in a section on “Property Condition Disclosure Laws.” The research documents clear instances of market failure and the results tend to support policies requiring salient disclosure, including redisclosure of information that is already in the public domain. However, external validity emerges as a potential challenge in this literature. Most of the papers we review study transactions in specific local housing markets around the turn of the century and a concern is that the results may not reflect current developments at the national level.

Next, we review efforts that sellers can undertake to credibly disclose information that tends to increase a prospective buyer’s willingness to pay for the property. The section on “Certifications and Inspections” surveys the literature that examines how energy efficiency certifications and windstorm mitigation inspections address distortions caused by asymmetric information. Implementation of these information provision mechanisms has involved a mixture of voluntary efforts by sellers and mandatory programs that allow significant exemptions. Internal validity is the greater challenge confronting researchers in this literature. Properties that are certified or inspected likely vary systematically from those that are not. Recent results with improved experimental designs suggest changes in policy orientation might be called for.

Property Condition Disclosure Laws

In an extraordinary legal reform, the maxim, caveat venditor (let the seller beware), has largely supplanted caveat emptor (let the buyer beware) as the governing principle of residential real estate transactions in the United States. As part of the trend toward stronger consumer protection that began in the 1960s, courts in nearly all states have imposed an affirmative duty on home sellers, and their agents, to disclose known material defects not readily apparent to prospective buyers.Footnote 34 If a seller fails to disclose a latent defect, then the buyer has a right to just compensation to remedy it. Rescission of the purchase contract is even possible after closing. And when nondisclosure rises to the level of fraudulent misrepresentation, the seller can be liable for additional punitive damages. For commercial real estate transactions, in contrast, courts continue to place the burden on buyers to perform their own due diligence. Interested readers can refer to papers listed in Table 7 for in-depth discussions of the case law on property disclosure, e.g., Weinberger (1996) and Lefcoe (2004).

Table 7 Policy Responses

Discussions in the policy realm of the erosion of caveat emptor under common law have focused on the significant administrative challenges involved in dispute resolution, considering that practically every used home has some form of physical depreciation that could give rise to factual disagreement. In our review of the literature, however, we find that the implications for economic efficiency of this tectonic shift in information disclosure requirements have been understudied both theoretically and empirically.Footnote 35 It may be that findings from the literature on bilateral trading (Myerson & Satterthwaite, 1983) and bargaining under asymmetric information (Grossman & Perry, 1986; Samuelson, 1984) are relevant to this case. However, we find no work applying the relevant theory from industrial organization to this setting in real estate economics. And we find only one empirical paper, listed in Table 7, that directly compares outcomes under caveat venditor and caveat emptor for real estate transactions. Using an unusual natural experiment in Hong Kong, Chau and Choy (2011) find that the overpricing of properties with a common latent defect known to the seller is significantly less under caveat venditor than caveat emptor, 6.7% versus 9.9%. However, the authors point out that their findings are limited to the problem of over-priced lemons and do not show that caveat venditor is the more efficient doctrine overall considering all tradeoffs. This is an important topic where more research is needed.

On the statutory law side, about two-thirds of states have passed laws requiring that home sellers complete a detailed property condition disclosure form and furnish it to prospective buyers. Weinberger (1996) and Lefcoe (2004) document that the purpose of these laws was not to protect consumers per se, but rather to shield real estate agents from legal liability in home defect lawsuits by clearly assigning the disclosure burden to sellers. Given this legislative history, it is not surprising that Nanda (2008) finds the probability of passage of a residential property disclosure law is positively correlated with the number of disciplinary actions against agents in the jurisdiction. Of course, passage of a law does not guarantee that sellers will be truthful and forthcoming in completing the document. As with common law torts, remedies for nondisclosure or misrepresentation can include rescission, compensatory damages, and punitive damages. Readers can refer to Wiley and Zumpano (2008) for a theoretical treatment of the tradeoffs that the real estate agent, at least, faces in deciding whether to comply with a requirement to disclose knowledge of latent material defects.

Relative to the erosion of caveat emptor in the courts, the impact of property condition disclosure statutes has been studied more widely. Nanda and Ross (2012a, b), for example, use an event study design to examine the effects on home sale prices of disclosure laws in the 34 states that had implemented them by 1996. Following Akerlof (1970), they argue that if disclosure laws reduce asymmetric information, then the willingness-to-pay for a home should rise after such laws are implemented. Results show that average sale prices in covered metro areas increase by 3–4% over a four-year period following adoption, consistent with a 6% reduction in the risk premium for owner-occupied housing. To be clear, Nanda and Ross are not testing the effect of the movement away from caveat emptor: at common law, sellers in most states have a duty to disclose regardless of whether the state has a seller property condition disclosure statute on the books. Instead, the authors are estimating the effect of layering a specific written disclosure requirement on top of an existing ambiguous disclosure regime.

In addition to physical defects and legal impairments to use, some disclosure laws require sellers to alert buyers to certain hazards in the wider geographic area that are already public information. For example, it is well known that the Federal Emergency Management Agency (FEMA) produces publicly available maps delineating zones of differential flooding risk. And any home in a high-risk zone securing a federally-related mortgage must be covered by flood insurance.Footnote 36 Nonetheless, in a turn-of-the-century survey administered to home buyers in Boulder, Colorado, 60% of respondents reported first learning of the potential flood risk associated with their homes at the time of closing, i.e., well after tendering their purchase offers (Chivers & Flores, 2002). Recognizing there may be gaps in the public disclosure of flood risk, some states have mandated that home sellers disclose to prospective buyers if the property is located in a designated flood hazard area. In addition to natural hazards, property disclosure forms may also ask the seller to indicate if the property is impacted by airport operations or nearby commercial or industrial usages or if there is environmental contamination in the area. These hazards may not be mapped with the same quality of information as detailed on a FEMA flood map. However, economic research tends to treat such environmental disamenities as essentially public information, while acknowledging that home sellers may be better informed about impacts at the property level than buyers.

The statutory requirement that sellers (re)disclose public information on local area hazards has been evaluated by embedding a hedonic model in a difference-in-differences design. For example, Troy and Romm (2004) find that California homes located in a flood zone sell at a 4.2% discount relative to comparable non-flood zone properties following the passage of a 1998 natural hazard disclosure law and no discount before the law. In separate articles on a 1996 disclosure law in North Carolina, Pope estimates that the average price of flood zone homes in Wake County falls by an additional 4.0% (2008b) and that of properties in designated high-noise areas near the Raleigh-Durham airport falls by 2.9% (2008a). Walsh and Mui (2017) examine the effects of New Jersey’s 2004 property disclosure law on the sale price of houses at different distances from contamination sites in Atlantic County. While the marginal cost of proximity to landfills does not change, there are statistically significant effects for lesser-known, but still publicly disclosed, sites of environmental contamination, such as leaking underground storage tanks. While Troy and Romm are concerned about equity – in particular, their finding that capitalization after passage is much greater in areas with high Hispanic population share – the discussion in the Pope and Walsh and Mui articles focuses on the full information assumption underlying the hedonic model. If a non-trivial fraction of home buyers are uninformed about a disamenity when bidding, as Pope (2008b) explains, then the implicit price of the disamenity from a hedonic will suffer from attenuation bias relative to a full information estimate.

Results in the papers on redisclosure of area hazards indicate that home prices at the turn of the century did not fully internalize knowable costs and risks associated with owning property in particular locations, a clear case of market failure. While the research shows that placing information in the public domain does not guarantee the information will be used for price discovery, the quasi-experimental approach is not well suited for examining the nature of the frictions. More work is needed in this area and we point readers to Kousky et al. (2020) for a helpful discussion on possible explanations for the low take-up rate of flood insurance that encompasses the relevant information issues. Regardless of the nature of the frictions, the findings support salient disclosure as a simple means of mitigating market distortions, at least during the timeframe examined by the research reviewed here.

Our review indicates that more up-to-date estimates would be helpful to determine whether results continue to justify mandating seller disclosure given the fall in the cost of obtaining information over the last two decades. Identification will be a challenge, however, as the push for new disclosure statutes may have run its course. Waivers and disclaimers are another topic for which additional research is needed. Some states allow either buyers to waive their right to receive a seller’s property disclosure, or sellers to unilaterally choose disclaimer over disclosure. (To be clear, a buyer cannot waive a seller’s common-law obligation to disclose known material latent defects.) It may be difficult to understand why a rational buyer would waive the right to receive a property condition disclosure. However, if a market contains buyers and sellers with heterogeneous risk tolerances, a diversity of contracts would be expected. The provision for, or prohibition against, waivers and disclaimers constitutes an evolving area of disclosure law. However, likely due to limited data availability, we find no articles examining these changes outside of law review articles.

Certifications and Inspections

Asymmetric information is widely considered a primary cause of the “energy efficiency gap” (Sanstad et al., 2006), the puzzlingly slow adoption by households and firms of cost-effective energy efficiency technology.Footnote 37 Because energy consumption is partially determined by the unobserved intensity of utilization, credible disclosure of building performance is costly and sellers can misrepresent the efficiency of their properties. As evidence that households and firms may not use accurate information when making investment decisions, researchers and policy advocates highlight estimates of implicit discount rates in the literature tracing back to an influential paper by Hausman (1979). To assume buyers are making fully-informed choices on the tradeoff between lower operating costs and higher initial capital costs, the argument goes, implies discount rates for individuals that are much higher than discount rates observed in the market.
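The implicit discount rate argument follows from a simple net present value condition. With hypothetical numbers (ours, for illustration only), consider an efficiency upgrade costing $C = \$1{,}000$ that saves $S = \$300$ per year for $T = 10$ years. The implicit rate $r^*$ that makes a buyer indifferent solves

$$C = \sum_{t=1}^{T} \frac{S}{(1 + r^*)^t},$$

which gives $r^* \approx 27\%$ in this example. A household that declines such an upgrade while borrowing at market rates of 5–10% is revealed to discount energy savings at a rate far above its cost of capital, the pattern that Hausman (1979) and subsequent studies document.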

The primary policy response to this market failure has been the introduction of certificate programs that assign standardized ratings or labels to homes and buildings. Energy benchmarking in the US has been voluntary, except in select cities that require it (variation that could be exploited in future studies). Two programs dominate: 1) the US Environmental Protection Agency and US Department of Energy together began applying the Energy Star label to qualified new homes in 1993 and new and existing commercial buildings in 1999, and 2) the US Green Building Council adopted the Leadership in Energy and Environmental Design (LEED) program for new and existing commercial buildings in 1999. Buildings can qualify for four levels of LEED certification: certified, silver, gold, or platinum. In the European Union, sellers and landlords began making a standard energy performance certificate (EPC) available to buyers and tenants of commercial buildings in 2003, and residential dwellings in 2007. EPCs categorize the energy efficiency of properties in a range from A to G. In 2009, EPC disclosure became mandatory with several exemptions.Footnote 38

Two seminal hedonic estimates of the relation between real estate prices and energy efficiency certification are Eichholtz et al. (2010) for commercial properties and Brounen and Kok (2011) for residential. Eichholtz et al. find that for 2004–2007, US buildings with an Energy Star label rent for approximately 3% more per square foot than comparable buildings that lack the label. And, using a two-step sample selection model, Brounen and Kok estimate that for 2008–2009, homes in the Netherlands with a green EPC label (levels A–C) sell at a 3.6% premium relative to comparable properties with a non-green label (levels D–G). Several additional papers use similar approaches to find capitalization of energy efficiency certification for commercial (Chegut et al., 2014; Fuerst & McAllister, 2011; Kok & Jennen, 2012; Miller et al., 2008; Wiley et al., 2010) and residential (Kahn & Kok, 2014; Zheng et al., 2012) properties. Although the authors suppose, based on their capitalization results, that public provision of information on energy efficiency plays an important role in the investment decisions of households and firms, the cross-sectional designs in these papers do not allow them to estimate the independent effects of the introduction of the respective information provision programs that they study.

More recently, Aydin et al. (2020) show capitalization in their sample using instrumental variables and repeat sales analyses to mitigate the potential for omitted variables to bias their estimates. In particular, they find that a 10% increase in efficiency increases the value of a home by 2.2%, which they show represents nearly perfect capitalization of the energy savings. However, using multiple approaches they are unable to reject the null hypothesis that the provision of an EPC rating has no effect on the buyer’s valuation of the energy efficiency of the dwelling. Regarding the energy efficiency gap, their results indicate that home buyers make optimal, full-information decisions at the initial purchase, but perhaps not for capital investments during their tenure (p. 15).Footnote 39 Based on these new results, Aydin et al. question the continued need for the EPC program. However, this study is limited to one asset class (residential housing) in just one European country (the Netherlands), the population of which represents less than 5% of the population of the European Union.

The information issues raised in the literature on energy efficiency capitalization reviewed above are similar to the frictions that may limit capitalization of hurricane mitigation measures. Because some mitigation features are not readily apparent to buyers and sellers have an incentive to misrepresent the storm resistance of their properties, third-party certification may be required to achieve credible disclosure. While jurisdictions in storm-prone regions have stopped short of requiring that sellers provide mitigation reports to buyers, since 2003 Florida has required that homeowners insurers provide discounts for home features that reduce wind damage and loss. To qualify for this discount, homes must undergo a windstorm mitigation inspection from a certified inspector.

While various papers, such as Bin et al. (2008) and Nyce et al. (2015), estimate the capitalization of homeowners insurance premiums, we find just one, Gatzlaff et al. (2018), that examines the effect of information from an inspection. Using a two-stage treatment effects model, they find that verifying mitigation features through a certified inspection increases home prices in Miami-Dade County by 4.2–10.4% for 2007–2011, with their preferred estimates falling at the lower end of the range. As expected, inspection significantly increases the implicit price of hidden features, and, perhaps surprisingly, also visible features. The effect of verification on the implicit price of obvious home features is consistent with the literature reviewed above on the (re)disclosure of public information on local area hazards, but also with simple capitalization of state-mandated insurance premium discounts. The experimental design in the paper does not allow the authors to distinguish capitalized insurance premium discounts from risk mitigation benefits. However, evidence in the paper suggests that estimated price premia are dominated by the former.

Conclusion

This part of the paper reviews articles on the role of information issues in trading strategies and outcomes in the private property market. After introducing the foundational theory, we present a substantial body of empirical evidence of market failures due to information frictions. Research shows that prices may not fully reflect all available information, and that market participants are able to form listing, leasing, and investment strategies based on informational advantages. Consistent with economic theory, innovations and interventions that reduce information asymmetry are found to decrease the price of properties with latent defects, increase average price, and reduce price dispersion.

This review raises substantial unresolved issues in the empirical literature, beginning with research designs that rely on proxy variables to identify asymmetric information. In our assessment, construct validity is a major challenge for the ability of papers in this line to produce useful inferences. We describe fourteen proxy variables taken from the literature. When researchers have tested the assumptions that underlie some of these measures, results suggest they capture other correlated factors beyond information frictions. For still others, their general use is limited because the underlying assumptions are highly context dependent. We conclude that more research is needed to assess the validity of the proxies, and researchers using these measures should acknowledge the potential ambiguity inherent in the interpretation of their results.

This review raises additional concerns with alternative research designs that leverage exogenous shocks. To begin with, the interpretation of null results is a challenge inherent to papers that examine releases of new information. We describe how researchers can typically only speculate whether a null effect indicates the new information is simply not salient or that a friction prevents market participants from responding. For papers that examine stochastic events, the literature is largely evolving from the study of purely random shocks to random shocks with drift. Most studies use recent occurrences as “new information,” but lack any measure of previous expectations. We suggest that what matters is not catastrophic events per se but the deviation between past experience (used to form expectations) and recent events (which update expectations). There is a need for research that models the updating process with inferences from new information and determines whether the magnitude of a documented effect on property values is appropriate.

Lastly, this review reveals several gaps in the literature and we highlight a few here. Regarding market studies, while a substantial number of theoretical and empirical papers examine information-based listing strategies, the analogous literature on leasing strategies is mostly conceptual and more empirical testing is needed. Also lacking are attempts to assess the relative merits of information versus behavioral explanations. The field would benefit from more examination of the many instances in this review when stylized facts and statistical findings are consistent with multiple explanations.

Regarding policy studies, there is a need for both theoretical and empirical examinations of the efficiency implications of the historic shift from caveat emptor to caveat venditor as the principle governing residential transactions in the US. And while the push for new property disclosure statutes may have run its course, condition waivers and disclaimers constitute an evolving policy area that has not been examined in the literature. Beyond the US market, there is a need for more work that examines the effect of tenure security laws on tenure choice in settings of asymmetric information. The international markets also present some of the best opportunities for additional research examining mandated versus voluntary provision of information on energy efficiency that might better inform the policy debate.

Part 2: Public Markets

Selected Theory

Information economics provides the intellectual foundation for the literature on corporate finance and governance. The literature has not developed models specific to public real estate firms as a subject distinct from listed companies in general and there are several extensive review articles that cover the general theoretical literature. Our purpose in this section is to provide a more directed survey of just those articles cited commonly in the empirical research on public firms that invest in real property as their primary business. Table 8 lists the selection of theory papers that we cover. Some of the general surveys are indicated at the bottom of the table for the interested reader.

Table 8 Theory Papers (Part 2), Public Real Estate Markets

Although agency problems are generic to society, the classic example of an agency relationship may be the contract between shareholders (principals) and corporate managers (agents). Agency theory of the firm assumes that managers make choices to maximize their own utility and may behave opportunistically when their interests conflict with those of shareholders (moral hazard). However, a misalignment of preferences between managers and shareholders alone is insufficient to generate managerial moral hazard. An additional assumption is that information frictions interfere with observation of the actions, or characteristics, of managers. Hence, the managerial agency problem is to design a compensation contract that incentivizes the manager to behave in a manner consistent with shareholder preferences when information is asymmetric.

Ross (1973) is the first published paper to formally analyze the generic principal-agent problem with moral hazard. It derives the optimal compensation scheme and the welfare loss that results from the need to motivate the agent. While not the original work, Jensen and Meckling's (1976) agency theory of the firm is the paper most cited as the source of hypotheses in the papers we review. Jensen and Meckling influentially identify three agency costs of equity: monitoring costs, bonding costs, and residual losses.Footnote 40 The paper does not develop specific optimal compensation schemes; they are simply assumed to exist. Instead, it argues that because monitoring is costly and subject to decreasing returns, any contractual mechanism to align interests will be imperfect and result in residual welfare loss. Testable implications include that agency costs are positively related to measures of monitoring and contracting complexity, which implies a negative relation with firm value.

If the origin of the managerial agency problem is the separation of ownership and control in a Berle and Means-type corporation,Footnote 41 then it follows per Jensen and Meckling (1976) that ownership by management should better align manager and shareholder interests and reduce agency costs. Accordingly, an additional hypothesis that empirical researchers attribute to Jensen and Meckling (1976) is that a firm’s market value increases with ownership by firm insiders (directors and officers). This idea is referred to alternately as the “incentive alignment” or “convergence-of-interest” hypothesis in papers.Footnote 42

The sources of information asymmetry are mostly exogenous in the models covered in Part 1 on private markets: e.g., buyers are less aware than sellers of property characteristics, and landlords do not know the traits of potential tenants. In the literature on public markets, information asymmetries can also be endogenous. A classic example raises a potential downside to insider ownership. The literature has long considered that managers may increase their market power by raising barriers to the entry of competitive managerial teams. Demsetz (1983), Fama and Jensen (1983), and Shleifer and Vishny (1989) are typically credited for theorizing that managers may undertake investments specific to their own skillsets in order to make it more costly for shareholders to replace them. A similar idea modeled in Edlin and Stiglitz (1995) is that managers may simply invest in activities for which information asymmetries are particularly large. Once entrenched, regardless of how, managers can demand above-market compensation and consume excessive perquisites.

The empirical literature has considered the possibility that managerial entrenchment activities are associated with large equity stakes. The possible costs of inside ownership are described in papers under the umbrella term "entrenchment hypothesis." With the potential for offsetting effects (incentive alignment versus entrenchment), the consensus view in the literature seems to be that economic theory cannot reasonably predict the shape of the relation between management ownership share and market value.

Turning from corporate governance to corporate finance, the empirical real estate literature on agency problems commonly cites Jensen (1986) as providing the theoretical basis for the testing it performs. Jensen argues that managers with substantial amounts of free cash flow are more likely to engage in non-optimal activities, such as investing in negative net present value projects (empire building, or growth for growth's sake) or excessive perquisites (gold plating), at the expense of firm performance. This idea is commonly referred to in papers as the "free cash flow" problem, and it has received significant empirical attention. A view in Jensen (1986) is that corporate debt is a mechanism that can reduce the agency costs of free cash flow. Accordingly, the article is commonly cited as providing a theoretical explanation for the stylized fact that firms underperform following secondary equity offerings: i.e., the market responds because management chose to issue equity instead of debt.Footnote 43

The body of empirical evidence we review shows that real estate firms issue securities under conditions of imperfect or asymmetric information. As shown by Leland and Pyle (1977), the influential Modigliani and Miller (1958) theorem, under which corporate financing decisions (for example, the choice of a leverage or dividend policy) are irrelevant to the market value of a company in the absence of tax and bankruptcy distortions, does not hold in the presence of information frictions.Footnote 44 In the remainder of this theoretical introduction, we introduce explanations based on asymmetric information for empirical regularities from the corporate finance literature.

We begin with analytical explanations for the stylized fact that initial public offerings (IPOs) are underpriced on average. Among the various theories of IPO underpricing that have been developed, Rock's (1986) "winner's curse" model and its extensions have received the most attention in terms of empirical testing. In the Rock model, there are two types of IPO bidders, informed and uninformed. Meanwhile, the issuer is risk averse and unsure of the firm's true value. Because informed investors do not participate when an offering is overpriced, uninformed investors purchase a disproportionate share of offerings that underperform. The uninformed investors are aware of their predicament and in the extreme case unwilling to bid. Assuming informed demand is insufficient to take up all shares of even an attractive offering, the risk-averse firm has to price an issue at a discount to keep uninformed investors in the market and ensure full subscription.
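To illustrate the mechanism, the following minimal simulation (with made-up parameters, not drawn from Rock's formal model) shows why uninformed investors earn negative conditional profits when issues are priced at expected value:

```python
# A stylized numerical illustration of the Rock (1986) winner's curse;
# the value distribution and rationing rate are assumptions for exposition.
import numpy as np

rng = np.random.default_rng(1)
value = rng.uniform(8, 12, 100_000)   # true per-share values
offer = 10.0                          # offer priced at the unconditional mean

# Informed investors bid only on underpriced issues (value > offer), so
# uninformed bidders are rationed on good issues but receive full
# allocations of bad ones.
alloc = np.where(value > offer, 0.5, 1.0)
profit = np.average(value - offer, weights=alloc)
print(f"uninformed profit per share: {profit:+.2f}")  # about -0.33
```

Pricing at the unconditional mean leaves the uninformed bidder with a loss on average, so the issuer must discount the offer until the allocation-weighted expected profit is non-negative.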

The adverse selection problem in Rock (1986) is a standard lemons story à la Akerlof (1970), and IPO underpricing is another good example of why researchers do not treat information frictions as simple transaction costs. Beatty and Ritter (1986) are typically credited with providing the primary testable implication of the winner's curse model. They formalize the notion implied by Rock that difficult-to-value issues will be underpriced to a greater extent than issues that are less opaque. Accordingly, empirical testing has involved estimating the relation between underpricing and an ever-growing number of proxies for value uncertainty and asymmetric information that are thoroughly described in the research reviews listed at the bottom of Table 8.

Assuming the same information structure as winner's curse models, another set of papers develops the conditions necessary to induce asymmetrically informed investors to reveal their true private valuations of the firm to underwriters during the book-building process. These works are referred to in the literature as information revelation or book building models. Instead of compensating for adverse selection, underpricing, combined with IPO allocation, in these models incentivizes investors to provide information in the static case (Benveniste & Spindt, 1989; Benveniste & Wilhelm, 1990; Spatt & Srivastava, 1991), or to produce information in a dynamic setting (Chemmanur, 1993). Book building models predict underpricing and also that share allocations will favor the underwriter's regular customers. Hanley (1993) points out an additional testable implication suggested by the model in Benveniste and Spindt (1989): the offer price only partially adjusts to new information revealed during book building, meaning that the difference between the final offer price and the midpoint of the offer price range disclosed in the prospectus should be positively related to the amount of underpricing on the first day of trading.
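As a concrete illustration of the partial adjustment prediction, the following sketch computes the two quantities Hanley (1993) relates; all prices are hypothetical:

```python
# Hypothetical prospectus file range, final offer price, and first-day close
lo, hi = 14.00, 16.00
offer, first_close = 17.00, 19.55

midpoint = (lo + hi) / 2
revision = offer / midpoint - 1          # revision from file-range midpoint
underpricing = first_close / offer - 1   # first-day offer-to-close return

print(f"price revision: {revision:+.1%}")    # +13.3%
print(f"underpricing:   {underpricing:+.1%}")  # +15.0%
```

Partial adjustment predicts a positive cross-sectional relation between the two: offerings revised upward during book building tend to be the ones that jump most on the first day of trading.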

In winner's curse and information revelation models, the information asymmetry is between informed investors and the other parties to the transaction. In the next grouping of models from Table 8, the issuers of IPOs are assumed to have superior information about the value of their expected future cash flows, and underpricing is used as a mechanism to credibly signal firm quality: i.e., Allen and Faulhaber (1989), Grinblatt and Hwang (1989), and Welch (1989). In a two-period model, firms raise equity through IPOs and subsequent SEOs. There are two types of issuers, high-quality and low-quality. Although investors are unable to distinguish firm type at the IPO stage, quality is revealed stochastically prior to the SEO. Underpricing creates an imitation cost for low-quality firms, which run the risk of not recouping the discount at the SEO. This cost of underpricing can induce low-quality firms to truthfully reveal their quality at the IPO stage. Like the other IPO models discussed above, signaling models predict a positive relation between underpricing and proxies for value uncertainty and asymmetric information. These models also produce a broad set of hypotheses regarding the pricing of SEOs conditional on pricing at the IPO stage.

We turn next to short-run stock price patterns accompanying the announcement of a seasoned equity offering (SEO). It is a stylized fact that the market reacts to an SEO filing with a negative abnormal return, and that the return is larger in magnitude than what is associated with a debt issuance. "Pecking order theory" (Greenwald et al., 1984; Myers, 1984; Myers & Majluf, 1984) provides an explanation based on adverse selection that has received substantial attention in the empirical literature.Footnote 45 In the typical model, the objective of the firm's financial managers is to maximize the full-information value of existing shares (unlike in principal-agent models, which assume that a firm's financial managers act to maximize their own utility). The information asymmetry is between managers who know the true value of the firm and outside investors who know only its probability distribution. The flotation method is direct public sale: in other words, the book-building process is not modeled and the firm makes no attempt to certify quality. The authors argue that managers only issue new equity when they believe the current stock price is high, because issuing when undervalued would dilute the ownership shares of existing investors. In the separating equilibrium, the pool of issuers is adversely selected because some undervalued firms choose not to sell shares. The model explains a negative reaction to the issuance of equity, and also why the reaction is more negative when the information asymmetry between managers and outside investors is greater.

An additional empirical regularity commonly studied is that the longer-run performance of firms following SEOs is worse than that of comparable non-issuing firms. The most popular explanation is that a firm's choice of capital structure conveys information about the firm's unobserved earnings. Following Smith (1986), researchers call this argument the "implied cash flow hypothesis." The idea traces back to Ross (1978), although researchers more often cite the dividend model in Miller and Rock (1985) as its root. The information structure is again one of informed managers and less informed outside investors. The model assumes that managers face incentives to temporarily run up the stock price by engaging in less external financing than the market was expecting. This practice favors current investors over future investors: the price eventually falls, but that does not matter to shareholders who sold at the inflated price. The hypothesis, accordingly, is that when managers choose to raise additional funds, they signal to investors an impending deterioration in operating performance. Researchers of public real estate firms point out that the stylized facts around post-issuance performance are also consistent with the idea in pecking order theory that financial managers time the market. A difference is that in pecking order models a pooling equilibrium is possible in which the announcement of an equity offering does not convey information to outside investors that the stock is overvalued. In implied cash flow models, in contrast, external financing is intrinsically bad news.

Finally, we discuss analytical examinations of trading in financial markets. The dominant approach treats trading costs (bid-ask spreads) as an informational phenomenon.Footnote 46 The information structure here is similar to the setup in the winner's curse class of models. Dealers face two types of traders: informed investors with nonpublic information and liquidity investors willing to pay a premium for an immediate trade. Researchers model the dealer choosing a bid-ask spread to maximize the difference between expected gains from liquidity traders and expected losses to informed traders. Seminal examples include Copeland and Galai (1983), Glosten and Milgrom (1985), Kyle (1985), and Admati and Pfleiderer (1988). A general hypothesis attributed to these papers is that bid-ask spreads are positively related to information asymmetry. While this hypothesis is not tested in the real estate literature, we cover it here because it justifies the use of the spread as a proxy variable for asymmetric information in information studies.

Institutional Characteristics of REITs

Real estate investment trusts, or REITs, are companies that own and operate real estate-related assets and are legally organized as pass-through entities for tax purposes. In this section we present arguments and evidence from the literature listed in Table 9 for how information flows and principal-agent relationships differ in REITs compared with traditional public firms. We focus on equity REITs, which invest primarily in real property, as opposed to mortgages or mortgage-related securities.Footnote 47 The literature consistently supports the view that the regulations governing REITs create an experimental setting in which asymmetric information is reduced. For agency costs, however, some effects of the regulatory regime may be countervailing, and we conclude that the net regulatory effect is conceptually ambiguous.

Table 9 Institutional Characteristics of REITs

The fact that REITs do not pay corporate income taxes is their clearest differentiator. For our purposes, the primary implication of the tax exemption pertains to whether researchers can attribute the effects of equity issuance on firm value in REIT studies to information and agency costs rather than to alternative explanations. According to the corporate finance literature discussed in the "Selected Theory" section for Part 2, if a security offering affects a stock's price, the relation is due to bankruptcy, tax, or information distortions. We discuss below how REIT investors face low expected bankruptcy costs. Whether the tax exemption means that tax effects on capital structure are effectively de minimis for REITs has been a matter of debate in the literature. Three older papers on REIT capital policy reach conflicting conclusions.Footnote 48 For more recent empirical evidence, we refer to Barclay et al. (2013), who compare debt ratios for taxable and nontaxable real estate firms against those of industrial companies. They find that real estate firms choose leverage levels that are about 16 percentage points higher than industrials on average. However, the difference in differentials for taxable and nontaxable real estate firms (15.6 versus 16.2 percentage points over industrials, respectively) is economically small and not statistically significant. Because tax effects on capital structure appear small, researchers may find greater heterogeneity to exploit in the regulations required to maintain tax-exempt status, which we discuss next, than in the exemption itself.

REITs must meet several requirements to maintain their tax exemption. Researchers typically highlight three of these when developing implications of the institutional characteristics of REITs. Before describing the arguments and evidence in more detail, we provide a summary statement in brackets of the expected effect from the literature of each regulation on information asymmetry and agency costs.

Regulation 1: REITs are pass-through entities. REITs must distribute at least 90% of their taxable income to shareholders annually as dividends (95% before 1999). [Reduces agency costs, positive effect on information].

The distribution requirement has two primary consequences. To begin with, the requirement may reduce monitoring costs by limiting the free cash flow available to managers. That said, tax law is not the sole determinant of REIT dividend policy: although large depreciation deductions allow REITs to legally retain a substantial share of accounting net earnings, most choose not to.Footnote 49 Among other rationales, researchers attribute the payment of excess dividends to reducing the agency costs of free cash flow (McDonald et al., 2000; Downs et al., 2000; Hardin and Hill, 2008). In the final assessment, however, researchers seem to agree that conflicts of interest between shareholders and managers over payout policies are less severe for REITs than for traditional firms.

Second, because their ability to finance investments internally is constrained, REITs are considered more reliant on external financing than traditional firms. More frequent trips to the capital markets, researchers argue, should increase financial disclosure and decrease information asymmetry (Ghosh et al., 2000b; Ott et al., 2005). Empirical results in Devos et al. (2019) support the idea that information asymmetry, as measured by the bid-ask spread, falls when REITs access capital markets.

Regulation 2: REITs cannot be closely held. Companies must have at least 100 shareholders, and no more than 50% of any company’s stock can be held by five or fewer individuals (the “5–50” rule). [Worsens agency costs].

The 5–50 rule may tend to worsen agency conflicts by reducing the effectiveness of both external (takeover threat) and internal (monitoring) governance mechanisms. On the external side, researchers argue that the market for corporate control provides important shareholder protection against value-destroying actions by management. However, this view has attracted prominent counterarguments.Footnote 50 Supporting the idea that REITs are more likely to have entrenched management, an early study by Campbell et al. (1998) finds no evidence of hostile takeovers in the REIT sector. And Chan et al. (2003) attribute the apparent lack of takeover activity to excess share provisions placed in REIT charters to ensure compliance with tax law. In opposition, legal experts on mergers and acquisitions (M&A) caution that an excess share provision does not immunize a REIT against a hostile bid, e.g., see Einhorn et al. (2006).Footnote 51 And more recently, Glascock et al. (2018) find that the share of hostile takeovers for REITs (13/883 = 1.4%) is comparable to that of conventional firms (590/35,727 = 1.7%) in their study of M&A activity for 1980–2016.

In terms of internal governance mechanisms, researchers argue that the 5–50 rule may increase the coordination costs involved in monitoring REIT managers by reducing the number of non-institutional blockholders. A blockholder is a shareholder who is influential due to the size of their ownership stake, conventionally defined as 5% or greater. REIT researchers generally argue that blockholders, both institutional and non-institutional, improve monitoring.Footnote 52 Empirical results support the idea that REITs have diffuse ownership. For example, Hartzell et al. (2010) report that the median REIT does not have a non-institutional blockholder and that average aggregate non-institutional block ownership is 5.4% in a sample of 142 equity REITs for 1995–2003.Footnote 53 Although the effect of the 5–50 rule on monitoring is difficult to examine directly, it seems reasonable that dispersed share ownership may increase the bargaining power of managers.

Regulation 3: The principal business of REITs must be real estate. REITs are required to invest at least 75% of total holdings in real estate (or cash and government securities) and derive at least 75% of gross income from real estate investments. [Positive effect on information, reduces agency costs].

The asset requirement has several consequences for the research that is our focus. First, the tangible nature of REIT investments creates an experimental setting, of firms with transparent cash flows and straightforward business plans, in which the effects of asymmetric information on investment decisions are reduced. Empirical evidence is consistent with the view that information asymmetry is less economically meaningful for investment in REITs than in traditional firms. Downs and Güner (2006) find that analysts forecast funds from operations (FFO) for REITs more accurately than they forecast earnings per share (EPS) for comparison groups of unregulated firms, and Bertin et al. (2005) and Cannon and Cole (2011) find lower bid-ask spreads for REITs, a common proxy for information asymmetry.Footnote 54 Finally, Asem et al. (2022) find that abnormal trading volume by institutional investors is higher when industrial firms change dividends than when REITs do so. In spite of this documented transparency, we note that researchers still report positive relations between REIT value and analyst coverage of funds from operations (Devos et al., 2007) and net asset value (Letdin et al., 2019). The findings on analyst coverage suggest that the regulatory environment of REITs mitigates, but does not eliminate, information frictions.

Second, and closely related to the first, researchers beginning with Capozza and Seguin (2003) and Gentry and Mayer (2006) argue that they can estimate the relative market value of REIT assets with less error than those of non-regulated companies. Tobin’s q, the ratio of an asset’s market value to its replacement value, is a commonly used outcome variable in studies of information and agency issues.Footnote 55 The estimation of replacement value needed to calculate q is relatively straightforward in REIT studies, especially compared with studies in which firms may hold substantial intangible assets. In general, when a dependent variable is measured with error, researchers may defend the validity of their results by claiming that their coefficient estimates remain unbiased and consistent. However, that assertion does not hold for nonadditive noise. And noise in the numerator or denominator of a scaled accounting variable like Tobin’s q (or ROA, ROE, etc.) is inherently nonadditive.Footnote 56
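The econometric point can be seen in a small simulation (our construction, not from any cited paper): additive mean-zero noise in the outcome leaves the OLS slope unbiased, while multiplicative noise, of the kind produced by a mismeasured denominator, scales the estimate away from the truth:

```python
# Simulated regressions of a q-like outcome on one regressor (true slope 0.5)
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
q = 1.0 + 0.5 * x                                  # true relation

q_add = q + rng.normal(scale=0.3, size=n)          # additive noise in the outcome
q_mul = q * np.exp(rng.normal(scale=0.3, size=n))  # mismeasured denominator

slope = lambda y: np.cov(x, y)[0, 1] / np.var(x)
print(f"additive noise:       {slope(q_add):.3f}")  # about 0.500
print(f"multiplicative noise: {slope(q_mul):.3f}")  # about 0.523 = 0.5*exp(0.3**2/2)
```

Even though the denominator error is mean-zero in logs, it scales the estimated coefficient, which is the sense in which nonadditive noise undermines the usual unbiasedness defense.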

Third, because REITs invest in real assets and are conservative in their use of leverage (relative to private real estate investors), REIT bankruptcies are rare. Researchers argue that when REITs issue equity, the decision is relatively unaffected by the transfer of risk from bondholders to shareholders described by Galai and Masulis (1976). Together with the limited effects of tax distortions discussed above, this suggests that REITs can provide an experimental setting for studying information and agency issues in security offerings that is less contaminated by other capital structure effects than those in studies of non-regulated firms. Assuming that tax and bankruptcy distortions are not economically meaningful for REITs, when results reject a null hypothesis of Modigliani and Miller (1958) irrelevance, asymmetric information is considered the last alternative hypothesis standing.

Finally, the asset requirement may also reduce monitoring costs by attracting institutional investors, which are considered by researchers to be more effective monitors than retail investors.Footnote 57 It is conventional wisdom that REITs are a popular vehicle for portfolio diversification among large investors because they provide stable income (Devos et al., 2013). As reported in Feng et al. (2011), the fraction of market capitalization of REITs held by institutional investors increased from 25% in 1993 to more than 64% in 2009, following passage of the Omnibus Reconciliation Act of 1993.

Corporate Finance of REITs

We discuss information frictions and REIT security offerings in three sections that cover "Initial Public Offerings (IPOs)", "Seasoned Equity Offerings (SEOs)", and "At-the-Market Offerings (ATMs)". (Note that, consistent with our focus throughout the paper on equity investment, we cover only offerings of public equity for cash and mostly do not discuss findings on debt offerings.) Like conventional firms, REITs raise external funds to finance their investments and modify their capital structures. Because they cannot retain taxable net earnings, REITs are reliant on the capital markets for funding and are active follow-on issuers, including through ATM programs. For perspective, REITs raised a total of $385 billion of common equity between 2009 and 2019: $28 billion through IPOs, $297 billion through SEOs, and $60 billion through ATMs (Schnure, 2021).

Initial Public Offerings (IPOs)

Like conventional firms, REITs typically market their IPOs through firm commitment underwritings. The phrase “firm commitment” means the underwriter (an investment bank) purchases the shares from the issuer at a negotiated discount and re-sells them to investors at the public offering price. Between the launch and closing days, the issuing firm’s management conducts roadshows during which they meet with institutional investors, and the underwriter undertakes bookbuilding, during which they determine the price and allocation of shares based on indications of interest. This process seems prone by nature to information asymmetry and agency issues, and indeed it has proven to be a fertile setting for studies of information frictions in the public markets.

It is a stylized fact that IPOs are underpriced on average. The most common measure of underpricing is the return on buying a share at the offering price and selling it at the closing price on the first day of trading.Footnote 58 Based on average offer-to-close return, the IPOs of "common" US companies—i.e., excluding REITs, closed-end funds, banks, and S&Ls—are underpriced by 18.9% for 1980–2021 according to statistics reported annually by Ritter (2022).Footnote 59 That amounts to a cumulative $230 billion left on the table by underwriters and their issuers. Underpricing represents the most important indirect flotation cost associated with an initial security offering according to Eckbo et al. (2007, p. 262), and from an efficiency standpoint, underpricing indicates that shares may not have been allocated to those who value them most (Wilhelm, 2005, p. 56).
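For concreteness, the standard measure and the "money left on the table" calculation are sketched below with hypothetical prices chosen so the return matches the 18.9% average:

```python
# Hypothetical offering: prices chosen so the return equals the 18.9% average
offer_price, first_close = 20.00, 23.78
shares_offered = 10_000_000

underpricing = first_close / offer_price - 1               # offer-to-close return
money_left = (first_close - offer_price) * shares_offered  # proceeds forgone by the issuer

print(f"underpricing: {underpricing:.1%}")       # 18.9%
print(f"left on the table: ${money_left:,.0f}")  # $37,800,000
```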

Prominent explanations for IPO underpricing are based on asymmetric information. A challenge facing researchers is that the information theories we cover are not mutually exclusive. Typical empirical strategies involve explaining first-day returns using heterogeneity in various firm, security, and offering characteristics that are plausibly associated with the form of information asymmetry and valuation uncertainty emphasized in a particular explanation. A large number of explanatory variables have been examined. Interested readers can refer to Table 8 in Eckbo et al. (2007, pp. 276–279) for a summary of the evidence that had accumulated as of their publication date.

Turning to REIT offerings, the articles listed in Table 10 generally find that REIT IPOs are underpriced, but substantially less so than those of conventional firms that go public contemporaneously. Researchers explain the differential based on the relative transparency of REITs. REIT offerings from the 1970s and 1980s are, collectively, an exception to the underpricing pattern, but nonetheless a case in point on transparency. Because REITs during this period were not allowed to actively manage the properties they held, the early REIT IPO market may have been at its lowest levels of uncertainty and information asymmetry. In addition, underpricing in the broader IPO market was much lower at this time than in subsequent years. Considering the combination of these two trends, it is perhaps not surprising that REIT IPOs during this era are found to be either correctly priced (Below et al., 1995b) or even overpriced (Wang et al., 1992) in published papers.

Table 10 Equity Offerings by REITs

The modern REIT era in the US is marked by two events, the Tax Reform Act of 1986, which relaxed restrictions on active management, and the creation of the first umbrella partnership REIT, or UPREIT, in 1992, a structure that allows a property owner to exchange their interest in real property for share ownership in a REIT. These changes presumably increased uncertainty surrounding the valuation of REIT IPOs and coincided with increasing participation in the market segment by institutional investors. The position taken in the literature is that the combination of these two trends should make the modern REIT IPO market more susceptible to information asymmetries than earlier offerings. Studies of IPOs from the 1990s and later support this view, finding average underpricing of about 2–3% for REITs in the US (Buttimer et al., 2005; Ling & Ryngaert, 1997), Europe (Brounen and Eichholtz, 2001a, b), and Asia (Ooi et al., 2019). While these offer-to-close returns represent a substantial amount of money left on the table, they are an order of magnitude less than the underpricing observed in the broader IPO market. A general takeaway from these studies is that the lower underpricing observed for REIT IPOs, given the relative transparency of their stocks, underscores the relevance of information-based explanations for IPOs in general.

The strategies used to test specific theories of IPO underpricing in the empirical REIT literature are the same as those utilized in the general corporate finance literature, that is, explaining first-day return using heterogeneity that might affect the efficiency of the capital-raising process. As expected, researchers find that underpricing is negatively associated with underwriter reputation (Ling & Ryngaert, 1997), and positively associated with variance in returns in the period after issuance (Brounen and Eichholtz, 2001a, b), ex ante and ex post proxy variables for valuation uncertainty, respectively. Issue size is typically not statistically significant in the papers we review. In addition, results in both Ling and Ryngaert (1997) and Ooi et al. (2019) indicate that greater underpricing is necessary to attract institutional investors. These cross-sectional findings support the view that information asymmetry affects REIT IPO pricing, even if less so than for common stocks. However, the results are consistent with winner's curse, information revelation, and signaling models. So, no particular information-based explanation is especially elevated by the findings for REIT IPOs.

Seasoned Equity Offerings (SEOs)

Like conventional firms, REITs return to the market subsequent to an IPO to raise additional funds through SEOs. And like IPOs, SEOs are typically marketed through the firm commitment underwriting process and underpriced on average. Researchers commonly cite Loderer et al. (1991) as providing justification for extending to SEOs the asymmetric information theories that have been advanced to explain IPO underpricing. In particular, they argue that researchers should expect a positive relation between SEO underpricing and proxies for value uncertainty and asymmetric information. Corwin (2003) points out, however, that the information problems identified in these models are likely to be less impactful for seasoned firms than for firms going public, a result borne out in the data.

In studies of SEO underpricing, researchers typically calculate the return from the first prior trading day’s closing price to the offer price, multiplied by negative one. Based on average close-to-offer returns reported in the literature, the discounting of SEOs for traditional companies has been 1) less than the underpricing observed for IPOs, and 2) increasing over time. For example, Eckbo and Masulis (1992) report underpricing for industrial firms of 0.4% for 1963–1981. In Corwin (2003), underpricing is 1.3% for the 1980s and 2.9% for the 1990s. And recently, Bordeman et al. (2021) find that SEOs are underpriced by 3.2% for 1984–2019.
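A quick sketch of the measure, with hypothetical prices:

```python
# Hypothetical SEO: prior trading day's close and the negotiated offer price
prior_close, offer_price = 25.00, 24.20

# Close-to-offer return multiplied by negative one, so a discount is positive
seo_discount = -(offer_price / prior_close - 1)
print(f"SEO underpricing: {seo_discount:.1%}")  # 3.2%
```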

Some articles on REIT SEOs and IPO-SEO pairs are listed in Table 10, beginning with Ghosh et al. (2000a, p. 364), who argue that it is ambiguous a priori whether SEOs by REITs should be more or less underpriced than those of traditional firms. Their reasoning begins with the reliance of REITs on external financing. On one hand, REITs should be less underpriced than common firms, under the assumption that more frequent trips to the capital markets increase the production of public information and decrease valuation uncertainty. On the other hand, because REITs issue shares more frequently, they may have to offer greater incentives through underpricing to sustain interest by institutional investors.

Results in Ghosh et al. (2000a) show that REIT SEOs are underpriced, but seemingly to a lesser degree than contemporaneous SEOs of common companies. They document underpricing of just 1.1% based on close-to-offer returns, and 0.7% based on offer-to-close returns, in a sample of 178 REIT SEOs for 1990–1996. More recently, Goodwin (2013) observes discounts of 1.2% and 1.8%, respectively, in a sample for 1994–2006. These findings for REIT SEOs, relative to the results for common companies, emphasize the importance of the relative transparency of REITs over other, potentially offsetting, effects of their institutional characteristics.

Examinations in the literature of the time-series and cross-sectional variation in SEO outcomes yield results that are consistent with winner's curse, information revelation, and signaling models of underpricing. Similar to the IPO literature, Ghosh et al. (2000a) find that SEOs after 1990, when value uncertainty for REITs increased, are more underpriced than those before. In addition, underpricing is decreasing in underwriter reputation and increasing in institutional investor share. And Goodwin (2013) finds that value uncertainty is positively associated with underpricing, as expected. Lastly, Ghosh et al. (2000b) report results on IPO-SEO pairs that lend a measure of support to signaling models. For example, they find that REITs that underprice their IPOs more heavily are more likely to issue SEOs sooner. However, their results reject the hypothesis from signaling theory that REITs with greater IPO underpricing experience smaller negative market reactions on the first subsequent SEO. So, as with IPOs, our assessment is that no particular information-based explanation for SEO underpricing appears especially elevated by the findings for REITs.

Two additional, closely related empirical regularities regarding SEOs have been the subject of substantial research effort: 1) the operating performance of firms falls following an issuance, and 2) markets react negatively on average to the announcement of an equity, but not a debt, issuance. Although the first fact can provide an explanation for the second, most of the papers we review focus on the second, and so we begin our coverage there. Researchers commonly measure the short-run market reaction to a new security offering using the abnormal return over a two- or three-day window around the announcement filing date. The abnormal return is the difference between the stock's actual and expected returns, where the expected return is estimated using either a statistical model (market, market-adjusted, or mean-adjusted) or an economic model (e.g., the Fama-French three-factor or Carhart four-factor model).Footnote 60
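A minimal sketch of the market-model version of this calculation appears below; the function, parameter choices, and window lengths are ours for illustration, not those of any particular study:

```python
# Market-model event study: estimate alpha/beta over a pre-event estimation
# period, then cumulate abnormal returns over the announcement window.
import numpy as np
import pandas as pd

def car(stock: pd.Series, market: pd.Series, event_idx: int,
        est_days: int = 250, gap: int = 30, window: tuple = (-1, 1)) -> float:
    """Cumulative abnormal return around position event_idx in daily returns."""
    est = slice(event_idx - gap - est_days, event_idx - gap)  # estimation period
    beta, alpha = np.polyfit(market.iloc[est], stock.iloc[est], 1)
    ev = slice(event_idx + window[0], event_idx + window[1] + 1)
    abnormal = stock.iloc[ev] - (alpha + beta * market.iloc[ev])
    return float(abnormal.sum())
```

Swapping the market model for a factor model replaces the single regression with a multivariate one; the cumulation step is unchanged.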

Based on cumulative abnormal returns, Eckbo et al. (2007) report that investors discount the price of an issuer's existing shares by 2.2% on average across fifteen studies with samples spanning 1963 to 1995.Footnote 61 According to Eckbo et al., this stock price effect represents an indirect cost of issuance amounting to about 15% of the proceeds of a typical SEO.Footnote 62 More recently, Veld et al. (2020) find a mean abnormal return of -1.5% and a median of -1.9% in a meta-analysis of 199 studies published or posted by 2017. In contrast, announcements of debt offerings are typically met by non-negative market reactions. For example, Eckbo et al. report that the average abnormal return for debt offerings is a statistically insignificant -0.2% across nine studies spanning 1969 to 1993.

The literature on capital structure offers several explanations for why investors on average react negatively to equity offerings. In terms of information-based explanations, a negative market response is consistent with pecking order and implied cash flow models. In both models, rational investors understand that financial managers with asymmetric information issue equity when they believe the stock to be overpriced. In addition, the availability of investible funds following an offering may tend to worsen agency problems according to a generalized version of free cash flow theory.

Turning to the REIT literature, the papers listed in Table 10 document negative market reactions to SEO announcements, but the abnormal returns are smaller in absolute magnitude than contemporaneous values for equity issues by common companies. For US REITs, Howe and Shilling (1988) observe a cumulative abnormal return of -2.2% in a small sample of 27 SEOs for 1970–1985, Ghosh et al. (1999) report a lower value of -1.0% in a 100-observation sample for 1991–1995, and Ghosh et al. (2013) obtain a value of -1.6% for a large sample of 604 SEOs for 1990–2007. All of these returns are for the (-1, +1) filing event window. In cross-sectional regressions, underwriter rank, offering size, and insider ownership are significant and have signs consistent with the pecking order and implied cash flow hypotheses. In a study of 113 offerings across thirteen European countries, Brounen and Eichholtz (2001a, b) observe abnormal returns of -1.2% for 1990–2000. They report that issue size and property portfolio diversity are related to the price reaction as expected based on asymmetric information explanations.

Researchers commonly argue that the unique institutional features of REITs offer the potential to abstract away from non-information-based explanations for the negative stock price effect of an SEO announcement. The point of departure in analyses is typically the conventional "tradeoff theory" of optimal capital structure associated with Fama and French (1998) and Shyam-Sunder and Myers (1999). The idea is that firms borrow until the marginal benefit of tax savings on a dollar of additional debt is exactly offset by the marginal cost of financial distress. Accordingly, an equity issuance may reduce a firm's stock price if it moves the firm's debt ratio away from its optimal level.Footnote 63 To this point, Ghosh et al. (1999) conclude that the information effect of the decision by REIT managers to issue an SEO dominates tax-based explanations. In counterpoint, Brounen and Eichholtz (2001a, b) examine the heterogeneity in tax rates across European countries and report that the negative price reaction is larger in magnitude in high-tax countries (consistent with expectations from tradeoff theory), in spite of the fact that REITs do not pay corporate taxes.

In regard to the first stylized fact, researchers document trends in the longer-run performance of firms subsequent to an SEO using buy-and-hold returns and three- or four-factor regressions. Studies commonly show that firms issue equity following an increase in performance in the year prior to the offering (market timing), and then deliver poor returns in the multi-year post-offering period (our focus). Seminal articles include Loughran and Ritter (1995), McLaughlin et al. (1996), Loughran and Ritter (1997), and Ritter (2003). For post-offering performance, Ritter (2003) reports that average buy-and-hold returns for the first year following an issuance range from -1.1% to -6.1% across eight published studies with samples spanning 1961 to 1995. Ritter's own results comparing issuing and non-issuing firms are similar: examining 7,760 SEOs for 1970–2000, issuing firms average an annual return of 10.8% in the five years after an issuance, compared with 14.4% for non-issuing firms matched on size and book-to-market value, an average underperformance of 3.6% per year.Footnote 64
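The buy-and-hold comparison works as sketched below; the return series simply repeat Ritter's reported annual averages for illustration:

```python
# Buy-and-hold abnormal return (BHAR) versus a size/book-to-market match
import numpy as np

def bhar(issuer, matched):
    """Compound each return series, then difference the holding-period returns."""
    hold = lambda r: np.prod(1 + np.asarray(r)) - 1
    return hold(issuer) - hold(matched)

# Five years at the average annual rates reported by Ritter (2003)
print(f"{bhar([0.108] * 5, [0.144] * 5):+.1%}")  # about -29% cumulative
```

The 3.6% annual gap compounds to a cumulative shortfall of roughly 29 percentage points over the five-year horizon.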

Turning to REITs, the two earlier papers on post-SEO performance listed in Table 10 report divergent results. Friday et al. (2000) find flat operating performance for a sample of 200 SEOs by 112 equity REITs for 1990–1996. The authors attribute their results to the unique regulatory features of REITs. In contrast, Brounen and Eichholtz (2001a, b) document declines of 99 and 46 basis points, respectively, in the mean and median ratios of return on capital employed in the year following an SEO for 113 offerings from 1990–2000. A post-issuance slump occurs in a large share (75%) of cases. Later, Ghosh et al. (2013) carefully adjust operating cash flows for the particular features of REITs and also obtain the standard result from corporate finance that operating performance ratios improve prior to issuance and deteriorate after. They attribute the exceptional findings in Friday et al. (2000) to sample selection and measurement issues, which they elaborate on.

The cross-sectional strategies used to empirically test information-based explanations for the underperformance of REITs following SEOs are the same as those utilized in the general corporate finance literature. For example, using an approach similar to McLaughlin et al. (1996), Ghosh et al. (2013) report that free cash flow is negatively related, and the number of analysts following a REIT positively related, to changes in post-issuance performance. The negative effect of free cash flow is as expected, and the analyst effect is consistent with pecking order and implied cash flow theories. While relative issue size is not significant in Ghosh et al., the earlier study by Brounen and Eichholtz (2001a, b) finds that relatively larger issues have more negative effects, which they describe as expected given greater complexity and scope for agency problems.Footnote 65 Because the results of the cross-sectional analyses are consistent with multiple models, no particular information-based explanation is especially elevated by the findings for REIT SEOs.

At-the-Market Offerings (ATMs)

ATM equity offerings are an alternative to SEOs that allow firms to sell new shares directly into the trading flow of the secondary market. Their usage expanded substantially following regulatory reforms in 2005 and 2008.Footnote 66 In ATM programs, firms forgo formal underwriting and “dribble-out” shares over time. This enables financial managers to match the timing and amount of capital raises with investment opportunities. In addition, the issuance costs of ATMs are substantially lower than those of SEOs, about 2% versus 5% on average (Cashman et al., 2021). On the other hand, researchers emphasize that the increase in managerial control of capital raising (alternatively, the lack of external monitoring) could increase agency conflicts, and that could offset the benefits of flexibility and low costs.

ATMs have become a popular flotation method for REITs. However, research into their usage is still in a nascent state, and we find just three published articles to review, listed in Table 10. Howton et al. (2018) focus on the benefits of the financial flexibility involved in ATM programs. They find that a portfolio of 66 REITs that issued 143 ATM programs during 2006–2012 outperforms a matched sample of SEO-issuing REITs over the long run. Two additional articles examine the indirect flotation costs of REIT ATMs, which the authors attribute to agency conflicts and the increased information opacity of the firm's operations. Hartzell et al. (2019) document a negative abnormal return around the announcement of a REIT ATM offering; however, its magnitude is significantly smaller than the abnormal return associated with a REIT SEO. And Cashman et al. (2021) report in this special issue that firms face higher implied costs of equity capital following the establishment of an ATM program: the implied cost of capital is higher by 130 basis points per year during quarters in which the firm has an open ATM program for the years 2006–2015. Collectively, the results in these papers suggest that REITs that take advantage of the financial flexibility offered by ATM programs may achieve performance gains that outweigh any associated increases in agency costs. However, this is an area where more work is needed to reach a definitive conclusion.

ATMs are also a natural setting for researchers to examine the relation between information asymmetry and underwriter certification in equity raises. In the general corporate finance literature, Billett et al. (2019) use ATM offerings to test the "costly certification hypothesis" due to Chemmanur (1993), who provides a seminal theoretical treatment of investment bank reputation.Footnote 67 However, we find no analogous research effort in the published REIT literature. Given the relatively transparent nature of REIT stocks, the effects of information asymmetry on liquidity management are presumably less important, but that remains an open question at this time.

Corporate Governance of REITs

We survey research on the governance of public real estate firms in four sections. The focus is now squarely on agency issues. Indeed, we define corporate governance as the set of mechanisms that firms use to address the managerial agency problem. In each section, we begin by reviewing results from the research on traditional companies, and then we examine whether the unique features of REITs mean they have fewer agency problems in comparison.

The papers we review in the sections on "Ownership Structure" and "Composite Indexes" of corporate governance provide somewhat contradictory evidence on whether REITs are different. Papers in the section on "Financial Reporting" leverage exogenous accounting regulation changes in quasi-experimental research designs. Because the policy changes pertain only to REITs, this research cannot address the question of whether information asymmetry and agency costs are less economically meaningful for REITs than for unregulated firms. Lastly, REITs are a natural choice for studying the effects of "Firm Focus" on performance, because the investments made by REITs are readily observable. The governance connection is that monitoring costs are likely positively related to the diversity of a firm's investment opportunities.

Ownership Structure

A common empirical strategy in the corporate governance literature is to regress firm value (Tobin’s q) or performance (ROE and ROA) on separate measures of corporate governance. We begin by reviewing papers that use this approach to examine a generalized version of the Jensen and Meckling (1976) incentive alignment hypothesis in which firm value is a function of the distribution of share ownership among insiders (directors and officers), institutional investors, large block shareholders, and retail investors. Before turning to REITs, we describe how the findings in the literature for traditional firms are mixed.

Two early, prominent works on ownership structure estimate pooled regression models on panels of annual firm observations, i.e., Morck et al. (1988) and McConnell and Servaes (1990). These papers report a nonmonotonic relation between insider ownership and firm q that is increasing initially, followed by one or two turning points, with the local maximum occurring when directors and officers hold about 40% to 50% of outstanding shares. The authors interpret these results as indicating opposite-signed incentive alignment and entrenchment effects. These papers also report a positive relation between institutional ownership and q, consistent with the idea that institutional investors are more effective monitors.
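A stylized version of the piecewise-linear specification used in this strand, written with hypothetical breakpoints $c_1 < c_2$ and our own notation, is

$$ Q_i = \alpha + \beta_1 \min(O_i, c_1) + \beta_2 \min\{\max(O_i - c_1, 0),\, c_2 - c_1\} + \beta_3 \max(O_i - c_2, 0) + \gamma' X_i + \varepsilon_i, $$

where $O_i$ is the insider ownership share and $X_i$ collects firm controls. Incentive alignment dominating at low stakes and entrenchment at intermediate ones corresponds to $\beta_1 > 0$ with a negative slope over at least one higher segment.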

Other research, mostly more recent, cautions against drawing causal conclusions from models that treat ownership structure as exogenous. The concern is that results in the early papers may suffer from simultaneity and omitted variable biases in their econometric models. When raising these issues, researchers commonly cite the argument in Demsetz (1983) that “the ownership structure of the firm is an endogenous outcome of a [profit] maximizing process [by shareholders]” (p. 377). Using various methods to account for the endogeneity of ownership structure in market value regressions—i.e., firm fixed effects, instrumental variables, and simultaneous equations—several papers fail to find statistically significant relations, indicating the previously estimated effects may be spurious (Cho, 1998; Coles et al., 2012; Demsetz & Lehn, 1985; Demsetz & Villalonga, 2001; Himmelberg et al., 1999; Loderer & Martin, 1997). Collectively, these null results are consistent with the view in Demsetz (1983) that market forces hold each firm close to its individually optimal ownership structure.

Countering these critiques, Zhou (2001) cautions that the null results reported in the previous paragraph may simply represent type-II errors. Because ownership structure is sticky, a panel of annual data likely features an abundance of observations that lack meaningful year-over-year change.Footnote 68 In this setting, Zhou's concern is that researchers may fail to reject the null, not because ownership structure does not affect market value, but because the within estimator has low power to detect the effect. Avoiding this issue, McConnell et al. (2008) examine changes in value over the six-day period around announcements of insider purchases for 1994–1999. They find that firm value first increases and then decreases with increases in insider ownership, consistent with the earlier findings in Morck et al. (1988) and McConnell and Servaes (1990).

We now turn to studies of ownership structure and REIT value listed at the top of Table 11. Capozza and Seguin (2003) report a positive, linear association for a sample spanning 1985 to 1995. Interestingly, they find that returns do not vary with ownership structure after adjusting for risk. Ghosh and Sirmans (2003) estimate a simultaneous equations model with endogenous firm performance (ROE), board composition, and ownership structure. In a sample of 122 REITs for 1999, they report that ownership by internal directors and blockholders improves performance, in accord with expectations. Regarding monitoring, they find a positive, but weak, relation between performance and the share of outside directors. However, their results also indicate that CEO ownership and tenure reduce both the share and the tenure of outside directors on the board, and adversely affect performance. These findings of potential agency problems are consistent with the managerial entrenchment hypothesis. Ghosh and Sirmans conclude that "despite the various regulatory restraints that REITs operate under, the CEO exerts a greater influence on board composition and performance, than do outside directors" (p. 313).

Table 11 Corporate Governance of REITs

Consistently across multiple estimation methods (OLS, fixed-effects, and 2SLS) and specifications (quadratic and piecewise linear), Han (2006) finds that firm q increases with insider ownership at a decreasing rate until reaching a local maximum at about 33–40%, after which the relationship is decreasing.Footnote 69 The concave relation is considered indirect evidence of countervailing agency costs, perhaps due to management entrenchment. A question left unaddressed is whether market forces hold firms close to the optimal ownership structure. For REITs with large holdings by institutional investors, the relation between insider ownership and q remains positive throughout the range of ownership levels. Lastly, Hartzell et al. (2006) find that q is positively associated with insider and institutional ownership, and negatively associated with board size. However, this result is only obtained in a model with firm fixed effects, suggesting that the variation is within firms and not across them.

An implicit channel for the observed positive relation in the empirical literature between governance measures and performance for REITs is a reduction in asymmetric information among managers, directors, and shareholders. In spite of this clear connection in theory, we find only one REIT paper that directly examines the question: does good governance diminish asymmetric information? Using pooled OLS and 2SLS regressions, Anglin et al. (2011) find that director experience and audit committee structure affect the bid-ask spread in a sample of 233 observations for 109 REITs during the 2003–2006 period.Footnote 70

One of the primary rationales in papers for examining REITs is that researchers can estimate more reliable measures of Tobin’s q for REITs compared with non-regulated firms, as discussed in the previous section. Whether for this reason or another is difficult to say, but the REIT papers we review all find that insider ownership is an economically and statistically significant determinant of firm value, in contrast with the mixed results in the general literature. The REIT papers also find direct and indirect evidence of agency costs consistent with management entrenchment. A reasonable takeaway from the literature in this section is that legal and regulatory constraints on REITs matter, but cannot adequately substitute for good corporate governance.

Composite Indexes

Instead of estimating the effects of individual elements of corporate governance, another common empirical strategy in the literature is to examine a composite measure. Although the results using this index-value approach are mixed, some of the more prominent implementations, at least, report positive relations between governance strength and firm performance and value. For example, Gompers et al. (2003) is the original paper in this strand of literature. Their well-known “G-index” is an unweighted count of takeover defense provisions (external governance), so that higher index values indicate lower levels of shareholder rights. As expected, they find that the G-index is inversely related to abnormal return and firm value. Another seminal paper by Cremers and Nair (2005) extends the approach introduced in Gompers et al. (2003) by examining how internal and external governance mechanisms interact to affect firm performance. They find a negative relation between takeover defenses and firm performance that is conditional on high public pension fund (institutional blockholder) ownership.

Real estate researchers have examined whether the relation between governance index values and performance documented in the literature for traditional firms is different for REITs due to their institutional features. A selection of representative papers listed in Table 11 under the heading of "Composite indexes" begins with Bianco et al. (2007). The authors estimate the relationship between the G-index and various measures of REIT performance (ROA, ROE, and total return) separately for relatively small samples consisting of 58 REITs in 2004 and 53 REITs in 2006. They find that the G-index is significant in only one of six regressions reported in the paper, namely, when estimating ROA with the 2004 cross-section.

A limitation of the G-index is its narrow focus on takeover provisions. As well, Frankenreiter et al. (2021) recently report substantial inaccuracies in the data underlying the index. Avoiding these issues, Bauer et al. (2010) use the broader Corporate Governance Quotient (CGQ) Index from Institutional Shareholder Services (ISS) in their estimates.Footnote 71 Similar to the results in Bianco et al. (2007), the authors report that measures of firm performance (ROA, ROE, and FFM) and value (Tobin's q) are statistically unrelated to the CGQ Index in most estimates for larger samples of about 200 REITs for 2003–2005.Footnote 72 In contrast, they find strong effects in a control sample of about 5,000 traditional firms.

To sum up, composite indexes of governance appear to matter more for the performance of traditional firms than for REITs.Footnote 73 This finding is consistent with the idea that payout requirements for REITs restrain managerial opportunism. Moreover, the mitigating effects of payout requirements on agency conflicts appear to dominate any aggravating effects of the dispersed ownership induced by the 5–50 rule. In addition, the finding in the previous section that individual elements of corporate governance, especially ownership structure, matter consistently for REITs raises concerns about construct validity for composite indexes in studies of REITs. At the least, the conflicting results for governance index elements and index values call for further study.

Financial Reporting

Because reliable and accurate financial reporting is crucial for investors to make informed investment decisions, the primary policy response to asymmetric information in the public equity markets is the requirement by regulators that firms report standardized financial information to shareholders on a regular basis. A substantial literature has analyzed the quality, or informativeness, of financial reporting—see He et al. (2009), Leuz and Wysocki (2016), and Herath and Albarqi (2017) for surveys. In a subset of the literature that is especially relevant for our purposes, researchers examine quantitative measures of disclosure quality produced from computational text analyses of 10-K filings. While Campbell et al. (2014) estimate level effects over the 250 trading days after the filing event for 2005–2009, Kravet and Muslu (2013) estimate relations in changes, to mitigate concerns over omitted variable bias, using 60 trading-day pre- and post-filing periods for 1994–2007. Together, the two papers find that risk factor disclosure reduces information asymmetry (bid-ask spread and analyst forecast revisions) and increases investor risk perceptions (beta and stock return volatility) and trading volume. The results in these papers run counter to the popular perception that the text portion of an annual report consists largely of boilerplate verbiage. It seems that the discussion of risk factors, at least, conveys new information that investors use to update their estimated risk parameters for the distribution of expected future cash flows.
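
To make this class of measures concrete, here is a minimal sketch of a change-based risk disclosure measure in the spirit of these papers. It is our simplification, not the published procedure; the keyword list and function names are illustrative assumptions.

import re

# Illustrative keyword list; the published papers use longer, validated lists.
RISK_KEYWORDS = {"risk", "risks", "risky", "uncertain", "uncertainty",
                 "exposure", "adverse"}

def count_risk_sentences(filing_text: str) -> int:
    # Split the filing into sentences and count those containing
    # at least one risk-related keyword.
    sentences = re.split(r"(?<=[.!?])\s+", filing_text)
    return sum(1 for s in sentences
               if RISK_KEYWORDS & set(re.findall(r"[a-z]+", s.lower())))

def disclosure_change(filing_now: str, filing_prior: str) -> int:
    # Change-based measure: the year-over-year difference in
    # risk-sentence counts across consecutive annual filings.
    return count_risk_sentences(filing_now) - count_risk_sentences(filing_prior)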

In this special issue, Kim et al. (2021) extend the literature described above by examining the relation between changes in the quality of investment risk disclosure and information asymmetry for a sample of 211 REIT stocks during the long 1999–2016 period. Although they closely follow the methodology in Kravet and Muslu (2013), they obtain “markedly different” results (p. 4). Using difference variables, Kim et al. fail to find a statistically significant effect of disclosure on return volatility or analyst forecast revisions surrounding the annual filing for REITs. They argue that their null results support the view that information asymmetry is less economically meaningful for REITs than unregulated firms, as described in the section on “Institutional Characteristics of REITs”.

Several papers listed in Table 11 leverage exogenous changes in regulations to identify the effects of financial reporting on asymmetric information in public real estate markets. These papers combine event study and difference-in-differences (DD) approaches. The dependent variable is the pre- versus post-filing difference in information asymmetry (the event study component). The comparisons are before versus after a regulation change, and control versus treatment samples (the DD components). The identifying parallel- or common-trends assumption is that information trends would have been the same in both samples had the regulation change not occurred. And the authors expect that the usual advantages for internal validity of quasi-experimental approaches should apply.
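
In regression form, and in notation we introduce only for exposition, these designs estimate something like

\Delta IA_{it} = \beta_0 + \beta_1\, Post_t + \beta_2\, Treat_i + \beta_3\, (Post_t \times Treat_i) + X_{it}'\gamma + \varepsilon_{it},

where \Delta IA_{it} is the pre- versus post-filing change in the information asymmetry proxy for firm i, Post_t indicates filings after the regulation change, Treat_i indicates firms subject to the change, and \beta_3 is the difference-in-differences estimate of interest. Under the parallel-trends assumption, \beta_3 isolates the effect of the regulation change on the informativeness of the filing event.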

Higgins et al. (2006) examine variation in the information environment for REIT trading around the issuance of guidance in 1999 by the National Association of Real Estate Investment Trusts (NAREIT) on the calculation of funds from operations (FFO), a non-GAAP measure of REIT operating performance that adjusts net income by adding back real estate depreciation and amortization and excluding gains or losses from property sales. Researchers describe this accounting change as increasing transparency and reducing managerial opportunism for REITs; see, e.g., Downs and Güner (2006). Using the adverse selection component of the bid-ask spread as a proxy variable, Higgins et al. describe finding “weak evidence” of decreasing information asymmetry when comparing the 30-day periods before and after the announcement by NAREIT of the change (p. 241).

Higgins et al. (2006) implement spread decomposition methods from George et al. (1991) and Lin et al. (1995). In simple event-study specifications, they find that the adverse selection component from both methods decreases significantly following the accounting change, as expected. However, when examining the difference-in-differences between pairs of REIT and matched non-treated firms, the results are no longer significant in some estimates. The mixed results are perhaps not surprising considering, as the authors point out, that the accounting change does not actually provide new information to market participants. It is perhaps most accurate to say that by promulgating a standard definition, the 1999 accounting guidance simply increases the usefulness of reported FFO as a metric for comparing the relative performance of REIT stocks.
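
To fix ideas, a stylized version of the Lin et al. (1995) decomposition (our notation, simplified from the original) regresses quote-midpoint revisions on the signed effective half-spread:

M_{t+1} - M_t = \lambda\, z_t + e_{t+1}, \qquad z_t = P_t - M_t,

where P_t is the trade price, M_t is the quote midpoint, and \lambda measures the fraction of the effective spread attributable to adverse selection: the more informative trades are, the more quotes are revised in the direction of trade. A significant decline in estimated \lambda after the 1999 guidance is then read as a decline in information asymmetry.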

Muller et al. (2011) and Ghosh et al. (2020) examine the adoption of International Accounting Standard 40 (IAS 40) in 2005, which requires that public firms in the European Union disclose annual fair value estimates for their investment properties. At issue is whether the adoption of the standard increased the quality of financial reporting and decreased information asymmetry for REITs. Muller et al. (2011) implement a difference-in-differences design comparing firms that did and did not voluntarily provide fair values before and after the implementation of IAS 40. Their sample consists of 178 firms from 14 countries. For the two fiscal years preceding mandatory adoption and the three years after (2003–2008), they have 431 firm-year observations. They find that firms that did not previously report fair values have larger decreases in information asymmetry (bid-ask spread) than those that voluntarily provided them, as expected. However, the non-voluntarily reporting firms continue to have greater asymmetry after the change, indicating that the policy change mitigates, but does not eliminate, information frictions.

Ghosh et al. (2020) implement a design similar to the setup in Muller et al. (2011). However, their sample is substantially larger, with 1,822 firm-year observations from 169 firms for 2002–2017. In addition, they follow Lof and van Bommel (2021) in using the coefficient of variation of trading volume and daily turnover as proxies for information asymmetry, instead of the more standard bid-ask spread, which they argue is a noisy measure in this setting.Footnote 74 As expected, they find that both of their proxies for information asymmetry decrease significantly after adoption of IAS 40, and that the effect is increasing in firm size. However, they find no change in the deviation of stock price from net asset value. This finding is contrary to expectations that reducing information asymmetry increases pricing efficiency.

We conclude this section with a final comment on the three papers described above that use quasi-experimental methods. Because the policy changes that the researchers leverage to create the natural experiments only affect REITs, the research designs cannot address the question of whether agency problems are less economically meaningful for REITs than unregulated firms. However, finding any effect of a policy change on the information environment provides evidence that the legal and regulatory constraints on REITs do not eliminate the need for good corporate governance.

Firm Focus

Similar to the articles on home bias covered in the “Geographic Asset Allocation” section, real estate researchers use geographic asset allocation decisions of REITs to examine the disputed relationship between firm focus and performance. For background, this line of REIT research is directly inspired by papers from the broader corporate governance literature that find investors value conglomerates less than either the imputed values of their business segments or matched portfolios of single-segment firms, a phenomenon referred to as the “diversification discount.”Footnote 75 Whereas the closely related home bias literature covered in the “Geographic Asset Allocation” Section focuses on asymmetric information, common explanations for the diversification discount are based on monitoring and agency costs. In this view, it is more difficult to deter managers from placing private benefit ahead of shareholder value in firms with diverse investment opportunities.Footnote 76 However, the idea that diversification destroys value is a matter of active debate in the literature. Researchers find that controlling for investment opportunities, firm characteristics, and the endogeneity of the diversification decision attenuates or even nullifies estimated discounts.Footnote 77

Real estate researchers argue that REITs present fewer identification challenges than conglomerate enterprises in estimating the effect of firm focus on performance. Unlike traditional firms, investments made by REITs are consistently observable to the econometrician, and REITs within an asset class face comparable investment opportunities. A substantial literature has developed; see Table 11 for the selection of papers we cover. The measure of concentration most commonly used is the Herfindahl index, calculated at the metro, state, or region level. For performance, researchers use Jensen’s alpha (return) or Tobin’s q (value). While two of the earliest studies (Ambrose et al., 2000; Capozza & Seguin, 1999) find null results, Cronqvist et al. (2001) and Hartzell et al. (2014) document a positive relation between REIT performance and geographic concentration consistent with the idea of a diversification discount. Similarly, Eichholtz et al. (2001) and Eichholtz et al. (2011) find that the costs of international diversification appear to outweigh the benefits.
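
In this literature, the Herfindahl concentration measure for firm j is computed as (our notation)

HHI_j = \sum_{m=1}^{M} s_{jm}^2,

where s_{jm} is the share of firm j’s property holdings (by value or square footage, depending on the paper) located in geographic unit m. The index equals one for a fully concentrated portfolio and falls toward 1/M as holdings spread evenly across M markets.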

While the findings in the papers above are consistent with diversification destroying firm value, the research designs cannot rule out the possibility that causality runs the other direction; i.e., perhaps under-performing firms choose to diversify. The next three papers we discuss use event study designs to address the endogeneity of the diversification decision. While results in the first two papers support the hypothesis that investors value focus over diversification, those in the third imply rejection.

Campbell et al. (2003) study REIT returns following the announcement of a portfolio acquisition, defined as a single transaction in which a firm purchases two or more unrelated properties from a single seller. They show that excess return is positive following the announcement of a transaction that confirms a geographic focus, while a diversifying transaction has a negative, or not significant, effect. Similarly, Ling et al. (2021b) find that REITs with more geographically concentrated assets experience higher initial returns on their IPOs. In contrast, Wang and Zhou (2020) find that investors respond favorably to REIT sales of properties located close to a selling firm’s headquarters. However, because the negative relation between distance and excess return appears to be driven by sales in non-gateway markets, the results may reflect location-specific risk and growth opportunities rather than agency costs.

If agency costs can explain the diversification discount, then researchers might expect that increasing institutional ownership should reduce the effect. Beginning with Hartzell et al. (2014), findings in the REIT literature support the hypothesis that large shareholders, through active monitoring, mitigate agency costs associated with diversification. For example, Feng et al. (2021) report a geographic diversification premium for REITs with high levels of institutional ownership. Interestingly, they show that diversification benefits top line (revenue channel) rather than bottom line (operating efficiency channel) performance.

In papers on geographic asset allocation strategies, home market bias and the diversification discount are mostly discussed separately as manifestations of distinct information issues, i.e., information asymmetry versus principal-agent conflicts. Notable exceptions focused on institutional investors that bridge the two branches include Ling et al. (2021d) and Milcheva et al. (2021). Ling et al. (2021d) argue that geographic diversification impedes both access to information and monitoring. As expected, they find that institutional investors over-weight firms headquartered locally and those with greater property holdings in the investor’s home market. The example in the paper is that a Boston-based investor tends to over-weight Boston-based REITs as well as REITs with concentrated holdings in Boston, regardless of whether those REITs are headquartered there. By exploiting their information advantage, in this view, local investors enjoy superior performance of their REIT portfolios relative to nonlocal counterparts.

Milcheva et al. (2021) study the relation between geographic diversification and REIT performance before and after the Global Financial Crisis. They calculate the distance to headquarters and the share of properties located in the firm’s headquarters MSA, as in the home bias literature, and an MSA-level Herfindahl index, as in the diversification discount papers. They show that average distance to headquarters has been increasing over time since 2004, suggesting a potential regime change in how firms allocate capital geographically. Testing in the paper shows that REITs with geographically dispersed assets deliver significant excess returns in the period after the financial crisis. The authors interpret the combination of the return and distance trends as evidence of increased sophistication of REIT managers and changes in the attitudes of institutional investors toward REITs.

Conclusion

This part of the paper reviews articles on investment in real property through vehicles listed on public markets. After introducing the foundational theory, we present arguments and evidence for how information flows and principal–agent relationships differ in REITs, compared with traditional public firms, due to the regulatory environment in which REITs operate. The consensus view is that REITs are more transparent. In contrast, the net regulatory effect on agency costs for REITs is conceptually ambiguous, the proverbial empirical question.

Because of their unique features, REITs are typically excluded from studies of information issues in corporate finance. Many of the articles we review can be seen as an effort by real estate researchers to fill this gap. In this review, we present several stylized facts regarding security offerings and firm performance or valuation, including most prominently that IPOs are underpriced on average. We then discuss articles that answer the question that naturally follows: Are REITs different? For each fact, the real estate literature finds the same effect for REITs, but in substantially muted form. Considering the relative transparency of their stocks, we conclude that weaker findings for REITs support the view in the handbook chapter by Ljungqvist (2007) that “information frictions have a first-order effect” for traditional firms (p. 376).

This review also examines whether there is a REIT effect for corporate governance. The literature finds consistently that ownership structure matters for the performance of REITs, in contrast with mixed results for traditional firms. However, more research is needed to determine the shape of the relation between insider ownership and firm performance. The opposite pattern of results obtains when the explanatory variable is a composite index of corporate governance quality: in these studies, the measures are significant in regressions for traditional firms, but not for REITs. Considering the difference in dividend policies between the average REIT and the average traditional firm, we conclude weaker findings for REITs support the view that managers with substantial amounts of free cash flow are more likely to engage in activities that are not in the best interests of shareholders.

This review shows that REITs are more transparent and appear to have fewer agency costs on average than traditional (unregulated) public firms. While the regulatory constraints on REITs matter, so do governance measures. In other words, regulation does not adequately substitute for sound corporate governance. Beyond simply extending research designs to understudied samples, however, real estate researchers commonly argue that the regulatory environment of REITs allows them to perform sharper tests for information effects on public firms. However, this may not be the strongest contribution of the REIT literature. While the results from the REIT papers we review emphasize the importance of information-based theories, the theories we highlight are difficult to test separately, so no particular explanation is especially elevated.Footnote 78 The warning in Myers (2001) is apt:

“…the words ‘consistent with’ are particularly dangerous in this branch of empirical financial economics. A fact or statistical finding is often consistent with two or more competing capital structure theories. It is too easy to interpret results as supporting the theory that one is used to” (p. 91).

While this review does suggest ideas for more research examining the finance and governance of REITs, portfolio management is the area in which we believe the study of REITs has the greatest potential to contribute. This review covers two empirical regularities in commercial real estate investment, home market bias (in Part 1) and the diversification discount (in Part 2), and potential explanations based on asymmetric information and agency costs, respectively. As discussed above, REITs present fewer identification challenges than traditional firms because their investments are consistently observable to the econometrician and REITs within an asset class face comparable investment opportunities. In general, results strongly support explanations based on information frictions. However, the literature also demonstrates the need for more study of cross-sectional asset and firm location characteristics, and of time-series regime differences. We believe this will be a fertile area for future research.

Part 3: Brokerage Markets

The listing agreement between a property seller (principal) and a real estate broker (agent) is a classic example of an agency relationship in economics, second for attention in the literature perhaps only to the contract between shareholders and corporate managers. In a seminal analysis, Anglin and Arnott (1991) describe two frictions that a seller faces: the seller can readily observe neither the agent’s level of effort nor the agent’s expertise in marketing the seller’s real property. Applying contract theory (e.g., Hart and Holmström, 1987), Anglin and Arnott develop the conditions necessary for a listing agreement to incentivize an agent, based on observable outcomes, to expend something close to the seller’s desired level of effort and truthfully reveal their expertise. They also identify the distortions (deadweight loss) associated with such measures to curb moral hazard and prevent adverse selection. They then emphasize how the terms of the standard listing contract in North America diverge from the (second best) efficient ideal to a surprisingly great extent, potentially exacerbating the predicted distortions.

In the 30 years that have passed since the publication of Anglin and Arnott (1991), theory articles have examined whether building models with more realistic assumptions can explain the puzzling prevalence of the standard listing agreement in the residential market. Meanwhile, empirical papers have mostly accepted the finding from theory that the standard contract is inefficient and then proceeded to estimate the economic significance of the market distortions that result. These two strands of literature have been systematically reviewed together in multiple places. As indicated in Table 12, the most recent review articles are by Miceli et al. (2007), Zietz and Sirmans (2011), and Han and Strange (2015).

Table 12 Theory Papers (Part 3), Real Estate Brokerage Markets

In this part of the paper, we update the existing surveys through four sections. After a brief introduction to the “Selected Theory”, we first cover articles on information issues in “Residential Brokerage” published after the latest review by Han and Strange. We summarize key results and suggest areas where more work is needed. Although we focus on new work, we discuss earlier papers as needed to place recent developments in context. Then, we fill two gaps in the existing surveys with two sections reviewing research on information issues in “Commercial Brokerage” and “Broker Trading Networks”, respectively.

Selected Theory

Because an agent’s effort and expertise are difficult to monitor, a seller has to base compensation in the listing agreement on transaction outcomes. Sale price and time on market are two obvious candidates. Think of the agent as producing expected return for a seller, which can be inferred from these two outcome variables. According to contract theory, a seller should set the average commission rate to satisfy the agent’s participation constraint, which holds that total expected compensation must exceed the agent’s opportunity cost, or the agent will not contract. A seller should choose the marginal commission function and contract duration by maximizing the seller’s expected return subject to the agent’s participation constraint and reaction function. In other words, a seller should assume the agent will make choices that maximize the agent’s own expected net income, rather than act in the seller’s best interests. With diverse agents and sellers, theory predicts heterogeneous contracts with different combinations of risk-sharing and effort inducement. A self-selection constraint ensures that agents and sellers are incentivized to choose the optimal contract corresponding to their types. A standard result is that the optimal or “efficient” contract is second best. In particular, the agent bears more risk and expends less effort than the levels the seller would choose in the full-information case.
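
In compact form (again, notation introduced only for exposition), the seller’s problem described above is

\max_{c(\cdot),\, T} \; E[R_s \mid e^*] \quad \text{subject to} \quad E[c(P) \mid e^*] - \psi(e^*) \ge \bar{u} \quad \text{(participation)},

e^* \in \arg\max_{e} \; E[c(P) \mid e] - \psi(e) \quad \text{(reaction function)},

where c(P) is the commission schedule over sale price P, T is the contract duration, \psi(e) is the agent’s cost of effort, \bar{u} is the agent’s opportunity cost, and R_s is the seller’s expected net-of-commission return, which reflects both sale price and time on market. The seller chooses the contract but not the effort supplied under it; the second constraint captures that limitation.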

Instead of the diverse contract terms predicted by theory, listing agreements in North American housing markets are remarkably uniform. In the standard contract, a seller agrees to compensate a brokerage firm (“broker”) a fixed percentage of the sales price in return for procuring a buyer who is ready, willing, and able to purchase the subject property on terms acceptable to the seller. Raising concerns of collusion, the commission paid to full-service brokers, typically 5%-6%, is among the highest in the developed world according to The Wall Street Journal (2016), consistent within markets, and seemingly unrelated to the cost of selling a house.Footnote 79 The standard contract gives the broker the exclusive right to sell a property during a specified period. There is more variation in contract duration than commission rate. While common terms are 60, 90, and 120 days, most contracts are written for 90 days of representation. Shorter terms are riskier for brokers and their affiliated agents: if the broker’s agent fails to sell the property during the contract period, then no fee is due and the brokerage must absorb any marketing expenses they have incurred. Longer terms are riskier for sellers: if the seller is dissatisfied with an agent’s service, then they can only terminate the contract with the broker’s consent after possibly reimbursing the brokerage’s marketing expenses and/or paying a cancellation fee.Footnote 80

While research on the competitiveness of the brokerage industry finds that the standard listing agreement pays an average commission rate that is puzzlingly high, the theory papers listed in the first row of Table 12 argue that its fixed-percentage commission structure pays a marginal commission rate that is too low to adequately align agent incentives with seller interests. Under the standard listing agreement, a typical listing agent receives only 1.5–2.5% of any increase in price, while the seller retains 94–95%.Footnote 81 For comparison, Anglin and Arnott (1991) show that the efficient marginal commission rate is 50% if the seller and agent are equally risk averse. A marginal rate that high is infeasible with a fixed-percentage (proportional) commission. Although Anglin and Arnott do not state this explicitly, we take away from their paper that only net-listing (affine) or waterfall (convex) contracts are possibly efficient.Footnote 82
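
The arithmetic behind these marginal rates is simple. Suppose, for illustration, a 6% commission split evenly between the listing and selling sides, with each side then split 50/50 between brokerage and agent. The listing agent’s marginal take from a $10,000 higher sale price is

0.06 \times 0.5 \times 0.5 \times \$10{,}000 = \$150,

while the seller retains 0.94 \times \$10{,}000 = \$9{,}400. Splits vary across firms and markets, which is why the literature reports a 1.5–2.5% range, but no plausible split brings the proportional commission anywhere near the 50% marginal rate that Anglin and Arnott (1991) derive for the equal-risk-aversion case.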

The literature has focused on two ways that the interests of the agent and seller can diverge under a fixed-percentage commission system. We refer to these as the effort and information (or advice) channels, respectively. In the effort story, the failure of the listing contract to fully internalize an agent’s effort to market the property, combined with the seller’s imperfect monitoring of agent effort, can lead an agent to shirk. A shirking agent generates a slow arrival rate for offers, and those offers come from the low side of the distribution. Thus, the compensation structure of the standard listing agreement, through an effort channel, can cause a property to stay on the market longer and sell for a lower price than the second-best outcome under an incentive-compatible contract.

In contrast to the effort channel, the information story yields the hypothesis of lower than optimal sale prices due to quick sales. Beginning with Arnold (1992), papers have argued that if a seller is less informed about market conditions than the agent they hire, then the seller is prone to rely on (conflicted) advice from the agent in determining their reservation price. Suppose an offer is tendered for less than the property’s true (unobserved) market value. For the seller, maximizing expected return may involve holding out for a higher offer, depending on holding costs and the distribution of potential bids. For the agent, the expected marginal commission they might earn from continuing to market the property is likely, per the literature, substantially less than the marginal opportunity cost of their time. Thus, the compensation structure of the standard listing agreement incentivizes agents to misuse their position of superior market knowledge to influence sellers to accept offers that are too low too quickly.

The results from theory that the standard listing contract creates effort and advice conflicts, while common and intuitive, have nonetheless been subject to criticism in a series of papers listed in Table 12. First, the analyses are static. Beginning with Miceli (1989a), the literature has considered the commonly held view among market participants that a seller can induce increased effort from an agent by simply reducing the duration of the listing. Geltner et al. (1991) agree that agent effort should increase as contract expiration approaches. However, they argue that problems with conflicted advice worsen in a dynamic context, tending to offset gains in effort. Second, neither static nor dynamic models account for the important effects of competition: each agent is assumed to work for only one seller and never has to search for a new seller to represent. When Williams (1998) and Fisher and Yavaş (2010) model the competitive equilibrium for agents, they find that a fixed-percentage commission does not create conflicts of interest in this context. However, Han and Strange (2015) point out that this iconoclastic “no-conflict” result flows from restrictive assumptions and may not hold if those are relaxed.Footnote 83 Proving Han and Strange’s observation prescient, Li et al. (2021) have recently recovered the finding of agency conflicts in a generalized version of the Williams model.

Even if one accepts the conventional finding from economic theory that the fixed percentage commission exacerbates agency problems from asymmetric information, the prevalence of this compensation structure raises the question of whether any resultant distortions are economically meaningful. This is the primary question addressed in the empirical literature on real estate brokerage. The answer provided in the earlier work is yes, they are. However, more recent results that we review are less supportive of substantial agency costs.

Residential Brokerage

Real estate brokers are not market makers. Their raison d’être is to even out information among market participants. A direct test of how brokers, through expertise and access to multiple listing service data, improve the functioning of real estate markets could be to estimate differences in outcomes between transactions with and without representation. However, this “value-added” approach faces major identification challenges, because for-sale-by-owner (FSBO) properties and their owners likely vary in ways that are unobserved by the econometrician and correlated with the decision to use an agent. FSBO transactions account for a small share of US home sales and many are not arms-length deals at market prices. According to the National Association of Realtors, just 7% of all US home sales were FSBOs in the 12-month period ending June 2021, and the seller knew the buyer in 57% of these transactions (Yun et al., 2021). Because representation is so far from randomly assigned, estimates of intermediation effects using the value-added approach are not likely credible.Footnote 84

Due to the challenges involved in directly testing how agents improve market function, research on residential brokerage has focused more narrowly on conflicts of interest that might offset potential gains. Seminal empirical papers on agency conflicts in residential brokerage are listed in Table 13. These “freakonomics”-style works examine whether listing agents fulfill a fiduciary requirement to treat the sale of their clients’ homes as they do their own. Infamous results in Rutherford et al. (2005) and Levitt and Syverson (2008) suggest they do not. In both papers, agent-owned properties sell for higher prices than observationally equivalent client-owned homes. The authors report sale price premiums of 4.5%-7.0% and 3.7%-4.8%, respectively, with preferred estimates falling at the low ends of the ranges. These premiums are too large to be explained by agent-type sellers having lower discount rates or higher risk tolerance than client-type sellers. Consistent with the information channel described in the “Selected Theory” section, Levitt and Syverson find that 1) premiums are larger in sub-markets with more heterogeneous properties, and 2) agent-owned properties stay on the market 10.0%–17.8% longer. In contrast, Rutherford et al. find no significant differences for time on market in their sample, and so neither the effort nor the information channel is elevated by their results.

Table 13 Conflicts of Interest in Residential Brokerage

As with any hedonic study that does not exploit plausibly exogenous variation, there is concern that estimates in the “agents-as-sellers” papers might suffer from omitted variable bias. To address this issue, Rutherford et al. (2007) extend their 2005 paper by examining sales of condominium properties, which are more homogeneous than single-family houses. For sale price effects, they estimate premiums of 3.0%-7.0% for agent-owned properties, commensurate with their previous results and those in Levitt and Syverson (2008). In terms of liquidity, they find that agent-owned homes stay on the market for 3% longer than client-owned; this finding conflicts with their previous null result, but is consistent with Levitt and Syverson (2008).

Inspired by the seminal papers showing that agents sell their own homes at a premium, several recent papers examine whether agents also buy homes at a discount. The theoretical treatment of buyer representation posits that the fixed-percentage commission creates even worse agency conflicts for buyers than sellers: for sellers, a proportional commission misaligns interests only in magnitude, while for buyers the direction of the incentive is wrong, because a buyer’s agent earns a larger commission when the client pays a higher price (Yavaş and Colwell, 1999; Kryzanowski et al., 2022). Consistent with this view from theory, three papers listed in Table 13 report that agents buy homes at prices discounted 4.0%, 2.5%, and 1.4%, respectively, relative to the prices paid by client-type buyers. As discussed by Allen et al. (2015), these discounts evidence not only agency problems on the buying side, but also that listing agents, when the buyer is another agent, fail to meet a fiduciary requirement to never subordinate the best interests of their principals (p. 4). Based on econometric approach, Allen et al. (2015) can be grouped with the earlier seminal agents-as-sellers papers. In contrast, Agarwal et al. (2019b) and Hayunga and Munneke (2021), by describing their estimated discounts as consistent with agents trading on asymmetric information and/or bargaining power, belong to a second wave of estimates that we discuss next.

Recently, several papers in the agents-as-sellers line have made advancements to the identification of sale price premiums resulting in substantially smaller estimates relative to those in the seminal works. The recent papers are listed in Table 13 with the description “Agents as sellers: New estimates.” The primary econometric issues they address are simultaneity bias and heterogeneity among sellers, agents, and properties. Bian et al. (2017) and Hayunga and Munneke (2021) address the simultaneous determination of sale price and time on market (TOM). Both papers estimate systems of simultaneous equations by three-stage least squares (3SLS) and report premiums for agent-owned homes of 1.8% and 1.4%, respectively, with the latter estimate based on more narrowly defined categories of sellers than the former.Footnote 85 For TOM, Bian et al. find agent-owned homes sell 49% faster, and Hayunga and Munneke observe no significant differences.
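
In stylized form (our notation), the simultaneous system in these papers is

\ln P_i = \alpha_1\, \ln TOM_i + \delta_1\, AGENT_i + X_i'\beta_1 + u_i,
\ln TOM_i = \alpha_2\, \ln P_i + \delta_2\, AGENT_i + Z_i'\beta_2 + v_i,

where AGENT_i indicates an agent-owned property, and X_i and Z_i are control vectors with at least one exclusion restriction in each equation. Estimating the system by 3SLS keeps the agent-ownership premium \delta_1 from being contaminated by the simultaneous determination of price and marketing time.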

Xie (2018) and Lopez (2021) address heterogeneity in sellers and agents. Rather than a simple indicator variable for agent-owned properties, Xie is seemingly the first researcher to subdivide the non-agent group into categories—i.e., individual, corporate (relocation companies), lender, and government—on the assumption that bargaining power varies across these groups. Similarly, Lopez identifies not only agent-owned transactions, but also those where the seller is related to the listing agent and where the agent holds an appraiser’s license. Controlling for this seller heterogeneity, these papers report premiums for agent-owned homes of 1.5% and 1.6%, respectively. Xie finds that premiums are much larger for institutional (lender and government) than individual sellers, which he attributes to variation in motivation (bargaining power), and Lopez finds that premiums on agent-related sales are statistically equivalent to those for agent-owned. Consistent with an information channel, Lopez reports that premiums are larger in areas with more variation in property tax assessment ratios and recent sale prices, and when the agent is an appraiser. For TOM, Lopez finds agent-owned homes sell 8% faster, while Xie observes no significant differences.

In addition to the agents-as-sellers analysis, Bian et al. (2017) examine whether personal real estate transactions affect the performance of agents in selling properties for their clients. They find that client-owned homes sell for 1.4% less and stay on the market for 45% longer if the listing agent is marketing their own property at the same time. This particular principal-agent issue—we label it the “distracted agent” problem—has not been studied previously in the real estate literature. To determine if the effects on outcomes are due to shirking and not competition, future research should examine how often agents sell properties they own that buyers likely perceive as close substitutes to the properties the agents have listed on behalf of clients. Ling et al. (2021a) provide additional evidence on the distracting effect of personal transactions on professional productivity. They find that hedge fund performance deteriorates significantly when fund managers make personal real estate purchases. However, they find no significant effects of personal sales.

If the fixed-percentage commission pays a marginal commission that is too low, then co-listing—where two or more agents jointly represent the seller and split the commission—should seemingly worsen incentive problems. Instead, Allen et al. (2021) find that co-listed properties sell at higher prices with slightly less time on market than traditional listings. The explanation in the paper weighs gains from specialization against costs from the effort (distraction/shirking) channel.Footnote 86 In a typical team structure, the more junior agent handles showing and marketing properties, freeing time for the senior agent to focus on generating new listings. It appears that the benefit of hiring a co-listing agent, who is less distracted by the need to source new business, outweighs the potential agency costs from diminished marginal incentives in the co-listing contract. The findings in this paper especially highlight the importance of modeling the competitive equilibrium for agents when examining conflicts of interest.

There has been limited empirical work on policies and mechanisms that might reduce agency conflicts in residential brokerage. In their agents-as-buyers paper, Agarwal et al. (2019b) find that the 2.5% discount they estimate disappears for transactions after a 2010 regulatory intervention in Singapore disallowed dual agency.Footnote 87 Turnbull et al. (2020) examine whether reputational risk might be a stronger deterrent against exploiting conflicts of interest for principal brokers/partners compared with traditional affiliated agents. In a novel approach, they embed the agents-as-sellers metric in a difference-in-differences design in which the treatment is becoming a principal broker/partner. Empirical results show that, unlike affiliated agents, principal brokers appear to face incentives sufficient to ensure that they fulfill their fiduciary duty to treat the sale of their clients’ homes as they would their own.

Omitted variable bias is a potential challenge to identification of sale price differentials in the empirical papers on moral hazard in real estate brokerage. The concern is that properties bought and sold by agents versus non-agents might vary systematically in quality and condition in ways that are either not observed or not easily quantified by the econometrician. All of the papers in Table 13 use multiple listing service data. While listing systems contain a large set of variables that quantify property and locational features, researchers have to glean information on condition and quality from the subjective tokens (keywords and phrases) in the remarks section. Levitt and Syverson (2008) report that including dummy variables for a large set of tokens in their estimating models results in only small changes to estimated premiums. In contrast, Liu et al. (2020) find more recently that controlling for textual information does change results meaningfully. Similar to values in the seminal papers, the authors first estimate premiums of 3.3% and 4.1% for agent-owned sales in Atlanta, GA and Phoenix, AZ, respectively. However, when they include in their models tokens selected by machine learning procedures, the premium for Phoenix falls substantially to 1.7% and Atlanta is no longer significant, suggesting that omitted quality is confounding the initial estimates.
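
As a sketch of how machine-selected token controls can enter the estimation, consider the following two-step procedure. This is our illustration, not the exact method in Liu et al. (2020); the column names, the hedonic control list, and the lasso-based selection step are all assumptions.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LassoCV

def premium_with_token_controls(df: pd.DataFrame):
    # Assumed columns: log_price, agent_owned, remarks, plus hedonics below.
    hedonics = ["log_sqft", "bedrooms", "bathrooms", "age"]

    # Binary token matrix from the free-text MLS remarks field.
    vec = CountVectorizer(min_df=20, binary=True)
    tokens = vec.fit_transform(df["remarks"]).toarray()

    # Step 1: a lasso selects tokens with predictive content for price.
    lasso = LassoCV(cv=5).fit(tokens, df["log_price"])
    keep = np.flatnonzero(lasso.coef_)

    # Step 2: OLS hedonic with the agent indicator, standard hedonic
    # controls, and the selected tokens as quality/condition proxies.
    # The first slope coefficient (agent_owned) is the premium estimate;
    # if it shrinks once tokens enter, omitted quality was confounding it.
    X = np.column_stack([df[["agent_owned"] + hedonics].values,
                         tokens[:, keep]])
    return sm.OLS(df["log_price"], sm.add_constant(X)).fit()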

In the literature on residential brokerage, it has been freakonomics versus econometrics, and econometrics is winning. While earlier studies find that agents buy homes at a discount and sell homes at a premium of about 4% each, price differentials estimated recently using more extensive models have fallen to around 1.5%. The newer papers show that a substantial share of the earlier results can be explained by seller heterogeneity, in particular, lumping together individual and institutional sellers, with differing degrees of bargaining power, into a single “non-agent” group. As well, recent findings suggest that earlier results may have been confounded by omitted variable bias related to systematic differences in the quality of properties owned by agents. Considering the trend of decreasing estimates, it seems possible, even likely, that price differentials estimated with designs that adequately model heterogeneity (agent, property, and seller) and simultaneity will no longer reject the null hypothesis. And perhaps this should be expected.

The strongest counterargument against the notion that the fixed-percentage commission causes meaningful agency costs in the presence of imperfect information may be its very prevalence. Why would a rational seller agree to a listing contract with seriously misaligned incentives? The leading answer in the literature is that the benefits of hiring an agent must exceed the agency costs. However, that does not explain why we rarely observe in residential markets the net-listing and waterfall contracts that theory indicates are more incentive compatible than the standard fixed-percentage commission.

The lower sale price differentials reported in the recent agents-as-sellers literature suggest that the presence of a conflict of interest is a necessary, but not a sufficient, condition for substantial agency costs. It is easy to think of reasons why agents might not behave as opportunistically as a reductionist approach suggests. Perhaps ethics and reputation risk ensure that agents mostly fulfill their fiduciary requirements. For example, it is common for a seller to request that an agent provide a comparative market analysis or broker’s price opinion before signing a listing agreement. An agent must know that the seller might also obtain analyses from other agents or even an appraisal.Footnote 88 This possibility creates reputation risk that should deter an agent from misrepresenting the fair market value of a client’s house in giving advice during sale negotiations.

This review suggests three directions for future research related to information frictions. First, the papers on brokerage markets, similar to the research on property markets, do not directly examine the effects of the secular reduction in information asymmetry associated with the internet revolution. In recent papers, client-type buyers and sellers obtain prices that are close to what agents get in their own transactions. As we discuss, the recent works focus on econometric differences to explain falling estimates of price differentials. What is missing from this literature is analysis of variation in the time series of price differentials. Compared with the pre-internet era, an agent’s relative effort in staging properties, for example, is now evident at low cost. Similarly, buyers and sellers can easily access information on comparable sales, including automated estimates of value. Given the sea change in the availability of information, examinations of whether information is curing the asymmetric information problem are surprisingly underrepresented in the literature on residential brokerage.

Next, if the finding that agents buy properties at a discount and sell at a premium survives the additional testing that appears necessary, then additional evidence will still be needed to elevate a particular explanation. In the empirical papers we discuss, researchers promote explanations based on agents taking advantage of superior information over explanations based on agents exerting less effort than optimal. However, the question of how to interpret transaction price differentials is far from settled. To begin with, none of the papers we review on residential brokerage reproduce the Rutherford et al. (2007) and Levitt and Syverson (2008) result that agents are more patient in selling their own homes, a finding that supports an information channel. Instead, two papers find agent-owned homes sell more quickly (Bian et al., 2017; Lopez, 2021), supporting the shirking story, and three fail to find statistically significant differences (Hayunga & Munneke, 2021; Rutherford et al., 2005; Xie, 2018). In addition, the empirical support for agents having informational advantages is mostly based on findings that estimated discounts and premiums are larger in absolute terms in more heterogeneous markets. The results in Liu et al. (2020), in our assessment, challenge this inference. It seems reasonable that the heterogeneity of properties in certain submarkets could just as easily create greater scope for omitted variable bias in estimating price differentials as for trading on asymmetric information.

Lastly, recent findings indicate that differences between the outcomes for agents and clients when they buy and sell properties could be due to systematic differences in bargaining power rather than information asymmetry. Some households are in the market to adjust their consumption bundle in response to gradual changes in their demand for housing in a particular location. They can be patient in their search and matching process. Others enter the market with greater urgency because of sudden life events or the necessities of contingent transactions. On account of being always in the market, agents are well positioned to take advantage of good deals presented by urgent buyers and sellers. Such correlated differences in bargaining power are difficult to observe empirically and call for more study. To better hold motivation/bargaining power constant, additional research could, for example, compare prices paid by agents versus client-type buyers for investment properties, as opposed to owner-occupied homes, as a potentially sharper test of the underlying research question.

Commercial Brokerage

Relative to residential brokerage, the commercial brokerage industry presents limitations and opportunities as an experimental setting for studying information issues in real estate markets. On the limitation side, data is generally less available and the literature on commercial brokerage cannot take advantage of the agents-as-sellers approach that Rutherford et al. (2005) and Levitt and Syverson (2008) popularized for studying agency issues in residential brokerage.Footnote 89 On the opportunity side, because many commercial transactions do not involve a broker, unlike with home sales, researchers can more credibly perform direct testing for the effects of intermediation. This value-added approach involves comparing the sale prices for observationally identical commercial properties that transacted with different forms of representation—“single (buyer)”, “single (seller)”, “double”, and “dual”—with those sold directly by owners. A selection of representative papers is listed in Table 14.

Table 14 Value-Added Estimates for Commercial Brokerage

In their study of investor domicile, Devaney and Scofield (2017) test whether brokers might reduce information deficits for foreign buyers. Examining office transactions in New York for 2001–2015, they find that seller representation is associated with a higher sale price relative to when the counterparty is not represented. In contrast, using a broader dataset of commercial transactions in the largest 15 US markets for 1997–2011, Ling et al. (2018) find that buyers pay more and sellers receive less when represented. The authors describe their results as, “consistent with the agency problems reported in Levitt and Syverson (2008)” (p. 117). Finally, using a sample of office transactions from 239 metro areas for 2000–2016, Eichholtz et al. (2021) find no significant effect of broker representation.

In the residential literature, an “in-house” transaction occurs when the buyer and seller are represented by different agents at the same brokerage. At issue is that brokers might offer affiliated agents incentives (higher commission splits) to promote internal listings, creating conflicts of interest. In the commercial literature, instead of in-house, the term “dual brokerage” is used to denote the case when the brokers for both sides of a transaction are affiliated with the same firm. As discussed in Scofield and Xie (2019), the direction of a dual brokerage effect is unclear a priori. Because a fixed-percentage commission provides such a weak incentive for higher sale prices, it seems likely that the relative strength of the relationship between the firm and the two sides would be decisive. Two articles that consider the effects of dual brokerage report mixed results. Hardin et al. (2009) find no statistical effect in a study of multi-family transactions in Atlanta and Phoenix for 1995–2003. In contrast, Scofield and Xie (2019) examine office transactions with dual brokerage and find that buyers fare better in this structure. In the six largest US markets for 2003–2016, dual brokerage is associated with a substantial 5.8% discount. However, the discount is driven by results in two metros (Chicago and San Francisco) and only emerges in the years following the global financial crisis. As well, the authors report that larger brokerage firms perform better for their principals (higher prices for sellers and lower prices for buyers) than smaller firms. The authors attribute these results to “agency issues stemming from information asymmetry” (p. 348).

This review shows that more research is needed to determine whether brokers reduce information asymmetries in commercial real estate transactions and to what extent, if any, conflicts of interest limit those gains. To begin with, we are not able to draw a clear narrative from the findings in this literature other than to say that results are mixed. Even if we are mistaken and there is a leading story, both internal and external validity present substantial challenges. The papers carefully model property, broker, and investor heterogeneity. However, the potential endogeneity of the decision to use a broker limits the ability of the authors to make strong causal claims about the effects of intermediation on transaction outcomes. In terms of external validity, with the exception of the papers by Ling et al. (2018) and Eichholtz et al. (2021), the articles we review study transactions in just a handful of markets. A concern is that the results may not reflect broader developments.

Broker Trading Networks

Because of the close cooperation between agents (listing and selling, landlord and tenant) required to complete transactions, the real estate brokerage industry provides an ideal setting to study professional networks. In addition, the skills that agents gain from experience, which allow them to complete more transactions, create human capital externalities in these networks. Although real estate brokerage may be the quintessential relationship business, only a few attempts to measure the importance of relationships to agents have been published. This is surprising, especially considering that research on the economic consequences of social and professional network structure has been such a growth industry—see Jackson et al. (2017) for a helpful survey. In this section we discuss a few recent papers that examine network effects in real estate brokerage. While it may be the case that networks are relatively efficient ways to transmit information, the research that we review, listed in Table 15, shows they can also introduce frictions that may hinder efficient search and matching.

Table 15 Trading Networks in Real Estate Brokerage

Han and Miller (2015) note that the brokerage industry is characterized by a mixture of franchise and non-franchise firms. They develop a dynamic network model that can explain how organizational structure arises endogenously in an industry that requires collaboration between and within firms. The model considers two types of network externalities: 1) an average network effect represents human capital spillovers from the average skill level of agents in the network, and 2) a network complementarity effect represents the amenity value of working at a firm with individuals who enhance one’s own productivity. The model is estimated using a network of 70,000 agents involved in producing 1.6 million residential transactions in the greater Toronto area for 1988–2015.

Results in Han and Miller (2015) offer several insights into organizational structure and labor market turnover which, while interesting, we do not cover because they are not germane to this review. However, we do highlight how the authors attribute to information issues the finding that experienced agents with high levels of human capital specialize in listing, while junior agents specialize in selling. Results show that listing agents rely heavily on networked agents, especially from the same brokerage office, to sell their listings. The reliance on within-firm connections is less for selling agents. A potential explanation is that while listing agents need to utilize their professional networks to find buyers (and new sellers to represent, we would add), less experienced selling agents can take advantage of the pooling of listings in the MLS to find properties for their buyers, consistent with Allen et al. (2021) discussed earlier in the section on “Residential Brokerage”.

Xie (2022) estimates how intensively residential agents utilize trading networks. The author fits the dynamic network formation model from Jackson and Rogers (2007) to MLS transactions involving 1,588 agents for “the suburban areas of a large Midwestern city in the United States” for 2008–2010 (p. 8). The implementation of the Jackson and Rogers model here assumes that once a trade is made, a permanent network link exists between the listing and selling agents. In contrast to a random pattern, Xie documents that in 35–55% of trades, the parties are represented by networked agents—meaning the agents are either previous trading partners, or partners of partners. Two acknowledged limitations of the paper are that the author does not observe connections made before 2008 and does not observe which side initiates the trade. Although the author finds that agents rely more on their trading networks as they gain experience, the analysis does not consider how that reliance affects market outcomes.
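
For concreteness, the classification of trades as networked can be sketched as follows. This is a minimal sketch of our own, not the paper’s structural estimation of the Jackson and Rogers model; the function and variable names are illustrative assumptions.

import networkx as nx

def share_networked_trades(trades):
    # `trades` is a chronologically sorted list of
    # (listing_agent, selling_agent) pairs. A trade counts as networked
    # if the two agents are already direct partners, or partners of
    # partners, in the graph built from past trades.
    G = nx.Graph()
    networked = 0
    for listing, selling in trades:
        if (G.has_node(listing) and G.has_node(selling)
                and nx.has_path(G, listing, selling)
                and nx.shortest_path_length(G, listing, selling) <= 2):
            networked += 1
        G.add_edge(listing, selling)  # a permanent link once a trade is made
    return networked / len(trades)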

Smith et al. (2019) examine agency issues in network formation. They find that listing agents obtain below-market prices for clients when the buyer is represented by an agent in their trading network, defined as any agent they have cooperated with on a sale in the past. For home sales involving 83,678 agents in the Atlanta metro area for 1997–2014, estimates of the discount range from 1.3%-5.5%. The authors conclude that agents appear to emphasize building their professional networks—by, for example, being perceived as easy to work with—at the expense of their clients’ best interests.

Agarwal et al. (2019a) focus on ethnic matching between buyers and sellers in the Singapore housing market. They are able to merge transaction data from a real estate exchange for 2007–2012 with a “personal database” containing demographic attributes of Singaporean residents (p. 3964). Comparing transaction price to appraised value (estimated prior to price negotiations), they find that sellers of Chinese ethnicity earn an additional 1.7% premium when selling homes to Chinese buyers in neighborhoods with high Chinese concentrations, and Malay sellers give 1.6% discounts to Malay buyers. This is expected on the demand side assuming concentration is associated with ethnic amenities, and on the supply side due to government caps on ethnic concentration at the neighborhood and block level. A puzzle is that these price effects only arise in same-ethnicity transactions: i.e., Chinese buyers do not pay a premium when buying homes from non-Chinese owners, and Malay buyers do not receive discounts from non-Malay sellers. This raises the question of why more Chinese do not buy homes from Malay sellers, and why more Malay sellers do not hold out for buyers from other ethnic groups. For an explanation, the authors look to the professional networks of real estate agents. They present evidence that large shares of agents specialize in particular ethnicities. While these networks presumably reduce information asymmetries on the spatial variation in ethnic amenities, they appear to do so at the expense of search efficiency.

In this special issue, Scofield and Xie (2021) extend the analysis of residential agent networks from Xie (2022) to the commercial brokerage market. As in the earlier work, the implementation of the Jackson and Rogers (2007) model assumes that once a trade is made, a permanent network link exists between the two brokerage firms (not agents, in this application). Their data provide near census-level coverage of commercial real estate transactions over $5 million from the six largest US markets for 2003–2016. While most sales (67%) are facilitated by networked brokers, there is substantial variation in the reliance on network search across geographic space, property type, and stages of the property cycle (reliance is high in bust years). Interestingly, the authors find evidence of negative assortativity, meaning firms with smaller networks are more likely to trade with firms with larger networks, which they conclude raises issues of asymmetric bargaining power.
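
Negative assortativity is a standard network statistic: the correlation of degrees across the endpoints of each edge. As a quick illustration on a toy trading graph (hypothetical firms, not the authors’ data):

import networkx as nx

# Hypothetical edge list: pairs of brokerage firms that have traded.
G = nx.Graph([("A", "B"), ("A", "C"), ("A", "D"), ("B", "E"), ("C", "F")])

# Negative values indicate that low-degree firms tend to trade
# with high-degree firms, as Scofield and Xie (2021) report.
print(nx.degree_assortativity_coefficient(G))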

As this review shows, the research on trading networks in real estate brokerage raises more questions than it answers. Is brokerage a relationship business or an information business? On the information side, which is our focus, what information flows across networks: (1) off-market or “pocket” listings, (2) property characteristics that are difficult to gauge from the listing, sparing buyers costly site visits to unsuitable properties, (3) trading-partner characteristics such as workability and urgency, or (4) something else? There is certainly a need for additional contributions on the economic consequences of professional networks in real estate brokerage.

Conclusion

This part of the paper reviews articles on the brokerage industry, which we view as the real estate market’s response to imperfect information. While real estate agents potentially ameliorate information asymmetry, researchers have examined whether conflicts of interest may limit those gains. This literature has deep roots and has been the subject of multiple systematic reviews. We fill gaps in the existing coverage, update it with discussions of recently published results, and suggest areas where more work is needed.

Seminal papers on agency conflicts in residential brokerage find that residential agents fail to fulfill a fiduciary requirement to treat the transactions of their clients as they would their own. The studies show that agents buy their own homes at a substantial discount and sell their own homes at a substantial premium relative to transactions in which they represent clients. Researchers interpret these results as evidence that the fixed-percentage commission structure common in North American brokerage agreements pays a marginal commission rate that is too low to align agent incentives adequately with the best interests of their clients. However, more recent papers find that large shares of the estimated discount and premium are explained by systematic bargaining power differentials and the quality of properties transacted personally by agents. In other words, conflicts of interest in residential brokerage are more muted than researchers previously thought.

The newer results on conflicts of interest call for a potential rethinking of policy priorities. The findings suggest that policy attention should focus more on enhancing price competition in the brokerage industry and less on eliminating potential conflicts of interest in brokerage contracts, which appear not to be as economically meaningful as once thought. However, any policy recommendation needs to consider the theory of second best, which holds that when there are multiple market failures, removing just one may actually worsen economic welfare.

This review reveals that there is not a single published empirical paper on contract design in commercial brokerage. The gap stands in stark contrast to the well-developed literature on residential brokerage. Data availability is a challenge, of course. Moreover, researchers have not been able to develop an experimental design for examining commercial brokerage that is as successful as the agents-as-sellers approach has been for studying residential brokerage. We conclude that more research is needed to determine whether brokers reduce information asymmetries in commercial sales and leasing transactions and whether conflicts of interest limit those gains.

Lastly, broker trading networks are a relatively new area of study. Articles so far mostly attempt to quantify how much these networks matter to the performance of agents. However, the review reveals that there is still much to learn about the nature of information flows across trading networks, and we hope this is another area where additional contributions will be made.