1 Introduction

The standard theoretical model of perfectly durable goods finds that frictionless resale does not lower monopolist profits, implying producers would welcome well-functioning resale markets for their products. The reasoning behind this finding is that forward-looking consumers incorporate the expected resale price into initial willingness to pay, and the increase in revenue from higher initial prices offsets revenue lost from fewer sales. Since firms may earn the same revenues with fewer goods produced, profits may be higher when resale is allowed.

If one modifies the standard model of resale by allowing consumers to tire of goods, however, this finding no longer holds, and the impact of resale on profits becomes ambiguous. The intuition is as follows. Owners, having grown tired of their products with use, resell them in later periods. These used goods satisfy some residual demand, clearing the market at a low price. Anticipating this low future price, and knowing that a used copy retains its full quality for someone who has not yet used it, forward-looking consumers are not willing to pay as much initially as they otherwise would.

The assumption that consumers tire of products with use is relevant for many types of information goods. It clearly applies to books, movies, video games, and music, which together yield about $90 billion in sales in the U.S.Footnote 1 It also applies to non-entertainment goods like learning software (e.g. languages) and any product eliciting greater enjoyment during initial use.

For many such goods, resale markets exist not because they weakly raise producer profits (indeed, practitioners strongly believe otherwise) but because there has been no legal way to prevent them. The U.S. first sale doctrine (17 U.S.C. section 109) guarantees consumers’ right to resell the original purchased copy of a durable good, even if copyrighted, so long as no copies were made.Footnote 2

However, because transferring a digital file involves making a copy from the hard drive, and the first sale doctrine applies only to the original copy, the doctrine does not apply to digital downloads.Footnote 3, Footnote 4, Footnote 5 Firms can also prevent resale indirectly by offering streaming rentals through services like Netflix and Spotify. Moreover, with the advent of high speed Internet connections, digital distribution is becoming feasible. Resale has, for practical purposes, been shut down for applications purchased on tablets or smartphones. Traditional platforms may follow suit. Microsoft initially announced that all games for its newest video game platform would be downloaded and that publishers could place restrictions on resale. However, it quickly reversed course following customer threats to switch to a rival platform (Stuart 2013).

To investigate the empirical impact of shutting resale markets in the video game market, I employ a two-step approach to estimate the impact on a sample of 14 highly acclaimed games. First, demand parameters are estimated in a market where resale exists, using a dynamic discrete choice structural model of the consumers’ purchase and resale decisions and a dataset containing new video game purchases and used game resales. Second, using the parameter estimates, profits are simulated in counterfactual environments.

I find that consumers tire quickly with use. High valuation consumers reduce their value from about $80 for the average game in the first month of use to just a couple of dollars per month by the sixth month. As a result, resale markets put downward pressure on price. In counterfactual simulations, I find that optimal prices fall at a much slower rate when resale markets are shut down, providing much less incentive to delay purchase. Selling non-resellable downloads is estimated to raise producer profits substantially, by 109 %. However, the magnitude of this finding is sensitive to the assumed market size. Preventing resale indirectly by exclusively renting raises profits by at most 5 %.

This paper contributes to a sparse empirical literature on resale and monopolist profits. Chen et al. (2011) find that resale would raise a monopolist car producer’s profits by 15 %, but they do not allow consumers to tire with use.Footnote 6 A contemporaneous paper by Ishihara and Ching (2011) allows for such tiring. To model the Japanese market appropriately, they assume novelty effects that cause usage values to decline even for consumers who have yet to play the game, but they do not allow for heterogeneous usage values.Footnote 7 Despite using different assumptions, their results are qualitatively similar to the findings in this paper. Focusing on airline tickets, Lazarev (2012) likewise finds that resale would lower profits, by 29 %.Footnote 8 To my knowledge, no empirical papers investigate preventing resale via renting.

In the next section, I discuss the importance of the assumption of losing interest with use and provide an industry background. Section 3 describes the data. Then, in Sections 4 and 5, the model of consumer demand and the estimation strategy are detailed. Section 6 presents the results.

2 Background

2.1 Logic and prior theory

This subsection presents logical arguments illustrating why monopolist revenues may be higher if selling a non-resellable good, compared with renting or selling a resellable good. The intuition is then contrasted with the prior theoretical literature.

When forward-looking consumers tire of goods, selling a non-resellable good may yield more revenue for a monopolist than renting does. To illustrate this point, suppose there is a single individual and a monopolist selling a non-resellable good. To maximize profits, the firm should set a price equal to the individual’s present discounted value from use. If the individual buys, the firm extracts full surplus, leaving the consumer with zero net surplus. If the individual cannot acquire the good for less later, i.e. the firm commits to maintaining prices, then the individual cannot increase net consumer surplus by delaying purchase. Thus, the individual is indifferent between buying and not buying, and is assumed to buy.

The analogous rental strategy involves the producer charging the individual her full usage value each period, i.e. setting a rental price for the \(t^{th}\) period equal to the utility received from her \(t^{th}\) period of use. Since static usage utility declines with use, prices under this strategy would decline over time. Therein lies the problem. With falling prices, a forward-looking consumer would prefer to wait. By delaying, the individual pays a lower price but receives the same utility, since her usage value declines only after she uses the product. By waiting to rent, the individual obtains positive, rather than zero, consumer surplus. The firm thus cannot successfully charge rental prices that extract full surplus from the individual, implying profits under renting are lower than profits from selling a non-resellable good.Footnote 9
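This two-period logic can be made concrete with a small numerical sketch. The utility values and discount factor below are purely illustrative and are not estimates from this paper.

```python
# Hypothetical two-period example of why renting extracts less revenue than
# selling a non-resellable good when the consumer tires with use.
# All numbers are illustrative, not estimates.

BETA = 0.9            # discount factor
U1, U2 = 10.0, 4.0    # utility from the 1st and 2nd period of USE

# Selling a non-resellable good: price equals the full discounted use value,
# extracting all surplus (the indifferent consumer is assumed to buy).
sell_revenue = U1 + BETA * U2

# Renting at "full extraction" prices p1 = U1, p2 = U2 fails: by waiting and
# renting only in period 2, the consumer pays U2 but still enjoys first-use
# utility U1 (her value declines only after use), keeping positive surplus.
wait_surplus = BETA * (U1 - U2)

# To induce a period-1 rental, the firm must concede at least that surplus,
# so rental revenue is capped strictly below the selling revenue.
max_rental_revenue = sell_revenue - wait_surplus

print(sell_revenue, max_rental_revenue)
```

With these numbers, selling yields 13.6 while renting yields at most 8.2, the gap being exactly the surplus the consumer could secure by waiting.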

Similar logic applies to frictionless resale. When resale markets exist, an individual can implicitly rent the product for any number of periods by buying and reselling. Because the supply of existing products is non-declining over time and the overall demand for use falls as consumers own, use, and tire of the product, the market clearing implicit rental price should fall over time. The falling implicit rental price limits the amount of surplus a monopolist can extract, suggesting that preventing resale would raise profits.

By contrast, existing models of secondary markets for durables, which do not allow consumers to tire of goods, typically find that frictionless resale can raise, but not lower, monopolist producer profits (Hendel and Lizzeri 1999; Rust 1986).Footnote 10, Footnote 11, Footnote 12 The intuition is that the same stream of services might be provided with fewer produced goods when goods change hands under resale and renting. Hence, the firm may have equal or similar revenues when resale is allowed compared to when not, but lower costs when selling a resellable good.

The following reasoning helps clarify why two seemingly similar modeling assumptions, (i) consumers grow tired of a product and (ii) imperfect durability, have very different implications. When consumers tire of non-depreciating goods, they can obtain the full quality product for a lower price by waiting, reducing how much they are willing to pay initially. For goods that depreciate, however, waiting to buy a used good entails obtaining a lower quality product, generally offsetting the gain from obtaining the product at a lower price, eliminating the incentive to wait.

The above logic and prior theory have conflicting implications for whether allowing resale raises or lowers profit, implying it is an empirical question, one investigated next.

2.2 Industry background

When purchasing a game, consumers can generally choose between a new and a used copy, and between online and offline sellers. The price a consumer pays for a used copy is close to the price of a new copy. The small price difference (about 10 %) is usually attributed to the risk that a used copy may not function and to negative feelings, experienced at the time of purchase, associated with buying a used copy. However, as long as the used game is not badly damaged (such as having a deep scratch), it provides exactly the same service as a new copy. It is also typically assumed that the larger price difference between copies bought in brick and mortar stores and via auction, for both new and used games, reflects additional costs borne by the buyer in an auction. Such costs include shipping fees, the risk of being scammed, and the cost of postponing gratification while the auction concludes and the game ships. It therefore seems reasonable to assume that the true total cost of buying a copy of a game does not depend on its condition, new or used, or on whether it is bought online or offline. Hence, consumers view these choices as roughly equivalent in value.

Consumers, when they have tired of a game, have the option to resell their copy either directly to another consumer via auction, or to a retailer. Since these channels compete for traded-in games, and consumers can freely choose where to resell, it seems logical that the price consumers receive for trade-ins does not depend heavily on where they resell.

The used game market was generally considered concentrated, but the market for new games was not. One company, GameStop, was and generally still is regarded as the dominant buyer and seller of used games. The former director of used games at GameCrazy stated that GameStop’s used game market share was estimated to be 80 % in 2010.Footnote 13 GameStop also competed in the resale market with online auction websites like eBay and niche online retailers. Sales of new games were less concentrated. Based on calculations explained later, GameStop was estimated to account for approximately 25 % of new sales, and competitors such as Best Buy, Target, and Walmart also had substantial new game sales market share. While these major retail chains contemplated entering the used game market in earnest, none did until 2009, after the period investigated in this paper.

See Lee (2012a) for a more in-depth background and related literature.

3 Data

The data used in this paper were constructed from two datasets. The first dataset, from the NPD group, provides information on monthly sales and average prices of new copies of XBOX 360 video games by game in the U.S. from November 2005 to December 2008.Footnote 14 The second dataset contains used game auctions from a popular online marketplace over that same timespan. These latter data are a good indicator of consumers’ decision to resell, and hence serve as a good proxy for the monthly quantity of used games traded-in by consumers, i.e. resold by consumers to retail stores, which is how most games are resold. These data also contain prices consumers receive for reselling.Footnote 15 Because these auctions will proxy for trade-ins, I will subsequently refer to these data as the trade-ins data. Additionally, time-invariant game characteristics were obtained from the NPD group and Game Informer Magazine’s reviews.

The trade-ins data, comprised of 691,722 used game auction sales, were aggregated to the month in order to match the new game sales data. Trade-in quantities were summed and trade-in prices averaged.Footnote 16 Then, the trade-ins data were matched to the new game sales data. In total, 326 games were successfully matched, yielding 5020 game/month observations.

Since the trade-ins data are from a single firm, they must be scaled up to be consistent with the new copy sales data which represent the entire U.S. market. The scale-up involved several steps. First, the expected number of used copy sales by retailers to consumers in the first two months following games’ releases is calculated using industry statistics. Next, this figure is used to calculate the expected number of trade-ins by consumers to retailers in this same time frame, including copies remaining in retailers’ unsold inventories. This calculation uses the average time lag between trade-ins by consumers to retailers and subsequent resale of the same copies from inventory back to consumers. Lastly, the appropriate scale-up factor matches the trade-ins in the data with the expected number of trade-ins from this calculation.

The analysis in this paper also requires information on total sales of a game by retailers to consumers, including both new and used copies. The expected number of used copy sales of each game is calculated from trade-ins, again using the average time a game remains in inventory. Such sales are added to new copy sales to yield an estimate of total sales by retailers to consumers.
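The scale-up and imputation steps described above can be sketched as follows. Every number here is hypothetical and serves only to make the arithmetic concrete; the actual industry statistics and lag are not reproduced from the paper.

```python
# Minimal sketch of the trade-ins scale-up and used-sales imputation.
# All figures are hypothetical, chosen only to illustrate the arithmetic.

observed_tradeins = [500, 800, 700, 600]     # auction data, by month
expected_national_tradeins_2mo = 65_000      # hypothetical industry-based figure
inventory_lag = 1                            # months from trade-in to resale (>= 1)

# The scale factor matches observed trade-ins to the expected national
# trade-ins over the first two months following release.
scale = expected_national_tradeins_2mo / sum(observed_tradeins[:2])
national_tradeins = [q * scale for q in observed_tradeins]

# Used-copy sales to consumers are imputed as trade-ins shifted forward by
# the average time a copy spends in retailer inventory.
used_sales = [0.0] * inventory_lag + national_tradeins[:-inventory_lag]

# Total sales by retailers to consumers = new copies + imputed used copies.
new_sales = [100_000, 60_000, 40_000, 30_000]   # NPD-style new-copy sales
total_sales = [n + u for n, u in zip(new_sales, used_sales)]
print(scale, total_sales)
```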

The dataset using these imputed values for trade-in quantities and quantities of used sales by retailers is used in the remainder of this paper unless otherwise specified. Further details on the data construction are available in an Online Appendix.

3.1 Data consistency tests

A major concern, given the low share of the used market covered by the raw data, is that the propensity to sell online may change as a game ages. This would imply the appropriate scale-up should change over a game’s life cycle.

The trade-ins data provide a simple check for this concern. Sellers whose shipments originated from zipcodes in the lowest population density quintile (average density 114 people per square mile) likely have poor access to brick and mortar stores like GameStop, making offline trading-in impractical. Consumers in high population density areas, however, should on average have much better access to brick and mortar stores at which to trade in their used games. If the relative benefit of trading in at GameStop changes with a game’s age, then consumers in high density areas would likely shift their trade-ins from nearby stores to online, or vice versa, as games age. Consumers in low density areas, however, likely resell used games online regardless. Therefore, if the appeal of trading in at GameStop changes with a game’s age, then the fraction of a game’s used auction sales occurring in the first several months following release, relative to later months, should differ between rural and urban consumers. Figure 1 shows this does not occur, supporting the contention that the data are equally representative over game age, at least for the first year.

Fig. 1 Normalized used auction sales vs. time since release, by population density quintile of seller’s zipcode

A second concern is that the market share of the raw used game data may change over the time period analyzed, due to changing conditions, entry, or exit. This too is easily checked. If this were to occur, then one would expect the ratio of trade-ins in the raw data to total new game sales in the U.S. market to change over time. Figure 2 shows no consistent trend of this sort, addressing this concern as well.

Fig. 2 Ratio of trade-ins in raw data to new game sales in first two months following game release

3.2 Summary statistics

The trends in the new game price data, shown in Table 1, suggest that the decisions of when to buy and when to resell are non-trivial. If consumers buy a game right after it is released, they typically pay about $55 for the game. But, if they wait to buy the game, they can acquire the game for much less, since the price typically declines rapidly. They can on average save 22 % by waiting 6 months, and almost 50 % by waiting a full year to buy the game. The implied rental prices (the buying price minus the amount received when reselling later) also typically decline over time. Since both prices and implied rental prices decline over time, consumers must trade off between buying and using the product immediately and paying less for it later.

Table 1 Average price and quantity patterns over time

The sales data, also summarized in Table 1, show that sales are more front-loaded than in the classic diffusion model, in which sales start slowly and increase over time. This front-loading may be due to the firm’s response to competition from used sales. In the market for video games, on average about 40 % of total sales of a game in the first year occur in the first 2 months. Not surprisingly, almost all of these copies are new, and the firm profits directly from these sales. As time progresses, while total sales typically decline, the number of trade-ins initially increases, reaching a peak in the fifth month, and thereafter declines slightly before appearing to plateau. Used copy sales by retailers to consumers follow a similar pattern, with a slight lag, peaking in the 7\(^{th}\) month following release. Since new copy sales continue to decline, the average proportion of game sales accounted for by used copies increases, exceeding 40 % per month by one year after release. Hence, producers face steep competition from used goods.

Sales are heavily skewed across games. New copy sales in the first year following release ranged from 15,643 for “Fatal Inertia” to over five million for “Halo 3.” The standard deviation equals 642,768.

The data exhibit obvious seasonality, e.g. more sales around Christmas. One method for controlling for seasonality is to add monthly demand shifters. However, Gowrisankaran and Rysman (2009) note the lack of intuition for why products would be enjoyed so much more during the Christmas season, and that adding season dummies to a dynamic model typically requires adding a state variable, substantially slowing estimation due to the curse of dimensionality. They show that deseasoning the data a priori yields results similar to traditional methods, suggesting it is a preferable alternative.

Each monthly numerical variable in the dataset is deseasoned by running a regression of its log on the composite critic review score and its square, age of game dummies, and date fixed effects. Specifically:

$$\begin{array}{@{}rcl@{}} Log(Var_{j,t})&=& \alpha+\beta_{1}\ast rev\_score_{j}+\beta_{2}\ast rev\_score_{j}^{2}+\lambda_{age(j,t)}\ast I(age(j,t))\\ && +\gamma_{t}\ast I(t)+\varepsilon_{j,t} \end{array} $$
(1)

The dependent variable is deseasoned by subtracting \(\gamma _{t}\ast I(t) \) from the log of the dependent variable, and then exponentiating. This variable is then scaled up or down to the point where the sum of this seasonally adjusted variable equals the sum of the corresponding raw (not deseasoned) variable.
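A minimal numpy-only sketch of this deseasoning step follows. The per-observation array layout (variable, review score, game age, date) and the function name are illustrative, not the paper's actual code; the data must contain at least two distinct dates.

```python
# Sketch of the deseasoning in Eq. 1: regress log(var) on review score, its
# square, game-age dummies, and date fixed effects, strip the date effects,
# exponentiate, and rescale so totals match the raw series.
import numpy as np

def deseason(var, rev_score, age, date):
    var = np.asarray(var, float)
    rev_score = np.asarray(rev_score, float)
    age, date = np.asarray(age), np.asarray(date)
    logv = np.log(var)

    def dummies(codes):
        # One-hot dummies with the first level dropped as the base category.
        levels = np.unique(codes)
        return (codes[:, None] == levels[None, 1:]).astype(float)

    X = np.column_stack([
        np.ones(len(var)), rev_score, rev_score**2,
        dummies(age), dummies(date),
    ])
    beta, *_ = np.linalg.lstsq(X, logv, rcond=None)

    # Date fixed effects are the trailing block of coefficients
    # (requires at least two distinct dates).
    n_date = len(np.unique(date)) - 1
    gamma = np.concatenate([[0.0], beta[-n_date:]])   # base date effect = 0
    date_idx = np.searchsorted(np.unique(date), date)

    adjusted = np.exp(logv - gamma[date_idx])
    return adjusted * var.sum() / adjusted.sum()      # rescale to raw total
```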

3.3 Law of motion

An important component of any dynamic model of consumer demand is the specification of consumer expectations of payoff relevant state variables in future periods. A common method is to assume that consumers base expectations on reduced form regressions of current state variables on lagged variables and product characteristics (Gowrisankaran and Rysman 2009; Lee 2012b; Nair 2007). The underlying assumption is that the reduced form model well replicates heuristics that consumers have developed with experience.

To see which observables predict future values of prices, they are regressed on lagged prices and game characteristics. Recall two types of prices exist—purchase and trade-in. The results for both are shown in Table 2. For purchase prices, including one lag alone yields an \(R^{2}\) of 0.90. However, no other variable, including additional lags, increases the \(R^{2}\) by more than 0.01. This suggests that while other variables may be statistically significant, they are not meaningful to the consumers’ decisions. The rightmost columns in Table 2 repeat this exercise for trade-in prices of used games, yielding similar patterns.

Table 2 Price path regressions
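As an illustration of this first-order specification, the following sketch simulates a declining price path and regresses price on its one-month lag. The coefficients and noise level are hypothetical, not estimates from Table 2.

```python
# Illustrative first-order price regression on synthetic data: generate
# P_{t+1} = a + b*P_t + eta with hypothetical (a, b), then recover b and
# the R^2 from a one-lag regression, as in Table 2.
import numpy as np

rng = np.random.default_rng(0)

a_true, b_true = 2.0, 0.9          # hypothetical law-of-motion coefficients
P = [55.0]                          # launch price, like Table 1
for _ in range(50):
    P.append(a_true + b_true * P[-1] + rng.normal(0, 0.5))
P = np.array(P)

# Regress P_{t+1} on a constant and P_t (one lag only).
X = np.column_stack([np.ones(len(P) - 1), P[:-1]])
(b0, b1), *_ = np.linalg.lstsq(X, P[1:], rcond=None)

resid = P[1:] - X @ np.array([b0, b1])
r2 = 1 - resid.var() / P[1:].var()
print(round(b1, 2), round(r2, 2))   # slope near b_true, R^2 near one
```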

4 Empirical model

4.1 Demand

The model of consumer demand is cast as a discrete choice problem, where at the beginning of each period each consumer decides whether to be an owner of each game, independently of which other games they own. If, at the beginning of a period, they do not own the product in question, they decide between buying and not buying it. If they do own the product, they instead decide between keeping it and selling it.

This setup implicitly makes several assumptions. First, it assumes consumers have no use for a second copy. This seems reasonable given that a second copy does not provide any additional functionality. Second, this framework implies that games are not substitutable for one another. Nair (2007) argues this is true because games are fairly unique in their story and game play. They may be no closer substitutes for each other than for other entertainment activities like watching movies. In the Appendix, I test this, finding no evidence that popular games are substitutable, suggesting the assumption is reasonable enough for the purpose of this paper. Third, this setup implies that used and new games are perfect substitutes for one another. Since used games provide the same service as new games, it seems quite reasonable that this assumption would nearly hold. Any price discounts for used copies are assumed to exactly offset a one-time disutility of purchasing a used, as opposed to new, copy. Hence, it is assumed that the true price of buying a game, new or used, is the price of a new copy. Lastly, this setup assumes consumers do not have the opportunity to rent games through a third-party seller. Rental companies did exist, but had very small market share.Footnote 17

In the remainder of this section, I present the specifics. Note that game subscripts are omitted for parsimonious notation, since the described process is repeated for each game during estimation.

4.1.1 Flow utility

Consumers can take one of four possible actions, depending on ownership status: buying, waiting to buy, keeping, and selling.

The flow utility of buying a game is given by:

$$ u_{buy}\left(\bar{\delta}_{i},\alpha_{i},\xi_{t},P_{t},\varepsilon_{i,t}\right)=\bar{\delta}_{i}+\xi_t-\alpha_{i} P_t+\varepsilon_{i,t}=\bar{u}_{buy}\left(\bar{\delta}_{i},\alpha_{i},\xi_{t},P_{t}\right)+\varepsilon_{i,t} $$
(2)

where \(\bar {\delta }_{i}\) is the mean flow utility of the product to individual i at first use, \(\xi _{t}\) is a transient utility shock in period t common across individuals, \(\alpha _{i}\) is the price sensitivity of consumer i, \(P_{t}\) is the price at which the game can be bought, and \(\varepsilon _{i,t}\) is an individual specific shock. The function \(\bar {u}_{buy}\left (\bar {\delta }_{i},\alpha _{i},\xi _{t},P_{t}\right ) \) equals the flow utility of buying minus \(\varepsilon _{i,t}\), and will be used subsequently for notational purposes.

The flow utility of owning, assuming the product was purchased previously, is given by:

$$ u_{own}\left(\bar{\delta}_{i},\xi_{t},h_{i,t},\varepsilon_{i,t}\right)=(\bar{\delta}_{i}+\xi_t) B( h_{i,t})+\varepsilon_{i,t}=\bar{u}_{own}\left(\bar{\delta}_{i},\xi_{t},h_{i,t}\right)+\varepsilon_{i,t} $$
(3)

where \(B\left(h_{i,t}\right)\) is a function that reflects the decrease in value due to length of previous ownership \(h_{i,t}\).

For several reasons, it is assumed that the rate at which consumers lose interest is the same across games. First, it seems unlikely that consumers anticipate the game-specific rate of boredom; rather, it is learned with experience as they play the game. Second, Game Informer reviews’ “Replay Value” scores, which purportedly inform consumers of how quickly they will tire of a game, did not predict differences across games in the rate of price declines or the timing of sales. This suggests differences across games are not large. Third, this assumption reduces the number of parameters to be estimated.

The function \(B\left (h_{i,t}\right )\) is parameterized as \(B\left (h_{i,t}\right )=\left (1-\lambda _{1}\right )\exp \left (\lambda _{2}h_{i,t}\right )+\lambda _{1}\), where \(0\leq \lambda _{1}\leq 1,\lambda _{2}\leq 0\), so that \(B(0)=1\) and \(B\) declines toward \(\lambda _{1}\) with use. I assume that \(h_{i,t}\) is capped at \(H=12\), i.e. subsequent ownership periods beyond H do not change the value of the \(B(\cdot)\) function. More flexible specifications were tested, such as combining an exponential and linear functional form, relaxing the assumption that monthly value with use plateaus. They did not meaningfully improve fit or change the shape of the estimated boredom function.
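A minimal sketch of the boredom function follows, written so that \(B(0)=1\) and \(B\) decays toward \(\lambda_1\). The parameter values are hypothetical, not the paper's estimates.

```python
# Sketch of the boredom function B(h): the fraction of first-use utility
# remaining after h prior months of ownership, capped at H months.
# LAMBDA1 and LAMBDA2 below are hypothetical values, not estimates.
import math

LAMBDA1, LAMBDA2, H = 0.05, -0.8, 12

def B(h: int) -> float:
    """B(0) = 1; declines geometrically toward LAMBDA1; flat beyond H."""
    h = min(h, H)
    return (1 - LAMBDA1) * math.exp(LAMBDA2 * h) + LAMBDA1
```

With these illustrative values, a consumer valuing first-month use at $80 would value the sixth month of use at only a few dollars, matching the qualitative pattern reported in the introduction.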

The flow utility of the outside good equals:

$$ u_{wait}\left(\varepsilon^{0}_{i,t}\right)=\omega+\varepsilon^{0}_{i,t}=\bar{u}_{wait}+\varepsilon^{0}_{i,t} $$
(4)

where \(\omega \) is the value of the outside good and \(\varepsilon ^{0}_{i,t}\) is an individual specific shock. \(\omega \) is normalized to a positive constant large enough to ensure that \(\left (\bar {\delta }_{i}+\xi _{t}\right )\) is positive in every instance. As long as this holds, the actual value of \(\omega \) is inconsequential. If \(\left (\bar {\delta }_{i}+\xi _{t}\right )\) were less than zero, the model would nonsensically imply that tiring with use increases valuation for the good.

The flow utility of selling is given by:

$$ u_{sell}\left(P_{t}^{TI},\alpha_{i},\zeta_{t},\varepsilon^{0}_{i,t}\right)=\omega+\alpha_{i}\left(P_{t}^{TI}+\zeta_{t}\right)+\varepsilon^{0}_{i,t}=\bar{u}_{sell}\left(P_{t}^{TI},\alpha_{i},\zeta_{t}\right)+\varepsilon^{0}_{i,t} $$
(5)

where \(P_{t}^{TI}\) is the price at which owners can resell the product, i.e. the trade-in price, and \(\alpha _{i}\) is the same as in the buying equation. \(\zeta _{t}\), a transaction cost shock common across individuals, is similar to the demand shock \(\xi _{t}\), but is experienced only by current owners.
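The four flow utilities in Eqs. 2–5, excluding the idiosyncratic \(\varepsilon\) shocks (i.e. the \(\bar{u}\) terms), can be sketched as follows. The outside-good value and the boredom function are passed in; all names and values are illustrative.

```python
# Sketch of the deterministic parts of the flow utilities in Eqs. 2-5.
# OMEGA is the (normalized) outside-good flow value; B is any boredom
# function with B(0) = 1. Parameter values in the test are hypothetical.
OMEGA = 5.0

def u_buy(delta, alpha, xi, price):
    # Eq. 2: mean use value plus common demand shock, minus price disutility.
    return delta + xi - alpha * price

def u_own(delta, xi, h, B):
    # Eq. 3: use value scaled down by prior months of ownership h.
    return (delta + xi) * B(h)

def u_wait():
    # Eq. 4: value of the outside good.
    return OMEGA

def u_sell(alpha, p_tradein, zeta):
    # Eq. 5: outside good plus utility of the trade-in price (with common
    # transaction-cost shock zeta), converted to utils by alpha.
    return OMEGA + alpha * (p_tradein + zeta)
```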

4.1.2 Heterogeneity

Managers have noted the video game market consists of two groups of consumers, “hard-core gamers” and the “mass market.”Footnote 18 I use a latent class approximation to the bimodal distribution of valuations (Kamakura and Russell 1989). Following earlier papers focusing on the market for video games (Liu 2010; Nair 2007) I assume there are two types, denoted by k. The low type has intrinsic value of ownership equal to \(\bar {\delta }_{k}=\bar {\delta }\), and the high type has intrinsic value equal to \(\bar {\delta }_{k}=\bar {\delta }+\beta \). The price sensitivity is also estimated separately by type. The fraction of high types amongst the population is estimated as \(\frac {exp\left (\gamma _1+\gamma _{2} * A^{P}_{j,t}\right )}{1+exp\left (\gamma _1+\gamma _2* A^{P}_{j,t}\right )}\), where \(A^{P}_{j,t}\) is the age of platform at the time of game j’s release. The parameter \(\gamma _{2}\) accounts for the changes in the initial composition of consumers, i.e. fraction that are high types, in the market for the game as the platform matures. Lee (2012b) noted the importance of controlling for changes in the installed base of potential buyers in the context of video games. The fraction of non-owners that are high types is allowed to change endogenously over time in the model, since high-type consumers are more likely to purchase thus removing them from the pool of potential buyers for that game.
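The initial high-type share is the logistic function of platform age given above; a sketch with hypothetical \(\gamma\) parameters:

```python
# Sketch of the high-type share at a game's release: a logistic function of
# platform age A^P_{j,t}. GAMMA1 and GAMMA2 are hypothetical, not estimates.
import math

GAMMA1, GAMMA2 = 0.5, -0.1

def high_type_share(platform_age_months: float) -> float:
    z = GAMMA1 + GAMMA2 * platform_age_months
    return math.exp(z) / (1 + math.exp(z))
```

With a negative \(\gamma_2\), as sketched here, later releases on a maturing platform start with a smaller fraction of high types.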

4.1.3 State and control space

While there is only one control variable, ownership, there are several other relevant state variables. They are the price (\(P_{t}\)), trade-in price \(\left (P_{t}^{TI}\right )\), previous periods of ownership (\(h_{i,t}\)), demand shock (\(\xi _{t}\)), transaction cost shock (\(\zeta _{t}\)), and individual specific utility shocks (\(\varepsilon _{i,t}\) and \(\varepsilon ^{0}_{i,t}\)). Ownership status and previous periods of ownership are deterministic. The remaining state variables are stochastic from the perspective of consumers.

Evidence presented in Section 3.3 supports the assumption that purchase prices and trade-in prices follow a first order Markov process. Accordingly, I assume that, from the perspective of consumers, purchase prices for the highest quintile of games follow the random process described in Eq. 6 below.

$$ P_{t+1}=g(P_t)+\eta_{t+1} $$
(6)

where g(P) is the function estimated from the data, and \(\eta _{t}\) is the component of the change in price unpredictable to consumers. Similarly, I assume consumer expectations of trade-in prices are given by:

$$ P^{TI}_{t+1}=f\left(P^{TI}_{t}\right)+\eta^{TI}_{t+1} $$
(7)

where \(f\left (P_{t}^{TI}\right )\) is the function estimated from the data, and \(\eta _{t}^{TI}\) is the component of trade-in prices unanticipated by consumers.
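Consumer expectations under Eqs. 6 and 7 can be illustrated by simulating the two price processes forward; the linear forms of \(g\) and \(f\), their coefficients, and the shock scale below are all hypothetical.

```python
# Sketch of consumer price expectations: simulate purchase and trade-in
# price paths under Eqs. 6-7 with assumed linear g and f (hypothetical
# coefficients) and i.i.d. eta shocks.
import numpy as np

rng = np.random.default_rng(1)

def g(p):       # purchase-price law of motion (hypothetical)
    return 2.0 + 0.9 * p

def f(p_ti):    # trade-in-price law of motion (hypothetical)
    return 1.0 + 0.9 * p_ti

P, P_TI = [55.0], [30.0]            # launch-month prices
for _ in range(11):                  # simulate one year
    P.append(g(P[-1]) + rng.normal(0, 1.0))        # eta shock
    P_TI.append(f(P_TI[-1]) + rng.normal(0, 1.0))  # eta^TI shock
```

Both simulated paths decline over the year, mirroring the patterns in Table 1 that make the timing of purchase and resale non-trivial.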

Following Nair (2007), I allow consumers to incorporate the correlation between price shocks \(\eta _{t}\) and demand shocks \(\xi _{t}\) in their expectations.Footnote 19 Specifically, I assume \(\eta _{t}\) and \(\xi _{t}\) are distributed jointly normal with correlation \(\rho _{\xi ,\eta }\) to be estimated, but are uncorrelated with other state variables. Since they are also not persistent, these shocks follow Rust’s conditional independence assumption (Rust 1987).

Each of the remaining stochastic state variables \(\left (\zeta _{t},\varepsilon _{i,t},\varepsilon ^{0}_{i,t}\right )\) are assumed to be independently distributed across time, and hence are uncorrelated with all other state variables, present and future. This implies that they too follow Rust’s conditional independence assumption. I assume \(\zeta _{t}\) are distributed normally, and \(\varepsilon _{i,t}\) and \(\varepsilon ^{0}_{i,t}\) follow the type 1 extreme value distribution with location parameter equal to the negative of Euler’s constant and scale parameter equal to one.

4.1.4 Value functions

The ownership control variable is binary. Hence, for presentation purposes, the value function can be broken into two value functions, each conditional on ownership status.

For both value functions, one can derive a simplified Bellman equation on a reduced state space, without the variables that satisfy Rust’s conditional independence assumption (\(\xi_{t}\), \(\zeta_{t}\), \(\eta_{t}\), \(\eta_{t}^{TI}\), \(\varepsilon_{i,t}\), and \(\varepsilon^{0}_{i,t}\)), by integrating these states out of the value function. This yields the expected value of the value function before any of these variables are realized. The states \(\varepsilon_{i,t}\) and \(\varepsilon^{0}_{i,t}\) can be integrated over analytically.Footnote 20 However, \(\zeta_{t}\), \(\xi_{t}\), \(\eta_{t}\), and \(\eta_{t}^{TI}\) must be integrated out numerically.

I specify the “alternative specific” (i.e. choice specific) value functions so that they provide the expected maximum discounted future utility.Footnote 21 The “alternative specific” value function of owning (excluding current flow utility) equals:

$$ W_{O}(S_t) = {\displaystyle\int} \ln\left\{ \begin{array} [c]{l} \exp\left(\bar{u}_{own}( S_{t+1},\xi_{t+1}) +\varphi W_{O}( S_{t+1})\right) \\ +\exp\left( \bar{u}_{sell}( S_{t+1},\zeta_{t+1}) +\varphi\frac{\omega}{1-\varphi}\right) \end{array} \right\} df( S_{t+1},\xi_{t+1},\zeta_{t+1}|S_t) $$
(8)

where the state variables \(S_t=\left\{\bar{\delta}_{k},\alpha_k,P_t^{TI},h_{i,t}\right\}\), \(\varphi \) is the discount factor, and \(\varphi \frac {\omega }{1-\varphi }\) gives the expected discounted value of using the outside product for all future periods. This formula defines the expected future utility as the expected maximum utility from the consumer’s two choices in the next period, assuming that the consumer owns the product at the end of the current period. If, next period, the consumer again keeps the product, they obtain flow utility from using it, plus the expected discounted utility from continuing to own it going forward. If, next period, they sell the product, they obtain utility from money received from selling it, plus the discounted expected utility of using the outside good for all future periods.
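To make the recursion concrete, the sketch below iterates a simplified version of Eq. 8 to a fixed point. It assumes, purely for illustration, that the only state is the number of previous ownership periods h, that flow utility declines geometrically with h, and that selling yields a fixed utility; all numbers are hypothetical, not the paper’s estimates. The log-sum-exp form comes from integrating out the two type 1 extreme value shocks.

```python
import numpy as np

phi = 0.975                          # discount factor (varphi in the text)
omega = 1.0                          # per-period utility of the outside good
H = 24                               # cap on tracked ownership periods
u_own = 80.0 * 0.5 ** np.arange(H)   # hypothetical flow utility, declining with use
u_sell = 10.0                        # hypothetical utility of resale proceeds

outside = phi * omega / (1.0 - phi)  # discounted value of the outside good forever
h_next = np.minimum(np.arange(H) + 1, H - 1)

W_O = np.zeros(H)
for _ in range(2000):                # iterate the Bellman operator to a fixed point
    keep = u_own[h_next] + phi * W_O[h_next]
    sell = u_sell + outside
    # Integrating out the two T1EV shocks yields the log-sum-exp form of Eq. 8.
    W_O_new = np.logaddexp(keep, sell)
    done = np.max(np.abs(W_O_new - W_O)) < 1e-10
    W_O = W_O_new
    if done:
        break
```

Because flow utility declines with use, the fixed point W_O is decreasing in ownership length: owners with longer tenures are closer to the point where selling dominates keeping.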

Likewise, the equation for the alternative specific value function for the future value of not having owned the product, \(W_{NO}\), can be written as:

$$ W_{NO}( S_t) = {\displaystyle\int} \ln\left\{ \begin{array} [c]{l} \exp( \bar{u}_{buy}( S_{t+1},\xi_{t+1}) +\varphi W_{O}( S_{t+1}) ) \\ +\exp( \bar{u}_{wait}+\varphi W_{NO}( S_{t+1})) \end{array} \right\} df( S_{t+1},\xi_{t+1}|S_t) $$
(9)

where in this equation the state variables \(S_t=\left\{\bar{\delta}_{k},\alpha_k,P_t,P_t^{TI},h_{i,t}\right\}\). This equation equals the expected maximum utility of the consumer’s two choices in the next period, assuming the consumer does not own the product at the end of the current period. To reduce the computational burden of including several continuous state variables, I assume, for the above equation only, that consumers use the purchase price to approximate the trade-in price. Anecdotal evidence suggests consumers do not pay attention to the trade-in price when deciding whether to buy a game, but rather use heuristics based on past experiences.

4.1.5 Policy functions

An owner will sell the product if the expected discounted utility of selling exceeds the expected discounted utility of keeping. Specifically, the optimal policy is selling if and only if:

$$ \bar{u}_{sell}( S_{t},\zeta_t)+\varphi\frac{\omega}{1-\varphi}+\varepsilon^{0}_{i,t}>\bar{u}_{own}(S_{t},\xi_t)+\varphi W_{O}(S_t)+\varepsilon_{i,t} $$
(10)

Since the error terms \(\varepsilon _{i,t}\) and \(\varepsilon ^{0}_{i,t}\) follow the type 1 extreme value distribution, the probability of selling for owner i of type k with h previous ownership periods can be written analytically as:

$$ s_{k,sell}(S_{t},\xi_{t},\zeta_t)=\frac{\exp\left(\bar{u}_{sell}(S_{t},\zeta_t)+\varphi\frac{\omega}{1-\varphi}\right)}{\exp\left(\bar{u}_{sell}(S_{t},\zeta_t)+\varphi\frac{\omega}{1-\varphi}\right)+\exp(\bar{u}_{own}(S_{t},\xi_t)+\varphi W_{O}(S_t))} $$
(11)

Following analogous steps, the probability of buying for non-owner i of type k can be written as:

$$ s_{k,buy}( S_{t},\xi_t) =\frac{\exp( \bar{u}_{buy}( S_{t},\xi_t)+\varphi W_{O}(S_t) )}{\exp(\bar{u}_{buy}( S_{t},\xi_t) +\varphi W_{O}( S_t))+\exp(\bar{u}_{wait}+\varphi W_{NO}( S_t))} $$
(12)
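The closed forms in Eqs. 11 and 12 are standard binary logit probabilities. The sketch below evaluates the selling probability for hypothetical deterministic payoffs (the numbers stand in for \(\bar{u}_{sell}\), \(\bar{u}_{own}\), and \(W_O\); none come from the paper), using the usual max-subtraction trick for numerical stability.

```python
import numpy as np

def logit_prob(v_a, v_b):
    # Probability of choosing a over b when both carry additive T1EV shocks.
    m = max(v_a, v_b)                  # subtract the max for numerical stability
    ea, eb = np.exp(v_a - m), np.exp(v_b - m)
    return ea / (ea + eb)

phi, omega = 0.975, 1.0
v_sell = 10.0 + phi * omega / (1.0 - phi)   # u_sell + discounted outside good
v_keep = 20.0 + phi * 30.0                  # u_own + phi * W_O, made-up numbers
s_sell = logit_prob(v_sell, v_keep)         # Eq. 11 for these payoffs
```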

5 Estimation

The model is estimated by maximum likelihood. The likelihood function equals:

$$ L(\theta) = {\displaystyle\prod_{j,t}} L\left(P_{j,t},P^{TI}_{j,t},Q_{j,t},Q_{j,t}^{TI};\theta \right) $$
(13)

where Q is the quantity purchased, including used game purchases, \(Q^{TI}\) is the quantity of trade-ins, and the parameter vector \(\theta = \left (\lambda ,\alpha ,\beta ,\gamma ,\delta _{j},\sigma _{\xi },\sigma _{\eta },\sigma _{\eta ^{TI}},\right .\) \(\left .\sigma _{\zeta },\rho _{\xi ,\eta },\varphi \right )\). Prior literature has noted that the discount factor (\(\varphi \)) is not identified in dynamic models of aggregate demand without substitution across products (Aguirregabiria and Nevo Forthcoming) or, in the monopoly case, without observed variation in the expected continuation values of owning the product (Chevalier and Goolsbee 2009). In cases without such variation, its value is often assumed. Following Nair (2007), I assume \(\varphi = 0.975\). This value falls between recent estimates of 0.93 (Lee 2012b) and of close to 1 (Chevalier and Goolsbee 2009). The standard deviations of the price shocks, \(\sigma _{\eta }\) and \(\sigma _{\eta ^{TI}}\), can be estimated a priori.

After a change of variables transformation, the likelihood can be written in terms of the price, demand, and transaction cost shocks, as:

$$ L( \eta,\xi,\zeta;\theta) = {\displaystyle\prod_{j,t}} f( \eta_{j,t},\xi_{j,t};\theta) f( \zeta_{j,t};\theta) \vert \vert J\vert\vert $$
(14)

where \(\left \vert \left \vert J\right \vert \right \vert \) is the Jacobian determinant. The derivation of the Jacobian is shown in an online Appendix. The price shocks \(\eta \) are estimated a priori from the data, according to Eq. 7. The demand and transaction cost shocks (\(\xi \) and \(\zeta \)) are recovered within each iteration, by the method described in the next subsection.

5.1 Recovering error terms

To compute the values of the demand and transaction cost shocks, \(\xi \) and \(\zeta \), for each product and period, I follow recent models (Gowrisankaran and Rysman 2009; Nair 2007). In each iteration in the maximization procedure, the value functions are computed first. Then, the shocks are computed as explained below.

For each game j, the values of \(\xi \) are found sequentially, using the contraction mapping from Berry et al. (1995) to find the value of \(\xi _{j,t}\) that equalizes the observed and the model’s predicted aggregate share buying.Footnote 22 The formula for the model’s predicted market share buying is:

$$ \bar{s}_{buy}(S_{j,t},\xi_{j,t}) =\frac{\sum\limits_{k}M_{j,k,t}\ast s_{k,buy}(S_{j,t},\xi_{j,t}) }{\sum\limits_{k}M_{j,k,t}} $$
(15)

where \(M_{j,k,t}\) equals the mass of non-owners of type k in the market for game j at the beginning of period t.
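The contraction step can be sketched as follows: iterate \(\xi \leftarrow \xi + \ln(\text{observed share}) - \ln(\text{predicted share})\) until convergence. The toy logit share below stands in for Eq. 15 and is purely illustrative.

```python
import numpy as np

def predicted_share(xi, mean_util=-1.0):
    # Toy logit share standing in for Eq. 15 (illustrative, not the model).
    v = mean_util + xi
    return np.exp(v) / (1.0 + np.exp(v))

s_observed = 0.10   # hypothetical observed share buying
xi = 0.0
for _ in range(500):
    # BLP contraction: shift xi by the gap in log shares.
    step = np.log(s_observed) - np.log(predicted_share(xi))
    xi += step
    if abs(step) < 1e-12:
        break
```

At the fixed point the predicted share matches the observed share, and the recovered \(\xi\) is the demand shock consistent with the data.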

The mass of non-owners in the first period equals the installed base of potential consumers in that period. In subsequent periods, it is updated to remove buyers, who have exited the market, and to add incoming potential consumers who were not in the market for the game in the previous period, for example because they had not yet joined the platform, i.e. bought the console. The number of entering consumers of a given type equals the total number of entering consumers multiplied by the fraction of consumers of that type, implied by the \(\gamma \) parameters. The formula for updating the masses of non-owners in each period after the game is released is given below:

$$ M_{j,k,t+1}=M_{j,k,t}\ast\left( 1-s_{k,buy}\left( S_{j,t},\xi_{j,t}\right)\right) +M_{j,k}^{new} $$
(16)

where \(M_{j,k}^{new}\) is the incoming group of potential consumers of type k entering each period.

The transaction cost shocks \(\zeta \) can be found through a similar method. In each period beyond the first, \(\zeta _{j,t}\) is found by equalizing the actual and model’s predicted share of owners selling, using the same contraction mapping. The predicted share selling is:

$$ \bar{s}_{sell}( S_{j,t},\xi_{j,t},\zeta_{j,t}) =\frac{\sum\limits_{k,h} R_{h,j,k,t}\ast s_{k,sell}( S_{j,t},\xi_{j,t},\zeta_{j,t}) }{\sum\limits_{k,h}R_{h,j,k,t}} $$
(17)

where \(R_{h,j,k,t}\) is the mass of owners of type k with h previous ownership periods of game j at the beginning of period t.

The mass of owners of each type and previous ownership length \(R_{h,j,k,t}\) evolves similarly to masses of non-owners. Before the product is introduced, there are no owners. After the good is released, the masses of owners are updated each period by:

$$ R_{h,j,k,t+1}=\begin{cases} M_{j,k,t}\ast s_{k,buy}\left( S_{j,t},\xi_{j,t}\right) & \text{for } h=1\\ R_{h-1,j,k,t}\ast\left( 1-s_{k,sell}\left( S_{j,t},\xi_{j,t},\zeta_{j,t}\right)\right) & \text{for } h>1 \end{cases} $$
(18)

Hence the number of owners of game j of discrete type k with one period of previous ownership equals the mass of buyers of type k in the previous period. The mass of owners of type k with \(h>1\) previous periods of ownership of game j equals the mass of individuals of type k, who in the previous period had \(h-1\) previous ownership periods and decided not to sell.
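The joint evolution of the non-owner masses (Eq. 16) and owner masses (Eq. 18) can be sketched with fixed placeholder probabilities; in the actual model the buy and sell probabilities vary with the state. All numbers are hypothetical.

```python
import numpy as np

K, H = 2, 4                         # consumer types, tracked ownership lengths
M = np.array([100.0, 300.0])        # non-owners by type at the initial period
M_new = np.array([5.0, 15.0])       # entrants per period by type
R = np.zeros((H, K))                # owners by ownership length h and type k
s_buy = np.array([0.20, 0.05])      # placeholder purchase probabilities by type
s_sell = np.array([0.10, 0.10])     # placeholder selling probabilities by type

for t in range(12):
    buyers = M * s_buy                   # type-k non-owners who buy this period
    # Eq. 18, h > 1: last period's owners who did not sell age one period.
    # (This toy drops owners beyond H tracked periods; the RHS product is a
    # new array, so the overlapping slice assignment is safe.)
    R[1:] = R[:-1] * (1.0 - s_sell)
    R[0] = buyers                        # Eq. 18, h = 1: last period's buyers
    M = M * (1.0 - s_buy) + M_new        # Eq. 16: remove buyers, add entrants
```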

5.2 Market size

It is common in studies of video games (Lee 2012b; Nair 2007) to assume the market size is comprised of initial consumers present when the game is released, and incoming consumers arriving each period.

It is easiest to assume all platform owners are in the market for a game. However, some platform owners may have no value for specific games, irrespective of quality, implying that they are not for practical purposes in the market for those games. For example, an adult male may have no interest in a game designed for young girls, even if it has exceedingly high quality.

Previous research supports this assertion. Using stated valuations for songs in the iTunes top 50, Shiller and Waldfogel (2011) found that on average about a third of individuals attribute zero value to a song. However, which individuals had zero valuations depended on the song. Moreover, the zero valuations could not reasonably be explained as the censored tail of the distribution of valuations.

The video game data display similar features. Even among games from the top quintile of critic review scores, the share of platform owners buying a specific game in the first year varies greatly, from about 0.8 % to 55.3 %, nearly two orders of magnitude.

Falsely attributing these differences in sales to quality levels could lead to odd results in models of resale markets. Under such assumptions, games with low sales would be estimated to have low quality. This would in turn imply that most purchases were due to transient utility shocks, and therefore that owners would be keen to resell these games soon after purchase. The model would have trouble explaining why they do not, and could yield biased estimates of the rate at which interest in games is lost.

Heterogeneity, depending on its form, may be able to absorb incorrect assumptions on market sizes common across games. But it cannot explain differences across games unless heterogeneity is allowed to vary by game, which is not feasible, since heterogeneity is not separately identified by game. Moreover, allowing heterogeneity to differ by game would merely compensate for incorrect market size assumptions. A simpler and feasible approach is to adjust the market sizes themselves, as described next.

I assume that the size of incoming potential consumers for a game, \(M_{j,k}^{new}\), is constant, and that after prices stabilize some fraction \(\kappa \) buy upon entering, explaining all sales that period. This assumption seems logical, because once the price has stabilized, there is no incentive to delay purchase. So, \(\kappa *M_{j,k}^{new}\) can be observed directly late in a game’s life cycle. Therefore, I restrict the sample to games released early enough to observe sales after prices plateau—the data suggest 24 months is sufficiently long. Potential consumers also include those present at game launch. A fraction \(\kappa \) of them would be willing to purchase once prices stabilized at lower levels, or before then. Hence, \(\kappa \) times total potential consumers at game launch can be calculated by subtracting from total cumulative sales the total cumulative sales due to potential consumers entering after launch. Last, I find the level of \(\kappa \) such that the game with the highest proportion of console owners consuming it has market size equal to the installed base of platform owners.
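The construction above can be illustrated with made-up numbers: the post-plateau sales level identifies \(\kappa M^{new}\), and subtracting the entrants’ contribution from cumulative sales gives \(\kappa\) times the launch base. This is a sketch of the accounting only, not the paper’s data.

```python
import numpy as np

# Made-up monthly sales for one game over 24 months; sales flatten once the
# price stabilizes.
monthly_sales = np.array([50.0, 30.0, 20.0, 15.0] + [10.0] * 20)

# Once prices plateau, monthly sales equal kappa * M_new: per-period entrants
# times the fraction willing to buy. It is observed directly late in the cycle.
kappa_M_new = monthly_sales[-6:].mean()

# kappa * (potential consumers at launch) = cumulative sales minus the sales
# attributable to cohorts entering after launch.
months_after_launch = monthly_sales.size - 1
kappa_launch_base = monthly_sales.sum() - kappa_M_new * months_after_launch
print(kappa_launch_base)  # 85.0 with these made-up numbers
```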

5.3 Controlling for endogeneity

A worry with this estimation approach is that firms have more information on the demand shock \(\xi \) than the econometrician does, which would bias the estimate of the price coefficient. Controlling for endogeneity is difficult in this context. A valid instrument would be a shock to costs or competitive conditions. It is difficult to find Bresnahan-style instruments (Bresnahan 1981, 1987) for competitive conditions, since games are not very substitutable statistically (shown in the Appendix). It is also difficult to find strong Hausman-style instruments for cost shocks (Hausman and McFadden 1984; Hausman 1996), since marginal production costs are very low (about $1.50).

The difficulty in finding valid instruments motivates use of an alternative method to control for endogeneity, one often used in marketing (Jiang et al. 2009; Nair 2007; Villas-Boas and Winer 1999). The method involves first estimating a reduced form regression of price on an instrument for price, often lagged price, and recording the residuals. Price can then be split into two parts, an exogenous component and these residuals which contain the endogenous component of price. The correlation between these price residuals and the unobserved demand shocks can then be estimated within the model, and explicitly accounted for in the likelihood function.
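The first stage of this control-function approach can be sketched as a simple OLS regression of price on lagged price, keeping the residuals; the AR(1) price path and its coefficients below are illustrative, not estimates from the data.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100
price = np.empty(T)
price[0] = 60.0
for t in range(1, T):
    # Toy AR(1) price path; coefficients are illustrative.
    price[t] = 5.0 + 0.9 * price[t - 1] + rng.normal(scale=2.0)

# OLS of price on a constant and lagged price; the residuals carry the
# endogenous component of price, to be correlated with demand shocks later.
X = np.column_stack([np.ones(T - 1), price[:-1]])
y = price[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ coef
```

Splitting price into the fitted (exogenous) part and these residuals is what allows the correlation between the residuals and the demand shocks to be estimated inside the likelihood.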

Without the correlation parameter, exogenous and endogenous price shocks would elicit the same response in the share buying in the model. The correlation parameter captures the extent to which share buying responds less to endogenous price shocks than to exogenous ones. This muted response thus no longer loads on the price parameter, limiting the extent of bias. At first glance it might appear that there is not the cross-sectional variation in the exogenous component of price required for identification, since prices for games in estimation follow the same first-order Markov process. However, because higher-order Markov processes were empirically ruled out, endogenous price shocks in the current period carry forward and provide cross-sectional variation in the exogenous component of price in future periods. Jiang et al. (2009) show using simulations that this approach in a Bayesian model typically yields lower mean squared error and bias than the GMM method outlined by Berry et al. (1995). As an added benefit in dynamic models, this approach facilitates incorporation of the correlation between price and demand shocks into consumers’ expectations.

5.4 Identification

Heterogeneity in intrinsic valuations (\(\beta ,\gamma \)) is identified by the average trends in prices and in the share of non-owners buying. I borrow an argument from Nair (2007). Both the price and the implicit rental cost decline on average over time, but quality on average remains the same. Hence, the expected gain from buying, and the probability of purchase, typically increase over time for any one type of individual. With heterogeneity, the share of non-owners that are low-valuation types increases over time, which can explain the observed trends in share buying, even when share buying decreases as prices fall. The greater the heterogeneity, the less the share buying increases as prices fall. Differences across games released at different stages of the console’s life cycle identify the impact of the console’s age on the composition of types k.

The mean price sensitivity is identified by cross-sectional variation in prices and share of non-owners buying. Heterogeneity in price sensitivities is identified by the differential impacts of price shocks on sales early in the product’s life cycle, and later in the life cycle when a greater share of non-owners are of the low-valuation type and the low-valuation types are the marginal buyers. This logic is similar to identification arguments from Lazarev (2012).

The coefficient of lost interest (\(\lambda \)) is identified by the time pattern of used sales. A higher rate of lost interest implies consumers are more likely to sell the product soon after purchase, which translates into a more active resale market.

The coefficient of lost interest and heterogeneity parameters are separately identified. While an infinite number of sets of rates of lost interest and initial valuations can result in an individual wanting to sell a game after h periods but not after \(h-1\) periods, given prices, they cannot explain the individual’s choice of when to buy. Her first h uses must in expectation be valued at least high enough to justify buying when she did, but not high enough to justify buying earlier when prices were higher. A series of inequalities like these separately identifies these parameters.

5.5 Computation of counterfactuals

Counterfactual simulations using the parameters from demand estimation allow comparison of prices, profits, and welfare under alternative environments. Specifically, I compare the status quo environment where resale is allowed with hypothetical environments where the firm sells only non-resellable goods, or where the firm exclusively rents goods directly to consumers. I do not consider the case of combining selling and renting, nor allow third party firms like Blockbuster to rent games to consumers.

Calculation of dynamic equilibrium prices in counterfactuals requires that price paths and levels be allowed to differ from those occurring in the status quo environment, and also that consumers have correct expectations. I assume consumers know the firm’s policy function and observe all relevant information. It seems reasonable to assume that the firm sets prices based on a set of demand conditions. Demand conditions for non-resellable goods can be characterized by the mass of non-owners of the good by consumer type (two variables). When resale is allowed, however, conditions for the supply of used goods, which compete with new goods, are relevant for pricing. Thus the masses of owners by type and ownership length, 24 variables in total, must also be included as continuous state variables. The same is true for renting, because demand to rent depends on the length of time different consumers have used the product.

While a method from Nair (2007) can be used to calculate the Markov perfect equilibrium prices when resale is prohibited, the large number of variables characterizing market conditions in the other cases, which must be included as state variables, poses a challenge for estimation. I circumvent this problem for the case of allowed resale by taking observed prices and quantities sold from the data. By doing so, I implicitly assume that the observed prices would result from an equilibrium simulated by the model, were such a simulation feasible. To allow simulation of the rental counterfactual, I make some strong assumptions, all of which should bias profits upwards. Thus, this simulation yields an upper bound estimate of rental profits.

5.5.1 Non-resellable goods

To find the price paths arising from a Markov Perfect rational expectations equilibrium game between firms and consumers when resale markets are shut down, I use a policy function iteration procedure similar to Nair (2007).Footnote 23 The resulting policy functions for each player account for their own impact on the evolution of states, and are optimal given the calculated policy functions of other players in the game.

The profit-maximizing firm’s policy function follows from the static profit function and the evolution of states. The static profit function is:

$$ \pi\left( S^{F}_{j,t},P_{j,t}\right) =( P_{j,t}-MC) \ast Q\left( S^{F}_{j,t},P_{j,t}\right) $$
(19)

where \(Q\left ( S^{F},P\right )\), the quantity purchased, is given by the product of the shares buying and the market sizes of each type, denoted by \(S^{F}\). The change in notation of the state variables, to \(S^{F}\), highlights that the firm’s state variables omit some variables in the set of consumers’ state variables S. Nair (2007) notes the marginal cost for a copy of a game equals about $11.50: $1.50 in production costs plus $10 in fees paid to the platform. The firm’s value function equals:

$$ V_{Firm}\left( S^{F}_{j,t}\right) =\max_{P_{j,t}}\left\{ \pi\left( S^{F}_{j,t},P_{j,t}\right) +\varphi V_{Firm}\left( S^{F}_{j,t+1}|S^{F}_{j,t},P_{j,t}\right) \right\} $$
(20)

where \(S^{F}\) evolves analogously to the masses of non-owners according to Eq. 16. The firm’s policy function is the profit-maximizing price at each state:

$$ P\left( S^{F}_{j,t}\right) =\arg\max_{P_{j,t}}\left\{ \pi\left( S^{F}_{j,t},P_{j,t}\right) +\varphi V_{Firm}\left( S^{F}_{j,t+1}|S^{F}_{j,t},P_{j,t}\right) \right\} $$
(21)

The consumer policy functions used in these simulations are similar to the ones specified in the model section, with an important change. In the counterfactual, I no longer assume consumers base future price expectations on current prices, as I do during estimation. Rather, I assume consumers base future price expectations on the same state variables firms use to set prices in the counterfactual simulations, i.e. the masses of non-owners by type. Consumer expectations are therefore correct.

The numerical procedure proceeds in four steps, separately for each game. The first step is to guess both the consumers’ policy functions, \(s_{k,buy}( S)\), and their expectations for the evolution of the state variables, \(M_{j,t+1}(s_{k,buy}( S))\). The second step calculates an updated guess of the firm’s value and policy functions, \(V_{Firm}(S)\) and P(S), conditioning on the most recent guess of the consumers’ policy functions, \(s_{k,buy}( S)\). The third step iterates on two substeps. The first substep calculates the next period states, \(M_{j,t+1}(s_{k,buy}( S))\), as a function of current states and the most recent guess of the consumers’ policy functions, \(s_{k,buy}( S) \). In the second substep, the consumers’ policy functions, \(s_{k,buy}( S)\), are recomputed using the updated guess of consumers’ expectations, \(M_{j,t+1}(s_{k,buy}(S)) \). These two substeps are repeated until the total absolute difference in the consumers’ policy functions between iterations is sufficiently small. This concludes the third step, yielding the next guess of the consumers’ policy functions, \(s_{k,buy}(S)\). The fourth step iterates on steps two and three until the total absolute difference in the firm’s policy function between iterations falls below a chosen tolerance, at which point the procedure stops.
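The four steps can be sketched on a deliberately tiny toy problem: one state (the mass of non-owners, on a grid), a reduced-form logit buy probability in which an expected lower future price discourages buying today, and a firm choosing price on a grid. All functional forms and numbers here are illustrative stand-ins for the paper’s model, not its actual specification or estimates.

```python
import numpy as np

phi = 0.975                                 # discount factor
MC = 11.5                                   # marginal cost per copy (Nair 2007)
P_grid = np.linspace(15.0, 60.0, 19)        # candidate prices
M_grid = np.linspace(0.0, 1.0, 21)          # mass of non-owners (state)

def s_buy(P, P_next):
    # Toy consumer policy: buying is less attractive when a lower price is
    # expected next period (option value of waiting, in reduced form).
    return 1.0 / (1.0 + np.exp(-(3.0 - 0.10 * P + 0.05 * (P - P_next))))

policy = np.full(M_grid.size, P_grid[-1])   # step 1: guess the pricing policy
V = np.zeros(M_grid.size)
for _ in range(30):                         # step 4: outer loop over steps 2-3
    old_policy = policy.copy()
    for _ in range(300):                    # step 2: firm value/policy iteration
        V_new = np.empty_like(V)
        pol_new = np.empty_like(policy)
        for i, M in enumerate(M_grid):
            # step 3 (collapsed): consumers expect the next price implied by
            # the most recent guess of the pricing policy.
            P_next = np.interp(M, M_grid, old_policy)
            s = s_buy(P_grid, P_next)
            M_next = M * (1.0 - s)          # non-owners shrink by the buyers
            cand = (P_grid - MC) * M * s + phi * np.interp(M_next, M_grid, V)
            j = int(np.argmax(cand))
            V_new[i], pol_new[i] = cand[j], P_grid[j]
        done = np.max(np.abs(V_new - V)) < 1e-6
        V, policy = V_new, pol_new
        if done:
            break
    if np.max(np.abs(policy - old_policy)) < 1e-8:
        break
```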

5.5.2 Rented goods

Many information goods are now “streamed”, whereby the firm sells the right to use the digital product over the Internet for a short length of time, without granting ownership. In this case, consumers lose access to the product at the end of the period, and must rent the product again if they wish to continue to have access to it. This strategy is commonly employed for movies (e.g. Netflix, Amazon Prime) and music (e.g. Spotify).

Four assumptions are made to circumvent issues arising when simulating rentals. Each should bias rental profits upwards, yielding an upper bound estimate of rental profits. First, fully characterizing demand would require too many state variables to feasibly simulate a Markov equilibrium, so I instead allow the firm to set flat rental prices. Second, the licensing fees the platform charges content producers, currently charged per physical copy produced, would likely differ for digital rentals. To address this, I assume zero marginal costs of renting, but add a fixed cost equal to the total marginal cost incurred when resale is allowed.Footnote 24 Third, I assume zero transaction costs for acquiring or returning a rental, and allow consumers to rent again after they have stopped renting. Fourth, consumers incorporate expected future value into their willingness to pay when purchasing a game, implying that profits from outright sales during any length of time reflect consumers’ use beyond that time. This would not be true, however, for short rentals. I therefore allow the firm to earn profits from renting over a much longer period: two years, as opposed to one when goods are sold. To be consistent, however, I only allow new consumers to enter during the first year following release.

Optimal profits are simulated by searching over the flat rental price to maximize profits. For each rental price, the consumer policy functions are calculated, assuming they expect rental prices to remain flat.

6 Results

The model was estimated using the sample of 14 games in the top critic quintile that are observed for a long enough period to construct market sizes.Footnote 25 The parameter estimates and their standard errors are reported in Table 3. Context is provided below.

Table 3 Estimation results

The two curves in Fig. 3, one for each consumer type, show the consumers’ average monthly values for owning a game conditional on how many periods they have owned that game previously. The “hardcore gamers,” i.e. high types, value a month of use of an average game at about $80 if they have not owned it previously, about $30 more than the low types. Both types tire quickly with use. The high types, for example, lower their value for a month’s use of an average game from about $80 for the first month they own it to about $4 per month after having owned it for 6 months. About 90 % of this decline occurs in the first two months following purchase.

Fig. 3

Static (one-period) dollar value from usage against length of previous use

The fraction of consumers in the market that have high intrinsic value for a game, as opposed to low intrinsic value, is implied by the parameters \(\gamma \). For games released at the same time as the platform was released, it is estimated that about half of the consumers in the market for the game are of the high valuation type. For games released 6 months later, however, only about 20 % are. This reflects the endogenous adoption decision shown in Lee (2012b) and found in Liu (2010). Consumers with high valuations for games presumably are more likely to buy the console early on, even before many games have been developed for it. Later, lower valuation types comprise a larger share of consumers.

The price elasticities over time are shown in Table 4. Both static (one-period) and dynamic elasticities are reported. For both types of elasticities, I allow expectations of future prices to change with the current price change. The average of the static elasticities, \(-2.16\), is quite close to the average elasticity of about \(-2\) estimated in Nair (2007). The dynamic elasticities, which report the percent change in the next 3 months’ sales following a one percent change in price in the current month, are similar.

Table 4 Mean price elasticity estimates

The estimated values of the standard deviations of \(\xi \) and \(\zeta \) are both relatively small. The standard deviation of the demand shock \(\xi \) equals about $4 for the high type, roughly 5 % of a high type’s typical initial one-period valuation for a game. The standard deviation of the transaction cost shock, \(\zeta \), is $15 for a high type, roughly 15 % of the high type’s typical initial static valuation for the good. The correlation between the demand and price shocks is estimated to be 0.24.

6.1 Counterfactual results

The main findings are shown in Table 5. Profits from selling non-resellable games are about twice as high as the corresponding profits from resellable games in the first year following a game’s release. This is true even before accounting for the fixed costs of developing the game.Footnote 26 There is substantial variation across games. The standard deviation of the percent increase in profits is 75 %, although each simulated game is estimated to yield higher profits when resale is prohibited. The overall profit difference arises because (i) when resale is not allowed the firm sells more copies, and (ii) those copies sell for higher prices in later periods. Some of the difference in quantities sold is due to new sales being displaced by used copies when resale is allowed, and some to consumers delaying purchase beyond a year when resale is allowed and prices fall quickly. These latter sales are not captured in this comparison. Most individuals buying later when resale is allowed, however, buy used copies if available, or buy the game at a price close to marginal cost, so the firm earns little profit on later sales when resale is allowed. Hence ignoring later sales has negligible impact.Footnote 27

Table 5 Counterfactual results (total across 14 games)\(^{\mathrm {a}}\)

The rental simulations yield an upper bound estimate of profits that is barely higher than, and not significantly different from, profits from selling a resellable good. Rental profits are still much lower than profits from selling a non-resellable good, about half as much.

The average equilibrium price paths under each distribution strategy are shown in Fig. 4. When resale is prohibited, optimal prices decline very slowly over time, falling by only a few dollars over the first year. As a result, consumers have little incentive to wait. When resale is allowed, however, prices fall sharply over time, declining for this group of games by about $30 over the first year on average, and implied rental prices fall too.Footnote 28 Consumers in the model anticipate the falling prices, and fewer are willing to buy in the first few months when prices are high, despite the fact that the good is worth more to them when resale is allowed, since they then have the option to resell it. Figure 5 verifies this, showing that the increase in monthly sales from preventing resale is highest early on. By the end of the first year, monthly sales of resellable copies are close to monthly sales under prevented resale.

Fig. 4

Simulated price paths

Fig. 5

Simulated sales trends

The difference in prices across counterfactuals is logical. When resale is allowed, lowering price reduces the number of traded-in games. Fewer trade-ins in the current period imply lower inventory levels of used games at retailers. Hence, by dropping price in the current period, the seller reduces competition from used copies in later periods. This dynamic effect provides additional motivation to lower prices when resale is allowed.

A related question is what impact resale markets have on consumer welfare, after accounting for the firm’s response. I calculate the exact net consumer welfare gain from resale markets using Small and Rosen’s (1981) formula for Hicksian equivalent variation. The net consumer welfare gain or loss from allowing resale to an individual of type k, in dollars, equals:

$$ EV_{k}=\frac{W_{NO,YR}( k,1) -W_{NO,NR}( k,1) }{\alpha_{k}} $$
(22)

where \(\alpha _{k}\) is the price sensitivity, and \(W_{NO,YR}( k,1) \) and \(W_{NO,NR}( k,1) \) are the value functions at game release, conditional on not owning the good, when resale is and is not allowed, respectively. I find that distributing only non-resellable goods would lower welfare by $23.80 on average for each low-type individual, but raise welfare by $11.87 on average per high-type individual, for an aggregate loss of $21 million on average across games.Footnote 29 High-type individuals are better off because they can acquire the game for less. Low types, however, derive positive utility from using the product for fewer periods. Losing the ability to resell the game is therefore quite costly to them.
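Eq. 22 is a simple rescaling of value-function differences by the price sensitivity. The sketch below evaluates it for two types with hypothetical value-function levels and sensitivities; none of these numbers are the paper’s estimates, though the signs mirror the reported pattern (low types gain from resale, high types lose).

```python
# Hypothetical value-function levels at release and price sensitivities
# (alpha_k); none of these numbers are the paper's estimates.
W_NO_YR = {"low": 120.0, "high": 95.0}   # resale allowed (YR)
W_NO_NR = {"low": 110.0, "high": 98.0}   # resale prohibited (NR)
alpha = {"low": 0.50, "high": 0.25}

# Eq. 22: equivalent variation of allowing resale, in dollars, by type.
EV = {k: (W_NO_YR[k] - W_NO_NR[k]) / alpha[k] for k in alpha}
```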

Using an analogous equation for consumer welfare changes, I find that renting as opposed to selling resellable goods lowers aggregate consumer welfare by $75 million for the average game. This loss arises from the fact that consumers forgo using the product when their valuations are between zero and the rental price.

In a related paper, Ishihara and Ching (2011) find less of a difference in profits, possibly due to differences across these markets. In Japan, new game prices do not fall over time, in stark contrast to other countries. Additionally, the Japanese market seems to place a high value on novelty. Accordingly, they allow for novelty effects, i.e. for perceived quality to decline for non-owners as well. They do not, however, allow for heterogeneous usage values, since heterogeneous usage values are not separately identified from novelty effects. In their judgment, novelty was the more important feature to include when studying the Japanese market.

If novelty is an important feature in the U.S. market, then the findings in this paper may overstate the true impact of resale. Recall that the rapid price declines, which are present only when resale markets exist, make waiting to buy an attractive substitute for buying immediately. This substitute reduces extractable surplus when resale markets exist. However, the value to a consumer of waiting to buy is reduced when novelty matters, since in later periods the good will inherently be valued less due to novelty effects. As a result, firms can extract more revenue from consumers in the presence of resale when novelty effects exist, possibly implying a smaller profit gain from preventing resale. The profit gain should not disappear, however. Even assuming the other extreme—novelty effects but no heterogeneous usage values—Ishihara and Ching (2011) still find that preventing resale would meaningfully raise profits in Japan.

6.2 Robustness checks

The standard approach for comparing profits between the status quo and counterfactual policies is to simulate both scenarios using the estimated parameters. For rentals and non-resellable goods, this was feasible. However, for reasons described earlier, it is not feasible to fully simulate the case where resale is allowed. This leads to the concern that a poorly specified model might be driving the finding that preventing resale raises profits.

This subsection tests whether any of several modeling assumptions drives the main result. One at a time, each assumption is changed, and the estimation model and counterfactual simulations are repeated. The results are shown in Table 6.

Table 6 Robustness checks: profits (in $ millions)

Neither reducing the monthly discount factor from 0.975 to 0.95 nor reducing the number of games traded in by 25 % changes the results qualitatively.Footnote 30 Note that, by construction, profits from resellable goods are unchanged.

The assumed market size does matter, however. The main specification assumed that the most popular game had a market size equal to the installed base of platform owners. Reducing the market size by 25 % for all games reduces non-resellable good profits by 25 % and rental profits by 29 %. This likely reflects the fact that reducing the market size reduces the number of unserved consumers in the status quo case, who may be served, and thus contribute to revenues, in counterfactual environments. Reducing the market size further, by 50 %, implies that nearly everyone in the market for a game buys it within the first two years. In this case, the difference in profits between non-resellable and resellable goods was not statistically significant. However, a drastically different market size assumption was required to reach this result.

The observed relationship between rental profits and resellable good profits can be used to lend credibility to the main model. Because buying and later selling a resellable good is logically similar to renting, as explained in Section 2.1, we might expect profits from renting to be similar to profits from resellable goods. This provides a check: are rental and resellable good profits in fact similar? In the main model they were, $522 vs. $495 million, close enough that the difference was not significant. In robustness checks yielding different qualitative findings, they were not. The relationship between non-resellable good profits and rental profits was more stable: the former exceeded the latter by between 83 % and 165 % across robustness tests. So the two environments that could be simulated did display a robust qualitative result. We might expect a similar result for the impact of eliminating resale directly, and found one in the main model, which provides some ex-post support for it.

7 Discussion and conclusion

Firms can, or may soon be able to, circumvent the first sale doctrine and legally prevent resale outright in the U.S. by selling non-resellable digital downloads or by streaming rentals. The results in this paper show that profits from selling video games would be much higher for non-resellable goods: over 100 % higher in the base specification. However, the exact size of the increase was sensitive to the assumed market size. Throughout all robustness checks, non-resellable good profits were substantially higher than rental profits, by at least 83 %.

These findings stand in contrast to most previous empirical and theoretical papers, which focus on monopolists producing goods for which consumers do not lose interest. The different results, along with the logic put forth in Section 2.1, suggest that the rate at which consumers lose interest may be important in determining the impact of resale on producer profits. This may interest managers and policy makers, since the goods that most obviously satisfy this assumption, entertainment information products, are precisely the goods easily distributed as non-resellable downloads.