1 Introduction

Whether or not public investments and policy measures improve societal welfare is one of the most important questions confronting economists. Answering it requires a clear metric of inter-temporal social welfare against which alternative investments and policies can be compared. The choice of social welfare function (SWF) determines a social discount rate (SDR), which reflects the rate at which society is willing to make inter-temporal trade-offs of consumption. Discounting the future is an essential step in social cost–benefit analysis (CBA) since it identifies which investments or policies have a rate of return sufficient to raise social welfare. CBA is central to the economic analysis of public interventions, so an important question arises: do governments really reflect societal preferences in their choice of the SDR (at least insofar as societal preferences are revealed by private decisions)? The empirical analysis in this paper suggests that the UK government might not.

The SDR determines the weight that is placed on future costs and benefits in CBA, summarising how the SWF accounts for well-being in the future. Increasingly, public policy has implications for the well-being of successive generations, not just the current one. Climate change mitigation, nuclear energy and the valuation of natural capital are all examples of such policies (Arrow et al. 2013a, b; Gollier 2012; NCC 2017; Fenichel et al. 2016). Whether, or how much, to invest in intergenerational projects is extremely sensitive to the SDR. For this reason, determining the SDR has been described as “one of the most critical problems in all of economics” (Weitzman 2001).

This paper focusses on the way in which social welfare is measured by Her Majesty’s Treasury in the UK, and the SDR that it uses in its guidelines on CBA: the “Green Book: Appraisal and Evaluation in Central Government” (HMT 2003). Every department in the UK Government is obliged to follow the Green Book guidelines in evaluating public projects, so it has considerable influence. The SDR that HM Treasury uses is a risk-free Social Rate of Time Preference (SRTP) calibrated according to the Ramsey Rule: \( SRTP = \delta + \eta g \). The SRTP reflects the typical discounted utilitarian inter-temporal SWF in which the instantaneous utilities of a representative agent are discounted at a rate \( \delta \), the elasticity of marginal utility is \( \eta \) and the per capita growth rate of consumption is \( g \). Two critical features of the Green Book guidelines on the SDR are analysed in this paper.

Our primary focus is on the elasticity of marginal utility, \( \eta \), which is a key determinant of the SDR. \( \eta \) reflects the responsiveness of the representative agent’s marginal utility to changes in consumption, and determines the “wealth effect” (\( \eta g \)) in the SDR. Given the sensitivity of project appraisal to the SDR it is surprising to find that the estimation of \( \eta \) has not received much serious attention. Furthermore, the attention it has received has led to considerable disagreement on the matter, sometimes creating more heat than light (Stern 2007; Dasgupta 2008; Nordhaus 2007).

One source of disagreement on this matter concerns the appropriate source of information with which to calibrate the SWF: subjective normative/prescriptive views or positive/descriptive data (Arrow et al. 1996; Dasgupta 2008). The analysis in this paper takes the view, widely held amongst CBA practitioners, that the SWF should reflect the preferences observed in society. Within the Ramsey framework \( \eta \) can be interpreted as reflecting a societal preference for consumption smoothing, inequality aversion or risk aversion. Implicit in the Ramsey approach is that \( \eta \) ought not to depend on the domain (inequality, inter-temporal substitution, substitution between goods, and risk aversion) in which the agent is making decisions. That is, aversion to differences in consumption is the same regardless of whether it occurs between individuals today, across time or across risky scenarios. Using revealed preference methodologies in each of these domains we provide new estimates of the elasticity of marginal utility for the UK. This allows us to empirically test the theoretical proposition of parameter homogeneity across domains.Footnote 1 Although our focus is mainly on revealed preference methodologies, this is certainly not to say that we think stated preference approaches are incapable of providing credible estimates of \( \eta \). One important feature of the studies we present, however, is that they all use secondary data.

The second focus of the paper is on the term structure of the SDR. The Green Book recommends a declining term structure of discount rates for costs and benefits with longer maturities. This guidance is also routinely followed in public policy appraisal.Footnote 2 The precise shape of the term structure depends on the nature of the SWF, but also on the level of persistence in the diffusion process of consumption growth. The current Green Book term structure is not based on a thorough empirical analysis of the time series properties of UK growth (see Groom and Hepburn 2017). In order to provide a more robust basis for the guidelines on long-term CBA, we estimate an empirical term structure of discount rates for the UK by estimating the growth diffusion process and calibrating it using our estimates of \( \eta \).

The results of our analysis provide grounds for believing that the Green Book guidelines might need to be updated on two counts. First, a meta-analysis of the four new estimates of \( \eta \) plus another recently-provided estimate indicates that they are not significantly different from each other, yet significantly different from unity. That is, the hypothesis that \( \eta = 1 \), the value used to calibrate the SWF and SDR in the Green Book, is rejected. In short, HM Treasury is not valuing social welfare in accordance with the preferences revealed by UK citizens, no matter in which domain these preferences are revealed. Second, neither does the Green Book term structure of the SDR reflect societal preferences for the distant future, given how growth is expected to diffuse. The estimated impact of persistence on the term structure of SDRs does not provide justification for the current SDR being as low as 1% for horizons of 300 years. The term structure should decline to around 2.5–3% at most.

The analysis in this paper is timely since in March 2018 HM Treasury published its conclusions following what was colloquially referred to as the “refresh” of the Green Book guidelines. Our results suggest that the way in which HM Treasury evaluates social welfare in both the short- and the long-run should probably also be refreshed. The influence of this recommendation extends beyond the Green Book itself. It has recently been recommended that discounting procedures used by the Office for National Statistics in the calculation of the UK National Accounts should be harmonised with the Green Book.Footnote 3 Our recommendations could therefore influence the estimation of the National Accounts, including those for human and natural capital.Footnote 4 Further afield, the US National Academy of Sciences recently released guidance on the estimation of the Social Cost of Carbon (SCC) (NAS 2017). Its lengthy chapter on social discounting concludes that Ramsey discounting is a desirable approach to the evaluation of the SCC, and emphasises the need for robust methodologies for the estimation of its parameters. Recent debate suggests that revealed preference estimates of such parameters are more acceptable in the US (Drupp et al. 2018), so the methods and procedures we use might provide guidance for future work in the US and elsewhere.Footnote 5 Finally, while our focus has been on the importance of estimating the elasticity of marginal utility for CBA, the results of this paper are also relevant to the evaluation of non-marginal interventions, such as Integrated Assessment Modelling of climate change mitigation and macro-economic policy simulations. Such analyses require a direct calibration of a social welfare function.Footnote 6 Our results therefore provide a revealed preference benchmark for a broader range of policy evaluations, beyond the confines of CBA.

The paper proceeds as follows. We first provide some background on social discounting and the Ramsey Rule. We then utilise five revealed preference methodologies to estimate η: the equal-sacrifice approach, the Euler-equation approach, the Frisch additive-preferences approach, an approach based on insurance and the subjective-wellbeing approach.Footnote 7 We implement the first four of these methodologies ourselves, improving on previous empirical work in several important ways. First, for the equal-sacrifice income tax approach we obtain a representative estimate of η by appropriately weighting by the number of income tax payers in each tax bracket. Second, we analyse unique historical data on income tax schedules stretching back to 1948 and address a number of technical issues thrown up by earlier research. Third, we estimate the Euler equation using data from 1970 to 2011, checking carefully for parameter stability and endogeneity. Fourth, in undertaking the Frisch additive-preferences approach, we test the underlying assumption of additive preferences, something ignored by previous research.

A meta-analysis of the estimates is then undertaken to obtain a single ‘best’ estimate and test our hypotheses concerning the homogeneity and value of η. Finally, we estimate a growth diffusion process for the UK and calibrate the term structure of the SDR using our estimate of η. Taken together, our empirical evidence shows that HM Treasury’s calibration of social welfare arguably does not adequately represent society’s revealed preferences for the short- and the long-run.

2 The Ramsey Rule and Its Calibration

The workhorse specification of inter-temporal welfare is the discounted Utilitarian framework, which leads to the following expression for the SDR, known as the Ramsey Rule:

$$ r = \delta + \eta g $$
(1)

where r is the social rate of return to capital, δ is the pure rate of time preference, g is the average per capita consumption growth rate and \( \eta \) is the elasticity of marginal utility.

Equation (1) is known as the Ramsey rule, after Ramsey (1928). The right hand side of (1) is commonly referred to as the SRTP, and is argued to be the appropriate risk-free SDR for the appraisal of public projects (Lind 1982; Feldstein 1964). The UK, France and many other countries use the SRTP as the SDR in public project appraisal (e.g. HMT 2003; Lebegue 2005; ADB 2007; MNOF 2012; Quinet 2013). Beyond this, the Working Group III contribution to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report centres its discussion of inter-generational equity on this model (Kolstad et al. 2014, pp. 229–231), while the same representative agent approach is used in most Integrated Assessment Models of climate change (Nordhaus 2014). The value of the Ramsey Rule is also recognised in the recent National Academy of Sciences report on the Social Cost of Carbon (NAS 2017). Views differ as to the correct values of the parameters of the Ramsey Rule: δ, η and g. The Green Book uses values of 1.5%, 1 and 2% respectively for a short-term risk-free SDR of 3.5% (HMT 2003).Footnote 8 The French government was advised to use values equivalent to 0, 2 and 2% respectively (Lebegue 2005), while more recent recommendations propose a risk-free SDR of 2.5% (Quinet 2013). Disagreement on the correct value of δ has rumbled on since Ramsey (1928) whilst, albeit for quite different reasons, opinions also differ sharply on the correct value of η.Footnote 9
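As a simple check on these calibrations, substituting the quoted parameter values into Eq. (1) gives:

$$ SRTP_{\rm Green\;Book} = 1.5\% + 1 \times 2\% = 3.5\% , \qquad SRTP_{\rm Lebegue} = 0\% + 2 \times 2\% = 4\% . $$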

Disagreement over the value of η arises because it has several different interpretations in the Ramsey context: intra-temporal inequality aversion, inter-temporal inequality aversion or risk aversion. These interpretations of η give rise to separate methods of estimation which have historically yielded different estimates, and this is the main source of disagreement among economists (Dasgupta 2008; Drupp et al. 2018). For instance, Atkinson et al. (2009) suggest that the concepts are “siblings, not triplets”, meaning related, but not numerically identical. It has also been observed that disentangling the concepts helps resolve the so-called risk-free rate and equity premium puzzles (Weil 1989). Last, mainly for intergenerational projects, economists also disagree on whether a positive or a normative perspective should be adopted in the estimation of the parameters of the SRTP, including η.Footnote 10

The critical role of η in the risk-free discount rate can also be illustrated when uncertainty in growth (g) is considered and the Ramsey rule is extended. In the simplest case, in which g is i.i.d. normal with mean \( \bar{g} \) and variance \( \sigma^{2} \), the Ramsey rule becomesFootnote 11:

$$ {\it SRTP} = \delta + \eta \bar{g} - 0.5\eta \left( {\eta + 1} \right)\sigma^{2} $$
(2)

where \( \bar{g} \) is the expected value of the annual consumption growth rate (Gollier 2012).Footnote 12 The final term on the right hand side of this equation is known as the ‘prudence’ effect, and reflects a precautionary motive for saving. If η > 0 this term reduces the risk-free SRTP, and the size of the correction is determined by η.
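To gauge the size of the prudence term, take η = 1.5 and a standard deviation of annual growth of σ = 2% (both purely illustrative values, not Green Book assumptions or our estimates):

$$ 0.5\,\eta \left( {\eta + 1} \right)\sigma^{2} = 0.5 \times 1.5 \times 2.5 \times \left( {0.02} \right)^{2} \approx 0.00075 , $$

i.e. a reduction of less than 0.1 percentage points in the SRTP, which foreshadows the small prudence effect reported in Sect. 9.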

For long-term CBA, there is now much interest in policy circles in the term structure of the SDR. Many theoretical arguments have been presented for a term structure which declines with the time horizon.Footnote 13 Most of these require uncertainty and persistence in either g or r itself. For instance, if g is subject to autocorrelation, this can provide a justification for declining discount rates. In the simple case where g follows a mean-reverting process it is straightforward to show that the short-run and long-run risk-free SRTP become, respectively (Gollier 2012, ch 3):Footnote 14

$$ r_{t} = \left\{ \begin{array}{ll} \rho + \eta \mu - 0.5\eta^{2} \left[ {\sigma_{x}^{2} + \sigma_{y}^{2} } \right] & \quad {\text{ for }}t = 1 \hfill \\ \rho + \eta \mu - 0.5\eta^{2} \left[ {\sigma_{x}^{2} + \frac{{\sigma_{y}^{2} }}{{\left( {1 - \phi } \right)^{2} }}} \right] & \quad {\text{ for }}t = \infty \hfill \\ \end{array} \right. $$
(3)

where ρ denotes the pure rate of time preference (δ above), μ is the mean growth rate, \( \sigma_{x}^{2} \) and \( \sigma_{y}^{2} \) are the variances of the transitory and persistent growth innovations defined in footnote 14, and ϕ measures the degree of persistence in g. In this context, the value of η has additional significance; other things equal, it determines the shape of the term structure of risk-free discount rates in the intervening period and the extent of its decline with the time horizon.Footnote 15
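To make Eqs. (1)–(3) easy to experiment with, the following minimal sketch (in Python) computes the simple Ramsey rate, the rate with prudence and the two limits of the term structure. The parameter values in the example are purely illustrative; they are not the Green Book values nor the estimates reported later in the paper.

```python
# Sketch: risk-free SRTP under Eqs. (1)-(3). All parameter values below are
# illustrative only; they are not the Green Book values or our estimates.

def ramsey(delta, eta, g):
    """Simple Ramsey rule, Eq. (1): r = delta + eta * g."""
    return delta + eta * g

def ramsey_prudence(delta, eta, g_bar, sigma):
    """Extended Ramsey rule with prudence, Eq. (2)."""
    return delta + eta * g_bar - 0.5 * eta * (eta + 1.0) * sigma ** 2

def term_structure_limits(rho, eta, mu, sigma_x, sigma_y, phi):
    """Short-run (t = 1) and long-run (t -> infinity) rates of Eq. (3)."""
    r_short = rho + eta * mu - 0.5 * eta ** 2 * (sigma_x ** 2 + sigma_y ** 2)
    r_long = rho + eta * mu - 0.5 * eta ** 2 * (sigma_x ** 2 + sigma_y ** 2 / (1.0 - phi) ** 2)
    return r_short, r_long

if __name__ == "__main__":
    delta, eta, g, sigma, phi = 0.015, 1.5, 0.02, 0.02, 0.5
    print(ramsey(delta, eta, g))                                   # ~0.045
    print(ramsey_prudence(delta, eta, g, sigma))                   # ~0.04425
    print(term_structure_limits(delta, eta, g, 0.0, sigma, phi))   # ~(0.04455, 0.0432)
```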

A wide range of estimates for η exists in the literature. Based on a variety of methodologies, such as the equal-sacrifice income tax approach (see below) and estimates of the Euler-equation, Stern (1977) advocates a value of around 2 with a possible range of 1 to 10. Based on a review of the revealed preference literature, Pearce and Ulph (1995) suggest a value of between 0.7 and 1.5, with a best-guess estimate of 0.83 based on the Euler-equation estimates provided by Blundell et al. (1994). Cowell and Gardiner (1999) point to a range of 0.5–4.0 based on a similar range of methodologies, whilst Evans and Sezer (2005) apply the equal-sacrifice income tax approach to all the countries of the European Union and find that values fall between 1.3 and 1.6. More recently, in response to the Stern Review, Gollier (2006) suggested a value between 2 and 4 on the basis of a thought experiment concerning willingness to pay to reduce risk, while Dasgupta (2008) prefers a value of 2 on the basis of introspection on inequality aversion. Drupp et al. (2018) obtain a more representative picture of expert opinion on the matter in their survey of over 200 discounting experts, for whom the mean (median) value of η was 1.3 (1).

The range of these estimates is wide partly as a result of the diverse methodologies employed and the subjective manner in which they are combined. Also, with few notable exceptions, the applications have been empirically weak and tests of the theoretical assumptions underpinning these methodologies are absent. Indeed, the influential article by Stern concludes in one section “it is hoped that enough has been said to prevent any reader taking such numbers away for direct use in cost–benefit analysis” (Stern 1977, p. 244). Given the sensitivity of CBA to the SDR, it is obviously important to have precise estimates of η. The rest of the paper provides more robust estimates that we hope will be used in CBA.

3 Estimating η from Income Tax Schedules: The Equal-Sacrifice Approach

In this section we use ‘socially-revealed’ preferences to infer the value of η. More specifically, we analyse information on the progressivity of the income tax schedule to infer the value of η under the assumption of equal sacrifice. This approach also requires that the utility function takes a known (almost invariably iso-elastic) form. The justification for the assumption of equal sacrifice may be traced back to Mill (1848), who stated: “Equality of taxation, as a maxim of politics, means equality of sacrifice”. In this exercise η can also be interpreted as a measure of society’s aversion to inequality.

Algebraically, the principle of equal-sacrifice implies that for all income levels Y the following equation must hold:

$$ U\left( Y \right) - U\left( {Y - T\left( Y \right)} \right) = k $$
(4)

where k is a constant, Y is gross income, U is utility and T(Y) is the total tax liability according to the income tax schedule. Assuming an iso-elastic utility function, differentiating this expression with respect to Y and solving for \( \eta \) yields (see e.g. Evans 2004a):

$$ \eta = \frac{{\ln \left( {1 - MTR} \right)}}{{\ln \left( {1 - ATR} \right)}} $$
(5)

where T(Y)/Y is the average tax rate (ATR) and ∂T(Y)/∂Y is the marginal tax rate (MTR).
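For completeness, the algebra behind Eq. (5) is brief. With iso-elastic utility \( U\left( Y \right) = Y^{1 - \eta } /\left( {1 - \eta } \right) \), differentiating Eq. (4) with respect to Y gives

$$ Y^{ - \eta } - \left( {Y - T\left( Y \right)} \right)^{ - \eta } \left( {1 - MTR} \right) = 0 \quad \Rightarrow \quad 1 - MTR = \left( {\frac{Y - T\left( Y \right)}{Y}} \right)^{\eta } = \left( {1 - ATR} \right)^{\eta } , $$

and taking logarithms yields Eq. (5). For example, a marginal rate of 40% combined with an average rate of 25% (illustrative figures, not UK data) implies \( \eta = \ln \left( {0.6} \right)/\ln \left( {0.75} \right) \approx 1.78 \).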

Cowell and Gardiner (1999) argue that there is good reason to take seriously estimates derived from tax schedules: decisions on taxation have to be defended before an electorate, and the values implicit in them ought, therefore, to be applicable in other areas where distributional considerations are important, such as discounting or the determination of welfare weights. Such estimates have particular appeal if one is concerned about a possible difference between societal preferences and individual preferences. At the same time, however, there are concerns about whether a progressive income tax structure consistent with equal sacrifice would adversely impact work incentives (Spackman 2004). Furthermore, tests of the equal-sacrifice assumption are themselves impossible since they are necessarily conditional on a particular utility function.Footnote 16

Previous studies have used income tax schedules to estimate η in many different countries as well as at different income levels.Footnote 17 Our focus however, is on evidence for the UK where empirical estimates are available from Stern (1977), Cowell and Gardiner (1999), Evans and Sezer (2005) and Evans (2008).

We now provide updated estimates of η using the equal-sacrifice approach. Data taken from Her Majesty’s Revenue and Customs (HMRC) website consist of 134 observations, indexed by i, on all earnings liable to income taxation, including earnings arising from paid employment, self-employment, pensions and miscellaneous benefits. These observations are drawn from the tax years 2000–2001 through to 2009–2010, excluding the tax year 2008–2009 for which no data are available. Each tax year includes between 13 and 17 earnings categories. Together, these span almost the entire earnings distribution, from earnings only slightly in excess of the tax allowance up to earnings of £1,940,000. Also included (and critical for our purposes) is the number of individuals in each earnings category.

For mean earnings in each earnings category we calculate the ATR and the MTR using the online tax calculator http://listentotaxman.com/. The tax calculator separately identifies income tax and employee National Insurance Contributions (NICs). As with most previous papers the data generated assumes a single individual with no dependents or special circumstances (e.g. registered blind or repaying a student loan).Footnote 18

Despite the (in our view fundamental) need to weight the observations according to the number of individuals in each earnings category, this step appears to have been largely overlooked in the literature (although see Stern 1977). The ‘importance’ weights we employ for this purpose are the number of individuals in each earnings category divided by the total number of individuals in the sample.Footnote 19 Hence, although our sample contains different numbers of observations in different tax years, each tax year receives equal weight. In what follows we demonstrate that employing weighted and un-weighted data yields different estimates of η, but only the weighted data yield an estimate of η which is representative of the population of income tax payers.

We also address the issue of whether or not to include NICs. Evans (2005) argues against their inclusion on the grounds that: “An income tax-only model seems more in keeping with the underlying theory concerning equal absolute sacrifice”. By contrast, Reed and Dixon (2005) find that there is no operational difference between them, arguing that NICs are “increasingly cast as a surrogate income tax”. These views are echoed by Adam and Loutzenhiser (2007), who survey the literature concerned with combining NICs and income tax and assert that: “NICs and national insurance expenditure proceed on essentially independent paths”. Our view is that, whilst historically NICs embodied a contributory principle, this linkage has now all but disappeared, the key exception being the entitlement to a full state pension.Footnote 20 Nevertheless, in what follows we examine the sensitivity of estimates of η to omitting NICs from the calculations.

Table 1 contains regression estimates for η obtained from regressing ln(1 − MTRi) against ln(1 − ATRi) with the constant term suppressed. OLS estimates from two models are reported: Model 1 is based on the un-weighted data and Model 2 on the weighted data. As discussed, the weights refer to the proportion of individuals contained in each particular income category in any given year. The un-weighted and weighted regression results are very different, emphasising the importance of weighting the data to ensure that it is representative of the underlying population. But irrespective of whether the regression is weighted, if the constant-elasticity and equal-sacrifice assumptions are correct, the hypothesis that η is equal to unity can be rejected at the 1% level. For Model 2, which is the preferred model, the estimate of η is 1.515 with a standard error of 0.047. Table 2 presents two further OLS regression results, this time excluding NICs. Once more there is a large difference between the results for the weighted (Model 3) and un-weighted (Model 4) regressions.

Table 1 Constant η OLS regression estimates
Table 2 Constant η OLS regression estimates excluding NICs
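The weighted regression underlying Tables 1 and 2 can be sketched as follows. The tax rates and taxpayer counts below are synthetic placeholders rather than the HMRC data; the point is simply to show the constant-suppressed regression of ln(1 − MTR) on ln(1 − ATR) with and without importance weights.

```python
# Sketch: estimating eta from Eq. (5) by regression through the origin of
# ln(1 - MTR) on ln(1 - ATR), unweighted and weighted. The arrays below are
# synthetic placeholders, not the HMRC observations used for Tables 1 and 2.
import numpy as np
import statsmodels.api as sm

mtr = np.array([0.22, 0.32, 0.42, 0.47, 0.52])       # marginal tax rates (placeholders)
atr = np.array([0.15, 0.21, 0.28, 0.33, 0.38])       # average tax rates (placeholders)
taxpayers = np.array([5e6, 8e6, 4e6, 1e6, 2e5])      # taxpayers per earnings category

y = np.log(1.0 - mtr)                 # dependent variable
x = np.log(1.0 - atr)                 # single regressor, constant suppressed
w = taxpayers / taxpayers.sum()       # 'importance' weights

unweighted = sm.OLS(y, x).fit()
weighted = sm.WLS(y, x, weights=w).fit()
print("unweighted eta:", unweighted.params[0], "s.e.:", unweighted.bse[0])
print("weighted eta:  ", weighted.params[0], "s.e.:", weighted.bse[0])
```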

Whilst these calculations make use of recently published data on income tax liability at different points on the income distribution, we can obviously go much further back in time if we instead confine ourselves to estimating η for a person earning the average production wage (APW) and ignore other points on the income distribution.

This alternative historical analysis is arguably more relevant for long-term project appraisal. We use UK data drawn from the EuropTax database compiled by Lynch and Weingarten (2010). This database provides income tax schedules with options for evaluating the incidence of income tax on various types of household. For the purposes of this analysis, however, we continue to consider a single-person household. Note that NICs are also included in the analysis.

Figure 1 displays the MTR and ATR measured at the APW, along with the implied estimate of η. Over the period 1948–2007 socially-revealed estimates of η, measured at the APW, declined significantly in the immediate aftermath of WWII before stabilising at a lower level. This decline is caused by a narrowing of the gap between the MTR and the ATR. The average value of η over the entire time period is 1.45 with a 95% confidence interval of 1.38–1.51. It may also be appropriate to take into account the time series properties of the data, such as the persistence that underlying political processes might introduce (Cowell and Gardiner 1999). Modelled as a simple AR(1) process, the series reverts to a mean of 1.57 with a 95% confidence interval of 1.09–2.07. The estimate of η obtained using historical data is therefore statistically no different from the figure of 1.515 obtained earlier using more recent, more detailed data, despite its fluctuations over time.

Fig. 1 MTRs and ATRs and the implied value of η in the United Kingdom 1948–2005

4 Life-Cycle Behavioural Models: The Euler-Equation Approach

The preceding section sought to derive estimates of η by observing societal choices. Estimates of η can, however, also be derived from individual households’ observed saving decisions. More specifically, in the life-cycle model of household behaviour the household is viewed as allocating its consumption over different time periods in order to maximise a multi-period discounted utility function subject to an intertemporal wealth constraint. Consumption decisions are affected by the rate of interest, and households attempt to smooth consumption over time according to (a) the extent to which deferred consumption is less costly than immediate consumption and (b) the curvature of the utility function. To some, e.g. Pearce and Ulph (1995), this is the preferred ‘gold-standard’ method of estimating η since it avoids the untestable equal-sacrifice assumption and is thought to be conceptually closer to the dynamic context of social discounting than the equal-sacrifice approach, which relates to intra-generational inequality aversion.

In the life-cycle model, estimates of η are derived from the so-called Euler-equation, although in the macroeconomics literature this information is normally presented in terms of the elasticity of intertemporal substitution (EIS), which is equal to 1/η. At the household level, when the EIS is high households readily reallocate consumption in response to changes in the interest rate and are less concerned about consumption smoothing. In the context of social discounting and the Ramsey formula, a high value of η combined with a positive growth rate will lead to a higher social discount rate, other things equal. In this context η measures inter-temporal inequality aversion.

Using the Taylor series approximation, the solution to the canonical household problem of maximising discounted utility subject to a wealth constraint leads to the following empirical specification:

$$ \Delta \ln \left( {C_{t} } \right) = \alpha + \beta r_{t} + \varepsilon_{t} $$
(6)

where \( r_{t} \) is the real interest rate, the coefficient β is the EIS, ε is an error term and the intercept α yields information on the value of δ.Footnote 21

The life-cycle model of household consumption behaviour rests on a large number of assumptions, probably the most important of which is the presumed existence of perfect capital markets allowing households to borrow and lend in an unrestricted fashion. Estimates of the EIS may be impacted by periods of financial turbulence and prone to change following financial deregulation. They may also depend on the definition of consumption, e.g. whether consumption includes the purchase of durable goods. Where analysts use aggregate rather than micro data, the question arises of how aggregate data accommodate changes in demographic composition and the changing ‘needs’ of households over the life-cycle. One might even question the convenient assumption that the intertemporal utility function is additively separable. For a recent meta-analysis of the empirical evidence on the EIS see Havranek (2015).

For the UK there appear to be 10 papers which provide estimates of the EIS.Footnote 22 Each of these papers typically provides more than one estimate, usually arising from attempts to assess the sensitivity of results to changes in estimation technique, minor changes in the specification and different periods of time. These papers include Kugler (1988), Attanasio and Weber (1989), Campbell and Mankiw (1991), Patterson and Pesaran (1992), Robertson and Scott (1993), Attanasio and Weber (1993), Blundell et al. (1994), Van Dalen (1995), Attanasio and Browning (1995), Berloffa (1997) and Yogo (2004).

These studies differ considerably in terms of their sophistication and employ both microeconomic and macroeconomic data. Perhaps the most important feature of the literature, however, is the finding of statistically significant parameter instability.Footnote 23 Such instability is unsurprising; the period 1970–1986 witnessed oil price shocks, record levels of inflation and an experiment with monetarism, and both the Building Societies Act and the Financial Services Act were passed by Parliament in 1986. It is hard to argue that these events did not have some impact on intertemporal consumption allocation decisions.

We update the empirical evidence on the EIS for the UK. Data for the Euler-equation approach are taken from the Office for National Statistics (ONS) and the Bank of England (BOE) websites. Quarterly data are available from 1975Q1 through to 2011Q1. We employ data which have not been seasonally adjusted. Domestic spending in both current prices and 2006 prices is available for durable goods, semi-durable goods, non-durable goods and services. Following convention, we omit durable goods and form a price index \( P_{t} \) for all other goods and services using the share-weighted geometric mean of the price series for semi-durable goods, non-durable goods and services. Henceforth we refer to this as ‘consumption’. Consumption is measured in constant 2006 prices. For the real interest rate \( r_{t} \) we take the official Bank of England base rate minus the rate of inflation implied by this price index. A quarterly series for population is created from mid-year population estimates using linear interpolation.Footnote 24 C is per capita consumption.

Table 3 displays an OLS regression of the per capita consumption growth rate Δln(Ct) against a constant term, \( r_{t} \) and three dummy variables, DUMQ1, DUMQ2 and DUMQ3, to account for seasonal effects. The regression displays no evidence of heteroscedasticity or autocorrelation and the test for functional form is significant only at the 10% level. In Model 2 we interact the real rate of interest with a dummy variable, DUM1993Q2, which takes the value unity for observations from 1993Q2 onwards. The purpose of including this dummy variable, which divides the sample in two, is to conduct a Chow test for parameter stability. An F-test cannot reject the hypothesis of parameter stability, i.e. that the coefficients on the dummy variable and the interacted term are simultaneously zero (p value = 0.935).Footnote 25 This is welcome but surprising in light of the problems encountered by earlier analyses. There is nevertheless some ambiguity concerning the RESET test, which is significant at the 5% but not the 1% level.

Table 3 OLS estimates of the Euler equation

It is typical in the literature to address the endogeneity of the explanatory variables by using an instrumental variables approach. Table A.1 in the Appendix provides the results of this exercise, where lagged values of the consumption growth rate, real interest rate and inflation rate are used as instruments. The results of this approach are essentially identical to the OLS results in Table 3, and tests for endogeneity also point to the reliability of the OLS results. For this reason, all further analysis uses the results in Table 3.

To obtain an estimate of η (the inverse of the coefficient on the real rate of interest, \( r_{t} \)) we use bootstrap techniques with 1000 replications. This procedure results in an estimate for η of 1.584 with a 95% confidence interval of 1.181–1.987.Footnote 26
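The estimation and bootstrap steps can be sketched as follows. The consumption-growth and interest-rate series below are simulated placeholders rather than the ONS and Bank of England data, and the seasonal dummies of Table 3 are omitted for brevity.

```python
# Sketch: Euler-equation estimate of eta = 1/beta with a pairs bootstrap.
# The series below are simulated placeholders, not the ONS/Bank of England data,
# and the seasonal dummies of Table 3 are omitted for brevity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 145                                              # roughly 1975Q1-2011Q1
r = rng.normal(0.005, 0.01, n)                       # placeholder quarterly real interest rate
dlnc = 0.004 + 0.6 * r + rng.normal(0.0, 0.01, n)    # placeholder consumption growth

beta = sm.OLS(dlnc, sm.add_constant(r)).fit().params[1]   # EIS estimate
eta_hat = 1.0 / beta

draws = []
for _ in range(1000):
    idx = rng.integers(0, n, n)                      # resample (dlnc, r) pairs
    b = sm.OLS(dlnc[idx], sm.add_constant(r[idx])).fit().params[1]
    draws.append(1.0 / b)
print(eta_hat, np.percentile(draws, [2.5, 97.5]))
```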

5 Additive Preferences and the Frisch Formula

A third, and possibly the oldest, technique for estimating η moves away from the context of inequality aversion and focuses on the way in which marginal utility changes as a result of the consumption of particular goods. The Frisch formula relies on the presumed existence of additive preferences (elsewhere this property is referred to as ‘strong separability’ or ‘wants independence’).Footnote 27 Additivity implies that the extra utility obtained from consuming additional units of the additively separable commodity is independent of the quantity consumed of any other commodity. Given additivity, all the information necessary for estimating η can be obtained by analysing the demand for the additively separable commodity. In some developing country contexts this might be the only feasible means of estimating η.

For goods that enter the utility function in an additive fashion it can be shown that the following relationship holds (Frisch 1959):

$$ \eta = \frac{{\kappa_{i} \left( {1 - w_{i} \kappa_{i} } \right)}}{{\varepsilon_{ii} }} $$
(7)

where η is as before, \( w_{i} \) is the budget share, \( \kappa_{i} \) is the income elasticity of demand and \( \varepsilon_{ii} \) is the compensated own-price elasticity of demand for good i. A derivation of this relationship is provided in the Appendix.

Evans (2008) provides a review of estimates of η obtained using this technique.Footnote 28 The studies therein are based on the demand for food, although a priori arguments for believing that food is additively separable are somewhat hard to find. Primary studies analyse the demand for food with the specific intention of estimating η. Secondary studies simply borrow estimates from existing studies of consumer demand, e.g. Blundell et al. (1993) and Banks et al. (1997).Footnote 29 Either way, in order to invoke the Frisch formula it has to be assumed that food is additively separable.

So far as the United Kingdom is concerned, there seem to be three studies of the demand for food undertaken for the purpose of estimating η: Evans and Sezer (2002), Evans (2004b) and Evans et al. (2005).

The obvious limitation of all these studies is the absence of any simultaneous evidence suggesting that food is additively separable.Footnote 30 But if food is not additively separable from other commodities then the Frisch formula does not apply. Furthermore, it is important to avoid attaching too much significance to the fact that the resulting estimates appear to take ‘plausible’ values since, as Deaton and Muellbauer (1980) note, even with the failure of additivity it is not surprising that estimates of η should fall into the range 1–3. This is because η will always be estimated as approximately equal to the average ratio of uncompensated own-price elasticities to income elasticities. And with the typical level of aggregation adopted in consumer demand studies, an estimate of 1–3 is entirely plausible.

The approach taken here is to analyse the demand for food using the Rotterdam demand system whilst explicitly testing the validity of the restrictions associated with the additivity assumption.

The Rotterdam demand system has long served as a vehicle for testing the theoretical postulates of consumer demand theory. Furthermore, the Rotterdam system has been found comparable to the more modern AIDS system in terms of its ability to estimate the value of key parameters. For a recent discussion of the Rotterdam model, and evidence of its enduring appeal in terms of the number of papers published using it relative to other well-known demand systems, see Clements and Gao (2015).

Of particular interest to us is the fact that the Rotterdam system may be used to test and impose the restrictions associated with additivity in a relatively straightforward manner.Footnote 31 Once they have been imposed, an estimate of η, which is modelled as a constant, is directly available along with its associated standard error.Footnote 32

The Rotterdam system is defined by the equation:

$$ w_{i} d\ln \left( {Q_{i} } \right) = a_{i} + b_{i} d\ln \left( R \right) + \varSigma_{j} c_{ij} d\ln \left( {P_{j} } \right) $$
(8)

where

$$ d\ln \left( R \right) = d\ln \left( Y \right) - \varSigma_{i} w_{i} d\ln \left( {P_{i} } \right) $$
(9)

and \( w_{i} \) is the budget share, \( Q_{i} \) is the quantity and \( P_{i} \) is the price of commodity i, while Y is money income (total expenditure). The variable R can be interpreted as real income. Note the existence of an intercept allowing for autonomous changes in the demand for food. This system is then implemented using time series data with the following approximations:

$$ w_{i} = 0.5\left( {w_{it} + w_{it - 1} } \right) $$
(10)
$$ d\ln (Q_{i} ) = \ln (Q_{it} ) - \ln (Q_{it - 1} ) $$
(11)
$$ d\ln (P_{i} ) = \ln (P_{it} ) - \ln (P_{it - 1} ) $$
(12)
$$ d\ln (Y) = \ln (Y_{t} ) - \ln (Y_{t - 1} ) $$
(13)

Additivity involves imposing the following constraints on the substitution matrix:

$$ c_{ii} = \frac{1}{\eta }b_{i} (1 - b_{i} ) $$
(14)

and

$$ c_{ij} = - \frac{1}{\eta }b_{i} b_{j} $$
(15)

Consistent with previous research we analyse UK food and non-food commodity expenditures. Annual data are available from the ONS from 1964 to 2010. All variables are taken in per capita terms and prices are indexed such that the year 2006 = 100. We calculate the price of the non-food commodity by assuming that the logarithm of the implied price index for all household expenditure, P, is equal to the share-weighted sum of the logarithms of \( P_{F} \) and \( P_{N} \), where these represent the prices of the food and non-food commodities respectively.Footnote 33
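To make the constrained estimation concrete, the sketch below fits the food equation of a two-good Rotterdam system with the additivity restrictions of Eqs. (14)–(15) imposed, so that η enters directly as a parameter. The series are simulated placeholders rather than the ONS expenditure data, and only the single food equation is estimated rather than the full system.

```python
# Sketch: food equation of a two-good Rotterdam system with the additivity
# restrictions of Eqs. (14)-(15) imposed (so c_FF = b_F(1-b_F)/eta and
# c_FN = -b_F(1-b_F)/eta), estimated by non-linear least squares.
# All series are simulated placeholders, not the ONS expenditure data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
T = 46                                          # roughly annual data, 1965-2010
dlnR = rng.normal(0.02, 0.02, T)                # Divisia real-income growth, Eq. (9)
dlnPF = rng.normal(0.03, 0.03, T)               # food price log-change
dlnPN = rng.normal(0.03, 0.03, T)               # non-food price log-change

def food_equation(X, aF, bF, eta):
    """Constrained Eq. (8) for food under additivity."""
    dlnR, dlnPF, dlnPN = X
    return aF + bF * dlnR + (bF * (1.0 - bF) / eta) * (dlnPF - dlnPN)

# Placeholder left-hand side, i.e. (average food share) x dln(Q_F), simulated
# from the constrained model with a 'true' eta of -3.5 plus noise.
lhs = food_equation((dlnR, dlnPF, dlnPN), 0.0, 0.4, -3.5) + rng.normal(0.0, 0.002, T)

params, cov = curve_fit(food_equation, (dlnR, dlnPF, dlnPN), lhs, p0=[0.0, 0.3, -2.0])
print("eta:", params[2], "s.e.:", np.sqrt(np.diag(cov))[2])
```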

The results from the econometric analysis are displayed in Table 4. Model 1 does not impose additivity. The estimate of the income elasticity of demand for food is 0.393 with an average budget share for food of 0.147. The uncompensated own-price elasticity is − 0.164 whilst the compensated own-price elasticity of demand is − 0.106. Turning to Model 2, the restrictions associated with additivity are imposed and, surprisingly perhaps, these are accepted even at the 10% level of significance, as shown by a Likelihood Ratio test (χ²[1] = 0.0105). The estimate for η of − 3.57 is, however, disappointingly imprecise, having a standard error of 2.188. For reasons of precision, the Frisch approach appears not to be a useful methodology, at least in the UK context.

Table 4 Non-linear least squares estimates of the Rotterdam system
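As a rough cross-check on the constrained estimate, plugging the unconstrained Model 1 values into Eq. (7) gives a similar figure:

$$ \eta \approx \frac{{0.393\left( {1 - 0.147 \times 0.393} \right)}}{{ - 0.106}} \approx - 3.5 , $$

close to the − 3.57 obtained when the additivity restrictions are imposed directly.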

6 Measuring the Coefficient of Relative Risk Aversion Using Insurance Data

There are several ways of estimating η in the context of risk without resorting to experimental approaches. In this context, however, η is usually referred to as the coefficient of relative risk aversion. These methodologies were recently reviewed by Outreville (2014). One approach is to consider investors’ demand for risky assets, e.g. Friend and Blume (1975). Here, however, we obtain an estimate of η in its guise as the coefficient of relative risk aversion using data on the demand for insurance and wealth, based upon the work of Szpiro (1986a). The attractions of this approach are its simplicity and the ubiquity of insurance.

Szpiro (1986a) demonstrates that the following relationship holds:

$$ I = W - \frac{\lambda }{a(W)}, $$
(16)

where I is the amount of insurance, W is wealth, λ is insurance loading defined as (premiums − claims)/claims and a(W) is absolute risk aversion. Denoting the coefficient of relative risk aversion by η(W) yields the relationship: \( a(W) = \eta (W)/W \). Substituting this expression gives:

$$ I = W - \frac{\lambda W}{\eta } $$
(17)

under the assumption of a constant coefficient of relative risk aversion (independent of wealth). Given that the amount of insurance I is not observable, data on claims Q can be used on the assumption that the probability of loss q is constant since: \( I = Q/q \). Making this substitution gives:

$$ Q = qW - \frac{q}{\eta }\lambda W $$
(18)

The coefficient on W is expected to be positive since q represents the probability of a claim, whereas the coefficient on λW is expected to be negative since the quantity of insurance declines with insurance loading. Thus η is expected to be positive.Footnote 34

In order to produce estimates of the coefficient of relative risk aversion for the United States, as well as to test whether or not it is constant, Szpiro uses aggregate time series data on total wealth comprising the wealth of households, non-profit organisations, the Government and the net foreign balance. The insurance data he uses relate to property and liability insurance. With these data Szpiro determines that the coefficient of relative risk aversion is constant with respect to wealth and takes a value between 1 and 2.

In order to estimate the coefficient of relative risk aversion for the United Kingdom using this technique we combine data on wealth from the ONS with data on non-health non-life insurance premiums and claims for domestic risks from the European insurance industry database.Footnote 35

The regression displayed in Table 5 is estimated using OLS. Like Szpiro (1986b) and Szpiro and Outreville (1988), our analysis assumes that the claims rate is constant. We argue that the premium rate cannot be constant if premiums are subject to taxes which vary over time, as was indeed the case in the United Kingdom. The resulting estimate of the coefficient of relative risk aversion, obtained by dividing the first coefficient in Table 5 by the negative of the second, is 2.19.Footnote 36 Using the delta method we test the hypothesis that the value of this parameter is equal to unity. The hypothesis can be rejected with a p value of 0.001. Once more it seems that the value of η might be in excess of unity.

Table 5 Estimates of the coefficient of relative risk aversion for the UK using insurance data
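The regression and the delta-method test can be sketched as follows. The wealth, loading and claims series below are simulated placeholders rather than the ONS and European insurance-industry data.

```python
# Sketch: Szpiro-style regression of Eq. (18), Q = q*W - (q/eta)*lambda*W,
# with a delta-method standard error for eta = -b1/b2 and a test of eta = 1.
# All series are simulated placeholders, not the ONS/insurance-industry data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 40
W = np.linspace(1.0, 3.0, T) * 1e3                      # placeholder aggregate wealth
lam = np.clip(rng.normal(0.3, 0.05, T), 0.1, 0.6)       # loading = (premiums - claims)/claims
q_true, eta_true = 0.01, 2.0
Q = q_true * W - (q_true / eta_true) * lam * W + rng.normal(0.0, 0.2, T)

X = np.column_stack([W, lam * W])                       # regressors: W and lambda*W, no intercept
res = sm.OLS(Q, X).fit()
b1, b2 = res.params
eta_hat = -b1 / b2                                      # = (coef on W) / (-(coef on lambda*W))

grad = np.array([-1.0 / b2, b1 / b2 ** 2])              # gradient of -b1/b2
se_eta = np.sqrt(grad @ res.cov_params() @ grad)
print(eta_hat, se_eta, (eta_hat - 1.0) / se_eta)        # estimate, s.e., z for H0: eta = 1
```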

7 Subjective Well-Being

Layard et al. (2008) present a method of estimating η based on surveys of subjective wellbeing, a technique regarded as distinct from stated preference approaches. In such surveys individual respondents are invited to respond to questions such as:

All things considered how satisfied (or happy) are you on a 1-10 scale where 10 represents the maximum possible level of satisfaction and 1 the lowest level of satisfaction?

Life satisfaction is taken as being synonymous with utility and it is assumed that survey respondents are able accurately to map their utility onto an integer scale:

$$ S_{i} = h_{i} (U_{i} ) $$
(19)

where \( S_{i} \) is the reported satisfaction of individual i and \( h_{i} \) describes a monotonic function used by individual i to convert utility \( U_{i} \) to reported S. It is further necessary to assume that all survey respondents use a common function h to convert utility to reported S: \( h_{i} = h\;\forall i \).

The functional relationship h between S and U determines the appropriate estimation technique. The least restrictive approach is to assume only an ordinal association between reported life satisfaction and utility; if an individual reports an 8, one ought merely to assume that they are more satisfied than if they had reported a 7. This entails use of the ordered logit model. Analysing such data using ordinary least squares, by contrast, assumes a linear association between the utility of each respondent and their reported life satisfaction.
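One simple way to illustrate the estimation problem is a grid search over η under the OLS variant just described: transform income with an iso-elastic utility function, regress reported satisfaction on the transformed income, and select the η that fits best. The sketch below does this on simulated placeholder data; it is an illustration under our own simplifying assumptions, not a reconstruction of the specification used by Layard et al. (2008).

```python
# Sketch: grid (profile) estimate of eta from life-satisfaction data, assuming
# S_i = a + b*u(Y_i) + e_i with iso-elastic u(Y). Simulated placeholder data;
# this is an illustration, not the specification used by Layard et al. (2008).
import numpy as np

rng = np.random.default_rng(3)
n = 5000
income = rng.lognormal(mean=10.0, sigma=0.5, size=n)     # placeholder annual incomes

def crra(y, eta):
    return np.log(y) if abs(eta - 1.0) < 1e-9 else (y ** (1.0 - eta) - 1.0) / (1.0 - eta)

# Simulate 1-10 satisfaction scores increasing in u(Y) at a 'true' eta of 1.3
u_true = crra(income, 1.3)
latent = 5.0 + 1.5 * (u_true - u_true.mean()) / u_true.std() + rng.normal(0.0, 1.0, n)
satisfaction = np.clip(np.round(latent), 1, 10)

def ssr_at(eta):
    u = crra(income, eta)
    X = np.column_stack([np.ones(n), u])
    _, residuals, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
    return residuals[0]

grid = np.arange(0.2, 3.01, 0.05)
eta_hat = grid[int(np.argmin([ssr_at(e) for e in grid]))]
print("profile estimate of eta:", eta_hat)
```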

Layard et al. (2008) analyse six separate surveys variously containing questions on happiness and life satisfaction. The estimates for η remain surprisingly consistent across datasets and are robust to different estimation techniques. Layard et al. (2008) nevertheless acknowledge that income reported in household surveys may contain measurement error (especially when respondents are required only to identify a range within which their income falls rather than the exact value). The estimate of η for the British Household Panel Survey is 1.32 with a confidence interval of 0.99–1.65, implying a standard error of 0.168 (these estimates are taken from the slightly less restrictive ordered logit model). Here, then, we cannot quite reject the hypothesis that η = 1.

8 Meta-Analysis of η

The preceding sections investigated five alternative methodologies for estimating η. We reviewed the evidence and, in the case of four methodologies (the demand for insurance, the Frisch additive preferences approach, the Euler-equation approach and the equal-sacrifice approach), generated more up-to-date estimates along with their associated standard errors. The final methodology utilised data on subjective wellbeing; here we merely identified a single estimate for Britain provided by Layard et al. (2008). These five (sets of) estimates and their associated standard errors are summarised in Table 6. Here we use meta-analysis to combine these estimates to obtain a single ‘best’ estimate. Two different ways of combining these estimates are employed: a pooled fixed effects estimator and a random effects estimator. The difference between these two estimators is that, whereas the former assumes that studies differ only because of random sampling, the latter acknowledges probable differences between studies and attempts to estimate the mean of the distribution of all possible studies. Both estimators use information on the standard errors of the component estimates.Footnote 37

Table 6 Meta-analysis of estimates of η
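The two pooling rules can be written down in a few lines. In the sketch below the estimate/standard-error pairs are placeholders loosely based on values quoted in the text rather than the exact entries of Table 6, and the random-effects estimator shown is the standard DerSimonian–Laird variant, used here purely for illustration.

```python
# Sketch: inverse-variance fixed-effects pooling and a DerSimonian-Laird
# random-effects pooling of the method-specific estimates of eta.
# The inputs are placeholders loosely based on values quoted in the text,
# not the exact entries of Table 6; some standard errors are guesses.
import numpy as np

est = np.array([1.515, 1.584, 1.32, 1.45, 2.19])   # equal sacrifice, Euler, SWB, historical, insurance
se = np.array([0.047, 0.206, 0.168, 0.033, 0.350]) # partly inferred from quoted CIs, partly guessed

def fixed_effects(est, se):
    w = 1.0 / se ** 2
    return np.sum(w * est) / np.sum(w), np.sqrt(1.0 / np.sum(w))

def random_effects_dl(est, se):
    w = 1.0 / se ** 2
    mu_fe, _ = fixed_effects(est, se)
    q = np.sum(w * (est - mu_fe) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(est) - 1)) / c)      # between-method variance
    w_star = 1.0 / (se ** 2 + tau2)
    return np.sum(w_star * est) / np.sum(w_star), np.sqrt(1.0 / np.sum(w_star))

print("fixed effects:  ", fixed_effects(est, se))
print("random effects: ", random_effects_dl(est, se))
```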

The pooled fixed effects estimator for η is 1.528 with a 95% confidence interval of 1.443–1.613, whereas the pooled random effects estimator is 1.594 with a confidence interval of 1.362–1.827. Both pooled estimators clearly exclude the current Green Book value of unity (and they continue to do so even using 99% confidence intervals). Furthermore, the hypothesis of parameter homogeneity cannot be rejected. This latter finding is of interest because our estimates arise out of very different settings, whereas others, e.g. Atkinson et al. (2009), have suggested that the value of η might differ depending on whether it reflects risk, the inter-temporal allocation of consumption or societal inequality aversion. If the weighted equal sacrifice study is excluded, the fixed (random) effects pooled estimate of the remaining studies rises to 1.600 (1.671) with a 95% confidence interval of 1.382–1.818 (1.273–2.069). If the historical equal sacrifice study is omitted, the fixed (random) effects pooled estimate of the remaining studies is essentially unchanged at 1.528 (1.603) with a 95% confidence interval of 1.442–1.613 (1.345–1.861). If the additive preferences-based estimate is omitted, the fixed (random) effects pooled estimate of the remaining studies is 1.527 (1.589) with a 95% confidence interval of 1.442–1.612 (1.335–1.824). If the insurance-based estimate is omitted, the fixed effects pooled estimate is 1.506 with a 95% confidence interval of 1.420–1.592 (in this case the random effects estimator gives the same results as the fixed effects estimator).

Using a convenience sample, the stated preference survey of Atkinson et al. (2009), by contrast, reports median estimates of the coefficient of relative risk aversion of between 3 and 5, estimates of inequality aversion of between 2 and 3, and a median estimate for aversion to inequality over time of 8.8.Footnote 38

9 Implications for the SDR

In this section we demonstrate the implications of our new estimate of η for the SRTP (and hence the SDR) for the UK using the simple and the extended Ramsey rules. The simple Ramsey rule requires an assumption about g, while the extended Ramsey rules require evidence on the historical variance and autocorrelation of g. For illustrative purposes we estimate the model of persistent growth derived from the model discussed above in footnote 14. This model collapses to a simple AR(1) when it is assumed that \( \varepsilon_{yt} \) and \( \varepsilon_{xt} \) are identical.Footnote 39 This is the same as assuming the following diffusion process:

$$ \ln (C_{t + 1} ) - \ln (C_{t} ) = \mu + y_{t} $$
(20)

where C is consumption and

$$ y_{t} = \varphi y_{t - 1} + \varepsilon_{yt} $$
(21)

and \( \varepsilon_{yt} \) is assumed to be distributed \( N(0,\sigma^{2}) \).
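Given values for μ, ϕ and the innovation variance, the implied term structure can be traced out directly. The sketch below uses illustrative parameter values (not the Table A2 estimates) and computes the horizon-t risk-free rate implied by Eqs. (20)–(21) under normal innovations; at t = 1 and as t → ∞ it reproduces the two limits of Eq. (3).

```python
# Sketch: term structure of risk-free SDRs implied by the AR(1) growth diffusion
# of Eqs. (20)-(21). With normal innovations the horizon-t rate is
#   r_t = delta + eta*mu - 0.5 * eta**2 * Var(ln C_t - ln C_0) / t,
# which reproduces the t = 1 and t -> infinity limits of Eq. (3).
# Parameter values are illustrative only, not our Table A2 estimates.
import numpy as np

delta, eta = 0.005, 1.5                   # pure time preference and elasticity (illustrative)
mu, phi, sigma = 0.015, 0.9, 0.01         # growth diffusion parameters (illustrative)

def sdr(t):
    # Var of sum_{s=1..t} y_s when y_0 = 0 and y_s = phi*y_{s-1} + eps_s
    weights = (1.0 - phi ** np.arange(t, 0, -1)) / (1.0 - phi)
    var_cum = sigma ** 2 * np.sum(weights ** 2)
    return delta + eta * mu - 0.5 * eta ** 2 * var_cum / t

for t in [1, 10, 30, 50, 100, 200, 300]:
    print(f"horizon {t:>3}: SDR = {100 * sdr(t):.2f}%")
```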

Concern about the long-run also raises questions about the appropriate period of data that should be used to determine the average growth rate. Once again we undertake a sensitivity analysis by distinguishing between Green Book growth assumptions and a longer run approach. The Green Book assumes that average growth is 2% based on an analysis of the period 1949–1998, yet consumption data are available for the 180 year period 1830–2009 in 2006 prices.Footnote 40

Table A2 in the Appendix presents the results of the AR(1) estimation on per capita consumption growth in the UK using the long and the short time series of growth data. Models 1 and 3 both use the raw per capita consumption growth data. Models 2 and 4 use smoothed data: a 5 year moving average (MA5). The smoothing removes cyclical and short-term fluctuations, which may not be appropriate when discounting distant time horizons (Newell and Pizer 2003; Muller and Watson 2016).Footnote 41

A comparison of Models 1 and 3 shows that average growth differs radically depending on which time period is analysed. Between 1949 and 1998, which is the period deemed relevant by the Green Book, per capita consumption has an annual growth rate of 2.3%. However, between 1830 and 2009 annual growth has been a mere 1.1%. This result is not affected by data smoothing. It is a moot point as to which period is more appropriate in the context of social discounting. Persistence, which is important for estimating the term structure of discount rates, differs depending on whether smoothed or unsmoothed data is used. This is to be expected and raises further empirical issues of consequence for estimating the term structure of discount rates.

In Table 7 we provide two alternative sets of estimates of the SDR. These include estimates for the SDR under certainty, with prudence, and in the long-run [corresponding to Eqs. (1), (2) and (3) respectively]. The first employs all the parameter assumptions of HMT (i.e. δ = 1.5% and g = 2%), except with η now equal to 1.5, in line with our empirical results. The parameters for prudence and the long-run (\( \sigma_{y\varepsilon }^{2} \) and ϕ) are first estimated over the period 1949–1998, the period used by the Green Book to estimate average growth. We then estimate these parameters using data from the 180 year period 1830–2009. In each case we report the results using unsmoothed and smoothed data.

Table 7 The SDR for the UK

Table 7 shows that, if one uses the value of η derived from the meta-analysis and combines it with the current assumptions concerning δ and HMT’s preferred estimate of g, one obtains a value for the SDR more than one percentage point higher than that recommended in the Green Book. As elsewhere (e.g. Gollier 2012), the prudence effect is very small due to the relatively low variance of g.

The main surprise, however, is that when the diffusion process of per capita consumption growth is explicitly accounted for, and hence the uncertainty and persistence of growth are taken into account, the resulting term structure of discount rates barely declines at all. This result holds irrespective of whether the data are smoothed or not. The short-horizon rates are at most only 0.5 percentage points higher than those appropriate for long horizons, and the long-horizon rates remain much higher than the 1% currently recommended for long horizons in the Green Book. More important from the perspective of discounting the distant future is the selection of the historical time series for per capita consumption growth: average growth is a full percentage point lower when the period 1830–2009 is considered rather than the current Green Book choice of 1949–1998.

The message of this application is that our estimate of the elasticity of marginal utility leads to an increase in the SDR currently recommended in the Green Book, all else equal. Also, the current HM Treasury guidance that declining discount rates should be used for long time horizons is seemingly not justified by our inspection of the time series properties of per capita consumption data. Care is needed though, since we have not undertaken an exercise in model selection for the diffusion process. Also, an SRTP in excess of 4% looks high for a risk-free rate, even when one removes what the Treasury Green Book refers to as the ‘catastrophic risk’ element contained in its overall estimate of δ, which is currently thought to be 1%. Of course, the high theoretical value for the risk-free rate is partly a manifestation of the risk-free rate puzzle (Weil 1989). The risk-free rate puzzle is a potential problem for the SRTP approach to discounting if one thinks that the SRTP should be a predictor of the market risk-free rate. This is not the view in the Green Book, however, whose approach to defining the SWF is normative, and not intended to have predictive power over interest rates. Nevertheless, the calibration does raise further issues about the validity of including catastrophic risk in the SDR, the treatment of systematic risk, and the potential for an SDR which adjusts to reflect current macroeconomic conditions. The theory of the term structure in Sect. 2 would recommend adjustments for lower or higher than expected growth, for instance.Footnote 42 Our recommendation would be that these other elements of the SDR (growth, project risk and pure time preference) should be revised along with the elasticity of marginal utility to arrive at an empirically justifiable SDR.

10 Conclusion

In the nomenclature of social discounting, this paper has taken a positive (revealed preference) approach to the estimation of the elasticity of marginal utility, rather than a normative approach (Arrow et al. 1996). Five separate empirical methods have been investigated, each of which uses revealed preference or subjective wellbeing data. The results suggest that the choice of conceptual and empirical approach to estimating the elasticity of marginal utility in the UK is not materially important. Where Atkinson et al. (2009) referred to these different concepts (risk, inter-temporal substitution, and inequality aversion) as empirical “siblings not triplets”, our results suggest a more accurate description might be “non-identical quintuplets”. It is possible that the variety of estimates that Atkinson et al. (2009) obtained arose due to framing effects in their hypothetical scenarios, the absence of which is a benefit of revealed preference. Among other benefits, this result eliminates at least some of the potential for disagreement among experts on this matter, allowing a focus on other essential issues. That said, we see considerable benefit in attempting to replicate these results using data from other countries. We also see considerable benefit in incorporating stated preference evidence on \( \eta \) obtained from representative surveys. Moreover, whether revealed preference data, based on private decisions, are always an appropriate source of information for the Social Discount Rate remains a matter of debate, particularly when it comes to intergenerational project appraisal.

The meta-analysis of the various estimates suggests that a value for η of 1.5 is defensible for the UK, with confidence intervals which exclude unity, the value currently contained in the Green Book and preferred by the Stern Review (HMT 2003; Stern 2007). With no other changes the short-run SDR for the UK becomes 4.5%, up from the current value of 3.5%. Analysis of long-run consumption growth further shows that simple persistence in growth justifies a term structure which hardly declines at all, in contrast to the decline to 1% currently recommended for horizons of 300 years. The SDR should decline to no lower than around 2.5%, even when growth (g) is estimated over the period 1830–2009. Taken on their own, these findings would have significant implications for long-lived man-made assets such as flood defences, biodiversity conservation, slow-growing natural resources such as forests, and natural capital valuation more generally.

Yet the analysis gives rise to questions about the values of the other parameters of the SDR, since 4.5% is large for what is essentially a risk-free SRTP. This observation places focus on the other parameters of the SDR in the UK context, and also raises the issue of how catastrophic and project risk should be dealt with in a discounting context. The idea that there is a single discount rate for public policy analysis is questionable since projects have different risk profiles, and recent work refutes the relevance of the risk-free rate for public project appraisal (Baumstark and Gollier 2013). Ultimately, a unilateral change in the value of the elasticity of marginal utility should not happen without these other issues being dealt with in parallel. Other areas for consideration in this regard include judicious estimates of the expected level and diffusion of growth in the twenty-first century.

In conclusion, we have used several revealed preference methods to estimate the elasticity of marginal utility in this paper, each with its own interpretation. Ultimately, no estimate of η is without criticism, yet assuming one wishes to take a positive approach the practical message of this paper is clear: in the case of the UK, the choice of method makes very little difference to the final policy recommendation of raising the value of η from 1 to 1.5 as part of a wholesale review of the SRTP. Currently, HM Treasury’s calibration arguably does not reflect socially revealed preferences. This has implications for how policies and projects are appraised across government and how public resources are allocated over time.