
Cost Benefit Analysis

  • Jean-Michel Josselin
  • Benoît Le Maux

Abstract

Cost benefit analysis compares policy strategies not only on the basis of their financial flows but also on the basis of their overall impact, be it economic, social, or environmental (Sect. 9.1). To do so, the approach relies on the concepts of Pareto-optimality and Kaldor-Hicks compensation to assess whether public programs are globally beneficial or detrimental to society's welfare (Sect. 9.2). All effects must be expressed in terms of their equivalent money values and discounted based on how society values the well-being of future generations (Sect. 9.3). Using conversion factors, the cost benefit analysis methodology also ensures that the prices of inputs and outputs used in the analysis reflect their true economic values (Sect. 9.4). Last, the analysis ends with a deterministic or probabilistic sensitivity analysis which examines how the conclusions of the study change with variations in cash flows, assumptions, or the manner in which the evaluation is set up (Sects. 9.5 and 9.6). One may also employ a mean-variance analysis to compare the performance and risk of each competing strategy (Sect. 9.7). Applications and examples, provided as spreadsheets and R-CRAN scripts, will enable the reader to put the methodology into practice.

Keywords

Pareto-optimality · Discounting · Conversion factor · Sensitivity analysis · Mean-variance analysis

9.1 Rationale for Cost Benefit Analysis

The aim of cost benefit analysis is to determine whether a public policy choice is globally beneficial or detrimental to society's welfare. The method allows several projects to be compared not only on the basis of their financial flows (investments and operating costs) but also on the basis of their economic effects (variation in the consumption of public services, change in the satisfaction derived from them). For instance, the construction of a highway not only modifies the financial flows of the authority concerned; it also affects traffic congestion, reduces travel time, improves safety, and positively or negatively influences real estate values in the areas served by the project. All these consequences must be properly accounted for. To do so, cost benefit analysis expresses these impacts in terms of a common metric, their equivalent money value. The method then sets the financial flows against the economic impacts to measure the total contribution of the project to the well-being of society.

Cost benefit analysis is primarily used for assessing the value of large capital investment projects (transportation infrastructures, public facilities, recreational sites) but can be employed for appraising smaller projects, public regulations or a change in operating expenditures, especially where other methods fail to account for welfare improvements. It became popular in the 1960s with the growing concern over the quality of the environment, not only in the USA but also in many other countries. Nowadays, institutions such as the European Commission, the European Investment Bank or the World Bank require cost benefit analysis studies for extra funding and loan requests. In the USA, the approach is frequently used when a public policy choice imposes significant costs and economic impacts. In Europe, more and more governments use the approach to justify a particular policy to taxpayers. This was for instance the case with HS2, a controversial high-speed rail link between London, Birmingham and Manchester.

Cost benefit analysis is concerned with two types of economic consequences. First, a project produces “direct effects” affecting the welfare of those who primarily benefit from the public service. They are related to the main objectives of the program. Table 9.1 provides several examples. They include for instance the satisfaction from using recreational parks or the time saved by projects in the transport sector. Assessing the direct contribution of a public policy to society’s welfare is at the heart of cost benefit analysis. Second, a project may generate “negative and positive externalities”, so-called “external effects”. Those are costs and benefits that manifest themselves beyond the primary objectives of the program. They appear when a policy affects the consumption and production opportunities of third parties. Typical examples are deterioration of landscape (negative externality) and economic development (positive externality), as described in Table 9.1. Given their potential impact on society’s welfare, externalities are also monetized, including social and environmental effects.
Table 9.1 Examples of economic impacts

| Type of project | Direct benefits | Positive externalities | Negative externalities | Price distortions |
| --- | --- | --- | --- | --- |
| Rail investment | Time savings, additional capacity, increased reliability | Economic development, reduced negative externalities from other transport modes | Aesthetic and landscape impacts, impacts on human health | Opportunity costs of raw materials due to their diversion from the best alternative use; land made available free of charge by a public body while it could earn a rent; tariffs subsidized by the public sector; labor market distortions due to minimum wages or unemployment benefits |
| Waste treatment | Treatment of waste which minimizes impacts on human health | Environmental benefits | Aesthetic and landscape impacts, other impacts on human health, increase in local traffic for the transport of waste | |
| Production of electricity from renewable energy sources | Reduction in greenhouse gases | Amount of fossil fuels or of other non-renewable energy sources saved | Aesthetic and landscape impacts, negative effects on air, water and land | |
| Telecommunication infrastructures | Time saved for each communication, new additional services | Economic development induced by the project | Aesthetic and landscape impacts, impacts on human health | |
| Parks and forests | Recreational benefits, utilization and transformation of wood | Improvement of the countryside, environmental protection, increased income for the tourist sector | Increased traffic | |

The cost benefit analysis methodology also ensures that the prices of inputs and outputs used in the analysis reflect their true economic values, and not only the values observed in existing markets. Government intervention may divert the factors of production (land, labor, capital, entrepreneurial skill) from other productive uses. This is particularly true where markets are distorted by regulated prices or where taxes or subsidies are imposed on imports or exports (see Table 9.1). For instance, land made available for free by a public body generates an implicit cost to taxpayers: the opportunity cost of not renting the land to another entity. A tax on imports which affects the price of inputs also has to be deducted to better reflect the true costs to taxpayers. To account for these distortions, cost benefit analysis makes use of what are called conversion factors. They represent the weights by which market prices have to be multiplied to obtain cash flows valued at their true price from society's point of view.

Public policies involve many objectives, concern different beneficiary groups and differ with respect to their time horizon. The cost benefit analysis methodology has the advantage of simplifying the multidimensionality of the problem by calculating a monetary value for every main benefit and cost. To make those items fully comparable, the approach converts all the economic and financial flows observed at different times to present-day values. This approach, known as discounting, is essential to cost benefit analysis. It enables projects to be evaluated based on how society values the well-being of future generations. The idea is that a benefit, or a cost, is worth more when it has an immediate impact. As a consequence, activities imposing large costs or benefits on future generations may appear of lesser importance. The use of an appropriate discount factor, which weights the time periods, is here decisive.

Despite its popularity among many government agencies and the fact that cost benefit analysis provides an all-encompassing tool for evaluating public policies, the approach has been intensely criticized. One of its major drawbacks is that it often struggles to put monetary values on items such as aesthetic landscapes, human health, economic growth, environmental quality, time or even life. The core of the problem is that public projects mostly affect the value of goods that are not bought and sold on regular markets (non-market goods). For this reason, variations in society's welfare cannot be valued through the usual observation of market prices. One has to rely instead on non-market valuation methods, such as revealed preference methods or stated preference techniques. Obtaining accurate estimates of costs and benefits is all the more challenging as many factors may affect the conclusions of the study, such as the time horizon, whether people have prior experience of using the service, or whether they have perfect information about the consequences of the project. To overcome these issues, a sound cost benefit analysis generally ends with a sensitivity analysis which examines how the conclusions of the study change with variations in cash flows, assumptions, or the manner in which the evaluation is set up. This sensitivity analysis can be implemented in a deterministic way. In this case, the uncertainty of the project is described through simple assumptions, for instance by assuming lower and upper bounds on economic flows. The sensitivity analysis can also be probabilistic. Each economic flow is assigned a random generator based on a well-defined probability distribution. A mean-variance analysis then helps the decision-maker fully compare the risk and performance of each competing strategy.

The remainder of the chapter is organized as follows. Section 9.2 details the theoretical foundations of the method. Section 9.3 provides a tutorial describing the discounting approach. Section 9.4 explains how to convert market prices to economic prices. Sections 9.5 and 9.6 show how to implement a deterministic and probabilistic sensitivity analysis. Section 9.7 finally explains the methodology of mean-variance analysis.

9.2 Conceptual Foundations

Government intervention affects the satisfaction of the agents composing society in many different ways. While some agents will benefit from a change in the public good-tax structure, other agents can be worse off. An investment decision may also generate positive and negative externalities that affect the well-being of particular agents in specific areas. Due to the presence of government-imposed regulations, project cash flows may be distorted and should be valued at their opportunity costs. Cost benefit analysis captures all these economic consequences by expressing them in terms of a common currency, money. How is that possible? How can a change in society's welfare be expressed as a dollar value? The approach is actually based on the observation that individuals are often willing to pay more for a good than the price they are charged for it. For instance, if one is willing to spend $10 at most for a service, but pays only $2, one obtains a surplus of $8. This is what cost benefit analysis aims at measuring. If there is a public policy choice for which the net benefits are greater than those of the competing strategies, then society should go ahead with it.

More specifically, cost benefit analysis relies on estimating what is called the economic surplus, a measure of welfare that we find in microeconomics, a branch of economics that studies the behavior of agents at an individual level. Under this framework, the surplus is computed as the sum of two elements: (1) the consumer surplus, measured as the monetary gain obtained by agents being able to purchase a good for a price that is less than the highest price they are willing to pay, and (2) the producer surplus, defined as the difference between the price producers would be willing to supply a good for and the price actually received. While a reasonable measure of benefit to a producer is the net profit, the task is much more difficult with respect to the consumer surplus. One has to identify here the demand for the good. All the difficulty and ingenuity of cost benefit analysis lies there. This section provides a simple microeconomic framework serving to better describe the underlying assumptions.

Formally, let $u(x, z)$ denote the utility (or satisfaction) an agent derives from consuming a private good in quantity $x$ and a public good in quantity $z$. The public good is provided by the government and financed through taxes. The budget constraint of the agent is defined as $p_x x + br = y$, where $p_x$ denotes the price of the private good, $b$ is the tax base of the agent, $r$ stands for the tax rate chosen by the government and $y$ represents the agent's income. Although we could relax this assumption, the public budget is assumed to be balanced. We have $rB = cz$, where $B$ is the total tax base upon which the tax rate is applied, and $c$ denotes the marginal cost of production. Using $r = cz/B$ in the budget constraint of the agent, we obtain $p_x x + p_z z = y$, where $p_z = cb/B$ represents the (tax) price of the public good. The demand of the agent for the public good is then determined by the maximization of $u$ given this budget constraint:
$$ \underset{\{x,z\}}{\max}\; u\left(x,z\right)\quad \text{subject to}\quad p_x x+p_z z=y $$
To simplify the exposition, we set $p_x = 1$ and assume that the agent has quasi-linear preferences: $u(x, z) = x + v(z)$. Function $v$ is an increasing and concave function of $z$, while $x$ enters the utility function as a simple additive term. Solving the optimization problem by substitution yields:
$$ \underset{\{z\}}{\max}\; u\left(y-p_z z,\; z\right)\iff v^{\prime}(z)=p_z $$
The solution is obtained by taking the derivative of $y - p_z z + v(z)$ with respect to $z$. The derivative of $v$ represents the inverse demand function. The lower the price (for instance due to a decrease in $b$ or $c$), the higher the demand for the good, as illustrated in Fig. 9.1. This generalizes the usual "law of demand" to the case of a publicly provided good.
Fig. 9.1

Demand for public good and consumer surplus
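As a quick parametric illustration (the functional form is ours, not the chapter's), suppose $v(z) = a \ln z$ with $a > 0$. The first-order condition then gives:
$$ v^{\prime}(z)=\frac{a}{z}=p_z\iff z^{\ast}=\frac{a}{p_z} $$
so that the quantity demanded indeed falls when the tax price $p_z$ rises, for instance following an increase in $b$ or $c$.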

The welfare the agent derives from $z$ is directly linked to the shape of the inverse demand curve. If the public good is not produced at all, the level of satisfaction obtained by the agent is $u(x, 0) = y$. If $z$ units are produced, the maximum amount $A$ the agent would be willing to pay is determined by the condition $u(y - A, z) \geq u(x, 0)$. Equivalently, we have $A \leq v(z)$. In other words, $v(z)$ represents the willingness to pay (WTP) of the agent. Similarly, $v^{\prime}(z)$ is what is termed the marginal WTP. The consumer surplus is thus defined as:
$$ s(z)=v(z)-{p}_zz $$
where $p_z z$ represents the cost of the policy to the agent. This expression can also be written as:
$$ s(z)={\int}_0^z v^{\prime}(t)\,dt-p_z z $$

Graphically, this means that the consumer surplus is the area between the inverse demand curve $v^{\prime}(z)$ and the horizontal line at the price $p_z$. As shown in Fig. 9.1, a decrease in the price generates a direct effect on surplus (effect 1), but also a change in the demand and, if this demand is satisfied, an additional increase in surplus (effect 2).

Consider for instance an agent who enjoys visiting a recreational park. Assume that one has information about his/her willingness to pay for visiting the site. As shown in Fig. 9.2, the marginal willingness to pay is decreasing. For a first visit, the agent is willing to pay $9 at most. For a second visit, the agent is willing to pay $4, for a third $2, and so on. This illustrates the "law of diminishing marginal utility": there is a decline in the marginal welfare the agent derives from consuming each additional unit. At some point, and for a given price, the agent will stop visiting the site, which determines his/her final demand. To illustrate, imagine that the price is $2. The agent will visit the site three times and obtain a surplus of (9 − 2) + (4 − 2) + (2 − 2) = $9. This information is essential to cost benefit analysis. It allows the net welfare of the agent to be quantified in monetary terms. For instance, if a public body aims at providing better access to the recreational park, thereby reducing the travel expenses of the agent by $1, then the surplus will increase by $3. This means that the agent does not want to pay more than $3 for the project. This amount represents the willingness to pay of the agent for better access to the park.
Fig. 9.2

Consumer surplus in a discrete choice setting: example 1
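A minimal R sketch of this discrete example may help; the $2 willingness to pay for the third visit is inferred from the $9 surplus reported above:

```r
# Marginal willingness to pay for successive visits (third value inferred)
wtp <- c(9, 4, 2)

# Consumer surplus at a given price: visits occur as long as WTP covers it
surplus <- function(price) sum(pmax(wtp - price, 0))

surplus(2)  # (9-2) + (4-2) + (2-2) = 9, as in the text
surplus(1)  # (9-1) + (4-1) + (2-1) = 12, i.e. a $3 gain from the $1 price cut
```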

Let us now consider a framework with two agents, indexed 1 and 2, respectively. They differ in their tax base ($b_1$ and $b_2$) and consequently in their tax price, $p_1 = cb_1/(b_1 + b_2)$ and $p_2 = cb_2/(b_1 + b_2)$, with $p_1 + p_2 = c$. They have different preferences about the public good: $u_1 = x + v_1(z)$ and $u_2 = x + v_2(z)$. Cost benefit analysis aims at selecting one single level of public good by maximizing the total surplus $s_1(z) + s_2(z)$. This amounts to differentiating $v_1(z) + v_2(z) - cz$ with respect to $z$. We obtain:
$$ {v}_1^{\prime }(z)+{v}_2^{\prime }(z)=c $$
In equilibrium, the sum of the marginal WTP is equal to the marginal cost of production. In economics, this is known as the Pareto-optimality condition for the supply of a public good. For instance, in Fig. 9.3, agent 1 desires a lower quantity of public good than agent 2. To maximize the surplus, cost benefit analysis considers the demand of the whole society (by summing the two demand curves) and chooses a solution that lies between the ideal points of the agents (such that the budget constraint $p_1 + p_2 = c$ is fulfilled). By doing so, we are guaranteed to reach a situation of Pareto-optimality which, by definition, is a state of allocation of resources in which it is impossible to make any one individual better off without making at least one individual worse off.
Fig. 9.3

The optimal provision of public goods

The concept of Pareto-optimality should not be mistaken for that of Pareto-improvement, which denotes a "move" that benefits one or more individuals without reducing any other individual's well-being. To illustrate, consider three agents who have preferences about two competing strategies. In Table 9.2, those preferences are expressed in monetary terms. We can see that agents 1 and 2 prefer strategy S1, while agent 3 prefers strategy S2. In this example, the concept of Pareto-improvement is not useful, as it is impossible to implement a policy change without making at least one individual worse off: a change from S1 to S2 is not Pareto-improving, but neither is a move from S2 to S1. Then, how can we reach a situation of Pareto-optimality? The answer is through reallocation. One has to rely on what is termed a Kaldor-Hicks improvement: a move is deemed an improvement as long as the winners gain enough to compensate the losers for any potential loss. Using this criterion, one would typically select strategy S2. To see why, one needs to understand that what matters in cost benefit analysis is welfare. In Table 9.2, agent 3 derives a high level of satisfaction from S2 and is willing to pay a lot for it, even if this means compensating the welfare loss of the other agents. For instance, if agent 3 gives $5 to agent 1 and $1 to agent 2, a move from S1 to S2 would benefit the whole society.
Table 9.2 The concept of Pareto-optimality: example 2

| Project | Agent 1's surplus | Agent 2's surplus | Agent 3's surplus | Total surplus |
| --- | --- | --- | --- | --- |
| Strategy S1 | $5 | $25 | $75 | $105 |
| Strategy S2 | $0 | $24 | $84 | $108 |

Cost benefit analysis provides public managers with a decision criterion based on the Kaldor-Hicks criterion. A project is an improvement over the status quo if the sum of welfare variations is positive. If the gains exceed the losses, then the winners could in theory compensate the losers so that policy changes are Pareto-improving. Yet, the compensation scheme must be chosen carefully. Cost benefit analysis is in this respect not equipped to conceive and implement redistributive schemes. Instead, the approach aims at selecting a particular policy among competing strategies. It nevertheless remains that redistribution, if carried out, can strongly affect work incentives, induce mobility, generate tax evasion or encourage tax fraud.

9.3 Discount of Benefits and Costs

Project selection starts with an option analysis discussed at the level of a planning document such as a master plan. The set of strategies is generally reduced so that at least three alternatives are examined: (1) a baseline strategy (or status quo), which is a forecast of the future without investment; (2) a minimum investment strategy; and (3) a maximum investment strategy. Once the strategies are identified, a financial analysis is implemented to determine whether they are sustainable and profitable (a chapter has been dedicated to this step). If the financial analysis is not conclusive, then cost benefit analysis must outline some rationale for public support by demonstrating that the policy generates sufficient economic benefits. This step, termed economic appraisal, aims to assess the viability of a project from society's perspective. To do so, all economic impacts are expressed in terms of equivalent money value. Discounting then renders these items fully comparable by multiplying all future cash flows by a discount factor.

Formally, let $NB_t = B_t - C_t$ denote for each year $t$ the net economic benefit, defined as the difference between total benefits $B_t$ and total costs $C_t$. Discounting is accomplished by computing the economic net present value (ENPV hereafter):
$$ ENPV={NB}_0+{\delta}_1{NB}_1+\cdots +{\delta}_T{NB}_T $$
where $T$ represents the time horizon of the project and $\delta_t$ ($t = 1, \dots, T$) are the discount factors by which the net benefits at year $t$ are multiplied in order to obtain the present value. The discount factors are lower than one and decreasing with the time period. They are defined as:
$$ \delta_t=\frac{1}{\left(1+r\right)^t},\quad \text{for } t=1,\dots,T $$

where $r$ denotes the economic discount rate. This rate is different from the discount rate used in the financial appraisal. It does not represent some opportunity cost of capital (the return obtained from the best alternative strategy); it reflects instead society's view on how future benefits and costs should be valued against present ones. This rate is generally computed and recommended by government agencies such as the Treasury, or by upper authorities such as the European Union. Its value may range from 2 to 15% from one country to another. It is usually expressed in annual terms. When the analysis is carried out at current prices (resp. constant prices), the discount rate is expressed in nominal terms (resp. real terms) accordingly.

A positive economic net present value provides evidence that the project is desirable from a socio-economic point of view. In Excel, the computation can be accomplished with the NPV function. This command calculates the net present value via two entries: (1) a discount rate and (2) a series of future payments. One needs to enter the following formula in a cell:
$$ = value0+NPV\left( rate, value1, value2,\dots \right) $$

where rate is the discount rate, value0 represents the first cash flow. This cash flow is excluded from the NPV formula because it occurs in period 0 and should not be discounted. Last, “value1, value2, … ” is the range of cells containing the subsequent cash flows.
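For instance, with a 4% discount rate and net flows of −32,500 at period 0 followed by 12,600, 17,500 and 20,000 (the bridge-example figures used later in this section), entering
$$ =-32500+NPV\left(4\%,12600,17500,20000\right) $$
returns approximately 13,575.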

The following statement holds true on many occasions: the higher the discount rate, the less likely the net present value is to reach positive values. The reason is that most policy decisions involve large immediate outlays for building the infrastructure, while benefits are observed in the future, all along the project's life. When the discount rate increases, the value of future inflows decreases, which thereby reduces the ENPV. Beyond some point, known as the economic internal rate of return (EIRR), the net present value turns negative. We have:
$$ EIRR=r\ \mathrm{such}\ \mathrm{that}\ ENPV(r)=0. $$
The internal rate of return is defined as the discount rate that zeroes out the economic net present value of an investment. It cannot be determined by an algebraic formula but can be approximated in Excel using the IRR function:
$$ =IRR\left( value0, value1, value2,\dots \right) $$

The formula yields the internal rate of return for a series of cash flows (here value0, value1, value2), starting from the initial period. Values must contain at least one positive value and one negative value.

The EIRR is an indicator of the relative efficiency of an investment. It should however be used with caution, as multiple solutions may be found, especially when large cash outflows appear during or at the end of the project. An example is provided in Fig. 9.4. While strategy S1 is characterized by a negative net benefit observed only in period 1, strategy S2 induces negative values both at the beginning and at the end of the project. As a consequence, strategy S2 yields two internal rates of return, around 1% and 7% respectively, while strategy S1 generates only one EIRR, around 4%. Given these difficulties, the net present value is often considered a more suitable criterion for comparing alternative strategies. The strategy with the highest ENPV is designated as the most attractive and chosen first. For instance, we can see from Fig. 9.4 that strategy S1 prevails over strategy S2 only for small discount rates, lower than approximately 4%.
Fig. 9.4

Multiple internal rates of return: example 3
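The multiplicity is easy to reproduce in R. The flows below are hypothetical (the exact series behind Fig. 9.4 is not reported here), chosen so that the ENPV polynomial has exactly two roots:

```r
# Hypothetical flows with two sign changes: outlays at t = 0 and t = 2
cf <- c(-1000, 2300, -1320)

enpv <- function(r) sum(cf / (1 + r)^(0:2))

# Bracketing each sign change of enpv() isolates the two rates
uniroot(enpv, c(0.05, 0.15))$root  # first EIRR: 10%
uniroot(enpv, c(0.15, 0.25))$root  # second EIRR: 20%
```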

An alternative approach to assess the relative efficiency of an investment is the benefit-cost ratio. It is defined as:
$$ BCR=\frac{PVEB}{PVEC}\ \mathrm{with}\kern0.5em PVEB=\sum_{t=0}^T\frac{B_t}{{\left(1+r\right)}^t}\ \mathrm{and}\ PVEC=\sum_{t=0}^T\frac{C_t}{{\left(1+r\right)}^t} $$

The BCR is computed from the present value of the benefits (PVEB) and the present value of the costs (PVEC). Ideally, it should be higher than 1 and maximized. Unlike the EIRR, the BCR has the advantage of being always computable.

When comparing two alternative investments of different sizes, the BCR and the ENPV may reach different conclusions. The reason is that the BCR is a ratio and, therefore, like the EIRR, it is independent of the amount of the investment. Consider for instance two alternative strategies: a small investment project versus a large one. The smaller project generates benefits and costs which amount to PVEB = 10 and PVEC = 5 million dollars, respectively. This yields a BCR of 2 and an ENPV of 5 million dollars. The larger project generates benefits and costs which amount to PVEB = 50 and PVEC = 40 million dollars, respectively. This yields a BCR of 1.25 and an ENPV of 10 million dollars. As can be seen, while the BCR is higher for the smaller project, the ENPV is higher for the larger project.

Direct effects and externalities, when they are quantified, are directly included as new items in the financial analysis, thus filling the cash flow statement with additional rows. To illustrate, Fig. 9.5 provides a detailed presentation of costs and benefits for a bridge project. For simplicity of exposition, the time horizon is set to 3 years. The project involves an immediate outlay of $30 million (for the lands, bridge infrastructure, equipment, etc.) and is followed by annual operating expenditures of $4 million (raw materials, labor, etc.). It generates annual revenues via a toll, which are expected to amount to $13.6 million for the first year, $17 million for the second year, and $17.5 million for the third year. The main economic benefit of the bridge project is the time saved by users (row R16). Those benefits may have been valued for instance at the opportunity cost of time, which is the fraction of salary the users could earn with the time saved, net of toll fees. Furthermore, the bridge generates negative externalities due to noise and aesthetic impacts, both at the time of construction and during the following years (row R17). Those external effects may have been estimated using revealed preferences techniques. To avoid double-counting, the economic appraisal does not account for external sources of financing (interest and principal repayment, private equity) as they are supposed to cover the initial investment costs.
Fig. 9.5

Economic return on investment: example 4

In practice, a road connection induces several economic effects. It generates savings in travel time for the users. In the meantime, if the infrastructure is equipped with a toll gate, it provides a revenue stream for the operator. In that context, the toll is a cost from the point of view of the users and a revenue from the point of view of the operator. The effects cancel each other out. To better assess the impact on welfare for the whole society (users and operator), the users’ benefits must be expressed net of fees. In other words, the consumer surplus (e.g., time saving in Fig. 9.5) is computed as the difference between the gains in terms of time saved and the toll fees. By doing so, toll revenues/fees are finally excluded from the economic appraisal:
$$ \underbrace{\left(\text{Gains in terms of time saved}\right)-\left(\text{Toll fees}\right)}_{\text{Time saving (R16)}}+\underbrace{\left(\text{Toll fees}\right)}_{\text{Sales (R14)}}=\left(\text{Gains in terms of time saved}\right) $$

The approach is thus different from that of a financial appraisal, where toll revenues are included as a positive cash flow, to assess the financial sustainability and profitability of the investment strategy.

In Fig. 9.5, the most important row is R20, which represents the net benefits of the project for each time period. Summing all the flows (−32,500 + 12,600 + 17,500 + 20,000) to evaluate the suitability of the project would make us assume that future flows matter as much as present flows. In practice, however, one usually prefers to give lower importance to future cash flows. Discounting is accomplished by applying each year a discount factor that reflects the value of future cash flows to society. Consider for instance Fig. 9.5 where several discount rates (row R21) have been considered to assess the desirability of the project. With a discount rate equal to 4%, the economic net present value is computed as:
$$ ENPV\left(4\%\right)=-32,500+\frac{12,600}{1.04}+\frac{17,500}{1.04^2}+\frac{20,000}{1.04^3}\approx 13,575 $$
Similarly, using information from total costs (row R18) and total benefits (R19), we have:
$$ PVEC\left(4\%\right)=\left(-\right)32,500+\frac{\left(-\right)6000}{1.04}+\frac{\left(-\right)6000}{1.04^2}+\frac{\left(-\right)6000}{1.04^3}\approx \left(-\right)49,151 $$
$$ PVEB\left(4\%\right)=0+\frac{18,600}{1.04}+\frac{23,500}{1.04^2}+\frac{26,000}{1.04^3}\approx 62,726 $$
Equivalently, the difference between these two expressions yields the net present value:
$$ ENPV\left(4\%\right)= PVEB\left(4\%\right)- PVEC\left(4\%\right)\approx 13,575 $$

The ENPV decreases to 10,047 when the discount rate increases to 8%, and to 4181 when it equals 16% (row R24). Those values are positive and provide evidence that the project is desirable.

The benefit-cost ratios are provided in row R25 of Fig. 9.5. They are higher than one, which points out again the attractiveness of the project, no matter what the value of the discount rate is. They are computed from rows R22 and R23. For instance, for a 4% discount rate, we have:
$$ BCR\left(4\%\right)=\frac{PVEB\left(4\%\right)}{PVEC\left(4\%\right)}=\frac{62,726}{49,151}\approx 1.28 $$
Statistical programming languages such as R-CRAN have proven to be very handy when it comes to assigning probability distributions to particular events. In the bridge example, cash flows may for instance be characterized in a less deterministic manner to better assess the uncertainty of the project. This analytical technique, known as probabilistic sensitivity analysis, is further detailed in Sect. 9.6. As a preliminary step, Fig. 9.6 provides the codes to be used if one wants to calculate the ENPV and other performance indicators in R-CRAN.
Fig. 9.6

Discounting cash flows with R CRAN: example 4

Figure 9.6 reproduces the results obtained in Fig. 9.5. The first step consists in importing the Excel worksheet into R-CRAN. For this purpose, the table has been modified so that it consists only of raw data, cleaned of totals, and rearranged so that the costs and benefits are presented successively. The command read.table reads the file (saved as a .csv file on disk C:) and creates a data frame from it, renamed D. This yields a table equivalent to Fig. 9.5. The object D has the properties of a matrix: an element observed in row i and column j is referred to as D[i, j].

Elements of D are summable. The cost vector C is for instance obtained by summing rows 2 to 13, while the benefit vector B results from the sum of rows 14 and 15. The command colSums is used to ensure that the rows are summed over columns 2 to 5 only. Last, the package FinCal and its function npv are used to compute the different performance indicators. The entry Disc.Rate specifies the vector of discount rates to be applied in the npv function. As the costs are already expressed as negative values, the analysis does not need to subtract them. For the BCR, one needs to use the function abs() to express the costs in absolute value.
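Since the code of Fig. 9.6 is not reproduced here, the following sketch suggests what that workflow may look like; the file path, separator, and row layout are assumptions consistent with the description above:

```r
library(FinCal)  # provides npv(r, cf), with cf starting at period 0

# Assumed layout: one row per item, columns 2-5 holding periods 0-3,
# costs entered as negative values, benefits as positive values
D <- read.table("C:/bridge.csv", header = TRUE, sep = ",")

C <- colSums(D[2:13, 2:5])   # total costs per period (rows 2 to 13)
B <- colSums(D[14:15, 2:5])  # total benefits per period (rows 14 and 15)

Disc.Rate <- c(0.04, 0.08, 0.16)
ENPV <- sapply(Disc.Rate, function(r) npv(r, B + C))  # costs already negative
BCR  <- sapply(Disc.Rate, function(r) npv(r, B) / abs(npv(r, C)))
```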

Under special circumstances, an indicator known as the net benefit investment ratio (NBIR) can also be examined. It is defined as the ratio of the present value of the benefits (PVEB), net of operating costs (PVEC − PVK), to discounted investment costs (PVK):
$$ NBIR=\frac{PVEB-\left( PVEC-PVK\right)}{PVK} $$

For instance, in Fig. 9.5, the investment outlay is PVK = 30,000. For a discount rate equal to 4%, the present value of operating costs amounts to PVEC − PVK = 49,151 − 30,000 = 19,151. We also have PVEB = 62,726. This yields a NBIR equal to (62,726 − 19,151)/30,000 = 1.45. This ratio assesses the economic profitability of a project per dollar of investment. It is very useful when an authority is willing to finance several projects but has to face a budget constraint. The method allows the best combination of projects to be selected.

To illustrate the advantages of the NBIR, let us consider an authority that has a budget of $1,000,000. Information about the competing strategies is provided in Table 9.3 (amounts in thousands of dollars). If one were to compare the alternatives using net present values, strategies S1 and S2 would appear as the best alternatives. The budget constraint would be fulfilled and, overall, one would obtain a total ENPV equal to 100 + 80 = 180 thousand dollars. The ENPV, however, does not accurately assess the return on investment: with this criterion, larger projects are more likely to be selected. In contrast, the net benefit investment ratio assesses the profitability of the investment independently of its size (like the BCR). Using this criterion, we can see from the last column of Table 9.3 that the approach would rank strategy S1 first, then S3, S4 and S6. Assuming that only part of S6 is financed (50% of it), this combination of strategies would yield an ENPV equal to 100 + 70 + 60 + 20/2 = 240 thousand dollars (a short R sketch after Table 9.3 reproduces this computation). Thus, under capital rationing, the NBIR proves a very effective selection criterion.
Table 9.3 Ranking of projects under capital rationing: example 5

| Strategy | Investment costs PVK | Present value of benefits net of operating costs PVEB − (PVEC − PVK) | Economic net present value ENPV | Profitability ratio NBIR |
| --- | --- | --- | --- | --- |
| S1 | 400 | 500 | 100 | 1.30 |
| S2 | 600 | 680 | 80 | 1.10 |
| S3 | 300 | 370 | 70 | 1.25 |
| S4 | 200 | 260 | 60 | 1.23 |
| S5 | 500 | 530 | 30 | 1.06 |
| S6 | 200 | 220 | 20 | 1.13 |
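As announced above, a short R sketch of the selection rule, taking the printed Table 9.3 figures as given (amounts in thousands of dollars; fractional funding of the marginal project is allowed):

```r
pvk  <- c(400, 600, 300, 200, 500, 200)        # investment costs, S1 to S6
enpv <- c(100,  80,  70,  60,  30,  20)        # economic net present values
nbir <- c(1.30, 1.10, 1.25, 1.23, 1.06, 1.13)  # profitability ratios (Table 9.3)
budget <- 1000

total <- 0
for (i in order(nbir, decreasing = TRUE)) {    # fund projects by decreasing NBIR
  share  <- min(1, budget / pvk[i])            # full or partial funding
  total  <- total + share * enpv[i]
  budget <- budget - share * pvk[i]
}
total  # 240 thousand dollars, as in the text
```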

9.4 Accounting for Market Distortions

Conversion factors are related to the concept of shadow prices (also termed accounting or economic prices). They reflect the cost of an activity when prices are unobservable or when they do not truly reflect the real cost to society. Shadow prices do not relate to real-life situations; they correspond instead to the prices that would prevail if the market were perfectly competitive. For instance, the prices used in the financial appraisal (usually referred to as market prices) are likely to include taxes or government subsidies. Prices have to be adjusted accordingly, to better reflect trading values on a hypothetical free market. Yet, the term market prices can be misleading. In cost benefit analysis, it stands for the actual transaction price subject to market distortions, while in common language a "market economy" denotes an economy that is little planned or controlled by government. To avoid any confusion in the remainder of the chapter, we will use the term "financial prices" as a synonym for "market prices".

What is the true economic cost of inputs and outputs for the purposes of cost benefit analysis? To answer this question, it is convenient to appeal to the concept of opportunity cost. We should ask ourselves: "what would be the value of inputs and outputs if they were employed somewhere else?" For instance, the Little-Mirrlees-Squire-van der Tak method advocates the use of international prices (converted to domestic currencies) for evaluating and comparing public projects in different countries. The rationale for the method is that world prices more accurately reflect the opportunities that are available to the countries. The method has for instance been frequently used for calculating shadow prices for aid projects in developing countries, where markets are often considered more distorted.

In some cases, it is easy to correct for price distortions ex ante. For instance, the cash flows should be net of VAT (value added tax). The value of land and buildings provided by other public bodies can be included directly at their true costs. In some other cases, however, the use of a conversion factor may be necessary. Formally, a shadow price is defined as:
$$ \mathrm{Shadow}\ \mathrm{price}=\mathrm{Financial}\ \mathrm{price}\times CF $$

The conversion factor CF approximates the degree of perfection of the market. In an undistorted economy, the conversion factor is equal to one and shadow prices are identical to financial prices. Should CF be higher than one, then the financial prices would yield an underestimation of the true value of inputs and outputs. If lower than one, they yield instead an overestimation. Several examples are provided below.

Regulated Price

This situation occurs when an input is made available at a lower price by the public sector, hence yielding an underestimation of the true costs to taxpayers. Common examples are land offered at a reduced price by a public authority while it could earn a higher rent or price otherwise, or energy sold at a regulated tariff. The conversion factor should reflect these opportunity costs:
$$ CF=\frac{\mathrm{Shadow}\ \mathrm{price}}{\mathrm{Financial}\ \mathrm{price}} $$

Consider for instance row R1 of Fig. 9.5. Assume that the land has been sold at 80% of the usual price. The conversion factor is computed as CF = 1/0.80 = 1.25. In this case, the shadow value of lands amounts to 7000 × 1.25 = 8750 thousand dollars. Similarly, assume that electricity (row R9 of Fig. 9.5) is produced at a tariff that covers only 60% of marginal cost. The true cost to society is defined as 300 × (1/0.60) = 500 thousand dollars.

Undistorted Labor Market

When the labor market is perfectly competitive, the economy reaches an equilibrium where only "voluntary unemployment" prevails. People have chosen not to work solely because they do not consider the equilibrium wages sufficiently high. If this is the case, the project would only divert the labor force from its current use, at its market value (assuming that the project is not large enough to influence wages). The conversion factor is defined as CF = 1.

Distorted Labor Market

When minimum wages are adopted in a given labor market, the quantity of labor supplied increases (the number of workers who wish to work) while the demand for labor decreases (the number of positions offered by employers). This creates a situation of involuntary unemployment, where labor supply exceeds demand. Some individuals, in particular unskilled workers, are willing to work at the prevailing wage but are unable to find employment. In such markets, the opportunity cost of hiring these individuals is lower because the project would hire agents who would have been unemployed otherwise. In that context, the conversion factor is lower than one (but not necessarily zero, as the new workers could also work in the informal economy or just enjoy leisure). For instance, in its "Guide to Cost-Benefit Analysis of Investment Projects", the European Commission advocates the use of the regional unemployment rate (u) as a basis for the determination of the shadow wage:
$$ CF=\left(1-u\right)\left(1-s\right) $$

where $s$ is the rate of social security payments and relevant taxes, which should be excluded from the financial prices as they also represent a revenue for the public sector. Assume for instance that the unemployment rate is $u = 9\%$ and $s = 15\%$. Row R8 of Fig. 9.5 would be replaced with 750 × (1 − 9%) × (1 − 15%) ≈ 580 thousand dollars.

Import Tariffs

A tariff on imports increases the costs of inputs used in the project and, at the same time, induces additional revenue for the central government which can be used for other purposes. The true cost of inputs would be overestimated if no adjustment of cash flows were made. To better adjudge the true cost to society, any tariff should be excluded from the financial statement. This is equivalent to saying that only the CIF price (cost plus insurance and freight) should be used for valuing the imported inputs. Let $t_m$ denote the proportional tax rate on imports. The conversion factor is defined as:
$$ CF=\frac{\mathrm{Shadow}\ \left(\mathrm{world}\right)\ \mathrm{price}}{\mathrm{Financial}\ \mathrm{price}}=\frac{CIF\ \mathrm{price}}{CIF\ \mathrm{price}\times \left(1+{t}_m\right)}=\frac{1}{\left(1+{t}_m\right)} $$

Assume in row R7 of Fig. 9.5 that the raw materials have been imported. If the tax rate is equal to $t_m = 20\%$, then the value to be used in the economic appraisal is 2250 × (1/1.2) = 1875 thousand dollars. The tariff has been removed from the price.

Export Subsidies

If the project receives an additional payment for exporting, it is at the expense of domestic taxpayers. The reasoning is thus similar to that previously made regarding an import tariff. Any subsidy should be excluded from the financial flows. This amounts to considering the "free on board" (FOB) price only (before insurance and freight charges):
$$ CF=\frac{\mathrm{Shadow}\ \left(\mathrm{world}\right)\ \mathrm{price}}{\mathrm{Financial}\ \mathrm{price}}=\frac{FOB\ \mathrm{price}}{FOB\ \mathrm{price}\times \left(1+{s}_x\right)}=\frac{1}{\left(1+{s}_x\right)} $$

where $s_x$ denotes the rate of subsidy.

Major Non-traded Goods

The fact that an input is not imported or an output not exported does not necessarily mean that they are not subject to trade distortions. The existence of a tariff that aims to protect the domestic market may explain for instance the current use of local inputs. In such situations, the tax on imports is also reflected in the domestic prices. Similarly, a subsidy that encourages the producers to export may generate an increase in the price level. When data are available, these distortions can be valued using the same approach as previously. Assume for instance that the government has imposed an import tax of 25% on equipment and infrastructure (rows R2 and R3 of Fig. 9.5). The conversion factor to be used is 1/(1 + 25%) = 0.8, even if those goods are bought in the domestic market.

Minor Non-traded Goods

For minor items, or when data are not easily available, the European Commission advocates the use of a “standard conversion factor”. The latter is specified as:
$$ SCF=\frac{M+X}{M+{T}_m-{S}_m+X-{T}_x+{S}_x} $$

where $M$ denotes the total imports valued at the CIF price, $X$ the total exports valued at the FOB price, $T_m$ and $S_m$ the total import taxes and subsidies, and $T_x$ and $S_x$ the total export taxes and subsidies. In simple words, the SCF is the ratio of the value at world prices of all imports and exports to their value at domestic prices. It generalizes the previous formulas and provides a general proxy of how international trade is distorted by trade barriers. It assesses how prices would change on average if such barriers were removed. For instance, should $T_m + S_x$ be larger than $S_m + T_x$, then, on average, the country supports its domestic producers. In that context, the standard conversion factor is lower than one, meaning equivalently that the trade balance is artificially increased, or that domestic prices are overvalued. The inverse of the standard conversion factor (1/SCF) is also termed the "shadow exchange rate factor". In practice, the approach is used when one wants to compare the economic performance of competing projects in different developing countries.
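The conversion factors of this section reduce to a few one-line formulas. The sketch below restates them in R, using the bridge-example figures; the function names are ours:

```r
cf_regulated <- function(share) 1 / share      # input sold below its full price
cf_regulated(0.80) * 7000                      # land: 8750 thousand dollars

cf_wage <- function(u, s) (1 - u) * (1 - s)    # EC shadow-wage factor
cf_wage(0.09, 0.15)                            # 0.7735

cf_import <- function(t_m) 1 / (1 + t_m)       # back to the CIF price
cf_import(0.20) * 2250                         # raw materials: 1875

cf_export <- function(s_x) 1 / (1 + s_x)       # back to the FOB price

scf <- function(M, X, Tm, Sm, Tx, Sx)          # standard conversion factor
  (M + X) / (M + Tm - Sm + X - Tx + Sx)
```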

The conversion factors previously determined are applied to the cash flows of the bridge project (Fig. 9.6). They are displayed in the last column of Fig. 9.7. For all minor traded items (for which data on trade distortions were not accurately available), a standard conversion factor equal to 0.9 has been used (emphasized in red). As can be seen from the ENPV and BCR criteria, the project remains economically viable, even at a discount rate of 16%. For this rate, the performance indicators have been computed as follows, rounded for presentation purposes:
$$ PVEC\left(16\%\right)=\left(-\right)29,175+\frac{\left(-\right)5477}{1.16}+\frac{\left(-\right)5477}{1.16^2}+\frac{\left(-\right)5477}{1.16^3}\approx \left(-\right)41,476 $$
$$ PVEB\left(16\%\right)=0+\frac{17,240}{1.16}+\frac{21,800}{1.16^2}+\frac{24,250}{1.16^3}\approx 46,599 $$
$$ ENPV\left(16\%\right)= PVEB\left(16\%\right)- PVEC\left(16\%\right)\approx 5123 $$
$$ BCR\left(16\%\right)= PVEB\left(16\%\right)/ PVEC\left(16\%\right)\approx 1.12 $$
Fig. 9.7

Corrections for market distortions: example 4

In theory, conversion factors should provide the evaluator with a better decision tool. In practice, however, they are unique to each context, and the methods used to approximate these weights often rest on rough calculations. While time-consuming, such adjustments can also be of minor importance for the investment decision. Therefore, conversion factors should be used with caution or in exceptional cases only. It is also possible to provide the results of the analysis both with and without shadow prices.

At this stage of the analysis, the net benefits of the bridge project should be analyzed against those of larger and smaller investments. Moreover, it may be useful to check whether some excluded items are likely to compromise or reinforce the decision made, especially if the ENPV reaches surprisingly high values. For instance, a BCR approaching 2 would mean that the economic benefits are twice as high as the economic costs. Why, then, was the project not implemented before? If data are available, it can also be useful to compare the economic return of the investment with that of an already existing project. Last, if some important costs and benefits have been excluded from the analysis because it was not possible to monetize them, a description of these items should at least be provided in physical terms. In this respect, a multi-criteria analysis can be used to combine both physical terms and monetary terms (e.g., the ENPV) into a single indicator.

9.5 Deterministic Sensitivity Analysis

Cost benefit analysis generally goes further by questioning the valuation of the costs and benefits themselves. When some important items are difficult to estimate but yet quantified, or when some degree of uncertainty is inherent to the study, a sensitivity analysis can be used to examine the degree of risk in the project. This can take the form of a partial sensitivity analysis, a scenario analysis, a Monte Carlo analysis or a mean-variance analysis.

Uncertainty not only refers to variations in the economic environment (uncertain economic growth, natural hazards, modification of relative prices), but also to sampling errors resulting from data collection. Consider for instance the bridge example (Fig. 9.7). The study makes assumptions about the cost of inputs and the amount of sales, and uses estimates to calculate the economic effects (time saving, externalities). Those cash flows can only be assessed or forecast imprecisely. This in turn affects the precision of the ENPV. The purpose of a sensitivity analysis is to identify these sources of variability and assess how sensitive the conclusions are to changes in the variables in question.

In a partial sensitivity analysis, one single variable is modified while holding the other variables constant. Figure 9.8 illustrates the approach by assessing first the effect of a change in energy costs (row R9 of Fig. 9.7) on the ENPV of the bridge project. A 15% increase in costs yields for instance a net benefit of $13,699 for a 4% discount rate, $10,412 for an 8% discount rate and $4,947 for a 16% discount rate. The results and conclusions (suitability of the project) are not very sensitive to those variations. Similar results are obtained when the costs of raw materials vary from −15% to +15% (see Fig. 9.8).
Fig. 9.8

Partial sensitivity analysis: example 4
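A minimal R sketch of the partial sensitivity loop. The net benefit series is the one implied by the post-conversion totals given in Sect. 9.4; the annual energy cost is an assumed stand-in for row R9, so the printed ENPVs will differ slightly from those of Fig. 9.8:

```r
enpv <- function(nb, r) sum(nb / (1 + r)^(seq_along(nb) - 1))

nb_base <- c(-29175, 11763, 16323, 18773)  # net benefits, periods 0 to 3
energy  <- c(0, -450, -450, -450)          # assumed annual energy costs (R9)

for (delta in c(-0.15, 0, 0.15)) {         # vary energy costs, all else fixed
  nb <- nb_base + energy * delta
  cat(sprintf("energy %+3.0f%%: ENPV(4%%) = %.0f\n", 100 * delta, enpv(nb, 0.04)))
}
```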

A partial sensitivity analysis has its limits as the approach does not consider variations in more than one variable. To solve this issue, an alternative approach known as scenario analysis is commonly used. It combines the results in a three-scenario framework with two extreme scenarios and one most-likely scenario. These scenarios draw attention to the main uncertainties involved in the project. The idea is to focus on the upper boundaries (best-case scenario) and lower boundaries (worst-case scenario) of the study’s results. For instance, considering variations that are similar to those of Fig. 9.8, the approach would combine the assumed lower and upper values (–15% and +15%) of both variables simultaneously to characterize the two extreme scenarios. By doing so, we obtain Table 9.4. We can see that the ENPV never reaches negative values, which gives support to the strategy at stake. The results can then be compared to those obtained for competing strategies.
Table 9.4 Scenario analysis: example 4

| | Best-case (energy costs: −15%; raw materials: −15%) | Most-likely (energy costs: 0%; raw materials: 0%) | Worst-case (energy costs: +15%; raw materials: +15%) |
| --- | --- | --- | --- |
| ENPV(4%) | 14,914 | 13,916 | 12,918 |
| ENPV(8%) | 11,540 | 10,614 | 9,687 |
| ENPV(16%) | 5,931 | 5,123 | 4,316 |

Consider now Table 9.5, where the results of a scenario analysis are displayed for three competing strategies, in thousands of dollars. It can be seen that strategy S3 is dominated by the other strategies, as its net present value is always equal to or lower than the net present values of the other projects. Strategy S3 is thus eliminated. If no other dominant strategy is apparent, different decision rules can be employed (a short R sketch after Table 9.5 applies each of them):
  1. The maximin rule consists in selecting the alternative that yields the largest minimum payoff. This rule would typically be used by a risk-averse decision-maker. In Table 9.5 for instance, the minimum payoff is 1000 for strategy S1, while it is −500 for S2. Strategy S1 is thus selected. This ensures that the payoff will be at least 1000 whatever happens.

  2. The minimax regret rule consists in minimizing the maximum regret. Regret is defined as the opportunity cost incurred from having made the wrong decision, that is, the best payoff attainable in a scenario minus the payoff actually obtained. In Table 9.5, if one chooses strategy S1, there is no regret when the worst-case or most-likely scenario occurs. On the other hand, if the best-case scenario occurs, the regret is 3000 − 2000 = 1000. If one chooses strategy S2, the regret amounts to 1000 − (−500) = 1500 for the worst-case scenario, 1500 − 1000 = 500 for the most-likely scenario, and zero for the best-case scenario. Overall, the maximum regret is 1000 for strategy S1 and 1500 for strategy S2. Strategy S2 is thus eliminated. The approach is relatively similar to the maximin approach, as it favors less risky alternatives; it however accounts for the opportunity cost of not choosing the other alternatives. It is used when one wishes not to regret the decision afterwards.

  3. The maximax rule involves selecting the project that yields the largest maximum payoff. For project S1, the maximum payoff is 2000, while for S2 it is 3000. A risk-loving decision-maker would favor project S2.

  4. The Laplace rule implies maximizing the expected payoff assuming equiprobable scenarios. In our example, we would assign a probability of 1/3 to each scenario. The expected payoff for project S1 is (1/3) × 1000 + (1/3) × 1500 + (1/3) × 2000 = 1500. For project S2, we have (1/3) × (−500) + (1/3) × 1000 + (1/3) × 3000 ≈ 1167. Based on this decision rule, strategy S1 is selected.
Table 9.5 Scenario analysis and project selection: example 6

| ENPV | Worst-case | Most-likely | Best-case |
| --- | --- | --- | --- |
| Strategy S1 | 1000 | 1500 | 2000 |
| Strategy S2 | −500 | 1000 | 3000 |
| Strategy S3 | −500 | 1000 | 2000 |
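As announced above, a short R sketch applies the four rules to the payoffs of Table 9.5, once the dominated strategy S3 has been removed:

```r
payoff <- rbind(S1 = c(1000, 1500, 2000),       # worst, most-likely, best
                S2 = c(-500, 1000, 3000))

apply(payoff, 1, min)                           # maximin: S1 (1000 vs -500)

regret <- t(apply(payoff, 2, max) - t(payoff))  # best attainable minus payoff
apply(regret, 1, max)                           # minimax regret: S1 (1000 vs 1500)

apply(payoff, 1, max)                           # maximax: S2 (3000 vs 2000)
rowMeans(payoff)                                # Laplace: S1 (1500 vs about 1167)
```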

The two methods of sensitivity analysis described above are often considered deterministic, as they assess how the results respond to pre-determined changes in parameter values through upper and lower bounds. They may be sufficient to roughly assess the risk associated with a project, but they do not account for all the uncertainty involved. In particular, they do not evaluate precisely the probability of occurrence of each possible outcome. A probabilistic sensitivity analysis is a useful alternative in this respect. Instead of focusing only on extreme bounds, the approach examines a simulation model that replicates the complete random behavior of the sensitive variables. It allows the distribution of the ENPV to be fully examined.

9.6 Probabilistic Sensitivity Analysis

A probabilistic sensitivity analysis assigns a probability distribution to all sensitive variables. These variables thereby become random in the sense that their values are subject to variations due to chance. The approach, also known as Monte Carlo analysis, examines those variations simultaneously and simulates thousands of scenarios, which results in a range of possible ENPVs with their probabilities of occurrence. The method requires detailed data about the statistical distribution of the variables in question. One can for instance use information about similar existing projects, observed variations in input prices or, if the direct benefits and externalities have been estimated in the context of the project, the estimated standard deviations.

Among the most common distribution patterns that are used in probabilistic sensitivity analysis, we may name the uniform distribution, the triangular distribution, the normal distribution, and the beta distribution. The triangular and beta distributions are frequently used because both can be characterized by the worst-case, most likely, and best-case parameters. The normal distribution on the other hand has proven to be very handy provided that information on standard deviations is available. In what follows, R-CRAN is used to illustrate the differences among them (see Figs. 9.9 and 9.10). Many other probability distributions exist. For simplicity of exposition, they are not presented here.
Fig. 9.9 Probability distributions in R-CRAN

Fig. 9.10 Examples of probability distributions. (a) Uniform distribution. (b) Triangular distribution. (c) Normal distribution. (d) Beta distribution (1.5, 1.5). (e) Beta distribution (5, 5). (f) Beta distribution (2, 5)

The uniform distribution assigns an equal probability of occurrence to each possible value of the random variable. It is used when little information is available about the distribution in question. Consider for instance the energy costs (row R9) of Fig. 9.7. They are initially set to −450 thousand dollars. We could assign instead a range of values between −500 and −400 with an equal probability of occurrence. With R-CRAN (Fig. 9.9), this amounts to using the runif command. The first entry denotes the number of randomly generated observations (here 1,000,000), while the second and third entries stand for the lower and upper limits of the distribution. Figure 9.10a provides the probability density function estimated with plot(density()). As can be seen, the uniform distribution yields a roughly rectangular density curve (which would be perfectly rectangular with an infinite number of observations). Each value has an equal probability of occurring inside the range [−500, −400]. In Fig. 9.10, the bandwidth relates to the precision of the local estimations used to approximate the shape of the density curve and is of no interest for the present purpose.
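
As a rough sketch of the corresponding commands (the object name energy is ours, not the chapter's):

```r
# 1,000,000 uniform draws for energy costs (thousand dollars)
energy <- runif(1000000, min = -500, max = -400)
plot(density(energy))  # approximately rectangular over [-500, -400]
```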

The triangular distribution is a continuous probability distribution with a minimum value, a mode (most-likely value), and a maximum value. In contrast with the uniform distribution, the triangular distribution does not assign the same probability of occurrence to each outcome: the probability density is zero at the lower and upper bounds and reaches its maximum at the mode. This distribution is used when the available information is not sufficient or reliable enough to identify the probability distribution more rigorously. In R-CRAN (Fig. 9.9), we can use the rtriangle function from the triangle package. The first entry specifies the number of observations; the second and third entries are the lower and upper limits of the distribution; the last entry stands for the mode. As can be seen from Fig. 9.10b, the program yields a triangular probability density function with minimum and maximum values at −500 and −400, respectively.
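
A corresponding sketch, assuming the initial value of −450 is taken as the mode:

```r
# Triangular draws: minimum -500, maximum -400, mode assumed at -450
library(triangle)
energy <- rtriangle(1000000, a = -500, b = -400, c = -450)
plot(density(energy))  # peak at the mode, zero density at the bounds
```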

With the normal distribution, the density curve is symmetrical around the mean and bell-shaped (see Fig. 9.10c). The random variable can take any value from −∞ to +∞. Train punctuality is a typical example: a train frequently arrives just on time, less frequently one minute early or late, and very rarely twenty minutes early or late. In R-CRAN (Fig. 9.9), the command in use is rnorm. The first entry specifies the number of observations; the second is the mean; the third is the standard deviation. The simplest form of this distribution, known as the standard normal distribution, has a mean of 0 and a variance of 1. In Fig. 9.9, the standard deviation is set arbitrarily to 100. In Fig. 9.10c, the values distribute in a symmetrical hill shape, with most observations near the specified mean. Provided that information about the standard deviation is available, the normal distribution may provide a reasonable approximation to the distribution of many events. However, many variables are unlikely to be normally distributed: they may exhibit skewness (asymmetry) or a different kurtosis (a different peakedness).
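
A corresponding sketch, assuming the mean is kept at the initial value of −450:

```r
# Normal draws: mean assumed at -450, standard deviation set to 100
energy <- rnorm(1000000, mean = -450, sd = 100)
plot(density(energy))  # bell-shaped curve centered on -450
```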

A way to capture these differences is to rely instead on the beta distribution. The beta distribution is determined by four parameters: a minimum value, a maximum value, and two positive shape parameters, denoted α and β. Depending on those parameters, the beta distribution takes different shapes. Like the uniform and triangular distributions, it models outcomes that have a limited range. In Fig. 9.9, energy costs are randomized using the rbetagen command available with the package mc2d. The first entry is the number of randomly generated observations; the second and third entries stand for parameters α and β; the last two entries represent the lower and upper bounds, respectively. When α and β are equal, the distribution is symmetric (Figs. 9.10d, e); as their common value increases, the distribution becomes more peaked (Fig. 9.10e). When α and β differ, the distribution is asymmetric: in Fig. 9.10f, for instance, the curve is skewed to the right because α is set lower than β.
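
A corresponding sketch for the right-skewed case of Fig. 9.10f:

```r
# Beta draws with shape parameters alpha = 2 and beta = 5,
# rescaled to the interval [-500, -400]
library(mc2d)
energy <- rbetagen(1000000, shape1 = 2, shape2 = 5, min = -500, max = -400)
plot(density(energy))  # right-skewed since alpha < beta
```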

So far, we have assumed that the project's variables were uncertain independently of each other, i.e. uncorrelated. When this is the case, one can safely assign a separate random generator to each variable. In other cases, however, joint probability distributions are required. Consider for instance time saving (row R16 of Fig. 9.7) and sales (row R14). To some extent, those variables are likely to be correlated: the higher the traffic, the higher the sales revenues and the time saved by users (if the project induces traffic congestion, time saved may decrease and the correlation would be negative). Traffic may also significantly affect maintenance costs (raw materials, labor, electric power, etc.). If one were to assign a separate random generator to each of these variables, they would vary independently of each other; we could for instance observe a decrease in sales together with a significant increase in time saving. To avoid such situations, it is generally advised to use a multivariate probability distribution, for instance a multivariate normal distribution, provided that information about the covariance matrix is available.

A covariance matrix gathers the variances and covariances of several random variables:
$$ V=\left[\begin{array}{cccc} Var(x_1) & Cov(x_1,x_2) & \dots & Cov(x_1,x_K) \\ Cov(x_1,x_2) & Var(x_2) & \dots & Cov(x_2,x_K) \\ \vdots & \vdots & \ddots & \vdots \\ Cov(x_1,x_K) & Cov(x_2,x_K) & \dots & Var(x_K) \end{array}\right] $$

The variances appear along the diagonal and the covariances in the off-diagonal elements. The matrix is symmetric (entries are mirrored with respect to the main diagonal). If an off-diagonal entry is large in absolute value, the variables corresponding to that row and that column tend to move with one another.

The covariance matrix can be used to generate random observations that are dependent on each other. To illustrate, let us examine again the bridge project (example 4). Assume that we have obtained data about a bridge of equivalent size. We can use information about the existing infrastructure to build a random generator for both sales (variable Sales) and time saving (variable Time). Table 9.6 provides the dataset (10 years) and Fig. 9.11 illustrates the simulation method. The mean values are computed using the mean command, while the covariance matrix is directly obtained using the function cov. Those values are then used in the rmvnorm function of the package mvtnorm, which randomly generates numbers from the multivariate normal distribution. The first entry is the number of observations; the second entry is the vector of means; the last entry specifies the covariance matrix. The rmvnorm command generates a matrix with (1) as many columns as there are variables and (2) as many rows as there are observations. For illustrative purposes, the number of randomly generated observations is set to 10,000; we thus have two columns (one for Sales and one for Time) and ten thousand rows. If one were to use the resulting random generator in a Monte Carlo framework, we would instead generate a matrix with as many columns as there are variables and as many rows as there are time periods.
Table 9.6 Dataset for comparison: example 4

Sales     Time
12,652    2310
11,688    2441
13,044    2408
12,343    2466
11,753    2092
14,292    2595
13,802    2810
12,249    2659
11,598    2485
12,239    2147

Fig. 9.11 Simulating joint distributions with R-CRAN: example 4
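
As the figure itself is not reproduced here, the following sketch gives the gist of the method using the Table 9.6 data (object names are assumptions, chosen to match the E database mentioned below):

```r
# Joint simulation of Sales and Time from the existing-bridge data
library(mvtnorm)
E <- data.frame(
  Sales = c(12652, 11688, 13044, 12343, 11753, 14292, 13802, 12249, 11598, 12239),
  Time  = c(2310, 2441, 2408, 2466, 2092, 2595, 2810, 2659, 2485, 2147))
mu <- c(mean(E$Sales), mean(E$Time))          # vector of means
V  <- cov(E)                                  # covariance matrix
simu <- rmvnorm(10000, mean = mu, sigma = V)  # 10,000 correlated draws
colnames(simu) <- c("Sales", "Time")
```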

Figure 9.11 ends by assessing the quality of the model, comparing the real distributions against the simulated ones. The first plot command provides a scatter plot of the relationship between sales and time saving using the existing dataset. As expected, some correlation between the variables appears. As shown through a simple linear regression (abline command), this correlation is accurately reproduced by the simulated data (displayed in red in Fig. 9.12). To draw this regression line, we have made use of the ten thousand randomly generated observations; for simplicity of exposition, those observations are not displayed on the graph. Last, Fig. 9.12b, c compare the probability density of the real observations (in black) with that of the simulated ones (in red). As can be seen, the model provides an accurate estimation of the phenomena in question.
Fig. 9.12 Bivariate normal distribution: example 4. (a) Scatter plot. (b) Probability density of sales. (c) Probability density of time saving

Note that the simulations presented in Fig. 9.11 can easily be extended to more than two variables. The resulting random generator can also be used for simulating cash flows in a Monte Carlo framework. For this purpose, the randomly generated data can be multiplied by a trend variable, e.g., 1, (1 + x), (1 + x)^2, …, to account for a possible automatic increase in sales in the first years of the project (for instance by x% each year).

A Monte Carlo simulation consists in assigning a probability distribution to each of the sensitive cash flows and running the model a large number of times to generate the probability distribution of the ENPV. A loop is created in which (1) the sensitive cash flows are assigned a random number generator, (2) the ENPV is computed using the randomly generated cash flows, and (3) the ENPV is saved for subsequent analysis. The loop is repeated many times (10,000 times or more) to provide a range of possible ENPV, from which a probability distribution is estimated. One can then decide whether the probability that the ENPV is negative is an acceptable risk or not.
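
In schematic form, a minimal sketch with a single randomized cash flow; the investment and operating values are invented for illustration and do not correspond to Fig. 9.5:

```r
# Minimal Monte Carlo loop: only energy costs (years 1-3) are random
npv <- function(flows, rate = 0.05)            # simple discounting
  sum(flows / (1 + rate)^(seq_along(flows) - 1))

ENPV <- numeric(10000)
for (i in 1:10000) {
  energy  <- runif(3, min = -500, max = -400)  # random energy costs
  flows   <- c(-5000, 2500 + energy)           # investment, then net flows
  ENPV[i] <- npv(flows)
}
plot(density(ENPV))                  # estimated distribution of the ENPV
quantile(ENPV, c(0.025, 0.975))      # 95% interval
```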

Figure 9.13 applies a Monte Carlo simulation to estimate the risk of the bridge project (example 4). For the sake of simplicity, the dataset still corresponds to that of Fig. 9.5. First, the program defines a loop that goes from i = 1 to 10,000, this last number representing the number of times the ENPV will be randomly generated. Although we could extend the simulations to a larger number of variables, only “electric power”, “raw materials”, “sales” and “time saving” are assigned a random number. The uniform and triangular distributions (runif, rtriangle) as well as the multivariate normal distribution (rmvnorm) are used for this purpose.
Fig. 9.13 Monte Carlo simulation with R-CRAN: example 4

The cash flows for electric power, raw materials, sales and time saving are observed at years 1, 2 and 3. As such, three observations per variable must be generated. A joint distribution similar to that of Fig. 9.11 (database E) is used to simulate sales and time saving. To ensure that the difference in means between Table 9.6 and Fig. 9.5 is fully accounted for, two additional variables are created: Weights.Sales and Weights.Time. They are used to weight the random generator (simu) so that the average values of sales and time saving correspond to those of Fig. 9.5. Mathematically, we rely on the fact that the standard deviation of the product of a positive constant a with a random variable X is equal to the product of the constant with the standard deviation of X: \( \sqrt{Var(aX)}=a\sqrt{Var(X)} \). In other words, we consider the possibility that the new bridge (aX, i.e. database D) is not of the same size as the existing bridge (X, i.e. database E) and, consequently, that the standard deviations are proportionally different. From this formula, the weights (Weights.Sales and Weights.Time) can indifferently be included in the random generator (in which case each generated value accounts for the modification) or applied afterwards, as is done in Fig. 9.13.
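
Continuing the sketch given after Fig. 9.11 (objects E and simu), with target means invented for illustration:

```r
# Rescale the joint generator so simulated means match the new project
target.Sales  <- 15000                         # assumed mean from Fig. 9.5
target.Time   <- 3000                          # assumed mean from Fig. 9.5
Weights.Sales <- target.Sales / mean(E$Sales)
Weights.Time  <- target.Time / mean(E$Time)
simu.scaled <- cbind(Sales = simu[, "Sales"] * Weights.Sales,
                     Time  = simu[, "Time"] * Weights.Time)
colMeans(simu.scaled)  # approximately c(15000, 3000)
```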

Then the discounted cash flows are computed in a manner similar to that presented in Fig. 9.6. The analysis ends with drawing the probability density function of the randomly generated ENPV (see Fig. 9.14). What matters here is the 95% confidence interval obtained using the quantile function. It yields positive values, ranging from 8643 to 18,561 thousand dollars, which gives support to the bridge project. Roughly speaking, the probability that the ENPV falls outside this interval is lower than 5%. Notice that the mean is approximately 13,568 thousand dollars, i.e. quite similar to the ENPV obtained in Fig. 9.5. This result is not surprising, as the means of the distributions used for the simulations are set to the same values as those of the raw dataset (for instance −300 for electricity and −2250 for raw materials). More interesting is the variance, which indicates the risk of the project.
Fig. 9.14 Estimated distribution of the ENPV: example 4

9.7 Mean-Variance Analysis

Once the probability distributions have been calculated for several strategies, the mean and variance of each ENPV distribution can be compared. The approach, known as mean-variance analysis, plots the different strategies in the mean-variance plane and selects among them based on their position in this plane. Under this framework, the mean of the ENPV represents the expected return of the project from society's point of view. The variance, on the other hand, represents how spread out the economic performance is. It measures the variability or volatility around the mean and, as such, helps the decision-maker evaluate the risk behind each strategy. A variance of zero means that the outcome is certain: the expected ENPV is achieved with probability 1. On the contrary, a large variance implies uncertainty about the final outcome. If two strategies have the same expected ENPV but one has a lower variance, the one with the lower variance is generally preferred.

Imagine that a decision-maker must choose one alternative out of four possible strategies. The plane in question is presented in Fig. 9.15. Strategies S1 and S2 have similar variance; in other words, they carry the same risk. However, the distribution of S2 yields a higher ENPV on average and is thus preferable. Consider now S2 versus S3. They have the same mean, but the ENPV of S3 is more dispersed. Strategy S3 is what is called a “mean-preserving spread” of strategy S2 and, as such, is riskier. A risk-averse decision-maker would typically prefer strategy S2, since the likelihood that the ENPV falls below zero is lower. Comparing strategy S2 with strategy S4 is less clear-cut. While S4 has a much higher mean, its variance is also greater. In this situation, it is up to the decision-maker to decide what matters most: an increase in the expected ENPV or a decrease in the variance.
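
A minimal sketch of how such a plane can be drawn once the ENPV distributions have been simulated; the four means and variances below are invented for illustration:

```r
# Mean-variance plane: one point per strategy (illustrative values;
# in a real application, e.g. means <- sapply(list.of.ENPVs, mean))
means <- c(S1 = 1000, S2 = 1500, S3 = 1500, S4 = 2500)
vars  <- c(S1 = 4e4, S2 = 4e4, S3 = 9e4, S4 = 2.5e5)
plot(vars, means, xlab = "Variance of the ENPV", ylab = "Mean ENPV", pch = 19)
text(vars, means, labels = names(means), pos = 3)  # label each strategy
```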
Fig. 9.15 Mean-variance analysis: example 4. (a) Probability distributions. (b) Mean-variance plane

Bibliographical Guideline

The concept of consumer surplus is attributed to Dupuit (1844, 1849), an Italian-born French civil engineer working for the French State. His articles “On the measurement of the utility of public works” and “On tolls and transport charges” discuss the concept of marginal utility. They point out that the market price paid for consuming a good does not provide an accurate measure of the utility derived from its consumption. When assessing whether to construct a public infrastructure, it is instead the monetary value of the absolute utility, i.e. the willingness to pay, that matters.

The theory of externalities was initially developed by Pigou (1932), who demonstrated that, under some circumstances, the government could levy taxes on companies that pollute the environment or otherwise impose costs on third parties.

Shadow prices have been intensively used since the proposal of Little and Mirrlees (1968, 1974) to use world market prices (and standard conversion factors) in project evaluation. Their approach was subsequently promoted by Squire and van der Tak (1975) in a book commissioned by the World Bank.

The modern form of Monte Carlo simulation was developed by von Neumann and Ulam as part of the Manhattan Project on nuclear weapons. The method was named after the Monte Carlo Casino in Monaco.

Mean-variance portfolio theory is attributed to Markowitz (1952, 1959), whose framework assumes that investors are rational and expect a higher return for bearing an increased risk.

Many guides provided by government agencies are available online. We may cite in particular the “Guide to Cost-Benefit Analysis of Investment Projects” of the European Commission, which describes project appraisal in the framework of the 2014–2020 EU funds, as well as the agenda, the methods, and several case studies. As a complement, the European Investment Bank (EIB) proposes a document presenting the economic appraisal methods that the EIB advocates to assess the economic viability of projects. Additional guides are available, such as the “Canadian cost benefit analysis guide” provided by the Treasury Board of Canada Secretariat, the “Cost benefit analysis guide” prepared by the US Army, the guide to “Cost-benefit analysis for development” proposed by the Asian Development Bank, and the “Cost benefit analysis methodology procedures manual” by the Australian Office of Airspace Regulation. Those documents review recent developments in the field and provide several examples of application, with the purpose of making CBA as clear and user-friendly as possible.

Several textbooks can also be of interest to the reader. We may in particular cite Campbell and Brown (2003), whose book illustrates the practice of cost benefit analysis using a spreadsheet framework, including case studies, risk, and alternative scenario assessment. We may also cite Garrod and Willis (1999) and Bateman et al. (2002), who provide an overview of the theory as well as the different methods to estimate welfare changes.

Bibliography

  1. Australian Office of Airspace Regulation. (2007). Cost benefit analysis methodology procedures manual. Civil Aviation Safety Authority.
  2. Asian Development Bank. (2013). Cost-benefit analysis for development: A practical guide. Mandaluyong City, Philippines: Asian Development Bank.
  3. Bateman, I., Carson, R., Day, B., Haneman, M., Hanley, N., Hett, T., et al. (2002). Economic valuation with stated preference techniques: A manual. Cheltenham: Edward Elgar.
  4. Campbell, H., & Brown, R. (2003). Benefit-cost analysis: Financial and economic appraisal using spreadsheets. Cambridge: Cambridge University Press.
  5. Dupuit, J. (1844). De la mesure de l’utilité des travaux publics. Annales des Ponts et Chaussées, 2, 332–375 [English translation: Barback, R. H. (1952). On the measurement of the utility of public works. International Economic Papers, 2, 83–110].
  6. Dupuit, J. (1849). De l’influence des péages sur l’utilité des voies de communication. Annales des Ponts et Chaussées, 2, 170–248 [English translation of the last section: Henderson, E. (1962). On tolls and transport charges. International Economic Papers, 11, 7–31].
  7. European Commission. (2014). Guide to cost-benefit analysis of investment projects: Economic appraisal tool for cohesion policy 2014–2020.
  8. European Investment Bank. (2013). The economic appraisal of investment projects at the EIB.
  9. Garrod, G., & Willis, K. G. (1999). Economic valuation of the environment: Methods and case studies. Cheltenham: Edward Elgar.
  10. Little, I. M. D., & Mirrlees, J. A. (1968). Manual of industrial project analysis in developing countries, 2: Social cost–benefit analysis. Paris: OECD.
  11. Little, I. M. D., & Mirrlees, J. A. (1974). Project appraisal and planning for developing countries. New York: Basic Books.
  12. Markowitz, H. M. (1952). Portfolio selection. Journal of Finance, 7, 77–91.
  13. Markowitz, H. M. (1959). Portfolio selection: Efficient diversification of investments. New York: Wiley (reprinted by Yale University Press; 2nd ed., Basil Blackwell, 1991).
  14. Pigou, A. C. (1932). The economics of welfare (4th ed.). London: Macmillan.
  15. Squire, L., & van der Tak, H. G. (1975). Economic analysis of projects. Baltimore: Johns Hopkins University Press for the World Bank.
  16. Treasury Board of Canada Secretariat. (2007). Canadian cost-benefit analysis guide: Regulatory proposals.
  17. US Army. (2013). Cost benefit analysis guide. Prepared by the Office of the Deputy Assistant Secretary of the Army.
