Statistical Tools for Program Evaluation, pp 291–324

# Cost Benefit Analysis

## Abstract

Cost benefit analysis compares policy strategies not only on the basis of their financial flows but also on the basis of their overall impact, be it economic, social, or environmental (Sect. 9.1). To do so, the approach relies on the concepts of Pareto-optimality and Kaldor-Hicks compensation to assess whether public programs are globally beneficial or detrimental to society’s welfare (Sect. 9.2). All effects must be expressed in terms of their equivalent money values and discounted based on how society values the well-being of future generations (Sect. 9.3). Using conversion factors, the cost benefit analysis methodology also ensures that the price of inputs and outputs used in the analysis reflects their true economic values (Sect. 9.4). Last, the analysis ends with a deterministic or probabilistic sensitivity analysis which examines how the conclusions of the study change with variations in cash flows, assumptions, or the manner in which the evaluation is set up (Sects. 9.5 and 9.6). One may also employ a mean-variance analysis to compare the performance and risk of each competing strategy (Sect. 9.7). Applications and examples, provided on spreadsheets and R-CRAN, will enable the reader to take the methodology in hand.

## Keywords

Pareto-optimality · Discounting · Conversion factor · Sensitivity analysis · Mean-variance analysis

## 9.1 Rationale for Cost Benefit Analysis

The aim of cost benefit analysis is to determine whether a public policy choice is globally beneficial or detrimental to society's welfare. The method allows several projects to be compared not only on the basis of their financial flows (investments and operating costs) but also on the basis of their economic effects (variation in consumption of public services, change in the satisfaction derived from them). For instance, the construction of a highway not only modifies the financial flows of the authority concerned, but also affects traffic congestion, reduces travel time, improves safety, and influences positively or negatively real estate in the areas served by the project. All these consequences must be properly accounted for. To do so, cost benefit analysis expresses these impacts in terms of a common metric, their equivalent money value. Then the method confronts the financial flows with the economic impacts to measure the total contribution of the project to the well-being of society.

Cost benefit analysis is primarily used for assessing the value of large capital investment projects (transportation infrastructures, public facilities, recreational sites) but can be employed for appraising smaller projects, public regulations or a change in operating expenditures, especially where other methods fail to account for welfare improvements. It became popular in the 1960s with the growing concern over the quality of the environment, not only in the USA but also in many other countries. Nowadays, institutions such as the European Commission, the European Investment Bank or the World Bank require cost benefit analysis studies for extra funding and loan requests. In the USA, the approach is frequently used when a public policy choice imposes significant costs and economic impacts. In Europe, more and more governments use the approach to justify a particular policy to taxpayers. This was for instance the case with HS2, a controversial high-speed rail link between London, Birmingham and Manchester.

Table 9.1 Examples of economic impacts

| Type of project | Direct benefits | Positive externalities | Negative externalities | Price distortions |
|---|---|---|---|---|
| Rail investment | Time savings, additional capacity, increased reliability | Economic development, reduced negative externalities from other transport modes | Aesthetic and landscape impacts, impacts on human health | Opportunity costs of raw materials due to their diversion from the best alternative use; land made available free of charge by a public body while it may earn a rent; tariffs subsidized by the public sector; labor market distortions due to minimum wages or unemployment benefits |
| Waste treatment | Treatment of waste which minimizes impacts on human health | Environmental benefits | Aesthetic and landscape impacts, other impacts on human health, increase in local traffic for the transport of waste | |
| Production of electricity from renewable energy sources | Reduction in greenhouse gases | Amount of fossil fuels or of other non-renewable energy sources saved | Aesthetic and landscape impacts, negative effects on air, water and land | |
| Telecommunication infrastructures | Time saved for each communication, new additional services | Economic development induced by the project | Aesthetic and landscape impacts, impacts on human health | |
| Parks and forests | Recreational benefits, utilization and transformation of wood | Improvement of the countryside, environmental protection, increased income for the tourist sector | Increased traffic | |

The cost benefit analysis methodology also ensures that the prices of inputs and outputs used in the analysis reflect their true economic values, and not only the values observed in existing markets. Government intervention may divert the factors of production (land, labor, capital, entrepreneurial skill) from other productive uses. This is particularly true where markets are distorted by regulated prices or where taxes or subsidies are imposed on imports or exports (see Table 9.1). For instance, land made available for free by a public body generates an implicit cost to the taxpayers: the opportunity cost of not renting the land to another entity. A tax on imports which affects the price of inputs also has to be deducted to better reflect the true costs to taxpayers. To account for these distortions, cost benefit analysis makes use of what are called conversion factors. They represent the weights by which market prices have to be multiplied to obtain cash flows valued at their true price from society's point of view.

Public policies involve many objectives, concern different beneficiary groups and differ with respect to their time horizons. The cost benefit analysis methodology has the advantage of simplifying the multidimensionality of the problem by calculating a monetary value for every main benefit and cost. To make those items fully comparable, the approach converts all the economic and financial flows observed at different times to present-day values. This approach, known as discounting, is essential to cost benefit analysis. It enables the projects to be evaluated based on how society values the well-being of future generations. The idea is that a benefit, or a cost, is worth more when it has an immediate impact. As a consequence, activities imposing large costs or benefits on future generations may appear of lesser importance. The use of an appropriate discount factor, which weights the time periods, is here decisive.

Despite its popularity among government agencies and the fact that cost benefit analysis provides an all-encompassing tool for evaluating public policies, the approach has been intensively criticized. One of the major drawbacks is that it often struggles to put monetary values on items such as aesthetic landscapes, human health, economic growth, environmental quality, time or even life. The core of the problem is that public projects mostly affect the value of goods that are not bought and sold on regular markets (non-market goods). For this reason, variations in society's welfare cannot be valued through the usual observation of market prices. One has to rely instead on non-market valuation methods, such as revealed preference methods or stated preference techniques. Obtaining accurate estimates of costs and benefits is all the more challenging as many factors may affect the conclusions of the study: the time horizon, whether people have prior experience of using the service, whether they have perfect information about the consequences of the project, etc. To overcome these issues, a sound cost benefit analysis generally ends with a sensitivity analysis which examines how the conclusions of the study change with variations in cash flows, assumptions, or the manner in which the evaluation is set up. This sensitivity analysis can be implemented in a deterministic way. In this case, the uncertainty of the project is described through simple assumptions, for instance by assuming lower and upper bounds on economic flows. The sensitivity analysis can also be probabilistic. Each economic flow is assigned a random generator based on a well-defined probability distribution. A mean-variance analysis then helps the decision-maker to fully compare the risk and performance of each competing strategy.

The remainder of the chapter is organized as follows. Section 9.2 details the theoretical foundations of the method. Section 9.3 provides a tutorial describing the discounting approach. Section 9.4 explains how to convert market prices to economic prices. Sections 9.5 and 9.6 show how to implement a deterministic and probabilistic sensitivity analysis. Section 9.7 finally explains the methodology of mean-variance analysis.

## 9.2 Conceptual Foundations

Government intervention affects the satisfaction of the agents composing society in many different ways. While some agents will benefit from a change in the public good-tax structure, other agents may be worse off. An investment decision may also generate positive and negative externalities that affect the well-being of particular agents in specific areas. Due to the presence of government-imposed regulations, project cash flows may be distorted and should be valued at their opportunity costs. Cost benefit analysis captures all these economic consequences by expressing them in terms of a common currency, money. How is that possible? How can a change in society's welfare be expressed as a dollar value? The approach is actually based on the observation that individuals are often willing to pay more for a good than the price they are charged for it. For instance, if one is willing to spend $10 at most for a service but pays only $2, one achieves a surplus of $8. This is what cost benefit analysis aims at measuring. If there is a public policy choice for which the net benefits are greater than those of the competing strategies, then society should go ahead with it.

More specifically, cost benefit analysis relies on estimating what is called the economic surplus, a measure of welfare that we find in microeconomics, a branch of economics that studies the behavior of agents at an individual level. Under this framework, the surplus is computed as the sum of two elements: (1) the consumer surplus, measured as the monetary gain obtained by agents being able to purchase a good for a price that is less than the highest price they are willing to pay, and (2) the producer surplus, defined as the difference between the price producers would be willing to supply a good for and the price actually received. While a reasonable measure of benefit to a producer is the net profit, the task is much more difficult with respect to the consumer surplus. One has to identify here the demand for the good. All the difficulty and ingenuity of cost benefit analysis lies there. This section provides a simple microeconomic framework serving to better describe the underlying assumptions.

Let *u*(*x*, *z*) denote the utility (or satisfaction) an agent derives from consuming a private good in quantity *x* and a public good in quantity *z*. The public good is provided by the government and financed through taxes. The budget constraint of the agent is defined as *p* _{ x } *x* + *br* = *y*, where *p* _{ x } denotes the price of the private good, *b* is the tax base of the agent, *r* stands for the tax rate chosen by the government and *y* represents the agent's income. Although we could relax this assumption, the public budget is assumed to be balanced. We have *rB* = *cz*, where *B* is the total tax base upon which the tax rate is applied, and *c* denotes the marginal cost of production. Using *r* = *cz*/*B* in the budget constraint of the agent, we obtain *p* _{ x } *x* + *p* _{ z } *z* = *y*, where *p* _{ z } = *cb*/*B* represents the (tax) price of the public good. The demand of the agent for the public good is then determined by the maximization of *u* given this budget constraint:

max *u*(*x*, *z*) subject to *p* _{ x } *x* + *p* _{ z } *z* = *y*

For simplicity, set *p* _{ x } = 1 and assume that the agent has quasi-linear preferences: *u*(*x*, *z*) = *x* + *v*(*z*). Function *v* is an increasing and concave function of *z*, while *x* enters the utility function as a simple additive term. Solving the optimization problem by substitution amounts to maximizing *y* − *p* _{ z } *z* + *v*(*z*) with respect to *z*, which yields the first-order condition *v* ^{′}(*z*) = *p* _{ z }. The derivative of *v* thus represents the inverse demand function. The lower the price (for instance due to a decrease in *b* or *c*), the higher the demand for the good, as illustrated in Fig. 9.1. This generalizes the usual "law of demand" to the case of a publicly provided good.

The consumer surplus associated with *z* is directly linked to the shape of the inverse demand curve. If the public good is not produced at all, the level of satisfaction obtained by the agent is *u*(*x*, 0) = *y*. If *z* units are produced, the maximum amount *A* the agent would be willing to pay is determined by the condition *u*(*y* − *A*, *z*) > *u*(*x*, 0). Equivalently, we have *A* < *v*(*z*). In other words, *v*(*z*) represents the willingness to pay (WTP) of the agent. Similarly, *v* ^{′}(*z*) is what is termed the marginal WTP. The consumer surplus is thus defined as:

*s*(*z*) = *v*(*z*) − *p* _{ z } *z*

where *p* _{ z } *z* represents the cost of the policy to the agent. This expression can also be written as:

*s*(*z*) = ∫ from 0 to *z* of [*v* ^{′}(*t*) − *p* _{ z }] d*t*

Graphically, this means that the consumer surplus is the area comprised between the inverse demand curve *v* ^{′}(*z*) and the horizontal line at the price *p* _{ z }. As shown in Fig. 9.1, a decrease in the price generates a direct effect on surplus (effect 1), but also a change in the demand and, if this demand is satisfied, an additional increase in surplus (effect 2).
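To make the framework concrete, here is a small numerical sketch in Python (the book's own examples use spreadsheets and R-CRAN). The functional form *v*(*z*) = 40 ln(1 + *z*) is an assumption chosen purely for illustration; the demand and surplus formulas follow the text above.

```python
import math

# Hypothetical quasi-linear preferences u(x, z) = x + v(z), with
# v(z) = 40 * ln(1 + z): increasing and concave (illustrative assumption)
def v(z):
    return 40 * math.log(1 + z)

def marginal_wtp(z):          # v'(z), the inverse demand function
    return 40 / (1 + z)

def demand(p_z):              # demand solves v'(z) = p_z for z
    return 40 / p_z - 1

def consumer_surplus(p_z):    # s(z) = v(z) - p_z * z at the chosen z
    z = demand(p_z)
    return v(z) - p_z * z

# A lower tax price raises both demand and surplus ("law of demand")
for p in (8, 4, 2):
    print(p, round(demand(p), 2), round(consumer_surplus(p), 2))
```

Running the loop shows the demand rising from 4 to 19 units and the surplus increasing as the tax price falls, which is exactly the two-part effect (price effect plus quantity effect) depicted in Fig. 9.1.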

Consider now two agents who differ in their tax bases (*b* _{1} and *b* _{2}) and consequently in their tax prices, *p* _{1} = *cb* _{1}/(*b* _{1} + *b* _{2}) and *p* _{2} = *cb* _{2}/(*b* _{1} + *b* _{2}), with *p* _{1} + *p* _{2} = *c*. They have different preferences about the public good: *u* _{1} = *x* + *v* _{1}(*z*) and *u* _{2} = *x* + *v* _{2}(*z*). Cost benefit analysis aims at selecting one single level of public good by maximizing the total surplus *s* _{1}(*z*) + *s* _{2}(*z*). This amounts to differentiating *v* _{1}(*z*) + *v* _{2}(*z*) − *cz* with respect to *z*. We obtain:

*v* _{1} ^{′}(*z*) + *v* _{2} ^{′}(*z*) = *c*

(note that the sum of the tax prices covers the marginal cost, since the condition *p* _{1} + *p* _{2} = *c* is fulfilled). By doing so, we are guaranteed to reach a situation of Pareto-optimality which, by definition, is a state of allocation of resources in which it is impossible to make any one individual better off without making at least one individual worse off.

Consider now the example of Table 9.2, where agents 1 and 2 prefer strategy *S* _{1}, while agent 3 prefers strategy *S* _{2}. In this example, the concept of Pareto-improvement is not useful as it is impossible to implement a policy change without making at least one individual worse off. A change from *S* _{1} to *S* _{2} is not Pareto-improving, but neither is a move from *S* _{2} to *S* _{1}. Then, how can we reach a situation of Pareto-optimality? The answer is through reallocation. One has to rely on what is termed a Kaldor-Hicks improvement: a move is more efficient as long as everyone can be compensated to offset any potential loss. Using this criterion, one would typically select strategy *S* _{2}. To clarify the whys and wherefores, one needs at this stage to understand that what matters in cost benefit analysis is welfare. In Table 9.2, agent 3 derives a high level of satisfaction from *S* _{2} and is willing to pay a lot for it, even if this means compensating the welfare loss of the other agents. For instance, if agent 3 gives $5 to agent 1 and $1 to agent 2, a move from *S* _{1} to *S* _{2} would benefit the whole society.

Table 9.2 The concept of Pareto-optimality: example 2

| Project | Agent 1's surplus | Agent 2's surplus | Agent 3's surplus | Total surplus |
|---|---|---|---|---|
| Strategy *S* _{1} | $5 | $25 | $75 | $105 |
| Strategy *S* _{2} | $0 | $24 | $84 | $108 |

Cost benefit analysis provides public managers with a decision criterion based on the Kaldor-Hicks criterion. A project is an improvement over the status quo if the sum of welfare variations is positive. If the gains exceed the losses, then the winners could in theory compensate the losers so that policy changes are Pareto-improving. Yet, the compensation scheme must be chosen carefully. Cost benefit analysis is in this respect not equipped to conceive and implement redistributive schemes. Instead, the approach aims at selecting a particular policy among competing strategies. It nevertheless remains that redistribution, if carried out, can strongly affect work incentives, induce mobility, generate tax evasion or encourage tax fraud.
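The compensation argument can be checked mechanically. The following Python sketch (illustrative, not from the book) uses the surpluses of Table 9.2 and the transfers proposed in the text ($5 to agent 1, $1 to agent 2):

```python
# Surpluses from Table 9.2, in dollars
s1 = {"agent1": 5, "agent2": 25, "agent3": 75}   # strategy S1
s2 = {"agent1": 0, "agent2": 24, "agent3": 84}   # strategy S2

# Kaldor-Hicks test: the move S1 -> S2 is an improvement if the total
# gains exceed the total losses, i.e. if total surplus increases
gains = {a: s2[a] - s1[a] for a in s1}
assert sum(gains.values()) > 0                   # 108 > 105

# Transfers from the text: agent 3 gives $5 to agent 1 and $1 to agent 2
transfers = {"agent1": 5, "agent2": 1, "agent3": -6}
after = {a: s2[a] + transfers[a] for a in s2}

# After compensation, nobody is worse off than under S1
print(all(after[a] >= s1[a] for a in s1))        # True
```

Agent 3 pays out $6 but gains $9 from the move, so the compensated switch to *S* _{2} is a Pareto-improvement even though the uncompensated one is not.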

## 9.3 Discount of Benefits and Costs

Project selection starts with an option analysis discussed at the level of a planning document such as a master plan. The set of strategies is generally reduced so that at least three alternatives are examined: (1) a baseline strategy (or status quo), which is a forecast of the future without investment; (2) a minimum investment strategy; and (3) a maximum investment strategy. Once the strategies are identified, a financial analysis is implemented to determine whether they are sustainable and profitable (a chapter has been dedicated to this step). If the financial analysis is not conclusive, then cost benefit analysis must outline some rationale for public support by demonstrating that the policy generates sufficient economic benefits. This step, termed economic appraisal, aims to assess the viability of a project from society's perspective. To do so, all economic impacts are expressed in terms of equivalent money value. Discounting then renders these items fully comparable by multiplying all future cash flows by a discount factor.

Let *NB* _{ t } = *B* _{ t } − *C* _{ t } denote for each year *t* the net economic benefit, defined as the difference between total benefits *B* _{ t } and total costs *C* _{ t }. Discounting is accomplished by computing the economic net present value (*ENPV* hereafter):

*ENPV* = *NB* _{0} + Σ from *t* = 1 to *T* of *δ* _{ t } × *NB* _{ t }

where *T* represents the time horizon of the project and *δ* _{ t } (*t* = 1 … *T*) are the discount factors by which the net benefits at year *t* are multiplied in order to obtain the present value. The discount factors are lower than one and decrease with the time period. They are defined as:

*δ* _{ t } = 1/(1 + *r*) ^{ t }

where *r* denotes the economic discount rate. This rate is different from the discount rate used in the financial appraisal. It does not represent some opportunity cost of capital (the return obtained from the best alternative strategy). It reflects instead society's view on how future benefits and costs should be valued against present ones. This rate is generally computed and recommended by government agencies such as the Treasury, or upper authorities such as the European Union. Its value may range from 2% to 15% from one country to another. It is usually expressed in annual terms. When the analysis is carried out at current prices (resp. constant prices), the discount rate is expressed in nominal (resp. real) terms accordingly.

In a spreadsheet, the *ENPV* can be computed with the *NPV* function. This command calculates the net present value via two entries: (1) a discount rate and (2) a series of future payments. One needs to enter the following formula in a cell:

= *value*0 + NPV(*rate*, *value*1, *value*2, …)

where *rate* is the discount rate, *value*0 represents the first cash flow. This cash flow is excluded from the *NPV* formula because it occurs in period 0 and should not be discounted. Last, “value1, value2, … ” is the range of cells containing the subsequent cash flows.

A second indicator examines the effect of the discount rate on the *ENPV*. At some point, known as the economic internal rate of return (*EIRR*), the net present value falls to zero; beyond that rate, it reaches negative values. We have:

*ENPV* = 0 for *r* = *EIRR*

In a spreadsheet, this rate can be obtained with the *IRR* function:

= IRR(*value*0, *value*1, *value*2)

The formula yields the internal rate of return for a series of cash flows (here *value*0 , *value*1 , *value*2), starting from the initial period. Values must contain at least one positive value and one negative value.

The *EIRR* is an indicator of the relative efficiency of an investment. It should however be used with caution, as multiple solutions may be found, especially when large cash outflows appear during or at the end of the project. An example is provided in Fig. 9.4. While strategy *S* _{1} is characterized by a negative net benefit observed only in period 1, strategy *S* _{2} induces negative values both at the beginning and at the end of the project. As a consequence, strategy *S* _{2} yields two internal rates of return, around 1% and 7% respectively, while strategy *S* _{1} generates only one *EIRR*, around 4%. Given these difficulties, the net present value is often considered a more suitable criterion for comparing alternative strategies. The strategy with the highest *ENPV* is designated as the most attractive and chosen first. For instance, we can see from Fig. 9.4 that strategy *S* _{1} prevails over strategy *S* _{2} only for small discount rates, lower than approximately 4%.
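The multiple-root issue can be reproduced with a toy cash-flow profile (hypothetical figures, not those of Fig. 9.4): when outflows occur at both the start and the end of a project, the NPV curve may cross zero twice. The sketch below scans discount rates, brackets each sign change, and refines it by bisection.

```python
def npv(rate, flows):
    return sum(nb / (1 + rate) ** t for t, nb in enumerate(flows))

# Hypothetical flows with outflows at BOTH ends of the project
flows = [-100, 230, -132]

# Scan discount rates (offset grid to avoid landing exactly on a root)
rates = [i / 1000 + 0.0005 for i in range(500)]
roots = []
for lo, hi in zip(rates, rates[1:]):
    if npv(lo, flows) * npv(hi, flows) < 0:        # bracketed sign change
        for _ in range(60):                        # bisection refinement
            mid = (lo + hi) / 2
            if npv(lo, flows) * npv(mid, flows) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(round((lo + hi) / 2, 4))

print(roots)  # two internal rates of return: [0.1, 0.2]
```

The project has *two* internal rates of return (10% and 20%): the NPV is negative below 10%, positive in between, and negative again above 20%, so the *EIRR* alone cannot rank this project.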

A third indicator, the benefit cost ratio, is computed from the present value of the benefits (*PVEB*) and the present value of the costs (*PVEC*): *BCR* = *PVEB*/*PVEC*. Ideally, it should be higher than 1 and as large as possible. Unlike the *EIRR*, the *BCR* has the advantage of being always computable.

When comparing two alternative investments of different size, the *BCR* and the *ENPV* may reach different conclusions. The reason is that the *BCR* is a ratio and, therefore, like the *EIRR*, it is independent of the amount of the investment. Consider for instance two alternative strategies: a small investment project versus a large one. The smaller project induces net benefits and costs which amount to *PVEB* = 10 and *PVEC* = 5 million dollars respectively. This yields a *BCR* of 2 and an *ENPV* of 5 million dollars. The larger project generates benefits and costs which amount to *PVEB* = 50 and *PVEC* = 40 million dollars, respectively. This generates a *BCR* of 1.25 and an *ENPV* of 10 million dollars. As can be seen, while the *BCR* is higher for the smaller project, the *ENPV* is higher for the larger project.

The approach is thus different from that of a financial appraisal, where toll revenues are included as a positive cash flow, to assess the financial sustainability and profitability of the investment strategy.

The *ENPV* decreases to 10,047 when the discount rate increases to 8%, and to 4181 when it equals 16% (row R24). Those values are positive and provide evidence that the project is desirable.

Let us now compute the *ENPV* and other performance indicators in R-CRAN.

Figure 9.6 reproduces the results obtained in Fig. 9.5. The first step consists in importing the Excel worksheet into R-CRAN. For this purpose, the table has been modified so that it contains only raw data, cleaned of totals, and rearranged so that the costs and benefits are presented successively. The command *read.table* reads the file (saved as a .*csv* file on disc *C*:) and creates a data frame from it, renamed *D*. This yields a table equivalent to Fig. 9.5. The object *D* has the properties of a matrix. An element observed in row *i* and column *j* is referred to as *D*[*i*, *j*].

Elements of *D* are summable. The cost vector *C* is for instance obtained by summing rows 2 to 13, while the benefit vector *B* is the result of the sum of rows 14 and 15. The command *colSums* is used to ensure that the rows are summed over columns 2 to 5 only. Last, package *FinCal* and its function *npv* are used to compute the different performance indicators. The entry *Disc.Rate* specifies the vector of discount rates to be applied in the *npv* function. As the costs are already expressed as negative values, the analysis does not need to subtract them. For the *BCR*, one needs to use the function *abs*() to express the costs in absolute value.

A last indicator, the net benefit investment ratio (*NBIR*), can also be examined. It is defined as the ratio of the present value of the benefits (*PVEB*), net of operating costs (*PVEC* − *PVK*), to the discounted investment costs (*PVK*):

*NBIR* = [*PVEB* − (*PVEC* − *PVK*)] / *PVK*

For instance, in Fig. 9.5, the investment outlay is *PVK* = 30,000. For a discount rate equal to 4%, the present value of operating costs amounts to *PVEC* − *PVK* = 49,151 − 30,000 = 19,151. We also have *PVEB* = 62,726. This yields a NBIR equal to (62,726 − 19,151)/30,000 = 1.45. This ratio assesses the economic profitability of a project per dollar of investment. It is very useful when an authority is willing to finance several projects but has to face a budget constraint. The method allows the best combination of projects to be selected.
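The arithmetic of this example can be sketched as follows (an illustrative Python fragment rather than the book's spreadsheet; the figures, in thousand dollars, are those quoted above):

```python
def nbir(pveb, pvec, pvk):
    """Net benefit investment ratio: discounted benefits net of operating
    costs (PVEC - PVK), per dollar of discounted investment PVK."""
    return (pveb - (pvec - pvk)) / pvk

# Values from the example, at a 4% discount rate (thousand dollars)
ratio = nbir(pveb=62726, pvec=49151, pvk=30000)
print(round(ratio, 2))  # 1.45
```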

To illustrate the use of the *NBIR*, let us consider an authority that has a budget of $1,000,000. Information about the competing strategies is provided in Table 9.3 (amounts in thousands of dollars). If one were to compare the alternatives using net present values, strategies *S* _{1} and *S* _{2} would appear as the best alternatives. The budget constraint would be fulfilled and, overall, one would obtain a total *ENPV* equal to 100 + 80 = 180 thousand dollars. The *ENPV*, however, does not assess accurately the return on investment. With this criterion, larger projects are more likely to be selected. In contrast, the net benefit investment ratio assesses the profitability of the investment independently of its size (like the *BCR*). Using this criterion, we can see from the last column of Table 9.3 that the approach would rank strategy *S* _{1} first, then *S* _{3}, *S* _{4} and *S* _{6}. Assuming that only part of *S* _{6} is financed (50% of it), this combination of strategies would yield an *ENPV* equal to 100 + 70 + 60 + 20/2 = 240 thousand dollars. Thus, under capital rationing, the *NBIR* appears as a very effective selection criterion.

Table 9.3 Ranking of projects under capital rationing: example 5

| Strategy | Investment costs | Present value of benefits net of operating costs | Economic net present value | Profitability ratio |
|---|---|---|---|---|
| *S* _{1} | 400 | 500 | 100 | 1.30 |
| *S* _{2} | 600 | 680 | 80 | 1.10 |
| *S* _{3} | 300 | 370 | 70 | 1.25 |
| *S* _{4} | 200 | 260 | 60 | 1.23 |
| *S* _{5} | 500 | 530 | 30 | 1.06 |
| *S* _{6} | 200 | 220 | 20 | 1.13 |
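The selection procedure of example 5 can be reproduced with a small greedy routine (an illustrative Python sketch, using the figures of Table 9.3): projects are ranked by the profitability ratio, and the last project admitted may be financed fractionally so that the budget is exhausted exactly.

```python
# (investment cost, ENPV, profitability ratio), amounts in thousand dollars
projects = {
    "S1": (400, 100, 1.30), "S2": (600, 80, 1.10), "S3": (300, 70, 1.25),
    "S4": (200, 60, 1.23), "S5": (500, 30, 1.06), "S6": (200, 20, 1.13),
}

def select(projects, budget):
    """Greedy selection by decreasing profitability ratio; the last
    project may be financed fractionally to exhaust the budget."""
    total_enpv, chosen = 0.0, []
    for name, (cost, enpv, ratio) in sorted(
            projects.items(), key=lambda kv: -kv[1][2]):
        if budget <= 0:
            break
        share = min(1.0, budget / cost)   # fraction of the project financed
        chosen.append((name, share))
        total_enpv += share * enpv
        budget -= share * cost
    return chosen, total_enpv

chosen, total = select(projects, budget=1000)
print(chosen)   # S1, S3 and S4 fully financed, then 50% of S6
print(total)    # total ENPV of 240.0 thousand dollars
```

This reproduces the ranking given in the text: *S* _{1}, *S* _{3}, *S* _{4}, then half of *S* _{6}, for a total *ENPV* of 240 against 180 under the naive *ENPV*-based selection.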

## 9.4 Accounting for Market Distortions

Conversion factors are related to the concept of shadow prices (also termed accounting or economic prices). They reflect the cost of an activity when prices are unobservable or when they do not truly reflect the real cost to society. Shadow prices do not relate to real-life situations. They correspond instead to the prices that would prevail if the market were perfectly competitive. For instance, the prices used in the financial appraisal (which are usually referred to as market prices) are likely to include taxes or government subsidies. Prices have to be adjusted accordingly, to better reflect trading values on a hypothetical free market. Yet, the term market prices can be misleading. In cost benefit analysis, it stands for the actual transaction price subject to market distortions, while in common language a "market economy" denotes an economy that is little planned or controlled by government. To avoid any confusion in the remainder of the chapter, we will use the term "financial prices" as a synonym for "market prices".

What is the true economic cost of inputs and outputs for cost benefit analysis use? To answer this question, it is convenient to appeal to the concept of opportunity cost. We should ask ourselves “what would be the value of inputs and outputs if they were employed somewhere else?” For instance, the Little-Mirrlees-Squire-van der Tak method advocates the use of international prices (converted to domestic currencies), for evaluating and comparing public projects in different countries. The rationale for the method is that world prices more accurately reflect the opportunities that are available to the countries. The method has for instance been frequently used for calculating shadow prices for aid projects in developing countries, where markets are often considered more distorted.

Some distortions can be corrected *ex ante*. For instance, the cash flows should be net of VAT (value added tax). The value of land and buildings provided by other public bodies can be included directly at their true costs. In some other cases, however, the use of a conversion factor may be necessary. Formally, a shadow price is defined as:

shadow price = *CF* × financial price

The conversion factor *CF* approximates the degree of perfection of the market. In an undistorted economy, the conversion factor is equal to one and shadow prices are identical to financial prices. Should *CF* be higher than one, then the financial prices would yield an underestimation of the true value of inputs and outputs. If lower than one, they yield instead an overestimation. Several examples are provided below.

### Regulated Price

Consider for instance row R1 of Fig. 9.5. Assume that the land has been sold at 80% of the usual price. The conversion factor is computed as *CF* = 1/0.80 = 1.25. In this case, the shadow value of lands amounts to 7000 × 1.25 = 8750 thousand dollars. Similarly, assume that electricity (row R9 of Fig. 9.5) is produced at a tariff that covers only 60% of marginal cost. The true cost to society is defined as 300 × (1/0.60) = 500 thousand dollars.

### Undistorted Labor Market

When the labor market is perfectly competitive, the economy reaches an equilibrium where only "voluntary unemployment" prevails: people have chosen not to work solely because they do not consider the equilibrium wages sufficiently high. In this case, the project merely diverts the labor force from its current use, at its market value (assuming that the project is not large enough to influence wages). The conversion factor is defined as *CF* = 1.

### Distorted Labor Market

In a distorted labor market, one may instead use the unemployment rate (*u*) as a basis for the determination of the shadow wage:

shadow wage = financial wage × (1 − *u*) × (1 − *s*)

where *s* is the rate of social security payments and relevant taxes, which should be excluded from the financial prices as they also represent a revenue for the public sector. Assume for instance that the unemployment rate is *u* = 9% and *s* = 15%. Row R8 of Fig. 9.5 would be replaced with 750 × (1 − 9%) × (1 − 15%) ≈ 580 thousand dollars.
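The shadow-wage correction can be sketched as follows (an illustrative Python fragment; note that with *u* = 9% and *s* = 15% the product (1 − 0.09)(1 − 0.15) ≈ 0.773, so a financial wage bill of 750 thousand dollars converts to about 580):

```python
def shadow_wage(financial_wage, u, s):
    """Shadow wage under labor-market distortions: the financial wage is
    corrected for unemployment (u) and for the social security payments
    and taxes (s) that flow back to the public sector."""
    return financial_wage * (1 - u) * (1 - s)

# Rates from the example: u = 9%, s = 15%, wage bill in thousand dollars
print(round(shadow_wage(750, u=0.09, s=0.15), 1))  # about 580.1
```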

### Import Tariffs

Suppose now that an input is imported and that *t* _{ m } denotes the proportional tax rate on imports. The conversion factor is defined as:

*CF* = 1/(1 + *t* _{ m })

Assume in row R7 of Fig. 9.5 that the raw materials have been imported. If the tax rate is equal to *t* _{ m } = 20%, then the value to be used in the economic appraisal is 2250 × (1/1.2) = 1875 thousand dollars. The tariff has been removed from the price.

### Export Subsidies

Symmetrically, when an output is exported and subsidized, the conversion factor is defined as:

*CF* = 1/(1 + *s* _{ x })

where *s* _{ x } denotes the rate of subsidy.

### Major Non-traded Goods

The fact that an input is not imported or an output not exported does not necessarily mean that they are not subject to trade distortions. The existence of a tariff that aims to protect the domestic market may explain for instance the current use of local inputs. In such situations, the tax on imports is also reflected in the domestic prices. Similarly, a subsidy that encourages the producers to export may generate an increase in the price level. When data are available, these distortions can be valued using the same approach as previously. Assume for instance that the government has imposed an import tax of 25% on equipment and infrastructure (rows R2 and R3 of Fig. 9.5). The conversion factor to be used is 1/(1 + 25%) = 0.8, even if those goods are bought in the domestic market.

### Minor Non-traded Goods

For minor non-traded items, a standard conversion factor (*SCF*) can be applied:

*SCF* = (*M* + *X*) / [(*M* + *T* _{ m } − *S* _{ m }) + (*X* + *S* _{ x } − *T* _{ x })]

where *M* denotes the total imports valued at the CIF price, *X* the total exports valued at the FOB price, *T* _{ m } and *S* _{ m } the total import taxes and subsidies, and *T* _{ x } and *S* _{ x } the total export taxes and subsidies. In simple words, *SCF* is the ratio of the value at world prices of all imports and exports to their value at domestic prices. It generalizes the previous formulas and provides a general proxy of how international trade is distorted by trade barriers. It assesses how prices would change on average if such barriers were removed. For instance, should *T* _{ m } + *S* _{ x } be larger than *S* _{ m } + *T* _{ x }, then, on average, the country supports its domestic producers. In that context, the standard conversion factor is lower than one, meaning equivalently that the trade balance is artificially increased, or that domestic prices are overvalued. The inverse of the standard conversion factor (1/*SCF*) is also termed the "shadow exchange rate factor". In practice, the approach is used when one wants to compare the economic performance of competing projects in different developing countries.
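Assuming the usual form of the standard conversion factor (trade valued at world prices divided by its value at domestic prices), here is a sketch with hypothetical trade figures for a protectionist country, i.e. one where *T* _{ m } + *S* _{ x } exceeds *S* _{ m } + *T* _{ x }:

```python
def scf(M, X, Tm, Sm, Tx, Sx):
    """Standard conversion factor: imports M (CIF) and exports X (FOB)
    at world prices, divided by their value at domestic prices, which
    embed import taxes Tm net of import subsidies Sm, and export
    subsidies Sx net of export taxes Tx."""
    return (M + X) / ((M + Tm - Sm) + (X + Sx - Tx))

# Hypothetical figures: Tm + Sx = 210 exceeds Sm + Tx = 30
factor = scf(M=1000, X=800, Tm=150, Sm=20, Tx=10, Sx=60)
print(round(factor, 3))  # below 1: domestic prices are overvalued
```

Here the factor is about 0.909, so domestic prices would be multiplied by roughly 0.91 to approximate their undistorted value.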

Based on the *ENPV* and *BCR* criteria, the project remains economically viable, even at a discount rate of 16%. For this rate, the performance indicators have been computed as follows, and rounded for presentation purposes:

In theory, conversion factors should provide the evaluator with a better decision tool. However, in practice, they are unique to the context and the methods used to approximate those weights are often based on rough calculations. While time-consuming, those adjustments can also be of minor importance for the investment decision. Therefore, conversion factors should be used with caution or in exceptional cases only. It is also possible to provide the results of the analysis both with and without shadow prices.

At this stage of the analysis, the net benefits of the bridge project should be analyzed against those of larger and smaller investments. Moreover, it may be useful to check whether some excluded items are likely to compromise or reinforce the decision made, especially if the *ENPV* reaches surprisingly high values. For instance a *BCR* approaching 2 would mean that the economic benefits are twice as high as the economic costs. Then why was the project not implemented before? If data is available, it can also be useful to compare the economic return of the investment with that of an already existing project. Last, if some important costs and benefits have been excluded from the analysis because it was not possible to monetize them, a description of these items should at least be provided in physical terms. In this respect, a multi-criteria analysis can be used to combine both physical terms and monetary terms (e.g., the *ENPV*) into a single indicator.

## 9.5 Deterministic Sensitivity Analysis

Cost benefit analysis generally goes further by questioning the valuation of the costs and benefits themselves. When some important items are difficult to estimate yet must be quantified, or when some degree of uncertainty is inherent to the study, a sensitivity analysis can be used to examine the degree of risk in the project. This can take the form of a partial sensitivity analysis, a scenario analysis, a Monte Carlo analysis or a mean-variance analysis.

Uncertainty not only refers to variations in the economic environment (uncertain economic growth, natural hazards, modification of relative prices), but also to sampling errors resulting from data collection. Consider for instance the bridge example (Fig. 9.7). The study makes assumptions about the cost of inputs, about the amount of sales, and uses estimates to calculate the economic effects (time saving, externalities). Those cash flows can only be assessed or forecasted imprecisely. This affects in return the precision of the *ENPV*. The purpose of a sensitivity analysis is to identify these sources of variability and assess how sensitive the conclusions are to changes in the variables in question.

Consider the *ENPV* of the bridge project. A 15% increase in costs yields for instance a net benefit of $13,699 for a 4% discount rate, $10,412 for an 8% discount rate and $4947 for a 16% discount rate. The results and conclusions (suitability of the project) are not very sensitive to those variations. Similar results are obtained when the costs of raw materials vary from −15% to +15% (see Fig. 9.8).

The *ENPV* never reaches negative values, which gives support to the strategy at stake. The results can then be compared to those obtained for competing strategies.
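Such a partial sensitivity analysis can be sketched as follows; the cash flows below are hypothetical placeholders, not the bridge project's actual figures:

```python
# Partial sensitivity sketch with hypothetical net benefit streams
# (thousand dollars): shock one cost item and recompute the ENPV.
def enpv(cash_flows, rate):
    """Present value of yearly net benefits, year 0 onwards."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

base = [-10000.0, 3000.0, 4000.0, 5000.0, 6000.0]
for shock in (-0.15, 0.0, 0.15):              # vary the investment cost
    flows = [base[0] * (1 + shock)] + base[1:]
    row = {r: round(enpv(flows, r)) for r in (0.04, 0.08, 0.16)}
    print(f"{shock:+.0%}", row)
```

The conclusion is robust if the *ENPV* keeps its sign across the shocks and discount rates.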

Scenario analysis: example 4

| | Best-case | Most-likely | Worst-case |
|---|---|---|---|
| Energy costs | −15% | 0% | +15% |
| Raw materials | −15% | 0% | +15% |
| ENPV(4%) | 14,914 | 13,916 | 12,918 |
| ENPV(8%) | 11,540 | 10,614 | 9687 |
| ENPV(16%) | 5931 | 5123 | 4316 |

Strategy *S*_{3} is dominated by the other strategies as its net present value is always equal to or lower than the net present values of the other projects. Strategy *S*_{3} is thus eliminated. If no other dominant strategy is apparent, different decision rules can be employed:

- 1. The maximin rule consists in selecting the alternative that yields the largest minimum payoff. This rule would typically be used by a risk-averse decision-maker. In Table 9.5 for instance, the minimum payoff is 1000 for strategy *S*_{1}, while it is −500 for *S*_{2}. Strategy *S*_{1} is thus selected. This ensures that the payoff will be at least 1000 whatever happens.
- 2. The minimax regret rule consists in minimizing the maximum regret. Regret is defined as the opportunity cost incurred from having made the wrong decision. In Table 9.5, if one chooses strategy *S*_{1}, there is no regret when the worst-case or most-likely scenario occurs. On the other hand, if the best-case scenario occurs, the regret is 2000 − 3000 = −1000. If one chooses strategy *S*_{2}, the regret amounts to −500 − 1000 = −1500 for the worst-case scenario, 1000 − 1500 = −500 for the most-likely scenario, and zero for the best-case scenario. Overall, the maximum regret is −1000 for strategy *S*_{1} and −1500 for strategy *S*_{2}. Strategy *S*_{2} is thus eliminated. The approach is relatively similar to the maximin approach as it favors less risky alternatives. It however accounts for the opportunity cost of not choosing the other alternatives, and is used when one wishes not to regret the decision afterwards.
- 3. The maximax rule involves selecting the project that yields the largest maximum payoff. For project *S*_{1}, the maximum payoff is 2000, while for *S*_{2} it is 3000. A risk-loving decision-maker would favor project *S*_{2}.
- 4. The Laplace rule implies maximizing the expected payoff assuming equiprobable scenarios. In our example, we would assign a probability of 1/3 to each scenario. The expected payoff for project *S*_{1} is: (1/3) × 1000 + (1/3) × 1500 + (1/3) × 2000 = 1500. For project *S*_{2}, we have: (1/3) × (−500) + (1/3) × 1000 + (1/3) × 3000 ≈ 1167. Based on this decision rule, strategy *S*_{1} is selected.

Scenario analysis and project selection: example 6

| ENPV | Worst-case | Most-likely | Best-case |
|---|---|---|---|
| Strategy *S*_{1} | 1000 | 1500 | 2000 |
| Strategy *S*_{2} | −500 | 1000 | 3000 |
| Strategy *S*_{3} | −500 | 1000 | 2000 |
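The four decision rules can be sketched as follows, using the Table 9.5 payoffs for strategies *S*_{1} and *S*_{2} (the dominated strategy *S*_{3} is left out); regrets are kept signed, as in the text:

```python
# Payoff matrix (ENPV) per scenario: worst-case, most-likely, best-case.
payoffs = {"S1": [1000, 1500, 2000], "S2": [-500, 1000, 3000]}

maximin = max(payoffs, key=lambda s: min(payoffs[s]))      # risk-averse
maximax = max(payoffs, key=lambda s: max(payoffs[s]))      # risk-loving
laplace = max(payoffs, key=lambda s: sum(payoffs[s]) / 3)  # equiprobable

# Signed regret, as in the text: payoff minus best payoff in that scenario.
best = [max(p[i] for p in payoffs.values()) for i in range(3)]
regret = {s: [p[i] - best[i] for i in range(3)] for s, p in payoffs.items()}
minimax_regret = max(payoffs, key=lambda s: min(regret[s]))

print(maximin, maximax, laplace, minimax_regret)  # S1 S2 S1 S1
```

Only the maximax rule, which rewards the best possible outcome, picks the riskier strategy *S*_{2}.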

The two methods of sensitivity analysis described above are often considered deterministic, as they assess how the *ENPV* responds to pre-determined changes in parameter values through upper and lower bounds. They may be sufficient to assess roughly the risk associated with a project, but they do not account for all the uncertainty involved. In particular, they do not evaluate precisely the probability of occurrence of each possible outcome. A probabilistic sensitivity analysis is a useful alternative in this respect. Instead of focusing only on extreme bounds, the approach examines a simulation model that replicates the complete random behavior of the sensitive variables. It allows the distribution of the *ENPV* to be fully examined.

## 9.6 Probabilistic Sensitivity Analysis

A probabilistic sensitivity analysis assigns a probability distribution to all sensitive variables. These variables thereby become random in the sense that their value is subject to variations due to chance. The approach, also known as Monte Carlo analysis, examines those variations simultaneously and simulates thousands of scenarios, which results in a range of possible *ENPV* with their probabilities of occurrence. The method requires detailed data about the statistical distribution of the variables in question. One can for instance use information about similar existing projects, observed variations in input prices or, if the direct benefits and externalities have been estimated in the context of the project, the estimated standard deviations.

The uniform distribution assigns an equal probability of occurrence to each possible value of the random variable. It is used when little information is available about the distribution in question. Consider for instance the energy costs (row R9) of Fig. 9.7. They are initially set to −450 thousand dollars. We could assign instead a range of values between −500 and −400 with an equal probability of occurrence. With R-CRAN (Fig. 9.9), this amounts to using the *runif* command. The first entry denotes the number of randomly generated observations (here 1,000,000), while the second and third entries stand for the lower and upper limits of the distribution. Figure 9.10a provides the probability density function estimated with *plot*(*density*()). As can be seen, the uniform distribution yields a rectangular density curve (which would be perfectly rectangular with an infinite number of observations). Each value has an equal probability of occurring inside the range [−500, −400]. In Fig. 9.10, the bandwidth relates to the precision of the local estimations used to approximate the shape of the density curve and is of no interest for the present purpose.

The triangular distribution is a continuous probability distribution with a minimum value, a mode (most-likely value), and a maximum value. In contrast with the uniform distribution, the triangular distribution does not assign the same probability of occurrence to each outcome. The probability of occurrence of the lower and upper bounds is zero, while the maximum of the probability density function is obtained at the mode. This distribution is used when the available information is insufficient or unreliable to identify the probability distribution more rigorously. In R-CRAN (Fig. 9.9), we can use the *rtriangle* function from the *triangle* package. The first entry specifies the number of observations; the second and third entries are the lower and upper limits of the distribution; the last entry stands for the mode of the distribution. As can be seen from Fig. 9.10b, the program yields a triangular probability density function with minimum and maximum values obtained at −500 and −400, respectively.

With the normal distribution, the density curve is symmetrical around the mean and has a bell shape (see Fig. 9.10c). The random variable can take any value from −∞ to +∞. Train punctuality is an example of a normally distributed phenomenon: a train frequently arrives just on time, less frequently one minute early or late, and very rarely 20 minutes early or late. In R-CRAN (Fig. 9.9), the command in use is *rnorm*. The first entry specifies the number of observations; the second is the mean; the third is the standard deviation. The simplest form of this distribution is known as the standard normal distribution. It has a mean of 0 and a variance of 1. In Fig. 9.9, the standard deviation is set arbitrarily to 100. In Fig. 9.10c, the values tend to distribute in a symmetrical hill shape, with most of the observations near the specified mean. Provided that information about the standard deviation is available, the normal distribution may provide a reasonable approximation to the distribution of many events. Meanwhile, many variables are unlikely to be normally distributed. They may exhibit skewness (asymmetry), or have a different kurtosis (be differently curved). A way to capture these differences is to rely instead on the beta distribution.

The beta distribution is determined by four parameters: a minimum value, a maximum value and two positive shape parameters, denoted *α* and *β*. Depending on those parameters, the beta distribution takes different shapes. Like the uniform and triangular distributions, it models outcomes that have a limited range. In Fig. 9.9, energy costs are randomized using the *rbetagen* command available with the package *mc*2*d*. The first entry is the number of randomly generated observations; the second and third entries stand for parameters *α* and *β*; the last two entries represent the lower and upper bounds, respectively. When *α* and *β* are the same, the distribution is symmetric (Figs. 9.10d, e). As their values increase, the distribution becomes more peaked (Fig. 9.10e). When *α* and *β* are different, the distribution is asymmetric. For instance, in Fig. 9.10f, the curve is skewed to the right because *α* is set to be lower than *β*.
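For readers working outside R, the four generators have close counterparts in Python's NumPy; the sketch below assumes the same bounds as the text (energy costs between −500 and −400 thousand dollars) and illustrative shape parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

uniform    = rng.uniform(-500, -400, n)           # counterpart of runif
triangular = rng.triangular(-500, -450, -400, n)  # rtriangle: min, mode, max
normal     = rng.normal(-450, 100, n)             # rnorm: mean, sd
# rbetagen counterpart: draw Beta(alpha, beta) on [0, 1], rescale to bounds.
beta       = -500 + 100 * rng.beta(2, 2, n)

# All four are centered near -450 here; they differ in shape and tails.
print([round(x.mean()) for x in (uniform, triangular, normal, beta)])
```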

So far, we have assumed that there was uncertainty about the project’s variables independently of each other. Those variables were considered uncorrelated. When this is the case, one can safely assign a random generator to each variable separately. In some other cases, however, the use of joint probability distributions is required. Consider for instance time saving (row R16 of Fig. 9.7) and sales (row R14). To some extent, those variables are likely to be correlated. The higher the traffic, the higher the sales revenues and the time saved by users (if there is induced traffic congestion, time saved may decrease and the correlation would be negative). Traffic may also significantly affect maintenance costs (raw materials, labor, electric power, etc.). If one were to assign a random generator to each of these variables separately, they would vary independently from each other. We could for instance observe a decrease in sales and, in the meantime, a significant increase in time saving. To avoid those situations, it is generally advised to use a multivariate probability distribution. This can be done for instance with a multivariate normal distribution, provided that information about the covariance matrix is available.

For two variables, e.g. sales and time saving, the covariance matrix writes:

\( \Sigma =\begin{pmatrix} Var(Sales) & Cov(Sales, Time)\\ Cov(Time, Sales) & Var(Time)\end{pmatrix} \)

The variances appear along the diagonal and the covariances appear in the off-diagonal elements. This matrix is symmetric (i.e. entries are symmetric with respect to the main diagonal). If an off-diagonal element is large, the variable that corresponds to that row and the variable that corresponds to that column tend to vary together.

Consider for instance sales (variable *Sales*) and time saving (variable *Time*). Table 9.6 provides the dataset (10 years) and Fig. 9.11 illustrates the simulation method. The mean values are computed using the *mean* command, while the covariance matrix is directly obtained using function *cov*. Those values are used afterwards in the *rmvnorm* function of the package *mvtnorm*. Basically speaking, the *rmvnorm* function randomly generates numbers using the multivariate normal distribution. The first entry is the number of observations; the second entry is the vector of means; the last entry specifies the covariance matrix. The *rmvnorm* command generates a matrix made of (1) as many columns as there are variables and (2) as many rows as there are observations. For illustrative purposes, the number of randomly generated observations is set to 10,000. We thus have two columns (one for *Sales* and one for *Time*) and ten thousand rows. If one were to use the resulting random generator in a Monte Carlo framework, we would generate instead a matrix made of as many columns as there are variables and as many rows as there are time periods.

Dataset for comparison: example 4

| Sales | Time |
|---|---|
| 12,652 | 2310 |
| 11,688 | 2441 |
| 13,044 | 2408 |
| 12,343 | 2466 |
| 11,753 | 2092 |
| 14,292 | 2595 |
| 13,802 | 2810 |
| 12,249 | 2659 |
| 11,598 | 2485 |
| 12,239 | 2147 |
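The same joint simulation can be sketched with NumPy, using the Table 9.6 figures; `multivariate_normal` plays the role of the R *rmvnorm* function:

```python
import numpy as np

# Table 9.6: sales and time saving over 10 years.
sales = np.array([12652, 11688, 13044, 12343, 11753,
                  14292, 13802, 12249, 11598, 12239], dtype=float)
time_ = np.array([2310, 2441, 2408, 2466, 2092,
                  2595, 2810, 2659, 2485, 2147], dtype=float)

data = np.column_stack([sales, time_])
means = data.mean(axis=0)               # as with the R mean command
cov = np.cov(data, rowvar=False)        # as with the R cov function

# 10,000 joint draws that preserve the sales/time correlation.
rng = np.random.default_rng(1)
simu = rng.multivariate_normal(means, cov, size=10_000)
print(simu.shape)                       # (10000, 2)
```

Each row is one joint draw, so simulated sales and time saving rise and fall together instead of varying independently.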

The *plot* command provides a scatter plot of the relationship between sales and time saving using the existing dataset. As expected, some correlation between the variables is highlighted. As shown through a simple linear regression (*abline* command), this correlation is accurately taken into account by the simulated data (displayed in red in Fig. 9.12). To draw this regression line, we have made use of the ten thousand randomly generated observations. For simplicity of exposition, those observations are not displayed on the graph. Last, Fig. 9.12b, c compare the probability density of the real observations (in black) with that of the simulated ones (in red). As can be seen, the model provides an accurate estimation of the phenomena in question.

Note that the simulations presented in Fig. 9.11 can easily be extended to more than two variables. The resulting random generator can also be used for simulating cash flows in a Monte Carlo framework. For this purpose, the randomly generated data can be multiplied by a trend variable, e.g., 1 , (1 + *x*) , (1 + *x*)^{2} , …, to account for a possible automatic increase in sales in the first years of the project (for instance by *x*% each year).

A Monte Carlo simulation consists in assigning a probability distribution to each of the sensitive cash flows and running the model a high number of times to generate the probability distribution of the *ENPV*. A loop is created in which (1) sensitive cash flows are assigned a random number generator, (2) the *ENPV* is computed using the randomly generated cash flows and (3) the *ENPV* is saved for subsequent analysis. The simulation is repeated a large number of times (10,000 times or more) to provide a range of possible *ENPV*. This range is then used to estimate a probability distribution. One can then decide whether the probability that the *ENPV* is negative is an acceptable risk or not.
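A minimal sketch of such a loop, with hypothetical cash-flow bounds (the actual figures of the bridge example live in the book's spreadsheet and Fig. 9.13):

```python
import numpy as np

rng = np.random.default_rng(42)

def enpv(cash_flows, rate=0.08):
    # Present value of yearly net benefits, year 0 onwards.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

results = []
for _ in range(10_000):
    # (1) Assign a random generator to each sensitive cash flow
    # (hypothetical bounds, thousand dollars, years 1 to 5).
    energy    = rng.uniform(-500, -400, size=5)
    materials = rng.triangular(-2500, -2250, -2000, size=5)
    sales     = rng.normal(6000, 400, size=5)
    yearly = [-10_000.0] + list(energy + materials + sales)
    # (2) Compute and (3) save the ENPV.
    results.append(enpv(yearly))

results = np.array(results)
low, high = np.quantile(results, [0.025, 0.975])   # 95% interval
print(round(results.mean()), round(low), round(high))
print("P(ENPV < 0) =", (results < 0).mean())
```

The share of draws with a negative *ENPV* is the risk measure the decision-maker must judge acceptable or not.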

A loop is created from *i* = 1 to 10,000, this last number representing the number of times the *ENPV* will be randomly generated. Although we could extend the simulations to a larger number of variables, only “electric power”, “raw materials”, “sales” and “time saving” are assigned a random number. The uniform and triangular distributions (*runif*, *rtriangle*) as well as the multivariate normal distribution (*rmvnorm*, package *mvtnorm*) are used for this purpose.

The cash flows for electric power, raw materials, sales and time saving are observed in years 1, 2 and 3. As such, three observations per variable must be generated. A joint distribution similar to that of Fig. 9.11 (database *E*) is used to simulate sales and time saving. To ensure that the differences in means between Table 9.6 and Fig. 9.5 are fully accounted for, two additional variables are created: *Weights.Sales* and *Weights.Time*. They are used to weight the random generator (*simu*) so that the average values of sales and time saving correspond to those of Fig. 9.5. Mathematically, we rely on the fact that the standard deviation of the product of a positive constant *a* with a random variable *X* is equal to the product of the constant with the standard deviation of *X*: \( \sqrt{Var(aX)}=a\sqrt{Var(X)} \). In other words, we consider the possibility that the new bridge (*aX*, i.e. database *D*) is not of the same size as the existing bridge (*X*, i.e. database *E*) and, consequently, that the standard errors are proportionally different. From this formula, we can see that the weights (*Weights.Sales* and *Weights.Time*) can be included indifferently in the random generator (in which case each generated value accounts for the modification) or afterwards, as is done in Fig. 9.13.

The simulations yield the probability distribution of the *ENPV* (see Fig. 9.14). What matters here is the 95% confidence interval obtained using the *quantile* function. The latter yields positive values, ranging from 8643 to 18,561 thousand dollars, which gives support to the bridge project. Roughly speaking, the probability that the *ENPV* falls outside this interval is lower than 5%. Notice that the mean is approximately 13,568 thousand dollars, i.e. quite similar to the *ENPV* obtained in Fig. 9.5. This result is not surprising, as the means of the distributions used for the simulations are set to the same values as those of the raw dataset (for instance −300 for electricity and −2250 for raw materials). More interesting is the variance, which indicates the risk of the project.

## 9.7 Mean-Variance Analysis

Once the probability distributions have been calculated for several strategies, the mean and variance of each *ENPV* distribution can be compared. The approach, known as mean-variance analysis, plots the different strategies in the mean-variance plane and selects them based on their position in this plane. Under this framework, the mean of the *ENPV* represents the expected return of the project from society’s point of view. The variance, on the other hand, represents how spread out the economic performance is. It measures the variability or volatility around the mean and, as such, helps the decision-maker evaluate the risk behind each strategy. A variance of zero means for instance that the chances of achieving the most-likely scenario are 100%. On the contrary, a large variance implies uncertainty about the final outcome. If two strategies have the same expected *ENPV* but one has a lower variance, the one with the lower variance is generally preferred.

Strategy *S*_{1} and strategy *S*_{2} have similar variance. In other words, they have the same risk. However, the distribution of *S*_{2} yields a higher *ENPV* on average, and is thus preferable. Consider now *S*_{2} versus *S*_{3}. They have the same mean, but the *ENPV* of *S*_{3} is more dispersed. Strategy *S*_{3} is what is called a “mean preserving spread” of strategy *S*_{2} and, as such, is riskier. A risk-averse decision-maker would typically prefer strategy *S*_{2} since the likelihood that the *ENPV* falls below zero is lower. Comparing strategy *S*_{2} with strategy *S*_{4} is less clear-cut. While *S*_{4} has a much higher mean, its variance is also greater. In this situation, it is up to the decision-maker to decide what matters most, whether it is an increase in the expected *ENPV* or a decrease in the variance.
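The comparison can be sketched numerically; the two strategies below are hypothetical, with *S*_{3} constructed as a mean preserving spread of *S*_{2}:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two simulated ENPV distributions with the same expected return:
# S3 is a mean preserving spread of S2, hence riskier.
s2 = rng.normal(1000, 300, 10_000)
s3 = rng.normal(1000, 900, 10_000)

for name, dist in (("S2", s2), ("S3", s3)):
    print(name,
          "mean:", round(dist.mean()),
          "variance:", round(dist.var()),
          "P(ENPV<0):", round((dist < 0).mean(), 3))

# A risk-averse decision-maker prefers S2: same mean, lower variance,
# smaller probability of a negative ENPV.
```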

### Bibliographical Guideline

The concept of consumer surplus is attributed to Dupuit (1844, 1849), an Italian-born French civil engineer who was working for the French State. His articles “On the measurement of the utility of public works” and “on tolls and transport charges” provide a discussion on the concept of marginal utility. They point out that the market price paid for consuming a good does not provide an accurate measure of the utility derived from its consumption. If one wants to construct a public infrastructure, it is instead the monetary value of the absolute utility, i.e. the willingness to pay, that matters.

The theory of externalities was initially developed by Pigou (1932) who demonstrated that, under some circumstances, the government could levy taxes on companies that pollute the environment or create economic costs.

Shadow prices have been intensively used after the proposal of Little and Mirrlees (1968, 1974) to use world market prices (and standard conversion factors) in project evaluation. Their approach was subsequently promoted by Squire and van der Tak (1975) in a book commissioned by the World Bank.

The modern form of Monte Carlo simulation was developed by von Neumann and Ulam during the Manhattan Project on nuclear weapons. The method was named after the Monte Carlo Casino in Monaco.

The mean-variance portfolio theory is attributed to Markowitz (1952, 1959), who assumes that investors are rational individuals and that they expect a higher return for an increased risk.

Many guides provided by government agencies are available online. We may cite in particular the “Guide to Cost-Benefit Analysis of Investment Projects” of the European Commission which describes project appraisal in the framework of the 2014–2020 EU funds, as well as the agenda, the methods and several case studies. The European Investment Bank (EIB) proposes as a complement a document that presents the economic appraisal methods that the EIB advocates to assess the economic viability of projects. Additional guides are available, such as the “Canadian cost benefit analysis guide” provided by the Treasury Board of Canada Secretariat, the “Cost benefit analysis guide” prepared by the US Army, the guide to “Cost-benefit analysis for development” proposed by the Asian Development Bank, the “Cost benefit analysis methodology procedures manual” by the Australian Office of Airspace Regulation. Those documents review recent developments in the field and provide several examples of application with the purpose to make CBA as clear and as user-friendly as possible.

Several textbooks can also be of interest for the reader. We may in particular cite Campbell and Brown (2003). Their book illustrates the practice of cost benefit analysis using a spreadsheet framework, including case studies, risk and alternative scenario assessment. We may also cite Garrod and Willis (1999) and Bateman et al. (2002) who provide an overview of the theory as well as the different methods to estimate welfare changes.

## Bibliography

- Asian Development Bank. (2013). *Cost-benefit analysis for development: A practical guide*. Mandaluyong City, Philippines: Asian Development Bank.
- Australian Office of Airspace Regulation. (2007). *Cost benefit analysis methodology procedures manual*. Civil Aviation Safety Authority.
- Bateman, I., Carson, R., Day, B., Haneman, M., Hanley, N., Hett, T., et al. (2002). *Economic valuation with stated preference techniques: A manual*. Cheltenham: Edward Elgar.
- Campbell, H., & Brown, R. (2003). *Benefit-cost analysis: Financial and economic appraisal using spreadsheets*. Cambridge: Cambridge University Press.
- Dupuit, J. (1844). De la mesure de l’utilité des travaux publics. *Annales des Ponts et Chaussées, 2*, 332–375 [English translation: Barback, R. H. (1952). On the measurement of the utility of public works. *International Economic Papers, 2*, 83–110].
- Dupuit, J. (1849). De l’influence des péages sur l’utilité des voies de communication. *Annales des Ponts et Chaussées, 2*, 170–248 [English translation of the last section: Henderson, E. (1962). On tolls and transport charges. *International Economic Papers, 11*, 7–31].
- European Commission. (2014). *Guide to cost-benefit analysis of investment projects: Economic appraisal tool for cohesion policy 2014–2020*.
- European Investment Bank. (2013). *The economic appraisal of investment projects at the EIB*.
- Garrod, G., & Willis, K. G. (1999). *Economic valuation of the environment: Methods and case studies*. Cheltenham: Edward Elgar.
- Little, I. M. D., & Mirrlees, J. A. (1968). *Manual of industrial project analysis in developing countries, 2: Social cost–benefit analysis*. Paris: OECD.
- Little, I. M. D., & Mirrlees, J. A. (1974). *Project appraisal and planning for developing countries*. New York: Basic Books.
- Markowitz, H. M. (1952). Portfolio selection. *Journal of Finance, 7*, 77–91.
- Markowitz, H. M. (1959). *Portfolio selection: Efficient diversification of investments*. New York: Wiley (Reprinted by Yale University Press; 2nd ed. Basil Blackwell, 1991).
- Pigou, A. C. (1932). *The economics of welfare* (4th ed.). London: Macmillan.
- Squire, L., & van der Tak, H. G. (1975). *Economic analysis of projects*. Baltimore: Johns Hopkins University Press for the World Bank.
- Treasury Board of Canada Secretariat. (2007). *Canadian cost-benefit analysis guide: Regulatory proposals*.
- US Army. (2013). *Cost benefit analysis guide*. Prepared by the Office of the Deputy Assistant Secretary of the Army.