Agile

Agile methodology purports to deal with uncertainty through continuous monitoring and learning. To do so, we need to see how productivity is faring against our plans, as in the previous chapter. But we also need to communicate what our uncertainty is realistically. This is regularly done for cost, but must also be done for benefit to obtain a complete picture. In this chapter, we show how both benefit points and size points can be instantiated with values reflecting different levels of uncertainty.


Introduction
A fortunate property of points-based estimates is that one can instantiate the points with different values that reflect the stakeholders' current understanding. We instantiated the points with initial monetary values in Fig. 3.16. Here, we will instantiate points with values that reflect different scenarios according to uncertainty assessments.
In particular, we will demonstrate how to instantiate points with so-called pX values, where the p stands for percentile. Given a set of project outcome values, the pX value is the boundary value at or below which X% of the sorted outcome values lie. So, if one has a database of historical data with actual cost outcomes, and one sorts those projects on cost, the p85 value for cost is the value below which one finds 85% of the projects. Equivalently, one finds 15% of the projects above that value.
In the unlikely event that the database also has historical data on benefit, the p35 value, say, would be the benefit value below which one finds 35% of the projects when sorted on descending benefit. Equivalently, one finds 65% of the projects above that value.
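To make the pX lookup concrete, here is a minimal sketch in Python. The historical cost outcomes are fictitious and purely illustrative, not taken from the book's figures:

```python
# Illustrative pX lookup over a fictitious set of historical cost outcomes.
# With outcomes sorted ascending, the pX value is the boundary at or below
# which X% of the outcomes fall.

def px(outcomes, x):
    """Return the pX value: the boundary outcome for percentage x in [0, 100]."""
    s = sorted(outcomes)
    i = min(len(s) - 1, int(round(x / 100 * len(s))) - 1)
    return s[max(i, 0)]

# 20 fictitious project cost outcomes (millions)
costs = [22, 25, 27, 28, 30, 31, 33, 34, 35, 36,
         38, 39, 41, 43, 45, 48, 52, 57, 63, 75]

print(px(costs, 85))  # → 52: 17 of the 20 outcomes (85%) lie at or below 52
print(px(costs, 50))  # → 36: 10 of the 20 outcomes (50%) lie at or below 36
```

With real historical data one would typically have many more outcomes, and interpolation between neighbouring values becomes a reasonable refinement.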

Uncertainty Assessment
In this section, we want to estimate the most likely benefit and most likely cost of a new project, together with upper and lower bounds due to uncertainty. In [2], one can see how to derive pX values from relevant historical project outcome data, to provide cost estimates for a project.
However, historical data are often not available. In particular, outcome data in terms of benefit are currently extremely sparse. In this situation, one can elicit and systematize the stakeholders' perception of uncertainty. To do so, one should address the drivers of uncertainty that the stakeholders identify as salient to the project.
One can sort drivers of uncertainty into two categories: estimation uncertainty and event uncertainty. The former reflects the fact that estimates are forecasts of the future and are therefore inherently uncertain. In our context, we have estimates of
• A product element's lifecycle cost,
• A product element's effect on objectives, and
• An objective's worth on returns.
To assess estimation uncertainty is to contemplate the inherent uncertainty associated with these estimates. Event uncertainty, on the other hand, pertains to uncertainty arising from events internal and external to the project. Contemplating event uncertainty involves risk assessments. Risk assessment is another extensive subject that the reader should review elsewhere.
Here, we aim to express, in simple terms, the resulting perception of uncertainty, however the group of stakeholders arrived at it. We will use three-point estimates as our example.
Let us first look at cost estimates, since this is common practice in many organizations. We choose to express estimation uncertainty at the level of epics. However, our stakeholders might find it more meaningful to assess uncertainty on groups of epics or on other parts of the current backlog. It is possible to assess uncertainty at lower levels of the product breakdown structure if that is meaningful in the context in question.
Let us assume that the appropriate stakeholders have come up with the relative cost estimates in Fig. 3.8 (p. 29), and that they have used their knowledge and experience to fix the initial monetary value of a size point at 0.6 million, producing an estimate of 37.8 million for the most likely project development and post-deployment cost, as shown in the 'Cost' column of Fig. 3.16 (p. 39).
The stakeholders have devised three-point uncertainty cost estimates for the epics and events given in the upper half of Fig. 6.1. Note that the most likely cost estimates for epics are those in Fig. 3.16. For example, for epic E3, the most likely estimate is 1.8 million (corresponding to Fig. 3.16). This epic also has a bad-case estimate of 4 million and a good-case estimate of 1 million. Further, the three-point estimate for E2 is wider than that for E3, indicating lower confidence in the most likely estimate. All the three-point estimates are asymmetrical, reflecting the fact that the range of probable outcomes stretches further upward than downward. Next, for the three-point estimates of event uncertainty in Fig. 6.1, a value of zero signifies that the event, if it occurs, will have no impact; negative values signify that the event could decrease cost; and positive values signify that the event could increase cost. Most of the event uncertainties are assessed to increase cost, but, in this example, the events 'Market' and 'Inferior quality of data' are assessed as possibly decreasing cost.
For uncertainty regarding benefit, in the example, we choose to show the uncertainty assessment on the worth relation, in other words, the objectives' contribution to return. See, for example, Fig. 4.2 (p. 51). In contrast to the estimates for cost, the three-point estimates reflect the expectation that the ranges of probable outcomes of benefit tend to stretch farther downward than upward. Again, one can assess uncertainty at any level that makes sense in a given project. For example, one could assess uncertainty on the effect relation instead of, or in addition to, the worth relation. In this example, we assume that stakeholders' perceptions of uncertainty are more salient at a level closer to the business domain.

Use of Uncertainty Assessments
A three-point estimate gives a range of probable values, which is an important step in acknowledging that hitting the target on a single estimate is not a realistic goal. By itself, though, a three-point estimate does not indicate how probable different values are. For that, one needs a probability distribution. If one has usable theoretical or empirical results, one might be able to apply these results to choose an appropriate distribution type. For example, theoretically, time and cost are often distributed lognormally, as illustrated in Fig. 6.2.
In software projects, one is often not in a position to apply theoretical results, and the best bet is to use rule-of-thumb methods that are good enough. The program evaluation and review technique (PERT) [5] includes one such method, where one calculates an expected value estimate EV from a three-point estimate as EV = (low + 4 × most likely + high)/6. This approach assumes a beta distribution (see Fig. 6.2, middle panel).
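The PERT formula can be checked in a few lines of Python, here applied to epic E3's three-point estimate from the example above (good case 1, most likely 1.8, bad case 4, in millions):

```python
# PERT expected value from a three-point estimate: EV = (low + 4*most_likely + high) / 6
def pert_ev(low, most_likely, high):
    return (low + 4 * most_likely + high) / 6

# Epic E3's three-point cost estimate (millions)
ev = pert_ev(1.0, 1.8, 4.0)
print(round(ev, 2))  # → 2.03
```

Note that the expected value (about 2.03 million) exceeds the most likely value of 1.8 million, precisely because the estimate is asymmetrical with a longer upward tail.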
Even simpler is the triangular distribution, whose density is a triangle over the three points (see Fig. 6.2, bottom panel); it could be a better approximation when one is not able to apply theory or empirical data. The low and high values in the three-point estimates can have various interpretations. For example, when experts naturally think in terms of 'in one of 10 cases with epics similar to this one, the cost will be less than low, and in nine of 10 cases the cost will be less than high', it is the p10 (low) and p90 (high) values for the epic that are being estimated. The PERT method, on the other hand, prompts for low and high values without asking for probabilities, which could be advantageous, since thinking in terms of probabilities is hard [3,6]. The triangular distribution interprets the low and high values simply as p0 and p100 values.
Exactly what marginal probabilities your low and high values represent is not that important. It is more important that your interval is not too narrow. According to evidence [1], you should fix the low and high values first and then assess the probability of staying within these bounds, rather than fix a probability first and then find an interval that you believe contains outcomes with that probability. Research is ongoing on how best to elicit people's perceptions of uncertainty.

Obtaining pX Values for the Project
We now want to use the above assessments on uncertainty drivers to construct project-wide pX values that we can plug into our benefit points and size points.
For simplicity, we will use the triangular distributions generated automatically from the three-point estimates in Fig. 6.1, and we will assume that the drivers are independent of each other. These distributions are then given as input to Monte Carlo simulations. A Monte Carlo simulation simulates a large number of project runs, say, 10,000, based on our uncertainty assessments expressed as probability distributions. One simulated run captures one possible project outcome according to one random draw from each of the supplied distributions. Over a large number of runs, the more likely values, according to the distributions, will be drawn more frequently. This, in turn, shapes the distribution of total project outcomes. Figure 6.3 (top) shows the histogram after 60,000 iterations, giving the proportion of times the simulation outcome fell within a given cost interval (with intervals of 0.25 million each).
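The simulation itself can be sketched in a few lines of Python using only the standard library. The driver estimates below are hypothetical stand-ins for the values of Fig. 6.1, and dependencies between drivers are ignored, as in the main text:

```python
import random

random.seed(42)

# Fictitious three-point cost estimates (low, most likely, high) in millions,
# standing in for the drivers of Fig. 6.1.
drivers = [
    (1.0, 1.8, 4.0),   # e.g. epic E3
    (2.0, 3.5, 8.0),
    (4.0, 6.0, 9.0),
    (-1.0, 0.0, 2.0),  # an event that could decrease or increase cost
]

N = 10_000  # number of simulated project runs
outcomes = []
for _ in range(N):
    # one draw from each driver's triangular distribution, summed to a project total
    total = sum(random.triangular(low, high, mode) for (low, mode, high) in drivers)
    outcomes.append(total)

outcomes.sort()
p35, p50, p85 = (outcomes[int(N * x)] for x in (0.35, 0.50, 0.85))
print(round(p35, 1), round(p50, 1), round(p85, 1))
```

Plotting a histogram of `outcomes` would reproduce the shape of Fig. 6.3 (top) for these fictitious drivers, and the sorted list is exactly the cumulative view from which the pX values are read off.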
The cumulative curve of the histogram (Fig. 6.3, second panel) is generated by adding the bar heights in the histogram from left to right and plotting the result. One can then easily read off the project-level pX values (see Section 6.7 for common values). Here, the p50 most likely cost estimate is 49.25 million, giving a size point value of 0.78 million. The p85 bad-case estimate is 52.75 million, which yields a size point value of 0.84 million. The p35 good-case estimate is 48.00 million, for a size point value of 0.71 million.
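Converting a project-level pX cost into a size point value is a single division. The chapter's backlog has 37.8/0.6 = 63 size points, so, sketching in Python with the p50 and p85 values above:

```python
# Size points for the whole backlog, from the chapter's initial figures:
# 37.8 million most likely cost at 0.6 million per size point.
size_points = 37.8 / 0.6  # 63 size points

# Instantiating the size point with project-level pX costs from the simulation
p50_cost, p85_cost = 49.25, 52.75  # millions, read off the cumulative curve
print(round(p50_cost / size_points, 2))  # → 0.78
print(round(p85_cost / size_points, 2))  # → 0.84
```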
Example 1. Some early adopters have also applied this approach in benefit estimates, as advocated in the main text. For example, a large business-critical Norwegian public agency analyzed possible changes to business processes within one of their service domains. It then estimated the benefit of each change, including uncertainty assessments, by providing three-point estimates of the time that could be saved in the processes due to the planned changes. These estimates were converted to monetary values and submitted as triangular distributions to Monte Carlo simulation. The project could therefore provide a range within which the benefit for the functional domain would arise, together with pX estimates.
This organization also developed a dashboard for tracking earned business value along the lines described in the previous chapter. They are not yet applying the practice of using benefit points, but when they do, they will be able to view different scenarios concurrently in the dashboard by plugging various pX values into their points.
A note on independence: there will be dependencies. Product elements are independent, in that they can provide individual benefits, but they will likely depend on each other for maximum effect. Additionally, event uncertainty drivers will likely be interdependent, and so on. Modelling dependencies and their effects is outside the scope of this text and is described elsewhere. The independence assumption is reasonable if coarse-grained drivers are used as input for the Monte Carlo simulations, and one can still carry out meaningful uncertainty assessments for the main effects under this assumption.
Looking again at the histogram (Fig. 6.3, top), it is not at all likely for the cost to be as low as the initial project estimate of 37.8 million calculated prior to the uncertainty assessment. Further, the PERT approach would involve computing the PERT expected value for each three-point estimate in Fig. 6.1 and adding them to obtain a project total of 44.8 million, within which the project only has about a 7.5% chance of staying.
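The chance of staying within a given budget can be read directly off the simulated outcomes as an empirical cumulative probability. A minimal sketch, with a small fictitious outcome set standing in for the 60,000 simulated runs:

```python
import bisect

# Given the sorted simulated cost outcomes, the chance of staying within a
# budget is the fraction of runs at or below it (the empirical CDF).
def chance_within(sorted_outcomes, budget):
    return bisect.bisect_right(sorted_outcomes, budget) / len(sorted_outcomes)

# Fictitious sorted outcomes (millions), standing in for the simulated runs
outcomes = sorted([43.1, 44.0, 44.6, 45.2, 46.0, 47.1, 48.3, 49.2, 50.5, 53.0])
print(chance_within(outcomes, 44.8))  # → 0.3 (3 of 10 runs at or below 44.8)
```

Applied to the real 60,000 simulated outcomes, this is how one would arrive at figures such as the 7.5% chance of staying within the PERT total.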
Regarding benefit, Fig. 6.3 (bottom half) shows the histogram after 60,000 iterations, giving the proportion of times the simulation outcome fell within a given benefit interval (with intervals of 0.25 million each). The cumulative curve (bottom) indicates that the p50 most likely estimate is 65.5 million (1 benefit point = 0.31 million), the p15 bad-case estimate is 61.25 million (1 benefit point = 0.29 million), and the p65 good-case estimate is 66.75 million (1 benefit point = 0.32 million). According to the histogram, there is zero likelihood of obtaining the initial project estimate of 76.5 million or better, and only about a 0.12% chance of obtaining the PERT estimate of 69.7 million or better. This is a fictitious example, and pX estimates will not necessarily give more pessimistic forecasts than initial base estimates. However, the example demonstrates that, if the project does have a perception of uncertainty, one should capture it by using, for example, three-point estimates and a sound method for integrating these uncertainty assessments into the base estimates (e.g. using Monte Carlo simulations). The use of base estimates alone ignores project knowledge. Research also shows that the PERT method as such can lead one astray [4], but that the beta distribution it is based on can be used sensibly in Monte Carlo simulations.

Instantiation with pX Values
Now we are ready to instantiate benefit points and size points with pX values. Figure 6.4 shows the benefit/cost according to initial estimates and the good-case, most likely, and bad-case pX estimates. Figure 6.5 shows the corresponding planned realization curves.
So, a project manager who has been given the p65/p35 mandate should work with monetary values of 0.32 million for benefit points and 0.71 million for size points. If you are allowed to work with p50 estimates, then you should use 0.31 million for benefit points and 0.78 million for size points. Either choice will affect when to stop construction and how backlogs are prioritized across a portfolio.
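The effect of the chosen scenario on prioritization can be illustrated with a hypothetical epic. The point values are the p65/p35 and p50 values from the main text, while the epic's 20 benefit points and 5 size points are made up for illustration:

```python
# A hypothetical epic with 20 benefit points and 5 size points, valued under
# the two scenarios above (monetary point values in millions from the main text).
bp, sp = 20, 5

for label, bp_value, sp_value in [("p65/p35", 0.32, 0.71), ("p50", 0.31, 0.78)]:
    ratio = (bp * bp_value) / (sp * sp_value)  # monetary benefit / monetary cost
    print(label, round(ratio, 2))
```

The same epic looks noticeably stronger under the good-case scenario (ratio about 1.8) than under p50 (about 1.59), which is exactly why the chosen scenario matters for cut-off and prioritization decisions.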

Simple Sensitivity Analysis
Looking more closely at the p50 scenario compared to the initial estimates, we find that the estimates imply that E5 joins E6 in being questionable for construction. If your stakeholders' uncertainty assessments were different, your p50 estimates might provide an overall stronger benefit-to-cost ratio than your initial estimates, making E6 more viable. At this point, however, you can see what happens if you were to eliminate waste by discarding E6 from the plan. In reality, you would wait until story elaboration time to eliminate waste, but it is still strategically useful to experiment at the level of epics.
Fig. 6.4 Benefit/cost obtained by instantiating benefit points and size points with initial estimates, good-case estimates (p65 for benefit, p35 for cost), expected-case estimates (p50 for both benefit and cost), and bad-case estimates (p15 for benefit, p85 for cost). Bad benefit-cost ratios are outlined in red, and questionable ones in yellow.
The point to be made here is that you can run Monte Carlo simulations on your initial estimates with uncertainty assessments again, but omitting E6. In this example, you obtain a p50 benefit point value of 0.32 million on the remaining 195.55 benefit points and a p50 size point value of 0.82 million on the remaining 50 size points. Using these values to recompute your epics backlog's benefit-cost ratios still renders E5 as waste. Now, you can try eliminating E5 instead, since E5 has a cost uncertainty assessment that tends towards higher values (Fig. 6.1). Recomputing the p50 estimates renders E6 as waste. You can try eliminating both E6 and E5 and recomputing the p50 estimates, which produces a backlog without waste at the level of epics. Figure 6.6 (top) summarizes this sensitivity analysis and waste elimination with the relevant values.
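The discard-and-recompute loop can be sketched as follows. The epic figures are hypothetical; in practice, the p50 point values would be re-derived by rerunning the Monte Carlo simulation on the reduced backlog, whereas here they are held fixed for simplicity:

```python
# Waste-elimination sketch: repeatedly drop the epic with the worst monetary
# benefit-cost ratio until every remaining epic earns more than it costs.
epics = {  # name: (benefit points, size points) -- hypothetical figures
    "E4": (60, 12),
    "E5": (9, 8),
    "E6": (6, 10),
}
bp_value, sp_value = 0.31, 0.78  # p50 monetary values (millions)

def ratio(bp, sp):
    return (bp * bp_value) / (sp * sp_value)

while epics:
    worst = min(epics, key=lambda e: ratio(*epics[e]))
    if ratio(*epics[worst]) >= 1.0:
        break  # every remaining epic has benefit exceeding cost
    print("discarding", worst)
    del epics[worst]

print(sorted(epics))  # → ['E4']
```

With these hypothetical figures, E6 goes first and then E5, mirroring the order of elimination discussed above; rerunning the simulation after each discard, as the main text does, can change which epic falls out next.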
You can carry out this exercise even if you do not use uncertainty assessments. Then, you simply eliminate the epic with an unfortunate benefit-cost ratio (E6), and you are done (Fig. 6.6).
Whether to incorporate uncertainty is a choice that has to be made based on how much effort one wishes to expend on project governance and on how meaningfully stakeholders think they can assess uncertainty. If you incorporate uncertainty into your project metrics, you can enhance project learning, both by making uncertainty an explicit, and acceptable, part of project life and by adjusting your numbers and plans to reflect uncertainty. You can use simple uncertainty assessment methods to generate pX estimates that you can plug into your benefit points and size points, giving you various views on your project that you can report to your stakeholders. You can do this at any point during the project, based on whatever is left of your backlog or portions of it. Regarding benefit uncertainty, we illustrated the use of three-point estimates for the objective-returns relation. During construction, you have to adjust the amount of return that has been realized by the partly achieved objectives. Since benefit points map to objectives, and therefore to returns, this adjustment can be computed automatically, a substantial advantage of using benefit points.

6.7* How Businesses Construct Project-Level pX Values
Over the years, it has become common practice to provide uncertainty analyses for cost in large public sector projects in Norway. Such analyses are mandatory for projects above NOK 750 million (about USD 100 million), but smaller projects, down to NOK 10 million, also perform these analyses. There is work underway to establish benefit budget regimes analogous to those for cost. The corresponding pX values for benefit uncertainty reserves could be given in terms of, for example, p50 (for the project owner), p15 (bad case), and p65 (for the project manager).
The following is a common approach for cost estimates. A similar approach can be used for benefit estimates.

1. Estimation uncertainty:
a. Walk through the project scenario and identify drivers for estimation uncertainty in the initial cost baseline. It is common to choose drivers of a certain size, such as groups of epics, so that the total number of drivers will be less than 15.
b. For each driver, provide three-point estimates:
   i. Optimistic scenario: what will be the lowest cost in one of 10 cases?
   ii. Most likely cost (often coincides with the initial cost baseline).
   iii. Pessimistic scenario: what will be the highest cost in one of 10 cases?
c. Model the dependencies between drivers, if desired. Current tools support multivariate distributions.

2. Event uncertainty:
a. Walk through the project scenario and identify internal and external uncertainty factors that could impact project progress and costs, that is, factors not included in the cost baseline. Group factors into uncertainty domains (main drivers).
b. For each driver, provide three-point estimates analogously to items i to iii above.
c. Model the dependencies between drivers, if desired.
3. Generate a distribution from the three-point estimates in items 1 and 2. Current tools generate a range of distributions, including normal, log-normal, beta, and triangular ones.
4. Feed the distributions into the tools for Monte Carlo simulation. The Monte Carlo simulation generates a cumulative probability distribution of the total simulated project cost.
5. From the cumulative probability distribution, read off the desired pX values for cost. These values are used for decisions on uncertainty reserves at different management levels. In large public sector projects, the p50 cost is often given by the sponsor (e.g. the Department of Finance) to the project owner (e.g. a public service organization) as the budget limit. To be prepared for possible overruns of this limit, the sponsor will want to set a bad-case scenario limit, say, at p85. Sometimes, the project owner will impose a p35 estimate as the target for the project manager, the point being that the project should be managed on a day-to-day basis relative to a target that does not incorporate any uncertainty reserves.