We compare the institutional compass to other decision aides.

The institutional compass is not suitable for all decisions. Its best use is when we find ourselves trying to make policy in complex situations with a lot of uncertainty. We shall see that the institutional compass makes an important contribution to the suite of existing decision aides. In its right place, it is a significant improvement on them.

1 Why Is the Institutional Compass Important?

At present, policy decisions are largely made in terms of money, that is, for maximising profit. We are told that this is the bottom line, or at least an important constraint, but it usually ends up being the top line too, and so the only real consideration. Such policy decisions can be made out of greed, because monetary calculations are thought to provide an objective measure, or because it is thought that making more profit increases the well-being of the investors or of the people involved in the institution. Making decisions to maximise profit is appropriate when maximising profit in the short term is the highest, or the only, consideration. At best, it is suitable for shareholder investment, for profit-only businesses, or for institutions in financial crisis, although even this is disputable (Varoufakis, 2017). Making decisions this way is inappropriate on other occasions. Unfortunately, policy decisions are still, too often, made on a profit-making basis despite the fact that what goes into this sort of calculation is precariously superficial.

While I hesitate to offer a diagnosis, I suspect that when we are not merely motivated by greed for money, we nevertheless make policy decisions in this way because we have one numerical figure: a money amount. This gives our policy choice a justification in terms of a number. Money functions as a “common yardstick”, as Söderbaum puts it (Söderbaum, 2000, 53). The valuation in terms of money has a levelling effect, so that everything on the market can be compared with everything else. Monetary calculations are usually mathematically simple. We think that they are objective and we believe that we understand them. Because this method of making decisions is too often the bottom line, the top line and the only line, that is, the real basis upon which policy decisions are made, let us call it into question.

1.1 Objectivity of Price

…But a price is a curious kind of fact, that differs starkly from the type of [objective] fact that seventeenth-century experts [or scientists] were seeking to defend. …It provides little certainty or common ground, and always offers an advantage to the person who detects it and reacts fastest. (Davies, 2018, 157)

Let us ponder. In philosophy, ‘objectivity’ refers to mind-independence. This mind-independence is rooted either in there being some object independent of us (the object is simply there, presented to us as reality) or in there being some truth of the matter that we discover, as opposed to create. Now let us think of the supposed objectivity of prices. For most of us in the “developed world”, our overwhelmingly most frequent experience of prices is when we go to purchase processed and packaged goods. We are confronted by copies of interchangeable products,Footnote 1 and we compare them with other, similar, interchangeable products. We compare them by price, size, presumed quality, packaging, maybe country of origin or reputation of the company. In the modern free-market supermarket, each product is marked with a price. As consumers, or customers, we can decide whether or not to purchase the product, but we feel that we cannot negotiate the price, as we would have done in a more traditional market. We have the illusion that the price of the good is not something that any particular person directly controls. It is, after all, marked on the productFootnote 2 and comes from a calculation balancing supply and demand. It is calculated by some experts. The psychological effect of marking the price on a product is to dissuade bargaining. The prices are objective in the sense that we have very little control over them; it is for psychological reasons that we relinquish our control over them.

Occasionally, we have other experiences, when we can negotiate price. On these occasions we have partial control over the price. When we negotiate a price, both parties have to agree. In larger groups, collectively, we also have partial control. We can boycott a product, and so reduce demand, and thereby lower the price. But by and large our experience of price is of something outside our control, and so of something objective. Mathematical calculations are also objective. Adding the prices of several goods together determines a set price for the goods. We can add correctly or incorrectly, but we cannot choose to change the outcome of the addition.

Our individual helplessness tells us of our market dependence. Collectively, however, this helplessness is an illusion. In the world of finance and price determination, there is a lot of choice. There are several formulas to choose from. As experts, we can calculate in order to have the company make a profit, make a loss or break even. It might be unusual to calculate how to make a loss deliberately.Footnote 3 But such a choice being unusual does not, in itself, mean that it cannot be made. The choice could be made to try to maximise profit in the long term, as opposed to making a small but steady profit, or instead of taking a risk to make a large profit. We anticipate demand and correct our supply to meet it. In the name of making a profit, we encourage demand through packaging and advertising, based on the choice to maximise profit in the short term.

We also believe that we understand prices. And we do, in the sense of having a rough idea of whether or not we can afford to purchase a product on a particular occasion, or what privations will ensue. We sometimes think we can work out what range of debt we can overcome. We also know roughly that prices are calculated on the basis of supply, demand and profit making. But if we compare prices of similar or quite different items or services, sometimes we meet with absurd surprises.

We all have our favourite examples, but here are three. Airline tickets can vary considerably in price, yet the service is very much the same. House prices are also very different from one another, and vary more with location than with the quality of the building. Artwork varies considerably in price; and while in some artist communities certain conventions might be decided upon (larger pieces are priced higher, oil paintings cost more than water-colours, more time or attention to detail is priced higher, and so on), it remains that these are conventions that someone has decided upon. So, once someone explains the reasons or conventions for the varying prices, we “understand” why this airline ticket costs more than that one, why this house is priced higher than that one, or why this painting is priced higher than that one. With far-from-identical items we can think we understand price variation. There is a rationale for each price, and we are, after all, free to forgo the purchase of inessential items. But it remains unclear what it really means for a painting to be the same price as an airline ticket and a thirtieth the price of a house.

For such reasons, it is not so very clear that price is objective as such, that we understand it or that it is justified. Consider the flip side: debt. We do not understand debt very well, especially large debt. Keen (2000) shows how an economy can halt under too heavy a debt, and cannot be re-started without a debt moratorium. We are also not clear about debt once we cross a threshold where too many people are in debt. Debt carried by one person in many thousand is very different from debt carried by one person in ten. Scale can disturb our sense of understanding, whether it is the large scale of an individual debt or the scale of the percentage of people in debt.

Since prices and money amounts are not so very objective, nor so very well understood, making policy decisions purely on the basis of them is unwise.

Worse, in a complex setting, using only one quantity is subject to very large error margins. For example, the epidemiology model used by the Institute for Health Metrics and Evaluation in the effort to predict the number of deaths from the COVID-19 virus in the United States of America was based only on past numbers of deaths. The prediction was between 80,000 and 170,000 (Larousserie, 2020, 2). The margin is too large to be informative. Six months after the article was written, even the higher extreme was known to be a woeful under-estimation. The reason for the significant error margin is that in this particular method not enough factors are considered. We even got the mathematics wrong, thinking that the progression was more-or-less linear (as it had been in the past pattern of increase) whereas it was, as later became obvious, exponential. We have a less error-prone predicting tool when we consider many more criteria, such as when we use a multi-criteria decision aide or a multi-compartment decision aide. They add comprehensiveness. This reduces the error margin. Adding more dimensions adds accuracy to the model, and this makes multi-criteria decision aides more useful for policy decision making.
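To make the mathematical point concrete, here is a minimal sketch with purely hypothetical numbers (not the IHME data): the same four early observations, extrapolated once linearly and once exponentially, diverge by orders of magnitude within a few weeks, which is why a single-quantity model built on the wrong functional form carries such a large error.

```python
# Illustrative only: hypothetical numbers, not the IHME data.
observed = [100, 200, 400, 800]  # deaths recorded in weeks 0..3 (hypothetical)

# Linear extrapolation: average weekly increase over the observed window.
weekly_increase = (observed[-1] - observed[0]) / (len(observed) - 1)

# Exponential extrapolation: average weekly growth factor over the same window.
growth_factor = (observed[-1] / observed[0]) ** (1 / (len(observed) - 1))

for week in range(4, 11):
    linear = observed[-1] + weekly_increase * (week - 3)
    exponential = observed[-1] * growth_factor ** (week - 3)
    print(f"week {week:2d}: linear ~{linear:9.0f}   exponential ~{exponential:9.0f}")
```

By week ten the two curves differ by a factor of roughly forty, even though they agree perfectly on the observed window.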

Before we move from one-dimensional decision making to multi-dimensional decision making, let us examine another important claim in defence of cost-benefit analysisFootnote 4 or life-cycle analysis.Footnote 5 It is claimed that money-based decision making is objective because it is independent of our subjective values. This is an important claim because it brings to the fore the concept of objectivity, which is so dear to science, and also the more general theory from which the money-based cost-benefit calculation comes: neoclassical economic theory. For some readers, it will not be necessary to emphasise this point. Therefore, the detailed version of the argument is relegated to the appendix. In it, we give a stark version of the theory to expose its skeleton, so that we can question it without being distracted by its flesh of marginal calculations. It is a fact that neoclassical economic theory holds a monopoly in economics departments at universities, and so many of us are not given much of a chance to learn about alternative theories. So, we think that economics is neoclassical economics. As we can see from the appendix, the supposed objectivity or value-neutrality of neoclassical economics is presupposed and false.

1.2 Return to Policy Decision Making

When we do not make policy decisions purely with the aim of maximising profit, or in the belief that decisions made on the basis of increasing profit are somehow objective and understood, we might make them in the belief that we who participate in the institution, even very marginally, shall be better off. That is, we think that income is an indicator of well-being.

In 1995, this was statistically correct as a rough assumption up to a disposable income (purchasing power parity) of $US 13,000 on average per person in a country (Jackson, 2009, 42). See Fig. 3.1. The dots in the graph represent countries. The actual cut-off might have changed since the graph was first made. The calculation plots purchasing power parity, calculated in terms of gross domestic product (henceforth GDP) per person in a country, against the mean proportion of the population who consider themselves to be happy or satisfied with their situation in life. Ignoring the “mean” part of the calculation (which might mask very high disparity of income), what we find statistically is that beyond this threshold there is not much difference. Another way of putting this is that the regression curve showing the match between mean income and mean satisfaction is not a straight line, but a curve that flattens out. This shows us that there are diminishing returns on growth of GDP after a purchasing power parity of $US 13,000 on average per person in a country. If we look at other indicators of “well-being”, such as health indicators or education, we find similar results (Jackson, 2009, 56–59). See Fig. 3.2. There the dots also represent countries. There is a threshold in income beyond which mean purchasing power does not track well-being. Therefore, for those countries, or those sub-cultures, that enjoy a mean purchasing power higher than the threshold, it is appropriate to pay closer attention to the other indicators.

Fig. 3.1 Purchasing power parity versus mean population who are satisfied

Fig. 3.2 Purchasing power parity versus longevity

There is another reason we might give in defence of using monetary calculations, at the expense of other indicators, to guide policy. Sometimes we make policy decisions purely on the basis of money for purposes of expediency. In an apologetic mood, when we are aware of the superficiality of the calculation, we might think that if our institution has surplus money later, then we shall have the slack to execute the real purpose of the institution better. In other words, we defer trying to realise the mandate of our institution directly under two sorts of pressure: one is that the implications of non-monetary policy seem too subjective or too complicated to explain or to understand; the other is that at a later date, when we have the cash, we can think at greater leisure about how better to realise our true mandate. I suppose that we hope that the complications will be erased in time. So, even when the mandate is clearly not put in monetary terms, policy decisions are nevertheless still made in these terms.

This is no accident. In the modern world, our acceptance of finance-based decisions is systemic. It is for this reason that I labour the point so very much. We should be aware of how prevalent such thinking is, even if we do not believe that we ourselves share it. In the modern, so-called “developed” world, we are taught from a very young age to behave as homo economicus, and that institutions are ‘better off’ if we make similar sorts of decision for them. We are taught to want money. As mentioned at the end of the last section, universities increasingly teach neoclassical economic theory to the exclusion of alternatives (Söderbaum, 2017, 26). We also think that we understand credit and debit, which we do to some extent, but as a ‘value’ it is highly abstract and only reflects value in exchange. It follows that in many instances making policy decisions based on the idea of maximising profit is inadequate.

So how do we make decisions now, when we want to try to realise the purpose of the institution directly and at present?

2 Existing Multi-criteria Decision Aides

As policy makers, we might be aware of the limitations of money-only decisions. Once aware, we can choose to make decisions in another way. For example, we could use some of the many lovely tools for making policy decisions: we could have recourse to multi-criteria decision aides as found in Shmelev (2012), or to multi-compartment decision aides.Footnote 6 Fitoussi, Sen and Stiglitz propose a ‘dash-board’ of such decision aides (Fitoussi et al., 2010). See Fig. 3.3. These are a much better option than money-based decisions because we have the opportunity to bring more values to the fore, such as long-term profits, social values and environmental values. We do not convert them into a monetary value. They are measured in non-monetary terms. Social values will include at least: health, education, culture and security. Environmental values are intrinsic. We recognise the value of an ecosystem just by its existence. We value the integrity of the ecosystem. This includes biodiversity and the health of the ecosystem. We know that human activity affects the health of ecosystems, and so we might be interested in indicators such as: CO2 emissions, soil retention/erosion, extent of natural spaces actually managed by indigenous peoples, energy use, water purity and so on.

Fig. 3.3 (a) A multi-criteria decision aide. (b) Another multi-criteria decision aide. (c) A third multi-criteria decision aide. (Source: Shmelev (2012), p. 124; Shmelev & Powell (2006))

Using multi-criteria decision aides or multi-compartment decision aides brings values other than profit into our decision making. There are circumstances when these are appropriate.

One of the problems with the existing tools, exacerbated in the case of the “dash-board” approach, is that the representation of the data is difficult to read. It has to be interpreted by an expert. Unless one is trained to read these representations of data, or to read a table of data, it is very difficult to use them to make a policy decision based on the represented data. It is equally complicated to justify a decision made using the aide to people who lack the training. In reply, one could defer to the authority of an expert trained in using such aides, but then we compromise democracy, which is an important consideration in some institutions, and is recognised to be important when making decisions that affect society or the natural environment. When we defer to an expert, we have a technocracy, not a democracy (Söderbaum, 2017, 35).

3 Futures Modelling

Another popular and sophisticated means of reaching decisions is futures modelling. Here we take a scenario, say, a factory in a specified region. As factory decision makers, we might want to anticipate whether we face the possibility of making a profit, staying steady with our income, or making a loss and having to close down. Before we invest, we want an idea of when our investment will be paid off and we can begin to make a profit. If we are no longer making a profit with an existing factory, we have the options of trying to increase production, decrease it or diversify. We use futures modelling to predict possible outcomes and time-scales. We can shift the contextual parameters on the outcomes depending on what we think is likely.

The important non-monetary consideration is that we realise that our factory sits in a context. So, we might want to model various likely futures of the monetary, social or environmental context: interest rate changes, stock-market crashes, political unrest, crime, war, drought, natural disasters, bumper harvests, climate change and so on, since these influence supply chains and consumer behaviour. We run models based on future scenarios under different decisions that we make for the production in the factory. We come to a decision, depending on what we think is likely in the future, our parameters for variation, our tolerance of risk and our understanding of the modelling and its limitations.
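As a rough illustration of the kind of exercise described here, the following sketch runs a toy scenario simulation for the factory example. Every number, the two decisions compared and the probability of a contextual shock are hypothetical assumptions for illustration, not a real forecasting model.

```python
# A minimal sketch (all numbers hypothetical) of scenario-based futures
# modelling: simulate yearly profit for a factory under two decisions
# (expand production vs. keep it steady) across randomly drawn contexts.

import random

def simulate(expand, years=10, runs=10_000):
    """Return the average cumulative profit over many simulated futures."""
    total = 0.0
    for _ in range(runs):
        cumulative = 0.0
        for _ in range(years):
            demand = random.gauss(1.0, 0.2)       # relative demand this year
            shock = random.random() < 0.05        # e.g. supply-chain disruption
            capacity = 1.3 if expand else 1.0
            cost = 0.4 if expand else 0.3         # expansion carries extra cost
            revenue = 0.0 if shock else max(0.0, min(demand, capacity))
            cumulative += revenue - cost
        total += cumulative
    return total / runs

print("expand:", round(simulate(expand=True), 2))
print("steady:", round(simulate(expand=False), 2))
```

The decision then depends on which scenarios we believe are likely and how much risk we tolerate, exactly the subjective judgments discussed next.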

This too requires expertise and a subjective sense of future possibilities. We could be ignoring quite a lot of information, we could misjudge it, and we could misunderstand the models themselves. Therefore, while this method is very much preferable to making decisions based on monetary calculations alone, it is still technocratic and could be quite subjective, in the negative sense of missing too much information.

We can do better. We introduce the institutional compass here merely for the sake of contrast. We shall examine the construction in detail in Part II.

4 The Institutional Compass as a Better Multi-criteria Decision Aide

In this book, I propose a new tool for policy analysis, justification, development and change. I call it ‘the institutional compass’. It can be used by any institution for creating, modifying, justifying or critiquing policies.

In contrast to the tools depicted in Fig. 3.3a–c, the institutional compass is visually very simple and intuitive. See Fig. 3.4. The simplicity of representation meets the demands of policy makers (Söderbaum, 2000, 54). The compass has three sectors: harmony, discipline and excitement. There is one arrow that represents the summation of a large table of data. The constructed arrow indicates the direction in which the institution is heading de facto, according to the data.

Fig. 3.4 An institutional compass

The final arrow lies in one of the sectors. This reflects the fact that the statistical data, when aggregated, show that, overall, the institution displays this quality more than the others. The degree of the arrow within the quality sector indicates how closely it approaches, or tends away from, the other two qualities. The length indicates the strength with which it sits in that quality. A shorter arrow indicates more balance between the three qualities, but also that it is easier to shift into another sector, albeit in a mild or balanced way.
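As a purely illustrative aid to picturing such a reading (the actual construction is laid out in Part II), the following sketch decodes a hypothetical arrow from an angle and a length. The 120-degree sector layout, the sector order and the 0 to 1 length scale are my assumptions for the illustration, not the book's conventions.

```python
# An illustrative sketch only: decode a hypothetical compass reading.
# The 120-degree sector boundaries and the 0..1 length scale are assumptions.

SECTORS = {"harmony": (0, 120), "excitement": (120, 240), "discipline": (240, 360)}

def read_compass(angle_deg, length):
    angle = angle_deg % 360
    for name, (lo, hi) in SECTORS.items():
        if lo <= angle < hi:
            centre = (lo + hi) / 2
            lean = angle - centre            # how far off the sector centre
            return (f"{name} (strength {length:.2f}, "
                    f"{abs(lean):.0f} degrees off the sector centre)")
    return "undefined"

print(read_compass(95, 0.8))    # strongly harmonious, leaning towards a neighbour
print(read_compass(180, 0.2))   # mildly exciting; short arrow, so well balanced
```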

We make a conscious philosophical or ideological choice about where it is that we would like the arrow to be, and how long we would like it to be. Some ideological positions are reflected by a preference for one sector over another; some prefer a balance, represented by a short arrow close to the centre. A society that follows Confucius will prefer institutions that lie in the harmony sector and will encourage institutions to move in that direction. A society based on principles of competition will prefer excitement. A society based on principles of stoicism and order will prefer discipline.

Behind the simple final representation lies a culturally sensitive, statistically robust and holistic construction.

4.1 The Three Qualities

Following Kumar (2007), the general qualities are inspired by the three gunas of Hindu, Jain and Buddhist philosophy: sattva, raja and tamas.Footnote 7 I translate these as: harmony, excitement and discipline, respectively. These are general, in the sense that other qualities fall under them. If properly written, i.e., not completely made up of empty verbiage, institutional mandates will indicate which of the three qualities is desired by members of the institution, qua members.

Policy decisions for an institution are then made on the basis of the ‘final arrow’, as depicted in Fig. 3.4. How we create a new policy, how we adapt or change a policy, how we analyse or criticise a policy, and how we justify a policy will then depend on the data table. We look to the data points that reinforce the quality sought in the mandate, and to those that pull away from the desired quality. Through policy we promote the data that point in the desired direction, and try to impede or discourage the data that point in the opposite direction.

4.2 The Institutional Compass Compared to the Ecological Footprint Measure

The compass is not better in all respects than the ecological footprint. They do different things. The ecological footprint is a measure (in terms of acres of arable land) of the energy and resources needed to maintain a life-style. Because of the common denominator, arable land, people’s life-styles, and to some extent institutions, can be compared by a score. This makes comparison numeric rather than purely qualitative. This is helpful for more reductionist and linear thinkers. It is better suited to computer-aided decision making.

Apart from missing some important things to measure in evaluating a life-style, the ecological footprint measure is indirect. There is a conversion from an activity to the amount of arable land needed for that activity. This makes the measure (proportionately) hostage to agricultural technology and practices. As farming technology changes, so will the measure. This is not too bad, since, presumably, the measure decreases for everyone uniformly as the technology improves efficiency. But of course, in the practice of agriculture, this is not the case. Not everyone has the same access to the same technology instantaneously, and different crops use different technology. So as a measure, the ecological footprint cannot be realistic, even in terms of arable land.
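A toy sketch of this point, with invented conversion factors rather than the official footprint methodology: the score is a sum of activity amounts multiplied by hectares-per-unit factors, so a change in the technology behind just one factor shifts the score for a lifestyle even though the lifestyle itself has not changed, and it shifts different lifestyles unevenly.

```python
# A toy sketch (hypothetical conversion factors, not the official methodology)
# of how an ecological-footprint-style score converts activities into land,
# and how a change in farming technology shifts the score unevenly.

def footprint(activities, factors):
    """Sum of activity amounts times hypothetical hectares-per-unit factors."""
    return sum(amount * factors[name] for name, amount in activities.items())

lifestyle = {"beef_kg_per_year": 40, "wheat_kg_per_year": 120, "kwh_per_year": 3000}

factors_today = {"beef_kg_per_year": 0.03, "wheat_kg_per_year": 0.002, "kwh_per_year": 0.0001}
factors_better_wheat = {**factors_today, "wheat_kg_per_year": 0.001}  # only wheat farming improves

print(footprint(lifestyle, factors_today))         # score under current technology
print(footprint(lifestyle, factors_better_wheat))  # same lifestyle, different score
```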

In contrast to this indirect (converted) measurement, the compass takes into account directly what is happening in and around an institution. The types of activity or material goods or energy used are not converted. They are noted and entered in a data table. They are then compared to each other in terms of the three general qualities. The common denominator is qualitative and quantitative.

This is an important and subtle point. Any independent quantitative data can be entered in the compass table. So, anything thought to be relevant is included. Each data point is treated and analysed separately as possessing a quality, and a strength in that quality. As we accumulate data showing the same quality, the strength and degree of the quality in that institution stabilise. Eventual stability is an indicator of objectivity, since it shows that we are only reinforcing the same quality point or that new data is not significant. Stability in a sector arrow is a type of convergence. It is similar to, but more abstract than, Bayesian statistical convergence. When we have sector stability, we can be quite objective in saying that the institution enjoys that quality to this extent, and added data will no longer change the result. This is what we mean by stability. It is a meta-statistical conception of stability and objectivity.

To explain: begin with the concept of statistical convergence or stability. When trying to determine the percentage distribution in a population, say of a preference for x over y, we ensure that our sample is representative. We then ask members of the population until the percentage distribution stabilises. At that point, there is no reason to interview more people: the result will no longer change. Statistics manuals give us a rough sample size that is usually enough to reach stability. This is the number used to mark sufficiency. That is, when we have reached this number in a representative population, we know that we have asked a sufficient number of people for the statistic to be accurate within a reasonable error margin.

When we seek sector objectivity in the compass, we look for stability one level of generality up – at the level of the quality: how harmonious, exciting and disciplined is our institution? The answer will stabilise. Once stabilised, there is no point in adding more data from that sector. It is as objective as the data we use.
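The following sketch illustrates the stability idea under stated assumptions: each data point is given a score for a sector, a running aggregate is recomputed as points are added, and we stop once the aggregate no longer moves by more than a small tolerance. The 0 to 1 scores, the simple running mean, the window and the tolerance are illustrative choices of mine, not the construction given in Part II.

```python
# A minimal sketch, under assumed conventions, of sector stability: keep adding
# scored data points and stop when the aggregate no longer shifts appreciably.

def sector_strength_when_stable(scores, tolerance=0.01, window=5):
    """Running mean of sector scores; report it once it stops shifting."""
    running = []
    for i in range(1, len(scores) + 1):
        running.append(sum(scores[:i]) / i)
        if len(running) > window:
            recent = running[-window:]
            if max(recent) - min(recent) < tolerance:
                return running[-1], i        # stable strength, data points used
    return running[-1], len(scores)          # not yet stable: used all the data

harmony_scores = [0.7, 0.6, 0.65, 0.72, 0.68, 0.69, 0.70, 0.69, 0.71, 0.70, 0.69]
strength, n = sector_strength_when_stable(harmony_scores)
print(f"harmony strength ~{strength:.2f}, stable after {n} data points")
```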

In summary, if we wanted to use the compass to tell us about the quality of our lifestyle and our toll on the natural environment, we would measure this directly: how much land is actually used, how many chickens are actually raised and slaughtered, how many tons of fish are actually caught, what volume of waste is generated, and so on. All the data would be entered in a table and analysed.

Afterwards, we have the option of adding an explicitly normative dimension to our analysis, in terms of what it means to live “sustainably” or what it means to overshoot the limits of the planet if all eight billion of us were to live the same life-style. The limits on waste absorption, arable land and fishing stocks are each different and are not converted to a common measure. We might find that a certain life-style is not bad in terms of biodiversity loss, but is bad in terms of a particular sort of pollution in a very restricted area of soil. What mix of lifestyles the planet can support is an interesting question. It is better treated with the compass than with the ecological footprint measure.

4.3 The Institutional Compass Compared to Cradle to Grave Analysis

A cradle to grave analysis of a product, or even of an institution, takes into account the resources and waste of production, the use value, and how use affects the environment. It also takes into account the type of waste management needed, and the resources it requires, at the “end” of the life of a product. In particular, when we do a cradle to grave analysis, we look at the volume of land-fill and at the pollution that seeps into the soil or water, or into the air if the product is incinerated.

When constructing a compass, all of this information is relevant. So, the institutional compass uses all of this information but adds more, such as the social costs and benefits. Furthermore, the intention behind the two methods is a little different. Cradle to grave analysis is used for conforming to regulations (about pollution, for example). We make a separate analysis for each product. It can be used for certifying a company as “responsible” for the product even when it becomes waste, because the company is then in charge of making available some form of treatment of the product at the end of its use by the consumer. Cradle to grave analysis could be used to compare functionally interchangeable products, say, an electric scooter and a diesel-operated scooter. In this way, in some very limited situations, it could be used to help with policy decision making, critiquing or policy justification, but only in the limited cases of comparing functionally similar products. In contrast, the institutional compass can be used to compare very different products to each other, or whole institutions or regions.

We have compared the institutional compass with several alternative tools that help in decision making. Each has its place and merits. Each has its purpose and its limitations. The limitations of the existing decision aides are overcome, or better treated, by the institutional compass.

5 Objectivity, Superficiality and Depth of Analysis

We looked at the question of objectivity in Sects. 3.1.1 and 3.4.2. In Sect. 3.1.1 we were interested in the supposed objectivity of price, and saw there that price is often determined independently of the consumer, but that this is not enough to make it mind-independent. It is the latter that is needed to make something objective, and price is not objective in this sense. In Sect. 3.4.2, we discussed a different sort of objectivity, that of sector stability. This is reached when adding more independent data makes no difference to the compass reading of the extent to which an institution shows one of the general qualities (read from the length of the arrow) and the extent to which it sits squarely within that quality as opposed to leaning towards the others (read from the degree of the arrow). We shall see this clearly in Part II of the book.

The quality of the data is, of course, important for objectivity. This should be an obvious point. And we shall return to it. Rather than address it here, let us think of objectivity for the purpose of decision making.

As we look at the method for constructing a compass, it will become obvious that we can be less, or more, objective in our constructions. A degree of objectivity is necessary for good policy, but it is not sufficient. The degree of objectivity marks the soundness of the information we are using to make policy decisions. However, there is another, very important and separate, aspect to consider: how we treat that information.

We can create, adapt, analyse, criticise or justify a policy in a superficial manner by gerrymandering the representation of the statistics: by “adjusting” the data points that swing, lengthen or shorten the final arrow. We simply fudge the books by changing the degree or, often more importantly, the length of a few influential data-point arrows.

We can be less superficial. We can forgo gerrymandering and leave the initial analysis of data points intact. However, we then concentrate on the influential data points and address them directly in our policy. For example, if there is significant soil erosion, we import and distribute soil. Of course, this addresses the embarrassing statistic, but it will probably aggravate another. Thus, it will not significantly change the final arrow of the construction in the long term.

Or, more deeply, we can look at the underlying causes of the influential data points.

How deep or superficial we want to be in policy decision making is a choice. But it is not a mere choice. In general, the soundness and longevity of a policy will depend on the depth of analysis. How superficial we want to be in our analysis and decisions depends on our ambitions for the longevity of the decisions.

  • Claim 1: The deeper the analysis of the compass reading, ceteris paribus, the greater the longevity of the policy.

So, we can make, analyse, justify, criticise or modify policy based on the final arrow and our ambitions for the institution. This is the primary purpose of the institutional compass. We shall see other purposes to which it can be put in Part III.

6 Comparing Institutions Using the Compass

If we have only one compass constructed, then it gives us a holistic and qualitative picture of an institution at a particular time (the time of data recording). The compass is more informative when used for comparisons between the same institution at different times, or between similar institutions in the same or different contexts.

  • Claim 2: The compass can be used as a “common measure” for comparing institutions. The comparison is qualitative, quantitative and holistic.

Once the data has been aggregated into one simple piece of information, namely that the institution displays one quality more strongly than the others, and that it does so with a certain strength, we can then use this information to compare the institution to others. By comparing compass readings, we learn that institution x displays a different general quality from institution y, or that they share the same general quality but differ (or are the same) in strength.

The measure is different from what many of us are used to because it is not purely quantitative, but both quantitative and qualitative. It is also not a binary comparison in the form of good and bad, although we might prefer one of the general qualities over the others, and therefore associate it with “good” by ranking our preference for that quality; a bit like preferring red to brown. Red is not better than brown per se, but I might have a preference for it and rank it higher, especially within a chosen context. What is good and what is bad is sensitive to the context and purpose of an institution. So, one quality is good in one institution, in a particular cultural setting, at a particular time-period, while another is bad in that cultural setting at that time.

Drawing this out and making it explicit is part of the turn from normative to descriptive. That is, there is a difference between making the normative claim: “people should tidy up after themselves when eating in the common room”, and saying “eighty percent of employees agree that people should tidy up after themselves when eating in the common room.” The first is normative, as indicated by the word “should” that sets a standard. The second is descriptive in that it just records what a percentage of people claim. In a description, no normative judgment is passed, no norm is set. Norms are clearly ideological. Descriptions are more objective since they just record a fact that can be verified. In making the turn from a norm to a description on a data point, we add objectivity to it. We remove our own judgment or feeling, and replace it with a recording of how it is that a percentage of the people feel about the fact. The turn adds ideological transparency in the sense that we now have a better sense of the ideological orientation of the population we recorded or questioned. The ideological transparency brought about by the turn elicits philosophical debate, and this is needed when making policy decisions in a complex setting where many people are affected, possibly for many generations.

  • Claim 3: The turn from normative to descriptive in considering our ideological orientations has two roles. It makes the construction of the compass more objective, and it elicits philosophical debate that should not be avoided. These roles are a strength of the compass construction, not a weakness.

In the act of adding a partly ideologically informed explanation as to how or why we think an institution shows a particular quality, we turn a normative claim into a description. This is a way of adding objectivity to the eventual analysis and of helping with eventual policy recommendations. We shall discuss objectivity in many places in this book. The reason is that in policy, which is partly political and partly based on the real situation that faces us, it is important to be very careful about when we are objective, subjective and relatively objective.

Objectivity comes in degrees and types. It is important to be as objective as possible for scientific reasons. On the other hand, policy is not a pure and hard science. It is political and cultural. We should not ignore this aspect of policy in our decision aide. Instead, we should be aware of it, and keep the varying ideological, cultural and emotional reactions in their right place when making decisions. This is what we aim to do with the compass. For example, when writing up an analysis of the final reading of the compass, we should explain our own ideological orientation, since this allows readers to correct for the inevitable bias in analysis. The correction adds to the objectivity. When we are explicit about our ideological orientation, we invite discussion. We try to persuade others, and we might change our own minds in the light of rational and persuasive reasons. We then have the chance to resolve our differences, be reconciled to the compromises we are asked to make, or find creative solutions for resolving differences. If we hide or disregard our ideological orientation, then we miss too much important information when making policy decisions. Leaving out data in the name of objectivity is an error when making policy decisions, since decisions are political. To avoid the error, we seek a comprehensive decision aide.

7 Holism and Objectivity

The comparison of compasses for institutions is holistic in the following senses. First, it is comprehensive. That is, we can continue to add information to hone the final reading and make it more accurate, and therefore more objective. What “more objective” means is the sense of convergence I referred to above in Sect. 3.4.2. Second, we can analyse the same institution from different perspectives, from inside the institution and from outside, from the point of view of anyoneFootnote 8 affected by the institution. In other words, we look at the institution together with its greater context. Third, the comparison is quantitative and qualitative. A purely quantitative measure is not holistic; it is one-dimensional. Fourth, we can use very different data from one compass to another, and still make an informative comparison. This makes compass construction sensitive and tailored to the context of the institution. This is an important point that will emerge as we understand the compass construction better. The objective comparison comes at the level of the qualities.

In compass construction, data is classified as belonging to one of the three sectors. Data from the same general quality-sector is aggregated to form what I call a “sector arrow”. These are what pull against each other to form the final arrow, which is an aggregation of the three. So, it does not matter which particular data we put in a sector; what matters is that we have enough good data in each sector to represent, more or less objectively, the degree of that general quality of the institution.
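One simple way to picture the sector arrows pulling against each other, offered only as an illustration: treat each sector arrow as a vector anchored at an assumed sector centre and take their vector sum as the final arrow. The 120-degree placement of the centres, the 0 to 1 strengths and the vector-sum rule are my assumptions, not the book's own aggregation algorithm, which is given in Part II.

```python
# An illustrative sketch only: three sector arrows aggregated into a final
# arrow as a vector sum, with the sector centres placed 120 degrees apart.

import math

SECTOR_CENTRES = {"harmony": 60.0, "excitement": 180.0, "discipline": 300.0}

def final_arrow(strengths):
    """Aggregate sector strengths into a final (angle in degrees, length)."""
    x = sum(s * math.cos(math.radians(SECTOR_CENTRES[q])) for q, s in strengths.items())
    y = sum(s * math.sin(math.radians(SECTOR_CENTRES[q])) for q, s in strengths.items())
    return math.degrees(math.atan2(y, x)) % 360, math.hypot(x, y)

angle, length = final_arrow({"harmony": 0.7, "excitement": 0.4, "discipline": 0.3})
print(f"final arrow: {angle:.0f} degrees, length {length:.2f}")  # lands in the harmony sector
```

On this picture, balanced sector strengths largely cancel each other out, which is why a short final arrow indicates balance between the three qualities.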

In this way we can compare things that are otherwise very unlike each other, without falling into the trap of misrepresentation by omission. This is a conceptual trap we fall into when we think that we always need to use the same data points to make comparisons. We are caught in the trap with other multi-criteria decision aides because the choice of data points is fixed in advance, or has to be available in each institution that is compared. This means that we miss important data particular to one institution and absent from another. So, one institution is not comprehensively represented by the aide because we used what we thought was representative data, when it is not. The important difference is between representative data available across institutions and a comprehensive suite of data for an institution.

By being able to add any data relevant to that institution, we avoid the problem that we encounter with single-criterion comparisons, or even with multi-criteria comparisons, when the list of points of comparison is fixed in advance. Such lists are good for certification, but not for holistic evaluation that adapts to the particularities of the institution. For the latter, I sometimes use the terms: “context”, “milieu” or “cultural sensitivity”.

For example, we might compare countries on their physical health statistics, and insist on all and only the following: longevity of the population, survival rate of babies through their first year, percentage of the population dying from heart-attacks, and obesity. Under this particular selection of data, we will get very different results for countries’ health situations than if we choose a different suite: percentage of the population that contracts a cold or flu every one or two years, longevity of people living in the lowest twenty percent of income, percentage of people who die from lung cancer and percentage of people who die from malnutrition. The lesson to learn is that what counts as representative data, say, of the health of a community, is a good general indicator but might fail in important particular instances.

The point I am trying to make is that if we choose a particular set of data points in the name of objective comparison, we will sometimes get perverse results. Objectivity should not be confused with sameness. This is what we have learned from using gross domestic product per capita as a measure of the well-being of a country. The GDP measure is objective, in the sense of being calculated roughly the same way in each country, but as a measure of well-being it is not accurate. It is an indirect and quite imperfect measure of well-being after a disposable income of $10,000–$15,000 per year per capita (Jackson, 2009, 56–59). See Figs. 3.1 and 3.2. We find the same shortcoming in carbon footprint measures, in environmental impact assessments, in cradle to grave analysis and even in the ecological footprint assessment.

With the institutional compass, I am proposing something radically different. Give me whatever data you have, with enough in each sector that the sector arrows stabilise, and I’ll generate a compass reading that is appropriately objective. The objectivity is not met at the level of “same data” but at the level of “aggregation of data that indicates one of the three general qualities of the institution” plus “the aggregation of the three indicated qualities”.

At this stage, the reader will have many questions. Remember that this introductory part is to give a general impression, not answer all of the questions. They will be answered in due course as we look at the construction in detail and as we look at the extensions and adaptations.

8 Originality and Contribution of the Institutional Compass

The institutional compass is original in the way in which objectivity is conceived, in the technique of data analysis, in the representation of data for purposes of communication, in the possibility of explicitly adding an ideological orientation (a normative view) on top, in the policy guidance it offers, and in its several adaptations.

We discussed objectivity in the last section, and the issue will re-surface. Here, let us turn our attention to the other respects in which the institutional compass is original.

The data analysis is original because it is in terms of qualities and is culturally sensitive. The mathematical algorithm used in the method of aggregating the data is original. The representation of the aggregated data is simple and original; the interaction between the representation and the data is original, since there is a feedback loop. Adding the explicit interaction between descriptive judgements and normative judgments in the context of decision aides is original. Existing multi-criteria decision aides avoid the normative in the spirit of objectivity, but users of such aides forget that we can turn any norm into a description. The policy guidance is quite explicit: we have already weighted the data in terms of importance. This is clear when we return to the original analysis. The several adaptations are also original, since the basis for the adaptations is original.

This introduction was meant to give the reader an overall impression. I gave no details on the method of construction. I hope only to have piqued your curiosity to find out how this new method is carried out. We now go into more depth, repeating some of what has been already said, but now in a focussed way, to lay out in detail how to construct an institutional compass. We start with some background concepts.