1 Introduction

This paper examines some issues concerning the use of trust assumptions to underpin the design of secure online systems.

Trust is an essential ingredient in any distributed system: Alice cannot directly inspect the state of remote parts of the system, so she must trust Bob to tell her. So far so good—but what if Bob is trusting Carol? Does this mean that Alice has to trust Carol too?

This didn't matter in the old days, when systems were monolithic. The system designers specified a uniform security policy, which set out whom everybody trusted, and for what purposes. Those days are gone. Online systems are increasingly built using repurposed components, by co-operating participants who have different agendas, different beliefs about what is acceptable—or even possible—and very different views about whom they trust.Footnote 1 There is no “lowest common boss” to whom conflicts can be appealed, and while governments may be participants, they are no longer in charge and in most cases don't really want to be. And the stakes are high: online systems broker a vast amount of sensitive personal information. This sensitive information includes Bob's trust assumptions, which he may be very unwilling to reveal to Alice.

But trust involves risk. If Alice has to trust Carol whenever Bob does, but has no way of knowing whom Bob is trusting, then Alice has no way to control the risks to which she is exposed. This is the problem that we call the promiscuous transitivity of trust.

The keystone of this paper is an argument that trust is not promiscuously transitive. According to this argument, trust turns out to be an intrinsically intensional notion, one which involves a doxastic agent and the world as that agent understands it to be: trust cannot be unpacked into a purely extensional relationship between an agent and the world as it actually is. However, inferences derived from trust have an intrinsically weaker epistemic status than knowledge, and agents adhering to different trust policies may have mutually inconsistent understandings of the world.

The second part of the paper unpicks some of the consequences of this for the design of online systems that are required to comply with a range of mutually incompatible, and only partially disclosed, security policies. Our perhaps slightly surprising conclusion is that the desire of agents to keep their trust assumptions private can potentially assist the seemingly impossible task of building such systems. However, the resulting approach to system design is more akin to diplomacy than to engineering.

Compared to the vast amount that has been written on, say, knowledge or truth, the philosophical literature has historically had surprisingly little to say about trust.Footnote 2 Fortunately, that situation is now changing, and recent years have seen an explosion of interest in the philosophy of trust, particularly of trust on the internet. I'm not going to review this literature systematically here, but Simpson (2012) gives a superb genealogical map of the different ways in which the concept of trust has been evoked.

In this paper, I use the word trust in a number of different but interlocking senses,Footnote 3 some of them quite technical. Although I maintain that each of these uses has enough in common with the folk-concept of trust to justify using the word “trust” for it (rather than, say, “squibble”), I make no attempt to give any complete unpacking of trust. Nor do I seek to establish a semantic monopoly—there is plenty of room for others to use the word differently. However, I shall mean what I stipulate at each point.

The connotations of “Alice trusts Bob” that I exploit systematically are, firstly that Alice believes (or at least, commits to behave as if it were the case) that Bob's actions will respect Alice's interests, in some highly context-dependent sense, and secondly that Alice is nevertheless open to the possibility that Bob might (might actually, rather than counterfactually) act otherwise, in such a way that Alice suffers some damage to her integrity.

Similar remarks about stipulation apply to the term “believes”. In this paper, we deliberately draw no clear distinction between human principals and automated agents acting on their behalf. Those concerned about attributing opinions to automata can read “believes” as “is authorised to act as if”, following in the tradition of Burrows et al. (1990), Abadi et al. (1993), etc.

Here is a roadmap for the paper. We start with a quick review of the essential role played by trust in distributed systems and the reasons why transitivity of trust is sometimes enabling and sometimes corrosive. Next, we disentangle a particular conception of trust, one we claim will turn out to be useful to the system designer, from a number of other concepts (reliance, trustworthiness, jurisdiction) with which trust is often naturally aligned. We then demonstrate, via a flawed proof giving rise to a counterexample, that this conception of trust is not promiscuously transitive. It turns out that this is the case precisely because the notion of trust we are using is intrinsically intensional.

Next, we examine in more detail the implications for system design of users being reticent about their trust assumptions. A consequence of reticence, in the light of the intensional nature of trust, is that trust-derived beliefs have an intrinsically weak epistemic status that generally cannot be traded up to knowledge even in principle. We argue that the resulting ambiguity, about which purpose is served by the various measures introduced to counter perceived threats, can be exploited so as to provide system designers with more options and make it easier for users with different agendas to collaborate. Our conclusion is that it doesn't matter if there is no consensus about why the system is safe to use, so long as each of the participants has appropriate reasons to believe that it is.

2 The Need for Transitivity

Systems have requirements, which we can think of as desired behaviours for the system. In terms of the design of the system, requirements generally correspond to predicates that the global state of the system must satisfy. In a distributed system Alice cannot directly inspect the state of remote elements of the system. Thus, she is dependent upon representations, in the form of messages informing her, for example, that the state of Bob's part of the system satisfies a particular predicate at a particular time. These messages are generally sent as part of a protocol for changing the global system state.Footnote 4

Alice's security policy must specify the predicates that each part of the system is required to satisfy in order for her transaction to be carried out securely. But her security policy must also allow Alice to determine whether or not she is justified in proceeding to the next step in the protocol on the basis of the messages she has received: in particular, should Alice believe Bob's representation or not? Alice will need to make some trust assumptions in order to perform any non-trivial distributed operations at all.Footnote 5

Many approaches to online trust are based upon some notion of reputation. The idea is that Bob decides whether or not to trust Carol based upon the reputation that Carol has. Reputation is initially a feature of the mechanism by which Bob is introduced to Carol, for example via a trusted third party, such as a close relative or an online trading site. This reputation is modified in the light of Bob's subsequent beliefs about Carol's dealings with himself and with other principals, and this in turn affects the initial reputation that Carol acquires when Bob introduces her to Alice, who trusts Bob.

On this view of trust, it is natural to seek ways of enabling the hand-off of trust by making reputation more public. This enables others to trust someone who is already highly trusted. Finding the right mechanisms for doing this might be problematic (in a purely technical sense), but it seems clear that it would be desirable to facilitate the transitivity of trust, so that:

If A trusts B and B trusts C, then it is possible for A to trust C.

Here is a premonition that transitivity of trust is not without its problems:

Bob trusts Carol to service his car. Alice trusts Bob to recommend a suitable garage. Bob recommends Carol to Alice, based on the excellent service he has received. Is Alice justified in trusting Carol to service her car?

In fact, Carol really fancies Bob and would like to go out with him, which is why he has had such a good experience. Bob's trust in Carol is justified, and Bob has been honest with Alice, but nevertheless, Alice ends up with an inferior deal.

The reputation-based approach to trust focuses upon what we might describe as the supply side—how can we provide more trust? I want to start my argument by looking instead at the demand side of trust—why do we need to have trust online at all? Looking at the electronic protocols in which trust features, the answer is almost always that trust is being used as a substitute for knowledge.

A consequence of using trust as a substitute for knowledge is that there is no trust without risk: to the extent that we can be certain, we have no need for trust. Helen Nissenbaum (2001) puts this very well: suppose that Bob is a psychopath who has been shackled to the wall. Although Alice may feel secure while she is in the room with Bob, this is not because Alice trusts Bob.Footnote 6

Trust thus comes with a price tag. The cost of using a trust assumption is exposure to risk. The situation online is similar to that described by Piero Ferrucci (2006):

“To trust is to bet. Each time we trust, we put ourselves on the line. If we confide in a friend, we can be betrayed. If we put faith in a partner we can be abandoned. If we trust in the world, we can be crushed. Far too often it ends that way. But the alternative is worse still, because if we do not put ourselves on the line, nothing will happen.”

Usually, when Alice chooses to trust Bob in the online world, it is not because she particularly desires to accept the risk, but because there is no better available alternative: if Alice does not make some trust assumptions, she incurs the opportunity cost of being unable to complete the transaction she wants to make.

On this understanding, the less our system requires Alice to trust, the better, because she is then exposed to less involuntary risk that something will go wrong. Limiting the scope of the trust assumptions required by the design of a system is called trust confinement. We cannot do without trust online, any more than we can in society generally. But it is desirable that those who use a system be able to keep track of the risks to which they are being exposed and to choose wisely in which risks to invest.

3 The Problem with Promiscuity

Different users of the system generally have different requirements, corresponding to their different agendas and security policies.Footnote 7 For example, Alice wants the system to give her money when she puts her card into a cash machine, even though her account is at another branch. Alice's primary concern is being able to withdraw money when she does have it. On the other hand, the bank is concerned to make sure that Alice cannot withdraw money that she does not have. It is generally very hard to design systems so that they will demonstrably meet the requirements of disparate groups of users who subscribe to different security policies and trust assumptions.Footnote 8

Some users of the system have desires that correspond to behaviour that the designers do not wish the system to exhibit. For example, Rob would quite like to be able to take Alice's money out of her account without her permission or prior knowledge. Such usersFootnote 9 are called attackers. Note that Rob may also be a legitimate user of the system, with an account at the same branch as Alice.

In order to satisfy Alice's requirements, the system must behave in a way consistent with Alice's security policy. Alice needs to be able to prove that the system will comply with her security policy—for example that only she, and merchants that she has authorised, are able to remove money from her account. To this end, Alice will need to make some trust assumptions in order to be able to prove that the system (as specified) will exhibit the behaviours that she wants it to have.Footnote 10

Alice thus has a trust confinement problem: in order to control the risks to which she is exposed, she needs to be able to ensure that assertions such as

< Alice trusts Rob >

cannot somehow find their way into the set of statements that can be proved about the system.

However, if trust is promiscuously transitive, in the sense that we have:

If < < A trusts B > and < B trusts C > > then < A trusts C >

as a rule of inference, then Alice is in trouble: Bob could decide to trust Rob without Alice's consent—or even knowledge—and Alice would then be trusting Rob even though she believes that Rob is an attacker—and worse, if Bob does not share his trust assumptions with Alice, then Alice may not even be aware that she is trusting Rob until it is too late.
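To see concretely how the promiscuous rule defeats trust confinement, consider the following minimal sketch, in which trust is modelled as a directed graph and promiscuous transitivity as reachability. The encoding, the function name and the particular edges are illustrative assumptions of mine, not part of any formalism in this paper.

```python
def trusts_transitively(direct_trust, who, whom):
    """Under promiscuous transitivity, `who` trusts `whom` exactly when
    `whom` is reachable from `who` along direct-trust edges."""
    seen, frontier = {who}, [who]
    while frontier:
        current = frontier.pop()
        for target in direct_trust.get(current, set()):
            if target == whom:
                return True
            if target not in seen:
                seen.add(target)
                frontier.append(target)
    return False

# Alice's own policy: she trusts Bob; she regards Rob as an attacker.
direct_trust = {
    "Alice": {"Bob"},
    "Bob": {"Carol"},
}
print(trusts_transitively(direct_trust, "Alice", "Rob"))   # False

# Bob decides to trust Rob, without Alice's consent or knowledge.
direct_trust["Bob"].add("Rob")
print(trusts_transitively(direct_trust, "Alice", "Rob"))   # True: Alice now
# "trusts" Rob, although nothing in her own policy has changed.
```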

Hidden trust assumptions lie at the root of many security problems. As an example, consider key distribution: how can Edwina be sure that she is using the correct public key for George, whom she has never met in real life?

A classical hierarchy of public key certification authorities (A, B and C) is shown in Fig. 1, along with some of their clients (D, E, F and G). The solid arrows show direct knowledge of public keys, and the dotted arrows show trust about key provision.

E wants to learn the public key of G. E trusts B to provide E with the correct public key for G; however, B does not know the public key of G. B trusts A to provide B with the correct public key for G, but A does not know the public key of G either. C knows the public key of G, so in order for E to prove that she has the correct public key for G, E must assume that A trusts C.

Thus, as well as assuming that trust is transitive, clients must also assume that trust between certification authorities is symmetric. An immediate corollary of this is that all certification authorities in the tree trust one another, whence clients must trust the entire tree.Footnote 11 At this point, the hierarchical approach to key certification becomes hopelessly risky.
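The trust explosion can be made concrete with a small sketch of the Fig. 1 hierarchy. The edge sets below are my reconstruction from the description in the text (which authority directly knows which keys, and who trusts whom for key provision); they are assumptions for illustration rather than data given in the paper.

```python
knows_key_of = {            # solid arrows: direct knowledge of public keys
    "A": {"B", "C"},
    "B": {"A", "D", "E"},
    "C": {"A", "F", "G"},
}
trusts_for_keys = {         # dotted arrows: trust about key provision
    "D": {"B"}, "E": {"B"},
    "F": {"C"}, "G": {"C"},
    "B": {"A"}, "C": {"A"},
}

def can_obtain_key(client, subject, trust):
    """Follow the client's trust edges until reaching an authority that
    directly knows the subject's public key."""
    seen, frontier = {client}, [client]
    while frontier:
        node = frontier.pop()
        if subject in knows_key_of.get(node, set()):
            return True
        for nxt in trust.get(node, set()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

print(can_obtain_key("E", "G", trusts_for_keys))   # False: the chain E -> B -> A stops at A

# E must additionally assume that trust between authorities is symmetric,
# i.e. that A trusts C because C trusts A.
symmetric = {k: set(v) for k, v in trusts_for_keys.items()}
for node, targets in trusts_for_keys.items():
    for t in targets:
        symmetric.setdefault(t, set()).add(node)

print(can_obtain_key("E", "G", symmetric))         # True, but now the whole tree is
# reachable from E: she has ended up trusting every authority in it.
```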

This type of trust explosion is not confined to hierarchical approaches: analogous problems arise in key handoffs based on web-of-trust models such as Phil Zimmermann's Pretty Good Privacy.Footnote 12

Fig. 1 A public key certification hierarchy

A major advantage of public key over symmetric key cryptography is the capability it provides for localisation of trust, but this localisation is lost if Alice's trust can be handed off promiscuously, thereby exposing Alice to risks of which she is not aware.

How can Alice limit the risks to which she is exposed or even maintain awareness of what these risks are? If trust is promiscuously transitive, then trust confinement appears to be an intractable problem.

4 Disentangling Trust

How can we limit the transitivity of trust? Some approaches to unpacking the semantics of trust have the effect of forcing trust to be promiscuously transitive, and I shall argue that such approaches are mistaken. Before we pass on to this, it will be helpful to distinguish trust from some other notions with which trust is often naturally aligned.

The first of these is reliance. If Alice relies upon Bob to carry out some operation, and Bob in turn relies upon Carol, then Alice is relying upon Carol whether she knows it or not. However, it does not follow that any trust is involved. Reliance need not involve openness to risk in the same way as trust: Alice may have a way of checking whether or not Bob has done what he is supposed to do. For example, Alice is not authorised to issue an ISO/IEC 27001 security compliance certificate for her software, whereas Bob is. Bob is relying upon Carol to carry out the appropriate tests before Bob issues the certificate. However, the tests that Carol is required to carry out are in the public domain, and if Alice has the required skill,Footnote 13 she can carry out the same tests and determine whether or not Bob has acted correctly, regardless of whether Alice is aware of Carol's involvement. The case of online transaction validation is similar: for example, Alice may be able to verify, but not generate, Bob's asymmetric key signature on the transaction. Conversely, Alice may trust Bob to do something for her that she could alternatively do perfectly well for herself: in this case, she is not reliant upon Bob.

Another notion from which trust should be distinguished is belief in trustworthiness. Saying that Alice trusts Bob is conceptually a very different statement to saying that Alice believes that Bob is trustworthy.Footnote 14 There are many scenarios in which these two concepts have a natural alignment.Footnote 15 However, I claim that for some of the scenarios discussed in this paper, forcing trust to align with trustworthiness in this way distorts the semantics of one or the other to the point where a useful word is wasted.

That Alice believes that Bob is trustworthy does not entail that Alice actually trusts him: Alice may have no need to do so or may prefer to trust somebody else. Conversely, the fact that Alice trusts Bob online does not entail that Alice believes that Bob is trustworthy: Alice may trust Bob because she has no choice, because she prefers accepting the risk to foregoing the transaction or because Alice's security policy says that her insurance underwriter will accept liability for the consequences if Bob does something wrong.Footnote 16 Of course, Alice may be more willing to trust Bob if she believes that the risk is lower, but this is not quite the same relationship between the concepts.

Finally, trust is not the same as belief in jurisdiction. Many of the messages which are exchanged in our protocols are effectively performative speech actsFootnote 17: if Bob has control over some aspect of the system state, such as key allocation, then the fact that Bob asserts “Alice, your key is K” is precisely what makes that statement true. We need to distinguish carefully between:

Alice believes < X because < Bob says X > >

and the more general

< Alice believes X > because < Bob says X >

The first of these statements expresses Alice's belief in Bob's jurisdiction over X; the second is more like what we intend by trust. In the first case, Alice might be factually wrong (if it turns out that Bob saying X is not what makes X true after all); in the second case, Alice cannot be factually wrong (since Alice does, as a matter of fact, believe X because Bob said X) although she may be wrong (in a different sense) to have trusted Bob about X if it turns out that her trust was misplaced.

In the next section, we construct an invalid argument for the promiscuous transitivity of trust. We shall see that this “proof” of transitivity fails in an interesting way, which sheds light on the intensional nature of trust.

In the following sections, we shall exploit this insight to explore a novel understanding of secure system design—specifically to draw some lessons about how to model systems that are going to be shared by groups of users who do not share their security policies.

5 Trust and Reticence

For the purpose of this section, we shall formally model trust as an inference rule, along the lines identified towards the end of the previous section:

A trusts < B about X > =

If < A believes < B says X > > then < A believes X >

where each schema applies to all X satisfying some syntactic condition that depends on A and B.
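As a rough operational reading of this rule (the Principal class and tuple encoding below are my own illustration, not the paper's formalism), a trust assumption acts as a licence to convert a belief that Bob says X into a belief that X; representations from speakers not covered by the trust policy are merely recorded.

```python
from dataclasses import dataclass, field

@dataclass
class Principal:
    name: str
    beliefs: set = field(default_factory=set)
    trust: set = field(default_factory=set)   # pairs (speaker, statement) this principal trusts

    def hears(self, speaker, statement):
        """Record the belief <speaker says statement>, then apply the trust rule."""
        self.beliefs.add(("says", speaker, statement))
        self.apply_trust()

    def apply_trust(self):
        """The rule: if I believe <B says X> and I trust <B about X>, then I believe X."""
        for belief in list(self.beliefs):
            if isinstance(belief, tuple) and belief[0] == "says":
                _, speaker, statement = belief
                if (speaker, statement) in self.trust:
                    self.beliefs.add(statement)

alice = Principal("Alice")
alice.trust.add(("Bob", "key(Carol) = K1"))

alice.hears("Bob", "key(Carol) = K1")
print("key(Carol) = K1" in alice.beliefs)   # True: the trust assumption is actuated

alice.hears("Rob", "key(Carol) = K2")
print("key(Carol) = K2" in alice.beliefs)   # False: no trust assumption covers Rob,
                                            # so his representation is merely recorded
```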

Schneier (2012) gives some everyday examples for X:

“I trust Alice to return a $10 loan but not a $10,000 loan, Bob to return a $10,000 loan but not to babysit an infant, Carol to babysit but not with my house key, Dave with my house key but not my intimate secrets, and Ellen with my intimate secrets but not to return a $10 loan. I trust Frank if a friend vouches for him, a taxi driver as long as he's displaying his license, and Gail as long as she hasn't been drinking.”

More formally, X may stand for an assertion about the present (or future) state of part of the system, a predicate or a relation (which may be a second-order relation) involving A, B and possibly other parties, or even another schema.Footnote 18 We remark in passing that this approach encompasses delegation.Footnote 19

A more systematic formal development could proceed along the lines of Primiero and Taddeo (2012), adding epistemic contexts and justifications to the environments of the schemata, but I do not intend to say too much about the further details of the formalism, as I want the argument to be quite general. What we have specified here will suffice for the purpose at hand.

At first sight, our trust inference rule appears to make trust a consequence of belief in trustworthiness after all: for if we unpack

B is trustworthy about X =

If < B says X > then X

then it follows that

If < A believes < B is trustworthy about X > > then

If < A believes < B says X > > then < A believes X >

But we need to be careful here: the inference rule given above for trust addresses only one of trust's crucial properties. What is being modelled here is what we might call latent trust: Alice would believe X if Bob said X, and this is all that can be derived from belief in trustworthiness. The case where Alice actually does believe X because she believes that Bob did say X, we might call actuated trust.Footnote 20

It is the possibility that trust might become actuated that makes trust risky, and it is arguable that without this possibility, Alice isn't actually trusting Bob at all, even though (counterfactually) she might have trusted him.Footnote 21

For example, it may be that Alice already knows X, won't ever make a decision that is based on X, prefers to trust somebody other than Bob about X or knows that Bob will never say X: in all these cases, the inference rule has no effect.

Paradoxically, this is arguably also the case if Alice can be certain that Bob is trustworthy—for example, because she has personally certified the code that she herself loaded into his tamper-proof hardware—the argument is that Alice doesn't actually trust Bob, because there is no risk: the effect of a cast-iron proof of trustworthiness is to shackle Bob to the wall.

Next, we define two subsidiary notions:

A trusts < B competent about X > =

If < A believes < B believes X > > then < A believes X >

A trusts < B honest about X > =

If < A believes < B says X > > then < A believes < B believes X > >

Certainly, trust in competence and trust in honesty jointly entail trust.Footnote 22
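The entailment can be checked mechanically in the same illustrative encoding as before (again my own, not the paper's): chaining the honesty rule with the competence rule reproduces the effect of the trust rule given at the start of this section.

```python
def close_beliefs(beliefs, honest_about, competent_about):
    """Apply the honesty and competence rules until nothing new is derived."""
    changed = True
    while changed:
        changed = False
        for belief in list(beliefs):
            if not isinstance(belief, tuple):
                continue
            kind, speaker, statement = belief
            # honesty:    <B says X>     ->  <B believes X>
            if kind == "says" and (speaker, statement) in honest_about:
                derived = ("believes", speaker, statement)
                if derived not in beliefs:
                    beliefs.add(derived)
                    changed = True
            # competence: <B believes X> ->  X
            if kind == "believes" and (speaker, statement) in competent_about:
                if statement not in beliefs:
                    beliefs.add(statement)
                    changed = True
    return beliefs

alice_beliefs = {("says", "Bob", "X")}
close_beliefs(alice_beliefs, honest_about={("Bob", "X")}, competent_about={("Bob", "X")})
print("X" in alice_beliefs)   # True: honesty plus competence gives the effect of trust
```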

Now let us attempt to establish promiscuous transitivity of trust as a derived inference rule and see what goes wrong.

Assume:

A believes < A trusts < B about X > >

A believes < B trusts < C about X > >

A believes < C says X >

Required:

A believes X

Remember, we are going to establish that the conclusion does not follow, so it is all the better if we do this under strengthened premises. We therefore choose to substitute, for the assumption that A trusts B, the jointly stronger assumptionsFootnote 23:

A believes < A trusts < B competent about X > >

A believes < A trusts < B honest about X > >

We suppose that A has an excellent inference engine, which reasons as follows:

Suppose that B were aware of the facts, i.e. that B believes what A believes about what C said. Then B would believe < C says X >. Therefore, since

< B trusts < C about X > >

B would believe X under circumstances that are, in fact, true. Therefore, since

A believes < A trusts < B competent about X > >

A would believe X under circumstances that are, in fact, true.

Thus, from < A believes < C says X > >, A can deduce < A believes X >. Therefore, A believes < A trusts < C about X > >. QED.

Clearly, something is wrong: nowhere have we used the assumption

< A trusts < B honest about X > >.

Our “proof” would work even if B were not honest.

Here is a counterexampleFootnote 24 to demonstrate that the conclusion really is not entailed by the premises:

Carol holds the back-up key for Bob's presence in the Cloud. Alice trusts Bob; Bob trusts Carol. Carol is threatened with a fatal power-down while Bob is offline.

Should Carol broadcast the key or destroy the key?

This reflects a common situation, where two security requirements conflict. The first requirement is an availability requirement: Bob must have access to the key. The second is a confidentiality requirement: only Bob may have access to the key. Broadcasting the key will ensure that the first requirement continues to be met, but will break the second. Destroying the key will ensure that the second requirement continues to be met, but will break the first. Which requirement is primary? This is specified by Bob's security policy, which Alice does not know, but which Bob trusts Carol to follow.

Carol says to Alice:

“Here is a message for Bob. Please tell him his key is 281478a6f9032e49.”

Bob knows that this means that Carol has successfully destroyed the key.

Carol's message said something different to Bob from what it said to Alice. Bob and Carol have deceived Alice. Alice trusts Bob, but Bob did not betray Alice's trust: Bob did not say anything to Alice or to anyone else.Footnote 25 Even if Alice believes that Carol will be honest with Bob, this does not require Carol to be honest with Alice.

What can we conclude from this? The moral of our failed proof is that reticenceFootnote 26 can block transitivity. What B would believe about what C says isn't just a question of fact.

A cannot deduce

< B believes < C says X to B > >

from

< A believes < C says X to A > >.

Our crude model of trust is therefore in need of refinement. A number of possibilities present themselves. One possibility is:

A trusts < B about X > =

If < A believes < B says X to A > > then < A believes X >

This reformulation leaves it open to A to make additional assumptions, if she chooses, of the formFootnote 27:

If < A believes < B says X to C > > then < A believes < B says X to A > >
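The difference the refinement makes can be shown with a minimal sketch (the tuple encoding and the particular scenario are illustrative assumptions of mine). Beliefs now record who said what to whom, and the refined rule fires only when the addressee is the truster herself; so even if promiscuous transitivity had somehow handed Alice a trust assumption about Carol, a statement that Carol addressed only to Bob would derive nothing for Alice.

```python
def crude(me, beliefs, trust):
    """Old rule: <B says X> actuates trust, regardless of addressee."""
    return {x for (speaker, x, addressee) in beliefs if (speaker, x) in trust}

def refined(me, beliefs, trust):
    """Refined rule: only <B says X to me> actuates trust."""
    return {x for (speaker, x, addressee) in beliefs
            if addressee == me and (speaker, x) in trust}

# Alice learns of something Carol addressed to Bob, and (for the sake of
# argument) transitivity has given her a trust assumption about Carol.
alice_beliefs = {("Carol", "the back-up key is safe", "Bob")}
alice_trust = {("Carol", "the back-up key is safe")}

print(crude("Alice", alice_beliefs, alice_trust))    # {'the back-up key is safe'}
print(refined("Alice", alice_beliefs, alice_trust))  # set(): Carol said nothing to Alice
```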

It is tempting to go on and consider how to refine our logic, but this would take us too far afield. We already have enough for what we need, and we turn now to considering the consequences of intransitivity for our agenda of secure system design.

The messages to take away from this formal interlude are that trust is not promiscuously transitive and that the intransitivity comes about as a result of the intensional nature of trust, combined with the reticence of principals about their trust assumptions.

6 Intensional Trust

Our conclusion from the previous section is that trust is a modality similar to an epistemic modality, indeed that trust is doxastic: statements about beliefs held by particular principals must be unpacked in terms of their reasons for holding those beliefs, not in terms of the facts which would make those beliefs true in the actual world.

The analysis of trust must thus consider intensional conditions and not just extensional properties of the system.

In particular, trust, like knowledge, is not referentially transparent. For example, Alice may trust Heosphoros but not trust Hesperos, because she does not realise that these two entities are the same in real life. One of the very problematic aspects of cyberspace, as distinct from the physical world, is the great difficulty involved in knowing when two entities are in fact the same.Footnote 28 This is why it is not safe for Alice to adopt a strategy of “trust by default” for remote individuals, trusting Rob until he proves himself untrustworthy, since Rob can simply re-present himself as Bob, and Alice has no way of knowing whether Bob and Rob are the same or different principals.Footnote 29
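Referential opacity is easy to exhibit in this setting. In the sketch below (the names and the identity table are hypothetical, introduced only for illustration), trust is recorded against names as Alice encounters them, while the mapping from names to underlying principals is exactly what she cannot compute online.

```python
alice_trusts = {"Heosphoros"}                      # trust is keyed by name

# Unknown to Alice, several names resolve to the same underlying principal.
real_identity = {"Heosphoros": "Venus", "Hesperos": "Venus",
                 "Bob": "Robert", "Rob": "Robert"}

def same_principal(name_a, name_b):
    # Alice has no reliable way to compute this online; it is shown here
    # only to make the opacity visible.
    return real_identity[name_a] == real_identity[name_b]

print(same_principal("Heosphoros", "Hesperos"))                  # True
print("Heosphoros" in alice_trusts, "Hesperos" in alice_trusts)  # True False

# "Trust by default" is therefore unsafe: once the name "Rob" is burned,
# the same principal can simply re-present himself under a fresh name.
distrusted_names = {"Rob"}
def trusted_by_default(name):
    return name not in distrusted_names

print(trusted_by_default("Rob"), trusted_by_default("Bob"))      # False True
```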

Distributed systems are ultimately built in order to enable individuals to extend the reach of their societal interactions. However, the various individuals and organisations interacting online increasingly have different requirements, different security policies, including different trust assumptions, and consequently may hold radically different beliefs about the properties of the system. The state of the system that Alice believes actual may be one that Bob believes impossible, and vice versa.Footnote 30

Even when Alice and Bob do share a belief, they may have radically different grounds for holding it—and they may be unwilling to share these grounds. There are many motives for not sharing information, including commercial confidentiality and considerations of personal privacy.

Among the information which they may not be willing to share is the content of their security policies: our security policy and our trust assumptions reveal a great deal about us, and we may not wish to violate our privacy by allowing other parties unfettered access to them.

As well as trust assumptions, security policies may also include knowledge that the principals are unwilling to share.Footnote 31 But in contrast with knowledge, which could in principle be shared,Footnote 32 beliefs based upon trust may be innately subject dependent: if Alice trusts reticent Bob, then she shares some of his beliefs but not all, and their full belief sets may be mutually inconsistent. Thus, there is in general no possibility of “trading up” trust-derived beliefs to knowledge.Footnote 33

What are the implications of this for secure systems design? We shall argue that the weak epistemic status of trust-based beliefs presents us with an opportunity as well as a problem. But in order to lay the ground for this argument, we must first examine the methodology of the design process in more detail.

7 Imaginary Threats with Real Effects

The conventional narrative of secure systems development goes like this.Footnote 34 Legitimate users of a system have requirements, which we can think of as predicates that the system state must be constrained to satisfy. Attackers may potentially perform actions that would cause these predicates to be violated. Such potential actions are threats. Threats are addressed by incorporating countermeasures into the design of the system. These countermeasures provide a security service that blocks the threat.

Clients of a telecommunications service need to know the correct account numbers to use for various online payment transactions. The service posts the up-to-date numbers on a publicly visible notice board. The requirement is that these numbers be correct.Footnote 35 An attacker wishes to substitute her own bank accounts for these numbers. The countermeasure is to provide a digital signature for the content of the notice board and a key certification service which ensures that only authorised signatures verify correctly. These mechanisms provide the security service of integrity.

Once a countermeasure is deployed, it becomes a part of the system. Consequently, threat analysis is a recursive process. In a mature system, most threats arise as a result of countermeasures placed in the system to counteract a different threat.Footnote 36

A warehouse has a secure area. The threat is that Rob, an unauthorised person, may enter the secure area. The countermeasure is that Alice, the authorised employee, must enter her unique pass number into a keypad to open the door. A new threat is that Rob may attempt to determine Alice's pass number by examining the keypad, for example using an infrared detector or fingerprint dust. A second countermeasure is that motion-sensitive video cameras are set to record anything in the vicinity of the door. An interactional threat is that Rob gains access to a discarded video recording and sees Alice's pass number being typed in. This imposes a new security requirement upon old video recordings: they must not be viewed by unauthorised persons. The designers of the countermeasures may not have adequately considered the interaction between them.Footnote 37

Thus, an imaginaryFootnote 38 threat can summon a real one into being: a particular countermeasure may be essential even though the corresponding threat is imaginary, because a user who falsely believes the threat to be real will not use the system unless the countermeasure is present; and that countermeasure, once deployed, may itself introduce new, real threats. Which threats are actually real and which are in fact imaginary is not relevant here: what matters are the beliefs of the individual principals, derived in accordance with their respective security policies and trust assumptions.

Alice believes that she is at risk from Carol revealing the secret key they share. Bob knows that Alice is not at risk, because Bob has secretly installed spyware in Carol's computer, which ensures that she cannot transmit the key. However, Bob cannot disclose this fact to Alice. In order to persuade Alice that it is safe to use the system, Bob must agree to use a key escrow service, which Alice believes will block the imaginary threat. However, unbeknown to Bob, the escrow service has been penetrated by the CIA. This exposes everyone, including Bob, to a real threat.

Principals may be reluctant to reveal which threats they regard as real and which threats they regard as imaginary. It is unhelpful to attempt to settle the status of a threat by an appeal to “facts”: after all, as already remarked, if principals could agree on the facts, there would be no need for trust at all, and we have already seen that beliefs mediated by trust cannot be cashed out in terms of extensional predicates on the system.

Consequently, although the first instinct of a systems designer dealing with a problematic inconsistency is to attempt to settle the matter by looking at how things actually are (the extensional properties of the state the system is actually in), such an approach cannot settle inconsistencies arising from incompatible trust assumptions.

The situation for secure distributed systems design now appears desperate: it is not possible for the principals to agree on the facts, and the principals themselves are unwilling to reveal their beliefs.Footnote 39 How can the system designer possibly satisfy Alice that the system is safe for her to use without inadvertently introducing countermeasures that Bob believes make the system unsafe for himFootnote 40?

We shall argue that the intensional need for reticence, which helps to control the transitivity of trust and limits risk, is a cause for hope as well as for despair. Far from needing to arrive at consensus about which threats are real, it suffices in practice to agree to useful public fictions. The issue is not whether a threat “really is” imaginary, but whether it justifies an “appropriate” countermeasure.

However, this argument requires us first to analyse more closely the relationship between countermeasures and threats.

8 Threat/Service Duality

There is a duality between security threats and security services.Footnote 41 To explain this duality, we shall use the example of authentication versus plausible deniability.

Alice explains to Bob that from time to time she will provide him with inside information texted from a series of pay-as-you-go phones. Alice explains the threat of plausible deniability: an impostor such as Alice's mutant twin sister mAlice might text Bob information claiming that it came from Alice. When Bob acts upon the information, Alice denies that the information came from her. Bob could not distinguish deception by mAlice from the case in which Alice had given him the information and then changed her mind about the wisdom of doing so.Footnote 42

Alice therefore provides Bob with a mechanism that provides an authentication service. This mechanismFootnote 43 is based upon a secret key that Alice shares with Bob. This allows Bob to verify that Alice is the source of a particular text message.

Alice has deceived Bob about her security policy: in fact, Alice needs to ensure that the leaked information cannot be attributed to her. For Alice, plausible deniability is the requirement, and unforgeable authentication is the threat. This is the reason that Alice has chosen an authentication protocol based upon symmetric key, rather than asymmetric key, cryptography.

Alice has provided Bob with a mechanism that allows him to verify that she is the source of the information that he receives. However, the secret key is shared, and so Bob can produce the authenticator for any message which Alice can. Thus, Alice's authentication cannot be passed to policeman Carol or reporter Darren unless they trust Bob implicitly.Footnote 44 Alice can deny giving Bob the message and claim that Bob sent it to himself. Alice has deceived Bob about which was the security requirement and which the threat.Footnote 45
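The design choice can be illustrated with a short sketch using a shared-key message authentication code from the Python standard library (the key and messages are of course invented). Bob can check the authenticator, but because he holds the same key he could equally well have produced it, so the pair of message and tag proves nothing to Carol or Darren.

```python
import hmac
import hashlib

shared_key = b"alice-and-bob-shared-secret"        # invented for the example

def tag(key, message):
    """Shared-key authenticator (HMAC-SHA256)."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

message = b"useful inside information"
t = tag(shared_key, message)

# Bob verifies that the text was produced by a holder of the shared key:
print(hmac.compare_digest(t, tag(shared_key, message)))    # True

# But Bob holds the same key, so he can compute an identical authenticator
# himself; showing (message, t) to a third party proves nothing about
# whether Alice or Bob created it.
bobs_tag = tag(shared_key, message)
print(bobs_tag == t)                                       # True

# A signature made with a private key held only by Alice would be verifiable
# by anyone yet forgeable by no one else, which is exactly the
# non-repudiation Alice is trying to avoid.
```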

A corollary of Threat/Service duality is that there is not a unique mapping between threats and countermeasures: just as there is more than one possible countermeasure to block a given threat, so for any given countermeasure, there can be more than one threat which it could plausibly be intended to block. Indeed, in many cases, a single countermeasure does block more than one threat. For example, a digital signature based upon an asymmetric key can be used to block both masquerade and sender repudiation.

The following hypothetical examples illustrate the difficulty of inferring a threat from a countermeasure. They should not be regarded as indicative of any actual bank's practice.

It requires two bank employees with different keys to open the safe. Is this because the bank does not trust its managers? Is the threat that the manager might take money from the safe for their own use?

An alternative explanation of the two-key rule is that it keeps the families of bank managers safe and reduces the risk that they will be taken hostage. The threat is that a bank robber will tell the manager to open the safe if she ever wants to see her children again. The need to coordinate the kidnapping of two families increases the number of shares into which the stolen money must be divided and the chances that at least one member of the gang is a police informer. The bank publishes the countermeasure widely (so that bank robbers realise that a single manager cannot open the safe.) However, the threat remains secret: the bank does not need to reveal whether or not it trusts its managers.

A video camera records the teller's hands as she carries out banking transactions. Is this because tellers might short-change customers, by deliberately miscounting cash and pocketing the difference later? This is the threat perceived by the customer and may be the reason that the customer regards the countermeasure as a good idea and is willing for it to be paid for out of the overheads on her account.

However, an alternative threat is that a customer waits until the cash is in the till before falsely accusing the teller of giving a receipt for an amount that is slightly too low, in the hope that the bank will simply pay up the difference rather than close out the till and take another teller off duty to reconcile the cash tray with the recorded transactions. With the countermeasure in place, a simple replay of the tape in the bank manager's office in front of witnesses allows the fraudulent claim to be dismissed. The bank need not reveal to the teller, or to the customer, which of them they do not trust.

A security screen separates the tellers from the customers. Is the threat that the robber might shoot a teller who does not comply with a request to empty her till? Or is the threat that a member of the bank staff might otherwise be tempted to tackle the robber when his back is turnedFootnote 46? The bank need not reveal whom the screen is intended to protect.

The consequence of this is that principals do not need to reveal which threats are the ones they regard as real: it suffices if they subscribe publicly to a threat that justifies the countermeasure they seek.Footnote 47

It may be that all the users of the system believe that a particular threat is imaginary, but subscribe publicly to it because it justifies a countermeasure that they all want (perhaps for different private reasons.) Bob does not need to say “I don't trust Carol”, but can say “Potential future users might be deterred from joining the system if they were required to trust the key server.”

This provides us with a way out of the design paradox.

9 Designing for the Impossible

Increasingly, online systems are required to support diverse groups of clients who have very different beliefs about what threats need to be countered and very different assumptions about whom they trust.

Our task is to design online systems that people believe are safe to use and yet which support privacy by not requiring people to disclose too much about their security policy and trust assumptions. Our argument is that, to enable this, we should shift the focus of our modelling: from extensional properties of the real world in which the system will be deployed, to intensional properties of the partially overlapping but mutually inconsistent virtual worlds in which the various rival principals, stakeholders and protagonists of the system believe themselves to be living.

Bob believes that Alice is living in an impossible world or rather that the world in which Alice falsely believes herself to live is impossible.Footnote 48 Bob has no privileged position. Bob must assume that Alice might have the same view of him and his beliefs (if she knew what they were): in fact, Bob must consider even the possibility that Alice's view could be correct, because she may know something that Bob does not. Nevertheless, Alice and Bob need to be able to trust one another sufficiently to enable them to cooperate, and for this, they need to be able to interpret each other's stated beliefs appropriately in the context of their own.Footnote 49

We need the ability to predicate accuratelyFootnote 50 about imaginary threats in impossible worlds if we are to build systems that allow effective interworking between groups adhering to conflicting security policies.

We cannot do without trust online: we can never prove that a distributed system meets our requirements without making some assumptions based on trust; but trust entails risk, so we need to be able to contain transitivity; but users are unwilling to reveal to us whom they trust and for what. Fortunately trust-derived beliefs about remote parts of the system are not extensional propositions, so it doesn't fatally matter if different groups of users hold mutually inconsistent beliefs about the state of the system.

All the stakeholders must agree that the system is secure, in the sense that they believe the properties of the system to be consistent with their own security policy. Otherwise, they would be unwilling to use the system. However, there does not need to be a consensus about why the system is secure or about what facts and assumptions (including trust assumptions) underpin “the proof” that the system is secure. There doesn't need to be an agreed proof: it suffices that each group has their own reasons to believe their own proof. If we unpack trust intensionally, then this can become an achievable goal.

Under this approach, designing secure systems to be used across multiple security policy domains is more like diplomacy than engineering. The real threats to security need not be agreed or even revealed. Although the agreed countermeasures may be real and necessary, the threats they have been agreed to guard against (and the services they are intended to provide) may be imaginary or impossible.