A relationship between a buyer and a seller of electronic equipment is one of trust. The buyer of the equipment trusts the seller to deliver the equipment on time, with the right quality, and at the agreed price. Usually the buyer also has to trust the seller to provide support and security updates for the lifetime of the product. The focus of this book is somewhat unusual, since we are not concerned with price, quality, or technical support. Rather, we study the relationship between the seller and the buyer under the assumption that the seller might want to use its position as the equipment provider for purposes that are directly opposed to the interests of the buyer. From this position, the notion of trust between the equipment provider and the buyer of the equipment takes on a very different flavour.

Notions of trust have been heavily studied in modern philosophy from the 1980s onward. Much of this research is based on rational choice theory and most authors relate trust to an aspect of risk taking. The question that one tries to answer is how to capture the rationality of relationships between parties that are not transparent to each other. Framed by the problem we address in this book, the question is how to make rational decisions on buying equipment from a vendor when neither the interests of the vendor nor the current and future contents of the product are fully transparent.

2.1 Prisoner’s Dilemma

One of the most celebrated examples that highlight the complexity of decisions based on trust is the prisoner’s dilemma [12]. There are several versions of it, but the one most frequently cited is the following: two partners in crime are imprisoned. The crime they have committed would, in principle, give them a sentence of three years, but the police only have enough evidence to sentence each of them to one year in prison. The police offer each partner a one-year reduction of his sentence if he testifies against, that is, defects on, the other. If only one defects, the other will serve three years in prison. However, if both partners defect, both of them are put away for two years.

What makes this situation interesting is the choice that each prisoner must make between defecting and not defecting. If you are one of the prisoners, you can go free if you defect and the other does not. You get one year in prison if neither of you defects. If both of you defect, you get two years in prison and, if you do not defect but your partner does, you end up with a three-year sentence. Clearly, the outcome with the fewest years in prison for the two prisoners in total is for neither to defect. This would require, however, that each one trust the other not to defect. Analysis based purely on self-interest makes it clear that, whatever choice your partner makes, you will be better off defecting. If both prisoners follow this strategy, both will receive a two-year sentence. Still, the best joint outcome is achieved if the two of you can trust each other not to defect; then you would both receive a one-year sentence.
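To make the dominance argument concrete, the four outcomes can be tabulated and checked mechanically. The following small Python sketch is purely illustrative; the sentence lengths are those used in the description above, and the names are ours.

    # Years in prison for (my choice, partner's choice); fewer is better.
    # Values follow the story above: 0 if I defect alone, 1 if neither
    # defects, 2 if both defect, 3 if only my partner defects.
    SENTENCE = {
        ("defect", "cooperate"): 0,
        ("cooperate", "cooperate"): 1,
        ("defect", "defect"): 2,
        ("cooperate", "defect"): 3,
    }

    for partner in ("cooperate", "defect"):
        best = min(("cooperate", "defect"), key=lambda me: SENTENCE[(me, partner)])
        print(f"If my partner chooses to {partner}, my best reply is to {best}.")

    totals = {
        (a, b): SENTENCE[(a, b)] + SENTENCE[(b, a)]
        for a in ("cooperate", "defect")
        for b in ("cooperate", "defect")
    }
    print("Fewest years in total:", min(totals, key=totals.get))

Whatever the partner does, defecting never yields a longer individual sentence, yet the outcome with the fewest years in total is mutual cooperation; this is the dilemma in a nutshell.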

The winning strategy in a single game of the prisoner’s dilemma is to defect, but the situation changes when the same dilemma is played by the same pair of prisoners multiple times. It becomes even more complicated if there are many prisoners and, in each iteration, an arbitrary selection of them plays the game. In the latter case, the winning strategy depends on the average attitude of the population. If the population tends to collaborate, then tending towards collaboration is a winning strategy, whereas tending towards defection is a winning strategy if that is what most of the population does [2].
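The population effect can be illustrated with a toy simulation. The sketch below is not a reproduction of Axelrod’s tournaments [2]; the payoff values, strategies, and population sizes are chosen purely for illustration. It pits a candidate who always cooperates and one who always defects against a population in which a varying share of members cooperate but retaliate against defection (tit for tat), while the rest always defect.

    # Payoffs per round, higher is better: mutual cooperation 3, mutual
    # defection 1, lone defector 5, lone cooperator 0 (a common convention).
    PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
              ("C", "D"): (0, 5), ("D", "C"): (5, 0)}

    def always_cooperate(opponent_moves):
        return "C"

    def always_defect(opponent_moves):
        return "D"

    def tit_for_tat(opponent_moves):
        # Cooperate first, then mirror the opponent's previous move.
        return opponent_moves[-1] if opponent_moves else "C"

    def total_score(candidate, population, rounds=50):
        score = 0
        for opponent in population:
            mine, theirs = [], []
            for _ in range(rounds):
                a, b = candidate(theirs), opponent(mine)
                mine.append(a)
                theirs.append(b)
                score += PAYOFF[(a, b)][0]
        return score

    for cooperators in (9, 1):  # mostly cooperative vs mostly hostile population
        population = ([tit_for_tat] * cooperators
                      + [always_defect] * (10 - cooperators))
        print(cooperators, "cooperative members:",
              "always cooperate scores", total_score(always_cooperate, population),
              "/ always defect scores", total_score(always_defect, population))

In this toy setting, the cooperative candidate wins when nine of the ten population members cooperate, and the defecting candidate wins when only one of them does, mirroring the observation above that the winning strategy depends on the average attitude of the population.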

The prisoner’s dilemma has been used to describe the emergence of trust-based collaboration among animals, in politics, and in economics, to mention just a few areas. It seems to capture the essence of trust-based decisions when there is a risk of defection involved. For us, the risk in question is that the vendor from which we buy our equipment will defect on us, and the choice we have to make is whether or not to buy from this vendor. For the sake of the discussion here, we assume that the vendor has a motivation for defecting, without going into detail about what that motivation might be.

The first question that arises is whether the singular or the iterated version of the prisoner’s dilemma is involved. The answer depends heavily on what our fears are or, to put it in another way, what the defection of the vendor would amount to. If we buy a mobile phone and we fear that the vendor could steal the authentication details of our bank account, we have a case of the iterated prisoner’s dilemma. It is unlikely that we would be fooled more than once and we would not buy equipment from the same vendor again. The vendor would most likely suffer a serious blow to its reputation and subsequent loss of market share, so the vendor would not be likely to defect.

At another extreme, we find nations that buy equipment for their critical infrastructure. If defection means to them that the infrastructure would be shut down as part of an attempt to overthrow the government, then the game is not likely to be played more than once. In that case, the option of choosing another vendor next time would be of little comfort to the nation, and an aggressive vendor would not let the possible loss of a customer stop it from defecting.

It is therefore clear that decisions about which vendors to trust will depend heavily on the consequences of defection. Another equally important aspect is the degree to which the vendor’s actions are detectable. Assume that we, as buyers of equipment, fear that the vendor will use the equipment to steal information from us. The realism of this situation is highlighted by the fact that unauthorized eavesdropping is one of the greatest fears behind the discussions on whether to allow Chinese vendors to provide equipment for critical infrastructures in Western countries. In this case, it is unclear whether we would ever know that the vendor had defected. Therefore, even if we face repeated decisions on whom to buy from, we might be no wiser the second time than we were the first. In terms of the prisoner’s dilemma, this means we must assume that the buyer may never find out whether the vendor has defected, so the game can, for all practical purposes, be viewed as a single game. Seen from the perspective of the vendor, however, there is also a risk associated with assuming that it will never be caught. We return to this discussion in Sect. 2.7.

2.2 Trust and Game Theory

The prisoner’s dilemma and the variants presented above are examples of problems from game theory; that is, the rules of the game are transparent and the outcome for each individual results from the decisions made by all the participants. An elaborate mathematical theory can be built around such games, giving a solid foundation for what the rational choices of each participant in the game would be.

Basic notions of trust can be derived from such a theory of rational choice. Gambetta [6] defines trust (or, symmetrically, distrust) as

A particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action.

Seen from this point of view, a trust-based choice of a vendor for a particular piece of technology would rest on a subjective assessment of the probability that the vendor fulfils the buyer’s expectations and does not use its position as equipment provider for purposes that are against the buyer’s interests.
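Gambetta’s probabilistic reading can be made concrete as a simple expected-value threshold. The sketch below is a minimal illustration under invented numbers; the function name and all figures are ours, not Gambetta’s, and real procurement decisions involve far more than a single probability.

    # Illustrative only: a buyer's decision framed as expected value, in the
    # spirit of Gambetta's subjective-probability view of trust.
    def worth_buying(p_honest, benefit, damage_if_defection):
        """Buy if the expected gain outweighs the expected loss."""
        expected_gain = p_honest * benefit
        expected_loss = (1 - p_honest) * damage_if_defection
        return expected_gain > expected_loss

    # A consumer device: moderate benefit, limited damage if the vendor defects.
    print(worth_buying(p_honest=0.95, benefit=100, damage_if_defection=500))     # True

    # Critical infrastructure: the same subjective probability, but the
    # potential damage dwarfs the benefit, so the same level of trust no
    # longer justifies the purchase.
    print(worth_buying(p_honest=0.95, benefit=100, damage_if_defection=10_000))  # False

The second call anticipates the discussion of consequence in Sect. 2.4: the level of trust required for a rational purchase grows with what is at stake.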

Rational choice as a basis for trust does have its weaknesses. Consider the iterated prisoner’s dilemma again. It is easy to imagine a situation in which you do not trust your fellow prisoner. Still, knowing that establishing trust would be beneficial to the outcome of the game, it might be a rational choice to play as if you trusted the other prisoner, even if you do not. This point, taken up by Hardin [9], highlights a distinction between beliefs and strategies: Hardin defines trust as a belief about what your opponent will actually do. When I trust you, it is because I believe it will be in your best interest to protect my interests. The implications and value of building a trust relation between a buyer and a vendor of electronic equipment are discussed further in Sect. 2.5.

2.3 Trust and Freedom of Choice

One major weakness in the game-theoretical definitions of trust is that trust is assumed to be something one can choose to have or not to have. In an essay on trust and antitrust, Baier [3] argues that this is not always the case. The most convincing example is the trust a small child must have towards his parents. He will not be in a position to choose other parents, so, regardless of what reasons he has been given to trust the ones he has, he actually has to place some trust in them.

The insight of Baier’s work, for our purposes, is that you might be forced to entrust something that is of value to you to people or organizations you do not necessarily trust. There are many examples of this in the world of computers. In a modern society, one can hardly choose not to be dependent on the national communications network. Still, you may not trust it to be there when you need it or you might have reasons to believe that your communications are being intercepted by national authorities that you do not trust. You may not trust your computer to be virus free but you may still feel that you have to entrust it with the task of transferring money from your bank account.

When we choose which vendor of electronic equipment to use for critical infrastructure, we are in a situation in which our choices are very limited. There will generally be very few vendors to choose from and we will rarely be in a position to choose not to buy at all. If we decide, as some governments have, that equipment or components from a given country should not be trusted, we may find that there are no trustworthy options left. A very obvious example is that several Western countries have decided to prohibit Chinese equipment from central parts of their telecommunication infrastructure. In today’s complex world, there is hardly any piece of modern electronic equipment without at least one part that was designed or fabricated in China. To a significant degree, the information carried by nations’ telecommunication systems must be entrusted to components made in countries these nations do not trust.

2.4 Trust, Consequence, and Situation

Lack of real choices is not the only limitation of basing trust on game-theoretic approaches. Another shortcoming is that the level of trust required depends on factors that will typically not be part of the game. Most people would agree that the level of trust required to make a given choice depends on the consequences of being let down. Govier [7] provides an example in which a stranger on the street volunteers to carry packages for you. Your willingness to entrust your packages to this stranger will depend on what they contain. If they contain things that are highly valuable to you, you will be less willing to hand them over and will require a higher level of trust before doing so. The same considerations apply to your situation: if you have only a small, light package that fits easily in your hand, the mere fact that a stranger is volunteering to help you will probably make you distrust that person.

To us, the most relevant lesson from Govier’s work is that the answer to the question of whether to trust an equipment vendor depends strongly on what kind of equipment it is and on how you intend to use it. At one extreme is the private person who buys an alarm clock to help her wake up in the morning. The level of trust she has in the manufacturer of the clock should hardly be a criterion of choice: the worst that can happen is that she oversleeps for a day or two and has to write off the cost of buying a new clock. At the other extreme, we have investments in national infrastructures for electronic communications and control systems for power plants. The level of trust needed before one selects a provider of equipment for such infrastructures is very high, and cases in which lack of trust has barred named providers from competing for such contracts are well documented.

2.5 Trust and Security

The terms trust and security are usually seen as interrelated and it is common to assume that the presence of one will promote the formation of the other. If you place trust in a person, you expose yourself to harm from that person. This person knows that he can retaliate if you misbehave and thus has all the more reason to trust you by exposing himself to you in the same way. The fact that you are both exposed reduces your inclination to harm each other, and you therefore end up being more secure. Similarly, when one experiences a system as being secure, one starts to trust it. The fact that an online banking website is relatively secure makes one trust it.

Unfortunately, the idea that trust and security are consequences of each other is not always true. These two words have many interpretations and the claim that one follows from the other is valid only for some of them [11, 13]. In particular, placing trust in a computer system will not make it secure. A computer system will not (well, at least not yet) change simply because it is trusted not to do anything wrong. One objection to this argument could be that it is not with the computer system itself that you intend to build a secure relationship but, rather, with the people who developed it. Still, there is an inherent asymmetry in the exposure to harm in the relationship between a vendor and a buyer of equipment. Sometimes it is to the buyer’s advantage, in the sense that the buyer can withhold payments or hurt the vendor’s reputation. Other times it is to the vendor’s advantage, particularly when the vendor is controlled by powers, for example, states, that are not exposed to the financial threats facing single companies. If you trust a vendor, this trust will not automatically make the vendor safe to use; that will depend on the balance of the risks you expose each other to.

What, however, if we turn the question around? If I develop a highly secure system, would not the consequence be that my customers will tend to trust me? It would appear that the answer to this question is yes. Huawei is arguably the company most challenged by customer distrust throughout the world. It has been banned from delivering products to the critical infrastructures of several countries and it has complained that it does not have a level playing field when competing for contracts. One of its responses to this challenge has been to focus heavily and visibly on making its products secure. Unfortunately, this strategy – even though it might be financially successful – misses the focal point of the discussion. It is entirely possible to make a system that is very secure against third-party attacks but where the maker of the equipment, that is, the second party, has full access to do whatever it wants. An analogy would be having a locksmith change all the locks of your house. Even if the new locks keep all burglars out, the locksmith could still have kept a copy of the key. You have made your house more secure, but that security still rests on the trustworthiness of the locksmith.

This section therefore ends with two important observations about the vendor–user relationship for electronic equipment: we cannot assume that security follows from trust, and we cannot assume that trust follows from security either.

2.6 Trusted Computing Base; Trust Between Components

A computer system is built by putting together a range of different pieces of technology. Multiple components constitute the hardware that an application runs on, and the operating system will typically be composed of various parts. On top of these come external devices and drivers, before we reach the application itself at the top of the technology stack. Finally, the application will itself be made up of different components. More details on each of these components are given in Chap. 3.

These different parts can fail or defect individually. It is thus clear that a notion of trust is also relevant between the constituent parts of a system. A well-designed system will have an internal policy that makes it clear to what degree each part is to be trusted, based on the tasks it is expected to perform. Such trust between components is described by Arbaugh et al. [1], with the eye-opening statement that the integrity of lower layers is treated as axiomatic by higher layers. There is every reason to question the basis for this confidence in the integrity of the lower layers.

In its Orange Book from 1983, the US Department of Defense introduced the concept of the Trusted Computing Base (TCB): a minimal set of components of a system upon which the security of the entire system depends. Security breaches in any other component can have severe consequences, but those consequences should be confined to a subset of the system. Security breaches in the TCB, however, will compromise the entire system and all of its data. The Orange Book makes the case that, in a security-critical system, the TCB should be made as small as possible; ideally, it should be restricted to a size small enough that its security can be verified using formal methods.

Unfortunately for us, the notion of a TCB is of little help. The strengths and shortcomings of formal methods are discussed in Chap. 9, but the main reason a TCB cannot help us lies elsewhere: as long as you do not trust the manufacturer of the system, it is hard to limit the TCB to a tractable size. As discussed by Lysne et al. [10], the developer of electronic equipment has many possible points of attack that are unavailable to a third-party attacker. In particular, Thompson [15] has demonstrated that the compiler or any other development tool could be used as a point of attack. This makes the compiler, as well as the synthesis tools used for hardware development, part of the TCB. Furthermore, since the compiler itself is built with another compiler, the argument iterates backwards through the history of computing. Computer systems and compilers have long evolved in generations, each generation being built using software from the preceding one. In the UNIX world, for example, the origins of today’s tools can be traced back to the design of the first system in the 1970s [5]. Finding a small computing base that can truly be trusted, given a dishonest equipment provider, is therefore close to an impossible task.

2.7 Discussion

The need for trust between buyers and vendors of electronic equipment depends heavily on what is at stake. There are electronics in your egg timer, as well as in systems managing and monitoring the power networks of entire nations; however, whereas the need for trust in the former case is close to nonexistent, in the latter case, it is a question of deep concern to national security. This insight from Govier [7] helps us to focus the discussions in this book. We are not concerned with fine details on how the need for trust should be derived from a consequence analysis related to the equipment in question. Rather, we concentrate our discussion of trust on cases in which the damage potential is huge and thus the need for trust is paramount.

Long-term relationships with equipment providers are sometimes advocated as beneficial for building trust and confidence between customers and vendors. For many aspects of the customer–vendor relationship, we would agree. The quality and stability of a product, the availability of support, price, and time of delivery are all aspects of electronic equipment where trust can be built over time. It can also be argued that classic cybersecurity, where the vendor is supposed to help you protect against third parties, is an area where trust can be built gradually: if, over several years, my equipment is not broken into, I will be inclined to trust my vendor. If I need to protect myself against the vendor itself, however, a long-term relationship is not likely to make me secure. Rather, it is likely to make me more exposed. A vendor of complex equipment almost always has the opportunity to change, through software updates, equipment it has already sold and deployed. A vendor that has proven trustworthy over a long period could become malicious through a change of regime in its home country or through changes in ownership or management. A long-term relationship therefore increases the need for trust and confidence, but it does not constitute a basis for trust in itself.

In the international discussions concerning Chinese vendors of telecommunication equipment, we have frequently heard arguments in the style of Hardin: we can trust Chinese vendors because it is in their best interest not to use their position against others; if they did, they would be out of business as soon as they were caught in the act. It may well be that Chinese vendors of electronic equipment are trustworthy. Still, regardless of the origin of the equipment, we should not make Hardin’s kind of trust the basis for our security decisions about equipment in a country’s critical infrastructure. First, in questions of national security, the situation often resembles a single game of the prisoner’s dilemma: a hostile takeover of a country takes place once, and there is no obvious sequence of games in which a dishonest vendor will pay for its actions. Second, there are examples of large companies that deliberately cheated with their electronic equipment, were caught in the act, and are still in business. The most well-known example currently is the Volkswagen case, where electronic circuits controlling a series of diesel engines reduced engine emissions once they detected that they were being monitored [4]. A related example is the claim made by Edward Snowden that routers and servers manufactured by Cisco were manipulated by the National Security Agency (NSA) to send Internet traffic back to the agency [8]. There is no documentation showing that Cisco was aware of this manipulation. Still, the example highlights that the question of trust in an equipment vendor is far too complex to be based on our perception of what will be in the vendor’s best interest. This argument is strengthened by Cisco’s claim that its losses due to the event were minor [14].

An old Russian proverb, made famous by Ronald Reagan, states, ‘Trust, but verify!’ This means that, in all trust relations, there is a limit to how far blind trust can go. At the end of the day, trust must be confirmed through observations; our need for trust thus depends heavily on our ability to observe. As Gambetta [6] points out, ‘Our need of trust will increase with the decrease of our chances actually to coerce and monitor the opposite party’. For us, this raises the question of how to coerce and monitor the actions of an equipment vendor. The remainder of this book is devoted to exactly that question. We examine the state of the art in all relevant areas of computer science in an attempt to answer the question of how we can detect malicious actions in electronic equipment, or prepare for them.