Leakage and Protocol Composition in a Game-Theoretic Perspective

  • Mário S. Alvim
  • Konstantinos Chatzikokolakis
  • Yusuke Kawamoto
  • Catuscia Palamidessi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10804)

Abstract

In the inference attacks studied in Quantitative Information Flow (QIF), the adversary typically tries to interfere with the system in the attempt to increase its leakage of secret information. The defender, on the other hand, typically tries to decrease leakage by introducing some controlled noise. This noise introduction can be modeled as a type of protocol composition, i.e., a probabilistic choice among different protocols, and its effect on the amount of leakage depends heavily on whether or not this choice is visible to the adversary. In this work we consider operators for modeling visible and invisible choice in protocol composition, and we study their algebraic properties. We then formalize the interplay between defender and adversary in a game-theoretic framework adapted to the specific issues of QIF, where the payoff is information leakage. We consider various kinds of leakage games, depending on whether players act simultaneously or sequentially, and on whether or not the choices of the defender are visible to the adversary. Finally, we establish a hierarchy of these games in terms of their information leakage, and provide methods for finding optimal strategies (at the points of equilibrium) for both attacker and defender in the various cases.

1 Introduction

A fundamental problem in computer security is the leakage of sensitive information due to the correlation of secret values with observables—i.e., any information accessible to the attacker, such as, for instance, the system’s outputs or execution time. The typical defense consists in reducing this correlation, which can be done in, essentially, two ways. The first, applicable when the correspondence between secrets and observables is deterministic, consists in coarsening the equivalence classes of secrets that give rise to the same observables. This can be achieved with post-processing, i.e., sequentially composing the original system with a program that removes information from observables. For example, a typical attack on encrypted web traffic consists in the analysis of the packets’ length, and a typical defense consists in padding extra bits so as to diminish the length variety [28].

The second kind of defense, on which we focus in this work, consists in adding controlled noise to the observables produced by the system. This can usually be seen as a composition of different protocols via probabilistic choice.

Example 1

(Differential privacy). Consider a counting query f, namely a function that, applied to a dataset x, returns the number of individuals in x that satisfy a given property. A way to implement differential privacy [12] is to add geometric noise to the result of f, so as to obtain a probability distribution P on integers of the form \(P(y) = c\, 2^{-|y - f(x)|}\), where c is a normalization factor. The resulting mechanism can be interpreted as a probabilistic choice among protocols of the form \(f + n\) (reporting \(f(x) + n\) for an integer n), where the probability assigned to \(f + n\) and to \(f - n\) decreases exponentially with n.
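For concreteness, here is a minimal sketch of this mechanism in Python (ours, not from the paper; the noise parameter alpha = 1/2 matches the \(2^{-|n|}\) shape above, and the truncation bound N is an arbitrary choice made only to keep the support finite):

```python
import random

def geometric_mechanism(f, alpha=0.5, N=20):
    """Probabilistic choice among the deterministic protocols x -> f(x) + n:
    index n is drawn with probability c * alpha**|n|, where c is the
    normalization factor over the (truncated) support."""
    noises = list(range(-N, N + 1))             # truncated for simplicity
    weights = [alpha ** abs(n) for n in noises]
    def mechanism(x):
        n = random.choices(noises, weights=weights)[0]
        return f(x) + n                         # run the chosen protocol f + n
    return mechanism

# A counting query whose true answer on some dataset is 42:
noisy_count = geometric_mechanism(lambda x: x)
print(noisy_count(42))
```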

Example 2

(Dining cryptographers). Consider two agents running the dining cryptographers protocol [11], which consists in tossing a fair binary coin and then declaring the exclusive or \(x \oplus c\) of their secret value x and the result c of the coin. The protocol can be thought of as the fair probabilistic choice between two protocols, one consisting simply of declaring x, and the other of declaring \(\bar{x}\).
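As a quick illustration (our own sketch), the two component protocols and their fair composition can be written as:

```python
import random

def declare_x(x):        # protocol 1: declare the secret itself
    return x

def declare_not_x(x):    # protocol 2: declare the complement of x
    return x ^ 1

def dining_cryptographers_declare(x):
    """Fair (p = 1/2) probabilistic choice between the two protocols;
    the coin, i.e. which protocol ran, is never revealed."""
    protocol = random.choice([declare_x, declare_not_x])
    return protocol(x)

print(dining_cryptographers_declare(1))  # 0 or 1, each with probability 1/2
```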

Most of the work in the literature of quantitative information flow (QIF) considers passive attacks, in which the adversary only observes the system. Notable exceptions are the works [4, 8, 21], which consider attackers who interact with and influence the system, possibly in an adaptive way, with the purpose of maximizing the leakage of information.

Example 3

(CRIME attack). Compression Ratio Info-leak Made Easy (CRIME) [25] is a security exploit against secret web cookies over connections using the HTTPS and SPDY protocols and data compression. The idea is that the attacker can inject some content a into the communication of the secret x from the target site to the server. The server then compresses and encrypts the data, including both a and x, and sends back the result. By observing the length of the result, the attacker can then infer information about x. To mitigate the leakage, one possible defense would consist in transmitting, along with x, also an encryption method f selected randomly from a set F. Again, the resulting protocol can be seen as a composition, using probabilistic choice, of the protocols in the set F.

In all examples above the main use of the probabilistic choice is to obfuscate the relation between secrets and observables, thus reducing their correlation—and, hence, the information leakage. To achieve this goal, it is essential that the attacker never comes to know the result of the choice. In the CRIME example, however, if f and a are chosen independently, then (in general) it is still better to choose f probabilistically, even if the adversary will come to know, afterwards, the choice of f. In fact, this is true also for the attacker: his best strategies (in general) are to choose a according to some probability distribution. Indeed, suppose that \(d_{0}, d_{1}\) are the defender’s choices and \(a_{0}, a_{1}\) are the attacker’s, and that \(C_{d_{0} a_{0}}\) leaks more than \(C_{d_{1} a_{0}}\), while \(C_{d_{0} a_{1}}\) leaks less than \(C_{d_{1} a_{1}}\). This is a scenario like the matching pennies in game theory: if one player selects an action deterministically, the other player may exploit this choice and get an advantage. For each player the optimal strategy is to play probabilistically, using a distribution that maximizes his own gain for all possible actions of the adversary. In zero-sum games, in which the gain of one player coincides with the loss of the other, the optimal pair of distributions always exists, and it is called a saddle point. It also coincides with the Nash equilibrium, which is defined as the point at which neither of the two players gets any advantage by changing unilaterally his strategy.

Motivated by these examples, this paper investigates the two kinds of choice, visible and hidden (to the attacker), in a game-theoretic setting. Looking at them as language operators, we study their algebraic properties, which will help reason about their behavior in games. We consider zero-sum games, in which the gain (for the attacker) is represented by the leakage. While for visible choice it is appropriate to use the “classic” game-theoretic framework, for hidden choice we need to adopt the more general framework of the information leakage games proposed in [4]. This happens because, in contrast with standard game theory, in games with hidden choice the utility of a mixed strategy is a convex function of the distribution on the defender’s pure actions, rather than simply the expected value of their utilities. We will consider both simultaneous games—in which each player chooses independently—and sequential games—in which one player chooses his action first. We aim at comparing all these situations, and at identifying the precise advantage of the hidden choice over the visible one.

To measure leakage we use the well-known information-theoretic model. A central notion in this model is that of entropy, but here we use its converse, vulnerability, which represents the magnitude of the threat. In order to derive results as general as possible, we adopt the very comprehensive notion of vulnerability as any convex and continuous function, as used in [5, 8]. This notion has been shown [5] to subsume most information measures, including Bayes vulnerability (aka min-vulnerability, aka (the converse of) Bayes risk) [10, 27], Shannon entropy [26], guessing entropy [22], and g-vulnerability [6].

The main contributions of this paper are:
  • We present a general framework for reasoning about information leakage in a game-theoretic setting, extending the notion of information leakage games proposed in [4] to both simultaneous and sequential games, with either hidden or visible choice.

  • We present a rigorous compositional way, using visible and hidden choice operators, for representing the adversary’s and the defender’s actions in information leakage games. In particular, we study the algebraic properties of visible and hidden choice on channels, and compare the two kinds of choice with respect to their capability of reducing leakage, in the presence of an adaptive attacker.

  • We provide a taxonomy of the various scenarios (simultaneous and sequential) showing when randomization is necessary, for either attacker or defender, to achieve optimality. Although it is well-known in information flow that the defender’s best strategy is usually randomized, only recently it has been shown that when defender and adversary act simultaneously, the adversary’s optimal strategy also requires randomization [4].

  • We use our framework in a detailed case study of a password-checking protocol. The naive program, which checks the password bit by bit and stops when it finds a mismatch, is clearly very insecure, because it reveals at each attempt the maximum correct prefix. On the other hand, if we continue checking until the end of the string (time padding), the program becomes very inefficient. We show that, by using probabilistic choice instead, we can obtain a good trade-off between security and efficiency.

Plan of the Paper. The remainder of the paper is organized as follows. In Sect. 2 we review some basic notions of game theory and quantitative information flow. In Sect. 3 we introduce our running example. In Sect. 4 we define the visible and hidden choice operators and demonstrate their algebraic properties. In Sect. 5, the core of the paper, we examine various scenarios for leakage games. In Sect. 6 we show an application of our framework to a password checker. In Sect. 7 we discuss related work and, finally, in Sect. 8 we conclude.

2 Preliminaries

In this section we review some basic notions from game theory and quantitative information flow. We use the following notation: given a set \(\mathcal{X}\), we denote by \(\mathbb{D}\mathcal{X}\) the set of all probability distributions over \(\mathcal{X}\). Given \(\mu \in \mathbb{D}\mathcal{X}\), its support \(\mathrm{supp}(\mu) = \{x \in \mathcal{X} : \mu(x) > 0\}\) is the set of its elements with positive probability. We write \(x \leftarrow \mu\) to indicate that a value \(x \in \mathcal{X}\) is sampled from a distribution \(\mu\) on \(\mathcal{X}\).

2.1 Basic Concepts from Game Theory

Two-Player Games. Two-player games are a model for reasoning about the behavior of two players. In a game, each player has at his disposal a set of actions that he can perform, and he obtains some gain or loss depending on the actions chosen by both players. Gains and losses are defined using a real-valued payoff function. Each player is assumed to be rational, i.e., his choice is driven by the attempt to maximize his own expected payoff. We also assume that the sets of possible actions and the payoff functions of both players are common knowledge.

In this paper we only consider finite games, in which the sets of actions available to the players are finite. Next we introduce an important distinction between simultaneous and sequential games. In the following, we will call the two players defender and attacker.

Simultaneous Games. In a simultaneous game, each player chooses his action without knowing the action chosen by the other. The term “simultaneous” here does not mean that the players’ actions are chosen at the same time, but only that they are chosen independently. Formally, such a game is defined as a tuple \((\mathcal{D}, \mathcal{A}, u_{\mathsf{d}}, u_{\mathsf{a}})\), where \(\mathcal{D}\) is a nonempty set of defender’s actions, \(\mathcal{A}\) is a nonempty set of attacker’s actions, \(u_{\mathsf{d}} : \mathcal{D} \times \mathcal{A} \rightarrow \mathbb{R}\) is the defender’s payoff function, and \(u_{\mathsf{a}} : \mathcal{D} \times \mathcal{A} \rightarrow \mathbb{R}\) is the attacker’s payoff function.

Each player may choose an action deterministically or probabilistically. A pure strategy of the defender (resp. attacker) is a deterministic choice of an action, i.e., an element \(d \in \mathcal{D}\) (resp. \(a \in \mathcal{A}\)). A pair \((d, a)\) is called a pure strategy profile, and \(u_{\mathsf{d}}(d, a)\), \(u_{\mathsf{a}}(d, a)\) represent the defender’s and the attacker’s payoffs, respectively. A mixed strategy of the defender (resp. attacker) is a probabilistic choice of an action, defined as a probability distribution \(\delta \in \mathbb{D}\mathcal{D}\) (resp. \(\alpha \in \mathbb{D}\mathcal{A}\)). A pair \((\delta, \alpha)\) is called a mixed strategy profile. The defender’s and the attacker’s expected payoff functions for mixed strategies are defined, respectively, as: \(U_{\mathsf{d}}(\delta, \alpha) = \mathbb{E}_{d\leftarrow\delta,\, a\leftarrow\alpha}\, u_{\mathsf{d}}(d, a) = \sum_{d\in\mathcal{D}} \sum_{a\in\mathcal{A}} \delta(d)\,\alpha(a)\,u_{\mathsf{d}}(d, a)\) and \(U_{\mathsf{a}}(\delta, \alpha) = \mathbb{E}_{d\leftarrow\delta,\, a\leftarrow\alpha}\, u_{\mathsf{a}}(d, a) = \sum_{d\in\mathcal{D}} \sum_{a\in\mathcal{A}} \delta(d)\,\alpha(a)\,u_{\mathsf{a}}(d, a)\).

A defender’s mixed strategy \(\delta\) is a best response to an attacker’s mixed strategy \(\alpha\) if \(U_{\mathsf{d}}(\delta, \alpha) = \max_{\delta' \in \mathbb{D}\mathcal{D}} U_{\mathsf{d}}(\delta', \alpha)\). Symmetrically, \(\alpha\) is a best response to \(\delta\) if \(U_{\mathsf{a}}(\delta, \alpha) = \max_{\alpha' \in \mathbb{D}\mathcal{A}} U_{\mathsf{a}}(\delta, \alpha')\). A mixed-strategy Nash equilibrium is a profile \((\delta^{*}, \alpha^{*})\) such that \(\delta^{*}\) is a best response to \(\alpha^{*}\) and vice versa. This means that in a Nash equilibrium, no unilateral deviation by any single player provides better payoff to that player. If \(\delta^{*}\) and \(\alpha^{*}\) are point distributions concentrated on some \(d^{*} \in \mathcal{D}\) and \(a^{*} \in \mathcal{A}\) respectively, then \((\delta^{*}, \alpha^{*})\) is a pure-strategy Nash equilibrium, and will be denoted by \((d^{*}, a^{*})\). While not all games have a pure-strategy Nash equilibrium, every finite game has a mixed-strategy Nash equilibrium.
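These definitions translate directly into code. The sketch below (ours) computes expected payoffs of mixed strategies and checks the equilibrium condition; since the expected payoff is linear in each player's strategy, it suffices to test pure deviations:

```python
def expected_payoff(u, delta, alpha):
    """U(delta, alpha): the expected value of u under the mixed profile."""
    return sum(delta[d] * alpha[a] * u[d][a]
               for d in range(len(delta)) for a in range(len(alpha)))

def point(i, n):
    """Point distribution concentrated on action i."""
    return [1.0 if j == i else 0.0 for j in range(n)]

def is_nash(u_d, u_a, delta, alpha, eps=1e-9):
    """Check that no pure unilateral deviation improves either player's
    expected payoff (enough, by linearity of U in each strategy)."""
    ud = expected_payoff(u_d, delta, alpha)
    ua = expected_payoff(u_a, delta, alpha)
    return (all(expected_payoff(u_d, point(d, len(delta)), alpha) <= ud + eps
                for d in range(len(delta))) and
            all(expected_payoff(u_a, delta, point(a, len(alpha))) <= ua + eps
                for a in range(len(alpha))))

# Matching pennies: zero-sum, no pure equilibrium, uniform mix is optimal.
u_a = [[1, -1], [-1, 1]]
u_d = [[-v for v in row] for row in u_a]
print(is_nash(u_d, u_a, [0.5, 0.5], [0.5, 0.5]))  # True
print(is_nash(u_d, u_a, [1.0, 0.0], [1.0, 0.0]))  # False
```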

Sequential Games. In a sequential game players may take turns in choosing their actions. In this paper, we only consider the case in which each player moves only once, in such a way that one of the players (the leader) chooses his action first, and commits to it, before the other player (the follower) makes his choice. The follower may have total knowledge of the choice made by the leader, or only partial. We refer to the two scenarios by the terms perfect and imperfect information, respectively.

We now give the precise definitions assuming that the leader is the defender. The case in which the leader is the attacker is similar.

A defender-first sequential game with perfect information is a tuple \((\mathcal{D}, \mathcal{D}\rightarrow\mathcal{A}, u_{\mathsf{d}}, u_{\mathsf{a}})\) where \(\mathcal{D}\), \(\mathcal{A}\), \( u_\mathsf{d} \) and \( u_\mathsf{a} \) are defined as in simultaneous games. Also the strategies of the defender (the leader) are defined as in simultaneous games: an action \(d \in \mathcal{D}\) for the pure case, and a distribution \(\delta \in \mathbb{D}\mathcal{D}\) for the mixed one. On the other hand, a pure strategy for the attacker is a function \(s_{\mathsf{a}} : \mathcal{D} \rightarrow \mathcal{A}\), which represents the fact that his choice of an action in \(\mathcal{A}\) depends on the defender’s choice d. An attacker’s mixed strategy is a probability distribution \(\sigma_{\mathsf{a}} \in \mathbb{D}(\mathcal{D}\rightarrow\mathcal{A})\) over his pure strategies. The defender’s and the attacker’s expected payoff functions for mixed strategies are defined, respectively, as \(U_{\mathsf{d}}(\delta, \sigma_{\mathsf{a}}) = \mathbb{E}_{d\leftarrow\delta,\, s_{\mathsf{a}}\leftarrow\sigma_{\mathsf{a}}}\, u_{\mathsf{d}}(d, s_{\mathsf{a}}(d))\) and \(U_{\mathsf{a}}(\delta, \sigma_{\mathsf{a}}) = \mathbb{E}_{d\leftarrow\delta,\, s_{\mathsf{a}}\leftarrow\sigma_{\mathsf{a}}}\, u_{\mathsf{a}}(d, s_{\mathsf{a}}(d))\).

The case of imperfect information is typically formalized by assuming an indistinguishability (equivalence) relation over the actions chosen by the leader, representing a scenario in which the follower cannot distinguish between the actions belonging to the same equivalence class. The pure strategies of the follower, therefore, are functions from the set of the equivalence classes on the actions of the leader to his own actions. Formally, a defender-first sequential game with imperfect information is a tuple \((\mathcal{D}, K _{\mathsf {a}}\rightarrow\mathcal{A}, u_{\mathsf{d}}, u_{\mathsf{a}})\) where \(\mathcal{D}\), \(\mathcal{A}\), \( u_\mathsf{d} \) and \( u_\mathsf{a} \) are defined as in simultaneous games, and \( K _{\mathsf {a}}\) is a partition of \(\mathcal{D}\). The expected payoff functions are defined as before, except that now the argument of \( s_\mathsf{a} \) is the equivalence class of d. Note that in the case in which all defender’s actions are indistinguishable from each other in the eyes of the attacker (totally imperfect information), we have \( K _{\mathsf {a}} = \{\mathcal{D}\}\) and the expected payoff functions coincide with those of the simultaneous games.

Zero-sum Games and Minimax Theorem. A game \((\mathcal{D}, \mathcal{A}, u_{\mathsf{d}}, u_{\mathsf{a}})\) is zero-sum if for any \(d \in \mathcal{D}\) and any \(a \in \mathcal{A}\), the defender’s loss is equivalent to the attacker’s gain, i.e., \(u_{\mathsf{d}}(d, a) = -u_{\mathsf{a}}(d, a)\). For brevity, in zero-sum games we denote by u the attacker’s payoff function \( u_\mathsf{a} \), and by U the attacker’s expected payoff \( U_\mathsf{a} \). Consequently, the goal of the defender is to minimize U, and the goal of the attacker is to maximize it.

In simultaneous zero-sum games the Nash equilibrium corresponds to the solution of the minimax problem (or equivalently, the maximin problem), namely, the strategy profile \((\delta^{*}, \alpha^{*})\) such that \(U(\delta^{*}, \alpha^{*}) = \min_{\delta} \max_{\alpha} U(\delta, \alpha) = \max_{\alpha} \min_{\delta} U(\delta, \alpha)\). Von Neumann’s minimax theorem, in fact, ensures that such a solution (which always exists) is stable.

Theorem 1

(von Neumann’s minimax theorem). Let \(\mathcal{X} \subset \mathbb{R}^{m}\) and \(\mathcal{Y} \subset \mathbb{R}^{n}\) be compact convex sets, and \(f : \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}\) be a continuous function such that \(f(x, y)\) is a convex function in \(x \in \mathcal{X}\) and a concave function in \(y \in \mathcal{Y}\). Then \(\min_{x\in\mathcal{X}} \max_{y\in\mathcal{Y}} f(x, y) = \max_{y\in\mathcal{Y}} \min_{x\in\mathcal{X}} f(x, y)\).

A related property is that, under the conditions of Theorem 1, there exists a saddle point \((x^{*}, y^{*})\) s.t., for all \(x \in \mathcal{X}\) and \(y \in \mathcal{Y}\): \(f(x^{*}, y) \le f(x^{*}, y^{*}) \le f(x, y^{*})\).

The solution of the minimax problem can be obtained by using convex optimization techniques. In case \(f(x, y)\) is affine in x and in y, we can also use linear optimization.

In case \(\mathcal{D}\) and \(\mathcal{A}\) contain two elements each, there is a closed form for the solution. Let \(\mathcal{D} = \{d_{0}, d_{1}\}\) and \(\mathcal{A} = \{a_{0}, a_{1}\}\) respectively. Let \(u_{ij}\) be the payoff on \((d_i, a_j)\). Then the Nash equilibrium \((\delta^{*}, \alpha^{*})\) is given by: \(\delta^{*}(d_{0}) = \frac{u_{11} - u_{10}}{u_{00} - u_{01} - u_{10} + u_{11}}\) and \(\alpha^{*}(a_{0}) = \frac{u_{11} - u_{01}}{u_{00} - u_{01} - u_{10} + u_{11}}\), if these values are in \([0, 1]\). Note that, since there are only two elements, the strategy \(\delta^{*}\) is completely specified by its value on \(d_0\), and analogously for \(\alpha^{*}\).
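A direct transcription of this closed form (a sketch of ours; `u[i][j]` holds \(u_{ij}\)):

```python
def nash_2x2(u):
    """Closed-form mixed equilibrium of a 2x2 zero-sum game, where u[i][j]
    is the payoff on (d_i, a_j).  Returns (delta*(d_0), alpha*(a_0));
    the formula is valid when both values lie in [0, 1]."""
    den = u[0][0] - u[0][1] - u[1][0] + u[1][1]
    return (u[1][1] - u[1][0]) / den, (u[1][1] - u[0][1]) / den

# Matching pennies: the equilibrium is the uniform mix for both players.
print(nash_2x2([[1, -1], [-1, 1]]))  # (0.5, 0.5)
```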

2.2 Quantitative Information Flow

Finally, we briefly review the standard framework of quantitative information flow, which is concerned with measuring the amount of information leakage in a (computational) system.

Secrets and Vulnerability. A secret is some piece of sensitive information the defender wants to protect, such as a user’s password, social security number, or current location. The attacker usually only has some partial knowledge about the value of a secret, represented as a probability distribution on secrets called a prior. We denote by \(\mathcal{X}\) the set of possible secrets, and we typically use \(\pi \) to denote a prior belonging to the set \(\mathbb{D}\mathcal{X}\) of probability distributions over \(\mathcal{X}\).

The vulnerability of a secret is a measure of the utility that it represents for the attacker. In this paper we consider a very general notion of vulnerability, following [5], and we define a vulnerability \(\mathbb{V}\) to be any continuous and convex function of type \(\mathbb{D}\mathcal{X} \rightarrow \mathbb{R}\). It has been shown in [5] that these functions coincide with the set of g-vulnerabilities, and are, in a precise sense, the most general information measures w.r.t. a set of basic axioms.

Channels, Posterior Vulnerability, and Leakage. Computational systems can be modeled as information theoretic channels. A channel \(C : \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}\) is a function in which \(\mathcal{X}\) is a set of input values, \(\mathcal{Y}\) is a set of output values, and \(C(x, y)\) represents the conditional probability of the channel producing output \(y \in \mathcal{Y}\) when input \(x \in \mathcal{X}\) is provided. Every channel C satisfies \(0 \le C(x, y) \le 1\) for all \(x \in \mathcal{X}\) and \(y \in \mathcal{Y}\), and \(\sum_{y\in\mathcal{Y}} C(x, y) = 1\) for all \(x \in \mathcal{X}\).

A distribution \(\pi \in \mathbb{D}\mathcal{X}\) and a channel C with inputs \(\mathcal{X}\) and outputs \(\mathcal{Y}\) induce a joint distribution \(p(x, y) = \pi(x)\, C(x, y)\) on \(\mathcal{X} \times \mathcal{Y}\), producing joint random variables XY with marginal probabilities \(p(x) = \sum_{y} p(x, y)\) and \(p(y) = \sum_{x} p(x, y)\), and conditional probabilities \(p(x \mid y) = \frac{p(x, y)}{p(y)}\) if \(p(y) \ne 0\). For a given y (s.t. \(p(y) \ne 0\)), the conditional probabilities \(p(x \mid y)\) for each \(x \in \mathcal{X}\) form the posterior distribution \(p_{X \mid y}\).

A channel C in which \(\mathcal{X}\) is a set of secret values and \(\mathcal{Y}\) is a set of observable values produced by a system can be used to model computations on secrets. Assuming the attacker has prior knowledge \(\pi \) about the secret value, knows how a channel C works, and can observe the channel’s outputs, the effect of the channel is to update the attacker’s knowledge from \(\pi \) to a collection of posteriors \(p_{X \mid y}\), each occurring with probability \(p(y)\).

Given a vulnerability \(\mathbb{V}\), a prior \(\pi \), and a channel C, the posterior vulnerability \(\mathbb{V}[\pi, C]\) is the vulnerability of the secret after the attacker has observed the output of the channel C. Formally: \(\mathbb{V}[\pi, C] = \sum_{y\in\mathcal{Y}} p(y)\, \mathbb{V}[p_{X \mid y}]\).

It is known from the literature [5] that the posterior vulnerability is a convex function of \(\pi \). Namely, for any channel C, any family of distributions \(\{\pi_{i}\}\), and any set of convex coefficients \(\{c_{i}\}\), we have: \(\mathbb{V}\big[\sum_{i} c_{i}\, \pi_{i},\, C\big] \le \sum_{i} c_{i}\, \mathbb{V}[\pi_{i}, C]\).

The (information) leakage of channel C under prior \(\pi \) is a comparison between the vulnerability of the secret before the system was run—called prior vulnerability—and the posterior vulnerability of the secret. Leakage reflects by how much the observation of the system’s outputs increases the attacker’s information about the secret. It can be defined either additively (\(\mathbb{V}[\pi, C] - \mathbb{V}[\pi]\)), or multiplicatively (\(\mathbb{V}[\pi, C] / \mathbb{V}[\pi]\)).
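The quantities just defined are straightforward to compute. The following sketch (ours) implements posterior vulnerability and both flavors of leakage, with Bayes vulnerability \(\mathbb{V}[\pi] = \max_{x} \pi(x)\) as the default measure; any other convex continuous function can be plugged in for V:

```python
def posterior_vulnerability(pi, C, V=max):
    """V[pi, C] = sum over y of p(y) * V(posterior distribution given y).
    The default V = max is Bayes vulnerability, V(pi) = max_x pi(x)."""
    total = 0.0
    for y in range(len(C[0])):
        p_y = sum(pi[x] * C[x][y] for x in range(len(pi)))
        if p_y > 0:
            posterior = [pi[x] * C[x][y] / p_y for x in range(len(pi))]
            total += p_y * V(posterior)
    return total

def additive_leakage(pi, C, V=max):
    return posterior_vulnerability(pi, C, V) - V(pi)

def multiplicative_leakage(pi, C, V=max):
    return posterior_vulnerability(pi, C, V) / V(pi)

pi = [0.5, 0.5]
C_id = [[1.0, 0.0], [0.0, 1.0]]   # a channel that reveals x exactly
print(posterior_vulnerability(pi, C_id))  # 1.0
print(additive_leakage(pi, C_id))         # 0.5
print(multiplicative_leakage(pi, C_id))   # 2.0
```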
Fig. 1.

Running example.

3 An Illustrative Example

We introduce an example which will serve as a running example throughout the paper. Although admittedly contrived, this example is simple and yet produces different leakage measures for all different combinations of visible/invisible choice and simultaneous/sequential games, thus providing a way to compare all the different scenarios we are interested in.

Consider that a binary secret must be processed by a program. As usual, a defender wants to protect the secret value, whereas an attacker wants to infer it by observing the system’s output. Assume the defender can choose which among two alternative versions of the program to run. Both programs take the secret value x as high input, and a binary low input a whose value is chosen by the attacker. They both return the output in a low variable y. Program 0 returns the binary product of x and a, whereas Program 1 flips a coin with bias \(a/4\) (i.e., a coin which returns heads with probability \(a/4\)) and returns x if the result is heads, and the complement \(\bar{x}\) of x otherwise. The two programs are represented in Fig. 1.

The combined choices of the defender and of the attacker determine how the system behaves. Let \(\mathcal{D} = \{0, 1\}\) represent the set of the defender’s choices—i.e., the index of the program to use—and \(\mathcal{A} = \{0, 1\}\) represent the set of the attacker’s choices—i.e., the value of the low input a. We shall refer to the elements of \(\mathcal{D}\) and \(\mathcal{A}\) as actions. For each possible combination of actions \(d \in \mathcal{D}\) and \(a \in \mathcal{A}\), we can construct a channel \(C_{da}\) modeling how the resulting system behaves. Each channel \(C_{da}\) is a function of type \(\mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}\), where \(\mathcal{X} = \{0, 1\}\) is the set of possible high input values for the system, and \(\mathcal{Y} = \{0, 1\}\) is the set of possible output values from the system. Intuitively, each channel provides the probability that the system (which was fixed by the defender) produces output \(y \in \mathcal{Y}\) given that the high input is \(x \in \mathcal{X}\) (and that the low input was fixed by the attacker). The four possible channels, with rows indexed by x and columns by y, are depicted as matrices below:

\(C_{00} = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}\), \(C_{01} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\), \(C_{10} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\), \(C_{11} = \begin{pmatrix} 1/4 & 3/4 \\ 3/4 & 1/4 \end{pmatrix}\).

Note that channel \(C_{00}\) does not leak any information about the input x (i.e., it is non-interferent), whereas channels \(C_{01}\) and \(C_{10}\) completely reveal x. Channel \(C_{11}\) is an intermediate case: it leaks some information about x, but not all.
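The following sketch (ours; the channel entries follow our reconstruction of the example) builds these four channels and recomputes their posterior Bayes vulnerabilities, anticipating the values reported in Table 1 below:

```python
from fractions import Fraction as F

# Rows are x in {0,1}; columns are y in {0,1}.
C = {(0, 0): [[1, 0], [1, 0]],                      # y = x AND 0: constant
     (0, 1): [[1, 0], [0, 1]],                      # y = x AND 1: identity
     (1, 0): [[0, 1], [1, 0]],                      # heads prob. 0: y = NOT x
     (1, 1): [[F(1, 4), F(3, 4)], [F(3, 4), F(1, 4)]]}  # heads prob. 1/4

def posterior_V(ch):
    """Posterior Bayes vulnerability under the uniform prior: the sum of
    the greatest element of each column, divided by the number of inputs."""
    return sum(max(col) for col in zip(*ch)) / F(len(ch))

for (d, a), ch in sorted(C.items()):
    print(f"V[pi_u, C_{d}{a}] = {posterior_V(ch)}")
# 1/2, 1, 1, 3/4 -- the values in Table 1
```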
We want to investigate how the defender’s and the attacker’s choices influence the leakage of the system. For that we can just consider the (simpler) notion of posterior vulnerability, since in order to make the comparison fair we need to assume that the prior is always the same in the various scenarios, and this implies that the leakage is in a one-to-one correspondence with the posterior vulnerability (this happens for both additive and multiplicative leakage).
Table 1.

Vulnerability of each channel \(C_{da}\) in the running example:
\(\mathbb{V}[\pi_{u}, C_{00}] = \frac{1}{2}\), \(\mathbb{V}[\pi_{u}, C_{01}] = 1\), \(\mathbb{V}[\pi_{u}, C_{10}] = 1\), \(\mathbb{V}[\pi_{u}, C_{11}] = \frac{3}{4}\).

For this example, assume we are interested in Bayes vulnerability [10, 27], defined as \(\mathbb{V}[\pi] = \max_{x} \pi(x)\) for every \(\pi \in \mathbb{D}\mathcal{X}\). Assume for simplicity that the prior is the uniform prior \(\pi _u\). In this case we know from [9] that the posterior Bayes vulnerability of a channel is the sum of the greatest elements of each column, divided by the total number of inputs. Table 1 provides the Bayes vulnerability \(\mathbb{V}[\pi_{u}, C_{da}]\) of each channel considered above.

Naturally, the attacker aims at maximizing the vulnerability of the system, while the defender tries to minimize it. The resulting vulnerability will depend on various factors, in particular on whether the two players make their choice simultaneously (i.e. without knowing the choice of the opponent) or sequentially. Clearly, if the choice of a player who moves first is known by an opponent who moves second, the opponent will be in advantage. In the above example, for instance, if the defender knows the choice a of the attacker, the most convenient choice for him is to set \(d = a\), and the vulnerability will be at most \(\frac{3}{4}\). Vice versa, if the attacker knows the choice d of the defender, the most convenient choice for him is to set \(a = \bar{d}\). The vulnerability in this case will be 1.

Things become more complicated when players make choices simultaneously. None of the pure choices of d and a are the best for the corresponding player, because the vulnerability of the system depends also on the (unknown) choice of the other player. Yet there is a strategy leading to the best possible situation for both players (the Nash equilibrium), but it is mixed (i.e., probabilistic), in that the players randomize their choices according to some precise distribution.

Another factor that affects vulnerability is whether or not the defender’s choice is known to the attacker at the moment in which he observes the output of the channel. Obviously, this corresponds to whether or not the attacker knows what channel he is observing. Both cases are plausible: naturally the defender has all the interest in keeping his choice (and, hence, the channel used) secret, since then the attack will be less effective (i.e., leakage will be smaller). On the other hand, the attacker may be able to identify the channel used anyway, for instance because the two programs have different running times. We will call these two cases hidden and visible choice, respectively.

It is possible to model players’ strategies, as well as hidden and visible choices, as operations on channels. This means that we can look at the whole system as if it were a single channel, which will turn out to be useful for some of the proofs of our technical results. The next section is dedicated to the definition of these operators. We will calculate the exact values for our example in Sect. 5.

4 Visible and Hidden Choice Operators on Channels

In this section we define matrices and some basic operations on them. Since channels are a particular kind of matrix, we use these matrix operations to define the operations of visible and hidden choice among channels, and to prove important properties of these channel operations.

4.1 Matrices, and Their Basic Operators

Given two sets \(\mathcal{X}\) and \(\mathcal{Y}\), a matrix is a total function of type \(\mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}\). Two matrices \(M_{1} : \mathcal{X}_{1} \times \mathcal{Y}_{1} \rightarrow \mathbb{R}\) and \(M_{2} : \mathcal{X}_{2} \times \mathcal{Y}_{2} \rightarrow \mathbb{R}\) are said to be compatible if \(\mathcal{X}_{1} = \mathcal{X}_{2}\). If it is also the case that \(\mathcal{Y}_{1} = \mathcal{Y}_{2}\), we say that the matrices have the same type. The scalar multiplication \(r M\) between a scalar r and a matrix M is defined as usual, and so is the summation \(\sum_{i\in\mathcal{I}} M_{i}\) of a family \(\{M_{i}\}_{i\in\mathcal{I}}\) of matrices all of a same type.

Given a family \(\{M_{i}\}_{i\in\mathcal{I}}\) of compatible matrices s.t. each \(M_{i}\) has type \(\mathcal{X} \times \mathcal{Y}_{i} \rightarrow \mathbb{R}\), their concatenation \(\Diamond_{i\in\mathcal{I}}\, M_{i}\) is the matrix having all columns of every matrix in the family, in such a way that every column is tagged with the matrix it came from. Formally, \(\big(\Diamond_{i\in\mathcal{I}}\, M_{i}\big)(x, (y, j)) = M_{j}(x, y)\), if \(y \in \mathcal{Y}_{j}\), and the resulting matrix has type \(\mathcal{X} \times \big(\bigcup_{i\in\mathcal{I}} \mathcal{Y}_{i} \times \{i\}\big) \rightarrow \mathbb{R}\). When the family \(\{M_{i}\}\) has only two elements we may use the binary version \(\diamond\) of the concatenation operator. For instance, if \(M_{1}\) has columns \(y_{1}, y_{2}\) and \(M_{2}\) has columns \(y_{1}, y_{3}\), then \(M_{1} \diamond M_{2}\) has the four columns \((y_{1}, 1), (y_{2}, 1), (y_{1}, 2), (y_{3}, 2)\).

4.2 Channels, and Their Hidden and Visible Choice Operators

A channel is a stochastic matrix, i.e., all elements are non-negative, and all rows sum up to 1. Here we will define two operators specific for channels. In the following, for any real value \(p \in [0, 1]\), we denote by \(\bar{p}\) the value \(1 - p\).

Hidden Choice. The first operator models a hidden probabilistic choice among channels. Consider a family \(\{C_{i}\}_{i\in\mathcal{I}}\) of channels of a same type. Let \(\mu \in \mathbb{D}\mathcal{I}\) be a probability distribution on the elements of the index set \(\mathcal{I}\). Suppose an input x is fed to one of the channels in \(\{C_{i}\}_{i\in\mathcal{I}}\), where the channel is randomly picked according to \(\mu \). More precisely, an index \(i \in \mathcal{I}\) is sampled with probability \(\mu(i)\), then the input x is fed to channel \(C_{i}\), and the output y produced by the channel is then made visible, but not the index i of the channel that was used. Note that we consider hidden choice only among channels of a same type: if the sets of outputs were not identical, the produced output might implicitly reveal which channel was used.

Formally, given a family \(\{C_{i}\}_{i\in\mathcal{I}}\) of channels s.t. each \(C_{i}\) has same type \(\mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}\), the hidden choice operator \(\bigoplus_{i\leftarrow\mu}\) is defined as \(\bigoplus_{i\leftarrow\mu} C_{i} = \sum_{i\in\mathcal{I}} \mu(i)\, C_{i}\).

Proposition 2

Given a family \(\{C_{i}\}_{i\in\mathcal{I}}\) of channels of type \(\mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}\), and a distribution \(\mu \) on \(\mathcal{I}\), the hidden choice \(\bigoplus_{i\leftarrow\mu} C_{i}\) is a channel of type \(\mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}\).

In the particular case in which the family \(\{C_{i}\}\) has only two elements \(C_{i_{1}}\) and \(C_{i_{2}}\), the distribution \(\mu \) on indexes is completely determined by a real value \(p \in [0, 1]\) s.t. \(\mu(i_{1}) = p\) and \(\mu(i_{2}) = \bar{p}\). In this case we may use the binary version \({}_{p}{\oplus}\) of the hidden choice operator: \(C_{i_{1}} \,{}_{p}{\oplus}\, C_{i_{2}} = p\, C_{i_{1}} + \bar{p}\, C_{i_{2}}\). For instance, the hidden choice between channels \(C_{1}\) and \(C_{2}\) with probability \(p = 1/3\) is the channel \(C_{1} \,{}_{1/3}{\oplus}\, C_{2} = \frac{1}{3} C_{1} + \frac{2}{3} C_{2}\).

Visible Choice. The second operator models a visible probabilistic choice among channels. Consider a family \(\{C_{i}\}_{i\in\mathcal{I}}\) of compatible channels. Let \(\mu \in \mathbb{D}\mathcal{I}\) be a probability distribution on the elements of the index set \(\mathcal{I}\). Suppose an input x is fed to one of the channels in \(\{C_{i}\}_{i\in\mathcal{I}}\), where the channel is randomly picked according to \(\mu \). More precisely, an index \(i \in \mathcal{I}\) is sampled with probability \(\mu(i)\), then the input x is fed to channel \(C_{i}\), and the output y produced by the channel is then made visible, along with the index i of the channel that was used. Note that visible choice makes sense only between compatible channels, but it is not required that the output set of each channel be the same.

Formally, given a family \(\{C_{i}\}_{i\in\mathcal{I}}\) of compatible channels s.t. each \(C_{i}\) has type \(\mathcal{X} \times \mathcal{Y}_{i} \rightarrow \mathbb{R}\), and a distribution \(\mu \) on \(\mathcal{I}\), the visible choice operator \(\bigsqcup_{i\leftarrow\mu}\) is defined as \(\bigsqcup_{i\leftarrow\mu} C_{i} = \Diamond_{i\in\mathcal{I}}\, \mu(i)\, C_{i}\).

Proposition 3

Given a family \(\{C_{i}\}_{i\in\mathcal{I}}\) of compatible channels s.t. each \(C_{i}\) has type \(\mathcal{X} \times \mathcal{Y}_{i} \rightarrow \mathbb{R}\), and a distribution \(\mu \) on \(\mathcal{I}\), the result of the visible choice \(\bigsqcup_{i\leftarrow\mu} C_{i}\) is a channel of type \(\mathcal{X} \times \big(\bigcup_{i\in\mathcal{I}} \mathcal{Y}_{i} \times \{i\}\big) \rightarrow \mathbb{R}\).

In the particular case in which the family \(\{C_{i}\}\) has only two elements \(C_{i_{1}}\) and \(C_{i_{2}}\), the distribution \(\mu \) on indexes is completely determined by a real value \(p \in [0, 1]\) s.t. \(\mu(i_{1}) = p\) and \(\mu(i_{2}) = \bar{p}\). In this case we may use the binary version \({}_{p}{\sqcup}\) of the visible choice operator: \(C_{i_{1}} \,{}_{p}{\sqcup}\, C_{i_{2}} = p\, C_{i_{1}} \diamond \bar{p}\, C_{i_{2}}\). For instance, the visible choice between channels \(C_{1}\) and \(C_{3}\) with probability \(p = 1/3\) is the channel \(C_{1} \,{}_{1/3}{\sqcup}\, C_{3} = \frac{1}{3} C_{1} \diamond \frac{2}{3} C_{3}\).
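Both operators are mechanical to implement once channels are represented as matrices. The sketch below (ours) takes channels as lists of rows; in `visible_choice` the tag of each column is left implicit in its position:

```python
def hidden_choice(channels, mu):
    """Hidden choice: entrywise convex combination of the channels;
    all channels must have the same type (same rows, same columns)."""
    rows, cols = len(channels[0]), len(channels[0][0])
    return [[sum(mu[i] * channels[i][x][y] for i in range(len(channels)))
             for y in range(cols)] for x in range(rows)]

def visible_choice(channels, mu):
    """Visible choice: concatenation of the scaled channels, each block
    of columns (implicitly) tagged with the index of its channel."""
    return [[mu[i] * channels[i][x][y]
             for i in range(len(channels)) for y in range(len(channels[i][0]))]
            for x in range(len(channels[0]))]

C1 = [[1.0, 0.0], [0.0, 1.0]]
C2 = [[0.5, 0.5], [0.5, 0.5]]
print(hidden_choice([C1, C2], [1/3, 2/3]))   # still a 2x2 stochastic matrix
print(visible_choice([C1, C2], [1/3, 2/3]))  # a 2x4 stochastic matrix
```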

4.3 Properties of Hidden and Visible Choice Operators

We now prove algebraic properties of channel operators. These properties will be useful when we model a (more complex) protocol as the composition of smaller channels via hidden or visible choice.

Whereas the properties of hidden choice hold generally with equality, those of visible choice are subtler. For instance, visible choice is not idempotent, since in general \(C \,{}_{p}{\sqcup}\, C \ne C\). (In fact if C has type \(\mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}\), \(C \,{}_{p}{\sqcup}\, C\) has type \(\mathcal{X} \times (\mathcal{Y} \times \{1, 2\}) \rightarrow \mathbb{R}\).) However, idempotency and other properties involving visible choice hold if we replace the notion of equality with the more relaxed notion of “equivalence” between channels. Intuitively, two channels are equivalent if they have the same input space and yield the same value of vulnerability for every prior and every vulnerability function.

Definition 4

(Equivalence of channels). Two compatible channels \(C_{1}\) and \(C_{2}\) with domain \(\mathcal{X}\) are equivalent, denoted by \(C_{1} \approx C_{2}\), if for every prior \(\pi \in \mathbb{D}\mathcal{X}\) and every posterior vulnerability \(\mathbb{V}\) we have \(\mathbb{V}[\pi, C_{1}] = \mathbb{V}[\pi, C_{2}]\).

Two equivalent channels are indistinguishable from the point of view of information leakage, and in most cases we can just identify them. Indeed, nowadays there is a tendency to use abstract channels [5, 23], which capture exactly the important behavior with respect to any form of leakage. In this paper, however, we cannot use abstract channels because the hidden choice operator needs a concrete representation in order to be defined unambiguously.

The first properties we prove regard the idempotency of our operators, which can be used to simplify the representation of some protocols.

Proposition 5

(Idempotency). Given a family \(\{C_{i}\}_{i\in\mathcal{I}}\) of channels s.t. \(C_{i} = C\) for all \(i \in \mathcal{I}\), and a distribution \(\mu \) on \(\mathcal{I}\), then: (a) \(\bigoplus_{i\leftarrow\mu} C_{i} = C\); and (b) \(\bigsqcup_{i\leftarrow\mu} C_{i} \approx C\).

The following properties regard the reorganization of operators, and they will be essential in some technical results in which we invert the order in which hidden and visible choice are applied in a protocol.

Proposition 6

(“Reorganization of operators”). Given a family \(\{C_{ij}\}_{i\in\mathcal{I},\, j\in\mathcal{J}}\) of channels indexed by sets \(\mathcal{I}\) and \(\mathcal{J}\), a distribution \(\mu \) on \(\mathcal{I}\), and a distribution \(\eta \) on \(\mathcal{J}\):
  1. (a) \(\bigoplus_{i\leftarrow\mu} \bigoplus_{j\leftarrow\eta} C_{ij} = \bigoplus_{i\leftarrow\mu,\, j\leftarrow\eta} C_{ij}\), if all \(C_{ij}\)’s have the same type;
  2. (b) \(\bigsqcup_{i\leftarrow\mu} \bigsqcup_{j\leftarrow\eta} C_{ij} \approx \bigsqcup_{i\leftarrow\mu,\, j\leftarrow\eta} C_{ij}\), if all \(C_{ij}\)’s are compatible; and
  3. (c) \(\bigsqcup_{i\leftarrow\mu} \bigoplus_{j\leftarrow\eta} C_{ij} = \bigoplus_{j\leftarrow\eta} \bigsqcup_{i\leftarrow\mu} C_{ij}\), if, for each i, all \(C_{ij}\)’s have the same type \(\mathcal{X} \times \mathcal{Y}_{i} \rightarrow \mathbb{R}\).

4.4 Properties of Vulnerability w.r.t. Channel Operators

We now derive some relevant properties of vulnerability w.r.t. our channel operators, which will be later used to obtain the Nash equilibria in information leakage games with different choice operations.

The first result states that posterior vulnerability is convex w.r.t. hidden choice (this result was already presented in [4]), and linear w.r.t. visible choice.

Theorem 7

Let \(\{C_{i}\}_{i\in\mathcal{I}}\) be a family of channels, and \(\mu \) be a distribution on \(\mathcal{I}\). Then, for every distribution \(\pi \) on \(\mathcal{X}\), and every vulnerability \(\mathbb{V}\):
  1. (a) posterior vulnerability is convex w.r.t. hidden choice: \(\mathbb{V}\big[\pi, \bigoplus_{i\leftarrow\mu} C_{i}\big] \le \sum_{i\in\mathcal{I}} \mu(i)\, \mathbb{V}[\pi, C_{i}]\), if all \(C_{i}\)’s have the same type;
  2. (b) posterior vulnerability is linear w.r.t. visible choice: \(\mathbb{V}\big[\pi, \bigsqcup_{i\leftarrow\mu} C_{i}\big] = \sum_{i\in\mathcal{I}} \mu(i)\, \mathbb{V}[\pi, C_{i}]\), if all \(C_{i}\)’s are compatible.

The next result is concerned with posterior vulnerability under the composition of channels using both operators.

Corollary 8

Let \(\{C_{ij}\}_{i\in\mathcal{I},\, j\in\mathcal{J}}\) be a family of channels, all with domain \(\mathcal{X}\) and with the same type, and let \(\pi \in \mathbb{D}\mathcal{X}\), and \(\mathbb{V}\) be any vulnerability. Define \( U : \mathbb{D}\mathcal{I} \times \mathbb{D}\mathcal{J} \rightarrow \mathbb{R}\) as follows: \(U(\mu, \eta) = \mathbb{V}\big[\pi, \bigoplus_{i\leftarrow\mu} \bigsqcup_{j\leftarrow\eta} C_{ij}\big]\). Then \( U \) is convex on \(\mu \) and linear on \(\eta \).

5 Information Leakage Games

In this section we present our framework for reasoning about information leakage, extending the notion of information leakage games proposed in [4] from only simultaneous games with hidden choice to both simultaneous and sequential games, with either hidden or visible choice.

In an information leakage game the defender tries to minimize the leakage of information from the system, while the attacker tries to maximize it. In this basic scenario, their goals are just opposite (zero-sum). Both of them can influence the execution and the observable behavior of the system via a specific set of actions. We assume players to be rational (i.e., they are able to figure out what is the best strategy to maximize their expected payoff), and that the set of actions and the payoff function are common knowledge.

Players choose their own strategy, which in general may be mixed (i.e. probabilistic), and choose their action by a random draw according to that strategy. After both players have performed their actions, the system runs and produces some output value which is visible to the attacker and may leak some information about the secret. The amount of leakage constitutes the attacker’s gain, and the defender’s loss.

To quantify the leakage we model the system as an information-theoretic channel (cf. Sect. 2.2). We recall that leakage is defined as the difference (additive leakage) or the ratio (multiplicative leakage) between posterior and prior vulnerability. Since we are only interested in comparing the leakage of different channels for a given prior, we will define the payoff just as the posterior vulnerability, as the value of prior vulnerability will be the same for every channel.

5.1 Defining Information Leakage Games

An (information) leakage game consists of: (1) two nonempty sets \(\mathcal{D}\), \(\mathcal{A}\) of defender’s and attacker’s actions respectively, (2) a function \(C : \mathcal{D} \times \mathcal{A} \rightarrow (\mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R})\) that associates to each pair of actions \((d, a) \in \mathcal{D} \times \mathcal{A}\) a channel \(C_{da} : \mathcal{X} \times \mathcal{Y} \rightarrow \mathbb{R}\), (3) a prior \(\pi \in \mathbb{D}\mathcal{X}\) on secrets, and (4) a vulnerability measure \(\mathbb{V}\). The payoff function \(u : \mathcal{D} \times \mathcal{A} \rightarrow \mathbb{R}\) for pure strategies is defined as \(u(d, a) = \mathbb{V}[\pi, C_{da}]\). We have only one payoff function because the game is zero-sum.

Like in traditional game theory, the order of actions and the extent to which a player knows the move performed by the opponent play a critical role in deciding strategies and determining the payoff. In security, however, knowledge of the opponent’s move affects the game in yet another way: the effectiveness of the attack, i.e., the amount of leakage, depends crucially on whether or not the attacker knows what channel is being used. It is therefore convenient to distinguish two phases in the leakage game:

 
Phase 1:

Each player determines the most convenient strategy (which in general is mixed) for himself, and draws his action accordingly. One of the players may commit first to his action, and his choice may or may not be revealed to the follower. In general, knowledge of the leader’s action may help the follower choose a more advantageous strategy.

Phase 2:

The attacker observes the output of the selected channel \(C_{da}\) and performs his attack on the secret. In case he knows the defender’s action, he is able to determine the exact channel \(C_{da}\) being used (since, of course, the attacker knows his own action), and his payoff will be the posterior vulnerability \(\mathbb{V}[\pi, C_{da}]\). However, if the attacker does not know exactly which channel has been used, then his payoff will be smaller.

Note that the issues raised in Phase 2 are typical of leakage games; they do not have a correspondence (to the best of our knowledge) in traditional game theory. On the other hand, these issues are central to security, as they reflect the principle of preventing the attacker from inferring the secret by obfuscating the link between secret and observables.

Following the above discussion, we consider various possible scenarios for games, along two lines of classification. First, there are three possible orders for the two players’ actions.  
Simultaneous:

The players choose (draw) their actions in parallel, each without knowing the choice of the other.

Sequential, defender-first:

The defender draws an action, and commits to it, before the attacker does.

Sequential, attacker-first:

The attacker draws an action, and commits to it, before the defender does.

  Note that these sequential games may present imperfect information (i.e., the follower may not know the leader’s action).

Second, the visibility of the defender’s action during the attack may vary:

 
Visible choice:

The attacker knows the defender’s action when he observes the output of the channel, and therefore he knows which channel is being used. Visible choice is modeled by the operator \(\sqcup\).

Hidden choice:

The attacker does not know the defender’s action when he observes the output of the channel, and therefore in general he does not exactly know which channel is used (although in some special cases he may infer it from the output). Hidden choice is modeled by the operator \(\oplus\).

 

Note that the distinction between sequential and simultaneous games is orthogonal to that between visible and hidden choice. Sequential and simultaneous games model whether or not, respectively, the follower’s choice can be affected by knowledge of the leader’s action. This dichotomy captures how knowledge about the other player’s actions can help a player choose his own action. On the other hand, visible and hidden choice capture whether or not, respectively, the attacker is able to fully determine the channel representing the system, once defender and attacker’s actions have already been fixed. This dichotomy reflects the different amounts of information leaked by the system as viewed by the adversary. For instance, in a simultaneous game neither player can choose his action based on the choice of the other. However, depending on whether or not the defender’s choice is visible, the adversary will or will not, respectively, be able to completely recover the channel used, which will affect the amount of leakage.

If we consider also the subdivision of sequential games into perfect and imperfect information, there are 10 possible different combinations. Some, however, make little sense. For instance, a defender-first sequential game with perfect information (by the attacker) does not combine naturally with hidden choice \(\oplus\), since that would mean that the attacker knows the action of the defender and chooses his strategy accordingly, but forgets it at the moment of the attack. (We assume perfect recall, i.e., the players never forget what they have learned.) Yet other combinations are not interesting, such as the attacker-first sequential game with (totally) imperfect information (by the defender), since it coincides with the simultaneous-game case. Note that attacker and defender are not symmetric with respect to hiding/revealing their actions a and d, since the knowledge of a affects the game only in the usual sense of game theory, while the knowledge of d also affects the computation of the payoff (cf. “Phase 2” above).

Table 2 lists the meaningful and interesting combinations. In Game V we assume imperfect information: the attacker does not know the action chosen by the defender. In all the other sequential games we assume that the follower has perfect information. In the remainder of this section, we discuss each game individually, using the example of Sect. 3 as a running example.
Table 2.

Kinds of games we consider. All sequential games have perfect information, except for game V.

  • Game I: simultaneous game with visible choice.
  • Game II: defender-first sequential game with visible choice.
  • Game III: attacker-first sequential game with visible choice.
  • Game IV: simultaneous game with hidden choice.
  • Game V: defender-first sequential game with imperfect information and hidden choice.
  • Game VI: attacker-first sequential game with hidden choice.

Game I (simultaneous with visible choice). This simultaneous game can be represented by a tuple \((\mathcal{D}, \mathcal{A}, u)\). As in all games with visible choice \(\sqcup\), the expected payoff \( U \) of a mixed strategy profile \((\delta, \alpha)\) is defined to be the expected value of u, as in traditional game theory: \(U(\delta, \alpha) = \mathbb{E}_{d\leftarrow\delta,\, a\leftarrow\alpha}\, u(d, a) = \sum_{d, a} \delta(d)\, \alpha(a)\, u(d, a)\), where we recall that \(u(d, a) = \mathbb{V}[\pi, C_{da}]\).

From Theorem 7(b) we derive: \(U(\delta, \alpha) = \mathbb{V}\big[\pi, \bigsqcup_{d\leftarrow\delta} \bigsqcup_{a\leftarrow\alpha} C_{da}\big]\). Hence the whole system can be equivalently regarded as the channel \(\bigsqcup_{d\leftarrow\delta} \bigsqcup_{a\leftarrow\alpha} C_{da}\). Still from Theorem 7(b), we can derive that \(U(\delta, \alpha)\) is linear in \(\delta \) and \(\alpha \). Therefore the Nash equilibrium can be computed using the minimax method (cf. Sect. 2.1).

Example 9

Consider the example of Sect. 3 in the setting of Game I. The Nash equilibrium \((\delta^{*}, \alpha^{*})\) can be obtained using the closed formula from Sect. 2.1, and it is given by \(\delta^{*}(0) = \alpha^{*}(0) = \frac{1}{3}\). The corresponding payoff is \(U(\delta^{*}, \alpha^{*}) = \frac{5}{6}\).
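The linearity expressed by Theorem 7(b) can be verified numerically on this example: computing the expected payoff both as an average of pure payoffs and as the posterior vulnerability of the composed channel \(\bigsqcup_{d\leftarrow\delta} \bigsqcup_{a\leftarrow\alpha} C_{da}\) yields the same value. Below is a minimal sketch (ours; the channel entries follow our reconstruction of the running example):

```python
from fractions import Fraction as F

C = {(0, 0): [[1, 0], [1, 0]], (0, 1): [[1, 0], [0, 1]],
     (1, 0): [[0, 1], [1, 0]],
     (1, 1): [[F(1, 4), F(3, 4)], [F(3, 4), F(1, 4)]]}

def post_V(ch):    # posterior Bayes vulnerability, uniform prior
    return sum(max(col) for col in zip(*ch)) / F(len(ch))

def visible(chs, mu):   # visible choice: concatenation of scaled channels
    return [[m * ch[x][y] for m, ch in zip(mu, chs) for y in range(len(ch[0]))]
            for x in range(len(chs[0]))]

delta = alpha = [F(1, 3), F(2, 3)]       # the equilibrium of Example 9

# (1) U as the average of the pure payoffs u(d, a) = V[pi, C_da] ...
U1 = sum(delta[d] * alpha[a] * post_V(C[d, a]) for d in (0, 1) for a in (0, 1))
# (2) ... and U as the vulnerability of the single composed channel.
U2 = post_V(visible([visible([C[d, 0], C[d, 1]], alpha) for d in (0, 1)], delta))
print(U1, U2)  # both 5/6
```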

Game II (defender 1\(^{\mathbf{st}}\) with visible choice). This defender-first sequential game can be represented by a tuple \((\mathcal{D}, \mathcal{D}\rightarrow\mathcal{A}, u)\). A mixed strategy profile is of the form \((\delta, \sigma_{\mathsf{a}})\), with \(\delta \in \mathbb{D}\mathcal{D}\) and \(\sigma_{\mathsf{a}} \in \mathbb{D}(\mathcal{D}\rightarrow\mathcal{A})\), and the corresponding payoff is \(U(\delta, \sigma_{\mathsf{a}}) = \mathbb{E}_{d\leftarrow\delta,\, s_{\mathsf{a}}\leftarrow\sigma_{\mathsf{a}}}\, u(d, s_{\mathsf{a}}(d))\), where \(u(d, s_{\mathsf{a}}(d)) = \mathbb{V}[\pi, C_{d\, s_{\mathsf{a}}(d)}]\).

Again, from Theorem 7(b) we derive: \(U(\delta, \sigma_{\mathsf{a}}) = \mathbb{V}\big[\pi, \bigsqcup_{d\leftarrow\delta} \bigsqcup_{s_{\mathsf{a}}\leftarrow\sigma_{\mathsf{a}}} C_{d\, s_{\mathsf{a}}(d)}\big]\) and hence the system can be expressed as the channel \(\bigsqcup_{d\leftarrow\delta} \bigsqcup_{s_{\mathsf{a}}\leftarrow\sigma_{\mathsf{a}}} C_{d\, s_{\mathsf{a}}(d)}\). From the same theorem we also derive that \(U(\delta, \sigma_{\mathsf{a}})\) is linear in \(\delta \) and \( \sigma _\mathsf{a} \), so the mutually optimal strategies can be obtained again by solving the minimax problem. In this case, however, the solution is particularly simple, because it is known that there are optimal strategies which are deterministic. Hence it is sufficient for the defender to find the action d which minimizes \(\max_{a\in\mathcal{A}} \mathbb{V}[\pi, C_{da}]\).

Example 10

Consider the example of Sect. 3 in the setting of Game II. If the defender chooses 0 then the attacker chooses 1. If the defender chooses 1 then the attacker chooses 0. In both cases, the payoff is 1. The game therefore has two solutions, \((0, s_{\mathsf{a}}^{*})\) and \((1, s_{\mathsf{a}}^{*})\), where \(s_{\mathsf{a}}^{*}(d) = \bar{d}\).

Game III (attacker 1\(^\mathbf{{st}}\) with visible choice). This game is also a sequential game, but with the attacker as the leader. Therefore it can be represented as a tuple of the form \((\mathcal{A}, \mathcal{A}\rightarrow\mathcal{D}, u)\). It is the same as Game II, except that the roles of the attacker and the defender are inverted. In particular, the payoff of a mixed strategy profile \((\sigma_{\mathsf{d}}, \alpha)\) is given by \(U(\sigma_{\mathsf{d}}, \alpha) = \mathbb{E}_{s_{\mathsf{d}}\leftarrow\sigma_{\mathsf{d}},\, a\leftarrow\alpha}\, u(s_{\mathsf{d}}(a), a)\), and the whole system can be equivalently regarded as the channel \(\bigsqcup_{a\leftarrow\alpha} \bigsqcup_{s_{\mathsf{d}}\leftarrow\sigma_{\mathsf{d}}} C_{s_{\mathsf{d}}(a)\, a}\). Obviously, also in this case the minimax problem has a deterministic solution.

In summary, in the sequential case, whether the leader is the defender or the attacker (Games II and III, respectively), the minimax problem always has a deterministic solution [24].

Theorem 11

In a defender-first sequential game with visible choice, there exist \(d^{*} \in \mathcal{D}\) and \(s_{\mathsf{a}}^{*} : \mathcal{D} \rightarrow \mathcal{A}\) such that, for every \(\delta \in \mathbb{D}\mathcal{D}\) and \(\sigma_{\mathsf{a}} \in \mathbb{D}(\mathcal{D}\rightarrow\mathcal{A})\) we have: \(U(d^{*}, \sigma_{\mathsf{a}}) \le U(d^{*}, s_{\mathsf{a}}^{*}) \le U(\delta, s_{\mathsf{a}}^{*})\). Similarly, in an attacker-first sequential game with visible choice, there exist \(a^{*} \in \mathcal{A}\) and \(s_{\mathsf{d}}^{*} : \mathcal{A} \rightarrow \mathcal{D}\) such that, for every \(\alpha \in \mathbb{D}\mathcal{A}\) and \(\sigma_{\mathsf{d}} \in \mathbb{D}(\mathcal{A}\rightarrow\mathcal{D})\) we have: \(U(s_{\mathsf{d}}^{*}, \alpha) \le U(s_{\mathsf{d}}^{*}, a^{*}) \le U(\sigma_{\mathsf{d}}, a^{*})\).

Example 12

Consider now the example of Sect. 3 in the setting of Game III. If the attacker chooses 0 then the defender chooses 0 and the payoff is \(\frac{1}{2}\). If the attacker chooses 1 then the defender chooses 1 and the payoff is \(\frac{3}{4}\). The latter case is more convenient for the attacker, hence the solution of the game is the strategy profile (1, 1).

Game IV (simultaneous with hidden choice). This game is a tuple \((\mathcal{D}, \mathcal{A}, u)\). However, it is not an ordinary game in the sense that the payoff of a mixed strategy profile cannot be defined by averaging the payoffs of the corresponding pure strategies. More precisely, the payoff of a mixed profile is defined by averaging on the strategy of the attacker, but not on that of the defender. In fact, when hidden choice is used, there is an additional level of uncertainty in the relation between the observables and the secret from the point of view of the attacker, since he is not sure about which channel is producing those observables. A mixed strategy \(\delta \) for the defender produces a convex combination of channels (the channels associated to the pure strategies) with the same coefficients, and we know from previous sections that the vulnerability is a convex function of the channel, and in general is not linear.

In order to define the payoff of a mixed strategy profile \((\delta, \alpha)\), we need therefore to consider the channel that the attacker perceives given his limited knowledge. Let us assume that the action that the attacker draws from \(\alpha \) is a. He does not know the action of the defender, but we can assume that he knows his strategy (each player can derive the optimal strategy of the opponent, under the assumption of common knowledge and rational players).

The channel the attacker will see is \(\bigoplus_{d\leftarrow\delta} C_{da}\), obtaining a corresponding payoff of \(\mathbb{V}\big[\pi, \bigoplus_{d\leftarrow\delta} C_{da}\big]\). By averaging on the strategy of the attacker we obtain \(U(\delta, \alpha) = \sum_{a\in\mathcal{A}} \alpha(a)\, \mathbb{V}\big[\pi, \bigoplus_{d\leftarrow\delta} C_{da}\big]\). From Theorem 7(b) we derive: \(U(\delta, \alpha) = \mathbb{V}\big[\pi, \bigsqcup_{a\leftarrow\alpha} \bigoplus_{d\leftarrow\delta} C_{da}\big]\) and hence the whole system can be equivalently regarded as the channel \(\bigsqcup_{a\leftarrow\alpha} \bigoplus_{d\leftarrow\delta} C_{da}\). Note that, by Proposition 6(c), the order of the operators is interchangeable, and the system can be equivalently regarded as \(\bigoplus_{d\leftarrow\delta} \bigsqcup_{a\leftarrow\alpha} C_{da}\). This shows the robustness of this model.

From Corollary 8 we derive that \(U(\delta, \alpha)\) is convex in \(\delta \) and linear in \(\alpha \), hence we can compute the Nash equilibrium by the minimax method.

Example 13

Consider now the example of Sect. 3 in the setting of Game IV. For \(p, q \in [0, 1]\), let \(\delta = (p, \bar{p})\) and \(\alpha = (q, \bar{q})\), i.e., p is the probability the defender assigns to Program 0 and q the probability the attacker assigns to low input 0. The system can be represented by the channel \((C_{00} \,{}_{p}{\oplus}\, C_{10}) \;{}_{q}{\sqcup}\; (C_{01} \,{}_{p}{\oplus}\, C_{11})\). For uniform \(\pi _u\), we have \(\mathbb{V}[\pi_{u}, C_{00} \,{}_{p}{\oplus}\, C_{10}] = 1 - \frac{p}{2}\); and \(\mathbb{V}[\pi_{u}, C_{01} \,{}_{p}{\oplus}\, C_{11}]\) is equal to \(\frac{3 - 3p}{4}\) if \(p \le \frac{1}{3}\), and equal to \(\frac{1 + 3p}{4}\) if \(p \ge \frac{1}{3}\). Hence the payoff, expressed in terms of p and q, is \(U(p, q) = q\,\big(1 - \frac{p}{2}\big) + \bar{q}\, \frac{3 - 3p}{4}\) if \(p \le \frac{1}{3}\), and \(U(p, q) = q\,\big(1 - \frac{p}{2}\big) + \bar{q}\, \frac{1 + 3p}{4}\) if \(p \ge \frac{1}{3}\). The Nash equilibrium \((p^{*}, q^{*})\) is given by \(p^{*} = \operatorname{argmin}_{p} \max_{q} U(p, q)\) and \(q^{*} = \operatorname{argmax}_{q} \min_{p} U(p, q)\), and by solving the above, we obtain \(p^{*} = q^{*} = \frac{3}{5}\), with corresponding payoff \(U(p^{*}, q^{*}) = \frac{7}{10}\).
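The equilibrium above can be double-checked by brute force, exploiting the fact that \(U\) is convex in p and linear in q. A sketch (ours; `V0` and `V1` transcribe the two vulnerability expressions just derived):

```python
def V0(p):                      # V[pi_u, C00 p(+) C10]
    return 1 - p / 2

def V1(p):                      # V[pi_u, C01 p(+) C11]
    return max((3 - 3 * p) / 4, (1 + 3 * p) / 4)

def U(p, q):                    # p = delta(0), q = alpha(0)
    return q * V0(p) + (1 - q) * V1(p)

grid = [i / 900 for i in range(901)]
# The defender minimizes the worst case; since U is linear in q, the
# inner maximum is attained at a pure attacker action (q = 0 or q = 1).
p_star = min(grid, key=lambda p: max(U(p, 0), U(p, 1)))
q_star = max(grid, key=lambda q: min(U(p, q) for p in grid))
print(p_star, q_star, max(U(p_star, 0), U(p_star, 1)))  # 0.6 0.6 0.7
```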

Game V (defender 1\(^{\mathbf{st}}\) with hidden choice). This is a defender-first sequential game with imperfect information, hence it can be represented as a tuple of the form \((\mathcal{D}, K _{\mathsf {a}}\rightarrow\mathcal{A}, u)\), where \( K _{\mathsf {a}}\) is a partition of \(\mathcal{D}\). Since we are assuming perfect recall, and the attacker does not know anything about the action chosen by the defender in Phase 2, i.e., at the moment of the attack (except the probability distribution determined by his strategy), we must assume that the attacker does not know anything in Phase 1 either. Hence the indistinguishability relation must be total, i.e., \( K _{\mathsf {a}} = \{\mathcal{D}\}\). But \(\{\mathcal{D}\}\rightarrow\mathcal{A}\) is equivalent to \(\mathcal{A}\), hence this kind of game is equivalent to Game IV.

It is also a well-known fact in game theory that when, in a sequential game, the follower does not know the leader’s move before making his choice, the game is equivalent to a simultaneous game.

Game VI (attacker 1\(^{\mathbf{st}}\) with hidden choice). This game is also a sequential game with the attacker as the leader, hence it is a tuple of the form \((\mathcal{A}, \mathcal{A}\rightarrow\mathcal{D}, u)\). It is similar to Game III, except that the payoff is convex on the strategy of the defender, instead of linear. The payoff of the mixed strategy profile \((\sigma_{\mathsf{d}}, \alpha)\) is \(U(\sigma_{\mathsf{d}}, \alpha) = \mathbb{E}_{a\leftarrow\alpha}\, \mathbb{V}\big[\pi, \bigoplus_{s_{\mathsf{d}}\leftarrow\sigma_{\mathsf{d}}} C_{s_{\mathsf{d}}(a)\, a}\big]\), so the whole system can be equivalently regarded as the channel \(\bigsqcup_{a\leftarrow\alpha} \bigoplus_{s_{\mathsf{d}}\leftarrow\sigma_{\mathsf{d}}} C_{s_{\mathsf{d}}(a)\, a}\). Also in this case the minimax problem has a deterministic solution, but only for the attacker.

Theorem 14

In an attacker-first sequential game with hidden choice, there exist \(a^{*} \in \mathcal{A}\) and \(\sigma_{\mathsf{d}}^{*} \in \mathbb{D}(\mathcal{A}\rightarrow\mathcal{D})\) such that, for every \(\alpha \in \mathbb{D}\mathcal{A}\) and every \(\sigma_{\mathsf{d}} \in \mathbb{D}(\mathcal{A}\rightarrow\mathcal{D})\) we have that \(U(\sigma_{\mathsf{d}}^{*}, \alpha) \le U(\sigma_{\mathsf{d}}^{*}, a^{*}) \le U(\sigma_{\mathsf{d}}, a^{*})\).

Example 15

Consider again the example of Sect. 3, this time in the setting of Game VI. Consider also the calculations made in Example 13; we will use the same results and notation here. In this setting, the attacker is obliged to make his choice first. If he chooses 0, which corresponds to committing to the system \(C_{00} \,{}_{p}{\oplus}\, C_{10}\), then the defender will choose \(p = 1\), i.e., the pure strategy \(d = 0\), which minimizes its vulnerability. If he chooses 1, which corresponds to committing to the system \(C_{01} \,{}_{p}{\oplus}\, C_{11}\), the defender will choose \(p = \frac{1}{3}\), which minimizes the vulnerability of that channel. In both cases, the leakage is \(\frac{1}{2}\), hence both these strategies are solutions to the minimax. Note that in the second case the strategy of the defender is mixed, while that of the attacker is always pure.
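A small numerical check (ours) of the defender's best responses in this game, reusing the expressions of Example 13:

```python
def V0(p):                      # attacker committed to a = 0
    return 1 - p / 2

def V1(p):                      # attacker committed to a = 1
    return max((3 - 3 * p) / 4, (1 + 3 * p) / 4)

grid = [i / 900 for i in range(901)]
p0 = min(grid, key=V0)          # best defender mix against a = 0
p1 = min(grid, key=V1)          # best defender mix against a = 1
print(p0, V0(p0))               # 1.0 0.5      -- pure strategy d = 0
print(p1, V1(p1))               # 0.333.. 0.5  -- mixed strategy
```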

5.2 Comparing the Games

If we look at the various payoffs obtained for the running example in the various games, we obtain the following values (listed in decreasing order): Open image in new window .
Fig. 2. Order of games w.r.t. payoff. Games higher in the lattice have larger payoff.

This order is not accidental: for any vulnerability function and for any prior, the various games are ordered with respect to the payoff as shown in Fig. 2. The relations between II, I, and III, and between IV-V and VI, come from the fact that in any zero-sum sequential game the leader's payoff is less than or equal to his payoff in the corresponding simultaneous game. We believe this result is well known in game theory, but we sketch the proof nevertheless, for the sake of clarity.

Theorem 16

It is the case that:

Proof

We prove the first inequality in (a). Independently of \(\delta \), consider the attacker strategy \(\tau _a\) that assigns probability 1 to the function \( s_\mathsf{a} \) defined as Open image in new window . Then we have that Open image in new window . Note that the strategy \(\tau _a\) is optimal for the adversary, so the first of the above inequalities is actually an equality. All other cases can be proved by analogous reasoning.    \(\square \)
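The generic fact invoked here is the max-min/min-max inequality. A short numerical check (ours, on a random payoff matrix over pure strategies) makes the direction of the inequality explicit:

import numpy as np
rng = np.random.default_rng(0)
U = rng.random((3, 3))                # U[d, a]: attacker's payoff (vulnerability)
attacker_first = U.min(axis=0).max()  # attacker commits, defender responds
defender_first = U.max(axis=1).min()  # defender commits, attacker responds
assert attacker_first <= defender_first  # max-min <= min-max, always
print(attacker_first, "<=", defender_first)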

Concerning III and IV-V: these are not comparable. In the running example the payoff for III is higher than for IV-V, but it is easy to find cases in which the situation is reversed. For instance, if in the running example we set \(C_{11}\) to be the same as \(C_{00}\), the payoff for III will be Open image in new window , and that for IV-V will be Open image in new window .

Finally, the relation between III and VI comes from the fact that both are attacker-first sequential games, the only difference being the way the payoff is defined. It then suffices to observe that, in general, for every Open image in new window and every Open image in new window , we have: Open image in new window .
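The inequality referred to here is lost in this rendering, but from the context it should be the convexity of posterior vulnerability in the channel (see footnote 4): hiding the defender's choice can only lower the payoff. In a notation that is our reconstruction rather than the paper's:

\[ V\Big(\pi,\ \sum_{d} \delta(d)\, C_d\Big) \;\le\; \sum_{d} \delta(d)\, V(\pi, C_d). \]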

The relations in Fig. 2 can be used by the defender as guidelines to better protect the system, if he has some control over the rules of the game. Obviously, for the defender the games lower in the ordering are to be preferred.
Fig. 3. Password-checker algorithm.

6 Case Study: A Safer, Faster Password-Checker

In this section we apply our game-theoretic, compositional approach to show how a defender can mitigate a typical timing side-channel attack while avoiding the efficiency penalty usually imposed on the password checker.

Consider the password-checker PWD123 of Fig. 3, which performs a bitwise check of a 3-bit low-input Open image in new window , provided by the attacker, against a 3-bit secret password Open image in new window . The low-input is rejected as soon as it mismatches the secret, and is accepted otherwise.
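Figure 3 itself is not reproduced here, but the checker it describes is a few lines of code. The sketch below is our rendering, with bit positions 0-indexed (the paper numbers them 1..3) and already parameterized by the comparison order used later in this section; the iteration count is the timing observable.

def check(password, low_input, order=(0, 1, 2)):
    # Bitwise check in the given order; rejects at the first mismatch,
    # so the number of iterations performed leaks a prefix of the secret.
    for iterations, i in enumerate(order, start=1):
        if password[i] != low_input[i]:
            return (False, iterations)
    return (True, len(order))

print(check("110", "101"))  # (False, 2): rejected after two iterations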

The attacker can choose low-inputs to try to gain information about the password. Obviously, if PWD123 accepts the low-input, the attacker learns that the password value is Open image in new window . Yet even when the low-input is rejected there is some leakage of information: from the duration of the execution the attacker can estimate how many iterations were performed before the low-input was rejected, thus inferring a prefix of the secret password.

To model this scenario, let Open image in new window be the set of all possible 3-bit passwords, and Open image in new window be the set of observables produced by the system. Each observable is an ordered pair whose first element indicates whether the password was accepted (\(T\) or \(F\)), and the second element indicates the duration of the computation (1, 2, or 3 iterations). For instance, channel \(C_{123,101}\) in Fig. 4 models PWD123’s behavior when the attacker provides low-input Open image in new window .
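Since the checker is deterministic, each row of such a channel is a point mass on the observable produced by the corresponding password. Continuing the sketch above (names and encoding are ours; Fig. 4 itself is not reproduced here):

from itertools import product
import numpy as np

passwords = ["".join(b) for b in product("01", repeat=3)]          # 8 secrets
observables = [(f, t) for f in (False, True) for t in (1, 2, 3)]   # (flag, time)

def channel(low_input, order=(0, 1, 2)):
    # C[x, y] = P(observable y | password x); the checker is deterministic,
    # so exactly one 1 per row.
    Cm = np.zeros((len(passwords), len(observables)))
    for x, pwd in enumerate(passwords):
        Cm[x, observables.index(check(pwd, low_input, order))] = 1.0
    return Cm

C_123_101 = channel("101")  # our encoding of the channel in Fig. 4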

We adopt Bayes vulnerability [27] as our measure of information. The prior Bayes vulnerability of a distribution Open image in new window is defined as Open image in new window , and represents the probability that the attacker correctly guesses the password in one try. For instance, if the distribution on all possible 3-bit passwords is Open image in new window , its prior Bayes vulnerability is Open image in new window .

The posterior Bayes vulnerability of a prior \(\pi \) and a channel Open image in new window is defined as Open image in new window , and represents the probability that the attacker correctly guesses the password in one try after observing the output of the channel (i.e., after measuring the time the checker takes to accept or reject the low-input). For the prior \(\hat{\pi }\) above, the posterior Bayes vulnerability of channel \(C_{123,101}\) is Open image in new window (an increase in Bayes vulnerability of about \(50\%\)), and the expected running time of this checker is 1.2747 iterations.
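Both definitions translate directly into code. The prior \(\hat{\pi }\) used in the example above is behind a broken formula, so the sketch below substitutes a uniform prior; its numbers therefore differ from those quoted in the text.

def prior_bayes_vuln(pi):
    # V(pi) = max_x pi_x
    return float(np.max(pi))

def posterior_bayes_vuln(pi, Cm):
    # V(pi, C) = sum_y max_x pi_x * C[x, y]
    return float(np.sum(np.max(pi[:, None] * Cm, axis=0)))

pi_uniform = np.full(8, 1 / 8)
print(prior_bayes_vuln(pi_uniform))                  # 0.125
print(posterior_bayes_vuln(pi_uniform, C_123_101))   # 0.5: timing leaks a prefix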

A way to mitigate this timing side-channel is to make the checker’s execution time independent of the secret. Channel \(C_{\text {cons},101}\) from Fig. 4 models a checker that does that (by eliminating the break command within the loop in PWD123) when the attacker’s low-input is Open image in new window . This channel’s posterior Bayes vulnerability is Open image in new window , which brings the multiplicative Bayes leakage down to an increase of only about \(0.05\%\). However, the expected running time goes up to 3 iterations (an increase of about \(135\%\) w.r.t. that of \(C_{123,101}\)).
Fig. 4. Channels \(C_{da}\) modeling the password checker for defender's action d and attacker's action a.

Seeking a compromise between security and efficiency, assume that the defender can employ password-checkers that perform the bitwise comparison between the low-input a and the secret password x in different orders. More precisely, there is one version of the checker for every possible order in which the index i ranges over the bit positions in the loop. For instance, while PWD123 checks the bits in the order 1, 2, 3, the alternative algorithm PWD231 uses the order 2, 3, 1.

To determine the defender's best choice of which versions of the checker to run, we model this problem as a game. The attacker's actions Open image in new window are all possible low-inputs to the checker, and the defender's actions Open image in new window are all orders in which to perform the comparison. Hence there is a total of 48 possible channels Open image in new window , one for each combination of Open image in new window , Open image in new window .
Table 3. Utility for each pure strategy profile.

In our framework, the utility of a mixed strategy profile Open image in new window is given by Open image in new window . For each pure strategy profile Open image in new window , the payoff of the game is the posterior Bayes vulnerability of the resulting channel \(C_{da}\) (since, when measuring leakage, the prior vulnerability is the same for every channel once the prior is fixed). Table 3 depicts these payoffs. Note that the attacker's and defender's actions substantially affect the effectiveness of the attack: vulnerability ranges between 0.4934 and 0.9311 (so multiplicative leakage ranges between an increase of \(12\%\) and one of \(112\%\)). Using techniques from [4], we can compute the best (mixed) strategy for the defender in this game, which turns out to be Open image in new window . This strategy is part of an equilibrium and guarantees that, for any choice of the attacker, the posterior Bayes vulnerability is at most 0.6573 (so the multiplicative leakage is bounded by \(50\%\), an intermediate value between the minimum of about \(12\%\) and the maximum of about \(112\%\)). It is interesting to note that the expected running time, for any action of the attacker, is at most 2.3922 iterations (an increase of only \(87\%\) w.r.t. PWD123), which is below the 3 iterations always required by the constant-time password checker.
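The equilibrium computation itself is a standard zero-sum linear program. The sketch below continues the earlier ones (check, channel, posterior_bayes_vuln); scipy's linprog stands in for the techniques of [4], and with the uniform stand-in prior the resulting table and value will differ from Table 3's, which use the example's prior \(\hat{\pi }\).

from itertools import permutations, product
import numpy as np
from scipy.optimize import linprog

orders = list(permutations(range(3)))                        # 6 defender actions
low_inputs = ["".join(b) for b in product("01", repeat=3)]   # 8 attacker actions
pi_u = np.full(8, 1 / 8)
U = np.array([[posterior_bayes_vuln(pi_u, channel(a, d)) for a in low_inputs]
              for d in orders])                              # 6x8 payoff table

# Minimize v subject to: for every attacker action a, sum_d U[d,a]*delta_d <= v,
# with delta a distribution over the 6 orders. Variables: (delta_1..6, v).
n = len(orders)
res = linprog(c=[0.0] * n + [1.0],
              A_ub=np.hstack([U.T, -np.ones((len(low_inputs), 1))]),
              b_ub=np.zeros(len(low_inputs)),
              A_eq=[[1.0] * n + [0.0]], b_eq=[1.0],
              bounds=[(0.0, 1.0)] * n + [(None, None)])
print("defender strategy:", np.round(res.x[:n], 4), "value:", round(res.x[n], 4))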

7 Related Work

Many studies have applied game theory to analyses of security and privacy in networks [3, 7, 14], cryptography [15], anonymity [1], location privacy [13], and intrusion detection [30], to cite a few. See [20] for a survey.

In the context of quantitative information flow, most works consider only passive attackers. Boreale and Pampaloni [8] consider adaptive attackers, but not adaptive defenders, and show that in this case the adversary's optimal strategy can always be taken to be deterministic. Mardziel et al. [21] propose a model for both adaptive attackers and defenders, but in none of their extensive case studies does the attacker need a probabilistic strategy to maximize leakage. In this paper we characterize when randomization is necessary, for either attacker or defender, to achieve optimality in our general information leakage games.

Security games have been employed to model and analyze payoffs between interacting agents, especially between a defender and an attacker. Korzhyk et al. [19] theoretically analyze security games and study the relationships between Stackelberg and Nash equilibria under various forms of imperfect information. Khouzani and Malacaria [18] study leakage properties when perfect secrecy is not achievable due to constraints on the allowable size of the conflating sets, and provide universally optimal strategies for a wide class of entropy measures, and for g-entropies. These works, in contrast to ours, do not consider games with hidden choice, in which optimal strategies differ from those of traditional game theory.

Several security games have modeled leakage when the sensitive information is the defender's choices themselves, rather than a system's high input. For instance, Alon et al. [2] propose zero-sum games in which a defender chooses probabilities of secrets and an attacker chooses and learns some of the defender's secrets; they then show how leakage of the defender's secrets influences the defender's optimal strategy. More recently, Xu et al. [29] study zero-sum games in which the attacker obtains partial knowledge of the security resources that the defender protects, and provide the defender's optimal strategy under such knowledge.

Regarding channel operators, sequential and parallel composition of channels have been studied (e.g., [17]), but we are unaware of any explicit definition and investigation of the hidden and visible choice operators. Although Kawamoto et al. [16] implicitly use hidden choice to model a probabilistic system as a weighted sum of systems, they do not derive the set of algebraic properties we establish for this operator and for its interaction with the visible choice operator.

8 Conclusion and Future Work

In this paper we used protocol composition to model the defender's introduction of noise to prevent leakage of sensitive information. More precisely, we formalized visible and hidden probabilistic choices among different protocols. We then formalized the interplay between defender and adversary in a game-theoretic framework adapted to the specific issues of QIF, where the payoff is information leakage. We considered various kinds of leakage games, depending on whether the players act simultaneously or sequentially, and on whether the choices of the defender are visible to the adversary or not. We established a hierarchy of these games, and provided methods for finding the optimal strategies (at the points of equilibrium) in the various cases.

As future research, we would like to extend leakage games to the case of repeated observations, i.e., when the attacker can observe the outcomes of the system in successive runs, under the assumption that both attacker and defender may change the channel at each run. We would also like to extend our framework to non-zero-sum games, in which the costs of attack and defense are not equivalent, and to analyze differentially private mechanisms.

Footnotes

  1. Following the convention of security games, we set the first player to be the defender.

  2. The definition of the mixed strategies as Open image in new window means that the attacker draws a function Open image in new window before he knows the choice of the defender. In contrast, the so-called behavioral strategies are defined as functions Open image in new window , and formalize the idea that the draw is made after the attacker knows that choice. In our setting, these two definitions are equivalent, in the sense that they yield the same payoff.

  3. Conventionally in game theory the payoff u is set to be that of the first player, but we prefer to look at the payoff from the point of view of the attacker, to be in line with the definition of payoff as vulnerability.

  4. More precisely, if posterior vulnerability is defined as the expectation of the vulnerability of the posterior distributions, the measure respects the data-processing inequality and always yields non-negative leakage iff vulnerability is convex.

  5. We adopt the usual convention in QIF of referring to secret variables, inputs, and outputs in programs as high, and to their observable counterparts as low.

  6.

  7. However, one could argue that, since the defender has already committed, the attacker does not need to perform the action corresponding to the Nash equilibrium; any payoff-maximizing solution would be equally good for him.

Acknowledgments

The authors are grateful to the anonymous reviewers for their helpful comments. This work was supported by JSPS and Inria under the project LOGIS of the Japan-France AYAME Program, and by the project Epistemic Interactive Concurrency (EPIC) from the STIC AmSud Program. Mário S. Alvim was supported by CNPq, CAPES, and FAPEMIG. Yusuke Kawamoto was supported by JSPS KAKENHI Grant Number JP17K12667.

References

  1. Acquisti, A., Dingledine, R., Syverson, P.: On the economics of anonymity. In: Wright, R.N. (ed.) FC 2003. LNCS, vol. 2742, pp. 84–102. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45126-6_7
  2. Alon, N., Emek, Y., Feldman, M., Tennenholtz, M.: Adversarial leakage in games. SIAM J. Discret. Math. 27(1), 363–385 (2013)
  3. Alpcan, T., Buchegger, S.: Security games for vehicular networks. IEEE Trans. Mob. Comput. 10(2), 280–290 (2011)
  4. Alvim, M.S., Chatzikokolakis, K., Kawamoto, Y., Palamidessi, C.: Information leakage games. In: Rass, S., An, B., Kiekintveld, C., Fang, F., Schauer, S. (eds.) GameSec 2017. LNCS, vol. 10575, pp. 437–457. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68711-7_23
  5. Alvim, M.S., Chatzikokolakis, K., McIver, A., Morgan, C., Palamidessi, C., Smith, G.: Axioms for information leakage. In: Proceedings of CSF, pp. 77–92 (2016)
  6. Alvim, M.S., Chatzikokolakis, K., Palamidessi, C., Smith, G.: Measuring information leakage using generalized gain functions. In: Proceedings of CSF, pp. 265–279 (2012)
  7. Basar, T.: The Gaussian test channel with an intelligent jammer. IEEE Trans. Inf. Theory 29(1), 152–157 (1983)
  8. Boreale, M., Pampaloni, F.: Quantitative information flow under generic leakage functions and adaptive adversaries. Log. Methods Comput. Sci. 11(4–5), 1–31 (2015)
  9. Braun, C., Chatzikokolakis, K., Palamidessi, C.: Quantitative notions of leakage for one-try attacks. In: Proceedings of MFPS. ENTCS, vol. 249, pp. 75–91. Elsevier (2009)
  10. Chatzikokolakis, K., Palamidessi, C., Panangaden, P.: On the Bayes risk in information-hiding protocols. J. Comput. Secur. 16(5), 531–571 (2008)
  11. Chaum, D.: The dining cryptographers problem: unconditional sender and recipient untraceability. J. Cryptol. 1, 65–75 (1988)
  12. Dwork, C., McSherry, F., Nissim, K., Smith, A.: Calibrating noise to sensitivity in private data analysis. In: Halevi, S., Rabin, T. (eds.) TCC 2006. LNCS, vol. 3876, pp. 265–284. Springer, Heidelberg (2006). https://doi.org/10.1007/11681878_14
  13. Freudiger, J., Manshaei, M.H., Hubaux, J.-P., Parkes, D.C.: On non-cooperative location privacy: a game-theoretic analysis. In: Proceedings of CCS, pp. 324–337 (2009)
  14. Grossklags, J., Christin, N., Chuang, J.: Secure or insure? A game-theoretic analysis of information security games. In: Proceedings of WWW, pp. 209–218 (2008)
  15. Katz, J.: Bridging game theory and cryptography: recent results and future directions. In: Canetti, R. (ed.) TCC 2008. LNCS, vol. 4948, pp. 251–272. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78524-8_15
  16. Kawamoto, Y., Biondi, F., Legay, A.: Hybrid statistical estimation of mutual information for quantifying information flow. In: Fitzgerald, J., Heitmeyer, C., Gnesi, S., Philippou, A. (eds.) FM 2016. LNCS, vol. 9995, pp. 406–425. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-48989-6_25
  17. Kawamoto, Y., Chatzikokolakis, K., Palamidessi, C.: On the compositionality of quantitative information flow. Log. Methods Comput. Sci. 13(3–11), 1–31 (2017)
  18. Khouzani, M.H.R., Malacaria, P.: Relative perfect secrecy: universally optimal strategies and channel design. In: Proceedings of CSF, pp. 61–76 (2016)
  19. Korzhyk, D., Yin, Z., Kiekintveld, C., Conitzer, V., Tambe, M.: Stackelberg vs. Nash in security games: an extended investigation of interchangeability, equivalence, and uniqueness. J. Artif. Intell. Res. 41, 297–327 (2011)
  20. Manshaei, M.H., Zhu, Q., Alpcan, T., Bacşar, T., Hubaux, J.-P.: Game theory meets network security and privacy. ACM Comput. Surv. 45(3), 25:1–25:39 (2013)
  21. Mardziel, P., Alvim, M.S., Hicks, M.W., Clarkson, M.R.: Quantifying information flow for dynamic secrets. In: Proceedings of S&P, pp. 540–555 (2014)
  22. Massey, J.L.: Guessing and entropy. In: Proceedings of the IEEE International Symposium on Information Theory, p. 204. IEEE (1994)
  23. McIver, A., Morgan, C., Smith, G., Espinoza, B., Meinicke, L.: Abstract channels and their robust information-leakage ordering. In: Abadi, M., Kremer, S. (eds.) POST 2014. LNCS, vol. 8414, pp. 83–102. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-54792-8_5
  24. Osborne, M.J., Rubinstein, A.: A Course in Game Theory. The MIT Press, Cambridge (1994)
  25. Rizzo, J., Duong, T.: The CRIME attack (2012)
  26. Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423, 625–656 (1948)
  27. Smith, G.: On the foundations of quantitative information flow. In: de Alfaro, L. (ed.) FOSSACS 2009. LNCS, vol. 5504, pp. 288–302. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00596-1_21
  28. Sun, Q., Simon, D.R., Wang, Y.-M., Russell, W., Padmanabhan, V.N., Qiu, L.: Statistical identification of encrypted web browsing traffic. In: Proceedings of S&P, pp. 19–30. IEEE (2002)
  29. Xu, H., Jiang, A.X., Sinha, A., Rabinovich, Z., Dughmi, S., Tambe, M.: Security games with information leakage: modeling and computation. In: Proceedings of IJCAI, pp. 674–680 (2015)
  30. Zhu, Q., Fung, C.J., Boutaba, R., Basar, T.: A game-theoretical approach to incentive design in collaborative intrusion detection networks. In: Proceedings of GAMENETS, pp. 384–392. IEEE (2009)

Copyright information

© The Author(s) 2018

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Authors and Affiliations

  1. Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
  2. CNRS and École Polytechnique, Palaiseau, France
  3. AIST, Tsukuba, Japan
  4. INRIA and École Polytechnique, Palaiseau, France
