
1 Introduction

Distance (upper) bounding (DUB) protocols have been widely studied in recent years: a verifier V interacts with a prover P to obtain assurance that the prover is at distance at most B from the verifier. A DUB protocol was first proposed in [3] to thwart relay attacks on authentication protocols by using location as an unforgeable attribute of the prover. DUB protocols have been widely used for proximity-based authentication (e.g., passive keyless entry and start systems in modern cars [8]), proximity-based control (e.g., implantable medical devices [15]), and Radio-Frequency Identification (RFID) authentication [1, 18].

Secure DUB protocols estimate distance by measuring the round-trip time between a challenge and its response, which are transmitted as light-speed electromagnetic (EM) signals. We refer to protocols that use this method of distance estimation as class \(\mathcal{EF}\) (EM fast-exchange). In this paper, we consider the dual problem of distance lower bounding (DLB), where a prover P wants to prove that its distance from a verifier V is larger than a bound B. The DLB problem naturally arises in application scenarios where privileges are granted based on the distance of the requester to a verifier. For example, a company may offer employees unrestricted Internet access to games and entertainment software when they are outside the main office area of the company campus (e.g., Google campus), and restricted access when they are within it. Here the requirement is for employees to prove that they are outside the main working area. A second scenario is a parking lot divided into zones, where the parking charge depends on the distance of the car to the main point of interest (e.g., a discounted rate is given to users who park farther from the shopping mall entrance). In both cases, once the privilege is granted based on distance, one needs monitoring mechanisms such as continuous authentication to ensure that the user stays within the claimed area. Embedding such authentication in streaming services such as games or music is straightforward. For the latter scenario, one can use random scanning of the area to verify the claim. Determined users may be able to bypass the authentication, but they will be inconvenienced (e.g., have to move the car frequently) and must accept the risk of detection and penalty.

Despite the relation between DUB and DLB, a DUB protocol cannot solve the DLB problem: although a successful DUB protocol run proves an upper bound on the prover’s distance, its failure says nothing about the prover’s distance. No DUB protocol protects against the distance enlargement attack [6], where a malicious prover enlarges the distance by delaying the response. Other applications of DUB protocols, such as using DUB with multiple verifiers for secure positioning [6], are also vulnerable to distance enlargement. A second approach would be to use the Global Positioning System (GPS) [11] to determine the user’s location. However, one then needs to trust the GPS measurements, which are known to be vulnerable to attacks such as GPS spoofing [21], where fake satellite signals are used to modify the GPS location data. This solution also results in privacy loss, so one needs to consider privacy-enhancing GPS solutions, which require extra infrastructure.

Attacks on DLB protocols depend on the application scenario. In Sect. 2.1 we formalize attacks that are applicable in the above application scenarios, and show that they parallel attacks on DUB protocols. DUB protocols have been analyzed against three broad classes of attacks [20]: distance fraud (DF), where the prover is malicious and wants to shorten its distance to the verifier; collusion fraud (CF), where the malicious prover has a helper that assists it in shortening its distance to the verifier; and man-in-the-middle (MiM) attacks, where the prover is honest and is the victim of an external attacker who aims to shorten the distance between the honest prover and the verifier. These classes include attacks such as impersonation, Mafia fraud and Terrorist fraud, which are traditionally considered for DUB. We show that all the above attacks are directly applicable to our DLB scenario and capture important DLB attacks.

The solution to the DLB problem depends on the trust assumptions. The DLB problem in a setting where both the prover and the verifier are trusted has been considered in [19]. In this paper, we consider a setting where the prover is untrusted and the verifier is trusted.

Here we arrive at the main difference between DUB and DLB protocols: DUB protocols have been primarily designed for the setting in which an untrusted prover interacts with a trusted verifier. In Sect. 2.2, however, we prove that it is impossible to have a secure DLB protocol if provers are fully untrusted (i.e., have full control of the device hardware and software), which allows them to deviate arbitrarily from the protocol. One can, however, have secure protocols by making assumptions about the malicious prover’s access to the device and/or the communication channel. Table 1 summarizes trust assumptions in the two problems.

Table 1. Impossibility result of DB protocols with different trust assumptions

Our Contribution. First, we initiate the study of the distance lower bounding (DLB) problem in a setting where the prover is untrusted, using motivating application scenarios. Second, we construct a security model for the DLB problem and define three broad classes of attacks: distance fraud (DF), man-in-the-middle (MiM) attacks and collusion fraud (CF). Third, we prove that security against any of these attacks without making physical assumptions (Footnote 1) is impossible. In particular, a fully malicious prover can always succeed in the DF attack, and an external attacker (without the cryptographic credentials) can always jam-and-delay the signal between the verifier and the prover, and so succeed in the MiM attack. This also implies that a malicious prover with a helper (CF) will always succeed. Fourth, we construct a secure DLB protocol under reasonable assumptions, and prove its security against DF, MiM and CF attacks. Finally, we estimate the time, memory and energy requirements of our protocol and conclude with open questions and directions for future research.

Related Work. There is a large body of research on secure positioning and distance estimation, including distance bounding protocols [2, 3, 10], positioning techniques [6, 11] and secure ranging protocols [19]. As we argued earlier, these approaches are not directly applicable to the DLB problem in the setting where the prover is not trusted. GPS systems use a set of satellite signals to determine location and are designed for a non-adversarial setting, and so are vulnerable to signal spoofing attacks [21]; DUB protocols protect against malicious provers trying to shorten the distance, but are in general vulnerable to the distance enlargement attack [6]; secure positioning systems use a DUB protocol with multiple verifiers to triangulate the prover’s location, but are also vulnerable to distance enlargement, making positioning an insecure approach for DLB; and secure ranging systems likewise consider only a non-adversarial setting.

To our knowledge, this is the first paper to study DLB in a setting where the prover is not trusted. Our approach to defining attacks, distance estimation, and the design of the protocol is inspired by the large body of literature on DUB, in particular [20] for the formalization of attacks, and [2, 10] for the design of the protocol. The use of the bounded-memory assumption for the prover’s device in the context of secure code update has been considered in [13]. We refer to [22] for a complete review of related work.

2 DLB - Model and Impossibilities

We consider a multi-party system where a party U is modeled by a polynomially bounded interactive Turing machine (ITM) with a certain location \(loc_U\) and some pre-shared key. A party can be a prover or a verifier. A prover P engages in a two-party protocol with a verifier V to prove the claim that its distance to the verifier satisfies a certain bound. Honest parties run predefined algorithms for their side of the interaction. The verifier V is always honest. The prover, however, may be malicious, in which case it is denoted by \(P^*\). A protocol instance defines an experiment denoted by \(exp = (P(x; r_P) \leftrightarrow \mathcal{A}(r_A) \leftrightarrow V(y; r_V ))\). At the end of a protocol instance, V has an output \(Out_V\), which is 1 or 0, showing acceptance or rejection of the DLB claim, respectively. The prover does not have an output. A participant in an experiment has a view consisting of all its inputs, coins, and messages that it can see. The external attacker \(\mathcal{A}\) may interact with multiple Ps and Vs, and its view includes all these interactions.

Definition 1

(DLB Protocol). A Distance Lower Bounding (DLB) protocol is a tuple (Gen, P, V, B), where \((x,y) \leftarrow Gen(1^s, r_k)\) is a randomized key-generation algorithm that takes a security parameter s and randomness \(r_k\), and outputs keys x and y; \(P(x; r_P)\) is the prover’s ppt ITM, which takes the secret key x and randomness \(r_P\); \(V(y; r_V)\) is the verifier’s ppt ITM, which takes the secret key y and randomness \(r_V\); and B is a distance bound. It satisfies two properties:

  • Termination: \( (\forall s)(\forall R)(\forall r_k; r_V )(\forall loc_V )\) if \( (.;y) \leftarrow Gen(1^s; r_k)\) and \((R \leftrightarrow V(y; r_V ))\) is an execution of the protocol between the verifier and any (unbounded) prover algorithm, V halts in Poly(s) computational steps;

  • p-Completeness: \((\forall s)( \forall loc_V ; loc_P\) such that \( d(loc_V; loc_P)\ge B)\) we have

    $$Pr_{r_k;r_P;r_V} \left[ Out_{v}=1: \begin{array}{l l} (x,y)\leftarrow Gen(1^s; r_k)\\ P(x; r_P) \leftrightarrow V(y; r_V) \end{array} \right] \ge p$$

2.1 Attacks on DLB Protocols

We consider three classes of attacks: distance fraud (DF), man-in-the-middle (MiM) attack, and collusion fraud (CF). In DF, \(P^*\), with \(d(P^*,V)<B\), wants to convince V that its distance is at least B. In a MiM attack, an external attacker, who does not have the secret key, interacts with multiple Ps and Vs, and finally succeeds in taking the role of a prover in a protocol instance (see Fig. 1a). In CF, \(P^*\) colludes with a helper to claim a longer distance to V (see Fig. 1b). The collusion should not leak the prover’s secret key to the helper. The formal definitions of the attacks follow.

Fig. 1. MiM and CF attack in DLB

Definition 2

(DF-resistance). A DLB protocol \(\Pi \) is \(\alpha \)-resistant to distance fraud if \((\forall s) (\forall P^{*}) (\forall loc_{v}\) such that \(d(loc_{v}, loc_{P^*})< B) (\forall r_k)\), we have

$$Pr_{r_{v}} \left[ Out_{v}=1: \begin{array}{l l} (x,y)\leftarrow Gen(1^{s};r_{k})\\ P^{*}(x)\leftrightarrow V(y;r_{v}) \end{array} \right] \le \alpha $$

where \(P^{*}\) is any dishonest prover. Because of the concurrent setting, we effectively allow a polynomially bounded number of \(P(x')\) and \(V(y')\) close to V(y) with independent \((x',y')\).

Distance hijacking. Definition 2 captures the distance hijacking attack [7] against DLB protocols. In this attack, \(P^*\), who is at distance \( < B\), uses the DLB communications of unaware honest provers at distance \(\ge B\) to claim a distance \(\ge B\).

Definition 3

(MiM-resistance). A DLB protocol \(\Pi \) is \(\beta \)-resistant to MiM attack if \((\forall s)\), for all polynomially bounded \(m, \ell, z\), for all polynomially bounded \(A_1, A_2\), and for all locations such that \(d(loc_{P_j}, loc_V )<B\), where \( j \in \{ m+1, \cdots , \ell \}\), we have

$$Pr_{r_{v}}\left[ Out_{v}=1: \begin{array}{l l} (x,y)\leftarrow Gen(1^{s})\\ P_{1}(x)...P_{m}(x)\leftrightarrow A_{1} \leftrightarrow V_{1}(y)...V_{z}(y)\\ P_{m+1}(x)...P_{\ell }(x)\leftrightarrow A_{2}(View_{A_1}) \leftrightarrow V(y) \end{array} \right] \le \beta $$

Here the probability is over all random coins of the protocol, and \(View_{A_1}\) is the final view of \(A_1\). The definition effectively allows a polynomially bounded number of \( P(x')\), \(P^*(x')\), and \(V(y')\) with independent \((x',y')\), anywhere.

Mafia fraud and impersonation attack. Definition 3 covers Mafia fraud and the impersonation attack as special cases. In Mafia fraud, there is no learning phase: the attacker interacts with an honest prover and makes the verifier output accept; that is, \(m=z=0\) and \(\ell =1\) in the attack phase. In the impersonation attack, the attacker uses multiple, possibly concurrent, interactions with the verifier to make the verifier output 1. This attack is captured by letting \(\ell =m\).

Definition 4

(CF-resistance). A DLB protocol \(\Pi \) is \((\gamma , \eta )\)-resistant to collusion fraud if \((\forall s)(\forall P^{*}) (\forall loc_{v_0}\) such that \(d(loc_{v_0}, loc_{P^*})<B)\), \((\forall A^{CF}\) ppt):

$$Pr_{r_{v}}\left[ Out_{v_0}=1: \begin{array}{l l} (x,y)\leftarrow Gen(1^{s})\\ P^{*}(x)\leftrightarrow A^{CF} \leftrightarrow V_0(y) \end{array} \right] > \gamma $$

implies the existence of an extended MiM attack (Footnote 2) with \(m, \ell , z, A_1, A_2, P_i, P_j, V_i\) that uses interaction with both P and \(P^*\), and with V in the learning phase, and satisfies

$$Pr\left[ Out_{v}=1: \begin{array}{l l} (x,y)\leftarrow Gen(1^{s})\\ P_{1}^{(*)}(x)...P_{m}^{(*)}(x)\leftrightarrow A_{1} \leftrightarrow V_{1}(y)...V_{z}(y)\\ P_{m+1}(x)...P_{\ell }(x)\leftrightarrow A_{2}(View_{A1}) \leftrightarrow V(y) \end{array} \right] > \eta $$

Here, prover \(P^{(*)}\) is either P or \(P^*\) and we have \(d(loc_{P_j} , loc_V ) < B\), for \( j\in \{m+1\cdots \ell \}\). We implicitly allow a polynomial number of \( P(x')\), \(P^*(x')\), and \(V(y')\) with independent \((x',y')\) anywhere but honest participants are close to \(V_0\).

Terrorist fraud. In Terrorist fraud, \(P^*\), with \(d(P^*, V)<B\), gets aid from a helper who does not have the secret key. Definition 4 captures Terrorist fraud as a special case by letting \(m = z =\ell = 1\), simply allowing \(A_{1}\) to run \(A^{CF}\), succeed in impersonation, and make V accept.

2.2 Impossibility Results

We consider protocols in \(\mathcal{EF}\). Let C denote the speed of light, and let \(t_c\) and \(t_r\) denote the verifier’s clock readings when the challenge is sent and when the response is received, respectively. If the received response is correct, the verifier calculates \(T_{\Delta }=t_r-t_c\) to estimate the distance of the prover. Let \(T_{proc}\) denote the processing time of the prover. The verifier estimates the prover’s distance D as

$$\begin{aligned} D=\frac{(T_\Delta - T_{proc}) C}{2}. \end{aligned}$$
(1)
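The estimate in Eq. 1 can be sketched numerically as follows (a minimal illustration; the function name and the example timings are ours, not the paper’s):

```python
# Sketch of the verifier's distance estimate (Eq. 1).
C = 3e8  # speed of light, m/s

def estimate_distance(t_c, t_r, t_proc):
    """Estimate the prover's distance from the challenge send time t_c,
    the response receive time t_r, and the prover's processing time
    t_proc (all in seconds)."""
    t_delta = t_r - t_c  # round-trip time T_Delta
    return (t_delta - t_proc) * C / 2

# A 2 microsecond round trip with 1 microsecond of processing leaves
# 0.5 microseconds of one-way flight time, i.e. about 150 m.
print(estimate_distance(0.0, 2e-6, 1e-6))
```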

Theorem 1

  1. Any DLB protocol in \(\mathcal{EF}\) is vulnerable to DF if \(P^*\) has full (hardware and software) control over the prover’s device.

  2. No DLB protocol in \(\mathcal{EF}\) can provide \(\beta \)-resistance with \(\beta <1\) to a MiM attack by an external attacker who can jam and delay messages to/from the prover.

  3. For any DLB protocol in \(\mathcal{EF}\), \(P^*\) can succeed in CF with probability 1 and negligible key leakage to the helper, if the helper has full access to the communication channels with \(P^*\), and \(P^*\) has full control over the prover’s device. The result holds even if communication is allowed in only one direction between the prover and the helper.

The proof sketch of Theorem 1 can be found in Appendix A.

2.3 Restricted DF, MiM, and CF

To remove the above impossibility results, we must use reasonable assumptions (restrictions) on the adversary’s control of the device and/or the communication channel. We refer to attacks under these conditions as restricted DF, MiM and CF (rDF, rMiM and rCF), to emphasize that extra assumptions are needed.

Table 2. DLB security against the three attacks in different settings.

Notations. We use \(\mathsf {PD}\) to denote the prover’s device, and rX\(^{[Y]}\) to denote restricted version of attack X, where \(X\in \{DF, MiM, CF\}\) and restrictions are stated in Y. For example, rDF\(^{[BM]}\) refers to the restricted DF attack, under the restriction that \(\mathsf {PD}\) has bounded memory.

Table 2 summarizes our impossibility results and shows the assumptions used in our construction in Sect. 3. The assumptions that we use for security against rDF are (i) \(P^*\) cannot access (read or write) the \(\mathsf {PD}\) ’s read-only memory (ROM), and (ii) \(\mathsf {PD}\) has bounded memory (BM). Note that the first assumption still allows \(P^*\) to inject malicious code into the device’s writable memory (RAM) and modify the correct execution of the protocol. The bounded-memory assumption is a well-established model in cryptography [4], and has been used in the design of security systems [13]. To achieve security against rMiM and rCF, in addition to the above assumptions, we require the helper to have no On-line Communication (OC) with the prover during the fast-exchange phase. In Sect. 3, we present a DLB protocol that provides security against rDF, rMiM and rCF under the above assumptions. Note that one may achieve rDF, rMiM and rCF resistance using other assumptions that restrict the prover and the helper. For example, instead of assuming a root of trust on the \(\mathsf {PD}\), one may establish a dynamic root of trust using software attestation. We give a software-attestation-based DLB protocol in the full version [22]. That protocol also requires the no-online-communication assumption for security against all attacks. We also provide an overview of the security analysis and implementation challenges of the protocol.

3 DLB Protocol Constructions

Assumptions and Attack Model. We assume the \(\mathsf {PD}\) is a bounded-memory device with a protected memory (ROM) and a writable memory (RAM) of (fixed) size L bits. We view the RAM as an array indexed from 1 to L. The DLB protocol code is stored partly in ROM, denoted by \(DLB_{ROM}\), and partly in RAM, denoted by \(DLB_{RAM}\). We assume V shares a key with the \(\mathsf {PD}\) and holds the same DLB code. The secret key of the \(\mathsf {PD}\) is stored in ROM and is accessible only to the code in ROM. We assume the communication channel is noise free, although our results extend to noisy communication by applying methods similar to [17]. The adversary may store and run arbitrary malicious code in the RAM of the \(\mathsf {PD}\), but is not able to tamper with the hardware of the device.

Approach. By Eq. 1, \(P^*\) at distance D can always delay the response by \(2D'/C\) seconds to claim a longer distance \(D+D'\). Let \(T_{max}\) denote the maximum expected response generation (processing) time, as estimated by the verifier. (This can be estimated, for example, by measuring the processing times of a set of functional devices and choosing \(T_{max}\) larger than all the measured times.) Knowing that \(0 \le T_{proc} \le T_{max}\), the verifier uses the round-trip time \(T_\Delta \) to obtain the following distance bound.

$$\begin{aligned} D\ge D_{lower}=\frac{(T_\Delta -T_{max})C}{2} \end{aligned}$$
(2)
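The bound in Eq. 2 can be checked numerically (an illustrative sketch; the function name and the example timings are assumptions):

```python
# Sketch of the verifier's distance lower bound (Eq. 2).
C = 3e8  # speed of light, m/s

def distance_lower_bound(t_delta, t_max):
    """A malicious prover can only add delay, never reduce the
    round-trip time, so the verifier can still assert
    D >= (T_Delta - T_max) * C / 2."""
    return (t_delta - t_max) * C / 2

# A 3 microsecond round trip with at most 1 microsecond of processing
# places the prover at least (approximately) 300 m away.
print(distance_lower_bound(3e-6, 1e-6))
```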

We propose a protocol that assumes bounded memory for the \(\mathsf {PD}\) and enables V to force an upper bound on the delay introduced by \(P^*\).

3.1 The Protocol \(\Pi _{DLB-BM}\)

The secret key consists of two binary strings x and \(\hat{x}\), each in \( \{0,1\}^{\ell }\). When clear from context, we refer to each string simply as a key. The protocol uses a secure Pseudo Random Function (PRF) \(f_x:\{0,1\}^{2k} \rightarrow \{0,1\}^{2n}\), \(x \in \{0,1\}^{\ell }\), and a secure keyed hash function \((H_{\hat{x}})_{\hat{x} \in \{0,1\}^{\ell }}: \{0,1\}^* \rightarrow \{0,1\}^b\). Figure 2 shows the messages communicated in the three phases of the protocol.

Phase 1: Initialization Phase. The prover generates a k-bit nonce \(N_p\) and sends it to the verifier. The verifier selects a k-bit nonce \(N_v\) and a 2n-bit random string A, calculates \(M=A \oplus f_x(N_p,N_v)\), and sends \((M,N_v)\) to the prover. With this information, the prover recovers \(A=M \oplus f_x(N_p,N_v)\) and stores it in memory. A is the response table that the prover will use to respond to challenges in Phase 2. Viewing \(A=(a_{(1,j)}, a_{(2,j)})\), \(j=1\cdots n\), as a sequence of n bit pairs, we define a third string by \(a_{(3,j)}=a_{(1,j)} \oplus a_{(2,j)} \oplus x_j\), where \(x_j\) is the j-th bit of x. \(a_{(3,j)}\) is computed at run time from the response table and so is not stored in memory.
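The response-table bookkeeping of Phase 1 can be sketched as follows. This is an illustrative reading, not the paper’s code: we assume A is laid out as n interleaved bit pairs, and, since each table entry is a single bit, we take the XOR with x in the third row to mean the XOR with the j-th bit of x.

```python
def response_rows(A_bits, x_bits):
    """A_bits: the 2n-bit table A as a list of 0/1 values, read as n
    pairs (a1[j], a2[j]) (interleaved layout is an assumption);
    x_bits: the key bits of x. The third row a3 is derived on the fly
    and never stored in memory."""
    n = len(A_bits) // 2
    a1 = [A_bits[2 * j] for j in range(n)]
    a2 = [A_bits[2 * j + 1] for j in range(n)]
    # a3[j] = a1[j] xor a2[j] xor x_j, computed at run time
    a3 = [a1[j] ^ a2[j] ^ x_bits[j] for j in range(n)]
    return a1, a2, a3

a1, a2, a3 = response_rows([0, 1, 1, 1, 0, 0], [1, 0, 1])
print(a3)  # → [0, 0, 1]
```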

Phase 2: Fast-exchange Phase. This phase proceeds in n consecutive challenge-response rounds. In each round \(1\le i \le n\), the verifier chooses a random challenge \(c_i \in \{1,2,3\}\) and sends it to the prover, immediately followed by a random erasing sequence \(RS_i\) of length \(z_i\). (Section 3.2 discusses how \(z_i\) is determined.) The role of \(RS_i\) is to prevent P from delaying the response to extend its distance. On receiving the challenge \(c_i\), the prover retrieves the response \(r_i = a_{(c_i,i)}\). Once \(z_i-1\) bits of \(RS_i\) have been received, the prover must send \(r_i\) to avoid its being overwritten by the final bit of \(RS_i\). The prover must also send the response to the erasing sequence (also referred to as the proof of erasure, \(h_i\)). By correctly designing the computation of the hash, a correct proof of erasure “proves" that the prover has received and stored the full \(RS_i\) and has kept the code \(DLB_{RAM}\) intact (see Sect. 3.2 for details). In addition, the verifier records the time difference \(T_{\Delta ,i}\) between sending \(c_i\) and receiving \(r_i\).

Fig. 2. Distance lower bounding protocol \(\Pi _{DLB-BM}\)

Phase 3: Verification Phase. The verifier checks the correctness of the response \(r_i\) and the proof of erasure \(h_i\) for all rounds \(i=1\cdots n\). It also verifies that all response times are above a threshold \(\theta \), determined as follows. Let B denote the distance bound, and let \(T(z_i-1)\) denote the time the prover needs to receive \(z_i-1\) bits. The acceptable round-trip time in round i must satisfy \(T_{\Delta ,i} \ge \theta = \frac{2B}{C}+T(z_i-1)\), where \(C=3\times 10^8\) m/s is the speed of light (see Eq. 2). The verifier outputs \(Out_v=1\) if and only if all verifications and time checks succeed. For simplicity, we have assumed the communication channel is noiseless; thus a successful protocol run requires all challenges to be answered correctly.
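The timing check of Phase 3 can be sketched as follows (illustrative: the function name, the per-bit receive time, and the example figures are ours; 4 μs/bit corresponds to the 250 Kbps data rate used in Sect. 5):

```python
C = 3e8  # speed of light, m/s

def accept_round(t_delta_i, z_i, B, bit_time):
    """Phase 3 timing check for round i: the round-trip time must be at
    least theta = 2B/C + T(z_i - 1), where we model T(k) = k * bit_time
    as the time to receive k bits (bit_time is an assumption)."""
    theta = 2 * B / C + (z_i - 1) * bit_time
    return t_delta_i >= theta

# At 4 microseconds per bit, a bound of B = 300 m and a 100-bit erasing
# sequence give theta = 2e-6 + 99 * 4e-6 = 398 microseconds.
print(accept_round(400e-6, 100, 300, 4e-6))  # → True
```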

3.2 The Design of the Erasing Sequence and Its Response

In fast-exchange round i, an erasing sequence \(RS_i\) is sent to P, and a correct response is required. \(RS_i\) guarantees that all of the device memory is erased, except the DLB code and the part of the memory required for future responses. The length of the erasing sequence \(RS_i\) must be chosen as follows.

Sequence Length. Let L and \(\lambda \) denote the sizes of the RAM and of \(DLB_{RAM}\), respectively. After the initialization phase, the 2n-bit random table A is stored in the prover’s device memory. In round 1, the erasing sequence \(RS_1\) must erase the \(L-\lambda -2n\) bits of unused memory, together with \((a_{(1,1)},a_{(2,1)})\), the two response bits associated with round 1. In each subsequent challenge-response round i, two additional bits \((a_{(1,i)},a_{(2,i)})\) of A will have been used, and so the length of the erasing sequence must grow by two bits. By induction, the erasing sequence \(RS_i\) in round \(1\le i\le n\) must have length \(z_i = L-\lambda -2(n-i)\). Figure 3 shows the state of the prover’s memory during protocol execution.
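The length derivation above can be written directly (a sketch; the small example parameters are ours):

```python
def erasing_length(L, lam, n, i):
    """Length z_i of RS_i in round i (1 <= i <= n) for RAM size L bits
    and DLB_RAM size lam bits: everything except the DLB code and the
    2(n - i) still-unused response bits must be erased."""
    return L - lam - 2 * (n - i)

# Toy parameters: each round the sequence grows by two bits as two more
# bits of the response table A are consumed.
lengths = [erasing_length(64, 10, 4, i) for i in range(1, 5)]
print(lengths)  # → [48, 50, 52, 54]
```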

Fig. 3. Prover’s memory during protocol execution

Response to the Erasing Sequence. The response in round i, denoted by \(h_i\), must guarantee that the \(\mathsf {PD}\)’s memory contains the sequence \(RS_i\) and the code \(DLB_{RAM}\) in full, and prove that the rest of the memory has been erased. We refer to this response as a proof of erasure, as it is inspired by [13]. An efficient approach is to send a cryptographic hash of \(RS_i\). To prevent the prover from calculating the hash value in real time without storing the whole of \(RS_i\), we require the hash function to be applied to the received sequence in the reverse order of arrival. That is, writing \(RS_i=(u_1\cdots u_{z_i})\), the hash is applied to \(\bar{RS_i}= (u_{z_i}\cdots u_1)\). This leaves the prover no choice but to wait for the last bit to arrive before starting the calculation. To prevent \(P^*\) from simply storing the required hash value of the code, we use \(h_i= H_{\hat{x}}( \bar{RS_i} \parallel DLB_{RAM})\). In this construction, \(\bar{RS_i}\) serves as a random nonce and rules out the possibility of \(P^*\) passing the verification without storing the full \(DLB_{RAM}\). The response calculation must also be such that it cannot be delegated to a helper; this requirement is needed for security against rCF (Sect. 4). We thus use a keyed-hash message authentication code, which requires the prover’s secret key. The keyed-hash message authentication code uses a suitable cryptographic hash function in a specific structure (e.g., HMAC) to construct a secure MAC, which ensures that the keyed-hash value cannot be forged.
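A proof-of-erasure computation along these lines can be sketched with HMAC-SHA1, the instantiation suggested in Sect. 5 (a sketch only: we reverse \(RS_i\) at byte granularity for readability, whereas the text reverses the bit order, and the function name is ours):

```python
import hashlib
import hmac

def proof_of_erasure(x_hat, rs_i, dlb_ram):
    """Sketch of h_i = H_xhat(reverse(RS_i) || DLB_RAM): reversing RS_i
    forces the prover to hold the whole sequence before hashing can
    start, and keying with x_hat prevents delegating the computation
    to a helper."""
    rs_reversed = bytes(reversed(rs_i))  # byte-level reversal (illustrative)
    return hmac.new(x_hat, rs_reversed + dlb_ram, hashlib.sha1).digest()

h = proof_of_erasure(b"secret-key", b"random-erasing-sequence", b"dlb-code")
print(h.hex())  # a b = 160-bit keyed hash
```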

4 Security Analysis of \(\Pi _{DLB-BM}\)

The \(\Pi _{DLB-BM}\) protocol uses a PRF and an HMAC, and we analyse its security against a computationally bounded adversary. We note that it is possible to construct an information-theoretically secure version of this protocol by replacing the PRF and HMAC with appropriate primitives.

rDF \(^{[BM]}\) resistance. In DF, a malicious prover \(P^*\) with \(d(loc_v,loc_{P^*})\le B\) wants to prove that its distance is higher than the bound. To achieve this goal, \(P^*\) must send the correct response bit \(r_i\) and the correct proof of erasure \(h_i\), both with sufficient delay, in every round \(1 \le i \le n\) of the fast-exchange phase. Theorem 2 establishes the DF resistance of \(\Pi _{DLB-BM}\), assuming that a malicious code of length at least g bits is required.

Theorem 2

\(\Pi _{DLB-BM}\) is \(\epsilon \)-resistant to rDF\(^\mathtt{[BM]}\), with \( \epsilon = \max \left( 2^{-\frac{g}{2}(\frac{g}{2}+1)}, 2^{-n(n+1)} \right) \), against any rDF\(^\mathtt{[BM]}\) attack that requires at least a g-bit malicious code, assuming \(H_{\hat{x}}()\) is an HMAC with a suitable cryptographic hash function.

The proof is given in Appendix B. The theorem implies that longer attack codes directly reduce the success probability of \(P^*\), which substantially limits the design of malicious code. Note that even a 1-byte malicious code (\(g=8\)) leads to a success chance of \(\epsilon < 10^{-6}\) for protocols with at least 4 rounds (\(n\ge 4\)).

In rMiM\(^{[\overline{\mathtt{OC}}]}\), the adversary cannot send signals to, or receive signals from, the prover during the fast-exchange phase of the target instance. It has full communication power during all other phases. We do allow the adversary to jam communication between the verifier and provers in all phases of the protocol (including the fast-exchange phase). The proof outline of the following Theorem 3 is given in the full version [22].

Theorem 3

The DLB protocol \(\Pi _{DLB-BM}\) is \(\beta \)-resistant to the rMiM\(^{[\overline{\mathtt{OC}}]}\) attack with \(\beta = 2^{-l}\), by choosing \(b > \frac{l}{n}-1\).

Providing rCF security requires security against rDF and rMiM, and hence their associated assumptions. We consider rCF\(^{ [\mathtt{BM},\; \overline{\mathtt{OC}}]}\), and show that (i) it is a stronger attack than rDF\(^\mathtt{[BM]}\), and (ii) \(\Pi _{DLB-BM}\) is secure against it (see Theorem 4; the formal proof is given in [22]).

Theorem 4

The protocol \(\Pi _{DLB-BM}\) is \((\gamma ,\eta )\)-resistant to rCF\(^{ [\mathtt{BM},\; \overline{\mathtt{OC}}]}\), with \(\eta = 2^{-l}\) and \(\gamma \le \max \{2^{-\frac{g+0.5}{2} \cdot \frac{g+1}{2}}, 2^{-(n+0.25)(n+1)}\}\), assuming a malicious code length \(\ge g\).

5 Practical Consideration

The computation during the initialization and challenge-response phases is similar to that of DUB protocols, and so existing work [14] on the implementation of DUB protocols can be used for performance estimates. The erasing sequence, however, is unique to \(\Pi _{DLB-BM}\). We estimate the memory, time, and energy consumption of \(\Pi _{DLB-BM}\) on a MicaZ sensor [12] running TinyOS. We assume the following parameters: \(n=10\) rounds and \(k=192\)-bit nonces. We use HMAC-SHA1, denoted by HMAC(.; .), for the PRF \(f_x\) and for the generation of the response to \(RS_i\), \(H_{\hat{x}}\). That is, \(f_x(N_p,N_v)\) equals the first \(2n=20\) bits of \(HMAC(x;N_p||N_v)\), and \(H_{\hat{x}}(\bar{RS_i}||DLB_{RAM})\) equals \(HMAC(\hat{x};\bar{RS_i}||DLB_{RAM})\), of length \(b=160\) bits.

MicaZ Sensor Specifications. The device is powered by two AA batteries and includes an ATMEGA128 microcontroller and a TI-CC2420 radio transceiver. The microcontroller provides 4 KB of writable memory (SRAM), along with 4 KB of EEPROM and 640 KB of write-protected flash memory. The radio transceiver operates in the 2.4–2.48 GHz RF band and has a 250 Kbps data rate.

Memory Consumption. HMAC-SHA1 takes a code size of 4650 bytes [13] when implemented in ROM, and around 124 bytes of RAM to load data structures and the stack. Considering a secret key x of \(l=128\) bits, a reasonable estimate of the total code size is 10 KB. Although the EEPROM is only 4 KB, the ATMEGA128 architecture allows ROM extension via mask ROM, locked flash memory, and fuse bits. Using these extension methods, one can build a read-only memory of size 10 KB or more. Note that in order to obtain a worst-case energy consumption estimate, we take the size of \(DLB_{RAM}\) to be 0.

Energy and Time Consumption. The writable memory of the ATMEGA128 (when the flash memory is write-protected) is the 4 KB SRAM. For an \(n=10\)-round DLB protocol, the erasing sequence has an average length of 32758 bits.

Communication Costs. The protocol requires the prover to receive \(N_v\) and M in the initialization phase, and \(c_i\) and \(RS_i\) in each fast-exchange round, for a total of \(l_{rx}= len(N_v) + len (M) + 2n + n \times len(RS_i) = 326912\) bits received. The prover must also send \(N_p\), the \(r_i\)’s, and the \(h_i\)’s, which sum to \(l_{tx} = len(N_p) + n + n \times len(h_i) = 1802\) bits transmitted. Receiving (resp. sending) a single bit requires \(E_{rx}=2.34 \mu J/b\) (resp. \(E_{tx} = 4.6 \mu J/b\)) by the radio transceiver with typical power adjustments [5]. The total communication energy is thus \(E_{comm} = l_{tx} E_{tx} + l_{rx} E_{rx} = 773 mJ\).
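The communication-energy arithmetic can be reproduced directly from the figures in the text (variable names are ours):

```python
# Communication energy estimate; all figures are taken from the text.
l_rx = 326912    # bits received by the prover (N_v, M, challenges, erasing sequences)
l_tx = 1802      # bits sent by the prover (N_p, responses, proofs of erasure)
E_rx = 2.34e-6   # J per received bit
E_tx = 4.6e-6    # J per transmitted bit

E_comm = l_tx * E_tx + l_rx * E_rx
print(round(E_comm * 1000))  # total communication energy in mJ → 773
```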

Computation Costs. The cost is dominated by computing the \(h_i\)’s; the rest is negligible. Extrapolating the memory-erasure figures in [13] to our 4 KB-memory device, computing a proof of erasure requires less than 600 milliseconds, which is quite practical. As for energy consumption, each HMAC-SHA1 computation uses \(3.5 \mu J\) per memory byte. Considering the memory size and the number of rounds, the required computation energy is \(E_{comp} = 3.5 \times 10^{-6} \times 4 \times 2^{10} \times 10 = 140 mJ\). Each AA battery can deliver 1.2 A at an average voltage of 1.2 V for one hour [5], implying a power supply of 10,368 J from the two batteries. This means the proving device can run the DLB protocol approximately \(\frac{10{,}368}{(773+140)\times 10^{-3}} > 11{,}000\) times before the batteries die, which is a reasonable level of power consumption. A more accurate analysis would also account for idle/sleep-mode energy consumption, but this would not drastically change the above result.
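The battery-lifetime estimate is a one-line calculation from the same figures (variable names are ours):

```python
# Battery lifetime estimate from the text's figures.
E_comm = 0.773                      # J per protocol run, communication
E_comp = 0.140                      # J per protocol run, computation
E_battery = 2 * 1.2 * 1.2 * 3600    # two AA batteries: 1.2 A at 1.2 V for one hour each, ≈ 10368 J

runs = E_battery / (E_comm + E_comp)
print(runs > 11000)  # → True: over 11,000 protocol runs per battery pair
```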

6 Concluding Remarks

We motivated the novel security problem of DLB in the setting where the prover is not trusted, using a number of application scenarios, and gave formal definitions of security against three general classes of attacks (DF, MiM and CF). We proved that it is impossible to provide security against any of these attacks without making physical assumptions. Our results show that an adversary, even a computationally bounded one, will always succeed in DF if it has unrestricted access to the prover’s device (a fully untrusted prover), and will always succeed in MiM attacks if it has unrestricted access to the communication channel; security against CF requires restrictions on both types of access. These results show a fundamental difference between the DLB and DUB problems. The only physical assumption in DUB protocols is that the speed of EM signals is constant. In DLB protocols, one must in addition assume restrictions on the physical access of the adversary.

Our protocol provides security against rDF\(^\mathtt{[BM]}\), rMiM\(^{[\overline{\mathtt{OC}}]}\), and rCF\(^{ [\mathtt{BM},\; \overline{\mathtt{OC}}]}\) using reasonable assumptions that have been used in theoretical cryptography as well as in practical security systems, including systems for secure code update [13]. Enforcing these assumptions in practice would need special technologies such as targeted jamming [9]. One can replace the above assumptions with other reasonable ones. For example, instead of assuming bounded memory, one can use a software-based externally verifiable code execution (EVCE) system such as Pioneer [16] to guarantee that the target executable code associated with the distance measurement is executed without modification by any malicious code that may reside on the device; combined with a trusted network to eliminate proxy attacks, this allows the construction to provide security against rMiM\(^{[\overline{\mathtt{OC}}]}\) and rCF\(^{ [\mathtt{SA},\; \overline{\mathtt{OC}}]}\). The important point is that one must restrict the adversary’s physical access to the environment to achieve any DLB security.

Our primary application scenarios for DLB in this paper were examples of proximity-based access control. Other DLB application scenarios may have different security requirements; examining these requirements will be an important step in modelling security and designing secure protocols. Another interesting question is how to efficiently incorporate DLB into a DUB protocol to provide security against distance enlargement.