Physical attacks on the implementation of cryptographic schemes are among the most serious threats a crypto designer faces. In theoretical cryptography, the algorithm under consideration is modeled as a black box with which an adversary can interact only via the input–output interface of the system. Such black-box security notions do not capture an adversary who can change a secret value into some related value through a tampering attack and then analyse the outcome. An adversary can mount tampering attacks by heating up the device, by fault injection (Sergei and Ross 2002), etc. In software, viruses or malware can also attack a storage device by corrupting some regions of memory. Boneh et al. (2001) show that a single bit flip of the signing key is enough to extract the secret key of an RSA signature scheme completely. This is one of the most devastating attacks: an adversary makes a minor modification to the cryptographic device, and the sensitive information can be recovered. A line of research has focused on how to secure cryptographic implementations against such tampering attacks (Bellare and Kohno 2003; Bellare et al. 2011; Kalai et al. 2011; Bellare et al. 2012; Damgård et al. 2013; Chen et al. 2019; Ghosal et al. 2022).

Non-malleable codes, introduced by Dziembowski et al. (2010, 2018), are one of the main tools of tamper-resilient cryptography. They are useful when correcting the message is not the main concern but privacy and integrity are important. The guarantee is that if an adversary tampers with a message encoded by a non-malleable code, the decoded output is either the original message or completely unrelated to it. Let k be the secret message (e.g., the key of a cryptographic algorithm) and f be the tampering function. The message k is encoded as Enc(k). The adversary applies the tampering function f to the encoded message and the result is decoded, i.e., Dec(f(Enc(k))). Non-malleability guarantees that Dec(f(Enc(k))) = k for every k with probability 1 when the tampering has no effect, or Dec(f(Enc(k))) = \(k^{'}\) otherwise, where k and \(k^{'}\) are computationally independent. In general, non-malleability cannot be achieved for arbitrary classes of tampering functions. For example, let \(f_{increment}\) be the tampering function defined by \(f_{increment}(Enc(k)) = Enc(k)+1\). If an adversary applies this function to the encoded message and decodes the result, i.e., \(Dec(f_{increment}(Enc(k)))\), it obtains \(k+1\), which is highly related to the original secret message k. Hence, non-malleable codes can only be constructed for restricted classes of tampering functions. In the literature, the most widely used model is the split-state model, where the codeword is divided into two parts \({M}_{0}\), \({M}_{1}\), stored in memories \({\textsf{M}}_{L}\), \({\textsf{M}}_{R}\) respectively (Liu and Lysyanskaya 2012; Dziembowski et al. 2013; Jafargholi and Wichs 2015; Aggarwal et al. 2015; Kiayias et al. 2016; Aggarwal et al. 2016; Fehr et al. 2018). Two tampering functions f = \((f_0({M}_{0}), f_1({M}_{1}))\) modify the two parts arbitrarily but independently. 
An important consequence of this model is that neither tampering function can run the decoding procedure, since both shares are needed to decode a codeword, whereas each of the functions \(f_{0}({M}_{0})\), \(f_{1}({M}_{1})\) can access only one share. The standard notion of non-malleability protects the message against a single tampering attack only; such a code is called a one-shot non-malleable code. It cannot handle the situation where an adversary tampers with the codeword more than once. A stronger version of non-malleability, called continuously non-malleable codes (CNMC), is proposed in Faust et al. (2014a), where the attack \(f^{i} = (f^{i}_{0}({M}_{0}), f^{i}_{1}({M}_{1}))\) is performed a polynomial number of times (\(i \in [q]\), \(q \in poly(n)\)) with each \(f^{i} \in {\mathcal {F}}\), and non-malleability is still preserved.

Continuous non-malleability comes in several flavours. Let m be the original message and \(m^{'}\) the decoded tampered message; let c denote the codeword and \(c^{'}\) the tampered codeword in a continuous tampering experiment. The standard (default) version of continuous non-malleability refers to the situation where the decoded tampered message \(m^{'}\) and the original message m are completely independent, but it is still possible for an attacker to create an encoding \(c^{'} \ne c\) that decodes to m, as discussed in Dziembowski et al. (2010). In strong continuous non-malleability, whenever \(c^{'}\) is not equal to c, it is guaranteed that \(m^{'}\) and m are completely independent. A still stronger flavour is super-strong continuous non-malleability, where \(c^{'} \ne c\) implies that \(c^{'}\) and c are independent (Faust et al. 2014a, b; Jafargholi and Wichs 2015). Our construction achieves the strong version of continuous non-malleability. Furthermore, depending on how the tampering functions are applied to the codeword, the continuous tampering experiment has two versions, as shown in Jafargholi and Wichs (2015). When the tampering functions are always applied to the initial encoding of the codeword, the tampering is called non-persistent. Here, an auxiliary memory is required beyond the n bits of active memory to store the codeword: the attacker copies the original codeword to the auxiliary memory, tampers with that copy, and places the tampered version in the active memory. In the persistent version, the tampering functions are applied to the previous tampered version of the codeword rather than to the initial encoding, so no extra memory is required. In either case, the adversary can keep tampering with the two parts of the memory until a decoding error is triggered. 
An additional feature of continuous non-malleability is the ability to handle leakage attacks carried out alongside the tampering attacks, through which the adversary gains partial information about the codeword. Earlier constructions of continuously non-malleable codes are built on top of leakage-resilient primitives that can handle a bounded amount of leakage (Faust et al. 2014a; Aggarwal et al. 2014, 2015) independently from the two parts of the memory. Continuously non-malleable code constructions are broadly categorized into two domains: information-theoretic (Aggarwal et al. 2019) and computational (Faust et al. 2014a; Faonio et al. 2018; Ostrovsky et al. 2018). In Faust et al. (2014a), it is shown that information-theoretic continuous non-malleability cannot be achieved in the split-state model, due to a generic attack. Later, Aggarwal et al. (2017) show that for persistent tampering in the split-state model, information-theoretic continuous non-malleability can be achieved. Further work gives a relaxed version of CNMC from computational assumptions in the plain model (i.e., without a common reference string based setup), but it provides a weaker security guarantee (Ostrovsky et al. 2018). In Dachman-Soled and Kulkarni (2019), the authors show that it is necessary to rely on setup assumptions, i.e., a common reference string (CRS), to achieve the stronger security notions. Hence, the proposed construction relies on a block cipher and robust non-interactive zero-knowledge (NIZK) proofs (De Santis et al. 2001) in a CRS-based trusted setup. In Table 1, we summarize the various constructions of continuously non-malleable codes in the split-state model available in the literature.

Table 1 Comparison of various continuously non-malleable codes in the split-state model

Limitations of the Existing Work and Our Motivation. Non-malleable codes are usually keyless encoding schemes. The first construction of a continuously non-malleable code is proposed in Faust et al. (2014a); it is based on a collision-resistant hash function together with a robust non-interactive zero-knowledge (NIZK) proof. Later, Fehr et al. (2018) show that one-shot non-malleable codes can be constructed from related-key secure block ciphers. That construction is not secure against continuous attacks: an attacker can create two valid codewords \((M_{0},M_{1})\) and \((M_{0},M_{1}^{'})\) with \(M_{1} \ne M_{1}^{'}\) such that neither decoding returns \(\bot\), i.e., \(\bot \ne Dec_{k}(\alpha ,(M_{0},M_{1})) \ne Dec_{k}(\alpha ,(M_{0},M_{1}^{'})) \ne \bot\), producing two valid messages m, \(m^{'}\). Moreover, if the tampering is non-persistent, an adversary can leak all the bits of \(M_{1}\) without activating the self-destruct feature. In general, for any continuously non-malleable code it should be hard to find two valid codewords \((M_{0},M_{1})\) and \((M_{0},M_{1}^{'})\) such that \(Dec_{k}(\alpha ,(M_{0},M_{1})) \ne Dec_{k}(\alpha ,(M_{0},M_{1}^{'}))\). This property is called message uniqueness, as described in Faust et al. (2014a). Our goal is to design non-malleable codes from any kind of block cipher, such as AES (Joan and Vincent 2002), SHACAL (Handschuh and Naccache 2002), or Midori (Banik et al. 2015), secure against a polynomial number of tampering attempts. The block cipher used in our construction should satisfy the following properties:

  a) The underlying block cipher should be a strong pseudorandom permutation (sprp).

  b) If a ciphertext c is produced under a key k, then decrypting c with a different key \(k^{'}\) should return \(\bot\) (Subsection 2.5).
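Raw block-cipher decryption does not give property (b) by itself: decrypting with a wrong key yields an unrelated plaintext, not \(\bot\). A minimal way to obtain the behaviour, sketched here with stdlib stand-ins (the toy SHA-256 keystream cipher and all function names are our own illustration, not the paper's construction), is to bind a key-dependent tag to the ciphertext:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as a toy keystream (illustration only, not secure)
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, m: bytes) -> bytes:
    body = bytes(a ^ b for a, b in zip(m, _keystream(key, len(m))))
    tag = hmac.new(key, body, hashlib.sha256).digest()  # tag commits to the key
    return body + tag

def decrypt(key: bytes, c: bytes):
    body, tag = c[:-32], c[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, body, hashlib.sha256).digest()):
        return None  # ⊥: wrong key (or tampered ciphertext)
    return bytes(a ^ b for a, b in zip(body, _keystream(key, len(body))))

k, k2 = os.urandom(16), os.urandom(16)
c = encrypt(k, b"secret message")
assert decrypt(k, c) == b"secret message"
assert decrypt(k2, c) is None  # property (b): a different key returns ⊥
```

The design choice is simply that decryption refuses to output anything unless the tag verifies under the supplied key, which is how "wrong key returns \(\bot\)" can be emulated on top of an ordinary cipher.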

Our Contribution. In this work, we propose a construction of continuously non-malleable codes in the split-state model from any block cipher, in the computational setting with a trusted setup, i.e., a CRS. We remove the restriction to related-key secure block ciphers used in Fehr et al. (2018). The codeword can handle non-persistent tampering attempts until self-destruct occurs. The message is first encoded into a leakage-resilient storage (lrs); it is then encoded with the block cipher along with a robust non-interactive zero-knowledge (NIZK) proof. The key k of the block cipher is divided into two shares \(k_{0}\), \(k_{1}\): the left part of the codeword \({M}_{0}\) stores \(k_{0}\) whereas \({M}_{1}\) stores \(k_{1}\). During decoding, the key is reconstructed as \(k \leftarrow k_{0} \oplus k_{1}\).
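The two-out-of-two XOR sharing of the key used above can be sketched in a few lines (function names are ours, for illustration):

```python
import os

def share_key(k: bytes):
    # Split k into shares k0, k1 with k = k0 XOR k1.
    # Each share alone is uniformly random and reveals nothing about k.
    k0 = os.urandom(len(k))
    k1 = bytes(a ^ b for a, b in zip(k, k0))
    return k0, k1

def reconstruct(k0: bytes, k1: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(k0, k1))

k = os.urandom(16)
k0, k1 = share_key(k)
assert reconstruct(k0, k1) == k
```

Because each share is individually uniform, a split-state tampering function that sees only \(k_{0}\) (or only \(k_{1}\)) learns nothing about k.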

Organization. The paper is organized as follows. Section 2 describes some preliminaries whereas Sect. 3 provides a brief description about continuous non-malleability. Code construction and basic proof ideas are illustrated in Sect. 4. Thereafter, proof of security is given in Sect. 5. Finally, we conclude the paper in Sect. 6.

Table 2 Summary of notations


Notations and basic results

Let m be the original message. \({M}_{0}\) and \({M}_{1}\) are the left and right halves of a codeword in the split-state model, stored in memories \({\textsf{M}}_{L}\) and \({\textsf{M}}_{R}\) respectively. \({\mathcal {O}}^{T}_{cnmc}(.,.)\) denotes the tampering oracle. The two tampering functions \(f_{0}\) and \(f_{1}\) act on \({M}_{0}\) and \({M}_{1}\) respectively; \(f^{i}_{0}\) (or \(f^{i}_{1}\)) denotes the tampering function used by an adversary in the \(i^{th}\) round. \({\mathcal {K}}\) is the usable key set after removing the weak and semi-weak keys of the block cipher, and \(|{\mathcal {K}}|\) denotes the number of keys in \({\mathcal {K}}\). When k is chosen uniformly at random from \({\mathcal {K}}\), we write \(k \xleftarrow \$ {\mathcal {K}}\). n is the security parameter. \({\mathcal {O}}^{l}(s)\) denotes the leakage oracle that takes a string s as input, applies a leakage function \(\tau _{b}()\) to s, and returns at most l bits. \(r \in \{0,1\}^{n}\) denotes the randomness. \(\alpha\) represents an untamperable common reference string (CRS). A function \(\epsilon (n)\) is called negligible in n if it vanishes faster than the inverse of any polynomial in n. P(x, r) is a randomized algorithm that takes \(x \in \{0,1\}^{n}\) and randomness \(r \in \{0,1\}^{n}\) as input and produces an output \(y \in \{0,1\}^{n}\). An algorithm P is called probabilistic polynomial-time (PPT) if P is allowed to make random choices and the computation of P(x, r) terminates in at most a polynomial number of steps in |x|, for every \(x \in \{0,1\}^{n}\), \(r \in \{0,1\}^{n}\). Let \({\mathbb {E}} = \{E_{k}\}_{k \in N}\), \({\mathbb {F}} = \{F_{k}\}_{k \in N}\) be two ensembles; \({\mathbb {E}} \underset{c}{\approx }\ {\mathbb {F}}\) denotes computational indistinguishability, i.e., for every PPT distinguisher D, \(|{\text {Pr}}[D(E_{k}) = 1] - {\text {Pr}}[D(F_{k})= 1 ] | \le \epsilon (n)\). 
Similarly, \({\mathbb {E}} \underset{s}{\approx }\ {\mathbb {F}}\) denotes statistical indistinguishability in the computationally unbounded setting. \({\mathcal {H}}_{\infty }(X)\) and \(\tilde{{\mathcal {H}}}_{\infty }(X|Y)\) denote the min-entropy and conditional average min-entropy of the random variable X. \(\delta _{0}[i]\), \(\delta _{1}[i]\) are two arrays used to store the results of tampering queries, whereas \(\mu _{0}[i]\), \(\mu _{1}[i]\) store the results of leakage queries at each invocation (\(i \in [q]\), \(q \in poly(n)\)) in Algorithm 3 and Algorithm 4. Table 2 summarizes the notation. We now present some definitions and lemmas related to the code construction.

Definition 2.1.1

(Split-State Model) Let M be a codeword consisting of two shares \(M= ({M}_{0}, {M}_{1}\)), stored in two different parts of the memory \({\textsf{M}}_{L}\), \({\textsf{M}}_{R}\) respectively. Each tampering attempt f = \((f_{0}, f_{1})\) is described by two arbitrarily chosen functions that are applied to the codeword as f = \((f_0({M}_{0}), f_1({M}_{1}))\), independently of each other. A model satisfying the above property is called the split-state model.

Definition 2.1.2

(Non-persistent Tampering) Let f = \((f_{0}, f_{1})\) be the tampering function and M a codeword split into two shares \(M= ({M}_{0}, {M}_{1}\)). The tampering experiment is said to be non-persistent if the tampering functions are always applied to the initial encoding of the codeword. This models the scenario where the adversary has access to an n-bit auxiliary memory beyond the active memory: it copies the original codeword to the auxiliary memory, performs each attack on that copy, and places the tampered codeword into the active memory.
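The difference between the two tampering modes can be made concrete with a small trace harness (our own illustration; the function names are hypothetical). Under persistent tampering each round works on the previous tampered codeword, while non-persistent tampering always restarts from the initial encoding kept in auxiliary memory:

```python
def apply_attacks(M, attacks, persistent: bool):
    """Trace the codewords produced under each tampering mode."""
    initial = M      # auxiliary memory holding the original codeword
    current = M      # active memory
    seen = []
    for f0, f1 in attacks:
        # non-persistent: every round tampers the initial encoding
        src = current if persistent else initial
        current = (f0(src[0]), f1(src[1]))  # split-state: independent halves
        seen.append(current)
    return seen

inc = lambda x: x + 1
attacks = [(inc, inc)] * 3
# Persistent tampering accumulates; non-persistent does not.
assert apply_attacks((0, 0), attacks, persistent=True) == [(1, 1), (2, 2), (3, 3)]
assert apply_attacks((0, 0), attacks, persistent=False) == [(1, 1), (1, 1), (1, 1)]
```

The trace shows why the non-persistent model needs the auxiliary copy: without it, the "always tamper the initial encoding" semantics could not be realized.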

Lemma 2.1.1

A random variable X over a set \({\mathcal {X}}\) has min-entropy \({\mathcal {H}}_{\infty }(X) = -\log \max _{x \in {\mathcal {X}}} {\text {Pr}}[X=x]\). It measures the best probability, \(2^{-{\mathcal {H}}_{\infty }(X)}\), of guessing X, even for an unbounded adversary.

Lemma 2.1.2

A random variable X over \({\mathcal {X}}\) has conditional average min-entropy given some information Y over \({\mathcal {Y}}\), defined as \(\tilde{{\mathcal {H}}}_{\infty }(X|Y) = -\log {\mathbb {E}}_{y \in {\mathcal {Y}}} \max _{x \in {\mathcal {X}}} {\text {Pr}}[X=x|Y=y]\). It measures the best probability of guessing X when related information about X is available to the adversary, e.g., through side-channel leakage.

Lemma 2.1.3

For random variables X and Y, \(\tilde{{\mathcal {H}}}_{\infty }(X|Y) \ge {\mathcal {H}}_{\infty }(X) - l\), where Y takes at most \(2^{l}\) possible values.

Lemma 2.1.4

For a random variable X and two correlated random variables \(Y_{1},Y_{2}\), we have \(\tilde{{\mathcal {H}}}_{\infty }(X|Y_{1},Y_{2}) \ge \tilde{{\mathcal {H}}}_{\infty }(X|Y_{1}) - l\), where \(Y_{2}\) takes at most \(2^{l}\) possible values.

Lemma 2.1.5

Let \(\tau\) be the (possibly randomized) leakage function used by an adversary on a variable X. Then \(\tilde{{\mathcal {H}}}_{\infty }(X|\tau (X)) \ge {\mathcal {H}}_{\infty }(X) - l\), where \(\tau (X)\) generates at most l bits of leakage through the side channel.

Lemma 2.1.6

Let X, Y be correlated random variables and \(\tau\) be the leakage function used by an adversary A. Then \(\tilde{{\mathcal {H}}}_{\infty }(X|\tau (Y)) \ge \tilde{{\mathcal {H}}}_{\infty }(X|Y)\).
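The entropy notions above can be checked numerically on a small distribution; the following sketch (our own worked example) computes \({\mathcal {H}}_{\infty }(X)\) and \(\tilde{{\mathcal {H}}}_{\infty }(X|Y)\) and verifies the chain rule of Lemma 2.1.3 for a one-bit leakage:

```python
import math
from collections import defaultdict

def min_entropy(px):
    # H_inf(X) = -log2 max_x Pr[X = x]
    return -math.log2(max(px.values()))

def avg_cond_min_entropy(pxy):
    # H~_inf(X|Y) = -log2 E_y[max_x Pr[X=x | Y=y]]
    #             = -log2 sum_y max_x Pr[X=x, Y=y]
    py = defaultdict(float)
    for (x, y), p in pxy.items():
        py[y] += p
    guess = sum(max(p for (x, yy), p in pxy.items() if yy == y) for y in py)
    return -math.log2(guess)

# X uniform on {0, 1, 2, 3}; Y is a one-bit leakage (the parity of X).
px = {x: 0.25 for x in range(4)}
pxy = {(x, x % 2): 0.25 for x in range(4)}

hx = min_entropy(px)             # 2 bits
hxy = avg_cond_min_entropy(pxy)  # 1 bit remains after the parity leak
assert hx == 2.0 and hxy == 1.0
assert hxy >= hx - 1             # Lemma 2.1.3 with l = 1 leaked bit
```

Here a single leaked parity bit reduces the guessing entropy by exactly one bit, matching the bound \(\tilde{{\mathcal {H}}}_{\infty }(X|Y) \ge {\mathcal {H}}_{\infty }(X) - l\).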

Leakage resilient storage

A Leakage Resilient Storage (lrs) scheme encodes a message in such a way that the underlying message remains secure against leakage attacks. It consists of a pair of algorithms (\(\mathfrak {Enc}^{lrs}\), \(\mathfrak {Dec}^{lrs}\)) with the following properties:

  • The \(\mathfrak {Enc}^{lrs}\) algorithm takes as input a message m and randomness r, and produces the output \(p_{0}\), \(p_{1}\).

  • \(\mathfrak {Dec}^{lrs}\) algorithm takes \(p_{0}\), \(p_{1}\) as input and generates m as output.

The original (\(\mathfrak {Enc}^{lrs}\), \(\mathfrak {Dec}^{lrs}\)) scheme is used in the literature (Davì et al. 2010; Dziembowski and Faust 2011) against computationally unbounded adversaries. In our construction, it is used against computationally bounded adversaries (Faust et al. 2014a). The leakage experiment is defined below:

$$\begin{aligned} \mathfrak {leak}_{A,m}^{\beta } = \left\{ \begin{array}{c} (p_{0},p_{1})\leftarrow \mathfrak {Enc}^{lrs}(m); {\mathcal {L}} \leftarrow A^{{\mathcal {O}}^{l}(p_{0},.),{\mathcal {O}}^{l}(p_{1},.)}\\ output: (p_{\beta },{\mathcal {L}}_{A}), \beta \in \{0,1\} \end{array} \right\} \end{aligned}$$

Initially, a counter ctr is set to 0. When queries are made to \({\mathcal {O}}^{l}(p_{0},.)\), \({\mathcal {O}}^{l}(p_{1},.)\) with a leakage function \(\tau (.)\), the leakage values \(\tau (p_{0})\), \(\tau (p_{1})\) are computed and their lengths are added to ctr, as long as \(ctr \le l\) for each part. The oracle terminates once \(ctr > l\), and any further query returns \(\bot\).
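The bookkeeping of the leakage oracle is simple to sketch (the class and the example leakage function are our own illustration of the counter logic, not the paper's Algorithm 1):

```python
class LeakageOracle:
    """O^l(s): answers leakage queries tau(s) until the l-bit budget runs out."""

    def __init__(self, s: bytes, l: int):
        self.s, self.l, self.ctr = s, l, 0

    def query(self, tau):
        leak = tau(self.s)                # leakage value as a bit-string
        if self.ctr + len(leak) > self.l:
            return None                   # ⊥: budget exceeded, oracle dead
        self.ctr += len(leak)
        return leak

# Example: leak the first 4 bits of the stored string per query, budget l = 8.
first_bits = lambda s: format(s[0], "08b")[:4]
o = LeakageOracle(b"\xab\xcd", l=8)
assert o.query(first_bits) == "1010"      # 4 bits used
assert o.query(first_bits) == "1010"      # 8 bits used: still within budget
assert o.query(first_bits) is None        # budget exhausted: ⊥
```

In the actual scheme one such oracle guards each half \(p_{0}\), \(p_{1}\) independently, so the adversary gets at most l bits from each side.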

A storage scheme is said to be a strong lrs if an adversary cannot distinguish between two arbitrarily chosen messages m and \(m^{'}\) except with negligible probability, i.e.,

\({\textbf {Adv}}_{\mathfrak {leak}_{A}^{\beta }}^{strong}(A) = |{\text {Pr}}[A(\mathfrak {leak}_{A,m}^{\beta })=1] - {\text {Pr}}[A(\mathfrak {leak}_{A,m^{'}}^{\beta })=1]| \le \epsilon (n)\), where m, \(m^{'}\) \(\in\) \(\{ 0,1 \}^{n}\) and \(\epsilon (n)\) denotes a negligible function.

Robust non-interactive zero knowledge

Let \({\mathcal {R}}\) be a relation for the language \({\mathfrak {L}}\), denoted as \({\mathfrak {L}}^{{\mathcal {R}}}\) = { \(m :\exists ~w\) such that \({\mathcal {R}}(m,w)=1 \}\), with \(m \in {\mathcal {M}}\). A robust non-interactive zero-knowledge (NIZK) proof system for \({\mathfrak {L}}^{{\mathcal {R}}}\) consists of a set of algorithms \((CRSGen, Prove, Vrfy, S= (S_{0}, S_{1}), Xtr)\), defined as follows. CRSGen takes a security parameter \(1^{n}\) as input and generates \(\alpha \in \{0,1\}^{n}\) as a common reference string (CRS). Prove takes \(\alpha\), a label \(\lambda\) and \((m,w) \in {\mathcal {R}}\) as input and produces the proof \(\pi =Prove^{\lambda }(\alpha ,m,w)\). The deterministic Vrfy algorithm outputs true when verification of the statement succeeds, i.e., \(Vrfy^{\lambda }(\alpha ,m,Prove^{\lambda }(\alpha ,m,w))=1\). The algorithm S consists of two simulators \(S_{0}\) and \(S_{1}\): \(S_{0}\) generates a CRS together with a trapdoor key, whereas \(S_{1}\) produces simulated proofs for an adversary A. Xtr extracts the hidden witness w of the relation \({\mathcal {R}}(m,w)\). The system satisfies the following properties, as given in De Santis et al. (2001):

  • Completeness. For every \(m \in {\mathfrak {L}}^{{\mathcal {R}}}\) and every w such that \({\mathcal {R}}(m,w)=1\), and for all \(\alpha \leftarrow CRSGen(1^{n})\), we require \({\text {Pr}}[Vrfy(\alpha ,m,Prove(\alpha ,m,w))=1] = 1\).

  • Multi-theorem zero knowledge. An honestly computed proof reveals nothing beyond the validity of the statement. Formally, for every probabilistic polynomial-time adversary A, the real experiment Real(n) and the simulated experiment Simulated(n) are computationally indistinguishable, i.e., \(Real(n) \underset{c}{\approx } Simulated(n)\), where Real(n) and Simulated(n) are described below:

    $$\begin{aligned} Real(n)= & {} \left\{ \begin{array}{c} \alpha \leftarrow CRSGen(1^{n}); {\mathcal {L}} \leftarrow A^{Prove(\alpha ,.,.)}(\alpha )\\ output: {\mathcal {L}} \end{array} \right\} \\ Simulated(n)= & {} \left\{ \begin{array}{c} (\alpha ,pk)\leftarrow S_{0}(1^{n}); {\mathcal {L}} \leftarrow A^{S_{1}(\alpha ,.,pk)}(\alpha )\\ output: {\mathcal {L}} \end{array} \right\} \end{aligned}$$
  • Extractability. For every PPT adversary A, there exist a PPT algorithm Xtr and a negligible function \(\epsilon\) such that \({\text {Pr}}[ G^{Xtr} = 1] \le \epsilon (n)\), where the game \(G^{Xtr}\) is described below.

    $$\begin{aligned} G^{Xtr} = \left\{ \begin{array}{l} (\alpha ,pk,sk)\leftarrow S_{0}(1^{n})\\ (m,\pi ) \leftarrow A^{S_{1}(\alpha ,.,pk)}(\alpha ); w \leftarrow Xtr(\alpha ,(m,\pi ),sk)\\ (m,\pi ) \notin {\mathcal {Q}} \wedge {\mathcal {R}}(m,w) \ne 1 \wedge Vrfy(\alpha ,m,\pi ) = 1 \\ \end{array} \right\} , \end{aligned}$$

\({\mathcal {Q}}\) is the query set of \((m,\pi )\) pairs that an adversary A asks to \(S_{1}\).

In Liu and Lysyanskaya (2012) and Faust et al. (2014a), the authors show that if the proof statement is modified, the verification algorithm must not accept. We use the same approach in our construction. Moreover, the proof system supports a public label \(\lambda\), which is incorporated into the statement of the message m in the above algorithms, i.e., \(Prove^{\lambda }(.,.,.)\), \(Vrfy^{\lambda }(.,.,.)\), \(Xtr^{\lambda }(.,.,.)\), \(S_{1}^{\lambda }\) (., ., .), etc.

Pseudorandom permutation

Let the block cipher \({\mathfrak {E}}\): \(\{0,1\}^n \times \{0,1\}^k \rightarrow \{0,1\}^n\) be a mapping from the message space \({\mathcal {M}}\) to the ciphertext space \({\mathcal {C}}\) under a fixed key k. An adversary A plays the pseudorandom permutation (prp) security game with a prp oracle \({\mathcal {O}}_{prp}()\) and a random permutation oracle \({\mathcal {O}}_{R}()\). The prp security advantage is defined as follows: \({\textbf {Adv}}_{{\mathfrak {E}}}^{prp}(A\)) = \({\text {Pr}}[A^{{\mathcal {O}}_{prp}()}=1\)] - Pr[\(A^{{\mathcal {O}}_{R}()}=1\)].

\({\textbf {Adv}}_{{\mathfrak {E}}}^{prp}(q,t) = \smash {\displaystyle \max _{A}} \{{\textbf {Adv}}_{{\mathfrak {E}}}^{prp}(A)\},\) where q is the maximum number of queries with time at most t.

A bit b is chosen uniformly at random, \(b \xleftarrow \$ \{0,1\}\), and the adversary A must guess its value. If b = 0, A interacts with \({{\mathcal {O}}_{prp}()}\), and if b = 1, A interacts with \({{\mathcal {O}}_{R}()}\). \({{\mathcal {O}}_{prp}()}\) returns the encryption \({\mathfrak {E}}_{k}(m)\) with \(k \xleftarrow \$ {\mathcal {K}}\), whereas \({{\mathcal {O}}_{R}()}\) returns the output of a random permutation on m.
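The distinguishing game can be pictured with a toy 8-bit "cipher" (everything here, including the deliberately weak `xor_cipher` and the distinguisher, is our own illustration; a real sprp would make the estimated advantage negligible):

```python
import random

def prp_game(adversary, cipher, domain=256, trials=2000):
    """Estimate Adv^prp = |Pr[A^O_prp = 1] - Pr[A^O_R = 1]| empirically."""
    wins = {0: 0, 1: 0}
    for b in (0, 1):
        for _ in range(trials):
            if b == 0:
                key = random.randrange(domain)
                oracle = lambda m: cipher(key, m)   # real cipher, random key
            else:
                perm = list(range(domain))
                random.shuffle(perm)                # truly random permutation
                oracle = lambda m: perm[m]
            wins[b] += adversary(oracle)
    return abs(wins[0] - wins[1]) / trials

# A weak toy cipher that leaks its structure: E_k(m) = m XOR k.
xor_cipher = lambda k, m: m ^ k

# Distinguisher: for the XOR cipher, O(0) XOR O(1) == 1 always;
# for a random permutation this holds only with probability 1/255.
adv = lambda oracle: int(oracle(0) ^ oracle(1) == 1)

assert prp_game(adv, xor_cipher) > 0.9   # xor_cipher is easily distinguished
```

A secure prp is exactly one for which no efficient adversary achieves a non-negligible estimate in this game.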

\({\mathfrak {K}}\) is the total key set, whereas \({\mathcal {K}}\) is the usable key set after removing the weak and semi-weak keys, i.e., \({\mathcal {K}}= {\mathfrak {K}}\) - \(\{ k^{weak} \cup k^{semi-weak}\}\). Weak and semi-weak keys of a cipher are keys under which the encryption scheme can be broken more efficiently than under typical keys.
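The definition of \({\mathcal {K}}\) is plain set arithmetic; as a toy illustration with a hypothetical 4-bit key space and made-up weak/semi-weak keys:

```python
# Hypothetical 4-bit key space; the weak/semi-weak values are invented
# purely for illustration of the set difference K = frak(K) \ (weak ∪ semi-weak).
total_keys = set(range(16))
weak_keys = {0b0000, 0b1111}
semi_weak_keys = {0b0101, 0b1010}

usable_keys = total_keys - (weak_keys | semi_weak_keys)
assert len(usable_keys) == 12          # |K| = 16 - 4
assert 0b0000 not in usable_keys       # weak keys are never sampled
```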

Block cipher

A block cipher \({\mathfrak {E}}\): \(\{0,1\}^n \times \{0,1\}^k \rightarrow \{0,1\}^n\) is a keyed permutation which takes a message \(m \in {\mathcal {M}}\) and a key \(k \in {\mathcal {K}}\) and outputs \(c \in {\mathcal {C}}\); this is called encryption. Its inverse algorithm \({\mathfrak {D}}\), called decryption, takes \(c \in {\mathcal {C}}\) and \(k \in {\mathcal {K}}\) and recovers \(m \in {\mathcal {M}}\). The classical security models for block ciphers are the pseudorandom permutation (prp) and the strong pseudorandom permutation (sprp). In the prp security model, an adversary has access only to the encryption oracle, whereas in the strong pseudorandom permutation model the adversary has access to both the encryption and the decryption oracle.

Moreover, the block cipher used in our construction has the following property: if the key is modified, the decryption algorithm should return \(\bot\). To achieve this property in our non-malleable code construction, we check the key in Algorithm 3 and Algorithm 4. The original key k of the cipher is stored across the two parts of the codeword \(M_{0}\) and \(M_{1}\). Whenever the tampered key \(k^{'}\) differs from the original key k, i.e., \(k^{'} \ne k\), the decryption algorithm \({\mathfrak {D}}_{k}()\) is not called, and the decoding algorithm of the non-malleable code returns \(\bot\). Since the decryption algorithm of a block cipher under a different key \(k^{'}\) would simply return some other message rather than the original one, we need to enforce the property in this way.

Continuously non-malleable codes

Leakage Oracle. The leakage oracle \({\mathcal {O}}^{l}(.)\) is a stateful oracle that computes the total leakage through some arbitrary leakage function \(\tau ()\). Algorithm 1 shows the leakage experiment. Initially, a counter ctr is set to 0. For each query, the leakage value is computed and its length is added to ctr, as long as \(ctr \le l\); otherwise, the oracle returns \(\bot\).

Tampering Oracle. The tampering oracle \({\mathcal {O}}^{T}_{cnmc}(.,.)\) in the split-state model is a stateful oracle that takes the two codeword halves \(M_{0},M_{1}\) and a tampering function f = (\(f_{0}\), \(f_{1}) \in {\mathcal {F}}\), with initial \(state =alive\), and performs the experiment defined in Algorithm 2.

Coding Scheme. Let CNMC = \((CRSGen, Enc_{k},Dec_{k})\) be a split-state coding scheme in the CRS model.

  • CRSGen algorithm takes security parameter \(1^{n}\) as input and generates output \(\alpha \in \{0,1\}^{n}\) as CRS.

  • \(Enc_{k}\) algorithm takes key \(k \in {\mathcal {K}}\), CRS \(\alpha\), message \(m \in {\mathcal {M}}\) and produces the codeword \((M_{0},M_{1})\).

  • \(Dec_{k}\) algorithm takes the codeword \((M_{0},M_{1})\), key \(k \in {\mathcal {K}}\), CRS \(\alpha\) and generates message m or special symbol \(\bot\).

Continuous Non-malleability. The coding scheme CNMC is said to be l leakage resilient, q continuously non-malleable code in split-state model if for all messages \(m,m^{'} \in \{0,1\}^{n}\) and for all probabilistic polynomial-time adversaries A, \({\textbf {Tamper}}_{cnmc}^{A,m}\) and \({\textbf {Tamper}}_{cnmc}^{A,m^{'}}\) are computationally indistinguishable, i.e.,

\({\textbf {Adv}}_{{Tamper}_{cnmc}^{A}}^{Strong}(A) = |{\text {Pr}}[A({\textbf {Tamper}}_{cnmc}^{A,m})=1] - {\text {Pr}}[A({\textbf {Tamper}}_{cnmc}^{A,m^{'}})=1]| \le \epsilon (n)\), where m, \(m^{'}\) \(\in\) \(\{ 0,1 \}^{n}\) and

$$\begin{aligned} {\textbf {Tamper}}_{cnmc}^{A,m} = \left\{ \begin{array}{c} \alpha \leftarrow CRSGen(1^{n}); i = 0; (M_{0},M_{1}) \leftarrow Enc_{k}(\alpha ,m) \\ while \quad i \le q \\ {\mathcal {L}}^{i}_{A} \leftarrow A^{{\mathcal {O}}^{l}(M^{i}_{0}),{\mathcal {O}}^{l}(M^{i}_{1}),{\mathcal {O}}^{T}_{cnmc}(M^{i}_{0},M^{i}_{1})} \\ i = i + 1\\ end \quad while \\ output: {\mathcal {L}}^{i}_{A}. \end{array} \right\} , \end{aligned}$$

\({\mathcal {L}}^{i}_{A}\) contains the view of the adversary, with two components \(\mu\) and \(\delta\), after i tampering queries (\(i \le q \wedge q \in poly(n)\)): \(\mu\) stores the results of the leakage queries (at most 2l bits in total) and \(\delta\) stores the results of the tampering queries (at most q) from \({\mathcal {O}}^{T}_{cnmc}()\). When i = 1, our code behaves as a one-shot non-malleable code, and without any tampering query, i.e., i = 0, it acts as a leakage-resilient code (Davì et al. 2010).
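The self-destruct logic of the continuous experiment can be sketched with a toy split-state code (the harness and the toy encode/decode below are our own illustration of the experiment's control flow, not the paper's scheme):

```python
def tamper_experiment(encode, decode, m, attacks):
    """Rounds of non-persistent tampering; a decoding error triggers self-destruct."""
    M0, M1 = encode(m)
    view = []                          # the adversary's view (delta entries)
    alive = True
    for f0, f1 in attacks:
        if not alive:
            view.append(None)          # every query after self-destruct: ⊥
            continue
        out = decode(f0(M0), f1(M1))   # non-persistent: initial encoding each round
        view.append(out)
        if out is None:
            alive = False              # decoding error => self-destruct
    return view

# Toy split-state code: both halves hold m; decoding checks consistency.
enc = lambda m: (m, m)
dec = lambda a, b: a if a == b else None
identity = lambda x: x
inc = lambda x: x + 1

attacks = [(identity, identity), (inc, identity), (identity, identity)]
# Round 1 succeeds, round 2 breaks consistency, round 3 hits the dead oracle.
assert tamper_experiment(enc, dec, 7, attacks) == [7, None, None]
```

Note that the third (harmless) attack still returns \(\bot\): once the state leaves alive, the oracle never answers again, which is exactly what limits the adversary to one useful invalid query.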

Message Uniqueness. Let CNMC = \((CRSGen,Enc_{k},Dec_{k})\) be a split-state (l, q) continuously non-malleable code. It satisfies the message uniqueness property if there does not exist a valid pair \((M_{0},M_{1})\), \((M_{0},M_{1}^{'})\) with \(M_{1} \ne M_{1}^{'}\) such that \(\bot \ne Dec_{k}(\alpha ,(M_{0},M_{1})) \ne Dec_{k}(\alpha ,(M_{0},M_{1}^{'})) \ne \bot\), i.e., such that decoding produces two valid messages m, \(m^{'}\). A continuously non-malleable code must not violate the uniqueness property, as shown in Faust et al. (2014a).

figure a (Algorithm 1: leakage oracle)
figure b (Algorithm 2: tampering oracle)

Code construction

We propose a construction of continuously non-malleable codes from a block cipher together with a robust non-interactive zero-knowledge (NIZK) proof, and then analyse the uniqueness property of the codeword and prove its security. Let CNMC = \((CRSGen,Enc_{k},Dec_{k})\) be a split-state (l, q) continuously non-malleable code in the CRS model, based on a leakage-resilient storage (\(\mathfrak {Enc}^{lrs}\), \(\mathfrak {Dec}^{lrs}\)), on a block cipher \({\mathfrak {E}}\): \(\{0,1\}^n \times \{0,1\}^k \rightarrow \{0,1\}^n\) with the properties above, and on a robust non-interactive zero-knowledge (NIZK) proof system (CRSGen, Prove, Vrfy) with label support for the language \({\mathfrak {L}}^{{\mathfrak {E}}_{k_{0}}}\) = { \(c_{key}:\exists ~k\) such that \(c_{key}={\mathfrak {E}}_{k_{0}}(k) \}\), where \(k \in {\mathcal {K}}\), \(k \leftarrow k_{0} \oplus k_{1}\). The construction of our codeword is given below:

  I. \({{CRSGen(1^{n}). }}\) The algorithm takes \(1^{n}\) as the security parameter and generates the common reference string \(\alpha\).

  II. \({{Enc_{k}(\alpha ,m). }}\) The encoding algorithm takes a key \(k \in {\mathcal {K}}\), the CRS \(\alpha\) and a message \(m \in {\mathcal {M}}\) as input. First, the message m with some randomness \(r \leftarrow \{0,1\}^{n}\) is fed into the leakage-resilient storage, i.e., \((p_{0},p_{1})\leftarrow \mathfrak {Enc}^{lrs}(m||r)\). Next, it encrypts \(p_{0}\), \(p_{1}\) as \(c_{0} \leftarrow {\mathfrak {E}}_{k}(p_{0})\), \(c_{1} \leftarrow {\mathfrak {E}}_{k}(p_{1})\), where \({\mathfrak {E}}_{k}()\) is the encryption algorithm of the block cipher. The key k is divided into two shares \(k_{0}\), \(k_{1}\) such that \(k \leftarrow k_{0} \oplus k_{1}\), and the master key k is encrypted as \(c_{key}\) = \({\mathfrak {E}}_{k_{0}}(k)\). Thereafter, the proofs of the statements are computed as \(\pi _{0}\) = \(Prove^{c_{1}}(\alpha ,k_{0},(c_{key},c_{0}))\), \(\pi _{1} = Prove^{c_{0}}(\alpha ,k_{1},(c_{key},c_{1}))\). Finally, it outputs the codeword \((M_{0},M_{1})\) = \((((k_{0},c_{0}),p_{0},(c_{key},c_{1}),\pi _{0},\pi _{1})\), \(((k_{1},c_{1}),p_{1},(c_{key},c_{0})\),\(\pi _{0},\pi _{1}))\). The codeword \((M_{0}, M_{1})\) is stored in the memory \(({\textsf{M}}_{L}, {\textsf{M}}_{R})\) respectively.

  III. \({{Dec_{k}(\alpha ,(M_{0},M_{1})). }}\) The decoding algorithm starts by parsing \(\pi _{0}\) and \(\pi _{1}\). It then reconstructs the key \(k \leftarrow k_{0} \oplus k_{1}\) and performs the following steps:

  IV. Left & right verification. If the verification of the statements in the codeword \((M_{0},M_{1})\) is not successful, i.e., either \(Vrfy^{c_{1}}(\alpha ,(c_{key},c_{0}),\pi _{0})\) or \(Vrfy^{c_{0}}(\alpha ,(c_{key},c_{1}),\pi _{1})\) returns 0, it outputs \(\bot\). Otherwise, go to the next step.

  V. Uniqueness check. If k = \({\mathfrak {D}}_{k_{0}}(c_{key})\), go to the next step. Otherwise, return \(\bot\).

  VI. Cross check & decode. If \(p_{0} \ne {\mathfrak {D}}_{k}(c_{0})\), if \(p_{1} \ne {\mathfrak {D}}_{k}(c_{1})\), or if the proofs \(\pi _{0}\), \(\pi _{1}\) differ between the two halves, return \(\bot\). Otherwise, if \(p_{0}\), \(p_{1}\) are equal in \(M_{0}\) and \(M_{1}\), call \(\mathfrak {Dec}^{lrs}(p_{0}\), \(p_{1})\).
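To make the data flow of steps I–VI concrete, here is a heavily simplified sketch. The block cipher is replaced by a toy SHA-256 keystream (an involution, so \({\mathfrak {D}} = {\mathfrak {E}}\)), the NIZK proofs by plain hashes, and the lrs by two-out-of-two XOR sharing; none of these stand-ins has the security properties the construction requires (sprp, robust NIZK), so this only illustrates how the pieces fit together:

```python
import hashlib
import os

def _E(key: bytes, m: bytes) -> bytes:
    # Toy cipher: XOR with a SHA-256-derived pad (insecure, illustration only).
    ks = hashlib.sha256(key).digest() * (len(m) // 32 + 1)
    return bytes(a ^ b for a, b in zip(m, ks))
_D = _E  # XOR keystream makes the toy cipher its own inverse

def _prove(crs: bytes, stmt: bytes) -> bytes:
    return hashlib.sha256(crs + stmt).digest()  # stand-in "proof", NOT a NIZK

def lrs_enc(m: bytes):
    p0 = os.urandom(len(m))                       # toy lrs: XOR sharing
    return p0, bytes(a ^ b for a, b in zip(m, p0))

def lrs_dec(p0: bytes, p1: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(p0, p1))

def encode(crs: bytes, k: bytes, m: bytes):
    r = os.urandom(16)
    p0, p1 = lrs_enc(m + r)                       # step II: lrs encode m||r
    c0, c1 = _E(k, p0), _E(k, p1)                 # encrypt both shares
    k0 = os.urandom(len(k))
    k1 = bytes(a ^ b for a, b in zip(k, k0))      # k = k0 XOR k1
    c_key = _E(k0, k)                             # encrypt the master key
    pi0 = _prove(crs, c1 + c_key + c0)            # label c1, statement (c_key, c0)
    pi1 = _prove(crs, c0 + c_key + c1)            # label c0, statement (c_key, c1)
    # Tuples flattened for simplicity; M0 also carries c1, M1 also carries c0.
    return ((k0, c0, p0, c_key, c1, pi0, pi1),
            (k1, c1, p1, c_key, c0, pi0, pi1))

def decode(crs: bytes, M0, M1):
    k0, c0, p0, ck0, _c1_copy, pi0, pi1 = M0
    k1, c1, p1, ck1, _c0_copy, pi0b, pi1b = M1
    k = bytes(a ^ b for a, b in zip(k0, k1))      # step III: rebuild the key
    # Step IV: left & right verification of the proof statements.
    if pi0 != _prove(crs, c1 + ck0 + c0) or pi1 != _prove(crs, c0 + ck1 + c1):
        return None
    # Step V: uniqueness check against the encrypted master key.
    if _D(k0, ck0) != k:
        return None
    # Step VI: cross check & decode.
    if _D(k, c0) != p0 or _D(k, c1) != p1 or (pi0, pi1) != (pi0b, pi1b):
        return None
    return lrs_dec(p0, p1)[:-16]                  # strip the randomness r

crs, k = os.urandom(16), os.urandom(32)
M0, M1 = encode(crs, k, b"attack at dawn!!")
assert decode(crs, M0, M1) == b"attack at dawn!!"
# Tampering the right half's key share is caught by the uniqueness check.
M1_t = (bytes(b ^ 1 for b in M1[0]),) + M1[1:]
assert decode(crs, M0, M1_t) is None
```

The final two assertions trace the intended behaviour: an untouched codeword decodes to m, while flipping bits of \(k_{1}\) changes the reconstructed key, fails the check against \({\mathfrak {D}}_{k_{0}}(c_{key})\), and yields \(\bot\).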

Lemma 1

CNMC = \((CRSGen,Enc_{k},Dec_{k})\) satisfies the message uniqueness property when instantiated with a block cipher satisfying property (b).


Proof Message uniqueness follows from property (b) (Subsection 2.5) of the underlying block cipher: a ciphertext generated under a key k decrypts to \(\bot\) under any different key \(k^{'}\). Hence, the integrity of the key must be maintained. Suppose an adversary A generates a pair \((M_{0},M_{1})\), \((M_{0},M_{1}^{'})\) such that both are valid and \(M_{1} \ne M_{1}^{'}\), i.e., \(\bot \ne Dec_{k}(\alpha ,(M_{0},M_{1})) \ne Dec_{k}(\alpha ,(M_{0},M_{1}^{'})) \ne \bot\). This is possible only if the adversary can produce valid key pairs \((k_{0},k_{1})\), \((k_{0},k_{1}^{'})\) with \(k_{1} \ne k_{1}^{'}\) such that \({\mathfrak {D}}_{k_{0}}(c_{key}) = k_{0} \oplus k_{1}\) (for \(M_{0},M_{1}\)) and \({\mathfrak {D}}_{k_{0}}(c_{key}) = k_{0} \oplus k_{1}^{'}\) (for \(M_{0},M_{1}^{'}\)). But this contradicts the determinism of the decryption algorithm: \({\mathfrak {D}}_{k_{0}}(c_{key})\) is a single fixed value, so \(k_{0} \oplus k_{1} \ne k_{0} \oplus k_{1}^{'}\) whenever \(k_{1} \ne k_{1}^{'}\). Therefore, for at least one of the two codewords the reconstructed key differs from the decrypted key, and decoding returns \(\bot\).
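The contradiction in the argument above is just XOR cancellation: since decryption is deterministic, \({\mathfrak {D}}_{k_{0}}(c_{key})\) is one fixed value v, and requiring \(v = k_{0} \oplus k_{1} = k_{0} \oplus k_{1}^{'}\) forces \(k_{1} = k_{1}^{'}\). A two-line check with illustrative values:

```python
# Deterministic decryption fixes v = D_{k0}(c_key) to a single value.
k0, k1 = 0b1010, 0b0110        # example key shares (illustrative values)
v = k0 ^ k1                    # the value the uniqueness check compares against
k1_prime = v ^ k0              # the only right share consistent with v and k0
assert k1_prime == k1          # hence k1 = k1': no second valid codeword exists
```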

Security proof idea of CNMC

Our idea is to develop a continuous version of non-malleable codes from block ciphers, with some additional properties imposed on the cipher. As noted by Gennaro et al. (2004), certain strong cryptographic assumptions are necessary when an adversary can tamper with a portion of the memory. To prove that the codeword is continuously non-malleable, a simulator for the \({\textbf {Tamper}}_{cnmc}^{A,m}\) experiment is developed. In the \({\textbf {Tamper}}_{cnmc}^{A,m}\) experiment, an adversary A performs all leakage and tampering oracle queries in the real environment on the codeword \((M_{0}, M_{1})\), stored in the memories \({\textsf{M}}_{L}\) and \({\textsf{M}}_{R}\) respectively, whereas the simulated experiment \({\textbf {SimTamper}}_{cnmc}^{A,0^{n}}\) simulates the adversary's view of the tampering experiment in an ideal scenario. We need to show that the two experiments are indistinguishable except with negligible probability, i.e., \(| {\text {Pr}}[{\textbf {Tamper}}_{cnmc}^{A,m}=1] - {\text {Pr}}[{\textbf {SimTamper}}_{cnmc}^{A,0^{n}}=1]| \le \epsilon (n)\). The simulated tampering experiment takes \(r \leftarrow \{0,1\}^{n}\) and proceeds with an encryption of the message \(0^{n}||r\), whereas the original tampering experiment proceeds with an encryption of the message m||r. Initially, m||r is encoded using leakage resilient storage, which splits the message into two halves and keeps it secure as long as at most l bits are leaked from each part of the memory. Given the codeword M = \((M_{0}, M_{1})\), the oracle continues as long as the simulated outputs from the left (Algorithm 3) and right (Algorithm 4) algorithms \((T_{0}, T_{1})\) are equal. The experiment stops when a decoding error is triggered, i.e., the outputs are unequal. From that point onward, every further query returns \(\bot\), and the self-destruct state is invoked.
Since non-persistent tampering is considered, a separate memory \({\mathfrak {M}}\) of polynomial length is used to store tampered versions of the codeword at each round along with leakage and tampering data.
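The oracle behaviour just described, i.e., answering queries while the two simulated halves agree and self-destructing on the first mismatch, can be sketched as below. `T0` and `T1` are placeholders for Algorithms 3 and 4; all names are illustrative.

```python
BOT = None  # plays the role of the error symbol ⊥

class SimTamper:
    """Sketch of the S1 query loop: run T0/T1 on the two halves,
    compare their outputs, and self-destruct on the first mismatch."""

    def __init__(self, T0, T1, M0, M1, r):
        self.T0, self.T1, self.M0, self.M1, self.r = T0, T1, M0, M1, r
        self.i = 0
        self.destroyed = False
        self.memory = []  # stands in for the separate memory of tampered copies

    def query(self, f0, f1):
        if self.destroyed:
            return BOT  # after self-destruct, every further query returns ⊥
        out0 = self.T0(self.M0, f0, self.r, self.i)
        out1 = self.T1(self.M1, f1, self.r, self.i)
        self.memory.append((out0, out1))  # non-persistent tampering: keep copies
        self.i += 1
        if out0 != out1:
            self.destroyed = True  # decoding error triggers self-destruct
            return BOT
        return out0

# Demo with trivial stand-ins: T_b simply applies the tampering function.
T = lambda M, f, r, i: f(M)
sim = SimTamper(T, T, "m0", "m0", "r")
assert sim.query(lambda x: x, lambda x: x) == "m0"       # outputs agree
assert sim.query(lambda x: x + "!", lambda x: x) is BOT  # mismatch: ⊥
assert sim.query(lambda x: x, lambda x: x) is BOT        # self-destructed
```

The real \(T_{0}\), \(T_{1}\) additionally record leakage and tampering data in \(\mu _{b}[i]\) and \(\delta _{b}[i]\); the sketch only captures the compare-then-self-destruct control flow.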

[Algorithm 3: simulated tampering experiment \(T_{0}\)]

The main difficulty of our experiment is to find the self-destruct index, i.e., the point from which the experiment returns \(\bot\) for every further query. Let \(\tau (M)\) be the leakage function on the codeword M. \(\tilde{{\mathcal {H}}}_{\infty }(M | \tau (M))\) denotes the average conditional min-entropy of the codeword M when some information is available through a side channel, i.e., the adversary A's best chance of guessing the message m from the codeword M given the side-channel information. Leakage functions are applied in an interleaved way by the adversary A on \((M_{0}, M_{1})\) as \(\tau ^{0}_{0}(M_{0})\), \(\tau ^{0}_{1}(M_{1})\), \(\tau ^{1}_{0}(M_{0})\), \(\tau ^{1}_{1}(M_{1})\), ..., \(\tau ^{i-1}_{0}(M_{0})\), \(\tau ^{i-1}_{1}(M_{1})\). The \({\textbf {SimTamper}}_{cnmc}^{A,0^{n}}\) experiment proceeds as long as the outputs produced by the two algorithms \(T_{0}\) and \(T_{1}\) are equal. From an information-theoretic viewpoint, this can be expressed as \(\tilde{{\mathcal {H}}}_{\infty }(M_{0} | \tau ^{i}_{0}(M_{0}))\) = \(\tilde{{\mathcal {H}}}_{\infty }(M_{1} | \tau ^{i}_{1}(M_{1}))\), i.e., the best chance of guessing the message m from the codeword M = \((M_{0}, M_{1})\) is the same given the side-channel leakage available to the adversary A. At each query invocation, the simulated experiment checks the tampered outputs from both halves of the memory. If they match, the entire part is leaked, so that the total amount of leakage is upper bounded by \({\mathcal {O}}(n)\), where n is the security parameter. The experiment triggers self-destruct when the outputs are unequal. The simulated tampering experiment consists of \(S = (S_{0},S_{1})\) and works as follows. The simulator \(S_{0}\) generates an untamperable CRS and the keys \((\alpha ,pk,sk)\). The keys are passed to \(S_{1}\), which takes \(r \leftarrow \{0,1\}^{n}\) and an encoding of the message \(0^{n}||r\), and invokes \((T_{0}, T_{1})\) to simulate the tampering experiment as long as the outputs are equal.
The simulator \(S_{1}\) produces simulated proofs of statement \(\pi _{0}\) = \(S_{1}^{c_{1}}(\alpha ,(c_{key},c_{0}),pk)\) and \(\pi _{1}\) = \(S_{1}^{c_{0}}(\alpha ,(c_{key},c_{1}),pk)\). Then, it calls the algorithms \(T_{0}\) and \(T_{1}\) in an interleaved manner. Algorithm \(T_{0}\) simulates the left part \(M_{0}\) of a (simulated) codeword and algorithm \(T_{1}\) simulates the right part \(M_{1}\). Both algorithms proceed by parsing \(M_{0}\) and \(M_{1}\). Each calculates the leakage through \((\tau ^{i}_{0},\tau ^{i}_{1})\) and stores the value in \(\mu _{b}[i]\). Then, it applies the tampering function \(f^{i}_{0}\) on \(M_{0}\) and \(f^{i}_{1}\) on \(M_{1}\), and compares the tampered codeword \(M^{'}\) with the original codeword M. If both are the same, \(\delta _{b}[i]\) is set to \(same^{*}\). Next, it verifies the proof of the statement; if verification succeeds, \(T_{b}\) proceeds further, otherwise \(\delta _{b}[i]\) is set to \(\bot\). Further, the original and tampered proofs of statement are compared, and the corresponding values are stored in \(\delta _{b}[i]\). The extractor algorithm Xtr retrieves the key \(k^{'}_{0}\) in algorithm \(T_{1}\) and \(k^{'}_{1}\) in algorithm \(T_{0}\), and the key \(k^{'}\) is formed as \(k^{'} \leftarrow k^{'}_{0} \oplus k^{'}_{1}\). Next, the uniqueness condition of the key \(k^{'}\) is checked against k, and if they are the same, decoding is performed to retrieve the message \(m^{'}\).
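The key is held as a two-out-of-two XOR sharing (k split into \(k_{0}\), \(k_{1}\) with \(k = k_{0} \oplus k_{1}\), recombined after extraction). A minimal sketch of this standard technique:

```python
import secrets

def share_key(k: bytes):
    # k0 is uniformly random; k1 = k XOR k0, so either share alone is
    # uniformly distributed and reveals nothing about k.
    k0 = secrets.token_bytes(len(k))
    k1 = bytes(a ^ b for a, b in zip(k, k0))
    return k0, k1

def reconstruct(k0: bytes, k1: bytes) -> bytes:
    # XOR the shares back together: k0 XOR k1 = k.
    return bytes(a ^ b for a, b in zip(k0, k1))

k = b"\x01\x02\x03\x04"
k0, k1 = share_key(k)
assert reconstruct(k0, k1) == k
```

This is why the simulator can only recover \(k^{'}\) after extracting the missing share from the other half's proof: one share on its own carries no information about the key.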

Now, we discuss why the known attacks cannot be performed against the proposed construction. First, if an adversary tampers with \(c_{0}\) in \(M_{0}\) and changes it to some related value \(c_{0}^{'}\), the NIZK proof \(\pi _{0}\) must change to some \(\pi _{0}^{'}\). Hence, the values \(\pi _{0}\), \(\pi _{0}^{'}\) differ, and by the robustness of the NIZK the experiment returns \(\bot\). The adversary would also have to make the same changes in \(M_{1}\), which is hard without knowing a witness, again by the robustness of the proof. Moreover, if an adversary tampers with the key k and changes it to \(k^{'}\), the NIZK proof must differ and decryption with \(k^{'}\) returns \(\bot\) by cipher property (b). Hence, the codeword is secure against continuous tampering attacks. In the next section, we discuss the security of the construction in detail.

[Algorithm 4: simulated tampering experiment \(T_{1}\)]

Proof of security

Theorem 1

Let \({\mathfrak {E}}\): \(\{0,1\}^n \times \{0,1\}^k \rightarrow \{0,1\}^n\) be a block cipher with message space \({\mathcal {M}}\), key space \({\mathcal {K}}\) and ciphertext space \({\mathcal {C}}\); let \((\mathfrak {Enc}^{lrs}, \mathfrak {Dec}^{lrs})\) be an \(l^{'}\)-leakage resilient storage; and let (CRSGen, Prove, Vrfy) be a robust NIZK proof for the language \({\mathfrak {L}}^{{\mathcal {R}}}\) chosen from the message space \({\mathcal {M}}\). Then CNMC = \((CRSGen,Enc_{k},Dec_{k})\), instantiated with the above primitives, is \(((l+\gamma +\eta ), q)\) continuously non-malleable and l leakage resilient under non-persistent tampering, where \(q = poly(n)\), \(\gamma = \log ({\mathcal {M}})\), \(\eta = \log ({\mathcal {K}})\), \(l^{'} \ge (2l + n)\) and n denotes the security parameter.


The proof of our theorem is quite involved. We develop a simulator that simulates the tampering experiment in an ideal scenario, and show that an adversary cannot distinguish between the real and simulated experiments except with negligible probability, i.e., \(| {\text {Pr}}[{\textbf {Tamper}}_{cnmc}^{A,m}=1] - {\text {Pr}}[{\textbf {SimTamper}}_{cnmc}^{A,0^{n}}=1]| \le \epsilon (n)\). In the \({\textbf {Tamper}}_{cnmc}^{A,m}\) experiment, an adversary A issues q leakage and tampering queries in the real environment until the self-destruct state is invoked. The \({\textbf {SimTamper}}_{cnmc}^{A,0^{n}}\) experiment simulates the adversary's view in an ideal environment. Here, the simulator \(S = (S_{0}, S_{1})\) is constructed to execute the \({\textbf {SimTamper}}_{cnmc}^{A,0^{n}}\) experiment. The simulator \(S_{0}\) generates a triplet \((\alpha ,pk,sk)\) and passes it to \(S_{1}\); \(\alpha\) is an untamperable CRS and the pair (pk, sk) is used to produce the simulated proofs of statement in the Xtr algorithm. The goal of \(S_{1}\) is to simulate the actual tampering experiment. It consists of two algorithms \((T_{0},T_{1})\) with tampering functions \(f^{i}_{0}\) and \(f^{i}_{1}\) (\(i \le q \wedge q \in poly(n)\)). Algorithm \(T_{0}\) works on the codeword part \(M_{0}\) with tampering function \(f^{i}_{0}\) and \(T_{1}\) works on \(M_{1}\) with \(f^{i}_{1}\). The simulated experiment proceeds with an encoding of the message \(0^{n}||r\) whereas the real experiment proceeds with the message m||r (\(r \leftarrow \{0,1\}^{n}\)). To show that the simulation works properly, the distribution of the simulated experiment is changed incrementally until we reach the real tampering experiment \({\textbf {Tamper}}_{cnmc}^{A,m}\). At each step, a negligible amount of error is introduced; such a change is not noticeable due to the security of the lrs scheme. In this way, the encryption of \(0^{n}\) is switched to the codeword M, i.e., an encoding of the message m.
\(S_{1}\) calls \((T_{0},T_{1})\) in an interleaved manner, and the experiment stops when the outputs of the two algorithms are unequal, i.e., \(T_{0}(M_{0},f^{i}_{0},r,i) \ne T_{1}(M_{1},f^{i}_{1},r,i)\). Any further query returns \(\bot\), and the experiment leads to self-destruct in \({\textbf {SimTamper}}_{cnmc}^{A,0^{n}}\). Whenever the experiment triggers self-destruct, the security of continuous non-malleability reduces to the security of the underlying lrs scheme. Alternatively, if an adversary A breaks the security of continuous non-malleability, then there exists an efficient reduction that breaks the security of the lrs, contradicting the fact that the lrs scheme is secure. \(S_{1}\) simulates the actual reduction with \((T_{0},T_{1})\) in the following way.

Algorithm 3 illustrates the working strategy of the simulated tampering experiment \(T_{0}\). It first parses the left part of the codeword and applies the leakage function \(\tau ^{i}_{0}()\). The maximum leakage tolerated by \(T_{0}\) is l bits. All leakage values are stored in the array \(\mu _{0}[i]\). Then, the tampered codeword \(M^{'}_{0}\) is obtained by applying \(f^{i}_{0}\) to \(M_{0}\), i.e., \(M^{'}_{0}\) = \(f^{i}_{0}(M_{0})\) = \(((k^{'}_{0},c^{'}_{0}),p^{'}_{0},(c_{key}^{'},c^{'}_{1}),\pi ^{'}_{0},\pi ^{'}_{1})\). If \(M_{0}\) and \(M^{'}_{0}\) are equal, \(\delta _{0}[i]\) is set to \(same^{*}\). Next, the verification of the statement is checked; if it is unsuccessful, \(\delta _{0}[i]\) is set to \(\bot\) and the experiment stops. If the original proof of statement \(\pi\) and the tampered one \(\pi ^{'}\) are the same, \(\delta _{0}[i]\) is set to \(\bot\) and \(\bot\) is returned. The extractor algorithm Xtr is run to extract \(k^{'}_{1}\) from the simulated proof of statement with the extraction key sk, i.e., \(k^{'}_{1} \leftarrow Xtr^{c_{0}^{'}}(\alpha ,((c_{key}^{'},c^{'}_{1}),\pi ^{'}_{1}),sk)\). The key \(k^{'}_{1}\) is then XORed with \(k^{'}_{0}\) to form the key \(k^{'}\), which is checked against \({\mathfrak {D}}_{k_{0}}(c_{key})\). If both are the same, \(p^{'}_{1} \leftarrow {\mathfrak {D}}_{k^{'}}(c^{'}_{1})\) is computed. Next, the \(\mathfrak {Dec}^{lrs}(p^{'}_{0},p^{'}_{1})\) algorithm is invoked to retrieve the message \(m^{'}\). Since the tampering experiment is non-persistent, a separate memory \({\mathfrak {M}}\) stores all the tampered codewords along with the leakage and tampering data, i.e., \(\delta _{0}[i]\) and \(\mu _{0}[i]\).

Algorithm 4 describes the simulated tampering experiment \(T_{1}\). It starts by parsing the right part \(M_{1}\) of the codeword and calculates the leakage through \(\tau ^{i}_{1}()\). The maximum leakage tolerated by \(T_{1}\) is upper bounded by l. The array \(\mu _{1}[i]\) stores the leakage data and \(\delta _{1}[i]\) stores all the tampering information. At each query invocation, the tampering function \(f^{i}_{1}\) is applied to \(M_{1}\). Next, if verification of the statement with label \(c^{'}_{0}\) is successful, the proof of statement is compared with the tampered one. In case of successful comparison, the Xtr algorithm retrieves \(k^{'}_{0}\) from the simulated proof of statement, i.e., \(k^{'}_{0} \leftarrow Xtr^{c_{1}^{'}}(\alpha ,((c_{key}^{'},c^{'}_{0}),\pi ^{'}_{0}),sk)\). The key \(k^{'}\) is formed and compared with \({\mathfrak {D}}_{k_{0}}(c_{key})\). Finally, \(p^{'}_{0}\) is recovered from the lrs and \(\mathfrak {Dec}^{lrs}(p^{'}_{0},p^{'}_{1})\) is invoked, which returns \(m^{'}\).

The simulator \(S_{1}\) runs the algorithms \(T_{0}\) and \(T_{1}\) alternately as long as their outputs are the same. Let \(\tilde{{\mathcal {H}}}_{\infty }(M_{0} | \tau ^{i}_{0}(M_{0}))\) be the average conditional min-entropy. It captures the adversary A's best chance of guessing \(M_{0}\) when some information is available through the side-channel leakage \(\tau ^{i}_{0}(M_{0})\). Information theoretically, we can write \(\tilde{{\mathcal {H}}}_{\infty }(M_{0} | \tau ^{i}_{0}(M_{0}))\) = \(\tilde{{\mathcal {H}}}_{\infty }(M_{1} | \tau ^{i}_{1}(M_{1}))\) from the working strategy of the simulator \(S_{1}\). \(\tilde{{\mathcal {H}}}_{\infty }(M_{0} | \tau ^{i}_{0}(M_{0}))\) can be bounded as follows (Lemma 2.1.3).

$$\begin{aligned} \tilde{{\mathcal {H}}}_{\infty }(M_{0} | \tau ^{i}_{0}(M_{0})) \ge {\mathcal {H}}_{\infty }(M_{0}) - l \end{aligned}$$


$$\begin{aligned}\tilde{{\mathcal {H}}}_{\infty }(M_{1} | \tau ^{i}_{1}(M_{1})) \ge {\mathcal {H}}_{\infty }(M_{1}) - l \end{aligned}$$

Here, \(\tau ^{i}_{0}(M_{0})\) or \(\tau ^{i}_{1}(M_{1})\) can leak at most l bits, as per the security of the lrs scheme. The simulator \(S_{1}\) runs until self-destruct is invoked or \(\bot\) is returned. Let q be the maximum number of queries made by A in \({\textbf {Tamper}}_{cnmc}^{A,m}\); it is assumed that the experiment stops at the \(q^{th}\) query. In \({\textbf {SimTamper}}_{cnmc}^{A,0^{n}}\), the same number of queries is performed, and the experiment returns \(\bot\) whenever the outputs of \(T_{0}\) and \(T_{1}\) differ. The algorithms \(T_{0}(M_{0},f^{q}_{0},r,q)\) and \(T_{1}(M_{1},f^{q}_{1},r,q)\) are l-leaky. For queries 1 to \((q-1)\), we get the equation below, using the fact that a function's output cannot be more informative than its input; the last inequality follows from Lemma 2.1.4. Moreover, \(M_{1}\) and \(T_{0}(M_{0},f^{q}_{0},r,q)\) do not give much useful information about \(M_{0}\) for guessing the message m, and they decrease the min-entropy of \(M_{0}\) by at most \({\mathcal {O}}(n)\), i.e., its size. Hence, the security of the codeword reduces to the security of the leakage resilient storage.

$$\begin{aligned}&\tilde{{\mathcal {H}}}_{\infty }(M_{0} \mid T_{0}(M_{0},f^{1}_{0},r,1),\ldots ,T_{0}(M_{0},f^{q}_{0},r,q))\\&\quad = \tilde{{\mathcal {H}}}_{\infty }(M_{0} \mid T_{1}(M_{1},f^{1}_{1},r,1),\ldots ,T_{1}(M_{1},f^{q-1}_{1},r,q-1),T_{0}(M_{0},f^{q}_{0},r,q))\\&\quad \ge \tilde{{\mathcal {H}}}_{\infty }(M_{0} \mid M_{1}, q, T_{0}(M_{0},f^{q}_{0},r,q)). \end{aligned}$$

At each query invocation, the tampered outputs from the two halves of \((M_{0}, M_{1})\) are compared and, if they match, the entire codeword is leaked. At the last query invocation, when the outputs from the two sides differ (and also \(\tau ^{q}_{0}(M_{0}) \ne \tau ^{q}_{1}(M_{1})\)), the entire tampered codeword is leaked, so that the total leakage is upper bounded by \({\mathcal {O}}(n)\). In addition, the lrs in the two parts of the codeword can tolerate leakage of up to \(2l\) bits (l bits from each side). Combining the parameters, we need \(l^{'} \ge 2l + n\) for the simulator \(S_{1}\) to work properly.
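The entropy bookkeeping above relies on the bound \(\tilde{{\mathcal {H}}}_{\infty }(X \mid Z) \ge {\mathcal {H}}_{\infty }(X) - l\) when Z is an l-bit leakage. As a toy numeric check, the following sketch evaluates the standard definition of average conditional min-entropy on a small example (uniform 3-bit X, leakage of its lowest bit):

```python
import math
from collections import defaultdict

def min_entropy(dist):
    # H_inf(X) = -log2 max_x Pr[X = x]
    return -math.log2(max(dist.values()))

def avg_cond_min_entropy(joint):
    # H~_inf(X | Z) = -log2 sum_z max_x Pr[X = x, Z = z]
    best = defaultdict(float)
    for (x, z), p in joint.items():
        best[z] = max(best[z], p)
    return -math.log2(sum(best.values()))

# X uniform over 3 bits; the leakage Z is the lowest bit (l = 1).
dist = {x: 1 / 8 for x in range(8)}
joint = {(x, x & 1): 1 / 8 for x in range(8)}
l = 1
assert min_entropy(dist) == 3.0
assert abs(avg_cond_min_entropy(joint) - 2.0) < 1e-9
# Chain-rule bound: leaking l bits costs at most l bits of min-entropy.
assert avg_cond_min_entropy(joint) >= min_entropy(dist) - l - 1e-9
```

Here the bound is met with equality (3 bits minus 1 leaked bit leaves 2 bits), matching the per-query accounting used for \(\tau ^{i}_{0}\) and \(\tau ^{i}_{1}\).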


In this work, we propose a generic method to construct continuously non-malleable codes from any block cipher in the split-state model. The length of the codeword depends on the block size of the underlying cipher. A non-persistent version of tampering with self-destruct capability is considered here. Further research can pursue the construction of super-strong continuously non-malleable codes, with or without self-destruct capability, against non-persistent tampering from block ciphers in the split-state model.