Introduction

Secure multiparty computation (MPC) allows multiple players to jointly compute a function without giving away anything about their inputs, except what can be deduced from the output. An important special case, called Private Function Evaluation (PFE), arises when the function to be evaluated is itself an input and should remain hidden. This has been considered in the standard cryptographic setting, e.g., using universal circuits [45] in [5, 19, 32, 38].

Secure multiparty computation, and hence also PFE (by choosing a universal function to be executed), can also be done with a deck of physical cards, as first shown in [8, 12, 41]. In this area of card-based cryptography, one designs tangible protocols using a deck of cards with information-theoretic privacy features. There is already a wealth of literature on how to jointly and securely compute an arbitrary (fixed) circuit on the players’ inputs, see, e.g., [12, 37, 41]. Moreover, similar but different physical assumptions have been exploited in other settings, in particular in the cryptographic voting community, cf. Scantegrity, PunchScan, and Oblivious voting [2, 10, 11, 43] (see [28] for a survey on physical assumptions in cryptography).

Motivation. Card-based protocols are often used in educational and recreational settings. For an illustration of PFE, we stretch the usual motivation for card-based AND protocols a bit, namely the dating problem where players want to find out whether there is mutual love.

We assume a predefined set of binary attributes A such as \(A = \{\)LikesCats, HasPhD, IsGeeky,\(\ldots \}\). Alice implicitly specifies (by providing a circuit or program) which combinations \(P \subseteq 2^A\) of attributes she likes and Bob specifies which attributes \(B \subseteq A\) he has. The task is to determine whether Bob’s secret attributes satisfy Alice’s secret preferences, i.e., whether \(B \in P\). Here, we want to ensure that both Alice’s and Bob’s input remains hidden, i.e., nothing about the input is revealed, except what can be deduced from the output of the protocol.

In the same vein, PFE is useful for the game Skipjack [16], where a game master invents a rule and the other players take turns querying whether a chosen code word satisfies the rule or not, in order to deduce or guess the rule in this process. Applying our PFE protocol would prevent the game master from cheating by changing the rule mid-game, or even allow playing the game in the absence of a game master, assuming an encoding of a rule is available or can be obtained at random. (Moreover, as PFE even hides the code words that a player is testing, we can derive a competitive multi-player mode where one player’s questions do not help the others.)

Look and Feel of Our Protocols. Imagine a room with a table, where Alice puts an encoding of a function \(f\) as a sequence of cards on the table, each bit of the description as two face-down cards, encoding \(0\) via ♣ ♡ and \(1\) via ♡ ♣. Next to Alice’s cards, Bob will put his input \(x\) as a bit string using the same encoding. The game then proceeds according to a protocol (described in more detail later) that may prescribe to (i) shuffle the cards in certain controlled ways and (ii) turn over cards (the observed symbols may affect the future course of the protocol). The protocol terminates with output \(f(x)\) encoded as face-down cards. The output can then be revealed to both players or used obliviously in further computations.

The Sort Sub-protocol. The protocols proposed in this paper—and actually a large subset of the protocols from the literature—can be regarded as a sequence of sub-protocols with basically the same functionality, which we capture under the name “sort protocol”. We believe this observation is of independent interest. We also show that, under weak assumptions, protocols obtained as compositions of sort-protocols are secure. This elegantly re-proves the security of existing protocols and greatly simplifies the security proofs of our own protocols. (As we are in a simpler and fully information-theoretic setting, this is much easier than in the common universal composability framework [9]).

On Interaction in Card-Based Protocols. We point out that card-based cryptography can be assumed secure in a rather non-interactive physical model: it suffices to have one protocol executor, who is under surveillance by the other players. For example, when the protocol description specifies that a certain shuffle is to be performed, this step can be implemented by this one player, the executor, who uses envelopes (or helping cards) and completely random shuffles or uniform random cuts in a manner that ensures that not even he himself can keep track of the concrete permutation applied to the cards. (We could also use shuffling machines, such as the wheel-of-fortune-esque device in [46].)

Note that in this surveillance model, where players watch that the protocol is performed correctly, many protocols can be argued secure with almost no interaction. For example, ([21], Protocol 3) is a nice physical zero-knowledge proof system for showing that a Sudoku puzzle has a solution: the verifier chooses one of the three cards in each cell of the Sudoku and assigns it to piles for rows, columns and subgrids, in order to later verify that all numbers are present. In our model, we can plausibly argue that the randomness chosen by the verifier can also be generated directly by the prover himself on an additional deck of helping cards. If he is watched while performing the shuffle in a way that generates high entropy not under his control, he can use this generated randomness to assign the cards to the piles. This is actually a general observation regarding protocols using public coins, where this shuffling produces an output that plays the role of the random oracle output in the Fiat–Shamir heuristic. The possibility of shuffling securely in this way is a common assumption that people make when playing card games with others.

The PFE protocols introduced in this paper thus immediately yield cryptographic obfuscation in this card-based surveillance model: assuming that the encoded protocol is lying on the table as cards, the executor can add cards encoding the inputs and then execute a universal protocol, such as the ones proposed in this paper, with the only interaction being guards that watch out for publicly observable deviations from the protocol.

However, note that because of the very different setting, there are no implications for the usual non-physical (strictly non-interactive) cryptographic world, where general (virtual black-box) obfuscation is impossible, cf. [7].

Universal Protocols and Their Qualities. We implement four different universal card-based protocols with varying degrees of abstraction, based on branching programs, circuits, Turing machines and RAM machines. Our primary focus is on simplicity and elegance of the protocols, but we also consider efficiency in terms of runtime and required cards.

The benefit of providing several solutions is that, depending on the nature of the task, a certain computational model may be particularly suitable. For example, in the generalized dating game described above, using universal circuits is a natural option, while a rule in Skipjack might most naturally be described as a program using loops, and would thus benefit from the possibilities available in Turing machines and RAM machines. For didactic settings, all options are interesting in themselves, as they demonstrate the computational models and the implemented privacy properties in a palpable way.

Contribution.

  • We show how to encode and execute circuits, Turing machines, RAM machines and branching programs with cards and specify protocols for executing these on hidden inputs so that nothing about the machine description (except the length, etc.) or the inputs is leaked. We achieve this using envelopes and only very natural shuffle operations, namely random cuts and \(S_n\)-shuffles (i.e., ordinary shuffling, where all card reorderings are equally likely).

  • Given the weakly interactive nature of card-based cryptography in the “surveillance model” (see above), we thereby obtain what may be called cryptographic obfuscation in a card-based setting.

  • We identify and generalize a primitive that is the basis for many protocols and operations in cards-based cryptography, namely coupled sorting, cf. Sect. 3.

Related Work. Regarding our branching program construction, let us mention that there are several card-based protocols for randomly generating a permutation with specific, prescribed properties. For example, the Secret Santa game asks for random permutations on the player indices (encoding who gives a present to whom) that are fixed-point free, ensuring that nobody receives their own present; this has been implemented with cards in [12, 23]. These works also give protocols for generating permutations whose cycles have a certain minimal length. Moreover, Hashimoto et al. [22] give a protocol for generating permutations with a prespecified cycle structure, and show how to obliviously execute the inverse of a permutation encoded with cards on another card sequence, which is a special case of our sorting operations. In general, we make use of card decks that feature more than just hearts and clubs cards, a line of research pursued in, e.g., [29, 35, 42].

Note that cryptographic obfuscation has been achieved in other models. For example, Goyal et al. [18] make use of tamper-proof hardware tokens (such as smart cards), introduced by Katz [24]. Moreover, [36] shows how to realize many cryptographic primitives (albeit not obfuscation) using scratch-off cards. Their setting is slightly weaker, as they do not gather the players around a table, but use sealed (tamper-evident) envelopes that are sent between the players via mail and are thus out of sight of the other players.

Physical computation is also described in [13] (as “Physical GMW protocol”) to achieve security in the framework of Universal Composability with Local Adversaries (LUC). However, they make very strong assumptions on available “machines”, which we do not need.

Crépeau and Kilian [12] also discuss playing games against a card-encoded (probabilistic) circuit opponent. However, they do not aim to hide this circuit from the player, as it is provided by the player himself.

Recently, Dvorák and Koucký [14] formulated a similar mechanism for executing Turing machines and branching programs using cards, in order to characterize a class of card-based protocols computing functions of a given complexity. This constitutes independent and concurrent work.

Outline. Section 2 gives the necessary preliminaries, including the computational model used in card-based cryptography. Section 3 introduces sorting protocols as a main and versatile building block in card-based cryptography and interprets many results in the field as a single application of such a protocol. We describe concrete protocols for executing universal circuits (Sect. 4), Turing machines (Sect. 5), (word-)RAM machines (Sect. 6) and branching programs (Sect. 7).

Notation (Permutations). For distinct elements \(x_{1},\ldots ,x_k \in X\) the cycle \((x_{1}\;x_{2}\;\ldots \;x_k)\) denotes the cyclic permutation \({\pi }\) with \({\pi }(x_i) = x_{i+1}\) for \(1 \le i < k\), \({\pi }(x_k) = x_{1}\), and \({\pi }(x) = x\) for all \(x \in X\) not occurring in the cycle. For multiple cycles on pairwise disjoint sets, we write them next to one another to denote their composition, e.g., \((1\;2)(3\;4\;5)\) maps \(1 \mapsto 2\), \(2 \mapsto 1\), \(3 \mapsto 4\), \(4 \mapsto 5\), \(5 \mapsto 3\).
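To make the cycle notation concrete, the following Python sketch (the function name `cycles_to_map` is ours) builds the permutation described by a list of disjoint cycles:

```python
def cycles_to_map(cycles, domain):
    """Build a permutation dict from disjoint cycles, e.g. [(1, 2), (3, 4, 5)]."""
    pi = {x: x for x in domain}              # all points are fixed by default
    for cyc in cycles:
        for i, x in enumerate(cyc):
            pi[x] = cyc[(i + 1) % len(cyc)]  # x_i -> x_{i+1}, and x_k -> x_1
    return pi

pi = cycles_to_map([(1, 2), (3, 4, 5)], domain=range(1, 6))
print(pi)  # {1: 2, 2: 1, 3: 4, 4: 5, 5: 3}
```

This reproduces the example \((1\;2)(3\;4\;5)\) mapping \(1 \mapsto 2\), \(2 \mapsto 1\), \(3 \mapsto 4\), \(4 \mapsto 5\), \(5 \mapsto 3\).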

Computational Model of Card-Based Cryptography

Card-based protocols operate on a deck of cards, which is specified by a multiset \(\mathcal {D}\) of symbols, e.g., from \(\{\)♡, ♣\(\}\) or from numbered cards \(\{1,\dots ,n\}\). A protocol uses four operations, namely i) turning over cards to reveal their hidden symbols, ii) deterministically permuting the cards, iii) shuffling the cards in some controlled way to introduce randomness, and iv) terminating and outputting a list of card positions encoding the protocol output. The formal model is given in [39].

While many protocols in the literature use only \(\{\)♡, ♣\(\}\) as a deck alphabet, Niemi and Renvall [42] and Mizuki [35] introduce card-based protocols using the (multi-)set \([{1,\dots ,n}]\) together with an encoding rule where a bit given by two face-down cards is \(0\) if the first card has the smaller value, and \(1\) otherwise.

More formally, a protocol \(\mathcal {P}\) is a quadruple \((\mathcal {D}, U, Q, A)\), where \(\mathcal {D}\) is a deck, \(U\) is a set of input sequences over \(\mathcal {D}\), and \(Q\) is a set of states with an initial state \(q_{0} \in Q\) and a final state \(q_\text {fin} \in Q\). Moreover, we have an action function \(A:(Q {\setminus }\{q_\text {fin}\}) \times {\mathsf {Vis}}^{\mathcal {D}}\rightarrow Q \times \mathsf {Action}\), depending on the current state and the visible sequence (i.e., the sequence of the card symbols, with face-down cards represented by a special back symbol ‘\(?\)’ and face-up cards showing their symbol; the set of visible sequences on deck \(\mathcal {D}\) is denoted by \({\mathsf {Vis}}^{\mathcal {D}}\)), which specifies the next state and an operation on the sequence. These actions, constituting the set \(\mathsf {Action}\), are as follows, performed on a sequence \({\varGamma }=({\varGamma }[1], \dotsc , {\varGamma }[n])\):

  i) \((\mathsf {turn},{T})\), for a set \(T\subseteq \{1, \dotsc , n\}\), flips the cards at the positions specified by the turn set \(T\). Formally, for a card \(c=\frac{a}{b}\) we define \({{\mathrm{\mathsf {swap}}}}(c) {:}{=}\frac{b}{a}\) and transform \({\varGamma }\) into \({\mathsf {turn}}_{T}\left( {\varGamma }\right)\), where \({\mathsf {turn}}_{T}\left( {\varGamma }\right) [i] {:}{=}{{\mathrm{\mathsf {swap}}}}({\varGamma }[i])\) if \(i \in T\), and \({\mathsf {turn}}_{T}\left( {\varGamma }\right) [i] {:}{=}{\varGamma }[i]\), otherwise.

  ii) \((\mathsf {perm},{{\pi }})\), for a permutation \(\pi \in S_{n}\), permutes \({\varGamma }\) according to \({\pi }\), i.e., it yields the sequence \({\pi }({\varGamma }) = ({\varGamma }[{\pi }^{-1}(1)],\ldots ,{\varGamma }[{\pi }^{-1}(n)])\).

  iii) \((\mathsf {shuffle},{\varPi })\), for a permutation set \(\varPi \subseteq S_{n}\), draws a permutation \(\pi \in \varPi\) uniformly at random and obliviously applies it to \({\varGamma }\).

  iv) \((\mathsf {result},{p_{1}, \dotsc , p_r})\), for a list of distinct positions \(p_{1},\ldots ,p_r \in \{1,\ldots , n\}\), halts the protocol and specifies \(O=({\varGamma }[p_{1}],\ldots ,{\varGamma }[p_r])\) as the output.

See [26, 39] for more details. A sequence trace of a finite protocol run is a list \(({\varGamma }_{0}, {\varGamma }_{1}, \dotsc , {\varGamma }_t)\) of sequences such that \({\varGamma }_{0} \in U\) and \({\varGamma }_{i+1}\) arises from \({\varGamma }_i\) by the specified action. The trace that records not the cards themselves, but only what is visible about them, is called the corresponding visible sequence trace.

Card-based protocols are secure if input and output are perfectly hidden, i.e., from the outside the execution of a protocol has the same distribution, regardless of what input and output are.

Definition 2.1

(Security, cf. [30, 31]) Let \(\mathcal {P} = (\mathcal {D},U,Q,A)\) be a protocol. It is (input- and output-)secure if for any random variable I with values in the set of input sequences U, the following holds. A protocol run starting with random initial sequence \({\varGamma }_{0} = I\), and taking random choices for the shuffling actions, terminates almost surely (i.e., with probability 1). Further, if V and O are random variables denoting the visible sequence trace and the output of the run, then the pair \((I, O)\) is stochastically independent of V.

Boolean Circuits. A Boolean circuit with l input variables \(v_{1},\ldots ,v_{l}\) is a directed acyclic graph \(C = (V,E)\). The nodes are called gates and are labeled with \({\vee }\), \({\wedge }\), \({\lnot }\), an input variable, or one of the constants 1 or 0. In the cases of \({\vee }\), \({\wedge }\) and \({\lnot }\), the in-degree must be 2, 2 and 1, respectively; otherwise it is 0. The output node is the unique node with out-degree 0. The depth of C is the maximum number of \({\wedge }\) and \({\vee }\) gates on a path in C.

The value \(C(\vec {v}) \in \{0,1\}\) that a circuit outputs on input \(\vec {v} = (v_{1},\ldots ,v_{l}) \in \{0,1\}^{l}\) is defined in the natural way. For this paper, it is convenient to transform all \({\vee }\)-gates into \({\wedge }\)-gates using de Morgan’s rule \((x {\vee } y) = {\lnot }({\lnot }x {\wedge } {\lnot }y)\). Note that this transformation does not affect the depth of the circuit.
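The de Morgan rewriting can be sketched in a few lines of Python (the tuple-based circuit representation and all names are ours; constant gates are omitted for brevity). The final assertion checks that the rewritten circuit, which uses only \({\wedge }\) and \({\lnot }\) gates, computes the same function:

```python
# Gates as nested tuples: ('var', i), ('not', g), ('and', g, h), ('or', g, h).
def de_morgan(g):
    """Rewrite every x ∨ y as ¬(¬x ∧ ¬y), recursively."""
    if g[0] == 'var':
        return g
    if g[0] == 'not':
        return ('not', de_morgan(g[1]))
    a, b = de_morgan(g[1]), de_morgan(g[2])
    if g[0] == 'and':
        return ('and', a, b)
    return ('not', ('and', ('not', a), ('not', b)))   # the ∨ case

def ev(g, v):
    """Evaluate a gate on the input vector v."""
    if g[0] == 'var':
        return v[g[1]]
    if g[0] == 'not':
        return 1 - ev(g[1], v)
    l, r = ev(g[1], v), ev(g[2], v)
    return l & r if g[0] == 'and' else l | r

c = ('or', ('var', 0), ('and', ('var', 1), ('not', ('var', 0))))
c2 = de_morgan(c)
assert all(ev(c, v) == ev(c2, v) for v in [(0, 0), (0, 1), (1, 0), (1, 1)])
```

Since each rewritten \({\vee }\) contributes exactly one \({\wedge }\) gate (negations do not count towards the depth), the transformation preserves the depth as claimed.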

Group Actions. In Sect. 3, we make use of group actions and their orbits, cf., e.g., ( [15], Sect. 1.3). For the definition, let X be a nonempty set, G a group, and \({\varphi }:G \times X \rightarrow X\) a function, for which we write \(g(x) {:}{=} {\varphi }(g,x)\) for \(g \in G, x \in X\). We say that G acts on X, or that \({\varphi }\) is a group action on X, if

  • \({\mathsf {id}}(x) = x\) for all \(x \in X\), where \({\mathsf {id}}\) denotes the neutral element in G,

  • \((g \circ h) (x) = g (h(x))\) for all \(x \in X\) and all \(g,h \in G\).

Let \(G\) be a group acting on a set \(X\). Then, the orbit of an \(x \in X\) is \(G(x) {:}{=}\{g(x):g \in G\}\), i.e., the set of all elements in \(X\) that are reachable from \(x\) via some \(g \in G\). Note that the orbits \(G(x), G(y)\) of \(x,y \in X\) are either disjoint or equal. Hence, the orbits form a partition of \(X\), called the orbit partition of \(X\) through G. For an application of this to proving lower bounds on the number of cards in card protocols, see [25]. In our setting, \(G=\varPi \subseteq S_n\) is a subgroup of \(S_n\) used in a shuffle and \(X\) is the set of sequences over a deck \(\mathcal {D}\). Then, \(\varPi\) acts on \(X\) by permuting the card sequences \(x \in X\) via \(\pi \in \varPi\), i.e., \({\pi }((x_{1},\ldots ,x_n)) = (x_{{\pi }^{-1}(1)},\ldots ,x_{{\pi }^{-1}(n)})\).
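As a small illustration (all names are ours), the following Python sketch computes the orbit partition of the cyclic group \(\langle (1\;2\;3)\rangle\) acting on all length-3 sequences over the two symbols ♡, ♣:

```python
from itertools import product

def apply(pi, x):   # paper semantics: result[i] = x[pi^{-1}(i+1)-1]
    inv = {v: k for k, v in pi.items()}
    return tuple(x[inv[i + 1] - 1] for i in range(len(x)))

def orbits(Pi, X):
    """Orbit partition of X under the permutation group Pi."""
    seen, parts = set(), []
    for x in X:
        if x not in seen:
            orb = frozenset(apply(pi, x) for pi in Pi)
            parts.append(orb)
            seen |= orb
    return parts

# the cyclic group <(1 2 3)> = {id, (1 2 3), (1 3 2)}
Pi = [{1: 1, 2: 2, 3: 3}, {1: 2, 2: 3, 3: 1}, {1: 3, 2: 1, 3: 2}]
X = list(product('♡♣', repeat=3))
print(sorted(len(o) for o in orbits(Pi, X)))   # [1, 1, 3, 3]
```

The two constant sequences form singleton orbits, while the remaining six sequences split into two orbits of size 3; this orbit structure is exactly what the validity criterion of Sect. 3 is about.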

The Coupled Sorting Sub-protocol

In this section, we introduce our main, versatile building block, namely “sorting protocols”, and later show how to interpret many protocols from the literature as such a protocol. We use the term “coupled” to indicate that the same permutation is applied to multiple card subsequences by forming piles (e.g., to be placed in envelopes) and then permuting them, cf. Fig. 2.

Notation. Let \(\pi \in S_{n}\), \(A=(a_{1}, \dotsc , a_n)\) a sequence of distinct natural numbers and \(B\) a sequence of length \(n\). We define the lift \({\pi }{\uparrow }{A}\) of \({\pi }\) to \(A\) via

$$\begin{aligned} ({\pi }{\uparrow } A)(m) {:}{=}{\left\{ \begin{array}{ll} a_{{\pi }(i)},&{} \text {if } m=a_i \text { for some } i,\\ m,&{} \text {otherwise,} \end{array}\right. } \end{aligned}$$

for \(m\) with \(1 \le m \le \max {\{a_1,\dots ,a_n\}}\). For instance, the permutation \({\pi } = (1\;3)(2\;4) \in S_{4}\) lifted to the sequence \(A = (5,2,7,8)\) yields the permutation \({\pi } {\uparrow } A = (5\;7)(2\;8)\). We define the lift of a permutation to a sequence of same-length sequences \(B = ((b_{1}^1,\ldots ,b_{1}^k),\ldots ,(b_n^1,\ldots ,b_n^k))\) as

$$\begin{aligned} {\pi } {\uparrow } B {:}{=} ({\pi } {\uparrow } (b_{1}^1,\ldots ,b_n^1)) {\circ }\cdots {\circ }({\pi } {\uparrow } (b_{1}^k,\ldots ,b_n^k)). \end{aligned}$$

Note that for each \(i \in \{1,\ldots ,k\}\), the \((b_1^i,\ldots ,b_n^i)\) are again assumed to be distinct. We permit that the \(b_i^j\) are sequences again. In this sense, this definition is recursive. Figure 1 illustrates the simple intuition behind these more complex lifts.
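The lift can be sketched in Python as follows (names are ours, with permutations given as dicts on \(\{1,\ldots,n\}\)); the first assertion reproduces the example above, the second the effect from Fig. 1:

```python
def lift(pi, A):
    """pi↑A as a dict: a_i -> a_{pi(i)}; points outside A stay fixed (omitted)."""
    return {a: A[pi[i + 1] - 1] for i, a in enumerate(A)}

def lift_nested(pi, B):
    """pi↑B for B a sequence of equal-length tuples: compose the columnwise
    lifts (their supports are disjoint, so one dict captures the composition)."""
    m = {}
    for j in range(len(B[0])):
        m.update(lift(pi, tuple(b[j] for b in B)))
    return m

pi = {1: 3, 2: 4, 3: 1, 4: 2}                    # (1 3)(2 4)
assert lift(pi, (5, 2, 7, 8)) == {5: 7, 2: 8, 7: 5, 8: 2}   # i.e., (5 7)(2 8)

rho = {1: 2, 2: 3, 3: 1}                         # (1 2 3), as in Fig. 1
m = lift_nested(rho, ((1, 2, 3, 4), (5, 6, 7, 8), (12, 11, 10, 9)))
assert m[5] == 12                                # the 5th position maps to the 12th
```

The nested lift simply lifts \({\pi }\) to each “column” of B; since these columns are disjoint, the order of composition does not matter.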

Fig. 1

Effect of the permutation \((1\;2\;3) {\uparrow } ((1,2,3,4),(5,6,7,8),(12,11,10,9))\) when applied to a sequence \((1,\dots ,12)\) of cards. The idea is to permute the three card sequences in positions (1, 2, 3, 4), (5, 6, 7, 8) and (12, 11, 10, 9) (all of the same length) cyclically (as in \((1\;2\;3)\)), taking the groups of four cards “as a whole”. To illustrate the possibility of giving the sequences in another order, we reversed the third sequence, with the effect that when (5, 6, 7, 8) is “mapped” to (12, 11, 10, 9), the card at the 5th position is mapped to the 12th position, and so on (as displayed in the figure)

We naturally extend this definition to permutation sets \(\varPi \subseteq S_n\) and, for convenience, a lift to two sequences AB as

$$\begin{aligned} \varPi {\uparrow } A {:}{=} \{ {\pi } {\uparrow } A:{\pi } \in \varPi \},\quad \varPi {\uparrow } A,B {:}{=} \{ ({\pi } {\uparrow } A) {\circ }({\pi } {\uparrow } B):{\pi } \in \varPi \}. \end{aligned}$$

The Family of Sort (Sub-)protocols. For each combination of a group of permutations \(\varPi \subseteq S_n\), a sequence of (card) positions \(A = (a_{1},\ldots ,a_n)\) and another sequence \(B = (b_{1},\ldots ,b_n)\), we define a “protocol” \({{\,\mathrm{\mathsf {sort}}}}_\varPi A {\uparrow } B\). However, to avoid a lengthy and unnecessary technical exposition of sequential compositions of card-based protocols, we use the symbol \({{\,\mathrm{\mathsf {sort}}}}_\varPi A {\uparrow } B\) merely as a shorthand, or syntactic sugar, for the sequence of four actions stated in Protocol 1 (explained below). As this behaves like an inlined function in programming languages, we call it a “(sub-)protocol” in the following.

Note that \(\varPi\), A and B are a public part of the action specification, not inputs. To describe the intended behavior of the shorthand, assume it is executed on a sequence \({\varGamma }\) of cards. Let \(\mathbb {A} {:}{=} {\varGamma }[A] {:}{=}({\varGamma }[a_1], \dots , {\varGamma }[a_n])\) be the sequence of cards in positions A, and \(\mathbb {B} {:}{=}{\varGamma }[B]\) the sequence of cards in positions B. We assume that these card (symbol) sequences \(\mathbb {A}\) and \(\mathbb {B}\) are secret.

[Protocol 1]

Let \({\pi }_{\mathbb {A}} \in \varPi\) be the permutation that sorts \({\mathbb {A}}\), i.e., \({\pi }_{{\mathbb {A}}}({\mathbb {A}})\) is the lexicographical minimum of \(\{{\pi }({\mathbb {A}}) \mid {\pi } \in \varPi \}\) w.r.t. a given order on the deck symbols. The overall effect of \({{\,\mathrm{\mathsf {sort}}}}_\varPi A {\uparrow } B\) should be that \({\pi }_{{\mathbb {A}}}\) is applied to both \(\mathbb {A}\) and \(\mathbb {B}\), yielding a sequence \({\varGamma }'\) with \({\varGamma }'[A] = {\pi }_{\mathbb {A}}(\mathbb {A})\), \({\varGamma }'[B] = {\pi }_{\mathbb {A}}(\mathbb {B})\) and \({\varGamma }'\) equal to \({\varGamma }\) everywhere else. We permit B, and correspondingly \(\mathbb {B}\), to be a sequence of k-element sequences \(B = ((b_{1}^{1}\ldots ,b_{1}^k), \ldots , (b_n^{1},\ldots ,b_n^k))\) for \(k \in {\mathbb {N}}\), in which case applying \({\pi }_{{\mathbb {A}}}\) to \(\mathbb {B}\) means applying \({\pi }_{{\mathbb {A}}}\) to each of the k sequences \(\mathbb {B}_{1} = {\varGamma }[(b_{1}^{1},\dots ,b_n^{1})]\), ..., \(\mathbb {B}_k = {\varGamma }[(b_{1}^k,\dots ,b_n^k)]\).
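A sketch of how \({\pi }_{\mathbb {A}}\) could be computed (by brute force over \(\varPi\); names are ours), reproducing the example of Fig. 2:

```python
from itertools import permutations

def apply(pi, x):   # paper semantics: result[i] = x[pi^{-1}(i+1)-1]
    inv = {v: k for k, v in pi.items()}
    return tuple(x[inv[i + 1] - 1] for i in range(len(x)))

def sorting_perm(Pi, A):
    """The pi in Pi minimizing pi(A) lexicographically
    (unique whenever the validity condition |O| = |Pi| holds)."""
    return min(Pi, key=lambda pi: apply(pi, A))

S4 = [dict(zip(range(1, 5), p)) for p in permutations(range(1, 5))]
pi_A = sorting_perm(S4, (3, 1, 4, 2))
print(pi_A)   # {1: 3, 2: 1, 3: 4, 4: 2}, i.e., the cycle (1 3 4 2) of Fig. 2
```

Applying `pi_A` to the sequence (3, 1, 4, 2) indeed yields the sorted sequence (1, 2, 3, 4).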

Fig. 2

Application of \({{\,\mathrm{\mathsf {sort}}}}_{S_{4}} A {\uparrow } B\) where A denotes the four positions of the red cards and B the four pairs of positions of the blue cards, in canonical ordering. Since the current sequence is \({\mathbb {A}} = (3,1,4,2)\), the permutation \({\pi }_{(3,1,4,2)} = \{1 \mapsto 3,2 \mapsto 1,3 \mapsto 4,4 \mapsto 2\} = (1\;3\;4\;2)\) is applied to A and B, leaving the red cards sorted and the pairs of blue cards permuted by \({\pi }_{(3,1,4,2)}\) as shown. Note that the encoding of the permutation through card sequences is as in Sect. 3.2, and that the revealed sequence (4, 3, 1, 2) is independent of the input sequences and the output sequence. (The different back colors are for illustration and to avoid errors in handling the cards, but are not necessary in theory.)

Implementation of Sort Protocols

An example for a practical implementation is given in Fig. 2 and a formal specification in Protocol 1. The first step applies a randomly chosen permutation \({\tau } \in \varPi\) to A and B. Then, the cards in positions A are turned over, revealing \({\tau }({\mathbb {A}})\) where \({\mathbb {A}}\) is the sequence of cards that was previously in positions A.

This allows us to recognize which permutation \({\pi }_{{\tau }({\mathbb {A}})}\) would sort \({\tau }(\mathbb {A})\) and apply it to the sequences in positions A and B. Clearly, the overall effect is that \(\mathbb {A}\) and \(\mathbb {B}\) have both been permuted by the same permutation \({\pi }_{{\tau }({\mathbb {A}})} {\circ }{\tau }\). Moreover, this permutation sorted the cards in positions A as desired.
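The following Python sketch (all names are ours) simulates the net effect of \({{\,\mathrm{\mathsf {sort}}}}_\varPi A {\uparrow } B\) on the two hidden card sequences: whatever \({\tau }\) is drawn, the same overall sorting permutation ends up applied to both piles.

```python
import random
from itertools import permutations

def apply(pi, x):   # result[i] = x[pi^{-1}(i+1)-1], as in the formal model
    inv = {v: k for k, v in pi.items()}
    return tuple(x[inv[i + 1] - 1] for i in range(len(x)))

def sort_sub(Pi, A_cards, B_cards):
    tau = random.choice(Pi)                            # shuffle: same tau on both piles
    A1, B1 = apply(tau, A_cards), apply(tau, B_cards)
    revealed = A1                                      # turn: cards in positions A
    pi = min(Pi, key=lambda p: apply(p, revealed))     # perm sorting the revealed cards
    return apply(pi, A1), apply(pi, B1), revealed      # perm applied to both piles

S3 = [dict(zip(range(1, 4), p)) for p in permutations(range(1, 4))]
A2, B2, seen = sort_sub(S3, (3, 1, 2), ('x', 'y', 'z'))
print(A2, B2)   # (1, 2, 3) ('y', 'z', 'x') -- independent of the random tau
```

Only `seen`, the uniformly distributed sequence \({\tau }({\mathbb {A}})\), is revealed; the net effect \({\pi }_{{\tau }({\mathbb {A}})} {\circ }{\tau } = {\pi }_{{\mathbb {A}}}\) is deterministic.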

If we only want to reset the sequence in A to a sorted one, i.e., without applying the sorting permutation to the cards at positions B (as in Protocols 11 and 12), we write \({{\,\mathrm{\mathsf {sort}}}}_\varPi {A}.\)

Definition 3.1

Let i be an index/step number of an action (or of an action sequence denoted by a shorthand, if you wish) in an execution of a protocol, and let A be a sequence of card positions of the protocol. Let \({{\,\mathrm{\mathsf {supp}}\,}}(A, i) {:}{=} \{{\varGamma }[A]:{\varGamma }\) is possible when reaching step \(i\}\) be the set of possibilities for \(\mathbb {A}\) when the protocol reaches the action at step i (before executing this step). We say a sub-protocol/shorthand \({{\,\mathrm{\mathsf {sort}}}}_\varPi A {\uparrow } B\) at a step i is valid in a protocol if \({{\,\mathrm{\mathsf {supp}}\,}}(A, i)\) is contained in an orbit O of the group action of \(\varPi\) on sequences, and \(|O| = |\varPi |\).

The rationale behind this definition is that if \({{\,\mathrm{\mathsf {supp}}\,}}(A, i)\) is a subset of an orbit O w.r.t. \(\varPi\), then shuffling \(\mathbb {A}\) with \(\varPi\) destroys all information held in the sequence \(\mathbb {A}\) prior to turning it. Thus, no information is leaked. The condition \(|O| = |\varPi |\) ensures that the permutation \({\pi }_{\mathbb {A}} \in \varPi\) that sorts \(\mathbb {A}\) is uniquely defined.

Note that this slightly involved criterion is necessary to ensure security in the case that the permutation is chosen at random from a proper subset of \(S_n\) (on all n cards of the deck). An important example is a random cut, which we later use to apply a rotation encoded in a sequence. Assume, for instance, \(\varPi = \langle (1\;2\;3)\rangle\) and \(\pi \in \varPi\) uniformly random. Moreover, let X be the six-element set of permutations of (♡,♣,♠), and let \(s \in X\) be arbitrary. Revealing \(\pi (s)\) to be, say, \(\pi (s) =\) (♣,♡,♠) reveals, e.g., that s is not (♡,♣,♠). The reason is that \(\varPi\) has two orbits when acting on sequences of length 3 with symbols ♡,♣,♠, and we learn which orbit we have been in, excluding all sequences of the other orbit. The criterion is also sufficient for achieving security, as shown by the following lemma.
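This leak can be checked mechanically. In the following sketch (names are ours, with the symbols A, B, C standing in for ♡, ♣, ♠), observing one revealed sequence narrows the six possible inputs down to the three sequences of its orbit:

```python
from itertools import permutations

ROTS = [(0, 1, 2), (2, 0, 1), (1, 2, 0)]       # index maps for <(1 2 3)>
def rot(r, s):
    return tuple(s[r[i]] for i in range(3))

X = set(permutations('ABC'))                   # six sequences of 3 distinct symbols
observed = rot(ROTS[1], ('A', 'B', 'C'))       # what the turn step might reveal
consistent = {s for s in X if any(rot(r, s) == observed for r in ROTS)}
print(len(consistent))   # 3, not 6: the reveal identifies the orbit of s
```

With a full shuffle over \(S_3\) instead, all six sequences would remain consistent with any observation.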

Lemma 3.1

If a shorthand/sub-protocol \({{\,\mathrm{\mathsf {sort}}}}_\varPi A {\uparrow } B\) at step i is valid in a protocol, then the sequence revealed in the sub-protocol’s turn step is independent of the random variable \({\varGamma }\) denoting the card sequence before step i and of the random variable \({\varGamma }'\) denoting the sequence directly after the sub-protocol.

Proof

By definition, \({{\,\mathrm{\mathsf {supp}}\,}}(A, i)\) for the sub-protocol \({{\,\mathrm{\mathsf {sort}}}}_\varPi A {\uparrow } B\) at step i is a subset of an orbit O. Whatever the distribution of \(\mathbb {A}\) is, if \({\pi } \in \varPi\) is chosen uniformly at random, then the sequence \(\mathbb {A'} = {\pi }(\mathbb {A})\) revealed in the turn step is uniformly distributed on O. It is thus independent of \({\varGamma }\). Since \({\varGamma }'\) is a function of \({\varGamma }\), we conclude that \(\mathbb {A'}\) is independent of \(({\varGamma },{\varGamma }')\). \(\square\)

Corollary 3.1

If a protocol \(\mathcal {P}\) contains no turn operations outside of valid instances of sort sub-protocols, then \(\mathcal {P}\) is secure.

Encoding Permutations

A sequence \((s_{1},\ldots ,s_n) \in \{1,\ldots ,n\}^n\) of card symbols encodes a permutation \({\pi }\) if \(s_i = {\pi }(i)\) for \(1 \le i \le n\). Let us denote \(\mathcal {D}_5 {:}{=} [{1,2,3,4,5}]\) and \(\mathcal {D}_2 {:}{=}\) [♣, ♡], and give a short example.

Example 3.1

The \(5\)-cycle permutation \({\pi }=(1\;2\;3\;4\;5)\) is represented via \(\mathcal {D}_5\) by \({\varGamma }_{{\pi }} = (2, 3, 4, 5, 1)\). The (self-inverse) transposition \({\tau } = (1\;2)\) is represented via \(\mathcal {D}_2\) as \({\varGamma }_{\tau } =\) (♡, ♣).
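A minimal sketch of this encoding in Python (names are ours), reproducing Example 3.1 for the deck \(\mathcal {D}_5\):

```python
def encode(pi, n):
    """Card sequence (s_1, ..., s_n) with s_i = pi(i)."""
    return tuple(pi[i] for i in range(1, n + 1))

def decode(cards):
    """Recover the permutation dict from its card sequence."""
    return {i + 1: s for i, s in enumerate(cards)}

pi = {1: 2, 2: 3, 3: 4, 4: 5, 5: 1}            # the 5-cycle (1 2 3 4 5)
assert encode(pi, 5) == (2, 3, 4, 5, 1)        # as in Example 3.1
assert decode(encode(pi, 5)) == pi
```

For \(\mathcal {D}_2\), the same scheme with the order ♣ < ♡ encodes the transposition \((1\;2)\) as (♡, ♣).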

Useful Specializations. Two subclasses of sort protocols will be particularly useful. The first will be useful, e.g., to apply an encoded permutation to another sequence of cards, the second to rotate a sequence by a specified offset.

  • Apply a permutation encoded in \(\varvec{A}\) to the sequence in \(\varvec{B}\). Assume that in a protocol, \(\mathbb {A} = {\varGamma }[A]\) is known to always be a permutation of a fixed set M of n distinct cards, say of \(M = \{1,2,\ldots ,n\}\). Then, \({{\,\mathrm{\mathsf {sort}}}}_{S_n} A {\uparrow } B\) is valid at this point i as \({{\,\mathrm{\mathsf {supp}}\,}}(A, i)\) is a subset of all permutations of M, which is an orbit w.r.t. \(\varPi =S_n\). The effect is that the permutation encoded in A is applied to \({\varGamma }[B]\). Whenever \(\varPi = S_n\), we omit \(\varPi\) as an index of \({{\,\mathrm{\mathsf {sort}}}}_\varPi A {\uparrow } B\).

  • Apply a rotation encoded in \(\varvec{A}\) to the sequence in \(\varvec{B}\). Assume that in a protocol \(\mathbb {A} = {\varGamma }[A]\) is known to always be a permutation of a multiset M with \(n-1\) copies of one symbol and one copy of another symbol, say \(M =\) [\((n-1){\cdot }\)♡,♣]. Let ♣ < ♡ by convention. Then, for \(\varPi = \langle (1\,2\,\ldots \,n)\rangle\), a sort sub-protocol \({{\,\mathrm{\mathsf {sort}}}}_{\varPi } A {\uparrow } B\) is clearly valid at this point i, as \({{\,\mathrm{\mathsf {supp}}\,}}(A, i) \subseteq \{\)(♣,♡,\(\ldots\),♡),(♡,♣,♡,\(\ldots\),♡), \(\ldots\), (♡,\(\ldots\),♡,♣)\(\}\) and the latter is an orbit w.r.t. \(\varPi\). The effect is that the rotation encoded in A is applied to \({\varGamma }[B]\). In this case, we also write \({{\,\mathrm{\mathsf {rot}}\,}}A {\uparrow } B\) for \({{\,\mathrm{\mathsf {sort}}}}_\varPi A {\uparrow } B\). (Note that this is similar to a part of the coupled rotation protocols given in [30].)

Note that for \(n=2\), the two cases are the same.
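The net effect of \({{\,\mathrm{\mathsf {rot}}\,}}A {\uparrow } B\) can be sketched as follows (names are ours; this models only the result, not the shuffle-and-turn implementation): the cyclic shift that brings ♣ to the front of A is applied to B as well.

```python
def rot_apply(A_cards, B_cards):
    """Net effect of rot A↑B: the cyclic shift sorting A (♣ first, since
    ♣ < ♡) is applied to B as well."""
    j = A_cards.index('♣')               # 0-based position of ♣ encodes the offset
    n = len(A_cards)
    shift = lambda s: tuple(s[(i + j) % n] for i in range(n))  # rotate left by j
    return shift(A_cards), shift(B_cards)

A2, B2 = rot_apply(('♡', '♡', '♣', '♡'), ('a', 'b', 'c', 'd'))
print(A2, B2)   # ('♣', '♡', '♡', '♡') ('c', 'd', 'a', 'b')
```

Here A encodes the rotation by the position of the single ♣ card; the same leftward rotation is obliviously transferred to B.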

Non-destructive Variant \({{\,\mathrm{\mathsf {sort}}}}{}^*\). We define a variation \({{\,\mathrm{\mathsf {sort}}}}^*\) of \({{\,\mathrm{\mathsf {sort}}}}\) that differs only insofar as it makes no net change to the cards in positions A. For this, a sequence of helping cards is assumed to be available in (otherwise unused) positions \(H = (h_{1},\ldots ,h_n)\). We implement \({{\,\mathrm{\mathsf {sort}}}}^*\) in Protocol 2 by two applications of \({{\,\mathrm{\mathsf {sort}}}}\), where the latter restores \(\mathbb {A}\) from the helping “register”.

We say an application of \({{\,\mathrm{\mathsf {sort}}}}^*\) is valid whenever an application of \({{\,\mathrm{\mathsf {sort}}}}\) would be valid and \(\mathbb {H} {:}{=} {\varGamma }[H] = (1,\ldots ,n)\) is guaranteed, i.e., H contains cards with numbers in ascending order. Note that \({{\,\mathrm{\mathsf {sort}}}}^*\) is defined as a shorthand or syntactic sugar via Protocol 2 in the same way as \({{\,\mathrm{\mathsf {sort}}}}\).

It is easy to see that under these conditions, if \({\pi }\) is applied to the cards in positions A and H in the first sorting step, then \({\pi }^{-1}\) is applied to the cards in positions A and H in the second sorting step, as this is the unique permutation that sorts the cards in positions H. Thus, one complete valid application of \({{\,\mathrm{\mathsf {sort}}}}^*\) makes no net changes to A and H. It is also easy to check that both applications of \({{\,\mathrm{\mathsf {sort}}}}\) are valid in the original sense, therefore, Lemma 3.1 and Corollary 3.1 extend naturally to \({{\,\mathrm{\mathsf {sort}}}}^*\). We use \({{\,\mathrm{\mathsf {rot}}}}^*\) for the variant using cyclic rotations.
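The claimed net effect can be checked with a small sketch (our own illustration, with card values abstracted to comparable labels): A and the helper register H are restored, while B is permuted exactly as a single \({{\,\mathrm{\mathsf {sort}}}}\;A {\uparrow } B\) would permute it.

```python
def sort_star(A, H, B):
    """Net effect of sort*: first sort A, dragging H and B along; then
    sort H (which holds 1..n), dragging A along. The second step applies
    the inverse permutation, so A and H end up unchanged."""
    idx = sorted(range(len(A)), key=lambda i: A[i])    # permutation sorting A
    A, H, B = [A[i] for i in idx], [H[i] for i in idx], [B[i] for i in idx]
    idx2 = sorted(range(len(H)), key=lambda i: H[i])   # the inverse permutation
    A, H = [A[i] for i in idx2], [H[i] for i in idx2]
    return A, H, B
```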

[Protocol 2: \({{\,\mathrm{\mathsf {sort}}}}^*\), implemented by two applications of \({{\,\mathrm{\mathsf {sort}}}}\)]

Stating Classical Protocols in Terms of \({{\,\mathrm{\mathsf {sort}}}}\)

The standard and, or, xor and copy protocols due to Mizuki and Sone [37] can all be stated as a single application of our \({{\,\mathrm{\mathsf {sort}}}}\) sub-protocol, as shown in Protocols 3 to 6 in Fig. 3. We also provide a permutation application protocol that takes the encoding of a permutation and a sequence as input and outputs the permuted sequence. This is in essence the permutation division protocol by Hashimoto et al. [22] (the only change being that we encode the inverse permutation). It has been suggested to us that more complex protocols, such as zero-knowledge protocols for Sudoku [44] and Makaro [6], as well as for the Millionaire’s problem [34], can be interpreted to implicitly utilize our sort protocol. Moreover, the eight-card AND protocol for standard decks (where all card symbols are distinct) from [35] and the eight-card 3-bit majority protocol of [40] can be implemented using two sorts; the latter is given in Protocol 8.
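As an illustration, here is one way to read the XOR protocol as a single sort (a sketch of the net effect only, assuming the usual encoding ♣♡ \(\mathrel {\hat{=}} 0\), ♡♣ \(\mathrel {\hat{=}} 1\); the actual protocol hides whether the swap happened):

```python
def xor_via_sort(A, B):
    """sort A↑B with Π = ⟨(1 2)⟩: sorting A's two cards (♣ < ♡) either
    swaps both pairs or neither; afterwards B encodes a XOR b and A is
    reset to the known value 0."""
    if A == ['♡', '♣']:              # A encodes 1, so sorting swaps
        A, B = A[::-1], B[::-1]
    return A, B
```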

Fig. 3: The classical protocols and, or, xor and copy as well as a permutation application protocol, all stated as sort protocols

Securely Evaluating a Universal Circuit

Let us start with the most direct case, namely implementing PFE using universal circuits, first constructed by Valiant [45]. We do not want to go into the details of the construction and just import facts about the general structure of the circuit and how it is used. In our examples, Alice provides her private function, here as a circuit \(C\), and Bob his private input to the function, and it should hold that neither party learns anything about the other’s respective secrets. The universal circuit \(U_n\) for circuits of size \(n\) takes as input an encoding \(\langle C \rangle\) of \(C\), where \(C\) has size \(n\), and an input \(I \in \{0,1\}^l\) of length \(l\). We assume \(C\) to have fan-out and fan-in at most \(2\), i.e., each gate has at most two inputs and at most two outputs.

In the constructions by Valiant, \(U_n\) is described via a directed acyclic graph with \(O(n\log {n})\) vertices, where each vertex represents a logic gate taking values on its incoming edges as well as certain “configuration” (or programming) bits as input and computes outputs emitted to its outgoing edges. More concretely, \(U_n\) contains the following types of nodes:

  • \(n\) universal gates with in- and out-degree exactly two, and four configuration bits \(c_{1}, \dotsc , c_{4}\), that compute

    $$\begin{aligned} \mathsf {ug}(c_{1}, c_{2}, c_{3}, c_{4}, x, y) = (z, z), \text { where }z=c_{1}\bar{x}\bar{y} + c_{2}\bar{x}y + c_{3}x\bar{y} + c_4xy\, \end{aligned}$$

where \(c_{1}, \dotsc , c_{4}\) determine the Boolean operation performed at this gate, e.g., AND corresponds to \((c_{1}, \dotsc , c_{4}) = (0,0,0,1)\).

  • \(O(n\log {n})\) X-switches with a configuration bit \(c\) and in- and out-degree two, that compute

    $$\begin{aligned} \mathsf {x}(c, a_{0}, a_{1}) = (a_c, a_{1-c}), \end{aligned}$$

where \(a_c\) is forwarded on one outgoing edge and \(a_{1-c}\) on the other.

  • \(O(n)\) Y-switches computing

    $$\begin{aligned} \mathsf {y}(c, a_{0}, a_{1}) = a_c, \end{aligned}$$

where Alice’s configuration bit \(c\) decides which of the two inputs is forwarded as the output.

  • \(O(n)\) forks (or “\({\lambda }\)-switches”) where the signal on one wire is forwarded to both outgoing wires, i.e., \({\lambda }(a) = (a,a)\).

  • \(l\) input nodes with out-degree 1 and in-degree 0, and one output node with in-degree 1 and out-degree 0 with their natural interpretation.

The universal gates correspond to the gates of Alice’s circuit with the configuration bits determining what kind of gate it is, and the configuration of \(X\) and \(Y\)-switches ensures that the intermediate results are routed correctly to the relevant gates. For us, it suffices that there is an (efficient) way to obtain \(\langle C \rangle\) from \(C\), which Alice applies beforehand. Valiant [45] describes such a general mapping from circuits \(C\) to a string of \(O(n\log {n})\) configuration bits for \(U_n\), such that \(U_n\) configured with \(\langle C \rangle\) (in canonical order) implements \(C\).
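Ignoring the card encoding, the node semantics listed above can be written down directly (a plain reference implementation; the function names are ours):

```python
def ug(c1, c2, c3, c4, x, y):
    """Universal gate: the configuration bits are the truth table of z(x, y)."""
    z = (c1, c2, c3, c4)[2 * x + y]
    return (z, z)

def x_switch(c, a0, a1):
    """Forward the inputs straight (c = 0) or crossed (c = 1)."""
    return ((a0, a1), (a1, a0))[c]

def y_switch(c, a0, a1):
    """Forward the input selected by the configuration bit c."""
    return (a0, a1)[c]

def fork(a):
    """λ-switch: duplicate the signal onto both outgoing wires."""
    return (a, a)
```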

We describe in Protocol 9 and Theorem 4.1 how, given \(U_n\), encodings of \(\langle C \rangle\) and Bob’s input \(I\) in sequences of cards, we can compute \(C(I)\) securely.

Theorem 4.1

For any \(l,n \ge 1\), there exists a secure card-based protocol \(\mathcal {P}\) with the following properties:

  1. (i)

    The input sequences are all sequences (VP) where

    • V encodes the values of l Boolean variables \((v_{1},\ldots ,v_{l}) \in \{0,1\}^{l}\) using the deck \(l{\cdot }\)[♣,♡].

    • P encodes a circuit C of size \(n\), via \(k=O(n\log {n})\) programming bits, i.e., via deck \(k\cdot\) [♣, ♡].

  2. (ii)

    The output is two cards encoding \(C(v_{1},\ldots ,v_{l})\).

  3. (iii)

    In addition to the input cards, we use the helping deck \((m+1)\cdot\) [♣, ♡], where \(m=O(n)\) is the number of forks in \(U_n\). (The additional pair is used for the \({{\,\mathrm{\mathsf {sort}}}}^*\) command.)

  4. (iv)

    The protocol uses \(x_n + y_n + 2f_n + 3u_n\) shuffles, where \(x_n\) is the number of X-switches, \(y_n\) is the number of Y-switches, \(f_n\) is the number of forks and \(u_n\) is the number of universal gates in \(U_n\).

Proof

\(\mathcal {P}\) is given as Protocol 9. All nodes of \(U_n\) are considered in some topological order \(s_{1},\ldots ,s_N\), allowing us to compute the bits “flowing” along each edge of \(U_n\) in a systematic way. The message at an edge e is stored in positions \(V_e = (V_{e}[0], V_{e}[1])\). Note that the bit on each edge is only used in one subsequent computation: After processing \(s_i\), only the bits on the edges crossing the cut \((\{s_{1},\ldots ,s_i\}, \{s_{i+1},\ldots ,s_N\})\) are needed in future computations. When processing \(s_{i+1}\) we may, therefore, when storing the bits for the outgoing edges of \(s_{i+1}\), reuse the now freed up cards that stored the bits on the incoming edges of \(s_{i+1}\). In Protocol 9, this is reflected by identifying \(V_e\) and \(V_{e'}\) for some pairs \((e,e')\) of edges. We only need a new pair of cards in the case of a fork.

To verify correctness, let us interpret the main sort commands in the protocol.

  1. 1.

    In the \(X\)-switch case, \({{\,\mathrm{\mathsf {sort}}}}\;C_v {\uparrow } (V_e, V_f)\) swaps the positions encoding the incoming input values at edges \(e\) and \(f\), if the configuration bit of the \(X\)-switch equals \(1\) and leaves them unchanged, if it equals \(0\). This is exactly what we wanted.

  2. 2.

    In the \(Y\)-switch case, the command is exactly the same, with the difference that only the output bit that ends up in the first position (\(V_e\)) is used afterwards.

  3. 3.

    In the fork case, we (non-destructively, i.e., with restoring) copy the bit to another position, used as an additional output wire value.

  4. 4.

    The universal gate case is the most interesting. Recall that we want to evaluate \(\mathsf {ug}(c_{1}, c_{2}, c_{3}, c_{4}, x, y) = (z, z) \text { with }z= c_{1}\bar{x}\bar{y} + c_{2}\bar{x}y + c_{3}x\bar{y} + c_4xy\). For this, first observe that exactly one of the terms \(\bar{x}\bar{y}\), \(\bar{x}y\), \(x\bar{y}\), \(xy\) equals one. Essentially, the values of x and y select which configuration bit constitutes the output. If \(x=0\) then only \(c_{1}\) and \(c_{2}\) are relevant. If \(x=1\) only \(c_{3}\) and \(c_{4}\) are. Therefore, in the first sorting step, we obliviously swap \((C_{1}, C_{2})\) for \((C_3, C_4)\) if \(x=1\) and leave things as is, if \(x=0\). The interesting two configuration bits end up in positions \(C_{1}, C_{2}\), without us knowing which they are.

    Now, we do the same with \(C_{1}, C_{2}\), based on the value of \(y\), so that the only relevant configuration bit is now in \(C_{1}\). In the last step, we write this value in both \(V_g\) and \(V_h\) (recall the fan-out two requirement).

To see that \(\mathcal {P}\) is secure, we use Corollary 3.1 and the fact that no turn operations are performed outside of sorting steps. \(\square\)
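The selection performed in the universal-gate case (steps 1 and 2 above) can be summarized as follows (a sketch of the net effect of the two oblivious swaps, not the card protocol itself):

```python
def ug_select(c, x, y):
    """Select the relevant configuration bit by two conditional swaps:
    first (C1, C2) <-> (C3, C4) if x = 1, then C1 <-> C2 if y = 1.
    Afterwards the first position holds z = c1·x̄ȳ + c2·x̄y + c3·xȳ + c4·xy."""
    c = list(c)                     # work on a copy of (c1, c2, c3, c4)
    if x:
        c[0], c[1], c[2], c[3] = c[2], c[3], c[0], c[1]
    if y:
        c[0], c[1] = c[1], c[0]
    return c[0]
```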

Depending on the topological ordering used in Protocol 9, the helping deck we use to implement forks is not fully required. Instead of using a “fresh” pair of cards to store a copy of the incoming value whenever a fork is encountered, we can reuse cards that have already served their function and will not be used in the remainder of the protocol. This includes, for instance, the cards that encoded configuration bits of universal gates or X-switches that have already been processed.

Remark 4.1

(Reusability of the Circuit) If the circuit is to be executed multiple times, the programming bits of Alice’s program must not be destroyed during the execution. Here, we have to take a little care to ensure that the relevant bits are written back and that conditionally swapped cards are “unswapped” again. For this variant of our algorithm, we replace all sort operations in Protocol 9 by their starred variants. In the case of v being a universal gate, extra care is needed: in the penultimate line of the case, instead of reusing \(V_e\) and \(V_f\) (which are now in temporary use to swap back the relative positions of the cards containing the configuration bits), we set \(V_g\) and \(V_h\) as the positions of two new cards, containing ♣♡ as in the fork case. To undo the swaps, we perform \({{\,\mathrm{\mathsf {sort}}}}\;V_f {\uparrow } (C_{1}, C_{2})\) and then \({{\,\mathrm{\mathsf {sort}}}}\; V_e {\uparrow } ((C_{1}, C_{2}), (C_{3}, C_{4}))\) at the very end of the procedure in the universal gate case. Afterwards, the cards in \(V_e\) and \(V_f\) may be reused. Hence, this variant uses \(2x_n + 2y_n + 2f_n + 8u_n\) shuffles, where \(x_n\) is the number of X-switches, \(y_n\) the number of Y-switches, \(f_n\) the number of forks and \(u_n\) the number of universal gates in \(U_n\).

[Protocol 9: secure evaluation of the universal circuit \(U_n\)]

Securely Simulating a Turing Machine

Assume we wish to execute a Turing machine (TM) with a secret encoding provided by one player, Alice, on a secret input provided by another player, Bob. As any secure card protocol uses a fixed number of cards and has a runtime which is independent of the input, there must be known bounds on certain parameters of the Turing machine. Let M be a bound on the number of states, N a bound on the number of accessed tape cells and t a bound on the execution time. For simplicity, assume Alice’s TM has precisely M states (it can be padded with dummy states), runs t steps (“halting” can be achieved by staying in one state, writing the current tape symbol and not moving) and think of the tape as a cycle of length N (which makes no difference for a TM only ever accessing N memory cells).

Fig. 4: Overview of a run of the universal TM

All cards (and names for them occurring in the following description) used for our protocol, with the exception of a few helping cards used for \({{\,\mathrm{\mathsf {sort}}}}^*\) and \({{\,\mathrm{\mathsf {rot}}}}^*\) operations, are given in Fig. 4. The encoding of a Turing machine consists of the encoding of its M states. The encoding of each state \(q \in \{0,\ldots ,M-1\}\) consists of the encoding of two transitions, one for each of the two tape symbols ♡♣ and ♣♡. Take for instance the positions \(\textsc {w}_{0}, \textsc {shift}_{0} = (\textsc {l},\textsc {n},\textsc {r})\) and \(\textsc {q}'_{0}\) encoding the transition from state \(q = 0\) if the tape symbol is ♣♡. The two cards in positions \(\textsc {w}_{0}\) contain the tape symbol to be written. The three cards in positions \(\textsc {shift}_{0}\) specify the movement of the Turing machine head, ♣♡♡ for “left”, ♡♡♣ for “right”, ♡♣♡ for “no movement” / “halt”. Lastly, the M cards in positions \(\textsc {q}'_{0}\) contain a unary encoding of \(q-q' \pmod M\) where \(q' \in \{0,\ldots ,M-1\}\) is the index of the state to be entered next (♣♡\(\ldots\)♡ encodes 0, ♡♣♡\(\ldots\)♡ encodes 1, etc.).

The input to the TM, provided by Bob, is encoded in the first l bits of the tape. When executing the Turing machine, the current tape cell will always be in position \(\textsc {tape}[0]\) and the current state in position \(\textsc {q}[0]\). Instead of having an explicit moving head, we simply rotate the entire tape. Moreover, instead of having an explicit value encoding the current state, we rotate the sequence of states. This is also the reason we encode state index differences in the state transitions instead of absolute indices. The protocol is given as Protocol 10 and consists of a loop that performs the following t times:

  • “read” the tape symbol in position \(\textsc {tape}[0]\) by conditionally swapping the two transitions in state \(\textsc {q}[0]\) such that the transition that should be done is available in the positions \(\textsc {w}_{0}\), \(\textsc {shift}_{0}\) and \(\textsc {q}_{0}'\). To undo this operation later, the value of \(\textsc {tape}[0]\) is also stored temporarily in \((\textsc {sav}[0],\textsc {sav}[1])\).

  • the content of \(\textsc {tape}[0]\), which was reset to 0 in the previous step, is now overwritten with the symbol in position \(\textsc {w}_{0}\).

  • The cards in positions \((\textsc {l},\textsc {n},\textsc {r})\) are used to rotate the ♣ of \(\textsc {rot}[0]\) into the positions \(\textsc {rot}[0]\), \(\textsc {rot}[1]\) or \(\textsc {rot}[N{-}1]\) depending on whether the ♣-card among \(\textsc {shift}_{0}\) is in position \(\textsc {n}\), \(\textsc {r}\) or \(\textsc {l}\), respectively. Then, the \(\textsc {tape}\) and \(\textsc {rot}\) cards are rotated together such that the tape cell whose corresponding \(\textsc {rot}\) card is ♣ comes to rest in position \(\textsc {tape}[0]\) (and such that one does not learn which rotation has been performed.)

  • The same idea is used to first copy the information about the next state into \(\textsc {next}[0\ldots M{-}1]\) and then rotate the sequence of all states accordingly. Note that we need to undo the conditional swap of the two transitions in \(\textsc {q}[0]\) before the rotation of the states (using a coupled sorting with \((\textsc {sav}[0],\textsc {sav}[1])\)).
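Abstracting from the cards, one iteration of the loop has the following net effect (a sketch; we store the write symbol, the move direction and the state-rotation amount d directly instead of their card encodings):

```python
def tm_step(tape, states):
    """One step of the universal TM: tape[0] is the current cell and
    states[0] the current state; head movement and entering the next
    state are both realized as rotations of the whole sequence."""
    write, move, d = states[0][tape[0]]       # transition for the read symbol
    tape[0] = write
    if move == 'R':                           # next cell must come to tape[0]
        tape[:] = tape[1:] + tape[:1]
    elif move == 'L':
        tape[:] = tape[-1:] + tape[:-1]
    states[:] = states[d:] + states[:d]       # next state must come to states[0]
```

For example, a one-state machine that flips every bit while moving right leaves the tape complemented after one full pass over the cyclic tape.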

[Protocol 10: secure simulation of the Turing machine]

Using this protocol idea, we obtain the following theorem.

Theorem 5.1

For any \(l\ge 0,N,M,t \ge 1\), there exists a secure card-based protocol \(\mathcal {P}\) with the following properties:

  1. (i)

    The input sequences are all sequences (VP) where

    • V encodes the values of l Boolean variables \((v_{1},\ldots ,v_{l}) \in \{0,1\}^{l}\) using the deck \(l{\cdot }\) [♣,♡].

    • P encodes a Turing machine T with a state set of size \(M\), using the deck \(2M{\cdot }\) [\(3{\cdot }\)♣, \((M+2){\cdot }\)♡].

  2. (ii)

    The output is a sequence of cards encoding the output of T after running t steps on a cyclic tape of length N initially containing the input \((v_{1},\ldots ,v_{l})\).

  3. (iii)

    In addition to the cards encoding the inputs, the helping deck [\((N-l+3){\cdot }\) ♣, \((M+2N-l-1){\cdot }\) ♡] \(\cup\) [ ♣, \(\min \{2, M-1\}\cdot\) ♡] is used. (The latter part is implicit in the use of the starred \({{\,\mathrm{\mathsf {rot}}}}^*\) commands and not shown in Fig. 4.)

  4. (iv)

    The protocol uses 10t shuffles.

Proof

The protocol is given in Protocol 10 and Fig. 4. For security, observe that the protocol consists only of sort sub-protocols; we can thus use Corollary 3.1.

For the cards needed, we just count the number of cards depicted in Fig. 4. In a bit more detail, for the helping cards needed, note that we need \(N-l\) pairs of ♣♡ for the empty tape cells, which are placed next to Bob’s input string. We have one ♣ for each of the registers \(\textsc {rot}\), \(\textsc {sav}\) and \(\textsc {next}\), and \(N-1\), 1 and \(M-1\) ♡-cards, respectively. The second part of the union scales with the size of the largest register to be used in starred commands, which is either \(\textsc {shift}_0\) or \(\textsc {q}_{0}'\). \(\square\)

Remark 5.1

(Variants to the Implementation) Using techniques presented in Sect. 6, we could use a binary instead of a unary encoding of state indices in the encoding of transitions. This would reduce the number of required cards from \(O(N + M^{2})\) to \(O(N+M\log (M))\). However, given that the charm of Turing machines is their simplicity rather than their efficiency, we felt that we should reserve this trick for later.

For simplicity, we also chose to describe how to implement TMs with tape alphabet \(\{0,1\}\), excluding the special blank symbol ␣. While one can generically map this to the standard case by using an encoding \(1 \mathrel {\hat{=}} 11\), \(0 \mathrel {\hat{=}} 10\), and ␣ \(\mathrel {\hat{=}} 00\), let us briefly discuss how one can easily upgrade our implementation to a TM supporting an additional blank symbol. For this, we encode tape cells with three cards via ♣♡♣ \(\mathrel {\hat{=}} 0\), ♡♣♣ \(\mathrel {\hat{=}} 1\) and ♣♣♡ \(\mathrel {\hat{=}}\) ␣. In this way, the first two cards encode the value as previously, unless they are ♣♣, which indicates a blank. We then need to add \(\textsc {w}_{2}\), \(\textsc {shift}_{2}\) and \(\textsc {q}_{2}'\) to each of the \(\textsc {q}\)s, specifying the operation in the case that a blank symbol is read. (Note that the \(\textsc {w}_{i}\) contain the symbol to be written in reversed order, to ensure the right action is done to the tape cards.) This approach has the advantage of allowing us to learn the length of the output after the computation (if it is not to be protected), by just turning over the third card in each of the tape cells and outputting (the first two cards of) those cells which do not show a ♡, i.e., which are not blank.

Remark 5.2

(Reusability of the TM) First note that we never destroy any of the state description entries of the TM’s code, as in normal execution it is always possible to enter the state again. Hence, to be able to run a TM multiple times, we only need to ensure that after the execution the first state is again in \(\textsc {q}[0]\). As we cannot trust Alice to provide a program that guarantees this behavior, we can introduce an additional register \(\textsc {start}[0\ldots M{-}1]\) which is a copy of \(\textsc {next}\) and is rotated together with \(\textsc {q}\). It can then be used to rotate \(\textsc {q}\) back into its initial configuration by executing \({{\,\mathrm{\mathsf {rot}}\,}}\textsc {start}{\uparrow } \textsc {q}\) after the loop in Protocol 10. Hence, this variant uses \(10t + 1\) shuffles. (Resetting all tape cells to 0 and placing the new input is excluded here, but can be easily appended.)

Securely Simulating a Random Access Machine (RAM)

We now describe a simple bounded Random Access Machine model. The goal is to execute a RAM machine with a secret encoding of the machine specified by one player, Alice, on a secret input provided by another player, Bob.

A Simple RAM Model

We assume fixed constants \(N = 2^n\) (memory words), \(M = 2^m\) (instruction groups), \(l \le N\) (input size) and \(t < \infty\) (time limit). The machine has access to N binary words \(\textsc {ram}[0],\ldots ,\textsc {ram}[N-1]\) of length n each, the first l of which contain the input and the remaining \(N-l\) contain zero. The following types of instructions are available, where x, y are n-bit words and p is an m-bit word:

$$\begin{aligned} \begin{array}{rl} \mathbf {Load\,a\,Constant.} &{} \textsc {ram}[x] {\leftarrow } y\\ \mathbf {Copy.} &{} \textsc {ram}[x] {\leftarrow } \textsc {ram}[y]\\ \mathbf {Indirect\,Read.} &{} \textsc {ram}[x] {\leftarrow } \textsc {ram}[\textsc {ram}[y]]\\ \mathbf {Indirect\,Write.} &{} \textsc {ram}[\textsc {ram}[x]] {\leftarrow } \textsc {ram}[y]\\ \mathbf {Addition.} &{} \textsc {ram}[x] {\leftarrow } \textsc {ram}[x] + \textsc {ram}[y]\\ \mathbf {Subtraction.} &{} \textsc {ram}[x] {\leftarrow } \textsc {ram}[x] - \textsc {ram}[y]\\ \mathbf {Conditional\,Jump.} &{} {{\,\mathrm{\mathsf {jnz}}\,}}\textsc {ram}[x]\ p \end{array} \end{aligned}$$

To simplify the implementation step later, we assume that a program is a sequence \(\textsc {i}[0],\ldots ,\textsc {i}[M-1]\) of groups of instructions. Each group of instructions contains precisely one instruction of each of the above types, in canonical order. Note that this fixed instruction order does not affect the strength of the model. Indeed, if we assume that without loss of generality the cell \(\textsc {ram}[0]\) is never used in any “real” instruction, we may choose \(x = y = 0\) to turn any instruction into a dummy instruction that has no effect. By turning all but one desired instruction in each instruction group into such a dummy instruction, we can implement programs without having to worry about the fixed instruction order, at the expense of increasing the number of instructions by a constant factor.

Here, the \({{\,\mathrm{\mathsf {jnz}}\,}}\textsc {ram}[x]\ p\) (“jump if not zero”) instruction means that if \(\textsc {ram}[x]\) contains zero, the execution should continue with the next instruction group. Otherwise, p is to be interpreted as the relative offset to the next instruction group that should be executed, i.e., if the current instruction group has index j, then the instruction group with index \((j + p) \bmod M\) should be executed next.
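The model can be pinned down with a short reference interpreter (plain Python semantics, not the card implementation; the word width n and the instruction mnemonics are our own illustrative choices):

```python
def run_ram(groups, ram, t, n=4):
    """Reference semantics of the RAM model: each group holds one
    instruction of each type in canonical order; jnz, the last one,
    selects the next group. Arithmetic is on n-bit words."""
    M, mask, j = len(groups), (1 << n) - 1, 0
    for _ in range(t):
        for op, x, y in groups[j]:
            if   op == 'const':  ram[x] = y & mask
            elif op == 'copy':   ram[x] = ram[y]
            elif op == 'iread':  ram[x] = ram[ram[y]]
            elif op == 'iwrite': ram[ram[x]] = ram[y]
            elif op == 'add':    ram[x] = (ram[x] + ram[y]) & mask
            elif op == 'sub':    ram[x] = (ram[x] - ram[y]) & mask
            elif op == 'jnz':    j = (j + (y if ram[x] else 1)) % M
    return ram
```

Note how choosing \(x = y = 0\) makes every instruction a no-op, as described above, since \(\textsc {ram}[0]\) stays zero.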

Implementation with Cards

Assume we want a secure implementation of the RAM model with parameters \(N = 2^n\), \(M=2^m\), l, t using playing cards. We may imagine that one player, Alice, provides the sequence of instructions, and the other player, Bob, provides the input in \(\textsc {ram}[0\ldots l{-}1]\) of \(l{\cdot }n\) bits. As usual, each bit is encoded with a pair of cards and a word of n or m bits is a sequence of n or m such pairs. In addition to the inputs, we have an encoding of \(\textsc {ram}[l\ldots N{-}1]\) (initially zero) and two additional n-bit “accumulators” A and \(A'\) (initially zero). Finally, there are ♡-cards in (“instruction pointer”) positions labeled \(\textsc {ip}[1],\ldots ,\textsc {ip}[M-1],\textsc {ip}^*\) and one ♣-card in the position labeled \(\textsc {ip}[0]\), which will be used for the conditional jumps. An overview is given in Fig. 5.

Fig. 5: Overview of our RAM machine construction, cf. Protocol 13

We say a few words about the implementation of the instructions, starting with a general description of how words can be loaded from and stored to arbitrary addresses.

  • Loading a Word. Assume that an address is available as an n-bit word \(x = (x_{1},\ldots ,x_n)\), each bit \(x_i\) encoded as a pair of face-down cards in positions \(X_i = (X_i[0],X_i[1])\), and that the word \(\textsc {ram}[x]\) should be loaded into the accumulator. We give an implementation as Protocol 11. The first loop uses n conditional swaps of RAM ranges to transport the content of \(\textsc {ram}[x]\) into \(\textsc {ram}[0]\). The invariant is that after the i-th loop, the content of \(\textsc {ram}[x]\) has been transported to \(\textsc {ram}[x \mathop { \& } (2^{n-i} - 1)]\), where & denotes the bitwise AND. For instance, if \(n = 4\) and \(x = 10 = (1010)_{2}\), then in round \(i = 1\) the left half \(\textsc {ram}[0\ldots 7]\) and right half \(\textsc {ram}[8\ldots 15]\) of the memory would be swapped and in round \(i = 3\) the ranges \(\textsc {ram}[0,1]\) and \(\textsc {ram}[2,3]\) would be swapped, in total transporting \(\textsc {ram}[10]\) via \(\textsc {ram}[2]\) to \(\textsc {ram}[0]\).

  • The second for-loop copies the content of \(\textsc {ram}[0]\) to the accumulator. Since the copy protocol can copy information only onto card pairs that are in a known state, we must securely reset the accumulator bits before each copy operation. The third for-loop undoes all swaps of the first loop, in reverse order. In total, this uses 7n shuffles.

    [Protocol 11: loading a word]
  • Storing a Word. Storing is very similar to loading; we give an implementation in Protocol 12. Here, instead of copying the RAM content to the accumulator in the second line of the second for-loop, we copy the value of the accumulator into the RAM. As above, this uses 7n shuffles.

    [Protocol 12: storing a word]
  • Move operations. The operations previously dubbed copy, indirect read and indirect write are easy to implement using the load and store algorithms. For temporary storage, the accumulator \(A'\) is used. For instance, the indirect write operation \(\textsc {ram}[\textsc {ram}[x]] {\leftarrow } \textsc {ram}[y]\) with the words x and y encoded in positions X and Y can be implemented using \(\mathsf {load}(Y)\), \(\mathsf {swap}(A,A')\), \(\mathsf {load}(X)\), \(\mathsf {swap}(A,A')\), \(\mathsf {store}(A')\), where \(\mathsf {swap}\) just swaps the two card sequences. As each load or store operation uses 7n shuffles, we use 14n shuffles for copy, and 21n shuffles for indirect read and indirect write.

  • Loading Constants. Copying a value given directly in the instruction is simply done by copying each of the n bits one by one. This uses 7n shuffles.

  • Addition and Subtraction. Secure half and full adders have been described by [33]. If \(n \ge 2\), the accumulator \(A'\) is sufficient to store carry-bits temporarily. Note that both protocols use 5n shuffles (more precisely, random bisection cuts), as subtraction uses the full adder with the carry bit set to 1 and all bits of the second number inverted (via a simple perm operation). We omit the details.

  • Conditional Jump. While it would be possible to have an instruction pointer that is affected by jump operations, we opt for an approach that seems slightly more elegant. We always execute instruction group \(\textsc {i}[0]\), and when executing the last instruction \({{\,\mathrm{\mathsf {jnz}}\,}}\textsc {ram}[x]\ p\) of that group, we rotate the sequence of all instructions such that either \(\textsc {ip}[1]\) or \(\textsc {ip}[p]\) becomes \(\textsc {ip}[0]\), depending on the value of \(\textsc {ram}[x]\). See below for the exact description. Counting the shuffles in the relevant part of Protocol 13 yields \(8n + 2m + 2\) shuffles, as the n-bit OR operation uses n shuffles. (Note that here p might even be 0, meaning that if \(\textsc {ram}[x] \ne 0\) the same instruction group is repeated again. Due to the time limit t, this cannot result in a real infinite loop and hence would not exhibit unusual detectable behavior that, e.g., Alice could use to learn information on Bob’s input.)
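The net effect of the loading procedure can be sketched as follows (our own sketch, address bits most-significant first; in the card protocol each conditional range swap is of course an oblivious sort on hidden bits):

```python
def load_word(ram, addr_bits):
    """Transport ram[x] to ram[0] by conditional swaps of memory halves,
    copy it out, then undo the swaps in reverse order, restoring ram."""
    n = len(addr_bits)
    done = []
    for i, b in enumerate(addr_bits):
        size = 1 << (n - 1 - i)                # current range length
        if b:                                   # swap left and right half
            ram[0:size], ram[size:2*size] = ram[size:2*size], ram[0:size]
            done.append(size)
    acc = ram[0]                                # copy ram[0] to the accumulator
    for size in reversed(done):                 # third loop: undo all swaps
        ram[0:size], ram[size:2*size] = ram[size:2*size], ram[0:size]
    return acc
```

Storing differs only in the direction of the copy in the middle step.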

The overall execution of the RAM program is given in Protocol 13. We assume the addresses x and p are available in positions X and P, respectively. To carry out the conditional jump, first load x into the accumulator and form the Boolean OR of all its bits. Assuming \(\textsc {ram}[0]\) is not zero, then the bit \(a_{1}\) is set to true by this OR operation and the single ♡-card is swapped into \(\textsc {ip}^*\) before the for-loop and is put into position \(\textsc {ip}[1]\) afterwards. If, however, \(\textsc {ram}[0]\) is zero, then \(a_{1}\) is set to false, in which case the for-loop transports the ♣-card into position \(\textsc {ip}[p]\) (the loop invariant is that the ♣-card is in position \(\textsc {ip}[p \mathop { \& } (2^{m-i} - 1)]\)). The rot operation in the last step rotates the sequence of instructions as desired.

[Protocol 13: overall execution of the RAM program]

Theorem 6.1

For any \(N = 2^n, M = 2^m, l < N, t \ge 1\), there exists a secure card-based protocol \(\mathcal {P}\) with the following properties:

  1. (i)

    The input sequences are all sequences (VP) where

    • V encodes l n-bit words \((v_{1},\ldots ,v_{l}) \in \{0,1\}^{nl}\) using the deck \(nl{\cdot }\)[♣,♡].

    • P encodes an \(n\)-bit-word RAM machine R with \(M\) instruction groups using the deck \(kM{\cdot }\) [♣,♡], where \(k = O(n+m)\) is the length of the encoding of one instruction group.

  2. (ii)

    The output is a sequence of cards encoding the output of R on input \((v_{1},\ldots ,v_{l})\) after t steps.

  3. (iii)

    In addition to the cards encoding the inputs, we need the helping deck \((N-l+2)n\cdot\) [♣,♡] \(\cup\) [♣, \(M\cdot\) ♡]. (Additional cards for the starred sort variants can borrow from \(A'\).)

  4. (iv)

    The protocol uses \((85n + 2m + 4)t\) shuffles.

Proof

For the correctness, we refer to the above explanation of all the relevant commands. For security we again use Corollary 3.1 and the fact that we do not turn over any cards outside sort or rot operations. For this, note that the OR operation in line 5 of Protocol 13 can be framed as a sort operation, cf. Protocol 4. The number of shuffles is derived by counting the numbers of shuffles in each instruction type as specified above. This yields \((7n + 14n + 21n + 21n + 5n + 5n + 8n +2m + 4)t = (85n + 2m + 4)t\) shuffles. \(\square\)

Remark 6.1

(Reusability of the Program) Similarly to Remark 5.2 for the TM case, we can ensure that we end in the original configuration (with the first instruction in \(\textsc {ip}[0]\)) by introducing an additional register \(\textsc {start}[0\ldots M{-}1]\) which is rotated together with the instruction groups and \(\textsc {ip}\). At the end of the execution, we use it to rotate everything back into place and additionally reset the accumulators. This variant uses an additional \(2n + 1\) shuffles (again not including the reset of the RAM cells and providing the new input).

Securely Evaluating a Branching Program

Branching programs [4] are commonly used in constructions of program obfuscation, e.g., in [17, 20, 47], which inspired this section.

Branching Program. A branching program B of length N and width w for l variables is a sequence \(((j^{(i)}, {\pi }_{0}^{(i)},{\pi }_{1}^{(i)}))_{1 \le i \le N} \in (\{1,\dots , l\} \times S_w \times S_w)^N\) of instructions. The permutation belonging to a sequence \(\vec {v} = (v_{1},\ldots ,v_l) \in \{0,1\}^{l}\) of inputs is

$$\begin{aligned} B(\vec {v}) = \prod _{1 \le i \le N} {\pi }_{v_{j^{(i)}}}^{(i)}. \end{aligned}$$

In other words, in the i-th step, the value of the \(j^{(i)}\)-th variable determines which of the two permutations of the i-th instruction is used.

For \({\sigma } \in S_w\), we say B \({\sigma }\)-computes a Boolean circuit C, if for any \(\vec {v} \in \{0,1\}^{l}\)

$$\begin{aligned} B(\vec {v}) = {\left\{ \begin{array}{ll} {\sigma }, &{}\text { if }C(\vec {v}) = 1,\\ {\mathsf {id}}, &{}\text { if }C(\vec {v}) = 0. \end{array}\right. } \end{aligned}$$

Now let \(\textsf {State}\) be a set of states on which \(S_w\) acts via some group action \(*\) and executing B on \(\vec {v}\) starting from some start state \(q_{0} \in \textsf {State}\) means computing states \((q_i)_{1 \le i \le N}\) iteratively as \(q_{i+1} = {\pi }_{v_{j^{(i)}}}*q_i\). Of course, we end with \(q_N = {\pi }_{v_{j^{(N)}}} *\ldots *{\pi }_{v_{j^{(1)}}} *q_{0} = B(\vec {v}) *q_{0}\).

In this paper, \(\textsf {State}\) is a set of card sequences of length w and \({\pi } *q\) yields the card sequence q permuted by \({\pi }\).
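Evaluating a branching program is then a straightforward fold over the instructions (a plain sketch: permutations as 0-indexed tuples composed as \((p \circ q)(i) = p(q(i))\); variable indices are 1-based as in the definition):

```python
def compose(p, q):
    """(p ∘ q)(i) = p(q(i)) for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def eval_bp(instructions, v):
    """Compute B(v): per instruction, the referenced input variable selects
    one of the two permutations; later instructions act after earlier ones."""
    w = len(instructions[0][1])
    result = tuple(range(w))                  # identity
    for j, pi0, pi1 in instructions:
        pi = pi1 if v[j - 1] else pi0         # v is 1-indexed in the text
        result = compose(pi, result)
    return result
```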

A Peculiar Subset of \(S_{5}\). Barrington’s Theorem makes heavy use of the fact that \(S_{5}\) is not a solvable group. In particular, there are permutations \({\pi },{\tau } \in S_{5}\) such that the commutator \([{\pi },{\tau }] {:}{=} {\pi } {\circ }{\tau } {\circ }{\pi }^{-1} {\circ }{\tau }^{-1}\) is not the identity permutation. There is some freedom when choosing permutations for the construction that follows. To be more specific, we define the five permutations \({\varphi }_{0},\ldots ,{\varphi }_{4}\) as

[Figure: definitions of the five permutations \({\varphi }_{0},\ldots ,{\varphi }_{4}\)]

In general, we can define \({\varphi }_i = (1\;2\;3\;4\;5)^i {\circ }{\varphi }_{0} {\circ }(1\;2\;3\;4\;5)^{-i}\) for any \(i \in \mathbb {Z}\) but, of course, only the remainder of the index modulo 5 is relevant.

It is easy to check that \({\varphi }_{0} = {\varphi }_{5} = [{\varphi }_{3},{\varphi }_{4}]\) and \({\varphi }_{0}^{-1} = {\varphi }_{5}^{-1} = [{\varphi }_{1},{\varphi }_{3}]\). We can, therefore, write each element \({\varphi } \in F {:}{=} \{{\varphi }_{0},\ldots ,{\varphi }_{4},{\varphi }_{0}^{-1},\ldots ,{\varphi }_{4}^{-1}\}\) as \({\varphi } = [{\varphi }',{\varphi }'']\) for some other elements \({\varphi }',{\varphi }'' \in F\). More concretely, we have

$$\begin{aligned} {\varphi }_i = [{\varphi }_{i+3},{\varphi }_{i+4}], \qquad {\varphi }_i^{-1} = [{\varphi }_{i+1},{\varphi }_{i+3}]. \end{aligned}$$
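Since the figure with the concrete permutations is not reproduced here, the following sketch instead brute-forces \(S_5\) for a non-identity \({\varphi }_{0}\) satisfying both base identities; the paper guarantees at least one such choice exists. Conjugating the base identities by \((1\;2\;3\;4\;5)^i\) then yields the relations for every index i, which the code can confirm directly:

```python
from itertools import permutations

IDENT = tuple(range(5))
CYC = (1, 2, 3, 4, 0)  # the 5-cycle (1 2 3 4 5), written 0-indexed

def compose(p, q):
    """(p ∘ q)(x) = p(q(x)): q is applied first."""
    return tuple(p[q[x]] for x in range(5))

def inv(p):
    out = [0] * 5
    for x, px in enumerate(p):
        out[px] = x
    return tuple(out)

def commutator(p, t):
    """[p, t] = p ∘ t ∘ p^{-1} ∘ t^{-1}."""
    return compose(compose(p, t), compose(inv(p), inv(t)))

def phi(f, i):
    """phi_i = (1 2 3 4 5)^i ∘ phi_0 ∘ (1 2 3 4 5)^{-i}, with phi_0 = f."""
    c = IDENT
    for _ in range(i % 5):
        c = compose(CYC, c)
    return compose(compose(c, f), inv(c))

# Search for phi_0 with phi_0 = [phi_3, phi_4] and phi_0^{-1} = [phi_1, phi_3].
candidates = [f for f in permutations(range(5))
              if f != IDENT
              and commutator(phi(f, 3), phi(f, 4)) == f
              and commutator(phi(f, 1), phi(f, 3)) == inv(f)]
```

Any candidate found by this search satisfies \({\varphi }_i = [{\varphi }_{i+3},{\varphi }_{i+4}]\) and \({\varphi }_i^{-1} = [{\varphi }_{i+1},{\varphi }_{i+3}]\) for all i.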

Barrington’s Theorem. We now state a central theorem due to Barrington, which we specialize to permutations from the set \(F\) defined above. To keep the paper self-contained and for illustration, we give the elegant and constructive proof in full. Recall from Sect. 2 that the depth of a circuit C is the maximum number of \({\wedge }\) and \({\vee }\) gates on a path in C.

Theorem 7.1

(Barrington [4]) For any Boolean circuit C of depth d and \({\varphi } \in F\) there exists a branching program \(B = B(C)\) of width 5 and \(N \le 4^d\) instructions that \({\varphi }\)-computes C.

Proof

The proof works by induction on the length \(d'\) of the longest path in C. If \(d' = 0\), then we also have \(d = 0\) and the output node is labeled with a constant 0, a constant 1 or the index j of a variable. In these cases, the trivial branching programs with a single instruction of the form \((\_,{\mathsf {id}},{\mathsf {id}})\), \((\_,{\varphi },{\varphi })\) or \((j,{\mathsf {id}},{\varphi })\), respectively, \({\varphi }\)-compute C (here, \(\_\) is a placeholder for an arbitrary variable index).

Now assume \(d' > 0\). If the output node is labeled \({\lnot }\), then the value at its unique predecessor is computed by a circuit \(C'\) with longest path of length \(d' - 1\). Therefore, there is a branching program \(B'\) that \({\varphi }^{-1}\)-computes \(C'\) with at most \(4^d\) instructions. Let \((j,{\pi },{\pi }')\) be the last instruction of \(B'\). Replacing it with \((j,{\varphi } \circ {\pi },{\varphi } \circ {\pi }')\) yields a branching program B that \({\varphi }\)-computes C since we have

$$\begin{aligned} B(\vec {v}) = {\varphi } \Leftrightarrow B'(\vec {v}) = {\mathsf {id}}\Leftrightarrow C'(\vec {v}) = 0 \Leftrightarrow C(\vec {v}) = 1 \end{aligned}$$

and for similar reasons \(B(\vec {v}) = {\mathsf {id}}\Leftrightarrow C(\vec {v}) = 0\).

If the output node is labeled \({\wedge }\), then the values at its two predecessors are computed by two circuits \(C'\) and \(C''\) with longest path of length at most \(d'-1\) and depth at most \(d-1\). We previously observed that we can write \({\varphi } = [{\varphi }',{\varphi }'']\) for two permutations \({\varphi }',{\varphi }'' \in F\). Let \(B'_{{\varphi }'}\) and \(B'_{{\varphi }'{}^{-1}}\) be two branching programs that \({\varphi }'\)-compute and \({\varphi }'{}^{-1}\)-compute \(C'\), respectively, and similarly let \(B''_{{\varphi }''}\) and \(B''_{{\varphi }''{}^{-1}}\) be two branching programs that \({\varphi }''\)-compute and \({\varphi }''{}^{-1}\)-compute \(C''\), respectively.

We obtain B as the concatenation of these four branching programs. Depending on the values \(r' = C'(v_{1},\ldots ,v_{l})\) and \(r'' = C''(v_{1},\ldots ,v_{l})\) we get the following behavior of B:

$$\begin{aligned} B(\vec {v})&= B'_{{\varphi }'}(\vec {v}) {\circ } B''_{{\varphi }''}(\vec {v}) {\circ }B'_{{\varphi }'{}^{-1}}(\vec {v}) {\circ }B''_{{\varphi }''{}^{-1}}(\vec {v}) \\&= {\left\{ \begin{array}{ll} {\varphi }' {\circ }{\varphi }'' {\circ }{\varphi }'{}^{-1} {\circ }{\varphi }''{}^{-1} = [{\varphi }',{\varphi }'']&{} = {\varphi } \text { if }r' = r'' = 1\\ {\mathsf {id}}{\circ }{\varphi }'' {\circ }{\mathsf {id}}{\circ }{\varphi }''{}^{-1}&{} = {\mathsf {id}}\text { if }r' = 0, r'' = 1\\ {\varphi }' {\circ }{\mathsf {id}}{\circ }{\varphi }'{}^{-1} {\circ }{\mathsf {id}}&{} = {\mathsf {id}}\text { if }r' = 1, r'' = 0\\ {\mathsf {id}}{\circ }{\mathsf {id}}{\circ }{\mathsf {id}}{\circ }{\mathsf {id}}&{} = {\mathsf {id}}\text { if }r' = r'' = 0 \end{array}\right. } \end{aligned}$$

Since \(C(\vec {v}) = 1 \Leftrightarrow r' = r'' = 1\), this means B indeed \({\varphi }\)-computes C. By induction, each of the four subprograms has at most \(4^{d-1}\) instructions, so B has at most \(4 \cdot 4^{d-1} = 4^{d}\) instructions. An output node labeled \({\vee }\) is handled analogously via De Morgan’s law, \(C' \vee C'' = \lnot (\lnot C' \wedge \lnot C'')\). \(\square\)
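The constructive proof translates directly into code. The nested-tuple circuit representation and the brute-force commutator decomposition below are our illustrative choices, not from the paper; the instruction order in the \({\wedge }\)-case is arranged so that, with instructions applied first-to-last, the overall product comes out as the commutator:

```python
from itertools import permutations

IDENT = tuple(range(5))

def compose(p, q):  # q is applied first, then p
    return tuple(p[q[x]] for x in range(5))

def inv(p):
    out = [0] * 5
    for x, px in enumerate(p):
        out[px] = x
    return tuple(out)

def is_five_cycle(p):
    x, n = p[0], 1
    while x != 0:
        x, n = p[x], n + 1
    return n == 5

FIVE_CYCLES = [p for p in permutations(range(5)) if is_five_cycle(p)]

def commutator_roots(sigma):
    """Brute-force 5-cycles a, b with [a, b] = sigma
    (such a decomposition exists for every 5-cycle sigma)."""
    for a in FIVE_CYCLES:
        for b in FIVE_CYCLES:
            if compose(compose(a, b), compose(inv(a), inv(b))) == sigma:
                return a, b
    raise ValueError("sigma is not a commutator of 5-cycles")

def barrington(circuit, sigma):
    """Width-5 branching program that sigma-computes the circuit.
    Circuits: ('const', b), ('var', j), ('not', c), ('and', c1, c2)."""
    op = circuit[0]
    if op == 'const':
        return [(1, IDENT, IDENT)] if circuit[1] == 0 else [(1, sigma, sigma)]
    if op == 'var':
        return [(circuit[1], IDENT, sigma)]
    if op == 'not':
        bp = barrington(circuit[1], inv(sigma))
        j, p0, p1 = bp[-1]  # post-compose sigma onto the last instruction
        return bp[:-1] + [(j, compose(sigma, p0), compose(sigma, p1))]
    if op == 'and':
        a, b = commutator_roots(sigma)
        # applied first-to-last, the product is a ∘ b ∘ a^{-1} ∘ b^{-1} = [a, b]
        return (barrington(circuit[2], inv(b)) + barrington(circuit[1], inv(a))
                + barrington(circuit[2], b) + barrington(circuit[1], a))
    raise ValueError(op)

def eval_bp(bp, v):
    r = IDENT
    for j, p0, p1 in bp:
        r = compose(p1 if v[j - 1] else p0, r)  # new instruction applied last
    return r
```

For an \({\wedge }\) of two variables, the resulting program has exactly \(4 = 4^1\) instructions, matching the bound of the theorem.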

Implementing Branching Programs with Cards

We first describe how the encoding \(P = P(C)\) is obtained from C, as the format of P already contributes to hiding details about C, especially the pattern in which variables are used. First, by Barrington’s Theorem (Theorem 7.1) there is a branching program \(B = B(C)\) that \({\varphi }_{0}^{-1}\)-computes C with \(N \le 4^d\) instructions. We now transform B into a normalized branching program \(B'\) by preceding each instruction \((j,{\pi }_{0},{\pi }_{1})\) of B with the \(j-1\) dummy instructions \((1,{\mathsf {id}},{\mathsf {id}}),\ldots ,(j-1,{\mathsf {id}},{\mathsf {id}})\) and appending to it the \(l-j\) dummy instructions \((j+1,{\mathsf {id}},{\mathsf {id}}),\ldots ,(l,{\mathsf {id}},{\mathsf {id}})\). This means that \(B'\) accesses all variables periodically in canonical order. Note that \(B'\) contains \(lN \le l{\cdot }4^d\) instructions. (In addition, we may choose to pad \(B'\) to a longer program \(B''\) of length \(lN'\) if we wish to hide the length of \(B'\) and thus of B.) Clearly, \(B'\) exhibits the same behavior as B. The sequence P is now simply obtained by concatenating the lN sequences encoding the permutations occurring in the description of \(B'\).
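The normalization step can be sketched as follows; the instruction-list representation (tuples of a 1-indexed variable and two image-tuple permutations) is an assumption of this sketch:

```python
IDENT = tuple(range(5))

def compose(p, q):
    """(p ∘ q)(x) = p(q(x)): q is applied first."""
    return tuple(p[q[x]] for x in range(5))

def eval_bp(bp, v):
    """Overall permutation B(v) of a branching program on input bits v."""
    r = IDENT
    for j, p0, p1 in bp:
        r = compose(p1 if v[j - 1] else p0, r)
    return r

def normalize(bp, l):
    """Pad every instruction (j, pi0, pi1) with dummy (k, id, id) instructions
    so that the result reads variables 1, 2, ..., l periodically in order."""
    out = []
    for j, p0, p1 in bp:
        out.extend((k, IDENT, IDENT) for k in range(1, j))          # dummies 1..j-1
        out.append((j, p0, p1))
        out.extend((k, IDENT, IDENT) for k in range(j + 1, l + 1))  # dummies j+1..l
    return out
```

A program of length N over l variables becomes one of length lN; since the dummy instructions apply \({\mathsf {id}}\) regardless of the variable’s value, the behavior is unchanged.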

Theorem 7.2

For any \(l,N \ge 1\), there exists a secure card-based protocol \(\mathcal {P}\) with the following properties:

(i) The input sequences are all sequences \((V, P)\) where

    • V encodes the values of l Boolean variables \((v_{1},\ldots ,v_{l}) \in \{0,1\}^{l}\) using the deck \(l{\cdot }\) [♣,♡].

    • P encodes a normalized branching program B of length \(lN\) with one bit output using the deck \(2lN{\cdot }[{1,2,3,4,5}]\).

(ii) The output is two cards encoding \(B(v_{1},\ldots ,v_{l})\).

(iii) In addition to the cards encoding the inputs, the helping deck [\(2{\cdot }\)♡,\(5{\cdot }\)♣] is used.

(iv) Each execution of the protocol performs 3lN shuffle actions.

Proof

The protocol is described in Protocol 14. We denote by capital letters the sets of positions on which the corresponding parts of the input (denoted by lower case letters) are present at the start of the protocol. Additionally, there are helping cards present in positions \(\textsc {q}\) that initially contain the sequences ♣♡♣♣♣ as well as two cards to support the \({{\,\mathrm{\mathsf {sort}}}}^*\)-operation (not shown in Fig. 6).

[Protocol 14: the branching program protocol]

Consider an iteration of the inner loop with \(k = li+j\). First, the encodings of the two permutations \({\pi }^{(k)}_{0}\) and \({\pi }^{(k)}_{1}\) (in positions \(\varPi ^{(k)}_{0}\) and \(\varPi ^{(k)}_{1}\)) are swapped if \(v_j\) (in position \(V_j\)) is \(1\) and left as is otherwise. Hence, an encoding of \({\pi }^{(k)}_{v_j}\) ends up in position \(\varPi ^{(k)}_{0}\), from where it is obliviously applied to the sequence in \(\textsc {q}\). For correctness, note that by assumption the normalized branching program \({\varphi }_{0}^{-1}\)-computes \(C\), i.e., if the output is \(0\), in total we perform \({\mathsf {id}}\) on the cards in \(\textsc {q}\), which results in a \(0\) being encoded in \(\textsc {q}_R\). If \(C\) outputs \(1\), then \({\varphi }_{0}^{-1}\) is applied to the cards of \(\textsc {q}\), resulting in ♡♣♣♣♣, as \({\varphi }_{0}^{-1}\) maps \(2 \mapsto 1\), yielding an encoded \(1\) in \(\textsc {q}_R\).

Security of \(\mathcal {P}\) follows again from the fact that the protocol is composed only of valid sort operations, together with Corollary 3.1. \(\square\)
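Abstracting away the face-down encodings and shuffles (which provide the privacy), the data flow of the inner loop, a conditional swap controlled by \(v_j\) followed by applying the permutation now in position \(\varPi ^{(k)}_{0}\) to the register, can be simulated as follows. The plain list representation is an assumption of this sketch, not the card encoding itself:

```python
def act(p, q):
    """Card at position i moves to position p[i]."""
    out = [None] * len(q)
    for i, card in enumerate(q):
        out[p[i]] = card
    return out

def simulate_inner_loop(normalized_bp, v, q0):
    """Non-private simulation of the protocol's inner loop: swap the two
    permutation encodings iff v_j = 1, then apply the permutation now
    sitting in position Pi_0 to the register q."""
    q = list(q0)
    for j, pi0, pi1 in normalized_bp:
        if v[j - 1] == 1:
            pi0, pi1 = pi1, pi0  # conditional swap controlled by v_j
        q = act(pi0, q)          # obliviously apply the selected permutation
    return q
```

The register thus ends in \(B(\vec {v}) * q_{0}\); for a program that \({\varphi }_{0}^{-1}\)-computes C started from ♣♡♣♣♣, the first card becomes ♡ exactly if C outputs 1.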

Fig. 6: Overview of the branching program construction. Alice’s input is the branching program \(((j^{(i)}, {\pi }_{0}^{(i)},{\pi }_{1}^{(i)}))_{1 \le i \le N} \in (\{1,\dots , l\} \times S_5 \times S_5)^{N}\) in normalized form

Remark 7.1

(Reusability of the Program) To allow for reusing the branching program after its execution, we would need to write the executed permutation of each step back into its register and to undo any conditional swaps. In more formal terms, we replace the sort command in the second line of the inner loop of Protocol 14 with its starred variant. To undo the swap, we repeat the first line of the inner loop after the second line. Moreover, we reset the register \(\textsc {q}\). Hence, this variant of the protocol uses \(6lN + 1\) shuffles.

A Note Regarding Active Security. Note that a malicious Alice might learn something about the input passed to the program by choosing the permutations of the program in such a way that the output (the first two cards in \(\textsc {q}\) after the protocol run) is not ♣♡ or ♡♣, but ♣♣. If we want to avoid this, we can initialize \(q_0\) with ♣♡♣♡♣ (replacing the penultimate ♣ with a ♡), and instead of opening just the first two cards at the end, we have to ensure that the content of the register gets mapped to a single bit, without revealing anything else. For this, note that after a protocol run of a legal program, \(\textsc {q}\) contains one of two configurations, namely ♣♡♣♡♣ if \({\mathsf {id}}\) was applied, and ♡♣♣♣♡ if \({\varphi }_{0}^{-1}\) was applied. Important here is that the ♡s have distance \(1\) in the first case and distance \(0\) in the second; this distance is invariant under random cuts and represents the two possible configuration classes (orbits w.r.t. random cuts) in the five-card trick [8]. However, we cannot use the five-card trick directly, as its output is not in committed format. To overcome this, we can make use of the five-card AND protocol of [3], which starts from a situation as above and outputs a bit commitment to the AND value in a (restart-free) Las Vegas fashion. (Note that this protocol is shown to be optimal/card-minimal in a strong sense in [27].) This change would add seven shuffles (five random cuts and two random bisection cuts) in expectation.
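The invariance argument can be checked mechanically. Writing H for ♡ and C for ♣ (an ASCII stand-in), the minimal number of cards lying between the two hearts is the same for every cyclic shift, and differs between the two legal end configurations:

```python
def cyclic_shifts(seq):
    """All rotations of a card sequence, i.e., its orbit under random cuts."""
    return {tuple(seq[i:] + seq[:i]) for i in range(len(seq))}

def heart_gap(seq):
    """Minimal number of cards strictly between the two hearts, cyclically."""
    i, j = [k for k, card in enumerate(seq) if card == 'H']
    d = (j - i) % len(seq)
    return min(d, len(seq) - d) - 1
```

`heart_gap` is 1 on every rotation of CHCHC and 0 on every rotation of HCCCH, so the two orbits are disjoint and opening the cards after a random cut reveals only this one bit.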

Moreover, for active security in all the protocols in this paper, one should additionally implement the shuffle operations with active security as in [30]. For ease of implementing the coupled shuffles, we recommend using envelopes to avoid additional helping cards, as in Fig. 2.

Conclusion

We give four card-efficient and conceptually simple protocols for executing a universal machine model in a secure multiparty computation protocol, hence achieving Private Function Evaluation. These cover circuits, Turing machines, word-RAM machines, and branching programs, giving the user a palette of options from which to choose the most suitable one. As an interesting building block, which also largely simplifies the security proofs, we introduce sort protocols, which we believe to be of independent interest, as many protocols from the literature can be restated in these terms. We give the concrete number of necessary cards for each of the models, carefully reusing helping cards where possible. We additionally discuss several adaptations, e.g., how to execute these protocols in a non-destructive way that lets us reuse the program multiple times.

Our results can also be interpreted as a straightforward instantiation of Oblivious RAM (ORAM), making heavy use of the fact that we can physically and obliviously move around “RAM cells”, which is not possible in the usual cryptographic ORAM model. Restating classical cryptographic problems, such as constructing ORAM or program obfuscation, in the language of card-based cryptography may not only be of didactic use when explaining them to students, but may also provide insight into the constructions in the classical cryptographic realm.