Abstract
We define the concept of and present provably secure constructions for Anonymous RAM (AnonRAM), a novel multi-user storage primitive that offers strong privacy and integrity guarantees. AnonRAM combines privacy features of anonymous communication and oblivious RAM (ORAM) schemes, allowing it to simultaneously protect the privacy of content, access patterns, and users’ identities from curious servers and from other (even adversarial) users. AnonRAM further protects integrity, i.e., it prevents malicious users from corrupting the data of other users. We present two secure AnonRAM schemes, differing in design and time complexity. The first scheme has a simpler design; like efficient ORAM schemes, its time complexity is polylogarithmic in the number of cells (per user); however, it is linear in the number of users. The second AnonRAM scheme reduces the overall complexity to polylogarithmic in the total number of cells (of all users) at the cost of requiring two (non-colluding) servers.
Keywords
 Anonymity
 Access privacy
 Oblivious RAM
 Outsourced data
 (Universal) Rerandomizable encryption
 Oblivious PRF
1 Introduction
The advent of cloud-based outsourcing services has been accompanied by a growing interest in security and privacy, striving to prevent exposure and abuse of sensitive information by adversarial cloud service providers and users. This includes, in particular, the tasks of data privacy, i.e., hiding users’ data from overly curious entities such as the provider, as well as access privacy, i.e., hiding information about data-access patterns, such as which data element is being accessed and how (read or write). The underlying rationale is that exposure of data-access patterns may often lead to a deep exposure of what the user intends to do. An extensive line of research has produced impressive results and tools for achieving both data and access privacy. In particular, oblivious RAM (ORAM) schemes, first introduced by Goldreich and Ostrovsky [29], have been extensively investigated in the last few years, yielding a multitude of elegant and increasingly efficient results [11, 22, 26, 27, 30, 34, 36, 38].
Another important privacy goal is to hide who is accessing the data, i.e., conceal the identity of the user to ensure anonymity. This area spawned extensive research and multiple protocols and systems for anonymous communication [7, 8, 13, 15]. The Tor network [37] currently constitutes the most widely used representative of these works.
We focus on the combination of these two goals: hiding content and access patterns as offered by ORAM schemes, but also concealing the user identities as offered by anonymous communication protocols. Experts in the relevant areas may not be completely surprised to find that designing this primitive is quite challenging. In particular, the privacy guarantees cannot be obtained by merely combining both approaches: the naïve idea to achieve these privacy properties simultaneously is to maintain a separate ORAM data structure for each user and have users access the system via the anonymous communication protocol. However, this construction does not hide the access patterns, since the server can determine whether the same data structure is accessed twice, and thereby trivially link two accesses made by the same anonymous user. Instead of multiple ORAMs, one could try to use a single ORAM as a black box containing the data of all users. However, this does not work either, as the users inherently have to share the same key, and the privacy properties immediately fail in the presence of curious adversaries. (See Sect. 3 for more details.) Supporting multiple, potentially malicious (or even ‘just curious’) users is significantly harder than supporting multiple cooperating clients (e.g., devices of the same user), as in [17, 23, 28, 39].
Furthermore, when considering an adversarial environment and, in particular, malicious users, integrity, i.e., preventing one user from corrupting the data of other users, is also critical. Notice that the (popular) ‘honest-but-curious’ model is easier to justify for servers (e.g., running ORAM) than for clients; handling malicious clients as well is very important. Note also that ensuring integrity is fairly straightforward when users can be identified securely; however, this conflicts with the goals of anonymity and, even more, with the desire for oblivious access, i.e., hiding even the pattern of access to data. As often happens in security, the mechanisms for the different goals do not seem to combine nicely, resulting in a rather challenging problem, to which we offer the first, but definitely not final, pair of solutions, albeit with significant limitations and room for improvement.
Our Contributions. We define Anonymous RAM (AnonRAM) schemes and present two constructions that are provably secure in the random oracle model. AnonRAM schemes support multiple users, each user owning multiple memory cells. AnonRAM schemes simultaneously hide data content, access patterns, and the users’ identities against honest-but-curious servers and against malicious users of the same service, while ensuring that data can only be modified by its legitimate owner (Sect. 2).
The first scheme, called \(\mathsf {AnonRAM}_{\mathsf {lin}}\), realizes a conceptually simple transformation that turns any secure single-user ORAM scheme into a secure AnonRAM scheme (that supports multiple users). The key idea here is to convert every single-user ORAM cell to a multi-cell having one cell for each user, and to employ rerandomizable encryption such that a user can hide her identity by rerandomizing all other cells in a multi-cell while updating her own cell (Sect. 3). The drawback of \(\mathsf {AnonRAM}_{\mathsf {lin}}\), however, is that its complexity is linear in the number of users (although polylogarithmic in the number of cells per user). This linear complexity stems from the requirement that a user has to touch one cell of each user when accessing her own cell.
The second scheme, called \(\mathsf {AnonRAM}_\mathsf {polylog}\), reduces the overall complexity to polylogarithmic in the number of users (Sect. 4). This comes at the cost of requiring two non-colluding servers \(\mathsf {S}\) and \(\mathsf {T}\). Server \(\mathsf {S}\) maintains all user data in encrypted form using a universal re-encryption scheme, thereby preventing \(\mathsf {S}\) and other users from establishing a mapping between a user and her data blocks. Essentially, \(\mathsf {AnonRAM}_\mathsf {polylog}\) constitutes an extension of hierarchical ORAM designs, e.g., by Goldreich-Ostrovsky [19], where the reshuffle operation and the mapping to ‘dummy’ blocks are performed by the dedicated server \(\mathsf {T}\). This prevents user deanonymization by the server \(\mathsf {S}\) or by other users. Furthermore, mappings to specific buckets are achieved by means of a specific oblivious PRF.
For the sake of exposition, we first describe simplified variants of both schemes in the presence of honest-but-curious users. We subsequently show how to extend both constructions to handle malicious users as well. The extension mainly involves adding an integrity element to the employed (universal) re-encryption, such that any user can only re-encrypt the data of other users, but not corrupt it.
Finally, we consider it an important contribution that we present a rigorous model and a definition for this challenging problem of AnonRAM, and show their suitability by providing provably secure protocol instantiations.
Related Work. Several multi-client ORAM solutions have been proposed in the literature. Goodrich et al. [23] observe that stateless ORAM schemes, in which no state is carried from one access to the next, are suitable for a group of trusted clients. The schemes of [4, 10] address concurrent accesses by multiple client devices of the same user in the synchronous model, while [2, 32, 35, 39] deal with asynchronous concurrent accesses.
Franz et al. [17] introduce the concept of delegatable ORAM, where a (trusted) database owner can delegate access rights to other users and periodically performs reshuffling to protect the privacy of their accesses. The scheme of [28] allows a storage owner to share a server-side ORAM structure among a group of users, but assumes that all users share the same symmetric key, which none of them will provide to the server. These works, however, do not protect the privacy of a client against malicious or ‘curious’ clients.
AnonRAM schemes avoid the strong non-collusion assumption between the users and the storage server. In other words, we consider the problem of multiple users anonymously accessing the server, where the server (cooperating with some users) should not be able to learn which honest user accessed which cell on the server. Notably, we achieve our stronger privacy guarantees against a stronger adversary without requiring any communication among the users.
The only other multi-user ORAM scheme has been proposed by Zhang et al. [25]. Their scheme uses a set of intermediate nodes to convert a user’s query into an ORAM query to the server. Privacy of the scheme is, however, analyzed only for individual non-anonymous user accesses and not for multi-user anonymous access patterns. Furthermore, their scheme does not provide integrity protection against malicious users. Moreover, their work lacks both definitions and proofs; as the reader will see, the definitions and proofs we found necessary to claim security of our schemes are non-trivial.
2 AnonRAM Definitions
We consider a set of N users \(\mathcal {U} =\{\mathsf {U} _1, \dots , \mathsf {U} _N\}\), a set of \(\eta \) servers \(\mathcal {S} = \{\mathsf {S} _1, \dots , \mathsf {S} _\eta \}\), a set \(\varSigma \) of messages, and we let M denote the number of data cells available to each user. All protocols are parametrized by the security parameter \(\lambda \). Before defining the class of AnonRAM schemes, we provide the definitions of access requests and access patterns.
Definition 1
(Access Requests). An access request AR is a tuple \((j,\alpha ,m) \in [1,M] \times \{\mathsf {Read},\mathsf {Write}\} \times \varSigma \). Here j is called the (cell) index of AR, \(\alpha \) the access type, and \(m\) the input message.
Intuitively, an access request \((j,\alpha ,m)\) will denote that \(m\) should be written into cell j (if \(\alpha = \mathsf {Write}\)), or that the content of cell j should be read (if \(\alpha = \mathsf {Read}\); in this case \(m\) is ignored and we often just write \((j,\alpha , *)\)).
Definition 2
(Access Patterns). An access pattern is a series of tuples \((i,AR_i)\) where \(i\in [1,N]\) is a user identifier and \(AR_i\) is an access request.
For notational simplicity, we will write \((i,j, \alpha , m)\) instead of \((i, (j, \alpha , m))\) for the individual elements of access patterns.
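For concreteness, Definitions 1 and 2 can be transcribed into code directly; the following is a minimal Python sketch (the type and field names are ours, not part of the paper's formalism):

```python
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class AccessRequest:
    cell: int                       # cell index j in [1, M]
    op: Literal["Read", "Write"]    # access type alpha
    message: str = "*"              # input message m (ignored for reads)

# An access pattern is a sequence of (user, request) pairs; the paper
# flattens each element to the tuple (i, j, alpha, m).
pattern = [
    (1, AccessRequest(3, "Write", "hello")),
    (2, AccessRequest(3, "Read")),   # user 2 reads her own cell 3
]
flattened = [(i, r.cell, r.op, r.message) for i, r in pattern]
```

Note that the two users above access the *same index* 3, but in different per-user cell spaces: each user owns her own M cells.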
We next define AnonRAM schemes. In this work, we consider sequential schemes where one participant is active at any point in time.
Definition 3
(AnonRAM Schemes). An AnonRAM scheme is a tuple \((\mathsf {Setup},\) \( \mathsf {User}, \mathsf {Server} _1,\dots ,\mathsf {Server} _\eta )\) of \(\eta +2\) PPT algorithms, where:
\(\bullet \) The initialization algorithm \(\mathsf {Setup}\) maps a security parameter \(\lambda \) and an identifier id to an initial state, where \(id\in \{0,1,\ldots ,\eta \}\) identifies one of the servers (for \(id>0\)) or the user (for \(id=0\)).
\(\bullet \) The user algorithm \(\mathsf {User}\) processes two kinds of inputs: (a) access requests (from the user) and (b) pairs (l, m) where \(l\in [1,\eta ]\) denotes a server and m a message from server \(\mathsf {S} _l\). \(\mathsf {User} \) maps the current state and input to a new state and to either a response provided to the user or a pair (l, m) with \(l\in [1,\eta ]\) denoting a server and m being a message for \(\mathsf {S} _l\).
\(\bullet \) The server algorithm \(\mathsf {Server} _l\) for server \(\mathsf {S} _l\) maps the current server state and input (message from user or from another server) to a new server state and a message either to the user or to another server.
Adversarial Models and Protocol Execution. We consider two different adversarial models: (i) honest-but-curious (\(\mathsf {HbC}\)) adversaries that learn the state of one server \(\mathsf {S} ^*\) and of a subset \(\mathcal {U} ^*\) of users, and (ii) malicious users (\(\mathsf {Mal\_Users}\)) adversaries that learn the state of one server (as before) and additionally control a subset \(\mathcal {U} ^*\) of users. In both models, the adversary can additionally eavesdrop on all messages sent on the network, i.e., between users and servers, and between two servers.
We now define the sequential execution \(\mathsf {Exec} (\mathcal {AR},\mathsf {Adv},AP, \zeta )\) of an AnonRAM scheme \(\mathcal {AR}\) in the presence of an adversary \(\mathsf {Adv}\) and a given access pattern AP assuming an adversarial model \(\zeta \in \{\mathsf {HbC},\mathsf {Mal\_Users}\}\).
Definition 4
(Execution). Let \(\mathcal {AR}\) be an AnonRAM scheme \((\mathsf {Setup}, \mathsf {User},\) \( \mathsf {Server} _1,\dots ,\mathsf {Server} _\eta )\), \(\mathsf {Adv}\) be a PPT algorithm, \(\zeta \in \{\mathsf {HbC},\mathsf {Mal\_Users}\}\) and AP be an access pattern. The execution \(\mathsf {Exec} (\mathcal {AR},\mathsf {Adv},AP,\zeta )\) is the following randomized process:

1.
All parties are initialized using \(\mathsf {Setup} \), resulting in initial states \(\sigma _{\mathsf {U} _i}\) for each user \(\mathsf {U} _i\), and \(\sigma _{\mathsf {S} _l}\) for each server \(\mathsf {S} _l\).

2.
\(\mathsf {Adv}\) selects a server \(\mathsf {S} ^*\) and a strict subset \(\mathcal {U} ^*\subset \mathcal {U} \).

3.
Let \((i,j,\alpha ,m_{i,j})\) be the first element of AP; if AP is empty, terminate.

4.
If \(\mathsf {U} _i\in \mathcal {U} ^*\) and \(\zeta =\mathsf {Mal\_Users}\), then let (l, m) be the output of \(\mathsf {Adv}\) on input \((i,j,\alpha ,m_{i,j})\). Otherwise, let (l, m) be the output of \(\mathsf {User} \) on input \((j,\alpha ,m_{i,j})\), with state \(\sigma _{\mathsf {U} _i}\), and update \(\sigma _{\mathsf {U} _i}\) accordingly.

5.
Invoke \(\mathsf {S} _l\) with (input) message m. The server \(\mathsf {S} _l\) may call other servers (possibly recursively) and finally produces an (output) message \(m'\).

6.
If \(\mathsf {U} _i\in \mathcal {U} ^*\) and \(\zeta =\mathsf {Mal\_Users}\), provide the message \(m'\) to \(\mathsf {Adv}\). Otherwise, provide \(m'\) to user \(\mathsf {U} _i\). \(\mathsf {U} _i\) (\(\mathsf {Adv}\) if \(\mathsf {U} _i\in \mathcal {U} ^*\) and \(\zeta =\mathsf {Mal\_Users}\)) may repeat sending messages to any servers. Eventually, \(\mathsf {U} _i\) (\(\mathsf {Adv}\)) terminates.

7.
Repeat the loop (from step 3) with the next element of AP (until empty). Throughout the execution, the adversary learns the internal states of \(\mathsf {S} ^*\) and of all users in \(\mathcal {U} ^*\), as well as all messages sent on the network.
A trace is the random variable defined by an execution, using uniformly random cointosses for all parties. The trace includes the sequence of messages in the execution corresponding to access requests and the final state of the adversary. Let \(\varTheta (x)\) denote the trace of execution x.
Privacy and Integrity of AnonRAM Schemes. To define privacy for AnonRAM schemes, we consider an additional PPT adversary \(\mathcal {D} \) called the distinguisher. \(\mathcal {D} \) outputs two arbitrary access patterns of the same finite length, which differ only in inputs to unobserved users. We then randomly select and execute one of these two patterns. The distinguisher’s goal is to identify which pattern was used. Since the two patterns may differ in user, cell, operation, or value, this definition encompasses all relevant privacy properties in this setting, including anonymity (identity privacy), confidentiality (value privacy), and obliviousness (cell and operation privacy). We call an adversary \(\mathsf {Adv}\) compliant with a pair of access patterns \((AP_0,AP_1)\) if \(\mathsf {Adv}\) only outputs sets \(\mathcal {U} ^*\) of users in Step (2) of \(\mathsf {Exec} (\mathcal {AR},\mathsf {Adv}, AP_0, \zeta )\) and \(\mathsf {Exec} (\mathcal {AR},\mathsf {Adv}, AP_1, \zeta )\) such that \(AP_0\) and \(AP_1\) are identical when restricted to users in \(\mathcal {U} ^*\).
Definition 5
(Privacy of AnonRAM). An AnonRAM scheme \(\mathcal {AR}\) preserves privacy in adversarial model \(\zeta \in \{\mathsf {HbC},\mathsf {Mal\_Users}\}\) if for every pair of (same finite length) access patterns \((AP_0, AP_1)\) and for every pair of PPT algorithms \(\left( \mathsf {Adv}, \mathcal {D}\right) \) s.t. \(\mathsf {Adv}\) is compliant with \((AP_0, AP_1)\), we have that
\( \left| \, \Pr \left[ \mathcal {D} \left( \varTheta \left( \mathsf {Exec} (\mathcal {AR},\mathsf {Adv},AP_b,\zeta )\right) \right) = b \right] - \tfrac{1}{2} \, \right| \)
is negligible in \(\lambda \), where the probability is taken over uniform coin tosses by all parties, and \(b \leftarrow _R \{0,1\}\).
Note that when all-but-one (i.e., \(N-1\)) users are observed and \(\zeta = \mathsf {HbC}\), our privacy property corresponds to the standard ORAM access privacy definition [19]. ORAM is hence a special case of AnonRAM with a single user (\(N=1\)).
AnonRAM should ensure integrity to prevent invalid executions caused by parties deviating from the protocol. Informally, a trace is invalid if a value read from a cell does not correspond to the most recently written value to the cell.
Definition 6
(Integrity of AnonRAM). Let \(\vartheta \) be a trace of execution with access pattern AP, and let \(AR= (j,\mathsf {Read}, *)\) with \((i,AR_i)\in AP\) be a read request for cell j of user \(\mathsf {U} _i\), returning a value x. Let \(AR' = (j,\mathsf {Write}, x')\) be the most recent previous write request to cell j of user \(\mathsf {U} _i\) in AP, or \(\bot \) if there was no such previous write request. If \(x\ne x'\), we say that this read request is invalid. If any read request in the trace is invalid, then the trace is invalid.
An AnonRAM scheme \(\mathcal {AR}\) preserves integrity if there is negligible (in \(\lambda \)) probability of invalid traces when the traces are constrained to the view of the honest users (all \(\mathsf {U} _i \in \mathcal {U} \) in the \(\mathsf {HbC}\) model, and all users \(\mathsf {U} _i \in \mathcal {U} \setminus \mathcal {U} ^*\) in the \(\mathsf {Mal\_Users}\) model) for any PPT adversary and any access pattern AP.
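The validity condition of Definition 6 is easy to check mechanically. The sketch below (our own illustration, with `None` standing in for \(\bot \)) scans a flattened trace of per-user requests and flags every read that does not return the most recently written value:

```python
def invalid_reads(trace):
    """Scan a flattened trace of (user, cell, op, value) tuples and return
    the indices of invalid reads per Definition 6: a read of cell j by
    user i is valid only if it returns the most recently written value
    for (i, j), or None (standing in for 'bottom') if never written."""
    last_write = {}                    # (user, cell) -> last written value
    bad = []
    for idx, (user, cell, op, value) in enumerate(trace):
        if op == "Write":
            last_write[(user, cell)] = value
        elif value != last_write.get((user, cell)):
            bad.append(idx)
    return bad
```

A trace is invalid iff `invalid_reads` returns a non-empty list; integrity requires this to occur with only negligible probability in the honest users' view.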
3 LinearComplexity AnonRAM
In this section, we present our first AnonRAM constructions and prove them secure in the underlying model. For the sake of exposition, we start with a few seemingly natural but flawed approaches to construct AnonRAM schemes.
Seemingly Natural but Flawed Approaches. A first natural idea to design an AnonRAM scheme is to maintain all the \(M \cdot N\) cells in encrypted form on the server and to only access them via an anonymous channel such as Tor [37]. However, this approach fails to achieve AnonRAM privacy, since the adversary can simply observe all memory accesses on the server and thereby determine how often the same cell j of a user is accessed. One may try to overcome this problem using a shared \((M\cdot N)\)-cell stateless ORAM [23] containing M cells for each of N users and assuming that every user executes her ORAM requests via an anonymous channel. In this case, all users will have to use the same private key in the symmetric encryption scheme employed in the ORAM protocol to hide their cells from the server. However, this allows Eve, an HbC user, to break the privacy of honest users by observing the values in cells (allocated to honest users) which she downloads and decrypts as part of her legitimate ORAM requests.
Another natural design would be to use a separate ORAM for the M cells of each user and rely on anonymous access to hide user identities. This would hide each user’s individual access pattern, but the server can identify all accesses made by the same user and thereby violate the AnonRAM privacy requirement.
The AnonRAM schemes presented in this paper overcome such problems by having each user, whenever her own (encrypted) cells are accessed, also rerandomize the cells belonging to the other users.
3.1 \(\mathsf {AnonRAM}_{\mathsf {lin}}\) and Its Security Against HbC Adversaries
We now present the \(\mathsf {AnonRAM}_{\mathsf {lin}}\) construction and prove it secure in the \(\mathsf {HbC}\) adversarial model. \(\mathsf {AnonRAM}_{\mathsf {lin}}\) uses an anonymous communication channel [37] and the (single-user, single-server) Path ORAM [36], or any other ORAM scheme satisfying a property identified below.
In Path ORAM, the user’s cells are stored in the server RAM as a set of encrypted data blocks such that each block consists of a single ciphertext and all blocks are encrypted with the same key known to the user’s ORAM client. A block encrypts either a user’s cell, or auxiliary information used by the \(\mathsf {User} \) algorithm. To access a cell, the ORAM client reads (and decrypts) a fixed number of blocks from the server, and writes encrypted values (cells or some special messages) in a fixed number of blocks. The server’s duty is to execute the user’s read and write requests.
\(\mathsf {AnonRAM}_{\mathsf {lin}}\) employs N instances (one per user) of Path ORAM for M cells each, while requiring a single server.^{Footnote 1} To encrypt data as required in the ORAM scheme, \(\mathsf {AnonRAM}_{\mathsf {lin}}\) uses a semantically secure rerandomizable encryption (RE) scheme \((\mathsf {E,R,D})\) (e.g., ElGamal encryption), where \(\mathsf {E}\), \(\mathsf {R}\), and \(\mathsf {D}\) are the encryption, rerandomization, and decryption operations, respectively. The \(\mathsf {AnonRAM}_{\mathsf {lin}}\) client of user \(\mathsf {U} _i\) has access to her private key \(sk_i\) and to the public keys \((pk_1, \dots , pk_N)\) of all users. In \(\mathsf {AnonRAM}_{\mathsf {lin}}\), the ORAM scheme uses this RE scheme \((\mathsf {E,R,D})\) instead of the (symmetric) encryption scheme used in ‘regular’ Path ORAM.
Intuitively, an \(\mathsf {AnonRAM}_{\mathsf {lin}}\) client internally runs an ORAM client and mediates its communication with the server. Whenever the ORAM client reads or writes a specific block, the \(\mathsf {AnonRAM}_{\mathsf {lin}}\) client performs corresponding read or write operations for all users, without divulging the user identity to the server at the network level, as follows: Reading a block of another user can be trivially achieved, since the block is encrypted under the owner’s key, but the contents are not used (our goal is only to create indistinguishable accesses for all users). Writing a block belonging to another user’s ORAM must not corrupt the data inside and is hence achieved by rerandomizing the blocks of other users.
The \(\mathsf {Setup} \) and \(\mathsf {Server} \) algorithms of \(\mathsf {AnonRAM}_{\mathsf {lin}}\) are simply N instances of the corresponding algorithms of the underlying ORAM scheme (e.g., Path ORAM). Namely, \(\mathsf {Setup} \) initializes state for N copies of the ORAM (one per user), and \(\mathsf {Server} \) receives a ‘user identifier’ i together with each request and runs the ORAM’s \(\mathsf {Server} \) algorithm on the request using the \(i^{th}\) state. The \(\mathsf {Server} \) algorithm thus simply processes Read/Write requests sent by the users as in the ORAM scheme, e.g., it returns the content of the requested block for Read requests, or overwrites the content of the requested block with the new value for Write requests.
We finally describe the \(\mathsf {User} \) algorithm of \(\mathsf {AnonRAM}_{\mathsf {lin}}\) using pseudocode in Fig. 1 to increase readability. It relies on an oracle \(\mathcal {O}_c\) for the ORAM client, and an RE scheme \((\mathsf {E,R,D})\). We write \((\mathsf {E}_i,\mathsf {R}_i,\mathsf {D}_i)\) for the corresponding encryption, rerandomization, and decryption operations using the corresponding keys for user \(\mathsf {U} _i\). The pseudocode depicts which operations are performed for an individual access request \((j,\alpha ,m)\) of user \(\mathsf {U} _i\). Its execution starts with invoking user \(\mathsf {U} _i\)’s local ORAM client \(\mathcal {O}_c\) with the access request \((j,\alpha ,m)\), and ends with a Return message to \(\mathsf {U} _i\). The process involves multiple instances of Read and Write requests from \(\mathcal {O}_c\) for specified blocks kept by the server. These requests to Read and Write blocks kept by the server should not be confused with access requests \((j,\alpha ,m)\), where \(\alpha \in \{ \mathsf {Read}, \mathsf {Write}\}\), for ORAM cells.
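To illustrate the core mechanism (this is our own sketch, not the paper's Fig. 1 pseudocode), the following Python code shows the multi-cell write step with a toy ElGamal group: user i freshly encrypts her new value while merely rerandomizing every other user's ciphertext, so all N slots change and the server cannot tell whose plaintext was updated. The group parameters are deliberately tiny and insecure, and all function names are ours:

```python
import secrets

# Toy group: p = 2q + 1 with q prime; g generates the order-q subgroup of
# quadratic residues mod p. These tiny parameters are for illustration only.
P, Q, G = 2039, 1019, 4

def keygen():
    x = secrets.randbelow(Q - 1) + 1            # sk_i in Z_q^*
    return x, pow(G, x, P)                      # (sk_i, pk_i = g^x)

def enc(pk, m):                                 # E_i: ElGamal encryption
    r = secrets.randbelow(Q - 1) + 1
    return pow(G, r, P), pow(pk, r, P) * m % P

def rerand(pk, c):                              # R_i: rerandomize, pk suffices
    r = secrets.randbelow(Q - 1) + 1
    return c[0] * pow(G, r, P) % P, c[1] * pow(pk, r, P) % P

def dec(sk, c):                                 # D_i: decrypt with sk_i
    return c[1] * pow(pow(c[0], sk, P), -1, P) % P

def write_multicell(multicell, pks, i, new_value):
    """User i updates her slot of a multi-cell: she freshly encrypts the new
    value and rerandomizes every other user's ciphertext, so all slots
    change and the server cannot tell which plaintext was modified."""
    return [enc(pks[u], new_value) if u == i else rerand(pks[u], c)
            for u, c in enumerate(multicell)]
```

Messages here are group elements (e.g., powers of g). Reads are analogous: user i decrypts her own slot and only rerandomizes the rest.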
So far, we selected Path ORAM as a specific ORAM instantiation. However, any other ORAM scheme is equally applicable, provided that it exhibits an additional property: individual accesses have to be indistinguishable, i.e., an adversary observing just one access request from an access pattern should not be able to recognize how many accesses the honest user has performed so far. We call this property indistinguishability of individual accesses; it is trivially satisfied by Path ORAM. Hierarchical ORAMs (e.g., [19, 23, 26, 27]), however, do not achieve indistinguishability of individual accesses, as the runtime of individual accesses depends on the number of accesses performed so far; in particular, the client has to periodically reshuffle a variable amount of data.
Theorem 1
\(\mathsf {AnonRAM}_{\mathsf {lin}}\) preserves access privacy in the adversarial model \(\mathsf {HbC}\), when using a secure ORAM scheme \(\mathcal {O}\) that satisfies indistinguishability of individual accesses, and a semantically secure rerandomizable encryption scheme \((\mathsf {E,R,D})\).
Proof
Assume to the contrary that some PPT HbC adversary \(\mathcal {D} \) can efficiently distinguish, with a non-negligible advantage, between a pair of access patterns \(AP=\{(i_u,j_u,\alpha _u,m_u)\}, AP'=\{(i'_u, j'_u, \alpha '_u, m'_u)\}_{u\in [1,len]}\) of length len.
Let \(AP_{v}=\{(i^*_u,j^*_u,\alpha ^*_u,m^*_u)\}_{u\in [1,len]}\) be a ‘hybrid’ access pattern, where \((i^*_u,j^*_u,\alpha ^*_u,m^*_u)\) \(=(i_u,j_u,\alpha _u,m_u)\) for \(u\le {v}\), and \((i^*_u,j^*_u,\alpha ^*_u, m^*_u)=(i'_u,j'_u,\alpha '_u,m'_u)\) for \(u> {v}\). Let \({v}\) be the smallest value such that some adversary (say \({\mathcal {D}}\)) can distinguish between \(AP_{{v}-1}\) and \(AP_{{v}}\); such a \({v}>0\) exists by the standard ‘hybrid argument’, as AP and \(AP'\) differ in at least one access.
If \(i_{v}=i'_{v}\) (i.e., for the same user), the executions only differ in the ORAM client \(\mathcal {O}_c\) Read/Write blocks for \(\mathsf {U} _{i_{v}}\); however, this immediately contradicts the privacy of the underlying ORAM scheme. Notice that a user does not decrypt or modify the other users’ data during her accesses.
Therefore, assume \(i_{v}\ne i'_{v}\). Since the ORAM client \(\mathcal {O}_c\) satisfies indistinguishability of individual accesses, the difference between these two patterns lies only between the encryption of the blocks output by \(\mathcal {O}_c\) and the rerandomization of the blocks received anonymously by the ORAM server. However, the ability to distinguish between these contradicts the indistinguishability property of the semantically secure rerandomizable encryption scheme \((\mathsf {E,R,D})\). \(\square \)
Let \(c_S\) and \(c_B\) denote the amortized costs of clientside storage and communication complexity of the underlying ORAM protocol. Then, the respective amortized costs of \(\mathsf {AnonRAM}_{\mathsf {lin}}\) are \(N\cdot {c_S}\) and \(N\cdot c_B\). For example, using Path ORAM, the clientside storage and communication complexity costs of \(\mathsf {AnonRAM}_{\mathsf {lin}}\) become \(O(N\log {M})\) and \(O(N\log ^2{M})\).
3.2 \(\mathsf {AnonRAM}_{\mathsf {lin}}^{\mathbf {M}}{}\) and Its Security Against Malicious Users
When some users are malicious, we need to ensure that only the user knowing the private key associated with a block can update the value inside the block, while other users can only rerandomize it. To lift the security of \(\mathsf {AnonRAM}_{\mathsf {lin}}\) to the adversarial model of malicious users, we require a semantically secure encryption primitive such that a ciphertext \(C'\) may replace a ciphertext C only if \(C'\) is a rerandomization of C or if the writer knows the encryption key for C. Whenever a block is written, the user attaches a zero-knowledge proof showing either that the new ciphertext is a rerandomization of the previous ciphertext or that the user holds the (secret) encryption key. The server verifies the proof before updating the block in its RAM memory. This ensures indistinguishability of rerandomizations from new encryptions, while ensuring that one user cannot corrupt or modify any value of another user. We denote the resulting scheme by \(\mathsf {AnonRAM}_{\mathsf {lin}}^{\mathbf {M}}\).
The required ZK proofs are standard. For the rerandomizable CPA-secure ElGamal encryption scheme, this involves a ZK proof of knowledge of a discrete logarithm [33] and a ZK proof of equality of the discrete logarithms of two pairs of group elements [9], composed in such a way that a user proves validity of one of the statements without revealing to the server which statement has been proven [12, 14] (see also Example 3 of [5]). Following the formal notation from [24] and extending it to proving ‘one-out-of-several’ statements, the required proof is
\(P(PoK)\left\{ (x_i, r) \, : \; pk_i = g^{x_i} \;\vee \; \left( C'_1 = C_1\cdot g^{r} \,\wedge \, C'_2 = C_2\cdot pk_i^{r}\right) \right\} ,\)
where P(PoK) stands for proof (of knowledge), g is a generator of a group of prime order q, \((pk_i, C=(C_1, C_2), C'=(C'_1, C'_2))\) are group elements, and \((x_i, r)\) are elements in \(\mathbb {Z}_q\).
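As an illustration of how such a one-out-of-two proof can be made non-interactive, the sketch below uses the standard Fiat-Shamir transform with OR-composition of a Schnorr proof (key knowledge) and a Chaum-Pedersen proof (correct rerandomization). This is our own sketch over a deliberately tiny toy group, not necessarily the exact instantiation used in the paper:

```python
import hashlib, secrets

P, Q, G = 2039, 1019, 4   # toy group p = 2q+1; real use needs a large group

def _hash(*elems):
    h = hashlib.sha256(b"|".join(str(e).encode() for e in elems)).digest()
    return int.from_bytes(h, "big") % Q

def prove_or(pk, C, Cp, *, x=None, r=None):
    """OR-proof for: 'I know x with pk = g^x' (the block owner) OR 'I know r
    such that Cp is a rerandomization of C with randomness r' (any other
    user). Exactly one of x, r is supplied; the other branch is simulated."""
    U = Cp[0] * pow(C[0], -1, P) % P           # C'_1 / C_1, equals g^r
    V = Cp[1] * pow(C[1], -1, P) % P           # C'_2 / C_2, equals pk^r
    if r is not None:                          # real branch: rerandomization
        e1, z1 = secrets.randbelow(Q), secrets.randbelow(Q)
        t1 = pow(G, z1, P) * pow(pk, -e1, P) % P    # simulated Schnorr
        w = secrets.randbelow(Q)
        t2a, t2b = pow(G, w, P), pow(pk, w, P)      # real Chaum-Pedersen
        e = _hash(pk, *C, *Cp, t1, t2a, t2b)
        e2 = (e - e1) % Q
        z2 = (w + e2 * r) % Q
    else:                                      # real branch: key knowledge
        e2, z2 = secrets.randbelow(Q), secrets.randbelow(Q)
        t2a = pow(G, z2, P) * pow(U, -e2, P) % P    # simulated Chaum-Pedersen
        t2b = pow(pk, z2, P) * pow(V, -e2, P) % P
        w = secrets.randbelow(Q)
        t1 = pow(G, w, P)                           # real Schnorr
        e = _hash(pk, *C, *Cp, t1, t2a, t2b)
        e1 = (e - e2) % Q
        z1 = (w + e1 * x) % Q
    return e1, z1, e2, z2

def verify_or(pk, C, Cp, proof):
    e1, z1, e2, z2 = proof
    U = Cp[0] * pow(C[0], -1, P) % P
    V = Cp[1] * pow(C[1], -1, P) % P
    t1 = pow(G, z1, P) * pow(pk, -e1, P) % P
    t2a = pow(G, z2, P) * pow(U, -e2, P) % P
    t2b = pow(pk, z2, P) * pow(V, -e2, P) % P
    return (e1 + e2) % Q == _hash(pk, *C, *Cp, t1, t2a, t2b)
```

The server runs `verify_or` before overwriting a block; since the two challenge shares sum to the random-oracle output, the transcript reveals nothing about which branch was proven.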
Theorem 2
\(\mathsf {AnonRAM}_{\mathsf {lin}}^{\mathbf {M}}\), based on a secure ORAM scheme \(\mathcal {O}\) that satisfies indistinguishability of individual accesses, a CPA-secure public-key encryption scheme (e.g., ElGamal), and the ZK proof defined above, preserves integrity and privacy in the adversarial model \(\mathsf {Mal\_Users}\).
Proof Sketch
The integrity argument is simple: the use of the ZK proof for proving one-out-of-two statements ensures that adversarial users cannot modify the cells of honest users. The adversarial users also cannot change the order of cells in a sequence of cells, as the server verifies correctness of one cell at a time. They can only rerandomize the cells of the honest users.
The privacy properties are preserved similarly to \(\mathsf {AnonRAM}_{\mathsf {lin}}\): the disjunctive nature of the included ZK proof does not allow the server to determine which of the N cells an honest user modified, while privacy of the accessed cell index as well as the access type is maintained by the employed ORAM scheme. \(\square \)
4 PolylogarithmicComplexity AnonRAM
The \(\mathsf {AnonRAM}_{\mathsf {lin}}\) scheme exhibits acceptable performance for a small number of users, but its linear overhead renders it prohibitively expensive as the number of users increases. In this section, we present \(\mathsf {AnonRAM}_\mathsf {polylog}\), an AnonRAM scheme whose overhead is polylogarithmic in the number of users.
\(\mathsf {AnonRAM}_\mathsf {polylog}\) is conceptually based on the hierarchical Goldreich-Ostrovsky ORAM (GO-ORAM) construction [19], where a user periodically reshuffles her cells maintained on a storage server \(\mathsf {S} \). To reshuffle cells belonging to multiple users, we introduce in \(\mathsf {AnonRAM}_\mathsf {polylog}\) an additional server, the so-called tag server \(\mathsf {T} \). The tag server reshuffles data on the users’ behalf, without knowing the data elements, and thereby maintains user privacy against the storage server \(\mathsf {S} \) as well as against the other users. The tag server only requires constant-size storage to perform this reshuffling, and we show that, similarly to the storage server, it cannot violate (on its own or with colluding users) the privacy requirements of AnonRAM schemes.^{Footnote 2}
We first describe the necessary cryptographic tools and then the \(\mathsf {AnonRAM}_\mathsf {polylog}\) construction. \(\mathsf {AnonRAM}_\mathsf {polylog}\) is secure for honest-but-curious users. Due to space constraints, we only provide informal descriptions for parts of the construction; detailed descriptions of \(\mathsf {AnonRAM}_\mathsf {polylog} \) and \(\mathsf {AnonRAM}_\mathsf {polylog}^{\mathbf {M}} \), an extension dealing with malicious users, and their security analysis are in the extended version [1]. \(\mathsf {AnonRAM}_\mathsf {polylog}^{\mathbf {M}} \) adds an integrity element to \(\mathsf {AnonRAM}_\mathsf {polylog} \) which, in the end, boils down to constructing a ZK proof system based on the techniques described in Sect. 3.2.
4.1 Cryptographic Building Blocks
Universally Rerandomizable Encryption. A universally rerandomizable encryption (UREnc) scheme [20, 31] allows anyone to rerandomize given ciphertexts without access to the encryption key. We use the construction of Golle et al. [20]: for a generator g of a multiplicative group \(G_q\) of prime order q and a private/public key pair \((x_i, g^{x_i})\) for party i with \(x_i \in \mathbb {Z}_q^*\), the encryption \(C=\mathsf {E}^*_i(m)\) of a message m is computed as an ElGamal encryption of m together with an ElGamal encryption of the identity \(1 \in G_q\); i.e., \(C=(g^a, g^{ax_i}\cdot m, g^b, g^{b x_i}\cdot 1)\) for random \(a,b \in \mathbb {Z}_q^*\). The ciphertext C can be rerandomized, denoted \(\mathsf {R}^*(C)\), by selecting \(a', b' \leftarrow _R \mathbb {Z}_q^*\) and outputting \((g^a\cdot (g^b)^{a'}, g^{a x_i}\cdot m\cdot (g^{b x_i})^{a'}, (g^b)^{b'}, (g^{b x_i})^{b'})\) as the new ciphertext. Note that this scheme is also multiplicatively homomorphic.
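As a concrete illustration, the scheme above can be sketched in a few lines of Python. The tiny group parameters (a subgroup of order \(q=11\) in \(\mathbb {Z}_{23}^*\)) are for demonstration only and are, of course, insecure:

```python
import random

# Toy sketch of the Golle et al. UREnc scheme with INSECURE demo
# parameters: a subgroup of order q = 11 in Z_23^*, generated by g = 4.
p, q, g = 23, 11, 4

def keygen():
    x = random.randrange(1, q)                  # private key x in Z_q^*
    return x, pow(g, x, p)                      # (sk, pk)

def enc(x, m):
    # Ciphertext = ElGamal of m together with ElGamal of the identity 1.
    a, b = random.randrange(1, q), random.randrange(1, q)
    return (pow(g, a, p), pow(g, a * x, p) * m % p,
            pow(g, b, p), pow(g, b * x, p))

def rerand(C):
    # Universal re-randomization: needs no key material at all.
    c1, c2, c3, c4 = C
    a2, b2 = random.randrange(1, q), random.randrange(1, q)
    return (c1 * pow(c3, a2, p) % p, c2 * pow(c4, a2, p) % p,
            pow(c3, b2, p), pow(c4, b2, p))

def dec(x, C):
    c1, c2, c3, c4 = C
    assert c4 == pow(c3, x, p)          # unit part identifies own ciphertexts
    return c2 * pow(pow(c1, x, p), -1, p) % p

sk, pk = keygen()
m = pow(g, 5, p)                        # messages must lie in the subgroup
C = enc(sk, m)
assert dec(sk, rerand(rerand(C))) == m  # survives repeated re-randomization
```

The second ElGamal component (the encryption of 1) is what makes re-randomization key-free: raising it to a fresh exponent yields new randomness for the first component.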
We employ a distributed version of the UREnc scheme, where the private key is shared between two servers such that both have to be involved in decryption.
(Partially Key-Homomorphic) Oblivious PRF. An oblivious pseudorandom function (OPRF) [18, 24] enables a party holding an input tag \(\mu \) to obtain the output \(f_s(\mu )\) of a PRF \(f_s(\cdot )\) from another party holding the key \(s\), without the latter party learning any information about the input tag \(\mu \).
We use the Jarecki–Liu OPRF construction [24] as our starting point. Here, the underlying PRF \(f_{s}(\cdot )\) is a variant of the Dodis–Yampolskiy PRF construction [16] such that \(f_{s}(\mu ) := \mathsf {g}^{1/(s+\mu )}\) is defined over a composite-order group of order \(n=p_1 p_2\) for safe primes \(p_1\) and \(p_2\). This function constitutes a PRF if factoring safe RSA moduli is hard and the Decisional q-Diffie-Hellman Inversion assumption holds on a suitable group family \(\mathsf {G}_n\) [24].
To securely realize pretag randomization in our \(\mathsf {Reshuffle}\) algorithm (explained later), we propose a modification of the Jarecki–Liu OPRF where a second key \({\hat{s}}\) is used to define a new PRF \(f_{s, {\hat{s}}}(\mu ):=\mathsf {g}^{{\hat{s}}/(s+\mu )}\). We call such a PRF partially key-homomorphic, as \((f_{s,{\hat{s}}}(\mu ))^\delta = f_{s,({\hat{s}}\cdot \delta )}(\mu )\) holds for it. For unlinkability of PRF values of the same input \(\mu \) under an updated \(\delta \), we expect the Composite DDH assumption^{Footnote 3} [6] to hold in \(\mathsf {G}_n\). We denote our OPRF construction as \(\mathsf {OPRF} _{s, {\hat{s}}}^{\mathcal {A}, \mathcal {B}}(\mu )\), where \(\mathcal {A}\) denotes a party with input \(\mu \) and \(\mathcal {B}\) denotes a server possessing the keys \(s\) and \({\hat{s}}\). Our OPRF protocol makes only minor changes to the Jarecki–Liu OPRF, and we postpone its full description and security analysis to the extended version of the paper [1].
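The partial key homomorphism \((f_{s,{\hat{s}}}(\mu ))^\delta = f_{s,({\hat{s}}\cdot \delta )}(\mu )\) can be checked with toy parameters. This is an insecure sketch; realizing \(\mathsf {G}_n\) as the order-\(n\) subgroup of \(\mathbb {Z}_P^*\) for a prime \(P\) with \(n \mid P-1\) is one convenient choice made here for illustration:

```python
# Toy demo of the partially key-homomorphic PRF
# f_{s,s^}(mu) = g^(s^/(s+mu)) over a group of composite order n = p1*p2.
# Parameters are tiny and INSECURE; real use needs a safe RSA modulus.
p1, p2 = 23, 47                  # safe primes (toy size)
n = p1 * p2                      # composite group order, n = 1081
P = 12 * n + 1                   # prime modulus with n | P - 1  (P = 12973)
g = pow(7, (P - 1) // n, P)      # element whose order divides n

def f(s, s_hat, mu):
    inv = pow(s + mu, -1, n)     # 1/(s+mu) mod n; needs gcd(s+mu, n) = 1
    return pow(g, s_hat * inv % n, P)

s, s_hat, mu, delta = 5, 9, 3, 4
# Partial key homomorphism: raising an output to delta re-keys s^ to s^*delta.
assert pow(f(s, s_hat, mu), delta, P) == f(s, s_hat * delta % n, mu)
```

This identity is exactly what lets the storage server update pretags homomorphically during reshuffles, as described in Sect. 4.3.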
Multiplicatively Homomorphic Encryption. To compute appropriately on our \(\mathsf {OPRF} \) outputs, which are elements of a group of order n generated by \(\mathsf {g}\), we need a suitable multiplicatively homomorphic encryption scheme whose decryption is shared between our two servers. To this end, we employ a semantically secure ElGamal encryption scheme, whose security relies on the DDH assumption in the underlying group. Here, an encryption \(\mathsf {E}^{+*}_{\mathsf {pk}}(m)\) denotes a message \(m \in \mathsf {G}_n\) encrypted under a public key \(\mathsf {pk}= \mathsf {g}^\mathsf {sk}\), where \(\mathsf {g}\) is a generator of the group \(\mathsf {G}_n\) of order n, and the private key \(\mathsf {sk}\) belongs to \(\mathbb {Z}_{n/4}^*\).
On the one hand, ElGamal is a multiplicatively homomorphic encryption scheme; on the other hand, the message space matches the output space of our \(\mathsf {OPRF}\); therefore the scheme is additively homomorphic w.r.t. OPRF inputs: \(\mathsf {E}^{+*}_{\mathsf {pk}}(m) \cdot \mathsf {E}^{+*}_{\mathsf {pk}}(m') = \mathsf {E}^{+*}_{\mathsf {pk}}(m \cdot m')\) for any \(m, m' \in \mathsf {G}_n\), and \(\mathsf {E}^{+*}_{\mathsf {pk}}(m)^\delta = \mathsf {E}^{+*}_{\mathsf {pk}}(m^\delta )\) for any \(\delta \in \mathbb {Z}_n^*\). This scheme, moreover, allows shared decryption; i.e., given public/private key pairs \((\mathsf {pk}_\mathcal {A}, \mathsf {sk}_\mathcal {A})\) and \((\mathsf {pk}_\mathcal {B}, \mathsf {sk}_\mathcal {B})\) of parties \(\mathcal {A}\) and \(\mathcal {B}\) and the joint public key \(\mathsf {pk}= \mathsf {pk}_\mathcal {A}\cdot \mathsf {pk}_\mathcal {B}\), parties \(\mathcal {A}\) and \(\mathcal {B}\) can jointly decrypt a ciphertext \(\mathsf {E}^{+*}_{\mathsf {pk}}(m)\) for a receiver using their private keys \(\mathsf {sk}_{\mathcal {A}}\) and \(\mathsf {sk}_{\mathcal {B}}\). In our construction, given a ciphertext encrypted under the joint public key of servers \(\mathsf {S}\) and \(\mathsf {T}\), they jointly decrypt the ciphertext such that the plaintext message is available to \(\mathsf {T}\).
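A minimal sketch of the multiplicative homomorphism and the two-server shared decryption, again with insecure toy parameters:

```python
import random

# Toy ElGamal over a prime-order subgroup (q = 11 in Z_23^*), showing the
# multiplicative homomorphism and two-party shared decryption as used by
# servers S and T.  Parameters are illustrative only, NOT secure.
p, q, g = 23, 11, 4

def enc(pk, m):
    r = random.randrange(1, q)
    return (pow(g, r, p), pow(pk, r, p) * m % p)

sk_S, sk_T = 3, 7
pk = pow(g, sk_S, p) * pow(g, sk_T, p) % p      # joint key pk = g^(sk_S+sk_T)

m1, m2 = pow(g, 2, p), pow(g, 5, p)
c1a, c1b = enc(pk, m1)
c2a, c2b = enc(pk, m2)

# Homomorphism: the component-wise product encrypts m1 * m2.
ca, cb = c1a * c2a % p, c1b * c2b % p

# Shared decryption: S strips its key share, then T finishes.
partial = cb * pow(pow(ca, sk_S, p), -1, p) % p       # S's step
plain = partial * pow(pow(ca, sk_T, p), -1, p) % p    # T's step
assert plain == m1 * m2 % p
```

In the construction, \(\mathsf {S}\) performs the first (partial) decryption step, and \(\mathsf {T}\) completes it, so the plaintext becomes available only to \(\mathsf {T}\).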
Oblivious Sort. In oblivious sort (\(\mathsf {OSort} \)), one party (in our case, \(\mathsf {S} \)) holds an encrypted data array, and the other party (\(\mathsf {T} \)) operates on the array such that it becomes sorted according to some comparison criterion, while \(\mathsf {S} \) learns nothing about the array (hence the name “oblivious” sort). \(\mathsf {OSort} \) can be instantiated with the recently introduced randomized Shellsort algorithm [21], which runs in \(O(z\log z)\) time for \(z\) elements.
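For intuition, here is a simple data-oblivious sorting network. Note this is not the randomized Shellsort of [21], which achieves \(O(z \log z)\) comparisons; this odd-even transposition network needs \(O(z^2)\) compare-exchanges but exhibits the same defining property, namely a comparison sequence that is independent of the data:

```python
def oblivious_sort(a):
    """Odd-even transposition sort: a fixed, data-independent sequence of
    compare-exchange operations (a sorting network)."""
    z = len(a)
    for rnd in range(z):                       # z rounds suffice
        for i in range(rnd % 2, z - 1, 2):
            # The pair (i, i+1) touched here depends only on z and rnd,
            # never on the values: an observer of the access pattern
            # learns nothing about the data ordering.
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

assert oblivious_sort([5, 1, 4, 2, 8, 0]) == [0, 1, 2, 4, 5, 8]
```

In \(\mathsf {AnonRAM}_\mathsf {polylog}\), \(\mathsf {T}\) issues such a fixed compare-exchange schedule over encrypted blocks held by \(\mathsf {S}\), so neither party learns the resulting permutation from the access pattern alone.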
4.2 \(\mathsf {AnonRAM}_\mathsf {polylog}\) Data Structure
\(\mathsf {AnonRAM}_\mathsf {polylog}\) caters to N independent users \((\mathsf {U} _1\), ..., \(\mathsf {U} _N)\) with their \(M\cdot N\) cells (i.e., M cells per user) using a storage server \(\mathsf {S} \) and a tag server \(\mathsf {T} \). Similarly to other hierarchical schemes [19, 26, 27, 30], blocks are organized in levels, where each level \(\ell \in [1,L]\) contains \(2^\ell \) buckets. Each bucket contains \(\beta = c_\beta \cdot \log (M \cdot N)\) blocks, where \(c_\beta \) is a (small) constant.
Similarly to GO-ORAM, during each access the user reads a pseudorandomly chosen (entire) bucket from each level, such that server \(\mathsf {S}\) cannot learn anything by observing the bucket access patterns. \(\mathsf {AnonRAM}_\mathsf {polylog}\) adopts a recent improvement to GO-ORAM proposed in [26, 27, 30] to avoid duplicate user blocks in the server-side (RAM) storage at any point in time. To achieve this, on every access the user writes a ‘dummy’ block into the location where she found her data, such that \(\mathsf {S} \) cannot distinguish the added ‘dummy’ block from the ‘real’ data block. These user-added dummy blocks are periodically removed to avoid memory expansion, and the remaining blocks are periodically reshuffled to allow users to access the same cell multiple times.
In existing single-user, single-server GO-ORAM designs [19, 26, 30], this reshuffling is performed by the user. In \(\mathsf {AnonRAM}_\mathsf {polylog}\), reshuffling operations involve blocks of different users and cannot be performed by one or more users without interacting with all other users. As we want to avoid interaction among the users, reshuffling in \(\mathsf {AnonRAM}_\mathsf {polylog}\) is jointly performed by the two noncolluding servers (the storage server \(\mathsf {S} \) and the tag server \(\mathsf {T} \)) without exposing a user’s data or access pattern to either server.
Block Types. Each block in \(\mathsf {AnonRAM}_\mathsf {polylog}\) consists of two parts: an ElGamal-encrypted \(\mathsf {OPRF}\) output called the pretag part and a UREnc-encrypted value part. We consider three types of blocks: real, empty, and dummy blocks.
A real block is of the form \( \langle \mathsf {E}^{+*}_{{\mathsf {T} \mathsf {S}}}(\theta _i), \mathsf {E}^*_{\mathsf {U} _i}(j,m_{i,j}) \rangle \). Here, the value part contains the \(j^\text {th}\) cell of user \(\mathsf {U} _i\) with value \(m_{i,j}\) encrypted with UREnc for \(\mathsf {U} _i\), while the pretag part contains a pretag \(\theta _i\) computed using \(\mathsf {OPRF}\) for some secret input of \(\mathsf {U} _i\) and encrypted using ElGamal for a joint public key of \(\mathsf {T} \) and \(\mathsf {S} \). The pretag \(\theta _i\) is computed by \(\mathsf {U} _i\) with help from the storage server \(\mathsf {S} \) using \(\mathsf {OPRF}\) and is used to map the block to a particular bucket on a given level. Given a pretag \(\theta \), for a level \(\ell \in [1,L]\), the bucket index (or tag) is computed by applying a random oracle hash function, \(h_\ell :\{0,1\}^*\rightarrow \mathbb {Z}_{2^\ell }\). The mapping changes after \(2^\ell \) accesses, which we refer to as an epoch.
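The pretag-to-bucket mapping can be sketched as follows. Modeling the random oracle \(h_\ell\) with SHA-256 and the particular encoding of the hash input are assumptions of this sketch:

```python
import hashlib

# Illustrative mapping of a pretag to a bucket index (tag) at a level:
# tag = h_l(epoch || pretag) mod 2^l, with h_l modeled as SHA-256 keyed
# by the level.  The concrete hash family and input encoding are
# assumptions of this sketch, not fixed by the paper.
def bucket_index(level, epoch, pretag):
    data = f"{level}|{epoch}|{pretag}".encode()
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") % (2 ** level)

# The same pretag maps to independent-looking buckets across levels
# and epochs, which is why the mapping changes once the epoch advances.
t1 = bucket_index(3, 0, 12345)
t2 = bucket_index(3, 1, 12345)
assert 0 <= t1 < 8 and 0 <= t2 < 8
```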
Empty blocks are padding blocks used to form buckets of the required size \(\beta \) on the storage server \(\mathsf {S} \). An empty block is of the form \(\langle \mathsf {E}^{+*}_{{\mathsf {T} \mathsf {S}}}(1),\) \(\mathsf {E}^*_{{\mathsf {T} \mathsf {S}}}\)(“empty”)\(\rangle \), where “empty” is a constant in the UREnc message space. Empty blocks are encrypted like all other block types to ensure privacy against the storage server \(\mathsf {S} \): the server should not be able to determine whether a user fetched an empty block or a real block. The first part of an empty block is an encryption of the identity \(1\), which allows the tag server \(\mathsf {T}\) to determine whether a block is empty during a reshuffle.
Finally, similarly to most ORAM algorithms, we use dummy blocks to hide locations of the real blocks. Once a real block with a specific index is found at some level, it is moved to a new bucket at the first level and is replaced with a dummy block in its old location. A dummy block is of the form \(\langle \mathsf {E}^{+*}_{{\mathsf {T} \mathsf {S}}}(\theta _\mathcal {D}),\mathsf {E}^*_{{\mathsf {T} \mathsf {S}}}\)(“dummy”)\(\rangle \), where the pretag \(\theta _\mathcal {D}\) is computed using \(\mathsf {OPRF}\) on the number (t) of accesses made by the users so far and a secret input \(\mu _\mathcal {D}\) known only to server \(\mathsf {T}\), and “dummy” is a constant in the UREnc message space.
Note that different blocks are completely indistinguishable to noncolluding servers \(\mathsf {S} \) and \(\mathsf {T} \) individually. Nevertheless, during the reshuffle operations, when necessary, with help from server \(\mathsf {S} \), server \(\mathsf {T} \) can determine the type of a block.
4.3 \(\mathsf {AnonRAM}_\mathsf {polylog}\) Protocol Overview
Initialization. We need to initialize UREnc, ElGamal, and \(\mathsf {OPRF}\). For the security parameter \(\lambda \), we choose a multiplicative group \(G_{q}\) of an appropriate prime order \(q\) for UREnc, and a multiplicative group \(\mathsf {G}_n\) of order equal to an appropriate safe RSA modulus n for ElGamal and \(\mathsf {OPRF}\). Let g and \(\mathsf {g}\) be generators of groups \(G_{q}\) and \(\mathsf {G}_n\) respectively.
Given this setup, every user generates her UREnc key from \(\mathbb {Z}_q^*\). The two servers select their individual shares of the private keys for both UREnc and ElGamal and publish the corresponding combined public key for ElGamal; no UREnc public key is needed for the two servers. We represent these encryptions as follows: \(\mathsf {E}^*_{\mathsf {U} _i}(\cdot )\) represents a UREnc encryption for user \(\mathsf {U} _i\); \(\mathsf {E}^*_{{\mathsf {T} \mathsf {S}}}(\cdot )\) and \(\mathsf {E}^{+*}_{{\mathsf {T} \mathsf {S}}}(\cdot )\) respectively represent shared UREnc and ElGamal encryptions for the servers \(\mathsf {S} \) and \(\mathsf {T} \). The servers make an encrypted empty block \(\mathsf {E}^*_{{\mathsf {T} \mathsf {S}}}\)(“empty”) and an encrypted dummy block \(\mathsf {E}^*_{{\mathsf {T} \mathsf {S}}}\)(“dummy”) public to all users.
Similarly to all existing hierarchical ORAM constructions, all levels in the \(\mathsf {AnonRAM}_\mathsf {polylog}\) data structure on \(\mathsf {S} \) are initially empty. In particular, the complete first level is filled up with empty blocks, while the rest of the levels are not yet allocated. The users write their \(M \cdot N\) cells, initialized to some default value, one by one at the first level such that, at the end of the initialization procedure, all \(M \cdot N\) cells are stored at level L and the remaining levels are empty (w.l.o.g. we assume that \(M \cdot N\) is a power of 2). Let t denote the access counter, which is made available publicly by the servers. Each level \(\ell \) has an epoch counter \({\mathbf {\xi }(t,\ell )}\) that increments after every \(2^{\ell -1}\) accesses; in other words, for level \(\ell \) and t accesses, the epoch counter is \({\mathbf {\xi }(t,\ell )} = \lceil t/2^{\ell -1} \rceil \).
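The epoch counter can be sketched as follows. The exact rounding convention is an assumption of this sketch; the text fixes only that the counter increments every \(2^{\ell-1}\) accesses:

```python
# Epoch counter for level l after t accesses, modeled as
# xi(t, l) = ceil(t / 2^(l-1)): the counter advances once per
# 2^(l-1) accesses, so bucket mappings at level l are refreshed
# on that schedule.  The rounding choice is an assumption.
def epoch(t, level):
    half = 1 << (level - 1)
    return (t + half - 1) // half            # ceiling division

assert epoch(1, 1) == 1 and epoch(2, 1) == 2   # level 1: new epoch each access
assert epoch(2, 2) == 1 and epoch(3, 2) == 2   # level 2: every 2 accesses
```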
Recall that our \(\mathsf {OPRF} \) employs two keys. \(\mathsf {S} \) generates the first (and fixed) \(\mathsf {OPRF} \) key \(s\leftarrow _R \mathbb {Z}_n^*\), and then a series of second \(\mathsf {OPRF} \) keys \({\hat{s}}[\ell ,{\mathbf {\xi }(t,\ell )}] \leftarrow _R \mathbb {Z}_n^*\), one for each level \(\ell \in [1, L]\) and the current access t. A user \(\mathsf {U} _i\) independently generates a secret PRF input \(\mu _i \in \mathbb {Z}_n\) and computes a pretag \(\theta \) for her block j using \(\mu _i\) by performing \(\mathsf {OPRF} \) with \(\mathsf {S} \). Similarly, the tag server \(\mathsf {T} \) generates a secret input \(\mu _\mathcal {D}\) for dummy blocks. To tag blocks, the construction uses a hash function family \(\{h_{\ell }\}\) with range \([0, 2^{\ell }-1]\), for each level \(\ell \in [1, L]\). In particular, a tag (or bucket index) for a pretag \(\theta \) is computed as \(h_{\ell }({\mathbf {\xi }(t,\ell )} \,\Vert \, \theta )\), where \(\Vert \) represents string concatenation.
Protocol Flow. Similarly to our constructions in Sect. 3, users communicate with the servers via anonymous channels. To access a cell j during the \(t^{\text {th}}\) access, user \(\mathsf {U} _i\) first computes the associated pretags \(\theta _i\) for all levels using \(\mathsf {OPRF}\) with \(\mathsf {S} \) on her secret inputs \(\mu _i\) and j. She also obtains the dummy pretags \(\theta _\mathcal {D}\) for all levels from server \(\mathsf {T}\) for the current value of the access counter t. Here, \(\mathsf {T} \) computes the pretags for dummy blocks by interacting with \(\mathsf {S} \) and sends them to the users, as the users cannot compute them locally. These pseudorandom pretag values depend on the level and the current epoch through the PRF keys used by \(\mathsf {S} \). Due to the oblivious nature of \(\mathsf {OPRF}\) and the secret inputs \(\mu _i\) of \(\mathsf {U} _i\) and \(\mu _\mathcal {D}\) of \(\mathsf {T} \), server \(\mathsf {S} \) does not learn the pretag values.
Once the pretags are computed, the user maps each of them to a bucket index (or tag) at its level \(\ell \) using \(h_\ell \). She then searches for her cell j starting from level 1, using tags computed from her pretags \(\theta _i\). Similarly to other hierarchical schemes, after obtaining her cell she searches the remaining levels with tags computed from the \(\theta _\mathcal {D}\) values. The updated cell j is added back to level 1. During this process, the pretag \(\theta \) associated with the user’s cell changes to another value \(\theta '\) indistinguishable from random. Figure 2 shows the main subflow of the \(\mathsf {User}\) algorithm executed by \(\mathsf {U} _i\) in cooperation with servers \(\mathsf {S}\) and \(\mathsf {T}\); in the \(\mathsf {User}\) flow, this subflow is repeated once for each level. Finally, at the end of \(\mathsf {User}\), the user computes a new pretag for the possibly updated cell j and stores a block containing both at the first level.
Although dummy pretags and tags are computed by and known to \(\mathsf {T} \), it cannot learn a tag employed by a user while requesting blocks from \(\mathsf {S} \), as communication between the user and server \(\mathsf {S} \) is encrypted. Neither can \(\mathsf {T} \) learn this information based on the content of blocks of specific tags retrieved by observed users, since \(\mathsf {S} \) rerandomizes blocks before sending them to users.
The main task of \(\mathsf {T} \) is to reshuffle the blocks without involving the users. In the \(\mathsf {Reshuffle} \) protocol, while reshuffling levels 1 to \(\ell \) into level \(\ell +1\), server \(\mathsf {T} \) copies, rerandomizes or changes blocks from levels 1 to \(\ell \), and then sorts them using oblivious sorting (\(\mathsf {OSort} \)) such that the users can obtain their required cells from level \(\ell +1\) by procuring the appropriate pretag values from server \(\mathsf {S} \). This step requires server \(\mathsf {S} \) to help server \(\mathsf {T} \) decrypt the randomized versions of the pretags in the blocks. After every second access, \(\mathsf {T} \) reshuffles level 1 into level 2 on \(\mathsf {S}\) to empty level 1; after every fourth access, all real blocks at levels 1 and 2 are moved to level 3, and so on.
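The reshuffle schedule described above can be sketched as follows (a sketch of the cadence only; on odd accesses no reshuffle is triggered):

```python
# Reshuffle schedule sketch: at access t, levels 1..l are merged into
# level l+1, where 2^l is the largest power of two dividing t.  This
# matches the pattern in the text: every 2nd access -> level 1 into 2,
# every 4th access -> levels 1-2 into 3, and so on.
def reshuffle_target(t):
    l = (t & -t).bit_length() - 1        # number of trailing zero bits of t
    return l + 1 if l > 0 else None      # destination level, or no reshuffle

assert reshuffle_target(1) is None       # odd access: no reshuffle
assert reshuffle_target(2) == 2          # level 1 -> level 2
assert reshuffle_target(4) == 3          # levels 1..2 -> level 3
assert reshuffle_target(12) == 3         # 12 = 4 * 3: levels 1..2 -> level 3
```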
The crucial property is that, while reshuffling, server \(\mathsf {T} \) must not learn any information about users’ data from the pretags. To prevent \(\mathsf {T} \) from identifying users’ cells by their pretags, \(\mathsf {S} \) proactively shuffles all blocks that \(\mathsf {T} \) will access during \(\mathsf {Reshuffle}\) and updates the pretags associated with these blocks. Here, \(\mathsf {S} \) utilizes the homomorphic properties of our \(\mathsf {OPRF}\): in particular, for a pretag \(\theta = f_{s,{\hat{s}}}(\mu )\) under server \(\mathsf {S} \)’s \(\mathsf {OPRF}\) keys \(s, {\hat{s}}\), the server computes \(\theta ^\delta = f_{s,({\hat{s}}\cdot \delta )}(\mu )\) for some random \(\delta \). Although the pretags in the blocks are stored in encrypted form and cannot be decrypted by \(\mathsf {S}\) alone, the homomorphic properties of ElGamal allow \(\mathsf {S}\) to apply this trick to the ciphertexts without knowing the pretags in plain. Finally, \(\mathsf {S}\) partially decrypts the pretags of the blocks that have to be reshuffled by \(\mathsf {T}\) and moves these blocks to a temporary array.
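The key step, re-keying an encrypted pretag without ever decrypting it, can be sketched with toy ElGamal parameters (insecure, for illustration only):

```python
# S's pretag update applied directly to an ElGamal ciphertext:
# raising both components to delta turns E(theta) into E(theta^delta),
# i.e. an encryption of f_{s, s^*delta}(mu), so S re-keys every pretag
# without decrypting anything.  Toy, INSECURE parameters.
p, q, g = 23, 11, 4
sk, r, delta = 6, 5, 3
theta = pow(g, 9, p)                          # some pretag in the subgroup
pk = pow(g, sk, p)

c = (pow(g, r, p), pow(pk, r, p) * theta % p)           # E(theta)
c_delta = (pow(c[0], delta, p), pow(c[1], delta, p))    # E(theta)^delta

# Decrypting the updated ciphertext yields theta^delta, as claimed.
recovered = c_delta[1] * pow(pow(c_delta[0], sk, p), -1, p) % p
assert recovered == pow(theta, delta, p)
```

Combined with the partial key homomorphism of the OPRF, this means \(\mathsf {T}\) later sees only pretags under the re-keyed \({\hat{s}}\cdot \delta \), unlinkable to earlier reshuffles.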
After this preprocessing by server \(\mathsf {S}\), server \(\mathsf {T}\) decrypts the pretags of the blocks and reshuffles the nonempty blocks to arrange them into buckets based on their pretags. This process is essentially the same as the oblivious hashing step in GO-ORAM [19], except for the deduplication of blocks [30]. Specifically, while reshuffling blocks from levels 1 to \(\ell \) into level \(\ell +1\), \(\mathsf {T}\) first adds \(2^{\ell }\) forward dummy blocks that can potentially be accessed by a user in subsequent accesses. It then assigns tags to nonempty blocks using hash function \(h_{\ell +1}\) and ensures that no tag gets assigned to more than \(\beta \) blocks. Finally, \(\mathsf {T}\) pads the temporary array with tagged empty blocks such that exactly \(\beta \) blocks carry each tag, replaces forward dummy blocks with empty ones, and moves all these blocks to level \(\ell +1\) on server \(\mathsf {S} \). Note that \(\mathsf {T}\) cannot link the pretags seen in the current \(\mathsf {Reshuffle}\) execution to those observed during previous reshuffles, as the value \(\delta \) chosen by \(\mathsf {S} \) is unknown to \(\mathsf {T}\).
Elaborate descriptions of the \(\mathsf {User}\) and \(\mathsf {Reshuffle}\) algorithms are available in the extended version of the paper [1].
Complexity Analysis. Computational and communication complexity of \(\mathsf {User} \) is \(O(\log ^2(M \cdot N))\) since there are \(L=O(\log (M \cdot N))\) levels, and for each level a user performs \(\beta = O(\log (M \cdot N))\) encryptions, decryptions, and OPRF evaluations. Each of these operations requires O(1) exponentiations.
Computational and communication complexity of \(\mathsf {Reshuffle} \) depends on the parameter t. Consider the state after \(\mathsf {Setup} \) and the state after \(M \cdot N\) subsequent accesses: they are identical, as all real blocks are located at level L. Hence, it suffices to analyze this interval. Let \(\mathsf {Reshuffle} (\ell )\) denote the reshuffle from levels 1 to \(\ell \) into level \(\ell +1\), and let \(\mathsf {\rho } (\ell )\) denote its complexity. In \(\mathsf {Reshuffle} (\ell )\), the number of blocks involved is \(2^{\ell +1} \beta \); hence \(\mathsf {\rho } (\ell ) = O(2^{\ell +1} \cdot \beta \cdot \log (2^{\ell +1} \cdot \beta ) )\) due to the cost of \(\mathsf {OSort}\). Within \(M \cdot N\) accesses, there is one \(\mathsf {Reshuffle} (L)\), no \(\mathsf {Reshuffle} (L-1)\) (since level L initially already contains \(M \cdot N\) elements), one \(\mathsf {Reshuffle} (L-2)\), two \(\mathsf {Reshuffle} (L-3)\), four \(\mathsf {Reshuffle} (L-4)\), etc. Thus, the total complexity of all reshuffles made within \(M \cdot N\) accesses is \( (\sum _{\ell =1}^{L-2} 2^{L-2-\ell } \cdot \mathsf {\rho } (\ell )) + \mathsf {\rho } (L)\) \( = ( \sum _{\ell =1}^{L-2} 2^{L-2-\ell } \cdot O(2^{\ell +1} \cdot \beta \cdot \log (2^{\ell +1} \cdot \beta ) ) ) + O(2^{L+1} \cdot \beta \cdot \log (2^{L+1} \cdot \beta ))\) \( = ( 2^{L-1} \cdot \beta \cdot \sum _{\ell =1}^{L-2} O(\log (2^{\ell +1} \cdot \beta ) ) ) + O(M \cdot N \cdot \log (M \cdot N) \cdot \log (2^{L+1} \cdot \beta ))\) \( = O(M \cdot N) \cdot \beta \cdot O(L^2) + O(M \cdot N \cdot \log ^2(M \cdot N))\) \( = O(M \cdot N \cdot \log ^3(M \cdot N))\).
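The accounting above can be checked numerically; the constant in \(\beta\) (here taken as 1) and the concrete level count are assumptions of this sketch:

```python
from math import log2

# Numeric sanity check of the reshuffle accounting: within M*N accesses,
# Reshuffle(l) runs 2^(L-2-l) times for l <= L-2, plus one Reshuffle(L),
# and each one handles 2^(l+1)*beta blocks at O(z log z) OSort cost.
L = 16                          # so M*N = 2^L cells
beta = int(log2(2 ** L))        # beta = O(log(M*N)) blocks per bucket

total = sum(2 ** (L - 2 - l) * (2 ** (l + 1) * beta * log2(2 ** (l + 1) * beta))
            for l in range(1, L - 1))                      # l = 1 .. L-2
total += 2 ** (L + 1) * beta * log2(2 ** (L + 1) * beta)   # one Reshuffle(L)

per_access = total / 2 ** L
# The amortized cost stays within a small multiple of log^3(M*N),
# i.e. polylogarithmic in the total number of cells.
assert per_access < 2 * log2(2 ** L) ** 3
```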
Hence, the amortized cost of \(\mathsf {Reshuffle}\) is \(\tilde{O}(\log ^3(M\cdot N))\).
Theorem 3
\(\mathsf {AnonRAM}_\mathsf {polylog}\) preserves access privacy against HbC adversaries in the random oracle model, when instantiated with semantically secure universally rerandomizable encryption (UREnc) and multiplicatively homomorphic encryption schemes, and a secure (partially keyhomomorphic) oblivious PRF scheme for appropriate compatible domains.
The proof of Theorem 3 is available in the extended version of the paper [1].
5 Conclusion and Future Work
We have defined the concept of Anonymous RAM (AnonRAM) and presented two provably secure constructions. AnonRAM simultaneously provides privacy of content, access patterns and user identities, while additionally ensuring the integrity of users’ data. It hence constitutes a natural extension of the concept of oblivious RAM (ORAM) to a domain with multiple, mutually distrusting users. Our first construction exhibits an access complexity linear in the number of users, while the second one improves this to an amortized access cost that is polylogarithmic in the total number of cells of all users, at the cost of requiring two noncolluding servers. Both constructions come in a simpler version that assumes honest-but-curious users and in a version secure against malicious users.
Several challenges remain. In particular, it will be interesting to design a single-server AnonRAM scheme with polylogarithmic access complexity. It will also be interesting to manage concurrent accesses to the server by the users.
Notes
 1.
\(\mathsf {AnonRAM}_{\mathsf {lin}}\) can also use an ORAM scheme that uses multiple servers. In this case, \(\mathsf {AnonRAM}_{\mathsf {lin}}\) will use the same number of servers.
 2.
Adhering to our adversarial model from Sect. 2, we only consider the corruption of a single server, and hence assume noncolluding servers \(\mathsf {S} \) and \(\mathsf {T} \).
 3.
References
Backes, M., Herzberg, A., Kate, A., Pryvalov, I.: Anonymous RAM (extended version). Cryptology ePrint Archive, Report 2016/678 (2016)
Bindschaedler, V., Naveed, M., Pan, X., Wang, X., Huang, Y.: Practicing oblivious access on cloud storage: the gap, the fallacy, and the new way forward. In: CCS, pp. 837–849 (2015)
Boneh, D.: The decision Diffie-Hellman problem. In: Buhler, J.P. (ed.) ANTS 1998. LNCS, vol. 1423, pp. 48–63. Springer, Heidelberg (1998)
Boyle, E., Chung, K.-M., Pass, R.: Oblivious parallel RAM and applications. In: Kushilevitz, E., Malkin, T. (eds.) TCC 2016-A. LNCS, vol. 9563, pp. 175–204. Springer, Heidelberg (2016). doi:10.1007/978-3-662-49099-0_7
Camenisch, J., Stadler, M.: Proof systems for general statements about discrete logarithms. Technical report, TR260. Dept. of Computer Science, ETH Zürich, March 1997
Catalano, D., Gennaro, R.: New efficient and secure protocols for verifiable signature sharing and other applications. In: Krawczyk, H. (ed.) CRYPTO 1998. LNCS, vol. 1462, pp. 105–120. Springer, Heidelberg (1998)
Chaum, D.: Untraceable electronic mail, return addresses, and digital pseudonyms. Commun. ACM 24(2), 84–88 (1981)
Chaum, D.: The dining cryptographers problem: unconditional sender and recipient untraceability. J. Cryptology 1(1), 65–75 (1988)
Chaum, D., Pedersen, T.P.: Wallet databases with observers. In: Brickell, E.F. (ed.) CRYPTO 1992. LNCS, vol. 740, pp. 89–105. Springer, Heidelberg (1993)
Chen, B., Lin, H., Tessaro, S.: Oblivious parallel RAM: improved efficiency and generic constructions. In: Kushilevitz, E., Malkin, T. (eds.) TCC 2016-A. LNCS, vol. 9563, pp. 205–234. Springer, Heidelberg (2016). doi:10.1007/978-3-662-49099-0_8
Chung, K.-M., Liu, Z., Pass, R.: Statistically-secure ORAM with \(\tilde{O}(\log ^2 n)\) overhead. In: Sarkar, P., Iwata, T. (eds.) ASIACRYPT 2014, Part II. LNCS, vol. 8874, pp. 62–81. Springer, Heidelberg (2014)
Cramer, R., Damgård, I.B., Schoenmakers, B.: Proof of partial knowledge and simplified design of witness hiding protocols. In: Desmedt, Y.G. (ed.) CRYPTO 1994. LNCS, vol. 839, pp. 174–187. Springer, Heidelberg (1994)
Danezis, G., Dingledine, R., Mathewson, N.: Mixminion: design of a type III anonymous remailer protocol. In: Security and Privacy (S&P), pp. 2–15 (2003)
De Santis, A., Di Crescenzo, G., Persiano, G., Yung, M.: On monotone formula closure of SZK. In: FOCS, pp. 454–465 (1994)
Dingledine, R., Mathewson, N., Syverson, P.: Tor: the secondgeneration onion router. In: Usenix Security, pp. 303–320 (2004)
Dodis, Y., Yampolskiy, A.: A verifiable random function with short proofs and keys. In: Vaudenay, S. (ed.) PKC 2005. LNCS, vol. 3386, pp. 416–431. Springer, Heidelberg (2005)
Franz, M., Williams, P., Carbunar, B., Katzenbeisser, S., Peter, A., Sion, R., Sotakova, M.: Oblivious outsourced storage with delegation. In: Danezis, G. (ed.) FC 2011. LNCS, vol. 7035, pp. 127–140. Springer, Heidelberg (2012)
Freedman, M.J., Ishai, Y., Pinkas, B., Reingold, O.: Keyword search and oblivious pseudorandom functions. In: Kilian, J. (ed.) TCC 2005. LNCS, vol. 3378, pp. 303–324. Springer, Heidelberg (2005)
Goldreich, O., Ostrovsky, R.: Software protection and simulation on oblivious RAMs. J. ACM (JACM) 43(3), 431–473 (1996)
Golle, P., Jakobsson, M., Juels, A., Syverson, P.: Universal reencryption for mixnets. In: Okamoto, T. (ed.) CTRSA 2004. LNCS, vol. 2964, pp. 163–178. Springer, Heidelberg (2004)
Goodrich, M.T.: Randomized shellsort: a simple dataoblivious sorting algorithm. J. ACM (JACM) 58(6), 27 (2011)
Goodrich, M.T., Mitzenmacher, M., Ohrimenko, O., Tamassia, R.: Oblivious RAM simulation with efficient worstcase access overhead. In: ACM CCSW, pp. 95–100 (2011)
Goodrich, M.T., Mitzenmacher, M., Ohrimenko, O., Tamassia, R.: Privacypreserving group data access via stateless oblivious RAM simulation. In: SODA, pp. 157–167 (2012)
Jarecki, S., Liu, X.: Efficient oblivious pseudorandom function with applications to adaptive OT and secure computation of set intersection. In: Reingold, O. (ed.) TCC 2009. LNCS, vol. 5444, pp. 577–594. Springer, Heidelberg (2009)
Jinsheng, Z., Wensheng, Z., Qiao, D.: A multi-user oblivious RAM for outsourced data (2014). http://lib.dr.iastate.edu/cs_techreports/262/
Kushilevitz, E., Lu, S., Ostrovsky, R.: On the (in)security of hash-based oblivious RAM and a new balancing scheme. In: SODA, pp. 143–156 (2012)
Lu, S., Ostrovsky, R.: Distributed oblivious RAM for secure twoparty computation. In: Sahai, A. (ed.) TCC 2013. LNCS, vol. 7785, pp. 377–396. Springer, Heidelberg (2013)
Maffei, M., Malavolta, G., Reinert, M., Schröder, D.: Privacy and access control for outsourced personal records. In: Security and Privacy (S&P), pp. 341–358 (2015)
Ostrovsky, R.: Efficient computation on oblivious RAMs. In: STOC, pp. 514–523 (1990)
Pinkas, B., Reinman, T.: Oblivious RAM revisited. In: Rabin, T. (ed.) CRYPTO 2010. LNCS, vol. 6223, pp. 502–519. Springer, Heidelberg (2010)
Prabhakaran, M., Rosulek, M.: Rerandomizable RCCA encryption. In: Menezes, A. (ed.) CRYPTO 2007. LNCS, vol. 4622, pp. 517–534. Springer, Heidelberg (2007)
Sahin, C., Zakhary, V., El Abbadi, A., Lin, H.R., Tessaro, S.: TaoStore: overcoming asynchronicity in oblivious data storage. In: Security and Privacy (S&P) (2016)
Schnorr, C.P.: Efficient signature generation by smart cards. J. Cryptology 4(3), 161–174 (1991)
Shi, E., Chan, T.H.H., Stefanov, E., Li, M.: Oblivious RAM with O((log N)\({^3}\)) worstcase cost. In: Lee, D.H., Wang, X. (eds.) ASIACRYPT 2011. LNCS, vol. 7073, pp. 197–214. Springer, Heidelberg (2011)
Stefanov, E., Shi, E.: ObliviStore: High performance oblivious cloud storage. In: Security and Privacy (S&P), pp. 253–267 (2013)
Stefanov, E., Van Dijk, M., Shi, E., Fletcher, C., Ren, L., Yu, X., Devadas, S.: Path ORAM: an extremely simple oblivious RAM protocol. In: CCS, pp. 299–310 (2013)
The Tor project (2003). https://www.torproject.org/. Accessed Feb 2016
Wang, X., Chan, T.H., Shi, E.: Circuit ORAM: on tightness of the Goldreich-Ostrovsky lower bound. In: CCS, pp. 850–861 (2015)
Williams, P., Sion, R., Tomescu, A.: PrivateFS: a parallel oblivious file system. In: CCS, pp. 977–988 (2012)
Acknowledgements
We thank the anonymous reviewers for their valuable comments. This work was supported by the German Federal Ministry for Education and Research (BMBF) through funding for the Center for ITSecurity, Privacy and Accountability (CISPA) and by a grant from the Israeli Ministry of Science and Technology.
© 2016 Springer International Publishing Switzerland
Backes, M., Herzberg, A., Kate, A., Pryvalov, I. (2016). Anonymous RAM. In: Askoxylakis, I., Ioannidis, S., Katsikas, S., Meadows, C. (eds.) Computer Security – ESORICS 2016. Lecture Notes in Computer Science, vol. 9878. Springer, Cham. https://doi.org/10.1007/978-3-319-45744-4_17
Print ISBN: 978-3-319-45743-7
Online ISBN: 978-3-319-45744-4