On the (Im)possibility of Obfuscating Programs

Abstract

Informally, an obfuscator $\mathcal{O}$ is an (efficient, probabilistic) “compiler” that takes as input a program (or circuit) P and produces a new program $\mathcal{O}(P)$ that has the same functionality as P yet is “unintelligible” in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic encryption to complexity-theoretic analogues of Rice’s theorem. Most of these applications are based on an interpretation of the “unintelligibility” condition in obfuscation as meaning that $\mathcal{O}(P)$ is a “virtual black box,” in the sense that anything one can efficiently compute given $\mathcal{O}(P)$, one could also efficiently compute given oracle access to P.

In this work, we initiate a theoretical investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by constructing a family of functions $\mathcal{F}$ that are inherently unobfuscatable in the following sense: there is a property $\pi: \mathcal{F} \to \{0,1\}$ such that (a) given any program that computes a function $f \in \mathcal{F}$, the value $\pi(f)$ can be efficiently computed, yet (b) given oracle access to a (randomly selected) function $f \in \mathcal{F}$, no efficient algorithm can compute $\pi(f)$ much better than random guessing. We extend our impossibility result in a number of ways, including even obfuscators that (a) are not necessarily computable in polynomial time, (b) only approximately preserve the functionality, and (c) only need to work for very restricted models of computation ($TC^0$). We also rule out several potential applications of obfuscators, by constructing “unobfuscatable” signature schemes, encryption schemes, and pseudorandom function families.
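The two unobfuscatability conditions above can be written a bit more formally as follows. This is only a sketch of the quantifier structure; the security parameter $k$ and the negligible-function notation $\mathrm{negl}(k)$ are standard conventions assumed here, not taken from the abstract itself:

```latex
% (a) White-box: some efficient A extracts \pi(f) from ANY circuit for f.
% (b) Black-box: no efficient oracle algorithm S beats random guessing.
\begin{align*}
\text{(a)}\quad & \exists\ \text{PPT } A\ \ \forall f \in \mathcal{F},\
    \forall C \text{ computing } f:\quad A(C) = \pi(f) \\
\text{(b)}\quad & \forall\ \text{PPT } S:\quad
    \Pr_{f \leftarrow \mathcal{F}}\!\left[ S^{f}(1^{k}) = \pi(f) \right]
    \;\le\; \tfrac{1}{2} + \mathrm{negl}(k)
\end{align*}
```

The gap between (a) and (b) is exactly what rules out a “virtual black box”: an obfuscation $\mathcal{O}(P)$ of a program P for $f$ is itself a circuit computing $f$, so (a) extracts $\pi(f)$ from it, while (b) says no simulator with only oracle access to $f$ can do the same.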