CAPTCHA: Using Hard AI Problems for Security
We introduce CAPTCHA, an automated test that humans can pass but that current computer programs cannot: any program with high success on a CAPTCHA can be used to solve an unsolved Artificial Intelligence (AI) problem. We provide several novel constructions of CAPTCHAs. Since CAPTCHAs have many applications in practical security, our approach introduces a new class of hard problems that can be exploited for security purposes. Much as research in cryptography has had a positive impact on algorithms for factoring and discrete log, we hope that the use of hard AI problems for security purposes allows us to advance the field of Artificial Intelligence. We introduce two families of AI problems that can be used to construct CAPTCHAs, and we show that solutions to such problems can be used for steganographic communication. CAPTCHAs based on these AI problem families therefore imply a win-win situation: either the problems remain unsolved and there is a way to differentiate humans from computers, or the problems are solved and there is a way to communicate covertly on some channels.
Keywords: Optical Character Recognition · Distorted Text · Current Computer Program · Cryptographic Protocol · Image Transformation
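The challenge–response shape of the test described above can be sketched in a few lines. This is an illustrative toy, not any construction from the paper: the "distortion" is a trivial character-interleaving stand-in (the helper names `make_challenge` and `verify` and the word list are invented here), whereas a real CAPTCHA would render the secret word as a distorted image that current OCR programs cannot read but humans can.

```python
import random
import secrets

# Candidate secret words (illustrative; a real system draws from a large pool).
WORDS = ["security", "protocol", "turing", "captcha"]

def make_challenge(rng=random):
    """Pick a secret word and produce a (challenge, answer) pair.

    Stand-in "distortion": interleave the word with random digits.
    A real CAPTCHA would instead apply an image transformation that
    defeats current computer programs.
    """
    answer = rng.choice(WORDS)
    noise = [secrets.choice("0123456789") for _ in answer]
    challenge = "".join(c + n for c, n in zip(answer, noise))
    return challenge, answer

def verify(response, answer):
    """Accept iff the responder recovered the original word."""
    return response.strip().lower() == answer

challenge, answer = make_challenge()
# A human shown the rendered challenge types back the undistorted word.
assert verify(answer, answer)        # correct response is accepted
assert not verify("wrong", answer)   # incorrect response is rejected
```

The security of the scheme rests entirely on the distortion step: if a program can invert it reliably, it has, by construction, solved the underlying hard AI problem.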