
Using a Memory Test to Limit a User to One Account

Vincent Conitzer

Conference paper
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 44)

Abstract

In many Web-based applications, there are incentives for a user to sign up for more than one account, under false names. By doing so, the user can send spam e-mail from an account (which will eventually cause the account to be shut down); distort online ratings by rating multiple times (in particular, she can inflate her own reputation ratings); indefinitely continue using a product with a free trial period; place shill bids on items that she is selling on an auction site; engage in false-name bidding in combinatorial auctions; etc. All of these behaviors are highly undesirable from the perspective of system performance. While CAPTCHAs can prevent a bot from automatically signing up for many accounts, they do not prevent a human from signing up for multiple accounts. It may appear that the only way to prevent the latter is to require the user to provide information that identifies her in the real world (such as a credit card or telephone number), but users are reluctant to give out such information.

In this paper, we propose an alternative approach. We investigate whether it is possible to design an automated test that is easy to pass once, but difficult to pass a second time. Specifically, we design a memory test. In our test, items are randomly associated with colors (“Cars are green.”). The user first observes all of these associations, and is then asked to recall the colors of the items (“Cars are...?”). The items are the same across iterations of the test, but the colors are randomly redrawn each time (“Cars are blue.”). Therefore, a user who has taken the test before will occasionally accidentally respond with the association from the previous time that she took the test (“Cars are...? Green!”). If there is significant correlation between the user’s answers and the correct answers from a previous iteration of the test, then the system can decide that the user is probably the same, and refuse to grant another account. We present and analyze the results of a small study with human subjects. We also give a game-theoretic analysis. In the appendix, we propose an alternative test and present the results of a small study with human subjects for that test (however, the results for that test are quite negative).
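The repeat-user detection described above can be sketched in code. The item names, color palette, and fixed match threshold below are illustrative assumptions for this sketch only; the paper itself decides based on statistical correlation between the user's answers and a previous iteration's answer key.

```python
import random

# Hypothetical item and color sets; the actual test items are not specified here.
ITEMS = ["car", "tree", "house", "boat", "lamp"]
COLORS = ["red", "green", "blue", "yellow", "purple"]

def new_key(rng):
    """Draw a fresh random color for each item.

    The items stay the same across iterations of the test, but the
    colors are redrawn each time, as described in the abstract.
    """
    return {item: rng.choice(COLORS) for item in ITEMS}

def suspicious_overlap(answers, old_key, threshold=3):
    """Flag a user whose answers correlate with a *previous* answer key.

    By chance alone, each answer matches the old key with probability
    1/len(COLORS), so over 5 items we expect about 1 match.  Three or
    more matches is unlikely for a genuinely new user; the threshold
    here stands in for the paper's significance test.
    """
    matches = sum(1 for item in ITEMS if answers.get(item) == old_key[item])
    return matches >= threshold
```

A returning user who half-remembers last time's associations will score several matches against the old key and be refused a second account, while a new user's answers match the old key only at chance level.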



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

Vincent Conitzer
Department of Computer Science, Duke University, Durham, USA
