Multimedia Tools and Applications

Volume 77, Issue 10, pp 12293–12329

Exploring the influence of CAPTCHA types on users' response time by statistical analysis

  • Darko Brodić
  • Alessia Amelio
  • Radmila Janković


CAPTCHA stands for Completely Automated Public Turing Test to Tell Computers and Humans Apart. It is a test program that presents a task designed to prevent attacks by automated programs: if the response to the CAPTCHA is correct, the program classifies the user as a human. This paper introduces a new analysis of the impact of different CAPTCHAs on Internet users' response time, overcoming the limitations of previous approaches in the state of the art. In this sense, different types of CAPTCHAs are presented and described. Furthermore, an experiment is conducted on two populations of Internet users solving text-based and image-based CAPTCHA types, differentiated by demographic features such as age, gender, education level and Internet experience. Each user is required to solve the different types of CAPTCHA, and the response time to solve each CAPTCHA is registered. The obtained results are statistically processed with the Mann-Whitney U test and Pearson's correlation coefficient. These tests analyze 7 hypotheses which evaluate the response time depending on gender, age, education level and Internet experience for the different CAPTCHA types. The study is useful for predicting the best use of a given CAPTCHA for specific types of Internet users.
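The statistical workflow described above can be sketched in a few lines. The following is an illustrative example only, not the authors' code: the response-time figures and group labels are hypothetical, and it simply shows how a Mann-Whitney U test compares two user groups and how Pearson's correlation relates a demographic feature to response time, using SciPy.

```python
# Hedged sketch of the paper's methodology with invented example data:
# compare CAPTCHA response times of two demographic groups, then measure
# the correlation between Internet experience and response time.
from scipy.stats import mannwhitneyu, pearsonr

# Hypothetical response times in seconds for a text-based CAPTCHA,
# split by a demographic feature (e.g. two age groups)
times_group_a = [4.1, 3.8, 5.0, 4.6, 3.9, 4.4, 5.2, 4.0]
times_group_b = [6.3, 5.9, 7.1, 6.8, 5.5, 6.0, 7.4, 6.6]

# Two-sided Mann-Whitney U test: do the two groups differ significantly?
u_stat, p_value = mannwhitneyu(times_group_a, times_group_b,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")

# Pearson's correlation between (hypothetical) years of Internet
# experience and mean response time for the same CAPTCHA
experience = [2, 3, 5, 7, 9, 11, 13, 15]
resp_time = [7.0, 6.5, 6.1, 5.4, 5.0, 4.6, 4.3, 4.0]
r, p_corr = pearsonr(experience, resp_time)
print(f"r = {r:.3f}, p = {p_corr:.4f}")
```

A small p-value from the U test would indicate that the two groups' response-time distributions differ, while a negative r would indicate that more experienced users tend to solve the CAPTCHA faster; the paper applies this kind of analysis across 7 hypotheses and several CAPTCHA types.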


Keywords: CAPTCHA · Web · Response time · Usability · Statistical analysis · Internet user



The authors are grateful to Sanja Petrovska for collecting the data, and to the anonymous users for providing their data. This study was partially funded by the Grant of the Ministry of Education, Science and Technological Development of the Republic of Serbia, as a part of the project TR33037 within the framework of the Technological development program.

Compliance with Ethical Standards

Conflict of interest

Author Darko Brodić declares that he has no conflict of interest. Author Alessia Amelio declares that she has no conflict of interest. Author Radmila Janković declares that she has no conflict of interest.

Ethical approval

This article does not contain any dangerous studies with human participants or animals performed by any of the authors.


Funding: This study was partially funded by the Grant of the Ministry of Education, Science and Technological Development of the Republic of Serbia, as a part of the project TR33037 within the framework of the Technological development program. The recipient of the funding is Dr. Darko Brodić.



Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  • Darko Brodić (1)
  • Alessia Amelio (2)
  • Radmila Janković (1)
  1. Technical Faculty in Bor, University of Belgrade, Bor, Serbia
  2. DIMES, University of Calabria, Rende, Italy
