
HuMan: an accessible, polymorphic and personalized CAPTCHA interface with preemption feature tailored for persons with visual impairments

Abstract

Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is one of the major security components in the provision of fair web access, differentiating human access from malicious, automated access by bots. Though CAPTCHAs strengthen the security of web access, their accessibility to people with visual impairments poses inherent, unresolved challenges. This paper presents an accessible CAPTCHA model termed HuMan (human or machine?), which aims at providing an audio-based CAPTCHA for people with visual impairments. The HuMan model incorporates personalization into CAPTCHA access. The polymorphic nature of resolving the HuMan CAPTCHA facilitates kaleidoscopic behavior in CAPTCHA rendering. The presence of ambient noise and the requirement of common-sense knowledge to answer the questions presented by the HuMan CAPTCHA model make it friendlier toward human users. The HuMan model has a CAPTCHA preemption feature, which enables the user to stop the challenge audio as soon as the answer is identified. Experiments conducted on a prototype implementation of the HuMan model show a mean success rate of 92.46% and System Usability Scale scores of 82.44 for persons with visual impairments and 82.63 for sighted users.



Notes

  1. http://www.who.int/mediacentre/factsheets/fs282/en/
  2. https://blog.agilebits.com/2011/08/18/aes-encryption-isnt-cracked/
  3. http://www.nvaccess.org/


Author information


Corresponding author

Correspondence to K. S. Kuppusamy.

Appendix I: HuMan CAPTCHA Algorithm



The HuMan algorithm receives the URL of a page as input. Using the detectTrace function, it checks whether the same page has been visited earlier from the same device. Based on this, it renders the preferences menu (renderUItoGetPref()) for the explicit scenario (E) and allows the user to choose the domain (fetchDomain()).

In the implicit scenario (I), the domain is selected either from the cookies or from the IP address recorded in TrackDB.

When neither cookie nor IP information is available, a thematic domain is chosen (MatchDomain()) by fetching the title (getTitle()), meta-keywords (getMeta()) and key terms (FetchKeyTerms()) from the page.

Once the domain is chosen, a random challenge (challengeID) is selected. After the selection of a challenge, one question (qnID) is selected randomly from the list of available questions for that challenge.

The HuMan CAPTCHA constructed with the specified challengeID and qnID is then rendered (RenderCAPTCHA(challengeID, qnID)).
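The domain-selection and challenge-selection flow above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: TrackDB is modeled as a plain dictionary, and DOMAIN_CATALOG, the keyword_index lookup, and the "general" fallback domain are assumptions introduced here.

```python
import random

DOMAIN_CATALOG = {"sports", "music", "science"}  # illustrative domain list


def choose_domain(url, track_db, prefs=None, keyword_index=None):
    """Pick a challenge domain: explicit preference, traced visit, or theme."""
    # Explicit scenario (E): the user picked a domain in the preferences menu.
    if prefs and "domain" in prefs:
        return prefs["domain"]
    # Implicit scenario (I): a cookie/IP trace of an earlier visit exists.
    trace = track_db.get(url)
    if trace:
        return trace["domain"]
    # Thematic fallback: match page title/meta key terms against known domains.
    for term in (keyword_index or {}).get(url, []):
        if term in DOMAIN_CATALOG:
            return term
    return "general"  # assumed default when nothing matches


def select_challenge(domain, challenge_bank):
    """Random challenge for the domain, then a random question from it."""
    challenge_id = random.choice(list(challenge_bank[domain]))
    qn_id = random.choice(challenge_bank[domain][challenge_id])
    return challenge_id, qn_id
```

A usage example: `choose_domain("http://b.example", {}, prefs={"domain": "sports"})` returns `"sports"` via the explicit path, while a URL with neither preference nor trace falls through to the thematic match.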

The user's response is gathered in uResponse through getUserResponse(), and the correct answer for the challenge (cAns) is fetched through getCorrectAnswer().

As the HuMan model incorporates fuzzy answer matching to tolerate spelling errors, the Jaro–Winkler edit distance is computed (JW(uResponse, cAns)) between the user response and the correct answer. If it is less than the threshold value (\(\tau\)), the algorithm returns true, indicating successful CAPTCHA solving. Before this, the preemption point is recorded and the tracking information is updated through addTrackingInfo(). The tracking information is used at the server side to check how many times this CAPTCHA has been answered correctly.
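The fuzzy matching step can be sketched with a self-contained Jaro–Winkler implementation. The default threshold τ = 0.15, the lower-casing of inputs, and the function names are illustrative assumptions, not values from the paper.

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity in [0, 1] (1 = identical)."""
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    window = max(max(len(s1), len(s2)) // 2 - 1, 0)
    match1 = [False] * len(s1)
    match2 = [False] * len(s2)
    m = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(len(s2), i + window + 1)):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                m += 1
                break
    if m == 0:
        return 0.0
    # Count transpositions among the matched characters.
    k = transpositions = 0
    for i in range(len(s1)):
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    t = transpositions / 2
    return (m / len(s1) + m / len(s2) + (m - t) / m) / 3


def jaro_winkler_distance(s1: str, s2: str, p: float = 0.1) -> float:
    """1 - Jaro-Winkler similarity; rewards a shared prefix of up to 4 chars."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):
        if a != b:
            break
        prefix += 1
    return 1.0 - (j + prefix * p * (1.0 - j))


def accept_response(u_response: str, c_ans: str, tau: float = 0.15) -> bool:
    """Accept the CAPTCHA answer when the distance is below tau (assumed)."""
    return jaro_winkler_distance(u_response.lower(), c_ans.lower()) < tau
```

With this sketch, a minor misspelling such as "sevan" for "seven" stays under the assumed threshold and is accepted, while an unrelated answer is rejected.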

If the edit distance is above the threshold value (\(\tau\)), the preemption point (getPreemptionpoint()) and error margin (getErrorMargin()) are computed, and the tracking information is updated. The algorithm returns false, indicating failure of the CAPTCHA solving attempt.

If the failure rate for a challenge audio, or for a specific question belonging to that challenge, crosses the corresponding threshold value (\(\nu\) or \(\xi\)), it is added for review (addToReview()) by the administrator.

If a CAPTCHA receives the same incorrect answer many times (above a threshold), then, with the administrator's approval, the accepted answer for that CAPTCHA is updated.
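These two maintenance rules can be sketched as below. The threshold defaults (nu, xi, limit), the counter structures, and the function names are illustrative assumptions; the paper's actual bookkeeping lives server-side in TrackDB.

```python
def update_review_queue(stats, review_queue, nu=0.4, xi=0.5):
    """Flag questions/challenges whose failure rate crosses a threshold.

    stats maps (challenge_id, qn_id) -> {"attempts": int, "failures": int};
    xi is the per-question threshold, nu the per-challenge one (both assumed).
    """
    per_challenge = {}
    for (cid, qid), s in stats.items():
        rate = s["failures"] / s["attempts"] if s["attempts"] else 0.0
        if rate > xi:
            review_queue.append(("question", cid, qid))
        agg = per_challenge.setdefault(cid, [0, 0])
        agg[0] += s["attempts"]
        agg[1] += s["failures"]
    for cid, (attempts, failures) in per_challenge.items():
        if attempts and failures / attempts > nu:
            review_queue.append(("challenge", cid))
    return review_queue


def maybe_update_answer(wrong_counts, qn_id, answer, limit=25, approved=False):
    """With admin approval, promote a frequently seen 'wrong' answer."""
    if wrong_counts.get((qn_id, answer), 0) > limit and approved:
        return answer  # becomes the accepted answer for qn_id
    return None  # below the limit, or no administrator approval yet
```

The administrator approval flag models the manual step: even a very frequent alternative answer is not promoted automatically.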


Cite this article

Kuppusamy, K.S., Aghila, G. HuMan: an accessible, polymorphic and personalized CAPTCHA interface with preemption feature tailored for persons with visual impairments. Univ Access Inf Soc 17, 841–864 (2018). https://doi.org/10.1007/s10209-017-0567-3


Keywords

  • Web accessibility
  • Accessible CAPTCHA
  • Non-visual access
  • CAPTCHA preemption