
The Testing Method Based on Image Analysis for Automated Detection of UI Defects Intended for Mobile Applications

  • Šarūnas Packevičius (corresponding author)
  • Andrej Ušaniov
  • Šarūnas Stanskis
  • Eduardas Bareiša
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 538)

Abstract

A large share of the defects found in applications are classified as user interface defects. As more and more applications are developed for smartphones, it is reasonable to test those applications on the various possible configurations of mobile devices, such as screen resolution, OS version, and custom vendor layer. However, the set of mobile device configurations is very large, so developers are unable to test their applications on all possible configurations.

In this paper, we present the idea of a testing method for the automated detection of UI defects in mobile applications. The testing method is based on a static testing approach. It allows (1) extracting a navigation model by code analysis of the application under test, (2) executing the application on a large set of mobile devices with different configurations (a mobile cluster), (3) capturing images of the application windows on each device, and (4) detecting defects by analyzing each image and comparing it against a predefined list of possible user interface defects.
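To make step (4) more concrete, the following is a minimal Python sketch (using the Pillow imaging library) of the image-analysis stage. It is not the authors' implementation: the device-cluster capture of steps (2) and (3) is assumed to have already produced PNG screenshots, the file paths and the DIFF_THRESHOLD value are hypothetical, and a plain pixel diff against a reference rendering stands in for the predefined list of UI defect checks.

    from pathlib import Path
    from PIL import Image, ImageChops  # pip install Pillow

    # Hypothetical threshold: fraction of deviating pixels above which a
    # window rendering is reported as a UI defect candidate.
    DIFF_THRESHOLD = 0.02

    def load_normalized(path: Path, size: tuple) -> Image.Image:
        """Load a screenshot and scale it to a common canvas so renderings
        from devices with different resolutions become comparable."""
        return Image.open(path).convert("RGB").resize(size)

    def pixel_diff_ratio(reference: Image.Image, candidate: Image.Image) -> float:
        """Return the fraction of pixels that differ between two renderings."""
        diff = ImageChops.difference(reference, candidate)
        changed = sum(1 for px in diff.getdata() if px != (0, 0, 0))
        return changed / (diff.width * diff.height)

    def check_window(window: str, reference_png: Path, device_shots: dict) -> list:
        """Step (4): compare each device's screenshot of one application
        window against a reference rendering and collect suspects."""
        size = (480, 800)  # arbitrary common canvas for the comparison
        reference = load_normalized(reference_png, size)
        suspects = []
        for device, shot in device_shots.items():
            ratio = pixel_diff_ratio(reference, load_normalized(shot, size))
            if ratio > DIFF_THRESHOLD:
                suspects.append(f"{window} on {device}: {ratio:.1%} of pixels deviate")
        return suspects

    if __name__ == "__main__":
        # Hypothetical layout: one reference rendering plus per-device captures
        # produced by steps (2)-(3) on the mobile cluster.
        findings = check_window(
            "LoginWindow",
            Path("reference/login.png"),
            {"nexus5-api21": Path("shots/nexus5/login.png"),
             "galaxyS4-api19": Path("shots/galaxys4/login.png")},
        )
        print("\n".join(findings) or "no UI defect candidates")

In a fuller pipeline, per-defect checks from the predefined list (for example, truncated text, overlapping widgets, or blur detection) would replace the single diff ratio, and the navigation model extracted in step (1) would determine which application windows are captured on each device.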

Keywords

Software testing · Mobile devices · User interface testing · Static testing


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Šarūnas Packevičius¹ (corresponding author)
  • Andrej Ušaniov¹
  • Šarūnas Stanskis¹
  • Eduardas Bareiša¹

  1. Department of Software Engineering, Kaunas University of Technology, Kaunas, Lithuania
