Using Reverse Engineering for Automated Usability Evaluation of Gui-Based Applications

  • Atif M. Memon
Part of the Human-Computer Interaction Series book series (HCIS)

Abstract

Graphical user interfaces (GUIs) are important parts of today’s software, and their usability largely determines the usefulness of the overall software. A popular way to check the usability of GUIs is to perform usability evaluations, either manually or automatically with tools. Manual evaluation is resource intensive, while automatic usability evaluation usually requires the creation of a model of the GUI, itself a resource-intensive step that intimidates many practitioners and prevents the adoption of automated techniques. This chapter presents “GUI ripping,” a new process that automatically recovers models of the GUI by dynamically “traversing” all of its windows and extracting all of their widgets, properties, and values. The usefulness of this process is demonstrated by recovering a structural model called a GUI forest and dynamic models called event-flow graphs and integration trees. Results of case studies show that GUI ripping is effective and requires very little human intervention.
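The ripping process described above can be sketched as a traversal over the running application's widget hierarchy: visit each window, record its widgets and their properties, and follow any event that opens a new window. The `Widget`/`Window` classes and the `rip` function below are illustrative stand-ins for a real toolkit's introspection API (e.g. Swing component trees), not the chapter's actual implementation.

```python
# Hypothetical sketch of GUI ripping: recover a "GUI forest" by
# depth-first traversal of windows reachable from the main window.
# All class and function names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Widget:
    name: str
    properties: dict
    # Some widgets (e.g. an "Open..." button) invoke events that open
    # another window; the ripper must follow these to discover it.
    opens_window: "Window | None" = None

@dataclass
class Window:
    title: str
    widgets: list = field(default_factory=list)

def rip(window, forest=None):
    """Record each window's widgets/properties, then recursively
    follow widgets whose events open new windows."""
    if forest is None:
        forest = {}
    forest[window.title] = [(w.name, w.properties) for w in window.widgets]
    for w in window.widgets:
        if w.opens_window is not None and w.opens_window.title not in forest:
            rip(w.opens_window, forest)
    return forest

# Example: a Main window whose "Open..." button leads to a File dialog.
file_dialog = Window("File", [Widget("OK", {"enabled": True})])
main = Window("Main", [
    Widget("Open...", {"enabled": True}, opens_window=file_dialog),
    Widget("Exit", {"enabled": True}),
])
forest = rip(main)
```

In the real process this traversal runs against a live application, so events must actually be executed to expose windows; the recovered forest then serves as the structural model from which event-flow graphs and integration trees are derived.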

Keywords

Usability Evaluation · Reverse Engineering · Integration Tree · Reverse Engineer · Usability Engineer

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Atif M. Memon
  1. Department of Computer Science, University of Maryland, USA