Ensuring Web Interface Quality through Usability-Based Split Testing

  • Maximilian Speicher
  • Andreas Both
  • Martin Gaedke
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8541)

Abstract

Usability is a crucial quality aspect of web applications, as it guarantees customer satisfaction and loyalty. Yet, effective approaches to usability evaluation are applied only at very slow iteration cycles in today's industry. In contrast, conversion-based split testing seems more attractive to e-commerce companies because it is more efficient and easier to deploy. We introduce Usability-based Split Testing as an alternative approach to ensuring web interface quality, along with a corresponding tool called WaPPU. By design, our novel method is more effective than conversion-based testing while being more efficient than traditional evaluation methods. To achieve this, we build upon the concept of split testing but leverage user interactions to derive quantitative metrics of usability. From these interactions, we can also learn models for predicting usability in the absence of explicit user feedback. We have applied our approach in a split test of a real-world search engine interface. The results show that we are able to effectively detect even subtle differences in usability. Moreover, WaPPU can learn usability models of reasonable prediction quality, from which we also derived interaction-based heuristics that can be instantly applied to search engine results pages.
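The core idea is that interaction events observed on each interface variant can be aggregated into simple quantitative features and compared across the split test, in place of conversion rates. The following is a minimal sketch of this kind of client-side instrumentation in TypeScript for a browser context; the chosen metrics, the MetricsSnapshot shape, and the /usability-metrics endpoint are illustrative assumptions, not WaPPU's actual API.

```typescript
// Minimal sketch of client-side interaction tracking for usability-based
// split testing. All names (MetricsSnapshot, REPORT_URL, the specific
// metrics) are illustrative assumptions, not WaPPU's actual instrumentation.

interface MetricsSnapshot {
  variant: string;        // which interface version of the split test
  cursorDistance: number; // total cursor travel in pixels
  clicks: number;         // total click count
  scrollEvents: number;   // coarse proxy for orientation effort
  dwellTimeMs: number;    // time spent on the page
}

const REPORT_URL = "/usability-metrics"; // hypothetical collection endpoint
const startTime = Date.now();
let lastX: number | null = null;
let lastY: number | null = null;

const snapshot: MetricsSnapshot = {
  // Assumes the server tags each variant via a data attribute on <html>.
  variant: document.documentElement.dataset.variant ?? "A",
  cursorDistance: 0,
  clicks: 0,
  scrollEvents: 0,
  dwellTimeMs: 0,
};

// Accumulate cursor travel as the Euclidean distance between samples.
document.addEventListener("mousemove", (e) => {
  if (lastX !== null && lastY !== null) {
    snapshot.cursorDistance += Math.hypot(e.pageX - lastX, e.pageY - lastY);
  }
  lastX = e.pageX;
  lastY = e.pageY;
});

document.addEventListener("click", () => snapshot.clicks++);
document.addEventListener("scroll", () => snapshot.scrollEvents++);

// Report on page exit; sendBeacon is designed to survive page unload.
window.addEventListener("pagehide", () => {
  snapshot.dwellTimeMs = Date.now() - startTime;
  navigator.sendBeacon(REPORT_URL, JSON.stringify(snapshot));
});
```

On the server side, per-variant aggregates of such snapshots (or models trained on them together with explicit user feedback, as in the paper) would then stand in for conversion rates when comparing the variants of the split test.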

Keywords

Usability · Metrics · Heuristics · Interaction Tracking · Search Engines Interfaces · Context-Awareness

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Maximilian Speicher (1, 2)
  • Andreas Both (2)
  • Martin Gaedke (1)

  1. Chemnitz University of Technology, Chemnitz, Germany
  2. R&D, Unister GmbH, Leipzig, Germany
