HMI Fuzzy Assessment of Complex Systems Usability
Testing and assessment are core activities in the software development cycle, dedicated to evaluating interactive products and improving their quality by identifying usability problems and defects. For complex systems such as multi-agent ones, usability evaluation remains an open issue and requires new testing techniques to assess autonomous and interactive behaviors. This paper investigates the evaluation of Human-Machine Interaction (HMI) in complex systems. It reviews evaluation methods and introduces domain-specific requirements; proposes a mechanism for evaluating HMI; describes the implementation of an automatic tool, based on fuzzy logic, for assessing complex interactive systems; and suggests a solution that automates HMI evaluation, reducing the need for expert assessment and fully integrating end-users into the evaluation loop. These contributions are assessed in the urban transit control room of the city of Valenciennes, France. The comparative study addresses acceptance, motivation, and perceived happiness.
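The paper's specific fuzzy model is not given here, but the general idea of a fuzzy-logic usability assessment can be sketched as follows. This is a minimal illustration, not the authors' method: the three fuzzy levels, their triangular membership functions, the criterion names, the weights, and the Sugeno-style defuzzification with representative values are all hypothetical choices made for the example.

```python
def membership(x):
    """Degrees of membership of a 0-10 usability score in three fuzzy levels.

    'low' and 'high' are shoulder functions; 'medium' is triangular with
    its peak at 5. These shapes are illustrative assumptions.
    """
    low = max(0.0, min(1.0, (5.0 - x) / 5.0))
    medium = max(0.0, 1.0 - abs(x - 5.0) / 2.5)
    high = max(0.0, min(1.0, (x - 5.0) / 5.0))
    return {"low": low, "medium": medium, "high": high}


def fuzzy_usability(scores, weights):
    """Fuzzify weighted criterion scores and defuzzify to one crisp value.

    scores  : dict criterion -> 0-10 score (e.g. from user questionnaires)
    weights : dict criterion -> relative importance
    Returns a crisp 0-10 usability estimate (Sugeno-style weighted average).
    """
    total_w = sum(weights.values())
    mu = {"low": 0.0, "medium": 0.0, "high": 0.0}
    for criterion, score in scores.items():
        m = membership(score)
        w = weights[criterion] / total_w
        for level in mu:
            mu[level] += w * m[level]
    # Hypothetical representative crisp values for each fuzzy level.
    reps = {"low": 2.0, "medium": 5.0, "high": 8.0}
    return sum(mu[l] * reps[l] for l in mu) / sum(mu.values())


# Example: three illustrative criteria, equal weights.
scores = {"effectiveness": 8.0, "efficiency": 7.0, "satisfaction": 9.0}
weights = {"effectiveness": 1.0, "efficiency": 1.0, "satisfaction": 1.0}
print(round(fuzzy_usability(scores, weights), 2))  # → 7.7
```

The fuzzy step lets imprecise, subjective questionnaire answers contribute gradually to several usability levels at once, rather than being forced into a single crisp category.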
The first author acknowledges the fruitful discussions with Rim Rekik and Abir Abid that helped improve the results. The authors also acknowledge the financial support of this work by grants from the Tunisian General Directorate of Scientific Research (DGRST) under the ARUB program.