Empirical Software Engineering, Volume 19, Issue 6, pp 1967–2018

Performance assessment of an architecture with adaptative interfaces for people with special needs

  • Elena Gómez-Martínez
  • Rafael Gonzalez-Cabero
  • Jose Merseguer
Experience Report


People in industrial societies carry more and more portable electronic devices (e.g., smartphones or consoles) with some kind of wireless connectivity support. Interaction with auto-discovered target devices present in the environment (e.g., the air conditioning of a hotel) is not straightforward, since those devices may provide inaccessible user interfaces (e.g., in a foreign language that the user cannot understand). Scalability for multiple concurrent users and response times also remain open problems in this domain. In this paper, we assess an interoperable architecture that enables interaction between people with some kind of special need and their environment. The assessment, based on performance patterns and antipatterns, aims to detect performance issues and to enhance the architecture design in order to improve system performance. As a result of the assessment, the initial design changed substantially: we refactored it according to the Fast Path pattern and The Ramp antipattern, and resources were correctly allocated. Finally, the required response time was fulfilled in all system scenarios; for one specific scenario, response time was reduced from 60 seconds to less than 6 seconds.


Keywords: Software architecture · Performance assessment · ICT for people with special needs · Industrial report · Performance patterns and antipatterns



Acknowledgements

The research described in this paper arises from a Spanish research project called INREDIS (INterfaces for RElations between Environment and people with DISabilities). INREDIS is led by Technosite and funded by CDTI (Industrial Technology Development Centre), under the CENIT (National Strategic Technical Research Consortia) Programme, in the framework of the Spanish government’s INGENIO 2010 initiative. The opinions expressed in this paper are those of the authors and are not necessarily those of the INREDIS project’s partners or of the CDTI. José Merseguer has been supported by CICYT DPI2010-20413 and GISED (partially co-financed by the Aragonese Government (Ref. T27) and the European Social Fund). We would like to thank José Antonio Gutiérrez for his work in the experimental tests, and Marta Alvargonzález, Esteban Etayo and Fausto Sainz for their help. Last but not least, the authors thank the anonymous reviewers for their valuable help in improving this work.


Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Elena Gómez-Martínez (1)
  • Rafael Gonzalez-Cabero (2)
  • Jose Merseguer (3)

  1. Babel Group, Universidad Politécnica de Madrid, Madrid, Spain
  2. Ontology Engineering Group, Universidad Politécnica de Madrid, Madrid, Spain
  3. Departamento de Informática e Ingeniería de Sistemas, Universidad de Zaragoza, Zaragoza, Spain
