Model comprehension for security risk assessment: an empirical comparison of tabular vs. graphical representations

  • Katsiaryna Labunets
  • Fabio Massacci
  • Federica Paci
  • Sabrina Marczak
  • Flávio Moreira de Oliveira

Abstract

Tabular and graphical representations are used to communicate security risk assessments for IT systems. However, there is no consensus on which type of representation better supports the comprehension of risks (such as the relationships between threats, vulnerabilities, and security controls). Cognitive fit theory predicts that spatial relationships should be better captured by graphs. In this paper we report the results of two studies, performed in two countries with 69 and 83 participants respectively, in which we assessed the effectiveness of tabular and graphical representations with respect to extracting correct information about security risks. The experimental results show that tabular risk models are more effective than graphical ones for simple comprehension tasks and, in some cases, for complex comprehension tasks. We explain our findings by proposing a simple extension of Vessey’s cognitive fit theory, as some linear spatial relationships can also be captured by tabular models.

Keywords

Empirical study · Security risk assessment · Risk modeling · Comprehensibility · Cognitive fit

References

  1. Abrahao S, Gravino C, Insfran E, Scanniello G, Tortora G (2013) Assessing the effectiveness of sequence diagrams in the comprehension of functional requirements: Results from a family of five experiments. 39(3):327–342
  2. Agarwal R, De P, Sinha AP (1999) Comprehending object and process models: An empirical study. 25(4):541–556
  3. BSI (2012) Standard 100-1: Information Security Management Systems
  4. De Gramatica M, Labunets K, Massacci F, Paci F, Tedeschi A (2015) The role of catalogues of threats and security controls in security risk assessment: An empirical study with ATM professionals. Springer
  5. De Lucia A, Gravino C, Oliveto R, Tortora G (2010) An experimental comparison of ER and UML class diagrams for data modelling. 15(5):455–492
  6. Dunning D, Johnson K, Ehrlinger J, Kruger J (2003) Why people fail to recognize their own incompetence. 12(3):83–87
  7. Fabian B, Gürses S, Heisel M, Santen T, Schmidt H (2010) A comparison of security requirements engineering methods. 15(1):7–40
  8. Fox J, Weisberg S (2011) An R Companion to Applied Regression, 2nd edn. Sage, Thousand Oaks, CA. http://socserv.socsci.mcmaster.ca/jfox/Books/Companion
  9. Giorgini P, Massacci F, Mylopoulos J, Zannone N (2005) Modeling security requirements through ownership, permission and delegation. IEEE, pp 167–176
  10. Grøndahl IH, Lund MS (2011) Reducing the effort to comprehend risk models: Text labels are often preferred over graphical means. 31:1813–1831
  11. Hadar I, Reinhartz-Berger I, Kuflik T, Perini A, Ricca F, Susi A (2013) Comparing the comprehensibility of requirements models expressed in use case and Tropos: Results from a family of experiments. 55(10):1823–1843
  12. Heijstek W, Kühne T, Chaudron MR (2011) Experimental analysis of textual and graphical representations for software architecture design. IEEE, pp 167–176
  13. Hogganvik I, Stølen K (2005) On the comprehension of security risk scenarios. IEEE, pp 115–124
  14. Hoisl B, Sobernig S, Strembeck M (2014) Comparing three notations for defining scenario-based model tests: A controlled experiment. IEEE, pp 95–104
  15. Hothorn T, Hornik K (2015) exactRankTests: Exact Distributions for Rank and Permutation Tests. https://CRAN.R-project.org/package=exactRankTests, R package version 0.8-28
  16. Kabacoff R (2015) R in Action: Data Analysis and Graphics with R. Manning Publications Co
  17. Kaczmarek M, Bock A, Heß M (2015) On the explanatory capabilities of enterprise modeling approaches. Springer, pp 128–143
  18. Labunets K, Massacci F, Paci F, Tran LMS (2013) An experimental comparison of two risk-based security methods. IEEE, pp 163–172
  19. Labunets K, Paci F, Massacci F, Ragosta M, Solhaug B (2014a) A first empirical evaluation framework for security risk assessment methods in the ATM domain. SESAR
  20. Labunets K, Paci F, Massacci F, Ruprai R (2014b) An experiment on comparing textual vs. visual industrial methods for security risk assessment. IEEE, pp 28–35
  21. Landoll DJ, Landoll D (2005) The Security Risk Assessment Handbook: A Complete Guide for Performing Security Risk Assessments. CRC Press
  22. Lund MS, Solhaug B, Stølen K (2011) A guided tour of the CORAS method. In: Model-Driven Risk Analysis, Springer, pp 23–43
  23. MacKenzie IS (2012) Human-Computer Interaction: An Empirical Research Perspective. Newnes
  24. Massacci F, Paci F (2012) How to select a security requirements method? A comparative study with students and practitioners. Springer, pp 89–104
  25. Matulevičius R, Mayer N, Mouratidis H, Dubois E, Heymans P, Genon N (2008) Adapting Secure Tropos for security risk management in the early phases of information systems development. Springer, pp 541–555
  26. Mayer N, Rifaut A, Dubois E (2005) Towards a risk-based security requirements engineering framework. vol 5
  27. Mayer N, Heymans P, Matulevičius R (2007) Design of a modelling language for information system security risk management. pp 121–132
  28. Mead NR, Allen JH, Barnum S, Ellison RJ, McGraw G (2004) Software Security Engineering: A Guide for Project Managers. Addison-Wesley Professional
  29. Mellado D, Fernández-Medina E, Piattini M (2006) Applying a security requirements engineering process. Springer, pp 192–206
  30. Moody D (2009) The “Physics” of Notations: Toward a scientific basis for constructing visual notations in software engineering. 35(6):756–779
  31. Mouratidis H, Giorgini P (2007) Secure Tropos: A security-oriented extension of the Tropos methodology. 17(02):285–309
  32. Ottensooser A, Fekete A, Reijers HA, Mendling J, Menictas C (2012) Making sense of business process descriptions: An experimental comparison of graphical and textual notations. 85(3):596–606
  33. Purchase HC, Welland R, McGill M, Colpoys L (2004) Comprehension of diagram syntax: An empirical study of entity relationship notations. 61(2):187–203
  34. R Core Team (2016) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/
  35. Ricca F, Di Penta M, Torchiano M, Tonella P, Ceccato M (2007) The role of experience and ability in comprehension tasks supported by UML stereotypes. pp 375–384
  36. Saleh F, El-Attar M (2015) A scientific evaluation of the misuse case diagrams visual syntax. 66:73–96
  37. Scanniello G, Gravino C, Genero M, Cruz-Lemus J, Tortora G (2014a) On the impact of UML analysis models on source-code comprehensibility and modifiability. 23(2):13
  38. Scanniello G, Staron M, Burden H, Heldal R (2014b) On the effect of using SysML requirement diagrams to comprehend requirements: Results from two controlled experiments. pp 433–442
  39. Scanniello G, Gravino C, Risi M, Tortora G, Dodero G (2015) Documenting design-pattern instances: A family of experiments on source-code comprehensibility. 24(3):14
  40. Sharafi Z, Marchetto A, Susi A, Antoniol G, Guéhéneuc YG (2013) An empirical study on the efficiency of graphical vs. textual representations in requirements comprehension. IEEE, pp 33–42
  41. Stoneburner G, Goguen A, Feringa A (2002) NIST SP 800-30: Risk Management Guide for Information Technology Systems. http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf
  42. Stålhane T, Sindre G (2008) Safety hazard identification by misuse cases: Experimental comparison of text and diagrams. pp 721–735
  43. Stålhane T, Sindre G (2012) Identifying safety hazards: An experimental comparison of system diagrams and textual use cases. pp 378–392
  44. Stålhane T, Sindre G (2014) An experimental comparison of system diagrams and textual use cases for the identification of safety hazards. 5(1):1–24
  45. Stålhane T, Sindre G, Bousquet L (2010) Comparing safety analysis based on sequence diagrams and textual use cases. pp 165–179
  46. Svahnberg M, Aurum A, Wohlin C (2008) Using students as subjects – an empirical evaluation. IEEE, pp 288–290
  47. Vessey I (1991) Cognitive fit: A theory-based analysis of the graphs versus tables literature. 22(2):219–240
  48. Wickham H (2009) ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag, New York. http://ggplot2.org
  49. Wickham H (2016) gtable: Arrange ’Grobs’ in Tables. https://CRAN.R-project.org/package=gtable, R package version 0.2.0
  50. Wood RE (1986) Task complexity: Definition of the construct. 37(1):60–82

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  • Katsiaryna Labunets (1)
  • Fabio Massacci (1)
  • Federica Paci (2)
  • Sabrina Marczak (3)
  • Flávio Moreira de Oliveira (3)

  1. University of Trento, Trento, Italy
  2. University of Southampton, Southampton, UK
  3. Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS), Porto Alegre, Brazil