Identifying Safety Hazards: An Experimental Comparison of System Diagrams and Textual Use Cases

  • Tor Stålhane
  • Guttorm Sindre
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 113)

Abstract

As ICT is increasingly used in critical systems, safety is a growing concern. Safety hazards should be discovered and handled at an early stage of IS development, since it is much more expensive to redesign a system post hoc because of threats that were initially overlooked. It is therefore interesting to integrate safety analysis with the textual and diagrammatic specifications used in mainstream system development. This paper reports on an experiment comparing how well system diagrams and textual use cases support the identification of hazards in a simple railway control system. The two most important conclusions are that textual use cases are as good as or better than system diagrams for hazard identification in all cases except for peripheral equipment, and that including system diagrams in the documentation is not enough: they must be brought into focus for the analysis.

Keywords

safety, use case, requirements engineering, systems development, hazard identification



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Tor Stålhane (1)
  • Guttorm Sindre (1)
  1. Norwegian University of Science and Technology, Norway
