Using Safety Contracts to Verify Design Assumptions During Runtime

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10873)

Abstract

A safety case comprises evidence and argument justifying how each item of evidence supports claims about safety assurance. Supporting claims with untrustworthy or inappropriate evidence can lead to false assurance regarding the safe performance of a system. Having sufficient confidence in safety evidence is essential to avoid unanticipated surprises during the operational phase. Sometimes, however, it is impractical to wait for high-quality evidence from a system’s operational life, and developers have no choice but to rely on evidence carrying some uncertainty (e.g., using a generic failure rate from a handbook to support a claim about the reliability of a component). Runtime monitoring can reveal insightful information that helps to verify whether the preliminary confidence was over- or underestimated. In this paper, we propose a technique which uses runtime monitoring in a novel way to detect divergence between the failure rates used in the safety analyses and the failure rates observed in operational life. The technique utilises safety contracts to prescribe what should be monitored and which parts of the safety argument should be revisited to maintain system safety when a divergence is detected. We demonstrate the technique in the context of Automated Guided Vehicles (AGVs).
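To make the idea concrete, the following is a minimal sketch, not the authors’ implementation: a safety contract records the failure rate assumed in the safety analysis together with the safety-argument elements that depend on it, and a runtime monitor flags those elements for review when the operationally observed rate diverges beyond a tolerance. All names here (SafetyContract, RuntimeMonitor, tolerance_factor, the AGV component and goals) are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class SafetyContract:
    component: str
    assumed_failure_rate: float   # failures per operating hour, from the safety analysis
    tolerance_factor: float       # acceptable observed/assumed ratio before flagging
    argument_elements: list = field(default_factory=list)  # safety-argument parts to revisit

@dataclass
class RuntimeMonitor:
    contract: SafetyContract
    operating_hours: float = 0.0
    observed_failures: int = 0

    def record(self, hours: float, failures: int) -> None:
        # Accumulate operational exposure and observed failures.
        self.operating_hours += hours
        self.observed_failures += failures

    def observed_failure_rate(self) -> float:
        # Point estimate of the failure rate from operational data.
        return self.observed_failures / self.operating_hours

    def check_divergence(self):
        # Compare the observed rate against the contracted assumption.
        # Divergence in either direction (over- or underestimation) returns
        # the argument elements named in the contract for review.
        ratio = self.observed_failure_rate() / self.contract.assumed_failure_rate
        if ratio > self.contract.tolerance_factor or ratio < 1 / self.contract.tolerance_factor:
            return self.contract.argument_elements
        return None

# Hypothetical AGV example: a component assumed to fail at 1e-5 per hour.
contract = SafetyContract(
    component="AGV obstacle scanner",
    assumed_failure_rate=1e-5,
    tolerance_factor=2.0,
    argument_elements=["Goal: obstacle detection claim", "Goal: braking distance claim"],
)
monitor = RuntimeMonitor(contract)
monitor.record(hours=50_000, failures=3)  # observed rate 6e-5 exceeds 2 * 1e-5
print(monitor.check_divergence())         # -> argument elements to revisit
```

Checking both directions of divergence reflects the abstract’s point that preliminary confidence may be over- or underestimated: an inflated assumed failure rate makes the argument needlessly conservative, while a deflated one undermines the claims it supports.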

Keywords

Confidence · Safety contracts · Safety case · Safety argument · Monitoring · Runtime · Failure rate · Probability of failure · Through-life safety assurance

Notes

Acknowledgment

This work has been partially supported by the Swedish Foundation for Strategic Research (SSF) through the SYNOPSIS and FiC projects, and by EU-ECSEL through the SafeCOP project.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

School of Innovation, Design and Engineering, Mälardalen University, Västerås, Sweden
