
From Requirements Monitoring to Diagnosis Support in System of Systems

  • Michael Vierhauser
  • Rick Rabiser
  • Jane Cleland-Huang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10153)

Abstract

Context and motivation: Complex industrial software systems are often systems of systems (SoS) whose behavior only fully emerges during operation. Techniques such as requirements monitoring thus have to be used to observe such systems at runtime to detect deviations from their requirements. Question/problem: However, the focus of existing monitoring approaches is mainly on detecting violations of expected behavior, while support for subsequent diagnosis of violations is rather limited and often even neglected. Diagnosis is particularly challenging in SoS, which are characterized by complex heterogeneous architectures and a slew of different development and testing tools. Principal ideas/results: In this research preview paper we discuss the required capabilities for diagnosis support in SoS and outline a tool-supported framework based on a runtime artifact model and pre-defined diagnosis actions. Contribution: We describe our ongoing development of the framework and tools for supporting diagnosis in SoS and provide a research agenda.
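
To make the intended capability concrete, the sketch below illustrates how a monitored requirement could be coupled with a pre-defined diagnosis action: a constraint is evaluated over runtime events and, upon violation, the associated action collects related runtime artifacts to support the engineer's diagnosis. This is a minimal illustration under our own assumptions, not the paper's actual framework or API; all names (RuntimeEvent, MonitoredRequirement, DiagnosisAction, DiagnosisAwareMonitor) are hypothetical.

    import java.time.Instant;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Predicate;

    // Illustrative sketch only; all types and methods are hypothetical.

    /** A runtime event emitted by one of the constituent systems of the SoS. */
    record RuntimeEvent(String source, String type, Instant timestamp, Map<String, Object> payload) {}

    /** A monitored requirement expressed as a constraint over runtime events. */
    record MonitoredRequirement(String id, String description, Predicate<RuntimeEvent> isViolatedBy) {}

    /** A pre-defined diagnosis action executed when a requirement is violated. */
    interface DiagnosisAction {
        /** Collects related runtime artifacts (logs, configurations, traces) for diagnosis. */
        List<String> collectEvidence(RuntimeEvent violatingEvent);
    }

    /** Minimal monitor that checks incoming events and triggers diagnosis on violations. */
    class DiagnosisAwareMonitor {
        private final Map<MonitoredRequirement, DiagnosisAction> registry;

        DiagnosisAwareMonitor(Map<MonitoredRequirement, DiagnosisAction> registry) {
            this.registry = registry;
        }

        void onEvent(RuntimeEvent event) {
            registry.forEach((requirement, action) -> {
                if (requirement.isViolatedBy().test(event)) {
                    // Detection: what existing monitoring approaches already provide.
                    System.out.printf("Violation of %s: %s%n", requirement.id(), requirement.description());
                    // Diagnosis support: gather related runtime artifacts for the engineer.
                    action.collectEvidence(event).forEach(artifact ->
                            System.out.println("  evidence: " + artifact));
                }
            });
        }
    }

For example, a requirement such as "a sub-system must acknowledge a command within five seconds" could be registered together with an action that retrieves the corresponding log excerpt and configuration snapshot of the involved system, rather than leaving the engineer to locate this evidence manually across the SoS.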

Keywords

Requirements monitoring · Systems of systems · Diagnosis

Acknowledgements

This work has been conducted in cooperation with Primetals Technologies and has been supported by the Christian Doppler Forschungsgesellschaft, Austria.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Michael Vierhauser (1)
  • Rick Rabiser (1)
  • Jane Cleland-Huang (2)
  1. Christian Doppler Laboratory MEVSS, ISSE, Johannes Kepler University Linz, Linz, Austria
  2. Department of Computer Science and Engineering, University of Notre Dame, South Bend, USA
