Extensible and Automated Model-Evaluations with INProVE

  • Sören Kemmann
  • Thomas Kuhn
  • Mario Trapp
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6598)

Abstract

Model-based development is becoming increasingly important for the creation of software-intensive embedded systems. One important aspect of software models is model quality. Model quality here does not refer to functional correctness but to non-functional properties such as maintainability, scalability, and extensibility. Considerable effort has been put into developing metrics for control-flow models. In the embedded systems domain, however, domain-specific and data-flow languages are commonly applied for model creation, and existing metrics are not applicable to these languages. Domain- and project-specific quality metrics are therefore defined informally, and tracking conformance to them is a manual, effort-consuming task. To resolve this situation, we developed INProVE, a model-based framework that supports the definition of quality metrics in an intuitive yet formal notation. It provides automated evaluation of design models through its indicators. Applied to complex models in different industry projects, INProVE has proven its applicability for the quality assessment of data-flow-oriented design models not only in research but also in practice.
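To illustrate the kind of automated evaluation the abstract describes, the following is a minimal hypothetical sketch, not INProVE's actual API or notation: it models a data-flow design as a toy graph of blocks and signal connections, computes a simple structural metric (block fan-out), and maps it to a normalized quality indicator score. All names (`DataFlowModel`, `fan_out_indicator`, the threshold value) are illustrative assumptions.

```python
# Hypothetical sketch of an automated quality indicator for a data-flow
# model (NOT INProVE's actual API). A model is a set of blocks plus
# directed signal connections; the indicator scores worst-case fan-out.

from dataclasses import dataclass, field

@dataclass
class DataFlowModel:
    blocks: set = field(default_factory=set)
    # connections are (source_block, target_block) pairs
    connections: list = field(default_factory=list)

def fan_out(model: DataFlowModel, block: str) -> int:
    """Number of outgoing signal connections of a block."""
    return sum(1 for src, _ in model.connections if src == block)

def fan_out_indicator(model: DataFlowModel, threshold: int = 5) -> float:
    """Map the worst-case fan-out to a score in [0, 1]:
    1.0 while no block exceeds the threshold, degrading linearly above it."""
    worst = max((fan_out(model, b) for b in model.blocks), default=0)
    if worst <= threshold:
        return 1.0
    return max(0.0, 1.0 - (worst - threshold) / threshold)

# Toy example: a sensor chain feeding a controller with two actuators.
model = DataFlowModel(
    blocks={"Sensor", "Filter", "Ctrl", "Act1", "Act2"},
    connections=[("Sensor", "Filter"), ("Filter", "Ctrl"),
                 ("Ctrl", "Act1"), ("Ctrl", "Act2")],
)
print(fan_out_indicator(model))  # worst fan-out is 2, within the threshold
```

The point of such an indicator is that a domain- or project-specific rule of thumb ("blocks should not fan out too widely") becomes a formally defined, automatically checkable score rather than a manually tracked guideline.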

Keywords

Quality Modeling · Quality Assurance · Automated Quality Evaluation · Quality Evolution · Simulink · Model Quality



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Sören Kemmann, Fraunhofer IESE, Germany
  • Thomas Kuhn, Fraunhofer IESE, Germany
  • Mario Trapp, Fraunhofer IESE, Germany
