The Incremental Advantage: Evaluating the Performance of a TGG-based Visualisation Framework

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9761)


Triple Graph Grammars (TGGs) are best known as a bidirectional model transformation language, which might give the misleading impression that they are wholly unsuitable for unidirectional application scenarios. We believe that it is more useful to regard TGGs as just graph grammars with “batteries included”, meaning that TGG-based tools provide simple, default execution strategies, together with algorithms for incremental change propagation. Especially in cases where the provided execution strategies suffice, a TGG-based infrastructure may be advantageous, even for unidirectional transformations.

In this paper, we demonstrate these advantages by presenting a TGG-based, read-only visualisation framework, which is an integral part of the metamodelling and model transformation tool eMoflon. We argue for the advantages of using TGGs in this visualisation scenario, and provide a quantitative analysis of the runtime complexity and scalability of the realised incremental, unidirectional transformation.
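The incremental change propagation referred to above can be illustrated, independently of eMoflon's actual implementation, with a toy sketch: a correspondence map (playing the role of a TGG's correspondence graph) links source elements to their visualisation, so that after a change only new or deleted elements are (re-)translated rather than the whole model. All names below are hypothetical.

```python
# Illustrative sketch of incremental, unidirectional change propagation.
# The correspondence map `corr` records which target (diagram) element
# was created for which source element, so repeated propagation only
# translates the delta.

class IncrementalVisualiser:
    def __init__(self):
        self.corr = {}         # source element -> target (diagram) element
        self.translations = 0  # counts individual element translations

    def translate(self, element):
        self.translations += 1
        return f"shape({element})"

    def propagate(self, source_model):
        # Create targets only for source elements without a correspondence.
        for e in source_model:
            if e not in self.corr:
                self.corr[e] = self.translate(e)
        # Delete targets whose source element no longer exists.
        for e in list(self.corr):
            if e not in source_model:
                del self.corr[e]
        return [self.corr[e] for e in source_model]

viz = IncrementalVisualiser()
viz.propagate(["a", "b", "c"])        # initial batch: 3 translations
viz.propagate(["a", "b", "c", "d"])   # incremental: only "d" is translated
assert viz.translations == 4
```

A batch approach would re-translate all four elements on the second call; the correspondence map is what makes the second run proportional to the size of the change, not the size of the model.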


Keywords: Graph transformation · Triple graph grammars · Incremental model transformation



This work has been funded by the German Research Foundation (DFG) as part of project A01 within the Collaborative Research Centre (CRC) 1053 – MAKI.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Technische Universität Darmstadt, Darmstadt, Germany
  2. University of Paderborn, Paderborn, Germany
