
Recovering Traceability Links Between Code and Specification Through Domain Model Extraction

  • Jiří Vinárek
  • Petr Hnětynka
  • Viliam Šimko
  • Petr Kroha
Conference paper
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 191)

Abstract

Requirements traceability is an extremely important aspect of software development and especially of maintenance. Efficient maintenance of traceability links between a high-level requirements specification and the low-level implementation is hindered by many problems. In this paper, we propose a method for the automated recovery of links between parts of a textual requirements specification and the source code of the implementation. The method builds on a tool that extracts a prototype domain model from a plain-text requirements specification. The proposed method is evaluated on two non-trivial examples. The performed experiments show that our method is able to link requirements with source code with an accuracy of \(F_1 = 58{-}61\,\%\).

Keywords

Specification · Requirements · Traceability · Domain model

1 Introduction

Requirements traceability is an extremely important aspect of software development [1]. Traceability itself has been defined in the paper [2] as “the ability to describe and follow the life of requirements, in both a forwards and backwards direction”.

Efficient maintenance of traceability links between a high-level requirements specification and the low-level implementation is hindered by many problems (as also stated in the paper [1]). These problems include the high manual effort of keeping the links up to date, insufficient tool support, etc. Keeping the links up to date is hard because both the implementation and the specification evolve. As stated in the book [3], a requirements specification cannot be understood as final and unchangeable, especially when incremental development is applied. On the other hand, as the specification commonly serves as a bridge between developers and stakeholders without a technical background, it is vital to keep the specification and implementation synchronized via correct traceability links; this is all the more important because the specification quite often serves as a basis for the stakeholders' decisions about the software.

In this paper, we propose a viable method for the automated recovery of links between specification and code. In particular, the method can recover traceability links between implementation classes and specification documents, and also between classes and individual domain entities mentioned in the textual specification. The method is suitable especially for projects in a later stage of development. We do not claim that our method recovers all links, but it can serve as a starting point in this tedious process. Currently, we focus mainly on use-case specifications written in natural language and on Java implementation code.

The method proposed in this paper is based on the tool described in [4], which extracts a prototype domain model from plain text employing statistical classifiers.

The method is evaluated on two non-trivial example projects.

The paper is structured as follows. Section 2 presents the method we use as a basis. In Sect. 3, the core method is described, and it is evaluated in Sect. 4. Section 5 discusses related work, while Sect. 6 concludes the paper.

2 Domain Model Extraction

As a basis of our traceability method, we utilize the Domain Model Extraction Tool described in [4]. The tool extracts potential domain-model entities from text written in natural language (English). The input of the tool is a regular HTML document, and the output is an EMF (Eclipse Modeling Framework) model containing the derived domain entities linked to parts of the input text.

A domain model is a high-level overview of the most important concepts in the problem space. The domain model serves as a common vocabulary in the communication among technical and non-technical stakeholders throughout all project phases. This helps them come to an agreement on the meaning of important concepts. In [5] (p. 23), the domain model is defined as “a live, collaborative artefact which is refined and updated throughout the project, so that it always reflects the current understanding of the problem space”.

The Domain Model Extraction Tool itself runs a deep linguistic analysis on the input text and then, using a set of statistical classifiers (Maximum Entropy models), it derives the prototype domain model. The employed linguistic pipeline is based on the Stanford CoreNLP framework. The pipeline generates linguistic features such as identified sentences, dependency trees of the words in each sentence, coreferences, etc. Most of the linguistic features are preserved and stored in the generated EMF model. The tool already contains a default set of classification models trained on several real-life systems. The training data consist of EMF domain models and HTML files linked together (a sequence of words is linked to a model element as depicted in Fig. 1); a link is encoded as an HTML anchor that carries the reference to a model element (domain entity) in the EMF model.

In detail, after running the Domain Model Extraction Tool, we obtain: (1) identified entities, (2) identified relations among the entities, and (3) links to the original text. It should be noted that in this paper we focus only on the entities and ignore the identified relations.
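For illustration, the following minimal sketch shows the kind of linguistic preprocessing such a pipeline performs, using the Stanford CoreNLP API directly; the chosen annotator set and the example sentence are our assumptions and do not reproduce the tool's exact configuration.

```java
import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.util.CoreMap;

import java.util.List;
import java.util.Properties;

public class LinguisticPipeline {
    public static void main(String[] args) {
        // Annotators roughly matching the features mentioned above:
        // tokenization, sentence splitting, POS tags, lemmas, NER,
        // parse trees, and coreference resolution.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner,parse,dcoref");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        Annotation document = new Annotation(
                "The cashier scans the product. The system updates the total sum.");
        pipeline.annotate(document);

        // Each sentence carries its own annotations (tokens, parse tree, ...).
        List<CoreMap> sentences = document.get(CoreAnnotations.SentencesAnnotation.class);
        for (CoreMap sentence : sentences) {
            System.out.println(sentence.toString());
        }
    }
}
```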

The tool achieves a classification accuracy of \(F_1=76\,\%\) when classifying words that form a domain entity, and an accuracy of \(F_1=88\,\%\) when identifying sequences of words that form an entity, i.e., when identifying multi-word entities. Details about the measurements and the rest of the tool evaluation are available in [4] (pp. 47–70). These measurements have been cross-validated on a simple book library system specification.
Fig. 1. An example of training data for the domain model extraction tool.

3 Traceability Links Recovery

The pipeline of our method for the recovery of traceability links is depicted in Fig. 2. First, a prototype domain model is extracted from the requirements specification (Sect. 3.1). Then, an implementation model is extracted from the Java source code (Sect. 3.2). Finally, a similarity matrix is computed that assigns scores to the potential traceability links (Sect. 3.3). All of these stages are described in detail in the following sections.
Fig. 2. Method pipeline.

3.1 Extraction of a Domain Model

As a first step, we use the Domain Model Extraction Tool to predict domain entities from the text (an HTML document containing the textual requirements specification). The output of the tool is an EMF model containing the identified entities of the domain model and links to the specification.

The tool usually predicts more entities than manual inspection would. When the goal is solely to generate a domain model, these false positives can be an issue. For our method, however, they do not affect the final outcome, as they are filtered out in the linking phase (Sect. 3.3).

The Domain Model Extraction Tool internally employs statistical models for its classifiers. In a typical model-extraction scenario, the training phase is omitted and the tool uses the saved, preconfigured statistical models. However, re-training the tool on a domain model and specification developed for the given project may improve the precision of the model extraction.

3.2 Extraction of the Implementation Model

The implementation model of the project is reverse-engineered from the source files with the use of the MoDisco framework. MoDisco is able to obtain a model from multiple sources (Java, JSP, XML, etc.), but currently only Java code is relevant for our method. The output of MoDisco is an EMF model of the implementation project that can be easily queried.
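As an illustration of how such a model could be queried, the following sketch walks a serialized model generically via the EMF API; the model file name and the ClassDeclaration/InterfaceDeclaration type names (taken from the MoDisco Java metamodel) are assumptions, and the corresponding metamodel package must already be registered in the running environment.

```java
import org.eclipse.emf.common.util.TreeIterator;
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EStructuralFeature;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

import java.util.ArrayList;
import java.util.List;

public class ImplementationModelReader {

    /** Collects names of class/interface declarations from a MoDisco model file. */
    public static List<String> readTypeNames(String modelPath) {
        ResourceSetImpl resourceSet = new ResourceSetImpl();
        resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put("*", new XMIResourceFactoryImpl());
        Resource resource = resourceSet.getResource(URI.createFileURI(modelPath), true);

        List<String> names = new ArrayList<>();
        for (TreeIterator<EObject> it = resource.getAllContents(); it.hasNext(); ) {
            EObject element = it.next();
            String eClassName = element.eClass().getName();
            // "ClassDeclaration"/"InterfaceDeclaration" are the type names of the
            // MoDisco Java metamodel (an assumption; adjust for other metamodels).
            if (eClassName.equals("ClassDeclaration")
                    || eClassName.equals("InterfaceDeclaration")) {
                EStructuralFeature nameFeature =
                        element.eClass().getEStructuralFeature("name");
                if (nameFeature != null) {
                    names.add(String.valueOf(element.eGet(nameFeature)));
                }
            }
        }
        return names;
    }
}
```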

3.3 Linking Phase

The linking phase consists of the following steps:

  1. The model linker generates a similarity matrix, where the rows represent domain entities and the columns represent classes/interfaces found in the implementation model. Each cell of the matrix contains a value between 0 and 1 which represents a string similarity measure between the corresponding entity and class/interface (more details about the chosen string similarity measure are in Sect. 3.4).

  2. Next, we filter out the cells of the matrix whose values are below a given threshold. The surviving entity-class pairs are taken as the result of our method (a particular example is in Fig. 7); a sketch of these two steps is given at the end of Sect. 3.4.

  3. Finally, as the domain-model entities “remember” from which words in the input document they were generated, these words are transformed into hyperlinks pointing to the particular source files (using hyperlinks from the specification to the code is one of the widely used techniques for traceability-link visualisation [6]). Entities of the predicted model that have no classes/interfaces from the implementation model assigned are rejected.

3.4 String Similarity Measure

As a particular string similarity measure, we have adopted the Jaro-Winkler measure [7]. The Jaro-Winkler measure is an extension of the Jaro string comparator, which produces a similarity score for two strings.

Roughly, the Jaro comparator works in three steps: (i) it computes the lengths of the compared strings \(s_{1}\) and \(s_{2}\), (ii) it computes the number of matching characters \(m\), and (iii) it finds the number of transpositions \(t\).

Two characters, each from a different string, are matching if they are the same and they are not too “far” from each other in the strings (at most \(\lfloor \max(|s_{1}|,|s_{2}|)/2 \rfloor - 1\) positions apart). The position of each character from one string is compared with the positions of all its matching characters from the other string, and the number of transpositions is the number of matching characters that are in a different order. When the number of matching characters is zero, the similarity is defined as \(0\); in other cases it is defined as follows:
$$ Jaro(s_{1}, s_{2}) = \frac{1}{3} \cdot \left(\frac{m}{|s_{1}|} + \frac{m}{|s_{2}|} + \frac{m - t/2}{m}\right). $$
The Jaro-Winkler measure adds a bonus for strings with a common prefix. The length of the common prefix (labeled \(L\)) is capped at 4 characters, and the measure is defined as:
$$ JaroWinkler(s_{1}, s_{2}) = Jaro(s_{1}, s_{2}) + \frac{L}{10} \cdot (1 - Jaro(s_{1}, s_{2})). $$
Apart from the Jaro-Winkler measure, we also tried several other string similarity measures (Levenshtein distance, Jaro distance, Dice’s coefficient). The Jaro-Winkler measure gave us the best results, as it assigns a higher score to words with the same prefix. This suits our needs, because giving a common prefix to related classes is a common practice. The Levenshtein and Dice’s coefficient measures perform poorly especially when the two strings differ greatly in length.
A disadvantage of the Jaro-Winkler measure is the lack of preference for strings with a common suffix, which penalizes related entities following a particular naming convention. For example, the words “Dispenser” and “SimCashDispenser” obtain a low similarity score, although a human analyst would recognize them as similar. To overcome this drawback, we propose a modification of the measure which adds a bonus for words with a common suffix. The modified measure computes the Jaro-Winkler measure first (labeled \(JW\)) and then adds the suffix bonus. The bonus equals the scaled ratio of the common suffix length (labeled \(S\)) to the length of the first word (which, in our case, is the name of the predicted entity). We call it the Boosted-Jaro-Winkler measure:
$$ BoostedJaroWinkler(s_{1}, s_{2}) = JW(s_{1}, s_{2}) + \frac{S}{|s_{1}|} \cdot (1 - JW(s_{1}, s_{2})). $$
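For clarity, the three measures can be transcribed into Java as follows; this is a sketch that follows the definitions above (including the standard Jaro matching window), not the exact code used in our experiments.

```java
public final class StringSimilarity {

    /** Jaro similarity: 1 for identical strings, 0 when nothing matches. */
    public static double jaro(String s1, String s2) {
        if (s1.equals(s2)) return 1.0;
        int len1 = s1.length(), len2 = s2.length();
        if (len1 == 0 || len2 == 0) return 0.0;
        // Characters match if equal and at most this many positions apart.
        int window = Math.max(Math.max(len1, len2) / 2 - 1, 0);
        boolean[] matched1 = new boolean[len1];
        boolean[] matched2 = new boolean[len2];
        int m = 0;
        for (int i = 0; i < len1; i++) {
            int from = Math.max(0, i - window);
            int to = Math.min(len2 - 1, i + window);
            for (int j = from; j <= to; j++) {
                if (!matched2[j] && s1.charAt(i) == s2.charAt(j)) {
                    matched1[i] = matched2[j] = true;
                    m++;
                    break;
                }
            }
        }
        if (m == 0) return 0.0;
        // t = number of matching characters that are in a different order.
        int t = 0;
        for (int i = 0, j = 0; i < len1; i++) {
            if (!matched1[i]) continue;
            while (!matched2[j]) j++;
            if (s1.charAt(i) != s2.charAt(j)) t++;
            j++;
        }
        return ((double) m / len1 + (double) m / len2 + (m - t / 2.0) / m) / 3.0;
    }

    /** Jaro-Winkler: bonus for a common prefix of at most 4 characters. */
    public static double jaroWinkler(String s1, String s2) {
        double jaro = jaro(s1, s2);
        int l = 0;
        int max = Math.min(4, Math.min(s1.length(), s2.length()));
        while (l < max && s1.charAt(l) == s2.charAt(l)) l++;
        return jaro + l / 10.0 * (1.0 - jaro);
    }

    /** Boosted-Jaro-Winkler: additional bonus for a common suffix. */
    public static double boostedJaroWinkler(String s1, String s2) {
        double jw = jaroWinkler(s1, s2);
        if (s1.isEmpty()) return jw;
        int s = 0;
        int max = Math.min(s1.length(), s2.length());
        while (s < max
                && s1.charAt(s1.length() - 1 - s) == s2.charAt(s2.length() - 1 - s)) {
            s++;
        }
        return jw + (double) s / s1.length() * (1.0 - jw);
    }
}
```

Note that the suffix bonus lifts the score of the pair (“Dispenser”, “SimCashDispenser”) discussed above to the maximum, since the whole first string is a suffix of the second (\(S = |s_1|\)).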
A comparison of the mentioned measures on several string pairs (the pairs are taken from the example used in Sect. 4) is shown in Fig. 3 (values range from 0 to 1; a higher value means that the strings are more similar).
Fig. 3. String similarity on example string pairs.
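Returning to the linking phase of Sect. 3.3, the following sketch illustrates steps 1 and 2 (building the similarity matrix and filtering it by the threshold), reusing the StringSimilarity class above; the Link record and all other names are ours, not the actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class ModelLinker {

    /** An entity-class pair that survived the threshold filtering. */
    public record Link(String entity, String type, double score) {}

    /**
     * Step 1: build the similarity matrix (rows = entities, columns = types).
     * Step 2: keep only the cells whose values reach the threshold.
     */
    public static List<Link> link(List<String> entities, List<String> types,
                                  double threshold) {
        double[][] matrix = new double[entities.size()][types.size()];
        List<Link> links = new ArrayList<>();
        for (int i = 0; i < entities.size(); i++) {
            for (int j = 0; j < types.size(); j++) {
                matrix[i][j] = StringSimilarity.boostedJaroWinkler(
                        entities.get(i), types.get(j));
                if (matrix[i][j] >= threshold) {
                    links.add(new Link(entities.get(i), types.get(j), matrix[i][j]));
                }
            }
        }
        return links;
    }
}
```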

4 Test Data and Evaluation

To evaluate the described method, we used data from two software projects: (1) CoCoME [8] (the Common Component Modeling Example) and (2) the ATM project. In both cases, a textual specification together with a Java-based implementation was available.

In the specifications, we manually identified a set of entities (actors and external systems) that communicate with the described system. In the source files, we located the Java classes representing these entities and created a so-called gold set, i.e., pairs connecting the specification entities with their implementation counterparts. The gold set is used for evaluating the success rate of the proposed method.
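Against the gold set, Precision, Recall, and \(F_1\) are computed in the standard way; a minimal sketch follows, assuming each link is encoded as a plain "Entity->Class" string (an assumption made for illustration only).

```java
import java.util.Set;

public class Evaluation {

    /** Precision, recall and F1 of recovered links against the gold set. */
    public static double[] score(Set<String> recovered, Set<String> goldSet) {
        long truePositives = recovered.stream().filter(goldSet::contains).count();
        double precision = recovered.isEmpty()
                ? 0.0 : (double) truePositives / recovered.size();
        double recall = goldSet.isEmpty()
                ? 0.0 : (double) truePositives / goldSet.size();
        double f1 = (precision + recall == 0.0)
                ? 0.0 : 2.0 * precision * recall / (precision + recall);
        return new double[] {precision, recall, f1};
    }
}
```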

4.1 CoCoME Project

The goal of CoCoME was to create a common example for the evaluation of component-based frameworks. The specification of CoCoME defines a trading system used for handling sales in a chain of stores. Importantly, the specification tries to mimic a description of the system as delivered by a business company (as it could be in reality), and as such it can be potentially incomplete and/or imprecise.

The primary reason for using CoCoME was the fact that it offers a real-life system with both a requirements specification (the use-cases) and a freely available implementation, which is not very common.

The specification itself contains both functional and extra-functional requirements. The functional requirements are described in the form of high-level use-cases accompanied by sequence diagrams. The extra-functional requirements add timing, reliability, and usage-profile-related constraints. Apart from the requirements, the CoCoME specification also contains an architectural component model, a deployment view, and a behavioral view of the described system; all these parts use structured text in conjunction with UML diagrams.

From the specification, we took the high-level use-cases describing the communication between the modeled system and the involved actors. As a particular implementation of CoCoME, we took the reference JEE-based implementation provided together with the specification.

4.2 ATM Project

The second project used for the evaluation describes an ATM system; it was originally developed for an object-oriented software development course. The course material shows the complete process of software system development, from the initial requirements collection, analysis, and design to the implementation. From the project deliverables, we used the initial requirements and use-cases together with the system implementation.

4.3 Evaluation

We set the following goals for the evaluation:

G1: Find the best performing threshold value for filtering the similarity matrix.

G2: Compare the results obtained by our method in a fully automated scenario against a prepared baseline. The baseline consists of a domain-entity list obtained by picking all subjects and objects; where possible, we concatenated adjacent nouns (identified by the POS-tagger) to form an entity name. Four scenarios were evaluated:

  • baseline and baseline-boosted: the domain entities from the baseline were used, and string similarity was computed using the Jaro-Winkler and Boosted-Jaro-Winkler measures, respectively. These scenarios act as our baselines.

  • predicted and predicted-boosted: the lists of entities were derived using the Domain Model Extraction Tool, and the Jaro-Winkler and Boosted-Jaro-Winkler measures were used for string similarity.

G3: Evaluate the ability of our method to find traceability links between classes and specification documents, and compare it with state-of-the-art methods (the vector space model and the probabilistic method described in the paper [9]).

The characteristics of the data used for evaluation and training are shown in Fig. 4. The CoCoME dataset consisted of 8 use-cases, which were split into 23 traced documents; the ATM example contained a high-level requirements document and 9 use-cases, which were split into 25 traced documents. The models for the statistical classifiers were trained on independent specifications before the evaluation, in order to prevent classifier over-fitting: the ATM evaluation employed CoCoME and the Library system (a model bundled with the Domain Model Extraction Tool) as training data, while the CoCoME evaluation used the ATM example and the Library system data. The Library system was used for training only (not for evaluation) and is mentioned here just for the sake of completeness.
Fig. 4. Characteristics of the data used for evaluation/training.

Results for G1: To find the optimal threshold value, we executed the model linker multiple times for thresholds in the interval \((0.4, 1.0)\) and computed the accuracy. The results are presented in the form of Precision, Recall, and \(F_1\)-measure related to the cut-off threshold, and are shown in Figs. 5 and 6. We can see from the \(F_1\)-measure diagrams that the highest \(F_1\) corresponds to the threshold value \(0.83\) for the ATM example and \(0.79\) for CoCoME.
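Such a sweep can be sketched as follows, reusing the hypothetical ModelLinker and Evaluation classes from the previous sketches (including the "Entity->Class" link encoding assumed there).

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ThresholdSweep {

    /** Finds the threshold in (0.4, 1.0) with the highest F1 against the gold set. */
    public static double bestThreshold(List<String> entities, List<String> types,
                                       Set<String> goldSet) {
        double bestThreshold = 0.4, bestF1 = 0.0;
        for (double threshold = 0.40; threshold <= 1.00; threshold += 0.01) {
            Set<String> recovered = new HashSet<>();
            for (ModelLinker.Link link : ModelLinker.link(entities, types, threshold)) {
                recovered.add(link.entity() + "->" + link.type());
            }
            double f1 = Evaluation.score(recovered, goldSet)[2];
            if (f1 > bestF1) {
                bestF1 = f1;
                bestThreshold = threshold;
            }
        }
        return bestThreshold;
    }
}
```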
Fig. 5. ATM example – the diagrams show accuracy (Y-axis) for different threshold values (X-axis); a subset of the results is also presented as a table with the best-performing threshold value highlighted.

Results for G2: The diagram denoted as predicted-boost perf. in Figs. 5 and 6 focuses only on the best scenario and shows all the measures together. Executed on the CoCoME example with the threshold set to \(0.79\), the model linker returned 23 domain entities and 83 implementation classes (as seen in Fig. 7), of which 19 entities and 60 classes were recognized/classified correctly; executed on the ATM example with the threshold set to \(0.83\), it returned 18 domain entities and 25 implementation classes (as seen in Fig. 8), of which 12 entities and 20 classes were recognized/classified correctly.
Fig. 6. CoCoME example – the diagrams show accuracy (Y-axis) for different threshold values (X-axis); a subset of the results is also presented as a table with the best-performing threshold value highlighted.

Fig. 7. CoCoME – predicted entity-class pairs with threshold value \(0.79\).

Fig. 8. ATM – predicted entity-class pairs with threshold value \(0.83\).

Fig. 9. Accuracy of traceability-link recovery between classes and whole specification documents; the X-axis represents different threshold values, the Y-axis represents accuracy.

Results for G3: We traced the predicted entities returned by the model linker back to the specification documents, thereby obtaining links between specification documents and implementation classes. The methods mentioned in [9] are able to recover traceability links with an accuracy of 34–53 %, while our method achieves 58–61 % according to the \(F_1\)-measure. Figure 9 shows Precision, Recall, and \(F_1\)-measure as functions of the cut-off threshold used in the model linker.

5 Related Work

Probabilistic and vector space information retrieval techniques for traceability links are explained in the paper [9]. These approaches apply text normalization procedures to both source code and software documents. The normalized documents are indexed, and traceability links are estimated according to their similarity score. In contrast, our method goes in the opposite direction: it synthesizes a domain model from the given documents and matches it with the source code. It works at a finer-grained level, as it traces not whole documents but the entities contained in them. Using statistical classifiers, it can leverage the semantic context of the documents' words.

Probabilistic and vector space methods are also discussed in the paper [10]. In addition to [9], the paper proposes best practices for writing and structuring software artefacts (documentation, specification, etc.) to improve automated traceability.

The method introduced in [9] is further extended in [11]. The main contribution is the utilization of a syntax tree derived from the code. Identifiers found in the code are converted into comment keywords based on their appearance in the syntax tree. Using this approach, the authors are able to match abbreviated identifiers, or identifiers using synonyms, to their documentation counterparts.

Another approach extending the method from [9] is presented in [12]. It uses information retrieval techniques to obtain traceability links between code and requirements. Subsequently, information mined from software repositories (CVS/SVN) is used to re-rank or discard the retrieved links. The information can be mined from multiple sources, and weights for the links may be assigned on a per-link basis.

An approach helping developers keep source code identifiers and comments consistent with high-level artifacts is presented in [13]. The proposed method computes the textual similarity between the source code and the related high-level specification documents and presents the computed similarity to the developer. Moreover, the method recommends candidate identifiers built from the high-level artifacts. The approach is implemented as an Eclipse plugin called COde Comprehension Nurturant Using Traceability (COCONUT). The paper also reports on two experiments using COCONUT that evaluate the quality of the developed code. In contrast with the approaches mentioned above [9, 10], the method uses the latent semantic indexing technique for document indexing, which gives more accurate results than the vector space method. Compared to our method, the approach focuses more on interactive improvement of the source code and less on automatic derivation of traceability links.

A probabilistic approach to bridging the gap between a high-level description of the system and its implementation is described in the paper [14]. The presented cognitive assignment technique has two phases: cognitive map derivation and concept assignment. In the first phase, the system processes relevant project documents (specification, bug reports, etc.) authored by an expert engineer. In the second phase, a non-expert engineer uses queries to look for relevant pieces of code. The query, together with the cognitive maps, is transformed into a Bayesian network, which is used to classify the source code; the relevant results are returned to the user. The method is implemented as an Eclipse plugin and, compared to our method, it is more suited to interactive exploration of a software project and less applicable to automatic link derivation.

A method for automated traceability-link retrieval using ontologies is explained in the paper [15]. The method processes source and target artifacts with linguistic tools and tries to map concepts extracted from sentences to domain-specific ontologies. In case of an unsuccessful mapping, it establishes similarity using a generalized ontology composed of single words and very simple phrases. Compared with other methods, a disadvantage is the need to create a domain-specific ontology to obtain accurate results; the paper states that one of the authors spent two days creating a domain-specific ontology for 40 of the 158 source artifacts.

An approach targeted at linking implementation source code with code snippets included in learning resources or supporting channels (e.g., bug trackers, forums) is presented in [16]. The authors identified sources of commonly found ambiguities in code snippets and designed a pipeline to precisely locate the traced source code. Evaluated on several open-source projects, the method shows high precision and recall (96 %). The technique is narrowly focused on the source code and does not take specifications written in natural language into account.

An extensive survey and categorization of traceability-discovery techniques can be found in [17]. The survey encompasses 89 articles from 25 venues published between 1992 and 2011. It defines 7 dimensions, and their attributes, for a feature-location taxonomy. In the Type of analysis dimension, our method would be classified as Textual, as it uses NLP tools; its User input would be Natural Language Query and Source Code Artifact. In the Data sources dimension, it fits into the Non-compilable category. The Output dimension defines the granularity of the results; our method works at the file/class level. The Programming language support dimension is, in the current phase, restricted to Java. In the Evaluation dimension, the method would be ranked as Preliminary, as it is evaluated on a small data set, and Systems evaluated would contain CoCoME.

The TraceLab project [18] aims at providing an experimental workbench for designing, constructing, and executing traceability experiments, and for facilitating the rigorous evaluation of different traceability techniques.

Automated detection and classification of non-functional requirements from both structured and unstructured documents is discussed in [19]. The paper describes a classification algorithm and evaluates its effectiveness on two datasets: a requirements specification developed as a student term project and a large dataset from an industrial project. Like our method, it uses machine-learning techniques to identify candidate entities; however, it targets non-functional requirements only and processes only specifications.

6 Conclusion and Future Work

We presented our method for recovering traceability links between a requirements specification and an implementation. In particular, it can be used to recover links (1) between classes and domain entities or (2) between classes and specification documents, as shown in the evaluation on two non-trivial examples. We compared the former with a baseline approach that considers all nouns in the text as potential entities, and showed that precision and recall are increased when the entities are extracted using our Domain Model Extraction Tool. Comparing the latter with existing probabilistic and vector space information retrieval methods (e.g., those presented in [9]), we showed that our method performs better with respect to the \(F_1\)-measure.

Currently, we plan to evaluate our method on several different case studies and examples to confirm its performance and to tune it. We also plan to evaluate our method and the other mentioned traceability-recovery methods on the same dataset to obtain a more accurate comparison. The obtained values may be affected by the fact that a common specification does not contain a high number of domain entities; evaluating additional data sets would address this issue. The accuracy of the method could be further improved by detecting common prefixes/suffixes used for domain entities (Java interfaces are frequently prefixed with “I”, enums with “E”, EMF models use the “Impl” suffix for implementation classes, etc.). Trimming them would raise the string-similarity values observed during the linking phase; a sketch of such trimming follows.
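A possible form of this trimming is sketched below; the affix lists contain only the examples named above and would have to be tailored per project.

```java
public class IdentifierNormalizer {

    // Conventional affixes mentioned above; the exact lists are project-specific.
    private static final String[] PREFIXES = {"I", "E"};
    private static final String[] SUFFIXES = {"Impl"};

    /** Strips a known prefix/suffix so that e.g. "ICashDeskImpl" compares as "CashDesk". */
    public static String normalize(String name) {
        for (String prefix : PREFIXES) {
            // Require an upper-case letter after the prefix to avoid trimming
            // ordinary names such as "Item" or "Express".
            if (name.length() > prefix.length()
                    && name.startsWith(prefix)
                    && Character.isUpperCase(name.charAt(prefix.length()))) {
                name = name.substring(prefix.length());
                break;
            }
        }
        for (String suffix : SUFFIXES) {
            if (name.length() > suffix.length() && name.endsWith(suffix)) {
                name = name.substring(0, name.length() - suffix.length());
                break;
            }
        }
        return name;
    }
}
```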


Acknowledgments

This work was partially supported by the EU project ASCENS 257414, partially by the European Union Seventh Framework Programme FP7-PEOPLE-2010-ITN under grant agreement no. 264840, and partially by Charles University institutional funding SVV-2014-260100.

References

  1. Bouillon, E., Mäder, P., Philippow, I.: A survey on usage scenarios for requirements traceability in practice. In: Doerr, J., Opdahl, A.L. (eds.) REFSQ 2013. LNCS, vol. 7830, pp. 158–173. Springer, Heidelberg (2013)
  2. Gotel, O.C.Z., Finkelstein, A.C.W.: An analysis of the requirements traceability problem. In: Proceedings of ICRE 1994, Colorado Springs, USA, April 1994
  3. Larman, C.: Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and the Unified Process, 3rd edn. Prentice-Hall, Upper Saddle River (2004)
  4. Šimko, V.: From textual specification to formal verification. Ph.D. thesis, Charles University in Prague, Faculty of Mathematics and Physics (2013)
  5. Rosenberg, D., Stephens, M.: Use Case Driven Object Modeling with UML: Theory and Practice. Springer, New York (2007)
  6. Li, Y., Maalej, W.: Which traceability visualization is suitable in this context? A comparative study. In: Regnell, B., Damian, D. (eds.) REFSQ 2012. LNCS, vol. 7195, pp. 194–210. Springer, Heidelberg (2012)
  7. Winkler, W.E.: Overview of record linkage and current research directions. Research Report Series, Statistical Research Division, US Census Bureau, February 2006
  8. Rausch, A., Reussner, R., Mirandola, R., Plášil, F. (eds.): The Common Component Modeling Example. LNCS, vol. 5153. Springer, Heidelberg (2008)
  9. Antoniol, G., Canfora, G., Casazza, G., De Lucia, A., Merlo, E.: Recovering traceability links between code and documentation. IEEE Trans. Softw. Eng. 28(10), 970–983 (2002)
  10. Cleland-Huang, J., Settimi, R., Romanova, E., Berenbach, B., Clark, S.: Best practices for automated traceability. Computer 40(6), 27–35 (2007)
  11. Nagano, S., Ichikawa, Y., Kobayashi, T.: Recovering traceability links between code and documentation for enterprise project artifacts. In: Proceedings of COMPSAC 2012, Izmir, Turkey, pp. 11–18. IEEE, July 2012
  12. Ali, N., Guéhéneuc, Y., Antoniol, G.: Trustrace: mining software repositories to improve the accuracy of requirement traceability links. IEEE Trans. Softw. Eng. 39(5), 725–741 (2013)
  13. De Lucia, A., Di Penta, M., Oliveto, R.: Improving source code lexicon via traceability and information retrieval. IEEE Trans. Softw. Eng. 37(2), 205–227 (2011)
  14. Cleary, B., Exton, C.: The cognitive assignment Eclipse plug-in. In: Proceedings of ICPC 2006, Athens, Greece. IEEE, June 2006
  15. Li, Y., Cleland-Huang, J.: Ontology-based trace retrieval. In: Proceedings of TEFSE 2013, San Francisco, USA, pp. 30–36. IEEE, May 2013
  16. Dagenais, B., Robillard, M.: Recovering traceability links between an API and its learning resources. In: Proceedings of ICSE 2012, Zurich, Switzerland, pp. 47–57. IEEE, June 2012
  17. Dit, B., Revelle, M., Gethers, M., Poshyvanyk, D.: Feature location in source code: a taxonomy and survey. J. Softw. Evol. Process 25(1), 53–95 (2013)
  18. Dit, B., Moritz, E., Poshyvanyk, D.: A TraceLab-based solution for creating, conducting, and sharing feature location experiments. In: Proceedings of ICPC 2012, Passau, Germany, pp. 203–208. IEEE CS, June 2012
  19. Cleland-Huang, J., Settimi, R., Zou, X., Solc, P.: The detection and classification of non-functional requirements with application to early aspects. In: Proceedings of RE 2006, St. Paul, USA. IEEE, September 2006

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Jiří Vinárek (1, 2)
  • Petr Hnětynka (2)
  • Viliam Šimko (1)
  • Petr Kroha (2)

  1. Institute for Program Structures and Data Organisation, Karlsruhe Institute of Technology, Karlsruhe, Germany
  2. Department of Distributed and Dependable Systems, Faculty of Mathematics and Physics, Charles University in Prague, Prague, Czech Republic
