Require, test, and trace IT
Abstract
We propose a framework for requirement-driven test generation that combines contract-based interface theories with model-based testing. We design a specification language, requirement interfaces, for formalizing different views (aspects) of synchronous data-flow systems from informal requirements. Various views of a system, modeled as requirement interfaces, are naturally combined by conjunction. We develop an incremental test generation procedure with several advantages. The test generation is driven by a single requirement interface at a time. It follows that each test assesses a specific aspect or feature of the system, specified by its associated requirement interface. Since we do not explicitly compute the conjunction of all requirement interfaces of the system, we avoid state-space explosion while generating tests. However, we incrementally complete a test for a specific feature with the constraints defined by other requirement interfaces. This allows catching violations of any other requirement during test execution, and not only of the one used to generate the test. This framework defines a natural association between informal requirements, their formal specifications, and the generated tests, thus facilitating traceability. Finally, we introduce a fault-based test-case generation technique, called model-based mutation testing, to requirement interfaces. It generates a test suite that covers a set of fault models, guaranteeing the detection of any corresponding faults in deterministic systems under test. We implemented a prototype test generation tool and demonstrate its applicability in two industrial use cases.
Keywords
Model-based testing · Test-case generation · Requirements engineering · Traceability · Requirement interfaces · Formal specification · Synchronous systems · Consistency checking · Incremental test-case generation · Model-based mutation testing

1 Introduction
Modern software and hardware systems are becoming increasingly complex, resulting in new design challenges. For safety-critical applications, correctness evidence for designed systems must be presented to the regulatory bodies (see the automotive standard ISO 26262 [33]). It follows that verification and validation techniques must be used to provide evidence that the designed system meets its requirements. Testing remains the preferred practice in industry for gaining confidence in the design correctness. In classical testing, an engineer designs a test experiment, i.e., an input vector that is executed on the system under test (\(\textsc {SUT}\)) to check whether it satisfies its requirements. Due to the finite number of experiments, testing cannot prove the absence of errors. However, it is an effective technique for catching bugs. Testing remains a predominantly manual and ad hoc activity that is prone to human errors. As a result, it is often a bottleneck in complex system design.
Model-based testing (MBT) is a technology that enables systematic and automatic test-case generation (TCG) and execution, thus reducing system design time and cost. In MBT, the \(\textsc {SUT}\) is tested for conformance against its specification, a mathematical model of the \(\textsc {SUT}\). In contrast to the specification, which is a formal object, the \(\textsc {SUT}\) is a physical implementation with an often unknown internal structure, also called a “black box”. The \(\textsc {SUT}\) can be accessed by the tester only through its external interface. To reason about the conformance of the \(\textsc {SUT}\) to its specification, one needs to rely on the testing assumption [48] that the \(\textsc {SUT}\) can react at all times to all inputs and can be modeled in the same language as its specification.
The formal model of the \(\textsc {SUT}\) is derived from its informal requirements. The process of formulating, documenting, and maintaining system requirements is called requirement engineering. The requirements are typically written in a textual form, using possibly constrained English, and are gathered in a requirements document. The requirements document is structured into chapters describing different views of the system, e.g., behavior, safety, and timing. Intuitively, a system must correctly implement the conjunction of all its requirements. Sometimes, requirements can be inconsistent, resulting in a specification that does not admit any correct implementation.
We first introduce requirement interfaces as the formalism for modeling system views as subsets of requirements. It is a state-transition formalism that supports compositional specification of synchronous data-flow systems by means of assume/guarantee rules that we call contracts. We associate subsets of contracts to requirement identifiers to facilitate their tracing to the informal requirements from which the specification is derived. These associations can, later on, be used to generate links between the work products [4], connecting several tools.
A requirement interface is intended to model a specific view of the \(\textsc {SUT}\). We define the conjunction operation that enables combining different views of the \(\textsc {SUT}\). Intuitively, a conjunction of two requirement interfaces is another requirement interface that requires contracts of both interfaces to hold. We assume that the overall specification of the \(\textsc {SUT}\) is given as a conjunction of requirement interfaces modeling its different views. Requirement interfaces are inspired by the synchronous interfaces [23], with the difference that we allow hidden variables in addition to the interface (input and output) variables and that the requirement identifiers are part of the formal model. The conjunction operator was first defined in [25] as shared refinement, while [13] establishes the link of the conjunction to multiple-viewpoint modeling and requirement engineering.
We then formally define consistency for requirement interfaces and develop a bounded consistency checking procedure. In addition, we show that falsifying consistency is compositional with respect to conjunction, i.e., the conjunction of an inconsistent interface with any other interface remains inconsistent. Next, we develop a requirement-driven TCG and execution procedure from requirement interfaces, with language inclusion as the conformance relation. We present a procedure for TCG from a specific \(\textsc {SUT}\) view, modeled as a requirement interface, and a test purpose. Here, the test purpose is a formal specification of the target state(s) that a test case should cover. Such a test case can be used directly to detect whether the \(\textsc {SUT}\) violates a given requirement, but cannot detect violations of other requirements in the conjunction. Next, we extend this procedure by completing such a partial test case with additional constraints from other view models that enable detection of violations of any other requirement.
Then, we develop a tracing procedure that exploits the natural mapping between informal requirements and our formal model. Thus, inconsistent contracts or failing test cases can be traced back to the violated requirements. We believe that such tracing information provides precious maintenance and debugging information to the engineers.
Finally, we show how to apply fault-based test generation to requirement interfaces. This technique, called model-based mutation testing [2], is applied to automatically generate a set of test purposes. The corresponding test suite is able to provide fault coverage for a specified set of fault models, under the assumption of a deterministic \(\textsc {SUT}\). The approach includes the following steps: first, we define a set of fault models for requirement interfaces. These are applied to all applicable parts of the contracts in the requirement interface, generating a set of faulty models, called mutants. We then check whether the mutated contract introduces any new behavior. This check is encoded as a test purpose, so we can simply pass it to the previously defined test-generation procedure. If the mutation introduces new behavior that deviates from the reference model, it will generate a test; otherwise, the test purpose will be unreachable, and the mutant is considered equivalent.
We illustrate the entire workflow of using requirement interfaces for consistency checking, testing, and tracing in Fig. 1, where the test purpose may be produced by model-based mutation testing, or by any other technique.
Parts of this paper have already been published in the proceedings of the Formal Methods for Industrial Critical Systems 2015 workshop [5]. The current paper improves on that work by adding the theory for model-based mutation testing of requirement interfaces, proofs of all theorems, a second industrial case study, and various improvements throughout the paper.
The rest of the paper is structured as follows: first, we introduce requirement interfaces in Sect. 2. Then, in Sect. 3, we present how to perform consistency checks and test-case generation from requirement interfaces, and how to trace the involved requirements from consistency violations or test cases. Next, Sect. 4 gives the theory for applying model-based mutation testing to requirement interfaces. Then, in Sect. 5, we present results of our test-case generation on our running example and on the two industrial case studies. Finally, we discuss related work (Sect. 6) and conclude our work (Sect. 7).
2 Requirement interfaces
We introduce requirement interfaces, a formalism for the specification of synchronous data-flow systems. Their semantics is given in the form of labeled transition systems (\(\text {LTS}\)). We define consistent interfaces as those that admit at least one correct implementation. The refinement relation between interfaces is given as language inclusion. Finally, we define the conjunction of requirement interfaces as another interface that requires the contracts of both interfaces to hold.
2.1 Syntax
Let \(X\) be a set of typed variables. A valuation v over \(X\) is a function that assigns to each \(x \in X\) a value v(x) of the appropriate type. We denote by \(V(X)\) the set of all valuations over \(X\). We denote by \(X'=\{x' \mid x \in X\}\) the set obtained by priming each variable in \(X\). Given a valuation \(v \in V(X)\) and a predicate \(\varphi \) on \(X\), we denote by \(v \models \varphi \) the fact that \(\varphi \) is satisfied under the variable valuation v. Given two valuations \(v,v' \in V(X)\) and a predicate \(\varphi \) on \(X\cup X'\), we denote by \((v,v') \models \varphi \) the fact that \(\varphi \) is satisfied by the valuation that assigns to \(x \in X\) the value v(x), and to \(x' \in X'\) the value \(v'(x')\).
Given a subset \(Y \subseteq X\) of variables and a valuation \(v \in V(X)\), we denote by \(\pi (v)[Y]\), the projection of v to Y. We will commonly use the symbol \(w_{Y}\) to denote a valuation projected to the subset \(Y \subseteq X\). Given the sets X, \(Y_{1} \subseteq X\), \(Y_{2} \subseteq X\), \(w_{1} \in V(Y_{1})\), and \(w_{2} \in V(Y_{2})\), we denote by \(w =w_{1} \cup w_{2}\) the valuation \(w \in V(Y_{1} \cup Y_{2})\), such that \(\pi (w)[Y_{1}] = w_{1}\) and \(\pi (w)[Y_{2}] = w_{2}\).
Given a set \(X\) of variables, we denote by \(X_{I}\), \(X_{O}\), and \(X_{H}\) three disjoint sets of input, output, and hidden variables that partition \(X\), such that \(X= X_{I}\cup X_{O}\cup X_{H}\). We denote by \(X_{\text {obs}}= X_{I}\cup X_{O}\) the set of observable variables and by \(X_{\text {ctr}}= X_{H}\cup X_{O}\) the set of controllable variables.^{1} A contract c on \(X \cup X'\), denoted by \((\varphi \vdash \psi )\), is a pair consisting of an assumption predicate \(\varphi \) on \(X_{I}' \cup X\) and a guarantee predicate \(\psi \) on \(X_{\text {ctr}}' \cup X\). A contract \({\hat{c}} = ({\hat{\varphi }} \vdash {\hat{\psi }})\) is said to be an initial contract if \({\hat{\varphi }}\) and \({\hat{\psi }}\) are predicates on \(X_{I}'\) and \(X_{\text {ctr}}'\), respectively, and an update contract otherwise. Given two valuations \(v, v' \in V(X)\) and a contract \(c = (\varphi \vdash \psi )\) over \(X \cup X'\), we say that \((v,v')\) satisfies c, denoted by \((v,v') \models c\), if \((v, \pi (v')[X_{I}]) \models \varphi \) implies \((v, \pi (v')[X_{\text {ctr}}]) \models \psi \). In addition, we say that \((v, v')\) satisfies the assumption of c, denoted by \((v,v') \models _{A} c\), if \((v, \pi (v')[X_{I}]) \models \varphi \). The valuation pair \((v,v')\) satisfies the guarantee of c, denoted by \((v,v') \models _{G} c\), if \((v,\pi (v')[X_{\text {ctr}}]) \models \psi \).^{2}
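As an illustration (our own sketch, not part of the paper's formalism or tooling), contract satisfaction over finite valuations can be expressed directly in Python. The dictionary encoding of valuations, the helper `satisfies`, and the example contract `c1` (an enqueue contract for a 2-place buffer with counter `k`) are all assumptions made for this example:

```python
# A valuation is a dict mapping variable names to values; a contract is a pair
# (assumption, guarantee) of predicates over (v, v'), where v' carries the
# primed (next-step) values. Names and encoding are illustrative.

def satisfies(v, v_next, contract):
    """(v, v') |= (phi |- psi): if the assumption holds, the guarantee must."""
    phi, psi = contract
    return (not phi(v, v_next)) or psi(v, v_next)

# Example update contract: enq' and not deq' and k < 2  |-  k' = k + 1
c1 = (lambda v, vp: vp["enq"] and not vp["deq"] and v["k"] < 2,
      lambda v, vp: vp["k"] == v["k"] + 1)

v  = {"k": 0}
vp = {"enq": True, "deq": False, "k": 1}
print(satisfies(v, vp, c1))          # assumption and guarantee hold -> True

vp_bad = {"enq": True, "deq": False, "k": 0}
print(satisfies(v, vp_bad, c1))      # assumption holds, guarantee fails -> False
```

Note that a pair trivially satisfies a contract whenever the assumption is false, mirroring the implication in the definition above.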
Definition 1

A requirement interface is a tuple \(A= (X_{I}, X_{O}, X_{H}, \hat{C}, C, \mathcal {R}, \rho )\), where:
\(X_{I}\), \(X_{O}\), and \(X_{H}\) are disjoint finite sets of input, output, and hidden variables, respectively, and \(X= X_{I}\cup X_{O}\cup X_{H}\) denotes the set of all variables;

\(\hat{C}\) and \(C\) are finite nonempty sets of initial and update contracts, respectively;

\(\mathcal {R}\) is a finite set of requirement identifiers;

\(\rho :\mathcal {R}\rightarrow {\mathcal {P}}(C\cup \hat{C})\) is a function mapping requirement identifiers to subsets of contracts, such that \(\bigcup _{r \in \mathcal {R}} \rho (r) = C\cup \hat{C}\).
We say that a requirement interface is receptive if in any state, it has defined behaviors for all inputs, that is \(\bigvee _{({\hat{\varphi }} \vdash {\hat{\psi }}) \in \hat{C}} {\hat{\varphi }}\) and \(\bigvee _{(\varphi \vdash \psi ) \in C} \varphi \) are both valid. A requirement interface is fully observable if \(X_{H}= \emptyset \). A requirement interface is deterministic if for all \(({\hat{\varphi }} \vdash {\hat{\psi }}) \in \hat{C}\), \({\hat{\psi }}\) has the form \(\bigwedge _{x \in X_{O}} x' = c\),^{3} where c is a constant of the appropriate type, and for all \((\varphi \vdash \psi ) \in C\), \(\psi \) has the form \(\bigwedge _{x \in X_{\text {ctr}}} x' = f(X)\),^{3} where f is a function over X that has the same type as x.
Example 1
 \(r_0\): The buffer is empty and the inputs are ignored in the initial state.
 \(r_1\): enq triggers an enqueue operation when the buffer is not full.
 \(r_2\): deq triggers a dequeue operation when the buffer is not empty.
 \(r_3\): E signals that the buffer is empty.
 \(r_4\): F signals that the buffer is full.
 \(r_5\): Simultaneous enq and deq (or their simultaneous absence), an enq on the full buffer, or a deq on the empty buffer have no effect.
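The requirements above can be sketched as executable contracts. This is a hedged encoding of our own (the variable names `k`, `enq`, `deq`, `E`, `F`, the capacity constant, and the dictionary representation are assumptions that only approximate the formal model); the initial contract \(r_0\) is omitted since it constrains only primed variables:

```python
# Illustrative (assumption, guarantee) contracts for a 2-place buffer with a
# hidden counter k; vp holds the primed (next-step) values.

K = 2  # assumed buffer capacity

contracts = {
    "r1": (lambda v, vp: vp["enq"] and not vp["deq"] and v["k"] < K,
           lambda v, vp: vp["k"] == v["k"] + 1),
    "r2": (lambda v, vp: not vp["enq"] and vp["deq"] and v["k"] > 0,
           lambda v, vp: vp["k"] == v["k"] - 1),
    "r3": (lambda v, vp: True, lambda v, vp: vp["E"] == (vp["k"] == 0)),
    "r4": (lambda v, vp: True, lambda v, vp: vp["F"] == (vp["k"] == K)),
    "r5": (lambda v, vp: (vp["enq"] == vp["deq"])
                         or (vp["enq"] and v["k"] == K)
                         or (vp["deq"] and v["k"] == 0),
           lambda v, vp: vp["k"] == v["k"]),
}

def step_ok(v, vp):
    """A transition is allowed iff (v, vp) satisfies every contract."""
    return all((not phi(v, vp)) or psi(v, vp)
               for phi, psi in contracts.values())

v  = {"k": 1}
vp = {"enq": True, "deq": False, "k": 2, "E": False, "F": True}
print(step_ok(v, vp))  # enqueue on a buffer holding one item -> True
```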
2.2 Semantics
Given a requirement interface \(A\) defined over \(X\), let \(V = V(X) \cup \{ {\hat{v}} \}\) denote the set of states in \(A\), where a state v is a valuation \(v \in V(X)\) or the initial state \({\hat{v}} \not \in V(X)\). The latter is not a valuation, as the initial contracts do not specify unprimed variables. There is a transition between two states v and \(v'\) if \((v,v')\) satisfies all its contracts. The transitions are labeled by the (possibly empty) set of requirement identifiers corresponding to contracts for which \((v,v')\) satisfies their assumptions. The semantics \([[A]]\) of \(A\) is the following \(\text {LTS}\).
Definition 2

\(({\hat{v}}, R, v) \in T\) if \(v \in V(X)\), \(\bigwedge _{{\hat{c}} \in \hat{C}} ({\hat{v}}, v) \models {\hat{c}}\), and \(R = \{ r \mid ({\hat{v}},v) \models _{A} {\hat{c}} \; \text {for some} \; {\hat{c}} \in \hat{C}\; \text {and} \; {\hat{c}} \in \rho (r)\}\);

\((v, R, v') \in T\) if \(v, v' \in V(X)\), \(\bigwedge _{c \in C} (v, v') \models c\), and \(R = \{ r \mid (v,v') \models _{A} c \; \text {for some} \; c \in C\; \text {and} \; c \in \rho (r)\}\).
We say that \(\tau = v_{0} \xrightarrow {R_{1}} v_{1} \xrightarrow {R_{2}} \cdots \xrightarrow {R_{n}} v_{n}\) is an execution of the requirement interface \(A\) if \(v_{0} = {\hat{v}}\) and for all \(0 \le i \le n-1\), \((v_{i}, R_{i+1}, v_{i+1}) \in T\). In addition, we use the following notation: (1) \(v \xrightarrow {R}\) iff \(\exists v' \in V(X) \; \text {s.t.} \; v \xrightarrow {R} v'\); (2) \(v \rightarrow v'\) iff \(\exists R \in L \; \text {s.t.} \; v \xrightarrow {R} v'\); (3) \(v \rightarrow \) iff \(\exists v' \in V(X) \; \text {s.t.} \; v \rightarrow v'\); (4) \(v \not \rightarrow \) otherwise; (5) \(v \xRightarrow {w} v'\) iff \(\exists Y \subseteq X \; \text {s.t.} \; \pi (v')[Y] = w\) and \(v \rightarrow v'\); (6) \(v \xRightarrow {w}\) iff \(\exists v',Y \subseteq X \; \text {s.t.} \; \pi (v')[Y] = w \; \text {and}\; v \rightarrow v'\); (7) \(v \xRightarrow {w_{1} \cdots w_{n}} v_{n}\) iff \(\exists v_{1}, \ldots , v_{n-1}, v_{n}\) s.t. \(v \xRightarrow {w_{1}} v_{1} \xRightarrow {w_{2}} \cdots \xRightarrow {w_{n}} v_{n}\); and (8) \(v \xRightarrow {\sigma }\) iff \(\exists v'\) s.t. \(v \xRightarrow {\sigma } v'\).
We say that a sequence \(\sigma \in V(X_{\text {obs}})^{*}\) is a trace of \(A\) if \({\hat{v}} \xRightarrow {\sigma }\). We denote by \(\mathcal {L}(A)\) the set of all traces of \(A\). Given a trace \(\sigma \) of \(A\), the executions of \(A\) inducing \(\sigma \) are those whose state valuations project to \(\sigma \) on \(X_{\text {obs}}\). Given a state \(v \in V\), let \(\text {succ}(v) = \{v' \mid v \rightarrow v'\}\) be the set of successors of v.
Example 2
2.3 Consistency, refinement, and conjunction
A requirement interface consists of a set of contracts that can be conflicting. Such a conflicting interface does not admit any correct implementation. We say that a requirement interface is consistent if it admits at least one correct implementation.
Definition 3
Let A be a requirement interface, [[A]] its associated \(\text {LTS}\), \(v \in V\) a state, and \({\mathcal {C}} = \hat{C}\) if v is initial, and \(C\) otherwise. We say that v is consistent, denoted by \(\text {cons}(v)\), if for all \(w_{I} \in V(X_{I})\), there exists \(v'\), such that \(w_I = \pi (v')[X_{I}]\), \(\bigwedge _{c \in {\mathcal {C}}} (v,v') \models c\) and \(\text {cons}(v')\). We say that A is consistent if \(\text {cons}({\hat{v}})\).
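For small finite domains, Definition 3 can be approximated by exhaustive search. The following sketch is our own illustrative alternative to the paper's SMT-based procedure (the encoding, the toy interface, and the depth bound are assumptions); it checks that every input valuation can be extended to a contract-satisfying successor, recursively up to a bound:

```python
from itertools import product

def bounded_cons(v, contracts, inputs, states, depth):
    """A state is consistent up to `depth` if every input valuation extends
    to a successor satisfying all contracts, whose target is again consistent."""
    if depth == 0:
        return True
    for w_i in inputs:
        if not any(all(vp[x] == w_i[x] for x in w_i)
                   and all((not phi(v, vp)) or psi(v, vp)
                           for phi, psi in contracts)
                   and bounded_cons(vp, contracts, inputs, states, depth - 1)
                   for vp in states):
            return False
    return True

# Toy interface: input i, output o; the first contract forces o' = i'.
states = [{"i": i, "o": o} for i, o in product([False, True], repeat=2)]
inputs = [{"i": False}, {"i": True}]
ok  = [(lambda v, vp: True, lambda v, vp: vp["o"] == vp["i"])]
bad = ok + [(lambda v, vp: vp["i"], lambda v, vp: not vp["o"])]  # conflicts when i'

v0 = {"i": False, "o": False}
print(bounded_cons(v0, ok, inputs, states, 2))   # True
print(bounded_cons(v0, bad, inputs, states, 2))  # False: i' forces o' = True and o' = False
```

The second interface is inconsistent because, for the input `i' = True`, the two contracts demand contradictory outputs, so no successor exists.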
Example 3
\(A^{beh }\) is consistent, that is, every reachable state accepts every input valuation and generates an output valuation satisfying all contracts. Consider now replacing \(c_{2}\) in \(A^{beh }\) with the contract \(c'_{2} : \lnot \mathsf{enq'} \wedge \mathsf{deq'} \wedge k\ {\ge }\ 0 \;\vdash \; k' = k-1\), which incorrectly models \(r_{2}\) and decreases the counter k upon \(\textsf {deq}\) even when the buffer is empty, setting it to \(-1\). This causes an inconsistency with the contracts \(c_3\) and \(c_5\), which state that if k equals zero, the buffer is empty, and that a dequeue on an empty buffer has no effect on k.
We define the refinement relation between two requirement interfaces \(A^{1}\) and \(A^{2}\), denoted by \(A^{2} \preceq A^{1}\), as trace inclusion.
Definition 4
Let \(A^{1}\) and \(A^{2}\) be two requirement interfaces. We say that \(A^{2}\) refines \(A^{1}\), denoted by \(A^{2} \preceq A^{1}\), if (1) \(A^{1}\) and \(A^{2}\) have the same sets \(X_{I}\), \(X_{O}\), and \(X_{H}\) of variables; and (2) \(\mathcal {L}(A^2) \subseteq \mathcal {L}(A^1)\).
We use a requirement interface to model a view of a system. Multiple views are combined by conjunction. The conjunction of two requirement interfaces is another requirement interface that is either inconsistent due to a conflict between views, or is the greatest lower bound with respect to the refinement relation. The conjunction of \(A^{1}\) and \(A^{2}\), denoted by \(A^{1} \wedge A^{2}\), is defined if the two interfaces share the same sets \(X_{I}\), \(X_{O}\), and \(X_{H}\) of variables.
Definition 5

\(\hat{C}= \hat{C}^{1} \cup \hat{C}^{2}\) and \(C= C^{1} \cup C^{2}\);

\(\mathcal {R}= \mathcal {R}^{1} \cup \mathcal {R}^{2}\); and

\(\rho (r) = \rho ^{1}(r)\) if \(r \in \mathcal {R}^{1}\) and \(\rho (r) = \rho ^{2}(r)\) otherwise.
Remark
For refinement and conjunction, we require the two interfaces to share the same alphabet. This additional condition is used to simplify definitions. It does not restrict the modeling—arbitrary interfaces can have their alphabets equalized without changing their properties by taking the union of the respective input, output, and hidden variables. Contracts in the transformed interfaces do not constrain newly introduced variables. For requirement interfaces \(A^{1}\) and \(A^{2}\), alphabet equalization is defined if \((X_{I}^1 \cup X_{I}^2) \cap (X_{\text {ctr}}^1 \cup X_{\text {ctr}}^2) = (X_{O}^1 \cup X_{O}^2) \cap (X_{H}^1 \cup X_{H}^2) = \emptyset \). Otherwise, \(A^1 \not \preceq A^2\) and vice versa, and \(A^1 \wedge A^2\) is not defined.
Example 4
 \(r_{a}\):

The power consumption equals zero when no enq/deq is requested.
 \(r_{b}\):

The power consumption is bounded to two units otherwise.
We now show some properties of requirement interfaces. The conjunction of two requirement interfaces with the same alphabet is the intersection of their traces.
Theorem 1
Let \(A^{1}\) and \(A^{2}\) be two consistent requirement interfaces defined over the same alphabet. Then, either \(A^1 \wedge A^2\) is inconsistent, or \(\mathcal {L}(A^{1} \wedge A^{2}) = \mathcal {L}(A^{1}) \cap \mathcal {L}(A^2)\).
A proof of the theorem can be found in the appendix.
The conjunction of two requirement interfaces with the same alphabet is either inconsistent, or it is the greatest lower bound with respect to refinement.
Theorem 2
Let \(A^{1}\) and \(A^{2}\) be two consistent requirement interfaces defined over the same alphabet, such that \(A^1 \wedge A^2\) is consistent. Then, \(A^{1} \wedge A^{2} \preceq A^{1}\) and \(A^{1} \wedge A^{2} \preceq A^{2}\), and for all consistent requirement interfaces A, if \(A \preceq A^{1}\) and \(A \preceq A^{2}\), then \(A \preceq A^{1} \wedge A^{2}\).
A proof of the theorem can be found in the appendix.
The following theorem states that the conjunction of an inconsistent requirement interface with any other interface remains inconsistent. This result enables incremental detection of inconsistent specifications.
Theorem 3
Let A be an inconsistent requirement interface. Then, for all consistent requirement interfaces \(A'\) with the same alphabet as A, \(A\wedge A'\) is also inconsistent.
The proof follows directly from the definition of conjunction, which constrains the guarantees of individual interfaces.
3 Consistency, testing, and tracing
In this section, we present our test-case generation and execution framework and instantiate it with bounded model checking techniques. For now, we assume that all variables range over finite domains. This restriction can be lifted by considering richer data domains whose theories admit decidable quantifier elimination, such as linear arithmetic over the reals. Before executing the test-case generation, we can apply a consistency check on the requirement interface, to ensure that the generation starts from an implementable specification.
3.1 Bounded consistency checking
To implement a consistency check in our prototype, we transform it to a satisfiability problem and use the SMT solver Z3 to solve it.
The first step is to construct a symbolic representation of the initial contracts and the transition relation.
The transition relation is then unfolded for each step by renaming the occurrence of each variable, such that it is indexed by the corresponding step. In each step i, the undecorated variables are indexed with \(i1\), while the decorated variables are indexed with i, thus keeping the relation between the valuations of each step. Given a set \(X\) of variables, we denote by \(X^{i}\) the copy of the set, in which every variable is indexed by i.
The conjunction of all instances up to a certain depth is an open formula, leaving all variables free. The consistency check is bounded by a certain depth.
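The indexing scheme of the unfolding can be illustrated on formula strings. This sketch is our own (it is not the paper's Z3 encoding, assumes alphanumeric variable names, and treats formulas as plain text): occurrences of `x'` are renamed to `x_i` and undecorated occurrences of `x` to `x_{i-1}` in step `i`:

```python
import re

def unfold(transition, depth, variables):
    """Return the conjunction of indexed copies of `transition` (a formula
    string over x and primed x') for steps 1..depth, so that consecutive
    steps share the intermediate valuations."""
    steps = []
    for i in range(1, depth + 1):
        f = transition
        for x in variables:
            f = re.sub(rf"\b{x}'", f"{x}_{i}", f)          # primed   -> step i
            f = re.sub(rf"\b{x}\b(?!_)", f"{x}_{i - 1}", f)  # unprimed -> step i-1
        steps.append(f"({f})")
    return " and ".join(steps)

print(unfold("k' == k + 1", 2, ["k"]))
# (k_1 == k_0 + 1) and (k_2 == k_1 + 1)
```

The resulting string happens to be a valid Python expression, so it can be evaluated against candidate step valuations, mimicking the open formula handed to the solver.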
3.2 Testcase generation
A test case is an experiment executed on the \(\textsc {SUT}\) by the tester. We assume that the \(\textsc {SUT}\) is a black box that is only accessed via its observable interface. We assume that it can be modeled as an input-enabled, deterministic^{5} requirement interface. Without loss of generality, we can represent the \(\textsc {SUT}\) as a total sequential function \(\textsc {SUT}: V(X_{I}) \times V(X_{\text {obs}})^{*} \rightarrow V(X_{O})\). A test case \(T_{A}\) for a requirement interface \(A\) over \(X\) takes a history of actual input/output observations \(\sigma \in \mathcal {L}(A)\) and returns either the next input value to be executed or a verdict. Hence, a test case can be represented as a partial function \(T_{A}: \mathcal {L}(A) \rightarrow V(X_{I}) \cup \{ \mathbf{pass }, \mathbf{fail } \}\).
We first consider the problem of generating a test case from \(A\). The testcase generation procedure is driven by a test purpose. Here, a test purpose is a condition specifying the target set of states that a test execution should reach. Hence, it is a formula \(\varPi \) defined over \(X_{\text {obs}}\).
Let \(T_{\sigma _{I}, A}\) be a test case, parameterized by the input sequence \(\sigma _{I}\) and the requirement interface \(A\) from which it was generated. It is a partial function, where \(T_{\sigma _{I},A}(\sigma )\) is defined if \(|\sigma | \le |\sigma _{I}|\) and for all \(0 \le i \le |\sigma |\), \(w_{I}^{i} = \pi (w_\mathrm{obs}^{i})[X_{I}]\), where \(\sigma _{I} = w_{I}^{0} \cdots w_{I}^{n}\) and \(\sigma = w_\mathrm{obs}^{0} \cdots w_\mathrm{obs}^{k}\). Algorithm 2 gives a constructive definition of the test case \(T_{\sigma _{I},A}\). It starts by producing the output monitor for the given input sequence (Line 1). Then, it substitutes all output variables in the monitor by the outputs observed from the \(\textsc {SUT}\) (Lines 2–5). If the monitor is satisfied by the outputs, it returns the verdict pass; otherwise, it returns fail.
Incremental test-case generation So far, we considered test-case generation for a complete requirement interface \(A\), without considering its internal structure. We now describe how test cases can be incrementally generated when the interface \(A\) consists of multiple views,^{7} i.e., \(A= A^{1} \wedge A^{2}\). Let \(\varPi \) be a test purpose for the view modeled with \(A^{1}\). We first check whether \(\varPi \) can be reached in \(A^{1}\), which is a simpler check than doing it on the conjunction \(A^{1} \wedge A^{2}\). If \(\varPi \) can be reached, we fix the input sequence \(\sigma _{I}\) that steers \(A^1\) to \(\varPi \). Instead of creating the test case \(T_{\sigma _{I}, A^{1}}\), we generate \(T_{\sigma _{I}, A^{1} \wedge A^{2}}\), which keeps \(\sigma _I\) as the input sequence, but collects the output guarantees of both \(A^{1}\) and \(A^{2}\). Such a test case steers the \(\textsc {SUT}\) towards the test purpose in the view modeled by \(A^{1}\), but is able to detect possible violations of both \(A^{1}\) and \(A^{2}\).
We note that test-case generation for fully observable interfaces is simpler than the general case, because there is no need for quantifier elimination, due to the absence of hidden variables in the model. A test case from a deterministic interface is even simpler, as it is a direct mapping from the observable trace that reaches the test purpose—there is no need to collect constraints on the output, since a deterministic interface does not leave the implementation any freedom in the choice of output valuations.
Example 5
3.3 Testcase execution
Let \(A\) be a requirement interface, \(\textsc {SUT}\) a system under test with the same set of variables as \(A\), and \(T_{\sigma _{I}, A}\) a test case generated from \(A\). Algorithm 3 defines the test-case execution procedure \(\text {TestExec}\) that takes as input the \(\textsc {SUT}\) and \(T_{\sigma _{I}, A}\) and outputs a verdict \(\mathbf{pass }\) or \(\mathbf{fail }\). \(\text {TestExec}\) gets the next test input in from the given test case \(T_{\sigma _{I},A}\) (Lines 4, 8), stimulates the system under test with this input at every step, and waits for an output out (Line 6). The newly observed inputs/outputs are stored in \(\sigma \) (Line 7), which is given as input to \(T_{\sigma _{I}, A}\). The test case monitors whether the observed output is correct with respect to A. The procedure continues until a \(\mathbf{pass }\) or \(\mathbf{fail }\) verdict is reached (Line 5). Finally, the verdict is returned (Line 10).
Proposition 1
 1.
if \(\textsc {SUT}\preceq A\), then \(\text {TestExec}(\textsc {SUT}, T_{\sigma _{I}, A}) = \mathbf{pass }\);
 2.
if \(\text {TestExec}(\textsc {SUT}, T_{\sigma _{I}, A}) = \mathbf{fail }\), then \(\textsc {SUT}\not \preceq A\).
Proposition 1 immediately holds for test cases generated incrementally from a requirement interface of the form \(A= A^{1} \wedge A^{2}\). In addition, we notice that a test case \(T_{\sigma _{I}, A^{1}}\) generated from a single view \(A^{1}\) of \(A\) does not need to be extended to be useful, and can be used to incrementally show that a \(\textsc {SUT}\) does not conform to its specification. We state the property in the following corollary that follows directly from Proposition 1 and Theorem 2.
Corollary 1
Let \(A= A^{1} \wedge A^{2}\) be an arbitrary requirement interface composed of \(A^{1}\) and \(A^{2}\), \(\textsc {SUT}\) an arbitrary system under test, and \(T_{\sigma _{I}, A^{1}}\) an arbitrary test case generated from \(A^{1}\). If \(\text {TestExec}(\textsc {SUT}, T_{\sigma _{I}, A^{1}}) = \mathbf{fail }\), then \(\textsc {SUT}\not \preceq A^{1} \wedge A^{2}\).
Example 6
Consider as an \(\textsc {SUT}\) the implementation of a 3-place buffer, as illustrated in Algorithm 4. We assume that the power consumption is updated directly in a \(\textsc {pc}\) variable. Although the \(\textsc {SUT}\) correctly implements a 3-place buffer, it is a faulty implementation of a 2-place buffer. In fact, when the \(\textsc {SUT}\) already contains two items, the buffer is still not full, which contradicts requirement \(r_4\) of a 2-place buffer. Executing the tests \(T_{\sigma _{I},beh }\) and \(T_{\sigma _{I},buf }\) from Example 5 will both result in a \(\mathbf{fail }\) test verdict.
3.4 Traceability
Requirement identifiers as first-class elements in requirement interfaces facilitate traceability between informal requirements, views, and test cases. A test case generated from a view \(A^i\) of an interface \(A= A^1 \wedge \cdots \wedge A^n\) is naturally mapped to the set \(\mathcal {R}^i\) of requirements. In addition, requirement identifiers enable tracing violations caught during consistency checking and test-case execution back to the conflicting/violated requirements.
Tracing inconsistent interfaces to conflicting requirements When we detect an inconsistency in a requirement interface \(A\) defining a set of contracts \(C\), we use QuickXPlain, a standard conflict set detection algorithm [36], to compute a minimal set of contracts \(C' \subseteq C\), such that \(C'\) is inconsistent. Once we have computed \(C'\), we use the requirement mapping function \(\rho \) defined in \(A\), to trace back the set \(\mathcal {R}' \subseteq \mathcal {R}\) of conflicting requirements.
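The idea of shrinking an inconsistent contract set to a minimal conflict can be illustrated with a simple linear minimization (QuickXPlain itself is more efficient; this sketch, including the `inconsistent` oracle and all names, is our own assumption-laden stand-in):

```python
def minimal_conflict(contracts, inconsistent):
    """Greedily drop contracts while the remaining set stays inconsistent.
    Returns a minimal (not necessarily minimum) inconsistent subset."""
    core = list(contracts)
    for c in list(core):
        trial = [d for d in core if d is not c]
        if inconsistent(trial):
            core = trial
    return core

# Toy oracle: the set is inconsistent iff it contains both the contract that
# increments k and the one that keeps k unchanged under the same assumption.
def inconsistent(cs):
    return "k' = k + 1" in cs and "k' = k" in cs

print(minimal_conflict(["other", "k' = k + 1", "k' = k"], inconsistent))
# ["k' = k + 1", "k' = k"]
```

Mapping the surviving contracts through \(\rho \) then yields the set of conflicting requirement identifiers, as described above.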
Tracing fail verdicts to violated requirements In fully observable interfaces, every trace induces at most one execution. In that case, a test case resulting in \(\mathbf{fail }\) can be traced to a unique set of violated requirements. This is not the case in general for interfaces with hidden variables. A trace that violates such an interface may induce multiple executions resulting in \(\mathbf{fail }\) with different valuations of hidden variables, and thus different sets of violated requirements. In this case, we report all sets to the user, but ignore internal valuations that would introduce an internal requirement violation before inducing the visible violation.
Example 7
4 Modelbased mutation testing
In this section, we apply a fault-based variant of model-based testing to requirement interfaces. In MBT, test cases are generated according to predefined coverage criteria, producing test suites that, e.g., cover all states in the specification model or, in the case of a contract-based specification, enable all assumptions at least once.
Similarly, we generate a test suite covering a set of faults. The faults are specified via a set of mutation operators that apply specific faults to all applicable parts of the model. When applied to requirement interfaces, we mutate one contract at a time. Then, we check for conformance between the original requirement interface and the mutated one. If the mutated requirement interface can produce controllable variable values that are forbidden by the original, the conformance is violated. In that case, we produce a test case leading exactly to that violation. If that test case is executed on a deterministic \(\textsc {SUT}\) and passes, we can guarantee that the corresponding fault was not implemented. Thus, by generating all tests for all fault models, we can prove the absence of all corresponding faults in the system.
Definition 6
We define a mutation operator \(\mu \) as a function \(\mu : C\rightarrow 2 ^ {C}\) that takes a contract \(c = (\varphi \vdash \psi ) \in C\) and produces a set of mutated contracts \(C^\mu \subseteq C\), in which a specific kind of fault is applied to all applicable parts of \(\psi \). We only consider mutations of the guarantee, as the fault models should simulate situations where the system produces wrong outputs after receiving valid inputs.
We consider the following mutation operators:
1. Off-by-one: mutate every integer constant or variable, both by adding and subtracting 1.
2. Negation: flip every Boolean constant or variable.
3. Change comparison operators: replace equality operators by inequality operators, and vice versa; replace every operator in \(\{<=, <, >, >=\}\) by each of the operators in \(\{<=, <, ==, >, >=\}\).
4. Change and/or: replace every and operator by an or operator, and vice versa.
5. Change implication/bi-implication: replace every implication by a bi-implication, and vice versa.
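Assuming guarantees are represented as small expression trees, the off-by-one, comparison-operator, and and/or operators might be sketched as follows. The tuple encoding and the `mutants` generator are illustrative only, not the tool's implementation, and the negation and implication operators are omitted for brevity:

```python
# expression trees: ("const", n) | ("var", name) | (op, left, right)
CMP_OPS = ("<=", "<", "==", ">", ">=")

def mutants(e):
    """Yield first-order mutants of expression e, applying off-by-one,
    comparison-operator change, and and/or change at every position."""
    kind = e[0]
    if kind == "const":
        yield ("const", e[1] + 1)            # off-by-one
        yield ("const", e[1] - 1)
    elif kind == "var":
        return                               # (negation of Booleans omitted)
    else:
        op, l, r = e
        if op in CMP_OPS:                    # change comparison operator
            for o in CMP_OPS:
                if o != op:
                    yield (o, l, r)
        elif op == "and":                    # change and/or
            yield ("or", l, r)
        elif op == "or":
            yield ("and", l, r)
        for ml in mutants(l):                # recurse into subexpressions
            yield (op, ml, r)
        for mr in mutants(r):
            yield (op, l, mr)

# guarantee of an enqueue contract: k' == k + 1
guarantee = ("==", ("var", "k_next"), ("+", ("var", "k"), ("const", 1)))
```

For this guarantee the generator yields six first-order mutants: four comparison-operator changes and the two off-by-one variants of the constant.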
Definition 7
A mutant \(c_m = (\varphi \vdash \psi _m) \in C^\mu \) is an intentionally altered (mutated) version of the contract \(c = (\varphi \vdash \psi )\). A mutant is called a first-order mutant if it contains only one fault. This paper considers only first-order mutants.
If a mutation does not introduce new behavior to the requirement interface, it is called an equivalent mutation. If it leads to an inconsistency, it is called an unproductive mutation.
Given the contract \((\varphi \vdash \psi ) \in C\), we denote by \({\bar{C}}\) the set of the other contracts in the requirement interface, i.e., \({\bar{C}} = C\setminus \{(\varphi \vdash \psi )\}\), by \(C^\mu \) the set of mutants obtained by applying all mutation operators to \((\varphi \vdash \psi )\), and by \(c_m=(\varphi \vdash \psi _m)\) a single mutant in \(C^\mu \). The test purpose for \(c_m\) requires reaching a valuation pair \((v, v')\) such that:
- \(v\) is reachable from \({\hat{v}}\),
- \((v,v') \models \varphi \),
- \((v,v') \models ({\bar{\varphi }} \vdash {\bar{\psi }})\) for all \(({\bar{\varphi }} \vdash {\bar{\psi }}) \in {\bar{C}}\), and
- \((v,v') \models (\varphi \vdash \psi _m) \wedge \lnot (\varphi \vdash \psi )\).
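The last three conditions can be checked for a single candidate step as in the following sketch; reachability of \(v\) (the first condition) is assumed rather than computed, and the encoding of contracts as Python predicates is purely illustrative:

```python
def kills(v, v2, assume, guar, guar_m, other_contracts):
    """Check, for one candidate step (v, v2), whether the step satisfies
    all other contracts, enables the mutated contract's assumption,
    satisfies the mutated guarantee, and violates the original one."""
    if not all((not a(v, v2)) or g(v, v2) for a, g in other_contracts):
        return False
    return assume(v, v2) and guar_m(v, v2) and not guar(v, v2)

# illustrative buffer contract: enq and k < 150  |-  k' = k + 1
assume = lambda v, v2: v["enq"] and v["k"] < 150
guar   = lambda v, v2: v2["k"] == v["k"] + 1
guar_m = lambda v, v2: v2["k"] == v["k"]          # off-by-one mutant
others = [(lambda v, v2: True, lambda v, v2: 0 <= v2["k"] <= 150)]
```

An enqueue step from k = 0 that leaves k unchanged kills this mutant, since it is allowed by the mutated guarantee but forbidden by the original one.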
Remark 1
In this paper, we consider weak mutation testing [35]. This means that wrong behavior of internal variables is already considered a conformance violation. In contrast, strong mutation testing also requires that an internal fault propagates to an observable failure. The encoding of the reachability of the mutation as a test purpose, without altering the step relation, is only possible for weak mutation testing. Strong mutation testing would require the step relation to use the mutated contract in all steps, and then detect the failure in the last step. Since we consider weak mutation testing, we also weaken the definition of a test purpose compared with the one in Sect. 3, allowing it to refer to internal variables.
Contrary to the previously defined test purposes, the test purposes in model-based mutation testing lead to negative counterexamples, that is, counterexamples steering toward an incorrect state. However, as in Sect. 3, we only extract the input vector \(\sigma _i\), which is then combined with the correct requirement interface to form a positive test case.
Often, different mutations of a contract generate different negative counterexamples, yet these tests combine into the same positive test case. However, if the different mutations require different inputs to be enabled, they also produce different positive test cases.
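This deduplication step can be sketched as follows, assuming negative counterexamples are given as lists of steps with an input projection (the trace encoding with `"in"`/`"out"` keys is hypothetical):

```python
def unique_positive_tests(counterexamples):
    """Project each negative counterexample onto its input vector and
    keep only one positive test case per distinct input vector."""
    seen, tests = set(), []
    for trace in counterexamples:
        inputs = tuple(step["in"] for step in trace)
        if inputs not in seen:
            seen.add(inputs)
            tests.append(inputs)
    return tests

# two mutants producing different wrong outputs under the same inputs,
# plus one mutant that needs a different input to be enabled
cex1 = [{"in": "enq", "out": 1}, {"in": "enq", "out": 3}]
cex2 = [{"in": "enq", "out": 1}, {"in": "enq", "out": 0}]
cex3 = [{"in": "deq", "out": -1}]
```

Here cex1 and cex2 collapse into one positive test case, while cex3 yields a second one.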
Example 8
5 Implementation and experimental results
In this section, we present a prototype that implements the test-case generation framework introduced in Sects. 3 and 4. The prototype was added to the model-based testing tool family MoMuT and goes by the name MoMuT::REQs. The implementation uses the programming language Scala 2.10 and Microsoft's SMT solver Z3 [40]. The tool implements both the monolithic and the incremental approach to test-case generation. All experiments were run on a MacBook Pro with a 2.53 GHz Intel Core 2 Duo processor and 4 GB RAM. We demonstrate the tool on the buffer example and two industrial case studies.
5.1 Demonstrating example
To experiment with our algorithms, we model three variants of the buffer behavioral interface. All three variants model buffers of size 150, with different internal structures: Buffer 1 is a simple buffer with a single counter variable k; Buffer 2 is composed of two internal buffers of size 75 each; and Buffer 3 is composed of three internal buffers of size 50 each. We also remodel a variant of the power consumption interface that creates a dependency between the power used and the state of the internal buffers (idle/used).
All versions of the behavioral interface can be combined with the power consumption viewpoint, either using the incremental approach or computing the conjunction before generating test cases from the monolithic specification.
Incremental consistency checking To evaluate the consistency check, we inject three faults into the behavioral and power consumption models of the buffer: Fault 1 makes deq decrease k when the buffer is empty; Fault 2 mutates an assumption, resulting in conflicting requirements for power consumption upon enq; and Fault 3 makes enq increase k when the buffer is full. The fault injection results in nine single-faulty variants of the interfaces.
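For Buffer 1 and Fault 1, the effect of such a fault on consistency can be illustrated with a brute-force finite-domain check (the tool uses bounded SMT solving instead; the contract encoding below is a simplified sketch that excludes simultaneous enq and deq):

```python
ENQ, DEQ, IDLE = (1, 0), (0, 1), (0, 0)

# behavioral contracts of a size-150 buffer as (assumption, guarantee)
# pairs over (k, input, k_next); the last contract keeps k in range
base = [
    (lambda k, i: i == ENQ and k < 150, lambda k, i, kn: kn == k + 1),
    (lambda k, i: i == DEQ and k > 0,   lambda k, i, kn: kn == k - 1),
    (lambda k, i: i == IDLE,            lambda k, i, kn: kn == k),
    (lambda k, i: True,                 lambda k, i, kn: 0 <= kn <= 150),
]

# Fault 1: deq also decreases k when the buffer is empty
fault1 = (lambda k, i: i == DEQ and k == 0, lambda k, i, kn: kn == k - 1)

def consistent(contracts):
    """Brute-force check: for every state k and input, some successor
    value k' must satisfy every enabled guarantee; returns a witness
    (k, input) of inconsistency, or None if consistent."""
    for k in range(151):
        for i in (ENQ, DEQ, IDLE):
            if not any(all(g(k, i, kn) for a, g in contracts if a(k, i))
                       for kn in range(-1, 152)):
                return (k, i)
    return None
```

The correct model is consistent, while adding the faulty contract makes the empty state with a deq input inconsistent: its guarantee demands k' = -1, which the range contract forbids.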
Runtime in seconds for checking consistency of single and conjuncted interfaces:

              Fault 1 (behavior)       Fault 2 (power)
              Single    Monolithic     Single    Monolithic
  Buffer 1    0.7       3.6            1.0       7.3
  Buffer 2    5.3       13.4           1.0       26.7
  Buffer 3    7.2       13.8           1.0       13.0
The bounded consistency check is very sensitive to the search depth. Setting the bound to 5 increases the runtime from seconds to minutes. This is not surprising, since a search of depth n involves simplifying formulas with alternating quantifiers of depth n, which is a very hard problem for SMT solvers.
Runtime in seconds for incremental and monolithic test-case generation:

              # Contracts   # Variables   \(t_{\mathrm{inc}}\)   \(t_{\mathrm{mon}}\)   Speedup
  Buffer 1    6             6             10.0     16.8     1.68
  Buffer 2    15            12            36.7     48.8     1.33
  Buffer 3    20            15            69.0     115.6    1.68
Results for model-based mutation testing on depth 150:

              # Mutants (equiv.)   # Unique tests   \(t_{\textit{mbmt}}\) [min]
  Buffer 1    44 (0)               29               4.7
  Buffer 2    138 (20)             52               44.0
  Buffer 3    240 (46)             115              128.1
5.2 Safing engine
As a first industrial application, we present an automotive use case supplied by the European ARTEMIS project MBAT, which partially motivated our work on requirement interfaces. The use case was initiated by our industrial partner Infineon and revolves around building a formal model for analysis and test-case generation for the safing engine of an airbag chip. The requirements document, developed by a customer of Infineon, is written in natural (English) language. We identified 39 requirements that represent the core of the system's functionality and iteratively formalized them in collaboration with the designers at Infineon. The resulting formal requirement interface is deterministic and consists of 36 contracts.
The formalization process revealed several underspecifications in the informal requirements that caused ambiguities; these were resolved in collaboration with the designers. The consistency check revealed two inconsistencies between the requirements. Tracing the conflicts back to the informal requirements allowed fixing them in the customer requirements document. Consider, for example, the following three requirements:
 \(\textsf {r}_1\): There shall be seven operating states for the safing engine: RESET state, INITIAL state, DIAGNOSTIC state, TEST state, NORMAL state, SAFE state, and DESTRUCTION state.
 \(\textsf {r}_2\): The safing engine shall change per default from RESET state to INIT state.
 \(\textsf {r}_3\): On a reset signal, the safing engine shall enter RESET state and stay there while the reset signal is active.
Model-based mutation testing In addition, we applied two iterations of the model-based mutation testing approach, setting the bound k to 6. In the first iteration, we generated 362 mutants by applying all mutation operators. We generated 165 negative tests; the remaining 197 mutants were k-equivalent. From the 165 negative tests, we extracted 28 unique positive test cases.
The mutation score achieved by these 28 test cases on the 60 faulty SIMULINK models was surprisingly low: only \(49.2\%\). A closer investigation of the requirement interface shows that many of the contracts act globally, without being bound to a specific state of the state machine. For mutants of these contracts, our approach generates only one test case, even though the mutants introduce multiple faults, in several different states. Due to the decomposed structure, even though we insert only one fault, our mutants are no longer classic first-order mutants.
There are two ways to deal with this problem. The first is to generate multiple test cases per mutant, covering all possible faulty states. We have applied this technique previously, in a different context [3]. However, it may become very expensive, and it is impossible for systems with an infinite state space. The second is to split the global contracts into several fine-grained, state-specific contracts, so that each mutant again corresponds to a single fault.
This shows that the quality of model-based mutation testing for requirement interfaces depends heavily on the modeling style. However, while fine-grained contracts might slightly decrease the clarity of the requirement interfaces, they in turn increase traceability and facilitate fault detection.
5.3 Automated speed limiter
The second industrial case study is a use case provided by the industrial partner Volvo in the ARTEMIS project CRYSTAL. It revolves around an automated speed limiter (ASL), which adapts the current speed according to a desired speed limit. It contains an internal state machine with three states: OFF, LIMITING, and OVERRIDDEN. Upon activation, the limiter either takes the current speed as the limit, or a predefined value. The limit can then be increased and decreased manually, and a kick-down of the gas pedal overrides the speed limiter for some time threshold. Adjusting the speed, or setting it to the predefined value, ends the overridden mode. Finally, the speed limiter can be turned off again, from both the overridden and the active mode.
To evaluate the quality of the test cases, we implemented a Java version of the ASL and used the Major mutation framework [37] to generate a set of 64 faulty implementations, using all mutation operators supported by Major. By executing our generated tests on these faulty implementations, we could perform a classic mutation analysis: our test suite detected 48 of the faults. Further investigation of the undetected faults revealed that 13 of the remaining Java mutants were equivalent and could thus not be detected by any test case. Another two of the faults were introduced in the conditions of if statements; these conditions correspond to the assumptions of our interfaces, which we did not mutate during test-case generation.
The last remaining fault was introduced in the timing behavior of the Java implementation, which was simulated via a tick method indicating the passage of one second. In the requirement interface, time was modeled via a nondeterministically increasing variable. The fault caused the implementation to trigger the state change after 9 s instead of 10 s. The test driver was not sensitive enough to detect this, as the test case only specified the behavior after 10 s and did not specify what the correct behavior after 9 s would be.
6 Related work
Synchronous languages were introduced in the 1980s, driven mostly by research in France, where the three most well-known synchronous languages were developed: Lustre [22], Signal [27], and Esterel [17]. In 1991, the IEEE devoted a special issue to synchronous systems, featuring, e.g., a paper by Benveniste and Berry [11] discussing the major issues and approaches in the synchronous specification of real-time systems. A decade later, Benveniste et al. [14] gave an overview of the development of synchronous languages during that decade, noting in particular the rising tool support and the industrial acceptance. They also mention globally asynchronous, locally synchronous systems [47] as an upcoming trend.
The main inspiration for this work was the introduction of the conjunction operation and the investigation of its properties [25] in the context of synchronous interface theories [23]. While the mathematical properties of the conjunction in different interface theories were further studied in [12, 30, 43], we are not aware of any similar work related to MBT.
Synchronous dataflow modeling [15] has been an active area of research in the past. The most important synchronous dataflow programming languages are Lustre [22] and SIGNAL [27]. These are implementation languages, while requirement interfaces enable specifying high-level properties of such programs. Testing of Lustre-like programs was studied by Raymond et al. [42] and Papailiopoulou [41]. The specification language SCADE [16] supports graphical representation of synchronous systems; internally, SCADE models are stored in a textual representation very similar to Lustre. Wakankar et al. [49] presented experimental results for test-case generation, in which they manually translated SCADE models to SAL models and used SAL-ATG for the test-case generation.
The tool STIMULUS [34] allows the synchronous specification and debugging of real-time systems via predefined sentence templates, which can be simulated via a constraint solver. As in our approach, this enables tight traceability between the natural language requirements, the formalized requirements, and the outcome of the verification tasks. The authors combine a user-friendly specification language with formal verification via BDDs and evaluate the tool on an automotive case study.
Compositional properties of specifications in the context of testing have been studied before [7, 18, 24, 38, 44]. None of these works considers synchronous dataflow specifications, and the compositional properties are investigated with respect to the parallel composition and hiding operations, but not conjunction. A different notion of conjunction is introduced for test-case generation with SAL [28]: the authors encode test purposes as trap variables and conjoin them to drive the test-case generation process toward reaching all test purposes with a single test case. Consistency checking of contracts has been studied in [26], yet for a weaker notion of consistency.
Our constraint-based specifications share similarities with the Z specification language [45], which also follows a multiple-viewpoint approach, structuring a specification into pieces called schemas. However, a Z schema defines the dynamics of a system in terms of operations; in contrast, our requirement interfaces follow the style of synchronous languages.
Brillout et al. [20] performed mutation-based test-case generation on SIMULINK models, implemented in the tool COVER on top of the model checker CBMC. He et al. [29] exploit similarity measures on mutants of SIMULINK models to decrease the cost of mutation-based test-case generation. Their experiments show the advantages of model-based mutation testing compared with random testing and with simpler mutation-based testing approaches.
Several tools exist for test-case generation for synchronous systems. The tool Lutess [19] is based on Lustre: it takes a specification of the environment (in Lustre), a test sequence generator, and an oracle, and performs online testing of the system under test according to the environment and the traces selected by the generator in several different modes. Another Lustre-based tool is Lurette [42]; it performs only random testing, but is able to validate systems with numerical inputs and outputs. A third Lustre-based testing tool is GATeL [39], which generates tests according to test purposes, using constraint logic programming to search for suitable traces.
The tool AutoFocus [32] facilitates test-case generation from time-synchronous communicating extended finite-state machines that form a distributed system. It is based on constraint logic programming and supports functional, structural, and stochastic test specifications.
The application of the test-case generation and consistency-checking tool for requirement interfaces, and its integration into a set of software engineering tools, was presented in [4]. That work focuses on the requirement-driven testing methodology, workflow, and tool integration, and gives no technical details about requirement interfaces. In contrast, this paper provides a sound mathematical theory for requirement interfaces and their associated incremental test-case generation, consistency checking, and tracing procedures.
Model-based mutation testing was initially used for predicate-calculus specifications [21] and later applied to formal Z specifications [46]. Ammann et al. [8] used temporal formulae to check equivalence between models and mutants, converting counterexamples to test cases in case of non-equivalence. Belli et al. [9, 10] applied model-based mutation testing to event sequence graphs and pushdown automata. Hierons and Merayo [31] applied mutation-based test-case generation to probabilistic finite-state machines; their work presents mutation operators and describes how to create input sequences that kill a given mutated state machine.
Model-based mutation testing has also been applied to UML models [1, 3], action systems [2], and timed automata [6].
7 Conclusions and future work
We presented a framework for requirement-driven modeling and testing of complex systems that naturally enables the multiple-view incremental modeling of synchronous dataflow systems. The formalism enables conformance testing of complex systems against their requirements, and combining partial models via conjunction.
We also adapted the model-based mutation testing technique to requirement interfaces and evaluated its applicability in two industrial case studies.
Our requirement-driven framework opens many future directions. We will extend our procedure to allow the generation of adaptive test cases. In the context of model-based mutation testing, we will investigate strong mutation testing and the mutation of assumptions. We will also investigate other compositional operations in the context of testing synchronous systems, such as parallel composition and quotient. We intend to add timed semantics to requirement interfaces, for a more thorough timing analysis. Finally, we will consider additional coverage criteria and test purposes, and will use our implementation to generate test cases for safety-critical domains, including automotive, avionics, and railway applications.
Footnotes
1. We adopt \(\textsc {SUT}\)-centric conventions for naming the roles of variables.
2. We sometimes use the direct notation \((v,w'_{I}) \models _{A} c\) and \((v,w'_{\text {ctr}}) \models _{G} c\), where \(w_{I} \in V(X_{I})\) and \(w_{\text {ctr}} \in V(X_{\text {ctr}})\).
3. Here, we write \(X_{O}\)/\(X_{\text {ctr}}\) to denote the output/controllable variables involved in that contract.
4. We use the notation \(r_{1,2,3}\) to denote the set \(\{r_{1}, r_{2}, r_{3}\}\).
5. The restriction to deterministic implementations is for presentation purposes only; the technique is general and can also be applied to nondeterministic systems.
6. The formula \(\omega _{\sigma _{I},A}\) can be seen as a monitor for \(A\) under input \(\sigma _{I}\).
7. We consider two views for the sake of simplicity.
8. We assume that the trace does not violate initial contracts, to simplify the presentation. The extension to the general case is straightforward.
Acknowledgements
Open access funding provided by Graz University of Technology.
References
1. Aichernig, B.K., Brandl, H., Jöbstl, E., Krenn, W.: Efficient mutation killers in action. In: Fourth IEEE International Conference on Software Testing, Verification and Validation, ICST 2011, Berlin, Germany, March 21–25, pp. 120–129. IEEE Computer Society (2011). doi: 10.1109/ICST.2011.57
2. Aichernig, B.K., Brandl, H., Jöbstl, E., Krenn, W.: UML in action: a two-layered interpretation for testing. ACM SIGSOFT Softw. Eng. Notes 36(1), 1–8 (2011). doi: 10.1145/1921532.1921559
3. Aichernig, B.K., Brandl, H., Jöbstl, E., Krenn, W., Schlick, R., Tiran, S.: Killing strategies for model-based mutation testing. Softw. Test. Verif. Reliab. 25(8), 716–748 (2015). doi: 10.1002/stvr.1522
4. Aichernig, B.K., Hörmaier, K., Lorber, F., Nickovic, D., Schlick, R., Simoneau, D., Tiran, S.: Integration of requirements engineering and test-case generation via OSLC. In: 2014 14th International Conference on Quality Software, Allen, TX, October 2–3, pp. 117–126. IEEE (2014). doi: 10.1109/QSIC.2014.13
5. Aichernig, B.K., Hörmaier, K., Lorber, F., Nickovic, D., Tiran, S.: Require, test and trace IT. In: Formal Methods for Industrial Critical Systems, 20th International Workshop, FMICS 2015, Oslo, Norway, June 22–23, 2015, Proceedings, pp. 113–127 (2015). doi: 10.1007/9783319194585_8
6. Aichernig, B.K., Lorber, F., Nickovic, D.: Time for mutants—model-based mutation testing with timed automata. In: Veanes, M., Viganò, L. (eds.) Tests and Proofs, 7th International Conference, TAP 2013, Budapest, Hungary, June 16–20, 2013, Proceedings, Lecture Notes in Computer Science, vol. 7942, pp. 20–38. Springer, New York (2013). doi: 10.1007/9783642389160_2
7. Aiguier, M., Boulanger, F., Kanso, B.: A formal abstract framework for modelling and testing complex software systems. Theor. Comput. Sci. 455, 66–97 (2012)
8. Ammann, P.E., Black, P.E., Majurski, W.: Using model checking to generate tests from specifications. In: Proceedings of the Second IEEE International Conference on Formal Engineering Methods (ICFEM 98), pp. 46–54. IEEE Computer Society (1998)
9. Belli, F., Beyazit, M., Takagi, T., Furukawa, Z.: Model-based mutation testing using pushdown automata. IEICE Trans. 95-D(9), 2211–2218 (2012). http://search.ieice.org/bin/summary.php?id=e95d_9_2211
10. Belli, F., Budnik, C.J., Wong, W.E.: Basic operations for generating behavioral mutants. In: MUTATION, pp. 9–. IEEE Computer Society (2006). doi: 10.1109/MUTATION.2006.2
11. Benveniste, A., Berry, G.: The synchronous approach to reactive and real-time systems. In: De Micheli, G., Ernst, R., Wolf, W. (eds.) Readings in Hardware/Software Co-design, pp. 147–159. Kluwer Academic Publishers, Norwell (2002). http://dl.acm.org/citation.cfm?id=567003.567015
12. Benveniste, A., Caillaud, B., Ferrari, A., Mangeruca, L., Passerone, R., Sofronis, C.: Multiple viewpoint contract-based specification and design. In: de Boer, F.S., Bonsangue, M.M., Graf, S., de Roever, W.P. (eds.) Formal Methods for Components and Objects, 6th International Symposium, FMCO 2007, Amsterdam, The Netherlands, October 24–26, 2007, Revised Lectures, Lecture Notes in Computer Science, vol. 5382, pp. 200–225. Springer (2007). doi: 10.1007/9783540921882_9
13. Benveniste, A., Caillaud, B., Ničković, D., Passerone, R., Raclet, J.B., Reinkemeier, P., Sangiovanni-Vincentelli, A., Damm, W., Henzinger, T., Larsen, K.G.: Contracts for system design. Rapport de recherche RR-8147, INRIA (2012)
14. Benveniste, A., Caspi, P., Edwards, S.A., Halbwachs, N., Le Guernic, P., De Simone, R.: The synchronous languages 12 years later. Proc. IEEE 91(1), 64–83 (2003). doi: 10.1109/JPROC.2002.805826
15. Benveniste, A., Caspi, P., Guernic, P.L., Halbwachs, N.: Data-flow synchronous languages. In: REX School/Symposium, LNCS, vol. 803, pp. 1–45. Springer, New York (1993)
16. Berry, G.: SCADE: synchronous design and validation of embedded control software. In: Ramesh, S., Sampath, P. (eds.) Next Generation Design and Verification Methodologies for Distributed Embedded Control Systems: Proceedings of the GM R&D Workshop, Bangalore, India, January 2007, pp. 19–33. Springer, The Netherlands (2007). doi: 10.1007/9781402062544_2
17. Berry, G., Gonthier, G.: The Esterel synchronous programming language: design, semantics, implementation. Sci. Comput. Program. 19(2), 87–152 (1992)
18. van der Bijl, M., Rensink, A., Tretmans, J.: Compositional testing with ioco. In: Petrenko, A., Ulrich, A. (eds.) Formal Approaches to Software Testing, Third International Workshop, FATES 2003, Montreal, Quebec, Canada, October 6, 2003, Lecture Notes in Computer Science, vol. 2931, pp. 86–100. Springer, New York (2003). doi: 10.1007/9783540246176_7
19. du Bousquet, L., Ouabdesselam, F., Richier, J.L., Zuanon, N.: Lutess: a specification-driven testing environment for synchronous software. In: Proceedings of the 21st International Conference on Software Engineering, ICSE '99, pp. 267–276. ACM, New York (1999). doi: 10.1145/302405.302634
20. Brillout, A., He, N., Mazzucchi, M., Kröning, D., Purandare, M., Rümmer, P., Weissenbacher, G.: Mutation-based test case generation for Simulink models. In: Revised Selected Papers of the 8th International Symposium on Formal Methods for Components and Objects (FMCO 2009), Lecture Notes in Computer Science, vol. 6286, pp. 208–227. Springer, New York (2010)
21. Budd, T.A., Gopal, A.S.: Program testing by specification mutation. Comput. Lang. 10(1), 63–73 (1985). doi: 10.1016/00960551(85)900116
22. Caspi, P., Pilaud, D., Halbwachs, N., Plaice, J.: Lustre: a declarative language for programming synchronous systems. In: Conference Record of the Fourteenth Annual ACM Symposium on Principles of Programming Languages, Munich, Germany, January 21–23, 1987, pp. 178–188. ACM Press (1987). doi: 10.1145/41625.41641
23. Chakrabarti, A., de Alfaro, L., Henzinger, T.A., Mang, F.Y.C.: Synchronous and bidirectional component interfaces. In: Brinksma, E., Larsen, K.G. (eds.) Computer Aided Verification, 14th International Conference, CAV 2002, Copenhagen, Denmark, July 27–31, 2002, Proceedings, Lecture Notes in Computer Science, vol. 2404, pp. 414–427. Springer, New York (2002). doi: 10.1007/3540456570_34
24. Daca, P., Henzinger, T.A., Krenn, W., Ničković, D.: Compositional specifications for ioco testing: technical report. Technical report, IST Austria (2014). http://repository.ist.ac.at/152/
25. Doyen, L., Henzinger, T.A., Jobstmann, B., Petrov, T.: Interface theories with component reuse. In: de Alfaro, L., Palsberg, J. (eds.) Proceedings of the 8th ACM & IEEE International Conference on Embedded Software, EMSOFT 2008, Atlanta, October 19–24, pp. 79–88. ACM (2008). doi: 10.1145/1450058.1450070
26. Ellen, C., Sieverding, S., Hungar, H.: Detecting consistencies and inconsistencies of pattern-based functional requirements. In: Lang, F., Flammini, F. (eds.) Formal Methods for Industrial Critical Systems, 19th International Conference, FMICS 2014, Florence, September 11–12, 2014, Proceedings, Lecture Notes in Computer Science, vol. 8718, pp. 155–169. Springer, New York (2014). doi: 10.1007/9783319107028_11
27. Gautier, T., Guernic, P.L.: SIGNAL: a declarative language for synchronous programming of real-time systems. In: Kahn, G. (ed.) Functional Programming Languages and Computer Architecture, Portland, Oregon, USA, September 14–16, 1987, Proceedings, Lecture Notes in Computer Science, vol. 274, pp. 257–277. Springer, New York (1987). doi: 10.1007/3540183175_15
28. Hamon, G., de Moura, L., Rushby, J.: Automated test generation with SAL. CSL Technical Note (2005)
29. He, N., Rümmer, P., Kröning, D.: Test-case generation for embedded Simulink via formal concept analysis. In: Proceedings of the 48th Design Automation Conference, DAC '11, pp. 224–229. ACM, New York (2011). doi: 10.1145/2024724.2024777
30. Henzinger, T.A., Ničković, D.: Independent implementability of viewpoints. In: Monterey Workshop, LNCS, vol. 7539, pp. 380–395. Springer, New York (2012)
31. Hierons, R., Merayo, M.: Mutation testing from probabilistic finite state machines. In: Testing: Academic and Industrial Conference Practice and Research Techniques, MUTATION 2007 (TAICPART-MUTATION 2007), pp. 141–150 (2007). doi: 10.1109/TAIC.PART.2007.20
32. Huber, F., Schätz, B., Schmidt, A., Spies, K.: AutoFocus: a tool for distributed systems specification. In: Proceedings of the 4th International Symposium on Formal Techniques in Real-Time and Fault-Tolerant Systems (FTRTFT 1996), Lecture Notes in Computer Science, vol. 1135, pp. 467–470. Springer, New York (1996)
33. ISO: ISO/DIS 26262-1: Road vehicles—Functional safety—Part 1: Glossary. Tech. rep., International Organization for Standardization/Technical Committee 22 (ISO/TC 22), Geneva, Switzerland (2009)
34. Jeannet, B., Gaucher, F.: Debugging embedded systems requirements with STIMULUS: an automotive case-study. In: 8th European Congress on Embedded Real Time Software and Systems (ERTS 2016) (2016)
35. Jia, Y., Harman, M.: An analysis and survey of the development of mutation testing. IEEE Trans. Softw. Eng. 37(5), 649–678 (2011). doi: 10.1109/TSE.2010.62
36. Junker, U.: QuickXPlain: preferred explanations and relaxations for over-constrained problems. In: AAAI, pp. 167–172. AAAI Press, California (2004)
37. Just, R., Schweiggert, F., Kapfhammer, G.M.: MAJOR: an efficient and extensible tool for mutation analysis in a Java compiler. In: Proceedings of the International Conference on Automated Software Engineering (ASE), pp. 612–615 (2011)
38. Krichen, M., Tripakis, S.: Conformance testing for real-time systems. Formal Methods Syst. Des. 34(3), 238–304 (2009)
39. Marre, B., Arnould, A.: Test sequences generation from LUSTRE descriptions: GATeL. In: Automated Software Engineering, 2000, Proceedings ASE 2000, The Fifteenth IEEE International Conference on, pp. 229–237 (2000). doi: 10.1109/ASE.2000.873667
40. de Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Tools and Algorithms for the Construction and Analysis of Systems, LNCS, vol. 4963, pp. 337–340. Springer, New York (2008). doi: 10.1007/9783540788003_24
41. Papailiopoulou, V.: Automatic test generation for Lustre/SCADE programs. In: ASE, pp. 517–520. IEEE Computer Society, Washington, DC (2008). doi: 10.1109/ASE.2008.96
42. Raymond, P., Nicollin, X., Halbwachs, N., Weber, D.: Automatic testing of reactive systems. In: RTSS, pp. 200–209. IEEE Computer Society (1998)
43. Reineke, J., Tripakis, S.: Basic problems in multi-view modeling. Tech. Rep. UCB/EECS-2014-4, EECS Department, University of California, Berkeley (2014)
44. Sampaio, A., Nogueira, S., Mota, A.: Compositional verification of input-output conformance via CSP refinement checking. In: Breitman, K., Cavalcanti, A. (eds.) Formal Methods and Software Engineering, 11th International Conference on Formal Engineering Methods, ICFEM 2009, Rio de Janeiro, Brazil, December 9–12, 2009, Proceedings, Lecture Notes in Computer Science, vol. 5885, pp. 20–48. Springer, New York (2009). doi: 10.1007/9783642103735_2
45. Spivey, J.M.: The Z Notation: a reference manual, 2nd edn. Prentice Hall International Series in Computer Science. Prentice Hall (1992)
46. Stocks, P.A.: Applying formal methods to software testing. Ph.D. thesis, Department of Computer Science, University of Queensland (1993)
47. Teehan, P., Greenstreet, M., Lemieux, G.: A survey and taxonomy of GALS design styles. IEEE Des. Test Comput. 24(5), 418–428 (2007). doi: 10.1109/MDT.2007.151
48. Tretmans, J.: Test generation with inputs, outputs and repetitive quiescence. Softw. Concepts Tools 17(3), 103–120 (1996)
49. Wakankar, A., Bhattacharjee, A.K., Dhodapkar, S.D., Pandya, P.K., Arya, K.: Automatic test case generation in model based software design to achieve higher reliability. In: Reliability, Safety and Hazard (ICRESH), 2010 2nd International Conference on, pp. 493–499 (2010). doi: 10.1109/ICRESH.2010.5779600
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.