A novel model-based testing approach for software product lines

Abstract

Model-based testing relies on a model of the system under test. FineFit is a framework for model-based testing of Java programs. In the FineFit approach, the model is expressed by a set of tables based on Parnas tables. A software product line is a family of programs (the products) with well-defined commonalities and variabilities that are developed by (re)using common artifacts. In this paper, we address the issue of using the FineFit approach to support the development of correct software product lines. We specify a software product line as a specification product line where each product is a FineFit specification of the corresponding software product. The main challenge is to concisely specify the software product line while retaining the readability of the specification of a single system. To address this, we used delta-oriented programming, a recently proposed flexible approach for implementing software product lines, and developed: (1) delta tables as a means to apply the delta-oriented programming idea to the specification of software product lines; and (2) DeltaFineFit as a novel model-based testing approach for software product lines.



Notes

  1. A straightforward embedding of FOP into DOP is illustrated, e.g., in [11].

  2. In practice, the tables can be written using any tool that can export its output to HTML, for example MS Word. FineFit ignores anything that is not part of an HTML table.

  3. Alloy is a general-purpose modeling language (in the style of Z) for reasoning about relational structures with first-order logic. It has no direct concept of system states or operations, and it does not offer any tool for testing software.

  4. Note that, by definition, a set does not contain duplicate elements.

  5. In the original paper about FineFit [3], we used the term fixture, which was borrowed from Fit [23], instead of driver. We think that driver is a more appropriate term.

  6. A feature model defines the valid feature configurations of an SPL, i.e., the feature configurations that describe the products (see, e.g., [24]).

  7. This mechanism is similar to the Super(...) call of FOP [10] and to the around advice and proceed mechanisms of aspect-oriented programming (AOP); see, e.g., [7, 25] for a comparison between DOP and AOP.

  8. In DeltaJ 1.5, each constraint “\(S_{i,1}\) when \({\texttt {P}}_{i,1},\) \(\ldots ,\) \(S_{i,q_i}\) when \({\texttt {P}}_{i,q_i}\);” is called a “partition,” since the set of sets of delta module names \(\{S_{i,j} \;\vert \;\) \(1\le j\le q_i\}\) is a partition of \(\cup _{1\le j\le q_i} S_{i,j}\).

  9. Thus ruling out ambiguity.

  10. Note that tables satisfying this case do not satisfy case 1.

  11. The bottom line of the cells is removed to indicate that \(c_i\) represents a subtree rather than the content of a cell.

  12. Unlike the \( apply \) function, the \( prepare \) function is aware of the row alignment of cells in a table and provides special treatment for different kinds of cells.

  13. Recall that the row span of the name cell defines that the first three rows of the example are condition rows and that the other rows contain variable and value cells.

  14. This includes variables checked by the conditional operator.

  15. Rows with conditional remove operators are also moved to the right place by this rule, i.e., to where the content of a cell matches the condition.

  16. The first row of enumeration tables, which consists of an atom name, was counted.

  17. We do not count the leftmost column, which contains the state variables.

  18. In fact, this is a forest. However, it can be transformed into a tree by adding an imaginary true as the parent of the predicates in the first row.

  19. Adding deriving(Show) at the end of a data declaration enables printing instances on the standard output.

  20. Adding a definition for the name main makes Listing 13 a complete program.

  21. Line 37 is a special case, where \(*\) matches multiple condition cells.

  22. If replace is not substituted by insert in Line 47.

References

  1. Clements, P., Northrop, L.: Software Product Lines: Practices and Patterns. Addison-Wesley Longman, Boston (2001)

  2. Pohl, K., Böckle, G., van der Linden, F.: Software Product Line Engineering: Foundations, Principles, and Techniques. Springer, Berlin (2005)

  3. Faitelson, D., Tyszberowicz, S.S.: Data refinement based testing. Int. J. Syst. Assur. Eng. Manag. 2(2), 144–154 (2011)

  4. de Roever, W.P., Engelhardt, K.: Data Refinement: Model-Oriented Proof Theories and Their Comparison. Cambridge Tracts in Theoretical Computer Science, vol. 46. Cambridge University Press, Cambridge (1998)

  5. The FineFit home page. https://github.com/coderocket/finefit

  6. Parnas, D.L.: Tabular representation of relations. Tech. Rep. 260, Research Institute of Ontario, McMaster University (1992)

  7. Bettini, L., Damiani, F., Schaefer, I.: Compositional type checking of delta-oriented software product lines. Acta Inf. 50, 77–122 (2013)

  8. Schaefer, I., Bettini, L., Bono, V., Damiani, F., Tanzarella, N.: Delta-oriented programming of software product lines. In: Software Product Line Conference (SPLC), LNCS, vol. 6287, pp. 77–91. Springer (2010)

  9. Apel, S., Batory, D.S., Kästner, C., Saake, G.: Feature-Oriented Software Product Lines: Concepts and Implementation. Springer, Berlin (2013)

  10. Batory, D., Sarvela, J., Rauschmayer, A.: Scaling step-wise refinement. IEEE TSE 30(6), 355–371 (2004)

  11. Schaefer, I., Damiani, F.: Pure delta-oriented programming. In: Proceedings of the 2nd International Workshop on Feature-Oriented Software Development, FOSD’10, pp. 49–56. ACM, New York, NY, USA (2010). doi:10.1145/1868688.1868696

  12. Koscielny, J., Holthusen, S., Schaefer, I., Schulze, S., Bettini, L., Damiani, F.: DeltaJ 1.5: delta-oriented programming for Java 1.5. In: Proceedings of the 2014 International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools, PPPJ’14, pp. 63–74. ACM, New York, NY, USA (2014). doi:10.1145/2647508.2647512

  13. The DeltaJ home page. https://www.tu-braunschweig.de/isf/research/deltas

  14. Johansen, M.F., Haugen, O., Fleurey, F.: Properties of realistic feature models make combinatorial testing of product lines feasible. In: Proceedings of the International Conference on Model Driven Engineering Languages and Systems (MODELS), pp. 638–652. Springer, Berlin (2011)

  15. Johansen, M.F., Haugen, O., Fleurey, F.: An algorithm for generating t-wise covering arrays from large feature models. In: Proceedings of the 16th International Software Product Line Conference, vol. 1, SPLC’12, pp. 46–55. ACM, New York, NY, USA (2012). doi:10.1145/2362536.2362547

  16. Kowal, M., Schulze, S., Schaefer, I.: Towards efficient SPL testing by variant reduction. In: Proceedings of the 4th International Workshop on Variability & Composition, VariComp’13, pp. 1–6. ACM, New York, NY, USA (2013). doi:10.1145/2451617.2451619

  17. Lochau, M., Goltz, U.: Feature interaction aware test case generation for embedded control systems. Electron. Notes Theor. Comput. Sci. 264(3), 37–52 (2010)

  18. The DeltaFineFit home page. http://di.unito.it/deltafinefit

  19. Damiani, F., Gladisch, C., Tyszberowicz, S.: Refinement-based testing of delta-oriented product lines. In: Proceedings of the 2013 International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools, PPPJ’13, pp. 135–140. ACM, New York, NY, USA (2013). doi:10.1145/2500828.2500841

  20. Jackson, D.: Software Abstractions: Logic, Language, and Analysis. MIT Press, Cambridge (2012)

  21. Spivey, J.M.: The Z Notation: A Reference Manual. Prentice Hall International, Upper Saddle River (2001)

  22. Torlak, E., Jackson, D.: Kodkod: a relational model finder. In: Tools and Algorithms for the Construction and Analysis of Systems (TACAS), LNCS, vol. 4424, pp. 632–647. Springer (2007)

  23. Mugridge, R., Cunningham, W.: Fit for Developing Software: Framework for Integrated Tests. Pearson Education Inc., New Jersey (2005)

  24. Batory, D.: Feature models, grammars, and propositional formulas. In: Proceedings of the International Conference on Software Product Lines (SPLC), LNCS, vol. 3714, pp. 7–20. Springer (2005)

  25. Schaefer, I., Bettini, L., Damiani, F.: Compositional type-checking for delta-oriented programming. In: Proceedings of the Tenth International Conference on Aspect-Oriented Software Development, AOSD’11, pp. 43–56. ACM, New York, NY, USA (2011). doi:10.1145/1960275.1960283

  26. Schaefer, I., Rabiser, R., Clarke, D., Bettini, L., Benavides, D., Botterweck, G., Pathak, A., Trujillo, S., Villela, K.: Software diversity: state of the art and perspectives. Int. J. Softw. Tools Technol. Transf. 14(5), 477–495 (2012)

  27. Apel, S., Kästner, C., Grösslinger, A., Lengauer, C.: Type safety for feature-oriented product lines. Autom. Softw. Eng. 17(3), 251–300 (2010)

  28. Apel, S., Kästner, C., Lengauer, C.: Feature Featherweight Java: a calculus for feature-oriented programming and stepwise refinement. In: Proceedings of the 7th International Conference on Generative Programming and Component Engineering, GPCE’08, pp. 101–112. ACM, New York, NY, USA (2008). doi:10.1145/1449913.1449931

  29. Delaware, B., Cook, W.R., Batory, D.: Fitting the pieces together: a machine-checked model of safe composition. In: Proceedings of the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, ESEC/FSE’09, pp. 243–252. ACM, New York, NY, USA (2009). doi:10.1145/1595696.1595733

  30. Krueger, C.: Eliminating the adoption barrier. IEEE Softw. 19(4), 29–31 (2002)

  31. do Carmo Machado, I., McGregor, J.D., Cavalcanti, Y.C., de Almeida, E.S.: On strategies for testing software product lines: a systematic literature review. Inf. Softw. Technol. 56(10), 1183–1199 (2014)

  32. Engström, E., Runeson, P.: Software product line testing: a systematic mapping study. Inf. Softw. Technol. 53(1), 2–13 (2011)

  33. Lee, J., Kang, S., Lee, D.: A survey on software product line testing. In: Proceedings of the 16th International Software Product Line Conference, vol. 1, SPLC’12, pp. 31–40. ACM, New York, NY, USA (2012). doi:10.1145/2362536.2362545

  34. da Mota Silveira Neto, P.A., do Carmo Machado, I., McGregor, J.D., de Almeida, E.S., de Lemos Meira, S.R.: A systematic mapping study of software product lines testing. Inf. Softw. Technol. 53(5), 407–423 (2011). Special Section on Best Papers from XP2010

  35. Salem, K., Beyer, K., Lindsay, B., Cochrane, R.: How to roll a join: asynchronous incremental view maintenance. In: Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, SIGMOD’00, pp. 129–140. ACM, New York, NY, USA (2000). doi:10.1145/342009.335393

  36. Uzuncaova, E., Khurshid, S., Batory, D.S.: Incremental test generation for software product lines. IEEE TSE 36(3), 309–322 (2010)

  37. Lochau, M., Lity, S., Lachmann, R., Schaefer, I., Goltz, U.: Delta-oriented model-based integration testing of large-scale systems. J. Syst. Softw. 91, 63–84 (2014)

  38. Lochau, M., Schaefer, I., Kamischke, J., Lity, S.: Incremental model-based testing of delta-oriented software product lines. In: TAP, LNCS, vol. 7305, pp. 67–82. Springer (2012)

  39. Damiani, F., Owe, O., Dovland, J., Schaefer, I., Johnsen, E.B., Yu, I.C.: A transformational proof system for delta-oriented programming. In: Proceedings of the 16th International Software Product Line Conference, vol. 2, SPLC’12, pp. 53–60. ACM, New York, NY, USA (2012). doi:10.1145/2364412.2364422

  40. Hähnle, R., Schaefer, I.: A Liskov principle for delta-oriented programming. In: Leveraging Applications of Formal Methods, Verification and Validation. Technologies for Mastering Change International Symposium (ISoLA), Part I, LNCS, vol. 7609, pp. 32–46. Springer (2012)

  41. Dovland, J., Johnsen, E.B., Owe, O., Steffen, M.: Lazy behavioral subtyping. J. Log. Algebr. Program. 79(7), 578–607 (2010)

  42. Beckert, B., Gladisch, C., Tyszberowicz, S., Yehudai, A.: KeYGenU: combining verification-based and capture and replay techniques for regression unit testing. Int. J. Syst. Assur. Eng. Manag. 2(2), 97–113 (2011)

  43. Damiani, F., Padovani, L., Schaefer, I.: A formal foundation for dynamic delta-oriented software product lines. In: Proceedings of the 11th International Conference on Generative Programming and Component Engineering, GPCE’12, pp. 1–10. ACM, New York, NY, USA (2012). doi:10.1145/2371401.2371403

  44. Damiani, F., Schaefer, I.: Dynamic delta-oriented programming. In: Proceedings of the 15th International Software Product Line Conference, vol. 2, SPLC’11, pp. 34:1–34:8. ACM, New York, NY, USA (2011). doi:10.1145/2019136.2019175

  45. Li, Z., Harman, M., Hierons, R.M.: Search algorithms for regression test case prioritization. IEEE TSE 33(4), 225–237 (2007)


Acknowledgments

We thank the anonymous referees of PPPJ’13 for valuable comments on a preliminary version of this paper and the anonymous SoSyM referees for many insightful comments and suggestions for improving the paper.

Author information

Affiliations

Authors

Corresponding author

Correspondence to Ferruccio Damiani.

Additional information

The authors of this paper are listed in alphabetical order. This work has been partially supported by project HyVar (www.hyvar-project.eu), which has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 644298; by ICT COST Action IC1402 ARVI (www.cost-arvi.eu); by ICT COST Action IC1201 BETTY (www.behavioural-types.eu); by Italian MIUR PRIN 2010LHT4KM Project CINA (sysma.imtlucca.it/cina); by Ateneo/CSP D16D15000360005 project RunVar; and by GIF (Grant No. 1131-9.6/2011).

Communicated by Profs. Einar Broch Johnsen and Luigia Petre.

Appendices

Appendix 1: The structure and semantics of operation tables

A FineFit operation table is a predicate that specifies the behavior of an operation as a relation between the model’s state variables before the operation starts (the pre-state) and after the operation completes (the post-state). It consists of two major areas: an expression table and a precondition tree. The expression table is a set of columns, where each column defines the values of the state variables in the post-state, given their values in the pre-state. The precondition tree consists of predicates that determine which columns to use in the definition of the post-state. For example, consider the following operation table:

[Operation table for R not reproduced]

The operation R has a single input parameter c? and determines the values of its two state variables x and y as follows: If c? is positive, then the operation must set x to \(x+c?\) and y to \(x+y\) (the rightmost column). Otherwise, if c? is not positive, the effect of the operation depends on the current value of x: when x is not negative, x and y must be set to zero, and when x is negative, x and y do not change (their values must be the same as they were when the operation started). Note that the predicate for determining whether columns 1 and 2 (see footnote 17) are relevant for the specification is the conjunction of the child predicates \(x<0\) and \(x\ge 0\) with their parent predicate \(c? \le 0\).
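Since exactly one guard of this table holds in every pre-state, the relation it specifies is deterministic and can be transcribed directly into code. The following Java sketch is our illustration (method and class names are hypothetical, not part of FineFit):

```java
public class OperationR {
    // Post-state of (x, y) for input c, following the precondition tree:
    // the rightmost column applies when c > 0; otherwise the branch on x decides.
    static int[] opR(int x, int y, int c) {
        if (c > 0) {
            return new int[]{x + c, x + y};  // x' = x + c?, y' = x + y
        } else if (x >= 0) {
            return new int[]{0, 0};          // both variables reset to zero
        } else {
            return new int[]{x, y};          // x < 0: the state is unchanged
        }
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(opR(2, 3, 1)));   // [3, 5]
        System.out.println(java.util.Arrays.toString(opR(2, 3, 0)));   // [0, 0]
        System.out.println(java.util.Arrays.toString(opR(-2, 3, 0)));  // [-2, 3]
    }
}
```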

We now define the conditions necessary for the precondition part of the table to form a tree. Consider the precondition part of the table as a matrix of cells. Then, each predicate spans one or more consecutive cells (in the same row) and the space that each predicate occupies must be included below the space that its parent (the predicate above) occupies. For example, the predicate \(x \ge 0\) occupies cell (2, 2) and its parent, the predicate \(c? \le 0\) occupies cells (1, 1), (1, 2).

Formally, consider a precondition part that is arranged in a matrix of n rows by m columns of cells. The i-th row contains a sequence of predicates, each occupying a span of one or more cells. Let \(k_i\) indicate the sequence of spans that each predicate occupies (and therefore \(k_{ij}\) is the j-th span in the i-th row). For example, in the operation above, \(k_1 = 2,1\) and \(k_2 = 1,1,1\). In order for the spans to describe a valid precondition tree, three conditions must be met:

  1. 1.

    The sum of spans in each row must be equal to the number of columns in the matrix:

    $$\begin{aligned} \sum _{j=1}^{|k_i|}k_{ij} = m \end{aligned}$$

    where \(|k_i|\) is the number of elements in the ith sequence of spans.

  2. 2.

    The cells that each predicate spans must be included below the span of cells of its parent:

    $$\begin{aligned} S_{ij} \subseteq S_{(i-1)l} \qquad \hbox {for some }1\le l\le m \end{aligned}$$

    where the span of cells of the j-th predicate in the i-th row is

    $$\begin{aligned} S_{ij} = \sum _{l=1}^{j-1}k_{il} + 1, \ldots , \sum _{l=1}^j k_{il} \end{aligned}$$
  3. 3.

    The last row (n) must consist of individual cells, that is

    $$\begin{aligned} k_{nj} = 1 \quad \hbox {for all }1 \le j \le m. \end{aligned}$$

This arrangement ensures that the predicates form a tree structure (see footnote 18) with m leaves, all occupying the last row. The following matrix illustrates this structure:

[Matrix illustration not reproduced]

There are three span sequences, one for each row:

$$\begin{aligned} k_{1}&= 5,2 \\ k_{2}&= 3,2,2 \\ k_{3}&= 1,1,1,1,1,1,1 \end{aligned}$$

We can see that each span is included below the span of its parent. For example,

$$\begin{aligned} S_{2,2} = \{4,5\} \subseteq S_{1,1} = \{1,2,3,4,5\} \end{aligned}$$
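The three conditions can be checked mechanically. The following Java sketch is our own illustration, assuming each row of the precondition part is given as its span sequence; it validates the example above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class PreconditionTree {
    // Cell indices occupied by each predicate in a row with span sequence ks
    static List<Set<Integer>> spans(int[] ks) {
        List<Set<Integer>> out = new ArrayList<>();
        int offset = 0;
        for (int k : ks) {
            Set<Integer> s = new TreeSet<>();
            for (int c = offset + 1; c <= offset + k; c++) {
                s.add(c);
            }
            out.add(s);
            offset += k;
        }
        return out;
    }

    // Checks the three well-formedness conditions for an m-column matrix
    static boolean wellFormed(int m, int[][] rows) {
        for (int[] row : rows) {                        // condition 1: row sums
            if (Arrays.stream(row).sum() != m) return false;
        }
        for (int i = 1; i < rows.length; i++) {         // condition 2: nesting
            List<Set<Integer>> parents = spans(rows[i - 1]);
            for (Set<Integer> s : spans(rows[i])) {
                if (parents.stream().noneMatch(p -> p.containsAll(s))) return false;
            }
        }
        for (int k : rows[rows.length - 1]) {           // condition 3: last row
            if (k != 1) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int[][] ok = {{5, 2}, {3, 2, 2}, {1, 1, 1, 1, 1, 1, 1}};
        int[][] bad = {{5, 2}, {4, 3}, {1, 1, 1, 1, 1, 1, 1}};
        System.out.println(wellFormed(7, ok));   // true
        System.out.println(wellFormed(7, bad));  // false: {5,6,7} crosses {1..5}/{6,7}
    }
}
```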

We say that the precondition tree is well formed when it satisfies the three conditions we have defined above. In the rest of the discussion, we always assume that we are working with well-formed precondition trees. Given a column i, we define the guard for this column as the conjunction of the leaf that occupies the i-th column and all its ancestors. Let \(\bar{v}' = v'_1,\ldots ,v'_l\) be the state space vector. Let \(\bar{c_i}\) be the i-th column of the expression table. Then, the meaning of the operation specification is:

$$\begin{aligned} \bar{v}' \in \{ \bar{c_i} : 1 \le i \le m \wedge guard(i) \} \end{aligned}$$

That is, the value of the state variables vector in the post-state can be equal to the value of any of the expression table columns whose guard was true in the pre-state.

The semantics of individual predicates and expressions is explained in [3].

Appendix 2: The algorithm that computes the \( apply \) function

We provide (in Sect. “Executable specification in Haskell”) a Haskell executable specification of the \( apply \) function (cf. Sect. 4.2.1) and present (in Sect. “The Java implementation”) its Java implementation. Note that in this paper all the tables resulting from delta-table application have been generated using this implementation.

Executable specification in Haskell

The executable specification of the algorithm that computes the \( apply \) function is provided as a program written in Haskell (Listing 13).

Haskell is a functional programming language with a syntax that strongly resembles the usual mathematical notation for defining functions by cases, via pattern matching. The Haskell code and the comments in Listing 13 should be almost self-explanatory. In the following, we briefly explain some technicalities. The expression contained in an ordinary table cell, and in a delta-table cell containing the insert or replace operator, is represented using the standard library type String (Line 3). Both ordinary tables and delta tables are represented as trees, where each node corresponds to a cell. A tree that represents a table (ordinary or delta) is expressed as a value of the recursive data type Table (Line 10); a value of type Table describes the root cell and the list of its immediate subtrees. The data type Table has five data constructors (each describes a different kind of node): Basic, Insert, and Replace (all of arity two), and Match and Remove (arity one; see footnote 19). Ordinary tables are represented using only the data constructor Basic, while delta tables are represented using only the other data constructors, which correspond to the delta-table operators.
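For readers more familiar with Java, the recursive Table type can be mirrored as a node kind plus a list of subtrees. This is a hypothetical sketch of the same structure, not the paper's Node class:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TableAdt {
    // The five kinds of nodes of the Haskell Table data type
    enum Kind { BASIC, INSERT, REPLACE, MATCH, REMOVE }

    static class Cell {
        final Kind kind;
        final String text;       // used by BASIC, INSERT, REPLACE; empty otherwise
        final List<Cell> subs;   // the immediate subtrees

        Cell(Kind kind, String text, Cell... subs) {
            this.kind = kind;
            this.text = text;
            this.subs = new ArrayList<>(Arrays.asList(subs));
        }
    }

    public static void main(String[] args) {
        // An ordinary table uses only BASIC nodes, rooted at "ROOT"
        Cell ordinary = new Cell(Kind.BASIC, "ROOT",
                new Cell(Kind.BASIC, "x"),
                new Cell(Kind.BASIC, "y"));
        System.out.println(ordinary.subs.size());  // 2
    }
}
```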

Example 3

(Representation of ordinary tables and delta tables in Haskell) Since ordinary tables may have more than one cell in their first row, they are always encoded by adding a top node containing the string “ROOT”. Delta tables are therefore encoded by adding a top node containing the match operator (matching the top node in the ordinary tables). The following ordinary and delta tables

[Example tables not reproduced]

are respectively represented by the following values of type Table:

[Table encodings not reproduced]

The Haskell function apply (Lines 15–64) corresponds to the \( apply \) function (introduced in Sect. 4.2.1) that executes the delta operations of the delta table. To improve readability, we have not modeled the behavior described in Remark 2 of Sect. 4.2.1.

Line 30 declares the type of the apply function. The first argument of the function apply has type Maybe Table. The standard library Maybe data type has two constructors: the unary data constructor Just and the constant Nothing. It is used to specify optional values: A value of type Maybe Table either contains a table t (represented as Just t), or it is empty (represented as Nothing).
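In Java terms, Maybe Table plays the role of Optional<Table>: Just t corresponds to Optional.of(t) and Nothing to Optional.empty(). A minimal illustration (using String in place of Table):

```java
import java.util.Optional;

public class MaybeDemo {
    public static void main(String[] args) {
        Optional<String> some = Optional.of("ROOT");  // Just "ROOT"
        Optional<String> none = Optional.empty();     // Nothing
        System.out.println(some.isPresent());  // true
        System.out.println(none.isPresent());  // false
    }
}
```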

The standard library function concat concatenates a list of lists into a single list. The standard library function zipWith applies a given function elementwise to the members of two lists, returning the list of results (its length is that of the shorter input). For the convenience of readers, zipWith’s code is as follows:

[zipWith listing not reproduced]
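As the Haskell listing is not reproduced here, the behavior of zipWith can be sketched in Java (a hypothetical generic helper, ours for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

public class ZipWith {
    // Applies f elementwise to two lists; the result is as long as the shorter input
    static <A, B, C> List<C> zipWith(BiFunction<A, B, C> f, List<A> as, List<B> bs) {
        List<C> out = new ArrayList<>();
        int n = Math.min(as.size(), bs.size());
        for (int i = 0; i < n; i++) {
            out.add(f.apply(as.get(i), bs.get(i)));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(zipWith((Integer a, Integer b) -> a + b,
                List.of(1, 2, 3), List.of(10, 20)));  // [11, 22]
    }
}
```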

Example 4

(An execution of the Haskell function apply) Consider the following delta-table application, which has already been presented at the end of Sect. 4.2.1 (recall that the numbers to the left of the cells denote the recursion step):

[Delta-table application not reproduced]

The above delta-table application can be defined by appending the following three lines to the code in Listing 13 (see footnote 20):

[Code lines not reproduced]

Executing the main program yields

[Program output not reproduced]

The Java implementation

The Java implementation of the algorithm that computes the \( apply \) function is given in Listings 14 and 15. A cell of a table is represented by the Node class, which extends the Vector class. Tables are represented as trees: an object of class Node represents a cell, which is the root of a subtree, and contains (in the Vector’s elements) the references to the roots of the immediate subtrees.

[Listings 14 and 15 not reproduced]

The applyPrime method (Lines 16–23 in Listing 14) corresponds to the \( apply '\) function (introduced in Sect. 4.3). It first calls (Line 19) the prepare method (which applies the rules described in Sect. 4.3) and then invokes (Line 21) the apply method (Lines 30–64 in Listing 14). The apply method corresponds to the \( apply \) function (introduced in Sect. 4.2.1 and specified in “Executable specification in Haskell” above). When the apply method is called, the this object is the current node of the delta table, and orig and res are the current nodes of the original table and of the resulting table, respectively.

The body of the apply method implements the recursive walk of the parse tree, which is controlled by the operations in the cells of the delta table and, therefore, describes how each operation works. The first part of the method (Lines 36–50) reads the operation of the current delta node and updates the content of the resulting node; Lines 40–46 implement the behavior described in Remark 2 of Sect. 4.2.1. The second part of the method (Lines 52–62) is responsible for the recursive call (see footnote 21). At each recursive call (Lines 43–56 and 60), the method getOrCreateChild(i), which creates the nodes of the resulting table, is called; it either returns the i-th subnode if it already exists, or it creates the i-th subnode and returns it. We now continue illustrating the apply method using small examples. (An example of an execution trace of the apply method is provided in Example 5.)
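A minimal sketch of the assumed Node shape and of getOrCreateChild follows; the real class carries more state (e.g., the cell content and operator), so this is our simplification:

```java
import java.util.Vector;

public class NodeDemo {
    // Sketch: a Node is a cell whose Vector elements are the roots of its subtrees
    static class Node extends Vector<Node> {
        String content = "";

        // Returns the i-th subnode, creating any missing subnodes on demand
        Node getOrCreateChild(int i) {
            while (size() <= i) {
                add(new Node());
            }
            return get(i);
        }
    }

    public static void main(String[] args) {
        Node root = new Node();
        root.getOrCreateChild(2);         // creates children 0, 1, and 2
        System.out.println(root.size());  // 3
    }
}
```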

The loop that begins in Line 52 iterates over the siblings of the current delta-table node. This means that if at the current level the original table has more cells than the delta table, then the additional cells are ignored. For example:

[Example not reproduced]

If the delta table has more cells than the original table at the current level, the last cell of the original table is repeatedly used when processing the additional operations of the delta table (Line 53). For example:

[Example not reproduced]

If the original table contains no cells at the current level and the current operation of the delta table is \(*\) or \(-\), the remaining operations of the delta table on the current branch are ignored (Line 52).

[Example not reproduced]

If the original table contains no cells at the current level and the current operation of the delta table is \(\blacktriangleright \) or \(*\blacktriangleright \), a cell with the value of the insert or replace operation is inserted (Line 50) in the resulting table. In this case, Line 47 maps the replace operation to an insert operation. For example:

[Example not reproduced]

It is therefore possible to extend a table with new variable and value rows by using either the insert or replace operation.

The algorithm traverses the original table and the delta table in parallel. The recursive call in Line 54 is responsible for the operations match, remove, and replace (see footnote 22). The recursive call in Line 50 handles the insert operation, where the current node of the original table is passed as the first argument rather than the current child of the original table. Delaying the recursive step on the original table results in the described semantics of the insert operation.

The recursive call in Line 37 is used for convenience. If either the remove or the match operation occurs as the last condition in a condition hierarchy, but the condition cell c of the original table has additional subconditions (this is checked by isLastConditionNode()), the remove or the match operation is applied to all subconditions of c. For example, let A, B, and C be condition cells; then

[Example not reproduced]

The reader may have noticed that, unlike the other operations, the remove operation is not handled in the loop in Lines 52–62. When a remove operation occurs, a temporary cell that is marked to be removed is created in the resulting table (Line 38). The marked cells are removed in Line 57. Removing nodes at the end of the recursion simplifies the implementation, because removing a node c pulls up its children \(c_1,\ldots ,c_n\) to the current level. Hence, by removing a node, the number of siblings at the current level may decrease, stay constant, or increase. In the following example, we assume that A is not a condition cell.

[Example not reproduced]
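The pull-up behavior can be sketched on a simple tree representation. The helper below is hypothetical (not the paper's code): removing a child splices that child's children into the parent at the same position, which is why the sibling count can change in either direction.

```java
import java.util.ArrayList;
import java.util.List;

public class PullUpDemo {
    static class Node {
        final String content;
        final List<Node> kids = new ArrayList<>();
        Node(String content) { this.content = content; }
    }

    // Removes the child at index i and splices its children into the parent
    static void removeAndPullUp(Node parent, int i) {
        Node removed = parent.kids.remove(i);
        parent.kids.addAll(i, removed.kids);
    }

    public static void main(String[] args) {
        Node a = new Node("A");
        a.kids.add(new Node("B"));
        a.kids.add(new Node("C"));
        Node root = new Node("ROOT");
        root.kids.add(a);
        root.kids.add(new Node("D"));

        removeAndPullUp(root, 0);              // removing A pulls up B and C
        System.out.println(root.kids.size());  // 3: siblings went from 2 to 3
    }
}
```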

Example 5

(An execution trace of the method apply) For a complete example, consider the following delta-table application, which has already been presented at the end of Sect. 4.2.1 and in Example 4 of “Executable specification in Haskell” above:

[Delta-table application not reproduced]

Executing the apply method (Lines 34–64) results in the following recursive invocations of the method:

[Trace of recursive invocations not reproduced]

At the end of the last recursion step of the apply algorithm (Line 57 in Listing 14), the method deleteTemporaryRemoveNodes is called, which removes from the resulting tree res the temporary “\(-\)” node that was introduced at recursion step 5.


Cite this article

Damiani, F., Faitelson, D., Gladisch, C. et al. A novel model-based testing approach for software product lines. Softw Syst Model 16, 1223–1251 (2017). https://doi.org/10.1007/s10270-016-0516-2


Keywords

  • Java
  • Alloy
  • Software product line
  • Delta-oriented programming
  • Model-based testing
  • Refinement