Abstract
Model-based testing relies on a model of the system under test. FineFit is a framework for model-based testing of Java programs. In the FineFit approach, the model is expressed by a set of tables based on Parnas tables. A software product line is a family of programs (the products) with well-defined commonalities and variabilities that are developed by (re)using common artifacts. In this paper, we address the issue of using the FineFit approach to support the development of correct software product lines. We specify a software product line as a specification product line where each product is a FineFit specification of the corresponding software product. The main challenge is to concisely specify the software product line while retaining the readability of the specification of a single system. To address this, we used delta-oriented programming, a recently proposed flexible approach for implementing software product lines, and developed: (1) delta tables as a means to apply the delta-oriented programming idea to the specification of software product lines; and (2) DeltaFineFit as a novel model-based testing approach for software product lines.
Notes
A straightforward embedding of FOP into DOP is illustrated, e.g., in [11].
In practice, the tables can be written using any tool that can export its output to HTML, for example MS Word. FineFit ignores anything that is not part of an HTML table.
Alloy is a general-purpose modeling language (in the style of Z) for reasoning about relational structures with first-order logic. It has no direct concept of system states or operations, and it does not offer any tool for testing software.
Note that by definition a set does not contain duplicate elements.
A feature model defines the valid feature configurations of an SPL, i.e., the feature configurations that describe the products (see, e.g., [24]).
In DeltaJ 1.5, each constraint “\(S_{i,1}\) when \(\texttt{P}_{i,1}\), \(\ldots\), \(S_{i,q_i}\) when \(\texttt{P}_{i,q_i}\);” is called a “partition,” since the set of sets of delta module names \(\{S_{i,j} \mid 1\le j\le q_i\}\) is a partition of \(\cup _{1\le j\le q_i} S_{i,j}\).
Thus, ruling out ambiguity.
Note that tables satisfying this case do not satisfy case 1.
The bottom line of the cells is removed to indicate that \(c_i\) represents a subtree rather than the content of a cell.
Unlike the \( apply \) function, the \( prepare \) function is aware of the row alignment of cells in a table and provides special treatment for different kinds of cells.
Recall that the row span of the name cell defines that the first three rows of the example are condition rows and the other rows contain variable and value cells.
This includes variables checked by the conditional operator.
Rows with conditional remove operators are also moved to the right place by this rule, i.e., where the content of a cell matches the condition.
The first row of enumeration tables, which consists of an atom name, was counted.
We do not count the leftmost column containing the state variables.
In fact, this is a forest. However, it can be transformed into a tree by adding an imaginary true as the parent of the predicates at the first row.
Adding deriving(Show) at the end of a data declaration enables printing instances on standard output.
Adding a definition for the name main makes Listing 13 a complete program.
Line 37 is a special case, where \(*\) matches multiple condition cells.
If replace is not substituted by insert in Line 47.
References
Clements, P., Northrop, L.: Software Product Lines: Practices and Patterns. Addison-Wesley Longman, Boston (2001)
Pohl, K., Böckle, G., van der Linden, F.: Software Product Line Engineering: Foundations, Principles, and Techniques. Springer, Berlin (2005)
Faitelson, D., Tyszberowicz, S.S.: Data refinement based testing. Int. J. Syst. Assur. Eng. Manag. 2(2), 144–154 (2011)
de Roever, W.P., Engelhardt, K.: Data Refinement: Model-Oriented Proof Theories and Their Comparison. Cambridge Tracts in Theoretical Computer Science, vol. 46. Cambridge University Press, Cambridge (1998)
The FineFit home page. https://github.com/coderocket/finefit
Parnas, D.L.: Tabular representation of relations. Tech. rep. 260, Research Institute of Ontario, McMaster University (1992)
Bettini, L., Damiani, F., Schaefer, I.: Compositional type checking of delta-oriented software product lines. Acta Inf. 50, 77–122 (2013)
Schaefer, I., Bettini, L., Bono, V., Damiani, F., Tanzarella, N.: Delta-oriented programming of software product lines. In: Software Product Line Conference (SPLC), LNCS, vol. 6287, pp. 77–91. Springer (2010)
Apel, S., Batory, D.S., Kästner, C., Saake, G.: Feature-Oriented Software Product Lines: Concepts and Implementation. Springer, Berlin (2013)
Batory, D., Sarvela, J., Rauschmayer, A.: Scaling stepwise refinement. IEEE TSE 30(6), 355–371 (2004)
Schaefer, I., Damiani, F.: Pure delta-oriented programming. In: Proceedings of the 2nd International Workshop on Feature-Oriented Software Development, FOSD’10, pp. 49–56. ACM, New York, NY, USA (2010). doi:10.1145/1868688.1868696
Koscielny, J., Holthusen, S., Schaefer, I., Schulze, S., Bettini, L., Damiani, F.: DeltaJ 1.5: delta-oriented programming for Java 1.5. In: Proceedings of the 2014 International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools, PPPJ’14, pp. 63–74. ACM, New York, NY, USA (2014). doi:10.1145/2647508.2647512
The DeltaJ home page. https://www.tu-braunschweig.de/isf/research/deltas
Johansen, M.F., Haugen, O., Fleurey, F.: Properties of realistic feature models make combinatorial testing of product lines feasible. In: Proceedings of the International Conference on Model Driven Engineering Languages and Systems (MODELS), pp. 638–652. Springer, Berlin (2011)
Johansen, M.F., Haugen, O., Fleurey, F.: An algorithm for generating t-wise covering arrays from large feature models. In: Proceedings of the 16th International Software Product Line Conference, vol. 1, SPLC’12, pp. 46–55. ACM, New York, NY, USA (2012). doi:10.1145/2362536.2362547
Kowal, M., Schulze, S., Schaefer, I.: Towards efficient SPL testing by variant reduction. In: Proceedings of the 4th International Workshop on Variability & Composition, VariComp’13, pp. 1–6. ACM, New York, NY, USA (2013). doi:10.1145/2451617.2451619
Lochau, M., Goltz, U.: Feature interaction aware test case generation for embedded control systems. Electron. Notes Theor. Comput. Sci. 264(3), 37–52 (2010)
The DeltaFineFit home page. http://di.unito.it/deltafinefit
Damiani, F., Gladisch, C., Tyszberowicz, S.: Refinement-based testing of delta-oriented product lines. In: Proceedings of the 2013 International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools, PPPJ’13, pp. 135–140. ACM, New York, NY, USA (2013). doi:10.1145/2500828.2500841
Jackson, D.: Software Abstractions: Logic, Language, and Analysis. MIT Press, Cambridge (2012)
Spivey, J.M.: The Z Notation: A Reference Manual. Prentice Hall International, Upper Saddle River (2001)
Torlak, E., Jackson, D.: Kodkod: a relational model finder. In: Tools and Algorithms for the Construction and Analysis of Systems (TACAS), LNCS, vol. 4424, pp. 632–647. Springer (2007)
Mugridge, R., Cunningham, W.: Fit for Developing Software: Framework for Integrated Tests. Pearson Education, Inc., New Jersey (2005)
Batory, D.: Feature models, grammars, and propositional formulas. In: Proceedings of the International Conference on Software Product Lines (SPLC), LNCS, vol. 3714, pp. 7–20. Springer (2005)
Schaefer, I., Bettini, L., Damiani, F.: Compositional type-checking for delta-oriented programming. In: Proceedings of the Tenth International Conference on Aspect-Oriented Software Development, AOSD’11, pp. 43–56. ACM, New York, NY, USA (2011). doi:10.1145/1960275.1960283
Schaefer, I., Rabiser, R., Clarke, D., Bettini, L., Benavides, D., Botterweck, G., Pathak, A., Trujillo, S., Villela, K.: Software diversity: state of the art and perspectives. Int. J. Softw. Tools Technol. Transf. 14(5), 477–495 (2012)
Apel, S., Kästner, C., Grösslinger, A., Lengauer, C.: Type safety for feature-oriented product lines. Autom. Softw. Eng. 17(3), 251–300 (2010)
Apel, S., Kästner, C., Lengauer, C.: Feature Featherweight Java: a calculus for feature-oriented programming and stepwise refinement. In: Proceedings of the 7th International Conference on Generative Programming and Component Engineering, GPCE’08, pp. 101–112. ACM, New York, NY, USA (2008). doi:10.1145/1449913.1449931
Delaware, B., Cook, W.R., Batory, D.: Fitting the pieces together: a machine-checked model of safe composition. In: Proceedings of the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on The Foundations of Software Engineering, ESEC/FSE’09, pp. 243–252. ACM, New York, NY, USA (2009). doi:10.1145/1595696.1595733
Krueger, C.: Eliminating the adoption barrier. IEEE Softw. 19(4), 29–31 (2002)
do Carmo Machado, I., McGregor, J.D., Cavalcanti, Y.C., de Almeida, E.S.: On strategies for testing software product lines: a systematic literature review. Inf. Softw. Technol. 56(10), 1183–1199 (2014)
Engström, E., Runeson, P.: Software product line testing–a systematic mapping study. Inf. Softw. Technol. 53(1), 2–13 (2011)
Lee, J., Kang, S., Lee, D.: A survey on software product line testing. In: Proceedings of the 16th International Software Product Line Conference, vol. 1, SPLC’12, pp. 31–40. ACM, New York, NY, USA (2012). doi:10.1145/2362536.2362545
da Mota Silveira Neto, P.A., do Carmo Machado, I., McGregor, J.D., de Almeida, E.S., de Lemos Meira, S.R.: A systematic mapping study of software product lines testing. Inf. Softw. Technol. 53(5), 407–423 (2011). Special Section on Best Papers from XP2010
Salem, K., Beyer, K., Lindsay, B., Cochrane, R.: How to roll a join: asynchronous incremental view maintenance. In: Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, SIGMOD’00, pp. 129–140. ACM, New York, NY, USA (2000). doi:10.1145/342009.335393
Uzuncaova, E., Khurshid, S., Batory, D.S.: Incremental test generation for software product lines. IEEE TSE 36(3), 309–322 (2010)
Lochau, M., Lity, S., Lachmann, R., Schaefer, I., Goltz, U.: Delta-oriented model-based integration testing of large-scale systems. J. Syst. Softw. 91, 63–84 (2014)
Lochau, M., Schaefer, I., Kamischke, J., Lity, S.: Incremental model-based testing of delta-oriented software product lines. In: TAP, LNCS, vol. 7305, pp. 67–82. Springer (2012)
Damiani, F., Owe, O., Dovland, J., Schaefer, I., Johnsen, E.B., Yu, I.C.: A transformational proof system for delta-oriented programming. In: Proceedings of the 16th International Software Product Line Conference, vol. 2, SPLC’12, pp. 53–60. ACM, New York, NY, USA (2012). doi:10.1145/2364412.2364422
Hähnle, R., Schaefer, I.: A Liskov principle for deltaoriented programming. In: Leveraging Applications of Formal Methods, Verification and Validation. Technologies for Mastering Change International Symposium (ISoLA), Part I, LNCS, vol. 7609, pp. 32–46. Springer (2012)
Dovland, J., Johnsen, E.B., Owe, O., Steffen, M.: Lazy behavioral subtyping. J. Log. Algebr. Program. 79(7), 578–607 (2010)
Beckert, B., Gladisch, C., Tyszberowicz, S., Yehudai, A.: KeYGenU: combining verificationbased and capture and replay techniques for regression unit testing. Int. J. Syst. Assur. Eng. Manag. 2(2), 97–113 (2011)
Damiani, F., Padovani, L., Schaefer, I.: A formal foundation for dynamic delta-oriented software product lines. In: Proceedings of the 11th International Conference on Generative Programming and Component Engineering, GPCE’12, pp. 1–10. ACM, New York, NY, USA (2012). doi:10.1145/2371401.2371403
Damiani, F., Schaefer, I.: Dynamic delta-oriented programming. In: Proceedings of the 15th International Software Product Line Conference, vol. 2, SPLC’11, pp. 34:1–34:8. ACM, New York, NY, USA (2011). doi:10.1145/2019136.2019175
Li, Z., Harman, M., Hierons, R.M.: Search algorithms for regression test case prioritization. IEEE TSE 33(4), 225–237 (2007)
Acknowledgments
We thank the anonymous referees of PPPJ’13 for valuable comments on a preliminary version of this paper and the anonymous SoSyM referees for many insightful comments and suggestions for improving the paper.
Additional information
Communicated by Profs. Einar Broch Johnsen and Luigia Petre.
The authors of this paper are listed in alphabetical order. This work has been partially supported by project HyVar (www.hyvar-project.eu), which has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 644298; by ICT COST Action IC1402 ARVI (www.cost-arvi.eu); by ICT COST Action IC1201 BETTY (www.behavioural-types.eu); by Italian MIUR PRIN 2010LHT4KM Project CINA (sysma.imtlucca.it/cina); by Ateneo/CSP D16D15000360005 project RunVar; and by GIF (Grant No. 11319.6/2011).
Appendices
Appendix 1: The structure and semantics of operation tables
A FineFit operation table is a predicate that specifies the behavior of an operation as a relation between the model’s state variables before the operation starts (the pre-state) and after the operation completes (the post-state). It consists of two major areas: an expression table and a precondition tree. The expression table is a set of columns, where each column defines the values of the state variables in the post-state, given their values in the pre-state. The precondition tree consists of predicates that determine which columns to use in the definition of the post-state. For example, consider the following operation table:
The operation R has a single input parameter c? and determines the values of its two state variables x and y as follows: if c? is positive, then the operation must set x to \(x+c?\) and y to \(x+y\) (the rightmost column); otherwise, if c? is not positive, the effect of the operation depends on the current value of x. When x is not negative, x and y must be set to zero; when x is negative, x and y do not change (their values must be the same as when the operation started). Note that the predicate that determines whether columns 1 and 2 (footnote 17) are relevant for the specification is the conjunction of the child predicates \(x<0\) and \(x\ge 0\) with their parent predicate \(c? \le 0\).
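To make this reading concrete, here is a minimal Java sketch of R (the class and field names are ours, purely for illustration; FineFit derives tests from the tables rather than from code like this). Each branch corresponds to one column guard of the table:

```java
// Hypothetical illustration of the operation table for R.
// Each branch corresponds to the guard of one column.
public class OperationR {
    int x, y; // the model's state variables

    OperationR(int x, int y) {
        this.x = x;
        this.y = y;
    }

    void r(int c) {
        if (c > 0) {          // guard of the rightmost column: c? > 0
            int oldX = x;     // pre-state value of x
            x = oldX + c;     // x' = x + c?
            y = oldX + y;     // y' = x + y
        } else if (x >= 0) {  // guard: c? <= 0  AND  x >= 0
            x = 0;            // x' = 0
            y = 0;            // y' = 0
        }
        // guard: c? <= 0  AND  x < 0  ->  state unchanged
    }
}
```

Note how the guards of the two left columns are the conjunction of the parent predicate \(c? \le 0\) with each child predicate, exactly as described above.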
We now define the conditions necessary for the precondition part of the table to form a tree. Consider the precondition part of the table as a matrix of cells. Then, each predicate spans one or more consecutive cells (in the same row) and the space that each predicate occupies must be included below the space that its parent (the predicate above) occupies. For example, the predicate \(x \ge 0\) occupies cell (2, 2) and its parent, the predicate \(c? \le 0\) occupies cells (1, 1), (1, 2).
Formally, consider a precondition part that is arranged in a matrix of n rows by m columns of cells. The ith row contains a sequence of predicates, each occupying a span of one or more cells. Let \(k_i\) indicate the sequence of spans that each predicate occupies (and therefore \(k_{ij}\) is the jth span in the ith row). For example, in the operation above, \(k_1 = 2,1\) and \(k_2 = 1,1,1\). In order for the spans to describe a valid precondition tree, three conditions must be met:

1.
The sum of spans in each row must be equal to the number of columns in the matrix:
$$\begin{aligned} \sum _{j=1}^{k_i}k_{ij} = m \end{aligned}$$ where \(k_i\) is the number of elements in the ith sequence of spans.

2.
The cells that each predicate spans must be included below the span of cells of its parent:
$$\begin{aligned} S_{ij} \subseteq S_{(i-1)l} \qquad \hbox {for some } 1\le l\le m \end{aligned}$$ where the span of cells of the jth predicate in the ith row is
$$\begin{aligned} S_{ij} = \sum _{l=1}^{j-1}k_{il} + 1, \ldots , \sum _{l=1}^j k_{il} \end{aligned}$$ 
3.
The last row (n) must consist of individual cells, that is
$$\begin{aligned} k_{nj} = 1 \quad \hbox {for all }1 \le j \le m. \end{aligned}$$
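The three conditions above can be checked mechanically. The following Java sketch is our own helper (not part of FineFit): it takes the span sequences \(k_i\) as one int array per row and verifies all three conditions:

```java
import java.util.Arrays;

// Hypothetical checker for the well-formedness of a precondition tree,
// given the span sequences k_i (one int[] per row) and the column count m.
public class PreconditionTree {
    public static boolean wellFormed(int[][] rows, int m) {
        // Condition 1: the spans in each row must sum to m.
        for (int[] row : rows)
            if (Arrays.stream(row).sum() != m) return false;
        // Condition 2: each span must lie inside some span of the row above.
        for (int i = 1; i < rows.length; i++) {
            int start = 1;
            for (int span : rows[i]) {
                int end = start + span - 1;
                if (!containedInSome(rows[i - 1], start, end)) return false;
                start = end + 1;
            }
        }
        // Condition 3: the last row must consist of individual cells.
        for (int span : rows[rows.length - 1])
            if (span != 1) return false;
        return true;
    }

    // true iff the cell range [start..end] lies inside one parent span
    private static boolean containedInSome(int[] parentRow, int start, int end) {
        int s = 1;
        for (int span : parentRow) {
            int e = s + span - 1;
            if (start >= s && end <= e) return true;
            s = e + 1;
        }
        return false;
    }
}
```

For the example operation, `wellFormed(new int[][]{{2, 1}, {1, 1, 1}}, 3)` holds, matching the spans \(k_1 = 2,1\) and \(k_2 = 1,1,1\).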
This arrangement ensures that the predicates form a tree structure (footnote 18) with m leaves, all occupying the last row. The following matrix illustrates this structure:
There are three span sequences, one for each row:
We can see that each span is included below the span of its parent. For example,
We say that the precondition tree is well formed when it satisfies the three conditions defined above. In the rest of the discussion, we always assume that we are working with well-formed precondition trees. Given a column i, we define the guard of this column as the conjunction of the leaf that occupies the ith column and all of its ancestors. Let \(\bar{v}' = v'_1,\ldots ,v'_l\) be the state space vector and let \(\bar{c_i}\) be the ith column of the expression table. Then, the meaning of the operation specification is:
That is, the value of the state variables vector in the poststate can be equal to the value of any of the expression table columns whose guard was true in the prestate.
The semantics of individual predicates and expressions is explained in [3].
Appendix 2: The algorithm that computes the \( apply \) function
We provide (in Sect. “Executable specification in Haskell”) a Haskell executable specification of the \( apply \) function (cf. Sect. 4.2.1) and present (in Sect. “The Java implementation”) its Java implementation. Note that all the tables in this paper resulting from delta-table application have been generated using this implementation.
1.1 Executable specification in Haskell
The executable specification of the algorithm that computes the \( apply \) function is provided as a program written in Haskell (Listing 13).
Haskell is a functional programming language with a syntax that strongly resembles the usual mathematical notation for defining functions by cases, via pattern matching. The Haskell code and the comments in Listing 13 should be almost self-explanatory; in the following, we briefly explain some technicalities. The expression contained in an ordinary table cell, and in a delta-table cell containing the insert or replace operators, is represented using the standard library type String (Line 3). Both ordinary tables and delta tables are represented as trees, where each node corresponds to a cell. A tree that represents a table (ordinary or delta) is expressed as a value of the recursive data type Table (Line 10): a value of type Table describes the root cell and the list of its immediate subtrees. The data type Table has five data constructors (each describes a different kind of node): Basic, Insert, and Replace (all of arity two), and Match and Remove (arity one) (footnote 19). Ordinary tables are represented using only the data constructor Basic, while delta tables are represented using only the other data constructors, which correspond to the delta-table operators.
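For readers more familiar with Java, the recursive data type Table can be mirrored by a tagged node class. The following sketch is purely illustrative (the actual Java implementation in Listing 14 represents cells differently, via the Node class):

```java
import java.util.List;

// Hypothetical Java counterpart of the Haskell data type Table:
// a node kind (mirroring the five data constructors), the cell's
// expression where applicable, and the list of immediate subtrees.
class TableNode {
    enum Kind { BASIC, INSERT, REPLACE, MATCH, REMOVE }

    final Kind kind;
    final String expr;           // used by BASIC, INSERT, REPLACE; null otherwise
    final List<TableNode> subs;  // immediate subtrees

    TableNode(Kind kind, String expr, List<TableNode> subs) {
        this.kind = kind;
        this.expr = expr;
        this.subs = subs;
    }
}
```

An ordinary table uses only `Kind.BASIC` nodes, while a delta table uses only the remaining kinds, just as in the Haskell encoding.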
Example 3
(Representation of ordinary tables and delta tables in Haskell) Since ordinary tables may have more than one cell in their first row, they are always encoded by adding a top node containing the string “ROOT”. Delta tables are therefore encoded by adding a top node containing the match operator (matching the top node in the ordinary tables). The following ordinary and delta tables
are respectively represented by the following values of type Table:
The Haskell function apply (Lines 15–64) corresponds to the \( apply \) function (introduced in Sect. 4.2.1) that executes the delta operations of the delta table. To improve readability, we have not modeled the behavior described in Remark 2 of Sect. 4.2.1.
Line 30 declares the type of the apply function. The first argument of the function apply has type Maybe Table. The standard library Maybe data type has two constructors: the unary data constructor Just and the constant Nothing. It is used to specify optional values: A value of type Maybe Table either contains a table t (represented as Just t), or it is empty (represented as Nothing).
The standard library function concat flattens a list of lists into a single list. The standard library function zipWith applies a given function pairwise to the elements of two lists and returns the list of results, stopping at the end of the shorter list. For the convenience of readers, zipWith’s code is as follows:
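Alongside the Haskell definition, the same pairwise behavior can be rendered in the implementation language. The following Java version of zipWith is ours, for illustration only; like the Haskell original, it stops at the end of the shorter list:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Hypothetical Java rendering of Haskell's zipWith: applies f to the
// elements of xs and ys pairwise, up to the length of the shorter list.
public class Zip {
    public static <A, B, C> List<C> zipWith(BiFunction<A, B, C> f,
                                            List<A> xs, List<B> ys) {
        List<C> out = new ArrayList<>();
        int n = Math.min(xs.size(), ys.size());
        for (int i = 0; i < n; i++)
            out.add(f.apply(xs.get(i), ys.get(i)));
        return out;
    }
}
```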
Example 4
(An execution of the Haskell function apply) Consider the following delta-table application, which has already been presented at the end of Sect. 4.2.1 (recall that the numbers to the left of the cells denote the recursion step):
The above delta-table application can be defined by appending to the code in Listing 13 the following three lines (footnote 20):
Executing the main program yields
1.2 The Java implementation
The Java implementation of the algorithm that computes the \( apply \) function is given in Listings 14 and 15. A cell of a table is represented by the Node class, which extends the Vector class. Tables are represented as trees: an object of class Node represents a cell, which is the root of a subtree, and contains (in the Vector’s elements) the references to the roots of the immediate subtrees.
The applyPrime method (Lines 16–23 in Listing 14) corresponds to the \( apply '\) function (introduced in Sect. 4.3). It first calls (Line 19) the prepare method (which applies the rules described in Sect. 4.3) and then invokes (Line 21) the apply method (Lines 30–64 in Listing 14). The apply method corresponds to the \( apply \) function (introduced in Sect. 4.2.1 and specified in Sect. 1). When calling the apply method, the this object is the current node of the delta table, and orig and res are the current nodes of the original table and of the resulting table, respectively.
The body of the apply method implements the recursive walking of the parse tree, which is controlled by the operations in the cells of the delta table and, therefore, describes how each operation works. The first part of the method (Lines 36–50) reads the operation of the current delta node and updates the content of the resulting node; Lines 40–46 implement the behavior described in Remark 2 of Sect. 4.2.1. The second part of the method (Lines 52–62) is responsible for the recursive calls (footnote 21). At each recursive call (Lines 43, 56, and 60), the method getOrCreateChild(i), which creates the nodes of the resulting table, is invoked; it either returns the ith subnode if it already exists, or creates the ith subnode and returns it. We now illustrate the apply method using small examples. (An execution trace of the method apply is provided in Example 5.)
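The contract of getOrCreateChild can be sketched as follows. This is a simplified stand-in written by us: the method name comes from Listing 14, but the body is only our reading of its described behavior, on a plain list-backed node rather than the Vector-based Node class:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of getOrCreateChild: returns the i-th child if it
// already exists, otherwise creates empty children up to index i.
class Node {
    String content;
    List<Node> children = new ArrayList<>();

    Node(String content) {
        this.content = content;
    }

    Node getOrCreateChild(int i) {
        while (children.size() <= i)
            children.add(new Node("")); // fresh empty cell in the result tree
        return children.get(i);
    }
}
```

Repeated calls with the same index are therefore idempotent: the resulting table grows only when the delta table walks into a subtree that does not yet exist.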
The loop that begins in Line 52 iterates over the siblings of the current delta-table node. This means that if, at the current level, the original table has more cells than the delta table, the additional cells are ignored. For example:
If the delta table has more cells than the original table at the current level, the last cell of the original table is repeatedly used when processing the additional operations of the delta table (Line 53). For example:
If the original table contains no cells at the current level and the current operation of the delta table is \(*\) or \(-\), the remaining operations of the delta table on the current branch are ignored (Line 52).
If the original table contains no cells at the current level and the current operation of the delta table is \(\blacktriangleright \) or \(*\blacktriangleright \), a cell with the value of the insert or replace operations is inserted (Line 50) in the resulting table. Line 47 maps in this case the replace operation to an insert operation. For example:
It is therefore possible to extend a table with new variable and value rows by using either the insert or replace operation.
The algorithm traverses the original table and the delta table in parallel. The recursive call in Line 54 is responsible for the operations match, remove, and replace (footnote 22). The recursive call in Line 50 handles the insert operation, where the current node of the original table is passed as the first argument rather than the current child of the original table. Delaying the recursive step on the original table yields the described semantics of the insert operation.
The recursive call in Line 37 is used for convenience. If either the remove or the match operation occurs as the last condition in a condition hierarchy, but the condition cell c of the original table has additional subconditions (this is checked by isLastConditionNode()), the remove or the match operation is applied to all subconditions of c. For example, let A, B, and C be condition cells; then
The reader may have noticed that unlike the other operations, the remove operation is not handled in the loop in Lines 52–62. When a remove operation occurs, a temporary cell is created in the resulting table that is marked to be removed (Line 38). The marked cells are removed in Line 57. Removing nodes at the end of the recursion simplifies the implementation, because removing a node c pulls up its children \(c_1,\ldots ,c_n\) to the current level. Hence, by removing a node the number of siblings at the current level may decrease, stay constant, or increase. In the following example, we assume that A is not a condition cell.
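The pull-up effect of removing a node can be sketched with a small helper. This is a hypothetical illustration only: the actual implementation instead marks nodes first and deletes them later via deleteTemporaryRemoveNodes, but the splicing effect on the sibling list is the same:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: removing the i-th child pulls its children up to
// the current level, so the number of siblings at that level may
// decrease, stay constant, or increase.
class Cell {
    String content;
    List<Cell> children = new ArrayList<>();

    Cell(String content) {
        this.content = content;
    }

    void removeChild(int i) {
        Cell removed = children.remove(i);
        // splice the removed node's children in at the same position
        children.addAll(i, removed.children);
    }
}
```

For instance, removing a child with two subtrees from a node that had two children leaves the node with three children, illustrating why deferring removal to the end of the recursion simplifies the bookkeeping.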
Example 5
(An execution trace of the method apply) For a complete example, consider the following delta-table application, which has already been presented at the end of Sect. 4.2.1 and in Example 4 in “Executable specification in Haskell” of this appendix:
Executing the method apply in (Lines 34–64) will result in the following recursive invocations of the method:
At the end of the last recursion step of the apply algorithm (Line 57 in Listing 14), the method deleteTemporaryRemoveNodes is called, which removes the temporary “\(-\)” node from the resulting tree res that was introduced at recursion step 5.
Damiani, F., Faitelson, D., Gladisch, C. et al.: A novel model-based testing approach for software product lines. Softw. Syst. Model. 16, 1223–1251 (2017). https://doi.org/10.1007/s10270-016-0516-2