Analysis of Software Product Lines

  • Sven Apel
  • Don Batory
  • Christian Kästner
  • Gunter Saake

Abstract

Variability raises new challenges for establishing correctness or any kind of functional or nonfunctional guarantees about programs. Traditional testing, type checking, static analysis, verification, and software and performance measurement are well-established for individual systems, but they do not scale to product lines when analyzing every product in isolation, due to the huge configuration space. In this chapter, we discuss a broad range of strategies and methods to analyze a whole product line, explicitly considering variability in the analysis (hence the name variability-aware analysis). We cover basic analyses of feature models, analyses of mappings between features and implementations, and analyses of entire product-line implementations.

After reading the chapter, you should be able to
  • characterize opportunities and challenges for analyses of product lines (feature model, implementation artifacts, mappings),

  • perform analyses of feature models using a corresponding encoding as a satisfiability problem,

  • detect dead code fragments manually and mechanically, and

  • outline and compare strategies to perform variability-aware type checking of entire product-line implementations.

Variability raises new challenges for establishing correctness or any kind of functional or nonfunctional guarantees about programs. Testing, type checking, static analysis, verification, or software and performance measurement are well-established for individual systems, but they do not scale to product lines due to the huge configuration space with a combinatorial explosion of feature selections.

Instead of a single product, a product line gives rise to dozens, thousands, or billions of potential products that we might want to analyze. Analyzing every product in isolation, using traditional analysis methods in a brute-force fashion, will not scale: For a product line with \({\mathsf{{n}}}\) optional features, there are up to \({\mathtt{{2}}}^{{\mathsf{{n}}}}\) products for distinct feature combinations. Already with 33 optional and independent features, we could create a product line with more products than persons on the planet; and from a product line with 265 optional and independent features, we could derive more products than there are estimated atoms in the universe. Industrial product lines often have even more features; for example, according to Refstrup (2009), HP’s product line Owen of printer firmware has roughly 2,000 features and the Linux kernel currently has over 10,000 features (Tartler et al. 2011). These numbers clearly rule out any brute-force strategy for product-line analysis.

Traditionally, developers get away with analyzing only a small set of products. For example, Refstrup (2009) reports that even though HP’s printer firmware has 2,000 features, HP’s developers derive and test firmware only for about 100 current printer models. Only when they produce a new printer do they test its (new) feature combination; when a printer is no longer supported, the corresponding firmware is no longer derived and tested. However, the strategy of checking a few selected products works only in cases where few products of a product line are actually needed and application engineering is performed by the original developers.

In contrast, our view of feature-oriented product lines includes scenarios in which users can freely configure features and automatically generate the corresponding product. For example, instead of choosing from a small set of preconfigured products, users of the Linux kernel can freely select which of the 10,000 features they want to include in their kernel. In such a scenario, the Linux developers cannot predict which products need testing; users may select any product and expect it to work properly.

In this chapter, we discuss a broad range of strategies and methods to analyze a whole product line (or to attain a reasonable coverage) instead of analyzing all derivable products individually. To this end, we explicitly consider variability in the analysis, hence the name variability-aware analysis (also sometimes named product line-aware analysis, family-based analysis, feature-aware analysis, whole-product-line analysis, or 150-% analysis). We introduce mechanisms that are specific to product-line variability and illustrate how to extend existing mechanisms such as type checking, model checking, and static analysis to cover entire product lines.

We start with the basic analyses of feature models (Sect. 10.1) and a simple analysis of the mapping between features and implementation artifacts (Sect. 10.2). Lastly, we discuss examples of how to lift existing analyses to entire product lines (Sect. 10.3).

10.1 Analysis of Feature Models

Analyzing feature models is a good starting point, because it is well understood and comparatively simple. These analyses not only support reasoning about feature models themselves, but also provide a foundation for analyzing the code of a software product line later. All the analyses discussed in this section are concerned with the feature model (in domain analysis) and feature selections (in requirements analysis), as illustrated in Fig. 10.1.
Fig. 10.1

Analysis of feature models in domain and application engineering

Among many others, feature model analyses can provide answers to the following questions:
  • Is a given feature selection valid for a given feature model?

  • Is the given feature model consistent (that is, is there at least one valid feature selection)?

  • Does a given assumption about the feature model hold (testing)?

  • Which features are mandatory?

  • Which features can never be selected (dead features)?

  • How many valid feature selections does a given feature model have?

  • Are two feature models equivalent (that is, do they define the same feature selections)?

  • Given a partial feature selection, what other features must be included (or excluded)?

  • Given a partial feature selection, what features should be selected to produce the product with lowest cost, lowest size, best security, or highest performance?

All of these questions can be answered with analyses of feature models and feature selections, and all of them can be automated with tool support. Each question can be encoded as a formula in a suitable formalism, and automated solvers can answer it more or less efficiently. In this chapter, we discuss encodings as a Boolean satisfiability problem that can be answered with SAT solvers, but other encodings and tools are possible.
Fig. 10.2

Simplified feature model of our graph example

A word on notation: We denote the set of all features of a product line with \({\mathtt{{F}}}\) and the set of all possible feature selections by \({\mathtt{{2}}}^{{\mathtt{{F}}}}\). We denote the propositional representation of a feature model as \(\phi \). We write \(\models {\mathtt{{p}}}\) to denote that formula \({\mathtt{{p}}}\) is a tautology. We write \(\text {{SAT}}({\mathtt{{p}}})\) to determine whether formula \({\mathtt{{p}}}\) is satisfiable (has at least one model). Both notions are translatable: \(\ \ \models {\mathtt{{p}}} \ \ \equiv \ \ \lnot \text {{SAT}}(\lnot {\mathtt{{p}}})\).

10.1.1 Valid Feature Selection

A question that we can answer easily is whether a given feature selection is valid for a given feature model. To this end, we translate the feature model into a propositional formula \(\phi \) as described in Sect. 2.3.3. A feature selection is valid if and only if the interpretation that assigns \({\mathtt{{true}}}\) (\(\top \) for short) to every selected feature and \({\mathtt{{false}}}\) (\(\bot \) for short) to every other feature is a model of the formula. In other words, we substitute every variable corresponding to a selected feature by \({\mathtt{{true}}}\) and every other variable by \({\mathtt{{false}}}\); the selection is valid if \(\phi \) evaluates to \({\mathtt{{true}}}\). The operation is computationally very cheap (linear in the size of \(\phi \)).
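To make the substitution concrete, the following Python sketch hand-codes a reduced version of \(\phi \) (only the EdgeType subtree of Fig. 10.2, for brevity); all names are ours for illustration, not part of any tool.

```python
# Reduced, hand-coded version of phi from Fig. 10.2 (EdgeType subtree only).
def phi(a):
    return (a["GraphLibrary"] and a["EdgeType"]           # root and mandatory child
            and (a["Directed"] or a["Undirected"])        # alternative: at least one...
            and not (a["Directed"] and a["Undirected"]))  # ...but not both

FEATURES = ["GraphLibrary", "EdgeType", "Directed", "Undirected"]

def is_valid(selection):
    # Substitute true for every selected feature, false for all others,
    # and check whether the resulting interpretation is a model of phi.
    return phi({f: f in selection for f in FEATURES})

print(is_valid({"GraphLibrary", "EdgeType", "Directed"}))                # True
print(is_valid({"GraphLibrary", "EdgeType", "Directed", "Undirected"}))  # False
```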

Example 10.1

In Fig. 10.2, we show a subset of the feature model of our graph example (originally Fig. 2.6, p. 33) and its corresponding propositional formula \(\phi \).

To check whether \(\{\mathsf{{GraphLibrary}},\mathsf{{EdgeType}},\mathsf{{Directed}}\}\) is a valid selection, we substitute all variables of \(\phi \) with the corresponding assignment:
$$\begin{aligned} \phi =&\mathtt{{\top }}\wedge {\top } \wedge ({\top }\vee {\bot } )\wedge \lnot ({\top }\wedge {\bot } )\\&\wedge ( ({\bot }\vee {\bot }\vee {\bot } ) \Leftrightarrow {\bot } ) \wedge ({\bot } \Rightarrow {\bot })\\&\wedge ( ({\bot }\vee {\bot } )\Leftrightarrow {\bot })\wedge \lnot ({\bot }\wedge {\bot } ) \wedge ({\bot } \Rightarrow ({\bot } \wedge {\bot }))\\ =&\top \end{aligned}$$
This result confirms that the selection is valid.
At the same time, \(\{\mathsf{{GraphLibrary}},\mathsf{{EdgeType}},\mathsf{{Directed}},\mathsf{{Undirected}}\}\) is not a valid selection:
$$\begin{aligned} \phi =&{\top }\wedge {\top } \wedge ({\top }\vee {\top } )\wedge \lnot ({\top }\wedge {\top } )\\&\wedge ( ({\bot }\vee {\bot }\vee {\bot } ) \Leftrightarrow {\bot } ) \wedge ({\bot } \Rightarrow {\bot })\\&\wedge ( ({\bot }\vee {\bot } )\Leftrightarrow {\bot })\wedge \lnot ({\bot }\wedge {\bot } ) \wedge ({\bot } \Rightarrow ({\bot } \wedge {\bot }))\\ =&\bot \end{aligned}$$
\(\square \)
A typical application of this analysis is during the requirements-analysis phase of application engineering. When a user selects features, the tool can give immediate feedback on whether the current selection is valid. For example, in Fig. 10.3, we show a screenshot of the configuration dialog of FeatureIDE. Next to the root feature, FeatureIDE indicates whether the current selection is valid. In this example, the current selection is invalid because the user has not yet selected feature Directed or Undirected. (Both Directed and Undirected are false in this evaluation).
Fig. 10.3

Feature-selection dialog in FeatureIDE with an incomplete feature selection (simple variant left, advanced variant right)

10.1.2 Consistent Feature Models

The next question we attempt to answer is: Is there any valid feature selection for a given feature model? We say a feature model is consistent if it has at least one valid feature selection; otherwise, we say the feature model is inconsistent. In a model with many cross-tree constraints, this question is not trivial to answer; it is easy to accidentally add contradictory constraints.

Naively, we could automatically check all possible feature selections (\({\mathtt{{s}}}\in {\mathtt{{2}}}^{{\mathtt{{F}}}}\), exponentially many) until we find a valid one. In practice, we encode the question as a Boolean satisfiability problem and use a SAT solver to compute the answer. To ask whether a feature model is consistent, we simply determine whether its Boolean representation \(\phi \) is satisfiable (\(\text {{SAT}}(\phi )\)). Modern SAT solvers are mature tools, which can solve such problems with great efficiency. If desired, most SAT solvers can also output a valid feature selection (that is, a model of the formula).

Determining whether a propositional formula is satisfiable is an NP-complete problem, meaning that no efficient (polynomial-time) algorithm is known. Consequently, determining whether a feature model is consistent is NP-complete as well. We can verify a solution quickly (see Sect. 10.1.1), but there is (most likely) no polynomial algorithm to check whether a solution exists; in the worst case, execution time might be exponential in the number of features. Despite this exponential worst-case complexity, researchers have empirically shown that modern SAT solvers can solve practical Boolean satisfiability problems quickly in the context of feature-model analysis, even for very large feature models (Mendonça et al. 2009). For real-world feature models, modern SAT solvers, such as SAT4J (Berre and Parrain 2010), typically determine satisfiability within milliseconds.
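For illustration, the following sketch replaces the SAT solver by brute-force enumeration of all \(2^n\) assignments, which is fine for our four-feature example (phi and FEATURES come from the sketch in Sect. 10.1.1; a practical tool would call a solver such as SAT4J instead).

```python
from itertools import product

def satisfiable(formula, features):
    # Brute-force stand-in for a SAT solver: try all 2^n assignments.
    return any(formula(dict(zip(features, bits)))
               for bits in product([False, True], repeat=len(features)))

print(satisfiable(phi, FEATURES))        # True: the feature model is consistent

phi_prime = lambda a: phi(a) and a["Directed"] and a["Undirected"]
print(satisfiable(phi_prime, FEATURES))  # False: phi' is inconsistent
```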

Example 10.2

The feature model depicted in Fig. 10.2 is consistent; in Example 10.1, we showed that at least one valid feature selection exists. In contrast, if we extend the feature model to \(\phi '=\phi \wedge \) (Directed\(\wedge \)Undirected), then \(\phi '\) is inconsistent and has no valid feature selection at all. The reason is that \(\phi \) specifies the features Directed and Undirected as mutually exclusive, whereas \(\phi '\) additionally requires that both be selected together. \(\square \)

10.1.3 Testing Facts about Feature Models

A domain engineer typically knows certain facts that must hold in the domain and that should also hold in the feature model. For example, in the graph library of Fig. 2.6 (p. 33), we know that feature Cycle requires feature Directed. This fact must be embodied in the feature model; the feature model must not allow any feature selection to violate that dependency. In the graph example, the constraint is obviously fulfilled; it is even stated directly as a cross-tree constraint in the feature model. However, not all constraints hold so obviously, especially in large models.

To test a feature model, we check an assumption, encoded as propositional formula \(\psi \) (such as \({\mathsf{{Cycle}}} \Rightarrow {\mathsf{{Directed}}}\)), in a feature model \(\phi \). The idea is simple: We check that the feature model implies the assumption (\(\models \phi \Rightarrow \psi \)). Phrased differently, we check whether \(\phi \wedge \lnot \psi \) is satisfiable; if it is, the feature model is incorrect as there exists a valid feature selection in \(\phi \) that does not satisfy \(\psi \).

In a practical setting, a domain expert can test a feature model by creating a list of assumptions that the feature model must satisfy. As in all testing, there is a certain redundancy in that we need to specify knowledge about features twice (in the feature model and in the assumptions) and then check that both align. As usual, testing can only show the presence of errors and not their absence.
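Continuing the sketch from above (reusing satisfiable, phi, and FEATURES), testing an assumption \(\psi \) amounts to a single unsatisfiability check:

```python
def assumption_holds(phi, psi, features):
    # phi implies psi  iff  phi AND NOT psi is unsatisfiable.
    return not satisfiable(lambda a: phi(a) and not psi(a), features)

# Test that the reduced model never allows both edge types at once:
mutex = lambda a: not (a["Directed"] and a["Undirected"])
print(assumption_holds(phi, mutex, FEATURES))  # True: the test passes
```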

Example 10.3

Here is a list of facts that could be used to test the feature model for the graph example:
$$\begin{aligned} {\mathsf{{Kruskal}}}\Rightarrow {\mathsf{{Weighted}}}\\ {\mathsf{{Prim}}}\Rightarrow {\mathsf{{Weighted}}}\\ \lnot ({\mathsf{{Prim}}}\wedge {\mathsf{{Kruskal}}})\\ \cdots \end{aligned}$$
The first two facts state that an MST algorithm requires Weighted graphs. The third states that the algorithms of Prim and Kruskal will never both be present in a graph product, and so on. In our example, all tests pass. \(\square \)

10.1.4 Dead Features and Mandatory Features

Next, we might want to know if a feature is dead or mandatory. A dead feature is never used in any product. In contrast, a mandatory feature is always used in every product.

Given \(\phi \) of a feature model, there is at least one valid feature selection with feature f, iff \(\phi \wedge {\mathtt{{f}}}\) is satisfiable, and there is at least one valid feature selection without feature f, iff \(\phi \wedge \lnot {\mathtt{{f}}}\) is satisfiable. A feature is dead if there is no valid feature selection with it (\(\lnot \text {{SAT}}(\phi \wedge {\mathtt{{f}}})\)) and mandatory if there is none without it (\(\lnot \text {{SAT}}(\phi \wedge \lnot {\mathtt{{f}}})\)). To detect all dead (or mandatory) features, we simply iterate over all features.
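In our running sketch (reusing satisfiable, phi, and FEATURES from above), this iteration is two list comprehensions with one solver call per feature:

```python
def dead_features(phi, features):
    # f is dead iff phi AND f is unsatisfiable.
    return [f for f in features
            if not satisfiable(lambda a, f=f: phi(a) and a[f], features)]

def mandatory_features(phi, features):
    # f is mandatory iff phi AND NOT f is unsatisfiable.
    return [f for f in features
            if not satisfiable(lambda a, f=f: phi(a) and not a[f], features)]

print(dead_features(phi, FEATURES))       # []
print(mandatory_features(phi, FEATURES))  # ['GraphLibrary', 'EdgeType']
```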

Example 10.4

The feature model depicted in Fig. 10.2 has no dead features; GraphLibrary and EdgeType are mandatory. If we also make feature Undirected mandatory (\(\phi ''=\phi \wedge \mathsf{{Undirected}}\)), the features Directed and Cycle become dead. In an inconsistent feature model, such as \(\phi '\) from Example 10.2, all features are simultaneously dead and mandatory (which is why we should check consistency first). \(\square \)

A typical application of detecting dead features is to report warnings in the feature model editor. Likewise, an editor may warn about a false optional feature: a feature that is modeled as optional (or as part of a choice or alternative group) but is actually included in all valid feature selections. Dead and false optional features can be considered code smells of feature models that indicate possible defects (see Chap. 8).

10.1.5 Constraint Propagation

As a user chooses features during feature selection, some features may no longer be selectable (selecting them would render the selection invalid) and others become required. A good editor can provide tool support to infer feature selections by automatically disabling or hiding unavailable features and selecting implied features automatically. This mechanism is called constraint propagation.

So far, we have specified feature selections as a set of features and assumed that all features not within the set are deselected. In contrast, in a partial feature selection, we have not yet made a decision about all features, especially since product configuration and derivation is often an incremental process. Therefore, there are three possibilities: a feature is selected, a feature is deselected, or no decision has been made yet. As a consequence, we specify a partial feature selection with two sets: the set of selected features (\({\mathtt{{S}}}\subseteq {\mathtt{{F}}}\)) and the set of deselected features (\({\mathtt{{D}}}\subseteq {\mathtt{{F}}}\), with \({\mathtt{{S}}}\cap {\mathtt{{D}}} = \emptyset \)).

Determining which features must be selected or deactivated given a partial feature selection is similar to detecting dead or mandatory features. We encode a partial feature selection with the sets \({\mathtt{{S}}}\) and \({\mathtt{{D}}}\) as predicate \({\mathtt{{pfs(S,D)}}}\):
$$\begin{aligned} {\mathtt{{pfs(S,D)}}}=\bigwedge _{{\mathtt{{s}}}\in {\mathtt{{S}}}} {\mathtt{{s}}} \ \wedge \ \bigwedge _{{\mathtt{{d}}}\in {\mathtt{{D}}}} \lnot {\mathtt{{d}}} \end{aligned}$$
A partial feature selection is valid, iff \(\phi \wedge {\mathtt{{pfs(S,D)}}}\) is satisfiable. We say a feature \({\mathtt{{f}}}\) is deactivated or no longer selectable, iff \(\phi \wedge {\mathtt{{pfs(S,D)}}} \wedge {\mathtt{{f}}}\) is not satisfiable. Conversely, we say a feature is activated or must be selected, iff \(\phi \wedge {\mathtt{{pfs(S,D)}}} \wedge \lnot {\mathtt{{f}}}\) is not satisfiable.
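The following sketch (reusing satisfiable, phi, and FEATURES from Sect. 10.1.1) encodes \({\mathtt{{pfs(S,D)}}}\) as a predicate and computes, for the undecided features, which must be selected and which are no longer selectable; real tools minimize the number of solver calls, as discussed at the end of this section.

```python
def pfs(S, D):
    # Partial selection: selected features must be true, deselected ones
    # false; all other features remain unconstrained.
    return lambda a: all(a[s] for s in S) and not any(a[d] for d in D)

def propagate(phi, S, D, features):
    constrained = lambda a: phi(a) and pfs(S, D)(a)
    must_select = [f for f in features if f not in S and
                   not satisfiable(lambda a, f=f: constrained(a) and not a[f],
                                   features)]
    must_deselect = [f for f in features if f not in D and
                     not satisfiable(lambda a, f=f: constrained(a) and a[f],
                                     features)]
    return must_select, must_deselect

# Selecting Directed forces the mandatory features and rules out Undirected:
print(propagate(phi, {"Directed"}, set(), FEATURES))
# (['GraphLibrary', 'EdgeType'], ['Undirected'])
```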

Example 10.5

In Fig. 10.3 (p. 247), we showed a screenshot from FeatureIDE, where we selected feature Cycle. Due to the implication \({\mathtt{{Cycle}}} \Rightarrow {\mathtt{{Directed}}}\), the selection is propagated automatically to feature Directed. Technically, we see that for a partial selection \({\mathtt{{S}}}=\{{\mathtt{{Cycle}}}\}, {\mathtt{{D}}}=\{\}\), the corresponding formula is a contradiction:
$$\begin{aligned}&\phi \wedge {\mathtt{{pfs(S,D)}}}\wedge \lnot {\mathsf{{Directed}}} \\&= \ldots \wedge ({\mathsf{{Cycle}}} \Rightarrow {\mathsf{{Directed}}}) \wedge {\mathsf{{Cycle}}} \wedge \lnot {\mathsf{{Directed}}} \end{aligned}$$
Since features Directed and Undirected are mutually exclusive, the selection is further propagated to deactivate feature Undirected. We derive this again, by determining that the corresponding formula is a contradiction:
$$\begin{aligned}&\phi \wedge {\mathtt{{pfs(S,D)}}} \wedge {\mathsf{{Undirected}}} \\&= \ldots \wedge \lnot ({\mathsf{{Directed}}}\wedge {\mathsf{{Undirected}}} ) \wedge ({\mathsf{{Cycle}}} \Rightarrow {\mathsf{{Directed}}}) \wedge \\&\quad \quad {\mathsf{{Cycle}}} \wedge {\mathsf{{Undirected}}} \end{aligned}$$
In FeatureIDE, disabled and propagated selections are updated instantaneously during interactive editing. \(\square \)

How to communicate the three possible states of a feature (selected, deselected, or still open) is a tricky user-interface problem; one possibility is to use different symbols instead of normal check boxes, as shown in Fig. 10.3 (right).

For an efficient mechanism to propagate constraints for a set of features with a minimal number of SAT-solver calls, see Janota’s algorithm (Janota 2010).

10.1.6 Number of Valid Feature Selections

A question that managers ask is: How many valid feature selections does a feature model allow? Phrased differently: How many distinct products are part of this product line?

From a feature diagram without cross-tree constraints, a simple recursive algorithm calculates the number:
$$\begin{aligned} \begin{array}{ll} \textit{count}\ {\mathsf{{root}}}({\mathtt{{c}}}) &= \textit{count}({\mathtt{{c}}})\\ \textit{count}\ {\mathsf{{mandatory}}}({\mathtt{{c}}}) &= \textit{count}({\mathtt{{c}}})\\ \textit{count}\ {\mathsf{{optional}}}({\mathtt{{c}}}) &= \textit{count}({\mathtt{{c}}}) + {\mathtt{{1}}}\\ \textit{count}\ {\mathsf{{and}}}({\mathtt{{c}}}_{{\mathtt{{1}}}}, \ldots ,{\mathtt{{c}}}_{{\mathtt{{n}}}}) &= \textit{count}({\mathtt{{c}}}_{{\mathtt{{1}}}}) * \ldots * \textit{count}({\mathtt{{c}}}_{{\mathtt{{n}}}})\\ \textit{count}\ {\mathsf{{alternative}}}({\mathtt{{c}}}_{{\mathtt{{1}}}}, \ldots , {\mathtt{{c}}}_{{\mathtt{{n}}}}) &= \textit{count}({\mathtt{{c}}}_{{\mathtt{{1}}}}) + \ldots + \textit{count}({\mathtt{{c}}}_{{\mathtt{{n}}}})\\ \textit{count}\ {\mathsf{{or}}}({\mathtt{{c}}}_{{\mathtt{{1}}}}, \ldots , {\mathtt{{c}}}_{{\mathtt{{n}}}}) &= (\textit{count}({\mathtt{{c}}}_{{\mathtt{{1}}}})+ {\mathtt{{1}}}) * \ldots * (\textit{count}({\mathtt{{c}}}_{{\mathtt{{n}}}}) + {\mathtt{{1}}}) - {\mathtt{{1}}}\\ \textit{count}\ {\mathsf{{leaf}}} &= {\mathtt{{1}}} \end{array} \end{aligned}$$
In a nutshell, function count is a recursive function that traverses the tree structure of a feature diagram from the root to the leaves. Depending on the type of feature, count is defined differently. This is implemented by pattern matching: for example, count optional\(({\mathsf{{c}}})\) adds one to the number of valid feature selections and proceeds recursively with the subfeatures of the feature in question; count alternative\(({\mathtt{{c}}}_{{\mathtt{{1}}}}, \ldots , {\mathtt{{c}}}_{{\mathtt{{n}}}})\) sums the valid feature selections of the alternative subfeatures of the feature in question. The recursion terminates when the features at the leaves of the tree are reached (count leaf).
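The equations translate directly into code. The following sketch uses a hypothetical tuple encoding of feature-diagram nodes and reproduces the result of Example 10.6 below:

```python
def count(node):
    kind, children = node[0], node[1:]
    if kind == "leaf":
        return 1
    if kind in ("root", "mandatory"):
        return count(children[0])
    if kind == "optional":
        return count(children[0]) + 1
    if kind == "alternative":
        return sum(count(c) for c in children)
    result = 1                    # remaining cases: "and" and "or" groups
    for c in children:
        result *= count(c) + (1 if kind == "or" else 0)
    return result - (1 if kind == "or" else 0)

leaf = ("leaf",)
graph = ("root", ("and",
    ("mandatory", ("alternative", leaf, leaf)),   # EdgeType: Directed | Undirected
    ("optional", leaf),                           # Weighted
    ("optional", ("or", leaf, leaf,               # Algorithm: Cycle, ShortestPath,
                  ("alternative", leaf, leaf))))) #   MST: Prim | Kruskal
print(count(graph))  # 48
```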

Example 10.6

Ignoring the two cross-tree constraints, the simplified feature model of our graph example has 48 valid feature selections, computed as follows:
$$\begin{aligned} \begin{array}{ll} \textit{count}({\mathtt{{f}}}) &= {\mathtt{{1}}}\;\text {//for all leaf nodes} \\ \textit{count}(\mathsf{{EdgeType}}) &= \textit{count}(\mathsf{{Directed}}) + \textit{count}(\mathsf{{Undirected}}) ={\mathtt{{1}}}+{\mathtt{{1}}}={\mathtt{{2}}}\\ \textit{count}(\mathsf{{MST}}) &= \textit{count}(\mathsf{{Prim}})+ \textit{count}(\mathsf{{Kruskal}}) = {\mathtt{{1}}}+{\mathtt{{1}}}={\mathtt{{2}}}\\ \textit{count}(\mathsf{{Algorithm}}) &= (\textit{count}(\mathsf{{Cycle}})+{\mathtt{{1}}}) * (\textit{count}(\mathsf{{ShortestPath}})+{\mathtt{{1}}}) * (\textit{count}(\mathsf{{MST}}) +{\mathtt{{1}}})-{\mathtt{{1}}} \\ &=({\mathtt{{1}}}+{\mathtt{{1}}})*({\mathtt{{1}}}+{\mathtt{{1}}})*({\mathtt{{2}}}+{\mathtt{{1}}})-{\mathtt{{1}}}= {\mathtt{{11}}}\\ \textit{count}(\mathsf{{GraphLibrary}}) &= \textit{count}({\mathsf{{Mandatory}}}({\mathsf{{EdgeType}}})) * \textit{count}({\mathsf{{Optional}}}({\mathsf{{Weighted}}})) * \textit{count}({\mathsf{{Optional}}}({\mathsf{{Algorithm}}})) \\ &= {\mathtt{{2}}} * ({\mathtt{{1}}}+{\mathtt{{1}}}) * ({\mathtt{{11}}}+{\mathtt{{1}}}) = {\mathtt{{48}}} \end{array} \end{aligned}$$
\(\square \)

For feature models with cross-tree constraints, the number is not easy to determine. A single cross-tree constraint can already eliminate a huge number of valid feature selections. For small feature models, we can simply count the solutions (for example, in a brute-force fashion, or with a SAT solver or binary decision diagrams). Fernandez-Amoros et al. (2009) have investigated a more sophisticated algorithm that can deal with certain kinds of cross-tree constraints.

Overall, this metric is of questionable utility. Due to the combinatorial explosion of feature selections in most product lines, the result is typically a huge number. Unless you like large numbers, knowing that one product line yields 15 trillion valid feature selections whereas another yields 3 quintillion provides little insight. Most tools only provide approximations, such as an upper bound ignoring cross-tree constraints or a small lower bound (“more than 1,000 valid feature selections”), which are cheap to compute and sufficient for many practical tasks.

10.1.7 Comparing Feature Models

Given two feature models \(\phi _{{\mathtt{{1}}}}\) and \(\phi _{{\mathtt{{2}}}}\), what is their relationship? Does \(\phi _{{\mathtt{{1}}}}\) define the same set of feature selections (products) as \(\phi _{{\mathtt{{2}}}}\)? Is \(\phi _{{\mathtt{{1}}}}\) a generalization of \(\phi _{{\mathtt{{2}}}}\) (meaning that the set of products of \(\phi _{{\mathtt{{1}}}}\) includes those of \(\phi _{{\mathtt{{2}}}}\))? Or, equivalently, is \(\phi _{{\mathtt{{2}}}}\) a specialization of \(\phi _{{\mathtt{{1}}}}\)?

These questions about the relationship between two feature models arose early in feature modeling. When a designer edits a feature model, she wants to know if her changes have altered the set of existing valid products. Enlarging the set of products may be acceptable, but eliminating products (particularly those that have been fielded) is often not. However, except for trivial cases such as adding or removing a single feature, the relationship is not always obvious. Even simple feature models are of sufficient complexity to make analyzing their relationships by manual inspection difficult.
Fig. 10.4

Refactorings, specializations, and generalizations of feature models, between an old feature model (feature selections described by the solid circle) and a new feature model (feature selections described by the shaded area)

Changing (improving) the structure of a feature model while preserving all feature selections it describes is related to refactoring, a topic we considered in more depth in Chap. 8. A feature-model refactoring is an edit to a feature model that does not alter the set of legal feature selections (equivalent to a variability-preserving refactoring in Sect. 8.2.2).

In Fig. 10.4, we describe four possible relationships that we want to identify. A feature-model refactoring preserves exactly the set of valid feature selections, whereas a specialization removes feature selections without adding new ones and a generalization adds valid feature selections without removing any. All differences that both add and remove valid feature selections are classified as arbitrary edits.

One approach to classify the relationship between two feature models is to describe their difference in terms of a set of known transformations (Alves et al. 2006). If we can express a feature-model difference in terms of a chain of well-known transformations, we can deduce the nature of the difference based on the properties of the transformations, for example, whether the difference represents a refactoring. In a feature-model editor, it would be possible to provide editing operations that are guaranteed to be refactorings.

A more general solution supporting arbitrary edits (without the limitations of a structured editor, and more flexible with regard to cross-tree constraints) is again based on Boolean satisfiability. Two feature models \(\phi _{{\mathtt{{1}}}}\) and \(\phi _{{\mathtt{{2}}}}\) are equivalent when their propositional formulas are equivalent, that is, \(\models \phi _{{\mathtt{{1}}}} \Leftrightarrow \phi _{{\mathtt{{2}}}}\), or operationalized for a SAT solver, \(\lnot {\mathtt{{SAT}}}\bigl (\lnot (\phi _{{\mathtt{{1}}}} \Leftrightarrow \phi _{{\mathtt{{2}}}})\bigr )\). Similarly, \(\phi _{{\mathtt{{1}}}}\) is a specialization of \(\phi _{{\mathtt{{2}}}}\) and \(\phi _{{\mathtt{{2}}}}\) is a generalization of \(\phi _{{\mathtt{{1}}}}\), iff \(\models \phi _{{\mathtt{{1}}}}\Rightarrow \phi _{{\mathtt{{2}}}}\) (every valid feature selection of \(\phi _{{\mathtt{{1}}}}\) is also valid in \(\phi _{{\mathtt{{2}}}}\)). Thüm et al. (2009) discuss in more detail the different kinds of relationships and how to efficiently encode them as Boolean satisfiability problems, even for very large feature models.
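Operationally, the classification of Fig. 10.4 boils down to two implication checks. Here is a sketch, reusing the satisfiable helper from Sect. 10.1.2 (a tautology is a formula whose negation is unsatisfiable):

```python
def tautology(p, features):
    return not satisfiable(lambda a: not p(a), features)

def classify_edit(phi_old, phi_new, features):
    new_implies_old = tautology(lambda a: not phi_new(a) or phi_old(a), features)
    old_implies_new = tautology(lambda a: not phi_old(a) or phi_new(a), features)
    if new_implies_old and old_implies_new:
        return "refactoring"      # identical sets of valid feature selections
    if new_implies_old:
        return "specialization"   # the edit only removed feature selections
    if old_implies_new:
        return "generalization"   # the edit only added feature selections
    return "arbitrary edit"
```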

Example 10.7

In Fig. 10.5, we show two feature models \(\phi _{{\mathtt{{left}}}}\) and \(\phi _{{\mathtt{{right}}}}\) (\(\phi _{{\mathtt{{left}}}}\) is an excerpt of the graph example). The propositional formulas for both models are:
$$\begin{aligned} \phi _{{\mathtt{{left}}}}&= \mathsf{{Algorithm}} \wedge (( \mathsf{{Cycle}} \vee \mathsf{{ShortestPath}} \vee \mathsf{{MST}})\Leftrightarrow \mathsf{{Algorithm}})\\ \phi _{{\mathtt{{right}}}}&= \mathsf{{Algorithm}} \wedge (\mathsf{{Cycle}} \Rightarrow \mathsf{{Algorithm}}) \wedge (\mathsf{{ShortestPath}} \Rightarrow \mathsf{{Algorithm}})\\&\quad \wedge (\mathsf{{MST}} \Rightarrow \mathsf{{Algorithm}}) \wedge ( \mathsf{{Cycle}} \vee \mathsf{{ShortestPath}} \vee \mathsf{{MST}}) \end{aligned}$$
A SAT solver can prove \(\phi _{{\mathtt{{right}}}} \Leftrightarrow \phi _{{\mathtt{{left}}}}\). \(\square \)

A typical use case for the comparison of feature models is during feature-model editing. For example, after each edit, FeatureIDE displays whether all changes since the feature model was last saved form a refactoring, a specialization, or a generalization. It also lists examples of added or removed feature selections. Furthermore, the analysis can be used to compare and possibly merge two independently changed feature models (Thüm et al. 2009).

10.1.8 Other Feature-Model Analyses

Other analyses detect redundancies, explain feature selections, optimize selections, and calculate various metrics. Other variability models (with cardinalities, attributes, and non-Boolean features) have been explored along with different kinds of solvers, from SAT solvers as in this chapter, to binary decision diagrams, to solvers for constraint-satisfaction problems. For instance, an interesting class of problems deals with feature attributes that describe nonfunctional properties, such as cost, memory consumption, performance impact, footprint, and security; optimization algorithms can then find the best solution (for some target function or some nonfunctional constraints) given a partial feature selection (Benavides et al. 2005; Sincero et al. 2010; Siegmund et al. 2011). For an introduction and overview of the state of the art on feature-model analysis and solvers, see the survey by Benavides et al. (2010).
Fig. 10.5

Two equivalent feature models

10.2 Analysis of Feature-to-Code Mappings

In the previous section, we reviewed analyses that focus on the problem space, that is, on the feature model and feature selections. Now, we investigate the mapping from features to code, that is, the mapping from problem space to solution space, as shown in Fig. 10.6. By leveraging the analyses of the previous section, we show how to detect unused modules in feature-oriented programming and dead code in preprocessor-based implementations, and we elaborate on issues regarding build systems that can complicate analyses (see Sect. 5.2, p. 105). We will not yet look at structures in the source code such as methods or statements (we will do that in Sect. 10.3); instead, we consider code fragments as arbitrary text sequences. Specifically, we explore the following questions:
  • Which code fragments are never included in any product?

  • Which code fragments are included in all products?

  • Which features have no influence on the product portfolio?

10.2.1 Dead Code

Our first goal is to find dead code—fragments that are never included in any valid feature selection. Dead code can be an indicator for an incorrect mapping or an over-constrained feature model. Look at Fig. 10.7: Line 5 is included only if the features A and B are both selected, but the feature model specifies both features as mutually exclusive. That is, Line 5 can never be included in any product of the product line. Either the developers were unaware that features A and B are defined as mutually exclusive, or the feature model is too strict and both features should actually be optional.
Fig. 10.6

Analysis of feature-to-code mappings incorporates knowledge about the mapping and the feature model, but not yet about structures in the source code

Fig. 10.7

Simple example of a dead code fragment in Line 5

Detecting dead code in this sense is different from traditional detection of unreachable code. Compilers detect unreachable code by statically analyzing the control flow of a single product. In contrast, we find code that is dead with regard to feature selections in a product line. Performing control-flow analysis on a whole product line requires more sophisticated techniques, outlined in Sect. 10.3.7.

To identify dead code, we need to reason about the mapping from features to code. A code fragment can be a plug-in of a framework (Sect. 4.3, p. 79), a file that is conditionally excluded by a build system (Sect. 5.2, p. 105), a block of code guarded by conditional-compilation directives as in Fig. 10.7 (Sect. 5.3, p. 110), a feature module (Sect. 6.1, p. 130), an aspect (Sect. 6.2, p. 141), or some other variable code. We can even regard a parameter in a build script (for example, calling the compiler at different optimization levels) or flags generated in a configuration file (Sect. 4.1, p. 66) as analyzable code fragments.

Formally, we describe the mapping as a function from code fragments (from the set \({\mathtt{{C}}}\) of all code fragments) to sets of products represented by feature selections (\({\mathtt{{pc: C}}}\rightarrow {\mathtt{{2}}}^{{\mathtt{{2}}}^{{{\mathtt{{F}}}}}}\)). That is, we map each code fragment to the set of products in which it is included. As a compact representation of large sets of feature selections and corresponding products, we use a presence condition—a propositional formula representing a set of feature selections. This is in line with the propositional formula representing the set of valid feature selections in a feature model (see Sect. 2.3.3, p. 31). A presence condition defines the feature selections in which a code fragment is present. For example, in Fig. 10.7, the code fragment in Line 3 has the presence condition \({\mathtt{{A}}}\) and is included in all products with feature A; Line 5 has the presence condition \({\mathtt{{A}}}\wedge {\mathtt{{B}}}\) and is included in all products with both features A and B; and Line 8 has presence condition \(\lnot {\mathtt{{A}}}\) and is included in all products that do not include feature A. In this section, we use function \({\mathtt{{pc(c)}}}\) to denote the presence condition of a code fragment \({\mathtt{{c}}}\) in the form of a propositional formula.

A code fragment is dead if it is never included in any product of a product line. Since the representation of the feature model as propositional formula \(\phi \) represents all valid feature selections (products), a code fragment \({\mathtt{{c}}}\) is dead iff the conjunction of presence condition and feature model is not satisfiable: \(\lnot {\mathtt{{SAT}}}(\phi \wedge {\mathtt{{pc(c))}}}\). That is, there is no feature selection that is both valid according to a feature model and that fulfills the presence condition.

Example 10.8.

Returning to our example of Fig. 10.7, we have the presence conditions \(\top \), \({\mathtt{{A}}}\), \({\mathtt{{A}}}\wedge {\mathtt{{B}}}\), and \(\lnot {\mathtt{{A}}}\) for Lines 1, 3, 5, and 8, respectively. The predicate of the feature model is \(\phi ={\mathtt{{root}}}\wedge ({\mathtt{{A}}}\vee {\mathtt{{B}}})\wedge \lnot ({\mathtt{{A}}}\wedge {\mathtt{{B}}})\). Line 5 is dead because \(\phi \wedge {\mathtt{{A}}}\wedge {\mathtt{{B}}}\) is unsatisfiable. \(\square \)

In a similar manner, we can detect code fragments that are included in all products. Such code fragments are mandatory. A code fragment \({\mathtt{{c}}}\) is mandatory iff \(\models \phi \Rightarrow {\mathtt{{pc(c)}}}\), that is, \(\lnot {\mathtt{{SAT}}}(\phi \wedge \lnot {\mathtt{{pc(c))}}}\).
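Both checks combine into a small classifier. The following sketch reuses the satisfiable helper from Sect. 10.1.2 and hand-codes the feature model and one presence condition of Fig. 10.7:

```python
def classify_fragment(phi, pc, features):
    if not satisfiable(lambda a: phi(a) and pc(a), features):
        return "dead"       # never included in any valid product
    if not satisfiable(lambda a: phi(a) and not pc(a), features):
        return "mandatory"  # included in every valid product
    return "variable"

fm = lambda a: a["root"] and (a["A"] or a["B"]) and not (a["A"] and a["B"])
line5 = lambda a: a["A"] and a["B"]   # presence condition of Line 5
print(classify_fragment(fm, line5, ["root", "A", "B"]))  # dead
```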

10.2.2 Abstract Features

As introduced in Sect. 2.3.5, abstract features are used in some notations of feature models, but are not mapped to any code. That is, selecting or not selecting an abstract feature has no influence on product derivation, and such features may be skipped during requirements analysis.

As a conservative approximation, every feature that does not occur syntactically in any presence condition is abstract. This approximation is usually sufficient in practice, but would not catch corner cases such as a presence condition \({\mathtt{{A}}}\vee \lnot {\mathtt{{A}}}\), which contains the feature name but does not influence product derivation. For a more precise analysis, we can again encode the question as a corresponding Boolean satisfiability problem (Thüm et al. 2011a): Feature \({\mathtt{{f}}}\) is abstract, iff the following formula is not satisfiable:
$$\begin{aligned} \bigvee _{{\mathtt{{c}}}\in {\mathtt{{C}}}} {\mathtt{{pc(c)}}} [{\mathtt{{f}}}\rightarrow \top ] \oplus {\mathtt{{pc(c)}}} [{\mathtt{{f}}}\rightarrow \bot ] \end{aligned}$$
where \({\mathtt{{p}}}[{\mathtt{{A}}}\rightarrow {\mathtt{{B}}}]\) denotes substituting all occurrences of \({\mathtt{{A}}}\) by \({\mathtt{{B}}}\) in predicate \({\mathtt{{p}}}\) and \(\oplus \) denotes exclusive or. In our example \({\mathtt{{A}}}\vee \lnot {\mathtt{{A}}}\), we would substitute \({\mathtt{{A}}}\) both by \({\mathtt{{true}}}\) (\(\top \)) and \({\mathtt{{false}}}\) (\(\bot \)) to yield \((\top \vee \lnot \top ) \oplus (\bot \vee \lnot \bot )\), which is false; the formula is unsatisfiable, so we know the Boolean value of feature \({\mathtt{{A}}}\) has no impact on selecting code fragments.
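A sketch of this check (again reusing satisfiable from Sect. 10.1.2): instead of substituting syntactically, we evaluate each presence condition twice, once with \({\mathtt{{f}}}\) forced to true and once forced to false, and ask whether any assignment makes the two results differ:

```python
def is_abstract(f, pcs, features):
    # f is abstract iff the XOR formula above is unsatisfiable, that is,
    # no fragment's inclusion ever depends on the value of f.
    def influences(a):
        on, off = dict(a, **{f: True}), dict(a, **{f: False})
        return any(pc(on) != pc(off) for pc in pcs)  # pc[f->T] XOR pc[f->F]
    return not satisfiable(influences, features)

print(is_abstract("A", [lambda a: a["A"] or not a["A"]], ["A"]))    # True
print(is_abstract("A", [lambda a: a["A"] and a["B"]], ["A", "B"]))  # False
```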

10.2.3 Determining Presence Conditions

Previously, we assumed that we knew the presence conditions for all code fragments. In Example 10.8, we used a simple presence condition without explanation. Although we may separately model presence conditions of implementation artifacts (Metzger et al. 2007), we argue that it is usually more convenient and reliable to extract them directly from the variability in the implementation. In this section, we discuss how to extract presence conditions for different implementation mechanisms.

Feature-Oriented Programming

In Sect. 6.1, we considered feature modules as code fragments to analyze. Developers mostly use an implicit one-to-one mapping between features and feature modules, that is, the feature module has the same name as the feature in the feature model. Therefore, we can extract a mapping as follows: A feature module X has the presence condition \({\mathtt{{pc(X)}}}={\mathtt{{X}}}\) (referring to a feature with the same name). Hence, determining presence conditions is trivial.

Of course, explicit external mappings are also possible, for example, a table describing the presence condition for every module, or a build system selecting which modules to compose. Especially with regard to extra modules for feature interactions (Sect. 9.4.7, p. 230), presence conditions such as \({\mathtt{{A}}}\wedge {\mathtt{{B}}}\) are common.
Fig. 10.8

Examples of presence conditions extracted from conditional compilation

Conditional Compilation with the C Preprocessor

Extracting a presence condition for code fragments using conditional compilation (Sect. 5.3, p. 110) is also straightforward. In the context of the C preprocessor, a code fragment refers to a sequence of code lines within a file; code fragments are separated by conditional-compilation directives. Macros that control conditional compilation are often mapped directly to features or have a simple mapping (for example, in the Linux kernel, macro ‘CONFIG_X’ represents feature X).

As described in Sect. 5.3, the C preprocessor has the directives #ifdef, #ifndef, #if, #elif, #else, and #endif, which can be nested. Instead of explaining in detail how to extract the mapping, we simply give the example in Fig. 10.8. For a precise description, see the formalization by Tartler et al. (2011).

Determining a presence condition for code that includes conditional-compilation directives is not always as simple as in Sect. 5.3.2. Using #define and #undef directives, developers can activate and deactivate macros within the source code during the execution of the preprocessor (possibly depending on other features using conditional compilation on macro definitions). A precise analysis is outside the scope of our book and discussed elsewhere (Hu et al. 2000; Favre 2003; Latendresse 2003, 2004; Kästner et al. 2011; Tartler et al. 2011; Gazzillo and Grimm 2012). However, simple, disciplined preprocessors (or preprocessor usage) can significantly ease analyses. For example, we recommend not changing the definition of macros that denote features within the source code; so the intuitive extraction procedure above can be used.
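To make the intuitive extraction procedure concrete, here is a deliberately simplified sketch for a disciplined preprocessor subset: it handles only #ifdef, #ifndef, #else, and #endif on feature macros, and neither #if expressions nor #define/#undef, which the tools cited above address.

```python
import re

def presence_conditions(lines):
    # Maintain a stack of conditions for nested directives; each plain
    # code line gets the conjunction of all enclosing conditions.
    stack, result = [], []
    for line in lines:
        if m := re.match(r"\s*#ifdef\s+(\w+)", line):
            stack.append(m.group(1))
        elif m := re.match(r"\s*#ifndef\s+(\w+)", line):
            stack.append("!" + m.group(1))
        elif re.match(r"\s*#else", line):
            stack.append("!(" + stack.pop() + ")")
        elif re.match(r"\s*#endif", line):
            stack.pop()
        else:
            result.append((line, " && ".join(stack) if stack else "true"))
    return result

for code, pc in presence_conditions(["#ifdef A", "int x;", "#ifdef B",
                                     "int y;", "#endif", "#else",
                                     "int z;", "#endif"]):
    print(pc.ljust(8), code)  # A int x; / A && B int y; / !(A) int z;
```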

More modern annotation-based implementation strategies enforce a direct mapping. For example, FeatureMapper (Sect. 5.3.3, p. 113) and virtual separation of concerns (Sect. 7.4, p. 184) store feature-code mappings separately, for example, in a table explicitly mapping code fragments to presence conditions. Here, extracting presence conditions is trivial.

Build Systems

A build system selects which files to compile and how (Sect. 5.2). As build systems control the inclusion of entire files or directories, code fragments in this context can refer to plug-ins (see Sect. 4.3, p. 79), aspects (see Sect. 6.2, p. 141), feature modules, or any other files or containers. As build systems also control how (for example, with which parameters) files are compiled and initiate generators, we can also determine presence conditions for compiler parameters or settings in configuration files.

In the simplest case, a build system maintains a list of presence conditions for each file. Unfortunately, determining presence conditions from a build system is not always easy, because most build systems are written in sophisticated, Turing-complete scripting languages. In general, extracting presence conditions is undecidable, because build systems may perform arbitrary computations, for example, by calling shell scripts.

Analysts wanting to extract variability from build systems can pursue different strategies. First, they can use a disciplined build system with limited expressiveness designed for analysis (for example, a system providing a direct mapping between presence conditions and files). Second, automated tools can try to detect common patterns used in existing build scripts; however, accuracy will depend on whether and how those patterns are used (for example, Berger et al. (2010a) and Nadi and Holt (2012) describe experience with such extraction for the Linux kernel). Finally, an analyst could perform different kinds of more heavyweight dynamic and static analyses on the build script, such as symbolic execution (Tamrawi et al. 2012).

There is a trade-off between how expressive and how analyzable a build system is. It may be that the expressiveness provided by contemporary build systems is not needed, but is used simply because developers are familiar with it. The more expressive the build system's language, the less accurate and the more difficult the analysis becomes. Imprecision in the analysis can lead to both false positives and false negatives when searching for dead code fragments and abstract features. For many purposes, restricted domain-specific languages would suffice and allow precise analysis, though some migration effort may be necessary in existing projects.

In practice, it is typical to combine conditions extracted from the build system with conditions from other implementation mechanisms, such as preprocessors.

Parameters

For software product lines that use run-time variability (see Sect. 4.1, p. 66), static presence conditions are the most difficult to extract. With intra-procedural control-flow and data-flow analysis, we could attempt to trace configuration parameters to specific code fragments. Detecting feature code in a product line implemented with run-time parameters is conceptually similar to detecting unreachable code in compilers (with some extra knowledge about these parameters). However, as parameters can be passed throughout the program, assigned, and modified, we would need sophisticated and computationally expensive abstract interpretation or slicing analyses. Further, such an analysis is always incomplete or unsound, so it cannot avoid both false positives and false negatives (see Rice's theorem).

When parameters are used in a restricted and disciplined fashion, specific analysis techniques could in principle detect many presence conditions (Haase 2012; Ouellet et al. 2012). In any case, implementations based on compile-time variability (especially advanced language-based and tool-based approaches) are naturally easier to analyze statically than approaches based on run-time variability. To perform variability-aware analysis, a reliable extraction of presence conditions is important, and disciplined implementation approaches can simplify that task significantly.

10.3 Analysis of Domain Implementations

After analyzing feature models and the mapping from features to code, we now focus on analyzing variability in program structures, such as function calls or statements. We call these analyses variability-aware analysis, because they perform traditional analyses, such as type checking and model checking, but incorporate knowledge about variability in the system. Again, we want to analyze and ensure properties for all possible products of a product line. These analyses build on top of the analyses we have presented earlier and reason about all kinds of domain-engineering artifacts (feature models, domain implementations, and the mapping between them).

The idea of variability-aware analysis is not to invent new kinds of analysis techniques, but to lift existing analysis techniques developed for individual programs to entire product lines (that is, to domain artifacts). Examples of established analyses that we want to lift include type checking, model checking, data-flow analysis, and deductive verification. The goal is to perform the same analysis on a product line that we could perform on every possible product separately. Ideally, variability-aware analysis should yield the same results, but in a more efficient way.

Let us explain the vision of variability-aware analysis in Fig. 10.9. We have two possible paths to check a property for an entire product line:
  • Brute-force analysis. Starting from a product line, we can derive a product per valid feature selection (Step 1). For each product, we can now perform a given off-the-shelf analysis (Step 2). For example, we could compile the source code to detect syntax errors and type errors. If we repeat that process for every valid product (exponentially many, in the worst case), we can aggregate the results and determine whether the property holds for all products of the product line (Step 4).

  • Variability-aware analysis. We analyze the domain artifacts of the product line, without checking all products of a product line individually. Variability-aware analysis produces a result for the entire product line (for example, ‘the property holds for all products’, or ‘the property does not hold for products with feature A’). From the result, we can derive whether the property holds for a specific product (Step 4).

Given an ideal variability-aware analysis, both paths should come to the same conclusions. That is, variability-aware analysis should yield the same result as applying an existing analysis in a brute-force approach. At the same time, we expect that variability-aware analysis is typically much faster, because it can exploit similarities and reuse analysis results across products. For example, if multiple products share code (from the base code or from some feature code), variability-aware analysis might only need to analyze it once and not over and over again for each possible feature selection. (Note: Not all variability-aware analyses follow this ideal picture. Some provide approximations that are not exact, but still useful and much faster to compute).
Fig. 10.9

Ideally, variability-aware analysis should reach the same result as traditional analysis applied to all products in isolation

We illustrate variability-aware analysis with type checking, because it is well-understood and comparatively easy to explain. Type checking of product lines is only interesting for implementation approaches with compile-time variability, though. In approaches with run-time variability, such as parameters (see Sect. 4.1, p. 66), only a single program is compiled (and checked), which can be both a strength (easy check for type errors) and a weakness (some errors are caught only at run time). We use examples from type checking product-line implementations based on preprocessors and feature-oriented programming.

10.3.1 Design Space

There is a large design space of different analyses, and researchers have explored many different strategies. Before we discuss specific analyses, we introduce some additional terminology that helps to distinguish different variability-aware analyses within this design space.

First, different kinds of analysis check different properties and give different guarantees. For example, a type system checks well-typedness of programs to ensure the absence of a certain class of errors, whereas model checking verifies that the behavior of a program satisfies a given specification.

Second, in a product-line context, the issue arises of how to specify the expected behavior of a product line. Can we specify the behavior of each feature in isolation, should we provide a specification per product, or is a single global specification for all products sufficient? For simplicity, here we always expect a global specification that must hold for all products, such as ‘all products shall be well-typed’ or ‘there shall be no null-pointer exception during the execution of any product.’

Finally, there are different strategies to lift analyses to handle the large configuration space of a product line. We say a variability-aware analysis is complete, if it finds the same property violations that the brute-force approach would find (see Fig. 10.9). We say a variability-aware analysis is sound, if every property violation found in domain artifacts is also a property violation in a corresponding derived product.

10.3.2 Sampling Strategies

A first strategy, which is easy to apply, is to check only a (suitable) subset of all products of a product line with an off-the-shelf, single-product analysis. For example, we can choose interesting feature selections, derive the corresponding products, and simply compile them to find type errors in the sampled products. This corresponds to taking the path 1–3 in Fig. 10.9 multiple times (though not in a brute-force fashion for all products).

The main question is how to select the sample of feature selections to analyze. Typically, we want to check only a small number of products, but achieve high coverage according to some criterion. Among many others, possible coverage criteria in product lines are:
  • Feature coverage: Select products such that every feature (from the problem space) is included in at least one product.

  • Feature-code coverage: Select products such that every code fragment (from the solution space) is included in at least one product.

  • Pair-wise feature coverage: Select products such that each pair of features is included in at least one product. Additionally, we can demand that for each feature pair \((\mathsf{{f}}, \mathsf{{g}})\) there is a product with f but without g and a product with g but without f, in addition to a product with both f and g.

  • N-wise feature coverage: Much like pair-wise feature coverage, but all possible \({\mathsf{{n}}}\)-tuples of features should be included in at least one product.

  • Popular products and features: Select products frequently used by customers or products with features that are often requested.

  • Domain-specific: In many domains experts can provide suitable coverage criteria for the domain, for example, critical features such as transaction management in a database system.

Sampling with feature coverage may result in a poor error-detection rate, because problems related to interactions between multiple features might not be detected (see Chap. 9). Pair-wise coverage attempts to address this problem by analyzing every pair of features, so that we can detect all interactions between two features. To achieve pair-wise coverage, only a moderate number of products is necessary: For example, Oster et al. (2010) show an example of a feature model with 88 features that can be covered by 40 products and a feature model with 287 features that can be covered with 62 products. Also, \({\mathtt{{n}}}\)-wise coverage with larger \({\mathtt{{n}}}\) is possible; we may detect more interactions, but this approach requires much larger samples. When selecting a sample, we always face a trade-off between the number of products selected (analysis effort) and the desired coverage.
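As an illustration of the criterion itself (not of a sampling algorithm), the following sketch checks whether a given set of sample products achieves pair-wise coverage in the strict reading that all four selected/deselected combinations of every feature pair occur; validity constraints from the feature model are ignored for brevity.

```python
from itertools import combinations

def pairwise_covered(samples, features):
    for f, g in combinations(features, 2):
        seen = {(f in p, g in p) for p in samples}
        if len(seen) < 4:  # some combination of (f, g) is never sampled
            return False
    return True

samples = [set(), {"A"}, {"B"}, {"A", "B"}]
print(pairwise_covered(samples, ["A", "B"]))  # True
```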

There are many other coverage criteria and combinations thereof. Furthermore, there are different strategies to find the smallest (or a small) number of products that fulfill one or more coverage criteria, some of which require sophisticated analyses with SAT solvers that are outside our discussion (see Sect. 10.6, p. 277).

Note that sampling strategies are sound but always incomplete. Since we do not look at all products, we might miss errors, and we cannot establish guarantees for an entire product line. However, when we find an error, we can be sure that it actually is an error. In this respect, variability-aware analysis with sampling is similar to software testing and borrows from a large body of research on combinatorial testing and test coverage.

10.3.3 Family-Based Type Checking of Preprocessor-Based Implementations

Next, we look at an example of how to analyze entire product lines: family-based type checking. We first illustrate family-based type checking of preprocessor-based implementations, and subsequently demonstrate its generality by applying it also to feature-oriented programming.

To illustrate family-based, variability-aware type checking, we slightly extend our graph example again, and use an almost trivial excerpt. As shown in Fig. 10.10, we extend the graph example from Fig. 5.9 (p. 112), such that nodes can optionally have a name, in addition to their id (for example, to store names if nodes represent persons). Feature Name introduces a new method getName, and, to provide the same interface without names, feature NoName provides the same method, but with a dummy implementation.

Together with the already known feature Color, our example has three features, which can be combined into eight different products. Obviously, for some of these products, a Java compiler will issue type errors: Selecting neither Name nor NoName leads to a dangling method invocation in the parameter of the print statement (Line 20; getName has not been declared); selecting both Name and NoName leads to a method declared twice.
Fig. 10.10

Extended graph example implementing colors and optional names of nodes using preprocessor directives

Fig. 10.11

Abstract syntax tree of the domain implementation of the graph example of Fig. 10.10, describing all variations

Fig. 10.12

Selected constraints in the graph example and corresponding output of a family-based type system

To detect these kinds of errors with a brute-force approach, we have to derive and compile all eight products individually. While a brute-force approach seems acceptable for this example, it clearly does not scale for product lines with more features, as the number of products to check grows exponentially. Instead, we lift Java’s type system to take variability into account.

Presence Conditions on Structures

Variability-aware type systems reason about presence conditions in the source code. However, in contrast to the presence conditions for arbitrary textual fragments in Sect. 10.2, we now reason about presence conditions for structural program elements of the domain implementation, such as variables, fields, and methods. In our example, the two declarations of method getName have the presence conditions \({\mathsf{{Name}}}\) (Line 6) and \({\mathsf{{NoName}}}\) (Line 9), respectively; the first statement in the main function has presence condition \({\mathsf{{Color}}}\wedge {\mathsf{{Name}}}\) (Line 18). We emphasize the mapping of presence conditions to program elements (instead of lines of plain text) by showing an abstract-syntax tree of the code fragment that includes nodes for variability in Fig. 10.11 (\(\mathsf{{<\!optional\!>}}\) nodes denote optional subtrees with a presence condition). Again, we denote presence conditions with function \({\mathtt{{pc}}}\). Depending on the implementation mechanism, extracting such a mapping is more or less complex (usually straightforward for composition-based and disciplined annotation-based implementations; more difficult for undisciplined annotations; see also Sect. 5.3.4).

Reachability Constraints

A family-based analysis operates on a program representation that is variable. In our example, family-based type checking takes place on a variable abstract syntax tree that represents the whole space of possible products. Let us generalize from the example: When resolving a method invocation, there can be different target declarations in different products. The type system must ensure that all derivable products that contain the method invocation also contain a corresponding method declaration as target (with an expected type). In our example, method getName is invoked in all products with presence condition \({\mathtt{{true}}}\) (Line 20, expected to return type String), but a corresponding method declaration is only present in products with feature Name or NoName (Lines 6 and 9, both returning type String). Just by comparing presence conditions within the product-line implementation, we can identify that products without feature Name and without feature NoName will contain a type error. If such feature selections are valid according to the feature model, we can issue an error message: “cannot resolve method getName() in Line 20 if \(\lnot \mathsf{{Name}}\wedge \lnot \mathsf{{NoName}}\)”, as shown in Fig. 10.12.

A type system performs many other lookups, of fields, local variables, methods, classes, types, and so on. In all cases, we need to ensure that a target element is present whenever the source element is present (often with additional constraints on types). For instance, a field can only have type Color, and we can only instantiate class Color, when a corresponding class declaration is present. More generally, given a feature model \(\phi \), presence conditions \({\mathtt{{pc}}}\), a source element \({\mathtt{{s}}}\), and a set of target elements \({\mathtt{{T}}}\), we can formulate the following generic constraint, which we call reachability constraint:
$$\begin{aligned} \phi \Rightarrow \bigl ({\mathtt{{pc(s)}}} \Rightarrow \bigvee _{{\mathtt{{t}}}\in {\mathtt{{T}}}} {\mathtt{{pc(t)}}}\bigr ) \end{aligned}$$
If that constraint is not a tautology (that is, if its negation is satisfiable), we report an error message, indicating that there are products in the product line that do not compile. Once again, we use a SAT solver to perform this analysis. We can even pinpoint the error message to a set of feature selections by negating the constraint; for debugging, a SAT solver can provide specific feature selections to reproduce the error with an existing single-product analysis.
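To make this concrete, the following sketch checks the reachability constraint of the getName invocation with the off-the-shelf SAT solver sat4j (see Exercise 10.3); the encoding is our own illustration, not the implementation of any particular tool. Since \({\mathtt{{pc(s)}}} = \top \), we test whether \(\phi \wedge \lnot (\mathsf{{Name}}\vee \mathsf{{NoName}})\) is satisfiable; a satisfying assignment is exactly a feature selection that exposes the type error:

    import java.util.Arrays;

    import org.sat4j.core.VecInt;
    import org.sat4j.minisat.SolverFactory;
    import org.sat4j.specs.ContradictionException;
    import org.sat4j.specs.ISolver;
    import org.sat4j.specs.TimeoutException;

    public class ReachabilityCheck {

        static final int NAME = 1, NONAME = 2;   // SAT variables for the features

        public static void main(String[] args) throws TimeoutException {
            ISolver solver = SolverFactory.newDefault();
            try {
                // phi = true (no feature model yet); add only the negated
                // reachability constraint of the getName invocation: pc(s) is
                // true and the two targets have pc Name and NoName, so we
                // assert that both features are deselected.
                solver.addClause(new VecInt(new int[] { -NAME }));
                solver.addClause(new VecInt(new int[] { -NONAME }));
                // A repaired feature model (Name xor NoName) would add
                //   solver.addClause(new VecInt(new int[] { NAME, NONAME }));
                //   solver.addClause(new VecInt(new int[] { -NAME, -NONAME }));
                // and render the formula unsatisfiable (constraint holds).
            } catch (ContradictionException e) {
                // sat4j already detects trivial unsatisfiability while adding clauses.
                System.out.println("constraint is a tautology; no error");
                return;
            }
            if (solver.isSatisfiable()) {
                // The model is a concrete feature selection that triggers the error.
                System.out.println("ill-typed products exist, for example: "
                        + Arrays.toString(solver.model()));
            } else {
                System.out.println("constraint is a tautology; no error");
            }
        }
    }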
In a similar way, we can also detect redeclaration (or multiple-declaration) errors. In our example, we must not declare method getName twice. To this end, we check that all declarations in a set of potentially conflicting declarations \({\mathtt{{D}}}\) are pair-wise mutually exclusive (within feature selections specified as valid by the feature model). We use the following constraint and report an error if it is not a tautology:
$$\begin{aligned} \phi \Rightarrow \bigwedge _{{\mathtt{{d}}},{\mathtt{{d^{\prime }}}}\in {\mathtt{{D}}},\,{\mathtt{{d}}}\ne {\mathtt{{d^{\prime }}}}} \lnot \bigl ({\mathtt{{pc(d)}}} \wedge {\mathtt{{pc(d^{\prime })}}}\bigr ) \end{aligned}$$
Table 10.1

Reachability constraints in the graph example

Construct | Source | Target | Constraint
String (type reference) | 5 | JSL | \(\phi \Rightarrow (\mathsf{{Name}}\Rightarrow \top )\)
String (type reference) | 6 | JSL | \(\phi \Rightarrow (\mathsf{{Name}}\Rightarrow \top )\)
name (field access) | 6 | 5 | \(\phi \Rightarrow (\mathsf{{Name}}\Rightarrow \mathsf{{Name}})\)
String (type reference) | 9 | JSL | \(\phi \Rightarrow (\mathsf{{NoName}}\Rightarrow \top )\)
String.valueOf (method invocation) | 9 | JSL | \(\phi \Rightarrow (\mathsf{{NoName}}\Rightarrow \top )\)
id (field access) | 9 | 2 | \(\phi \Rightarrow (\mathsf{{NoName}}\Rightarrow \top )\)
Color (type reference) | 13 | 24 | \(\phi \Rightarrow (\mathsf{{Color}}\Rightarrow \mathsf{{Color}})\)
Color (instantiation) | 13 | 24 | \(\phi \Rightarrow (\mathsf{{Color}}\Rightarrow \mathsf{{Color}})\)
Color.setDisplayColor (method inv.) | 18 | 25 | \(\phi \Rightarrow ((\mathsf{{Color}}\wedge \mathsf{{Name}})\Rightarrow \mathsf{{Color}})\)
color (field access) | 18 | 13 | \(\phi \Rightarrow ((\mathsf{{Color}}\wedge \mathsf{{Name}})\Rightarrow \mathsf{{Color}})\)
System.out (field access) | 20 | JSL | \(\phi \Rightarrow (\top \Rightarrow \top )\)
PrintStream.print (method invocation) | 20 | JSL | \(\phi \Rightarrow (\top \Rightarrow \top )\)
getName (method invocation) | 20 | 6, 9 | \(\phi \Rightarrow (\top \Rightarrow (\mathsf{{Name}}\vee \mathsf{{NoName}}))\)
Color (type reference) | 25 | 24 | \(\phi \Rightarrow (\mathsf{{Color}}\Rightarrow \mathsf{{Color}})\)
getName (method redeclaration) | 9 | 6 | \(\phi \Rightarrow \lnot (\mathsf{{Name}}\wedge \mathsf{{NoName}})\)

Source and target refer to lines in Fig. 10.10; JSL represents targets in the Java Standard Library with presence condition \(\top \)

Example 10.9.

We illustrate selected constraints derived from our graph example in Fig. 10.12 and give a full list of constraints in Table 10.1. Note that some references, such as String and System, refer to the Java Standard Library, which is included in all products.

By solving the constraints, we can see that, without additional restrictions from a feature model, two constraints are violated. We can report corresponding error messages, as shown in Fig. 10.12 (bottom). More compactly, we could report the result of our analysis as “if \((\mathsf{{Name}} \oplus \mathsf{{NoName}})\) then well-typed else ill-typed”. The result is equivalent to the result obtained from a brute-force application of the standard Java type system.

When the feature model is repaired such that Name and NoName are alternative features (\(\phi \Rightarrow \mathsf{{Name}} \oplus \mathsf{{NoName}}\)), all constraints we had to check in our example above become tautologies, so we now know that every valid product of our product line is well-typed. \(\square \)

Performance

So, how does variability-aware type checking with a family-based strategy improve over the brute-force approach? Instead of checking reachability and redeclaration errors again and again in the generated code, separately for each product, we formulate constraints over the space of all products. The important benefit of this approach is that we check variability locally in the domain artifacts, where it occurs. For code that is not variable, we perform only a single check overall, instead of one check per product. For example, we check only once whether method System.out.print exists (instead of eight times, once per product, in the brute-force approach), and we check only two possible targets of the method invocation of getName, independently of whether feature Color is selected.

Rather than checking the surface complexity of up to \({\mathtt{{2}}}^{{\mathtt{{n}}}}\) products in isolation, family-based strategies analyze the domain artifacts of the entire product line and check only essential complexity where variability actually matters. Worst-case effort is still exponential, since developers could write product lines without any code sharing, but experience suggests that this happens rarely, because reuse is a key goal of product-line development.6

A family-based type checker can be sound and complete with regard to the brute-force approach, but also unsound or incomplete approximations are possible to simplify the implementation or improve the performance of the analysis; such approximations are still useful to find some errors early in domain artifacts and to enforce consistent use of variability implementations.

10.3.4 Family-Based Type Checking for Feature-Oriented Programming

To illustrate the generality of lifting analyses, let us investigate family-based type checking also for feature-oriented programming. The basic mechanism is similar to that for preprocessor implementations: we look up all possible targets of method invocations, field accesses, class references, and so forth. Subsequently, we check reachability constraints and redeclaration errors with presence conditions as before. There are two main differences, though. First, presence conditions for code structures are easily identifiable: All code structures within a feature module have the same presence condition (see Sect. 10.2.3, p. 257). Second, we have a new (extended) language and need to perform different kinds of lookups, some of which are local to a feature module and some of which cross feature-module boundaries.
Fig. 10.13

Checking whether references to add are well-typed in all products

Let us extend the graph example once more, as shown in Fig. 10.13: In addition to the basic graph and the extension for feature Weighted from Fig. 6.4 (p. 134), we add a new optional feature AccessControl, which can prevent users from adding additional edges. We type check this program with a similar strategy as before (a sketch of the corresponding feature modules follows the list):
  • In Line 8, we access field nodes. The field is defined locally in Line 4 in the same feature module. Thus, the presence conditions of source and target are the same, and the reachability constraint is trivially a tautology:
    $$\begin{aligned} \phi \Rightarrow \bigl (\mathsf{{BasicGraph}} \Rightarrow \mathsf{{BasicGraph}}\bigr ) \end{aligned}$$
  • In feature module Weighted, we refine method add(Node, Node) of class Graph. Since we use a Super call, we require that a prior declaration of the method exists. A lookup across module boundaries finds a possible target in feature module BasicGraph.7 Hence, we derive the following reachability constraint:
    $$\begin{aligned} \phi \Rightarrow \bigl (\mathsf{{Weighted}} \Rightarrow \mathsf{{BasicGraph}}\bigr ) \end{aligned}$$
  • In feature module AccessControl, we refine method add(Node, Node) once more. This time, we find two possible targets, in feature modules BasicGraph and Weighted, leading to the following constraint:
    $$\begin{aligned} \phi \Rightarrow \bigl (\mathsf{{AccessControl}} \Rightarrow (\mathsf{{BasicGraph}} \vee \mathsf{{Weighted}})\bigr ) \end{aligned}$$
  • Similarly, we refine method add(Node, Node, Weight), but only with one possible target in feature module Weighted. Thus, we add the constraint:
    $$\begin{aligned} \phi \Rightarrow \bigl (\mathsf{{AccessControl}} \Rightarrow \mathsf{{Weighted}}\bigr ) \end{aligned}$$
    This constraint can be stricter than a developer might have assumed. In fact, this is an instance of the optional-feature problem discussed in Sect. 9.3.
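Since Fig. 10.13 is likewise not reproduced here, the following fragment approximates the three feature modules in Jak-like syntax (see Chap. 6); the method bodies and the exact form of the Super call are assumptions:

    // Feature module BasicGraph
    class Graph {
        List<Node> nodes = new ArrayList<>();          // defined and used locally
        Edge add(Node n, Node m) {
            nodes.add(n); nodes.add(m);                // local field access
            return new Edge(n, m);
        }
    }

    // Feature module Weighted
    refines class Graph {
        Edge add(Node n, Node m) {
            return Super().add(n, m);                  // needs a prior declaration
        }
        Edge add(Node n, Node m, Weight w) { /* ... */ }
    }

    // Feature module AccessControl
    refines class Graph {
        boolean sealed = false;                        // checked locally (cf. Fig. 10.15)
        Edge add(Node n, Node m) {                     // targets: BasicGraph or Weighted
            if (sealed) throw new IllegalStateException();
            return Super().add(n, m);
        }
        Edge add(Node n, Node m, Weight w) {           // target: Weighted only
            if (sealed) throw new IllegalStateException();
            return Super().add(n, m, w);
        }
    }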
The interesting point is that we can check some constraints within a single feature module. Although we cannot compile feature modules in isolation like plug-ins, we can still exploit the locality of feature modules. Recently, researchers have started to exploit this locality further and even declare or infer corresponding feature interfaces to enable plug-in-like modular type checking despite crosscutting implementations (Apel and Hutchins 2010; Delaware et al. 2009; Schaefer et al. 2011; Kästner et al. 2012b).

10.3.5 Family-Based Analysis with Variability Encoding

When discussing refactorings in Chap. 8, we already mentioned the possibility of variability-preserving rewrites between different variability implementations to change binding times (see Sect. 8.2.3, p. 201). For example, within some limits, we can rewrite a preprocessor-based implementation into one using parameters or feature-oriented programming and vice versa. Where available, we can exploit such rewrites for variability-aware analysis. For example, instead of developing a new variability-aware type system for feature-oriented programming, we could provide an automated rewrite that transforms feature-oriented programs into preprocessor-based implementations and type checks them.

Especially for analyses in the context of model checking, rewrites from compile-time variability into run-time variability using parameters are common. This process is called configuration lifting or variability encoding (Post and Sinz 2008; Apel et al. 2013b).
Fig. 10.14

Possible variability encoding of the graph example from Fig. 10.10; conditionally executed code is highlighted

In Fig. 10.14, we show an example of a possible variability encoding for the graph example of Fig. 10.10. The presence and absence of the features Name, NoName, and Color are modeled by three corresponding Boolean variables, located in class Conf. Code that is specific to particular features is executed conditionally, based on the values of these variables (highlighted in Fig. 10.14). Using standard testing, symbolic execution, model checking, or other existing analysis techniques, we can find that a variability exception is raised when neither Name nor NoName is selected, which indicates an error in the product line (provided this selection is valid according to its feature model).
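The essence of such an encoding can be sketched in plain Java; in this minimal sketch, only the name of class Conf is taken from the text above, everything else is our own approximation of Fig. 10.14:

    // Compile-time features become run-time Boolean flags.
    class Conf {
        static boolean NAME = false;
        static boolean NONAME = false;
        static boolean COLOR = false;
    }

    class Node {
        int id;
        String name;

        String getName() {
            if (Conf.NAME) return name;                 // behavior of feature Name
            if (Conf.NONAME) return String.valueOf(id); // dummy behavior of NoName
            // Neither feature is selected: in the preprocessor version, getName
            // would not exist at all; the encoding turns the missing method into
            // an observable run-time error that testing or model checking can hit.
            throw new IllegalStateException("variability error: getName undefined");
        }
    }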

10.3.6 Feature-Based Analysis Strategies

A Grand Challenge of variability-aware analysis is to analyze features in isolation. Black-box frameworks are especially interesting because their plug-ins can be compiled separately (see Sect. 4.3, p. 79). Separate compilation implies that each plug-in can be type checked in isolation, by compiling against the plug-in interface of the framework. Thus, type errors are detected locally within a plug-in without considering other plug-ins.

However, separate compilation does not yet ensure that all combinations of these plug-ins can be loaded. We still need to ensure that plug-ins and the framework share the same interface. Furthermore, there may be dependencies between plug-in interfaces, and there could be constraints on which and how many plug-ins may be loaded. For example, we might want to guarantee that in every product at least one (or at most one, or exactly one) plug-in is installed. That is, some checks are still required at composition time.

Modular type checking has also been explored in feature-oriented programming (as well as in aspect-oriented and delta-oriented programming). The idea is to type check a feature module in isolation as far as possible. As we have seen previously in Fig. 10.13, many checks can be performed locally within a feature module. Checks that cannot be performed locally are deferred to composition time. That is, constraints referring to code fragments of other features can be expressed explicitly (or inferred) in an interface. An interface constraint of a feature module might specify that it requires some other feature module to provide a class, method, or field. The interface also describes which structures are exported, so that they can be used by other features. Compatibility between modules is then checked at composition time (usually called linker checks). In Fig. 10.15, we exemplify this idea by means of our previous graph example.
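As an illustration only (the requires/provides notation below is invented for this sketch, not taken from a specific tool), an interface for feature module AccessControl from Fig. 10.13 could record:

    // Hypothetical interface of feature module AccessControl (invented notation):
    //
    //   feature AccessControl
    //     requires class Graph                           // from BasicGraph
    //     requires Edge Graph.add(Node, Node)            // from BasicGraph or Weighted
    //     requires Edge Graph.add(Node, Node, Weight)    // from Weighted
    //     provides field boolean Graph.sealed            // internal to AccessControl
    //
    // Composition-time (linker) checks compare such interfaces for compatibility
    // instead of re-type-checking method bodies for every feature combination.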

An exponential number of possible feature selections and corresponding module compositions remains, for which we need to check interface compatibility. Each compatibility check is cheaper than rechecking the entire source code of the product, though. To aim for complete coverage of all feature selections while avoiding a brute-force approach, we can again use sampling or a family-based approach that checks reachability constraints between interfaces, as illustrated in Fig. 10.15.
Fig. 10.15

References to field sealed can be checked entirely within feature AccessControl (left); references to the add methods and the class Graph cut across feature boundaries and are checked at composition time based on the features’ interfaces (right)

Feature-based analysis enables an open-world development strategy in which not all features may be known at development or analysis time. For example, when extending a framework, plug-in developers may not know about all other plug-ins in the system. It can be a good strategy to first check plug-ins in isolation as far as possible and then check plug-in compatibility when actually composing specific plug-ins. Open-world development becomes increasingly important with software ecosystems to which multiple independent parties contribute (Bosch 2009). In contrast, the family-based strategies discussed previously require that all features are known at analysis time; that is, they require a closed-world scenario.

10.3.7 Beyond Type Checking

So far, we have illustrated different analysis strategies by means of type checking. The outlined strategies can be applied to other kinds of analyses as well. In all cases, the idea is to lift an existing analysis to check a given property for the entire product line. If possible, we want to move beyond brute-force and sampling approaches. So far, researchers have investigated variability-aware parsing (Kästner et al. 2011; Gazzillo and Grimm 2012), variability-aware data-flow, control-flow, and information-flow analysis (Brabrand et al. 2012; Bodden 2012; Liebig et al. 2012), variability-aware testing, mostly based on sampling (Cohen et al. 2007; Oster et al. 2010; Kästner et al. 2012c; Kim et al. 2012), variability-aware model checking (Li et al. 2005; Post and Sinz 2008; Classen et al. 2010, 2012; Apel et al. 2013b), variability-aware theorem proving (Thüm et al. 2011b; Thüm et al. 2012b), and variability-aware consistency checking of models (Czarnecki and Pietroszek 2006).

We will not go into the details of these approaches, but there are recurring patterns. A general strategy is to perform the analysis on shared code only once and to reason about entire configuration spaces by means of propositional formulas and SAT solvers. For the interested reader, we recommend the survey of variability-aware analysis (analysis strategies, specification strategies, and a classification of existing analyses) by Thüm et al. (2012a).

10.4 Case Studies and Experience

Analysis of product lines is a comparatively new research area, and most results stem from academic contexts. Nevertheless, we want to highlight some achievements and share some results to give an impression of what product-line analysis is capable of.

Regarding the analysis of feature models, early product configurators were hard to use and allowed users to configure invalid products or get stuck in the configuration process. Modern configurators, including those of commercial product-line tools, are quite advanced, thanks to feature-model analysis: Partial selections are rapidly propagated and conflicts are explained (see the tooling section below). Researchers have found that these tools easily scale interactive configuration to feature models with several hundred or even thousands of features.

Tartler et al. (2011) have analyzed the feature-to-code mapping of the Linux kernel in detail, with the goal of finding inconsistencies, especially dead code. To this end, they reverse-engineered the feature-modeling language Kconfig (see Sect. 2.3.6, p. 36) and extracted presence conditions from Linux’s build system Kbuild (see Sect. 5.2.3, p. 107) and Linux’s preprocessor-based implementation. They found 117 incorrect mappings between features and code fragments, where #ifdef constructs referred to features that are not declared in the feature model (typically typos, such as CONFIG_CPU_HOTPLUG instead of CONFIG_HOTPLUG_CPU). Following the approach outlined in Sect. 10.2.1, they found over 1,000 dead code fragments and proposed 214 patches to the Linux community, a majority of which was accepted into the kernel. They classify 22 of these dead code fragments as actual bugs that change the behavior of the kernel in unexpected ways. The analysis is fast and can process the entire kernel in about 15 minutes. Challenges arise mostly from the difficult extraction of information from the feature model and the build system (due to subtle semantic details and anachronisms of the languages), so the analysis is not entirely precise. Overall, this project impressively shows how even lightweight analyses can discover many problems, even in well-developed and peer-reviewed code such as that of the Linux kernel.

Variability-aware analysis, especially type checking, has also been applied to a series of larger projects and discovered many implementation bugs. Notable studied systems are AHEAD itself (70 features; 48k lines of composition-based Jak code; Thaker et al. 2007), Mobile RSS Reader (14 features; 20k lines of annotation-based Java code; Kästner et al. 2012a), Mobile Media (14 features; 6k lines of annotation-based Java code; Kästner et al. 2012a), Busybox (811 features; 260k lines of annotation-based C code; Kästner et al. 2012b), and the x86 Linux kernel (7,000 features; 6.7M lines of annotation-based C code). In all projects, bugs were found: conflicting introductions of a method in multiple modules in AHEAD, dangling calls across feature boundaries in Mobile RSS Reader, a missing dependency in the feature model and incorrectly annotated import statements in Mobile Media, and dangling references in Busybox. In Fig. 10.16, we show a bug found in Busybox that is hard to find manually. In all cases, performance is comparable to analyzing fewer than ten sampled products.

Experience with variability-aware static analysis (Brabrand et al. 2012; Bodden 2012; Liebig et al. 2012) and variability-aware model checking (Li et al. 2005; Post and Sinz 2008; Classen et al. 2012; Apel et al. 2013b) is similar, but tools in this field are just starting to approach larger-scale studies.
Fig. 10.16

Variability-related bug in Busybox: When feature NTPD_SERVER is deactivated, field listen_fd is removed from struct globals, but it is still accessed in Line 19 (ENABLE_FEATURE_NTPD_SERVER is a macro defined to either \({\mathtt{{0}}}\) or \({\mathtt{{1}}}\), depending on the feature selection)

Overall, experience shows that efficient analysis of entire product lines is possible and useful. Analysis finds real bugs and can be performed in reasonable time. Difficulties typically stem from undisciplined implementation strategies and legacy artifacts (for example, extracting presence conditions from build systems and lexical preprocessors and reverse engineering feature modeling languages), whereas the analysis is typically straightforward.

10.5 Tooling

Analysis of feature models has matured, and some analyses are now available even in commercial product-line tools, such as pure::variants. The SPLOT website8 offers the possibility of trying many analyses directly online. FeatureIDE integrates many feature-model analyses. The FAMA9 tool suite offers probably the most comprehensive selection of analyses currently available and also allows selecting from a large range of solvers.

For checking the feature-code mapping, only few tools are readily available. Specifically for the Linux kernel, the Undertaker10 tool analyzes the mapping with the goal of finding dead (and undead) code fragments. Some implementation approaches directly ensure that only features declared in the feature model are referenced in the implementation; examples are CIDE11 and FeatureMapper.12

For variability-aware analysis, mostly concept and research prototypes are available. Some tools that can be used for experimentation are SafeGen,13 TypeChef,14 CIDE,15 CIDE+,16 SPLverifier,17 VMC,18 and ProVeLines and SNIP.19

10.6 Further Reading

Analyses of feature models are well explored in the literature. Batory (2005), Benavides et al. (2005), and van der Storm (2004) were among the first to describe encodings of feature models as propositional formulas to reason about them with SAT solvers, solvers for constraint satisfaction problems, and binary decision diagrams. Benavides et al. (2010) provide an excellent overview of developments in the field, including many analysis questions and different implementation strategies and tools. They also provide a good introduction to reasoning about feature models in the presence of non-Boolean features and constraints.

A good example of analysis of the feature-code mapping is the Undertaker project by Tartler et al. (2011). The authors describe in detail the challenges of extracting feature models and presence conditions, as well as their experience with reporting bugs to the developer community. An earlier and simpler approach was described by Metzger et al. (2007), who provided a separate variability model for each implementation artifact (instead of extracting presence conditions from some implementation) and subsequently checked intended variability against the variability modeled for the implementation.

Sampling strategies were first explored outside the product-line context, as combinatorial testing, but have quickly been applied to product lines as well. There is a large body of research, to which we can only provide initial pointers (Cohen et al. 2007; Oster et al. 2010; Perrouin et al. 2010).

The idea of analyzing the domain artifacts of the entire product line originated from work on generators (Huang et al. 2005) and on checking model consistency (Czarnecki and Pietroszek 2006). The first type-checking approach for product lines was proposed by Thaker et al. (2007). The field of variability-aware analysis has recently exploded with research contributions from different fields. Readers interested in this field may follow the references in Sect. 10.3.7 as a starting point. Also, a recent survey by Thüm et al. (2012a) provides a good overview of the field and the different strategies applied.

Work on feature-based analysis often has striking parallels with research in programming languages, regarding modularity and module systems. The goal is the same: Check errors locally and early to allow development in an open-world style. Again, we can only provide initial pointers to a large body of research (Leroy 1994; Cardelli 1997; Blume and Appel 1999; Ancona and Zucca 2001; Ancona et al. 2005; Strniša et al. 2007).

Finally, there is plenty of work on product-line testing. Unfortunately, testing cannot yet exploit the similarities between products as static variability-aware analyses do, but relies more on sampling. Typical technical strategies are to test domain artifacts in isolation as far as possible and to prepare reusable test cases as part of domain engineering that can be executed during application engineering. Pohl et al. (2005) provide a good overview of basic testing strategies, and Engström and Runeson (2011) and da Mota Silveira Neto et al. (2011) have conducted recent surveys of product-line testing that provide good starting points for further reading.
Fig. 10.17

Sample feature models

Exercises

10.1. When are analyses of software product lines useful or even necessary? Discuss opportunities and challenges. Which phases of the product-line-development process can be supported by analyses? Provide illustrative examples to explain your position.

10.2. Analyze (i) the feature models in Fig. 10.17, (ii) the feature models of the graph example in Fig. 2.6, and (iii) the feature models created in Exercises 2.4 and 2.5 (p. 43) as follows:
  (a) Translate the feature model into a propositional formula.
  (b) Provide two valid and two invalid feature selections (if possible).
  (c) Check whether the feature model is consistent.
  (d) Provide two assumptions that hold in the feature model and two assumptions that do not hold. Select assumptions that could reasonably be used as tests.
  (e) Detect whether the feature model contains any dead or false optional features.
  (f) Illustrate constraint propagation on a partial feature selection (if possible). As partial feature selection, use the last two features of the valid feature selections of Exercise 10.2 (b).
  (g) Calculate the number of valid feature selections (you may ignore cross-tree constraints).
  (h) Perform a change of the feature model that is (i) a refactoring, (ii) a generalization, (iii) a specialization, and (iv) none of the above. Demonstrate that the change actually falls into the given category.
     
10.3. Build an infrastructure to answer the questions of Exercise 10.2 mechanically. Define a simple textual (or XML) format for feature models; translate feature models into propositional formulas; answer the questions by translating them into Boolean satisfiability problems; solve the satisfiability problems by handing them over to an off-the-shelf SAT solver, such as sat4j or MiniSat20; and print the solution.

10.4. In the context of the domains from Exercise 2.5 (p. 43), discuss when optimizing feature selections may be useful or necessary. Which nonfunctional requirements may be worth optimizing? Which functional requirements may be implemented by different features with different nonfunctional trade-offs? Provide illustrative examples.

10.5. Discuss how you could extend your analysis infrastructure from Exercise 10.3 to support constraints over non-Boolean feature attributes and optimization goals. For example, assume each feature has a known price and a known impact on binary size, and you want to complete a partial feature selection such that the resulting product is smaller than 500 kb and has the lowest possible price. Investigate what technology could be used to solve such an optimization problem.

10.6. Derive presence conditions for each code fragment in the following file. Subsequently, determine which code fragments are dead given the four feature models:

10.7. To test a product line, we want to pursue a sampling strategy.
  (a) Discuss possible coverage criteria. Which coverage would be necessary to detect the division-by-zero in Line 17 of the example in Exercise 10.6?
  (b) Collect a small set of feature selections to fulfill the following coverage goals:
    (i) Feature coverage: Each feature of the product lines described by the feature models in Fig. 2.6 and in Fig. 10.17a should be included in at least one feature selection.
    (ii) Feature-code coverage: Each line of code in the code example of Exercise 10.6 should be included in at least one feature selection (not considering any feature models).
    (iii) Feature-code coverage: Each line of code in the code example of Exercise 10.6 should be included in at least one feature selection that is also valid in the corresponding feature models.
    (iv) Pair-wise coverage: In a product line with five optional and independent features A, B, C, D, and E, for every pair of features (f, g), there should be a feature selection with f and g, one with f and without g, and one without f but with g.
    (v) Pair-wise coverage: For the feature model of the graph example in Fig. 2.6 and for the feature models in Fig. 10.17a–c, achieve pair-wise coverage as in the previous task.
10.8. Explain the strategy of family-based type checking on the following two code examples. What reachability constraints can be derived from the code base? For which feature selections will the code not compile? Provide a feature model that describes all compilable products.
  (a) A simple hello-world program with three features World, Bye and Slow, declared as optional and independent.
  (b) A simple object-oriented store with two alternative base features (SingleStore and MultiStore) and an optional feature AccessControl implemented with feature-oriented programming.
10.9. Provide examples of analyses for product lines that are not sound or complete with regard to what an analysis using a brute-force approach would find. Discuss in which scenarios such analyses may still be useful.

10.10. Advanced task, requires a background in type systems (Pierce 2002). Design a formal type system for product lines based on the simply typed lambda calculus.

We start with a version of the lambda calculus extended with compile-time variability annotations:
$$\begin{aligned} {\mathtt{{e}}} = {\mathtt{{x}}} \quad |\quad \lambda {\mathtt{{x}}}:\tau .\ {\mathtt{{e}}} \quad |\quad {\mathtt{{e}}}\ {\mathtt{{e}}} \quad |\quad {\mathtt{{c}}} \quad |\quad \Upsilon {\mathtt{{f}}} .\ {\mathtt{{e}}} - {\mathtt{{e}}} \end{aligned}$$
\(\Upsilon {\mathtt{{f}}} .\ {\mathtt{{e}}} - {\mathtt{{e}}} \) represents a compile-time choice between two expressions depending on feature \({\mathtt{{f}}}\). By evaluating compile-time choices \(\Upsilon \) with a feature selection, we can derive a traditional lambda-calculus expression.

Design a type system for the variability-enhanced lambda calculus, such that a variability-enhanced expression is well-typed if and only if all derivable lambda-calculus expressions are well-typed (see Fig. 10.9). Prove soundness and completeness with regard to the brute-force approach.

Footnotes

  1. There are an estimated \({\mathtt{{7B}}}\approx {\mathtt{{2}}}^{{\mathtt{{33}}}}\) people on earth and \({\mathtt{{10}}}^{{\mathtt{{80}}}} \approx {{\mathtt{{2}}}}^{{\mathtt{{265}}}}\) atoms in the universe.

  2. A model is a solution (that is, a true/false assignment to each feature variable) that satisfies \(\phi \).

  3. For more information on the open-source tool FeatureIDE, see Appendix A.

  4. Typically, SAT solvers require formulas to be in conjunctive normal form, but these details should be hidden by feature-modeling tools.

  5. Rice’s theorem says that, in general, no static analysis can prove a non-trivial property for all programs in finite time. Of course, by restricting the domain of programs, by requiring certain structures or properties of these programs, a non-trivial property can be proven. This is analogous to the Halting Problem: for straight-line programs, the Halting Problem is solvable.

  6. We do know of cases of “product line” development where this is not so. The situation arises when different versions of a common system are produced in version control by branching code bases that are never merged. We strongly recommend against this practice.

  7. Here, we assume a fixed order of feature modules. A lookup is performed only in previous feature modules. If we want to type check feature modules with a flexible composition order, we need a more sophisticated encoding that reasons about the composition order as well.

  8.
  9.
  10.
  11.
  12.
  13.
  14.
  15.
  16.
  17.
  18.
  19.
  20.

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Sven Apel, University of Passau, Passau, Germany
  • Don Batory, The University of Texas at Austin, Austin, USA
  • Christian Kästner, Carnegie Mellon University, Pittsburgh, USA
  • Gunter Saake, Fak. Informatik, Inst. Technische/Betriebliche Informationssysteme, Otto-von-Guericke-Universität, Magdeburg, Germany
