Empirical software metrics for benchmarking of verification tools
Abstract
We study empirical metrics for software source code, which can predict the performance of verification tools on specific types of software. Our metrics comprise variable usage patterns, loop patterns, as well as indicators of control-flow complexity and are extracted by simple data-flow analyses. We demonstrate that our metrics are powerful enough to devise a machine-learning-based portfolio solver for software verification. We show that this portfolio solver would be the (hypothetical) overall winner of the international competition on software verification (SVCOMP) in three consecutive years (2014–2016). This gives strong empirical evidence for the predictive power of our metrics and demonstrates the viability of portfolio solvers for software verification. Moreover, we demonstrate the flexibility of our algorithm for portfolio construction in novel settings: originally conceived for SVCOMP’14, the construction works just as well for SVCOMP’15 (considerably more verification tasks) and for SVCOMP’16 (considerably more candidate verification tools).
Keywords
Software verification · Software metrics · Machine learning · Algorithm portfolio
1 Introduction
The success and gradual improvement of software verification tools in the last two decades is a multidisciplinary effort—modern software verifiers combine methods from a variety of overlapping fields of research including model checking, static analysis, shape analysis, SAT solving, SMT solving, abstract interpretation, termination analysis, pointer analysis etc.
The mentioned techniques all have their individual strengths, and a modern software verification tool needs to pick and choose how to combine them into a strong, stable and versatile tool. The trade-offs are based on both technical and pragmatic aspects: many tools are either optimized for specific application areas (e.g. device drivers), or towards the in-depth development of a technique for a restricted program model (e.g. termination for integer programs). Recent projects like CPA [6] and FrankenBit [22] have explicitly chosen an eclectic approach which enables them to combine different methods more easily.
There is growing awareness in the research community that the benchmarks in most research papers are only useful as proofs of concept for the individual contribution, but make comparison with other tools difficult: benchmarks are often manually selected, handcrafted, or chosen a posteriori to support a certain technical insight. Oftentimes, neither the tools nor the benchmarks are available to other researchers. The annual international competition on software verification (SVCOMP, since 2012) [3, 4, 5, 12, 13, 14] is the most ambitious attempt to remedy this situation. Now based on more than 6600 C source files, SVCOMP has a diverse and comprehensive collection of benchmarks available, and is a natural starting point for a more systematic study of tool performance.
Sources of complexity for 4 tools participating in SVCOMP’15, marked with + / – / n/a when supported/not supported/no information is available
Source of complexity  CBMC  Predator  CPAchecker  SMACK  Corresp. feature 

Unbounded loops  –  n/a  n/a  –  \(\mathcal {L}^{\mathrm{SB}}, \mathcal {L}^{\mathrm{ST}}, \mathcal {L}^{\mathrm{simple}}, \mathcal {L}^{\mathrm{hard}}\) 
Pointers  \(+\)  \(+\)  \(+\)  \(+\)  PTR 
Arrays  \(+\)  –  n/a  \(+\)  ARRAY_INDEX 
Dynamic data structures  n/a  \(+\)  n/a  \(+\)  PTR_STRUCT_REC 
Non-static pointer offsets  –  \(+\)  n/a  n/a  OFFSET 
Non-static size of heap-allocated memory  \(+\)  \(+\)  n/a  n/a  ALLOC_SIZE 
Pointers to functions  \(+\)  n/a  n/a  n/a  \(m_\mathrm{fpcalls}, m_\mathrm{fpargs}\) 
Bit operations  \(+\)  –  \(+\)  –  BITVECTOR 
Integer variables  \(+\)  \(+\)  \(+\)  \(+\)  SCALAR_INT 
Recursion  –  –  –  \(+\)  \(m_\mathrm{reccalls}\) 
Multithreading  \(+\)  –  –  –  THREAD_DESCR 
External functions  \(+\)  –  n/a  n/a  INPUT 
Structure fields  \(+\)  \(+\)  n/a  \(+\)  STRUCT_FIELD 
Big CFG (\(\ge \)100 KLOC)  \(+\)  n/a  n/a  \(+\)  \(m_\mathrm{cfgblocks}, m_\mathrm{maxindeg}\) 
 1. A portfolio solver optimally uses available resources.
While in theory one may run all available tools in parallel, in practice the cost of setup and computational power makes this approach infeasible. A portfolio predicts the n tools it deems best-suited for the task at hand, allowing better resource allocation.
 2. It can avoid incorrect results of partially unsound tools.
Practically every existing software verification tool is partially incomplete or unsound. A portfolio can recognize cases in which a tool is prone to give an incorrect answer, and suggest another tool instead.
 3. Portfolio solving allows us to select between multiple versions of the same tool.
A portfolio is not only useful in deciding between multiple independent tools, but also between the same tool with different runtime parameters (e.g. command-line arguments).
 4. The portfolio solver gives insight into the state of the art in software verification.
As argued in [43], the state of the art can be set by an automatically constructed portfolio of available solvers, rather than the single best solver (e.g. a competition winner). This accounts for the fact that different techniques have individual strengths and are often complementary.
 1. Program variables Does the program deal with machine or unbounded integers? Are the ints used as indices, bitmasks or in arithmetic? Dynamic data structures? Arrays? Interval analysis or predicate abstraction?
 2. Program loops Reducible loops or goto programs? FOR loops or ranking functions? Widening, loop acceleration, termination analysis, or loop unrolling?
 3. Control flow Recursion? Function pointers? Multithreading? Simulink-style code or complex branching?
Our algorithm for building the portfolio is based on machine learning using support vector machines (SVMs) [8, 16] over these metrics. Section 3 explains our approach for constructing the portfolio.
Finally, we discuss our experiments in Sect. 4. In addition to previous results on SVCOMP’14 and ’15 in [17], we apply our portfolio construction to new data from SVCOMP’16, which has recently become available. As before, our portfolio is the hypothetical winner. As the underlying machine-learning problem becomes harder from year to year (considerably more verification tasks and candidate tools), this showcases the overall flexibility of our approach. We highlight the major differences between the three SVCOMP editions ’14–’16 in Sect. 4.1.
While portfolio solvers are important, we also think that the software metrics we define in this work are interesting in their own right. Our results show that categories in SVCOMP have characteristic metrics. Thus, the metrics can be used to (1) characterize benchmarks not publicly available, (2) understand large benchmarks without manual inspection, and (3) understand the presence of language constructs in benchmarks.

We define software metrics along the three dimensions – program variables, program loops and control flow – in order to capture the difficulty of program analysis tasks (Sect. 2).

We develop a machine-learning-based portfolio solver for software verification that learns the best-performing tool from a training set (Sect. 3).

We experimentally demonstrate the predictive power of our software metrics in conjunction with our portfolio solver on the software verification competitions SVCOMP’14, ’15, and ’16 (Sect. 4).

We apply the portfolio construction from [17] to SVCOMP’16 and report on the results. In particular, our portfolio is again winning the Overall category (Sect. 4).

We include detailed results tables for our experiments on SVCOMP’14–’16. (Sect. 4).

We extend our experiments on memory usage and runtime as a tie-breaker in our tool selection algorithm (Sect. 3.3).

We extend the description of loop patterns, which have only been motivated in the conference article (Sect. 2.2).

We improve the explanation of support vector machines for nonlinearly separable data, motivating their use in our portfolio construction (Sect. 3.1).
2 Source code metrics for software verification
We introduce program features along the three dimensions—program variables, program loops and control flow—and describe how to derive corresponding metrics. Subsequent sections demonstrate their predictive power: In Sect. 3 we describe a portfolio solver for software verification based on our metrics. In Sect. 4 we experimentally demonstrate the portfolio’s success, thus attesting the descriptive and predictive power of our metrics and the portfolio.
2.1 Variable role based metrics
Example 1
Consider the C program in Fig. 2a, which computes the number of non-zero bits of variable x. In every loop iteration, a non-zero bit of x is set to zero and counter n is incremented. For a human reading the program, the statements n=0 and n++ in the loop body signal that n is a counter, and statement x = x & (x-1) indicates that x is a bit vector.
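Since Fig. 2a is not reproduced here, the loop it describes can be sketched in Python (the original is C; the variable names follow Example 1):

```python
def count_bits(x: int) -> int:
    """Count the non-zero bits of x, as in Example 1: each iteration
    clears one set bit of x (x = x & (x - 1)) and increments n."""
    n = 0                    # n plays the COUNTER role
    while x != 0:
        x = x & (x - 1)      # bitwise update: x plays the BITVECTOR role
        n += 1
    return n
```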
Example 2
Consider the program in Fig. 2b, which reads a decimal number from a text file and stores its numeric representation in variable val. Statement fd=open(path, flags) indicates that variable fd stores a file descriptor and statement isdigit(c) indicates that c is a character, because function isdigit() checks whether its parameter is a decimal digit character.
List of variable roles with informal definitions
C type  Role name  Informal definition 

int  ARRAY_INDEX  Occurs in an array subscript expression 
ALLOC_SIZE  Passed to a standard memory allocation function  
BITVECTOR  Used in a bitwise operation or assigned the result of a bitwise operation or a BITVECTOR variable  
BOOL  Assigned and compared only to 0, 1, the result of a bitwise operation, or a BOOL variable  
BRANCH_COND  Used in the condition of an if statement  
CHAR  Used in a library function which manipulates characters, or assigned a character literal  
CONST_ASSIGN  Assigned only literals or CONST_ASSIGN variables  
COUNTER  Changed only in increment/decrement statements  
FILE_DESCR  Passed to a library function which manipulates files  
INPUT  Assigned the result of an external function call or passed to it as a parameter by reference  
LINEAR  Assigned only linear combinations of LINEAR variables  
LOOP_BOUND  Used in a loop condition in a comparison operation, where it is compared to a LOOP_ITERATOR variable  
LOOP_ITERATOR  Occurs in loop condition, assigned in loop body  
MODE  Not used in comparison operations other than == and !=; assigned and compared to constant values only  
OFFSET  Added to or subtracted from a pointer  
SCALAR_INT  Scalar integer variable  
SYNT_CONST  Not assigned in the program (a global or an unused variable, or a formal parameter to an external function)  
THREAD_DESCR  Passed to a function of pthread library  
USED_IN_ARITHM  Used in addition/subtraction/multiplication/division  
float  SCALAR_FLOAT  Scalar float variable 
int*, float*  PTR_SCALAR  Pointer to a scalar value 
struct_type*  PTR_STRUCT  Pointer to a structure 
PTR_STRUCT_PTR  Pointer to a structure which has a pointer field  
PTR_STRUCT_REC  Pointer to a recursively defined structure  
PTR_COMPL_STRUCT  Pointer to a recursively defined structure with more than one pointer, e.g. doubly linked lists  
any_type*  HEAP_PTR  Assigned the result of a memory allocation function call 
PTR  Any pointer 
Definition of roles We define roles using data-flow analysis, an efficient fixed-point algorithm [1]. Our current definition of roles is control-flow insensitive, and the result of analysis is the set of variables \({ Res }^R\) which are assigned role R. For the exact definitions of variable roles, we refer the reader to [18].
Example 3
We describe the process of computing roles on the example of role LINEAR for the code in Fig. 2a. Initially, the algorithm assigns to \({ Res }^{\mathrm{LINEAR}}\) the set of all variables \(\{\texttt {x},\texttt {x\_old},\texttt {n}\}\). Then it computes the greatest fixed point in three iterations. In iteration 1, variable x is removed, because it is assigned the non-linear expression x & (x-1), resulting in \({ Res }^{\mathrm{LINEAR}}=\{\texttt {x\_old},\texttt {n}\}\). In iteration 2, variable x_old is removed, because it is assigned variable x, resulting in \({ Res }^{\mathrm{LINEAR}}=\{\texttt {n}\}\). In iteration 3, \({ Res }^{\mathrm{LINEAR}}\) does not change, and the result of the analysis is \({ Res }^{\mathrm{LINEAR}}=\{\texttt {n}\}\).
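The greatest-fixed-point computation of Example 3 can be sketched as follows. This is an illustrative Python model, not the paper's data-flow implementation; in particular, the `assignments` encoding (a linear right-hand side as the set of variables it mentions, a non-linear one as `None`) is an assumption made here:

```python
def linear_roles(variables, assignments):
    """Greatest fixed point for role LINEAR (illustrative sketch).
    assignments maps each variable to a list of right-hand sides: a set
    of variables for a linear expression over them, or None for a
    non-linear expression."""
    res = set(variables)          # start from all variables
    changed = True
    while changed:
        changed = False
        for v in list(res):
            for rhs in assignments.get(v, []):
                # remove v if it is assigned a non-linear expression, or
                # an expression over variables that are not LINEAR
                if rhs is None or not rhs <= res:
                    res.discard(v)
                    changed = True
                    break
    return res
```

On the data of Example 3 (x assigned the non-linear x & (x-1), x_old assigned x, n assigned 0 and n+1), the result is {'n'}.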
Definition 1
2.2 Loop pattern based metrics
The second set of program features we consider is a classification of loops in the program under verification, as introduced in [31]. Although undecidable in general, the ability to reason about bounds or termination of loops is highly useful for software verification: For example, it allows a tool to assert the (un)reachability of program locations after the loop, and to compute unrolling factors and soundness limits in the case of bounded model checking.
In [31] we present heuristics for loop termination. They are inspired by definite iteration, i.e. structured iteration over the elements of a finite set, such as an integer sequence or the elements of a data structure [37]. We first give a definition of definite iteration, which we call FOR loops, for the C programming language, as C does not have dedicated support for this concept. Then, we define generalized FOR loops, which capture some aspects of definite iteration and allow us to describe a majority of loops in our benchmarks. Table 3 gives an overview.
Example Consider the program shown in Fig. 3a. We show termination of the loop in a straightforward manner: The value of i is changed by the loop, while the value of N is fixed. The loop’s condition induces a predicate \(P(i): i \ge N\), guarding the edge leaving the loop (Fig. 3b). We show that during execution, P(i) eventually evaluates to true: The domain of P can be partitioned into two intervals \((-\infty , N)\) and \([N, \infty )\), on which P(i) evaluates to false and true, respectively (Fig. 3c). As i is (in total) incremented during each iteration, we eventually have \(i \in [N, \infty )\), and thus P(i) holds and the loop terminates.
 1. For each variable v we establish the set of possible constant integral updates \({ Incs }(v)\) of v along all possible execution paths of a single iteration of L.
In our example \({ Incs }(i) = \{1,2\}\).
 2. We identify control flow edges e leaving the loop for which the corresponding \(P_e(v)\) eventually evaluates to true under updates described by \({ Incs }(v)\).
In our example there is a single such edge with predicate \(P(i): i \ge N\). All values in \({ Incs }(i)\) are positive, thus P(i) eventually becomes true.
 3. We impose a constraint to ensure \(P_e(v)\) is evaluated in each iteration of L.
In our example P(i) corresponds to the loop condition and this constraint is trivially satisfied.
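The three steps above can be sketched as a single check. The encoding is a simplification assumed here, not the actual machinery of [31]: each loop-exit predicate is a (variable, comparison direction) pair against a loop-invariant bound, and `evaluated_each_iter` lists the predicates checked in every iteration:

```python
def is_for_loop(incs, exit_preds, evaluated_each_iter):
    """Sketch of the FOR-loop test, steps 1-3 (simplified from [31]).
    incs:       {var: set of constant integral updates per iteration}
    exit_preds: list of (var, op) pairs, one per loop-exit edge, e.g.
                ('i', '>=') encodes P(i): i >= N with N loop-invariant
    evaluated_each_iter: indices of predicates checked every iteration
    """
    for idx, (v, op) in enumerate(exit_preds):
        updates = incs.get(v, set())
        if not updates:
            continue
        # step 2: P_e eventually true if v moves strictly toward the bound
        eventually_true = ((op == '>=' and all(d > 0 for d in updates)) or
                           (op == '<=' and all(d < 0 for d in updates)))
        # step 3: the predicate must be evaluated in each iteration
        if eventually_true and idx in evaluated_each_iter:
            return True
    return False
```

On the running example, `is_for_loop({'i': {1, 2}}, [('i', '>=')], {0})` succeeds, since all updates to i are positive.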
List of loop patterns with informal descriptions
Loop pattern  Empirical hardness  Informal definition 

Syntactically bounded loops \(\mathcal {L}^{\mathrm{bounded}}\)  Easy  The number of executions of the loop body is bounded (considers outer control flow) 
FOR loops \(\mathcal {L}^{\mathrm{FOR}}\)  Intermediate  The loop terminates whenever control flow enters it (disregards outer control flow) 
Generalized FOR loops \(\mathcal {L}^{\mathrm{FOR(*)}}\)  Advanced  A heuristic derived from FOR loops by weakening the termination criteria. A good heuristic for termination 
Hard loops \(\mathcal {L}^{\mathrm{hard}}\)  Hard  Any loop that is not classified as generalized FOR loop 
We call a loop for which we obtain such a termination proof a FOR loop \(L \in \mathcal {L}^{\mathrm{FOR}}\). In [31] we show how to efficiently implement these checks using syntactic pattern matching and dataflow analysis.
Syntactically bounded loops A stronger notion of termination considers a loop to be bounded if the number of executions of the loop body is bounded: A loop L is syntactically bounded \(L \in \mathcal {L}^{\mathrm{bounded}}\) if and only if L itself and all its nesting (outer) loops are FOR loops: \(L \in \mathcal {L}^{\mathrm{bounded}} \text { iff } \forall L_o \supseteq L \, . L_o \in \mathcal {L}^{\mathrm{FOR}}\).
Generalized FOR loops We impose strong constraints for classifying loops as \(\mathcal {L}^{\mathrm{FOR}}\). In order to cover more loops, we systematically loosen these constraints and obtain a family of heuristics, which we call generalized FOR loops \(\mathcal {L}^{\mathrm{FOR(*)}}\). We conjecture that this class still retains many features of FOR loops. We describe details of the constraint weakenings in [31]. Of the family of generalized FOR loop classes presented there, we only consider \(\mathcal {L}^\mathrm {(\text {W}_{1}\text {W}_{2}\text {W}_{3})}\) for constructing the portfolio.
Hard loops Any loop not covered by \(\mathcal {L}^{\mathrm{bounded}} \subseteq \mathcal {L}^{\mathrm{FOR}} \subseteq \mathcal {L}^{\mathrm{FOR(*)}}\) is classified as hard: Let \(\mathcal {L}^{\mathrm{any}}\) be the set of all loops. Then \(\mathcal {L}^{\mathrm{hard}} = \mathcal {L}^{\mathrm{any}} \setminus \mathcal {L}^{\mathrm{FOR(*)}}\).
Definition 2
2.3 Control flow based metrics

For intraprocedural control flow, we count (a) the number of basic blocks in the control flow graph (CFG) \(m_{\mathrm{cfgblocks}}\), and (b) the maximum indegree of any basic block in the CFG \(m_\mathrm{maxindeg}\).

To represent indirect function calls, we measure (a) the ratio \(m_\mathrm{fpcalls}\) of call expressions taking a function pointer as argument, and (b) the ratio \(m_\mathrm{fpargs}\) of parameters to such call expressions that have a function pointer type.

Finally, to describe the use of recursion, we measure the number of direct recursive function calls \(m_\mathrm{reccalls}\).
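A minimal sketch of the two intraprocedural metrics, assuming the CFG is given as an adjacency list (the paper extracts these from C sources by static analysis; this representation is an assumption for illustration):

```python
def cfg_metrics(cfg):
    """Compute m_cfgblocks and m_maxindeg from a CFG given as an
    adjacency list {block: [successor blocks]}."""
    m_cfgblocks = len(cfg)                  # number of basic blocks
    indeg = {b: 0 for b in cfg}
    for b, succs in cfg.items():
        for s in succs:                     # count incoming edges
            indeg[s] = indeg.get(s, 0) + 1
    m_maxindeg = max(indeg.values(), default=0)
    return m_cfgblocks, m_maxindeg
```

For a diamond-shaped CFG (one branch, one join), this yields 4 blocks and a maximum in-degree of 2 at the join block.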
2.4 Usefulness of our features for selecting a verification tool
In Sect. 4, we demonstrate that a portfolio built on top of these metrics performs well as a tool selector. In this section, we already give two reasons why we believe these metrics have predictive power in the software verification domain in the first place.
Tool developer reports The developer reports in the competition report for SVCOMP’15 [2], as well as tool papers (e.g. [10, 19]), give evidence for the relevance of our features for selecting verification tools: They mention language constructs, which—depending on whether they are fully, partially, or not modeled by a tool—constitute its strengths or weaknesses. We give a short survey of such language constructs in Table 1 and relate them to our features. For example, Predator is specifically built to deal with dynamic data structures (variable role PTR_STRUCT_REC) and pointer offsets (OFFSET), and CPAchecker does not model multithreading (THREAD_DESCR) or support recursion (control flow feature \(m_\mathrm{reccalls}\)). For CBMC, unbounded loops (various loop patterns \(\mathcal {L}^\mathrm {C}\)) are an obstacle.
3 A portfolio solver for software verification
3.1 Preliminaries on machine learning
In this section we introduce standard terminology from the machine learning community (see for example [7]).
3.1.1 Supervised machine learning
In supervised machine learning problems, we learn a model \(M:\mathbb {R}^n \rightarrow \mathbb {R}\). The \(\mathbf {x}_i \in \mathbb {R}^n\) are called feature vectors, measuring some property of the object they describe. The \(y_i \in \mathbb {R}\) are called labels.
We learn model M by considering a set of labeled examples \(X\mathbf {y} = \{(\mathbf {x}_i, y_i)\}_{i=1}^N\). M is then used to predict the label of previously unseen inputs \(\mathbf {x'} \notin X\).

Classification considers labels from a finite set \(y \in \{1, \dots , C\}\). For \(C=2\), we call the problem binary classification, for \(C>2\) we speak of multiclass classification.

Regression considers labels from the real numbers \(y \in \mathbb {R}\).
3.1.2 Support vector machines
A support vector machine (SVM) [8, 16] is a binary classification algorithm that finds a hyperplane \(\mathbf {w}\cdot \mathbf {x} + b = 0\) separating data points with different labels. We first assume that such a hyperplane exists, i.e. that the data is linearly separable:
If the data is not linearly separable, e.g. due to outliers or noisy measurements, there are two orthogonal approaches that we both make use of in our portfolio solver:
Kernel transformations Another, orthogonal approach to data that is not linearly separable in the input space, is to transform it to a higherdimensional feature space \(\mathbb {H}\) obtained by a transformation \(\phi : \mathbb {R}^n \rightarrow \mathbb {H}\). For example, 2class data not linearly separable in \(\mathbb {R}^2\) can be linearly separated in \(\mathbb {R}^3\) if \(\phi \) pushes points of class 1 above, and points of class 2 below some plane.
The quadratic programming formulation of SVM allows for an efficient implementation of this transformation: We define a kernel function \(K(\mathbf {x}_i,\mathbf {x}_j) = \phi (\mathbf {x}_i) \cdot \phi (\mathbf {x}_j)\) instead of explicitly giving \(\phi \), and replace the dot product in Eq. 4 with \(K(\mathbf {x}_i,\mathbf {x}_j)\). An example of a nonlinear kernel function is the radial basis function (RBF): \(K(\mathbf {x}_i, \mathbf {x}_j)=\exp (-\gamma \Vert \mathbf {x}_i - \mathbf {x}_j \Vert ^2), ~\gamma > 0\).
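For illustration, the RBF kernel can be computed directly (a sketch; the default gamma = 0.5 is an arbitrary choice here, not a value from the paper):

```python
import math

def rbf_kernel(x_i, x_j, gamma=0.5):
    """RBF kernel K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x_i, x_j))
    return math.exp(-gamma * sq_dist)
```

The kernel is 1 for identical points and decays toward 0 with distance, which is what lets the implicit feature space separate clusters that are not linearly separable in the input space.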
3.1.3 Probabilistic classification
Probabilistic classification is a generalization of the classification algorithm, which searches for a function \(M: \mathbb {R}^n \rightarrow \Pr (\mathbf {y})\), where \(\Pr (\mathbf {y})\) is the set of all probability distributions over \(\mathbf {y}\). \(M(\mathbf {x'})\) then gives the probability \({{\mathrm{p}}}(y_i \mid \mathbf {x'}, X\mathbf {y})\), i.e. the probability that \(\mathbf {x'}\) actually has label \(y_i\) given the model trained on \(X\mathbf {y}\). There is a standard algorithm for estimating class probabilities for SVM [41].
3.1.4 Creating and evaluating a model
The labeled set \(X\mathbf {y}\) used for creating (training) model M is called training set, and the set \(X'\) used for evaluating the model is called test set. To avoid overly optimistic evaluation of the model, it is common to require that the training and test sets are disjoint: \(X \cap X' = \emptyset \). A model which produces accurate results for the training set, but results in a high error for previously unseen feature vectors \(\mathbf {x'} \notin X\), is said to overfit.
3.1.5 Data imbalances
The training set \(X\mathbf {y}\) is said to be imbalanced when it exhibits an unequal distribution between its classes: \(\exists y_i, y_j \in \mathbf {y} \text { . } {{{{\mathrm{num}}}(y_i)}/{{{\mathrm{num}}}(y_j)}} \sim 100\), where \({{\mathrm{num}}}(y)=|\{\mathbf {x}_i \in X \mid y_i=y\}|\), i.e. imbalances of the order 100:1 and higher. Data imbalances significantly compromise the performance of most standard learning algorithms [23].
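The imbalance measure, and one standard inverse-frequency weighting scheme, can be sketched as follows. Note that the weighting shown is a generic textbook choice for weighted SVMs, not the competition-aware weighting function developed in Sect. 3.3.5:

```python
from collections import Counter

def imbalance_ratio(labels):
    """num(y_max) / num(y_min) for the most and least frequent classes;
    ratios around 100 or higher mark an imbalanced training set."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

def class_weights(labels):
    """Generic inverse-frequency class weights: rare classes get
    proportionally larger weights in training."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {y: n / (k * c) for y, c in counts.items()}
```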
3.1.6 Multiclass classification
3.2 The competition on software verification SVCOMP
In this section we give an overview of the competition’s setup. Detailed information about the competition is available on its website [15].
SVCOMP maintains a repository of verification tasks, on which the competition’s participants are tested:
Definition 3
(Verification task) We denote the set of all considered verification tasks as \({ Tasks }\). A verification task \(v \in { Tasks }\) is described by a triple \(v = (f, p, { type })\) of a C source file f, verification property p and property type \({ type }\). For SVCOMP’14 and ’15, \({ type }\) is either a label reachability check or a memory safety check (comprising checks for freedom of unsafe deallocations, unsafe pointer dereferences, and memory leaks). SVCOMP’16 adds the property types overflow and termination.
For each verification task, its designers define the expected answer, i.e. if property p holds on f:
Definition 4
(Expected answer) Function \({{\mathrm{ExpAns}}}: { Tasks }\rightarrow \{\textsf {true}, \textsf {false}\}\) gives the expected answer for task v, i.e. \({{\mathrm{ExpAns}}}(v) = \textsf {true}\) if and only if property p holds on f.
Furthermore, SVCOMP partitions the verification tasks \({ Tasks }\) into categories, a manual grouping by characteristic features such as usage of bit vectors, concurrent programs, Linux device drivers, etc.
Definition 5
(Competition category) Let \({ Categories }\) be the set of competition categories. Let \({{\mathrm{Cat}}}: { Tasks }\rightarrow { Categories }\) define a partitioning of \({ Tasks }\), i.e. \({{\mathrm{Cat}}}(v)\) denotes the category of verification task v.
Finally, SVCOMP assigns a score to each tool’s result and computes weighted category scores. For example, the Overall SVCOMP score considers a meta category of all verification tasks, with each constituent category score normalized by the number of tasks in it. We describe and compare the scoring policies of recent competitions in Sect. 4.1. In addition, medals are awarded to the three best tools in each category. In case multiple tools have equal scores, they are ranked by runtime for awarding medals.
Definition 6
(Score, category score, Overall score) Let \({ score }_{t,v}\) denote the score of tool \(t \in { Tools }\) on verification task \(v \in { Tasks }\) calculated according to the rules of the respective edition of SVCOMP. Let \({{\mathrm{cat\_score}}}(t,c)\) denote the score of tool t on the tasks in category \(c \in { Categories }\) calculated according to the rules of the respective edition of SVCOMP.
3.3 Tool selection as a machine learning problem
In this section, we describe the setup of our portfolio solver \(\mathcal {TP}\). We give formal definitions for modeling SVCOMP, describe the learning task as multiclass classification problem, discuss options for breaking ties between multiple tools predicted correct, present our weighting function to deal with data imbalances, and finally discuss implementation specifics.
3.3.1 Definitions
Definition 7
(Verification tool) We model the constituent verification tools as set \({ Tools }= \{1, 2, \dots , |{ Tools }|\}\) and identify each verification tool by a unique natural number \(t \in { Tools }\).
Definition 8
Definition 9
(Virtual best solver) The virtual best solver (VBS) is an oracle that selects for each verification task the tool which gives the correct answer in minimal time.
3.3.2 Machine learning data
We compute feature vectors from the metrics introduced in Sect. 2 and the results of SVCOMP as follows:
We associate each feature vector \(\mathbf {x}(v)\) with a label \(t \in { Tools }\), such that t is the tool chosen by the virtual best solver for task v. In the following, we reduce the corresponding classification problem to \(|{ Tools }|\) independent classification problems.
3.3.3 Formulation of the machine learning problem
For each tool \(t \in { Tools }\), \(\mathcal {TP}\) learns a model to predict whether tool t gives a correct or incorrect answer, or responds with “unknown”. Since the answer of a tool does not depend on the answers of other tools, \(|{ Tools }|\) independent models (i.e., one per tool) give more accurate results and prevent overfitting.
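The decomposition into per-tool problems can be sketched as follows. The label encoding (1 = correct, 2 = unknown, 3 = incorrect) is assumed for illustration (Sect. 3.3.5 only fixes \(L_t(v)=3\) for incorrect answers), and the stand-in learner is a placeholder for the weighted SVM actually used:

```python
from collections import Counter

def majority_fit(data):
    """Stand-in learner: predicts the majority label. A placeholder for
    the weighted SVM used by the portfolio."""
    return Counter(y for _, y in data).most_common(1)[0][0]

def train_per_tool_models(tools, tasks, features, labels, fit):
    """One independent model per tool (sketch of Sect. 3.3.3).
    features: {task: feature vector x(v)}
    labels:   {(tool, task): label}, e.g. 1=correct, 2=unknown, 3=incorrect
    fit:      any supervised learner on (feature_vector, label) pairs
    """
    return {t: fit([(features[v], labels[(t, v)]) for v in tasks])
            for t in tools}
```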
3.3.4 Choosing among tools predicted correct
 1. Time: \(\mathcal {TP}^\mathbf{time}\). We formulate \(|{ Tools }|\) additional regression problems: For each tool t, we use training data \(\{(\mathbf {x}(v), { runtime }_{t,v}^\mathrm{norm})\}_{v \in { Tasks }}\) to obtain a model \(M_t^\mathrm{time}(v)\) predicting runtime, where$$\begin{aligned} { runtime }_{t,v}^\mathrm{norm}= {{\mathrm{norm}}}({ runtime }_{t,v}, \{{ runtime }_{t',v'}\}_{t' \in { Tools }, v' \in { Tasks }}) \end{aligned}$$and \({{\mathrm{norm}}}\) normalizes to the unit interval:$$\begin{aligned} {{\mathrm{norm}}}(x, X) = \frac{x - \min ({ X })}{\max ({ X }) - \min ({ X })} . \end{aligned}$$The predicted value \(M_t^\mathrm{time}(v)\) is the predicted runtime of tool t on task v. We define$$\begin{aligned} { choose }({ TPredicted }) = \mathop {\hbox {arg min}}\limits _{t \in { TPredicted }} M_t^\mathrm{time}(v) . \end{aligned}$$
 2. Memory: \(\mathcal {TP}^\mathbf{mem}\). Similar to \(\mathcal {TP}^\mathrm{time}\), we formulate \(|{ Tools }|\) additional regression problems: For each tool t, we use training data \(\{(\mathbf {x}(v), { memory }_{t,v}^\mathrm{norm})\}_{v \in { Tasks }}\) to obtain a model \(M_t^\mathrm{mem}(v)\) predicting memory consumption, where$$\begin{aligned} { memory }_{t,v}^\mathrm{norm}= {{\mathrm{norm}}}({ memory }_{t,v}, \{{ memory }_{t',v'}\}_{t' \in { Tools }, v' \in { Tasks }}) . \end{aligned}$$We define$$\begin{aligned} { choose }({ TPredicted }) = \mathop {\hbox {arg min}}\limits _{t \in { TPredicted }} M_t^\mathrm{mem}(v) . \end{aligned}$$
 3. Class probabilities: \(\mathcal {TP}^\mathbf{prob}\). We define the operator$$\begin{aligned} { choose }({ TPredicted })=\mathop {\hbox {arg max}}\limits _{t \in { TPredicted }} P_{t,v}, \end{aligned}$$where \(P_{t,v}\) is the class probability estimate for \(M_t(v)=1\), i.e. the probability that tool t gives the expected answer on v.
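The three choose operators can be sketched in a few lines (illustrative Python; `time_model` and `prob` stand for the trained regression models \(M_t^\mathrm{time}\) and the probability estimates \(P_{t,v}\), and the memory variant is analogous to the time variant):

```python
def norm(x, xs):
    """Normalize x to the unit interval with respect to the sample xs."""
    lo, hi = min(xs), max(xs)
    return (x - lo) / (hi - lo)

def choose_time(predicted, time_model, v):
    """TP^time: among the tools predicted correct, pick the one with
    the smallest predicted (normalized) runtime on task v."""
    return min(predicted, key=lambda t: time_model[t](v))

def choose_prob(predicted, prob, v):
    """TP^prob: pick the tool with the highest estimated probability
    P_{t,v} of giving the expected answer on v."""
    return max(predicted, key=lambda t: prob[(t, v)])
```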
Comparison of formulations of \(\mathcal {TP}\), using different implementations of operator \({ choose }\)
Setting  Correct/incorrect/unknown answers (%)  Score  Runtime (min)  Memory (GiB)  Place 

\(\mathcal {TP}^\mathrm{mem}\)  88/2/10  1047  2819  390.2  3 
\(\mathcal {TP}^\mathrm{time}\)  92/2/6  1244  920  508.4  1 
\(\mathcal {TP}^\mathrm{prob}\)  94/1/5  1443  2866  618.1  1 
3.3.5 Dealing with data imbalances
An analysis of the SVCOMP data shows that the labels \(L_t(v)\) are highly imbalanced: For example, in SVCOMP’14 the label which corresponds to incorrect answers, \(L_t(v) = 3\), occurs in less than 4% of the tasks for every tool. The situation is similar for SVCOMP’15 and ’16. We therefore use SVM with weights, in accordance with standard practice in machine learning.

\(\mathbf {Potential}(v)\) describes how important predicting a correct tool for task v is, based on its score potential. E.g., unsafe tasks (\({{\mathrm{ExpAns}}}= \textsf {false}\)) have more points deducted for incorrect answers than safe (\({{\mathrm{ExpAns}}}= \textsf {true}\)) tasks, thus their score potential is higher.

\(\mathbf {Criticality}(v)\) captures how important predicting a correct tool is, based on how many tools give a correct answer. Intuitively, this captures how important an informed decision about task v, as opposed to a purely random guess, is.

\(\mathbf {Performance}(t,c)\) describes how well tool t does on category c compared to the category winner.

\(\mathbf {Speed}(t,c)\) describes how fast tool t solves tasks in category c compared to the fastest tool in the category.
3.3.6 Implementation of \(\mathcal {TP}\)
Finally, we discuss details of the implementation of \(\mathcal {TP}\). We use the SVM machine learning algorithm with the RBF kernel and weights as implemented in the LIBSVM library [9]. To find optimal parameters C for soft-margin SVM and \(\gamma \) for the RBF kernel, we perform an exhaustive grid search, as described in [24].
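The parameter search can be sketched generically; `train_eval` stands for a cross-validation run at one grid point (an assumption of this sketch), and exponentially spaced grids are a common choice in practice:

```python
import itertools

def grid_search(train_eval, c_grid, gamma_grid):
    """Exhaustive search over the (C, gamma) grid; train_eval(C, gamma)
    is assumed to return a cross-validation score to maximize."""
    return max(itertools.product(c_grid, gamma_grid),
               key=lambda p: train_eval(*p))

# exponentially spaced grids, e.g. C in 2^-5..2^15, gamma in 2^-15..2^3
C_GRID = [2.0 ** k for k in range(-5, 16, 2)]
GAMMA_GRID = [2.0 ** k for k in range(-15, 4, 2)]
```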
4 Experimental results
4.1 SVCOMP 2014 versus 2015 versus 2016
Candidate tools and verification tasks. Considering the number of participating tools, SVCOMP is a success story: Figure 4a shows the increase of participants over the years. Especially the steady increase in the last 2 years is a challenge for our portfolio, as the number of machine learning problems (cf. Sect. 3.3) increases. As Fig. 4b shows, the number of verification tasks used in the competition has also increased steadily.
Scoring. As described in Sect. 3.2, SVCOMP provides two metrics for comparing tools: score and medal counts. As Table 4c shows, the scoring policy has constantly changed (the penalties for incorrect answers were increased). At least for 2015, this was decided by a close jury vote [38]. We are interested in how stable the competition ranks are under different scoring policies. Table 5 gives the three top-scoring tools in Overall and their scores in SVCOMP, as well as the top scorers of each year if the scoring policy of other years had been applied:
Clearly, the scoring policy has a major impact on the competition results: in the latest example, UltimateAutomizer wins SVCOMP’16 under the original scoring policy, but is not even among the three top-scorers if the policies of 2015 or 2014 are applied.
Overall competition ranks for SVCOMP’14–’16 under the scoring policies of SVCOMP’14–’16

Benchmark year  Scoring policy  1st place (score)   2nd place (score)   3rd place (score)
2014            2014            CBMC (3501)         CPAchecker (2987)   LLBMC (1843)
2014            2015            CBMC (3052)         CPAchecker (2961)   LLBMC (1788)
2014            2016            CPAchecker (2828)   LLBMC (1514)        UFO (1249)
2015            2014            CPAchecker (5038)   SMACK (3487)        CBMC (3473)
2015            2015            CPAchecker (4889)   SMACK (3168)        UAutomizer (2301)
2015            2016            CPAchecker (4146)   SMACK (1573)        PredatorHQ (1169)
2016            2014            CBMC (6669)         CPA-Seq (5357)      ESBMC (5129)
2016            2015            CBMC (6122)         CPA-Seq (5263)      ESBMC (4965)
2016            2016            UAutomizer (4843)   CPA-Seq (4794)      SMACK (4223)
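Such rank reshuffling can be reproduced mechanically: fix each tool's per-task outcomes and re-sum them under another edition's policy. A minimal sketch, with hypothetical reward/penalty values (the actual SVCOMP values changed between editions, cf. Sect. 3.2):

```python
# Hypothetical policy values for illustration only -- the real SVCOMP
# numbers differ per edition. The ordering reflects the text: an
# incorrect answer on an unsafe ("false") task is penalized harder.
POLICY = {
    ("true",  "correct"):   +2,   # safe task solved correctly
    ("false", "correct"):   +1,   # unsafe task solved correctly
    ("true",  "incorrect"): -8,   # false alarm on a safe task
    ("false", "incorrect"): -16,  # missed bug
    ("true",  "unknown"):    0,
    ("false", "unknown"):    0,
}

def rescore(results, policy):
    """Recompute a tool's total score under a given scoring policy.

    `results` is a list of (expected_answer, outcome) pairs, with
    expected_answer in {"true", "false"} and outcome in
    {"correct", "incorrect", "unknown"}.
    """
    return sum(policy[r] for r in results)

total = rescore([("true", "correct"), ("false", "incorrect"),
                 ("false", "unknown")], POLICY)  # 2 - 16 + 0 = -14
```

Re-running `rescore` with a different policy dictionary over the same outcome lists yields the alternative rankings in the table.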
4.2 Decisiveness-reliability plots
To better understand the competition results, we create scatter plots in which each data point \(\mathbf {v} = (c,i)\) represents a tool that gives \(c\%\) correct and \(i\%\) incorrect answers. Figure 5 shows such plots for the verification tasks in SVCOMP’14, ’15, and ’16. Each data point marked by an unfilled circle \(\circ \) represents one competing tool. The rectilinear distance \(c+i\) from the origin gives a tool’s decisiveness, i.e. the farther from the origin, the less often the tool reports “unknown”. The angle enclosed by the horizontal axis and \(\mathbf {v}\) gives a tool’s (un)reliability, i.e. the wider the angle, the more often the tool gives incorrect answers. We therefore call such plots decisiveness-reliability plots (DR-plots).
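The two plot quantities follow directly from a tool's answer percentages; a small sketch:

```python
import math

def dr_point(correct_pct, incorrect_pct):
    """Decisiveness and (un)reliability of one tool in a DR-plot.

    Decisiveness is the rectilinear (L1) distance of (c, i) from the
    origin; unreliability is the angle in degrees between the
    horizontal axis and the vector (c, i).
    """
    decisiveness = correct_pct + incorrect_pct
    unreliability = math.degrees(math.atan2(incorrect_pct, correct_pct))
    return decisiveness, unreliability

# A tool with 83% correct and 4% incorrect answers:
d, u = dr_point(83.0, 4.0)   # d = 87.0, u is a small angle (high reliability)
```

A perfectly decisive and reliable tool would sit at (100, 0): distance 100, angle 0.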

For 2014 (Fig. 5a), all tools perform quite well with respect to soundness: none of them gives more than 4% incorrect answers. CPAchecker, ESBMC and CBMC are highly decisive tools, with more than 83% correct answers.

For 2015 (Fig. 5b), the number of verification tasks more than doubled, and there is more variety in the results: we see that very reliable tools (BLAST, SMACK, and CPAchecker) are limited in decisiveness, reporting “unknown” in more than 40% of cases. The bounded model checkers CBMC and ESBMC are more decisive, at the cost of giving up to 10% incorrect answers.

For 2016 (Fig. 5c), there is again a close field of very reliable tools (CPAchecker, SMACK, and UltimateAutomizer) that give around 50% correct answers and almost no incorrect answers. The bounded model checker CBMC is still highly decisive, but gives 6% incorrect answers.
Referring back to Fig. 5a–c, we also show the theoretical strategies \(T_\mathrm{cat}\) and \(T_\mathrm{vbs}\), marked by a square \(\square \): Given a verification task v, \(T_\mathrm{cat}\) selects the tool winning the corresponding competition category \({{\mathrm{Cat}}}(v)\). \(T_\mathrm{vbs}\) is the virtual best solver (VBS) and selects for each verification task the tool that gives the correct answer in minimal time. Neither \(T_\mathrm{cat}\) nor \(T_\mathrm{vbs}\) can be built in practice: for \(T_\mathrm{cat}\), we would need to know the competition category \({{\mathrm{Cat}}}(v)\) of verification task v, which is withheld from the competition participants. For \(T_\mathrm{vbs}\), we would need an oracle telling us which tool gives the correct answer in minimal time. Thus any practical approach must be a heuristic, such as the portfolio described in this work.
However, both strategies illustrate that combining tools can yield an almost perfect solver, with \(\ge 90\%\) correct and 0% incorrect answers. (Note that these figures may give an overly optimistic picture—after all the benchmarks are supplied by the competition participants.) The results for \(T_\mathrm{vbs}\) compared to \(T_\mathrm{cat}\) indicate that leveraging not just the category winner, but making a pertask decision provides an advantage both in reliability and decisiveness. A useful portfolio would thus lie somewhere between CPAchecker, CBMC, \(T_\mathrm{cat}\), and \(T_\mathrm{vbs}\), i.e. improve upon the decisiveness of constituent tools while minimizing the number of incorrect answers.
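Although \(T_\mathrm{vbs}\) cannot be built without an oracle, it is easy to state operationally, which is how VBS scores are computed after the fact. A sketch on hypothetical per-task results:

```python
def virtual_best_solver(results):
    """Select, per task, the fastest tool among those answering correctly.

    `results` maps task -> {tool: (answer_correct, time_seconds)}.
    Returns task -> chosen tool, or None if no tool is correct
    (the VBS would then have to answer "unknown").
    """
    choice = {}
    for task, per_tool in results.items():
        correct = [(t, s) for t, (ok, s) in per_tool.items() if ok]
        choice[task] = min(correct, key=lambda ts: ts[1])[0] if correct else None
    return choice

# Hypothetical timings for two tasks and three tools:
results = {
    "task1": {"CBMC": (True, 3.0), "CPAchecker": (True, 9.0), "ESBMC": (False, 1.0)},
    "task2": {"CBMC": (False, 2.0), "CPAchecker": (False, 5.0), "ESBMC": (False, 1.0)},
}
vbs = virtual_best_solver(results)   # {"task1": "CBMC", "task2": None}
```

Note that a tool's raw speed (ESBMC on task1) is irrelevant when its answer is wrong: the oracle only ranks correct answers.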
4.3 Evaluation of our portfolio solver
We originally implemented the machine-learning-based portfolio \(\mathcal {TP}\) for SVCOMP’14 in our tool Verifolio [40]. When competition results for SVCOMP’15 became available, we successfully evaluated the existing techniques on the new data and described our results in [17]. For SVCOMP’16, we reused the portfolio construction published there to compute the additional results in this paper. We present these both in terms of the traditional metrics used by the competition (SVCOMP score and medals) and \(\mathcal {TP}\)’s placement in DR-plots:
Experimental results for the competition participants, plus our portfolio \(\mathcal {TP}\) on random subsets of SVCOMP’14, given as arithmetic mean of 10 experiments on the resp. test sets \({ test }_{ year }\)


For SVCOMP’14 (Fig. 6a), our portfolio \(\mathcal {TP}\) beats the original Overall winner CBMC, collecting 16% more points. It wins a total of seven medals (1/5/1 gold/silver/bronze) compared to CBMC’s six medals (2/2/2).

For SVCOMP’15 (Figure 6b), \(\mathcal {TP}\) is again the strongest tool, collecting 13% more points than the original Overall winner CPAchecker. Both CPAchecker and \(\mathcal {TP}\) collect 8 medals, with CPAchecker’s 2/1/5 against \(\mathcal {TP}\)’s 1/6/1.

For SVCOMP’16 (Fig. 6c), \(\mathcal {TP}\) beats the original Overall winner UltimateAutomizer, collecting 66% more points. \(\mathcal {TP}\) collects 6 medals, compared to the original winner UltimateAutomizer with 2 medals (0/2/0) and the original runner-up CPA-Seq with 5 medals (2/1/2).
4.3.1 Constituent verifiers employed by our portfolio
Our results could suggest that \(\mathcal {TP}\) merely implements a trade-off between CPAchecker’s conservative-and-sound and CBMC’s decisive-but-sometimes-unsound approach. On the contrary, our experiments show that significantly more tools are selected by our portfolio solver (cf. Fig. 7a–c). Additionally, we find that our approach is able to select domain-specific solvers: for example, in the Concurrency category, \(\mathcal {TP}\) almost exclusively selects variants of CSeq (and, for 2016, also CIVL), which are specifically aimed at concurrent problems.
4.3.2 Wrong predictions
We manually investigated cases of wrong predictions made by the portfolio solver. We identify i. imperfect tools and ii. data imbalances as the two main reasons for bad predictions. In the following, we discuss them in more detail:
 Unsound: for example, in SVCOMP’16 the benchmarks differ in a single comparison operator, namely an equality is changed to an inequality. The tool BLAST solves the unsafe task correctly, but the safe one incorrectly (i.e. it gives the same answer for both).
 Buggy: similarly, in SVCOMP’16 benchmarks differ in a single comparison operator. The tool Forest solves the safe task correctly, but crashes on the unsafe one.
 Incomplete: the benchmarks, also taken from SVCOMP’16, differ in a single function call, namely mutex_unlock() is changed to mutex_lock(). The tool CASCADE correctly solves the safe benchmark, but answers unknown for the unsafe one.
Countermeasures: In all three cases, our metrics do not distinguish the given benchmark pairs. The obvious remedy is to improve the participating tools. To address the issue on the side of our portfolio, more expensive analyses would have to be implemented for feature extraction. However, such analyses would i. be tantamount to correctly solving the verification problem directly and ii. increase the overhead spent on feature extraction. A practical portfolio is thus limited by the inconsistencies exhibited by its constituent tools.
Data imbalances: In our training data we find feature vectors for which, for a given tool t, the number of correct answers noticeably outweighs the number of incorrect answers. This is the problem of data imbalance (cf. Sect. 3.1.5), which biases machine learning as follows: for a verification tool that is correct most of the time, the learner prefers the error of predicting that the tool is correct (when it is in fact incorrect) over the error of predicting that it is incorrect (when it is in fact correct). In other words, “good” tools are predicted to be even “better”.
Countermeasures: As described in Sect. 3.1.5, the standard technique for overcoming data imbalances is a weighting function. Discovering data imbalances and countering several of them in a single weighting function is a hard problem. Our weighting function (cf. Sect. 3.3.5) mitigates this issue by compensating for several imbalances that we identified in our training data, and was empirically tuned to improve results while remaining general.
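On top of hand-tuned instance weights, class-level reweighting is the textbook countermeasure for such imbalances. The sketch below uses scikit-learn's `class_weight="balanced"` (which reweights each class inversely to its frequency) on imbalanced toy data; the data and the 90/10 split are illustrative, not the paper's training set:

```python
import numpy as np
from sklearn.svm import SVC

# Imbalanced toy data: 90 "tool correct" samples vs. 10 "tool incorrect"
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(90, 3)),
               rng.normal(2.5, 1.0, size=(10, 3))])
y = np.array([1] * 90 + [0] * 10)

# 'balanced' scales each class weight inversely to its frequency, so the
# minority "incorrect" class is not drowned out during training.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
minority_recall = (clf.predict(X[90:]) == 0).mean()
```

LIBSVM exposes the same mechanism via per-class penalty weights (`-wi`); per-sample weighting, as used by our portfolio, is the finer-grained variant of the same idea.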
4.3.3 Overhead of feature extraction
By construction, our portfolio incurs an overhead for feature extraction and prediction before actually executing the selected tool. In our experiments, this overhead amounted to a median of \(\tilde{x}_\mathrm{features} = 0.5\) s for feature extraction and \(\tilde{x}_\mathrm{prediction} = 0.5\) s for prediction. Compared to verification time, we find this overhead negligible: for example, the Overall winner of SVCOMP’16, UltimateAutomizer, exhibits a median verification time of \(\tilde{x}^\mathrm{ua}_\mathrm{verif} = 24.9\) s computed over all tasks in SVCOMP’16, so the combined overhead of about 1 s amounts to roughly 4% of it.
Experimental results for the competition participants, plus our portfolio \(\mathcal {TP}\) on random subsets of SVCOMP’15, given as arithmetic mean of 10 experiments on the resp. test sets \({ test }_{ year }\)

Experimental results for the competition participants, plus our portfolio \(\mathcal {TP}\) on random subsets of SVCOMP’16, given as arithmetic mean of 10 experiments on the resp. test sets \({ test }_{ year }\)

5 Related work
Portfolio solvers have been successful in combinatorially cleaner domains such as SAT solving [27, 35, 42], quantified Boolean satisfiability (QSAT) [32, 33, 36], answer set programming (ASP) [20, 29], and various constraint satisfaction problems (CSP) [21, 28, 30]. In contrast to software verification, in these areas the constituent tools are usually assumed to be correct.
The work closest to ours is the Mux algorithm selector for software model checkers [39]. Our approach differs in the following respects:
 1. The results in [39] are not reproducible, because i. the benchmark is not publicly available, ii. the verification properties are not described, and iii. the weighting function (in our experience crucial for good predictions) is not documented.
 2. We demonstrate the continued viability of our approach by applying it to the results of recent SVCOMP editions.
 3. We use a larger set of verification tools (35 tools vs. 3). Our benchmark is not restricted to device drivers and is more than 10 times larger (56 MLOC vs. 4 MLOC in [39]).
 4. In contrast to the structural metrics of [39], our metrics are computed using dataflow analysis. Based on tool designer reports (Table 1), we believe that they have superior predictive power. A precise comparison is difficult due to the non-reproducibility of [39].
6 Conclusion
In this paper we demonstrate the importance of software metrics for predicting and explaining the performance of verification tools. As software verification is a multidisciplinary effort and tools have highly diverse strengths and weaknesses, we believe that portfolio solving is a relevant research direction, well worthy of a competition track in its own right. In such a competition, a part of the benchmarks could be hidden from participating tools to prevent overfitting.
In future work, we also envision the use of software metrics for selfevaluation, i.e. better and more systematic descriptions of the benchmarks that accompany research papers in verification.
Acknowledgements
Open access funding provided by Austrian Science Fund (FWF)
References
 1. Aho AV, Sethi R, Ullman JD (1986) Compilers: principles, techniques, and tools. Addison-Wesley, Boston
 2. Baier C, Tinelli C (eds) (2015) Tools and algorithms for the construction and analysis of systems—21st international conference, TACAS 2015, held as part of the European joint conferences on theory and practice of software, ETAPS 2015, London, UK, April 11–18, 2015. Proceedings, Lecture notes in computer science, vol 9035. Springer
 3. Beyer D (2014) Status report on software verification (competition summary SVCOMP 2014). In: Tools and algorithms for the construction and analysis of systems, pp 373–388
 4. Beyer D (2015) Software verification and verifiable witnesses (report on SVCOMP 2015). In: Proceedings of tools and algorithms for the construction and analysis of systems—21st international conference, TACAS 2015, held as part of the European joint conferences on theory and practice of software, ETAPS 2015, London, UK, April 11–18, 2015, pp 401–416
 5. Beyer D (2016) Reliable and reproducible competition results with BenchExec and witnesses (report on SVCOMP 2016). In: TACAS, Lecture notes in computer science, vol 9636. Springer, pp 887–904
 6. Beyer D, Henzinger TA, Théoduloz G (2007) Configurable software verification: concretizing the convergence of model checking and program analysis. In: Computer aided verification (CAV’07), pp 504–518
 7. Bishop CM (2006) Pattern recognition and machine learning. Springer, New York
 8. Boser BE, Guyon I, Vapnik V (1992) A training algorithm for optimal margin classifiers. In: Conference on computational learning theory (COLT’92), pp 144–152
 9. Chang C, Lin C (2011) LIBSVM: a library for support vector machines. ACM TIST 2(3):27
 10. Clarke E, Kroening D, Lerda F (2004) A tool for checking ANSI-C programs. In: Tools and algorithms for the construction and analysis of systems. Springer, pp 168–176
 11. Collective benchmark (cBench). http://ctuning.org/wiki/index.php/CTools:CBench. Accessed 11 Mar 2016
 12. Competition on software verification (2014). http://sv-comp.sosy-lab.org/2014/. Accessed 11 Mar 2016
 13. Competition on software verification (2015). http://sv-comp.sosy-lab.org/2015/. Accessed 11 Mar 2016
 14. Competition on software verification (2016). http://sv-comp.sosy-lab.org/2016/. Accessed 11 Mar 2016
 15. Competition on software verification. http://sv-comp.sosy-lab.org/. Accessed 11 Mar 2016
 16. Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20(3):273–297
 17. Demyanova Y, Pani T, Veith H, Zuleger F (2015) Empirical software metrics for benchmarking of verification tools. In: Proceedings of computer aided verification—27th international conference, CAV 2015, San Francisco, CA, USA, July 18–24, 2015, part I, pp 561–579
 18. Demyanova Y, Veith H, Zuleger F (2013) On the concept of variable roles and its use in software analysis. In: Formal methods in computer-aided design, FMCAD 2013, Portland, OR, USA, October 20–23, 2013, pp 226–230
 19. Dudka K, Peringer P, Vojnar T (2013) Byte-precise verification of low-level list manipulation. In: Static analysis. Springer, pp 215–237
 20. Gebser M, Kaminski R, Kaufmann B, Schaub T, Schneider MT, Ziller S (2011) A portfolio solver for answer set programming: preliminary report. In: Logic programming and nonmonotonic reasoning (LPNMR’11), pp 352–357
 21. Gomes CP, Selman B (2001) Algorithm portfolios. Artif Intell 126(1–2):43–62
 22. Gurfinkel A, Belov A (2014) FrankenBit: bit-precise verification with many bits (competition contribution). In: Tools and algorithms for the construction and analysis of systems (TACAS’14), pp 408–411
 23. He H, Garcia EA (2009) Learning from imbalanced data. Knowl Data Eng 21(9):1263–1284
 24. Hsu CW, Chang CC, Lin CJ et al (2003) A practical guide to support vector classification
 25. Huang YM, Du SX (2005) Weighted support vector machine for classification with uneven training class sizes. Mach Learn Cybern 7:4365–4369
 26. Huberman BA, Lukose RM, Hogg T (1997) An economics approach to hard computational problems. Science 275(5296):51–54
 27. Kadioglu S, Malitsky Y, Sabharwal A, Samulowitz H, Sellmann M (2011) Algorithm selection and scheduling. In: Principles and practice of constraint programming (CP’11), pp 454–469
 28. Lobjois L, Lemaître M (1998) Branch and bound algorithm selection by performance prediction. In: Mostow J, Rich C (eds) National conference on artificial intelligence and innovative applications of artificial intelligence conference, pp 353–358
 29. Maratea M, Pulina L, Ricca F (2012) The multi-engine ASP solver ME-ASP. In: Logics in artificial intelligence (JELIA), pp 484–487
 30. O’Mahony E, Hebrard E, Holland A, Nugent C, O’Sullivan B (2008) Using case-based reasoning in an algorithm portfolio for constraint solving. In: Irish conference on artificial intelligence and cognitive science
 31. Pani T, Veith H, Zuleger F (2015) Loop patterns in C programs. ECEASST 72
 32. Pulina L, Tacchella A (2007) A multi-engine solver for quantified boolean formulas. In: Bessiere C (ed) Principles and practice of constraint programming (CP’07), pp 574–589
 33. Pulina L, Tacchella A (2009) A self-adaptive multi-engine solver for quantified boolean formulas. Constraints 14(1):80–116
 34. Rice JR (1976) The algorithm selection problem. Adv Comput 15:65–118
 35. Roussel O. Description of ppfolio. http://www.cril.univ-artois.fr/~roussel/ppfolio/solver1.pdf
 36. Samulowitz H, Memisevic R (2007) Learning to solve QBF. In: Proceedings of the conference on artificial intelligence (AAAI), pp 255–260
 37. Stavely AM (1995) Verifying definite iteration over data structures. IEEE Trans Softw Eng 21(6):506–514
 38. SVCOMP 2014—minutes. http://sv-comp.sosy-lab.org/2015/Minutes2014.txt. Accessed 6 Feb 2015. No longer available; archived version: https://web.archive.org/web/20150413080431/http://sv-comp.sosy-lab.org/2015/Minutes2014.txt
 39. Tulsian V, Kanade A, Kumar R, Lal A, Nori AV (2014) Mux: algorithm selection for software model checkers. In: Working conference on mining software repositories, pp 132–141
 40. Verifolio. http://forsyte.at/software/verifolio/. Accessed 11 Mar 2016
 41. Wu TF, Lin CJ, Weng RC (2004) Probability estimates for multi-class classification by pairwise coupling. J Mach Learn Res 5:975–1005
 42. Xu L, Hutter F, Hoos HH, Leyton-Brown K (2008) SATzilla: portfolio-based algorithm selection for SAT. J Artif Intell Res (JAIR) 32:565–606
 43. Xu L, Hutter F, Hoos H, Leyton-Brown K (2012) Evaluating component solver contributions to portfolio-based algorithm selectors. In: Cimatti A, Sebastiani R (eds) Proceedings of theory and applications of satisfiability testing—SAT 2012—15th international conference, Trento, Italy, June 17–20, 2012. Springer, pp 228–241
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.