FixedParameter Evolutionary Algorithms and the Vertex Cover Problem
Authors
 First Online:
 Received:
 Accepted:
DOI: 10.1007/s0045301296604
 Cite this article as:
 Kratsch, S. & Neumann, F. Algorithmica (2013) 65: 754. doi:10.1007/s0045301296604
Abstract
In this paper, we consider multiobjective evolutionary algorithms for the Vertex Cover problem in the context of parameterized complexity. We consider two different measures for the problem. The first measure is a very natural multiobjective one for the use of evolutionary algorithms and takes into account the number of chosen vertices and the number of edges that remain uncovered. The second fitness function is based on a linear programming formulation and proves to give better results. We point out that both approaches lead to a kernelization for the Vertex Cover problem. Based on this, we show that evolutionary algorithms solve the vertex cover problem efficiently if the size of a minimum vertex cover is not too large, i.e., the expected runtime is bounded by O(f(OPT)⋅n ^{ c }), where c is a constant and f a function that only depends on OPT. This shows that evolutionary algorithms are randomized fixedparameter tractable algorithms for the vertex cover problem.
Keywords
Evolutionary algorithms Fixedparameter tractability Vertex cover Randomized algorithms1 Introduction
General purpose algorithms, such as evolutionary algorithms [8] and ant colony optimization [6], have been shown to be successful problem solvers for a wide range of combinatorial optimization problems. Such techniques make use of random decisions which allows to consider them as a special class of randomized algorithms. Especially, if the problem is new and there are not enough resources such as time, money, or knowledge about the problem to develop specific algorithms, general purpose algorithms often produce good results without a large development effort. Usually, it is just necessary to think about a representation of possible solutions, a function to measure the quality of solutions, and operators that produce from a solution (or a set of solutions) a new solution (or a set of solutions).
The general approach of an evolutionary algorithm is to start with a set of candidate solutions for a given problem. The solutions of this set can be constructed by some heuristics or chosen randomly from the underlying search space. Such solutions are improved iteratively over time. In each iteration, the current set of solutions (called parent population) constructs a new set of solutions (called offspring population) by variation operators such as crossover and mutation. Based on a selection method which is motivated by Darwin’s principle of the survival of the fittest, a new parent population is constituted by selecting solutions from the parent and the offspring population. The process is iterated until a stopping criteria is fulfilled.
Taking such a general approach to solve a given problem, it is clear that we cannot hope to beat techniques that are tailored to the given task. However, such general approaches find many applications when no good problem specific algorithm is available. In addition to many experimental studies that confirm the success of these techniques on problems from different domains, there has been increasing interest in understanding such algorithms also in a rigorous way. This line of research treats such algorithms as a class of randomized algorithms and analyzes them in a classical fashion, i.e., with respect to their runtime behavior and approximation ability in expected polynomial time. The results obtained in this research area confirm that general purpose approaches often come up with optimal solutions quickly even if they do not use problem specific knowledge. Problems that have been studied among many others within this line of research are the shortest path problem [5, 25], maximum matchings [14], minimum spanning trees [18, 21], minimum (multi)cuts [19, 20], covering and scheduling problems [28]. A comprehensive presentation of the different results obtained in the field of combinatorial optimization can be found in [22]. Additionally, recent theoretical studies have investigated the learning ability of evolutionary algorithms [9, 26].
For NPhard problems we cannot hope to prove practicality in the sense of a polynomial upperbound on the worstcase runtime, even though an algorithm might perform very well in practice. Nevertheless, the notion of fixedparameter tractability may be helpful to explore that situation as well as guiding further algorithm design. Fixedparameter tractability is a central concept of parameterized complexity. In this field, the complexity of input instances is measured in a twodimensional way considering not only the size of the input but also one or more parameters, e.g., solution size, structural restrictions, or quality of approximation. One hopes to confine the inevitable combinatorial explosion in the runtime to a function in the parameter, with only polynomial dependence on the input size. The idea is that even large instances may exhibit a very restricted structure and can therefore be considered easy to solve, despite their size. Let us briefly introduce the central notions (following Flum and Grohe [10]).
A parameterized problem \((\mathcal {Q},\kappa)\) consists of a language \(\mathcal {Q}\) over a finite alphabet Σ and a parameterization κ:Σ ^{∗}→ℕ. The problem \(\mathcal {Q}\) is fixedparameter tractable (FPT) if there is an algorithm that decides whether \(x\in \mathcal {Q}\) in time f(κ(x))⋅x^{ O(1)}, i.e., in time with arbitrary (but computable) dependence on the parameter but only polynomial dependence in the input size. Such an algorithm is called an fptalgorithm for \((\mathcal {Q},\kappa)\). A Monte Carlo fptalgorithm for \((\mathcal {Q},\kappa)\) is a randomized fptalgorithm with runtime f′(κ(x))⋅x^{ O(1)} that will on input x∈Σ ^{∗} accept with probability at least 1/2 if \(x\in \mathcal {Q}\) and with probability 0 if \(x\notin \mathcal {Q}\). For an introduction to parameterized complexity we point the interested reader to [7, 10].
In this paper we want to adopt a parameterized view on evolutionary algorithms for Vertex Cover and consider their expected runtime behavior related to the minimum cardinality of a vertex cover of the input graph, denoted by OPT. We examine when evolutionary algorithms compute a solution quickly if OPT is small, i.e., in expected time \(O(f(\operatorname {OPT}) \cdot n^{c})\). We call an evolutionary algorithm with such a runtime bound fixedparameter evolutionary algorithm.
An important stepping stone in the analysis of our algorithms will be the fact that they create partial solutions that can be considered problem kernels of the original instance, given a feasible secondary measure. A kernelization or reduction to a problem kernel is a special form of polynomialtime data reduction for parameterized problems that produces an equivalent (and usually smaller) instance whose size is bounded by a function in the original parameter. It is known that a parameterized problem is fixedparameter tractable if and only if there exists a kernelization for the problem (cf. [10]). A well known fixedparameter tractable problem is the (standard) parameterized Vertex Cover problem. Given an undirected graph and an integer k, one has to decide whether there exists a set of at most k vertices such that each edge contains at least one of these vertices, parameterized by k. This problem can be solved in time O(1.2738^{ k }+kn) via kernelization followed by a bounded search tree algorithm [3].
The Vertex Cover problem has also been studied in the field of evolutionary computation from a theoretical point of view. Rigorous runtime analysis has been given for the wellknown (1+1) EA and population based algorithms for singleobjective optimization [11, 23, 24]. Additionally, it has been shown that a multiobjective model can help the optimization process of an evolutionary algorithm to find good solutions quicker than in a singleobjective one [12]. Due to the results obtained in [12] we consider two different multiobjective models for the Vertex Cover problem. Both models take as the first objective the goal to minimize the number of chosen vertices. The second criteria should be a penalty function which has to be minimized such that a feasible vertex cover is obtained.
Minimizing the number of uncovered edges as the second objective has already been investigated in [12] and we study this approach with respect to the approximation quality depending on the value of OPT. Afterwards, we examine this approach with respect to the expected runtime in dependence of OPT and show that this approach leads to fixedparameter evolutionary algorithms. Our second approach is to take the minimum cost of a fractional vertex cover for the uncovered edges as the second objective. We show that this approach leads to a 2approximation for Vertex Cover in expected polynomial time and to fixedparameter evolutionary algorithms of runtime \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{\operatorname {OPT}})\). For the case where one is interested in computing a (1+ϵ)approximation, we reduce the runtime bound of this approach to \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{(1\epsilon)\cdot \operatorname {OPT}})\).
The outline of the paper is as follows. In Sect. 2, we introduce the Vertex Cover problem as well as the algorithms and problem formulations that are subject to our investigations. In Sects. 3 and 4 we consider two different multiobjective models for Vertex Cover. In Sect. 5 we summarize our results and give possible directions for further research.
2 Preliminaries
The Vertex Cover problem is one of the wellknown NPhard combinatorial optimization problems. Given an undirected graph G=(V,E) where V=n and E=m the aim is to find a subset V′⊆V of minimum cardinality such that for each e∈E, \(e\cap V' \not= \emptyset\) holds. Many simple approximation algorithms achieve a worstcase approximation ratio of 2 (cf. [4]). For example such an approximation can be achieved in polynomial time by computing a maximal matching in the given graph and choosing for each edge of the matching the corresponding two vertices.
Relaxing the integrality constraint x _{ i }∈{0,1} to fractional values between 0 and 1, i.e., x _{ i }∈[0,1], yields a linear program formulation of the Fractional Vertex Cover problem. Clearly, for any graph, the cost of an optimal fractional vertex cover is a lower bound on the cardinality of a minimum (integral) vertex cover. The dual problem of Fractional Vertex Cover is Fractional Maximum Matching, i.e., Maximum Matching with relaxed integrality.
In the case of multiobjective optimization, the fitness function maps from the search space X to a vector of real values, i.e. f:X→ℝ^{ k }. We consider the case where each of the k objectives should be minimized. For two search points x∈X and x′∈X, f(x)≤f(x′) holds iff f _{ i }(x)≤f _{ i }(x′), 1≤i≤k. In this case, we say that x weakly dominates x′. A search point x dominates a search point x′ iff f(x)≤f(x′) and \(f(\mathbf{x}) \not= f(\mathbf{x}')\). In this case x is considered strictly better than x′. The notion of dominance and weak dominance transfers to the corresponding objective vectors. A Pareto optimal search point x is a search point that is not dominated by any other search point in X. The set of nondominated search points is called the Pareto optimal set and the set of the corresponding objective vectors is called the Pareto front.
We follow this approach and examine the multiobjective model for Vertex Cover in conjunction with the simple multiobjective evolutionary algorithm called Global SEMO (Global Simple Evolutionary MultiObjective Optimizer). This algorithm has already been studied for a wide range of multiobjective optimization problems and can be considered as the generalization of the (1+1) EA to the multiobjective case.
Denote by E(x)⊆E the set of edges for which at least one vertex is chosen by x. As each edge e∈E has to be covered by at least one vertex to obtain a vertex cover, it may be helpful to flip vertices which are incident with uncovered edges with a larger probability. This leads to the following alternative mutation operator.
Our goal is to analyze our algorithms until they have found an optimal solution or a good approximation of an optimal one. Our algorithms using the function f _{1} (or f _{2}) have produced an rapproximation for the Vertex Cover problem iff they have produced a solution x with objective vector f _{1}(x)=(x_{1},0) (or f _{2}(x)=(x_{1},0)) where \(\frac{\mathbf {x}_{1}}{\operatorname {OPT}} \leq r\).
To measure the runtime of our algorithms, we consider the number of fitness evaluations T until a minimum vertex cover or a good approximation of such a solution has been obtained. We define T(k) as the random variable that measures the number of fitness evaluations until a vertex cover of size at most k appears in the population. The expected optimization time refers to the expected number of fitness evaluations \(E[T(\operatorname {OPT})]\) until an optimal solution has been obtained. Often we consider the expected time to achieve intermediate goals, e.g., partial solutions that fulfill certain properties.
If \(\operatorname {OPT}\leq k\), then the probability that an evolutionary algorithm, whose expected optimization time is upper bounded by \(E[T(\operatorname {OPT})]\), finds an optimal solution within at most 2⋅E[T(k)] is at least 1/2, using Markov’s inequality [16]. Clearly, if \(\operatorname {OPT}>k\) then no solution of cost at most k can be found. Thus running a fixedparameter evolutionary algorithm for twice the expected optimization time for \(\operatorname {OPT}=k\) yields a Monte Carlo fptalgorithm for the decision version.
For both introduced fitness functions, the search point 0^{ n } is Pareto optimal as the first objective for all functions is to minimize the number of ones in the bitstring. In the remaining part of the paper we will proceed from this solution towards a minimum vertex cover or a vertex cover of a certain approximation quality.
Lemma 1
The expected number of iterations of Global SEMO or Global SEMO_{ alt } until the population contains the search point 0^{ n } is O(n ^{2}logn) for the fitness functions f _{1} and f _{2}.
Proof
After an expected number of O(n ^{2}logn) iterations both algorithms working on the fitness function f _{1} or f _{2} introduce the search point 0^{ n } into the population. Afterwards, this search point stays in the population. The population size of both algorithms is upper bounded by n+1. This may be used to give a bound on the expected time to reach a minimum vertex cover depending on \(\operatorname {OPT}\).
3 Minimizing the Number of Uncovered Edges
In this section we consider the effect of minimizing the number of uncovered edges as the second criteria by investigating the fitness function f _{1}. Note, that this approach has already been investigated in [12]. Friedrich et al. [12] show that there are bipartite graphs where the (1+1) EA cannot achieve a good approximation in expected polynomial time. Running Global SEMO on these instances solves the problem quickly. For general graphs, it has been showed that Global SEMO achieves a lognapproximation in expected polynomial time.
Lemma 2
 1.
the vertices chosen by x constitute a subset of a minimum vertex cover of G and
 2.
the vertices of G(x) have degree at most \(\operatorname {OPT}\),
Proof
We know that the search point 0^{ n } is introduced into the population after an expected number of O(n ^{2}logn) iterations. Assuming that the search point 0^{ n } has already been introduced into the population, we show that an expected number of \(O(\operatorname {OPT}\cdot n^{4})\) iterations occur where the population does not contain a solution with the desired properties.
We denote by V′⊆V the set of vertices that have degree larger than \(\operatorname {OPT}\) in G. Observe that every vertex cover of cardinality \(\operatorname {OPT}\) contains V′. A vertex cover that does not select a vertex of degree greater than \(\operatorname {OPT}\) must contain all neighbors of the vertex, which leads to a cardinality greater than \(\operatorname {OPT}\). We assume that V′≠∅ as otherwise 0^{ n } has the desired properties.
The idea to prove the lemma is to investigate a potential taking \(O(E \cdot \operatorname {OPT})\) different values. If the population does not contain a solution with properties 1 and 2, the potential is decreased with probability Ω(1/n ^{2}) which leads to the stated upper bound on the number of steps that have a population where each solution does not fulfill the desired properties.
Let \(s_{0}, s_{1},\dots,s_{\operatorname {OPT}}\) be integer values such that s _{ j } is the smallest value of u(x) for any search point x in P choosing at most j vertices, i.e., x_{1}≤j. Note, that each s _{ j } cannot increase during the run of the algorithm as only nondominated solutions are accepted.
 1.
If the graph G(x _{ i }) contains no vertex of degree larger than \(\operatorname {OPT}\) then x _{ i } fulfills properties 1 and 2 by selection of i. For the other cases we assume that G(x _{ i }) contains a vertex of degree greater than \(\operatorname {OPT}\), say v.
 2.If \(s_{i}s_{i+1}\leq \operatorname {OPT}\) (note: this includes the case when P does not contain any solution x with x_{1}=i+1, implying that s _{ i+1}=s _{ i }) then with probability Ω(1/n ^{2}) Global SEMO or Global SEMO_{ alt } chooses the search point x _{ i } and mutates it into a point \(\mathbf {x}'_{i+1}\) that additionally selects v. ClearlyThus \(u(\mathbf {x}'_{i+1})<s_{i+1}\), implying that s _{ i+1} is decreased by at least one.$$u\bigl(\mathbf {x}'_{i+1}\bigr)=u(\mathbf {x}_i)\deg_{G(\mathbf {x}_i)}(v)<s_i \operatorname {OPT}. $$
 3.If \(s_{i}s_{i+1}>\operatorname {OPT}\) then P contains a solution x _{ i+1} of fitness (i+1,s _{ i+1}) and x _{ i+1} selects at least one vertex u∈V∖V′ by choice of i. With probability at least Ω(1/n ^{2}) the search point x _{ i+1} is chosen and is mutated into a solution \(\mathbf {x}'_{i}\) by flipping only the bit corresponding to u. ThusTherefore \(u(\mathbf {x}'_{i})<s_{i}\), so s _{ i } is improved by at least one.$$u\bigl(\mathbf {x}'_i\bigr)=u(\mathbf {x}_{i+1})+\deg_{G(\mathbf {x}'_i)}(u) \leq s_{i+1}+\operatorname {OPT}. $$
In each case we get that either P contains a solution as claimed in the lemma or with probability Ω(1/n ^{2}) the potential decreases by at least one. The potential can take on only \(O(\operatorname {OPT}\cdotE)\) different values which completes the proof. □
We have seen that in all but expected \(O(\operatorname {OPT}\cdot n^{4})\) iterations of Global SEMO or Global SEMO_{ alt } the population contains a solution x that is a subset of some minimum vertex cover and such that G(x) has maximum degree \(\operatorname {OPT}\). Such partial solutions will be useful when proving an upper bound on the expected number of iterations of Global SEMO_{ alt } to generate a minimum vertex cover, while also implying that an \(\operatorname {OPT}\)approximate vertex cover is produced in expected polynomial number of iterations of Global SEMO or Global SEMO_{ alt }. One can easily see that G(x) has at most \((\operatorname {OPT}\mathbf {x}_{1})\cdot \operatorname {OPT}\) uncovered edges, since \((\operatorname {OPT}\mathbf {x}_{1})\) vertices of degree at most \(\operatorname {OPT}\) suffice to cover all of them.
Though these partial solutions are obtained in a randomized fashion aiming to cover as many edges as possible with few vertices, they are strongly related to deterministic preprocessing for the parameterized Vertex Cover problem. To decide whether a given graph has a vertex cover of size at most k one may greedily select all vertices of degree larger than k. In fact, if v is a vertex of degree larger than k then G has a vertex cover of cardinality k if and only if G−v has a vertex cover of cardinality k−1. In conjunction with deleting isolated vertices this leads to an equivalent reduced instance with at most O(k ^{2}) vertices, this technique being known as Buss’ kernelization (cf. [7]).
These structural insights can be used to show that our algorithms achieve an \(\operatorname {OPT}\)approximation in expected polynomial time when using the fitness function f _{1}.
Theorem 1
Using the fitness function f _{1}, the expected number of iterations of Global SEMO or Global SEMO_{ alt } until an \(\operatorname {OPT}\)approximation is computed is \(O(\operatorname {OPT}\cdot n^{4})\).
Proof
According to Lemma 1 and Lemma 2, we already know that the expected number of steps where the population does not contain a solution with the properties stated in Lemma 2 is \(O(\operatorname {OPT}\cdot n^{4})\). In the following, we consider only steps where such a solution exists.
Thus it is ensured that there is a solution x in the population for which \(\mathbf {x}_{1}\leq \operatorname {OPT}\) and the maximum degree of G(x) is at most \(\operatorname {OPT}\). This implies \(u(\mathbf {x})\leq(\operatorname {OPT}\mathbf {x}_{1})\cdot \operatorname {OPT}\) and \(\mathbf {x}_{1}+ u(\mathbf {x}) \leq \operatorname {OPT}^{2}\). If x is dominated by any solution x′ then clearly \(\mathbf {x}'_{1}+ u(\mathbf {x}') \leq \operatorname {OPT}^{2}\). Therefore, in all later steps the population contains at least one solution y with \(\mathbf {y}_{1}+u(\mathbf {y})\leq \operatorname {OPT}^{2}\).
Let u denote the minimum value of u(x) among solutions x∈P with \(\mathbf {x}_{1}+u(\mathbf {x})\leq \operatorname {OPT}^{2}\). Let y∈P be a solution with \(\mathbf {y}_{1}+ u(\mathbf {y})\leq \operatorname {OPT}^{2}\) and u(y)=u. If u(y)=0 it follows that y selects at most \(\operatorname {OPT}^{2}\) vertices which are a vertex cover. Otherwise at least one vertex v of G(y) is incident with an (uncovered) edge.
The probability that y is selected and that it is mutated into a solution y′ that additionally selects v is Ω(1/n ^{2}) for Global SEMO and Global SEMO_{ alt }. Clearly the solution y′ fulfills y′_{1}+u(y′)≤y_{1}+u(y) and u(y′)<u(y). Observe that y′ cannot be dominated by any solution in P due to y′_{1}+u(y′)≤y_{1}+u(y) and by choice of y, implying that it is added to P, decreasing u by at least 1.
If the solution y with u(y)=u and \(\mathbf {y}_{1}+u(\mathbf {y})\leq \operatorname {OPT}^{2}\) is removed from the population then there must be a solution, say z, that dominates it. By u(z)≤u(y) and z_{1}≤y_{1} this cannot increase the value of u. Clearly \(0\leq u\leq \operatorname {OPT}^{2}\), hence it can be decreased at most \(\operatorname {OPT}^{2}\) times.
Thus after expected \(O(\operatorname {OPT}^{2}\cdot n^{2} + \operatorname {OPT}\cdot n^{4})\) iterations of Global SEMO or Global SEMO_{ alt } a solution with fitness (S,0) with \(S\leq \operatorname {OPT}^{2}\) is obtained. □
After having shown that both algorithms achieve an \(\operatorname {OPT}\)approximation in expected polynomial time, we will bound the time until Global SEMO_{ alt } achieves an optimal solution.
Theorem 2
Using the fitness function f _{1}, the expected number of iterations of Global SEMO_{ alt } until it has computed a minimum vertex cover is \(O(\operatorname {OPT}\cdot n^{4}+n\cdot2^{\operatorname {OPT}+\mathrm{OPT}^{2}})\).
Proof
As in the proof of Theorem 1, we assume that P contains a solution x such that G(x) has maximum degree at most \(\operatorname {OPT}\) and there exists a minimum vertex cover S that contains the vertices selected by x. Due to Lemma 2 the expected number of iterations where Global SEMO_{ alt } does not fulfill the properties is \(O(\operatorname {OPT}\cdot n^{4})\), i.e., adding this term to the obtained bound covers the assumption.
The probability of choosing x in the next mutation step is Ω(1/n). Choosing all the remaining vertices of S and not flipping any other bit in x leads to a minimum vertex cover. The graph G(x) has maximum degree \(\operatorname {OPT}\) and it has a vertex cover of size \((\operatorname {OPT}\mathbf {x}_{1})\). Each vertex in such a vertex cover can be adjacent to at most \(\operatorname {OPT}\) nonisolated vertices (and each edge is incident with at least one vertex of the vertex cover), implying that G(x) has at most \((\operatorname {OPT}\mathbf {x}_{1})+(\operatorname {OPT}\mathbf {x}_{1})\cdot \operatorname {OPT}\leq \operatorname {OPT}+\operatorname {OPT}^{2}\) nonisolated vertices.
We consider the mutation of x which flips vertices adjacent to noncovered edges with probability 1/2. Note that with probability (1−1/n)^{ n′}∈Ω(1) no bit corresponding to any of the n′≤n isolated vertices of G(x) is flipped. The probability of flipping only the bits corresponding to the missing vertices of S is therefore \(\varOmega(2^{(\operatorname {OPT}+\operatorname {OPT}^{2})})\), since there are at most \(\operatorname {OPT}+\operatorname {OPT}^{2}\) nonisolated vertices. Hence, the expected time until a minimum vertex cover has been computed is upper bounded by \(O(\operatorname {OPT}\cdot n^{4} + n\cdot2^{\operatorname {OPT}+\operatorname {OPT}^{2}})\). □
4 Fractional Vertex Covers
In this section, we use the minimum cost of a fractional vertex cover for the uncovered edges as the second criteria. For every search point x this gives an estimate on how many vertices are needed to complete the set of selected vertices to a vertex cover of G (or of G(x)). We denote this cost by LP(x), as it is the optimal cost of solutions to the Vertex Cover ILP with relaxed integrality constraints, i.e., 0≤x _{ i }≤1 in place of x _{ i }∈{0,1}. Balinski [1] showed that all basic feasible solutions (or extremal points) of the Fractional Vertex Cover LP are halfintegral.
Theorem 3
[1]
Every basic feasible solution x of the relaxed Vertex Cover ILP is halfintegral, i.e., x∈{0,1/2,1}^{ n }.
Due to this result, optimal fractional vertex covers can be computed very efficiently via a maximum matching of an auxiliary bipartite graph (cf. [2]). Throughout the section we will implicitly assume that chosen fractional vertex covers are halfintegral.
Nemhauser and Trotter [17] proved a very strong relation between optimal fractional vertex covers and minimum vertex covers.
Theorem 4
Let x ^{∗} be an optimal fractional vertex cover and let P _{0},P _{1}⊆V be the vertices whose corresponding components of x ^{∗} are 0 or 1 respectively, then there exists a minimum vertex cover that contains P _{1} and no vertex of P _{0}.
We start with a simple lemma that gives insights into the structure of the objective space.
Lemma 3
 1.
x_{1}+LP(x)≥LP(0^{ n }).
 2.
\(\mathbf {x}_{1}+2\cdot \mathit{LP}(\mathbf {x})\geq \operatorname {OPT}\).
Proof
 1.
One can obtain a fractional vertex cover of G from y by adding the vertices that are selected by x. The cost of this cover, i.e., x_{1}+LP(x), cannot be smaller than the minimum cost of a fractional vertex cover, i.e., LP(0^{ n }).
 2.
Similarly, a vertex cover of G can be obtained by adding all vertices that have value 1/2 or 1 in y to the vertices selected by x, since each edge of G(x) must be incident with vertices of total value of at least one. The cardinality of this vertex cover is bounded by 2⋅LP(x) (i.e., the maximum number of vertices with value 1/2 or 1) plus x_{1}. Clearly, this vertex cover cannot be smaller than a minimum vertex cover (with cardinality \(\operatorname {OPT}\)).
Hence, each solution for which equality holds in one of the inequalities stated in Lemma 3 is Pareto optimal. The following lemma relates a search point x∈{0,1}^{ n } to an optimal fractional solution x ^{∗}∈[0,1]^{ n }. For x,y∈[0,1]^{ n }, we denote by x≤y the fact that x _{ i }≤y _{ i }, 1≤i≤n.
Lemma 4
Let y be an optimal fractional vertex cover of G. Every x∈{0,1}^{ n } with x≤y, is a Pareto optimal solution.
Proof
Let y′ be obtained from y by setting the value of all vertices that are selected by x to 0. The graph G(x) contains all edges that are not incident to any vertex that is selected by x. Thus y′ is a fractional vertex cover of G(x). Therefore we have y_{1}−x_{1}=y′_{1}≥LP(x), implying that LP(0^{ n })=y_{1}≥LP(x)+x_{1}. Thus, by Lemma 3, we can conclude that x_{1}+LP(x)=LP(0^{ n }) and that x is a Pareto optimal solution. □
We state a simple property that describes search points that are subsets of a minimum vertex cover. Such solutions are of particular interest as they can be turned into a minimum vertex cover by adding vertices.
Lemma 5
If x∈{0,1}^{ n } is a solution with LP(x)=LP(0^{ n })−x_{1}, then there exists a minimum vertex cover z∈{0,1}^{ n } with x≤z (i.e., every vertex selected by x is also selected by z).
Proof
Consider an optimal fractional vertex cover y of G(x) of cost LP(0^{ n })−x_{1}. We can obtain a fractional vertex cover z of G by also selecting the x_{1} vertices that are selected by x (i.e., setting the corresponding components of y to 1). Hence z is a fractional vertex cover of G of cost LP(0^{ n }), implying that it is optimal. By Theorem 4 it follows that there exists a minimum vertex cover of G that contains all vertices with value 1 in z which includes all vertices that are selected by x. □
After having pointed out some basic properties about fractional vertex covers and Pareto optimal solutions, we can now analyze our algorithms with respect to the approximation that they can achieve in expected polynomial time. It is easy to see that, for every optimal fractional vertex cover, the vertices of value 1/2 and 1 form a 2approximate vertex cover, since the fractional vertex cover has cost at most \(\operatorname {OPT}\).
Theorem 5
Using the fitness function f _{2}, the expected number of iterations of Global SEMO or Global SEMO_{ alt } until the population P contains a 2approximate vertex cover is \(O(n^{2}\log n+\operatorname {OPT}\cdot n^{2})\).
Proof
The expected number of iterations until the search point 0^{ n } is added to the population is O(n ^{2}logn) due to Lemma 1.
Let x∈P be a solution that minimizes LP(x) under the constraint that \(\mathbf {x}_{1}+2\cdot \mathit{LP}(\mathbf {x})\leq2\cdot \mathit{LP}(0^{n})\leq2\cdot \operatorname {OPT}\). Note, that 0^{ n } fulfills the constraint. If LP(x)=0 then x is a vertex cover of G and \(\mathbf {x}_{1}\leq2\cdot \mathit{LP}(0^{n})\leq2\cdot \operatorname {OPT}\) as claimed. Otherwise, every optimal fractional vertex cover of G(x) assigns at least 1/2 to some vertex, say v. Therefore, LP(x′)≤LP(x)−1/2 where x′ is obtained from x by additionally selecting v. With probability 1/n ^{2}⋅(1−1/n)^{ n−1}=Ω(1/n ^{2}) the solution x is picked in the mutation step and exactly the bit corresponding to v is flipped, leading to the solution x′. Clearly, x′_{1}=x_{1}+1 and LP(x′)≤LP(x)−1/2. Thus x′_{1}+2⋅LP(x′)≤x_{1}+2⋅LP(x)≤2⋅LP(0^{ n }), implying that x′ fulfills the constraint while having a smaller value LP(x′). Thus, x′ is added to the population since no solution in P dominates it, by selection of x.
As \(\mathit{LP}(\mathbf {x})\leq \operatorname {OPT}\), this can happen at most \(2\cdot \operatorname {OPT}\) times since each time the smallest value of LP(x) among solutions x that fulfill \(\mathbf {x}_{1}+2\cdot \mathit{LP}(\mathbf {x})\leq2\cdot \operatorname {OPT}\) is reduced by at least 1/2. Thus, the expected number of steps until the population contains a 2approximate vertex cover is at most \(O(n^{2}\log n+\operatorname {OPT}\cdot n^{2})\). □
Having shown that using the minimum cost of a fractional vertex cover as the second criteria leads to a 2approximation, we will now examine the number of iterations until Global SEMO_{ alt } has obtained an optimal solution.
To prove an upper bound on that number we consider solutions choosing r vertices such that the subgraph consisting of the noncovered edges has at most 2⋅(LP(0^{ n })−r) nonisolated vertices. Therefore we are interested in solutions x of fitness (x_{1},LP(0^{ n })−x_{1}) such that optimal fractional vertex covers of G(x) assign 1/2 to each nonisolated vertex, implying that there are exactly 2⋅(LP(0^{ n })−x_{1}) nonisolated vertices in G(x).
Lemma 6
In all but an expected number \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2})\) of iterations of Global SEMO and Global SEMO_{ alt }, the population contains a solution x with
 1.
LP(x)=LP(0^{ n })−|x|_{1} and
 2.
each optimal fractional vertex cover assigns 1/2 to each non-isolated vertex of G(x).
Proof
Consider a solution x∈P of fitness (r,LP(0^{ n })−r) with r=|x|_{1} maximal among such solutions. One of the following two cases applies:
 1.
Optimal fractional vertex covers for G(x) assign 1/2 to each non-isolated vertex. In this case x satisfies both conditions of the lemma.
 2.
There is an optimal fractional vertex cover z of G(x) which assigns 1 to at least one non-isolated vertex of G(x), say v. With probability Ω(1/n ^{2}), Global SEMO or Global SEMO_{ alt } chooses the solution x for mutation and flips exactly the bit corresponding to v, obtaining a solution x′.
Observe that LP(x′)≤LP(x)−1, since z′, i.e., z but with 0 assigned to v, is a fractional vertex cover of G(x′). Hence x′ has fitness (|x|_{1}+1,LP(0^{ n })−|x|_{1}−1) and is added to the population, since solutions of fitness (i,LP(0^{ n })−i) are Pareto optimal according to Lemma 3 (this also implies that r can never decrease). Thus the value of r increases by 1.
Since \(0\leq r\leq \mathit{LP}(0^{n})\leq \operatorname {OPT}\), the value of r can be increased at most \(\operatorname {OPT}\) times. Therefore the expected number of iterations in which case 2 applies is at most \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2})\). □
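As a concrete illustration, the mutation step of Global SEMO_{ alt } can be sketched as follows. The code is our own reading of the operator (identifiers are ours): with probability 1/2 it performs standard mutation, flipping each bit with probability 1/n; otherwise it flips each bit of a non-isolated vertex of G(x) with probability 1/2. Treating the remaining bits with the standard rate 1/n in the second branch is our assumption, chosen to be consistent with the isolated-vertex factor (1−1/n)^{ n′} used in the proofs below.

```python
import random

# Sketch (our identifiers, not the paper's): the mutation step of
# Global SEMO_alt. A non-isolated vertex of G(x) is one incident to an
# edge not yet covered by x.

def non_isolated(edges, x):
    """Vertices incident to at least one edge not covered by x."""
    verts = set()
    for u, v in edges:
        if not x[u] and not x[v]:
            verts.update((u, v))
    return verts

def mutate_alt(x, edges, rng):
    n = len(x)
    y = list(x)
    if rng.random() < 0.5:
        # Standard mutation: flip each bit with probability 1/n.
        for i in range(n):
            if rng.random() < 1.0 / n:
                y[i] = 1 - y[i]
    else:
        # Uniform step on the non-isolated part of G(x): flip each
        # non-isolated bit with probability 1/2; remaining bits keep
        # the standard rate 1/n (our assumption, see lead-in).
        ni = non_isolated(edges, x)
        for i in range(n):
            p = 0.5 if i in ni else 1.0 / n
            if rng.random() < p:
                y[i] = 1 - y[i]
    return y
```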
Both algorithms generate a search point x that selects a subset of a minimum vertex cover and such that G(x) has at most \(2\cdot(\operatorname {OPT}-|\mathbf {x}|_{1})\) non-isolated vertices in expected polynomial time and, similar to Lemma 2, the population contains such a solution in all but expected \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2})\) iterations.
In the following, we show that Global SEMO_{ alt } is able to produce from such a solution an optimal one in total expected time \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{\operatorname {OPT}})\), which implies that it is a fixed-parameter evolutionary algorithm for the Vertex Cover problem.
Theorem 6
Using the fitness function f _{2}, the expected number of iterations of Global SEMO_{ alt } until it has computed a minimum vertex cover is \(O(n^{2}\cdot \log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{\operatorname {OPT}})\).
Proof
We consider iterations of Global SEMO_{ alt } where the population contains a solution x with LP(x)=LP(0^{ n })−|x|_{1} such that each optimal fractional vertex cover assigns 1/2 to each non-isolated vertex of G(x). The expected number of iterations where this is not the case is at most \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2})\), by Lemma 6.

With probability 1/2, Global SEMO_{ alt } chooses the mutation operator that flips every bit corresponding to a non-isolated vertex of G(x) with probability 1/2.

Let V′ denote the set of vertices that must be added to x to obtain a minimum vertex cover of G extending x; each vertex of V′ is non-isolated in G(x), as otherwise the cover would not be minimum. In that case, the probability that exactly the bits corresponding to V′ are flipped (to 1) is \(\varOmega(2^{-2\cdot(\operatorname {OPT}-|\mathbf {x}|_{1})})\), since there are at most \(2\cdot(\operatorname {OPT}-|\mathbf {x}|_{1})\) vertices that are incident to uncovered edges in G(x). This includes a factor of Ω(1) for the probability that Global SEMO_{ alt } does not flip bits corresponding to isolated vertices of G(x), which is (1−1/n)^{ n′} for n′≤n isolated vertices.
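Spelling out the probability estimate: writing m for the number of non-isolated vertices of G(x), the uniform step sets each of these m bits independently with probability 1/2, so

```latex
% Probability that the uniform mutation step produces exactly the flips V':
% each of the m non-isolated bits must take its prescribed value.
\[
  \Pr\bigl[\text{exactly the bits of } V' \text{ flip}\bigr]
    = \Bigl(\tfrac{1}{2}\Bigr)^{m}
    \ge \Bigl(\tfrac{1}{2}\Bigr)^{2(\operatorname{OPT}-|\mathbf{x}|_{1})}
    = 4^{-(\operatorname{OPT}-|\mathbf{x}|_{1})},
  \qquad m \le 2\bigl(\operatorname{OPT}-|\mathbf{x}|_{1}\bigr).
\]
```

Together with the probability Ω(1/n) of selecting x and choosing this operator, this accounts for the \(n\cdot4^{\operatorname {OPT}}\) term in the bound of Theorem 6.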
In the final theorem of this section we prove that the expected number of iterations until Global SEMO_{ alt } has generated a (1+ϵ)-approximate vertex cover is bounded by \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot 4^{(1-\epsilon)\cdot \operatorname {OPT}})\). This implies that the expected approximation ratio of the vertex cover generated by Global SEMO_{ alt } improves over time (that is to say, the upper bound on that ratio decreases) to the point where it reaches 1 at expected time \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{\operatorname {OPT}})\).
Theorem 7
Using the fitness function f _{2}, the expected number of iterations of Global SEMO_{ alt } until it has generated a (1+ϵ)-approximate vertex cover, i.e., a solution of fitness (r,0) with \(r\leq(1+\epsilon)\cdot \operatorname {OPT}\), is \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{(1-\epsilon)\cdot \operatorname {OPT}})\).
Proof
Again we consider iterations where the population of Global SEMO_{ alt } contains a solution x with LP(x)=LP(0^{ n })−|x|_{1} such that each optimal fractional vertex cover assigns 1/2 to each non-isolated vertex of G(x). The expected number of iterations where this is not the case is at most \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2})\), by Lemma 6.
Let X denote the set of non-isolated vertices in G(x), let S⊆X be any minimum vertex cover of G(x), and let T=X∖S. Observe that T is an independent set and that |T|<|S|; otherwise, if |S|≤|T|, assigning 1 to each vertex of S and 0 to each vertex of T would yield a fractional vertex cover of cost at most 1/2⋅|X| but which does not assign 1/2 to each non-isolated vertex. Let \(\operatorname {OPT}'=\operatorname {OPT}-|\mathbf {x}|_{1}\), i.e., the size of minimum vertex covers of G(x). Let \(s_{1},\dots ,s_{\operatorname {OPT}'}\) and t _{1},…,t _{|T|} be any two numberings of the vertices in S and T, respectively.
With probability Ω(1/n) Global SEMO_{ alt } selects the solution x and applies the mutation that flips bits corresponding to non-isolated vertices of G(x) with probability 1/2. With probability \(\varOmega ((1/4)^{(1-\epsilon)\cdot \operatorname {OPT}'})\) all bits corresponding to \(s_{1},\dots ,s_{\lceil(1-\epsilon)\cdot \operatorname {OPT}'\rceil}\) are flipped and those corresponding to t _{1},…,t _{ α }, with \(\alpha=\min\lbrace |T|,\lceil(1-\epsilon)\cdot \operatorname {OPT}'\rceil\rbrace\), are not flipped. With probability greater than 1/2 the mutation flips bits of at least as many of the remaining vertices of S as of the remaining vertices of T, since |T|<|S|. Thus with probability \(\varOmega(1/n\cdot (1/4)^{(1-\epsilon)\cdot \operatorname {OPT}'})\) the solution x is mutated into a solution x′ that additionally selects subsets S′⊆S and T′⊆T with \(|S'|\geq(1-\epsilon)\cdot \operatorname {OPT}'+|T'|\). Again this includes a factor of Ω(1) accounting for the probability that Global SEMO_{ alt } does not flip bits corresponding to isolated vertices of G(x).
Should a solution y∈P dominate x′, this would imply \(|\mathbf {y}|_{1}+2\cdot \mathit{LP}(\mathbf {y})\leq|\mathbf {x}'|_{1}+2\cdot \mathit{LP}(\mathbf {x}')\), so in either case the population retains a solution fulfilling the constraint. Thus after expected \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{(1-\epsilon)\cdot \operatorname {OPT}})\) steps the population contains a solution x′ with \(|\mathbf {x}'|_{1}+2\cdot \mathit{LP}(\mathbf {x}')\leq(1+\epsilon)\cdot \operatorname {OPT}\).
To finish the proof, we show that such a solution leads to a (1+ϵ)-approximate vertex cover in expected polynomial time. Let y∈P be a solution with minimum value of LP(y) under the constraint that \(|\mathbf {y}|_{1}+2\cdot \mathit{LP}(\mathbf {y})\leq(1+\epsilon)\cdot \operatorname {OPT}\). If LP(y)=0 then y is a (1+ϵ)-approximate vertex cover. Otherwise there exists at least one vertex v that has value at least 1/2 in some optimal fractional vertex cover of G(y). With probability Ω(1/n ^{2}) the solution y is selected for mutation and exactly the bit corresponding to v is flipped, producing the solution y′.
Since y′ fulfills the constraint and LP(y′)<LP(y), it follows that no solution in P can dominate y′; otherwise, this solution would have been chosen in place of y. Thus with probability Ω(1/n ^{2}) the minimum value of LP(y) among solutions y that fulfill \(|\mathbf {y}|_{1}+2\cdot \mathit{LP}(\mathbf {y})\leq(1+\epsilon)\cdot \operatorname {OPT}\) is decreased by at least 1/2. Since \(0\leq \mathit{LP}(\mathbf {y})\leq \operatorname {OPT}\) the expected number of steps (from the point that x′ was introduced) until the population contains a (1+ϵ)-approximate vertex cover is bounded by \(O(\operatorname {OPT}\cdot n^{2})\). Hence the total expected number of iterations of Global SEMO_{ alt } until the population contains a (1+ϵ)-approximate vertex cover is bounded by \(O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{(1-\epsilon)\cdot \operatorname {OPT}})\). □
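To make the overall scheme concrete, here is a self-contained sketch of a Global SEMO-style loop for Vertex Cover. For brevity it uses the simpler of the two measures considered in this paper, f(x)=(|x|_{1},u(x)) with u(x) the number of uncovered edges, rather than the LP-based fitness f _{2}; all identifiers and the iteration budget are ours, and the code is illustrative rather than a faithful implementation of Global SEMO_{ alt }.

```python
import random

# Illustrative sketch (our code): a Global SEMO-style loop for Vertex
# Cover with the natural multiobjective fitness f1(x) = (|x|_1, u(x)),
# where u(x) counts the edges left uncovered. The population keeps one
# solution per non-dominated fitness value; a solution with u(x) = 0 is
# a vertex cover.

def fitness(x, edges):
    uncovered = sum(1 for u, v in edges if not x[u] and not x[v])
    return (sum(x), uncovered)

def dominates(f, g):
    """f dominates g if it is at least as good in both objectives and better in one."""
    return f[0] <= g[0] and f[1] <= g[1] and f != g

def global_semo(n, edges, steps, seed=0):
    rng = random.Random(seed)
    start = [0] * n
    pop = {fitness(start, edges): start}
    for _ in range(steps):
        x = rng.choice(list(pop.values()))
        # Standard mutation: flip each bit independently with probability 1/n.
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        fy = fitness(y, edges)
        if not any(dominates(f, fy) for f in pop):
            pop = {f: s for f, s in pop.items() if not dominates(fy, f)}
            pop[fy] = y
    covers = [s for f, s in pop.items() if f[1] == 0]
    return min(covers, key=sum) if covers else None

# Example: the path 0-1-2-3 has minimum vertex covers of size 2, e.g. {1, 2}.
```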
5 Conclusion
We have introduced the notion of fixed-parameter evolutionary algorithms to examine how the runtime of search heuristics depends on structural properties of a given problem. Using this approach, we have examined the runtime and approximation behavior of evolutionary algorithms with respect to the value of an optimal solution. Intuitively, our analyses of different multiobjective models show that additional criteria, such as minimizing the number of uncovered edges or the value of a fractional solution for the uncovered part of the graph, can lead to a preprocessing phase similar to a kernelization of the problem. By adding a random search component to the evolutionary algorithm via the alternative mutation operator, we have obtained fixed-parameter evolutionary algorithms.
There are several topics for future research. On the one hand, it seems interesting to analyze search heuristics on other problems in dependence of a given parameter. The parameter can be the value of an optimal solution, as considered in this paper, but also a parameter which restricts the given input to certain subclasses of the problem. Examples include Cluster Editing and 3-Hitting Set, both of which are FPT when parameterized by solution size, as well as Maximum Knapsack parameterized by the capacity of the knapsack. Additionally, many graph problems, such as Independent Set or Dominating Set, are FPT when parameterized by the treewidth of the input graph. Showing that an evolutionary algorithm profits from small values of treewidth might be a rather challenging problem, as the FPT algorithms for the two mentioned problems employ dynamic programming. On the other hand, the use of the LP relaxation as the second criterion to guide the search process may be of independent interest, and we expect this criterion to be applicable to other problems as well. A very interesting candidate might be the Multiway Cut problem, which is known to have a half-integral LP formulation [13].
Open Access
This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.