Algorithmica, Volume 65, Issue 4, pp 754–771

# Fixed-Parameter Evolutionary Algorithms and the Vertex Cover Problem

• Stefan Kratsch
• Frank Neumann

## Abstract

In this paper, we consider multi-objective evolutionary algorithms for the Vertex Cover problem in the context of parameterized complexity. We consider two different measures for the problem. The first measure is a very natural multi-objective one for the use of evolutionary algorithms and takes into account the number of chosen vertices and the number of edges that remain uncovered. The second fitness function is based on a linear programming formulation and turns out to give better results. We point out that both approaches lead to a kernelization for the Vertex Cover problem. Based on this, we show that evolutionary algorithms solve the Vertex Cover problem efficiently if the size of a minimum vertex cover is not too large, i.e., the expected runtime is bounded by $$O(f(\operatorname {OPT})\cdot n^{c})$$, where c is a constant and f a function that depends only on OPT. This shows that evolutionary algorithms are randomized fixed-parameter tractable algorithms for the Vertex Cover problem.

## Keywords

Evolutionary algorithms · Fixed-parameter tractability · Vertex cover · Randomized algorithms

## 1 Introduction

General purpose algorithms, such as evolutionary algorithms [8] and ant colony optimization [6], have been shown to be successful problem solvers for a wide range of combinatorial optimization problems. Such techniques make use of random decisions, which allows them to be regarded as a special class of randomized algorithms. In particular, if a problem is new and there are not enough resources such as time, money, or knowledge about the problem to develop specific algorithms, general purpose algorithms often produce good results without a large development effort. Usually, it is only necessary to think about a representation of possible solutions, a function to measure the quality of solutions, and operators that produce a new solution (or a set of solutions) from a given solution (or a set of solutions).

The general approach of an evolutionary algorithm is to start with a set of candidate solutions for a given problem. The solutions of this set can be constructed by some heuristics or chosen randomly from the underlying search space. Such solutions are improved iteratively over time. In each iteration, the current set of solutions (called the parent population) produces a new set of solutions (called the offspring population) by variation operators such as crossover and mutation. Based on a selection method motivated by Darwin’s principle of the survival of the fittest, a new parent population is constituted by selecting solutions from the parent and the offspring population. The process is iterated until a stopping criterion is fulfilled.

Taking such a general approach to solve a given problem, it is clear that we cannot hope to beat techniques that are tailored to the given task. However, such general approaches find many applications when no good problem-specific algorithm is available. In addition to many experimental studies that confirm the success of these techniques on problems from different domains, there has been increasing interest in understanding such algorithms also in a rigorous way. This line of research treats such algorithms as a class of randomized algorithms and analyzes them in a classical fashion, i.e., with respect to their runtime behavior and approximation ability in expected polynomial time. The results obtained in this research area confirm that general purpose approaches often come up with optimal solutions quickly even if they do not use problem-specific knowledge. Problems that have been studied within this line of research include the shortest path problem [5, 25], maximum matchings [14], minimum spanning trees [18, 21], minimum (multi-)cuts [19, 20], and covering and scheduling problems [28]. A comprehensive presentation of the different results obtained in the field of combinatorial optimization can be found in [22]. Additionally, recent theoretical studies have investigated the learning ability of evolutionary algorithms [9, 26].

For NP-hard problems we cannot hope to prove practicality in the sense of a polynomial upper bound on the worst-case runtime, even though an algorithm might perform very well in practice. Nevertheless, the notion of fixed-parameter tractability may be helpful to explore this situation as well as to guide further algorithm design. Fixed-parameter tractability is a central concept of parameterized complexity. In this field, the complexity of input instances is measured in a two-dimensional way considering not only the size of the input but also one or more parameters, e.g., solution size, structural restrictions, or quality of approximation. One hopes to confine the inevitable combinatorial explosion in the runtime to a function in the parameter, with only polynomial dependence on the input size. The idea is that even large instances may exhibit a very restricted structure and can therefore be considered easy to solve, despite their size. Let us briefly introduce the central notions (following Flum and Grohe [10]).

A parameterized problem $$(\mathcal {Q},\kappa)$$ consists of a language $$\mathcal {Q}$$ over a finite alphabet Σ and a parameterization $$\kappa\colon\varSigma^*\to\mathbb{N}$$. The problem $$\mathcal {Q}$$ is fixed-parameter tractable (FPT) if there is an algorithm that decides whether $$x\in \mathcal {Q}$$ in time $$f(\kappa(x))\cdot|x|^{O(1)}$$, i.e., in time with arbitrary (but computable) dependence on the parameter but only polynomial dependence on the input size. Such an algorithm is called an fpt-algorithm for $$(\mathcal {Q},\kappa)$$. A Monte Carlo fpt-algorithm for $$(\mathcal {Q},\kappa)$$ is a randomized fpt-algorithm with runtime $$f'(\kappa(x))\cdot|x|^{O(1)}$$ that, on input $$x\in\varSigma^*$$, accepts with probability at least 1/2 if $$x\in \mathcal {Q}$$ and with probability 0 if $$x\notin \mathcal {Q}$$. For an introduction to parameterized complexity we point the interested reader to [7, 10].

In this paper we want to adopt a parameterized view on evolutionary algorithms for Vertex Cover and consider their expected runtime behavior related to the minimum cardinality of a vertex cover of the input graph, denoted by OPT. We examine when evolutionary algorithms compute a solution quickly if OPT is small, i.e., in expected time $$O(f(\operatorname {OPT}) \cdot n^{c})$$. We call an evolutionary algorithm with such a runtime bound a fixed-parameter evolutionary algorithm.

An important stepping stone in the analysis of our algorithms will be the fact that they create partial solutions that can be considered problem kernels of the original instance, given a feasible secondary measure. A kernelization or reduction to a problem kernel is a special form of polynomial-time data reduction for parameterized problems that produces an equivalent (and usually smaller) instance whose size is bounded by a function in the original parameter. It is known that a parameterized problem is fixed-parameter tractable if and only if there exists a kernelization for the problem (cf. [10]). A well-known fixed-parameter tractable problem is the (standard) parameterized Vertex Cover problem. Given an undirected graph and an integer k, one has to decide whether there exists a set of at most k vertices such that each edge contains at least one of these vertices, parameterized by k. This problem can be solved in time $$O(1.2738^{k}+kn)$$ via kernelization followed by a bounded search tree algorithm [3].

The Vertex Cover problem has also been studied in the field of evolutionary computation from a theoretical point of view. Rigorous runtime analyses have been given for the well-known (1+1) EA and population-based algorithms for single-objective optimization [11, 23, 24]. Additionally, it has been shown that a multi-objective model can help the optimization process of an evolutionary algorithm to find good solutions more quickly than a single-objective one [12]. Due to the results obtained in [12], we consider two different multi-objective models for the Vertex Cover problem. Both models take as the first objective the goal of minimizing the number of chosen vertices. The second criterion should be a penalty function that has to be minimized so that a feasible vertex cover is obtained.

Minimizing the number of uncovered edges as the second objective has already been investigated in [12], and we study this approach with respect to the approximation quality depending on the value of OPT. Afterwards, we examine this approach with respect to the expected runtime depending on OPT and show that it leads to fixed-parameter evolutionary algorithms. Our second approach is to take the minimum cost of a fractional vertex cover for the uncovered edges as the second objective. We show that this approach leads to a 2-approximation for Vertex Cover in expected polynomial time and to fixed-parameter evolutionary algorithms with runtime $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{\operatorname {OPT}})$$. For the case where one is interested in computing a (1+ϵ)-approximation, we reduce the runtime bound of this approach to $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{(1-\epsilon)\cdot \operatorname {OPT}})$$.

The outline of the paper is as follows. In Sect. 2, we introduce the Vertex Cover problem as well as the algorithms and problem formulations that are subject to our investigations. In Sects. 3 and 4 we consider two different multi-objective models for Vertex Cover. In Sect. 5 we summarize our results and give possible directions for further research.

## 2 Preliminaries

The Vertex Cover problem is one of the well-known NP-hard combinatorial optimization problems. Given an undirected graph G=(V,E), where |V|=n and |E|=m, the aim is to find a subset V′⊆V of minimum cardinality such that $$e\cap V' \not= \emptyset$$ holds for each e∈E. Many simple approximation algorithms achieve a worst-case approximation ratio of 2 (cf. [4]). For example, such an approximation can be achieved in polynomial time by computing a maximal matching in the given graph and choosing the two endpoints of each matching edge.
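This matching-based 2-approximation is easy to sketch. The following Python function (the name and the edge-list input format are our own choices, for illustration) greedily builds a maximal matching and returns both endpoints of every matching edge:

```python
def matching_vertex_cover(edges):
    """2-approximation for Vertex Cover: greedily build a maximal
    matching and take both endpoints of every matching edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # the edge is not yet matched: add both endpoints
            cover.update((u, v))
    return cover
```

Since the matched edges are vertex-disjoint and any vertex cover must contain at least one endpoint of each of them, the returned set has size at most twice the minimum.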

The Vertex Cover problem can be formulated as an integer linear program (ILP) in the following way:
$$\min\ \sum_{i=1}^{n} x_i \quad \text{s.t.}\quad x_i+x_j\geq 1 \ \text{ for all } \{v_i,v_j\}\in E,\qquad x_i\in\{0,1\},\ 1\leq i\leq n.$$

Relaxing the integrality constraint $$x_i\in\{0,1\}$$ to fractional values between 0 and 1, i.e., $$x_i\in[0,1]$$, yields a linear program formulation of the Fractional Vertex Cover problem. Clearly, for any graph, the cost of an optimal fractional vertex cover is a lower bound on the cardinality of a minimum (integral) vertex cover. The dual problem of Fractional Vertex Cover is Fractional Maximum Matching, i.e., Maximum Matching with relaxed integrality.

Often the (1+1) evolutionary algorithm ((1+1) EA for short) is taken as a baseline algorithm in the theoretical analysis of evolutionary algorithms. Algorithm 1 shows the (1+1) EA for the minimization of a fitness function $$f\colon\{0,1\}^n\to\mathbb{R}$$. It starts with a solution chosen uniformly at random from the underlying search space and produces in each iteration one offspring by mutation. The offspring replaces the parent iff the offspring is not worse than the parent according to the used fitness function. It has been pointed out that simple evolutionary algorithms cannot achieve a non-trivial approximation guarantee: there are instances where the (1+1) EA cannot obtain a better approximation than a factor of Θ(n) in expected polynomial time [12]. In contrast to this, a multi-objective model in conjunction with a simple evolutionary algorithm leads to an O(log n)-approximation on the much broader class of set cover problems.
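The (1+1) EA for minimization can be sketched as follows (a minimal illustration; the function name and interface are ours, not the paper's notation):

```python
import random

def one_plus_one_ea(f, n, steps):
    """(1+1) EA minimizing f on {0,1}^n: start from a uniformly random
    bitstring; in each iteration flip every bit independently with
    probability 1/n and keep the offspring iff it is not worse."""
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        y = [b ^ (random.random() < 1 / n) for b in x]
        if f(y) <= f(x):  # accept iff not worse than the parent
            x = y
    return x
```

The acceptance condition `f(y) <= f(x)` implements "the offspring replaces the parent iff it is not worse".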

In the case of multi-objective optimization, the fitness function maps from the search space X to a vector of real values, i.e., $$f\colon X\to\mathbb{R}^k$$. We consider the case where each of the k objectives should be minimized. For two search points x∈X and x′∈X, f(x)≤f(x′) holds iff $$f_i(\mathbf{x})\leq f_i(\mathbf{x}')$$ for all 1≤i≤k. In this case, we say that x weakly dominates x′. A search point x dominates a search point x′ iff f(x)≤f(x′) and $$f(\mathbf{x}) \not= f(\mathbf{x}')$$. In this case x is considered strictly better than x′. The notions of dominance and weak dominance transfer to the corresponding objective vectors. A Pareto optimal search point x is a search point that is not dominated by any other search point in X. The set of non-dominated search points is called the Pareto optimal set and the set of the corresponding objective vectors is called the Pareto front.

We follow this approach and examine the multi-objective model for Vertex Cover in conjunction with the simple multi-objective evolutionary algorithm called Global SEMO (Global Simple Evolutionary Multi-Objective Optimizer). This algorithm has already been studied for a wide range of multi-objective optimization problems and can be considered as the generalization of the (1+1) EA to the multi-objective case.

Global SEMO (see Algorithm 2) maintains, for each non-dominated objective vector found so far, a single solution; in this way it preserves an approximation of the Pareto front. The algorithm starts with an initial solution that is chosen uniformly at random from the underlying search space. In each iteration, a solution x from the current population P is chosen uniformly at random. A mutation operator flipping each bit of x with probability 1/n is applied to obtain an offspring x′. This solution x′ is introduced into the population iff it is not dominated by any other solution in the population. If this is the case, all solutions that are weakly dominated by x′ are deleted from P.
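The described behavior of Global SEMO can be sketched in Python as follows (an illustrative rendering, assuming all objectives are to be minimized; names and the dict-based population are ours):

```python
import random

def global_semo(f, n, steps):
    """Global SEMO for a multi-objective fitness f: {0,1}^n -> tuple,
    every objective to be minimized.  Keeps one solution per
    non-dominated objective vector found so far."""
    def weakly_dominates(a, b):
        # a weakly dominates b iff a is componentwise <= b
        return all(ai <= bi for ai, bi in zip(a, b))

    x = tuple(random.randint(0, 1) for _ in range(n))
    pop = {f(x): x}  # objective vector -> one representative solution
    for _ in range(steps):
        parent = random.choice(list(pop.values()))
        child = tuple(b ^ (random.random() < 1 / n) for b in parent)
        fc = f(child)
        # accept the child iff no stored vector strictly dominates fc
        if not any(weakly_dominates(fv, fc) and fv != fc for fv in pop):
            # delete all solutions weakly dominated by the child
            for fv in [v for v in pop if weakly_dominates(fc, v)]:
                del pop[fv]
            pop[fc] = child
    return pop
```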

Denote by E(x)⊆E the set of edges of which at least one endpoint is chosen by x. As each edge e∈E has to be covered by at least one vertex to obtain a vertex cover, it may be helpful to flip vertices that are incident with uncovered edges with a larger probability. This leads to the following alternative mutation operator.

In the alternative mutation operator, vertices that are incident with an uncovered edge are flipped with a larger probability of 1/2. These are exactly the non-isolated vertices of $$G(\mathbf{x})=(V,E\setminus E(\mathbf{x}))$$. Replacing the mutation operator of Global SEMO by Algorithm 3, we call the resulting algorithm Global SEMO alt . The fitness function
$$f_1(\mathbf {x}) = \bigl(|\mathbf {x}|_1, u(\mathbf {x})\bigr),$$
where $$|\mathbf{x}|_1$$ denotes the number of chosen vertices and u(x) denotes the number of edges that are not covered by any vertex chosen by x, has already been considered in [12]. Additionally, we also examine the fitness function
$$f_2(\mathbf {x}) = \bigl(|\mathbf {x}|_1, \mathit{LP}(\mathbf {x})\bigr),$$
where LP(x) denotes the optimum value of the relaxed Vertex Cover ILP for G(x), i.e., the cost of an optimal fractional vertex cover of G(x).
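The alternative mutation operator can be sketched as follows, under the assumption (based on the description above and the factor-1/2 remark in the proof of Lemma 1) that the operator first decides with probability 1/2 between the standard mutation and the biased variant:

```python
import random

def alt_mutation(x, edges):
    """Sketch of the alternative mutation operator: with probability 1/2
    apply standard mutation (flip each bit with probability 1/n);
    otherwise flip each vertex incident with an uncovered edge with
    probability 1/2 and every other bit with probability 1/n."""
    n = len(x)
    uncovered = [(u, v) for u, v in edges if not x[u] and not x[v]]
    hot = {w for e in uncovered for w in e}  # non-isolated vertices of G(x)
    if random.random() < 0.5:
        return [b ^ (random.random() < 1 / n) for b in x]
    return [b ^ (random.random() < (0.5 if i in hot else 1 / n))
            for i, b in enumerate(x)]
```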

Our goal is to analyze our algorithms until they have found an optimal solution or a good approximation of an optimal one. Our algorithms using the function $$f_1$$ (or $$f_2$$) have produced an r-approximation for the Vertex Cover problem iff they have produced a solution x with objective vector $$f_1(\mathbf{x})=(|\mathbf{x}|_1,0)$$ (or $$f_2(\mathbf{x})=(|\mathbf{x}|_1,0)$$) where $$\frac{|\mathbf {x}|_{1}}{\operatorname {OPT}} \leq r$$.

To measure the runtime of our algorithms, we consider the number of fitness evaluations T until a minimum vertex cover or a good approximation of such a solution has been obtained. We define T(k) as the random variable that measures the number of fitness evaluations until a vertex cover of size at most k appears in the population. The expected optimization time refers to the expected number of fitness evaluations $$E[T(\operatorname {OPT})]$$ until an optimal solution has been obtained. Often we consider the expected time to achieve intermediate goals, e.g., partial solutions that fulfill certain properties.

If $$\operatorname {OPT}\leq k$$, then by Markov’s inequality [16] the probability that an evolutionary algorithm with expected optimization time E[T(k)] finds a vertex cover of size at most k within at most 2⋅E[T(k)] iterations is at least 1/2. Clearly, if $$\operatorname {OPT}>k$$ then no solution of cost at most k can be found. Thus running a fixed-parameter evolutionary algorithm for twice the expected optimization time for $$\operatorname {OPT}=k$$ yields a Monte Carlo fpt-algorithm for the decision version.
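This argument can be rendered as a generic one-sided-error decision procedure (a sketch only: a simple hill-climber stands in for the analyzed algorithms, and `budget` stands in for twice the expected optimization time; all names are illustrative):

```python
import random

def decide_vertex_cover(edges, n, k, budget):
    """Monte Carlo decision sketch: search for `budget` iterations and
    accept iff a vertex cover of size at most k appears.  Acceptance
    comes with a certificate, so the error is one-sided."""
    def uncovered(z):
        return sum(1 for u, v in edges if not z[u] and not z[v])

    x = [0] * n
    for _ in range(budget):
        y = [b ^ (random.random() < 1 / n) for b in x]
        if uncovered(y) == 0 and sum(y) <= k:
            return True  # a valid cover of size <= k certifies a yes-instance
        # hill-climb: prefer fewer uncovered edges, then fewer vertices
        if (uncovered(y), sum(y)) <= (uncovered(x), sum(x)):
            x = y
    return False  # may err on yes-instances (one-sided error)
```

The procedure never accepts a no-instance, matching the Monte Carlo fpt-algorithm definition from the introduction.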

For both introduced fitness functions, the search point $$0^n$$ is Pareto optimal, as the first objective of both functions is to minimize the number of ones in the bitstring. In the remaining part of the paper, we will proceed from this solution towards a minimum vertex cover or a vertex cover of a certain approximation quality.

### Lemma 1

The expected number of iterations of Global SEMO or Global SEMO alt until the population contains the search point $$0^n$$ is $$O(n^2\log n)$$ for the fitness functions $$f_1$$ and $$f_2$$.

### Proof

The size of the population is upper bounded by n+1 as the population keeps at most one solution x for each fixed number of ones in the bitstring. We consider in each step the individual $$\mathbf {y}= \operatorname {argmin}_{\mathbf {z}\in P} |\mathbf {z}|_{1}$$. The probability of choosing this individual in the next step is at least $$\frac{1}{n+1}$$. Let $$i=|\mathbf{y}|_1$$ be the number of ones in this bitstring. The probability that Global SEMO produces a solution with a smaller number of ones is lower bounded by
$$\frac{1}{n+1} \cdot\frac{i}{n}\cdot\biggl(1-\frac{1}{n} \biggr)^{n-1}\geq\frac{1}{n+1} \cdot\frac{i}{n}\cdot \frac{1}{e} = \varOmega\biggl(\frac{i}{n^2} \biggr),$$
where $$\frac{i}{n}$$ is the probability of flipping a single one, and $$(1-\frac{1}{n} )^{n-1}$$ is the probability of not flipping any other bit. (Note that for Global SEMO alt there is an extra factor of $$\frac{1}{2}$$ for choosing the mutation operator which flips each bit with probability $$\frac{1}{n}$$.) Hence, the expected waiting time until a solution with at most i−1 ones has been produced is $$O(n^2/i)$$. Using the method of fitness-based partitions [27] and summing over the different values of i, the expected time until the search point $$0^n$$ has been included into the population is $$\sum_{i=1}^{n} O(n^{2}/i) = O(n^{2} \log n)$$. □

After an expected number of $$O(n^2\log n)$$ iterations, both algorithms working on the fitness function $$f_1$$ or $$f_2$$ introduce the search point $$0^n$$ into the population. Afterwards, this search point stays in the population. The population size of both algorithms is upper bounded by n+1. This may be used to give a bound on the expected time to reach a minimum vertex cover depending on $$\operatorname {OPT}$$.

Let x be an arbitrary solution that remains in the population during the optimization process. The probability of producing a specific solution x′ that has Hamming distance c to x in the next step is lower bounded by
$$\frac{1}{2(n+1)} \cdot\biggl(\frac{1}{n} \biggr)^c \cdot \biggl(1-\frac{1}{n} \biggr)^{n-c} = \varOmega\bigl(n^{-(c+1)} \bigr),$$
which implies that the expected time to produce such a solution is $$O(n^{c+1})$$. Hence, both algorithms obtain an optimal solution in expected time $$O(n^{\operatorname {OPT}+1})$$ after they have obtained the search point $$0^n$$. Note that this time bound is not sufficient for our definition of fixed-parameter evolutionary algorithms.

## 3 Minimizing the Number of Uncovered Edges

In this section we consider the effect of minimizing the number of uncovered edges as the second criterion by investigating the fitness function $$f_1$$. Note that this approach has already been investigated in [12]. Friedrich et al. [12] show that there are bipartite graphs where the (1+1) EA cannot achieve a good approximation in expected polynomial time, whereas running Global SEMO on these instances solves the problem quickly. For general graphs, it has been shown that Global SEMO achieves an O(log n)-approximation in expected polynomial time.

In the following, we show a bound on the approximation quality depending on the value of $$\operatorname {OPT}$$ that Global SEMO or Global SEMO alt can achieve in polynomial time. Furthermore, we prove that under this secondary measure the expected number of iterations until Global SEMO alt finds a minimum vertex cover is bounded by
$$O\bigl(\operatorname {OPT}\cdot n^4+n\cdot2^{\operatorname {OPT}+\operatorname {OPT}^2}\bigr).$$
A central idea in our proofs is to consider a solution xP where the set of vertices is a subset of a minimum vertex cover and such that G(x) does not contain vertices of degree greater than $$\operatorname {OPT}$$. The following lemma shows that Global SEMO and Global SEMO alt spend an expected number of $$O(\operatorname {OPT}\cdot n^{4})$$ steps on producing such solutions during the run of the algorithm.

### Lemma 2

Using the fitness function $$f_1$$, the expected number of iterations of Global SEMO and Global SEMO alt in which the population does not contain a solution x that fulfills the properties that
1. 1.

the vertices chosen by  x constitute a subset of a minimum vertex cover of G and

2. 2.

the vertices of G(x) have degree at most  $$\operatorname {OPT}$$,

is upper bounded by  $$O(\operatorname {OPT}\cdot n^{4})$$.

### Proof

We know that the search point $$0^n$$ is introduced into the population after an expected number of $$O(n^2\log n)$$ iterations. Assuming that the search point $$0^n$$ has already been introduced into the population, we show that an expected number of $$O(\operatorname {OPT}\cdot n^{4})$$ iterations occur where the population does not contain a solution with the desired properties.

We denote by V′⊆V the set of vertices that have degree larger than $$\operatorname {OPT}$$ in G. Observe that every vertex cover of cardinality $$\operatorname {OPT}$$ contains V′: a vertex cover that does not select a vertex of degree greater than $$\operatorname {OPT}$$ must contain all its neighbors, which leads to a cardinality greater than $$\operatorname {OPT}$$. We assume that V′≠∅ as otherwise $$0^n$$ has the desired properties.

The idea to prove the lemma is to investigate a potential taking $$O(|E| \cdot \operatorname {OPT})$$ different values. If the population does not contain a solution with properties 1 and 2, the potential is decreased with probability $$\varOmega(1/n^2)$$, which leads to the stated upper bound on the number of steps in which no solution of the population fulfills the desired properties.

Let $$s_{0}, s_{1},\dots,s_{\operatorname {OPT}}$$ be integer values such that $$s_j$$ is the smallest value of u(x) for any search point x in P choosing at most j vertices, i.e., $$|\mathbf{x}|_1\leq j$$. Note that each $$s_j$$ cannot increase during the run of the algorithm as only non-dominated solutions are accepted.

We investigate the potential of a population P given by
$$\operatorname {pot}(P) = \sum_{j=1}^{\mathrm{OPT}} s_j \leq|E| \cdot \operatorname {OPT}.$$
Let i be the largest integer such that P contains solutions $$\mathbf{x}_0,\dots,\mathbf{x}_i$$ with fitness $$(0,s_0),\dots,(i,s_i)$$ that select only vertices of V′. We will now consider different cases to show that either $$\mathbf{x}_i$$ has the desired properties or that, with probability $$\varOmega(1/n^2)$$, a solution is generated that improves at least one of the $$s_j$$.
1. 1.

If the graph $$G(\mathbf{x}_i)$$ contains no vertex of degree larger than $$\operatorname {OPT}$$, then $$\mathbf{x}_i$$ fulfills properties 1 and 2 by the choice of i. For the other cases we assume that $$G(\mathbf{x}_i)$$ contains a vertex of degree greater than $$\operatorname {OPT}$$, say v.

2. 2.
If $$s_{i}-s_{i+1}\leq \operatorname {OPT}$$ (note: this includes the case where P does not contain any solution x with $$|\mathbf{x}|_1=i+1$$, implying that $$s_{i+1}=s_i$$) then with probability $$\varOmega(1/n^2)$$ Global SEMO or Global SEMO alt chooses the search point $$\mathbf{x}_i$$ and mutates it into a point $$\mathbf {x}'_{i+1}$$ that additionally selects v. Clearly
$$u\bigl(\mathbf {x}'_{i+1}\bigr)=u(\mathbf {x}_i)-\deg_{G(\mathbf {x}_i)}(v)<s_i- \operatorname {OPT}.$$
Thus $$u(\mathbf {x}'_{i+1})<s_{i+1}$$, implying that $$s_{i+1}$$ is decreased by at least one.

3. 3.
If $$s_{i}-s_{i+1}>\operatorname {OPT}$$ then P contains a solution $$\mathbf{x}_{i+1}$$ of fitness $$(i+1,s_{i+1})$$, and $$\mathbf{x}_{i+1}$$ selects at least one vertex u∈V∖V′ by the choice of i. With probability $$\varOmega(1/n^2)$$ the search point $$\mathbf{x}_{i+1}$$ is chosen and mutated into a solution $$\mathbf {x}'_{i}$$ by flipping only the bit corresponding to u. Thus
$$u\bigl(\mathbf {x}'_i\bigr)=u(\mathbf {x}_{i+1})+\deg_{G(\mathbf {x}'_i)}(u) \leq s_{i+1}+\operatorname {OPT}.$$
Therefore $$u(\mathbf {x}'_{i})<s_{i}$$, so $$s_i$$ is improved by at least one.

In each case, we get that either P contains a solution as claimed in the lemma or with probability $$\varOmega(1/n^2)$$ the potential decreases by at least one. The potential can take on only $$O(\operatorname {OPT}\cdot|E|)$$ different values, which completes the proof. □

We have seen that in all but expected $$O(\operatorname {OPT}\cdot n^{4})$$ iterations of Global SEMO or Global SEMO alt the population contains a solution x that is a subset of some minimum vertex cover and such that G(x) has maximum degree $$\operatorname {OPT}$$. Such partial solutions will be useful when proving an upper bound on the expected number of iterations of Global SEMO alt to generate a minimum vertex cover, while also implying that an $$\operatorname {OPT}$$-approximate vertex cover is produced in an expected polynomial number of iterations of Global SEMO or Global SEMO alt . One can easily see that G(x) has at most $$(\operatorname {OPT}-|\mathbf {x}|_{1})\cdot \operatorname {OPT}$$ uncovered edges, since $$(\operatorname {OPT}-|\mathbf {x}|_{1})$$ vertices of degree at most $$\operatorname {OPT}$$ suffice to cover all of them.

Though these partial solutions are obtained in a randomized fashion aiming to cover as many edges as possible with few vertices, they are strongly related to deterministic preprocessing for the parameterized Vertex Cover problem. To decide whether a given graph has a vertex cover of size at most k one may greedily select all vertices of degree larger than k. In fact, if v is a vertex of degree larger than k, then G has a vertex cover of cardinality k if and only if G−v has a vertex cover of cardinality k−1. In conjunction with deleting isolated vertices this leads to an equivalent reduced instance with $$O(k^2)$$ vertices; this technique is known as Buss’ kernelization (cf. [7]).
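This reduction can be sketched as follows (illustrative names; the graph is given as an edge list, and the no-instance check uses the fact that a size-k cover in a graph of maximum degree k covers at most k² edges):

```python
def buss_kernel(edges, k):
    """Buss' kernelization sketch: a vertex of degree > k must be in any
    vertex cover of size <= k, so select it and decrement k; repeat.
    Isolated vertices never appear in the edge list.  Returns
    (remaining edges, forced vertices, reduced k), or None if the
    instance is recognized as a no-instance."""
    forced = set()
    changed = True
    while changed:
        changed = False
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:
                forced.add(v)  # v is in every cover of size <= k
                edges = [e for e in edges if v not in e]
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None  # more than k^2 edges cannot be covered by k vertices
    return edges, forced, k
```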

These structural insights can be used to show that our algorithms achieve an $$\operatorname {OPT}$$-approximation in expected polynomial time when using the fitness function f 1.

### Theorem 1

Using the fitness function $$f_1$$, the expected number of iterations of Global SEMO or Global SEMO alt until an $$\operatorname {OPT}$$-approximation is computed is $$O(\operatorname {OPT}\cdot n^{4})$$.

### Proof

According to Lemma 1 and Lemma 2, we already know that the expected number of steps where the population does not contain a solution with the properties stated in Lemma 2 is $$O(\operatorname {OPT}\cdot n^{4})$$. In the following, we consider only steps where such a solution exists.

Thus it is ensured that there is a solution x in the population for which $$|\mathbf {x}|_{1}\leq \operatorname {OPT}$$ and the maximum degree of G(x) is at most $$\operatorname {OPT}$$. This implies $$u(\mathbf {x})\leq(\operatorname {OPT}-|\mathbf {x}|_{1})\cdot \operatorname {OPT}$$ and $$|\mathbf {x}|_{1}+ u(\mathbf {x}) \leq \operatorname {OPT}^{2}$$. If x is dominated by any solution x′ then clearly $$|\mathbf {x}'|_{1}+ u(\mathbf {x}') \leq \operatorname {OPT}^{2}$$. Therefore, in all later steps the population contains at least one solution y with $$|\mathbf {y}|_{1}+u(\mathbf {y})\leq \operatorname {OPT}^{2}$$.

Let u denote the minimum value of u(x) among solutions x∈P with $$|\mathbf {x}|_{1}+u(\mathbf {x})\leq \operatorname {OPT}^{2}$$. Let y∈P be a solution with $$|\mathbf {y}|_{1}+ u(\mathbf {y})\leq \operatorname {OPT}^{2}$$ and u(y)=u. If u(y)=0, it follows that y selects at most $$\operatorname {OPT}^{2}$$ vertices, which form a vertex cover. Otherwise, at least one vertex v of G(y) is incident with an (uncovered) edge.

The probability that y is selected and that it is mutated into a solution y′ that additionally selects v is $$\varOmega(1/n^2)$$ for Global SEMO and Global SEMO alt . Clearly the solution y′ fulfills $$|\mathbf{y}'|_1+u(\mathbf{y}')\leq|\mathbf{y}|_1+u(\mathbf{y})$$ and u(y′)<u(y). Observe that y′ cannot be dominated by any solution in P due to $$|\mathbf{y}'|_1+u(\mathbf{y}')\leq|\mathbf{y}|_1+u(\mathbf{y})$$ and the choice of y, implying that it is added to P, decreasing u by at least 1.

If the solution y with u(y)=u and $$|\mathbf {y}|_{1}+u(\mathbf {y})\leq \operatorname {OPT}^{2}$$ is removed from the population, then there must be a solution, say z, that dominates it. By u(z)≤u(y) and $$|\mathbf{z}|_1\leq|\mathbf{y}|_1$$ this cannot increase the value of u. Clearly $$0\leq u\leq \operatorname {OPT}^{2}$$, hence it can be decreased at most $$\operatorname {OPT}^{2}$$ times.

Thus, after an expected number of $$O(\operatorname {OPT}^{2}\cdot n^{2} + \operatorname {OPT}\cdot n^{4})$$ iterations of Global SEMO or Global SEMO alt , a solution with fitness (S,0), where $$S\leq \operatorname {OPT}^{2}$$, is obtained. □

After having shown that both algorithms achieve an $$\operatorname {OPT}$$-approximation in expected polynomial time, we will bound the time until Global SEMO alt achieves an optimal solution.

### Theorem 2

Using the fitness function $$f_1$$, the expected number of iterations of Global SEMO alt until it has computed a minimum vertex cover is $$O(\operatorname {OPT}\cdot n^{4}+n\cdot2^{\operatorname {OPT}+\operatorname {OPT}^{2}})$$.

### Proof

As in the proof of Theorem 1, we assume that P contains a solution x such that G(x) has maximum degree at most $$\operatorname {OPT}$$ and there exists a minimum vertex cover S that contains the vertices selected by x. Due to Lemma 2, the expected number of iterations of Global SEMO alt in which the population does not fulfill these properties is $$O(\operatorname {OPT}\cdot n^{4})$$, i.e., adding this term to the obtained bound accounts for the assumption.

The probability of choosing x in the next mutation step is Ω(1/n). Choosing all the remaining vertices of S and not flipping any other bit in x leads to a minimum vertex cover. The graph G(x) has maximum degree $$\operatorname {OPT}$$ and it has a vertex cover of size $$(\operatorname {OPT}-|\mathbf {x}|_{1})$$. Each vertex in such a vertex cover can be adjacent to at most $$\operatorname {OPT}$$ non-isolated vertices (and each edge is incident with at least one vertex of the vertex cover), implying that G(x) has at most $$(\operatorname {OPT}-|\mathbf {x}|_{1})+(\operatorname {OPT}-|\mathbf {x}|_{1})\cdot \operatorname {OPT}\leq \operatorname {OPT}+\operatorname {OPT}^{2}$$ non-isolated vertices.

We consider the mutation of x which flips vertices incident with non-covered edges with probability 1/2. Note that with probability $$(1-1/n)^{n'}=\varOmega(1)$$ no bit corresponding to any of the n′≤n isolated vertices of G(x) is flipped. The probability of flipping exactly the bits corresponding to the missing vertices of S is therefore $$\varOmega(2^{-(\operatorname {OPT}+\operatorname {OPT}^{2})})$$, since there are at most $$\operatorname {OPT}+\operatorname {OPT}^{2}$$ non-isolated vertices. Hence, the expected time until a minimum vertex cover has been computed is upper bounded by $$O(\operatorname {OPT}\cdot n^{4} + n\cdot2^{\operatorname {OPT}+\operatorname {OPT}^{2}})$$. □

## 4 Fractional Vertex Covers

In this section, we use the minimum cost of a fractional vertex cover for the uncovered edges as the second criterion. For every search point x this gives an estimate of how many vertices are needed to complete the set of selected vertices to a vertex cover of G (or of G(x)). We denote this cost by LP(x), as it is the optimal cost of solutions to the Vertex Cover ILP with relaxed integrality constraints, i.e., $$0\leq x_i\leq1$$ in place of $$x_i\in\{0,1\}$$. Balinski [1] showed that all basic feasible solutions (or extremal points) of the Fractional Vertex Cover LP are half-integral.

### Theorem 3

[1]

Every basic feasible solution x of the relaxed Vertex Cover ILP is half-integral, i.e., $$\mathbf{x}\in\{0,1/2,1\}^n$$.

Due to this result, optimal fractional vertex covers can be computed very efficiently via a maximum matching of an auxiliary bipartite graph (cf. [2]). Throughout the section we will implicitly assume that chosen fractional vertex covers are half-integral.
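The matching-based computation can be sketched as follows (our sketch, not the paper's implementation): by half-integrality and König's theorem, the LP optimum equals half the maximum matching of the bipartite double cover of the graph. Kuhn's augmenting-path algorithm suffices for small instances; LP(x) is then obtained by applying the same routine to the uncovered edges of G(x).

```python
def lp_vertex_cover(n, edges):
    """Optimal fractional vertex cover cost via the bipartite double cover:
    each edge {u,v} of G becomes (u_L, v_R) and (v_L, u_R); the LP optimum
    equals (maximum matching of the double cover) / 2."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    match_r = [-1] * n  # right copy v_R is matched to left copy match_r[v]

    def try_augment(u, seen):
        # Kuhn's algorithm: search for an augmenting path from left copy u_L
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_r[v] == -1 or try_augment(match_r[v], seen):
                match_r[v] = u
                return True
        return False

    matching = sum(try_augment(u, set()) for u in range(n))
    return matching / 2

def lp_residual(n, edges, x):
    """LP(x): LP cost of the graph of edges left uncovered by x."""
    return lp_vertex_cover(n, [(u, v) for u, v in edges
                               if not (x[u] or x[v])])
```

For a triangle the routine returns 3/2 (the all-1/2 solution), and for a path on three vertices it returns 1 (select the middle vertex).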

Nemhauser and Trotter [17] proved a very strong relation between optimal fractional vertex covers and minimum vertex covers.

### Theorem 4

Let x be an optimal fractional vertex cover and let P_0,P_1⊆V be the sets of vertices whose corresponding components of x are 0 or 1, respectively. Then there exists a minimum vertex cover that contains all of P_1 and no vertex of P_0.

We start with a simple lemma that gives insights into the structure of the objective space.

### Lemma 3

For every x∈{0,1}^n it holds that

1. |x|_1+LP(x)≥LP(0^n), and

2. $$|\mathbf {x}|_{1}+2\cdot \mathit{LP}(\mathbf {x})\geq \operatorname {OPT}$$.

### Proof

Let y be an optimal fractional vertex cover of G(x) of cost LP(x).

1. One can obtain a fractional vertex cover of G from y by adding the vertices that are selected by x. The cost of this cover, i.e., |x|_1+LP(x), cannot be smaller than the minimum cost of a fractional vertex cover, i.e., LP(0^n).

2. Similarly, a vertex cover of G can be obtained by adding all vertices that have value 1/2 or 1 in y to the vertices selected by x, since each edge of G(x) must be incident with vertices of total value at least one. The cardinality of this vertex cover is bounded by 2⋅LP(x) (i.e., the maximum number of vertices with value 1/2 or 1) plus |x|_1. Clearly, this vertex cover cannot be smaller than a minimum vertex cover (of cardinality $$\operatorname {OPT}$$).

□

Hence, each solution for which equality holds in one of the inequalities stated in Lemma 3 is Pareto optimal. The following lemma relates search points x∈{0,1}^n to an optimal fractional solution in [0,1]^n. For x,y∈[0,1]^n, we denote by x≤y the fact that x_i≤y_i for 1≤i≤n.
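Both inequalities of Lemma 3 can be verified exhaustively on a tiny instance. The following brute-force sketch relies on half-integrality (Theorem 3) to enumerate candidate LP solutions over {0,1/2,1}^n; the 4-cycle used as an example is ours.

```python
from itertools import product

def lp_cost(n, edges):
    """Brute-force optimal fractional vertex cover cost; by half-integrality
    (Theorem 3) it suffices to search assignments in {0, 1/2, 1}^n."""
    best = float(n)
    for y in product((0.0, 0.5, 1.0), repeat=n):
        if all(y[u] + y[v] >= 1 for u, v in edges):
            best = min(best, sum(y))
    return best

def residual(edges, x):
    """Edges of G left uncovered by the search point x, i.e., G(x)."""
    return [(u, v) for u, v in edges if not (x[u] or x[v])]

# Small example: a 4-cycle with OPT = 2 and LP(0^n) = 2.
n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0)]
OPT = min(sum(x) for x in product((0, 1), repeat=n)
          if all(x[u] or x[v] for u, v in edges))
lp0 = lp_cost(n, edges)
for x in product((0, 1), repeat=n):
    lp_x = lp_cost(n, residual(edges, x))
    assert sum(x) + lp_x >= lp0        # Lemma 3, part 1
    assert sum(x) + 2 * lp_x >= OPT    # Lemma 3, part 2
```

Every one of the 16 search points satisfies both inequalities; the solutions attaining equality in part 1 are exactly the Pareto optimal ones discussed above.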

### Lemma 4

Let y be an optimal fractional vertex cover of G. Every x∈{0,1}^n with x≤y is a Pareto optimal solution.

### Proof

Let y′ be obtained from y by setting the value of all vertices that are selected by x to 0. The graph G(x) contains all edges that are not incident to any vertex selected by x; thus y′ is a fractional vertex cover of G(x). Therefore we have |y|_1−|x|_1=|y′|_1≥LP(x), implying that LP(0^n)=|y|_1≥LP(x)+|x|_1. Thus, by Lemma 3, we can conclude that |x|_1+LP(x)=LP(0^n) and that x is a Pareto optimal solution. □

We state a simple property that describes search points that are subsets of a minimum vertex cover. Such solutions are of particular interest as they can be turned into a minimum vertex cover by adding vertices.

### Lemma 5

If x∈{0,1}^n is a solution with LP(x)=LP(0^n)−|x|_1, then there exists a minimum vertex cover z∈{0,1}^n with x≤z (i.e., every vertex selected by x is also selected by z).

### Proof

Consider an optimal fractional vertex cover y of G(x) of cost LP(0^n)−|x|_1. We can obtain a fractional vertex cover z of G by also selecting the |x|_1 vertices that are selected by x (i.e., setting the corresponding components of y to 1). Hence z is a fractional vertex cover of G of cost LP(0^n), implying that it is optimal. By Theorem 4 it follows that there exists a minimum vertex cover of G that contains all vertices with value 1 in z, which includes all vertices that are selected by x. □

After having pointed out some basic properties about fractional vertex covers and Pareto optimal solutions, we can now analyze our algorithms with respect to the approximation that they can achieve in expected polynomial time. It is easy to see that, for every optimal fractional vertex cover, the vertices of value 1/2 and 1 form a 2-approximate vertex cover, since the fractional vertex cover has cost at most $$\operatorname {OPT}$$.
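This rounding step can be sketched as follows (a hypothetical helper of our own; the half-integral cover y would come from an LP solver or the matching-based computation mentioned above).

```python
def two_approx_from_halfintegral(n, edges, y):
    """Given a half-integral fractional vertex cover y of cost at most OPT,
    the vertices with value >= 1/2 form a vertex cover of size
    at most 2 * sum(y) <= 2 * OPT."""
    cover = {v for v in range(n) if y[v] >= 0.5}
    # every edge must have at least one endpoint of value >= 1/2,
    # since the endpoint values sum to at least 1
    assert all(u in cover or v in cover for u, v in edges)
    return cover
```

On a triangle, the optimal fractional cover assigns 1/2 everywhere at cost 3/2; rounding up selects all three vertices, which matches the bound 2⋅(3/2)=3 while OPT=2.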

### Theorem 5

Using the fitness function f 2, the expected number of iterations of Global SEMO or Global SEMO alt until the population P contains a 2-approximate vertex cover is  $$O(n^{2}\log n+\operatorname {OPT}\cdot n^{2})$$.

### Proof

The expected number of iterations until the search point 0^n is added to the population is O(n^2 log n) due to Lemma 1.

Let x∈P be a solution that minimizes LP(x) under the constraint that $$|\mathbf {x}|_{1}+2\cdot \mathit{LP}(\mathbf {x})\leq2\cdot \mathit{LP}(0^{n})\leq2\cdot \operatorname {OPT}$$. Note that 0^n fulfills the constraint. If LP(x)=0 then x is a vertex cover of G and $$|\mathbf {x}|_{1}\leq2\cdot \mathit{LP}(0^{n})\leq2\cdot \operatorname {OPT}$$ as claimed. Otherwise, every optimal fractional vertex cover of G(x) assigns at least 1/2 to some vertex, say v. Therefore, LP(x′)≤LP(x)−1/2 where x′ is obtained from x by additionally selecting v. With probability 1/n^2⋅(1−1/n)^{n−1}=Ω(1/n^2) the solution x is picked in the mutation step and exactly the bit corresponding to v is flipped, leading to the solution x′. Clearly, |x′|_1=|x|_1+1 and LP(x′)≤LP(x)−1/2. Thus |x′|_1+2⋅LP(x′)≤|x|_1+2⋅LP(x)≤2⋅LP(0^n), implying that x′ fulfills the constraint while having a smaller value LP(x′). By the choice of x, no solution in P dominates x′, so x′ is added to the population.

As $$\mathit{LP}(\mathbf {x})\leq \operatorname {OPT}$$, this can happen at most $$2\cdot \operatorname {OPT}$$ times since each time the smallest value of LP(x) among solutions x that fulfill $$|\mathbf {x}|_{1}+2\cdot \mathit{LP}(\mathbf {x})\leq2\cdot \operatorname {OPT}$$ is reduced by at least 1/2. Thus, the expected number of steps until the population contains a 2-approximate vertex cover is at most $$O(n^{2}\log n+\operatorname {OPT}\cdot n^{2})$$. □
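The process analyzed in Theorem 5 can be illustrated by a compact Global SEMO sketch. Our simplifications: the run starts from 0^n instead of a random search point, only the standard bit-flip mutation is used, and LP values are computed by brute force over half-integral assignments, so it is only meant for tiny graphs.

```python
import random
from itertools import product

def lp_cost(n, edges):
    """Brute-force fractional vertex cover optimum; by half-integrality
    it suffices to search {0, 1/2, 1}^n (tiny graphs only)."""
    best = float(n)
    for y in product((0.0, 0.5, 1.0), repeat=n):
        if all(y[u] + y[v] >= 1 for u, v in edges):
            best = min(best, sum(y))
    return best

def f2(x, n, edges):
    """Fitness f_2(x) = (|x|_1, LP(x)); both objectives are minimized."""
    residual = [(u, v) for u, v in edges if not (x[u] or x[v])]
    return (sum(x), lp_cost(n, residual))

def dominates(a, b):
    """Weak Pareto dominance for minimization."""
    return a[0] <= b[0] and a[1] <= b[1]

def global_semo(n, edges, iterations=3000, seed=1):
    rng = random.Random(seed)
    x0 = (0,) * n                       # simplification: start at 0^n
    pop = {x0: f2(x0, n, edges)}
    for _ in range(iterations):
        x = rng.choice(list(pop))       # uniform choice from the population
        y = tuple(b ^ (rng.random() < 1 / n) for b in x)  # bit-flip mutation
        fy = f2(y, n, edges)
        if not any(dominates(fz, fy) for fz in pop.values()):
            # accept y and discard all solutions it dominates
            pop = {z: fz for z, fz in pop.items() if not dominates(fy, fz)}
            pop[y] = fy
    return pop
```

On a 4-cycle, the population quickly contains a solution of fitness (2, 0), i.e., a minimum (and hence 2-approximate) vertex cover, alongside the Pareto optimal point (0, LP(0^n)) = (0, 2).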

Having shown that using the minimum cost of a fractional vertex cover as the second criterion leads to a 2-approximation, we now examine the number of iterations until Global SEMO alt has obtained an optimal solution.

To prove an upper bound on that number we consider solutions choosing r vertices such that the subgraph consisting of the non-covered edges has at most 2⋅(LP(0^n)−r) non-isolated vertices. Therefore we are interested in solutions x of fitness (|x|_1,LP(0^n)−|x|_1) such that optimal fractional vertex covers of G(x) assign 1/2 to each non-isolated vertex, implying that there are exactly 2⋅(LP(0^n)−|x|_1) non-isolated vertices in G(x).

### Lemma 6

Using the fitness function f 2, the expected number of iterations during the run of Global SEMO and Global SEMO alt where the population does not contain a solution x that fulfills the properties that

1. LP(x)=LP(0^n)−|x|_1 and

2. each optimal fractional vertex cover assigns 1/2 to each non-isolated vertex of G(x)

is upper bounded by $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2})$$.

### Proof

After expected O(n^2 log n) iterations the population contains the solution 0^n of fitness (0,LP(0^n)), by Lemma 1. Let r be the largest integer such that P contains solutions of fitness values (0,LP(0^n)),…,(r,LP(0^n)−r) and let x∈P be the solution of fitness (r,LP(0^n)−r). There are two possible cases:
1. Optimal fractional vertex covers for G(x) assign 1/2 to each non-isolated vertex.

2. There is an optimal fractional vertex cover z of G(x) which assigns 1 to at least one non-isolated vertex of G(x), say v. With probability Ω(1/n^2) Global SEMO or Global SEMO alt chooses the solution x for mutation and flips exactly the bit corresponding to v, obtaining a solution x′. Observe that LP(x′)≤LP(x)−1 since z′, i.e., z but with 0 assigned to v, is a fractional vertex cover of G(x′). Clearly, x′ is added to the population since solutions of fitness (i,LP(0^n)−i) are Pareto optimal, according to Lemma 3 (this also implies that r can never decrease). This increases the value of r by 1.

Since $$0\leq r\leq \mathit{LP}(0^{n})\leq \operatorname {OPT}$$, the value can be increased at most $$\operatorname {OPT}$$ times. Therefore the expected number of steps in which case 2 happens is at most $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2})$$. □

Both algorithms generate a search point x that selects a subset of a minimum vertex cover and such that G(x) has at most $$2\cdot(\operatorname {OPT}-|\mathbf {x}|_{1})$$ non-isolated vertices in expected polynomial time and, similar to Lemma 2, the population contains such a solution in all but expected $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2})$$ iterations.

In the following, we show that Global SEMO alt is able to produce from such a solution an optimal one in total expected time $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{\operatorname {OPT}})$$ which implies that it is a fixed-parameter evolutionary algorithm for the Vertex Cover problem.

### Theorem 6

Using the fitness function f 2, the expected number of iterations of Global SEMO alt until it has computed a minimum vertex cover is  $$O(n^{2}\cdot \log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{\operatorname {OPT}})$$.

### Proof

We consider iterations of Global SEMO alt where the population contains x with LP(x)=LP(0^n)−|x|_1 such that each optimal fractional vertex cover assigns 1/2 to each non-isolated vertex of G(x). The expected number of iterations where this is not the case is at most $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2})$$, by Lemma 6.

According to Lemma 5 there exists a minimum vertex cover y with x≤y, i.e., y contains the vertices that are selected by x. Let V′ be the set of vertices that are selected by y but not by x. Observe that every vertex of V′ is non-isolated in G(x), i.e., incident to an uncovered edge, since y is a minimum vertex cover. With probability at least $$\frac{1}{n+1}$$ the solution x is picked in the mutation step. The probability that y is obtained in that case can easily be lower bounded:
• With probability 1/2 Global SEMO alt chooses the mutation operator that flips every bit that corresponds to a non-isolated vertex of G(x) with probability 1/2.

• In that case, the probability that exactly the bits corresponding to V′ are flipped (to 1) is $$\varOmega(2^{-2\cdot(\operatorname {OPT}-|\mathbf {x}|_{1})})$$ since there are at most $$2\cdot(\operatorname {OPT}-|\mathbf {x}|_{1})$$ vertices that are incident to uncovered edges in G(x). This includes a factor of Ω(1) for the probability that Global SEMO alt does not flip bits corresponding to isolated vertices of G(x), which is (1−1/n)^{n′} for n′≤n isolated vertices.

Thus with probability at least $$1/n\cdot1/2\cdot(1/4)^{\operatorname {OPT}}$$ the solution y of fitness $$(\operatorname {OPT},0)$$ is obtained. Therefore, the expected number of iterations of Global SEMO alt until the population contains a minimum vertex cover is bounded by $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{\operatorname {OPT}})$$. □

In the final theorem of this section we prove that the expected number of iterations until Global SEMO alt has generated a (1+ϵ)-approximate vertex cover is bounded by $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot 4^{(1-\epsilon)\cdot \operatorname {OPT}})$$. This implies that the expected approximation ratio of the vertex cover generated by Global SEMO alt improves over time (that is to say, the upper bound on that ratio decreases) to the point where it reaches 1 at expected time $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{\operatorname {OPT}})$$.

### Theorem 7

Using the fitness function f 2, the expected number of iterations of Global SEMO alt until it has generated a (1+ϵ)-approximate vertex cover, i.e., a solution of fitness (r,0) with  $$r\leq(1+\epsilon)\cdot \operatorname {OPT}$$, is  $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{(1-\epsilon)\cdot \operatorname {OPT}})$$.

### Proof

Again we consider iterations where the population of Global SEMO alt contains a solution x with LP(x)=LP(0 n )−|x|1 such that each optimal fractional vertex cover assigns 1/2 to each non-isolated vertex of G(x). The expected number of iterations where this is not the case is at most $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2})$$, by Lemma 6.

Let X denote the set of non-isolated vertices in G(x), let SX be any minimum vertex cover of G(x), and let T=XS. Observe that T is an independent set and that |T|<|S|; otherwise, if |S|≤|T|, assigning 1 to each vertex of S and 0 to each vertex of T would yield a fractional vertex cover of cost at most 1/2⋅|X| but which does not assign 1/2 to each non-isolated vertex. Let $$\operatorname {OPT}'=\operatorname {OPT}-|\mathbf {x}|_{1}$$, i.e., the size of minimum vertex covers of G(x). Let $$s_{1},\dots ,s_{\operatorname {OPT}'}$$ and t 1,…,t |T| be any two numberings of the vertices in S and T, respectively.

With probability Ω(1/n) Global SEMO alt selects the solution x and applies the mutation that flips bits corresponding to non-isolated vertices of G(x) with probability 1/2. With probability $$\varOmega ((1/4)^{(1-\epsilon)\cdot \operatorname {OPT}'})$$ all bits corresponding to $$s_{1},\dots ,s_{\lceil(1-\epsilon)\cdot \operatorname {OPT}'\rceil}$$ are flipped and those corresponding to t 1,…,t α , with $$\alpha=\min\lbrace |T|,\lceil(1-\epsilon)\cdot \operatorname {OPT}'\rceil\rbrace$$, are not flipped. With probability greater than 1/2 the mutation flips bits of at least as many of the remaining vertices of S as of the remaining vertices of T, since |T|<|S|. Thus with probability $$\varOmega(1/n\cdot (1/4)^{(1-\epsilon)\cdot \operatorname {OPT}'})$$ the solution x is mutated into a solution x′ that additionally selects subsets S′⊆S and T′⊆T with $$|S'|\geq(1-\epsilon)\cdot \operatorname {OPT}'+|T'|$$. Again this includes a factor of Ω(1) accounting for the probability that Global SEMO alt does not flip bits corresponding to isolated vertices of G(x).

We will now prove an upper bound of $$(1+\epsilon)\cdot \operatorname {OPT}$$ on the value of |x′|_1+2⋅LP(x′). Observe that $$\mathit{LP}(\mathbf {x}')\leq \operatorname {OPT}'-|S'|$$ since S∖S′ is a vertex cover of G(x′). We also use the fact that $$|T'|\leq|S'|-(1-\epsilon)\cdot \operatorname {OPT}'$$. Together with $$\operatorname {OPT}=|\mathbf {x}|_{1}+\operatorname {OPT}'$$ this yields
$$\bigl|\mathbf {x}'\bigr|_1+2\cdot \mathit{LP}\bigl(\mathbf {x}'\bigr)\leq|\mathbf {x}|_1+\bigl|S'\bigr|+\bigl|T'\bigr|+2\cdot\bigl(\operatorname {OPT}'-\bigl|S'\bigr|\bigr)\leq|\mathbf {x}|_1+2\cdot \operatorname {OPT}'-(1-\epsilon)\cdot \operatorname {OPT}'\leq(1+\epsilon)\cdot \operatorname {OPT}.$$

Should a solution y∈P dominate x′, this would imply |y|_1+2⋅LP(y)≤|x′|_1+2⋅LP(x′). Thus after expected $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{(1-\epsilon)\cdot \operatorname {OPT}})$$ steps the population contains a solution x′ with $$|\mathbf {x}'|_{1}+2\cdot \mathit{LP}(\mathbf {x}')\leq(1+\epsilon)\cdot \operatorname {OPT}$$.

Finishing the proof, we show that such a solution leads to a (1+ϵ)-approximate vertex cover in expected polynomial time. Let y∈P be a solution with minimum value of LP(y) under the constraint that $$|\mathbf {y}|_{1}+2\cdot \mathit{LP}(\mathbf {y})\leq(1+\epsilon)\cdot \operatorname {OPT}$$. If LP(y)=0 then y is a (1+ϵ)-approximate vertex cover. Otherwise there exists at least one vertex v that has value at least 1/2 in some optimal fractional vertex cover of G(y). With probability Ω(1/n^2) the solution y is selected for mutation and exactly the bit corresponding to v is flipped, producing the solution y′.

Clearly |y′|_1=|y|_1+1 and LP(y′)≤LP(y)−1/2. Thus
$$\bigl|\mathbf {y}'\bigr|_1+2\cdot \mathit{LP}\bigl(\mathbf {y}'\bigr)\leq| \mathbf {y}|_1+2\cdot \mathit{LP}(\mathbf {y})\leq(1+\epsilon)\cdot \operatorname {OPT}.$$

Since y′ fulfills the constraint and LP(y′)<LP(y), it follows that no solution in P can dominate y′; otherwise, this solution would have been chosen in place of y. Thus with probability Ω(1/n^2) the minimum value of LP(y) among solutions y that fulfill $$|\mathbf {y}|_{1}+2\cdot \mathit{LP}(\mathbf {y})\leq(1+\epsilon)\cdot \operatorname {OPT}$$ is decreased by at least 1/2. Since $$0\leq \mathit{LP}(\mathbf {y})\leq \operatorname {OPT}$$ the expected number of steps (from the point that x′ was introduced) until the population contains a (1+ϵ)-approximate vertex cover is bounded by $$O(\operatorname {OPT}\cdot n^{2})$$. Hence the total expected number of iterations of Global SEMO alt until the population contains a (1+ϵ)-approximate vertex cover is bounded by $$O(n^{2}\cdot\log n+\operatorname {OPT}\cdot n^{2}+n\cdot4^{(1-\epsilon)\cdot \operatorname {OPT}})$$. □

## 5 Conclusion

We have introduced the notion of fixed-parameter evolutionary algorithms to examine how the runtime of search heuristics depends on structural properties of a given problem. Using this approach we have examined the runtime and approximation behavior of evolutionary algorithms with respect to the value of an optimal solution. Intuitively, our analyses of different multi-objective models show that additional criteria, such as minimizing the number of uncovered edges or the value of a fractional solution for the uncovered part of the graph, can lead to a preprocessing phase similar to a kernelization of the problem. By adding a random search component to the evolutionary algorithm via the alternative mutation operator, we have shown that this yields fixed-parameter evolutionary algorithms.

There are several topics for future research. On the one hand, it seems interesting to analyze search heuristics in dependence on a given parameter for other problems as well. The parameter can be the value of an optimal solution, as considered in this paper, but also a parameter that restricts the given input to certain classes of the problem. Examples include Cluster Editing and 3-Hitting Set, both of which are FPT when parameterized by solution size, as well as Maximum Knapsack parameterized by the capacity of the knapsack. Additionally, many graph problems, such as Independent Set or Dominating Set, are FPT when parameterized by the treewidth of the input graph. Showing that an evolutionary algorithm profits from small values of treewidth might be a rather challenging problem, as the FPT algorithms for the two mentioned problems employ dynamic programming. On the other hand, the use of the ILP relaxation as the second criterion to guide the search process may be of independent interest, and we expect this criterion to be applicable to other problems as well. A very interesting candidate might be the Multiway Cut problem, which is known to have a half-integral LP formulation [13].

## References

1. Balinski, M.L.: On maximum matching, minimum covering and their connections. In: Proceedings of the Princeton Symposium on Mathematical Programming, pp. 434–445 (1970)
2. Chen, J., Kanj, I.A., Jia, W.: Vertex cover: further observations and further improvements. J. Algorithms 41(2), 280–301 (2001)
3. Chen, J., Kanj, I.A., Xia, G.: Improved parameterized upper bounds for vertex cover. In: Kralovic, R., Urzyczyn, P. (eds.) MFCS. LNCS, vol. 4162, pp. 238–249. Springer, Berlin (2006)
4. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms, 2nd edn. MIT Press, Cambridge (2001)
5. Doerr, B., Happ, E., Klein, C.: Crossover can provably be useful in evolutionary computation. Theor. Comput. Sci. 425, 17–33 (2012)
6. Dorigo, M., Stützle, T.: Ant Colony Optimization. MIT Press, Cambridge (2004)
7. Downey, R.G., Fellows, M.R.: Parameterized Complexity. Monographs in Computer Science. Springer, Berlin (1998)
8. Eiben, A., Smith, J.: Introduction to Evolutionary Computing, 2nd edn. Springer, Berlin (2007)
9. Feldman, V.: Evolvability from learning algorithms. In: Ladner, R.E., Dwork, C. (eds.) STOC, pp. 619–628. ACM, New York (2008)
10. Flum, J., Grohe, M.: Parameterized Complexity Theory. Texts in Theoretical Computer Science. An EATCS Series. Springer, Berlin (2006)
11. Friedrich, T., He, J., Hebbinghaus, N., Neumann, F., Witt, C.: Analyses of simple hybrid algorithms for the vertex cover problem. Evol. Comput. 17(1), 3–19 (2009)
12. Friedrich, T., He, J., Hebbinghaus, N., Neumann, F., Witt, C.: Approximating covering problems by randomized search heuristics using multi-objective models. Evol. Comput. 18(4), 617–633 (2010)
13. Garg, N., Vazirani, V.V., Yannakakis, M.: Multiway cuts in directed and node weighted graphs. In: Abiteboul, S., Shamir, E. (eds.) ICALP. LNCS, vol. 820, pp. 487–498. Springer, Berlin (1994)
14. Giel, O., Wegener, I.: Evolutionary algorithms and the maximum matching problem. In: Alt, H., Habib, M. (eds.) STACS. LNCS, vol. 2607, pp. 415–426. Springer, Berlin (2003)
15. Kratsch, S., Neumann, F.: Fixed-parameter evolutionary algorithms and the vertex cover problem. In: Rothlauf, F. (ed.) GECCO, pp. 293–300. ACM, New York (2009)
16. Motwani, R., Raghavan, P.: Randomized Algorithms. Cambridge University Press, Cambridge (1995)
17. Nemhauser, G.L., Trotter, L.E.: Vertex packings: structural properties and algorithms. Math. Program. 8, 232–248 (1975)
18. Neumann, F.: Expected runtimes of a simple evolutionary algorithm for the multi-objective minimum spanning tree problem. Eur. J. Oper. Res. 181(3), 1620–1629 (2007)
19. Neumann, F., Reichel, J.: Approximating minimum multicuts by evolutionary multi-objective algorithms. In: Rudolph, G., Jansen, T., Lucas, S.M., Poloni, C., Beume, N. (eds.) PPSN. LNCS, vol. 5199, pp. 72–81. Springer, Berlin (2008)
20. Neumann, F., Reichel, J., Skutella, M.: Computing minimum cuts by randomized search heuristics. Algorithmica 59(3), 323–342 (2011)
21. Neumann, F., Wegener, I.: Randomized local search, evolutionary algorithms, and the minimum spanning tree problem. Theor. Comput. Sci. 378(1), 32–40 (2007)
22. Neumann, F., Witt, C.: Bioinspired Computation in Combinatorial Optimization—Algorithms and Their Computational Complexity. Springer, Berlin (2010)
23. Oliveto, P.S., He, J., Yao, X.: Analysis of population-based evolutionary algorithms for the vertex cover problem. In: IEEE Congress on Evolutionary Computation, pp. 1563–1570. IEEE Press, New York (2008)
24. Oliveto, P.S., He, J., Yao, X.: Analysis of the (1+1)-EA for finding approximate solutions to vertex cover problems. IEEE Trans. Evol. Comput. 13(5), 1006–1029 (2009)
25. Scharnow, J., Tinnefeld, K., Wegener, I.: The analysis of evolutionary algorithms on sorting and shortest paths problems. J. Math. Model. Algorithms 4(3), 349–366 (2004)
26. Valiant, L.G.: Evolvability. J. ACM 56(1) (2009). http://doi.acm.org/10.1145/1462153.1462156
27. Wegener, I.: Methods for the analysis of evolutionary algorithms on pseudo-Boolean functions. In: Sarker, R., Yao, X., Mohammadian, M. (eds.) Evolutionary Optimization, pp. 349–369. Kluwer, Dordrecht (2002)
28. Witt, C.: Worst-case and average-case approximations by simple randomized search heuristics. In: Diekert, V., Durand, B. (eds.) STACS. LNCS, vol. 3404, pp. 44–56. Springer, Berlin (2005)