Journal of Combinatorial Optimization, Volume 28, Issue 1, pp 105–120

An exact algorithm for the maximum probabilistic clique problem

  • Zhuqi Miao
  • Balabhaskar Balasundaram
  • Eduardo L. Pasiliao

Abstract

The maximum clique problem is a classical problem in combinatorial optimization that has a broad range of applications in graph-based data mining, social and biological network analysis and a variety of other fields. This article investigates the problem when the edges fail independently with known probabilities. This leads to the maximum probabilistic clique problem, which is to find a subset of vertices of maximum cardinality that forms a clique with probability at least \(\theta \in [0,1]\), which is a user-specified probability threshold. We show that the probabilistic clique property is hereditary and extend a well-known exact combinatorial algorithm for the maximum clique problem to a sampling-free exact algorithm for the maximum probabilistic clique problem. The performance of the algorithm is benchmarked on a test-bed of DIMACS clique instances and on a randomly generated test-bed.

Keywords

Maximum clique problem · Probabilistic programming · Probabilistic clique · Branch-and-bound

1 Introduction

A clique is a subset of pairwise adjacent vertices in a graph, i.e. the subgraph induced by a clique is complete. Cliques and their generalizations such as quasi-cliques, \(k\)-cliques, \(k\)-clubs, \(k\)-plexes, and other dense-subgraph models have been widely used in various graph-based data mining applications in bioinformatics, computational finance, and text mining among others. Several recent and older surveys discuss the broad range of applications of cliques and related models (Balasundaram and Pajouh 2013; Pattillo et al. 2012; Boginski 2011; McClosky 2011; Bomze et al. 1999; Pardalos and Xue 1994; Balasundaram and Butenko 2008; Butenko and Wilhelm 2006; Cook and Holder 2000). A maximum clique in a graph is a clique with the largest number of vertices. The cardinality of a maximum clique is called the clique number of the graph \(G\) and is denoted by \(\omega (G)\). The maximum clique problem (to find a maximum clique in a graph) is NP-hard (Garey and Johnson 1979), and it is hard to approximate within \(n^{1-\epsilon }\) for any \(\epsilon >0\) (Håstad 1999).

Early algorithms for the problem were motivated by applications in social network analysis such as the detection of cohesive social subgroups (Luce and Perry 1949; Harary and Ross 1957; Wasserman and Faust 1994). Over the years, several exact combinatorial algorithms, polyhedral approaches, and global optimization approaches have been developed to solve the maximum clique problem, in addition to numerous heuristic approaches (Bomze et al. 1999; Johnson and Trick 1996). In particular, exact combinatorial algorithms were developed by Balas and Yu (1986), Applegate and Johnson (1988), Carraghan and Pardalos (1990), Balas and Xue (1996), Wood (1997), Sewell (1998), and Östergård (2002), to name a few. The back-tracking algorithm proposed independently by Carraghan and Pardalos (1990) and by Applegate and Johnson (1988) was simple and effective (given the general intractability of the problem), and it was used as a benchmark in the Second DIMACS Implementation Challenge for the maximum clique problem (Johnson and Trick 1996). This algorithm was subsequently enhanced by Östergård (2002), yielding one of the fastest exact combinatorial algorithms for the problem in practice (Tomita and Kameda 2007). Even recently, exact combinatorial algorithms have proven to be the most effective for this problem (Tomita et al. 2010; Batsyn et al. 2013).

Consider a random graph denoted by \(\widetilde{G}=(V,\widetilde{E})\), where \(\widetilde{E}\) denotes the random subset of undirected edges and each edge exists with probability \(p_e\). We associate with this random graph a (deterministic) support graph \(G=(V,E)\) where \(e \in E \iff p_e > 0\). Then, the sample space \({\varOmega } = \{G^1, \ldots , G^N\}\) is the collection of all possible spanning subgraphs of \(G\), and \(N = 2^{|E|}\) if each edge exists independently of the others. We denote the probability measure by \(\mathbb {P}: 2^{\varOmega } \longrightarrow [0,1]\) and refer to \(G^i \in {\varOmega }\) for \(i=1,\ldots , N\) as the scenarios. Formally, the maximum probabilistic clique problem (MPCP) is to solve:
$$\begin{aligned} \max _{S \subseteq V}\bigl \{|S| \ : \ \mathbb {P}\{S \hbox { is a clique in }\widetilde{G}\} \ge \theta \bigr \}, \end{aligned}$$
(1)
where \(\theta \in [0,1]\) is a constant probability threshold specified by the user. Note that when \(\theta = 0\) the problem is trivial, and when \(\theta =1\) the MPCP reduces to the classical maximum clique problem on the graph \(\hat{G}^1 = (V, \hat{E}^1)\), where \(\hat{E}^1 = \{e \in E : p_e = 1\}\). We call a feasible solution to the MPCP with probability threshold \(\theta \) a “probabilistic \(\theta \)-clique.”
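To make definition (1) concrete, the sketch below (our own Python illustration, not part of the paper) solves the MPCP by brute force on a toy instance, enumerating all \(2^{|E|}\) scenarios of the random graph:

```python
from itertools import combinations, product

def mpcp_bruteforce(n, edge_probs, theta):
    """Solve problem (1) by enumerating all 2^|E| scenarios of the random
    graph; only viable for very small support graphs.

    edge_probs maps a support-graph edge (u, v), u < v, to p_e > 0."""
    edges = list(edge_probs)

    def clique_prob(S):
        # P{S is a clique}: total probability of the scenarios in which
        # every vertex pair of S is joined by a present edge.
        pairs = {tuple(sorted(pair)) for pair in combinations(S, 2)}
        total = 0.0
        for outcome in product((False, True), repeat=len(edges)):
            present = {e for e, bit in zip(edges, outcome) if bit}
            if pairs <= present:
                prob = 1.0
                for e, bit in zip(edges, outcome):
                    prob *= edge_probs[e] if bit else 1.0 - edge_probs[e]
                total += prob
        return total

    # Largest cardinality first; a singleton is a clique with probability 1.
    for k in range(n, 0, -1):
        for S in combinations(range(n), k):
            q = clique_prob(S)
            if q >= theta:
                return set(S), q
    return set(), 1.0
```

The scenario enumeration is exactly why deterministic equivalent formulations become unmanageable; the exact algorithm of Sect. 3 avoids it.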

A probabilistic (chance-constrained) programming approach is one of several ways to handle uncertainty in applications of the maximum clique problem where the edge information is subject to measurement uncertainty or data errors for which a reasonable probabilistic characterization can be obtained. A feasible solution to the MPCP (1) may fail to be a clique in some scenarios; however, the probability of that event is at most \(1-\theta \). In applications where this is unacceptable, robust optimization formulations or conditional value-at-risk constraints could be used, among other approaches (Bertsimas et al. 2011; Shapiro et al. 2009; Ahmed 2006; Krokhmal et al. 2005; Rockafellar and Uryasev 2000). However, the restrictive nature of the clique definition could result in a small maximum \(\theta \)-clique even with a loose probability threshold. In such cases, when the solution of the MPCP is not sufficiently large, relaxations of cliques (Balasundaram and Pajouh 2013) could be considered in a probabilistic setting.

Probabilistic programs arise frequently in risk management and decision-making in uncertain environments (Prékopa 2003). Methods in the literature for solving probabilistic programs generally combine ideas for convexification, sampling, and decomposition to solve the associated nonconvex mathematical optimization formulations (Nemirovski and Shapiro 2004, 2006a, b; Ahmed and Shapiro 2008; Luedtke and Ahmed 2008; Pagnoncelli et al. 2009; Luedtke 2010). However, the approach taken to solve the MPCP in this article is a combinatorial branch-and-bound that extends an algorithm by Östergård (2002). The result is a sampling-free exact approach that exploits the independence assumption on the edges of the random graph and the restrictive nature of the clique definition. The remainder of this paper is organized as follows. The “heredity” of probabilistic \(\theta \)-cliques is established in Sect. 2 and an exact algorithm for the MPCP is presented in Sect. 3. Computational experiments are summarized and discussed in Sect. 4.

2 Hereditary graph properties under probabilistic edge failures

In this section we discuss a key property that facilitated the development of the Carraghan and Pardalos (1990) algorithm for the maximum clique problem, which was later enhanced by Östergård (2002). We show that this property also holds for probabilistic cliques, making an extension of the algorithm possible.

A graph property \({\varPi }\) is said to be hereditary on induced subgraphs if for any \(S\subseteq V\), such that the induced subgraph \(G[S]\) satisfies property \({\varPi }\), \(G[T]\) satisfies \({\varPi }\) for all \(T \subseteq S\). Yannakakis (1978) obtained a general complexity result that shows finding a maximum order induced subgraph that satisfies any (nontrivial, interesting) hereditary graph property \({\varPi }\) is NP-hard. The algorithm for the maximum clique problem developed by Östergård (2002) can be extended in principle to any hereditary graph property as discussed in (Trukhanov et al. 2013). We first briefly describe this algorithm and discuss some relevant points before establishing the hereditary nature of probabilistic \(\theta \)-cliques.

In the algorithm by Östergård (2002), the vertices in \(V\) are ordered as \(V=(v_1, v_2, \ldots , v_n)\) and the sets \(U_i\subseteq V\) are defined as \(U_i=\{v_i, v_{i+1}, \ldots , v_n\}\). The algorithm computes \(\omega _i := \omega (G[U_i])\) starting from \(\omega _n = 1\) down to \(\omega _1 = \omega (G)\). Note that since the clique property is hereditary,
$$\begin{aligned} \omega _i=\left\{ \begin{array}{ll} \omega _{i+1}+1, &{}\quad \text {if every maximum clique in } G[U_i] \text { contains } v_i,\\ \omega _{i+1}, &{}\quad \text {otherwise.} \end{array} \right. \end{aligned}$$
The algorithm computes \(\omega _n, \omega _{n-1}, \ldots , \omega _1\) recursively using backtracking, adding vertices to a current solution and maintaining an associated candidate set of vertices that are adjacent to all vertices in the current solution. Pruning is based on two strategies for calculating upper bounds. The first type of pruning occurs when the size of the current solution plus the size of the candidate set is no larger than the size of the incumbent solution. The second type of pruning occurs when the size of the current solution plus \(\omega _i\), where \(v_i\) is the first vertex in the ordering that is also in the candidate set, is no larger than the incumbent. This choice of \(i\) ensures that \(U_i\) contains the candidate set, so \(\omega _i\) bounds the number of vertices the candidate set can contribute. The second type of upper bound relies on the fact that the increase from \(\omega _{i+1}\) to \(\omega _i\) is at most one when the property is hereditary. With only the first type of pruning we obtain the Carraghan and Pardalos (1990) algorithm for the maximum clique problem.
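As a reference point, the back-tracking scheme just described can be sketched as follows (a minimal Python reconstruction of our own, assuming vertices \(0,\ldots,n-1\) are already placed in the chosen order \(v_1,\ldots,v_n\)):

```python
def max_clique(adj):
    """Exact maximum clique in the style of Ostergard (2002) -- a sketch.

    adj[i] is the set of neighbors of vertex i."""
    n = len(adj)
    omega = [0] * n          # omega[i] = clique number of G[{v_i, ..., v_n}]
    best = 0

    def search(candidates, size):
        nonlocal best
        if not candidates:
            if size > best:
                best = size
                return True   # omega grows by at most 1 per major iteration: stop
            return False
        while candidates:
            if size + len(candidates) <= best:     # pruning of the first type
                return False
            i = min(candidates)                    # first ordered vertex in the set
            if size + omega[i] <= best:            # pruning of the second type
                return False
            candidates = candidates - {i}
            if search(candidates & adj[i], size + 1):
                return True
        return False

    for i in range(n - 1, -1, -1):                 # major iterations: v_n down to v_1
        search({j for j in adj[i] if j > i}, 1)
        omega[i] = best
    return best
```

The early return once the incumbent grows is valid precisely because \(\omega _i \le \omega _{i+1}+1\) for a hereditary property.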

Proposition 1

Consider a property \({\varPi }\) that is hereditary on vertex induced subgraphs, a random graph \(\widetilde{G}\) associated with a sample space \({\varOmega } = \{G^1, \ldots , G^N\}\), and a probability measure \(\mathbb {P}: 2^{\varOmega } \longrightarrow [0,1]\). Given \(\theta \in [0,1]\), if a subset of vertices \(S\) satisfies
$$\begin{aligned} \mathbb {P}\{S\,\textit{ satisfies }\,{\varPi }\,\textit{ in }\, \widetilde{G}\}~\ge ~\theta , \end{aligned}$$
then for any \(T\subseteq S\),
$$\begin{aligned} \mathbb {P}\{T \,\textit{ satisfies }\,{\varPi }\,\textit{ in }\, \widetilde{G}\}\ge \theta . \end{aligned}$$

Proof

Let \(\fancyscript{F}= \{G^i \in {\varOmega } \ : \ S \hbox { satisfies } {\varPi } \hbox { in }G^i\}\). We have,
$$\begin{aligned} \mathbb {P}\{S \hbox { satisfies }{\varPi }\hbox { in } \widetilde{G}\} = \mathbb {P}(\fancyscript{F}) = \sum _{G^i \in \fancyscript{F}}\mathbb {P}\{\widetilde{G}=G^i\} \ge \theta \end{aligned}$$
Since \({\varPi }\) is hereditary, any \(T\subseteq S\) also satisfies \({\varPi }\) in all scenarios in \(\fancyscript{F}\). Hence,
$$\begin{aligned} \mathbb {P}\{T\, \hbox { satisfies }{\varPi }\,\hbox { in } \widetilde{G}\}\ge \sum _{G^i \in \fancyscript{F}}\mathbb {P}\{\widetilde{G}=G^i\} \ge \theta . \end{aligned}$$
\(\square \)

Hence, when \({\varPi }\) is hereditary on vertex induced subgraphs of a deterministic graph, the probabilistic-\({\varPi }\) associated with a random graph is also hereditary in a chance-constrained optimization problem. Note that this result does not require the random edges to be independent of each other; in fact, the randomness need not be limited to edge failures and can include vertex failures. As discussed in Trukhanov et al. (2013), Proposition 1 implies that an approach similar to the maximum clique algorithm by Östergård (2002), which is itself a variant of the Russian Doll Search algorithm (Vaskelainen 2010), can be developed for the MPCP.

3 An exact algorithm for the MPCP

Consider a random graph \(\widetilde{G}=(V,\widetilde{E})\) that models probabilistic edge failures on a deterministic vertex set associated with a sample space \({\varOmega } = \{G^1,\ldots ,G^N\}\) and a probability measure \(\mathbb {P}:2^{\varOmega } \longrightarrow [0,1]\). Denote the edge probability by \(p_e := \mathbb {P}\{ e \in \widetilde{E}\}\) and the clique probability by \(\mathbb {P}(S):=\mathbb {P}\{S \hbox { is a clique in }\widetilde{G}\}\). With the definition of a support graph from Sect. 1 and under the independence assumption on the edges in \(\widetilde{E}\),
$$\begin{aligned} \mathbb {P}(S)&= \left\{ \begin{array}{ll} \prod \limits _{e \in E(S)}p_e &{} \quad \text {if}\, S\, \text {is a clique in the support graph}\, G=(V,E),\\ 0 &{} \quad \text {otherwise,} \end{array} \right. \end{aligned}$$
(2)
where \(E(S)\) is the subset of edges in the support graph with both end-points in \(S\).
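Under the independence assumption, equation (2) translates directly into code. The sketch below (our own illustration; names and data structures are assumptions) evaluates \(\mathbb {P}(S)\) from a dictionary of support-graph edge probabilities:

```python
def clique_probability(S, edge_probs):
    """Clique probability P(S) under independent edges, per equation (2).

    edge_probs maps each support-graph edge (u, v), u < v, to p_e."""
    S = sorted(S)
    p = 1.0
    for a in range(len(S)):
        for b in range(a + 1, len(S)):
            e = (S[a], S[b])
            if e not in edge_probs:
                return 0.0          # S is not a clique in the support graph
            p *= edge_probs[e]      # multiply over the edges in E(S)
    return p
```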

Given \(\theta \in [0,1]\), if \(S \subseteq V\) is a probabilistic \(\theta \)-clique then \(S\) is a clique in the support graph \(G=(V,E)\). This observation may tempt one to modify an existing branch-and-bound (BB) algorithm for the deterministic maximum clique problem to run directly on the support graph \(G\); this “naive” algorithm would prune a BB node whenever the associated clique in \(G\) violates the probabilistic constraint. Unfortunately, this approach can produce incorrect answers. Suppose the BB algorithm encounters a clique \(S\) that violates the probabilistic constraint and consequently prunes the associated BB node. There may exist a \(T \subset S\) that satisfies the probabilistic constraint even though \(S\) violates it; if \(T\) were larger than the incumbent, the algorithm could incorrectly report a sub-optimal solution as optimal. An extreme case is when the support graph is complete but \(V\) is not a probabilistic \(\theta \)-clique: this BB algorithm would terminate at the root node, returning an empty set as the solution.

The approach described above is incorrect because of improper pruning: a node is declared infeasible even though feasible solutions may still exist in its subtree. However, because of the independence assumption, the probabilistic constraint can be enforced directly in the construction of the candidate set, which eliminates the need for pruning by infeasibility. Algorithm 1 presents an extension of the algorithm by Östergård (2002) to the MPCP. The algorithm recursively calls the function Search(\(C,P,S,\mathbb {P}(S)\)) where:
  1. \(S\) is the current \(\theta \)-clique and \(\mathbb {P}(S)\) is its clique probability,
  2. \(C\) is the candidate set, satisfying: \(S \cup \{v_i\}\) is a \(\theta \)-clique for each \(v_i \in C\), and
  3. \(P\) is an array such that \(P[v_i]\) is the product of the probabilities of the edges between \(v_i\) and the vertices in \(S\), for each \(v_i \in C\). (The dependence of \(P\) on the associated \(S\) is implicit and suppressed for simplicity.)
After a vertex \(v_i \in C\) is chosen and the current solution is updated to \(S^\prime : = S\cup \{v_i\}\), the candidate set \(C\) and the \(P\)-array must also be updated to maintain the required properties with respect to \(S^\prime \). For the classical clique problem, \(P\) is not used and the candidate set is easily updated by intersecting \(C\) with \(N(v_i)\). In the probabilistic setting, a \(\theta \)-clique has to be a clique in the support graph, thus candidates must be in \(C\cap N(v_i)\) where \(N(v_i)\) is the set of neighbors of \(v_i\) in the support graph \(G\). Furthermore, for \(v_j \in C\cap N(v_i)\), if the probability that \(S^\prime \cup \{v_j\}\) is a clique is less than \(\theta \), then \(v_j\) cannot be a candidate with respect to \(S^\prime \). By equation (2) and the definition of \(P\), this probability can be incrementally computed as in function BuildCand(\(v_i,C,P,\mathbb {P}(S^\prime ))\).

The candidate set, the probability array, and the clique probability of the current solution are all updated incrementally. By storing this limited amount of information, we avoid recomputing these quantities from scratch; the savings in running time can become significant given the large number of candidate-set updates over the course of the algorithm. The main procedure MaxPclq initiates the recursive call to Search from each vertex in the vertex ordering, from \(v_n\) to \(v_1\); when control returns to the main procedure after the call corresponding to \(v_i\), the incumbent solution size \(\textit{max}\) equals \(\omega _i\).
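A minimal Python sketch of this recursion is given below. The names Search and BuildCand follow the text, but the concrete data structures, tie-breaking, and the early return once the incumbent grows (valid because \(\omega _i \le \omega _{i+1}+1\) by Proposition 1) are our own assumptions, not the authors' implementation:

```python
def max_pclq(n, edge_probs, theta):
    """Sketch of the recursion behind Algorithm 1 (MaxPclq) for the MPCP.

    Vertices 0..n-1 are assumed already placed in the chosen order
    (v_1, ..., v_n); edge_probs maps a support-graph edge (u, v), u < v, to p_e."""
    def p(u, v):
        return edge_probs.get((min(u, v), max(u, v)), 0.0)

    adj = [set() for _ in range(n)]
    for u, v in edge_probs:
        adj[u].add(v)
        adj[v].add(u)

    omega = [0] * n                   # omega[i]: max theta-clique size in {v_i, ..., v_n}
    best = {"size": 0, "set": set()}

    def build_cand(v_i, C, P, prob_S_new):
        # Incremental update w.r.t. S' = S + [v_i]: keep v_j only if it is
        # adjacent to v_i in the support graph and S' + [v_j] still meets theta.
        C_new, P_new = [], {}
        for v_j in C:
            if v_j in adj[v_i]:
                P_new[v_j] = P[v_j] * p(v_i, v_j)
                if prob_S_new * P_new[v_j] >= theta:
                    C_new.append(v_j)
        return C_new, P_new

    def search(C, P, S, prob_S):
        if not C:
            if len(S) > best["size"]:
                best["size"], best["set"] = len(S), set(S)
                return True           # incumbent grows by at most 1 per major iteration
            return False
        while C:
            if len(S) + len(C) <= best["size"]:      # pruning of the first type
                return False
            v_i = min(C)                             # first ordered vertex in C
            if len(S) + omega[v_i] <= best["size"]:  # pruning of the second type
                return False
            C = [u for u in C if u != v_i]
            prob_S_new = prob_S * P[v_i]
            C_new, P_new = build_cand(v_i, C, P, prob_S_new)
            if search(C_new, P_new, S + [v_i], prob_S_new):
                return True
        return False

    for i in range(n - 1, -1, -1):                   # major iterations: v_n down to v_1
        C0 = [j for j in adj[i] if j > i and p(i, j) >= theta]
        search(C0, {j: p(i, j) for j in C0}, [i], 1.0)
        omega[i] = best["size"]
    return best["set"]
```

Note how the probabilistic constraint is enforced inside build_cand, so no node is ever pruned for infeasibility.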

4 Computational experience

The necessary and sufficient condition for extending the algorithm by Östergård (2002) is that the graph property be hereditary on vertex induced subgraphs; this facilitated the development of Algorithm 1 based on Proposition 1. Similar extensions are possible for the probabilistic counterparts of all hereditary graph properties (Trukhanov et al. 2013). However, two key aspects of the algorithm must be carefully designed for the implementation to be effective. The first is the procedure that updates the candidate set, which, as discussed in Sect. 3, is done incrementally to improve performance. The second is the ordering of vertices, which facilitates effective pruning of the search tree when it ensures a slow increase in \(max\). For this reason, a greedy degree-based ordering (largest-degree vertex is \(v_n\), smallest-degree vertex is \(v_1\)) is generally outperformed by the opposite ordering, which results in smaller incumbents early in the search process. Naturally, this type of ordering emphasizes proving optimality over detecting large feasible solutions early. Similarly, an ordering based on vertex coloring was found to be effective by Östergård (2002): since the vertices in a color class are pairwise non-adjacent, at most one vertex from each color class can be part of a clique. In this section, we extend both of these ideas to probabilistic cliques: a reverse-greedy ordering based on expected vertex degrees and a coloring based on the notion of a bottleneck graph (Hochbaum and Shmoys 1985) are implemented for our computational study of Algorithm 1. The algorithm was implemented in C++ and executed on a 64-bit Linux system with 96 GB RAM and an Intel Xeon E5620 2.40 GHz processor. All programs were run single-threaded for the sake of comparison.

Expected degree ordering (EDO). Note that the expected degree of a vertex \(v\) in \(\widetilde{G}\) is given by \(\sum \{p_{vu} \ : \ u \in N(v)\}\), where the neighborhood is with respect to the support graph \(G\). In this ordering, the vertices are ordered with the minimum expected degree first (\(v_n\)) and maximum expected degree last (\(v_1\)).
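A possible implementation of EDO (our own sketch; ties are broken arbitrarily by the sort):

```python
def expected_degree_order(n, edge_probs):
    """Expected degree ordering (EDO) -- a sketch.

    The expected degree of v is the sum of p_e over the edges incident to v
    in the support graph. Returns the ordering (v_1, ..., v_n): maximum
    expected degree as v_1, minimum expected degree as v_n."""
    exp_deg = [0.0] * n
    for (u, v), prob in edge_probs.items():
        exp_deg[u] += prob
        exp_deg[v] += prob
    return sorted(range(n), key=lambda v: -exp_deg[v])
```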

Bottleneck coloring ordering (BCO). Given a threshold \(\epsilon \in \mathbb {R}\) and the support graph \(G=(V,E)\), the bottleneck graph is denoted by \(G^\epsilon = (V, E^\epsilon )\) where \(E^\epsilon = \{e \in E \ : \ p_e \ge \epsilon \}\). Note that when \(\epsilon = 0\) (or sufficiently small), the bottleneck graph is the support graph itself. The bottleneck graph is colored using a simple greedy coloring heuristic to obtain the color classes (Kubale 2004). This procedure iteratively selects a vertex with the largest degree in the bottleneck graph induced by the uncolored vertices and assigns the smallest available color. After the color classes are determined, they are ordered from the smallest one (\(v_1\) belongs to this group) to the largest one (\(v_n\) belongs to this group), and in each color class the vertices are in the order in which they were added to this color class.
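The BCO construction can be sketched as follows (our own reconstruction; tie-breaking among equal-degree vertices is left unspecified in the text and is arbitrary here):

```python
def bottleneck_coloring_order(n, edge_probs, eps):
    """Bottleneck coloring ordering (BCO) -- a sketch.

    Builds the bottleneck graph G^eps (edges with p_e >= eps), colors it with
    the greedy heuristic described in the text (largest degree among uncolored
    vertices, smallest available color), then concatenates the color classes
    from smallest to largest."""
    adj = [set() for _ in range(n)]
    for (u, v), prob in edge_probs.items():
        if prob >= eps:
            adj[u].add(v)
            adj[v].add(u)

    color = {}
    uncolored = set(range(n))
    classes = []                   # classes[c]: vertices of color c, in insertion order
    while uncolored:
        # vertex of largest degree in the subgraph induced by uncolored vertices
        v = max(uncolored, key=lambda x: len(adj[x] & uncolored))
        used = {color[u] for u in adj[v] if u in color}
        c = next(k for k in range(n) if k not in used)   # smallest available color
        color[v] = c
        if c == len(classes):
            classes.append([])
        classes[c].append(v)
        uncolored.remove(v)

    classes.sort(key=len)          # smallest class first: its vertices become v_1, ...
    return [v for cls in classes for v in cls]
```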

It should be noted that we do not attempt a comparison of our algorithm against an out-of-the-box integer programming solver that solves a deterministic equivalent formulation of the MPCP for the following reason: The exact combinatorial algorithm we propose is sampling-free, and whenever an optimal solution is obtained it is not subject to any sampling error. To achieve this by solving a deterministic equivalent formulation, we would have to enumerate all possible scenarios, which quickly becomes unmanageable as it is exponential in the number of edges in the support graph.

4.1 Experiments on the DIMACS test-bed

We selected 20 DIMACS clique instances (DIMACS 1995; Johnson and Trick 1996) on which the algorithm was able to solve the problem optimally, and compare the running time performance. For each instance, approximately 25 % of edges were chosen randomly and independently, and assigned probabilities uniformly distributed in the interval \((0.8,1)\). The remaining edges were assumed to exist with probability 1. This was done with the intention of leaving the instances “mostly intact” so that they continue to remain meaningful benchmark instances.
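The perturbation just described can be reproduced along the following lines (our own reconstruction; the fraction, interval, and seed are parameters, and `round` decides the exact count of perturbed edges):

```python
import random

def make_probabilistic_instance(edges, frac=0.25, lo=0.8, hi=1.0, seed=0):
    """Perturb a deterministic instance as in Sect. 4.1: a random `frac` of
    the edges get probabilities uniform in (lo, hi); the rest keep
    probability 1."""
    rng = random.Random(seed)
    edges = [tuple(sorted(e)) for e in edges]
    k = round(frac * len(edges))
    chosen = set(rng.sample(range(len(edges)), k))
    return {e: (rng.uniform(lo, hi) if i in chosen else 1.0)
            for i, e in enumerate(edges)}
```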

Algorithm 1 was implemented with EDO and BCO for \(\epsilon = 1, 0.9, 0.8\) to solve the MPCP for \(\theta =0.75\) and \(\theta =0.5\). The running time in seconds and the probabilities of the solutions found by Algorithm 1 with these four vertex ordering schemes are listed under “EDO”, “BCO \(\epsilon =1\)”, “BCO \(\epsilon =0.9\)”, and “BCO \(\epsilon =0.8\)” in Tables 1 and 2. The tables also present the size of a maximum probabilistic \(\theta \)-clique for each instance. Note that \(G^{0.8}\) is the support graph which corresponds to the original DIMACS instance, and \(G^{1.0}\) contains approximately 75 % of the edges from the original DIMACS instance, all guaranteed to exist. To illustrate the point made earlier that slower growth of the incumbent max with each major iteration is desirable when the emphasis is on optimality, we plot the progress of the algorithms on c-fat500-5 for \(\theta =0.5\) in Fig. 1.
Fig. 1

Growth of the incumbent size \(max\) on c-fat500-5 for \(\theta =0.50\)

Table 1

Algorithm 1 on DIMACS clique benchmarks with \(\theta = 0.75\) (running time in seconds)

| Graph | Size | EDO Time | EDO Prob. | BCO \(\epsilon =1\) Time | BCO \(\epsilon =1\) Prob. | BCO \(\epsilon =0.9\) Time | BCO \(\epsilon =0.9\) Prob. | BCO \(\epsilon =0.8\) Time | BCO \(\epsilon =0.8\) Prob. |
|---|---|---|---|---|---|---|---|---|---|
| brock200_1 | 14 | 107.0 | 0.76 | 90.6 | 0.82 | 82.3 | 0.76 | 100.4 | 0.76 |
| brock200_2 | 9 | 0.4 | 0.75 | 0.3 | 0.76 | 0.2 | 0.80 | 0.3 | 0.75 |
| brock200_3 | 12 | 2.2 | 0.75 | 1.4 | 0.75 | 1.5 | 0.75 | 1.1 | 0.75 |
| brock200_4 | 13 | 5.2 | 0.77 | 3.8 | 0.77 | 3.0 | 0.77 | 3.1 | 0.77 |
| c-fat200-1 | 9 | <0.1 | 0.87 | <0.1 | 0.76 | <0.1 | 0.75 | <0.1 | 0.84 |
| c-fat200-2 | 12 | <0.1 | 0.77 | <0.1 | 0.77 | <0.1 | 0.77 | <0.1 | 0.78 |
| c-fat200-5 | 18 | 115.6 | 0.76 | 8.4 | 0.75 | 10.8 | 0.75 | 59.9 | 0.76 |
| c-fat500-1 | 10 | <0.1 | 0.86 | <0.1 | 0.80 | <0.1 | 0.79 | <0.1 | 0.78 |
| c-fat500-2 | 14 | 0.3 | 0.78 | <0.1 | 0.78 | <0.1 | 0.78 | 0.1 | 0.77 |
| c-fat500-5 | 20 | 304.1 | 0.76 | 59.1 | 0.76 | 23.0 | 0.76 | 106.8 | 0.76 |
| hamming6-2 | 15 | 0.3 | 0.79 | 0.1 | 0.79 | 0.1 | 0.79 | <0.1 | 0.77 |
| hamming6-4 | 4 | <0.1 | 0.87 | <0.1 | 0.93 | <0.1 | 0.82 | <0.1 | 0.80 |
| hamming8-4 | 12 | 9.3 | 0.78 | 6.2 | 0.77 | 5.3 | 0.78 | 3.1 | 0.76 |
| johnson8-2-4 | 4 | <0.1 | 0.91 | <0.1 | 0.94 | <0.1 | 1.00 | <0.1 | 0.98 |
| johnson8-4-4 | 11 | <0.1 | 0.76 | <0.1 | 0.76 | <0.1 | 0.91 | <0.1 | 0.84 |
| johnson16-2-4 | 8 | 3.4 | 0.78 | 1.9 | 0.78 | 0.3 | 0.78 | 0.2 | 0.79 |
| keller4 | 11 | 0.6 | 0.79 | 0.9 | 0.79 | 1.0 | 0.79 | 0.7 | 0.79 |
| MANN_a9 | 13 | <0.1 | 0.80 | <0.1 | 0.76 | <0.1 | 0.76 | <0.1 | 0.80 |
| p_hat300-1 | 7 | <0.1 | 0.87 | <0.1 | 0.85 | <0.1 | 0.77 | <0.1 | 0.82 |
| p_hat300-2 | 15 | 23.3 | 0.78 | 19.5 | 0.76 | 24.9 | 0.75 | 13.3 | 0.78 |

Table 2

Algorithm 1 on DIMACS clique benchmarks with \(\theta = 0.5\) (running time in seconds)

| Graph | Size | EDO Time | EDO Prob. | BCO \(\epsilon =1\) Time | BCO \(\epsilon =1\) Prob. | BCO \(\epsilon =0.9\) Time | BCO \(\epsilon =0.9\) Prob. | BCO \(\epsilon =0.8\) Time | BCO \(\epsilon =0.8\) Prob. |
|---|---|---|---|---|---|---|---|---|---|
| brock200_1 | 16 | 970.7 | 0.52 | 690.8 | 0.52 | 401.5 | 0.52 | 692.9 | 0.52 |
| brock200_2 | 10 | 0.4 | 0.54 | 0.3 | 0.52 | 0.3 | 0.53 | 0.4 | 0.50 |
| brock200_3 | 12 | 6.8 | 0.54 | 5.1 | 0.50 | 5.3 | 0.54 | 4.9 | 0.54 |
| brock200_4 | 13 | 29.4 | 0.52 | 25.7 | 0.57 | 33.2 | 0.50 | 25.8 | 0.54 |
| c-fat200-1 | 11 | <0.1 | 0.50 | <0.1 | 0.53 | <0.1 | 0.50 | <0.1 | 0.53 |
| c-fat200-2 | 14 | <0.1 | 0.52 | <0.1 | 0.51 | <0.1 | 0.51 | <0.1 | 0.51 |
| c-fat200-5 | 20 | 5850.8 | 0.51 | 707.1 | 0.52 | 828.7 | 0.51 | 2074.0 | 0.51 |
| c-fat500-1 | 11 | <0.1 | 0.52 | <0.1 | 0.57 | <0.1 | 0.51 | <0.1 | 0.58 |
| c-fat500-2 | 15 | 1.5 | 0.50 | 0.4 | 0.50 | 0.3 | 0.53 | 0.6 | 0.51 |
| c-fat500-5 | 22 | 20346.7 | 0.50 | 3433.9 | 0.51 | 1285.4 | 0.51 | 3498.1 | 0.50 |
| hamming6-2 | 16 | 3.5 | 0.52 | 1.5 | 0.50 | 1.7 | 0.50 | 1.3 | 0.53 |
| hamming6-4 | 4 | <0.1 | 0.65 | <0.1 | 0.73 | <0.1 | 0.82 | <0.1 | 0.80 |
| hamming8-4 | 14 | 24.2 | 0.53 | 9.6 | 0.53 | 7.5 | 0.53 | 7.3 | 0.53 |
| johnson8-2-4 | 4 | <0.1 | 0.91 | <0.1 | 0.94 | <0.1 | 1.00 | <0.1 | 0.98 |
| johnson8-4-4 | 13 | <0.1 | 0.54 | <0.1 | 0.54 | <0.1 | 0.54 | <0.1 | 0.54 |
| johnson16-2-4 | 8 | 8.8 | 0.51 | 4.9 | 0.67 | 0.6 | 0.53 | 0.5 | 0.67 |
| keller4 | 11 | 1.9 | 0.79 | 2.0 | 0.51 | 2.0 | 0.66 | 1.6 | 0.51 |
| MANN_a9 | 14 | 0.5 | 0.52 | 0.4 | 0.50 | 0.1 | 0.53 | <0.1 | 0.52 |
| p_hat300-1 | 8 | <0.1 | 0.53 | <0.1 | 0.53 | <0.1 | 0.52 | <0.1 | 0.67 |
| p_hat300-2 | 17 | 251.7 | 0.55 | 106.5 | 0.52 | 116.6 | 0.52 | 109.6 | 0.55 |

For \(\theta =0.75\), there are eight instances on which at least one version of the algorithm took more than 2 s. On these eight instances, one of the BCO versions always outperforms the EDO version, and on seven of them EDO was in fact the slowest (by a significant margin on several instances). Furthermore, BCO \(\epsilon =1\) was outperformed by the other two BCO versions on seven of the eight instances. Comparing the two BCO versions with \(\epsilon =0.9\) and \(\epsilon =0.8\) (coloring the support graph) is less straightforward, since neither clearly dominates the other; however, on several instances where BCO with \(\epsilon =0.9\) was faster, it was faster by a significant margin. With all else the same, we then solved the MPCP with \(\theta =0.5\) on this test-bed and observed similar behavior (on the nine instances on which at least one version of the algorithm took more than 2 s). The performance advantage of BCO \(\epsilon =0.9\) is now more noticeable: when it is not the fastest, this version is not far behind. Some instances, such as brock200_1, c-fat200-5, c-fat500-5 and p_hat300-2, are clearly harder than the rest, since all versions took much longer on them than on instances of the same size from the same family, for both values of \(\theta \). On such instances there is often a noticeable advantage in using the BCO \(\epsilon =0.9\) version. It is also clear that for many instances, solving the MPCP with \(\theta =0.5\) took considerably longer than solving it with \(\theta =0.75\).

The observations made in this section are dependent on the choice of the test-bed, the variation in the probability distribution (\(p_e\)) across the edges of the support graph, and the probability threshold \(\theta \). Nonetheless, the experiments on this test-bed clearly indicate that the bottleneck coloring approach used here has strong potential to result in significant time savings in general. It is also clear that certain values of \(\epsilon \) could result in dramatic reduction in running times, especially on “hard” instances. However, it remains to be seen whether this performance can be duplicated on another test-bed. We explore this in the next set of experiments.

4.2 Experiments on a randomly generated test-bed

In the DIMACS test-bed, only 25 % of the edges were probabilistic while the rest were guaranteed to exist. In this set of experiments, we used a test-bed of 20 samples of 200-vertex instances with complete support graphs, in which every edge is assigned a probability independently and uniformly from the interval \((0.8,1.0)\), the same range used in the previous experiments. We use this test-bed to study the performance of the BCO ordering with respect to variations in \(\theta \) and \(\epsilon \). The MPCP was solved on these instances using Algorithm 1 with BCO at \(\epsilon =0.85, 0.9\) and \(0.95\) for \(\theta \in \{0.4,0.5,0.6,0.7,0.8,0.9\}\). The results are presented in Tables 4, 5 and 6 and are summarized in Table 3. As \(\theta \) decreases, the size of a maximum probabilistic \(\theta \)-clique increases (not necessarily strictly), as expected. However, the running time increases significantly, possibly due to the larger candidate sets in each iteration and in each recursive call to the Search function. All three \(\epsilon \) values for the bottleneck graph coloring result in very similar running times, with some noticeable differences for \(\theta =0.4, 0.5\); no version conclusively dominates the rest. Nonetheless, based on the results on both groups of test instances, choosing the median value of \(p_e\) over all \(e \in E\), or the mid-range value \(0.5\,(\min \nolimits _{e \in E}p_e + \max \nolimits _{e \in E}p_e)\), is clearly a good starting point for tuning the parameter \(\epsilon \).
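The two suggested starting points for tuning \(\epsilon \) are trivial to compute from the edge probabilities (a small helper of our own):

```python
from statistics import median

def epsilon_starting_points(edge_probs):
    """Starting points for tuning the bottleneck threshold epsilon:
    the median of p_e over all edges of the support graph, and the
    mid-range 0.5 * (min p_e + max p_e)."""
    ps = list(edge_probs.values())
    return median(ps), 0.5 * (min(ps) + max(ps))
```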
Table 3

Median running time (in seconds) over 20 samples of 200-vertex complete graph instances with \(p_e\) chosen uniformly in \((0.8,1)\) using Algorithm 1

|  | \(\theta =0.4\) | \(\theta =0.5\) | \(\theta =0.6\) | \(\theta =0.7\) | \(\theta =0.8\) | \(\theta =0.9\) |
|---|---|---|---|---|---|---|
| \(\epsilon = 0.85\) | 903.7 | 170.0 | 17.0 | 3.5 | 0.5 | <0.1 |
| \(\epsilon = 0.90\) | 862.8 | 164.9 | 16.9 | 3.3 | 0.5 | <0.1 |
| \(\epsilon = 0.95\) | 904.7 | 147.8 | 16.5 | 3.2 | 0.5 | <0.1 |

Table 4

Algorithm 1 on 200-vertex complete support graph instances with \(p_e\) chosen uniformly in \((0.8,1)\) using BCO \(\epsilon =0.85\) (each cell lists running time in seconds, solution size, and solution probability)

| Graph | \(\theta =0.4\) | \(\theta =0.5\) | \(\theta =0.6\) | \(\theta =0.7\) | \(\theta =0.8\) | \(\theta =0.9\) |
|---|---|---|---|---|---|---|
| 1 | 1036.3, 8, 0.41 | 164.6, 7, 0.51 | 19.3, 7, 0.61 | 3.7, 6, 0.70 | 0.6, 5, 0.80 | <0.1, 5, 0.91 |
| 2 | 920.7, 8, 0.40 | 170.6, 7, 0.50 | 17.1, 7, 0.63 | 3.7, 6, 0.70 | 0.6, 5, 0.84 | <0.1, 5, 0.91 |
| 3 | 769.1, 8, 0.41 | 159.9, 7, 0.53 | 15.7, 7, 0.61 | 3.6, 6, 0.70 | 0.5, 5, 0.81 | <0.1, 4, 0.90 |
| 4 | 993.6, 8, 0.40 | 166.1, 7, 0.52 | 16.3, 7, 0.63 | 3.3, 6, 0.71 | 0.6, 5, 0.80 | <0.1, 5, 0.91 |
| 5 | 1017.6, 8, 0.41 | 177.4, 7, 0.52 | 19.5, 7, 0.60 | 3.6, 6, 0.70 | 0.5, 5, 0.80 | <0.1, 5, 0.90 |
| 6 | 997.5, 8, 0.41 | 110.9, 8, 0.51 | 16.5, 7, 0.61 | 3.0, 6, 0.70 | 0.6, 5, 0.83 | <0.1, 4, 0.94 |
| 7 | 940.9, 8, 0.40 | 174.7, 7, 0.52 | 17.6, 7, 0.60 | 3.2, 6, 0.71 | 0.5, 5, 0.84 | <0.1, 5, 0.90 |
| 8 | 976.9, 8, 0.40 | 166.9, 7, 0.51 | 22.2, 7, 0.61 | 3.8, 6, 0.72 | 0.6, 5, 0.86 | <0.1, 5, 0.90 |
| 9 | 892.7, 8, 0.41 | 181.3, 7, 0.51 | 17.3, 7, 0.61 | 3.7, 6, 0.76 | 0.6, 5, 0.80 | <0.1, 5, 0.90 |
| 10 | 915.5, 8, 0.41 | 163.6, 7, 0.50 | 16.9, 7, 0.60 | 3.4, 6, 0.73 | 0.5, 5, 0.80 | <0.1, 4, 0.90 |
| 11 | 896.7, 8, 0.42 | 171.1, 7, 0.50 | 18.8, 7, 0.61 | 3.3, 6, 0.70 | 0.3, 6, 0.81 | <0.1, 5, 0.91 |
| 12 | 884.1, 8, 0.40 | 178.2, 8, 0.51 | 16.7, 7, 0.60 | 3.6, 6, 0.70 | 0.6, 5, 0.82 | <0.1, 5, 0.92 |
| 13 | 1012.3, 8, 0.41 | 174.1, 7, 0.53 | 22.6, 7, 0.60 | 3.6, 6, 0.73 | 0.3, 6, 0.80 | <0.1, 5, 0.91 |
| 14 | 847.6, 8, 0.42 | 171.5, 7, 0.50 | 16.8, 7, 0.60 | 3.5, 6, 0.72 | 0.5, 5, 0.80 | <0.1, 4, 0.91 |
| 15 | 874.6, 8, 0.41 | 170.7, 7, 0.51 | 15.3, 7, 0.60 | 3.7, 6, 0.72 | 0.5, 5, 0.80 | <0.1, 5, 0.91 |
| 16 | 864.7, 8, 0.41 | 161.7, 7, 0.50 | 15.1, 7, 0.61 | 2.8, 6, 0.71 | 0.5, 5, 0.81 | <0.1, 5, 0.90 |
| 17 | 855.3, 8, 0.40 | 163.6, 7, 0.53 | 15.4, 7, 0.61 | 3.1, 6, 0.71 | 0.5, 5, 0.80 | <0.1, 5, 0.91 |
| 18 | 760.4, 8, 0.42 | 169.5, 7, 0.51 | 16.7, 7, 0.63 | 3.7, 6, 0.73 | 0.3, 6, 0.81 | <0.1, 5, 0.91 |
| 19 | 910.8, 8, 0.41 | 174.6, 7, 0.52 | 18.1, 7, 0.61 | 2.9, 6, 0.71 | 0.6, 5, 0.81 | <0.1, 4, 0.92 |
| 20 | 813.1, 8, 0.41 | 165.6, 7, 0.50 | 19.5, 7, 0.60 | 2.9, 6, 0.71 | 0.5, 5, 0.81 | <0.1, 5, 0.91 |
| Median | 903.7, 8, 0.41 | 170.0, 7, 0.51 | 17.0, 7, 0.61 | 3.5, 6, 0.71 | 0.5, 5, 0.81 | <0.1, 5, 0.91 |
Table 5

Algorithm 1 on 200-vertex complete support graph instances with \(p_e\) chosen uniformly in \((0.8,1)\) using BCO \(\epsilon =0.9\) (running time in seconds)

Each cell reports running time (s) / solution size / solution clique probability.

| Graph | \(\theta =0.4\) | \(\theta =0.5\) | \(\theta =0.6\) | \(\theta =0.7\) | \(\theta =0.8\) | \(\theta =0.9\) |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 946.8 / 8 / 0.43 | 168.9 / 7 / 0.51 | 19.0 / 7 / 0.60 | 4.0 / 6 / 0.70 | 0.6 / 5 / 0.81 | \(<\)0.1 / 5 / 0.91 |
| 2 | 929.9 / 8 / 0.41 | 169.3 / 7 / 0.55 | 17.0 / 7 / 0.62 | 3.9 / 6 / 0.70 | 0.5 / 5 / 0.82 | \(<\)0.1 / 5 / 0.91 |
| 3 | 872.4 / 8 / 0.41 | 153.1 / 7 / 0.53 | 22.0 / 7 / 0.61 | 3.5 / 6 / 0.70 | 0.5 / 5 / 0.81 | \(<\)0.1 / 4 / 0.92 |
| 4 | 1021.1 / 8 / 0.40 | 175.0 / 7 / 0.53 | 22.4 / 7 / 0.60 | 4.0 / 6 / 0.71 | 0.6 / 5 / 0.81 | \(<\)0.1 / 5 / 0.91 |
| 5 | 777.1 / 8 / 0.41 | 177.6 / 7 / 0.51 | 15.4 / 7 / 0.60 | 3.7 / 6 / 0.71 | 0.6 / 5 / 0.81 | \(<\)0.1 / 5 / 0.90 |
| 6 | 880.6 / 8 / 0.41 | 126.0 / 8 / 0.51 | 18.5 / 7 / 0.61 | 3.3 / 6 / 0.71 | 0.6 / 5 / 0.81 | \(<\)0.1 / 4 / 0.90 |
| 7 | 829.4 / 8 / 0.40 | 163.2 / 7 / 0.50 | 15.1 / 7 / 0.61 | 3.6 / 6 / 0.71 | 0.5 / 5 / 0.84 | \(<\)0.1 / 5 / 0.90 |
| 8 | 813.1 / 8 / 0.42 | 159.3 / 7 / 0.51 | 16.0 / 7 / 0.60 | 3.3 / 6 / 0.70 | 0.5 / 5 / 0.80 | \(<\)0.1 / 5 / 0.90 |
| 9 | 899.9 / 8 / 0.40 | 172.6 / 7 / 0.51 | 20.4 / 7 / 0.63 | 3.2 / 6 / 0.77 | 0.6 / 5 / 0.83 | \(<\)0.1 / 5 / 0.90 |
| 10 | 797.4 / 8 / 0.41 | 166.9 / 7 / 0.51 | 17.0 / 7 / 0.60 | 3.3 / 6 / 0.74 | 0.6 / 5 / 0.81 | \(<\)0.1 / 4 / 0.92 |
| 11 | 853.2 / 8 / 0.40 | 167.0 / 7 / 0.51 | 15.3 / 7 / 0.61 | 3.0 / 6 / 0.70 | 0.3 / 6 / 0.81 | \(<\)0.1 / 5 / 0.91 |
| 12 | 916.5 / 8 / 0.42 | 94.3 / 8 / 0.51 | 18.9 / 7 / 0.63 | 3.3 / 6 / 0.71 | 0.6 / 5 / 0.82 | \(<\)0.1 / 5 / 0.92 |
| 13 | 874.3 / 8 / 0.41 | 166.9 / 7 / 0.51 | 17.2 / 7 / 0.62 | 3.1 / 6 / 0.72 | 0.3 / 6 / 0.81 | \(<\)0.1 / 5 / 0.91 |
| 14 | 819.1 / 8 / 0.40 | 152.8 / 7 / 0.51 | 17.0 / 7 / 0.60 | 2.9 / 6 / 0.70 | 0.5 / 5 / 0.80 | \(<\)0.1 / 4 / 0.91 |
| 15 | 789.2 / 8 / 0.41 | 161.1 / 7 / 0.52 | 14.7 / 7 / 0.62 | 3.2 / 6 / 0.73 | 0.6 / 5 / 0.82 | \(<\)0.1 / 5 / 0.91 |
| 16 | 701.1 / 8 / 0.40 | 161.5 / 7 / 0.51 | 14.2 / 7 / 0.61 | 2.9 / 6 / 0.74 | 0.5 / 5 / 0.80 | \(<\)0.1 / 5 / 0.90 |
| 17 | 778.5 / 8 / 0.47 | 151.6 / 7 / 0.51 | 15.0 / 7 / 0.61 | 2.9 / 6 / 0.72 | 0.6 / 5 / 0.81 | \(<\)0.1 / 5 / 0.91 |
| 18 | 843.2 / 8 / 0.40 | 166.7 / 7 / 0.50 | 14.4 / 7 / 0.60 | 3.2 / 6 / 0.73 | 0.3 / 6 / 0.81 | \(<\)0.1 / 5 / 0.91 |
| 19 | 917.3 / 8 / 0.42 | 177.7 / 7 / 0.50 | 15.9 / 7 / 0.63 | 3.9 / 6 / 0.72 | 0.6 / 5 / 0.81 | \(<\)0.1 / 4 / 0.92 |
| 20 | 923.2 / 8 / 0.42 | 149.8 / 7 / 0.52 | 16.9 / 7 / 0.60 | 3.3 / 6 / 0.74 | 0.5 / 5 / 0.81 | \(<\)0.1 / 5 / 0.91 |
| Median | 862.8 / 8 / 0.41 | 164.9 / 7 / 0.51 | 16.9 / 7 / 0.61 | 3.3 / 6 / 0.71 | 0.5 / 5 / 0.81 | \(<\)0.1 / 5 / 0.91 |

Table 6

Algorithm 1 on 200-vertex complete support graph instances with \(p_e\) chosen uniformly in \((0.8,1)\) using BCO \(\epsilon =0.95\) (running time in seconds)

Each cell reports running time (s) / solution size / solution clique probability.

| Graph | \(\theta =0.4\) | \(\theta =0.5\) | \(\theta =0.6\) | \(\theta =0.7\) | \(\theta =0.8\) | \(\theta =0.9\) |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 920.7 / 8 / 0.41 | 160.5 / 7 / 0.52 | 16.1 / 7 / 0.61 | 3.5 / 6 / 0.71 | 0.5 / 5 / 0.82 | \(<\)0.1 / 5 / 0.91 |
| 2 | 855.8 / 8 / 0.40 | 159.6 / 7 / 0.51 | 15.1 / 7 / 0.61 | 3.4 / 6 / 0.71 | 0.5 / 5 / 0.81 | \(<\)0.1 / 5 / 0.91 |
| 3 | 822.8 / 8 / 0.40 | 162.3 / 7 / 0.51 | 18.2 / 7 / 0.60 | 3.0 / 6 / 0.70 | 0.5 / 5 / 0.80 | \(<\)0.1 / 4 / 0.93 |
| 4 | 1132.9 / 8 / 0.41 | 157.8 / 7 / 0.52 | 15.0 / 7 / 0.60 | 3.5 / 6 / 0.73 | 0.5 / 5 / 0.83 | \(<\)0.1 / 5 / 0.91 |
| 5 | 892.6 / 8 / 0.42 | 156.2 / 7 / 0.51 | 14.7 / 7 / 0.61 | 2.8 / 6 / 0.71 | 0.5 / 5 / 0.81 | \(<\)0.1 / 5 / 0.90 |
| 6 | 910.8 / 8 / 0.41 | 117.7 / 8 / 0.51 | 17.6 / 7 / 0.61 | 3.1 / 6 / 0.71 | 0.5 / 5 / 0.80 | \(<\)0.1 / 4 / 0.90 |
| 7 | 721.9 / 8 / 0.40 | 147.6 / 7 / 0.52 | 15.6 / 7 / 0.61 | 3.0 / 6 / 0.70 | 0.5 / 5 / 0.82 | \(<\)0.1 / 5 / 0.90 |
| 8 | 954.7 / 8 / 0.42 | 148.9 / 7 / 0.52 | 18.8 / 7 / 0.60 | 3.3 / 6 / 0.70 | 0.5 / 5 / 0.80 | \(<\)0.1 / 5 / 0.90 |
| 9 | 951.2 / 8 / 0.42 | 149.7 / 7 / 0.51 | 21.0 / 7 / 0.61 | 3.3 / 6 / 0.70 | 0.5 / 5 / 0.81 | \(<\)0.1 / 5 / 0.90 |
| 10 | 741.2 / 8 / 0.41 | 141.7 / 7 / 0.51 | 15.7 / 7 / 0.60 | 3.0 / 6 / 0.72 | 0.4 / 5 / 0.81 | \(<\)0.1 / 4 / 0.90 |
| 11 | 1015.3 / 8 / 0.40 | 147.9 / 7 / 0.53 | 16.2 / 7 / 0.61 | 2.8 / 6 / 0.72 | 0.4 / 6 / 0.81 | \(<\)0.1 / 5 / 0.91 |
| 12 | 924.8 / 8 / 0.41 | 114.5 / 8 / 0.51 | 17.0 / 7 / 0.61 | 3.1 / 6 / 0.70 | 0.5 / 5 / 0.81 | \(<\)0.1 / 5 / 0.92 |
| 13 | 1049.5 / 8 / 0.44 | 151.3 / 7 / 0.51 | 17.0 / 7 / 0.64 | 3.2 / 6 / 0.72 | 0.2 / 6 / 0.81 | \(<\)0.1 / 5 / 0.91 |
| 14 | 898.6 / 8 / 0.42 | 138.9 / 7 / 0.51 | 20.1 / 7 / 0.60 | 2.6 / 6 / 0.74 | 0.5 / 5 / 0.81 | \(<\)0.1 / 4 / 0.93 |
| 15 | 680.0 / 8 / 0.41 | 143.5 / 7 / 0.53 | 14.7 / 7 / 0.60 | 3.2 / 6 / 0.72 | 0.5 / 5 / 0.80 | \(<\)0.1 / 5 / 0.90 |
| 16 | 853.0 / 8 / 0.41 | 142.0 / 7 / 0.57 | 16.5 / 7 / 0.61 | 3.1 / 6 / 0.71 | 0.5 / 5 / 0.81 | \(<\)0.1 / 5 / 0.90 |
| 17 | 780.1 / 8 / 0.43 | 144.4 / 7 / 0.51 | 18.9 / 7 / 0.61 | 3.3 / 6 / 0.72 | 0.5 / 5 / 0.81 | \(<\)0.1 / 5 / 0.91 |
| 18 | 954.4 / 8 / 0.40 | 145.3 / 7 / 0.54 | 21.1 / 7 / 0.61 | 3.3 / 6 / 0.70 | 0.3 / 6 / 0.82 | \(<\)0.1 / 5 / 0.91 |
| 19 | 915.0 / 8 / 0.40 | 157.5 / 7 / 0.52 | 16.6 / 7 / 0.63 | 3.2 / 6 / 0.72 | 0.5 / 5 / 0.80 | \(<\)0.1 / 4 / 0.91 |
| 20 | 713.7 / 8 / 0.40 | 139.9 / 7 / 0.51 | 16.1 / 7 / 0.60 | 3.0 / 6 / 0.70 | 0.4 / 5 / 0.81 | \(<\)0.1 / 5 / 0.91 |
| Median | 904.7 / 8 / 0.41 | 147.8 / 7 / 0.51 | 16.5 / 7 / 0.61 | 3.2 / 6 / 0.71 | 0.5 / 5 / 0.81 | \(<\)0.1 / 5 / 0.91 |

5 Conclusions

This article demonstrates that graph properties that are hereditary on vertex-induced subgraphs remain hereditary when we seek vertex subsets that satisfy the property with probability at least \(\theta \in [0,1]\) in a random graph. Consequently, the well-known algorithm of Östergård (2002) for the maximum clique problem extends to the maximum probabilistic clique problem. We then investigate two features known to significantly affect the performance of this algorithm: the update of the so-called candidate set, and the order in which the vertices of the graph are processed. The independence assumption on probabilistic edge failures, together with the clique definition, admits an incremental candidate-set update procedure that yields a "sampling-free" exact algorithm for the problem. A bottleneck graph coloring approach is proposed for vertex ordering and is shown via computational experiments to be very promising. It would be interesting to extend more recent algorithms for the maximum clique problem (Tomita et al. 2010; Batsyn et al. 2013) to this probabilistic setting, and to explore whether probabilistic versions of hereditary clique relaxations such as \(k\)-plexes and \(k\)-defective cliques (Trukhanov et al. 2013) admit sampling-free combinatorial exact algorithms under the assumption of independent edge failures.
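
The incremental, sampling-free computation alluded to above can be sketched in a few lines. Under independent edge failures, the probability that a vertex set \(S\) induces a clique is simply the product of the survival probabilities of the edges inside \(S\); extending a set by one vertex \(v\) therefore only multiplies in the \(|S|\) edge probabilities between \(v\) and \(S\), with no sampling. This is a minimal illustration under that independence assumption, not the authors' implementation; the dictionary `p` and the function names are hypothetical.

```python
from itertools import combinations

def clique_probability(S, p):
    """Probability that vertex set S induces a clique, given a dict p
    mapping sorted edge tuples (u, v) to independent survival probabilities."""
    prob = 1.0
    for u, v in combinations(sorted(S), 2):
        prob *= p.get((u, v), 0.0)  # an absent edge survives with probability 0
    return prob

def extend_probability(prob_S, S, v, p):
    """Incrementally obtain the clique probability of S ∪ {v} from that of S,
    using only |S| multiplications instead of recomputing over all pairs."""
    for u in S:
        e = (min(u, v), max(u, v))
        prob_S *= p.get(e, 0.0)
    return prob_S

# Tiny example on a triangle with edge survival probabilities 0.9, 0.8, 0.9.
p = {(1, 2): 0.9, (1, 3): 0.8, (2, 3): 0.9}
full = clique_probability({1, 2, 3}, p)
incr = extend_probability(clique_probability({1, 2}, p), {1, 2}, 3, p)
```

Since every factor is at most 1, removing a vertex can only raise the product, which is exactly the heredity property that lets the branch-and-bound prune: any subset of a probabilistic clique is again a probabilistic clique for the same \(\theta\).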

Acknowledgments

The computational experiments reported in this article were performed at the Oklahoma State University High Performance Computing Center (OSUHPCC). The authors are grateful to Dr. Dana Brunson for her support in conducting these experiments at OSUHPCC. The authors also thank the referees, whose comments helped improve the presentation of this paper. This research was supported by the US Department of Energy Grant DE-SC0002051, the Oklahoma Transportation Center Equipment Grant OTCES10.2-10 and by the AFRL Mathematical Modeling and Optimization Institute.

References

  1. Ahmed S (2006) Convexity and decomposition of mean-risk stochastic programs. Math Progr 106:433–446
  2. Ahmed S, Shapiro A (2008) Solving chance-constrained stochastic programs via sampling and integer programming. In: Chen ZL, Raghavan S (eds) Tutorials in operations research, 10th edn. INFORMS, Minneapolis
  3. Applegate D, Johnson DS (1988) dfmax.c [C program], available online. ftp://dimacs.rutgers.edu/pub/challenge/graph/solvers/dfmax.c
  4. Balas E, Xue J (1996) Weighted and unweighted maximum clique algorithms with upper bounds from fractional coloring. Algorithmica 15:397–412
  5. Balas E, Yu C (1986) Finding a maximum clique in an arbitrary graph. SIAM J Comput 15:1054–1068
  6. Balasundaram B, Butenko S (2008) Network clustering. In: Junker BH, Schreiber F (eds) Analysis of biological networks. Wiley, New York, pp 113–138
  7. Balasundaram B, Pajouh FM (2013) Graph theoretic clique relaxations and applications. In: Pardalos PM, Du DZ, Graham R (eds) Handbook of combinatorial optimization, 2nd edn. Springer. doi:10.1007/978-1-4419-7997-1_9
  8. Batsyn M, Goldengorin B, Maslov E, Pardalos P (2013) Improvements to MCS algorithm for the maximum clique problem. J Comb Optim 26:1–20. doi:10.1007/s10878-012-9592-6
  9. Bertsimas D, Brown DB, Caramanis C (2011) Theory and applications of robust optimization. SIAM Rev 53(3):464–501
  10. Boginski V (2011) Network-based data mining: operations research techniques and applications. In: Encyclopedia of operations research and management science. Wiley, New York
  11. Bomze IM, Budinich M, Pardalos PM, Pelillo M (1999) The maximum clique problem. In: Du DZ, Pardalos PM (eds) Handbook of combinatorial optimization. Kluwer Academic, Dordrecht, pp 1–74
  12. Butenko S, Wilhelm W (2006) Clique-detection models in computational biochemistry and genomics. Eur J Oper Res 173:1–17
  13. Carraghan R, Pardalos P (1990) An exact algorithm for the maximum clique problem. Oper Res Lett 9:375–382
  14. Cook DJ, Holder LB (2000) Graph-based data mining. IEEE Intell Syst 15(2):32–41
  15. DIMACS (1995) Cliques, coloring, and satisfiability: second DIMACS implementation challenge. http://dimacs.rutgers.edu/Challenges/
  16. Garey MR, Johnson DS (1979) Computers and intractability: a guide to the theory of NP-completeness. W.H. Freeman and Company, New York
  17. Harary F, Ross IC (1957) A procedure for clique detection using the group matrix. Sociometry 20:205–215
  18. Håstad J (1999) Clique is hard to approximate within \(n^{1-\epsilon }\). Acta Math 182:105–142
  19. Hochbaum DS, Shmoys DB (1985) A best possible heuristic for the \(k\)-center problem. Math Oper Res 10:180–184
  20. Johnson D, Trick M (eds) (1996) Cliques, coloring, and satisfiability: second DIMACS implementation challenge. DIMACS series in discrete mathematics and theoretical computer science, vol 26. American Mathematical Society, Providence
  21. Krokhmal P, Uryasev S, Zrazhevsky G (2005) Numerical comparison of conditional value-at-risk and conditional drawdown-at-risk approaches: application to hedge funds. In: Applications of stochastic programming. MPS/SIAM Ser Optim, vol 5. SIAM, Philadelphia, pp 609–631
  22. Kubale M (2004) Graph colorings. Contemporary mathematics, vol 352. American Mathematical Society, Providence
  23. Luce RD, Perry AD (1949) A method of matrix analysis of group structure. Psychometrika 14(2):95–116
  24. Luedtke J (2010) An integer programming and decomposition approach to general chance-constrained mathematical programs. In: Eisenbrand F, Shepherd F (eds) Integer programming and combinatorial optimization. Lecture notes in computer science, vol 6080. Springer, Berlin/Heidelberg, pp 271–284
  25. Luedtke J, Ahmed S (2008) A sample approximation approach for optimization with probabilistic constraints. SIAM J Optim 19(2):674–699
  26. McClosky B (2011) Clique relaxations. In: Encyclopedia of operations research and management science. Wiley, New York
  27. Nemirovski A, Shapiro A (2004) Scenario approximations of chance constraints. In: Probabilistic and randomized methods for design under uncertainty. Springer, Heidelberg, pp 3–48
  28. Nemirovski A, Shapiro A (2006a) Convex approximations of chance constrained programs. SIAM J Optim 17:969–996
  29. Nemirovski A, Shapiro A (2006b) Scenario approximations of chance constraints. In: Calafiore G, Dabbene F (eds) Probabilistic and randomized methods for design under uncertainty. Springer, London, pp 3–47
  30. Östergård PRJ (2002) A fast algorithm for the maximum clique problem. Discrete Appl Math 120:197–207
  31. Pagnoncelli BK, Ahmed S, Shapiro A (2009) Sample average approximation method for chance constrained programming: theory and applications. J Optim Theory Appl 142:399–416
  32. Pardalos PM, Xue J (1994) The maximum clique problem. J Glob Optim 4:301–328
  33. Pattillo J, Youssef N, Butenko S (2012) Clique relaxation models in social network analysis. In: Thai MT, Pardalos PM (eds) Handbook of optimization in complex networks. Springer optimization and its applications, vol 58. Springer, New York, pp 143–162
  34. Prékopa A (2003) Probabilistic programming. In: Ruszczynski A, Shapiro A (eds) Stochastic programming. Handbooks in operations research and management, vol 10. Elsevier, Amsterdam, pp 267–351
  35. Rockafellar R, Uryasev S (2000) Optimization of conditional value-at-risk. J Risk 2(3):21–41
  36. Sewell EC (1998) A branch and bound algorithm for the stability number of a sparse graph. INFORMS J Comput 10(4):438–447
  37. Shapiro A, Dentcheva D, Ruszczynski A (2009) Lectures on stochastic programming: modeling and theory. MPS/SIAM series on optimization. SIAM, Philadelphia
  38. Tomita E, Kameda T (2007) An efficient branch-and-bound algorithm for finding a maximum clique with computational experiments. J Glob Optim 37(1):95–111
  39. Tomita E, Sutani Y, Higashi T, Takahashi S, Wakatsuki M (2010) A simple and faster branch-and-bound algorithm for finding a maximum clique. In: Rahman M, Fujita S (eds) WALCOM: algorithms and computation. Lecture notes in computer science, vol 5942. Springer, Berlin/Heidelberg, pp 191–203
  40. Trukhanov S, Balasubramaniam C, Balasundaram B, Butenko S (2013) Algorithms for detecting optimal hereditary structures in graphs, with application to clique relaxations. Comput Optim Appl 56(1):113–130
  41. Vaskelainen V (2010) Russian doll search algorithms for discrete optimization problems. PhD thesis, Helsinki University of Technology
  42. Wasserman S, Faust K (1994) Social network analysis. Cambridge University Press, New York
  43. Wood DR (1997) An algorithm for finding a maximum clique in a graph. Oper Res Lett 21(5):211–217
  44. Yannakakis M (1978) Node- and edge-deletion NP-complete problems. In: STOC '78: proceedings of the 10th annual ACM symposium on theory of computing. ACM Press, New York, pp 253–264

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Zhuqi Miao (1)
  • Balabhaskar Balasundaram (1)
  • Eduardo L. Pasiliao (2)

  1. School of Industrial Engineering & Management, Oklahoma State University, Stillwater, USA
  2. Munitions Directorate, Air Force Research Laboratory, Eglin AFB, USA