On monotonicity and search strategies in face-based copositivity detection algorithms

Over the last decades, algorithms have been developed for checking copositivity of a matrix. Methods are based on several principles, such as spatial branch and bound, transformation to Mixed Integer Programming, implicit enumeration of KKT points or face-based search. Our research question focuses on exploiting the mathematical properties of the relative interior minima of the standard quadratic program (StQP) and monotonicity. We derive several theoretical properties related to convexity and monotonicity of the standard quadratic function over faces of the standard simplex. We illustrate with numerical instances up to 28 dimensions the use of monotonicity in face-based algorithms. The question is what traversal through the face graph of the standard simplex is more appropriate for which matrix instance; top down or bottom up approaches. This depends on the level of the face graph where the minimum of StQP can be found, which is related to the density of the so-called convexity graph.


Introduction
Copositivity of a matrix is an important concept in combinatorial and quadratic optimization (Burer 2009; Kaplan 2000; Povh and Rendl 2007; Väliaho 1986). Consider the standard simplex

Δ_n := {x ∈ R^n | x ≥ 0, 1^T x = 1}

with the n coordinate unit vectors e_i, i = 1, . . . , n as vertices. A symmetric n × n matrix A is called copositive if x^T Ax ≥ 0 for all x ∈ Δ_n, and noncopositive if ∃x ∈ Δ_n, x^T Ax < 0.
One can determine whether a matrix is positive semidefinite (PSD) via Cholesky decomposition in polynomial time (O(n^3)). However, determining copositivity of a matrix has been shown to be a co-NP-complete problem (Murty and Kabadi 1987). The certification of copositivity is related to the standard quadratic program (StQP)

f^* := min_{x ∈ Δ_n} f(x) := x^T Ax.    (5)

The StQP is generic in the sense that optimizing a quadratic function f(x) := x^T Qx + b^T x + c over Δ_n is equivalent to (5), taking A := Q + (1/2)(1b^T + b1^T) + c 11^T, where 1 is the all-ones vector. Clearly, if f^* < 0, then A is not copositive, and if f^* ≥ 0, then A is copositive.
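The transformation A := Q + (1/2)(1b^T + b1^T) + c 11^T can be checked numerically. The paper's experiments use Matlab; the following Python sketch (numpy assumed) is only an illustration that x^T Ax reproduces the general quadratic f on the simplex, since 1^T x = 1 there.

```python
import numpy as np

def stqp_matrix(Q, b, c):
    """Build A with x^T A x = x^T Q x + b^T x + c for x in the standard simplex.

    Uses 1^T x = 1: x^T (1 b^T + b 1^T) x / 2 = b^T x and x^T 11^T x = 1.
    """
    n = Q.shape[0]
    one = np.ones((n, 1))
    bb = b.reshape(-1, 1)
    return Q + 0.5 * (one @ bb.T + bb @ one.T) + c * (one @ one.T)

# check the equivalence at a random simplex point
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 4)); Q = (Q + Q.T) / 2
b = rng.standard_normal(4); c = 1.5
A = stqp_matrix(Q, b, c)
x = rng.random(4); x /= x.sum()            # x in the standard simplex
f_val = x @ Q @ x + b @ x + c
assert abs(x @ A @ x - f_val) < 1e-12
```

The construction symmetrizes the linear term so that A stays symmetric, which the face-based machinery below relies on.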
There have been suggestions in the literature to create procedures for copositivity testing based on properties of the matrix (Nie et al. 2018; Yang and Li 2009). The spatial branch and bound (B&B) algorithm introduced by Bundfuss and Dür (2008) can either certify that a matrix is not copositive, or prove that it is so-called ε-copositive. Following such a procedure, Žilinskas and Dür (2011) claim that certifying ε-copositivity of a copositive matrix is limited to a size of up to n = 22 in reasonable time. Certification of ε-copositivity by simplicial refinement requires much more computation than verifying non-copositivity of a matrix, which can be done for dimensions n up to several thousands.
Basically, a spatial B&B approach samples and evaluates points that do not necessarily coincide with candidate optima of (5). In contrast, a recent work by Liuzzi et al. (2019), which also compares with B&B approaches, focuses on the first order conditions, i.e. the Karush-Kuhn-Tucker (KKT) conditions of the optima of (5). They apply convex and linear bounds in a B&B context implicitly enumerating KKT points, which proved to be very efficient for low-density graphs. The consequences of introducing monotonicity in the original B&B of Bundfuss and Dür (2008) have been investigated in Hendrix et al. (2019) and Salmerón (2019).
Also recent is the work of Gondzio and Yildirim (2018) which translates the KKT conditions to a MIP type of approach making use of fast integer programming implementations in order to solve the StQP (5) for instances of hundreds of variables. The mentioned procedures do not focus on the second order considerations to find an optimum.
An older work, Scozzari and Tardella (2008) derives algorithms based on the observation that a minimum point of (5) can only be on the so-called relative interior of a face of the standard simplex, if f is convex on that face. Their focus is on the convexity of f on the edges of that face. In this way, they look for what they call a clique in the convexity graph, consisting of the vertices of the standard simplex and edges where f is strictly convex.
Our focus is also face-based: we look for negative points of f for noncopositivity detection, where only local minimum points on the relative interior of faces are evaluated. The search procedure traverses what we call the face graph of the standard simplex, as depicted in Fig. 1. Each node is a face of the standard simplex, described by a bit string b_k; if the ith position is 1, then vertex e_i is included in the face. We introduce the ordered index set I_k as the ordered set of indices of the variables in face F_k. Nodes on level ℓ are ℓ-faces, so the standard simplex is the root, and the vertices e_i are on the lowest level. Level ℓ has C(n, ℓ) faces. The node set F := {1, . . . , 2^n − 1} contains all face numbers. An edge (m, k) implies that either F_m is a facet of F_k or vice versa. This means that bit string b_k differs in one bit from b_m and, consequently, I_k has one index more than I_m or the other way around.
In this context, the procedure of Scozzari and Tardella (2008) first marks the nodes on level ℓ = 2 (edges of the standard simplex) as strictly convex and goes up in the graph to identify faces, on a level as high as possible, with all edges convex, in order to find interior minima with a value as low as possible. For faces F_m on a lower level, we have that F_m ⊂ F_k implies min_{x∈F_m} f(x) ≥ min_{y∈F_k} f(y). Now, having f convex on all edges is a necessary but not sufficient condition for f to be convex on the face. Therefore, Scozzari and Tardella (2008) test whether the matrix A_k related to the elements of face F_k is positive definite (PD). We will show that this is a sufficient, but not necessary, condition for f to be convex on F_k.
Our main research question is how we can traverse the face graph, identifying those faces where f is strictly convex on the relative interior, and evaluate the corresponding minima in order to find points with a negative objective function value or to prove that A is copositive. To report on the findings of our investigation of the research questions, our paper is organized as follows. Section 2 discusses the mathematical properties relevant for the algorithm development regarding monotonicity and first and second order conditions. Section 3 sketches the traversal variants of a face-based algorithm. Section 4 uses several benchmark instances to investigate numerically which graph traversal is more effective for which type of matrices. The main conclusions and future research questions are described in Sect. 5.

Fig. 1 Face graph of the standard simplex for n = 5, where faces F_k are numbered by index k and bit string b_k indicates which elements are considered positive and which zero

Properties of a StQP

In our notation, we use as identity matrix in dimension n the symbol I_n := (e_1, . . . , e_n), and 1 represents the all-ones vector of appropriate dimension. Moreover, we use D_n := I_n − (1/n)11^T = (d_1, . . . , d_n) as the projection matrix onto the zero-sum plane P := {x ∈ R^n | 1^T x = 0}. It is useful to consider the matrix A_k in order to evaluate f := x^T Ax on face F_k.
Definition 1 Given a symmetric n × n matrix A and a binary vector b_k with corresponding index set I_k, A_k is the sub-matrix of A with the rows and columns that correspond to indices in I_k. For a face F_k at level ℓ, A_k is an ℓ × ℓ matrix.
Note that min_{x∈F_k} x^T Ax is equivalent to min_{x∈Δ_ℓ} x^T A_k x.
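The bit-string encoding of faces and the extraction of A_k can be sketched as follows; the convention that the leftmost bit of b_k flags e_1 follows Fig. 1, and numpy is assumed.

```python
import numpy as np

def face_indices(k, n):
    """Index set I_k of face F_k: the bit string b_k of k, leftmost bit = e_1."""
    bits = format(k, f'0{n}b')
    return [i for i in range(n) if bits[i] == '1']

def face_submatrix(A, k):
    """Sub-matrix A_k of Definition 1: rows and columns of A selected by I_k."""
    idx = face_indices(k, A.shape[0])
    return A[np.ix_(idx, idx)]

A = np.arange(25.0).reshape(5, 5)
A = (A + A.T) / 2
assert face_indices(16, 5) == [0]            # b_16 = 10000: the vertex e_1
assert face_indices(0b10110, 5) == [0, 2, 3] # a face at level 3
assert face_submatrix(A, 0b10110).shape == (3, 3)
```

Minimizing x^T A_k x over Δ_3 then matches minimizing x^T A x over that face, with zeros filled in for the omitted coordinates.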

Optimality conditions
The first order conditions (KKT conditions) for a local minimum of StQP (5) are used in the studies of Gondzio and Yildirim (2018), Liuzzi et al. (2019), Salmerón et al. (2018) and Scozzari and Tardella (2008). For a local minimum point x ≥ 0 of (5), there exist values of the dual variables μ and λ_i ≥ 0 such that

Ax = μ1 + λ, λ_i ≥ 0, x_i λ_i = 0, i = 1, . . . , n,    (6)

absorbing the factor 2 of the gradient of f into the dual variables. The expression x_i λ_i = 0 is called complementarity and is closely related to the question on which face the minimum point can be found. Gondzio and Yildirim (2018) show that the StQP (5) can be reformulated as a mixed integer program based on these conditions. It is known that the minimum of an indefinite quadratic function can be found at the boundary of the feasible set. Basically, the feasible set Δ_n of (5) does not have an interior. Consider the relative interior rint(Δ_n) of Δ_n,

rint(Δ_n) := {x ∈ Δ_n | x_i > 0, i = 1, . . . , n}.

When we know that a face F_k has a relative interior minimum point y^*, it is given by the solution of the linear system

A_k y = μ1, 1^T y = 1,    (8)

where y^* is mostly in a space of dimension lower than n. Translation of the solution to n-dimensional space requires adding zeros at the positions i ∉ I_k. So, either the global minimum point of the StQP can be found in one of the vertices of the standard simplex (a unit vector e_i), or at the relative interior of one of the other faces. To have a relative interior optimum on F_k, f should at least be convex on F_k. Scozzari and Tardella (2008) characterize this by looking for faces where A_k is positive definite (PD). However, this is not a necessary condition: there exist matrices A that define a function f which is strictly convex on Δ_3 although A is not PD. Consider the matrix H := D_n A D_n. This matrix defines the convexity on the standard simplex.
Proposition 1 If the matrix H := D_n A D_n is positive semidefinite (PSD), then the function f := x^T Ax is convex on Δ_n.
Proof Let x, y ∈ Δ_n. Notice that (y − x) ∈ P, such that ∃r ∈ R^n, y − x = D_n r. Then for 0 ≤ λ ≤ 1,

f(λx + (1 − λ)y) = λf(x) + (1 − λ)f(y) − λ(1 − λ)(y − x)^T A(y − x)
                 = λf(x) + (1 − λ)f(y) − λ(1 − λ) r^T H r.

This means that for any x, y ∈ Δ_n and 0 ≤ λ ≤ 1, f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y), i.e. f is convex on Δ_n. □

The other way around, to have a relative interior optimum, H should be PSD.

Proposition 2 If ∃x^* ∈ argmin_{x∈Δ_n} f(x) ∩ rint(Δ_n), then H := D_n A D_n is positive semidefinite.
Proof Following the KKT conditions (6), only the constraint 1^T x = 1 is binding, i.e. there exists a value μ such that Ax^* = μ1. As D_n = I_n − (1/n)11^T, we obtain D_n A x^* = μ D_n 1 = 0. Considering a feasible perturbation x^* + D_n r with r in a small ball B(0, δ),

f(x^* + D_n r) = f(x^*) + 2r^T D_n A x^* + r^T D_n A D_n r = f(x^*) + r^T H r ≥ f(x^*),

as x^* is a minimum point. As D_n A x^* = 0, we have that ∀r ∈ B(0, δ), r^T H r ≥ 0, so H is a positive semidefinite matrix. □
With respect to strict convexity, notice that H has a zero eigenvalue corresponding to the direction 1, as D_n 1 = 0. Basically, this means that all other eigenvalues of H should be positive. Our implementations and Scozzari and Tardella (2008) use Cholesky decomposition routines to test whether A_k or H_k is PD or PSD. Proposition 2 can be extended to any face F_k if we consider the standard simplex Δ_ℓ in the dimension ℓ equal to the number of positive elements of face F_k.
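The PSD test of H = D_n A D_n can be sketched as below. Since H always has the eigenvalue 0 in direction 1, a strict Cholesky factorization would fail even in the convex case; this illustrative Python version therefore checks the smallest eigenvalue against a tolerance instead of the compiled Cholesky routines used in the paper's Matlab implementation.

```python
import numpy as np

def projection_matrix(n):
    """D_n = I_n - (1/n) 11^T, the projector onto the zero-sum plane P."""
    return np.eye(n) - np.ones((n, n)) / n

def is_convex_on_simplex(A, tol=1e-10):
    """Test whether f(x) = x^T A x is convex on the simplex via H = D_n A D_n.

    H always has eigenvalue 0 in direction 1 (D_n 1 = 0), so the smallest
    eigenvalue is compared with -tol rather than running a strict Cholesky.
    """
    n = A.shape[0]
    D = projection_matrix(n)
    H = D @ A @ D
    return bool(np.linalg.eigvalsh(H).min() >= -tol)

# The identity matrix is PD, hence f is convex on the simplex; -I is not.
assert is_convex_on_simplex(np.eye(3))
assert not is_convex_on_simplex(-np.eye(3))
```

In a production code one would instead attempt a Cholesky factorization of H restricted to the zero-sum plane (or of H plus a tiny diagonal shift), which is cheaper than a full eigendecomposition.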

Corollary 1
If ∃x^* ∈ argmin_{x∈F_k} f(x) ∩ rint(F_k), then also ∃y^* ∈ rint(Δ_ℓ) such that D_ℓ A_k y^* = 0 and H_k := D_ℓ A_k D_ℓ is positive semidefinite.

The convexity graph
The analysis of Scozzari and Tardella (2008) focuses on convexity over the edges of a face. The function f over edge (e_i, e_j) can be written as

f(e_i + ρ(e_j − e_i)) = (1 − ρ)^2 A_ii + 2ρ(1 − ρ) A_ij + ρ^2 A_jj, ρ ∈ [0, 1].    (12)

The function f is strictly convex over edge (e_i, e_j) of Δ_n if

A_ii − 2A_ij + A_jj > 0.    (13)

In that case, a relative minimum point of f over the line e_i + ρ(e_j − e_i) is given by

ρ^* = (A_ii − A_ij) / (A_ii − 2A_ij + A_jj).    (14)

If 0 < ρ^* < 1, then we have an interior minimum over edge (e_i, e_j) with function value

(A_ii A_jj − A_ij^2) / (A_ii − 2A_ij + A_jj).    (15)

Definition 2 The convexity graph G := (N, C) of a matrix A is a graph with node set N := {1, . . . , n} having an edge (i, j) ∈ C if (13) holds, i.e. f is strictly convex over the edge between e_i and e_j (Scozzari and Tardella 2008).
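Conditions (13)-(15) translate directly into code; a minimal Python sketch (numpy assumed):

```python
import numpy as np

def convex_edges(A):
    """Edges (i, j) of the convexity graph: A_ii - 2 A_ij + A_jj > 0, cf. (13)."""
    n = A.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if A[i, i] - 2 * A[i, j] + A[j, j] > 0]

def edge_minimum(A, i, j):
    """Interior minimum of f over edge (e_i, e_j) if 0 < rho* < 1, cf. (14)-(15)."""
    c = A[i, i] - 2 * A[i, j] + A[j, j]
    rho = (A[i, i] - A[i, j]) / c
    if 0 < rho < 1:
        return rho, (A[i, i] * A[j, j] - A[i, j] ** 2) / c
    return None                              # minimum at an endpoint

A = np.array([[1.0, -1.0], [-1.0, 2.0]])
assert convex_edges(A) == [(0, 1)]
rho, val = edge_minimum(A, 0, 1)
# c = 5, rho* = 2/5, value = (1*2 - 1)/5 = 1/5
assert abs(rho - 0.4) < 1e-12 and abs(val - 0.2) < 1e-12
```

A negative edge minimum already certifies non-copositivity, which is why all three traversal variants below evaluate the edge minima in their preamble.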
Actually, a node can be removed from G if it is not incident to any edge on which f is convex. The graph G_k is defined similarly to the matrix A_k. A necessary but not sufficient condition for f to be convex on F_k is that the graph G_k is complete. Due to the focus of Scozzari and Tardella (2008), we illustrate that the condition is not sufficient: there exist matrices for which the corresponding convexity graph G_k is complete, while f is not convex on F_k and neither A_k nor H_k is PSD.
Looking for a face F_k with an interior optimum of the StQP means looking for faces that correspond to a complete G_k on a level as high as possible. This consideration brings up some typical properties. We did not focus on sparse matrices in our study. A counter-intuitive property is that A_ij = 0 corresponds to a convex edge whenever A_ii + A_jj > 0. Hence, when A is sparse, the corresponding convexity graph will be quite dense.
The necessary condition of convex edges for an interior optimum also implies that an edge where f is strictly concave cannot belong to a face with a relative interior optimum. The edge (e_i, e_j) apparently cannot be a subset of F_k, so either x^*_i = 0 or x^*_j = 0. An anonymous referee drew our attention to the following consequence, which may be used in algorithm development. Consider the matrix Ã, where in A we replace each entry A_ij corresponding to a strictly concave edge by

Ã_ij := (A_ii + A_jj)/2.    (16)

The corresponding function f̃ := x^T Ã x enhances a linearization of f over the concave edges. Following the same reasoning that a relative interior optimum on face F_k requires f to be strictly convex on its edges, we have the following property.

Corollary 2 The minimum of f̃ over Δ_n equals that of f; in particular, A is copositive if and only if Ã is copositive.
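A sketch of the concave-edge replacement follows; the explicit value Ã_ij := (A_ii + A_jj)/2 makes the left-hand side of (13) vanish on the edge, i.e. f̃ becomes affine along it. This value is our reading of the linearization and should be treated as an assumption.

```python
import numpy as np

def linearized_matrix(A):
    """Replace entries on strictly concave edges (A_ii - 2 A_ij + A_jj < 0)
    by (A_ii + A_jj) / 2, which zeroes condition (13) on that edge."""
    At = A.copy()
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, i] - 2 * A[i, j] + A[j, j] < 0:
                At[i, j] = At[j, i] = (A[i, i] + A[j, j]) / 2
    return At

A = np.array([[1.0, 5.0], [5.0, 1.0]])   # the edge is strictly concave: 1 - 10 + 1 < 0
At = linearized_matrix(A)
assert At[0, 1] == 1.0                   # (1 + 1) / 2
assert At[0, 0] - 2 * At[0, 1] + At[1, 1] == 0.0
```

Since Ã_ij ≤ A_ij on concave edges and x_i x_j ≥ 0 on the simplex, f̃ ≤ f holds on Δ_n, which is why copositivity of Ã implies copositivity of A.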

Monotonicity considerations
In simplicial B&B methods like that of Bundfuss and Dür (2008), the standard simplex is refined into simplicial subsets S. Monotonicity arguments allow reducing the search over such a subset S to one of its facets.

Proposition 4 Consider a simplex S with vertices v_1, . . . , v_ℓ, and let Φ_i denote the facet of S opposite vertex v_i. If there exists y ∈ Φ_i such that (y − v_i)^T A v_j ≤ 0 for all j = 1, . . . , ℓ, then min_{x∈S} f(x) = min_{x∈Φ_i} f(x).
Proof Consider the matrix V with columns that correspond to the vertices of S. The point y ∈ Φ_i can be written as y = Vλ for a vector λ ∈ Δ_ℓ with λ_i = 0. Assume the minimum is attained at x^* = Vμ^* with μ^*_i > 0. The directional derivative of f at any x = Vμ ∈ S in the direction y − v_i equals 2(y − v_i)^T A Vμ = 2 Σ_j μ_j (y − v_i)^T A v_j ≤ 0. So f does not increase when moving from x^* in the direction y − v_i; doing so until the coefficient of v_i vanishes yields a point of Φ_i with the same minimum value. □

The importance of this theoretical result is that, for the StQP (5) and the proof of copositivity, one can reduce the search in a B&B algorithm like that of Bundfuss and Dür (2008) to a facet of the simplicial subset S.

Corollary 3 Consider a simplex S with vertices v_1, . . . , v_ℓ as in Proposition 4. If (v_p − v_i)^T A v_j ≤ 0 for all j = 1, . . . , ℓ, then min_{x∈S} f(x) = min_{x∈Φ_i} f(x).
Proof Follows directly from Proposition 4, taking y as vertex v_p ∈ Φ_i. □

For a face-based algorithm, the analysis is easier, as it says that we can drop vertex e_i from the face F_k we are investigating. The condition of Corollary 3, applied to F_k, reads

a_kp ≤ a_ki,    (17)

where a_ki and a_kp are columns i and p of A_k, respectively. Actually, monotonicity considerations in face-based algorithms are only relevant when searching the face graph of Fig. 1 top-down, as they specify which facets on the lower level can contain a minimum point, eliminating in a B&B way the search on other faces.
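Condition (17) amounts to componentwise dominance between two columns of A_k. The extraction of the original condition is incomplete, so the dominance test below is our reading and should be treated accordingly.

```python
import numpy as np

def monotone_drop_vertex(Ak):
    """Return a vertex index i of the face that can be dropped by monotonicity:
    if a_kp <= a_ki componentwise for some p != i (our reading of (17)),
    then f does not increase when moving weight from e_i towards e_p,
    so the minimum lies on the facet without e_i. Returns None otherwise."""
    l = Ak.shape[0]
    for i in range(l):
        for p in range(l):
            if p != i and np.all(Ak[:, p] <= Ak[:, i]):
                return i
    return None

# column 1 = (1, 1) dominates column 0 = (2, 1): vertex 0 can be dropped
assert monotone_drop_vertex(np.array([[2.0, 1.0], [1.0, 1.0]])) == 0
# no componentwise dominance: no vertex can be dropped
assert monotone_drop_vertex(np.array([[2.0, 1.0], [1.0, 3.0]])) is None
```

Note that (e_p − e_i)^T A_k e_j = (a_kp − a_ki)_j, so the dominance test is exactly the vertex-wise condition of Corollary 3 for unit-vector vertices.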

Algorithms
The authors of Salmerón (2019) report on the consequences of including monotonicity in the B&B algorithm of Bundfuss and Dür (2008). In this paper, we develop three traversal variants of the face graph of Δ_n. The algorithms only evaluate the vertices, i.e. the diagonal elements of A, the centroid, and proven interior minimum points of faces F_k.
1. TDk (Alg. 1) Traverses the faces in Decreasing order of k and checks whether they should be investigated. This requires, at least, storing for each face a marker indicating whether the face, and sometimes all its sub-faces, need not be checked.
2. TDown (Alg. 2) tries to avoid storage by checking the faces in a list for each level and Traverses the face graph Downwards level-wise.
3. TUp (Alg. 3) follows the line of the algorithm of Scozzari and Tardella (2008), Traversing Upwards through the face graph. In contrast, however, it also works level-wise, as negative points may be found in local minima on lower levels.
In each algorithm, we first check sufficient conditions for A to be copositive: if A is entry-wise nonnegative or A is PSD, then A is copositive. Then the following actions are taken:
- Create the convexity graph G, checking strict convexity over the edges via (13), meanwhile calculating their minima given by (15) (this corresponds to ℓ = 2).

TDk
The TDk version in face-based Algorithm 1 goes over the faces of the graph from the highest index value k downward until the list has been checked or a negative point has been found. It has the similarity with B&B approaches that, on a higher level, we hope to exclude evaluation on lower levels in the graph. Therefore, it uses a global indicator list Tag_k:
- Tag_k = 0: face F_k has not been checked,
- Tag_k = 1: face F_k need not be checked, and
- Tag_k = 2: no need to check face F_k nor any of its sub-faces F_m ⊂ F_k.
When f is shown to be monotone, we can tag some of the facets as checked. As all sub-matrices A_m of a copositive matrix A_k are also copositive, we do not have to evaluate the sub-faces F_m in the face graph. This means that, as in B&B, the efficiency of the algorithm depends on the level of the graph at which we detect a face F_k where A_k is copositive. The higher in the graph, the more nodes need not be evaluated.
Computationally, Algorithm 1 has the advantage that we only have to store one byte Tag_k for each face. However, from a complexity point of view, the number of faces increases exponentially in the dimension n. Moreover, one face usually has several parents in the face graph, such that it may be visited or marked several times during the algorithm.
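A stripped-down sketch may clarify the tag bookkeeping of the decreasing-k traversal. The callback `is_negative_interior_min` is hypothetical and stands in for the monotonicity, convexity, PSD and KKT machinery of Algorithm 1, which is omitted here.

```python
import numpy as np

def tdk_sketch(A, is_negative_interior_min):
    """Simplified TDk skeleton: walk faces k = 2^n - 1 down to 3 with
    Tag[k] in {0: unchecked, 2: face and sub-faces need no check}."""
    n = A.shape[0]
    tag = {}
    for k in range(2 ** n - 1, 2, -1):
        if tag.get(k, 0) != 0:
            continue                           # already excluded
        bits = format(k, f'0{n}b')
        idx = [i for i in range(n) if bits[i] == '1']
        Ak = A[np.ix_(idx, idx)]
        if np.all(Ak >= 0):                    # A_k copositive: skip all sub-faces
            for m in range(1, k):
                if (m | k) == k:               # F_m is a sub-face of F_k
                    tag[m] = 2
            continue
        if is_negative_interior_min(Ak):
            return False                       # negative point found
    return True                                # report copositive

# A nonnegative matrix is excluded at the top face already.
assert tdk_sketch(np.ones((3, 3)), lambda Ak: False) is True
```

The subset test `(m | k) == k` works for any bit ordering, and the dictionary-based tags avoid allocating the full 2^n marker array for this toy illustration.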

TDown
The idea of Algorithm 2 is to inspect only the interesting faces of each level of the face graph; at a level, only faces that may still contain negative points of f are evaluated. Two lists of candidate faces are active, L_ℓ and L_{ℓ−1}, where only the numbers of the faces are stored. In the pseudo-code, removal or insertion of a face F_k in a list means the removal or insertion of the number k.

Algorithm 1 Copositivity detection, traversing in decreasing k order (TDk)
Require: A: symmetric n × n matrix, n ≥ 3.
1: if A is entry-wise nonnegative or A is PSD then
2:   return A is copositive
3: Evaluate A_ii, centroid and edge minima where f is convex via (15), generating G
4: if negative point found then
5:   return A is not copositive
6: Tag_k := 0 for faces k not checked
7: k := 2^n − 1
8: while k > 2 do
9:   if Tag_k = 0 then
10:    if A_k ≥ 0 then
11:      Set Tag_m := 2 for all sub-faces F_m ⊂ F_k
12:    else if ∃i for which (17) holds, the minimum is on the facet without e_{I_k(i)} then
13:      Set Tag_m := 1 for the facets F_m ⊂ F_k containing e_{I_k(i)}
14:    else if G_k is complete and H_k is PSD then
15:      Solve (8) and evaluate the interior minimum point
16:      if negative point found then
17:        return A is not copositive
18:      else
19:        Set Tag_m := 2 for all sub-faces F_m ⊂ F_k
20:  k := k − 1
21: return A is copositive
Moreover, it maintains one global list R, which stores the numbers k of faces with a nonnegative interior minimum or with a completely nonnegative matrix A_k. On each level, it keeps a list N of facets of the faces that cannot have a minimum due to monotonicity. Let us note that sub-faces of the faces in N are still considered, as the optimum of f on a monotone face is on the boundary. This is different for the faces in R; none of the sub-faces of F_k, k ∈ R, can contain negative minima, so all its sub-faces can be dropped. In this way, no sub-face of F_k, k ∈ R, nor facet F_k, k ∈ N, has to be included in the next-level list L_{ℓ−1}. In the worst case, list L_ℓ may still be huge with increasing dimension n. Most troublesome, however, is the list management overhead.

TUp
Algorithm 3, TUp, follows the upward search of the algorithm of Scozzari and Tardella (2008) in its search for the minimum of StQP (5) through the levels of the face graph. Their terminology is to look for maximum cliques in the convexity graph, i.e. G_k complete on a level as high as possible. Finding such a face F_k still requires checking whether H_k is PSD in order to have a possible interior optimum on face F_k. The algorithm stops if it finds a negative interior optimum, or alternatively has found the highest level (maximum clique) corresponding to the convexity graph where the minimum is nonnegative.

Algorithm 2 Copositivity detection top-down in the face graph (TDown)
Require: A: symmetric n × n matrix.
1: if A is entry-wise nonnegative or A is PSD then
2:   return A is copositive
3: Evaluate A_ii, centroid and edge minima where f is convex via (15), generating G
4: if negative point found then
5:   return A is not copositive
6: L_n := {2^n − 1}, R := ∅, ℓ := n
7: while ℓ > 2 do
8:   L_{ℓ−1} := ∅, N := ∅
9:   while L_ℓ ≠ ∅ do
10:    Retrieve k from L_ℓ
11:    if A_k ≥ 0 then
12:      Save k in R
13:    else if ∃i for which (17) holds, the minimum is on the facet without e_{I_k(i)} then
14:      Save the facets F_m ⊂ F_k containing e_{I_k(i)} in N
15:    else if G_k is complete then
16:      if H_k is PSD then
17:        Solve (8) and evaluate the interior minimum point
18:        if negative point found then
19:          return A is not copositive
20:        else
21:          Save k in R
22:    if k ∉ R then
23:      Save all facets F_m : F_m ⊂ F_k in L_{ℓ−1}
24:  Eliminate from L_{ℓ−1} all sub-faces F_m : F_m ⊆ F_k, k ∈ R
25:  Eliminate from L_{ℓ−1} all facets F_m ∈ N
26:  ℓ := ℓ − 1
27: return A is copositive
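The clique-extension core of the bottom-up search can be sketched as follows: start from the convex edges and extend a face by one vertex only while its convexity graph stays complete. The PSD checks and interior-point evaluations are omitted; this only illustrates the level-wise growth of the candidate lists.

```python
import numpy as np
from itertools import combinations

def complete_in_convexity_graph(A, idx):
    """All pairs of idx must satisfy condition (13) for G_k to be complete."""
    return all(A[i, i] - 2 * A[i, j] + A[j, j] > 0
               for i, j in combinations(idx, 2))

def tup_levels(A):
    """Level structure of the upward search: faces (as index sets) per level,
    starting from the convex edges and extending by one vertex at a time."""
    n = A.shape[0]
    level = {frozenset(e) for e in combinations(range(n), 2)
             if complete_in_convexity_graph(A, list(e))}
    levels = [level]
    while level:
        nxt = {f | {j} for f in level for j in range(n)
               if j not in f and complete_in_convexity_graph(A, sorted(f | {j}))}
        levels.append(nxt)
        level = nxt
    return levels

# For the identity matrix every edge is strictly convex, so every face survives.
lv = tup_levels(np.eye(4))
assert len(lv[0]) == 6                       # all C(4, 2) edges
assert frozenset({0, 1, 2, 3}) in lv[2]      # the full simplex is reached
```

Using sets of frozensets avoids the duplicate generation of the same face from different parents, which corresponds to the membership test k ∉ L_{ℓ+1} in the pseudo-code.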

Numerical investigation
The most appropriate way to traverse the face graph for copositivity testing depends on the convexity graph of the instance under consideration. On the one hand, there is the density of the convexity graph, i.e. on how many edges f is strictly convex, and on the other hand the level at which the minimum of the StQP can be found in case the matrix is copositive. These two numbers are related. We first illustrate the behavior with cases from the literature based on the maximum clique problem. Then we vary more systematically the level at which the minimum can be found and the density of the convexity graph. The instances can be found in the appendices.
Computer time is relative to the computational platform used. The algorithms were implemented in Matlab 2016b, using routines to run standard Cholesky decomposition and to solve the linear set of equations (8). They were run on an i5 CPU desktop computer.

Algorithm 3 Copositivity detection searching bottom-up (TUp)
Require: A: symmetric n × n matrix.
1: if A is entry-wise nonnegative or A is PSD then
2:   return A is copositive
3: Evaluate A_ii, centroid and edge minima where f is convex via (15), generating G
4: if negative point found then
5:   return A is not copositive
6: ℓ := 2 and L_2 includes all edges where f is strictly convex
7: while ℓ < n − 1 do
8:   L_{ℓ+1} := ∅
9:   while L_ℓ ≠ ∅ do
10:    Retrieve m from L_ℓ
11:    for each vertex e_j ∉ F_m do
12:      Add vertex e_j to F_m, generating F_k
13:      if k ∉ L_{ℓ+1} and G_k is complete then
14:        if not A_k ≥ 0 then
15:          if H_k is PSD then
16:            Solve (8) and evaluate the interior minimum point
17:            if negative point found then
18:              return A is not copositive
19:        Add k to L_{ℓ+1}
20:  ℓ := ℓ + 1
21: return A is copositive

A first example is due to Hall and Newman (1963) and is called the Horn matrix. It is copositive in dimension n = 5; the corresponding face graph is depicted in Fig. 1. The traversal variants show varying behavior. TDk visits all faces in 0.01 s, detecting monotonicity, and does not require the PSD check of Cholesky. TDown visits all faces by level in 0.02 s. TUp detects in 0.05 s that none of the faces on level ℓ = 3 corresponds to a complete graph G_k (clique), such that the edge minima of 0 determine the global minimum. No face on a higher level is investigated.
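The Horn matrix example can be reproduced numerically. The sketch below checks that the matrix is neither PSD nor entry-wise nonnegative (so the preamble tests do not decide), and that every strictly convex edge has interior minimum value 0, consistent with the observation that the edge minima of 0 determine the global minimum.

```python
import numpy as np

# Horn matrix (Hall and Newman 1963): copositive, but neither PSD
# nor entry-wise nonnegative, the classic hard case in dimension n = 5.
Horn = np.array([[ 1, -1,  1,  1, -1],
                 [-1,  1, -1,  1,  1],
                 [ 1, -1,  1, -1,  1],
                 [ 1,  1, -1,  1, -1],
                 [-1,  1,  1, -1,  1]], dtype=float)

assert np.linalg.eigvalsh(Horn).min() < 0    # not PSD
assert (Horn < 0).any()                      # not entry-wise nonnegative

# Each strictly convex edge ((13) > 0 requires Horn_ij = -1, giving c = 4)
# has interior minimum value (1*1 - 1)/4 = 0, cf. (15).
for i in range(5):
    for j in range(i + 1, 5):
        c = Horn[i, i] - 2 * Horn[i, j] + Horn[j, j]
        if c > 0:
            assert (Horn[i, i] * Horn[j, j] - Horn[i, j] ** 2) / c == 0.0
```

This is why no negative point exists although neither sufficient copositivity test applies: the global minimum of the StQP for the Horn matrix is exactly 0.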

Measuring performance of face graph traversal on max-clique instances
Part of the benchmark cases in the literature are based on the maximum clique problem, according to the following relation. Let ω be the clique number of a graph defined by its adjacency matrix A_G. One way to find it is to determine the minimum integer value t such that (t − 1)11^T − t A_G is copositive, i.e. the clique number is

ω = min{t ∈ N | (t − 1)11^T − t A_G is copositive}.

Despite the fact that there are better ways to determine the clique number, we used instances from the maximum clique DIMACS challenge (http://archive.dimacs.rutgers.edu/Challenges/) of dimensions n = 14, 16 and 28. Instance 1tc.16.clique (n = 16) was converted to a clique instance from Challenge Problems (https://oeis.org/A265032/a265032.html). Notice that this type of instance does not exhibit edges where f is strictly concave, as for (13) we have that

A_ii − 2A_ij + A_jj = 2t (A_G)_ij ∈ {0, 2t}.

This also implies that the matrix Ã discussed in Corollary 2 is Ã = A. The instances are characterized by:
- n: dimension
- t: parameter for the clique number ω
- d%: density of the convexity graph, measured as the percentage of edges (excluding diagonal elements of C) on which f is strictly convex
- ℓ^*: level at which the minimum point of the StQP can be found, or a negative point is found

Running the algorithms, we measure the following indicators:
- #Eval: number of evaluated faces
- #Mon: number of times f is monotone on a face
- #An: number of times the evaluated A_k was completely nonnegative
- #PSD: number of times the PSD evaluation of H_k was performed using Cholesky decomposition
- #f: number of function evaluations of an interior optimum
- T: running time of the algorithm
- Cpos: copositivity has been proven.
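The relation between copositivity of (t − 1)11^T − t A_G and the clique number can be probed on a small example: at the barycenter x of a maximum clique, x^T((t − 1)11^T − t A_G)x = t/ω − 1, which is negative exactly for t < ω and thus witnesses non-copositivity. A minimal Python sketch (brute-force clique search, for illustration only):

```python
import numpy as np
from itertools import combinations

def clique_number(Ag):
    """Brute-force clique number of the graph with adjacency matrix Ag."""
    n = Ag.shape[0]
    for size in range(n, 0, -1):
        for S in combinations(range(n), size):
            if all(Ag[i, j] for i, j in combinations(S, 2)):
                return size
    return 0

# 5-cycle: clique number 2
Ag = np.zeros((5, 5))
for i in range(5):
    Ag[i, (i + 1) % 5] = Ag[(i + 1) % 5, i] = 1
w = clique_number(Ag)
assert w == 2

# barycenter of a maximum clique of the 5-cycle
x = np.zeros(5); x[[0, 1]] = 1 / w
for t in (1, 2, 3):
    A = (t - 1) * np.ones((5, 5)) - t * Ag
    assert abs(x @ A @ x - (t / w - 1)) < 1e-12   # t/w - 1 < 0 iff t < w
```

The identity follows from x^T 11^T x = 1 and x^T A_G x = (ω − 1)/ω at the clique barycenter, which is the Motzkin-Straus argument behind the relation.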
Algorithm TDk runs over the list of 2^n − 1 faces and marks them with respect to monotonicity detection and the existence of higher-level faces that may be completely nonnegative or have a nonnegative relative interior optimum. In Table 1, we can observe that each time the PSD status was checked with Cholesky and H_k appeared PSD, the solution of (8) resulted in an interior point that was evaluated. Therefore, we leave out this column for the same instances in the other face-based algorithm results.
The largest instance to solve is Johnson8-2-4 (http://archive.dimacs.rutgers.edu/Challenges/), with n = 28 and maximum clique number ω = 4. Computationally, even if each marker Tag_k only requires one byte, the algorithm requires 2^28 bytes, i.e. 0.25 GB, just to store the markers. For t = 3, A is not copositive and the TDk algorithm requires 3 h 11 min to find an interior negative minimum at level ℓ = 4. For t = 4, the matrix is copositive and the Matlab implementation of the algorithm requires 6 h to run over the complete list.
The idea of Algorithm TDown is to use the sets R and N so as not to store information on all faces. Theoretically, this is an elegant idea, and the monotonicity is passed on to the next levels in a more systematic way. However, computationally the algorithm may get stuck if the list L_ℓ gets larger. Table 2 shows this effect for the largest instance, where suddenly the algorithm is not successful anymore; it loses a lot of time in managing the lists. One should also take into account that level ℓ = 14 alone contains C(28, 14) = 40,116,600 faces. Therefore, the time required by TDown is practically larger than that of algorithm TDk for the largest instance, whereas for the instances up to n = 16 it is the fastest of the three traversal variants, due to the efficient use of the monotonicity information.

Algorithm TUp works upwards. As all instances have relative interior optima on a relatively low level, the number of faces to be checked is very low. Table 3 shows that, for all measured instances, the algorithm requires less than 2 seconds. It surprised the authors that the TDown implementation walking down from n = 14, 16 could be faster than the TUp implementation. The latter requires more PSD tests, but these use a compiled Cholesky routine, which is faster than the monotonicity test based on a Matlab script. Apparently, one can reduce the number of facets to be evaluated either by proving convexity over the facet, or by monotonicity from the top down. The implementation of TDown gets stuck in handling the large list L_ℓ on each level.

Scozzari and Tardella (2008) report that random matrices for n = 10, 30, 50, 100, 200, 500, 1000 and 1500, with a control on the density of the convexity graph of 0.25, 0.5 and 0.75, can be generated according to a description in Bomze and De Klerk (2002) and Nowak (1999). Their findings report that one can solve problems with a density of 0.25 up to dimension n = 500, a density of 0.5 up to n = 200, and a density of 0.75 up to n = 100.
Intuitively, this suggests that the bottom-up approach is more appropriate for low-density convexity graphs and the top-down approach for cases where the convexity graph is dense.

Face graph traversal on instances with a varying density convexity graph
To measure this effect, we generated instances following Nowak (1999), varying the dimension and the convexity graph density; they can be found in "Appendix B". First of all, they are all copositive, so the algorithms solve the StQP. What we can directly observe from the instances is that the matrices contain more variation in the numbers, implying a higher condition number. This provides the phenomenon that now the matrix H_k is not necessarily PSD when the graph G_k is complete, and the computed KKT point is not necessarily interior to a face, so evaluation is not always necessary. The occurrence of monotonicity is far bigger for these random instances than for the max-clique instances, leading to less computation time (fewer faces are evaluated) for the same dimension. For these instances, we also evaluated the effect of using Corollary 2 by changing A_ij into Ã_ij when f is concave over edge (e_i, e_j). This appeared not to make any difference for the matrices with the highest density convexity graph, as concavity hardly occurs. However, for lower density, using Ã instead of A appears to increase the effectiveness of the monotonicity check drastically.
The top-down traversal algorithms profit from an increasing density. They have to check the PSD status of H_k more often with higher density, as can be observed from Table 4. However, they make use of monotonicity in order not to check the complete face graph. What is typical for these instances is that the level-wise traversal of TDown, which looked hopeless for the largest DIMACS instance due to list management, is now faster, because it performs a more systematic elimination of monotone faces. Basically, the number of checked faces decreases drastically. Checking the faces in index order (TDk) from top to bottom leads to less efficiency in concluding on the monotonicity than the TDown traversal. Moreover, as can be observed, this is helped by using matrix Ã instead of A.
For the bottom-up traversal TUp, the work starts to become harder with increasing density, as more faces are kept on the list because their convexity graph G_k is complete. In computing time, the bottom-up approach gets slower than the top-down traversal variants for the denser instances. For the instances where the optimum is at a relatively low level, the TDown traversal is harder and costs more time. Moreover, the computing time depends a lot on how well the management of the lists is organized. We can observe that TDown, and also TUp, require far more time if the number of faces to be evaluated increases.

Conclusions
This paper derives several properties on the monotonicity and convexity of the standard quadratic function f over faces and subsets of the standard simplex. It illustrates that monotonicity can be applied in face-based copositivity detection algorithms that traverse the face graph of the standard simplex top-down. We found that randomly generated instances provide a different characteristic in terms of monotonicity and convexity than maximum clique based instances. The latter have a lot of symmetry and no edges on which f is concave. We show that the success of a top-down or bottom-up traversal of the face graph depends not only on the density of the convexity graph, but also on the level at which strictly convex faces can be discovered, and on how well the list management overhead can be reduced by efficient implementations. A level-wise implementation looked hopeless for the larger, symmetric, maximum clique based instances due to list management overhead, but appeared a very systematic and efficient approach for the randomly generated instances. A transformation of the matrix of the StQP towards a linearization over edges on which f is concave helped a lot in the monotonicity tests and reduced the number of faces to be investigated and the total computational time.
The implementations of the algorithms used for the illustration are based on easily available matrix subroutines, such as the Cholesky decomposition, but do not exploit sophisticated list management. The monotonicity considerations reduce the number of Cholesky calls drastically. From a computer science perspective, there are still ample opportunities to improve the implementation of the algorithms to obtain a computing time comparable to earlier published results. From this perspective, we are looking into the parallelization of the face-based algorithms.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A Matrix instances from literature
Example 1 Horn matrix, n = 5 from Hall and Newman (1963).