Semi-definite programming relaxation of quadratic assignment problems based on nonredundant matrix splitting

Computational Optimization and Applications

Abstract

Quadratic assignment problems (QAPs) are known to be among the most challenging discrete optimization problems. Recently, a new class of semi-definite relaxation models for QAPs based on matrix splitting has been proposed (Mittelmann and Peng, SIAM J Optim 20:3408–3426, 2010; Peng et al., Math Program Comput 2:59–77, 2010). In this paper, we consider how to choose an appropriate matrix splitting scheme so that the resulting relaxation model is easy to solve and able to provide a strong bound. To this end, we first introduce the notions of redundant and non-redundant matrix splitting and show that a relaxation based on a non-redundant matrix splitting can provide a stronger bound than one based on a redundant splitting. We then propose to follow the minimal trace principle, finding a non-redundant matrix splitting by solving an auxiliary semi-definite programming problem. We show that applying the minimal trace principle directly leads to the orthogonal matrix splitting introduced in Peng et al. (Math Program Comput 2:59–77, 2010). To find other non-redundant matrix splitting schemes whose resulting relaxation models are relatively easy to solve, we elaborate on two splitting schemes based on the so-called one-matrix and sum-matrix. We analyze the solutions of the auxiliary problems for these two cases and characterize when they can provide a non-redundant matrix splitting. The lower bounds from these two splitting schemes are compared theoretically. Promising numerical results on some large QAP instances are reported, which further validate our theoretical conclusions.


Notes

  1. The minimal trace principle is chosen because, as shown in our analysis in Sect. 2, the solution matrix obtained under this principle has the minimal rank, and the rank information on the splitting matrix can further be used to reduce the memory requirement and to simplify the relaxation model, as discussed in Sect. 4 (see also the solver sketch following these notes).

  2. For simplicity of discussion, all the theoretical analysis in this work considers only the basic model (2), which differs slightly from the full SDR model described in Sect. 4. However, since the full model only adds some convex constraints on the elements of \(Y\), the results for the basic model extend easily to the full model.

  3. We note that a similar approach (called the reduction method) has been used in the literature to improve the Gilmore–Lawler bound (GLB) and the eigenvalue bound for QAPs with nonsymmetric matrices [7, 8, 12, 29]. One simple choice is \(u=\min (B_\mathrm{off})\). For more details on the reduction method, we refer to Sect. 7.5.2 of [7].
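To make the minimal trace principle of Note 1 concrete, the following is a minimal sketch in Python/CVXPY (our own illustration, not the authors' MATLAB/CVX implementation) of the auxiliary SDP suggested by the proof of Theorem 2 in the Appendix: split \(B\) as \(B_1-B_2\) with \(B_1,B_2\succeq 0\) while minimizing the total trace. The test matrix below is hypothetical. By Theorem 2, the optimum coincides with the orthogonal splitting \((B^+,B^-)\), whose total trace equals the sum of the absolute values of the eigenvalues of \(B\).

```python
import cvxpy as cp
import numpy as np

def minimal_trace_splitting(B):
    """Minimize Tr(B1) + Tr(B2) subject to B1 - B2 = B, B1, B2 PSD.

    Auxiliary SDP suggested by the proof of Theorem 2 (MTMS-PSD);
    by that theorem its optimum is the orthogonal splitting (B+, B-).
    """
    n = B.shape[0]
    B1 = cp.Variable((n, n), symmetric=True)
    B2 = cp.Variable((n, n), symmetric=True)
    prob = cp.Problem(cp.Minimize(cp.trace(B1) + cp.trace(B2)),
                      [B1 - B2 == B, B1 >> 0, B2 >> 0])
    prob.solve(solver=cp.SCS)
    return B1.value, B2.value

# Hypothetical symmetric test matrix (not a QAP instance from the paper).
B = np.array([[1.0, 2.0, 0.0],
              [2.0, -1.0, 3.0],
              [0.0, 3.0, 0.5]])
B1, B2 = minimal_trace_splitting(B)
# The optimal total trace equals the sum of |eigenvalues| of B.
print(np.trace(B1) + np.trace(B2), np.abs(np.linalg.eigvalsh(B)).sum())
```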

References

  1. Adams, W.P., Johnson, T.A.: Improved linear programming-based lower bounds for the quadratic assignment problem. In: Pardalos, P.M., Wolkowicz, H. (eds.) Quadratic Assignment and Related Problems. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 16, pp. 43–75. AMS, Rhode Island (1994)

  2. Adams, W., Guignard, M., Hahn, P., Hightower, W.: A level-2 reformulation-linearization technique bound for the quadratic assignment problem. Eur. J. Oper. Res. 180, 983–996 (2007)

  3. Anstreicher, K., Brixius, N.: A new bound for the quadratic assignment problem based on convex quadratic programming. Math. Program. 89, 341–357 (2001)

  4. Ben-David, G., Malah, D.: Bounds on the performance of vector-quantizers under channel errors. IEEE Trans. Inf. Theory 51, 2227–2235 (2005)

  5. Burer, S., Vandenbussche, D.: Solving lift-and-project relaxations of binary integer programs. SIAM J. Optim. 16, 726–750 (2006)

  6. Burkard, R., Karisch, S., Rendl, F.: QAPLIB—a quadratic assignment problem library. J. Glob. Optim. 10, 391–403 (1997). Recent updates on QAPLIB are available at http://www.seas.upenn.edu/qaplib/

  7. Burkard, R., Dell’Amico, M., Martello, S.: Assignment Problems. Society for Industrial and Applied Mathematics, Philadelphia (2009)

  8. Conrad, K.: Das quadratische Zuweisungsproblem und zwei seiner Spezialfälle. Mohr Siebeck, Tübingen (1971)

  9. de Carvalho, S.A., Rahmann, S.: Microarray layout as a quadratic assignment problem. Proc. Ger. Conf. Bioinform. 83, 11–20 (2006)

  10. De Klerk, E., Sotirov, R.: Exploiting group symmetry in semidefinite programming relaxations of the quadratic assignment problem. Math. Program. 122, 225–246 (2010)

  11. Ding, Y., Wolkowicz, H.: A low-dimensional semidefinite relaxation for the quadratic assignment problem. Math. Oper. Res. 34, 1008–1022 (2009)

  12. Edwards, C.: A branch and bound algorithm for the Koopmans–Beckmann quadratic assignment problem. Comb. Optim. II, 35–52 (1980)

  13. Gilmore, P.: Optimal and suboptimal algorithms for the quadratic assignment problem. SIAM J. Appl. Math. 10, 305–313 (1962)

  14. Grant, M., Boyd, S., Ye, Y.: CVX: Matlab software for disciplined convex programming. http://www.stanford.edu/boyd/cvx. Accessed 2013

  15. Hadley, S.W., Rendl, F., Wolkowicz, H.: A new lower bound via projection for the quadratic assignment problem. Math. Oper. Res. 17, 727–739 (1992)

  16. Hahn, P., Grant, T.: Lower bounds for the quadratic assignment problem based upon a dual formulation. Oper. Res. 46, 912–922 (1998)

  17. Hahn, P., Anjos, M., Burkard, R.E., Karisch, S.E., Rendl, F.: QAPLIB—a quadratic assignment problem library. http://www.seas.upenn.edu/qaplib/. Accessed 2013

  18. Hahn, P.M., Zhu, Y.R., Guignard, M., Hightower, W.L., Saltzman, M.J.: A level-3 reformulation-linearization technique-based bound for the quadratic assignment problem. INFORMS J. Comput. 24, 202–209 (2012)

  19. Hanan, M., Kurtzberg, J.: Placement techniques. Des. Autom. Digital Syst. 1, 213–282 (1972)

  20. Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1985)

  21. Jansson, C., Chaykin, D., Keil, C.: Rigorous error bounds for the optimal value in semidefinite programming. SIAM J. Numer. Anal. 46, 180–200 (2007)

  22. Koopmans, T., Beckmann, M.: Assignment problems and the location of economic activities. Econometrica 25, 53–76 (1957)

  23. Lawler, E.: The quadratic assignment problem. Manage. Sci. 9, 589–599 (1963)

  24. Loiola, E., Abreu, N., Boaventura-Netto, P., Hahn, P., Querido, T.: A survey for the quadratic assignment problem. Eur. J. Oper. Res. 176, 657–690 (2007)

  25. Mittelmann, H., Peng, J.: Estimating bounds for quadratic assignment problems associated with Hamming and Manhattan distance matrices based on semidefinite programming. SIAM J. Optim. 20, 3408–3426 (2010)

  26. Mukherjee, L., Singh, V., Peng, J., Xu, J., Zeitz, M., Berezney, R.: Generalized median graphs and applications. J. Combin. Optim. 17, 21–44 (2009)

  27. Peng, J., Mittelmann, H., Li, X.: A new relaxation framework for quadratic assignment problems based on matrix splitting. Math. Program. Comput. 2, 59–77 (2010)

  28. Rendl, F., Sotirov, R.: Bounds for the quadratic assignment problem using the bundle method. Math. Program. 109, 505–524 (2007)

  29. Roucairol, C.: A reduction method for quadratic assignment problems. Methods Oper. Res. 32, 185–187 (1979)

  30. Taillard, E.: Comparison of iterative searches for the quadratic assignment problem. Locat. Sci. 3, 87–105 (1995)

  31. Theobald, C.M.: An inequality for the trace of the product of two symmetric matrices. Math. Proc. Camb. Philos. Soc. 77, 265–266 (1975)

  32. Toh, K., Todd, M., Tütüncü, R.: SDPT3—a Matlab software package for semidefinite programming. Optim. Methods Softw. 11, 545–581 (1999)

  33. Zhao, Q., Karisch, S., Rendl, F., Wolkowicz, H.: Semidefinite programming relaxations for the quadratic assignment problem. J. Combin. Optim. 2, 71–109 (1998)

  34. Zhao, X., Sun, D., Toh, K.: A Newton-CG augmented Lagrangian method for semidefinite programming. SIAM J. Optim. 20, 1737–1765 (2010)


Acknowledgments

We would like to thank the two anonymous referees and the Associate Editor for their helpful suggestions, which led to substantial improvements in the presentation of the paper. We also thank Etienne de Klerk for pointing out some missing constraints in our previous implementation of the SDRMS-SUM model in CVX. This work was jointly supported by AFOSR Grant FA9550-09-1-0098 and NSF Grant DMS 09-15240 ARRA, the National Natural Science Foundation of China under Grants 11071219 and 11371324, and the Zhejiang Provincial Natural Science Foundation of China under Grant LY13A010012.

Author information

Corresponding author

Correspondence to Jiming Peng.

Appendix: Proofs of Theorems 2 and 3

Proof of Theorem 2

Denote the optimal solution to the MTMS-PSD problem by \((B^*_1, B^*_2)\). We first show \((B^*_1, B^*_2)=(B^+, B^-)\). Let \(P\) be the projection matrix defined by

$$\begin{aligned} P=\sum _{i:\lambda _i\ge 0} q_iq_i^T. \end{aligned}$$

It follows immediately that

$$\begin{aligned} \mathrm{Tr}(B^*_1) \ge \mathrm{Tr}(B^*_1 P) =\mathrm{Tr}(B^*_1P^2)=\mathrm{Tr}(PB^*_1P) \ge \mathrm{Tr}(P(B^*_1-B^*_2)P) = \mathrm{Tr}(B^+), \end{aligned}$$

where the first inequality follows from the relation

$$\begin{aligned} \mathrm{Tr}(B^*_1(I-P))\ge 0. \end{aligned}$$

Here \(I\) denotes the identity matrix in \(\mathfrak {R}^{n\times n}\). Similarly, one has

$$\begin{aligned} \mathrm{Tr}(B^*_2) \ge \mathrm{Tr}(B^*_2 (I-P)) =\mathrm{Tr}((I-P) B^*_2 (I-P)) \ge \mathrm{Tr}(B^-). \end{aligned}$$

Therefore, we have

$$\begin{aligned} \mathrm{Tr}(B_1^*)+\mathrm{Tr}(B_2^*)\ge \mathrm{Tr}(B^+)+\mathrm{Tr}(B^-), \end{aligned}$$

and the equality holds if and only if

$$\begin{aligned} \mathrm{Tr}(B^*_1(I-P))= 0, \quad \mathrm{Tr}(B_2^*P)=0. \end{aligned}$$
(40)

Since \((B^+, B^-)\) is itself a feasible splitting of \(B\), the optimality of \((B^*_1, B^*_2)\) forces equality to hold above, so (40) is satisfied. Moreover, since all the matrices \(B_1^*\), \(B_2^*\), \(P\) and \(I-P\) are positive semi-definite, relation (40) holds if and only if

$$\begin{aligned} B_1^* = B^*_1P=P B_1^*, \quad B_2^*P=P B_2^*=0. \end{aligned}$$
(41)

Since \(B_1^*-B_2^* = B = B^+ - B^-\), we thus have

$$\begin{aligned} B_1^*=PBP=B^+, \quad B_2^*=-(I-P)B(I-P)=B^-. \end{aligned}$$

It remains to show that the matrix splitting \((B^+, B^-)\) is non-redundant. Suppose to the contrary that \((B^+, B^-)\) is a redundant splitting of \(B\), i.e., there exists \(R\succeq 0\) with \(R\ne 0\) such that

$$\begin{aligned} B_1=B^+-R \succeq 0, \quad B_2=B^{-}-R\succeq 0, \quad B_1-B_2=B. \end{aligned}$$

Then we have

$$\begin{aligned} \mathrm{Tr}(B^+B^-)= \mathrm{Tr}((B_1+R)(B_2+R))\ge \mathrm{Tr}(B_1B_2)+\mathrm{Tr}(R^2)>0, \end{aligned}$$

which contradicts the relation \(\mathrm{Tr}(B^+B^-)=0\). This completes the proof of the theorem.\(\square \)
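As a concrete illustration of the orthogonal splitting \((B^+, B^-)\) used in this proof, the following NumPy sketch (our own, with a hypothetical test matrix) assembles \(B^+\) and \(B^-\) from the nonnegative and negative eigenpairs of a symmetric \(B\) and verifies the properties exploited above: \(B^+-B^-=B\), \(B^{\pm}\succeq 0\) and \(\mathrm{Tr}(B^+B^-)=0\).

```python
import numpy as np

def orthogonal_splitting(B):
    """Return (B_plus, B_minus) with B = B_plus - B_minus, both PSD,
    and Tr(B_plus @ B_minus) = 0, built from the eigendecomposition
    of the symmetric matrix B."""
    lam, Q = np.linalg.eigh(B)                   # B = Q diag(lam) Q^T
    B_plus = (Q * np.maximum(lam, 0.0)) @ Q.T    # part with lam_i >= 0
    B_minus = (Q * np.maximum(-lam, 0.0)) @ Q.T  # minus the negative part
    return B_plus, B_minus

# Hypothetical symmetric test matrix.
B = np.array([[1.0, 2.0, 0.0],
              [2.0, -1.0, 3.0],
              [0.0, 3.0, 0.5]])
Bp, Bm = orthogonal_splitting(B)
assert np.allclose(Bp - Bm, B)                   # valid splitting
assert np.all(np.linalg.eigvalsh(Bp) >= -1e-9)   # B^+ is PSD
assert np.all(np.linalg.eigvalsh(Bm) >= -1e-9)   # B^- is PSD
assert abs(np.trace(Bp @ Bm)) < 1e-9             # orthogonality Tr(B^+ B^-)=0
```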

We next cite a well-known result on the trace of a matrix product from [31], which will be used in the proof of Theorem 3.

Lemma 3

Let \(A,B\in {\textsc {S}}^n\) have eigenvalues \(\lambda _i(A)\) and \(\lambda _i(B)\), \(i=1,\ldots ,n\), listed in nonincreasing order. Then

$$\begin{aligned} \mathrm{Tr}(AB)\le \sum _{i=1}^n\lambda _i(A)\lambda _i(B), \end{aligned}$$

where the equality holds if and only if there is an orthogonal matrix \(P\) whose columns form a common set of eigenvectors for \(A\) and \(B\), ordered with respect to \(\{\lambda _i(A)\}_{i=1}^n\) and \(\{\lambda _i(B)\}_{i=1}^n\), such that \(P^{-1}AP\) and \(P^{-1}BP\) are diagonal.
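As a quick sanity check of Lemma 3 (our own sketch with random data, not part of the original text), the following verifies the trace inequality for random symmetric matrices and the equality case for a pair sharing a co-ordered eigenbasis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
for _ in range(200):
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2
    B = rng.standard_normal((n, n)); B = (B + B.T) / 2
    bound = np.sum(np.sort(np.linalg.eigvalsh(A))[::-1]
                   * np.sort(np.linalg.eigvalsh(B))[::-1])
    assert np.trace(A @ B) <= bound + 1e-9        # Lemma 3 inequality

# Equality case: A and B diagonal in the same basis, eigenvalues co-ordered.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
a, b = np.sort(rng.standard_normal(n)), np.sort(rng.standard_normal(n))
A, B = (Q * a) @ Q.T, (Q * b) @ Q.T
assert np.isclose(np.trace(A @ B), np.sum(a * b))  # equality attained
```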

Proof of Theorem 3

Since \((\alpha ,\beta )\) is an optimal solution of problem (23), there exists \(U\in {\textsc {S}}^n\) such that

$$\begin{aligned}&n-\mathrm{Tr}(UE)=0,\end{aligned}$$
(42)
$$\begin{aligned}&n-\mathrm{Tr}(U)=0,\end{aligned}$$
(43)
$$\begin{aligned}&\mathrm{Tr}(U(\alpha E +\beta I - B)) =0,\end{aligned}$$
(44)
$$\begin{aligned}&U\succeq 0, \quad \alpha E +\beta I - B\succeq 0. \end{aligned}$$
(45)

From (42)–(44), we directly obtain

$$\begin{aligned} n(\alpha +\beta )=\mathrm{Tr}(UB). \end{aligned}$$
(46)

From (44) and (45), it follows that

$$\begin{aligned} U(\alpha E +\beta I - B)=(\alpha E +\beta I - B)U=0. \end{aligned}$$
(47)

This implies that \(U\) and \(\alpha E +\beta I - B\) commute. By Theorem 1.3.12 in [20], \(U\) and \(\alpha E +\beta I - B\) are simultaneously diagonalizable. Since \(U\in {\textsc {S}}^n\) and \(\alpha E +\beta I - B\in {\textsc {S}}^n\), there is an orthogonal matrix \(P\) such that \(P^{-1}UP\) and \(P^{-1}(\alpha E +\beta I - B)P\) are diagonal. So, we have

$$\begin{aligned} \mathrm{Tr}(U(\alpha E +\beta I - B))=\sum _{i=1}^n\lambda _i(U)\lambda _i(\alpha E +\beta I - B) =0, \end{aligned}$$

which in turn by (45) implies that

$$\begin{aligned} \lambda _i(U)\lambda _i(\alpha E +\beta I - B)=0,~i=1,\ldots ,n. \end{aligned}$$
(48)

Due to the minimal trace principle, we have \(m=\mathrm{Rank}(\alpha E+\beta I -B) < n\). Since \(\alpha E +\beta I - B\succeq 0\), we may order the eigenvalues so that \(\lambda _i(\alpha E +\beta I - B)>0\) for \(i=1,\ldots ,m\). The equality (48) then yields \(\lambda _i(U)=0\), \(i=1,\ldots ,m\).

We now prove that \(UE\ne EU\). Suppose to the contrary that \(UE= EU\). By Theorem 1.3.12 in [20], \(U\) and \(E\) are simultaneously diagonalizable. Let

$$\begin{aligned} \lambda _1(U)=\cdots =\lambda _m(U)=0<\lambda _{m+1}(U)\le \cdots \le \lambda _n(U). \end{aligned}$$
(49)

Note that the eigenvalues of \(E\) are \(0,\ldots ,0,n\). Therefore, we have

$$\begin{aligned} \mathrm{Tr}(UE)=n\lambda _n(U), \end{aligned}$$

which by (42) implies \(\lambda _n(U)=1\). Hence, we infer from (49) that

$$\begin{aligned} \mathrm{Tr}(U)=\sum _{i=1}^n\lambda _i(U)\le n-m<n, \end{aligned}$$

which contradicts (43).

Because \(UE\ne EU\), from (47) we obtain \(UB\ne BU\). Since \(U\in {\textsc {S}}^n\) and \(B\in {\textsc {S}}^n\), by Theorem 1.3.12 in [20], \(U\) and \(B\) are not simultaneously diagonalizable. Now using Lemma 3, we have

$$\begin{aligned} \mathrm{Tr}(UB)<\sum _{i=1}^n\lambda _i(U)\lambda _i(B), \end{aligned}$$

which, together with (46), yields

$$\begin{aligned} n(\alpha +\beta )<\sum _{i=1}^n\lambda _i(U)\lambda _i(B). \end{aligned}$$
(50)

Let \(\lambda _{\max }(B)\) be the largest eigenvalue of \(B\). Note that \(\sum _{i=1}^n\lambda _i(U)=\mathrm{Tr}(U)=n\). Also, \(\lambda _i(U)\ge 0\) for all \(i\) since \(U\succeq 0\). It then follows from (50) that

$$\begin{aligned} \alpha +\beta < \lambda _{\max }(B). \end{aligned}$$
(51)

On the other hand, from (45), we have

$$\begin{aligned} B-\alpha (E-I) -(\alpha +\beta )I \preceq 0. \end{aligned}$$

This means that

$$\begin{aligned} \alpha +\beta \ge \lambda _{\max }( B-\alpha (E-I)). \end{aligned}$$
(52)

If \(\alpha =0\), then (52) reduces to \(\beta \ge \lambda _{\max }(B)\), which contradicts (51).

Now suppose \(\alpha <0\). Let \(\rho (B)\) be the spectral radius of \(B\). Since \(B\in {\textsc {S}}^n\) is non-negative, Theorem 8.3.1 in [20] implies that \(\rho (B)\) is an eigenvalue of \(B\) and that there exists a nonzero \(\hat{x}\in {\mathfrak {R}}^n\) with \(\hat{x}\ge 0\) such that \(B\hat{x}=\rho (B)\hat{x}\). Without loss of generality, we may further assume that \(\Vert \hat{x}\Vert _2=1\), so that \(\hat{x}^TB\hat{x}=\rho (B)\). Since \(\hat{x}\ge 0\), we have \(\hat{x}^T(E-I)\hat{x}\ge 0\). It then follows from (52) that

$$\begin{aligned} \alpha +\beta&\ge \lambda _{\max }( B-\alpha (E-I))\\&= \max \left\{ x^T( B-\alpha (E-I))x:x^{T}x=1\right\} \\&\ge \hat{x}^T( B-\alpha (E-I))\hat{x} \\&\ge \hat{x}^T B\hat{x}=\rho (B)\\&\ge \lambda _{\max }(B), \end{aligned}$$

which contradicts (51). Therefore, we conclude that \(\alpha >0\). This completes the proof of the theorem.\(\square \)
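For readers who wish to reproduce this behavior numerically, the following CVXPY sketch solves an SDP of the form \(\min \; n(\alpha +\beta )\) subject to \(\alpha E+\beta I-B\succeq 0\). We stress that this form of problem (23) is an assumption inferred from the optimality conditions (42)–(45) rather than a restatement of the paper's model, and the test matrix is hypothetical; for a nonnegative symmetric \(B\), the computed \(\alpha\) should be positive, consistent with Theorem 3.

```python
import cvxpy as cp
import numpy as np

def sum_matrix_auxiliary(B):
    """Assumed form of the auxiliary problem (23):
         min  n*(alpha + beta)   s.t.   alpha*E + beta*I - B >> 0,
    where E is the all-ones matrix. This form is inferred from the
    optimality conditions (42)-(45); it is a sketch, not the paper's code."""
    n = B.shape[0]
    E = np.ones((n, n))
    alpha = cp.Variable()
    beta = cp.Variable()
    prob = cp.Problem(cp.Minimize(n * (alpha + beta)),
                      [alpha * E + beta * np.eye(n) - B >> 0])
    prob.solve(solver=cp.SCS)
    return alpha.value, beta.value

# Hypothetical nonnegative symmetric test matrix.
rng = np.random.default_rng(1)
M = rng.random((6, 6))
B = (M + M.T) / 2
a, b = sum_matrix_auxiliary(B)
# Theorem 3 predicts alpha > 0; its proof also gives alpha + beta < lambda_max(B).
print("alpha =", a, "beta =", b, "lambda_max(B) =", np.linalg.eigvalsh(B).max())
```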

Cite this article

Peng, J., Zhu, T., Luo, H. et al. Semi-definite programming relaxation of quadratic assignment problems based on nonredundant matrix splitting. Comput Optim Appl 60, 171–198 (2015). https://doi.org/10.1007/s10589-014-9663-y
