Abstract
We consider the problem of constructing quantum channels, if they exist, that transform a given set of quantum states \(\{\rho _1, \ldots , \rho _k\}\) to another such set \(\{\hat{\rho }_1, \ldots , \hat{\rho }_k\}\). In other words, we must find a completely positive linear map, if it exists, that maps a given set of density matrices to another given set of density matrices, possibly of different dimension. Using the theory of completely positive linear maps, one can formulate the problem as an instance of a positive semidefinite feasibility problem with highly structured constraints. The nature of the constraints makes projection-based algorithms very appealing when the number of variables is huge and standard interior-point methods for semidefinite programming are not applicable. We provide empirical evidence to this effect. We moreover present heuristics for finding both high-rank and low-rank solutions. Our experiments are based on the method of alternating projections and the Douglas–Rachford reflection method.
Notes
Note that if the maximum value is the same as \(iterlimit\), then the method failed to attain the desired accuracy \(toler\) for this particular value of \(r\).
This is a good indicator of the expected number of iterations.
We used the rank function in MATLAB with the default tolerance; i.e., \({{\mathrm{{rank}}}}(P)\) is the number of singular values of \(P\) that are larger than \(mn \cdot eps(\Vert P\Vert )\), where \(eps(\Vert P\Vert )\) is the positive distance from \(\Vert P\Vert \) to the next larger in magnitude floating point number of the same precision. We note that the DR algorithm never failed to find a max-rank solution.
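This rank computation is easy to mirror outside MATLAB. The following numpy sketch is our rendition of it, not the paper's code; `np.spacing` plays the role of MATLAB's `eps(x)`, and the function name is ours:

```python
import numpy as np

def numerical_rank(P):
    """Numerical rank in the MATLAB-default style: count singular values
    larger than max(size) * eps(||P||_2)."""
    s = np.linalg.svd(P, compute_uv=False)
    tol = max(P.shape) * np.spacing(s[0])  # s[0] = ||P||_2; np.spacing ~ MATLAB eps(x)
    return int((s > tol).sum())

P = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])  # exactly rank one
assert numerical_rank(P) == 1
```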
Acknowledgments
We would like to thank the editors and referees for their careful reading and helpful comments on the paper.
Appendix: Background
1.1 Matrix representation of \(\mathcal {L}\) and \(\mathcal {L}^\dagger \)
In this section, we show that the matrices \(L\) in (9) and \(L^\dagger \) in (10) are indeed matrix representations of the linear map \(\mathcal {L}\) [defined in (5)] and its Moore–Penrose generalized inverse, respectively, under a specific choice of basis of the vector space \({\mathcal H} ^{nm}\).
1.1.1 Choice of orthonormal basis of \({\mathcal H} ^s\)
We choose the standard orthonormal basis for the vector space \({\mathcal H} ^\ell \) of \(\ell \times \ell \) Hermitian matrices (over the reals) as follows. Let \(e_j\in \mathbb {R}^\ell \) be the \(j\)th standard unit vector for \(j=1,\ldots ,\ell \). Then \(e_ie_j^T\in \mathbb {R}^{\ell \times \ell }\) is zero everywhere except the \((i,j)\)th entry, which is 1. For \(i,j=1,\ldots ,\ell \), define the \((i,j)\)th basis matrix as follows:
Then \({\mathcal E} _{{\mathrm {real,offdiag}}}\cup {\mathcal E} _{{\mathrm {imag,offdiag}}}\cup {\mathcal E} _{{{\mathrm{{diag}}}}}\) forms an orthonormal basis of \({\mathcal H} ^\ell \), where
-
\({\mathcal E} _{\mathrm {real,offdiag}}:=\{E_{ij} : 1\le i < j \le \ell \}\) collects the real zero-diagonal basis matrices,
-
\({\mathcal E} _{\mathrm {imag,offdiag}}:=\{E_{ij} : 1\le j < i \le \ell \}\) collects the imaginary zero-diagonal basis matrices, and
-
\({\mathcal E} _{{{\mathrm{{diag}}}}}:=\{E_{jj} : 1\le j \le \ell \}\) collects the real diagonal basis matrices.
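Such a basis is easy to verify computationally. The following Python sketch assumes the usual conventions \(E_{ij}=(e_ie_j^T+e_je_i^T)/\sqrt{2}\) for \(i<j\) and \(E_{ij}=\mathrm{i}(e_ie_j^T-e_je_i^T)/\sqrt{2}\) for \(i>j\); the \(1/\sqrt{2}\) normalization and the sign of the imaginary matrices are our assumptions, since equation (11) is not restated here:

```python
import numpy as np

def basis_matrix(i, j, ell):
    """(i, j)th basis matrix of the ell x ell Hermitian matrices (1-indexed);
    normalization and sign conventions are assumed, not taken from the paper."""
    E = np.zeros((ell, ell), dtype=complex)
    if i < j:      # real off-diagonal
        E[i - 1, j - 1] = E[j - 1, i - 1] = 1 / np.sqrt(2)
    elif i > j:    # imaginary off-diagonal
        E[i - 1, j - 1], E[j - 1, i - 1] = 1j / np.sqrt(2), -1j / np.sqrt(2)
    else:          # real diagonal
        E[j - 1, j - 1] = 1.0
    return E

ell = 3
B = [basis_matrix(i, j, ell) for i in range(1, ell + 1) for j in range(1, ell + 1)]
# Gram matrix under the trace inner product <A, B> = trace(A^H B)
G = np.array([[np.trace(X.conj().T @ Y).real for Y in B] for X in B])
assert np.allclose(G, np.eye(ell * ell))  # the ell^2 matrices are orthonormal
```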
We define a total ordering \(\prec \) on the tuples \((i,j)\) for \(i,j=1,\ldots ,\ell \), so that the matrices are ordered with \({\mathcal E} _{\mathrm {real,offdiag}}\prec {\mathcal E} _{\mathrm {imag,offdiag}}\prec {\mathcal E} _{{{\mathrm{{diag}}}}}\) in the element-wise sense [as stated in (6)]. For any \((i,j), (\tilde{i}, \tilde{j})\in \{1,\ldots ,\ell \}^2\), we say that \((i,j) \prec (\tilde{i}, \tilde{j})\) if one of the following holds.
-
Case 1: \(i<j\) (so that \(E_{ij}\) is a real matrix with zero diagonal).
-
\(i<j\) and \(\tilde{i} \ge \tilde{j}\).
-
\(i<j\) and \(\tilde{i}<\tilde{j}\), but \(\tilde{j}> j\).
-
\(i<j\) and \(\tilde{i}<\tilde{j} = j\), but \(\tilde{i}>i\).
-
-
Case 2: \(i>j\) (so that \(E_{ij}\) is an imaginary matrix with zero diagonal).
In this case we must have \(\tilde{i}\ge \tilde{j}\).
-
\(j<i\) and \(\tilde{j} = \tilde{i}\).
-
\(j<i\) and \(\tilde{j}<\tilde{i}\), but \(\tilde{i}> i\).
-
\(j<i\) and \(\tilde{j}<\tilde{i} = i\), but \(\tilde{j}>j\).
-
-
Case 3: \(i=j\) (so that \(E_{jj}\) is a real diagonal matrix).
In this case we must have \(\tilde{i}=\tilde{j}\).
-
\(j<\tilde{j}\).
For instance, when \(\ell =3\), our orthonormal basis of choice is given in the following order:
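That order can be generated programmatically. The following Python sketch encodes Cases 1–3 as a sort key (the encoding is ours, but the resulting order for \(\ell =3\) matches the description above: real upper triangle, then imaginary lower triangle, then diagonal):

```python
def order_key(ij):
    """Sort key for the total order on basis indices: real off-diagonal first
    (ordered by (j, i)), then imaginary off-diagonal (ordered by (i, j)),
    then diagonal (ordered by j)."""
    i, j = ij
    if i < j:
        return (0, j, i)
    if i > j:
        return (1, i, j)
    return (2, j, j)

ell = 3
pairs = sorted(((i, j) for i in range(1, ell + 1) for j in range(1, ell + 1)),
               key=order_key)
# order for ell = 3: E12, E13, E23, E21, E31, E32, E11, E22, E33
assert pairs == [(1, 2), (1, 3), (2, 3), (2, 1), (3, 1), (3, 2),
                 (1, 1), (2, 2), (3, 3)]
```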
We also work with \(nm\times nm\) block matrices, with each block of size \(m\times m\). On the space \({\mathcal H} ^{nm}\), for any \(1\le i , j \le nm\), let
Note that \(1\le s,t\le n\) are the block indices and \(1\le p,q\le m\) are the intra-block indices. For instance, consider the matrix \(e_ie_j^T\in {\mathcal H} ^{nm}\) (having only 1 nonzero entry at position \((i,j)\)). The nonzero entry is at the \((p,q)\)th entry in the \((s,t)\)th block (which is of size \(m\times m\)). The orthonormal matrices \(E_{ij}\) defined in (11) are related to the block indices \((s,t)\) as described in Table 7.
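A small Python sketch of this index correspondence; the arithmetic \(i=(s-1)m+p\), \(j=(t-1)m+q\) is our reading of (12), which is not restated here:

```python
def block_indices(i, j, m):
    """Map entry (i, j) of an nm x nm block matrix (1-indexed) to its block
    indices (s, t) and intra-block indices (p, q), assuming
    i = (s - 1) * m + p and j = (t - 1) * m + q."""
    s, p = divmod(i - 1, m)
    t, q = divmod(j - 1, m)
    return s + 1, t + 1, p + 1, q + 1

# with m = 2: entry (3, 6) lies in block (2, 3), at intra-block position (1, 2)
assert block_indices(3, 6, 2) == (2, 3, 1, 2)
```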
Unlike \({\mathcal H} ^s\), we order the blocked orthonormal matrices \(E_{ij}\) in \({\mathcal H} ^{nm}\) via the following total ordering \(<\) on the set \(\mathcal {I}:=\{(i,j): 1\le i,j \le nm\}\). Let \(1\le i,j,\tilde{i}, \tilde{j}\le nm\) with \((i,j)\ne (\tilde{i}, \tilde{j})\). Then letting
the relation \((i,j)<(\tilde{i}, \tilde{j})\) holds if and only if one of the following holds.
-
\(\{p,q\}\ne \{\tilde{p},\tilde{q}\}\) and \(\left( \min \{p,q\},\ \max \{p,q\}\right) \prec \left( \min \{\tilde{p},\tilde{q}\},\ \max \{\tilde{p},\tilde{q}\}\right) \).
-
\(\{p,q\}=\{\tilde{p},\tilde{q}\}\) and \((s,t)\prec (\tilde{s}, \tilde{t})\).
-
\(\{p,q\}=\{\tilde{p},\tilde{q}\}\), \((s,t)=(\tilde{s}, \tilde{t})\) and \(p<q\). (Then \(\tilde{q} = p < q = \tilde{p}\).)
In other words, we order the 2-tuples \((i,j)\) in \(\mathcal {I}\) by grouping all those with the same intra-block index \((p,q)\) and block indices \(s<t\) together, for some \(p<q\), followed by those tuples with intra-block index \((q,p)\) and block indices \(s<t\). As an example, when \(m=2\) and \(n=3\), the following list gives the first few entries of \(\mathcal {I}\):
These first 2-tuples \((i,j)\) in \(\mathcal {I}\) have the corresponding 4-tuples \((s,t,p,q)\) defined as in (12), given as follows:
Note that the first three 2-tuples have the same intra-block index \((p,q)=(1,2)\). The immediately following three 2-tuples have the intra-block index \((q,p)=(2,1)\), and so on.
1.1.2 Symmetric vectorization of Hermitian matrices
Using the ordered orthonormal basis of \({\mathcal H} ^s\) described in (11) in Sect. 1, we can define the corresponding symmetric vectorization of Hermitian matrices. Since any Hermitian matrix in \({\mathcal H} ^s\) can be expressed as a unique linear combination of the orthonormal basis matrices \(E_{ij}\), the map
where \(v\in \mathbb {R}^{s^2}\) is the unique vector such that \(H = \sum _{i,j=1}^{s} v_{ij} E_{ij}\), is well-defined. The map \({{\mathrm{{sHvec}}}}\) is a linear isometry (i.e., \({{\mathrm{{sHvec}}}}\) is a linear map and \(\Vert {{\mathrm{{sHvec}}}}(H)\Vert ^2={{\mathrm{{trace}}}}(H^2)\) for all \(H\in {\mathcal H} ^s\)), and its adjoint is given by
which is also the inverse map of \({{\mathrm{{sHvec}}}}\). For instance,
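A numpy sketch of such a symmetric vectorization follows; it assumes the basis convention sketched earlier, and the coefficients appear in the naive double-loop order rather than the ordering of Sect. 1.1.1 (the isometry property is independent of the ordering):

```python
import numpy as np

def sHvec(H):
    """Coefficients of a Hermitian H in an orthonormal Hermitian basis
    (basis convention assumed, ordering naive)."""
    ell = H.shape[0]
    v = []
    for i in range(1, ell + 1):
        for j in range(1, ell + 1):
            E = np.zeros((ell, ell), dtype=complex)
            if i < j:      # real off-diagonal
                E[i - 1, j - 1] = E[j - 1, i - 1] = 1 / np.sqrt(2)
            elif i > j:    # imaginary off-diagonal
                E[i - 1, j - 1], E[j - 1, i - 1] = 1j / np.sqrt(2), -1j / np.sqrt(2)
            else:          # real diagonal
                E[j - 1, j - 1] = 1.0
            v.append(np.trace(E.conj().T @ H).real)  # <E, H> = trace(E^H H)
    return np.array(v)

H = np.array([[1, 2 + 1j], [2 - 1j, 3]])
v = sHvec(H)
assert np.isclose(v @ v, np.trace(H @ H).real)  # isometry: ||sHvec(H)||^2 = trace(H^2)
```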
1.1.3 Ordering the rows and columns in the matrix representation of \(\mathcal {L}\)
In the following, we compute matrix representations \(L_A\) and \(L_T\) of the linear maps \(\mathcal {L}_A:{\mathcal H} ^{nm}\rightarrow {\mathcal H} ^m\times \cdots \times {\mathcal H} ^m\) (\(k\) copies) and \(\mathcal {L}_T:{\mathcal H} ^{nm}\rightarrow {\mathcal H} ^n\), respectively. Since \(\mathcal {L}=(\mathcal {L}_A(\cdot ), \mathcal {L}_T(\cdot ))\), the matrix representation is then chosen to be \(L = \begin{bmatrix} L_A \\ L_T \end{bmatrix}\), with \(L_A\in \mathbb {R}^{km^2\times n^2m^2}\) and \(L_T\in \mathbb {R}^{n^2\times n^2m^2}\).
Any matrix representation for the linear map \(\mathcal {L}_A\) (resp. \(\mathcal {L}_T\)) depends on the choice of the ordered orthonormal bases for \({\mathcal H} ^{nm}\) and for \({\mathcal H} ^m\times \cdots \times {\mathcal H} ^m\) (resp. for \({\mathcal H} ^{nm}\) and for \({\mathcal H} ^m\)). For \({\mathcal H} ^{nm}\), we use the ordered orthonormal basis defined in Sect. 1. For \({\mathcal H} ^m\times \cdots \times {\mathcal H} ^m\), we use the orthonormal basis
We first construct a matrix representation \(L_A\) of \(\mathcal {L}_A\) by rows. Recall that
for any \(P\in {\mathcal H} ^{nm}\), so the rows of \(\mathcal {L}_A\) are determined by the linear functionals
for some \(\ell \in \{1,\ldots ,k\}\) and \(p<q\in \{1,\ldots ,m\}\). Defining vectors \(\alpha _{{\ell ,p,q,{\mathrm{Re}\,}}},\alpha _{{\ell ,p,q,{\mathrm{Im}\,}}},\beta _{{\ell ,q}}\in \mathbb {R}^{(nm)^2}\) by
for all \(\ell \in \{1,\ldots ,k\}\), \(p<q\in \{1,\ldots ,m\}\), we get that
Now we proceed to find the vectors \(\alpha _{{\ell ,p,q,{\mathrm{Re}\,}}},\alpha _{{\ell ,p,q,{\mathrm{Im}\,}}},\beta _{{\ell ,q}}\).
1.1.4 Computing the rows of \(L_A\)
Fix any \(\ell \in \{1,\ldots ,k\}\). For any \(i,j\in \{1,\ldots ,nm\}\), let \((s,t,p,q)\) be defined as in (12).
If \(s<t\), then using Table 7 we get
If \(s>t\), then
If \(s=t\), then
Fix any \(\ell =1,\ldots ,k\) and \(\hat{p}<\hat{q}\) from \(\{1,\ldots ,m\}\). Then for all \(i,j\in \{1,\ldots ,nm\}\), defining \((s,t,p,q)\) as in (12), we have
and
and
Therefore, for any \(1\le \hat{p}<\hat{q}\le m\) and any \(i,j=1,\ldots , nm\), using \((s,t,p,q)\) defined in (12) and the definitions of \(M_{{\mathrm{Re}\,}}, M_{{\mathrm{Im}\,}}, M_D\) on Page 6,
implying that \(\alpha _{{\ell ,\hat{p},\hat{q},{\mathrm{Re}\,}}}^T\) is one of the first \(k\) rows of the matrix \(\begin{bmatrix} I_{t_{m-1}}\otimes N_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}&\ 0 \end{bmatrix}\). (Here note that the number of pairs \((\hat{p},\hat{q})\) with \(1\le \hat{p}< \hat{q} \le m\) is the triangular number \(t_{m-1}=\frac{1}{2}m(m-1)\). The zero block corresponds to the index pairs \((i,j)\) with \(p=q\), where \((s,t,p,q)\) are the block indices defined in (12).) Similarly,
so \(\alpha _{{\ell ,\hat{p},\hat{q},{\mathrm{Im}\,}}}^T\) is one of the last \(k\) rows of the matrix \(\begin{bmatrix} I_{t_{m-1}}\otimes N_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}&\ 0 \end{bmatrix}\). Finally,
so \(\beta _{{\ell ,\hat{q}}}^T\) is one of the rows of the matrix \(\begin{bmatrix} 0&\ I_{m}\otimes M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}\end{bmatrix}\).
Hence, a matrix representation of \(\mathcal {L}_A\) is given by
1.1.5 Computing \(L_T\)
Recall the linear map \(\mathcal {L}_T:{\mathcal H} ^{nm}\rightarrow {\mathcal H} ^n : P\mapsto [{{\mathrm{{trace}}}}(P_{st})]_{s,t=1,\ldots ,n}\), which defines the second component of \(\mathcal {L}\). We compute a matrix representation \(L_T\) of \(\mathcal {L}_T\) by columns, i.e., by considering \(\mathcal {L}_T(E_{ij})\in {\mathcal H} ^n\) for \(i,j=1,\ldots ,nm\). Defining \((s,t,p,q)\) as in (12), we have
Hence the \((i,j)\)th column of \(L_T\) is given by
This implies that \(L_T=\begin{bmatrix} 0&\ e_m^T\otimes I_{n^2} \end{bmatrix}\), where the zero block corresponds to the \((i,j)\) pairs with \(p\ne q\), and each row \(e_m^T\) in the Kronecker product corresponds to those \((i,j)\) pairs with the same block indices \((s,t)\) (there are \(m\) pairs \((i,j)\) with the same block indices that have nonzero intra-block traces).
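Independently of the vectorized representation \(L_T\), the block-trace map \(\mathcal {L}_T\) itself can be sketched directly in numpy; the reshape-and-contract pattern below is our illustration, not the paper's code:

```python
import numpy as np

n, m = 3, 2
P = np.arange(36, dtype=float).reshape(n * m, n * m)  # 6x6, viewed as 3x3 blocks of size 2x2
# trace of each m x m block P_st: reshape to (s, p, t, q) and contract p = q
T = np.einsum('sptp->st', P.reshape(n, m, n, m))
assert T.shape == (n, n)
assert np.isclose(T.trace(), P.trace())  # the block-trace map preserves the total trace
```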
1.1.6 Alternative column orderings, eliminating redundant rows
Combining the results from the previous two sections, we arrive at a matrix representation of \(\mathcal {L}\):
In the final matrix representation that we use, some rows of the second block row are linearly dependent on the other rows when the linear map \(\mathcal {L}_A\) contains the unital constraints. Hence we remove those rows and replace the original matrix representation \(L\) in (15) by the following matrix:
Note that we can use an alternative ordering for the off-diagonal entries inside the blocks. While this does not change the ordering of the columns in the second block column in (15) (which correspond to the diagonal entries inside the blocks), it can affect the column ordering of \(N_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}\) (resulting in, e.g., \(N_\mathrm{final}\)).
1.1.7 Pseudoinverse of \(L\)
Using the block diagonal structure of \(L\) and the fact that
(which can be easily verified to be the pseudoinverse), it is immediate that
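Two generic pseudoinverse identities typically underlie such block-diagonal and Kronecker-structured arguments: \((A\otimes B)^\dagger = A^\dagger \otimes B^\dagger \), and \(\begin{bmatrix} A&0 \end{bmatrix}^\dagger = \begin{bmatrix} A^\dagger \\ 0 \end{bmatrix}\). The numpy check below illustrates these general identities on random data; it is our assumption about which facts are being invoked, not a reproduction of the specific matrices above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((4, 2))

# the pseudoinverse distributes over Kronecker products: (A (x) B)^+ = A^+ (x) B^+
lhs = np.linalg.pinv(np.kron(A, B))
rhs = np.kron(np.linalg.pinv(A), np.linalg.pinv(B))
assert np.allclose(lhs, rhs)

# padding with a zero block transposes into the pseudoinverse: pinv([A 0]) = [A^+; 0]
M = np.hstack([A, np.zeros((2, 5))])
assert np.allclose(np.linalg.pinv(M),
                   np.vstack([np.linalg.pinv(A), np.zeros((5, 2))]))
```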
Drusvyatskiy, D., Li, CK., Pelejo, D.C. et al. Projection methods for quantum channel construction. Quantum Inf Process 14, 3075–3096 (2015). https://doi.org/10.1007/s11128-015-1024-y