
Projection methods for quantum channel construction


Abstract

We consider the problem of constructing quantum channels, if they exist, that transform a given set of quantum states \(\{\rho _1, \ldots , \rho _k\}\) to another such set \(\{\hat{\rho }_1, \ldots , \hat{\rho }_k\}\). In other words, we must find a completely positive linear map, if it exists, that maps a given set of density matrices to another given set of density matrices, possibly of different dimension. Using the theory of completely positive linear maps, one can formulate the problem as an instance of a positive semidefinite feasibility problem with highly structured constraints. The nature of the constraints makes projection-based algorithms very appealing when the number of variables is huge and standard interior-point methods for semidefinite programming are not applicable. We provide empirical evidence to this effect. We moreover present heuristics for finding both high-rank and low-rank solutions. Our experiments are based on the method of alternating projections and the Douglas–Rachford reflection method.
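For concreteness, the two iterations named in the abstract can be sketched in a few lines. The following Python/NumPy illustration is ours, not the authors' implementation: the unit ball and hyperplane are stand-in convex sets, and all names are hypothetical. It shows the basic updates, alternating projections \(x \leftarrow P_A(P_B(x))\) and the Douglas–Rachford iteration \(x \leftarrow \tfrac{1}{2}\left(x + R_A(R_B(x))\right)\) with reflections \(R_S = 2P_S - I\).

```python
import numpy as np

# Stand-in convex sets (assumptions for this sketch):
# A = unit ball {x : ||x|| <= 1},  B = hyperplane {x : a^T x = b}.
a, b = np.array([1.0, 2.0]), 2.0

def P_A(x):  # projection onto the unit ball
    nrm = np.linalg.norm(x)
    return x if nrm <= 1 else x / nrm

def P_B(x):  # projection onto the hyperplane a^T x = b
    return x - (a @ x - b) / (a @ a) * a

def R(P, x):  # reflection through a set: R = 2P - I
    return 2 * P(x) - x

x_map = x_dr = np.array([5.0, -3.0])
for _ in range(1000):
    x_map = P_A(P_B(x_map))                     # method of alternating projections
    x_dr = 0.5 * (x_dr + R(P_A, R(P_B, x_dr)))  # Douglas-Rachford reflection method
print(x_map, P_B(x_dr))  # for DR, the "shadow" P_B(x_dr) approaches a point of A ∩ B
```

The paper's feasibility problem replaces these stand-in sets with the positive semidefinite cone and an affine subspace of Choi matrices.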


Notes

  1. Note that if the maximum value is the same as \(iterlimit\), then the method failed to attain the desired accuracy \(toler\) for this particular value of \(r\).

  2. This is a good indicator of the expected number of iterations.

  3. We used the rank function in MATLAB with the default tolerance; i.e., \({{\mathrm{{rank}}}}(P)\) is the number of singular values of \(P\) that are larger than \(mn*eps(\Vert P\Vert )\), where \(eps(\Vert P\Vert )\) is the distance from \(\Vert P\Vert \) to the next larger floating-point number of the same precision. We note that the DR algorithm never failed to find a max-rank solution.
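In NumPy terms, the numerical rank described in this note can be computed as follows; this is a sketch mirroring the MATLAB default described above, and `numerical_rank` is our name, not from the paper.

```python
import numpy as np

def numerical_rank(P):
    """Number of singular values of P exceeding max(P.shape) * eps(||P||),
    where eps(x) = np.spacing(x) is the gap from x to the next float."""
    s = np.linalg.svd(P, compute_uv=False)
    tol = max(P.shape) * np.spacing(s[0])  # s[0] = ||P|| (spectral norm)
    return int(np.sum(s > tol))
```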



Acknowledgments

We would like to thank the editors and referees for their careful reading and helpful comments on the paper.

Author information


Correspondence to Henry Wolkowicz.

Appendix: Background

1.1 Matrix representation of \(\mathcal {L}\) and \(\mathcal {L}^\dagger \)

In this section, we show that the matrices \(L\) in (9) and \(L^\dagger \) in (10) are indeed matrix representations of the linear map \(\mathcal {L}\) [defined in (5)] and its Moore–Penrose generalized inverse, respectively, under a specific choice of basis of the vector space \({\mathcal H} ^{nm}\).

1.1.1 Choice of orthonormal basis of \({\mathcal H} ^\ell \)

We choose the standard orthonormal basis for the vector space \({\mathcal H} ^\ell \) of \(\ell \times \ell \) Hermitian matrices (over the reals) as follows. Let \(e_j\in \mathbb {R}^\ell \) be the \(j\)th standard unit vector for \(j=1,\ldots ,\ell \). Then \(e_ie_j^T\in \mathbb {R}^{\ell \times \ell }\) is zero everywhere except the \((i,j)\)th entry, which is 1. For \(i,j=1,\ldots ,\ell \), define the \((i,j)\)th basis matrix as follows:

$$\begin{aligned} E_{ij} = \begin{cases} \frac{1}{\sqrt{2}} \left( e_i e_j^T + e_j e_i^T\right) &\quad \text {if }\; i < j, \\ \frac{\mathsf {i}\,}{\sqrt{2}} \left( e_j e_i^T - e_i e_j^T\right) &\quad \text {if }\; i > j, \\ e_j e_j^T &\quad \text {if }\; i = j. \end{cases} \end{aligned}$$
(11)

Then \({\mathcal E} _{{\mathrm {real,offdiag}}}\cup {\mathcal E} _{{\mathrm {imag,offdiag}}}\cup {\mathcal E} _{{{\mathrm{{diag}}}}}\) forms an orthonormal basis of \({\mathcal H} ^\ell \), where

  • \({\mathcal E} _{\mathrm {real,offdiag}}:=\{E_{ij} : 1\le i < j \le \ell \}\) collects the real zero-diagonal basis matrices,

  • \({\mathcal E} _{\mathrm {imag,offdiag}}:=\{E_{ij} : 1\le j < i \le \ell \}\) collects the imaginary zero-diagonal basis matrices, and

  • \({\mathcal E} _{{{\mathrm{{diag}}}}}:=\{E_{jj} : 1\le j \le \ell \}\) collects the real diagonal basis matrices.
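As a quick numerical check of (11), the sketch below (ours; Python/NumPy, with the 1-based helper name `E` an assumption, not from the paper) builds the basis for a given \(\ell \) and verifies orthonormality under the trace inner product.

```python
import numpy as np

def E(i, j, ell):
    """Basis matrix E_{ij} of (11) for H^ell; indices i, j are 1-based."""
    e = np.eye(ell)
    ei, ej = e[:, [i - 1]], e[:, [j - 1]]
    if i < j:
        return (ei @ ej.T + ej @ ei.T) / np.sqrt(2)       # real, zero diagonal
    if i > j:
        return 1j * (ej @ ei.T - ei @ ej.T) / np.sqrt(2)  # imaginary, zero diagonal
    return ej @ ej.T                                      # real, diagonal

ell = 3
basis = [E(i, j, ell) for i in range(1, ell + 1) for j in range(1, ell + 1)]
# Gram matrix under <X, Y> = trace(X^H Y) should be the identity of size ell^2
G = np.array([[np.trace(X.conj().T @ Y).real for Y in basis] for X in basis])
assert np.allclose(G, np.eye(ell * ell))
```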

We define a total ordering \(\prec \) on the tuples \((i,j)\) for \(i,j=1,\ldots ,\ell \), so that the matrices are ordered with \({\mathcal E} _{\mathrm {real,offdiag}}\prec {\mathcal E} _{\mathrm {imag,offdiag}}\prec {\mathcal E} _{{{\mathrm{{diag}}}}}\) in the element-wise sense [as stated in (6)]. For any \((i,j), (\tilde{i}, \tilde{j})\in \{1,\ldots ,\ell \}^2\), we say that \((i,j) \prec (\tilde{i}, \tilde{j})\) if one of the following holds.

  • Case 1: \(i<j\) (so that \(E_{ij}\) is a real matrix with zero diagonal).

    • \(i<j\) and \(\tilde{i} \ge \tilde{j}\).

    • \(i<j\) and \(\tilde{i}<\tilde{j}\), but \(\tilde{j}> j\).

    • \(i<j\) and \(\tilde{i}<\tilde{j} = j\), but \(\tilde{i}>i\).

  • Case 2: \(i>j\) (so that \(E_{ij}\) is an imaginary matrix with zero diagonal).

    In this case we must have \(\tilde{i}\ge \tilde{j}\).

    • \(j<i\) and \(\tilde{j} = \tilde{i}\).

    • \(j<i\) and \(\tilde{j}<\tilde{i}\), but \(\tilde{i}> i\).

    • \(j<i\) and \(\tilde{j}<\tilde{i} = i\), but \(\tilde{j}>j\).

  • Case 3: \(i=j\) (so that \(E_{jj}\) is a real diagonal matrix).

    In this case we must have \(\tilde{i}=\tilde{j}\).

    • \(j<\tilde{j}\).

For instance, when \(\ell =3\), our orthonormal basis of choice is given in the following order:

$$\begin{aligned} \begin{array}{lll} E_{12} = \frac{1}{\sqrt{2}} \begin{bmatrix} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}, & E_{13} = \frac{1}{\sqrt{2}} \begin{bmatrix} 0 & 0 & 1\\ 0 & 0 & 0\\ 1 & 0 & 0 \end{bmatrix}, & E_{23} = \frac{1}{\sqrt{2}} \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{bmatrix}, \\ E_{21} = \frac{1}{\sqrt{2}} \begin{bmatrix} 0 & \mathsf {i}\, & 0\\ -\mathsf {i}\, & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}, & E_{31} = \frac{1}{\sqrt{2}} \begin{bmatrix} 0 & 0 & \mathsf {i}\,\\ 0 & 0 & 0\\ -\mathsf {i}\, & 0 & 0 \end{bmatrix}, & E_{32} = \frac{1}{\sqrt{2}} \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & \mathsf {i}\,\\ 0 & -\mathsf {i}\, & 0 \end{bmatrix}, \\ E_{11} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{bmatrix}, & E_{22} = \begin{bmatrix} 0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{bmatrix}, & E_{33} = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1 \end{bmatrix}. \end{array} \end{aligned}$$

We also work with \(nm\times nm\) block matrices, with each block of size \(m\times m\). On the space \({\mathcal H} ^{nm}\), for any \(1\le i , j \le nm\), let

$$\begin{aligned} s := \left\lceil \frac{i}{m} \right\rceil , \quad t := \left\lceil \frac{j}{m} \right\rceil , \quad p := i-m(s-1), \quad \text {and}\quad q := j-m(t-1). \end{aligned}$$
(12)

Note that \(1\le s,t\le n\) are the block indices and \(1\le p,q\le m\) are the intra-block indices. For instance, consider the matrix \(e_ie_j^T\in {\mathcal H} ^{nm}\) (having only 1 nonzero entry at position \((i,j)\)). The nonzero entry is at the \((p,q)\)th entry in the \((s,t)\)th block (which is of size \(m\times m\)). The orthonormal matrices \(E_{ij}\) defined in (11) are related to the block indices \((s,t)\) as described in Table 7.

Table 7 The nonzero blocks of \(E_{ij}\) for any fixed \(i,j=1,\ldots ,nm\)
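The index arithmetic of (12) is mechanical; a one-function sketch (our helper name `block_indices`, not from the paper) makes it concrete:

```python
from math import ceil

def block_indices(i, j, m):
    """Map 1-based entry indices (i, j) of an nm x nm matrix to the
    block indices (s, t) and intra-block indices (p, q) of (12)."""
    s, t = ceil(i / m), ceil(j / m)
    return s, t, i - m * (s - 1), j - m * (t - 1)

# e.g. with m = 2: entry (1, 4) is the (1, 2) entry of block (1, 2)
print(block_indices(1, 4, 2))  # (1, 2, 1, 2)
```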

Unlike for \({\mathcal H} ^\ell \), we order the blocked orthonormal matrices \(E_{ij}\) in \({\mathcal H} ^{nm}\) via the following total ordering \(<\) on the set \(\mathcal {I}:=\{(i,j): 1\le i,j \le nm\}\). Let \(1\le i,j,\tilde{i}, \tilde{j}\le nm\) with \((i,j)\ne (\tilde{i}, \tilde{j})\). Then letting

$$\begin{aligned} \begin{array}{llll} s:=\left\lceil \frac{i}{m}\right\rceil , & t:=\left\lceil \frac{j}{m}\right\rceil , & p:=i-m(s-1), & q:=j-m(t-1), \\ \tilde{s}:=\left\lceil \frac{\tilde{i}}{m}\right\rceil , & \tilde{t}:=\left\lceil \frac{\tilde{j}}{m}\right\rceil , & \tilde{p}:=\tilde{i}-m(\tilde{s}-1), & \tilde{q}:=\tilde{j}-m(\tilde{t}-1), \end{array} \end{aligned}$$

the relation \((i,j)<(\tilde{i}, \tilde{j})\) holds if and only if one of the following holds.

  • \(\{p,q\}\ne \{\tilde{p},\tilde{q}\}\) and \(\left( \min \{p,q\},\ \max \{p,q\}\right) \prec \left( \min \{\tilde{p},\tilde{q}\},\ \max \{\tilde{p},\tilde{q}\}\right) \).

  • \(\{p,q\}=\{\tilde{p},\tilde{q}\}\) and \((s,t)\prec (\tilde{s}, \tilde{t})\).

  • \(\{p,q\}=\{\tilde{p},\tilde{q}\}\), \((s,t)=(\tilde{s}, \tilde{t})\) and \(p<q\). (Then \(\tilde{q} = p < q = \tilde{p}\).)

In other words, we order the 2-tuples \((i,j)\) in \(\mathcal {I}\) by grouping all those with the same intra-block index \((p,q)\) and block indices \(s<t\) together, for some \(p<q\), followed by those tuples with intra-block index \((q,p)\) and block indices \(s<t\). As an example, when \(m=2\) and \(n=3\), the following list gives the first few entries of \(\mathcal {I}\):

$$\begin{aligned} (1,4),\ (1,6),\ (3,6),\ (2,3),\ (2,5),\ (4,5),\ (1,2),\ (3,4),\ \ldots \end{aligned}$$

These first 2-tuples \((i,j)\) in \(\mathcal {I}\) have the corresponding 4-tuples \((s,t,p,q)\) defined as in (12), given as follows:

$$\begin{aligned}&(1,2,1,2),\ (1,3,1,2),\ (2,3,1,2),\ (1,2,2,1),\ (1,3,2,1),\ (2,3,2,1),\\&\quad (1,1,1,2),\ (2,2,1,2),\ \ldots \end{aligned}$$

Note that the first three 2-tuples have the same intra-block index \((p,q)=(1,2)\). The immediately following three 2-tuples have the intra-block index \((q,p)=(2,1)\), and so on.

1.1.2 Symmetric vectorization of Hermitian matrices

Using the ordered orthonormal basis of \({\mathcal H} ^s\) described in (11) in Sect. 1.1.1, we can define the corresponding symmetric vectorization of Hermitian matrices. Since any Hermitian matrix in \({\mathcal H} ^s\) can be expressed as a unique linear combination of the orthonormal basis matrices \(E_{ij}\), the map

$$\begin{aligned} {{\mathrm{{sHvec}}}}:{\mathcal H} ^s \rightarrow \mathbb {R}^{s^2}: H\mapsto v, \end{aligned}$$

where \(v\in \mathbb {R}^{s^2}\) is the unique vector such that \(H = \sum _{i,j=1}^{s} v_{ij} E_{ij}\), is well-defined. The map \({{\mathrm{{sHvec}}}}\) is a linear isometry (i.e., \({{\mathrm{{sHvec}}}}\) is a linear map and \(\Vert {{\mathrm{{sHvec}}}}(H)\Vert ^2={{\mathrm{{trace}}}}(H^2)\) for all \(H\in {\mathcal H} ^s\)), and its adjoint is given by

$$\begin{aligned} {{\mathrm{{sHMat}}}}: \mathbb {R}^{s^2}\rightarrow {\mathcal H} ^s: v\mapsto \sum _{i,j=1}^{s} v_{ij} E_{ij}, \end{aligned}$$
(13)

which is also the inverse map of \({{\mathrm{{sHvec}}}}\). For instance,

$$\begin{aligned} {{\mathrm{{sHvec}}}}\left( \begin{bmatrix}1&\quad \sqrt{2}-\mathsf {i}\,\\ \sqrt{2}+\mathsf {i}\,&\quad 3\end{bmatrix}\right) = \begin{bmatrix} 2&\quad -\sqrt{2}&\quad 1&\quad 3\end{bmatrix}^T. \end{aligned}$$
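A direct implementation of \({{\mathrm{{sHvec}}}}\) and \({{\mathrm{{sHMat}}}}\) reproduces this worked example. The sketch below is ours: it reuses the helper `E` from the sketch in Sect. 1.1.1, follows the stated group order (real off-diagonal, imaginary off-diagonal, then diagonal), and infers the within-group order from the \(\ell =3\) example above.

```python
import numpy as np

def ordered_pairs(ell):
    """Index pairs (1-based) in the order of Sect. 1.1.1: real off-diagonal
    (sorted by column, then row), imaginary off-diagonal, then diagonal."""
    real = sorted(((i, j) for i in range(1, ell + 1) for j in range(i + 1, ell + 1)),
                  key=lambda ij: (ij[1], ij[0]))
    imag = sorted((i, j) for j in range(1, ell + 1) for i in range(j + 1, ell + 1))
    diag = [(j, j) for j in range(1, ell + 1)]
    return real + imag + diag

def sHvec(H):
    """Coordinates of Hermitian H in the ordered basis (11); a linear isometry."""
    ell = H.shape[0]
    return np.array([np.trace(E(i, j, ell).conj().T @ H).real
                     for (i, j) in ordered_pairs(ell)])

def sHMat(v, ell):
    """Adjoint (and inverse) of sHvec, as in (13)."""
    return sum(c * E(i, j, ell) for c, (i, j) in zip(v, ordered_pairs(ell)))

H = np.array([[1.0, np.sqrt(2) - 1j], [np.sqrt(2) + 1j, 3.0]])
print(sHvec(H))                            # [ 2. -1.4142...  1.  3.]
assert np.allclose(sHMat(sHvec(H), 2), H)  # sHMat inverts sHvec
```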

1.1.3 Ordering the rows and columns in the matrix representation of \(\mathcal {L}\)

In the following, we compute matrix representations \(L_A\) and \(L_T\) of the linear maps \(\mathcal {L}_A:{\mathcal H} ^{nm}\rightarrow \otimes _{j=1}^k{\mathcal H} ^m\) and \(\mathcal {L}_T:{\mathcal H} ^{nm}\rightarrow {\mathcal H} ^n\), respectively. The matrix representation of \(\mathcal {L}=(\mathcal {L}_A(\cdot ), \mathcal {L}_T(\cdot ))\) is then chosen to be \(L = \begin{bmatrix} L_A \\ L_T \end{bmatrix}\), with \(L_A\in \mathbb {R}^{km^2\times n^2m^2}\) and \(L_T\in \mathbb {R}^{n^2\times n^2m^2}\).

Any matrix representation for the linear map \(\mathcal {L}_A\) (resp. \(\mathcal {L}_T\)) depends on the choice of the ordered orthonormal bases for \({\mathcal H} ^{nm}\) and for \({\mathcal H} ^m\times \cdots \times {\mathcal H} ^m\) (resp. for \({\mathcal H} ^{nm}\) and for \({\mathcal H} ^n\)). For \({\mathcal H} ^{nm}\), we use the ordered orthonormal basis defined in Sect. 1.1.1. For \({\mathcal H} ^m\times \cdots \times {\mathcal H} ^m\), we use the orthonormal basis

$$\begin{aligned} \begin{array}{llll} (E_{12}, 0, \ldots ,0), & (0,E_{12},\ldots ,0), & \ldots , & (0,0,\ldots ,E_{12}), \\ (E_{21}, 0, \ldots ,0), & (0,E_{21},\ldots ,0), & \ldots , & (0,0,\ldots ,E_{21}), \\ (E_{13}, 0, \ldots ,0), & (0,E_{13},\ldots ,0), & \ldots , & (0,0,\ldots ,E_{13}), \\ (E_{31}, 0, \ldots ,0), & (0,E_{31},\ldots ,0), & \ldots , & (0,0,\ldots ,E_{31}),\ \ldots ,\ (0,0,\ldots ,E_{mm}). \end{array} \end{aligned}$$
(14)

We first construct a matrix representation \(L_A\) of \(\mathcal {L}_A\) by rows. Recall that

$$\begin{aligned} \mathcal {L}_A(P) = \left( \sum _{s,t=1}^n (A_1)_{st} P_{st}, \sum _{s,t=1}^n (A_2)_{st} P_{st}, \ldots , \sum _{s,t=1}^n (A_k)_{st} P_{st}\right) \end{aligned}$$

for any \(P\in {\mathcal H} ^{nm}\), so the rows of \(\mathcal {L}_A\) are determined by the linear functionals

$$\begin{aligned} \begin{cases} \mathbf {L}_{{\ell ,p,q,{\mathrm{Re}\,}}}(P):=\sqrt{2}\, {\mathrm{Re}\,}\left( \sum _{s,t=1}^n (A_\ell )_{st} P_{st} \right) _{pq}, \\ \mathbf {L}_{{\ell ,p,q,{\mathrm{Im}\,}}}(P):=\sqrt{2}\, {\mathrm{Im}\,}\left( \sum _{s,t=1}^n (A_\ell )_{st} P_{st} \right) _{pq}, \\ \mathbf {D}_{{\ell ,q}}(P):= \left( \sum _{s,t=1}^n (A_\ell )_{st} P_{st} \right) _{qq}, \end{cases} \end{aligned}$$

for some \(\ell \in \{1,\ldots ,k\}\) and \(p<q\in \{1,\ldots ,m\}\). Defining vectors \(\alpha _{{\ell ,p,q,{\mathrm{Re}\,}}},\alpha _{{\ell ,p,q,{\mathrm{Im}\,}}},\beta _{{\ell ,q}}\in \mathbb {R}^{(nm)^2}\) by

$$\begin{aligned} \begin{cases} \mathbf {L}_{{\ell ,p,q,{\mathrm{Re}\,}}}(P)=(\alpha _{{\ell ,p,q,{\mathrm{Re}\,}}})^T{{\mathrm{{sHvec}}}}(P), \\ \mathbf {L}_{{\ell ,p,q,{\mathrm{Im}\,}}}(P)=(\alpha _{{\ell ,p,q,{\mathrm{Im}\,}}})^T{{\mathrm{{sHvec}}}}(P), \\ \mathbf {D}_{{\ell ,q}}(P)=(\beta _{{\ell ,q}})^T{{\mathrm{{sHvec}}}}(P), \end{cases} \end{aligned}$$

for all \(\ell \in \{1,\ldots ,k\}\), \(p<q\in \{1,\ldots ,m\}\), we get that

$$\begin{aligned} L_A = \begin{bmatrix} \alpha _{{1,1,2,{\mathrm{Re}\,}}}&\cdots&\alpha _{{k,1,2,{\mathrm{Re}\,}}}&\alpha _{{1,1,2,{\mathrm{Im}\,}}}&\cdots&\alpha _{{k,m-1,m,{\mathrm{Im}\,}}}&\beta _{{1,1}}&\cdots&\beta _{{k,1}}&\beta _{{1,2}}&\cdots&\beta _{{k,m}} \end{bmatrix}^T. \end{aligned}$$
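Before deriving the rows explicitly, it may help to see \(\mathcal {L}_A\) evaluated directly on a blocked matrix. The sketch below is ours; the name `L_A_map`, the list `A_list`, and the reshape-based block slicing are illustrative assumptions consistent with the display above.

```python
import numpy as np

def L_A_map(P, A_list, m):
    """Evaluate L_A(P) = (sum_{s,t} (A_l)_{st} P_{st})_{l=1..k} for Hermitian
    A_l in H^n and P in H^{nm}, with P_st the m x m blocks of P."""
    n = A_list[0].shape[0]
    blocks = P.reshape(n, m, n, m)  # blocks[s, :, t, :] is the block P_{s+1,t+1}
    return [np.einsum('st,sptq->pq', A, blocks) for A in A_list]
```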

Now we proceed to find the vectors \(\alpha _{{\ell ,p,q,{\mathrm{Re}\,}}},\alpha _{{\ell ,p,q,{\mathrm{Im}\,}}},\beta _{{\ell ,q}}\).

1.1.4 Computing the rows of \(L_A\)

Fix any \(\ell \in \{1,\ldots ,k\}\). For any \(i,j\in \{1,\ldots ,nm\}\), let \((s,t,p,q)\) be defined as in (12).

If \(s<t\), then using Table 7 we get

$$\begin{aligned} \sum _{\tilde{s},\tilde{t}=1}^n(A_\ell )_{\tilde{s} \tilde{t}} (E_{ij})_{\tilde{s} \tilde{t}}&=\ \frac{1}{\sqrt{2}} \left( (A_{\ell })_{st} e_p e_q^T + (A_{\ell })_{ts} e_q e_p^T\right) \\&=\ \frac{1}{\sqrt{2}} \left( {\mathrm{Re}\,}(A_{\ell })_{st} \left( e_p e_q^T + e_q e_p^T\right) + \mathsf {i}\,{\mathrm{Im}\,}(A_{\ell })_{st} \left( e_p e_q^T - e_q e_p^T\right) \right) \end{aligned}$$

If \(s>t\), then

$$\begin{aligned} \sum _{\tilde{s},\tilde{t}=1}^n(A_\ell )_{\tilde{s} \tilde{t}} (E_{ij})_{\tilde{s} \tilde{t}}&=\ \frac{\mathsf {i}\,}{\sqrt{2}} \left( -(A_{\ell })_{st} e_p e_q^T + (A_{\ell })_{ts} e_q e_p^T \right) \\&=\ \frac{1}{\sqrt{2}} \left( -{\mathrm{Im}\,}(A_{\ell })_{ts} (e_p e_q^T + e_q e_p^T) - \mathsf {i}\,{\mathrm{Re}\,}(A_{\ell })_{ts} (e_p e_q^T - e_q e_p^T) \right) \end{aligned}$$

If \(s=t\), then

$$\begin{aligned} \sum _{\tilde{s},\tilde{t}=1}^n(A_\ell )_{\tilde{s} \tilde{t}} (E_{ij})_{\tilde{s} \tilde{t}} =&\ (A_{\ell })_{ss} E_{pq}. \end{aligned}$$

Fix any \(\ell =1,\ldots ,k\) and \(\hat{p}<\hat{q}\) from \(\{1,\ldots ,m\}\). Then for all \(i,j\in \{1,\ldots ,nm\}\), defining \((s,t,p,q)\) as in (12), we have

$$\begin{aligned} \mathbf {L}_{{\ell ,\hat{p},\hat{q},{\mathrm{Re}\,}}}(E_{ij}) = \begin{cases} {\mathrm{Re}\,}(A_{\ell })_{st} &\text {if } s<t \text { and } \{p,q\}=\{\hat{p},\hat{q}\}, \\ -{\mathrm{Im}\,}(A_{\ell })_{ts} &\text {if } s>t \text { and } \{p,q\}=\{\hat{p},\hat{q}\}, \\ (A_{\ell })_{ss} &\text {if } s=t,\ p<q \text { and } (p,q)=(\hat{p},\hat{q}), \\ 0 &\text {otherwise}, \end{cases} \end{aligned}$$

and

$$\begin{aligned} \mathbf {L}_{{\ell ,\hat{p},\hat{q},{\mathrm{Im}\,}}}(E_{ij}) = \begin{cases} {\mathrm{Im}\,}(A_{\ell })_{st} &\text {if } s<t \text { and } (p,q)=(\hat{p},\hat{q}), \\ -{\mathrm{Im}\,}(A_{\ell })_{st} &\text {if } s<t \text { and } (p,q)=(\hat{q},\hat{p}), \\ -{\mathrm{Re}\,}(A_{\ell })_{ts} &\text {if } s>t \text { and } (p,q)=(\hat{p},\hat{q}), \\ {\mathrm{Re}\,}(A_{\ell })_{ts} &\text {if } s>t \text { and } (p,q)=(\hat{q},\hat{p}), \\ (A_{\ell })_{ss} &\text {if } s=t,\ p>q \text { and } (p,q)=(\hat{q},\hat{p}), \\ 0 &\text {otherwise}, \end{cases} \end{aligned}$$

and

$$\begin{aligned} \mathbf {D}_{{\ell ,\hat{q}}}(E_{ij}) = \begin{cases} \sqrt{2}\, {\mathrm{Re}\,}(A_{\ell })_{st} &\text {if } s<t \text { and } p=q=\hat{q}, \\ -\sqrt{2}\, {\mathrm{Im}\,}(A_{\ell })_{st} &\text {if } s>t \text { and } p=q=\hat{q}, \\ (A_{\ell })_{ss} &\text {if } s=t \text { and } p=q=\hat{q}, \\ 0 &\text {otherwise}. \end{cases} \end{aligned}$$

Therefore, for any \(\hat{p}<\hat{q}\) in \(\{1,\ldots ,m\}\) and any \(i,j=1,\ldots , nm\), using \((s,t,p,q)\) defined in (12) and the definitions of \(M_{{\mathrm{Re}\,}}, M_{{\mathrm{Im}\,}}, M_D\) on Page 6,

$$\begin{aligned} \left( \alpha _{{\ell ,\hat{p},\hat{q},{\mathrm{Re}\,}}}\right) _{i,j} = \alpha _{{\ell ,\hat{p},\hat{q},{\mathrm{Re}\,}}}^T {{\mathrm{{sHvec}}}}(E_{ij}) = \mathbf {L}_{{\ell ,\hat{p},\hat{q},{\mathrm{Re}\,}}}(E_{ij}) = \begin{cases} \frac{1}{\sqrt{2}} (M_{{\mathrm{Re}\,}})_{\ell ,st} &\text {if } s<t \text { and } \{p,q\}=\{\hat{p},\hat{q}\}, \\ -\frac{1}{\sqrt{2}} (M_{{\mathrm{Im}\,}})_{\ell ,ts} &\text {if } s>t \text { and } \{p,q\}=\{\hat{p},\hat{q}\}, \\ (M_D)_{\ell ,s} &\text {if } s=t,\ p<q \text { and } (p,q)=(\hat{p},\hat{q}), \\ 0 &\text {otherwise}, \end{cases} \end{aligned}$$

implying that \(\alpha _{{\ell ,\hat{p},\hat{q},{\mathrm{Re}\,}}}^T\) is one of the first \(k\) rows of the matrix \(\begin{bmatrix} I_{t(m-1)}\otimes N_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}&\ 0 \end{bmatrix}\). (Here note that the number of pairs \((\hat{p},\hat{q})\) with \(1\le \hat{p}< \hat{q} \le m\) is \(t(m-1)=\frac{1}{2}m(m-1)\). The zero block corresponds to the index pairs \((i,j)\) with \(p=q\), where \((s,t,p,q)\) are the block indices defined in (12).) Similarly,

$$\begin{aligned} \left( \alpha _{{\ell ,\hat{p},\hat{q},{\mathrm{Im}\,}}}\right) _{i,j} = \alpha _{{\ell ,\hat{p},\hat{q},{\mathrm{Im}\,}}}^T {{\mathrm{{sHvec}}}}(E_{ij}) = \mathbf {L}_{{\ell ,\hat{p},\hat{q},{\mathrm{Im}\,}}}(E_{ij}) = \begin{cases} \frac{1}{\sqrt{2}} (M_{{\mathrm{Im}\,}})_{\ell ,st} &\text {if } s<t \text { and } (p,q)=(\hat{p},\hat{q}), \\ -\frac{1}{\sqrt{2}} (M_{{\mathrm{Im}\,}})_{\ell ,st} &\text {if } s<t \text { and } (p,q)=(\hat{q},\hat{p}), \\ -\frac{1}{\sqrt{2}} (M_{{\mathrm{Re}\,}})_{\ell ,ts} &\text {if } s>t \text { and } (p,q)=(\hat{p},\hat{q}), \\ \frac{1}{\sqrt{2}} (M_{{\mathrm{Re}\,}})_{\ell ,ts} &\text {if } s>t \text { and } (p,q)=(\hat{q},\hat{p}), \\ (M_D)_{\ell ,s} &\text {if } s=t,\ p>q \text { and } (p,q)=(\hat{q},\hat{p}), \\ 0 &\text {otherwise}, \end{cases} \end{aligned}$$

so \(\alpha _{{\ell ,\hat{p},\hat{q},{\mathrm{Im}\,}}}^T\) is one of the last \(k\) rows of the matrix \(\begin{bmatrix} I_{t(m-1)}\otimes N_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}&\ 0 \end{bmatrix}\). Finally,

$$\begin{aligned} (\beta _{{\ell ,\hat{q}}})_{i,j} = \mathbf {D}_{{\ell ,\hat{q}}}(E_{ij}) = \begin{cases} (M_{{\mathrm{Re}\,}})_{\ell ,st} &\text {if } s<t \text { and } p=q=\hat{q}, \\ -(M_{{\mathrm{Im}\,}})_{\ell ,st} &\text {if } s>t \text { and } p=q=\hat{q}, \\ (M_D)_{\ell ,s} &\text {if } s=t \text { and } p=q=\hat{q}, \\ 0 &\text {otherwise}, \end{cases} \end{aligned}$$

so \(\beta _{{\ell ,\hat{q}}}^T\) is one of the rows of the matrix \(\begin{bmatrix} 0&\ I_{m}\otimes M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}\end{bmatrix}\).

Hence, a matrix representation of \(\mathcal {L}_A\) is given by

$$\begin{aligned} \begin{bmatrix} I_{t(m-1)}\otimes N_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}&\quad 0 \\ 0&\quad I_m \otimes M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}\end{bmatrix}. \end{aligned}$$

1.1.5 Computing \(L_T\)

Recall the linear map \(\mathcal {L}_T:{\mathcal H} ^{nm}\rightarrow {\mathcal H} ^n : P\mapsto [{{\mathrm{{trace}}}}(P_{st})]_{s,t=1,\ldots ,n}\), which defines the second component of \(\mathcal {L}\). We compute a matrix representation \(L_T\) of \(\mathcal {L}_T\) by columns, i.e., by considering \(\mathcal {L}_T(E_{ij})\in {\mathcal H} ^n\) for \(i,j=1,\ldots ,nm\). Defining \((s,t,p,q)\) as in (12), we have

$$\begin{aligned} \mathcal {L}_T(E_{ij}) = \begin{cases} E_{st} &\quad \text {if } p=q, \\ 0 &\quad \text {otherwise.} \end{cases} \end{aligned}$$

Hence the \((i,j)\)th column of \(L_T\) is given by

$$\begin{aligned} {{\mathrm{{sHvec}}}}(\mathcal {L}_T(E_{ij})) = \begin{cases} {{\mathrm{{sHvec}}}}(E_{st}) &\quad \text {if } p=q, \\ 0 &\quad \text {otherwise.} \end{cases} \end{aligned}$$

This implies that \(L_T=\begin{bmatrix} 0&\ e_m^T\otimes I_{n^2} \end{bmatrix}\), where the zero block corresponds to the \((i,j)\) pairs with \(p\ne q\), and each \(e_m^T\) factor in the Kronecker product corresponds to the \((i,j)\) pairs sharing the same block indices \((s,t)\) (for each fixed \((s,t)\), there are \(m\) pairs \((i,j)\), namely those with \(p=q\), that have nonzero intra-block trace).
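The map \(\mathcal {L}_T\) itself is just a blockwise trace. The sketch below is ours (reusing the basis helper `E` from the sketch in Sect. 1.1.1); it evaluates \(\mathcal {L}_T\) directly and checks the displayed identity \(\mathcal {L}_T(E_{ij})=E_{st}\) when \(p=q\).

```python
import numpy as np

def L_T_map(P, n, m):
    """Evaluate L_T(P) = [trace(P_st)]_{s,t=1..n} for P in H^{nm},
    where P_st is the m x m block of P in block position (s, t)."""
    return np.array([[np.trace(P[s * m:(s + 1) * m, t * m:(t + 1) * m])
                      for t in range(n)] for s in range(n)])

n, m = 3, 2
# (i, j) = (1, 3) has (s, t, p, q) = (1, 2, 1, 1), so p = q and L_T(E_13) = E_12
assert np.allclose(L_T_map(E(1, 3, n * m), n, m), E(1, 2, n))
# (i, j) = (1, 4) has (p, q) = (1, 2), so the blockwise traces all vanish
assert np.allclose(L_T_map(E(1, 4, n * m), n, m), 0)
```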

1.1.6 Alternative column orderings, eliminating redundant rows

Combining the results from the previous two sections, we arrive at a matrix representation of \(\mathcal {L}\):

$$\begin{aligned} L= \begin{bmatrix} L_A\\L_T \end{bmatrix} = \begin{bmatrix} I_{t(m-1)} \otimes N_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}&\quad 0 \\ 0&\quad I_m\otimes M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}\\ 0&\quad e_m^T\otimes I_{n^2} \end{bmatrix}. \end{aligned}$$
(15)

If the linear map \(\mathcal {L}_A\) contains the unital constraints, then some rows of the second block row in (15) are linearly dependent on the other rows. Hence, in the final matrix representation that we use, we remove those rows and replace \(L\) in (15) by the following matrix:

$$\begin{aligned} L=&\ \begin{bmatrix} I_{t(m-1)} \otimes N_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}&\quad 0 \\ 0&\begin{bmatrix} I_{m-1}\otimes M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}&\quad 0 \end{bmatrix} \\ 0&\quad e_m^T\otimes I_{n^2} \end{bmatrix} \\ =&\ \begin{bmatrix} I_{t(m-1)} \otimes N_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}&\quad 0&\quad 0 \\ 0&\quad I_{m-1}\otimes M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}&\quad 0 \\ 0&\quad e_{m-1}^T\otimes I_{n^2}&\quad I_{n^2} \end{bmatrix}. \end{aligned}$$

Note that we could use an alternative ordering for the off-diagonal entries inside the blocks. While this would not change the ordering of the columns in the second block column in (15) (which correspond to the diagonal entries inside the blocks), it can affect the column ordering of \(N_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}\) (resulting in, e.g., \(N_\mathrm{final}\)).

1.1.7 Pseudoinverse of \(L\)

Using the block diagonal structure of \(L\) and the fact that

$$\begin{aligned} \begin{bmatrix} I_{m-1}\otimes M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}&\quad 0 \\ e_{m-1}^T\otimes I_{n^2}&\quad I_{n^2} \end{bmatrix}^\dagger = \begin{bmatrix} I_{m-1}\otimes M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}^\dagger&\quad e_{m-1}\otimes (M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D})_\mathrm{null} \\ -e_{m-1}^T\otimes M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}^\dagger&\quad I_{n^2} - (m-1) (M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D})_\mathrm{null} \end{bmatrix} \end{aligned}$$
(16)

(which can be easily verified to be the pseudoinverse), it is immediate that

$$\begin{aligned} L^\dagger =&\ \begin{bmatrix} I_{t(m-1)} \otimes N_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}^\dagger&\quad 0 \\ 0&\begin{bmatrix} I_{m-1}\otimes M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}&\quad 0 \\ e_{m-1}^T\otimes I_{n^2}&\quad I_{n^2} \end{bmatrix}^\dagger \end{bmatrix} \\ =&\ \begin{bmatrix} I_{t(m-1)} \otimes N_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}^\dagger&\quad 0&\quad 0 \\ 0&\quad I_{m-1}\otimes M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}^\dagger&\quad e_{m-1}\otimes (M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D})_\mathrm{null} \\ 0&\quad -e_{m-1}^T\otimes M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D}^\dagger&\quad I_{n^2} - (m-1) (M_{{\mathrm{Re}\,}{\mathrm{Im}\,}D})_\mathrm{null} \end{bmatrix}. \end{aligned}$$
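The "easily verified" claim for (16) amounts to checking the four Moore–Penrose conditions. A generic numerical checker (our sketch; any concrete instance of the blocks above could be passed in) is:

```python
import numpy as np

def is_pseudoinverse(B, Bd, tol=1e-10):
    """Check the four Moore-Penrose conditions for a candidate pseudoinverse Bd."""
    return (np.allclose(B @ Bd @ B, B, atol=tol) and              # B B+ B  = B
            np.allclose(Bd @ B @ Bd, Bd, atol=tol) and            # B+ B B+ = B+
            np.allclose((B @ Bd).conj().T, B @ Bd, atol=tol) and  # B B+ Hermitian
            np.allclose((Bd @ B).conj().T, Bd @ B, atol=tol))     # B+ B Hermitian

B = np.random.randn(8, 5) @ np.random.randn(5, 12)  # a rank-deficient test matrix
assert is_pseudoinverse(B, np.linalg.pinv(B))
```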


Cite this article

Drusvyatskiy, D., Li, CK., Pelejo, D.C. et al. Projection methods for quantum channel construction. Quantum Inf Process 14, 3075–3096 (2015). https://doi.org/10.1007/s11128-015-1024-y
