1 Introduction

Quantum channels model the evolution of quantum systems. Mathematically, they correspond to completely positive and trace preserving linear maps between matrix algebras. One important scenario in the rapidly developing field of quantum technologies is the distribution of quantum entanglement: which channels can be used to transmit a quantum particle entangled with another system in such a way that some entanglement in the total bipartite system is preserved? Quantum channels \(\Phi \) which are useless for this task are dubbed entanglement breaking [18]: the local application of \(\Phi \) on any subsystem of a bipartite quantum state results in a separable (non-entangled) state. For qubit channels, this property is equivalent to the simpler PPT property, which amounts to saying that both \(\Phi \) and \(\Phi \circ \top \) are completely positive, where \(\top \) denotes the transposition map. This equivalence ceases to hold for higher-dimensional qudit channels. However, the PPT-squared conjecture posits that the composition of two arbitrary PPT linear maps must be entanglement breaking [8, 10]. In particular, for a PPT channel, its composition with itself must be entanglement breaking.

Conjecture 1.1

The composition of two arbitrary PPT linear maps is entanglement breaking.

This conjecture is relevant for quantum information theory because it imposes constraints on the type of resources that can be distributed using quantum repeaters [3, 6]. Since it was first proposed in 2012 by M. Christandl, the conjecture has garnered a lot of attention. It has been shown that the conjecture holds asymptotically: the distance between the iterates of a unital (or trace preserving) PPT map and the set of entanglement breaking maps tends to zero as the number of iterations grows [21]. In [26], the authors proved that any unital PPT map becomes entanglement breaking after finitely many iterations of composition with itself; for other algebraic approaches, see [14, 17, 22]. As noted above, the conjecture trivially holds for qubit maps. For the next dimension \(d=3\), the conjecture has been proven independently in [10, 12]. For higher dimensions, however, the validity of the conjecture remains open [13, 20]. In infinite-dimensional systems, the set of Gaussian maps has been shown to satisfy the conjecture [10].

The main result of the current paper is the proof of this conjecture in the case when the two PPT maps are covariant with respect to the action of the diagonal unitary group. More precisely, we consider the class of linear maps which are (conjugate-)invariant under the action of diagonal unitaries (see Definition 2.2): \(\forall X \in \mathcal {M}_{d}(\mathbb {C}) \text { and } U \in \mathcal {DU}_d\), we have either

$$\Phi (UXU^*) = U^*\Phi (X)U \qquad \text { or } \qquad \Psi (UXU^*) = U\Psi (X)U^*.$$

These maps, dubbed Diagonal Unitary Covariant (DUC) and Conjugate Diagonal Unitary Covariant (CDUC), respectively, were studied at length in [28]. A variety of physically relevant classes of quantum channels are of this kind, like the depolarizing and transpose depolarizing channels, amplitude damping channels, Schur multipliers, etc. Several important properties of (C)DUC maps, such as complete positivity, complete copositivity, and the entanglement breaking property, were analyzed there in detail. In this work, we focus on the PPT\(^2\) conjecture for these maps, proving a stronger version of the conjecture. The main technical tool in our proof is the characterization of a subclass of entanglement breaking covariant maps, which is related to the matrix-theoretic concept of factor width [4] for the cone of pairwise completely positive matrices [19], see Theorems 3.9 and 3.10. The following is an informal statement of our main result, Theorem 4.5:

Theorem 1.2

The composition of two arbitrary (conjugate) diagonal unitary covariant PPT maps corresponds to a pairwise completely positive matrix pair with factor width two; in particular, it is entanglement breaking.

The paper is organized as follows. In Sect. 2, we provide several equivalent statements of the PPT\(^2\) conjecture and also review some useful facts about diagonal unitary covariant maps; all of them are proven in [28]. In Sect. 3, we introduce the notion of factor width for pairwise and triplewise completely positive matrices. These tools are used in Sect. 4 to prove the PPT\(^2\) conjecture for (C)DUC maps. Finally, in Sect. 5 we discuss some open problems and future directions for research.

2 Review of Diagonal Unitary Covariant Maps

In this section, we will briefly review several key aspects from the theory of diagonal unitary and orthogonal covariant maps between matrix algebras. For a more comprehensive discussion and proofs of the results stated here, the readers are referred to our previous work [28, Sections 6–9].

Let us first quickly set up the basic notation. We use Dirac’s bra-ket notation for vectors \(v\in \mathbb {C}^{d}\) and their duals \(v^*\in (\mathbb {C}^{d})^*\) as kets \(|v\rangle \) and bras \(\langle v|\), respectively. For \(|v\rangle ,|w\rangle \in \mathbb {C}^{d}\), the rank one matrix \(vw^*\) is then represented as an outer-product \(|v\rangle \langle w| \). \(\{ |i\rangle \}_{i=1}^d\) denotes the standard basis of \(\mathbb {C}^{d}\). We collect all \(d\times d\) complex matrices into the set \(\mathcal {M}_{d}(\mathbb {C})\). Within \(\mathcal {M}_{d}(\mathbb {C})\), the cones of entrywise non-negative and (hermitian) positive semi-definite matrices are denoted by \(\mathsf {EWP}_d\) and \(\mathsf {PSD}_d\), respectively. We denote the adjoint (conjugate transpose) of \(A\in \mathcal {M}_{d}(\mathbb {C})\) by \(A^*\). Sets of pairs and triples of matrices in \(\mathcal {M}_{d}(\mathbb {C})\) with equal diagonals are represented as follows

$$\begin{aligned} \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}&{:}{=}\{ (A,B) \in \mathcal {M}_{d}(\mathbb {C})\times \mathcal {M}_{d}(\mathbb {C}) \, \big | \, {\text {diag}}(A)={\text {diag}}(B) \} \end{aligned}$$
(1)
$$\begin{aligned} \mathcal {M}_{d}(\mathbb {C})^{\times 3}_{\mathbb {C}^{d}}&{:}{=}\{ (A,B,C) \in \mathcal {M}_{d}(\mathbb {C})\times \mathcal {M}_{d}(\mathbb {C})\times \mathcal {M}_{d}(\mathbb {C}) \, \big | \, {\text {diag}}(A)={\text {diag}}(B)={\text {diag}}(C)\} \end{aligned}$$
(2)

The set of all linear maps \(\Phi : \mathcal {M}_{d}(\mathbb {C})\rightarrow \mathcal {M}_{d}(\mathbb {C})\) is denoted by \(\mathcal {T}_{d}(\mathbb {C})\). A map \(\Phi \in \mathcal {T}_{d}(\mathbb {C})\) is called positive if \(\Phi (X)\in \mathsf {PSD}_d\) for all \(X\in \mathsf {PSD}_d\). If \({\text {id}}\otimes \Phi :\mathcal {M}_{n}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\rightarrow \mathcal {M}_{n}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\) is positive for all \(n\in \mathbb {N}\), where \({\text {id}}\in \mathcal {T}_{n}(\mathbb {C})\) is the identity map, then \(\Phi \in \mathcal {T}_{d}(\mathbb {C})\) is called completely positive (CP). Every CP map in \(\mathcal {T}_{d}(\mathbb {C})\) admits a (non-unique) Kraus representation \(\Phi (X) = \sum _{j=1}^k \mathfrak {A}_j X \mathfrak {A}_j^*\), where \(\{\mathfrak {A}_j\}_{j=1}^k \subseteq \mathcal {M}_{d}(\mathbb {C})\) and \(k\le d^2\). If \(\Phi \circ \top \) is completely positive, where \(\top \) acts on \(\mathcal {M}_{d}(\mathbb {C})\) as the matrix transposition (with respect to the standard basis in \(\mathbb {C}^{d}\)), then \(\Phi \in \mathcal {T}_{d}(\mathbb {C})\) is called completely copositive (coCP). We say that \(\Phi \in \mathcal {T}_{d}(\mathbb {C})\) is positive partial transpose (PPT) if it is both CP and coCP. If, for all positive semi-definite \(X\in \mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\), \([{\text {id}}\otimes \Phi ] (X)\) lies in the convex hull of product matrices \(A\otimes B\) with \(A,B\in \mathsf {PSD}_d\), i.e., \([{\text {id}}\otimes \Phi ] (X)\) is separable, we say that \(\Phi \in \mathcal {T}_{d}(\mathbb {C})\) is entanglement-breaking. By quantum channels, we understand completely positive maps in \(\mathcal {T}_{d}(\mathbb {C})\) which also preserve trace, i.e., \({\text {Tr}}\Phi (X)={\text {Tr}}X, \,\, \forall X\in \mathcal {M}_{d}(\mathbb {C})\).
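The following minimal NumPy sketch (not part of the original text; the helper names choi and is_psd are ours) illustrates these notions numerically: a map is CP iff its Choi matrix, introduced in the proof of Proposition 2.1 below, is positive semi-definite, and coCP iff the Choi matrix of its composition with the transposition is. The identity map is CP but not coCP (hence not PPT), while the completely depolarizing map is both.

```python
import numpy as np

def choi(phi, d):
    """Choi matrix J(phi) = sum_{ij} phi(|i><j|) (x) |i><j| (output factor first).
    phi is CP iff J(phi) is PSD, and coCP iff J(phi o transposition) is PSD."""
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d)); E[i, j] = 1.0
            J += np.kron(phi(E), E)
    return J

def is_psd(M, tol=1e-9):
    return np.linalg.eigvalsh((M + M.conj().T) / 2).min() >= -tol

d = 3
identity = lambda X: X                          # CP, but not coCP
depol = lambda X: np.trace(X) * np.eye(d) / d   # completely depolarizing: CP and coCP
for phi in (identity, depol):
    print(is_psd(choi(phi, d)), is_psd(choi(lambda X: phi(X.T), d)))
# prints "True False" for the identity map and "True True" for the depolarizing map
```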

Now, in order to reformulate the PPT\(^2\) conjecture in the language of bipartite matrices, we need to introduce the notion of locality for CP maps between tensor products of matrix algebras. For our purposes, it suffices to look at the tripartite setting. We say that a CP linear map \(\Phi : \mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C}) \rightarrow \mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\) is separable if it can be expressed as a finite sum \(\Phi = \sum _{i\in I} \Phi ^1_i \otimes \Phi ^2_i \otimes \Phi ^3_i\), where \(\Phi ^j_i\in \mathcal {T}_{d}(\mathbb {C})\) are CP for all \(i\in I\) and \(j\in \{1,2,3\}\). Using the Kraus representation of CP maps in \(\mathcal {T}_{d}(\mathbb {C})\), it is easy to see that any such separable operation itself admits a Kraus representation of the form \(\Phi (X) = \sum _{j\in J} (\mathfrak {A}_j \otimes \mathfrak {B}_j \otimes \mathfrak {C}_j) X (\mathfrak {A}_j \otimes \mathfrak {B}_j \otimes \mathfrak {C}_j)^*\), where \(\mathfrak {A}_j,\mathfrak {B}_j,\mathfrak {C}_j \in \mathcal {M}_{d}(\mathbb {C})\) for all \(j \in J\). Also recall that a bipartite matrix \(X\in \mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\) is said to be positive under partial transpose (PPT) if both X and \(X^\Gamma = [{\text {id}}\otimes \top ](X)\) are positive semi-definite. Equipped with the appropriate terminology, we are now prepared to state several equivalent formulations of the PPT\(^2\) conjecture.

Proposition 2.1

The following statements are equivalent:

  1. (1)

    \(\forall \) PPT linear maps \(\Phi _1,\Phi _2\in \mathcal {T}_{d}(\mathbb {C})\): \( \Phi _1\circ \Phi _2 \text { is entanglement breaking}.\)

  2. (2)

    \(\forall \) PPT bipartite matrices \(\rho , \sigma \in \mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\):

    $$\begin{aligned} {\text {Tr}}_{2,3} \{ (\rho \otimes \sigma ) (\mathbb {I} \otimes |e\rangle \langle e| \otimes \mathbb {I}) \} \text { is separable}. \end{aligned}$$
  3. (3)

    \(\forall \) PPT bipartite matrices \(\rho , \sigma \in \mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\), \(\forall \) tripartite separable CP linear maps \(\Lambda : \mathcal {M}_{d}(\mathbb {C})\otimes [\mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})]\otimes \mathcal {M}_{d}(\mathbb {C}) \rightarrow \mathcal {M}_{d}(\mathbb {C})\otimes [\mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})]\otimes \mathcal {M}_{d}(\mathbb {C})\):

    $$\begin{aligned} {\text {Tr}}_{2,3} \{ \Lambda (\rho \otimes \sigma ) \} \text { is separable}. \end{aligned}$$

Here, \(|e\rangle = \sum _{i=1}^d |i\rangle \otimes |i\rangle \in \mathbb {C}^d \otimes \mathbb {C}^d\) is a maximally entangled vector, \(\mathbb {I}\in \mathcal {M}_{d}(\mathbb {C})\) is the identity matrix, and \({\text {Tr}}_{2,3}\{\cdot \}\) denotes partial trace over the middle two tensor factors.

Proof

The equivalence of (1) and (2) can be readily established by using the Choi-Jamiołkowski isomorphism \(J:\mathcal {T}_{d}(\mathbb {C})\rightarrow \mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\) defined as

$$\begin{aligned} J(\Phi ) = \sum _{i,j=1}^d \Phi (|i\rangle \langle j| ) \otimes |i\rangle \langle j| , \end{aligned}$$

which identifies PPT and entanglement breaking linear maps in \(\mathcal {T}_{d}(\mathbb {C})\) with PPT and separable matrices in \(\mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\), respectively. Since \(J(\Phi _1 \circ \Phi _2) = {\text {Tr}}_{2,3} \{ (J(\Phi _1)\otimes J(\Phi _2)) (\mathbb {I} \otimes |e\rangle \langle e| \otimes \mathbb {I}) \}\), the equivalence of (1) and (2) becomes evident.

Let us now assume that (2) holds. Consider an arbitrary separable CP map \(\Lambda \) as given in (3) with Kraus operators \(\{\mathfrak {A}_j \otimes \mathfrak {C}_j \otimes \mathfrak {B}_j \}_{j\in J}\), where \(\mathfrak {A}_j, \mathfrak {B}_j\in \mathcal {M}_{d}(\mathbb {C})\) and \(\mathfrak {C}_j\in \mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\) for all \(j\in J\). Then, we can write

$$\begin{aligned} \Lambda (\rho \otimes \sigma ) = \sum _j (\mathbb {I}\otimes \mathfrak {C}_j \otimes \mathbb {I} ) (\rho _j\otimes \sigma _j ) (\mathbb {I}\otimes \mathfrak {C}_j \otimes \mathbb {I})^*, \end{aligned}$$

where \(\rho _j = (\mathfrak {A}_j \otimes \mathbb {I}) \rho (\mathfrak {A}_j \otimes \mathbb {I})^*\) and \(\sigma _j = (\mathbb {I} \otimes \mathfrak {B}_j ) \sigma (\mathbb {I} \otimes \mathfrak {B}_j )^* \) are again PPT. Thus,

$$\begin{aligned} {\text {Tr}}_{2,3} \{\Lambda (\rho \otimes \sigma ) \}&= \sum _j {\text {Tr}}_{2,3} \{ (\rho _j\otimes \sigma _j) (\mathbb {I} \otimes \mathfrak {C}_j^* \mathfrak {C}_j \otimes \mathbb {I}) \} \\&= \sum _{j\in J} \sum _{i=1}^{d^2} \lambda _{ij} {\text {Tr}}_{2,3} \{ (\rho _j \otimes \sigma _j) (\mathbb {I} \otimes |c_{ij}\rangle \langle c_{ij}| \otimes \mathbb {I}) \}, \end{aligned}$$

where, for each \(j\in J\), \(\sum _{i=1}^{d^2} \lambda _{ij}|c_{ij}\rangle \langle c_{ij}| \) is the spectral decomposition of the positive semi-definite matrix \(\mathfrak {C}_j^* \mathfrak {C}_j\). Now, by writing \(|c_{ij}\rangle = (\mathfrak {C'}_{ij}\otimes \mathbb {I}) |e\rangle \) for \(\mathfrak {C'}_{ij}\in \mathcal {M}_{d}(\mathbb {C})\) and redefining \(\varrho _{ij} = (\mathbb {I} \otimes \mathfrak {C'}_{ij})^* \rho _{j} (\mathbb {I} \otimes \mathfrak {C'}_{ij})\) for all i, j (which are all again PPT), we obtain:

$$\begin{aligned} {\text {Tr}}_{2,3} \{\Lambda (\rho \otimes \sigma ) \} = \sum _{j\in J} \sum _{i=1}^{d^2} \lambda _{ij} {\text {Tr}}_{2,3} \{ (\varrho _{ij}\otimes \sigma _j) (\mathbb {I} \otimes |e\rangle \langle e| \otimes \mathbb {I})\}, \end{aligned}$$

whose separability trivially follows from our assumption.

Finally, if we assume that (3) holds, then (2) follows by choosing \(\Lambda \) to be defined by a single Kraus operator of the form \(\mathbb {I}\otimes |e\rangle \langle e| \otimes \mathbb {I}\). \(\square \)
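Continuing the sketch from Sect. 2 (and reusing its hypothetical choi helper, so this snippet is illustrative rather than part of the original argument), one can check the identity \(J(\Phi _1 \circ \Phi _2) = {\text {Tr}}_{2,3} \{ (J(\Phi _1)\otimes J(\Phi _2)) (\mathbb {I} \otimes |e\rangle \langle e| \otimes \mathbb {I}) \}\) on random CP maps; the index bookkeeping below assumes the ordering 'output factor first' used in the definition of J above.

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)

def random_cp(d, k=4):
    """A CP map given by k random Kraus operators."""
    ks = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(k)]
    return lambda X: sum(K @ X @ K.conj().T for K in ks)

phi1, phi2 = random_cp(d), random_cp(d)
J1, J2 = choi(phi1, d), choi(phi2, d)            # choi: helper from the Sect. 2 sketch
lhs = choi(lambda X: phi1(phi2(X)), d)           # J(phi1 o phi2)

e = np.eye(d).reshape(d * d)                                 # |e> = sum_i |i>|i>
M = np.kron(np.kron(np.eye(d), np.outer(e, e)), np.eye(d))   # I (x) |e><e| (x) I
T = (np.kron(J1, J2) @ M).reshape([d] * 8)                   # 4 row + 4 column indices
rhs = np.einsum('abcdAbcD->adAD', T).reshape(d * d, d * d)   # partial trace over factors 2, 3

print(np.allclose(lhs, rhs))                                 # True
```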

Let us take a second to interpret the above result. Assume that there are three spatially separated parties: Alice, Bob, and Charlie, such that Charlie shares bipartite states \(\rho \) and \(\sigma \) with Alice and Bob, respectively (see Fig. 1). The objective is to transfer any entanglement that Charlie shares with Alice and Bob separately (via \(\rho \) and \(\sigma \)) to shared entanglement between Alice and Bob. We can think of this as a generalized entanglement ‘swapping’ task. What the PPT\(^2\) conjecture says in this context is that the above task is impossible if \(\rho \) and \(\sigma \) are PPT. No matter what local operations the parties might wish to perform on their subsystems, Proposition 2.1 guarantees that the resulting state shared by Alice and Bob is separable. This has drastic implications in the realm of quantum key distribution using repeater devices, where such entanglement swapping procedures are heavily employed, see [3, 6].

Fig. 1

The setup for a generalized entanglement swapping task

We now define the different families of covariant maps in \(\mathcal {T}_{d}(\mathbb {C})\).

Definition 2.2

Let \(\mathcal {DU}_d\) and \(\mathcal {DO}_d\) denote the groups of diagonal unitary and diagonal orthogonal matrices in \(\mathcal {M}_{d}(\mathbb {C})\), respectively. Then, a linear map \(\Phi \in \mathcal {T}_d(\mathbb {C})\) is said to be

  • Diagonal Unitary Covariant (DUC) if

    $$\begin{aligned} \forall X \in \mathcal {M}_{d}(\mathbb {C}) \text { and } U \in \mathcal {DU}_d: \qquad \Phi (UXU^*) = U^*\Phi (X)U, \end{aligned}$$
  • Conjugate Diagonal Unitary Covariant (CDUC) if

    $$\begin{aligned} \forall X \in \mathcal {M}_{d}(\mathbb {C}) \text { and } U \in \mathcal {DU}_d: \qquad \Phi (UXU^*) = U\Phi (X)U^*, \end{aligned}$$
  • Diagonal Orthogonal Covariant (DOC) if

    $$\begin{aligned} \forall X \in \mathcal {M}_{d}(\mathbb {C}) \text { and } O \in \mathcal {DO}_d: \qquad \Phi (OXO) = O\Phi (X)O. \end{aligned}$$

Remark 2.3

[28, Theorem 6.4] Definition 2.2 can be reformulated in terms of bipartite Choi matrices having certain local diagonal unitary/orthogonal invariance properties:

  • \(\Phi \in \mathcal {T}_{d}(\mathbb {C})\) is DUC \(\iff (U \otimes U) J(\Phi ) (U^* \otimes U^*) = J(\Phi ) \qquad \forall U\in \mathcal {DU}_d\).

  • \(\Phi \in \mathcal {T}_{d}(\mathbb {C})\) is CDUC \(\iff (U \otimes U^*) J(\Phi ) (U^* \otimes U) = J(\Phi ) \qquad \forall U\in \mathcal {DU}_d\).

  • \(\Phi \in \mathcal {T}_{d}(\mathbb {C})\) is DOC \(\iff (O \otimes O) J(\Phi ) (O \otimes O) = J(\Phi ) \qquad \forall O\in \mathcal {DO}_d\).

The sets of DUC, CDUC and DOC maps in \(\mathcal {T}_{d}(\mathbb {C})\) will be denoted by \(\mathsf {DUC}_d, \mathsf {CDUC}_d\), and \(\mathsf {DOC}_d\), respectively. Superscripts \(i=1,2\) and 3 will be used to distinguish between maps \(\Phi ^{(i)}\) in \(\mathsf {DUC}_d, \mathsf {CDUC}_d\) and \(\mathsf {DOC}_d\), respectively. Using the structure of the invariant bipartite Choi matrices \(J(\Phi ^{(i)})\) from [28, Proposition 2.3], one can parameterize the action of the corresponding covariant maps \(\Phi ^{(i)}\) on \(\mathcal {M}_{d}(\mathbb {C})\) in terms of matrix triples \((A,B,C)\in \mathcal {M}_{d}(\mathbb {C})^{\times 3}_{\mathbb {C}^{d}}\) (Eq. (2)) as follows:

$$\begin{aligned} \Phi ^{(1)}_{(A,B)}(X)&= {\text {diag}}(A|{\text {diag}}X\rangle ) + \widetilde{B}\odot X^\top \end{aligned}$$
(3)
$$\begin{aligned} \Phi ^{(2)}_{(A,B)}(X)&= {\text {diag}}(A|{\text {diag}}X\rangle ) + \widetilde{B}\odot X \end{aligned}$$
(4)
$$\begin{aligned} \Phi ^{(3)}_{(A,B,C)}(X)&= {\text {diag}}(A|{\text {diag}}X\rangle ) + \widetilde{B}\odot X + \widetilde{C}\odot X^\top \end{aligned}$$
(5)

where \(\widetilde{B}=B-{\text {diag}}B, \, \widetilde{C}=C-{\text {diag}}C\), and \(\odot \) denotes the operation of Hadamard (or entrywise) product in \(\mathcal {M}_{d}(\mathbb {C})\). In a quantum setting, where \(X=\rho \) is a quantum state (\(\rho \in \mathsf {PSD}_d, {\text {Tr}}\rho =1\)) and \(\Phi ^{(i)}\in \mathcal {T}_{d}(\mathbb {C})\) are quantum channels, one can interpret the above actions by splitting them into two parts. The first part involves a classical diagonal mixing operation, which is nothing but a transformation on the space of probability distributions in \(\mathbb {R}^d_{+}\): \(|{\text {diag}}\rho \rangle \mapsto A|{\text {diag}}\rho \rangle \). The second part acts on the off-diagonal part of the input state by mixing the well-known actions of Schur Multipliers [24, Chapters 3,8] and transposition maps in \(\mathcal {T}_{d}(\mathbb {C})\).
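For concreteness, Eqs. (3)-(5) translate into a few lines of NumPy; the sketch below (with the hypothetical name duc_action, not from the original) also verifies the CDUC covariance property of Definition 2.2 on random inputs.

```python
import numpy as np

def duc_action(A, B, X, kind=2, C=None):
    """Actions (3)-(5): kind=1 is DUC, kind=2 is CDUC, kind=3 is DOC (needs C)."""
    out = np.diag(A @ np.diag(X))        # classical part: diag(A |diag X>)
    Bt = B - np.diag(np.diag(B))         # B with its diagonal removed
    if kind == 1:
        return out + Bt * X.T
    if kind == 2:
        return out + Bt * X
    Ct = C - np.diag(np.diag(C))
    return out + Bt * X + Ct * X.T

# covariance check for a random CDUC map and a random diagonal unitary
rng = np.random.default_rng(1)
d = 4
A = rng.random((d, d))
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
np.fill_diagonal(B, np.diag(A))          # enforce diag(A) = diag(B)
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, d)))
lhs = duc_action(A, B, U @ X @ U.conj().T, kind=2)
print(np.allclose(lhs, U @ duc_action(A, B, X, kind=2) @ U.conj().T))   # True
```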

An important example of the above kind of maps is the Choi map [7], which was introduced in the ‘70s as the first example of a positive non-decomposable map:

$$\begin{aligned} \Phi _{\mathsf {Choi}} : \mathcal {M}_{3}(\mathbb {C}) \rightarrow \mathcal {M}_{3}(\mathbb {C}), \qquad \Phi _{\mathsf {Choi}}(X) = \left( \begin{array}{ccc} X_{11}+X_{33} &{} -X_{12} &{} -X_{13} \\ -X_{21} &{} X_{11}+X_{22} &{} -X_{23} \\ -X_{31} &{} -X_{32} &{} X_{22}+X_{33} \end{array} \right) . \end{aligned}$$

It can be easily seen that the Choi map is a CDUC map \(\Phi _{\mathsf {Choi}} = \Phi ^{(2)}_{(A,B)}\), with

$$\begin{aligned} A = \begin{pmatrix} 1 &{}{} 0 &{}{} 1 \\ 1 &{}{} 1 &{}{} 0 \\ 0 &{}{} 1 &{}{} 1 \end{pmatrix} \quad \text{ and } \quad B = \begin{pmatrix} 1 &{} -1 &{} -1 \\ -1 &{} 1 &{} -1 \\ -1 &{} -1 &{} 1 \end{pmatrix} = 2\mathbb {I}_3 - \mathbb {J}_3, \end{aligned}$$

where \(\mathbb J_3\) is the all-ones matrix. In fact, the action of all generalized Choi maps in \(\mathcal {T}_{d}(\mathbb {C})\) [9, 11, 15] can be similarly parameterized by an arbitrary \(A\in \mathsf {EWP}_d\) and \(B=2\mathbb {I}_d-\mathbb {J}_d\), see [28, Example 7.5]. Besides Choi-type maps, the classes of (C)DUC and DOC maps contain many other important examples, like the depolarizing and transpose depolarizing maps, amplitude damping maps, Schur multipliers, etc. (see [28, Section 7 and Table 2] for a list of examples).
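As a quick sanity check (reusing the duc_action sketch above, so this is again an illustration rather than part of the original text), one can verify numerically that the entrywise formula for the Choi map agrees with \(\Phi ^{(2)}_{(A,B)}\) for the matrices A and B displayed above.

```python
import numpy as np

def phi_choi(X):
    """The Choi map on M_3, written out entrywise."""
    return np.array([
        [X[0, 0] + X[2, 2], -X[0, 1],           -X[0, 2]],
        [-X[1, 0],           X[0, 0] + X[1, 1], -X[1, 2]],
        [-X[2, 0],          -X[2, 1],            X[1, 1] + X[2, 2]],
    ])

A = np.array([[1., 0., 1.], [1., 1., 0.], [0., 1., 1.]])
B = 2 * np.eye(3) - np.ones((3, 3))
X = np.random.default_rng(2).normal(size=(3, 3))
print(np.allclose(phi_choi(X), duc_action(A, B, X, kind=2)))   # True
```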

Remark 2.4

Note that the maps in \(\mathsf {DUC}_d\) and \(\mathsf {CDUC}_d\) are linked through composition with matrix transposition, i.e., for \((A,B)\in \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}\), we have \(\Phi ^{(1)}_{(A,B)} = \Phi ^{(2)}_{(A,B)}\circ \top \). This relation is reflected in the fact that the PPT and entanglement-breaking properties of these maps impose equivalent constraints on the corresponding matrix pairs \((A,B)\in \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}\).

Remark 2.5

For a matrix pair \((A,B)\in \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}\), we have

$$\begin{aligned} \Phi ^{(1)}_{(A,B)} = \Phi ^{(3)}_{(A,{\text {diag}}A,B)} \qquad \text {and} \qquad \Phi ^{(2)}_{(A,B)} = \Phi ^{(3)}_{(A,B,{\text {diag}}A)}. \end{aligned}$$

Next, we introduce the cones of pairwise and triplewise completely positive matrices, which were first introduced in [19] and [23], respectively, as generalizations of the well-studied cone of completely positive matrices [5]. These will be used later in Propositions 2.9 and 2.10 to provide an equivalent description of the entanglement-breaking properties of our covariant families of maps. Let us point out that these notions should not be confused with the completely positive maps in \(\mathcal {T}_{d}(\mathbb {C})\) defined earlier, which are entirely different objects.

Definition 2.6

A matrix pair \((A,B)\in \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}\) is said to be pairwise completely positive (PCP) if there exist vectors \(\{ |v_n\rangle ,|w_n\rangle \}_{n\in I}\) (for a finite index set I) such that

$$\begin{aligned} A = \sum _{n\in I}|v_n\odot \overline{v_n}\rangle \langle w_n\odot \overline{w_n}| , \qquad B = \sum _{n\in I}|v_n\odot w_n\rangle \langle v_n\odot w_n| . \end{aligned}$$

Definition 2.7

A matrix triple \((A,B,C)\in \mathcal {M}_{d}(\mathbb {C})^{\times 3}_{\mathbb {C}^{d}}\) is said to be triplewise completely positive (TCP) if there exist vectors \(\{ |v_n\rangle ,|w_n\rangle \}_{n\in I}\) (for a finite index set I) such that

$$\begin{aligned}&A = \sum _{n\in I}|v_n\odot \overline{v_n}\rangle \langle w_n\odot \overline{w_n}| , \qquad B = \sum _{n\in I}|v_n\odot w_n\rangle \langle v_n\odot w_n| ,\\&\quad C = \sum _{n\in I}|v_n\odot \overline{w_n}\rangle \langle v_n\odot \overline{w_n}| . \end{aligned}$$

The vectors \(\{|v_n\rangle ,|w_n\rangle \}_{n\in I}\) above are said to form the PCP/TCP decomposition of the concerned matrix pair/triple. Notice that \((A,B,C)\in \mathsf {TCP}_d\implies (A,B),(A,C)\in \mathsf {PCP}_d\). It is easy to deduce that PCP and TCP matrices form closed convex cones, which we will denote by \(\mathsf {PCP}_d\) and \(\mathsf {TCP}_d\), respectively. For an extensive account of the convex structure of these cones, the readers should refer to [28, Section 5]. Several elementary properties of these cones are discussed in [19, Sections 3, 4] and [23, Appendix B], respectively. We recall some important necessary conditions for membership in the \(\mathsf {PCP}_d\) cone below.

Lemma 2.8

Let \((A,B)\in \mathsf {PCP}_d\). Then, \(A\in \mathsf {EWP}_d\) and \(B\in \mathsf {PSD}_d\). Moreover, the entrywise inequalities \(A_{ij}A_{ji} \ge \vert B_{ij} \vert ^2\) hold for all ij.
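The following short sketch (our own illustration, not from the original) builds a pair (A, B) directly from random vectors as in Definition 2.6 and checks the three necessary conditions of Lemma 2.8.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 4, 6

A = np.zeros((d, d), dtype=complex)
B = np.zeros((d, d), dtype=complex)
for _ in range(N):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    w = rng.normal(size=d) + 1j * rng.normal(size=d)
    A += np.outer(v * v.conj(), (w * w.conj()).conj())   # |v . conj(v)><w . conj(w)|
    B += np.outer(v * w, (v * w).conj())                 # |v . w><v . w|

print(np.allclose(A.imag, 0) and np.all(A.real >= -1e-12))   # A is entrywise non-negative
print(np.linalg.eigvalsh(B).min() >= -1e-12)                 # B is positive semi-definite
print(np.all(A.real * A.real.T >= np.abs(B) ** 2 - 1e-9))    # A_ij A_ji >= |B_ij|^2
```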

Let us now describe the PPT and entanglement-breaking properties of the covariant maps in terms of constraints on the associated matrix pairs and triples.

Proposition 2.9

([28, Lemmas 6.11, 6.12]). Let \((A,B)\in \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}\). Then, \(\Phi ^{(1)}_{(A,B)}\) is

  1. (1)

    CP \(\iff A\in \mathsf {EWP}_d\), \(B=B^*\) and \(A_{ij}A_{ji}\ge |B_{ij}|^2 \,\, \forall i,j\iff \Phi ^{(2)}_{(A,B)}\) is coCP.

  2. (2)

    coCP \(\iff A\in \mathsf {EWP}_d\) and \(B\in \mathsf {PSD}_d\iff \Phi ^{(2)}_{(A,B)}\) is CP.

  3. (3)

    PPT \(\iff A\in \mathsf {EWP}_d\), \(B\in \mathsf {PSD}_d\) and \(A_{ij}A_{ji}\ge |B_{ij}|^2 \,\, \forall i,j \iff \Phi ^{(2)}_{(A,B)}\) is PPT.

  4. (4)

    entanglement breaking \(\iff (A,B)\in \mathsf {PCP}_d \iff \Phi ^{(2)}_{(A,B)}\) is entanglement breaking.

Proposition 2.10

([28, Lemma 6.13]). Let \((A,{B,}C){\in } \mathcal {M}_{d}(\mathbb {C})^{\times 3}_{\mathbb {C}^{d}}\). Then, \(\Phi ^{(3)}_{(A,{B,}C)}\) is

  1. (1)

    CP \(\iff A \in \mathsf {EWP}_d\), \(B \in \mathsf {PSD}_d\), \(C=C^*\), and \(A_{ij}A_{ji} \ge \vert C_{ij} \vert ^2 \,\, \forall i,j\).

  2. (2)

    coCP \(\iff A \in \mathsf {EWP}_d\), \(B=B^*\), \(C \in \mathsf {PSD}_d\), and \(A_{ij}A_{ji} \ge \vert B_{ij} \vert ^2 \,\, \forall i,j\).

  3. (3)

    PPT \(\iff A\in \mathsf {EWP}_d\), \(B,C\in \mathsf {PSD}_d\) and \(A_{ij}A_{ji}\ge {\text {max}}\{|B_{ij}|^2,|C_{ij}|^2 \} \,\, \forall i,j\).

  4. (4)

    entanglement breaking \(\iff (A,B,C)\in \mathsf {TCP}_d\).

3 Factor Widths

The concept of factor width was first formalized in [4] for real (symmetric) positive semi-definite matrices, although the idea had been in use before, particularly in the study of completely positive matrices [5, Definition 2.4]. Recall that for \(|v\rangle \in \mathbb {C}^{d}\), its support is defined as \({\text {supp}}|v\rangle {:}{=}\{ i\in [d] : v_i\ne 0\}\). We define \(\sigma (v)\) as the size of \({\text {supp}}|v\rangle \), that is, the number of non-zero coordinates of \(|v\rangle \). A real positive semi-definite matrix \(A\in \mathcal {M}_{d}(\mathbb {R})\) is said to have factor width k if there exist vectors \(\{|v_n\rangle \}_{n\in I}\subset \mathbb {R}^d\) with \(\sigma (v_n)\le k\) for each n such that A admits the following rank one decomposition: \(A = \sum _{n\in I}|v_n\rangle \langle v_n| \). Besides being heavily used in the analysis of completely positive matrices [5, Section 4], the concept of factor width has found several applications in the field of conic programming and optimization theory [1]. In this section, we will extend the notion of factor width to the cones of complex (hermitian) positive semi-definite matrices \(\mathsf {PSD}_d\), pairwise completely positive matrices \(\mathsf {PCP}_d\) and triplewise completely positive matrices \(\mathsf {TCP}_d\). In particular, we will obtain a complete characterization of matrices with factor width 2 in \(\mathsf {PSD}_d\) and \(\mathsf {PCP}_d\), which will later play an instrumental role in proving the validity of the PPT-squared conjecture for covariant maps in \(\mathsf {DUC}_d\) and \(\mathsf {CDUC}_d\). Let us now delve straight into the definition of factor width for matrices in \(\mathsf {PSD}_d\), \(\mathsf {PCP}_d\) and \(\mathsf {TCP}_d\).

Definition 3.1

A matrix \(B\in \mathsf {PSD}_d\) is said to have factor width \(k\in \mathbb {N}\) if it admits a rank one decomposition \(B=\sum _{n\in I}|v_n\rangle \langle v_n| \), where \(\{ |v_n\rangle \}_{n\in I} \subset \mathbb {C}^{d} \) are such that \(\sigma (v_n)\le k\) for each \(n\in I\).

The notion of factor width for positive semidefinite matrices has been considered in [25], in relation to measures of coherence for density matrices. This notion has a straightforward generalization for PCP and TCP matrices.

Definition 3.2

A matrix pair \((A,B)\in \mathsf {PCP}_d\) (resp. triple \((A,B,C)\in \mathsf {TCP}_d\)) is said to have factor width \(k\in \mathbb {N}\) if it admits a PCP (resp. TCP) decomposition with vectors \(\{|v_n\rangle ,|w_n\rangle \}_{n\in I}\subset \mathbb {C}^{d}\) such that \(\sigma (v_n\odot w_n)\le k\) for each \(n\in I\).

The cones of factor width k matrices in \(\mathsf {PSD}_d, \mathsf {PCP}_d\) and \(\mathsf {TCP}_d\) will be denoted by \(\mathsf {PSD}_d^k, \mathsf {PCP}_d^k\), and \(\mathsf {TCP}_d^k\), respectively. It should be clear from the definitions that \((A,B,C)\in \mathsf {TCP}_d^k\implies (A,B),(A,C)\in \mathsf {PCP}_d^k \implies B,C\in \mathsf {PSD}_d^k\) and that the cones \(\mathsf {PSD}_d^k, \mathsf {PCP}_d^k\), and \(\mathsf {TCP}_d^k\) are stable under direct sums. The following sequences of inclusions are also trivial consequences of the definitions:

$$\begin{aligned} \mathsf {PSD}_d^1 \subset \mathsf {PSD}_d^2 \subset \dots \subset \mathsf {PSD}_d^d = \mathsf {PSD}_d \end{aligned}$$
(6)
$$\begin{aligned} \mathsf {PCP}_d^1 \subset \mathsf {PCP}_d^2 \subset \dots \subset \mathsf {PCP}_d^d = \mathsf {PCP}_d \end{aligned}$$
(7)
$$\begin{aligned} \mathsf {TCP}_d^1 \subset \mathsf {TCP}_d^2 \subset \dots \subset \mathsf {TCP}_d^d = \mathsf {TCP}_d \end{aligned}$$
(8)

Remark 3.3

For all \(k < d\), the inclusions

$$\begin{aligned} \mathsf {PSD}_d^k \subset \mathsf {PSD}_d, \qquad \mathsf {PCP}_d^k \subset \mathsf {PCP}_d, \qquad \mathsf {TCP}_d^k \subset \mathsf {TCP}_d \end{aligned}$$

are strict. This can be seen by considering extremal rays of the cones (see [28, Theorem 5.13]) generated by vectors of full support.

For \(k=1\), it is evident that \(\mathsf {PSD}_d^1\) is precisely the set of diagonal matrices in \(\mathsf {EWP}_d\). It is equally easy to deduce that \(\mathsf {PCP}_d^1\) (resp. \(\mathsf {TCP}_d^1\)) consists precisely of the matrix pairs \((A,B)\in \mathsf {PCP}_d\) (resp. triples \((A,B,C)\in \mathsf {TCP}_d\)) with \(A\in \mathsf {EWP}_d\) and \(B={\text {diag}}A\) (resp. \(A\in \mathsf {EWP}_d\) and \(B=C={\text {diag}}A\)).

The \(k=2\) case is more interesting, and we must familiarize ourselves with some matrix-theoretic terminology before we begin to deal with it. Let us start with the definitions of the so-called scaled diagonally dominant and M-matrices. In what follows, \(\mathbb {I}_d\) denotes the identity matrix in \(\mathcal {M}_{d}(\mathbb {C})\). For \(B\in \mathcal {M}_{d}(\mathbb {C})\) and \(k\in \mathbb {N}\), we define a hierarchy of comparison matrices entrywise as:

$$\begin{aligned} M_k(B)_{ij} = {\left\{ \begin{array}{ll} \, k|B_{ij}|, \quad &{}\text {if } i = j\\ -|B_{ij}|, \quad &{}\text {otherwise } \end{array}\right. } \end{aligned}$$
(9)

Notice that \(M_1(B)\) coincides with the usual comparison matrix M(B) for \(B\in \mathcal {M}_{d}(\mathbb {C})\).
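For concreteness, the comparison matrices of Eq. (9) can be computed as follows; this small helper (the name comparison_matrix is ours) is reused in later snippets.

```python
import numpy as np

def comparison_matrix(B, k=1):
    """M_k(B) from Eq. (9): k|B_ii| on the diagonal, -|B_ij| off the diagonal."""
    M = -np.abs(B)
    np.fill_diagonal(M, k * np.abs(np.diag(B)))
    return M
```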

Definition 3.4

A matrix \(B\in \mathcal {M}_{d}(\mathbb {C})\) is called diagonally dominant (DD) if \(|B_{ii}|\ge \sum _{j\ne i}|B_{ij}|\) and \(|B_{ii}|\ge \sum _{j\ne i}|B_{ji}| \,\, \forall i\). For \(B\in \mathcal {M}_{d}(\mathbb {C})\), if there exists a positive diagonal matrix D such that DBD is diagonally dominant, then B is called scaled diagonally dominant (SDD).

Note that, for a hermitian matrix B, the two conditions \(|B_{ii}|\ge \sum _{j\ne i}|B_{ij}|\) and \(|B_{ii}|\ge \sum _{j\ne i}|B_{ji}|\) are equivalent. Also note that hermitian DD and SDD matrices are clearly positive semi-definite.

Definition 3.5

A matrix \(B=s\,\mathbb {I}_d-P\) with \(s\ge 0\) and \(P\in \mathsf {EWP}_d\) is called an M-matrix if \(s\ge \rho (P)\), where \(\rho (P)\) denotes the spectral radius of P.

From the above definition, it is easy to see that if \(P\in \mathsf {EWP}_d\) is symmetric, then \(B=s\,\mathbb {I}_d-P\) is an M-matrix if and only if it is positive semi-definite. Before proceeding further, let us equip ourselves with an important result from the Perron–Frobenius theory of non-negative matrices [16, Chapter 8]. Recall that a matrix \(B\in \mathcal {M}_{d}(\mathbb {C})\) is reducible if it is permutationally similar to a block matrix in Eq. (10) (where \(B_1,B_3\) are square matrices) and irreducible otherwise.

$$\begin{aligned} \left( \begin{array}{ c c } B_1 &{} B_2 \\ 0 &{} B_3 \end{array} \right) \end{aligned}$$
(10)

Lemma 3.6

For an irreducible non-negative matrix \(P\in \mathsf {EWP}_d\), the spectral radius \(\rho (P)\) is an eigenvalue (called Perron eigenvalue) of unit multiplicity with a positive eigenvector (called Perron eigenvector) \(|p\rangle \in \mathbb {R}^d_+\): \(P|p\rangle = \rho (P)|p\rangle \).

With all the required tools now present in our arsenal, we begin to analyze the structure of the \(\mathsf {PSD}_d^2\) and \(\mathsf {PCP}_d^2\) cones. We start by showing that diagonal dominance is a sufficient condition to guarantee membership in these cones. It would be insightful to compare the following results with [2, Theorem 2] and [19, Theorem 4.4].

Lemma 3.7

If \(B\in \mathcal {M}_{d}(\mathbb {C})\) is a (hermitian) diagonally dominant matrix, then \(B\in \mathsf {PSD}_d^2\).

Proof

Let us use the symbol in Eq. (11) to denote the \(d\times d\) matrix which has zeros everywhere except in the principal submatrix indexed by \(\{i,j\}\), where the entries are given by the displayed \(2\times 2\) matrix.

$$\begin{aligned} \left( \begin{array}{ c c } a &{} b \\ c &{} d \end{array} \right) _{i,j\in [d]} {:}{=}a|i\rangle \langle i| + b|i\rangle \langle j| + c|j\rangle \langle i| + d|j\rangle \langle j| \in \mathcal {M}_{d}(\mathbb {C}) \end{aligned}$$
(11)

Then, the following decomposition of a hermitian DD matrix shows that it has factor width 2:

$$\begin{aligned} B = \sum _{1\le i<j \le d} \left( \begin{array}{ c c } |B_{ij}| &{} B_{ij} \\ B_{ji} &{} |B_{ji}| \end{array} \right) _{i,j\in [d]} + {\text {diag}}|b\rangle \end{aligned}$$
(12)

where \(|b\rangle \in \mathbb {R}^d\) is defined entrywise as \(b_i= B_{ii}-\sum _{j\ne i}|B_{ij}|\ge 0\). \(\square \)
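The decomposition (12) is entirely constructive. The sketch below (an illustration under the assumption that the input is hermitian and diagonally dominant) builds the \(2\times 2\)-supported pieces and the leftover diagonal, and checks that they are positive semi-definite and sum back to B.

```python
import numpy as np

def dd_factor_width_2_pieces(B, tol=1e-12):
    """Pieces of decomposition (12) for a hermitian diagonally dominant B."""
    d = B.shape[0]
    pieces = []
    for i in range(d):
        for j in range(i + 1, d):
            if abs(B[i, j]) > tol:
                P = np.zeros((d, d), dtype=complex)
                P[i, i], P[j, j] = abs(B[i, j]), abs(B[j, i])
                P[i, j], P[j, i] = B[i, j], B[j, i]
                pieces.append(P)
    b = np.diag(B).real - np.sum(np.abs(B - np.diag(np.diag(B))), axis=1)
    pieces.append(np.diag(b))                 # leftover non-negative diagonal part
    return pieces

rng = np.random.default_rng(2)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = G + G.conj().T
np.fill_diagonal(B, np.sum(np.abs(B), axis=1))   # force diagonal dominance
pieces = dd_factor_width_2_pieces(B)
print(np.allclose(sum(pieces), B))                                  # True
print(all(np.linalg.eigvalsh(P).min() >= -1e-9 for P in pieces))    # True
```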

Lemma 3.8

Let \((A,B)\in \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}\) satisfy the necessary conditions for membership in \(\mathsf {PCP}_d\) (see Lemma 2.8). If, moreover, B is diagonally dominant, then \((A,B)\in \mathsf {PCP}_d^2\).

Proof

Using the same notation as in Lemma 3.7, we can decompose the given pair (AB) as

$$\begin{aligned} (A,B) = \sum _{1\le i<j \le d} \left( \left( \begin{array}{ c c } |B_{ij}| &{} A_{ij} \\ A_{ji} &{} |B_{ji}| \end{array} \right) _{i,j\in [d]} , \left( \begin{array}{ c c } |B_{ij}| &{} B_{ij} \\ B_{ji} &{} |B_{ji}| \end{array} \right) _{i,j\in [d]} \right) + ({\text {diag}}|a\rangle ,{\text {diag}}|b\rangle ) \end{aligned}$$
(13)

where \(|a\rangle =|b\rangle \in \mathbb {R}^d\) is defined entrywise as \(a_i=A_{ii}-\sum _{j\ne i}|B_{ij}|\ge 0\). Each pair in the above sum lies in \(\mathsf {PCP}_d^2\), since it satisfies the lemma’s hypothesis and is supported on a 2-dimensional subspace, see [23, Lemma B.11] or [19, Theorem 4.1]. Hence, \((A,B)\in \mathsf {PCP}_d^2\). \(\square \)

With Lemmas 3.7 and 3.8 in place, we now proceed to obtain a complete characterization of the factor width 2 cones \(\mathsf {PSD}_d^2\) and \(\mathsf {PCP}_d^2\). The following results can be interpreted as generalizations of a similar result for real symmetric matrices [4, Theorem 9], where the set of factor width 2 matrices has been shown to be equal to the set of scaled diagonally dominant matrices. A similar result has been obtained in [25, Theorem 1], in the context of multilevel coherence of mixed quantum states.

Theorem 3.9

For a (hermitian) matrix \(B\in \mathcal {M}_{d}(\mathbb {C})\), the following equivalences hold:

$$\begin{aligned} B\in \mathsf {PSD}_d^2 \!\iff \! M(B) \text { is an M-matrix} \!\iff \! M(B) \in \mathsf {PSD}_d \!\iff \! B \text { is scaled diagonally dominant}. \end{aligned}$$

Proof

Since B is hermitian, M(B) is symmetric, which implies that M(B) is an M-matrix \(\iff M(B)\in \mathsf {PSD}_d\). With this fact in the background, assume first that \(B\in \mathsf {PSD}_d^2\) can be decomposed as \(B=\sum _{n\in I}|v_n\rangle \langle v_n| \), where \(\sigma (v_n)\le 2\) for each n. Now, split the index set as \(I=I_1\sqcup \bigsqcup _{i<j}I_{ij}\), where \( I_1=\{n\in I:\sigma (v_n)= 1 \}\text { and }I_{ij}=\{n\in I : {\text {supp}}|v_n\rangle =\{i,j\} \}\). By defining \(B^{ij}=\sum _{n\in I_{ij}}|v_n\rangle \langle v_n| \) (so that \(M(B^{ij})\in \mathsf {PSD}_d\), since \(B^{ij}\) is supported on a two-dimensional subspace), we can deduce that

$$\begin{aligned} M(B)=\sum _{n\in I_1}|v_n\rangle \langle v_n| + \sum _{i<j}M(B^{ij})\in \mathsf {PSD}_d. \end{aligned}$$

Now, let B be a hermitian matrix such that \(M(B)=s\,\mathbb {I}_d - P\in \mathsf {PSD}_d\) with \(s\ge \rho (P)\). If M(B) is reducible, it can be written as a direct sum of irreducible matrices. Hence, without loss of generality, we can assume that M(B) is irreducible. Lemma 3.6 then provides us with a positive Perron eigenvector \(|p\rangle \in \mathbb {R}^d_+\) such that \(P|p\rangle =\rho (P)|p\rangle \). Define \(D={\text {diag}}|p\rangle \) and let \(|e\rangle \in \mathbb {R}^d\) be the all ones vector, so that \(DM(B)D|e\rangle =D(s-\rho (P))|p\rangle \) is entrywise non-negative, i.e., DM(B)D is diagonally dominant. The same D then works to show that B is scaled diagonally dominant.

For the final implication, assume that there exists a positive diagonal matrix D such that DBD is diagonally dominant. Using Lemma 3.7, we then have \(DBD\in \mathsf {PSD}_d^2 \implies B\in \mathsf {PSD}_d^2\). \(\square \)
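In code, Theorem 3.9 yields a one-line membership test for \(\mathsf {PSD}_d^2\); the sketch below reuses the comparison_matrix helper from the snippet after Eq. (9) (again an illustration, not part of the original text), and the two sample calls are consistent with Remark 3.3 and Lemma 3.7.

```python
import numpy as np

def in_psd2(B, tol=1e-9):
    """Theorem 3.9: a hermitian B is in PSD_d^2 iff its comparison matrix M(B) is PSD."""
    return np.linalg.eigvalsh(comparison_matrix(B, k=1)).min() >= -tol

print(in_psd2(np.ones((3, 3))))                  # False: the all-ones matrix has factor width 3
print(in_psd2(2 * np.eye(3) + np.ones((3, 3))))  # True: diagonally dominant, hence factor width 2
```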

Theorem 3.10

For \((A,B)\in \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}\) satisfying the necessary conditions for membership in \(\mathsf {PCP}_d\) (see Lemma 2.8), the following equivalence holds: \((A,B)\in \mathsf {PCP}_d^2 \iff B\in \mathsf {PSD}_d^2\).

Proof

The forward implication is trivial to prove. For the reverse implication, let us assume that \(B\in \mathsf {PSD}_d^2\). Theorem 3.9 then guarantees the existence of a positive diagonal matrix D such that DBD is (hermitian) diagonally dominant. One can then apply Lemma 3.8 on the pair \((DAD, DBD)\) to deduce that \((DAD,DBD)\in \mathsf {PCP}_d^2\), which clearly implies that \((A,B)\in \mathsf {PCP}_d^2\). \(\square \)

Upon light scrutiny, it becomes clear that an analogue of Theorem 3.10 would not work for the \(\mathsf {TCP}_d^2\) cone, due to the added complexity of the third matrix. In fact, counterexamples exist in the form of matrix triples \((A,B,C)\in \mathcal {M}_{d}(\mathbb {C})^{\times 3}_{\mathbb {C}^{d}}\) where, even though (AB) and (AC) separately satisfy the constraints of Lemma 3.8, (ABC) is not TCP, see [28, Example 9.2].

Even in the real regime of symmetric positive semi-definite matrices, the cones of factor width \(k\ge 3\) matrices have evaded tractability. Computing the factor width of a given real symmetric matrix may be NP-hard [4]. Hence, it is reasonable to expect similar hardness in obtaining characterizations of the factor width k cones \(\mathsf {PSD}_d^k, \mathsf {PCP}_d^k\) and \(\mathsf {TCP}_d^k\) for \(k\ge 3\). Although no attempt to analyze these cones in full generality is made in this paper, we do provide simple necessary conditions for membership in these cones using the maps introduced in Eq. (9).

Proposition 3.11

Let \(B\in \mathsf {PSD}_d^k\). Then, \(M_{k-1}(B)\) is positive semi-definite.

Proof

Assume that the vectors \(\{|b_n\rangle \}_{n\in I}\) form a rank one decomposition of \(B\in \mathsf {PSD}_d^k\) as in Definition 3.1. For convenience, let us define \({\text {supp}}_n\) as an index set of size k which contains the actual (possibly smaller) support of the vector \(|b_n\rangle \in \mathbb {C}^{d}:\,\, {\text {supp}}|b_n\rangle \subseteq {\text {supp}}_n\) for each \(n\in I\). Let \(|e\rangle \in \mathbb {R}^d\) be the all ones vector. Then,

$$\begin{aligned} \langle e |M_{k-1}(B)| e\rangle&= \sum _{i=1}^d (k-1)B_{ii} - \sum _{1\le i\ne j \le d} |B_{ij}| \\&= \sum _{i=1}^d (k-1) \sum _{n\in I} |b_{n,i}|^2 - \sum _{i\ne j} \left| \sum _{n\in I} b_{n,i}\overline{b_{n,j}} \right| \\&\ge \sum _{n\in I} (k-1)\sum _{i=1}^d |b_{n,i}|^2 - \sum _{n\in I}\sum _{i\ne j} |b_{n,i}b_{n,j}| \\&\ge \sum _{n\in I} \left( \sum _{ \begin{array}{c} i,j\,\in \, {\text {supp}}_n \\ i<j \end{array}} |b_{n,i}|^2 + |b_{n,j}|^2 \right) - \sum _{n\in I} \left( \sum _{ \begin{array}{c} i,j\,\in \, {\text {supp}}_n \\ i<j \end{array}} 2|b_{n,i}b_{n,j}|\right) \\&= \sum _{n\in I} \left( \sum _{ \begin{array}{c} i,j\,\in \, {\text {supp}}_n \\ i<j \end{array}} |b_{n,i}|^2 + |b_{n,j}|^2 - 2|b_{n,i} b_{n,j}| \right) \ge 0 \end{aligned}$$

Now, for any positive vector \(|\psi \rangle \in \mathbb {R}^d_{+}\), we have \(\langle \psi | M_{k-1}(B) | \psi \rangle = \langle e | D_{\psi } M_{k-1}(B) D_{\psi } | e \rangle \ge 0\), where \(D_{\psi }={\text {diag}}|\psi \rangle \). Moreover, if we assume (without loss of generality) that \(M_{k-1}(B)\) is irreducible and write \(M_{k-1}(B) = \alpha \mathbb {I}_d - P\) for \(\alpha \ge 0\) and an irreducible \(P\in \mathsf {EWP}_d\), we can choose \(|\psi \rangle \) to be the (normalized) Perron eigenvector of P so that \( \langle \psi | M_{k-1}(B) | \psi \rangle = \alpha - \langle \psi | P | \psi \rangle = \alpha - \rho (P) \ge 0\), where \(\rho (P)\) is the spectral radius of P. This shows that \(M_{k-1}(B)\) is positive semi-definite. \(\square \)

Proposition 3.12

Let \((A,B,C)\in \mathsf {TCP}_d^k\). Then \(M_{k-1}(B)\) and \(M_{k-1}(C)\) are positive semi-definite.

It is perhaps worthwhile to point out that the above necessary condition for membership in \(\mathsf {TCP}_d^2\) has been utilized in [27] to unearth a new kind of entanglement in bipartite quantum states which likes to hide behind peculiar distributions of zeros on the states’ diagonals.

4 The PPT\(^2\) Conjecture for Diagonal Unitary Invariant Maps

In this section, we investigate the validity of the PPT\(^2\) conjecture for diagonal unitary covariant maps in \(\mathcal {T}_{d}(\mathbb {C})\). In particular, by exploiting the composition rule for these maps, along with the properties of factor width 2 pairwise completely positive matrices, we show that the PPT-squared conjecture holds for linear maps in \(\mathsf {DUC}_d\) and \(\mathsf {CDUC}_d\).

To begin with, let us consider two particular composition rules.

Definition 4.1

On \(\mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}\), define bilinear compositions \(\circ _1\) and \(\circ _2\) as follows:

$$\begin{aligned} \circ _1 : \qquad \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}} \times \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}&\rightarrow \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}} \\ \{ (A_1,B_1) , (A_2,B_2) \}&\mapsto (\mathfrak {A},\mathfrak {B}) = (A_1 A_2,\, B_1 \odot B_2^\top + {\text {diag}}(A_1 A_2 -B_1\odot B_2)) \end{aligned}$$
$$\begin{aligned} \circ _2 : \qquad \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}} \times \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}&\rightarrow \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}} \\ \{ (A_1,B_1), (A_2,B_2) \}&\mapsto (\mathfrak {A},\mathfrak {B}) = (A_1 A_2,\, B_1\odot B_2 + {\text {diag}}(A_1 A_2 - B_1\odot B_2)) \end{aligned}$$

Remark 4.2

It is obvious from Definition 4.1 that

$$\begin{aligned} (A_1, B_1)\circ _1 (A_2, B_2) = (A_1, B_1)\circ _2 (A_2, B_2^\top ) \qquad \forall (A_1, B_1), (A_2, B_2) \in \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}. \end{aligned}$$

Next, we state and prove an important proposition, which connects the above composition rules on matrix pairs to the operations of map composition in \(\mathsf {DUC}_d\) and \(\mathsf {CDUC}_d\). But first, we need familiarity with the notion of stability under composition.

Definition 4.3

A set \(K\subseteq \mathcal {T}_{d}(\mathbb {C})\) is said to be stable under composition if \(\Phi _1\circ \Phi _2 \in K\) for all \(\Phi _{1},\Phi _2 \in K\).

Proposition 4.4

The linear subspace \(\mathsf {CDUC}_d \subset \mathcal {T}_{d}(\mathbb {C})\) is stable under composition, but \(\mathsf {DUC}_d \subset \mathcal {T}_{d}(\mathbb {C})\) is not. Moreover, for pairs \((A_1,B_1), (A_2,B_2)\in \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}\), the following composition rules hold:

$$\begin{aligned} \Phi ^{(i)}_{(A_1,B_1)} \circ \Phi ^{(j)}_{(A_2,B_2)} = {\left\{ \begin{array}{ll} \, \Phi ^{(2)}_{(\mathfrak {A},\mathfrak {B})}, &{}\text {where } (\mathfrak {A},\mathfrak {B}) = (A_1,B_1)\circ _i (A_2,B_2) \quad \text {if } i=j \\ \, \Phi ^{(1)}_{(\mathfrak {A},\mathfrak {B})}, &{}\text {where } (\mathfrak {A},\mathfrak {B}) = (A_1,B_1)\circ _i (A_2,B_2) \quad \text {if } i\ne j. \end{array}\right. } \end{aligned}$$

Proof

The stability results follow directly from Definition 2.2. It is also trivial to check that if \(\Phi _1\in \mathsf {DUC}_d\) and \(\Phi _2\in \mathsf {CDUC}_d\) (or vice-versa), then \(\Phi _1\circ \Phi _2 \in \mathsf {DUC}_d\), since \(\forall \, U\in \mathcal {DU}_d\) and \(Z\in \mathcal {M}_{d}(\mathbb {C})\), the following equation holds: \([\Phi _1\circ \Phi _2](UZU^*) = \Phi _1 [U \Phi _2(Z) U^*] = U^* [\Phi _1\circ \Phi _2 (Z)] U\).

Let us now show one of the composition rules, say the one for two CDUC maps, leaving the other three to the reader. Consider thus two CDUC maps acting as follows:

$$\Phi ^{(2)}_{(A_i,B_i)}(X) = {\text {diag}}(A_i |{\text {diag}}X\rangle ) + \widetilde{B_i}\odot X.$$

We have

$$\begin{aligned} \Phi ^{(2)}_{(A_1, B_1)} \circ \Phi ^{(2)}_{(A_2, B_2)}(X)&= \Phi ^{(2)}_{(A_1, B_1)} \Big (\underbrace{ {\text {diag}}(A_2 |{\text {diag}}X\rangle ) + \widetilde{B_2}\odot X }_{Y}\Big ) \\&= {\text {diag}}(A_1 |{\text {diag}}Y\rangle ) + \widetilde{B_1}\odot Y\\&= {\text {diag}}(\mathfrak A |{\text {diag}}X\rangle ) + \widetilde{\mathfrak B}\odot X, \end{aligned}$$

where \((\mathfrak A, \mathfrak B)\) are given precisely by the composition rule \(\circ _2\) from Definition 4.1. Indeed, for the diagonal part, since \(\tilde{B}_2\) has zero diagonal, we have \({\text {diag}}(Y) = A_2 |{\text {diag}}X\rangle \), hence

$$\begin{aligned} {\text {diag}}(A_1 |{\text {diag}}Y\rangle ) = {\text {diag}}(A_1 A_2|{\text {diag}}X\rangle ), \end{aligned}$$

proving the claim for \(\mathfrak A\). Similarly, in \(\widetilde{B}_1 \odot Y\), only the off-diagonal entries of Y matter, and those are given by \(\widetilde{B}_2 \odot X\). Hence,

$$\begin{aligned} \widetilde{B}_1 \odot Y = \widetilde{B}_1 \odot \left( \widetilde{B}_2 \odot X \right) = (\widetilde{B_1 \odot B_2})\odot X, \end{aligned}$$

proving that \(\widetilde{\mathfrak B} = \widetilde{B_1 \odot B_2}\) and finishing the proof. \(\square \)
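The compositions \(\circ _1\) and \(\circ _2\) are straightforward to implement. The sketch below (the function name compose is ours, and duc_action is reused from the Sect. 2 snippet, so this is an illustration under those assumptions) checks the CDUC case \(i=j=2\) of Proposition 4.4 on random matrix pairs.

```python
import numpy as np

def compose(pair1, pair2, kind=2):
    """The bilinear compositions o_1 (kind=1) and o_2 (kind=2) from Definition 4.1."""
    A1, B1 = pair1
    A2, B2 = pair2
    A = A1 @ A2
    B2eff = B2.T if kind == 1 else B2
    B = B1 * B2eff + np.diag(np.diag(A - B1 * B2))
    return A, B

rng = np.random.default_rng(3)
d = 4

def random_pair(d):
    A = rng.random((d, d))
    B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    np.fill_diagonal(B, np.diag(A))              # enforce diag(A) = diag(B)
    return A, B

(A1, B1), (A2, B2) = random_pair(d), random_pair(d)
Af, Bf = compose((A1, B1), (A2, B2), kind=2)
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
lhs = duc_action(A1, B1, duc_action(A2, B2, X, kind=2), kind=2)
print(np.allclose(lhs, duc_action(Af, Bf, X, kind=2)))    # True
```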

We are now finally ready to prove the main result of our paper.

Theorem 4.5

Consider matrix pairs \((A,B), (C,D) \in \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}\) such that the corresponding (C)DUC linear maps \(\Phi ^{(i)}_{(A,B)}, \Phi ^{(i)}_{(C,D)} \in \mathcal {T}_d(\mathbb {C})\) are PPT for \(i=1,2\). Then, the compositions \(\Phi ^{(i)}_{(A,B)} \circ \Phi ^{(j)}_{(C,D)}\) are entanglement breaking for \(i,j=1,2\).

Proof

First of all, invoke Proposition 4.4 to deduce that

$$\begin{aligned} \Phi ^{(i)}_{(A,B)} \circ \Phi ^{(j)}_{(C,D)} = {\left\{ \begin{array}{ll} \, \Phi ^{(2)}_{(\mathfrak {A},\mathfrak {B})}, &{}\text {where } (\mathfrak {A},\mathfrak {B}) = (A,B)\circ _i (C,D) \quad \text {if } i=j \\ \, \Phi ^{(1)}_{(\mathfrak {A},\mathfrak {B})}, &{}\text {where } (\mathfrak {A},\mathfrak {B}) = (A,B)\circ _i (C,D) \quad \text {if } i\ne j \end{array}\right. } \end{aligned}$$

Hence, to prove the theorem, it suffices to show that the matrix pairs \((\mathfrak {A}',\mathfrak {B}') = (A,B)\circ _1 (C,D)\) and \((\mathfrak {A},\mathfrak {B}) = (A,B)\circ _2 (C,D)\) are PCP, see Proposition 2.9. In what follows, we show that the latter pair \((\mathfrak {A},\mathfrak {B})\) is PCP; the former case then follows from Remark 4.2. With this end in sight, let us analyze the structure of \(\mathfrak {B}={\text {diag}}AC + \widetilde{B}\odot \widetilde{D}\) in some detail. Since all the maps involved in this theorem are PPT, their compositions are guaranteed to be PPT as well, which implies that \(\mathfrak {B}\in \mathsf {PSD}_d\). Let us initially restrict ourselves to the case when \(d=2\). Here, we have

$$\begin{aligned} \mathfrak {B} = \left( \begin{array}{cc} A_{11}C_{11} + A_{12}C_{21} &{} B_{12}D_{12} \\ B_{21}D_{21} &{} A_{21}C_{12} + A_{22}C_{22} \end{array} \right) = \left( \begin{array}{cc} A_{11}C_{11} &{} 0 \\ 0 &{} A_{22}C_{22} \end{array} \right) + \left( \begin{array}{cc} A_{12}C_{21} &{} B_{12}D_{12} \\ B_{21}D_{21} &{} A_{21}C_{12} \end{array} \right) \end{aligned}$$

Since \(A,C\in \mathsf {EWP}_d\) and \(A_{12}A_{21}C_{12}C_{21} \ge | B_{12}D_{12} |^2\) (because the maps \(\Phi ^{(i)}_{(A,B)}\) and \(\Phi ^{(i)}_{(C,D)}\) are PPT), it is clear that \(\mathfrak {B}\in \mathsf {PSD}_2^2\). Similar splitting can be performed in the \(d=3\) case:

$$\begin{aligned} \mathfrak {B}&= \left( \begin{array}{ccc} A_{11}C_{11} + A_{12}C_{21} + A_{13}C_{31} &{} B_{12}D_{12} &{} B_{13}D_{13} \\ B_{21}D_{21} &{} A_{21}C_{12} + A_{22}C_{22} + A_{23}C_{32} &{} B_{23}D_{23} \\ B_{31}D_{31} &{} B_{32}D_{32} &{} A_{31}C_{13} + A_{32}C_{23} + A_{33}C_{33} \end{array} \right) \\&= {\text {diag}}(A\odot C) + \left( \begin{array}{ccc} A_{12}C_{21} &{} B_{12}D_{12} &{} 0 \\ B_{21}D_{21} &{} A_{21}C_{12} &{} 0 \\ 0 &{} 0 &{} 0 \end{array} \right) + \left( \begin{array}{ccc} A_{13}C_{31} &{} 0 &{} B_{13}D_{13} \\ 0 &{} 0 &{} 0 \\ B_{31}D_{31} &{} 0 &{} A_{31}C_{13} \end{array} \right) \\&\quad + \left( \begin{array}{ccc} 0 &{} 0 &{} 0 \\ 0 &{} A_{23}C_{32} &{} B_{23}D_{23} \\ 0 &{} B_{32}D_{32} &{} A_{32}C_{23} \end{array} \right) \end{aligned}$$

which shows that \(\mathfrak {B}\in \mathsf {PSD}_3^2\). It should now be apparent that the above splittings are nothing but special cases of a more general decomposition, which holds for \(\mathfrak {B}\in \mathsf {PSD}_d\) for arbitrary \(d\in \mathbb {N}\):

$$\begin{aligned} \mathfrak {B} = {\text {diag}}(A\odot C) + \sum _{1\le i<j \le d} \left( \begin{array}{ c c } A_{ij}C_{ji} &{} B_{ij}D_{ij} \\ B_{ji}D_{ji} &{} A_{ji}C_{ij} \end{array} \right) _{i,j\in [d]} \end{aligned}$$

Observe that we used the notation from Lemma 3.7. The PPT constraints \(A,C\in \mathsf {EWP}_d\) and \(A_{ij}A_{ji}C_{ij}C_{ji}\ge |B_{ij}D_{ij}|^2 \,\, \forall i,j\) imply that all the matrices in the above decomposition are positive semi-definite, which shows that \(\mathfrak {B}\) has factor width 2. A swift application of Theorem 3.10 then shows that \((\mathfrak {A},\mathfrak {B})\in \mathsf {PCP}_d^2\). \(\square \)

We should emphasize here that the above theorem contains a stronger form of the PPT\(^2\) conjecture for (C)DUC maps: we show that the composition of two PPT (C)DUC maps corresponds to a matrix pair \((\mathfrak {A},\mathfrak {B})\in \mathsf {PCP}_d^2\), where \(\mathsf {PCP}_d^2\) is a strict subset of \(\mathsf {PCP}_d\), for \(d \ge 3\), see Remark 3.3.

Let the vectors \(\{|v_n\rangle ,|w_n\rangle \}_{n\in I}\subset \mathbb {C}^{d}\) form the PCP decomposition of \((\mathfrak {A},\mathfrak {B})\) as in Definition 3.2. By defining \(\mathfrak {A}_n = |v_n\odot \overline{v_n}\rangle \langle w_n\odot \overline{w_n}| \) and \(\mathfrak {B}_n=|v_n\odot w_n\rangle \langle v_n\odot w_n| \), we see that for \(i=1,2\), the Choi matrix \(J(\Phi ^{(i)}_{(\mathfrak {A},\mathfrak {B})})\) splits up into a sum of PPT Choi matrices \(J(\Phi ^{(i)}_{(\mathfrak {A}_n,\mathfrak {B}_n)})\), each having support on a \(2\otimes 2\) subsystem (barring some diagonal entries). Hence, we can write

$$\begin{aligned} J(\Phi ^{(i)}_{(\mathfrak {A},\mathfrak {B})}) = \sum _{n\in I}J(\Phi ^{(i)}_{(\mathfrak {A}_n,\mathfrak {B}_n)}) \implies \Phi ^{(i)}_{(\mathfrak {A},\mathfrak {B})} = \sum _{n\in I} \Phi ^{(i)}_{(\mathfrak {A}_n,\mathfrak {B}_n)}, \end{aligned}$$
(14)

where, since each \(\Phi ^{(i)}_{(\mathfrak {A}_n,\mathfrak {B}_n)}\) is a PPT map acting on a qubit system, the sum \(\Phi ^{(i)}_{(\mathfrak {A},\mathfrak {B})}\) is entanglement breaking, for \(i=1,2\).
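Numerically, the factor-width-2 certificate of Theorem 4.5 is easy to observe on random examples. The sketch below (an illustration reusing the compose and comparison_matrix helpers from earlier snippets) draws PPT pairs satisfying Proposition 2.9 (3), composes them via \(\circ _2\), and checks that the comparison matrix of the resulting \(\mathfrak {B}\) is positive semi-definite, i.e., that \(\mathfrak {B}\in \mathsf {PSD}_d^2\), as guaranteed by Theorems 3.9 and 4.5.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 5

def random_ppt_pair(d):
    """A pair (A, B) satisfying the PPT conditions of Proposition 2.9 (3):
    B is PSD and A = |B| entrywise, so A is EWP, diag A = diag B, A_ij A_ji = |B_ij|^2."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    B = G @ G.conj().T
    return np.abs(B), B

(A, B), (C, D) = random_ppt_pair(d), random_ppt_pair(d)
Af, Bf = compose((A, B), (C, D), kind=2)            # composition rule o_2 from Definition 4.1
M = comparison_matrix(Bf, k=1)                      # helper from the Sect. 3 snippet
print(np.linalg.eigvalsh(M).min() >= -1e-9)         # True: Bf has factor width 2
```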

5 Perspectives and Open Questions

Let us first emphasize that, in light of Proposition 2.1, the main result in Theorem 4.5 can be interpreted as follows. Assume Alice, Bob, and Charlie share a state \(\rho \otimes \sigma \) as in Fig. 1, with \(\rho , \sigma \in \mathcal {M}_{d}(\mathbb {C})\otimes \mathcal {M}_{d}(\mathbb {C})\) being PPT and (conjugate) local diagonal unitary invariant in the sense of Remark 2.3. Alice acts on her system with a (C)DUC map \(\Phi _A\); similarly, Bob acts on his system with a (C)DUC map \(\Psi _B\). Charlie then postselects his bipartite system onto the maximally entangled state \(|e\rangle \). Then, Theorem 4.5 says that the resulting state of Alice and Bob is separable. Note, however, that the local actions of Alice, Bob, and Charlie are restricted; a general result, in the sense of point (3) of Proposition 2.1, does not hold in our diagonal-covariant setting.

The main question left open in this work is the PPT\(^2\) question for DOC maps. We conjecture that the answer is the same as for (C)DUC maps.

Conjecture 5.1

The composition of two arbitrary PPT maps in \(\mathsf {DOC}_d\) is entanglement breaking.

We recall (see [28, Lemma 9.3]) the composition rule for DOC maps. Given two triples of matrices \((A_1,B_1,C_1), (A_2,B_2,C_2)\in \mathcal {M}_{d}(\mathbb {C})^{\times 3}_{\mathbb {C}^{d}}\), one has

$$\begin{aligned} \Phi ^{(3)}_{(A_1,B_1,C_1)} \circ \Phi ^{(3)}_{(A_2,B_2,C_2)} = \Phi ^{(3)}_{(\mathfrak {A},\mathfrak {B},\mathfrak {C})}, \end{aligned}$$

where

$$\begin{aligned} \mathfrak {A}&= A_1 A_2 \\ \mathfrak {B}&= B_1 \odot B_2 + C_1\odot C_2^\top + {\text {diag}}(A_1 A_2 - 2A_1\odot A_2) \\ \mathfrak {C}&= B_1\odot C_2 + C_1\odot B_2^\top + {\text {diag}}(A_1 A_2 - 2A_1\odot A_2). \end{aligned}$$

Hence, in terms of matrix triples, Conjecture 5.1 posits that \((\mathfrak {A},\mathfrak {B},\mathfrak {C})\in \mathsf {TCP}_d\) for arbitrary matrix triples \((A_i,B_i,C_i)\in \mathcal {M}_{d}(\mathbb {C})^{\times 3}_{\mathbb {C}^{d}}\) such that \(\Phi ^{(3)}_{(A_i,B_i,C_i)}\) is PPT, for \(i=1,2\).
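The composition rule above is also immediate to implement; the sketch below (hypothetical name compose_triples, an illustration only) mirrors the displayed formulas and could serve as a starting point for numerical experiments around Conjecture 5.1.

```python
import numpy as np

def compose_triples(t1, t2):
    """DOC composition rule from [28, Lemma 9.3] on matrix triples (A, B, C)."""
    A1, B1, C1 = t1
    A2, B2, C2 = t2
    A = A1 @ A2
    corr = np.diag(np.diag(A1 @ A2 - 2 * (A1 * A2)))
    B = B1 * B2 + C1 * C2.T + corr
    C = B1 * C2 + C1 * B2.T + corr
    return A, B, C
```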

We would like to point out that even if the conjecture above holds in full generality, we do not have evidence for a stronger conclusion, as was the case with (C)DUC maps. Indeed, we have numerical evidence for the existence of PPT triples \((A_1, B_1, C_1), (A_2, B_2, C_2)\) (see Proposition 2.10) such that their compositions \((\mathfrak A, \mathfrak B, \mathfrak C)\) have factor width \(>2\). More precisely, we have found PPT matrix pairs \((A,B) \in \mathcal {M}_{d}(\mathbb {C})^{\times 2}_{\mathbb {C}^{d}}\) such that \((\mathfrak {A},\mathfrak {B})\notin \mathsf {PCP}_d^2\), where \((\mathfrak {A},\mathfrak {B},\mathfrak {B})\) is obtained by composing the triple \((A,B,B)\) with itself in the above fashion. Hence, in order to prove the PPT\(^2\) conjecture for the more general class of DOC maps, one likely requires stronger criteria for separability, in terms of sufficient conditions for membership in both the \(\mathsf {PCP}_d\) and \(\mathsf {TCP}_d\) cones.