1 Introduction

Modern data systems exhibit an ever-growing gap between the volume of stored information and the bit-per-second rates at which it can be communicated, the latter being limited by the noisy transmission medium [1]. Quantum communication is thus expected to enter the sixth generation of cellular networks (6G) in order to achieve performance gains [2,3,4].

The data volumes that can be handled are even larger when a system is limited to identifying alerts, rather than recovering the full information. In Shannon’s transmission task [5], a transmitter sends a message over a noisy channel, and the receiver needs to find which message was sent. In some modern event-triggered applications, however, the receiver may simply perform a binary decision on whether a particular message of interest was sent or not. This setting is known as identification via channels [6]. Identification (ID) is relevant for various applications such as watermarking [7,8,9] and sensor communication [10]. In vehicle-to-X communication [11], a vehicle may announce information about its future movements to the surrounding road users. Every road user is interested in one specific movement that interferes with its plans, and it checks only if this movement is announced or not.

The ID capacity of a classical-quantum channel was determined by Löber [12] and Ahlswede and Winter [13] (see also [14]). The ID capacity turns out to have the same value as the transmission capacity for most classical-input single-user channels that we know of. However, the units are different. Specifically, the ID code size grows doubly exponentially in the block length, provided that the encoder has access to a source of randomness. Thereby, identification codes achieve an exponential advantage in throughput compared to transmission codes. This is attained by letting the encoding and decoding sets overlap. General results for ID are surveyed in [15]. Löber [12] considered a simultaneous identification scenario, in which a single measurement is used to perform identification for multiple receivers. This is also relevant to a network that consists of chains [16]. Boche et al. [17] considered identification over the classical-quantum channel under channel uncertainty and secrecy constraints. For quantum-quantum channels, even the single-user identification capacity is unknown so far, except for special channels [14]. In general, it can exceed the transmission capacity of a quantum channel [14] and was recently shown to exceed the simultaneous identification capacity [18]. For example, the transmission capacity and the simultaneous identification capacity of the noiseless qubit channel are both one [18, 19], but the identification capacity of the noiseless qubit channel is 2 and equals the entanglement-assisted transmission capacity [14]. The best lower bounds equal the amount of common randomness that can be generated over a channel, and thus, entanglement also increases the identification capacity [14].

The broadcast channel is a fundamental multi-user communication model, whereby a single transmitter sends messages to two receivers [20]. In the traditional transmission setting, the capacity region of the discrete memoryless broadcast channel is generally unknown, even in the classical case. The best known lower bound is due to Marton [21], and the best known upper bound was proven by Nair and El Gamal [22]. The two bounds coincide in special cases such as the more capable, less noisy, or degraded broadcast channels [23]. On the other hand, the ID capacity region of the classical broadcast channel was fully characterized by Bracher and Lapidoth [24, 25], for uniformly distributed messages. Namely, the ID capacity region is known for any classical discrete memoryless broadcast channel, without special requirements on the channel. The derivation in [24, 25] is based on a pool-selection technique that differs from the standard arguments. Related settings were also considered in [26,27,28]. The authors of the present paper have recently considered ID over the classical compound multiple-input multiple-output (MIMO) broadcast channel [29, 30].

Quantum broadcast channels were studied in various settings [31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46]. Yard et al. [31] derived the superposition inner bound and determined the capacity region for the degraded classical-quantum broadcast channel. Wang et al. [34] used the previous characterization to determine the capacity region for Hadamard broadcast channels. Dupuis et al. [35, 36] developed the entanglement-assisted version of Marton’s region. Quantum broadcast channels with conferencing decoders were recently considered in [47] as well, providing an information-theoretic perspective to the operation of quantum repeaters. In addition, security aspects were treated in [48, 49].

Optical communication forms the backbone of the Internet [50,51,52,53]. The Gaussian bosonic channel is a simple quantum-mechanical model for optical communication over free space or optical fibers [54, 55]. An optical communication system consists of a modulated source of photons, the optical channel, and an optical detector. For a single-mode bosonic broadcast channel, the channel input is an electromagnetic field mode with annihilation operator \(\hat{a}\), and the output is a pair of modes with annihilation operators \(\hat{b}_1\) and \(\hat{b}_2\), corresponding to each receiver. Bosonic broadcast channels are considered in different settings in [56,57,58,59,60,61,62,63].

In this work, we consider identification over the quantum broadcast channel. We derive an achievable ID rate region for the general quantum broadcast channel and establish a full characterization for the classical-quantum broadcast channel under a semi-average error criterion. We demonstrate our results and determine the ID capacity region of the quantum erasure broadcast channel. Furthermore, we establish the ID capacity region of the single-mode pure-loss bosonic broadcast channel with coherent-state encoding. The ID capacity region of the bosonic broadcast channel is depicted in Fig. 1 as the area below the solid blue line. For comparison, the transmission capacity region, as determined by Guha and Shapiro [56] subject to the minimum output-entropy conjecture, is indicated by the red dashed line. It can be seen that the ID capacity region is significantly larger than its transmission counterpart. We note that the ID result does not require the conjecture.

Fig. 1

The transmission and ID capacity regions of the pure-loss bosonic broadcast channel, with coherent-state encoding, mean photon-number input constraint \(N_A = 10\), and transmissivity \(\eta = 0.8\). The transmission capacity region \(\textsf{C}_{\textsf{T}}\) corresponds to the light gray area, and the ID capacity region \(\textsf{C}_{\textsf{ID}}\) comprises additionally the dark gray rectangular area

While the properties above are analogous to the classical setting [30], the analysis is more involved. To prove the direct part, we extend the pool-selection method due to Bracher and Lapidoth [24, 25] to the quantum setting. On the other hand, our converse proof is based on completely different arguments than in Bracher and Lapidoth’s classical proof. Instead, we exploit recent observations made by Boche et al. [17] as they treated the classical-quantum compound channel, combined with the arguments of Ahlswede and Winter [13] in their seminal paper on ID for the single-user classical-quantum channel.

This paper is organized as follows: In Sect. 2, we introduce the notation, give basic definitions, and introduce the communication model. Section 3 contains our main results. In Sect. 4, we demonstrate our results for the pure-loss bosonic broadcast channel and the erasure broadcast channel. Section 5 provides the achievability proof for identification over the quantum broadcast channel in finite dimensions, and Sect. 6 provides the proof for the ID capacity region of the classical-quantum broadcast channel. Finally, the results are summarized in Sect. 7.

2 Preliminaries and related work

2.1 Notation

We use the following notation conventions.

2.1.1 Basic notation

\(X,Y,\dots \) : Classical random variables
\(\mathcal {X},\mathcal {Y}, \dots \) : Finite sets (alphabets)
\(x,y,\dots \) : Constants and classical values
\(x^n = (x_1,x_2,\dots ,x_n) \in \mathcal {X}^n\) : Sequence of length n
\(P_X\) : Probability mass function (PMF) of X
\(\mathbb {E}[X] \) : Expectation of a random variable X
\(\mathcal {P}(\mathcal {X})\) : Set of all PMFs with finite support over a set \(\mathcal {X}\)
\(P^n(x^n) = \prod _{t=1}^n P(x_t)\) : n-fold product distribution
[N] : \(\left\{ {1, \dots , \lceil {N}\rceil } \right\} \)
\(A,B,\dots \) : Quantum systems
\(\mathcal {H}_A\) : Hilbert space A
\(\rho _A \in \mathscr {D}(\mathcal {H}_A)\) : Density operator on \(\mathcal {H}_A\)
\(\mathscr {D}(\mathcal {H}_A)\) : Set of density operators on \(\mathcal {H}_A\)
\(\mathcal {N}_{A\rightarrow B}:\mathscr {D}(\mathcal {H}_A)\rightarrow \mathscr {D}(\mathcal {H}_B) \) : Quantum channel (CPTP map)
\(\left\{ {D_j: j\in [J]} \right\} \) : Positive operator-valued measure (POVM)
\(\left| {\Phi _{AB}} \right\rangle = \frac{1}{\sqrt{d}} \sum \limits _{i=0}^{d-1} \left| {i} \right\rangle _A \otimes \left| {i} \right\rangle _B\) : A maximally entangled state of dimension d

2.1.2 Information measures

\(H(X) = \sum \limits _{x \in {{\,\textrm{supp}\,}}P_X} -P_{X}(x) \log _2 P_X(x)\) : Classical entropy
\(I(X; Y) = H(X) + H(Y) - H(X Y)\) : Classical mutual information
\(H(A)_\rho = H(\rho _A) = - {{\,\textrm{Tr}\,}}[\rho _A\log _2(\rho _A)]\) : Quantum entropy
\(I(A; B)_\sigma = H(\sigma _A) + H(\sigma _B) - H(\sigma _{AB})\) : Quantum mutual information
\(H(A|B)_\sigma = H(\sigma _{AB}) - H(\sigma _B)\) : Conditional quantum entropy

2.1.3 Quantum broadcast channels

A quantum broadcast channel \(\mathcal {N}_{ A\rightarrow B_1 B_2}: \mathscr {D}(\mathcal {H}_A)\rightarrow \mathscr {D}(\mathcal {H}_{B_1}\otimes \mathcal {H}_{B_2})\) corresponds to a quantum physical evolution from the input A to the combined output \(B_1,B_2\), associated with the transmitter and two receivers, respectively. We assume that the channel is memoryless. That is, if the systems \(A^n=(A_1,\ldots ,A_n)\) are sent through n channel uses, then the input \(\rho _{ A^n}\) undergoes the tensor product mapping \(\mathcal {N}_{ A^n\rightarrow B_1^n B_2^n}\equiv \mathcal {N}_{ A\rightarrow B_1 B_2}^{\otimes n}\). The marginal channel is defined by \( \mathcal {N}_{A\rightarrow B_1}^{(1)}(\rho _A)={{\,\textrm{Tr}\,}}_{B_2} \left( \mathcal {N}_{ A\rightarrow B_1 B_2}(\rho _{A}) \right) \) for Receiver 1, and similarly \(\mathcal {N}_{A\rightarrow B_2}^{(2)}\) for Receiver 2. The transmitter, Receiver 1, and Receiver 2 are often called Alice, Bob 1, and Bob 2. A classical-quantum (c-q) broadcast channel \(\mathcal {N}^{\,\text {c-q}}_{X\rightarrow B_1 B_2}\) is defined, in a similar manner, as a mapping \(\mathcal {X}\rightarrow \mathscr {D}(\mathcal {H}_{B_1}\otimes \mathcal {H}_{B_2})\).

2.2 Identification codes

In the following, we define the communication task of identification over a quantum broadcast channel, where the decoder is not required to recover the sender’s message \(i\), but simply determines whether a particular message \(i'\) was sent or not.

Definition 1

An \(\left( {N_1, N_2, n}\right) \) identification (ID) code for the quantum broadcast channel \(\mathcal {N}_{A\rightarrow B_1 B_2}\) consists of an encoding channel \(\mathcal {E}_{A^n}:[N_1]\times [N_2]\rightarrow \mathscr {D}(\mathcal {H}_A^{\otimes n})\) and collections of binary decoding POVMs \(\big \{ D^{(1)}_{i_1},\, \mathbb {1}-D^{(1)}_{i_1} \big \}\) on \(\mathcal {H}_{B_1}^{\otimes n}\) and \(\big \{ D^{(2)}_{i_2},\, \mathbb {1}-D^{(2)}_{i_2} \big \}\) on \(\mathcal {H}_{B_2}^{\otimes n}\), for \( i_1 \in [N_1]\) and \( i_2 \in [N_2]\). We denote the identification code by \(\mathcal {C}= (\mathcal {E}_{A^n}, \mathcal {D}_{B_1^n}, \mathcal {D}_{B_2^n})\).

The identification scheme is depicted in Fig. 2. Alice chooses a pair of messages \((i_1,i_2)\), where \(i_k\in [N_k]\), for \(k\in \{1,2\}\). She encodes the messages by preparing an input state \(\rho ^{i_1,i_2}_{A^n}\equiv \mathcal {E}_{A^n}(i_1,i_2)\) and sends the input system \(A^n\) through n uses of the quantum broadcast channel \(\mathcal {N}_{A\rightarrow B_1 B_2}\). Bob 1 and Bob 2 receive the output systems \(B_1^n\) and \(B_2^n\), respectively. Suppose that Bob \(k\) is interested in a particular message \(i'_k \in [N_k]\), where \(k\in \{1,2\}\). Then, he performs the binary measurement \(\big \{ D^{(k)}_{i'_k},\, \mathbb {1}-D^{(k)}_{i'_k} \big \}\) to determine whether \(i_k'\) was sent or not and obtains a measurement outcome \(s_k\in \{0,1\}\). He declares ‘no’ if the measurement outcome is \(s_k=0\), and ‘yes’ if \(s_k=1\).

Fig. 2

Identification over the quantum broadcast channel \(\mathcal {N}_{A\rightarrow B_1 B_2}^{{\otimes }n}\). Alice chooses a message pair \((i_1,i_2)\). She encodes the messages by preparing an input state \( \mathcal {E}_{A^n}(i_1,i_2)\), and sends the input system \(A^n\) through n uses of the quantum broadcast channel \(\mathcal {N}_{A\rightarrow B_1 B_2}^{{\otimes }n}\). Bob 1 and Bob 2 receive the output systems \(B_1^n\) and \(B_2^n\), respectively. As Bob \(k\) is interested in the message \(i'_k \in [N_k]\), he performs the binary measurement \(\big \{ D^{(k)}_{i'_k},\, \mathbb {1}-D^{(k)}_{i'_k} \big \}\) to determine whether \(i_k'\) was sent or not

The ID rates of the code \(\mathcal {C}\) are defined as \(R_k = \frac{1}{n} \log \log (N_k)\), for \(k \in \left\{ {1,2} \right\} \). In this work, we assume that the ID messages \(i_k\) are uniformly distributed over the set \([N_k]\), for \(k \in \left\{ {1,2} \right\} \). Therefore, the error probabilities are defined on average over the messages for the other receiver. Bob 1 makes an error in two cases: (1) He decides that \(i_1\) was not sent (missed ID); (2) Bob 1 decides that \(i'_1\) was sent, while in fact \(i_1\) was sent, and \(i_1 \ne i'_1\) (false ID). The probabilities of these two kinds of error, averaged over \(i_2 \in [N_2]\), are defined as

$$\begin{aligned} \bar{e}_{1,1}(\mathcal {N}, n, \mathcal {C}, i_1)&= \frac{1}{N_2} \sum _{i_2 \in [N_2]} {{\,\textrm{Tr}\,}}\left[ \left( \mathbb {1}-D^{(1)}_{i_1}\right) \mathcal {N}^{(1)\,\otimes n}_{A\rightarrow B_1}\left( \rho ^{i_1,i_2}_{A^n}\right) \right] , \end{aligned}$$
(1a)
$$\begin{aligned} \bar{e}_{1,2}(\mathcal {N}, n, \mathcal {C}, i'_1, i_1)&= \frac{1}{N_2} \sum _{i_2 \in [N_2]} {{\,\textrm{Tr}\,}}\left[ D^{(1)}_{i'_1}\, \mathcal {N}^{(1)\,\otimes n}_{A\rightarrow B_1}\left( \rho ^{i_1,i_2}_{A^n}\right) \right] , \end{aligned}$$
(1b)

for \(i_1,i_1'\in [N_1]\) such that \(i_1\ne i_1'\). Similarly, Bob 2’s error probabilities are

$$\begin{aligned} \bar{e}_{2,1}(\mathcal {N}, n, \mathcal {C}, i_2)&= \frac{1}{N_1} \sum _{i_1 \in [N_1]} {{\,\textrm{Tr}\,}}\left[ \left( \mathbb {1}-D^{(2)}_{i_2}\right) \mathcal {N}^{(2)\,\otimes n}_{A\rightarrow B_2}\left( \rho ^{i_1,i_2}_{A^n}\right) \right] , \end{aligned}$$
(1c)
$$\begin{aligned} \bar{e}_{2,2}(\mathcal {N}, n, \mathcal {C}, i'_2, i_2)&= \frac{1}{N_1} \sum _{i_1 \in [N_1]} {{\,\textrm{Tr}\,}}\left[ D^{(2)}_{i'_2}\, \mathcal {N}^{(2)\,\otimes n}_{A\rightarrow B_2}\left( \rho ^{i_1,i_2}_{A^n}\right) \right] , \end{aligned}$$
(1d)

for \(i_2,i_2'\in [N_2]\) such that \(i_2\ne i_2'\).

An \((N_1, N_2, n, \lambda _1,\lambda _2)\) ID-code \(\mathcal {C}\) for the quantum broadcast channel \(\mathcal {N}_{A\rightarrow B_1 B_2}\) satisfies

$$\begin{aligned} \max _{i_k \in [N_k]} \bar{e}_{k,1}(\mathcal {N}, n, \mathcal {C}, i_k)&< \lambda _1, \end{aligned}$$
(2a)
$$\begin{aligned} \max _{\begin{array}{c} i_k,i'_k \in [N_k], \\ i'_k \ne i_k \end{array}} \bar{e}_{k,2}(\mathcal {N}, n, \mathcal {C}, i'_k, i_k)&< \lambda _2, \end{aligned}$$
(2b)

for \(k \in \left\{ {1,2} \right\} \). An ID rate pair \((R_1, R_2)\) is achievable if for every \(\lambda _1,\lambda _2 > 0\) and sufficiently large \(n\), there exists an \(\left( {\exp {e^{nR_1}}, \exp {e^{nR_2}}, n, \lambda _1,\lambda _2}\right) \) ID-code. The ID capacity region \(\textsf{C}_{\textsf{ID}}(\mathcal {N})\) of the quantum broadcast channel \(\mathcal {N}_{A\rightarrow B_1 B_2}\) is defined as the set of achievable rate pairs.

2.3 Previous results

In the traditional transmission setting [5], the decoder Bob is required to find an estimate \(\hat{i}\) of Alice’s message. This is a more stringent requirement than identification, and it results in exponentially slower communication. Specifically, the number of messages scales as \(\exp (nR)\) for transmission, whereas it scales as \(\exp (e^{nR})\) for identification. While the transmission rate is measured in units of information bits per channel use, the identification rate has different units. Nonetheless, for the classical-quantum single-user channel, it turns out that the identification and transmission capacities have the same value.
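As a rough back-of-the-envelope illustration (the block length \(n=30\) and rate \(R=1\), with rates measured in nats, are arbitrary choices and not taken from the paper):

$$\begin{aligned} e^{nR} = e^{30} \approx 1.1\times 10^{13}, \qquad \exp \big ( e^{nR} \big ) = e^{e^{30}}, \end{aligned}$$

that is, a transmission code of this rate carries on the order of \(10^{13}\) messages, whereas an ID code of the same rate carries a number of messages with roughly \(e^{30}/\ln 10 \approx 4.6\times 10^{12}\) decimal digits.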

In the single-user setting, the ID capacity of the classical-quantum channel was determined by Löber [12] and Ahlswede and Winter [13]. Let \(\mathcal {W}_{X\rightarrow B}\) be a single-user c-q channel. The ID capacity \(C_{\text {ID}}(\mathcal {W})\) is then defined, in a similar manner, as the supremum of achievable ID rates over the c-q channel \(\mathcal {W}_{X\rightarrow B}\).

Theorem 1

(see [12, 13][64, Theorem 4]) The ID capacity of a single-user classical-quantum channel \(\mathcal {W}_{X\rightarrow B}\) is given by

$$\begin{aligned} C_{\text {ID}}(\mathcal {W}) = \max _{P_X \in \mathcal {P}(\mathcal {X})} I(X; B)_\rho , \end{aligned}$$
(3)

where \(\rho _{XB} = \sum _{x\in \mathcal {X}} P_X(x)\, \left| {x} \right\rangle \left\langle {x} \right| _X \otimes \mathcal {W}_{X\rightarrow B}(x)\).

While the single-user achievability proof in [6, 12] employs a random binning scheme based on transmission codes [65], we will see that the broadcast coding methods are significantly more involved and do not follow from the transmission characterization.
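As a small numerical illustration of the formula in Theorem 1, the following sketch evaluates the Holevo information \(I(X;B)_\rho \) for a toy c-q channel with two pure output states. The channel, the chosen states, the function names, and the grid search over the input PMF are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

# Toy c-q channel W: x=0 -> |0><0|,  x=1 -> |+><+|  (an assumed example).
ket0 = np.array([[1.0], [0.0]])
ketp = np.array([[1.0], [1.0]]) / np.sqrt(2)
W = [ket0 @ ket0.T, ketp @ ketp.T]

def holevo(p):
    """I(X;B) = H(sum_x p(x) W(x)) - sum_x p(x) H(W(x)); pure outputs have zero entropy."""
    rho_B = sum(px * Wx for px, Wx in zip(p, W))
    return vn_entropy(rho_B) - sum(px * vn_entropy(Wx) for px, Wx in zip(p, W))

# Maximize over the input PMF by a simple grid search (sufficient for a binary input).
ps = np.linspace(0, 1, 1001)
C_ID = max(holevo([p, 1 - p]) for p in ps)
print(round(C_ID, 4))   # ~ 0.60 bits for this pair of states
```

For this symmetric pair of states, the maximum is attained at the uniform input distribution.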

3 Results

Our results are presented below. Consider the quantum broadcast channel \(\mathcal {N}_{A\rightarrow B_1 B_2}\), as defined in Sect. 2.1.3. Define the rate region \(\mathscr {R}(\mathcal {N})\) as

$$\begin{aligned} \mathscr {R}(\mathcal {N}) = \bigcup _{ P_X \in \mathcal {P}(\mathcal {X}),\; |\phi _A^x\rangle } \Bigg \{ \begin{array}{l l} (R_1, R_2): &{} \displaystyle R_1 \le I(X; B_1)_\rho , \\ &{} \displaystyle R_2 \le I(X; B_2)_\rho \end{array}\Bigg \} \end{aligned}$$
(4)

with \(\rho _{XB_1B_2} \equiv \sum _{x\in \mathcal {X}} P_X(x)\, \left| {x} \right\rangle \left\langle {x} \right| _X \otimes \mathcal {N}_{A\rightarrow B_1 B_2}\big ( \left| {\phi _A^x} \right\rangle \left\langle {\phi _A^x} \right| \big )\).

Theorem 2

  1. 1.

    The region \(\mathscr {R}(\mathcal {N})\) is achievable for identification over the quantum broadcast channel \(\mathcal {N}_{A\rightarrow B_1 B_2}\). That is,

    $$\begin{aligned} \textsf{C}_{\textsf{ID}}(\mathcal {N}) \supseteq \mathscr {R}(\mathcal {N}). \end{aligned}$$
    (5)
  2. 2.

    The identification capacity region of a classical-quantum broadcast channel \(\mathcal {N}^{\,\text {c-q}}_{X\rightarrow B_1 B_2}\) is given by

    $$\begin{aligned} \textsf{C}_{\textsf{ID}}(\mathcal {N}^{\,\text {c-q}}) = \bigcup _{ P_X \in \mathcal {P}(\mathcal {X}) } \Bigg \{ \begin{array}{l l} (R_1, R_2): &{} \displaystyle R_1 \le I(X; B_1)_\rho , \\ &{} \displaystyle R_2 \le I(X; B_2)_\rho \end{array} \Bigg \}, \end{aligned}$$
    (6)

    with \(\rho _{XB_1B_2} = \sum _{x\in \mathcal {X}} P_X(x)\, \left| {x} \right\rangle \left\langle {x} \right| _X \otimes \mathcal {N}^{\,\text {c-q}}_{X\rightarrow B_1 B_2}(x)\).

The proof of part 1 is given in Sect. 5, where we show that all rate pairs in the interior of the region \(\mathscr {R}(\mathcal {N})\) are achievable. In Sect. 6, we prove part 2 and show the classical-quantum converse part, i.e. that no rate pair outside the region above can be achieved for identification over the classical-quantum broadcast channel. In the proof of part 1, we use the pool-selection method by Bracher and Lapidoth [24, 25]. This enables the same extension to the broadcast setting as in the classical case [24, 25]. In the converse proof, on the other hand, we use a different approach, exploiting recent observations by Boche et al. [17] along with the methods of Ahlswede and Winter [13].

Remark 1

As mentioned in Sect. 2.3, in the classical-quantum single-user setting, the ID and transmission capacity characterizations are identical. On the other hand, in the broadcast ID setting, we see a departure from this equivalence [24, 25, 28]. The examples in the following section demonstrate this departure in a more explicit manner, showing that the ID capacity region can be strictly larger than the transmission capacity region.

Remark 2

Consider the classical-quantum broadcast channel. In general, the rate \(R_k\) of User k must be limited by the ID capacity of the single-user channel from A to \(B_k\), for \(k\in \{1,2\}\). This observation leads to the following rectangular upper bound,

$$\begin{aligned} \textsf{C}_{\textsf{ID}}(\mathcal {N}^{\,\text {c-q}}) \subseteq \Bigg \{ \begin{array}{l l} (R_1, R_2) : &{} R_1 \le \textsf{C}_{\textsf{ID}}^{(1)} , \\ &{} R_2 \le \textsf{C}_{\textsf{ID}}^{(2)} \end{array} \Bigg \}, \end{aligned}$$
(7)

where \(\textsf{C}_{\textsf{ID}}^{(k)} = \max _{P_X} I(X; B_k)_\rho \). However, in identification over the broadcast channel, the users cannot necessarily achieve the full capacity of each marginal channel simultaneously, since both marginal channels must share the same input distribution in the capacity formula on the right-hand side of (6). Equality holds in (7) if the same input distribution \(P_X^\star \) maximizes both mutual informations simultaneously, i.e. when

$$\begin{aligned} P_X^\star = {\mathop {{{\,\mathrm{arg{\,}max}\,}}}\limits _{P_X }} I(X; B_1)_\rho = {\mathop {{{\,\mathrm{arg{\,}max}\,}}}\limits _{P_X}} I(X;B_2)_\rho . \end{aligned}$$
(8)

4 Examples

As examples, we consider the pure-loss bosonic broadcast channel and the erasure broadcast channel.

4.1 Bosonic broadcast channel

To demonstrate our results, consider the single-mode bosonic broadcast channel. We extend the finite-dimensional result in Theorem 2 to the bosonic channel with infinite-dimensional Hilbert spaces, based on the discretization limiting argument by Guha [57]. A detailed description of (continuous-variable) bosonic systems can be found in [54]. Here, we only define the notation for the quantities that we use. We use hat-notation, e.g. \(\hat{a}\), \(\hat{b}_1\), \(\hat{b}_2\), \(\hat{e}\), to denote annihilation operators that act on a quantum state. A thermal state \(\tau (N)\) is a Gaussian mixture of coherent states, where \( \tau (N) \equiv \int _{\mathbb {C}} d^2 \alpha \frac{e^{-|\alpha |^2/N}}{\pi N} |\alpha \rangle \langle \alpha | \), with an average photon number \(N> 0\).

Consider a bosonic broadcast channel, whereby the channel input is an electromagnetic field mode with annihilation operator \(\hat{a}\), and the output is a pair of modes with annihilation operators \(\hat{b}_1\) and \(\hat{b}_2\). The annihilation operators correspond to Alice, Bob 1, and Bob 2, respectively. The input–output relation of the pure-loss bosonic broadcast channel in the Heisenberg picture [66] is given by

$$\begin{aligned} \hat{b}_1&=\sqrt{\eta }\, \hat{a}+\sqrt{1-\eta }\,\hat{e}, \end{aligned}$$
(9)
$$\begin{aligned} \hat{b}_2&=\sqrt{1-\eta }\, \hat{a}-\sqrt{\eta }\,\hat{e}, \end{aligned}$$
(10)

where \(\hat{e}\) is associated with the environment noise and \(\eta \) is the transmissivity, \(0\le \eta \le 1\), which captures, for instance, the length of the optical fiber and its absorption length [67]. The relations above correspond to the outputs of a beam splitter, as illustrated in Fig. 3. In the pure-loss setting, the environment is in the vacuum state, i.e., \(\hat{e}= \left| {0} \right\rangle \). It is assumed that the encoder uses a coherent state protocol with an input constraint. That is, the input state is a coherent state \(|x\rangle \), \(x\in \mathbb {C}\), such that each codeword satisfies \(\frac{1}{n}\sum _{i=1}^n |x_{i}|^2\le N_{A}\).
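For concreteness, we recall a standard beam-splitter identity (a well-known fact, not specific to this paper): a coherent-state input with a vacuum environment produces a product of coherent states at the two output modes,

$$\begin{aligned} \left| {x} \right\rangle \otimes \left| {0} \right\rangle \;\longmapsto \; \left| {\sqrt{\eta }\, x} \right\rangle \otimes \left| {\sqrt{1-\eta }\, x} \right\rangle , \end{aligned}$$

where the input and environment modes appear on the left and the modes \(\hat{b}_1\), \(\hat{b}_2\) on the right. Hence, an input ensemble with mean photon number \(N_A\) yields mean photon numbers \(\eta N_A\) and \((1-\eta )N_A\) at Bob 1 and Bob 2, respectively.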

Fig. 3

The beam splitter relation of the single-mode bosonic broadcast channel. The channel input is an electromagnetic field mode with annihilation operator \(\hat{a}\), and the output is a pair of modes with annihilation operators \(\hat{b}_1\) and \(\hat{b}_2\), corresponding to each receiver. The mode \(\hat{e}\) is associated with the environment noise. In the pure-loss setting, the environment is in the vacuum state, i.e., \(\hat{e}= \left| {0} \right\rangle \). The parameter \(\eta \) is the transmissivity, which captures the length of the optical fiber and its absorption length

Based on part 1 of Theorem 2, the ID capacity region of the pure-loss bosonic broadcast channel with coherent encoding and average photon number at most \(N_A\) is given by

$$\begin{aligned} \textsf{C}_{\textsf{ID}}(\mathcal {N})&= \Bigg \{ \begin{array}{rl} (R_1,R_2) \,:\; R_1 &{}\le g(\eta N_{A}) \\ R_2 &{}\le g((1-\eta )N_{A}) \end{array} \Bigg \} \end{aligned}$$
(11)

where \(g(N) = (N+1)\log (N+1)-N\log (N)\) is the entropy of a thermal state with mean photon number N, with \(0 \log 0 :=0\). See Fig. 1. The converse part immediately follows from the single-user capacity characterization. To show achievability, set the input to be an ensemble of coherent states, \(|X\rangle \), with a circularly-symmetric Gaussian distribution with zero mean and variance \(\mathbb {E}[\left| {X}\right| ^2] = N_{A}\). As mentioned in Remark 2, the users cannot necessarily achieve the marginal capacity. Nevertheless, for the bosonic broadcast channel, each user achieves the full capacity of the respective marginal channel.
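For the parameters used in Fig. 1, \(N_A = 10\) and \(\eta = 0.8\), and taking logarithms to base 2, the corner point of the rectangle in (11) evaluates to

$$\begin{aligned} g(\eta N_A) = g(8) = 9\log _2 9 - 8\log _2 8 \approx 4.53, \qquad g((1-\eta )N_A) = g(2) = 3\log _2 3 - 2 \approx 2.75, \end{aligned}$$

in bits per channel use.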

On the other hand, the transmission capacity region of the single-mode pure-loss bosonic broadcast channel is [56, 57],

$$\begin{aligned} \textsf{C}_{\textsf{T}}(\mathcal {N}) = \bigcup _{0\le \beta \le 1} \Bigg \{ \begin{array}{ll} (R_1,R_2): &{} R_1 \le g(\eta \beta N_{A}) \\ &{} R_2 \le g((1-\eta ) N_{A})-g((1-\eta )\beta N_{A}) \end{array} \Bigg \}. \end{aligned}$$
(12)

The subscript ‘T’ stands for ‘transmission’. This characterization holds under the assumption that the minimum output-entropy conjecture is true (see Strong Conjecture 2 in [57]). The transmission capacity region and the ID capacity region are depicted in Fig. 1 as the light gray area (T) and, in addition, the dark gray area (ID), respectively.
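The boundary data behind Fig. 1 can be reproduced with a short numerical sketch (the grid over \(\beta \) and the base-2 logarithm are implementation choices):

```python
import numpy as np

def g(N):
    """Entropy of a thermal state with mean photon number N (bits), with g(0) = 0."""
    return 0.0 if N <= 0 else (N + 1) * np.log2(N + 1) - N * np.log2(N)

N_A, eta = 10.0, 0.8   # parameters of Fig. 1

# ID capacity region (11): the rectangle R1 <= g(eta*N_A), R2 <= g((1-eta)*N_A).
id_corner = (g(eta * N_A), g((1 - eta) * N_A))          # ~ (4.53, 2.75) bits per use

# Transmission capacity region (12): union over beta of
#   R1 <= g(eta*beta*N_A),  R2 <= g((1-eta)*N_A) - g((1-eta)*beta*N_A).
betas = np.linspace(0.0, 1.0, 201)
trans_boundary = [(g(eta * b * N_A), g((1 - eta) * N_A) - g((1 - eta) * b * N_A))
                  for b in betas]

# Every point of the transmission boundary lies inside the ID rectangle,
# in agreement with Fig. 1.
assert all(r1 <= id_corner[0] + 1e-9 and r2 <= id_corner[1] + 1e-9
           for r1, r2 in trans_boundary)
print(id_corner)
```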

The converse part for the transmission result on the pure-loss bosonic broadcast channel relies on the strong minimum output-entropy conjecture [56], as stated below. Let the noise modes \(\{ \hat{e}_i \}_{i=1}^n\) be in a product state of n vacuum states, and assume that \(H(A^n)_\rho =n g( N_A)\). Then, the strong minimum output-entropy conjecture states that [56]

$$\begin{aligned} H(B^n)_\rho \ge n g (\eta N_A) \,. \end{aligned}$$
(13)

We note that in the single-user case, the conjecture is required for neither identification nor transmission [68,69,70] [71, Section VI.B]. The conjecture is known to hold in several special cases [72, Section III]; for example, it is well known to hold for \(n=1\). However, as pointed out in [70, Section V], this is insufficient for the converse proof of the bosonic broadcast channel, which requires the strong minimum output-entropy conjecture stated above.

4.2 Erasure broadcast channel

We consider the qubit erasure broadcast channel, specified by \(\mathcal {N}(\rho )=U\rho U^\dagger \),

$$\begin{aligned} U \left| {\psi } \right\rangle _A = \sqrt{1-\lambda }\, \left| {\psi } \right\rangle _{B_1} \otimes \left| {e} \right\rangle _{B_2} + \sqrt{\lambda }\, \left| {e} \right\rangle _{B_1} \otimes \left| {\psi } \right\rangle _{B_2} , \end{aligned}$$
(14)

where the erasure state \(\left| {e} \right\rangle \) is orthogonal to the qubit space, and \(0\le \lambda \le \frac{1}{2}\) is a given parameter. Hence, the marginal channels to Bob 1 and Bob 2 are standard quantum erasure channels, with erasure parameters \(\lambda \) and \(1-\lambda \), respectively. Specifically,

$$\begin{aligned} \mathcal {N}^{(1)}_{A\rightarrow B_1}(\rho ) = (1-\lambda )\, \rho + \lambda \left| {e} \right\rangle \left\langle {e} \right| , \end{aligned}$$
(15)
$$\begin{aligned} \mathcal {N}^{(2)}_{A\rightarrow B_2}(\rho ) = \lambda \, \rho + (1-\lambda ) \left| {e} \right\rangle \left\langle {e} \right| . \end{aligned}$$
(16)
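A small numerical sanity check of the marginal channels above, assuming the isometry in (14) as written (the dimensions, the test state, and the helper functions are implementation choices):

```python
import numpy as np

lam, d = 0.25, 3                      # erasure parameter; qubit space plus erasure state |e> = |2>

def ket(i):
    v = np.zeros((d, 1)); v[i, 0] = 1.0
    return v

e = ket(2)

# Isometry U|psi>_A = sqrt(1-lam)|psi>_{B1}|e>_{B2} + sqrt(lam)|e>_{B1}|psi>_{B2}  (eq. (14)).
U = np.zeros((d * d, 2))
for i in range(2):
    U[:, [i]] = np.sqrt(1 - lam) * np.kron(ket(i), e) + np.sqrt(lam) * np.kron(e, ket(i))

def broadcast(rho_A):                 # N(rho) = U rho U^dagger on B1 x B2
    return U @ rho_A @ U.conj().T

def marginal(rho_B1B2, keep):         # partial trace over the other receiver's system
    t = rho_B1B2.reshape(d, d, d, d)
    return np.trace(t, axis1=1, axis2=3) if keep == 1 else np.trace(t, axis1=0, axis2=2)

rho_A = np.array([[0.5, 0.5], [0.5, 0.5]])         # test input |+><+|
rho_B1 = marginal(broadcast(rho_A), keep=1)
rho_B2 = marginal(broadcast(rho_A), keep=2)

embed = np.zeros((d, d)); embed[:2, :2] = rho_A    # the input state viewed inside the larger space
assert np.allclose(rho_B1, (1 - lam) * embed + lam * (e @ e.T))         # erasure parameter lam
assert np.allclose(rho_B2, lam * embed + (1 - lam) * (e @ e.T))         # erasure parameter 1 - lam
```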

The ID capacity region of the erasure broadcast channel \(\mathcal {N}_{A\rightarrow B_1 B_2}\) satisfies

$$\begin{aligned} \textsf{C}_{\textsf{ID}}(\mathcal {N}) \supseteq \mathscr {R}(\mathcal {N}) = \Bigg \{ \begin{array}{l l} (R_1, R_2) : &{} R_1 \le 1-\lambda , \\ &{} R_2 \le \lambda \end{array} \Bigg \}\,. \end{aligned}$$
(17)

This result is obtained in a straightforward manner. To show achievability, we apply part 1 of Theorem 2 and set \(P_X=\left( \frac{1}{2},\frac{1}{2}\right) \) over the ensemble \(\left\{ {\left| {0} \right\rangle , \left| {1} \right\rangle } \right\} \).

First, consider the symmetric case of \(\lambda = \frac{1}{2}\). Our achievable region \(\mathscr {R}(\mathcal {N})\) is then the best known bound on the ID capacity region. For \(\lambda < \frac{1}{2}\), however, we can improve upon this bound.

For a single-user quantum erasure channel \(\mathcal {L}_{A\rightarrow B}\) with a parameter \(\varepsilon \), Winter established achievability of the identification rate \(R = 2(1-\varepsilon )\) for \(\varepsilon < \frac{1}{2}\), and \(R = 1-\varepsilon \) for \(\varepsilon \ge \frac{1}{2}\) [14, Section 4]. If \(\lambda = \frac{1}{2}\), then this yields the rate pair \((R_1,R_2)=\left( \frac{1}{2}, \frac{1}{2} \right) \), which is the corner of our region in (17). On the other hand, for \(\lambda < \frac{1}{2}\), the rate pairs \((R_1,R_2) = (2(1-\lambda ),0)\) and \((R_1,R_2)=(0,\lambda )\) are achievable. Hence, by time division, i.e., coding for User 1 over a sub-block of length \(\alpha n\), and for User 2 over the remaining sub-block of \((1-\alpha )n\) channel uses, we have that the ID capacity region is lower-bounded by

$$\begin{aligned} \textsf{C}_{\textsf{ID}}(\mathcal {N}) \supseteq \mathscr {T}:=\bigcup _{0 \le \alpha \le 1} \Bigg \{ \begin{array}{l l} (R_1, R_2) : &{} R_1 \le 2 \alpha (1-\lambda ), \\ &{} R_2 \le (1-\alpha )\lambda \end{array} \Bigg \}\,. \end{aligned}$$
(18)

The transmission capacity region \(\textsf{C}_{\textsf{T}}(\mathcal {N})\) of the quantum erasure broadcast channel is also achieved by time-division. It is given by the rate pairs \((R_1, R_2)\) satisfying \(R_1 \le \alpha (1-\lambda )\) and \(R_2 \le (1-\alpha )\lambda \), for some \(0 \le \alpha \le 1\), as can be shown using the same methods as in [73, Example 3.2 and Section 5.4.1] and [19, Section 20.4.3]. Clearly, \(\textsf{C}_{\textsf{T}}(\mathcal {N})\) is contained in both of the regions \(\mathscr {R}(\mathcal {N})\) and \(\mathscr {T}\). We deduce that for an erasure broadcast channel that is not symmetric, our achievable region \(\mathscr {R}(\mathcal {N})\) in Theorem 2 is suboptimal, but improves on the best previously known bound in the interval \(0 < R_1 \le 1 - \lambda \). Figure 4 shows the regions \(\textsf{C}_{\textsf{T}}(\mathcal {N})\), \(\mathscr {T}\) and \(\mathscr {R}(\mathcal {N})\) for a quantum erasure broadcast channel \(\mathcal {N}\) with \(\lambda = \frac{1}{4}\).
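A short sketch that computes the three regions of Fig. 4 (the grid over \(\alpha \) is an implementation choice):

```python
import numpy as np

lam = 0.25                                 # erasure parameter, as in Fig. 4
alphas = np.linspace(0.0, 1.0, 201)        # time-division parameter

# Transmission capacity region C_T: R1 <= alpha*(1-lam), R2 <= (1-alpha)*lam.
CT_boundary = [(a * (1 - lam), (1 - a) * lam) for a in alphas]

# Region T (eq. (18)): time division between single-user ID codes.
T_boundary = [(2 * a * (1 - lam), (1 - a) * lam) for a in alphas]

# Rectangular lower bound R(N) (eq. (17)): R1 <= 1 - lam, R2 <= lam.
R_corner = (1 - lam, lam)                  # (0.75, 0.25) for lam = 1/4

# C_T lies inside the rectangle (checked below) and inside T, whereas T and R(N)
# are incomparable: (0.75, 0.25) lies outside T, and (1.5, 0) lies outside R(N).
assert all(r1 <= R_corner[0] and r2 <= R_corner[1] for r1, r2 in CT_boundary)
print(R_corner, T_boundary[-1])
```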

Fig. 4

Achievable regions for ID over the qubit erasure broadcast channel \(\mathcal {N}\) with erasure probability \(\lambda = \frac{1}{4}\). The transmission capacity region \(\textsf{C}_{\textsf{T}}(\mathcal {N})\) corresponds to the light gray area. The region \(\mathscr {T}\) achievable by time-division between single-user identification codes comprises additionally the middle gray area. The rectangle indicated by the dark gray area corresponds to our lower bound \(\mathscr {R}(\mathcal {N})\)

5 Achievability proof

In this section, we prove the lower bound on the capacity region in Theorem 2, i.e. we show that

$$\begin{aligned} \textsf{C}_{\textsf{ID}}(\mathcal {N}) \supseteq \mathscr {R}(\mathcal {N}). \end{aligned}$$
(19)

In the classical achievability proof, Bracher and Lapidoth [24, 25] first generate a single-user random code, based on a pool-selection technique, as shown below. Then, a similar pool-selection code is constructed for the broadcast channel using a pair of single-user codes, one for each receiver. It is shown in [24, 25] that the corresponding ID error probabilities for the broadcast channel can be approximated in terms of the error probabilities of the single-user codes. We use a similar approach, and begin with the single-user quantum channel.

We use standard tools of typical space projectors as detailed in Appendix A. In particular, \(\mathcal {T}_\delta ^n(P_X)\) denotes the classical \(\delta \)-typical set with respect to a given PMF \(P_X \in \mathcal {P}(\mathcal {X})\) over \(\mathcal {X}\). Furthermore, \(\Pi _\delta ^n(\rho )\) is the projector onto the \(\delta \)-typical subspace of an average state \(\rho \), and \(\Pi _\delta ^n(\sigma _{XB}|x^n)\) is the conditionally \(\delta \)-typical projector for a classical-quantum state \(\sigma _{XB}\).

5.1 Single-user quantum channel

First, we construct and analyze an identification code for a single-user quantum channel. In Sect. 5.2, we will use the single-user code in order to construct a code for the broadcast channel. Let \(\mathcal {L}_{A\rightarrow B}\) be a single-user quantum channel.

5.1.1 Code construction

Let \(N = \exp e^{nR}\) be the code size. Fix a PMF \(P_X\) over \(\mathcal {X}\), a pool rate \(R_{\text {pool}}\), and a binning rate \(\tilde{R}\), such that

$$\begin{aligned} R&< \tilde{R}< I(X;B)_\rho \end{aligned}$$
(20)
$$\begin{aligned} R_{\text {pool}}&> \tilde{R}. \end{aligned}$$
(21)

We generate the codebook such that all codewords are \(\delta \)-typical. Therefore, consider the distribution

$$\begin{aligned} P_{X'^n}(x^n) = \frac{ \mathbb {1}\left[ x^n \in \mathcal {T}_\delta ^n(P_X) \right] \, P_X^n(x^n) }{ \sum _{\tilde{x}^n \in \mathcal {T}_\delta ^n(P_X)} P_X^n(\tilde{x}^n) } \,, \end{aligned}$$
(22)

where the indicator function \(\mathbb {1}[\pi ]\) takes the value 1 if the statement \(\pi \) is true, and 0 otherwise. For every index \(v \in \mathcal {V}= [e^{nR_{\text {pool}}}]\), choose a codeword \(F(v) \sim P_{X'^n}\) at random. Then, for every \(i \in [N]\), decide whether to add \(v\) to the set \({\varvec{\mathcal {V}}}_i\) by a binary experiment, with probability \(e^{n\tilde{R}}/\left| {\mathcal {V}}\right| = e^{-n(R_{\text {pool}}- \tilde{R})}\). That is, decide to include \(v\) in \({\varvec{\mathcal {V}}}_i\) with probability \(e^{-n(R_{\text {pool}}- \tilde{R})}\), and not to include it with probability \(1-e^{-n(R_{\text {pool}}- \tilde{R})}\). Reveal this construction to all parties. Denote the collection of codewords and index bins by

$$\begin{aligned} \mathcal {B}= \Big ({\{F(v)\}_{v\in \mathcal {V}}, \left\{ {{\varvec{\mathcal {V}}}_i} \right\} _{i=1}^N}\Big ) . \end{aligned}$$
(23)
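A classical simulation of the random codebook construction may help fix ideas. The parameters below are toy values and the typicality restriction in (22) is omitted; in the proof, \(N = \exp e^{nR}\) is doubly exponential, whereas here a small \(N\) is picked purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy parameters.
n, R_pool, R_tilde = 20, 0.5, 0.3
pool_size = int(np.exp(n * R_pool))           # |V| = e^{n R_pool}
N = 50                                        # number of ID messages (toy value)
P_X = np.array([0.5, 0.5])                    # PMF on a binary alphabet

# Pool: a codeword F(v) ~ P_X^n for every v (typicality restriction omitted for brevity).
F = rng.choice(len(P_X), size=(pool_size, n), p=P_X)

# Bins: include each pool index v in V_i independently with prob. e^{-n(R_pool - R_tilde)}.
p_bin = np.exp(-n * (R_pool - R_tilde))
bins = [np.flatnonzero(rng.random(pool_size) < p_bin) for _ in range(N)]

# Encoder for message i: pick v uniformly from V_i (fall back to v = 0 if V_i is empty).
def encode(i):
    return int(rng.choice(bins[i])) if len(bins[i]) else 0

# The expected bin size is e^{n R_tilde}.
print(encode(3), np.mean([len(b) for b in bins]), round(np.exp(n * R_tilde)))
```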

5.1.2 Encoding

To send an ID message \(i \in [N]\), Alice chooses an index \(v\) uniformly at random from \({\varvec{\mathcal {V}}}_i\), provided that \({\varvec{\mathcal {V}}}_i\) is non-empty, and prepares the state

$$\begin{aligned} \left| { \phi _{A^n}^{F(v)} } \right\rangle \equiv \bigotimes _{t=1}^n \left| { \phi _A^{F_t(v)} } \right\rangle , \end{aligned}$$
(24)

where \(F_t(v)\) is the t-th symbol of the sequence F(v). Otherwise, if \({\varvec{\mathcal {V}}}_i=\emptyset \), she prepares \(|\phi _{A^n}^{F(1)}\rangle \). Then, Alice transmits the systems \(A^n\) through the channel. Therefore, if \({\varvec{\mathcal {V}}}_i \ne \emptyset \), then the average input state \(\mathcal {E}_{A^n}(i)\) is given by

$$\begin{aligned} \mathcal {E}_{A^n}(i) = \frac{1}{\left| { {\varvec{\mathcal {V}}}_i }\right| } \sum _{v \in {\varvec{\mathcal {V}}}_i} \left| { \phi _{A^n}^{F(v)} } \right\rangle \left\langle { \phi _{A^n}^{F(v)} } \right| . \end{aligned}$$
(25)

5.1.3 Decoding

Bob receives the output systems \(B^n\), and he would like to determine whether the message \(i'\) was sent. To this end, he selects any constant \(\delta \) such that

$$\begin{aligned} 0< \delta < \frac{I(X; B)_\rho - \tilde{R}}{c+c'}, \end{aligned}$$
(26)

where \(c, c' > 0\) are constants as in Section A. Then, he performs a series of binary decoding measurements (POVMs)

$$\begin{aligned} \Big \{ \, \Pi \, \Pi ^{F(v)}\, \Pi \,,\; \mathbb {1} - \Pi \, \Pi ^{F(v)}\, \Pi \, \Big \} , \qquad v \in {\varvec{\mathcal {V}}}_{i'} , \end{aligned}$$
(27)

where we denote \(\Pi \equiv \Pi _\delta ^n(\rho _B)\) and \(\Pi ^{F(v)} \equiv \Pi _\delta ^n(\rho _{XB}|F(v))\). Bob obtains a binary sequence of measurement outcomes \((a(v))_{v \in {\varvec{\mathcal {V}}}_{i'}}\). If \(a(v)=1\) for some \(v\in {\varvec{\mathcal {V}}}_{i'}\), then Bob declares that \(i'\) was sent. Otherwise, he declares that \(i'\) was not sent. Note that we can also construct one POVM \({\varvec{\mathcal {D}}}_{B^n}^{i'}\) that is equivalent to the series of measurements.

Thus, the ID code associated with the construction above is denoted by

$$\begin{aligned} \mathcal {C}_{\mathcal {B}} = \left( {{\varvec{\mathcal {E}}}_{A^n}, {\varvec{\mathcal {D}}}_{B^n}}\right) . \end{aligned}$$

The error analysis for the single-user identification code is delegated to Appendix B.

5.2 Broadcast channel

In this section, we show the direct part for the ID capacity region of the quantum broadcast channel. That is, we show that \(\textsf{C}_{\textsf{ID}}(\mathcal {N}) \supseteq \mathscr {R}(\mathcal {N})\). The analysis makes use of our single-user derivation above.

5.2.1 Code construction

We extend Bracher and Lapidoth's [24, 25] idea and combine two codebooks \(\mathcal {B}^{(1)}, \mathcal {B}^{(2)}\) that share the same pool. Fix a PMF \(P_X\) over \(\mathcal {X}\) and rates \(R_k,\tilde{R}_k\), for \(k\in \{1,2\}\), that satisfy

$$\begin{aligned} R_1<&\tilde{R}_1&< I(X; B_1)_\rho \end{aligned}$$
(28a)
$$\begin{aligned} R_2<&\tilde{R}_2&< I(X; B_2)_\rho \end{aligned}$$
(28b)
$$\begin{aligned}&\max \big \{\tilde{R}_1, \tilde{R}_2\big \}&< R_{\text {pool}}\end{aligned}$$
(28c)
$$\begin{aligned} R_{\text {pool}}<&\tilde{R}_1 + \tilde{R}_2. \end{aligned}$$
(28d)

Let \(N_k = \exp e^{nR_k}\). For every index \(v \in \mathcal {V}= [e^{nR_{\text {pool}}}]\), perform the following. Choose a codeword \(F(v) \sim P_{X'^n}\) at random, as in the single-user case. Then, for every \(i_k \in [N_k]\), decide whether to add \(v\) to the set \({\varvec{\mathcal {V}}}_{i_k}^{(k)}\) by a binary experiment, with probability \(e^{n\tilde{R}_k}/\left| {\mathcal {V}}\right| = e^{-n(R_{\text {pool}}- \tilde{R}_k)}\). That is, decide to include \(v\) in \({\varvec{\mathcal {V}}}_{i_k}^{(k)}\) with probability \(e^{-n(R_{\text {pool}}- \tilde{R}_k)}\), and not to include it with probability \(1-e^{-n(R_{\text {pool}}- \tilde{R}_k)}\). Finally, for every pair \((i_1,i_2) \in [N_1] \times [N_2]\), select a common index \(V_{i_1,i_2}\) uniformly at random from \({\varvec{\mathcal {V}}}_{i_1}^{(1)} \cap {\varvec{\mathcal {V}}}_{i_2}^{(2)}\), if this intersection is non-empty. Otherwise, if \({\varvec{\mathcal {V}}}_{i_1}^{(1)} \cap {\varvec{\mathcal {V}}}_{i_2}^{(2)} = \emptyset \), then draw \(V_{i_1,i_2}\) uniformly from \(\mathcal {V}\). Reveal this construction to all parties.

Denote the collection of codewords and index bins by

$$\begin{aligned} \mathcal {B}_{\mathcal {N}} = \Big (F,\, \big \{{\varvec{\mathcal {V}}}_{i_1}^{(1)}\big \}_{i_1 \in [N_1]}&, \big \{{\varvec{\mathcal {V}}}_{i_2}^{(2)}\big \}_{i_2 \in [N_2]} ,\, \big \{V_{i_1,i_2}\big \}_{(i_1,i_2) \in [N_1] \times [N_2]} \Big ). \end{aligned}$$
(29)

Note that, for \(k \in \left\{ {1,2} \right\} \), \(\mathcal {B}_\mathcal {N}\) includes all elements of \( \mathcal {B}^{(k)} = \big ({ F, \left\{ {{\varvec{\mathcal {V}}}_{i_k}^{(k)}} \right\} _{i_k \in [N_k]} }\big ),\) defined for the marginal channels \(\mathcal {N}^{(k)}_{A\rightarrow B_k}\) as in Sect. 5.1. We denote the corresponding single-user code by

$$\begin{aligned} \mathcal {C}_{\mathcal {B}^{(k)}} = ({\tilde{\varvec{\mathcal {E}}}}_{A^n}^{(k)}, {\varvec{\mathcal {D}}}_{B_k^n}). \end{aligned}$$
(30)
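The broadcast construction can be sketched in the same way as the single-user one; the point is that both users' bins are drawn from one common pool and that, by (28d), the bins intersect with high probability (toy parameters, classical simulation only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy parameters (assumed); the rates satisfy (28): R_tilde_k < R_pool < R_tilde_1 + R_tilde_2.
n, R_pool, R1t, R2t = 20, 0.5, 0.35, 0.3
V = int(np.exp(n * R_pool))
N1, N2 = 40, 30                      # toy numbers of ID messages per user

# Both users' bins are drawn from the SAME pool of codewords F(v).
bins1 = [np.flatnonzero(rng.random(V) < np.exp(-n * (R_pool - R1t))) for _ in range(N1)]
bins2 = [np.flatnonzero(rng.random(V) < np.exp(-n * (R_pool - R2t))) for _ in range(N2)]

# For each message pair, the common index V_{i1,i2} is drawn uniformly from the
# intersection of the two bins (or from the whole pool if the intersection is empty).
def common_index(i1, i2):
    inter = np.intersect1d(bins1[i1], bins2[i2])
    return int(rng.choice(inter)) if len(inter) else int(rng.integers(V))

# Since R1t + R2t > R_pool, the expected intersection size e^{n(R1t + R2t - R_pool)}
# grows with n, so the intersection is rarely empty.
print(common_index(0, 0),
      len(np.intersect1d(bins1[0], bins2[0])),
      round(np.exp(n * (R1t + R2t - R_pool))))
```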

5.2.2 Encoding

To send an ID message pair \((i_1,i_2) \in [N_1] \times [N_2]\), Alice prepares the input state \(\big \vert \phi _{A^n}^{F(V_{i_1,i_2})}\big \rangle \) and transmits the input system \(A^n\).

5.2.3 Decoding

Receiver \(k\), for \(k=1,2\), employs the decoder of the single-user code \(\mathcal {C}_{\mathcal {B}^{(k)}}\). Specifically, suppose that Bob k is interested in an ID message \(i'_k \in [N_k]\). Then, he uses the decoding POVM \({\varvec{\mathcal {D}}}_{B_k^n}^{i'_k}\) to decide whether \(i'_k\) was sent or not.

We denote the broadcast ID code associated with the construction above by

$$\begin{aligned} \mathcal {C}_{\mathcal {B}_\mathcal {N}} = ({\varvec{\mathcal {E}}}_{A^n}, {\varvec{\mathcal {D}}}_{B^n_1}, {\varvec{\mathcal {D}}}_{B^n_2}) \end{aligned}$$
(31)

5.2.4 Error analysis

We show that the semi-average error probabilities of the ID code defined above can be approximately upper-bounded by the respective error probabilities of the single-user ID-codes \(\mathcal {C}_{\mathcal {B}^{(1)}}\) and \(\mathcal {C}_{\mathcal {B}^{(2)}}\) for the respective receivers.

Consider a given pair of codebooks \(\mathcal {B}^{(1)} \) and \(\mathcal {B}^{(2)} \). Conditioned on those codebooks, the input state can be written in terms of an encoding distribution

$$\begin{aligned} {\varvec{\mathcal {E}}}_{A^n}(i_1,i_2) = \sum _{v \in \mathcal {V}} {\varvec{{Q}}}_{{i_{1},i_{2}}}(v)\, \left| { \phi _{A^n}^{F(v)} } \right\rangle \left\langle { \phi _{A^n}^{F(v)} } \right| , \end{aligned}$$
(32)

where \({\varvec{{Q}}}_{{i_{1},i_{2}}}(v)\) is the conditional distribution of the common index \(V_{i_1,i_2}\) given the codebooks \(\mathcal {B}^{(1)}\) and \(\mathcal {B}^{(2)}\), namely, the uniform distribution over \({\varvec{\mathcal {V}}}_{i_1}^{(1)} \cap {\varvec{\mathcal {V}}}_{i_2}^{(2)}\) if the intersection is non-empty, and the uniform distribution over \(\mathcal {V}\) otherwise. Similarly,

$$\begin{aligned} {\tilde{\varvec{\mathcal {E}}}}_{A^n}^{(k)}(i_k) = \sum _{v \in \mathcal {V}} {\tilde{\varvec{{Q}}}}_{{i_{k}}}^{(k)}(v)\, \left| { \phi _{A^n}^{F(v)} } \right\rangle \left\langle { \phi _{A^n}^{F(v)} } \right| , \end{aligned}$$
(33)

where \({\tilde{\varvec{{Q}}}}_{{i_{k}}}^{(k)}(v)\) is the respective distribution for the single-user code from Sect. 5.1, namely

$$\begin{aligned} {\tilde{\varvec{{Q}}}}_{{i_{k}}}^{(k)}(v) = \left\{ \begin{array}{ll} \dfrac{ \mathbb {1}\big [ v \in {\varvec{\mathcal {V}}}_{i_k}^{(k)} \big ] }{ \big \vert {\varvec{\mathcal {V}}}_{i_k}^{(k)} \big \vert } &{} \text {if } {\varvec{\mathcal {V}}}_{i_k}^{(k)} \ne \emptyset , \\[2ex] \mathbb {1}\big [ v = 1 \big ] &{} \text {if } {\varvec{\mathcal {V}}}_{i_k}^{(k)} = \emptyset . \end{array} \right. \end{aligned}$$
(34)

We now consider only Receiver 1 and his marginal channel \(\mathcal {N}_{A\rightarrow B_1}^{(1)}\). Since the code construction is completely symmetric between the two receivers, the same arguments hold for Receiver 2 and \(\mathcal {N}_{A\rightarrow B_2}^{(2)}\). The missed-ID error probability for \(\mathcal {C}_{\mathcal {B}_\mathcal {N}}\) and Bob 1 is given by

(35)

and for \(\mathcal {C}_{\mathcal {B}^{(1)}}\), it is given by

(36)

By the linearity of the channel and the measurement, we have

(37)

where \(\delta _{{i_{1}}}^{(1)}\) is the total variation distance

$$\begin{aligned} \delta _{{i_{1}}}^{(1)} = \frac{1}{2} \sum _{v \in \mathcal {V}} \left| { \frac{1}{N_2} \sum _{{i_{2} \in [N_{2}]}} {\varvec{{Q}}}_{{i_{1},i_{2}}}(v) - \tilde{{\varvec{{Q}}}}_{{i_{1}}}^{(1)}(v) }\right| = d\left( { \frac{1}{N_2} \sum _{{i_{2} \in [N_{2}]}} {\varvec{{Q}}}_{{i_{1},i_{2}}},\, \tilde{{\varvec{{Q}}}}_{{i_{1}}}^{(1)} }\right) , \end{aligned}$$
(38)

and the inequalities follow from the triangle inequality. The same argument applies to the false-ID error. Hence,

$$\begin{aligned} \bar{e}_{1,1}(\mathcal {N}, n, \mathcal {C}_{\mathcal {B}_\mathcal {N}}, i_1)&\le e_{1}(\mathcal {N}^{(1)}_{A \rightarrow B_1}, n, \mathcal {C}_{\mathcal {B}^{(1)}}, i_1) + \delta _{{i_{1}}}^{(1)}, \end{aligned}$$
(39a)
$$\begin{aligned} \bar{e}_{1,2}(\mathcal {N}, n, \mathcal {C}_{\mathcal {B}_\mathcal {N}}, i'_1, i_1)&\le e_{2}(\mathcal {N}^{(1)}_{A \rightarrow B_1}, n, \mathcal {C}_{\mathcal {B}^{(1)}}, i'_1, i_1) + \delta _{{i_{1}}}^{(1)}, \end{aligned}$$
(39b)

Similarly, the error probabilities for the second marginal channel are bounded by

$$\begin{aligned} \bar{e}_{2,1}(\mathcal {N}, n, \mathcal {C}_{\mathcal {B}_\mathcal {N}}, i_2)&\le e_{1}(\mathcal {N}^{(2)}_{A \rightarrow B_2}, n, \mathcal {C}_{\mathcal {B}^{(2)}}, i_2) + \delta _{i_2}^{(2)}, \end{aligned}$$
(39c)
$$\begin{aligned} \bar{e}_{2,2}(\mathcal {N}, n, \mathcal {C}_{\mathcal {B}_\mathcal {N}}, i'_2, i_2)&\le e_{2}(\mathcal {N}^{(2)}_{A \rightarrow B_2}, n, \mathcal {C}_{\mathcal {B}^{(2)}}, i'_2, i_2) + \delta _{i_2}^{(2)}, \end{aligned}$$
(39d)

where \(\delta _{i_2}^{(2)} = d \Big (\frac{1}{N_1} \sum _{i_1 \in [N_1]} {\varvec{{Q}}}_{{i_{1},i_{2}}},\, \tilde{{\varvec{{Q}}}}_{{i_{2}}}^{(2)} \Big )\).
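The bounds in (39) can be seen as follows. As a sketch (the operator \(\Lambda _{i_1}\) below is a hypothetical placeholder for the effect of Bob 1's decoding measurement associated with the relevant error event, and \(\phi _{A^n}^{F(v)}\) is shorthand for \(\vert \phi _{A^n}^{F(v)} \rangle \langle \phi _{A^n}^{F(v)} \vert \); this is not the paper's notation): since \(0\le {{\,\textrm{Tr}\,}}[\Lambda _{i_1}\,\sigma ]\le 1\) for every state \(\sigma \), the two error probabilities differ only through the encoding distributions, and

$$\begin{aligned} \left| \bar{e}_{1,1}(\mathcal {N}, n, \mathcal {C}_{\mathcal {B}_\mathcal {N}}, i_1) - e_{1}(\mathcal {N}^{(1)}_{A \rightarrow B_1}, n, \mathcal {C}_{\mathcal {B}^{(1)}}, i_1) \right| = \left| \sum _{v \in \mathcal {V}} \left( \frac{1}{N_2} \sum _{i_2 \in [N_2]} {\varvec{{Q}}}_{{i_{1},i_{2}}}(v) - \tilde{{\varvec{{Q}}}}_{{i_{1}}}^{(1)}(v) \right) {{\,\textrm{Tr}\,}}\Big [ \Lambda _{i_1}\, \mathcal {N}^{(1)\,\otimes n}_{A\rightarrow B_1}\big ( \phi _{A^n}^{F(v)} \big ) \Big ] \right| \le \delta _{{i_{1}}}^{(1)} , \end{aligned}$$

by the variational characterization of the total variation distance; the same reasoning yields the remaining bounds in (39).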

From this point, we can continue as in the classical derivation due to Bracher and Lapidoth [24, 25]. The cited lemma below shows that \(\delta _{i_k}^{(k)}\) in (39) tends to zero in probability as \(n \rightarrow \infty \): by [24, 30, Lemma 3], for every \(k \in \left\{ {1,2} \right\} \) and some \(\tau > 0\),

$$\begin{aligned} \lim _{n\rightarrow \infty } \Pr \bigg ({ \max _{i_k \in [N_k]} \delta _{i_k}^{(k)} \ge e^{-n\tau } }\bigg ) = 0. \end{aligned}$$
(40)

Hence by (39), the error probabilities for the quantum broadcast-channel code \(\mathcal {C}_{\mathcal {B}_\mathcal {N}}\) are approximately upper-bounded by the corresponding error probabilities for the single-user marginal codes \(\mathcal {C}_{\mathcal {B}^{(1)}}\) and \( \mathcal {C}_{\mathcal {B}^{(2)}}\).

By (59), for the single-user quantum channel \(\mathcal {N}^{(k)}_{A \rightarrow B_k}\) and \(k \in \left\{ {1,2} \right\} \), the error probabilities \(e_{1}(\mathcal {N}^{(k)}_{A \rightarrow B_k}, n, \mathcal {C}_{\mathcal {B}^{(k)}}, i_k)\) and \(e_{2}(\mathcal {N}^{(k)}_{A \rightarrow B_k}, n, \mathcal {C}_{\mathcal {B}^{(k)}}, i'_k, i_k)\) converge to zero in probability, exponentially fast in \(n\), for all messages \(i_k, i'_k \in [N_k]\) such that \(i_k \ne i'_k\). This completes the proof of the direct part. \(\square \)

6 Converse proof

The direct part follows from part 1. Hence, it remains to prove the converse part. To this end, consider an \((N_1, N_2, n, \lambda _1,\lambda _2)\) ID code, \(\mathcal {C}= (\mathcal {E}_{X^n}, \mathcal {D}_{B_1^n}, \mathcal {D}_{B_2^n})\), for the c-q broadcast channel \(\mathcal {N}_{X\rightarrow B_1 B_2}\). In the case of a classical input, the encoder effectively assigns a probability distribution \(Q_{i_1,i_2}\) to each message pair, i.e.

$$\begin{aligned} \mathcal {E}_{X^n}(i_1,i_2) = \sum _{x^n \in \mathcal {X}^n} Q_{i_1,i_2}(x^n)\, \left| {x^n} \right\rangle \left\langle {x^n} \right| . \end{aligned}$$
(41)

Thus, the ID code is specified by \(\big \{\big ({Q_{i_1,i_2}, D^{(1)}_{i_1}, D^{(2)}_{i_2} }\big ) \,:\; (i_1,i_2)\in [N_1]\times [N_2] \big \}\). We denote the Holevo information for each c-q channel \(\mathcal {N}^{(k)}_{X\rightarrow B_k}\) with respect to an input distribution \(P_X\in \mathcal {P}(\mathcal {X})\) by \( \textsf{I}(P_X,\mathcal {N}^{(k)}_{X\rightarrow B_k})\equiv I(X;B_k)_\rho . \)

Following the approach of Boche et al. [17], we prove the converse part in three stages, beginning with a modification of the code.

6.1 Code modification

6.1.1 \(\delta \)-net on \(\mathcal {P}(\mathcal {X})\)

First, we fix a \(\delta \)-net \(\mathcal {T}\) of probability distributions on \(\mathcal {X}\). That is, there exists a set \(\mathcal {T}\subseteq \mathcal {P}(\mathcal {X})\) of size \(|\mathcal {T}|\le \left( \frac{c}{\delta }\right) ^{|\mathcal {X}|}\), together with a partition \( \mathcal {X}^n=\bigcup _{P\in \mathcal {T}} \mathcal {A}_{P} \), such that the type of every input sequence \(x^n\in \mathcal {A}_{P}\) is \(\delta \)-close to P. Hence,

$$\begin{aligned} Q_{i_1,i_2}=\bigoplus _{P\in \mathcal {T}} \mu _{i_1,i_2}(P) Q^P_{i_1,i_2} \end{aligned}$$
(42)

where \(\mu _{i_1,i_2}\) is a PMF over \(\mathcal {T}\), and \(Q^P_{i_1,i_2}\) are PMFs over \(\mathcal {A}_P\).
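As an illustration of this decomposition, the following sketch builds a \(\delta \)-net and the sets \(\mathcal {A}_P\) for a binary alphabet (the alphabet size, \(\delta \), \(n\), and the nearest-point rule are toy choices):

```python
import numpy as np
from collections import Counter, defaultdict

delta, n = 0.1, 12                       # toy parameters; binary alphabet X = {0, 1}
net = [np.array([k * delta, 1 - k * delta]) for k in range(int(round(1 / delta)) + 1)]

def nearest_net_point(x_seq):
    """Index of the net point closest (in total variation) to the type of x_seq."""
    counts = Counter(x_seq)
    p_type = np.array([counts[0] / n, counts[1] / n])
    return int(np.argmin([0.5 * np.abs(p_type - P).sum() for P in net]))

# The sets A_P partition X^n, and every x^n in A_P has a type delta-close to P.
A = defaultdict(list)
for idx in range(2 ** n):
    x_seq = [(idx >> t) & 1 for t in range(n)]
    A[nearest_net_point(x_seq)].append(tuple(x_seq))

print({k: len(v) for k, v in sorted(A.items())}, "| net size:", len(net))
```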

6.1.2 \(\epsilon \)-net on \(\mathcal {P}(\mathcal {T})\)

For \(|\mathcal {M}|\le \left( \frac{c}{\epsilon }\right) ^{|\mathcal {T}|}\), there exists an \(\epsilon \)-net \(\mathcal {M}\subseteq \mathcal {P}(\mathcal {T})\). Hence, for every \(i_2\), there exists a PMF \(\mu '_{i_2}\in \mathcal {M}\) such that at least a fraction \(\frac{1}{|\mathcal {M}|}\) of the messages \(i_1\in [N_1]\) has \(\mu _{i_1,i_2}\) that is \(\epsilon \)-close to \(\mu _{i_2}'\). Without loss of generality, for \(N_1'=\left\lfloor \frac{N_1}{|\mathcal {M}|} \right\rfloor \),

$$\begin{aligned} \forall i_1\in [N_1'] \,:\; \frac{1}{2}\left\Vert \mu _{i_1,i_2}-\mu '_{i_2} \right\Vert _1\le \epsilon . \end{aligned}$$
(43)

Similarly, there exists a probability distribution \(\mu ''\in \mathcal {M}\) such that at least a fraction \(\frac{1}{|\mathcal {M}|}\) of the messages \(i_2\in [N_2]\) has \(\mu '_{i_2}\) that is \(\epsilon \)-close to \(\mu ''\). Without loss of generality, for \(N_2'=\left\lfloor \frac{N_2}{|\mathcal {M}|} \right\rfloor \), \( \forall i_2\in [N_2'] \,:\; \frac{1}{2}\left\Vert \mu _{i_2}'-\mu '' \right\Vert _1\le \epsilon . \) Thereby, by the triangle inequality,

$$\begin{aligned} \forall (i_1,i_2)\in [N_1']\times [N_2'] \,:\; \frac{1}{2}\left\Vert \mu _{i_1,i_2}-\mu '' \right\Vert _1\le 2\epsilon . \end{aligned}$$
(44)

Then, we modify the encoding distribution and define

$$\begin{aligned} Q_{i_1,i_2}''=\bigoplus _{P\in \mathcal {T}} \mu ''(P) Q^P_{i_1,i_2} \,, \end{aligned}$$
(45)

leaving the decoder as it is. This results in an \((n,N_1',N_2',\lambda _1+2\epsilon ,\lambda _2+2\epsilon )\) code, where we choose \(\epsilon \) to be sufficiently small such that \(\lambda _1+\lambda _2+2\epsilon <1\).

6.2 Encoder truncation

There exists \(P^*\in \mathcal {T}\) such that \(\mu ''(P^*)\ge \frac{1}{|\mathcal {T}|}\). Thereby, we modify the code once more and truncate all the other distributions in \(\mathcal {T}\). That is, we consider the code \(\big \{ (Q_{i_1,i_2}^{P^*}, D^{(1)}_{i_1},D^{(2)}_{i_2}): (i_1,i_2) \in [N_1'] \times [N_2']\big \}\). For the new code, the error probabilities of the first and the second kind are bounded by \(\lambda _k^*=\left| {\mathcal {T}}\right| (\lambda _k+2\epsilon )\) for \(k\in \{1,2\}\). Letting \(\epsilon \equiv \epsilon (\lambda _1,\lambda _2)\rightarrow 0\) as \(\lambda _1,\lambda _2\rightarrow 0\), the error probabilities of the truncated code tend to zero for every given \(\delta >0\).

6.3 Rate bounds

Consider the marginal channel \(\mathcal {N}^{(1)}_{X\rightarrow B_1}\) and \(Q_{i_1}^{P^*} \equiv \frac{1}{N_2'} \sum _{i_2=1}^{N_2'}Q_{i_1,i_2}^{P^*}\). Let \(i_2\) be uniformly distributed. Then, observe that the randomized-encoder code \(\big \{ (Q_{i_1}^{P^*}, D^{(1)}_{i_1}): i_1 \in [N_1'] \big \}\) is an \((n,N_1',\lambda _1^*,\lambda _2^*)\) ID code for the single-user channel \(\mathcal {N}^{(1)}_{X\rightarrow B_1}\). Therefore, following the single-user converse proof by Ahlswede and Winter [13] (see also [17, Section III]),

$$\begin{aligned} R_1= \frac{1}{n} \log \log (N_1')&< \textsf{I}(P^*,\mathcal {N}^{(1)}_{X\rightarrow B_1})+\epsilon _{1} = I(X;B_1)_\rho + \epsilon _1 \end{aligned}$$
(46)

where \(X\sim P^*\) and \(\epsilon _1\) tends to zero as \(\delta \rightarrow 0\). For completeness, we prove the inequality in the appendix. Similarly, we also have \( R_2 < I(X;B_2)_\rho +\epsilon _{2}. \) This completes the proof of the ID capacity theorem. \(\square \)

7 Summary and outlook

We have derived an achievable ID rate region for the quantum broadcast channel and established a full characterization for the classical-quantum broadcast channel. To prove achievability, we extended the classical proof due to Bracher and Lapidoth [24, 25] to the quantum setting. In the converse proof, on the other hand, we used the truncation approach by Boche et al. [17] along with the arguments of Ahlswede and Winter [13].

As examples, we derived explicit expressions for the ID capacity regions of the quantum erasure broadcast channel and of the pure-loss bosonic broadcast channel in Sect. 4. In those examples, each user can achieve the capacity of the respective marginal channel. In particular, the ID capacity region of the pure-loss bosonic broadcast channel is rectangular and strictly larger than the transmission capacity region. In general, the ID capacity region is not necessarily rectangular, as demonstrated for the classical Z-channel [30, Section IV.C] and the classical Gaussian product channel [30, Section IV.E], [29, Section IV.B].

The ID capacity has a different behavior compared to the single-user setting, in which the ID capacity equals the transmission capacity [74] (see Sect. 2.3). Here, in the broadcast setting, the ID capacity region can be strictly larger than in transmission, since interference between receivers can be seen as part of the randomization of the coding scheme.

Extending the results to more than two receivers remains an open challenge. Upper and lower bounds for such a model may be derived in a similar manner to the classical setting [24, Section IV.A]; however, to determine the exact identification capacity of classical-quantum channels in this setting, new methods are required. The ID capacity of quantum-quantum channels is unknown even for general point-to-point discrete memoryless channels [14, 18]. These are interesting and challenging directions for further research.