1 Introduction

Programmable processors are expected to automate information processing tasks, lessening human intervention by adapting their functioning according to some input program. This adjustment, that is, the process of extracting and assimilating the information relevant to performing some task efficiently, is often called learning, borrowing a word most naturally linked to living beings. Machine learning is a well-established and interdisciplinary research field, broadly fitting within the umbrella of Cybernetics, that seeks to endow machines with this sort of ability, rendering them able to ‘learn’ from past experiences, perform pattern recognition and identification in scrambled data, and ultimately self-regulate [1, 2]. Algorithms featuring learning capabilities have numerous practical applications, including speech and text recognition, image analysis, and data mining.

Whereas conventional machine learning theory implicitly assumes the training set to be made of classical data, a more recent variation, which can be referred to as quantum machine learning, focuses on the exploration and optimisation of training with fundamentally quantum objects. Quantum learning [3], as an area of strong foundational and technological interest, has recently attracted great attention. In particular, the use of programmable quantum processors has been investigated to address machine learning tasks such as pattern matching [4], binary classification [5–8], feedback-adaptive quantum measurements [9], learning of unitary transformations [10], ‘probably approximately correct’ learning [11], and unsupervised clustering [12]. Quantum learning algorithms not only provide performance improvements for some classical learning problems, but also naturally have a wider range of applicability. Quantum learning also has strong links with quantum control theory [13], and is thus becoming an increasingly significant element of the theoretical and experimental quantum information processing toolbox.

In this paper, we investigate a quantum learning scheme for the task of discriminating between two coherent states. Coherent states stand out for their relevance in quantum optical communication theory [14–16], quantum information processing implementations with light, atomic ensembles, and interfaces thereof [17, 18], and quantum optical process tomography [19]. Lasers are widely used in current telecommunication systems, and the transmission of information can be theoretically modelled in terms of bits encoded in the amplitude or phase modulation of a laser beam. The basic task of distinguishing two coherent states in an optimal way is thus of great importance, since lower chances of misidentification translate into higher transfer rates between the sender and the receiver.

The discrimination of coherent states has been considered, so far, within two main approaches, namely minimum-error and unambiguous discrimination, although the former is more developed. Generally, a logical bit can be encoded in two possible coherent states \(\vert \alpha \rangle\) and \(\vert -\alpha \rangle\), via a phase shift, or in the states \(\vert 0 \rangle\) and \(\vert 2\alpha \rangle\), via amplitude modulation. Both encoding schemes are equivalent, since one can move from one to the other by applying Weyl’s displacement operator \(\hat{D}(\alpha)\) to both states.Footnote 1 In the minimum-error approach, the theoretical minimum for the probability of error is given by the Helstrom formula for discriminating two pure states [20]. A variety of implementations have been devised to achieve this task, e.g., the Kennedy receiver [21], based on photon counting; the Dolinar receiver [22], a modification of the Kennedy receiver with real-time quantum feedback; and the homodyne receiver.Footnote 2 Concerning the unambiguous approach to the discrimination problem, results include the unambiguous discrimination between two known coherent states [24, 25], and its programmable version, i.e., when the information about the amplitude α enters the discrimination device in a quantum form [26–28].
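As a quick numerical illustration of this equivalence (our own sketch, not part of the cited receivers), the Helstrom error for two equiprobable pure coherent states follows from the overlap \(\vert \langle\beta \vert \alpha \rangle \vert^{2}=e^{-|\alpha-\beta|^{2}}\):

```python
import numpy as np

def overlap_sq(alpha, beta):
    # |<beta|alpha>|^2 = exp(-|alpha - beta|^2) for two coherent states
    return np.exp(-abs(alpha - beta) ** 2)

def helstrom_error(alpha, beta):
    # Minimum-error probability for two equiprobable pure states (Helstrom)
    return 0.5 * (1.0 - np.sqrt(1.0 - overlap_sq(alpha, beta)))

# Phase-shift keying {|alpha>, |-alpha>} and amplitude keying {|0>, |2 alpha>}
# give identical error probabilities: D(alpha) maps one pair onto the other.
a = 0.7
pe_phase = helstrom_error(a, -a)
pe_amp = helstrom_error(0.0, 2 * a)
```

Both encodings produce the same overlap \(e^{-4|\alpha|^{2}}\), hence the same error probability.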

The goal of this paper is to explore the fundamental task of discriminating between two coherent states with minimum error, when the available information about their amplitudes is incomplete. The simplest instance of such problem is a partial knowledge situation: the discrimination between the (known) vacuum state, \(\vert 0 \rangle\), and some coherent state, \(\vert \alpha \rangle\), where the value of α is not provided beforehand in the classical sense, but instead encoded in a number n of auxiliary modes in the state \(\vert \alpha \rangle^{\otimes n}\). Such discrimination scheme can be cast as a learning protocol with two steps: a first training stage where the auxiliary modes (the training set) are measured to obtain an estimate of α, followed by a discrimination measurement based on this estimate. We then investigate whether this two-step learning procedure matches the performance of the most general quantum protocol, namely a global discrimination measurement that acts jointly over the auxiliary modes and the state to be identified.

Before proceeding with the derivation of our results and in order to motivate further the problem investigated in this paper, let us define the specifics of the setting in the context of a quantum-enhanced readout of classically-stored information.

Imagine a classical memory register modelled by an array of cells, where each cell contains a reflective medium with two possible reflectivities \(r_{0}\) and \(r_{1}\). To read the information stored in the register, one shines light into one of the cells and analyses its reflection. The task essentially consists in discriminating the two possible states of the reflected signal, which depend on the reflectivity of the medium and thus encode the logical bit stored in the cell. In a seminal paper on quantum reading [29], the author takes advantage of ancillary modes to prepare an initial entangled state between those and the signal. The reflected signal is sent together with the ancillae to a detector, where a joint discrimination measurement is performed. A purely quantum resource - entanglement - is thus introduced, enhancing the probability of a successful identification of the encoded bit.Footnote 3 This model was later extended to the use of error correcting codes, thus defining the notion of quantum reading capacity [30], also studied in the presence of various optical limitations [31]. The idea of using nonclassical light to improve the performance of classical information tasks can be traced back to precursory works on quantum illumination [32, 33], where the presence of a low-reflectivity object in a bright thermal-noise bath is detected with higher accuracy when entangled light is sent to illuminate the target region. For more recent theoretical and experimental developments in optical quantum imaging, illumination and reading, including studies on the role of nonclassical correlations beyond entanglement, refer e.g. to Refs. [34–44].

In this paper we consider a reading scenario with an imperfect coherent light source and no initial entanglement involved. The proposed scheme is as follows (see Figure 1). We model an ideal classical memory by a register made of cells that contain either a transparent medium (\(r_{0}=0\)) or a highly reflective one (\(r_{1}=1\)). A reader, comprising a transmitter and a receiver, extracts the information of each cell. The transmitter is a source that produces coherent states of a certain amplitude α. The value of α is not known with certainty due, for instance, to imperfections in the source, but it can be statistically localised in a Gaussian distribution around some (known) \(\alpha_{0}\). A signal state \(\vert \alpha \rangle\) is sent towards a cell of the register and, if it contains the transparent medium, it goes through; if it hits the highly reflective medium, it is reflected back to the receiver in an unperturbed form. This means that we have two possibilities at the entrance of the receiver upon arrival of the signal: either nothing arrives - and we represent this situation as the vacuum state \(\vert 0 \rangle\) - or the reflected signal bounces back - which we denote by the same signal state \(\vert \alpha \rangle\). To aid in the discrimination of the signal, we alleviate the effects of the uncertainty in α by considering that n auxiliary modes are produced by the transmitter in the global state \(\vert \alpha \rangle^{\otimes n}\) and sent directly to the receiver. The receiver then performs measurements over the signal and the auxiliary modes and outputs a binary result, corresponding with some probability to the bit stored in the irradiated cell.

Figure 1

A quantum reading scheme that uses a coherent signal \(\pmb{\vert \alpha \rangle}\) , produced by a transmitter, to illuminate a cell of a register that stores a bit of information. A receiver extracts this bit by distinguishing between the two possible states of the reflected signal, \(\vert 0 \rangle\) and \(\vert \alpha \rangle\), assisted by n auxiliary modes sent directly by the transmitter.

We set ourselves to answer the following questions: (i) what is the optimal (unrestricted) measurement, in terms of the error probability, that the receiver can perform? and (ii) is a joint measurement, performed over the signal together with the auxiliary modes, necessary to achieve optimality? To accomplish the set task, we first obtain the optimal minimum-error probability considering collective measurements (Section 2). Then, we contrast the result with that of the standard estimate-and-discriminate (E&D) strategy, consisting in first estimating α by measuring the auxiliary modes, and then using the acquired information to determine the signal state by a discrimination measurement tuned to distinguish the vacuum state \(\vert 0 \rangle\) from a coherent state with the estimated amplitude (Section 3). In order to compare the performance of the two strategies, we focus on the asymptotic limit of large n. The natural figure of merit is the excess risk, defined as the asymptotic excess of the average error probability per discrimination over that attainable when α is perfectly known, rescaled by n. We show that a collective measurement provides a lower excess risk than any Gaussian E&D strategy, and we conjecture (and provide strong evidence) that this is the case for all local strategies (Section 4). We conclude with a summary and discussion of our results (Section 5), while some technical derivations and proofs are deferred to the Appendices.

2 Collective strategy

The global state that arrives at the receiver can be expressed as either \([ \alpha ]^{\otimes n} \otimes[0]\) or \([ \alpha ]^{\otimes n} \otimes[\alpha]\), where the shorthand notation \([ \cdot]\equiv \vert \cdot \rangle \langle \cdot \vert \) will be used throughout the paper. For simplicity, we take equal a priori probabilities of occurrence of each state. We will always consider the signal state to be that of the last mode, and all the previous modes will be the auxiliary ones. First of all, note that the information carried by the auxiliary modes can be conveniently ‘concentrated’ into a single mode by means of a sequence of unbalanced beam splitters.Footnote 4 The action of a beam splitter over a pair of coherent states \(\vert \alpha \rangle\otimes \vert \beta \rangle\) yields

$$ \vert \alpha \rangle\otimes \vert \beta \rangle \longrightarrow \vert \sqrt{T}\alpha+\sqrt{R}\beta \rangle\otimes \vert -\sqrt {R}\alpha+\sqrt{T}\beta \rangle, $$
(1)

where T is the transmissivity of the beam splitter, R is its reflectivity, and \(T+R=1\). A balanced beam splitter (\(T=R=1/2\)) acting on the first two auxiliary modes thus returns \(\vert \alpha \rangle\otimes \vert \alpha \rangle\longrightarrow|\sqrt{2}\alpha\rangle \otimes \vert 0 \rangle\). Since the beam splitter preserves the tensor product structure of the two modes, one can treat separately the first output mode and use it as input in a second beam splitter, together with the next auxiliary mode. By choosing appropriately the values of T and R, the transformation \(|\sqrt{2}\alpha\rangle\otimes \vert \alpha \rangle \longrightarrow|\sqrt {3}\alpha \rangle\otimes \vert 0 \rangle\) can be achieved. Applying this process sequentially over the n auxiliary modes, we perform the transformation

$$ \vert \alpha \rangle^{\otimes n} \longrightarrow|\sqrt {n}\alpha\rangle \otimes \vert {0} \rangle^{\otimes n-1} . $$
(2)

Note that this is a deterministic process, and that no information is lost, for it is contained completely in the complex parameter α. This operation allows us to effectively deal with only two modes. The two possible global states entering the receiver hence become \([\sqrt {n}\alpha]\otimes[0]\) and \([\sqrt{n}\alpha]\otimes[\alpha]\).
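This mode-concentration step can be checked numerically. The sketch below (function names are ours) applies the beam-splitter map of Eq. (1) sequentially, with the transmissivities chosen as described:

```python
import numpy as np

def concentrate(alphas):
    """Interfere n equal coherent amplitudes pairwise on unbalanced beam
    splitters, Eq. (1). Before step k the accumulated mode carries
    sqrt(k)*alpha, so choosing T = k/(k+1), R = 1/(k+1) sends all the
    amplitude to one output and leaves the other in the vacuum."""
    acc = alphas[0]
    leftovers = []
    for k, b in enumerate(alphas[1:], start=1):
        T, R = k / (k + 1), 1 / (k + 1)
        acc, rest = (np.sqrt(T) * acc + np.sqrt(R) * b,
                     -np.sqrt(R) * acc + np.sqrt(T) * b)
        leftovers.append(rest)
    return acc, leftovers

alpha, n = 0.4 + 0.3j, 6
acc, leftovers = concentrate([alpha] * n)
```

The accumulated amplitude is \(\sqrt{n}\alpha\), and every discarded output port is left in the vacuum, as in Eq. (2).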

The parameter α is not known with certainty. Adopting a Bayesian viewpoint, we embed this lack of information into the analysis by considering averaged global states over the possible values of α, where the choice of the prior probability distribution accounts for the prior knowledge that we might already have. One readily sees, though, that a flat prior probability distribution for α, representing a limiting situation of complete ignorance, is not reasonable in this particular setting. On the one hand, such a prior would yield divergent average states of infinite energy, since the phase space is infinite. On the other hand, in a practical situation it is not realistic to assume that all amplitudes α are equally probable.Footnote 5 A way to overcome this apparent difficulty and assign a reasonable prior probability distribution to α is to sacrifice a small number \(\tilde{n}\) of auxiliary modes (say, \(\tilde{n}=n^{1-\epsilon}\) of them) and use them to construct a rough estimator of α, by means of some isotropic quantum tomography procedure. Then, it can be shown that α belongs to a neighbourhood of size \(n^{-1/2+\epsilon}\) centred at a fixed amplitude \(\alpha_{0}\), with probability converging to one (this is shown, though in a classical statistical context, in [47]). Therefore, the asymptotic behaviour of any statistical inference problem that uses this model is determined by the structure of a local (Gaussian) quantum model around a fixed coherent state \(\vert \alpha_{0} \rangle \).Footnote 6 In our case, we consider this ‘localisation’ of the prior probability distribution as an innocuous preparatory process in both the collective and the E&D strategies, in the sense that the comparison between their asymptotic discrimination power will not be affected.

Under these considerations, the initial prior for α will be a Gaussian probability distribution centred at \(\alpha_{0}\), whose width goes as \(\sim1/\sqrt{n}\). That is, we can express the true amplitude α as

$$ \alpha\approx\alpha_{0} + u/\sqrt{n} ,\quad u \in\mathbb{C} , $$
(3)

where the parameter u follows the Gaussian distribution

$$ G(u) = \frac{1}{\pi\mu^{2}} e^{-|u|^{2}/\mu^{2}} . $$
(4)

To avoid divergences, we have introduced the free parameter μ as a temporary energy cut-off that defines the width of \(G(u)\). After obtaining expressions for the excess risks in the asymptotic regime of large n, we will remove the cut-off dependence by taking the limit \(\mu\to\infty\).
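For the numerical checks used throughout, the localised prior of Eqs. (3) and (4) can be sampled as a circular complex Gaussian with \(E[|u|^{2}]=\mu^{2}\) (an illustrative sketch; seed and parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_u(mu, size):
    # G(u) = exp(-|u|^2 / mu^2) / (pi mu^2): variance mu^2 / 2 per quadrature
    s = mu / np.sqrt(2)
    return rng.normal(0.0, s, size) + 1j * rng.normal(0.0, s, size)

def sample_alpha(alpha0, mu, n, size):
    # True amplitudes alpha = alpha0 + u / sqrt(n), Eq. (3): the prior
    # narrows as 1/sqrt(n) around the rough estimate alpha0
    return alpha0 + sample_u(mu, size) / np.sqrt(n)

u = sample_u(2.0, 200_000)
mean_energy = np.mean(np.abs(u) ** 2)   # should approach mu^2 = 4
```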

Exploiting the prior information acquired through the rough estimation, that is using Eqs. (3) and (4), we compute the average global states arriving at the receiver

$$\begin{aligned}& \sigma_{1} = \int G(u) [ \sqrt{n} \alpha_{0} + u ] \otimes[0] \,d^{2}u , \end{aligned}$$
(5)
$$\begin{aligned}& \sigma_{2} = \int G(u) [ \sqrt{n} \alpha_{0} + u ] \otimes[\alpha_{0} + u/\sqrt{n} ] \,d^{2}u . \end{aligned}$$
(6)

The optimal measurement to determine the state of the signal is the Helstrom measurement for the discrimination of the states \(\sigma_{1}\) and \(\sigma_{2}\) [20], that yields the average minimum-error probabilityFootnote 7

$$ P_{\mathrm{e}}^{\mathrm{opt}}(n)= \frac{1}{2} \biggl(1- \frac {1}{2}\Vert \sigma _{1}-\sigma_{2} \Vert _{1} \biggr) , $$
(7)

where \(\Vert M \Vert _{1}=\operatorname{tr} \sqrt{M^{\dagger}M}\) denotes the trace norm of the operator M. The technical difficulty in computing \(P_{\mathrm{e}}^{\mathrm {opt}}(n)\) resides in the fact that \(\sigma_{1}-\sigma_{2}\) is an infinite-dimensional full-rank matrix, hence its trace norm does not have a computable analytic expression for arbitrary finite n. Despite this, one can still resort to analytical methods in the asymptotic regime \(n\to \infty \) by treating the states perturbatively.
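For finite n one can still evaluate the trace norm numerically in a truncated Fock basis. The following sketch (ours, with an arbitrary truncation dimension) implements Eq. (7) via the eigenvalues of the Hermitian difference, and recovers the pure-state Helstrom bound as a consistency check:

```python
import numpy as np
from math import factorial

def coherent_vec(alpha, dim):
    # Coherent state |alpha> truncated to the first `dim` Fock levels
    k = np.arange(dim)
    norms = np.sqrt(np.array([float(factorial(int(j))) for j in k]))
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** k / norms

def trace_norm(M):
    # ||M||_1 = sum of |eigenvalues| for a Hermitian matrix
    return float(np.sum(np.abs(np.linalg.eigvalsh(M))))

def helstrom_numeric(rho1, rho2):
    # P_e = (1/2)(1 - ||rho1 - rho2||_1 / 2), Eq. (7)
    return 0.5 * (1.0 - 0.5 * trace_norm(rho1 - rho2))

dim, a = 40, 1.2
v0, v1 = coherent_vec(0.0, dim), coherent_vec(a, dim)
rho0 = np.outer(v0, v0.conj())
rho1 = np.outer(v1, v1.conj())
pe = helstrom_numeric(rho0, rho1)
```

The same routine applies to the mixed states \(\sigma_{1}\), \(\sigma_{2}\) once they are written in a truncated basis, though only the asymptotic regime admits a closed form.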

To ease this calculation, we first apply the displacement operator

$$ \hat{D}(\alpha_{0}) = \hat{D}_{1} (-\sqrt{n} \alpha_{0}) \otimes\hat {D}_{2}(-\alpha_{0}) $$
(8)

to the states \(\sigma_{1}\) and \(\sigma_{2}\), where \(\hat{D}_{1}\) (\(\hat {D}_{2}\)) acts on the first (second) mode, and we obtain the displaced global states

$$\begin{aligned}& \bar{\sigma}_{1} = \hat{D}(\alpha_{0}) \sigma_{1}\hat{D}^{\dagger}(\alpha_{0}) = \int G(u) [ u ] \otimes [-\alpha_{0} ] \,d^{2}u , \end{aligned}$$
(9)
$$\begin{aligned}& \bar{\sigma}_{2} = \hat{D}(\alpha_{0}) \sigma_{2}\hat{D}^{\dagger}(\alpha_{0}) = \int G(u) [ u ] \otimes[ u/\sqrt{n} ] \,d^{2}u . \end{aligned}$$
(10)

Since both states have been displaced by the same amount, the trace norm does not change, i.e., \(\Vert \sigma_{1}-\sigma_{2} \Vert _{1}=\Vert \bar{\sigma}_{1}-\bar{\sigma}_{2} \Vert _{1}\). Eq. (9) directly yields

$$ \bar{\sigma}_{1} = \sum_{k=0}^{\infty}c_{k} [k] \otimes[-\alpha_{0}] , $$
(11)

where \(c_{k} = \mu^{2k}/[(\mu^{2}+1)^{k+1}]\) and \(\{\vert k \rangle\}\) is the Fock basis. Note that, as a result of the average, the first mode in Eq. (11) corresponds to a thermal state with average photon number \(\mu^{2}\). Note also that the n-dependence is entirely in \(\bar{\sigma}_{2}\). In the limit \(n\to\infty\), we can expand the second mode of \(\bar{\sigma}_{2}\) as

$$ |u/\sqrt{n} \rangle= e^{-\frac{|u|^{2}}{2n}} \sum_{k=0}^{\infty} \frac {(u/\sqrt{n})^{k}}{\sqrt{k!}} \vert {k} \rangle . $$
(12)

Then, up to order \(1/n\) its asymptotic expansion gives

$$\begin{aligned}{} [u/\sqrt{n} ] \simeq&\vert 0 \rangle \langle{0}\vert + \frac{1}{\sqrt{n}} \bigl( u \vert {1} \rangle \langle 0 \vert + u^{*} \vert {0} \rangle \langle 1 \vert \bigr) \\ &{}+ \frac{1}{n} \biggl\{ |u|^{2} \bigl( \vert {1} \rangle \langle{1}\vert -\vert {0} \rangle \langle{0}\vert \bigr) + \frac{1}{\sqrt{2}} \bigl[ u^{2} \vert 2 \rangle \langle{0} \vert + \bigl(u^{*} \bigr)^{2} \vert 0 \rangle \langle{2} \vert \bigr] \biggr\} . \end{aligned}$$
(13)
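The expansion (13) can be checked term by term against the exact projector for large n (a small numerical sketch of ours; the values of u and n are arbitrary):

```python
import numpy as np

def coh_proj_exact(z):
    # Exact [z] = |z><z| restricted to the {|0>, |1>, |2>} subspace
    v = np.exp(-abs(z) ** 2 / 2) * np.array([1.0, z, z ** 2 / np.sqrt(2)])
    return np.outer(v, v.conj())

def coh_proj_expanded(u, n):
    # Eq. (13): [u/sqrt(n)] up to order 1/n
    P = np.zeros((3, 3), dtype=complex)
    P[0, 0] = 1.0 - abs(u) ** 2 / n
    P[1, 0] = u / np.sqrt(n)
    P[0, 1] = np.conj(u) / np.sqrt(n)
    P[1, 1] = abs(u) ** 2 / n
    P[2, 0] = u ** 2 / (np.sqrt(2) * n)
    P[0, 2] = np.conj(u) ** 2 / (np.sqrt(2) * n)
    return P

u, n = 0.8 - 0.3j, 1.0e6
err = np.max(np.abs(coh_proj_exact(u / np.sqrt(n)) - coh_proj_expanded(u, n)))
```

The residual is of order \(n^{-3/2}\), as expected for a truncation at order \(1/n\).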

Inserting Eq. (13) into Eq. (10) and computing the corresponding averages of each term in the expansion, we obtain a state of the form

$$ \bar{\sigma}_{2} \simeq\bar{\sigma}_{2}^{(0)} + \frac{1}{\sqrt {n}}\bar {\sigma}_{2}^{(1)} + \frac{1}{n}\bar{\sigma}_{2}^{(2)} . $$
(14)

We can now use Eqs. (11) and (14) to compute the trace norm \(\Vert \bar{\sigma }_{1}-\bar{\sigma}_{2} \Vert _{1}\) in the asymptotic regime of large n, up to order \(1/n\), by applying perturbation theory. The explicit form of the terms in Eq. (14), as well as the details of the computation of the trace norm, are given in Appendix 1. Here we just show the result: the average minimum-error probability \(P_{\mathrm{e}}^{\mathrm{opt}}(n)\), defined in Eq. (7), can be written in the asymptotic limit as

$$ P_{\mathrm{e}}^{\mathrm{opt}} \equiv P_{\mathrm{e}}^{\mathrm {opt}}(n \to\infty) \simeq\frac {1}{2} \biggl[1-\sqrt{1-e^{-|\alpha_{0}|^{2}}} - \frac{1}{2n} \bigl(\Lambda _{+}^{(2)}-\Lambda_{-}^{(2)} \bigr) \biggr] , $$
(15)

where \(\Lambda_{\pm}^{(2)}\) is given by Eq. (60).

2.1 Excess risk

The figure of merit that we use to assess the performance of our protocol is the excess risk, which we have defined as n times the difference between the asymptotic average error probability \(P_{\mathrm {e}}^{\mathrm{opt}}\) and the average error probability of the optimal strategy when α is perfectly known. As we said at the beginning of the section, the true value of α is \(\alpha_{0}+u/\sqrt{n}\) for a particular realisation, thus knowing u is equivalent to knowing α. The minimum-error probability for the discrimination between the known states \(\vert 0 \rangle\) and \(\vert \alpha _{0}+u/\sqrt{n} \rangle\), \(P_{\mathrm{e}}^{*}(u,n)\), averaged over the Gaussian distribution \(G(u)\), takes the form

$$\begin{aligned} P_{\mathrm{ e}}^{*}(n) =& \int G(u) P_{\mathrm{ e}}^{*} (u,n)\, d^{2}u \\ =& \int G(u) \frac{1}{2} \bigl(1-\sqrt{1-\bigl|\langle{0}|{\alpha _{0}+u/\sqrt{n}}\rangle\bigr|^{2}} \bigr) \,d^{2}u . \end{aligned}$$
(16)

To compute this integral we perform a series expansion of the overlap in the limit \(n\rightarrow\infty\) and integrate the resulting terms (see Appendix 4). After some algebra we obtain

$$ P_{\mathrm{ e}}^{*} \equiv P_{\mathrm{e}}^{*}(n\to\infty) \simeq \frac {1}{2} \biggl(1-\sqrt{1-e^{-|\alpha_{0}|^{2}}} + \frac{1}{n} \Lambda^{*} \biggr) , $$
(17)

where

$$ \Lambda^{*} = \frac{\mu^{2} [ 2 (e^{-|\alpha_{0}|^{2}}-1 ) +|\alpha_{0}|^{2} (2-e^{-|\alpha_{0}|^{2}} ) ]}{4 (e^{|\alpha_{0}|^{2}}-1 ) \sqrt{1-e^{-|\alpha_{0}|^{2}}}} . $$
(18)
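Eq. (18) can be verified by direct numerical integration of Eq. (16): the rescaled difference \(n (P_{\mathrm{e}}^{*}(n)-\frac{1}{2}(1-\sqrt{1-e^{-|\alpha_{0}|^{2}}}) )\) should approach \(\Lambda^{*}/2\). A sketch (our own; the chosen \(\alpha_{0}\), μ and n are arbitrary):

```python
import numpy as np

a0, mu, n = 1.0, 1.0, 1.0e4
a = a0 ** 2

# Grid integration over u = ux + i uy with the Gaussian prior G(u), Eq. (4)
h = 0.01
x = np.arange(-6.0, 6.0 + h, h)
UX, UY = np.meshgrid(x, x)
G = np.exp(-(UX ** 2 + UY ** 2) / mu ** 2) / (np.pi * mu ** 2)

# Integrand of Eq. (16): Helstrom error for |0> vs |a0 + u/sqrt(n)>
s = (a0 + UX / np.sqrt(n)) ** 2 + (UY / np.sqrt(n)) ** 2
P = 0.5 * (1.0 - np.sqrt(1.0 - np.exp(-s)))
P_star_n = np.sum(G * P) * h ** 2

E, S = np.exp(-a), np.sqrt(1.0 - np.exp(-a))
P_known = 0.5 * (1.0 - S)
lam = mu ** 2 * (2 * (E - 1) + a * (2 - E)) / (4 * (np.exp(a) - 1) * S)  # Eq. (18)
coeff = n * (P_star_n - P_known)   # should approach lam / 2 as n grows
```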

The excess risk is then given by Eqs. (15) and (17) as

$$ R^{\mathrm{ opt}}_{\mu}= n \bigl(P_{\mathrm{e}}^{\mathrm{opt}} - P_{\mathrm{ e}}^{*} \bigr) . $$
(19)

Finally, we remove the cut-off imposed at the beginning by taking the limit \(\mu\rightarrow\infty\) and we obtain

$$ R^{\mathrm{ opt}} =\lim_{\mu\to\infty} R^{\mathrm{ opt}}_{\mu}= \frac{|\alpha_{0}|^{2} e^{-|\alpha_{0}|^{2}/2} (2e^{|\alpha_{0}|^{2}}-1 )}{16 ( e^{|\alpha_{0}|^{2}}-1 )^{3/2}} . $$
(20)

Note that the excess risk only depends on the modulus of \(\alpha_{0}\), i.e., on the average distance between \(\vert \alpha \rangle\) and \(\vert 0 \rangle\). The excess risk is thus phase-invariant, as it should be.
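For reference, Eq. (20) is straightforward to evaluate; the sketch below checks the phase invariance just noted and the decay at large amplitudes:

```python
import numpy as np

def excess_risk_opt(alpha0):
    # Collective-strategy excess risk, Eq. (20); a function of |alpha0| only
    a2 = np.abs(alpha0) ** 2
    return (a2 * np.exp(-a2 / 2) * (2 * np.exp(a2) - 1)
            / (16 * (np.exp(a2) - 1) ** 1.5))
```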

Eq. (20) is the first piece of information we need to address the main question posed at the beginning, namely whether the optimal performance of the collective strategy is achievable by an estimate-and-discriminate (E&D) strategy. We now move on towards the second piece.

3 Estimate & Discriminate strategy

An alternative - and more restrictive - strategy to determine the state of the signal consists in the natural combination of two fundamental tasks: state estimation, and state discrimination of known states. In such an E&D strategy, all auxiliary modes are used to estimate the unknown amplitude α. Then, the obtained information is used to tune a discrimination measurement over the signal that distinguishes the vacuum state from a coherent state with the estimated amplitude. In this section we find the optimal E&D strategy based on Gaussian measurements and compute its excess risk \(R^{\mathrm{ E\&D}}\). Then, we compare the result with that of the optimal collective strategy \(R^{\mathrm{ opt}}\).

The most general Gaussian measurement that one can use to estimate the state of the auxiliary mode \(|\sqrt{n}\alpha\rangle\) is a generalised heterodyne measurement, represented by a positive operator-valued measure (POVM) with elements

$$ E_{\bar{\beta}} = \frac{1}{\pi} [ \bar{\beta},r,\phi] , $$
(21)

i.e., projectors onto pure Gaussian states with amplitude \(\bar{\beta}\) and squeezing r along the direction ϕ. The outcome of such heterodyne measurement \(\bar{\beta}=\sqrt{n}\beta\) produces an estimate for \(\sqrt{n}\alpha\), hence β stands for an estimate of α.Footnote 8 Upon obtaining \(\bar{\beta}\), the prior information that we have about α gets updated according to Bayes’ rule, so that now the signal state can be either \([0]\) or some state \(\rho(\beta)\). The form of this second hypothesis is given by

$$ \rho(\beta) = \int p(\alpha|\beta) [\alpha] \,d^{2}\alpha, $$
(22)

where \(p(\alpha|\beta)\) encodes the posterior information that we have acquired via the heterodyne measurement. It represents the conditional probability of the state of the auxiliary mode being \(\vert \sqrt{n}\alpha \rangle\), given that we obtained the outcome \(\bar{\beta}\). Bayes’ rule dictates

$$ p(\alpha|\beta) = \frac{p(\beta|\alpha) p(\alpha)}{p(\beta)} , $$
(23)

where \(p(\beta|\alpha)\) is given by (see Appendix 2)

$$ p(\beta|\alpha) = \frac{1}{\pi\cosh r} e^{-|\sqrt{n} \alpha- \bar {\beta}|^{2}-\operatorname{Re}[(\sqrt{n} \alpha-\bar{\beta})^{2} e^{-i 2 \phi}] \tanh r} , $$
(24)

\(p(\alpha)\) is the prior probability distribution of α before the heterodyne measurement, and

$$ p(\beta) = \int p(\alpha) p(\beta|\alpha) \,d^{2}\alpha $$
(25)

is the total probability of obtaining the estimate β.
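As a sanity check (ours, with arbitrary r and ϕ), the outcome density (24) is properly normalised for any squeezing: the two squeezed Gaussian widths \(\propto(1\pm\tanh r)^{-1/2}\) combine with the \(1/\cosh r\) prefactor to give unit total probability.

```python
import numpy as np

def het_density(delta, r, phi=0.0):
    # Eq. (24) as a function of delta = sqrt(n)*alpha - beta_bar
    expo = (-np.abs(delta) ** 2
            - np.real(delta ** 2 * np.exp(-2j * phi)) * np.tanh(r))
    return np.exp(expo) / (np.pi * np.cosh(r))

# Integrate over the complex outcome plane on a grid
x = np.linspace(-8.0, 8.0, 801)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2
total = np.sum(het_density(X + 1j * Y, r=-0.4, phi=0.3)) * dA
```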

The error probability of the E&D strategy, averaged over all possible estimates β, is then

$$ P_{\mathrm{e}}^{\mathrm{E\&D}}(n) = \frac{1}{2} \biggl(1- \frac {1}{2} \int p(\beta) \bigl\Vert [0]-\rho(\beta) \bigr\Vert _{1}\, d^{2}\beta \biggr) . $$
(26)

Note that the estimate β ultimately depends on the number n of auxiliary modes, hence the explicit n-dependence on the left-hand side of Eq. (26).

We are interested in the asymptotic expression of Eq. (26), so let us now focus on the \(n\to\infty\) scenario. Recall that an initial rough estimation of α permits the localisation of the prior \(p(\alpha)\) around a central point \(\alpha _{0}\), such that \(\alpha\approx\alpha_{0} + u/\sqrt{n}\), where u is distributed according to \(G(u)\), defined in Eq. (4). Consequently, the estimate β will also be localised around the same point, i.e., \(\beta\approx\alpha_{0}+v/\sqrt{n}\), \(v\in\mathbb{C}\). As a result, we can effectively shift from amplitudes α and β to a local Gaussian model around \(\alpha_{0}\), parameterised by u and v. According to this new model, we make the following transformations:

$$\begin{aligned}& p(\alpha) \rightarrow G(u) , \end{aligned}$$
(27)
$$\begin{aligned}& p(\beta|\alpha) \rightarrow p(v|u) = \frac{1}{\pi\cosh r} e^{-|u-v|^{2}-\operatorname{Re}[(u-v)^{2}] \tanh r} , \end{aligned}$$
(28)
$$\begin{aligned}& p(\beta) \rightarrow p(v) = \int p(v|u) G(u) \,d^{2}u \\& \hphantom{p(\beta) \rightarrow p(v)}= \frac{1}{\pi\cosh r} \frac{1}{\sqrt{1+\mu^{2} (2+\frac{\mu^{2}}{\cosh^{2} r} )}} \\& \hphantom{p(\beta) \rightarrow p(v)=}{}\times\operatorname{exp} \biggl(\frac{|v|^{2} (1+\frac{\mu^{2}}{\cosh^{2} r} )+\operatorname{Re}[v^{2}]\tanh r}{\mu^{4} \tanh^{2} r- (\mu^{2}+1 )^{2}} \biggr) , \end{aligned}$$
(29)
$$\begin{aligned}& p(\alpha|\beta) \rightarrow p(u|v)=\frac{p(v|u) G(u)}{p(v)} , \end{aligned}$$
(30)

where, for simplicity, we have assumed \(\alpha_{0}\) to be real. Note that this can be done without loss of generality. Note also that, by the symmetry of the problem, this assumption implies \(\phi=0\).
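The local Bayesian update (27)-(30) can be implemented directly on a grid. The sketch below (parameter values arbitrary) checks that the posterior mean shrinks the heterodyne outcome v towards the prior centre:

```python
import numpy as np

def prior_G(u, mu):
    # Localised prior, Eq. (4) / Eq. (27)
    return np.exp(-np.abs(u) ** 2 / mu ** 2) / (np.pi * mu ** 2)

def likelihood(v, u, r):
    # Local heterodyne likelihood, Eq. (28), with phi = 0
    d = u - v
    return np.exp(-np.abs(d) ** 2 - np.real(d ** 2) * np.tanh(r)) / (np.pi * np.cosh(r))

def posterior_mean(v, r, mu, half_width=10.0, pts=401):
    # Bayes' rule, Eq. (30), normalised numerically on the grid
    x = np.linspace(-half_width, half_width, pts)
    U = x[:, None] + 1j * x[None, :]
    w = likelihood(v, U, r) * prior_G(U, mu)
    w /= np.sum(w)
    return np.sum(w * U)

v = 1.5 + 0.5j
m = posterior_mean(v, r=-0.3, mu=2.0)
```

Since both prior and likelihood are Gaussian, the posterior mean interpolates between the prior centre (the origin of the local model) and the outcome v, with quadrature-dependent weights set by the squeezing r.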

The shifting to the local model transforms the trace norm in Eq. (26) as

$$ \bigl\Vert [0]-\rho(\beta) \bigr\Vert _{1} \rightarrow\bigl\Vert [-\alpha _{0}]-\rho (v) \bigr\Vert _{1} , $$
(31)

where

$$ \rho(v)=\int p(u|v) [ u/\sqrt{n} ] \,d^{2}u . $$
(32)

To compute the explicit expression of \(\rho(v)\) we proceed as in the collective strategy. That is, we expand \([u/\sqrt{n}]\) in the limit \(n\rightarrow\infty\) up to order \(1/n\), as in Eq. (13), and we compute the trace norm using perturbation theory (see Appendix 3 for details). The result allows us to express the asymptotic average error probability of the E&D strategy as

$$ P_{\mathrm{e}}^{\mathrm{E\&D}} \equiv P_{\mathrm{ e}}^{\mathrm{E\& D}}(n \to\infty) \simeq \frac{1}{2} \biggl(1-\sqrt{1-e^{-\alpha_{0}^{2}}} + \frac{1}{n} \Delta ^{\mathrm{ E\&D}} \biggr) , $$
(33)

where \(\Delta^{\mathrm{E\&D}}\) is given by Eq. (66).

3.1 Excess risk

The excess risk associated with the E&D strategy is generally expressed as

$$ R^{\mathrm{E\&D}}(r) = n \lim_{\mu\to\infty} \bigl(P_{\mathrm {e}}^{\mathrm{ E\& D}}-P_{\mathrm{ e}}^{*} \bigr) , $$
(34)

where \(P_{\mathrm{ e}}^{*}\) is the error probability for known α, given in Eq. (17), and \(P_{\mathrm{ e}}^{\mathrm{ E\& D}}\) is the result from the previous section, i.e., Eq. (33). The full analytical expression for \(R^{\mathrm{ E\&D}}(r)\) is given in Eq. (67). Note that we have to take the limit \(\mu\to\infty\) in the excess risk, as we did for the collective case. Note also that all the expressions calculated so far explicitly depend on the squeezing parameter r (apart from \(\alpha_{0}\)). This parameter stands for the squeezing of the generalised heterodyne measurement in Eq. (21), which we have left unfixed on purpose. As a result, we now define, through the squeezing r, the optimal heterodyne measurement over the auxiliary mode to be that which yields the lowest excess risk (34), i.e.,

$$ R^{\mathrm{ E\&D}} = \min_{r} R^{\mathrm{ E\&D}} (r) . $$
(35)

To find the optimal r, we look at the parameter estimation theory of Gaussian models (see, e.g., [51]). In a generic two-dimensional Gaussian shift model, the optimal measurement for the estimation of a parameter \(\theta= (q,p)\) is a generalised heterodyne measurementFootnote 9 of the type (21). Such a measurement yields a quadratic risk of the form

$$ R_{\hat{\theta}}=\int p(\theta) \bigl((\hat{\theta}-\theta)^{T} G ( \hat {\theta }-\theta)\bigr) \,d^{2}\theta, $$
(36)

where \(p(\theta)\) is some probability distribution, \(\hat{\theta}\) is an estimator of θ, and G is a two-dimensional matrix. One can always switch to the coordinate system in which G is diagonal, \(G=\operatorname{ diag}(g_{q},g_{p})\), to write

$$ R_{\hat{\theta}}=g_{q} \int p(\theta) ( \hat{q}-q)^{2} \,d^{2}\theta+ g_{p} \int p(\theta) ( \hat{p}-p)^{2} \,d^{2}\theta. $$
(37)

It can be shown [51] that the optimal squeezing of the estimation measurement, i.e., that for which the quadratic risk \(R_{\hat {\theta}}\) is minimal, is given by

$$ r=\frac{1}{4}\ln \biggl(\frac{g_{q}}{g_{p}} \biggr) . $$
(38)
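The structure of Eq. (38) can be illustrated with a toy model (an assumption made here purely for illustration, chosen to be consistent with Eq. (38)): if the measurement noise variances along q and p scale as \(e^{-2r}\) and \(e^{+2r}\) respectively, the weighted risk \(g_{q}e^{-2r}+g_{p}e^{2r}\) is minimised exactly at \(r=\frac{1}{4}\ln(g_{q}/g_{p})\).

```python
import numpy as np

def quadratic_risk(r, gq, gp):
    # Toy noise model: variance e^{-2r} along q, e^{+2r} along p
    return gq * np.exp(-2 * r) + gp * np.exp(2 * r)

def r_optimal(gq, gp):
    # Closed form of Eq. (38)
    return 0.25 * np.log(gq / gp)

gq, gp = 3.0, 0.7
rs = np.linspace(-2.0, 2.0, 100_001)
r_numeric = rs[np.argmin(quadratic_risk(rs, gq, gp))]
```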

We can then simply compare Eq. (37) with Eq. (34) to deduce the values of \(g_{q}\) and \(g_{p}\) for our case. By doing so, we obtain that the optimal squeezing reads

$$ r = \frac{1}{4} \ln \biggl(\frac{f(\alpha_{0})+\alpha_{0}^{2}}{f(\alpha _{0})-\alpha_{0}^{2}} \biggr) , $$
(39)

where

$$f(\alpha_{0})= 2 e^{\alpha_{0}^{2}} \bigl(e^{\alpha_{0}^{2}}-1 \bigr) \bigl(\sqrt {1-e^{-\alpha_{0}^{2}}}-1 \bigr) + \alpha_{0}^{2} \bigl(1-2e^{\alpha_{0}^{2}}\sqrt{1-e^{-\alpha_{0}^{2}}} \bigr) . $$

Eq. (39) tells us that the optimal squeezing r is a function of \(\alpha_{0}\) that takes negative values, and asymptotically approaches zero when \(\alpha_{0}\) is large (see Figure 2). This means that the optimal estimation measurement over the auxiliary mode consists of projectors onto coherent states, antisqueezed along the line between \(\alpha_{0}\) and the origin (which represents the vacuum) in phase space. In other words, the estimation is tailored to have better resolution along that axis in view of the subsequent discrimination of the signal. This makes sense: since the error probability in the discrimination depends primarily on the distance between the hypotheses, it is more important to estimate this distance accurately than the displacement along the orthogonal direction. For large amplitudes, the estimation converges to a (standard) heterodyne measurement with no squeezing. As \(\alpha_{0}\) approaches 0 the states of the signal become more and more indistinguishable, and the projectors of the heterodyne measurement approach infinitely squeezed coherent states, thus converging to a homodyne measurement.
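These limiting behaviours are easy to check numerically from Eq. (39) (a sketch for real \(\alpha_{0}>0\)):

```python
import numpy as np

def optimal_squeezing(alpha0):
    # Eq. (39), with f(alpha0) as defined above, for real alpha0 > 0
    a = alpha0 ** 2
    s = np.sqrt(1.0 - np.exp(-a))
    f = (2.0 * np.exp(a) * (np.exp(a) - 1.0) * (s - 1.0)
         + a * (1.0 - 2.0 * np.exp(a) * s))
    return 0.25 * np.log((f + a) / (f - a))
```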

Figure 2

Optimal squeezing r for the generalised heterodyne measurement in an E&D strategy, as a function of \(\pmb{\alpha_{0}}\) .

Inserting Eq. (39) into Eq. (34) we finally obtain the expression of \(R^{\mathrm{ E\&D}}\) as a function of \(\alpha_{0}\), which we can now compare with the excess risk for the collective strategy \(R^{\mathrm{ opt}}\), given in Eq. (20). We plot both functions in Figure 3. For small amplitudes, say in the range \(0.3 \lesssim \alpha_{0} \lesssim1.5\), there is a noticeable difference in the performance of the two strategies, reaching more than a factor of two at some points. We also observe that the gap closes for large amplitudes \(\alpha_{0} \rightarrow\infty\); this behaviour is expected, since the problem becomes essentially classical when the energy of the signal is sufficiently large. Interestingly, very weak amplitudes \(\alpha_{0} \rightarrow0\) also render the two strategies almost equivalent.

Figure 3

Excess risk for the collective strategy, \(\pmb{R^{\mathrm{ opt}}}\) , and for the E&D strategy, \(\pmb{R^{\mathrm{ E\&D}}}\) , as a function of \(\pmb{\alpha_{0}}\) .

4 General estimation measurements

We have shown that a local strategy based on the estimation of the auxiliary state via a generalised heterodyne measurement, followed by the corresponding discrimination measurement on the signal mode, performs worse than the most general (collective) strategy. However, the considered E&D procedure does not encompass all local strategies. The heterodyne measurement, albeit with some nonzero squeezing, still probes the phase space around \(\alpha_{0}\) in a Gaussian way, i.e., up to second moments. In principle, one might expect that a more general measurement, producing a non-Gaussian probability distribution for the estimate β, might perform better in terms of the excess risk, and possibly even match the optimal performance, closing the gap between the curves in Figure 3. Here we show that the observed difference in performance between the collective and the local strategy is not due to a lack of generality of the latter. We do so by considering a simplified yet nontrivial version of the problem that allows us to obtain a fully general solution.

The intuitive reason why one could think, at first, that a non-Gaussian probability distribution for β might give an advantage is the following. Imagine that α is further restricted to lie on the positive real axis. Then, the true α is either to the left of \(\alpha_{0}\) or to the right, depending on the sign of the local parameter u. In the former case, α is closer to the vacuum, so the error in discriminating between the two is larger than for the states on the other side. One would then expect that an ideal strategy should estimate the negative values of the parameter u more accurately than the positive ones. Gaussian measurements like heterodyne detection cannot accommodate this asymmetry, as they are translationally invariant, and that might be the reason behind the gap in Figure 3.
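This intuition can be made quantitative with the standard Helstrom bound for two equiprobable pure states, \(P_{e}=\frac{1}{2} (1-\sqrt{1-|\langle\psi_{0}|\psi_{1}\rangle|^{2}} )\), together with the coherent-state overlap \(|\langle0|\alpha\rangle|^{2}=e^{-|\alpha|^{2}}\). A minimal sketch (the values of \(\alpha_{0}\) and the shift below are illustrative, not taken from the text):

```python
import math

def helstrom_error(overlap_sq):
    """Minimum error probability for discriminating two equiprobable
    pure states with squared overlap |<psi0|psi1>|^2 (Helstrom bound)."""
    return 0.5 * (1.0 - math.sqrt(1.0 - overlap_sq))

def vacuum_vs_coherent_error(alpha):
    # |<0|alpha>|^2 = exp(-|alpha|^2) for a coherent state |alpha>
    return helstrom_error(math.exp(-abs(alpha) ** 2))

alpha0, delta = 1.0, 0.5   # illustrative values
p_left = vacuum_vs_coherent_error(alpha0 - delta)   # closer to the vacuum
p_right = vacuum_vs_coherent_error(alpha0 + delta)  # further away
# the hypothesis closer to the vacuum is harder to discriminate
print(p_left, p_right)
```

The left hypothesis indeed carries the larger discrimination error, which is why an asymmetric estimation of u could, a priori, seem advantageous.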

To test this, we design the following simple example. Since the required methods are a straightforward extension of the ones used in the previous sections, we only sketch the procedure without showing any explicit calculation. Imagine now that the true value of α is not Gaussian distributed around \(\alpha_{0}\), but can only take the two values \(\alpha=\alpha_{0} \pm 1/\sqrt{n}\), representing the states that are closer to the vacuum and further away from it. Having only two possibilities for α allows us to solve the most general local strategy analytically, since estimating the auxiliary state becomes a discrimination problem between the coherent states \(\vert \sqrt{n}\alpha_{0}+1 \rangle\) and \(\vert \sqrt{n}\alpha_{0}-1 \rangle\). The measurement that distinguishes the two possibilities is a two-outcome POVM \(\mathcal{E}=\{[e_{+}],[e_{-}]\}\). We use the displacement operator (8) to shift to the local model around \(\alpha_{0}\), such that the possible coherent states of the auxiliary mode are now \(\vert 1 \rangle\) and \(\vert {-1} \rangle\). Note that, without loss of generality, one can confine the POVM vectors to the (Bloch) plane spanned by \(\vert 1 \rangle\) and \(\vert {-1} \rangle\), so that \(\vert e_{+} \rangle\) and \(\vert e_{-} \rangle\) are real linear combinations of these; indeed, any component orthogonal to this plane would not help in distinguishing the hypotheses. This allows us to express the probabilities of correctly identifying each state as

$$\begin{aligned}& p_{+} = \bigl|\langle{e_{+}}|{1}\rangle\bigr|^{2}\equiv c^{2} , \end{aligned}$$
(40)
$$\begin{aligned}& p_{-} = \bigl|\langle{e_{-}}|{-1}\rangle\bigr|^{2} = 1- \bigl(c \chi- \sqrt {1-c^{2}}\sqrt {1-\chi^{2}} \bigr)^{2} , \end{aligned}$$
(41)

where \(\chi=\langle{1}|{-1}\rangle=e^{-2}\), and the overlap c completely parametrises the measurement \(\mathcal{E}\). If the optimal estimation measurement is indeed asymmetric, we should find that \(p_{+} < p_{-}\), i.e., that the probability of a correct identification is greater for the state \(\vert {-1} \rangle\) than for \(\vert 1 \rangle\).
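Eqs. (40) and (41) are straightforward to evaluate numerically. A minimal sketch with \(\chi=e^{-2}\), sweeping the single overlap parameter c that characterises the POVM:

```python
import math

chi = math.exp(-2.0)  # overlap <1|-1> of the displaced coherent states

def p_plus(c):
    """Probability of correctly identifying |1>, Eq. (40)."""
    return c ** 2

def p_minus(c):
    """Probability of correctly identifying |-1>, Eq. (41)."""
    return 1.0 - (c * chi
                  - math.sqrt(1.0 - c ** 2) * math.sqrt(1.0 - chi ** 2)) ** 2

# Sweep c to see how the measurement trades one success probability
# against the other (values of c here are illustrative):
for c in (0.6, 0.8, 0.95):
    print(f"c = {c:.2f}: p+ = {p_plus(c):.4f}, p- = {p_minus(c):.4f}")
```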

From now on we proceed as for the Gaussian E&D strategy. We first compute the posterior state of the signal mode according to Bayes’ rule. Then, we compute the optimal error probability in the discrimination of \([-\alpha_{0}]\) and the posterior state, which is a combination of \([1/\sqrt{n}]\) and \([-1/\sqrt{n}]\), weighted by the corresponding posterior probabilities. The c-dependence is carried by these probabilities. Going to the asymptotic limit \(n\to\infty\), applying perturbation theory to compute the trace norm, and averaging the result over the two possible outcomes in the discrimination of the signal state, we finally obtain the asymptotic average error probability for the local strategy as a function of c. The asymptotic average error probability for the optimal collective strategy in this simple case is obtained exactly along the same lines as shown in Section 2, and the one for known states is given by the asymptotic expansion of Eq. (16), substituting the average over \(G(u)\) appropriately.

Now we can compute the excess risk for the local and the collective strategy, and optimise the local one over c. As anticipated above, the optimal solution yields \(c^{*}= (\sqrt{1+\chi}+\sqrt{1-\chi} )/2\), and therefore \(p_{+}=p_{-}\). That is, the POVM \(\mathcal{E}\) is symmetric with respect to the vectors \(\vert {1} \rangle\) and \(\vert {-1} \rangle\), hence both hypotheses are treated on an equal footing by the measurement that determines the state of the auxiliary mode. Moreover, the gap between the excess risks of the two strategies remains. This result leads us to conjecture that the optimal collective strategy performs better than any local strategy.
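The stated optimum is easy to verify numerically from Eqs. (40) and (41); a minimal check with \(\chi=e^{-2}\):

```python
import math

chi = math.exp(-2.0)
c_star = (math.sqrt(1.0 + chi) + math.sqrt(1.0 - chi)) / 2.0

p_plus = c_star ** 2
p_minus = 1.0 - (c_star * chi
                 - math.sqrt(1.0 - c_star ** 2) * math.sqrt(1.0 - chi ** 2)) ** 2

# At c = c*, both hypotheses are identified with the same probability,
# confirming that the optimal POVM treats |1> and |-1> symmetrically.
print(p_plus, p_minus)
```

Both probabilities coincide at \((1+\sqrt{1-\chi^{2}})/2\), consistent with the symmetric treatment of the two hypotheses described above.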

5 Discussion

In this paper we have proposed a learning scheme for coherent states of light. We have presented it in the context of a quantum-enhanced readout of classically-stored binary information, following a recent research line initiated in [29]. The reading of information, encoded in the state of a signal reflected by a memory cell, is achieved by measuring the signal and deciding whether its state is the vacuum state or some coherent state of unknown amplitude. The effect of this uncertainty is mitigated by supplying a large number of auxiliary modes in the same coherent state. We have presented two strategies that make different uses of this (quantum) side information to determine the state of the signal: a collective strategy, consisting of measuring all modes at once and making the binary decision, and a local (E&D) strategy, based on first estimating - learning - the unknown amplitude, and then using the acquired knowledge to tune a discrimination measurement over the signal. We have shown that the former outperforms any E&D strategy that uses a Gaussian estimation measurement over the auxiliary modes. Furthermore, we conjecture that this is indeed the case for any (even possibly non-Gaussian) local strategy, based on evidence obtained within a simplified version of the original setting that allowed us to consider completely general measurements.

Previous works on quantum reading rely on the use of specific preparations of nonclassical - namely, entangled - states of light to improve the reading performance of a classical memory [29, 34–36]. Our results indicate that, when there exists some uncertainty in the states produced by the source (and, consequently, the possibility of preparing a specific entangled signal state is highly diminished), alternative quantum resources - namely, collective measurements - still enhance the reading of classical information using uncorrelated, classical coherent light. It is worth mentioning that there are precedents of quantum phenomena of this sort providing enhancements for statistical problems involving coherent states. As an example, in the context of estimation of product coherent states, the optimal measure-and-prepare strategy on identical copies of \(\vert \alpha \rangle\) can be achieved by local operations and classical communication (according to the fidelity criterion), but bipartite product states \(\vert \alpha \rangle \vert \alpha^{*} \rangle\) require entangled measurements [52].

On a final note, the quantum enhancement found here is relevant in the regime of low-energy signals (small coherent amplitudes). This is in accordance with the advantage regime provided by nonclassical light sources, as discussed in other works [29, 35, 37]. A low-energy readout of memories is, in fact, of very practical interest. While, mathematically, the success probability of any readout protocol could be arbitrarily increased by sending signals with diverging energy, there are many situations where this is highly discouraged. For instance, the readout of photosensitive organic memories requires a high level of control over the amount of energy irradiated per cell. In those situations, the use of signals with very low energy benefits from quantum-enhanced performance, whereas highly energetic classical light could easily damage the memory.