Aharonov et al. [1] introduced the concept of a weak value as controlling an anomalously large deflection of an atomic beam passing through a Stern–Gerlach apparatus. The size of the deflection is controlled by the pre- and post-selected states, as well as by the size of the magnetic field gradient. In the concluding paragraph, they mention that “another striking aspect of this experiment becomes evident when we consider it as a device for measuring a small gradient of the magnetic field...Our choosing (of the post-selection state) yields a tremendous amplification”. The price one pays for this amplification is the loss of a large fraction of events due to the post-selection. Nevertheless, the relevant information about the parameter in question is concentrated in this small number of events [2]. This technique has been adapted to optical metrology and has been successfully implemented in many experiments to precisely estimate various parameters, such as beam deflections and phase or frequency shifts. For recent reviews of this active area of research, see Refs. [3, 4].
While still obeying the standard quantum limit, weak value amplification experiments have been shown to be capable of extracting nearly all of the theoretically available information about the estimated parameter in a relatively simple way. Further, it has been shown that, in the presence of certain types of noise sources or technical limitations obscuring the measurement process, a weak-value-type experiment can have better precision than a standard experimental technique (even when the latter uses optimal statistical estimators), even though the detector only collects a small fraction of the light in the experiment [2]. There have also been a number of recent advances that propose to improve the intrinsic inefficiency of the post-selection. For example, in the optical context, it is possible to recycle the rejected photons, further improving the sensitivity of the technique [5]. This scheme gathers all the photons in the experiment through repeated cycles of post-selection, leading to higher power on the detector with the enhanced signal.
Quantum-enhanced metrology is based on using quantum resources, such as entanglement, to estimate a parameter of interest better than an analogous classical technique could do with similar resources (typically photon number). Proposed applications of this field range from precision measurements in optical interferometry to gravitational wave detection [6]. Recently, Pang et al. [7] proposed combining the weak value technique with additional entangled quantum degrees of freedom to further increase the weak value at the same post-selection probability, or to keep the same weak value while boosting the post-selection probability. This technique leads to Heisenberg scaling of the parameter estimation precision with the number of auxiliary degrees of freedom, using quantum entanglement as a resource. These advances lead us naturally to consider how other quantum resources manifest in the context of weak measurements, which is the subject of the present article.
An important tool in quantum-enhanced metrology is the Fisher information. Classically, this quantity indicates how much information about the parameter of interest is encoded in the probability distribution of a random variable that is being measured. It is an important quantity because it sets the (Cramér–Rao) bound for the minimum variance of any unbiased estimator for the parameter of interest. Any estimator that achieves that bound is said to be efficient. The quantum mechanical extension of the Fisher information analogously gives the quantum Cramér–Rao bound, which indicates the minimum variance achievable using any measurement strategy. Despite these powerful properties, the formal expressions for the Fisher information do not necessarily provide deeper insight about the physics of the detection method and can even obscure what are essentially simple physical effects. In this paper, we will use both quantum Fisher information and the distinguishability of two quantum states as ways to quantify the smallest measurable parameter. These measures are related to one another: the mean squared distance between a quantum state and that state slightly shifted by a classical parameter is proportional to the quantum Fisher information about the parameter in the quantum state [8]. The usefulness of weak measurements has also been considered in the problem of state distinguishability, which is related to the current problem [9].
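As a concrete illustration of the Cramér–Rao bound (our own toy example, with illustrative parameter values, not drawn from the references), consider estimating the mean of a Gaussian distribution with known variance \(\sigma^2\). The Fisher information per sample is \(1/\sigma^2\), so the bound on the variance of any unbiased estimator built from \(n\) samples is \(\sigma^2/n\), which the sample mean saturates:

```python
import numpy as np

# Classical Fisher information for a Gaussian of unknown mean and known
# sigma is F = 1/sigma**2 per sample, so the Cramer-Rao bound on the
# variance of any unbiased estimator from n samples is sigma**2 / n.
# The sample mean is an efficient estimator: its variance hits the bound.
rng = np.random.default_rng(0)
sigma, n, trials = 2.0, 100, 20000
samples = rng.normal(loc=0.0, scale=sigma, size=(trials, n))
estimates = samples.mean(axis=1)        # sample-mean estimator, per trial
crb = sigma**2 / n                      # Cramer-Rao bound: 1 / (n * F)
print(np.var(estimates), crb)           # empirical variance ~ bound
```

The empirical variance of the estimator matches the bound to within statistical fluctuations, illustrating why the Fisher information directly sets the smallest measurable parameter shift.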
A conundrum involving Fisher information was recently presented by Zhang et al. [10], who considered a coherent state of photons interacting with a spin-1/2 particle to estimate a small coupling parameter in the interaction Hamiltonian. Notably, this example is a variation of the original weak value amplification scenario [1], but using a different parameter regime that more commonly appears in cavity and circuit QED [11, 12]. Even though the coherent state used in their example is typically considered to be a classical quantity that does not provide quantum resources, the authors showed the surprising result that the Fisher information about the coupling parameter seemed to scale at the optimal Heisenberg limit as the average number of photons was increased, rather than at the standard quantum limit that one would typically expect. This result raises several immediate questions: is there a simple physical explanation of this apparent Heisenberg scaling, and can this scaling really be used to enhance the estimation of the interaction parameter in an experiment?
The proposal starts with a separable state composed of the system state \(| \psi _i \rangle \) (a two-state system) and a meter state \(|\alpha \rangle \) (a macroscopic coherent state), given by
$$\begin{aligned} | \Psi _0 \rangle = [\cos (\theta _i/2) | -\rangle + \sin (\theta _i/2) e^{i \phi _i} |+\rangle ] |\alpha \rangle . \end{aligned}$$
(1)
An interaction Hamiltonian generates a unitary operation of the form
$$\begin{aligned} U = \exp (-i g {\hat{\sigma }}_z {\hat{n}}), \end{aligned}$$
(2)
which entangles the states (see footnote 1). Here, \({\hat{\sigma }}_z\) is a Pauli operator, and \({\hat{n}}\) is a photon number operator. This results in the entangled state
$$\begin{aligned} | \Psi \rangle = \cos (\theta _i/2) | -\rangle | \alpha e^{i g} \rangle + \sin (\theta _i/2) e^{i \phi _i} |+\rangle |\alpha e^{-i g}\rangle , \end{aligned}$$
(3)
which is often called a Schrödinger cat state because the total quantum state involves a superposition of macroscopically distinct states of light.
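The passage from Eqs. (1)–(3) can be checked numerically in a truncated Fock basis. The following sketch (our own check, with illustrative values of \(\alpha\) and \(g\), and the symmetric pre-selection \(\theta_i = \pi/2\), \(\phi_i = 0\) used below) applies the diagonal unitary of Eq. (2) to the product state of Eq. (1) and compares the result with the cat state of Eq. (3):

```python
import numpy as np
from math import factorial

dim, alpha, g = 60, 2.0, 0.1  # Fock truncation, coherent amplitude, coupling

def coherent(a, d):
    # Fock coefficients of |a>: e^{-|a|^2/2} a^n / sqrt(n!)
    n = np.arange(d)
    fact = np.array([factorial(int(k)) for k in n], dtype=float)
    return (np.exp(-abs(a)**2 / 2) * a**n / np.sqrt(fact)).astype(complex)

# Qubit basis ordering (|->, |+>), with sigma_z |-> = -|->, sigma_z |+> = +|+>
minus = np.array([1, 0], dtype=complex)
plus = np.array([0, 1], dtype=complex)
psi0 = np.kron((minus + plus) / np.sqrt(2), coherent(alpha, dim))  # Eq. (1)

# U = exp(-i g sigma_z n) is diagonal in the joint basis {|-,n>, |+,n>}:
# phase e^{+i g n} on the |-> branch, e^{-i g n} on the |+> branch
n = np.arange(dim)
psi = np.concatenate([np.exp(1j * g * n), np.exp(-1j * g * n)]) * psi0

# Eq. (3): each qubit branch carries a phase-rotated coherent state
target = (np.kron(minus, coherent(alpha * np.exp(1j * g), dim))
          + np.kron(plus, coherent(alpha * np.exp(-1j * g), dim))) / np.sqrt(2)
print(np.allclose(psi, target))  # True
```

The agreement is exact up to Fock-space truncation error, since each branch of the superposition simply acquires the number-dependent phase \(e^{\pm i g n}\), which rotates the coherent amplitude to \(\alpha e^{\pm i g}\).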
The authors go on to look at projection of the system state onto a final state \(|\psi _f\rangle \), where this state has the same form as \(|\psi _i\rangle \), with the subscript \(i\) replaced by \(f\) on the parameters [10]. Specifically, a strong measurement will project the system onto \(|\psi _f\rangle \) or onto the state orthogonal to \(|\psi _f\rangle \) (since the system is two dimensional, there are no other options). The scaling of the post-selected parameter estimation is optimized when pre- and post-selected states are parallel \(|\psi _i\rangle = |\psi _f\rangle = (|- \rangle + |+ \rangle )/\sqrt{2}\), so we focus on this case for simplicity of calculation. The orthogonal state is then clearly \(|\psi _f^\perp \rangle = (|-\rangle - |+\rangle )/\sqrt{2}\).
In the case of a projection onto \(|\psi _f\rangle \), or \(|\psi _f^\perp \rangle \), the resulting meter states of the light are given by
$$\begin{aligned} | \phi _{\pm }\rangle = (1/2) ( | \alpha e^{i g}\rangle \pm |\alpha e^{-i g}\rangle ), \end{aligned}$$
(4)
where \(+\) refers to projection onto \(|\psi _f\rangle \), and \(-\) onto \(|\psi _f^\perp \rangle \). These meter states must be properly renormalized; their squared norms give the probabilities \(p_\pm \) of projecting onto the parallel or perpendicular system states,
$$\begin{aligned} p_{\pm } = 1/2 \pm (1/4) (\exp ( |\alpha |^2 (e^{2 i g} - 1))+c.c.). \end{aligned}$$
(5)
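Equation (5) follows from the overlap of two coherent states, \(\langle \beta | \gamma \rangle = \exp (-|\beta |^2/2 - |\gamma |^2/2 + \beta ^* \gamma )\), applied to the unnormalized meter states of Eq. (4). A brief numerical check (our own, with illustrative parameter values) in a truncated Fock basis:

```python
import numpy as np
from math import factorial

dim, alpha, g = 60, 2.0, 0.1  # Fock truncation, coherent amplitude, coupling

def coherent(a, d):
    # Fock coefficients of |a>: e^{-|a|^2/2} a^n / sqrt(n!)
    n = np.arange(d)
    fact = np.array([factorial(int(k)) for k in n], dtype=float)
    return (np.exp(-abs(a)**2 / 2) * a**n / np.sqrt(fact)).astype(complex)

# Unnormalized meter state of Eq. (4), + branch; its squared norm is p_+
phi_plus = 0.5 * (coherent(alpha * np.exp(1j * g), dim)
                  + coherent(alpha * np.exp(-1j * g), dim))
p_numeric = np.vdot(phi_plus, phi_plus).real

# Closed form of Eq. (5)
z = abs(alpha)**2 * (np.exp(2j * g) - 1)
p_formula = 0.5 + 0.25 * (np.exp(z) + np.exp(np.conj(z))).real
print(p_numeric, p_formula)  # agree up to truncation error
```

The two values agree to within the Fock-space truncation error, confirming the closed-form expression for the post-selection probability.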
Note that as \(g \rightarrow 0\), the probability of projecting back onto the initial system state approaches 1. Reference [10] points out that there are three possible sources of information in the measurement: the probability of the post-selection projection \(p_{+}\), and the information in the two meter states, \(| \phi _{\pm }\rangle \) (in principle, the correlations between these outcomes also carry information). The Fisher information contained in these channels is then calculated, and curiously, while the meter states have Fisher information that scales with \(N = |\alpha |^2\) (yielding the standard quantum limit), the probability of the post-selection has a Fisher information that scales as \(N^2 = |\alpha |^4\), giving Heisenberg scaling in the photon number for the precision of estimating \(g\).
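This scaling can be seen directly from Eq. (5). The post-selection outcome is a binary random variable, whose classical Fisher information is \(F = (\partial _g p_+)^2 / [p_+ (1 - p_+)]\); for small \(g\) one finds \(F \approx 4 N^2\). A short numerical sketch (our own check, with illustrative parameter values, not the general expressions of Ref. [10]):

```python
import numpy as np

def p_plus(g, N):
    # Post-selection probability of Eq. (5), with N = |alpha|^2
    z = N * (np.exp(2j * g) - 1)
    return 0.5 + 0.25 * (np.exp(z) + np.exp(np.conj(z))).real

g, dg = 1e-4, 1e-8  # small coupling; step for the numerical derivative
ratios = []
for N in [100, 200, 400, 800]:
    p = p_plus(g, N)
    dp = (p_plus(g + dg, N) - p_plus(g - dg, N)) / (2 * dg)
    F = dp**2 / (p * (1 - p))  # Fisher information of a binary outcome
    ratios.append(F / N**2)
print(ratios)  # each ratio approaches 4, i.e. F ~ 4 N^2
```

The ratio \(F/N^2\) tends to 4 as \(N\) grows (with \(1/N\) corrections), exhibiting the Heisenberg scaling of the post-selection channel in this small-\(g\) regime.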
The main purpose of this paper is to give physical insight into why Heisenberg scaling for the parameter \(g\) can be obtained at all, and further, why it comes mainly from the probability of projecting on the system state, as opposed to mining the meter states for information, as is usually done in weak value amplification experiments [3]. Zhang, Datta, and Walmsley write that “How this conditioning step using a classical measurement apparatus achieves a precision beyond the standard quantum limit is therefore an interesting open question.” We answer this question here and give a simple physical argument showing how this scaling is possible. We are primarily concerned with the large \(N\) limit (Heisenberg scaling), rather than finding general expressions as in Ref. [10].