Eliminating the "impossible": Recent progress on local measurement theory for quantum field theory

Arguments by Sorkin (arXiv:gr-qc/9302018) and Borsten, Jubb, and Kells (arXiv:1912.06141) establish that a natural extension of quantum measurement theory from non-relativistic quantum mechanics to relativistic quantum theory leads to the unacceptable consequence that expectation values in one region depend on which unitary operation is performed in a spacelike separated region. Sorkin labels such scenarios "impossible measurements". We explicitly present these arguments as a no-go result with the logical form of a reductio argument and investigate the consequences for measurement in quantum field theory (QFT). Sorkin-type impossible measurement scenarios clearly illustrate the moral that Microcausality is not by itself sufficient to rule out superluminal signalling in relativistic quantum theories that use Lüders' rule. We review three different approaches to formulating an account of measurement for QFT and analyze their responses to the "impossible measurements" problem. Two of the approaches are: a measurement theory based on detector models proposed in Polo-Gómez, Garay, and Martín-Martínez (arXiv:2108.02793) and a measurement framework for algebraic QFT proposed in Fewster and Verch (arXiv:1810.06512). Of particular interest for foundations of QFT is that they share common features that may hold general morals about how to represent measurement in QFT. These morals concern the role that dynamics plays in eliminating "impossible measurements", the abandonment of the operational interpretation of local algebras as representing possible operations carried out in a region, and the interpretation of state update rules. Finally, we examine the form that the "impossible measurements" problem takes in histories-based approaches and discuss the remaining challenges.


Introduction
Non-relativistic quantum mechanics (NRQM) has an accepted measurement theory for laboratory measurements (see, e.g., Busch et al. [18], a standard reference on Quantum Measurement Theory). There is no consensus about how to interpret NRQM (including this measurement theory); this is the infamous Measurement Problem. However, the recipe for extracting probabilistic predictions for measurement outcomes from NRQM is uncontroversial. In contrast, it is far from straightforward to formulate an analogous measurement theory for local laboratory measurements in relativistic quantum field theory (QFT). As Sorkin's 1993 paper "Impossible measurements on quantum fields" [100] illustrates, the natural generalization of the non-relativistic measurement scheme to relativistic quantum theory fails because it entails superluminal signalling. Sorkin uses a minimal theoretical framework for relativistic quantum theory to construct his examples of impossible measurements, but articulating an adequate measurement theory has also been a longstanding problem in more comprehensive axiomatic formulations of QFT such as algebraic QFT. Philosophers Earman and Valente declare that the lack of a measurement theory for algebraic QFT is "a major scandal in the foundations of quantum physics" [36, p.17]. In a recent paper that aims to make amends for this, Fewster and Verch [41] remark that there has been a "gap" between the fields of quantum measurement theory and algebraic QFT that "has-surprisingly-lain open for a long time." We take Sorkin's examples of 'impossible measurements' and the recent extensions by Borsten, Jubb, and Kells in [14] as a starting point for investigating possible formulations of a measurement theory for QFT. We approach the problem of formulating a measurement theory for QFT from the disciplinary perspectives of philosophy and relativistic quantum information (RQI). Our ultimate goal is to explore the landscape of recent proposals for a measurement theory for QFT. 
There has been intense recent interest in this topic, so we will not be able to offer a comprehensive survey. We will instead focus our attention on a few well-developed proposals that are different in spirit, and will clarify both their differences and their similarities. The two main approaches that we will consider are the detector models approach prominent within Relativistic Quantum Information that motivated the proposal for a detector-based measurement theory by Polo-Gómez, Garay, and Martín-Martínez in [86] and the framework for measurement in algebraic QFT (AQFT) presented by Fewster and Verch in [41]. Both of these proposals successfully address the 'impossible measurements' problem. We will also consider a history-based approach, which is Sorkin's preferred response. As we shall discuss, how to eliminate 'impossible measurements' in this framework is an open problem. Our aim is not to advocate for any one of these proposals. On the contrary, one of our conclusions is that the proposals are designed to address different problems and are currently each suitable for different purposes. Measurement theory for QFT is still an area of active research, so there is no one settled formulation that satisfies all desiderata.
Sorkin [100] and Borsten et al. [14] furnish an excellent starting point for understanding the current situation because addressing this 'impossible measurement' problem is widely regarded as the first order of business for establishing a measurement theory for QFT. As we shall explain, these impossible measurement scenarios rely on assumptions that can be framed as an informal no-go result. This reductio ad absurdum argument is useful because it identifies an apparently reasonable set of premises that lead to an unacceptable conclusion. The premises include the basic elements of ideal measurement theory for quantum mechanics, including Lüders' rule for state update for non-selective measurements. This measurement theory is adapted to Minkowski spacetime by making the natural assumption that causal order defines a partial temporal order. The Microcausality principle that operators associated with spacelike separated regions commute is also imposed. When the system is not being measured, the Heisenberg picture representation for the dynamics is used. There are examples of 'impossible measurement' scenarios that comply with all of these requirements, and yet the expectation values for a measurement confined to one bounded region depend on which non-selective measurement is carried out in a spacelike separated bounded region. This conclusion is clearly unacceptable because it violates the prohibition on superluminal signalling or information transfer that is typically understood to be a hallmark of relativistic theories.
Different approaches to formulating an account of measurement for QFT can be classified according to how they respond to this reductio argument by revising, rejecting, or adding premises. The detector models approach has the pragmatic goal of representing detectors that are actually used to theoretically and experimentally investigate the measurement of relativistic fields, typically in quantum optics and quantum information (e.g., Unruh-DeWitt detectors). The response to the 'impossible measurements' reductio argument is to use NRQM, and not QFT, to model the detectors. This allows ideal measurement theory for NRQM to be applied to the detector model without (for all practical purposes, FAPP) leading to superluminal signalling, provided that care is taken to satisfy relativistic constraints when coupling the detector and field. Essentially, the addition of these assumptions for a concrete detector model to Sorkin's premises is what excludes FAPP the possibility of 'impossible measurements' in the detector model's regime of applicability. The Fewster-Verch (FV) framework for measurement in algebraic QFT presented in [41] takes a different approach that begins with general physical principles for QFT. Fewster and Verch adopt axioms for AQFT that go beyond the minimal set of physical principles assumed by Sorkin. These additional physical principles entail that ideal measurement theory cannot be extended from NRQM to QFT in the straightforward manner posited by Sorkin. The FV framework also rejects some of Sorkin's premises about ideal measurement theory. In particular, the assumption that Lüders' rule is applied to determine the post-measurement state of the system is rejected. The FV framework introduces a new measurement theory for AQFT with new state update rules that are informed by the physical principles of AQFT.
Histories-inspired approaches also reject the assumption that Lüders' rule is applicable, but do not aim to introduce state update rules that describe the measurement process 'step-by-step'; instead, probabilities are directly assigned to entire histories.
It is worth emphasizing, as Sorkin also emphasizes, that the 'impossible measurements' problem is a separate issue from the Measurement Problem. In NRQM, the Measurement Problem originally arose after the standard theory of NRQM (including dynamics) was formulated and a rudimentary measurement theory was introduced. In brief, one version of the Measurement Problem in NRQM is that the unitary quantum dynamics (e.g., as given by the Schrödinger equation) is inconsistent with the prescription for state update after measurement (e.g., as given by Lüders' rule). In general, the Schrödinger equation determines that the composite of the system and measuring device ends up in an entangled state that is not an eigenstate of the measured quantity, while the Lüders' rule for selective measurement assigns an eigenstate of the measured quantity. (Furthermore, it is the state update rule that seems to be correct about the post-measurement state.) The Measurement Problem in QFT should be set up in an exactly analogous way. This means that before the Measurement Problem in QFT can be posed, the physical theory of QFT (including dynamics) and a measurement theory for QFT must be fixed. The 'impossible measurements' problem pertains to how the physical theory of QFT and a measurement theory for QFT are formulated. In this sense, the 'impossible measurements' problem for QFT arises prior to the Measurement Problem for QFT and the ensuing interpretational issues. This means that addressing the 'impossible measurements' problem does not require a solution to the Measurement Problem for QFT. However, the solution to the 'impossible measurement' problem that is adopted may well affect the form that the Measurement Problem takes in QFT. In general, both the physical theory of QFT and the measurement theory for QFT differ from NRQM; therefore, the Measurement Problem may take different forms in QFT and NRQM.
It is also useful to consider the historical context of the 'impossible measurement' problem. Of course, measurement is possible in QFT; it is commonplace to use QFTs to make theoretical predictions for measurements conducted on relativistic quantum systems. In QED, for example, standard predictions take the form of scattering amplitudes, which involve asymptotic states. This is in contrast to NRQM, in which predicted quantities typically take the form of properties of an instantaneous state at a finite time or a stationary state. Blum [12, p.46] characterizes this historical shift from a focus on states in NRQM to scattering theory in QED as a paradigm shift because it constitutes a significant change in the paradigmatic problem of what is to be calculated from the theory. Blum offers an illuminating account of how the quantum state "withers away" in two lines of development of relativistic quantum theory in the 1930s and 1940s, one that originates with Heisenberg's S-matrix theory and the other with Wheeler-Feynman electrodynamics. As Blum explains, this paradigm shift was prompted both by the desideratum of obtaining an explicitly relativistic formulation of quantum theory and by the need for a calculationally tractable theory. (See [46] for further discussion of the historical background.) Asymptotic scattering theory works well for predictions for many experimental scenarios, especially in particle physics, but does not cover all cases of interest. Relativistic quantum information is a field in which finite time processes that occur in a local laboratory environment are theoretically and experimentally investigated. Consequently, a measurement theory for local measurements that applies to relativistic quantum theory at finite times is needed. Sorkin's paper [100] concerns precisely this problem. For this reason, [100] has been influential in the relativistic quantum information community more broadly [30,10,9,11,25].
Recently, the issue of how to formulate a measurement theory for QFT has attracted renewed attention, in part due to Borsten et al.'s [14] sharpening of Sorkin's results and Fewster and Verch's proposed measurement theory for algebraic QFT [41].
We begin by explicitly formulating the reductio argument that underlies Sorkin-type impossible measurement scenarios. After rehearsing Sorkin's [100] original examples of impossible measurement scenarios and Borsten, Jubb, and Kells' [14] recent examples, we use the reductio argument to analyze the root causes of the 'impossible measurements' problem and to classify different approaches to formulating an account of measurement for QFT. Sec. 3 and 4 offer overviews of the FV framework and the detector models approach, respectively, focusing on analysis of how each approach addresses the reductio argument and rules out impossible measurement scenarios. In Sec. 5, we compare the FV and detector-based measurement theories. Our comparison does not focus exclusively on the important differences; we also identify substantial similarities between the two measurement theories. The similarities are of particular interest because, given the differences in goals and strategies employed by the two approaches, similarities suggest morals about general features that might be shared by any measurement theory for QFT. In Sec. 6 we consider how the histories-inspired approaches favoured by Sorkin address impossible measurement scenarios. Sec. 7 summarizes our conclusions. We hope that this paper lays the groundwork for productive dialogue among the many communities of physicists and philosophers who are working on theoretical, practical, and interpretative issues surrounding the treatment of local measurements in QFT.

Sorkin's [100] and Borsten, Jubb, and Kells' [14] 'impossible measurements'
Sorkin's original paper [100] presents examples of 'impossible measurement' scenarios that exhibit dependence of the expectation values of a measurement in one region on the identity of the measurement operation performed in a spacelike separated region. The purpose of these examples is to show that ideal measurement theory cannot be naïvely extended from NRQM to relativistic quantum theory. Borsten, Jubb, and Kells [14] explicate Sorkin's assumptions and introduce additional examples of 'impossible measurements'. In Sec. 2.1 we begin by reviewing the set of assumptions that gives rise to these 'impossible measurement' scenarios and explicitly cast the argument in the logical form of a reductio ad absurdum argument. We then take up examples of 'impossible measurement' scenarios in Sec. 2.2 and 2.3. This will be followed by analysis of conclusions that are supported by the reductio argument and a survey of strategies for responding to the reductio argument in Sec. 2.4.

The 'impossible measurements' reductio argument
The 'impossible measurement' scenarios presented by Sorkin [100] and Borsten, Jubb, and Kells [14] are a type of no-go result. No-go results such as Bell's theorem have played an important role in foundations of quantum theory because they identify a set of assumptions that cannot all be true. (Conditional on the conclusion being false, which in the case of Bell's theorem is established by closing the loopholes in the experimental tests of the Bell inequalities.) Similarly, the 'impossible measurement' scenarios are valuable because they play the role of identifying a set of assumptions that cannot all be true. (Conditional on the conclusion being false, which in this case is established by experimental tests that rule out superluminal signalling.) In relativistic quantum information (RQI), 'impossible measurement' results have played the heuristic roles of motivating the formulation of models of local measurement that are suited to QFT and serving as a criterion of adequacy for proposed local measurement models for QFT (i.e., they must not permit detectable signalling in Sorkin-type measurement scenarios). We will also use the 'impossible measurements' results to classify different approaches to formulating a measurement theory for QFT (Sec. 2.4). For these heuristic purposes, it is useful to extract a reductio ad absurdum argument from the examples of 'impossible measurements'. A reductio argument is an argument in which an apparently acceptable set of premises leads by apparently acceptable reasoning to an apparently unacceptable conclusion.
Both Bell's theorem and the argument based on 'impossible measurement' examples that is set out below take the form of reductio arguments. An important difference between these two arguments is that Bell's theorem is a no-go theorem provable using mathematics from mathematically-stated premises (i.e., a deductive argument) while the 'impossible measurements' reductio argument is a more informal no-go result (i.e., the conclusion is established by producing examples of scenarios in which superluminal signalling is possible). This is a significant difference, and one that makes Bell's theorem much more powerful, but for our purposes what is important is that the informal 'impossible measurement' reductio argument serves the heuristic functions of motivating and guiding the formulation of a measurement theory for QFT.
Relatedly, Sorkin assumes only a minimal, informal framework for relativistic quantum theory. Assume that (when no measurements occur) there is a Heisenberg picture representation of some quantum field Φ (e.g., a free scalar quantum field) and an observable A_k is associated with a region of Minkowski spacetime O_k by restriction of the field Φ to O_k. Microcausality is the only principle from QFT that is assumed. This starting point is intended to invoke only generally agreed upon features of relativistic quantum theory. Following the presentation of 'impossible measurement' examples in Borsten, Jubb, and Kells [14], here is the reductio argument:

P1 Local degrees of freedom: An observable A_k is associated with a region of Minkowski spacetime O_k by restriction of the field Φ to O_k. [100]

P2 Dynamics: When measurements are not being performed, use the Heisenberg picture representation (i.e., time-dependence is carried by the observables).

P3 Ideal measurement theory for relativistic quantum theory:

(a) Detection assumptions:

(i) Eigenstate-eigenvalue link: "the measurement outcomes are the eigenvalues of the self-adjoint operator corresponding to the observable" [14].

(ii) Born rule: In a state ρ, the probability of an outcome n that corresponds to a projector E_n is given by Prob(n) = tr(ρ E_n).

(b) Preparation assumption: The state ρ(t′) at time t′ after a non-selective measurement is determined by applying Lüders' rule (for non-selective measurement) to the state at time t prior to the measurement.

Lüders' rule for non-selective measurement for arbitrary self-adjoint observables:¹ by the spectral theorem, A = ∫_{−∞}^{∞} λ dE(λ), where E(·) maps Borel subsets B ⊆ ℝ to projectors on H. For a set of mutually disjoint Borel sets B = {B_n}_{n∈I} that covers ℝ (with I some countable indexing set), each B_n represents a possible bin for a measurement outcome. The corresponding projectors² E_n := E(B_n) resolve the identity, Σ_{n∈I} E_n = 1_H. Lüders' rule for non-selective measurement for arbitrary self-adjoint observables is then

ρ(t) → ρ(t′) = Σ_{n∈I} E_n ρ(t) E_n.

(c) Temporal ordering: Lüders' rule is applied to a sequence of measurements according to the temporal order ≺ induced on their regions by the causal structure of Minkowski spacetime.³

P4 Microcausality: Operators associated with spacelike separated regions of Minkowski spacetime commute.

Conclusion: There are bounded, spacelike separated regions O_1 and O_3 for which the expectation values of a measurement confined to O_3 depend on which unitary operation is performed in O_1.

That the existence claim in the conclusion of the argument is compatible with the premises is established by the examples set out in the following two subsections. Premise P3 sets out the assumptions of ideal measurement theory for relativistic quantum theory. Parts (a), the detection assumptions, and (b), the preparation assumption, are carried over directly from NRQM. The relativistic ingredient of P3 is (c), which specifies a temporal ordering relation for regions in Minkowski spacetime. P4 Microcausality is an uncontroversial assumption within QFT. In both of the responses to Sorkin-type impossible measurement scenarios discussed in Sec. 3 and 4 below, the operation performed in region O_1 is implemented by a non-selective measurement.

A further argument can be made that the conclusion of the reductio argument is unacceptable because it allows for superluminal signalling. As a consequence of the detection assumptions P3(a), the probabilities for measurement outcomes in O_3 are dependent on which measurements are carried out in the spacelike separated region O_1. If we assume, following Borsten et al. [14], that parties can make multiple measurements on identically prepared systems to build up statistics, then in principle an observer in O_3 could determine whether a measurement was carried out in the spacelike separated region O_1. This violates the prohibition on superluminal signalling or information transfer that is typically understood to be a hallmark of relativistic theories.

Footnote 1: This is a generalisation of Lüders' rule for non-selective measurement for discrete observables: for a compact self-adjoint observable A, A = Σ_n λ_n E_n, where the λ_n are distinct eigenvalues and the E_n are projectors onto the associated eigenspaces that resolve the identity. A selective measurement is conditioned on obtaining the outcome λ_n; a non-selective measurement is not conditioned on obtaining any particular outcome. Lüders' rule for non-selective measurement for discrete observables: ρ(t) → ρ(t′) = Σ_n E_n ρ(t) E_n.

Footnote 2: Not necessarily rank-1.

Footnote 3: As Sorkin [100, p.3] notes, ≺ may be extended to some non-unique linear order. P4 Microcausality ensures that different choices of linear order do not affect the expectation values for any sequence of projective measurements associated with the set of regions.
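Concretely, in a finite-dimensional Hilbert space the non-selective Lüders update of P3(b) can be sketched as follows. This is our illustration (not code from [100] or [14]), using NumPy; grouping (nearly) equal eigenvalues plays the role of the outcome bins B_n:

```python
import numpy as np

def spectral_projectors(A, tol=1e-9):
    """Projectors E_n onto the distinct-eigenvalue eigenspaces of Hermitian A."""
    evals, evecs = np.linalg.eigh(A)
    projectors = []
    i = 0
    while i < len(evals):
        # group (nearly) equal eigenvalues into one (possibly degenerate) eigenspace
        j = i
        while j < len(evals) and abs(evals[j] - evals[i]) < tol:
            j += 1
        V = evecs[:, i:j]
        projectors.append(V @ V.conj().T)
        i = j
    return projectors

def lueders_nonselective(rho, A):
    """Non-selective Lüders update: rho -> sum_n E_n rho E_n."""
    return sum(E @ rho @ E for E in spectral_projectors(A))

# Example: qubit in |+><+|, non-selective measurement of sigma_z
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
sigma_z = np.diag([1.0, -1.0])
rho_after = lueders_nonselective(rho, sigma_z)
# the update destroys the off-diagonal coherences and preserves the trace
print(np.round(rho_after, 6))
```

Note that the update is trace-preserving but generally not unitary, which is exactly the feature exploited in the impossible measurement scenarios below.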
The reason for labeling the regions O_1 and O_3 will become apparent in the next section: the examples of superluminal signalling involve measurements in another, intermediate region O_2. We will refer to this reductio argument as the 'impossible measurements' reductio argument, but it should be appreciated that while Sorkin was (as far as we are aware) the first to raise this problem, Borsten, Jubb, and Kells [14] make important contributions that refine the argument.

Sorkin's examples of impossible measurements
Sorkin offers two versions of his no-go result, a QFT version and a QM version with qubits on Minkowski spacetime. Since we will argue that the QM version is not compelling, we will begin by reviewing the QFT version. Consider O_1 and O_3, two bounded spacelike separated regions of Minkowski spacetime, and a unitary element of the local algebra A(O_1) that is characterized by a parameter λ, i.e., U_λ ∈ A(O_1). This can be thought of as a local unitary 'intervention' that will transform the state of the field |ψ_0⟩ → U_λ|ψ_0⟩ := |ψ_1⟩. Independent of the interpretation of this 'local kick', the prohibition of superluminal signalling entails that expectation values of observables outside the causal future of O_1 should not depend on the value of λ. In this case of two spacetime regions, this is guaranteed by Microcausality, which imposes [U_λ, C] = 0 for all λ and for all C ∈ A(O_3). As a result of Microcausality,

⟨ψ_1|C|ψ_1⟩ = ⟨ψ_0|U_λ† C U_λ|ψ_0⟩ = ⟨ψ_0|C U_λ† U_λ|ψ_0⟩ = ⟨ψ_0|C|ψ_0⟩. (3)

This expectation value is independent of λ, and so the value of λ cannot be used to signal to spacelike separated regions.
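The bipartite case can be checked numerically in a two-qubit stand-in. This is our toy illustration (not Sorkin's field-theoretic computation), using NumPy: an operator acting only on the second tensor factor commutes with any 'kick' on the first, so the analogue of Eq. (3) holds for every λ.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli x
I2 = np.eye(2, dtype=complex)

def U(lam):
    """'Local kick' U_lam = exp(i*lam*sigma_x) acting on the first qubit only."""
    return np.kron(np.cos(lam) * I2 + 1j * np.sin(lam) * sx, I2)

# C acts only on the second qubit, so [U(lam), C] = 0 -- the qubit stand-in
# for Microcausality between the spacelike separated regions O_1 and O_3
C = np.kron(I2, sx)

# entangled initial state |psi_0> = (|00> + |11>)/sqrt(2)
psi0 = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

expectations = []
for lam in (0.0, 0.3, 1.2):
    psi1 = U(lam) @ psi0
    expectations.append((psi1.conj() @ C @ psi1).real)
# all entries are equal: the lambda-dependence cancels exactly as in Eq. (3)
print(expectations)
```

Even an entangled initial state does not help the would-be signaller here; commutation alone kills the λ-dependence.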
The situation changes dramatically if one considers a third region O_2 'between' O_1 and O_3 that is partially in the causal future of O_1 and partially in the causal past of O_3. Roughly speaking, this third region can 'link' the first two in counterintuitive ways. The region O_2 is chosen by Sorkin to be a thickened hypersurface that lies in the chronological future of O_1 and in the chronological past of O_3 (see Figure 2). Associated with O_2 is a non-selective measurement of the projector P_2 = |ψ_2⟩⟨ψ_2|. Applying the non-selective Lüders' rule to the state |ψ_1⟩ = U_λ|ψ_0⟩, it is easy to see that the expectation value of C is

⟨C⟩ = ⟨U_λ† P_2 C P_2 U_λ⟩_0 + ⟨U_λ† (1 − P_2) C (1 − P_2) U_λ⟩_0, (4)

where we denote with ⟨...⟩_0 the expectation value over the state |ψ_0⟩. This expression is equal to prob(P_2 = 1)Exp(C, P_2 = 1) + prob(P_2 = 0)Exp(C, P_2 = 0) and will generally depend on λ. Sorkin chooses a particular state |ψ_2⟩, a superposition of the vacuum and a one-particle state, to demonstrate the λ-dependence, but the details of the derivation are not important. One simply has to notice that, in general, the λ-dependence on the r.h.s. of (4) will not drop out (as it did in (3)) since U_λ is guaranteed to commute with C but not with P_2 ∈ A(O_2), because O_1 and O_2 are not spacelike separated. This λ-dependence instantiates the conclusion of the no-go result, since it allows for superluminal signalling between the spacelike separated regions O_1 and O_3 (because, in principle, a signal can be encoded in the value of λ).
To fully appreciate the no-go result, it is important to analyse the role of region O_2 in terms of the premises that are laid down in the previous section. Microcausality (P4) provides the ground for thinking of regions O_1 and O_3 as 'separate' or statistically independent in a bipartite scenario. By invoking a third 'intervention region' O_2 we open up the possibility of signalling between O_1 and O_3 (Conclusion). This is because the non-selective measurement that is associated with region O_2 updates or 'prepares' the state over which the expectation value of C is evaluated, in accordance with the standard rules set out in P3. More explicitly, the preparation assumption (b) (Lüders' rule) is used for the measurement over O_2, while the detection assumptions (a) go into the evaluation of the expectation value of C. A temporal ordering relation t < t′ is needed to apply Lüders' rule. Premise (c) defines a relativistic temporal ordering relation, which reflects the causal structure of Minkowski spacetime (see [100]). Perhaps the involvement of a third region would partially demystify the conclusion, but from the local perspective of the observers that one could associate with O_1 and O_3, the non-selective measurements over O_2 should be irrelevant. Thus, one of the problems posed by this example is the consistent description of multi-partite measurements (involving more than two parties) in relativistic spacetimes.

Sorkin also offers a 'baby' QM version of the no-go result. In this case, there are two qubits that one can think of as embedded over regions O_1 and O_3 in Minkowski spacetime. The two qubits are initially in an entangled state, and the first one can potentially be flipped by a local unitary operation (the analogue of the local unitary over region O_1) before a global projector is applied to the total system (the analogue of the non-selective measurement over O_2).
Evidently, the expectation values of observables of the second qubit (the analogue of O_3) will generally depend on whether the first qubit was flipped before the global operation. This is not surprising because the global projection presupposes some notion of global access to the total system. Sorkin suggests that this example is "[i]n a sense ... all we need, since one would expect to be able to embed it in any quantum field theory which is sufficiently general to be realistic" (p.7).
While it is true that NRQM should somehow be related to QFT, it does not follow that the QM example is sufficient for Sorkin's purposes. Precisely how NRQM relates to QFT is a nontrivial and somewhat controversial matter, as the discussion below of detector models and the FV framework for algebraic QFT will highlight. It is not obvious which features of Sorkin's QM example should be expected to carry over to QFT. The value of Sorkin's quantum field theoretic example is that it clearly demonstrates which set of assumptions adapted from NRQM cannot be transferred to QFT. Furthermore, there are disanalogies between the two examples that seem relevant. In the case of the two qubits there is no third 'disjoint' party. The 'third' system is simply the total system. Of course, operations over the total system are by definition global. In the QFT example, there is a non-trivial third party O_2, seemingly 'disjoint' from O_1 and O_3. Nevertheless, that third party is responsible for an operation which, loosely speaking, would also 'connect' O_1 and O_3. Relatedly, Weinstein [111] points out that Sorkin's QM example implicitly assumes that ideal measurements are instantaneous. Another disanalogy is that the initial state of the QM system must be entangled over the qubits, while there are no restrictions on the initial state in the QFT example. These disanalogies seem to undermine the usefulness of the QM example.

Borsten, Jubb, and Kells' examples of impossible measurements
At first glance, Sorkin's QFT example illustrating his no-go result seems to be very particular to the choice of O_2 as a thickened hypersurface (Figure 1). It is definitely bothersome, but not really surprising, that such global operations, like the one over region O_2, can cause signalling between the two spacelike separated parties. The global projector represents an operation that presupposes some notion of global access to the total system. Sorkin recognizes this shortcoming of his example, but insists that there is still a genuine problem for QFT: "[i]n a way it is no surprise that a measurement such as of [A_2], which occupies an entire hypersurface, should entail a physical non-locality; but surprising or not, the implications seem far from trivial...What then remains of the apparatus of states and observables, on which the interpretation of quantum mechanics is traditionally based?" Unfortunately, the problem raised by Sorkin cannot be easily dismissed by simply excluding global operations. Borsten, Jubb, and Kells [14] supply examples that establish that the problem persists for general bounded regions O_2 that partially invade the future lightcone of O_1 and the past lightcone of O_3, i.e., O_2 ∩ J^+(O_1) ≠ ∅ and O_2 ∩ J^−(O_3) ≠ ∅ (see Figure 2). As we shall discuss in the next section, Borsten et al. posit a general condition on allowed local operators that guarantees no-signalling for non-selective measurement.
Some examples of seemingly innocent locally implementable operations that lead to 'impossible measurements' are given in [14] (and also [9]). For finite dimensional Hilbert spaces, it is particularly interesting for quantum information purposes to analyse the causal behaviour of operations that correspond to measuring observables of the typeÂ ⊗ 1 + 1 ⊗B versusÂ ⊗B on the tensor product of two local subsystems H 1 ⊗ H 2 . In [14] it was shown that the latter can be problematic, despite the expectation that such a 'factorised' operation should be locally implementable in the Hilbert space sense (by means of LOCC, local operations and classical communication). They provide the following concrete example of a bipartite system that starts out in the factorised state |ψ = |0 ⊗ 1 √ 2 (|0 + |1 ). First, a local unitary 'kick'Û = e iγσx⊗1 is applied, and then a non-selective measurement of the observable |1 1| ⊗σ z is performed. The outcome is that expectation values of observables over H 2 will generally depend on γ, e.g., 1 ⊗σ x = cos 2 (γ). 7 A similar QFT example of this problematic case is the measurement of a product of fields such asφ(f 1 )φ(f 2 ) where f 1 , f 2 are supported in disjoint spacetime regions (see [65]). 8 6 The causal future/past J +/− (x) of a spacetime point x is the set of all points reached from x by smooth future-directed causal curves. For a spacetime region O we write J ± (O) = x∈O J ± (x) [41]. 7 By applying the causality condition (5) that Borsten et al. derive (see next section) for the case A 1 =Û , A 2 = |1 1| ⊗σ z and A 3 = 1 ⊗σ z one can appreciate the role that the degeneracy of A 2 plays in failing to satisfy (5). That is, the non-selective measurement of a degenerate observable can enable superluminal signalling, even if this observable is factorised.
8 See [65] for details, where non-selective measurements are implemented by unitary 'kicks' or operations involving 1-parameter families of Kraus operators. Jubb also shows that expectation values involving products of fields can be recovered using only 'possible measurements' of smeared fields and the identity.
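The arithmetic behind the two-qubit example above is easy to check numerically. The following sketch (our own illustration, not code from [14]) implements the local kick and the non-selective Lüders update, and confirms that the expectation value of 1 ⊗ σ_x on the second subsystem depends on the kick parameter γ:

```python
# Numerical check of the bipartite example: start in |0> ⊗ (|0>+|1>)/sqrt(2),
# apply the local kick U = exp(i*gamma * sigma_x ⊗ 1), perform a non-selective
# Lüders measurement of |1><1| ⊗ sigma_z, then evaluate <1 ⊗ sigma_x>.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

def expectation_after(gamma):
    psi = np.kron(ket0, plus)  # initial factorised state
    # (sigma_x ⊗ 1)^2 = 1, so exp(i*gamma*sx⊗1) = cos(gamma)*1 + i*sin(gamma)*(sx⊗1)
    U = np.cos(gamma) * np.eye(4) + 1j * np.sin(gamma) * np.kron(sx, I2)
    rho = np.outer(U @ psi, (U @ psi).conj())
    # Spectral projectors of A2 = |1><1| ⊗ sigma_z (eigenvalues 0, +1, -1);
    # note the eigenvalue-0 eigenspace is two-dimensional (degenerate).
    P0 = np.kron(np.diag([1, 0]), I2)
    Pp = np.kron(np.diag([0, 1]), np.diag([1, 0]))
    Pm = np.kron(np.diag([0, 1]), np.diag([0, 1]))
    rho_ns = sum(P @ rho @ P for P in (P0, Pp, Pm))  # non-selective Lüders update
    A3 = np.kron(I2, sx)
    return np.real(np.trace(rho_ns @ A3))

# The expectation value on subsystem 2 depends on the spacelike 'kick' parameter:
# <1 ⊗ sigma_x> = cos^2(gamma).
vals = {g: expectation_after(g) for g in (0.0, 0.3, 1.0)}
```

Running the sketch reproduces the γ-dependence reported in the text, which is exactly the signalling behaviour at issue.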

Analysis of the 'impossible measurements' reductio argument
Sorkin-type impossible measurement scenarios clearly illustrate the moral that P4 Microcausality is not by itself sufficient to rule out superluminal signalling in relativistic quantum theories. The question of how to respond to this reductio argument-i.e., how to revise the set of assumptions so that superluminal signalling is excluded for all possible measurements-will be taken up in the next subsection. First we will analyze the argument itself.
Reductio arguments perform the useful function of pinpointing a set of assumptions that lead to a problematic conclusion. In addition, the negative information they imply about issues that are not relevant to resolving the problem stated in the conclusion can be valuable. The 'impossible measurements' reductio argument is particularly valuable in this respect because it is distinct from some of the other recognized foundational issues that face QFT.
An obvious first line of response to the 'impossible measurements' reductio might seem to be to make Lüders' rule manifestly Lorentz covariant. However, this response does not address the impossible measurements problem. Lüders' rule P3(b) for a state update from t to t′ is applied to a sequence of measurements in relativistic spacetime by using the temporal ordering relation in P3(c). P3(c) imposes a temporal order for a given set of regions (such as those identified in Fig. 2), but the transitive closure operation is not Lorentz covariant. The order induced by transitive closure can be extended to a non-unique linear order, which is equivalent to the choice of a preferred hypersurface of simultaneity. However, as Sorkin notes, different choices of linear order do not affect the expectation values for any sequence of projective measurements associated with the regions, as a result of P4 Microcausality. Since the conclusion of the reductio argument concerns the expectation values for O 3, which are assigned in a Lorentz covariant way, making Lüders' rule manifestly Lorentz covariant seems unlikely to solve the problem. (Of course, adopting a manifestly Lorentz covariant alternative to Lüders' rule may well be part of the solution, as the FV measurement framework demonstrates.) Another indication that the fact that Lüders' rule is not manifestly Lorentz covariant is not the root cause of the impossible measurements problem is that making Lüders' rule Lorentz covariant is not sufficient to solve the problem. Hellwig and Kraus [60] proposed a manifestly Lorentz covariant version of Lüders' rule back in 1970. 9 Hellwig and Kraus stipulate that Lüders' rule only updates the state of the field in the causal future and causal complement of the region in which the measurement is performed; the state of the field in the causal past of the measurement region remains unchanged.
Clearly, making Lüders' rule manifestly Lorentz covariant in this way does not address the problem of Sorkin-type impossible measurements because this state update rule still applies in region O 2, which is contained in the causal future and causal complement of O 1. Sorkin himself suggests (but does not endorse) an alternative Lorentz-covariant modification, which is to restrict Lüders' state update to the causal future of a measurement region. However, this does not by itself address the impossible measurements problem, which involves measurement regions such as O 3 in Fig. 2 that are not strictly contained in either the causal future or the causal complement of O 2. Sorkin considers the restriction of allowed measurement regions to those that are strictly causally ordered (e.g., O 3 in Fig. 2 must be entirely in the causal future or entirely in the causal complement of O 2 ), but this restriction seems to lack an independent physical motivation, as we discuss in Sec. 2.4.2. 10 The upshot is that impossible measurement scenarios cannot be blamed on the fact that Lüders' rule is not manifestly Lorentz covariant. Of course, a measurement theory for QFT would ideally contain manifestly Lorentz covariant state update rules, but making Lüders' rule manifestly Lorentz covariant is not by itself sufficient to address the 'impossible measurements' reductio argument.
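Sorkin's observation that, given Microcausality, different choices of linear order do not affect expectation values can be illustrated in finite dimensions: non-selective Lüders updates built from the spectral projectors of commuting observables (e.g., A ⊗ 1 and 1 ⊗ B, mimicking operators at spacelike separation) commute as maps on states. A minimal numerical sketch of our own:

```python
# Order-independence of non-selective Lüders updates for commuting projector
# families, as a finite-dimensional stand-in for Microcausality.
import numpy as np

rng = np.random.default_rng(0)
n = 2  # local Hilbert space dimension

def projectors(side):
    # Rank-1 spectral projectors of a random Hermitian observable,
    # lifted to H1 ⊗ H2 on the given tensor factor.
    H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H = H + H.conj().T
    _, V = np.linalg.eigh(H)
    Ps = [np.outer(V[:, k], V[:, k].conj()) for k in range(n)]
    lift = (lambda P: np.kron(P, np.eye(n))) if side == 1 else (lambda P: np.kron(np.eye(n), P))
    return [lift(P) for P in Ps]

def lueders(rho, Ps):  # non-selective Lüders update
    return sum(P @ rho @ P for P in Ps)

Ps1, Ps2 = projectors(1), projectors(2)  # A ⊗ 1 and 1 ⊗ B commute
psi = rng.normal(size=n * n) + 1j * rng.normal(size=n * n)
rho = np.outer(psi, psi.conj())
rho /= np.trace(rho)

order_12 = lueders(lueders(rho, Ps1), Ps2)
order_21 = lueders(lueders(rho, Ps2), Ps1)
diff = np.max(np.abs(order_12 - order_21))  # ~0: the order of the updates is irrelevant
```

Because the two projector families commute, the composed channels agree, so any linear extension of the causal order yields the same expectation values; the impossible measurements problem arises elsewhere.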
As Sorkin also points out, the failure of collapse interpretations to be manifestly Lorentz covariant is also not the source of the impossible measurements problem: "It is often objected that the idea of state-vector reduction cannot be Lorentz-invariant, since 'collapse' will occur along different hypersurfaces in different rest-frames. However we have just seen that well-defined probability rules can be given without associating the successive collapses to any particular hypersurface. Thus the objection is unfounded to the extent that one regards the projection postulate as nothing more than a rule for computing probabilities" (p. 4). In general, proposed solutions to the Measurement Problem that offer a physical interpretation of quantum theory will not address the impossible measurements problem if they leave all of the premises of the 'impossible measurements' reductio argument intact insofar as their implications for the assignment of probabilities upon measurement are concerned. The 'impossible measurements' reductio argument relies on the uncontroversial formal recipe for extracting probabilities for measurement outcomes from NRQM; the problem is that the attempt to extend this formal recipe to the relativistic context leads to superluminal signalling. There is consensus about the irrelevance of the Measurement Problem in the responses to Sorkin-type impossible measurement scenarios that are discussed below (see p.3 of [41] and Sec. II of [54]). 11 The Measurement Problem is thus not directly relevant to the resolution of the Sorkin-type no-go result.
Interpretations of quantum theory proposed as solutions to the Measurement Problem are not relevant to addressing the 'impossible measurements' reductio argument. However, the implication could run in the other direction: resolutions of the impossible measurements reductio could have implications for the possible interpretations of relativistic quantum theory. "The" Measurement Problem is actually a collection of problems (see [76,79]), but a historically important and intuitive variant is the following: the unitary quantum dynamics (e.g., as given by the Schrödinger equation) is inconsistent with the prescription for state update after measurement (e.g., as given by Lüders' rule). Rejecting some of the premises of the 'impossible measurements' reductio argument and/or adding new assumptions to block impossible measurement scenarios may involve changes to both halves of this Measurement Problem for NRQM. That is, the representation of relativistic dynamics in QFT and the state update rules in the accompanying measurement theory for QFT could both be different from NRQM in ways that affect the form taken by the Measurement Problem in QFT. For example, the FV measurement framework for AQFT involves both the addition of an assumption about relativistic dynamics and a revised measurement theory for QFT (see Sec. 3).
Another feature of the Sorkin reductio argument is that it makes no assumptions about the state of the relativistic quantum system that is measured. This is another important piece of negative information about factors that are not relevant to ruling out impossible measurement scenarios. In particular, while Sorkin's QFT example chooses the vacuum state as the initial state, it is not necessary that the system be in the vacuum state or any other state that satisfies the assumptions of the Reeh-Schlieder theorem. In fact, it is interesting to note that the 'impossible measurements' reductio argument assumes Microcausality, but the Reeh-Schlieder theorem does not (see [21] for discussion). As a result, Sorkin's impossible measurements are not related to state-dependent phenomena such as the entanglement of a state across spacelike separated regions. Furthermore, impossible measurement scenarios are not caused by the special properties of the Type III von Neumann algebras that are ubiquitous in QFT; the 'impossible measurements' reductio argument applies in principle to von Neumann algebras of any type. 12 Of course, given that Type III von Neumann algebras are often physically relevant in QFT, the particular interpretive issues that they raise will need to be addressed, as will the implications of the Reeh-Schlieder theorem. 13 However, it is important to recognize that Sorkin-type impossible measurement scenarios raise a separate set of foundational issues. 14

Strategies for responding to the 'impossible measurements' reductio argument
We will proceed to evaluate the lessons of the 'impossible measurements' reductio argument by working under the assumption that the conclusion is genuinely unacceptable (i.e., that the expectation values of a measurement performed in one region cannot depend on which unitary operation is performed in a spacelike separated region). Responding to the reductio then requires blocking the derivation of the conclusion by rejecting one or more premises and/or adding one or more premises; the avenues of response can accordingly be distinguished by which sets of premises they reject or add.
12 Borsten et al. [14] draw attention to the fact that the projectors in the P3(b) version of Lüders' rule are not necessarily of rank one. For discussion of limitations on the extent to which it is possible to apply Lüders' rule in the context of Type III algebras see [17] and [93].
13 For example, the FV framework and detector models approach have been deployed to analyze the theoretical limitations on harvesting entanglement from Reeh-Schlieder states in [90] and [54], respectively. The Reeh-Schlieder property is also mentioned below in the context of discussions of selective measurements and properties of Type III algebras, which are not directly relevant to impossible measurement scenarios (which concern only non-selective measurements).
14 The interaction picture for representing the dynamics of QFT is used in one of Sorkin's examples and some of the literature responding to Sorkin-type impossible measurement scenarios. The interaction picture is also problematic due to Haag's theorem (see [34] for discussion). These issues will be set aside in this paper because they are not directly related to the impossible measurements problem insofar as simply adopting an acceptable alternative to the interaction picture is insufficient to block impossible measurements. Moreover, the 'impossible measurements' reductio argument relies on a different set of premises from proofs of Haag's theorem.
The most straightforward response is an ad hoc 15 one: target P1 and P3, which taken together entail that the measurable observables include all A k that can be obtained by restricting the field Φ to any region O. An ad hoc resolution of the reductio can be obtained by simply excluding any observable that can lead to superluminal signalling. Sorkin proposes (but does not endorse) restricting the regions to which observables may be assigned, for example, by imposing the restriction that measurable observables may only be defined on regions that are strictly causally ordered (i.e., for regions O j and O k, all x ∈ O j causally precede all y ∈ O k or vice versa) (p.9). As Sorkin notes, it is difficult to imagine how the possibility of performing a measurement operation could depend on spacetime in this way (see also [11]). There are presumably not 'spacetime police' to ensure that laboratory measurements are only carried out when they are strictly causally ordered.
Borsten et al. [14] propose a different ad hoc resolution of the reductio that imposes a restriction directly on the observables rather than the associated regions (see also [9,2]). They argue that the following condition rules out superluminal signalling by non-selective measurements in Sorkin-type scenarios: an operator A 2 ∈ A(O 2 ) with resolution B will not enable signalling iff it satisfies their condition (5). Again, the logic is that this condition is imposed for the purpose of excluding superluminal signalling. The condition can be enforced by 'banning' observables A 2 that do not satisfy it, or else by bringing in some notion of coarse-graining that entails a measurement resolution large enough for the criterion to be met. 16 Both options are ad hoc, as long as they are demanded only to avoid superluminal signalling, and would have to be further motivated on physical grounds.
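Although we do not reproduce condition (5) here, its import can be seen directly for the Lüders update: the post-kick, post-measurement expectation value of A 3 is Tr[UρU† Σ_b P_b A_3 P_b], so independence from the local kick U for every state requires that the 'pinched' operator Σ_b P_b A_3 P_b commute with U. The following sketch (our own check, using this derived commutator criterion rather than the exact statement of condition (5)) confirms that the degenerate two-qubit observable from the earlier example fails it:

```python
# Commutator check: signalling via a non-selective Lüders measurement of the
# degenerate observable A2 = |1><1| ⊗ sigma_z, probed by A3 = 1 ⊗ sigma_x.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

# Spectral projectors of A2 = |1><1| ⊗ sigma_z
P0 = np.kron(np.diag([1, 0]), I2)               # eigenvalue 0 (two-dimensional, degenerate)
Pp = np.kron(np.diag([0, 1]), np.diag([1, 0]))  # eigenvalue +1
Pm = np.kron(np.diag([0, 1]), np.diag([0, 1]))  # eigenvalue -1

A1 = np.kron(sx, I2)  # generator of the local kick U = exp(i*gamma*A1)
A3 = np.kron(I2, sx)  # observable probed on the far side

# 'Pinched' observable Sum_b P_b A3 P_b; for no signalling under the Lüders
# update it must commute with every local operation on subsystem 1.
pinched = sum(P @ A3 @ P for P in (P0, Pp, Pm))
comm = A1 @ pinched - pinched @ A1
violation = np.max(np.abs(comm))  # nonzero: the degenerate measurement can signal
```

Here the nondegenerate projectors annihilate σ_x entirely, so the nonzero commutator is traceable to the two-dimensional eigenvalue-0 eigenspace, in line with footnote 7's point about the role of degeneracy.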
In this paper, we focus our attention on more comprehensive, physically motivated proposals for modeling measurement in QFT that address the 'impossible measurements' reductio. However, we do not wish to diminish the value of this ad hoc approach, which is further developed by Jubb in [65]. This approach may be regarded as complementary to both the detector models approach and the FV framework. Borsten et al. emphasize that their no-signalling criterion is model-independent ("a theoretical limit on what is possible, independent of how it is attempted" (p.3)), in contrast to the detector models approach. They also argue that their condition is applicable to any state update rule, not only Lüders' rule. As we shall discuss in Sec. 4, condition (5) holds FAPP (for all practical purposes) for important examples of detector models. Borsten et al. note that their condition (5) is general enough to be applied to the Type III von Neumann algebras that are physically relevant in QFT. Condition (5) can also be compared with the results of applying the FV framework for AQFT. (See [65,42] for further discussion.)
In Sec. 3-5, we examine in depth two proposals for representing measurement in QFT: the FV framework for measurement in AQFT and the detector-based measurement theory for QFT. We have chosen to focus on these two proposals because each makes substantial, physically motivated revisions to the premises of the reductio argument. In contrast to the ad hoc resolutions, these revisions do not rule out impossible measurement scenarios automatically; non-trivial arguments are required to show that, in each of these measurement theories, non-selective measurements of Sorkin-type cannot be used to signal.
In Sec. 3, we will consider the Fewster-Verch (FV) measurement framework for AQFT. Fewster and Verch adopt a 'top down' approach that aims to treat measurements in general and quantum field systems in general. Both the quantum field system and the measurement probe are modeled using AQFT. The initial motivation for this approach was to provide a framework in which the localization properties of observables of Unruh-DeWitt detectors could be studied [42, p.5]. Subsequently the FV framework was used to address the 'impossible measurements' problem [15]. The strategy involves rejecting many of the premises of the 'impossible measurements' reductio argument as well as adding as premises axioms from AQFT. In particular, a new measurement theory for AQFT is formulated to replace much of P3. In the axiomatic context of AQFT, it is recognized that Microcausality by itself is insufficient to rule out superluminal signalling for reasons unrelated to impossible measurements (see [95,36] and the discussion in Sec. 3.1 below); additional axioms or assumptions are needed. An important goal of this approach is the principled one of determining which physical principles are needed to consistently represent relativistic quantum systems and the measurements performed on them.
In contrast, the detector models approach that is examined in Sec. 4 adopts a 'bottom up' strategy of constructing models for different types of detectors (e.g., an Unruh-DeWitt detector), each of which represents the interaction between the detector and a quantum field system. A main plank of this strategy is to model the detector using NRQM and (as far as possible) the measurement theory set out in P3 (including Lüders' rule for ideal measurements). However, the detector models approach rejects Sorkin's assumption that the measurement theory set out in P3 can be applied directly to the quantum field system. The primary goal of this approach is the pragmatic one of obtaining practically applicable models of realistic detectors, typically used in quantum optics and quantum information. We offer an in-depth comparison of the detector models approach and the FV framework in Sec. 5.
Clearly, the ad hoc, FV, and detector models approaches are not the only possible approaches to addressing the 'impossible measurements' reductio; the reductio functions as a useful heuristic for suggesting alternative approaches to formulating a measurement theory for QFT. Different approaches might involve rejecting or revising other premises and/or adding premises that reflect missing relativistic and/or quantum principles. The open-ended nature of the project means that our review will not be comprehensive, but we will discuss Sorkin's own preferred response to the 'impossible measurements' problem, which is to shift to a histories-based formulation of QFT. In Sec. 6, we review recent progress on a histories-inspired formulation of QFT and the remaining challenges to resolving the 'impossible measurements' problem in this framework. Another example of an approach to formulating a measurement theory for QFT is the positive formalism proposed in Oeckl [80], which adopts the strategy of abstracting an operational framework based on probes and composition from non-relativistic quantum mechanics and then developing a concrete implementation for QFT. The resolution of the 'impossible measurements' problem in this framework is the subject of current research.

Principled approach: Fewster-Verch framework for measurement in AQFT
The Fewster-Verch (FV) framework [41,42] adopts a 'top down' strategy for formulating a measurement theory for QFT: first, general principles for QFT in the algebraic framework are posited, and then a compatible measurement theory is devised. Sec. 3.1 draws on philosophical analysis of AQFT by Earman and Valente [36] and Ruetsche [92] to trace the implications of the physical principles that Fewster and Verch adopt as axioms. Our aim is to foster an appreciation for the role that these principles play in ruling out Sorkin-type impossible measurement scenarios and also in informing the measurement theory for AQFT introduced by Fewster and Verch.
A set of axioms for AQFT is chosen that represents both quantum and relativistic principles. For an introduction to AQFT, see [39] or [92]. Our discussion will focus on the relevant details of Fewster and Verch's version of AQFT. Fewster and Verch formulate their axioms for AQFT on globally hyperbolic spacetime. 18 As a result, features that are particular to the special context of Minkowski spacetime are not included in the axioms. Fewster and Verch prefer a set of axioms that is inspired by the locally covariant approach to AQFT proposed by Brunetti, Fredenhagen and Verch in [16]. Their main motivation for considering globally hyperbolic spacetimes is, of course, to open up the possibility of incorporating gravity. Fewster and Verch's axioms are devised to apply to a collection of globally hyperbolic spacetimes; however, only the special case of a single globally hyperbolic spacetime is needed for representing 'impossible measurement' scenarios.
17 In a *-algebra, an element is positive if it is a finite convex combination of elements of the form A*A.
18 A spacetime is globally hyperbolic if and only if it has no closed causal curves and the causal hull of any compact set is compact. The causal hull of a subset S of spacetime M is the intersection of its causal past and future, J^+(S) ∩ J^−(S) [41].
The FV axiomatization liberalizes traditional algebraic QFT in one more respect: the algebras A are taken to include not only self-adjoint operators, but also effects. This generalization is made to bring the measurement theory for AQFT in line with Quantum Measurement Theory (e.g., [18]), in which projection-valued measurements are a special case [41, p.7]. Accordingly, Fewster and Verch work with Effect Valued Measures (EVMs), which are known as Positive Operator Valued Measures (POVMs) in Quantum Measurement Theory. An EVM is a map E : χ → A from a σ-algebra χ of measurement outcomes to a *-algebra A, such that E has the properties of a measure and takes values A ∈ A such that A and 1 − A are both positive [41, p.7].
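In finite dimensions, the notion of an effect is easy to illustrate. The following sketch (a hypothetical 'unsharp σ_z' example of our own, with an assumed sharpness parameter η) exhibits a two-outcome POVM whose elements satisfy the positivity requirements on A and 1 − A without being projections:

```python
# A two-outcome 'unsharp sigma_z' POVM: E_± = (1 ± eta*sigma_z)/2.
# For eta < 1 the effects are positive but not projections.
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
eta = 0.7  # sharpness parameter (hypothetical choice, 0 < eta <= 1)

E_plus = (np.eye(2) + eta * sz) / 2
E_minus = (np.eye(2) - eta * sz) / 2

def is_effect(A):
    # An effect A satisfies: A and 1 - A are both positive operators.
    return (np.linalg.eigvalsh(A).min() >= -1e-12
            and np.linalg.eigvalsh(np.eye(2) - A).min() >= -1e-12)

ok = is_effect(E_plus) and is_effect(E_minus)
complete = np.allclose(E_plus + E_minus, np.eye(2))     # effects sum to the identity
projective = np.allclose(E_plus @ E_plus, E_plus)       # False unless eta = 1
```

At η = 1 the effects reduce to the spectral projections of σ_z, recovering projection-valued measurement as the special case the text mentions.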

The principles of AQFT and their implications
Following Bostelmann et al. [15], here are the axioms for a single globally hyperbolic spacetime (with information about the general case of a collection of spacetimes in footnotes):

(Global algebras)
The theory specifies a unital *-algebra A(M) associated with a globally hyperbolic spacetime M. 19 In this axiomatization, the global algebra A(M) is posited and (Compatibility) is used to define local algebras: associated with every causally convex 20 subset N ⊆ M is an algebra A(N) that (Compatibility) ensures is compatible with the global algebra A(M). 21 (Compatibility) and (Time-Slice Property) entail a local version of (Time-Slice Property):
19 In general, the theory can admit a collection of globally hyperbolic spacetimes M and specifies a unital *-algebra for each M in M. This opens up the possibility for comparison of a theory on different spacetimes even when one spacetime is not embeddable in the other [37].
20 A subset N is causally convex if it is equal to its causal hull J^+(N) ∩ J^−(N). For a collection of globally hyperbolic spacetimes, this becomes a compatibility requirement that A can be defined on N, taken to be a globally hyperbolic spacetime in its own right with metric and time orientation inherited from M.
21 This is a weakened form of Haag duality. See [41] for discussion.
(Local Time-Slice Property) If N 1 ⊂ N 2 and N 1 contains a Cauchy surface for N 2, then A(N 1 ) = A(N 2 ). Axioms such as these time-slice properties have a long history in AQFT. Axioms in the same family include Primitive Causality, Local Primitive Causality, and Second Causality [29].
(See [36] and [92] for discussion.) The only obvious overlap with the assumptions for the 'impossible measurements' reductio argument set out in Sec. 2 is (Microcausality). (Local Time-Slice Property) and (Time-Slice Property) are independent of (Microcausality) (i.e., there are models in which (Microcausality) is satisfied but not either of the other two axioms, and vice versa) [55]. All of the axioms posited by Fewster and Verch play a role in the FV framework, but the key axioms that Bostelmann et al. [15] invoke to demonstrate that the FV framework does not allow Sorkin-type impossible measurements to occur are (Isotony) and (Local Time-Slice Property). (Isotony) is the natural requirement that the inclusion relations among algebras reflect the spacetime relations among spacetime regions. (Local Time-Slice Property) is the one that does the nontrivial work in shielding the FV framework from Sorkin-type measurement scenarios.
To appreciate the significance of (Time-Slice Property) for ruling out superluminal signalling, we will draw on the comprehensive analysis of relativistic causality assumptions in AQFT presented in Earman and Valente [36]. Earman and Valente's main conclusion is that in Minkowski spacetime the "most direct expression of relativistic causality" in AQFT is (Local Time-Slice Property). 22 Consideration of how the FV framework uses (Local Time-Slice Property) to block Sorkin-type impossible measurements strengthens Earman and Valente's argument for the relative importance of (Local Time-Slice Property), but one does not have to accept their conclusion that (Local Time-Slice Property) is the most direct expression of relativistic causality in order to follow their analysis. Earman and Valente distinguish two aspects of relativistic causality that are relevant to our discussion: no superluminal signalling (i.e., by performing local operations on quantum fields) and no superluminal propagation of quantum fields. We will focus on their positive analysis of the relationship between (Local Time-Slice Property) and relativistic causality.
Earman and Valente argue that a dynamical axiom is needed in order to enforce relativistic causality. (Microcausality) is a kinematical axiom that imposes an independence or separability requirement (p.3). In contrast, (Time-Slice Property) concerns dynamics. As Fewster and Verch explain, "[t]he timeslice assumption is one of the lynch-pins of the structure and encodes the idea that the theory has a dynamical law, although what it is is left unspecified" [40, p.9]. That is, (Time-Slice Property) states that there exist local embedding isomorphisms α M ;N that reflect the dynamics. Axioms in this family are sometimes labelled "Existence of Dynamics" to make their role transparent [39, p.14]. Positing an axiom that imposes a dynamical constraint can exclude spacelike dependencies between expectation values in one region and unitary operations performed in a spacelike separated region by enforcing the requirement that fields cannot propagate faster than the speed of light. Intuitively, if the fields cannot propagate faster than the speed of light, then the effects of local operations on the fields should not be able to propagate faster than the speed of light either.
Earman and Valente [36, p.19] argue that this intuition about needing a dynamical axiom like (Time-Slice Property) to exclude superluminal signalling is supported by considering classical field theories. In classical relativistic field theories, the prohibition on superluminal field propagation is typically enforced by the field equations. More specifically, the field equations are a system of symmetric, quasi-linear, hyperbolic partial differential equations that are associated with a set of causal cones that typically 23 do not permit superluminal propagation of the field [50]. Determinism keeps the fields propagating within the causal cones. Consider the initial value problem for a system of field equations. The specification of 'initial' data on a closed subset S of points in Cauchy surface Σ picks out a unique solution of the field equations in the future and past domains of dependence of S, D(S). 24 Note that determinism is a fact about what the initial state and dynamical laws entail about future states, not an epistemic matter of what we can know or predict. As Earman and Valente explain, the prohibition on superluminal propagation "follows from a mild form of verificationism": "the local nature of determinism means that there is no way to detect the effects of any alleged superluminal propagation since once the relevant initial data on S are fixed, data at points relatively spacelike to S and to D(S) can be varied in any manner one likes (consistent with the constraints (if any) on initial data) without making any difference at all for the solution in D(S)" [36, p.19].
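Earman and Valente's point about classical field theories can be illustrated numerically. In the sketch below (our own illustration, for the 1+1-dimensional wave equation solved by a leapfrog scheme at Courant number 1, where the numerical domain of dependence coincides with the light cone), perturbing the initial data at a single point leaves the solution unchanged everywhere outside the causal future of that point:

```python
# 1+1-d wave equation u_tt = u_xx, leapfrog scheme with dt = dx (Courant number 1),
# so the numerical domain of dependence coincides with the light cone exactly.
import numpy as np

N, steps = 201, 40

def evolve(u0, u1):
    # u0, u1: the field at the first two time levels (Cauchy data).
    prev, cur = u0.copy(), u1.copy()
    for _ in range(steps - 1):
        nxt = np.zeros_like(cur)
        nxt[1:-1] = cur[2:] + cur[:-2] - prev[1:-1]  # u(t+dt,x) = u(t,x+dx) + u(t,x-dx) - u(t-dt,x)
        prev, cur = cur, nxt
    return cur

rng = np.random.default_rng(1)
u0 = rng.normal(size=N)
u1 = u0.copy()
u0b, u1b = u0.copy(), u1.copy()
j0 = 30
u0b[j0] += 5.0  # perturb the Cauchy data at a single grid point...
u1b[j0] += 5.0  # ...on both initial time levels

a, b = evolve(u0, u1), evolve(u0b, u1b)
# After `steps` time levels the perturbation can have reached at most |i - j0| < steps,
# so the two solutions agree identically outside that cone: the dynamics confines
# the influence of the changed data to the causal future of the perturbed point.
max_diff_outside_cone = np.max(np.abs((a - b)[j0 + steps:]))
max_diff_overall = np.max(np.abs(a - b))  # nonzero inside the cone
```

This is the 'mild verificationism' in discrete form: varying data at points spacelike to S makes no difference at all to the solution in D(S).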
For AQFT on Minkowski spacetime, Local Quantum Determinism is the analogue of the initial value problem for classical fields. Local Quantum Determinism: for any physical states ω and ω′, if ω|_A(O) = ω′|_A(O), then ω|_A(D(O)) = ω′|_A(D(O)), where ω|_A(O) represents the restriction of the state ω to the local algebra A(O).
(Local Time-Slice Property) ensures that Local Quantum Determinism holds when O is a causally convex subset N of M by identifying A(N) and A(D(N)). Modulo the restriction to causally convex regions, 25 (Local Time-Slice Property) is the axiom that Earman and Valente regard as the most direct expression of relativistic causality. Note that the prohibition on superluminal field propagation does not follow automatically from the quantization of a classical theory with hyperbolic field equations. The interpretation of the algebras of observables as having localization regions that include their own domain of dependence, as required by (Local Time-Slice Property), is also necessary [36, p.24]. 26 Again, determinism in the sense captured by Local Quantum Determinism is a fact about the theory, not an epistemic matter. However, epistemic considerations can provide motivation for the adoption of (Local Time-Slice Property). Bostelmann et al. [15, p.3] declare "Morally: If one knows the initial conditions of a quantum field on a Cauchy surface, then one knows the quantum field everywhere."
Before moving on to discuss the details of the FV measurement theory in the next subsection, we will pause to consider why AQFT needs its own measurement theory and how the axioms inform the representation of measurement in AQFT. Earman and Valente [36, p.14] offer a straightforward argument that Lüders' rule cannot be naïvely extended to AQFT. Apply the GNS construction using state ω to obtain a Hilbert space representation on which the algebraic state ω is an expectation-valued map. Lüders' rule for a non-selective measurement of A ∈ A(O) can be formally applied to an algebraic state ω as follows: ω′(B) = Σ_j ω(P^A_j B P^A_j), where the P^A_j are projections. However, ω′ cannot be interpreted as the state of the system after measurement: in general, ω′ differs from ω in the causal past of region O (as well as the causal future). Therefore, ω → ω′ cannot represent a physical change of state, that is, a transition from a state ω before the measurement of A to a state ω′ after the measurement of A.
23 In atypical cases the causal cones of the hyperbolic partial differential equations could differ from the null cones of the spacetime, which would in principle permit superluminal signalling [50,36].
24 Assuming that the solution of the field equations does not 'blow up' at future or past times (i.e., global existence and uniqueness conditions are satisfied) [36]. The domain of dependence D(S) of S is the set of points p such that every inextendible causal curve through p meets S [36].
In other words, the projection postulate is inapplicable; the measurement preparation assumption P3(b) of ideal measurement theory is not a legitimate assumption in the context of AQFT. Earman and Valente recognize that there is a need for the type of measurement theory for AQFT that Fewster and Verch develop: "assuming AQFT is an empirically adequate theory, there must be within the AQFT description of the combined object-measurement apparatus system...a description of measurement processes and their outcomes" [36, p.17].
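The formal Lüders update for algebraic states can be rendered in finite dimensions, where the pathology Earman and Valente identify does not arise but the structure of the rule is visible: the state is an expectation-value functional ω(B) = Tr[ρB], and the non-selective update returns another normalized state functional that generically differs from ω. A sketch of our own:

```python
# Finite-dimensional stand-in for the algebraic Lüders update
# omega'(B) = sum_j omega(P_j B P_j).
import numpy as np

rng = np.random.default_rng(2)
n = 3

# A state as an expectation-value functional omega(B) = Tr[rho B].
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
rho = np.outer(psi, psi.conj())
rho /= np.trace(rho)

# Spectral projectors of a random self-adjoint A.
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = H + H.conj().T
_, V = np.linalg.eigh(H)
Ps = [np.outer(V[:, k], V[:, k].conj()) for k in range(n)]

omega = lambda B: np.trace(rho @ B)
omega_prime = lambda B: sum(omega(P @ B @ P) for P in Ps)

normalized = np.isclose(omega_prime(np.eye(n)), 1.0)  # omega' is again a state
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = B + B.conj().T
changed = not np.isclose(omega_prime(B), omega(B))    # generically omega' != omega
```

In NRQM this change of state is read as a physical transition; the argument above is that in AQFT no such reading is available, since ω′ also differs from ω on the causal past of O.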
Given that algebraic QFT has traditionally been given an operational interpretation in terms of local measurement operations, one might wonder why it needs a separate measurement theory. In their seminal presentation of axioms for algebraic QFT, Haag and Kastler offer the following as the core of their operational interpretation: "[a]n operation in the spacetime region B corresponds to an element from [local algebra] U(B)" [57, p.851]. Fewster and Verch point out two respects in which this operational interpretation falls short of their goals for their own measurement theory. First, Haag and Kastler do not "set out how exactly one measures an observable or performs an operation within a region of spacetime" [41, p.8]. Second, they "were reluctant to interpret elements of the local algebras as observables (which they considered to arise as limits of local algebra elements)" (p.8). That is, AQFT is not connected to predictions via the interpretation of local algebras in terms of laboratory procedures; instead, the connection to predictions is still made by relating the algebras of operators to collision cross sections using asymptotic limits via Haag-Ruelle scattering theory. (See [46] for further discussion of the history of operational interpretations of AQFT.)
The axioms of AQFT inform and constrain the interpretation of the formalism. (Local Time-Slice Property) and (Isotony) carry immediate consequences for the localization of algebras of observables. Fewster and Verch emphasize that, as a consequence of these axioms, an observable can be localized in different regions (p.7). Moreover, recognition of the larger localization regions staked out by the domains of dependence of regions is not optional. Due to determinism, the enlargement of the localization region from N to D(N ) needs to be taken into account. Ruetsche [92, pp.115, 110-111] endorses this "resolute reading" of (Local Time-Slice Property) 27 as mandating the larger localization regions.
She argues that, by (Local Time-Slice Property), an algebraic state ω on A(O) is also automatically a state on A(D(O)). Appealing to states does not reduce the size of the localization region because in AQFT states inherit their localization properties from the algebras. The larger localization regions D(N) also need to be taken into account for another reason: to block Sorkin-type impossible measurement scenarios, as we shall see in Sec. 3.3.
The association of an algebra A(O) with more than one localization region (O as well as D(O)) is in tension with the traditional operational interpretation of local algebras as representing operations that can be performed in a laboratory in region O. If the localization region associated with A(O) can be expanded to include the domain of dependence D(O), then the algebra of observables will typically be associated with spatiotemporal regions outside of the lab and a duration longer than the duration of the measurement. As we shall discuss in Sec. 3.2, the FV measurement framework does not rely on local algebras of observables to represent local laboratory operations.
(Local Time-Slice Property) also precludes a natural strategy for representing dynamical evolution in a globally hyperbolic spacetime. In NRQM, there is a unique foliation of the spacetime into spacelike hypersurfaces that are parametrized by absolute time. The dynamics is represented by a unitary operator U(t) that induces time evolution by acting on either the operators (Heisenberg picture) or the states (Schrödinger picture). One might try to represent the dynamics in AQFT by proceeding by analogy with NRQM. A globally hyperbolic spacetime can be foliated into a family of spacelike Cauchy surfaces. Pick a preferred foliation {Σt} and then try to interpret the associated set of algebras A(Σt) as representing time evolution in the Heisenberg picture. However, it is not possible to do this. As Ruetsche [92, pp.110-111] argues, the root of the problem is that there is no 'set' of algebras {A(Σt)} that can be associated with a foliation of hypersurfaces {Σt}, due to (Local Time-Slice Property) (and (Isotony)): since D(Σt) = M for every Cauchy surface, A(Σt) = A(D(Σt)) = A(M) for all t, so the would-be family collapses to the single algebra A(M). The Schrödinger picture is not possible either. Furthermore, Ruetsche [92, pp.110-111] argues, it is doubly problematic to interpret the algebraic states as evolving in time. Algebraic states are time-independent; they are associated with spacetime regions indirectly, as functionals of algebras directly associated with spacetime regions. The first problem is that physically significant states such as (in Minkowski spacetime) the vacuum state ω0 are global states in the sense of being states for all of space and time. The global vacuum state ω0 on A(M) is not a state that can figure in a time evolution for a system (e.g., a transition from ω0 to some distinct state ω0′) because ω0 already generates the expectation values for the system on all of spacetime. The second problem is that, for any algebraic state ω, a time evolution cannot be introduced by restrictions of ω to a foliation of Cauchy surfaces {Σt}.
Once again, (Local Time-Slice Property) (and (Isotony)) undermine this strategy: ω|A(Σt) and ω|A(Σt′) cannot be interpreted as states at times t and t′, respectively, because A(Σt) and A(Σt′) are one and the same algebra, A(M), so the two restrictions are identical. Of course, AQFT does represent relativistic dynamics; the dynamics is encoded in the assignment of algebras to spacetime regions in accordance with the axioms, including (Time-Slice Property). Once the dynamics is represented in this manner, one can choose a foliation of (thickened) Cauchy surfaces and then infer the algebra of observables associated with each of these hypersurfaces. The point is that the dynamics is not stipulated by associating an algebra of observables with one of these (thickened) hypersurfaces and then applying a time evolution operator to determine the algebras of observables associated with the other hypersurfaces in the foliation. Instead, (Time-Slice Property) supplies local embedding isomorphisms that represent the dynamical relations among algebras defined on different spacetime regions. The FV measurement theory presented in the next section will provide an example of how dynamics can be represented in this manner. (See Adlam [1] for a general discussion of representations of dynamical laws that do not rely on time evolution.) In sum, the positive conclusion of this subsection is that, as Earman and Valente argue, there are principled reasons to expect that (when supplemented with a compatible measurement theory) the axioms for AQFT adopted by Fewster and Verch will exclude superluminal signalling. The negative conclusions are that the FV axioms place constraints on both the representation of localization and the representation of relativistic dynamics. First, an algebra A(O) is in general associated with more than one localization region.
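In display form (our summary of Ruetsche's point, using the axioms as labelled in the text):

```latex
% Each Cauchy surface \Sigma_t of a globally hyperbolic spacetime M has
% D(\Sigma_t) = M, so (Local Time-Slice Property) together with (Isotony) gives
A(\Sigma_t) \;=\; A\bigl(D(\Sigma_t)\bigr) \;=\; A(M)
\qquad \text{for every } t .
% The would-be family \{A(\Sigma_t)\} collapses to the single algebra A(M),
% and for any algebraic state \omega the restrictions coincide:
\omega|_{A(\Sigma_t)} \;=\; \omega|_{A(\Sigma_{t'})}
\qquad \text{for all } t,\, t' ,
% so neither a Heisenberg-picture nor a Schrodinger-picture time evolution
% can be read off from the foliation.
```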
Furthermore, the traditional operational interpretation of an algebra of observables in AQFT needs to be given up: A(O) cannot be straightforwardly interpreted as representing a set of operations that can (in principle) be performed in a lab in region O. 28 Second, the dynamics of the system in relativistic spacetime cannot be represented by using the usual method of choosing a preferred foliation of Cauchy surfaces with respect to which a time evolution operator is defined.

The FV measurement theory for AQFT
Fewster and Verch [41] adopt a three-pronged strategy for devising their measurement theory for AQFT. First, they introduce abstract AQFT models to represent the system and the measurement probe(s). This means that both the system and the probe(s) are assigned algebras of observables that satisfy the axioms set out in the previous subsection. A probe is coupled to a system in some region, and measurement of the probe outside of this region, when it is not coupled to the system, is used to infer properties of the target system. Second, an abstract version of scattering theory for finite regions that is adapted to AQFT is applied. Finally, the same formal approach as Quantum Measurement Theory (QMT, exemplified by Chapter 10 of [18]) is applied to this dynamical representation of the measurement process in AQFT in order to derive state update rules. While the same reasoning is applied at each step in the derivation, key differences are that abstract algebras are used instead of Hilbert spaces (and Type I von Neumann algebras) and that spacetime dependence is made explicit. 29 As a result, the FV measurement theory differs both formally and in physical interpretation from QMT.
To represent the dynamical process of measurement, Fewster and Verch formulate a sophisticated abstract version of scattering theory within AQFT. Consider a system and a single probe that are coupled in a compact region K ⊆ M and uncoupled outside of K. (See Fig. 3.) Let the algebra S represent the uncoupled system, the algebra P the uncoupled probe, and the algebra C the coupled system and probe. Fewster and Verch then adopt a scattering theoretic picture to represent the measurement interaction. The 'in' region is M−, the complement of J+(K) (i.e., of the causal future of K, including K). The 'out' region is M+, the complement of J−(K) (i.e., of the causal past of K, including K). The coupled algebra C is identified with the uncoupled system-probe algebra U = S ⊗ P 30 in M− and M+, but not in the causal hull of the coupling region K. That is, for each region L in M− or M+ there is an isomorphism χ : U(L) → C(L) that commutes with both the local embeddings α_{M;N} ⊗ β_{M;N} for U and γ_{M;N} for C that are guaranteed by (Time-Slice Property). 31 Algebraic states are defined on each of the algebras. For example, the analogue of an 'in' state in which there is no system-probe interaction is a state ω ⊗ σ over U; the corresponding state over C, which we write as ω̃σ, is an example of an actual state in which the system and probe interact.

29 Thanks to Chris Fewster for emphasizing this point.
As in scattering theory, the central object is Θ, the scattering morphism, which relates representations of the system and probe at 'early' and 'late' times. Θ is an algebraic isomorphism defined as a combination of the isomorphisms χ : U(L) → C(L) and the local embedding isomorphisms that are guaranteed by (Time-Slice Property), α_{M;N} ⊗ β_{M;N} (for the uncoupled algebra U) and γ_{M;N} (for the coupled algebra C). More precisely, the scattering morphism Θ is defined as an automorphism of U(M) by introducing advanced and retarded maps τ± : U(M) → C(M) and setting Θ = (τ−)⁻¹ ∘ τ+. Each of τ+ and τ− is built from three isomorphisms, so Θ is a composition of six isomorphisms that map the algebras as follows:

U(M) → U(M+) → C(M+) → C(M) → C(M−) → U(M−) → U(M).

Intuitively, Θ takes the uncoupled observable A ∈ U(M+) associated with a coupled observable in C(M+) at 'late' times, uses the dynamics to re-express that coupled observable as its earlier counterpart in C(M−), and then maps this observable to the corresponding uncoupled observable in U(M−).
Though inspired by conventional scattering theory, there are some important differences. First, 'early' and 'late' do not refer to asymptotic times; moreover, they do not refer to times picked out by Cauchy surfaces at all. 'Early' and 'late' refer to the entire spacetime regions M− and M+, respectively, that are outside of the causal past and the causal future, respectively, of the coupling region K. This departure from conventional scattering theory enables the FV framework to represent finite time measurement processes. A second difference is that in the FV framework the 'in' and 'out' systems are not required to be free systems [41, p.4]. In contrast, in conventional scattering theory the 'in' and 'out' systems are taken to be free; typically, 'in' and 'out' systems are represented using Fock space representations for free systems. A third difference is that Fewster and Verch's scattering theory is abstract: the scattering map Θ is an isomorphism at the algebraic level; concrete Hilbert space representations (and the GNS construction) are not used. As we have already emphasized, the dynamics is treated abstractly by positing (Time-Slice Property) to impose a dynamical constraint (the existence of algebraic isomorphisms implementing local embeddings) that any concrete dynamical model (e.g., for a specified Lagrangian) must obey. Fewster and Verch note that this is advantageous because it avoids one of the main challenges of constructive QFT: constructing an explicit dynamics for a specified interacting theory. Though, of course, the ultimate goal is to apply the FV framework to realistic measurement scenarios for interacting systems.

30 Assumption to avoid technical detail: assume the algebras have discrete topology and use the algebraic tensor product (p.8). 31 The existence of such isomorphisms can be checked for a specified interaction by constructing a model. The assumption that such isomorphisms exist is viable in general in perturbative AQFT [41, p.9].
With this end in view, the FV framework is designed to be compatible with perturbative AQFT [41, p.9]. The next step is to obtain state update rules that are adapted to the algebraic formulation of AQFT and the scattering morphism Θ. The setup and derivations closely parallel the presentation of QMT in Chapter 10 of Busch, Lahti, Pellonpää, and Ylinen [18]. Fewster and Verch first present a measurement scheme for a system observable induced by a measured probe observable and then define an instrument that represents state updates after measurements. As already noted, the algebras of observables include not only self-adjoint operators but also what FV label EVMs (effect-valued measures), which are known as POVMs in QMT. In both the FV framework and QMT, projection-valued measurements are a special class of measurements. In QMT, Lüders' rule applies to the special case of repeatable, ideal, nondegenerate measurements.
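For concreteness, Lüders' rule for a non-selective ideal measurement can be sketched in a finite-dimensional toy model (the qubit state and projectors below are our own illustrative choices, not drawn from [18]):

```python
import numpy as np

# Lüders' rule for a NON-selective ideal measurement in NRQM:
# rho -> sum_i P_i rho P_i, where {P_i} are the spectral projectors of the
# measured observable. Toy example: measuring sigma_z on a qubit in |+><+|.

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())            # |+><+|

P0 = np.diag([1.0, 0.0])                     # projector onto |0>
P1 = np.diag([0.0, 1.0])                     # projector onto |1>

rho_ns = P0 @ rho @ P0 + P1 @ rho @ P1       # non-selective Lüders update
print(np.round(rho_ns, 3))
```

The update is trace-preserving and removes the coherences between the eigenspaces of the measured observable, which is why it describes an ideal measurement whose outcome is not recorded.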
Consider the measurement scheme for the FV framework. As in QMT, a measurement scheme specifies the correspondence between probe observables and the corresponding system observables that are measured. A measurement of probe observable B (when the probe is prepared in state σ) can be interpreted as a measurement of the induced system observable 32 A = ε_σ(B), where ε_σ : P(M) → S(M). By construction [42, p.7], ω(A), which represents the expectation value of a measurement of the uncoupled system observable ε_σ(B) = A, is the same as ω̃σ(B̃), the expectation value of the corresponding coupled system-probe observable B̃ in the corresponding state ω̃σ over C. This is achieved by imposing the following condition (Eq. (3.9) in [41]):

ω̃σ(B̃) = (ω ⊗ σ)(Θ(1 ⊗ B)),

where B̃ ∈ C(M) is the observable at 'late' times that corresponds to B ∈ P(M) and ω̃σ is the state at 'late' times that corresponds to ω ⊗ σ at 'early' times. Applying this condition results in a definition of the measurement scheme for the FV measurement framework in terms of Θ:

ε_σ(B) = η_σ(Θ(1 ⊗ B)),

where η_σ : U(M) → S(M) is the map that averages out the probe factor in the state σ (so that ω(η_σ(C)) = (ω ⊗ σ)(C) for every system state ω). As in QMT, CP-instruments are introduced to describe the effect on the system of a measurement satisfying this measurement scheme. More specifically, Fewster and Verch adopt the same criterion of adequacy for the updated state as QMT (cf. [18, pp.230-1]): "[w]e would like to obtain a new system state that is conditioned on the observation of this effect, which means that the new state correctly predicts the conditional probability for the joint observation of B together with any system effect, given that B is observed." [41, p.13].
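The logic of a measurement scheme (a probe observable B inducing a system observable A) has a standard finite-dimensional counterpart in QMT that can be checked numerically. The dimensions, coupling unitary, and probe effect below are arbitrary illustrative choices of our own; the conjugation by U plays the role that the scattering morphism Θ plays in the FV framework, and averaging out the probe in its prepared state plays the role of the map from joint observables to system observables:

```python
import numpy as np

rng = np.random.default_rng(0)
dS, dP = 2, 2                                    # toy dimensions (our choice)

def haar_unitary(d, rng):
    # Random unitary from the QR decomposition of a complex Gaussian matrix.
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

U = haar_unitary(dS * dP, rng)                   # stands in for the coupling dynamics
sigma = np.diag([1.0, 0.0])                      # probe prepared in |0><0|
B = np.diag([0.0, 1.0])                          # probe effect: "pointer reads 1"

# Induced system observable: move the probe effect to the Heisenberg picture
# and average out the probe in its prepared state,
#   A = Tr_P[(1 ⊗ sigma) U^† (1 ⊗ B) U].
M = U.conj().T @ np.kron(np.eye(dS), B) @ U
A = np.einsum('ikjk->ij',
              (np.kron(np.eye(dS), sigma) @ M).reshape(dS, dP, dS, dP))

# Defining property of a measurement scheme, checked on a random system state:
# Tr[rho A] equals the probability of the probe effect after the coupling.
psi = rng.normal(size=dS) + 1j * rng.normal(size=dS)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
lhs = np.trace(rho @ A).real
rhs = np.trace(U @ np.kron(rho, sigma) @ U.conj().T
               @ np.kron(np.eye(dS), B)).real
```

The check confirms that A is a legitimate effect (Hermitian, with spectrum in [0, 1]) whose statistics in any system state reproduce the probe statistics, which is exactly what the condition (3.9) secures abstractly.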
Since 'impossible measurement' scenarios involve only non-selective measurements, we will only introduce the FV state update rule for non-selective measurements in this section. Recall also (from Sec. 2.4) that the 'impossible measurements' problem concerns how to formulate a measurement theory for QFT, not how to interpret the formalism. We will therefore discuss how the FV framework resolves the 'impossible measurements' problem in the next subsection and then offer a physical interpretation of the FV measurement framework in the following subsection, where we will also consider the selective state update rule. The FV state update rule for non-selective measurements (i.e., when there is no filtering conditional on which probe effect is observed) is

ω_ns(A) = (Θ*(ω ⊗ σ))(A ⊗ 1) = (ω ⊗ σ)(Θ(A ⊗ 1)),   (9)

where * denotes the adjoint map. 33 For comparison, the analogous instrument for QMT gives the non-selective state update rule

ρ_ns = Tr_κ[U(ρ ⊗ σ)U*],

where the pointer variable Z has a value in Ω (i.e., a non-selective measurement is performed), κ is the Hilbert space for the probe, and ρ ⊗ σ is the initial system-probe state [18, p.231]. The expression in Eq. (9) is the algebraic version of tracing out the probe. Moreover, the scattering morphism Θ represents the dynamics that the unitary time evolution operator U represents in QMT.
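The parallel between the two rules can likewise be verified in a toy Hilbert-space model: the Schrödinger-picture update (evolve the joint state, then trace out the probe) agrees with the Heisenberg-picture form in which the dynamics is applied to the observable, mirroring the structure of the FV rule with the scattering morphism realized as conjugation by a unitary. All matrices below are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
dS, dP = 2, 2

def haar_unitary(d, rng):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def trace_out_probe(T, dS, dP):
    # Partial trace over the probe factor of a (dS*dP) x (dS*dP) matrix.
    return np.einsum('ikjk->ij', T.reshape(dS, dP, dS, dP))

U = haar_unitary(dS * dP, rng)                 # coupling ("scattering") unitary
sigma = np.diag([1.0, 0.0])                    # probe preparation
rho = np.array([[0.7, 0.2], [0.2, 0.3]])       # system preparation

# Schrödinger picture: evolve the joint state, then trace out the probe.
rho_ns = trace_out_probe(U @ np.kron(rho, sigma) @ U.conj().T, dS, dP)

# Heisenberg picture: apply the dynamics to the observable instead,
# the toy analogue of omega_ns(A) = (omega ⊗ sigma)(Θ(A ⊗ 1)).
A = np.diag([1.0, -1.0])                       # a system observable (sigma_z)
lhs = np.trace(rho_ns @ A).real
rhs = np.trace(np.kron(rho, sigma)
               @ U.conj().T @ np.kron(A, np.eye(dP)) @ U).real
```

The equality of `lhs` and `rhs` is the finite-dimensional shadow of the claim in the text that Eq. (9) is the algebraic version of tracing out the probe.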
We can now appreciate how the FV measurement framework represents local operations and dynamics in a manner that complies with the negative morals noted in Sec. 3.1. (Time-Slice Property) and (Isotony) imply that an algebra of observables A(N) can be localized in any region in the domain of dependence of N. The FV framework squares this with the fact that experiments happen in local labs by explicitly introducing a representation for the probe and a region K in which the system and probe interact. A coupled system-probe algebra C is assigned to the causal hull of this region K, but this algebra can also be localized in the larger region D(K). The localization of operations performed in the lab is instead reflected in the assumption that the algebras C and U = S ⊗ P can only be identified outside of K, where and when the system and probe are uncoupled. Second, the dynamics of the system, probe, and coupled system-probe are not represented by choosing a foliation of Cauchy surfaces with respect to which to define a unitary operator to represent the time evolution. The FV framework demonstrates that the local embedding isomorphisms underwritten by (Time-Slice Property) suffice to represent not only the dynamics of the system, but also the dynamics of the system-probe measurement interaction. For the latter, the scattering morphism Θ is the key.

How the FV framework blocks Sorkin-type impossible measurement scenarios
Bostelmann, Fewster, and Ruep [15] prove that the application of the FV framework to Borsten et al.'s [14] 'impossible measurement' scenario does not allow superluminal signalling. That is, the expectation value of a non-selective measurement in O3 is independent of which unitary 'kick' is implemented in K1 (see Fig. 4). K1 is the region in which the interaction occurs between the system S and the first probe P1, and K2 is the compact region in which the interaction occurs between the system S and the second probe P2. Outside of the causal hulls of K1 and K2 the uncoupled algebra U = S ⊗ P1 ⊗ P2 is isomorphic to the coupled algebra C. O3 is the region in which the superluminal signal is allegedly received. Two scattering morphisms are introduced: Θ̂1 for the first measurement in K1 and Θ̂2 for the second measurement in K2. Θ̂1 is defined by extending the scattering morphism Θ1 : S ⊗ P1 → S ⊗ P1 to U = S ⊗ P1 ⊗ P2, and similarly for Θ̂2.
Bostelmann et al. [15] prove that the expectation value for any system observable C in S(O3) is independent of which non-selective measurement is performed in K1. Recall that in the FV framework the state update rule for non-selective measurement of system observable A is

ω_ns(A) = (ω ⊗ σ)(Θ(A ⊗ 1)).

Applying this update rule to the measurement scenario in Fig. 4, the updated state of the system (for a system observable C ∈ S(O3)) conditional on the specified non-selective measurement operations performed in K1 and K2 (with probe preparation states σ1 and σ2, respectively) is

ω_ns(C) = (ω ⊗ σ1 ⊗ σ2)((Θ̂1 ∘ Θ̂2)(C ⊗ 1 ⊗ 1)).

Consequently, the result that the expectation value for the measurement of any system observable C in S(O3) is independent of the measurement performed in K1 can be established by proving that the following algebraic equation involving the scattering morphisms holds:

(Θ̂1 ∘ Θ̂2)(C ⊗ 1 ⊗ 1) = Θ̂2(C ⊗ 1 ⊗ 1) for all C ∈ S(O3).   (11)

Essentially, both theorems (the two localization properties of scattering morphisms used below) are established by using (Time-Slice Property) and (Compatibility) (which, recall, entail (Local Time-Slice Property)) to show that the scattering morphism preserves the localization properties of the local algebras on which it acts (see Appendix 1 of [41]). The proofs involve tracing the localization properties of algebras mapped by each of the two types of component morphisms of the scattering morphism: the isomorphisms χ± between the uncoupled algebra U and the coupled algebra C in the regions M± and the local embedding morphisms γ for C and α ⊗ β for U that are guaranteed by (Time-Slice Property).
Bostelmann et al.'s proof that Eq. (11) holds has two main parts. They first establish that there exists a Cauchy surface Σ for M1+ that is in the causal future of K1 and the causal pasts of both K2 and O3 (see Fig. 4). The second step is to use the resulting information about domains of dependence to apply the two scattering properties. The key is that region O3 is in the domain of dependence of the region K1⊥ ∩ M2−. By the second property of scattering morphisms, applying Θ̂2 to U(O3) = (S ⊗ P1 ⊗ P2)(O3) maps this algebra into the algebra associated with K1⊥ ∩ M2−. As a result of the first property, applying Θ̂1 ∘ Θ̂2 then gives the same result as applying Θ̂2 because the localization region K1⊥ ∩ M2− is in the causal complement of K1. Therefore, any interactions between probe 1 and the system in K1 do not affect the expectation values of system observables localized in O3.
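The kinematic core of this localization argument can be illustrated in ordinary quantum mechanics: once the probe is traced out, a non-selective operation confined to the degrees of freedom at K1 cannot shift expectation values for a causally disconnected degree of freedom, even when the two are entangled. The qubit model below is our own sketch of this point only; it does not capture the relativistic dynamics (the domain-of-dependence reasoning) that [15] needs for the full result:

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(d, rng):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Pure joint state: a Bell pair linking a degree of freedom in K1 ("a")
# to one in O3 ("c"), plus a probe "p" prepared in |0>.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
psi = np.kron(bell, np.array([1.0, 0.0])).reshape(2, 2, 2)  # indices (a, c, p)

def rho_at_O3(kick):
    # `kick` is any 4x4 unitary on a ⊗ p (coupling plus optional unitary
    # "kick"); tracing out the probe makes it a non-selective operation
    # localized at K1. It never touches "c".
    K = kick.reshape(2, 2, 2, 2)                      # indices (a, p, a', p')
    out = np.einsum('apbq,bcq->acp', K, psi)          # act on the (a, p) legs
    return np.einsum('acp,adp->cd', out, out.conj())  # trace out a and p

rho1 = rho_at_O3(haar_unitary(4, rng))   # one choice of operation at K1
rho2 = rho_at_O3(haar_unitary(4, rng))   # a different choice

# Expectation values at O3 are identical: no signalling from K1.
same = np.allclose(rho1, rho2)
print(same)
```

What the relativistic proof adds is precisely that the scattering morphisms preserve this kind of localization once the dynamics is switched on, which is where (Time-Slice Property) and the Cauchy surface construction do their work.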
As we have emphasized, postulation of (Local Time-Slice Property) is the main reason that the FV framework does not allow Sorkin-type impossible measurements. In terms of the reductio argument in Sec. 2.1, Fewster and Verch's resolution involves adding this principle to the listed set of premises. Of course, the FV framework also adds other axioms for AQFT and a formulation of measurement theory that is suited to relativistic QFT. With the introduction of new state update rules, Lüders' rule for non-selective measurement is not applied to obtain the updated state for the system in O 3 . Instead, the state update rule for non-selective measurement in the FV framework (Eq. (9)) is applied. An obvious difference between the rules is that Lüders' rule is applied in a concrete Hilbert space representation and the FV state update rule is formulated in abstract algebraic terms. More specifically, the salient difference between the two rules is that Lüders' rule invokes (a sum of) projectors while the FV state update rule for non-selective measurement invokes Θ * , which is (the adjoint of) a composite of algebraic isomorphisms. Recall that the algebraic isomorphisms composing Θ are of two types: dynamical isomorphisms underwritten by (Time-Slice Property) and isomorphisms between algebras U and C in M + and M − . The main point is that the FV state update rule for non-selective measurement depends on the algebraic dynamics; it does not depend on the action of operators in the algebra. The physical interpretation of this state update rule will be taken up in the next section. Fewster and Verch [42, p.5] also draw the following moral from Sorkin-type impossible measurements: "the fact that a unitary operator is localisable in some region O does not imply that it induces an operation that can be physically performed within O. 
This should not be a surprise: for instance, the Lagrangians that describe local fields with local interactions constitute a very specific (and small) subset of all possible Lagrangian field theories." 34 One might worry that this result does not fully address Sorkin-type impossible measurement scenarios. Bostelmann et al. [15] prove that when the FV measurement framework is applied, all system observables in O3 are independent of which non-selective measurement is performed in K1, but this is not reassuring if there are physical system observables that are not measurable by any probe representable in the FV measurement framework. Fewster, Jubb, and Ruep [43] address this issue by proving that, for the case of a real scalar field theory, every local system observable can be asymptotically measured by some collection of probes in the FV framework. That is, for every local system observable A ∈ S(N) for N ⊆ M, there is a family of probes and FV measurement schemes (with couplings Cα, probe preparation states σα, and probe observables Bα) such that the induced system observables ε_σα(Bα) converge to A 35 (implying that A can be measured to arbitrary precision). They expect that with "merely mild technical effort" their existence proof for asymptotic measurement schemes could be extended to multiple real scalar fields, Wick powers of real scalar fields, and other types of fields [43, pp.24, 25]. van der Lugt [108] approaches the problem of showing that physical observables are measurable within the FV framework from the perspective of standard non-relativistic quantum information theory. He introduces a 'hybrid model' that implements the FV framework using the simpler Hilbert space models of NRQM (i.e., Type I von Neumann algebras with a natural tensor product structure) and then uses results from quantum information theory to show that, in this hybrid model, every operation that does not permit superluminal signalling can be implemented by a measurement scheme in the FV framework.

Physical interpretation of the state update rules in the FV measurement framework
How should we physically interpret the FV state update rules? More specifically, can the state update rules be interpreted as representing physical processes, or are they merely epistemic in the sense that they are only calculational devices that allow us to derive probabilistic predictions for measurement outcomes? In this subsection, we will go beyond the context that is strictly relevant to the Sorkin-type impossible measurement scenarios and consider selective as well as non-selective measurements. To foreshadow, we will argue that the FV framework can be interpreted as representing the physical process of measurement. However, there are two fundamental respects in which the interpretation of the FV state update rules differs from the interpretation of their counterparts in QMT: the states ω and ω′ are counterfactual states (not actual states of the system) and for selective measurements there is no region of spacetime in which a transition from ω to ω′ must occur. Our interpretation of the FV framework in this section is compatible with many of the brief interpretative remarks offered by Fewster and Verch, but we do disagree with a few of their interpretative comments.

34 In response to a version of the 'impossible measurements' problem, Earman and Valente [36, p.14] also question the assumption that all unitary operators in a local algebra A(O) correspond to unitary evolutions in O. 35 The proof demonstrates convergence in the strong operator topology of the GNS representation associated with any quasi-free state with distributional two-point function.
The state update rules are expressed in terms of the scattering morphism Θ. The sequence of morphisms that compose Θ* (see the list on p.25) suggests the following intuitive, chronological interpretation of this scattering theory for the states, ordered from the past to the future:

ω ⊗ σ → ω̃σ → ν,   (12)

where ω̃σ is the corresponding state over C. Of course, Θ is actually an isomorphism, but roughly speaking the morphisms vicariously map, via the algebras, the prepared state ω ⊗ σ to the state ω̃σ of C in the causal hull of the system-probe interaction region, and then to the final system-probe state ν. The fact that ω ⊗ σ is a product state and ν is not is a reflection of a time-asymmetry that is built into the measurement scheme: we assume that the prepared system and probe states are uncorrelated and that the measurement interaction correlates the system-probe states [41, p.10]. How should we interpret the arrows in (12)? Clearly, they cannot represent the time evolution of the actual system because, as we have been emphasizing, the dynamical isomorphisms represent local embeddings and not time evolution (of either states or operators). Furthermore, these three states are not even defined on the same algebra: ω ⊗ σ and ν are defined on U, and ω̃σ is defined on C.
The interpretation of these states depends on the interpretation of the associated algebras. The actual world is represented by the coupled algebra C(M). The uncoupled algebra U(M) is best interpreted, as Fewster and Verch [41, p.8] suggest, as representing "the counterfactual world in which the interaction does not occur." Fewster and Verch [41, p.8] also describe U(M) as representing a "control situation." Accordingly, the actual state of the system-probe is given by ω̃σ on C(M). ω̃σ on C(M) is a global state in the sense that this state represents the entire actual history of the combined system-probe system. Likewise, ω ⊗ σ and ν over U(M) are each counterfactual global states.
The interpretations of ω̃σ, ω ⊗ σ, and ν differ from the interpretations of their counterparts in both QMT and conventional scattering theory for QFT. In QMT, the initial state ρ ⊗ σ is taken to be the actual system-probe state, not a counterfactual state like ω ⊗ σ, and U(t) represents the actual time evolution of the system-probe (setting aside the question of what happens upon measurement). In conventional scattering theory in particle physics, the measurement probe is not included in the representation, but there is a similar contrast between the status of the initial and final states: in conventional scattering theory, the initial and final states are also typically taken to represent the actual states of the system at asymptotically early and late times, not counterfactual states.
Consider the non-selective state update rules. Like the states ω ⊗ σ and ν from which they are derived by tracing out the probe, ω and ω_ns are counterfactual states. As a result, the state update ω → ω_ns cannot be interpreted as representing a physical change of the actual state of the system from ω to ω_ns. ω and ω_ns are the appropriate states to assign in M− and M+, respectively, insofar as they predict the correct pre- and post-measurement conditional probabilities. 36 Fewster and Verch [42, p.7] describe state updates as "an exercise in bookkeeping" that "provides an effective description of a physical process." Is the FV measurement framework then merely a calculational device for deriving predictions, or does it also describe the physical process of measurement? While the state update rules are a calculational device, the FV framework is also equipped with the resources to describe actual physical processes. The state update rules are derived using ω̃σ (over C), which represents the actual state and (presumably) actual physical changes in the values of the physical quantities that are associated with local regions.
Consideration of selective measurement (i.e., measurement given that probe effect B is observed) yields two additional arguments against interpreting the state update rules as representing physical changes of state. These arguments concern how state updates are associated with spacetime regions, not the counterfactual versus actual status of the states. First, Fewster and Verch [41, p.16] argue for the reasonable conclusion that, since ω_s(A) = ω(A) for A in either the causal complement or the causal past of K, "there seems to be no purpose in envisaging a transition from ω to ω_s occurring along or near some surface in spacetime (whether a constant time surface as in non-relativistically inspired accounts of measurement, or e.g., along the backward light cone of the interaction region as in the proposal of Hellwig and Kraus [60], or an earlier proposal of Schlieder)." Second, and more compellingly, Fewster and Verch's conclusion that there is no reason to assume that an evolution from ω to ω_s occurs in any region of spacetime is supported by a theorem about successive selective measurements [41, Corollary 6]. Consider two probes in interaction regions K1 and K2 that are causally orderable (i.e., K2 ∩ J−(K1) = ∅). (See Fig. 4 for an example; however, unlike the Borsten et al. [14] measurement scenario, these measurements will be selective.) Assume that the causal factorization property holds: Θ = Θ̂1 ∘ Θ̂2, where Θ̂1 and Θ̂2 are defined as above. This is a natural assumption, but it can also be verified for concrete models of system-probe interactions. 37 Fewster and Verch show that application of the selective rule for state update produces the same result regardless of whether the interactions are modeled as successive selective measurements in regions K1 and K2 or as a single selective measurement in the region K1 ∪ K2 in which probe effects B1 and B2 are both observed. More precisely, for probe preparation states σ1 and σ2 and probe effects B1 and B2, let ω1 be the state conditioned on B1 being observed in initial state ω, ω12 be the state conditioned on B2 being observed in state ω1, and ω(12) be the state conditioned on B1 ⊗ B2 being observed in initial state ω. Then ω12 = ω(12). 38 Fewster and Verch [41, p.17] "emphasize that we have not needed to invoke any reduction of the state across geometric regions." They consider this corollary (and the accompanying theorem for the corresponding pre-instruments) to be the "core result" of their article [41, p.27]. The upshot is that the selective state update rule in the FV framework "renders moot the discussion of where and when a state change of the system takes place as a consequence of measurement" [41, p.27]. In other words, in QMT the selective state update rule can be interpreted literally as representing a measurement-induced collapse that occurs somewhere in the world. Even if one believes that this is not a compelling interpretation of NRQM, it is an admissible interpretation of the formalism. In contrast, the FV selective state update rule does not admit a literal interpretation as representing a measurement-induced collapse that occurs somewhere in the world.

36 Regions M− and M+ overlap in K⊥. For non-selective measurements, ω(A) = ω_ns(A) for all A localizable in K⊥, as one would expect. However, for selective measurements, this is not the case, as we discuss below. 37 And assuming that the system obeys Huygens' principle.
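The causally disjoint part of Corollary 6 has a simple NRQM shadow that can be checked directly: for effects acting on disjoint tensor factors, successive Lüders conditioning and joint conditioning yield the same updated state. The state and projectors below are our own toy choices, and the demonstration covers only the commuting (causally disjoint) case, not the strictly causally ordered case that gives Corollary 6 its distinctively relativistic content:

```python
import numpy as np

# Successive vs. joint selective (Lüders) updates for effects in disjoint
# tensor factors: a NRQM shadow of Corollary 6 for causally disjoint K1, K2.

psi = np.array([1.0, 1.0, 1.0, 0.0]) / np.sqrt(3)   # an entangled two-qubit state
rho = np.outer(psi, psi.conj())

P = np.diag([1.0, 0.0])                              # projector |0><0|
B1 = np.kron(P, np.eye(2))                           # effect localized in "K1"
B2 = np.kron(np.eye(2), P)                           # effect localized in "K2"

def lueders(rho, B):
    # Selective update conditioned on effect B (here a projector).
    new = B @ rho @ B
    return new / np.trace(new)

omega_12_successive = lueders(lueders(rho, B1), B2)  # condition on B1, then B2
omega_12_joint = lueders(rho, B1 @ B2)               # condition on B1 ⊗ B2 at once

same = np.allclose(omega_12_successive, omega_12_joint)
print(same)
```

As the text explains, what has no QMT analogue is the causally ordered case: there, the time evolution operator U intervenes between successive QMT updates, whereas the FV rule still permits trading the successive updates for a single joint one.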
Corollary 6 is a direct result of the requirement that the FV measurement theory reflect relativistic spacetime structure. As Fewster and Verch [41, p.17] note, Theorem 5 (the counterpart for instruments of Corollary 6 for state update) does not hold for non-relativistic theories such as Euclidean QFT. Essentially, what Theorem 5 and Corollary 6 establish is that the measurement theory enshrines both the symmetry between the ordering of measurement operations when K1 and K2 are causally disjoint (i.e., K2 ⊂ K1⊥) and the broken symmetry between the ordering of the measurement operations when K1 and K2 are strictly causally ordered (i.e., K2 ⊂ J+(K1)). In contrast, relativistic causal ordering was a requirement that Sorkin had to put in by hand as P3(c) when applying non-relativistic quantum measurement theory to a relativistic case. Again, the contrast with QMT is also illuminating. There is no analogue of Corollary 6 for QMT because successive measurements are strictly temporally ordered: there is no alternative to representing the measurements as ω12 (i.e., with successive state updates) because the time evolution operator U appears in the state update rule.
This raises the question of whether the FV measurement framework is inherently epistemic in the sense that it only admits an interpretation in terms of the processing of information by observers, and does not admit an interpretation in terms of physical processes in the world. We have already suggested that the representation of the actual state of affairs by a state on C(M) affords the FV framework the opportunity to represent actual physical processes, and not only the counterfactual descriptions underwritten by the state update rules. There are also two respects in which the state update rules themselves are not merely epistemic: they are not observer-dependent in any significant respect and they do not need to be given a strong operationalist interpretation. An epistemic interpretation of the FV measurement framework is possible, but not required. 38 Assuming that B_1 has nonzero probability of being observed in ω and B_2 has nonzero probability of being observed in ω_1.
Some remarks by Fewster and Verch suggest that state update has an observer-dependent interpretation: for example, in the context of selective measurements, "there seems no reason to invoke a physical process of state reduction occurring at points or surfaces in spacetime, rather, the updated state reflects the observer's filtering of the system by conditioning on measurement outcomes" [41, p.16]. However, there is nothing especially observer-dependent about the state update rules. An updated state ω′ (minimally) serves the purpose of generating conditional probabilities (i.e., conditional on a specified probe effect). Of course, conditional probabilities can be calculated for conditions that are not known by an observer to obtain, or even for conditions that are known by an observer not to obtain. Moreover, the updated state ω′ is spacetime-independent. This is trivially true because the algebras of observables, and not the states, are associated with regions of spacetime. More substantively, when states are associated with spacetime regions via the algebras, ω′ can be defined over U(M); the state is associated with all of spacetime, so it is clearly not dependent on the location of the observer. Of course, ω′ is the correct state to assign to the system only in M_+ insofar as it only gives the correct conditional probabilities for measurements performed in this region. But this is not in any way observer-dependent: M_+ is picked out by the system-probe interaction region K. M_+ is not defined with respect to the observer's location in spacetime, including with respect to where the observer chooses to perform a measurement on the probe to obtain information about the value of an induced system observable.
As just noted, a selective state update can yield an updated state ω_s(A) that differs from the prepared state ω(A) in the causal past of K. Fewster and Verch [41, p.16] endorse Hellwig and Kraus' [60] strongly operationalist interpretation of this fact: "whether the state actually remains unchanged or not in the past of the coupling region is a 'pure convention' with no operational significance as the region is no longer accessible to further experiment." While it is true that the observer in the causal future of K who assigns state ω_s(A) cannot experimentally test the predictions of this state assignment in M_−, it is not necessary to adopt this strong version of operationalism in order to defend the state assignment. The counterfactual interpretation of the states ω and ω_s defended above also permits observers in M_− and M_+ to adopt different global states. Both ω and ω_s are counterfactual states that represent states of affairs in possible worlds. It is not meaningful to ask which of these states represents the true state of the actual world in any region of spacetime. (A state on C(M) is the true state of the actual world.) The state update rules perform their function of predicting expectation values for measurements by supplying counterfactual states that are only appropriate to use for calculational purposes in suitable spacetime regions (e.g., before another measurement of the system is made).
Finally, the FV measurement framework is an ongoing research program, so there are outstanding formal and interpretative questions. How the formalism gets developed will have consequences for how the interpretation of the formalism discussed in this section can be elaborated. For example, one formal development is the recent extension of the FV measurement framework from compact probe-system coupling regions to non-compact ones for the case of a quantized real linear scalar field in [43]. Some abstract *-algebraic models have been constructed for particular types of field systems, probes, and interactions between them (e.g., for the simple case in which field and probe are modeled using free (massive or massless) scalar fields and are linearly coupled in [41]). Another formal project is to implement the FV measurement framework using C*-algebras or concrete von Neumann algebras. [43] applies the FV framework to C*-algebras. Implementations using von Neumann algebras have not yet been explicitly constructed, but [43, p.28, fn. 18] contains an argument that, for quasi-free Hadamard states, the asymptotic measurement scheme that is defined there is implementable on the von Neumann algebra. (See Ruep [91] for more detail.) 39 On the interpretative side, Fewster and Verch [42, p.13] note that there are outstanding questions about how to interpret superpositions and transition probabilities in the algebraic framework because these concepts are native to algebras of bounded operators on separable Hilbert spaces, which are Type I von Neumann algebras. Local algebras in AQFT are typically Type III von Neumann algebras. As Ruetsche and Earman [93] explain, this creates problems for justifying the core interpretive assumption that quantum states represent probabilities of something (e.g., measurement outcomes or hidden variables).
In quantum mechanics with a finite number of degrees of freedom, the standard justification for assuming that states represent probabilities of measurement outcomes relies on the existence of atoms (i.e., minimal projectors). The Type III von Neumann algebras that are characteristic of local QFT do not contain atoms. Ruetsche and Earman [93] consider several strategies for extending the standard argument that states represent probabilities of either events (e.g., measurement outcomes) or value states (e.g., hidden variables) to Type III von Neumann algebras, including a version of the 'funnel' proposal suggested in Fewster and Verch [42, p.13], but find all of the arguments deficient. They conclude that justifying the interpretation of states as representing probabilities of either events or value states is an open problem for Type III von Neumann algebras.
In summary, the FV measurement framework begins by positing a set of axioms for AQFT. These physical principles inform the measurement theory that FV introduce. This measurement theory is also constrained by the fact that it is set up in a way analogous to QMT from NRQM, and by the use of a scattering-theoretic framework to model the measurement process. When Sorkin-type 'impossible measurement' scenarios are modeled in the FV framework, they do not lead to superluminal signalling. The FV measurement theory for AQFT has state update rules that take a different form and have a different interpretation from QMT. In particular, the 'in' and 'out' algebraic states ω and ω_s that feature in the state update rules are best interpreted as counterfactual states. The actual state of the system is represented by a state on C(M). Moreover, the state update rules cannot be literally interpreted as representing a physical change of state upon measurement that occurs in some region of spacetime.

The use of Unruh-DeWitt-type detector models in RQI
Sorkin-type examples of impossible measurements indicate that it is problematic to naïvely extend ideal measurement theory from NRQM within a minimal framework for relativistic quantum theory. From a practical point of view, this is distressing because measurement is obviously central to the use of the theory. This is an especially pressing concern in relativistic quantum information (RQI), which is concerned with the theoretical and experimental treatment of finite-time processes and realistic detectors. This makes it appealing to hang on to ideal measurement theory and to give up one of the other assumptions in the reductio argument instead. Detector models implement this strategy by introducing non-relativistic quantum mechanical detector systems and coupling them to relativistic quantum fields represented using QFT. The relativistic quantum fields are not 'directly' measurable, but can only be measured indirectly through their dynamical coupling to a controllable detector system, or probe. 40 This von Neumann-like approach involves modeling the measurement as a dynamical process, where the quantum field and the probe are suitably coupled for a finite (or possibly infinite) time, the measurement duration. After the coupling has been switched off (or becomes negligible), the probe can be directly measured, and quantum measurement theory is applied to the detector system. The outcomes can be translated into statements about the quantum field, at least in principle. In this sense, the quantum field is measured by the detector system. If the detector-field coupling is local then the detector can be thought of as a local probe that is locally measuring the quantum field. If we want to retain ideal measurement theory from NRQM, this dynamical understanding of the measurement process may be the only feasible option in Minkowski spacetime, given Sorkin's no-go result.

39 Thanks to Maximilian Ruep for helpful correspondence about this.
Detectors are typically defined as controllable localised systems that locally probe the quantum field. The requirement that the detector system is controllable is more naturally fulfilled if the detector is chosen to be a non-relativistic system. This means that it is well-described by non-relativistic quantum mechanics and the measurement theory that comes with it. Crucially, we can consider projective measurements over the detector system with the usual Lüders state update rule and probability assignments that correspond to each possible outcome. This is an advantage because the notion of a measurement outcome that is associated to a finite-rank projector is typically not available in QFT, as an implication of the Reeh-Schlieder theorem and arguments that local algebras are generically Type III von Neumann algebras, which by definition do not contain finite-rank projectors [39,41,112]. It is not clear that generalizing from projectors to POVMs addresses this problem because the spectral theorem no longer holds, and therefore cannot be appealed to as support for the interpretation of the probabilities as probabilities of measurement outcomes [93]. Even though it can be a relief that the usual notion of a measurement outcome can be maintained through the introduction of a detector system, the association of detector outcomes with induced field observables is far from straightforward. Partial answers to the question 'what do detectors detect?' have been given [24,26,99,104], but a systematic account is still missing.
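For reference, the Lüders rule that is applied to the probe (a standard expression from NRQM measurement theory, stated here on the probe's Hilbert space) reads as follows: for a projective measurement {P_i} on the detector,

```latex
% Standard Lüders update for a projective measurement \{P_i\} on the
% detector's (separable) Hilbert space; the rule is applied to the probe,
% never directly to the field.
\begin{equation*}
  p_i = \operatorname{tr}\!\left(P_i \rho\right), \qquad
  \rho \;\longmapsto\; \rho_i = \frac{P_i\,\rho\,P_i}{\operatorname{tr}\!\left(P_i \rho\right)}.
\end{equation*}
```

It is precisely the availability of such finite-rank projectors P_i on the probe that has no direct analogue in the Type III local algebras of QFT.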
Originally, particle detector models were introduced to extract particle phenomenology in QFT (in curved spacetimes) related to the Unruh and Hawking effects [106,107]. Since quantum field theories do not permit a particle ontology [110,44], this motivated the operational approach that 'a particle is what a particle detector detects' advocated by Davies and others [106,23,44]. The Unruh-DeWitt detector model has become a paradigm example in the field of Relativistic Quantum Information (RQI). RQI was born out of the need to merge quantum information theory with relativity theory, and a core commitment of the approach is that relativistic QFT is a necessary ingredient (see e.g. [61,8]). See Peres and Terno [85] for a pioneering defense of this approach. RQI describes quantum communication through quantum fields (e.g. [19,64]) and the entanglement structure of QFT by locally coupling multiple detectors to the quantum field (e.g. [89,87]).

40 Often the terms 'detector' and 'probe' are used interchangeably, especially if it is not clear from the context whether we are modeling a macroscopic or microscopic detector coupled to the field. A microscopic quantum mechanical system (like a spin or an atom) is commonly called a 'probe' of the field, while this term is not used for explicitly macroscopic detector systems (like a superconducting qubit, see below).
In the realm of quantum information, the notion of operations performed in local regions that is informally used in the application of quantum mechanics becomes central. Detector models are introduced to extend the use of QFT from high energy physics to relatively low energy systems probed in quantum information or quantum optics [22,51]. Many variations upon the Unruh-DeWitt model 41 have been developed and applied to probe many different types of relativistic quantum systems described by QFT.

Constructing non-relativistic detector models
Like any other model, detector models are an addition to the underlying theory and, as a result, they are not a priori guaranteed to comply with its premises. Detector models raise a major concern when the underlying theory is relativistic QFT: are the predictions of the non-relativistic model respectful of relativistic causality? This is a justified concern, especially because the detector is chosen to be a non-relativistic quantum-mechanical system and, as such, alien to Minkowski spacetime. From this perspective, the non-relativistic quantum-mechanical nature of the detector seems like a serious drawback. On the other hand, thanks to its non-relativistic nature, the detector system is localizable in the usual quantum-mechanical terms. First-quantised non-relativistic systems admit a position representation, which implies that their states will be representable, and localizable, by means of their spatial wavefunction. Such a representation is known not to be available for relativistic systems [70]. Relativistic quantum fields are localized in a different sense: they are operator-valued objects that are locally defined over space and time (e.g., in AQFT by associating algebras of observables to bounded spacetime regions [56]). Since the field (relativistic) and the detector system (non-relativistic) enjoy very different notions of localization, it is first important to clarify the sense in which they can be locally coupled.
The simplest version of the Unruh-DeWitt (UDW) model involves a scalar quantum field coupled to a non-relativistic quantum system (e.g. an atom, a harmonic oscillator, or a two-level system). There have been attempts to extend the model beyond the scalar field, e.g., to spinor fields [62], but this complication is not relevant for our purposes. Also, for simplicity, we will only refer to the case of linear coupling between the detector and the field, even though more complicated couplings, e.g., quadratic, have been investigated in the literature [62,101,84]. A careful treatment of the modeling of the light-matter interaction with UDW-type detectors beyond the scalar approximation can be found in [67].
Perhaps the most well-known detector model in the literature is the pointlike UDW detector model, in which the detector is assumed to be coupled to the field along a timelike trajectory. This is prescribed by the interaction Hamiltonian that generates translations with respect to the proper time τ associated to the detector's trajectory. In the interaction picture, this Hamiltonian is given by

Ĥ_int(τ) = λ χ(τ) μ̂(τ) ⊗ φ̂(x(τ)).        (13)

Here λ is the coupling strength, χ(τ) is the switching function, which is usually assumed to be integrable, and x(τ) is the spacetime trajectory of the detector parametrized by its proper time τ. The Hamiltonian couples the field along the worldline of the detector to an internal degree of freedom of the detector, μ̂. 42 The point-like model can exhibit ultraviolet divergences related to the coincidence limit of the time-ordered n-point functions. One strategy for avoiding the divergences of the point-like model is to introduce a finite extension of the detector-field interaction through a smearing, but as we will see this introduces issues with the covariance and the causality of the model that are due to the extension of the interaction and are absent in the point-like model [75,25].
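For orientation, the leading-order response of this model is controlled by the pull-back of the field's Wightman two-point function to the trajectory. The following expression is standard in the detector literature; we state it here under the assumption of a two-level detector with energy gap Ω prepared in its ground state, with W(x, x′) the Wightman function of the field state:

```latex
% Leading-order (order \lambda^2) excitation probability of a pointlike
% two-level UDW detector with gap \Omega along the trajectory x(\tau).
\begin{equation*}
  P(\Omega) \;=\; \lambda^2 \int \mathrm{d}\tau \int \mathrm{d}\tau'\,
    \chi(\tau)\,\chi(\tau')\, e^{-\mathrm{i}\Omega(\tau-\tau')}\,
    W\!\left(x(\tau), x(\tau')\right) \;+\; \mathcal{O}(\lambda^4).
\end{equation*}
```

The distributional character of W at coincidence (τ → τ′) is the source of the ultraviolet divergences just mentioned.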

The simplest generalisation of the point-like interaction Hamiltonian (13) involves a linear coupling between the detector observable μ̂(t) and the scalar field operator φ̂(t, x):

Ĥ_int(t) = λ χ(t) μ̂(t) ⊗ ∫ d³x F(x) φ̂(t, x),
where the switching function χ(t) models the duration of the interaction between field and detector and the smearing function F(x) specifies the spatial extension of the interaction (in the proper frame of the detector system) [72,74]. The support of these functions specifies the spacetime region O over which the detector is coupled to the field, i.e., O = supp χ(t)F(x). If both the smearing and the switching functions are compactly supported, then the interaction region O is bounded. Note that the interaction region need not coincide with the (initial) localization region of the quantum-mechanical detector system. Commonly, both functions (switching and smearing) are introduced as a phenomenological input of the model, especially when the detector system is macroscopic. The smearing models the 'size' of the interaction, which in general will not coincide with the apparent size of the detector, and the switching models the mechanism for switching the interaction on and off (whenever such a mechanism is available 43 ).
It is perhaps curious that even in the case of an explicitly macroscopic detector system (e.g. in superconducting circuits [77]) the physical intuition that 'the interaction happens where the detector is' is not fulfilled. In [77] the authors investigate the model-dependence of the predictions for different smearing functions and different cut-off functions that determine 'how many' field mode functions are relevant for the detector-field interaction. The results suggest that, in this case, the real shape and size of the macroscopic detector do not affect the predictions as much as the choice of a UV cutoff. This means that one can directly model the feature of finite extension based on mathematical convenience, without worrying about how the microscopic details of the detector affect the smearing function. In other studies, where the detector is explicitly a microscopic probe (e.g. the electron of an atom coupled to the quantum electromagnetic field), the smearing function has been associated with the microscopic nature of the probe (e.g., the orbitals of a hydrogen atom interacting with the electromagnetic field in a light-matter interaction [87]). It is common to attribute the smearing to the detector (e.g. a 'smeared' detector as opposed to a point-like detector) even though, mathematically, it is more accurate to say that the field operator is smeared. Conceptually, it is preferable to attribute the smearing (the 'shape' of the interaction) to neither the detector nor the field but to their joint interaction. As McKay, Lupascu, and Martín-Martínez [77] put it, "the shape of the qubit cannot be determined just with an individual description of the qubit itself. Rather, this shape belongs neither to the qubit nor to the line but to the both of them in interaction with each other, constituting a property that becomes evident and relevant in and through interactions between the relevant quantum systems."
Overall, the choice of switching and smearing functions is a crucial input of the model that can critically affect its predictions. This choice can be motivated by the underlying (i.e., microscopic) physics, by first principles, by mathematical convenience, or even by aesthetics. In the spirit of the pragmatic approach, it is common to investigate all possible (calculable) choices without particular emphasis on motivating each one. This model-dependence poses an extra challenge when detectors are used to study universal effects like the Unruh effect [33]. On mathematical grounds, smearing functions were first introduced as a cure for the UV divergences of the point-like model [28], in which the detector interacts with the field in a point-like manner. The UV divergences of the point-like model come from the distributional character of the 'field at a point'. Concretely, the response function of a detector at leading order in perturbation theory is a function of the field's Wightman function and can be regulated in different ways through the introduction of suitable switchings and smearings [96,68]. In this literature, it is typical for the smearing to depend on a regulator (e.g., a Gaussian or Lorentzian function) introduced for the purpose of regularising the response of a point-like detector (e.g., its excitation probability), which is recovered in the limit in which the regulator is removed [96]. Without taking the limit, an infinitely extended smearing function is unphysical since it implies a 'non-local' coupling between the field and the detector over all of space.
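The regularising role of the smearing can be made concrete in a toy computation. The following sketch is our own illustration (not taken from the references): it evaluates, up to constant prefactors, the leading-order vacuum excitation probability of an inertial detector in 3+1 Minkowski space with a sharp switching window of duration T and a Gaussian spatial smearing of width σ, and shows that the response grows without bound as the regulator σ is removed.

```python
import numpy as np

# Toy numerical check (our own sketch): dropping constant prefactors,
#     P(sigma) ~ int_0^inf dk  k exp(-(k sigma)^2) |chi(k + Omega)|^2 ,
# with |chi(w)|^2 = 4 sin^2(w T / 2) / w^2 for a sharp switching window of
# duration T.  Without the smearing the integrand falls off only like 1/k
# (a logarithmic UV divergence); the Gaussian smearing acts as a regulator,
# and P grows roughly like log(1/sigma) as sigma -> 0.

def excitation_probability(sigma, omega=1.0, T=1.0, kmax_factor=40.0, n=200000):
    k = np.linspace(1e-6, kmax_factor / sigma, n)
    chi_sq = 4.0 * np.sin((k + omega) * T / 2.0) ** 2 / (k + omega) ** 2
    integrand = k * np.exp(-(k * sigma) ** 2) * chi_sq
    return float(np.sum(integrand) * (k[1] - k[0]))  # simple Riemann sum

for sigma in (0.1, 0.01, 0.001):
    print(sigma, excitation_probability(sigma))  # grows as sigma shrinks
```

The monotonic growth with shrinking σ is the numerical signature of the logarithmic divergence that the smearing regulates.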
Finally, let us consider the 'covariant' generalisation of the Unruh-DeWitt interaction Hamiltonian [74,75], where the switching and the smearing come together to form a spacetime smearing function Λ(x), e.g., Ĥ_int(t) = λ ∫ dV Λ(x) μ̂(τ(x)) ⊗ φ̂(x). In this interaction Hamiltonian both the field and the detector operator are 'smeared' by Λ, in the sense that the detector inherits spatial dependence through its proper time τ = τ(x) in a general reference frame with coordinates x. For example, if we are considering Lorentz boosts in Minkowski spacetime, μ̂(τ) = μ̂(γ(t − vx)) in a boosted frame with coordinates (t, x). The detector observable μ̂ is only time- (and not space-) dependent in its proper frame, where the spacetime smearing function factorizes as Λ(x) = χ(τ)F(x) in terms of the switching and the smearing functions. The time duration and the spatial extension of the interaction can only be defined separately in the detector's proper frame (e.g. Fermi normal coordinates in curved spacetime [75]), while they mix in a general reference frame [72,74,83]. This 'covariant' form of the interaction Hamiltonian was proposed in [74] for a consistent description of detector physics in curved spacetimes, even though the model fails to be fully covariant due to the non-relativistic nature of the detector [75].
To give a definition of what we mean by a non-relativistic detector model, it will be useful to write the interaction Hamiltonian (density) in the following general form that was introduced in [25] (see also [6]):

ĥ_int(x) = Λ(x) Ĵ(x) ⊗ φ̂(x),        (15)

where Ĵ(x) is a current operator that is associated with the detector. This form of the interaction Hamiltonian covers the zoo of detector models that one finds in the contemporary literature (see [27]). In principle, the detector current (through which the particle detector couples to the field) could be derived using an effective field theory approach (e.g. [6,105,52]). We say that the detector system is non-relativistic if the detector current is not microcausal over the extension of the interaction region, i.e., if

[Ĵ(x), Ĵ(x′)] ≠ 0 for some spacelike separated x, x′ ∈ O.

In general, the microcausality condition will not be satisfied by the current operators when one considers spacelike separated points within the extension of the interaction region O = supp Λ, due to the non-relativistic dynamics of the detector system [25]. This observation will become important when analysing the frictions with relativistic causality in the following subsections. 44

Due to the coupling of a non-relativistic detector model to a relativistic system, it is not a priori guaranteed that the resulting model for measurement will comply with relativistic causality. In general, one deals with three distinct layers: the underlying theory (i.e., QFT), the chosen model (i.e., the non-relativistic detector model), and the relevant mathematical approximations (for more discussion see [94]). If an empirical prediction is inconsistent with the underlying theory (e.g., signalling at spacelike separation), it can be the fault of the model and/or the approximation. Approximations that are known to be at odds with relativistic causality in the particle detector literature are the rotating wave approximation, the non-relativistic approximation and the relevant zero-mode approximation [48,82,102].
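The failure of microcausality for non-relativistic dynamics can be illustrated with a minimal toy model of our own (not from [25]): a single particle hopping on a short chain stands in for the detector's internal dynamics, and Heisenberg-evolved occupation operators at distant sites fail to commute for any evolution time t > 0, because Schrödinger evolution spreads operator support instantaneously, with no lightcone.

```python
import numpy as np

# Toy illustration (ours): non-relativistic dynamics is not microcausal.
# A single particle on an N-site chain models the detector's internal
# dynamics.  The occupation operator of site 1, Heisenberg-evolved for any
# t > 0, fails to commute with the occupation operator at the far end.

N = 4                                   # short chain, so the short-time
H = np.zeros((N, N))                    # amplitudes stay above float noise
for j in range(N - 1):
    H[j, j + 1] = H[j + 1, j] = -1.0    # nearest-neighbour hopping

def heisenberg(op, t):
    """op -> U^dagger op U with U = exp(-i H t), via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T
    return U.conj().T @ op @ U

n_first = np.diag([1.0, 0.0, 0.0, 0.0])  # occupation of site 1
n_last = np.diag([0.0, 0.0, 0.0, 1.0])   # occupation of site N

for t in (0.01, 0.1, 1.0):
    comm = heisenberg(n_first, t) @ n_last - n_last @ heisenberg(n_first, t)
    print(t, np.linalg.norm(comm))       # nonzero for every t > 0
```

At short times the commutator norm scales like t^(N-1), so the violation is small but never exactly zero, mirroring the exponentially suppressed (but nonvanishing) 'tails' discussed below.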
This is due to the non-locality associated with the field observables that these approximations introduce.
Approximations are not the source of the causality violations in Sorkin-type scenarios (even though they can play a role [11]), but they do illustrate the pragmatic approach taken in detector modeling: approximations that violate relativity theory can be tolerated in regimes in which the violations are negligible. To address Sorkin-type scenarios, the goal is not to rule out superluminal signalling in principle, but to argue on a model-by-model basis that violations are negligible in the intended domain of application. Furthermore, superluminal signalling is not restricted to Sorkin-type impossible measurement scenarios within the detector models program. The next subsection discusses superluminal signalling in bipartite measurement scenarios, and then we will turn to superluminal signalling in tripartite, Sorkin-type measurement scenarios in Sec. 4.4.

Frictions with relativistic causality: Superluminal signalling with two detectors
As a preliminary to analyzing Sorkin-type impossible measurement scenarios, consider two detectors in spacelike separated regions. For example, consider two two-level systems A and B coupled to the field through the interaction Hamiltonian

Ĥ_int(t) = ∑_{ν=a,b} λ_ν χ_ν(t) μ̂_ν(t) ⊗ ∫ d³x F_ν(x) φ̂(t, x).

Since the two detectors are not directly coupled to each other, the question is: how much signalling can be 'transmitted' through their coupling to the quantum field? Since the field is relativistic, is there any causality condition for the field that blocks superluminal signalling if the two detectors (i.e., the two interaction regions) are placed in spacelike separation?
In Martín-Martínez [71] it was shown that after A and B have interacted with the field (assuming that A interacts with the field before B in some reference frame) the state of detector B at leading order in perturbation theory is

ρ̂_b = ρ̂_{b,0} + λ_b² ρ̂_b^{noise} + λ_a λ_b ρ̂_b^{signal} + O(λ⁴),

where the noise term is local on detector B, and all the influence of the presence of detector A on detector B's density matrix is captured by the 'non-local' term that is proportional to λ_a λ_b. This signalling part of the density matrix can be written as

ρ̂_b^{signal} = C d̂ + C* d̂†, where

C = ∫ dt ∫ d³x ∫ dt′ ∫ d³x′ χ_a(t) F_a(x) χ_b(t′) F_b(x′) ⟨[φ̂(t, x), φ̂(t′, x′)]⟩,        (20)

and d̂ is an operator that depends on the states and the internal frequencies ω_a, ω_b of the detectors. If both smearings are compactly supported, the integration is performed only over the two disjoint and spacelike separated spacetime regions, supp χ_a F_a and supp χ_b F_b. Microcausality guarantees that the field commutator vanishes in spacelike separation, and there is no superluminal signalling between the two detectors at second order in perturbation theory. 45 This behaviour has also been studied in the general case, using the Hamiltonian

Ĥ_int(τ) = ∑_{ν=a,b} λ_ν ∫_{E(τ)} dE Ĵ_ν(x) ⊗ φ̂(x).        (21)

Note that, for convenience, we have absorbed the spacetime smearing function in the definition of the detector current operator, i.e., Ĵ_ν(x) := Λ_ν(x)Ĵ_ν(x) (comparing with (15)). If

45 In the case of point-like interactions, this argument can be extended to higher orders in perturbation theory [19]. A non-perturbative argument can be found in [25]. 46 E(τ) is a one-parameter family of spacelike surfaces, where τ is a global function whose level curves represent the planes of simultaneity of the detector's center of mass and (under some assumptions [74]) τ is the detector's proper time. dE denotes the family of induced measures on the surfaces E(τ). Note that we have assumed that the two detectors share the same proper time.
we assume that the state is initially uncorrelated, i.e., ρ̂_initial = ρ̂_a ⊗ ρ̂_b ⊗ ρ̂_φ, the general expression for signalling is [27]

ρ̂_b^{(2),signal} = −i λ_a λ_b [Σ̂, ρ̂_{b,0}], with Σ̂ = ∫ dV ∫ dV′ G_r(x, x′) ⟨Ĵ_a(x′)⟩ Ĵ_b(x),        (22)

where G_r(x, x′) is the retarded Green's function 47. We see that, for general switching functions (dropping the assumption that the switching functions are compactly supported and non-overlapping), the role of the field commutator in (20) is played by the field's retarded Green's function in (22).
The operator Σ̂ can be understood as the current associated with detector B smeared by the propagated expectation value of the current associated with detector A. In the case of the massless Klein-Gordon field in a 3+1 dimensional flat spacetime, for instance, the propagated expectation value takes the familiar form of the Liénard-Wiechert potentials,

∫ dV′ G_r(x, x′) ⟨Ĵ_a(x′)⟩ = (1/4π) ∫ d³x′ ⟨Ĵ_a(t_r, x′)⟩ / |x − x′|,

where t_r = t − |x − x′| is the retarded time. We see that the operator Σ̂ carries all the information about the signalling from detector A to B. In [27] it was shown that the variance of Σ̂ bounds the Fisher information of B, i.e., the information that detector B can 'learn' about the coupling of A to the same quantum field. Again, we notice that if the 'source' ⟨Ĵ_a(x′)⟩ is spacelike separated from the 'receiver' Ĵ_b, Σ̂ is the zero operator and there is no superluminal signalling. This is because G_r(x, x′)⟨Ĵ_a(x′)⟩ is supported in the future lightcone of A's interaction region. Nevertheless, it is quite common in the detector literature to use smearing functions that are not compactly supported. For example, Gaussian smearings are chosen for the sake of computational convenience and analytical results.
Moreover, there are cases in which the use of non-compactly supported functions is not optional. When using detector models to represent the light-matter interaction, the smearings are associated with the atomic wavefunctions, which are generally not compactly supported unless confined in an infinite square well. In their seminal paper on the Unruh effect [107], Unruh and Wald introduce the coupling of the position operator x̂_t (e.g. of an electron) to the field as

Ĥ_int(t) = λ χ(t) φ̂(t, x̂_t).

The field operator is defined over the spectrum of the position operator of the non-relativistic particle. This type of interaction Hamiltonian can resemble the dipole coupling in the light-matter interaction [87,73]. In this case the expectation value of the current is

⟨Ĵ(t, x)⟩ = χ(t) |ψ(t, x)|²,

where ψ(t, x) is the particle's wavefunction. If we plug this current into (22) we see that there is non-zero signalling even if the detectors are 'centered' in spacelike separation. Intuitively, the detectors are 'overlapping' even when in spacelike separation due to the quantum-mechanical 'tails'. These 'tails' obscure relativistic causality when the detectors are put in contact with the underlying relativistic QFT. This contact between the non-relativistic detector system and the relativistic quantum field is a unique feature of non-relativistic detector models. Roughly, the scales that one can define at the interface between the relativistic and non-relativistic theory (e.g., the Compton scale or the light-crossing time of an atom) can be thought of as the regimes of validity of the non-relativistic models; however, the additional, system-specific scales that are introduced by the model will play a role too. This means that the regime of validity of each model has to be evaluated on a case-by-case basis.
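The contrast between compactly supported smearings and Gaussian 'tails' can be checked numerically in a toy setting of our own devising: in 1+1 dimensions, where the massless-field retarded Green's function is simply G_r(t, x) = (1/2)θ(t − |x|), the leading-order signalling is controlled by G_r integrated against the two spacetime smearings. For compactly supported (bump) smearings at spacelike separation this integral vanishes identically; Gaussian smearings with the same centres leave a small nonzero residue.

```python
import numpy as np

# Numerical toy (ours, 1+1D massless scalar): signalling functional
#   C = int dt dx dt' dx'  L_a(t,x) L_b(t',x') G_r(t'-t, x'-x),
# with G_r(t, x) = (1/2) theta(t - |x|), detector A centred at the origin
# and detector B at centre_b.  Compact bumps at spacelike separation give
# exactly zero; Gaussians give a small nonzero 'tails' contribution.

def bump(u, width):
    """Smooth, compactly supported: exactly zero for |u| >= width."""
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    inside = np.abs(u) < width
    out[inside] = np.exp(-1.0 / (1.0 - (u[inside] / width) ** 2))
    return out

def gaussian(u, width):
    return np.exp(-0.5 * (u / width) ** 2)

def signalling_integral(profile, width, centre_b=(0.0, 5.0), half=2.0, n=60):
    ts = np.linspace(-half, half, n)
    xs = np.linspace(-half, half, n)
    step = ts[1] - ts[0]
    T, X = np.meshgrid(ts, xs, indexing="ij")
    La = profile(T, width) * profile(X, width)   # A's smearing, centred (0, 0)
    Lb = profile(T, width) * profile(X, width)   # same shape, shifted to B
    tb, xb = centre_b
    C = 0.0
    for i, t in enumerate(ts):
        for j, x in enumerate(xs):
            if La[i, j] == 0.0:
                continue
            Gr = 0.5 * ((T + tb - t) > np.abs(X + xb - x))  # retarded support
            C += La[i, j] * np.sum(Lb * Gr) * step ** 4
    return C

C_compact = signalling_integral(bump, 0.5)
C_gauss = signalling_integral(gaussian, 0.5)
print(C_compact, C_gauss)   # compact: exactly 0.0; Gaussian: small but > 0
```

Moving B's centre into A's causal future (e.g. centre_b = (5.0, 0.0)) makes the compact-support integral nonzero as well, confirming that the toy reproduces the expected lightcone structure.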
The apparent causality violations introduced by two detectors that are mostly spacelike separated (when the overlap of their 'tails' is 'small') have been analysed and quantified in [27] from the perspective of quantum metrology. Perhaps counterintuitively, the causal 'overlap' depends not only on how fast the tails decay and on the characteristics of the spacetime, but also on the internal characteristics of the detector systems (e.g., the internal frequencies ω_a, ω_b). Nevertheless, one can derive frequency-independent bounds on the information that B can gain about detector A's interaction with the field [27]. Quantifying this cross-talk between distant detector systems is important in the analysis of entanglement harvesting, for which one needs to distinguish between genuine harvesting and the correlations that are established through communication [69,103].
Overall, in the weak coupling regime and to leading order in perturbation theory, the apparent causality violations introduced by non-compact detector-field interactions seem manageable and can be argued to fall outside the regime of validity of the model, based on the relevant scales of each problem. Beyond perturbation theory, for compactly supported detector-field interactions, there is a non-perturbative argument for blocking superluminal signalling based on the causal factorisation of the scattering operator of the model. We denote by Ŝ_{a+b} the scattering operator of the total system, i.e., the time-ordered exponential of the interaction Hamiltonian (21), and by Ŝ_a the scattering operator representing the interaction between detector A and the field (similarly for B). In [25] it was shown that if the two interaction regions O_{a,b} are compactly supported and causally orderable (that is, if O_b does not intersect the causal past of O_a), then the scattering operator factorises as Ŝ_{a+b} = Ŝ_b Ŝ_a. Causal factorisation also guarantees that the final state of the field after both interactions does not depend on their order (and so does not depend on the reference frame) as long as they are spacelike separated, since in this case Ŝ_b Ŝ_a = Ŝ_a Ŝ_b. Causal factorisation is sufficient for blocking superluminal signalling in bipartite scenarios but, as we will see in the next section, it does not suffice for blocking superluminal signalling in the set-up of the Sorkin-type problem.
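A minimal sketch of why causal factorisation blocks bipartite signalling, in our notation and assuming Ŝ_{a+b} = Ŝ_b Ŝ_a = Ŝ_a Ŝ_b for spacelike separated interaction regions:

```latex
\langle \hat D_b \rangle
  = \mathrm{Tr}\bigl(\hat S_{a+b}\,\rho\,\hat S_{a+b}^{\dagger}\,\hat D_b\bigr)
  = \mathrm{Tr}\bigl(\hat S_a \hat S_b\,\rho\,\hat S_b^{\dagger}\hat S_a^{\dagger}\,\hat D_b\bigr)
% \hat S_a acts on \mathcal{H}_a \otimes \mathcal{H}_\phi while \hat D_b acts
% on \mathcal{H}_b, so [\hat S_a, \hat D_b] = 0; cyclicity of the trace gives
  = \mathrm{Tr}\bigl(\hat S_b\,\rho\,\hat S_b^{\dagger}\,\hat D_b\,\hat S_a^{\dagger}\hat S_a\bigr)
  = \mathrm{Tr}\bigl(\hat S_b\,\rho\,\hat S_b^{\dagger}\,\hat D_b\bigr),
```

which is independent of Ŝ_a, so A's coupling cannot signal to B in this bipartite set-up.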

Impossible measurements induced by detector-field interactions
In the case of three (or more) detectors coupled to the field, Sorkin-type impossible measurement problems can arise [25,11]. In de Ramón, Papageorgiou, and Martín-Martínez [25], it was shown that this type of acausal behaviour persists for the most general kind of detector models that use the general Hamiltonian density in (15) (i.e., for both compactly and non-compactly supported detector models, with the exception of the point-like model).
The extension of the detector-field interaction and the non-relativistic dynamics of the detector system are responsible for this acausal behaviour. Nevertheless, the advantage of the detector approach is that one can quantify this causality violation in terms of the scales that are introduced by the detectors and the detector-field interactions.
Following the demonstration in [25], consider the impossible measurement scenario for local regions O_1, O_2, and O_3 depicted in Fig. 2. A unitary 'kick' is implemented over region O_1, possibly through the coupling of a detector to the field, which can then be disregarded. In particular, the initial state of the detectors plus field has the form ρ̂_initial = Û ρ̂_0 Û†, where ρ̂_0 is an arbitrary state of the joint system and Û = 1_a ⊗ 1_b ⊗ Û_φ is an arbitrary unitary acting on the field's Hilbert space. Two detectors A, B interact with the field over the regions O_2 and O_3, respectively. If detector A were not coupled to the field, the expectation values of observables of detector B, denoted as D̂_b, would not depend on Û_φ, since B only interacts with the field in the causal complement of O_1. In the presence of detector A, the condition that B's observables are not sensitive to the local 'kick' Û is given by (27) (for a derivation see [25]), which can be recast as condition (29). To make sense of (29) we can think of Ŝ_b† D̂_b Ŝ_b as an induced observable that resides on region O_3 and Ŝ_a Û Ŝ_a† as the local 'kick' propagated through the coupling to A. Next we have to examine the localisation of Ŝ_a Û Ŝ_a†. That is, how does the coupling to detector A 'propagate' the local 'kick' over region O_1 to region O_3? Crucially, it turns out that the localisation of Ŝ_a Û Ŝ_a† includes the forward lightcone of region O_2 (see Fig. 2) and, as a result, the expectation values for detector B in O_3 will depend on the local 'kick'. By expanding condition (29) one finds that this is because [Ĵ_a(x), Ĵ_a(x′)] ≠ 0 for spacelike separated points within the extension of region O_2 (i.e., supp Λ). As de Ramón et al.
[25] put it, "[this result] links superluminal signalling with superluminal propagation within the device that is implementing the measurement... The physical intuition is that, when a detector is spatially extended, the information propagating inside the detector is not constrained to travel subluminally since the detector is a non-relativistic system." In Sec. 4.2 we argued that supp Λ cannot be straightforwardly interpreted as the region occupied by the detector, but the main point is that if the detector current Ĵ_a were another relativistic field, and as such obeyed Microcausality, then its coupling to the field over region O_2 would not change the localisation of the local 'kick' over O_1, and no observable in O_3 would be sensitive to the 'kick'. Note that condition (29) is a special case of the causality condition (5) proposed in Borsten et al. [14],^49 which is not satisfied in general in the detector models approach. We will return to this point when we review the detector-based measurement theory in the next subsection.
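Schematically (our reconstruction; see [25] for the precise form of condition (29)), the requirement is that the propagated 'kick' commute with the induced observable:

```latex
\bigl[\,\hat S_a \hat U \hat S_a^{\dagger},\; \hat S_b^{\dagger}\hat D_b \hat S_b\,\bigr] = 0 .
% This would hold if the current \hat J_a obeyed Microcausality, since then
% \hat S_a \hat U \hat S_a^{\dagger} would remain localised away from O_3.
% It fails in general because
\bigl[\hat J_a(\mathsf{x}), \hat J_a(\mathsf{x}')\bigr] \neq 0
\quad\text{for spacelike separated } \mathsf{x},\mathsf{x}' \in \mathrm{supp}\,\Lambda .
```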
This structural issue with using non-relativistic detector models, namely that they are defined using currents that do not obey a microcausality condition, can be tolerated by conducting a rigorous analysis of the regimes of validity of the models. That is, the severity of the causality violations in physically reasonable situations can be quantified. This is not only necessary for justifying the use of the models, but also for making sense of this abstract type of causality violation in concrete scenarios that can represent 'realistic' detection experiments. de Ramón et al. [25] note that "since for point-like detectors there is not superluminal propagation, one can disregard this kind of faster-than-light signalling for 'small enough' detectors. Whether a detector is small or not will depend, of course, on the parameters of the problem." One can also argue, in terms of the coupling strength, that in the weak coupling limit the Sorkin-type problem is of at least order λ^n when n detectors are involved. As explained above, the signalling between any two detectors A and B is of second order, i.e., of order λ_a λ_b (see Eq. (18)). This is because the λ_a² and λ_b² terms are 'local' to each detector and do not allow the detectors to 'see' each other. Similarly, in the tripartite case of detectors A, B, and C in the Sorkin-type configuration, the coupling constants have to combine for detector C to 'see' A through B, and so the superluminal signalling is of at least third order, λ_a λ_b λ_c. In fact, it was shown explicitly in [25] that, for UDW-type detectors in the tripartite scenario, the superluminal signalling is of fourth order in perturbation theory, while most relevant calculations are of second order in the coupling constant.
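The order-counting argument can be summarised schematically (our notation):

```latex
% Dyson expansion of the full scattering operator with couplings
% \lambda_a, \lambda_b, \lambda_c:
\hat S = \mathcal{T}\exp\Bigl(-i\!\int\!\mathrm{d}t\,
  \bigl[\lambda_a \hat H_a(t) + \lambda_b \hat H_b(t) + \lambda_c \hat H_c(t)\bigr]\Bigr).
% Signalling from A to B first appears in cross terms \propto \lambda_a\lambda_b;
% the \lambda_a^2 and \lambda_b^2 terms are 'local' to each detector.
% For C to 'see' A through B, terms \propto \lambda_a\lambda_b\lambda_c are
% required, so Sorkin-type signalling is at least third order (fourth order
% for UDW-type detectors [25]), while typical detector-response
% calculations are second order in the coupling.
```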
To summarize, the use of non-relativistic detector models to represent measurements of relativistic systems introduces the possibility of superluminal signalling. This phenomenon is not confined to Sorkin-type impossible measurement scenarios; for example, superluminal signalling is also possible in models for bipartite measurement scenarios. In terms of the 'impossible measurement' reductio argument in Sec. 2.1, the detector models approach rejects assumption P3 that ideal measurement theory is applied directly to the field system. Instead, projective measurements (modeled by rank-1 projection operators) are only performed on detectors, and Lüders' rule is only applied as a state update rule following measurements on detectors. This strategy is combined with a pragmatic approach: a detector model may only be used when non-relativistic effects such as superluminal signalling can be shown to be negligible. That is, superluminal signalling is ruled out FAPP in the regime of applicability of a given detector model. The analysis of the magnitude of non-relativistic effects is carried out on a case-by-case basis for concrete detector models used under specified conditions. For Sorkin-type impossible measurement scenarios, this analysis pinpoints the source of superluminal signalling as the violation of Microcausality by the current associated with the detector. As an example of the case-by-case analysis that rules out superluminal signalling FAPP: when the impossible measurement scenario is modeled using UDW-type detectors, the effects of superluminal signalling appear at fourth order in the perturbation series in the coupling constant, while the results that are taken to be physically significant are at second order.

The Polo-Gómez, Garay, and Martín-Martínez detector-based measurement theory
In the previous section we sketched the dynamical understanding of how the 'impossible measurements' arise in extended detector-field interactions in concrete models, without making explicit use of any measurement theory. In this section we analyse the consequences for the detector-based measurement theory by checking to what extent the detector-induced state updates satisfy the causality condition (5) of Borsten et al. A detector-based measurement theory for QFT that specifies the state update rules for the field induced by projective measurements on the detectors has been developed by Polo-Gómez, Garay, and Martín-Martínez in [86]. We summarize this detector-based measurement theory here to set up a comparison with the measurement theory of the FV framework in Sec. 5.
As before, the general set-up is that measurements on the field are carried out by first allowing the detector and field to interact in some region, and then measuring the detector in the causal future of this region when the detector and field are no longer coupled. Assume that the initial state of the detector-field system is a separable state represented by the density operator ρ = ρ_d ⊗ ρ_φ. Given the interaction Hamiltonian Ĥ_int between the field and the detector, the evolved state is Ŝ_1 ρ Ŝ_1†, where Ŝ_1 = T exp(−i ∫_{−∞}^{t_1} dt Ĥ_int(t)) and t_1 is a time after which the detector-field interaction is turned off. At a later time t_2 ≥ t_1 a projective measurement P̂(t_2) (denoted as P̂_2) is applied to the detector system and the total state is updated according to (30). Note that the unitary scattering operator Ŝ_1 is supported over the interaction region, while the projection operator depends only on time, since the detector operators have no explicit spatial dependence. Also, the states ρ, ρ′ are in general not spacetime-dependent. This becomes important for the interpretation of the induced state update for the field, as we explain below. Assume that the initial state of the detector is ρ_d = |ψ⟩⟨ψ| and that after the interaction with the field the detector is projected by means of the rank-one projector |i⟩⟨i| (e.g. onto the i-th energy eigenstate of the detector); tracing out the detector system in (30) then yields the updated field state, where M̂_{i,ψ} := ⟨i|Ŝ_1|ψ⟩.
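The update rule (30) and the induced field update can be sketched as follows (our reconstruction from the definitions above; see [86] for the precise expressions):

```latex
% Selective update of the joint state after projecting with \hat P_2 at t_2 \ge t_1:
\rho' \;=\; \frac{\hat P_2\,\hat S_1\,\rho\,\hat S_1^{\dagger}\,\hat P_2}
                 {\mathrm{Tr}\bigl(\hat P_2\,\hat S_1\,\rho\,\hat S_1^{\dagger}\bigr)} .
% With \rho_d = |\psi\rangle\langle\psi| and \hat P_2 = |i\rangle\langle i|,
% tracing out the detector gives the induced field update
\rho'_\phi \;=\; \frac{\hat M_{i,\psi}\,\rho_\phi\,\hat M_{i,\psi}^{\dagger}}
                      {\mathrm{Tr}\bigl(\hat M_{i,\psi}\,\rho_\phi\,\hat M_{i,\psi}^{\dagger}\bigr)},
\qquad \hat M_{i,\psi} := \langle i|\hat S_1|\psi\rangle .
```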
In the regime of weak coupling between the field and the detector one can use the Dyson expansion of the scattering operator Ŝ_1, using Ĥ_int from (17). In the case of non-selective measurements, Polo-Gómez et al. [86] show that the updated field state takes the form (36). We see that, by summing over all possible outcomes i, the updated state only depends on the dynamical coupling between the field and the detector. Polo-Gómez et al. [86, p.4] explain that "this is because the projective measurement acts only on the detector once the interaction has been switched off, and it does not provide additional information since being non-selective the outcome is not known". This point is important for showing that the expectation values of observables Â that are defined in spacelike separation from the detector-field interaction region do not change due to the non-selective measurement. That is, such expectation values are unaffected since [Ŝ_1, Â] = 0, thanks to the fields obeying the Microcausality condition.
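Writing E_2 for the induced non-selective update map, the no-signalling property just described can be sketched as follows (our notation; Â is spacelike separated from the interaction region, so [Ŝ_1, Â] = 0 by Microcausality, with Â shorthand for 1_d ⊗ Â on the joint system):

```latex
% Non-selective update: sum over outcomes i of the (unnormalised) selective updates,
\mathcal{E}_2[\rho_\phi]
  \;=\; \sum_i \hat M_{i,\psi}\,\rho_\phi\,\hat M_{i,\psi}^{\dagger}
  \;=\; \mathrm{Tr}_d\bigl(\hat S_1\,\rho\,\hat S_1^{\dagger}\bigr),
% so for spacelike separated \hat A, using [\hat S_1,\hat A]=0 and cyclicity:
\mathrm{Tr}\bigl(\hat A\,\mathcal{E}_2[\rho_\phi]\bigr)
 = \mathrm{Tr}\bigl(\hat A\,\hat S_1\,\rho\,\hat S_1^{\dagger}\bigr)
 = \mathrm{Tr}\bigl(\hat S_1^{\dagger}\hat S_1\,\hat A\,\rho\bigr)
 = \mathrm{Tr}\bigl(\hat A\,\rho_\phi\bigr).
```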
Nevertheless, strictly speaking, a non-selective state update of the type (36) can enable 'impossible measurements'. Consider the impossible measurement scenario in Fig. 2, with Â ∈ A(O_3) and the non-selective measurement happening over region O_2 (that is, M̂_{i,ψ} ∈ A(O_2)). Call ρ_φ^(ns) := E_2[ρ_φ]. Then the map E_2 does not satisfy condition (5) above, which was introduced by Borsten et al. [14] to exclude impossible measurements. This can be seen from the analysis of impossible measurements in Sec. 4.4, where we concluded that 'impossible measurements' are due to the non-local dynamics of the non-relativistic detector system (i.e., the current Ĵ_a(x) associated with the detector does not obey Microcausality). Since the effect of the non-selective measurement only depends on the dynamical coupling between the field and the detector (equations (36), (37)), this diagnosis is directly relevant for the non-selective state update derived in the detector-based measurement theory.
In particular, the condition (29) on Ŝ that is shown to block the Sorkin-type problem in de Ramón et al. [25] is a special case of Borsten et al.'s condition (5) on E_2. Strictly speaking, this condition is violated in the detector-based measurement theory. To see this explicitly, using the notation of the previous section, we consider the analogue of equation (29), where Û represents the unitary 'kick' in O_1. Taking the trace of the action of this operator on any state ρ_a ⊗ ρ_b ⊗ ρ_φ of the total system yields the induced field observable corresponding to the expectation value of the detector observable D̂_b. Performing the trace over detector A, this equation can be written in terms of the dual non-selective map E_2^d, as in Eq. (44). If we demand that Eq. (44) holds for all states of the field ρ_φ, we obtain condition (46), which is equivalent to (29). Eq. (29) does not hold in general, for the reasons set out in the previous section. The violation of (46) shows that, in general, the (dual) state update map E_2^d does not define an observable in the causal complement of O_1 (where the unitary 'kick' Û is supported). Of course, this is due to the non-local dynamics that goes into the definition of the update map (44). Now we turn to the state update rule for selective measurements (30). In contrast to non-selective measurements, expectation values of observables that are spacelike-separated from the detector-field interaction region are affected. For this reason, Polo-Gómez et al. argue that for selective measurements the updated ρ_φ cannot be used for calculating expectation values of observables that are causally disconnected from the causal future of the interaction region.^50 The physical explanation they offer is that after the dynamical interaction between the field and the detector is switched off, the detector remains entangled with the field.
In general the state of the field exhibits spacelike correlations (see the discussion in Sec. 3.4), so projecting the detector selectively destroys some of these spacelike correlations. As they put it, "The entanglement between the detector and the field generated by their interaction thus hinders the possibility of applying the selective update outside the causal future of the detector in a way consistent with the relativistic framework of QFT" [86, p.15].
As a result, Polo-Gómez et al. conclude that the selective state update rule can only be applied in regions that are causally connected to the causal future of the detector measurement region. At the same time, they argue that it is not satisfactory to represent this selective state update by the restriction of the density operator ρ_φ to the forward lightcone because "a density operator does not naturally depend on points of the spacetime manifold" [86, p.5]. They also point out that this restriction on ρ_φ would be ambiguous for the purpose of updating the field's n-point functions w(x_1, ..., x_n) := ⟨φ̂(x_1)...φ̂(x_n)⟩ in the case in which some of the spacetime points x_i are inside and some outside of the forward lightcone of the interaction region. Hence, they shift their attention to updating all possible n-point functions of the field rather than the density operator representing the state of the field. This is because the n-point functions, in contrast to the density operator, naturally depend on the spacetime points. Also, for practical purposes, the state of the field is equivalent to the set of n-point functions. In the case of the detector-based measurement theory, there is an extra pragmatic motivation for this shift: in the weak coupling regime of a linearly coupled interaction Hamiltonian, the n-th order response of the detector depends on the field n-point functions (making use of the Dyson expansion of Ŝ as in (33)). For example, the leading-order (second order in the coupling constant) excitation probability of a two-level system is in one-to-one correspondence with the field two-point function [24]. For these reasons, Polo-Gómez et al. argue that the n-point functions, and not the density operator, should be regarded as the primary means of representing the state of the field.
As a result of these considerations, in the detector-based measurement theory the state update rule following a selective measurement is spacetime-dependent in two respects. First, whether the selective or non-selective state update rule applies to an n-point function for the field ⟨φ̂(x_1)...φ̂(x_n)⟩ depends on the locations of the spacetime points x_i. Second, the updated n-point functions for the field system depend on the detector measurement region in which a selective measurement on the detector is performed. In contrast, in the FV framework the selective state update rule applies to the 'early' state ω in a spacetime-independent way in both respects. In particular, the scattering morphism depends on the detector-field interaction region, but is entirely independent of the detector measurement region.
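For illustration, for the two-point function the first kind of spacetime dependence might schematically read as follows (hypothetical notation: O_m is the detector measurement region and ρ_φ^(s), ρ_φ^(ns) are the selectively and non-selectively updated states; see [86] for the actual prescription):

```latex
w'(\mathsf{x}_1,\mathsf{x}_2) =
\begin{cases}
  \mathrm{Tr}\bigl(\rho_\phi^{(\mathrm{s})}\,\hat\phi(\mathsf{x}_1)\hat\phi(\mathsf{x}_2)\bigr),
    & \mathsf{x}_1,\mathsf{x}_2 \in J^{+}(O_m),\\[4pt]
  \mathrm{Tr}\bigl(\rho_\phi^{(\mathrm{ns})}\,\hat\phi(\mathsf{x}_1)\hat\phi(\mathsf{x}_2)\bigr),
    & \text{otherwise},
\end{cases}
```

so which rule applies to a given n-point function depends on where the points x_i lie relative to the causal future of the measurement region.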
The detector-based measurement theory raises the same question as the FV framework: is state update merely epistemic in some sense? Polo-Gómez et al. contend that the detector model update rules can only be interpreted as representing a change in an observer's state of information about the field, not as representing a physical change in the state of the field (see Sec. V and Appendix A of [86]). Whether the detector-based measurement theory requires an epistemic interpretation in this strong sense that state update is merely an update of an observer's state of information is an interpretative issue that is beyond the scope of this paper. However, the fact that the state update rules for the field depend on the spacetime region in which the measurement on the detector is performed does make the state update rules observer-dependent and constrains their interpretation. By assumption, the detector and the field system only interact in the detector-field interaction region. The observer can choose to perform a projective measurement on the detector at any time after the detector and field cease interacting. As a result, it does not make sense to literally interpret the state update rules as representing a physical change of state that is brought about by measurement. This is an unnatural interpretation of this measurement theory because, by assumption, the field and detector are no longer interacting in the detector measurement region; therefore, according to this measurement theory, the projective measurement on the detector does not cause (bring about) a change in the physical state of the field in this region. 
Of course, physical changes in the state of the field in the detector measurement region could be attributed to entanglement between the field and detector or the measurement theory could be modified to include a non-trivial interaction between the detector and field in the detector measurement region; however, these moves would be counter to the main goal of modeling local measurements using field theory. Locality in this sense is ensured by the assumption that the measurement interaction is confined to the detector-field interaction region. Therefore, the state update rules in the detector-based measurement theory cannot literally be interpreted as representing physical changes to the field system that occur in the detector measurement region.
To briefly summarize this section: the detector models approach is pragmatic. The detector is modeled using NRQM, which allows projective measurements on the detector to be represented. Constructing a concrete model of a detector coupled to a field system involves choices, such as a smearing and a switching function, which may be made on pragmatic grounds. The introduction of a non-relativistic detector introduces non-relativistic effects. Superluminal signalling in Sorkin-type impossible measurement scenarios can be attributed to the fact that the current associated with the detector, which enters the detector-field interaction Hamiltonian, does not satisfy Microcausality. However, from a pragmatic perspective, this is not problematic as long as the effect is negligible in the domain of applicability of the model. Reassurance that this is the case can be obtained on a case-by-case basis by carrying out the calculations for a concrete model. Polo-Gómez, Garay, and Martín-Martínez have proposed a detector-based measurement theory with state update rules for the induced field observables. Borsten et al.'s condition (5) on physical observables holds only approximately, due to the violation of Microcausality by the detector current. The selective state update rule only applies in regions causally connected to the causal future of the detector measurement region in which the projective measurement on the detector is performed. This is an observer-dependent feature of the state update rules, which Polo-Gómez et al. interpret as representing an update to the observer's state of information.

Discussion: Comparing the FV framework and the detector models approach
The reductio argument generated by Sorkin-type impossible measurement scenarios helps to clarify the important differences between the detector models approach and the FV framework. The main differences arise from different strategies for addressing the problem of impossible measurements, although the diagnoses of the problem are broadly similar. The detector models approach identifies as one important source of the problem the premise that ideal measurement theory (and Lüders' rule in particular) can be applied directly to relativistic quantum field systems. Furthermore, according to the detector models approach, the 'impossible measurements' reductio argument leaves out many model-specific details about the detectors that are actually used to make measurements of quantum fields. Properly modeling the detectors and their interactions with the field system can provide reassurance that superluminal signalling is negligible (i.e., does not occur FAPP) in the domain of applicability of the detector models.
The FV framework also blames the impossible measurement scenarios on the application of ideal measurement theory (including Lüders' rule) to relativistic quantum systems. Their response also involves modeling the probe as well as the field system. However, the FV framework proposes a general, abstract framework for representing both the probe and the field using AQFT that is not model-specific. As a result, additional principles for AQFT need to be posited. In particular, the Time-Slice Property axiom needs to be added to the premises of the 'impossible measurements' reductio argument in order to block impossible measurement scenarios. Furthermore, a new measurement theory for AQFT is needed in this approach. In contrast to the detector models approach, the application of Lüders' rule to represent a projective measurement of the detector is not taken as a starting point for determining the measurement theory for the field system. Fewster and Verch instead start with a representation of measurement based on the axioms of AQFT and the scattering isomorphism and follow the strategy (but not the physical interpretation) of QMT to derive a new measurement theory for AQFT. Both the detector models approach and the FV framework end up with new state update rules for field systems, but their methods for deriving them are different.
It would be easy to focus only on the differences between the detector models approach and the FV framework, but there are some general similarities between the approaches that are worth drawing attention to because they shed light on the form taken by measurement theory in QFT and on fundamental features of QFT itself. These two approaches are very different in spirit, so it seems plausible that points of agreement between them could reveal genuine features that an ultimate, complete theory of relativistic quantum fields that includes a measurement theory will have. We take it that neither the detector models approach nor the FV framework is the final account of local measurement for QFT, because each satisfies some desiderata and not others, as we explain below. From this perspective, the detector models approach and the FV framework are not inherently incompatible. To facilitate a direct comparison, our discussion in this section will compare the FV framework with the Polo-Gómez, Garay, and Martín-Martínez detector-based measurement theory discussed in Sec. 4.5.

Similarities
• The role of dynamics: Regarding the response to the 'impossible measurement' problem, the FV framework and detector models approaches agree in general terms on the two main problems with the reductio argument in Sec. 2.1: the premise that Lüders' rule applies to relativistic systems (P3(b)) must be rejected and additional assumptions about the dynamics of relativistic systems undergoing measurement must be introduced. In the FV framework, Lüders' rule is abandoned entirely and new state update rules for relativistic QFT are derived. The detector models approach also refrains from applying Lüders' rule directly to relativistic quantum fields, though it is applied to non-relativistic detectors. On the dynamical side, it is the Local Time-Slice Property that is crucial for blocking Sorkin-type impossible measurement scenarios in the FV framework. The Local Time-Slice Property is a general principle that ensures that the quantum fields propagate subluminally and deterministically. In the detector models approach, careful attention to how the dynamics of the relativistic field and its coupling to the detectors is modeled is crucial for addressing the 'impossible measurements' reductio. In this case, the non-relativistic coupling of the detector to the field via currents that do not satisfy microcausality is the source of superluminal signalling in impossible measurement scenarios. This problem is addressed on a case-by-case basis by performing calculations that involve the interaction Hamiltonian to assess the magnitude of the non-relativistic effects. From this perspective, one thing that goes wrong in Sorkin's argument is that these model-specific dynamical details are not properly taken into account.
• Localization regions for observables: Both approaches abandon the prima facie operational interpretation of a local algebra of observables A(O) as representing operations that it is possible to carry out in region O. In both the detector models approach and the FV framework, this change in interpretation is made possible by the introduction of detectors or probes. In practical approaches to applying measurement in relativistic quantum theory, such as Sorkin's, the traditional interpretation of a smeared field operator is that the smearing reflects the spacetime region over which the operation represented by the field operator is performed. In the detector models approach, the explicit representation of the detector that is coupled to the field system complicates the interpretation of the smearing function. As discussed in Sec. 4.2, the role of the smearing function is ultimately pragmatic; therefore, as long as the predictions are not affected, there is no reason to choose a smearing that is supported only in the interaction region between the detector and field system (e.g., one may choose a Gaussian). Furthermore, as was observed in [78] and is further supported by considering the covariant Hamiltonian density representing the interaction between the detector and field system, the most natural interpretation of the spacetime smearing function is that it is a holistic property of the detector-field interaction rather than a property of either system by itself. In the FV framework, the system observables can similarly be associated with different localization regions. (Recall that Polo-Gómez et al.'s main rationale for shifting to the n-point functions is that the state update rules are spacetime-dependent while the density operators are not, so the n-point functions become the primary vehicle for representing the state.)
In both the FV framework and the detector-based measurement theory, the representation of local measurements involves expectation values of fields at different times.
In the FV framework, the state on C(M ) for the coupled probe and field system is a global state in the sense that it encodes the expectation values of the field over all local regions. The scattering isomorphism facilitates the representation of 'in' and 'out' algebraic states (and hence expectation values) in regions M − and M + . In the detector-based measurement framework, the n-point functions directly involve fields at different times. As we noted in Sec. 1, this shift away from the instantaneous states that play a central role in NRQM has longstanding historical roots in QFT in lines of theoretical development that led to the formulation of scattering theory (again, see [12]). Here we see the same theme emerging in the treatment of local measurements. As we will discuss in Sec. 6, the problems raised by instantaneous states in QFT are also an important motivation for histories-based approaches to relativistic quantum theory, including the Quantum Temporal Probabilities program [7].
• State update rules for relativistic field systems cannot be literally interpreted as representing a physical change of state that occurs in some spacetime region: Our discussion of the FV framework emphasized that (as Fewster puts it) "no geometric boundaries across which the state reduction occurs are needed" [38, p.10]. In the FV framework, this is a consequence of two features: the counterfactual interpretation of the 'in' and 'out' algebraic states over the uncoupled algebra, and Corollary 6, which establishes that successive selective measurements can be evaluated either jointly or sequentially. As a result, the state update rule in the FV framework cannot be literally interpreted as representing a physical change of state that happens in any spacetime region. The detector-based measurement theory similarly posits state update rules that do not admit a literal interpretation in terms of a physical change of state occurring in the region in which they are applied. However, the arguments for this conclusion differ from those for the FV framework. Polo-Gómez et al. argue that the detector-based state update rules need to be interpreted as an update of the observer's state of information about the field system, and not as a change in the observer-independent state of the field [86, p.5]. Even if one does not go so far as endorsing an epistemic interpretation in this strong sense, the fact that the state update for the field depends on the spacetime location in which the selective measurement on the detector is performed is not compatible with interpreting the state update rule as representing a physical change of state of the field that is brought about by measurement.
• Nobody solves the Measurement Problem: Proponents of the FV framework and the Polo-Gómez detector-based measurement theory explicitly agree that their respective measurement theories for QFT do not solve the Measurement Problem that arises in NRQM. The Measurement Problem is an issue that arises after a theory of (non-relativistic or relativistic) quantum systems and an accompanying measurement theory are formulated. Proponents of both the detector models approach and the FV framework agree that they are engaged in the preliminary task of formulating a physical theory plus a compatible measurement theory for relativistic systems.

Differences
• Pragmatic vs principled approaches have different goals: As we have stressed, the detector models approach is pragmatic in spirit, while the FV framework adopts a more principled approach. The adoption of different approaches means that the detector models approach and the FV framework prioritize different goals. A central goal of the detector models framework is to construct models that adequately describe realistic detectors, including detectors that can actually be built in a lab. In contrast, the FV framework has the primary goal of supplying a framework for measurement in QFT that is generally applicable. Many pragmatic choices are made in the course of constructing a model for a particular detector, including the use of NRQM to model the detector, the smearing functions and field-detector couplings, and the acceptance of FAPP arguments ruling out impossible measurements. In contrast, the FV framework focuses on formulating a fully relativistic measurement theory based on the general physical principles of AQFT in which impossible measurement scenarios cannot arise at all.
• Different scopes of applicability: The detector models approach and the FV framework have different scopes of applicability. While the FV framework aspires to provide an entirely general account of measurement in QFT, it introduces simplifying assumptions that exclude some physically realistic detectors. For example, the FV framework assumes that the region K in which the probe and system interact is compact. 51 This assumption does not apply to all of the models of detectors described in Sec. 4. The detector models approach is not a special case of the FV framework; its physical assumptions are more flexible than those of the FV framework in virtue of its case-by-case approach. The FV framework also faces the limitation that, in order to apply it to a situation, field and detector models that satisfy the axioms of AQFT need to be constructed. However, it should not be assumed that the detector models approach is always the more useful one for practical applications, for example, when probes are not modeled as two-level systems. The FV framework may be more appropriate for representing field-field couplings and how information is transferred from one field to another (e.g. see [52]). There is interesting work to be done at the intersection of the two approaches.
• Different spacetime dependence of state updates: The detector-based state update rules are spacetime-dependent in two respects. First, the updated state of the field is not a single global state but is relative to the region in which subsequent predictions are made. Second, the state update rules also depend on the region in which the projective measurement on the detector is performed. In contrast, the FV framework is not spacetime-dependent in either of these respects. In the FV framework, the selective state update rule yields an updated algebraic state that applies to all of spacetime. As discussed above, this state should be interpreted as a counterfactual state, but it is still the case that the selective state update yields a single, global state. Furthermore, the state update rules in the FV framework are independent of the spacetime region in which the probe is measured.
The instrument is defined using the scattering morphism, whose spacetime dependence enters only through the field-probe coupling region.

Further Comparison
While both the physical theory of NRQM and its accompanying measurement theory are well-established parts of physics, both the formulation of the physical theory of QFT and the formulation of its accompanying measurement theory are works in progress. The flurry of recent papers on how to represent measurements on relativistic quantum systems is one indication of this. Both the FV framework and the detector models approach are part of this larger ongoing research program. In our view, the FV framework and the detector models approach should be viewed as complementary projects rather than rivals. As we have emphasized, they adopt different strategies and have different goals. In their present incarnations, they also have different scopes of applicability. Applying the two approaches to model the same measurement scenario can also lead to apparently different predictions. For example, entanglement harvesting from the vacuum by spacelike-separated local detectors is investigated in [90,54]. Ruep [90] applies the FV measurement framework and concludes that weakly coupled detectors cannot harvest entanglement due to noise associated with the inescapable mixedness of the states of the local probe. In response, Grimmer et al. [54] argue that when a concrete detector model is applied that includes an assumption about the scale of the detector, this effect is negligible and there is no obstacle to weakly coupled detectors harvesting entanglement. As in other cases in science, different models can be used to give complementary descriptions of the same situation. Discrepancies in predictions can be useful for investigating the conditions under which effects occur and the relationships among our models.
While we view the FV framework and detector models approach as complementary, it should be noted that there are differences of opinion on this issue in the literature. One point of disagreement concerns whether it is possible in principle to model detectors using QFT, as the FV framework sets out to do. Here is an example of an assumption that is made to set up the FV framework: "We also do not claim to solve the Measurement Problem of quantum theory. Rather, we take it for granted that the experimenter has some means of preparing, controlling and measuring the probe and sufficiently separating it from the QFT of interest — which we will call the 'system' — the question is what measurements of the probe tell us about the system. That is, our interest is in describing a link in the measurement chain, in a covariant spacetime context" [41, p.3]. This assumption that measurement theory describes "a link in the measurement chain" is a standard one within Quantum Measurement Theory applied to NRQM (see, for example, [18, p.225]), but its use in QFT is criticized by Grimmer, Torres, and Martín-Martínez [54, p.5] and Grimmer [53], who argue that the assumption that "someone, somewhere, knows how to measure something" is not warranted when the probe is taken to be a relativistic quantum field. For practical reasons, the detector models approach is currently the only one available for modeling some realistic detectors: measurement apparatuses are low-energy systems that would need to be described using interacting QFTs with bound states [86, p.1]. However, one might expect (as Fewster and Verch do) that a treatment of systems and some types of detectors will eventually be possible within QFT. Grimmer et al. [54] make the strong claim that detector models formulated using NRQM are needed in principle to model measurements of QFT systems.
Grimmer [53] presents a stronger argument that, in order to acquire empirical significance, QFT observables must be appropriately related to a non-QFT model, with the Unruh-DeWitt detector model being a paradigm example. Furthermore, Grimmer believes that these models of measurement must be constructed on a case-by-case basis and is skeptical about the viability of a general measurement theory for QFT such as the FV measurement framework or even the Polo-Gómez et al. detector-based measurement theory.
A related point of disagreement between proponents of the detector models approach and the FV framework is whether finite-rank projectors are needed to represent measurements. If so (as proponents of the detector models approach believe), then such objects are not available in the Type III von Neumann algebras that are typical in QFT, which would be problematic for the FV framework (see Sec. 3.4 and 4.1).

Another approach: Histories-inspired responses to Sorkin-type impossible measurement scenarios
It is important to point out that Sorkin's motivation for formulating the 'impossible measurements' issue is to advocate for the sum-over-histories approach to quantum theory. As he puts it in the abstract of [100], "It is argued that this problem leaves the Hilbert space formulation of quantum field theory with no definite measurement theory, removing whatever advantages it may have seemed to possess vis a vis the sum-over-histories approach, and reinforcing the view that a sum-over-histories framework is the most promising one for quantum gravity." Histories-inspired approaches do not set out state update rules; they instead assign probabilities directly to histories. In this section, we will comment on the form that the 'impossible measurements' problem takes in histories-based formalisms. To the best of our knowledge, one cannot find a complete response in the histories literature, even though the problem is clearly articulated in older [63] and more recent [47,3] literature.
As there are many variants of the histories-based approaches, we will first sketch the main tools and ideas that are common in the histories literature. Roughly, the possible 'histories' of a given system are all the possible time-extended propositions that one can assign to the system; that is, all possible 'paths' of the system in the (underlying classical) sample space. The goal of the formalism is to assign probabilities to all possible paths. These probabilities are 'quantum' in the sense that in general they 'overlap' and are not guaranteed to satisfy the usual additivity conditions. Histories-based formalisms can be viewed as generalisations of the path-integral [58,81]. As histories-based approaches have mostly been applied to quantum cosmology [49], the formalism typically refers to histories of a closed system (e.g. the universe), which is why the notion of agency, or 'external' measurement, is not part of the formalism. This is one of the conceptual advantages that Sorkin highlights for using histories-based approaches to resolve the impossible measurements issue, or for re-evaluating the measurement problem in a spacetime context. In Sorkin's words, "With the formal notion of measurement compromised as it seems to be already in quantum field theory, the greatest advantage of the sum-over-histories may be that it does not employ measurement as a basic concept. Instead it operates with the idea of a partition (or 'coarse-graining') of the set of all histories, and assigns probabilities directly to the members of a given partition, using what I would call the quantum replacement for the classical probability calculus" [100, p.11].
In non-relativistic quantum theory, a history of a quantum system is simply a chain of time-ordered single-time propositions that are represented by projection operators (in the Heisenberg picture) and are typically associated with possible values of the observables of the system. Such a chain $\alpha = (\alpha_1, \alpha_2, ..., \alpha_n)$ is represented by a class operator

$$\hat{C}_\alpha = \hat{P}_{\alpha_n}(t_n) \cdots \hat{P}_{\alpha_2}(t_2)\, \hat{P}_{\alpha_1}(t_1), \qquad (48)$$

where the $\alpha_i$'s index the eigenvalues of some observable (or, more generally, the closed subspaces of the Hilbert space). The set of time-points $t_1, t_2, ..., t_n$ is called the history's temporal support [63].
It is worth emphasizing that the role of the projectors in (48) is propositional, i.e., they encode the possible propositions that can be attributed to a closed system. It would be natural to think of a history as a sequence of actual events, e.g. measurement outcomes, but this interpretation is not straightforward, as the definition (48) has nothing to do with measurements a priori. The use of projectors here is similar in spirit to how they are used in quantum logic, where one can define 'or' and 'and' operations for combining the propositions that can be assigned to a physical system [63]. The joint probability of $\alpha$ (the sequence of propositions $\alpha_1$ at $t_1$, ..., $\alpha_n$ at $t_n$) is given by

$$p(\alpha) = \mathrm{tr}\left(\hat{C}_\alpha\, \rho_0\, \hat{C}_\alpha^\dagger\right), \qquad (49)$$

where $\rho_0$ is the state of the system (in the Heisenberg picture). In general, these probabilities do not satisfy the Kolmogorov additivity condition. That is, if $\alpha, \beta$ are exclusive histories and $\alpha \vee \beta$ denotes their disjunction, then it does not hold in general that

$$p(\alpha \vee \beta) = p(\alpha) + p(\beta). \qquad (50)$$

If we define the decoherence functional to be the following complex-valued functional of two histories,

$$d(\alpha, \beta) = \mathrm{tr}\left(\hat{C}_\alpha\, \rho_0\, \hat{C}_\beta^\dagger\right), \qquad (51)$$

then $p(\alpha) = d(\alpha, \alpha)$ and the additivity condition (50) holds only if

$$\mathrm{Re}\, d(\alpha, \beta) = 0, \qquad (52)$$

which is called the consistency condition [31]. In the context of standard path-integral approaches the decoherence functional can be calculated as a double path-integral [32]. In a sense, the decoherence functional quantifies the 'overlap' of two histories, and the most obvious way of satisfying the consistency condition is to demand that the 'overlap' vanishes, i.e.,

$$\mathrm{tr}\left(\hat{C}_\alpha\, \rho_0\, \hat{C}_\beta^\dagger\right) = 0 \qquad (53)$$

for two exclusive histories $\alpha, \beta$. The definition of exclusiveness is that for at least one timestep $t_i$ the two corresponding projectors are orthogonal, i.e., $\hat{P}_{\alpha_i}(t_i)\hat{P}_{\beta_i}(t_i) = 0$. In histories-based approaches the decoherence functional (rather than the state) can be considered the primary object from which the probabilistic predictions of the theory are extracted [32].
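To make these definitions concrete, here is a minimal toy calculation (our illustration on a single qubit, not a field-theoretic model from the histories literature): the first proposition is taken in the $\sigma_x$ basis and the second in the $\sigma_z$ basis, the class operators and decoherence functional are built as in (48) and (51), and the failure of the additivity condition (50) is exhibited numerically.

```python
# Toy qubit illustration of class operators, the decoherence functional,
# and the failure of Kolmogorov additivity (all basis/state choices ours).
import numpy as np

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
ketp = (ket0 + ket1) / np.sqrt(2); ketm = (ket0 - ket1) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj())
Px = {+1: proj(ketp), -1: proj(ketm)}   # propositions at t1 (sigma_x basis)
Pz = {+1: proj(ket0), -1: proj(ket1)}   # propositions at t2 (sigma_z basis)
rho0 = proj(ket0)                       # initial Heisenberg-picture state

# Class operator C_alpha = P_{alpha_2}(t2) P_{alpha_1}(t1), as in (48)
C = lambda a1, a2: Pz[a2] @ Px[a1]
# Decoherence functional d(alpha, beta) = tr(C_alpha rho0 C_beta^dag), as in (51)
d = lambda Ca, Cb: np.trace(Ca @ rho0 @ Cb.conj().T)

Ca, Cb = C(+1, +1), C(-1, +1)           # two exclusive histories alpha, beta
p_a, p_b = d(Ca, Ca).real, d(Cb, Cb).real   # p(alpha) = d(alpha, alpha)
p_join = d(Ca + Cb, Ca + Cb).real       # coarse-graining: C_{alpha v beta} = C_alpha + C_beta

# Additivity (50) fails here: p_join = p_a + p_b + 2 Re d(alpha, beta), Re d != 0
print(p_a, p_b, p_join, d(Ca, Cb))
```

Because the two first-slot alternatives interfere ($\mathrm{Re}\, d(\alpha,\beta) = 1/4$ for this state), the coarse-grained probability differs from the sum of the fine-grained ones, which is exactly the failure of (50) that the consistency condition (52) is designed to exclude.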
This approach gives more general probability rules (than the Born rule) and the issue of state update does not come up since the formalism operates directly at the level of whole histories [81]. In a relativistic setting, one is not looking for state-updates that can be consistently applied 'step-by-step' without leading to causality violations, but rather the question is: which histories of the system can be assigned a probability that does not entail superluminal influences between the history's 'nodes'? In what follows, we will elaborate on this question.
Adjusting the histories-based formalism to quantum field theory is not trivial, mostly because in a relativistic set-up there is no fixed time-ordering of events. The definition of the history's temporal support as a set of time-points $\{t_1, t_2, ..., t_n\}$ is too restrictive in this case. Even in globally hyperbolic spacetimes there are many foliations that one can choose, not to mention the challenges in general non-hyperbolic spacetimes [63]. Also, in a field theory it is unnatural to associate events with spacetime points, and one would like to consider localised propositions associated with spacetime regions. Isham [63] accordingly generalises the temporal support to a collection of spacetime regions. One would like to associate quantum field propositions with these local regions. This is challenging for various reasons. Questions that appeal to the Fock space structure of the Hilbert space, like 'is the field in the ground state?', cannot be asked locally (but can, for example, on a hypersurface, as in the 'impossible measurement' example due to Sorkin described in Sec. 2.2). Due to the Type III nature of the local algebras there are no finite-rank projectors, and the 'yes-no'-type questions familiar from quantum logic cannot be asked locally [56,88]. Isham [63] suggests that the elementary propositions of the field theory over a spacetime region O should be of the form P(f) corresponding to the possible values of the smeared field operator φ(f) with supp(f) ⊆ O, and he argues that one would have to specify the way in which a local lattice of propositions L_O associated to region O is generated from these basic propositions. In the recent work [47], a map O → E_O from regions to projective effect-valued measures is assumed, but it is not clear in general which statements about the quantum field the effect-valued measures correspond to. These difficulties with extending the notion of local propositions to a QFT setup are also related to the difficulties in applying the modal interpretation to QFT [20,35].
When considering how to structure basic regions and propositions in QFT so as to solve the 'impossible measurements' problem, Sorkin presents the following dilemma: one can either further restrict the allowed measurement regions and the corresponding ordering relation, or else select the allowed observables on "some more ad hoc basis." In his words, the problem is "foreshadowed by our need to take a transitive closure in defining ≺" (see discussion in section 2.2), and as a result one could "further restrict the allowed measurement regions O_j in such a manner that the transitive closure we took in defining ≺ would be redundant. For example, we could require that for each pair of regions O_j, O_k all pairs of points x ∈ O_j and y ∈ O_k be related in the same way" [100, p.9]. Of course, this would block the Sorkin problem by excluding the configuration of regions in figures 1 and 2. This further restriction of ≺ would imply that one can only consider temporal supports consisting of spacetime regions that are pairwise related like two 'thickened' spacetime points (fully spacelike, fully timelike, or fully lightlike), blocking the possibility that a region partially invades the forward lightcone of another region. This is a global restriction, perhaps an 'all-at-once' constraint as in [1], that is hard to reconcile with the local perspective. As Sorkin puts it, "it is difficult to see how the ability to perform a measurement in a given region-or the effect of that measurement on future probabilities-could be sensitive to whether some other measurement was located totally to its past, or only partly to its past and partly spacelike to it" [100].
It is also noteworthy that the initial definition of ≺ already excludes cases that might be of physical interest, such as overlapping regions, which were considered by Bohr and Rosenfeld in [13], and regions that intersect each other's causal past, which are relevant for the study of possible spacetime embeddings of general process matrices [109]. In general, by restricting ≺ one might exclude physically interesting cases. Finally, regarding the second possibility of restricting the allowed observables, Sorkin points out that the inability of two coupled subsystems to signal through the measurement of an additive observable suggests that one could still allow integrals of spatially smeared observables over a spatial subset of a hypersurface (see the discussion in Sec. 2.4.2, also [3]). However, spacetime-smeared fields do not possess this additive character due to their time-extension (for a treatment of time-extended propositions in histories-based approaches see, e.g., [5]).
Overall, it is not clear how the histories machinery can be used to eliminate the impossible measurements, but it offers some tools that can be used for salvaging the framework. One technical tool that is usually not emphasized in the 'standard' single-time formalism is the consistency condition (53), which gives rise to well-defined multi-time probabilities. In [47], Fuksa argues that the consistency condition can be used to characterise the causal behaviour of the probabilistic predictions of the formalism. To briefly demonstrate the point, consider two propositions $\hat{P}_{\alpha_1}$ associated to a region $O_1$ and $\hat{P}_{\alpha_2}$ associated to a region $O_2$, and say that $\hat{P}_{\alpha_1}$ corresponds to some observable $\hat{A}_1$ and $\hat{P}_{\alpha_2}$ corresponds to some observable $\hat{A}_2$. The observers associated to the two regions cannot signal if the non-selective measurement of $\hat{A}_1$ does not affect the statistics in $O_2$, that is, if

$$\sum_{\alpha_1} p(\alpha_1, \alpha_2) = p(\alpha_2). \qquad (54)$$

This holds if $[\hat{P}_{\alpha_1}, \hat{P}_{\alpha_2}] = 0$. 53 The observation is that, even if the two projectors do not commute, (54) holds thanks to the consistency condition (with respect to $\alpha_1$). To see this (following [47]), from (49) we have, for a pure state $\psi$, that

$$p(\alpha_1, \alpha_2) = \langle\psi|\, \hat{P}_{\alpha_1} \hat{P}_{\alpha_2} \hat{P}_{\alpha_2} \hat{P}_{\alpha_1}\, |\psi\rangle.$$

Assuming the consistency condition (53) with respect to $\alpha_1$, that is,

$$\langle\psi|\, \hat{P}_{\alpha'_1} \hat{P}_{\alpha_2} \hat{P}_{\alpha_2} \hat{P}_{\alpha_1}\, |\psi\rangle = 0 \quad \text{for } \alpha'_1 \neq \alpha_1,$$

we get that

$$\sum_{\alpha_1} p(\alpha_1, \alpha_2) = \langle\psi|\, \hat{P}_{\alpha_2} \hat{P}_{\alpha_2}\, |\psi\rangle = p(\alpha_2).$$

This means that, thanks to the consistency condition with respect to $\alpha_1$, a non-selective measurement of $\hat{A}_1$ in $O_1$ does not affect the statistics of $\hat{A}_2$ in $O_2$ even if $O_1, O_2$ are not fully spacelike separated. Similarly, in [47] Fuksa considers the three-step history that corresponds to the Sorkin-type problem, represented by the class operator $\hat{C}_{(\alpha_1,\alpha_2,\alpha_3)} = \hat{P}_{\alpha_3} \hat{P}_{\alpha_2} \hat{P}_{\alpha_1}$, where $\hat{P}_{\alpha_1}$ commutes with $\hat{P}_{\alpha_3}$, but $\hat{P}_{\alpha_2}$ commutes with neither $\hat{P}_{\alpha_1}$ nor $\hat{P}_{\alpha_3}$ ($\hat{P}_{\alpha_3}$ is associated with region $O_3$ that is spacelike separated from $O_1$, and $O_2$ is not spacelike separated from either $O_1$ or $O_3$; see figure 2).
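The signalling test behind (54) can be sketched numerically in the same toy qubit setting (our illustration; non-commuting projectors stand in for the propositions in $O_1$ and $O_2$, and the regions themselves play no role): compare the statistics of the second proposition with and without a non-selective measurement of the first.

```python
# Toy check of the no-signalling condition (54): does a non-selective
# measurement of the first proposition change the statistics of the second?
import numpy as np

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
ketp = (ket0 + ket1) / np.sqrt(2); ketm = (ket0 - ket1) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj())
Px = {+1: proj(ketp), -1: proj(ketm)}   # proposition alpha_1 ("in O_1")
Pz = {+1: proj(ket0), -1: proj(ket1)}   # proposition alpha_2 ("in O_2")
rho0 = proj(ket0)

# p(alpha_2) with no measurement of alpha_1
p_alone = np.trace(Pz[+1] @ rho0).real
# sum over alpha_1 of p(alpha_1, alpha_2): non-selective update first
rho_ns = sum(Px[a] @ rho0 @ Px[a] for a in (+1, -1))
p_after = np.trace(Pz[+1] @ rho_ns).real

# The two disagree because [P_{alpha_1}, P_{alpha_2}] != 0 and the
# consistency condition with respect to alpha_1 fails for this state:
d_cross = np.trace((Pz[+1] @ Px[+1]) @ rho0 @ (Pz[+1] @ Px[-1]).conj().T)
print(p_alone, p_after, d_cross)
```

Here the cross-term of the decoherence functional is nonzero, so (54) fails and the toy "observer" at the first slot can signal to the second; had the consistency condition held with respect to $\alpha_1$, the two probabilities would have agreed, as the derivation above shows.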
Now, since region $O_1$ is not spacelike separated from the union of $O_2$ and $O_3$, the consistency condition with respect to $\alpha_1$ gives that

$$\sum_{\alpha_1} p(\alpha_1, \alpha_2, \alpha_3) = p(\alpha_2, \alpha_3),$$

which means that a non-selective measurement of $\hat{A}_1$ at $O_1$ does not affect the joint statistics of $\alpha_2, \alpha_3$ for any state $\psi$ if the class operator satisfies

$$\hat{C}^\dagger_{(\alpha'_1,\alpha_2,\alpha_3)}\, \hat{C}_{(\alpha_1,\alpha_2,\alpha_3)} = 0 \quad \text{for } \alpha'_1 \neq \alpha_1. \qquad (62)$$
Fuksa [47] points out that if we were to 'squeeze' another intermediate region $O_{2'}$ between $O_1$ and $O_3$ (partially invading their future and past lightcones respectively), the condition that blocks 'impossible measurements' becomes

$$\hat{C}^\dagger_{(\alpha'_1,\alpha_{2'},\alpha_2,\alpha_3)}\, \hat{C}_{(\alpha_1,\alpha_{2'},\alpha_2,\alpha_3)} = 0 \quad \text{for } \alpha'_1 \neq \alpha_1,$$

which is a stricter condition than (62). By introducing more intermediate regions, more conditions are added to the list of conditions that must be satisfied to block the Sorkin-type problem, and there is no obvious way in which they are redundant (reduce to one another). So it is not obvious how to block 'impossible measurements' in general, but this analysis offers a recipe for 'who can signal to whom' given a particular set of regions and propositions. The analysis in [47] also shows that, given a chain of regions $O_i$, a region $O_j$ cannot signal to a 'subsequent' region $O_k$ as long as the class operator 'decoheres' (satisfies the consistency condition) with respect to the $j$-th variable, and that propositions before $j$ and after $k$ do not contribute to the signalling between $j$ and $k$.
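A toy version of conditions of the type (62) can also be checked directly (again our qubit illustration, with $\hat{P}_{\alpha_1}, \hat{P}_{\alpha_3}$ commuting $z$-basis projectors and $\hat{P}_{\alpha_2}$ an $x$-basis projector standing in for the propositions in $O_1$, $O_2$, $O_3$):

```python
# Toy check of the operator condition (62) for a three-step history
# C_{(a1,a2,a3)} = P_{a3} P_{a2} P_{a1}.
import numpy as np

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
ketp = (ket0 + ket1) / np.sqrt(2); ketm = (ket0 - ket1) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj())
Pz = {+1: proj(ket0), -1: proj(ket1)}   # propositions in O_1 and O_3
Px = {+1: proj(ketp), -1: proj(ketm)}   # proposition in O_2

C3 = lambda a1, a2, a3: Pz[a3] @ Px[a2] @ Pz[a1]

# Condition (62): C^dag_{(a1',a2,a3)} C_{(a1,a2,a3)} = 0 for a1' != a1.
# With a non-commuting middle proposition it is violated:
M = C3(-1, +1, +1).conj().T @ C3(+1, +1, +1)
bad = np.linalg.norm(M)    # nonzero norm: (62) fails in this toy model

# If the middle proposition also commutes (all z-basis), (62) holds:
C3c = lambda a1, a2, a3: Pz[a3] @ Pz[a2] @ Pz[a1]
Mc = C3c(-1, +1, +1).conj().T @ C3c(+1, +1, +1)
good = np.linalg.norm(Mc)  # zero norm
print(bad, good)
```

The nonzero value in the non-commuting case mirrors the Sorkin-type configuration: the intermediate proposition links the otherwise commuting outer propositions, and the operator condition that would block signalling is not satisfied.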
Conditions of the type (62) are ad hoc and global in nature, i.e., they depend on the whole set of regions and propositions that one is considering, so, as we said above, they are hard to motivate from the local perspective of observers embedded in spacetime. One way to motivate the consistency condition is through the introduction of decohering pointer variables that are locally coupled to the field [59]. Roughly, the idea is that sufficiently decohered propositions (or measurements) of the pointer variables are correlated with quantum field histories that satisfy the consistency condition and would not lead to (significant) causality violation. Nevertheless, there is a strong dependence of the pointer-variable multi-time probabilities on the chosen measurement scheme, i.e., the chosen measurement resolution [4].
A histories-based formalism that explicitly treats QFT measurement through the introduction of coarse-grained pointer variables is the Quantum Temporal Probabilities (QTP) formalism [6,8]. Joint probabilities of the pointer variables are defined by means of unequal-time correlation functions, and the consistency condition is satisfied for a certain degree of coarse-graining (see also [4]). A connection of this formalism to the closed-time-path (CTP) integral was recently established in [8]. As is also pointed out in [47], it is not obvious in general how to establish standard causality conditions in the path-integral formalism (the 'in-out' formalism) beyond scattering theory (see discussion in [12]). The CTP formalism (the Schwinger-Keldysh or 'in-in' formalism [98,66]) is better suited for analysing the causal behaviour of local QFT measurement, thanks to its emphasis on real-time causal evolution [7]. The QTP program also demonstrates how the CTP formalism provides the 'right' correlation functions that go into the definition of joint probability distributions over outcomes of coarse-grained pointer variables that are locally coupled to the field [8]. Time is also treated as a random variable (in analogy to stochastic processes), and time-of-arrival problems can be described accordingly. It is work in progress to evaluate the causal behaviour in bipartite scenarios and in multi-partite Sorkin-type set-ups using this framework, and to fully analyse how the possibilities of signalling are encoded in the CTP correlation functions.

Conclusion
Sorkin's 'impossible measurements' problem serves as both a motivation for formulating an account of measurement that is compatible with QFT and a guide to possible strategies for carrying out this program. Sorkin [100] and Borsten, Jubb, and Kells [14] present examples that show that the naïve application of Lüders' rule to model non-selective measurements in relativistic quantum theory can lead to superluminal signalling. We explicitly presented these examples as a reductio argument in Sec. 2.1. Identifying the problematic set of premises is useful for distinguishing the 'impossible measurements' problem from other foundational issues raised by QFT. 'Impossible measurements' are not caused by Lüders' rule failing to be manifestly Lorentz covariant due to the relativistic temporal ordering assumption P3(c). Making Lüders' rule manifestly Lorentz covariant does not solve the 'impossible measurements' problem; however, the approaches to responding to the 'impossible measurements' problem surveyed in Sec. 3-6 all seek to replace the relativistic temporal ordering assumption P3(c) in the course of developing manifestly Lorentz covariant alternatives to applying Lüders' rule directly to the field system. The 'impossible measurements' problem is also unrelated to state-dependent features of QFT. In particular, the initial state of the field system is not required to be the vacuum state or any other state that satisfies the assumptions of the Reeh-Schlieder theorem. Of course, these features of QFT do need to be taken into account to have a complete understanding of measurement in QFT, as the discussions of selective measurement of vacuum states and the role of Type III von Neumann algebras throughout this paper indicate. However, these issues are not a cause of the 'impossible measurements' problem.
Sorkin-type 'impossible measurement' scenarios clearly illustrate that Microcausality is not by itself sufficient to rule out superluminal signalling in relativistic quantum theories when Lüders' rule is used to model non-selective measurements. Assuming that the practical ability to signal superluminally is an unacceptable consequence of a relativistic quantum theory, responding to the 'impossible measurements' reductio argument in Sec. 2.1 requires rejecting or revising at least one of the premises and/or adding at least one premise that is sufficient to block the conclusion. Strategies for formulating an account of measurement for QFT can be distinguished according to how they respond to the reductio argument. The FV measurement framework for AQFT introduces additional physical principles for QFT and formulates a new measurement theory for QFT that is informed by QMT for non-relativistic quantum mechanics but differs both formally and interpretationally. It can be shown that superluminal signalling does not occur when Sorkin-type measurement scenarios are modeled using the FV framework [15]. The detector models approach rejects the reductio argument's premise that Lüders' rule is directly applied to the field system; instead, measurements are modeled by coupling the field system to a detector that is represented using NRQM and then performing projective measurements on this detector that may be evaluated using Lüders' rule. Detector models are constructed on a case-by-case basis. For Unruh-DeWitt-type models, superluminal signalling can be ruled out FAPP for Sorkin-type scenarios. The histories-inspired approach that is preferred by Sorkin takes the more radical approach of turning away from the representation of measurement processes 'step by step' and instead assigning probabilities directly to entire histories. 
As far as we are aware, histories-based approaches have not yet achieved a complete resolution of the 'impossible measurement' problem, but progress has been made. Of course, these three approaches are not the only options for either responding to the 'impossible measurements' reductio argument or accounting for measurement in QFT.
There are important differences between these three approaches to accounting for measurement in QFT. The FV measurement framework proceeds axiomatically in a general fashion; in contrast, the detector models approach focuses on constructing concrete models of detectors and the systems to which they are coupled on a case-by-case basis. Differences in the principled and pragmatic attitudes adopted, as well as in the goals of these two research programs, lead to differences in the resulting models for measurement. Most obviously, the state update rules for the field system differ in the FV framework and the detector-based measurement theory. The state update rules are derived from more basic principles within the FV framework, and posited on the basis of plausibility arguments in the detector-based measurement theory. On the other hand, the detector-based approach has a wider scope of applicability at present than the FV framework (though there may also be measurement scenarios treatable within the FV framework but not using detector-based models). A further difference of opinion concerns how best to deal with Type III von Neumann algebras when representing measurements on quantum fields. The histories-informed approaches differ from both of the other approaches insofar as they do not model the measurement process using state update rules at all. While the differences are important, the similarities between these approaches are also revealing. The development of a fully satisfactory account of measurement in QFT is still a work in progress, so similarities may offer a glimpse of general features of solutions to the 'impossible measurements' problem or even of a measurement theory for QFT. An important moral is that dynamics is crucial for diagnosing and addressing the 'impossible measurements' problem. The FV framework's exclusion of superluminal signalling relies on the Local Time-Slice Property, which is a dynamical principle of AQFT.
In detector models, the fact that the currents associated with the detector modeled using NRQM do not satisfy microcausality is the source of superluminal signalling in impossible measurement scenarios. This problem can be addressed on a case-by-case basis by performing calculations that involve the interaction Hamiltonian to determine the regime in which non-relativistic effects are negligible. In the histories-inspired approach, the decoherence functional includes information about the local dynamics of the system as well as the kinematics. Solving the 'impossible measurements' problem remains open in this approach, but a possible solution would rely on the decoherence functional (see e.g. [3]). Overall, histories-based approaches can lead to notions of causality that go beyond scattering.
Another moral is that both the FV framework and the detector models approach dispense with the traditional operational interpretation of a local algebra of observables A(O) as representing operations that it is possible to carry out in region O. In the FV framework, observables can generally be localized in many different regions. The fact that an operation can be performed in a local lab region is instead represented by explicitly identifying a region K in which the field and probe interact and noting that the coupled and uncoupled algebras may only be related by isomorphisms outside of the causal hull of K. The detector models approach does involve introducing smearings, but the support for the smearing function need not be interpreted as representing the region in which the operation represented by a smeared field operator is performed. The choice of a smearing function is a pragmatic one that is not limited to functions with support in the detector-field interaction region; furthermore, the most natural interpretation of the spacetime smearing function is as a holistic property of the detector-field interaction. The same detector system coupled to the field through different physical interactions can lead to different interaction regions. Finally, further work is required for interpreting the local propositions over local regions in histories-based approaches.
The historical context for contemporary programs for developing a measurement theory for local measurements in QFT was set by the choice to formulate QED in terms of asymptotic scattering theory in the 1940s [12]. This was a departure from NRQM, in which instantaneous states at a (finite) time or stationary states are primary. It is interesting to note that none of the three approaches examined in Sec. 4-6 re-introduces instantaneous states at a time. Instead of regarding a state in a Hilbert space as the paradigm representative of the physical state, expectation values (or correlation functions) are the primary representatives of the physical state. In the FV framework, the algebraic state ω(A) represents the expectation value of A in state ω. Similarly, in the detector-based measurement theory, the n-point functions are the primary objects that figure in the state update rules for the field system. In the histories-inspired approaches, the decoherence functional is the primary object that generates the probabilistic predictions of the theory. Furthermore, in all three approaches expectation values at different times are needed. In the FV framework, the scattering isomorphism implements a finite-time scattering theory that facilitates the representation of 'in' and 'out' expectation values. Furthermore, the state on C(M) for the coupled probe-field system encodes the expectation values of the field over all local regions. In the detector-based measurement theory, the n-point functions directly involve fields at different times, and the state update rules also introduce a scattering operator. And, of course, in histories-inspired approaches, multi-time histories are the central quantities of physical significance. The final moral is that the state update rules for the field system cannot be literally interpreted as representing a physical change of state that occurs in some spacetime region in either the FV framework or the detector-based measurement theory.
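Schematically, and in our own notation, the primary state-representing objects in the three approaches are: the algebraic state, a normalized positive linear functional
\[
\omega : \mathcal{A}(M) \to \mathbb{C}, \qquad A \mapsto \omega(A);
\]
the n-point functions
\[
W_n(x_1, \ldots, x_n) = \omega\big(\hat{\phi}(x_1) \cdots \hat{\phi}(x_n)\big);
\]
and the decoherence functional \(D(\alpha, \beta)\), which assigns probabilities \(\Pr(\alpha) = D(\alpha, \alpha)\) to histories in a set satisfying the decoherence condition \(D(\alpha, \beta) \approx 0\) for \(\alpha \neq \beta\). None of these is an instantaneous state on a single time slice.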
In the former case, the 'in' and 'out' states over the uncoupled algebra are best understood as having a counterfactual interpretation, as we argued. Moreover, state updates for successive selective measurements may be evaluated either sequentially or jointly. This is a reflection of the fact that the FV state update rules are manifestly Lorentz covariant. The detector-based state update rules are interpreted by Polo-Gómez, Garay, and Martín-Martínez as representing an update of the observer's state of information about the field system, not as representing an observer-independent change in the physical state of the field system. Even if this interpretation of the state update rules is contested, the fact remains that, for a selective measurement, the state update for the field depends on the spacetime location in which the measurement on the detector is performed, which is incompatible with interpreting the state update as representing a physical change in the field that is brought about by measurement.
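For contrast, the NRQM rule whose literal reading is at issue is Lüders' rule: a selective projective measurement with outcome projector \(P_k\) updates the state as
\[
\rho \mapsto \frac{P_k \, \rho \, P_k}{\operatorname{Tr}(P_k \rho)}.
\]
Read literally, this update occurs at the time of measurement on a preferred hypersurface; it is this at-a-time picture that neither the FV framework nor the detector-based measurement theory carries over to the relativistic setting.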
This final moral is, of course, highly suggestive. Sorkin-type 'impossible measurements' are not a symptom of the Measurement Problem. Conversely, however, solutions to the 'impossible measurements' problem could affect how the Measurement Problem is framed in QFT. More concretely, one version of the Measurement Problem in NRQM is that a literal interpretation of the dynamical evolution of the state is in general inconsistent with a literal interpretation of the state update rule following a measurement. The interpretative consequences of proposed accounts of measurement in QFT are an important direction for future research. (See [45] for a starting point.) Given that all of the proposals reviewed in this paper are works in progress with open questions and challenges, this is only one of many directions for future research. Furthermore, there are other possible approaches to devising an account of measurement for QFT that we did not consider. We hope that this paper encourages continued development of a diverse range of approaches as well as further investigation of the relationships among them.