# On the Identification of the Parts of Compound Quantum Objects

## Abstract

A view of the constitution of quantum objects as reducible, in the sense of being decomposable to elementary particles, is outlined. On this view, parts of composite quantum systems are considered to be identified according to a recently introduced, specifically quantum notion of individuation (Jaeger, Found Phys 40:1396, 2010). These parts can typically also be considered particles according to Wigner’s symmetry-based notion. Particles are considered elementary when they satisfy a condition of elementarity, newly introduced here, that improves on that provided by Newton and Wigner. In any given instance, the compound character of a physical object can be verified in principle by decomposition, ultimately to a set of such elementary parts, through appropriate precise quantum measurements carried out consistently with this principle of individuation.

### Keywords

Quantum theory · Reduction · Composite system · Object · Particle

## 1 Introduction

An explicit basis in quantum theory for the consideration of mechanical systems in space-time in terms of the smallest known objects, the reduction of each object to a set of elementary particles, remains to be provided despite the popular assumption that such a reduction can be accomplished and is well understood. In the consideration of the hierarchical structure of objects, difficulties are encountered, for example, even in the case of individual atoms thought of as composed of electrons, protons, and neutrons. This is because it is not always possible to consider the quantum states—which provide in general only probabilities—as products of those of distinct constituents, because the states may be entangled, bringing their separability into question.

In the case of atoms that are more complex than the hydrogen atom, the states of atomic electrons are typically entangled. Even for such small two-component systems, the often-considered factoring of a quantum system Hilbert space into two smaller Hilbert spaces associated with subsystems is in itself insufficient for state reduction, although the tensor product structure of Hilbert space can play a role in considering the composition of systems. This is because the statistics of the joint states of such a pair differ from those predicted when the pair is assumed to be built from entirely separate individuals, as assumed in classical particle mechanics; there is no longer an immediate association between pure states associated with a larger Hilbert subspace and those associated with the subspaces into which it can be factored. This is so, for example, for a system in the Bell singlet state \(|\Psi ^-\rangle \): that joint state is maximally specific (pure), whereas the subsystem states are minimally specified (fully mixed).
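The singlet-state example can be made concrete with a short numerical sketch (an illustration added here, not part of the original argument), using NumPy to verify that the joint state \(|\Psi ^-\rangle \) is pure while each reduced subsystem state is the fully mixed state \(I/2\):

```python
import numpy as np

# Bell singlet state |Psi-> = (|01> - |10>)/sqrt(2) on a two-qubit space.
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())          # joint density matrix (4x4)

# Purity Tr(rho^2): equal to 1 only for a pure state.
joint_purity = np.trace(rho @ rho).real

# Partial trace over subsystem B yields the reduced state of A.
rho_r = rho.reshape(2, 2, 2, 2)           # indices (a, b, a', b')
rho_A = np.einsum('ijkj->ik', rho_r)      # sum over the B indices
reduced_purity = np.trace(rho_A @ rho_A).real

print(round(joint_purity, 12))    # 1.0: joint state maximally specific (pure)
print(round(reduced_purity, 12))  # 0.5: subsystem minimally specified (fully mixed)
print(np.round(rho_A.real, 12))   # the identity matrix divided by 2
```

The purity \(\mathrm{Tr}(\rho ^2)\) equals 1 exactly for pure states and reaches its minimum of \(1/d\) for the fully mixed state in dimension \(d\), which is precisely the contrast described above.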

The traditional notion of physical objects as composed of impenetrable elementary parts, which had in physics long been used to understand the presence and behavior of ponderable matter, is now seen as inappropriate to quantum systems. This is reflected in general in the statistics of states containing ‘identical’ subsystems, such as electrons. The failure of state separability was of concern to many, perhaps most of all to Einstein. Like the more abstract notions of substance and deterministic causation, the traditional notion of the particle as a substantial, impenetrable object moving along a distinct space-time trajectory interacting deterministically with others must be replaced by a weaker notion of particle (cf. [1], Ch. 3). This notion and the often-considered competing notion of wave are discussed here in Sect. 2. In Sect. 3, it will be argued that both the ontological reduction of physical objects to parts and the notion of the smallest of these as elementary particles remain viable nonetheless, and, furthermore, that these objects can be understood to be causally efficacious. The specific notion of particle invoked here is based on that explicated by Wigner [2], as a co-presence of properties described by a quantum state that satisfies fundamental space-time symmetries. This is valuable because, among other things, it provides a conceptual basis for the view in chemistry of matter as a hierarchy of interacting constituents, with an identifiable lowest level at any period of physics, and supports the prominent role in high-energy physics of the notion of particle and its detection.

It will be specifically argued in Sect. 4 that, together with an improved condition of elementarity supplied here, Wigner’s notion of particle supports the view of matter as reducible to elementary parts. Difficulties following from a dependence on a substantival notion of particle, for example, those explained by Falkenburg (cf. [1], Ch. 1), are thereby avoided, while the essence of the particle as component is retained. Finally, it is pointed out how empirically observed instances of such parts, that is, the particle tokens of the particle types appearing in the Wigner classification, can be identified through a process of analysis carried out by the performance of precise quantum measurements as supported by the quantum principle of individuation. Explicit examples of such analysis are given.

## 2 Non-locality and Localization

It is now evident that quantum theory has surprising implications for our conception of objects and their relationships in space-time (cf. [3], Ch. 1). Most strikingly, the violation of the Bell inequality [4] in its testable versions, e.g. [5], has influenced our world picture in relation to two distinct notions involving space-time, namely, locality and localization. The former involves composite systems and their separability, whereas the latter relates even to individual systems and their nature; both pertain to the understanding of objects and their elementary parts.

Experimental results of Aspect et al. and others have shown that the CHSH Bell-type inequality, which is a consequence of a form of locality [5], is violated by the behavior of pairs of quantum subsystems, that is, that “non-local” correlations between properties of distantly located systems exist in nature. These correlations challenge the view of quantum systems as objects composed of distinct parts, a view central to the picture of the world as constructed of separable entities, which Einstein argued is essential to physics as a science; a realist after his earliest years as a scientist, he emphasized the importance of having physical theory describe such an objectively existing world and, more conservatively, took local descriptions of objects as a precondition of physics. Einstein had argued that any fundamental theory, by virtue of its place in the structure of physics itself, must provide a full description of a world composed of independently existing entities distinctly located in space-time and obeying relativity.

The conception of objects in space-time that underlies Einstein’s standard of theory completeness had come into serious question even before experimental results such as those above were widely obtained and pertains even to the individual case; he commented of quantum theory that “The price which had to be paid for the extraordinary success of the theory has been twofold: The requirement of causality, which anyhow cannot be tested in the atomistic domain, had to be given up, and the endeavor to describe the reality of physical objects in space and time had to be abandoned” [6]. Regarding this, one finds Einstein writing to one colleague regarding the philosophical controversies surrounding quantum theory that “The sore point [*Der wunde Punkt*] lies less in the renunciation of causality than in the renunciation of a reality thought of as independent of observation” (quoted in [7], p. 374).

This concern of Einstein also relates to another issue that became the focus of much attention, of Reichenbach’s most prominently: The coherence of the notion of persistent system properties, that is, observables \(\{O\}\) when *not* under observation, something essential to a realist world view, even for elementary objects. Reichenbach noted that the old question of the status of occurrences “between” observations is of renewed relevance when they are understood to be described by measurements of quantum observables and so introduced a specific term for it, as follows.

“We then shall consider as unobservable all those occurrences that happen between the coincidences such as the movement of an electron or of a light ray from its source to a collision with matter. We call this class of occurrences the *interphenomena*. ...they are constructed in the form of an *interpolation* within the world of phenomena...” ([8], p. 21).

One example of this is the question of the presence of charged particles along points *between* locations of bubbles observable in a bubble chamber. The character of such interphenomena was the subject of much debate in the early years of quantum mechanics. It was addressed by the Copenhagen school, which originated with Bohr and his notion of complementarity, taken to apply to the basic form of quantum entities as a way of accommodating the Heisenberg indeterminacy relations. As Reichenbach put it, “the principle of indeterminacy leads to some ambiguities which find their expression in the duality of waves and corpuscles. ...One sort of experiment seemed to require the wave interpretation, another the corpuscle interpretation.” ([8], p. 21).

Despite the fact that quantum systems need not be continually localized, specific quantum systems are widely thought of as composed of *particles*. The characteristics traditionally associated with the notion of particle are: discreteness, localizability, impenetrability, substantiality, distinct space-time trajectory, and/or transcendental individuality. Other characteristics more recently associated with them are: carrying mass and charge, mutual independence, exhibiting point-like behavior during interactions, being subject to conservation laws, having behavior completely determined by mechanical law, following phase-space trajectories, being spatio-temporally individuated, and being able to form bound systems ([1], p. 211). Notably, experiments involving very small systems such as electrons can and do involve the measurement of discrete units of quantities within local regions in which a strict accounting of quantities can ultimately be made. The traditional notion of the wave, by contrast, is one in which matter–energy is continuously distributed across space with a specific associated matter–energy density, so that if one were to highly localize a wave, some amount of matter–energy might have to be relocated to the localization region at superluminal speeds. Such a picture of the smallest quantum systems is also problematic when explaining phenomena such as the photo-electric and Compton effects.

The systems considered by particle physics are these days typically understood as localizable, that is, *capable of being localized*, rather than being either precisely localized or distributed: Whenever its position is measured precisely, the system is either found within the bounds of precision of the measurement, for example, within the volume of a bubble-chamber bubble or other particle detector, however briefly, or is locally destroyed in the process. The contemporary notion of particle has been articulated via the co-presence of properties and can be identified through space-time symmetries, as explicated by Wigner [2]. This notion will be taken here as that underlying a viable realist picture of compound quantum systems within quantum theory proper.

Feynman described such a solution to the problem of the ultimate character of elementary entities, relative to the complementary pair of pictures of quantum systems as either particles or waves, as follows. “Quantum electrodynamics ‘resolves’ this wave–particle duality by saying that light is made of particles (as Newton originally thought), but the price of this great advancement of science is a retreat by physics to the position of being able to calculate only the *probability* that a photon will hit a detector...” ([9], p. 37). The elementary particles to which Feynman refers are not particles in the traditional classical sense but, rather, are quanta. More precisely, these are the discrete instantiations of states of quantum fields, corresponding to irreducible representations of symmetry groups with invariant properties such as rest mass (for light, zero) and spin (for light, \(\hbar \)). Nonetheless, this notion remains closely related to the traditional particle concept, by virtue of the importance to it of discreteness.

Consider the case of a single quantum of the quantum electromagnetic field, that is, a photon. The empirical feature most distinctively characterizing the photon is that it is capable of producing only a single detection event at an appropriately sensitive light detector [10]. This is a manifestation of the particle aspect of the system to which Feynman refers and uses to justify his preference for the term particle over that of wave:

“[Y]ou were probably told something about light behaving like waves. I’m telling you the way it *does* behave — like particles. ...every instrument that has been designed to be sensitive enough to detect weak light has always ended up discovering [that].” ([9], p. 15)

Particles in the modern sense can be understood as essentially instances of irreducible representations of the symmetry groups corresponding to distinct values for invariant properties, including rest mass and spin. When under observation, these entities are found to deliver fundamental properties in discrete units, as discussed further in the following section.
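As an illustrative sketch added here (not from the source), the invariant rest mass that labels Wigner's mass classes can be computed frame-independently from a four-momentum via the relativistic relation \(m^2 = E^2 - |\mathbf {p}|^2\) (in units with \(c = 1\)):

```python
import numpy as np

def invariant_mass(p4):
    """Rest mass from a four-momentum (E, px, py, pz), units with c = 1.

    m^2 = E^2 - |p|^2 is the invariant labelling Wigner's mass classes;
    max(..., 0.0) guards against tiny negative values from rounding.
    """
    E, px, py, pz = p4
    return np.sqrt(max(E**2 - (px**2 + py**2 + pz**2), 0.0))

# A massless quantum (photon): E = |p| in any frame.
photon = (5.0, 0.0, 0.0, 5.0)
# An electron with m = 0.511 (MeV) and momentum 1.0 (MeV) along x.
electron = (np.sqrt(0.511**2 + 1.0**2), 1.0, 0.0, 0.0)

print(invariant_mass(photon))              # 0.0
print(round(invariant_mass(electron), 3))  # 0.511
```

The same value is obtained in every inertial frame, which is what qualifies rest mass (together with spin) as an invariant property characterizing a particle type.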

The particle is a notion of continued and ever-expanding application in experimental high-energy physics practice. Thus, the term *particle* continues to be used to refer to entities in the world: High-energy physics identifies the fundamental particles with field quanta and continues to count these entities—despite the fact that they obey their own, non-classical statistics, either Bose–Einstein or Fermi–Dirac—and performs strict accounting of their properties in accordance with conservation laws, such as the conservation of energy–momentum and charges. The particle notion need not be rejected if only the essential characteristics of being discrete, invoked by Feynman, being localizable (at least momentarily in a reference frame), and being capable of compounding to form larger entities (or of larger entities being deconstructed into them) are required within it.

Complex bodies can be by traditional means readily identified and understood as composite at large scales, despite their being ultimately quantum mechanical, simply because they are associated with *unique* collections of various recognizable subsystems and are localized to precisions pertinent to the length scales associated with them. Our primary concern as regards reduction, then, is the more unusual quantum behavior observed in systems at small scales and high precision, approaching the atomic and below.

## 3 Quantum Particles and Causation

Regardless of the counterintuitive behavior of quantum systems and the generally statistical nature of quantum predictions, in the absence of the revision of physics by a radically different theory, one is driven to accept what Reichenbach called the *synoptic principle of quantum mechanics*: the quantum state description is the most complete state description possible within its general theoretical framework. One must, therefore, describe the world via quantum theory, even though it is not always possible to do so causally in the traditional and straightforward sense of the *deterministic* cause–effect relations with which early quantum physicists contrasted quantum behavior by calling it “acausal” [12].

The components of ions and atoms to which most larger systems of ordinary matter—that is, matter outside of extreme, typically astrophysical conditions—might be reduced can be viewed as field quanta and classified as particles according to the Wigner classification: these component parts have essential properties of mass, energy, spin, and charge. The main thrust of the Wigner classification of identical particles, discussed in detail below, is that a correspondence can be set up between the spin value of the elementary particle and a quantum field—the Klein–Gordon field for spin 0, the Dirac field for spin 1/2, such as the electron, the electromagnetic field for spin 1, etc.—via the space-time symmetries obeyed. These systems are found to be discrete, capable of independence (in that they may be uncoupled and have uncorrelated initial conditions), point-like in their interactions (at least instantaneously), localizable (in a chosen frame of reference), and countable. When a set of these is observed in interaction, they are found to obey conservation laws (cf. [1], p. 227), allowing them to be associated with their previous collective state. The view of quantum systems advocated here is one in which they are considered as composed, directly or indirectly, of specific sorts of particles that are both localizable at any moment and are capable in a range of circumstances of interacting causally.

The feature of such particles we are primarily concerned with from here on is their role as causally efficacious parts of objects, the most important aspect of this being that other entities can interact with and be decomposed into them. Although the entities of elementary particle physics are fundamentally different from classical particles, they still share the essential characteristics of traditional particles; other, more problematic characteristics such as impenetrability, substantiality, continual localization, and unique space-time trajectory are *not* features we attribute to these entities. Arguments against the particle picture that target such features therefore fail to pertain to them. The sense of object in quantum theory is thus far more spare than the traditional ones, particularly in regard to localization: Objects amount essentially to persistent bundles of properties or sets of such properties associated with space-time regions and are not substantial in any fundamental sense. One finds in quantum theory, for example, that a physical system need not have a rest mass—as demonstrated by the example of the photon—something alien to the traditional body and particle concepts. It has proven most valuable to think of matter and light in the realm of subatomic physics primarily in terms of entities with invariant properties determined by laws or conservation rules and with changeable dynamical properties. One can then understand the structure of as many combinations of these as possible, as evidenced by the ability of the structure of symmetry groups to account for the classification of subatomic particles into a structured “particle zoo.”

The Wigner classification provides us with a valuable tool for determining the smallest parts of which any given quantum system can be understood as composed: if one is able to analyze a quantum system to the smallest parts one will arrive in some sense at a collection of such particles. Consider, for example, any atom larger than that of hydrogen. Such an atom can be largely understood in terms of quanta of the Dirac field governing spin-1/2 particles. It can be fully ionized, one electron at a time, until only its nucleus remains. The protons and neutrons of this nucleus can be observed as either radioactive decay products or, in principle, by removal via collision processes. (These nucleons can be understood themselves as arising from various arrangements and re-arrangements of properties, as described within the quark model; the quarks are Dirac field particles associated with the irreducible representations of the special unitary symmetry group SU(3). Quarks can also be understood, in a somewhat more limited sense, as particles because they behave in accordance with that group structure and re-arrangements of them obey conservation laws and sum rules for isospin, among others. However, quarks are never observed as free individuals, presenting challenges discussed further in the following section.)

Now, the particle concept currently used in physics can be seen to have at least two underlying pre-theoretical meanings, the causal and the mereological (cf. [1], Sect. 3.1). On the former, particles are viewed as the (probabilistic) local causes of local events as observed, for example, in light detectors. The latter, which is taken up primarily in the following section, concerns the relationship between parts as well as the (larger) whole of which they are parts, and has a meaning that is derived through the causal analysis of experiments and observations: Particles interact and act as the constituents of larger systems of matter and/or light. A broad range of views relating to these two meanings has developed over the last century. Falkenburg has summarized the recent development of the particle concept with the observation that “The 20th century history of the particle concept is a story of disillusion...in the subatomic domain there are no particles in the classical sense...a generalized concept of quantum particles is not tenable either” ([1], Ch. 6). Here, we challenge the view that quanta cannot be viewed as particles.

Most pertinent to the view developed here is the question, “In what sense does particle physics deal with *particles*?” Falkenburg answers this question as follows. “When theoretical foundations of the current particle physics concept(s) are investigated, it becomes obvious that an *operational particle concept* is crucial.” ([1], p. 228) She argues in particular that “[P]articles are observable local effects rather than the unobservable causes of such effects. What remains is the operational particle concept (OP). Current quantum theories no longer support a causal particle concept.” ([1], p. 329). At first glance, one might understand this as part of an argument that *only* an operational conception, which assumes *only* that particles are collections of mass, energy, spin, and charge, localizable by particle detectors, and independent of each other, is viable. Ultimately, however, the view defended by Falkenburg is that “the reality of subatomic particles and atomic processes is not a reality in its own right” but is only relational, existing “only in relation to a macroscopic environment and experimental devices” ([1], p. xi). A succinct characterization of the modern view of particles Falkenburg has in mind here is the following.

“Abstracting particle properties such as mass, charge, or spin from their experimental or environmental contexts means dispensing with spatio-temporal objects and keeping nothing but *bundles of dynamic properties* of the respective kinds. These bundles are made up of those magnitudes that can be measured dispersion-free at the same time in *any* experiment. Somehow these bundles of dynamic properties propagate through an experimental context, respecting the conservation-laws of mass–energy, charge, and spin...” ([1], p. 205)

This view accords well with the previous discussion above, but Falkenburg’s view of such quanta has an *additional* aspect, namely, an accordance with Ketterle’s dictum that in many experimental situations one may only experiment in a way that “prepares waves but detects particles” when dealing with quantum systems. Contrary to this dictum, one can and often does prepare particles and later detect them, as seen, for example, in experiments in which individual systems are prepared and manipulated in ion traps. (It is also questionable to view what is observed in experimental procedures as accurately representing the phenomena of nature in the general context.) Although it remains true that there is an asymmetry between space-time and momentum parameters, in that, according to the Schrödinger equation, a localized quantum system that is free has a spatial wave-function that spreads out spatially, tending in the infinite-time limit toward a momentum state unrestricted in extent ([1], p. 281), the causal particle concept cannot be rejected for this reason: only the notions of causal *necessitation* and *continual* localization that have traditionally been associated with the particle concept are threatened by this dispersive tendency in the statistics of collections of quantum entities. The conclusion that the causal particle notion fails follows only if the classical requirement that particles be perpetually localized is assumed whenever one understands *particle* in any non-trivial realist sense.

Falkenburg comments that “bare quantum ‘objects’ are just bundles of properties which underlie superselection rules and which exhibit non-local, acausal correlations” ([1], p. 206); they should be understood as (1) collections of mass \(m\), energy \(E\), spin \(s\), and charge \(q\), (2) localizable by a particle detector, and (3) independent of each other—the very same characteristics mentioned above. Such bundles are observed to occur regularly together, and can therefore be observed under controlled conditions; bundles of this kind have been considered Lockean ‘empirical substances.’ However, she argues, “without a macroscopic measuring device which is itself local, nothing is localized and no re-identification of the same kind of particle in subsequent measurements is possible” ([1], p. 221), and the causal particle notion is therefore *inapplicable.* This argument depends on specific assumptions about the nature of quantum measurement connected with the Copenhagen interpretation. In particular, although a localized ‘macroscopic’ measuring device often suffices for particle localization, *requiring* one in order to accomplish a re-identification of components is an unnecessary assumption, even though there are no pre-determined trajectories in general. Indeed, the very notion of a ‘macroscopic’ measurement apparatus is quite vague [14, 15]. If one desires a re-identification of the set of parts as such, it can be accomplished by measurements *without* the assumption of a macroscopic apparatus; both the measuring device and the measured system can be successively and simultaneously localized as a result of repeated measurement interactions, which can be performed extremely, although not arbitrarily, frequently. This results in empirical data such as one sees in bubble-chamber detections of charged nuclei, independently of the requirement that the apparatus be ‘macroscopic.’

The presence of localizable particles provides a realist ground for the observed “bundling” of properties and their “propagation through experimental contexts” in accordance with conservation laws. Moreover, even when they are by their very nature precisely the same (identical) in their prescribed intrinsic properties, such nomological objects can still be differentiated by their dynamical properties (as shown in two examples further below). Let us, therefore, reconsider the claim that quantum particles cannot be understood as causally effective. General quantum physics now relies on conservation-law constraints, symmetry, Einstein locality, and other assumptions specifically regarding measurements to explain the appearance of definite measurement outcomes and individual events, which have in the past been explained via deterministic causation. Traditional (deterministic) causality, that is, a relation of necessity between events involving such particles, can be replaced by an approach to state evolution that incorporates probabilistic causation. As Pauli argued, “The simple idea of deterministic causality must...be abandoned and replaced by the idea of statistical causality” ([16], *op. cit.*, p. 151).

Such a quantum approach to causation can be understood as inspired by, but importantly different from, Laplace’s earlier mechanical determinism [12]. It is supported by the success of physical laws written as equations for state-vectors, such as the Schrödinger equation, or, equivalently, for the time-evolution of operators acting on them that represent quantum observables [17]. If one deems it necessary, such a move can be metaphysically accommodated, for example, by following Heisenberg in taking the quantum state to describe, rather than actualities in every case, “potentialities” in general [18], which can be understood as possessing a causal aspect, that is, a *raising of the likelihood of their effects* (cf. [19]).

## 4 Quantum Objects and Composition

The mereological aspect of particle identification derives from the relationship *between* parts as well as their relation to the (larger) wholes of which they are parts: composite systems can be decomposed into distinct parts as well as composed into wholes. Under the Wigner scheme for classifying particles, discussed above, the ways in which its ‘identical’ particles differ offer a way of identifying their individual instances, even when they are co-present: The mereological aspect of particles can be understood in principle through their satisfaction of quantum-number sum rules—the sum of conserved properties is fixed. The compound nature of composite systems can, in principle, be verified in individual instances by analytical measurements within compound systems; counting them and their properties in individual situations before and after composition or decomposition allows for their identification as tokens of the types provided by the Wigner classification. Examples of such comprehensive measurements are explicitly described further below.

In practice, in most of the scattering experiments mentioned above, the particles (protons, neutrons, etc.) and their properties are not individually counted; indeed, even their number is typically inferred from the limited data available by utilizing the sum rules for charge, spin, and momentum–energy. However, in principle such complete measurements could be performed on a given composite system if such complete analysis were the primary goal and sufficient resources were brought to bear. Moreover, the consistency of the statistical patterns of decomposition found, for example, in particle scattering experiments reinforces the notion of the type of composite system as one composed of a set of particles of specific types from among all known types of particle, just as the type *table* is one with essential parts: an upper surface and legs that together make it up. Any non-elementary particle can be decomposed to a collection of elementary particles, just as any table can be decomposed, at least, into its upper surface and its legs, when these are considered as the elementary parts of a table from the structural perspective, and the properties of the legs must be such that together they properly support the upper surface. It is typically impossible in the quantum context to use space-time position or trajectory for the purposes of individuation as can be done with classical systems. Nonetheless, because particles cause individual detection events in the measuring instruments with which they interact, one event per particle can be observed—as noted by Feynman in the quotation given in Sect. 2—when measurements are appropriately carried out.
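The sum-rule bookkeeping just described can be sketched schematically. The following toy accounting (a hypothetical illustration added here, tracking only electric charge, baryon number, and lepton number) checks whether conserved quantum numbers balance across a decomposition, using neutron beta decay as the example:

```python
# Toy bookkeeping of conserved quantum numbers across a decomposition:
# Q = electric charge, B = baryon number, L = lepton number.
PARTICLES = {
    'n':       {'Q':  0, 'B': 1, 'L':  0},   # neutron
    'p':       {'Q': +1, 'B': 1, 'L':  0},   # proton
    'e-':      {'Q': -1, 'B': 0, 'L': +1},   # electron
    'nubar_e': {'Q':  0, 'B': 0, 'L': -1},   # electron antineutrino
}

def totals(names):
    """Sum each conserved quantum number over a collection of particles."""
    return {k: sum(PARTICLES[n][k] for n in names) for k in ('Q', 'B', 'L')}

def sum_rules_hold(before, after):
    """True when every tracked quantum number is conserved."""
    return totals(before) == totals(after)

print(sum_rules_hold(['n'], ['p', 'e-', 'nubar_e']))  # True: beta decay balances
print(sum_rules_hold(['n'], ['p', 'e-']))             # False: lepton number violated
```

Counting particle tokens and their conserved properties before and after a decomposition, in just this fashion (though over the full set of conserved quantities), is what permits their identification as tokens of Wigner types.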

The quantum systems conventionally circumscribable by the formalism of quantum mechanics do not automatically correspond to the ontological structure of compound objects *qua* composites. Quantum theory associates to each system a Hilbert space and a state, which is given either as a ray (or state-vector) within that space or as a statistical operator acting on it. If the Hilbert space can be factored, then one can consider rays and statistical operators on the factor spaces as states of subsystems, because composite systems are described by states in the product space: Any countable number of physical systems with Hilbert spaces \(\mathcal {H}_i\) can be composed and formally considered subsystems of a larger compound system associated with the tensor-product Hilbert space \(\mathcal {H}=\otimes _{i=1}^n\mathcal {H}_i\). This allows for a consistent and practically successful portrayal of a multi-particle system as a larger, individual entity. The resulting joint system is defined by the quantum numbers provided by the requirements of quantum theory; it is the set of states of this system which is relevant to the state count. However, not every such factoring corresponds to a natural kind, such as an electron or photon, or provides a non-statistical, individual system description.
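Whether a pure state of a two-part system factors across a given tensor-product decomposition can be tested numerically. The following sketch (an added illustration, not the paper's own method) computes the Schmidt rank of a state vector, which equals 1 exactly when the state is a product of subsystem states:

```python
import numpy as np

def schmidt_rank(psi, dA, dB, tol=1e-12):
    """Schmidt rank of a pure state on H_A (x) H_B with dims dA, dB.

    Reshape the state vector into a dA x dB coefficient matrix and count
    its nonzero singular values: rank 1 -> product state, rank > 1 -> entangled.
    """
    s = np.linalg.svd(np.asarray(psi).reshape(dA, dB), compute_uv=False)
    return int(np.sum(s > tol))

# |0> (x) |+> : separable across the 2 x 2 factoring.
product = np.kron([1, 0], [1 / np.sqrt(2), 1 / np.sqrt(2)])
# The Bell singlet: pure on the joint space, but not a product of factor states.
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

print(schmidt_rank(product, 2, 2))  # 1
print(schmidt_rank(singlet, 2, 2))  # 2
```

This makes concrete the point that the tensor-product structure alone does not settle composition: every vector lives in the product space, but only those of Schmidt rank 1 assign pure states to the factors.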

When individual systems are described in the formalism by state-vectors, the quantum mechanical description of an individual object is sufficient: when a system is attributed a state-vector \(|a, b, c,\ldots\rangle\), it is certain that if all members of the complete set of observables \(A,B,C,\ldots\), for which \(a,b,c,\ldots\) are the corresponding eigenvalues, are measured precisely and in immediate succession, the outcomes of these measurements on the system will exactly match the values \(a,b,c,\ldots\), these corresponding to elements of reality in the sense explicitly given by EPR [21]. One finds, however, that problems can arise in identifying individual objects when the statistical requirements on complexes of quantum systems are enforced. For example, apparent conflicts can arise between the quantum state exchange-symmetry requirements and some forms of the basic logical principle of the identity of indiscernibles (PII). According to the simplest form of the PII, any entities not differing in their properties are to be considered *one and the same* entity. However, a joint system of two identical quantum particles could then be mis-identified as just a single one, having the size of the ostensible components [22].

Under the strictest form of the PII, in which only non-relational properties are considered relevant, if the Bose–Einstein permutation-invariance requirements for multiple-boson systems such as photons are enforced, as they must be, two putative entities (systems S\(_1\) and S\(_2\)) under consideration may have not only identical fixed properties but also identical *dynamical* states. According to the PII in this form, these must then be one and the same system, which contradicts the assumption that there are two physical systems under consideration.

Consider next a pair of electrons in the joint spin-singlet state \(|\Psi ^-\rangle \): the state of each electron taken singly corresponds to *no* vector in the associated Hilbert space. The single-electron descriptions given by the reduced state for each as a subsystem—the only description available to each in quantum mechanics [23]—are identical in such cases: the z-spin value is entirely indefinite for each, despite the apparent applicability of the exclusion principle, by virtue of which one would consider the electrons differentiated and strictly anti-correlated. Again, under the strictest form of the PII, in the absence of any further restriction on entities other than being conventional quantum systems, the two would be understood as only one, inconsistently with the intuitive understanding that *two* electrons are present, as would be suggested on the basis of measurements of joint mass, charge, spin, etc.

Such difficulties might be avoided in at least three ways: (1) by adopting a weaker form of the PII, (2) by restricting the set of systems to be considered proper objects, and/or (3) by restricting the choice of Hilbert space factoring used to provide the systems to be considered. The PII in forms weakened so that relational properties are included allows theoretical inconsistency and contradiction with experiment to be avoided for a system in the joint spin state \(|\Psi ^-\rangle \), for example, by reference to the exclusion principle, which provides the components with differing properties of relative z-spin, or by directly noting the strict z-spin anti-correlation in any shared spin orientation. However, it is not clear that this helps in the case of our bosonic example.

A more specifically quantum mechanical approach, not requiring a weakened form of the PII, is to invoke an individuation principle, namely, the quantum principle of individuation (QPI): a system is an individual if and only if its state is entirely specifiable by a ray in the Hilbert space associated with it [22]. The QPI precludes any paradox in the second of the above examples because the conventionally considered quantum subsystems cannot be considered individual *objects* under it: the reduced states attributed to them are mixed states and so are capable only of describing the statistics of the subsystems, because the subsystems have become joined by quantum entanglement into a larger whole; if one seeks parts of this composite system, it must be physically analyzed. (As shown below, this approach is also effective for the bosonic example.)

An interpretation of the quantum mechanical formalism adopting the QPI is an instance of what Maudlin has called the “ray view”: the view of quantum states in which “a single particle is represented by a ray in the associated Hilbert space.” The opposing “statistical operator view” is that in which quantum states of individual systems are allowed to be specified by statistical operators, including those with nonzero impurity [24]. Maudlin has pointed out that state reduction fails on the statistical operator view because the singlet state does not supervene on the states of the subsystems, which are given by the statistical operators obtained by tracing over the degrees of freedom of the other subsystem—in the above example, both being the fully mixed state proportional to the identity ([24], p. 54): a number of distinct joint-system states, for example all the Bell states, are compatible with these subsystem states, not *only* the singlet state \(|\Psi ^-\rangle \).
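Maudlin's point that the joint state does not supervene on the subsystem states can be checked directly. The sketch below (an illustration added here, assuming the standard two-qubit representation of the Bell states, not a construction from the original text) computes the reduced state of the first subsystem by partial trace for each of the four Bell states and confirms that all four yield the same fully mixed state \(I/2\).

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def reduced_state(psi):
    """Partial trace over the second qubit of a two-qubit pure state."""
    m = psi.reshape(2, 2)   # coefficient matrix c_ij of the |i>|j> expansion
    return m @ m.conj().T   # rho_1 = Tr_2 |psi><psi|

bell_states = {
    "Psi-": (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2),
    "Psi+": (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2),
    "Phi-": (np.kron(up, up) - np.kron(down, down)) / np.sqrt(2),
    "Phi+": (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2),
}

# Every Bell state yields the same fully mixed reduced state I/2, so the
# joint state cannot be recovered from the subsystem descriptions alone.
for name, psi in bell_states.items():
    print(name, np.allclose(reduced_state(psi), np.eye(2) / 2))  # all True
```

Since four mutually orthogonal joint states are compatible with the same pair of reduced states, the subsystem descriptions underdetermine the whole, exactly as stated above.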

In descriptions of the structure, and explanations of the behavior, of quantum systems, one indeed generally finds that there is no hierarchical relation between the subsystem states and those of the related composite systems when bound, or even after the subsystems have interacted and become distanced from each other, unless approximations are used ([3], Ch. 4). It then appears, because identical particles may be involved, that the strong hierarchical picture of matter as analogous to collections of distinct building blocks might not reach all the way down to the level of ‘elementary particles’: entanglement might preclude a strong reduction because an entangled system will not have a reducible physical state, as mentioned at the outset of this article. However, the components within such systems can still be distinguished through an active analysis, because such an analysis can be performed in such a way as to lead to a disentangled joint state. This can be done, in particular, through an appropriate set of measurements. By continuing a process of measurement analysis, non-elementary physical systems can be seen as composed of distinct, individual parts, the smallest being particles in the modern sense discussed above. When appropriate precise measurements are made of a large enough collection of mutually commuting observables of the formally identifiable subsystems of a compound system, the results distinguish any two of them: they will be distinguished by at least one difference of measured value, and the Hilbert subspaces associated with the observables will correspond to these differing values, as is shown below for our two examples.

One may yet question the adequacy of Wigner’s method for identifying particles as regards the *elementarity* of particles that would be reached through ontological analysis: He and T. D. Newton addressed this aspect as follows.

“[T]wo conditions seem to play the most important role in the concept of an elementary particle. The first one is that its states shall form an elementary system in the sense [of being initial and final states of collision phenomena]. The second is less clear cut: it is that it should not be useful to consider the particle as a union of other particles.” [13]

The *utility* of the consideration of the system as a union, as invoked there, is a less than desirable criterion because it is a pragmatic rather than fundamental one. However, the condition is readily replaced by the condition that “it cannot be conceived as a union of other particles” (with the caveat “given the current state of physical knowledge”).

Now, consider the analysis of the two examples discussed above. First, consider the second example, that of the electron pair described by \(|\Psi ^-\rangle \) of Eq. 7. The two electrons in this spin-singlet state can be made to counter-propagate along the z-axis, because of the anti-correlation of their spin orientations, by using a Stern–Gerlach type apparatus. (Note that, even when they are initially motionless in the lab frame, the apparatus can be moved relative to them.) The electrons are then measured precisely for z-spin via the direction of induced propagation along the z-axis, with which the apparatus perfectly correlates it, the z-direction being defined by the apparatus magnetic-field gradient; when detected, one electron will be found on one side of the origin (as defined by their counter-propagation along the z-axis) and the other on the opposite side. Upon such joint detection, the system will enter a pure product state of spin eigenstates, either \(|\uparrow \rangle _\mathrm{+z}|\downarrow \rangle _\mathrm{-z}\) or \(|\downarrow \rangle _\mathrm{+z}|\uparrow \rangle _\mathrm{-z}\), and so, according to the QPI, be analyzed into two individual component objects, one above the origin and one below it. The momentum characteristics of the joint system continue to be consistently accounted for after this measurement, in accordance with the constraints provided by fundamental conservation laws. What is needed to carry out such an analysis is thus only a set of precise measurements of the state in the Hilbert space corresponding to the elementary particles.
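The measurement analysis just described can be summarized in standard notation (the singlet is written here in its usual z-basis form, assumed to agree with Eq. 7 of the original):

```latex
% Spin-singlet state in the z-basis defined by the Stern--Gerlach
% magnetic-field gradient:
\[
  |\Psi^-\rangle \;=\; \tfrac{1}{\sqrt{2}}\left(
    |{\uparrow}\rangle_{+z}\,|{\downarrow}\rangle_{-z}
    \;-\; |{\downarrow}\rangle_{+z}\,|{\uparrow}\rangle_{-z}\right).
\]
% Upon joint detection the system enters one of the two product states,
% each with probability 1/2:
\[
  |\Psi^-\rangle \;\longrightarrow\;
  |{\uparrow}\rangle_{+z}\,|{\downarrow}\rangle_{-z}
  \quad\text{or}\quad
  |{\downarrow}\rangle_{+z}\,|{\uparrow}\rangle_{-z}.
\]
```

After either outcome, each electron is attributed a ray in its own factor space and so counts as an individual under the QPI.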

A similar process can distinguish a pair of photons as parts of a biphoton, such as that considered in the first example above. One can have the mode containing the full system energy oriented along the +x-direction, so that the system enters a countable sequence (of length \(m\)) of beam-splitter-plus-detector-pair suites from an array laid out in an equally spaced \(n\times n\) grid in the x-y plane, among which it executes a “random walk” (so that \(n\) is generally less than \(m\), but may be equal if all the energy is transmitted directly through all but the last beam splitter). *After* a sequence of deflection or transmission events in which the total system energy remains *undivided*, the full system finally arrives at a beam splitter at which the two photons become distinguished as the total system energy *is* divided, one photon being reflected and the other transmitted. Upon such a joint detection, the system state can be written \(| 1\rangle _\mathrm{x} |1\rangle _\mathrm{y}\), where the subspace index value (x or y) corresponds to the direction of the orthogonal spatial mode and \(1\) indicates mode occupation.
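The statistics of such a walk can be sketched with a simple Monte Carlo model. The model below is an illustrative assumption added here, not drawn from the original text: each suite is taken to contain an ideal beam splitter of transmittance \(T\), for which a two-photon state entering one port stays undivided (both photons transmitted or both reflected) with probability \(T^2 + (1-T)^2\) and splits with probability \(2T(1-T)\), i.e. 1/2 for a 50:50 splitter.

```python
import random

def steps_until_split(transmittance=0.5, rng=random.random):
    """Walk a biphoton through beam splitters until its energy divides.

    At each splitter the undivided two-photon state either stays together
    (both transmitted, probability T^2; both reflected, probability
    (1 - T)^2) or splits into one transmitted and one reflected photon
    (probability 2T(1 - T)), at which point two photons are distinguished.
    Returns the number of splitters traversed, including the final one.
    """
    t = transmittance
    p_split = 2 * t * (1 - t)
    steps = 1
    while rng() >= p_split:   # energy stayed undivided; continue the walk
        steps += 1
    return steps

random.seed(0)
trials = [steps_until_split() for _ in range(100_000)]
mean = sum(trials) / len(trials)
print(round(mean, 2))
```

Under these assumptions the number of suites traversed before the energy divides is geometrically distributed, with mean \(1/p = 2\) for ideal 50:50 splitters, so a walk of modest length \(m\) suffices in the typical case.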

Such analyses thus provide a basis for the notion of a quantum object as composite, that is, as having parts. Interesting open questions remain, however, particularly those arising in the context of the quark model. As mentioned in Sect. 3, protons and neutrons can be understood as arising from various arrangements and re-arrangements of properties, consistently with the quark model, wherein they are Dirac field particles associated with the irreducible representations of the special unitary symmetry group SU(3). In the approach outlined here, quarks themselves might be viewed as particles because they behave in accordance with that group structure and their re-arrangements obey conservation laws and sum rules for isospin, among others. But they can be considered so only in a somewhat more limited sense, because quarks are never observed as free individuals: since the quarks described by quantum chromodynamics are “free” only asymptotically, they cannot be individuated by *direct* measurement. Nonetheless, quarks are often considered particles in that they scatter other isolated particles much as a point-like system would (cf. [1], Sect. 6.5.2). These considerations suggest that quarks should be considered particles whose individuation might proceed by other means; that possibility may be taken up in a future publication.

## 5 Conclusion

A conception of the composition of physical objects that replaces traditional notions of component part with a specifically quantum mechanical one has been considered here. It is based on the reduction of quantum systems to sets of parts, to the greatest extent possible, in a specific, modern sense given by the Wigner classification. A condition of elementarity that improves upon the one given by Newton and Wigner, and that enables this reduction, is provided. The parts of composite systems are identifiable in principle, through a specifically quantum notion of individuation, by appropriate precise quantum measurements in any individual instance.

## Notes

### Acknowledgments

This research was supported by the DARPA QUINESS program through U.S. Army Research Office Award No. W31P4Q-12-1-0015. I would also like to thank Brigitte Falkenburg for helpful comments.

### References

- 1. Falkenburg, B.: Particle Metaphysics. Springer, Heidelberg (2007)
- 2. Wigner, E.P.: On unitary representations of the inhomogeneous Lorentz group. Ann. Math. **40**, 149 (1939)
- 3. Jaeger, G.: Quantum Objects. Springer, Berlin (2014)
- 4. Bell, J.S.: On the Einstein–Podolsky–Rosen paradox. Physics **1**, 195 (1964)
- 5. Clauser, J.F., Horne, M.A., Shimony, A., Holt, R.A.: Proposed experiment to test local hidden-variable theories. Phys. Rev. Lett. **23**, 880 (1969)
- 6. Einstein, A.: Physics, philosophy, and scientific progress. Speech to the International Congress of Surgeons, Cleveland, Ohio; reprinted in Phys. Today **58**(6), 46 (2005)
- 7. Stachel, J.: Einstein and the quantum: fifty years of struggle. In: Colodny, R.G. (ed.) From Quarks to Quasars, p. 349. University of Pittsburgh Press, Pittsburgh (1986)
- 8. Reichenbach, H.: Philosophic Foundations of Quantum Mechanics. University of California Press, Berkeley (1944)
- 9. Feynman, R.P.: QED. Princeton University Press, Princeton (1985)
- 10. Miller, A.J., Nam, S.W., Martinis, J.M., Sergienko, A.V.: Demonstration of a low-noise near-infrared photon counter with multiphoton discrimination. Appl. Phys. Lett. **83**, 791 (2003)
- 11. Clifton, R.: The subtleties of entanglement and its role in quantum information theory. Phil. Sci. **69**, S150 (2002)
- 12. Jaeger, G.: Potentiality and causation. AIP Conf. Proc. **1424**, 387 (2012)
- 13. Newton, T.D., Wigner, E.P.: Localized states for elementary systems. Rev. Mod. Phys. **21**, 400 (1949)
- 14. Jaeger, G.: What in the (quantum) world is macroscopic? Am. J. Phys. (in press)
- 15. Wigner, E.P.: The subject of our discussions. In: Foundations of Quantum Mechanics: Proceedings of the International School of Physics “Enrico Fermi”, p. 7. Academic Press, London (1971)
- 16. Cushing, J.T.: Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony. The University of Chicago Press, Chicago (1994)
- 17. Birkhoff, G., von Neumann, J.: The logic of quantum mechanics. Ann. Math. **37**, 823 (1936)
- 18. Heisenberg, W.: Physics and Philosophy. Harper and Row, New York (1958)
- 19. Busch, P., Jaeger, G.: Unsharp quantum reality. Found. Phys. **40**, 1341 (2010)
- 20. Suppes, P.: A Probabilistic Theory of Causality. North-Holland Publishing Company, Amsterdam (1970)
- 21. Einstein, A., Podolsky, B., Rosen, N.: Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. **47**, 777 (1935)
- 22. Jaeger, G.: Individuation in quantum mechanics and space-time. Found. Phys. **40**, 1396 (2010)
- 23. Landau, L.: Das Dämpfungsproblem in der Wellenmechanik. Z. Phys. **45**, 430 (1927)
- 24. Maudlin, T.: Part and whole in quantum mechanics. In: Castellani, E. (ed.) Interpreting Bodies, Ch. 3. Princeton University Press, Princeton (1998)