1 Introduction

Since its inception in 2008, the LHC has turned up no experimental signatures of beyond-the-Standard-Model (BSM) physics, despite strong theoretical expectations, based on the so-called Higgs naturalness principle, that it would. No evidence for supersymmetry, large extra dimensions, or any of the other leading candidate theories of BSM physics has been discovered. In light of this unsuccessful prediction, it is important to revisit the rationales that were offered in favor of this principle to gain a clearer sense of why these expectations were not met. Ultimately, I argue, the failure of these rationales lies with the naturalness principle itself, and with the particular understanding of high-energy effective field theories (EFTs), and of the foundations of quantum field theory (QFT), in which it is rooted.

The earliest and most widely cited formulation of the Higgs naturalness principle rests on the observation that, according to the Standard Model in a cutoff-based regularization scheme, the Higgs pole mass \(m_p^2\) (also known as the physical Higgs mass) is, to leading order, the sum of a contribution from the bare Higgs mass \(m_{0}^2\) and quantum corrections \(\delta m^2\) that are quadratic in the cutoff regulator \(\Lambda\):

$$\begin{aligned} m^{2}_{p}&= m_{0}^{2} + \delta m^{2} \nonumber \\&= m_{0}^{2} - \frac{y_{t,0}^{2}}{8 \pi ^{2}} \Lambda ^2+ ... \nonumber \\&= \Lambda _{SM}^{2}\left( \tilde{m}_{0}^{2} - \frac{y_{t,0}^{2}}{8 \pi ^{2}}\right) + \ldots ; \end{aligned}$$
(1)

where \(\tilde{m}_{0} \equiv \frac{m_0}{\Lambda }\) is the dimensionless bare mass parameter. It is widely assumed in such arguments that the scale parameter \(\Lambda\), the upper limit of modes contained in perturbative loop integrals and path integrals, should be set to the physical scale \(\Lambda _{SM}\) associated with the Standard Model’s empirical cutoff—that is, with the scale above which the Standard Model ceases to yield accurate empirical predictions. A problem is supposed to arise when \(m^{2}_{p}<< \Lambda _{SM}^{2}\). Depending on the formulation, the problem is supposed to be that in this case, recovering the measured value of the Higgs pole mass \(m^{2}_{p}\) requires a very delicate and “improbable” cancellation between the bare Higgs mass and the quantum corrections, or alternatively that the physical Higgs mass is very delicately sensitive to small changes in the dimensionless bare mass and coupling.

In a conventional quantum field theory course, students are taught that one’s choice of regulator—e.g., hard cutoff, lattice spacing, dimensional regulator, Pauli-Villars regulator—is a matter of mathematical convention, chosen for purposes of calculational convenience and lacking physical import; the physical values and experimental predictions calculated from a quantum field theory should be independent of regularization scheme. Because the relation (1) manifestly depends on a particular cutoff-based regularization scheme, there arose the predictable criticism that the fine tuned sensitivities and cancellations were tied to a particular mathematical convention associated with a certain choice of regulator, and so lacked the physical significance that had been attributed to them.

Although regularization scheme invariance continues to be a staple of quantum field theory both in practice and in pedagogy, naturalness-based arguments were widely adopted as a guide to model-building and model selection in the description of BSM physics. Ostensibly, this can be explained by virtue of two distinct, and on their face plausible, responses to the worry about regulator dependence.

One response has been to circumvent the concern about regulator dependence by noting the need for similar quadratically large corrections to the Higgs mass even in the context of certain renormalized, and therefore regulator-independent, parametrizations of the Standard Model. This response is consistent with the traditional view that physical quantities in quantum field theory should be regularization scheme invariant, that the choice of regulator is a matter of convention, and that bare parameters possess the same conventionality as their associated regularization scheme. However, if quadratically large corrections also generate a need for similar delicate cancellations and sensitivities in the context of regulator-independent parametrizations, then the initial criticism of the naturalness argument based on regulator dependence has simply failed to address the real problem, which is presumably captured by “renormalized” formulations of the naturalness argument, or so the thinking goes.

A second response to the initial worry about regulator dependence is to question the validity of the assumption that the physical content of a quantum field theory must be regulator-independent, on the grounds that the existence of a physical cutoff necessarily entails that each EFT possesses one true physically preferred cutoff regulator and parametrization, as one widespread way of understanding quantum field theories as effective field theories (EFTs) suggests. From this perspective, dependence of the original “bare” naturalness argument on a specific cutoff-based regularization is not a problem at all, but a virtue.

Here, I argue that both responses to the initial worry are flawed: the initial criticism of the naturalness argument was on the right track in its concerns about scheme dependence, but in need of generalization to other formulations of the naturalness principle, and of grounding within a more clearly articulated perspective on the foundations of quantum field theory.

Against the first response, which circumvents the worry about regulator dependence by reformulating the problem in the context of a renormalized, regulator-independent parametrization, I argue that it simply trades dependence on a particular regularization scheme for dependence on a particular renormalization scheme. Just as every first-year student of quantum field theory is taught that the physical content of a quantum field theory must be regularization scheme invariant, so he/she is also taught that the physical content of a quantum field theory must be renormalization scheme invariant. As argued in [14], and as explained again below, the quadratically large matching corrections on which the “renormalized” formulation of the Higgs naturalness argument rests are ultimately themselves an artifact of a particular renormalization scheme, and vanish upon the transition to an alternative, physically equivalent renormalization scheme. Just as the bare naturalness argument runs afoul of regularization scheme invariance by attributing special status to one particular regularization scheme, so the renormalized formulation runs afoul of renormalization scheme invariance by requiring that a particular renormalization scheme (specifically, the \(\overline{MS}\) scheme) be given special “physical” status.

The second response dismisses worries about regulator dependence on the grounds that nature singles out one particular regulator, associated with the physical cutoff scale, as the true physically correct regulator, and the associated bare parameters of (1) as the unique physically correct, fundamental parameters of the Standard Model. This response abandons the principle of regularization scheme invariance on the basis of purely metaphysical speculation, and in spite of the fact that the Standard Model generates exactly the same set of predictions irrespective of the regulator that is employed. The assumption of a physically preferred parametrization, motivated by a highly literal interpretation of analogies with condensed matter field theory, constitutes a theoretical idle wheel in generating the confirmed predictions of all empirically successful EFTs in high energy physics; it can be removed without sacrificing the empirical adequacy of the theory, the mathematical well-definedness that comes with finite-cutoff parametrizations, or the possibility of giving a realist (as opposed to positivist, empiricist, instrumentalist, or operationalist) physical interpretation of the theory.

The first central claim of this article, based on the considerations of the previous two paragraphs, is that naturalness-based reasoning, on any formulation of the naturalness principle relevant to the prediction of BSM signatures at the LHC, is at odds with the combined principles of regularization and renormalization scheme invariance. That is to say, it is only by suspending either the principle of regularization or renormalization scheme invariance and denying the physical equivalence of parametrizations associated with different schemes that the naturalness argument gains any purchase. There does not appear to exist any formulation of the Higgs naturalness principle that does not depend on a specific choice of regularization or renormalization scheme—that is, in which the offending cancellations and sensitivities relate purely scheme-independent quantities. Moreover, there exist choices of scheme in which the offending cancellations or sensitivities are absent.

A second core claim is that the assumption of a physically preferred set of “fundamental parameters” or “physical parameters,” associated with some particular regularization or renormalization scheme of an EFT, constitutes a theoretical idle wheel in the sense that it plays no essential role in generating the theory’s successful predictions. By way of illustrating this second thesis, I argue that there is a salient analogy between the assumption that there exists a physically preferred set of bare “fundamental parameters” or renormalized “physical parameters” and the central assumption of the (long debunked) nineteenth century ether theory of light—namely, that light propagates in a physical medium known as the “ether” that establishes a physically preferred notion of rest. In both cases, an invariance that is unfailingly respected by experimental results—in the case of the ether, invariance of the speed of light and the form of dynamical laws under change of inertial frame, and in the case of quantum field theory, invariance of measured values for observables under changes of regularization and renormalization scheme—is nevertheless abandoned for the purpose of supporting a particular ontological framework that is incompatible with that invariance. Like the assumption of a preferred reference frame in the context of ether theory, which was later understood to play no essential role in generating the successful empirical predictions of Maxwell’s electromagnetic theory of light, the assumption of a preferred set of “fundamental parameters” or “physical parameters” in an EFT constitutes a superfluous assumption that plays no essential role in generating the EFT’s successful, scheme-independent predictions about high-energy scattering experiments. Much as it was recognized that one should relinquish the assumption of a physically preferred reference frame and pay due respect to the equivalence of inertial reference frames suggested by experimental results, so I argue that in our understanding of EFTs one should relinquish the assumption of a physically preferred parametrization and pay due respect to the principles of regularization and renormalization scheme invariance, particularly as these principles have been unfailingly respected by empirical data drawn from high-energy scattering experiments.

A third related claim is that abandoning the assumption of a physically preferred parametrization motivates an understanding of EFTs in high-energy physics that deviates from certain common dogmas about the mathematical definition and physical interpretation of EFTs in high-energy physics. Relative to a certain way of understanding EFTs in high-energy physics, abandoning the assumption of a physically preferred regulator and parametrization induces a number of significant shifts in the interpretation of certain formal elements of QFT. In particular, it implies a substantially different interpretation of the cutoff parameter \(\Lambda\), Wilsonian renormalization group (RG) transformations, and the QFT path integral. While a certain dogma about EFT associates the cutoff parameter \(\Lambda\) with the empirical cutoff scale \(M_{phys}\) above which an EFT ceases to generate accurate predictions, I argue that this association rests on a false conflation of two sharply distinct notions of cutoff, and that all values of \(\Lambda\) for which the Wilsonian renormalization group (RG) flow of an EFT is defined are equally consistent with any given value of \(M_{phys}\) (as long as \(M_{phys}\) lies below the UV scale at which the RG flow ceases to be mathematically defined, such as a Landau pole).

Relatedly, while it is common to understand different points along a given Wilsonian RG trajectory as distinct EFTs possessing different cutoffs, with low-cutoff EFTs understood as coarse grainings of higher-cutoff EFTs, I argue that points along a Wilsonian RG trajectory are more appropriately understood as different, physically equivalent parametrizations of one and the same set of physical amplitudes, associated with just a single EFT. Thus, rather than taking the cutoff \(\Lambda\) to infinity, as continuum approaches to QFT attempt to do, or setting \(\Lambda\) to a single finite, empirically determined value, as one approach to EFT suggests, the approach defended here is to “quotient over” different finite values of \(\Lambda\) by positing the physical equivalence of parametrizations that lie along the same Wilsonian RG trajectory (which are associated with different \(\Lambda\)). This view of the Wilsonian RG more closely resembles the particular version of the RG that describes the running of renormalized parameters with the renormalization scale \(\mu\) in some fixed renormalization scheme, where it is common to understand the renormalization scale as an unphysical reference scale rather than as a physical cutoff. On the view advocated here, it is a whole RG trajectory, rather than any single point on such a trajectory, that is associated with a given EFT. Corresponding to this alternative view of the Wilsonian RG, which is closely connected with the path integral, there emerges a distinct interpretation of the path integral itself, as discussed further below.

Different attempts to give a realist (as opposed to positivist or empiricist) interpretation of quantum field theory differ as to which elements of the mathematical formalism they take as matters of physical fact and which as matters of mere representational convention. Examples of quantities that belong uncontroversially to the first category are pole masses and scattering cross sections, while an example that belongs uncontroversially to the second is one’s choice of gauge. However, there appears to be no clear consensus regarding which of these two categories the Lagrangian parameters of a QFT belong to. Adherence to the principles of regularization and renormalization scheme invariance entails that the values of the parameters themselves are merely conventional, since these values are only defined once such a scheme is adopted. On the other hand, adopting the view of EFTs according to which there exists a physically preferred parametrization, associated either with some set of bare “fundamental parameters” or renormalized “physical parameters,” would suggest that there is a physical matter of fact about what the “real” parameters of the theory are, and that this true parametrization can in principle be determined experimentally. The view of EFTs described here abandons the assumption that there is any such matter of fact, on the grounds that the choice of parametrization among different schemes makes no difference to the confirmed predictions of the theory.

The analysis provided here serves to synthesize and expand upon the analysis of two previous articles, [23] and [14]. The first of these argues that the cancellations and sensitivities associated with bare formulations of the Higgs naturalness principle are formal artifacts associated with an arbitrary choice of bare parametrization, while the second argues that the cancellations and sensitivities arising in renormalized formulations are formal artifacts associated with an arbitrary choice of renormalization scheme. The present discussion synthesizes and expands these two theses by arguing that these claims of conventionality—and specifically scheme dependence—generalize to all formulations of the naturalness principle relevant to the prediction of BSM signatures at the LHC. It offers a more detailed argument to the effect that the assumption of a preferred parametrization is not needed to support an understanding of quantum field theories as effective field theories, and to the effect that the assumption of a preferred parametrization should be abandoned.

The discussion is outlined as follows. Section 2 makes the argument that all formulations of the naturalness principle relevant to the prediction of BSM signatures at the LHC rest on the assumption of a physically preferred parametrization and so are in tension with the combined principles of regularization and renormalization scheme independence. Section 3 argues that the assumption of a preferred parametrization constitutes a theoretical idle wheel that should be abandoned, and highlights several important features of the understanding of EFTs that emerges when this assumption, along with several related dogmas about effective field theory, are abandoned. Section 4 is the Conclusion.

2 Scheme Dependence Across Formulations of the Higgs Naturalness Principle

The core claim defended in this section is that the Higgs naturalness principle, on any formulation relevant to the prediction of BSM signatures at the LHC, is fundamentally at odds with the combined principles of regularization and renormalization scheme invariance.

Given its many formulations, the naturalness principle can seem amorphous and ill-defined.Footnote 1 On the other hand, the naturalness principle was sufficiently well-defined that it served to generate a concrete, testable physical prediction—namely, the observation of BSM signatures at the LHC. While there are many distinct concepts associated with the naturalness principle, only a certain subset of these is relevant to generating this prediction; the others bear only a superficial or peripheral connection to the line of argument that produced it.

I argue that the prediction of BSM signatures at the LHC may be grounded in any one of several distinct formulations of the prohibition against fine tuning, and that these prohibitions in turn are motivated by an understanding of EFTs that presupposes the existence of a physically preferred parametrization for the Standard Model, and thereby a suspension either of the principle of regularization or renormalization scheme invariance. I emphasize that several other notions commonly associated with the Higgs naturalness principle, including “absolute naturalness” and the Decoupling Theorem, are peripheral to the chain of reasoning that generates this prediction, and bear only a surface similarity to the formulations of the principle that have been most relevant to producing the expectation of new physics at the TeV scale.

2.1 Naturalness as a Prohibition Against Fine Tuning

I argue here, as many others have, that the naturalness argument comes down to a prohibition against fine tuning. I delineate four distinct ways of understanding what this prohibition amounts to; any of these formulations is sufficient on its own terms to justify the prediction of BSM signatures at the LHC.

One important distinction between different formulations of the fine tuning argument is between “probabilistic” fine tuning and “sensitivity-based” fine tuning.Footnote 2 The first prohibits “improbable” delicate cancellations between a QFT’s “fundamental parameters” and quantum corrections in the calculation of observable quantities such as the physical Higgs mass, also known as the pole mass. The second prohibits delicate numerical sensitivity of observables such as the Higgs pole mass to small changes in the values of the fundamental parameters.

A second relevant distinction concerns the difference between “bare” and “renormalized” fine tuning. The first involves delicate cancellations between bare parameters and quantum corrections in the calculation of the Higgs pole mass in a cutoff regularization scheme. The second involves delicate cancellations between a renormalized \(\overline{MS}\) mass parameter of a light scalar field (such as the Higgs) and matching corrections that are proportional to the squared mass of a much heavier field to which the light scalar is coupled.

Both the bare and renormalized formulations can be understood either in terms of probabilistic or sensitivity-based fine tuning. On the assumption that the parametrization in which the delicate cancellations arise constitutes the physically preferred or fundamental parametrization of the theory in question, any of the resulting four formulations predicts a naturalness problem for values of the cutoff in the upper range of scales probed at the LHC.

2.2 The “Bare” Fine Tuning Argument

The original formulation of the naturalness argument in [28] was a “bare,” “sensitivity-based” fine tuning argument. However, it is the “bare,” “probabilistic” formulation that has become perhaps the most prevalent formulation of the naturalness principle. I first review the probabilistic and then the sensitivity-based formulation of bare fine tuning, and discuss the connection between them.

2.2.1 Probabilistic Formulation

The fine tuning argument assumes that the bare parameters and cutoff parameter \(\Lambda = \Lambda _{SM}\) in the bare fine tuning relation (1) are independently specified parameters of the theory, and that they constitute the unique, physically correct set of parameters of the theory; all other parametrizations are less faithful representations of the physics encoded in these “fundamental parameters.” From this perspective, it seems highly unlikely \(\textit{a priori}\) that two quantities that have nothing to do with each other, \(m_{0}^{2}\) and \(\frac{y_{t,0}^{2}}{8 \pi ^{2}} \Lambda _{SM}^2\), would agree so precisely in their numerical values. In the words of ’t Hooft, who was the first to explicitly inject notions of probability into the discussion of Higgs fine tuning, “it is unlikely that the microscopic equations contain various free parameters that are carefully adjusted by Nature to give cancelling effects such that the macroscopic systems have some special properties” [15]. Even the best-case scenario allowed by the LHC, consisting of an agreement between these terms to one part in \(10^4\) (corresponding to a cutoff \(\Lambda _{SM}\) of order 10 TeV), appears contrived and conspicuously fine tuned. Such a mysterious, unlikely agreement “cries out for explanation” by some theory beyond the Standard Model, and should constitute a primary guide in the search for such a theory—or so the argument goes.

The notion that certain combinations of bare parameters are “unlikely” implicitly assumes a notion of which parameters are more or less likely, and therefore seems to presuppose a probability distribution over the bare parameter space, from which the true parameters have been sampled in some sense. Of course, this formulation raises the difficult question of what could justify the choice of any particular probability distribution over the bare parameter space, and has been the source of much skepticism about the naturalness principle; see, e.g., [16].

2.2.2 Sensitivity-Based Formulation

The problem with the relation (1) is sometimes characterized as a problem of sensitivity of observables such as the Higgs pole mass to small adjustments in the values of high-energy parameters such as the Standard Model bare parameters. This characterization can be found in Susskind’s original work; referring to the quadratic divergences in the quantum corrections to the Higgs mass, he writes, “These divergences violate a concept of naturalness which requires the observable properties of a theory to be stable against minute variations of the fundamental parameters” [28]. Susskind notes that if one divides both sides of the relation (1) by \(\Lambda _{SM}^2\), one has the dimensionless relation

$$\begin{aligned} \frac{m^{2}_{p}}{\Lambda _{SM}^2} = \tilde{m}_{0}^{2} - \frac{y_{t,0}^{2}}{8 \pi ^{2}}. \end{aligned}$$
(2)

Susskind assumes the physical cutoff of the Standard Model, \(\Lambda _{SM}\), to be the Planck scale, which is of order \(10^{19}\) GeV. Since the Higgs pole mass is of order \(10^2\) GeV, the dimensionless pole mass on the left hand side of (2) is then of order \(10^{-34}\), entailing that the dimensionless bare mass and coupling on the right hand side must agree to the \(34^{th}\) decimal place. A very small adjustment in the dimensionless bare mass \(\tilde{m}_{0}^{2}\) or coupling \(y_{t,0}\), say of order \(10^{-6}\), will cause the Higgs pole mass \(m_{p}\) to increase by a factor of roughly \(10^{14}\). Thus, the observable Higgs pole mass is extremely sensitive to small adjustments in both the dimensionless bare mass and coupling.
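To make the arithmetic concrete, the following minimal Python sketch reproduces these order-of-magnitude estimates. The inputs are purely illustrative stand-ins (a Planck-scale cutoff of \(10^{19}\) GeV, a Higgs pole mass of 125 GeV, and a bare top Yukawa coupling set to one), not fitted values:

```python
import math

# Illustrative inputs only (order-of-magnitude stand-ins, not measured values):
Lambda_SM = 1e19   # assumed physical cutoff, GeV (Planck scale)
m_p       = 125.0  # Higgs pole mass, GeV
y_t0      = 1.0    # bare top Yukawa coupling, taken to be of order one

# Dimensionless relation (2): m_p^2 / Lambda_SM^2 = m0_tilde^2 - y_t0^2 / (8 pi^2)
lhs        = (m_p / Lambda_SM) ** 2           # ~ 1.6e-34
correction = y_t0 ** 2 / (8 * math.pi ** 2)   # ~ 1.3e-2
m0_tilde_sq = lhs + correction                # bare term needed to reproduce m_p
# (the 1e-34 piece is far below double precision, which itself hints at how
#  delicate the required cancellation is)

print(f"(m_p / Lambda_SM)^2 = {lhs:.2e}")
print(f"RHS terms must agree to roughly the {-math.log10(lhs):.0f}th decimal place")

# Sensitivity: shift the dimensionless bare mass squared by 1e-6
shifted = (m0_tilde_sq + 1e-6) - correction
new_m_p = Lambda_SM * math.sqrt(shifted)
print(f"pole mass after the shift: {new_m_p:.2e} GeV "
      f"(an increase by a factor of ~{new_m_p / m_p:.1e})")
```

Running the sketch prints an agreement requirement of roughly 34 decimal places and a post-shift pole mass of about \(10^{16}\) GeV, an increase of roughly fourteen orders of magnitude, in line with the estimates above.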

Within the tradition of defining naturalness as the absence of delicate sensitivity of a theory’s observables to high-energy parameters, multiple quantitative measures of fine tuning have been proposed. The most well-known such measure is the one proposed by Barbieri and Giudice, who require that for every fundamental parameter \(a_i\)—what they call the “most general parameters of the theory”—the following condition hold:

$$\begin{aligned} \Delta _{BG} \equiv \left| \frac{a_i}{O}\frac{\partial O}{\partial a_i} \right| < \Delta \end{aligned}$$
(3)

where \(\Delta _{BG}\) is the dimensionless Barbieri-Giudice fine tuning measure, O is an observable or low-energy parameter, and \(\Delta\) is the upper bound (usually set to 10 as a rule of thumb) on the sensitivity of O to any parameter \(a_i\), as measured by \(\Delta _{BG}\) [5]. Following on the work of Barbieri and Giudice, Anderson and Castaño observed that there can be cases, such as the mass of the proton in QCD, in which \(\Delta _{BG}\) is independent of the value of the parameters \(a_i\) or chosen observable O; in such cases, even when the value of \(\Delta _{BG}\) is large, it makes no sense to describe the parameters in question as fine tuned. They write that \(\Delta _{BG}\) “is really a measure of sensitivity, and sensitivity does not automatically translate into fine tuning.” For Anderson and Castaño, the problem with delicate sensitivity of an observable to the values of high-energy parameters is not intrinsic, but rather derives from probabilistic considerations. This observation motivated them to propose a refinement of the Barbieri-Giudice measure, \(\Delta _{AC} \equiv \frac{\Delta _{BG}}{\overline{\Delta }_{BG}}\), which quantifies the size of \(\Delta _{BG}(a)\) at a particular set of values for the parameters a relative to its average value \(\overline{\Delta }_{BG}\) over the whole parameter space. However, this average must be defined with respect to some assumed probability measure p(a) over the parameter space. While \(\Delta _{AC}\) is still in a sense a measure of the sensitivity of observables to adjustments in the “fundamental parameters” of the theory, through its reliance on a probability measure over the parameter space it rests on a prior assumption about which parameter values are more or less likely [1].
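To illustrate how such a measure is evaluated in practice, the following sketch computes \(\Delta _{BG}\) for the bare relation (1), treating the Higgs pole mass squared as the observable O and the dimensionless bare mass and Yukawa coupling as the parameters \(a_i\). The numerical inputs (a Planck-scale cutoff, a 125 GeV pole mass, a bare Yukawa of one) are illustrative only:

```python
import math

# Barbieri-Giudice measure Delta_BG = |(a_i / O) dO/da_i|, evaluated analytically
# for the toy relation (1): O = m_p^2 = Lambda^2 * (m0_tilde^2 - y_t0^2 / (8 pi^2)).
# All inputs below are illustrative stand-ins, not fitted Standard Model values.

Lambda_SM = 1e19          # assumed physical cutoff, GeV (Planck scale)
m_p_sq    = 125.0 ** 2    # Higgs pole mass squared, GeV^2
y_t0      = 1.0           # bare top Yukawa coupling
m0_tilde_sq = m_p_sq / Lambda_SM**2 + y_t0**2 / (8 * math.pi**2)  # tuned to fit m_p

# Analytic partial derivatives of O = m_p^2 with respect to each parameter:
dO_dm0sq = Lambda_SM**2                              # dO / d(m0_tilde^2)
dO_dy    = -Lambda_SM**2 * y_t0 / (4 * math.pi**2)   # dO / d(y_t0)

delta_BG_mass = abs(m0_tilde_sq / m_p_sq * dO_dm0sq)
delta_BG_yuk  = abs(y_t0        / m_p_sq * dO_dy)

print(f"Delta_BG w.r.t. m0_tilde^2 : {delta_BG_mass:.1e}")   # ~ 8e31
print(f"Delta_BG w.r.t. y_t0       : {delta_BG_yuk:.1e}")    # ~ 2e32
# Both values dwarf the rule-of-thumb bound Delta ~ 10, signalling extreme
# 'fine tuning' by this measure for a Planck-scale cutoff.
```

Lowering the assumed cutoff to the few-TeV range brings both sensitivities down to roughly order ten or below, which is the quantitative sense in which this measure was taken to point toward new physics near the TeV scale.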

As Borrelli and Castellani observe, Susskind in his original formulation of the naturalness argument does not say why delicate sensitivity of observables to adjustments in the fundamental bare parameters is problematic. What is to prevent one from simply shrugging one’s shoulders and responding “so what?” to the observation that the Higgs pole mass depends delicately on the values of the bare parameters at the physical cutoff scale? Is it really so self-evident that such sensitivity is problematic? There are several rationales (in some cases tacit) underpinning the prohibition against sensitivity of observables to the values of fundamental parameters.

One possibility is that worries about sensitivity originate in the sort of probabilistic worry about “unlikely” parameters described in Sect. 2.2.1. Intuitively, the connection appears to be roughly the following. On the assumption that the given probability distribution over the fundamental parameter space is reasonably smooth, delicate sensitivity of observables to the values of fundamental parameters implies the need for an unlikely choice of parameters. Let \(O = f(g)\), where O, the observable in question, is some function f of g, the so-called fundamental parameters of the theory. The degree of sensitivity of the observable to the fundamental parameter is proportional to the derivative \(\frac{dO}{dg_i}\). Given a fixed range \(I = [O_{0} - \epsilon , O_{0} + \epsilon ]\) of allowable values for O, there will be a certain restricted set of values for the fundamental parameters g that are consistent with the set I. Considering just the value of a single fundamental parameter \(g_i\), for a fixed range I, increasing \(\frac{d O}{dg_i}\) shrinks the range of values for \(g_i\) that are consistent with the allowable range I of observed values for O. To gain some intuition for this claim, think of the simple linear relation \(y=mx+b\): for a fixed range \(I=[y_0 - \epsilon , y_0 + \epsilon ]\) of values for y, the larger the derivative m, the smaller the range \([\frac{y_0 - \epsilon - b}{m}, \frac{y_0 + \epsilon - b}{m}]\) of values of x consistent with this set I of values for y. Assuming a fixed probability density p(g) over the bare parameter space, larger values of \(\frac{dO}{dg_i}\) will generically correspond to smaller values for the integrated probability \(\int _{f^{-1}(I)} \ dg \ p(g)\) since they will entail a smaller integration region \(f^{-1}(I)\). Cases in which O is extremely sensitive to g will therefore generically correspond to cases in which the integrated probability \(\int _{f^{-1}(I)} \ dg \ p(g)\) is small, assuming that p(g) is reasonably smooth.
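The following toy Monte Carlo sketch makes this connection concrete. The Gaussian prior p(g) and the linear “observable” are entirely arbitrary choices made for illustration; the point is only that, for a fixed window of allowed observable values, the fraction of prior-sampled parameter values landing in the preimage of that window shrinks as the sensitivity \(\frac{dO}{dg}\) grows:

```python
import random

# Toy illustration (all modelling choices arbitrary): for O = f(g) = slope * g,
# a smooth prior p(g), and a fixed window I = [O_0 - eps, O_0 + eps], the
# integrated probability of the preimage f^{-1}(I) falls as dO/dg = slope grows.

random.seed(0)
O_0, eps = 1.0, 0.1    # fixed allowable window of observable values
N = 200_000            # number of samples drawn from the prior

def preimage_probability(slope):
    """Monte Carlo estimate of P(g ~ p lands in f^{-1}(I)) for f(g) = slope * g."""
    hits = 0
    for _ in range(N):
        g = random.gauss(0.0, 10.0)   # smooth Gaussian prior p(g) over the parameter
        if O_0 - eps <= slope * g <= O_0 + eps:
            hits += 1
    return hits / N

for slope in (1.0, 10.0, 100.0):
    print(f"dO/dg = {slope:6.1f}  ->  P(O in I) ~ {preimage_probability(slope):.1e}")
# Approximate output: 8e-03, 8e-04, 8e-05 -- each tenfold increase in the
# sensitivity shrinks the preimage, and hence the integrated probability,
# by roughly a factor of ten.
```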

A second possible rationale for precluding delicate sensitivity of observables to fundamental parameters is that there is something intrinsically problematic about delicate sensitivity of observables to high-energy parameters, irrespective of any assessments of the likeliness or unlikeliness of a theory’s parameter values. For example, Giudice suggests this view when he characterizes naturalness in terms of “separation of scales.” He writes of the naturalness principle that

It is deeply rooted in our description of the physical world in terms of effective theories. Separation of scales is not an a priori necessary ingredient, but it has been a cornerstone of much of the progress done in physics throughout the centuries. Were it necessary for deriving the trajectory of the Moon’s orbit to solve the equation of motion of each individual quark and electron in the lunar interior, how could have Newton obtained his gravity equation? Separation of scales has been a very useful tool for physicists to make progress along the path towards the inner layers of matter, and we can be grateful to nature for employing it so generously [13].

To the extent that naturalness is understood here as the requirement of separation of scales, it is being taken as a precondition for describing the world in terms of effective theories—that is, for using simple mathematical regularities to characterize the behavior of coarse-grained degrees of freedom, without needing to know the detailed state or dynamics of the fine-grained degrees of freedom on which they depend. Particularly with the benefit of the LHC’s experimental findings, however, it has become clear that failure of naturalness, understood as numerical sensitivity of the physical Higgs mass to cutoff-scale parameters, is entirely compatible with separation of scales. The Standard Model continues to make excellent predictions about the behavior of low-energy, effective degrees of freedom at least up to the TeV scale, despite our remaining ignorant of the dynamics governing whatever more fine-grained degrees of freedom happen to underpin this behavior; in this sense, the Standard Model’s relationship to whatever BSM theory underpins it serves as a prime example of the separation of scales, in spite of the numerical sensitivity of the Higgs mass to high-scale parameters. It appears that separation of scales in this sense, far from being critically threatened by the numerical sensitivity of the physical Higgs mass to cutoff-scale parameters, is simply unaffected by it.Footnote 3

A third rationale for worrying about sensitivity of the physical Higgs mass to cutoff-scale parameters has been articulated by Williams, who argues that the numerical sensitivities of the Higgs mass are problematic because they violate the “spirit” of the Decoupling Theorem and the expectation that natural phenomena be describable in terms of “quasi-autonomous domains,” where a quasi-autonomous domain is a realm of phenomena characterized by the applicability of some particular effective theory [34, 36]. The existence of quasi-autonomous domains corresponds roughly to what Giudice describes as separation of scales—the fact that nature exhibits regimes of phenomena governed by different effective theories, which accurately describe the behavior of coarse-grained degrees of freedom without reference to the detailed state or dynamics governing the more fine-grained degrees of freedom on which they depend. Yet the notion that numerical sensitivity of the physical Higgs mass to the values of some cutoff-scale parameters is a problem because it violates the “spirit,” if not the letter, of the Decoupling Theorem and separation of scales appears to trade on the vagueness inherent in notions like decoupling and autonomy. The pattern of inference at work in his argument appears to be that, if, on one way of making terms like “autonomous” and “decouple” precise, violations of decoupling and quasi-autonomy are legitimately problematic, then such violations are also problematic on any other way of making these terms precise. The argument is weak, to the extent that it rests on the flawed assumption that legitimate considerations offered in support of one notion of “decoupling” or “autonomy” carry over to sharply distinct notions that happen to be described using the same terms, simply by virtue of a linguistic ambiguity or surface-level similarity.

2.3 Concerns About Regularization Scheme Dependence

One immediate concern about formulations of the naturalness principle based on the bare fine tuning relation (1) is that they are explicitly tied to a particular choice of regulator—namely, the cutoff parameter \(\Lambda\). What of the notion, taught to every student of quantum field theory, that the physical content of a quantum field theory should be independent of regularization scheme?

Given that bare formulations of the naturalness principle based on (1) violate the principle of regularization scheme independence, there are two possible responses on the part of defenders of naturalness: (1) retain the principle of regularization independence and seek a formulation of the naturalness principle that does not depend on the choice of regulator; (2) reject, or at least loosen, the principle of regularization scheme independence to allow for the notion that one regularization scheme, and one set of bare parameters, constitute the single physically correct regularization and bare parametrization of the theory. Option (1) is considered in the coming subsections. Option (2) is considered further in Section 3.

2.4 The “Renormalized” Fine Tuning Argument

There is a simple toy model that is often invoked to illustrate the need for fine tuning even in the context of regulator-independent, renormalized parametrizations of EFTs containing scalar fields. The model describes a light scalar field coupled to a much heavier field, usually a heavy fermion, but sometimes another scalar; see e.g. [4, 8, 26, 27]. For concreteness, consider the following Lagrangian, which describes a light scalar field \(\phi\) of mass m coupled to a heavy fermion \(\psi\) of mass M, where \(M>> m\):

$$\begin{aligned} \mathcal {L} = \frac{1}{2} (\partial _{\mu } \phi )^{2} - \frac{1}{2} m^{2} \phi ^{2} + \frac{\lambda }{4!} \phi ^{4} + \bar{\psi }\left( i \gamma ^{\mu } \partial _{\mu } - M\right) \psi + g \, \phi \, \bar{\psi } \psi , \end{aligned}$$
(4)

where the parameters are understood as renormalized \(\overline{MS}\) parameters. At low energies, heavy fermions are not produced, and we can use an effective Lagrangian that describes only the dynamics of the light scalar field:

$$\begin{aligned} \mathcal {L}_{\phi } = \frac{1}{2} A (\partial _{\mu } \phi )^{2} - \frac{1}{2} B \phi ^{2} + \frac{1}{4!} C \phi ^{4} + \ldots , \end{aligned}$$
(5)

where A, B, and C are coefficients to be determined by requiring that both the effective theory (5) and the full theory with Lagrangian (4) generate the same light-field pole mass and light-field S-matrix elements, and the ellipsis designates higher-dimensional terms whose influence is negligible at energies much less than M. Here, the light scalar is analogous to the Standard Model Higgs, and the heavier field to some heavy BSM particle that has not yet been discovered.

Requiring the full theory and the light scalar theory to agree to one-loop order on the values of these quantities generates the following relationship between the scalar mass parameters of the two theories:

$$\begin{aligned} \begin{aligned} B(\mu ) \equiv \tilde{m}^2(\mu )&\equiv m^2(\mu ) + \delta m^2(\mu ) \\&= m^2(\mu ) - \frac{4 g^2}{(4\pi )^2}M^2\left( 1+3\,\ln \frac{\mu ^2}{M^2}\right) \end{aligned} \end{aligned}$$
(6)

where \(\mu\) is the arbitrary \(\overline{MS}\) renormalization scale. Setting \(\mu\) to M for convenience, one obtains

$$\begin{aligned} \begin{aligned} \tilde{m}^2 = m^2 - \frac{4 g^2}{(4\pi )^2}M^2, \end{aligned} \end{aligned}$$
(7)

which is similar to (1) except that the correction term depends on the mass of the heavy particle that has been “integrated out,” instead of on a cutoff regulator, and the light scalar mass \(\tilde{m}^2\) is a renormalization-scheme-dependent scalar \(\overline{MS}\) mass rather than the observable, scheme-independent scalar pole mass.

If \(M>> \tilde{m}\), where \(\tilde{m}\) is the \(\overline{MS}\) mass of the light scalar theory, determined by fitting that theory directly to measurement, then a delicate cancellation between the renormalized scalar mass \(m^2\) of the full theory and the EFT matching corrections \(\frac{4 g^2}{(4\pi )^2}M^2\) is required to recover the value of the low-energy parameter \(\tilde{m}^2\). Just as the Higgs pole mass is thought to represent the combined “macroscopic” effect of a delicate cancellation between the “microscopic” bare mass and its quadratic quantum corrections, the low-energy renormalized scalar mass parameter \(\tilde{m}^2\) is thought to represent the combined “macroscopic” effect of a delicate cancellation between the “microscopic” high-energy renormalized scalar mass parameter \(m^2\) and its quadratic matching corrections.
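A minimal numerical sketch of the cancellation required by (7), with purely illustrative inputs (a heavy fermion at \(M = 10^{16}\) GeV, coupling \(g = 1\), and a light-scalar \(\overline{MS}\) mass of 125 GeV standing in for the Higgs):

```python
import math

# Illustrative numbers only: the matching relation (7),
#   m_tilde^2 = m^2 - 4 g^2 / (4 pi)^2 * M^2   (with mu set to M).
# Solve for the full-theory mass parameter m^2 needed to reproduce a light
# EFT mass, and inspect how delicate the required cancellation is.

M       = 1e16     # assumed heavy-fermion MS-bar mass, GeV
g       = 1.0      # assumed Yukawa coupling
m_tilde = 125.0    # light-scalar MS-bar mass in the EFT, GeV (Higgs-like)

correction = 4 * g**2 / (4 * math.pi)**2 * M**2   # matching correction, GeV^2
m_sq_full  = m_tilde**2 + correction              # full-theory m^2 required by (7)

print(f"matching correction        : {correction:.3e} GeV^2")
print(f"required full-theory m^2   : {m_sq_full:.3e} GeV^2")
print(f"residual relative to terms : {m_tilde**2 / correction:.1e}")
# The two terms on the right-hand side of (7) must cancel to roughly one part
# in 10^26 to leave behind an EFT mass parameter of order (125 GeV)^2.
# (IEEE double precision, with ~16 significant digits, cannot even resolve a
#  cancellation this delicate, so the first two printed values agree to the
#  displayed precision.)
```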

By way of comparison with bare fine tuning arguments, note that in the bare context, it is only parameters of the light field effective theory that are involved in the delicate cancellation. In the renormalized formulation, the cancellation involves the parameters of an entirely distinct theory with new, heavy fields. While the bare fine tuning argument applies directly to the Standard Model without assuming any model of BSM physics, the renormalized fine tuning argument does to some extent presuppose a particular speculative model for BSM physics, represented in the toy example by the heavy fermion.

2.4.1 Probabilistic Interpretation

By analogy with the fundamental bare parameters of the bare fine tuning argument, \(m^2\), g, and M are taken as independent, fundamental parameters of the full theory, and from this perspective it seems intuitively unlikely that these terms should cancel so precisely purely as a matter of chance; some deeper explanation is needed as to the underlying origins of this apparent coincidence. Thus, just as the bare formulation assumes a probability distribution over the bare parameter space of the low-energy theory, this formulation likewise assumes some probability distribution over the space of \(\overline{MS}\)-renormalized parameters of the underlying full theory.

2.4.2 Sensitivity-Based Interpretation

Similarly to bare fine tuning arguments, for large values of the heavy field mass M, small adjustments to either the light-field dimensionless \(\overline{MS}\) mass, \(\frac{m(\mu )}{M}\), or coupling g in the full theory generate large changes in the light-field \(\overline{MS}\) mass parameter of the effective theory. In this sense, the light-field \(\overline{MS}\) mass is delicately sensitive to the parameters of the full theory. By contrast with the bare fine tuning argument, the low-energy quantity that depends sensitively on cutoff-scale parameters is not an observable like the pole mass, but a scheme-dependent parameter. The possible rationales for precluding delicate sensitivity carry over from the discussion of bare fine tuning.

2.5 Scheme Dependence of the “Renormalized” Fine Tuning Argument

An important point about the fine tuned matching relation (7) is that although it does not depend explicitly on any regulator, it does depend on a specific choice of renormalization scheme—the \(\overline{MS}\) scheme—in both the full theory and in the light-field effective field theory.Footnote 4 Moreover, there exist alternative choices of renormalization scheme in which there is no fine tuning at all in the matching relation between the renormalized light scalar masses of the full and effective theory. Specifically, if both theories are renormalized in the on-shell scheme, the light-field mass parameter is the same in both theories—being equal to the pole mass in both cases—and there are no matching corrections at all. Thus, one can simply transform away the fine tuned cancellations by a change of renormalization scheme to the on-shell scheme, without altering the physical content of the theory. Just as every student of quantum field theory is taught that the physical predictions of a QFT are independent of the regulator that one employs, so he/she is also taught that these physical predictions are likewise independent of the renormalization scheme that one uses. If the principle of renormalization scheme invariance is to be respected, then the delicate cancellations that occur in the renormalized fine tuning relation (7) are to be regarded simply as artifacts of a mathematical convention.
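Schematically, and restricting attention to the mass parameter itself, the on-shell matching described above takes the trivial form

$$\begin{aligned} m^{2}_{\text {OS, full}} = m^{2}_{p} = m^{2}_{\text {OS, EFT}}, \end{aligned}$$

since in the on-shell scheme the renormalized light-scalar mass of each theory is defined to equal the same scheme-independent pole mass \(m^{2}_{p}\); the \(M^2\)-proportional contributions that appear explicitly in (7) are instead absorbed into the on-shell counterterms, rather than surfacing in the relation between the renormalized mass parameters of the two theories.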

On the other hand, one sometimes hears reference to the renormalized running \(\overline{MS}\) mass as the “effective mass at scale \(\mu\)” of the light scalar field, and reference to renormalized parameters in a variety of schemes as the “physical parameters” of a QFT. This language could be interpreted as suggesting that some particular set of renormalized parameters constitutes the “true” renormalized parametrization. By analogy with the case of bare parameters, one might try to justify the naturalness argument by attributing special physical status to some particular renormalized parametrization in which fine tuned cancellations do appear. On the other hand, if any set of renormalized parameters deserves to be counted as the “physical parameters” of a QFT, one would think, it is the parameters of the on-shell scheme, since the mass parameter is the scheme-invariant pole mass and the coupling is directly defined in terms of scheme-invariant scattering amplitudes at a specified (but arbitrary) physical scattering energy. And in the on-shell parametrization, as we have seen, there are no fine-tuned cancellations involving only renormalized parameters. But if one is to adopt a strict interpretation of the principle of renormalization scheme independence, then the notion of a special “physical” set of renormalized parameters, and particularly one other than the on-shell parameters, seems a non-starter.

2.6 Other Formulations of the Higgs Naturalness Principle

Apart from fine tuning formulations of the naturalness principle, there are several other notions commonly discussed in connection with the Higgs naturalness principle; these include absolute naturalness, technical naturalness, and the Decoupling Theorem.

2.6.1 Absolute Naturalness

Absolute naturalness, originating in the writings of Dirac, is the requirement that the dimensionless parameters of a theory all be roughly of order one [21]. It is rooted in the intuition that fundamental theories should not possess arbitrary features, including parameters of the same dimension that differ by many orders of magnitude. The requirement that such parameters all possess roughly the same order of magnitude can be formulated as the requirement that their dimensionless ratios be of order one. Thus, one intuition motivating absolute naturalness is that a theory whose dimensionless ratios are all of order one is in some sense more uniform, more typical, and less arbitrary. As Wells notes, the expectation that a theory be natural in the absolute sense is thought to be more compelling the more fundamental (i.e., the more universal and physically encompassing) that theory is. For theories that are less fundamental, the presence of such arbitrary-seeming features, including dimensionful parameters with widely varying magnitudes, or equivalently small dimensionless parameters, is more acceptable, since there is a reasonable expectation that these features will be explained by a deeper, more encompassing theory that is more uniform and aesthetically pleasing [32].

Even without the Higgs, the Standard Model parameters, including for example the mass of the electron or any of the lighter fermions, flatly violate the requirement of absolute naturalness inasmuch as their ratio to the physical cutoff scale of the Standard Model (understood here as a parameter of the Standard Model) is much smaller than one. Since the prediction of new physics signatures at the LHC was rooted essentially in the quadratic cutoff-scale dependence of the Higgs mass corrections, it is clear that absolute naturalness bears at most a peripheral, surface-level connection to naturalness in the sense responsible for generating this prediction.

2.6.2 Technical Naturalness

It was ’t Hooft who first proposed the requirement known as “technical naturalness,” which allows a dimensionless parameter to be much less than one only if the parameter is “protected” by a symmetry: that is, if setting that parameter to zero would increase the symmetry of the theory. ’t Hooft’s rationale is that if the parameter is protected by a symmetry, then the quantum corrections to the parameter will be proportional to the parameter itself; this in turn implies that the corrections will only be logarithmic in the cutoff; for cutoffs up to and well beyond the Planck scale, logarithmic corrections will be smaller than or of the same order as the parameter itself, so that no delicate or conspiratorial-seeming cancellations are needed to recover the results of experimental observations. The motivations that ’t Hooft offers for imposing technical naturalness are not essentially about an intrinsic aversion to very small dimensionless parameters, as they are for Dirac, but rather about the need to avoid “unlikely” cancellations, as he himself describes the problem.Footnote 5 Thus, although ’t Hooft recognizes the similarity of technical naturalness to Dirac’s absolute naturalness, in the sense that both place constraints on the order of magnitude of dimensionless parameters, for ’t Hooft the impetus for imposing technical naturalness ultimately traces back to worries about probabilistic fine tuning. Reinforcing this interpretation of ’t Hooft, which is more or less stated explicitly in his own words, Wells characterizes the requirement of technical naturalness as essentially a reformulation of the prohibition against fine tuning [32]. This is also the view of technical naturalness adopted here.

2.6.3 The Decoupling Theorem

The Decoupling Theorem of effective field theory, introduced in [2], is frequently mentioned in connection with the Higgs naturalness principle. Like the concept of separation of scales described in the above passage from Giudice, both the Decoupling Theorem and the naturalness principle’s prohibition against sensitive dependence of observables on cutoff-scale parameters can be construed as demands for “autonomy” of physics at low-energies from the details of physics at high-energies. I have already argued that despite this surface similarity, separation of scales and fine tuning naturalness are in fact sharply distinct concepts, and neither implies the other. As others have already pointed out, a similar lesson holds for the connection between the Decoupling Theorem and fine tuning naturalness.Footnote 6

The Decoupling Theorem states that in a QFT describing both light and heavy fields with widely differing pole masses, parametrized in a mass-dependent renormalization scheme, scattering amplitudes that contain only light field particles in their external legs can be parametrized in terms of an effective Lagrangian that contains only light fields. Thus, in this one particular sense, the heavy fields “decouple” from the dynamics of the light fields. While the Decoupling Theorem broadly concerns the possibility of applying effective theories to describe low-energy degrees of freedom, it is a purely mathematical result and should be distinguished from separation of scales, which is an empirical observation about the character of natural laws. Furthermore, the Decoupling Theorem is fully consistent with delicate numerical sensitivity of light-field observables to cutoff-scale parameters; one can have decoupling of light field dynamics from that of heavy fields in the sense of the Decoupling Theorem and delicate sensitivity of light-field observables either to light field EFT parameters defined at the mass scale of the heavy field, or to the parameters of the full theory including the heavy field. The sense of “decouple” in which the physical Higgs mass fails to “decouple” from high-energy physics is very clearly distinct from the sense in which light-field dynamics does “decouple” from heavy-field physics. Thus, the notion that the Decoupling Theorem has anything to do with the numerical sensitivity of the Higgs mass to cutoff scale parameters rests mainly on a certain sloppiness in the use of the term “decouple.”Footnote 7

2.7 Summary

It is only through the relaxation of the principle of regularization or renormalization scheme independence that the naturalness principle (on any formulation relevant to the prediction of BSM signatures at the LHC) can be motivated. That is, it is only by virtue of the supposition that some particular parametrization of an EFT—whether a set of bare “fundamental parameters” or renormalized “physical parameters”—constitutes the unique physically preferred parametrization of the theory that the delicate cancellations and sensitivities caused by quadratic corrections pose a potential problem. In the next section, I argue that while the assumption of such a preferred parametrization underpins one popular way of understanding quantum field theories as effective field theories, the concept of an EFT in high energy physics does not depend essentially for its mathematical definition or physical interpretation on the existence of a preferred parametrization. Moreover, the notion of a single preferred fundamental or physical parametrization constitutes an idle wheel in generating the confirmed predictions of all known EFTs in high-energy physics, and for this reason should be dispensed with.

3 Effective Field Theory Without a Preferred Parametrization

In this section, I argue that the naturalness arguments cited above rest on a certain “received” interpretation of effective field theories that is widely adopted both in the physics community and in discussions of the conceptual foundations of quantum field theory within the philosophical literature. I highlight the ways in which this received view implicitly loosens the conventional assumption that the only quantities in a quantum field theory that represent real physical features of the modeled system (as opposed to merely reflecting a choice of mathematical convention) are those that are regularization and renormalization scheme independent.

The tension between naturalness and the combined principles of regularization and renormalization scheme independence raises an important question: if the principles of regularization and renormalization scheme independence are such core tenets of QFT, as they are commonly thought to be, what is the reasoning that motivates their (usually tacit) abandonment in the adoption of naturalness-based arguments? This section considers two distinct rationales that appear to motivate the loosening of these principles—one that suspends regularization scheme independence, and the other that suspends renormalization scheme independence.

I then argue that these rationales are flawed and that the grounds for adhering to the principles of regularization and renormalization scheme independence are substantially stronger than the grounds for singling out some particular set of bare “fundamental parameters” or renormalized “physical parameters” as physically preferred. Specifically, I argue that because all known EFTs so far make contact with experiment only at the level of scheme-independent observables, the notion of a single fundamental bare parametrization associated with the physical cutoff of the theory, or the notion that the \(\overline{MS}\) parameters constitute the unique physical parameters of the theory, constitutes an idle metaphysical supposition that plays no essential role in generating the successful predictions of the theory. In its violation of an invariance that is respected by empirical data, solely for the purpose of upholding particular metaphysical prejudices, the notion that some particular set of bare or renormalized \(\overline{MS}\) parameters constitutes the true physical parameters of the theory is analogous to the assumption of a physically preferred standard of rest in the ether theory of the nineteenth century.

I go on to argue that abandoning the assumption of a physically preferred parametrization also entails the abandonment of certain influential dogmas about effective field theory, and explain how certain elements of the mathematical formalism of effective field theory are to be re-interpreted in their absence. In particular, I describe several revisions in the interpretation of the cutoff parameter \(\Lambda\), the QFT path integral, and the Wilsonian renormalization group (RG) that are motivated by the abandonment of a physically preferred parametrization.

3.1 Three Views of the Cutoff in Quantum Field Theory

To set the stage for the discussion, in this subsection I review three distinct ways of interpreting the cutoff parameter \(\Lambda\) in quantum field theory that serve to motivate the introduction of EFTs in QFT as well as to distinguish a certain “received” understanding of EFTs from the understanding of EFTs that is advocated here.

3.1.1 \(\Lambda \rightarrow \infty\)

In pre-Wilsonian, perturbative approaches to quantum field theory, the governing attitude is that one is obliged to take the cutoff regulator \(\Lambda\) to infinity at the end of the renormalization process. Since \(\Lambda\) is inserted as an arbitrary parameter solely for the purpose of rendering loop integrals finite to enable calculation, it is thought that \(\Lambda\) should be taken to infinity as infinity is the only non-arbitrary value for this parameter. In addition, finite values for \(\Lambda\) can violate exact Lorentz and gauge invariance, so that the strict preservation of both of these symmetries requires that \(\Lambda \rightarrow \infty\). Lastly, the view that one must take the limit \(\Lambda \rightarrow \infty\) may rest partly on the prejudice that any quantum field theory, by virtue of being a field theory, should be defined on a continuum, up to arbitrarily high frequencies. The notion that the QFT must be defined on a continuum is also associated with the programs of algebraic and constructive QFT, which aim to provide a general non-perturbative mathematical definition of quantum field theory. Although it is possible to extract well-defined predictions in the limit \(\Lambda \rightarrow \infty\) within the context of perturbative approximations, the continuum limit continues to pose mathematical problems in the more general non-perturbative realm, to the extent that there exists no known physically realistic, interacting quantum field theory with a well-defined, non-perturbative continuum limit.

3.1.2 \(\Lambda = M_{phys}\)

From Wilson’s seminal work on the renormalization group, which highlighted many fruitful mathematical analogies between condensed matter and high-energy physics,Footnote 8 there arose the notion that one need not take the limit \(\Lambda \rightarrow \infty\), but instead can perform calculations using a finite value of \(\Lambda\). With this change of perspective, it is no longer necessary to consider only theories with renormalizable Lagrangians as in continuum approaches; one can equally well perform calculations in theories with Lagrangians containing arbitrarily high-dimensional operators, provided one does not require that the cutoff be taken to infinity. This has led to a revised understanding of quantum field theories as effective field theories (EFTs). This perspective begins from the realization that in practice, quantum field theories do not describe phenomena up to arbitrarily high energies, but only up to some finite physical cutoff scale \(M_{phys}\), and that it is therefore not strictly speaking necessary for a QFT to be mathematically defined for arbitrarily large values of \(\Lambda\) in order for the theory to generate empirically well-confirmed predictions over some finite domain; it is only necessary that the theory be defined up through the scale of the physical cutoff.

Within this approach, it is common to insert the further assumption that in the mathematical definition of an EFT, one should simply set \(\Lambda =M_{phys}\), essentially identifying the “true” value of the cutoff regulator with the physical cutoff, in much the same sense as the atomic lattice spacing functions as a physically preferred regulator in condensed matter field theory. Using the Wilsonian renormalization group, one can then “integrate out” high-energy degrees of freedom from the EFT path integral, starting at \(\Lambda =M_{phys}\), to form new EFTs with \(\Lambda < M_{phys}\), which describe physics at lower energy scales. The bare parameters at the physical cutoff are viewed as “fundamental parameters” of the EFT, while other parametrizations of the EFT at smaller \(\Lambda\) are understood as coarse grainings or approximations to this true fundamental description.

3.1.3 “Quotient-ing over” the Wilsonian RG Flow

Here, I wish to highlight a third possible interpretation of \(\Lambda\) that preserves core tenets of the effective field theory viewpoint, but revises certain of the dogmas mentioned in the second paragraph of Sect. 3.1.2. Rather than take \(\Lambda \rightarrow \infty\) or set \(\Lambda =M_{phys}\), the approach is, in a sense, to “quotient over” different finite values of \(\Lambda\) by declaring different parametrizations \(g(\Lambda )\) along the same Wilsonian RG trajectory, which are associated with different values of \(\Lambda\), to be physically equivalent. Such an approach is enabled by the fact that the finite-cutoff path integral, the correlation functions derived from it, and the observables derived from these correlation functions, are all invariant under Wilsonian RG transformations. They are invariant in the specific sense that attributing to the bare parameters the cutoff dependence \(g(\Lambda )\) given by a solution to the Wilsonian RG equations serves to cancel the explicit \(\Lambda\) dependence of these quantities and thereby render them \(\Lambda\)-independent—that is,

$$\begin{aligned}&\Lambda \frac{d}{d\Lambda } Z[J; g(\Lambda ), \Lambda ) = 0 \end{aligned}$$
(8)
$$\begin{aligned}&\Lambda \frac{d}{d\Lambda } G(x_1,..., x_N; g(\Lambda ), \Lambda ) = 0 \end{aligned}$$
(9)
$$\begin{aligned}&\Lambda \frac{d}{d\Lambda } S(p_1,..., p_N; q_1,...,q_M ; g(\Lambda ), \Lambda ) = 0 \end{aligned}$$
(10)
$$\begin{aligned}&\Lambda \frac{d}{d\Lambda } \mathcal {O}(g(\Lambda ), \Lambda ) = 0 \end{aligned}$$
(11)

where J is a source function, \(g(\Lambda )\) is a solution to the Wilsonian RG equations, \(Z[J; g(\Lambda ), \Lambda )\) is the finite-cutoff path integral, \(G(x_1,..., x_N; g(\Lambda ), \Lambda )\) is the N-point correlation function, \(S(p_1,..., p_N; q_1,...,q_M ; g(\Lambda ), \Lambda )\) is the S-matrix, and \(\mathcal {O}(g(\Lambda ), \Lambda )\) is a general observable (which, it is assumed, can be calculated from correlation functions). Thus, all correlation functions, and all observables that can be calculated from them, can be computed using any value of \(\Lambda\) for which the RG solution \(g(\Lambda )\) is mathematically defined. This point of view has also been advocated in the work of Tim Morris on the non-perturbative renormalization group, although it has received little attention in the scientific or philosophical literature on naturalness or in the philosophical literature on the foundations of quantum field theory [19, 20].Footnote 9 Its implications for naturalness are explored in detail in [23].

The value of the physical cutoff \(M_{phys}\) enters nowhere into the mathematical definition of the EFT and is completely external to the definition of the EFT model. The value \(M_{phys}\) characterizes the relationship between the EFT model and the world, but not internal mathematical features of the model. \(\Lambda\) and \(M_{phys}\) are cutoffs in two completely distinct senses of the term; the first is an arbitrary parameter that can be varied without changing the physical content of the theory; the second is an empirically observable quantity, such that different values of \(M_{phys}\) necessarily correspond to physically distinct states of affairs. Because all bare parametrizations \(g(\Lambda )\) along the Wilsonian RG trajectory of an EFT are physically equivalent on this view, there are no “fundamental parameters” in the theory. Distinct points along the same RG trajectory do not correspond to distinct EFTs, but to physically equivalent parametrizations of one and the same EFT, associated with the same set of correlation functions. Put somewhat more figuratively, an EFT is an RG trajectory, not a point on such a trajectory.

This way of understanding the Wilsonian RG, as an invariance of physical observables, resembles more closely the understanding of the RG for renormalized parametrizations, which describes the running of renormalized parameters \(g_r(\mu )\) with the renormalization scale \(\mu\); there, one has a condition on observables, \(\mu \frac{d}{d\mu } \mathcal {O}(g_r(\mu ), \mu ) = 0\), analogous to (11). The flow of \(g_r(\mu )\) to lower renormalization scales is often interpreted not as a coarse graining (as in the case of the Wilsonian RG), but rather as expressing an invariance of observables under a change of parametrization. What I am urging here is an interpretation of the Wilsonian RG along lines more closely analogous to the interpretation of the renormalized RG.
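As a schematic one-loop illustration of this invariance (a toy example of my own, with a generic beta-function coefficient \(b_0\) rather than anything specific to a particular theory), suppose an observable and a running coupling are related by

$$\begin{aligned} \mathcal {O}(Q) = \alpha (\mu )\left[ 1 + b_0\, \alpha (\mu ) \ln \frac{Q^2}{\mu ^2}\right] + O(\alpha ^3), \qquad \mu \frac{d \alpha }{d \mu } = 2 b_0\, \alpha ^2 + O(\alpha ^3). \end{aligned}$$

Differentiating and using the running equation, the explicit \(\ln \mu\) dependence cancels against the \(\mu\) dependence of \(\alpha (\mu )\): \(\mu \frac{d}{d\mu } \mathcal {O}(Q) = 2 b_0 \alpha ^2 - 2 b_0 \alpha ^2 + O(\alpha ^3) = O(\alpha ^3)\). The observable is renormalization-scale independent to the order computed, even though neither \(\alpha (\mu )\) nor \(\mu\) individually is physically meaningful.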

3.2 Motivations for Abandoning Scheme Independence in the “Received” View of Effective Field Theory

The loosening of scheme independence that underpins the naturalness principle can be motivated on the basis of two distinct rationales. The first rationale suspends the principle of regularization scheme invariance, while the second suspends the principle of renormalization scheme invariance.

The first rationale, associated with a particular set of dogmas about the interpretation of effective field theories, supposes that the existence of a physical cutoff for an EFT—understood as the energy scale above which the EFT ceases to generate empirically accurate predictions—establishes a particular finite value of the cutoff regulator \(\Lambda\) as the uniquely correct regularization scheme, and the bare parameters of this scheme as the “fundamental parameters” of the EFT. This line of reasoning rests heavily on analogies with condensed matter field theory, where it is uncontroversially true that there exists a single physically preferred regulator and bare parametrization. Specifically, the regulator in such cases is the lattice spacing associated with some real atomic lattice, and the physical bare parameters associated with this regulator characterize the dynamics and interaction of the atoms at different lattice sites. Since the displacements from equilibrium at the different lattice sites are governed by quantum mechanical laws, and can be described collectively as a field, the configuration of the lattice as a whole can be modeled as a quantum field, albeit one whose oscillations are restricted to modes of wavelength greater than the lattice spacing. This picture is physically intuitive, and, crucially, free of divergences. As Wilson observed to powerful effect, there exist many deep and fruitful mathematical analogies between condensed matter and statistical field theory on the one hand and the quantum field theories of high-energy physics on the other. The notion of a preferred fundamental parametrization in the effective field theories of high-energy physics results from efforts to transplant this intuitive, mathematically well-defined picture of quantum fields in the context of condensed matter systems to the description of elementary particle interactions, where notorious divergences have confounded both the mathematical definition and physical interpretation of quantum field theory models. However, both [12] and [10] have recently argued that analogies between condensed matter and high-energy quantum field theory are merely formal, and that there are many important physical dis-analogies between the theories, a conclusion that is further emphasized and elaborated in the current discussion and in [23].

The second rationale, which motivates the suspension of renormalization rather than regularization scheme independence, identifies \(\overline{MS}\)-renormalized parameters as the “physical parameters” of an EFT, and the running \(\overline{MS}\) mass \(m(\mu )\) as the “effective mass at scale \(\mu\).” In traditional approaches to perturbative renormalization, the regulator is always removed and bare parameters are divergent. Given their infinite nature, bare parameters have no claim to physicality in this approach. By comparison, it is tempting to regard renormalized parameters as the “physical parameters” of the QFT, simply by virtue of their finiteness.Footnote 10 Furthermore, it is tempting to regard the renormalized mass \(m(\mu )\) as an effective mass at some physical scale \(\mu\), by analogy both with the effective masses of quasi-particles in condensed matter field theory and with the effective charge of the electron at different scales.

If one accepts these rationales for adopting a preferred parametrization in the EFTs of elementary particle physics, it is easier to see why delicate cancellations or sensitivities with respect to the bare parameters or renormalized parameters at the scale of the physical cutoff would be problematic, for reasons discussed in the previous section. However, the following subsections explain why these two rationales should not be accepted.

3.3 The Case for Abandoning a Physically Preferred Parametrization

For reasons described in Sect. 3.2, a certain way of interpreting analogies between high-energy and condensed matter theory has facilitated the notion that for every EFT there exists a single physically preferred regulator and bare parametrization—despite widespread acceptance of the manifestly opposing principle that the physical content of a QFT should be regularization and renormalization scheme independent. But, as [18] have emphasized in the context of Wilson’s analysis of the renormalization group, it is one thing to recognize points of mathematical analogy between two domains of physics, and another thing entirely to carry over aspects of the physical interpretation of corresponding elements from one side of the analogy to the other. In this subsection, I will briefly reiterate, and then expand upon, the argument of [23]—namely, that supposing the existence of some set of real fundamental bare parameters is an interpretational step that goes well beyond what is justified by empirical evidence, and beyond what is needed to ensure that the theory is mathematically well-defined.

With regard to the existence of a preferred regulator and a fundamental parametrization, the cases of high-energy and condensed matter field theory are not analogous. In the condensed matter case, the preferred regulator and bare parametrization are empirically accessible; in the high-energy case, there is no evidence to support the existence of a fundamental parametrization or preferred regularization. The notion that in high-energy physics, the existence of a physical cutoff scale \(M_{phys}\) establishes a unique physically correct value for the regulator \(\Lambda\) rests on a false conflation of two sharply distinct notions of cutoff. \(M_{phys}\) and \(\Lambda\) are two distinct types of quantity. \(M_{phys}\) is an observable, which might be associated for example with the pole mass of a heavy particle that has been integrated out of some more general theory (or the energy scale at which the internal structure of a composite particle in the theory starts to be probed). \(\Lambda\) is an arbitrary parameter whose value, as we have seen, can be adjusted without altering the physical predictions or the domain of empirical validity of the EFT in question. An infinite range of values of \(\Lambda\) is consistent with the same set of empirical predictions for the EFT in question, and with the same value of \(M_{phys}\). By contrast, a change in the value of \(M_{phys}\) has direct empirical consequences, inasmuch as it alters the domain of empirical validity of the EFT in question, as well as any scattering amplitudes involving the heavy field propagator of the full theory.

This point of view is further supported by the fact that even in cases where it is possible to directly probe physics at and above the physical cutoff scale of an EFT, there is no sense in which one is able to directly measure the values of the theory’s bare parameters in the way that one can in the context of a condensed matter system. In high-energy physics, even at scales near the cutoff, one only ever measures scheme-independent quantities; the bare parameters are simply assigned whatever values are needed to maintain consistency with the measured values of these quantities, given the arbitrarily chosen regulator. In the physical interpretation of quantum field theory, one can—and should—simply choose to be a realist only about scheme-independent quantities.

A second argument, which further underscores the dis-analogy between the cutoff mechanisms of high-energy and condensed matter field theories, comes from the practice of performing matching calculations between a high-energy theory and its low-energy effective field theory. The crucial point is that these matching calculations only require the theories to agree on scheme-independent quantities like pole masses and scattering amplitudes—not on the values of Lagrangian parameters (bare or renormalized) or other scheme-dependent quantities. In principle, any parametrization scheme, employing any regulator and any renormalization scheme, can be employed in either theory during the matching calculation without affecting the physical predictions of either theory or their agreement at low energies. In this sense, the empirical cutoff established for the low-energy EFT by the high-energy theory is fully compatible with all choices of regulator for the low-energy EFT.
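A familiar tree-level sketch illustrates the point (the choice of example is mine, with signs and spinor structure suppressed and the two amplitudes written in terms of a common current–current structure \(J^{\mu } J^{\dagger }_{\mu }\)): in matching the electroweak theory onto the Fermi theory of weak interactions, what is equated is the low-momentum limit of a scattering amplitude,

$$\begin{aligned} \frac{g^{2}}{8\,(M_{W}^{2} - q^{2})}\, J^{\mu } J^{\dagger }_{\mu } \;\rightarrow \; \frac{g^{2}}{8 M_{W}^{2}}\, J^{\mu } J^{\dagger }_{\mu } \;\equiv \; \frac{G_{F}}{\sqrt{2}}\, J^{\mu } J^{\dagger }_{\mu } \quad (q^{2} \ll M_{W}^{2}), \qquad \text {so that} \qquad \frac{G_{F}}{\sqrt{2}} = \frac{g^{2}}{8 M_{W}^{2}}. \end{aligned}$$

Because the condition is imposed on an amplitude at momenta well below \(M_{W}\), a scheme-independent quantity, the same matching can in principle be carried out with any choice of regulator or renormalization scheme on either side of the equation.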

The physical breakdown of an EFT at high energies is reflected not in the value of any regulator, but in the fact that the \(p_i\)-dependence of the Fourier-transformed correlation functions \(\tilde{G}^{N}(p_1,..., p_N; g(\Lambda ), \Lambda )\), which are invariant under changes in \(\Lambda\), no longer captures the functional dependence exhibited in measured scattering amplitudes. Despite a certain superficial similarity, the parameter \(\Lambda\) is largely unrelated to the actual physical cutoff scale \(M_{phys}\) of an EFT.

While the value of \(\Lambda\) is essentially unrelated to the empirical cutoff scale of the EFT, there does exist one loose constraint that connects the two. If the RG flow of \(g(\Lambda )\) fails to be mathematically defined beyond some scale \(\Lambda _{max}\), the EFT simply generates no predictions—whether empirically correct or not—for physical momentum scales (scales determined by the arguments \(p_i\) appearing in Fourier-transformed correlation functions) above \(\Lambda _{max}\). This occurs for example in theories such as QED that possess a Landau pole. The value \(\Lambda _{max}\) thus constitutes yet another type of cutoff, in the particular sense that it serves as the upper limit of scales at which the EFT can conceivably generate any predictions at all, whether correct or not. Apart from the loose requirement that \(M_{phys} < \Lambda _{max}\), the presence of a theoretical cutoff \(\Lambda _{max}\) is unrelated to the physical cutoff \(M_{phys}\) of the theory. \(\Lambda _{max}\) is an intrinsic mathematical feature of the EFT model, while \(M_{phys}\) concerns the relationship of the EFT model to the phenomena.

3.3.1 Analogy with the Electromagnetic Ether

The notion that the existence of a physical cutoff establishes a physically preferred regularization scheme and a physically preferred value for that regulator rests solely on a particular physical picture of quantum field theory that is compelled neither by considerations of empirical adequacy, nor by considerations of mathematical well-definedness, nor even by the need for a realist interpretation, since one can choose to be a realist only about scheme-independent quantities. The concept of a physically preferred regulator and a physically preferred set of “fundamental parameters” to go along with it constitutes extraneous theoretical baggage, whose sole purpose is to prop up a particular metaphysical picture of quantum field theory.

In this sense, the notion of a physically preferred fundamental parametrization is closely analogous to the notion of a physically preferred reference frame presupposed by the electromagnetic ether theory of the 19th century. Although the predictions of Maxwell’s electromagnetic theory were invariant under changes of inertial reference frame, it was widely supposed by Maxwell, Lorentz, and others that the electromagnetic waves predicted by the theory must be waves in some material medium, called the “ether.” This notion was rooted in a particular physical picture of wave phenomena drawn from the theory of sound waves in material media, where the medium itself unavoidably establishes an absolute standard of rest. With the advent of special relativity, it was understood that electromagnetic waves constitute a qualitatively distinct type of wave that does not require the existence of a medium with a preferred standard of rest. Choosing to prioritize the symmetries inherent in electromagnetic and mechanical phenomena over commitment to a particular metaphysical picture that violated those symmetries, the physics community ultimately chose to jettison the notion of the ether and a preferred reference frame.

By analogy, there exists a “symmetry” of sorts associated with the regularization and renormalization scheme independence of scattering phenomena. Just as all reference frames generate the same physical predictions in classical electromagnetic theory, so all regularization and renormalization schemes generate the same physical predictions in quantum field theory. Just as the boost invariance of electromagnetic phenomena should make us skeptical of any claims about a physically preferred reference frame, so the scheme invariance of scattering phenomena should make us skeptical of any claims about a physically preferred regularization or renormalization scheme. From this perspective, the notion of “fundamental parameters” should be regarded with the same skepticism as the ether, for largely the same reasons.

3.4 Effective Field Theory Without a Preferred Parametrization

In the absence of a preferred parametrization, there emerges an interpretation of the path integral and Wilsonian renormalization group, and of their connections to renormalized, regulator-independent parametrizations, that is in some cases dramatically different from the interpretations of these elements within a view of EFTs based on a preferred set of parameters.

3.4.1 Interpretation of the Path Integral and Wilsonian RG

The notion that all parametrizations \(g(\Lambda )\) along the same RG trajectory, for any value of \(\Lambda\), are physically equivalent may appear to run contrary to one widespread intuition about the RG, which holds that the flow to lower momenta constitutes a coarse-graining operation in which information about physics at high-energy scales is lost. There is one sense in which this claim is true, and another in which it is misleading.

Regarding the latter, there exist obvious limitations to the intuition that the RG flow to lower momenta constitutes an irreversible, many-to-one coarse-graining operation that proceeds only from high to low energies. After all, it is only by running the RG flow up in energy, from low to high scales, that it is possible to infer the existence of a Landau pole in theories like QED (where the Landau pole lies well above the Planck scale). Because the Wilsonian RG equations,

$$\begin{aligned} \Lambda \frac{dg_i}{d \Lambda } = \beta _i(g(\Lambda ), \Lambda ) \end{aligned}$$
(12)

are first-order in \(\Lambda\), one can freely move in either direction along an RG trajectory, over all \(\Lambda\) for which that particular solution is mathematically defined. A point in parameter space lies on a unique RG trajectory (although the trajectories do tend to bunch up in the flow to smaller momenta), and the mapping of a point in parameter space under the RG is unique both for flows to higher and lower momenta.

However, there does remain an important asymmetry between flows down to lower momenta and flows up to higher momenta: namely, that for an arbitrary initial condition \(g_0\) at \(\Lambda _0\), a solution to the RG equations exists for all \(0< \Lambda < \Lambda _0\), but not for all \(\Lambda > \Lambda _0\). In the general case, the RG trajectory will cease to be mathematically defined at some finite momentum scale above \(\Lambda _0\), as occurs in the case of a Landau pole.Footnote 11 This is mathematically analogous to the heat equation, whose generic initial conditions can be evolved arbitrarily far into the future but not arbitrarily far into the past; when attempting to flow the initial conditions backward in time, one will typically reach a time at which the solution ceases to exist. Indeed, it is well-known that the RG flow equation can be formulated as a heat equation in the space of action functionals \(S[\phi ]\), where \(\phi\) is a classical field configuration.
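To make these two features concrete, the following minimal numerical sketch (a toy illustration of my own, not drawn from the text, using the familiar one-loop QED-like beta function \(\beta (\alpha ) = 2\alpha ^{2}/3\pi\) and an arbitrarily chosen starting coupling) integrates the flow upward toward its Landau pole and then back down, confirming that the first-order flow is freely reversible along the trajectory.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy one-loop flow, d(alpha)/d(ln Lambda) = 2 alpha^2 / (3 pi), whose analytic
# solution 1/alpha(Lambda) = 1/alpha(Lambda_0) - (2/(3 pi)) ln(Lambda/Lambda_0)
# blows up (a Landau pole) at t_pole = ln(Lambda/Lambda_0) = 3 pi / (2 alpha_0).
def beta(t, alpha):                     # t = ln(Lambda / Lambda_0)
    return 2.0 * alpha**2 / (3.0 * np.pi)

alpha_0 = 0.1                           # hypothetical starting coupling at Lambda_0
t_pole = 3.0 * np.pi / (2.0 * alpha_0)  # analytic location of the Landau pole

# Flow *up* in Lambda, stopping just short of the pole: the coupling grows rapidly
# as the pole is approached.
up = solve_ivp(beta, [0.0, 0.95 * t_pole], [alpha_0], rtol=1e-10, atol=1e-12)
alpha_high = up.y[0, -1]
print(f"alpha flows up from {alpha_0} to {alpha_high:.4f} at ln(Lambda/Lambda_0) = {0.95 * t_pole:.2f}")

# Flow back *down*: because the equation is first order, the trajectory is freely
# reversible, and we recover the starting value of the coupling.
down = solve_ivp(beta, [0.95 * t_pole, 0.0], [alpha_high], rtol=1e-10, atol=1e-12)
print(f"flowing back down returns alpha = {down.y[0, -1]:.10f}")
```

The upward flow terminates at the scale beyond which the solution ceases to exist, while the downward flow simply retraces the same trajectory, in keeping with the heat-equation analogy above.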

A second asymmetry between flows to high and low momenta is that irrelevant operators are suppressed in the flow to lower momenta, causing trajectories to bunch together (but crucially, without actually intersecting). Thus, a small degree of imprecision in the specification of a theory’s parameters for small \(\Lambda\) can reflect a large measure of imprecision in the values of that theory’s parameters at large \(\Lambda\). This reflects the fact that two EFT models describing one and the same set of fields can differ widely in their description of physics at high energies while being virtually (but not exactly) indistinguishable in their description of physics at low energies.
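In the simplest setting (classical scaling near the Gaussian fixed point in four dimensions, neglecting anomalous dimensions; an illustrative choice of mine rather than anything specific to the text), the dimensionless coupling of an operator of mass dimension \(\Delta _i > 4\) behaves as

$$\begin{aligned} \tilde{g}_i(\Lambda _l) \;\approx \; \tilde{g}_i(\Lambda _h) \left( \frac{\Lambda _l}{\Lambda _h}\right) ^{\Delta _i - 4}, \qquad \Lambda _l < \Lambda _h, \end{aligned}$$

so trajectories that differ only in their irrelevant couplings at \(\Lambda _h\) are suppressed by powers of \(\Lambda _l/\Lambda _h\) and draw close together, without ever intersecting, as \(\Lambda\) is lowered.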

Crucially for my thesis here, neither of these asymmetries contradicts the basic point that one can flow freely up or down in \(\Lambda\) without loss of information so long as one remains on the same RG trajectory and does not make any approximations (say by neglecting terms that are small at low energy), and that this allows for the interpretation of \(\Lambda\) as an arbitrary reference scale, analogous to a choice of origin, whose value does not affect the scope or physical content of the theory.

It may seem puzzling to suggest that all parametrizations along the same RG trajectory are physically equivalent, given the common assumption that a path integral cut off at the scale \(\Lambda\), and the resulting correlation functions and observables, only describe phenomena at scales less than \(\Lambda\). In particular, it is often assumed that a path integral cut off at the scale \(\Lambda\) only describes scattering phenomena with external momenta \(p < \Lambda\). This perspective is motivated in part by analogies with condensed matter physics, where the atomic lattice spacing a establishes a hard momentum cutoff \(\Lambda \sim \frac{1}{a}\) on modes p that can propagate in the medium. On this interpretation, it seems clear that different parametrizations \(g(\Lambda )\) along the same Wilsonian RG trajectory cannot be physically equivalent, since the range of phenomena described by parametrizations \(g(\Lambda _l)\) for small values \(\Lambda _l\) of the cutoff (those with scattering energy less than \(\Lambda _l\)) constitutes a strict subset of the phenomena described by parametrizations \(g(\Lambda _h)\) for large values \(\Lambda _h\) of the cutoff.

However, shorn of physical interpretation, the mathematical definition of the finite-cutoff path integral does not constrain the values of external momenta p in N-point correlation functions \(\tilde{G}^{N}(p_1, ..., p_N)\) to lie below the scale \(\Lambda\) characterizing the upper limit of modes appearing in the path integral measure. To see this, note that from the formula

$$\begin{aligned} \tilde{G}^{(N)}(k_1, ... , k_N) = \frac{ (-i)^{N}}{Z[0]} \frac{\delta ^N}{\delta \tilde{J}(k_1) \ ... \ \delta \tilde{J}(k_N)}\bigg |_{J=0}Z[J], \end{aligned}$$
(13)

what matters most directly to the values of correlation functions—which ultimately determine the physical predictions of an EFT—is the dependence of the path integral Z[J] on the external source fields J. Moreover, the invariance condition (8) shows that this dependence remains unaltered irrespective of the chosen value of \(\Lambda\) along the theory’s RG trajectory.

To see this last point more explicitly, consider a general multivariate integral, of which a suitably regularized path integral can be considered a special case:

$$\begin{aligned} z(j_1, ..., j_N)&= \int dx_1 ... dx_L \ dx_{L+1} ... dx_N \ f_N(x_1, ...,x_L, x_{L+1}, ..., x_N, j_1, ..., j_{L}, j_{L+1}, ..., j_N) \nonumber \\&= \int dx_1 ... dx_L \ f_L(x_1, ..., x_L, j_1, ..., j_L, j_{L+1},..., j_N) \end{aligned}$$
(14)

where \(L< N\) and \(f_L(x_1, ..., x_L, j_1, ..., j_N) \equiv \int dx_{L+1} ... dx_{N} \ f_N(x_1, ..., x_N, j_1, ..., j_N)\). The fact that we have partially integrated over some of the variables \(x_i\) in the second line has no effect whatsoever on the derivatives of \(z(j_1, ..., j_N)\) with respect to the \(j_i\). We can integrate over all or none of the \(x_i\) without affecting the values of these derivatives; that is, L can take any value between 0 and N without affecting the value of z or its derivatives with respect to the \(j_i\). In other words, the derivative \(\frac{\partial z}{\partial j_i}\) is the same whether \(i \le L\), so that \(x_i\) is still explicitly integrated over, or \(i > L\), so that \(x_i\) has already been absorbed into \(f_L\). Now, replace the variables \(x_1, ..., x_N\) with the Fourier-transformed field modes \(\tilde{\phi }(k_1), ..., \tilde{\phi }(k_N)\) (ordered so that \(k_1< ...< k_N\)), the dividing index L with the cutoff \(\Lambda\), and the variables \(j_i\) with the source fields \(\tilde{J}(k_i)\). This correspondence underscores the possibility of an entirely different interpretation of \(\Lambda\), not as the physical cutoff, but as an arbitrary, unphysical demarcation between those degrees of freedom \(\tilde{\phi }(k_i)\) for which we have chosen to explicitly perform the integration and those for which we have not. There is nothing in the mathematical definition of the path integral or its relationship to correlation functions that constrains \(\Lambda\) to lie above (or below) the momenta \(k_i\) in the functional derivatives \(\frac{\delta ^N}{\delta \tilde{J}(k_1) \ ... \ \delta \tilde{J}(k_N)}\bigg |_{J=0}Z[J]\) that are taken when computing the Fourier-transformed correlation functions \(\tilde{G}(k_1, ..., k_N)\).
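The following minimal numerical sketch (a toy of my own with \(N=2\), one “low” variable \(x_1\), one “high” variable \(x_2\), and an arbitrarily chosen quartic coupling g) makes the point concrete: whether or not \(x_2\) is pre-integrated into an effective integrand, the generating function \(z(j_1, j_2)\) and its derivative with respect to the “high” source \(j_2\) come out exactly the same.

```python
import numpy as np

# Toy "two-mode" generating function
#   z(j1, j2) = \int dx1 dx2 exp(-(x1^2 + x2^2)/2 - g x1^2 x2^2 + j1 x1 + j2 x2),
# computed (i) with both variables integrated directly and (ii) with the "high"
# variable x2 pre-integrated into an effective integrand f1(x1; j1, j2).
g = 0.1
x = np.linspace(-8.0, 8.0, 1601)   # crude quadrature grid, wide enough for the Gaussian weight
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")

def integrand(j1, j2):
    return np.exp(-(X1**2 + X2**2) / 2 - g * X1**2 * X2**2 + j1 * X1 + j2 * X2)

def z_full(j1, j2):
    return integrand(j1, j2).sum() * dx * dx      # integrate over x1 and x2 in one go

def z_partial(j1, j2):
    f1 = integrand(j1, j2).sum(axis=1) * dx       # "integrate out" x2 first
    return f1.sum() * dx                          # then integrate over x1

eps = 1e-4
for zfun, label in [(z_full, "both modes explicit"), (z_partial, "x2 integrated out")]:
    dz_dj2 = (zfun(0.3, eps) - zfun(0.3, -eps)) / (2 * eps)   # finite-difference dz/dj2
    print(f"{label:20s}  z = {zfun(0.3, 0.0):.6f}   dz/dj2 = {dz_dj2:.6f}")
```

The two computations agree identically because they are merely regroupings of the same integral, which is precisely the content of (14): where one draws the line between variables that have already been integrated and those that have not leaves z, and hence all of its source derivatives, untouched.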

Viewed another way, the Wilsonian RG flow, shorn of physical interpretation, can be understood simply as the level set of the partition function \(Z[g, \Lambda , J=0)\), where the functional dependence of g on \(\Lambda\) is the dependence established by the Implicit Function Theorem. With the understanding that \(\Lambda\) does not represent the physical cutoff of the EFT defined by Z, but rather an arbitrary, freely adjustable parameter, and that any pair of values \(\Lambda\) and \(g(\Lambda )\) along a given RG trajectory generates exactly the same set of predictions for the theory’s observables, it is more reasonable to associate an EFT with a whole RG trajectory than with any single point on such a trajectory. This set of simple, purely mathematical facts about the path integral and Wilsonian RG supports a minimal interpretation of QFT in which only quantities that are invariant under the Wilsonian RG flow represent real physical matters of fact. As we have seen, correlation functions, and all quantities constructed from them, including scattering cross sections and pole masses, satisfy this invariance requirement, which can be understood as a special form of scheme independence. By contrast, the values of the parameters \(\Lambda\) and \(g(\Lambda )\) do not meet this requirement, and so should not be counted as physical matters of fact.

3.4.2 Connection to Renormalized Parametrizations and Other Regularization Schemes

Occasionally, a distinction is drawn between “Wilsonian EFTs” and “continuum EFTs.” The former are defined by finite-cutoff bare parametrizations, while the latter are defined in terms of a renormalized parametrization such as the \(\overline{MS}\) scheme. On some ways of interpreting this distinction, these are understood as entirely distinct theories with distinct ontologies, suggesting that, say, Wilsonian QED describes a different physical reality from continuum QED; see, e.g., Bain [3]. The perspective adopted here is that for a given theory such as the Fermi theory of weak interactions, QED, or the Standard Model, “Wilsonian EFTs” and “continuum EFTs” are not distinct theories, but rather distinct, physically equivalent ways of parametrizing one and the same theory. The EFT’s ontology, understood as the range of things about which the theory takes there to be physical matters of fact, is independent of the parametrization scheme. There are not “Wilsonian EFTs” and “continuum EFTs” so much as finite-cutoff-based parametrizations and renormalized parametrizations of a given EFT, both of which describe the same physical state of affairs.

3.4.3 Defining the Hilbert Space of an EFT

In the view of EFTs according to which an EFT is defined once and for all at the value of \(\Lambda\) associated with the theory’s physical cutoff scale, there are multiple ways to define the Hilbert space of the EFT, depending on the precise form of the cutoff regulator. In the simple case of a scalar field theory with a lattice regulator, one can set the value of the lattice spacing to correspond to the physical cutoff scale of the theory, and then assign a field degree of freedom \(\phi (\mathbf {x})\) and an associated Hilbert space \(\mathcal {H}_{\mathbf {x}}\) to each point on the lattice. The Hilbert space of the EFT is then the tensor product Hilbert space \(\otimes _{\mathbf {x}}\mathcal {H}_{\mathbf {x}}\). According to the alternative perspective advocated here, the cutoff scale \(\Lambda\) is an arbitrary and unphysical parameter that can be varied without altering the physical content of the theory. The definition of the Hilbert space of the theory should also reflect this invariance, rather than being tied to the particular value of \(\Lambda\) associated with the physical cutoff. A follow-up to the current manuscript describes in detail a way of constructing an EFT’s Hilbert space such that the parameter \(\Lambda\) occurring as the integration limit in the path integral makes no appearance in the mathematical definition of the Hilbert space [24].
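For concreteness, here is a minimal sketch of the lattice-based tensor-product construction described above (a toy of my own, with each local Hilbert space \(\mathcal {H}_{\mathbf {x}}\) truncated to a finite dimension d purely so that the matrices are finite, for a one-dimensional lattice of three sites):

```python
import numpy as np

d, n_sites = 4, 3                           # toy values: truncated local dimension, number of lattice sites
a = np.diag(np.sqrt(np.arange(1.0, d)), 1)  # truncated annihilation operator on a single site
phi_local = (a + a.T) / np.sqrt(2.0)        # truncated field operator phi(x) at one lattice site

def site_operator(op, site):
    """Embed a single-site operator into the tensor product of all lattice-site Hilbert spaces."""
    full = np.array([[1.0]])
    for s in range(n_sites):
        full = np.kron(full, op if s == site else np.eye(d))
    return full

phi_0, phi_1 = site_operator(phi_local, 0), site_operator(phi_local, 1)
print("dimension of the tensor-product Hilbert space:", phi_0.shape[0])   # d ** n_sites
print("field operators at distinct sites commute:", np.allclose(phi_0 @ phi_1, phi_1 @ phi_0))
```

On the view advocated here, any such lattice structure is tied to a particular, arbitrary choice of \(\Lambda\), and so should not be built into the physical definition of the theory’s Hilbert space.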

3.5 Realism and the Physical Interpretation of EFTs

A realist interpretation of quantum field theory attempts to establish a definite view as to which quantities in the QFT model represent real physical matters of fact. It is useful to distinguish three categories of quantity in an EFT: (1) quantities that are uncontroversially a matter of convention, such as one’s choice of gauge, (2) quantities that uncontroversially represent matters of physical fact, such as pole masses and cross sections, and (3) quantities on which there is no consensus regarding whether they fall into category (1) or (2). The Lagrangian parameters of an EFT belong to category (3). The “received” understanding of EFTs, which presupposes the existence of a physically preferred parametrization, places some particular set of Lagrangian parameters, associated with some particular regularization or renormalization scheme, into category (2). The perspective advocated here places the Lagrangian parameters into category (1), by virtue of their dependence on one’s choice of scheme, and of the assumption that the choice of regularization or renormalization scheme is in important respects analogous to one’s choice of gauge, inasmuch as it reflects a qualitatively similar sort of representational redundancy. The program of giving a realist interpretation of QFT within the context of the “received” view of EFTs, which adheres less strictly to the principles of regularization and renormalization scheme invariance, is described in [11, 22, 25, 30, 35].

3.6 Implications for the Naturalness Principle

On the understanding of effective field theory sketched in this section, in which parametrizations associated with different regularization schemes, renormalization schemes, cutoff scales, and renormalization scales, all constitute physically equivalent representations of the same physical state of affairs, and in which there are no “fundamental parameters” or “physical parameters,” the delicate cancellations and sensitivities associated with the naturalness principle are mere artifacts of mathematical convention. That is, they can be transformed away by adopting an alternative but physically equivalent convention. An important lesson that should be drawn from the naturalness principle’s failed prediction of BSM signatures at the LHC is that the foundations of effective field theory are more aptly characterized by this point of view than by a perspective rooted in the idle assumption that there exists a physically preferred (but never observed) parametrization for each EFT.

The cosmological constant problem, which is often thought to arise from an unnaturally delicate cancellation between the bare vacuum energy and quantum corrections that are quartic in the cutoff, is frequently likened to the Higgs naturalness problem. While this problem is outside the scope of the present discussion, a point of view on the cosmological constant problem analogous in important respects to the one advocated here in the context of the Higgs naturalness problem has been defended in [6, 17, 18].

4 Conclusion

I have argued that the formulations of the naturalness principle most relevant to the prediction of BSM signatures at the LHC rest on a suspension either of the principle of regularization scheme independence or of renormalization scheme independence, through the assumption that some particular parametrization constitutes the physically preferred set of “fundamental parameters” or “physical parameters” of the theory. However, the grounds for believing in the existence of any such parametrization are thin, since the actual confirmed physical predictions of an EFT are invariant under changes of parametrization associated with different schemes. Since the assumption of a preferred parametrization constitutes an idle wheel in the theoretical framework of effective field theory for high-energy physics, I argue that it should be abandoned. As discussed above, the resulting view of effective field theories also requires the revision of certain influential dogmas about the interpretation of the cutoff regulator \(\Lambda\), the Wilsonian renormalization group, the finite-cutoff path integral, the connection between bare and renormalized parametrizations, and the definition of Hilbert space in an EFT. Given these revisions, the delicate cancellations and sensitivities associated with the Higgs naturalness principle are seen to be little more than unphysical artifacts that can be transformed away through an alternative, but physically equivalent, choice of mathematical convention.