Naturalness: past, present, and future

We assess the state of naturalness in high-energy physics and summarize recent approaches to the three major naturalness problems: the cosmological constant problem, the electroweak hierarchy problem, and the strong CP problem.


Introduction
Writing about naturalness in the current era is fraught with peril, and with good reason. Of the three major naturalness problems in high-energy physics - the cosmological constant problem, the electroweak hierarchy problem, and the strong CP problem - it is difficult to reconcile natural solutions of the first with our understanding of physics at the eV scale, while natural solutions of the second are under intense pressure from the LHC's exploration of physics at the TeV scale. The third problem is in somewhat better condition, with decisive experimental tests still in the future, but its prevailing natural solutions face theoretical challenges of their own. In surveying this state of affairs, it is hard not to be pessimistic about the future prospects of naturalness-based reasoning.1 But it bears remembering that these three naturalness problems are the three outstanding naturalness problems in high-energy theory - the ones whose solutions remain unknown. So many measured quantities in the Standard Model are consistent with naturalness-based reasoning, and in notable cases were predicted by it. When focusing on the open problems of naturalness, we are prone to forgetting its past solutions. In truth, the abundant past validation of naturalness is a compelling reason to take it seriously when confronting the problems at hand.

This perspective - that naturalness is a pragmatic and empirically validated strategy for discovering new physics - is far from the only rationale that has been used to support natural reasoning. Throughout its history, naturalness has been variously framed as a pragmatic strategy, a bedrock principle, an aesthetic criterion, and a catastrophic folly. In truth, it is a bit of each. Faced with an essentially infinite space of candidates for the theory of Nature and a very finite number of experimental tests (at least in one lifetime), physicists must come up with strategies to focus experimental efforts in directions that seem most likely to yield discovery. Occasionally these strategies emerge from considerations of simplicity, or admit codification into robust principles. Frequently, they do not. Ultimately, they are judged less by these considerations than by their success or failure in predicting new physics.

1 There is another sense in which writing about naturalness is fraught with peril: there is already a great deal of excellent writing on the general subject. These include (among many others) the classic essay by Nelson [1]; Giudice's 2008 [2] and 2013 [3] essays on naturalness and the hierarchy problem; Murayama's ICTP summer school lectures on supersymmetry [4]; Luty's TASI lectures on supersymmetry breaking [5]; Martin's supersymmetry primer [6] and his forthcoming book with Herbie Dreiner and Howie Haber; Wells' articles on fine-tuning and naturalness [7-11]; Cohen's TASI lectures on effective field theory [12] and Burgess' textbook on the same topic [13]; Dine's [14] and Hook's [15] TASI lecture notes on the strong CP problem; McCullough's TRISEP lecture notes on the hierarchy and strong CP problems [16]; Weinberg's review of the cosmological constant problem [17]; Polchinski's Solvay lecture [18], Bousso's TASI lectures [19], and Burgess' Les Houches lectures [20] on the same topic; Hebecker's lecture notes [21] and book [22] on naturalness and the string landscape; and Koren's Sakurai Dissertation Prize-winning thesis on the hierarchy problem [23]. This white paper is also far from the only Snowmass contribution dedicated to aspects of naturalness; see e.g. [24-30], many of which go into much greater detail about specific approaches to naturalness problems than this paper.
In this white paper, at least, we will focus primarily on naturalness as a pragmatic strategy for discovery. It is a strategy that succeeds again and again as we go up through the mass scales of Standard Model particles, making it reasonable to expect its continued success in addressing the problems at hand. Of course, to the extent that this rationale is inductive, we should be mindful of the well-known problems of induction. Bertrand Russell perhaps put it best [31]: "The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of Nature would have been useful to the chicken."
The many successful postdictions and handful of successful predictions of naturalness do not guarantee its continued success. Perhaps the cosmological constant problem and electroweak hierarchy problem are signaling that "more refined views as to the uniformity of Nature" would have been useful to the particle theorist.
But even that would be a useful outcome of natural reasoning. Whether it be the cosmological constant, the Higgs mass, or the theta angle of QCD, there are essentially two possibilities: either the parameter in question is natural, and we need to understand the mechanism; or the parameter is unnatural, and we need to develop the "more refined views as to the uniformity of Nature." In the former case, we may already have come up with the essential mechanism and simply be in need of experimental direction to confirm it, or we may need to discover the mechanism in its entirety. In the latter case, the failure of naturalness is interesting in its own right: it signifies that Nature works in a way that is fundamentally different from what previous examples have led us to expect.
Of course, this does not justify thinking about naturalness ad infinitum. Challenges to the naturalness strategy are both beneficial and necessary. They serve to sharpen our understanding of what naturalness is, and what it is not. And ultimately, if we have exhausted all promising paths, these challenges should persuade us to turn our undivided attention towards other strategies for discovery.
But that day has not yet come. The three marquee naturalness problems are, as yet, undecided. In each case, some of the most appealing solutions have been experimentally tested and found wanting. From this we have learned something about the paths that Nature chose not to take, and been compelled to go off in search of paths less traveled. And what paths we have discovered! The past decade has seen a proliferation of new ideas about how the naturalness strategy might play out, and the coming decade will surely see even more. For the most part, the attendant experimental tests are close at hand, though often in places we had not yet thought to look.
The primary goal of this white paper is to sketch the new paths for naturalness that have been discovered in the past decade, point to some of the possible paths that may be explored in the coming decade, and highlight the wealth of future experimental tests. To this end, we begin with a summary of the historical reasoning behind naturalness as a pragmatic strategy, its codification in the language of technical naturalness and 't Hooft naturalness, and its susceptibility to anthropic reasoning. We then turn to the three naturalness problems of the era: the strong CP problem, the cosmological constant problem, and the electroweak hierarchy problem. When these three problems are discussed together, they are typically introduced in order of ascending or descending dimensionality of the operators involved. Here we will take an alternative approach, running from the theta angle of QCD, to the cosmological constant, to the mass of the Higgs, in the hope of drawing a clearer line through common solutions to the different problems. In each case we summarize the problem and historical approaches before turning to recent developments. We conclude by looking to the future, with an eye towards the promise of developments on the horizon.

A brief history of naturalness
Although our aim is to look forwards, rather than backwards, there are some instructive lessons to be learned from exploring the history of naturalness. What we would now recognize as naturalness arguments have a long history, dating back at least to Copernicus.2 Within the field of high-energy physics, definite notions of naturalness emerged in the 1930s. Perhaps the two most striking exemplars are Weisskopf's calculation of the self-energy of the electron [32,33] and Dirac's Large Numbers Hypothesis [34]. While both are examples of what we would now classify as naturalness-based reasoning, they are wildly different in both their nature and their effectiveness, and it is instructive to compare them.
Weisskopf began with the observation that the quantum theory of a relativistic electron with an ultraviolet cutoff Λ suffers from a number of divergent contributions to the electron self-energy. The most striking is the contribution δm_e ∼ e^2 Λ, which is the familiar divergence of the classical Coulomb self-energy corresponding to an electron radius r ∼ 1/Λ. The problem is exacerbated by the introduction of quantum mechanics, as quantum fluctuations of the electromagnetic field around the electron contribute as δm_e ∼ e^2 Λ^2/m_e. Of course, these divergences can all be absorbed in a proper renormalization procedure, which wouldn't be developed until some time after Weisskopf's calculation. But if one supposes that Λ carries some physical significance, for Λ ≫ m_e this naively implies finely-tuned cancellations among large contributions δm_e ≫ m_e to the electron self-energy in order to obtain the measured value of m_e.
However, there is an alternative: additional degrees of freedom could appear to modify the calculation and avoid the need for fine-tuning. And indeed, this is precisely what Weisskopf found. As Dirac had proposed, a relativistic quantum theory of the electron requires a new degree of freedom, the positron (whose appearance can be attributed to a new symmetry of the quantum theory, CPT). The appearance of the positron modifies the calculation of the electron self-energy, and the power divergences cancel between the virtual contributions of electrons and positrons. All that remains is a logarithmic sensitivity, δm_e ∼ (3e^2/2π) m_e log(Λ/m_e), such that δm_e does not greatly exceed m_e even for Λ ≫ m_e. In his second paper on the subject [33], Weisskopf considered the analogous self-energy of a charged scalar. In this case there are no additional degrees of freedom that appear to cancel the divergences. Weisskopf then posited precisely the sort of naturalness argument that we would recognize today, taking the critical length a ≡ Λ^-1 to be set by the mass of the scalar itself [33]: "This may indicate that a theory of particles obeying Bose statistics must involve new features at this critical length, or at energies corresponding to this length; whereas a theory of particles obeying the exclusion principle is probably consistent down to much smaller lengths or up to much higher energies."
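The mildness of this logarithm is easy to quantify. As an illustrative sketch (the coefficient is the leading-order one quoted above, with e^2 = α in Gaussian natural units, and taking the cutoff all the way to the Planck scale is our own illustrative choice):

```python
import math

# delta_m_e / m_e ~ (3 e^2 / 2 pi) log(Lambda / m_e), with e^2 = alpha in Gaussian
# natural units, evaluated for an illustrative cutoff at the Planck scale.
alpha = 1 / 137.036      # fine-structure constant
m_e = 0.511e-3           # electron mass in GeV
Lambda = 1.22e19         # illustrative cutoff: the Planck scale in GeV

delta_m_over_m = (3 * alpha / (2 * math.pi)) * math.log(Lambda / m_e)
print(f"delta_m_e / m_e ~ {delta_m_over_m:.2f}")   # a modest fraction of m_e
```

Even with the cutoff taken to the Planck scale, the correction is only of order 20% of the electron mass, which is the sense in which fermion masses are natural.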
Of course, Weisskopf's conclusions should be viewed through the subsequent lens of renormalization. Properly speaking, the divergences he encountered and parameterized with a cutoff can be absorbed by a suitable renormalization prescription. This is perfectly adequate if we only expect our theory to relate measurements in different channels and at different scales. Thus the perfectly sensible observation that the Standard Model on its own does not suffer from a hierarchy problem - it simply has parameters which are fixed by measurements, and divergences encountered in loop calculations are duly absorbed by renormalization. But if we expect the fundamental theory to be finite and fully predictive, then Weisskopf's divergences take on a different character entirely. From this perspective, the divergences arise because we are only computing in a subset of the full theory, and their existence signals the approximate size of finite, physical contributions from the missing parts of the theory.
Given the key role that Dirac played in Weisskopf's work, it is surprising that the conclusions Dirac drew from his own contemporary natural reasoning were quite far off the mark. Dirac's underlying expectation was that "any two of the very large dimensionless numbers occurring in Nature are connected by a simple mathematical relation, in which the coefficients are of the order of magnitude unity" [34]. Although this expectation ultimately underlies more modern notions of naturalness, Dirac's inferences took their own path. Dirac understood that there was a mass scale associated with gravity, M_Pl ∼ 10^19 GeV, as well as a mass scale associated with the proton, m_p ∼ 1 GeV, and wished to understand why m_p ≪ M_Pl. In Dirac's own framing, the goal was to explain why G m_p^2/(ħc) ∼ 5 × 10^-39. Dirac noted that the Hubble age of the universe in proton units was about T m_p c^2/ħ ∼ 10^42, and that the mass of the universe out to its visible limits was about M/m_p ∼ (10^40)^2. To him this suggested that there was a causal connection between dimensionless constants and powers of T. Since T changes in time, that also implies that fundamental constants change in time, e.g., that G evolves as 1/t and that M evolves as t^2. He proceeded to develop an elaborate theory of cosmology around this idea, which Nature does not support.
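Dirac's coincidences are easy to reproduce with modern values. A quick numerical sketch (Dirac used the Hubble age of his own day; all inputs here are rounded modern ones):

```python
# Dirac's large-number coincidences with rounded modern inputs (SI units).
G = 6.674e-11        # Newton's constant, m^3 kg^-1 s^-2
hbar = 1.055e-34     # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
m_p = 1.673e-27      # proton mass, kg
T = 4.35e17          # age of the universe (~13.8 Gyr), s

# Gravitational coupling of two protons: G m_p^2 / (hbar c)
grav_coupling = G * m_p**2 / (hbar * c)

# Age of the universe in units of the proton timescale hbar / (m_p c^2)
age_in_proton_units = T * m_p * c**2 / hbar

print(f"G m_p^2/(hbar c) ~ {grav_coupling:.1e}")        # ~10^-39
print(f"T m_p c^2/hbar   ~ {age_in_proton_units:.1e}")  # ~10^42
```

The two large numbers are indeed comparable, which is the coincidence Dirac sought to explain; the modern resolution runs through dimensional transmutation rather than time-varying constants.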
Although the true explanation for the proton mass eluded Dirac, we now understand it to be a beautiful triumph of naturalness criteria. The answer is that the proton mass is dynamically generated by confinement, which in turn arises from the logarithmic evolution of a dimensionless coupling, which itself is the manifestation of a violation of symmetry - in this case, (classical) conformal symmetry. This phenomenon, dimensional transmutation, explains the existence of exponentially different scales. So Dirac's question was a good one, and the ultimate answer is a triumph of natural reasoning. The fact that Dirac's own answer was spectacularly incorrect carries an important lesson: the failure of a specific answer to a naturalness problem does not signify the failure of the problem itself.
In the meantime, Weisskopf's reasoning succeeds again and again as we proceed up in scale from the electron. Every fermion we encounter enjoys the same resolution to its self-energy puzzle as the electron. Although an apparently fundamental scalar would not appear until the discovery of the Higgs, the light pseudo-scalar bound states of the strong interactions provide a clear validation of Weisskopf's reasoning for scalars. The lightest of the charged mesons, the π^±, experience precisely the same divergent contribution to their self-energy that Weisskopf computed in 1939. As this is not shared by the neutral π^0, the divergent electromagnetic contribution to the self-energy can be framed in terms of the difference in the squared masses, m_{π^±}^2 − m_{π^0}^2 ∼ (3α/4π) Λ^2. Given the size of the charged-neutral meson splittings, m_{π^±}^2 − m_{π^0}^2 ∼ (35.5 MeV)^2, we expect the loop should be cut off around 850 MeV if electromagnetic loops explain the mass difference. Lo and behold, the ρ meson enters at 775 MeV, which provides a cutoff for the effective theory as the harbinger of compositeness. In fact, the argument can be made even more precise. Using Weinberg's sum rules and assuming the lightest vector (ρ) and axial vector (a_1) mesons dominate, the cutoff dependence is replaced by dependence on the masses of the vector and axial vector mesons [35]: m_{π^±}^2 − m_{π^0}^2 ≃ (3α/4π) [m_ρ^2 m_{a_1}^2/(m_{a_1}^2 − m_ρ^2)] log(m_{a_1}^2/m_ρ^2). Although the naive cutoff dependence in the low-energy theory of the pions alone is unphysical (depending on the choice of regularization, and entirely removable by renormalization), it nonetheless provides a useful proxy for the dependence on physical scales in a more complete theory. Of course, both the electron and charged pion masses are postdictions: in each case, the new features appearing at Weisskopf's "critical length" were already known, and the self-energy calculation was merely a validation that the known particles and interactions fit together in a natural way. But the same logic has also been used to make successful predictions, most notably the
prediction of the charm quark mass by Gaillard and Lee in 1974 [36]. By then it was well known that the mass difference between the K_L^0 and K_S^0 states in a theory with only the up, down, and strange quarks was quadratically divergent, (M_{K_L} − M_{K_S})/M_K ≃ (G_F^2/6π^2) f_K^2 sin^2 θ_C cos^2 θ_C Λ^2, where f_K = 114 MeV is the kaon decay constant and sin θ_C = 0.22 is the Cabibbo angle. Requiring this correction to be smaller than the measured value (M_{K_L} − M_{K_S})/M_K ≈ 7 × 10^-15 restricts the cutoff to Λ ≲ 2 GeV. Extending the theory to include the charm quark as proposed by Glashow et al. [37], the divergence is eliminated and instead replaced by corresponding dependence on the mass of the charm quark. Once again, the unphysical divergences encountered in part of the theory anticipate finite contributions arising in the full theory. Gaillard and Lee's corresponding prediction m_c ≲ 1.5 GeV presaged the discovery of the charm quark at m_c ≈ 1.2 GeV in the same year.
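Both of the cutoff estimates above can be checked in a few lines. This is a sketch: the loop coefficients are the leading-order ones quoted in the text, and the meson masses and mixing angles are rounded.

```python
import math

alpha = 1 / 137.036              # fine-structure constant

# --- Pion splitting: m_pi+^2 - m_pi0^2 ~ (3 alpha / 4 pi) Lambda^2 ---
delta_m2 = 0.0355**2             # measured splitting, (35.5 MeV)^2 in GeV^2
Lambda_pi = math.sqrt(4 * math.pi * delta_m2 / (3 * alpha))
print(f"pion cutoff: Lambda ~ {1000 * Lambda_pi:.0f} MeV")   # vs m_rho = 775 MeV

# Sum-rule refinement with lowest vector/axial-vector dominance:
m_rho, m_a1 = 0.775, 1.260       # GeV
pred = (3 * alpha / (4 * math.pi)) * m_rho**2 * m_a1**2 / (m_a1**2 - m_rho**2) \
       * math.log(m_a1**2 / m_rho**2)
print(f"sum-rule splitting: ({1000 * math.sqrt(pred):.0f} MeV)^2 vs (35.5 MeV)^2 measured")

# --- Kaon mass difference: dM_K/M_K ~ (G_F^2/6 pi^2) f_K^2 sin^2 cos^2(theta_C) Lambda^2 ---
G_F = 1.166e-5                   # Fermi constant, GeV^-2
f_K = 0.114                      # kaon decay constant, GeV
s2 = 0.22**2                     # sin^2(theta_C)
measured = 7e-15                 # (M_KL - M_KS)/M_K
Lambda_K = math.sqrt(measured / (G_F**2 * f_K**2 * s2 * (1 - s2) / (6 * math.pi**2)))
print(f"kaon cutoff: Lambda ~ {Lambda_K:.1f} GeV")           # the charm scale
```

Both estimates land within a factor of a few of the physical thresholds (the ρ meson and the charm quark), which is exactly the pattern of success described in the text.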

In search of a principle
Weisskopf's natural reasoning eventually gave way to the familiar narrative of Wilson [38], Weinberg [39], Susskind [40], 't Hooft [41], and Veltman [42]. The perspectives comprising this narrative are harmonious but far from uniform, laying the foundations for a broad and somewhat nebulous definition of naturalness that persists to this day. Broadly speaking, these perspectives built on Dirac's original expectation of naturalness - that all dimensionless quantities should be order-one in the appropriate units - with an improved understanding of the structure of radiative corrections, recognizing that selection rules could lead to more refined criteria.
Wilson did not explicitly use the term "naturalness" in his 1970 paper, but observed that the structure of radiative corrections implied that the breaking of scale invariance at low energies should be expected only from couplings which also break internal symmetries, for which radiative corrections to the parameters are necessarily proportional to the parameters themselves. Such arguments led Wilson to famously observe that "it is interesting to note that there are no weakly coupled scalar particles in nature; scalar particles are the only kind of free particles whose mass term does not break either an internal or a gauge symmetry" [38]. Wilson's objections to light scalar fields strongly informed Susskind's notion of naturalness, which emphasized ultraviolet insensitivity: "the observable properties of a theory should be stable against minute variations of the fundamental parameters" [40]. In this respect, Weisskopf's quadratic mass divergences represented a violation of naturalness criteria because a light scalar arising from intricately tuned cancellations in a theory with a large physical cutoff would be inordinately sensitive to small variations among fundamental parameters. Wilson's emphasis on the role of symmetries in controlling radiative corrections was sharpened by 't Hooft into a sort of principle: "We now conjecture that the following dogma should be followed: at any energy scale μ, a physical parameter or set of physical parameters a_i(μ) is allowed to be very small only if the replacement a_i(μ) = 0 would increase the symmetry of the system. In what follows this is what we mean by naturalness. It is clearly a weaker requirement than that of P.
Dirac who insists on having no small numbers at all" [41]. Veltman's definition of naturalness broadens out considerably from 't Hooft's, establishing a qualitative rule of thumb that remains in use to this day: "This criterium [of naturalness] is that radiative corrections are supposed to be of the same order (or much smaller) than the actually observed values. And this then is taken to apply also for coupling constants and masses. Symmetries may be important here too; radiative corrections may be made small if there is a symmetry guaranteeing this smallness" [42].
In modern parlance, parameters have come to be known as technically natural if their size in the ultraviolet theory is not spoiled on the way to the infrared by physics at intermediate scales. Technical naturalness may be assured by approximate symmetries: certain symmetries can control the form of quantum corrections, and when these symmetries are broken the quantum corrections must be proportional to the symmetry breaking itself. This leads to the somewhat narrower notion of 't Hooft naturalness: a parameter is 't Hooft natural if symmetries are restored when the parameter is set to zero. Most observed hierarchies are both technically natural and 't Hooft natural, but it is possible for parameters to be technically natural without being 't Hooft natural. The latter is sufficient to assure the former, but not necessary. In cases where a parameter is technically natural without being 't Hooft natural, an assumption is typically made about the absence of additional physical thresholds intermediate between the ultraviolet and the infrared, which could induce large corrections in the absence of a symmetry.
These refinements help to clarify what sorts of small numbers pose naturalness problems, and what sorts do not. Circling back to Weisskopf, the electron mass is 't Hooft natural due to the chiral symmetry restored in the m_e → 0 limit, while the mass of a charged scalar is not. More broadly, all the fermionic mass hierarchies in the Standard Model are 't Hooft natural, stemming from the violation of Standard Model flavor symmetries that are restored when Yukawa couplings are taken to zero. It is still worth asking how these observed hierarchies came about, but whatever mechanism explains the flavor hierarchy could live in the far ultraviolet, given that its predictions would persist undisturbed into the infrared. In contrast, the cosmological constant, Higgs mass, and QCD theta angle are neither technically nor 't Hooft natural in generic extensions of the Standard Model.

An alternative to naturalness
How could the logic of naturalness fail? Amusingly, Dirac's own attempts at natural reasoning led to the identification of a possible failure mode. Responding to Dirac, in 1961 Dicke pointed out that questions about the age of the universe could only arise if conditions were right for the existence of life, with the specific criteria that the universe must be old enough that some stars have completed their time on the main sequence and produced heavy elements, and young enough that some stars are still undergoing fusion [43]. Working these out in terms of fundamental units, Dicke found that the upper and lower bounds essentially lead to Dirac's relations - but rather than resulting from time variation of fundamental parameters, they follow entirely from the existence of observers.
Dicke's argument is perhaps the first use of what has subsequently been termed anthropic reasoning in modern physics [17], though the term itself would only be coined some 12 years later by Carter [44]. At heart, anthropic reasoning stems from the fact that observed parameters are necessarily compatible with the existence of an observer. To gain explanatory power over the values of fundamental parameters, it requires something like the existence of alternate universes in which these fundamental parameters vary, as well as some assumptions about the distribution of parameters among these universes. If the most natural values of certain parameters do not lead to the formation of suitable observers, then anthropic reasoning in this context allows us to understand why we might instead observe a universe with unnatural values.
Of course, the possible role of anthropic reasoning in circumventing a naturalness problem depends sensitively on the parameter in question, and the extent to which its variation can be tied to the formation of observers. To say more requires us to commit to the specifics of a problem.

The strong CP problem
We know that CP is not a symmetry of the Standard Model, being broken by the weak interactions. But there is another potential source of CP violation that is not, as yet, observed. The QCD Lagrangian in principle contains a term of the form L ⊃ θ (g_s^2/32π^2) G_{μν}^a G̃^{a μν}. The θ term is P-odd and T-odd, hence CP-odd. While it can be written as a total derivative, for non-abelian theories we are not entitled to discard boundary terms due to the existence of instantons, and so we have to contend with the possible physical consequences. The effects of the θ term in the QCD Lagrangian can be traced down to low energies, where they contribute to a host of hadronic CP-violating observables. For instance, in the pion-nucleon effective Lagrangian it induces a CP-violating nucleon-nucleon-pion coupling. At one loop this yields a contribution to the neutron electric dipole moment of order |d_n| ∼ 10^-16 × θ e cm, whereas the experimental bound is |d_n| < 1.8 × 10^-26 e cm; an inferred bound from the 199Hg EDM limit is comparable. This implies θ ≲ 10^-10. Given that CP is not a symmetry of the Standard Model, the natural expectation might have been θ ∼ O(1), amounting to a violation of naturalness expectations by ten orders of magnitude. This is known as the strong CP problem. See e.g. [14,15,45] for excellent overviews. It bears emphasizing that θ is technically natural (by the definition used here) when restricted solely to the Standard Model [46]. CP violation from the CKM matrix only feeds in at high loop order, generating a contribution to θ that is well below current limits. However, it is not 't Hooft natural, as CP symmetry is not restored in the Standard Model when θ → 0. It is not obvious why θ should be small in the UV, but even if it were, its smallness would generically be spoiled by physics beyond the Standard Model at intermediate scales.
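The "ten orders of magnitude" is just the ratio of the experimental bound to the hadronic estimate. A minimal sketch, taking the order-of-magnitude coefficient d_n ∼ 10^-16 θ e cm quoted above at face value:

```python
# Bound on theta from the neutron EDM: |d_n| ~ 1e-16 * theta e*cm (hadronic estimate)
d_n_per_theta = 1e-16    # e*cm per unit theta
d_n_limit = 1.8e-26      # experimental bound on |d_n|, e*cm

theta_bound = d_n_limit / d_n_per_theta
print(f"theta < {theta_bound:.1e}")   # ten orders of magnitude below theta ~ O(1)
```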
There are three conventional avenues for rendering θ natural. The first is to have a massless quark, since then θ is unphysical: it may be removed entirely by field redefinitions of the massless quark. The second is to solve the problem in the UV by imposing P or CP as exact symmetries at a high scale, broken spontaneously to induce the CP violation inherent in the CKM phase without generating large corrections to θ. The third is to solve the problem in the IR by relaxing the value of θ. The first option is ruled out by lattice data, which strongly disfavors a massless quark.3 Instead, let us briefly explore the second and third options.

UV solutions
Perhaps the most transparent option is to render the theta angle small by reference to the ultraviolet: make P or CP good symmetries in the UV, broken spontaneously at some scale to give the known CP violation observed in the CKM matrix. Recent progress has been surveyed in a dedicated Snowmass white paper [24], and so here we will restrict ourselves to sketching the key ideas and developments.
The physical strong CP angle is the combination of the quark mass phase and the intrinsic QCD phase, θ̄ = θ + arg det[Y_u Y_d]. The challenge for a technically natural approach is thus to explain why arg det[Y_u Y_d] is small, but the combination that picks out the phase in the CKM matrix is not. This entails no small degree of cleverness. The most common route, the Nelson-Barr mechanism [49,50], starts with CP as a UV symmetry and breaks it via the vevs of some complex scalars, which accumulate a relative phase. These scalars couple to Standard Model quarks with the assistance of additional vector-like quarks, and couplings are engineered (with the assistance of exact Z_2 global symmetries) in such a way as to guarantee that arg det[Y_u Y_d] = 0 at tree level while the CKM phase is nonzero. Much of the complexity of Nelson-Barr models stems from the fact that they protect θ with a symmetry (CP) that the Standard Model breaks in a non-decoupling manner.
The key features of the Nelson-Barr mechanism are perhaps most clearly expressed in the minimal model of Bento et al. [51], which adds vector-like quark fields q, q̄ (neutral under SU(2) and carrying hypercharges ±1/3) and a set of neutral complex scalars η_a to the Standard Model, along with interactions of the schematic form L ⊃ μ q̄ q + a_{af} η_a q̄ d_f + h.c., alongside the usual Standard Model Yukawa couplings.

3 For an excellent and very recent summary of the state of affairs, see [47]. For a novel model leveraging a massless quark in a hidden sector to control the θ angle of the Standard Model, see [48].
Here a_{af} and y_{ff'} are Yukawa couplings and f is a Standard Model flavor index. The potential for the η_a is such that they acquire vacuum expectation values with relative phases, thereby spontaneously breaking CP. At tree level, the above interactions ensure arg det m_q = 0, while the CKM phase is generated upon integrating out the heavy fermion mass eigenstate.
An alternative avenue is to leverage the second symmetry that can protect θ, namely parity [52-54]. Although parity is also violated by the Standard Model, a generalized parity P may be restored in the UV by, e.g., extending SU(2)_L → SU(2)_L × SU(2)_R and having parity act as conventional parity supplemented by P: SU(2)_L ↔ SU(2)_R. The θ term is odd under this parity, and so is forbidden in the UV where the parity is good. Unlike in Nelson-Barr models, parity violation is not needed to allow for a complex CKM phase; it is already allowed, and simply mirrored by a phase for the mirror fields. Of course, this parity must be spontaneously broken in order to satisfy direct limits on SU(2)_R gauge bosons and other states.
Additional challenges arise in both cases. The success of UV models is contingent upon their specific field content, and can easily be spoiled by additional fields with generic interactions; this is particularly challenging when extending these models to address other problems such as the electroweak hierarchy problem. Although UV models are not as susceptible to quality problems as their IR competitors (about which more will be said momentarily), the requisite scale of P or CP violation is low enough to introduce questions about the origin and stability of large parametric hierarchies. These challenges are comprehensively summarized in [55,56].
Challenges aside, UV approaches to the strong CP problem are a promising and relatively unexplored direction, as they point to a wide array of experimental signatures not traditionally associated with strong CP. They have correspondingly seen something of a revival in recent years; developments include:

• A new approach to spontaneous parity breaking associated with a vanishing Higgs quartic at high energies [57].
• Renewed focus on parity-based solutions and their collider signatures, either with [58] or without [59] large vector-like mass terms for SU(2)-singlet fermions.
• Sharpened predictions for two-loop contributions to θ in minimal parity models [60] and Nelson-Barr models [61], suggesting near-future tests of UV models for strong CP with hadronic EDMs.
• The development of a comprehensive framework for the UV completion of Nelson-Barr models without numerical coincidences or small dimensionless input parameters [62].
The space of UV solutions to strong CP is far from being fully explored, and the promise of near-future experimental tests (at both colliders and EDM experiments, as well as unexpected venues such as gravitational wave detectors [58]) is highly compelling. We refer the reader to [24] for a discussion of some promising opportunities.

IR solutions
If parity and Nelson-Barr models can be said to solve the strong CP problem in the UV, the axion solves the strong CP problem in the IR. Recent progress and outstanding questions in axion theory are comprehensively summarized in a dedicated Snowmass white paper [25], and here we provide only a cursory overview with an eye towards naturalness-related considerations.
The basic idea is simple: if we can introduce a pseudoscalar field a that couples to G G̃ like θ does, i.e., L ⊃ (a/f_a)(g_s^2/32π^2) G_{μν}^a G̃^{a μν}, then the total effective CP-violating angle is θ + a/f_a, and the QCD vacuum energy becomes (schematically) V(a) ≃ Λ_QCD^4 [1 − cos(θ + a/f_a)]. This has a minimum at ⟨a⟩ = −θ f_a, where the total effective CP-violating angle is set to zero, solving the strong CP problem.
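The relaxation mechanism can be seen numerically. A minimal sketch, using the schematic cosine potential (the dilute-instanton form; the exact QCD potential differs, but the location of the minimum does not):

```python
import math

theta = 0.73    # an arbitrary O(1) initial theta angle
f_a = 1.0       # axion decay constant (arbitrary units)

def V(a):
    # Schematic QCD-induced axion potential in units of Lambda_QCD^4
    return 1 - math.cos(theta + a / f_a)

# Brute-force scan for the minimum over one period of the potential
grid = [(-math.pi + 2 * math.pi * i / 200000) * f_a for i in range(200001)]
a_min = min(grid, key=V)

# The axion relaxes to a/f_a = -theta, cancelling the effective CP-violating angle
print(f"a_min/f_a = {a_min / f_a:.4f}  (compare -theta = {-theta})")
```

Whatever the initial value of θ, the field settles where the total effective angle vanishes; this is the sense in which the axion relaxes the sum of all contributions to θ.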
The requisite coupling of the pseudoscalar emerges automatically if it is the Goldstone boson of a spontaneously broken symmetry, the Peccei-Quinn symmetry U(1)_PQ [63]. This doesn't work in the Standard Model with one Higgs doublet; both the Higgs and its conjugate are involved in Yukawa couplings, so there is no way of assigning U(1)_PQ charges to the Higgs and quarks such that the Yukawa terms are invariant. Introducing a second doublet leads to a highly predictive framework, the Weinberg-Wilczek model [64,65], in which the axion decay constant is set by the electroweak scale v; this is ruled out by direct searches. The idea can be easily rescued by adding a singlet complex scalar also transforming under U(1)_PQ to obtain the DFSZ (Dine-Fischler-Srednicki-Zhitnitsky) axion [66,67]. If the singlet acquires a much larger vev, this renders the axion lighter and more weakly coupled. Another option is to introduce new fermions charged under QCD; this gives the KSVZ (Kim-Shifman-Vainshtein-Zakharov) axion [68,69]. Although both DFSZ and KSVZ axions make relatively sharp predictions for axion couplings to Standard Model particles, the interplay between multiple axions (via, e.g., the clockwork mechanism [70]) can open entirely new parameter space [71].
Axion solutions to strong CP are compelling for a number of reasons. Axions generically arise in string compactifications, making their appearance in association with the strong CP problem less surprising. Famously, they can also furnish a dark matter candidate with a predictive mechanism for the dark matter relic abundance. Unlike UV solutions to strong CP, they are insensitive to new sources of CP violation from additional degrees of freedom beyond the Standard Model; axions ultimately relax the sum of all contributions to $\theta$. However, this requires that the $U(1)_{PQ}$ symmetry giving rise to the axion be exceptionally good (with the singular exception of being anomalous with respect to QCD), much better than generic expectations about the violation of global symmetries in a theory of quantum gravity would suggest. This axion quality problem [74-76] has long motivated model-building solutions including discrete gauge or R-symmetries [77,78] and compositeness [79], with a number of new approaches emerging in recent years [80-84]. Recent investigation suggests that the axion quality problem may be ameliorated in weakly-coupled string compactifications, as large hierarchies among instanton actions lead to exponential suppression of dangerous PQ-violating effects [85]. It is unclear whether this persists for compactifications outside the weakly-coupled regime.
As with UV solutions to strong CP, the space of axion solutions to the strong CP problem is far from comprehensively explored, particularly in relation to the axion quality problem. We refer the reader to [25] for a discussion of some of the most promising opportunities.

Anthropics
Rather than being accidental, or rendered natural by one of the mechanisms discussed above, the strong CP problem could be explained with anthropic reasoning if values of $\theta$ much larger than the true value were unfavorable to observers. This is not a typical line of reasoning for the strong CP problem, since it is not obvious that any of the important processes in the early universe are significantly altered by variation of $\theta$ provided $\theta \lesssim 0.1$ [86]. However, if one assumes that the cosmological constant is anthropically determined (about which more momentarily), by making sufficiently strong assumptions about the mechanism one can obtain a correlated bound on $\theta$ [87]. As with many anthropic arguments, the devil is in the details [88]. Of course, there is nothing wrong with the coexistence of anthropic explanations for the smallness of some parameters and natural explanations for others. Perhaps anthropic explanations for strong CP will seem more compelling if decisive experimental tests of natural explanations come up empty. We are far from this situation at present, and the strong CP problem remains a compelling target for natural reasoning.

The cosmological constant problem
Let us now move to the other end of the dimensional spectrum, to the cosmological constant. There are a number of excellent overviews of the cosmological constant problem, e.g. [17-19,89-91], and we shall not attempt to improve on them here.
Conventionally, the cosmological constant $\Lambda$ (not to be confused with the momentum cutoff $\Lambda$ appearing elsewhere) is a constant term that can appear in the Lagrangian of a theory of gravity. This corresponds to a vacuum energy density $\rho_\Lambda \equiv \Lambda/(8\pi G)$; as the energy density is the quantity most relevant to both experimental measurements and the discussion of quantum corrections, we'll focus on $\rho_\Lambda$ in what follows, and revert to using $\Lambda$ as a momentum cutoff.
Decisive evidence for nonzero $\rho_\Lambda$ was accumulated in 1998 from observations of distance-redshift relations for Type Ia supernovae, and further solidified by CMB measurements yielding a preferred value of $\rho_\Lambda = (2.26\times 10^{-3}\ {\rm eV})^4$. Purely on dimensional grounds, in a field theory with a cutoff $\Lambda$ coupled to gravity we expect $\rho_\Lambda \propto \Lambda^4$. Radiatively, vacuum bubbles of a field of mass $m$ in an effective field theory with cutoff $\Lambda$ contribute
$$\rho_\Lambda \sim c_1 \Lambda^4 + c_2 m^2 \Lambda^2 + c_3 m^4 \log(\Lambda/m) + \cdots,$$
where the coefficients $c_i$ are on the order of $1/16\pi^2$ in natural units. In the Standard Model, if we take $\Lambda \sim M_{\rm Pl}$, then we expect $\rho_\Lambda \sim 10^{120}\, \rho_{\Lambda,{\rm obs}}$, an enormous violation of naturalness expectations. This is the cosmological constant problem. There are two points to emphasize about the underlying problem. The first is the cutoff dependence; as we have discussed, the cutoff itself is unphysical, but gives us a plausible proxy for finite contributions associated with new degrees of freedom at the scale $\Lambda$. However, it bears emphasizing that there is an enormous problem even if we discard the power-law cutoff dependence and keep only the finite and log-dependent terms, which contribute at $\mathcal{O}(m^4)$. In the Standard Model, the finite contribution from the top quark already implies $\rho_\Lambda \sim 10^{53}\, \rho_{\Lambda,{\rm obs}}$. The cosmological constant is far from technically natural even when restricted solely to the Standard Model.
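The two ratios quoted above are easy to reproduce. A back-of-the-envelope check (assuming a reduced Planck mass of $2.4\times10^{27}$ eV, a top mass of 173 GeV, and a generic $1/16\pi^2$ loop factor; all inputs are order-of-magnitude):

```python
import math

rho_obs = (2.26e-3)**4  # observed vacuum energy density in eV^4 (from the text)
M_pl = 2.4e27           # reduced Planck mass in eV (assumed input)
m_top = 173e9           # top quark mass in eV (assumed input)
loop = 1.0 / (16 * math.pi**2)  # generic one-loop factor

cutoff_ratio = M_pl**4 / rho_obs          # naive Lambda^4 estimate vs. observation
top_ratio = loop * m_top**4 / rho_obs     # finite top-loop contribution vs. observation

print(f"{cutoff_ratio:.1e}")  # ~1e120
print(f"{top_ratio:.1e}")     # ~1e53
```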
The second consideration is whether we somehow misunderstand how QFT couples to gravity; perhaps the estimate of quantum contributions to the cosmological constant is flawed because we misunderstand how gravity couples to virtual particles. But we know that a virtual electron contributes to the vacuum polarization correction to the Lamb shift, and by the equivalence principle this energy must couple to gravity. Loops are "real" in terms of their observable consequences. And although we have focused on quantum corrections to $\rho_\Lambda$, there is also a problem at tree level, as phase transitions induce changes in the vacuum energy density. We should expect a contribution of order $\rho_\Lambda \sim \Lambda_{\rm QCD}^4$ from the QCD phase transition alone, to say nothing of contributions from the electroweak phase transition or potential earlier phase transitions associated with unification or sectors other than our own.
Numerous solutions to the cosmological constant problem have been proposed; for an extremely comprehensive enumeration of possibilities prior to 2004, see [91]. In what follows we'll review a subset of these approaches with an eye towards recent progress.

Anthropics
The explanation for the cosmological constant that is both most popular and most controversial among high-energy physicists (at present) is the one that discards naturalness in favor of anthropic reasoning. For observers to be present to witness a universe with a small cosmological constant, the cosmological constant must be small enough that sufficiently large gravitationally bound systems can form. By sufficiently large, we have in mind something that forms stars and planets, which requires heavy elements; so the structures of interest are galaxies or globular clusters.
The anthropic argument for the cosmological constant is often credited to Weinberg [92], and with good reason. But it also bears emphasizing that a general sketch of the argument was made by Banks in 1985 [93], and a qualitative bound along the lines of Weinberg's was made by Barrow and Tipler in 1986 [94]. In any event, a simplified version of Weinberg's argument will suffice for our purposes. We know that in our universe gravitational condensation had already begun at redshift $z_c \geq 4$ (from the redshifts of the oldest quasars), when the energy density was greater than the present mass density $\rho_{M0}$ by a factor $(1+z_c)^3$. A cosmological constant has little effect as long as the non-vacuum energy density is larger than $\rho_\Lambda$, so this implies
$$\rho_\Lambda \lesssim (1+z_c)^3 \rho_{M0}.$$
The detailed form of the argument gives a bound of the same order. We know that in reality $\rho_\Lambda \sim 3\rho_{M0}$, so this bound lies within two orders of magnitude of the observed value. At this stage one can apply more detailed statistical reasoning to obtain a typical value closer to the observed value.
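The arithmetic of the simplified bound is worth making explicit; with $z_c = 4$ the bound sits roughly forty times above the observed $\rho_\Lambda \sim 3\rho_{M0}$:

```python
z_c = 4                   # redshift by which gravitational condensation had begun
bound = (1 + z_c)**3      # rho_Lambda / rho_M0 must lie roughly below this
observed = 3              # rho_Lambda ~ 3 * rho_M0 today

# 125 vs. 3: the simplified anthropic bound lands within
# about two orders of magnitude of the observed value.
print(bound, bound / observed)
```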
For this to be truly explanatory, we should envision a landscape of vacua over which the cosmological constant varies, all of which can be realized, but only a small number of which produce observers to witness them. Thus a satisfying anthropic argument for the cosmological constant requires a theoretical framework with a landscape of vacua over which the cosmological constant is finely scanned. This was famously furnished in the context of string theory by Bousso and Polchinski [95], as well as by subsequent progress exploring the construction and distribution of flux vacua in microscopic models [96-98]. It's fair to say that the concrete realization of landscapes in string theory has significantly increased the relevance of anthropic reasoning to naturalness problems.
One of the reasons Weinberg's anthropic argument has taken such hold is its evident success in predicting the cosmological constant, since it came well before the decisive measurement. However, it bears noting that, at the time of Weinberg's anthropic argument, Loh and Spillar [99] had set a limit $\rho_\Lambda/\rho_{M0} = 0.1^{+0.2}_{-0.4}$ from surveys of galaxies as a function of redshift. Weinberg's assessment of this result at the time was: "This is more than 3 orders of magnitude below the anthropic upper bound discussed earlier. If the effective cosmological constant is really this small, then we would have to conclude that the anthropic principle does not explain why it is so small." before going on to discuss possible problems with the experimental result. Of course, we know this bound [99] was off from the true value by an order of magnitude, but it is far from obvious that two orders of magnitude is better than three.
Another potential loophole is that the anthropic bound on the cosmological constant is not a one-parameter argument. As noted in [92], the bound would be much weaker if gravitational condensation occurred at much higher redshift. This is possible if the amplitude of primordial density perturbations $\delta\rho/\rho \sim 10^{-5}$ were larger; it could indeed be increased by at least an order of magnitude before impacting anthropic viability, which significantly weakens the anthropic bound. Nonetheless, the apparent success of an anthropic argument for the cosmological constant sets a high bar for natural explanations, to which we now turn.

Relaxation
Much like the axion relaxes potentially large contributions to the θ parameter, a compelling alternative is for some dynamics to relax otherwise large contributions to the cosmological constant.
The archetypal proposal is due to Abbott [100], which we will review in some detail here in order to set the stage for recent approaches to both the cosmological constant and electroweak hierarchy problems. Abbott's proposal introduces a new confining sector coupled to an axion-like particle $\phi$ with a classical shift symmetry $\phi \to \phi + c$ (not necessarily that of a Goldstone from a compact symmetry group) and the typical axion-like coupling
$$\frac{\alpha}{8\pi} \frac{\phi}{f_\phi} G\tilde{G}.$$
Non-perturbative effects give an axion potential
$$V(\phi) \simeq -\Lambda_\phi^4 \cos(\phi/f_\phi),$$
which breaks the classical shift symmetry to the discrete subgroup $\phi \to \phi + 2\pi n f_\phi$. In order to be relevant to the cosmological constant problem, $\Lambda_\phi \lesssim 10^{-34}$ eV, but this is not so hard to engineer by virtue of dimensional transmutation; for an $SU(2)$ theory with six quarks, this amounts to $\alpha(M_{\rm Pl}) \lesssim 0.01$. The symmetry breaking scale $f_\phi$ is taken to be large, perhaps of order $M_{\rm Pl}$. In addition, a tilt is given to the cosine via a second term,
$$\Delta V = \varepsilon \frac{\phi}{2\pi f_\phi},$$
where $\varepsilon < \Lambda_\phi^4$. Here we have taken a linear perturbation, but various other deformations would also work, as long as they don't introduce additional minima over the field range we'll discuss. Since $\varepsilon$ breaks the discrete symmetry, its smallness can be technically natural, and all radiative corrections to $\varepsilon$ are guaranteed to be proportional to it.
The vacuum energy density in this theory is given by
$$\rho(\phi) = \varepsilon \frac{\phi}{2\pi f_\phi} - \Lambda_\phi^4 \cos(\phi/f_\phi) + \cdots,$$
with minima at $\phi_n \approx 2\pi n f_\phi$ for small $\varepsilon$, and in these minima $\rho \approx n\varepsilon - \Lambda_\phi^4 + \cdots$. Now by assumption $\varepsilon < (10^{-34}\ {\rm eV})^4$, so we are guaranteed there is always a minimum where the total energy density is $\sim \varepsilon$, which we can make arbitrarily small.
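The guarantee that some minimum has total energy density within $\sim\varepsilon$ of zero is just integer arithmetic: the ladder of minima spaced by $\varepsilon$ scans finely through any bare contribution. A toy check in arbitrary units (illustrative only; the numerical values are not from the text):

```python
# Toy scan over Abbott-model minima: rho_n ~ rho_bare + n * eps (barrier term dropped).
eps = 1e-3           # tilt parameter, eps < Lam_phi^4 (arbitrary units)
rho_bare = 123.4567  # some large bare vacuum-energy contribution (arbitrary)

# The integer step that best cancels rho_bare:
n_best = round(-rho_bare / eps)
residual = rho_bare + n_best * eps

# Whatever rho_bare is, some minimum lands within eps/2 of zero.
print(abs(residual) <= eps / 2)
```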
To account for the cosmological constant, we must explain why the universe is in one of the states with a small cosmological constant, instead of another one. If we imagine starting at some arbitrary point on the potential with large, positive cosmological constant, we are in a de Sitter spacetime and over time $\phi$ will evolve down the potential, decreasing the vacuum energy density at each step. Initially, when $\rho \gg M_{\rm Pl}^2 \Lambda_\phi^2$, the barriers are irrelevant because of the non-zero Hawking temperature in de Sitter space,
$$T_{\rm dS} \sim H \sim \frac{\sqrt{\rho}}{M_{\rm Pl}},$$
so the field can undergo thermal fluctuations over the barriers (and instantons generating the barriers are moreover suppressed). Eventually, we will hit $\rho \sim M_{\rm Pl}^2 \Lambda_\phi^2$. (This is the reason for $\Lambda_\phi$, and hence $\varepsilon^{1/4}$, to be much smaller than the observed $\rho_\Lambda^{1/4}$; it's not the step size that matters, but the point at which the barriers switch on.) At this point the barriers become relevant, and field evolution proceeds via tunneling, i.e., bubble nucleation. For $\rho \ll M_{\rm Pl}^2 \Lambda_\phi^2$, the tunneling rate per unit volume scales as
$$\Gamma \sim e^{-\mathcal{O}(1)\, M_{\rm Pl}^4/\rho},$$
and eventually the evolution becomes quite slow. This all takes a long time, $10^{450}$ years for $\rho \sim M_{\rm Pl}^4$ to be reduced to the observed value. However, once we get there, we remain in a series of states with acceptable cosmological constant for a far longer time, $10^{10^{248}}$ years. Eventually we tunnel to a state with small, negative vacuum energy, but this is expected to undergo gravitational collapse and the game is over. In the meantime, we have a doubly exponentially long time in a realistic vacuum.
The problem is that the universe only contains vacuum energy. Any initial matter density is rapidly inflated away, and any matter density generated during a tunneling event is inflated away while awaiting the next transition. The last transition to the current vacuum can't reheat above $T_{\rm RH} \sim \varepsilon^{1/4}$, and even matter created from this is unlikely to be isotropic, because the energy released by the tunneling event is primarily stored in the bubble wall. Even if you imagine raising the scales so that the step size is of order $\rho_{\Lambda,{\rm obs}}^{1/4}$, you are still impossibly far away from getting a realistic universe. Recently, attempts have been made to develop constructions inspired by the Abbott model that solve the reheating problem. A proof of principle was provided in [101], in which the key ingredient is a sector violating the null energy condition (NEC). The NEC violation induces an inflationary epoch followed by reheating and standard Big Bang cosmology, with symmetries restricting the cosmological constant to be the same before and after the NEC-violating phase. A related proposal [102] involves a bounce following the relaxation epoch, after which the universe expands and proceeds through standard cosmological history. Although the added ingredients in both proposals are somewhat exotic, they pave the way towards potentially viable relaxation of the cosmological constant. A relaxation mechanism without NEC violation has recently been proposed [103], involving a very supersymmetric gravity sector coupled to a matter sector with non-linearly realized supersymmetry and an accidental approximate scale invariance. A related "crunching" mechanism for solving the cosmological constant problem was proposed in [104], wherein regions of space with a large cosmological constant crunch shortly after inflation, whereas regions with a small cosmological constant are metastable and survive to late times.
A very different sort of relaxation mechanism, famously proposed by Coleman [105], entails the relaxation of the cosmological constant at low energies by the effects of virtual wormholes. The essential argument is that including the effects of wormholes in the path integral leads to a doubly-exponential enhancement of the measure at the point where the total cosmological constant vanishes. This would then render a vanishing cosmological constant "natural" in the sense that it is a generic prediction. This argument encounters a number of significant challenges, including the inability to accommodate the small but nonzero observed value of the cosmological constant; the apparent existence of an inflationary epoch; and various problems with the prediction itself.
Nonetheless, this highlights the potential phenomenological relevance of wormholes and gravitational instantons. For an excellent overview of recent progress, see [106].

Symmetry
There is an excellent symmetry for the cosmological constant problem: supersymmetry. We will have a fair bit more to say about supersymmetry when we turn to the electroweak hierarchy problem, but in some sense the true calling of supersymmetry should have been to solve the cosmological constant problem. In globally supersymmetric theories, the vacuum energy density vanishes exactly; quantum corrections cancel between bosons and fermions, while the supersymmetry-preserving minima of scalar potentials occur at $V = 0$. Of course, the lack of apparent supersymmetry below the TeV scale suggests that supersymmetry, if present at all, is spontaneously broken at a scale $\Lambda_{\rm SUSY}$. This leads to the prediction $\rho_\Lambda \sim \Lambda_{\rm SUSY}^4 \gtrsim (1\ {\rm TeV})^4$, making supersymmetry a promising approach to the cosmological constant problem in theory but not in practice. The failure of supersymmetry to explain the cosmological constant in $3+1$ dimensions was famously captured by Witten [107]: "Within the known structure of physics, supergravity in four dimensions leads to a dichotomy: either the symmetry is unbroken and bosons and fermions are degenerate, or the symmetry is broken and the vanishing of the cosmological constant is difficult to understand."
Witten's emphasis on four dimensions was not accidental: in $2+1$ dimensions the dichotomy disappears, as supersymmetry without degenerate boson and fermion masses can still explain a vanishing cosmological constant. Loosely speaking, the idea is that in $2+1$ dimensions supersymmetry can control the vacuum and ensure the vanishing of the vacuum energy without controlling the spectrum of excited states. Although this possibility is extremely compelling, it remains to be realized in a form relevant to the cosmological constant in our $3+1$ dimensional universe.
However, supersymmetry is not the only symmetry that might have something to say about the cosmological constant problem. Among other, more exotic, possibilities is an unusual discrete symmetry, "E ↔ −E". The idea, which originates with Linde [108] but was fleshed out further by Kaplan and Sundrum [109], is to introduce parity partners of all normal fields with an opposite-sign Lagrangian density. The radiative contributions from the normal matter sector and its wrong-sign partner to the cosmological constant cancel, leaving only the bare contribution. We can think of this as arising from a $\mathbb{Z}_2$ energy-parity symmetry $P$ that anticommutes with the Hamiltonian, $\{H, P\} = 0$, so that an energy eigenstate ($H|E\rangle = E|E\rangle$) is transformed into one with opposite energy, $H P|E\rangle = -E\, P|E\rangle$. The problem is that a Minkowski vacuum is evidently unstable to the pair production of positive- and negative-energy states. If the two sectors can be completely decoupled, this pair production process is suppressed and the Minkowski vacuum is effectively stable. If there is a Poincaré-invariant state that is $P$ invariant, $P|0\rangle = |0\rangle$, then $\langle 0|\{H, P\}|0\rangle = 2\langle 0|H|0\rangle = 0$, corresponding to vanishing cosmological constant. Although the matter action respects energy-parity, the gravitational action violates it. Since gravity violates the parity, one might expect a gravitational contribution to the cosmological constant of order $\rho_\Lambda \sim \Lambda_{\rm grav}^4$, where $\Lambda_{\rm grav}$ is the cutoff on graviton momenta, i.e., the scale at which a quantized EFT of Einstein gravity must break down. To reproduce the observed cosmological constant, this implies $\Lambda_{\rm grav} \lesssim 2\times 10^{-3}$ eV, or a length scale of $\sim 100$ microns, which is in tension with current short-distance tests. Nonetheless, this and related ideas have motivated recent work probing the (in)stability of theories with ghosts [110,111].
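The quoted length scale is simply the Compton wavelength associated with the cutoff; a quick unit conversion using $\hbar c \approx 197$ eV·nm:

```python
hbar_c = 197.3269804  # eV * nm (standard conversion constant)
Lam_grav = 2e-3       # eV: the graviton-cutoff scale quoted in the text

length_nm = hbar_c / Lam_grav  # Compton wavelength of the cutoff scale
print(length_nm / 1e3)  # ~99 (microns), i.e. the ~100 micron scale in the text
```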

UV/IR mixing
Another possibility is that there is a breakdown in effective field theory, corresponding to some mixing between UV and IR physics. This is surprising but not unprecedented, as UV/IR mixing appears to be a feature of quantum gravity. If there is UV/IR mixing in the theory of quantum gravity, one might hope to put it to work by inferring long-distance properties that might be felt at lower energies. The potential implications of quantum gravity for the cosmological constant problem (and particle physics more broadly) are reviewed in a pair of dedicated Snowmass white papers [26,27].
Encouragingly, a form of UV/IR mixing has already been used to understand an entirely different naturalness problem from the ones studied here. The static Love numbers (i.e. the multipole moments induced by a tidal gravitational field) of both spherical and spinning black holes vanish in 4d Einstein gravity, implying that all quadratic finite-size operators without time derivatives in the corresponding worldline effective field theory vanish for black holes. This is a naturalness problem [112] very much akin to the ones we have already encountered. Remarkably, this naturalness problem was recently solved with reference to a hidden $SL(2,\mathbb{R}) \times U(1)$ "Love symmetry" which mixes UV and IR modes [113]. Although the solution is formally a demonstration of 't Hooft naturalness, its reliance on UV/IR mixing is an encouraging sign for applying similar ideas to other naturalness problems. Of course, in this particular case the ultimate lesson may be that signs of apparent UV/IR mixing are simply a harbinger of a more complete symmetry in action, as the vanishing Love numbers have subsequently been explained using conventional symmetries [114].
Various ideas about UV/IR mixing and the cosmological constant have been put forward, most notably by Banks [115] and Horava [116]. Here we will focus on a proposal by Cohen et al. [117], which has recently been the subject of further exploration [118-123]. The essential idea is to leverage entropy bounds arising in a theory of quantum gravity to influence physics in the infrared.
Normally, an EFT in a box of size $L$ (an IR cutoff) with UV cutoff $\Lambda$ has extensive entropy, $S \sim L^3 \Lambda^3$. Inspired by black hole thermodynamics, Bekenstein formulated a series of conjectures about entropy in field theory [124-127], namely that the entropy in a box of volume $L^3$ only grows as the area of the box. Any EFT would violate this bound in a sufficiently large box, so if the bound is true, it implies that conventional field theories vastly over-count degrees of freedom. One way to reconcile these would be to impose a connection between the UV and IR cutoffs of an EFT by requiring it to satisfy the conjectured bound. This would mean
$$L^3 \Lambda^3 \lesssim L^2 M_{\rm Pl}^2.$$
But a more refined condition is possible. An EFT satisfying the above bound contains many states with Schwarzschild radius larger than the box, which should probably not be described by a local QFT. We can exclude those by requiring the Schwarzschild radius of the maximum energy configuration (corresponding to an energy $L^3 \Lambda^4$) not to exceed the size of the box, i.e.,
$$L^3 \Lambda^4 \lesssim L\, M_{\rm Pl}^2.$$
This would imply that any EFT with a cutoff $\Lambda$ has a correlated IR cutoff $L$ when coupled to gravity.
What to make of these correlated cutoffs? A conservative and relatively uncontroversial interpretation of the bound is that highly-occupied states are not well-described by quantum field theory due to strong gravitational back-reaction. A much more audacious interpretation is that $L$ and $\Lambda$ should be treated as true EFT cutoffs applicable at the few-particle level. Cohen, Kaplan, and Nelson's conjectured application to the cosmological constant is then as follows: if the IR cutoff of the Standard Model (and everything else) is taken to be comparable to the current horizon size, the corresponding UV cutoff is $\Lambda \sim 10^{-2.5}$ eV, so that $\Lambda^4$ is surprisingly close to the observed value of the cosmological constant. Now, this is not wholly satisfying; it is to some degree tautological, and an effective field theorist would expect to see features at the cutoff which are not (to our knowledge) apparent in Nature. In any event, it illustrates how conjectured properties of a theory of quantum gravity might be brought to bear to constrain otherwise-independent parameters of an EFT.
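The quoted cutoff is a one-line consequence of the refined bound. A quick numerical check (assuming a Planck mass of $1.2\times10^{28}$ eV and taking $1/L$ to be of order the present Hubble scale; both inputs are approximate):

```python
M_pl = 1.2e28    # Planck mass in eV (assumed input)
L_inv = 1.5e-33  # eV: inverse Hubble radius, taken as the IR cutoff 1/L (assumed)

# Refined bound: L^3 * Lam^4 <~ L * M_pl^2  =>  Lam <~ (M_pl / L)^(1/2)
Lam = (M_pl * L_inv)**0.5
print(Lam)  # ~4e-3 eV, within an order of magnitude of rho_obs^(1/4) ~ 2e-3 eV
```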
Recently this idea has been the subject of further exploration in various directions, e.g. [118-123]. An optimistic interpretation would suggest effects observable in precision measurements [118,120,121], while a more conservative interpretation [119,122] leads to negligible corrections in precision measurements while retaining relevance to the cosmological constant problem by implying a thinning of the field-theoretic degrees of freedom contributing to the vacuum energy density. There is considerable potential for further exploration of this direction.

The electroweak hierarchy problem
We now turn to the final naturalness problem: the electroweak hierarchy problem. If we consider the Standard Model as an effective field theory up to some cutoff $\Lambda$, computing one-loop corrections from Standard Model fields to the Higgs mass gives us a famous quadratic divergence reminiscent of Weisskopf's result, schematically
$$\delta m_H^2 \sim \frac{\Lambda^2}{16\pi^2}\left(6\lambda + \frac{9}{4}g^2 + \frac{3}{4}g'^2 - 6 y_t^2\right). \quad (21)$$
As we have emphasized earlier, the divergence itself is not a problem. In the Standard Model alone, the Higgs mass is merely a parameter fixed by measurement, and the above divergences are absorbed by a suitable renormalization procedure (or are absent entirely in some choices of regularization, such as dim reg). The Standard Model in isolation does not suffer from a hierarchy problem.[8] But the Standard Model is not, ultimately, in isolation. In a more complete theory with additional physical scales, the divergence in Eq. (21) is replaced by finite, calculable contributions (see, e.g. [12,129] for extended discussion of this point). In this respect the divergence is merely a sign that the Higgs mass is sensitive to UV physics, a consequence of attempting to compute the Higgs mass in only part of a more complete theory. Indeed, if the Standard Model is all there is, the Higgs mass parameter is technically natural; the finite corrections proportional to e.g. the masses of Standard Model particles are all small. It is only in the presence of additional UV scales that the problem emerges.
In discussing the electroweak hierarchy problem, it is typical to invoke $\Lambda \sim M_{\rm Pl}$ in the expectation that new physics should enter at the apparent scale of quantum gravity. In that case, Eq. (21) implies quantum corrections that are 32 orders of magnitude in excess of the doublet mass parameter $m_H^2$ inferred from the Higgs vev $v$ and the physical Higgs mass $m_h$. But a complete theory of quantum gravity remains elusive, so one might be tempted to speculate that the theory of quantum gravity 'takes care of itself' at the scale $M_{\rm Pl}$ without inducing physical thresholds seen by the Higgs.[9] However, in this case we should continue to extrapolate the Standard Model up to arbitrarily high energies, until the hypercharge gauge coupling hits a Landau pole around $10^{41}$ GeV.[10] A UV completion of the Landau pole introduces a new scale playing the role of $\Lambda$, but now $\Lambda \gg M_{\rm Pl}$. Avoiding this conclusion through gauge coupling unification or a transition to conformal dynamics [132] induces additional physical scales that will enter into the Higgs mass. So the Standard Model is genuinely an effective field theory with cutoff $\Lambda$ whether or not one is concerned about the implications of quantum gravity.
As we have already emphasized, the Higgs is not the only degree of freedom in the Standard Model whose mass poses a naturalness problem; it is simply the only one for which we do not yet know the answer. Thus it seems prudent to use naturalness as a strategy to guide the search for new physics. Naively asking that the corrections in Eq. (21) not exceed the inferred Higgs doublet mass parameter implies new physics should enter around 500 GeV. Of course, insofar as $\Lambda$ is merely a proxy for unknown microscopic physics, it may well be that the masses of new particles lie within an order of magnitude of this estimate.

[8] Even this statement depends on what one means by "the Standard Model"; including right-handed neutrinos can induce a hierarchy problem driven by the Majorana mass term [128]. There is also a question of the UV fate of the Standard Model, as the hypercharge gauge coupling eventually reaches a Landau pole if unification or quantum gravity do not intercede first; these scales, or the scale induced by the Landau pole itself, generate a hierarchy problem. Perhaps the most accurate statement would be "the Standard Model does not suffer from a hierarchy problem induced by its observed scales."
[9] Properly speaking, proponents of this idea should then commit to demonstrating a proof of principle, as it is a highly non-trivial thing to ask from the theory of gravity. For valiant and qualitatively very different efforts in this direction, see e.g. [130,131].
[10] Precisely how this extrapolation works depends on how gravity has "taken care of itself", and may not be possible in the language of local quantum field theory.
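The ~500 GeV estimate can be reproduced from the top-loop contribution to the quadratic divergence alone. A back-of-the-envelope sketch (assuming the standard coefficient $3y_t^2/8\pi^2$ for the top loop and $|m_H^2| = m_h^2/2$; the precise number depends on conventions):

```python
import math

m_h = 125.0       # GeV, physical Higgs mass
mH2 = m_h**2 / 2  # GeV^2: |m_H^2| of the doublet mass parameter, ~(88 GeV)^2
y_t = 0.94        # approximate top Yukawa at the weak scale (assumed input)

# Top loop: delta m_H^2 ~ 3 y_t^2 Lambda^2 / (8 pi^2).
# Demanding it not exceed |m_H^2| gives the naturalness scale:
Lam = math.sqrt(8 * math.pi**2 * mH2 / (3 * y_t**2))
print(Lam)  # ~480 GeV, i.e. "around 500 GeV"
```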
Although we know the scale implied by naturalness of the Higgs, we do not know the specific mechanism. More than four decades of thinking about the electroweak hierarchy problem have generated a plethora of candidates. Here we will briefly review some of the canonical approaches before focusing our attention on recent developments.

Canonical approaches
The first thing one is tempted to do when confronted by the hierarchy problem is to erase the apparent hierarchy itself, bringing down the cutoff of the Higgs sector or the entire Standard Model. Indeed, this was the nature of the first attempted solution to the hierarchy problem, technicolor (due to Weinberg [39] and Susskind [40]), which attempted to replicate the success of the proton mass prediction by imagining that electroweak symmetry was broken by the vacuum condensate of a strongly coupled group. The five-dimensional holographic duals of technicolor are Randall-Sundrum models [133,134]. In these cases, the Higgs is not an elementary degree of freedom, and the cutoff is provided by compositeness of the Higgs itself. Alternately, we could imagine leaving the Higgs alone and lowering the scale of quantum gravity, so that all field-theoretic physics reaches an end at the cutoff. This is the nature of solutions such as large extra dimensions [135,136]. More recently, a third extra-dimensional option has come to the fore, in which the geometry is set by a five-dimensional dilaton whose background profile varies linearly in the extra dimension [137-139]. Recent progress on warped compactifications relevant to naturalness is summarized in a dedicated Snowmass white paper [140].
The problem with pure lowered-cutoff solutions is that they generically do not predict any separation between the Higgs and the scale of new physics. That is, the typical expectation for the Higgs mass is $m_H^2 = c\,\Lambda^2$ with $c \sim \mathcal{O}(1)$. Such theories then predict a host of particles close in mass to the Higgs, as well as a host of higher-dimensional operators suppressed by a low cutoff. The non-observation of new particles close in mass to the Higgs, together with strong bounds on irrelevant operators correcting the Standard Model, suggests that this mechanism is not operative on its own. But if the proton mass is not (to our knowledge) a viable analogy for the naturalness of the Higgs mass, it is sensible to consider whether the other mechanisms realized by Nature might play a role.

Supersymmetry
As we have seen, the electron mass was rendered 't Hooft natural by a chiral symmetry. A scalar enjoys no such protection on its own, but could 'borrow' the chiral symmetry of a fermion if there is an additional symmetry relating bosons and fermions. This is the sense in which supersymmetry can solve the electroweak hierarchy problem: by making the mass of a scalar proportional to that of a fermion, which is itself protected by chiral symmetry. This symmetry must be softly broken in order to be consistent with the non-observation of degenerate superpartners. For an excellent review, see e.g. [6].
One of the conceptual virtues of supersymmetry is that it provides a very concrete, calculable realization of the expectation we have attached to Eq. (21), namely that the quadratic divergence is merely a proxy for finite, calculable contributions in a more complete microscopic theory. In supersymmetric extensions of the Standard Model, the quadratic divergences indeed vanish and are replaced by the mass splittings between Standard Model particles and their superpartners. Of course, after decades of mounting expectations for the appearance of supersymmetry at the TeV scale, the LHC has found no evidence for superpartners. At this point it is fair to say that supersymmetry did not appear where naturalness arguments led us to expect it. That is not to say that supersymmetry may not appear somewhere above the TeV scale, but this leaves a fair bit of daylight between Nature and naturalness expectations. The degree to which naturalness expectations are violated is often quantified by fine-tuning using various measures, including e.g. the Barbieri-Giudice measure $\Delta = \max_i \left|\partial \ln m_Z^2 / \partial \ln a_i\right|$ [141]. However, there is active debate about the extent to which this measure over-estimates fine-tuning due to large logarithmic enhancements, and much more optimistic conclusions may be drawn from measures using the infrared values of supersymmetry-breaking parameters [142]. It may also be the case that general considerations in a top-down framework such as string theory favor supersymmetry-breaking scales somewhat larger than what bottom-up naturalness considerations indicate, as in the paradigm of stringy naturalness [143]. Ultimately we do not know how Nature computes fine-tuning, making it difficult to draw quantitative conclusions.

Compositeness
The other path already chosen by Nature is the combination of a spontaneously broken global symmetry and compositeness, as realized by the charged and neutral pions. Such a composite Higgs [144] is naturally separated from the scale of compositeness itself by a moderate amount, depending on the degree to which Standard Model couplings violate the global symmetry protecting the Higgs. This generically predicts the modification of Higgs couplings relative to Standard Model expectations [145], as well as new degrees of freedom around the TeV scale. For an excellent review, see [146]. There is still room for a composite Higgs to satisfy generic naturalness expectations, but the tension will increase significantly if the LHC finds no evidence for Higgs coupling deviations or new particles beneath a few TeV. Recently, considerable progress has been made in alleviating the tuning in composite Higgs models that is otherwise implied by the absence of large Higgs coupling deviations [147,148].

Anthropics
A final "conventional" possibility is anthropics: nothing protects the Higgs mass, but rather there are many vacua of the Standard Model over which the Higgs mass varies according to some statistical distribution. If there is then a mechanism for selecting from the tail of the distribution with smaller Higgs masses, one has an explanation for the observed Higgs mass that does not rely on symmetries or a low cutoff, much like the proposed anthropic explanation of the cosmological constant.
The prevailing version of this argument uses anthropic pressure to understand the weak scale in a universe where the dimensionful parameters of the Standard Model (i.e., the Higgs mass, or equivalently the vacuum expectation value v) vary, but the dimensionless quantities are held fixed. In this case, v is bounded from above to be near its observed value by an argument known as the Atomic Principle [149]. Recall that for v = v_SM the lightest baryons are the proton and neutron, of which the proton is lighter because the splitting due to quark masses exceeds the electromagnetic energy splitting. But in nuclei there is a binding energy that stabilizes the neutron. Without going into the details, it suffices to note that the long-range part of the nucleon-nucleon potential is due to single pion exchange, with a range of ∼ 1/m_π. For small u, d masses m_π ∝ ((m_u + m_d) f_π)^{1/2}, so (neglecting the weak dependence of Λ_QCD on v) we have m_π ∼ v^{1/2}. Mocking up the binding energy of the deuteron B_d (the most weakly bound system) as a square well with a hard core to mimic short-range repulsion, we see that as we increase v, we eventually reach the point where B_d < Q and the neutron is no longer stabilized by nuclear binding energy. This occurs for v/v_SM ≳ 1.2, which is a tight bound indeed! The deuteron is fairly important, since all primordial and stellar nucleosynthesis begins with deuterium. But this is not an airtight bound, as nuclei could form in violent astrophysical processes. The binding energies for heavier nuclei are larger, but for v/v_SM ≳ 5 typical nuclei no longer stabilize the neutron against decay.
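As a rough numerical illustration of the scaling above, the following sketch shows how the range of the one-pion-exchange potential shrinks as v is raised. The inputs (ħc = 197.3 MeV·fm and m_π = 137 MeV at v = v_SM) are illustrative round numbers of my choosing, not values taken from the text:

```python
import math

# Illustrative sketch (assumed round-number inputs, not fitted values):
# m_pi scales like sqrt(v) when dimensionless couplings are held fixed,
# so the ~1/m_pi range of one-pion exchange shrinks as v grows.
HBARC_MEV_FM = 197.3   # hbar*c in MeV*fm
M_PI_SM = 137.0        # pion mass scale in MeV at v = v_SM

def pion_mass(v_over_vsm):
    """m_pi ∝ v^(1/2) for small u, d quark masses."""
    return M_PI_SM * math.sqrt(v_over_vsm)

def opep_range_fm(v_over_vsm):
    """Range ~ 1/m_pi of the one-pion-exchange potential, in fm."""
    return HBARC_MEV_FM / pion_mass(v_over_vsm)

for r in (1.0, 1.2, 5.0):  # the v/v_SM values quoted in the text
    print(f"v/v_SM = {r:>3}: m_pi ~ {pion_mass(r):6.1f} MeV, "
          f"range ~ {opep_range_fm(r):4.2f} fm")
```

Even the modest increase to v/v_SM = 5 roughly halves the range of the nuclear force, which is the qualitative origin of the unbinding of nuclei.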
Assuming that stable protons and complex atoms are required for observers to form, this provides an anthropic pressure that favors v ∼ v_SM. But it is clear that a robust constraint only exists if dimensionless couplings are held fixed; variation of the Yukawas allows these constraints to be naturally evaded (although other catastrophic boundaries may be encountered, see e.g. [150]). Indeed, it is possible to imagine a "weakless" universe where the gauge group of the Standard Model is SU(3)_c × U(1)_em, and fermions appear in vectorlike representations [151]. It has been argued that such a universe undergoes big-bang nucleosynthesis, matter domination, structure formation, and star formation; i.e., sufficient stages of development to produce some form of observers. Of course, truly demonstrating that such a theory is capable of reproducing the physics necessary for forming observers is challenging, but this suffices to indicate that the anthropic boundaries relevant to the weak scale are permeable.
Given the appeal of an anthropic explanation for the cosmological constant problem, it is sensible to consider whether this informs a possible anthropic explanation for the weak scale. In general, a landscape providing an anthropic explanation for both problems must contain enough vacua to scan both the cosmological constant and the weak scale with sufficient precision. This is a significant demand on top of the vacuum multiplicity required to scan the cosmological constant alone, making it seem more economical to set the weak scale naturally even if the cosmological constant is set anthropically. However, it may be possible to correlate the value of the cosmological constant with the weak scale [152,153] in such a way that the landscape need only contain sufficient vacua to scan the former and not the latter.

Recent developments
Thus far, there is no experimental evidence for compositeness or supersymmetry as a solution to the electroweak hierarchy problem. But as we have emphasized, the fact that Nature does not realize a specific mechanism for naturalness of the electroweak scale does not mean that the electroweak scale is unnatural, or that naturalness itself has failed. Perhaps what is called for are new ideas, ones that do not necessarily replicate one of the mechanisms already used by Nature. In what follows we will survey some of the new directions that have been developed since the last Snowmass process.

Discrete symmetries
One interesting direction is to retain the symmetry-based approach but expand the scope of possible symmetries. The most obvious possibility is to work with discrete symmetries, rather than continuous ones. The appeal is that the new particles required by a discrete symmetry need not carry the same Standard Model quantum numbers, and so are less strongly constrained by data from the LHC.
There are by now many different examples of "neutral naturalness" [154], but the simplest is the original: the Twin Higgs [155]. The idea is to introduce a mirror copy of the Standard Model along with a Z_2 symmetry exchanging each field with its mirror counterpart. 13 On top of this, one needs to assume an approximate global symmetry in the Higgs sector; this global symmetry need not be exact, and is violated by all SM Yukawa and gauge couplings, but should be an approximate symmetry of the Higgs potential. Together, this leads to an accidental SU(4) global symmetry respected by one-loop radiative corrections, despite the fact that the exact symmetry of the marginal couplings is only Z_2. In this respect, the Higgs can be thought of as a pseudo-Goldstone boson of the accidental SU(4) symmetry. This does not stabilize the Higgs mass to arbitrarily high scales [157], but rather postpones the scale at which true solutions to the hierarchy problem (such as supersymmetry [158-160] or compositeness [161-164]) must appear.
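The accidental-symmetry mechanism can be sketched schematically (a standard textbook-style presentation, not taken verbatim from the references; overall coefficients are indicative only):

```latex
% With H the SM doublet and H' its mirror copy, the Z_2-symmetric
% quadratic and quartic terms of the Higgs potential are
\begin{align}
V = -m^2\left(|H|^2 + |H'|^2\right) + \lambda\left(|H|^2 + |H'|^2\right)^2 ,
\end{align}
% which accidentally respects an SU(4) rotating the four complex components
% of (H, H'). The dangerous one-loop corrections from the top and its
% Z_2-related mirror partner take the form
\begin{align}
\Delta V \sim \frac{6\, y_t^2\, \Lambda^2}{16\pi^2}\left(|H|^2 + |H'|^2\right),
\end{align}
% correcting only the SU(4)-symmetric mass term. When the mirror Higgs
% acquires a vev of order f, SU(4) is spontaneously broken and the light
% Higgs emerges as a pseudo-Goldstone boson, free of quadratic sensitivity
% to the cutoff at one loop.
```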
Experimental signatures of neutral naturalness are quite different from those typically associated with supersymmetry or compositeness, insofar as the partner particles predicted by the discrete symmetry populate a Hidden Valley [165]. The most promising signals depend on the nature and quality of the discrete symmetry, but typically include Higgs coupling deviations and Higgs decays into invisible or long-lived particles [166,167]. Higgs coupling deviations generically provide the strongest constraint on the natural parameter space of these models, though tuning and Higgs couplings may be decoupled [168,169]. Light, stable partner particles can be copiously produced in the early universe and give rise to promising signatures in the CMB and large-scale structure [170-172], exemplifying the growing relevance of cosmological observations to the electroweak hierarchy problem.
Just as the Twin Higgs leverages a discrete symmetry to produce an accidental global symmetry, folded supersymmetry [173,174] leverages a discrete symmetry to produce an accidental supersymmetry. Theories with completely neutral scalar partners for Standard Model fermions require an interplay between a discrete symmetry, a continuous global symmetry, and supersymmetry, and have only been discovered more recently [175,176].
Models and signatures of neutral naturalness have been explored extensively in the past decade. There is a comprehensive Snowmass white paper dedicated to neutral naturalness [28], to which we refer the reader for further details. Some cosmological implications of neutral naturalness are also discussed in Snowmass white papers on light cosmological relics [29] and early-universe model building [30].
The relevance of discrete symmetries to the hierarchy problem extends well beyond the framework of neutral naturalness. Nonlinearly realized discrete symmetries can significantly alter naive expectations of naturalness and provide new approaches to the hierarchy problem [177,178].

Relaxation
In some sense, neutral naturalness is a conservative "new" idea for the electroweak hierarchy problem, in that it retains a familiar mechanism (symmetry protection) while pushing the specific realization in a new direction. But the past decade has seen the emergence of several entirely new approaches, in some cases inspired by proposals for the strong CP and cosmological constant problems. Chief among these is relaxation of the weak scale. Aspects of these approaches are also discussed in a Snowmass white paper on early-universe model building [30].
The original incarnation is the relaxion [179], inspired by the Abbott model for the cosmological constant, featuring a QCD axion-like particle φ coupled to the Standard Model with an additional inflationary sector whose properties are necessarily somewhat special. The simplest realization involves enlarging the Standard Model with terms of the schematic form (M^2 − gφ)|H|^2 + V(gφ) + (φ/f) G G̃, where M is of the order of the cutoff of the SM Higgs sector, H is the Higgs doublet, g is the dimensionful coupling that breaks the shift symmetry, and V(gφ) ∼ gM^2 φ + g^2 φ^2 + ... parameterizes the non-derivative terms solely involving φ. We will be interested in field values of φ that greatly exceed f, so we should understand it as a non-compact field (or a compact field imbued with an effective period much longer than 2πf). When g/M → 0 the Lagrangian has a shift symmetry φ → φ + 2πf, and g can be treated as a spurion for the breaking of the shift symmetry.
Below the QCD confinement scale, the coupling between φ and the gluon field strength gives rise to the familiar periodic axion potential, Λ^4 cos(φ/f). For values of the Higgs vev near the Standard Model value, the height of the cosine potential is Λ^4 ∼ m_π^2 f_π^2, where m_π^2 changes linearly with the quark masses, and so the barrier height is linearly proportional to the Higgs vev (at least roughly speaking; there are of course logarithmic corrections from the contributions to QCD running). Now the idea is clear: starting at values of φ such that the total Higgs mass is large and positive, and assuming the slope of the φ potential causes it to evolve in a direction that lowers the Higgs mass, the φ potential will initially be completely dominated by the gφ potential terms, until the point at which the total Higgs mass-squared goes from positive to negative and the Higgs acquires a vacuum expectation value. At this point the wiggles due to the quark masses grow linearly in the Higgs vev, and generically φ will stop when the slope of the QCD-induced wiggles matches the slope of V(φ). This classical stopping point occurs when the maximum slope of the cosine potential, Λ^4/f, is of the same order as the linear tilt, gM^2. This allows for a light Higgs (i.e., a small total Higgs mass-squared and small electroweak scale) relative to a cutoff M provided g/M ≪ 1. For example, with a QCD axion decay constant f = 10^9 GeV and M ∼ 10^7 GeV we have g/M ∼ 10^−30.
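The parametrics above can be collected in one place (in the notation of the text; overall coefficients are schematic):

```latex
% The slowly rolling tilt and the Higgs-dependent QCD barriers are
\begin{align}
V(\phi) \sim g M^2 \phi + \Lambda^4(v)\cos(\phi/f),
\qquad
\Lambda^4(v) \sim m_\pi^2(v)\, f_\pi^2 \propto v ,
\end{align}
% and the field classically stops where the maximal barrier slope
% matches the linear tilt,
\begin{align}
g M^2 \sim \frac{\Lambda^4(v)}{f}
\quad\Longrightarrow\quad
\frac{g}{M} \sim \frac{\Lambda^4(v)}{f\, M^3} ,
\end{align}
% which is exponentially small in orders of magnitude for M far above the
% QCD scale: the tiny shift-symmetry-breaking spurion g is the price of a
% weak scale v far below the cutoff M.
```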
So far we have only accounted for the parametrics of the potential, neglecting the actual dynamical process. In the minimal realization of the relaxion mechanism, φ is made to roll slowly by imagining that its evolution occurs during a period of inflation, such that Hubble friction provides efficient dissipation of kinetic energy in φ. Combining all constraints, this simplest model does not stabilize the weak scale all the way to M_Pl; the cutoff of the theory is bounded far below the Planck scale. Unfortunately, even if all of these criteria are satisfied, there is an observational problem with this simplest scenario. The field φ stops not at the minimum of the QCD cosine potential (for which the effective θ angle is zero), but is rather displaced by an amount proportional to the slope of the φ potential. This amounts to θ ∼ 1, which is excluded (as we have seen) by bounds on hadronic EDMs. So the minimal mechanism is ruled out by a natural prediction, though it is certainly no fault of the mechanism itself. This can be ameliorated without extra ingredients by coupling the relaxion to the inflaton in such a way that the slope of the φ potential decreases after inflation, reducing the contribution to θ. This has the effect of lowering the scale at which the model must be UV completed, leading to M ≲ 30 TeV for θ ≲ 10^−10. The most striking experimental signatures in minimal relaxion scenarios involve the relaxion-Higgs mixing induced by the cosine potential, leading to signals associated with a new, light Higgs-like scalar [180]. A simple variation without stringent constraints from hadronic EDMs entails repeating the same ingredients, but the relaxion is instead the axion of another gauge group for which constraints on the θ parameter are weaker or nonexistent. This scenario should involve quarks of the new gauge group that are also charged under the electroweak gauge group, with attendant Hidden Valley experimental signatures [181]. For further development of the relaxion paradigm, see e.g. [182-188].
Many of the open questions about the relaxion involve physics in the UV. There are challenges to protecting the shift symmetry of the relaxion over the vastly trans-Planckian excursions in field space required to explain the value of the weak scale, as enumerated by the Swampland program; for recent summaries of relevant considerations, see e.g. [27,189]. One possibility is to accumulate effectively trans-Planckian flat potentials via axion monodromy [190,191] or clockwork [70]. The advent of the relaxion catalyzed the development of other cosmological approaches to the electroweak hierarchy problem, for the most part leveraging vacuum selection during an inflationary epoch to preferentially populate a universe with the observed value of the weak scale [192-196].

Reheating
An alternative that proceeds from similar inspiration is to put many copies of the Standard Model in the same universe, but explain why one copy acquires the dominant energy density [197]. This proposal, known as N-naturalness, is briefly reviewed in a Snowmass white paper on light cosmological relics [29]; here we summarize some key features.
The idea is to envision N sectors which are mutually decoupled. For simplicity, we could take them to be N copies of the Standard Model, though this is not an important restriction. From copy to copy, we imagine the Higgs mass parameters are distributed in some range from −Λ_H^2 to Λ_H^2 according to some probability distribution. For a wide range of distributions, the generic expectation is that some sectors have accidentally small Higgs masses, m_H^2 ∼ Λ_H^2/N. For large enough N, this implies that there is a sector whose electroweak scale is well below the cutoff, which we might identify with "our" Standard Model. Reversing the argument, this implies that the cutoff of the theory should satisfy Λ_H^2 ≲ N m_H^2, with m_H the observed Higgs mass parameter. For example, a cutoff of 10 TeV corresponds to N = 10^4, whereas a cutoff of 10^10 GeV requires N = 10^16.
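A quick numerical sketch of this scaling (assuming a weak-scale mass parameter of 100 GeV for round numbers; an input of this sketch, not a value from the text):

```python
import math

# Hedged check of the N-naturalness scaling Lambda_H ~ sqrt(N) * m_H:
# with N sectors scanning m_H^2 in [-Lambda_H^2, Lambda_H^2], the lightest
# sector generically sits at m_H^2 ~ Lambda_H^2 / N.
M_WEAK_GEV = 100.0  # illustrative weak-scale mass parameter

def cutoff_gev(n_sectors):
    """Largest cutoff Lambda_H compatible with one sector at the weak scale."""
    return math.sqrt(n_sectors) * M_WEAK_GEV

print(f"N = 1e4  -> Lambda_H ~ {cutoff_gev(1e4):.0e} GeV")   # ~10 TeV
print(f"N = 1e16 -> Lambda_H ~ {cutoff_gev(1e16):.0e} GeV")  # ~10^10 GeV
```

The two printed values reproduce the 10 TeV and 10^10 GeV benchmarks quoted in the text.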
There is another factor in play when N is large. While the naive scale of quantum gravity is M_Pl, in the presence of a large number of species the scale at which gravity becomes strongly coupled is lowered, Λ_G^2 ∼ M_Pl^2/N. This implies the effective Planck scale should be at least M_Pl^2 ∼ N Λ_G^2. Solving the entire hierarchy problem this way would entail N = 10^32. However, this lowers the cutoff of quantum gravity to the weak scale, and gives us the usual problems associated with a low cutoff. But we would naturally have one sector with the observed value of the weak scale and a Higgs cutoff associated with the cutoff of quantum gravity for N = 10^16, for which Λ_H = Λ_G = 10^10 GeV. Alternately, we could preserve a notion of grand unification for N = 10^4, for which quantum gravity grows strong at 10^16 GeV, and something like supersymmetry enters at Λ_H = 10 TeV to cut off the Higgs sector.
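The species-scale estimates can be checked at the order-of-magnitude level. Here M_Pl is taken to be the reduced Planck mass, 2.4 × 10^18 GeV, an input of this sketch:

```python
import math

# Hedged order-of-magnitude check of the species-scale bound
# Lambda_G ~ M_Pl / sqrt(N) for N decoupled sectors.
M_PL_GEV = 2.4e18  # reduced Planck mass (assumed input)

def species_scale_gev(n_sectors):
    """Scale at which gravity grows strong in the presence of N species."""
    return M_PL_GEV / math.sqrt(n_sectors)

# N = 10^16: gravity grows strong near 10^10 GeV, as quoted in the text.
# N = 10^32: the species scale falls all the way to the weak scale.
print(f"N = 1e16 -> Lambda_G ~ {species_scale_gev(1e16):.1e} GeV")
print(f"N = 1e32 -> Lambda_G ~ {species_scale_gev(1e32):.1e} GeV")
```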
The question, then, is to explain why the sector containing "our" Standard Model is populated, while all of the other sectors are not. As with the relaxion, this is accomplished through cosmology. In a universe with many sectors, what matters is which sectors are cosmologically abundant. If all sectors had a thermal abundance, there would be an enormous contribution to the energy density of the universe, and we would not have any ability to understand why we are the sector with the smallest scales. Thus we can imagine a cosmological mechanism that preferentially reheats sectors with smaller scales. The simplest way to accomplish this is to imagine an inflationary epoch, followed by reheating due to the decay of some reheaton. To avoid tuning, this reheaton should couple universally to all sectors. The Standard Model can be preferentially reheated (i.e., absorb most of the energy from the reheaton decays) if the branching ratio of the reheaton to each sector scales like an inverse power of the (absolute value of the) Higgs mass-squared in each sector.
Remarkably, this is precisely what happens if the reheaton is lighter than the Higgs in each sector, a parametric but technically natural requirement of the theory. Assuming this is the case, the reheaton predominantly decays into fermions of sectors where electroweak symmetry is broken, whereas when electroweak symmetry is unbroken the dominant decay is into gauge bosons. Thus the decay rate into broken-phase sectors scales as 1/m_h^2, while the decay into unbroken-phase sectors scales as 1/m_H^4. Reheaton decays therefore prefer a sector with broken electroweak symmetry and the smallest possible value of m_h. The resulting energy density of each sector is proportional to the corresponding decay width, ρ_i ∝ Γ_i. This leads to some energy density in the sectors nearest to ours in mass, with attendant predictions for dark radiation within the reach of future CMB experiments [198].
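A toy numerical illustration of the reheating preference follows. The sector mass-squareds are arbitrary illustrative values (in units of the lightest one), not drawn from the model; only the 1/m_h^2 and 1/m_H^4 scalings are taken from the text:

```python
# Toy sketch (not the full N-naturalness computation): with reheaton widths
# scaling as 1/m_h^2 into broken sectors and 1/m_H^4 into unbroken ones,
# the sector with the smallest |m_h| absorbs the largest energy share.
broken_m2 = [1.0, 3.0, 5.0, 7.0]    # sectors with electroweak breaking
unbroken_m2 = [2.0, 4.0, 6.0, 8.0]  # sectors without

widths = [1.0 / m2 for m2 in broken_m2] + [1.0 / m2**2 for m2 in unbroken_m2]
total = sum(widths)
shares = [w / total for w in widths]  # energy fraction per sector

# The lightest broken-phase sector dominates the energy budget, with the
# nearby sectors carrying small but nonzero shares (the "dark radiation").
print(f"share of lightest broken sector: {shares[0]:.2f}")
```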

UV/IR mixing
One way to frame the hierarchy problem is as a separation of UV physics from IR physics in effective field theory: the theory in the far UV knows nothing about the theory in the far IR, and cannot generically produce IR scales well-separated from the fundamental UV scale (with the exception of special mechanisms, such as dimensional transmutation, that we have encountered earlier). From this perspective, a new approach to the hierarchy problem might entail linking the far UV and the far IR. What are the prospects of UV/IR mixing for the hierarchy problem? As we have already seen in our discussion of the cosmological constant problem, we might expect a theory of quantum gravity to feature UV/IR mixing. Whether this UV/IR mixing has any relevance to the weak scale is an open question, but there are a number of promising possibilities. These are summarized quite comprehensively in a dedicated Snowmass white paper [27], and so our discussion here will remain fairly concise. Some of the most promising opportunities arise in the context of the Swampland program, which articulates conjectured constraints on effective field theories from consistent embedding in a theory of quantum gravity. Perhaps the most famous among the Swampland conjectures is the Weak Gravity Conjecture (WGC) [199], which formalizes the sense in which "gravity is the weakest force." For a comprehensive review of Swampland conjectures, see [200]; for a review focused on the Weak Gravity Conjecture and its relatives, see [189]. The possible relevance of these conjectures to the electroweak hierarchy problem is illustrated by a proposal first made by Cheung and Remmen [201] to use the (electric) Weak Gravity Conjecture to bound the weak scale.
In its simplest form, the WGC posits that an abelian gauge theory coupled to gravity must contain a state of charge q and mass m satisfying qg > m/M_Pl (28), which amounts to the statement that gravity is the weakest force, since this implies the gauge force between two charges exceeds the gravitational one. Cheung and Remmen noted that writing the inequality as m < qg M_Pl had the effect of bounding a possibly UV-sensitive parameter (the mass m, potentially additively sensitive to short-distance physics) by a UV-insensitive one (the coupling g, which is only logarithmically sensitive to short-distance physics). This could be applied to the electroweak hierarchy problem by extending the Standard Model to include an unbroken U(1) and some particle charged under it whose mass satisfies the WGC and is controlled by electroweak symmetry breaking. A natural candidate is gauging U(1)_B−L, which can be rendered anomaly-free by adding a right-handed neutrino ν_R. Current bounds on U(1)_B−L require qg ≲ 10^−24.
In this case neutrino masses arise from a Yukawa coupling to the Higgs, giving Dirac neutrino masses of the form m_ν = y_ν v/√2. The lightest neutrino has the largest charge-to-mass ratio, and if there are no other light particles in the spectrum it is the natural candidate to satisfy the WGC. For a neutrino mass around m_ν ∼ 0.1 eV, if qg ∼ m_ν/M_Pl ∼ 10^−29 (consistent with current bounds) then the WGC is just barely satisfied. If the values of the Yukawa coupling y_ν and U(1)_B−L coupling qg are held fixed, then higher values of the Higgs vev v would violate the WGC. One could then imagine that consistency of quantum gravity places an upper bound on v.
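The numerology here is easy to check. The reduced Planck mass is assumed as the normalization; this is a sketch of the estimate, not a statement about the underlying references:

```python
# Hedged check of the numbers in the text: a Dirac neutrino with
# m_nu ~ 0.1 eV has m_nu / M_Pl ~ 10^-29, so a B-L coupling qg of that
# size just saturates the electric WGC (and sits within the 10^-24 bound).
M_PL_GEV = 2.4e18   # reduced Planck mass (assumed input)
M_NU_GEV = 0.1e-9   # 0.1 eV expressed in GeV

ratio = M_NU_GEV / M_PL_GEV
print(f"m_nu / M_Pl ~ {ratio:.0e}")

# The magnetic WGC then places the cutoff of the U(1) description at
# qg * M_Pl, which for marginally WGC-satisfying qg is m_nu itself:
qg = ratio
magnetic_cutoff_gev = qg * M_PL_GEV
print(f"magnetic-WGC cutoff ~ {magnetic_cutoff_gev:.1e} GeV")
```

This makes quantitative the obstruction discussed next: the purely electric description would already break down at the neutrino mass scale, far below v.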
Of course, there are many ways in which this argument could fail: there could be lighter states charged under U(1)_B−L that satisfy the WGC; the WGC could be satisfied in the underlying theory by varying y_ν and qg; etc. Unfortunately, even taking the premises to be true, the argument itself fails due to a related conjecture. The magnetic form of the WGC posits that the cutoff Λ of a purely electric description of an abelian gauge theory with charged states must satisfy Λ ≲ qg M_Pl, where the cutoff could correspond to e.g. the scale of monopoles in the theory or some other breakdown of the purely electric description. This would imply the above construction breaks down at the scale of neutrino masses, and additional degrees of freedom associated with Λ would appear well before the scale v. The proposal can be revived by considering a distinct U(1) whose charged states lie closer to the weak scale and acquire some mass from the Higgs [202], in which case the bound from the magnetic WGC rises above the weak scale. This presents a number of novel experimental signatures, including new states around the TeV scale coupled to the Higgs boson and, potentially, an extremely weak long-range force acting on dark matter. Although there are various possible caveats and potential loopholes [189,202], the proposal illustrates a sense in which Swampland conjectures may be relevant to the electroweak hierarchy problem. Indeed, there are a number of related ways that Swampland conjectures may be brought to bear to explain the value of the weak scale [203-205].
The Weak Gravity Conjecture and other Swampland conjectures amount to a sort of "implicit" UV/IR mixing, in which the parameter space of an EFT is bounded by generic criteria without reference to the microscopic physics responsible. There are also examples of theories exhibiting various forms of "explicit" UV/IR mixing. A very concrete example with immediate relevance to the hierarchy problem involves worldsheet modular invariance in non-supersymmetric string theory [206,207]. Other examples include the vanishing black hole Love numbers mentioned earlier, as well as noncommutative field theories [208,209], field theories with sub-system global symmetries [210], and certain non-integrable quantum field theories in two dimensions [130]. The relevance of these latter examples to the electroweak hierarchy problem is less apparent, but the exploration of theories featuring UV/IR mixing is likely to bear further fruit. At the very least, it promises to reveal new phenomena in quantum field theory.

Self-organized criticality
There is one distinguished value for the mass-squared parameter of the Higgs doublet: m_H^2 = 0. For m_H^2 < 0 electroweak symmetry is spontaneously broken, while for m_H^2 > 0 it is preserved, rendering m_H^2 = 0 the critical value (at zero temperature) separating the two phases of electroweak symmetry. The fact that |m_H^2/M_Pl^2| ≪ 1 (for a cutoff, say, of order M_Pl) amounts to the statement that we are surprisingly close to the critical point. As there are systems that drive themselves to their critical points, a phenomenon known as self-organized criticality [211], it is inviting to consider whether something along these lines might solve the electroweak hierarchy problem. 14 Naively, it is difficult to realize self-organized criticality in Lorentz-invariant quantum field theories, since most instances involve both driving and dissipation. Nonetheless, recent years have seen several concrete proposals for something like self-organized criticality as an explanation of the weak scale. The first of these [213] involves the interplay between the Higgs field and a modulus field in a 5d Randall-Sundrum model, with the Higgs instability being connected via the modulus field to violation of the Breitenlohner-Freedman bound far from the UV boundary. 15 Subsequently, analogs of self-organized criticality were developed in a cosmological setting [215-217], in which fluctuations of scalar fields during an inflationary epoch lead to localization close to a critical point. Once again, new experimental signatures arise in connection with the Higgs. The simplest application of self-organized localisation [217] to the electroweak hierarchy problem, for example, requires modifying the running of the Higgs self-coupling via new states near the weak scale. More broadly, these concrete examples are an encouraging indication of the prospects for self-organized criticality in understanding the electroweak hierarchy problem.
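For concreteness, the degree of near-criticality can be estimated. Both inputs, the tree-level SM relation |m_H^2| = m_h^2/2 with m_h = 125 GeV and the reduced Planck mass, are assumptions of this sketch:

```python
import math

# Order-of-magnitude estimate of how close we sit to the critical point
# m_H^2 = 0, measured against a Planckian cutoff.
M_PL_GEV = 2.4e18                 # reduced Planck mass (assumed input)
m_H = 125.0 / math.sqrt(2.0)      # |m_H| ~ 88 GeV from the tree-level relation

closeness = (m_H / M_PL_GEV) ** 2
print(f"|m_H^2 / M_Pl^2| ~ {closeness:.0e}")
```

The ratio comes out around 10^−33, quantifying just how finely the observed universe sits on the electroweak phase boundary.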
14 To my knowledge, the first suggestion that self-organized criticality might be relevant to the electroweak hierarchy problem was made (ironically) by David B. Kaplan in his 1997 TASI lectures [212], while a more earnest suggestion appears in [2].
15 For related work, see [214].

Looking forward
With that, our journey through the main outstanding naturalness problems of high-energy physics (and their recently proposed solutions) comes to an end. We have seen something of the problems themselves, their historical solutions, and the proliferation of new approaches that have emerged in the course of the past decade. Among other things, these new approaches are distinguished by the novelty of their experimental signatures, bringing entirely new observables and experiments to bear in the search for signs of naturalness.
These approaches also underline the extent to which naturalness problems are connected. Although the cosmological constant problem, the electroweak hierarchy problem, and the strong CP problem vary widely in both dimensionality and severity, their proposed solutions have much in common; see, for instance, the table below. If nothing else, this is a reminder that something can be gained from thinking of these naturalness problems together, rather than in isolation. It may well be that a recently-proposed solution proves most fruitful when applied to a different problem from the one it was invented to address.

What lies ahead? A number of the new paths sketched here are in the earliest stages of exploration, and will doubtless develop further in the coming years. Further exploration of UV/IR mixing seems particularly promising, at the very least because it remains a relatively unexplored facet of quantum field theory and gravity with transformative potential. Although there are not, at present, any completely satisfying applications of UV/IR mixing to the marquee naturalness problems of high-energy physics, it would be premature to conclude that "there is no there there." Before discovery there is always exploration, and the motivation for exploring UV/IR mixing with an eye towards naturalness problems is abundant. Self-organized criticality is also quite promising in this regard; now that there is proof of principle in relativistic settings, there is considerable room for further exploration.
There are also numerous developments in adjacent subfields of high-energy theory that have yet to be applied directly to naturalness problems, but seem destined to play a role. The amplitudes program is perhaps the most striking example, as it has recently provided abundant evidence that the renormalization of irrelevant operators in effective field theories enjoys surprising properties that motivate some refinement of naturalness expectations. For instance, the unexpected zeroes in the one-loop dimension-6 matrix of anomalous dimensions in the Standard Model EFT are best understood in terms of helicity selection rules [218]; analogous surprises persist even at two loops [219]. Such surprising zeroes extend to Wilson coefficients of irrelevant operators as well [220], which can be understood at least in part using on-shell techniques [221,222]. Of course, there may be limits to how much we can learn about the naturalness problems of marginal and relevant operators from an improved understanding of irrelevant ones, but these examples suggest that naturalness expectations should be treated with care.
More broadly, our understanding of quantum field theory is far from static, and in particular the understanding of symmetries has evolved considerably since the articulation of 't Hooft naturalness. The ordinary symmetries typically applied to naturalness problems have subsequently been joined by a plethora of generalizations, including higher-form symmetries [223], higher-group symmetries [224,225], subsystem symmetries [226], and non-invertible symmetries [227]. It seems quite likely that at least some of these generalized symmetries can be brought to bear on familiar naturalness problems. This is far from wishful thinking. For instance, higher-form symmetries already imbue the masslessness of the photon with a genuine notion of 't Hooft naturalness by making it the Goldstone boson of a spontaneously broken one-form global symmetry, while higher-group symmetries have given rise to new constraints on the phenomenology of axion-Yang-Mills theories [228]. If the next decade sees the emergence of a genuinely new and compelling approach to naturalness problems leveraging symmetries, it is likely to come from this direction.
Time will tell if more refined views as to the uniformity of Nature would have been useful to the particle theorist. As ever, we must look to experiment for ultimate guidance. But in the meantime, the motivation for thinking about naturalness remains strong. There are many paths to be followed, and yet more paths to be discovered.

Data Availability Statement
This manuscript has no associated data or the data will not be deposited. [Authors' comment: Data sharing not applicable to this article as no datasets were generated or analysed during the current study.]
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP3. SCOAP3 supports the goals of the International Year of Basic Sciences for Sustainable Development.