Varying Constants, Gravitation and Cosmology
Abstract
Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This would induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence on the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.
1 Introduction
Fundamental constants appear everywhere in the mathematical laws we use to describe the phenomena of Nature. They seem to contain some truth about the properties of the physical world while their real nature seems to evade us.
The question of the constancy of the constants of physics was probably first addressed by Dirac [155, 156] who expressed, in his “Large Numbers hypothesis”, the opinion that very large (or small) dimensionless universal constants cannot be pure mathematical numbers and must not occur in the basic laws of physics. He suggested, on the basis of this numerological principle, that these large numbers should rather be considered as variable parameters characterizing the state of the universe. Dirac formed five dimensionless ratios among which^{1} δ ≡ H_{0}ħ/m_{p}c^{2} ∼ 2h × 10^{−42} and \(\epsilon \equiv G{\rho _0}/H_0^2 \sim 5{h^{-2}} \times {10^{-4}}\) and asked which of these ratios is constant as the universe evolves. Usually, δ varies as the inverse of the cosmic time while ϵ also varies with time if the universe is not described by an Einstein-de Sitter solution (i.e., when a cosmological constant, curvature or radiation are included in the cosmological model). Dirac then noticed that α_{G}/μα_{EM}, representing the relative magnitude of the electrostatic and gravitational forces between a proton and an electron, was of the same order as H_{0}e^{2}/m_{e}c^{3} = δα_{EM}μ, representing the age of the universe in atomic units, so that his five numbers can be “harmonized” if one assumes that α_{G} and δ vary with time and scale as the inverse of the cosmic time.
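As a back-of-the-envelope check (a minimal sketch; the matter density ρ₀ below is an assumed illustrative value, not taken from the text, and H₀ is written as 100 h km s⁻¹ Mpc⁻¹), Dirac's two ratios can be evaluated numerically:

```python
# Dirac's dimensionless ratios, evaluated with present-day values.
# rho0 (mean matter density) is an assumed order-of-magnitude value.

hbar = 1.054571628e-34   # J s, reduced Planck constant
c    = 299792458.0       # m/s
G    = 6.67428e-11       # m^3 kg^-1 s^-2
m_p  = 1.672621637e-27   # kg, proton mass

h  = 1.0                            # dimensionless Hubble parameter
H0 = h * 100e3 / 3.0857e22          # s^-1  (100 h km/s/Mpc)

delta   = H0 * hbar / (m_p * c**2)  # ~ 2h x 10^-42
rho0    = 8e-29                     # kg/m^3, assumed matter density
epsilon = G * rho0 / H0**2          # ~ 5 h^-2 x 10^-4

print(f"delta   = {delta:.2e}")
print(f"epsilon = {epsilon:.2e}")
```

With these inputs the two ratios indeed come out at the orders of magnitude quoted in the text.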
This argument by Dirac is indeed not a physical theory but it opened many doors in the investigation on physical constants, both on questioning whether they are actually constant and on trying to understand the numerical values we measure.
First, the implementation of Dirac’s phenomenological idea into a field-theory framework was proposed by Jordan [268], who realized that the constants have to become dynamical fields and proposed a theory where both the gravitational and fine-structure constants can vary ([497] provides a summary of some earlier attempts to quantify the cosmological implications of Dirac’s argument). Fierz [195] then realized that in such a case, atomic spectra will be spacetime-dependent, so that these theories can be observationally tested. Restricting to the subcase in which only G can vary led to the definition of the class of scalar-tensor theories, which were further explored by Brans and Dicke [67]. This kind of theory was further generalized to obtain various functional dependencies for G in the formalization of scalar-tensor theories of gravitation (see, e.g., [124]).
Second, Dicke [151] pointed out that the density of the universe is determined by its age, this age being related to the time needed to form galaxies, stars, heavy nuclei. … This led him to argue that the presence of an observer in the universe places constraints on the physical laws that can be observed. In fact, what is meant by observer is the existence of (highly?) organized systems, and this principle can be seen as a rephrasing of the question “why is the universe the way it is?” (see [252]). Carter [82, 83], who actually coined the term “anthropic principle” for it, showed that the numerological coincidences found by Dirac can be derived from physical models of stars and the competition between the weakness of gravity and nuclear fusion. Carr and Rees [80] then showed how one can scale up from atomic to cosmological scales only by using combinations of α_{EM}, α_{G} and m_{e}/m_{p}.

This paves the way to three interrelated questions:

- how do we construct theories in which what were thought to be constants are in fact dynamical fields?

- how can we constrain, experimentally or observationally, the spacetime dependencies of the constants that appear in our physical laws?

- how can we explain the values of the fundamental constants and the fine-tuning that seems to exist between their numerical values?
While “varying constants” may seem, at first glance, to be an oxymoron, it has to be considered merely as jargon to be understood as “revealing new degrees of freedom, and their coupling to the known fields of our theory”. The tests on the constancy of the fundamental constants are indeed very important tests of fundamental physics and of the laws of Nature we are currently using. Detecting any such variation will indicate the need for new physical degrees of freedom in our theories, that is new physics.
Carrying out such tests involves three steps:
 1. it is necessary to understand and to model the physical systems used to set the constraints; in particular, one needs to relate the effective parameters that can be observationally constrained to a set of fundamental constants;
 2. it is necessary to relate and compare different constraints that are obtained at different spacetime positions; this often requires a spacetime dynamics and thus requires one to specify a model as well as a cosmology;
 3. it is necessary to relate the variations of different fundamental constants.
Therefore, we shall start in Section 2 by recalling the link between the constants of physics and the theories in which they appear, as well as with metrology. From a theoretical point of view, the constancy of the fundamental constants is deeply linked with the equivalence principle and general relativity. In Section 2 we will recall this relation and in particular the link with the universality of free fall. We will then summarize the various constraints that exist on such variations, mainly for the fine structure constant and for the gravitational constant, in Sections 3 and 4, respectively. We will then turn to the theoretical implications in Section 5, describing some of the arguments backing up the fact that constants are expected to vary, the main frameworks used in the literature and the various ways proposed to explain why they have the values we observe today. We shall finish with a discussion of their spatial variations in Section 6 and of the possibility of understanding their numerical values in Section 7.
Various reviews have been written on this topic. We will refer to the review [500] as FVC and we mention the following later reviews [31, 47, 72, 119, 226, 281, 278, 501, 395, 503, 505] and we refer to [356] for the numerical values of the constants adopted in this review.
2 Constants and Fundamental Physics
2.1 About constants
Our physical theories introduce various structures to describe the phenomena of Nature. They involve various fields, symmetries and constants. These structures are postulated in order to construct a mathematicallyconsistent description of the known physical phenomena in the most unified and simple way.
We define the fundamental constants of a physical theory as any parameter that cannot be explained by this theory. Indeed, we are often dealing with other constants that in principle can be expressed in terms of these fundamental constants. The existence of these two sets of constants is important and arises from two different considerations. From a theoretical point of view we would like to extract the minimal set of fundamental constants, but often these constants are not measurable. From a more practical point of view, we need to measure constants, or combinations of constants, which allow us to reach the highest accuracy.

- from a theoretical point of view: the considered framework does not provide any way to compute these parameters, i.e., it does not have any equation of evolution for them, since otherwise they would be considered as dynamical fields;

- from an experimental point of view: these parameters can only be measured. If the theories in which they appear have been validated experimentally, it means that, at the precision of these experiments, these parameters have indeed been checked to be constant, as required by the reproducibility of experimental results.
This means that testing for the constancy of these parameters is a test of the theories in which they appear and allows us to extend our knowledge of their domain of validity. This also explains the definition chosen by Weinberg [526], who stated that they cannot be calculated in terms of other constants “…not just because the calculation is too complicated (as for the viscosity of water) but because we do not know of anything more fundamental”.
This has a series of implications. First, the list of fundamental constants to consider depends on our theories of physics and, thus, on time. Indeed, when introducing new, more unified or more fundamental theories, the number of constants may change, so that this list reflects both our knowledge of physics and, more importantly, our ignorance. Second, it also implies that some of these fundamental constants can become dynamical quantities in a more general theoretical framework, so that the tests of the constancy of the fundamental constants are tests of fundamental physics, which can reveal that what was thought to be a fundamental constant is actually a field whose dynamics cannot be neglected. If such fundamental constants are actually dynamical fields, it also means that the equations we are using are only approximations of other, more fundamental equations, in an adiabatic limit, and that an equation for the evolution of this new field has to be obtained.
The reflections on the nature of the constants and their role in physics are numerous. We refer to the books [29, 215, 510, 509] as well as [59, 165, 216, 393, 521, 526, 538] for various discussions of this issue that we cannot develop at length here. This paragraph summarizes some of the properties of the fundamental constants that have attracted some attention.
2.1.1 Characterizing the fundamental constants
Physical constants seem to play a central role in our physical theories since, in particular, they determine the magnitudes of the physical processes. Let us sketch briefly some of their properties. How many fundamental constants should be considered? The set of constants conventionally considered as fundamental [213] consists of the electron charge e, the electron mass m_{e}, the proton mass m_{p}, the reduced Planck constant ħ, the velocity of light in vacuum c, the Avogadro constant N_{A}, the Boltzmann constant k_{ B }, the Newton constant G, and the permittivity and permeability of vacuum, ε_{0} and μ_{0}. The latter has a fixed value in the SI system of units (μ_{0} = 4π × 10^{−7} H m^{−1}), which is implicit in the definition of the Ampere; ε_{0} is then fixed by the relation ε_{0}μ_{0} = c^{−2}.
List of the fundamental constants of our standard model. See Ref. [379] for further details on the measurements.
Constant  Symbol  Value 

Speed of light  c  299 792 458 m s^{−1} 
Planck constant (reduced)  ħ  1.054 571628(53) × 10^{−34} J s 
Newton constant  G  6.674 28(67) × 10^{−11} m^{3} kg^{−1} s^{−2} 
Weak coupling constant (at m_{ Z })  g_{2}(m_{ Z })  0.6520 ± 0.0001 
Strong coupling constant (at m_{ Z })  g_{3}(m_{ Z })  1.221 ± 0.022 
Weinberg angle  \({\sin ^2}\,{\theta _{\rm{w}}}{(91.2\,{\rm{GeV}})_{\overline {{\rm{MS}}}}}\)  0.23120 ± 0.00015 
Electron Yukawa coupling  h _{e}  2.94 × 10^{−6} 
Muon Yukawa coupling  h _{ μ }  0.000607 
Tauon Yukawa coupling  h _{ τ }  0.0102156 
Up Yukawa coupling  h _{u}  0.000016 ± 0.000007 
Down Yukawa coupling  h _{d}  0.00003 ± 0.00002 
Charm Yukawa coupling  h _{c}  0.0072 ± 0.0006 
Strange Yukawa coupling  h _{s}  0.0006 ± 0.0002 
Top Yukawa coupling  h _{t}  1.002 ± 0.029 
Bottom Yukawa coupling  h _{b}  0.026 ± 0.003 
Quark CKM matrix angle  sin θ_{12}  0.2243 ± 0.0016 
sin θ_{23}  0.0413 ± 0.0015  
sin θ_{13}  0.0037 ± 0.0005  
Quark CKM matrix phase  δ _{CKM}  1.05 ± 0.24 
Higgs potential quadratic coefficient  \({{\hat \mu}^2}\)  ? 
Higgs potential quartic coefficient  λ  ? 
QCD vacuum phase  θ _{QCD}  < 10^{−9} 
List of some related constants that appear in our discussions. See Ref. [379].
Constant  Symbol  Value 

Electromagnetic coupling constant  g_{EM}=e=g_{2} sinθ_{W}  0.313429 ± 0.000022 
Higgs mass  m _{ H }  > 100 GeV 
Higgs vev  v  (246.7 ± 0.2) GeV 
Fermi constant  \({G_{\rm{F}}} = 1/(\sqrt 2 \,{v^2})\)  1.166 37(1) × 10^{−5} GeV^{−2} 
Mass of the W^{±}  m _{ W }  80.398 ± 0.025 GeV 
Mass of the Z  m _{ Z }  91.1876 ± 0.0021 GeV 
Fine structure constant  α _{EM}  1/137.035 999 679(94) 
Fine structure constant at m_{ Z }  α_{EM}(m_{ Z })  1/(127.918 ± 0.018) 
Weak structure constant at m_{ Z }  α_{W}(m_{ z })  0.03383 ± 0.00001 
Strong structure constant at m_{ Z }  α_{S}(m_{ Z })  0.1184 ± 0.0007 
Gravitational structure constant  \({\alpha _{\rm{G}}} = Gm_{\rm{P}}^2/\hbar c\)  ∼ 5.905 × 10^{−39} 
Electron mass  \({m_{\rm{e}}} = {h_{\rm{e}}}v/\sqrt 2\)  510.998910 ± 0.000013 keV 
Muon mass  \({m_\mu} = {h_\mu}v/\sqrt 2\)  105.658367 ± 0.000004 MeV 
Tau mass  \({m_\tau} = {h_\tau}v/\sqrt 2\)  1776.84 ± 0.17 MeV 
Up quark mass  \({m_{\rm{u}}} = {h_{\rm{u}}}v/\sqrt 2\)  (1.5 – 3.3) MeV 
Down quark mass  \({m_{\rm{d}}} = {h_{\rm{d}}}v/\sqrt 2\)  (3.5 – 6.0) MeV 
Strange quark mass  \({m_{\rm{s}}} = {h_{\rm{s}}}v/\sqrt 2\)  \(105_{-35}^{+25}\,{\rm{MeV}}\) 
Charm quark mass  \({m_{\rm{c}}} = {h_{\rm{c}}}v/\sqrt 2\)  \(1.27_{-0.11}^{+0.07}\,{\rm{GeV}}\) 
Bottom quark mass  \({m_{\rm{b}}} = {h_{\rm{b}}}v/\sqrt 2\)  \(4.20_{-0.07}^{+0.17}\,{\rm{GeV}}\) 
Top quark mass  \({m_{\rm{t}}} = {h_{\rm{t}}}v/\sqrt 2\)  171.3 ± 2.3 GeV 
QCD energy scale  Λ_{QCD}  (190 – 240) MeV 
Mass of the proton  m _{p}  938.272013 ± 0.000023 MeV 
Mass of the neutron  m _{n}  939.565346 ± 0.000023 MeV 
proton-neutron mass difference  Q _{np}  1.2933321 ± 0.0000004 MeV 
proton-to-electron mass ratio  μ = m_{p}/m_{e}  1836.15 
electron-to-proton mass ratio  \(\bar \mu = {m_{\rm{e}}}/{m_{\rm{p}}}\)  1/1836.15 
d − u quark mean mass  m_{q} = (m_{u} + m_{d})/2  (2.5 – 5.0) MeV 
d − u quark mass difference  δm_{q} = m_{d} − m_{u}  (0.2 – 4.5) MeV 
proton gyromagnetic factor  g _{p}  5.586 
neutron gyromagnetic factor  g _{n}  −3.826 
Rydberg constant  \({R_\infty}\)  10 973 731.568 527(73) m^{−1} 
More familiar constants, such as the masses of the proton and the neutron are, as we shall discuss in more detail below (see Section 5.3.2), more difficult to relate to the fundamental parameters because they depend not only on the masses of the quarks but also on the electromagnetic and strong binding energies.
Are some constants more fundamental? As pointed out by Lévy-Leblond [328], not all constants of physics play the same role, and some have a much deeper role than others. Following [328], we can define three classes of fundamental constants: class A is the class of constants characteristic of a particular system, class B that of constants characteristic of a class of physical phenomena, and class C that of universal constants. Indeed, the status of a constant can change with time. For instance, the velocity of light was initially a class A constant (describing a property of light), then became a class B constant when it was realized that it was related to electromagnetic phenomena and, finally, ended up as a class C constant (it enters special relativity and is related to the notion of causality, whatever the physical phenomena). It has even become a much more fundamental constant since it now enters the definition of the meter [413] (see Ref. [510] for a more detailed discussion). This has to be contrasted with the proposition of Ref. [538] to distinguish the standard-model free parameters as the gauge and gravitational couplings (which are associated with internal and spacetime curvatures) and the other parameters entering the accommodation of inertia in the Higgs sector.
Relation with physical laws. Lévy-Leblond [328] proposed to rank the constants in terms of their universality and proposed that only three constants be considered to be of class C, namely G, ħ and c. He pointed out two important roles of these constants in the laws of physics. First, they act as concept synthesizers during the process of our understanding of the laws of nature: contradictions between existing theories have often been resolved by introducing new concepts that are more general or more synthetic than older ones. Constants build bridges between quantities that were thought to be incommensurable and thus allow new concepts to emerge. For example, c underpins the synthesis of space and time, the Planck constant allowed the concepts of energy and frequency to be related, and the gravitational constant creates a link between matter and spacetime. Second, it follows that these constants are related to the domains of validity of these theories. For instance, as soon as a velocity approaches c, relativistic effects become important and can no longer be neglected; for speeds much below c, Galilean kinematics is sufficient. The Planck constant also acts as a referent: if the action of a system greatly exceeds ħ, classical mechanics is appropriate to describe it. While the places of c (related to the notion of causality) and ħ (related to the quantum) in this list are well argued, the place of G remains debated since it is thought that it will have to be replaced by some mass scale.
Evolution. There are many ways the list of constants can change with our understanding of physics. First, new constants may appear when new systems or new physical laws are discovered; this is, for instance, the case of the charge of the electron or, more recently, the gauge couplings of the nuclear interactions. A constant can also move from one class to a more universal class. An example is that of the electric charge, initially of class A (characteristic of the electron), which became class B when it was understood that it characterizes the strength of the electromagnetic interaction. A constant can also disappear from the list, either because it is replaced by more fundamental constants (e.g., the Earth’s gravitational acceleration and the proportionality constant entering Kepler’s law both disappeared because they were “explained” in terms of the Newton constant and the mass of the Earth or the Sun) or because a better understanding of physics teaches us that two hitherto distinct quantities have to be considered as a single phenomenon (e.g., the understanding by Joule that heat and work were two forms of energy meant that the Joule constant, expressing the proportionality between work and heat, lost any physical meaning and became a simple conversion factor between the units used in the measurement of heat (calories) and work (joules); nowadays the calorie has fallen into disuse). Conversely, demonstrating that a constant is varying would have direct implications for our list of constants.
In conclusion, the evolution of the number and status of the constants can teach us a lot about the evolution of the ideas and theories in physics, since it reflects the birth of new concepts, their evolution and their unification with other ones.
2.1.2 Constants and metrology
Since we cannot compute them in the theoretical framework in which they appear, it is a crucial property of the fundamental constants (but in fact of all constants) that their value can be measured. The relation between constants and metrology is a huge subject, of which we only draw attention to some selected aspects. For more discussions, see [56, 280, 278].
The introduction of constants in physical laws is also closely related to the existence of systems of units. For instance, Newton’s law states that the gravitational force between two masses is proportional to each mass and inversely proportional to the square of their separation. To transform the proportionality into an equality one requires a quantity with dimensions of m^{3} kg^{−1} s^{−2} that is independent of the separation between the two bodies, of their masses, of their composition (equivalence principle) and of the position (local position invariance). With another system of units the numerical value of this constant could have been anything. Indeed, the numerical value of any constant crucially depends on the definition of the system of units.
Measuring constants. The determination of the laboratory value of constants relies mainly on the measurements of lengths, frequencies, times, … (see [414] for a treatise on the measurement of constants and [213] for a recent review). Hence, any question on the variation of constants is linked to the definition of the system of units and to the theory of measurement. The behavior of atomic matter is determined by the value of many constants. As a consequence, if, e.g., the finestructure constant is spacetime dependent, the comparison between several devices such as clocks and rulers will also be spacetime dependent. This dependence will also differ from one clock to another so that metrology becomes both device and spacetime dependent, a property that will actually be used to construct tests of the constancy of the constants.
Indeed, a measurement is always a comparison between two physical systems of the same dimensions. It is thus a relative measurement, whose result is a pure number. This trivial statement is an oversimplification, since in order to compare two similar quantities measured separately one needs to perform a number of comparisons. In order to reduce the number of comparisons (and in particular to avoid creating a chain of comparisons every time), a certain set of them has been included in the definitions of units. Each unit can then be seen as an abstract physical system, which has to be realized effectively in the laboratory, and to which another physical system is compared. A measurement in terms of these units is usually called an absolute measurement. Most fundamental constants are related to microscopic physics and their numerical values can be obtained either from a pure microscopic comparison (as is, e.g., the case for m_{e}/m_{p}) or from a comparison between microscopic and macroscopic values (for instance to deduce the value of the mass of the electron in kilograms). This shows that the choice of units has an impact on the accuracy of the measurement, since pure microscopic comparisons are in general more accurate than those involving macroscopic physics. It also implies that only the variation of dimensionless constants can be measured and, in case such a variation is detected, it is impossible to determine which dimensional constant is varying [183].
It is also important to stress that, in order to deduce the value of constants from an experiment, one usually needs to use theories and models. An example [278] is provided by the Rydberg constant. It can easily be expressed in terms of some fundamental constants as \({R_\infty} = \alpha _{{\rm{EM}}}^2{m_{\rm{e}}}c/2h\). It can be measured from, e.g., the 1s − 2s transition in hydrogen, the frequency of which is related to the Rydberg constant and other constants by assuming QED, so that the accuracy of R_{∞} is much lower than that of the measurement of the transition. This could be solved by defining R_{∞} ≡ 4ν_{H}(1s − 2s)/3c, but then the relation with more fundamental constants would be more complicated and actually not exactly known. This illustrates the relation between a practical and a fundamental approach and the limitation arising from the fact that we often cannot both exactly calculate and directly measure some quantity. Note also that some theoretical properties are plugged into the determination of the constants.
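For illustration, the leading-order relation can be checked numerically (a sketch that deliberately omits the reduced-mass and QED corrections discussed above, which is precisely why its last digits would disagree with a full treatment):

```python
# R_infinity = alpha_EM^2 m_e c / (2 h), at leading order
alpha = 1 / 137.035999679   # fine structure constant
m_e   = 9.10938215e-31      # kg, electron mass
c     = 299792458.0         # m/s
h     = 6.62606896e-34      # J s (non-reduced Planck constant)

R_inf = alpha**2 * m_e * c / (2 * h)
print(R_inf)                # ~ 1.0973731e7 m^-1, matching the table

# Inverting the approximate 1s-2s relation: nu = (3/4) R_inf c
nu_1s2s = 0.75 * R_inf * c
print(nu_1s2s)              # ~ 2.467e15 Hz at this level of approximation
```

The small mismatch with the measured 1s − 2s frequency is exactly the reduced-mass and QED contribution that the simple formula ignores.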
As a conclusion, let us recall that (i) in general, the values of the constants are not determined by a direct measurement but by a chain involving both theoretical and experimental steps; (ii) they depend on our theoretical understanding; (iii) the determination of a self-consistent set of values of the fundamental constants results from an adjustment to achieve the best match between theory and a defined set of experiments (which is important because we actually know that the theories are only good approximations with a limited domain of validity); (iv) the system of units plays a crucial role in the measurement chain, since, for instance, in atomic units the mass of the electron could have been obtained directly from a mass-ratio measurement (even more precisely!); and (v) fortunately, testing the variability of the constants does not a priori require a high-precision value of the constants considered.
System of units. Thus, one needs to define a coherent system of units. This has a long, complex and interesting history that was driven by simplicity and universality but also by increasing stability and accuracy [29, 509].
Originally, the sizes of the human body were mostly used to measure the length of objects (e.g., the foot and the thumb gave feet and inches) and some of these units can seem surprising to us nowadays (e.g., the span was the measure of a hand with fingers fully splayed, from the tip of the thumb to the tip of the little finger!). Similarly, weights were related to what could be carried in the hand: the pound, the ounce, the dram.… Needless to say, this system had a few disadvantages since each country or region had its own system (for instance, in France more than 800 different units were in use in 1789). The need to define a system of units based on natural standards led to several propositions for a standard of length (e.g., the mille proposed by Gabriel Mouton in 1670, defined as the length of one angular minute of a great circle on the Earth, or the length of the pendulum that oscillates once a second, proposed by Jean Picard and Christiaan Huygens). The real change happened during the French Revolution, when the idea of a universal and non-anthropocentric system of units arose. In particular, the Assemblée adopted the principle of a uniform system of weights and measures on 8 May 1790 and, in March 1791, a decree (these texts are reprinted in [510]) was voted, stating that a quarter of the terrestrial meridian would be the basis of the definition of the meter (from the Greek metron, as proposed by Borda): a meter would henceforth be one ten-millionth part of a quarter of the terrestrial meridian. Similarly, the gram was defined as the mass of one cubic centimeter of distilled water (at a precise temperature and pressure) and the second was defined from the property that a mean solar day must last 24 hours.
To make a long story short, this led to the creation of the metric system and then to the signature of La convention du mètre in 1875. Since then, the definitions of the units have evolved significantly. First, the definition of the meter was related to more immutable systems than our planet, which, as pointed out by Maxwell in 1870, was an arbitrary and inconstant reference. He suggested that atoms may be such a universal reference. In 1960, the International Bureau of Weights and Measures (BIPM) established a new definition of the meter as the length equal to 1 650 763.73 wavelengths, in a vacuum, of the transition line between the levels 2p_{10} and 5d_{5} of krypton-86. Similarly, the rotation of the Earth was not so stable and it was proposed in 1927 by André Danjon to use the tropical year as a reference, as adopted in 1952. In 1967, the second was also related to an atomic transition, defined as the duration of 9 192 631 770 periods of the transition between the two hyperfine levels of the ground state of caesium-133. Finally, it was decided in 1983 that the meter shall be defined by fixing the value of the speed of light to c = 299 792 458 m s^{−1}, and we refer to [55] for an up-to-date description of the SI system. Today, the possibility to redefine the kilogram in terms of a fixed value of the Planck constant is under investigation [279].
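These successive definitions can be made concrete with a short sketch (the krypton wavelength is simply the inverse of the quoted wavelength count; the other values are exact by definition):

```python
# 1960 definition: 1 m = 1 650 763.73 wavelengths of the Kr-86 line
lambda_Kr = 1.0 / 1650763.73   # m
print(lambda_Kr * 1e9)         # ~ 605.78 nm (the orange line of Kr-86)

# 1967 definition: 1 s = 9 192 631 770 periods of the Cs-133 hyperfine transition
nu_Cs = 9192631770             # Hz, exact by definition

# 1983 definition: the meter is fixed by the speed of light
c = 299792458                  # m/s, exact by definition
print(c / nu_Cs)               # ~ 0.0326 m: light travels ~ 3.26 cm per Cs period
```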
This summary illustrates that the system of units is a human product and all SI definitions are historically based on non-relativistic classical physics. The changes in the definitions were driven by the will to use more stable and more fundamental quantities, so that they closely follow the progress of physics. This system has been created for legal use and indeed the choice of units is not restricted to SI.

SI systems and the number of basic units. The International System of Units defines seven basic units: the meter (m), second (s) and kilogram (kg), the Ampere (A), Kelvin (K), mole (mol) and candela (cd), from which one defines secondary units. While needed for pragmatic reasons, this system of units is unnecessarily complicated from the point of view of theoretical physics. In particular, the Kelvin, mole and candela are derived from the four other units, since temperature is actually a measure of energy and the candela is expressed in terms of energy flux, so that both can be expressed in the mechanical units of length [L], mass [M] and time [T]. The mole is merely a unit denoting numbers of particles and has no dimension.
Indeed, we can construct many such systems since the choice of the three constants is arbitrary. For instance, we can construct a system based on (e, m_{e}, h), which we may call Bohr units and which is suited to the study of the atom. The choice may be dictated by the system under study (it is indeed far-fetched to introduce G in the construction of the units when studying atomic physics), so that the system is well adjusted in the sense that the numerical values of computations are expected to be of order unity in these units.
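As a sketch of why such a system is “well adjusted” (the conversions below use the SI values quoted in the tables above), the natural length and energy scales built from (e, m_{e}, h) are indeed atomic:

```python
import math

e    = 1.602176487e-19   # C, elementary charge
hbar = 1.054571628e-34   # J s
m_e  = 9.10938215e-31    # kg, electron mass
c    = 299792458.0       # m/s
eps0 = 1.0 / (4e-7 * math.pi * c**2)

# Bohr radius and Hartree energy: the natural scales of the (e, m_e, hbar) system
a0  = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)       # ~ 0.53 Angstrom
E_h = m_e * e**4 / ((4 * math.pi * eps0)**2 * hbar**2)  # ~ 27.2 eV (Hartree)

print(a0 * 1e10)   # ~ 0.529, in Angstrom
print(E_h / e)     # ~ 27.2, in eV
```

In these units the radius and binding energy of hydrogen are of order unity, as announced.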
Such constructions are very useful for theoretical computations but not adapted to measurement, so that one needs to switch back to SI units. More importantly, this shows that, from a theoretical point of view, one can define the system of units from the laws of nature, which are supposed to be universal and immutable. Do we actually need 3 natural units? This is an issue debated at length. For instance, Duff, Okun and Veneziano [165] respectively argue for none, three and two (see also [535]). Arguing for no fundamental constant leads one to consider them simply as conversion parameters. Some of them are, like the Boltzmann constant, but some others play a deeper role in the sense that when a physical quantity becomes of the same order as this constant, new phenomena appear; this is the case, e.g., of ħ and c, which are associated respectively with quantum and relativistic effects. Okun [392] considered that only three fundamental constants are necessary, as indicated by the International System of Units. In the framework of quantum field theory + general relativity, it seems that this set of three constants has to be considered and it allows one to classify the physical theories (with the famous cube of physical theories). However, Veneziano [514] argued that in the framework of string theory one requires only two dimensionful fundamental constants, c and the string length λ_{ s }. The use of ħ seems unnecessary since it combines with the string tension to give λ_{ s }: for the Nambu-Goto action, \(S/\hbar = (T/\hbar)\int {{\rm{d}(Area)}} \equiv \lambda _{\mathcal S}^{-2}\int {\rm{d}(Area)}\), so that the Planck constant is just given by ħ = Tλ_{ s }^{2}. In this view, ħ has not disappeared but has been promoted to the role of a UV cutoff that removes both the infinities of quantum field theory and the singularities of general relativity.
This situation is analogous to pure quantum gravity [388], where ħ and G never appear separately but only in the combination \({\ell _{{\rm{Pl}}}} = \sqrt {G\hbar/{c^3}}\) so that only c and ℓ_{Pl} are needed. Volovik [520] made an analogy with quantum liquids to clarify this. There, an observer knows both the effective and microscopic physics, so that he can judge whether the fundamental constants of the effective theory remain fundamental constants of the microscopic theory. The status of a constant depends on the considered theory (effective or microscopic) and, more interestingly, on the observer measuring them, i.e., on whether this observer belongs to the world of low-energy quasi-particles or to the microscopic world.

Fundamental parameters. Once a set of three independent constants has been chosen as natural units, all other constants are dimensionless quantities. The values of these combinations of constants do not depend on the way they are measured [110, 164, 437], on the definition of the units, etc. It follows that any variation of constants that leaves these numbers unaffected is actually just a redefinition of units.
These dimensionless numbers represent, e.g., mass ratios, relative magnitudes of strength, etc. Changing their values will indeed have an impact on the intensity of various physical phenomena, so that they encode some properties of our world. They have specific values (e.g., α_{EM} ∼ 1/137, m_{p}/m_{e} ∼ 1836, etc.) that we may hope to understand. Are all these numbers completely contingent, or are some (why not all?) of them related by relations arising from some yet unknown and more fundamental theory? In such theories, some of these parameters may actually be dynamical quantities and, thus, vary in space and time. These are our potential varying constants.
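As an aside, the two dimensionless numbers quoted above can be recomputed directly from the SI values of the underlying constants; a minimal sketch (the CODATA-style numerical inputs are supplied here for illustration and are not taken from the text):

```python
import math

# CODATA-style SI values (assumed inputs, not quoted in the text).
e = 1.602176634e-19      # elementary charge [C]
hbar = 1.054571817e-34   # reduced Planck constant [J s]
c = 2.99792458e8         # speed of light [m/s]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
m_p = 1.67262192e-27     # proton mass [kg]
m_e = 9.1093837015e-31   # electron mass [kg]

# Dimensionless combinations: their values do not depend on the unit system.
alpha_em = e**2 / (4 * math.pi * eps0 * hbar * c)
mu_inv = m_p / m_e

print(1 / alpha_em)  # ~137.036
print(mu_inv)        # ~1836.15
```

Any consistent rescaling of the units leaves these two outputs unchanged, which is why only such combinations are candidate "varying constants".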
2.2 The constancy of constants as a test of general relativity
The previous paragraphs have already emphasized why testing for the constancy of the constants is a test of fundamental physics, since it can reveal the need for new physical degrees of freedom in our theory. We now want to stress the relation of this test with other tests of general relativity and with cosmology.
2.2.1 General relativity
The tests of the constancy of fundamental constants take all their importance in the realm of the tests of the equivalence principle [540]. Einstein general relativity is based on two independent hypotheses, which can conveniently be described by decomposing the action of the theory as S = S_{grav} + S_{matter}. The hypothesis of metric coupling of matter implies the equivalence principle, which comprises:

the universality of free fall,

the local position invariance,

the local Lorentz invariance.

First, it implies that all nongravitational constants are spacetime independent, which has been tested to a very high accuracy in many physical systems and for various fundamental constants; this is the subject of this review.

Second, the isotropy has been tested from the constraint on the possible quadrupolar shift of nuclear energy levels [99, 304, 422] proving that different matter fields couple to a unique metric tensor at the 10^{−27} level.
 Third, the universality of free fall can be tested by comparing the accelerations of two test bodies in an external gravitational field. The parameter η_{12} defined as$${\eta _{12}} \equiv 2{{\vert {{\bf{a}}_1} - {{\bf{a}}_2}\vert} \over {\vert {{\bf{a}}_1} + {{\bf{a}}_2}\vert}},$$(5)can be constrained experimentally, e.g., in the laboratory by comparing the acceleration of a beryllium and a copper mass in the Earth gravitational field [4] to get$${\eta _{{\rm{Be,Cu}}}} = (-1.9 \pm 2.5) \times {10^{-12}}.$$(6)Similarly, the comparison of Earth-core-like and Moon-mantle-like bodies gave [23]$${\eta _{{\rm{Earth,Moon}}}} = (0.1 \pm 2.7 \pm 1.7) \times {10^{-13}},$$(7)and experiments with torsion balances using test bodies composed of tellurium and bismuth allowed one to set the constraint [450]$${\eta _{{\rm{Te,Bi}}}} = (0.3 \pm 1.8) \times {10^{-13}}.$$(8)The Lunar Laser Ranging experiment [543], which compares the relative acceleration of the Earth and Moon in the gravitational field of the Sun, also set the constraint$${\eta _{{\rm{Earth,Moon}}}} = (-1.0 \pm 1.4) \times {10^{-13}}.$$(9)Note that since the core represents only 1/3 of the mass of the Earth, and since the Earth's mantle has the same composition as that of the Moon (and thus shall fall in the same way), one loses a factor of three, so that this constraint is actually similar to the one obtained in the lab. Further constraints are summarized in Table 3. The latter constraint also contains some contribution from the gravitational binding energy and thus includes the strong equivalence principle.
When the laboratory result of [23] is combined with the LLR results of [542] and [365], one gets constraints on the strong equivalence principle parameter, respectively$${\eta _{{\rm{SEP}}}} = (3 \pm 6) \times {10^{-13}}\;\;{\rm{and}}\;\;{\eta _{{\rm{SEP}}}} = (-4 \pm 5) \times {10^{-13}}.$$Large improvements are expected thanks to the existence of two dedicated space mission projects: Microscope [493] and STEP [355].
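As a toy illustration of Equation (5), the sketch below evaluates η for two test bodies whose accelerations in the Earth's field differ by a hypothetical fractional amount of 10^{−12}, the level probed by the laboratory Be/Cu experiment (the offset is an assumed input, not a measured one):

```python
# Toy evaluation of the Eotvos parameter eta_12 = 2|a1 - a2| / |a1 + a2|.
g = 9.81                # local gravitational acceleration [m/s^2]
a1 = g
a2 = g * (1 - 1e-12)    # hypothetical composition-dependent offset

eta = 2 * abs(a1 - a2) / (a1 + a2)
print(eta)  # ~1e-12
```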

Fourth, the Einstein effect (or gravitational redshift) has been measured at the 2 × 10^{−4} level [517]. We can conclude that the hypothesis of metric coupling is extremely welltested in the solar system.
Summary of the constraints on the violation of the universality of free fall.
Constraint  Body 1  Body 2  Ref. 

(−1.9 ± 2.5) × 10^{−12}  Be  Cu  [4] 
(0.1 ± 2.7 ± 1.7) × 10^{−13}  Earthlike rock  Moonlike rock  [23] 
(−1.0 ± 1.4) × 10^{−13}  Earth  Moon  [543] 
(0.3 ± 1.8) × 10^{−13}  Te  Bi  [450] 
(−0.2 ± 2.8) × 10^{−12}  Be  Al  [481] 
(−1.9 ± 2.5) × 10^{−12}  Be  Cu  [481] 
(5.1 ± 6.7) × 10^{−12}  Si/Al  Cu  [481] 
These two phenomenological parameters are constrained (1) by the shift of the Mercury perihelion [457], which implies that ∣2γ^{PPN} − β^{PPN} − 1∣ < 3 × 10^{−3}, (2) the Lunar laser ranging experiments [543], which implies that ∣4β^{PPN} − γ^{PPN} − 3∣ = (4.4 ± 4.5) × 10^{−4} and (3) by the deflection of electromagnetic signals, which are all controlled by γ^{PPN}. For instance the very long baseline interferometry [459] implies that ∣γ^{PPN} − 1∣ = 4 × 10^{−4}, while the measurement of the time delay variation to the Cassini spacecraft [53] sets γ^{PPN} − 1 = (2.1 ± 2.3) × 10^{−5}.
General relativity is also tested with pulsars [125, 189] and in the strong field regime [425]. For more details we refer to [129, 495, 540, 541]. Needless to say, any extension of general relativity has to pass these constraints. However, deviations from general relativity can be larger in the past, as we shall see, which makes cosmology an interesting physical system with which to extend these constraints.
2.2.2 Varying constants and the universality of free fall
As the previous description shows, the constancy of the fundamental constants and the universality of free fall are two pillars of the equivalence principle. Dicke [152] realized that they are actually not independent and that if the coupling constants are spatially dependent, then this will induce a violation of the universality of free fall.
The connection lies in the fact that the mass of any composite body, starting, e.g., from nuclei, includes the mass of the elementary particles that constitute it (this means that it will depend on the Yukawa couplings and on the Higgs sector parameters) but also a contribution, E_{binding}/c^{2}, arising from the binding energies of the different interactions (i.e., strong, weak and electromagnetic) but also gravitational for massive bodies. Thus, the mass of any body is a complicated function of all the constants, m[α_{ i }].
As a consequence, any variation of the fundamental constants will entail a violation of the universality of free fall: the total mass of the body being space dependent, an anomalous force appears if energy is to be conserved. This anomalous acceleration is generated by the change in the (electromagnetic, gravitational, …) binding energies [152, 246, 386] but also in the Yukawa couplings and in the Higgs sector parameters, so that the α_{ i }-dependencies are a priori composition-dependent. The variation of the constants, deviations from general relativity and violations of the weak equivalence principle are in general expected together.
On the other hand, the composition dependence of δa_{ A } and thus of η_{ AB } can be used to optimize the choice of materials for the experiments testing the equivalence principle [118, 120, 122] but also to distinguish between several models if data from the universality of free fall and atomic clocks are combined [143].
From a theoretical point of view, the computation of η_{ AB } requires the determination of the coefficients f_{ Ai }. This can be achieved in two steps: first relating the new degrees of freedom of the theory to the variation of the fundamental constants, and then relating the latter to the variation of the masses. As we shall see in Section 5, the first issue is very model dependent while the second is especially difficult, particularly when one wants to understand the effect of the quark mass, since it is related to the intricate structure of QCD and its role in low-energy nuclear reactions.
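The two-step logic above can be sketched numerically; all numbers below (the sensitivity coefficients f_A, f_B and the gradient of ln α_EM) are hypothetical placeholders chosen only to show how a composition-dependent η_AB arises from material-dependent mass sensitivities:

```python
# Hypothetical sensitivity coefficients f_X = dln(m_X)/dln(alpha_EM) for two
# materials (placeholder numbers, for illustration only).
f_A = 2.0e-4
f_B = 1.5e-4

c = 2.99792458e8          # speed of light [m/s]
grad_ln_alpha = 1e-25     # assumed spatial gradient of ln(alpha_EM) [1/m]
g = 9.81                  # external gravitational acceleration [m/s^2]

# Anomalous acceleration of body X: delta_a_X ~ c^2 * f_X * grad(ln alpha).
da_A = c**2 * f_A * grad_ln_alpha
da_B = c**2 * f_B * grad_ln_alpha

# The universal part cancels; composition dependence survives in the difference.
eta_AB = abs(da_A - da_B) / g
print(eta_AB)
```

The point of the sketch is structural: identical sensitivities (f_A = f_B) would give η_AB = 0 however large the gradient, which is why the choice of test-body materials matters.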
Note that varying coupling constants can also be associated with violations of local Lorentz invariance and CPT symmetry [298, 52, 242].
2.2.3 Relations with cosmology
Most constraints on the time variation of the fundamental constants are not local but are related to physical systems at various epochs of the evolution of the universe. It follows that the comparison of different constraints requires a full cosmological model.
Parameter  Symbol  Value 

Reduced Hubble constant  h  0.73(3) 
Baryontophoton ratio  η = n_{b}/n_{ γ }  6.12(19) × 10^{−10} 
Photon density  Ω_{γ}h^{2}  2.471 × 10^{−5} 
Dark matter density  Ω_{CDM}h^{2}  0.105(8) 
Cosmological constant  Ω_{Λ}  0.73(3) 
Spatial curvature  Ω_{ K }  0.011(12) 
Scalar modes amplitude  Q  (2.0 ± 0.2) × 10^{−5} 
Scalar spectral index  n _{ S }  0.958(16) 
Neutrino density  Ω_{ ν }h^{2}  (0.0005 – 0.023) 
Dark energy equation of state  w  −0.97(7) 
Scalar running spectral index  α _{ S }  −0.05 ± 0.03 
Tensortoscalar ratio  T/S  < 0.36 
Tensor spectral index  n _{ T }  < 0.001 
Tensor running spectral index  α _{ T }  ? 
Baryon density  Ω_{b}h^{2}  0.0223(7) 
The ΛCDM model assumes that gravity is described by general relativity (H1), and that the Universe contains the fields of the standard model of particle physics plus some dark matter and a cosmological constant, the latter two having no physical explanation at the moment (H2). It also deeply involves the Copernican principle as a symmetry hypothesis (H3), without which the Einstein equations usually cannot be solved, and most often assumes that the spatial sections are simply connected (H4). H2 and H3 imply that the description of the standard matter reduces to a mixture of pressureless and radiation perfect fluids. This model is compatible with all astronomical data, which roughly indicate that Ω_{Λ0} ≃ 0.73, Ω_{mat0} ≃ 0.27, and Ω_{K0} ≃ 0. Thus, cosmology roughly imposes that \(\vert {\Lambda _0}\vert \leq H_0^2\), that is \({\ell _\Lambda} \geq H_0^{-1} \sim {10^{26}}\,{\rm{m}} \sim {10^{41}}\,{\rm{GeV}}^{-1}\).
Classically, this value poses no problem, but it was pointed out that at the quantum level, the vacuum energy should scale as M^{4}, where M is some energy scale of high-energy physics. In such a case, there is a discrepancy of 60–120 orders of magnitude between the cosmological conclusions and the theoretical expectation. This is the cosmological constant problem [528].
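The quoted 60–120 orders of magnitude follow from comparing M^4 to the observed dark-energy density; a back-of-the-envelope check, assuming the standard value ρ_Λ^{1/4} ≈ 2.4 × 10^{−3} eV (a number not derived in the text):

```python
import math

# Observed dark-energy density scale in natural units (assumed standard value).
rho_obs_quarter = 2.4e-12   # GeV (~2.4 meV)
rho_obs = rho_obs_quarter**4

# Naive vacuum-energy contributions rho ~ M^4 for two choices of cutoff M.
M_planck = 1.22e19          # GeV
M_tev = 1.0e3               # GeV

discrepancy_planck = math.log10(M_planck**4 / rho_obs)  # ~120
discrepancy_tev = math.log10(M_tev**4 / rho_obs)        # ~60

print(discrepancy_planck, discrepancy_tev)
```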
Two approaches to solve this problem have been considered. Either one accepts such a constant and such a fine-tuning and tries to explain it on anthropic grounds, or, in the same spirit as Dirac, one interprets it as an indication that our set of cosmological hypotheses has to be extended, by either abandoning the Copernican principle [508] or by modifying the local physical laws (either gravity or the matter sector). The ways to introduce such new physical degrees of freedom were classified in [502]. In the latter approach, the tests of the constancy of the fundamental constants are central, since they can reveal the coupling of this new degree of freedom to the standard matter fields. Note, however, that the cosmological data still favor a pure cosmological constant.
Among all the proposals, quintessence involves a scalar field rolling down a runaway potential, hence acting as a fluid with an effective equation of state in the range −1 ≤ w ≤ 1 if the field is minimally coupled. It was proposed that the quintessence field is also the dilaton [229, 434, 499]. The same scalar field then drives the time variation of the cosmological constant and of the gravitational constant, and it also has the property of admitting tracking solutions [499]. Such models do not solve the cosmological constant problem but only relieve the coincidence problem. One of the underlying motivations to replace the cosmological constant by a scalar field comes from superstring models, in which any dimensionful parameter is expressed in terms of the string mass scale and the vacuum expectation value of a scalar field. However, the requirement of slow roll (mandatory to have a negative pressure) and the fact that the quintessence field dominates today imply, if the minimum of the potential is zero, that it is very light, roughly of order m ∼ 10^{−33} eV [81].
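The mass scale m ∼ 10^{−33} eV is essentially ħH_0 converted to electron-volts; a quick check using H_0 = 73 km/s/Mpc, the value adopted in this review:

```python
# A field dominating the energy budget today must have m*c^2 ~ hbar * H_0.
hbar = 1.054571817e-34   # J s
eV = 1.602176634e-19     # J
Mpc = 3.0857e22          # m

H0 = 73e3 / Mpc          # Hubble rate [1/s] for H0 = 73 km/s/Mpc
m_eV = hbar * H0 / eV    # quintessence mass scale in eV
print(m_eV)              # ~1.6e-33 eV
```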
Such a light field can lead to observable violations of the universality of free fall if it is non-universally coupled to the matter fields. Carroll [81] considered the effect of the coupling of this very light quintessence field to ordinary matter via a coupling to the electromagnetic field of the form \(\phi {F^{\mu \nu}}{{\tilde F}_{\mu \nu}}\). Chiba and Kohri [96] also argued that an ultralight quintessence field induces a time variation of the coupling constants if it is coupled to ordinary matter, and studied a coupling of the form ϕF^{ μν }F_{ μν }, as, e.g., expected from Kaluza-Klein theories (see below). This was generalized to quintessence models with couplings of the form Z(ϕ)F^{ μν }F_{ μν } [11, 112, 162, 315, 314, 347, 404, 531] and then to models of runaway dilaton [133, 132] inspired by string theory (see Section 5.4.1). The evolution of the scalar field drives both the acceleration of the universe at late time and the variation of the constants. As pointed out in [96, 166, 532], such models are extremely constrained from the bound on the universality of free fall (see Section 6.3).

The field driving the time variation of the fundamental constants does not explain the acceleration of the universe (either it does not dominate the matter content today or its equation of state is not negative enough). In such a case, the variation of the constants is disconnected from the dark energy problem. Cosmology allows one to determine the dynamics of this field during the whole history of the universe and thus to compare local constraints and cosmological constraints. An example is given by scalar-tensor theories (see Section 5.1.1), for which one can compare, e.g., primordial nucleosynthesis to local constraints [134]. However, in such a situation one should take into account the effect of the variation of the constants on the astrophysical observations, since it can affect local physical processes and bias, e.g., the luminosity of supernovae and indirectly modify the luminosity distance-redshift relation derived from these observations [33, 435].

The field driving the time variation of the fundamental constants is also responsible for the acceleration of the universe. It follows that the dynamics of the universe, the level of variation of the constants and the other deviations from general relativity are connected [348], so that the study of the variation of the constants can improve the reconstruction of the equation of state of the dark energy [20, 162, 389, 404].
In conclusion, cosmology seems to require a new constant. It also provides a link between the microphysics and cosmology, as foreseen by Dirac. The tests of fundamental constants can discriminate between various explanations of the acceleration of the universe. When a model is specified, cosmology also allows one to set more stringent constraints, since it relates observables that cannot be compared otherwise.
3 Experimental and Observational Constraints
This section focuses on the experimental and observational constraints on the nongravitational constants, that is, assuming α_{G} remains constant. We use the convention that Δα = α − α_{0} for any constant α, so that Δα < 0 refers to a value smaller than today.
Summary of the systems considered to set constraints on the variation of the fundamental constants. We summarize the observable quantities, the primary constants used to interpret the data and the other hypotheses required for this interpretation. All the quantities appearing in this table are defined in the text.
System  Observable  Primary constraints  Other hypotheses 

Atomic clock  δ ln ν  g_{ i }, α_{EM}, μ  — 
Oklo phenomenon  isotopic ratio  E _{ r }  geophysical model 
Meteorite dating  isotopic ratio  λ  — 
Quasar spectra  atomic spectra  g_{p}, μ, α_{EM}  cloud physical properties 
Stellar physics  element abundances  B _{ D }  stellar model 
21 cm  T_{ b }/T_{CMB}  g_{p}, μ, α_{EM}  cosmological model 
CMB  ΔT/T  μ, α_{EM}  cosmological model 
BBN  light element abundances  Q_{np},τ_{n}, m_{ e }, m_{n}, α_{EM}, B_{ D }  cosmological model 
3.1 Atomic clocks
3.1.1 Atomic spectra and constants
It follows that, at the lowest level of description, we can interpret all atomic clock results in terms of the g-factors of each atom, g_{ i }, the electron-to-proton mass ratio μ and the fine-structure constant α_{EM}. We shall parameterize the hyperfine and fine-structure frequencies as follows.
Sensitivity of various transitions on a variation of the finestructure constant.
Atom  Transition  sensitivity κ_{ α } 

^{1}H  1s – 2s  0.00 
^{87}Rb  hf  0.34 
^{133}Cs  ^{2}S_{1/2}(F = 2) − (F = 3)  0.83 
^{171}Yb^{+}  ^{2}S_{1/2} − ^{2}D_{3/2}  0.9 
^{199}Hg^{+}  ^{2}S_{1/2} − ^{2}D_{5/2}  −3.2 
^{87}Sr  ^{1}S_{0} − ^{3}P_{0}  0.06 
^{27}Al^{+}  ^{1}S_{0} − ^{3}P_{0}  0.008 
From an experimental point of view, various combinations of clocks have been compared. It is important to analyze as many species as possible in order to rule out species-dependent systematic effects. Most experiments are based on a frequency comparison to caesium clocks. The hyperfine splitting frequency between the F = 3 and F = 4 levels of its ^{2}S_{1/2} ground state at 9.192 GHz has been used for the definition of the second since 1967. One limiting effect, which contributes mostly to the systematic uncertainty, is the frequency shift due to cold collisions between the atoms. On this particular point, clocks based on the hyperfine frequency of the ground state of rubidium at 6.835 GHz are more favorable.
3.1.2 Experimental constraints
 Rubidium: The comparison of the hyperfine frequencies of rubidium and caesium in their electronic ground state between 1998 and 2003, with an accuracy of order 10^{−15}, leads to the constraint [346]$${{\rm{d}} \over {{\rm{d}}t}}\ln \left({{{{\nu _{{\rm{Rb}}}}} \over {{\nu _{{\rm{Cs}}}}}}} \right) = (0.2 \pm 7.0) \times {10^{-16}}\,{\rm{yr}}^{-1}.$$(24)With one more year of experiment, the constraint dropped to [58]$${{\rm{d}} \over {{\rm{d}}t}}\ln \left({{{{\nu _{{\rm{Rb}}}}} \over {{\nu _{{\rm{Cs}}}}}}} \right) = (-0.5 \pm 5.3) \times {10^{-16}}\,{\rm{yr}}^{-1}.$$(25)From Equation (21), and using the values of the sensitivities κ_{ α }, we deduce that this comparison constrains$${{{\nu _{{\rm{Cs}}}}} \over {{\nu _{{\rm{Rb}}}}}} \propto {{{g_{{\rm{Cs}}}}} \over {{g_{{\rm{Rb}}}}}}\alpha _{{\rm{EM}}}^{0.49}.$$
 Atomic hydrogen: The 1s–2s transition in atomic hydrogen was compared to the ground state hyperfine splitting of caesium [196] in 1999 and 2003, setting an upper limit on the variation of ν_{H} of (−29 ± 57) Hz within 44 months. This can be translated into a relative drift$${{\rm{d}} \over {{\rm{d}}t}}\ln \left({{{{\nu _{\rm{H}}}} \over {{\nu _{{\rm{Cs}}}}}}} \right) = (-32 \pm 63) \times {10^{-16}}\,{\rm{yr}}^{-1}.$$(26)Since the relativistic correction for the atomic hydrogen transition nearly vanishes, we have ν_{H} ∼ R_{∞} so that$${{{\nu _{{\rm{Cs}}}}} \over {{\nu _{\rm{H}}}}} \propto {g_{{\rm{Cs}}}}\bar \mu \alpha _{{\rm{EM}}}^{2.83}.$$
 Mercury: The ^{199}Hg^{+} ^{2}S_{1/2} − ^{2}D_{5/2} optical transition has a high sensitivity to α_{EM} (see Table 6) so that it is well suited to test its variation. The frequency of the ^{199}Hg^{+} electric quadrupole transition at 282 nm was compared to the ground state hyperfine transition of caesium during a two year period, which led to [57]$${{\rm{d}} \over {{\rm{d}}t}}\ln \left({{{{\nu _{{\rm{Hg}}}}} \over {{\nu _{{\rm{Cs}}}}}}} \right) = (0.2 \pm 7) \times {10^{-15}}\,{\rm{yr}}^{-1}.$$(27)This was improved by a comparison over a 6 year period [214] to get$${{\rm{d}} \over {{\rm{d}}t}}\ln \left({{{{\nu _{{\rm{Hg}}}}} \over {{\nu _{{\rm{Cs}}}}}}} \right) = (3.7 \pm 3.9) \times {10^{-16}}\,{\rm{yr}}^{-1}.$$(28)While ν_{Cs} is still given by Equation (21), ν_{Hg} is given by Equation (22). Using the sensitivities of Table 6, we conclude that this comparison tests the stability of$${{{\nu _{{\rm{Cs}}}}} \over {{\nu _{{\rm{Hg}}}}}} \propto {g_{{\rm{Cs}}}}\bar \mu \alpha _{{\rm{EM}}}^{6.05}.$$
 Ytterbium: The ^{2}S_{1/2} − ^{2}D_{3/2} electric quadrupole transition at 688 THz of ^{171}Yb^{+} was compared to the ground state hyperfine transition of caesium. The constraint of [408] was updated, after comparison over a six year period, to [407]$${{\rm{d}} \over {{\rm{d}}t}}\ln \left({{{{\nu _{{\rm{Yb}}}}} \over {{\nu _{{\rm{Cs}}}}}}} \right) = (-0.78 \pm 1.40) \times {10^{-15}}\,{\rm{yr}}^{-1}.$$(29)Proceeding as previously, this tests the stability of$${{{\nu _{{\rm{Cs}}}}} \over {{\nu _{{\rm{Yb}}}}}} \propto {g_{{\rm{Cs}}}}\bar \mu \alpha _{{\rm{EM}}}^{1.93}.$$
 Strontium: The comparison of the ^{1}S_{0} − ^{3}P_{0} transition in neutral ^{87}Sr with a caesium clock was performed in three independent laboratories. The combination of these three experiments [61] leads to the constraint$${{\rm{d}} \over {{\rm{d}}t}}\ln \left({{{{\nu _{{\rm{Sr}}}}} \over {{\nu _{{\rm{Cs}}}}}}} \right) = (-1.0 \pm 1.8) \times {10^{-15}}\,{\rm{yr}}^{-1}.$$(30)Proceeding as previously, this tests the stability of$${{{\nu _{{\rm{Cs}}}}} \over {{\nu _{{\rm{Sr}}}}}} \propto {g_{{\rm{Cs}}}}\bar \mu \alpha _{{\rm{EM}}}^{2.77}.$$
 Atomic dysprosium: It was suggested in [175, 174] (see also [173] for a computation of the transition amplitudes of the low-lying states of dysprosium) that the electric dipole (E1) transition between two nearly degenerate opposite-parity states in atomic dysprosium should be highly sensitive to the variation of α_{EM}. It was then demonstrated [384] that a constraint of the order of 10^{−18}/yr can be reached. The frequencies of nearly degenerate levels in two isotopes of dysprosium were monitored over an 8 month period [100], showing that the frequency variations of the 3.1-MHz transition in ^{163}Dy and the 235-MHz transition in ^{162}Dy are 9.0 ± 6.7 Hz/yr and −0.6 ± 6.5 Hz/yr, respectively. These provide the constraint$${{{{\dot \alpha}_{{\rm{EM}}}}} \over {{\alpha _{{\rm{EM}}}}}} = (-2.7 \pm 2.6) \times {10^{-15}}\,{\rm{yr}}^{-1},$$(31)at the 1σ level, without any assumption on the constancy of other fundamental constants.
 Aluminium and mercury single-ion optical clocks: The comparison of the ^{1}S_{0} − ^{3}P_{0} transition in ^{27}Al^{+} and ^{2}S_{1/2} − ^{2}D_{5/2} in ^{199}Hg^{+} over a year allowed one to set the constraint [440]$${{\rm{d}} \over {{\rm{d}}t}}\ln \left({{{{\nu _{{\rm{Al}}}}} \over {{\nu _{{\rm{Hg}}}}}}} \right) = (-5.3 \pm 7.9) \times {10^{-17}}\,{\rm{yr}}^{-1}.$$(32)Proceeding as previously, this tests the stability of$${{{\nu _{{\rm{Hg}}}}} \over {{\nu _{{\rm{Al}}}}}} \propto \alpha _{{\rm{EM}}}^{-3.208},$$which directly sets the constraint$${{{{\dot \alpha}_{{\rm{EM}}}}} \over {{\alpha _{{\rm{EM}}}}}} = (-1.6 \pm 2.3) \times {10^{-17}}\,{\rm{yr}}^{-1},$$(33)since it depends only on α_{EM}.
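Since this frequency ratio depends on α_EM alone, the conversion from Equation (32) to Equation (33) is a division by the sensitivity exponent; a minimal check of the arithmetic (small differences with the quoted numbers come from rounding):

```python
# d/dt ln(nu_Al/nu_Hg) = (-5.3 +/- 7.9) x 10^-17 /yr, and
# nu_Hg/nu_Al ~ alpha_EM^(-3.208), i.e. ln(nu_Al/nu_Hg) = 3.208 ln(alpha) + const.
drift, sigma = -5.3e-17, 7.9e-17
k = 3.208

alpha_dot_over_alpha = drift / k   # ~ -1.65e-17 per yr
alpha_sigma = sigma / k            # ~ 2.5e-17 per yr
print(alpha_dot_over_alpha, alpha_sigma)
```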
Summary of the constraints obtained from the comparisons of atomic clocks. For each constraint on the relative drift of the frequency of the two clocks, we provide the dependence in the various constants, using the numbers of Table 6. From Ref. [379], which can be consulted for other constants.
Clock 1  Clock 2  Constraint on \({{\rm{d}} \over {{\rm{d}}t}}\ln \left({{{{\nu _{{\rm{clock}}\,1}}} \over {{\nu _{{\rm{clock}}\,2}}}}} \right)\) (yr^{−1})  Constants dependence  Reference 

^{87}Rb  ^{133}Cs  (0.2 ± 7.0) × 10^{−16}  \({{{g_{{\rm{Cs}}}}} \over {{g_{{\rm{Rb}}}}}}\alpha _{{\rm{EM}}}^{0.49}\)  [346] 
^{87}Rb  ^{133}Cs  (−0.5 ± 5.3) × 10^{−16}  [58]  
^{1}H  ^{133}Cs  (−32 ± 63) × 10^{−16}  \({g_{{\rm{CS}}}}\bar \mu \alpha _{{\rm{EM}}}^{2.83}\)  [196] 
^{199}Hg^{+}  ^{133}Cs  (0.2 ± 7) × 10^{−15}  \({g_{{\rm{CS}}}}\bar \mu \alpha _{{\rm{EM}}}^{6.05}\)  [57] 
^{199}Hg^{+}  ^{133}Cs  (3.7 ± 3.9) × 10^{−16}  [214]  
^{171}Yb^{+}  ^{133}Cs  (−1.2 ± 4.4) × 10^{−15}  \({g_{{\rm{CS}}}}\bar \mu \alpha _{{\rm{EM}}}^{1.93}\)  [408] 
^{171}Yb^{+}  ^{133}Cs  (−0.78 ± 1.40) × 10^{−15}  [407]  
^{87}Sr  ^{133}Cs  (−1.0 ± 1.8) × 10^{−15}  \({g_{{\rm{CS}}}}\bar \mu \alpha _{{\rm{EM}}}^{2.77}\)  [61] 
^{163}Dy  ^{162}Dy  (−2.7 ± 2.6) × 10^{−15}  α _{EM}  [100] 
^{27}Al^{+}  ^{199}Hg^{+}  (−5.3 ± 7.9) × 10^{−17}  \(\alpha _{{\rm{EM}}}^{ 3.208}\)  [440] 
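Under the strong additional assumption that only α_EM varies (g-factors and μ held fixed), each row of the table above bounds \({{\dot \alpha}_{{\rm{EM}}}}/{\alpha _{{\rm{EM}}}}\) with uncertainty σ_i/|k_i|, where k_i is the α_EM exponent of the dependence column. The sketch below combines only the uncertainties by inverse-variance weighting (signs depend on the frequency-ratio conventions detailed in the text, so central values are omitted):

```python
# (sigma of the drift constraint [1/yr], |alpha_EM exponent|) per clock pair,
# using the most recent entry of each pair in the table.
rows = [
    (5.3e-16, 0.49),   # Rb/Cs
    (63e-16, 2.83),    # H/Cs
    (3.9e-16, 6.05),   # Hg+/Cs
    (1.40e-15, 1.93),  # Yb+/Cs
    (1.8e-15, 2.77),   # Sr/Cs
    (2.6e-15, 1.0),    # Dy (bounds alpha_EM directly)
    (7.9e-17, 3.208),  # Al+/Hg+
]

# Per-pair 1-sigma sensitivity to alpha_dot/alpha, then inverse-variance combination.
sigmas = [s / abs(k) for s, k in rows]
combined = sum(1 / s**2 for s in sigmas) ** -0.5
print(min(sigmas), combined)  # the Al+/Hg+ pair dominates
```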
3.1.3 Physical interpretation
3.1.4 Future evolutions

New systems: Many new systems with enhanced sensitivity [171, 200, 202, 205, 421] to some fundamental constants have recently been proposed. Other atomic systems are considered, such as, e.g., the hyperfine transitions in the electronic ground state of cold, trapped, hydrogenlike highly charged ions [44, 199, 448], or ultracold atom and molecule systems near the Feshbach resonances [98], where the scattering length is extremely sensitive to μ.
Concerning diatomic molecules, it was shown that this sensitivity can be enhanced in transitions between narrow close-lying levels of different nature [13, 15]. In such transitions, the fine structure mainly depends on the fine-structure constant, ν_{fs} ∼ (Zα_{EM})^{2}R_{∞}c, while the vibrational levels depend mainly on the electron-to-proton mass ratio and the reduced mass of the molecule, \({\nu _{\rm{V}}} \sim M_r^{-1/2}{{\bar \mu}^{1/2}}{R_\infty}c\). There could be a cancellation between the two frequencies when ν = ν_{fs} − nν_{v} ∼ 0, with n a positive integer. It follows that δν/ν will be proportional to K = ν_{fs}/ν, so that the sensitivity to α_{EM} and μ can be enhanced for these particular transitions. A similar effect occurs between transitions with hyperfine structure, for which the sensitivity to α_{EM} can reach 600, for instance for ^{139}La^{32}S or silicon monobromide [42], which allows one to constrain \({\alpha _{{\rm{EM}}}}{{\bar \mu}^{-1/4}}\).
Nuclear transitions, such as an optical clock based on a very narrow ultraviolet nuclear transition between the ground and first excited states of ^{229}Th, are also under consideration. Using a Walecka model for the nuclear potential, it was concluded [199] that the sensitivity of the transition to the fine-structure constant and quark mass is typically$${{\delta \omega} \over \omega} \sim {10^5}\left({4{{\delta {\alpha _{{\rm{EM}}}}} \over {{\alpha _{{\rm{EM}}}}}} + {{\delta {X_{\rm{q}}}} \over {{X_{\rm{q}}}}} - 10{{\delta {X_{\rm{s}}}} \over {{X_{\rm{s}}}}}} \right),$$which roughly provides a five order of magnitude amplification and can lead to a constraint at the level of 10^{−24}/yr on the time variation of X_{q}. Such a method is promising and would offer different sensitivities to systematic effects compared to atomic clocks. However, this sensitivity is not clearly established since different nuclear calculations do not agree [46, 247].
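A minimal numerical reading of the sensitivity formula above (the clock resolution is an assumed figure, used only to illustrate the quoted 10^{−24}/yr reach):

```python
# delta_omega/omega ~ 1e5 * (4*dalpha/alpha + dXq/Xq - 10*dXs/Xs)
def nuclear_shift(dalpha, dxq, dxs):
    return 1e5 * (4 * dalpha + dxq - 10 * dxs)

# A clock resolving a fractional drift of 1e-19/yr on this transition would
# probe X_q at the ~1e-24/yr level, thanks to the 1e5 amplification factor.
clock_resolution = 1e-19
xq_reach = clock_resolution / 1e5
print(xq_reach)  # ~1e-24
```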
Atomic clocks in space (ACES): An improvement of at least an order of magnitude on current constraints can be achieved in space with the PHARAO/ACES project [433, 444] of the European Spatial Agency. PHARAO (Projet d’Horloge Atomique par Refroidissement d’Atomes en Orbite) combines laser cooling techniques and a microgravity environment in a satellite orbit. It aims at achieving time and frequency transfer with stability better than 10^{−16}.
The SAGAS (Search for Anomalous Gravitation using Atomic Sensors) project aims at flying highly sensitive optical atomic clocks and cold atom accelerometers on a solar system trajectory over a time scale of 10 years. It could test the constancy of the fine-structure constant along the satellite worldline, which, in particular, can set a constraint on its spatial variation of the order of 10^{−9} [433, 547].

 Theoretical developments: We stress one more time that the interpretation of the experiments requires a good theoretical understanding of the systems, and also that the constraints we draw on fundamental constants such as the quark masses are conditional on our theoretical modeling, hence on hypotheses about a unification scheme as well as nuclear physics. The accuracy and the robustness of these steps need to be determined, e.g., by taking into account the dependence on the nuclear radius [154].
3.2 The Oklo phenomenon
3.2.1 A natural nuclear reactor
Oklo is the name of a town in the Gabon Republic (West Africa) where an open-pit uranium mine is situated. About 1.8 × 10^{9} yr ago (corresponding to a redshift of ∼ 0.14 in the cosmological concordance model), in one of the rich veins of uranium ore, a natural nuclear reactor went critical, consumed a portion of its fuel and then shut down a few million years later (see, e.g., [509] for more details). This phenomenon was discovered by the French Commissariat à l’Énergie Atomique in 1972 while monitoring for uranium ores [382]. Sixteen natural uranium reactors have been identified. Well-studied reactors include zone RZ2 (about 60 boreholes, 1800 kg of ^{235}U fissioned during 8.5 × 10^{5} yr) and zone RZ10 (about 13 boreholes, 650 kg of ^{235}U fissioned during 1.6 × 10^{5} yr).
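The quoted redshift can be checked by inverting the lookback time-redshift relation of the concordance model; a sketch using the parameter values adopted in this review (h = 0.73, Ω_mat = 0.27, Ω_Λ = 0.73):

```python
import math

# Lookback-time -> redshift conversion in the flat LCDM concordance model.
H0 = 73e3 / 3.0857e22            # Hubble rate [1/s] for 73 km/s/Mpc
Gyr = 3.1557e16                  # seconds per Gyr
Om, OL = 0.27, 0.73

def lookback_gyr(z, n=2000):
    # t_lb = integral_0^z dz' / [(1+z') H(z')], midpoint rule
    total, dz = 0.0, z / n
    for i in range(n):
        zp = (i + 0.5) * dz
        E = math.sqrt(Om * (1 + zp)**3 + OL)
        total += dz / ((1 + zp) * E)
    return total / H0 / Gyr

# Bisection for the redshift corresponding to a 1.8 Gyr lookback time.
lo, hi = 0.0, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lookback_gyr(mid) < 1.8 else (lo, mid)
z_oklo = 0.5 * (lo + hi)
print(z_oklo)  # ~0.15, consistent with the z ~ 0.14 quoted above
```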
The existence of such a natural reactor was predicted by P. Kuroda [303], who showed that under favorable conditions a spontaneous chain reaction could take place in rich uranium deposits. Indeed, two billion years ago, uranium was naturally enriched (due to the difference in decay rates between ^{235}U and ^{238}U) and ^{235}U represented about 3.68% of the total uranium (compared with 0.72% today and with the 3–5% enrichment used in most commercial reactors). Besides, in Oklo the conditions were favorable: (1) the concentration of neutron absorbers, which prevent the neutrons from being available for the chain fission, was low; (2) water played the role of moderator (the zones RZ2 and RZ10 operated at a depth of several thousand meters, so that the water pressure and temperature were close to those of pressurized water reactors, i.e., 20 MPa and 300°C) and slowed down fast neutrons so that they could interact with other ^{235}U; and (3) the reactor was large enough that the neutrons did not escape faster than they were produced. It is estimated that the Oklo reactor delivered a power of 10 to 50 kW. This explanation is backed up by the substantial depletion of ^{235}U as well as a correlated peculiar distribution of some rare-earth isotopes. These rare-earth isotopes are abundantly produced during the fission of uranium and, in particular, the strong neutron absorbers \(_{62}^{149}{\rm{Sm,}}\,_{63}^{151}{\rm{Eu,}}\,_{64}^{155}{\rm{Gd}}\) and \(_{64}^{157}{\rm{Gd}}\) are found in very small quantities in the reactor.
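The 3.68% figure follows from running the two uranium decays backwards in time; a quick check using the standard half-lives (7.04 × 10^8 yr for ^{235}U and 4.468 × 10^9 yr for ^{238}U, values assumed here rather than given in the text):

```python
import math

# Decay constants from the standard half-lives of the two uranium isotopes.
lam235 = math.log(2) / 7.04e8    # [1/yr]
lam238 = math.log(2) / 4.468e9   # [1/yr]

t = 2.0e9                        # years before present
r_today = 0.0072 / 0.9928        # 235U/238U number ratio today (0.72%)

# Running both decays backwards: N(t_past) = N(today) * exp(lambda * t).
r_past = r_today * math.exp((lam235 - lam238) * t)
enrichment = r_past / (1 + r_past)
print(100 * enrichment)  # ~3.7%, close to the 3.68% quoted above
```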

First, the cross section σ_{(n,γ)} strongly depends on the energy of a resonance at E_{ r } = 97.3 meV.

Geochemical data allow one to determine the isotopic composition of various elements, such as uranium, neodymium, gadolinium and samarium. Gadolinium and neodymium allow one to determine the fluence (time-integrated flux) of the neutrons, while both gadolinium and samarium are strong neutron absorbers.

From these data, one deduces the value of the cross section averaged over the neutron flux, \({{\hat \sigma}_{149}}\). This value depends on hypotheses about the geometry of the reactor zone.

The range of allowed values of \({{\hat \sigma}_{149}}\) is translated into a constraint on E_{ r }. This step involves an assumption on the form and temperature of the neutron spectrum.

E_{ r } is related to fundamental constants, which involves a model of the nucleus.

Isotopic compositions and geophysical parameters are measured in a given set of boreholes in each zone. A choice has to be made on the samples to use, in order, e.g., to ensure that they are not contaminated.

With hypotheses on the geometry of the reactor and on the spectrum and temperature of the neutron flux, one can deduce the effective value of the cross sections of neutron absorbers (such as samarium and gadolinium). This requires one to solve a network of nuclear reactions describing the fission.

One can then infer the value of the resonance energy E_{ r }, which again depends on the assumptions on the neutron spectrum.

E_{ r } needs to be related to fundamental constants, which involves a model of the nucleus and high-energy physics hypotheses.
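The step from the measured \({{\hat \sigma}_{149}}\) to E_{ r } can be illustrated with a schematic numerical sketch: a Breit-Wigner resonance, weighted by a Maxwellian flux spectrum, is integrated for two values of E_{ r }. All parameter values here (resonance width, temperature, normalization) are illustrative only, not those used in the actual analyses:

```python
import math

def sigma_bw(E, E_r, gamma=0.1):
    """Schematic Breit-Wigner capture cross section (arbitrary units).
    E, E_r, gamma in eV; the 1/sqrt(E) factor models the 1/v law."""
    return (1.0 / math.sqrt(E)) * (gamma**2 / 4.0) / ((E - E_r)**2 + gamma**2 / 4.0)

def sigma_eff(E_r, kT=0.049, n=20000, E_max=2.0):
    """Cross section averaged over a Maxwellian flux spectrum phi(E) ~ E exp(-E/kT).
    kT = 0.049 eV corresponds to roughly 300 degrees C."""
    dE = E_max / n
    num = den = 0.0
    for i in range(1, n + 1):
        E = i * dE
        phi = E * math.exp(-E / kT)
        num += sigma_bw(E, E_r) * phi * dE
        den += phi * dE
    return num / den

base = sigma_eff(0.0973)             # resonance at 97.3 meV
shifted = sigma_eff(0.0973 - 0.010)  # resonance shifted down by 10 meV
print(shifted / base)  # a few-percent shift of E_r changes sigma_eff appreciably
```

This is why a measured range of \({{\hat \sigma}_{149}}\) can be inverted into a tight range for ΔE_{ r }, once a spectrum and temperature are assumed.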
We shall now detail the assumptions used in the various analyses that have been performed since the pioneering work of [465].
3.2.2 Constraining the shift of the resonance energy
Extracting the effective cross section from the data. To “measure” the value of \({\hat \sigma}\) from the Oklo data, we need to solve the nuclear reaction network that controls the isotopic composition during the fission.
By comparing the solution of this system with the measured isotopic composition, one can deduce the effective cross section. At this step, the different analyses [465, 415, 123, 220, 305, 416, 234] differ in the choice of data. The measured values of \({{\hat \sigma}_{149}}\) can be found in these articles. They are given for a given zone (mainly RZ2 and RZ10) with a number that corresponds to the borehole and the depth (e.g., in Table 2 of [123], SC391383 refers to borehole number 39 at a depth of 13.83 m). Recently, another approach [416, 234] was proposed in order to take into account the geometry and details of the reactor. It relies on a full-scale Monte Carlo simulation and a computer model of the reactor zone RZ2 [416], and of both RZ2 and RZ10 [234], and allows one to take into account the spatial distribution of the neutron flux.
Summary of the analysis of the Oklo data. The principal assumptions to infer the value of the resonance energy E_{ r } are the form of the neutron spectrum and its temperature.
Ore  neutron spectrum  Temperature (°C)  \({{\hat \sigma}_{149}}({\rm{kb}})\)  ΔE_{ r } (meV)  Ref.

?  Maxwell  20  55 ± 8  0 ± 20  [465]
RZ2 (15)  Maxwell  180–700  75 ± 18  −1.5 ± 10.5  [123]
RZ10  Maxwell  200–400  91 ± 6  4 ± 16  [220]
RZ10  Maxwell  200–400  —  −97 ± 8  [220]
—  Maxwell + epithermal  327  91 ± 6  \(-45_{-15}^{+7}\)  [305]
RZ2  Maxwell + epithermal  —  73.2 ± 9.4  −5.5 ± 67.5  [416]
RZ2  Maxwell + epithermal  200–300  71.5 ± 10.0  —  [234]
RZ10  Maxwell + epithermal  200–300  85.0 ± 6.8  —  [234]
RZ2+RZ10  Maxwell + epithermal  200–300  —  7.2 ± 18.8  [234]
RZ2+RZ10  Maxwell + epithermal  200–300  —  90.75 ± 11.15  [234]
3.2.3 From the resonance energy to fundamental constants
The energy of the resonance depends a priori on many constants, since the existence of such a resonance is mainly the consequence of a near cancellation between the electromagnetic repulsive force and the strong interaction. But, since no full analytical understanding of the energy levels of heavy nuclei is available, the role of each constant is difficult to disentangle.
In conclusion, these last results illustrate that a detailed theoretical analysis and quantitative estimates of the nuclear physics (and QCD) aspects of the resonance shift still remain to be carried out. In particular, the interface between the perturbative QCD description and the description in terms of hadrons is not fully understood: we do not know the exact dependence of hadronic masses and coupling constants on Λ_{QCD} and the quark masses. The second problem concerns modeling nuclear forces in terms of the hadronic parameters.
At present, the Oklo data, while being stringent and consistent with no variation, have to be considered carefully. While a better understanding of nuclear physics is necessary to understand the full constant-dependence, the data themselves require more insight, particularly to understand the existence of the left branch.
3.3 Meteorite dating
Long-lived α- or β-decay isotopes may be sensitive probes of the variation of fundamental constants on geological times ranging typically up to the age of the solar system, t ∼ (4–5) Gyr, corresponding to a mean redshift of z ∼ 0.43. Interestingly, they can be compared with quasar constraints from the shallow universe. This method was initially pointed out by Wilkinson [539] and then revived by Dyson [168]. The main idea is to extract the α_{EM}-dependence of the decay rate and to use geological samples to bound its time variation.
3.3.1 Long lived αdecays
Summary of the main nuclei and their physical properties that have been used in αdecay studies.
Element  Z  A  Lifetime (yr)  Q (MeV)  s _{ α } 

Sm  62  147  1.06 × 10^{11}  2.310  774 
Gd  64  152  1.08 × 10^{14}  2.204  890 
Dy  66  154  3 × 10^{6}  2.947  575 
Pt  78  190  6.5 × 10^{11}  3.249  659 
Th  90  232  1.41 × 10^{10}  4.082  571 
U  92  235  7.04 × 10^{8}  4.678  466 
U  92  238  4.47 × 10^{9}  4.270  548 
The sensitivities of all the nuclei of Table 9 are similar, so that the best constraint on the time variation of the fine-structure constant will be given by the nuclei with the smallest Δλ/λ.
As for the Oklo phenomenon, the effect of other constants has not been investigated in depth. It is clear that at lowest order both Q and m_{p} scale as Λ_{QCD}, so that one needs to go beyond such a simple description to determine the dependence on the quark masses. Taking into account the contribution of the quark masses, in the same way as for Equation (53), it was argued that \(\lambda \propto X_{\rm{q}}^{300-2000}\), which leads to ∣Δln X_{q}∣ ≲ 10^{−5}. In a grand unified framework, this could lead to a constraint of the order of ∣Δln α_{EM}∣ ≲ 2 × 10^{−7}.
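Schematically, a bound on the variation of a decay rate translates into a bound on α_{EM} through the sensitivities s_{α} of the table above, since Δλ/λ = s_{α} Δα_{EM}/α_{EM}. A minimal sketch (the 1% decay-rate uncertainty is an illustrative number):

```python
# Sensitivities s_alpha = dln(lambda)/dln(alpha_EM), taken from the table above
s_alpha = {"Sm147": 774, "Gd152": 890, "Dy154": 575,
           "Pt190": 659, "Th232": 571, "U235": 466, "U238": 548}

def alpha_bound(dlnlambda, nucleus):
    """Bound on |dln alpha_EM| implied by a bound |dln lambda| for a given nucleus."""
    return dlnlambda / s_alpha[nucleus]

# e.g., a 1% determination of the 147Sm decay rate over the age of
# the solar system translates into
print(alpha_bound(0.01, "Sm147"))  # ~1.3e-5
```

Since the s_{α} are all of the same order, the nucleus whose decay rate is known with the smallest relative uncertainty wins, as stated above.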
3.3.2 Long lived βdecays
Dicke [150] stressed that the comparison of the rubidium-strontium and potassium-argon dating methods to uranium and thorium decay rates constrains the variation of α_{EM}.
As pointed out in [219, 218], these constraints really represent a bound on the average decay rate \({\bar \lambda}\) since the formation of the meteorites. This implies in particular that the redshift at which one should consider this constraint depends on the specific functional dependence λ(t). It was shown that a well-designed time dependence for λ can obviate this limit, due to the time average.
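The point about the time average can be made explicit with a toy model: a decay rate that oscillates around its present value over an integer number of cycles leaves the average \({\bar \lambda}\), and hence the meteorite bound, unchanged, even though λ(t) departs from λ_{0} by 10% at intermediate times. A minimal sketch (all numbers illustrative):

```python
import math

def average_rate(lam, T, n=10000):
    """Time average of lam(t) over [0, T] by the midpoint rule."""
    dt = T / n
    return sum(lam((i + 0.5) * dt) for i in range(n)) * dt / T

lam0 = 1.0
T = 4.6  # Gyr, age of the solar system (illustrative units)
# A decay rate oscillating by 10% around lam0, with a period chosen
# (hypothetically) so that an integer number of cycles fits in T:
lam_osc = lambda t: lam0 * (1.0 + 0.1 * math.sin(2.0 * math.pi * 4.0 * t / T))

print(average_rate(lam_osc, T))  # ~1.0: the excursion averages away
```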
3.3.3 Conclusions
Meteorite data allow one to set constraints on the variation of the fundamental constants, which are comparable to the ones set by the Oklo phenomenon. Similar constraints can also be set from spontaneous fission (see Section III.A.3 of FVC [500]), but this process is less well understood and less sensitive than the α- and β-decay processes.
From an experimental point of view, the main difficulty concerns the dating of the meteorites and the interpretation of the effective decay rate.
As long as we only consider α_{EM}, the sensitivities can be computed mainly by considering the contribution of the Coulomb energy to the decay energy, which reduces to its contribution to the nuclear energy. However, as for the Oklo phenomenon, the dependencies on the other constants, X_{q}, G_{F}, μ, …, require a nuclear model and remain very model-dependent.
3.4 Quasar absorption spectra
3.4.1 Generalities
Quasar (QSO) absorption lines provide a powerful probe of the variation of fundamental constants. Absorption lines in intervening clouds along the line of sight of the QSO give access to the spectra of the atoms present in the cloud, that is, to paleo-spectra. The method was first used by Savedoff [447], who constrained the time variation of the fine-structure constant from the doublet separations seen in galaxy emission spectra. For a general introduction to these observations, we refer to [412, 474, 271].
Indeed, one cannot use a single transition compared to its laboratory value, since the expansion of the universe induces a global redshifting of all spectra. In order to pin down a variation of the fundamental constants, one should resort to various transitions and look for chromatic effects, which cannot be reproduced by the expansion of the universe since it acts achromatically on all wavelengths.
The shift between two lines is easier to measure when the difference between the q-coefficients of the two lines is large, which occurs, e.g., for two levels with large q of opposite sign. Many methods were developed to take this into account. The alkali doublet method (AD) focuses on the fine-structure doublet of alkali atoms. It was then generalized to the many-multiplet method (MM), which uses correlations between various transitions in different atoms. As can be seen in Figure 3, some transitions are almost insensitive to a variation of α_{EM}. This is the case of Mg II, which can be used as an anchor, i.e., a reference point. To obtain strong constraints one can either compare transitions of light atoms with those of heavy atoms (because the α_{EM} dependence of the ground state scales as Z^{2}) or compare s − p and d − p transitions in heavy elements (in that case, the relativistic corrections will be of opposite signs). This latter effect increases the sensitivity and strengthens the method against systematic errors. However, the results of this method rely on two assumptions: (i) ionization and chemical homogeneity and (ii) an isotopic abundance of Mg II close to the terrestrial value. Even though these are reasonable assumptions, one cannot completely rule out systematic biases that they could induce. The AD method avoids the implicit assumption of the MM method that chemical and ionization inhomogeneities are negligible because, by construction, the two lines of the doublet must have the same profile. Another way to avoid the influence of small spectral shifts, due to ionization inhomogeneities within the absorber and to possible non-zero offsets between different exposures, is to rely on different transitions of a single ion in individual exposures. This method has been called the single ion differential alpha measurement method (SIDAM).
Most studies are based on optical techniques, due to the profusion of strong UV transitions that are redshifted into the optical band (this includes AD, MM and SIDAM, and implies that they can be applied only above a given redshift, e.g., Si IV at z > 1.3, Fe II λ1608 at z > 1), or on radio techniques, since radio transitions arise from many different physical effects (hyperfine splitting, in particular the H I 21 cm hyperfine transition, molecular rotation, Lambda-doubling, etc.). In the latter case, the line frequencies and their comparisons yield constraints on different sets of fundamental constants including α_{EM}, g_{p} and μ. Thus, these techniques are complementary, since the systematic effects are different in the optical and radio regimes. The radio techniques also offer some advantages: (1) they reach high spectral resolution (< 1 km/s), alleviating in particular problems with line blending, and the use of, e.g., masers allows one to reach a frequency calibration better than roughly 10 m/s; (2) in general, the sensitivity of the line position to a variation of a constant is higher; (3) the isotopic lines are observed separately, while in the optical there is a blend with possible differential saturations (see, e.g., [109] for a discussion).
Let us first emphasize that the shifts in the absorption lines to be detected are extremely small. For instance, a change of α_{EM} of order 10^{−5} corresponds to a shift of at most 20 mÅ for a redshift of z ∼ 2, which corresponds to a shift of order ∼ 0.5 km/s, or to about a third of a pixel at a spectral resolution of R ∼ 40000, as achieved with Keck/HIRES or VLT/UVES. As we shall discuss later, there are several sources of uncertainty that hamper the measurement. In particular, the absorption lines have complex profiles (because they result from the propagation of photons through a highly inhomogeneous medium) that are fitted using a combination of Voigt profiles. Each of these components depends on several parameters, including the redshift, the column density and the width of the line (Doppler parameter), to which one now needs to add the constants that are assumed to be varying. These parameters are constrained assuming that the profiles are the same for all transitions, which is indeed a non-trivial assumption for transitions from different species (this was one of the driving motivations to use transitions from a single species, as in the SIDAM method). More importantly, the fit is usually not unique. This is not a problem when the lines are not saturated, but it can increase the error on α_{EM} by a factor of 2 in the case of strongly saturated lines [91].
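For orientation, the size of the expected shift can be sketched from the parameterization ω = ω_{0} + q[(α/α_{0})^{2} − 1] used in these analyses. The q value below is only a typical magnitude, not the calibrated coefficient of any particular line; comparing two lines with q of opposite signs roughly doubles the effect:

```python
C_KMS = 2.998e5  # speed of light in km/s

def velocity_shift(q, omega0, dalpha):
    """Velocity shift (km/s) of a line omega = omega0 + q[(alpha/alpha0)^2 - 1]
    for a small fractional variation dalpha = d(alpha)/alpha."""
    return C_KMS * 2.0 * q * dalpha / omega0

# Fe II lambda 1608: omega0 ~ 62170 cm^-1; |q| ~ 1300 cm^-1 is a typical
# magnitude for such transitions (illustrative value)
dv = velocity_shift(1300.0, 62170.0, 1e-5)
print(dv)  # ~0.13 km/s per line

# For comparison, the resolution element of a spectrograph with R ~ 40000:
print(C_KMS / 40000.0)  # ~7.5 km/s
```

The signal is thus a small fraction of a resolution element, which is why sub-pixel wavelength calibration is critical.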
3.4.2 Alkali doublet method (AD)
 Murphy et al. [377] analyzed 21 Keck/HIRES Si IV absorption systems toward 8 quasars, with a mean redshift of z = 2.6, to obtain the weighted mean of the sample, $$\Delta {\alpha _{{\rm{EM}}}}/{\alpha _{{\rm{EM}}}} = (-0.5 \pm 1.3) \times {10^{-5}},\quad 2.33 < z < 3.08.$$(71) The S/N ratio of these data is in the range 15–40 per pixel and the spectral resolution is R ∼ 34000.
 Chand et al. [91] analyzed 15 Si IV absorption systems selected from an ESO-UVES sample containing 31 systems (eliminating contaminated, saturated or very broad systems; in particular a lower limit on the column density was fixed so that both lines of the doublets are detected at more than 5σ) to get the weighted mean, $$\Delta {\alpha _{{\rm{EM}}}}/{\alpha _{{\rm{EM}}}} = (-0.15 \pm 0.43) \times {10^{-5}},\quad 1.59 < z < 2.92.$$(72) The improvement of the constraint arises mainly from a better S/N ratio, of order 60–80 per pixel, and a higher resolution, R ∼ 45000. Note that combining this result with the previous one (71) in a weighted mean would lead to Δα_{EM}/α_{EM} = (−0.04 ± 0.56) × 10^{−5} in the range 1.59 < z < 3.02.
 The analysis [349] of seven C IV systems and two Si IV systems in the direction of a single quasar, obtained by VLT/UVES (during the science verification), has led to $$\Delta {\alpha _{{\rm{EM}}}}/{\alpha _{{\rm{EM}}}} = (-3.09 \pm 8.46) \times {10^{-5}},\quad 1.19 < z < 1.84.$$(73) This is less constraining than the two previous analyses, mainly because the q-coefficients are smaller for C IV (see [410] for the calibration of the laboratory spectra).
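The weighted means quoted in these analyses are standard inverse-variance combinations. As an illustration, combining the two sample means (71) and (72) directly gives a value close to, but not identical with, the combined value quoted above, which was obtained from the individual systems rather than from the two summary numbers:

```python
def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its 1-sigma error."""
    weights = [1.0 / e**2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, wsum ** -0.5

# The two Si IV alkali-doublet results (71) and (72), in units of 1e-5:
mean, err = weighted_mean([-0.5, -0.15], [1.3, 0.43])
print(f"{mean:.2f} +/- {err:.2f}")  # -0.18 +/- 0.41 (in units of 1e-5)
```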
One limitation may arise from the isotopic composition of silicon. Silicon has three naturally occurring isotopes with terrestrial abundances ^{28}Si:^{29}Si:^{30}Si = 92.23:4.68:3.09, so that each absorption line is a composite of absorption lines from the three isotopes. However, it was shown [377] that the effect of these isotopic shifts is negligible in the case of Si IV.
3.4.3 Many multiplet method (MM)
A generalization of the AD method, known as the many-multiplet method, was proposed in [176]. It relies on the combination of transitions from different species. In particular, as can be seen in Figure 3, some transitions are fairly insensitive to a change of the fine-structure constant (e.g., Mg II or Mg I, hence providing good anchors) while others, such as Fe II, are more sensitive. The first implementation [522] of the method was based on a measurement of the shift of the Fe II spectrum (the rest wavelengths of which are very sensitive to α_{EM}) with respect to the one of Mg II. This comparison increases the sensitivity compared with methods using only alkali doublets. Two series of analyses were performed during the past ten years and led to contradictory conclusions. The accuracy of the measurements depends on how well the absorption line profiles are modeled.
Hunting systematics. While performing this kind of observation, a number of problems and systematic effects have to be taken into account and controlled. (1) Errors in the determination of laboratory wavelengths to which the observations are compared. (2) While comparing wavelengths from different atoms, one has to take into account that they may be located in different regions of the cloud with different velocities and hence with different Doppler shifts. (3) One has to ensure that no transition is blended by transitions of another system. (4) The differential isotopic saturation has to be controlled. Usually quasar absorption systems are expected to have lower heavy element abundances. The spatial inhomogeneity of these abundances may also play a role. (5) Hyperfine splitting can induce a saturation similar to isotopic abundances. (6) The variation of the velocity of the Earth during the integration of a quasar spectrum can also induce differential Doppler shifts. (7) Atmospheric dispersion across the spectral direction of the spectrograph slit can stretch the spectrum. It was shown that, on average, this can, for low redshift observations, mimic a negative Δα_{EM}/α_{EM}, while this is no longer the case for high redshift observations (hence emphasizing the complementarity of these observations). (8) The presence of a magnetic field will shift the energy levels by the Zeeman effect. (9) Temperature variations during the observation will change the air refractive index in the spectrograph. In particular, flexures in the instrument are dealt with by recording a calibration lamp spectrum before and after the science exposure, and the signal-to-noise and stability of the lamp are crucial. (10) Instrumental effects, such as variations of the intrinsic instrument profile, have to be controlled.
All these effects have been discussed in detail in [374, 376] to argue that none of them can explain the current detection. This was recently complemented by a study of the calibration, since a distortion of the wavelength scale could lead to a non-zero value of Δα_{EM}. The quality of the calibration is discussed in [368] and shown to have a negligible effect on the measurements (a similar result has been obtained for the VLT/UVES data [534]).
As we pointed out earlier, one assumption of the method concerns the isotopic abundances of Mg II, which can affect the low-z sample since any change in the isotopic composition will alter the value of the effective rest-wavelengths. This isotopic composition is assumed to be close to the terrestrial one, ^{24}Mg:^{25}Mg:^{26}Mg = 79:10:11. No direct measurement of r_{Mg} = (^{26}Mg + ^{25}Mg)/^{24}Mg in QSO absorbers is currently feasible, due to the small separation of the isotopic absorption lines. However, it was shown [231], on the basis of molecular absorption lines of MgH, that r_{Mg} generally decreases with decreasing metallicity. In standard models it should be near 0 at zero metallicity, since type II supernovae are primarily producers of ^{24}Mg. It was also argued that ^{13}C is a tracer of ^{25}Mg, and it was shown to be low in the case of HE 0515−4414 [321]. However, contrary to this trend, it was found [552] that r_{Mg} can reach high values for some giant stars in the globular cluster NGC 6752 with metallicity [Fe/H] ∼ −1.6. This led Ashenfelter et al. [18] to propose a chemical evolution model with a strongly enhanced population of intermediate-mass (2–8 M_{⊙}) stars, which in their asymptotic giant branch phase are the dominant factories for heavy Mg at the low metallicities typical of QSO absorption systems, as a possible explanation of the low-z Keck/HIRES observations without any variation of α_{EM}. It would require that r_{Mg} reaches 0.62, compared to 0.27 (but then the UVES/VLT constraints would be converted into a detection). Care needs to be taken since the star formation history can be different in each region, even in each absorber, so that one cannot a priori apply the best fit obtained from the Keck data to the UVES/VLT data. However, such a modified nucleosynthetic history will lead to an overproduction of elements such as P, Si and Al above current constraints [192], though this latter model is not the same as the one of Ref. [18], which was tuned to avoid these problems.
In conclusion, no compelling evidence for a systematic effect has been raised at the moment.
VLT/UVES data. The previous results, and their importance for fundamental physics, led another team to check this detection using observations from the UVES spectrograph operating on the VLT. In order to avoid as many systematics as possible, and based on numerical simulations, they applied a series of selection criteria [90] to the systems used to constrain the time variation of the fine-structure constant: (1) consider only lines with similar ionization potentials (Mg II, Fe II, Si II and Al II), as they are most likely to originate from similar regions in the cloud; (2) avoid absorption lines contaminated by atmospheric lines; (3) consider only systems with a column density high enough to ensure that all the multiplets are detected at more than 5σ; (4) demand that at least one of the anchor lines is not saturated, to have a robust measurement of the redshift; (5) reject strongly saturated systems with large velocity spread; (6) keep only systems for which the majority of the components are separated from the neighboring ones by more than the Doppler shift parameter.
The advantage of this choice is to reject most complex or degenerate systems, which could result in uncontrolled systematic effects. The drawback is of course that the analysis will be based on fewer systems.
On the basis of the articles [372, 371, 370] and the answer [471], it is difficult (without having worked with the data) to side with one party or the other. This exchange has highlighted some differences in the statistical analysis.
To finish, let us mention that [361] reanalyzed some systems of [90, 470] by means of the SIDAM method (see below) and disagree with some of them, claiming a calibration problem. They also claim that the errors quoted in [367] are underestimated by a factor of 1.5.
Open controversy. At the moment, we have to face a situation in which two teams have performed two independent analyses based on data sets obtained with two instruments on two telescopes. Their conclusions do not agree, since only one of them claims to have detected a variation of the fine-structure constant. This discrepancy between the VLT/UVES and Keck/HIRES results is yet to be resolved. In particular, the two teams use data from different telescopes observing different (Southern/Northern) hemispheres.
Ref. [236] provides an analysis of the wavelength accuracy of the Keck/HIRES spectrograph. An absolute uncertainty of Δz ∼ 10^{−5}, corresponding to Δλ ∼ 0.02 Å, was found, with a daily drift of Δz ∼ 5 × 10^{−6} and a multi-day drift of Δz ∼ 2 × 10^{−5}. While the cause of this drift remains unknown, it is argued [236] that this level of systematic uncertainty makes it difficult to use Keck/HIRES to constrain the time variation of α_{EM} (at least for a single system or a small sample; since the distortion pattern pertains to the echelle orders as they are recorded on the CCD, i.e., it is similar from exposure to exposure, the effect on Δα_{EM}/α_{EM} for an ensemble of absorbers at different redshifts would be random, since the transitions fall in different places with respect to the pattern of the distortion). This needs to be confirmed and investigated in more detail. We refer to [373] for a discussion of the Keck wavelength calibration error, to [534] for the VLT/UVES one, and to [86] for a discussion of the ThAr calibration.
On the one hand, it is appropriate that one team has reanalyzed the data of the other and challenged its analysis, since this leads to an improvement in the robustness of the results; a similar reverse analysis would also be welcome. On the other hand, both teams have achieved an impressive amount of work in order to understand and quantify all sources of systematics. Both developments, as well as the new techniques that are appearing, should hopefully settle this observational issue. Today, it is unfortunately premature to prefer one data set over the other.
3.4.4 Single ion differential measurement (SIDAM)
This method [320] is an adaptation of the MM method in order to avoid the influence of small spectral shifts due to ionization inhomogeneities within the absorbers as well as to nonzero offsets between different exposures. It was mainly used with Fe II, which provides transitions with positive and negative qcoefficients (see Figure 3). Since it relies on a single ion, it is less sensitive to isotopic abundances, and in particular not sensitive to the one of Mg.
3.4.5 HI21 cm vs. UV: \(x = \alpha _{{\rm{EM}}}^2{g_p}/\mu\)
In such an approach two main difficulties arise: (1) the radio and optical sources must coincide (in the optical the QSO can be considered point-like, and it must be checked that this is also the case for the radio source); (2) the clouds responsible for the 21 cm and UV absorptions must be localized in the same place. Therefore, the systems must be selected with care; today the number of such systems is small, and more are actively being looked for [411].
3.4.6 HI vs. molecular transitions: \(y \equiv {g_{\rm{P}}}\alpha _{{\rm{EM}}}^2\)
The radio domain has the advantage of heterodyne techniques, with a spectral resolution of 10^{6} or more, and dealing with cold gas and narrow lines. The main systematics is the kinematical bias, i.e., that the different lines do not come exactly from the same material along the line of sight, with the same velocity. To improve this method one needs to find more sources, which may be possible with the radio telescope ALMA^{3}.
3.4.7 OH — 18 cm: \(F = {g_{\rm{P}}}{(\alpha _{{\rm{EM}}}^2\mu)^{1.57}}\)
Using transitions originating from a single species, as with SIDAM, allows one to reduce the systematic effects. The 18 cm lines of the OH radical offer such a possibility [95, 272].
3.4.8 Far infrared finestructure lines: \({F\prime} = \alpha _{{\rm{EM}}}^2\mu\)
3.4.9 “Conjugate” satellite OH lines: G = g_{p}(α_{EM}μ)^{1.85}
The satellite OH 18 cm lines are conjugate so that the two lines have the same shape, but with one line in emission and the other in absorption. This arises due to an inversion of the level of populations within the ground state of the OH molecule. This behavior has recently been discovered at cosmological distances and it was shown [95] that a comparison between the sum and difference of satellite line redshifts probes G = g_{p}(α_{EM}μ)^{1.85}.
One strength of this method is that it guarantees that the satellite lines arise from the same gas, preventing any velocity offset between the lines. Also, the shapes of the two lines must agree if they arise from the same gas.
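All the combined constants of Sections 3.4.5–3.4.9 constrain linear combinations of the individual fractional variations, with coefficients read off the exponents in the definitions (e.g., Δln G = Δln g_{p} + 1.85 Δln α_{EM} + 1.85 Δln μ). A small sketch of this bookkeeping:

```python
def dln_combination(exponents, dlns):
    """Fractional variation of a product of constants:
    dln(prod c_i^k_i) = sum_i k_i * dln(c_i)."""
    return sum(k * d for k, d in zip(exponents, dlns))

# Exponents of (g_p, alpha_EM, mu), read off the definitions in the text:
COMBOS = {
    "x":  (1.0, 2.0, -1.0),        # x  = alpha^2 g_p / mu
    "y":  (1.0, 2.0, 0.0),         # y  = g_p alpha^2
    "F":  (1.0, 2 * 1.57, 1.57),   # F  = g_p (alpha^2 mu)^1.57
    "F'": (0.0, 2.0, 1.0),         # F' = alpha^2 mu
    "G":  (1.0, 1.85, 1.85),       # G  = g_p (alpha mu)^1.85
}

# Example: suppose dln(alpha_EM) = 1e-6 with g_p and mu constant
for name, k in COMBOS.items():
    print(name, dln_combination(k, (0.0, 1e-6, 0.0)))
```

This is why several complementary combinations are needed to disentangle α_{EM}, g_{p} and μ individually.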
3.4.10 Molecular spectra and the electrontoproton mass ratio
As was pointed out in Section 3.1, molecular lines can provide a test of the variation^{4} [488] of μ, since rotational and vibrational transitions are respectively inversely proportional to the reduced mass and to its square root [see Equation (35)].
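This scaling directly fixes the sensitivity coefficients K = d ln ν / d ln μ_red of the two types of transitions (|K| = 1 for rotational lines, |K| = 1/2 for vibrational lines; sign conventions vary in the literature). A small numerical check:

```python
def shifted_freq(nu0, kind, dlnmu):
    """Frequency after a small fractional change dlnmu of the reduced mass:
    rotational nu ~ 1/mu_red, vibrational nu ~ 1/sqrt(mu_red)."""
    if kind == "rot":
        return nu0 / (1.0 + dlnmu)
    if kind == "vib":
        return nu0 / (1.0 + dlnmu) ** 0.5
    raise ValueError(kind)

dlnmu = 1e-5
K_rot = (shifted_freq(1.0, "rot", dlnmu) - 1.0) / dlnmu  # ~ -1
K_vib = (shifted_freq(1.0, "vib", dlnmu) - 1.0) / dlnmu  # ~ -0.5
print(K_rot, K_vib)
```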
3.4.10.1 Constraints with H_{2}
H_{2} is the most abundant molecule in the universe and there were many attempts to use its absorption spectra to put constraints on the time variation of μ despite the fact that H_{2} is very difficult to detect [387].
This method is subject to important systematic errors, among which: (1) the sensitivity to the laboratory wavelengths (since the use of two different catalogs yields different results [431]); (2) the molecular lines are located in the Lyman-α forest, where they can be strongly blended with intervening H I Lyman-α absorption lines, which requires a careful fitting of the lines [289] since it is hard to find lines that are not contaminated. From an observational point of view, very few damped Lyman-α systems have a measurable amount of H_{2}, so that only a dozen systems are actually known, even though more systems will be obtained soon [411]. Finally, the sensitivity coefficients are usually low, typically of the order of 10^{−2}. Some advantages of using H_{2} arise from the fact that there are several hundred available H_{2} lines, so that many lines from the same ground state can be used to eliminate different kinematics between regions of different excitation temperatures. The overlap between the Lyman and Werner bands also allows one to reduce the calibration errors.
To conclude, the combination of all the existing observations indicates that μ is constant at the 10^{−5} level during the past 11 Gyr, while an improvement by a factor of 10 can be expected in the coming five years.
3.4.10.2 Other constraints
This method was also applied [323] in the Milky Way, in order to constrain the spatial variation of μ in the galaxy (see Section 6.1.3). Using ammonia emission lines from interstellar molecular clouds (Perseus molecular core, the Pipe nebula and the infrared dark clouds), it was concluded that Δμ/μ = (4–14) × 10^{−8}. This indicates a positive velocity offset between the ammonia inversion transition and the rotational transitions of other molecules. Since two systems are located toward the galactic center while one lies in the direction of the anti-center, this may indicate a spatial variation of μ on galactic scales.
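The ammonia method rests on the enhanced sensitivity of the inversion transition to μ. With the sensitivity coefficients commonly used in the literature (K_inv ≈ 4.46 for the NH_3 inversion line versus K = 1 for rotational lines; treat both as assumed values here), a given Δμ/μ maps onto an apparent velocity offset Δv/c ≈ (K_inv − K_rot) Δμ/μ:

```python
C_KMS = 2.998e5  # speed of light in km/s

def ammonia_velocity_offset(dmu_over_mu, k_inv=4.46, k_rot=1.0):
    """Apparent velocity offset (km/s) between the NH3 inversion line and a
    rotational line, for a fractional variation of mu = m_p/m_e.
    The sensitivity coefficients are literature values, assumed here."""
    return C_KMS * (k_inv - k_rot) * dmu_over_mu

# For Delta mu / mu ~ 1e-7, at the upper end of the quoted range:
print(ammonia_velocity_offset(1e-7))  # ~0.1 km/s
```

Offsets of this size are at the limit of what line-position measurements in molecular clouds can resolve, which is why the kinematic interpretation of the offset has to be handled with care.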
3.4.10.3 New possibilities
The detection of several transitions of deuterated molecular hydrogen, HD, makes it possible to test the variation of μ in the same way as with H_{2} but in a completely independent way, even though today it has been detected at only two places in the universe. The sensitivity coefficients have been published in [263], and HD was first detected by [387].
HD was recently detected [473], together with CO and H_{2}, in a DLA cloud at a redshift of 2.418 toward SDSS1439+11, with 5 lines of HD in 3 components together with several H_{2} lines in 7 components. It allowed one to set the 3σ limit ∣Δμ/μ∣ < 9 × 10^{−5} [412].
Even though the small number of lines does not allow one to reach the level of accuracy of H_{2}, it is a very promising system, in particular to obtain independent measurements.
3.4.10.4 Emission spectra
Similar analyses to constrain the time variation of the fundamental constants were also performed with emission spectra. Very few such estimates have been performed, since the method is less sensitive and harder to extend to sources at high redshift. In particular, emission lines are usually broad compared to absorption lines, and the larger individual errors need to be beaten down by large statistics.
3.4.11 Conclusion and prospects
Summary of the latest constraints on the variation of fundamental constants obtained from the analysis of quasar absorption spectra. We recall that \(y \equiv {g_{\rm{p}}}\alpha _{{\rm{EM}}}^2,\,F \equiv {g_{\rm{p}}}{(\alpha _{{\rm{EM}}}^2\mu)^{1.57}},\,x \equiv \alpha _{{\rm{EM}}}^2{g_{\rm{p}}}/\mu ,\,F' \equiv \alpha _{{\rm{EM}}}^2\mu\) and μ ≡ m_{p}/m_{e}, G = g_{p}(αμ)^{1.85}.
Constant  Method  System  Constraint (× 10^{−5})  Redshift  Ref. 

α _{EM}  AD  21  (−0.5 ± 1.3)  2.33–3.08  [377] 
AD  15  (−0.15 ± 0.43)  1.59–2.92  [91]  
AD  9  (−3.09 ± 8.46)  1.19–1.84  [349]  
MM  143  (−0.57 ± 0.11)  0.2–4.2  [367]  
MM  21  (0.01 ± 0.15)  0.4–2.3  [90]  
SIDAM  1  (−0.012 ± 0.179)  1.15  [361]  
SIDAM  1  (0.566 ± 0.267)  1.84  [361]  
y  HI — mol  1  (−0.16 ± 0.54)  0.6847  [375] 
HI — mol  1  (−0.2 ± 0.44)  0.247  [375]  
CO, HCO^{+}  (−4 ± 6)  0.247  [536]  
F  OH — HI  1  (−0.44 ± 0.36 ± 1.0_{syst})  0.765  [276] 
OH — H I  1  (0.51 ± 1.26)  0.2467  [138]  
x  HI — UV  9  (−0.63 ± 0.99)  0.23–2.35  [494] 
HI — UV  2  (−0.17 ± 0.17)  3.174  [472]  
F′  C II — CO  1  (1 ± 10)  4.69  [327] 
C II — CO  1  (14 ± 15)  6.42  [327]  
G  OH  1  < 1.1  0.247, 0.765  [95] 
OH  1  < 1.16  0.0018  [95]  
OH  1  (−1.18 ± 0.46)  0.247  [273]  
μ  H_{2}  1  (2.78 ± 0.88)  2.59  [431] 
H_{2}  1  (2.06 ± 0.79)  3.02  [431]  
H_{2}  1  (1.01 ± 0.62)  2.59  [289]  
H_{2}  1  (0.82 ± 0.74)  2.8  [289]  
H_{2}  1  (0.26 ± 0.30)  3.02  [289]  
H_{2}  1  (0.7 ± 0.8)  3.02, 2.59  [490]  
NH_{3}  1  < 0.18  0.685  [366]  
NH_{3}  1  < 0.38  0.685  [353]  
HC_{3}N  1  < 0.14  0.89  [250]  
HD  1  < 9  2.418  [412]  
HD  1  (0.56 ± 0.55_{stat} ± 0.27_{syst})  2.059  [342] 
At the moment, only one analysis claims to have detected a variation of the fine structure constant (Keck/HIRES), while the VLT/UVES data point toward no variation. This has led to the proposition that α_{EM} may be space dependent and exhibit a dipole, the origin of which is not explained. Needless to say, such controversies and hypotheses are healthy, since they will help improve the analysis of this data, but it is premature to conclude on this debate and the jury is still out. Most of the systematics have been investigated in detail and now seem under control.
In the course of this section we have mentioned many possibilities for improving these constraints.
Since the AD method is free of the two main assumptions of the MM method, it seems important to increase the precision of this method, as well as of any method relying on only one species. This can be achieved by increasing the S/N ratio and spectral resolution of the data, or by increasing the sample size and including new transitions (e.g., cobalt [172, 187]).
The limitation may then lie in the statistics and the calibration, and it would be useful to use more than two QSOs with overlapping spectra to cross-calibrate the line positions. This means that one needs to discover more absorption systems suited for these analyses. Much progress is expected. For instance, the FIR lines are expected to be observed by a new generation of telescopes such as HERSCHEL^{6}. While the size of the radio sample is still small, surveys are being carried out, so that the number of known OH, HI and HCO^{+} absorption systems with measured redshifts will increase. For instance, the future Square Kilometer Array (SKA) will be able to detect relative changes of the order of 10^{−7} in α_{EM}.
In conclusion, it is clear that these constraints, and the understanding of the absorption systems, will improve in the coming years.
3.5 Stellar constraints

the decay lifetime of ^{8}Be, of order 10^{−16} s, is four orders of magnitude longer than the time for two α particles to scatter, so that a macroscopic amount of beryllium can be produced, which is sufficient to lead to considerable production of carbon,
 an excited state of ^{12}C lies just above the energy of ^{8}Be+α, which allows for$$^4{\rm{He}}{{\rm{+}}^4}{\rm{He}}{\leftrightarrow ^8}{\rm{Be,}}{\quad ^8}{\rm{Be}}{{\rm{+}}^4}{\rm{He}}{\leftrightarrow ^{12}}{\rm{C\ast}}\;{\rightarrow^{12}}{\rm{C + 7}}{\rm{.367}}\;{\rm{MeV,}}$$

the energy level of ^{16}O at 7.1197 MeV is nonresonant and below the energy of ^{12}C + α, of order 7.1616 MeV, which ensures that most of the carbon synthesized is not destroyed by the capture of an α-particle. The existence of this resonance, the \(J^\pi = 0_2^ +\) state of ^{12}C, was actually discovered experimentally later [111], with an energy of 372 ± 4 keV [today, \({E_{0_2^ +}} = 379.47 \pm 0.15\,{\rm{keV}}\)] above the ground state of three α-particles (see Figure 5).
The variation of any constant that would modify the energy of this resonance would also endanger the stellar nucleosynthesis of carbon, so that the possibility of carbon production has often been used in anthropic arguments. Qualitatively, if \({E_{0_2^ +}}\) is increased, the carbon is rapidly processed to oxygen, since the star needs to be hotter for the triple-α process to start. On the other hand, if \({E_{0_2^ +}}\) is decreased, all α-particles produce carbon, so that no oxygen is synthesized. It was estimated [334] that carbon production in intermediate and massive stars is suppressed if the variation of the energy of the resonance lies outside the range \(-250\,{\rm{keV}} \lesssim \Delta {E_{0_2^ +}} \lesssim 60\,{\rm{keV}}\), which was further improved [451] to \(-5\,{\rm{keV}} \lesssim \Delta {E_{0_2^ +}} \lesssim 50\,{\rm{keV}}\) in order for the C/O ratio to be larger than the error in the standard yields by more than 50%. Indeed, in such analyses the energy of the resonance was changed by hand. However, we expect that if \({E_{0_2^ +}}\) is modified due to the variation of a constant, other quantities, such as the resonance of the oxygen, the binding energies and the cross sections, will also be modified in a complex way.
In practice, such an analysis requires one:
 1. to determine the effective parameters, e.g., cross sections, which affect the stellar evolution. The simplest choice is to modify only the energy of the resonance, but this may not be realistic, since all cross sections and binding energies should also be affected. This requires one to use a stellar evolutionary model;
 2. to relate these parameters to nuclear parameters. This involves the whole nuclear physics machinery;
 3. to relate the nuclear parameters to fundamental constants. As for the Oklo phenomenon, this requires one to link QCD to nuclear physics.
To finish, a recent study [3] focuses on the existence of stars themselves, by revisiting stellar equilibrium when the values of some constants are modified. In some sense, it can be seen as a generalization of the work by Gamow [224] to constrain the Dirac model of a varying gravitational constant by estimating its effect on the lifetime of the Sun. In this semi-analytical stellar structure model, the effect of the fundamental constants was reduced phenomenologically to 3 parameters: G, which enters mainly in the hydrostatic equilibrium; α_{EM}, which enters in the Coulomb barrier penetration through the Gamow energy; and a composite parameter \({\mathcal C}\), which describes globally the modification of the nuclear reaction rates. The underlying idea is to assume that the power generated per unit volume, ε(r), which determines the luminosity of the star, is proportional to the fudge factor \({\mathcal C}\), which would arise from a modification of the nuclear fusion factor, or equivalently of the cross section. Thus, it assumes that all cross sections are affected in a similar way. The parameter space for which stars can form and for which stable nuclear configurations exist was determined, showing that no fine-tuning seems to be required.
This new system is very promising and will provide new information on the fundamental constants at redshifts smaller than z ∼ 15, where no constraints exist at the moment, even though drawing a robust constraint seems difficult for now. In particular, an underlying limitation arises from the fact that the composition of the interstellar medium is a mixture of ejecta from stars of different masses, and it is not clear which type of stars contributes most to the carbon and oxygen production. Besides, one would need to include rotation and mass loss [181]. As for the Oklo phenomenon, another limitation arises from the complexity of nuclear physics.
3.6 Cosmic Microwave Background
The CMB temperature anisotropies mainly depend on three constants: G, α_{EM} and m_{e}.
In summary, both the temperature of the decoupling and the residual ionization after recombination are modified by a variation of α_{EM} or m_{e}. This was first discussed in [36, 277]. The last scattering surface can roughly be determined by the maximum of the visibility function \(g = \dot \tau \exp ( \tau)\), which measures the differential probability for a photon to be scattered at a given redshift. Increasing α_{EM} shifts g to a higher redshift at which the expansion rate is faster so that the temperature and x_{ e } decrease more rapidly, resulting in a narrower g. This induces a shift of the C_{ ℓ } spectrum to higher multipoles and an increase of the values of the C_{ ℓ }. The first effect can be understood by the fact that pushing the last scattering surface to a higher redshift leads to a smaller sound horizon at decoupling. The second effect results from a smaller Silk damping.
Most studies have introduced those modifications into the RECFAST code [454], including similar equations for the recombination of helium. Our previous analysis shows that the dependences on the fundamental constants have various origins, since the binding energies B_{ i } scale as \({m_{\rm{e}}}\alpha _{{\rm{EM}}}^2\), σ_{ T } as \(\alpha _{{\rm{EM}}}^2m_{\rm{e}}^{-2}\), K as \(m_{\rm{e}}^{-3}\alpha _{{\rm{EM}}}^{-6}\), the ionisation coefficients β as \(\alpha _{{\rm{EM}}}^3\), the transition frequencies as \({m_{\rm{e}}}\alpha _{{\rm{EM}}}^2\), the Einstein coefficients as \({m_{\rm{e}}}\alpha _{{\rm{EM}}}^5\), the decay rates Λ as \({m_{\rm{e}}}\alpha _{{\rm{EM}}}^8\), while \({\mathcal R}\) has a complicated dependence, which roughly reduces to \(\alpha _{{\rm{EM}}}^{-1}m_{\rm{e}}^{-2}\). Note that a change in the fine-structure constant and a change in the mass of the electron are degenerate, with Δα_{EM} ≈ 0.39Δm_{e}, but this degeneracy is broken for multipoles higher than 1500 [36]. In earlier works [244, 277] it was approximated by the scaling \({\mathcal R} \propto \alpha _{{\rm{EM}}}^{2(1 + \xi)}\) with ξ ∼ 0.7.
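These power-law dependences can be collected into a small numerical sketch. The helper below is hypothetical, not part of RECFAST or any released code; the exponents follow the scalings quoted above, with the signs fixed by the standard recombination physics (e.g., σ_T ∝ α_EM^2/m_e^2).

```python
# Sketch (not part of RECFAST): propagate small fractional variations of
# alpha_EM and m_e through the power-law scalings of the recombination
# quantities, d ln Q = p_alpha * dln_alpha + p_me * dln_me.
SCALINGS = {
    # quantity: (exponent of alpha_EM, exponent of m_e)
    "binding_energies_B_i": (2, 1),
    "sigma_T": (2, -2),             # Thomson cross section ~ alpha^2 / m_e^2
    "K": (-6, -3),                  # K ~ lambda_LyAlpha^3 ~ m_e^-3 alpha^-6
    "ionisation_beta": (3, 0),
    "transition_frequencies": (2, 1),
    "einstein_coefficients": (5, 1),
    "decay_rates_Lambda": (8, 1),   # two-photon decay rate ~ m_e alpha^8
}

def fractional_shifts(dln_alpha, dln_me):
    """Linear-order fractional change of each recombination quantity."""
    return {name: p_a * dln_alpha + p_m * dln_me
            for name, (p_a, p_m) in SCALINGS.items()}

# A 0.1% increase of alpha_EM shifts the binding energies by 0.2% and the
# two-photon decay rates by 0.8%.
shifts = fractional_shifts(dln_alpha=1e-3, dln_me=0.0)
```

This makes explicit why the effects of α_EM and m_e are nearly degenerate at linear order: most quantities respond to a single combination of the two shifts.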
The first studies [244, 277] focused on the sensitivity that can be reached by WMAP^{7} and Planck^{8}. They concluded that they should provide a constraint on α_{EM} at recombination, i.e., at a redshift of about z ∼ 1,000, with a typical precision ∣Δα_{EM}/α_{EM}∣ ∼ 10^{−2}–10^{−3}.
The first attempt [21] to actually set a constraint was performed on the first release of data by BOOMERanG and MAXIMA. It concluded that a value of α_{EM} smaller by a few percent in the past was favored, but no definite bound was obtained, mainly due to degeneracies with other cosmological parameters. It was later improved [22] by a joint analysis of BBN and CMB data that assumed that only α_{EM} varies and included 4 cosmological parameters (Ω_{mat}, Ω_{b}, h, n_{ s }), assuming a universe with Euclidean spatial sections, leading to −0.09 < Δα_{EM} < 0.02 at 68% confidence level. A similar analysis [307], describing a variation of the fine-structure constant as an effect on the recombination redshift, modeled to scale as z_{*} = 1080[1 + 2Δα_{EM}/α_{EM}], set the constraint −0.14 < Δα_{EM} < 0.02 at a 2σ level, assuming a spatially flat cosmological model with adiabatic primordial fluctuations. The effect of reionisation was discussed in [350]. These works assume that only α_{EM} is varying, that is, as can be seen from Eqs. (110–116), that the electron mass is kept constant.
With the WMAP first-year data, the bound on the variation of α_{EM} was sharpened [438] to −0.05 < Δα_{EM}/α_{EM} < 0.02, after marginalizing over the remaining cosmological parameters (Ω_{mat}h^{2}, Ω_{b}h^{2}, Ω_{Λ}h^{2}, n_{ s }, α_{ s }, τ), assuming a universe with Euclidean spatial sections. Restricting to a model with a vanishing running of the spectral index (α_{ s } ≡ dn_{ s }/dlnk = 0), it gives −0.06 < Δα_{EM}/α_{EM} < 0.01, at a 95% confidence level. In particular, it shows that a lower value of α_{EM} makes α_{ s } = 0 more compatible with the data. These bounds were obtained without using other cosmological data sets. This constraint was confirmed by the analysis of [259], which obtained −0.097 < Δα_{EM}/α_{EM} < 0.034 with the WMAP-1yr data alone, and −0.042 < Δα_{EM}/α_{EM} < 0.026, at a 95% confidence level, when combined with constraints on the Hubble parameter from the HST Hubble Key project.
The analysis of the WMAP-3yr data allowed this bound to be improved [476] to −0.039 < Δα_{EM}/α_{EM} < 0.010, at a 95% confidence level, assuming (Ω_{mat}, Ω_{b}, h, n_{ s }, z_{re}, A_{ s }) for the cosmological parameters (Ω_{Λ} being derived from the assumption Ω_{ K } = 0, and τ from the reionisation redshift z_{re}) and using both temperature and polarization data (TT, TE, EE).
The WMAP 5-year data were analyzed, in combination with the 2dF galaxy redshift survey, assuming that both α_{EM} and m_{e} can vary and that the universe is spatially Euclidean. Letting 6 cosmological parameters [(Ω_{mat}h^{2}, Ω_{b}h^{2}, Θ, τ, n_{ s }, A_{ s }), Θ being the ratio between the sound horizon and the angular distance at decoupling] and the 2 constants vary, it was concluded [452, 453] that −0.012 < Δα_{EM}/α_{EM} < 0.018 and −0.068 < Δm_{e}/m_{e} < 0.044, the bounds fluctuating slightly depending on the choice of the recombination scenario. A similar analysis [381], not including m_{e}, gave −0.050 < Δα_{EM}/α_{EM} < 0.042, which can be tightened by taking into account a further prior from the HST data. Including polarisation data from ACBAR, QUAD and BICEP, it was also obtained [352] that −0.043 < Δα_{EM}/α_{EM} < 0.038 at 95% C.L., and −0.013 < Δα_{EM}/α_{EM} < 0.015 when including HST data, also at 95% C.L. Let us also emphasize the work of [351], trying to include the variation of the Newton constant by assuming that Δα_{EM}/α_{EM} = QΔG/G, Q being a constant, and the investigation of [380], taking into account α_{EM}, m_{e} and μ, G being kept fixed. Considering (Ω_{mat}, Ω_{b}, h, n_{ s }, τ) for the cosmological parameters, they concluded from the WMAP5 data (TT, TE, EE) that −8.28 × 10^{−3} < Δα_{EM}/α_{EM} < 1.81 × 10^{−3} and −0.52 < Δμ/μ < 0.17.
The analysis of [452, 453] was updated [310] with the WMAP-7yr data, including polarisation and SDSS data. It leads to −0.025 < Δα_{EM}/α_{EM} < −0.003 and 0.009 < Δm_{e}/m_{e} < 0.079 at a 1σ level.
The main limitation of these analyses lies in the fact that the CMB angular power spectrum depends on the evolution of both the background spacetime and the cosmological perturbations. It follows that it depends on the whole set of cosmological parameters as well as on the initial conditions, that is, on the shape of the initial power spectrum, so that the results will always be conditional on the model of structure formation. The constraints on α_{EM} or m_{e} can then be seen mostly as constraints on a delayed recombination. A strong constraint on the variation of α_{EM} can be obtained from the CMB only if the cosmological parameters are independently known. [438] forecasts that the CMB alone can determine α_{EM} to a maximum accuracy of 0.1%.
3.6.1 21 cm
After recombination, the CMB photons are redshifted and their temperature drops as (1 + z). However, the baryons are prevented from cooling adiabatically since the residual amount of free electrons, that can couple the gas to the radiation through Compton scattering, is too small. It follows that the matter decouples thermally from the radiation at a redshift of order z ∼ 200.
Summary of the latest constraints on the variation of fundamental constants obtained from the analysis of cosmological data and more particularly of CMB data. All assume Ω_{ K } = 0.
Constraint (Δα_{EM}/α_{EM} × 10^{2})  Data  Comment  Ref. 

[−9, 2]  BOOMERanG-DASI-COBE + BBN  BBN with α_{EM} only (Ω_{mat}, Ω_{b}, h, n_{ s })  [22] 
[−1.4, 2]  COBE-BOOMERanG-MAXIMA  (Ω_{mat}, Ω_{b}, h, n_{ s })  [307] 
[−5, 2]  WMAP1  (Ω_{mat}h^{2}, Ω_{b}h^{2}, Ω_{Λ}h^{2}, τ, n_{ s }, α_{ s })  [438] 
[−6, 1]  WMAP1  same + α_{ s } = 0  [438] 
[−9.7, 3.4]  WMAP1  (Ω_{mat}, Ω_{b}, h, n_{ s }, τ, m_{e})  [259] 
[−4.2, 2.6]  WMAP1 + HST  same  [259] 
[−3.9, 1.0]  WMAP3 (TT, TE, EE) + HST  (Ω_{mat}, Ω_{b}, h, n_{ s }, z_{re}, A_{ s })  [476] 
[−1.2, 1.8]  WMAP5 + ACBAR + CBI + 2df  (Ω_{mat}h^{2}, Ω_{b}h^{2}, Θ, τ, n_{ s }, A_{ s }, m_{ e })  [452] 
[−1.9, 1.7]  WMAP5 + ACBAR + CBI + 2df  (Ω_{mat}h^{2}, Ω_{b}h^{2}, Θ, τ, n_{ s }, A_{ s }, m_{ e })  [453] 
[−5.0, 4.2]  WMAP5 + HST  (Ω_{mat}h^{2}, Ω_{b}h^{2}, h, τ, n_{ s }, A_{ s })  [381] 
[−4.3, 3.8]  WMAP5 + ACBAR + QUAD + BICEP  (Ω_{mat}h^{2}, Ω_{b}h^{2}, h, τ, n_{ s })  [352] 
[−1.3, 1.5]  WMAP5 + ACBAR + QUAD + BICEP+HST  (Ω_{mat}h^{2}, Ω_{b}h^{2}, h, τ, n_{ s })  [352] 
[−0.83, 0.18]  WMAP5 (TT, TE, EE)  (Ω_{mat}h^{2}, Ω_{b}h^{2}, h, τ, n_{ s }, A_{ s }, m_{e}, μ)  [380] 
[−2.5, −0.3]  WMAP7 + H_{0} + SDSS  (Ω_{mat}h^{2}, Ω_{b}h^{2}, Θ, τ, n_{ s }, A_{ s }, m_{e})  [310] 
It follows [284, 285] that the change in the brightness temperature of the CMB at the corresponding wavelength scales as \({T_{\rm{b}}} \propto {A_{12}}/\nu _{21}^2\), where the Einstein coefficient A_{12} is defined below. Observationally, we can deduce the brightness temperature from the brightness I_{ ν }, that is, the energy received in a given direction per unit area, solid angle and time, defined as the temperature of the blackbody radiation with spectrum I_{ ν }. Thus, k_{B}T_{b} ≃ I_{ ν }c^{2}/2ν^{2}. It has a mean value, \({{\bar T}_{\rm{b}}}({z_{{\rm{obs}}}})\), at various redshifts, where \(1 + {z_{{\rm{obs}}}} = \nu _{21}^{{\rm{today}}}/{\nu _{{\rm{obs}}}}\). Besides, as for the CMB, there will also be fluctuations in T_{b} due to the imprints of the cosmological perturbations on n_{ p } and T_{g}. It follows that we also have access to an angular power spectrum C_{ ℓ }(z_{obs}) at various redshifts (see [329] for details on this computation).
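As a concrete illustration of the two relations just quoted, the frequency-to-redshift mapping and the Rayleigh-Jeans brightness temperature can be written in a few lines (the 70 MHz example frequency is illustrative, not taken from the cited analyses):

```python
# Illustration of 1 + z_obs = nu_21(today)/nu_obs and k_B T_b ~ I_nu c^2/(2 nu^2)
# for the redshifted 21 cm line (Rayleigh-Jeans regime; example numbers only).
C = 2.998e8                   # speed of light, m/s
K_B = 1.381e-23               # Boltzmann constant, J/K
NU_21_TODAY = 1.420405751e9   # rest-frame 21 cm frequency, Hz

def redshift_from_observed_frequency(nu_obs_hz):
    """Redshift of a 21 cm feature observed at frequency nu_obs."""
    return NU_21_TODAY / nu_obs_hz - 1.0

def brightness_temperature(i_nu, nu_hz):
    """T_b from the brightness I_nu (W m^-2 Hz^-1 sr^-1)."""
    return i_nu * C**2 / (2.0 * nu_hz**2 * K_B)

# A 21 cm signal observed at 70 MHz originates from z ~ 19
z = redshift_from_observed_frequency(70e6)
```

This makes explicit why low-frequency instruments (tens of MHz) probe the dark-age redshift band discussed below.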
Both quantities depend on the value of the fundamental constants. Besides the same dependencies as for the CMB, which arise from the Thomson scattering cross section, we have to consider those arising from the collision terms. In natural units, the Einstein coefficient is given by \({A_{12}} = {2 \over 3}\pi {\alpha _{{\rm{EM}}}}\nu _{21}^3m_{\rm{e}}^{-2} \sim 2.869 \times {10^{-15}}{{\rm{s}}^{-1}}\). It follows that it scales as \({A_{12}} \propto g_{\rm{P}}^3{\mu ^3}\alpha _{{\rm{EM}}}^{13}{m_{\rm{e}}}\). The brightness temperature depends on the fundamental constants as \({T_{\rm{b}}} \propto {g_{\rm{P}}}\mu \alpha _{{\rm{EM}}}^5/{m_{\rm{e}}}\). Note that the signal can also be affected by a time variation of the gravitational constant through the expansion history of the universe. [284] (see also [221] for further discussions), focusing only on α_{EM}, showed that this was the dominant effect of a variation of the fundamental constants (the effect on C_{10} is much more complicated to determine but was argued to be much smaller). It was estimated that a single-station telescope like LWA^{9} or LOFAR^{10} can lead to a constraint of the order of Δα_{EM}/α_{EM} ∼ 0.85%, improving to 0.3% for the full LWA. The fundamental challenge for such a measurement is the subtraction of the foreground.
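The scaling T_b ∝ g_p μ α_EM^5/m_e quoted above translates directly into a linearized sensitivity; the sketch below (not from [284]) simply logarithmically differentiates that scaling:

```python
# Linearized sensitivity of the mean 21 cm brightness temperature implied by
# T_b ~ g_p * mu * alpha_EM^5 / m_e (a sketch, not from the cited analyses):
# d ln T_b = d ln g_p + d ln mu + 5 d ln alpha_EM - d ln m_e.
def dln_Tb(dln_gp=0.0, dln_mu=0.0, dln_alpha=0.0, dln_me=0.0):
    """Fractional change of T_b for small fractional changes in the constants."""
    return dln_gp + dln_mu + 5.0 * dln_alpha - dln_me

# A 0.3% variation of alpha_EM alone (the full-LWA sensitivity quoted in the
# text) corresponds to a ~1.5% shift of the mean brightness temperature.
shift = dln_Tb(dln_alpha=3e-3)
```

The factor of 5 in front of Δα_EM/α_EM is why the 21 cm signal is a comparatively sensitive probe of the fine-structure constant.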
The 21 cm absorption signal is available in a band of redshifts typically ranging from z ≲ 1000 to z ∼ 20, which lies between the CMB observation and the formation of the first stars, that is, during the “dark age”. Thus, it offers an interesting possibility to trace the constraints on the evolution of the fundamental constants between the CMB epoch and the quasar absorption spectra.
As for the CMB, the knowledge of the cosmological parameters is a limitation, since a change of 1% in the baryon density or the Hubble parameter implies a 2% (respectively 3%) change in the mean bolometric temperature. The effect on the angular power spectrum has been estimated but still requires an in-depth analysis along the lines of, e.g., [329]. It is motivating, since C_{ ℓ }(z_{obs}) is expected to depend on the correlators of the fundamental constants, e.g., 〈α_{EM}(x, z_{obs})α_{EM}(x′, z_{obs})〉, and thus in principle allows one to study their fluctuations, even though it will also depend on the initial conditions, e.g., the power spectrum, of the cosmological perturbations.
In conclusion, the 21 cm observations open an observational window on the fundamental constants at redshifts ranging typically from 30 to 100, but a full in-depth analysis is still required (see [206, 286] for a critical discussion of this probe).
3.7 Big bang nucleosynthesis
3.7.1 Overview
The amount of ^{4}He produced during the big bang nucleosynthesis is mainly determined by the neutron to proton ratio at the freezeout of the weak interactions that interconvert neutrons and protons. The result of Big Bang nucleosynthesis (BBN) thus depends on G, α_{W}, α_{EM} and α_{S} respectively through the expansion rate, the neutron to proton ratio, the neutronproton mass difference and the nuclear reaction rates, besides the standard parameters such as, e.g., the number of neutrino families.
 1. For T > 1 MeV (t < 1 s), a first stage during which the neutrons, protons, electrons, positrons and neutrinos are kept in statistical equilibrium by the (rapid) weak interactions$$n \leftrightarrow p + {e^-} + {\bar \nu _e},\quad n + {\nu _e} \leftrightarrow p + {e^-},\quad n + {e^ +} \leftrightarrow p + {\bar \nu _e}.$$(119)As long as statistical equilibrium holds, the neutron to proton ratio is$$(n/p) = {{\rm{e}}^{-{Q_{{\rm{np}}}}/{k_{\rm{B}}}T}}$$(120)where Q_{np} ≡ (m_{n} − m_{p})c^{2} = 1.29 MeV. The abundance of the other light elements is given by [409]$${Y_A} = {g_A}{\left({{{\zeta (3)} \over {\sqrt \pi}}} \right)^{A - 1}}{2^{(3A - 5)/2}}{A^{5/2}}{\left[ {{{{k_{\rm{B}}}T} \over {{m_{\rm{N}}}{c^2}}}} \right]^{3(A - 1)/2}}{\eta ^{A - 1}}Y_{\rm{p}}^ZY_{\rm{n}}^{A - Z}{{\rm{e}}^{{B_A}/{k_{\rm{B}}}T}},$$(121)where g_{ A } is the number of degrees of freedom of the nucleus \(_Z^A{\rm{X}}\), m_{N} is the nucleon mass, η the baryon-photon ratio and B_{ A } ≡ (Zm_{p} + (A − Z)m_{n} − m_{ A })c^{2} the binding energy.
 2. Around T ∼ 0.8 MeV (t ∼ 2 s), the weak interactions freeze out at a temperature T_{f} determined by the competition between the weak interaction rates and the expansion rate of the universe, and thus roughly determined by Γ_{w}(T_{f}) ∼ H(T_{f}), that is$$G_{\rm{F}}^2{({k_{\rm{B}}}{T_{\rm{f}}})^5} \sim \sqrt {G{N_\ast}} {({k_{\rm{B}}}{T_{\rm{f}}})^2}$$(122)where G_{F} is the Fermi constant and N_{*} the number of relativistic degrees of freedom at T_{f}. Below T_{f}, the number of neutrons and protons changes only through the neutron β-decay between T_{f} and T_{N} ∼ 0.1 MeV, when p + n reactions proceed faster than their inverse dissociation.
 3. For 0.05 MeV < T < 0.6 MeV (3 s < t < 6 min), the synthesis of light elements occurs only through two-body reactions. This requires the deuteron to be synthesized (p + n → D), and the photon density must be low enough for the photodissociation to be negligible. This happens roughly when$${{{n_{\rm{d}}}} \over {{n_\gamma}}} \sim {\eta ^2}\exp (-{B_D}/{T_{\rm{N}}}) \sim 1$$(123)with η ∼ 3 × 10^{−10}. The abundance of ^{4}He by mass, Y_{p}, is then well estimated by$${Y_{\rm{p}}} \simeq 2{{{{(n/p)}_{\rm{N}}}} \over {1 + {{(n/p)}_{\rm{N}}}}}$$(124)with$${(n/p)_{\rm{N}}} = {(n/p)_{\rm{f}}}\exp (-{t_{\rm{N}}}/{\tau _{\rm{n}}})$$(125)with \({t_{\rm{N}}} \propto {G^{-1/2}}T_{\rm{N}}^{-2}\) and \(\tau _{\rm{n}}^{-1} = 1.636\,G_{\rm{F}}^2(1 + 3g_A^2)m_{\rm{e}}^5/(2{\pi ^3})\), with g_{ A } ≃ 1.26 being the axial/vector coupling of the nucleon. Assuming that \({B_D} \propto \alpha _{\rm{S}}^2\), this gives a dependence \({t_{\rm{N}}}/{\tau _{\rm{n}}} \propto {G^{-1/2}}\alpha _{\rm{S}}^{-4}G_{\rm{F}}^2\).
 4. The abundances of the light elements, Y_{ i }, are then obtained by solving a series of nuclear reactions$${\dot Y_i} = J - \Gamma {Y_i},$$where J and Γ are time-dependent source and sink terms.
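The four steps above can be strung together into a back-of-the-envelope estimate of the helium mass fraction. The numbers below (freeze-out temperature, decay interval) are illustrative round values, not outputs of a BBN code:

```python
import math

# Back-of-the-envelope Y_p following the four steps above (illustrative
# round numbers; a real calculation requires a full BBN code).
Q_NP = 1.29       # neutron-proton mass difference, MeV
T_F = 0.8         # weak freeze-out temperature, MeV (step 2)
TAU_N = 880.0     # neutron lifetime, s
T_N_TIME = 180.0  # rough time at the onset of nucleosynthesis, s (step 3)

# Steps 1-2: equilibrium ratio frozen at T_f, Eq. (120)
n_over_p_f = math.exp(-Q_NP / T_F)          # ~0.2

# Steps 2-3: free-neutron decay between freeze-out and T_N, Eq. (125)
n_over_p_N = n_over_p_f * math.exp(-T_N_TIME / TAU_N)

# Step 3: helium-4 mass fraction, Eq. (124)
Y_p = 2.0 * n_over_p_N / (1.0 + n_over_p_N)
```

With these round numbers the estimate lands near Y_p ≈ 0.28, in the right ballpark of the observed ≈ 0.25, which is all this crude chain can be expected to deliver.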
3.7.2 Constants everywhere…
In complete generality, the effect of varying constants on the BBN predictions is difficult to model because of the intricate structure of QCD and its role in low energy nuclear reactions. Thus, a solution is to proceed in two steps, first by determining the dependencies of the light element abundances on the BBN parameters and then by relating those parameters to the fundamental constants.
The analysis of the previous Section 3.7.1, which was restricted to the helium-4 case, clearly shows that the abundances will depend on: (1) α_{G}, which will affect the Hubble expansion rate at the time of nucleosynthesis in the same way as extra-relativistic degrees of freedom do, so that it modifies the freeze-out temperature T_{f}. This is the only gravitational-sector parameter. (2) τ_{n}, the neutron lifetime, which dictates the free neutron decay and appears in the normalization of the proton-neutron reaction rates. It is the only weak-interaction parameter, and it is related to the Fermi constant G_{F}, or equivalently to the Higgs vev. (3) α_{EM}, the fine-structure constant, which enters in the Coulomb barriers of the reaction rates through the Gamow factor and in all the binding energies. (4) Q_{np}, the neutron-proton mass difference, which enters in the neutron-proton ratio. We also have a dependence on (5) m_{N} and m_{e} and (6) the binding energies.
Clearly all these parameters are not independent, and their relations are often model-dependent. If we focus on helium-4, its abundance mainly depends on Q_{np}, T_{f} and T_{N} (and hence mainly on the neutron lifetime, τ_{n}). Early studies (see Section III.C.2 of FVC [500]) generally focused on one of these parameters. For instance, Kolb et al. [295] calculated the dependence of primordial ^{4}He on G, G_{F} and Q_{np} to deduce that the helium-4 abundance was mostly sensitive to the change in Q_{np} and that the other abundances were less sensitive to the value of Q_{np}, mainly because ^{4}He has a larger binding energy; its abundance is less sensitive to the weak reaction rates and more to the parameters fixing the value of (n/p). To extract the constraint on the fine-structure constant, they decomposed Q_{np} as Q_{np} = α_{EM}Q_{ α } + βQ_{ β }, where the first term represents the electromagnetic contribution and the second part corresponds to all non-electromagnetic contributions. Assuming that Q_{ α } and Q_{ β } are constant and that the electromagnetic contribution is the dominant part of Q, they deduced that ∣Δα_{EM}/α_{EM}∣ < 10^{−2}. Campbell and Olive [77] kept track of the changes in T_{f} and Q_{np} separately and deduced that \({{\Delta {Y_{\rm{P}}}} \over {{Y_{\rm{P}}}}} \simeq {{\Delta {T_{\rm{f}}}} \over {{T_{\rm{f}}}}} - {{\Delta {Q_{{\rm{np}}}}} \over {{Q_{{\rm{np}}}}}}\), while more recently the analysis of [308] focused on α_{EM} and v.
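The Campbell-Olive relation can be checked numerically against the equilibrium estimate of Section 3.7.1: perturbing Q_np and T_f in Y_p = 2x/(1+x) with x = exp(−Q_np/T_f) gives logarithmic sensitivities of order unity with opposite signs (a sketch with round numbers, not the cited computation):

```python
import math

# Numerical check of DeltaY_p/Y_p ~ DeltaT_f/T_f - DeltaQ_np/Q_np using the
# equilibrium estimate Y_p = 2x/(1+x), x = exp(-Q_np/T_f) (round numbers).
def Yp(Q=1.29, Tf=0.8):
    """Helium mass fraction from the frozen equilibrium (n/p) ratio."""
    x = math.exp(-Q / Tf)
    return 2.0 * x / (1.0 + x)

eps = 1e-3  # fractional perturbation for the finite differences

# Logarithmic derivatives d ln Y_p / d ln Q_np and d ln Y_p / d ln T_f;
# analytically both equal -/+ (Q/T_f)/(1+x) ~ -/+ 1.3.
dlnY_dQ = (Yp(Q=1.29 * (1 + eps)) - Yp()) / (Yp() * eps)
dlnY_dTf = (Yp(Tf=0.8 * (1 + eps)) - Yp()) / (Yp() * eps)
```

The two coefficients are equal and opposite and of order unity, which is the content of the quoted linear relation.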
Let us now see how the effects of all these parameters are accounted for in BBN codes.
The focus then fell on the deuterium binding energy, B_{ D }. Flambaum and Shuryak [207, 208, 158, 157] illustrated the sensitivity of the light element abundances to B_{ D }. Its value mainly sets the beginning of the nucleosynthesis, that is, T_{N}, since the temperature must be low enough for the photodissociation of the deuterium to be negligible (this is at the origin of the deuterium bottleneck). The importance of B_{ D } is easily understood from the fact that the equilibrium abundance of deuterium and the reaction rate p(n, γ)D depend exponentially on B_{ D }, and from the fact that the deuterium is in a shallow bound state. Focusing on the T_{N}-dependence, it was concluded [207] that ΔB_{ D }/B_{ D } < 0.075.
This shows that the situation is more complex and that one cannot reduce the analysis to a single varying parameter. Many studies then tried to determinate the sensitivity to the variation of many independent parameters.
This was generalized by Landau et al. [309] up to lithium-7, considering the parameters {α_{EM}, G_{F}, Λ_{QCD}, Ω_{ b }h^{2}} and assuming G constant, where the variation of τ_{n} and of the masses were tied to these parameters, but the effect on the binding energies was not considered.
This analysis was extended [146] to incorporate the effect of 13 independent BBN parameters including the parameters considered before plus the binding energies of deuterium, tritium, helium3, helium4, lithium6, lithium7 and beryllium7. The sensitivity of the light element abundances to the independent variation of these parameters is summarized in Table I of [146]. These BBN parameters were then related to the same 6 “fundamental” parameters used in [364].
All these analyses demonstrate that the effects of the BBN parameters on the light element abundances are now under control. They have been implemented in BBN codes and most results agree, as well as with semi-analytical estimates. As long as these parameters are assumed to vary independently, no constraints sharper than 10^{−2} can be set. One should also not forget to take into account the standard parameters of the BBN computation, such as η and the effective number of relativistic particles.
3.7.3 From BBN parameters to fundamental constants
To reduce the number of parameters, we need to relate the BBN parameters to more fundamental ones, keeping in mind that this can usually be done only in a model-dependent way. We shall describe some of the relations that have been used in many studies. They mainly concern Q_{np}, τ_{n} and B_{ D }.
At lowest order, all dimensional parameters of QCD, e.g., masses, nuclear energies, etc., are to a good approximation simply proportional to some power of Λ_{QCD}. One needs to go beyond such a description and take the effects of the masses of the quarks into account.
Pion mass. A first route is to use the dependence of the binding energy on the pion mass [188, 38], which is related to the u and d quark masses by$$m_\pi ^2 = {m_{\rm{q}}}\langle \bar uu + \bar dd\rangle f_\pi ^{-2} \simeq \hat m{\Lambda _{{\rm{QCD}}}},$$where m_{q} ≡ ½(m_{u} + m_{d}), assuming that the leading order of \(\left\langle {\bar uu + \bar dd} \right\rangle f_\pi ^{-2}\) depends only on Λ_{QCD}, f_{ π } being the pion decay constant. This dependence was parameterized [553] as$${{\Delta {B_D}} \over {{B_D}}} = -r{{\Delta {m_\pi}} \over {{m_\pi}}},$$where r is a fitting parameter found to be between 6 [188] and 10 [38]. Prior to this result, the analysis of [207] provided two computations of this dependence, which respectively lead to r = −3 and r = 18, while, following the same lines, [88] got r = 0.082. [364], following the computations of [426], adds an electromagnetic contribution −0.0081Δα_{EM}/α_{EM}, so that$${{\Delta {B_D}} \over {{B_D}}} = -{r \over 2}{{\Delta {m_{\rm{q}}}} \over {{m_{\rm{q}}}}} - 0.0081{{\Delta {\alpha _{{\rm{EM}}}}} \over {{\alpha _{{\rm{EM}}}}}},$$(132)but this latter contribution has not been included in other works.
Sigma model. In the framework of the Walecka model, where the potential for the nuclear forces keeps only the σ and ω meson exchanges,$$V = -{{g_s^2} \over {4\pi r}}\exp (-{m_\sigma}r) + {{g_v^2} \over {4\pi r}}\exp (-{m_\omega}r),$$where g_{ s } and g_{ v } are two coupling constants. Describing σ as a SU(3) singlet state, its mass was related to the mass of the strange quark. In this way one can hope to take into account the effect of the strange quark, both on the nucleon mass and the binding energy. In a second step B_{ D } is related to the meson and nucleon masses by$${{\Delta {B_D}} \over {{B_D}}} = -48{{\Delta {m_\sigma}} \over {{m_\sigma}}} + 50{{\Delta {m_\omega}} \over {{m_\omega}}} + 6{{\Delta {m_{\rm{N}}}} \over {{m_{\rm{N}}}}}$$so that ΔB_{ D }/B_{ D } ≃ −17Δm_{s}/m_{s} [208]. Unfortunately, a complete treatment of the dependence of all the nuclear quantities on m_{s} has not been performed yet.
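Eq. (132) is simple enough to evaluate directly; the sketch below shows the spread in the predicted deuterium binding-energy shift induced by the quoted range of the fitting parameter r (the 1% quark-mass variation is an illustrative input, not a measured value):

```python
# Spread in the deuterium binding-energy shift from Eq. (132),
# Delta(B_D)/B_D = -(r/2) * Delta(m_q)/m_q - 0.0081 * Delta(alpha)/alpha,
# for the quoted fit range r in [6, 10] (illustrative inputs only).
def dBD_over_BD(dln_mq, dln_alpha=0.0, r=9.0):
    """Fractional shift of B_D for given quark-mass and alpha_EM variations."""
    return -(r / 2.0) * dln_mq - 0.0081 * dln_alpha

# A 1% increase of m_q lowers B_D by 3% to 5% depending on r:
lo = dBD_over_BD(0.01, r=6.0)   # -0.03
hi = dBD_over_BD(0.01, r=10.0)  # -0.05
```

The factor-of-two spread from r alone is comparable to the ΔB_D/B_D < 0.075 bound quoted above, which illustrates why the pion-mass route dominates the model dependence of these constraints.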
These analyses allow one to reduce all the BBN parameters to the physical constants (α_{EM}, v, m_{e}, m_{d} − m_{u}, m_{q}) and G, which is not affected by this discussion. This set can be further reduced, since all the masses can be expressed in terms of v as m_{ i } = h_{ i }v, where the h_{ i } are Yukawa couplings.
To go further, one needs to make more assumptions, such as grand unification, or relate the Yukawa coupling of the top to v by assuming that the weak scale is determined by dimensional transmutation [104], or assume that the variation of the constants is induced by a string dilaton [77]. At each step one gets more stringent constraints, which can reach the 10^{−4} [146] to 10^{−5} [104] level, but which are indeed more model-dependent!
3.7.4 Conclusion
Primordial nucleosynthesis offers the possibility to test almost all the fundamental constants of physics at a redshift of z ∼ 10^{8}. This makes it very rich, but the effect of each constant is correspondingly more difficult to disentangle. The effect of the BBN parameters has been quantified with precision, and they can typically be constrained at the 10^{−2} level; in particular, it seems that the most sensitive parameter is the deuterium binding energy.
The link with more fundamental parameters is better understood, but the dependence of the deuterium binding energy still leaves some uncertainties, and a good description of the effect of the strange quark mass is missing.
We have not considered the variation of G in this section. Its effect is disconnected from the other parameters. Let us just stress that assessing the BBN sensitivity to G by just modifying its value may be misleading. In particular, G can vary a lot during the electron-positron annihilation, so that the BBN constraints can in general not be described by an effective speed-up factor [105, 134].
4 The Gravitational Constant
The gravitational constant was the first constant whose constancy was questioned [155]. From a theoretical point of view, theories with a varying gravitational constant can be designed to satisfy the equivalence principle in its weak form but not in its strong form [540] (see also Section 5). Most theories of gravity that violate the strong equivalence principle predict that the locally measured gravitational constant may vary with time.
The value of the gravitational constant is G = 6.674 28(67) × 10^{−11} m^{3} kg^{−1} s^{−2} so that its relative standard uncertainty fixed by the CODATA^{11} in 2006 is 0.01%. Interestingly, the disparity between different experiments led, in 1998, to a temporary increase of this uncertainty to 0.15% [241], which demonstrates the difficulty in measuring the value of this constant. This explains partly why the constraints on the time variation are less stringent than for the other constants.
A variation of the gravitational constant, being a pure gravitational phenomenon, does not affect local physics, such as, e.g., atomic transitions or nuclear physics. In particular, it is equivalent to state that the masses of all particles vary in the same way so that their ratios remain constant. Similarly, all absorption lines will be shifted in the same way. It follows that most constraints are obtained from systems in which gravity is non-negligible, such as the motion of the bodies of the solar system, and astrophysical and cosmological systems. They mostly rely on the comparison of a gravitational time scale, e.g., the period of an orbit, with a non-gravitational time scale. It follows that in general the constraints assume that the values of the other constants are fixed. Taking their variation into account would add degeneracies and make the constraints cited below less stringent.
We refer to Section IV of FVC [500] for earlier constraints based, e.g., on the determination of the Earth surface temperature, which roughly scales as \({G^{2.25}}M_ \odot ^{1.75}\) and gives a constraint of the order of ∣ΔG/G∣ < 0.1 [224], or on the estimation of the Earth radius at different geological epochs. We also emphasize that constraints on the variation of G are meant to be constraints on the dimensionless parameter α_{G}.
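As a rough illustration of how such scalings yield bounds, the stated dependence of the Earth's surface temperature, T ∝ G^{2.25}M_⊙^{1.75}, can be linearized to convert a tolerated fractional temperature change into a bound on ΔG/G; the 22.5% temperature tolerance below is an illustrative input, not a quoted datum.

```python
# Sketch: T ∝ G^2.25 implies dT/T ≈ 2.25 dG/G, so a fractional temperature
# tolerance translates linearly into an allowed |ΔG/G|.

def dG_bound_from_temperature(dT_over_T, exponent=2.25):
    """Allowed |ΔG/G| given a tolerated fractional change of surface temperature."""
    return dT_over_T / exponent

# An (illustrative) 22.5% temperature tolerance gives |ΔG/G| < 0.1,
# the order of magnitude of the bound quoted above [224].
print(dG_bound_from_temperature(0.225))
```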
4.1 Solar systems constraints
Monitoring the orbits of the various bodies of the solar system offers a possibility to constrain deviations from general relativity, and in particular the time variation of G. This amounts to comparing a gravitational time scale (related to the orbital motion) with an atomic time scale, and it is thus assumed that the variation of the atomic constants is negligible over the duration of the experiment.
4.2 Pulsar timing
Contrary to the solar system case, the dependence of the gravitational binding energy cannot be neglected while computing the time variation of the period. Here two approaches can be followed: either one sticks to a model (e.g., scalar-tensor gravity) and computes all the effects in this model, or one follows a more phenomenological approach and tries to set model-independent bounds.
Recently, it was argued [266, 432] that a variation of G would induce a departure of the neutron star matter from β-equilibrium, due to the changing hydrostatic equilibrium. This would force non-equilibrium β-processes to occur, which release energy that is invested partly in neutrino emission and partly in heating the stellar interior. Eventually, the star arrives at a stationary state in which the temperature remains nearly constant, as the forcing through the change of G is balanced by the ongoing reactions. Comparing these predictions with the surface temperature of the nearest millisecond pulsar, PSR J0437−4715, inferred from ultraviolet observations, two upper limits were obtained: ∣Ġ/G∣ < 2 × 10^{−10} yr^{−1} if direct Urca reactions operating in the neutron-star core are allowed, and ∣Ġ/G∣ < 4 × 10^{−12} yr^{−1} considering only modified Urca reactions. This was extended in [302], taking into account the correlation between the surface temperatures and the radii of some old neutron stars, to get ∣Ġ/G∣ < 2.1 × 10^{−11} yr^{−1}.
4.3 Stellar constraints
Early works, see Section IV.C of FVC [500], studied the solar evolution in the presence of a time-varying gravitational constant, concluding that under the Dirac hypothesis the original nuclear resources of the Sun would have been burned by now. This results from the fact that an increase of the gravitational constant is equivalent to an increase of the star's density (because of the Poisson equation).
The idea of using stellar evolution to constrain the possible value of G was originally proposed by Teller [487], who stressed that the evolution of a star was strongly dependent on G. The luminosity of a main sequence star can be expressed as a function of Newton’s gravitational constant and its mass by using homology relations [224, 487]. In the particular case that the opacity is dominated by free-free transitions, Gamow [224] found that the luminosity of the star is given approximately by L ∝ G^{7.8}M^{5.5}. In the case of the Sun, this would mean that for higher values of G, the burning of hydrogen will be more efficient and the star evolves more rapidly, therefore we need to increase the initial content of hydrogen to obtain the present observed Sun. In a numerical test of the previous expression, Degl’Innocenti et al. [140] found that low-mass stars evolving from the Zero Age Main Sequence to the red giant branch satisfy L ∝ G^{5.6}M^{4.7}, which agrees with the numerical results to within 10%, consistent with the fact that Thomson scattering contributes significantly to the opacity inside such stars. Indeed, in the case of the opacity being dominated by pure Thomson scattering, the luminosity of the star is given by L ∝ G^{4}M^{3}. It follows from the previous analysis that the evolution of the star on the main sequence is highly sensitive to the value of G.
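The homology relations above can be linearized to compare how strongly the luminosity responds to a small change of G in the three opacity regimes quoted (a numerical sketch, not from the cited works):

```python
# Homology scalings L ∝ G^a M^b quoted above; at fixed mass the linearized
# luminosity response is dL/L ≈ a * dG/G.
EXPONENTS = {
    "free-free (Gamow)": 7.8,   # opacity dominated by free-free transitions
    "low-mass fit": 5.6,        # numerical fit of Degl'Innocenti et al.
    "thomson": 4.0,             # pure Thomson scattering opacity
}

def delta_L_over_L(dG_over_G, regime):
    """Linearized fractional luminosity change for a small dG/G."""
    return EXPONENTS[regime] * dG_over_G

# A 1% change of G shifts the luminosity by 4-8% depending on the opacity:
for regime in EXPONENTS:
    print(regime, delta_L_over_L(0.01, regime))
```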
The driving idea behind the stellar constraints is that a secular variation of G leads to a variation of the gravitational interaction. This would affect the hydrostatic equilibrium of the star and in particular its pressure profile. In the case of non-degenerate stars, the temperature, being the only control parameter, will adjust to compensate the modification of the intensity of gravity. This will then affect the nuclear reaction rates, which are very sensitive to the temperature, and thus the nuclear time scales associated with the various processes. It follows that the main stages of stellar evolution, and in particular the lifetimes of the various stars, will be modified. As we shall see, basically two types of methods have been used: a first in which one relates the variation of G to some physical characteristic of a star (luminosity, effective temperature, radius), and a second in which only a statistical measurement of the change of G can be inferred. The first class of methods is more reliable and robust but is usually restricted to nearby stars. Note also that these methods usually require a precise determination of the distance of the star, which may itself depend on G.
4.3.1 Ages of globular clusters
The effect of a possible time dependence of G on luminosity has been studied in the case of globular cluster HR diagrams, but has not yielded any stronger constraints than those relying on celestial mechanics.
4.3.2 Solar and stellar seismology
A side effect of the change of luminosity is a change in the depth of the convection zone, so that the location of the inner edge of the convective zone changes. This induces a modification of the vibration modes of the star, particularly of the acoustic modes, i.e., p-modes [141].
4.3.3 Late stages of stellar evolution and supernovae
A variation of G can influence the white dwarf cooling and the light curves of Type Ia supernovae.
The result depends on the details of the cooling theory, on whether the C/O white dwarf is stratified or not, and on the hypothesis made on the age of the galactic disk. For instance, with no stratification of the C/O binary mixture, one would require Ġ/G = −(2.5 ± 0.5) × 10^{−11} yr^{−1} if the age of the galactic disk in the solar neighborhood is 8 Gyr (i.e., one would require a variation of G to explain the data). In the case of the standard hypothesis of an age of 11 Gyr, one obtains 0 ≤ −Ġ/G < 3 × 10^{−11} yr^{−1}.
The late stages of stellar evolution are governed by the Chandrasekhar mass, \({M_{{\rm{Ch}}}} \sim {(\hbar c/G)^{3/2}}m_{\rm{n}}^{- 2}\), mainly determined by the balance between the Fermi pressure of a degenerate electron gas and gravity.
Simple analytical models of the light curves of Type Ia supernovae predict that the peak of luminosity is proportional to the mass of nickel synthesized. In a good approximation, it is a fixed fraction of the Chandrasekhar mass. In models allowing for a varying G, this would induce a modification of the luminosity distanceredshift relation [227, 232, 435]. However, it was shown that this effect is small. Note that it will be degenerate with the cosmological parameters. In particular, the Hubble diagram is sensitive to the whole history of G(t) between the highest redshift observed and today so that one needs to rely on a better defined model, such as, e.g., scalartensor theory [435] (the effect of the Fermi constant was also considered in [194]).
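Since M_Ch ∝ G^{−3/2} and the peak luminosity is, in this approximation, a fixed fraction of M_Ch, the linearized effect of a small change of G can be sketched numerically (a rough illustration, not the full light-curve modeling of [227, 232, 435]):

```python
# Sketch: Chandrasekhar mass M_Ch ∝ G^{-3/2}, so a small fractional change
# of G changes M_Ch (and hence the SNIa peak luminosity, taken here to be a
# fixed fraction of M_Ch) by dL/L ≈ -1.5 dG/G.

def delta_mch(dG_over_G):
    """Linearized fractional change of the Chandrasekhar mass."""
    return -1.5 * dG_over_G

def delta_peak_luminosity(dG_over_G):
    """Peak luminosity ∝ nickel mass ∝ M_Ch in this approximation."""
    return delta_mch(dG_over_G)

# A 1% larger G dims the peak by ≈ 1.5%:
print(delta_peak_luminosity(0.01))
```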
4.3.4 New developments
It has recently been proposed that, since a variation of G induces a modification of a binary’s binding energy, it should affect the gravitational-wave luminosity, hence leading to corrections in the chirping frequency [554]. For instance, it was estimated that a LISA observation of an equal-mass inspiral event with total redshifted mass of 10^{5} M_{⊙} for three years should be able to measure Ġ/G at the time of merger to better than 10^{−11} yr^{−1}. This method paves the way to constructing constraints in a large band of redshifts as well as in different directions in the sky, which would be an invaluable constraint for many models.
More speculative is the idea [25] that a variation of G can lead a neutron star to enter the regime where strange or hybrid stars are the true ground state. This would be associated with gamma-ray bursts, which are claimed to be able to reach a sensitivity of 10^{−17} yr^{−1} on the time variation of G.
4.4 Cosmological constraints
Cosmological observations are more difficult to use in order to set constraints on the time variation of G. In particular, they require some hypothesis on the whole history of G as a function of time; moreover, as the variation of G reflects an extension of general relativity, it requires modifying all the equations describing the evolution (of the universe and of the large-scale structure) in a consistent way. We refer to [504, 502, 506] for a discussion of the use of cosmological data to constrain deviations from general relativity.
4.4.1 Cosmic microwave background
 1.
The variation of G modifies the Friedmann equation and therefore the age of the Universe (and, hence, the sound horizon). For instance, if G is larger at earlier time, the age of the Universe is smaller at recombination, so that the peak structure is shifted towards higher angular scales.
 2.
The amplitude of the Silk damping is modified. At small scales, viscosity and heat conduction in the photonbaryon fluid produce a damping of the photon perturbations. The damping scale is determined by the photon diffusion length at recombination, and therefore depends on the size of the horizon at this epoch, and hence, depends on any variation of the Newton constant throughout the history of the Universe.
 3.
The thickness of the last scattering surface is modified. In the same vein, the duration of recombination is modified by a variation of the Newton constant as the expansion rate is different. It is well known that CMB anisotropies are affected on small scales because the last scattering “surface” has a finite thickness. The net effect is to introduce an extra, roughly exponential, damping term, with the cut-off length being determined by the thickness of the last scattering surface. When translating redshift into time (or length), one has to use the Friedmann equations, which are affected by a variation of the Newton constant. The relevant quantity to consider is the visibility function g. In the limit of an infinitely thin last scattering surface, the optical depth τ goes from ∞ to 0 at the recombination epoch. For standard cosmology, it drops from a large value to a much smaller one, and hence the visibility function still exhibits a peak, but it is much broader.
In full generality, the effect of a variation of G on the CMB temperature anisotropies depends on many factors: (1) the modification of the background equations and of the evolution of the universe, (2) the modification of the perturbation equations, (3) whether the scalar field inducing the time variation of G is negligible or not compared to the other matter components, (4) the time profile of G, which has to be determined consistently with the other equations of evolution. This explains why it is very difficult to state a definitive constraint. For instance, in the case of scalar-tensor theories (see below), one has two arbitrary functions that dictate the variation of G. As can be seen, e.g., from [435, 378], the profiles and effects on the CMB can be very different and difficult to compare. Moreover, the effects described above are degenerate with a variation of the cosmological parameters.
In the case of Brans-Dicke theory, one just has a single constant parameter ω_{ BD } characterizing the deviation from general relativity and the time variation of G. Thus, it is easier to compare the different constraints. Chen and Kamionkowski [94] showed that CMB experiments such as WMAP will be able to constrain these theories for ω_{BD} < 100 if all parameters are to be determined by the same CMB experiment, ω_{BD} < 500 if all parameters are fixed but the CMB normalization, and ω_{BD} < 800 if one uses the polarization. For the Planck mission these numbers are, respectively, 800, 2500 and 3200. [2] concluded from the analysis of WMAP, ACBAR, VSA and CBI, and galaxy power spectrum data from 2dF, that ω_{BD} > 120, in agreement with the former analysis of [378]. An analysis [549] indicates that the WMAP 5-yr data and the ‘all CMB data’ both favor a slightly nonzero (positive) Ġ/G, but with the addition of the SDSS power spectrum data the best-fit value is back to zero; they conclude that −0.083 < ΔG/G < 0.095 between recombination and today, which corresponds to −1.75 × 10^{−12} yr^{−1} < Ġ/G < 1.05 × 10^{−12} yr^{−1}.
From a more phenomenological perspective, some works modeled the variation of G with time in a purely ad hoc way, for instance [89] by assuming a linear evolution with time or a step function.
4.4.2 BBN
As explained in detail in Section 3.8.1, changing the value of the gravitational constant affects the freeze-out temperature T_{f}. A larger value of G corresponds to a higher expansion rate. This rate is determined by the combination Gρ, and in the standard case the Friedmann equations imply that Gρt^{2} is constant. The density ρ is determined by the number N_{*} of relativistic particles at the time of nucleosynthesis, so that nucleosynthesis allows one to put a bound on the number of neutrinos N_{ν}. Equivalently, assuming the number of neutrinos to be three leads to the conclusion that G has not varied by more than 20% since nucleosynthesis. Allowing for a change both in G and N_{ν}, however, opens a wider range of variation. Contrary to the fine-structure constant, the role of G is less involved.
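Since H² ∝ Gρ and ρ ∝ N_*, a shift of G is degenerate with a shift of the number of relativistic species. The standard counting before e± annihilation (photons, e± pairs, N_ν neutrino species) makes this degeneracy easy to quantify (a sketch under the standard g_* counting, not from a specific cited analysis):

```python
import math

def g_star(n_nu):
    """Relativistic degrees of freedom before e+e- annihilation:
    photons (2) + e± pairs (7/8 · 4) + n_nu neutrino species (7/8 · 2 each)."""
    return 2 + 7/8 * (4 + 2 * n_nu)

def equivalent_dG_over_G(n_nu_extra):
    """ΔG/G that mimics n_nu_extra extra neutrino species, since H² ∝ G·g_*."""
    return n_nu_extra * (7/4) / g_star(3)

def speedup(dG_over_G):
    """Speed-up factor ξ = H/H_GR = sqrt(1 + ΔG/G) at fixed density."""
    return math.sqrt(1 + dG_over_G)

print(g_star(3))                 # 10.75 for three neutrino species
print(equivalent_dG_over_G(1))   # one extra neutrino ~ 16% shift of G
```

This shows why the quoted ∼ 20% bound on ΔG/G is of the same order as allowing roughly one extra neutrino species.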
The effect of a varying G can be described, in its most simple but still useful form, by introducing a speed-up factor, ξ = H/H_{GR}, that arises from the modification of the value of the gravitational constant during BBN. Other approaches considered the full dynamics of the problem but restricted themselves to the particular class of Jordan-Fierz-Brans-Dicke theories [1, 16, 26, 84, 102, 128, 441, 551] (Casas et al. [84] concluded from the study of helium and deuterium that ω_{BD} > 380 when N_{ν} = 3 and ω_{BD} > 50 when N_{ν} = 2), to a massless dilaton with a quadratic coupling [105, 106, 134, 446] or to a general massless dilaton [455]. It should be noted that a combined analysis of BBN and CMB data was investigated in [113, 292]. The former considered G constant during BBN while the latter focused on a non-minimal quadratic coupling and a runaway potential. It was concluded that BBN, in conjunction with the WMAP determination of η, sets the constraint that ΔG/G be smaller than 20%. However, we stress that the dynamics of the field can modify the CMB results (see Section 4.4.1), so that one needs to be careful when inferring Ω_{b} from WMAP unless the scalar-tensor theory has converged close to general relativity at the time of decoupling.
In early studies, Barrow [26] assumed that G ∝ t^{−n} and obtained from the helium abundances that −5.9 × 10^{−3} < n < 7 × 10^{−3}, which implies that ∣Ġ/G∣ < (2 ± 9.3) h × 10^{−12} yr^{−1}, assuming a flat universe. This corresponds in terms of the BransDicke parameter to ω_{BD} > 25. Yang et al. [551] included the deuterium and lithium to improve the constraint to n < 5 × 10^{−3}, which corresponds to ω_{BD} > 50. It was further improved by Rothman and Matzner [441] to ∣n∣ < 3 × 10^{−3} implying ∣Ġ/G∣ < 1.7 × 10^{−13} yr^{−1}. Accetta et al. [1] studied the dependence of the abundances of D, ^{3}He, ^{4}He and ^{7}Li upon the variation of G and concluded that −0.3 < ΔG/G < 0.4, which roughly corresponds to ∣Ġ/G∣ < 9 × 10^{−13} yr^{−1}. All these investigations assumed that the other constants are kept fixed and that physics is unchanged. Kolb et al. [295] assumed a correlated variation of G, α_{EM} and G_{F} and got a bound on the variation of the radius of the extra dimensions.
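The power-law ansatz G ∝ t^{−n} used in these early studies converts a bound on the index n into a present-day rate via Ġ/G = −n/t_0. A minimal sketch (the age t_0 is an assumption; each cited work adopts its own value, which is why the quoted rates differ slightly from this illustration):

```python
# Sketch: for G ∝ t^{-n}, today's rate is |Ġ/G| = n / t_0, so a bound on the
# power-law index n maps directly onto a rate bound.

def gdot_bound(n_max, t0_yr):
    """|Ġ/G| bound in yr^-1 from a bound |n| < n_max, for an assumed age t0."""
    return n_max / t0_yr

# Rothman-Matzner bound |n| < 3e-3 with an assumed 13.8 Gyr age:
print(gdot_bound(3e-3, 13.8e9))   # of order 1e-13 yr^-1
```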
5 Theories With Varying Constants
As explained in the introduction, Dirac postulated that G varies as the inverse of the cosmic time. Such an hypothesis is indeed not a theory, since the evolution of G with time is postulated rather than derived from an equation of evolution^{12} consistent with the other field equations, which have to take into account that G is no longer a constant (in particular, in a Lagrangian formulation one needs to take into account that G is no longer constant when performing the variation).
Fierz [195] realized that with such a Lagrangian, atomic spectra will be spacetime-dependent, and he proposed to fix η to the value −1 to prevent such a spacetime dependence. This led to the definition of a one-parameter (ξ) class of scalar-tensor theories in which only G is assumed to be a dynamical field. This was then further explored by Brans and Dicke [67] (with the change of notation ξ → ω). In this Jordan-Fierz-Brans-Dicke theory the gravitational constant is replaced by a scalar field, which can vary both in space and time. It follows that, for cosmological solutions, G ∝ t^{−n} with n^{−1} = 2 + 3ω_{BD}/2. Thus, Einstein’s gravity is recovered when ω_{BD} → ∞. This kind of theory was further generalized to obtain various functional dependencies for G in the formalism of scalar-tensor theories of gravitation (see, e.g., Damour and Esposito-Farèse [124] or Will [540]).
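The relation n^{−1} = 2 + 3ω_BD/2 quoted above makes the recovery of Einstein gravity at large ω_BD explicit; a minimal numerical sketch:

```python
# Sketch: power-law index n in G ∝ t^{-n} for matter-era Brans-Dicke cosmology,
# from the relation 1/n = 2 + 3*omega_BD/2 quoted in the text.

def bd_index(omega_bd):
    """Index n such that G ∝ t^{-n}; n → 0 (constant G) as omega_BD → ∞."""
    return 1.0 / (2.0 + 1.5 * omega_bd)

# G drifts more and more slowly as omega_BD grows:
for w in (10, 100, 1e4):
    print(w, bd_index(w))
```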
5.1 Introducing new fields: generalities
5.1.1 The example of scalartensor theories
This illustrates the main features that will appear in any such models: (i) new dynamical fields appear (here a scalar field), (ii) some constant will depend on the value of this scalar field (here G is a function of the scalar field). It follows that the Einstein equations will be modified and that there will exist a new equation dictating the propagation of the new degree of freedom.
The example of scalartensor theories is also very illustrative to show how deviation from general relativity can be fairly large in the early universe while still being compatible with solar system constraints. It relies on the attraction mechanism toward general relativity [130, 131].
It follows that the deviation from general relativity remains constant during the radiation era (up to threshold effects in the early universe [108, 134] and quantum effects [85]) and the theory is then attracted toward general relativity during the matter era. Note that it implies that postulating a linear or inverse variation of G with cosmic time is actually not realistic in this class of models. Since the theory is fully defined, one can easily compute various cosmological observables (late time dynamics [348], CMB anisotropy [435], weak lensing [449], BBN [105, 106, 134]) in a consistent way and confront them with data.
5.1.2 Making other constants dynamical
This example shows that we cannot blindly couple a field to, e.g., the Faraday tensor to make the fine-structure constant dynamical, and that some mechanism for reconciling this variation with local constraints, and in particular the universality of free fall, will be needed.
5.2 Highenergy theories and varying constants
5.2.1 KaluzaKlein
In the models by Kaluza [269] and Klein [291] the 5-dimensional spacetime was compactified by assuming that one spatial extra dimension is a circle S^{1} of radius R_{KK}. It follows that any field χ(x^{ μ }, y) can be Fourier transformed along the compact dimension (with coordinate y), so that, from a 4-dimensional point of view, it gives rise to a tower of fields χ^{(n)}(x^{ μ }) of mass m_{n} = n/R_{KK}. At energies small compared to \(R_{KK}^{- 1}\), only the y-independent part of the field remains and the physics looks 4-dimensional.
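The Fourier decomposition above can be sketched as a simple mass tower (natural units ħ = c = 1; the numerical scale is illustrative):

```python
# Sketch of the Kaluza-Klein tower: a field on S^1 of radius R decomposes into
# 4-d modes chi^(n) of mass m_n = n / R (natural units).

def kk_masses(inv_radius, n_max):
    """Masses of the first n_max+1 KK modes for compactification scale 1/R."""
    return [n * inv_radius for n in range(n_max + 1)]

# Only the massless n = 0 mode survives at energies E << 1/R:
print(kk_masses(1.0, 4))
```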
In such a framework the variation of the gauge couplings and of the gravitational constant arises from the variation of the size of the extra dimensions, so that one can derive stronger constraints than by assuming independent variations, but at the expense of being more model-dependent. Let us mention the works by Marciano [345] and Wu and Wang [550], in which the structure constants at low energy are obtained through the renormalization group, and the work by Veneziano [515] for a toy model in D ≥ 4 dimensions, endowed with an invariant UV cut-off Λ and containing a large number of non-self-interacting matter species.
Ref. [295] used the variation (173) to constrain the time variation of the radius of the extra dimensions during primordial nucleosynthesis, concluding that ∣ΔR_{KK}/R_{KK}∣ < 1%. [28] included the effect of the variation of \({\alpha _{\rm{S}}} \propto R_{KK}^{- 2}\) and deduced from the helium-4 abundance that ∣ΔR_{KK}/R_{KK}∣ < 0.7% and ∣ΔR_{KK}/R_{KK}∣ < 1.1%, respectively for D = 2 and D = 7 Kaluza-Klein theories, and ∣ΔR_{KK}/R_{KK}∣ < 3.4 × 10^{−10} from the Oklo data. An analysis of most cosmological data (BBN, CMB, quasars, etc.), assuming that the extra dimension scales as R_{0}(1 + Δt^{−3/4}) or as R_{0}[1 + Δ(1 − cos ω(t − t_{0}))], concluded that Δ has to be smaller than 10^{−16} and 10^{−8} respectively [311], while [330] assumes that gauge fields and matter fields can propagate in the bulk, that is, in the extra dimensions. Ref. [336] evaluated the effect of such a coupled variation of G and the structure constants on distant supernova data, concluding that a variation similar to the one reported in [524] would make the distant supernovae brighter.
5.2.2 String theory
There exist five anomaly-free, supersymmetric perturbative string theories respectively known as type I, type IIA, type IIB, SO(32) heterotic and E_{8} × E_{8} heterotic theories (see, e.g., [420]). One of the definitive predictions of these theories is the existence of a scalar field, the dilaton, that couples directly to matter [484] and whose vacuum expectation value determines the string coupling constant [546]. There are two other excitations that are common to all perturbative string theories, a rank two symmetric tensor (the graviton) g_{ μν } and a rank two antisymmetric tensor B_{ μν }. The field content then differs from one theory to another. It follows that the 4-dimensional couplings are determined in terms of a string scale and various dynamical fields (dilaton, volume of compact space, …). When the dilaton is massless, we expect three effects: (i) a scalar admixture to the gravitational interaction, inducing deviations from general relativity in gravitational effects, (ii) a variation of the couplings and (iii) a violation of the weak equivalence principle. Our purpose is to show how the 4-dimensional couplings are related to the string mass scale, to the dilaton and to the structure of the extra dimensions, mainly on the example of heterotic theories.
Ref. [290] considers a D3-brane probe in the context of the AdS/CFT correspondence at finite temperature and provides predictions for the running electric and magnetic effective couplings beyond perturbation theory. This allows one to construct a varying-speed-of-light model.
To conclude, superstring theories offer a natural theoretical framework to discuss the value of the fundamental constants since they become expectation values of some fields. This is a first step towards their understanding but yet, no complete and satisfactory mechanism for the stabilization of the extra dimensions and dilaton is known.
It has paved the way for various models that we detail in Section 5.4.
5.3 Relations between constants
There are different possibilities to relate the variations of different constants. First, in quantum field theory, we have to take into account the running of coupling constants with energy and the possibilities of grand unification to relate them. It will also give a link between the QCD scale, the coupling constants and the mass of the fundamental particles (i.e., the Yukawa couplings and the Higgs vev). Second, one can compute the binding energies and the masses of the proton, neutron and different nuclei in terms of the gauge couplings and the quark masses. This step involves QCD and nuclear physics. Third, one can relate the gyromagnetic factor in terms of the quark masses. This is particularly important to interpret the constraints from the atomic clocks and the QSO spectra. This allows one to set stronger constraints on the varying parameters at the expense of a modeldependence.
5.3.1 Implication of gauge coupling unification
The first theoretical implication of highenergy physics arises from the unification of the nongravitational interactions. In these unification schemes, the three standard model coupling constants derive from one unified coupling constant.
This allowed six classes of scenarios to be defined: (1) varying gravitational constant (d_{ H } = d_{ S } = d_{ X } = 0) in which only M_{ U }/M_{ P } or equivalently \(G\Lambda _{{\rm{QCD}}}^2\) is varying; (2) varying unified coupling (d_{ U } = 1, d_{ H } = d_{ S } = d_{ M } = 0); (3) varying Fermi scale defined by (d_{ H } = 1, d_{ U } = d_{ S } = d_{ M } = 0) in which one has d ln μ/d ln α_{EM} = −325; (4) varying Fermi scale and SUSY-breaking scale (d_{ S } = d_{ H } = 1, d_{ U } = d_{ M } = 0) and for which d ln μ/d ln α_{EM} = −21.5; (5) varying unified coupling and Fermi scale \(({d_X} = 1,{d_H} = \tilde \gamma {d_X},{d_S} = {d_M} = 0)\) and for which \({\rm{d}}\ln \mu/{\rm{d}}\ln {\alpha _{{\rm{EM}}}} = (23.2 - 0.65\tilde \gamma)/(0.865 + 0.02\tilde \gamma)\); (6) varying unified coupling and Fermi scale with SUSY \(({d_X} = 1,{d_S} \simeq {d_H} = \tilde \gamma {d_X},{d_M} = 0)\) and for which \({\rm{d}}\ln \mu/{\rm{d}}\ln {\alpha _{{\rm{EM}}}} = (14 - 0.28\tilde \gamma)/(0.52 + 0.013\tilde \gamma)\).
Each scenario can be compared to the existing constraints to get sharper bounds on them [146, 147, 149, 364]; these comparisons emphasize that the correlated variation between different constants (here μ and α_{EM}) depends strongly on the theoretical hypotheses that are made.
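To illustrate how strongly the inferred Δμ/μ depends on the assumed scenario, the fixed-coefficient cases above (scenarios 3 and 4; the γ̃-dependent scenarios 5 and 6 are omitted for simplicity) can be evaluated directly:

```python
# Correlated variations under the unification scenarios listed above:
# d(ln mu)/d(ln alpha_EM) = R, so for small shifts dmu/mu ≈ R * dalpha/alpha.
# Only the two fixed-coefficient scenarios are included here.
R = {
    "scenario 3: varying Fermi scale": -325.0,
    "scenario 4: Fermi + SUSY-breaking scale": -21.5,
}

def delta_mu_over_mu(d_alpha_over_alpha, scenario):
    """Fractional shift of mu implied by a fractional shift of alpha_EM."""
    return R[scenario] * d_alpha_over_alpha

# The same 1e-6 shift of alpha_EM implies mu shifts differing by a factor ~15:
for s in R:
    print(s, delta_mu_over_mu(1e-6, s))
```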
5.3.2 Masses and binding energies
The previous Section 5.3.1 described the unification of the gauge couplings. When we consider “composite” systems such as protons, neutrons, nuclei, or even planets and stars, we need to compute their mass, which requires determining their binding energy. As we have already seen, the electromagnetic binding energy induces a direct dependence on α_{EM} and can be evaluated using, e.g., the Bethe-Weizsäcker formula (61). The dependence of the masses on the quark masses, via nuclear interactions, and the determination of the nuclear binding energy are especially difficult to estimate.
The case of the deuterium binding energy B_{ D } has been discussed in different ways (see Section 3.8.3). A first route relies on the dependence of B_{ D } on the pion mass [188, 38, 426, 553], which can then be related to m_{u}, m_{d} and Λ_{QCD}. A second avenue is to use a sigma model in the framework of the Walecka model [456], in which the potential for the nuclear forces keeps only the σ, ρ and ω meson exchanges [208]. We also emphasize that deuterium is only produced during BBN, as it is too weakly bound to survive in the regions of stars where nuclear processes take place. The fact that we do observe deuterium today sets a non-trivial constraint on the constants by imposing that the deuterium remains stable from BBN time to today. Since it is weakly bound, it is also more sensitive to a variation of the nuclear force compared to the electromagnetic force. This was used in [145] to constrain the variation of the nuclear strength in a sigma-model.
These expressions allow one to compute the sensitivity coefficients that enter in the decomposition of the mass [see Equation (201)]. They also emphasize one of the most difficult issues in the investigation of varying constants: the intricate structure of QCD and its role in low-energy nuclear physics, which is central to determining the masses of nuclei and the binding energies, quantities that are particularly important for BBN, the universality of free fall and stellar physics.
5.3.3 Gyromagnetic factors
The constraints arising from the comparison of atomic clocks (see Section 3.1) involve the finestructure constant α_{EM}, the protontoelectron mass ratio μ and various gyromagnetic factors. It is important to relate these factors to fundamental constants.
5.4 Models with varying constants
The models that can be constructed are numerous and cannot all be reviewed here. Thus, we focus on the string dilaton model in Section 5.4.1 and then discuss the chameleon mechanism in Section 5.4.2 and the Bekenstein framework in Section 5.4.3.
5.4.1 String dilaton and Runaway dilaton models
If, as allowed by the ansatz (195), m_{ A }(ϕ) has a minimum ϕ_{ m } then the scalar field will be driven toward this minimum during the cosmological evolution. However, if the various coupling functions have different minima then the minima of m_{ A }(ϕ) will depend on the particle A. To avoid violation of the equivalence principle at an unacceptable level, it is thus necessary to assume that all the minima coincide in ϕ = ϕ_{ m }, which can be implemented by setting B_{ i } = B. This can be realized by assuming that ϕ_{ m } is a special point in field space, for instance it could be associated to the fixed point of a Z_{2} symmetry of the T or Sduality [129].
Expanding ln B around its maximum ϕ_{ m } as ln B ∝ − κ(ϕ − ϕ_{ m })^{2}/2, Damour and Polyakov [135, 136] constrained the set of parameters (κ, ϕ_{0} − ϕ_{ m }) using the different observational bounds. This toy model allows one to address the unsolved problem of the dilaton stabilization, to study all the experimental bounds together and to relate them in a quantitative manner (e.g., by deriving a link between equivalenceprinciple violations and timevariation of α_{EM}). This model was compared to astrophysical data in [306] to conclude that ∣ Δϕ∣ < 3.4κ10^{−6}.

The PPN parameters, the time variation of α and G today and the violation of the universality of free fall all scale as \(\Delta \phi _0^2\).

The field is driven toward ϕ_{ m } during the cosmological evolution, a point at which the scalar field decouples from the matter fields. This mechanism is usually called the least coupling principle.

Once the dynamics for the scalar field is solved, Δϕ_{0} can be related to Δϕ_{ i } at the end of inflation. Interestingly, this quantity can be expressed in terms of the amplitude of the density contrast at the end of inflation, that is, in terms of the energy scale of inflation.

The numerical estimations [135] indicate that η_{ U,H } ∼ −5.4 × 10^{−5}(γ^{PPN} − 1), showing that in such a class of models the constraint η ∼ 10^{−13} implies 1 − γ^{PPN} ∼ 2 × 10^{−9}, which is a better constraint than the one obtained directly.
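The arithmetic behind that last statement can be checked directly (a back-of-the-envelope verification of the numbers quoted above, nothing more):

```python
# Check that eta ~ 1e-13, combined with eta ~= -5.4e-5 (gamma_PPN - 1),
# indeed implies |1 - gamma_PPN| ~ 2e-9, as stated in the text.
eta_bound = 1e-13    # universality-of-free-fall sensitivity
coeff = 5.4e-5       # proportionality factor quoted from [135]
gamma_ppn_bound = eta_bound / coeff
print(f"|1 - gamma_PPN| ~ {gamma_ppn_bound:.1e}")  # ~ 1.9e-09, i.e. ~ 2e-9
```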
The coupling of the dilaton to the standard model fields was further investigated in [122, 121]. Assuming that the heavy quarks and weak gauge bosons have been integrated out and that the dilaton theory has been matched to the light fields below the scale of the heavy quarks, the coupling of the dilaton has been parameterized by 5 parameters: d_{ e } and d_{ g } for the couplings to the electromagnetic and gluonic field-strength terms, and \({d_{{m_e}}},{d_{{m_u}}}\) and \({d_{{m_d}}}\) for the couplings to the fermionic mass terms, so that the interaction Lagrangian reduces to a linear coupling (e.g., ∝ d_{ e }ϕF^{2} for the coupling to electromagnetism, etc.). It follows that Δα_{EM}/α_{EM} = d_{ e }κϕ for the fine-structure constant, ΔΛ_{QCD}/Λ_{QCD} = d_{ g }κϕ for the strong sector and Δm_{ i }/m_{ i } = d_{ mi }κϕ for the masses of the fermions. These parameters can be constrained by the test of the equivalence principle in the solar system [see Section 6.3].
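This linear parameterization lends itself to a one-line numerical sketch; the coupling coefficients and the field excursion κϕ below are placeholders chosen purely for illustration, not measured or derived values.

```python
# Linear dilaton couplings in the spirit of [122, 121]: each constant shifts
# in proportion to the field excursion kappa*phi, with its own coefficient.
# All numerical values below are illustrative placeholders.
def fractional_shifts(kappa_phi, d_e, d_g, d_m):
    return {
        "Delta alpha_EM / alpha_EM": d_e * kappa_phi,      # fine-structure constant
        "Delta Lambda_QCD / Lambda_QCD": d_g * kappa_phi,  # strong sector
        "Delta m_i / m_i": d_m * kappa_phi,                # fermion masses
    }

for name, value in fractional_shifts(kappa_phi=1e-6, d_e=1e-3,
                                     d_g=1e-3, d_m=5e-4).items():
    print(f"{name}: {value:.1e}")
```

Because all shifts are proportional to the same excursion κϕ, ratios of variations of different constants directly probe ratios of the coupling coefficients, which is what unification scenarios predict.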
In these two stringinspired scenarios, the amplitude of the variation of the constants is related to the one of the density fluctuations during inflation and the cosmological evolution.
5.4.2 The Chameleon mechanism
A central property of the least coupling principle, that is at the heart of the former models, is that all coupling functions have the same minimum so that the effective potential entering the Klein-Gordon equation for the dilaton has a well-defined minimum.
The cosmological variation of α_{EM} in such models was investigated in [70, 71]. Models based on the Lagrangian (209) and exhibiting the chameleon mechanism were investigated in [398].
The possible shift in the value of μ in the Milky Way (see Section 6.1.3) was related [323, 324, 322] to the model of [398] to conclude that such a shift was compatible with this model.
5.4.3 Bekenstein and related models
Bekenstein [39, 40] introduced a theoretical framework in which only the electromagnetic sector is modified by the introduction of a dimensionless scalar field ϵ, so that all electric charges vary in unison, e_{ i } = e_{0i}ϵ(x^{ α }), and only α_{EM} is assumed to possibly vary.
As discussed previously, this class of models predicts a violation of the universality of free fall and, from Equation (14), it is expected that the anomalous acceleration is given by δa = − M^{−1}(∂ E_{EM}/∂ϵ)∇ϵ.
This theory was also used [41] to study the spacetime structure around charged black holes, which correspond to an extension of dilatonic charged black holes. It was concluded that a cosmological growth of α_{EM} would decrease the black-hole entropy, but with half the rate expected from the earlier analyses [139, 339].
5.4.4 Other ideas

Models involving a late time phase transition in the electromagnetic sector [87, 10];

Braneworld models [336, 8, 73, 331, 403] or extra dimensions [477];

Models with pseudo-scalar couplings [203];

Growing neutrino models [9, 533] in which the neutrino masses are a function of a scalar field that is also responsible for the late-time acceleration of the universe. In these models the neutrinos freeze the evolution of the scalar field when they become non-relativistic, while its evolution is similar to that of quintessence when the neutrinos are ultra-relativistic;

Models based on discrete quantum gravity [223] or on loop quantum gravity, in which the Barbero-Immirzi parameter controls the minimum eigenvalue of the area operator and could be promoted to a field, leading to a classical coupling of Einstein’s gravity with a scalar-field stress-energy tensor [354, 483];

“varying speed of light” models for which we refer to the review [341] and our previous analysis [183] for a critical view;

Quintessence models with a nonminimal coupling of the quintessence field [20, 11, 96, 112, 162, 217, 315, 314, 389, 347, 404, 531] [see discussion Section 2.2.3];

Holographic dark energy models with nonminimal couplings [235]
6 Spatial Variations

On cosmological scales, the fields dictating the variation of the constants have fluctuations that can leave their imprint on some cosmological observables.

On local scales (e.g., our solar system or the Milky Way) the fields at the origin of the variation of the constants are sourced by the local matter distribution, so that one expects the constants not to be homogeneous on these scales.
6.1 Local scales
In order to determine the profile of the constants in the solar system, let us assume that their value is dictated by the value of a scalar field. As in Section 5.4.1, we can assume that at lowest order the profile of the scalar field will be obtained from the scalar-tensor theory, taking into account that all masses scale as Λ_{QCD}(ϕ_{*}), where ϕ_{*} is the value of the field in the Einstein frame.
6.1.1 Generalities
6.1.2 Solar system scales
Such bounds can be improved by comparing clocks on Earth and on board satellites [209, 444, 343], while the observation of atomic spectra near the Sun can lead to an accuracy of order unity [209]. A space mission with atomic clocks on board, sent toward the Sun, could reach an accuracy of 10^{−8} [343, 547].
6.1.3 Milky Way
An attempt [323, 358] to constrain k_{ μ } from emission lines due to ammonia in interstellar clouds of the Milky Way led to the conclusion that k_{ μ } ∼ 1, by considering different transitions in different environments. This is in contradiction with the local constraint (219). This may result from rest-frequency uncertainties, or it would require that a mechanism such as the chameleon be at work (see Section 5.4.2) in order to be compatible with local constraints. The analyses were based on an ammonia spectral atlas of 193 dense protostellar and prestellar cores of low mass in the Perseus molecular cloud, and on a comparison of N_{2}H^{+} and N_{2}D^{+} in the dark cloud L183.
A second analysis [324], using high-resolution spectral observations of molecular cores in lines of NH_{3}, HC_{3}N and N_{2}H^{+} with 3 radio telescopes, showed that ∣Δμ/μ∣ < 3 × 10^{−8} between the cloud environment and the local laboratory environment. However, an offset was measured that could be interpreted as a variation of μ of amplitude \(\Delta \bar \mu/\bar \mu = (2.2 \pm {0.4_{{\rm{stat}}}} \pm {0.3_{{\rm{sys}}}}) \times {10^{-8}}\). A third analysis [322] mapped four molecular cores, L1498, L1512, L1517, and L1400K, selected from the previous sample in order to estimate systematic effects due to possible velocity gradients. The measured velocity offset, once expressed in terms of \(\Delta \bar \mu\), gives \(\Delta \bar \mu/\bar \mu = (26 \pm {1_{{\rm{stat}}}} \pm {3_{{\rm{sys}}}}) \times {10^{-9}}\).
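To see why rest-frequency systematics are so critical here, one can convert the quoted fractional shift of μ into the radial-velocity offset it mimics. The sketch below assumes the commonly quoted sensitivity difference Δk ≈ 3.46 between the ammonia inversion transition and rotational transitions; that value is stated here as an assumption, not derived.

```python
# Converting a fractional shift of mu into the velocity offset it mimics in
# the ammonia method. We assume the commonly quoted sensitivity difference
# Delta_k ~= 3.46 between the NH3 inversion transition (nu ~ mu^4.46, with
# mu = m_e/m_p) and rotational transitions (nu ~ mu):
#     Delta_V ~= Delta_k * (Delta_mu / mu) * c.
C = 2.998e8      # speed of light, m/s
DELTA_K = 3.46   # assumed sensitivity difference

def velocity_offset(dmu_over_mu):
    """Radial-velocity offset (m/s) mimicking a fractional shift of mu."""
    return DELTA_K * dmu_over_mu * C

# The offset of ~2.2e-8 quoted above corresponds to roughly 23 m/s, which is
# why rest-frequency uncertainties at the ~10 m/s level are a serious systematic.
print(f"{velocity_offset(2.2e-8):.0f} m/s")
```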
Since extragalactic gas clouds have densities similar to those in the interstellar medium, these bounds set an upper limit on a hypothetical chameleon effect that is much tighter than the constraints on time variations obtained from QSO absorption spectra.
6.2 Cosmological scales
During inflation, any light scalar field develops super-Hubble fluctuations of quantum origin, with an almost scale-invariant power spectrum (see chapter 8 of [409]). It follows that if the fundamental constants depend on such a field, their value must fluctuate on cosmological scales and have a non-vanishing correlation function. More importantly, these fluctuations can be correlated with the metric perturbations.
In such a case, the fine-structure constant will behave as α_{EM} = α_{EM}(t) + δα_{EM}(x, t), the fluctuation being a stochastic variable. As we have seen earlier, α_{EM} enters the dynamics of recombination, which would then become patchy. This has several consequences for the CMB anisotropies. In particular, similarly to weak gravitational lensing, it will modify the mean power spectra (this is a negligible effect) and induce a curl component (B mode) in the polarization [466]. Such spatial fluctuations also induce non-Gaussian temperature and polarization correlations in the CMB [466, 417]. Such correlations have not yet allowed one to set observational constraints, but they need to be included for consistency; see, e.g., the example of CMB computation in scalar-tensor theories [435]. The effect on large-scale structure was also studied in [30, 363], and the Keck/HIRES QSO absorption spectra showed [377] that the correlation function of the fine-structure constant is consistent with no spatial variation on scales ranging between 0.2 and 13 Gpc.
This has led to the idea [396] of the existence of a low-energy domain wall produced in a spontaneous symmetry breaking involving a dilaton-like scalar field coupled to electromagnetism. Domains on either side of the wall exhibit slight differences in their respective values of α_{EM}. If such a wall is present within our Hubble volume, absorption spectra at large redshifts may or may not exhibit a variation in α_{EM} relative to the terrestrial value, depending on our position relative to the wall.
Another possibility would be that the Copernican principle is not fully satisfied, such as in various void models. Then the background value of ϕ would depend, e.g., on r and t for a spherically symmetric spacetime (such as a LemaîtreTolmanBondi spacetime). This could give rise to a dipolar modulation of the constant if the observer (us) is not located at the center of the universe. Note, however, that such a cosmological dipole would also reflect itself, e.g., on CMB anisotropies. Similar possibilities are also offered within the chameleon mechanism where the value of the scalar field depends on the local matter density (see Section 5.4.2).
More speculative is the effect that such fluctuations can have during preheating after inflation, since the decay rate of the inflaton into particles may fluctuate on large scales [293, 294].
6.3 Implication for the universality of free fall
As we have seen in the previous sections, the tests of the universality of free fall are central in constraining models involving variations of the fundamental constants.
The link between the time variation of fundamental constants and the violation of the universality of free fall has been discussed by Bekenstein [39] in the framework described in Section 5.4.3 and by Damour and Polyakov [135, 136] in the general framework described in Section 5.4.1. In all these models, the two effects are triggered by a scalar field. It evolves according to a Klein-Gordon equation \((\ddot \phi + 3H\dot \phi + {m^2}\phi + \ldots = 0)\), which implies that \(\dot \phi\) is damped as \(\dot \phi \propto {a^{-3}}\) if its mass is much smaller than the Hubble scale. Thus, in order to be varying during the last Hubble time, ϕ has to be very light, with typical mass m ∼ H_{0} ∼ 10^{−33} eV. As a consequence, ϕ has to be very weakly coupled to the standard model fields to avoid a violation of the universality of free fall.
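The damping statement is easy to verify numerically. The sketch below integrates the Klein-Gordon equation in the massless (m ≪ H) limit during a matter era, in arbitrary units; it is only a consistency check of the scaling, not a cosmology code.

```python
# Numerical check of the damping argument above: for m << H the Klein-Gordon
# equation reduces to phi'' + 3 H phi' ~= 0; in a matter era (a ~ t^(2/3),
# H = 2/(3t)) this damps the velocity as phi' ~ a^-3 ~ t^-2.
# Units are arbitrary; this only verifies the scaling.

def damped_velocity(t0=1.0, t1=100.0, dt=1e-4, dphi0=1.0):
    t, dphi = t0, dphi0
    while t < t1:
        H = 2.0 / (3.0 * t)            # matter-era Hubble rate
        dphi += -3.0 * H * dphi * dt   # phi'' = -3 H phi'
        t += dt
    return dphi

numeric = damped_velocity()
analytic = 1.0 * (1.0 / 100.0) ** 2    # phi' ~ t^-2 prediction
print(f"numeric = {numeric:.3e}, analytic = {analytic:.3e}")
```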
One question concerns the most sensitive probes of the equivalence principle. This was investigated in [144], in which the coefficients λ_{ Ai } are estimated using the model (189). It was concluded that equivalence-principle tests are 2–3 orders of magnitude more sensitive than cosmic clock bounds. However, [148] concluded that the most sensitive probe depends on the unification relation that exists between the different couplings of the standard model. [463] concluded similarly that the universality of free fall is more constraining than the seasonal variations. The comparison with QSO spectra is more difficult since it involves the dynamics of the field between z ∼ 1 and today. To finish, let us stress that these results may be changed significantly if a chameleon mechanism is at work.
7 Why Are The Constants Just So?
The numerical values of the fundamental constants are not determined by the laws of nature in which they appear. One can wonder why they have the values we observe. In particular, as pointed out by many authors (see below), the constants of nature seem to be fine-tuned [317]. Many physicists take this fine-tuning to be an explanandum that cries for an explanans, hence following Hoyle [258], who wrote that “one must at least have a modicum of curiosity about the strange dimensionless numbers that appear in physics.”
7.1 Universe and multiverse approaches
Two possible lines of explanation are usually envisioned: a design or consistency hypothesis and an ensemble hypothesis, which are in fact not mutually incompatible. The first hypothesis includes the possibility that all the dimensionless parameters in the “final” physical theory will be fixed by a condition of consistency or an external cause. In the ensemble hypothesis, the universe we observe is only a small part of the totality of physical existence, usually called the multiverse. This structure need not be fine-tuned but must be sufficiently large and variegated that it contains, as a proper part, a universe like the one we observe, whose fine-tuning is then explained by an observation selection effect [64].
These two possibilities send us back to the large number hypothesis of Dirac [155], which served as an early motivation to investigate theories with varying constants. The main concern was the existence of some large ratios between combinations of constants. As we have seen in Section 5.3.1, the running of coupling constants with energy, dimensional transmutation and relations such as Equation (185) have opened a way toward a rational explanation of very small (or very large) dimensionless numbers. This follows the ideas developed by Eddington [178, 179], aiming at deriving the values of the constants from consistency relations; e.g., he proposed to link the fine-structure constant to some algebraic structure of spacetime. Dicke [151] pointed out another possible explanation of the origin of Dirac’s large numbers: the density of the universe is determined by its age, this age being related to the time needed to form galaxies, stars, heavy nuclei…. This led Carter [82] to argue that these numerical coincidences should not come as a surprise and that conventional physics and cosmology could have been used to predict them, at the expense of invoking the anthropic principle.
The idea of such a structure, called the multiverse, has attracted a lot of attention in the past years and we refer to [79] for a more exhaustive account of this debate. While there are many versions of what such a multiverse could be, one of them finds its roots in string theory. In 2000, it was realized [66] that vast numbers of discrete choices, called flux vacua, can be obtained when compactifying superstring theory. The number of possibilities is estimated to range between 10^{100} and 10^{500}, or maybe more. No principle is yet known to fix which of these vacua is chosen. Eternal inflation offers a possibility to populate these vacua and to generate an infinite number of regions in which the parameters and initial conditions, but also the laws of nature or the number of spacetime dimensions, can vary from one universe to another, hence being completely contingent. It was later suggested by Susskind [482] that the anthropic principle may actually constrain our possible locations in this vast string landscape. This is a shift from the early hopes [270] that M-theory might conceivably predict all the fundamental constants uniquely.
Indeed, such a possibility radically changes the way we approach the question of the relation of these parameters to the underlying fundamental theory, since we now expect them to be distributed randomly in some range. Within this range of parameters lies a subset, which we shall call the anthropic range, that allows a universe to support the existence of observers. This range can be determined by asking how the world would change if the values of the constants were changed, hence doing counterfactual cosmology. This is, however, very restrictive, since the mathematical form of the laws of physics is kept unchanged and we are restricting ourselves to a local analysis in the neighborhood of our observed universe. The determination of the anthropic region is not a prediction but just a characterization of the sensitivity of “our” universe to a change of the fundamental constants ceteris paribus. Once this range is determined, one can ask the general question of quantifying the probability that we observe a universe like ours, hence providing a probabilistic prediction. This involves the use of the anthropic principle, which expresses the fact that what we observe are not just observations but observations made by us, and requires us to state what an observer actually is [383].
7.2 Fine-tunings and determination of the anthropic range
As we have discussed in the previous sections, the outcomes of many physical processes depend strongly on the values of the fundamental constants. One can always ask the scientific question of what would change in the world around us if the values of some constants were different, hence doing some counterfactual cosmology in order to determine the range within which the universe would have developed complex physics and chemistry, usually thought to be a prerequisite for the emergence of complexity and life (we emphasize the difficulty of this exercise when it goes beyond small and local deviations from our observed universe and physics; see, e.g., [245] for a possibly life-supporting universe without weak interactions). In doing so, one should consider not only the fundamental parameters entering our physical theory but also the cosmological parameters.

It has been noted that the stability of the proton requires \({m_{\rm{d}}} - {m_{\rm{u}}} \gtrsim \alpha _{{\rm{EM}}}^{3/2}{m_{\rm{p}}}\). The anthropic bounds on m_{d}, m_{u} and m_{e} (or on the Higgs vev) arising from the existence of nuclei, from the requirement that the dineutron and the diproton not form bound states, and from the stability of deuterium have been investigated in many works [5, 6, 120, 145, 160, 161, 252, 254], even allowing for nuclei made of more than 2 baryon species [264]. Typically, the existence of nuclei imposes that m_{d} + m_{u} and v cannot vary by more than 60% from their observed values in our universe.

If the difference between the neutron and proton masses were less than about 1 MeV, the neutron would become stable and hydrogen would be unstable [442, 253], so that helium would have been the most abundant element at the end of BBN, and the whole history of the formation and burning of stars would have been different. It can be deduced [252] that one needs m_{d} − m_{u} − m_{e} ≳ 1.2 MeV so that the universe does not become all neutrons, m_{d} − m_{u} + m_{e} ≲ 3.4 MeV for the pp reaction to be exothermic, and m_{e} > 0, leading to a finite domain.

A coincidence emerges from the existence of stars with convective and radiative envelopes, since it requires [80] that \({\alpha _{\rm{G}}} \sim \alpha _{{\rm{EM}}}^{20}\). It arises from the fact that the typical mass separating these two behaviors is roughly \(\alpha _{\rm{G}}^{-2}\alpha _{{\rm{EM}}}^{10}{m_{\rm{p}}}\), while the masses of stars span a few decades around \(\alpha _{\rm{G}}^{-3/2}{m_{\rm{p}}}\). Both types of stars seem to be needed, since only radiative stars can lead to supernovae, required to disseminate heavy elements, while only convective stars may generate winds in their early phase, which may be associated with the formation of rocky planets. This relation, while satisfied numerically in our universe, cannot be explained from fundamental principles.

Similarly, it seems that for neutrinos to eject the envelope of a star in a supernova explosion, one requires [80] \({\alpha _{\rm{G}}} \sim \alpha _{\rm{W}}^4\).

As we discussed in Section 3.5, the production of carbon seems to imply that the relative strength of the nuclear to electromagnetic interaction must be tuned typically at the 0.1% level.

The total density parameter Ω must lie within an order of magnitude of unity. If it were much larger, the universe would have recollapsed rapidly, on a time scale much shorter than the main-sequence star lifetime. If it were too small, density fluctuations would have frozen before galaxies could form. Typically, one expects 0.1 < Ω_{0} < 10. Indeed, most inflationary scenarios lead to Ω_{0} ∼ 1, so that this may not be anthropically determined; but in that case inflation should last sufficiently long, which can lead to a fine-tuning of the parameters of the inflationary potential.

The cosmological constant was probably the first constant to be questioned in an anthropic way [527]. Weinberg noted that if Λ is too large, the universe will start accelerating before structures have had time to form. Assuming that it does not dominate the matter content of the universe before the redshift z_{*} at which the earliest galaxies formed, one concludes that ρ_{ V } = Λ/8πG ≲ (1 + z_{*})^{3}ρ_{mat0}. Weinberg [527] estimated z_{*} ∼ 4.5 and concluded that “if it is the anthropic principle that accounts for the smallness of the cosmological constant, then we would expect the vacuum energy density ρ_{ V } ∼ (10–100)ρ_{mat0} because there is no anthropic reason for it to be smaller”. Indeed, the observations indicate ρ_{ V } ∼ 2ρ_{mat0}.

Tegmark and Rees [486] have pointed out that the amplitude of the initial density perturbations, Q, enters into the calculation, and they determined the anthropic region in the plane (Λ, Q). This demonstrates the importance of determining which parameters to include in the analysis.

Different time scales of different origin seem to be comparable: the radiative cooling time, the galactic halo virialization time, the time of cosmological constant dominance, the age of the universe today. These coincidences were interpreted as an anthropic sign [65].
These are just a series of examples. For a multiparameter study of the anthropic bound, we refer, e.g., to [485] and to [243] for a general anthropic investigation of the standard model parameters.
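Some of the coincidences listed above can be checked numerically at the order-of-magnitude level. The sketch below uses standard values of the constants; the quark masses and the identification of the quoted scaling laws are approximate, so only orders of magnitude are meaningful.

```python
# Order-of-magnitude checks of some anthropic coincidences listed above,
# using standard SI values; only orders of magnitude are meaningful.
G, HBAR, C = 6.674e-11, 1.055e-34, 2.998e8
M_P = 1.673e-27            # proton mass (kg)
ALPHA_EM = 1 / 137.036

# Gravitational fine-structure constant of the proton:
alpha_G = G * M_P**2 / (HBAR * C)
print(f"alpha_G ~ {alpha_G:.1e}")                 # ~ 5.9e-39

# Proton stability: m_d - m_u must exceed ~ alpha_EM^(3/2) m_p.
m_u, m_d, m_p_mev = 2.2, 4.7, 938.3               # approximate masses, MeV
bound = ALPHA_EM**1.5 * m_p_mev                   # ~ 0.6 MeV
print(f"m_d - m_u = {m_d - m_u:.1f} MeV vs bound {bound:.2f} MeV")

# Carter coincidence: the convective/radiative transition mass
# alpha_G^(-2) alpha_EM^(10) m_p should lie within a few decades of the
# typical stellar mass alpha_G^(-3/2) m_p (~ a solar mass).
ratio = (alpha_G**-2 * ALPHA_EM**10) / alpha_G**-1.5
print(f"transition mass / typical stellar mass ~ {ratio:.0e}")

# Weinberg bound: rho_V < (1 + z_*)^3 rho_mat0 with z_* ~ 4.5.
print(f"rho_V / rho_mat0 < {(1 + 4.5)**3:.0f}")   # ~ 166
```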
7.3 Anthropic predictions
The determination of the anthropic region for a set of parameters is in no way a prediction but simply a characterization of our understanding of a physical phenomenon P that we think is important for the emergence of observers. It reflects the fact that, denoting by C the condition that the constants lie in some interval, C ⇒ P is equivalent to ¬P ⇒ ¬C.
The anthropic principle [82] states that “what we can expect to observe must be restricted by the conditions necessary for our presence as observers”. It has received many interpretations, among which the weak anthropic principle, stating that “we must be prepared to take account of the fact that our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers”, which is a restriction of the Copernican principle often used in cosmology, and the strong anthropic principle, according to which “the universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage” (see [35] for further discussions and a large bibliography on the subject).
This approach to understanding the observed values of the fundamental constants (but also the initial conditions of our universe), by resorting to the actual existence of a multiverse populated by different “low-energy” theories of some “mother” microscopic theory, allows us to explain the observed fine-tuning by an observational selection effect. It also sets a limit on the Copernican principle, which states that we do not live in a particular position in space, since we have to live in a region of the multiverse where the constants are inside the anthropic bounds. Such an approach is not widely accepted and has been criticized in many ways [7, 182, 480, 402, 479, 511, 475].
Among the issues to be answered before such an approach becomes more rigorous, let us note: (1) what is the shape of the string landscape; (2) which constants should we scan. It is indeed important to distinguish the parameters that are actually fine-tuned, in order to determine those that we should hope to explain in this way [537, 538]. Here theoretical physics is important, since it should determine which of the numerical coincidences are genuine coincidences and which are expected for some unification or symmetry reasons; (3) how is the landscape populated; (4) what measure is to be used, and what is the correct way to compute anthropically-conditioned probabilities.
While considered by some as not following the standard scientific approach, this is at present the only existing window on some understanding of the values of the fundamental constants.
8 Conclusions
The study of fundamental constants has witnessed tremendous progress in the past years. In a decade, the constraints on their possible space and time variations have flourished. They have reached higher precision, and new systems, involving different combinations of constants and located at different redshifts, have been considered. This has improved our knowledge of the equivalence principle and allowed one to test it on astrophysical and cosmological scales. We have reviewed these constraints in Section 3 and Section 4. We have emphasized the experimental and observational progress expected in the coming years, such as the E-ELT, radio observations, atomic clocks in space, or the use of gravitational waves.
From a theoretical point of view, we have described in Section 5 the highenergy models that predict such variation, as well as the link with the origin of the acceleration of the universe. In all these cases, a spacetime varying fundamental constant reflects the existence of an almost massless field that couples to matter. This will be at the origin of a violation of the universality of free fall and thus of utmost importance for our understanding of gravity and of the domain of validity of general relativity. Huge progress has been made in the understanding of the coupled variation of different constants. While more modeldependent, this allows one to set stronger constraints and eventually to open an observational window on unification mechanisms.
To finish, we have discussed in Section 7 the ideas that try to understand the values of the fundamental constants. While considered as borderline with respect to the standard physical approach, they reveal the necessity of considering a universe larger than our own, called the multiverse. They would also give us a hint about our location in this structure, in the sense that the anthropic principle limits the Copernican principle lying at the basis of most cosmological models. We have stressed the limitations of this approach and the ongoing debate on the possibility of making it predictive.
To conclude, the puzzle of the large numbers pointed out by Dirac has led to a better understanding of the fundamental constants and of their role in the laws of physics. They are now part of the general tests of general relativity, as well as breadcrumbs leading toward an understanding of the origin of the acceleration of the universe and toward more speculative structures, such as a multiverse, and possibly a window on string theory.
Footnotes
 2.
After studying electrolysis in 1874, Johnstone Stoney suggested the existence of a “single definite quantity of electricity”. He was able to estimate the value of this elementary charge by means of Faraday’s laws of electrolysis. He introduced the term “electron” in 1894 and it was identified as a particle in 1897 by Thomson.
 4.
Again, μ is used for either m_{e}/m_{p} or m_{p}/m_{e}. I have chosen to use μ = m_{p}/m_{e} and \(\bar \mu = {m_{\rm{e}}}/{m_{\rm{p}}}\).
 11.
The CODATA is the COmmittee on Data for Science and Technology, see http://www.codata.org/.
 12.
Note that the Dirac hypothesis can also be achieved by assuming that e varies as t^{1/2}. Indeed this reflects a choice of units, either atomic or Planck units. However, there is a difference: assuming that only G varies violates the strong equivalence principle while assuming a varying e results in a theory violating the weak equivalence principle. It does not mean that we are detecting the variation of a dimensionful constant but simply that either e^{2}/ħc or \(Gm_{\rm{e}}^2/\hbar c\) is varying. This shows that many implementations of this idea are a priori possible.
 13.
For copper ν_{p} = 0.456, for uranium ν_{p} = 0.385 and for lead ν_{p} = 0.397.
Acknowledgments
I would like to thank all my collaborators on this topic, Alain Coc, Pierre Descouvemont, Sylvia Ekström, George Ellis, Georges Meynet, Nelson Nunes, Keith Olive and Elisabeth Vangioni as well as Bénédicte Leclercq and Roland Lehoucq.
I also thank many colleagues for sharing their thoughts on the subject with me, first at the Institut d’Astrophysique de Paris, Luc Blanchet, Michel Cassé, Gilles EspositoFarèse, Bernard Fort, Guillaume Faye, JeanPierre Lasota, Yannick Mellier, Patrick Petitjean; in France, Francis Bernardeau, Sébastien Bize, Françoise Combes, Thibault Damour, Nathalie Deruelle, Christophe Salomon, Carlo Schimd, Peter Wolfe; and to finish worldwide, John Barrow, Thomas Dent, Victor Flambaum, Bala Iyer, Lev Kofman, Paolo Molaro, David Mota, Michael Murphy, Jeff Murugan, Cyril Pitrou, Anan Srianand, Gabriele Veneziano, John Webb, Amanda Weltman, Christof Wetterich. To finish, I thank Clifford Will for motivating me to write this review.
This work was supported by a PEPSPTI grant from CNRS (2009–2011) and the PNCG (2010) but, despite all our efforts, has not been supported by the FrenchANR.
References
 [1] Accetta, F.S., Krauss, L.M. and Romanelli, P., “New limits on the variability of G from big bang nucleosynthesis”, Phys. Lett. B, 248, 146, (1990). (Cited on pages 83 and 84.)
 [2] Acquaviva, V., Baccigalupi, C., Leach, S.M., Liddle, A.R. and Perrotta, F., “Structure formation constraints on the Jordan-Brans-Dicke theory”, Phys. Rev. D, 71, 104025, (2005). [DOI], [astro-ph/0412052]. (Cited on page 83.)
 [3] Adams, F.C., “Stars in other universes: stellar structure with different fundamental constants”, J. Cosmol. Astropart. Phys., 2008(08), 010, (2008). [DOI], [arXiv:0807.3697 [astro-ph]]. (Cited on page 63.)
 [4] Adelberger, E.G., “New tests of Einstein’s equivalence principle and Newton’s inverse-square law”, Class. Quantum Grav., 18, 2397–2405, (2001). [DOI]. (Cited on page 19.)
 [5] Agrawal, V., Barr, S.M., Donoghue, J.F. and Seckel, D., “Anthropic considerations in multiple-domain theories and the scale of electroweak symmetry breaking”, Phys. Rev. Lett., 80, 1822, (1998). [DOI], [hep-ph/9801253]. (Cited on page 111.)
 [6] Agrawal, V., Barr, S.M., Donoghue, J.F. and Seckel, D., “Viable range of the mass scale of the standard model”, Phys. Rev. D, 57, 5480–5492, (1998). [DOI], [hep-ph/9707380]. (Cited on page 111.)
 [7] Aguirre, A., “Making predictions in a multiverse: conundrums, dangers, coincidences”, in Carr, B.J., ed., Universe or Multiverse?, pp. 367–386, (Cambridge University Press, Cambridge; New York, 2007). [astro-ph/0506519], [Google Books]. (Cited on page 113.)
 [8] Amarilla, L. and Vucetich, H., “Brane-world cosmology and varying G”, Int. J. Mod. Phys. A, 25, 3835–3856, (2010). [DOI], [0908.2949]. (Cited on page 101.)
 [9] Amendola, L., Baldi, M. and Wetterich, C., “Quintessence cosmologies with a growing matter component”, Phys. Rev. D, 78, 023015, (2008). [DOI], [arXiv:0706.3064 [astro-ph]]. (Cited on pages 101 and 102.)
 [10] Anchordoqui, L., Barger, V., Goldberg, H. and Marfatia, D., “Phase transition in the fine structure constant”, Phys. Lett. B, 660, 529, (2008). [arXiv:0711.4055 [hep-ph]]. (Cited on page 101.)
 [11] Anchordoqui, L. and Goldberg, H., “Time variation of the fine structure constant driven by quintessence”, Phys. Rev. D, 68, 083513, (2003). [DOI], [hep-ph/0306084]. (Cited on pages 24 and 102.)
 [12] Anderson, J.D., Campbell, J.K., Jurgens, R.F. and Lau, E.L., “Recent Developments in Solar-System Tests of General Relativity”, in Sato, H. and Nakamura, T., eds., The Sixth Marcel Grossmann Meeting: On recent developments in theoretical and experimental general relativity, gravitation and relativistic field theories, Proceedings of the meeting held at Kyoto International Conference Hall, Kyoto, Japan, 23–29 June 1991, pp. 353–355, (World Scientific, Singapore, 1992). (Cited on page 77.)
 [13] Andreev, O.Y., Labzowsky, L.N., Plunien, G. and Soff, G., “Testing the time dependence of the fundamental constants in the spectra of multicharged ions”, Phys. Rev. Lett., 94, 243002, (2005). [DOI], [physics/0505081]. (Cited on page 33.)
 [14] Angstmann, E.J., Dzuba, V.A. and Flambaum, V.V., “Atomic clocks and the search for variation of the fine structure constant”, Phys. Rev. A, 70, 014102, (2004). [DOI], [physics/0407141]. (Cited on pages 45 and 105.)
 [15] Angstmann, E.J., Dzuba, V.A., Flambaum, V.V., Nevsky, A.Y. and Karshenboim, S.G., “Narrow atomic transitions with enhanced sensitivity to variation of the fine structure constant”, J. Phys. B: At. Mol. Opt. Phys., 39, 1937, (2006). [DOI], [physics/0511180]. (Cited on page 33.)
 [16] Arai, K., Hashimoto, M. and Fukui, T., “Primordial nucleosynthesis in the Brans-Dicke theory with a variable cosmological term”, Astron. Astrophys., 179, 17, (1987). [ADS]. (Cited on page 83.)
 [17] Ashby, N., Heavner, T.P., Jefferts, S.R., Parker, T.E., Radnaev, A.G. and Dudin, Y.O., “Testing Local Position Invariance with Four Cesium-Fountain Primary Frequency Standards and Four NIST Hydrogen Masers”, Phys. Rev. Lett., 98, 070802, (2007). [DOI]. (Cited on page 105.)
 [18]Ashenfelter, T., Mathews, G.J. and Olive, K.A., “The chemical evolution of Mg isotopes vs. the time variation of the fine structure constant”, Phys. Rev. Lett., 92, 041102, (2004). [DOI], [astroph/0309197]. (Cited on page 49.)ADSCrossRefGoogle Scholar
 [19]Audi, G., “The history of nuclidic masses and of their evaluation”, Int. J. Mass Spectrom., 251, 85–94, (2006). [DOI], [physics/0602050]. (Cited on page 74.)ADSCrossRefGoogle Scholar
[20] Avelino, P.P., Martins, C.J.A.P., Nunes, N.J. and Olive, K.A., “Reconstructing the dark energy equation of state with varying couplings”, Phys. Rev. D, 74, 083508, (2006). [DOI], [astro-ph/0605690]. (Cited on pages 25 and 102.)
[21] Avelino, P.P., Martins, C.J.A.P. and Rocha, G., “Looking for a varying α in the cosmic microwave background”, Phys. Rev. D, 62, 123508, (2000). [DOI], [astro-ph/0008446]. (Cited on page 65.)
[22] Avelino, P.P. et al., “Early-universe constraints on a time-varying fine structure constant”, Phys. Rev. D, 64, 103505, (2001). [DOI], [astro-ph/0102144]. (Cited on pages 65 and 67.)
[23] Baeßler, S., Heckel, B.R., Adelberger, E.G., Gundlach, J.H., Schmidt, U. and Swanson, H.E., “Improved Test of the Equivalence Principle for Gravitational Self-Energy”, Phys. Rev. Lett., 83, 3585–3588, (1999). [DOI]. (Cited on page 19.)
[24] Bahcall, J.N., Steinhardt, C.L. and Schlegel, D., “Does the fine-structure constant vary with cosmological epoch?”, Astrophys. J., 600, 520, (2004). [DOI], [astro-ph/0301507]. (Cited on page 58.)
[25] Bambi, C. and Drago, A., “Constraints on temporal variation of fundamental constants from GRBs”, Astropart. Phys., 29, 223, (2008). [DOI], [arXiv:0711.3569 [hep-ph]]. (Cited on page 82.)
[26] Barrow, J.D., “A cosmological limit on the possible variation of G”, Mon. Not. R. Astron. Soc., 184, 677, (1978). (Cited on pages 83 and 84.)
[27] Barrow, J.D., “Natural Units Before Planck”, Quart. J. R. Astron. Soc., 24, 24–26, (1983). [ADS]. (Cited on page 15.)
[28] Barrow, J.D., “Observational limits on the time evolution of extra spatial dimensions”, Phys. Rev. D, 35, 1805, (1987). [DOI]. (Cited on page 89.)
[29] Barrow, J.D., The Constants of Nature: From Alpha to Omega — The Numbers that Encode the Deepest Secrets of the Universe, (Jonathan Cape, London, 2002). (Cited on pages 9 and 14.)
[30] Barrow, J.D., “Cosmological bounds on spatial variations of physical constants”, Phys. Rev. D, 71, 083520, (2005). [DOI], [astro-ph/0503434]. (Cited on page 106.)
[31] Barrow, J.D., “Varying constants”, Philos. Trans. R. Soc. London, Ser. A, 363, 2139, (2005). [astro-ph/0511440]. (Cited on page 8.)
[32] Barrow, J.D. and Li, B., “Varying-alpha cosmologies with potentials”, Phys. Rev. D, 78, 083536, (2008). [DOI], [arXiv:0808.1580 [gr-qc]]. (Cited on page 100.)
[33] Barrow, J.D. and Magueijo, J., “Can a changing α explain the Supernovae results?”, Astrophys. J., 532, L87, (2000). [DOI], [astro-ph/9907354]. (Cited on page 24.)
[34] Barrow, J.D. and Shaw, D.J., “Varying-alpha: new constraints from seasonal variations”, Phys. Rev. D, 78, 067304, (2008). [DOI], [arXiv:0806.4317 [hep-ph]]. (Cited on page 105.)
[35] Barrow, J.D. and Tipler, F.J., The Anthropic Cosmological Principle, (Oxford University Press, Oxford; New York, 1986). [Google Books]. (Cited on page 112.)
[36] Battye, R.A., Crittenden, R. and Weller, J., “Cosmic concordance and the fine structure constant”, Phys. Rev. D, 63, 043505, (2001). [DOI], [astro-ph/0008265]. (Cited on page 65.)
[37] Bauch, A. and Weyers, S., “New experimental limit on the validity of local position invariance”, Phys. Rev. D, 65, 081101(R), (2002). [DOI]. (Cited on page 104.)
[38] Beane, S.R. and Savage, M.J., “Variation of fundamental couplings and nuclear forces”, Nucl. Phys. A, 717, 91, (2003). [DOI], [hep-ph/0206113]. (Cited on pages 74 and 95.)
[39] Bekenstein, J.D., “Fine-structure constant: Is it really a constant?”, Phys. Rev. D, 25, 1527, (1982). [DOI]. (Cited on pages 100, 101, and 108.)
[40] Bekenstein, J.D., “Fine-structure constant variability, equivalence principle and cosmology”, Phys. Rev. D, 66, 123514, (2002). [DOI]. (Cited on page 100.)
[41] Bekenstein, J.D. and Schiffer, M., “Varying fine-structure ‘constant’ and charged black holes”, Phys. Rev. D, 80, 123508, (2009). [DOI], [arXiv:0906.4557 [gr-qc]]. (Cited on page 101.)
[42] Beloy, K., Borschevsky, A., Schwerdtfeger, P. and Flambaum, V.V., “Enhanced Sensitivity to the Time Variation of the Fine-Structure Constant and m_{p}/m_{e} in Diatomic Molecules: A Closer Examination of Silicon Monobromide”, Phys. Rev. A, 82, 022106, (2010). [DOI], [arXiv:1007.0393 [physics.atom-ph]]. (Cited on page 33.)
[43] Benvenuto, O.G., García-Berro, E. and Isern, J., “Asteroseismology bound on Ġ/G from pulsating white dwarfs”, Phys. Rev. D, 69, 082002, (2004). [DOI]. (Cited on page 81.)
[44] Berengut, J.C., Dzuba, V.A. and Flambaum, V.V., “Enhanced Laboratory Sensitivity to Variation of the Fine-Structure Constant using Highly Charged Ions”, Phys. Rev. Lett., 105, 120801, (2010). [DOI], [arXiv:1007.1068 [physics.atom-ph]]. (Cited on page 33.)
[45] Berengut, J.C., Dzuba, V.A., Flambaum, V.V., Kozlov, M.G., Marchenko, M.V., Murphy, M.T. and Webb, J.K., “Laboratory spectroscopy and the search for space-time variation of the fine structure constant using QSO spectra”, arXiv, e-print, (2006). [arXiv:physics/0408017]. (Cited on page 45.)
[46] Berengut, J.C., Dzuba, V.A., Flambaum, V.V. and Porsev, S.G., “A proposed experimental method to determine α-sensitivity of splitting between ground and 7.6 eV isomeric states in ^{229}Th”, Phys. Rev. Lett., 102, 210801, (2009). [DOI], [arXiv:0903.1891 [physics.atom-ph]]. (Cited on page 34.)
[47] Berengut, J.C. and Flambaum, V.V., “Astronomical and laboratory searches for space-time variation of fundamental constants”, J. Phys.: Conf. Ser., 264, 012010, (2010). [DOI], [arXiv:1009.3693 [physics.atom-ph]]. (Cited on page 8.)
[48] Berengut, J.C. and Flambaum, V.V., “Manifestations of a spatial variation of fundamental constants on atomic clocks, Oklo, meteorites, and cosmological phenomena”, arXiv, e-print, (2010). [arXiv:1008.3957 [physics.atom-ph]]. (Cited on page 51.)
[49] Berengut, J.C., Flambaum, V.V. and Dmitriev, V.F., “Effect of quark-mass variation on big bang nucleosynthesis”, Phys. Lett. B, 683, 114, (2010). [arXiv:0907.2288 [nucl-th]]. (Cited on page 73.)
[50] Berengut, J.C., Flambaum, V.V., King, J.A., Curran, S.J. and Webb, J.K., “Is there further evidence for spatial variation of fundamental constants?”, arXiv, e-print, (2010). [arXiv:1009.0591 [astro-ph.CO]]. (Cited on pages 51 and 106.)
[51] Bergström, L., Iguri, S. and Rubinstein, H., “Constraints on the variation of the fine structure constant from big bang nucleosynthesis”, Phys. Rev. D, 60, 045005, (1999). [DOI], [astro-ph/9902157]. (Cited on page 70.)
[52] Bertolami, O., Lehnert, R., Potting, R. and Ribeiro, A., “Cosmological acceleration, varying couplings, and Lorentz breaking”, Phys. Rev. D, 69, 083513, (2004). [DOI], [arXiv:astro-ph/0310344]. (Cited on page 22.)
[53] Bertotti, B., Iess, L. and Tortora, P., “A test of general relativity using radio links with the Cassini spacecraft”, Nature, 425, 374–376, (2003). [DOI]. (Cited on page 20.)
[54] Biesiada, M. and Malec, B., “A new white dwarf constraint on the rate of change of the gravitational constant”, Mon. Not. R. Astron. Soc., 350, 644, (2004). [DOI], [astro-ph/0303489]. (Cited on page 81.)
[55] BIPM, The International System of Units (SI), (BIPM, Sèvres, 2006), 8th edition. Online version (accessed 1 March 2011): http://www.bipm.org/en/si/si_brochure/. (Cited on page 15.)
[56] Birge, R.T., “Probable Values of the General Physical Constants”, Rev. Mod. Phys., 1, 1, (1929). (Cited on page 13.)
[57] Bize, S. et al., “Testing the Stability of Fundamental Constants with ^{199}Hg^{+} Single-Ion Optical Clock”, Phys. Rev. Lett., 90, 150802, (2003). [DOI], [physics/0212109]. (Cited on pages 29 and 30.)
[58] Bize, S. et al., “Cold atom clocks and applications”, J. Phys. B: At. Mol. Opt. Phys., 38, S449–S468, (2005). [DOI], [physics/0502117]. (Cited on page 29.)
[59] Bjorken, J.D., “Standard Model Parameters and the Cosmological Constant”, Phys. Rev. D, 64, 085008, (2001). [DOI], [hep-ph/0103349]. (Cited on page 9.)
[60] Blanchet, L., “Gravitational Radiation from Post-Newtonian Sources and Inspiralling Compact Binaries”, Living Rev. Relativity, 9, lrr-2006-4, (2006). [gr-qc/0202016]. URL (accessed 27 September 2010): http://www.livingreviews.org/lrr-2006-4. (Cited on page 20.)
[61] Blatt, S. et al., “New Limits on Coupling of Fundamental Constants to Gravity Using ^{87}Sr Optical Lattice Clocks”, Phys. Rev. Lett., 100, 140801, (2008). [DOI], [arXiv:0801.1874 [physics.atom-ph]]. (Cited on pages 29, 31, and 105.)
[62] Bohlin, R., Jenkins, E.B., Spitzer Jr., L., York, D.G., Hill, J.K., Savage, B.D. and Snow Jr., T.P., “A survey of ultraviolet interstellar absorption lines”, Astrophys. J. Suppl. Ser., 51, 277–308, (1983). [DOI]. (Cited on page 60.)
[63] Bonifacio, P. et al., “First stars VII — Lithium in extremely metal-poor dwarfs”, Astron. Astrophys., 462, 851–864, (2007). [DOI], [astro-ph/0610245]. (Cited on page 69.)
[64] Bostrom, N., Anthropic Bias: Observation Selection Effects in Science and Philosophy, (Routledge, New York; London, 2002). [Google Books]. (Cited on pages 110 and 113.)
[65] Bousso, R., Hall, L.J. and Nomura, Y., “Multiverse understanding of cosmological coincidences”, Phys. Rev. D, 80, 063510, (2009). [DOI], [arXiv:0902.2263 [hep-th]]. (Cited on page 112.)
[66] Bousso, R. and Polchinski, J., “Quantization of Four-form Fluxes and Dynamical Neutralization of the Cosmological Constant”, J. High Energy Phys., 2000(06), 006, (2000). [DOI], [hep-th/0004134]. (Cited on page 110.)
[67] Brans, C. and Dicke, R.H., “Mach’s Principle and a Relativistic Theory of Gravitation”, Phys. Rev., 124, 925–935, (1961). [DOI]. (Cited on pages 7 and 85.)
[68] Brax, P. and Martin, J., “Dark Energy and the MSSM”, Phys. Rev. D, 75, 083507, (2007). [DOI], [hep-th/0605228]. (Cited on page 109.)
[69] Brax, P. and Martin, J., “Moduli Fields as Quintessence and the Chameleon”, Phys. Lett. B, 647, 320, (2007). [hep-th/0612208]. (Cited on page 94.)
[70] Brax, P., van de Bruck, C., Davis, A.C., Khoury, J. and Weltman, A., “Detecting dark energy in orbit: The cosmological chameleon”, Phys. Rev. D, 70, 123518, (2004). [DOI], [astro-ph/0408415]. (Cited on page 99.)
[71] Brax, P., van de Bruck, C., Mota, D.F., Nunes, N.J. and Winther, H.A., “Chameleons with field-dependent couplings”, Phys. Rev. D, 82, (2010). [DOI], [arXiv:1006.2796 [astro-ph.CO]]. (Cited on page 99.)
[72] Bronnikov, K.A. and Kononogov, S.A., “Possible variations of the fine structure constant α and their metrological significance”, Metrologia, 43, R1, (2006). [DOI], [gr-qc/0604002]. (Cited on page 8.)
[73] Byrne, M. and Kolda, C., “Quintessence and varying α from shape moduli”, arXiv, e-print, (2004). [arXiv:hep-ph/0402075]. (Cited on page 101.)
[74] Calmet, X. and Fritzsch, H., “The Cosmological Evolution of the Nucleon Mass and the Electroweak Coupling Constants”, Eur. Phys. J. C, 24, 639–642, (2002). [DOI], [hep-ph/0112110]. (Cited on page 93.)
[75] Calmet, X. and Fritzsch, H., “Symmetry Breaking and Time Variation of Gauge Couplings”, Phys. Lett. B, 540, 173, (2002). [hep-ph/0204258]. (Cited on page 93.)
[76] Calmet, X. and Fritzsch, H., “A time variation of proton-electron mass ratio and grand unification”, Europhys. Lett., 76, 1064, (2006). [DOI], [astro-ph/0605232]. (Cited on page 93.)
[77] Campbell, B.A. and Olive, K.A., “Nucleosynthesis and the time dependence of fundamental couplings”, Phys. Lett. B, 345, 429–434, (1995). [hep-ph/9411272]. (Cited on pages 70, 75, and 93.)
[78] Carilli, C.L. et al., “Astronomical Constraints on the Cosmic Evolution of the Fine Structure Constant and Possible Quantum Dimensions”, Phys. Rev. Lett., 85, 5511–5514, (2000). [DOI]. (Cited on page 53.)
[79] Carr, B.J., ed., Universe or Multiverse?, (Cambridge University Press, Cambridge; New York, 2007). [Google Books]. (Cited on page 110.)
[80] Carr, B.J. and Rees, M.J., “The anthropic principle and the structure of the physical world”, Nature, 278, 605–612, (1979). [DOI]. (Cited on pages 7 and 111.)
[81] Carroll, S.M., “Quintessence and the Rest of the World: Suppressing Long-Range Interactions”, Phys. Rev. Lett., 81, 3067–3070, (1998). [DOI]. (Cited on pages 23 and 24.)
[82] Carter, B., “Large number coincidences and the anthropic principle in cosmology”, in Longair, M.S., ed., Confrontation of Cosmological Theories with Observational Data, Proceedings of the 63rd Symposium of the International Astronomical Union (Copernicus Symposium II), held in Cracow, Poland, 10–12 September 1973, pp. 291–298, (Reidel, Dordrecht, 1974). [ADS]. (Cited on pages 7, 110, and 112.)
[83] Carter, B., “The anthropic principle and its implication for biological evolution”, Philos. Trans. R. Soc. London, Ser. A, 310, 347, (1983). [DOI]. (Cited on page 7.)
[84] Casas, J.A., García-Bellido, J. and Quirós, M., “Nucleosynthesis Bounds on Jordan-Brans-Dicke Theories of Gravity”, Mod. Phys. Lett. A, 7, 447, (1992). [DOI]. (Cited on page 83.)
[85] Cembranos, J.A.R., Olive, K.A., Peloso, M. and Uzan, J.-P., “Quantum corrections to the cosmological evolution of conformally coupled fields”, J. Cosmol. Astropart. Phys., 2009(07), 025, (2009). [DOI], [arXiv:0905.1989 [astro-ph.CO]]. (Cited on page 87.)
[86] Centurión, M., Molaro, P. and Levshakov, S., “Calibration issues in Δα/α”, Mem. Soc. Astron. Ital., 80, 929, (2009). (Cited on page 50.)
[87] Chacko, Z., Grojean, C. and Perelstein, M., “Fine structure constant variation from a late phase transition”, Phys. Lett. B, 565, 169, (2003). [hep-ph/0204142]. (Cited on page 101.)
[88] Chamoun, N., Landau, S.J., Mosquera, M.E. and Vucetich, H., “Helium and deuterium abundances as a test for the time variation of the baryonic density, fine structure constant and the Higgs vacuum expectation value”, J. Phys. G: Nucl. Part. Phys., 34, 163, (2007). [DOI], [astro-ph/0508378]. (Cited on page 74.)
[89] Chan, K.C. and Chu, M.C., “Constraining the variation of G by cosmic microwave background anisotropies”, Phys. Rev. D, 75, 083521, (2007). [DOI], [astro-ph/0611851]. (Cited on page 83.)
[90] Chand, H., Petitjean, P., Srianand, R. and Aracil, B., “Probing the cosmological variation of the fine-structure constant: Results based on VLT-UVES sample”, Astron. Astrophys., 417, 853, (2004). [DOI], [astro-ph/0401094]. (Cited on pages 49, 50, and 59.)
[91] Chand, H., Petitjean, P., Srianand, R. and Aracil, B., “Probing the time-variation of the fine-structure constant: Results based on Si IV doublets from a UVES sample”, Astron. Astrophys., 430, 47–58, (2005). [DOI], [astro-ph/0408200]. (Cited on pages 46, 47, and 59.)
[92] Chand, H., Petitjean, P., Srianand, R. and Aracil, B., “On the variation of the fine-structure constant: Very high resolution spectrum of QSO HE 0515-4414”, Astron. Astrophys., 451, 45, (2006). [DOI], [astro-ph/0601194]. (Cited on page 51.)
[93] Chandler, J.F., Reasenberg, R.D. and Shapiro, “New bounds on Ġ”, Bull. Am. Astron. Soc., 25, 1233, (1993). (Cited on page 77.)
[94] Chen, X. and Kamionkowski, M., “Cosmic microwave background temperature and polarization anisotropy in Brans-Dicke cosmology”, Phys. Rev. D, 60, 104036, (1999). [DOI]. (Cited on page 83.)
[95] Chengalur, J.N. and Kanekar, N., “Constraining the variation of fundamental constants using 18 cm OH lines”, Phys. Rev. Lett., 91, 241302, (2003). [DOI], [astro-ph/0310764]. (Cited on pages 53, 54, and 59.)
[96] Chiba, T. and Kohri, K., “Quintessence cosmology and varying α”, Prog. Theor. Phys., 107, 631, (2002). [DOI], [hep-ph/0111086]. (Cited on pages 24, 102, and 108.)
[97] Chiba, T., Kobayashi, T., Yamaguchi, M. and Yokoyama, J., “Time variation of proton-electron mass ratio and fine structure constant with runaway dilaton”, Phys. Rev. D, 75, 043516, (2007). [DOI], [hep-ph/0610027]. (Cited on page 99.)
[98] Chin, C. and Flambaum, V.V., “Enhancement of variation of fundamental constants in ultracold atom and molecule systems near Feshbach resonances”, Phys. Rev. Lett., 96, 230801, (2006). [DOI], [cond-mat/0603607]. (Cited on page 33.)
[99] Chupp, T.E., Hoare, R.J., Loveman, R.A., Oteiza, E.R., Richardson, J.M., Wagshul, M.E. and Thompson, A.K., “Results of a new test of local Lorentz invariance: A search for mass anisotropy in ^{21}Ne”, Phys. Rev. Lett., 63, 1541–1545, (1989). [DOI]. (Cited on page 18.)
[100] Cingöz, A., Lapierre, A., Nguyen, A.T., Leefer, N., Budker, D., Lamoreaux, S.K. and Torgerson, J.R., “Limit on the Temporal Variation of the Fine-Structure Constant Using Atomic Dysprosium”, Phys. Rev. Lett., 98, 040801, (2007). [DOI], [physics/0609014]. (Cited on pages 29, 31, and 104.)
[101] Civitarese, O., Moliné, M.A. and Mosquera, M.E., “Cosmological bounds to the variation of the Higgs vacuum expectation value: BBN constraints”, Nucl. Phys. A, 846, 157, (2010). [DOI]. (Cited on page 75.)
[102] Clifton, T., Barrow, J.D. and Scherrer, R.J., “Constraints on the variation of G from primordial nucleosynthesis”, Phys. Rev. D, 71, 123526, (2005). [DOI]. (Cited on page 83.)
[103] Coc, A., Ekström, S., Descouvemont, P., Meynet, G., Olive, K.A., Uzan, J.-P. and Vangioni, E., “Constraints on the variations of fundamental couplings by stellar models”, Mem. Soc. Astron. Ital., 80, 809–813, (2009). [ADS]. (Cited on pages 61 and 63.)
[104] Coc, A., Nunes, N.J., Olive, K.A., Uzan, J.-P. and Vangioni, E., “Coupled variations of the fundamental couplings and primordial nucleosynthesis”, Phys. Rev. D, 76, 023511, (2007). [DOI], [astro-ph/0610733]. (Cited on pages 63, 71, 75, 93, and 94.)
[105] Coc, A., Olive, K.A., Uzan, J.-P. and Vangioni, E., “Big bang nucleosynthesis constraints on scalar-tensor theories of gravity”, Phys. Rev. D, 73, 083525, (2006). [DOI], [astro-ph/0601299]. (Cited on pages 72, 75, 83, 84, and 87.)
[106] Coc, A., Olive, K.A., Uzan, J.-P. and Vangioni, E., “Non-universal scalar-tensor theories and big bang nucleosynthesis”, Phys. Rev. D, 79, 103512, (2009). [DOI]. (Cited on pages 83 and 87.)
[107] Coc, A. and Vangioni, E., “Big-Bang Nucleosynthesis with updated nuclear data”, J. Phys.: Conf. Ser., 202, 012001, (2010). [DOI]. (Cited on pages 69 and 72.)
[108] Coc, A., Vangioni-Flam, E., Descouvemont, P., Adahchour, A. and Angulo, C., “Updated big bang nucleosynthesis compared with Wilkinson Microwave Anisotropy Probe observations and the abundance of light elements”, Astrophys. J., 600, 544, (2004). [DOI], [astro-ph/0309480]. (Cited on pages 69 and 87.)
[109] Combes, F., “Radio measurements of constant variation, and perspective with ALMA”, Mem. Soc. Astron. Ital., 80, 888, (2009). (Cited on page 46.)
[110] Cook, A.H., “Secular changes of the units and constants of physics”, Nature, 180, 1194, (1957). [DOI]. (Cited on page 17.)
[111] Cook, C.W., Fowler, W.A., Lauritsen, C.C. and Lauritsen, T., “B^{12}, C^{12}, and the Red Giants”, Phys. Rev., 107, 508, (1957). [DOI]. (Cited on page 61.)
[112] Copeland, E.J., Nunes, N.J. and Pospelov, M., “Models of quintessence coupled to the electromagnetic field and the cosmological evolution of α”, Phys. Rev. D, 69, 023501, (2004). [DOI], [hep-ph/0307299]. (Cited on pages 24 and 102.)
[113] Copi, C.J., Davis, A.N. and Krauss, L.M., “New Nucleosynthesis Constraint on the Variation of G”, Phys. Rev. Lett., 92, 171301, (2004). [DOI]. (Cited on page 83.)
[114] Cremmer, E. and Scherk, J., “Spontaneous Compactification of Extra Space Dimensions”, Nucl. Phys. B, 118, 61, (1977). [DOI]. (Cited on page 89.)
[115] Cristiani, S. et al., “The CODEX-ESPRESSO experiment: cosmic dynamics, fundamental physics, planets and much more…”, Nuovo Cimento B, 122, 1165–1170, (2007). [DOI], [arXiv:0712.4152 [astro-ph]]. (Cited on page 60.)
[116] Cyburt, R.H., Fields, B.D. and Olive, K.A., “An update on the big bang nucleosynthesis prediction for ^{7}Li: the problem worsens”, J. Cosmol. Astropart. Phys., 2008(11), 012, (2008). [DOI], [arXiv:0808.2818 [astro-ph]]. (Cited on page 70.)
[117] Cyburt, R.H., Fields, B.D., Olive, K.A. and Skillman, E., “New BBN limits on physics beyond the standard model from ^{4}He”, Astropart. Phys., 23, 313–323, (2005). [DOI], [astro-ph/0408033]. (Cited on pages 68 and 84.)
[118] Damour, T., “Testing the equivalence principle: why and how?”, Class. Quantum Grav., 13, A33–A41, (1996). [DOI], [gr-qc/9606080]. (Cited on pages 21 and 107.)
[119] Damour, T., “The Equivalence Principle and the Constants of Nature”, Space Sci. Rev., 148, 191, (2009). [DOI], [arXiv:0906.3174 [gr-qc]]. (Cited on page 8.)
[120] Damour, T. and Donoghue, J.F., “Constraints on the variability of quark masses from nuclear binding”, Phys. Rev. D, 78, 014014, (2008). [DOI], [arXiv:0712.2968 [hep-ph]]. (Cited on pages 21, 22, 95, and 111.)
[121] Damour, T. and Donoghue, J.F., “Equivalence Principle Violations and Couplings of a Light Dilaton”, Phys. Rev. D, 82, 084033, (2010). [arXiv:1007.2792 [gr-qc]]. (Cited on pages 99 and 107.)
[122] Damour, T. and Donoghue, J.F., “Phenomenology of the Equivalence Principle with Light Scalars”, Class. Quantum Grav., 27, 202001, (2010). [DOI], [arXiv:1007.2790 [gr-qc]]. (Cited on pages 21, 22, 99, and 107.)
[123] Damour, T. and Dyson, F.J., “The Oklo bound on the time variation of the fine-structure constant revisited”, Nucl. Phys. B, 480, 37–54, (1996). [DOI], [hep-ph/9606486]. (Cited on pages 37, 38, and 39.)
[124] Damour, T. and Esposito-Farèse, G., “Tensor-multi-scalar theories of gravitation”, Class. Quantum Grav., 9, 2093–2176, (1992). [DOI]. (Cited on pages 7, 85, 103, and 104.)
[125] Damour, T. and Esposito-Farèse, G., “Gravitational-wave versus binary-pulsar tests of strong-field gravity”, Phys. Rev. D, 58, 042001, (1998). [DOI]. (Cited on pages 20 and 87.)
[126] Damour, T., Gibbons, G.W. and Gundlach, C., “Dark matter, time-varying G, and a dilaton field”, Phys. Rev. Lett., 64, 123, (1990). [DOI]. (Cited on page 100.)
[127] Damour, T., Gibbons, G.W. and Taylor, J.H., “Limits on the Variability of G Using Binary-Pulsar Data”, Phys. Rev. Lett., 61, 1151–1154, (1988). [DOI], [ADS]. (Cited on page 78.)
[128] Damour, T. and Gundlach, C., “Nucleosynthesis constraints on an extended Jordan-Brans-Dicke theory”, Phys. Rev. D, 43, 3873, (1991). [DOI]. (Cited on page 83.)
[129] Damour, T. and Lilley, M., “String theory, gravity and experiment”, in Bachas, C., Baulieu, L., Douglas, M., Kiritsis, E., Rabinovici, E., Vanhove, P., Windey, P. and Cugliandolo, L.F., eds., String Theory and the Real World: From Particle Physics to Astrophysics, Proceedings of the Les Houches Summer School, Session LXXXVII, 2 July–27 July 2007, Les Houches Summer School Proceedings, 87, pp. 371–448, (Elsevier, Amsterdam, 2008). (Cited on pages 20, 95, and 97.)
[130] Damour, T. and Nordtvedt, K., “General relativity as a cosmological attractor of tensor-scalar theories”, Phys. Rev. Lett., 70, 2217–2219, (1993). [DOI]. (Cited on page 87.)
[131] Damour, T. and Nordtvedt, K., “Tensor-scalar cosmological models and their relaxation toward general relativity”, Phys. Rev. D, 48, 3436–3450, (1993). [DOI]. (Cited on page 87.)
[132] Damour, T., Piazza, F. and Veneziano, G., “Runaway dilaton and equivalence principle violations”, Phys. Rev. Lett., 89, 081601, (2002). [DOI], [gr-qc/0204094]. (Cited on page 24.)
[133] Damour, T., Piazza, F. and Veneziano, G., “Violations of the equivalence principle in a dilaton-runaway scenario”, Phys. Rev. D, 66, 046007, (2002). [DOI], [hep-th/0205111]. (Cited on pages 24 and 99.)
[134] Damour, T. and Pichon, B., “Big bang nucleosynthesis and tensor-scalar gravity”, Phys. Rev. D, 59, 123502, (1999). [DOI], [astro-ph/9807176]. (Cited on pages 24, 75, 83, 84, and 87.)
[135] Damour, T. and Polyakov, A.M., “The string dilaton and a least coupling principle”, Nucl. Phys. B, 423, 532–558, (1994). [DOI], [hep-th/9401069]. (Cited on pages 22, 88, 93, 97, 98, and 108.)
[136] Damour, T. and Polyakov, A.M., “String theory and gravity”, Gen. Relativ. Gravit., 26, 1171, (1994). [DOI], [gr-qc/9411069]. (Cited on pages 88, 93, 97, and 108.)
[137] Damour, T. and Taylor, J.H., “On the Orbital Period Change of the Binary Pulsar PSR 1913+16”, Astrophys. J., 366, 501–511, (1991). [DOI], [ADS]. (Cited on page 78.)
[138] Darling, J., “A laboratory for constraining cosmic evolution of the fine-structure constant: conjugate 18 centimeter OH lines toward PKS 1413+135 at z = 0.2467”, Astrophys. J., 612, 58, (2004). [DOI], [astro-ph/0405240]. (Cited on pages 54 and 59.)
[139] Davies, P.C.W., Davis, T.M. and Lineweaver, C.H., “Cosmology: Black holes constrain varying constants”, Nature, 418, 602, (2002). [DOI]. (Cited on page 101.)
[140] Degl’Innocenti, S. et al., “Time variation of Newton’s constant and the age of globular clusters”, Astron. Astrophys., 312, 345, (1996). (Cited on pages 79 and 80.)
 [141]Demarque, P., Krauss, L.M., Guenther, D.B. and Nydam, D., “The Sun as a probe of varying G”, Astrophys. J., 437, 870, (1994). [DOI]. (Cited on page 80.)ADSCrossRefGoogle Scholar
 [142]Dent, T., “Varying alpha, thresholds and fermion masses”, Nucl. Phys. B, 677, 471–484, (2004). [DOI], [hepph/0305026]. (Cited on page 94.)ADSCrossRefGoogle Scholar
 [143]Dent, T., “Compositiondependent long range forces from varying m_{p}/m_{e}”, J. Cosmol. Astropart. Phys., 2007(01), 013, (2007). [DOI], [hepph/0608067]. (Cited on pages 21 and 108.)CrossRefGoogle Scholar
 [144]Dent, T., “Eötvös bounds on couplings of fundamental parameters to gravity”, Phys. Rev. Lett., 101, 041102, (2008). [DOI], [arXiv:0805.0318 [hepph]]. (Cited on page 109.)ADSMathSciNetCrossRefGoogle Scholar
 [145]Dent, T. and Fairbairn, M., “Time varying coupling strength, nuclear forces and unification”, Nucl. Phys. B, 653, 256, (2003). [DOI], [hepph/0112279]. (Cited on pages 95 and 111.)ADSCrossRefGoogle Scholar
 [146]Dent, T., Stern, S. and Wetterich, C., “Primordial nucleosynthesis as a probe of fundamental physics parameters”, Phys. Rev. D, 76, 063513, (2007). [DOI], [arXiv:0705.0696 [astroph]]. (Cited on pages 73, 75, and 94.)ADSCrossRefGoogle Scholar
 [147]Dent, T., Stern, S. and Wetterich, C., “Unifying cosmological and recent time variations of fundamental couplings”, Phys. Rev. D, 78, 103518, (2008). [DOI], [arXiv:0808.0702 [hepph]]. (Cited on pages 43, 52, and 94.)ADSCrossRefGoogle Scholar
 [148]Dent, T., Stern, S. and Wetterich, C., “Competing bounds on the presentday time variation of fundamental constants”, Phys. Rev. D, 79, 083533, (2009). [DOI], [arXiv:0812.4130 [hepph]]. (Cited on page 109.)ADSCrossRefGoogle Scholar
 [149]Dent, T., Stern, S. and Wetterich, C., “Time variation of fundamental couplings and dynamical dark energy”, J. Cosmol. Astropart. Phys., 2009(01), 038, (2009). [DOI], [arXiv:0809.4628 [hepph]]. (Cited on pages 94 and 108.)CrossRefGoogle Scholar
[150] Dicke, R.H., “Dirac’s Cosmology and the Dating of Meteorites”, Nature, 183, 170–171, (1959). [DOI]. (Cited on page 42.)
[151] Dicke, R.H., “Dirac’s Cosmology and Mach’s Principle”, Nature, 192, 440, (1961). [DOI]. (Cited on pages 7 and 110.)
[152] Dicke, R.H., “Experimental relativity”, in DeWitt, C.M. and DeWitt, B.S., eds., Relativity, Groups and Topology. Relativité, Groupes et Topologie, Lectures delivered at Les Houches during the 1963 session of the Summer School of Theoretical Physics, University of Grenoble, pp. 165–313, (Gordon and Breach, New York; London, 1964). (Cited on pages 21 and 100.)
[153] Dine, M., Nir, Y., Raz, G. and Volansky, T., “Time Variations in the Scale of Grand Unification”, Phys. Rev. D, 67, 015009, (2003). [DOI], [hep-ph/0209134]. (Cited on page 94.)
[154] Dinh, T.H., Dunning, A., Dzuba, V.A. and Flambaum, V.V., “The sensitivity of hyperfine structure to nuclear radius and quark mass variation”, Phys. Rev. A, 79, 054102, (2009). [DOI], [arXiv:0903.2090 [physics.atom-ph]]. (Cited on page 34.)
[155] Dirac, P.A.M., “The cosmological constants”, Nature, 139, 323, (1937). [DOI]. (Cited on pages 7, 76, and 110.)
[156] Dirac, P.A.M., “A new basis for cosmology”, Proc. R. Soc. London, Ser. A, 165, 199–208, (1938). [ADS]. (Cited on page 7.)
[157] Dmitriev, V.F. and Flambaum, V.V., “Limits on cosmological variation of quark masses and strong interaction”, Phys. Rev. D, 67, 063513, (2003). [DOI], [astro-ph/0209409]. (Cited on pages 22 and 71.)
[158] Dmitriev, V.F., Flambaum, V.V. and Webb, J.K., “Cosmological variation of deuteron binding energy, strong interaction and quark masses from big bang nucleosynthesis”, Phys. Rev. D, 69, 063506, (2004). [DOI], [astro-ph/0310892]. (Cited on page 71.)
[159] Donoghue, J.F., “The nuclear central force in the chiral limit”, Phys. Rev. C, 74, 024002, (2006). [DOI], [nucl-th/0603016]. (Cited on pages 22, 95, and 96.)
[160] Donoghue, J.F., Dutta, K. and Ross, A., “Quark and lepton masses and mixing in the landscape”, Phys. Rev. D, 73, 113002, (2006). [DOI], [hep-ph/0511219]. (Cited on page 111.)
[161] Donoghue, J.F., Dutta, K., Ross, A. and Tegmark, M., “Likely values of the Higgs vev”, Phys. Rev. D, 81, 073003, (2010). [DOI], [arXiv:0903.1024 [hep-ph]]. (Cited on page 111.)
[162] Doran, M., “Can we test Dark Energy with Running Fundamental Constants?”, J. Cosmol. Astropart. Phys., 2005(04), 016, (2005). [DOI], [astro-ph/0411606]. (Cited on pages 24, 25, and 102.)
[163] Dudas, E., “Theory and phenomenology of type I strings and M theory”, Class. Quantum Grav., 17, R41, (2000). [DOI]. (Cited on page 91.)
[164] Duff, M.J., “Comment on time-variation of fundamental constants”, arXiv e-print, (2002). [hep-th/0208093]. (Cited on page 17.)
[165] Duff, M.J., Okun, L.B. and Veneziano, G., “Trialogue on the number of fundamental constants”, J. High Energy Phys., 2002(03), 023, (2002). [DOI], [physics/0110060]. (Cited on pages 9 and 16.)
[166] Dvali, G. and Zaldarriaga, M., “Changing α with Time: Implications For Fifth-Force-Type Experiments and Quintessence”, Phys. Rev. Lett., 88, 091303, (2002). [DOI], [hep-ph/0108217]. (Cited on pages 22, 24, and 108.)
[167] Dyson, F.J., “Time variation of the charge of the proton”, Phys. Rev. Lett., 19, 1291, (1967). [DOI]. (Cited on page 43.)
[168] Dyson, F.J., “The Fundamental Constants and Their Time Variation”, in Salam, A. and Wigner, E.P., eds., Aspects of Quantum Theory, pp. 213–236, (Cambridge University Press, Cambridge; New York, 1972). [Google Books]. (Cited on pages 40 and 42.)
[169] Dzuba, V.A. and Flambaum, V.V., “Atomic optical clocks and search for the variation of the fine-structure constant”, Phys. Rev. A, 61, 034502, (2000). [DOI]. (Cited on page 44.)
[170] Dzuba, V.A. and Flambaum, V.V., “Atomic clocks and search for variation of the fine structure constant”, Phys. Rev. A, 61, 034502, (2001). [DOI]. (Cited on page 28.)
[171] Dzuba, V.A. and Flambaum, V.V., “Fine-structure and search of variation of the fine-structure constant in laboratory experiments”, Phys. Rev. A, 72, 052514, (2005). [DOI], [physics/0510072]. (Cited on page 33.)
[172] Dzuba, V.A. and Flambaum, V.V., “Sensitivity of the energy levels of singly ionized cobalt to the variation of the fine structure constant”, Phys. Rev. A, 81, 034501, (2010). [DOI], [arXiv:1002.1750 [astro-ph.CO]]. (Cited on page 60.)
[173] Dzuba, V.A. and Flambaum, V.V., “Theoretical study of the experimentally important states of dysprosium”, Phys. Rev. A, 81, 052515, (2010). [DOI], [arXiv:1003.1184 [physics.atom-ph]]. (Cited on page 31.)
[174] Dzuba, V.A., Flambaum, V.V. and Marchenko, M.V., “Relativistic effect in Sr, Dy, Yb II, and Yb III and search for variation of the fine structure constant”, Phys. Rev. A, 68, 022506, (2003). [DOI], [physics/0305066]. (Cited on pages 28 and 31.)
[175] Dzuba, V.A., Flambaum, V.V. and Webb, J.K., “Calculations of the relativistic effects in many electron atoms and space-time variation of fundamental constants”, Phys. Rev. A, 59, 230, (1999). [DOI], [physics/9808021]. (Cited on pages 28, 31, 44, and 45.)
[176] Dzuba, V.A., Flambaum, V.V. and Webb, J.K., “Space-time variation of physical constants and relativistic corrections in atoms”, Phys. Rev. Lett., 82, 888, (1999). [DOI]. (Cited on page 47.)
[177] Eardley, D.M., “Observable effects of a scalar gravitational field in a binary pulsar”, Astrophys. J. Lett., 196, L59–L62, (1975). [DOI], [ADS]. (Cited on pages 77 and 78.)
[178] Eddington, A., Relativity Theory of Protons and Electrons, (Cambridge University Press, Cambridge, 1936). (Cited on page 110.)
[179] Eddington, A., Fundamental Theory, (Cambridge University Press, Cambridge, 1948). (Cited on page 110.)
[180] Ekström, S., Coc, A., Descouvemont, P., Meynet, G., Olive, K.A., Uzan, J.-P. and Vangioni, E., “Effects of the variation of fundamental constants on Population III stellar evolution”, Astron. Astrophys., 514, A62, (2010). [DOI], [arXiv:0911.2420 [astro-ph.SR]]. (Cited on page 63.)
[181] Ekström, S., Meynet, G., Chiappini, C., Hirschi, R. and Maeder, A., “Effects of rotation on the evolution of primordial stars”, Astron. Astrophys., 489, 685, (2008). [DOI], [arXiv:0807.0573 [astro-ph]]. (Cited on page 63.)
[182] Ellis, G.F.R., Kirchner, U. and Stoeger, W.R., “Multiverses and physical cosmology”, Mon. Not. R. Astron. Soc., 347, 921, (2004). [DOI], [astro-ph/0305292]. (Cited on page 113.)
[183] Ellis, G.F.R. and Uzan, J.-P., “‘c’ is the speed of light, isn’t it?”, Am. J. Phys., 73, 240–247, (2005). [DOI], [gr-qc/0305099]. (Cited on pages 14, 86, and 101.)
[184] Ellis, J., Ibáñez, L. and Ross, G.G., “Grand Unification with Large Supersymmetry Breaking”, Phys. Lett. B, 113, 283–287, (1982). [DOI]. (Cited on page 93.)
[185] Ellis, J., Ibáñez, L. and Ross, G.G., “SU(2)_{L} × U(1) Symmetry Breaking as a Radiative Effect of Supersymmetry Breaking in GUTs”, Phys. Lett. B, 110, 215–220, (1982). [DOI]. (Cited on page 93.)
[186] Ellis, J., Kalara, S., Olive, K.A. and Wetterich, C., “Density-dependent couplings and astrophysical bounds on light scalar particles”, Phys. Lett. B, 228, 264, (1989). (Cited on page 99.)
[187] Ellison, S.L., Ryan, S.G. and Prochaska, J.X., “The first detection of cobalt in a damped Lyman alpha system”, Mon. Not. R. Astron. Soc., 326, 628, (2001). [DOI], [astro-ph/0104301]. (Cited on page 60.)
[188] Epelbaum, E., Meißner, U.-G. and Glöckle, W., “Nuclear forces in the chiral limit”, Nucl. Phys. A, 714, 535–574, (2003). [DOI], [nucl-th/0207089]. (Cited on pages 74 and 95.)
[189] Esposito-Farèse, G., “Tests of Alternative Theories of Gravity”, in Hewett, J., Jaros, J., Kamae, T. and Prescott, C., eds., Gravity in the Quantum World and the Cosmos, Proceedings of the 33rd SLAC Summer Institute on Particle Physics (SSI 2005), Menlo Park, USA, 25 July–5 August 2005, 819, (SLAC, Stanford, 2005). URL (accessed 27 September 2010): http://www.slac.stanford.edu/econf/C0507252/papers/T025.PDF. (Cited on pages 20 and 87.)
[190] Esposito-Farèse, G., “Motion in alternative theories of gravity”, in Blanchet, L., Spallicci, A. and Whiting, B., eds., Mass and Motion in General Relativity, Lectures from the CNRS School on Mass held in Orléans, France, 23–25 June 2008, Fundamental Theories of Physics, 162, pp. 461–489, (Springer, Berlin; New York, 2011). [DOI], [arXiv:0905.2575 [gr-qc]]. (Cited on page 18.)
[191] Esposito-Farèse, G. and Polarski, D., “Scalar-tensor gravity in an accelerating universe”, Phys. Rev. D, 63, 063504, (2001). [DOI], [gr-qc/0009034]. (Cited on page 86.)
[192] Fenner, Y., Murphy, M.T. and Gibson, B.K., “On variations in the fine-structure constant and stellar pollution of quasar absorption systems”, Mon. Not. R. Astron. Soc., 358, 468, (2005). [DOI], [astro-ph/0501168]. (Cited on page 49.)
[193] Ferrell, S.J. et al., “Investigation of the gravitational potential dependence of the fine-structure constant using atomic dysprosium”, Phys. Rev. A, 76, 062104, (2007). [DOI], [arXiv:0708.0569 [physics.atom-ph]]. (Cited on page 104.)
[194] Ferrero, A. and Altschul, B., “Limits on the Time Variation of the Fermi Constant G_{F} Based on Type Ia Supernova Observations”, Phys. Rev. D, 82, 123002, 1–8, (2010). [DOI], [arXiv:1008.4769 [hep-ph]]. (Cited on page 81.)
[195] Fierz, M., “On the physical interpretation of P. Jordan’s extended theory of gravitation”, Helv. Phys. Acta, 29, 128, (1956). (Cited on pages 7, 85, and 100.)
[196] Fischer, M. et al., “New limits on the drift of fundamental constants from laboratory measurements”, Phys. Rev. Lett., 92, 230802, (2004). [DOI], [physics/0312086]. (Cited on pages 29 and 30.)
[197] Flambaum, V.V., “Limits on temporal variation of quark masses and strong interaction from atomic clock experiments”, arXiv e-print, (2003). [physics/0302015]. (Cited on page 96.)
[198] Flambaum, V.V., “Limits on temporal variation of fine structure constant, quark masses and strong interaction from atomic clock experiments”, in Hannaford, P., Sidorov, A., Bachor, H. and Baldwin, K., eds., Laser Spectroscopy, Proceedings of the XVI International Conference, Palm Cove, Australia, 13–18 July 2003, pp. 49–57, (World Scientific, Singapore, 2004). [physics/0309107]. (Cited on pages 28, 32, and 33.)
[199] Flambaum, V.V., “Enhanced effect of temporal variation of the fine-structure constant and the strong interaction in ^{229}Th”, Phys. Rev. Lett., 97, 092502, (2006). [DOI], [physics/0604188]. (Cited on pages 33 and 34.)
[200] Flambaum, V.V. and Dzuba, V.A., “Search for variation of the fundamental constants in atomic, molecular and nuclear spectra”, Can. J. Phys., 87, 25, (2009). [DOI], [arXiv:0805.0462 [physics.atom-ph]]. (Cited on page 33.)
[201] Flambaum, V.V. and Kozlov, M.G., “Enhanced sensitivity to time-variation of m_{p}/m_{e} in the inversion spectrum of ammonia”, Phys. Rev. Lett., 98, 240801, (2007). [DOI], [arXiv:0704.2301 [astro-ph]]. (Cited on page 57.)
[202] Flambaum, V.V. and Kozlov, M.G., “Enhanced sensitivity to variation of the fine structure constant and m_{p}/m_{e} in diatomic molecules”, Phys. Rev. Lett., 99, 150801, (2007). [DOI], [arXiv:0705.0849 [physics.atom-ph]]. (Cited on pages 33 and 57.)
[203] Flambaum, V.V., Lambert, S. and Pospelov, M., “Scalar-tensor theories with pseudoscalar couplings”, Phys. Rev. D, 80, 105021, (2009). [DOI], [arXiv:0902.3217 [hep-ph]]. (Cited on page 101.)
[204] Flambaum, V.V., Leinweber, D.B., Thomas, A.W. and Young, R.D., “Limits on the temporal variation of the fine structure constant, quark masses and strong interaction from quasar absorption spectra and atomic clock experiments”, Phys. Rev. D, 69, 115006, (2004). [hep-ph/0402098]. (Cited on pages 33, 95, and 96.)
[205] Flambaum, V.V. and Porsev, S.G., “Enhanced sensitivity to the fine-structure constant variation in Th IV atomic clock transition”, Phys. Rev. A, 80, 064502, (2009). [DOI], [arXiv:0910.3459 [physics.atom-ph]]. (Cited on page 33.)
[206] Flambaum, V.V. and Porsev, S.G., “Comment on ‘21-cm Radiation: A New Probe of Variation in the Fine-Structure Constant’”, Phys. Rev. Lett., 105, 039001, (2010). [DOI], [arXiv:1004.2540 [astro-ph.CO]]. (Cited on page 68.)
[207] Flambaum, V.V. and Shuryak, E.V., “Limits on cosmological variation of strong interaction and quark masses from big bang nucleosynthesis, cosmic, laboratory and Oklo data”, Phys. Rev. D, 65, 103503, (2002). [DOI], [hep-ph/0201303]. (Cited on pages 39, 71, 73, and 74.)
[208] Flambaum, V.V. and Shuryak, E.V., “Dependence of hadronic properties on quark masses and constraints on their cosmological variation”, Phys. Rev. D, 67, 083507, (2003). [DOI], [hep-ph/0212403]. (Cited on pages 22, 40, 71, 74, and 95.)
[209] Flambaum, V.V. and Shuryak, E.V., “How changing physical constants and violation of local position invariance may occur?”, in Danielewicz, P., Piecuch, P. and Zelevinsky, V., eds., Nuclei and Mesoscopic Physics, Workshop in East Lansing (Michigan), 20–22 October 2007, AIP Conference Proceedings, 995, pp. 1–11, (American Institute of Physics, Melville, NY, 2008). [DOI], [physics/0701220]. (Cited on pages 104 and 105.)
[210] Flambaum, V.V. and Tedesco, A.F., “Dependence of nuclear magnetic moments on quark masses and limits on temporal variation of fundamental constants from atomic clock experiments”, Phys. Rev. C, 73, 055501, (2006). [DOI], [nucl-th/060150]. (Cited on pages 28, 32, 33, and 105.)
[211] Flambaum, V.V. and Wiringa, R.B., “Dependence of nuclear binding on hadronic mass variation”, Phys. Rev. C, 76, 054002, (2007). [DOI], [arXiv:0709.0077 [nucl-th]]. (Cited on page 96.)
[212] Flambaum, V.V. and Wiringa, R.B., “Enhanced effect of quark mass variation in ^{229}Th and limits from Oklo data”, Phys. Rev. C, 79, 034302, (2009). [DOI], [arXiv:0807.4943 [nucl-th]]. (Cited on page 39.)
[213] Flowers, J.L. and Petley, B.W., “Progress in our knowledge of the fundamental constants of physics”, Rep. Prog. Phys., 64, 1191, (2001). [DOI]. (Cited on pages 10 and 13.)
[214] Fortier, T.M. et al., “Precision atomic spectroscopy for improved limits on variation of the fine structure constant and local position invariance”, Phys. Rev. Lett., 98, 070801, (2007). [DOI]. (Cited on pages 29, 30, and 104.)
[215] Fritzsch, H., The fundamental constants, a mystery of physics, (World Scientific, Singapore, 2009). (Cited on page 9.)
[216] Fritzsch, H., “The Fundamental Constants in Physics”, Phys. Usp., 52, 359, (2009). [DOI], [arXiv:0902.2989 [hep-ph]]. (Cited on page 9.)
[217] Fujii, Y., “Accelerating universe and the time-dependent fine-structure constant”, Mem. Soc. Astron. Ital., 80, 780, (2009). (Cited on page 102.)
[218] Fujii, Y. and Iwamoto, A., “Re/Os constraint on the time variability of the fine structure constant”, Phys. Rev. Lett., 91, 261101, (2003). [DOI], [hep-ph/0309087]. (Cited on pages 41 and 44.)
[219] Fujii, Y. and Iwamoto, A., “How strongly does dating meteorites constrain the time-dependence of the fine-structure constant?”, Mod. Phys. Lett. A, 20, 2417–2434, (2005). [DOI], [hep-ph/0508072]. (Cited on pages 41 and 44.)
[220] Fujii, Y., Iwamoto, A., Fukahori, T., Ohnuki, T., Nakagawa, M., Hidaka, H., Oura, Y. and Möller, P., “The nuclear interaction at Oklo 2 billion years ago”, Nucl. Phys. B, 573, 377, (2000). [DOI], [hep-ph/9809549]. (Cited on pages 37, 38, and 39.)
[221] Furlanetto, S.R., Oh, S.P. and Briggs, F.H., “Cosmology at low frequencies: The 21 cm transition and the high-redshift universe”, Phys. Rep., 433, 181, (2006). [DOI], [astro-ph/0608032]. (Cited on pages 66, 67, and 68.)
[222] Furnstahl, R.J. and Serot, B.D., “Parameter counting in relativistic mean-field models”, Nucl. Phys. A, 671, 447, (2000). [DOI], [nucl-th/9911019]. (Cited on page 95.)
[223] Gambini, R. and Pullin, J., “Discrete Quantum Gravity: A Mechanism for Selecting the Value of Fundamental Constants”, Int. J. Mod. Phys. D, 12, 1775–1781, (2003). [DOI], [gr-qc/0306095]. (Cited on page 101.)
[224] Gamow, G., “Electricity, gravity and cosmology”, Phys. Rev. Lett., 19, 759, (1967). (Cited on pages 63, 76, and 79.)
[225] García-Berro, E., Hernanz, M., Isern, J. and Mochkovitch, R., “The rate of change of the gravitational constant and the cooling of white dwarfs”, Mon. Not. R. Astron. Soc., 277, 801–810, (1995). [ADS]. (Cited on page 81.)
[226] García-Berro, E., Isern, J. and Kubyshin, Y.A., “Astronomical measurements and constraints on the variability of fundamental constants”, Astron. Astrophys. Rev., 14, 113–170, (2007). [DOI], [astro-ph/0409424]. (Cited on page 8.)
[227] García-Berro, E., Kubyshin, Y., Loren-Aguilar, P. and Isern, J., “The variation of the gravitational constant inferred from the Hubble diagram of Type Ia supernovae”, Int. J. Mod. Phys. D, 15, 1163–1174, (2006). [DOI], [gr-qc/0512164]. (Cited on page 81.)
[228] Garriga, J. and Vilenkin, A., “On likely values of the cosmological constant”, Phys. Rev. D, 61, 083502, (2000). [DOI], [astro-ph/9908115]. (Cited on page 113.)
[229] Gasperini, M., Piazza, F. and Veneziano, G., “Quintessence as a runaway dilaton”, Phys. Rev. D, 65, 023508, (2002). [DOI]. (Cited on pages 23 and 99.)
[230] Gasser, J. and Leutwyler, H., “Quark Masses”, Phys. Rep., 87, 77, (1982). [DOI]. (Cited on pages 22 and 73.)
[231] Gay, P.L. and Lambert, D.L., “The Isotopic Abundances of Magnesium in Stars”, Astrophys. J., 533, 260, (2000). [DOI], [astro-ph/9911217]. (Cited on page 49.)
[232] Gaztañaga, E., García-Berro, E., Isern, J., Bravo, E. and Dominguez, I., “Bounds on the possible evolution of the gravitational constant from cosmological type Ia supernovae”, Phys. Rev. D, 65, 023506, (2002). (Cited on page 81.)
[233] Goldman, I., “Upper limit on G variability derived from the spin-down of PSR 0655+64”, Mon. Not. R. Astron. Soc., 244, 184–187, (1990). [ADS]. (Cited on page 78.)
 [234]