
1 Introduction

Physiological rhythms are central to life. Some rhythms appear only during certain phases of an individual’s life, like the somite clock during embryonic development, while others, like circadian clocks, are maintained throughout life. Understanding the mechanisms of physiological rhythms requires an approach that integrates mathematics and physiology. Of particular relevance is a branch of mathematics called nonlinear dynamics [1, 2]. Dynamics is the subject that deals with change, with systems that evolve in time. Whether the system in question settles down to an equilibrium, keeps repeating in cycles or does something more complicated, it is the theory of nonlinear dynamics that we use to analyze the behavior. The roots of nonlinear dynamics were laid by Henri Poincaré at the end of the nineteenth century, but the field has seen remarkable development over the past 50 years, especially in its application to biological systems.

The development of theoretical models in biology is not a recent endeavor. Conceptual models had been investigated, for example in population biology, long before the first genes were discovered. Nevertheless, with the molecular biology revolution of the 1980s, many new examples of gene regulatory and protein-interaction networks came to light. With so many feedback and feed-forward loops underlying the complex dynamics of cellular processes, many questions became impossible to answer with intuitive reasoning alone, and mathematical models gained popularity, especially in the context of oscillations and clocks, processes that mathematical models have a good chance not only of describing but also of explaining. Through numerical simulations, models can highlight the role of key parameters in oscillations and can be used to predict the system’s behavior in conditions that have not yet been experimentally tested. Mathematical models can also help in grasping the dynamic properties of molecular mechanisms that are responsible for the generation of robust oscillations, both at the cellular and inter-cellular level. They even provide tools to artificially construct biological networks that can aid in understanding the design principles of biochemical oscillating systems. Elowitz and Leibler pioneered this approach and designed a synthetic oscillating network termed the “repressilator” in Escherichia coli [3].

In this chapter we address how circadian clock models can be developed and what insights they provide. We start by introducing some important terms in Subheading 2 and exemplify the concepts with a simple generic oscillator model, the Goodwin model. We then apply the logic of the Goodwin model to understand how more complex models have been developed in the context of circadian clocks, and what mathematical biologists have learned from such models (Subheading 3). But circadian clocks do not exist in isolation; they are subjected to a number of inputs (light, feeding cues, etc.) and they also govern physiological output responses. Thus, in Subheading 4 we give an overview of the interaction of clocks with their environment. We end by summarizing the main points and reviewing modeling limitations. Throughout the chapter there are nine boxes with practical examples. Scripts for the analyses are provided as Supplementary Material. More extensive introductions to mathematical modeling can be found in the excellent textbooks of Glass and Mackey [1], Kaplan and Glass [4], Segel [5], Murray [6], Goldbeter [7], Ingalls [8] and Jackson [9].

2 Clock Modeling Fundamentals: Mathematical Preliminaries, Notations and Basic Concepts

Designing a model requires some understanding of the system of interest. It is necessary to gather and summarize information on the components and their key interactions. Because biological systems are typically of great complexity, it is important to differentiate between essential and superfluous variables. In this sense, drawing a scheme of the system of interest is often helpful, even before formulating equations. Putting all the known information on a single picture helps us clarify the nature of the interactions and order the different molecular processes. Contrary to many expectations, this first step in the process of building a model is often the most time-consuming.

Although the mathematical and computer tools that are used to simulate models are standard, there is no consensus on how to construct a model. This always requires some considerations and assumptions, and a number of questions naturally arise when we have to write the equations. What are the key variables? How many equations should be considered? What kind of equations? Are all kinetic constants (model parameters) known? If not, how can we set them? Modelers have to make choices that depend, first of all, on the biological question to be answered, but also on personal tastes and experiences. Simple generic models are useful to study general properties of circadian rhythms, such as coupling of large oscillator ensembles [10, 11], entrainment of clocks to external Zeitgebers [12,13,14] or the role of positive feedback loops in the generation of oscillations [15]. On the other hand, if the focus is to understand the molecular details, more complex models with a larger number of variables are generated. A number of detailed models are now available for the circadian clock in mammals [16,17,18,19], Neurospora [20,21,22] or Drosophila [23,24,25], among other organisms. The model, be it relatively simple, with just a few variables, or in contrast very complex, will be a precise representation of what we believe to be true. The modeler’s task, as the mathematical biologist Tyson says, is “to determine whether it is a good or useful representation of [that] truth” [26].

2.1 Ordinary Differential Equations (ODEs)

Most circadian clock models are described with ordinary differential equations (ODEs), which take the form

$$ \frac{dx}{dt}=f\left(x,y,z\right). $$
(1)

ODEs are often used to describe how a dynamical system changes over time [1, 2, 8, 26]. The function f(x, y, z) describes the rate of change of a variable x(t) as a function of (potentially) all the variables x, y, z… that define the dynamical system. For example, x(t) might stand for the concentration of a given protein, whose evolution depends on a number of time-dependent variables (amount of mRNA) and on time-independent parameters that have a physical interpretation (rate of synthesis, degradation, modification, complex formation, transport, etc.).

Box A Formulating a simple ODE

The concentration of a certain protein (let us call it variable y) is controlled by synthesis and degradation. Even though protein production processes are highly complex and might be modulated by ribosome and tRNA availability, a simple way to model production is with a first-order reaction, in which the absolute production rate of protein y is proportional to the mRNA abundance (let us call it x). Along the same lines, degradation processes can be assumed to follow first-order kinetics (absolute degradation rate proportional to the protein abundance), although there might be additional regulatory mechanisms. The differential equation for the protein concentration y then reads:

$$ \frac{dy}{dt}= px- dy $$
(2)

a linear first-order ODE with two parameters: production rate p and degradation rate d. We can read this equation as follows: the change of protein y over time (\( \frac{dy}{dt} \)) is equal to its production (proportional to the mRNA abundance x and the protein synthesis rate p) minus its degradation (proportional to the protein abundance y and the degradation rate d). (Note the negative sign in front of the degradation term, since it contributes to the removal of protein y.)

Using standard mass action and enzymatic kinetics, we can convert the network diagram that we have drawn from the biological system into a set of ODEs. In such equations, concentrations of variables (e.g., the reactant species) are associated with rates of biochemical reactions (transcription, translation, degradation, phosphorylation, etc.). The equations can then be solved numerically, that is, by letting the computer work out the implications of the complex feedback and feed-forward loops in the network, without having to solve the system analytically (i.e., by hand). Many programming languages have built-in routines for the numerical solution of ODEs, such as odeint (as part of the scipy library) in Python, ode (as part of the deSolve package) in R, or ode45 in Matlab, among others.
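As a minimal sketch of such a numerical solution, the Python snippet below integrates Eq. 2 with scipy’s odeint; the constant mRNA level x and the rate values are illustrative assumptions, not values taken from this chapter.

```python
# Minimal sketch: numerical solution of Eq. 2 with scipy's odeint.
# The constant mRNA level x and the rates p and d are illustrative assumptions.
import numpy as np
from scipy.integrate import odeint

p, d, x = 1.0, 0.2, 1.0      # synthesis rate, degradation rate, assumed constant mRNA level

def dydt(y, t):
    """Right-hand side of Eq. 2: first-order production minus first-order degradation."""
    return p * x - d * y

t = np.linspace(0, 48, 481)              # 48 h with 0.1 h resolution
y = odeint(dydt, 0.0, t)                 # start from zero protein
print(y[-1])                             # approaches the steady state p*x/d = 5
```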

Box B Goodwin model for circadian clocks, part I—Scheme and ODEs

One of the simplest and most famous ODE-based oscillator models is the one imagined by Goodwin [27]. In 1965, when Goodwin developed his model, the molecular mechanisms of circadian clocks were not yet known. He proposed the model as a prototypical biomolecular oscillator. The Goodwin model is based on a delayed negative feedback loop, where the final product of a 3-step chain of reactions inhibits the production of the first component (Fig. 1a, b).

Fig. 1

Goodwin model for self-sustained circadian oscillations. (a) Scheme of the model. (b) Ordinary differential equations of the model: three variables are considered and account for a clock mRNA (x) that produces a clock protein (y) which activates a transcriptional inhibitor (z). Production reactions are modeled with mass action kinetics; degradation reactions are modeled assuming Michaelis Menten kinetics; repression is modeled with a Hill equation

In the context of circadian rhythms, the model is interpreted as follows: a clock gene mRNA (x) produces a clock protein (y) that, in turn, activates a transcriptional inhibitor (z) that represses the synthesis of the x mRNA, closing the negative feedback loop. This generic model can be seen as the minimal backbone for circadian oscillations, as it accounts for the negative feedback exerted by PER and CRY proteins on their own genes. The Goodwin model, later refined by Gonze [10], is still used today to describe fundamental properties of the core circadian oscillator [15, 28, 29] or the synchronization of an ensemble of coupled circadian oscillators [10, 30, 31].

2.2 Limit Cycles

Most circadian clock models generate stable limit cycle oscillations. Limit cycles are isolated closed trajectories characterized by a given period and amplitude [1, 2, 32]. Isolated means that neighboring trajectories are not closed; they spiral either toward or away from the limit cycle [2]. If all neighboring trajectories approach the limit cycle, we say that the limit cycle is stable or attracting (Fig. 2a). This way, a small perturbation that pushes the system off a stable limit cycle will eventually die out, and the trajectory of the perturbed variable will be “attracted” back to the stable limit cycle (red and blue curves in Fig. 2a). In the terms of nonlinear dynamics, stable limit cycles represent a type of attractor, since any perturbed trajectory asymptotically returns to the limit cycle with time. Note that not all oscillations are limit cycles: some, like those idealized by the pendulum, represent another type of oscillator (conservative oscillators), in which neighboring trajectories are closed. In these oscillations, unlike in limit cycles, the amplitude depends on initial conditions (Fig. 2b) [2]. Limit cycle oscillations, in this sense, ensure robustness to small perturbations in the environment.

Fig. 2

Limit cycle oscillations (a) and conservative oscillations (b) in phase space. Limit cycles are isolated trajectories and thus any perturbation that pushes the system out of the cycle (red or blue curves) will dampen out, and the system will asymptotically return to the limit cycle (thick black line). Conservative oscillations, on the other hand, are not isolated. As a consequence, there is no damping and, after perturbation, the amplitude of the oscillation changes and does not recover its initial value

Limit cycles exhibit self-sustained oscillations, that is, they oscillate intrinsically, even in the absence of external periodic cues. This self-sustained oscillating nature is often observed among biological systems [2, 7, 33]. Phenomena such as heart beats, circadian clocks or neuronal activity are just a few biological limit cycle oscillations among countless examples [7]. In each case, if the system is slightly perturbed, it always returns to its standard cycle.

2.2.1 Cooking Recipe for Oscillations

For limit cycle oscillations to occur, the biological system of interest must fulfill a series of requirements that have been reviewed by Ferrell, Gonze and Tyson in [32, 34,35,36], among other theoreticians.

  1. 1.

    First of all, a negative feedback loop is necessary to carry the reaction network back to the point where the oscillation started [34, 35, 37].

  2. 2.

    Second, the negative feedback signal must be sufficiently delayed in time so that reactions do not settle on a stable steady state. This time delay can be achieved by explicitly introducing time delays in the equations (the so-called “delay-differential equations”), or by a long chain of reaction intermediates. The more variables in the loop, the longer the time delay is, and the “easier” it is to generate oscillations [34, 38].

  3. 3.

    In addition, nonlinear kinetic processes must be present in the system to destabilize the steady state. Such nonlinear processes are also commonly referred to as switches [22, 39] or ultrasensitive processes [40,41,42], and they help to keep the system away from the stable steady state. In Eq. 2, both the production and degradation terms of the protein concentration were linear; in the Goodwin model of Fig. 1b, however, degradation of all variables is assumed to follow nonlinear Michaelis Menten kinetics. Phosphorylation, active transport, cooperative binding, sequestration, or other enzymatic events are commonly described by nonlinear terms, such as Michaelis Menten- or Hill-like kinetics. These terms often provide the necessary source of nonlinearity [43,44,45].

  4. 4.

    Lastly, the system must be open; that is, it must be equipped with dissipative mechanisms (e.g., degradation processes) and sources of energy (e.g., mRNA or protein synthesis), so that oscillations that grow too large are damped and oscillations that become too small are pumped up [38].

The Goodwin model presented in Box B includes these four “ingredients” and thus, self-sustained oscillations can be expected for proper choice of parameter values.

Box C Goodwin model for circadian clocks, part II—Limit cycle oscillations

The original Goodwin model [27] only contains one nonlinear term, which is given by the Hill equation used to model the z-mediated repression of x. Griffith demonstrated that the Hill coefficient had to be sufficiently large (n = 8) for the model to generate self-sustained oscillations [46]. But, since such high Hill coefficients are not biologically meaningful, the original model was modified by Gonze and others by including additional nonlinearities. This reduced the need for such a high Hill coefficient [10, 15, 45]. In the modified Goodwin model represented in Fig. 1, there are two sources of nonlinearity: the repressive Hill term and the Michaelis Menten-like kinetics assumed for the degradation processes [10].

Numerical integration of the Goodwin model equations (Fig. 1b) over time, with an appropriate parameter choice, results in self-sustained limit cycle oscillations (Fig. 3). We can plot the solution as time series, which illustrate how the concentrations of the different species change over time (Fig. 3a), or in phase space, which depicts the space of all possible states (Fig. 3b).

Fig. 3

Limit cycle circadian oscillations of the Goodwin model. Limit cycle oscillations, plotted as time series (a) or in phase space (b). Results were obtained by numerical integration of the equations in Fig. 1b for the following parameter values: ν1 = 0.70 nM/h, ν2 = 0.45 nM/h, ν3 = 0.70 h−1, ν4 = 0.35 nM/h, ν5 = 0.70 h−1, ν6 = 0.35 nM/h, K1 = 1 nM, K2 = 1 nM, K4 = 1 nM, K6 = 1 nM, n = 7. Oscillations were normalized to their mean
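As a sketch of how Fig. 3 can be reproduced, the Python code below integrates a Goodwin model of the type shown in Fig. 1b. Since the exact equations are given only in the figure, this sketch assumes the commonly used Gonze-type formulation (mass-action production, Michaelis Menten degradation, Hill repression); parameter names and values follow the Fig. 3 caption.

```python
# Sketch of the modified Goodwin model (Boxes B and C), assuming the Gonze-type
# formulation with Hill repression and Michaelis-Menten degradation; the exact
# equations are given in Fig. 1b and the parameters in the Fig. 3 caption.
import numpy as np
from scipy.integrate import odeint

v1, v2, v3, v4, v5, v6 = 0.70, 0.45, 0.70, 0.35, 0.70, 0.35
K1, K2, K4, K6, n = 1.0, 1.0, 1.0, 1.0, 7

def goodwin(state, t):
    x, y, z = state
    dx = v1 * K1**n / (K1**n + z**n) - v2 * x / (K2 + x)   # transcription repressed by z
    dy = v3 * x - v4 * y / (K4 + y)                        # translation, saturating degradation
    dz = v5 * y - v6 * z / (K6 + z)                        # inhibitor activation, saturating degradation
    return [dx, dy, dz]

t = np.linspace(0, 10 * 24, 2401)                # 10 days
sol = odeint(goodwin, [0.1, 0.1, 0.1], t)        # columns: x, y, z
x_norm = sol[:, 0] / sol[:, 0].mean()            # mean-normalized, as plotted in Fig. 3a
```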

2.3 Bifurcation Diagrams

Another important term that frequently appears in the field of theoretical chronobiology is that of bifurcation diagrams. To understand this concept, we have to be aware that the dynamics of a system depend greatly on parameter values. Unfortunately, kinetic rates and equilibrium constants have not been measured experimentally in many cases, which poses a challenge for constructing the model. Although a model with several parameters gives rise to a combinatorially large space of possible parameter values, there are fortunately many constraints that allow us to narrow down the range of suitable values. In the context of circadian clocks, examples of such constraints might be given by the oscillation period (which needs to be circadian), by phase relationships of variables (if known), by the effects of some mutations (assuming that we know which parameters are affected) or by biochemical constants known from in vitro experiments. The unknown parameters that have to be “guessed” should be chosen within realistic physiological ranges.

Usually, when a parameter value changes, the characteristics of the limit cycle (i.e., period, amplitude, phase relationship between variables…) also change. These changes can be illustrated in bifurcation diagrams. Novak and Tyson make the analogy that bifurcation diagrams are for modelers what signal-response curves are for experimentalists [26]. In a physiology experiment, biologists measure how some behavior of the cell (e.g., oscillation amplitude or period) depends on the value of an experimentally controlled signal (e.g., the concentration of a certain synchronizing factor in the culture medium). The signal is held at a constant value until the response settles on a definitive value. Then, the signal is changed to a new value and the new response is recorded. A one-parameter bifurcation diagram illustrates the same concept. It shows how the final states of a mathematical model (e.g., period or amplitude of oscillations, or most commonly the maximum and minimum of a variable, plotted on the y axis) depend on a control parameter of the model (plotted on the x axis).

Variation in parameter values can cause qualitative changes in the long-term behavior of the system. For example, the number of steady states or their stability properties can vary. These qualitative changes in the dynamics are called bifurcations, and the parameter values at which they occur are called bifurcation points [2]. In the context of oscillations, Hopf bifurcations are the most important type of bifurcation point. They occur in dynamical systems when a periodic solution (limit cycle) arises from a stable steady state that loses its stability (Fig. 4). By manually exploring the parameter space (i.e., by analyzing how solutions change as parameter values are varied), we can make predictions of how the model might behave under different conditions and of its sensitivity to parameter changes.

Fig. 4

Bifurcation diagrams of the Goodwin model as a function of the x degradation rate ν2. Effect that changes in ν2 have on maxima and minima (a) or on the period (b) of x dynamics. The diagrams were built numerically by simulating the ODEs for each value of the control parameter ν2 (the rest of the parameters take their default value, given in the caption of Fig. 3) and retaining either the maximum and minimum values of the x oscillations, or the period, once the system converged to its stable regime. Maximum and minimum values or periods are plotted against the control parameter values. When the system converges to a stable steady state, maximum and minimum values are indistinguishable: a single point is plotted in the maximum-minimum bifurcation diagram, and no point is plotted in the period bifurcation plot. Oscillations vanish at ν2 = 0.93 nM/h (Hopf bifurcation point). Red points indicate the default parameter value that produces ∼24 h circadian oscillations, ν2 = 0.45 nM/h

Box D Goodwin model for circadian clocks, part III—Bifurcation diagrams

The bifurcation diagram of a given control parameter can be numerically constructed in the same way as Fig. 3 was built, but iterating this process over a range of parameter values. Thus, for each value of the control parameter, we let the computer solve the set of ODEs (i.e., we simulate the system) and retain the maximum and minimum values (or any other oscillation parameter) reached by a given variable once the system has converged to its stable regime. If the system converges to a steady state, then the minimum and maximum will be indistinguishable and a single point will be plotted on the bifurcation diagram. If, on the contrary, the system oscillates for the simulated parameter value, then two points will be plotted.

Such a maximum-minimum bifurcation plot is shown in Fig. 4a. We can also build the period bifurcation diagram (Fig. 4b), keeping in mind that the period can only be estimated when oscillations are present. The bifurcation plots show, firstly, that oscillations disappear when the degradation rate exceeds 0.93 nM/h (this is the Hopf bifurcation point). Moreover, they illustrate that both the amplitude and the period of x decrease as its degradation rate increases. The model thus predicts that decreasing the degradation rate should lead to oscillations with a longer period. In this way, modeling and bifurcation analyses can be used to predict the behavior of the system in conditions that have not yet been tested experimentally.
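A minimal sketch of this brute-force procedure is given below; it assumes the same Gonze-type Goodwin equations as in the sketch above and scans the degradation rate ν2 while keeping the other parameters at their Fig. 3 values.

```python
# Sketch of the brute-force bifurcation scan of Box D: for each value of the
# control parameter v2, simulate the (assumed Gonze-type) Goodwin model,
# discard the transient and record the maximum and minimum of x.
import numpy as np
from scipy.integrate import odeint

v1, v3, v4, v5, v6 = 0.70, 0.70, 0.35, 0.70, 0.35
K1, K2, K4, K6, n = 1.0, 1.0, 1.0, 1.0, 7

def goodwin(state, t, v2):
    x, y, z = state
    return [v1 * K1**n / (K1**n + z**n) - v2 * x / (K2 + x),
            v3 * x - v4 * y / (K4 + y),
            v5 * y - v6 * z / (K6 + z)]

t = np.linspace(0, 800, 8001)                          # long run so transients decay
for v2 in np.linspace(0.2, 1.0, 41):
    sol = odeint(goodwin, [0.1, 0.1, 0.1], t, args=(v2,))
    x_tail = sol[t > 600, 0]                           # converged (stable) regime only
    print(f"v2 = {v2:.2f}  min = {x_tail.min():.3f}  max = {x_tail.max():.3f}")
```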

Of note, there are other methods, such as those implemented in XPP-AUTO, that allow bifurcations to be computed without intensive simulation for each parameter value.

3 Learnings from Modeling Interlocked Feedback Loops

The Goodwin oscillator exemplified in Box B demonstrates that, with a minimum number of “ingredients”, it is possible to generate stable limit cycle oscillations (Fig. 3). In biological systems, however, the picture is more complex. Over recent years, the discovery of additional clock genes and regulations has led to the realization that the circadian timing system involves multiple sources of nonlinearity and interlocked feedback loops. Consequently, these findings have motivated the development of more detailed molecular models [16,17,18,19, 47,48,49] that contain additional positive and negative feedback loops, which have also been shown to contribute to the generation of stable and robust oscillations. Nevertheless, their structure most of the time relies on a Goodwin-like negative feedback loop.

Box E Positive feedback loops promote oscillations in the Goodwin model

There have been many useful refinements of the Goodwin oscillator. It has been shown, for example, that complementary positive feedback loops can enhance the capabilities of rhythm generation [15, 50,51,52]. However, it is important to note that not all feedback loops (positive or negative) are always explicitly visible. Sequestration (formation of inactive protein complexes), for instance, can also form implicit feedback loops. A prominent example is the sequestration of KaiA molecules in the cyanobacterial clock [53]. Although the majority of models use Hill functions to describe transcriptional repression, some have also introduced protein sequestration-based repression [54].

Ananthasubramaniam et al. showed very elegantly in 2014 that addition of positive feedback loops to the Goodwin model promotes oscillations at lower Hill coefficients. Figure 5 summarizes their findings: a Goodwin-like motif with a Hill coefficient n = 4 (lower n than in the simulations from Fig. 3, where n = 7) cannot oscillate; nevertheless, the same Goodwin-like motif with an explicit positive feedback loop on x (which can be interpreted as the BMAL1-Ror loop) generates limit cycle oscillations. In the study, the authors highlighted additional mechanisms that may facilitate the emergence of oscillations, such as cross-activation (explicit feedback loop) or Michaelis Menten degradation (implicit feedback loop) of variables [15].

Fig. 5

A positive feedback loop promotes oscillations in a Goodwin-like motif. (a) Simulation of a Goodwin-like motif with a Hill coefficient n = 4 does not produce self-sustained limit cycle oscillations. Instead, the system approaches a stable steady state. (b) When a self-activating positive feedback loop is included on x, limit cycle oscillations emerge for the same parameter values. Results were obtained by numerical integration of the equations in Fig. 1b for the following parameter values: ν1 = 0.70 nM/h, ν2 = 0.45 nM/h, ν3 = 0.70 h−1, ν4 = 0.35 nM/h, ν5 = 0.70 h−1, ν6 = 0.35 nM/h, K1 = 1 nM, K2 = 1 nM, K4 = 1 nM, K6 = 1 nM, n = 4. Oscillations were normalized to their mean

In 2011, Relógio et al. developed an extensive 19-variable ODE-based model. The Relógio system contains clock transcripts and cytoplasmic and nuclear proteins, either alone or in complex with other clock proteins [19]. It was built from available data on the phases and amplitudes of clock components to understand the mechanisms that govern circadian rhythm generation in mammalian cells [19]. It made it possible to independently study the roles of the Ror-BMAL1-RevErb and Per2:Cry1 loops, as well as the role of the Per2 degradation rate in the dynamics of the system. The authors provided in silico evidence, for the first time, that the Ror-BMAL1-RevErb loop could act as an oscillator independently of the Per2:Cry1 loop, and they showed that in silico overexpression of RevErbα and RevErbβ resulted in the loss of oscillations [19]. This theoretical prediction was experimentally validated one year later in mouse embryonic fibroblasts [55]. Taken together, the computational findings from the Relógio model challenged the view of the Ror-BMAL1-RevErb loop as a merely auxiliary loop and illustrate how models can be used to make predictions.

Along the same lines, a more recent study from Pett et al. showed that a repressilator motif containing regulated expression of Cry1, Per2 and RevErbα is sufficient to generate 24 h rhythmicity, thus constituting a core loop of the mammalian oscillator [48]. In a later bioinformatic study, the authors proposed that the feedback loops most essential for rhythm generation can differ among tissues [56]. It has been suggested that the primary rhythm-generating loop in the adrenal gland and heart is the BMAL1-RevErb loop, whereas self-inhibitions of the Per and Cry genes are more characteristic of models of suprachiasmatic nucleus clocks [56]. Of note, though, the authors did not use ODEs in their study, but instead delay differential equations, in which time delays are introduced explicitly in the equations.

A new simple ODE model of the mammalian circadian clockwork was published by Almeida in early 2020. It was tailored to identify the essential interactions that are needed to generate phase opposition between the activating CLOCK:BMAL1 and the repressing Per2:Cry1 complexes [49]. Van Soest and colleagues performed extensive bifurcation analyses on the Almeida model and found that changes in the degradation rates of clock proteins could generate arrhythmic dynamics. These findings suggested (and predicted) that not only knockout [57, 58] or overexpression [55] of core clock components can lead to arrhythmicity, but also changes in degradation or transcription activation rates. The in vitro or in vivo significance of such observations, however, remains to be validated.

The take-home message from these computational studies is that modeling can help to explore alternative network architectures and can in this way guide experimental research. Mathematical models are usually made to answer a specific question, but a big advantage of modeling is that we can then make predictions and ask additional questions that can later be tested (and hopefully validated) experimentally.

4 Interaction of Clocks with the Environment

The major role of circadian rhythms is to coordinate physiological and behavioral processes with the natural daily variation. To do this, molecular circadian clocks need to integrate signals from the external world (Zeitgebers) and to transmit such signals to the whole organism. Figure 6 illustrates the key paradigm in biological clock research: although clocks are able to tick by themselves, they respond to inputs (Zeitgebers) and perform output responses (coordination of physiology or behavior with external time).

Fig. 6

Simplified scheme of circadian clock systems. Circadian clock systems consist of a network of input pathways that integrate external Zeitgeber timing cues, the central oscillator (pacemaker) and output pathways. Central oscillators generate the endogenous rhythm and must be able to synchronize to environmental Zeitgebers (e.g., light, food, temperature) via input pathways. Consequently, pacemakers drive output pathways (e.g., physiology, behavior) and clock-controlled activities by synchronizing downstream oscillators

In mathematical terms, a free-running clock is an autonomous system. An autonomous system is a system of ordinary differential equations that does not explicitly depend on the independent variable. When the independent variable is time, as in our case, such systems are also called time-invariant. When mathematical biologists want to model how external signals confer timekeeping information to an autonomous clock, they typically add a time-dependent term to the right-hand side of an ODE. This is commonly referred to as adding a “forcing” or “driving” term to the system, and the resulting system is said to be forced, driven or non-autonomous (since it now depends on time explicitly) (Box F).

Box F Amplitude-phase models

Amplitude-phase oscillators are among the most abstract yet intuitive classes of models and can be related to any observed rhythm. They describe the dynamics of a system phenomenologically, independently of any molecular details [11, 14, 32]. Such models have been used in clock research to study generic properties of phase response curves [59], entrainment [12, 14], the behavior of ensembles of coupled oscillators [11, 38, 60,61,62,63], or to interpret experimental results [64]. They are described with only two variables, namely the radius r and phase ϕ of the oscillation, and thus they do not necessarily account for the levels of a given protein or transcript. The equations, in polar coordinates, read:

$$ \frac{dr}{dt}=\lambda r\left(A_0-r\right), \qquad \frac{d\phi}{dt}=\frac{2\pi}{\tau} $$
(3)

where r and ϕ represent the variables (radius and phase, respectively), and λ, A0 and τ, the parameters (amplitude relaxation rate, oscillation amplitude and period, respectively). The amplitude relaxation rate is a relatively abstract concept that describes how fast a perturbation relaxes back to the limit cycle [11, 12, 38].

Basic trigonometry allows the transformation of any point between polar and Cartesian coordinates, since \( r=\sqrt{x^2+{y}^2} \) and \( \phi =\arctan \left(\frac{y}{x}\right) \) (Fig. 7). Thus, the amplitude-phase model can be converted into Cartesian coordinates, reading:

$$ \frac{dx}{dt}=\lambda x\left(A_0-\sqrt{x^2+y^2}\right)-\frac{2\pi}{\tau}y, \qquad \frac{dy}{dt}=\lambda y\left(A_0-\sqrt{x^2+y^2}\right)+\frac{2\pi}{\tau}x. $$
(4)

Period and amplitude can easily be calculated from the oscillatory time series. Amplitude relaxation rate can be determined by estimating the rate at which a perturbation decays back to the limit cycle. The reader is encouraged to calculate this parameter for any variable of the Goodwin oscillator described in Fig. 3.
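As a sketch of this procedure, the code below integrates the amplitude-phase model of Eq. 4 from an initial condition off the limit cycle and estimates the relaxation rate from the exponential decay of the radial perturbation; for small perturbations the fitted rate should approach λA0. The parameter values are illustrative assumptions, and the same recipe can be adapted to any variable of the Goodwin oscillator of Fig. 3.

```python
# Sketch: estimate the amplitude relaxation rate of the amplitude-phase model
# (Eq. 4) by fitting the exponential decay of the radial perturbation |r - A0|.
# Parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import odeint

lam, A0, tau = 0.1, 1.0, 24.0        # relaxation rate (1/h), amplitude, period (h)

def amp_phase(state, t):
    x, y = state
    r = np.sqrt(x**2 + y**2)
    return [lam * x * (A0 - r) - 2 * np.pi / tau * y,
            lam * y * (A0 - r) + 2 * np.pi / tau * x]

t = np.linspace(0, 72, 721)
sol = odeint(amp_phase, [1.5, 0.0], t)               # start at r = 1.5, off the limit cycle
r = np.sqrt(sol[:, 0]**2 + sol[:, 1]**2)
slope, _ = np.polyfit(t, np.log(np.abs(r - A0)), 1)  # log-linear fit of the decay
print(f"estimated relaxation rate: {-slope:.3f} 1/h (expected ~ lambda*A0 = {lam * A0})")
```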

Fig. 7

Converting between polar and Cartesian coordinates. A point in polar coordinates is characterized by the variables r and ϕ. Cartesian coordinates x and y can be converted to polar coordinates (with r > 0) by \( r=\sqrt{x^2+{y}^2} \) (as in the Pythagorean theorem) and \( \phi =\arctan \left(\frac{y}{x}\right) \)

So far, the right-hand sides of the amplitude-phase model ODEs do not contain time t. Thus, these equations describe an autonomous system that oscillates by itself, i.e., in the absence of external timing cues. But clocks usually respond to external timekeeping cues, and thus they can be driven (forced) by these signals. Assuming that the forcing is done by a sinusoidal Zeitgeber Z(t) with period T and amplitude F,

$$ Z(t)=F\cos \left(\frac{2\pi }{T}t+\phi \right), $$
(5)

and that the Zeitgeber Z(t) drives oscillations of x, we can now incorporate the forcing term (Eq. 5) in the right-hand side of the x ODE (Eq. 4) as follows:

$$ \frac{dx}{dt}=\lambda x\left(A_0-\sqrt{x^2+y^2}\right)-\frac{2\pi}{\tau}y+Z(t), \qquad \frac{dy}{dt}=\lambda y\left(A_0-\sqrt{x^2+y^2}\right)+\frac{2\pi}{\tau}x. $$
(6)

The system has become forced or non-autonomous, since time t is now explicitly included in the ODEs inside the Z(t) term.
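A minimal sketch of this forced system is shown below: the oscillator parameters follow the Fig. 9 caption (A0 = 1, τ = 24 h, λ = 0.01 h−1), while the Zeitgeber values F = 0.1 and T = 22 h are assumed as an example of a Zeitgeber strong enough to entrain the clock away from its intrinsic period.

```python
# Sketch of the driven (non-autonomous) amplitude-phase model of Eq. 6:
# a sinusoidal Zeitgeber Z(t) of period T is added to the x equation.
# Oscillator parameters as in the Fig. 9 caption; F and T are assumed examples.
import numpy as np
from scipy.integrate import odeint

lam, A0, tau = 0.01, 1.0, 24.0           # oscillator parameters (Fig. 9 caption)
F, T, phi_z = 0.1, 22.0, 0.0             # Zeitgeber strength, period and phase (assumed)

def forced(state, t):
    x, y = state
    r = np.sqrt(x**2 + y**2)
    Z = F * np.cos(2 * np.pi / T * t + phi_z)                # Eq. 5
    return [lam * x * (A0 - r) - 2 * np.pi / tau * y + Z,    # Eq. 6, x equation
            lam * y * (A0 - r) + 2 * np.pi / tau * x]

t = np.linspace(0, 60 * 24, 60 * 24 * 10 + 1)                # 60 days
x = odeint(forced, [A0, 0.0], t)[:, 0]
peaks = t[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]      # local maxima of x
print("mean peak-to-peak interval over the last 10 cycles (h):",
      np.diff(peaks[-10:]).mean())                           # ~T = 22 h if entrained
```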

4.1 Coupled Oscillators, Synchronization and Entrainment

Both theoretical and experimental scientists have long been puzzled by the spontaneous order (and thus synchronization) that exists in the universe. The “science of synchronization” centers on the study of coupled oscillators, which are widespread throughout biological systems: groups of fireflies, pacemaker cells or circadian clocks are collections of oscillators in which one can find some “underlying order”. Understanding the basic rules of coupled oscillator theory can help us gain insight into how coupling results in synchronization or entrainment.

Two or more oscillators are said to be coupled if some physical or chemical process allows them to influence one another [2]. Fireflies communicate with light, heart cells exchange electrical currents… The result of this mutual influence is often synchrony. When synchrony occurs, the oscillators acquire a rational \( \frac{m}{n} \) frequency ratio, meaning that one oscillator undergoes m cycles in the time in which the second undergoes n cycles. Examples include the 1:4 frequency locking between respiratory and cardiac rhythms in some individuals (i.e., one inhalation and exhalation for every four heart beats) [65, 66], or the 1:1 synchronization of circadian clocks to the day-night cycle. We can distinguish between intrinsic or multidirectional and extrinsic or unidirectional coupling in biological systems:

  • Intrinsic or multidirectional coupling can be seen as the coupling that exists among oscillators without explicit external time information. Here, all oscillators exchange information with one another. For example, coupling among cardiac cells exists to produce a coherent heart beat as output response. In the context of circadian clocks, it is often said that feedback loops that comprise the core clock network are coupled: in principle, different feedback loops are able to oscillate independently [56], but all loops are coupled such that they all oscillate with a 24 h periodicity. Another example of intrinsic coupling occurs clearly in neurons from the suprachiasmatic nucleus (SCN). Although circadian rhythms can be observed on the single cell level [67, 68], synaptic connections, gap junctions and neurotransmitters are believed to couple (and thus synchronize) SCN neurons in a robust manner [68].

  • Extrinsic or unidirectional coupling, on the other hand, requires an explicit periodic signal (Zeitgeber) present in the surrounding of an oscillator, such as the alternation of night and day, feeding-fasting rhythms or tidal rhythms. This external rhythm affects the intrinsic clock, but not the other way around (thus unidirectional coupling). When the intrinsic clock adapts (synchronizes) to the external timing signal, entrainment results.

We have defined some terminology related to coupling between oscillators. In Boxes G and H we illustrate some of the key behaviors that are seen in coupled oscillator systems, namely spontaneous synchronization and entrainment.

Box G Coupled circadian oscillators synchronize spontaneously

A remarkable property of circadian rhythms in the SCN is their robust nature. Although the free-running periods of isolated neurons are broadly distributed [68], the SCN as an ensemble oscillates very robustly with a clear periodicity. This indicates that a coupling mechanism must operate between the neurons, which is known to be achieved by periodic neurotransmitter release and synaptic connections [68].

Based on this, it is a reasonable hypothesis to assume global coupling among all oscillators in the SCN, achieved through a mean-field M. The mean-field can be defined as the average concentration of neurotransmitter xi as follows:

$$ M=\frac{1}{N}\sum \limits_{i=1}^N x_i $$
(7)

The dynamics of an ensemble of N amplitude-phase oscillators in Cartesian coordinates, describing the oscillatory concentrations of the neurotransmitters xi in the presence of mean-field coupling, can then be written as

$$ \frac{dx_i}{dt}=\lambda x_i\left(A_0-\sqrt{x_i^2+y_i^2}\right)-\frac{2\pi}{\tau_i}y_i+K_{coup}M, \qquad \frac{dy_i}{dt}=\lambda y_i\left(A_0-\sqrt{x_i^2+y_i^2}\right)+\frac{2\pi}{\tau_i}x_i. $$
(8)

where the parameter Kcoup denotes the strength of the coupling between the mean-field and the single oscillatory units. We assume, in line with previous studies [11, 13], that the mean-field M couples additively only to the x coordinate. Note that this is, strictly speaking, not a forced system, since there is no explicit time dependence in the right-hand side of the equations.

Figure 8 shows how an ensemble of N = 50 heterogeneous oscillators, with periods chosen from a normal distribution with mean μ = 24 h and a standard deviation σ = 1.5 h, can spontaneously synchronize when they are coupled. In the absence of inter-oscillator coupling (Fig. 8a), each oscillator runs with its free-running period τi, and the average bulk signal (thick blue line) does not display robust rhythms. When the individual oscillators are coupled through their mean-field, on the other hand, order emerges: the oscillators start running at the same pace, locked to the mean-field (Fig. 8b). Consequently, the period distribution of the individual oscillators becomes narrower (Fig. 8c).

Fig. 8

Spontaneous synchronization of coupled circadian oscillators. 50 heterogeneous amplitude-phase oscillators run at their own pace in the absence of coupling (a), but they spontaneously synchronize when coupled through a mean-field M (b). Grey thin lines represent individual oscillators; thick lines (blue, red) represent the signal of the average population (bulk). (c) Distribution of the individual periods in the uncoupled (Kcoup = 0, blue) and coupled (Kcoup = 0.1, red) systems. (d) Coupling leads to higher bulk amplitudes due to resonance. Results were obtained by numerical integration of Eqs. 8, for 100 days and the following parameter values: A0 = 1, λ = 0.03 h−1, individual periods τi taken from a normal distribution with mean μ = 24 h and standard deviation σ = 1.5 h, and varying Kcoup values. Bulk amplitudes were calculated as the mean peak-to-trough distance of the average signal (thick lines in panels (a) and (b)) during the last 5 days of simulations

It is well known from the theory of coupled oscillators that if a periodic stimulus (in this case the mean-field) has the same or nearly the same frequency as the natural frequency of a system, the amplitude of the system will increase, a phenomenon called resonance [69]. For a network of oscillators, like in this case, resonance can be interpreted as amplification of the amplitude of the individual oscillators. Figure 8d shows precisely this phenomenon: as the coupling strength increases, so does the amplitude of the bulk signal due to resonance effects, to values that are even larger than those of the individual oscillators (A0 = 1 in the simulations).

It is important to mention that the emergent properties of the coupled ensemble depend not only on the characteristics of coupling, but also on the properties of individual oscillators. For example, amplitude relaxation rate λ of individual oscillators is inversely correlated with amplitude resonance: as the oscillator relaxation rate increases, amplitude expansions decrease [11, 12]. The reader is encouraged to analyze through simulations how the curve from Fig. 8d changes in systems with varying λ.
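The following Python sketch implements the mean-field-coupled ensemble of Eqs. 7 and 8 with the parameter values of the Fig. 8 caption; it starts the oscillators at random phases and can serve as a starting point for exploring how the bulk amplitude depends on Kcoup and λ.

```python
# Sketch of the mean-field-coupled ensemble of Eqs. 7 and 8: N heterogeneous
# amplitude-phase oscillators, coupled through the mean-field M on the x
# coordinate. Parameters follow the Fig. 8 caption.
import numpy as np
from scipy.integrate import odeint

N, A0, lam, Kcoup = 50, 1.0, 0.03, 0.1
rng = np.random.default_rng(1)
tau = rng.normal(24.0, 1.5, N)                    # individual free-running periods (h)

def ensemble(state, t):
    x, y = state[:N], state[N:]
    r = np.sqrt(x**2 + y**2)
    M = x.mean()                                  # mean-field, Eq. 7
    dx = lam * x * (A0 - r) - 2 * np.pi / tau * y + Kcoup * M
    dy = lam * y * (A0 - r) + 2 * np.pi / tau * x
    return np.concatenate([dx, dy])

theta0 = rng.uniform(0, 2 * np.pi, N)             # random initial phases
state0 = np.concatenate([A0 * np.cos(theta0), A0 * np.sin(theta0)])
t = np.linspace(0, 100 * 24, 100 * 24 + 1)        # 100 days, as in Fig. 8
sol = odeint(ensemble, state0, t)
bulk = sol[:, :N].mean(axis=1)                    # average (bulk) signal
last5 = bulk[t > 95 * 24]                         # last 5 days
print("bulk peak-to-trough amplitude, last 5 days:", last5.max() - last5.min())
```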

4.1.1 Entrainment and Arnold Tongues

A characteristic property of circadian rhythms is their ability to be synchronized, or entrained, by external Zeitgebers. Thus, although circadian rhythms can persist in the absence of external timing cues, such cues are normally present and rhythms are aligned to them. This alignment is called entrainment, and it occurs when the strength of the Zeitgeber (the “coupling strength”) is capable of overcoming the mismatch between its period T and the clock’s intrinsic period τ. If this happens, the Zeitgeber imposes its periodicity T on the clock. The range of period mismatches τ − T for which entrainment occurs is called the range of entrainment, and it depends on the Zeitgeber strength as well as on the clock’s properties (Box H). It is in fact the difference between the two periods τ and T, rather than the individual periods per se, that determines whether a clock can be entrained for a given Zeitgeber strength [1, 13, 14]. When entrainment occurs, the system adopts a specific phase relationship with the Zeitgeber, known as the phase of entrainment Ψ.

In more general terms, entrainment is the process by which oscillators synchronize to an external signal at a fixed m to n ratio, and it is common to all systems of coupled oscillators. In the field of circadian clocks, 1:1 entrainment is the common scenario (the period τ of the circadian system adjusts such that it equals the period T of the Zeitgeber); nevertheless, other entrainment ratios might exist under some circumstances. The regions of \( \frac{m}{n} \) synchronization can be plotted as Arnold tongues (Box H), named after the mathematician Vladimir Arnold, who described them in the 1960s. The dynamics are relatively simple at low coupling: tori (quasiperiodic motions combining two frequencies that are not locked) and some zones of synchronization (and thus period- and phase-locking) dominate the parameter space. Higher coupling widens the tongues (and thus the regions of synchronization), but can also lead to more complex behavior, including chaos. Arnold tongues are generic to coupled oscillators and there is a vast amount of literature on their theory and the results of mathematical modeling [1, 13, 14, 70,71,72].

Box H Entrainment and Arnold tongue of a circadian amplitude-phase model

We now compute the Arnold tongue of a circadian amplitude-phase oscillator driven by a sinusoidal Zeitgeber (Eqs. 5 and 6) and explore which combinations of Zeitgeber strength F and Zeitgeber period T lead to entrainment. From the theory of coupled oscillators we know that sufficiently strong Zeitgebers (with high “unidirectional” coupling strength F) can entrain oscillators even if the period difference between the intrinsic oscillator and the Zeitgeber (i.e., the period mismatch τT) is large [1, 2, 12,13,14, 69, 70, 72].

The tongue indeed shows that the range of entrainment increases with Zeitgeber strength F (Fig. 9a). Nevertheless, not only the coupling strength F, but also the intrinsic oscillator properties can affect the entrainment range. Stronger oscillators with high amplitude relaxation rates λ display narrow ranges of entrainment, whereas weaker oscillators (lower λ) have wider entrainment ranges. The reader is encouraged to compute the Arnold tongues of an amplitude-phase oscillator with changing values of λ. These principles explain experimental findings, namely that peripheral clocks in the lung entrain to extreme Zeitgeber cycles, while SCN clocks do not [12].
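A brute-force sketch of such an Arnold-tongue computation is given below: for each assumed combination of Zeitgeber strength F and period T, the driven oscillator of Eq. 6 is simulated (oscillator parameters as in the Fig. 9 caption) and scored as entrained if its observed period in the stable regime matches T. The grid, simulation length and tolerance are coarse illustrative choices.

```python
# Sketch of a brute-force Arnold-tongue scan for the driven amplitude-phase
# model (Eq. 6): the oscillator counts as 1:1 entrained if its observed period
# matches the Zeitgeber period T. Oscillator parameters as in Fig. 9;
# the F/T grid, run length and tolerance are coarse illustrative choices.
import numpy as np
from scipy.integrate import odeint

A0, lam, tau = 1.0, 0.01, 24.0

def observed_period(F, T, days=60, pts_per_h=5):
    def rhs(state, t):
        x, y = state
        r = np.sqrt(x**2 + y**2)
        Z = F * np.cos(2 * np.pi / T * t)
        return [lam * x * (A0 - r) - 2 * np.pi / tau * y + Z,
                lam * y * (A0 - r) + 2 * np.pi / tau * x]
    t = np.linspace(0, days * 24, days * 24 * pts_per_h + 1)
    x = odeint(rhs, [A0, 0.0], t)[:, 0]
    peaks = t[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]          # local maxima
    return np.diff(peaks[peaks > (days - 15) * 24]).mean()           # period after transient

for F in (0.02, 0.05, 0.10):
    locked = [T for T in np.arange(20.0, 28.5, 0.5)
              if abs(observed_period(F, T) - T) < 0.05]
    print(f"F = {F:.2f}: 1:1 entrainment from T = {min(locked)} h to T = {max(locked)} h")
```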

Fig. 9

Entrainment of an amplitude-phase oscillator to an external sinusoidal Zeitgeber. (a) Arnold tongue for a circadian amplitude-phase oscillator driven by a Zeitgeber, where phases of entrainment Ψ are color-coded. (b) Phase of entrainment in dependence on the Zeitgeber strength F along the vertical blue and red lines in (a). Results were obtained by numerical integration of Eq. 6 for the following parameter values: A0 = 1, τ = 24 h, λ = 0.01 h−1, Zeitgeber strengths F ranging from 0 to 0.1 and Zeitgeber periods T ranging from 19 h to 29 h

The simulations reproduce the observation that the phase of entrainment Ψ increases (i.e., the intrinsic clock becomes later) with the period mismatch τ − T (Fig. 9a) [14]. These theoretical observations can be translated into biological terms and associated with the spread of chronotypes. Under natural conditions of T = 24 h, variations of the intrinsic period τ lead to different phases of entrainment: short endogenous periods τ often lead to early phases of entrainment (“morning larks”), whereas longer periods τ correspond to later phases (“night owls”) [13, 14, 73, 74].

The strength of a Zeitgeber has also been suggested to modulate the phase of entrainment [75]. If we move vertically along the τ > T region of the tongue (red vertical line in Fig. 9a), we see that an increase in Zeitgeber strength results in earlier entrainment phases. On the other hand, stronger Zeitgebers can lead to later phases Ψ within the τ < T region (blue vertical line in Fig. 9a). Thus, increasing Zeitgeber strength can lead to either a decrease or an increase of Ψ, depending on the mismatch τ − T (Fig. 9b). On the basis of these observations, experimental predictions can relate light intensity to chronotype: we expect that more light leads to earlier entrainment phases for night owls with τ > T, but to later values of Ψ for morning larks with τ < T. Taken together, these theoretical results predict that strong Zeitgebers should lead to narrower distributions of chronotypes [13, 14, 76].

4.2 Output Regulation

So far we have focused on the ability of clocks to free run (exhibit self-sustained limit cycle oscillations, Subheading 2) and on how external signals impinge on the oscillator (Subheading 4.1). Nevertheless, clocks also regulate a myriad of output signals. In fact, transcriptomic studies have shown that the expression of almost 10% of all genes in peripheral tissues is regulated rhythmically (with little overlap between tissues) [56, 77,78,79]. Modeling can also help in studying properties of the expression of clock-controlled genes, as described in [80].

Box I Modeling driven expression of clock-controlled genes

In many cases, transcription factors like BMAL1 activate the expression of so-called clock-controlled genes, and this can be captured by the following simple extension of the model:

$$ \frac{dx}{dt}=p\left(1+F\sin \left(\frac{2\pi }{\tau }t\right)\right)- dx, $$
(9)

where the production rate p of an mRNA x is periodically driven by a core clock element with an amplitude F and a period τ of about 24 h. This (non-autonomous) equation can be solved numerically (letting the computer do the work) or analytically, and the solution oscillates periodically around its mean (\( \frac{p}{d} \), unless normalized as in Fig. 10a). Some interesting insights emerge from these simulations: both the amplitude and the phase of the clock-controlled transcript depend strongly on its half-life (Fig. 10b, c). Short-lived transcripts display large amplitudes and are almost in phase with the transcriptional modulator. Long-lived transcripts show larger delays (approaching 6 h) and smaller amplitudes as their lifetimes increase. Indeed, such dependencies have been found for many clock-controlled genes [81].
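The sketch below integrates Eq. 9 with the parameter values of the Fig. 10 caption for two assumed half-lives and reports the relative amplitude and phase delay of the transcript with respect to its sinusoidal driver, illustrating the half-life dependence described above.

```python
# Sketch of Eq. 9: a clock-controlled transcript with sinusoidally driven
# production. Parameters p, F and tau as in the Fig. 10 caption; the two
# half-lives are assumed example values.
import numpy as np
from scipy.integrate import odeint

p, F, tau = 1.0, 0.20, 24.0
w = 2 * np.pi / tau

def ccg(x, t, d):
    return p * (1 + F * np.sin(w * t)) - d * x

t = np.linspace(0, 20 * tau, 4801)                       # 20 days, 0.1 h resolution
for half_life in (1.0, 12.0):                            # short- vs long-lived transcript (h)
    d = np.log(2) / half_life                            # degradation rate from half-life
    x = odeint(ccg, p / d, t, args=(d,))[:, 0]
    keep = t > 15 * tau                                  # stable regime only
    tt, x_norm = t[keep], x[keep] / x[keep].mean()       # mean-normalized, as in Fig. 10a
    peak_x = tt[np.argmax(x_norm[:240])]                 # first transcript peak (24 h window)
    peak_drive = tt[np.argmax(np.sin(w * tt[:240]))]     # driver peak in the same window
    print(f"half-life {half_life:4.1f} h: rel. amplitude {x_norm.max() - x_norm.min():.2f}, "
          f"delay {(peak_x - peak_drive) % tau:.1f} h")
```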

Fig. 10

Modeling the expression of clock-controlled genes. (a) Oscillations in mRNA abundance of two clock-controlled genes with different half-lives. Amplitude (b) and phase delay (c) of the driven clock-controlled gene depend on its half-life. Results were obtained by numerical integration of Eq. 9 for the following parameter values: p = 1 (units of concentration/h), F = 0.20 and τ = 24 h. Oscillations were normalized to their mean. The amplitude values depicted in panel (b) were calculated as peak-to-trough distances

5 Concluding Remarks and Modeling Limitations

We have seen how two simple and generic models, namely the Goodwin model and an amplitude-phase model, reproduce core features of the circadian clock, notably its self-sustained nature and its response to parameter variations, to coupling and to Zeitgeber entrainment. These “toy models” give hints, ideas and speculative explanations, but they are also subject to several caveats. The most common criticism of models is that the type of equations and the model parameters are (mostly) arbitrary. Whereas molecular models are empirically based on well-established genetic regulations, the quantitative details of the molecular mechanisms are usually unknown. For instance, Michaelis Menten or Hill-like functions are realistic representations of enzymatic processes (and they account for the necessary degree of nonlinearity that models need to oscillate), but the hypotheses underlying these approximations are not always satisfied. Thus, theoretical models like the ones presented in this chapter should be regarded as semi-quantitative and phenomenological. Simple models usually do not allow the investigation of quantitative details of physiological processes, but they do allow the dynamic properties of oscillating systems to be studied qualitatively.

Second, most circadian clock models are based on ODEs. These models, as well as their stochastic versions, only account for the regulation of physiological responses in time and neglect spatial aspects. They assume that the underlying molecular mechanisms occur in well-stirred reaction vessels and that the variables move freely around the cell. But eukaryotic cells are far from being well-stirred reaction vessels. Cells are very crowded spaces, and cellular processes are organized not only in time but also in space. Cells are divided into compartments, which might need to be modeled individually to take into account space and diffusion, two factors that likely play critical roles in the dynamics of cellular systems.

We must be aware that none of these models, however detailed they may be, brings truly definitive answers. Rather, they provide elements for reflection. For example, the role of positive feedback loops in the molecular mechanism of circadian clocks is not yet fully elucidated. But modeling provides us with clues to possible functions of these additional loops: increasing robustness to parameter variations, allowing period tunability, etc. That is in fact the beauty of simple models: they provide us with additional perspectives on a system and allow us to constantly formulate new questions. In the words attributed to Albert Einstein, invoking Occam’s razor: “everything should be made as simple as possible, but not simpler”.