1 Introduction

In recent years, cell signal transmission has been studied from a biophysical perspective that applies information science [1]. Signal transmission consists of a network of sequentially activated proteins (i.e., a chain reaction), which results in the promotion of the use of genetic information [2]. This process is referred to as signal transduction or a signal cascade, where a cascade consists of multiple protein reactions proceeding in a stepwise manner [3].

This study attempted to quantitatively model signal transduction based on entropy production and the entropy rate [4]. Signal transduction has two aspects: (i) thermodynamic entropy generation derived from the heat released in the activation reactions of signalling proteins and (ii) information entropy generation/transformation. These two entropies intersect in the biological cell system, and it is necessary to consider them in an integrated manner from the viewpoints of thermodynamics and information science [5].

For example, epidermal growth factor receptor (EGFR) changes its three-dimensional structure on the cancer cell membrane and becomes phosphorylated upon binding its extracellular ligand, epidermal growth factor (EGF) [6, 7]. Subsequently, an adapter protein is recruited to bind the receptor, triggering a chain reaction in which a signalling molecule is modified and can in turn modify another signalling molecule. This sequential modification proceeds through the signalling molecules Ras, c-Raf, mitogen-activated protein kinase kinase 1, and extracellular signal-regulated kinase 1 [8]. Likewise, the mitogen-activated protein kinase cascade, which is involved in stress-induced signalling, was quantified using the same approach [9,10,11]. Eventually, transcription factors, which are also signalling molecules that bind to DNA, pass through the nuclear envelope and cause structural changes in DNA to activate gene expression [6, 9]. Experimental studies demonstrated that signal transduction takes several hours to proceed [12]. This duration can be considered the signal duration τ [13].

As mentioned above, signal transduction is a type of information transfer, and an information-scientific approach is equally promising [14]. The order in which signal molecules are phosphorylated can be viewed as a code string representing the signal event. By tentatively setting the ratio of phosphorylated signalling molecules to total signalling molecules as p, the information entropy can be defined as the logarithmic average, − p log p. Another approach is to utilise information gain, specifically the Kullback–Leibler divergence [10, 15]. The entropy coding method demonstrated that the average rate of each step in signal transduction (i.e., activation of signal molecules) remains constant regardless of the molecule type when the signalling rate reaches its upper limit [8, 9]. In this scenario, assuming the activation period of the signalling molecule is τ, the average production rate of information entropy can be defined as − (1/τ) p log p [16]. Therefore, by determining the conditions that maximise the number of corresponding signalling strings per unit signalling time, we can establish the upper limit of the signalling rate, known as the transmission channel capacity. In this study, the speed limit of signal transduction was examined by considering the sum of these two entropies as the total entropy [17].
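As a concrete illustration, the following Python sketch (using hypothetical values for p and τ that are not taken from any experiment) evaluates the information entropy − p log p and its average production rate − (1/τ) p log p; for a single step, this rate is maximised at p = 1/e:

```python
import numpy as np

# A minimal sketch with assumed values: p is the fraction of
# phosphorylated signalling molecules, tau the signal duration.
def info_entropy(p):
    """Information entropy term -p log p (natural log, in nats)."""
    return -p * np.log(p)

def entropy_production_rate(p, tau):
    """Average information entropy production rate, -(1/tau) p log p."""
    return info_entropy(p) / tau

p, tau = 0.5, 3 * 3600.0      # hypothetical: p = 0.5, tau = 3 hours in seconds
print(f"-p log p         = {info_entropy(p):.4f} nats")
print(f"-(1/tau) p log p = {entropy_production_rate(p, tau):.3e} nats/s")

# d/dp (-p log p) = -log p - 1 = 0  =>  p = 1/e maximises the rate
p_star = np.exp(-1.0)
print(f"rate at p = 1/e  = {entropy_production_rate(p_star, tau):.3e} nats/s")
```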

2 Results

2.1 Signal transduction model

A single signal transduction cascade l consisting of N steps was modelled as part of a network of M signal cascades (1 ≤ l ≤ M). Here, the selection probability of cascade l is given by Pl. The state of the signal transduction system is represented by the step number reached in the signal event. In the j step (1 ≤ i, j ≤ N; i ≠ j), the i signal molecule Xi activates the inactive j signal molecule Xj0 into Xj (Fig. 1; Table 1), where the superscript 0 indicates the inactive form (Eq. (1)). After activation, Xj recovers to Xj0 (Eq. (2)):

Fig. 1

Signal network of M cascades. Xj represents the j-step signal molecule (1 ≤ j ≤ N). The last signal molecule transmits the signal to DNA (deoxyribonucleic acid) in the nucleus to transcribe the coded information. Cl represents the channel capacity, i.e., the maximum limit of signal transduction in cascade l (1 ≤ l ≤ M). Qi:j represents the heat production and flow (1 ≤ i, j ≤ N) in the transition from step i to step j. Wi:j represents the transition rate from step i to step j

Table 1 A table of symbols
$$X_{i}+X_{j}^{0}\leftrightarrow X_{i}^{0}+X_{j}$$
(1)
$$X_{j}\to X_{j}^{0}.$$
(2)

To formulate this signalling process, a master equation was introduced.

The probability that the system is at step j, with Xj functioning as the signal molecule at time t, is pj(t). pj(t + Δt) is given by the transition probability from step j to step i, wj:i, and the transition probability from step i to step j, wi:j:

$$p_{j}\left(t+\Delta t\right)=\left(1-\sum\limits_{i=1}^{N}w_{j:i}\left(t\right)\right)p_{j}\left(t\right)+\sum\limits_{i=1}^{N}w_{i:j}\left(t\right)p_{i}\left(t\right)$$
(3)

Then, Eq. (3) can be rewritten as:

$$\Delta p_{j}=p_{j}\left(t+\Delta t\right)-p_{j}\left(t\right)=\sum\limits_{i=1,i\ne j}^{N}\left\{-w_{j:i}\left(t\right)p_{j}\left(t\right)+w_{i:j}\left(t\right)p_{i}\left(t\right)\right\}$$
(4)

As Δt approaches zero, the transition rate Wj:i is defined as:

$${W}_{j:i}=\underset{\mathit{\Delta t}\to 0}{\mathrm{lim}}\frac{{w}_{j:i}\left(t\right)}{\Delta t}$$
(5)

The master equations are then obtained:

$$\frac{dp_{j}\left(t\right)}{dt}=\sum\limits_{i\ne j}^{N}\left\{-W_{j:i}\left(t\right)p_{j}\left(t\right)+W_{i:j}\left(t\right)p_{i}\left(t\right)\right\}$$
(6)

In this case, the master equation of signal transduction at the j step in a single cascade is:

$$\frac{dp_{j}\left(t\right)}{dt}=-W_{j:j+1}\left(t\right)p_{j}\left(t\right)+W_{j-1:j}\left(t\right)p_{j-1}\left(t\right)$$
(7)
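As an illustration, the following Python sketch integrates Eq. (7) by forward Euler for a hypothetical five-step cascade with uniform transition rates (both assumptions are ours, made only for illustration); the total probability is conserved, anticipating Eq. (20):

```python
import numpy as np

# Forward-Euler integration of the cascade master equation, Eq. (7):
#   dp_j/dt = -W_{j:j+1} p_j + W_{j-1:j} p_{j-1}
N = 5                      # number of cascade steps (assumed)
W = np.full(N, 1.0)        # W[j] = W_{j:j+1}, uniform rates (assumed)
W[-1] = 0.0                # no transition out of the last step
p = np.zeros(N)
p[0] = 1.0                 # signal initiated at step 1

dt, t_end = 1e-3, 5.0
for _ in range(int(t_end / dt)):
    dp = -W * p                   # outflow: -W_{j:j+1} p_j
    dp[1:] += W[:-1] * p[:-1]     # inflow:  +W_{j-1:j} p_{j-1}
    p += dt * dp

print("p_j at t =", t_end, ":", np.round(p, 4))
print("total probability:", round(float(p.sum()), 6))  # conserved, cf. Eq. (20)
```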

2.2 Entropy of signal transduction

During cell signal transduction, activation reactions occur continuously, and signal molecules are activated sequentially. In the present study, a cascade is considered a code string, such as X1X2X3…XN, representing the sequential activation of signal molecules. With this representation, the number of signal events can be calculated by counting the code strings: the factorial of the total number X of these signal molecules is divided by the product of the factorials X1!X2!X3!⋯XN! = ∏j=1N Xj!. The entropy S, which signifies the disorder or randomness of the signal transduction system, is obtained by taking the logarithm of this number of code strings (Eq. (8)).

$$S(t)=\log\frac{X!}{\prod_{j=1}^{N}X_{j}(t)!}$$
(8)

In the above, X is kept constant, while Xj varies over time. The right-hand side of Eq. (8) is approximated using Stirling's formula, where pj(t) = Xj(t)/X, and we now have:

$$S\left(t\right)=-X\sum\limits_{j=1}^{N}p_{j}\left(t\right)\log p_{j}\left(t\right)=X\sum\limits_{j=1}^{N}s_{j}\left(t\right)$$
(9)
$${s}_{j}(t)=-{p}_{j}\left(t\right)\mathrm{log}~{p}_{j}\left(t\right)$$
(10)
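The quality of the Stirling approximation behind Eqs. (8)–(9) can be checked numerically; the following sketch (with illustrative occupation numbers Xj that we have assumed) compares the exact log-multinomial count with −X Σj pj log pj:

```python
import numpy as np
from math import lgamma

# Exact log of the multinomial coefficient in Eq. (8), via log-gamma:
# log X! = lgamma(X + 1), which avoids overflow for large X.
Xj = np.array([400, 300, 200, 100])   # molecules at each step (assumed)
X = int(Xj.sum())

S_exact = lgamma(X + 1) - sum(lgamma(int(x) + 1) for x in Xj)

# Stirling form, Eq. (9): S = -X sum_j p_j log p_j with p_j = X_j / X
p = Xj / X
S_stirling = -X * float(np.sum(p * np.log(p)))

print(f"exact log multinomial : {S_exact:.2f}")
print(f"Stirling approximation: {S_stirling:.2f}")
```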

In Eqs. (9) and (10), S(t) and sj(t) represent Shannon entropies. Here, it is important to note that the probability of the state of the signalling system is determined both by the probability of which type of signalling molecule is activated and by the transition probabilities of the molecular activation steps. Furthermore, the thermodynamic entropies Sth(t) and sth,j(t) are introduced using the heat production and flow Qj:i(t) during the transition from step j to step i:

$$Q_{j} \left( t \right) = \sum\limits_{i = 1,i \ne j}^{N} {Q_{j:i} \left( t \right)}$$
(11)
$$S_{{th}} \left( t \right) = - X\mathop \sum \limits_{{j = 1}}^{N} p_{j} \left( t \right)Q_{j} \left( t \right) = X\mathop \sum \limits_{{j = 1}}^{N} s_{{th,j}} \left( t \right)$$
(12)
$${s}_{th,j}\left(t\right)= - {p}_{j}{\left(t\right)Q}_{j}\left(t\right)$$
(13)

In the above equations, Qj(t) = βqj(t), where β represents the inverse temperature and qj(t) represents the net heat during the transition. Therefore, the total entropies \({S}_{tot}\left(t\right)\) and \({S}_{tot,j}(t)\) (combining the two entropy types, Shannon and thermodynamic) are as follows [18]:

$$\begin{aligned} S_{tot}\left(t\right)=S_{th}\left(t\right)+S\left(t\right)=&\sum\limits_{j=1}^{N}\left\{-Xp_{j}\left(t\right)Q_{j}\left(t\right)-Xp_{j}\left(t\right)\log p_{j}\left(t\right)-X\Delta_{j}\left(t\right)\right\}\\ =&\sum\limits_{j=1}^{N}\left(S_{tot,j}\left(t\right)-X\Delta_{j}\left(t\right)\right)\\ \end{aligned}$$
(14)
$$S_{tot,j}(t)=-Xp_{j}\left(t\right)Q_{j}\left(t\right)-Xp_{j}\left(t\right)\log p_{j}\left(t\right)$$
(15)

and

$$\begin{aligned} s_{tot}\left(t\right)=&\sum\limits_{j=1}^{N}\left\{-p_{j}\left(t\right)Q_{j}\left(t\right)-p_{j}\left(t\right)\log p_{j}\left(t\right)-\Delta_{j}\left(t\right)\right\}\\ =&\sum\limits_{j=1}^{N}\left(p_{j}\left(t\right)s_{tot,j}\left(t\right)-\Delta_{j}\left(t\right)\right)\\ \end{aligned}$$
(16)
$${s}_{tot, j}\left(t\right)=-Q_{j}\left(t\right)-\mathrm{log}{p}_{j}\left(t\right)$$
(17)

where Δj indicates the deviation from the equilibrium state, representing the kinetic term in the j step of signal transduction. In Eq. (17), Qj(t) was assumed to be described by the transition rate Wj:j+1(t):

$$\log W_{j:j+1}(t)=Q_{j}\left(t\right)$$
(18)

Therefore, the total entropy of the j step can be expressed as follows:

$$s_{tot,j}\left(t\right)=-\log\left(p_{j}\left(t\right)W_{j:j+1}(t)\right)$$
(19)

The total entropy thus quantifies the amount of signal transmission. Subsequently, using the conservation of total probability, Eq. (20), the entropy rate, dstot/dt = σtot, is given by Eq. (21):

$$\mathop \sum \limits_{j = 1}^{N} {\frac{dp_{j} \left( t \right)}{dt}\ }=0$$
(20)
$$\sigma_{tot}(t)=-\sum\limits_{j=1}^{N}\left\{\left(Q_{j}\left(t\right)+\log p_{j}\left(t\right)\right)\frac{dp_{j}\left(t\right)}{dt}-\Delta_{j}^{\prime}\left(t\right)\right\}$$
(21)

where Δj′ denotes the derivative of Δj with respect to t. From Eqs. (16), (17), and (21), the following entropy rate is obtained [4, 19]:

$$\begin{aligned} \sigma_{tot}=&\sum\limits_{j}-\left\{W_{j:j+1}\left(t\right)p_{j}\left(t\right)\log\left(W_{j:j+1}\left(t\right)p_{j}\left(t\right)\right)+\Delta_{j}^{\prime}\left(p_{j}\left(t\right)\right)\right\}\\ &-\left\{W_{j-1:j}\left(t\right)p_{j-1}\left(t\right)\log\left(W_{j-1:j}\left(t\right)p_{j-1}\left(t\right)\right)+\Delta_{j-1}^{\prime}\left(p_{j-1}\left(t\right)\right)\right\}\\ \end{aligned}$$
(22)
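The entropy rate of Eq. (21) can also be evaluated numerically along the trajectory of the master equation; the sketch below reuses the hypothetical cascade from Section 2.1 and assumes Qj = log Wj:j+1 (Eq. (18)) with the kinetic deviation Δj′ set to zero for simplicity:

```python
import numpy as np

# sigma_tot = -sum_j log(W_{j:j+1} p_j) dp_j/dt,  i.e. Eq. (21) with
# Q_j = log W_{j:j+1} (Eq. (18)) and Delta'_j = 0 (simplifying assumption).
N = 5
W = np.full(N, 1.0)
W[-1] = 0.0                   # no transition out of the last step
p = np.full(N, 1e-3)          # small seed keeps log p_j finite (illustrative)
p[0] = 1.0 - (N - 1) * 1e-3

dt = 1e-3
for step in range(5001):
    dp = -W * p
    dp[1:] += W[:-1] * p[:-1]
    # The last step has W = 0 (Q_j undefined) and is excluded from the sum.
    sigma = -float(np.sum(np.log(W[:-1] * p[:-1]) * dp[:-1]))
    if step % 1000 == 0:
        print(f"t = {step * dt:4.1f}  sigma_tot = {sigma: .4f}")
    p += dt * dp
```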

2.3 Speed limit of signal transduction

To maximise signal transduction, represented by the total entropy stot,j during the signal event, the total signal transduction duration τ and the duration τj:j+1 of the transition from step j to step j + 1 are introduced, spanning from the initial cell response to the pre-response stationary state:

$$\tau=X\sum\limits_{j=1}^{N-1}\left(p_{j}W_{j:j+1}(t)\,\tau_{j:j+1}\right)$$
(23)

τj:j+1 and τj+1:j are assigned positive and negative values, respectively, based on the direction of signal transduction. In addition, the total transition rate W is assumed to remain constant:

$$X\sum\limits_{j=1}^{N-1}W_{j:j+1}(t)=W$$
(24)

For simplicity of notation, the argument (t) will be omitted in the following. To obtain the maximised entropy rate dstot/dt, a function Lj is introduced with constraints (23) and (24), along with the Lagrange multipliers λ and μ, as follows:

$$L_{j}=\sigma_{tot,j}+\lambda XW_{j-1:j}-\mu Xp_{j-1}W_{j-1:j}\,\tau_{j-1:j}+\lambda XW_{j:j+1}-\mu Xp_{j}W_{j:j+1}\,\tau_{j:j+1}$$
(25)

By partially differentiating Lj with respect to pj and X and setting the derivatives to zero, we obtain:

$$\partial_{p_{j}}L_{j}=-W_{j:j+1}\log\left(W_{j:j+1}p_{j}\right)-W_{j:j+1}-\partial_{p_{j}}\Delta_{j:j+1}^{\prime}\left(p_{j}\right)-\mu XW_{j:j+1}\,\tau_{j:j+1}=0$$
(26)
$$\partial_{X}L_{j}=\lambda W_{j-1:j}-\mu p_{j-1}W_{j-1:j}\,\tau_{j-1:j}+\lambda W_{j:j+1}-\mu p_{j}W_{j:j+1}\,\tau_{j:j+1}=0$$
(27)

As a result, μ and λ are obtained:

$$\mu=-\frac{-W_{j:j+1}\,s_{tot,j}+W_{j:j+1}+\partial_{p_{j}}\Delta_{j:j+1}^{\prime}\left(p_{j}\right)}{XW_{j:j+1}\,\tau_{j:j+1}}$$
(28)
$$\lambda=\frac{\left(p_{j}W_{j:j+1}\,\tau_{j:j+1}+p_{j-1}W_{j-1:j}\,\tau_{j-1:j}\right)\left(W_{j:j+1}\,s_{tot,j}-W_{j:j+1}-\partial_{p_{j}}\Delta_{j:j+1}^{\prime}\left(p_{j}\right)\right)}{XW_{j:j+1}\,\tau_{j:j+1}\left(W_{j-1:j}+W_{j:j+1}\right)}$$
(29)
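The stationarity condition of Eq. (26) and the resulting expression for μ in Eq. (28) can be verified symbolically; the following sympy sketch takes σtot,j as the j-term of Eq. (22) and treats Δj:j+1′ as a generic function of pj (the function name Delta_p below is our placeholder):

```python
import sympy as sp

# Symbols for one step of the cascade: W = W_{j:j+1}, tau = tau_{j:j+1}.
p, W, tau, X, mu = sp.symbols('p_j W tau X mu', positive=True)
Delta_p = sp.Function('Delta_p')(p)          # stands for Delta'_{j:j+1}(p_j)

sigma_j = -W * p * sp.log(W * p) - Delta_p   # j-term of Eq. (22)
L = sigma_j - mu * X * p * W * tau           # p_j-dependent part of Eq. (25)

stationarity = sp.Eq(sp.diff(L, p), 0)
print(stationarity)
# -W*log(W*p_j) - W - Derivative(Delta_p(p_j), p_j) - mu*X*W*tau = 0,
# which is Eq. (26).

mu_sol = sp.solve(stationarity, mu)[0]
print(sp.simplify(mu_sol))
# With s_tot,j = -log(W_{j:j+1} p_j) from Eq. (19), this reproduces Eq. (28).
```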

To satisfy the entropy coding that maximises the signal event number during the given duration [13], the total entropy stot,j in Eqs. (17) and (28) should be rewritten as [16]:

$$\mu=s_{tot,j}/\left(X\tau_{j:j+1}\right)$$
(30)

The right-hand side is a quantity with the dimension of an entropy rate, obtained by dividing the entropy by the code length in the coding. Therefore, the average entropy rate of each signalling step, represented by μ, is constant regardless of the step [14]. Here we define the channel capacity, denoted by C, as the maximised average entropy rate, (Σj X τj:j+1 pj stot,j)/τ = μ. Thus, the average signal transduction capacity is represented by μ. Moreover, considering that τj:j+1 is already given and that μ and λ are independent of j, Δj:j+1 must satisfy:

$$\partial_{p_{j}}\Delta_{j:j+1}^{\prime}\left(p_{j}\right)=-W_{j:j+1}$$
(31)

Integrating Eq. (31) with the condition that Δj:j+1′ = 0 when pj is zero, we obtain:

$$\Delta_{j:j+1}^{\prime}\left(p_{j}\right)=-W_{j:j+1}\,p_{j}$$
(32)

In addition,

$$\frac{\lambda}{\mu}=\frac{p_{j}W_{j:j+1}\,\tau_{j:j+1}+p_{j-1}W_{j-1:j}\,\tau_{j-1:j}}{W_{j-1:j}+W_{j:j+1}}$$
(33)

Accordingly, the general form of the entropy rate that satisfies the entropy coding and represents the signal transduction rate from the j to the j + 1 step is given, based on Eq. (22), as follows:

$$\sigma_{j:j+1}=-\left\{W_{j:j+1}p_{j}\log\left(W_{j:j+1}p_{j}\right)+\Delta_{j}^{\prime}\left(p_{j}\right)\right\}-\left\{W_{j-1:j}p_{j-1}\log\left(W_{j-1:j}p_{j-1}\right)+\Delta_{j-1}^{\prime}\left(p_{j-1}\right)\right\}$$
(34)

Differentiating Eq. (34) with respect to pj gives the entropy rate per signal molecule, which represents the speed limit of signal transduction and is equal to μ:

$$\frac{\partial\sigma_{j:j+1}}{X\,\partial p_{j}}=\mu W_{j:j+1}\,\tau_{j:j+1}=\mu w_{j:j+1}\sim\mu$$
(35)

In Eq. (35), we set Wj:j+1 τj:j+1 ≃ wj:j+1, based on Eq. (5). Considering that the signal cascade is substantially irreversible, wj:j+1 is nearly equal to 1. As a result, the entropy rate per signal molecule is independent of the step in the signal cascade and is equal to μ = C. Therefore, by weighting the channel capacity μl of the l cascade with the selection probability Pl of that cascade, the capacity of the overall signal transduction network can be defined as Σl μl Pl.
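As a final illustration, the network-level capacity Σl μl Pl can be computed directly; the per-cascade capacities and selection probabilities below are hypothetical numbers chosen only to show the weighting:

```python
import numpy as np

# Overall network capacity as the selection-probability-weighted sum of
# per-cascade channel capacities: Sigma = sum_l mu_l * P_l.
mu_l = np.array([0.8, 1.2, 0.5])   # channel capacity of each cascade (assumed)
P_l = np.array([0.5, 0.3, 0.2])    # cascade selection probabilities (assumed)
assert np.isclose(P_l.sum(), 1.0)  # P_l is a probability distribution

Sigma = float(np.dot(mu_l, P_l))
print(f"network capacity Sigma = {Sigma:.3f}")
```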

3 Discussion

Signalling pathways have been extensively studied to elucidate the biological information processing system. In many studies, signal transduction has been quantified and simulated using reaction kinetics. In this research area, the relationship between thermodynamic entropy and information entropy (Shannon entropy) is critical. Equations (14) and (15) follow a recently developed approach that defines the total entropy by summing a term based on the probability of the state of the system and an entropy based on thermodynamic heat flow and production, according to the concept of Hatano–Sasa entropy [18]. The entropy coding of signal transduction is formulated in Eq. (30). In addition, to account for biological homeostasis, the term Δj:j+1 was introduced in Eqs. (14)–(19) to express the difference between the cell system and the equilibrium state. Equation (30) was employed for entropy coding, which ensures the maximisation of entropy. Consequently, the maximised entropy rate was formulated in Eq. (34). Using this formulation, it becomes evident that Eq. (35) establishes a theoretical upper limit on the signal transduction rate.

We arrived at similar conclusions in the frameworks of linear non-equilibrium thermodynamics [19], Kullback–Leibler divergence [15], and queuing theory [10], as previously described. The number of signalling molecules in a cell is not large, and it fluctuates considerably. In terms of thermodynamic and kinetic analyses, the entropy rate in signal transduction is determined by the fluctuations in signal molecule concentration [19]. Therefore, it is necessary to treat the number of signal molecules as a discrete quantity. We have recently reported that queuing theory, one of the operations-research methods for maximising the efficiency of a given system, can also be applied to quantify signal transduction. A signalling cascade network can be treated as an open queuing network, specifically Jackson's queuing network (JQN) [20]. In this network, signalling molecules diffuse and form queues before reaching network nodes, where they undergo phosphorylation. This theory determined that the entropy rate, defined as the ratio of the arrival rate to the phosphorylation rate, is independent of the node type when the entropy is maximised [10]. Alternatively, the information gain (Kullback–Leibler divergence) measure can be adopted; in this case as well, when the entropy is maximised, the entropy rate is independent of the node type.

The conservation of entropy rates may contribute to identifying signalling pathways specific to responses such as starvation and radiation exposure. Starvation stress activates the MAPK signalling pathway [21], and the conservation of the entropy rate at each phosphorylation step has been experimentally confirmed [9, 10]. Similar methods could be used to uncover previously unknown signalling pathways in experiments where the level of stress can be quantitatively controlled by modulating the irradiation dose.

In conclusion, information entropy coding theory combined with thermodynamic entropy is helpful for quantitative signal transduction analyses. In future studies, we will verify the conclusions made herein through experimental measurements of various stress responses.