1 Introduction

Signal transduction is a central framework for understanding cellular reaction networks. In this study, we modelled signal transduction as a flux of biochemical information [1, 2]. A typical signal transduction event is the modification of a signalling molecule. For example, an environmental change, such as an increased ligand level in the extracellular space, can trigger chemical modification of a receptor protein on the cell membrane. The modification allows an adaptor protein to be recruited to the receptor. The receptor–adaptor complex then modifies proteins in the cytoplasm, and the modified proteins catalyze further modification of other molecules [3]. The modifications include phosphorylation and co-factor binding, for example of GTP (guanosine triphosphate). The final modified protein in the reaction cascade is translocated to the nucleus, where it binds to DNA, alters its structure, and subsequently promotes the transcription of genetic information, followed by protein translation. The EGFR (epidermal growth factor receptor) signal cascade is a well-known example [4].

In summary, an environmental change is transduced into the expression of genetic information through signal molecule modification. Conventionally, this information transduction has been termed cell signal transduction, and we will call each modification a signal step. For the most part, signal transduction analysis has not been quantified in terms of information science. This makes it challenging to compare changes in gene expression levels caused by signal transduction or to determine the ligand dose required for receptor stimulation.

Previously, we reported that a signal reaction can be modelled as a string of code representing the modified signalling molecules [1], one of the source coding methods of information science [5]. In this case, the logarithm of the ratio of the concentration of a signalling molecule to that of all signalling molecules provides the information entropy, and the code length is given by the reaction time. Furthermore, a cascade crosstalks with other cascades to create a signal network, in which the reaction steps form the network nodes. Because information is transduced in the direction that each modification reaction step follows, the signal cascade network is a directional network. In this way, the signal transduction phenomenon can be reconceptualized in terms of information science.

On the modelled cell signal network, we can calculate the amount of information [4, 6]. First, we define information entropy in cell signal transduction and consider a network of signal cascades [3]. Next, to evaluate signal transduction efficiency, we formulate the average entropy rate, i.e., the capacity, when signalling efficiency is maximised; in terms of information science, this formulation corresponds to entropy coding. Finally, signal transduction thermodynamics is linked to the fluctuation theorem (FT), a major recent breakthrough in nonequilibrium thermodynamics, which gives the ratio of the probability distribution function of an event (information gain, usually signal molecule modification) to that of the reverse event (information loss, usually de-modification) [6].

2 Results

2.1 A model chain reaction for information transmission

Consider a biochemical chain-reaction cascade of n biochemical species Xj (1 ≦ j ≦ n) in a reaction chain. Herein, we employed biochemical species that transmit information via their modification (i.e. phosphorylation or mediator binding).

In the model, the cell chemostat supplies an information signal mediator, such as adenosine triphosphate (ATP). ATP is hydrolyzed into adenosine diphosphate (ADP) and phosphate to modify Xj into Xj*. The asterisk represents the modified form of Xj, and Xj* can modify another species Xj+1 into Xj+1*. Afterwards, Xj* is de-modified to Xj. This reaction generates a cyclic chain, and the step proceeds from the jth to the (j + 1)th (Fig. 1). For example, an increase in X1* can be transmitted as an increase in the final species X4* through the following four-step chain-reaction cascade (1 ≦ j ≦ 4):

$$\eqalign{ {X_1} & \leftrightarrow ~{X^*_1} \cr {X^*_1} + {X_2} & \leftrightarrow ~{X_1} + {X^*_2} \cr {X^*_2} + {X_3} & \leftrightarrow ~{X_2} + {X^*_3} \cr {X^*_3} + {X_4} & \leftrightarrow ~{X_3} + {X^*_4} \cr}$$
(1)
Fig. 1

A model of signal cascade. The diagram shows an example of a chain modification/de-modification reaction cascade in which active protein Xj* converts inactive protein Xj+1 into Xj+1*. Xj* is converted spontaneously into Xj (1 ≦ j ≦ N). The arrows represent the supply of the mediator (e.g. ATP (adenosine triphosphate) or GTP) activating Xj by the chemostat and the release of the cofactor (e.g. phosphate or GDP, guanosine diphosphate) from Xj* to the chemostat
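As an illustration, the four-step cascade of Eq. (1) can be integrated numerically under simple mass-action kinetics. This is a minimal sketch: the rate constants, time step, and initial concentrations below are hypothetical choices, not values from the study; the point is only that raising X1* propagates, step by step, into a rise of X4*.

```python
# Minimal Euler integration of the four-step cascade in Eq. (1).
# All rate constants and concentrations are illustrative assumptions.
def simulate(steps=20000, dt=0.001, kf=5.0, kr=1.0):
    n = 4
    x = [1.0] * n   # inactive species X_j
    xs = [0.0] * n  # active species X_j*
    x[0], xs[0] = 0.5, 0.5  # stimulus: raise X_1*
    for _ in range(steps):
        # chemostat keeps re-modifying X_1 (mediator supply)
        v = kf * x[0] * dt
        x[0] -= v; xs[0] += v
        # chain modification: X_j* + X_{j+1} -> X_j + X_{j+1}*
        for j in range(n - 1):
            v = kf * xs[j] * x[j + 1] * dt
            xs[j] -= v; x[j] += v
            x[j + 1] -= v; xs[j + 1] += v
        # spontaneous de-modification: X_j* -> X_j
        for j in range(n):
            v = kr * xs[j] * dt
            xs[j] -= v; x[j] += v
    return x, xs

x, xs = simulate()
# each pair X_j + X_j* is conserved by every elementary step,
# and X_4* has risen above zero, i.e. the signal reached step 4
```

Each Xj + Xj* pair is conserved by construction, mirroring the cycle structure of Fig. 1 in which the mediator, not the protein, is consumed.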

In this case, the code sequence for the forward chain reaction can be written as (Fig. 2A):

$$X_{1}^* X_{2}^* X_{3}^* X_{4}^*$$
(2)
Fig. 2

Time courses of the chain reaction. The y-axis represents the ratio of pj* = Xj*/X or pj = Xj/X (1 ≤ j ≤ 4; in red, blue, green, and orange) to its value at the pre-reaction steady state, pj*st = Xj*st/X or pjst = Xjst/X, where "st" denotes the steady state. The x-axis represents the reaction time and the duration of each step in the cascade. As modification or de-modification proceeds, pj*/pj*st or pj/pjst increases or decreases chronologically. The horizontal lines (labelled 1) represent the ratios at the pre-reaction state. A An example of a modification to de-modification reaction. Panel (A) shows the time course of modification (increase in pj*/pj*st) followed by de-modification (decrease in pj*/pj*st), where τj represents the duration of the modification period, and τ1, τ2, τ3, and τ4 represent the modification durations of the 1st, 2nd, 3rd, and 4th signal molecules, respectively. Information transmission and signal transduction proceed in the order 1 → 2 → 3 → 4 in panel (A). B An example of a de-modification to modification reaction. Panel (B) shows the time course of de-modification (increase in pj/pjst) followed by modification (decrease in pj/pjst), where τj* represents the duration of the de-modification period and τ1*, τ2*, τ3*, and τ4* represent the de-modification durations of the 1st, 2nd, 3rd, and 4th molecules, respectively. Information transmission and signal transduction proceed in the order 4 → 3 → 2 → 1

The reverse chain-reaction cascade can be described as (Fig. 2B):

$$\eqalign{ {X^*_4} & \leftrightarrow ~{X_4} \cr {X_4} + {X^*_3} & \leftrightarrow ~{X^*_4} + {X_3} \cr {X_3} + {X^*_2} & \leftrightarrow ~{X^*_3} + {X_2} \cr {X_2} + {X^*_1} & \leftrightarrow ~{X^*_2} + {X_1} \cr}$$
(3)

The code sequence for the reverse chain reaction can be written as:

$$X_{4} X_{3} X_{2} X_{1}$$
(4)

Furthermore, the appearance of species in the code can repeat, as in:

$$X_{1} X_{1}^{*} X_{2}^{*} X_{2} X_{2}^{*} X_{3} X_{3}^{*}$$
(5)

We interpreted the reaction time of the jth step as the jth code length (1 ≦ j ≦ n), corresponding to the code length in source coding theory. Then we introduced X, the total concentration of the signalling molecules:

$$X = \mathop \sum \limits_{j = 1}^{n} (X_{j} + X_{j}^{*})$$
(6)

The concentration ratios pj and pj* were defined as

$$\begin{gathered} p_{j} = X_{j} /X \hfill \\ p_{j}^{*} = X_{j}^{*} /X \hfill \\ \end{gathered}$$
(7)

where

$$\mathop \sum \limits_{j = 1}^{n} \left( {p_{j} + p_{j}^{*} } \right) = 1$$
(8)

Next, we considered the duration corresponding to the reaction time of the jth step, during which modification and de-modification occur. The total duration of the message, τ, was given as:

$$\tau = \mathop \sum \limits_{j = 1}^{n} \left( {X_{j} \tau_{j} - X_{j}^{*} \tau_{j}^{*} } \right) = X\left( {\mathop \sum \limits_{j = 1}^{n} p_{j} \tau_{j} - \mathop \sum \limits_{j = 1}^{n} p_{j}^{*} \tau_{j}^{*} } \right)$$
(9)

where τj signifies the duration of the Xj to Xj* conversion per Xj molecule, and τj* signifies the duration of the Xj* to Xj conversion per Xj* molecule. Here we set τj > 0 and τj* < 0, which determines the direction of the signal transduction.

2.2 Channel capacity of the signal transduction

Subsequently, the total number of signal transduction events, Ψ, was defined for the entire cascade as follows:

$${\Psi } = \frac{{X!}}{{\mathop \prod \nolimits_{j = 1}^{n} X_{j} !\mathop \prod \nolimits_{j = 1}^{n} X_{j}^{*} !}}$$
(10)

Taking the logarithm of Ψ, Shannon's entropy S was given by Stirling's formula as follows [1]:

$${\it S} = \log {\Psi } \simeq - X\left( {\mathop \sum \limits_{j = 1}^{n} p_{j} \log p_{j} + \mathop \sum \limits_{j = 1}^{n} p_{j}^{*} \log p_{j}^{*} } \right) =-X {\mathop \sum \limits_{j = 1}^{n} S_{j} } $$
(11)

where

$$S_{j} = p_{j} \log p_{j} + p_{j}^{*} \log p_{j}^{*}$$
(12)
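As a quick numerical sanity check of Eqs. (10)–(12), the exact logarithm of the multinomial count Ψ (with the total X! in the numerator) can be compared with the Stirling approximation −X Σ p log p. The copy numbers below are arbitrary illustrative values, not data from the study.

```python
import math

# Exact log(Psi) via lgamma(n + 1) = log(n!) versus the Stirling form.
counts = [300, 150, 250, 100, 120, 80]  # illustrative X_j and X_j* counts
X = sum(counts)

log_psi = math.lgamma(X + 1) - sum(math.lgamma(c + 1) for c in counts)
stirling = -X * sum((c / X) * math.log(c / X) for c in counts)
# for copy numbers of order 100 the two agree to within about one percent
```

The agreement improves as the copy numbers grow, which is why the Stirling form is adequate for cellular molecule counts.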

Here, as in previous studies [3], we assumed that cells transmit the maximum amount of signal at each step in a given time. This maximisation implies that the signal cascade does not allow signal redundancy and that the signal transduction system adopts a strategy of transducing as many signals as possible. Because ATP and GTP, the mediators for the phosphorylation of signalling molecules, are also involved in essential cellular activities such as the synthesis of metabolites and nucleic acids, their amounts in the cell are limited. This maximum amount of information transmission is defined here as the transmission capacity, using the language of information science.

To obtain the capacity, we defined a function G and applied Lagrange’s method to maximise entropy S under the constraints of (6), (7), (8) and (9).

$$\begin{aligned} & G\left( {{p_1},{p_2},~ \ldots ~{p_n};~p_1^*,~p_2^*,~ \ldots ~p_n^*} \right) \\ & \quad = S - a\mathop \sum \limits_{j = 1}^n \left( {{p_j} + ~p_j^*} \right) + b\tau \\ & \quad = S - a\mathop \sum \limits_{j = 1}^n \left( {{p_j} + p_j^*} \right) + bX\mathop \sum \limits_{j = 1}^n \left( {{p_j}{\tau _j} - p_j^*{\tau _j}^*} \right) \\\end{aligned}$$
(13)

In the above, a and b are undetermined multipliers. Differentiating G gave us:

$$\frac{\partial G}{{\partial p_{j} }} = - X(\log p_{j} - b\tau_{j} ) - a - X$$
(14)
$$\frac{\partial G}{{\partial p_{j}^{*} }} = - X(\log p_{j}^{*} + b\tau_{j}^{*} ) - a - X$$
(15)
$$\frac{\partial G}{{\partial X}} = - \left( {\mathop \sum \limits_{j = 1}^{n} p_{j} \log p_{j} + \mathop \sum \limits_{j = 1}^{n} p_{j}^{*} \log p_{j}^{*} } \right) + b\left( {\mathop \sum \limits_{j = 1}^{n} p_{j}\tau_{j} - \mathop \sum \limits_{j = 1}^{n} p_{j}^{*}\tau_{j}^{*} } \right)$$
(16)

Setting the left-hand sides of Eqs. (14), (15), and (16) to zero gave us

$$a = - X$$
(17)
$$\log p_{j} = b\tau_{j}$$
(18)
$$- \log p_{j}^{*} = b\tau_{j}^{*}$$
(19)

From Eqs. (18) and (19), b could be considered the average “entropy (production) rate”. Furthermore, substituting Eqs. (18) and (19) into the right-hand side of Eq. (11) gave us

$$S_{{{\text{max}}}} = - X\left( {\mathop \sum \limits_{j = 1}^{n} p_{j} b\tau_{j} - \mathop \sum \limits_{j = 1}^{n} p_{j}^{*} b\tau_{j}^{*} } \right) = - b\tau$$
(20)
$$S_{{j,{\text{max}}}} = - Xb\left( {p_{j} \tau_{j} - p_{j}^{*} \tau_{j}^{*} } \right)$$
(21)

In the above, the “max” suffix denotes the maximum value of the entropy. Therefore, the channel capacity of the signal transduction cascade, i.e., the maximum average rate of the entropy, was given by Eq. (20) as follows [1]:

$$\begin{aligned} C & = \mathop {\lim }\limits_{\tau \to \infty } K\frac{{S_{{{\text{max}}}} }}{\tau } \\ & = - b \\ \end{aligned}$$
(22)

Here, if thermodynamic entropy units are used, we take K = kB, Boltzmann's constant; in information science, K is equivalent to log2 e. Therefore, the negative average entropy production rate equals the channel capacity of the signal transduction cascade. The channel capacity is one of the conserved quantities of the transduction cascade.
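The maximisation can be verified numerically: pick illustrative durations with the paper's sign convention (τj > 0, τj* < 0), solve the normalisation condition of Eq. (8) for b, and evaluate the entropy of Eq. (11) at pj = exp(bτj), pj* = exp(−bτj*). The result reproduces Smax = −bτ of Eq. (20), i.e. the capacity C = −b of Eq. (22). All numbers below are assumptions for this sketch.

```python
import math

# Durations are illustrative: tau_j > 0 (modification), tau_j* < 0.
tau_f = [1.0, 2.0, 1.5, 3.0]
tau_r = [-4.0, -5.0, -3.5, -6.0]
X = 100.0  # total molecule count (illustrative)

def prob_sum(b):
    # sum_j (p_j + p_j*) with p_j = e^{b tau_j}, p_j* = e^{-b tau_j*}
    return (sum(math.exp(b * t) for t in tau_f)
            + sum(math.exp(-b * t) for t in tau_r))

# Solve prob_sum(b) = 1 by bisection; prob_sum increases with b.
lo, hi = -50.0, 0.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if prob_sum(mid) < 1.0 else (lo, mid)
b = 0.5 * (lo + hi)

p = [math.exp(b * t) for t in tau_f]
ps = [math.exp(-b * t) for t in tau_r]
tau = X * (sum(q * t for q, t in zip(p, tau_f))
           - sum(q * t for q, t in zip(ps, tau_r)))
S_max = -X * (sum(q * math.log(q) for q in p)
              + sum(q * math.log(q) for q in ps))
# S_max equals -b * tau, so the capacity C = -b is positive (b < 0)
```

The identity Smax = −bτ holds term by term once Eqs. (18) and (19) are substituted, so the numerical agreement is to machine precision.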

2.3 Fluctuation theorem holds in a single signal cascade

Thereafter, the transition probability of the (j + 1)th step given the jth step was defined as P(j + 1|j), while the probability of the jth step given the (j + 1)th step was defined as P(j|j + 1), using the duration ratios as follows:

$$P\left( {j + 1{|}j} \right) = \frac{{Xp_{j} \tau_{j} }}{{X\mathop \sum \nolimits_{j = 1}^{n} \left( {p_{j} \tau_{j} - p_{j}^{*} \tau_{j}^{*} } \right)}} = \frac{{X_{j} \tau_{j} }}{{\mathop \sum \nolimits_{j = 1}^{n} \left( {X_{j} \tau_{j} - X_{j}^{*} \tau_{j}^{*} } \right)}}$$
(23)
$$P\left( {j{|}j + 1} \right) = \frac{{Xp_{j}^{*} \tau_{j}^{*} }}{{X\mathop \sum \nolimits_{j = 1}^{n} \left( {p_{j} \tau_{j} - p_{j}^{*} \tau_{j}^{*} } \right)}} = \frac{{X_{j}^{*} \tau_{j}^{*} }}{{\mathop \sum \nolimits_{j = 1}^{n} \left( {X_{j} \tau_{j} - X_{j}^{*} \tau_{j}^{*} } \right)}}$$
(24)
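By construction, the forward and reverse transition probabilities of Eqs. (23) and (24) exhaust the total duration: Σj P(j + 1|j) − Σj P(j|j + 1) = 1, because the denominator is exactly the sum of the numerators. A short check with illustrative numbers (again with τj > 0, τj* < 0):

```python
# Illustrative concentration ratios (summing to 1, Eq. 8) and durations.
p  = [0.10, 0.15, 0.20, 0.05]     # p_j
ps = [0.20, 0.10, 0.15, 0.05]     # p_j*
tau_f = [2.0, 1.0, 3.0, 2.5]      # tau_j > 0
tau_r = [-5.0, -4.0, -6.0, -3.0]  # tau_j* < 0

denom = (sum(q * t for q, t in zip(p, tau_f))
         - sum(q * t for q, t in zip(ps, tau_r)))
P_fwd = [q * t / denom for q, t in zip(p, tau_f)]   # P(j+1|j), Eq. (23)
P_rev = [q * t / denom for q, t in zip(ps, tau_r)]  # P(j|j+1), Eq. (24)
# sum(P_fwd) - sum(P_rev) equals 1 exactly
```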

We then introduced a logarithmic function of the jth step as:

$$z_{j} = \log \frac{{P\left( {j{|}j + 1} \right)}}{{P\left( {j + 1{|}j} \right)}}$$
(25)

Substituting Eqs. (18), (19), (23), and (24) into Eq. (25) gave the function zj:

$$z_{j} = {\text{log}}\frac{{e^{{b\tau_{j} }} \tau_{j} }}{{e^{{ - b\tau_{j}^{*} }} \tau_{j}^{*} }} = b(\tau_{j} - \tau_{j}^{*} ) + \log \frac{{\tau_{j} }}{{\tau_{j}^{*} }}$$
(26)

We then averaged zj over the reaction time Xjτj − Xj*τj*, which represents the total duration of the jth step:

$${ }\frac{1}{{X_{j} \tau_{j} - X_{j}^{*} \tau_{j}^{*} }}z_{j} = \frac{{b\left( {\tau_{j} - \tau_{j}^{*} } \right)}}{{X_{j} \tau_{j} - X_{j}^{*} \tau_{j}^{*} }} + \frac{1}{{X_{j} \tau_{j} - X_{j}^{*} \tau_{j}^{*} }}\log \frac{{\tau_{j} }}{{\tau_{j}^{*} }}$$
(27)

In many biochemical reactions, such as the information transmission reactions involving EGFR-related cascades and mitogen-activated protein kinases (MAPKs), tj = τj – τj* was anticipated to be sufficiently long [7,8,9,10]. Based on experimental data, |τj*| is longer than several hours whereas τj is a few minutes; therefore, τj/|τj*| < 0.05, and 1/(Xjτj – Xj*τj*) is sufficiently small [11, 12]. Therefore, the second term on the right-hand side of Eq. (27) is equal to −0.01 or smaller in the limit operation and is small enough to be neglected. Accordingly,

$$\mathop {\lim }\limits_{{\tau_{j} - \tau_{j}^{*} \to \infty }} \frac{1}{{X_{j} \tau_{j} - X_{j}^{*} \tau_{j}^{*} }}z_{j} \sim b\frac{{\tau_{j} - \tau_{j}^{*} }}{{X_{j} \tau_{j} - X_{j}^{*} \tau_{j}^{*} }}$$
(28)

Therefore, we obtained:

$$\mathop {\lim }\limits_{{\tau_{j} - \tau_{j}^{*} \to \infty }} z_{j} \sim b\left( {\tau_{j} - \tau_{j}^{*} } \right) = bt_{j}$$
(29)
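The size argument behind Eqs. (27)–(29) can be illustrated with the orders of magnitude quoted in the text (τj of a few minutes, |τj*| of several hours, copy numbers of order 100). The entropy rate b below is an arbitrary illustrative value, and τj* is entered as its magnitude, so that τj − τj* = τj + |τj*|; the logarithmic correction term is then small next to the leading term.

```python
import math

# Illustrative magnitudes: tau_j ~ minutes, |tau_j*| ~ hours.
tau_j = 3.0            # minutes (modification)
abs_tau_star = 180.0   # minutes, |tau_j*| (tau_j* = -180 in the text's sign)
Xj, Xj_star = 200, 50  # copy numbers (illustrative)
b = -1.0               # entropy rate per minute (illustrative, b < 0)

total = Xj * tau_j + Xj_star * abs_tau_star   # X_j tau_j - X_j* tau_j*
leading = b * (tau_j + abs_tau_star) / total  # first term of Eq. (27)
correction = math.log(tau_j / abs_tau_star) / total  # second term
# |correction| is a few percent of |leading|, justifying Eq. (29)
```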

tj = τj − τj* represents the total duration of modification and de-modification along a signal cascade trajectory, and summing zj gave:

$$z = \mathop \sum \limits_{j = 1}^{n} \mathop {\lim }\limits_{{\tau_{j} - \tau_{j}^{*} \to \infty }} z_{j} = bt$$
(30)

with

$$\mathop \sum \limits_{j = 1}^{n} \left( {\tau_{j} - \tau_{j}^{*} } \right) = t$$
(31)

Therefore, b equals the entropy per time t taken from the beginning to the end of a single cascade. We then obtained Eq. (32) from Eqs. (25), (29), and (30):

$$\mathop {\lim }\limits_{t \to \infty } \frac{1}{t}\mathop \sum \limits_{j = 1}^{n} \log \frac{{P\left( {j{|}j + 1} \right)}}{{P\left( {j + 1{|}j} \right)}} = - b$$
(32)

Eqs. (22) and (32) gave:

$$\mathop {\lim }\limits_{t \to \infty } \frac{1}{t}\mathop \sum \limits_{j = 1}^{n} \log \frac{{P\left( {j{|}j + 1} \right)}}{{P\left( {j + 1{|}j} \right)}} = K\frac{{S_{{{\text{max}}}} }}{\tau }$$
(33)

Also, Eqs. (9) and (31) gave:

$$\frac{\tau }{t} = \frac{{X\mathop \sum \nolimits_{j = 1}^{n} \left( {p_{j} \tau_{j} - p_{j}^{*} \tau_{j}^{*} } \right)}}{{\mathop \sum \nolimits_{j = 1}^{n} \left( {\tau_{j} - \tau_{j}^{*} } \right)}}\sim \mathop \sum \limits_{j = 1}^{n} Xp_{j}^{*} : = X^{*} \left( {t \to \infty } \right)$$
(34)

In the above, we used τj/|τj*| < 0.05. Therefore, Eq. (33) could be rewritten as:

$$\frac{1}{t}\mathop \sum \limits_{j = 1}^{n} \log \frac{{P\left( {j{|}j + 1} \right)}}{{P\left( {j + 1{|}j} \right)}}\sim \frac{{S_{{{\text{max}}}} }}{{KX^{*} t}} = \frac{{s_{{{\text{max}}}} }}{Kt}\left( {t \to \infty } \right)$$
(35)

And finally:

$$\log \frac{{P\left( {j{\text{|}}j + 1} \right)}}{{P\left( {j + 1{\text{|}}j} \right)}}\sim \frac{{{s_j}_{{\text{max}}}}}{{{k_{\text{B}}}}}\left( {t \to \infty } \right)$$
(36)

In the above, we replaced K with kB and set Smax/X* to smax. In this way, the logarithm of the ratio of the forward to the reverse transition probability per reaction time is equal to the entropy. Considering that sj max represents the maximum entropy production at the jth step per signal transduction event producing a single molecule of the active form X*, we obtained Eq. (37) by identifying sj max with the heat production ΔQj (1 ≦ j ≦ n) of the modification at the jth signal transduction step:

$$\log \frac{{P\left( {j{|}j + 1} \right)}}{{P\left( {j + 1{|}j} \right)}}\sim \frac{{\Delta Q_{j} }}{{k_{B} T}}\left( {t \to \infty } \right)$$
(37)

Here, T represents the system temperature. Equation (37) shows that, once enough time has passed, the signalling amount approaches its maximum value. This equation satisfies the detailed balance condition [13].

2.4 Path integral of signal transduction

The time course of signal transduction along the step-by-step trajectory may include forward and backward fluctuations. Therefore, the path and reverse-path probabilities of signal transduction were introduced, respectively, as:

$$\pi \left( + \right) = \pi \left( {t_{0} } \right)\mathop \prod \limits_{j}^{}\ P\left( {j + 1{|}j} \right)(t_{j} )$$
(38)
$$\pi \left( - \right) = \pi \left( {t_{n} } \right)\mathop \prod \limits_{j }^{} P\left( {j{|}j + 1} \right)(t_{j} )$$
(39)

where π(+) and π(−) denote the probabilities that the signal transduction and the reverse signal transduction occur in the given signal transduction system, respectively. π(t0) and π(tn) denote the probabilities at t = t0 (start of the transduction) and t = tn (start of the reverse transduction). Taking the logarithms of Eqs. (38) and (39), we had:

$${\text{log}} \pi \left( + \right) = {\text{log}} \pi \left( {t_{0} } \right) + \mathop \sum \limits_{j }^{} {\text{log}} P\left( {j + 1{|}j} \right)\left( {t_{j} } \right)$$
(40)
$${\text{log}} \pi \left( - \right) = {\text{log}} \pi \left( {t_{n} } \right) + \mathop \sum \limits_{j}^{} {\text{log}} P\left( {j{|}j + 1} \right)\left( {t_{j} } \right)$$
(41)

Suppose that the entropy production Δs follows the probability distribution P(Δs) while taking values close to the maximum smax. In the above, the negative logarithms of π(t0) and π(tn) were considered entropies, and we set log π(t0) − log π(tn) = ∆s′/kB. By taking the difference between Eqs. (40) and (41) for the transition probabilities π(+) and π(−), together with Eq. (36), we obtained:

$$\log \frac{\pi \left( + \right)}{{\pi \left( - \right)}}=\frac{\Delta s}{{k_{B} }}$$
(42)

In the above, we set ∆s/kB = ∆s′/kB + ∑j log P(j + 1|j) − ∑j log P(j|j + 1). When integrating along the transduction cascade path, the relationship between the probability distribution P(Δs) for the transduction trajectory and P(−Δs) for the reverse trajectory (taking the opposite entropy −Δs) was given by the following equation [14,15,16,17]:

$$\begin{aligned} P\left( {\Delta s} \right) & = \mathop\sum\limits_{j}^{} \int \limits_{} d\left( {j + 1{|}j} \right)\delta \left( {\Delta s - k_{B} \log \frac{\pi \left( + \right)}{{\pi \left( - \right)}}} \right) \pi \left( + \right) \\ \, & = \exp \left( {\frac{\Delta s}{{k_{{\text{B}}} }}} \right)\mathop\sum\limits_{j}^{} \int \limits_{{}} d\left( {j+1{|}j } \right)\delta \left( {\Delta s - k_{B} \log \frac{\pi \left( + \right)}{{\pi \left( - \right)}}} \right) \pi \left( - \right) \\ & = \exp \left( {\frac{\Delta s}{{k_{{\text{B}}} }}} \right)\mathop\sum\limits_{j}^{} \int \limits_{{}} d\left( {j+1{|}j } \right)\delta \left( {\Delta s + k_{B} \log \frac{\pi \left( - \right)}{{\pi \left( + \right)}}} \right) \pi \left( - \right) \\ & = \exp \left( {\frac{\Delta s}{{k_{{\text{B}}} }}} \right) \mathop\sum\limits_{j}^{} \int \limits_{}^{} d\left( {j {|}j+1} \right)\delta \left( { - \Delta s - k_{B} \log \frac{\pi \left( - \right)}{{\pi \left( + \right)}}} \right)\pi \left( - \right) \\ & = \exp \left( {\frac{\Delta s}{{k_{{\text{B}}} }}} \right)P{ }\left( { - \Delta s} \right) \\ \end{aligned}$$
(43)

Therefore, we obtained a form of the FT in terms of signal transduction.

$$\exp \left( {\frac{\Delta s}{{k_{{\text{B}}} }}} \right) = \frac{{P\left( {\Delta s} \right)}}{{P{ }\left( { - \Delta s} \right){ }}}$$
(44)
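A minimal self-consistency check of Eq. (44): a Gaussian distribution of entropy production with mean μ and variance 2μ (working in units of kB) satisfies P(Δs)/P(−Δs) = exp(Δs/kB) exactly. The mean μ below is an arbitrary illustrative value; this is a standard textbook example of an FT-satisfying distribution, not a simulation of the cascade itself.

```python
import math

mu = 1.5  # mean entropy production in units of k_B (illustrative)

def pdf(x):
    # Gaussian with mean mu and variance 2 * mu
    return math.exp(-(x - mu) ** 2 / (4.0 * mu)) / math.sqrt(4.0 * math.pi * mu)

# pdf(ds) / pdf(-ds) reproduces exp(ds) for any ds, which is Eq. (44)
ratios = [(ds, pdf(ds) / pdf(-ds)) for ds in (0.5, 1.0, 2.0, 3.0)]
```

The fluctuation-theorem symmetry here fixes the variance to twice the mean, which is why rare negative-entropy (reverse transduction) events become exponentially unlikely as μ grows.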

In conclusion, Eq. (44) formulates the ratio of the probability distribution function of a signal transduction event (information gain) to that of the rare reverse signal transduction event (information loss).

3 Discussion

This study considered a chain reaction in signal transduction as a model of a code sequence in terms of information science [2]. We modelled the chain reaction using two types of signal molecules, an inactive form Xj and an active form Xj*, which is a type of binary coding [18]. First, Eqs. (18) and (19) were derived from the viewpoint of source coding in information theory, and both equations describe entropy coding. Second, the channel capacity in Eq. (22) was given as a form of the entropy-time average, which is essential for quantifying signal transduction. Third, we obtained a new form of the FT in Eq. (44). Thus, the chain reaction model of signal transduction provides a unified understanding of thermodynamic and informational entropy. Below is an overview of the features of this model.

3.1 Unidirectionality of signalling

So far, signal transduction duration and direction have rarely been considered in systems biology studies of signal transduction. One of the novel points of the current study is its consideration of the code length and direction of signal transduction. In this study, the unidirectionality of signalling was introduced into the framework through the significantly longer reverse time compared with the forward time of signal transduction. The irreversibility of signal transduction could be estimated from Eqs. (25)–(28). In addition, we assigned a negative sign to the duration of the reverse transduction, τj* < 0. This negative duration expresses the loss of information carried by signal transduction. As a result, we succeeded in expressing this irreversibility in terms of information science. In addition, the mobile flows and oscillation waves of slime moulds and bacteria are well-known models of information transmission by biological populations and natural computing. The presented model may be adopted for the interpretation of such models in the future [19].

3.2 Application of the fluctuation theorem (FT) to cell biology and information science

The FT is a significant achievement in thermodynamics and has been applied to the study of nonequilibrium systems [20, 21], membrane transport [22], and molecular machines [23]. The FT provides a general thermodynamic framework for deriving the second law of thermodynamics, the dissipation theorem, and Onsager's reciprocity relations [24, 25]. Recently, biophysical applications have been further developed [26, 27]. This study aimed to interpret the FT in terms of information theory. It is not necessarily obvious whether our formulation can be extended to other biological systems, and more detailed analyses based on information thermodynamics are still required. As frameworks for quantifying signal transduction, we have proposed several approaches based on information entropy [3, 4], queueing theory [28], and nonlinear thermodynamics [29]. These theoretical frameworks may be closely linked, and their relationship will be a subject of future biophysical projects (Table 1).

In conclusion, a code-string model of a biochemical chain reaction can be used to analyse information transmission. Our model suggests ways of measuring cell information transmission and signal transduction capacity, and presents a possible application of the FT for analysing biochemical information transmission.

Table 1 A table of symbols