Analysis of Simulated Fluorescence Intensity Decays by a New Maximum Entropy Method Algorithm

A new algorithm for the Maximum Entropy Method (MEM) is proposed for recovering the lifetime distribution in time-resolved fluorescence decays. The procedure seeks the distribution that maximizes the Skilling entropy function subject to the chi-squared constraint χ² ∼ 1 through iterative linear approximations, LU decomposition of the Hessian matrix of the Lagrangian problem and the Golden Section Search for backtracking. The accuracy of this algorithm has been investigated through comparisons with simulated fluorescence decays of both narrow and broad lifetime distributions. The proposed approach is capable of analysing datasets of up to 4,096 points with a discretization ranging from 100 to 1,000 lifetimes. Good agreement with non linear fitting estimates has been observed when the method is applied to multi-exponential decays. Remarkable results have also been obtained for broad lifetime distributions, where the position is recovered with high accuracy and the distribution width is estimated to within 3 %. These results indicate that the proposed procedure generates MEM lifetime distributions that can be used to quantify the real heterogeneity of lifetimes in a sample.


Introduction
Fluorescence spectroscopy is a classical method for studying structural and dynamical aspects of biological systems such as proteins, membranes and living cells [1][2][3]. The excited states of fluorophores have lifetimes in the range from a few picoseconds to some tens of nanoseconds. Since this corresponds to the time scale of many important biological processes, such as protein folding dynamics, protein-DNA interactions, diffusion of small molecules, rotational and internal motions, proton transfer and energy transfer mechanisms [4][5][6], time-resolved fluorescence has become an important investigation tool.
Despite the great advances in measurement techniques over recent years, the analysis of fluorescence decays remains a difficult task in data analysis. Indeed, a parametric multi-exponential model function is usually fitted to the experimental data by non linear least-squares methods [7]. Besides the difficulty of finding the global minimum of the chi-squared function χ² by the parametric minimization procedure, this statistical criterion rarely allows the analysed decay curve to be fitted by more than three exponential functions [4]. Moreover, there are many situations where the time-resolved fluorescence profile is described by a continuous lifetime distribution rather than by a discrete number of exponential decays. For example, tryptophan residues in the excited state are usually involved in an energy transfer process to intra-molecular acceptors that depends on the relative distance and affects the fluorescence lifetime [4,8]. This behaviour of tryptophan is used to obtain structural information on the intermediate forms of proteins that characterise their folding from an extended random polypeptide chain. In complex situations, such as those encountered even in small proteins, the rate of energy transfer, and thus the intra-molecular distances, can be described by a continuous distribution due to the heterogeneity of the structure, which results in a continuous lifetime distribution for the fluorescence decay [9][10][11][12]. In these cases, the analysis for recovering the distribution takes advantage of a "regularizing function" in addition to the chi-squared statistic to force the data to choose one member out of the set of feasible distributions [13]. The Skilling entropy S is one of these regularizing functions; it is maximized subject to the constraint χ² ∼ 1 by the maximum entropy method (MEM), thus determining the desired distribution with no assumptions about its functional form [14][15][16][17][18][19][20][21].
The MEM is a mathematically complex algorithm that seeks the stationary point of the Lagrange function 𝓛 = S + λ(χ² − 1), λ being the so-called Lagrange multiplier; the desired distribution turns out to be the solution of the set of non linear equations resulting from ∇𝓛 = 0. Due to the limited memory and computing speed of earlier computers, the conjugate gradient method (CG) [22] has been the most widespread iterative method for the solution of such equation systems [23]. The necessary compromise between the accuracy of the solution and the convergence speed of the algorithm [24] limited the number of data points and the number of lifetimes used for reconstructing the desired distribution to maximum values of about 1,000 and 200, respectively, with an obvious limitation of the information content of the MEM distribution [25,26].
Nowadays, the extensive computing power of a standard home computer makes possible the adoption of new iterative approaches to the MEM based on direct methods. In this paper we present a new procedure based on the Newton-Raphson method for solving the Lagrange multiplier problem associated with the MEM constrained problem. A method for the solution of the equation system ∇𝓛 = 0 through iterative linear approximations and LU decomposition [27] is extensively discussed. The implementation of the MEM is carried out using a home-built routine package written in MATLAB (version 7.14, The MathWorks Inc., Natick, MA, 2012). The accuracy of the algorithm is investigated through comparisons with numerical simulations of fluorescence decay data. It turns out that the proposed MEM algorithm can analyse datasets with up to 4,096 data points, a typical value for an experimental set-up based on the time-correlated single photon counting technique, by considering distributions with a number of lifetimes ranging from 100 to 1,000.

Theory of the Method
The time-resolved fluorescence decay data {E_m} can be described by a function T(t) defined as a discrete convolution product of the intensity decay function I(t) by the impulse response function R(t):

T(t_m) = Σ_{t_j ≤ t_m} R(t_j) I(t_m − t_j),   (1)

with t_m the measurement time of the m-th data point E_m and M the number of data points. The quantity R(t_m) is experimentally measured at each time t_m, whereas I(t) is usually modelled as a discrete sum of exponential decays [28]:

I(t) = Σ_{k=1}^{N} α_k exp(−t/τ_k).   (2)

For the purpose of numerical calculations, the sum in Eq. 2 is extended from the resolution limit τ_1 imposed by the experimental set-up to the maximum lifetime τ_N of the chromophore observed. The quantity N is the number of partitions of the interval [τ_1, τ_N], and α_k and τ_k are the amplitude factor and the lifetime of the k-th partition, respectively. The expression 2 allows the convolution sum of Eq. 1 to be written as follows:

T(t_m) = Σ_{k=1}^{N} α_k C_{m,k},   C_{m,k} = Σ_{t_j ≤ t_m} R(t_j) exp[−(t_m − t_j)/τ_k].   (3)

There are many physically allowable sets {α_k} that agree with the data within the experimental precision {σ_m} [15], and the Maximum Entropy Method selects the distribution {α_k} that maximizes the Skilling entropy function S [14]

S(α_1, ..., α_N) = Σ_{k=1}^{N} [α_k − m_k − α_k ln(α_k/m_k)],   (4)

with {m_k} a flat prior distribution, subjected to the following chi-squared condition:

χ² = (1/M) Σ_{m=1}^{M} [T(t_m) − E_m]² / σ_m² ∼ 1,   (5)

with the lifetime components equally spaced on a logarithmic scale. Note that in Eq. 5 the number M of observations is not reduced by the number N of parameters, because this condition is not used to find the optimal set of parameters in α-space but to test whether or not the computed data are in satisfactory agreement with the measured data [15].
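As an illustration of Eqs. 1-5, the following Python sketch (a stand-in for the authors' MATLAB routines, not their code) builds the kernel C_{m,k}, evaluates the model decay T and computes the chi-squared and the Skilling entropy; the flat-prior entropy form and all grids below are illustrative assumptions.

```python
import math

def make_kernel(R, dt, taus):
    """C[m][k] = sum_{j<=m} R[j] * exp(-(t_m - t_j)/tau_k)  (Eq. 3)."""
    M, N = len(R), len(taus)
    C = [[0.0] * N for _ in range(M)]
    for m in range(M):
        for k, tau in enumerate(taus):
            C[m][k] = sum(R[j] * math.exp(-dt * (m - j) / tau)
                          for j in range(m + 1))
    return C

def model(C, alphas):
    """Model decay T(t_m) = sum_k alpha_k C_{m,k}  (Eq. 3)."""
    return [sum(Cm[k] * a for k, a in enumerate(alphas)) for Cm in C]

def chi2(T, E, sigma2):
    """Reduced chi-squared of Eq. 5 (sigma2 = per-point variances)."""
    return sum((t - e) ** 2 / s for t, e, s in zip(T, E, sigma2)) / len(E)

def skilling_entropy(alphas, prior):
    """S = sum_k [alpha_k - m_k - alpha_k ln(alpha_k/m_k)]  (Eq. 4)."""
    return sum(a - m - a * math.log(a / m) for a, m in zip(alphas, prior))
```

Note that the entropy vanishes when {α_k} equals the prior and is negative otherwise, which is why maximizing S pulls the solution toward the flattest distribution compatible with χ² ∼ 1.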
In the following, a new algorithm for maximizing the entropy S according to the MEM requirements will be described. The procedure refers to the negative function −S, which is obviously minimized by the same distribution that maximizes S.
The method of Lagrange multipliers provides a strategy for finding the minimum of −S under the constraint 5 by considering the following Lagrange function

𝓛(α_1, ..., α_N, λ) = −S + λ(χ² − 1),   (6)

where λ is the Lagrange multiplier. The MEM solution is an extremal of 𝓛 for some value of the Lagrange multiplier λ; thus it is obtained by solving the set of non linear equations ∇𝓛 = 0:

∂𝓛/∂α_i = ∂(−S)/∂α_i + λ ∂χ²/∂α_i = 0,   ∂𝓛/∂λ = χ² − 1 = 0.   (7)

The subscript i = 1, ..., N is used to indicate the first N components of the gradients ∇𝓛, ∇(−S) and ∇χ², whereas the last component is given by the subscript N + 1. From Eq. 4 the entropy gradient is

∂(−S)/∂α_i = ln(α_i/m_i),   ∂χ²/∂α_i = (2/M) Σ_{m=1}^{M} C_{m,i} [T(t_m) − E_m] / σ_m².   (8)

Now let x denote the vector of components (α_1, ..., α_N, λ) and F = ∇𝓛. In order to solve the set 7, each of the equations F_i = 0 is expanded in a Taylor series:

F_i(x + δx) = F_i(x) + Σ_{j=1}^{N+1} (∂F_i/∂x_j) δx_j + O(δx²).   (9)

The matrix of partial derivatives appearing in Eq. 9 is the Hessian matrix H of the function 𝓛:

H_{ij} = ∂F_i/∂x_j = ∂²𝓛/∂x_i ∂x_j,   (10)

and its elements are given by:

H_{ij} = δ_{ij}/α_i + λ ∂²χ²/∂α_i ∂α_j,   H_{i,N+1} = H_{N+1,i} = ∂χ²/∂α_i,   H_{N+1,N+1} = 0,   i, j = 1, ..., N,   (11)

with ∂²χ²/∂α_i ∂α_j = (2/M) Σ_m C_{m,i} C_{m,j} / σ_m². In matrix notation, Eq. 9 is

F(x + δx) = F(x) + H · δx + O(δx²).   (12)

By neglecting terms of order δx² and by setting F(x + δx) = 0, we obtain a set of linear equations for the corrections δx (the Newton step), namely

H · δx = −F(x).   (13)

The solution of the set of equations 7 is obtained by solving iteratively the linear set 13, which minimizes the norm f = ½ F · F because the corrections δx move each function F_i closer to zero simultaneously at each iteration. The first iteration is performed by considering the distribution {α_k} that equates the calculated total intensity to the experimental one, namely a flat distribution with the constant value Σ_m E_m / Σ_{m,k} C_{m,k} for all lifetimes. The matrix equation 13 is solved by the package linsolve of Matlab, which implements an LU decomposition algorithm that writes H as a product of two matrices [27]:

H = L · U,   (14)

where L and U are a lower and an upper triangular matrix, respectively. Their elements are obtained by Crout's algorithm, whose stability is increased by a partial pivoting technique [27].
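The analytic gradient F = ∇𝓛 built from Eqs. 7-8 can be cross-checked against finite differences of 𝓛. The toy kernel, data and flat entropy prior in this Python sketch are invented for illustration and are not the paper's MATLAB implementation:

```python
import math

# Toy problem: N = 2 lifetimes, M = 3 data points; C, E, sigma2 are made up.
C = [[1.0, 0.8], [0.6, 0.7], [0.3, 0.5]]
E = [1.5, 1.1, 0.7]
sigma2 = [1.0, 1.0, 1.0]
prior = [1.0, 1.0]
M, N = len(E), len(prior)

def chi2(alphas):
    T = [sum(C[m][k] * alphas[k] for k in range(N)) for m in range(M)]
    return sum((T[m] - E[m]) ** 2 / sigma2[m] for m in range(M)) / M

def neg_entropy(alphas):
    return -sum(a - p - a * math.log(a / p) for a, p in zip(alphas, prior))

def lagrangian(x):
    alphas, lam = x[:N], x[N]
    return neg_entropy(alphas) + lam * (chi2(alphas) - 1.0)

def grad(x):
    """Analytic F = grad(L): Eqs. 7-8 plus the constraint component."""
    alphas, lam = x[:N], x[N]
    T = [sum(C[m][k] * alphas[k] for k in range(N)) for m in range(M)]
    F = [math.log(alphas[i] / prior[i])
         + lam * (2.0 / M) * sum(C[m][i] * (T[m] - E[m]) / sigma2[m]
                                 for m in range(M))
         for i in range(N)]
    F.append(chi2(alphas) - 1.0)          # dL/d(lambda) = chi2 - 1
    return F

# Central finite differences of the Lagrangian for comparison.
x = [0.9, 1.2, 0.5]
h = 1e-6
num = [(lagrangian(x[:i] + [x[i] + h] + x[i+1:])
        - lagrangian(x[:i] + [x[i] - h] + x[i+1:])) / (2 * h)
       for i in range(N + 1)]
```

The two gradients agree to roughly the finite-difference accuracy, confirming the term-by-term form of Eq. 8.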
The decomposition is used first for solving for the vector y such that

L · y = −F(x),   (15)

and then for solving

U · δx = y.   (16)

The advantage of breaking up the linear set 13 into two successive ones is that the solution of a triangular set of equations is quite trivial and is given by backsubstitution [29]. Once the new approximated solution x + δx is obtained, one seeks negative values among its first N components, because the natural logarithm in Eq. 8 requires only positive values. If this is the case, positiveness is enforced by using only a fraction of the corrections δx.
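The LU route of Eqs. 13-16 can be sketched in a few lines of Python. The paper relies on Matlab's linsolve with Crout's algorithm; the Doolittle-style decomposition with partial pivoting below is only an illustrative stand-in:

```python
def lu_decompose(A):
    """Doolittle LU with partial pivoting: returns perm, L, U with P A = L U."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[0.0] * n for _ in range(n)]
    perm = list(range(n))
    for i in range(n):
        # Partial pivoting: bring the row with the largest pivot to position i.
        p = max(range(i, n), key=lambda r: abs(U[r][i]))
        U[i], U[p] = U[p], U[i]
        L[i], L[p] = L[p], L[i]
        perm[i], perm[p] = perm[p], perm[i]
        L[i][i] = 1.0
        for r in range(i + 1, n):
            f = U[r][i] / U[i][i]
            L[r][i] = f
            for c in range(i, n):
                U[r][c] -= f * U[i][c]
    return perm, L, U

def solve(A, b):
    """Solve A x = b via L y = P b (Eq. 15) then U x = y (Eq. 16)."""
    n = len(b)
    perm, L, U = lu_decompose(A)
    pb = [b[i] for i in perm]
    y = [0.0] * n
    for i in range(n):                       # forward substitution (Eq. 15)
        y[i] = pb[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):             # back substitution (Eq. 16)
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

In the MEM iteration one would call solve(H, [-Fi for Fi in F]) to obtain the Newton step δx; the triangular solves make each iteration O(N²) once the O(N³) factorization is done.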
Generally, the Newton step δx is a descent direction for the norm f:

∇f · δx = (F · H) · (−H⁻¹ · F) = −F · F < 0.   (17)

Therefore, we always first try the full correction δx, because once we are close enough to the solution quadratic convergence is attained and the proposed step reduces f. Conversely, if the linear approximation 12 is not satisfied, δx need not decrease the norm f, and a backtrack along the Newton direction is adopted until an acceptable step is obtained. In particular, one moves to a new point

x_new = x + γ δx,   0 < γ ≤ 1,   (18)

and the parameter γ is chosen to minimize f in the direction δx. This goal is achieved by using the routine fminbnd of Matlab, which implements the Golden Section Search algorithm [30]. The improved approximation x_new is used as a new starting point for the set of equations 13, and the procedure is iterated until the minimum of the functional 𝓛 is achieved. The set of values {α_k} that minimizes 𝓛 with χ² ∼ 1 is the lifetime distribution that accounts for the experimental data.
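A minimal Golden Section Search, standing in for Matlab's fminbnd (which additionally uses parabolic interpolation), might look as follows; in the backtracking step of Eq. 18 one would minimize f(x + γ δx) over γ ∈ (0, 1]:

```python
import math

def golden_section_min(fun, a, b, tol=1e-8):
    """Golden Section Search for the minimizer of a unimodal fun on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2           # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if fun(c) < fun(d):                    # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                  # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2
```

Each iteration shrinks the bracket by the constant factor 1/φ ≈ 0.618 and reuses one interior function value, so only one new evaluation of f is needed per step.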

Analysis of the Accuracy of the MEM Algorithm
In the following, the accuracy of the MEM algorithm proposed in the previous section is investigated through comparisons with numerical simulations of fluorescence decay data. Synthetic data {E_m} are generated with a time scale and time resolution that are typical of an experimental set-up for the time-correlated single photon counting technique (TCSPC) [31]. The simulated curves consist of 4,096 data points over a time scale of 25 ns and are generated by the convolution product of a Gaussian profile with a full-width at half maximum (FWHM) of 120 ps, the impulse response function R(t), and a decay model function according to Eq. 3. The Poisson noise statistics that affects typical TCSPC measurements [32] is simulated by taking as the value for the intensity at each time a Poisson-distributed random number with mean equal to the calculated model value E_m. The routine poissrnd of Matlab is used for this purpose and its algorithm is extensively described in [33]. With this procedure for generating simulated data, the number of counts in each point equals its variance σ², and the peak counts can be considered as a measure of the noise content of the decay curve.
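The simulation recipe above can be sketched in pure Python. The Poisson sampler replaces Matlab's poissrnd, and the grid sizes, IRF placement and peak counts below are illustrative assumptions, not the paper's settings:

```python
import math
import random

rng = random.Random(42)

def poisson(mean):
    """Knuth's Poisson sampler, chunked to avoid exp() underflow at large means."""
    n = 0
    while mean > 500.0:
        n += poisson(500.0)
        mean -= 500.0
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return n + k
        k += 1

def synthetic_decay(taus, amps, peak_counts, M=256, dt=25.0, fwhm=120.0):
    """Convolve a Gaussian IRF (fwhm in ps) with a multi-exponential decay
    sampled every dt ps, scale to the requested peak counts, add Poisson noise."""
    sig = fwhm / 2.3548                       # FWHM -> standard deviation
    t = [m * dt for m in range(M)]
    irf = [math.exp(-((tm - 3 * sig) ** 2) / (2 * sig ** 2)) for tm in t]
    T = [sum(irf[j] * sum(a * math.exp(-(t[m] - t[j]) / tau)
                          for a, tau in zip(amps, taus))
             for j in range(m + 1))
         for m in range(M)]
    scale = peak_counts / max(T)
    return [poisson(scale * Tm) for Tm in T]
```

Because each channel is Poisson-distributed, its variance equals its mean, so the per-channel weights σ_m² in Eq. 5 can be taken directly from the counts.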

Multi-Exponential Fluorescence Decays
Multi-exponential intensity decays are usually adopted for the analysis of the fluorescence emission of proteins in aqueous solutions and lie at the border of the complexity that can be handled by parametric methods. In Fig. 1 we report a fluorescence decay curve (black points) with 10⁴ counts in the maximum, generated by three exponential functions with lifetimes τ_1 = 100 ps, τ_2 = 1,000 ps and τ_3 = 4,000 ps and amplitudes α_j = 0.33, j = 1, ..., 3. The solid red line is the curve resulting from the fitting with the MEM algorithm by considering N = 400 points equally spaced in log τ between τ_min = 20 ps and τ_max = 10⁴ ps. The MEM convergence is achieved for a chi-squared value of χ² = 1.01, and the accuracy of the algorithm is confirmed by the random distribution of the residuals around zero, as shown in the lower panel of Fig. 1. The normalised lifetime distribution that accounts for the agreement between the MEM curve and the synthetic decay data is reported in Fig. 2. The distribution consists of multiple peaks whose mean position ⟨τ_j⟩ and mean amplitude ⟨α_j⟩ are calculated as follows:

⟨τ_j⟩ = Σ_{k∈j} α_k τ_k / Σ_{k∈j} α_k,   ⟨α_j⟩ = Δk Σ_{k∈j} α_k,   (19)

where the sums run over the N_j lifetimes represented in each peak and Δk is the spacing in log τ. The calculated mean values ⟨τ_j⟩ and ⟨α_j⟩ are reported in Table 1 and are the retrieved lifetimes and pre-exponential factors of the analysed decay curve. Similarly, the standard deviation Δτ of the data points of the peak region is considered as the uncertainty in the lifetime estimate, whereas the error of the amplitude factor is given by the standard deviation of the values calculated for a set of 20 simulated decay curves. The parameters generated by a non linear least-squares regression analysis with a three-exponential model function are also shown in Table 1 for comparison. A very good agreement results between the estimates of the amplitudes α_j given by both methods.
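The peak averages of Eq. 19 amount to amplitude-weighted moments over the lifetimes assigned to each peak. The exact weighting used by the authors may differ, so this Python helper is only a plausible reading of Eq. 19:

```python
def peak_moments(taus, alphas, peaks, dlog):
    """For each peak (given as an iterable of index ranges), return the
    amplitude-weighted mean lifetime and the integrated amplitude
    <alpha_j> = dlog * sum(alpha_k), dlog being the spacing in log(tau)."""
    out = []
    for idx in peaks:
        w = sum(alphas[k] for k in idx)
        tau_mean = sum(alphas[k] * taus[k] for k in idx) / w
        out.append((tau_mean, dlog * w))
    return out
```

For a symmetric peak this weighted mean coincides with the peak position, while the integrated amplitude plays the role of the pre-exponential factor of the corresponding discrete component.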
The values retrieved by the non linear fitting are indeed within three standard deviations and have relative errors lower than 3 %, whereas the relative amplitudes calculated from the MEM lifetime distribution agree with the theoretical values within 2 %. As far as the three lifetimes are concerned, the values of τ_1, τ_2 and τ_3 are recovered by the MEM within one standard deviation, whereas the non linear fitting procedure achieves this accuracy only for τ_1, as τ_2 and τ_3 lie within two and four standard deviations, respectively. The long lifetime τ_3, lying even outside the "three-sigma" interval, witnesses the difficulty of fitting decay curves with more than three exponential functions.
The MEM accuracy in reconstructing a lifetime spectrum α(τ) can be analysed by considering the ratio r = Δτ/⟨τ⟩. In Fig. 3 the values of r calculated for a single exponential decay with lifetime τ = 3,000 ps are reported as a function of the peak counts. The results have been obtained for a MEM analysis considering both M = 4,096 (black solid curves) and M = 1,024 (red solid curves) data points. A lifetime spectrum equally spaced in log τ between 20 ps and 10⁴ ps has been considered, with a discretization ranging from N = 100 to N = 1,000, in order to investigate the sensitivity of the method to the number of data points and to the number of lifetimes in log τ space, respectively. Inspection of the curves shows that the ratio r does not depend on the noise level when the number of counts in the peak is larger than 3−5 × 10⁴. The ratio r is always smaller than 3 % and decreases as the discretization number N of the lifetime space increases; when N is larger than 400, r is smaller than 1 %.

(Table 1 caption: The uncertainties of the lifetime estimates ⟨τ_j⟩ are given by the standard deviations Δτ of the data points of the peak regions, whereas the errors of the amplitude factors ⟨α_j⟩ are the standard deviations of the values calculated for a set of 20 synthetic curves. Results of a non linear least-squares regression analysis are also reported for comparison, together with the theoretical values of the simulated curve; in this case the uncertainties of the parameters are given by the fitting procedure.)

Gaussian Lifetime Distributions
To test the performance of our algorithm with broad lifetime distributions, we considered Gaussian distributions centred at τ = 5,000 ps with different standard deviations Δτ = 500 ps, 1,000 ps and 1,500 ps (i.e. 10 %, 20 % and 30 % of τ), which are assumed as a measure of their width. A set of 20 fluorescence decay curves with M = 4,096 data points and ∼5 × 10⁴ counts at the maximum has been generated for each of these distributions. Each curve of every set has been analysed with the MEM by using N = 400 points equally spaced in log τ between 40 ps and 10⁵ ps, and each normalised MEM spectrum α(τ) has been fitted with a Gaussian profile to estimate the centre and the width of the distribution. In Fig. 4 the typical MEM spectra obtained for each set (differently marked points) are shown together with the fitted Gaussian curves (solid lines), whereas the mono-modal section of Table 2 reports the mean values and the uncertainties of the centre and of the width Δτ of the Gaussian distributions; the uncertainties are the standard deviations of the estimates of the parameters calculated for each set of curves.

(Table 2 caption: A set of 20 fluorescence decay curves with 5 × 10⁴ counts in the peak channel has been generated for each distribution. The reported values are the means and the standard deviations of the estimates of the parameters calculated for each set. The theoretical values are also reported for comparison. The results have been obtained by using N = 400 points equally spaced in log τ between 40 ps and 10⁵ ps.)

Figure 4 shows that the MEM algorithm reproduces the Gaussian profile of the distributions with excellent resolution of the leading and falling edges. The fitting results are reported in Table 2, where a good agreement is observed between the values estimated for the parameters of the Gaussian lifetime distributions and the theoretical ones.
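The paper estimates the centre and width by fitting a Gaussian profile to α(τ); as a lightweight stand-in for that curve fit, the first two weighted moments of a single-peaked spectrum already recover both parameters:

```python
import math

def gaussian_moments(taus, alphas):
    """Weighted mean and standard deviation of a single-peaked spectrum:
    a moment-based stand-in for the Gaussian profile fit used in the paper."""
    w = sum(alphas)
    mean = sum(a * t for a, t in zip(alphas, taus)) / w
    var = sum(a * (t - mean) ** 2 for a, t in zip(alphas, taus)) / w
    return mean, math.sqrt(var)
```

For genuinely Gaussian peaks the moments and the fitted parameters coincide; a least-squares Gaussian fit becomes preferable when the peak is truncated or sits on the tail of a neighbouring peak, as in the bi-modal case below.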
The resolution limit of the algorithm in resolving the intensities of the lifetime spectrum has been investigated by adding a Gaussian distribution with different peak intensities on the tail of the distribution peaked at τ = 5,000 ps. To this purpose, we have considered the worst case of Fig. 4, observed for a width Δτ = 1,500 ps, and added a further distribution centred at τ = 1,000 ps with a width Δτ = 300 ps. Four values for the amplitude of this additional peak have been considered: 100, 50, 25 and 10 % of the main peak at τ = 5,000 ps. As before, a set of 20 fluorescence decay curves has been generated in each case for working out the errors in recovering the characteristic parameters of these bi-modal lifetime distributions through a bi-Gaussian fitting procedure.

(Fig. 5 caption: The normalized lifetime spectra α(τ) reconstructed by the MEM analysis (black points) for a bi-modal Gaussian distribution with the main peak centred at τ = 5,000 ps and the secondary peak at τ = 1,000 ps. Four values for the amplitude of the additional peak have been considered, representing 100 % (a), 50 % (b), 25 % (c) and 10 % (d) of the main peak. Both Gaussians have widths equal to 30 % of their peak location. The results refer to fluorescence decay intensities with ∼5 × 10⁴ counts in the peak channel and have been obtained by using N = 400 points equally spaced in log τ between 40 ps and 10⁵ ps. The solid red lines are the two fitted Gaussian profiles used to estimate the characteristic parameters of the bi-modal distribution (see Table 2).)
In each panel of Fig. 5 the typical MEM spectrum (black points) and the best fit (red curve) by two Gaussian curves are shown. The averages over each set of decay curves of the peak position (centre), the height (amplitude) and the width (Δτ) are reported in Table 2. As can be seen, the position and the width of the main peak are estimated with an accuracy that is not affected by the presence of the secondary peak. Similar results are observed for the retrieved parameters of the secondary peak, showing that our algorithm resolves two distributions even when their amplitudes are in the ratio 1:10.

Conclusions
We have described a new algorithm for implementing a MEM analysis of time-resolved fluorescence decays. The proposed procedure is based on seeking the desired lifetime distribution by solving the set of non linear equations ∇𝓛 = 0 through iterative linear approximations, LU decomposition of the Hessian matrix H and the Golden Section Search for backtracking. This algorithm has been tested on complex analytically simulated data arising from multi-exponential decays and from broad lifetime distributions. A typical inversion with M = 4,096 data points and N = 400 discretization points of the lifetime spectrum takes about 60 s in total using an Intel(R) Core(TM) i7-2600 CPU @ 3.40 GHz processor with 8 GB of memory. The computation time is about 150 s when N = 1,000 lifetimes are considered.
The analysis of multi-exponential decays with our computational approach has clearly shown that the sensitivity of the MEM increases as the discretization number N in log τ space increases. An accuracy in retrieving the simulated parameters comparable to that of non linear regression can be achieved by considering N = 1,000 lifetimes.
The characteristic parameters of the broad lifetime distributions have been retrieved with high accuracy.
In particular, our MEM procedure has retrieved widths Δτ that represent 10 % of the lifetime with a discrepancy lower than 3 % by using a typical value of 5 × 10⁴ counts in the peak maximum. This result clearly indicates that the proposed procedure generates MEM lifetime distributions that can be used to quantify the real heterogeneity of lifetimes in a sample, with no need for time-consuming high-statistics acquisitions. Moreover, the analysis of bi-modal distributions has demonstrated that our algorithm resolves a secondary peak even when its relative amplitude is 10 % of the amplitude of the neighbouring peak.
It is also important to highlight that the MEM algorithm proposed in this paper extends the analysis to datasets of up to 4,096 data points, thereby raising the limit currently set at about 1,000. This increases the information that can be extracted from the data, thus improving the accuracy in recovering the fast decay times; in fact, short lifetimes affect the leading edge, which contains only a few points. On the other hand, a larger density (N ≥ 1,000) is beneficial also for longer lifetimes, since their spectrum can be determined with higher resolution.
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.