Towards a meaningful non-isothermal kinetics for biomass materials and other complex organic samples
The literature of kinetics in thermal analysis deals mainly with models that consist of a single reaction equation. However, most samples of practical importance are too complex for such an oversimplified description. There is no universal way to overcome the difficulties, though there are well-established models that can express the complexity of the studied reactions for several important types of samples. The assumption of more than one reaction increases the number of unknown parameters, and their reliable estimation requires the evaluation of a series of experiments. The various linearization techniques cannot be employed in such cases, while the method of least squares can be carried out for models of any complexity by suitable numerical methods. It is advantageous to evaluate simultaneously experiments with linear and nonlinear temperature programs because a set of constant heating rate experiments is frequently not sufficient to distinguish between different models or model variants. It is well worth including modulated and constant reaction rate temperature programs in the evaluated series whenever they are obtainable. Sometimes different samples share common features; in such cases one can try to describe their reactions by assuming that parts of the kinetic parameters are common to the samples. The obtained models and parameter values should be based, in a reliable way, on a sufficiently large amount of experimental information. This article is based on the authors’ experience in the indicated directions from 1979 till the present. Though the examples shown are taken from biomass research, the models and methods presented are also hoped to be relevant for other materials that have a complicated structure, exhibit complicated thermal reactions, or both.
Keywords: Non-isothermal reaction kinetics · Thermal analysis · Complex kinetic models · Method of least squares · Modulated experiments · Biomass · Charcoal
Unfortunately, the samples of practical importance are usually too complex for such an oversimplified description because different sorts of reactive species participate in the studied processes. Sometimes backward reactions or other secondary reactions influence the measured signals. Impurities with catalytic activity may also complicate the picture. There is no universal way to overcome the difficulties, though there are well-established models that can express the complexity of the studied reactions for several important types of samples.
The assumption of more than one reaction increases the number of unknown parameters, and their reliable estimation requires the evaluation of a series of experiments. The traditional evaluation methods (i.e., the various linearization techniques) cannot be employed in such cases because they can handle only one kinetic equation of type (1). Besides, they are restricted to constant heating rates (linear T(t) programs), they are unfavorably sensitive to experimental errors, and the empirical reacted fraction (α) frequently cannot be read from the TG curves. The latter problem arises whenever the decomposition of an organic sample is followed by the slow carbonization of the formed chars.
The present work is based on the authors’ experience in the indicated directions from 1979 till the present. Though the examples shown are taken from biomass research, the treatment is hoped to be relevant also for other materials that have a complicated structure, exhibit complicated thermal reactions, or both.
Evaluation of a series of experiments by the method of least squares (LSQ)
As mentioned above, the traditional linearization techniques of non-isothermal kinetics cannot be employed when the model consists of more than one reaction. Besides, a complex model contains too many unknown parameters compared to the information content of a single thermal analysis experiment. In such cases the simultaneous evaluation of several experiments can be carried out by the method of nonlinear least squares. The present state of development of computers and numerical methods makes this feasible.
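As a minimal illustration of the idea, the sketch below fits a single first-order Arrhenius model simultaneously to three synthetic constant heating rate curves with one common objective function. The model, parameter values and heating rates are invented for the demonstration; the models discussed in this article are far more complex, but the least squares machinery is the same.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

R = 8.314  # gas constant, J mol-1 K-1

def simulate_alpha(E, lnA, beta, T_grid):
    """Numerically solve dalpha/dT = (A/beta) exp(-E/RT) (1 - alpha)."""
    def rhs(T, a):
        return np.exp(lnA - E / (R * T)) / beta * (1.0 - a)
    sol = solve_ivp(rhs, (T_grid[0], T_grid[-1]), [0.0],
                    t_eval=T_grid, rtol=1e-8, atol=1e-10)
    return sol.y[0]

# three synthetic "experiments" at different heating rates (K s-1)
betas = [5 / 60, 10 / 60, 20 / 60]
T_grid = np.linspace(400.0, 720.0, 160)
E_true, lnA_true = 150e3, np.log(1e10)
data = [simulate_alpha(E_true, lnA_true, b, T_grid) for b in betas]

def residuals(p):
    """Concatenated deviations over ALL experiments -- one objective function."""
    E, lnA = p
    return np.concatenate([simulate_alpha(E, lnA, b, T_grid) - d
                           for b, d in zip(betas, data)])

fit = least_squares(residuals, x0=[140e3, np.log(1e9)])
E_fit, lnA_fit = fit.x
```

Because every experiment enters the same residual vector, the strong compensation between E and ln A is constrained by the heating rate variation, which is the central benefit of evaluating a series of experiments together.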
History of the LSQ evaluation of series of experiments in the non-isothermal kinetics from 1979 till 1996
The kinetic evaluation of thermal analysis experiments is published in a wide range of journals and conference proceedings; hence, a general survey of the field is difficult. To our knowledge, the first paper dealing with the least squares evaluation of more than one non-isothermal thermoanalytical experiment was published by the first author of the present article nearly 40 years ago . The work contained a section entitled “Least squares evaluation of more than one thermoanalytical curve.” The objective function in this section was identical to Eq. (2) without the w_j weight factors. A detailed description was given of the employed numerical methods. A parameter transformation was also described to reduce the compensation effects between the variables. The outline of the algorithm ended with a note: “the resulting program can be run on minicomputers of 64 K bytes of total memory. With careful programming the required memory can be diminished below 32 K bytes and the computation can be carried out on desk-top computers.” The quoted sentences reflect the possibilities of the seventies. Note that the memory of a present-day desktop computer is between 2 and 16 GB, an increase of around five orders of magnitude in the past four decades. The computational speed of computers has also increased enormously.
The next paper in this direction appears to be the work of Braun and Burnham . They presented a method that can be employed with any temperature program and applied it to simulated experiments with constant heating rates. In their Fig. 6 the evaluation of simulated experiments with heating rates of 0.56, 5.6 and 56 °C min−1 is shown. This choice of heating rates is very reasonable because the lowest value, 0.56 °C min−1, corresponds to a practical limit (ca. 18 h per experiment), while thermoanalytical experiments above 56 °C min−1 are frequently influenced by heat and/or mass transfer limitations. The studied models included a distributed activation energy model (DAEM), which has been a useful tool for the description of complex decomposition reactions for more than 40 years . Burnham et al.  employed this evaluation method for studying the thermal decomposition of kerogens (the organic portion of sedimentary rocks) in 1987, and for coal pyrolysis in 1989 . More than one DAEM was used in their works. Sundararaman et al.  also studied the thermal decomposition of kerogens in 1992 by assuming different DAEMs and elaborating a complex algorithm for the evaluation.
Shortly afterward, further authors started to evaluate their non-isothermal experiments simultaneously [10, 11]. Várhegyi et al.  tried to call the attention of the thermal analysis community to the importance of the simultaneous LSQ evaluation of more than one experiment in an article in the Journal of Thermal Analysis in 1996. This work was entitled “Application of complex reaction kinetic models in thermal analysis. The least squares evaluation of series of experiments.”
What temperature programs should be used for a series of experiments?
A straightforward way would be the inclusion of isothermal experiments. However, entirely isothermal experiments can seldom be produced in thermal analysis because there is a transient period before the isothermal part is reached, during which important parts of the reactions might occur. A better way is to heat up the sample in a controlled way and include the heat-up period in the kinetic evaluation, too. Besides, it is well worth continuing the heating after the isothermal section to study the continuation of the processes. In other words, the isothermal experiments should be handled as experiments with a stepwise T(t), and the kinetic equations should be solved numerically from low to high temperatures, as was done by Várhegyi .
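The stepwise handling described above can be sketched numerically. The code below integrates one first-order equation of type (1) over a hypothetical ramp–hold–ramp program, solving from low to high temperature; the temperatures, hold time and rate parameters are invented for the illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # J mol-1 K-1

def stepwise_T(t):
    """Ramp at 10 K min-1 from 300 K to 550 K, hold 30 min, then resume the ramp."""
    beta = 10.0 / 60.0                 # K s-1
    t1 = (550.0 - 300.0) / beta        # end of the first ramp (1500 s)
    t2 = t1 + 30.0 * 60.0              # end of the isothermal hold (3300 s)
    if t < t1:
        return 300.0 + beta * t
    if t < t2:
        return 550.0
    return 550.0 + beta * (t - t2)

def rhs(t, a, E=150e3, A=1e10):
    """Eq. (1) with first-order f(alpha) = 1 - alpha, evaluated at T = T(t)."""
    return A * np.exp(-E / (R * stepwise_T(t))) * (1.0 - a)

sol = solve_ivp(rhs, (0.0, 7000.0), [0.0], dense_output=True,
                rtol=1e-8, atol=1e-10, max_step=10.0)
alpha_after_hold = sol.sol(3300.0)[0]  # conversion reached by the end of the hold
alpha_final = sol.sol(7000.0)[0]       # the reaction completes on the second ramp
```

The same solver handles the heat-up period, the hold and the continued heating in one pass, so no part of the reaction is excluded from the evaluation.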
Nowadays several thermoanalytical apparatuses have special built-in temperature program features which can also add valuable information to a series of experiments. Figure 3b displays heating programs that were used for studying the thermal decomposition of wood . The wavy line across Fig. 3b is a modulated T(t): sinusoidal waves with an amplitude of 5 °C and a period of 200 s were superposed on a 2 °C min−1 linear T(t) function. They served to increase the rather limited information content of the linear T(t) experiments. In the “constant reaction rate” (CRR) experiments the equipment regulated the heating of the samples so that the reaction rate would oscillate around a preset limit. The CRR experiments aimed at keeping the mass loss rates low over the entire domain of the reaction; the highest mass loss rate found in these experiments was 0.8 μg s−1. The T(t) needed to keep the reaction rate around a preset limit obviously depends on the reactivity of the given sample. Figure 3b displays the T(t) programs that the instrument produced for the spruce (•••) and birch (---) samples of the study.
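A modulated program of this kind is straightforward to construct. The short sketch below builds the T(t) described above (5 °C amplitude, 200 s period, superposed on a 2 °C min−1 ramp) together with its instantaneous heating rate; the 300 K starting temperature is our assumption for the example.

```python
import numpy as np

def modulated_T(t, T0=300.0, beta=2.0 / 60.0, amp=5.0, period=200.0):
    """Linear 2 K min-1 ramp with 5 K sinusoidal waves of 200 s period superposed."""
    return T0 + beta * t + amp * np.sin(2.0 * np.pi * t / period)

t = np.linspace(0.0, 3600.0, 3601)   # one hour, 1 s resolution
T = modulated_T(t)
# the instantaneous heating rate oscillates around the 2 K min-1 baseline
dTdt = np.gradient(T, t)
```

Because the instantaneous heating rate swings far above and below the 2 °C min−1 baseline, such a run probes the kinetics over a wider rate range than the underlying linear program alone.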
Note that the modulated and CRR temperature programs have been available for a long time. (The CRR method, under a different name, was invented by the Paulik brothers nearly 50 years ago.) Their evaluation together with the linear and stepwise heating programs does not require extra effort: the numerical solution of the model can easily be carried out for any T(t) function. Still, this approach is not yet popular. In the field of biomass research we found the simultaneous LSQ evaluation of modulated, CRR and other types of experiments together only in works in which we participated [14, 15, 16, 17, 18, 19].
An example: the controlled combustion of charcoals
Charcoals are usually made from wood or other lignocellulosic materials. These feedstocks have a rather complicated chemical and physical structure. Accordingly, charcoals are not homogeneous; they contain parts of higher and lower reactivity. A simple approach to the kinetic description of parts with different reactivity is the assumption of pseudo-components. The use of pseudo-components in biomass research has a long history, though the early investigators did not clarify or emphasize, by the adjective “pseudo,” that their components are not well-defined chemical compounds.
In this work each pseudo-component was described by Eq. (1). A formula with two adjustable parameters was selected for f(α); it can approximate the self-acceleration due to the increasing pore surface area of the sample during charcoal combustion [9, 19]. Fifty-two unknown parameters were determined for the six samples from 18 experiments. Hence N_param/N_exper was 2.9 in this evaluation, meaning that fewer than three parameters were determined from each experimental curve.
Another example: the thermal decomposition of woods
Woods consist of three major components (cellulose, hemicellulose and lignin) and several minor components. Accordingly, the description of their thermal decomposition requires at least three pseudo-components. Here examples follow from a recent work of Barta-Rajnai et al. . The thermal decomposition of the cellulose component is relatively simple under the usual conditions of thermal analysis. A first-order kinetics usually gives an adequate approximation, though a self-accelerating kinetics frequently gives a somewhat better fit. We followed the latter approach in our recent works: the cellulose component was described by Eq. (1) with the same type of f(α) function that was used in our combustion and gasification studies [9, 13, 14, 16, 17, 18, 19].
The thermal decomposition of the hemicellulose and lignin is more complex; there are several partial reactions. In our opinion the best available approach is the use of a distributed activation energy model (DAEM). This approach was elaborated for coals more than 40 years ago  and has been used in biomass research since 1985 . A DAEM approximates the decomposition kinetics of many reacting species. The reactivity differences are described by different activation energies. To keep the number of unknown parameters at a reasonable level, a distribution function can be assumed for the activation energies. See more details in the literature, e.g., in the classical work of Anthony et al. .
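The mechanics of a Gaussian DAEM can be sketched in a few lines. The code below discretizes the activation energy distribution into parallel first-order channels and forms the g(E)-weighted average of their survivals; the parameter values are invented for the illustration and do not reproduce any model of the cited works.

```python
import numpy as np

R = 8.314  # J mol-1 K-1

def daem_alpha(T, beta, E0, sigma, A, n_E=101):
    """Conversion alpha(T) for a Gaussian DAEM at constant heating rate beta (K s-1).

    Each activation energy channel reacts by first-order kinetics; the overall
    unreacted fraction is the g(E)-weighted average of the channel survivals.
    """
    E = np.linspace(E0 - 4.0 * sigma, E0 + 4.0 * sigma, n_E)
    g = np.exp(-0.5 * ((E - E0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    surv = np.empty((n_E, len(T)))
    for i, Ei in enumerate(E):
        k = A * np.exp(-Ei / (R * T))
        # cumulative temperature integral of k(T') dT' by the trapezoid rule
        psi = np.concatenate(([0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * np.diff(T))))
        surv[i] = np.exp(-psi / beta)
    # integrate over E with trapezoid weights
    w = np.full(n_E, E[1] - E[0])
    w[0] = w[-1] = 0.5 * (E[1] - E[0])
    return 1.0 - (w * g) @ surv

T = np.linspace(400.0, 900.0, 500)
alpha = daem_alpha(T, beta=10 / 60, E0=200e3, sigma=10e3, A=1e13)
```

The number of unknown parameters stays small (here E0, sigma and A) no matter how many parallel channels are used, which is exactly the economy the text attributes to the DAEM.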
The number of the unknown parameters
There are many publications that employ Eq. (1) and regard the activation energy as a function of the reacted fraction, α. (See, e.g., the ICTAC Kinetic Project .) In practice, this means a graphical or tabular presentation of 20–100 E–A data pairs as a function of α. In this way 40–200 kinetic parameters are determined from a few simple experimental curves measured at constant heating rates. In reality, however, the information content of such an experimental series is much smaller.
However, the application of the Friedman method , other model-free approaches , or the Miura–Maki method  to a DAEM evaluation would result in a very high number of kinetic parameter values for the experiments shown in Figs. 3 or 8. Note that a computing algorithm almost always produces some numbers; the question is the meaning, the reliability and the uniqueness of these numbers.
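For reference, the Friedman method itself is a small computation: at each fixed conversion, ln(dα/dt) is regressed against 1/T across experiments run at different heating rates, and the slope gives −E/R. The sketch below applies it to exact synthetic first-order curves (invented parameters); on genuine single-reaction data it recovers the true, constant E(α), but for complex samples the long tables it produces deserve the scrutiny discussed above.

```python
import numpy as np

R = 8.314  # J mol-1 K-1

def first_order_curve(E, A, beta, T):
    """Synthetic alpha(T) and dalpha/dt for first-order kinetics at beta (K s-1)."""
    k = A * np.exp(-E / (R * T))
    psi = np.concatenate(([0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * np.diff(T)))) / beta
    alpha = 1.0 - np.exp(-psi)
    return alpha, k * (1.0 - alpha)

betas = [5 / 60, 10 / 60, 20 / 60]          # heating rates, K s-1
T = np.linspace(400.0, 760.0, 2000)
E_true = 150e3
curves = [first_order_curve(E_true, 1e10, b, T) for b in betas]

E_alpha = []
for a_star in np.linspace(0.1, 0.9, 9):
    inv_T, ln_rate = [], []
    for alpha, rate in curves:
        inv_T.append(1.0 / np.interp(a_star, alpha, T))    # T where alpha = a_star
        ln_rate.append(np.log(np.interp(a_star, alpha, rate)))
    slope = np.polyfit(inv_T, ln_rate, 1)[0]               # slope = -E/R
    E_alpha.append(-slope * R)
```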
Toward the determination of kinetic parameters that are more reliable than the ones filling the literature nowadays
There is no general recipe for achieving this goal, but a few pieces of advice, as listed above, might be useful. Among others, the experiments should cover a wide range of experimental conditions (as wide as the properties of the given samples, reactions and equipment permit). Frequently several samples are available which share some common features. If so, one can try to describe their reactions by assuming several common parameters. The goal is to base the obtained parameter values on a large amount of experimental information. In the work of Barta-Rajnai et al.  the ratio of the determined parameter values to the evaluated experiments, N_param/N_exper, was near one, meaning that each parameter value was based on nearly one TGA experiment. This was achieved by a systematic investigation of which parameters could be assumed identical for the samples without a considerable worsening of the fit quality.
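The idea of common parameters is easy to express with nonlinear least squares. The toy sketch below evaluates two synthetic "samples" together under the assumption of a shared activation energy but sample-specific pre-exponential factors; all numbers are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import least_squares

R = 8.314  # J mol-1 K-1

def alpha_fok(T, E, lnA, beta):
    """First-order conversion at constant heating rate beta (K s-1);
    the temperature integral is computed with cumulative trapezoids."""
    k = np.exp(lnA - E / (R * T))
    psi = np.concatenate(([0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * np.diff(T)))) / beta
    return 1.0 - np.exp(-psi)

T = np.linspace(400.0, 750.0, 200)
beta = 10 / 60
# two hypothetical samples assumed to share E but differ in reactivity (lnA)
data = [alpha_fok(T, 160e3, np.log(5e10), beta),
        alpha_fok(T, 160e3, np.log(2e11), beta)]

def residuals(p):
    """p = [E_common, lnA_sample1, lnA_sample2] -- E is shared by both samples."""
    E, lnA1, lnA2 = p
    return np.concatenate([alpha_fok(T, E, lnA1, beta) - data[0],
                           alpha_fok(T, E, lnA2, beta) - data[1]])

fit = least_squares(residuals, x0=[140e3, np.log(1e10), np.log(1e10)])
```

Sharing E reduces the parameter count from four to three here; with many samples and experiments the saving grows, and each surviving parameter rests on more experimental information.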
A cross section of recent works that use non-isothermal kinetics
An increasing number of kinetic works are published in thermal analysis. In the last 2 years the Journal of Thermal Analysis and Calorimetry published more than 200 articles containing the word “kinetic” or “kinetics” in their titles. We selected 60 of these articles for a closer look to obtain a cross section of the present state of the field. The selection was based on the relevance of the titles to the subjects of the present work. A quarter of the selected papers were found to be closely related to our treatment, as shown below.
Four papers employed the simultaneous least squares evaluation of more than one constant heating rate experiment. Conesa et al.  studied the shredder residues of motor vehicles in this way. Three heating rates (5, 15 and 30 °C min−1) and three different atmospheres (N2 with 0, 10 and 20% O2) were used. The complexity of the studied feedstock was described by assuming three pseudo-components. Their thermal reactions were described by a distributed activation energy model that assumed a Gaussian distribution of the activation energies. As a comparison, the pseudo-components were also described by first-order kinetics. We think this work was the closest match to the considerations outlined in the present article. In a subsequent work, Conesa and Soler  studied biomass, electronic wastes and their mixture by similar means. In that work the reactions of the pseudo-components were described by first-order and n-order kinetics. Yang et al.  examined the combustion properties of peats by the simultaneous least squares evaluation of experiments at five heating rates. Three partial reactions were considered: pyrolysis, fuel oxidation and char burn. The partial reactions were described by n-order kinetics. Plis et al.  studied the combustion behavior of furniture wood wastes. One of their samples was the untreated waste, while four other samples were made from the original feedstock by thermal pre-treatment (torrefaction), which served to improve the fuel properties. A simple kinetic model was used that consisted of two first-order partial reactions, and the evaluation was based on the simultaneous fitting of experiments at 5, 10 and 20 °C min−1 heating rates.
Two further works evaluated the experiments one-by-one by the method of least squares. This procedure is not sufficiently safe, as shown in the next section.
In our opinion there is no need for artificial functions in the deconvolution because the kinetic models themselves can serve for the description of the partial peaks, and the kinetic evaluation of the experiments can directly lead to a deconvolution. (See e.g., Figures 1–7 in the present work).
We think that this method introduces artifacts into the evaluation. If Gaussian curves are used, for example, then the obtained kinetics will reflect the properties of the Gaussian curves.
The deconvolution is applicable only to constant heating rate measurements and is not suitable for the simultaneous evaluation of more than one experiment.
Four articles divided the complex TGA curves into smaller temperature domains and assumed a kinetic equation of type Eq. (1) in each domain. In these works the kinetic evaluation was carried out separately in each domain by a traditional evaluation method. However, the separation of the overlapping processes cannot be carried out by such simple means. Let us regard Fig. 1 in the work of Cruz and Crnkovic  as an example; it shows the oxidative decomposition of a lignocellulosic biomass sample. Here the border between the first and second reaction steps is around 305 °C. However, the thermal decomposition of the hemicellulose is far from being terminated at this temperature, while the reactions of the cellulose start earlier in this material. (That is why the two partial peaks overlap.) Besides, the thermal decomposition reactions of the lignin component take place at a considerable reaction rate everywhere between 200 and 600 °C . This example illustrates why the reacted fractions of the partial processes cannot be deduced from the experimental TGA curves by this method.
Why one experiment is not enough for a dependable kinetic evaluation
Numerous works in the literature have shown that a single TGA experiment can be described in many ways; accordingly, a kinetic evaluation based on only one experiment is ill defined. Here we add a new example that shows the similarity between n-order kinetics and DAEM kinetics with very different activation energies in a non-isothermal experiment.
Table: Kinetic parameters, peak maxima and peak widths (full width at half maximum, FWHM) of the simulated curves in Fig. 9. Columns: line style in Fig. 9; E0/kJ mol−1; Tpeak/°C and FWHM/°C at 5 °C min−1; Tpeak/°C and FWHM/°C at 50 °C min−1.
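Peak characteristics of the kind tabulated above can be computed directly from simulated curves. The sketch below evaluates Tpeak and FWHM of the dα/dT curve for an n-order model at two heating rates; the parameter values are arbitrary illustrations, not those behind Fig. 9.

```python
import numpy as np

R = 8.314  # J mol-1 K-1

def peak_and_fwhm(E, A, beta, n=1.0, T=np.linspace(400.0, 800.0, 4000)):
    """Peak temperature and full width at half maximum of dalpha/dT
    for n-order kinetics at a constant heating rate beta (K s-1)."""
    k = A * np.exp(-E / (R * T))
    psi = np.concatenate(([0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * np.diff(T)))) / beta
    if n == 1.0:
        unreacted = np.exp(-psi)                              # first-order solution
    else:
        unreacted = (1.0 + (n - 1.0) * psi) ** (-1.0 / (n - 1.0))  # n-order solution
    rate = k / beta * unreacted ** n                          # dalpha/dT
    i = rate.argmax()
    half = 0.5 * rate[i]
    # locate the half-maximum crossings on the rising and falling branches
    T_left = np.interp(half, rate[: i + 1], T[: i + 1])
    T_right = np.interp(half, rate[i:][::-1], T[i:][::-1])
    return T[i], T_right - T_left

Tp5, w5 = peak_and_fwhm(150e3, 1e10, 5 / 60)     # at 5 K min-1
Tp50, w50 = peak_and_fwhm(150e3, 1e10, 50 / 60)  # at 50 K min-1
```

Comparing such Tpeak and FWHM values at widely spaced heating rates is what distinguishes the model variants: curves that are nearly identical at one heating rate separate when the rate changes tenfold.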
The materials of practical importance seldom have simple thermal behavior.
The traditional models and evaluation methods of the non-isothermal kinetics are usually not suitable for materials with complicated chemical and/or physical structure.
One should look for models that reflect, more or less, the complexity of the studied processes.
The evaluation should be based on an ample amount of experimental information.
It is advantageous to evaluate simultaneously experiments with linear and nonlinear temperature programs because a set of linear temperature programs (constant heating rate experiments) is frequently not sufficient to distinguish between different models or model variants.
The method of least squares is highly advisable for the evaluation of series of experiments because it can be carried out for any model complexity and any sort of temperature program at the present level of computers and numerical methods.
Sometimes different samples share some common features. In such cases one can try to describe their reactions by assuming parts of the kinetic parameters to be common for the samples.
The points listed above aim to base the obtained models and parameter values on a large amount of experimental information in a reliable way.
The authors acknowledge the financial support by the Research Council of Norway and a number of industrial partners through the project BioCarb + (“Enabling the Biocarbon Value Chain for Energy”).
- 3. Anthony DB, Howard JB, Hottel HC, Meissner HP. Rapid devolatilization of pulverized coal. Symposium (International) on Combustion. 1975;15(1):1303–17. https://doi.org/10.1016/s0082-0784(75)80392-4.
- 30. Arhangelskii I, Dunaev A, Makarenko I, Tikhonov N, Belyaev S, Tarasov A. Non-isothermal kinetic methods: workbook and laboratory manual. Berlin: Edition Open Access; 2013. Available at http://edition-open-access.de/textbooks/1/index.html.