Introduction

Solid recovered fuel (SRF) is a waste-derived fuel produced from various non-hazardous municipal, industrial and commercial wastes that are not recyclable but still have good combustion characteristics. Its classification is defined by the EU standard EN 15359:2011, which helps producers and consumers find a common language when specifying requirements and thereby supports maximal utilization. With the proper methods, it is possible to create fuels with properties comparable to conventional biomasses and coals, opening the possibility of efficient co-combustion without complex boiler reconstruction. It is important to note that only the non-recyclable part of the waste should be the source of SRF. Although both recycling and combustion directly reduce the amount of unprocessed landfilled waste, recycling is the economically and environmentally more feasible process. According to the most recent related directive [the Circular Economy Package (January 2018)], EU members should aim to reduce landfilling to a maximum of 10% of municipal solid waste by 2030. In such an ambitious environment, every opportunity to reduce landfilling is welcome. A very similar type of fuel is refuse-derived fuel (RDF), which shows no major technological differences; the distinction is mostly legal.

To achieve efficient boiler operation with this fuel, reliable knowledge about the nature of the relevant reactions is essential. Thermogravimetric analysis (TGA) is a powerful tool for this purpose, but because of the very heterogeneous composition of SRF, its combustion characteristics are challenging to determine. The main problem is that TGA commonly works with very small samples (a few milligrams), which demands very precise sampling, measurement and evaluation, and creating a representative sample from such a suboptimal material sometimes requires compromises.

The decomposition of solid waste-derived fuels has been investigated quite thoroughly in the past few years [1,2,3,4,5,6,7], but the reaction kinetics have been covered only a few times. To acquire the parameters describing the kinetic behavior, different methods are available, which can be categorized as model-free and model-fitting methods. For similar fuels, the most widely used ones were collected and compared by Cepeliogullar et al. [1]. Acceptable results were achieved by all methods with some limitations, which shows that there is no obviously best way to handle mixed solid fuels; every case needs special attention and thorough investigation. The three most commonly used model-free methods were applied to two kinds of solid waste by Radojevic et al. [2] in a nitrogen atmosphere. A more advanced model-fitting method was applied by Conesa et al. [3]: three parallel nth-order reactions were considered in inert and air atmospheres, and the kinetic parameters were calculated using the least squares optimum seeking method. Satisfactory results were presented in all cases, although only the pre-exponential factors were allowed to differ between the atmospheres. One of the most widely investigated fuels is sewage sludge. In the work of Niu et al. [8, 9], pure sewage sludge with different moisture contents and its blend with coal were evaluated using the model-free Flynn–Wall–Ozawa (FWO) method. Another special recovered fuel type is automotive shredder residue (ASR), investigated by Conesa et al. [10]. In that study, the distributed activation energy model (DAEM) was applied with a least squares optimization technique. To describe the complex reactions of the sample, three pseudo-components were defined; measurements were taken at 5, 15 and 30 °C min−1 heating rates in three different atmospheres (0, 10 and 20% oxygen), and the DAEM results were compared to simple first-order ones. The difference in inert atmosphere was small, but in the presence of oxygen the DAEM became more reliable. The DAEM has also been used successfully multiple times to determine the kinetic parameters of other complex solid fuels [11,12,13,14]. Its principles were originally presented by Anthony et al. [15].

In most cases, then, either nitrogen or another inert gas was used as the atmosphere for the TGA measurements. This is appropriate if the aim of the work is to identify the various released gases or to provide an input for a model of a pyrolysis technology. The combustion kinetics of solid (not just waste-derived) fuels are investigated more rarely, which is understandable, as during combustion the reactions are much more complex and harder to distinguish. It is also much harder to take a proper TGA measurement, because in an oxidizing atmosphere the pyrolytic and combustion reactions occur simultaneously, and the samples tend to self-ignite, resulting in unrealistic behavior in the measured graphs, which makes most kinetic evaluation methods unreliable [10]. There are cases, however, for example when the determined kinetic parameters are to be used as an input for a physical model including combustion, where it is necessary to consider combustion in the kinetics as well.

Instead of distinguishing every possible reaction, it is common to substitute them with only a few pseudo-reaction groups. Identifying the origin of these groups is quite challenging for such complex samples. In the literature, it is common to relate them to the major waste components, which are cellulosic materials (paper, textiles and sometimes biomasses) and plastics [1,2,3,4,5,6,7]. For combustion, this means three main reaction groups, two of which are responsible for the volatile release at around 300 °C and 470 °C: the first describes the pyrolysis of all cellulosic materials, and the second the decomposition of the plastics. The third reaction takes place between 600 and 700 °C; it is related to the combustion of all remaining char, mostly from the cellulosic components [1,2,3,4,5,6,7].

The aim of this work is to simultaneously evaluate the combustion kinetics of a complex SRF sample with the most commonly used reaction kinetics models. The results of the models will be compared and rated with regard to precision and usability as an input to more complex combustion models. A sensitivity analysis will also be performed for every optimizable parameter.

Experimental

The SRF sample used in this work was provided by a Hungarian waste processing company. Its original appearance was quite diverse in size, shape and color because of the heterogeneous composition. Before the measurements, a representative sample was ground, which resulted in threadlike pieces with a width of 10–30 μm and varying lengths of 10–1000 μm, as shown in Fig. 1. The pictures were taken with a JEOL JSM-5500LV scanning electron microscope, using the same method as Bakos et al. [16].

Fig. 1 SEM image of the ground sample

The ultimate and proximate properties are shown in Table 1. All parameters were measured on a dry basis, which is close to the fuel’s quality during industrial utilization because of the natural drying during the long transfer and storage.

Table 1 Proximate and ultimate analysis of the sample (dry basis)

A TA Instruments SDT 2960 simultaneous TG/DTA device was used for the thermal analysis in air atmosphere (130 mL min−1), as described in Bakos et al. [16]. The measurements were taken at 5, 10 and 15 °C min−1. These rates are relatively low compared to what is generally used, but at higher rates the self-ignition of the sample was too significant to obtain reliable results. Because of this, and to minimize the impact of mass and heat transfer phenomena, a sample size of around 2 mg was chosen, as suggested by Várhegyi et al. [14].

Kinetic models

To describe the combustion of the sample, a model-fitting method was selected. With the increase in numerical possibilities in the past years, model-fitting methods have become more and more powerful tools in reaction kinetics. However, it is advisable to consider the basic drawbacks of these kinds of calculations, which have been highlighted numerous times in the past, most recently by Várhegyi et al. [17]. The most important is that although it is tempting to use only one measurement with a single heating rate (as it is numerically possible), the resulting parameters are usable only for that exact heating rate. The reason is that such a system is very ill-defined, and a single conversion curve can be described by several different sets of parameters. Evaluating several conversion curves with different heating rates simultaneously, however, forces the optimization process to find parameters that fit measurements with different heating programs at the same time.

Equation 1 was used as the fundamental rate equation to build up the models, where \(x\) is the conversion of the sample defined as the ratio of the actual to final reacted masses (Eq. 2), \(f\left( x \right)\) is the reaction function, which changes in every model, and \(k_{\text{r}}\) is the reaction rate represented by Eq. 3, where A is the pre-exponential factor, E is the activation energy, R is the universal gas constant and T is the absolute temperature of the sample.

$$\frac{{{\text{d}}x}}{{{\text{d}}t}} = k_{\text{r}} f\left( x \right)$$
(1)
$$x\left( t \right) = \frac{{m_{0} - m\left( t \right)}}{{m_{0} - m_{ \inf } }}$$
(2)
$$k_{\text{r}} = A\exp \left( { - \frac{E}{RT}} \right)$$
(3)

As mentioned before, to describe the kinetics of this kind of sample, it is common to consider three subcomponents, each with its own mass share \(c_{\text{i}}\) (Eq. 4).

$$\frac{{{\text{d}}x}}{{{\text{d}}t}} = \mathop \sum \limits_{i = 1}^{3} c_{\text{i}} A_{\text{i}} \exp \left( { - \frac{{E_{\text{i}} }}{RT}} \right)f\left( x \right)$$
(4)
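To make the structure of Eqs. 1–4 concrete, the sketch below evaluates the rate expression for the simplest first-order case (\(f\left( x \right) = 1 - x\)), assuming each pseudo-component is tracked with its own conversion, as is usual for parallel-reaction schemes. The function name and interface are illustrative only and are not the authors' original code.

```matlab
% Minimal sketch of Eqs. 1-4 for the first-order case, f(x) = 1 - x.
% T    - absolute sample temperature / K
% xi   - conversions of the three pseudo-components (3x1 vector)
% c    - mass fractions c_i (3x1, summing to 1)
% A, E - pre-exponential factors / s^-1 and activation energies / J mol^-1 (3x1)
function [dxdt, dxidt] = srfRate(T, xi, c, A, E)
R     = 8.314;                     % universal gas constant / J mol^-1 K^-1
kr    = A .* exp(-E ./ (R * T));   % Eq. 3, one rate per pseudo-component
dxidt = kr .* (1 - xi);            % Eq. 1 with f(x) = 1 - x
dxdt  = sum(c .* dxidt);           % Eq. 4, overall conversion rate
end
```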

As the most commonly used models, three different \(f\left( x \right)\) reaction functions and a distributed activation energy model (DAEM) are considered, as shown in Table 2.

Table 2 List of the tested reaction models

The first one is a simple first-order conversion function (n = 1, Eq. 5), the second one is a more general nth-order reaction (\(n \ne 1,\) Eq. 6), and the third one is expanded with \(\left( {x + z} \right)^{\text{m}}\) (Eq. 7) as it was suggested in the earlier work of Várhegyi et al. [18].

The third model has the most parameters, and some of them could be neglected in some cases, as already suggested [18], because more parameters to optimize demand more computation capacity, and in most cases the precision of the results cannot be increased beyond a certain limit. However, this has not been investigated for this kind of sample, so in the current work it was decided to keep the model in its original form. The influence of the different parameters and the possibility of neglecting them will be evaluated by sensitivity analysis later in this work.
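For reference, the three reaction functions can be written as simple function handles. This is a hedged sketch: the first- and nth-order forms follow directly from the text, while the expanded form of Eq. 7 is assumed to be the conventional \(\left( {1 - x} \right)^{\text{n}} \left( {x + z} \right)^{\text{m}}\) combination of [18]; the authoritative expressions are those listed in Table 2.

```matlab
% Sketch of the three reaction functions, assuming the conventional forms
% behind Eqs. 5-7 (the exact expressions are given in Table 2).
fFirst    = @(x)          1 - x;                       % Eq. 5, n = 1
fNth      = @(x, n)       (1 - x).^n;                  % Eq. 6, n ~= 1
fExpanded = @(x, n, z, m) (1 - x).^n .* (x + z).^m;    % Eq. 7, expanded form
```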

A first-order DAEM is used as the fourth test subject (Eq. 8). This method assumes that each of the previously defined three pseudo-reaction groups consists of an infinite number of subreactions, and that a specific distribution can be assumed for their activation energies. To describe this, a distribution function \(D_{\text{j}} \left( E \right)\) was implemented (Eq. 9), where \(E_{0}\) is the mean value of the distribution and \(\sigma\) is its width. Its integral over any range of \(E\) gives the probability of a random chemical group having its activation energy in that range. Assuming a fine enough resolution, this also gives the proportion of the reactions in the selected range. In this case, a Gaussian distribution was considered, as it is the simplest and most commonly used one. Its biggest problem is that it is symmetrical, which has been shown not to hold for most reaction groups [11]. In practice, though, this asymmetry is not that significant, and good fitting is still achievable with this method [10, 13, 18]. The relevance of choosing a more complex distribution function was investigated by Cai et al. [11].

$$D_{\text{j}} \left( E \right) = \frac{1}{{\sigma_{\text{j}} \sqrt {2\pi } }}\exp \left( { - \frac{1}{2}\left( {\frac{{E - E_{{0,{\text{j}}}} }}{{\sigma_{\text{j}} }}} \right)^{2} } \right)$$
(9)

Equation 8 also shows that the differential equation already contains an integral, which makes an analytical solution challenging. The problem is therefore solved numerically, by considering a first-order reaction equation for a series of independent reactions (indexed by k) with the corresponding activation energies and with the shares defined by the distribution function. Summing these contributions gives the conversion of the j-th pseudo-component (Eq. 10).

$$x_{\text{j}} \left( t \right) = \mathop \sum \limits_{k} \left( {\mathop \smallint \limits_{{E_{{{\text{k}} - 1}} }}^{{E_{\text{k}} }} D_{\text{j}} \left( E \right){\text{d}}E} \right)x_{\text{k}} \left( {t,E_{\text{k}} } \right)$$
(10)
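A minimal numerical sketch of this discretization is shown below, for one pseudo-component. It splits the Gaussian of Eq. 9 into bins over \(E_{0} \pm 3\sigma\), weights each bin by its integral and advances every subreaction as an independent first-order reaction with a simple explicit Euler step; the function and variable names, the bin range and the integration scheme are illustrative assumptions rather than the authors' implementation.

```matlab
% Sketch of the discretized DAEM of Eq. 10 for one pseudo-component j.
% tVec, TVec   - time / s and absolute temperature / K at the evaluation points
% A, E0, sigma - pre-exponential factor and Gaussian parameters of Eq. 9
function xj = daemConversion(tVec, TVec, A, E0, sigma, nBins)
R  = 8.314;                                            % J mol^-1 K^-1
Ek = linspace(E0 - 3*sigma, E0 + 3*sigma, nBins + 1);  % bin edges on the E axis
Fk = 0.5 * (1 + erf((Ek - E0) ./ (sigma * sqrt(2))));  % Gaussian CDF at the edges
wk = diff(Fk);                                         % Eq. 9 integrated over each bin
Ec = 0.5 * (Ek(1:end-1) + Ek(2:end));                  % representative E_k of each bin
xk = zeros(1, nBins);                                  % conversions of the subreactions
xj = zeros(size(tVec));
for i = 2:numel(tVec)
    dt = tVec(i) - tVec(i-1);
    kr = A .* exp(-Ec ./ (R * TVec(i)));               % Eq. 3 for every subreaction
    xk = xk + dt .* kr .* (1 - xk);                    % first-order explicit Euler step
    xj(i) = sum(wk .* xk);                             % Eq. 10, weighted sum
end
end
```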

Parameter fitting

Because of the high number of optimizable parameters, the fitting was performed numerically with a genetic algorithm (GA). It is a commonly used optimum seeking method based on the Darwinian theory of evolution; its basic principles and mathematical background are summarized by McCall [19]. It works by producing generations of individuals, each representing a solution of the same problem with a different parameter set. Every generation is evaluated by comparing its results to a desired value, for example measured data, on the basis of which the parameter sets (individuals) giving the best fits are selected and used to create the next generation. This ensures that the difference between the benchmark data and the results of the best parameter set decreases from generation to generation. The function evaluating each individual must return a single number, called the fitness value (F), and the function providing it is called the fitness function.

A serious drawback is that the method is computationally heavy, as the same problem is solved many times with different parameters without any further simplification, since every generation must contain a fixed number of independent individuals to be compared. This independence has a benefit as well: the individuals can be evaluated simultaneously on multi-core workstations, which significantly decreases the necessary computation time.

A MATLAB code was developed for the calculation using the built-in genetic algorithm function provided by the Optimization Toolbox [20], which also handles parallel cores by default. Equation 11 shows how the fitness value is generated: it is the sum of the squared differences between the measured and the calculated data at every time step. For each heating rate, this sum is divided by the number of measurement points, because slower measurements last longer and contain more points, which would otherwise increase their weight.

$$F = \mathop \sum \limits_{i} \frac{{\mathop \sum \nolimits_{j} \left( {x_{\text{m}} \left( {t_{\text{j}} } \right) - x_{\text{c}} \left( {t_{\text{j}} } \right)} \right)^{2} }}{{N_{\text{i}} }}$$
(11)
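A hedged sketch of how Eq. 11 and the genetic algorithm call can be wired together is given below. Here tMeas and xMeas are assumed cell arrays holding the time and conversion data of the three heating rates, modelConversion is a placeholder for whichever of the four models is being fitted, and nParams, lb, ub and the options are illustrative.

```matlab
% Sketch of the fitness function of Eq. 11 and the genetic algorithm call.
% Each heating rate contributes its summed squared error divided by its
% number of measurement points, as described in the text.
fitness = @(p) sum(cellfun(@(tM, xM) ...
    sum((xM - modelConversion(p, tM)).^2) / numel(xM), tMeas, xMeas));

opts = optimoptions('ga', 'UseParallel', true, 'PopulationSize', 200);
pOpt = ga(fitness, nParams, [], [], [], [], lb, ub, [], opts);  % lb, ub: parameter bounds
```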

Measurements with different heating rates were evaluated together, so every parameter set provided only one fitness value, based on the differences for all three conversion curves, as detailed earlier.

The least squares structure at the core of the process has another benefit: it is quite robust, and the actual reaction model can be changed easily while leaving most of the code intact.

Sensitivity analysis

To evaluate the influence of the parameters, a local sensitivity analysis was performed, meaning that the parameter variations were calculated around the optimal values found by the genetic algorithm. The minimum and maximum values of these ranges were generated as Eq. 12 shows for a general, already optimized parameter \(p_{\text{opt}}\), giving \(p_{\text{SA,min}}\) and \(p_{\text{SA,max}}\).

$$\begin{aligned} p_{\text{SA,min}} = p_{\text{opt}} - 0.5 p_{\text{opt}} \hfill \\ p_{\text{SA,max}} = p_{\text{opt}} + 0.5 p_{\text{opt}} \hfill \\ \end{aligned}$$
(12)

The evaluation was performed as suggested by Cai et al. [21]: the influence of each parameter on the fitness value was classified into one of three groups, poor, medium or high. The influence was considered poor when \(F_{\text{r}}\) (the actual fitness value relative to the optimized one) stayed under 2 at 50% deviation; medium when it exceeded 2 at least at \(p_{\text{SA,min}}\) or \(p_{\text{SA,max}}\) but stayed below \(10^{2}\) at both ends; and high when it exceeded \(10^{2}\) at least at one end.
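A short sketch of this scan and classification is given below, assuming pOpt is the optimized parameter vector and fitness is the function of Eq. 11; the variable names and loop structure are illustrative only.

```matlab
% Sketch of the local sensitivity scan (Eq. 12) and the poor/medium/high
% classification; pOpt and fitness are assumed to come from the optimization.
Fopt  = fitness(pOpt);
level = strings(1, numel(pOpt));
for k = 1:numel(pOpt)
    Fr = zeros(1, 2);
    for s = 1:2
        p     = pOpt;
        p(k)  = pOpt(k) * (1 + 0.5 * (-1)^s);  % Eq. 12: -50% and +50% deviation
        Fr(s) = fitness(p) / Fopt;             % relative fitness value F_r
    end
    if max(Fr) < 2
        level(k) = "poor";                     % below 2 at both ends
    elseif max(Fr) < 1e2
        level(k) = "medium";                   % above 2 at one end, below 10^2 at both
    else
        level(k) = "high";                     % above 10^2 at least at one end
    end
end
```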

Results and discussion

Experimental results

Figure 2 presents the measured conversion curves of the thermal decomposition of SRF in air (dashed lines) at the three heating rates, together with the estimated curves provided by the first-order model (continuous lines). The optimized parameters were the pre-exponential factor, the activation energy and the mass fraction of all three pseudo-components, eight parameters in total, because the third mass fraction is derived from the other two, which ensures that the fractions sum to unity in every individual of every generation.

Fig. 2 Thermal decomposition of SRF in air atmosphere with first-order model

Three reaction groups (cellulosic materials, plastics and the remaining char) were considered, following the common approach of other papers, as detailed earlier. These reaction groups are very general; they could be divided into smaller parts, but without special measurements [3] that would only be speculation, as the composition of the sample is very diverse. Moreover, more reaction groups would not lead to more precise results, so they would offer no practical benefit.

Figures 3–5 show the same measured conversion graphs with the results of the nth-order and the expanded nth-order models and the DAEM as continuous lines, respectively.

Fig. 3 Thermal decomposition of SRF in air atmosphere with nth-order model

Fig. 4 Thermal decomposition of SRF in air atmosphere with expanded nth-order model

Fig. 5 Thermal decomposition of SRF in air atmosphere with DAEM

An interesting observation on the experimental (dashed) curves in Fig. 2 is that an elevated heating rate systematically results in a decreasing amount of char remaining after the second step. This may indicate a certain capability of the char for further gasification if more time (a slower temperature increase) is available. Also note that none of the investigated models capture this behavior (Figs. 2–5), as the char fractions are considered constant in all cases.

Kinetic parameters

The fitting for the devolatilization (which accounts for more than 80% of the whole process) is acceptable even in the case of the simplest first-order model. The main differences start around 500 °C, where the char combustion occurs. Table 3 shows the kinetic parameters found for the decomposition. The fitness value was \(2.51 \times 10^{ - 4}\).

Table 3 Kinetic parameters in case of three parallel first-order reactions, F: \(2.51 \times 10^{ - 4}\)

To increase the quality of the fitting, the applied reaction model should be improved. Three upgraded methods, a basic and an expanded nth-order reaction model and a DAEM, were used for this purpose, as described earlier. In addition, the mass fractions of the reaction groups were allowed to vary slightly as well, which means that the exact amounts of the various components are part of the model and are not defined or measured in any independent way.

Table 4 shows the results of the optimization for the nth-order reactions, Table 5 for the expanded nth-order model and Table 6 for the DAEM. The distribution of activation energies is shown in Fig. 6.

Table 4 Kinetic parameters in case of three parallel nth-order reactions, F: \(2.31 \times 10^{ - 4}\)
Table 5 Kinetic parameters in case of three parallel expanded nth-order reactions, F: \(2.28 \times 10^{ - 4}\)
Table 6 Kinetic parameters in case of DAEM with three parallel first-order reactions, F: \(2.07 \times 10^{ - 4}\)
Fig. 6 Distribution functions for the three pseudo-components’ activation energies in DAEM

It can be seen that letting n differ from unity led to an improved fitness value, which improved only slightly further in the case of the expanded model. The lowest fitness value came from the DAEM, approximately 20% lower than that of the first-order model, as Fig. 7 shows.

Fig. 7 Fitness value of the different reaction models

The pre-exponential factors and activation energies are similar for the first three models and slightly different for the DAEM. These values are hard to compare to the results of other works with similar samples, as those tend to scatter widely. Conesa et al. [3], with a similar method, calculated much higher pre-exponential factors (with magnitudes of \(10^{6}\), \(10^{19}\) and \(10^{21}\)) and also higher activation energies, between 98 and 325 kJ mol−1. However, in the case of their nth-order model, the reaction orders were below unity, or almost 3 in one case.

Cepeliogullar et al. [1] reported pre-exponential factors of the same magnitude and similar activation energies, with a different model-fitting method (Coats–Redfern) and only for pyrolysis. For the reaction order, five values between 0 and 2 were tested as parameters, and it was shown that A and E increased linearly with the reaction order. The best fit was found at n = 1.5, which is close to the reaction orders of this work’s models.

Luo et al. [22] investigated the major components of solid wastes separately with macro-TGA and the FWO method. There, the activation energies of the biomass components were between 23 and 51 kJ mol−1, and those of the plastics between 33 and 76 kJ mol−1. These values are slightly smaller than the ones reported here.

With the freedom of model-fitting methods, it is possible to create an infinite number of equally correct models with very different parameters; such values are therefore not comparable without clarifying the measurement technique, the applied model and the evaluation method.

Sensitivity analysis

Figures 8–11 show the change of the relative fitness value (\(F_{\text{r}} = F_{\text{i}} /F_{\text{opt}}\)) for the previously described sets of parameters.

Fig. 8 Sensitivity analysis of the first-order reaction model

Fig. 9 Sensitivity analysis of the nth-order reaction model

Fig. 10 Sensitivity analysis of the expanded nth-order reaction model

Fig. 11 Sensitivity analysis of the DAEM

It can be seen that in the case of the first-order model, the impact of the activation energies is the largest and is approximately proportional to the corresponding mass fractions. The pre-exponential factors have a much smaller, but still relevant impact with the same mass fraction-based distribution. This tendency holds for the other models as well (Table 7).

Table 7 Sensitivity levels of parameters

It is also clear for the more complex models that the sole effect of the new parameters is quite poor, and the improvement rather comes from the modified model structure. In the case of the two nth-order models, the reaction order has a medium impact, which is a little higher for the expanded model. In the expanded model, the parameter m has quite low relevance and z almost none, so those parameters could be neglected, as suggested earlier [18].

Conclusions

In the case of complex solid fuels, choosing the correct reaction function can increase the fitness of a kinetic model. However, this complexity can lead to precision problems, especially if an oxidative atmosphere is used during the measurements. Such conditions should be avoided whenever possible, and detailed suggestions for doing so are available in the literature; but if the future application demands suboptimal operating conditions, the already slightly flawed measured data cannot be improved by choosing a more precise reaction model.

To investigate this problem, four different reaction models were applied to the thermal decomposition of a quite heterogeneous sample, which is also inclined to self-ignite. For the numerical optimization, a genetic algorithm was used, and it was observed that although there is a clear improvement in the fitness value with the more complex models, the difference is not significant. The impact of the additional parameters was also investigated using sensitivity analysis, and as expected, their relevance is close to negligible compared to that of the activation energies.