The SRF sample used in this work was provided by a Hungarian waste processing company. Its original appearance was diverse in size, shape and color because of the heterogeneous composition. Before the measurements, a representative sample was ground, which resulted in threadlike pieces with widths of 10–30 μm and lengths varying from 10 to 1000 μm, as shown in Fig. 1. The pictures were taken with a JEOL JSM-5500LV scanning electron microscope, using the same method as Bakos et al. [16].
The ultimate and proximate properties are shown in Table 1. All parameters were measured on a dry basis, which is close to the fuel quality during industrial utilization because of the natural drying that occurs during long transfer and storage.
Table 1 Proximate and ultimate analysis of the sample (dry basis)

A TA Instruments SDT 2960 simultaneous TG/DTA device was used for the thermal analysis in an air atmosphere (130 mL min−1), as described in Bakos et al. [16]. The measurements were taken at 5, 10 and 15 °C min−1. These rates are relatively low compared to those generally used, but at higher rates the self-ignition of the sample was too significant to obtain reliable results. For this reason, and to minimize the impact of mass and heat transfer phenomena, the sample size was set to around 2 mg, as suggested by Várhegyi et al. [14].
Kinetic models
To describe the combustion of the sample, a model-fitting method was selected. With the increase in numerical possibilities in recent years, model-fitting methods have become more and more powerful tools in reaction kinetics. However, it is advisable to consider the basic drawbacks of such calculations, which have been highlighted numerous times in the past, most recently by Várhegyi et al. [17]. The most important is that, although it is tempting to use only one measured curve at a single heating rate (as is numerically possible), the result is then only valid for that exact heating rate. The reason is that such a system is very ill-defined, and a single conversion curve can be described by several sets of parameters. Evaluating several conversion curves with different heating rates simultaneously, however, forces the optimization process to find parameters that fit measurements with different heating programs at the same time.
Equation 1 was used as the fundamental rate equation to build up the models, where \(x\) is the conversion of the sample, defined as the ratio of the actual to the final reacted mass (Eq. 2), \(f\left( x \right)\) is the reaction function, which changes with every model, and \(k_{\text{r}}\) is the reaction rate coefficient given by Eq. 3, where A is the pre-exponential factor, E is the activation energy, R is the universal gas constant and T is the absolute temperature of the sample.
$$\frac{{{\text{d}}x}}{{{\text{d}}t}} = k_{\text{r}} f\left( x \right)$$
(1)
$$x\left( t \right) = \frac{{m_{0} - m\left( t \right)}}{{m_{0} - m_{ \inf } }}$$
(2)
$$k_{\text{r}} = A\exp \left( { - \frac{E}{RT}} \right)$$
(3)
As mentioned before, to describe the kinetics of such samples, it is common to consider three subcomponents, each with its own mass share \(c_{\text{i}}\) (Eq. 4).
$$\frac{{{\text{d}}x}}{{{\text{d}}t}} = \mathop \sum \limits_{i = 1}^{3} c_{\text{i}} A_{\text{i}} \exp \left( { - \frac{{E_{\text{i}} }}{RT}} \right)f\left( {x_{\text{i}} } \right)$$
(4)
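To make the construction concrete, a minimal MATLAB sketch of Eqs. 1–4 under a linear heating program is given below. The mass shares, pre-exponential factors and activation energies are placeholder values, not fitted results, and a first-order \(f(x) = 1 - x\) is assumed for all three pseudo-components.

```matlab
% Sketch: three-pseudo-component rate (Eqs. 1-4) under linear heating.
% All parameter values below are placeholders, not fitted results.
R    = 8.314;                 % universal gas constant / J mol^-1 K^-1
beta = 10/60;                 % heating rate: 10 C min^-1 in K s^-1
T0   = 300;                   % initial temperature / K
c    = [0.35 0.45 0.20];      % mass shares of the pseudo-components
A    = [1e8 1e10 1e12];       % pre-exponential factors / s^-1
E    = [1.0e5 1.5e5 2.0e5];   % activation energies / J mol^-1

% dx_i/dt = A_i exp(-E_i/RT) f(x_i), with first-order f(x) = 1 - x
rate = @(t, x) (A(:) .* exp(-E(:) ./ (R * (T0 + beta*t)))) .* (1 - x);

[t, x]  = ode15s(rate, [0 5400], zeros(3,1));   % stiff solver is safer here
xTotal  = x * c(:);                             % overall conversion (Eq. 4)
plot(T0 + beta*t, xTotal); xlabel('T / K'); ylabel('x');
```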
As the most commonly used models, three different \(f\left( x \right)\) reaction functions and a distributed activation energy model (DAEM) are considered, as shown in Table 2.
Table 2 List of the tested reaction models

The first one is a simple first-order conversion function (n = 1, Eq. 5), the second one is a more general nth-order reaction (\(n \ne 1,\) Eq. 6), and the third one is expanded with \(\left( {x + z} \right)^{\text{m}}\) (Eq. 7), as suggested in the earlier work of Várhegyi et al. [18].
The third model has the most parameters, and some of them could be neglected in certain cases, as was already suggested [18], because more parameters to optimize demand more computation capacity, while in most cases the precision of the results cannot be increased above a certain limit. However, this has not been investigated for this kind of sample, so in the current work the model was kept in its original form. The influence of the different parameters and the potential for neglecting some of them will be evaluated by sensitivity analysis later in this work.
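For illustration, the three reaction functions of Table 2 can be written as interchangeable MATLAB function handles; the exact forms below (Eqs. 5–7) are assumed from the descriptions in the text.

```matlab
% Sketch: the tested f(x) reaction functions of Table 2 as interchangeable
% handles; n, z and m are model parameters to be fitted.
f1 = @(x)          1 - x;                      % first order (Eq. 5)
f2 = @(x, n)       (1 - x).^n;                 % nth order (Eq. 6)
f3 = @(x, n, z, m) (1 - x).^n .* (x + z).^m;   % extended model (Eq. 7)
```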
A first-order DAEM is used as the fourth test subject (Eq. 8). This method assumes that each of the previously defined three pseudo-reaction groups consists of an infinite number of subreactions, and that a specific distribution can be assumed for their activation energies. To describe this, a \(D_{\text{j}} \left( E \right)\) distribution function was implemented (Eq. 9), where \(E_{0}\) is the mean value of the distribution and \(\sigma\) is its width. Its integral over any range of \(E\) gives the probability that a random chemical group has its activation energy in that range. Assuming a fine enough resolution, this also gives the proportion of the reactions in the selected range. In this case, a Gaussian distribution was considered, as it is the simplest and the most commonly used one. Its biggest drawback is that it is symmetrical, which has been shown not to be true for most reaction groups [11]. In practice, though, this asymmetry is not that significant, and good fitting is still achievable by this method [10, 13, 18]. The relevance of choosing a more complex distribution function was investigated by Cai et al. [11].
$$D_{\text{j}} \left( E \right) = \frac{1}{{\sigma_{\text{j}} \sqrt {2\pi } }}\exp \left( { - \frac{1}{2}\left( {\frac{{E - E_{{0,{\text{j}}}} }}{{\sigma_{\text{j}} }}} \right)^{2} } \right)$$
(9)
Equation 8 also shows that the differential equation already contains an integral, which makes an analytical solution challenging. The problem is therefore solved numerically by considering a first-order reaction equation for a series of independent reactions (indexed by k) with the corresponding activation energies and with shares defined by the distribution function. In summary, this results in the conversion of pseudo-component j (Eq. 10).
$$x_{\text{j}} \left( t \right) = \mathop \sum \limits_{k} \left( {\mathop \smallint \limits_{{E_{{\text{k}} - 1}} }^{{E_{\text{k}} }} D_{\text{j}} \left( E \right){\text{d}}E} \right)x_{\text{k}} \left( {t,E_{\text{k}} } \right)$$
(10)
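As an illustration of this discretization, the sketch below evaluates Eqs. 9–10 for one pseudo-component: the Gaussian is split into energy bins, each bin receives its probability mass as a weight, and a first-order conversion is computed per bin. The parameter values, bin range and bin count are illustrative assumptions.

```matlab
% Sketch: discretized first-order DAEM for one pseudo-component (Eqs. 8-10).
% A, E0, sigma and the bin range/count below are illustrative, not fitted.
R = 8.314; beta = 10/60; T0 = 300;           % gas constant, 10 C min^-1, K
A = 1e10; E0 = 1.8e5; sigma = 1.5e4;         % Arrhenius and Eq. 9 parameters

Eedges = linspace(E0 - 4*sigma, E0 + 4*sigma, 51);        % energy bin edges
Phi = @(E) 0.5 * (1 + erf((E - E0) ./ (sigma*sqrt(2))));  % Gaussian CDF (Eq. 9)
w  = diff(Phi(Eedges));                      % probability mass per bin
Ek = 0.5 * (Eedges(1:end-1) + Eedges(2:end));             % bin midpoints

t = linspace(0, 5400, 500); T = T0 + beta*t;
xj = zeros(size(t));
for k = 1:numel(Ek)
    kr = A * exp(-Ek(k) ./ (R*T));           % Arrhenius rate at E_k (Eq. 3)
    xk = 1 - exp(-cumtrapz(t, kr));          % first-order x_k(t, E_k)
    xj = xj + w(k) * xk;                     % weighted sum of Eq. 10
end
```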
Parameter fitting
Because of the high number of optimizable parameters, the fitting was performed numerically with a genetic algorithm (GA). It is a commonly used optimum-seeking method based on the Darwinian theory of evolution; its basic principles and mathematical background are summarized by McCall [19]. It works by producing generations of species (candidate parameter sets) as solutions of the same problem. Every generation is evaluated by comparing its results to a benchmark, for example measured data, based on which the parameter sets giving the best fits are selected and used to create the next generation. This method ensures that the difference between the benchmark data and the results of the best parameters converges toward zero from generation to generation. The function evaluating each candidate must return a single number, called the fitness value (F), and the function that provides it is called the fitness function.
A serious drawback is that the method is computationally heavy, as the same problem is solved many times with different parameters without any further simplification, since every generation must contain a fixed number of independent candidates to be compared. This independence has benefits as well: the candidates can be computed simultaneously on multi-core workstations, which significantly decreases the necessary computation time.
A MATLAB code was developed for the calculation using the built-in genetic algorithm function provided by the Optimization Toolbox [20], which also supports parallel execution by default. Equation 11 shows how the fitness values are generated: it is the sum of the squared differences between the measured and the calculated data at every time step. For each heating rate, this sum is divided by the number of measurement points, because the duration is longer at lower rates and the resulting larger number of points would otherwise increase the weight of those slower measurements.
$$F = \mathop \sum \limits_{i} \frac{{\mathop \sum \nolimits_{j} \left( {x_{\text{m}} \left( {t_{\text{j}} } \right) - x_{\text{c}} \left( {t_{\text{j}} } \right)} \right)^{2} }}{{N_{\text{i}} }}$$
(11)
Measurements with different heating rates were evaluated together, so every parameter set provided only one fitness value, based on the differences of all three conversion curves, as detailed earlier.
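A minimal sketch of the fitness evaluation of Eq. 11 follows; `data` is a hypothetical struct array holding one measured conversion curve per heating rate, and `model` stands for any of the conversion models above with the assumed signature shown.

```matlab
% Sketch of Eq. 11: squared differences summed per heating rate and divided
% by the number of points N_i of that measurement. 'data' and 'model' are
% hypothetical names used for illustration only.
function F = fitness(p, data, model)
F = 0;
for i = 1:numel(data)                        % loop over heating rates
    xc = model(p, data(i).t, data(i).T);     % calculated conversion x_c(t)
    F  = F + sum((data(i).x - xc).^2) / numel(data(i).x);
end
end
```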
The least squares method as the central element of the process has another benefit: its structure is quite robust, and the actual reaction models can be changed easily while leaving most of the code intact.
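Because the reaction model enters the fitness function only through a handle, it can be swapped without touching the rest of the code; a sketch of the corresponding `ga` call with parallel evaluation is shown below. The parameter bounds, population size and variable count are illustrative assumptions, not the values used in this work.

```matlab
% Sketch: fitting with the built-in genetic algorithm. Bounds, population
% size and nvars are illustrative; 'fitness', 'data' and 'model' as above.
nvars = 9;                                         % e.g. c_i, A_i, E_i, i = 1..3
lb = [zeros(1,3),  1e4*ones(1,3), 5e4*ones(1,3)];  % illustrative lower bounds
ub = [ones(1,3),  1e14*ones(1,3), 3e5*ones(1,3)];  % illustrative upper bounds
opts  = optimoptions('ga', 'UseParallel', true, 'PopulationSize', 200);
pBest = ga(@(p) fitness(p, data, model), nvars, ...
           [], [], [], [], lb, ub, [], opts);
```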
Sensitivity analysis
To evaluate the influence of the parameters, a local sensitivity analysis was performed, meaning that the parameter changes were calculated around the optimal values found by the genetic algorithm. The minimum and maximum values of these sets were generated as Eq. 12 shows for a general, already optimized parameter \(p_{\text{opt}}\), giving \(p_{\text{SA,min}}\) and \(p_{\text{SA,max}}\).
$$\begin{aligned} p_{\text{SA,min}} = p_{\text{opt}} - 0.5 p_{\text{opt}} \hfill \\ p_{\text{SA,max}} = p_{\text{opt}} + 0.5 p_{\text{opt}} \hfill \\ \end{aligned}$$
(12)
The evaluation was performed as suggested by Cai et al. [21]: the influence of each parameter on the fitness value was classified into one of three groups, poor, medium or high. The influence was considered poor when \(F_{\text{r}}\) (the actual fitness value relative to the optimized one) stayed under 2 at 50% deviation; medium when it was above 2 at \(p_{\text{SA,min}}\) or \(p_{\text{SA,max}}\) but below \(10^{2}\) at both ends; and high when it was above \(10^{2}\) at at least one end.
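Under the same assumptions as before, the ±50% spans of Eq. 12 and the classification above can be sketched as follows; `pOpt` is the optimized parameter vector and `fitness` is the function from the previous section.

```matlab
% Sketch: local sensitivity per Eq. 12 with the poor/medium/high classes
% of Cai et al. [21]; F_r is the fitness relative to the optimum.
Fopt  = fitness(pOpt, data, model);
scale = [0.5 1.5];                            % p_opt -/+ 50% (Eq. 12)
for i = 1:numel(pOpt)
    Fr = zeros(1, 2);
    for s = 1:2
        p = pOpt;  p(i) = scale(s) * pOpt(i);
        Fr(s) = fitness(p, data, model) / Fopt;
    end
    if     max(Fr) < 2,    cls = 'poor';      % under 2 at both ends
    elseif max(Fr) < 1e2,  cls = 'medium';    % above 2, under 10^2 everywhere
    else,                  cls = 'high';      % above 10^2 at least at one end
    end
    fprintf('parameter %d: %s (F_r = %.3g / %.3g)\n', i, cls, Fr(1), Fr(2));
end
```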