# Dynamics of particle chopping in blenders and food processors


## Abstract

Mathematical models are developed to explain the size distribution of particles in blenders to give insight into the behaviour of possible blender designs. The initial models consider idealized simplified situations, first with the chopping of long thin particles and then of spheres. The models are first presented using the idea of chopping at discrete places but then extended to account for chopping at any point via a continuous model. Some of the simple models can be solved analytically while others require numerical calculations. Comparisons of the predictions from the various models with experimental data at a fixed time are presented and show that the models account for much of the behaviour. The initial models do not however predict the large amount of extremely small particles (debris) that are observed at the end of the blending process. The models are thus modified using simple extensions to account for additional effects and numerical solutions of these models are compared with the observed data. The theory should provide a useful tool that eliminates the need to perform costly and time-consuming experiments when understanding how a particular food will be blended.

## Keywords

Chopping · Discrete-to-continuum models · Similarity solutions · Smoluchowski theory

## 1 Introduction

Most households have a blender or a food processor, which is commonly used to turn fruit and vegetables into smoothies, drinks, sauces and dips. Blenders chop and shred a variety of ingredients to produce a purée in which the material has been broken into very small particles suspended in a liquid, usually water. They use blade systems composed of multiple blades inclined at various angles, operating at extremely high speeds to chop and mix the ingredients.

There is significant experimental interest in understanding how blade design and container shape can be tailored to create optimal purées. There is, however, very little literature examining the underlying fluid and particle dynamics to assist in identifying how such optima may be achieved. In addition, one consumer criticism of existing blender designs is the noise that is generated due to the high speed of blade rotation that is currently used to adequately blend the ingredients in a timely manner. Identifying how such speeds might be reduced while still creating optimal conditions is thus of great interest.

The quality of the purée is characterized by the particle size distribution and summarized by the mean particle size, with homogeneous mixtures consisting of small particles being preferred. Operating parameters that may contribute to modifying the particle distribution include blade speed, shape and sharpness, along with properties of the container, such as the shape and the inclusion of baffles on the inner walls.

When attempting to understand the physics of food blending, computational fluid dynamics (CFD) is the *de facto* approach. Current CFD packages are able to model fluid flow situations for the prediction of heat, mass and momentum transfer and optimal design in a variety of food processes [1]. The recent advances in computer processor speeds mean that CFD packages are able to predict the resulting mixing process given an initial configuration, reducing the need to perform batches of experiments. However, such simulations remain a step away from being able to perform the comprehensive parameter sweeps that are required to determine the optimum operating regimes. Furthermore, the predominant use of CFD is in mixing and segregation processes rather than in chopping (see, for example, [2, 3]). Other work, such as studies in the bread industry, has placed an emphasis on examining the effect of the rheology of the substance on the mixing process [4]. A third area for research concerns the mechanics of an individual cut, in particular examining the relationship between the force exerted during a cut and the resulting sliced product [5].

In this paper, we turn our attention away from the mixing mechanism and towards the chopping process, asking the question, how do the food pieces placed into a blender get chopped to make a smoothie? To the best of our knowledge a mathematical theory for the chopping process in a food blender has not been proposed. We shall take a different approach to the computationally heavy methodologies presented previously, by deriving a simplified mathematical model from which we can extract scaling laws that will ultimately allow us to make predictions on how the operating conditions affect the chopping process. The resulting theories will eliminate the need to perform many costly and time-consuming experiments to determine how a particular mixture will be chopped over time, and thus will ultimately provide guidance on how to design blenders to achieve a desired final distribution of particle sizes.

While the chopping of food in blenders has received little mathematical attention, techniques for modelling dissociation (and association) processes are prevalent within the literature. For a broad study of such population dynamics models, see, for example, [6]. Such theories are used to describe, for example, the formation of aerosols [7, 8], colloidal aggregates [9], polymers [10] and the large-scale interactions of celestial bodies [11, 12]. A cornerstone of the aggregation and breakdown kinetics literature is Becker–Döring theory. This theory describes the process of aggregation or dissociation by the stepwise loss or gain of individual elements that are assumed to comprise an aggregate [13]. A key use of the Becker–Döring theory is in the formation and dissociation of *micelles* in surfactant systems, which are large aggregates composed of many individual surfactant particles, or *monomers* [14, 15]. Smoluchowski theory generalizes the ideas of the Becker–Döring models by allowing the merging of any two aggregates and, conversely, the disintegration of any species into two arbitrarily sized aggregates [16]. Both Becker–Döring and Smoluchowski theories track the time evolution of the number of aggregates of any given discrete size. In many instances, the range of sizes of aggregates may be large, and so a continuum theory, where a continuum variable is assigned to the aggregate size, is more appropriate (see, for example, [17]). In this case a size distribution of aggregates is tracked. Such theories allow for more efficient numerical computation. In other instances, a mean-field approach is more beneficial [18]. The recent explosion in the availability of data also allows techniques such as partition-valued Markov chains to be used, which exhibit a scalability that is lacking with analogous continuous models. Such ideas have been successfully employed to model genetic sequences [19].

We consider the blending process as a compromise between chopping, which makes particles smaller, and mixing, which makes the mixture homogeneous. This paper concentrates on the chopping aspect of a blender, and describes a simple mathematical model that captures the behaviour of solid particles within a fluid as they are randomly and continuously chopped. The distribution of food pieces in a blended product is usually described by attributing a single number, which characterizes its size, to each piece. For long items, such as carrot or celery sticks, the length of each stick provides a suitable metric for characterizing the pieces. Other foods, such as berries or carrot cubes, are more accurately represented by associating a typical volume or diameter to each piece.

We first propose a model for the chopping of long thin particles. This can be thought of as a one-dimensional problem, where we track the length of each piece (Sect. 2) and present both analytical and numerical results for this. A model is then presented to address the chopping of food pieces that are characterized more appropriately by their volume (or an effective diameter) (Sect. 3), and a similar analysis is conducted. We compare the predictions with experimental data and use this to improve our model in Sect. 4. We conclude our analysis in Sect. 5 by generalizing our model to include a minimum particle size that can be chopped by the blades to provide a more accurate prediction of the resulting particle distribution.

## 2 Models of chopping one-dimensional particles

Motivated by the chopping of long slender objects, such as carrots, we begin by considering a piece of food that is randomly and continuously chopped, producing smaller pieces, each of which is defined only by its length. By studying the time evolution of the distribution of the number of pieces of each size, we gain insight as to what to expect when considering the action of a blender. We shall start by considering that the line representing a piece can only be chopped at a finite, but large, set of discrete points. Hence, we might consider the entire line to be made of very small sub-lines. We will develop a model using this discrete version of chopping. We will then take the limit of the process and consider chopping at any point, thereby generating a continuous model of chopping. We will consider beginning with several pieces of food. Our aim is to introduce the notation and the ideas in a simple context of one dimension before presenting a model of more complex particles.

### 2.1 The discrete-size model

Consider *M* very thin pieces of food, each of length *L*. We imagine that each of these pieces is composed of *N* very small discrete bits, of length *L*/*N*, and that any chop will cut the piece of food at a point between two of these small bits. Hence, if a piece has length *L* (*i.e.* it is made of *N* small bits) then one chop will result in two pieces, one of length *jL*/*N* and the other of length \((N-j)L/N\) (for some integer \(j\in [1,N-1]\)). Here, we are interested in the case when \(N\gg 1\). We now introduce the notation \(Y_i(t)\), the *number density*, to be the number of pieces composed of exactly *i* small bits at time *t* (this corresponds to the number of pieces of length *iL*/*N*, and we shall refer to these as pieces of “size” *i* from here onward). Hence, for example, if we begin with *M* pieces of length *L*, then we have

\[
Y_N(0) = M, \qquad Y_i(0) = 0 \quad \text{for } i \ne N . \qquad (1)
\]

We first consider a single chop of the blender blade. We assume that the probability that a piece of size *i* is chopped is *G*(*i*). This functional dependence allows for the possibility of, for example, a larger piece being more likely to be chopped than smaller pieces. Note that by assuming *G* depends only on *i*, the chances of a piece being chopped depend only on its size and not, for example, on its position or orientation, and hence, we are assuming perfect mixing of the contents. We also assume that this probability does not vary with time. However, our methodology readily generalizes to capture such behaviour, and we discuss the possibility of time-dependent probabilities in the Conclusions.

Suppose that after the blade has performed *p* chops, where *p* is an integer, the number densities are \(Y_i^{(p)}\). After the next chop, these evolve according to

\[
Y_i^{(p+1)} = Y_i^{(p)} - G(i)\,Y_i^{(p)} + \sum_{q=i+1}^{N} \frac{2}{q-1}\, G(q)\, Y_q^{(p)} . \qquad (2)
\]

The number of pieces of size *i* is reduced due to some of these pieces of size *i* being chopped to smaller pieces, with probability *G*(*i*) (the first term on the right-hand side of (2)), and increased due to larger pieces of size \(q>i\) that have been chopped to form some pieces of size *i*, with probability *G*(*q*) (the summation term in (2)). We note the extra multiplicative factor \(2/(q-1)\) in the summation term, which arises when we consider the different outcomes that lead to a piece of size *i* when a piece of size \(q>i\) is chopped. Recall, a piece of size *q* can be chopped at \(q-1\) possible positions along its length. If we assume that the chop can occur at any of these places with equal probability, then a piece of size *i* is created if the chop takes place in one of two places: at position *i* or at position \(q-i\). Thus, the probability of a chop producing a piece of size *i* is \(2/(q-1)\). Note that this also accounts for the special case when \(q=2i\): in this case, if we chop in half, we produce two pieces of size *i*, but there is only one way in which we can chop to achieve this, so the resulting probability of generating a piece of size *i* from a piece of size 2*i* is also \(2/(q-1)\).
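The factor \(2/(q-1)\) may be checked by direct enumeration. The following Python sketch (an illustration, not code from the paper) counts how often each fragment size appears among the \(q-1\) equally likely chop positions of a piece of size \(q\):

```python
from collections import Counter

def chop_outcomes(q):
    """Enumerate all q-1 equally likely chop positions of a piece of
    size q and count how often each fragment size appears."""
    counts = Counter()
    for j in range(1, q):      # chopping after bit j gives fragments j and q-j
        counts[j] += 1
        counts[q - j] += 1
    return counts

q = 7
counts = chop_outcomes(q)
# Every size i in 1..q-1 appears exactly twice among the q-1 outcomes,
# so the expected number of size-i fragments per chop is 2/(q-1).
```

The case \(q = 2i\) is covered automatically: the single central position contributes both fragments to the same count.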

We now convert this chop-by-chop description into an evolution in continuous time *t* as follows: writing \(G(i) = F(i)\,\varDelta t\), where \(\varDelta t\) is the short time between successive chops, and taking the limit \(\varDelta t \rightarrow 0\), we obtain

\[
\frac{\mathrm{d}Y_i}{\mathrm{d}t} = -F(i)\,Y_i + \sum_{q=i+1}^{N} \frac{2}{q-1}\, F(q)\, Y_q . \qquad (3)
\]

Here, *F*(*i*) is termed the *likelihood function* and corresponds to the rate at which particles of size *i* are chopped, with units s\(^{-1}\). We assume that *F*(*i*) is an order-one quantity in the limit as \(\varDelta t\rightarrow 0\) and discuss the appropriate functional form of *F*(*i*) in detail later.

Equation (3) belongs to a subset of the generalized Smoluchowski theory [16], which models the agglomeration and disintegration of aggregates each composed of a given number of distinct entities. Here, we include only the disintegration component, which captures the chopping process, since we assume that pieces will never recombine. The Smoluchowski equations are a system of coupled nonlinear ordinary differential equations (ODEs) that describe the evolution of the number density of each piece size, \(Y_i(t)\). The general Smoluchowski equations have been shown to be well-posed, and so our subset model will also be well-posed [18].

We can verify that (3) conserves the total length of food in the system, *i.e.* that \(\sum_{i=1}^{N} i\,Y_i(t)\) remains constant, by multiplying (3) by *i* and summing over all *i*. When we begin with *M* pieces of size *N* (each of length *L*), this implies

\[
\sum_{i=1}^{N} i\, Y_i(t) = MN .
\]

#### 2.1.1 Numerical solution of the discrete-size problem

We can solve the discrete model (1)–(3) numerically using MATLAB, and hence visualize the evolution of the particle distribution for a particular likelihood function (Fig. 1). We must choose a reasonable likelihood function, *F*(*i*), to describe the rate of chopping a piece of size *iL* / *N*. We might expect larger pieces to be more likely to be chopped than smaller pieces; a simple model of this is to take the likelihood of chopping to be directly proportional to a piece’s length, namely \(F(i) = ai\), where *a* is a positive constant that captures other general contributing factors.
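As an illustration of such a computation, the following Python sketch integrates system (3) with \(F(i)=ai\) (the paper used MATLAB; the parameter values here are illustrative, and \(F(1)\) is set to zero since a piece of size 1 has no interior point and cannot be chopped, which is what preserves the total length):

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 50     # bits per initial piece (illustrative)
M = 100    # initial number of pieces, each of size N
a = 1.0    # rate constant in F(i) = a*i

i = np.arange(1, N + 1)
F = a * i.astype(float)
F[0] = 0.0  # a size-1 piece has no interior point, so it cannot be chopped

def rhs(t, Y):
    # Equation (3): dY_i/dt = -F(i) Y_i + sum_{q>i} 2 F(q) Y_q / (q-1)
    dY = -F * Y
    for idx in range(N - 1):              # size i = idx + 1
        q = np.arange(idx + 2, N + 1)     # all larger sizes q > i
        dY[idx] += np.sum(2.0 * F[q - 1] * Y[q - 1] / (q - 1))
    return dY

Y0 = np.zeros(N)
Y0[-1] = M                                # initial condition (1)
sol = solve_ivp(rhs, (0.0, 1.0), Y0, method="LSODA",
                rtol=1e-8, atol=1e-10)

total_length = i @ sol.y[:, -1]           # conserved: stays at M*N
```

The conserved quantity `total_length` provides a useful sanity check on any numerical scheme for (3).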

To determine which particle sizes account for the majority of the material, we define the *length density*, \(L_{D_{{i}}}(t)\), by considering the total length of the pieces of food attributed to each particle size, given by

\[
L_{D_{i}}(t) = \frac{iL}{N}\, Y_i(t) .
\]

The number-density distribution we observe from Fig. 1a initially has a spike of height \(M=100\) at \(i=N\), corresponding to the initial condition (1). As time progresses, the total number of pieces in the system increases, while the size of a typical piece decreases. We notice that there are considerably more tiny pieces than large pieces; however, it is not obvious which particle sizes account for the majority of the material (*i.e.* the length) in the system. The length-density distribution shown in Fig. 1b is a single-peaked graph which, with time, moves from right to left, while growing taller and narrower. From this, we observe how the length fraction attributed to each particle size changes with time, while the total length in the system remains constant (which is demonstrated by the constant area under this curve as time evolves). We can then deduce which size of particle accounts for the majority of the length.

### 2.2 The continuous-size model

Since *N* is large, it is natural to extend the ideas of Sect. 2.1 to a continuous-size model so that we can consider a continuous range of piece sizes rather than the discrete set considered previously. Such a continuum approach has been considered in the wider context (see, for example, [20, 21, 22]). We define *x* as the continuous variable for a piece length, \(x = {iL}/{N}\). We define *y*(*x*, *t*) as the continuous analogue of \(Y_i(t)\) and *f*(*x*) as the continuous analogue of *F*(*i*) when \(N\rightarrow \infty \). Hence, we have

\[
\frac{\partial y}{\partial t}(x,t) = -f(x)\, y(x,t) + \int_x^{\infty} \frac{2}{s}\, f(s)\, y(s,t)\,\mathrm{d}s , \qquad (10)
\]

with initial condition

\[
y(x,0) = M\,\delta (x-L) , \qquad (11)
\]

where \(\delta \) denotes the Dirac delta function. Here we have taken the upper limit of the integral to be infinite rather than the maximum initial size, which we are able to do since the number density is zero initially for all sizes larger than the initial pieces and remains zero for all future times.

#### 2.2.1 Analytical solution

We must choose a reasonable likelihood function *f*(*x*) to describe the rate of chopping a piece of length *x*. As noted in Sect. 2.1.1, larger pieces are more likely to be chopped than smaller pieces, so one possibility is to assume that the likelihood of chopping a piece is proportional to its length. However, there are other factors that will affect the likelihood, and so we generalize this idea in a manner that still allows us to find a solution analytically. Specifically, we consider the case

\[
f(x) = a x^{k} , \qquad (14)
\]

where *k* and *a* are positive constants, so that larger pieces have a greater chance of being chopped.

We seek a solution to (10) when *f*(*x*) is given by (14). We assume that the problem has a similarity solution of the form

\[
y(x,t) = t^{-\gamma \beta }\, g(\eta ) , \qquad \eta = x\, t^{\beta } , \qquad (15)
\]

where the exponents \(\gamma \) and \(\beta \) are to be chosen. Similarity solutions of this nature have also been considered in the wider context (see, for example, [23]). If we substitute (15) into (13), then the need to have the condition true for all time requires that we take \(\gamma = -2\). To substitute (15) into (10), we simplify the steps by first differentiating (10) with respect to *x*. This then gives an ordinary differential equation for \(g(\eta )\); requiring that this equation does not contain *t* explicitly, we find we must take \(\beta = {1}/{k}\). Hence the function \(g(\eta )\) satisfies an ordinary differential equation whose general solution may be written in terms of gamma functions with two arbitrary constants, \(c_1\) and \(c_2\). We require that *g* is finite for \(t>0\). Noting that \(\varGamma (z)\) is finite only for \(z>0\), we conclude that we must take \(c_2 = 0\). Thus, we arrive at the similarity solution, (23).

This gives the size distribution of the pieces when *M* pieces of length *L* (total length *LM*) are placed into a blender with a likelihood function of \(f(x)=ax^k\) for \(a,k>0\). Note that (23) does not actually satisfy the initial condition given by (11); however, it does provide an excellent approximation to the solution of the full problem (10) and (11) for any time away from \(t=0\), and particularly in the physically relevant case where the initial pieces have been chopped several times so that most pieces are small, as will be demonstrated in the next section.
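As a check on the exponents, note that the scaling form with \(\gamma = -2\) and \(\beta = 1/k\), namely \(y = t^{2/k}\, g(x\, t^{1/k})\), conserves the total length for any profile \(g\) (substituting \(\eta = x\, t^{1/k}\)):

```latex
\int_0^\infty x\, y(x,t)\,\mathrm{d}x
  = t^{2/k} \int_0^\infty x\, g\!\left(x\, t^{1/k}\right) \mathrm{d}x
  = t^{2/k}\, t^{-2/k} \int_0^\infty \eta\, g(\eta)\,\mathrm{d}\eta
  = \int_0^\infty \eta\, g(\eta)\,\mathrm{d}\eta ,
```

which is independent of *t*, as conservation of total length requires.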

#### 2.2.2 Comparison of results

The continuous model (10) and (11) can be solved by discretizing in *x* and using the MATLAB function ode45 (*i.e.* using the method of lines). Note that if a uniform discretization of *x* is used then this is equivalent to the discrete model. However, as seen in Fig. 1, the distributions tend to move toward smaller and smaller particles, and these can most efficiently be computed by considering a uniform discretization in the logarithm of *x*. This can be done by rewriting the model in the variable \(z=\log x\) and then using a uniform mesh in *z*. The predictions for both the number density and length density agree well with the discrete model and so have not been shown here.
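A Python sketch of this log-spaced discretization follows (illustrative parameter values; SciPy's solve_ivp stands in for MATLAB's ode45, and a narrow Gaussian stands in for the delta-function initial condition (11)):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not fitted) parameter values
a, k = 1.0, 1.0          # likelihood function f(x) = a * x**k
L, M = 1.0, 100.0        # initial piece length and number of pieces

# Uniform mesh in z = log(x) resolves the ever-smaller pieces
z = np.linspace(np.log(1e-4 * L), np.log(L), 400)
x = np.exp(z)
dz = z[1] - z[0]
f = a * x**k

def trap_z(g):
    # trapezoidal rule on the uniform z-grid
    return np.sum(0.5 * (g[1:] + g[:-1])) * dz

def rhs(t, y):
    # dy/dt = -f(x) y + int_x^L (2/s) f(s) y(s,t) ds; with s = e^zeta
    # the kernel (2/s) ds becomes 2 dzeta, giving a tail integral in z
    integrand = 2.0 * f * y
    cum = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * dz)))
    tail = cum[-1] - cum                     # integral from z to z_max
    return -f * y + tail

# Smooth stand-in for the delta-function initial condition
y0 = np.exp(-0.5 * ((x - 0.8 * L) / (0.02 * L)) ** 2)
y0 *= M * L / trap_z(x * x * y0)             # set total length to M*L

sol = solve_ivp(rhs, (0.0, 5.0), y0, rtol=1e-8, atol=1e-10)
length_drift = abs(trap_z(x * x * sol.y[:, -1]) - M * L) / (M * L)
```

The small value of `length_drift` confirms that the scheme approximately conserves total length, while the number of pieces, \(\int y\,\mathrm{d}x\), grows with time.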

## 3 Model extension: three dimensions

The models that we have derived so far hold for the chopping of food that is long and slender, so that its length forms an appropriate identifying metric. As discussed in the Introduction, in some cases the food pieces being chopped are more spherical in shape, and so are better described by their volume. For simplicity, we will ignore any shape discrepancies, and model all particles as spheres, imposing the assumption that when a spherical particle is chopped it produces two spherical particles while conserving volume. This is of course a considerable approximation but it allows us to make significant progress thereby giving useful insight while avoiding excessive computational effort trying to follow complicated changes in geometry of the particles.

We will now proceed in an identical manner to the one-dimensional case. We begin by studying a discrete distribution of sizes, but will find that our previous methodology does not naturally generalize. We then consider the continuous model, in which case, we find analytical results are possible.

### 3.1 The discrete-size model

Suppose that particle diameters are restricted to integer values *j* (similar to our sizes being integers *i* earlier). When a chop takes place, we must conserve volume, and this leaves us considering the equation

\[
j^3 = j_1^3 + j_2^3 ,
\]

where *j* is the particle diameter before chopping, while \(j_1\) and \(j_2\) are the diameters corresponding to the two particles resulting from the chop. However, we recall Fermat’s Last Theorem, from which we know that this equation has no integer solutions (the exponents are greater than 2) [24]. Therefore, we cannot adopt the same approach as in the one-dimensional case, and so we immediately focus on a continuous-size description.

### 3.2 The continuous-size model

We again define *x* as the particle diameter. We expect the three-dimensional model to take a similar form to before, that is,

\[
\frac{\partial y}{\partial t}(x,t) = -f(x)\, y(x,t) + \int_x^{\infty} h(x,s)\, f(s)\, y(s,t)\,\mathrm{d}s ,
\]

where *y*(*x*, *t*) denotes the number of particles of diameter *x* at time *t*, and we introduce *h*(*x*, *s*) as the probability density that chopping a particle of diameter *s* generates a particle of diameter *x*. In the one-dimensional case, we found \(h=2/s\) using a probability argument, which did not require the need to introduce the terminology *h*. However, in three dimensions it is easier to determine *h* by exploiting the need to conserve mass (though this method is equivalent to the probabilistic procedure), and we may do so for the general case of \(n>0\) dimensions. Multiplying the evolution equation by \(x^n\) and integrating over \(0\le x<\infty \), we find that the total material, \(\int _0^{\infty } x^n\, y\,\mathrm{d}x\), is conserved provided

\[
\int _0^{s} x^n\, h(x,s)\,\mathrm{d}x = s^n ,
\]

which is the general equation expressing conservation of mass in *n* dimensions. Since each chop must also produce exactly two particles, so that \(\int _0^s h(x,s)\,\mathrm{d}x = 2\), *h* must take the form

\[
h(x,s) = \frac{2n\, x^{n-1}}{s^n} , \qquad (34)
\]

which recovers \(h = 2/s\) when \(n=1\). We again take *M* pieces, each of diameter *L*, initially. Our choice for *h* ensures that volume is conserved and so, using (34), this implies

\[
\int _0^{\infty } x^n\, y(x,t)\,\mathrm{d}x = M L^n .
\]
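The two defining properties of *h* (two fragments per chop, and conservation of the *n*-dimensional volume) can be verified numerically; a short Python check with illustrative values \(s=2\), \(n=3\):

```python
import numpy as np

def h(x, s, n):
    # daughter-size distribution h(x, s) = 2 n x^(n-1) / s^n
    return 2.0 * n * x ** (n - 1) / s ** n

s, n = 2.0, 3
x = np.linspace(0.0, s, 200_001)
dx = x[1] - x[0]

def trap(g):
    # trapezoidal rule on the uniform x-grid
    return np.sum(0.5 * (g[1:] + g[:-1])) * dx

fragments = trap(h(x, s, n))        # should equal 2: two pieces per chop
volume = trap(x**n * h(x, s, n))    # should equal s**n: volume conserved
```

Both integrals can of course be evaluated exactly by hand; the numerical check simply confirms the algebra for a particular case.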

#### 3.2.1 Analytical solution

An analytical solution can again be found when \(f(x) = a x^k\) for positive constants *a* and *k*. Following the previous method of seeking a similarity solution, we find the solution, (36). In Fig. 3a, we plot the resulting number density, *y*(*x*, *t*), given by (36), when \(n=3\), as a function of the particle diameter *x*, while in Fig. 3b, we show the continuous particle *volume distribution*, which is given by \(\tfrac{\pi }{6} x^3\, y(x,t)\).

We also solve the continuous 3D model numerically by discretizing in *x* and then using the MATLAB function ode45, but the results agree very closely with those obtained analytically, with similar convergence to the one-dimensional case, and, therefore, we do not include them here.

## 4 Experimental data

A series of experiments was performed to collect data that could be compared with the model predictions, enabling the models to be further refined and the behaviour to be interpreted.

### 4.1 Experimental methodology

In this section, we compare our model predictions with experimental data. Experiments were performed using a 24 oz single-serve cup and a Nutri Ninja® Pro Extractor Blade [25]. All experiments were carried out using a mixture of carrots and water. Carrot tops were removed and the carrots were split in half. These were mixed randomly in order to minimize any effect due to physical variation between carrots. The carrots were then cut into approximately 6 mm cubes and an initial mixture, composed of 425 g of water and 283 g of carrot, was put in the blender. The blender was then operated at rotation speeds controlled via a Hall-effect sensor integrated into the motor controller. After a fixed time in the blender of 50 s, a sample of the mixture was taken, diluted with water, and the resulting size distribution measured using a Malvern MS3000 Laser Analysis Unit with a Malvern HydroLV sampling unit. The process was repeated 15 times to ensure repeatability.

### 4.2 Comparison with analytical solution

The analytical solution contains two unknown parameters, *k* and *a*, and we perform a least-squares fit to determine their values. Some parameters are known: for example, the experimental blender contains 425 ml of water and 283 g of carrot (mass ratio 40:60 carrot to water). The initial size of the carrot particles is 6 mm; hence, we choose our initial data such that we begin with 257,000 mm\(^3\) of carrot pieces of diameter 6 mm. Despite the additional step of relating the operating conditions to the parameters *a* and *k*, the fit of the analytical solution to the experimental data is extremely good (see Fig. 4). This suggests that our model captures the key physics during blending. However, we note the appearance of a second peak, corresponding to very small particles within the experimental data, that does not arise in our model. We thus turn our attention to modelling this extra observation in the following section.

## 5 Debris and a minimum particle size

We notice from Fig. 4 that a significant discrepancy between our simple model and the experimental data is the leftmost peak visible in the distribution. We speculate that this is a consequence of *debris* produced when a particle is chopped. It is likely that when a particle is chopped into two particles there is additionally some residue which could consist of juice, broken fruit cells or fragments of fruit which are too small to be considered as particles. This leads us to the natural extension of including the *debris* in our model. We consider all the debris to consist of very small particles of a given size. We shall presume debris is created whenever a larger particle is chopped and that this debris cannot be chopped further. However, a consequence of such an assumption is that we will need to define the smallest particle size that can be chopped.

### 5.1 The one-dimensional discrete-size model with debris

To summarize the notation of the one-dimensional discrete-size model in Sect. 2.1, we supposed that a piece of length *L* could be chopped in any one of \(N-1\) places to form two pieces of length *iL* / *N* (or size *i*) and \((N-i)L/N\) (size \(N-i\)). Within this framework the smallest piece that can be formed is of length *L* / *N* (that is, size 1). We now adjust our model so that each time a chop takes place, along with the division into two distinct pieces we also generate *m* pieces of debris, of size 1. However, we also assume that the smallest piece that can be formed from a chop that is not debris is of length *pL* / *N* (size *p*), where \(p>1\) is an integer, so that we create a distinction between chopped pieces (\(i \ge p\)) and the debris (\(i=1\)). Within this framework, we conclude that the smallest piece that can be chopped must be of size at least \(2p+m\) since chopping anything smaller cannot create two of the smallest possible pieces and the associated debris.

Let \(Y_i(t)\) and *F*(*i*) have the same definitions as before (with \(Y_1(t)\) now being the total number of debris particles). The evolution equation for the chopped pieces becomes

\[
\frac{\mathrm{d}Y_i}{\mathrm{d}t} = -F(i)\,Y_i + \sum_{q = i+p+m}^{N} \frac{2}{q-2p-m+1}\, F(q)\, Y_q , \qquad i \ge p , \qquad (38)
\]

where \(F(q) = 0\) for \(q < 2p+m\), together with the debris equation

\[
\frac{\mathrm{d}Y_1}{\mathrm{d}t} = m \sum_{q = 2p+m}^{N} F(q)\, Y_q .
\]

The fraction in the summation on the right-hand side of (38) describes the probability that when we chop a piece of size *q* (\(q\ge 2p+m\)) a piece of size *i* (\(i\ge p\)) is created: the chop may occur at any of the \(q-2p-m+1\) admissible positions, two of which produce a piece of size *i*. We again begin with *M* pieces of identical length as before, where we note there is no debris initially (\(Y_1(0)=0\)). This model then ensures that the total mass of food particles is conserved, with

\[
\sum_{i} i\, Y_i(t) = MN .
\]

Note that the minimum piece length, *pL* / *N*, debris length, *L* / *N*, and amount of debris, *m*, created at each chop may all depend on operating parameters, such as the speed of the chop. However, here we concentrate on trying to understand the general behaviour of the distribution created by such a model.
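The bookkeeping of this model can be checked with a small stochastic simulation (an illustrative Python sketch with hypothetical parameter values, not code from the paper): every chop conserves total length, and chopping stops once no piece of size at least \(2p+m\) remains.

```python
import random

def chop_to_completion(M=100, N=64, p=2, m=1, seed=0):
    """Randomly chop M pieces of integer size N until nothing is
    choppable. A chop of a piece of size i >= 2*p + m yields two pieces
    of sizes j and i - j - m (both >= p) plus m debris bits of size 1,
    so total length is conserved. Pieces are selected with probability
    proportional to their size, mimicking F(i) = a*i."""
    rng = random.Random(seed)
    pieces = [N] * M
    debris = 0
    while True:
        choppable = [q for q in pieces if q >= 2 * p + m]
        if not choppable:
            break
        i = rng.choices(choppable, weights=choppable)[0]
        pieces.remove(i)
        j = rng.randint(p, i - m - p)     # size of the left fragment
        pieces.extend([j, i - m - j])
        debris += m
    return pieces, debris

pieces, debris = chop_to_completion()
# Total length M*N is shared between the remaining pieces and the debris,
# and every surviving piece has size in [p, 2p + m).
```

Such a simulation is also a convenient independent check on numerical solutions of (38).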

### 5.2 The one-dimensional continuous debris model

### 5.3 Numerical solution of the one-dimensional continuous debris model

We again begin with *M* pieces of length *L*, so that our initial condition is

\[
y(x,0) = M\,\delta (x-L) .
\]

The likelihood function again contains positive constants *a* and \(\nu \). We must ensure that (44) is satisfied, and this is most easily done by taking \(\nu = 2\mu +m\lambda \).

### 5.4 Extension to three dimensions with debris

We can extend our one-dimensional continuous debris model to three dimensions in a similar manner to Sect. 3, by treating the particles as spheres and allowing *x* to denote the particle diameter.

#### 5.4.1 Numerical solution and comparison with experiments

In our numerical solutions we again take *f*(*x*) to be the power-law form \(f(x) = a x^k\). The resulting distributions display a second peak at small sizes, in agreement with the experimental observations, although the quality of the fit depends on the debris parameters, such as *m*. Further, there may need to be other model improvements, such as allowing the number of debris pieces created per chop, *m*, to depend on the particle being chopped, *i.e.* larger particles may produce more debris when chopped than smaller particles.

## 6 Conclusions

The behaviour of particles in a blender has been examined using a simple model of chopping. We began by proposing a discrete model based on Smoluchowski theory. We assumed that the mixing process of the blender led to a homogeneous distribution of food pieces in space, so that our model did not need to be spatially dependent. We used the fact that the typical number of different particle sizes available is large to derive a continuum description, from which, in ideal cases an analytical solution was found to exist. When compared with experimental data the analytical solution agreed remarkably well, and provides useful scaling laws for the behaviour. A key feature emerged that was not captured by the simple model, namely the appearance of a second peak in the particle size distribution. The appearance of this second peak was attributed to the accumulation of a large amount of extremely small particles (debris) being created. The simple model was modified to account for the debris, which is too small to be subsequently chopped, being created whenever a particle was chopped. The modified model is able to capture the full particle size distribution following blending and supported our modelling assumptions, in particular the spatial uniformity in the distribution of particles.

The model provides key insight into the behaviour within a blender. For instance, the model may easily be interrogated to determine trends and possible design improvements. The model also bypasses the need to do many costly and time-consuming experiments to determine the distribution of particle sizes in a given blending process. A key next step in the model development would be to take data obtained at different times during the blending process, which could be used to validate the predicted time evolution of the distribution. In all of the models presented here, we assumed that the chopping process remained constant with time. In practice, we might expect that the chopping rate varies with time, for example due to the change in viscosity of the fluid as the number of smaller particles in the mixture increases. To capture such additional phenomena, more detailed modelling would be required, particularly of the particle behaviour near the blades and how the viscosity changes might allow the particles to avoid being chopped. This would most likely involve the use of computational fluid dynamics simulations to understand the large-scale flow patterns created in the blender.

Other generalizations of the work considered here could account for scenarios in which food pieces are inserted at a constant rate and removed when suitably blended, so that eventually a steady state is attained. The modelling framework that we have outlined here also applies to any initial conditions applied, and so may be directly used to assess resulting particle size distributions following chopping of a range of different initial mixtures. Such analysis would be useful in determining the value in pre-chopping the food before it is inserted into the blender.

Nevertheless, we envisage the results of such sophisticated studies being inputs that accurately confirm the parameters within our model framework, such as the likelihood function. The model presented here provides an overview of where such future focused studies would be beneficial, as we move towards a comprehensive model for the blender behaviour.

## Notes

### Acknowledgements

This publication is based on work supported by the EPSRC Centre for Doctoral Training in Industrially Focused Mathematical Modelling (EP/L015803/1) in collaboration with Shark Ninja. I.M.G. gratefully acknowledges support from the Royal Society through a University Research Fellowship.

## References

- 1. Xia B, Sun DW (2002) Applications of computational fluid dynamics (CFD) in the food industry: a review. Comput Electron Agric 34:5–24
- 2. Porion P, Sommier N, Evesque P (2000) Dynamics of mixing and segregation processes of grains in 3D blender by NMR imaging investigation. Europhys Lett 50:319–325
- 3. Sen M, Karkala S, Panikar S, Lyngberg O, Johnson M, Marchut A, Schäfer E, Ramachandran R (2017) Analyzing the mixing dynamics of an industrial batch bin blender via discrete element modeling method. Processes 5:22
- 4. Hosseinalipour SM, Tohidi A, Shokrpour M, Nouri NM (2013) Introduction of a chaotic dough mixer. Part A: mathematical modeling and numerical simulation. J Mech Sci Technol 27:1329–1339
- 5. Zhou D, McMurray G (2011) Slicing cuts on food materials using robotic-controlled razor blade. Model Simul Eng 469262
- 6. Cushing JM (1998) An introduction to structured population dynamics. SIAM, Philadelphia
- 7. Drake RL, Hidy GM, Brock JR (eds) (1972) Topics in current aerosol research, vol 3. Pergamon Press, New York
- 8. Pruppacher HR, Klett JD (1978) Microphysics of clouds and precipitation. Reidel, Dordrecht
- 9. Wall SN, Aniansson GEA (1980) Numerical calculations on the kinetics of stepwise micelle association. J Phys Chem 84:727–736
- 10. Stockmayer WH (1943) Theory of molecular size distribution and gel formation in branched-chain polymers. J Chem Phys 11:45–55
- 11. Michel P, Benz W, Tanga P, Richardson DC (2001) Collisions and gravitational reaccumulation: forming asteroid families and satellites. Science 294:1696–1700
- 12. Lee MH (2000) On the validity of the coagulation equation and the nature of runaway growth. Icarus 143:74–86
- 13. Becker R, Döring W (1935) Kinetische Behandlung der Keimbildung in übersättigten Dämpfen. Ann Phys 24:719–752
- 14. Coveney PV, Wattis JAD (1996) Analysis of a generalized Becker–Döring model of self-reproducing micelles. Proc R Soc Lond A 452:2079–2102
- 15. Griffiths IM, Bain CD, Breward CJW, Colegate DM, Howell PD, Waters SL (2011) On the predictions and limitations of the Becker–Döring model for reaction kinetics in micellar surfactant solutions. J Colloid Interface Sci 360:662–671
- 16. Smoluchowski M von (1917) Mathematical theory of the kinetics of the coagulation of colloidal solutions. Z Phys Chem 92:129–168
- 17. Griffiths IM, Bain CD, Breward CJW, Chapman SJ, Howell PD, Waters SL (2012) An asymptotic theory for the re-equilibration of a micellar surfactant solution. SIAM J Appl Math 72:201–215
- 18. Wattis JAD (2006) An introduction to mathematical models of coagulation–fragmentation processes: a discrete deterministic mean-field approach. Physica D 222(1–2):1–20
- 19. Elliott L, Teh YW (2012) Scalable imputation of genetic data with a discrete fragmentation–coagulation process. Adv Neural Inf Process Syst, pp 2852–2860
- 20. Banasiak J, Lamb W (2009) Coagulation, fragmentation and growth processes in a size structured population. Discrete Contin Dyn Syst Ser B 11:563–585
- 21. Griffiths IM, Breward CJW, Colegate DM, Howell PD, Bain CD (2013) A new pathway for the re-equilibration of a micellar surfactant solution. Soft Matter 9:853–863
- 22. Ziff RM (1991) New solutions to the fragmentation equation. J Phys A 24:2821–2828
- 23. Calvez V, Doumic M, Gabriel P (2012) Self-similarity in a general aggregation–fragmentation problem. Application to fitness analysis. J Math Pures Appl 98:1–27
- 24. Wiles A (1995) Modular elliptic curves and Fermat’s last theorem. Ann Math 142:443–551
- 25. Wood-Lee M (2016) Private communication

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.