1 Introduction

Most households have a blender or a food processor, which is commonly used to turn fruit and vegetables into smoothies, drinks, sauces and dips. These blenders chop and shred a variety of ingredients to produce a purée, in which the material has been broken into very small particles suspended in a liquid, usually water. These blenders use blade systems composed of multiple blades inclined at various angles, and operate at extremely high speeds to chop and mix the ingredients.

There is significant experimental interest in understanding how blade design and container shape can be tailored to create optimal purées. There is, however, very little literature examining the underlying fluid and particle dynamics to assist in identifying how such optima may be achieved. In addition, one consumer criticism of existing blender designs is the noise that is generated due to the high speed of blade rotation that is currently used to adequately blend the ingredients in a timely manner. Identifying how such speeds might be reduced while still creating optimal conditions is thus of great interest.

The quality of the purée is characterized by the particle size distribution and summarized by the mean particle size, with homogeneous mixtures consisting of small particles being preferred. Operating parameters that may contribute to modifying the particle distribution include blade speed, shape and sharpness, along with properties of the container, such as the shape and the inclusion of baffles on the inner walls.

When attempting to understand the physics of food blending, computational fluid dynamics (CFD) is the de facto approach. Current CFD packages are able to model fluid flow situations for the prediction of heat, mass and momentum transfer and optimal design in a variety of food processes [1]. The recent advances in computer processor speeds mean that CFD packages are able to predict the resulting mixing process given an initial configuration, reducing the need to perform batches of experiments. However, such simulations remain a step away from being able to perform the comprehensive parameter sweeps that are required to determine the optimum operating regimes. Furthermore, the predominant use of CFD is in mixing and segregation processes rather than in chopping (see, for example, [2, 3]). Other work, such as studies in the bread industry, has placed an emphasis on examining the effect of the rheology of the substance on the mixing process [4]. A third area for research concerns the mechanics of an individual cut, in particular examining the relationship between the force exerted during a cut and the resulting sliced product [5].

In this paper, we turn our attention away from the mixing mechanism and towards the chopping process, asking the question, how do the food pieces placed into a blender get chopped to make a smoothie? To the best of our knowledge a mathematical theory for the chopping process in a food blender has not been proposed. We shall take a different approach to the computationally heavy methodologies presented previously, by deriving a simplified mathematical model from which we can extract scaling laws that will ultimately allow us to make predictions on how the operating conditions affect the chopping process. The resulting theories will eliminate the need to perform many costly and time-consuming experiments to determine how a particular mixture will be chopped over time, and thus will ultimately provide guidance on how to design blenders to achieve a desired final distribution of particle sizes.

While the chopping of food in blenders has received little mathematical attention, techniques for modelling dissociation (and association) processes are prevalent within the literature. For a broad study of such population dynamics models, see, for example, [6]. Such theories are used to describe, for example, the formation of aerosols [7, 8], colloidal aggregates [9], polymers [10] and the large-scale interactions of celestial bodies [11, 12]. A cornerstone of the aggregation and breakdown kinetics literature is Becker–Döring theory. This theory describes the process of aggregation or dissociation by the stepwise gain or loss of individual elements that are assumed to comprise an aggregate [13]. A key use of the Becker–Döring theory is in the formation and dissociation of micelles in surfactant systems, which are large chemical compounds composed of many individual surfactant particles, or monomers [14, 15]. Smoluchowski theory generalizes the ideas of the Becker–Döring models by allowing the merging of any two aggregates and, conversely, the disintegration of any species into two arbitrarily sized aggregates [16]. Both Becker–Döring and Smoluchowski theories track the time evolution of the number of aggregates of any given discrete size. In many instances, the range of sizes of aggregates may be large, and so a continuum theory, where a continuum variable is assigned to the aggregate size, is more appropriate (see, for example, [17]). In this case, a size distribution of aggregates is tracked. Such theories allow for more efficient numerical computation. In other instances, a mean-field approach is more beneficial [18]. The recent explosion in the availability of data also allows techniques such as partition-valued Markov chains to be used, which exhibit a scalability that is lacking in analogous continuous models. Such ideas have been successfully employed to model genetic sequences [19].

We consider the blending process as a compromise between chopping, which makes particles smaller, and mixing, which makes the mixture homogeneous. This paper concentrates on the chopping aspect of a blender, and describes a simple mathematical model that captures the behaviour of solid particles within a fluid as they are randomly and continuously chopped. The distribution of food pieces in a blended product is usually described by attributing a single number, which characterizes its size, to each piece. For long items, such as carrot or celery sticks, the length of each stick provides a suitable metric for characterizing the pieces. Other foods, such as berries or carrot cubes, are more accurately represented by associating a typical volume or diameter to each piece.

We first propose a model for the chopping of long thin particles. This can be thought of as a one-dimensional problem, where we track the length of each piece (Sect. 2) and present both analytical and numerical results for this. A model is then presented to address the chopping of food pieces that are characterized more appropriately by their volume (or an effective diameter) (Sect. 3), and a similar analysis is conducted. We compare the predictions with experimental data and use this to improve our model in Sect. 4. We conclude our analysis in Sect. 5 by generalizing our model to include a minimum particle size that can be chopped by the blades to provide a more accurate prediction of the resulting particle distribution.

2 Models of chopping one-dimensional particles

Motivated by the chopping of long slender objects, such as carrots, we begin by considering a piece of food that is randomly and continuously chopped, producing smaller pieces, each of which is defined only by its length. By studying the time evolution of the distribution of the number of pieces of each size, we gain insight as to what to expect when considering the action of a blender. We shall start by considering that the line can only be chopped at a finite, but large, set of discrete points. Hence, we might consider the entire line to be made of very small sub-lines. We will develop a model using this discrete version of chopping. We will then take the limit of this process and consider chopping at any point, thereby generating a continuous model of chopping. We will consider beginning with several pieces of food. Our aim is to introduce the notation and the ideas in the simple context of one dimension before presenting a model of more complex particles.

2.1 The discrete-size model

We consider initially M very thin pieces of food, each of length L. We imagine that each of these pieces is composed of N very small discrete bits, of length L / N, and that any chop will cut the piece of food at a point between two of these small bits. Hence, if a piece has length L (i.e. it is made of N small bits), then one chop will result in two pieces, one of length jL / N and the other of length \((N-j)L/N\) (for some integer \(j\in [1,N-1]\)). Here, we are interested in the case when \(N\gg 1\). We now introduce the notation \(Y_i(t)\), the number density, to be the number of pieces composed of exactly i small bits at time t (this corresponds to the number of pieces of length iL / N, and we shall refer to these as pieces of “size” i from here onward). Hence, for example, if we begin with M pieces of length L, then we have

$$\begin{aligned} Y_i(0) = {\left\{ \begin{array}{ll} 0, &{} 1 \le i < N, \\ M, &{} i = N, \\ 0, &{} i>N. \end{array}\right. } \end{aligned}$$

Our aim is to determine \(Y_i(t)\) for \(t>0\) by considering how the chopping process occurs.

We first consider a single chop of the blender blade. We assume that the probability that a piece of size i is chopped is G(i). This functional dependence allows for the possibility of, for example, a larger piece being more likely to be chopped than smaller pieces. Note that by assuming G depends only on i, the chances of a piece being chopped depends only on its size and not, for example, on its position or orientation, and hence, we are assuming perfect mixing of the contents. We also assume that this probability does not vary with time. However, our methodology readily generalizes to capture such behaviour, and we discuss the possibility of time-dependent probabilities in the Conclusions.

If we assume that a chop takes place every \(\varDelta t\) seconds, then we may write down the number density, \(Y_i\), at some time \(t=(p+1)\varDelta t\) in terms of the number density at time \(t=p \varDelta t\), where p is an integer,

$$\begin{aligned} Y_i((p+1)\varDelta t) = Y_i(p\varDelta t) - G(i)Y_i(p\varDelta t) + \sum _{q=i+1}^{\infty } G(q)Y_q(p\varDelta t)\frac{2}{q-1}. \end{aligned}$$

The number density of pieces of size i, \(Y_i(t)\), is reduced due to some of these pieces of size i being chopped to smaller pieces, with probability G(i) (the first term on the right-hand side of (2)), and increased due to larger pieces of size \(q>i\) that have been chopped to form some pieces of size i, with probability G(q) (the summation term in (2)). We note the extra multiplicative factor in the summation term, which arises when we consider the different outcomes that lead to a piece of size i when a piece of size \(q>i\) is chopped. Recall, a piece of size q can be chopped at \(q-1\) possible positions along its length. If we assume that the chop can occur at any of these places with equal probability, then a piece of size i is created if the chop takes place in one of two places: at position i or at position \(q-i\). Thus, the probability of a chop producing a piece of size i is \(2/(q-1)\). Note that this also accounts for the special case when \(q=2i\): in this case, if we chop in half, we produce two pieces of size i, but there is only one way in which we can chop to achieve this so the resulting probability of generating a piece of size i from a piece of size 2i is also \(2/(q-1)\).
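The update rule (2) and the counting argument above can be checked directly in code. The sketch below is our own illustration (the function name and parameter values are not from the paper): it applies the update repeatedly to a number-density vector, with chop probability proportional to size, and confirms that the total length \(\sum _i i\,Y_i\) is unchanged by each step.

```python
import numpy as np

def chop_step(Y, G):
    """Apply one step of the chopping recurrence (2).

    Y[i] is the number of pieces of size i + 1 (index 0 is size 1);
    G[i] is the probability that a piece of size i + 1 is chopped.
    """
    n = len(Y)
    Ynew = Y - G * Y                       # pieces of size i lost to chopping
    for i in range(n - 1):                 # gain of size i + 1 from larger pieces
        q = np.arange(i + 2, n + 1)        # sizes q > i + 1
        Ynew[i] += np.sum(G[q - 1] * Y[q - 1] * 2.0 / (q - 1))
    return Ynew

N, M = 50, 100.0
Y = np.zeros(N)
Y[N - 1] = M                               # M pieces of size N, cf. (1)
G = 0.01 * np.arange(1, N + 1)             # chop probability proportional to size
G[0] = 0.0                                 # a size-1 bit cannot be chopped
sizes = np.arange(1, N + 1)
mass0 = np.sum(sizes * Y)
for _ in range(20):
    Y = chop_step(Y, G)
print(abs(np.sum(sizes * Y) - mass0))      # total length is conserved
```

Note that setting the chop probability of the smallest bits to zero is what makes the conservation exact; each chopped piece of size q distributes exactly size q among its two fragments.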

We now assume that the time between two successive chops, \(\varDelta t\), is small. This allows us to take the continuous limit in time of (2) to obtain a discrete-size differential-equation model for the number density, \(Y_i\) at any time t as follows:

$$\begin{aligned} \frac{\mathrm {d}Y_i(t)}{\mathrm {d}t} = - F(i)Y_i(t) + \sum _{q=i+1}^{\infty } F(q)Y_q(t)\frac{2}{q-1}, \end{aligned}$$

with initial condition (1). Here, \(F(i)=G(i)/\varDelta t\) is the likelihood function and corresponds to the rate at which particles of size i are chopped, with units s\(^{-1}\). We assume that F(i) is an order-one quantity in the limit as \(\varDelta t\rightarrow 0\) and discuss the appropriate functional form of F(i) in detail later.

Equation (3) belongs to a subset of the generalized Smoluchowski theory [16], which models the agglomeration and disintegration of aggregates each composed of a given number of distinct entities. Here, we include only the disintegration component, which captures the chopping process, since we assume that pieces will never recombine. The Smoluchowski equations are a system of coupled nonlinear ordinary differential equations (ODEs) that describe the evolution of the number density of each piece size, \(Y_i(t)\). The general Smoluchowski equations have been shown to be well-posed, and so our subset model will also be well-posed [18].

One important property of Eq. (3) is that it ensures conservation of mass, i.e.

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t}\sum _{i=1}^{\infty } \frac{iL}{N} Y_i(t) = 0 \qquad \hbox { for } t\ge 0, \end{aligned}$$

which may be obtained by multiplying (3) by i and summing over all i. When we begin with M pieces of size N (each of length L), this implies

$$\begin{aligned} \sum _{i=1}^{\infty } \frac{iL}{N} Y_i(t) = LM \qquad \hbox { for } t\ge 0. \end{aligned}$$

2.1.1 Numerical solution of the discrete-size problem

We can solve the discrete model (1)–(3) numerically using MATLAB, and hence visualize the evolution of the particle distribution for a particular likelihood function (Fig. 1). We must choose a reasonable likelihood function, F(i), to describe the rate of chopping a piece of size iL / N. We might expect larger pieces to be more likely to be chopped than smaller pieces; a simple model of this is to take the likelihood of chopping to be directly proportional to a piece’s length, namely \(F(i) = ai\), where a is a positive constant that captures other general contributing factors.
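The same computation can be sketched in SciPy (an illustration under our own parameter choices; solve_ivp plays the role of MATLAB's ode45). Since (3) is linear in \(Y\), the right-hand side can be assembled once as a matrix.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, M, L, a = 100, 100.0, 1.0, 1.0
sizes = np.arange(1, N + 1)
F = a * sizes.astype(float)      # likelihood F(i) = a*i
F[0] = 0.0                       # a size-1 bit cannot be chopped further

# The model (3) is linear in Y, so build its matrix once:
# dY_i/dt = -F(i) Y_i + sum_{q > i} F(q) Y_q * 2/(q-1)
A = np.diag(-F)
for q in range(2, N + 1):
    A[: q - 1, q - 1] += F[q - 1] * 2.0 / (q - 1)

Y0 = np.zeros(N)
Y0[-1] = M                       # initial condition (1): M pieces of size N
sol = solve_ivp(lambda t, Y: A @ Y, (0.0, 10.0), Y0,
                t_eval=[3.0, 7.0, 10.0], rtol=1e-8, atol=1e-10)
total_length = np.sum((sizes * L / N) * sol.y[:, -1])
print(total_length)              # total length remains L*M = 100, cf. (5)
```

Plotting the columns of sol.y against the piece sizes at the three output times gives distributions of the kind shown in Fig. 1.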

We can interpret the particle distributions in two ways: either as the number density, \(Y_i(t)\), that is, the number of particles for each particle size; or the length density, \(L_{D_{{i}}}(t)\), by considering the total length of pieces of food that is attributed to each particle size, given by

$$\begin{aligned} L_{D_{{i}}}(t)=\left( \frac{iL}{N}\right) Y_i(t). \end{aligned}$$
Fig. 1

Numerical solution to the one-dimensional discrete model (1)–(3): a number density, \(Y_i\), and b length density, \(L_{D_i}\) at \(t=3, 7\) and 10. Here, \(L=1\), \(M=100\) and \(N=1000\)

The number-density distribution we observe from Fig. 1a initially has a spike of height \(M=100\) at \(i=N\), corresponding to the initial condition (1). As time progresses, the total number of pieces in the system increases, while the size of a typical piece decreases. We notice that there are considerably more tiny pieces than large pieces; however, it is not obvious which particles account for the majority of the material (i.e. the length) in the system. The length-density distribution shown in Fig. 1b is a single-peaked graph which, with time, moves from right to left, while growing taller and narrower. From this, we observe how the length fraction attributed to each particle size changes with time, while the total length in the system remains constant (as demonstrated by the constant area under this curve as time evolves). We can then deduce which size of particle accounts for the majority of the length.

2.2 The continuous-size model

Given the largeness of N, it is natural to extend the ideas of Sect. 2.1 to a continuous-size model so that we can consider a continuous range of piece sizes rather than the discrete set considered previously. Such a continuum approach has been considered in the wider context (see, for example, [20,21,22]). We define x as the continuous variable for a piece length, \(x = {iL}/{N}\). We define y(x, t) as the continuous analogue of \(Y_i(t)\) and f(x) as the continuous analogue of F(i) when \(N\rightarrow \infty \). Hence, we have

$$\begin{aligned} Y_i(t) = y\left( x,t\right) , \quad F(i) = f\left( x\right) , \quad \text {with }x=\frac{iL}{N}, \quad 0 < i \le N. \end{aligned}$$

We now consider the discrete model (1)–(3) and take the limit as \(N\rightarrow \infty \) to obtain the continuous analogue. Note we must consider how the summation in (3) behaves, for which it is useful to recall the definition of a Riemann sum:

$$\begin{aligned} \lim _{N\rightarrow \infty }\sum _{i=0}^{\infty } g(i)\frac{1}{N} = \int _0^{\infty } g(x)\mathrm {d}x. \end{aligned}$$

Note that the upper limit on the summation has been set to infinity rather than N; we are able to do this since g(i) is zero initially for all \(i>N\) and remains zero for all future times.

Hence, we find

$$\begin{aligned} \sum _{q=i+1}^{\infty } F(q)Y_q(t)\frac{2}{q-1}&\sim \int _x^{\infty } f(s)y(s,t)\frac{2}{s}\; \mathrm {d}s \quad \hbox { as } N\rightarrow \infty . \end{aligned}$$

The resulting continuous version of (3) is therefore

$$\begin{aligned} \frac{\partial y(x,t)}{\partial t} = - f\left( x\right) y\left( x,t\right) + \int _{x}^{\infty } f\left( s\right) y\left( s,t\right) \frac{2}{s} \mathrm {d}s, \end{aligned}$$

where the initial condition (1) has become

$$\begin{aligned} y(x,0) = M\delta (x-L), \end{aligned}$$

where \(\delta (x)\) is the Dirac delta function. The continuous analogue of the length density (6) is

$$\begin{aligned} \ell _D=xy(x,t), \end{aligned}$$

while, analogous to the discrete-size model, we observe that Eq. (10) ensures that mass is conserved and for these initial conditions implies that

$$\begin{aligned} \int _0^{\infty } x\,y(x,t)\,\mathrm {d}x = LM. \end{aligned}$$

2.2.1 Analytical solution

The continuous model allows us to exploit methods of solution that yield analytical solutions in special, but very practical, cases. We must choose a function f(x) to describe the rate of chopping a piece of length x. As noted in Sect. 2.1.1, larger pieces are more likely to be chopped than smaller pieces, so one possibility is to assume that the likelihood of chopping a piece is proportional to its length. However, there are other factors that will affect the likelihood, and so we generalize this idea in a manner that still allows us to find a solution analytically. Specifically, we consider the case

$$\begin{aligned} f(x) = a x^k, \end{aligned}$$

where both k and a are positive constants, so that larger pieces have a greater chance of being chopped.

We now derive an analytical solution to (10) where f(x) is given by (14). We assume that the problem has a similarity solution of the form

$$\begin{aligned} y(x,t) = x^{\gamma } g(\eta )\qquad \hbox { with } \eta = xt^{\beta }, \end{aligned}$$

where the constants \(\gamma \) and \(\beta \) are to be chosen. Similarity solutions of this nature have also been considered in the wider context (see, for example, [23]). If we substitute (15) into (13), then the requirement that the condition hold for all time forces us to take \(\gamma = -2\). To substitute (15) into (10), we simplify the steps by first differentiating (10) with respect to x. This then gives

$$\begin{aligned} \left( \beta x t^{2\beta -1}\right) g'' + \left( a x^{k}t^{\beta } + (1+\gamma )\beta t^{\beta -1}\right) g' + \left( a(k+\gamma +2)x^{k-1}\right) g = 0. \end{aligned}$$

In order that this equation only depends on the variable \(\eta \), and not on t explicitly, we find we must take \(\beta = {1}/{k}\). Hence the function \(g(\eta )\) satisfies

$$\begin{aligned} \frac{1}{k}\eta g'' + \left( a\eta ^k -\frac{1}{k}\right) g' + a k\eta ^{k-1}g = 0. \end{aligned}$$

We now note that (17) has the general solution

$$\begin{aligned} g(\eta ) = c_1 a^{\frac{2}{k}}\eta ^2 \mathrm {e}^{-a\eta ^k} + \frac{c_2}{k} (-a)^{\frac{2}{k}}\,\eta ^2 \mathrm {e}^{-a\eta ^k} \left( \varGamma \left( -\frac{2}{k}\right) -\varGamma \left( -\frac{2}{k},-a\eta ^k\right) \right) , \end{aligned}$$

where \(c_1\) and \(c_2\) are arbitrary constants, while \(\varGamma (\cdot ,\cdot )\) and \(\varGamma (\cdot )\) denote, respectively, the incomplete Gamma function and the Gamma function, defined by

$$\begin{aligned} \varGamma (u,v)&=\int _v^\infty s^{u-1}\mathrm {e}^{-s}\,\mathrm {d}s, \quad \varGamma (u)=\varGamma (u,0). \end{aligned}$$
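The first branch of this general solution can be verified symbolically. The sketch below is our own check (using SymPy, our choice of tool): it substitutes \(g(\eta )=\eta ^2\mathrm {e}^{-a\eta ^k}\), the \(c_1\) branch with the multiplicative constant stripped, into (17) and confirms that the residual vanishes for representative values of k.

```python
import sympy as sp

a, eta, k = sp.symbols('a eta k', positive=True)

# c1 branch of the general solution (18), with the constant dropped
g = eta**2 * sp.exp(-a * eta**k)

# residual of ODE (17): (1/k) eta g'' + (a eta^k - 1/k) g' + a k eta^(k-1) g
residual = (eta * sp.diff(g, eta, 2) / k
            + (a * eta**k - 1 / k) * sp.diff(g, eta)
            + a * k * eta**(k - 1) * g)

for kval in [1, 2, sp.Rational(3, 2)]:
    print(sp.simplify(residual.subs(k, kval)))   # each prints 0
```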

When we differentiated (10) to get (16), some information was lost and this is retrieved if we ensure that g is finite for \(t>0\). Noting that \(\varGamma (z)\) is finite only for \(z>0\), we conclude that we must take \(c_2 = 0\). Thus, we have

$$\begin{aligned} y(x,t) = x^{-2}g(\eta ) = c_1(a t)^{\frac{2}{k}} \mathrm {e}^{-x^k (a t)}, \end{aligned}$$

and substituting (20) into (13) yields

$$\begin{aligned} c_1\, (a t)^{\frac{2}{k}}\int _0^{\infty } x \, \mathrm {e}^{-x^k(a t)} \,\mathrm{{d}}x = LM. \end{aligned}$$

Hence, using the identity

$$\begin{aligned} \int _0^{\infty } x^m \mathrm {e}^{-ax^b} \mathrm {d}x = \frac{1}{b}\, a^{-\frac{m+1}{b}}\,\varGamma \left( \frac{m+1}{b}\right) , \end{aligned}$$

we deduce that the solution to the problem is

$$\begin{aligned} y(x,t) = \frac{LM\, k\,(a t)^{\frac{2}{k}}}{\varGamma \left( \frac{2}{k}\right) }\mathrm {e}^{-x^k a t}. \end{aligned}$$

We have therefore determined the analytical form of the size distribution for all time when a set of particles with a total mass LM are placed into a blender with a likelihood function of \(f(x)=ax^k\) for \(a,k>0\). Note that (23) does not actually satisfy the initial condition given by (11); however, it does provide an excellent approximation to the solution of the full problem (10) and (11) for any time away from \(t=0\), and particularly in the physically relevant case where the initial pieces have been chopped several times so that most pieces are small, as will be demonstrated in the next section.
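The conservation property (13) can be checked numerically for the closed form (23). The sketch below is our own verification, with arbitrary illustrative parameter values: it integrates \(x\,y(x,t)\) at several times and confirms that the result equals LM.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

L, M, a, k = 1.0, 100.0, 1.0, 1.5    # arbitrary illustrative values

def y(x, t):
    """Analytical solution (23) for the likelihood f(x) = a x^k."""
    return L * M * k * (a * t) ** (2.0 / k) / gamma(2.0 / k) * np.exp(-(x**k) * a * t)

for t in [1.0, 5.0, 50.0]:
    total, _ = quad(lambda x: x * y(x, t), 0.0, np.inf)
    print(total)   # each integral equals L*M = 100, confirming (13)
```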

2.2.2 Comparison of results

The continuous model (10) and (11) can be solved by discretizing in x and using the MATLAB function ode45 (i.e. using the method of lines). Note that if a uniform discretization of x is used, then this is equivalent to the discrete model. However, as seen in Fig. 1, the distributions tend to move toward smaller and smaller particles, and these can be computed most efficiently by considering a uniform discretization in the logarithm of x. This can be done by rewriting the model in the variable \(z=\log x\) and then using a uniform mesh in z. The predictions for both the number density and length density agree well with the discrete model and so are not shown here.
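The logarithmic-mesh approach can be sketched as follows (a SciPy illustration under our own parameter and mesh choices; solve_ivp stands in for ode45, and the smoothed initial condition is our own surrogate for the delta function). Substituting \(s=\mathrm {e}^{\zeta }\) in the gain integral of (10) gives \((2/s)\,\mathrm {d}s = 2\,\mathrm {d}\zeta \), so the kernel becomes constant on the logarithmic mesh.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of lines on a mesh uniform in z = log(x); parameter values are our own
a, k, L, M = 1.0, 1.0, 1.0, 100.0
z = np.linspace(np.log(1e-4), np.log(2.0), 600)
dz = z[1] - z[0]
x = np.exp(z)
f = a * x**k

def rhs(t, y):
    # With s = e^zeta, the gain term of (10) becomes
    # 2 * int_z^infty f(e^zeta) y(e^zeta, t) dzeta, since (2/s) ds = 2 dzeta.
    w = f * y
    tail = np.cumsum(w[::-1])[::-1]            # sum_{j >= i} w_j
    gain = 2.0 * (tail - 0.5 * w) * dz         # trapezoidal rule on [z_i, z_max]
    return -w + gain

# Smooth surrogate for the delta-function initial condition (11)
sigma = 0.05
y0 = M * np.exp(-((x - L) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
sol = solve_ivp(rhs, (0.0, 15.0), y0, rtol=1e-6, atol=1e-9)
yT = sol.y[:, -1]

w = x**2 * yT                                  # int x y dx = int x^2 y dz
mass = dz * (np.sum(w) - 0.5 * (w[0] + w[-1]))
print(mass)                                    # stays close to L*M = 100, cf. (13)
```

The truncation of the domain at small x loses only a negligible amount of length, since the length density \(x\,y\) vanishes as \(x\rightarrow 0\).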

As mentioned earlier, the analytical solution (23) will not be a good approximation to the distribution at the beginning of a blend, but we anticipate that it will give the correct behaviour after longer times, when many chops have occurred. In Fig. 2, we compare the distributions from the continuous model and the analytical solution after different times. We see that at \(t=2\), there is some discrepancy between the numerical and analytical solutions; however, for \(t\gtrsim 5\) s, this discrepancy is negligible.

Fig. 2

A comparison of the length density \(\ell _D\), defined by (12), predicted by the numerical solution of (10) and (11) (dashed lines) against the predictions of the analytical solution (23) that satisfies (10) and (13) (solid line) at \(t=5, 15\) and 50. Within a short time, the analytical and numerical solutions agree well. Here, \(L=1\), \(M=100\), \(a=1\) and \(k=1\). The inset shows the \(L_1\) norm of the difference between the analytical and numerical solutions, illustrating convergence with time

Hence, we conclude that we can study the relevant behaviour of a blender accurately with both the analytical and the numerical solutions. The numerical approach easily allows us to include other effects, such as more complicated likelihood functions. The analytical solution gives useful insight into the general behaviour of the distributions. For example, we can readily find that the peak in the length distribution (see Fig. 2) occurs at the maximum of \(\ell _D=xy(x,t)\), which is attained when

$$\begin{aligned} x = (akt)^{-1/k}. \end{aligned}$$
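This peak location is easy to confirm numerically; the quick check below uses our own illustrative parameter values and locates the maximum of \(x\,\mathrm {e}^{-atx^k}\) on a fine grid.

```python
import numpy as np

a, k, t = 1.0, 1.5, 5.0
x = np.linspace(1e-4, 2.0, 200000)
ell = x * np.exp(-a * t * x**k)            # length density, up to a constant factor
x_peak = x[np.argmax(ell)]
print(x_peak, (a * k * t) ** (-1.0 / k))   # the two values agree
```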

3 Model extension: three dimensions

The models that we have derived so far hold for the chopping of food that is long and slender, so that its length forms an appropriate identifying metric. As discussed in the Introduction, in some cases the food pieces being chopped are more spherical in shape, and so are better described by their volume. For simplicity, we will ignore any shape discrepancies, and model all particles as spheres, imposing the assumption that when a spherical particle is chopped it produces two spherical particles while conserving volume. This is, of course, a considerable approximation, but it allows us to make significant progress, giving useful insight while avoiding excessive computational effort in following complicated changes in the geometry of the particles.

We now proceed in a manner analogous to the one-dimensional case. We begin by studying a discrete distribution of sizes, but will find that our previous methodology does not naturally generalize. We then consider the continuous model, for which we find that analytical results are possible.

3.1 The discrete-size model

To model discrete chopping of spheres into smaller spheres, we choose to characterize the particles by their diameter. Without loss of generality, we consider our set of diameters to be integers j (similar to our sizes being integers i earlier). When a chop takes place, we must conserve volume, which leads to the following equations:

$$\begin{aligned}&\frac{\pi }{6}\,j^3 = \frac{\pi }{6}\,j_1^3 + \frac{\pi }{6}\,j_2^3, \end{aligned}$$
$$\begin{aligned}&j^3 = j_1^3 + j_2^3, \end{aligned}$$

where j is the particle diameter before chopping, while \(j_1\) and \(j_2\) are the diameters corresponding to the two particles resulting from the chop. However, we recall Fermat’s Last Theorem, from which we know that this equation has no positive integer solutions, since the exponents are greater than 2 [24]. Therefore, we cannot adopt the same approach as in the one-dimensional case, and so we immediately focus on a continuous-size description.

3.2 The continuous-size model

Following the same ideas as the one-dimensional case, we define x as the particle diameter. We expect the three-dimensional model to take a similar form as before, that is,

$$\begin{aligned} \frac{\partial y}{\partial t} = -f\left( x\right) y\left( x,t\right) + \int _{x}^{\infty } f\left( s\right) y\left( s,t\right) h(s) \mathrm {d}s. \end{aligned}$$

However, now y(x, t) denotes the number of particles of diameter x at time t, and we introduce h(s) as the probability that chopping a particle of diameter s generates a particle of diameter x. In the one-dimensional case, we found \(h(s)=2/s\) using a probability argument, which did not require us to introduce the notation h(s). However, in three dimensions it is easier to determine h(s) by exploiting the need to conserve mass (though this method is equivalent to the probabilistic procedure). The general equation expressing conservation of mass in n dimensions is

$$\begin{aligned} \frac{\mathrm {d}}{\mathrm {d}t}\int _0^{\infty } x^n\,y(x,t)\,\mathrm {d}x = 0. \end{aligned}$$

In one dimension (\(n=1\)) (28) corresponds to conservation of length and is the differentiated form of (13). In three dimensions, (28) corresponds to conservation of total volume. We now derive the function h(s) for the general case of \(n>0\).

By multiplying (27) by \(x^n\) and integrating with respect to x over \(0\le x<\infty \), we obtain

$$\begin{aligned} \frac{\partial }{\partial t} \int _0^{\infty } x^n y(x,t)\,\mathrm {d}x= & {} -\int _0^{\infty }x^n f\left( x\right) y\left( x,t\right) \,\mathrm {d}x +\int _0^{\infty } x^n \int _{x}^{\infty } f\left( s\right) y\left( s,t\right) h(s)\, \mathrm {d}s \, \mathrm {d}x. \end{aligned}$$

Using (28), we require that the left-hand side of (29) must be equal to zero. Rearranging the double integral on the right-hand side of (29), we then find

$$\begin{aligned} 0 = -\int _0^{\infty }x^n f\left( x\right) y\left( x,t\right) \,\mathrm {d}x + \int _0^{\infty } f\left( s\right) y\left( s,t\right) h(s) \int _{0}^{s} x^n \, \mathrm {d}x \, \mathrm {d}s, \end{aligned}$$

and thus

$$\begin{aligned} \int _0^{\infty }x^n f\left( x\right) y\left( x,t\right) \,\mathrm {d}x = \int _0^{\infty } f\left( s\right) y\left( s,t\right) h(s) \frac{1}{n+1} s^{n+1} \,\mathrm {d}s. \end{aligned}$$

Therefore, we conclude that the probability h(s) must take the form

$$\begin{aligned} h(s) = \frac{n+1}{s}. \end{aligned}$$

Hence our model becomes

$$\begin{aligned} \frac{\partial y}{\partial t} = -f(x)y(x,t) + \int _x^{\infty } f(s)y(s,t)\frac{n+1}{s}\,\mathrm {d}s, \end{aligned}$$

with initial condition

$$\begin{aligned} y(x,0) = M\delta (x-L), \end{aligned}$$

which corresponds to having M pieces each of diameter L initially. Our choice for h(s) ensures that volume is conserved and so, using (34), this implies

$$\begin{aligned} \int _0^{\infty } x^n\,y(x,t)\,\mathrm {d}x = L^nM \quad \, \hbox { for all} \, t. \end{aligned}$$

This agrees with the earlier one-dimensional model (\(n=1\)).

3.2.1 Analytical solution

In an analogous fashion to the one-dimensional model, an analytical solution can be found to (33) and (35) for the likelihood function \(f(x) = a x^k\) for positive constants a and k. Following the previous method of seeking a similarity solution, we find the solution

$$\begin{aligned} y(x,t) = \frac{L^nM\,k\,(a t)^{\frac{n+1}{k}}}{\varGamma \left( \frac{n+1}{k}\right) }\mathrm {e}^{-x^k a t}. \end{aligned}$$

Note that (36) reduces to the one-dimensional solution (23) when we set \(n=1\). Again, while (36) does not satisfy the correct initial condition, it does have the correct total volume and gives excellent predictions of behaviour at times of practical interest. In Fig. 3a, we show the particle number distribution y(x, t), given by (36), when \(n=3\), as a function of the particle diameter x, while in Fig. 3b, we show the continuous particle volume distribution, which is given by

$$\begin{aligned} v_D(x,t)=x^3 y(x,t). \end{aligned}$$
Fig. 3

Analytical solution to the three-dimensional continuous model, (36) with \(n=3\). a Particle number density, y, and b volume density, \(v_D\), at \(t=3\), \(t=7\) and \(t=10\). Here, \(L=1\), \(M=100\), \(k=1\) and \(a=1\)

We also solve the continuous 3D model numerically by discretizing in x and then using the MATLAB function ode45. The results agree very closely with those obtained analytically, with similar convergence to the one-dimensional case, and we therefore do not include them here.
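Although we omit the numerical comparison, the volume-conservation property (35) of the closed form (36) can be verified directly. The sketch below is our own check, with illustrative parameter values matching Fig. 3: it integrates \(x^n y(x,t)\) at several times and confirms that the result equals \(L^nM\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

n, L, M, a, k = 3, 1.0, 100.0, 1.0, 1.0

def y(x, t):
    """Analytical solution (36) for the n-dimensional model."""
    return (L**n * M * k * (a * t) ** ((n + 1.0) / k)
            / gamma((n + 1.0) / k) * np.exp(-(x**k) * a * t))

for t in [3.0, 7.0, 10.0]:
    volume, _ = quad(lambda x: x**n * y(x, t), 0.0, np.inf)
    print(volume)   # each integral equals L**n * M = 100, confirming (35)
```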

4 Experimental data

A series of experiments was performed to collect data that could be compared to the model predictions, enabling the models to be further refined and the behaviour to be interpreted.

4.1 Experimental methodology

In this section, we compare our model predictions with experimental data. Experiments were performed using a 24 oz single-serve cup and a Nutri Ninja® Pro Extractor Blade [25]. All experiments were carried out using a mixture of carrots and water. Carrot tops were removed and the carrots were split in half. The halves were mixed randomly in order to minimize any effect of physical variation between carrots. The carrots were then cut into approximately 6 mm cubes, and an initial mixture, composed of \(425\,\)g water and 283 g carrot, was put in the blender. The blender was then operated at rotation speeds controlled via a Hall-effect sensor integrated into the motor controller. After a fixed blending time of 50 s, a sample of the mixture was taken, diluted with water, and the resulting size distribution was measured using a Malvern MS3000 Laser Analysis Unit with a Malvern HydroLV sampling unit. The process was repeated 15 times to ensure repeatability.

4.2 Comparison with analytical solution

To demonstrate the predictive accuracy of the models, we take the three-dimensional analytical solution, (36), and fit it to the measured experimental data. In doing so, we are free to choose the values of the model parameters, k and a, which we determine by a least-squares fit. Some parameters are known: for example, the experimental blender contains 425 ml of water and \(283\,\)ml of carrot (mass ratio 40:60 carrot to water). The initial size of the carrot particles is 6 mm; hence, we choose our initial data such that we begin with 257,000 mm\(^3\) of carrot pieces of diameter 6 mm. Despite the additional step of relating the operating conditions to the parameters a and k, the fit of the analytical solution to the experimental data is extremely good (see Fig. 4). This suggests that our model captures the key physics during blending. However, we note the appearance of a second peak, corresponding to very small particles within the experimental data, that does not arise in our model. We thus turn our attention to modelling this extra observation in the following section.
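A least-squares fit of this kind can be sketched as follows. Since the measured distribution cannot be reproduced here, the example fits synthetic data generated from (36) itself (all parameter values are illustrative, chosen in the spirit of the experiment, not the experimental measurements), and recovers a and k with scipy.optimize.curve_fit.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

n, L, M, t = 3, 6.0, 284.0, 50.0   # illustrative values in the spirit of Fig. 4

def volume_density(x, a, k):
    """v_D = x^n * y(x, t), with y given by the analytical solution (36)."""
    y = (L**n * M * k * (a * t) ** ((n + 1.0) / k)
         / gamma((n + 1.0) / k) * np.exp(-(x**k) * a * t))
    return x**n * y

# Synthetic "measurements": the model at (a_true, k_true) plus 1% noise
rng = np.random.default_rng(0)
a_true, k_true = 0.057, 1.123
x_data = np.linspace(0.05, 3.0, 120)
v_data = volume_density(x_data, a_true, k_true) * (1 + 0.01 * rng.standard_normal(120))

(a_fit, k_fit), _ = curve_fit(volume_density, x_data, v_data, p0=[0.1, 1.0],
                              bounds=([1e-4, 0.2], [1.0, 5.0]))
print(a_fit, k_fit)   # close to a_true = 0.057 and k_true = 1.123
```

Fitting real data in the same way only requires replacing the synthetic arrays by the measured size distribution.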

Fig. 4

Comparison of the analytical solution (36) (solid curve), using the derived likelihood function (14), with the experimental data for a blend at 7000 RPM. The grey dashed curves show the volume density distribution after 50 s of blending for 30 independent runs. Here, \(L=6\) and \(M=284\), so as to agree with the experimental conditions, and \(a\approx 0.057\) and \(k\approx 1.123\), both found through a least-squares fit

5 Debris and a minimum particle size

We notice from Fig. 4 that a significant discrepancy between our simple model and the experimental data is the leftmost peak visible in the distribution. We speculate that this is a consequence of debris produced when a particle is chopped. It is likely that when a particle is chopped into two particles there is additionally some residue which could consist of juice, broken fruit cells or fragments of fruit which are too small to be considered as particles. This leads us to the natural extension of including the debris in our model. We consider all the debris to consist of very small particles of a given size. We shall presume debris is created whenever a larger particle is chopped and that this debris cannot be chopped further. However, a consequence of such an assumption is that we will need to define the smallest particle size that can be chopped.

5.1 The one-dimensional discrete-size model with debris

To summarize the notation of the one-dimensional discrete-size model in Sect. 2.1, we supposed that a piece of length L could be chopped in any one of \(N-1\) places to form two pieces of length iL/N (or size i) and \((N-i)L/N\) (size \(N-i\)). Within this framework the smallest piece that can be formed is of length L/N (that is, size 1). We now adjust our model so that each time a chop takes place, along with the division into two distinct pieces we also generate m pieces of debris, of size 1. However, we also assume that the smallest piece that can be formed from a chop that is not debris is of length pL/N (size p), where \(p>1\) is an integer, so that we create a distinction between chopped pieces (\(i \ge p\)) and the debris (\(i=1\)). Within this framework, we conclude that the smallest piece that can be chopped must be of size at least \(2p+m\) since chopping anything smaller cannot create two of the smallest possible pieces and the associated debris.

The dynamics of this modified scenario that accounts for chopping with debris can be encapsulated in the following discrete-size model:

$$\begin{aligned}&\frac{\mathrm {d}Y_i(t)}{\mathrm {d}t}= - F(i)Y_i(t) + \sum _{q=i+p+m}^{\infty } F(q)Y_q(t) \frac{2}{q-2p-m+1},\quad i\ge p, \end{aligned}$$
$$\begin{aligned}&\frac{\mathrm {d}Y_1(t)}{\mathrm {d}t} = \sum _{q=2p+m}^{\infty } m F(q)Y_q(t), \end{aligned}$$
$$\begin{aligned}&\hbox {with the constraint that } F(i)=0, \quad \hbox { for } i < 2p+m, \end{aligned}$$

where \(Y_i(t)\) and F(i) have the same definition as before (with \(Y_1(t)\) being the total number of debris particles). The fraction in the summation on the right-hand side of (38) describes the probability that when we chop a piece of size q (\(q\ge 2p+m\)) a piece of size i (\(i\ge p\)) is created.

We continue with the same initial condition that we used for our original model, (1): we begin with M pieces of identical length as before, and we note that there is no debris initially (\(Y_1(0)=0\)). This model then ensures that the total mass of food particles is conserved, with

$$\begin{aligned} \sum _{i=1}^{\infty } \frac{iL}{N} Y_i(t) = LM, \end{aligned}$$

for all time.
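This conservation property can be checked numerically. The following minimal sketch (Python with SciPy) integrates a truncation of (38)–(40) up to a largest size N, with illustrative values of N, p, m and M and an assumed linear likelihood \(F(i)=ai\) above the cutoff; none of these values are fitted to the experiments.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative truncation of the discrete-size debris model (38)-(40).
N, p, m, M = 60, 2, 3, 100     # sizes 1..N; smallest chopped size p; m debris per chop
a = 0.01                       # assumed rate constant in F(i) = a*i

def F(i):
    # chopping likelihood; pieces of size below 2p + m cannot be chopped (40)
    return a * i if i >= 2 * p + m else 0.0

def rhs(t, Y):
    dY = np.zeros(N + 1)                # index 0 unused, Y[i] = Y_i(t)
    for q in range(2 * p + m, N + 1):
        loss = F(q) * Y[q]              # rate of chops of size-q pieces
        dY[q] -= loss
        dY[1] += m * loss               # debris creation, eq. (39)
        frac = 2.0 / (q - 2 * p - m + 1)
        for i in range(p, q - p - m + 1):
            dY[i] += loss * frac        # gain of chopped pieces, eq. (38)
    return dY

Y0 = np.zeros(N + 1)
Y0[N] = M                               # start with M pieces of the largest size
sol = solve_ivp(rhs, (0, 50), Y0, rtol=1e-8, atol=1e-10)

sizes = np.arange(N + 1)
mass0 = sizes @ sol.y[:, 0]             # total size, proportional to total mass
mass_end = sizes @ sol.y[:, -1]
print(mass_end / mass0)                 # ~1: mass is conserved for all time
```

Since the total mass is a linear invariant of the system, standard Runge–Kutta integration preserves it to within solver tolerance.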

Note that the minimum piece length, pL/N, debris length, L/N, and amount of debris, m, created at each chop may all depend on operating parameters, such as the speed of the chop. However, here we concentrate on trying to understand the general behaviour of the distribution created by such a model.

5.2 The one-dimensional continuous debris model

We can extend the discrete debris model (38)–(40) to the continuous model following the same methodology as before. This leads to the following model:

$$\begin{aligned}&\frac{\partial y(x,t)}{\partial t} = -f(x)y(x,t) + \int _{x+\mu +m\lambda }^{\infty } f(s)y(s,t)\frac{2}{s-2\mu -m\lambda }\, \mathrm {d}s, \quad x\ge \mu , \end{aligned}$$
$$\begin{aligned}&\frac{\partial y(\lambda ,t)}{\partial t} = \int _{2\mu +m\lambda }^{\infty } m f(s)y(s,t)\, \mathrm {d}s, \end{aligned}$$
$$\begin{aligned}&\hbox {with the constraint that } f(x) = 0, \quad \hbox { for } x<2\mu +m\lambda . \end{aligned}$$

Here \(\mu \) and \(\lambda \) denote the continuous analogues of the minimum piece and debris size, respectively. This model bears similarities to the continuum descriptions used for micelle formation and breakdown, where the system is composed of large aggregates (micelles) and individual species (monomer), which we identify with the large pieces and debris respectively [17, 21].

5.3 Numerical solution of the one-dimensional continuous debris model

We consider a similar set-up to Sect. 2.2.2, that is, we begin with M pieces of length L so that our initial condition is

$$\begin{aligned} y(x,0)=M\delta (x-L). \end{aligned}$$

Finally, we need to define a likelihood function that indicates the chance of a piece being chopped. We assume that there is some minimum size \(\nu \) below which the blades cannot chop a piece, and that the probability of chopping increases linearly above this size. Then, we have

$$\begin{aligned} f(x) = {\left\{ \begin{array}{ll} 0, &{} x <\nu , \\ a(x - \nu ), \quad &{} x \ge \nu , \end{array}\right. } \end{aligned}$$

where a and \(\nu \) are positive constants. We must ensure that (44) is satisfied and this is most easily done by taking \(\nu = 2\mu +m\lambda \).

Fig. 5

Numerical solution to the one-dimensional continuous model (42)–(46) for the chopped pieces, y(x, t), \(x\ge \mu \). a Number density, y, and b volume density, \(v_D\), at \(t=5\), \(t=10\) and \(t=15\). Here, \(\mu = 0.02\), \(\lambda = 0.0025\), \(m = 3\), \(a=1\), \(k=1\), \(\nu =2\mu +m\lambda =0.0475\), \(L=1\) and \(M=100\)

The model (42)–(46) may be solved by discretizing as before and using ode45. We plot the main distribution, \(x\ge \mu \), and the time evolution of the debris, \(x = \lambda \), separately in Figs. 5 and 6, respectively. The creation of debris reduces the quantity of chopped pieces, and so the peaks in Fig. 5 are lower than in Fig. 1 when no debris is created. The rate at which debris volume is generated increases almost linearly to begin with, but slows over time (see Fig. 6).
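A sketch of such a discretization is given below (Python, using SciPy's solve_ivp in place of ode45, with parameter values from Fig. 5). The discrete kernel here is rescaled column-by-column so that each chop conserves mass exactly on the grid; this is a choice made for numerical robustness in the sketch, not one taken from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Grid discretization of the continuous debris model; values follow Fig. 5.
mu, lam, m, a = 0.02, 0.0025, 3, 1.0
L, M = 1.0, 100
nu = 2 * mu + m * lam            # smallest choppable size

n = 300
x = np.linspace(mu, L, n)        # grid for the main distribution, x >= mu
dx = x[1] - x[0]
f = np.where(x >= nu, a * (x - nu), 0.0)     # likelihood function (45)

# K[j, k]: density of fragments of size x_j per chop of a piece of size x_k.
# Each column is rescaled so that a chop of x_k yields exactly length
# x_k - m*lam in fragments, enforcing mass conservation on the grid.
K = np.zeros((n, n))
for k in range(n):
    if f[k] == 0.0:
        continue
    w = np.where(x <= x[k] - mu - m * lam,
                 2.0 / (x[k] - 2 * mu - m * lam) * dx, 0.0)
    K[:, k] = w * (x[k] - m * lam) / (x @ w)

def rhs(t, Y):
    y, D = Y[:n], Y[n]           # grid values and the debris count
    chopped = f * y              # chop rate density at each size
    return np.append(-chopped + K @ chopped, m * np.sum(chopped) * dx)

Y0 = np.zeros(n + 1)
Y0[n - 1] = M / dx               # delta at x = L, lumped into the last cell
sol = solve_ivp(rhs, (0, 15), Y0, rtol=1e-8, atol=1e-10)

# total mass: main distribution plus debris (length lam per debris piece)
mass = x @ sol.y[:n, :] * dx + lam * sol.y[n, :]
print(mass[0], mass[-1])         # both ~ L*M = 100
```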

Fig. 6

The evolution of the volume fraction attributed to debris particles, \(\lambda y(\lambda ,t)\), with time given by the numerical solution to the one-dimensional continuous model (42)–(46). Here, \(\mu = 0.02\), \(\lambda = 0.0025\), \(m=3\), \(a=1\), \(k=1\), \(\nu =2\mu +m\lambda =0.0475\), \(L=1\) and \(M=100\)

5.4 Extension to three dimensions with debris

We can extend our one-dimensional continuous debris model to three dimensions in a similar manner to Sect. 3, by treating the particles as spheres and allowing x to denote the particle diameter.

The analogues of Eqs. (42)–(46) are

$$\begin{aligned}&\frac{\partial y(x,t)}{\partial t} = -f(x)y(x,t) + \int _{\root 3 \of {x^3+\mu ^3+m\lambda ^3}}^{\infty } f(s)y(s,t)h(s)\, \mathrm {d}s, \quad x\ge \mu , \end{aligned}$$
$$\begin{aligned}&\frac{\partial y(\lambda ,t)}{\partial t} = \int _{\root 3 \of {2\mu ^3+m\lambda ^3}}^{\infty } m f(s)y(s,t)\, \mathrm {d}s, \end{aligned}$$
$$\begin{aligned}&h(s) = \frac{4(s^3 - m\lambda ^3)}{(s^3-\mu ^3-m\lambda ^3)^{\frac{4}{3}} - \mu ^4}, \end{aligned}$$
$$\begin{aligned}&f(x) = 0, \quad x<\root 3 \of {2\mu ^3+m\lambda ^3}. \end{aligned}$$

Conservation of mass for the initial data is now expressed as

$$\begin{aligned} \int _0^{\infty } x^3 y(x,t)\, \mathrm {d}x = L^3 M. \end{aligned}$$
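Because h is independent of the fragment diameter x, its normalization can be checked directly: integrating \(x^3 h(s)\) over the allowed fragment range should return the parent volume minus the debris volume (in diameter-cubed units). The following sketch (Python with SciPy; parameter values borrowed from Fig. 7, with an arbitrary choppable parent diameter) verifies this numerically.

```python
import numpy as np
from scipy.integrate import quad

# Consistency check on the fragment kernel h(s).
mu, lam, m = 0.25, 0.035, 900                 # values (mm) from Fig. 7
s = 3.0                                       # a choppable parent diameter

b = (s**3 - mu**3 - m * lam**3) ** (1 / 3)    # largest allowed fragment diameter
h = 4 * (s**3 - m * lam**3) / (b**4 - mu**4)  # the kernel, constant in x

# expected total fragment volume (diameter-cubed units) from one chop
vol, _ = quad(lambda x: x**3 * h, mu, b)
print(vol, s**3 - m * lam**3)                 # the two values agree
```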

5.4.1 Numerical solution and comparison with experiments

In order to make our comparison as accurate as possible, we must choose a minimum particle diameter \(\mu \), and we do this approximately by taking it to be the lowest point of the trough between the two peaks in the experimental data shown in Fig. 4, which gives a minimum particle diameter of 0.2543 mm at 7000 RPM. Our model represents all of the debris as pieces of diameter \(\lambda \). However, in order to visualize the resulting debris peak, we introduce a normal distribution curve centred at \(\lambda \) to represent the volume attributed to debris. For our simulation, we choose \(\lambda = 0.12\) mm and \(m=100\). We define our likelihood function f(x) to be

$$\begin{aligned} f(x) = {\left\{ \begin{array}{ll} 0, &{} x < \root 3 \of {2\mu ^3 + m\lambda ^3}, \\ a\left( x - \root 3 \of {2\mu ^3 + m\lambda ^3}\right) , \quad &{} x \ge \root 3 \of {2\mu ^3 + m\lambda ^3}. \end{array}\right. } \end{aligned}$$

This likelihood function captures the fact that any particles of size \(x<\root 3 \of {2\mu ^3 + m\lambda ^3}\) cannot be chopped, and assumes that the probability of chopping a larger particle follows a linear relationship with its diameter.

As this is a proof-of-concept model, our main goal is to predict the qualitative behaviour of the particle distribution correctly. Figure 7 demonstrates that our model predicts the shape of the distribution well at one particular time, even with crude parameter estimates. While this model does not yet correctly explain the detailed quantitative behaviour of the distribution, this could possibly be obtained by choosing \(\lambda \) and m optimally. Further model improvements may also be needed, such as allowing the number of debris pieces created per chop, m, to depend on the size of the particle being chopped; for example, larger particles may produce more debris when chopped than smaller particles.

Fig. 7

Comparison between a the experimentally observed particle volume distribution after 50 s of blending at 7000 RPM, and b the prediction according to the three-dimensional debris model (47)–(51). Here \(\mu =0.25\) mm, \(\lambda =0.035\) mm, \(m=900\), \(a=0.07\), \(k=1\), \(M = 284\) and \(L = 6\)

6 Conclusions

The behaviour of particles in a blender has been examined using a simple model of chopping. We began by proposing a discrete model based on Smoluchowski theory. We assumed that the mixing process of the blender led to a homogeneous distribution of food pieces in space, so that our model did not need to be spatially dependent. We used the fact that the typical number of different particle sizes available is large to derive a continuum description, from which, in ideal cases, an analytical solution was found to exist. When compared with experimental data, the analytical solution agreed remarkably well and provided useful scaling laws for the behaviour. A key feature emerged that was not captured by the simple model, namely the appearance of a second peak in the particle size distribution. This second peak was attributed to the accumulation of a large number of extremely small particles (debris). The simple model was modified so that debris, too small to be subsequently chopped, is created whenever a particle is chopped. The modified model captures the full particle size distribution following blending and supports our modelling assumptions, in particular the spatial uniformity of the distribution of particles.

The model provides key insight into the behaviour within a blender. For instance, the model may easily be interrogated to determine trends and possible design improvements. The model also bypasses the need to perform many costly and time-consuming experiments to determine the distribution of particle sizes in a given blending process. A key next step in the model development would be to take data obtained at different times during the blending process, which could be used to validate the predicted time evolution of the distribution. In all of the models presented here, we assumed that the chopping process remained constant in time. In practice, we might expect the chopping rate to vary with time, for example due to the change in viscosity of the fluid as the number of smaller particles in the mixture increases. To capture such additional phenomena, more detailed modelling would be required, particularly of the particle behaviour near the blades and of how viscosity changes might allow particles to avoid being chopped. This would most likely involve computational fluid dynamics simulations to understand the large-scale flow patterns created in the blender.

Other generalizations of the work considered here could account for scenarios in which food pieces are inserted at a constant rate and removed when suitably blended, so that eventually a steady state is attained. The modelling framework outlined here also applies to arbitrary initial conditions, and so may be used directly to assess the resulting particle size distributions following chopping of a range of different initial mixtures. Such analysis would be useful in determining the value of pre-chopping the food before it is inserted into the blender.

Nevertheless, we envisage the results of such sophisticated studies providing inputs that accurately determine the parameters within our model framework, such as the likelihood function. The model presented here provides an overview of where such focused future studies would be most beneficial, as we move towards a comprehensive model of blender behaviour.