Dynamics of particle chopping in blenders and food processors
Mathematical models are developed to explain the size distribution of particles in blenders and to give insight into the behaviour of possible blender designs. The initial models consider idealized, simplified situations: first the chopping of long thin particles and then of spheres. The models are first presented using the idea of chopping at discrete places, and are then extended to account for chopping at any point via a continuous model. Some of the simple models can be solved analytically, while others require numerical calculations. Comparisons of the predictions from the various models with experimental data at a fixed time are presented and show that the models account for much of the behaviour. The initial models do not, however, predict the large number of extremely small particles (debris) observed at the end of the blending process. The models are therefore modified using simple extensions to account for additional effects, and numerical solutions of these models are compared with the observed data. The theory should provide a useful tool that reduces the need to perform costly and time-consuming experiments when investigating how a particular food will be blended.
Keywords: Chopping · Discrete-to-continuum models · Similarity solutions · Smoluchowski theory
1 Introduction

Most households have a blender or a food processor, which is commonly used to turn fruit and vegetables into smoothies, drinks, sauces and dips. These blenders chop and shred a variety of ingredients to produce a purée in which the material has been broken into very small particles suspended in a liquid, usually water. They use various blade systems, composed of multiple blades inclined at various angles, and operate at extremely high speeds to chop and mix the ingredients.
There is significant experimental interest in understanding how blade design and container shape can be tailored to create optimal purées. There is, however, very little literature examining the underlying fluid and particle dynamics to assist in identifying how such optima may be achieved. In addition, one consumer criticism of existing blender designs is the noise that is generated due to the high speed of blade rotation that is currently used to adequately blend the ingredients in a timely manner. Identifying how such speeds might be reduced while still creating optimal conditions is thus of great interest.
The quality of the purée is characterized by the particle size distribution and summarized by the mean particle size, with homogeneous mixtures consisting of small particles being preferred. Operating parameters that may contribute to modifying the particle distribution include blade speed, shape and sharpness, along with properties of the container, such as the shape and the inclusion of baffles on the inner walls.
When attempting to understand the physics of food blending, computational fluid dynamics (CFD) is the de facto approach. Current CFD packages are able to model fluid-flow situations for the prediction of heat, mass and momentum transfer and for optimal design in a variety of food processes. Recent advances in computer processor speeds mean that CFD packages are able to predict the resulting mixing process given an initial configuration, reducing the need to perform batches of experiments. However, such simulations remain a step away from being able to perform the comprehensive parameter sweeps that are required to determine the optimum operating regimes. Furthermore, the predominant use of CFD is in mixing and segregation processes rather than in chopping (see, for example, [2, 3]). Other work, such as studies in the bread industry, has placed an emphasis on examining the effect of the rheology of the substance on the mixing process. A third area of research concerns the mechanics of an individual cut, in particular the relationship between the force exerted during a cut and the resulting sliced product.
In this paper, we turn our attention away from the mixing mechanism and towards the chopping process, asking the question: how do the food pieces placed into a blender get chopped to make a smoothie? To the best of our knowledge, a mathematical theory for the chopping process in a food blender has not been proposed. We take a different approach from the computationally heavy methodologies described above, deriving a simplified mathematical model from which we can extract scaling laws that ultimately allow us to predict how the operating conditions affect the chopping process. The resulting theories reduce the need to perform many costly and time-consuming experiments to determine how a particular mixture will be chopped over time, and thus will ultimately provide guidance on how to design blenders to achieve a desired final distribution of particle sizes.
While the chopping of food in blenders has received little mathematical attention, techniques for modelling dissociation (and association) processes are prevalent within the literature. For a broad study of such population dynamics models, see, for example, . Such theories are used to describe, for example, the formation of aerosols [7, 8], colloidal aggregates, polymers and the large-scale interactions of celestial bodies [11, 12]. A cornerstone of the aggregation and breakdown kinetics literature is Becker–Döring theory, which describes the process of aggregation or dissociation by the stepwise loss or gain of the individual elements that are assumed to comprise an aggregate. A key use of the Becker–Döring theory is in the formation and dissociation of micelles in surfactant systems; micelles are large aggregates composed of many individual surfactant particles, or monomers [14, 15]. Smoluchowski theory generalizes the ideas of the Becker–Döring models by allowing the merging of any two aggregates and, conversely, the disintegration of any species into two arbitrarily sized aggregates. Both Becker–Döring and Smoluchowski theories track the time evolution of the number of aggregates of any given discrete size. In many instances, the range of aggregate sizes may be large, and so a continuum theory, in which a continuous variable is assigned to the aggregate size, is more appropriate (see, for example, ). In this case a size distribution of aggregates is tracked; such theories allow for more efficient numerical computation. In other instances, a mean-field approach is more beneficial. The recent explosion in the availability of data also allows techniques such as partition-valued Markov chains to be used; these exhibit a scalability that is lacking in analogous continuous models, and have been successfully employed to model genetic sequences.
We consider the blending process as a compromise between chopping, which makes particles smaller, and mixing, which makes the mixture homogeneous. This paper concentrates on the chopping aspect of a blender, and describes a simple mathematical model that captures the behaviour of solid particles within a fluid as they are randomly and continuously chopped. The distribution of food pieces in a blended product is usually described by attributing a single number, which characterizes its size, to each piece. For long items, such as carrot or celery sticks, the length of each stick provides a suitable metric for characterizing the pieces. Other foods, such as berries or carrot cubes, are more accurately represented by associating a typical volume or diameter to each piece.
We first propose a model for the chopping of long thin particles. This can be thought of as a one-dimensional problem, where we track the length of each piece (Sect. 2) and present both analytical and numerical results for this. A model is then presented to address the chopping of food pieces that are characterized more appropriately by their volume (or an effective diameter) (Sect. 3), and a similar analysis is conducted. We compare the predictions with experimental data and use this to improve our model in Sect. 4. We conclude our analysis in Sect. 5 by generalizing our model to include a minimum particle size that can be chopped by the blades to provide a more accurate prediction of the resulting particle distribution.
2 Models of chopping one-dimensional particles
Motivated by the chopping of long slender objects, such as carrots, we begin by considering a piece of food that is randomly and continuously chopped, producing smaller pieces, each of which is defined only by its length. By studying the time evolution of the distribution of the number of pieces of each size, we gain insight into what to expect when considering the action of a blender. We start by assuming that the line can only be chopped at a finite, but large, set of discrete points; hence, we may consider the entire line to be made up of very small sub-lines. We first develop a model using this discrete version of chopping, and then take the limit of the process to allow chopping at any point, thereby generating a continuous model of chopping. We will consider beginning with several pieces of food. Our aim is to introduce the notation and the ideas in the simple context of one dimension before presenting a model of more complex particles.
2.1 The discrete-size model
We first consider a single chop of the blender blade. We assume that the probability that a piece of size i is chopped is G(i). This functional dependence allows for the possibility of, for example, a larger piece being more likely to be chopped than smaller pieces. Note that by assuming G depends only on i, the chance of a piece being chopped depends only on its size and not, for example, on its position or orientation; hence, we are assuming perfect mixing of the contents. We also assume that this probability does not vary with time. However, our methodology readily generalizes to capture such behaviour, and we discuss the possibility of time-dependent probabilities in the Conclusions.
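This chop rule can be illustrated stochastically. The sketch below is written in Python for illustration, and makes two assumptions not fixed by the text above: the likelihood G(i) is taken proportional to the piece's size, and the chop site is uniformly distributed along the piece.

```python
import random

def chop_once(pieces, rng=random):
    """Perform one chop: choose a piece with probability proportional to its
    size (an illustrative choice of G(i)), then split it at a uniformly
    random internal point into two smaller integer sizes."""
    choppable = [q for q in pieces if q >= 2]   # size-1 pieces cannot be chopped
    if not choppable:
        return pieces
    chosen = rng.choices(choppable, weights=choppable, k=1)[0]
    pieces.remove(chosen)
    cut = rng.randrange(1, chosen)              # chop site, 1 <= cut <= chosen - 1
    pieces.extend([cut, chosen - cut])
    return pieces

pieces = [100]                 # start from a single piece of size N = 100
for _ in range(50):
    chop_once(pieces)
assert sum(pieces) == 100      # every chop conserves total length
assert len(pieces) == 51       # every chop adds exactly one piece
```

Since each chop conserves length and adds one piece, repeated sampling of such trajectories gives a Monte Carlo estimate of the size distribution that the rate equations below describe deterministically.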
Equation (3) belongs to a subset of the generalized Smoluchowski theory, which models the agglomeration and disintegration of aggregates, each composed of a given number of distinct entities. Here, we include only the disintegration component, which captures the chopping process, since we assume that pieces never recombine. The Smoluchowski equations are a system of coupled nonlinear ordinary differential equations (ODEs) that describe the evolution of the number density of each piece size, \(Y_i(t)\). The general Smoluchowski equations have been shown to be well-posed, and so our subset model is also well-posed.
2.1.1 Numerical solution of the discrete-size problem
We can solve the discrete model (1)–(3) numerically using MATLAB, and hence visualize the evolution of the particle distribution for a particular likelihood function (Fig. 1). We must choose a reasonable likelihood function, F(i), to describe the rate of chopping a piece of size iL/N. We might expect larger pieces to be more likely to be chopped than smaller pieces; a simple model of this is to take the likelihood of chopping to be directly proportional to a piece's length, namely \(F(i) = ai\), where a is a positive constant that captures other general contributing factors.
The number-density distribution we observe from Fig. 1a initially has a spike of height \(M=100\) at \(i=N\), corresponding to the initial condition (1). As time progresses, the total number of pieces in the system increases, while the size of a typical piece decreases. We notice that there are considerably more tiny pieces than large pieces; however, it is not obvious which particles account for the majority of the material (i.e. the length) in the system. The length-density distribution shown in Fig. 1b is a single-peaked graph which, with time, moves from right to left, while growing taller and narrower. From this, we observe how the length fraction attributed to each particle size changes with time, while the total length in the system remains constant (demonstrated by the constant area under this curve as time evolves). We can then deduce which size of particle accounts for the majority of the length.
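Equations (1)–(3) are not reproduced in this excerpt, so the sketch below assumes the standard random-scission form consistent with \(F(i) = ai\): each of a piece's \(i-1\) internal chop sites is cut at rate a, so \(\mathrm{d}Y_i/\mathrm{d}t = -a(i-1)Y_i + 2a\sum_{j>i} Y_j\). A forward-Euler integration in Python (substituted here for the MATLAB computation) illustrates the conservation of total length noted above.

```python
# Assumed random-scission equations (the paper's (1)-(3) are not shown here):
#   dY_i/dt = -a*(i - 1)*Y_i + 2a * sum_{j > i} Y_j.
N, M, a = 50, 100.0, 1.0       # N sub-lines per piece, M initial pieces
Y = [0.0] * (N + 1)            # Y[i] = number of pieces of size i
Y[N] = M                       # initial condition: all pieces have size N
dt, steps = 1e-4, 2000         # forward Euler up to t = 0.2

for _ in range(steps):
    tail = 0.0                 # running value of sum_{j > i} Y[j]
    dY = [0.0] * (N + 1)
    for i in range(N, 0, -1):
        dY[i] = -a * (i - 1) * Y[i] + 2.0 * a * tail
        tail += Y[i]
    for i in range(1, N + 1):
        Y[i] += dt * dY[i]

total_length = sum(i * Y[i] for i in range(1, N + 1))
total_pieces = sum(Y[1:])
assert abs(total_length - N * M) < 1e-6 * N * M   # total length conserved
assert total_pieces > M                           # chopping creates pieces
```

The loss and gain terms cancel exactly in the first moment \(\sum_i i Y_i\), which is why the total length (the area under the length-density curve in Fig. 1b) stays constant while the number of pieces grows.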
2.2 The continuous-size model
2.2.1 Analytical solution
2.2.2 Comparison of results
The continuous model (10) and (11) can be solved by discretizing in x and using the MATLAB function ode45 (i.e. using the method of lines). Note that if a uniform discretization of x is used then this is equivalent to the discrete model. However, as seen in Fig. 1, the distributions tend to move toward smaller and smaller particles, and these can be computed most efficiently by considering a uniform discretization in the logarithm of x. This can be done by rewriting the model in the variable \(z=\log x\) and then using a uniform mesh in z. The predictions for both the number density and the length density agree well with the discrete model and so are not shown here.
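The logarithmic-mesh idea can be sketched as follows, in Python for illustration in place of MATLAB's ode45. Since equations (10)–(11) are not reproduced in this excerpt, the sketch assumes the continuous analogue of the discrete random-scission model, \(\partial n/\partial t = -axn + 2a\int_x^L n(y,t)\,\mathrm{d}y\), rewritten on a uniform mesh in \(z = \log x\).

```python
import math

# Assumed continuous linear-breakup model on a uniform mesh in z = log(x).
a, L, x_min, nz = 1.0, 1.0, 1e-3, 200
dz = (math.log(L) - math.log(x_min)) / (nz - 1)
x = [x_min * math.exp(k * dz) for k in range(nz)]

n = [0.0] * nz
n[-1] = 1.0 / (x[-1] ** 2 * dz)    # narrow peak at x = L carrying unit length

def step(n, dt):
    """One forward-Euler step; the tail sum approximates the integral of
    n from x to L, with dy = y dz on the logarithmic mesh."""
    tail, new = 0.0, [0.0] * nz
    for k in range(nz - 1, -1, -1):
        tail += n[k] * x[k] * dz
        new[k] = n[k] + dt * (-a * x[k] * n[k] + 2.0 * a * tail)
    return new

dt = 1e-3
for _ in range(500):               # integrate to t = 0.5
    n = step(n, dt)

length = sum(x[k] ** 2 * n[k] * dz for k in range(nz))   # integral of x*n dx
number = sum(x[k] * n[k] * dz for k in range(nz))        # integral of n dx
assert abs(length - L) < 0.05 * L     # total length approximately conserved
assert number > 1.0                   # chopping increases the piece count
```

The uniform z-mesh concentrates resolution at small x, exactly where the distribution migrates; a uniform x-mesh of the same size would waste most of its points on the large-x region the solution quickly vacates.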
3 Model extension: three dimensions
The models that we have derived so far hold for the chopping of food that is long and slender, so that its length forms an appropriate identifying metric. As discussed in the Introduction, in some cases the food pieces being chopped are more spherical in shape, and so are better described by their volume. For simplicity, we ignore any shape discrepancies and model all particles as spheres, imposing the assumption that when a spherical particle is chopped it produces two spherical particles while conserving volume. This is of course a considerable approximation, but it allows us to make significant progress, giving useful insight while avoiding excessive computational effort in following complicated changes in the geometry of the particles.
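Volume conservation alone fixes the diameters of the two child spheres: if a chop assigns a volume fraction f to one child, then \(d_1 = f^{1/3} d\) and \(d_2 = (1-f)^{1/3} d\). A minimal Python check (the function name and the fraction parameter f are ours, for illustration):

```python
import math

def split_sphere(d, f):
    """Split a sphere of diameter d into two spheres carrying volume
    fractions f and 1 - f (0 < f < 1); volume conservation gives the
    child diameters d * f**(1/3) and d * (1 - f)**(1/3)."""
    return d * f ** (1.0 / 3.0), d * (1.0 - f) ** (1.0 / 3.0)

vol = lambda d: math.pi * d ** 3 / 6.0   # sphere volume from diameter

d1, d2 = split_sphere(6.0, 0.5)          # an equal-volume chop of a 6 mm piece
assert abs(vol(d1) + vol(d2) - vol(6.0)) < 1e-9   # volume is conserved
```

Note that an equal-volume chop reduces the diameter only by the factor \(2^{-1/3} \approx 0.79\), which is why tracking volume (rather than diameter) is the natural conserved quantity in three dimensions.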
We will now proceed in an identical manner to the one-dimensional case. We begin by studying a discrete distribution of sizes, but will find that our previous methodology does not naturally generalize. We then consider the continuous model, in which case, we find analytical results are possible.
3.1 The discrete-size model
3.2 The continuous-size model
3.2.1 Analytical solution
We also solve the continuous three-dimensional model numerically by discretizing in x and then using the MATLAB function ode45; the results agree very closely with those obtained analytically, with similar convergence to the one-dimensional case, and we therefore do not include them here.
4 Experimental data
A series of experiments was performed to collect data that could be compared with the model predictions, enabling the models to be further refined and the behaviour to be interpreted.
4.1 Experimental methodology
In this section, we compare our model predictions with experimental data. Experiments were performed using a 24 oz single-serve cup and a Nutri Ninja® Pro Extractor Blade. All experiments were carried out using a mixture of carrots and water. Carrot tops were removed and the carrots were split in half; the halves were mixed randomly in order to minimize any effect of physical variation between carrots. The carrots were then cut into approximately 6 mm cubes, and an initial mixture, composed of 425 g of water and 283 g of carrot, was put in the blender. The blender was then operated at rotation speeds controlled via a Hall-effect sensor integrated into the motor controller. After a fixed blending time of 50 s, a sample of the mixture was taken, diluted with water, and the resulting size distribution measured using a Malvern MS3000 Laser Analysis Unit with a Malvern HydroLV sampling unit. The process was repeated 15 times to ensure repeatability.
4.2 Comparison with analytical solution
5 Debris and a minimum particle size
We notice from Fig. 4 that a significant discrepancy between our simple model and the experimental data is the leftmost peak visible in the distribution. We speculate that this is a consequence of debris produced when a particle is chopped. It is likely that when a particle is chopped into two particles there is additionally some residue which could consist of juice, broken fruit cells or fragments of fruit which are too small to be considered as particles. This leads us to the natural extension of including the debris in our model. We consider all the debris to consist of very small particles of a given size. We shall presume debris is created whenever a larger particle is chopped and that this debris cannot be chopped further. However, a consequence of such an assumption is that we will need to define the smallest particle size that can be chopped.
5.1 The one-dimensional discrete-size model with debris
To summarize the notation of the one-dimensional discrete-size model in Sect. 2.1, we supposed that a piece of length L could be chopped in any one of \(N-1\) places to form two pieces of length iL/N (or size i) and \((N-i)L/N\) (size \(N-i\)). Within this framework the smallest piece that can be formed is of length L/N (that is, size 1). We now adjust our model so that each time a chop takes place, along with the division into two distinct pieces we also generate m pieces of debris, of size 1. However, we also assume that the smallest piece that can be formed from a chop that is not debris is of length pL/N (size p), where \(p>1\) is an integer, so that we create a distinction between chopped pieces (\(i \ge p\)) and the debris (\(i=1\)). Within this framework, we conclude that the smallest piece that can be chopped must be of size at least \(2p+m\), since chopping anything smaller cannot create two of the smallest possible pieces and the associated debris.
Note that the minimum piece length, pL/N, debris length, L/N, and amount of debris, m, created at each chop may all depend on operating parameters, such as the speed of the chop. However, here we concentrate on trying to understand the general behaviour of the distribution created by such a model.
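The chop-with-debris rule above can be sketched stochastically in Python. Two details are our assumptions for illustration only: the piece to chop is selected uniformly among choppable pieces, and the chop site is uniformly distributed among the admissible positions.

```python
import random

def chop_with_debris(pieces, p, m, rng=random):
    """One chop of the debris model: a piece of size i >= 2p + m splits into
    two pieces of sizes j and i - j - m (each >= p) plus m debris pieces of
    size 1, so the total size i is conserved. Returns False if no piece in
    the list is large enough to chop."""
    choppable = [q for q in pieces if q >= 2 * p + m]
    if not choppable:
        return False
    i = rng.choice(choppable)
    pieces.remove(i)
    j = rng.randrange(p, i - m - p + 1)        # j >= p and i - j - m >= p
    pieces.extend([j, i - j - m] + [1] * m)
    return True

p, m = 3, 2                                    # illustrative parameter values
pieces = [100]
while chop_with_debris(pieces, p, m):
    pass                                       # chop until nothing can be chopped
assert sum(pieces) == 100                      # size is conserved at every chop
assert all(q == 1 or q >= p for q in pieces)   # only debris falls below size p
```

At termination, every remaining non-debris piece has size between p and \(2p+m-1\): too small to chop, yet distinct from the size-1 debris, reproducing the two-population structure that motivates the second peak in the observed distribution.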
5.2 The one-dimensional continuous debris model
5.3 Numerical solution of the one-dimensional continuous debris model
5.4 Extension to three dimensions with debris
We can extend our one-dimensional continuous debris model to three dimensions in a similar manner to Sect. 3, by treating the particles as spheres and allowing x to denote the particle diameter.
5.4.1 Numerical solution and comparison with experiments
6 Conclusions

The behaviour of particles in a blender has been examined using a simple model of chopping. We began by proposing a discrete model based on Smoluchowski theory. We assumed that the mixing process of the blender led to a homogeneous distribution of food pieces in space, so that our model did not need to be spatially dependent. We used the fact that the typical number of different particle sizes available is large to derive a continuum description, from which, in ideal cases, an analytical solution was found to exist. When compared with experimental data, the analytical solution agreed remarkably well and provides useful scaling laws for the behaviour. A key feature emerged that was not captured by the simple model, namely the appearance of a second peak in the particle size distribution. This second peak was attributed to the accumulation of a large number of extremely small particles (debris). The simple model was modified so that debris, too small to be subsequently chopped, is created whenever a particle is chopped. The modified model is able to capture the full particle size distribution following blending and supported our modelling assumptions, in particular the spatial uniformity in the distribution of particles.
The model provides key insight into the behaviour within a blender. For instance, the model may easily be interrogated to determine trends and possible design improvements. The model also bypasses the need to do many costly and time-consuming experiments to determine the distribution of particle sizes in a given blending process. A key next step in the model development would be to take data obtained at different times during the blending process, which could be used to validate the predicted time evolution of the distribution. In all of the models presented here, we assumed that the chopping process remained constant with time. In practice, we might expect that the chopping rate varies with time, for example due to the change in viscosity of the fluid as the number of smaller particles in the mixture increases. To capture such additional phenomena, more detailed modelling would be required, particularly of the particle behaviour near the blades and how the viscosity changes might allow the particles to avoid being chopped. This would most likely involve the use of computational fluid dynamics simulations to understand the large-scale flow patterns created in the blender.
Other generalizations of the work considered here could account for scenarios in which food pieces are inserted at a constant rate and removed when suitably blended, so that eventually a steady state is attained. The modelling framework that we have outlined here also applies to any initial conditions applied, and so may be directly used to assess resulting particle size distributions following chopping of a range of different initial mixtures. Such analysis would be useful in determining the value in pre-chopping the food before it is inserted into the blender.
Nevertheless, we envisage the results of such sophisticated studies being inputs that accurately determine the parameters within our model framework, such as the likelihood function. The model presented here provides an overview of where such future focused studies would be most beneficial, as we move towards a comprehensive model of blender behaviour.
This publication is based on work supported by the EPSRC Centre for Doctoral Training in Industrially Focused Mathematical Modelling (EP/L015803/1) in collaboration with Shark Ninja. I.M.G. gratefully acknowledges support from the Royal Society through a University Research Fellowship.
- 5. Zhou D, McMurray G (2011) Slicing cuts on food materials using robotic-controlled razor blade. Model Simul Eng 469262
- 7. Drake RL, Hidy GM, Brock JR (eds) (1972) Topics in current aerosol research, vol 3. Pergamon Press, New York
- 16. Smoluchowski V (1917) Mathematical theory of the kinetics of the coagulation of colloidal solutions. Phys Chem 92:129–68
- 19. Elliott L, Teh YW (2012) Scalable imputation of genetic data with a discrete fragmentation–coagulation process. Adv Neural Inf Process Syst 2852–2860
- 25. Wood-Lee M (2016) Private communication
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.