Depth from Defocus Technique: A Simple Calibration-Free Approach for Dispersion Size Measurement

Particle size measurement is crucial in various applications, be it sizing droplets in inkjet printing or respiratory events, tracking particulate ejection in hypersonic impacts, or detecting floating target markers in free-surface flows. Such systems are characterised by extracting quantitative information, such as the size, position, velocity and number density of the dispersed particles, which is typically non-trivial. Existing methods like phase Doppler or digital holography offer precise estimates at the expense of complicated systems demanding significant expertise. We present a novel volumetric measurement approach for estimating the size and position of dispersed spherical particles that utilises a unique 'Depth from Defocus' (DFD) technique with a single camera. The calibration-free sizing enables in-situ examination of hard-to-measure systems, including naturally occurring phenomena like pathogenic aerosols, pollen dispersion or raindrops. The efficacy of the technique is demonstrated for diverse sparse dispersions, including dots, glass beads, spray droplets, and pollen grains. The simple optical configuration and semi-autonomous calibration procedure make the method readily deployable and accessible across a broad range of research applications.


Introduction
Dispersions are heterogeneous mixtures of particles dispersed within a continuous phase, whereby the term 'particle' can refer to particles of any phase, e.g. drops/aerosols, bubbles or solid particles. These particulate systems are omnipresent and bear significance in numerous natural and practical applications. For instance, in industrial settings, the size, location and velocity of atomized fuel droplets are crucial for evaporation, rapid ignition and achieving higher efficiency of combustion-based engines. Parallel examples apply to the pharmaceutical, food, agriculture, energy, and automobile industries. Understanding the transport mechanism of toxic dispersions, such as contagious aerosol droplets, dust, or microplastics, is crucial for health care and environmental sciences, since this transport is strongly dependent on particle size. From a biological perspective, entities such as pollen, blood cells, vesicles, or microorganisms possess characteristics that depend on their size. The list is endless, but in summary, the need to characterise the size, position and velocity of dispersed particles in a mixture is ubiquitous. Knowing such information then also allows concentration and flux to be measured.
Among the numerous alternatives for performing such measurements, optical methods are of particular interest, as they are non-intrusive. Optical techniques are usually characterised as pointwise, planar or volumetric and are based on various principles, such as interferometry (e.g. phase Doppler, holography, laser diffraction, ILIDS/IPI, etc.), time shift, or direct imaging [1]. However, pointwise or planar methods are tedious to deploy when volumetric information is required, for two reasons. First, the measurement point or plane must be traversed throughout the flow field, necessitating repeated measurements and demanding steady flow conditions during the entire measurement procedure. Furthermore, the measurement volume is seldom known exactly, making a quantitative computation of global volumetric distributions difficult. Holography offers a volumetric measurement and, furthermore, inline holography is optically quite simple to realize. Nevertheless, holography does involve considerable computational effort, lengthening the processing time.
Direct imaging techniques provide a potential solution, as they allow for high spatiotemporal resolution combined with simple experimental configurations. Shadow imaging is one such favourable configuration, suitable for distinguishing the particulate content from the continuous phase; furthermore, it is easy to set up and adjust [2]. However, delineating the observation volume is difficult with such approaches. As the particle moves out of focus away from the object plane (Fig. 1a), projected geometric features become blurred and the apparent size appears to increase. Hence, most early implementations of direct imaging involved only the measurement of particles in focus and rejection of the blurred projections based on grey-level intensity [3], gradient [4,5] or contrast-based criteria [6]. In many applications, near-focus instances occur less often, resulting in a small sample size and consequently increasing the statistical uncertainty of the measurement. Moreover, smaller particles tend to blur more rapidly with increasing distance from the object plane, reaching beyond the detection limit faster than larger particles. This leads to an intrinsic bias in evaluating the size distribution using arithmetic averaging, by overweighting the occurrence of larger particles.
These drawbacks can be mitigated by volumetric methods in which the blurring of out-of-focus particles is utilized to determine not only size, but also position through the degree of blurring. Such methods are known as Depth from Defocus (DFD) approaches, first introduced in the context of general imaging systems [7,8]. Several extensions were then proposed, which can be broadly classified into single- or two-image approaches. The single-image approach is realised through special apertures [9][10][11], lenses [12] or active illumination [13]. Another approach is to employ image processing algorithms based on the concept of deconvolution [14,15], normalised contrast [16,17], circle of confusion [18] or even machine learning [19][20][21]. Some of these methods offer both size and depth estimation, albeit with an ambiguity in the depth direction, as blurring is symmetric across the object plane. Furthermore, these methods usually require a lengthy calibration procedure.
The two-image DFD approach involves acquiring images at different degrees of blur (out of focus). This can be realised with a single camera by capturing sequential images after changing the parameters of the optical system [22] or by using coloured illumination with suitable filters [23]. Alternatively, two cameras and a beam splitter can be deployed to obtain simultaneous images, each with a different degree of focus. Recent developments of the two-camera DFD [24][25][26] enable reliable measurement of size and depth using images from two cameras whose object planes have a prescribed spacing. These images are processed using functions determined from the calibration procedure, requiring a series of target dot images of known size moved along the optical axis at known depths. Unlike other methods, this DFD approach enables the precise estimation of the measurement volume (or more precisely, the detection volume), which varies with particle size. The theoretical formulation of this two-camera DFD [26] lays the foundation for the present newly proposed technique, using only one camera for the measurement of spherical particles.
The underlying principle of the proposed technique is illustrated in Fig. 1b. When the dispersed particle of interest is located on the object plane of the lens, a focused image with distinct features is obtained. However, as the particle is displaced along the depth axis away from this plane, blurring occurs, resulting in smoother features and lower intensity gradients. Another parameter of interest is the thresholded radius of the particle, which decreases as the depth increases. The earlier DFD approaches employed two experimental calibration functions, utilizing the radius information obtained from two cameras for analysis. Since the actual size and position of a particle also influence the gradient magnitude of its projection, this gradient magnitude is utilized in the single-camera approach proposed here. The approach aims to determine the size and depth of a particle using the thresholded radius and gradient magnitude extracted from a single image at a reference intensity (chosen here as 0.5). This is achieved using analytical calibration functions.

Theoretical Analysis
This novel implementation of a single-image DFD relies on a theoretical description of the image blurring as a function of the position of the dispersed particle with respect to the object plane of the system. How this description is implemented into the calibration and into a practical measurement procedure is summarized graphically in Fig. 2. All image processing is performed on a normalized grayscale, where the intensity values are scaled between 0 and 1. This normalization step and the image processing algorithm are described in more detail in the Materials and Methods (Section 3).

Blurred Image Formation
The image projection onto a camera sensor can be described using simple ray optics, as illustrated in Fig. 1a. When a particle is on the object plane at a distance u_o from the lens, a focused image is formed at the imaging plane, located at a distance s from the lens. However, when the particle is displaced to a distance |∆z| from the object plane, the focused image shifts to a different plane, causing a blurred image projection on the sensor. This blurred image can be described by a convolution of the focused image (i_f) of a particle (image size d_o) with a blurring kernel (h) [16,24]. The intensity g_t at any location ⃗r_t is then evaluated as (Fig. 3a):

g_t(⃗r_t) = ∫∫ i_f(⃗r) h(⃗r − ⃗r_t) d⃗r    (1)

Here i_f(r) is a normalized intensity image of a particle of radius r_o = d_o/2 on the image plane in polar coordinates:

i_f(r) = 1 for r ≤ r_o,  i_f(r) = 0 for r > r_o    (2)

The particle dimension on the image plane d_o is related to the actual size d_p as d_o = M d_p, where M is the magnification of the optical system. The blur kernel h(r_h) can be represented using a Gaussian profile with σ as the standard deviation:

h(r_h) = (1/(2πσ²)) exp(−r_h²/(2σ²))    (3)

where ⃗r_h = ⃗r − ⃗r_t. Therefore, the two-dimensional convolution Eq. (1) can be written as:

g_t(r_t) = (1/σ²) ∫₀^{r_o} i_f(r) exp(−(r² + r_t²)/(2σ²)) I₀(r r_t/σ²) r dr    (4)

The standard deviation σ represents the degree of blur, or the size of the blur kernel, which can be expressed as [24]

σ = A (D M/f) |∆z| = β |∆z|    (5)

where A is an experimental constant for the imaging system, D is the aperture diameter, f is the focal length and ∆z is the distance of the particle from the object plane (see Fig. 1a). As A, D, M and f are invariant for a given DFD measurement system, these terms are replaced with a single constant β. Ultimately, the resolution of imaging systems is limited by diffraction, and the smallest possible point spread function (PSF) is associated with the formation of the Airy disk. This limits the contour sharpness when in focus, i.e. σ ≠ 0 at ∆z = 0. However, for the present system parameters, this diffraction limitation is negligible, and other factors are more prominent, as discussed in detail in Appendix A.
The solution of the convolution integral equation, Eq. (4), is obtained by nondimensionalisation of the variables with the particle image diameter as

ρ = r/d_o,  ρ̃_t = r_t/d_o,  σ̃ = σ/d_o    (6)

Here we use σ̃ as a parameter to represent the dimensionless depth from the object plane; refer to Eq. (5). Using these substitutions in Eq. (4), the reduced dimensionless form is obtained as

g_t(ρ̃_t) = (1/σ̃²) ∫₀^{1/2} exp(−(ρ² + ρ̃_t²)/(2σ̃²)) I₀(ρρ̃_t/σ̃²) ρ dρ    (7)

where I₀ is the zeroth-order modified Bessel function of the first kind. The dimensionless equation, Eq. (7), is the foundation for the analytical calibration functions, the solutions of which are numerically determined and are depicted in Figs. 3b and 4a. One must note that as σ̃ → 0, the argument of the Bessel function, and hence the function itself, blows up in Eq. (7). To obtain a solution in this region, the asymptotic estimate of the function as (ρρ̃_t/σ̃²) → ∞ is used [27]. Fig. 3b represents the variation of threshold radius with particle depth from the object plane for a specific threshold intensity. This solution also provides the foundation for the calibration curves used in the earlier two-camera DFD approach [26]. Fig. 4a represents the intensity distribution of blurred images in the radial direction at a specific depth.
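For readers wishing to reproduce these curves, Eq. (7) is straightforward to evaluate numerically. The following sketch (Python with SciPy for illustration; the study's own processing was implemented in MATLAB) uses the exponentially scaled Bessel function so that the integrand remains finite as σ̃ → 0, mirroring the asymptotic treatment mentioned above:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e

def g_t(rho_t, s):
    """Dimensionless blurred-disk intensity, Eq. (7): a shadow disk of
    dimensionless radius 1/2 convolved with a Gaussian kernel of
    dimensionless standard deviation s."""
    # exp(-(rho^2 + rho_t^2)/(2 s^2)) * I0(rho*rho_t/s^2) is rewritten with
    # the scaled Bessel function i0e(x) = exp(-|x|) * I0(x), so the exponent
    # collapses to -(rho - rho_t)^2/(2 s^2) and stays finite as s -> 0.
    integrand = lambda rho: (rho / s**2) * i0e(rho * rho_t / s**2) * \
        np.exp(-(rho - rho_t)**2 / (2 * s**2))
    return quad(integrand, 0.0, 0.5, limit=200)[0]
```

For small σ̃ the profile approaches the sharp disk (g_t ≈ 1 inside ρ̃_t < 0.5 and 0 outside), and increasing σ̃ smooths the edge, which is the behaviour plotted in Fig. 4a.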

Analytical Calibration Functions
From the single camera image, two quantities can be extracted: the radius (r_t) and the intensity gradient (∂g_t/∂r_t) at a reference intensity value g_t = 0.5. These parameters decrease with increasing depth of the particle from the object plane |∆z| (Fig. 1b), indicating the possibility of a gradient-based calibration function to estimate the degree of blur and hence, indirectly, the depth. This is confirmed in Fig. 4a by observing the intensity profiles for blurred particles at different depths, which exhibit different gradients at a reference intensity. Using an experimental image, we can only evaluate the radial intensity profiles, i.e., the r_t versus g_t variation rather than the dimensionless version shown in Fig. 4a, since d_o is unknown. Hence, we propose a novel measurable dimensionless radius:

R̃_t = r_t/(r_t)_{g_t=0.5}    (8)

where (r_t)_{g_t=0.5} is the radius at the reference intensity. The corresponding modified solution is depicted in Fig. 4b, and the proposed functional form of the calibration function based on the modified gradient at the reference intensity g_t = 0.5 is

G̃ = |∂g_t/∂R̃_t|_{g_t=0.5} = f_2(σ̃)    (9)

From this measurable dimensionless version of the intensity gradient, |∂g_t/∂R̃_t| = |r_t ∂g_t/∂r_t| at the reference intensity (the subscript g_t = 0.5 is omitted for brevity from now on), we can estimate the dimensionless depth σ̃. This calibration curve is shown in Fig. 5a. From the solution depicted in Fig. 3b, another required calibration function is directly obtained to estimate ρ̃_t from σ̃ at the reference intensity, represented in the functional form as

ρ̃_t = f_1(σ̃)    (10)

This calibration curve is illustrated in Fig. 5b. The input parameters for the analytical calibration functions f_1 and f_2 are conveniently measurable from the image. These functions can be further combined in the form ρ̃_t = f_1(f_2^{-1}(G̃)), as depicted in Fig. 6a.
Being dimensionless, these analytical functions are universal to optical systems that exhibit a Gaussian blurring of the circular particles, which makes this technique a powerful measurement tool. The measurement process based on these functions is explained in the next subsection.
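Since f_1 and f_2 are defined implicitly through Eq. (7), they can be tabulated by root finding and numerical differentiation. A minimal sketch (Python/SciPy for illustration; not the authors' MATLAB implementation):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import i0e

def g_t(rho_t, s):
    # Blurred-disk profile of Eq. (7); the scaled Bessel function i0e
    # keeps the integrand numerically stable for small s.
    f = lambda rho: (rho / s**2) * i0e(rho * rho_t / s**2) * \
        np.exp(-(rho - rho_t)**2 / (2 * s**2))
    return quad(f, 0.0, 0.5, limit=200)[0]

def f1(s):
    # Eq. (10): dimensionless threshold radius where g_t = 0.5.
    return brentq(lambda r: g_t(r, s) - 0.5, 1e-6, 0.6)

def f2(s, h=1e-4):
    # Eq. (9): dimensionless gradient rho_t * |dg_t/drho_t| at g_t = 0.5,
    # via a central finite difference at the threshold radius.
    r = f1(s)
    return r * abs(g_t(r + h, s) - g_t(r - h, s)) / (2 * h)
```

Evaluating these functions reproduces the limiting values quoted in the working-range discussion below, e.g. f_1 → 0.5 near focus and f_1 ≈ 0.3211, f_2 ≈ 0.2501 at σ̃_c = 0.35.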
On assessment of the radial intensity profiles, i.e., the ρ̃_t versus g_t curves with varying σ̃, the maximum slope values are found to occur at the intensity g_t ≈ 0.5 for most of the suitable working range (σ̃ ≤ 0.2) (see Fig. 6b). This intensity value at the location of maximum gradient magnitude, g_t = 0.5, is chosen as the reference location described earlier, making the gradient estimation less susceptible to noise. The gradient G̃ is estimated by considering the average magnitude within a thin strip whose edges are defined by the intensities (g_t ± δg_t) around the reference intensity (see Fig. 7a). This is necessary as the image is composed of pixels, and precise estimation at exactly the reference intensity is challenging. Moreover, noise manifests as pixel-level fluctuations, leading to sharp intensity variations and hence steep local gradient values. By ensuring that the base gradient values are maximal in the region of interest, these fluctuations have only a minor influence on the estimated average compared with the rest of the domain.
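On pixel data this strip average reduces to a few lines. A sketch (Python; g is a normalized intensity array, and delta is the strip half-width δg_t):

```python
import numpy as np

def strip_gradient(g, ref=0.5, delta=0.005):
    """Average gradient magnitude over the pixels whose intensity lies
    within (ref - delta, ref + delta), cf. Fig. 7a."""
    gy, gx = np.gradient(g)            # per-pixel intensity gradients
    mag = np.hypot(gx, gy)             # gradient magnitude field
    strip = np.abs(g - ref) < delta    # thin strip around the reference
    return mag[strip].mean()
```

On a linear intensity ramp the strip average equals the exact slope; on real particle images the strip width must be chosen against the pixel resolution so that the strip is not empty.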
The current analysis considers individual blurred particles, but in practical applications, particles often overlap when projected onto the image plane. This overlapping can result in a single indistinguishable, non-symmetric entity due to blurring. Appendix B includes a discussion on the particle concentration limit, which refers to the maximum degree to which closely packed particles can be distinguished. By solving the convolution equation specific to this case, it is deduced that particles with a spacing between their centres greater than 1.4 times the diameter will be distinguishable at all depths for the segmentation threshold of 0.4.
To the best of the authors' knowledge, this approach using both an intensity threshold and the gray level gradient for contour and size measurement is novel and a patent for this analytic approach has been filed.

Measurement Process
Size estimation: The size of the particles can be estimated based on the analytical calibration curves f_1 and f_2. First, the threshold radius r_t and gradient magnitude ∂g_t/∂r_t are evaluated at the reference intensity g_t = 0.5 from the particle image. The associated image processing is explained in the Materials and Methods (Section 3), consisting of aspects like image normalization, segmentation, and sub-pixel interpolation. These parameters are used to calculate the dimensionless gradient G̃ = |∂g_t/∂R̃_t| = |r_t ∂g_t/∂r_t|. From Eq. (9) the dimensionless depth σ̃ = f_2^{-1}(G̃) is obtained and substituted into Eq. (10) to evaluate the dimensionless radius ρ̃_t = f_1(σ̃). The size of the particle in the image plane, d_o, is then evaluated using the relation d_o = r_t/ρ̃_t.

[Fig. 6 (a) Combined calibration curve for the reference intensity g_t = 0.5. A steep variation of ρ̃_t with the gradient is observed in the blue shaded region, where σ̃ > 0.35; at the same time, there is minimal variation in the grey shaded region, i.e., ρ̃_t ≈ 0.5, where σ̃ < 0.05. (b) Variation of the intensity value at the location of maximum gradient magnitude with dimensionless depth σ̃. This corresponds to g_t ≈ 0.5 for most of the suitable working range (σ̃ ⩽ 0.2) and is therefore chosen as reference. Beyond the working range, i.e., the grey shaded region, r_t → 0, as is evident from calibration function f_1.]

Depth estimation: The estimation of particle depth requires an experimental calibration function in addition to the analytical functions used above. This step is optional and is not required if emphasis is placed only on the calibration-free particle size estimation. Experimental calibration is achieved following the size estimation procedure described earlier and is performed for target dots or reticles of known size moved along the optical axis at known depths. The blur kernel size σ is evaluated using the relation σ = σ̃ d_o. Since the depths of these target dots are already known, the correlation between σ and |∆z| can be estimated through Eq. (5). The calculated linear fit β remains constant for the system and is applied to the σ values obtained from the sample particle measurements to estimate their corresponding depths. Due to the symmetric nature of the image blurring across the object plane, the depth location exhibits directional ambiguity, and only absolute distances from the object plane can be determined.

[Fig. 7 (a) Gradient G̃ estimation using the average magnitude in a thin strip (g_t ± δg_t) at the reference intensity, depicted by cyan in the figure. This is necessary because the image is composed of pixels, restricting the precise estimation of gradients at exactly the reference intensity. The strip width increases as σ̃ increases, causing the average value to deviate from the anticipated exact value. (b) Error correction function ε (ratio of actual to estimated diameter) generated using synthetic images to consider the pixelation effect on size and gradient estimation at the reference intensity location g_t = 0.5.]
Referring to Fig. 6a, we now examine the characteristics of the calibration functions and their implications for the measurement process. In the vicinity of the object plane, or the near-focus depth field σ̃ < 0.05, the parameter ρ̃_t is practically constant, as can be seen in the combined calibration curve. This makes the method robust under near-focus conditions for diameter estimation, even though the gradient estimation, and thus σ̃, is prone to error. This is due to the expected sharp gradients and the limitations imposed by image projection onto discrete pixels. Consequently, the depth estimates of particles near the object plane are unreliable. Furthermore, Fig. 6a reveals a steep variation of ρ̃_t with the gradient in the blue shaded region corresponding to σ̃ > 0.35. This region represents larger depth locations, approaching the limit of the measurement system. The measurements in this region are unreliable for diameter estimation. Moreover, the overall intensity level is lower due to a higher degree of blur, rendering the image susceptible to noise. This limits the measurement depth to approximately σ̃_c = 0.35, and the results beyond this are disregarded. Corresponding to this imposed limit, ρ̃_t = 0.3211 and G̃ = 0.2501. Hence, the availability of discrete two-dimensional intensity data due to pixelated image information poses a challenge in various ways. The errors associated with estimating gradients and threshold radius propagate through the aforementioned calibration functions, leading to inaccuracies in the estimated size values. To quantify this error, synthetic images of dots with known sizes and degrees of blur were analysed. An error correction function ε is developed to compensate for the errors due to the pixelation effect, defined as the ratio:

ε(d_0,est) = d_0,act/d_0,est    (11)

where d_0,est is the diameter estimated using the proposed method and d_0,act is the actual diameter of the particle. This function is illustrated in Fig. 7b and used to estimate the corrected diameter as d_0,corr = d_0,est · ε(d_0,est). On closer inspection, we find the error in diameter estimation to be ∆d_0 ≈ 0.35 pixel, irrespective of the actual particle diameter, for the proposed algorithm and parameters. Since the particle image is discrete, an inaccuracy of ∆d_0 ≈ 1 pixel is anticipated; however, we are able to achieve a lower value due to the sub-pixel interpolation procedure discussed in the Materials and Methods (Section 3).
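The synthetic-image check described above can be reproduced compactly. The sketch below (Python/SciPy for illustration; an arbitrary 60 px dot blurred by σ = 6 px, i.e. σ̃ = 0.1, and a wider strip half-width of 0.05 since no sub-pixel interpolation or correction ε is applied) renders a blurred dot, runs the size-estimation chain, and recovers the diameter:

```python
import numpy as np
from scipy.integrate import quad
from scipy.ndimage import gaussian_filter
from scipy.optimize import brentq
from scipy.special import i0e

def g_profile(rho_t, s):
    # Dimensionless blurred-disk profile of Eq. (7); i0e keeps it stable.
    f = lambda r: (r / s**2) * i0e(r * rho_t / s**2) * \
        np.exp(-(r - rho_t)**2 / (2 * s**2))
    return quad(f, 0.0, 0.5, limit=200)[0]

def f1(s):
    # Dimensionless threshold radius at the reference intensity 0.5.
    return brentq(lambda r: g_profile(r, s) - 0.5, 1e-6, 0.6)

def f2(s, h=1e-4):
    # Dimensionless gradient at the reference intensity 0.5.
    r = f1(s)
    return r * abs(g_profile(r + h, s) - g_profile(r - h, s)) / (2 * h)

# Synthetic particle: shadow disk of diameter 60 px, Gaussian blur 6 px.
N, d_act, sig_px = 256, 60.0, 6.0
y, x = np.mgrid[:N, :N]
img = gaussian_filter((np.hypot(x - N/2, y - N/2) <= d_act/2).astype(float),
                      sig_px)

# Measurement chain: threshold radius, strip-averaged gradient, inversion.
r_t = np.sqrt((img > 0.5).sum() / np.pi)        # equivalent radius from area
gy, gx = np.gradient(img)
mag = np.hypot(gx, gy)
G = r_t * mag[np.abs(img - 0.5) < 0.05].mean()  # dimensionless gradient
s_est = brentq(lambda s: f2(s) - G, 0.02, 0.35) # invert f2
d_est = r_t / f1(s_est)                         # d_o = r_t / rho_t
```

With these settings the recovered diameter should lie within a few percent of the true 60 px, before any pixelation correction ε is applied.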

Depth of Detection
In the limit of detection, corresponding to the depth |∆z| = |∆z|_c, the threshold radius r_t → 0. Solving the dimensionless equation, Eq. (7), developed earlier, this limit predicts a linear variation of the depth of detection δ (total depth considering both sides of the object plane) with particle diameter d_p [26], which can be represented as

δ = α (d_p + d_p0)    (12)

where α is a constant and d_p0 is an offset parameter to adjust the linear fit (usually d_p ≫ d_p0). This offset parameter is an artefact of the pixelation associated with actual images and is discussed in detail in previous articles [26].
Considering the limit set on the measurement up to σ̃_c, α can be determined using Eqs. (5), (6) and (12) as

α = 2 σ̃_c M/β = 2 σ̃_c f/(A D)    (13)

The detection volume can then be determined as a function of particle size as

V_d(d_p) = H L α (d_p + d_p0)    (14)

where H × L are the dimensions of the region of interest. This precise determination of the measurement volume is a distinguishing feature of the DFD approach, and a detailed discussion can be found in previous works [26]. Smaller particles are measured over a smaller depth range, and the detection depth increases linearly with size, leading to an overweighting of larger particles. Hence, the information on detection depth is used for volumetric bias correction of the size distributions, as discussed in the Materials and Methods (Section 3). The parameter α plays a significant role in determining the detection volume (V_d), as indicated by Eq. (14). This system parameter is dependent on β, implying that α ∝ f/AD according to Eqs. (5) and (13). Therefore, by choosing or adjusting these parameters, one can ensure a larger detection volume for a higher sampling rate.
For instance, in designing the optical system for a particular application, choosing a larger focal length (f) or a smaller aperture diameter (D) yields a larger detection volume. However, the latter significantly affects the overall intensity captured in the image and must be compensated by controlling the background illumination. Furthermore, the experimental factors affecting parameter A are not precisely known, but it is highly dependent on the type, collimation, and chromaticity of the background illumination. As will be demonstrated later using target dot measurements, diffused-beam illumination leads to a lower value of α and detection volume, but provides reliable measurement results. Collimated-beam illumination, however, leads to a much higher α value, but the results obtained are unreliable due to interference effects.
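The detection-depth and detection-volume relations are simple enough to sketch directly; the values of α, d_p0, H and L below are placeholders, not values from the experiments:

```python
def detection_depth(d_p, alpha, d_p0):
    # Total detection depth (both sides of the object plane),
    # linear in particle diameter: delta = alpha * (d_p + d_p0).
    return alpha * (d_p + d_p0)

def detection_volume(d_p, alpha, d_p0, H, L):
    # Detection volume for a particle of diameter d_p: V_d = H * L * delta.
    return H * L * detection_depth(d_p, alpha, d_p0)
```

The linear growth of V_d with d_p is exactly the size-dependent sampling bias corrected for in the Materials and Methods.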

Experimental Setup
Suitable Setup Requirements: This measurement technique requires the minimal equipment associated with basic backlight imaging: a camera and a diffused light source for background illumination, as shown in Fig. 8. For reliable measurements, the camera resolution and magnification should be carefully selected to ensure that the smallest particle of interest has a diameter of at least 3-5 pixels on the image sensor plane. To achieve suitable background illumination, a diffusor plate or an appropriate optical device should be used. It is crucial to avoid collimated beams, as they can lead to inaccurate results due to non-Gaussian blurring and interference effects, such as Fresnel diffraction (refer to Appendix A). Additionally, the light source should be aligned along the optical axis to ensure proper shadow formation, meaning that the contours should remain circular when the particle is blurred. The background intensity should be adjusted to an intermediate value in the dynamic range of the sensor, to avoid the saturation associated with very high intensities and the noise associated with low intensity levels. If the particle is not completely opaque, a bright central spot will appear inside the shadow, corresponding to first-order refracted light passing through the particle. However, this effect can be largely eliminated by moving the light source farther away from the object plane; to an increasing degree, only paraxial rays will then be seen and the intensity of the bright central spot decreases. The formation of this localized central bright spot does not impact the estimation of radius and gradient at the reference intensity. The choice of lens is crucial and depends on the particle sizes being measured and the observation volume. A telecentric lens is preferred for accurate measurements, since it maintains a constant magnification, keeping the object size constant independent of its position along the optical axis. Furthermore, a telecentric lens maintains symmetry of the blurred image for particles behind or in front of the object plane. However, standard optical arrangements can be used if the measurement volume is small in the depth direction, where the magnification variation is insignificant. It is important to note that the aperture size and focal length can affect the system parameter β (Eq. (5)) and consequently the measurement depth of the system (Eqs. (13) and (14)).

Setup used in Experiments:
The basic configuration consisted of a high-speed camera, zoom lens and light source, with other accessories such as a beam expander, diffusor plate and calibration target dot plate. Target Dot Measurement: High-speed camera: Photron SA5; Lens: 6.5× Navitar zoom lens coupled with a 1.5× lens attachment, and 1× and 2× objectives, where the latter was used for the higher magnification configuration; Light sources: Dolan Jenner Fiber-Lite Mi-150 LED light and Cavitar Cavilux smart UHS pulsed laser; Beam expander: Thorlabs GBE05-A; Magnification: ∼6.8× and ∼13.7×; Resolution: 2.94 µm/pixel and 1.46 µm/pixel. Glass Beads and Ethanol Spray Measurements: High-speed camera: Photron SA5; Lens:

Experimental Calibration Procedure
The calibration procedure involves capturing a sequence of images to obtain the correlation between the depth |∆z| and the blur kernel size σ. Hence, this step is optional and required only for depth estimation. We have confirmed a linear relationship between σ and |∆z|, as depicted in the subsequent section in Fig. 10c. The calibration target dots of known size are moved along the optical axis to known depth positions from the object plane. For each of these targets, the blur kernel size σ can be estimated. Linear regression is performed on the scatter plot of σ and |∆z|, as shown in Fig. 10c, to derive the inverse functional form |∆z| = mσ + c. This functional form and the associated parameters (m, c) remain consistent for all measurements performed using the same optical system. Utilizing the calculated σ, along with the established functional form, we can estimate the depth of the particles under measurement. For improved accuracy, higher-order polynomial fits can be considered.
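The regression step itself is standard. A sketch with synthetic, noise-free calibration data (Python; the slope and intercept are placeholder stand-ins for a real target-dot traverse, not measured values):

```python
import numpy as np

# Synthetic stand-in for a target-dot traverse: an assumed linear law
# |dz| = m*sigma + c with placeholder coefficients.
m_true, c_true = 0.85, 0.02
depth = np.linspace(0.1, 2.0, 20)       # known |dz| positions (e.g. mm)
sigma = (depth - c_true) / m_true       # blur kernel sizes per target image

# Linear regression of |dz| on sigma yields the calibration pair (m, c).
m_fit, c_fit = np.polyfit(sigma, depth, 1)

def depth_from_sigma(s):
    # Apply |dz| = m*sigma + c to a measured blur kernel size.
    return m_fit * s + c_fit
```

The fitted (m, c) pair is then reused unchanged for all measurements taken with the same optical system, as described above.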
If the object plane lies behind a glass window, then the calibration should ideally also be conducted with the glass window in place. The glass window will have the effect of shifting the absolute position measured by the system, but will not affect the relative positions between particles. If, however, the dispersed phase is in a continuous phase with a refractive index other than that of air, then the value of β will be affected. An example would be solid spheres in a liquid vessel, whereby the shadow imaging system is outside, looking through the vessel. In this case, the calibration is best performed in situ, i.e., the calibration plate is traversed inside the vessel.

Image Processing Algorithm
The image processing routine consists of the following key aspects: normalisation, particle identification, sub-pixel interpolation, size estimation and depth estimation. The size and depth estimation processes utilise the proposed algorithm; the preceding steps are standard procedures for image processing systems. The flowchart for the algorithm depicted in Fig. 2 was implemented using MATLAB. Normalisation: This process involves rescaling the intensity of the greyscale shadow image to a range of [0, 1]. The global maximum value, associated with the unobstructed illuminated background, is mapped to 0, while the global minimum, corresponding to the completely obstructed background or shadow, is mapped to 1. The reference value for the former is derived from background illumination images and the latter from blackshading images (images captured with the camera lid on). Mathematically, the normalised intensity (I_n) is obtained [24] as:

I_n = (I_bi − I)/(I_bi − I_bs)    (15)

where I, I_bi and I_bs are the actual shadow image, background illumination and blackshading image intensities, respectively.
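A minimal sketch of this mapping (Python; synthetic 8-bit-style values for illustration), with the unobstructed background mapped to 0 and the fully obstructed shadow to 1 as described above:

```python
import numpy as np

def normalise(I, I_bi, I_bs):
    # I_bi: background illumination reference, I_bs: blackshading reference.
    # Background (I = I_bi) maps to 0; full shadow (I = I_bs) maps to 1.
    return (I_bi - I) / (I_bi - I_bs)
```

In practice I_bi and I_bs would be reference images rather than scalars, and the same expression applies element-wise.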
Particle Identification: This step involves isolating and extracting individual particles from the normalized image for further analysis. In this study, a simple intensity-based method was adopted, in which regions with an intensity above a threshold value were identified as particles. This process, known as segmentation in image processing, was performed with a threshold of 0.4 for this study. The particles were isolated as separate images based on the bounding box enclosing the identified regions (see Fig. 9), where the bounding box is the smallest rectangular region that encloses the particle. The intensity threshold for particle detection should be lower than the reference intensity value of 0.5, at which the subsequent analysis for size and depth estimation is conducted. This ensures that the information used for estimation is extracted within the bounding box, sufficiently away from its edges. Depending on the system under study, more advanced algorithms can be employed for the segmentation or isolation process.

Sub-pixel Interpolation: This step involves interpolating the intensity data of the isolated particles on a grid finer than the pixel resolution. This is necessary because only discrete information is available from an image, and extracting information at exactly a prescribed reference intensity is otherwise a challenge. In this study, a simple bilinear interpolation was performed, where each pixel was subdivided into a 5 × 5 grid (see Fig. 9). Prior to the interpolation, a noise removal step is performed using a Wiener filter. Depending on the noise characteristics of the system, more advanced interpolation techniques can be applied on a suitable sub-grid.

Size Estimation: To estimate the image size d_o of the isolated particle, the radius and gradient magnitude at the reference intensity of 0.5 are required. The radius r_t is determined by extracting the region with an intensity above 0.5 and calculating the equivalent radius from its area A_t as r_t = √(A_t/π) (see Fig. 9). If glare points exist, they appear as holes in this region and can easily be removed by the 'fill hole' operation commonly available in image processing systems. The region eccentricity provides an estimate of the actual particle shape and is used to segregate non-circular particles, as discussed in the subsequent sections. To determine the gradient, the average magnitude in a thin strip (g_t ± δg_t) centred at the reference intensity is considered (Fig. 7a and Fig. 9). The gradient can be calculated using standard gradient functions available in image processing systems; for this study, the strip width is set by choosing δg_t = 0.005. The threshold radius r_t and gradient magnitude ∂g_t/∂r_t evaluated as above are then used to determine the dimensionless gradient G̃ = r_t (∂g_t/∂r_t). The analytical calibration functions f_1 and f_2 are employed to determine σ̃ = f_2⁻¹(G̃) and subsequently ρ̃_t = f_1(σ̃). The size of the particle is then evaluated as d_o = r_t/ρ̃_t, and the blur kernel size as σ = σ̃ d_o. Up to this step, the analytical functions are sufficient and experimental calibration is not required; hence, size estimation can be performed independently in a calibration-free manner.

Depth Estimation: To estimate depth, the inverse functional form |∆z| = mσ + c from the experimental calibration procedure is required. Substituting the determined value of σ yields the absolute depth from the object plane. Note, however, that the proposed method does not provide directional information for the depth.
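The size-estimation chain above can be sketched numerically. The paper's analytical calibration functions f_1 and f_2 are not reproduced here; instead they are tabulated from simulated blurred discs, which is faithful to their definition but not the closed form. The sub-pixel interpolation step is omitted for brevity, so a wider gradient strip (δg_t = 0.02 instead of 0.005) is used; all grid sizes and parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def disc_image(d_px, sigma_px, size=256):
    """Focused disc of diameter d_px, blurred by a Gaussian kernel of std sigma_px."""
    y, x = np.mgrid[:size, :size] - size / 2
    focused = (np.hypot(x, y) <= d_px / 2).astype(float)
    return gaussian_filter(focused, sigma_px)

def measure(img, g_ref=0.5, dg=0.02):
    """Threshold radius r_t and mean gradient magnitude near the reference intensity."""
    r_t = np.sqrt((img > g_ref).sum() / np.pi)   # equivalent radius from area
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    strip = np.abs(img - g_ref) <= dg            # thin strip centred on g_ref
    return r_t, mag[strip].mean()

# Numerical stand-ins for the analytical calibration functions:
# f1: sigma~ -> rho~_t and f2: sigma~ -> G~, tabulated from simulated discs.
d_cal = 60.0
sig_grid = np.linspace(0.02, 0.35, 30)
rho_tab, G_tab = [], []
for s in sig_grid:
    r, g = measure(disc_image(d_cal, s * d_cal))
    rho_tab.append(r / d_cal)
    G_tab.append(r * g)

def estimate(img):
    """Calibration-free size estimate d_o = r_t / f1(f2^-1(G~)) and blur sigma."""
    r_t, grad = measure(img)
    G = r_t * grad
    s = np.interp(G, G_tab[::-1], sig_grid[::-1])  # invert f2 (G~ falls with sigma~)
    rho = np.interp(s, sig_grid, rho_tab)          # f1
    d_o = r_t / rho
    return d_o, s * d_o                            # particle image size, blur kernel
```

Because the calibration tables and the measurement use identical numerics, pixel-discretization biases largely cancel, which is also why the analytical functions work across particle sizes in dimensionless form.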

Limiting Parameters for Reliable Measurements
Particles located at the outer limits of the detection depth exhibit high levels of blurring, low intensities, and significant alterations in gradients due to imaging system noise. Consequently, measurements in this region are highly unreliable, as even a small error in gradient estimation can result in a large diameter error. To address this, we introduce a critical measurement depth limit σ_c = 0.35, beyond which results are not considered. By imposing a tighter depth of detection with a lower σ_c value, more accurate overall results can be achieved. Furthermore, while the ideal eccentricity for spherical entities is zero, a practical limit can be set in the range of 0.5 to 0.8; particles exceeding this limit can be rejected from the analysis.
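These two acceptance criteria reduce to a one-line filter. The limit values below follow the text (σ_c = 0.35); the eccentricity limit of 0.7 is one choice from the suggested 0.5 to 0.8 range:

```python
SIGMA_C = 0.35   # critical dimensionless blur sigma_c beyond which results are dropped
ECC_MAX = 0.7    # practical eccentricity limit for near-spherical particles

def reliable(sigma_nd, eccentricity):
    """Keep a detection only if it lies within the depth and shape limits."""
    return sigma_nd <= SIGMA_C and eccentricity <= ECC_MAX
```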

Volumetric Corrections in Size Distributions
The detection depth and volume depend on the size of the particle being measured. Detection depth varies linearly with particle size, and the detection volume can be determined as per Eq. (14). This leads to a volumetric measurement bias, because larger particles are measured (and counted) over a larger volume than smaller particles. To address this bias, the number of dispersed particles per unit volume must be considered when determining the size distribution. This is achieved by weighting the occurrence frequency in each histogram bin by the inverse of the corresponding measurement volume; normalizing this weighted frequency yields the required probability density function. From Eq. (14) it can be observed that d_p0 is not significant, since d_p ≫ d_p0, and that V_d ∝ α, which implies that α cancels out uniformly during the normalisation procedure. Hence, the volumetric bias correction of the PDFs can be achieved without any experimental calibration or knowledge of α. This estimation of the size probability density distribution implicitly assumes that the distribution is uniform along the optical axis. Nevertheless, since the position and size of all particles are known, one can retroactively examine subvolumes and determine whether the uniformity assumption holds, provided the subvolumes lie within the detection bounds of all particles.
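A minimal sketch of this correction, assuming (per the linear detection-depth relation) that the detection volume scales as V_d ∝ d_p, so the unknown constant α cancels on normalisation:

```python
import numpy as np

def volume_corrected_pdf(diameters, bin_edges):
    """Size PDF with detection-volume bias removed.

    Each particle is weighted by 1/d_p, since the detection depth (and hence
    the detection volume) grows linearly with particle size; the constant of
    proportionality cancels when the weighted histogram is normalised.
    """
    d = np.asarray(diameters, dtype=float)
    counts, edges = np.histogram(d, bins=bin_edges, weights=1.0 / d)
    pdf = counts / (counts * np.diff(edges)).sum()   # normalise to unit area
    return pdf, edges
```

Note that a large particle counted once contributes the same corrected density as a small particle counted once only after this reweighting; the raw histogram would over-represent large particles.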

Parametric analysis of the measurement system
Calibration target dots (or reticles) of known size are moved along the optical axis to known depths and captured under different background illumination configurations (see Fig. 8). This enables validation of the measurement technique by comparing the size estimated by the proposed technique with the actual dot size at various depth locations.
A comprehensive discussion of the various illumination configurations using diffused and collimated light can be found in Appendix A. Measurements for diffused LED illumination are performed at a magnification of ∼6.8x at two background intensity levels, low (0.2) and high (0.65); these are rescaled average background pixel values, where 0.2 denotes an intensity at 20% of the dynamic range of the image sensor and 100% represents complete saturation. The results are depicted in Fig. 10. The size is predicted to within a 5-15% relative error in most parts of the measurement depth (Fig. 10b). A higher relative error is observed for collimated beam illumination due to the interference pattern caused by Fresnel diffraction [28]; hence, the proposed analysis does not apply to such optical settings because of the non-Gaussian blurring [29,30] of the dots. The dashed line in Fig. 10a represents the linear depth of detection, indicated by σ_c = 0.35. Measurements beyond this limit on the right side are not as unreliable as anticipated. The target dot measurements also validate the hypothesis of a linear relationship σ ∝ |∆z|, as depicted in Fig. 10c. Hence, the experimental calibration can be performed and β estimated through linear regression from these dot images. No considerable effect of the background illumination intensity is observed. Still, an intermediate background intensity is suggested, as a lower value is prone to noise and a higher value might wash out the blurring information due to over-saturation at the sensor. For particles of the same physical size, a higher magnification provides more pixels from which to extract information, enabling a slightly better estimation of size. A discussion of measurements at higher magnification (∼13.7x) is presented in Appendix A.
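The depth calibration then amounts to a linear fit of blur kernel size against known dot depth. The numbers below are illustrative stand-ins for measured calibration data, not values from the paper:

```python
import numpy as np

# Hypothetical calibration data: blur kernel size sigma (px) measured for
# target dots placed at known absolute depths |dz| (mm) from the object plane.
dz_known = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
sigma_meas = np.array([0.8, 4.1, 7.9, 12.2, 15.8])

# sigma = beta * |dz| + c, so the depth of an unknown particle follows from
# inverting the fit: |dz| = (sigma - c) / beta (the sign of dz stays ambiguous).
beta, c = np.polyfit(dz_known, sigma_meas, 1)

def depth_from_blur(sigma):
    """Absolute depth from the object plane for a measured blur kernel size."""
    return (sigma - c) / beta
```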

Technique implementation for diverse applications
This section illustrates the application of the technique to a diverse range of problems. The details of the experimental setup for each system are provided in Materials and Methods (Section 3).

Dispersed Glass beads
Untinted spherical glass beads within a size range of 40-90 µm are used for sample measurement. Such measurements are common in the chemical sciences, particularly as calibration standards for a wide range of analytical techniques such as flow cytometry and spectroscopy. To validate the approach, a reference size distribution is estimated using microscope images of glass beads on a slide (Fig. 11a). For measurement in a DFD system, the glass beads are uniformly dispersed in a DI water solution and stirred continuously to avoid settling. Shadow images of the dispersed solution are captured using diffused LED and laser illumination (Fig. 11b). The predicted size distribution from the DFD measurement is in good agreement with the microscope results (see Fig. 11d). Error bars represent one standard deviation realised over six runs. The volumetric measurement with a varying detection depth is evident from Fig. 11c.

Sprays
The measurement of droplet size distributions in sprays holds significance in various natural and industrial systems. For instance, in fuel injection systems, the size of atomized droplets affects combustion efficiency through droplet lifetime and evaporation rate [31]. In high-speed gas flow-induced atomization, precise control of droplet dispersion size is important for monodisperse powder production in additive manufacturing and pharmaceutical applications [32,33]. The COVID-19 pandemic highlighted the role of micro-droplets in disease transmission and the need to develop mitigation strategies [34-36]. To illustrate the applicability of the DFD method, shadow imaging of an ethanol spray using monochromatic background illumination from a diffused laser beam is performed (Fig. 12a). The spray is generated using a laboratory-grade positive displacement pump-type spray dispenser. Measurements are performed in a sparse downstream spray region to obtain the size distribution, as depicted in Fig. 12b. The error bars correspond to the standard deviation evaluated from six runs. The number distribution follows the familiar skewed shape commonly observed in dispersed spray systems.

Aerosol generation from surface bubble rupture
Air bubbles formed at a liquid surface undergo film drainage, eventually leading to rupture and fragmentation into dispersed droplets (Fig. 12c). Depending on the surface tension, film thickness, and bubble lifetime, this can lead to the formation of droplets in the aerosolization range [37]. This mode of mass transfer at bulk liquid interfaces is of interest in marine and environmental sciences. Furthermore, recent studies [38] identified the effect of biological secretions on the size of fragmenting droplets, with many falling in size ranges critical for aerosolization. Such transport of pathogen-loaded droplets into the ambient environment is relevant to disease transmission; hence the proposed method can be deployed for such studies. To illustrate this, bubbles are generated below the surface of a sample liquid pool with a nozzle connected to the air supply from a pump. The continuous bubbles generated in the DI water sample coalesce to form a larger surface bubble of diameter ∼30 mm (spherical cap), which eventually ruptures. For measurement, shadow imaging is performed on the unobstructed dispersed droplets generated from the rupture of a bubble, and ∼50 such events were considered. The obtained size distribution is depicted in Fig. 12d.

Pollen viability
The health of pollen grains, in terms of their ability to germinate, depends on numerous factors, including their diameter and shape [39]. In some species, inviable or unhealthy pollen grains are smaller and oddly shaped, associated with aberrant or dehydrated conditions. Pollen grains exhibit a wide range of shapes, with sizes ranging from 10-200 microns. Only a limited number of methods are available for their size characterisation [40,41], and the method proposed in this study can serve as a simple tool to characterize near-spherical pollen grains dispersed in a solution. To illustrate this, a pollen sample of Hibiscus (Hibiscus rosa-sinensis, see Fig. 13a) is collected from the institute gardens and dispersed in DI water. Hibiscus has spherical pollen grains in the size range of 80-180 µm, with very small spike-like features on the periphery [42]. The size distribution obtained by implementing the DFD technique on shadow images is depicted in Fig. 13b. The error bars correspond to the standard deviation realised over five runs.

Surface reconstruction
Digital reconstruction of a three-dimensional surface is an important application in computer vision [13,19] and interfacial fluid mechanics [9]. The proposed method can be implemented in this context by considering patterned surfaces, i.e., engraving target dots over the surface of interest and performing the analysis to obtain depth profiles. As the blurring is still Gaussian, the same theory and calibration functions apply. To illustrate this, images of patterned surfaces with target dots printed on paper adhering to a defined three-dimensional (3D) contour are considered.
A camera with normal front lighting from an LED source is chosen, i.e., not shadow imaging, to demonstrate the simplicity and versatility of the method. The 3D geometry and resultant scanned surface are depicted in Fig. 13c,d. The actual heights of the cuboidal (green) and cylindrical (red) surfaces were 33 mm and 10 mm, measured as approximately 34 mm and 8 mm, respectively. The reconstructed profiles closely match the actual shape, but discrepancies are observed for the surface near the focal plane. This is due to the inaccurate estimation of sharp gradients from discrete pixel information. Improved depth estimates can be achieved by keeping particles or surface features within the intermediate regions of the depth of detection. Hence, a simple extension of the proposed algorithm can be used for the digital reconstruction of patterned 3D surfaces. As the method estimates the degree of blur of the scene, it can also be combined with deconvolution algorithms for deblurring operations. In the context of experimental fluid mechanics, the method is applicable to reconstructing the free surface of a fluid with floating spherical particles serving as target markers.

Discussion
We introduce a new measurement technique to precisely characterise the size and position of both in-focus and out-of-focus spherical dispersions using minimal and accessible optical resources. The measurement principle is based on an analytical framework of image blurring, and the derived functions are universal, enabling particle sizing in a calibration-free manner. The particle position relative to the object plane is estimated from its correlation with the degree of blurring, established using a simple calibration procedure. The system precisely calculates the measurement volume and its dependence on the size of the dispersed particles, which is crucial for obtaining bias-free size distribution and volume concentration estimates. The method requires simple shadow imaging with a diffused light source for background illumination and a camera paired with a telecentric lens or an equivalent arrangement. With a suitable spatiotemporal resolution, implementation is possible in various systems, including micron- to millimetre-sized particles moving at speeds ranging from stationary suspensions to supersonic droplets, limited only by imaging hardware capabilities.
To validate the method, opaque target dots of known size at known incremental depth locations across the object plane were considered. Implementation under various background illuminations demonstrated its suitability for diffused beams, where the blurring is Gaussian. However, with collimated beams, diffraction effects resulted in deviations due to the non-Gaussian nature of the point spread function (PSF), in particular for very small particles. To illustrate the technique, sparse dispersions of spherical particles such as glass beads, spray droplets and pollen grains were considered. For the dispersed glass beads, microscopy was used as a reference to validate the DFD measurements. The technique was further extended to computer vision applications, where a three-dimensional surface profile was reconstructed digitally using engraved target dots. The resultant profile matched the actual shape, although discrepancies were observed for surfaces near the focal plane due to the inaccurate estimation of sharp gradients. It should be noted that the measurement accuracy is limited by the precision of gradient evaluation from discrete pixel information, which is susceptible to noise. Moreover, although the absolute distance of the particle from the object plane is known, an ambiguity remains as to whether the particle is positioned in front of or behind the object plane. In practice, the optical arrangement should therefore be designed such that the region of interest lies entirely on one side of the object plane, to avoid ambiguous position measurements. Note that this ambiguity does not exist for the two-camera implementation of the DFD technique.
The question may be posed whether a position measurement of each particle is necessary, since it requires the extra calibration step. There are several reasons why it might be essential. For one, if particle tracking is to be realized, for instance with a high-speed camera, then the particle position must be known at each time step. The position is also necessary if spatial inhomogeneities of the size distribution are to be detected.
As an outlook, the approach of using blur gradients together with a gray-level threshold offers possibilities for characterising overlapping projections in dense particle clusters and/or nonspherical/irregular particles. The first extension would greatly increase the tolerable volume concentration for applying this technique; the second would open up innumerable new application areas. Both of these extensions are currently being developed by the authors.

Appendix A Parametric analysis of the measurement system
Detailed results and discussion are presented here on the effect of various system parameters on measurement accuracy. Calibration target dots of known size and depth locations are captured under different background illumination configurations, and the size estimated by the proposed technique is compared with the actual dot size at various depth locations. Measurements performed at a magnification of ∼6.8x are depicted in Figs. A1, A2, and A3. Two background intensity levels, low (0.2) and high (0.65), were considered; these are rescaled average background pixel values, where, for a 16-bit image, the pixel value range 0-65535 is rescaled to 0-1.

Diffused Background Illumination: The size is accurately predicted within a relative error of 5-10% within the measurement depth when using the diffused light source (Fig. A2a,c). The measurements of target dots further validate the linear relationship between σ and |∆z|, as shown in Fig. A3a,c. The intensity of the background illumination is found to have no significant effect on the results. The use of diffused white light yields better results due to its incoherent nature.

Collimated Beam Illumination: Measurements with a collimated light source exhibit a higher relative error, as shown in Fig. A2b,d. This is due to the interference pattern caused by Fresnel diffraction and Poisson spot formation, as depicted in Fig. A5. The interference patterns cause significant deviations in the gradients, which do not align with the profiles expected for Gaussian PSFs; this non-Gaussian blurring of the dots invalidates the proposed analysis. However, in the case of a collimated white light beam, this error is prominent only for the smaller dots (≤ 30 µm), since the interference patterns due to different wavelengths average out at the length scales associated with larger dot sizes. The measurements also deviate from the hypothesized linear relationship between σ and |∆z|, as illustrated in Fig. A3b,d. However, the depth of detection is substantially increased, as evident from the higher values of α. The background illumination intensity has minimal impact on the results.
The second set of measurements, shown in Fig. A4, is performed using diffused laser beam illumination at a higher magnification of approximately 13.7x. The same low and high normalised background intensity levels as earlier are ensured for these measurements. For a particle of the same physical size, the higher magnification provides more pixels from which to extract accurate information. Hence, with roughly twice as many pixels available in this second set, a slightly better size estimation is achieved than with the first set of measurements. The validity of the proposed technique is demonstrated for particle sizes as small as 7 µm, given a suitable resolution.

Limits on Point Spread Function (PSF)
The presumed Gaussian PSF in optical systems is limited by the diffraction of light waves and the formation of the Airy disk. This limits the resolution of the system as well as the validity of the proposed DFD approach. In Fig. A5, we observed how interference patterns emerge owing to diffraction around the particle edges for collimated beams. In this section, we estimate the size of the smallest PSF, i.e., the Airy disk, to see whether it affects the measurement analysis.

For the given combination of lenses (Navitar 1.5× lens attachment + 6.5× zoom lens + 1.0× or 2.0× adapter) used for the parametric study with target dots, the objective numerical aperture NA_obj is given by the manufacturer. The corresponding F-number is

f/# = 1/(2 NA_obj)   (A1)

Then, the Airy disk diameter d_Airy in terms of f/# is given by [29]

d_Airy ≈ 2.44 λ (f/#)   (A2)

For the Cavilux light source, λ = 640 nm (red). Substituting values into Eqs. A1 and A2 gives d_Airy ≈ 7.37 µm. Even in the case of a white light source, the components with a longer wavelength form a larger Airy disk, as evident from Eq. A2, and hence the red wavelength can be used as the limiting test case. A least-squares fit of a Gaussian PSF to the Airy disk profile provides an equivalent Gaussian blurring standard deviation σ_eq in terms of the Airy disk diameter (Eq. A3), with an R-squared value of R² = 0.9981. Substituting the values into Eq. A3 gives σ_eq = 0.9345 µm. This value is smaller than the pixel size (refer to Materials and Methods, Section 3, for details) and hence does not affect the results drastically. Furthermore, as a diffused light beam is suggested for the proposed technique, these diffraction effects will be significantly less pronounced.

From Figs. A3 and A4c, however, one observes that the calculated blur kernel size approaches a finite non-zero value at focus (|∆z| = 0), instead of the expected sharp focused image with a Dirac delta function as the PSF (i.e., σ = 0). This is expected for the following reasons: 1. The pixel intensity value is the average manifestation of the light intensity falling on the sensor. The image of a focused particle (both actual and artificial) has some pixels with intermediate intensity values at the boundary, because the edge of the projected shadow falls at an intermediate position within the pixel. This gives a sense of blurring even for the focused image, so σ ≠ 0. 2. In theory, the gradients at the edge of the particle image should approach infinity when in focus, which in practice never happens, partly due to this discrete way of capturing information. The σ calculations are further affected by errors associated with the estimation of steep gradients from the available discrete information in the image.
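The Airy-disk estimate and its Gaussian approximation can be reproduced numerically. The f-number below is a hypothetical value chosen only so that d_Airy lands near the quoted 7.37 µm (the manufacturer's NA_obj is not restated here), and this simple fit over the main lobe will not reproduce the exact σ_eq = 0.9345 µm, which depends on the fitting range and weighting used:

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import curve_fit

lam = 0.640    # wavelength in microns (red Cavilux source)
f_num = 4.72   # hypothetical f-number, from f/# = 1/(2 NA_obj)

# Airy disk diameter (out to the first dark ring), Eq. (A2)
d_airy = 2.44 * lam * f_num

# Radial Airy intensity profile and a least-squares Gaussian fit to it
r = np.linspace(1e-6, d_airy, 400)
x = np.pi * r / (lam * f_num)
airy = (2.0 * j1(x) / x) ** 2          # normalised Airy pattern, I(0) = 1

def gauss(r, s):
    """Unit-height Gaussian PSF with standard deviation s."""
    return np.exp(-r**2 / (2.0 * s**2))

(sigma_eq,), _ = curve_fit(gauss, r, airy, p0=[d_airy / 6.0])
```

The fitted sigma_eq comes out well below a typical pixel size at these magnifications, consistent with the conclusion that diffraction does not dominate the blur measurement.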

Appendix B Theoretical Particle Concentration Limit
The proposed methodology is currently capable of analysing an isolated blurred particle. However, in sprays and other dispersed systems, particle images often overlap when projected along the optical axis onto the image plane. Blurring can cause particles to appear as a single indistinguishable non-symmetric entity, even if they do not overlap. The particle concentration limit is the extent to which closely packed particles remain distinguishable on the imaging plane based on a segmentation threshold value, chosen for the current study as g_t,c = 0.4. This limiting condition is illustrated theoretically for a simple case where two particles of the same size d_o are placed at a specified centre-to-centre distance of 2∆, as illustrated in Fig. B6a. The intensity at point 'O' is evaluated for different degrees of blurring σ and separation distances ∆; if it exceeds the detection threshold value g_t,c, the particles are indistinguishable. Convolution as before (Eq. (1)) is applied using a Gaussian blur kernel h (Eq. (3)). In this case, the normalized image function i_f takes a value of one within the shaded regions (1) and (2) in Fig. B6a, and zero otherwise. These shaded regions can be defined geometrically in polar coordinates with point 'O' as the origin, where ϕ is the angle subtended by the tangent to the particle contour at the origin, as depicted in Fig. B6a. Substituting this into Eq. (1) to evaluate g_t,c at 'O', where r_t = 0, and introducing the additional non-dimensionalisation ∆̃ = ∆/d_o, yields an expression in which ϕ = sin⁻¹(d_o/(2∆)) = sin⁻¹(1/(2∆̃)). The solutions are evaluated numerically, and the variation of the dimensionless parameter σ̃ with the inter-particle half separation ∆̃ for different intensity values g_t,c at the centre of the pair 'O' is depicted in Fig. B6b.

Two solutions for σ̃, at near-focus and far-focus depths, exist for a prescribed ∆̃ and g_t,c. There is also a critical separation ∆̃_c for a prescribed g_t,c beyond which the particle pair is distinguishable at any depth. For the chosen g_t,c = 0.4 corresponding to particle segmentation, this value is ∆̃_c ≈ 0.7. This signifies the critical concentration limit: particles spaced such that ∆̃ > ∆̃_c are distinguishable at all depths in the measurement. In simpler terms, particles whose centres are spaced more than 1.4 times the diameter apart will be distinguishable at all depths for the segmentation threshold of 0.4. This analysis is a simplified demonstration that such a limit exists and must be considered when choosing an image-based system for measurement. However, several further aspects must be considered when attempting to determine an absolute concentration limit for a given optical configuration. To start, most dispersed systems consist of multiple particles of different sizes, and the size distribution must be accounted for. Furthermore, there are two effects leading to overlap. Even if all particles were in the same plane perpendicular to the optical axis, the overlap would increase with the degree of defocus, as treated above. This is very similar to the situation encountered in other out-of-focus approaches such as ILIDS/IPI, and concentration limits for such techniques have been derived previously [43]. With the DFD technique, however, we also encounter varying degrees of defocus, because the detection volume is larger in the z-direction; this is an added influence that was not treated in the earlier work [43]. Finally, when attempting to determine a concentration limit theoretically, some assumption must be made regarding how uniform the concentration is throughout the detection volume, the simplest assumption being a uniform distribution.

and SS prepared the original draft, CT edited the draft, and all authors
reviewed the manuscript.
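The particle-pair merging condition of Appendix B can be spot-checked with a direct simulation: blur a binary image of two discs and read the intensity at the midpoint 'O'. The disc size, image size and σ̃ sweep below are illustrative choices, and the check is consistent with ∆̃_c ≈ 0.7 for g_t,c = 0.4:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def midpoint_intensity(delta_nd, sigma_nd, d_px=60, size=384):
    """Blurred intensity at the midpoint 'O' between two discs of diameter d_px
    whose centres sit 2*delta apart (delta_nd = delta/d_o, sigma_nd = sigma/d_o)."""
    y, x = np.mgrid[:size, :size] - size / 2
    delta = delta_nd * d_px
    pair = ((np.hypot(x - delta, y) <= d_px / 2) |
            (np.hypot(x + delta, y) <= d_px / 2)).astype(float)
    return gaussian_filter(pair, sigma_nd * d_px)[size // 2, size // 2]

G_TC = 0.4                              # segmentation threshold g_t,c
sigmas = np.linspace(0.02, 0.35, 12)    # blur levels within the depth limit
close = max(midpoint_intensity(0.55, s) for s in sigmas)  # delta~ below delta~_c
far = max(midpoint_intensity(1.00, s) for s in sigmas)    # delta~ above delta~_c
```

Here `close` exceeds G_TC at some depth, so the closely spaced pair merges into one segmented region, while `far` stays below the threshold for all blur levels up to σ̃_c, so the widely spaced pair remains distinguishable.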

Funding
The financial support of the Science and Engineering Research Board of India is acknowledged in sponsoring author CT through the VAJRA Faculty scheme.SJR acknowledges the support from the Prime Minister's Research Fellowship (PMRF).

Fig. 1
Fig. 1 (a) Illustration of image projection using ray optics, where a particle located at a distance u_o from the lens in the object plane is in focus at a distance s in the image plane (IP). Objects in front of or behind the object plane by a distance |∆z| appear blurred on the image plane. (b) Graphical illustration of particle size estimation using a single camera image by extracting two quantities, the radius (r_t) and the intensity gradient (∂g_t/∂r_t) at a reference intensity value (g_t = 0.5), both of which decrease with increasing depth from the object plane |∆z|.

Fig. 2
Fig. 2 (a) Image processing flow chart depicting the calibration-free diameter estimation and the depth estimation based on calibration from target dot images. (b) Expanded flow chart for the dashed boxed part in (a).

Fig. 3
Fig. 3 (a) The blurred image is estimated by convolving the focused image of a particle of size d_o with a Gaussian blur kernel (shown as a shaded circle). The intensity (g_t) at each location (r_t) is evaluated by convolving the focused image with the point spread function. (b) Theoretical variation of the dimensionless parameter ρ̃_t = r_t/d_o with σ̃ = σ/d_o for different intensity threshold values (g_t = 0.1 to 0.9).

Fig. 4
Fig. 4 (a) Analytical variation of intensity g_t with dimensionless radius ρ̃_t for various dimensionless blurring standard deviations σ̃, which is representative of depth. (b) Theoretical intensity variation with the modified dimensionless radius R_t for various dimensionless blurring standard deviations σ̃.

Fig. 8
Fig. 8 Schematic of the optical arrangement for the single camera DFD measurement and various background illumination configurations to test the effect of parameters like chromaticity, collimation and intensity on measurements.

Fig. 9
Fig. 9 Image processing steps depicting segmentation of the normalised image and extraction of each particle enclosed in a bounding box, sub-pixel interpolation, and thresholding to estimate the radius r_t and the average gradient G̃ within a thin strip defined by edges at (g_t ± δg_t).

Fig. 10
Fig. 10 Measurement results for calibration dots of known sizes and depths at a magnification of ∼6.8x for diffused LED beam illumination, depicting the variation of (a) measured diameter with depth, (b) relative error in diameter measurement with dimensionless depth, and (c) blur kernel size with depth. 'Low' and 'High' intensity background illumination measurements are overlaid on the same plot.

Fig. 11
Fig. 11 Measurement results for diverse applications utilising diffused background illumination. (a) Spherical glass beads under the microscope. (b) Glass beads dispersed in a solution, being continuously stirred; the detected beads are marked with red circles in the normalised shadow image, with size d_p in µm. (c) The estimated size of dispersed glass beads d_p and the corresponding blur kernel size σ, depicting the linear relationship between the depth of detection and the diameter. (d) Comparison of the size distribution evaluated from the DFD technique with the microscope measurements as a reference; the uncorrected and detection-volume bias-corrected estimates are depicted as probability density functions (PDFs).

Fig. 12
Fig. 12 (a) Ethanol spray in a monochromatic background, illuminated using a diffused laser beam. (b) Ethanol spray droplet size distribution measured using the DFD technique, represented as a PDF corrected for detection volume bias. (c) Droplets generated during the rupture of a surface bubble in DI water. (d) Droplet size distribution from bubble rupture measured using the DFD technique, represented as a corrected PDF.

Fig. 13
Fig. 13 (a) Hibiscus (Hibiscus rosa-sinensis) flower and its pollen grain under the microscope. (b) Pollen grain size distribution estimated using the DFD technique, represented as a corrected PDF. (c) Target dots engraved over a prescribed 3D surface. (d) Digital reconstruction of this 3D surface geometry using the DFD technique, with surface points projected on the x-z plane to depict depth. Error bars, when shown, correspond to one standard deviation in the size distributions.

Fig. A1
Fig. A1 Measurement results for calibration dots of known size and depth illustrating the variation of measured diameter with depth for different background illumination configurations (a) LED white light diffused beam (b) LED white light collimated beam (c) Laser mono-chromatic light diffused beam (d) Laser monochromatic light collimated beam.'Low' and 'High' intensity illumination measurements are overlaid on the same plot for magnification ∼6.8x.

Fig. A2
Fig. A2 Measurement results for calibration dots of known size and depth illustrating the variation of relative error in measured diameter with dimensionless depth for different background illumination configurations (a) LED white light diffused beam (b) LED white light collimated beam (c) Laser mono-chromatic light diffused beam (d) Laser monochromatic light collimated beam.'Low' and 'High' intensity illumination measurements are overlaid on the same plot for magnification ∼6.8x.

Fig. A3
Fig. A3 Measurement results for calibration dots of known size and depth illustrating the variation of blur kernel size with depth for different background illumination configurations (a) LED white light diffused beam (b) LED white light collimated beam (c) Laser mono-chromatic light diffused beam (d) Laser monochromatic light collimated beam.'Low' and 'High' intensity illumination measurements are overlaid on the same plot for magnification ∼6.8x.

Fig. A4
Fig. A4 Measurement results for calibration dots of known size and depth at a higher magnification of ∼13.7x for monochromatic diffused laser beam illumination, illustrating the variation of (a) measured diameter with depth, (b) relative error in measured diameter with dimensionless depth, and (c) blur kernel size with depth. 'Low' and 'High' intensity illumination measurements are overlaid on the same plot.

Fig. B6
Fig. B6 (a) The blurred image of a particle pair is estimated by convolving the focused image with a Gaussian blur kernel, shown as a shaded circle. Here, 2∆ is the separation between particles of the same size d_p. The intensity at point 'O' is estimated for different degrees of blur σ; ϕ is the angle that the tangent from 'O' makes with the horizontal axis. (b) Theoretical variation of the non-dimensional parameter σ̃ = σ/d_o with inter-particle half separation ∆̃ = ∆/d_o for different intensity values (g_t,c) at the centre of the pair 'O'. Two σ̃ solutions, near focus (B) and far focus (C), exist for a prescribed ∆̃ and g_t,c. There is also a critical separation ∆̃_c, corresponding to (D), for a prescribed g_t,c beyond which the particle pair is distinguishable at any depth. (c) Illustration of the blurred image of a particle pair corresponding to points (A), (B) and (C) in (b). If the intensity at 'O' exceeds the detection threshold (0.4 here), the two particles cannot be directly distinguished, as shown. Here cyan represents g_t = 0.4 ± 0.05.