1 Introduction

Particle tracking techniques are widely employed in many scientific fields for identifying particular structures or processes of interest. Some important examples include biophysics, where these techniques are involved in the observation of the molecular-level motion of kinesin along microtubules and of myosin on actin [52], in the study of the infection path of a virus [49] or in the investigation of cytoskeletal filaments [1]; another topic involving the particle tracking problem is the observation of protein motion in cell membranes [37] or intracellular transport [32]. Other interesting areas of application include statistical mechanics [5, 6], fluid dynamics and mechanics, in particular rheology [34], where the thermal motion of Brownian particles has been tracked to study local rheological properties [20], complex fluids [2, 47] and microrheology in medicine [23]. Colloidal studies have benefited from developments in particle tracking procedures for measuring biofluids such as mucus [48] and vitreous humor [50]. All these practical instances of particle tracking rely on imaging data, acquired via confocal microscopy, electron microscopy and/or similar techniques.

It has been pointed out [21] that the term particle has different meanings depending on the application: a single molecule, a virus, a spherical object. In this work, a particle is a spherical object around 1 micrometer in diameter, observed via confocal microscopy.

Particle tracking consists of two main steps: particle position estimation and trajectory reconstruction. The former is based on the acquired images, while the latter employs the retrieved information together with probabilistic results. In the past, several procedures have been proposed to estimate the particle position: cross-correlation of a sequence of images [35], centroid techniques [33] and Gaussian fitting [38]. Some of them claim subpixel resolution, and in [19], a wide comparison of these techniques showed that significant numerical experimentation is needed before validating such results. Other methods include combinatorial optimization [45], nearest neighbor [31], Kalman filtering coupled with probabilistic data association [26], use of the Viterbi algorithm [39], sparse deconvolution techniques [30] and several others. An experimental comparison of a plethora of methods can be found in [21]. In [44] (and references therein), a particular focus on microrheology-related problems is considered, and the balance between high spatial resolution and timescale of data acquisition is considered in depth: The former leads to approximate multiple tracking techniques, while the latter allows greater flexibility and provides high statistical accuracy. In [19], the influence of the spatial resolution was investigated. In the present paper, the first step of the particle tracking problem is addressed: The proposed algorithm provides estimates of the particle positions with subpixel resolution, in both the two- and three-dimensional cases. The analysis also focuses on the role of image denoising techniques, which heavily influence the final result and the performance of position estimation algorithms. The proposed procedure aims mainly to treat the static error [44], which arises from the noise affecting this type of experiment; this static error is equivalent to the notion of precision in [19].

Following [44] and the considerations in [19] about preliminary synthetic experiments, in this work a numerical simulation of the standard setup is adopted: The simulated system consists of a CCD camera connected to a microscope which records images (frames) of molecules or spherical particles. Our proposed procedure is first tested on synthetic but realistic data. The algorithm provides good performance on such data; hence, it is then applied to real 3D data with satisfactory results.

The presented procedure provides position estimates of 3D spherical particles: This setting is inspired by the problem of estimating the motion of spherical nanoparticles suspended in a fluid. A novel approach based on the Total Variation functional and on Least Squares fitting is proposed to locate the centers of the spherical particles in the 2D frames. The 3D centers of the particles are then estimated using geometric properties and employing the 2D information retrieved in the previous steps. The algorithm achieves subpixel resolution both in the 2D case, i.e., in estimating the position of the particles within the frames, and in the 3D case. In real-life applications, 3D confocal data are corrupted by noise, usually of Poisson type; hence, denoising techniques are necessary to ensure a good quality of the reconstruction. In this work, classical Gaussian filtering is compared with a more tailored noise removal algorithm.

This paper is organized as follows: In Sect. 2, the simulation procedure is described, in order to get realistic 3D data to validate the proposed algorithm. In Sect. 3, details of the proposed procedure are given: the preprocessing of the frames and the estimation of the 2D centers and then the 3D estimation. Section 4 is devoted to the numerical experimentation on both synthetic and realistic data; finally, in Sect. 5, conclusions are drawn.

Notation Bold letters, bold capital letters and Latin (or Greek) letters denote vectors, matrices and scalars, respectively. The ith element of the vector \(\mathbf {x}\) is denoted by \(x_i\). The notation \({\mathcal {N}}\left( \mu ,\sigma ^2\right) \) indicates a Gaussian distribution of mean \(\mu \) and variance \(\sigma ^2\). \(\mathbf {I}\) denotes the identity matrix and \({\varvec{0}}\) the vector with all-zero entries.

2 Data Creation: Simulation Procedure

The synthetic datasets used to validate the proposed algorithm are simulated following these steps, which are inspired by the characteristics of real settings:

  • N spherical particles of radius a are randomly placed in a 3D volume of dimension \(D_x\times D_y \times D_z\). The particles are assumed to have all the same, known radius a;

  • the 3D volume is discretized into an array of \(N_x\times N_y\times N_z\) voxels; each voxel has dimension \(\mathrm{d}x \times \mathrm{d}y \times \mathrm{d}z\), being \(\mathrm{d}x = D_x/N_x, \mathrm{d}y = D_y/N_y, \mathrm{d}z = D_z/N_z\). \(N_z\) represents the number of 2D frames. Each particle is discretized in this volume;

  • aiming to simulate realistic data, a blurring operator is applied to each frame, and then, Gaussian and/or Poisson noise is, respectively, added to or composed with each image.

In the following, the creation of the dataset is described precisely.

Fig. 1

a Discretization of a disk. The true center is represented by the orange dot together with the true profile in the same color. The pixels at a distance less than a are set to H (highlighted in light blue), while the others are set to h. It is clear that it is not always possible to discretize the disk in a symmetric fashion. The procedure follows the same rationale in the 3D case (b), where usually the discretization on the z axis is less fine than on the xy plane (code available at [40]), c blurred and noisy frame (Color figure online)

Position simulation The continuous positions \(\{\mathbf {x}_i\}_{i=1, \ldots ,N}\) of the N particles are randomly chosen in \(D_x\times D_y \times D_z\), via a uniform distribution. The 3D position of the ith particle is denoted by \({\mathbf {x}}_{i}^{}= \left( x_i, y_i, z_i\right) ^\top \).

Discretization Given the continuous coordinates \(\mathbf {x}_i\) of the ith particle and the radius a, the voxels at distance less than or equal to a from \(\mathbf {x}_i\) are filled with a value of H, while the others are set to h, so as to have a nonzero constant background. In our simulations, we set \(h=10\) and \(H=220\). These values were chosen in order to simulate realistic tiff images, which usually have values in [0, 255]. In Fig. 1a, a 2D illustration of this procedure is depicted: The 3D case follows the same procedure (Fig. 1b).
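For illustration, a minimal MATLAB sketch of this discretization step is reported below, using the volume, intensity and particle settings listed above and in Sect. 4; the variable names are illustrative and do not come from the released code.

Dx = 76.8; Dy = 76.8; Dz = 7;        % volume dimensions (micrometers)
Nx = 512;  Ny = 512;  Nz = 22;       % voxels per axis
dx = Dx/Nx; dy = Dy/Ny; dz = Dz/Nz;  % voxel dimensions
a  = 1;                              % known particle radius (micrometers)
h  = 10; H = 220;                    % background and particle intensities
N  = 100;                            % number of particles
X  = rand(N,3) * diag([Dx Dy Dz]);   % continuous centers, uniform in the volume
% coordinates of the voxel centers
[xc, yc, zc] = ndgrid((0:Nx-1)*dx + dx/2, (0:Ny-1)*dy + dy/2, (0:Nz-1)*dz + dz/2);
V = h * ones(Nx, Ny, Nz);            % constant nonzero background
for i = 1:N
    d2 = (xc - X(i,1)).^2 + (yc - X(i,2)).^2 + (zc - X(i,3)).^2;
    V(d2 <= a^2) = H;                % voxels within distance a are set to H
end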

Blurring and noise A blurring operator of Gaussian type (dimension: \(5\times 5\) pixels, zero mean and unit variance) is applied to each frame via the MATLAB function imfilter, simulating the perturbation given by the acquisition system. Gaussian noise of level \(\sigma \) is added to each frame: Let \({\varvec{\eta }}\sim {\mathcal {N}}\left( {\varvec{0}}, \sigma \mathbf {I}\right) \) be a realization of a multivariate Gaussian random variable of zero mean and covariance matrix \(\sigma \mathbf {I}\). The noise \({\varvec{\eta }}\) is added according to the following formula (which is a slight modification of the one in [29]):

$$\begin{aligned} \mathbf {F}_z \leftarrow \mathbf {F}_z + \sigma \frac{{\varvec{\eta }}}{\Vert {\varvec{\eta }}\Vert _{\mathrm{F}}}\Vert \mathbf {F}_z\Vert _{\mathrm{F}} \end{aligned}$$

with \(\mathbf {F}_z\) being the zth frame and \(\Vert \cdot \Vert _{\mathrm{F}}\) the Frobenius norm. A different noise realization \({\varvec{\eta }}\) is created for each frame. Moreover, in order to obtain the most realistic data, Poisson noise is composed with the images via the MATLAB function imnoise, belonging to the Image Processing Toolbox. This function is employed with the rescaling 1e12*imnoise(1e-12*F,'poisson'), with F being the current frame (see the MATLAB help for imnoise for more details about this rescaling). Finally, the intensity values of each frame are rescaled into the interval [0, 255]. See Fig. 1c for a visual inspection of the result.
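A minimal MATLAB sketch of the blurring and noise step is reported below, continuing the previous sketch; the kernel size, its variance, the noise level and the Poisson rescaling follow the values given in the text, while the loop structure and the boundary option are assumptions.

sigma_n = 0.2;                                     % Gaussian noise level
psf = fspecial('gaussian', [5 5], 1);              % 5x5 Gaussian kernel, unit variance
for z = 1:Nz
    Fz  = imfilter(V(:,:,z), psf, 'replicate');    % blur the zth frame
    eta = randn(size(Fz));                         % Gaussian noise realization
    Fz  = Fz + sigma_n * eta / norm(eta,'fro') * norm(Fz,'fro');
    Fz  = 1e12 * imnoise(1e-12 * Fz, 'poisson');   % Poisson noise (rescaled)
    Fz  = 255 * (Fz - min(Fz(:))) / (max(Fz(:)) - min(Fz(:)));  % rescale to [0, 255]
    V(:,:,z) = Fz;
end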

3 Algorithm

The steps for the particle recognition problem in the three-dimensional case are presented in Algorithm 1.


Section 3.1 is devoted to illustrating the idea and the procedures behind lines 2–7 of Algorithm 1. In Sect. 4, the denoising step (line 2) is pursued via either classical or variational approaches; a brief introduction to the latter is given below. Section 3.2 explains how the 2D information obtained from the frames can be used to estimate the particle center coordinates in three dimensions (lines 8–11).

3.1 Frames Processing

The procedures in lines 2–7 are expanded below.

Denoising The presence of noise, together with the blurring operator, could lead to artifacts in the particle position and diameter estimation; hence, a denoising and deblurring procedure is necessary. A simple approach is Gaussian filtering: This procedure is very quick and inexpensive, performed via MATLAB's native FFT algorithms; see Fig. 2b for the results. The pros of this approach are its speed and the reduction of the noise, while the drawback lies in the fact that the image is oversmoothed: The perturbing effect of the PSF is augmented, resulting in blurred edges.
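A minimal sketch of this filtering step is reported below, assuming the same \(5\times 5\) Gaussian kernel used in the simulation and a Fourier-domain implementation; a spatial-domain imfilter call would give the same result up to boundary handling.

g   = double(F);                      % F: noisy frame
ker = fspecial('gaussian', [5 5], 1); % smoothing kernel
K   = psf2otf(ker, size(g));          % kernel -> optical transfer function
D   = real(ifft2(fft2(g) .* K));      % filtered (denoised) frame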

We propose a denoising strategy based on an optimization method. The linear model of image acquisition reads as

$$\begin{aligned} \mathbf {g}=\mathbf {H}\mathbf {f}+b +{\varvec{\eta }} \end{aligned}$$
(3.1)

where \(\mathbf {H}\) is the linear operator representing the Point Spread Function (PSF), e.g. the blurring operator, b is a constant background term, \(\mathbf {f}\) is the image to be recovered and \(\mathbf {g}\) is the actual recorded image corrupted by the statistical noise \({\varvec{\eta }}\). Since the direct solution of this linear model presents several issues due to the ill-conditioning of \(\mathbf {H}\) and to the presence of noise, one resorts to computing a restored image \({{\tilde{\mathbf {f}}}}\) by solving

$$\begin{aligned} {\tilde{\mathbf {f}}} = \underset{\begin{array}{c} {\mathbf {f}\in {\mathcal {C}}} \end{array}}{\text {argmin}}\;f_0(\mathbf {H}\mathbf {f}+b;\mathbf {g}) + \mu f_1(\mathbf {f}) \end{aligned}$$
(3.2)

where \({\mathcal {C}}\) is a convex, non-empty closed set of constraints (e.g., the nonnegative orthant), \(\mu >0\) is a real parameter and \(f_0\) and \(f_1\) are the fit-to-data and regularization functions, respectively. The role of \(f_0\) is to measure the discrepancy between the recovered image and the given data \(\mathbf {g}\), while \(f_1\) helps in reducing the influence of the noise and preserves some characteristics of the solution (e.g., sharp edges). The choice of \(f_0\) mainly depends on the noise perturbing the data. In case of additive Gaussian noise, the most widely employed functional is the classical Least Squares:

$$\begin{aligned} f_0(\mathbf {H}\mathbf {f}+b;\mathbf {g})=\frac{1}{2} \Vert \mathbf {H}\mathbf {f}+b-\mathbf {g}\Vert _2^2. \end{aligned}$$

In case of Poisson noise, which is signal dependent, \(f_0\) is chosen as the generalized Kullback–Leibler functional:

$$\begin{aligned} f_0(\mathbf {H}\mathbf {f}+b;\mathbf {g})=\sum _{i=1}^n g_i\log \left( \frac{g_i}{\left( \mathbf {H}\mathbf {f}+b\right) _i}\right) + \left( \mathbf {H}\mathbf {f}+b-\mathbf {g}\right) _i. \end{aligned}$$

The regularization functional \(f_1\) is chosen according to the characteristics one desires to preserve in the solution: Sparsity is promoted via the component-wise \(\ell _1\) norm [27]; diffuse components (e.g. in astronomical imaging [10]) are recovered via Tikhonov regularization; the recovery of a few diffuse components is pursued by employing a convex combination of the \(\ell _1\) and \(\ell _2\) norms, namely the Elastic-Net [4, 17, 53]; and sharp edges in images can be detected via the Total Variation functional [18, 24, 46] (or its differentiable version called the hyper-surface potential [54]). In this work, \(f_0\) is chosen as the Kullback–Leibler functional and the regularization \(f_1\) is the Total Variation: The former is selected due to the presence of Poisson noise, while the latter promotes the sharp edges of the spherical particles. For a complete review of the methods for image reconstruction from Poisson data, the interested reader can refer to [11].

The numerical experiments will compare two denoising techniques: simple Gaussian filtering and the variational approach (3.2). An approximate solution to the latter is computed via an inexact Bregman technique [7, 9], which consists of an iterative procedure where \(f_1\) is substituted with its inexact Bregman distance computed at the previous iterate. The inexact Bregman distance of a convex function is defined as follows and relies on some classical concepts of convex analysis [43].

Definition 1

Let f be a proper, convex function. The inexact Bregman distance of f of \(\mathbf {x}\) from \(\mathbf {y}\) is

$$\begin{aligned} \varDelta _f^{\mathbf {p},\varepsilon }(\mathbf {x},\mathbf {y}) = f(\mathbf {x})-f(\mathbf {y}) - \left\langle \mathbf {p}, \mathbf {x}-\mathbf {y}\right\rangle +\varepsilon \end{aligned}$$
(3.3)

where \(\mathbf {p}\in \partial _\varepsilon f(\mathbf {y})\) is an \(\varepsilon \)-subgradient of f in \(\mathbf {y}\).


This substitution triggers an iterative procedure depicted in Algorithm 2.

The following theorem (originally stated and proved in [7]) provides a convergence result.

Theorem 1

Let \(f_0\) and \(f_1\) be nonnegative, proper, lower semicontinuous and convex functions, with \(\mathrm{dom}(f_0) \subset \mathrm{dom}(f_1)\), and assume that the relative interiors of the domains of \(f_0\) and \(f_1\) have at least a point in common. We assume that, for any k, there exists a minimizer of the subproblem (3.4) and that \({\hat{x}}\) is a minimizer of \(f_0\) such that \(f_1({\hat{x}})<\infty \). If for any \(k\ge 0\) the inner solver determines \(x^{k+1}\), \(\mathbf {q}^{k+1}\in \partial f_0(x^{k+1})\) and \(\mathbf {p}^{k+1} \in \partial _{\varepsilon _{k+1}}f_1(x^{k+1})\) so that the following condition on \({\varvec{\gamma }}^{k+1}=\frac{1}{\beta }\mathbf {q}^{k+1} +\mathbf {p}^{k+1}-\mathbf {p}^{k}\) and \(\varepsilon _{k+1}\) holds

$$\begin{aligned} \Vert {\varvec{\gamma }}^{k+1}\Vert \le \eta _{k+1}, \quad \varepsilon _{k+1}\le \nu _{k+1} \end{aligned}$$

with \(\displaystyle \sum _{i=1}^\infty \eta _i<\infty \) and \(\displaystyle \sum _{i=1}^\infty i\nu _i<\infty \), then we have that

$$\begin{aligned} f_0(x^{k})\le f_0({\hat{x}}) + \frac{\mu }{k}\left( f_1({\hat{x}}) -f_1(x^0)+\sum _{i=1}^k\left\langle {\varvec{\gamma }}^i,x^i-{\hat{x}}\right\rangle + \sum _{i=1}^k(i+1)\varepsilon _i\right) \end{aligned}$$

Moreover, if the level sets of \(f_0\) are bounded, a limit point of the sequence \(\{x^k\}\) is a minimizer of \(f_0\); if \({\hat{x}}\) is the unique minimizer of \(f_0\), then \(x^k\rightarrow {\hat{x}}\) as \(k\rightarrow \infty \).

This technique has been employed in this work because it has been proved that the usage of the Bregman distance induces a contrast enhancement [3, 7, 16] in the recovered images: This is a key point, since having sharp edges eases the recognition of the particles' profiles. Furthermore, real-world data present some issues due to the physical process of acquiring 3D images: The top frames suffer from fluorescence that worsens the contrast, possibly merging the particles with the background (cf. Fig. 13a, bottom left corner).

Fig. 2

Detail of the frame in Fig. 1c. a A region of interest with two separated particles. b Result of the Gaussian filtering. The noise is reduced, but the edges are blurred. c Thresholding via the Otsu method. d Labeling procedure, where different colors mean different labels. The order of labeling does not influence the final result (Color figure online)

Search for the connected components In order to get an estimate of the profile and of the center of the particles in the current frame, they must be localized first. The strategy is quite simple: The first step consists of thresholding the denoised frame, by employing the Otsu method [41] (see Fig. 2c). Then, the K connected components \(\{L_k\}_{k=1, \ldots ,K}\) in the thresholded frame are recognized and labeled (Fig. 2d). The MATLAB function bwlabel is set to use 8-connected neighborhoods. At this stage, the area of the kth connected component is stored in \(a_k\): This area will be used for the estimation of the center in three dimensions [see (3.7)]. The center of mass \(m_k\) of \(L_k\) is computed, together with a first raw estimate \(r_k\) of the radius: \(r_k\) is the distance of \(m_k\) from the furthest pixel in \(L_k\) (Fig. 3a).
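A minimal MATLAB sketch of this step on a denoised frame D is reported below; the toolbox functions are the ones mentioned in the text, while the variable names are illustrative.

Dn    = mat2gray(D);                    % rescale to [0,1] for Otsu's method
level = graythresh(Dn);                 % Otsu threshold
BW    = Dn > level;                     % thresholded frame
[L, K] = bwlabel(BW, 8);                % label the 8-connected components
stats  = regionprops(L, 'Area', 'Centroid', 'PixelList');
for k = 1:K
    a_k = stats(k).Area;                % area, reused later in (3.7)
    m_k = stats(k).Centroid;            % center of mass of L_k
    % raw radius: distance from m_k to the furthest pixel of the component
    d   = bsxfun(@minus, stats(k).PixelList, m_k);
    r_k = max(sqrt(sum(d.^2, 2)));
end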

Least Squares fit Once the connected components are recognized, a Least Squares fit is performed on each one in order to estimate the profile and the center of the particle. First of all, a Total Variation functional [54] is applied to the current denoised frame, namely \(\mathbf {D}\), aiming to find the edges of the particles (Fig. 3c). The discrete version of the Total Variation functional for an image \(\mathbf {d}\) reads as

$$\begin{aligned} \mathbf {TV}(\mathbf {d}) = \sum _{i=1}^m \Vert \mathbf {A}_i\mathbf {d}\Vert , \quad \mathbf {A}_i \in {\mathbb {R}}^{2\times m} \end{aligned}$$
(3.5)

where \(\mathbf {d}\) is the (column-wise) vectorized image and \(\mathbf {A}_i\mathbf {d}\in {\mathbb {R}}^{2}\) is the discrete gradient of \(\mathbf {d}\) at the pixel i [3, 13].
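In practice, the edge map used in the following steps can be obtained as the per-pixel norm of the discrete gradient; in the minimal sketch below, MATLAB's gradient function (central differences) is assumed as the discrete gradient operator.

[Gx, Gy] = gradient(double(D));    % discrete gradient of the denoised frame
TVmap    = sqrt(Gx.^2 + Gy.^2);    % ||A_i d|| at each pixel i
TVvalue  = sum(TVmap(:));          % the scalar TV value of (3.5)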

For the sake of clarity, we focus on the kth component, assuming that it is well separated from all the others.

Fig. 3

Procedure for the Least Squares fit, focusing on a single connected component. First panel: connected component, with its center of mass and raw radius estimation. Second panel: window of interest around the localized particle. Third panel: chosen pixels for the Least Squares fit, with the relative intensity values. Fourth panel: estimated center together with the profile, based on the thresholded values (Color figure online)

  1. 1.

    A square window of interest (WOI) centered at \(m_k\), of width \(2\times \left( 1.5 r_k\right) \), is opened (Fig. 3b) in \(\mathbf {TV}\left( \mathbf {D}\right) \). If a particle is near one edge of the frame, the window is reduced until it falls entirely into the frame. This reduction is not necessarily performed evenly on the two dimensions: It can lead to a rectangular WOI.

  2. 2.

    The WOI is thresholded via a value obtained again with the Otsu method: This thresholding yields the positions of the largest changes in intensity, which are ideally located on the profile edge, and at the same time discards the fluctuations given by the residual noise (Fig. 3c).

  3. 3.

    The positions of the q pixels above the threshold are stored in an array \(\{x_i,y_i,w_i\}_{i=1, \ldots ,q}\) together with the corresponding intensity values \(w_i\).

  4. 4.

    A constrained regularized Least Squares fit is performed (Fig. 3d):

    $$\begin{aligned} \tilde{{\varvec{\alpha }}}\sim \underset{\begin{array}{c} {\alpha _1^2+\alpha _2^2-\alpha _3-a^2\le 0} \end{array}}{\text {argmin}}\;\frac{1}{2}\Vert \mathbf {W}\mathbf {R}{\varvec{\alpha }} - \mathbf {W}\mathbf {y}\Vert _2^2 + \frac{\mu }{2}\Vert {\varvec{\alpha }}\Vert _2^2 \end{aligned}$$
    (3.6)

    where

    $$\begin{aligned} \mathbf {W}= \begin{pmatrix} \sqrt{w_1} & 0 & \dots & 0\\ 0 & \sqrt{w_2} & \dots & 0\\ 0 & 0 & \ddots & 0\\ 0 & 0 & \dots & \sqrt{w_q} \end{pmatrix}, \quad \mathbf {R}= \begin{pmatrix} -2 x_1 & -2 y_1 & 1\\ -2 x_2 & -2 y_2 & 1\\ -2 x_3 & -2 y_3 & 1\\ \vdots & \vdots & \vdots \\ -2 x_q & -2 y_q & 1 \end{pmatrix}, \quad \mathbf {y}= -\begin{pmatrix} x_1^2 + y_1^2\\ x_2^2 + y_2^2\\ \vdots \\ x_q^2 + y_q^2 \end{pmatrix}, \quad {\varvec{\alpha }}= \begin{pmatrix} \alpha _1\\ \alpha _2\\ \alpha _3 \end{pmatrix} \end{aligned}$$

    and a is the true radius of the particles. The coordinates of the estimated center \(\left( {x}_{k}^{e},{y}_{k}^{e}\right) \) are simply \(\left( {{\tilde{\alpha }}}_1,{{\tilde{\alpha }}}_2\right) \), where \(\tilde{{\varvec{\alpha }}}\) is the solution to the minimization problem, while the estimated radius \(r_k^e\) is computed as \(r_k^e = \sqrt{{{\tilde{\alpha }}}_1^2+{{\tilde{\alpha }}}_2^2-{{\tilde{\alpha }}}_3}\): This is the main reason for the constraint in (3.6), which ensures that the estimated radius does not exceed the true radius a.

    The regularization term is included because the matrix \(\mathbf {W}\mathbf {R}\) could be ill-conditioned [28]; hence, the algorithm could fail to converge to a feasible solution (e.g., if the estimated radius is greater than a): In order to avoid that, the parameter \(\mu \) is set as \(1/{\mathcal {K}}\), with \({\mathcal {K}}\) being the condition number of \(\mathbf {W}\mathbf {R}\). Numerical experiments have shown that \({\mathcal {K}}\) is usually large, and hence, \(\mu \) is small, resulting in a small influence of the regularization, yet still sufficient to avoid infeasible solutions. Sometimes, \({\mathcal {K}}\) is so large that even the regularization does not allow a feasible estimate to be achieved. In this case, the regularization parameter is repeatedly increased by a factor 1.1 until the constraint is satisfied.

Remark 1

In the numerical experiments of this work, the solution to (3.6) is found by employing the MATLAB function fmincon.
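A minimal fmincon-based sketch of the fit (3.6) is reported below; x, y and w are assumed to be the q-by-1 vectors of profile pixel coordinates and TV intensities collected in step 3, a is the known radius, and the starting guess is an illustrative choice.

W  = diag(sqrt(w));                                   % weight matrix
R  = [-2*x, -2*y, ones(numel(x),1)];                  % design matrix
yv = -(x.^2 + y.^2);                                  % right-hand side
mu = 1 / cond(W*R);                                   % regularization parameter
obj = @(al) 0.5*norm(W*R*al - W*yv)^2 + 0.5*mu*norm(al)^2;
con = @(al) deal(al(1)^2 + al(2)^2 - al(3) - a^2, []);    % inequality constraint <= 0
al0 = [mean(x); mean(y); mean(x)^2 + mean(y)^2 - a^2];    % feasible starting guess
opts  = optimoptions('fmincon', 'Display', 'off');
alpha = fmincon(obj, al0, [], [], [], [], [], [], con, opts);
xe  = alpha(1);  ye = alpha(2);                       % estimated 2D center
r_e = sqrt(alpha(1)^2 + alpha(2)^2 - alpha(3));       % estimated radius (at most a)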

Remark 2

One may wonder if a simpler procedure could be used in place of this Total Variation approach. We compared the results (on synthetic tests) obtained via our proposed approach with the ones achieved with a more direct strategy. This simple procedure estimates the center of each particle profile via the weighted mean of the elements of the connected component, while the radius is computed employing the variances of these elements. In this way, the achieved total error \(\mathrm T\) is around 0.15, the vertical error \(\mathrm V\) is close to 0.10–0.11 and the plane error \(\mathrm P\) ranges between 0.08 and 0.09. Comparing these results with the ones obtained via the Total Variation approach convincingly shows that the latter strategy is more effective.

We now focus on a pathological case, where two particles are very close (Fig. 4a): The situation is problematic, but still tractable. When the WOI is opened around one particle, it may happen that some pixels belonging to the edge of the other fall inside the window (Fig. 4a, b), affecting the Least Squares procedure, as is evident in Fig. 4c. Thus, a further check is needed in this case. Another search for connected components is performed inside the WOI: If the number of found components is greater than 1 (Fig. 4b), then only the largest one is kept (Fig. 4c). Adopting this procedure leads to a better fit, as shown in Fig. 4d.

Fig. 4

Upper panels: when two (or more) particles are very close but still separated, selecting a large WOI may lead to the inclusion of some undesired pixels in the LS fit, resulting in a perturbed result. Bottom panels: searching inside the WOI for all connected components avoids the problem depicted in the upper panels. If the particles are close but disconnected, one can easily isolate the largest component, which is related to the particle, and hence a reliable LS fit can be obtained (Color figure online)

Unfortunately, the case in Fig. 5a can occur: The above procedure fails to recognize two distinct particles and computes a center which is very close to the center of mass of the pair. Two possible strategies are proposed, but they still need further investigation.

The first is to perform some morphological operations [42], in order to be able to recognize the different particles.

The second consists of performing a LS fit using an ellipse model instead of a circumference (Fig. 5c): If the ratio of the semi-axes of the ellipse is either much greater or much lower than 1, it means that the ellipse contains more than one particle, due to the assumed sphericity of the particles. Another check is given by the eccentricity of the ellipse. Thus, using the information (length and orientation) of the axes of the ellipse, the WOI can be divided into two smaller WOIs (Fig. 5d): Another LS ellipse fit is pursued in each portion. For each one, the ratio of the semi-axes is checked again: If it is around 1, then a particle is found; otherwise, the same procedure is iterated.

Fig. 5

From left to right, top to bottom: true image, labelled component, estimated ellipse, WOI divided into two more WOIs. In the fourth panel, the window of interest is divided along the longest axis. The example shown refers to a vertical ellipse, but the procedure can take into account arbitrarily oriented ellipses (Color figure online)

Remark 3

The situation depicted in Fig. 5 can be worse: Three or more particles can cluster, leading to an ellipse fit which strongly resembles a circumference. In this undesired case, the check on the ratio of the semi-axes could be misleading, while the eccentricity can give a more reliable output. Another strategy could be to rely on more advanced image segmentation than simple thresholding, for example, via a Mumford–Shah functional [25, 51, 55].

3.2 Three-Dimensional Estimation

The procedure behind lines 8–11 of Algorithm 1 for the estimation of the centers of the particles is now made explicit. It consists of two main steps: First, given the 2D estimate of the center of a particle in a frame, two possible 3D candidates are computed via the Pythagorean theorem. In a second step, we cluster all candidates belonging to the same particle.

Creation of the candidates This procedure relies on the assumption that the radius a of the particles is known. Focusing on a single particle, assume we have estimated its center \((x^e,y^e)\) and the radius \(r^e\) of its circular profile in the zth frame. The distance d between the true center and the considered frame is easily computed by \(d = \sqrt{a^2-(r^e)^2}\) (cf. Fig. 6a). Hence, the two candidates for the third coordinate are \(z\mathrm{d}z - d\) and \(z\mathrm{d}z + d\) (with \(\mathrm{d}z \) being the vertical discretization, equal to the separation between acquisition planes). At this point, no prior information is available about where the true center is located. A single particle can be spanned by at most Z frames: Hence, in the ideal case, Z estimates of the 2D centers are available, one for each frame intersecting the particle, leading to 2Z candidates for the true center (Fig. 6b). Due to the geometric properties, Z candidates will cluster in a region around the true center (highlighted blue region in Fig. 6b): The next step consists in finding this cluster.
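A minimal MATLAB sketch of the candidate creation for a single 2D estimate is reported below; the variable names are illustrative.

% (xe, ye, r_e): 2D center and radius estimated on the zth frame;
% a: known particle radius; dz: vertical voxel size
d     = sqrt(a^2 - r_e^2);           % distance between true center and frame
cand1 = [xe, ye, z*dz - d];          % candidate below the frame
cand2 = [xe, ye, z*dz + d];          % candidate above the frame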

Fig. 6

a A vertical section of a particle. The horizontal line represents the zth frame, on which an estimated center \((x^e,y^e)\) (blue point) and estimated radius (\(r^e\)) are computed. The information on the true radius a allows the computation of the distance d of the true center (black \(+\)) from the zth frame, leading to two different candidates (red and yellow points). b The procedure is repeated for each estimated center: In this case, there are seven frames intersecting the particle; hence, 14 candidates are created. The correct ones cluster around the true center, in the highlighted circular region (Color figure online)

Fig. 7

Up: after processing of all the frames of the volume, the clustering of the candidates around the true centers becomes evident. Bottom: the Z candidates which have to be used for the estimation of the center. On both figures, the colors are displayed only for the sake of clarity (Color figure online)

Finding the clusters and computing the center For each center in each frame, two candidates are created: Once all the frames are processed, the situation in Fig. 7 occurs. For the sake of clarity, we call \({\mathcal {R}}\) the set of centers found in the frames and call \({\mathcal {C}}\) the set of possible candidates computed as described in the previous paragraph (namely, the points in Fig. 6a). It is expected that there should be a clustering around the true centers of the particles. One strategy could consist of searching for the Z nearest neighbors [36] lying in a ball of radius \(\rho _\mathrm{raw} a\), \(0<\rho _{\mathrm{raw}}< 1\) (recall that Z is the maximum number of frames spanned by a particle), but a different approach is adopted here:

  1. 1.

    A first raw estimation of the center of the particles is computed, using the set \({\mathcal {R}}\);

  2. 2.

    The Z nearest neighbors to these approximated centers are found within the candidates in \({\mathcal {C}}\).

The first step groups the points in \({\mathcal {R}}\) that belong to the same particle. Once these clusters are detected and labelled, the corresponding profiles are considered and used in a LS sphere fit, in order to get a first raw estimate of the center of the particles (see Fig. 9a for a visual inspection of this procedure). Let \(\{R_i\}_{i=1, \ldots ,q}\) be the set of these raw estimates; focus on one of these, namely the kth one. The Z nearest neighbors to \(R_k\) are searched within a range \(\rho _\mathrm{est} a\), \(0<\rho _{\mathrm{est}}<1\): Let \(\left\{ \left( {x}_{k,i}^{e}, {y}_{k,i}^{e}, {z}_{k,i}^{e} \right) \right\} _{i=1, \ldots ,Z}\) be these neighbors (ideally, these are the points lying in the small highlighted circle of Fig. 6a). The estimate of the kth center \(\mathbf {x}_k^e = \left( {x}_{k}^{e},{y}_{k}^{e},{z}_{k}^{e} \right) \) is computed as

$$\begin{aligned} \mathbf {x}_k^e=\frac{1}{A} \begin{pmatrix} \left\langle \mathbf {a}, \mathbf {x}_{(k)}\right\rangle \\ \\ \left\langle \mathbf {a}, \mathbf {y}_{(k)}\right\rangle \\ \\ \left\langle \mathbf {a}, \mathbf {z}_{(k)}\right\rangle \\ \end{pmatrix} \end{aligned}$$
(3.7)

where \(\mathbf {a}=\left( a_1,a_2, \ldots ,a_Z\right) ^\top \), \(a_i\) is the area of the connected component related to the center \(\left( {x}_{k,i}^{e},{y}_{k,i}^{e}\right) \), \(A = \displaystyle \sum \nolimits _{i=1}^{Z}a_i\), and \(\mathbf {x}_{(k)}=\left( {x}_{k,1}^{e},{x}_{k,2}^{e}, \ldots , {x}_{k,Z}^{e}\right) ^\top \) (\(\mathbf {y}_{(k)}\) and \(\mathbf {z}_{(k)}\) are defined analogously). A weighted mean is employed in order to lower the influence of unreliable 2D estimates on the final estimate, for example the ones coming from frames which intersect a particle near its top or its bottom, where the uncertainty is high.
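A minimal MATLAB sketch of this neighbor selection and weighted mean is reported below; Rk (raw center, 1-by-3), C (candidate set, one candidate per row), areas (areas of the corresponding 2D components), rho_est and Z are assumed to be available from the previous steps, and the brute-force distance computation is an illustrative choice.

dists = sqrt(sum(bsxfun(@minus, C, Rk).^2, 2));   % distances from the raw center
sel   = find(dists <= rho_est * a);               % candidates within the range
[~, order] = sort(dists(sel));
sel   = sel(order(1:min(Z, numel(sel))));         % at most Z nearest neighbors
w     = areas(sel);                               % weights: 2D component areas
xk_e  = (w(:)' * C(sel,:)) / sum(w);              % weighted mean, as in (3.7)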

Remark 4

It could happen that fewer than Z nearest neighbors to \(R_k\) are found: This can be due to low-quality images, to the procedure failing to recover the 2D center in some frames or to the particle having moved during acquisition.

Fig. 8

Profile pixels of particles lying in a volume of \(76.8\times 76.8\times 7\) \(\upmu \)m, with a discretization of \(512\times 512\times 10\). Using a simple nearest neighbors search within a multiple of the given radius may lead to wrong labelling (see, for example, the orange points) (Color figure online)

Remark 5

The perceptive reader may wonder why the profile pixels (Fig. 3c) found in the whole volume are not employed to directly estimate the center and the radius via a 3D fit. This approach has been investigated: The profile pixels are identified (cf. Fig. 8), and the main problem is to gather the points belonging to the same particle. Employing a simple nearest neighbors search within a distance 1.2a–1.3a (because the TV functional spreads the profile) would fail, since it may assign pixels to a particle while they actually belong to another one. We then resort to using our procedure to obtain a first estimate of the centers, then search for the nearest neighbors within a distance of 1.2a–1.3a and then apply a 3D LS sphere fit. We explored two approaches for a volume of dimension \(76.8\times 76.8\times 7\) \(\upmu \)m, with a discretization of \(512\times 512\times 10\) voxels and 100 particles (see Sect. 4 for details). Twenty different simulations were considered to validate the approaches.

  1. 1.

    Employing a 3D unweighted LS fit leads to a total error T of \(\sim \)20% of a voxel, which is not sufficiently precise in any real-life application.

  2. 2.

    Including also the pixel values as weights in the estimation, following (3.6), leads to a total error of 0.0944, while our procedure provides \({T} = 0.0813\) (cf. Table 1); the in-plane error is 0.0922 versus our 0.0883 (cf. Table 1). Only the error along the z axis favors the weighted 3D fit, which reaches 0.0139, while we obtain an estimated \({V} = 0.0259\) (cf. Table 1).

Although the weighted Least Squares fit is very appealing in the 3D case thanks to its ease of extension, it does not provide better results than the proposed procedure, especially in the total error, which is of main interest. See Sect. 4 for the details about error measurements, performance and results.

Fig. 9

Up left: overlay of the estimated center and of the circular profile of a particle over the spanned frames. The highlighted profiles are used in a LS fit to get a raw estimate of the center of the particle, indicated with the red plus in Fig. 9b. Up right: the red plus is the raw estimate of the center, the dots are the possible candidates in \({\mathcal {C}}\), and the orange ones are the Z nearest neighbors to the raw estimate within a range of 0.1: These points are employed in (3.7). The reader should pay attention to the different scales of the axes. Bottom: xy, xz and yz views of the estimated center, of the candidates and of the selected candidates (Color figure online)

4 Numerical Tests

Two different experiments are carried out to validate the performance of the proposed algorithm. The first is devoted to evaluating the performance on synthetic datasets. Dataset construction is described in Sect. 2, with two different noise settings (Gaussian plus Poisson noise and pure Poisson noise). The evaluation is done by using three different error measurements, described in the subsequent paragraph. A large number of simulations is carried out, aiming to produce a sufficient amount of data to draw reliable conclusions. Moreover, the performance of the algorithm is also evaluated with respect to the vertical resolution, since this is an important issue in real-life applications. The second experiment concerns real 3D data: It considers a scanned volume of particles with a diameter of \(3\, \upmu \)m suspended in a glycerol/water mixture. Both experiments are carried out on a MacBook Pro, equipped with 16 GB of RAM and an Intel\(^\circledR \) Core™  i7 CPU (2.2 GHz), running MATLAB 2015a. The MATLAB code is available at http://www-syscom.univ-mlv.fr/~benfenat/Software.html.

Error measurements In order to evaluate the performance of our algorithm, inspired by [19, 44], three different error measurements are adopted. Denote with \(\mathbf {c}=\left( c_x, c_y, c_z\right) ^\top \) the true coordinates of a center and with \(\mathbf {e}= \left( e_x, e_y, e_z \right) ^\top \) the coordinates of the corresponding estimate. The total error \({ {T}}\) is

$$\begin{aligned} { {T}}= \sqrt{\left( \mathbf {c}-\mathbf {e}\right) ^\top \mathbf {D}^{-2} \left( \mathbf {c}-\mathbf {e}\right) }, \quad \mathbf {D}=\begin{pmatrix} \mathrm{d}x &{} 0 &{} 0\\ 0 &{} \mathrm{d}y &{} 0\\ 0 &{} 0 &{} \mathrm{d}z \end{pmatrix} \end{aligned}$$
(4.1)

which aims to measure the error w.r.t. voxel precision.

The in-plane error \({ {P}}\) and the out-of-plane error \({ {V}}\) are defined as

$$\begin{aligned} { {P}}= \sqrt{ \left( \frac{c_x -e_x}{\mathrm{d}x}\right) ^2 + \left( \frac{c_y -e_y}{\mathrm{d}y}\right) ^2}, \quad { {V}}= \frac{|c_z-e_z|}{\mathrm{d}z}. \end{aligned}$$
(4.2)

The former aims to measure the error on the estimation of the particles’ position in the single frames w.r.t. pixel precision, while the latter focuses on the vertical displacement.
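For reference, a minimal MATLAB sketch of these error measures is reported below, with c and e the true and estimated centers stored as 3-by-1 vectors.

T = norm((c - e) ./ [dx; dy; dz]);         % total error (4.1)
P = norm((c(1:2) - e(1:2)) ./ [dx; dy]);   % in-plane error (4.2)
V = abs(c(3) - e(3)) / dz;                 % out-of-plane error (4.2)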

Fig. 10

From left to right: \({ {V}},{ {P}}\) and \({ {T}}\) errors. Each performance stays below the state-of-the-art baseline, which is 10% of a pixel/voxel. The medians of the errors are 0.0289, 0.0483 and 0.0712 for \({ {V}}, { {P}}\) and \({ {T}}\), respectively (Color figure online)

First synthetic test: Gaussian and Poisson noise Following the notation of Sect. 2, the synthetic dataset is generated using the following settings: \(D_x=D_y = 76.8\,\upmu \)m, \(D_z=7\,\upmu \)m, the number N of particles is 100, with radius \(a=1\,\upmu \)m; the volume is discretized into a 3D array of dimension \(N_x=N_y=512, \, N_z=22\), leading to voxel dimensions \(\mathrm{d}x = \mathrm{d}y = 0.15\) \(\upmu \)m, \(\mathrm{d}z = 0.3182\) \(\upmu \)m. Two types of noise affect the frames: Gaussian (\(\sigma _n=0.2\)) and Poisson (see Sect. 2 for the details on how the Poisson noise is added).

Algorithm 1 is applied: The chosen denoising technique (Line 2) consists simply of filtering via a Gaussian filter of dimension 5 pixels and variance 1. The window of interest is chosen as described in Sect. 3.1. Due to the discretization of the 3D volume, the maximum number Z of frames that can be spanned by a particle is 7; hence, the estimation of the centers (Sect. 3.2) is achieved by

  1. 1.

    clustering the points in \({\mathcal {R}}\) within a distance equal to 0.2a, followed by estimating the raw centers \(\{R_k\}_{k=1, \ldots ,q}\), and then

  2. 2.

    searching the Z nearest neighbors to each \(R_k\) within a distance 0.2a and applying (3.7).

In Fig. 10, the three types of errors are depicted; the proposed procedure recognizes 99 particles (out of 100). The plots in Fig. 10 show that the mean of each error type (yellow dashed line) stays below 1/10 of a pixel/voxel (red line), which is the baseline of the state-of-the-art methods [19, 21]. In fact, the in-plane error is 0.0596, and the out-of-plane error is 0.0371. The total error, given by (4.1), is 0.0777, below the state-of-the-art baseline.

In order to study the behavior of the procedure on large numbers of particles, the above simulation is repeated 20 times (for a total of 2000 particles), storing the errors \({ {V}}, { {P}}, { {T}}\) for each run. The histogram of the total error \({ {T}}\) is shown in Fig. 11a, together with its distribution estimate. The histogram is fit with a \(\varGamma \) distribution with parameters \((k,\theta )\), where k is the shape parameter and \(\theta \) is the scale parameter. The mean of \({ {T}}\) is 0.0811. Only the behavior of the total error is presented: The histogram of the in-plane error has the same appearance, with mean 0.0643, while the histogram of the out-of-plane error also shows a \(\varGamma \) behavior but is much more concentrated toward zero, with a mean of 0.0387. All three errors stay below the expected baseline of 10% [19].

Fig. 11

a Histogram of the total error \({ {T}}\): Its mean is 0.0811, and its median is 0.0781. The out-of-plane and the in-plane errors have very similar behaviors and can be fitted to the same distribution. b Histogram of the signed difference \(a-r^e\) together with its t-location scale fit. There are more outliers on the left than on the right; together with the fact that the mean is circa − 0.014, this indicates that the proposed procedure tends to slightly overestimate the radius of the particles (Color figure online)

Table 1 Performance w.r.t. different vertical discretizations. There is a faint decreasing behavior in the vertical error, which leads to a decrease in the total error

Our proposed procedure is based on the assumption that the true radius is known: This is a valid assumption in many applications, but with a certain degree of uncertainty (e.g., the radius can be known within an error of 10%). In order to check whether the estimate \(r^e\) of the radii of the particles is reliable, in Fig. 11b the histogram of the signed difference \(a-r^e\) is shown, aiming to evaluate the performance of the algorithm (\(r^e\) is computed by simple geometric properties). The chosen distribution for the fit is the t-location scale, due to the heavy tail on the left: This distribution is able to capture also the largest errors (in absolute value). In this case, there are indeed some outliers on the left of the histogram, as is evident from Fig. 11b. The mean given by this distribution is \(-0.0142\): This means that overall the radii of the particles are overestimated by roughly 1.5%. A first justification of this behavior is the blurring effect of the PSF (see Sect. 2 for the details) combined with the adopted denoising technique; the next experiment will neglect the influence of the PSF and show how the denoising technique influences the radius estimation.
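A minimal sketch of these distribution fits is reported below, assuming that the errors of all runs are collected in column vectors; fitdist belongs to the Statistics and Machine Learning Toolbox, and the variable names are illustrative.

pdT = fitdist(Terr, 'Gamma');              % Gamma(k, theta) fit of the total error T
pdR = fitdist(radDiff, 'tLocationScale');  % t-location scale fit of a - r^e
meanT = mean(pdT);                         % compare with the reported 0.0811
meanR = mean(pdR);                         % compare with the reported -0.0142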

Fig. 12

From left to right: blurred and noisy frame, original image (without blurring and noise), Gaussian filtering and Bregman restoration. These images are examples from the fourth frame and are displayed in the range [0, 255]. The Bregman technique separates the particles more reliably and at the same time provides sharper edges, due to the choice of the regularization function; in this case, the regularization is given by the Total Variation functional. Gaussian filtering may merge two or more particles into one big component, increasing the difficulty of recognizing different objects (Color figure online)

The last part is devoted to studying the performance w.r.t. the vertical resolution, i.e., the number \(N_z\) of frames into which the volume is discretized (\(N_x\) and \(N_y\) are unchanged, since most modern microscopes have a high resolution in both the x and y axes). In Table 1, the behavior of the three kinds of error is depicted for increasing vertical resolution. For each dimension, 20 different simulations were performed; hence, 20 different runs of the procedure have been done: The numbers appearing in Table 1 are the means of the results of these simulations. One would expect the estimation to improve with the number of frames: Actually, the procedure reveals itself to be very robust w.r.t. the vertical resolution, even with only a few (10 or 12) frames. The difference \(a-r^e\) is depicted in the fourth row: For each resolution, this difference is around -0.013, meaning that, regardless of the number of vertical frames, the radius of the particles is overestimated by 1.3%. The last line of Table 1 refers to the (mean) number of estimated particles: The results are very satisfying for all the resolutions but the first one (\(N_z=10\)); this is due to the fact that in this case a particle can span at most three frames (more likely just two), leading to a low number of candidates in \({\mathcal {C}}\). Hence, it is a problem linked to the relation between the dimension of the particles and the vertical resolution: For small particles, it is sufficient to slightly increase \(N_z\) (\(N_z=12\) already gives very good results), while for larger particles (\(a=1.1\,\upmu \)m) ten frames prove to be sufficient, as is evident in Table 2.

Table 2 Results of 20 runs of the procedure with \(a=1.1\,\upmu \)m, \(N_z=10\) and \(N=50\)
Table 3 Results obtained by ten runs of the algorithm. The Bregman technique provides better results overall, both for \(N_z=10\) and \(N_z=22\)

Second synthetic test: Poisson noise These tests aim at checking whether Gaussian filtering is the right choice for denoising. Let us consider the same setting as in the previous experiments: \(D_x=D_y = 76.8 \,\upmu \)m, \(D_z=7 \,\upmu \)m, 100 particles of radius \(a=1\,\upmu \)m, \(N_x=N_y=512, \, N_z=22\). The difference lies in the noise corrupting the frames: No Gaussian noise is present (\(\sigma _n=0\)), while Poisson noise affects the data. Algorithm 1 is applied to this dataset: Satisfactory results, in line with the ones in Table 1, are obtained (\({ {P}}=0.0621, { {V}}= 0.0331\), and \({ {T}}= 0.0755\), 98 particles recognized). Since a simple Gaussian filter is not always sufficient to deal with high-level Poisson noise, as anticipated in Sect. 3.1, the variational approach (3.2) is adopted and solved by the aforementioned inexact Bregman procedure [7, 8]. As explained in Sect. 3.1, this strategy has been chosen instead of possibly simpler procedures (e.g., [12, 13, 15, 22]) for its ability to increase the contrast in the restored images. A visual comparison between the Gaussian filtering and the employed Bregman technique is depicted in Fig. 12, where a zoom of the fourth frame is shown. Algorithm 2 requires an iterative solver for subproblem (3.4) when an explicit solution is not available, as in the present case. In this work, the AEM algorithm [14] is used, with a maximum of 1000 iterations, stopped via the criterion described in [7] with a tolerance of \(10^{-4}\); the fixed number of external iterations is 3 and the regularization parameter \(\mu \) is set to 0.1. Due to the presence of Poisson noise, \(f_0\) is set as the generalized Kullback–Leibler functional, while the regularization function \(f_1\) is the Total Variation. Using this approach in line 2 of Algorithm 1 yields the following results: \({ {P}}=0.0627, { {V}}= 0.0316\) and \({ {T}}=0.0752\), with 99 particles recognized. The most important difference lies in the estimated radius: With Gaussian filtering, the mean error (obtained by a t-location scale fit) is \(-0.0134\), while the Bregman technique leads to an error of \(-0.0018\); hence, using Gaussian filtering leads to an overestimation of the radius of the particles. Since a single experiment is not sufficient to support this claim, further tests are carried out and presented in Table 3: One has a lower vertical resolution (\(N_z=10\)), with the same dimension and discretization of the volume, while the number of particles is 50 and the radius is set to \(1.1 \,\upmu \)m. The second test is performed on a dataset with the same characteristics as the first one presented in this paragraph: \(D_x=D_y = 76.8\,\upmu \)m, \(D_z=7\,\upmu \)m, 100 particles of radius \(a=1\,\upmu \)m, \(N_x=N_y=512, \, N_z=22\).

Table 3 shows that using the correct denoising procedure produces better results in terms of error estimation and of number of recognized particles; moreover, choosing the correct denoising technique allows a more precise estimate of the radius: In fact, for \(N_z=10\), using Gaussian filtering leads to an error of almost 1%, while the Bregman technique reduces the error to 0.1%. For \(N_z=22\), the difference is more pronounced: Classical filtering gives an error of \(\sim 1.4\%\), while again the proposed approach results in an error of only \(0.1\%\). The hypothesis that the overestimation of the radius actually depends on the denoising and deblurring technique is thus confirmed: At first sight, it seems from Table 1 that this is a determinate error [19] of the algorithm, but this last experiment shows the opposite. The procedure used to improve the quality of the images influences the performance of the particle estimation algorithm.

On the one hand, the two denoising procedures are similar, because both require parameter settings (e.g., the Bregman technique requires the tuning of the regularization parameter and of the tolerance for the stopping criterion; the filtering techniques require choosing the type of filter and its parameters); on the other hand, the optimization technique has drawbacks in its computational cost and in the time needed to restore each frame, while simple filtering is more or less free in these terms. There is a trade-off (as usually occurs in such cases) between performance and time/computational cost.

Fig. 13

From left to right, from top to bottom. The images have to be read in pairs: For the zth slice, the left image refers to the noisy and blurred frame, while the right image refers to the restored one. It is clear that the contrast of the image is significantly improved, reducing the diffuse areas, mainly in the highest frames. All the images are displayed in the range [0, 255] (Color figure online)

Fig. 14

a 3D recovery of the positions of the particles at time \(t=1\). Panels b–k contain the original images with the superposition of the recognized profiles (in red) (Color figure online)

Real 3D data This paragraph is devoted to applying the proposed algorithm to real 3D data. The scanned volume has \(D_x=D_y=64\,\upmu \)m, \(D_z=4.1\,\upmu \)m and is discretized into an array of dimension \(512\times 512\times 10\), leading to \(\mathrm{d}x = \mathrm{d}y = 0.125\,\upmu \)m, \(\mathrm{d}z = 0.41\,\upmu \)m; 50 scans of the volume were recorded, with a time step \(\mathrm{d}t = 0.5\) s. The diameter of the particles is \(3\,\upmu \)m (\(a=1.5\,\upmu \)m), and they are suspended in a \(\sim 70\%\)–\(30\%\) glycerol/water mixture (viscosity of \(\sim 0.017\) Pa s). The instrument used to acquire these data is a confocal microscope (Zeiss LSM 700) with a 100\(\times \) NA 1.4 oil immersion objective (Zeiss Plan-APOCHROMAT). The frames are restored using the Bregman procedure previously described with the following settings: AEM as inner solver with a Total Variation functional as regularization, maximum number of allowed iterations set to 1000 with a tolerance of \(10^{-4}\) for the stopping criterion described in [7] with \(\alpha =2\), and 3 external iterations allowed. Since the images are given without any information about their recording, a Gaussian PSF with \(\sigma =1\) and zero mean is assumed as the blurring operator, together with a background term equal to the minimum value of the image and Poisson noise affecting the frames. All these assumptions are consistent with the type of images produced by the aforementioned instrument.

Figure 13 shows in its first row the sixth acquired frame at time \(t=1\), the restored version via the Bregman technique and the image filtered via a Gaussian filter. In the second column, a detail of these images is presented: The visual inspection makes clear that the usage of a suitable denoising technique reduces the glowing halo across the frame and moreover provides sharper edges, all of which makes the recognition of the profiles easier.

Algorithm 1 is set with an initial WOI of width \(2\times 0.1 r_k\) (see Sect. 3.1 for the details), with a threshold which is 1.5 times the value given by Otsu’s method, \(\rho _{\mathrm{raw}} = 0.3\), \(\rho _{\mathrm{est}}=0.3\). The frames at time \(t=1\) are shown in Fig. 14b–k.

Figure 14a provides a visual inspection of the reconstructed positions of the particles at time \(t=1\): This reconstruction faithfully respects the true positions, as is clear by comparing the 3D plot with the frames depicted in Fig. 14b–k, where the recovered profiles of the particles are superimposed on the original images. In these images, the top left corner corresponds to the point \((0,0,k\mathrm{d}z)\) in the 3D space, with k being the number of the frame. A closer inspection of Fig. 14 demonstrates that the proposed procedure finds particles close to the boundaries of the frames, as well as the ones near the top or the bottom of the volume.

5 Conclusion

In this work, a particle segmentation and position estimation methodology is presented. Assuming fixed spherical particles with a known radius, this procedure first applies a noise removal algorithm to each frame of the 3D volume; then, it uses the 2D gradient information on the profiles of the particles and employs a weighted regularized Least Squares fit to find the 2D center and the radius of the profile in each frame. Using geometric properties, the coordinates of the 3D center are retrieved with an accuracy better than 10% of a voxel, which is the state-of-the-art performance for this type of algorithm. Furthermore, the intermediate steps implemented for the 3D reconstruction also allow the recovery of the particles' positions within each 2D frame with subpixel precision. Reliable results for the 3D positioning are achieved even for a low vertical resolution: The total error is indeed under 10% of a voxel. Moreover, the very low error on the radius estimation suggests that this procedure improves a priori information about the radius of particles of uncertain dimension. This work demonstrates that the preprocessing of the frames requires particularly tailored techniques, depending on the noise type: Since Poisson noise is the most common noise affecting the images, simple Gaussian filtering is not sufficient. One of the available image restoration techniques is then applied in this context: Although more demanding in terms of computational cost and time, the application of such strategies leads to a general improvement of the position estimation. Moreover, this tailored approach significantly increases the precision of the radius estimation, and it provides deeper insights into the role of Gaussian filtering in this task, showing that it induces an overestimation. Future work will involve better segmentation techniques for pathological cases, employing more tailored approaches such as regularized approaches inspired by the Mumford–Shah functional. The case of spherical particles with unknown radius will also be handled. The reliable positioning results suggest that the proposed technique can be embedded in a more general tracking procedure, where the particles are no longer fixed but may be subject to significant Brownian motion between slice acquisitions.