Abstract
Abstract
Particle estimation is a classical problem arising in many scientific fields, such as biophysics, fluid mechanics and biomedical imaging. Many interesting applications in these areas involve 3D imaging data. This work presents a technique to estimate the 3D coordinates of the centers of spherical particles. Its core lies in the processing of the images of the scanned volume: It first applies denoising techniques to each frame of the scanned volume and then estimates both the center and the profile of the 2D intersections of the particles with the frames, by coupling the Total Variation functional with a regularized weighted Least Squares fit. The 2D information is then used to retrieve the 3D coordinates via geometric properties. The experiments provide evidence that image denoising has a large impact on the performance of particle tracking procedures, since they strongly depend on the quality of the initial acquisition. This work shows that choosing an image denoising technique tailored to Poisson noise leads to a better estimation of the particle positions.
1 Introduction
Particle tracking techniques are widely employed in several scientific fields for identifying particular structures or processes of interest. Some important examples include biophysics, where these techniques are involved in observing the molecular-level motion of kinesin on microtubules and of myosin on actin [52], in studying the infection path of a virus [49] and in investigating cytoskeletal filaments [1]; particle tracking also arises in the observation of protein motion in cell membranes [37] and of intracellular transport [32]. Other interesting areas of application include statistical mechanics [5, 6] and fluid dynamics and mechanics, in particular rheology [34], where the thermal motion of Brownian particles has been tracked to study local rheological properties [20], complex fluids [2, 47] and microrheology in medicine [23]. Colloidal studies have benefited from developments in particle tracking procedures when measuring biofluids such as mucus [48] and vitreous humor [50]. All these practical instances of particle tracking rely on imaging data, acquired via confocal microscopy, electron microscopy and/or similar techniques.
It has been pointed out [21] that particles have different meanings depending on the applications: a single molecule, a virus, a spherical object. In this work, a particle is a spherical object around 1 micrometer in diameter, observed in confocal microscopy.
Particle tracking consists of two main steps: particle position estimation and trajectory reconstruction. The former is based on the acquired images, while the latter employs the retrieved information together with probabilistic results. Several procedures have been proposed in the past to estimate the particle position: cross-correlation of a sequence of images [35], centroid techniques [33] and Gaussian fitting [38]. Some of them claim subpixel resolution, and a wide comparison of these techniques in [19] showed that significant numerical experimentation is needed before validating such claims. Other methods include combinatorial optimization [45], nearest neighbor search [31], Kalman filtering coupled with probabilistic data association [26], the Viterbi algorithm [39], sparse deconvolution techniques [30] and several others. An experimental comparison of a plethora of methods can be found in [21]. In [44] (and references therein), a particular focus on microrheology-related problems is adopted, and the balance between high spatial resolution and the timescale of data acquisition is considered in depth: The former leads to approximate multiple tracking techniques, while the latter allows greater flexibility and provides high statistical accuracy. In [19], the influence of spatial resolution was investigated. In the present paper, the first step of the particle tracking problem is addressed: The proposed algorithm estimates the particle positions with subpixel resolution, in both the two- and three-dimensional cases. The analysis also focuses on the role of image denoising techniques, which heavily influence the final result and the performance of position estimation algorithms. The proposed procedure mainly aims to treat the static error [44], which arises from the noise affecting this type of experiment; this static error is equivalent to the notion of precision in [19].
Following [44] and the considerations in [19] about preliminary synthetic experiments, in this work a numerical simulation of the standard setup is adopted: The simulated system consists of a CCD camera connected to a microscope which records images (frames) of molecules or spherical particles. The proposed procedure is first tested on synthetic but realistic data. The algorithm proved to perform well on such data; hence, it is applied to real 3D data with satisfactory results.
The presented procedure estimates the positions of 3D spherical particles: This approximation is inspired by the problem of estimating the motion of spherical nanoparticles suspended in a fluid. A novel approach based on the Total Variation functional and on Least Squares fitting is proposed to locate the centers of the spherical particles in 2D frames. The 3D centers of the particles are then estimated using geometric properties and the 2D information retrieved in the previous steps. The algorithm achieves subpixel resolution both in the 2D case, i.e., in estimating the position of the particles within frames, and in the 3D case. In real-life applications, 3D confocal data are corrupted by noise, usually of Poisson type; hence, denoising techniques are necessary to ensure a good-quality reconstruction. In this work, classical Gaussian filtering is compared with an algorithm tailored to this type of noise.
This paper is organized as follows: In Sect. 2, the simulation procedure is described, in order to get realistic 3D data to validate the proposed algorithm. In Sect. 3, details of the proposed procedure are given: the preprocessing of the frames and the estimation of the 2D centers and then the 3D estimation. Section 4 is devoted to the numerical experimentation on both synthetic and realistic data; finally, in Sect. 5, conclusions are drawn.
Notation Bold letters, bold capital letters and Latin (or Greek) letters denote vectors, matrices and scalars, respectively. The ith element of the vector \(\mathbf {x}\) is denoted by \(x_i\). The notation \({\mathcal {N}}\left( \mu ,\sigma ^2\right) \) indicates a Gaussian distribution of mean \(\mu \) and variance \(\sigma ^2\). \(\mathbf {I}\) denotes the identity matrix and \({\varvec{0}}\) the all-zeros vector.
2 Data Creation: Simulation Procedure
The synthetic datasets used to validate the proposed algorithm are simulated following these steps, which are inspired by the characteristics of real settings:
-
N spherical particles are randomly placed in a 3D volume of dimension \(D_x\times D_y \times D_z\); all particles are assumed to have the same, known radius a;
-
the 3D volume is discretized into an array of \(N_x\times N_y\times N_z\) voxels; each voxel has dimension \(\mathrm{d}x \times \mathrm{d}y \times \mathrm{d}z\), being \(\mathrm{d}x = D_x/N_x, \mathrm{d}y = D_y/N_y, \mathrm{d}z = D_z/N_z\). \(N_z\) represents the number of 2D frames. Each particle is discretized in this volume;
-
aiming to simulate realistic data, a blurring operator is applied to each frame, and then, Gaussian and/or Poisson noise is, respectively, added to or composed with each image.
In the following, the creation of the dataset is described precisely.
Position simulation The continuous positions \(\{\mathbf {x}_i\}_{i=1, \ldots ,N}\) of the N particles are randomly chosen in \(D_x\times D_y \times D_z\) via a uniform distribution. The 3D position of the ith particle is denoted by \(\mathbf {x}_{i}= \left( x_i, y_i, z_i\right) ^\top \).
Discretization Given the continuous coordinates \(\mathbf {x}_i\) of the ith particle and the radius a, the voxels at distance less than or equal to a from \(\mathbf {x}_i\) are filled with a value of H, while the others are set to h, aiming to have a nonzero constant background. In our simulations, we set \(h=10\) and \(H=220\). These values were chosen in order to simulate realistic tiff images, which usually have values in [0, 255]. In Fig. 1a, a 2D illustration of this procedure is depicted: The 3D case follows the same procedure (Fig. 1b).
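The discretization step can be sketched in a few lines. The paper's implementation is in MATLAB; the following NumPy version is only an illustrative re-implementation, and the convention that voxel centers sit at \((i+0.5)\,\mathrm{d}x\) is our assumption:

```python
import numpy as np

def discretize(centers, a, shape, d):
    """Fill voxels within distance a of any particle center with H,
    the rest with the constant background h.

    centers: (N, 3) array of centers in physical units
    a: known particle radius; shape: (Nx, Ny, Nz); d: (dx, dy, dz).
    """
    h, H = 10, 220  # background and particle intensities, as in the text
    vol = np.full(shape, h, dtype=float)
    # physical coordinates of voxel centers (our (i + 0.5)*dx convention)
    grids = np.meshgrid(*[(np.arange(n) + 0.5) * s for n, s in zip(shape, d)],
                        indexing='ij')
    for c in centers:
        dist2 = sum((g - ci) ** 2 for g, ci in zip(grids, c))
        vol[dist2 <= a ** 2] = H
    return vol
```

A single particle of radius 2 placed at the center of a \(10^3\) unit-voxel volume fills a small ball of H-valued voxels while the corners remain at h.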
Blurring and noise A blurring operator of Gaussian type (dimension: \(5\times 5\) pixels, zero mean and unit variance, created via the MATLAB function fspecial and applied via imfilter) is applied to each frame, simulating the perturbation given by the acquisition system. Gaussian noise of level \(\sigma \) is then added to each frame: Let \({\varvec{\eta }}\sim {\mathcal {N}}\left( {\varvec{0}}, \sigma \mathbf {I}\right) \) be a realization of a Gaussian multivalued random variable of zero mean and covariance matrix \(\sigma \mathbf {I}\). The noise \({\varvec{\eta }}\) is added according to the following formula (which is a slight modification of the one in [29]):
$$\begin{aligned} \mathbf {F}_z \leftarrow \mathbf {F}_z + \sigma \, \frac{\Vert \mathbf {F}_z\Vert _{\mathrm{F}}}{\Vert {\varvec{\eta }}\Vert _{\mathrm{F}}}\, {\varvec{\eta }}, \end{aligned}$$
with \(\mathbf {F}_z\) being the zth frame and \(\Vert \cdot \Vert _{\mathrm{F}}\) the Frobenius norm. A different noise realization \({\varvec{\eta }}\) is created for each frame. Moreover, in order to obtain the most realistic data, Poisson noise is composed with the images via the MATLAB function imnoise, belonging to the Image Processing Toolbox. This function is employed with the rescaling 1e12*imnoise(1e-12*F,’Poisson’), with F being the current frame (see the MATLAB help for the imnoise function for more details about this procedure). Finally, the intensity values of each frame are rescaled into the interval [0, 255]. See Fig. 1c for a visual inspection of the result.
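The whole degradation pipeline (blur, scaled Gaussian noise, Poisson noise, rescaling) can be mimicked without MATLAB. The sketch below uses SciPy in place of fspecial/imfilter/imnoise; the Frobenius-norm scaling of the Gaussian noise is our assumption about the formula modified from [29]:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def degrade(frame, sigma=0.2):
    """Blur a frame, add norm-scaled Gaussian noise, compose Poisson noise,
    and rescale intensities to [0, 255]."""
    # Gaussian blur with roughly 5x5 support (stand-in for fspecial+imfilter)
    blurred = gaussian_filter(frame, sigma=1.0, truncate=2.0)
    # Gaussian noise scaled by the ratio of Frobenius norms (our assumption)
    eta = rng.standard_normal(frame.shape)
    noisy = blurred + sigma * np.linalg.norm(blurred, 'fro') \
        / np.linalg.norm(eta, 'fro') * eta
    noisy = np.clip(noisy, 0, None)
    # Poisson noise is composed with (not added to) the image
    noisy = rng.poisson(noisy).astype(float)
    lo, hi = noisy.min(), noisy.max()
    return (noisy - lo) / (hi - lo) * 255.0
```

Applying `degrade` to a frame containing one bright square returns an image whose values span exactly [0, 255] after the final rescaling.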
3 Algorithm
The steps for the particles recognition problem in the three-dimensional case are presented in Algorithm 1:
Section 3.1 is devoted to illustrating the ideas and procedures behind lines 2–7 of Algorithm 1. In Sect. 4 the denoising step (line 2) is pursued via classical or variational approaches: We present a brief introduction to the latter. Section 3.2 explains how the 2D information obtained from the frames can be used to estimate the particle center coordinates in three dimensions (lines 8–11).
3.1 Frames Processing
The procedures in lines 2–7 are expanded below.
Denoising The presence of noise, together with the blurring operator, could lead to artifacts in the estimation of the particle positions and diameters; hence, a denoising and deblurring procedure is necessary. A simple approach is Gaussian filtering: This procedure is very quick and inexpensive, performed via MATLAB's native FFT algorithms; see Fig. 2b for the results. The pros of this approach are its speed and its reduction of the noise, while the drawback lies in the fact that the image is oversmoothed: The perturbing effect of the PSF is augmented, resulting in blurred edges.
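Gaussian filtering in the Fourier domain is a one-liner in principle; the minimal NumPy sketch below (our re-implementation, not the paper's MATLAB code) multiplies the image spectrum by the transform of a Gaussian of standard deviation `sigma` pixels:

```python
import numpy as np

def fft_gaussian_denoise(img, sigma=1.0):
    """Gaussian low-pass filtering performed in the Fourier domain."""
    ky = np.fft.fftfreq(img.shape[0])  # frequencies in cycles/pixel
    kx = np.fft.fftfreq(img.shape[1])
    KX, KY = np.meshgrid(kx, ky)
    # Fourier transform of a Gaussian kernel with std `sigma` (in pixels)
    H = np.exp(-2 * (np.pi ** 2) * (sigma ** 2) * (KX ** 2 + KY ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))
```

A constant image passes through unchanged (only the DC component is nonzero), while white noise has its variance reduced, which is exactly the oversmoothing trade-off discussed above.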
We propose a denoising strategy based on an optimization method. The linear model of image acquisition reads as
$$\begin{aligned} \mathbf {g}= \mathbf {H}\mathbf {f}+ b + {\varvec{\eta }}, \end{aligned}$$(3.1)
where \(\mathbf {H}\) is the linear operator representing the Point Spread Function (PSF), i.e., the blurring operator, b is a constant background term, \(\mathbf {f}\) is the image to be recovered and \(\mathbf {g}\) is the actual recorded image corrupted by the statistical noise \({\varvec{\eta }}\). Since the direct solution of this linear model presents several issues, due to the ill-conditioning of \(\mathbf {H}\) and to the presence of noise, one resorts to computing a restored image \({{\tilde{\mathbf {f}}}}\) by solving
$$\begin{aligned} {{\tilde{\mathbf {f}}}}\in \underset{\mathbf {f}\in {\mathcal {C}}}{\text {argmin}}\;\; f_0(\mathbf {f}) + \mu f_1(\mathbf {f}), \end{aligned}$$(3.2)
where \({\mathcal {C}}\) is a convex, non-empty closed set of constraints (e.g., the nonnegative orthant), \(\mu >0\) is a real parameter and \(f_0\) and \(f_1\) are the fit-to-data and regularization functions, respectively. The role of \(f_0\) is to measure the discrepancy between the recovered image and the given data \(\mathbf {g}\), while \(f_1\) helps in reducing the influence of the noise and preserves some characteristics of the solution (e.g., sharp edges). The choice of \(f_0\) mainly depends on the noise perturbing the data. In case of additive Gaussian noise, the most widely employed functional is the classical Least Squares:
$$\begin{aligned} f_0(\mathbf {f}) = \frac{1}{2}\Vert \mathbf {H}\mathbf {f}+ b - \mathbf {g}\Vert _2^2. \end{aligned}$$
In case of Poisson noise, which is signal dependent, \(f_0\) is chosen as the generalized Kullback–Leibler functional:
$$\begin{aligned} f_0(\mathbf {f}) = \sum _{i}\left\{ g_i \ln \frac{g_i}{\left( \mathbf {H}\mathbf {f}+ b\right) _i} + \left( \mathbf {H}\mathbf {f}+ b\right) _i - g_i\right\} . \end{aligned}$$
The regularization functional \(f_1\) is chosen according to the characteristics one desires to preserve in the solution: Sparseness is promoted via the component-wise \(\ell _1\) norm [27]; diffuse components (e.g., in astronomical imaging [10]) are recognized via Tikhonov regularization; the recovery of a few diffuse components is pursued by employing a convex combination of the \(\ell _1\) and \(\ell _2\) norms, namely the Elastic-Net [4, 17, 53]; and sharp edges in images can be detected via the Total Variation functional [18, 24, 46] (or its differentiable version, called the hyper-surface potential [54]). In this work, \(f_0\) is chosen as the Kullback–Leibler functional and the regularization \(f_1\) is the Total Variation: The former is selected due to the presence of Poisson noise, and the latter promotes the sharp edges of the spherical particles. For a complete review of methods for image reconstruction from Poisson data, the interested reader can refer to [11].
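For concreteness, the two building blocks of the objective can be evaluated as follows. This is a hedged sketch in NumPy (the paper works in MATLAB): the PSF is taken as the identity (pure denoising) and the small epsilon guarding the logarithm is our numerical safeguard:

```python
import numpy as np

def kl_divergence(f, g, b=0.0):
    """Generalized Kullback-Leibler fidelity between model f + b and data g
    (H = identity assumed here, i.e., pure denoising)."""
    m = f + b
    eps = 1e-12  # avoids log(0); our safeguard, not part of the definition
    return np.sum(g * np.log((g + eps) / (m + eps)) + m - g)

def total_variation(f):
    """Isotropic discrete TV: sum over pixels of the gradient magnitude,
    with forward differences replicated at the border."""
    gx = np.diff(f, axis=1, append=f[:, -1:])
    gy = np.diff(f, axis=0, append=f[-1:, :])
    return np.sum(np.sqrt(gx ** 2 + gy ** 2))
```

The KL fidelity vanishes when the model equals the data, a flat image has zero TV, and a unit vertical step across four rows contributes a TV of 4, matching the role of TV as an edge detector.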
The numerical experiments will compare two denoising techniques: simple Gaussian filtering and the variational approach (3.2). An approximate solution to the latter is computed via an inexact Bregman technique [7, 9], which consists of an iterative procedure where \(f_1\) is substituted with its inexact Bregman distance computed at the previous iterate. The inexact Bregman distance of a convex function is defined as follows and refers to some classical concepts of convex analysis [43].
Definition 1
Let f be a proper, convex function. The inexact Bregman distance of f of \(\mathbf {x}\) from \(\mathbf {y}\) is
$$\begin{aligned} \varDelta _f^{\mathbf {p}}\left( \mathbf {x},\mathbf {y}\right) = f(\mathbf {x}) - f(\mathbf {y}) - \langle \mathbf {p}, \mathbf {x}- \mathbf {y}\rangle , \end{aligned}$$
where \(\mathbf {p}\in \partial _\varepsilon f(\mathbf {y})\) is an \(\varepsilon \)-subgradient of f in \(\mathbf {y}\).
This substitution triggers an iterative procedure depicted in Algorithm 2.
The following result (originally stated and proved in [7]) provides a convergence result.
Theorem 1
Let \(f_0\) and \(f_1\) be nonnegative, proper, lower semicontinuous and convex functions, with \(\mathrm{dom}(f_0) \subset \mathrm{dom}(f_1)\), and assume the relative interiors of the domains of \(f_0\) and \(f_1\) have at least one point in common. We assume that, for any k, there exists a minimizer of the subproblem (3.4) and that \({\hat{x}}\) is a minimizer of \(f_0\) such that \(f_1({\hat{x}})<\infty \). If for any \(k\ge 0\) the inner solver determines \(x^{k+1}\), \(\mathbf {q}^{k+1}\in \partial f_0(x^{k+1})\) and \(\mathbf {p}^{k+1} \in \partial _{\varepsilon _{k+1}}f_1(x^{k+1})\) so that the following condition on \({\varvec{\gamma }}^{k+1}=\frac{1}{\beta }\mathbf {q}^{k+1} +\mathbf {p}^{k+1}-\mathbf {p}^{k}\) and \(\varepsilon _{k+1}\) holds
$$\begin{aligned} \Vert {\varvec{\gamma }}^{k+1}\Vert \le \eta _{k+1}, \qquad \varepsilon _{k+1} \le \nu _{k+1}, \end{aligned}$$
with \(\displaystyle \sum _{i=1}^\infty \eta _i<\infty \) and \(\displaystyle \sum _{i=1}^\infty i\nu _i<\infty \), then we have that
$$\begin{aligned} f_0\left( x^{k}\right) \longrightarrow f_0\left( {\hat{x}}\right) \quad \text { as } k \rightarrow \infty . \end{aligned}$$
Moreover, if the level subsets of \(f_0\) are bounded, a limit point of the sequence \(\{x^k\}\) is a minimizer of \(f_0\); if \({\hat{x}}\) is the unique minimizer of \(f_0\), then \(x^k\rightarrow {\hat{x}}\) as \(k\rightarrow \infty \).
This technique has been employed in this work because it has been proved that the usage of the Bregman distance induces a contrast enhancement [3, 7, 16] in the recovered images: This is a key point, since having sharp edges eases the recognition of the particles' profiles. Furthermore, real-world data present some issues due to the physical process of acquiring 3D images: The top frames suffer from fluorescence that worsens the contrast, possibly merging the particles with the background (cf. Fig. 13a, bottom left corner).
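The mechanics of Bregman iteration can be seen on a toy problem that is much simpler than the paper's KL + TV setting: take \(f_0(x)=\frac{1}{2}\Vert x-b\Vert ^2\) and \(f_1 = \Vert \cdot \Vert _1\), with exact subgradients (\(\varepsilon = 0\)). Each subproblem then has a closed-form soft-thresholding solution, and the iterates approach the minimizer of \(f_0\), with large components recovered first; this choice of \(f_0, f_1\) is ours, purely for illustration:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * l1-norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bregman_l1(b, beta, iters):
    """Exact Bregman iteration for f0(x) = 0.5||x - b||^2, f1 = l1-norm.

    Subproblem: min_x f0(x) + (1/beta) * (f1(x) - <p_k, x>), whose closed
    form is x = soft_threshold(b + p_k/beta, 1/beta). The subgradient update
    p_{k+1} = p_k - beta*(x - b) keeps p_{k+1} in the subdifferential of f1.
    """
    p = np.zeros_like(b)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = soft_threshold(b + p / beta, 1.0 / beta)
        p = p - beta * (x - b)
    return x
```

On `b = [3, -0.5, 0]` with `beta = 1`, the first iterate recovers only the large component (shrunk to 2); a few more iterations add back the shrinkage and reach `b` itself, the minimizer of \(f_0\), illustrating the convergence statement of Theorem 1 and the contrast-enhancement behavior.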
Search for the connected components In order to obtain an estimate of the profile and of the center of the particles in the current frame, they must be localized first. The strategy is quite simple: The first step consists of thresholding the denoised frame by employing the Otsu method [41] (see Fig. 2c). Then, the K connected components \(\{L_k\}_{k=1, \ldots ,K}\) in the thresholded frame are recognized and labeled (Fig. 2d). The MATLAB function bwlabel is set to assume 8-connected neighbors. At this stage, the area of the kth connected component is stored in \(a_k\): This area will be used for the estimation of the center in three dimensions [see (3.7)]. The center of mass \(m_k\) of \(L_k\) is computed, together with a first raw estimate \(r_k\) of the radius: \(r_k\) is the distance of \(m_k\) from the furthest pixel in \(L_k\) (Fig. 3a).
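The two operations above (Otsu thresholding and 8-connected labeling, i.e., the roles of MATLAB's graythresh/bwlabel) can be sketched dependency-free; this is our simplified re-implementation, not the paper's code:

```python
import numpy as np
from collections import deque

def otsu_threshold(img, nbins=256):
    """Otsu's method: the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                      # class-0 probability
    w1 = 1.0 - w0                          # class-1 probability
    m0 = np.cumsum(p * centers)            # cumulative first moment
    mt = m0[-1]                            # global mean
    with np.errstate(divide='ignore', invalid='ignore'):
        var_b = (mt * w0 - m0) ** 2 / (w0 * w1)
    var_b[~np.isfinite(var_b)] = 0.0
    return centers[np.argmax(var_b)]

def label_components(mask):
    """8-connected component labeling via breadth-first flood fill
    (the role of bwlabel with 8-connectivity)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue
        current += 1
        labels[i, j] = current
        queue = deque([(i, j)])
        while queue:
            y, x = queue.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    v, u = y + dy, x + dx
                    if (0 <= v < mask.shape[0] and 0 <= u < mask.shape[1]
                            and mask[v, u] and not labels[v, u]):
                        labels[v, u] = current
                        queue.append((v, u))
    return labels, current
```

On a frame with two bright squares over a dark background, the Otsu threshold separates foreground from background and the labeling returns two components, from which areas \(a_k\), centers of mass \(m_k\) and raw radii \(r_k\) follow directly.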
Least Squares fit Once the connected components are recognized, a Least Squares fit is performed on each one in order to estimate the profile and the center of the particle. First of all, the Total Variation functional [54] is applied to the current denoised frame, namely \(\mathbf {D}\), aiming to find the edges of the particles (Fig. 3c). The discrete version of the Total Variation functional for an image \(\mathbf {d}\) reads as
$$\begin{aligned} \mathbf {TV}\left( \mathbf {d}\right) = \sum _{i=1}^{m} \Vert A_i \mathbf {d}\Vert _2, \end{aligned}$$
where \(\mathbf {d}\) is the (column-wise) vectorized image, \(A_i\in {\mathbb {R}}^{2\times m}\) is the discrete version of the gradient of \(\mathbf {d}\) at the pixel i [3, 13].
For the sake of clarity, we focus on the kth component, assuming that it is well separated from all the others.
-
1.
A squared window of interest (WOI) centered at \(m_k\), of width \(2\times \left( 1.5 r_k\right) \), is opened (Fig. 3b) in \(\mathbf {TV}\left( \mathbf {D}\right) \). If a particle is near one edge of the frame, the window is reduced until it falls entirely inside the frame. This reduction is not performed evenly on the two dimensions: It can lead to a rectangular WOI.
-
2.
The WOI is thresholded via a value obtained again with the Otsu method: This thresholding yields the positions of the largest changes in intensity, which are ideally located on the profile edge, and at the same time discards the fluctuations given by the residual noise (Fig. 3c).
-
3.
The positions of the q pixels above the threshold are stored in an array \(\{x_i,y_i,w_i\}_{i=1, \ldots ,q}\), together with the corresponding intensity values \(w_i\).
-
4.
A constrained regularized Least Squares fit is performed (Fig. 3d):
$$\begin{aligned} \tilde{{\varvec{\alpha }}}\sim \underset{\begin{array}{c} {\alpha _1^2+\alpha _2^2-\alpha _3-a^2\le 0} \end{array}}{\text {argmin}}\;\frac{1}{2}\Vert \mathbf {W}\mathbf {R}{\varvec{\alpha }} - \mathbf {W}\mathbf {y}\Vert _2^2 + \frac{\mu }{2}\Vert {\varvec{\alpha }}\Vert _2^2 \end{aligned}$$(3.6)
where
$$\begin{aligned} \mathbf {W}= & {} \begin{pmatrix} \sqrt{w_1} &{} 0 &{} \dots &{} 0\\ 0 &{} \sqrt{w_2} &{} \dots &{} 0\\ 0 &{} 0 &{} \ddots &{} 0\\ 0 &{} 0 &{} \dots &{} \sqrt{w_q}\\ \end{pmatrix}, \, \mathbf {R}= \begin{pmatrix} -2 x_1 &{} -2 y_1 &{} 1\\ -2 x_2 &{} -2 y_2 &{} 1\\ -2 x_3 &{} -2 y_3 &{} 1\\ \vdots &{} \vdots &{} \vdots \\ -2 x_q &{} -2 y_q &{} 1\\ \end{pmatrix}\\ \mathbf {y}= & {} -\begin{pmatrix} x_1^2 + y_1^2\\ x_2^2 + y_2^2\\ \vdots \\ x_q^2 + y_q^2\\ \end{pmatrix}, \quad {\varvec{\alpha }}= \begin{pmatrix} \alpha _1\\ \alpha _2\\ \alpha _3 \end{pmatrix} \end{aligned}$$
and a is the true radius of the particles. The coordinates of the estimated center \(\left( {x}_{k}^{e},{y}_{k}^{e}\right) \) are simply \(\left( {{\tilde{\alpha }}}_1,{{\tilde{\alpha }}}_2\right) \), where \(\tilde{{\varvec{\alpha }}}\) is the solution to the minimization problem, while the estimated radius \(r_k^e\) is computed as \(r_k^e = \sqrt{{{\tilde{\alpha }}}_1^2+{{\tilde{\alpha }}}_2^2-{{\tilde{\alpha }}}_3}\): This is the main reason for the constraint in (3.6).
The regularization term is included because the matrix \(\mathbf {W}\mathbf {R}\) could be ill-conditioned [28]; hence, the algorithm could fail to provide a feasible solution (e.g., the estimated radius could be greater than a). In order to avoid that, the parameter \(\mu \) is set to \(1/{\mathcal {K}}\), with \({\mathcal {K}}\) being the condition number of \(\mathbf {W}\mathbf {R}\). Numerical experiments have shown that \({\mathcal {K}}\) is usually large; hence, \(\mu \) is small, resulting in a weak regularization, but one still sufficient to avoid infeasible solutions. Sometimes \({\mathcal {K}}\) is so large that even the regularization does not yield a feasible estimate: In this case, the regularization parameter is repeatedly increased by a factor of 1.1 until the constraint is satisfied.
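Problem (3.6) can be solved with any constrained quadratic solver. The sketch below uses SciPy's SLSQP as a stand-in for MATLAB's fmincon; the initialization and the fallback value of \(\mu \) are our choices:

```python
import numpy as np
from scipy.optimize import minimize

def fit_circle(x, y, w, a, mu=None):
    """Weighted, Tikhonov-regularized algebraic circle fit with radius <= a.

    Solves  min 0.5||W(R alpha - y)||^2 + (mu/2)||alpha||^2
    s.t.    alpha1^2 + alpha2^2 - alpha3 - a^2 <= 0,
    returning the center (alpha1, alpha2) and the radius
    sqrt(alpha1^2 + alpha2^2 - alpha3), as in the text.
    """
    R = np.column_stack([-2 * x, -2 * y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    sw = np.sqrt(w)                      # diagonal of W
    if mu is None:
        mu = 1.0 / np.linalg.cond(sw[:, None] * R)   # mu = 1/K, as above
    obj = lambda al: 0.5 * np.sum((sw * (R @ al - b)) ** 2) \
        + 0.5 * mu * al @ al
    # SLSQP convention: inequality constraints are fun(al) >= 0
    cons = {'type': 'ineq',
            'fun': lambda al: a ** 2 - (al[0] ** 2 + al[1] ** 2 - al[2])}
    al0 = np.array([x.mean(), y.mean(),
                    x.mean() ** 2 + y.mean() ** 2 - a ** 2])
    al = minimize(obj, al0, constraints=[cons], method='SLSQP').x
    return al[:2], np.sqrt(max(al[0] ** 2 + al[1] ** 2 - al[2], 0.0))
```

On uniformly weighted points sampled from a circle of radius 1 centered at (3, 4), with true radius bound `a = 1.2` and a tiny explicit `mu`, the fit recovers center and radius to high accuracy.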
Remark 1
In the numerical experiment of this work, the solution to (3.6) is found by employing the MATLAB function fmincon.
Remark 2
One may wonder whether a simpler procedure could be used in place of this Total Variation approach. We compared the results (on synthetic tests) obtained via our proposed approach with those achieved via a more direct strategy. This simple procedure estimates the center of each particle profile via the weighted mean of the elements of the connected component, while the radius is computed from the variances of these elements. In this way, the achieved total error \(\mathrm T\) is around 0.15, the vertical error \(\mathrm V\) is close to 0.10–0.11 and the plane error \(\mathrm P\) ranges between 0.08 and 0.09. Comparing these results with those obtained via the Total Variation approach convincingly shows that the latter is more effective.
We now focus on a pathological case, where two particles are very close (Fig. 4a): The situation is problematic, but still tractable. When the WOI is opened around one particle, it may happen that some pixels belonging to the edge of the other fall inside the window (Fig. 4a, b), affecting the Least Squares procedure as it is evident in Fig. 4c. Thus, a further control is needed in this case. Another search for connected components is performed inside the WOI: If the number of the found components is greater than 1 (Fig. 4b), then only the largest one is kept (Fig. 4c). Adopting this procedure leads to a better fit, as shown in Fig. 4d.
Unfortunately, the case in Fig. 5a can occur: The above procedure fails to recognize two distinct particles and computes a center which is very close to the center of mass of the pair. Two possible strategies are proposed, but they still need to be investigated.
The first is to perform some morphological operations [42], in order to separate the different particles.
The second consists of performing a LS fit using an ellipse model instead of a circumference (Fig. 5c): If the ratio of the semi-axes of the ellipse is significantly greater or smaller than 1, there is more than one particle inside the ellipse, owing to the assumed sphericity of the particles. Another check is given by the eccentricity of the ellipse. Thus, using the information (length and orientation) of the axes of the ellipse, the WOI can be divided into two smaller WOIs (Fig. 5d): Another LS ellipse fit is pursued in each portion. For each one, the ratio of the semi-axes is checked again: If it is around 1, then a particle is found; otherwise, the same procedure is iterated.
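The semi-axis-ratio check of the second strategy can be illustrated with a simple algebraic conic fit; the exact ellipse-fitting method used in the experiments is not specified above, so the least-squares conic below is only our illustrative stand-in:

```python
import numpy as np

def ellipse_axis_ratio(x, y):
    """LS fit of the conic a x^2 + b xy + c y^2 + d x + e y = 1 and return
    the semi-axis ratio (>= 1) from the quadratic-form eigenvalues.

    Semi-axes are proportional to 1/sqrt(lambda_i), so the ratio of the
    major to the minor semi-axis is sqrt(lambda_max / lambda_min).
    """
    M = np.column_stack([x ** 2, x * y, y ** 2, x, y])
    a, b, c, d, e = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
    lam = np.linalg.eigvalsh(np.array([[a, b / 2], [b / 2, c]]))
    lam = np.clip(lam, 1e-12, None)  # guard against degenerate fits
    return float(np.sqrt(lam.max() / lam.min()))
```

Points on a circle give a ratio of 1, while an ellipse with semi-axes 2 and 1 gives a ratio of 2, which would trigger the split of the WOI into two smaller windows.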
Remark 3
The situation depicted in Fig. 5 can be worse: Three or more particles can cluster, leading to an ellipsoid fit which strongly resembles a circumference. In this undesired case, the control on the ratio of the semi-axes could be misleading, while the eccentricity can give a more reliable output. Another strategy could be to rely on more advanced image segmentation than simple thresholding, for example via a Mumford–Shah functional [25, 51, 55].
3.2 Three-Dimensional Estimation
The procedure behind lines 8–11 of Algorithm 1 for the estimation of the centers of the particles is now made explicit. It consists of two main steps: First, given the 2D estimate of the center of a particle in a frame, two possible 3D candidates are computed via the Pythagorean theorem. In a second step, all candidates belonging to the same particle are clustered.
Creation of the candidates This procedure relies on the assumption that the radius a of the particles is known. Focusing on a single particle, assume we have estimated the center \((x^e,y^e)\) and the radius \(r^e\) of its circular profile in the zth frame. The distance d between the true center and the considered frame is easily computed as \(d = \sqrt{a^2-(r^e)^2}\) (cf. Fig. 6a). Hence, the two candidates for the third coordinate are \(z\mathrm{d}z - d\) and \(z\mathrm{d}z + d\) (with \(\mathrm{d}z \) being the vertical discretization, equal to the separation between acquisition planes). At this point, no prior information is available about where the true center is located. A single particle can be spanned by at most Z frames, namely
$$\begin{aligned} Z = \left\lceil \frac{2a}{\mathrm{d}z}\right\rceil . \end{aligned}$$
Hence, in the ideal case, Z estimates of the 2D centers are available, one for each frame intersecting the particle, leading to 2Z candidates for the true center (Fig. 6b). Due to the geometric properties, Z candidates will cluster in a region around the true center (blue highlighted region in Fig. 6b): The next step consists in finding this cluster.
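The candidate construction is a direct application of the Pythagorean theorem; the sketch below also encodes our reading that the maximum number of spanned frames is \(\lceil 2a/\mathrm{d}z\rceil \):

```python
import numpy as np

def z_candidates(r_e, a, z_index, dz):
    """Two candidate z-coordinates for a particle whose circular section has
    estimated radius r_e in frame number z_index (Pythagorean theorem)."""
    d = np.sqrt(max(a ** 2 - r_e ** 2, 0.0))  # frame <-> true-center distance
    z = z_index * dz
    return z - d, z + d

def frames_spanned(a, dz):
    """Maximum number Z of frames a particle of radius a can intersect
    (our reading: Z = ceil(2a / dz))."""
    return int(np.ceil(2 * a / dz))
```

With \(a=1\), \(r^e=0.6\) and frame 10 at spacing \(\mathrm{d}z=0.3\), the candidates are 2.2 and 3.8; with the Sect. 4 discretization (\(a = 1\,\upmu \)m, \(\mathrm{d}z = 0.3182\,\upmu \)m), the formula gives \(Z = 7\), matching the value used there.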
Finding the clusters and computing the center For each center in each frame, two candidates are created: Once all the frames are processed, the situation in Fig. 7 occurs. For the sake of clarity, we call \({\mathcal {R}}\) the set of centers found in the frames and \({\mathcal {C}}\) the set of possible candidates computed as described in the previous paragraph (namely, the points in Fig. 6a). A clustering around the true centers of the particles is expected. One strategy could consist of searching for the Z nearest neighbors [36] lying in a ball of radius \(\rho _\mathrm{raw} a\), \(0<\rho _{\mathrm{raw}}< 1\) (recall that Z is the maximum number of frames spanned by a particle), but a different approach is adopted here:
-
1.
A first raw estimation of the center of the particles is computed, using the set \({\mathcal {R}}\);
-
2.
The Z nearest neighbors to these approximated centers are found within the candidates in \({\mathcal {C}}\).
The first step groups the points in \({\mathcal {R}}\) that belong to the same particle. Once these clusters are detected and labeled, the corresponding profiles are considered and used in a LS sphere fit, in order to get a first raw estimate of the centers of the particles (see Fig. 9a for a visual inspection of this procedure). Let \(\{R_i\}_{i=1, \ldots ,q}\) be the set of these raw estimates, and focus on one of them, namely the kth one. The Z nearest neighbors to \(R_k\) are searched for within a range \(\rho _\mathrm{est} a\), \(0<\rho _{\mathrm{est}}<1\): Let \(\left\{ \left( {x}_{k,i}^{e}, {y}_{k,i}^{e}, {z}_{k,i}^{e} \right) \right\} _{i=1, \ldots ,Z}\) be these neighbors (ideally, these are the points lying in the small highlighted circle of Fig. 6a). The estimate of the kth center \(\mathbf {x}_k^e = \left( {x}_{k}^{e},{y}_{k}^{e},{z}_{k}^{e} \right) \) is computed as
$$\begin{aligned} \mathbf {x}_k^e = \frac{1}{A} \left( \mathbf {a}^\top \mathbf {x}_{(k)},\, \mathbf {a}^\top \mathbf {y}_{(k)},\, \mathbf {a}^\top \mathbf {z}_{(k)} \right) ^\top , \end{aligned}$$(3.7)
where \(\mathbf {a}=\left( a_1,a_2, \ldots ,a_Z\right) ^\top \), \(a_i\) is the area of the connected component related to the center \(\left( {x}_{k,i}^{e},{y}_{k,i}^{e}\right) \), \(A = \displaystyle \sum \nolimits _{i=1}^{Z}a_i\) and \(\mathbf {x}_{(k)}=\left( {x}_{k,1}^{e},{x}_{k,2}^{e}, \ldots , {x}_{k,Z}^{e}\right) ^\top \) (and similarly for \(\mathbf {y}_{(k)}\) and \(\mathbf {z}_{(k)}\)). A weighted mean is employed in order to lower the influence on the final estimate of unreliable 2D estimates, for example those coming from frames which intersect a particle near its top or its bottom, leading to high uncertainty.
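The area-weighted mean described above is a one-liner; the NumPy sketch below makes explicit that sections cut near a particle's top or bottom, having small areas, receive small weights:

```python
import numpy as np

def weighted_center(candidates, areas):
    """Area-weighted mean of the Z candidate points gathered around a raw
    center: each candidate is weighted by the area of the connected
    component it originates from, normalized by the total area."""
    w = np.asarray(areas, dtype=float)
    pts = np.asarray(candidates, dtype=float)
    return (w[:, None] * pts).sum(axis=0) / w.sum()
```

For two candidates at the origin and at (2, 2, 2) with areas 1 and 3, the estimate lands at (1.5, 1.5, 1.5), three quarters of the way toward the large-area candidate.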
Remark 4
It could happen that the nearest neighbors to \(R_k\) are fewer than Z: This can be due to low-quality images, to the procedure failing to recover the 2D center in some frames or to the particle having moved during acquisition.
Remark 5
The perceptive reader may wonder why the profile pixels (Fig. 3c) found in the whole volume are not employed to directly estimate the center and the radius via a 3D fit. This approach has been investigated: The profile pixels are identified (cf. Fig. 8), and the main problem is to gather the points belonging to the same particle. Employing a simple nearest neighbors search within a distance 1.2a–1.3a (because the TV functional spreads the profile) would fail, since it may assign to one particle pixels which actually belong to another. We then resort to using our procedure to obtain a first estimate of the centers, then search for the nearest neighbors within a distance of 1.2a–1.3a and then apply a 3D LS sphere fit. We explored two approaches for a volume of dimension \(76.8\times 76.8\times 7\) \(\upmu \)m, with a discretization of \(512\times 512\times 10\) voxels and 100 particles (see Sect. 4 for details). Twenty different simulations were considered to validate the approaches.
-
1.
Employing a 3D unweighted LS fit leads to a total error T of \(\sim \)20% of a voxel, which is not sufficiently precise for real-life applications.
-
2.
Including also the pixel values as weights in the estimation, following (3.6), leads to a total error of 0.0944, while our procedure provides \({T} = 0.0813\); the corresponding in-plane errors are 0.0922 versus our 0.0883. Only the error along the z axis favors the 3D fit, 0.0139, while our procedure reaches \({V} = 0.0259\) (cf. Table 1).
Although the weighted Least Squares fit is very appealing in the 3D case thanks to the ease of its extension, it does not provide better results than the proposed procedure, especially in the total error, which is of main interest. See Sect. 4 for details about error measurements, performance and results.
4 Numerical Tests
Two different experiments are carried out to validate the performance of the proposed algorithm. The first is devoted to evaluating the performance on synthetic datasets. Dataset construction is described in Sect. 2, with two different noise realizations (Gaussian plus Poisson noise and pure Poisson). The evaluation is done by using three different error measurements, described in the subsequent paragraph. A large number of simulations is carried out, aiming to produce a sufficient amount of data to draw reliable conclusions. Moreover, the performance of the algorithm is also evaluated with respect to the vertical resolution, since this is an important issue in real-life applications. The second experiment concerns real 3D data: It considers a scanned volume of particles with a diameter of \(3\, \upmu \)m suspended in a glycerol/water mixture. Both experiments are carried out on a MacBookPro, equipped with 16-GB RAM and an Intel\(^\circledR \) Core™ i7 CPU (2.2GHz), running MATLAB 2015a. The MATLAB code is available at http://www-syscom.univ-mlv.fr/~benfenat/Software.html.
Error measurements In order to evaluate the performance of our algorithm, inspired by [19, 44], three different error measurements are adopted. Denote by \(\mathbf {c}=\left( c_x, c_y, c_z\right) ^\top \) the true coordinates of a center and by \(\mathbf {e}= \left( e_x, e_y, e_z \right) ^\top \) the coordinates of the corresponding estimate. The total error \({ {T}}\) is
$$\begin{aligned} { {T}} = \sqrt{\left( \frac{c_x-e_x}{\mathrm{d}x}\right) ^2 + \left( \frac{c_y-e_y}{\mathrm{d}y}\right) ^2 + \left( \frac{c_z-e_z}{\mathrm{d}z}\right) ^2}, \end{aligned}$$(4.1)
which measures the error in voxel units.
The in-plane error \({ {P}}\) and the out-of-plane error \({ {V}}\) are defined as
$$\begin{aligned} { {P}} = \sqrt{\left( \frac{c_x-e_x}{\mathrm{d}x}\right) ^2 + \left( \frac{c_y-e_y}{\mathrm{d}y}\right) ^2}, \qquad { {V}} = \frac{\left| c_z-e_z\right| }{\mathrm{d}z}. \end{aligned}$$
The former aims to measure the error on the estimation of the particles’ position in the single frames w.r.t. pixel precision, while the latter focuses on the vertical displacement.
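The three error measures can be computed as follows; the normalization by the voxel dimensions is our reading, consistent with errors being quoted as fractions of a pixel/voxel:

```python
import numpy as np

def errors(c, e, dx, dy, dz):
    """Total (T), in-plane (P) and out-of-plane (V) errors, in voxel units,
    between a true center c and its estimate e (both 3-tuples)."""
    ux = (c[0] - e[0]) / dx   # displacement in units of the pixel size dx
    uy = (c[1] - e[1]) / dy
    uz = (c[2] - e[2]) / dz   # displacement in units of the frame spacing
    T = np.sqrt(ux ** 2 + uy ** 2 + uz ** 2)
    P = np.sqrt(ux ** 2 + uy ** 2)
    V = abs(uz)
    return T, P, V
```

For a displacement of (0.3, 0.4, 0.3) with voxel dimensions (0.1, 0.1, 0.3), this gives an in-plane error of 5 pixels, an out-of-plane error of 1 frame and a total error of \(\sqrt{26}\) voxels.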
First synthetic test: Gaussian and Poisson noise Following the notation of Sect. 2, the synthetic dataset is generated using the following settings: \(D_x=D_y = 76.8\,\upmu \)m, \(D_z=7\,\upmu \)m; the number N of particles is 100, with radius \(a=1\,\upmu \)m; the volume is discretized into a 3D array of dimension \(N_x=N_y=512, \, N_z=22\), leading to voxel dimensions \(\mathrm{d}x = \mathrm{d}y = 0.15\) \(\upmu \)m, \(\mathrm{d}z = 0.3182\) \(\upmu \)m. Two types of noise affect the frames: Gaussian (\(\sigma _n=0.2\)) and Poisson (see Sect. 2 for the details on how the Poisson noise is added).
Algorithm 1 is applied: The chosen denoising technique (Line 2) consists simply of filtering with a Gaussian filter of size 5 pixels and variance 1. The window of interest is chosen as described in Sect. 3.1. Due to the discretization of the 3D volume, the maximum number Z of frames that can be spanned by a particle is 7; hence, the estimation of the centers (Sect. 3.2) is achieved by

1. clustering the points in \({\mathcal {R}}\) lying within a distance 0.2a of each other and estimating the raw centers \(\{R_k\}_{k=1, \ldots ,q}\), and then

2. searching the Z nearest neighbors to each \(R_k\) within a distance 0.2a and applying (3.7).
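A simplified sketch of step 1 above (in Python rather than the authors' MATLAB): a greedy clustering that merges candidate points closer than \(0.2a\) into raw centers. The neighbor search of step 2 and the refinement (3.7) are omitted, and the function name is ours.

```python
import numpy as np

def raw_centers(points, radius):
    """Greedy clustering of candidate centers: points closer than
    `radius` (0.2*a in the experiments) to the current seed are merged
    and averaged into one raw center R_k."""
    points = [np.asarray(p, dtype=float) for p in points]
    centers = []
    while points:
        seed = points.pop(0)
        cluster, rest = [seed], []
        for p in points:
            (cluster if np.linalg.norm(p - seed) < radius else rest).append(p)
        points = rest
        centers.append(np.mean(cluster, axis=0))  # raw center R_k
    return centers
```

A more robust implementation would iterate until the cluster assignment stabilizes; the greedy single pass suffices to illustrate the idea.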
In Fig. 10, the three types of errors are depicted; the proposed procedure recognizes 99 particles (out of 100). The plots in Fig. 10 show that the mean of each error type (yellow dashed line) stays below 1/10 of a pixel/voxel (red line), which is the baseline of the state-of-the-art methods [19, 21]. In fact, the in-plane error is 0.0596, and the out-of-plane error is 0.0371. The total error, given by (4.1), is 0.0777, below the state-of-the-art baseline.
In order to study the behavior of the procedure on large numbers of particles, the above simulation is repeated 20 times (for a total of 2000 particles), storing the errors \({ {V}}, { {P}}, { {T}}\) for each run. The histograms of the total error \({ {T}}\) are shown in Fig. 11a, together with its estimated distribution. The histogram is fit with a \(\varGamma \) distribution with parameters \((k,\theta )\), where k is the shape parameter and \(\theta \) is the scale parameter. The mean of \({ {T}}\) is 0.0811. Only the total error is presented: The histogram of the in-plane error has the same appearance, with mean 0.0643, while the histogram of the out-of-plane error also exhibits a \(\varGamma \) behavior but is much more concentrated toward zero, with a mean of 0.0387. All three errors stay below the expected baseline of 10% [19].
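The \(\varGamma (k,\theta )\) fit of Fig. 11a can be illustrated with a simple method-of-moments estimate. The sample below is a synthetic surrogate for the 2000 stored total errors (the true values come from the 20 simulation runs); its parameters are chosen only so that the mean is close to the reported 0.0811.

```python
import numpy as np

# Surrogate sample standing in for the 2000 stored total errors T.
rng = np.random.default_rng(1)
errors = rng.gamma(shape=2.0, scale=0.04, size=2000)

# Method-of-moments fit of a Gamma(k, theta) distribution:
# mean = k * theta, variance = k * theta**2.
m, v = errors.mean(), errors.var()
k_hat = m**2 / v        # shape estimate
theta_hat = v / m       # scale estimate
```

A maximum-likelihood fit (e.g., `scipy.stats.gamma.fit` with the location pinned at zero) gives a sharper estimate, but the moment estimator already recovers the parameters well at this sample size.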
Our proposed procedure is based on the assumption that the true radius is known: This is a valid assumption in many applications, albeit with a certain degree of uncertainty (e.g., the radius may be known within a 10% error). In order to check whether the estimate \(r^e\) of the particle radii is reliable, Fig. 11b shows the histogram of the signed difference \(a-r^e\), aiming to evaluate the performance of the algorithm (\(r^e\) is computed by simple geometric properties). The chosen distribution for the fit is the t-location-scale distribution, due to the heavy tail on the left: This distribution is able to capture also the largest errors (in absolute value). There are indeed some outliers on the left of the histogram, as is evident from Fig. 11b. The mean given by this distribution is \(-0.0142\): This means that overall the radii of the particles are overestimated by about 1.5%. A first explanation of this behavior is the blurring effect of the PSF (see Sect. 2 for the details) combined with the adopted denoising technique; the next experiment, however, neglects the influence of the PSF and shows how the denoising technique alone influences the radius estimation.
The last part is devoted to studying the performance w.r.t. the vertical resolution, i.e., the number \(N_z\) of frames into which the volume is discretized (\(N_x\) and \(N_y\) are unchanged, since most modern microscopes have a high resolution along both the x and y axes). Table 1 reports the behavior of the three kinds of error for increasing vertical resolution. For each resolution, 20 different simulations were performed, hence 20 different runs of the procedure: The numbers in Table 1 are the means over these simulations. One would expect the estimation to improve with the number of frames: Actually, the procedure proves very robust w.r.t. the vertical resolution, even with only a few (10 or 12) frames. The difference \(a-r^e\) is reported in the fourth row: For each resolution, this difference is around \(-0.013\), meaning that, regardless of the number of vertical frames, the radius of the particles is overestimated by 1.3%. The last line of Table 1 refers to the (mean) number of estimated particles: The results are very satisfying for all resolutions but the first one (\(N_z=10\)); this is due to the fact that in this case a particle can span at most three frames (more likely just two), leading to a low number of candidates in \({\mathcal {C}}\). Hence, the problem is linked to the relation between the particle size and the vertical resolution: For small particles, it is sufficient to slightly increase \(N_z\) (to \(N_z=12\)) in order to get very good results, while for larger particles (\(a=1.1\,\upmu \)m) ten frames prove sufficient, as is evident from Table 2.
Second synthetic test: Poisson noise These tests aim at checking whether Gaussian filtering is the right choice for denoising. Consider the same setting as in the previous experiments: \(D_x=D_y = 76.8 \,\upmu \)m, \(D_z=7 \,\upmu \)m, 100 particles of radius \(a=1\,\upmu \)m, \(N_x=N_y=512, \, N_z=22\). The difference lies in the noise corrupting the frames: No Gaussian noise is present (\(\sigma _n=0\)), while Poisson noise affects the data. Algorithm 1 applied to this dataset gives satisfactory results, in line with the ones in Table 1 (\({ {P}}=0.0621, { {V}}= 0.0331\), and \({ {T}}= 0.0755\), 98 particles recognized). Since a simple Gaussian filter is not always sufficient to deal with high-level Poisson noise, as anticipated in Sect. 3.1, the variational approach (3.2) is adopted and solved by the aforementioned inexact Bregman procedure [7, 8]. As explained in Sect. 3.1, this strategy has been chosen over possibly simpler procedures (e.g., [12, 13, 15, 22]) for its ability to increase the contrast in the restored images. A visual comparison between Gaussian filtering and the employed Bregman technique is given in Fig. 12, showing a zoom of the fourth frame. Algorithm 2 requires an iterative solver for subproblem (3.4) when an explicit solution is not available, as in the present case. In this work, the AEM algorithm [14] is used, with a maximum of 1000 iterations, stopped via the criterion described in [7] within a tolerance of \(10^{-4}\); the number of external iterations is fixed to 3, and the regularization parameter \(\mu \) is set to 0.1. Due to the presence of Poisson noise, \(f_0\) is set to the generalized Kullback–Leibler functional, while the regularization function \(f_1\) is the Total Variation. Using this approach in Line 2 of Algorithm 1 yields the following results: \({ {P}}=0.0627, { {V}}= 0.0316\) and \({ {T}}=0.0752\), with 99 particles recognized.
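The variational model can be illustrated with the following stand-in, which minimizes a Kullback–Leibler data term plus smoothed Total Variation by plain gradient descent. This deliberately replaces the inexact Bregman/AEM solver of [7, 14] with a much cruder scheme (and omits the blur operator), so it only sketches the objective of (3.2), not the authors' algorithm; the function name and the smoothing parameter are ours.

```python
import numpy as np

def kl_tv_denoise(y, mu=0.1, eps=1e-3, step=0.1, iters=200):
    """Gradient-descent sketch of: minimize KL(x; y) + mu * TV(x), x > 0,
    for pure denoising (identity blur), with TV smoothed by eps."""
    x = y.copy().clip(min=eps)
    for _ in range(iters):
        # Gradient of the generalized KL data term sum(x - y*log x).
        g_kl = 1.0 - y / x
        # Smoothed TV gradient: -div(grad x / |grad x|_eps),
        # with forward differences and replicated boundary.
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        norm = np.sqrt(dx**2 + dy**2 + eps**2)
        px, py = dx / norm, dy / norm
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x = (x - step * (g_kl - mu * div)).clip(min=eps)
    return x
```

With \(\mu =0.1\) as in the experiments, a few hundred iterations already flatten the Poisson fluctuations on piecewise-constant images while preserving edges better than a Gaussian filter of comparable strength.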
The most important difference lies in the estimated radius: With Gaussian filtering, the mean error (obtained by a t-location-scale fit) is \(-0.0134\), while the Bregman technique leads to an error of \(-0.0018\); hence, Gaussian filtering leads to an overestimation of the particle radius. Since a single experiment is not sufficient to support this claim, further tests are carried out and presented in Table 3: One uses a lower vertical resolution (\(N_z=10\)), with the same volume dimension and discretization, but 50 particles of radius \(1.1 \,\upmu \)m. The second test is performed on a dataset with the same characteristics as the first one presented in this paragraph: \(D_x=D_y = 76.8\,\upmu \)m, \(D_z=7\,\upmu \)m, 100 particles of radius \(a=1\,\upmu \)m, \(N_x=N_y=512, \, N_z=22\).
Table 3 shows that using the correct denoising procedure produces better results both in terms of error and in the number of recognized particles; moreover, choosing the correct denoising technique allows a more precise estimation of the radius: For \(N_z=10\), Gaussian filtering leads to an error of almost 1%, while the Bregman technique reduces it to 0.1%. For \(N_z=22\), the difference is more pronounced: Classical filtering gives an error of \(\sim 1.4\%\), while again the proposed approach results in an error of only \(0.1\%\). This confirms the hypothesis that the overestimation of the radius depends on the denoising and deblurring technique: At first sight, Table 1 suggests a determinate (systematic) error [19] of the algorithm, but this last experiment tells the opposite. The procedure used to improve the quality of the images influences the performance of the particle estimation algorithm.
On the one hand, the two denoising procedures are similar, because both require parameter settings (e.g., the Bregman technique requires tuning the regularization parameter and the tolerance for the stopping criterion; the filtering techniques require choosing the type of filter and its parameters); on the other hand, the optimization technique has drawbacks in its computational cost and the time needed to restore each frame, while simple filtering is essentially free in these terms. There is, as usual in such cases, a trade-off between performance and time/computational cost.
Real 3D data This paragraph is devoted to applying the proposed algorithm to real 3D data. The scanned volume has \(D_x=D_y=64\,\upmu \)m, \(D_z=4.1\,\upmu \)m and is discretized into an array of dimension \(512\times 512\times 10\), leading to \(\mathrm{d}x = \mathrm{d}y = 0.125\,\upmu \)m, \(\mathrm{d}z = 0.41\,\upmu \)m; 50 scans of the volume were recorded, with \(\mathrm{d}t = 0.5\)s. The diameter of the particles is \(3\,\upmu \)m (\(a=1.5\,\upmu \)m), and they are suspended in a \(\sim 70\%\)–\(30\%\) glycerol/water mixture (viscosity \(\sim 0.017\) Pa s). The instrument used to acquire these data is a confocal microscope (Zeiss LSM 700) with a 100\(\times \) NA 1.4 oil immersion objective (Zeiss Plan-APOCHROMAT). The frames are restored using the Bregman procedure previously described with the following settings: AEM as inner solver with a Total Variation regularization functional, a maximum of 1000 inner iterations within a tolerance of \(10^{-4}\) for the stopping criterion described in [7] with \(\alpha =2\), and 3 external iterations. Since the images are given without any information about their recording, a Gaussian PSF (Footnote 1) with \(\sigma =1\) and zero mean is assumed as blurring operator, together with a background term equal to the minimum value of the image and Poisson noise affecting the frames. All these assumptions are consistent with the type of images produced by the aforementioned instrument.
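The assumed Gaussian PSF can be reproduced in Python as follows — a NumPy equivalent of the MATLAB call in Footnote 1, `fspecial('gaussian',512,1)`, which builds a rotationally symmetric Gaussian kernel normalized to unit sum.

```python
import numpy as np

def gaussian_psf(size=512, sigma=1.0):
    """size x size zero-mean Gaussian kernel with standard deviation
    sigma, normalized so that its entries sum to 1 (as fspecial does)."""
    ax = np.arange(size) - (size - 1) / 2.0       # centered coordinates
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()
```

The kernel is then applied as the blurring operator H in the variational model, typically via FFT-based convolution given its full-frame support.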
The first row of Fig. 13 shows the sixth acquired frame at time \(t=1\), its restored version via the Bregman technique and the filtered image via a Gaussian filter. The second column presents a detail of these images: Visual inspection makes clear that a suitable denoising technique reduces the glowing halo across the frame and, moreover, provides sharper edges; all this makes the recognition of the profiles easier.
Algorithm 1 is set with an initial WOI of width \(2\times 0.1 r_k\) (see Sect. 3.1 for the details), a threshold equal to 1.5 times the value given by Otsu’s method, \(\rho _{\mathrm{raw}} = 0.3\) and \(\rho _{\mathrm{est}}=0.3\). The frames at time \(t=1\) are shown in Fig. 14b–k.
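Otsu's method, used above for thresholding, selects the gray level maximizing the between-class variance of the histogram; a minimal NumPy sketch (the experiments then multiply the returned value by 1.5):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold maximizing the between-class
    variance sigma_B^2(t) of the gray-level histogram."""
    hist, edges = np.histogram(img, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = hist / hist.sum()
    omega = np.cumsum(w)                 # class-0 probability
    mu = np.cumsum(w * centers)          # cumulative mean
    mu_t = mu[-1]                        # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)     # empty classes contribute 0
    return centers[np.argmax(sigma_b)]
```

On the bimodal intensity histograms of the restored frames (bright particle profiles over a dark background), the 1.5 scaling of the Otsu value makes the binarization stricter, keeping only the well-contrasted profile pixels.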
Figure 14a provides a visual inspection of the reconstructed particle positions at time \(t=1\): The reconstruction faithfully respects the true positions, as is clear by comparing the 3D plot with the frames depicted in Fig. 14b–k, where the recovered profiles of the particles are superimposed on the original images. In these images, the top-left corner corresponds to the point \((0,0,k\mathrm{d}z)\) in the 3D space, with k being the frame index. A closer inspection of Fig. 14 shows that the proposed procedure finds particles close to the boundaries of the frames, as well as those near the top or the bottom of the volume.
5 Conclusion
In this work, a particle segmentation and position estimation methodology is presented. Assuming fixed spherical particles with a known radius, the procedure first applies a noise removal algorithm to each frame of the 3D volume; then, it uses the 2D gradient information on the particle profiles and employs a weighted regularized Least Squares fit to find the 2D center and the radius of the profile intersecting each frame. Using geometric properties, the coordinates of the 3D center are retrieved with an accuracy better than 10% of a voxel, which is the state-of-the-art performance for this type of algorithm. Furthermore, the intermediate steps implemented for the 3D reconstruction also recover the particle positions within each 2D frame with subpixel precision. Reliable results for the 3D positioning are achieved even at low vertical resolution: The total error indeed remains under 10% of a voxel. Moreover, the very low error on the radius estimation suggests that this procedure can refine a priori information about the radius of particles of uncertain dimension. This work demonstrates that the preprocessing of the frames requires particularly tailored techniques, depending on the noise type: Since Poisson noise is the most common noise affecting such images, simple Gaussian filtering is not sufficient. One of the available image restoration techniques is therefore applied in this context: Although more demanding in terms of computational cost and time, this strategy leads to a general improvement of the position estimation. Moreover, the tailored approach significantly increases the precision of the radius estimation, and it provides deeper insight into the role of Gaussian filtering in this task, showing that it induces an overestimation.
Future work will involve better segmentation techniques for pathological cases, employing more tailored approaches such as regularized methods inspired by the Mumford–Shah functional. The case of spherical particles with unknown radius will also be handled. The reliable positioning results directly suggest that the proposed technique can be embedded in a more general tracking procedure, where the particles are no longer fixed but may be subject to significant Brownian motion between slice acquisitions.
Notes
fspecial('gaussian',512,1).
References
Akhmanova, A., Steinmetz, M.O.: Tracking the ends: a dynamic protein network controls the fate of microtubule tips. Nat. Rev. Mol. Cell Biol. 9, 309–322 (2008)
Apgar, J., Tseng, Y., Fedorov, E., Herwig, M.B., Almo, S.C., Wirtz, D.: Multiple-particle tracking measurements of heterogeneities in solutions of actin filaments and actin bundles. Biophys. J. 79(2), 1095–1106 (2000)
Benfenati, A., Camera, A.L., Carbillet, M.: Deconvolution of post-adaptive optics images of faint circumstellar environments by means of the inexact Bregman procedure. Astron. Astrophys. 586, A16 (2016). https://doi.org/10.1051/0004-6361/201526960
Benfenati, A., Causin, P., Lupieri, M., Naldi, G.: Regularization techniques for inverse problem in DOT applications. J. Phys. Conf. Ser. 1476, 012007 (2020). https://doi.org/10.1088/1742-6596/1476/1/012007
Benfenati, A., Coscia, V.: Nonlinear microscale interactions in the kinetic theory of active particles. Appl. Math. Lett. 26(10), 979–983 (2013). https://doi.org/10.1016/j.aml.2013.04.007
Benfenati, A., Coscia, V.: Modeling opinion formation in the kinetic theory of active particles I: spontaneous trend. Ann. Univ. Ferrara 60(1), 35–53 (2014). https://doi.org/10.1007/s11565-014-0207-2
Benfenati, A., Ruggiero, V.: Inexact Bregman iteration with an application to Poisson data reconstruction. Inverse Prob. 29(6), 065016 (2013)
Benfenati, A., Ruggiero, V.: Image regularization for Poisson data. J. Phys. Conf. Ser. 657(1), 012011 (2015)
Benfenati, A., Ruggiero, V.: Inexact Bregman iteration for deconvolution of superimposed extended and point sources. Commun. Nonlinear Sci. Numer. Simul. 21(1), 210–224 (2015)
Bertero, M., Boccacci, P., Desiderà, G., Vicidomini, G.: Image deblurring with Poisson data: from cells to galaxies. Inverse Prob. 25, 123006 (2009)
Bertero, M., Boccacci, P., Ruggiero, V.: Inverse Imaging with Poisson Data, pp. 2053–2563. IOP Publishing, Bristol (2018). https://doi.org/10.1088/2053-2563/aae109
Bonettini, S., Benfenati, A., Ruggiero, V.: Primal-dual first order methods for total variation image restoration in presence of Poisson noise. In: 2014 IEEE International Conference on Image Processing (ICIP), pp. 4156–4160 (2014). https://doi.org/10.1109/ICIP.2014.7025844
Bonettini, S., Benfenati, A., Ruggiero, V.: Scaling techniques for \(\varepsilon \)-subgradient methods. SIAM J. Optim. 26(3), 1741–1772 (2016). https://doi.org/10.1137/14097642X
Bonettini, S., Ruggiero, V.: An alternating extragradient method for total variation-based image restoration from Poisson data. Inverse Prob. 27(9), 095001 (2011)
Bredies, K., Kunisch, K., Pock, T.: Total generalized variation. SIAM J. Imaging Sci. 3(3), 492–526 (2010). https://doi.org/10.1137/090769521
Brune, C., Sawatzky, A., Burger, M.: Primal and dual Bregman methods with application to optical nanoscopy. Int. J. Comput. Vis. 92, 211–229 (2011). https://doi.org/10.1007/s11263-010-0339-5
Causin, P., Naldi, G., Weishaeupl, R.M.: Elastic net regularization in diffuse optical tomography applications. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), pp. 1627–1630 (2019)
Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 1 (2011). https://doi.org/10.1007/s10851-010-0251-1
Cheezum, M.K., Walker, W.F., Guilford, W.H.: Quantitative comparison of algorithms for tracking single fluorescent particles. Biophys. J. 81(4), 2378–2388 (2001)
Chen, D.T., Weeks, E.R., Crocker, J.C., Islam, M.F., Verma, R., Gruber, J., Levine, A.J., Lubensky, T.C., Yodh, A.G.: Rheological microscopy: local mechanical properties from microrheology. Phys. Rev. Lett. 90, 108301 (2003). https://doi.org/10.1103/PhysRevLett.90.108301
Chenouard, N., Smal, I., de Chaumont, F., Maška, M., Sbalzarini, I.F., Gong, Y., Cardinale, J., Carthel, C., Coraluppi, S., Winter, M., Cohen, A.R., Godinez, W.J., Rohr, K., Kalaidzidis, Y., Liang, L., Duncan, J., Shen, H., Xu, Y., Magnusson, K.E.G., Jaldén, J., Blau, H.M., Paul-Gilloteaux, P., Roudot, P., Kervrann, C., Waharte, F., Tinevez, J.Y., Shorte, S.L., Willemse, J., Celler, K., van Wezel, G.P., Dan, H.W., Tsai, Y.S., de Solórzano, C.O., Olivo-Marin, J.C., Meijering, E.: Objective comparison of particle tracking methods. Nat. Methods 11, 281 (2014)
Chouzenoux, E., Jezierska, A., Pesquet, J., Talbot, H.: A convex approach for image restoration with exact Poisson–Gaussian likelihood. SIAM J. Imaging Sci. 8(4), 2662–2682 (2015). https://doi.org/10.1137/15M1014395
Chu, K.K., Mojahed, D., Fernandez, C.M., Li, Y., Liu, L., Wilsterman, E.J., Diephuis, B., Birket, S.E., Bowers, H., Solomon, G.M., Schuster, B.S., Hanes, J., Rowe, S.M., Tearney, G.J.: Particle-tracking microrheology using micro-optical coherence tomography. Biophys. J. 111(5), 1053–1063 (2016). https://doi.org/10.1016/j.bpj.2016.07.020
Figueiredo, M.A.T., Bioucas-Dias, J.M.: Deconvolution of Poissonian images using variable splitting and augmented Lagrangian optimization. In: 2009 IEEE/SP 15th Workshop on Statistical Signal Processing, pp. 733–736 (2009)
Foare, M., Lachaud, J.O., Talbot, H.: Image restoration and segmentation using the Ambrosio–Tortorelli functional and discrete calculus. In: 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 1418–1423 (2016). https://doi.org/10.1109/ICPR.2016.7899836
Godinez, W.J., Rohr, K.: Tracking multiple particles in fluorescence time-lapse microscopy images via probabilistic data association. IEEE Trans. Med. Imaging 34(2), 415–432 (2015). https://doi.org/10.1109/TMI.2014.2359541
Goldstein, T., Osher, S.: The split Bregman method for l1-regularized problems. SIAM J. Imaging Sci. 2(2), 323–343 (2009). https://doi.org/10.1137/080725891
Golub, G., Van Loan, C.: Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, Baltimore (2013)
Hansen, P.C., Nagy, J.G., O’Learly, D.P.: Deblurring Images: Matrices, Spectra, and Filtering. SIAM, New Delhi (2006)
Hugelier, S., De Rooi, J., Bernex, R., Duw, S., Devos, O., Sliwa, M., Dedecker, P., Eilers, P., Ruckebusch, C.: Sparse deconvolution of high-density super-resolution images. Sci. Rep. 6, 1 (2016). https://doi.org/10.1038/srep21413
Husain, M., Boudier, T., Paul-Gilloteaux, P., Casuso, I., Scheuring, S.: Software for drift compensation, particle tracking and particle analysis of high-speed atomic force microscopy image series. J. Mol. Recognit. 25(5), 292–298 (2012). https://doi.org/10.1002/jmr.2187
Jandt, U., Zeng, A.P.: Modeling of Intracellular Transport and Compartmentation, pp. 221–249. Springer, Berlin (2012). https://doi.org/10.1007/10_2011_104
Jenkins, M., Egelhaaf, S.: Confocal microscopy of colloidal particles: towards reliable, optimum coordinates. Adv. Colloid Interface Sci. 136(1), 65–92 (2008). https://doi.org/10.1016/j.cis.2007.07.006
Josephson, L.L., Swan, J.W., Furst, E.M.: In situ measurement of localization error in particle tracking microrheology. Rheol. Acta 1, 1 (2018). https://doi.org/10.1007/s00397-018-1117-5
Kodippili, G.C., Putt, K.S., Low, P.S.: Evidence for three populations of the glucose transporter in the human erythrocyte membrane. Blood Cells Mol. Dis. 77, 61–66 (2019). https://doi.org/10.1016/j.bcmd.2019.03.005
Kononenko, I., Kukar, M.: Machine Learning and Data Mining: Introduction to Principles and Algorithms. Horwood Publishing Limited (2007)
Kusumi, A., Tsunoyama, T.A., Hirosawa, K.M., Kasai, R.S., Fujiwara, T.K.: Tracking single molecules at work in living cells. Nat Chem Biol 10, 524 (2014). https://doi.org/10.1038/nchembio.1558
Lin, T.S., Zhu, S., Kojima, S., Homma, M., Lo, C.J.: FliL association with flagellar stator in the sodium-driven Vibrio motor characterized by the fluorescent microscopy. Sci Rep 8(1), 11172 (2018)
Magnusson, K.E.G., Jaldén, J.: A batch algorithm using iterative application of the Viterbi algorithm to track cells and construct cell lineages. In: 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI), pp. 382–385 (2012). https://doi.org/10.1109/ISBI.2012.6235564
Olivier: Plotcube (2020). https://www.mathworks.com/matlabcentral/fileexchange/15161-plotcube
Otsu, N.: A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979). https://doi.org/10.1109/TSMC.1979.4310076
Puybareau, É., Talbot, H., Gaber, N., Bourouina, T.: Morphological analysis of Brownian motion for physical measurements. In: Angulo, J., Velasco-Forero, S., Meyer, F. (eds.) Mathematical Morphology and Its Applications to Signal and Image Processing, pp. 486–497. Springer, Cham (2017)
Rockafellar, R.T.: Convex Analysis. Princeton Mathematical Series. Princeton University Press, Princeton, NJ (1970)
Savin, T., Doyle, P.S.: Static and dynamic errors in particle tracking microrheology. Biophys. J. 88(1), 623–638 (2005). https://doi.org/10.1529/biophysj.104.042457
Sbalzarini, I., Koumoutsakos, P.: Feature point tracking and trajectory analysis for video imaging in cell biology. J. Struct. Biol. 151(2), 182–195 (2005). https://doi.org/10.1016/j.jsb.2005.06.002
Setzer, S., Steidl, G., Teuber, T.: Deblurring Poissonian images by split Bregman techniques. J. Vis. Commun. Image Represent. 21(3), 193–199 (2010). https://doi.org/10.1016/j.jvcir.2009.10.006
Valentine, M.T., Kaplan, P.D., Thota, D., Crocker, J.C., Gisler, T., Prud’homme, R.K., Beck, M., Weitz, D.A.: Investigating the microenvironments of inhomogeneous soft materials with multiple particle tracking. Phys. Rev. E 64, 061506 (2001). https://doi.org/10.1103/PhysRevE.64.061506
Wagner, C.E., Turner, B.S., Rubinstein, M., McKinley, G.H., Ribbeck, K.: A rheological study of the association and dynamics of muc5ac gels. Biomacromolecules 18(11), 3654–3664 (2017). https://doi.org/10.1021/acs.biomac.7b00809. PMID: 28903557
Wen, L., Zheng, Z.H., Liu, A.A., Lv, C., Zhang, L.J., Ao, J., Zhang, Z.L., Wang, H.Z., Lin, Y., Pang, D.W.: Tracking single baculovirus retrograde transportation in host cell via quantum dot-labeling of virus internal component. J. Nanobiotechnol. 15(1), 37 (2017). https://doi.org/10.1186/s12951-017-0270-9
Xu, Q., Boylan, N., Suk, J., Wang, Y., Nance, E., Yang, J., McDonnell, P., Cone, R., Duh, E., Hanes, J.: Nanoparticle diffusion in, and microrheology of, the bovine vitreous ex vivo. J. Controll. Release 167(1), 76–84 (2013). https://doi.org/10.1016/j.jconrel.2013.01.018
Yap, C.K., Lee, H.K.: Identification of cell nucleus using a Mumford-Shah ellipse detector. In: Advances in Visual Computing, pp. 582–593. Springer, Berlin (2008)
Yildiz, A., Forkey, J.N., McKinney, S.A., Ha, T., Goldman, Y.E., Selvin, P.R.: Myosin V walks hand-over-hand: single fluorophore imaging with 1.5-nm localization. Science 300(5628), 2061–2065 (2003)
Yu, X., Xie, W.: Single image blind deblurring based on salient edge-structures and elastic-net regularization. J. Math. Imaging Vis. (2020). https://doi.org/10.1007/s10851-020-00949-6
Zanella, R., Boccacci, P., Zanni, L., Bertero, M.: Efficient gradient projection methods for edge-preserving removal of Poisson noise. Inverse Prob. 25, 045010 (2009)
Zanella, R., Porta, F., Ruggiero, V., Zanetti, M.: Serial and parallel approaches for image segmentation by numerical minimization of a second-order functional. Appl. Math. Comput. 318(C), 153–175 (2018). https://doi.org/10.1016/j.amc.2017.07.021
Acknowledgements
Funding was provided by Agence Nationale de la Recherche (Grant No. CoMeDIC) and LABEX MMCD.
Funding
Open access funding provided by Università degli Studi di Milano within the CRUI-CARE Agreement.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Alessandro Benfenati: Member of the INdAM Research Group GNCS.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Benfenati, A., Bonacci, F., Bourouina, T. et al. Efficient Position Estimation of 3D Fluorescent Spherical Beads in Confocal Microscopy via Poisson Denoising. J Math Imaging Vis 63, 56–72 (2021). https://doi.org/10.1007/s10851-020-00994-1