1 Introduction

Region-of-interest (ROI) segmentation in magnetic resonance (MR) brain images, such as tissue classification and anatomical subdivision, is an important topic in medical image processing (Gordillo et al. 2013; El-Dahshan et al. 2014; Ma et al. 2009; Mahmood et al. 2015). In the past 20 years, scholars have developed many different segmentation models, including region growing models (Chupin et al. 2002; Al-Faris et al. 2014), random walk models (Dakua and Sahambi 2011; Grady 2006), active shape models (ASM) (Hu et al. 2011), and active contour models (ACM) (Rajendran and Dhanasekaran 2012; Bereciartua et al. 2015; Li et al. 2015). Among these, the ACM has been one of the most successful and frequently used models because it is effective at segmenting MR brain images (Rajendran and Dhanasekaran 2012; Sachdeva et al. 2012).

In particular, the level set (LS) method (Zhang et al. 2008; Li et al. 2011; Zhang et al. 2010) is a popular numerical approach for solving the partial differential equations (PDEs) that arise in ACMs. For example, Li et al. (2011) proposed a region-based LS algorithm that diminished the effects of intensity inhomogeneity in MR brain images by defining a local clustering criterion function for image intensities. Zhang et al. (2010) added a local variation term to decrease variations between the target regions and background regions to realize a ‘soft’ classification.

Although LS methods achieve acceptable segmentation results, one obvious weakness is that they are comparatively time-consuming (Balla-Arabe et al. 2013). In many clinical cases physicians require real-time image segmentation, and LS methods cannot meet this requirement, especially for 3D medical images.

To overcome these shortcomings, researchers introduced the lattice Boltzmann (LB) model (Pingen et al. 2009; Barkha et al. 2015; Chen et al. 2014) as an interesting alternative that accelerates the approximation of the PDE in the LS method; it is a widely used method for analyzing complex physical problems in hydrodynamics (Tsutahara 2012; Huang et al. 2013). The LB model simulates macroscopic physical processes through a microscopic description of particle collision and propagation. In image processing, the collision step redistributes the gray level on each pixel, and the propagation step updates the gray level on each pixel with the gray levels of neighboring pixels. Similar to fluids, which are often subjected to gravity or intermolecular forces in hydrodynamics, the LB model for image segmentation can be derived from the classical LB model by adding an external force. The time required for the LB algorithm to solve the PDE is much lower than that of the traditional finite difference scheme (Sun et al. 2012; Hagan and Zhao 2009). In the past decade, LB methods have begun to be applied to complex medical image segmentation. For instance, Balla-Arabe et al. (2013) proposed an LB algorithm with a fuzzy external force to segment knees and blood vessels in MR images; Chen et al. (2014) proposed a geodesic active contour LB algorithm to segment the thrombus of giant intracranial aneurysms from CT angiography scans. However, existing LB algorithms are limited in delineating complex MR brain images, such as segmenting cerebral cortical surfaces or tumors, which exhibit complex intensity inhomogeneity, low-contrast intensity levels, noise, and bias fields.

In this paper, we propose a new LB algorithm for MR brain image segmentation. First, this new approach has an external force based on second-order statistics that controls the shrinkage or expansion of the evolving curve according to the force magnitudes from the object and background regions. Second, it features a sampling window that decreases intra-class variances, reducing the overlap between foreground and background to improve segmentation accuracy. Third, the LB algorithm is well suited to parallel programming due to its local and explicit nature. Moreover, it can handle complex shapes, topological changes, and implicit computation of curvature. We call this new algorithm the ‘local statistic lattice Boltzmann algorithm (LSLBA)’.

2 The mathematical model of LSLBA

2.1 The classical LB mathematical model in image segmentation

From a macroscopic view, the LB model for image segmentation can normally be divided into two parts: a diffusion term and an external force term (Balla-Arabe et al. 2013; Wen et al. 2014). Equation (1) presents the mathematical expression of the LB segmentation model:

$$\begin{aligned} C_{\mathrm {seg}}={LB}_{\mathrm {diffusion}}+F\cdot {\varvec{\Delta }} t \end{aligned}$$
(1)

where \(C_{\mathrm {seg}}\) denotes the segmentation result, \({LB}_{\mathrm {diffusion}}\) is the classical diffusion term, and the second term on the right-hand side is the product of the external force F and the time step \({\varvec{\Delta }} t\) (Frisch et al. 1987; Guo and Zheng 2009).

According to the Bhatnagar-Gross-Krook (BGK) collision model (Bhatnagar et al. 1954), \({LB}_{\mathrm {diffusion}}\) can be described by collision and propagation. Suppose the initial contour \(\varphi ({{\varvec{r}}},t)\) is the signed distance function of the image, and the D2Q9 model is used; then \({LB}_{\mathrm {diffusion}}\) is as follows:

$$\begin{aligned}&\text {Collision:}\quad \tilde{\varphi }_{\alpha }\left( {{\varvec{r}}},t \right) =\varphi _{\alpha }\left( {{\varvec{r}}},t \right) +1/\tau \left( \varphi _{\alpha }^{eq}\left( {{\varvec{r}}},t \right) -\varphi _{\alpha }\left( {{\varvec{r}}},t \right) \right) \quad \alpha =0,1,\cdots 8 \end{aligned}$$
(2)
$$\begin{aligned}&\text {Propagation:}\, \varphi _{\alpha }\left( {{\varvec{r}}}+\mathbf{e}_{\alpha }{\varvec{\Delta }}t,t+{\varvec{\Delta }} t \right) =\tilde{\varphi }_{\alpha }\left( {{\varvec{r}}},t \right) \end{aligned}$$
(3)

where \({{\varvec{r}}}\) represents location, t represents time, \(\tau \) is the dimensionless relaxation time, and \({\varvec{\Delta }}t\) is the time step of Eq. (1) at a specific time. The set of discrete velocity directions \(\left\{ {\mathbf {e}}_{{\upalpha }},{\upalpha } =0,1,\cdots 8 \right\} \) is:

$$\begin{aligned} {\mathbf {e}}_{\alpha }=\left\{ {\begin{array}{l@{\quad }l} \left( 0,0 \right) &{} \alpha =0\\ c\left( \cos {\left[ \left( \alpha -1 \right) \frac{\pi }{2}\right] ,}\sin {\left[ \left( \alpha -1 \right) \frac{\pi }{2}\right] } \right) &{}\alpha =1,2,3,4\\ \sqrt{2} c\left( \cos {\left[ \left( 2\alpha -1 \right) \frac{\pi }{4} \right] ,}\sin \left[ \left( 2\alpha -1 \right) \frac{\pi }{4} \right] \right) &{} \alpha =5,6,7,8\\ \end{array} } \right. \end{aligned}$$

\(\varphi _{\alpha }({{\varvec{r}}},t), \tilde{\varphi }_{\alpha }({{\varvec{r}}},t)\) and \(\varphi _{\alpha }\left( {{\varvec{r}}}+\mathbf {e}_{\alpha }{\varvec{\Delta }} t,t+{{\varvec{\Delta }} }t \right) \) are the particle distribution functions of the lattice before collision, after collision, and after propagation, respectively; \(\varphi _{\alpha }^{eq}({{\varvec{r}}},t)\) is the equilibrium distribution function, and \(\alpha \) is the index of direction \({\mathbf {e}}_{\alpha }\).

In Eq. (2), \(\tau \) depends on the image gradient or other geometric information. Following Wen et al. (2014), \(\tau \) can be defined as in Eq. (4), which helps preserve weak boundaries:

$$\begin{aligned} \tau =\frac{9g}{4{\varvec{\Delta }}t} +0.5 \end{aligned}$$
(4)

where \(g =1/(1+\left| \nabla (G_{\sigma } *I)\right| ^{2})\) is the edge-stopping function, which makes diffusion faster in smooth areas and slower near sharp edges. \(\nabla (G_{\sigma }*I)\) is the gradient of the image I after convolution with a Gaussian kernel of standard deviation \(\sigma \).
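As an illustrative sketch, the edge-stopping function g and the relaxation time \(\tau \) of Eq. (4) could be computed as follows. This is a minimal NumPy implementation under our assumptions: the squared-gradient-magnitude reading of g, reflective image borders, \({\varvec{\Delta }}t=1\), and function names of our own choosing (the paper does not specify an implementation):

```python
import numpy as np

def gaussian_smooth(img, sigma):
    """Separable Gaussian smoothing, standing in for G_sigma * I."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode='reflect')          # reflect at borders
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, pad)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, tmp)
    return out[r:-r, r:-r]

def edge_stopping(img, sigma=1.0):
    """g = 1 / (1 + |grad(G_sigma * I)|^2): near 1 in smooth areas, near 0 at edges."""
    s = gaussian_smooth(img.astype(float), sigma)
    gy, gx = np.gradient(s)                        # per-axis derivatives
    return 1.0 / (1.0 + gx**2 + gy**2)

def relaxation_time(g, dt=1.0):
    """tau = 9 g / (4 dt) + 0.5 (Eq. 4): larger g means faster diffusion."""
    return 9.0 * g / (4.0 * dt) + 0.5
```

With a step-edge test image, g stays near 1 far from the edge and collapses toward 0 at the edge, so \(\tau \) (and hence diffusion) drops there, protecting the boundary.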

To obtain typical diffusion computations, the equilibrium function \(\varphi _{\alpha }^{eq}({{\varvec{r}}},t)\) can be simplified as \(\varphi _{\alpha }^{eq}\left( {{\varvec{r}}},t \right) =t_{\alpha }\varphi ({{\varvec{r}}},t)\) (Shi et al. 2008), where \(t_{\alpha }\) denotes the contributing weight of the equilibrium: \(t_{\alpha } = 4/9\) for \(\alpha =0\); \(t_{\alpha } = 1/9\) for \(\alpha = 1,2,3,4\); and \(t_{\alpha } = 1/36\) for \(\alpha =5,6,7,8\).
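The collision and propagation steps of Eqs. (2) and (3), with the simplified equilibrium \(\varphi _{\alpha }^{eq}=t_{\alpha }\varphi \) and the weights above, can be sketched as follows. This is a minimal NumPy sketch under our assumptions (periodic boundaries via np.roll; function names are illustrative, not the paper's):

```python
import numpy as np

# D2Q9 velocity directions e_alpha (from the cos/sin formula) and weights t_alpha
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def init_populations(phi):
    """Start from equilibrium: phi_alpha = t_alpha * phi (Shi et al. 2008)."""
    return W[:, None, None] * phi

def lb_diffusion_step(f, tau):
    """One collision (Eq. 2) + propagation (Eq. 3) cycle.
    f: (9, H, W) distribution functions; tau: (H, W) relaxation times."""
    phi = f.sum(axis=0)                  # macroscopic field on each pixel
    feq = W[:, None, None] * phi         # simplified equilibrium t_alpha * phi
    f = f + (feq - f) / tau              # collision: relax toward equilibrium
    for a in range(9):                   # propagation: stream along e_alpha
        f[a] = np.roll(f[a], shift=(E[a, 1], E[a, 0]), axis=(0, 1))
    return f
```

Iterating this step diffuses the field \(\varphi =\sum _{\alpha }\varphi _{\alpha }\) while exactly conserving its total mass, which is the behavior the segmentation model builds on before the external force is added.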

2.2 The external force

The external force \(\vec {F}\) plays an important role in extracting region features in the LB segmentation model. In this paper, we define \(\vec {F}\) as:

$$\begin{aligned} \vec {F}_{i}=\log {p(I\vert \theta _{i})\vec {n}} \end{aligned}$$
(5)

where \(\theta _{i},i=1,2\) are the parameters of the distributions of the background region \({\Omega }_{1}\) and the target region \({\Omega }_{2}\), respectively; \(\vec {n}\) is the normal vector of the contour, \(\vec {n}=\nabla \varphi /\left| \nabla \varphi \right| \), and \(\varphi \) is the signed distance function mentioned above. \( p(I\vert \theta _{i})\) is the conditional probability density. To obtain the probability distribution \(p(I\vert \theta _{i})\), we assume that each region \({{\Omega }_{i}}\) follows a Gaussian probability distribution (Zhang et al. 2010; Zhu and Yuille 1996), i.e., \(\theta _{i}\sim N\left( \mu _{i} , \sigma _{i}^{2} \right) , i=1,2\), with mean \(\mu _{i}\) and intra-class standard deviation \(\sigma _{i}\). Figure 1 illustrates the external force on the contour. Each point is subjected to two forces from \({\Omega }_{1}\) and \({\Omega }_{2}\) simultaneously. Suppose \(\vec {n}=-\vec {n}_{1}=\vec {n}_{2}\); then the total force at point A is \(\vec {F}=\vec {F}_{1}+\vec {F}_{2}=\log {p\left( I \vert \theta _{1}\right) \vec {n}_{1} +\log {p\left( I \vert \theta _{2}\right) \vec {n}_{2}}}=\log {\frac{p\left( I \vert \theta _{2}\right) }{p\left( I \vert \theta _{1}\right) }\vec {n}}\). As a result, if \(p\left( I \vert \theta _{2}\right) >p\left( I \vert \theta _{1}\right) \), \(\vec {F}\) is positive, and the contour expands toward \({\Omega }_{1}\). The external force thus controls contour expansion or shrinkage according to the force magnitudes from the object and background regions. The external force is defined as \(\vec {F}=(\log \frac{\sigma _{2}}{\sigma _{1}}+\frac{\left( I\left( {{\varvec{r}}} \right) -\mu _{2} \right) ^{2}}{2\sigma _{2}^{2}}-\frac{{(I\left( {{\varvec{r}}} \right) -\mu _{1})}^{2}}{2\sigma _{1}^{2}})\vec {n}\).

Fig. 1
figure 1

The statistical force \(\vec {F}\) on the contour and \(\vec {F}=\vec {F}_{1}+\vec {F}_{2}\) at the point A. \(O_{x}=\{{{\varvec{y}}}:\left| {{\varvec{y}}}-{{\varvec{x}}} \right| \le \uprho \}\) is the sampling window at the point B, \(\uprho \) is the circle radius, \({{\varvec{x}}} \) indicates the coordinates of point B, and \({{\varvec{y}}}\) is the coordinates of an arbitrary point in the sampling window. The region with oblique lines is \(O_{x}\cap \varOmega _{i},i=1,2\)
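A minimal sketch of the signed force magnitude \(\log (p(I\vert \theta _{2})/p(I\vert \theta _{1}))\) from the derivation above, assuming Gaussian region statistics; the \(\log \sqrt{2\uppi }\) normalization constant cancels in the ratio, and the function names are ours:

```python
import numpy as np

def gaussian_loglik(I, mu, sigma):
    """log p(I | theta) for N(mu, sigma^2), dropping the log(2*pi)/2 constant,
    which cancels in the likelihood ratio below."""
    return -np.log(sigma) - (I - mu) ** 2 / (2 * sigma ** 2)

def force_magnitude(I, mu1, sigma1, mu2, sigma2):
    """Signed magnitude of the total force log(p(I|theta_2)/p(I|theta_1)):
    positive where a pixel looks more like the target region Omega_2,
    so the contour expands; negative where it looks like Omega_1."""
    return gaussian_loglik(I, mu2, sigma2) - gaussian_loglik(I, mu1, sigma1)
```

Multiplying this scalar field by the normal \(\vec {n}=\nabla \varphi /|\nabla \varphi |\) gives the vector force of Eq. (5).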

To deal with intensity inhomogeneity of MR images, an observed image I can be modeled as \(I=b({{\varvec{x}}})J+ n\), where J is the true image, \(b({{\varvec{x}}})\) is the bias field that accounts for the intensity inhomogeneity, and n is additive noise. According to the literature (Li et al. 2011), our assumptions about the true image J and the bias field \(b({{\varvec{x}}})\) can be stated more specifically as: (1) the bias field is slowly varying, which implies that it can be approximated by a constant in a neighborhood of each point in the image domain; (2) the true image J approximately takes two distinct constant values \(c_{1} , c_{2}\) in the disjoint regions \({\Omega }_{1} , {\Omega }_{2}\), respectively, where \({\Omega =}\cup _{i=1}^{2}{{\Omega }_{i}}\) and \({{\Omega }_{1}\cap \Omega }_{2}=\emptyset \). We can thus get the mean value \(\mu _{i}=b({{\varvec{x}}})c_{i}, i=1,2\).
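The image model \(I=b({{\varvec{x}}})J+n\) and assumptions (1) and (2) can be illustrated with a small synthetic example (all numeric values below are arbitrary illustrative choices, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# True image J: two constants c1, c2 on disjoint regions Omega_1, Omega_2
c1, c2 = 50.0, 150.0
J = np.full((64, 64), c1)
J[:, 32:] = c2

# Slowly varying multiplicative bias field b(x), gently increasing left to right
xx = np.linspace(0.8, 1.2, 64)
b = np.tile(xx, (64, 1))

# Observed image: I = b * J + additive noise n
I = b * J + rng.normal(0.0, 2.0, J.shape)
```

Because b varies slowly, the local mean of I inside a small window around a point \({{\varvec{x}}}\in {\Omega }_{i}\) is approximately \(b({{\varvec{x}}})c_{i}\), which is exactly the relation \(\mu _{i}=b({{\varvec{x}}})c_{i}\) used above.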

2.3 The sampling window

Variance is an important parameter in extracting region information. Large intra-class variances of \({\Omega }_{1}\,{\mathrm { and}}\, {\Omega }_{2}\) may yield a large overlap area, particularly in MR images with low-contrast gray levels, which results in misclassification when pixels fall into the tails of the distributions. A good alternative is to decrease the intra-class variances by adding a sampling window \(O_{x}=\{\,{{\varvec{y}}}:\left| {{\varvec{y}}}-{{\varvec{x}}} \right| \le \rho \} \) along the contour points, as shown in Fig. 1; \(\rho \) is the circle radius, \({{\varvec{x}}}\) indicates the coordinates of point B, and \({{\varvec{y}}}\) is the coordinates of an arbitrary point in the sampling window. The intersection of the sampling window with the object region or the background region is \(O_{x}\cap {{\Omega }_{i}},i=1,2\); let the number of its pixels be \(m_{i}\left( {{\varvec{x}}} \right) =\left\| O_{x}\cap {{\Omega }_{i}} \right\| ,i=1,2\), which is always greater than 1.

Fig. 2
figure 2

The example to illustrate \(O_{x} \) to decrease intra-class variances

Figure 2 shows a schematic diagram of how \(O_{x}\) decreases the intra-class variances of \({\Omega }_{1}\) and \({\Omega }_{2}\), which obey the Gaussian probability distributions \(\theta _{i}\sim N\left( \mu _{i} , \sigma _{i}^{2} \right) , i=1,2\). The gray area is the overlap between the object and background distributions. When \(\mu _{1}\) and \(\mu _{2}\) are closer to each other, the overlapping area becomes larger, and the segmentation results become less usable.

To decrease intra-class variances, the average intensity of \(O_{x}\cap {{\Omega }_{i}},i=1,2\) is defined as \(\bar{I}({{\varvec{x}}}\vert \theta _{i})\):

$$\begin{aligned} \bar{I}\left( {{\varvec{x}}} \vert \theta _{i}\right) =\frac{1}{m_{i}\left( {{\varvec{x}}} \right) }\sum \nolimits _{{{\varvec{y}}}\in O_{x}\cap {{\Omega }_{i}}} {I\left( {{\varvec{y}}} \right) } \end{aligned}$$
(6)

where \(m_{i}\left( {{\varvec{x}}} \right) =\left\| O_{x}\cap {{\Omega }_{i}} \right\| ,i=1,2\). According to the literature (Zhang et al. 2010; Zhu and Yuille 1996), the probability density \(p(\bar{I}({{\varvec{x}}})\vert \theta _{i})\) of the average sample \(\bar{I}({{\varvec{x}}}\vert \theta _{i})\) can be replaced by the joint probability \(\prod \nolimits _{O_{x}\cap {{\Omega }_{i}}} {p(I({{\varvec{y}}})\vert \theta _{i})} \), and the corresponding probability density is still a Gaussian distribution:

$$\begin{aligned} p\left( {\bar{I}\left( {{\varvec{x}}} \right) } \vert \theta _{i}\right) =\prod \nolimits _{O_{x}\cap {{\Omega }_{i}}} {p(I({{\varvec{y}}})\vert \theta _{i})} \propto N\left( \mu _{i} , \frac{\sigma _{i}^{2}}{m_{i}\left( {{\varvec{x}}} \right) } \right) \end{aligned}$$
(7)

where the intra-class variances \(\sigma _{i}^{2}\) are divided by \(m_{i}\left( {{\varvec{x}}} \right) \) (always greater than 1), and the standard deviation \(\sigma _{i}\) becomes \(\sigma _{i}/ \sqrt{m_{i}\left( {{\varvec{x}}} \right) } , i=1,2\). The intra-class variations are thus decreased, and the overlap area between the object region and the background region becomes smaller, as shown by the dashed lines in Fig. 2.
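The reduction of the standard deviation from \(\sigma _{i}\) to \(\sigma _{i}/\sqrt{m_{i}({{\varvec{x}}})}\) can be checked with a quick simulation (the values below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

mu, sigma, m = 100.0, 30.0, 25          # region mean, std, window size
pixels = rng.normal(mu, sigma, size=100000)

# Group pixels into "sampling windows" of m samples and average each window:
# the spread of the window averages shrinks from sigma to sigma / sqrt(m) = 6.
window_means = pixels.reshape(-1, m).mean(axis=1)
```

With two regions whose means are close, this five-fold narrowing of each distribution is what shrinks the gray overlap area in Fig. 2.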

By combining Eq. (7) with Eq. (5), the local statistic force \(\vec {F}_{{\mathrm {local}}}\) is described as:

$$\begin{aligned} \vec {F}_{\mathrm {local}}=\left[ \sum \nolimits _{{y\in O}_{x}\cap {{\Omega }_{i}}} \left( \log \frac{\tilde{\sigma }_{2}}{\tilde{\sigma }_{1}}+\frac{\left( I\left( {{\varvec{y}}} \right) -\tilde{\mu }_{2} \right) ^{2}}{2\tilde{\sigma }_{2}^{2}}-\frac{\left( I\left( {{\varvec{y}}} \right) -\tilde{\mu }_{1} \right) ^{2}}{2\tilde{\sigma }_{1}^{2}} \right) \right] \vec {n} \end{aligned}$$
(8)

where \(\tilde{\mu }_{i} , \tilde{\sigma }_{i}^{2}\) are the local mean and variance of the regions \(O_{x}\cap {{\Omega }_{i}},i=1,2\).

To correct the image intensity inhomogeneity, according to Sect. 2.2 we have \(\tilde{\mu }_{i}=\tilde{b}({{\varvec{y}}})\tilde{c}_{i}\) in the sampling window. For a slowly varying bias field, the values \(b({{\varvec{y}}})\) for all \({{\varvec{y}}}\) are close to \(b({{\varvec{x}}})\), i.e., \(b({{\varvec{y}}}){\approx }b({{\varvec{x}}})\) for \({{\varvec{y}}}\in O_{x}\).

Next, let \(K_{\rho }({{\varvec{x}}} , {{\varvec{y}}})\) be the indicator function of \(O_{x}\), with \(\rho \) its radius. The expressions for the variables \(\tilde{c}_{i}, \tilde{\sigma }_{i}^{2}\) and \(\tilde{b}({{\varvec{x}}})\) are as follows (see the Appendix):

$$\begin{aligned} \left\{ {\begin{array}{l} \tilde{c}_{i}=\frac{\int _{{\Omega }_{i}} {K_{\rho }*b\left( {{\varvec{x}}} \right) I({{\varvec{y}}})M_{i}(\varphi )\mathrm {d}{} \mathbf{x}}}{\int _{{\Omega }_{i}} {K_{\rho }*b^{2}\left( {{\varvec{x}}} \right) M_{i}(\varphi )\mathrm {d}{} \mathbf{x}}} \\ \tilde{\sigma }_{i}^{2}=\frac{\int _{{\Omega }_{i}} {K_{\rho }*{(I\left( {{\varvec{y}}} \right) -b\left( {{\varvec{x}}} \right) \tilde{c}_{i})}^{2}M_{i}(\varphi )\mathrm {d}{} \mathbf{x}}}{\int _{{\Omega }_{i}} {K_{\rho }*M_{i}(\varphi )\mathrm {d}{} \mathbf{x}}} \\ \tilde{b}\left( {{\varvec{x}}} \right) =\frac{\sum \nolimits _{i=1}^2 {K_{\rho }*I({{\varvec{y}}})M_{i}(\varphi ){\cdot }\frac{\tilde{c}_{i}}{\tilde{\sigma }_{i}^{2}}} }{\sum \nolimits _{\mathrm {i}=1}^2 {K_{\rho }*M_{i}(\varphi ){\cdot }\frac{\tilde{c}_{i}}{\tilde{\sigma }_{i}^{2}}} }\\ \end{array} } \right. \end{aligned}$$
(9)

where \(*\) denotes the convolution operator and \(M_{i}(\varphi )\) is the phase indicator of \({{\Omega }_{i}}\), which is defined as \(M_{i}\left( \varphi \right) =\left\{ {\begin{array}{ll} H\left( \varphi \right) &{}i=1\\ 1-H\left( \varphi \right) &{}i=2\\ \end{array} } \right. \). \(H\left( \varphi \right) \) is the Heaviside function, defined as \(H\left( \varphi \right) =\left\{ {\begin{array}{ll} 1&{}\varphi \ge 0\\ 0&{}\varphi <0\\ \end{array} } \right. \). In practice, \(H\left( \varphi \right) \) is approximated by the smooth function \(H\left( \varphi \right) =\frac{1}{2}[1+{\frac{2}{\uppi }\mathrm{tan}}^{-1}\frac{\varphi }{\varepsilon }]\), where \(\varepsilon \) is a small constant.
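The smooth Heaviside approximation and the phase indicators \(M_{i}(\varphi )\) can be sketched as follows (an illustrative NumPy snippet; the names are ours):

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Smooth Heaviside H(phi) = 0.5 * (1 + (2/pi) * arctan(phi/eps));
    eps controls how sharply H switches between 0 and 1."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def phase_indicators(phi, eps=1.0):
    """M_1 = H(phi) marks Omega_1, M_2 = 1 - H(phi) marks Omega_2;
    the two indicators partition the image (they always sum to 1)."""
    H = heaviside(phi, eps)
    return H, 1.0 - H
```

These indicators restrict the convolution integrals of Eq. (9) to the corresponding region while keeping every quantity differentiable in \(\varphi \).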

2.4 LSLBA

The D2Q9 model is used in this paper, so the external force F must be separated into nine directions satisfying \(F_{\alpha }=t_{\alpha }F\) (\(\alpha =0, t_{\alpha }=4/9; \alpha =1,2,3,4, t_{\alpha }=1/9; \alpha =5,6,7,8, t_{\alpha }=1/36\)). Suppose \(F=\varepsilon F^{(0)}\); then \(F_{\alpha }=\varepsilon F_{\alpha }^{(0)}\), \(\sum \nolimits _\alpha {F_{\alpha }=F} , \sum \nolimits _\alpha {{\mathbf {e}}_{\alpha }F}_{\alpha }=0\) (Guo and Zheng 2009). Finally, we adopt the LB model with an external force (Frisch et al. 1987; Guo and Zheng 2009) and substitute Eqs. (2), (3), and (8) into Eq. (1). The final mathematical equation of LSLBA is:

$$\begin{aligned} \varphi _{\alpha }\left( {{\varvec{r}}}+\mathbf{e}_{\alpha }{\varvec{\Delta }}t,t+{\varvec{\Delta }} t \right)= & {} \varphi _{\alpha }\left( {{\varvec{r}}},t \right) +1/ \tau \left( \varphi _{\alpha }^{eq}\left( {{\varvec{r}}},t \right) -\varphi _{\alpha }\left( {{\varvec{r}}},t \right) \right) \nonumber \\&+\, \upgamma {\frac{\mathbf{e}_{\alpha }}{\tau c_{\mathrm {s}}^{2}}}{\vec {F}_{\mathrm {local}}}({{\varvec{r}}},\mathrm {t}) {\varvec{\Delta }} t \quad \alpha =0,1,\cdots 8 \end{aligned}$$
(10)

where \(\upgamma \) is a weighting coefficient that regulates the influence of the external force on the segmentation model, and \(c_{s} \) is the lattice sound speed. In this paper, \(c_{\mathrm {s}}^{2}=1/3\).
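A single LSLBA iteration of Eq. (10) can be sketched as follows. This is an illustrative NumPy implementation under our assumptions: the forcing is distributed over directions with the weights \(t_{\alpha }\) as stated above, \({\mathbf {e}}_{\alpha }\) acts on \(\vec {F}_{\mathrm {local}}\) as a dot product, and periodic boundaries are used for simplicity:

```python
import numpy as np

# D2Q9 directions e_alpha, weights t_alpha, and squared lattice sound speed
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
CS2 = 1.0 / 3.0

def lslba_step(f, tau, Fx, Fy, gamma=0.1, dt=1.0):
    """One LSLBA iteration (Eq. 10): BGK collision, forcing, propagation.
    f: (9, H, W) populations; tau: (H, W); (Fx, Fy): external force field."""
    phi = f.sum(axis=0)
    feq = W[:, None, None] * phi                     # equilibrium t_alpha * phi
    f = f + (feq - f) / tau                          # collision (Eq. 2)
    for a in range(9):
        # weighted directional forcing F_alpha = t_alpha * F, scaled by
        # gamma / (tau * c_s^2) * dt as in Eq. (10)
        f[a] += gamma * W[a] * (E[a, 0] * Fx + E[a, 1] * Fy) / (tau * CS2) * dt
        # propagation along e_alpha (Eq. 3), periodic for simplicity
        f[a] = np.roll(f[a], shift=(E[a, 1], E[a, 0]), axis=(0, 1))
    return f
```

Because \(\sum _{\alpha }t_{\alpha }{\mathbf {e}}_{\alpha }=0\), the weighted forcing cancels over the nine directions at each pixel, so the total of \(\varphi \) is conserved while the force transports the level set field along \(\vec {F}_{\mathrm {local}}\).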

3 Programming realization of LSLBA

Figure 3a is the flow chart of LSLBA, and Fig. 3b is the kernel of traditional LB methods with \(F_{\alpha }\).

Fig. 3
figure 3

The flow chart of the proposed LSLBA. a is the flow chart of LSLBA, b is the kernel of LB algorithm with \({{\varvec{F}}}_{{\varvec{\upalpha }} }\)

Table 1 shows the detailed programming realization of LSLBA.

Table 1 Programming realization of LSLBA

In Part 1, we first input the initial contour \(\varphi \) and the radius of the sampling window \(\rho \); next, we calculate the variables of image I: the relaxation factor \(\tau \) by Eq. (4) and the parameters \(c_{i}, \upsigma _{i}^{2} , b_{i}\left( {{\varvec{x}}} \right) \) by Eq. (9), to obtain the local statistic force \(\vec {F}_{\mathrm {local}}\); then, we apply the kernel LB model from Part 2 with the external force to obtain the total particle distribution function over the nine directions as the segmentation result. In Part 2, the distribution function \(\varphi _{\alpha }\left( {{\varvec{r}}}+\mathbf{e}_{\alpha }{\varvec{\Delta }}t,t+{\varvec{\Delta }} t \right) \) is updated by the collision and propagation processes with the external force in each direction.

Figure 4 shows an example of the segmentation procedure via LSLBA for white matter in an MR brain image. Figure 4a is the initial contour, Fig. 4b is the intermediate process, where the red arrows are the evolution directions of the contour, Fig. 4c is the final segmentation result, and Fig. 4d, e are the bias field and the corrected image.

Fig. 4
figure 4

An example of MR brain white matter segmented using LSLBA. a is the initial contour, b is the intermediate process, and the red arrows are the evolution directions of the contour, c is the final segmentation result, and d and e are the bias field and the image after being corrected (Color figure online)

4 Experiments and results

Experiments were carried out using synthetic images and MR brain images to test whether LSLBA can accurately segment MR brain images. At the same time, comparison experiments were carried out against five closely related algorithms: Wang’s LB algorithm (2011), Balla-Arabe’s LB algorithm (2013), Dakua’s random walk algorithm (2011), and Li’s (2011) and Zhang’s (2010) LS algorithms. The programming code for the five existing algorithms was partly downloaded from open sources in the related literature.

This section presents four experiments in total. First, we designed a synthetic image to determine the size \(\rho \) of the sampling window; second, we tested LSLBA on a synthetic image with similar gray levels in the target and background regions, used to simulate low-contrast object detection in MR images; third, we added random Gaussian noise with different standard deviations to an MR brain image to test robustness at different noise levels; finally, we compared segmentation accuracy and computation time with the five algorithms mentioned above.

All experiments were implemented in Matlab R2010a on a PC with a clock speed of 1.83 GHz and 3 GB of RAM. We set the parameters in LSLBA as follows: the time step \({\varvec{\Delta }}t=1,\upvarepsilon =1, \upgamma =0.1\). The central coordinates of the initial contour are defined at the center of each image. Dice coefficient (DC) and Hausdorff distance (HD) values are chosen as evaluation parameters to judge segmentation accuracy. DC is the ratio of twice the common area of the segmentation result and the ground truth to the sum of their areas; values closer to 1 indicate better results. HD measures the maximum boundary distance between the two segmentations; lower HD values indicate better segmentation results.
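For reference, the two evaluation metrics can be computed as follows (a straightforward NumPy sketch, not the authors' Matlab code; a brute-force Hausdorff distance is used for clarity):

```python
import numpy as np

def dice(seg, gt):
    """Dice coefficient: 2|A intersect B| / (|A| + |B|); 1.0 is perfect overlap."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def hausdorff(seg, gt):
    """Symmetric Hausdorff distance between the two foreground point sets
    (brute force over all pixel pairs; fine for small masks)."""
    a = np.argwhere(seg)
    b = np.argwhere(gt)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

In practice, a distance-transform-based HD is far faster on full-size images; the quadratic version above is only meant to make the definition concrete.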

Fig. 5
figure 5

The correlations between DC and \(\rho \) values. a is the synthetic image, b is DC values with three groups of random noises

4.1 Determining \({\varvec{\rho }} \) of the sampling window

The radius \(\rho \) of the sampling window is a key parameter in LSLBA; it reduces the intra-class variances of the intersection regions to avoid classification errors. To identify \(\rho \), we constructed a synthetic image with the background region \({\Omega }_{1}\) and the target region \({\Omega }_{2}\) (Fig. 5a). The intensities of the two regions were defined as \(\mu _{1}=100, \mu _{2}=120\). Then, we added three groups of Gaussian random noise with standard deviations of \(\sigma _{1}=25, \sigma _{2}=44\); \(\sigma _{1}=36, \sigma _{2}=57\); and \(\sigma _{1}=44, \sigma _{2}=62\), respectively. Finally, we tested different \(\rho \) values (from 0 to 20, in steps of 1) and judged the segmentation results with DC values. Figure 5b shows the correlations between DC and \(\rho \) values. The blue curve is the result with \(\sigma _{1}=25,\sigma _{2}=44\); the green curve is the result with \(\sigma _{1}=36,\sigma _{2}=57\); and the red curve is the result with \(\sigma _{1}=44, \sigma _{2}=62\). From Fig. 5, we can see that the DC values increased as \(\rho \) went from 0 to 6 and became stable when \(\rho \) was larger than 6. We generally chose \(\rho =6\) to 8 because larger \(\rho \) values mean more computing time.

4.2 Segmentation in low contrast gray levels

We established an experiment using the same synthetic image as in Sect. 4.1 to verify whether LSLBA can improve segmentation performance at low-contrast gray levels. We kept the intensity of the background region fixed and gradually changed the intensity of the target region. In this experiment, we kept \(\sigma _{1}=10,\sigma _{\mathrm {2}}=30, \mu _{1}=100\), and decreased \(\mu _{2}\) from 140 to 100 in steps of 10. Figure 6 shows the segmentation results. From left to right, Fig. 6a–e show the images with target intensities from 140 to 100. The results show that LSLBA maintains stable segmentation with high DC values even when the intensities of the target and background regions have very low contrast.

Fig. 6
figure 6

The segmentation results with different intensities in target regions. ae are images with different intensity from 140 to 100

4.3 Experiments of anti-noise

In general, noise in MR images follows a Rician distribution, because thermal noise is the major noise source in MR images; the magnitude of the MR signal is the square root of the sum of squares of the data in the real and imaginary channels. However, the Rician distribution approaches a Gaussian distribution when the signal-to-noise ratio (SNR) of MR images is greater than a small value (Vn 2012). We therefore added Gaussian random noise to MR images to test noise robustness. In this experiment, we chose to segment a tumor in an MR brain image; the standard deviations of the Gaussian random noise range from 10 to 60 in steps of 10. Figure 7 shows the segmentation results.

Fig. 7
figure 7

The segmentation results about adding different Gaussian random noises

We also compared LSLBA with the five other algorithms mentioned above. Table 2 shows the DC and HD results of the six segmentation algorithms under different standard deviations. The experiments were repeated 50 times for each deviation \(\sigma \), and the mean ± standard deviation of the DC and HD values are listed. As shown in Table 2, among the six algorithms, LSLBA is the most accurate and stable, even when the random noise is very strong.

4.4 Comparison experiments

We established comparison experiments to verify the segmentation accuracy and computing speed of LSLBA; three MR brain images were segmented. The target regions were the brain white matter or the tumor. We compared LSLBA with the five algorithms above. The radius of the initial contour in Image 1 is 16, and the central coordinates are (\(+\)5, \(-5\)), deviated from the image center. The radius of the initial contour in Image 2 is 20, and the central coordinates are the image center. The radius of the initial contour in Image 3 is 4, and the central coordinates are (\(+\)12, \(-5\)), deviated from the image center. The coefficient \(\gamma \) for all images is 1.2. The \(\rho \) value of LSLBA in all images is 6. For the other algorithms, we repeated the experiments with different \(\rho \) values, seed points, or iteration numbers to obtain maximum DC values. Figure 8 shows the segmentation results and computing times of these methods. Figure 8a is the initial contour; Fig. 8b, c are the results of Wang’s and Balla-Arabe’s LB algorithms; Fig. 8d is the result of Dakua’s method, whose weighting function for the random walk algorithm is the difference between the Laplacian and Gaussian (DoLOG) methods (the yellow points are the seeds of the foreground object, and the blue points are for the background); Fig. 8e, f are the results of Li’s and Zhang’s algorithms; and Fig. 8g is the result of LSLBA. The reference standards, manually determined by a senior radiologist, established the ground truths shown in Fig. 8h. Table 3 lists the computed DC values and CPU time (CPU_t) for evaluating segmentation accuracy and time consumption.

Table 2 The DC and HD values of six segmentation algorithms under different standard deviations
Fig. 8
figure 8

The segmentation results of Wang’s, Balla-Arabe’s, Dakua’s, Li’s, Zhang’s and our proposed algorithms. a is initial contour; b is Wang’s algorithm; c is Balla-Arabe’s algorithm; d is Dakua’s algorithm; e is Li’s algorithm; f is Zhang’s algorithm; g is ours; h is the ground truth

Table 3 DC values and CPU time (CPU_t) of evaluating accuracy and time consumption for the six algorithms

As shown in Fig. 8 and Table 3, the segmentation accuracy and computing time of LSLBA are better than those of Wang’s algorithm. Balla-Arabe’s and Dakua’s methods are faster than the other algorithms because they need no iterations. However, Balla-Arabe’s algorithm cannot distinguish the object regions; its results are similar to edge detection, and when the iteration number is greater than 1 the segmentation becomes unstable. Dakua’s results depend on the seed point positions: different positions yield different delineations. Although it can successfully segment the tumor image, a large number of seeds must be placed, which makes segmentation burdensome. Compared with the two LS methods, the DC values of Li’s algorithm were lower, and the values of LSLBA were slightly better than those of Zhang’s algorithm. However, the computational time of LSLBA was much shorter than those of both Li’s and Zhang’s algorithms. For example, in Image 2, the DC value of LSLBA was 97.9 %, while the DC values of Li’s and Zhang’s algorithms were 81.77 and 96.1 %, respectively. The computational time of LSLBA was 18.65 s, while Li’s and Zhang’s were 44.71 and 198.38 s, respectively. LSLBA needed only 10-50 % of the computing time of the other two LS methods.

5 Discussion

The above results show that LSLBA can segment MR brain images containing low-contrast objects, noise, and bias fields effectively and efficiently compared with existing LB algorithms. The reasons are discussed as follows:

  1. The new external force in LSLBA can solve the problems of weak boundaries better than the other LB segmentation algorithms can. For example, the previous external force in Wang’s algorithm is based on an assumption that the image intensities are statistically homogeneous. In fact, it is a special case of LSLBA when \(\tilde{\sigma }_{1}({{\varvec{x}}})=\tilde{\sigma }_{2}({{\varvec{x}}})=1\) and \(\rho =1\), which is the reason why it is not effective at segmenting weak boundaries.

  2. We added a sampling window in LSLBA to solve the problems of low-contrast object segmentation and noise. A sampling window can decrease the intra-class variations and lessen the overlap between object regions and background regions. In addition, the sampling window achieves good anti-noise performance by smoothing noise within the small window neighborhood.

  3. We also added a stop function g to the relaxation factor of the LB model, which leads the pixel particles to diffuse faster in smooth areas and slower in sharp areas. In contrast, Wang’s and Balla-Arabe’s algorithms use only a constant diffusion coefficient in the evolution equation, so the contour does not accurately stop at boundaries.

Dakua’s random walk algorithm is the fastest because it does not need any iteration. However, it is highly dependent on seed selection, especially for objects and backgrounds with weak edges, or objects with no edges. Hence, the segmentation results for Images 1 and 2 in Fig. 8d are not satisfactory.

LSLBA achieved similar or even better DC and HD values compared with Li’s and Zhang’s algorithms, but with a shorter computing time. This is because LSLBA is composed only of collision and propagation steps, and all pixels evolve simultaneously with a simple relaxation. Thus, calculation time can be reduced by approximating and simulating the partial differential equation (PDE) with Taylor and Chapman-Enskog expansions. In contrast, the solutions of LS algorithms are mainly based on finite differences, such as the upwind scheme, forward differences, etc.; their stability is usually limited by the Courant-Friedrichs-Lewy condition (Whittaker 1967), which requires more time to achieve stable solutions of the PDE.

Although LSLBA achieved good segmentation results in this paper, further improvements are still required. For example, (1) this paper only presents qualitative analysis of LSLBA, and quantitative studies on MR brain images are needed to verify it; (2) we only realized 2D LSLBA; it could be employed to segment 3D MR brain images using graphics processing units (GPUs) in the future; (3) for the model itself, we need to define more advanced functions to fit the true probability distributions rather than assuming a Gaussian distribution. We expect that the robustness and accuracy of LSLBA can be improved by developing new distribution functions.

6 Conclusions

This paper improved the LB algorithm using a local statistical expression with a sampling window and carried out comparative experiments with several closely related algorithms. The experimental results showed that LSLBA can segment MR brain images more successfully than conventional algorithms can. The conclusions are as follows: (1) in LSLBA, a novel external force strengthens the ability to extract target regions, whether they have weak boundaries or no boundaries at all; (2) the sampling window of the external force decreases the intra-class variances of the foreground and background and smooths noise in a small neighborhood. The experimental results show that LSLBA delivered satisfactory results in both accuracy and efficiency. Nevertheless, further improvements to LSLBA are still required.