
Pulse coupled neural network based MRI image enhancement using classical visual receptive field for smarter mobile healthcare

  • Rencan Nie
  • Min He
  • Jinde Cao
  • Dongming Zhou
  • Zifei Liang
Open Access | Original Research

Abstract

With the rapid growth of medical big data, medical signal processing and measurement techniques are facing severe challenges. Enormous numbers of medical images are constantly generated by various health monitoring and sensing devices, such as ultrasound and MRI machines. Hence, based on the pulse coupled neural network (PCNN) and the classical visual receptive field (CVRF) modeled by the difference of two Gaussians (DOG), a contrast enhancement method for MRI images is proposed to improve the accuracy of clinical diagnosis for smarter mobile healthcare. First, the parameters of the DOG are estimated from the fundamentals of the CVRF; the PCNN parameters for image enhancement are then estimated with the help of the DOG, so that MRI images can be enhanced adaptively. Owing to the exponential decay of the dynamic threshold and the pulse coupling among neurons, the PCNN effectively enhances the contrast of low gray levels in an MRI image. Moreover, because of the inhibitory effects of the inhibitory region in the CVRF, the PCNN also effectively preserves structures such as edges in the enhanced results. Experiments on several MRI images show that the proposed method outperforms other methods, improving contrast while preserving structures well.

Keywords

MRI image enhancement · Medical big data · Pulse coupled neural network · Classical visual receptive field

1 Introduction

With technological advancements, medical devices are routinely used to detect and record physiological signals that are essential for human health monitoring. A huge amount of medical data is therefore generated by health monitoring and sensing devices such as ultrasound and MRI machines. Medical big data is now growing so rapidly that medical signal processing and measurement techniques face severe challenges, such as denoising (Jeon 2017), contrast enhancement (Raja et al. 2018), segmentation (Han et al. 2018) and region of interest (ROI) extraction (Al-Ayyoub et al. 2018). It is well known that medical images play an important role in modern disease diagnosis, so image quality directly affects the accuracy of doctors' diagnoses and treatments. Due to factors such as environmental noise (Jeon 2017), lighting conditions, constrained imaging techniques and patients' special conditions (Yang et al. 2010; Hassanpour et al. 2015), low resolution and low contrast are often present in medical images, so that many important structures are not properly visible. In such cases, it is difficult to segment or detect the boundaries of abnormal structures, lesions or blood vessels in these poor images. Low resolution can be improved by super-resolution technologies (Yang et al. 2018; Wei et al. 2017), whereas low contrast must be addressed by image enhancement methods (Iqbal et al. 2014; Tao et al. 2018).

Among the many image enhancement methods, the most common is histogram equalization (HE) (Sonka et al. 1993). The enhanced image exhibits a uniform intensity distribution, but equalization may cause a washed-out effect (Chaira 2014). Thus, improved versions focusing on contrast enhancement have been proposed, including bi-histogram equalization (BHE) (Chen et al. 2009) and contrast-limited adaptive histogram equalization (CLAHE) (Sasi and Jayasree 2013). However, these techniques do not take the imprecision of gray values into account, so filtering-based methods (Yang et al. 2003; Karumuri and Kumari 2017; Bhadu et al. 2017) and neural network-based methods (Tao et al. 2017; Park et al. 2018; Ma et al. 2007; Zhang et al. 2010; Xu et al. 2014) have been developed.

The pulse coupled neural network (PCNN), based on the phenomenon of propagating oscillating pulses in the visual cortex of cats (Eckhorn et al. 1988), has the characteristic that groups of neurons with similar stimuli can emit synchronous oscillating pulses. The PCNN has been successfully applied to image segmentation (Na et al. 2012; Deng and Ma 2014; Zhou and Shao 2017; He et al. 2017), image fusion (Kong and Liu 2013; Xiang et al. 2015; Ganasala and Kumar 2016; Wang and Gong 2017) and so on. Due to the exponential decay of the dynamic threshold and the pulse coupling among neurons, PCNN neurons perform a nonlinear transformation of stimuli such as images. Accordingly, He et al. (2011) introduced an approach to X-ray image enhancement in which the image is directly factorized into an image sequence by a PCNN. Replacing the decomposition method, Wu and Zhang (2016) and Yang and Zhai (2014) also performed image enhancement with PCNNs. In addition, Li et al. (2005) used a PCNN to segment an image and then enhanced the segmented regions with different linear functions. These PCNN-based methods have two main limitations. First, the PCNN parameters are set manually, which limits their universality across different images. Second, because the local synapses among neurons are all positive, edges are not well preserved in the enhanced results, even though the contrast of low gray levels is improved.

Given the above analysis, and focusing on magnetic resonance imaging (MRI), this paper proposes an effective PCNN-based medical image enhancement method. The PCNN parameters are estimated with the help of the classical visual receptive field (CVRF) described by the difference of two Gaussians (DOG). Due to the inhibitory effects of the inhibitory region in the CVRF, the PCNN not only improves the contrast of the original MRI image but also effectively preserves structures such as edges in the enhanced image.

The remainder of the paper is organized as follows. Section 2.1 briefly introduces the PCNN and outlines the basic algorithm for MRI image enhancement using the PCNN. The CVRF with DOG and its parameter estimation are given in Sect. 2.2. Section 2.3 presents the estimation of the PCNN parameters with the help of the CVRF. Simulations and result analysis are given in Sect. 3. Finally, conclusions are drawn in Sect. 4.

2 Materials and methods

2.1 Pulse coupled neural network

In image processing, the PCNN is generally a single-layer 2-D network in which pixels, as stimuli, correspond one-to-one to neurons. Ranganath et al. (1995) elaborated the basic PCNN; a simplified PCNN (Ma et al. 2005; Chen et al. 2011), given as follows, is then exploited in image processing:
$$F(n)=S$$
(1)
$$L(n)={V^L}Y(n - 1) \otimes W$$
(2)
$$U(n)=F(n)[1+\beta L(n)]$$
(3)
$$Y(n)=\left\{ {\begin{array}{*{20}{l}} {1,}&{U(n)>\theta (n)} \\ {0,}&{otherwise} \end{array}} \right.$$
(4)
$$\theta (n)={e^{ - {\alpha ^\theta }}}\theta (n - 1)+{V^\theta }Y(n - 1)$$
(5)
where \(n\) denotes the current iteration and \(\otimes\) is the convolution operator. Each neuron comprises two channels: the F channel receives only the extrinsic stimulus S, while the L channel receives only the coupling pulses Y from its neighbors. Each neuron communicates with its neighbors through the local synapses W. A modulation between the two channel outputs, with linking strength β, then produces the internal state U. Finally, the internal state is compared with a dynamic threshold θ to decide whether or not the neuron pulses, i.e., whether Y equals 1 or 0. The dynamic threshold decays exponentially with factor \({\alpha ^\theta }\); once a neuron pulses, its threshold is additionally raised by a large amplitude \({V^\theta }\).
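For concreteness, one iteration of Eqs. (1)–(5) can be sketched in a few lines. The experiments in this paper were run in Matlab; the following Python/NumPy version is only an illustrative sketch, and the function name and calling convention are ours:

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_step(S, Y, theta, W, beta, VL, alpha_theta, Vtheta):
    """One iteration of the simplified PCNN, Eqs. (1)-(5).

    S: stimulus image; Y, theta: pulse map and threshold from iteration n-1."""
    F = S                                            # Eq. (1): feeding channel
    L = VL * convolve2d(Y, W, mode='same')           # Eq. (2): linking channel
    U = F * (1.0 + beta * L)                         # Eq. (3): modulation
    Y_new = (U > theta).astype(float)                # Eq. (4): pulse generator
    theta_new = np.exp(-alpha_theta) * theta + Vtheta * Y  # Eq. (5): decay + jump
    return Y_new, theta_new
```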

Compared with the basic PCNN, this simplified PCNN discards the exponential decay in the L and F channels, which reduces the number of neuron parameters while retaining the other important properties of the basic PCNN. Like the basic PCNN, it shows two main characteristics: (1) nonlinear threshold decay. The dynamic threshold decays exponentially from a high level down toward the internal state until the neuron pulses; the threshold then steps back to a high level and the process repeats, so the extrinsic stimulus is mapped nonlinearly to a pulse frequency. (2) Local pulse coupling. A neuron receives coupling pulses from its neighbors, which vary its internal state: the neuron pulses earlier if the coupling is excitatory and later if the coupling is inhibitory. Hence, excitatory coupling raises a neuron's pulse frequency, producing smoothing and feature clustering in an image, whereas inhibitory coupling decreases it. It is worth noting that the nonlinear map of stimuli in the PCNN is not a strict exponential transformation, because the internal state varies continuously with the coupling pulses.

We set the neuron parameters as \(W=[0.5,1,0.5;0.5,0,0.5;0.5,1,0.5]\), \({V^L}=1\), \(\beta =0.05\), \({\alpha ^\theta }=0.1\), \({V^\theta }=400\), with 1500 PCNN iterations; a simple example of PCNN image processing is then presented in Fig. 1. Whether or not the neurons are coupled by local pulses, the results in Fig. 1a2–a4 all present higher contrast than the original image in Fig. 1a1. This implies that the nonlinear map arising from the nonlinear threshold decay of a neuron can enhance an image. Moreover, the histograms in Fig. 1b1–b3 show that this nonlinear map can cause serious smoothing of the enhanced image, owing to the quantization effects of the dynamic threshold in the PCNN (Du et al. 2015). However, Fig. 1b1–b4 indicate that local pulse coupling among neurons, with properly chosen parameters, can effectively alleviate this smoothing.

Fig. 1

An example of PCNN image processing. a1 Original image; a2–a4 the images processed by the PCNN with no local pulse coupling, the PCNN with local pulse coupling, and the PCNN with parameters estimated by the proposed method, respectively; b1–b4 the histograms of a1–a4, respectively

Therefore, we can enhance MRI images through the above nonlinear map of stimuli in the PCNN; the specific process is summarized in Algorithm 1. PCNN parameter estimation is essential for MRI image enhancement: owing to the coupling pulses in the PCNN, the difficulty lies in preventing over-smoothing of the dominant structures of the MRI image. In this paper, we estimate the neuron parameters with the help of the CVRF.
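Algorithm 1 is not reproduced here; as a hedged illustration, the sketch below runs the network and reads out the enhanced image by rescaling each pixel's accumulated pulse count to [0, 255]. This read-out rule is our assumption for illustration, not necessarily the exact rule of Algorithm 1:

```python
def pcnn_enhance(S, W, beta, VL=1.0, alpha_theta=0.0039, Vtheta=400.0, iters=700):
    """Sketch of PCNN-based enhancement using pcnn_step() from above.

    Defaults roughly follow Sect. 3 (initial threshold 255, ~700 iterations);
    Vtheta may also be a per-pixel array, as in Eq. (14) below."""
    Y = np.zeros_like(S, dtype=float)
    theta = np.full(S.shape, 255.0)               # initial threshold
    fire_count = np.zeros_like(S, dtype=float)
    for _ in range(iters):
        Y, theta = pcnn_step(S, Y, theta, W, beta, VL, alpha_theta, Vtheta)
        fire_count += Y                            # accumulate pulse activity
    out = fire_count - fire_count.min()
    return 255.0 * out / max(out.max(), 1e-12)     # rescale to [0, 255]
```

With the Fig. 1 setting (the 3 × 3 W above, β = 0.05, αθ = 0.1, Vθ = 400 and 1500 iterations), a loop of this kind should produce the nonlinear gray-level stretching illustrated in Fig. 1.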

2.2 CVRF and its parameters estimation

A visual neuron receives visual stimuli from other neurons within a small visual field called the visual receptive field (VRF). The classical VRF (CVRF), consisting of two opponent concentric circles, presents mutually inhibitory characteristics between a central region and a surrounding region. Owing to the opponent functions of the two concentric regions, there are two types of CVRFs, namely ON-CVRF and OFF-CVRF. Specifically, neurons in the central region excite the central neuron for ON-CVRF while they inhibit it for OFF-CVRF; conversely, neurons in the surrounding region inhibit and excite the central neuron for ON-CVRF and OFF-CVRF, respectively. In 1965, Rodieck and Stone modeled these phenomena in the CVRF using the difference of two Gaussians (DOG):
$$DOG(x,y)=A{e^{ - \frac{{{x^2}+{y^2}}}{{d_{1}^{2}}}}} - B{e^{ - \frac{{{x^2}+{y^2}}}{{d_{2}^{2}}}}}$$
(6)
where the two terms on the right simulate the functions of the central region and the surrounding region of the CVRF, respectively; \((x,y)\) denotes the position of a neuron relative to the central neuron; A and d1 are the coupling strength and the width of the central region, and B and d2 are those of the surrounding region. Therefore, \(DOG(x,y)\) describes ON-CVRF if A > B and d1 < d2; conversely, it describes OFF-CVRF if A < B and d1 > d2. The DOG operators for ON-CVRF and OFF-CVRF (see Fig. 2) resemble an upright straw hat and an inverted one, respectively. According to the principle of the CVRF, the sum of the DOG should be 0; however, it is difficult to select proper parameters manually, especially for the discrete DOG.
Fig. 2

Examples of CVRF. a ON-CVRF; b OFF-CVRF
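A discrete DOG kernel following Eq. (6) can be generated directly; in this sketch the support radius d1 + d2, matching the outer bound used in Eq. (8) below, is our choice:

```python
import numpy as np

def dog_kernel(d1, d2, A, B, radius=None):
    """Discrete DOG of Eq. (6) on a (2r+1) x (2r+1) grid around the neuron."""
    r = int(np.ceil(d1 + d2)) if radius is None else radius
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    rr = x ** 2 + y ** 2                      # squared distance to the centre
    return A * np.exp(-rr / d1 ** 2) - B * np.exp(-rr / d2 ** 2)
```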

Theorem 1

Given \({d_1}\) and \({d_2}\), the range of \(A/B\) for the CVRF satisfies
$$exp\left( {\frac{{d_{2}^{2} - d_{1}^{2}}}{{d_{2}^{2}}}} \right)<\frac{A}{B}<exp\left( {\frac{{{{({d_1}+1)}^2}(d_{2}^{2} - d_{1}^{2})}}{{d_{1}^{2}d_{2}^{2}}}} \right)\;for\;ON{\text{-}}CVRF$$
$$exp\left( {\frac{{{{({d_1}+1)}^2}(d_{2}^{2} - d_{1}^{2})}}{{d_{1}^{2}d_{2}^{2}}}} \right)<\frac{A}{B}<exp\left( {\frac{{d_{2}^{2} - d_{1}^{2}}}{{d_{2}^{2}}}} \right)\;for\;OFF{\text{-}}CVRF.$$

Proof

Suppose that \(DOG({x^*},{y^*})=0\) at some \(({x^{\text{*}}},{y^{\text{*}}})\); then we have
$$A{e^{ - \frac{{{x^*}^{2}+{y^*}^{2}}}{{d_{1}^{2}}}}}=B{e^{ - \frac{{{x^*}^{2}+{y^*}^{2}}}{{d_{2}^{2}}}}}$$
then
$${x^{\text{*}}}^{2}+{y^{\text{*}}}^{2} = \frac{{d_{1}^{2}d_{2}^{2}}}{{d_{2}^{2} - d_{1}^{2}}}ln\frac{A}{B}.$$
(7)
On the other hand, ON-CVRF requires, for any \((x,y)\),
$$\left\{ {\begin{array}{*{20}{l}} {DOG(x,y)>0,}&{0<{x^2}+{y^2} \leq d_{1}^{2}} \\ {DOG(x,y)<0,}&{{{({d_1}+1)}^2} \leq {x^2}+{y^2} \leq {{({d_1}+{d_2})}^2}.} \end{array}} \right.$$
(8)
Equation (8) implies that
$$d_{1}^{2}<{x^{\text{*}}}^{2}+{y^{\text{*}}}^{2}<{({d_1}+1)^2}.$$
(9)
From Eqs. (7) and (9), one obtains
$$\begin{aligned} d_{1}^{2} & <\frac{{d_{1}^{2}d_{2}^{2}}}{{d_{2}^{2} - d_{1}^{2}}}ln\frac{A}{B}<{({d_1}+1)^2} \\ & \Leftrightarrow \frac{{d_{2}^{2} - d_{1}^{2}}}{{d_{2}^{2}}}<ln\frac{A}{B}<\frac{{{{({d_1}+1)}^2}(d_{2}^{2} - d_{1}^{2})}}{{d_{1}^{2}d_{2}^{2}}}. \\ \end{aligned}$$
(10)
Thus, for ON-CVRF, we have
$$exp\left( {\frac{{d_{2}^{2} - d_{1}^{2}}}{{d_{2}^{2}}}} \right)<\frac{A}{B}<exp\left( {\frac{{{{({d_1}+1)}^2}(d_{2}^{2} - d_{1}^{2})}}{{d_{1}^{2}d_{2}^{2}}}} \right).$$
Similarly, for OFF-CVRF,
$$exp\left( {\frac{{{{({d_1}+1)}^2}(d_{2}^{2} - d_{1}^{2})}}{{d_{1}^{2}d_{2}^{2}}}} \right)<\frac{A}{B}<exp\left( {\frac{{d_{2}^{2} - d_{1}^{2}}}{{d_{2}^{2}}}} \right)$$

which completes the proof.

Assumption 1

Define \({G_1}(x,y)=exp( - ({x^2}+{y^2})/d_{1}^{2})\) and \({G_2}(x,y)=exp( - ({x^2}+{y^2})/d_{2}^{2})\). Then, given d1 and d2, we can compute \({D_1}=\sum {{G_1}(x,y)}\) and \({D_2}=\sum {{G_2}(x,y)}\).

Theorem 2

Under Assumption1, \({A \mathord{\left/ {\vphantom {A B}} \right. \kern-0pt} B}\)in CVRF satisfies
$$\left\{ {\begin{array}{*{20}{l}} {A/B={k_{1+}},}&{{D_2}/{D_1} \leq {k_1}} \\ {A/B={D_2}/{D_1},}&{{k_1}<{D_2}/{D_1}<{k_2}} \\ {A/B={k_{2 - }},}&{{D_2}/{D_1} \geq {k_2}.} \end{array}} \right.$$

where \(({k_1},{k_2})\) is the range of \(A/B\) given by Theorem 1.

Proof

From Assumption 1, we have
$$\sum {\operatorname{DOG} (x,y)} =A{D_1} - B{D_2} = B{D_1}\left( {\frac{A}{B} - \frac{{{D_2}}}{{{D_1}}}} \right)$$
Then, to bring \(\sum {DOG}\) as close to 0 as possible, one minimizes
$$\mathop {min}\limits_{{A/B}} \left| {\sum {\operatorname{DOG} (x,y)} } \right|=\mathop {min}\limits_{{A/B}} \left| {\frac{A}{B} - \frac{{{D_2}}}{{{D_1}}}} \right|$$
Since \(A/B \in ({k_1},{k_2})\), the final \(A/B\) is determined as
$$\left\{ {\begin{array}{*{20}{l}} {A/B = {k_{1+}},}&{{D_2}/{D_1} \leq {k_1}} \\ {A/B = {D_2}/{D_1},}&{{k_1}<{D_2}/{D_1}<{k_2}} \\ {A/B = {k_{2 - }},}&{{D_2}/{D_1} \geq {k_2}} \end{array}} \right.$$
which completes the proof.

Obviously, given \({d_1}\), \({d_2}\), and A or B in the CVRF, the range of \(A/B\) can be computed by Theorem 1, and the final \(A/B\) can then be determined using Theorem 2. In our case, let \({x_+}=x+0.0001\) and \({x_ - }=x - 0.0001\).

Suppose B = 0.0195; the results from Theorems 1 and 2 are shown in Table 1. The sum of the DOG function is nearly equal to 0 if \({D_2}/{D_1}\) lies within the range of \(A/B\) produced by Theorem 1; otherwise it only reaches a minimal value, due to the restriction imposed by \({d_1}\) and \({d_2}\) in the CVRF. A small \(\sum {DOG}\) can still be obtained even when D1 and D2 are large. Thus, we can easily construct a desired CVRF using Theorems 1 and 2.

Table 1

Computing parameters for different CVRFs

| CVRF | d1 | d2 | D1 | D2 | Range of A/B | A/B | ∑DOG |
|------|----|----|-----|-----|--------------|-----|------|
| ON-CVRF | 2 | 4 | 12.565 | 44.935 | (2.117, 5.406) | 3.576 | 1.622e−15 |
| ON-CVRF | 5 | 15 | 78.540 | 587.422 | (2.432, 3.597) | 3.596 | −5.956 |
| ON-CVRF | 10 | 20 | 314.320 | 1123.489 | (2.117, 2.478) | 2.478 | −6.740 |
| OFF-CVRF | 4 | 2 | 44.935 | 12.5647 | (0.0092, 0.0498) | 0.050 | −0.202 |
| OFF-CVRF | 15 | 5 | 587.422 | 78.5398 | (1.114e−4, 3.355e−4) | 2.355e−4 | −1.531 |
| OFF-CVRF | 20 | 10 | 1123.5 | 314.1 | (0.0366, 0.0498) | 0.0497 | −5.0449 |
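Theorems 1 and 2 translate directly into a small routine; in this sketch the ±0.0001 offsets implement \(k_+\) and \(k_-\), and the grid radius d1 + d2 is the same choice as in the kernel sketch above. Under these assumptions, d1 = 2, d2 = 4 should reproduce the first row of Table 1 (D2/D1 ≈ 3.576 lies inside the range, so A/B = D2/D1):

```python
import numpy as np

def estimate_AB_ratio(d1, d2, eps=1e-4):
    """Choose A/B by Theorem 2, clamped to the Theorem 1 interval (k1, k2)."""
    r = int(np.ceil(d1 + d2))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    rr = x ** 2 + y ** 2
    D1 = np.exp(-rr / d1 ** 2).sum()               # Assumption 1
    D2 = np.exp(-rr / d2 ** 2).sum()
    ka = np.exp((d2 ** 2 - d1 ** 2) / d2 ** 2)     # Theorem 1 bounds; their
    kb = np.exp((d1 + 1) ** 2 * (d2 ** 2 - d1 ** 2) / (d1 ** 2 * d2 ** 2))
    k1, k2 = min(ka, kb), max(ka, kb)              # order flips for OFF-CVRF
    return float(min(max(D2 / D1, k1 + eps), k2 - eps))   # Theorem 2 clamp
```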

2.3 PCNN parameters estimation

  1. Local synapses W

    A neuron and its neighbors in the PCNN constitute a VRF, in which the local synapses W represent the stimulating strength from the neighbors to the neuron. Thus, we use the DOG function of the CVRF to determine W in the PCNN, namely
    $$W=DOG(x,y).$$
    (11)

    This strategy lets a neuron in the PCNN receive not only excitatory but also inhibitory stimuli from its neighbors. Owing to the inhibitory stimuli, a neuron and its neighbors can present a greater difference, resulting in better preserved edges and enhanced details. Hence we set the inhibitory region larger than the excitatory region in the CVRF; in our case, \({d_2}=3{d_1}\) is assumed for ON-CVRF and \({d_1}=3{d_2}\) for OFF-CVRF.

     
  2. Linking strength β

    For a PCNN neuron, its internal state U is stable when there are no coupling pulses from its neighbors, whereas U varies in the presence of coupling pulses at the L channel. Because of the multiplicative modulation between the L channel and the F channel, U increases if \(L>0\), prompting the neuron to pulse in advance; conversely, U decreases if \(L<0\), delaying the pulse. From Eq. (3), the excitatory or inhibitory ability of coupling pulses on the central neuron can be measured as
    $$\Delta U = F(1+\beta L) - F=\beta FL.$$
    (12)
    Suppose the maximal excitatory ability of the neighbors in the CVRF is \(\Delta {U_E}\) and the maximal inhibitory ability is \(\Delta {U_I}\); according to Eq. (12), we have
    $$\beta = \frac{{\Delta {U_E}}}{{F{V^L}A{D_1}}} = \frac{{\Delta {U_I}}}{{F{V^L}B{D_2}}}$$
    (13)
    Since β and \({V^L}\) only appear as a product, \({V^L}\) is set to 1 in practice. Then, once \(\Delta {U_E}\) or \(\Delta {U_I}\) is given, β can be determined from Eq. (13).
     
  3. Threshold amplitude \({V^\theta }\)

    Once a neuron pulses, its threshold steps to a high level through the addition of the threshold amplitude \({V^\theta }\), so that the neuron does not pulse again quickly. From the viewpoint of image enhancement, besides the exponential transformation of the threshold, a threshold amplitude tied to the enhanced contrast of edges can further enhance the contrast of the image. On the other hand, the CVRF with DOG yields exactly such edge-contrast enhancement. Therefore, we convolve the image I with \(DOG(x,y)\) to estimate \({V^\theta }\), namely
    $${V^\theta } = DOG \otimes I.$$
    (14)
     
  4. Threshold decay coefficient \({\alpha ^\theta }\)

    The threshold decays exponentially over the iterations, so the decay amplitude is greater at higher threshold levels than at lower ones; consequently, neurons with different low stimuli could pulse at the same frequency. Therefore, even for the maximum stimulus 255, the maximum decay amplitude of the threshold should be limited to 1, namely \(255 - 255{e^{ - {\alpha ^\theta }}}=1\); then we have
    $${\alpha ^\theta } = ln\frac{{255}}{{254}} = 0.0039.$$
    (15)
    A sketch combining the parameter estimates of this subsection follows this list.
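Putting the pieces together, the following is a hedged sketch of the parameter estimation in this subsection, reusing dog_kernel and estimate_AB_ratio from the earlier sketches. The representative stimulus level F, taken here as the image mean, is our assumption, since Eq. (13) leaves F per-neuron:

```python
import math
# assumes numpy as np, scipy.signal.convolve2d, and the dog_kernel /
# estimate_AB_ratio sketches from above

def estimate_pcnn_params(img, d1=15, d2=5, B=0.0195, dU_I=5.0, VL=1.0):
    """W, beta, V^theta, alpha^theta per Sect. 2.3 (OFF-CVRF defaults of Sect. 3)."""
    A = B * estimate_AB_ratio(d1, d2)           # Theorems 1-2 fix A given B
    W = dog_kernel(d1, d2, A, B)                # Eq. (11): synapses from the DOG
    r = (W.shape[0] - 1) // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    D2 = np.exp(-(x ** 2 + y ** 2) / d2 ** 2).sum()
    F = img.mean()                              # representative stimulus (assumption)
    beta = dU_I / (F * VL * B * D2)             # Eq. (13), inhibitory branch
    Vtheta = convolve2d(img, W, mode='same')    # Eq. (14): per-pixel amplitude
    alpha_theta = math.log(255.0 / 254.0)       # Eq. (15), about 0.0039
    return W, beta, Vtheta, alpha_theta
```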
     

3 Experiment results and analysis

Extensive experiments were carried out in Matlab 2018a on different MRI images, consisting of four MRI-T1, five MRI-T2 and four MRI-PD images. One of the MRI-T2 images is used only to compare the effectiveness of ON-CVRF and OFF-CVRF within our proposed method. In all experiments, the number of PCNN iterations is set to 680, 690 and 720 for MRI-T1, MRI-T2 and MRI-PD, respectively; the initial threshold of each neuron is 255, and the maximal inhibitory ability \(\Delta {U_I}\) is 5.

To evaluate the performance of the proposed method, it is compared with three typical image enhancement methods: contrast-limited adaptive histogram equalization (CLAHE) (Sasi and Jayasree 2013), homomorphic filtering (HF) (Bhadu et al. 2017), and the probabilistic method with simultaneous illumination and reflectance estimation (PMSIRE) (Fu et al. 2015). Moreover, four quantitative criteria are selected for the objective evaluation of the enhanced images: entropy (EN) (Li and Xie 2015), the contrast improvement index (CII) (Yang and Zhang 2012), structural similarity (SSIM) (Wang et al. 2004) and the edge preservation index (EPI) (Zhang et al. 2011).

3.1 Evaluation index

Specifically, EN measures the amount of information in the enhanced image; CII reflects the degree of contrast improvement of the enhanced image compared with the original image; SSIM describes the structural similarity between the original and enhanced images; and EPI indicates how well the salient edges of the original image are preserved in the enhanced image. To facilitate the descriptions of these evaluation metrics, we denote the original and enhanced images as X and Y, respectively.

  1. Shannon entropy

    Entropy (EN) is a statistical measure of information content, characterizing the average uncertainty of an image. The index is defined as
    $$EN(Y)= - \sum\limits_{{i=0}}^{{255}} {p(i)} {\log _2}(p(i))$$
    (16)
    where \(p(i)\) denotes the probability of the pixels with gray level \(i\) in the enhanced image.
     
  2. Contrast improvement index

    The contrast improvement index (CII) is the ratio of the global contrasts of the enhanced and original images, formulated as
    $$CII(X,Y)={{\frac{{\sigma _{Y}^{2}}}{{\sqrt[4]{{{M_Y}}}}}} \mathord{\left/ {\vphantom {{\frac{{\sigma _{Y}^{2}}}{{\sqrt[4]{{{M_Y}}}}}} {\frac{{\sigma _{X}^{2}}}{{\sqrt[4]{{{M_X}}}}}}}} \right. \kern-0pt} {\frac{{\sigma _{X}^{2}}}{{\sqrt[4]{{{M_X}}}}}}}$$
    (17)
    where \(\sigma _{X}^{2}\), \(\sigma _{Y}^{2}\), \({M_X}\), \({M_Y}\) are the variances and the fourth-order moments of X and Y, respectively.
     
  3. Structural similarity index

    The structural similarity index (SSIM) models image quality as a combination of three terms: luminance, contrast and structure. Mathematically, it is described as
    $$SSIM(X,Y)=\frac{{(2{\mu _X}{\mu _Y}+{C_1})(2{\sigma _{XY}}+{C_2})}}{{(\mu _{X}^{2}+\mu _{Y}^{2}+{C_1})(\sigma _{X}^{2}+\sigma _{Y}^{2}+{C_2})}}$$
    (18)
    where \({\mu _X}\), \({\mu _Y}\), \({\sigma _X}\), \({\sigma _Y}\), \({\sigma _{XY}}\) are the local means, standard deviations and cross-covariance of X and Y; C1 and C2 are small constants. More details can be found in Wang et al. (2004).
     
  4. Edge preservation index

    The edge preservation index (EPI) is the ratio of the sums of horizontal and vertical absolute gradients in the enhanced and original images. It is represented as
    $$EPI(X,Y)=\frac{{\sum\nolimits_{{i,j}} {\left| {Y(i,j) - Y(i,j+1)} \right|+\left| {Y(i,j) - Y(i+1,j)} \right|} }}{{\sum\nolimits_{{i,j}} {\left| {X(i,j) - X(i,j+1)} \right|+\left| {X(i,j) - X(i+1,j)} \right|} }}$$
    (19)
    where \((i,j)\) denotes the location of each pixel in an image. Note that for all the indexes above, larger values indicate better quality of the enhanced image. A sketch of these index computations follows this list.
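Under these definitions, EN, CII and EPI each take only a few lines of NumPy. The sketch below assumes central moments in Eq. (17), which the text leaves implicit; SSIM can be taken from skimage.metrics.structural_similarity:

```python
import numpy as np

def en(Y):
    """Eq. (16): Shannon entropy of an 8-bit image."""
    p = np.bincount(Y.astype(np.uint8).ravel(), minlength=256) / Y.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cii(X, Y):
    """Eq. (17): ratio of variance over the 4th root of the 4th-order moment."""
    def c(I):
        return I.var() / ((I - I.mean()) ** 4).mean() ** 0.25
    return float(c(Y) / c(X))

def epi(X, Y):
    """Eq. (19): ratio of summed horizontal and vertical absolute gradients."""
    def g(I):
        return np.abs(np.diff(I, axis=1)).sum() + np.abs(np.diff(I, axis=0)).sum()
    return float(g(Y) / g(X))
```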
     

3.2 Comparison results on ON-CVRF and OFF-CVRF

Both ON-CVRF and OFF-CVRF can be exploited in our method to estimate the neuron parameters. Thus, to compare their abilities for medical image enhancement, the first experiment, in which the total number of PCNN iterations is set to 700, is performed on the MRI-T2 image shown in Fig. 3x. Fixing the excitatory region width at 5 and varying the inhibitory region width from 10 to 25 for both ON-CVRF and OFF-CVRF, the results are presented in Fig. 3 and Table 2. Compared with the original image in Fig. 3x, the visual results in Fig. 3a1–a4, b1–b4 all show higher brightness and contrast, with stronger edges. It is difficult to distinguish visual differences among these results, except for Fig. 3a1, which contains small dark blocks that are wrongly enhanced.

Fig. 3

Visual results for different CVRFs in our method. x Original MRI image; ai and bi (i = 1, 2, 3, 4) are the results for ON-CVRF and OFF-CVRF, respectively; \({d_1} = 5\), \({d_2} = 5(i+1)\) for each ai, while \({d_1} = 5(i+1)\), \({d_2} = 5\) for each bi

On the other hand, the objective evaluations in Table 2 show that, as the inhibitory region width increases, our method with OFF-CVRF presents higher performance than with ON-CVRF in all metrics except CII. Moreover, the evaluations for the method with OFF-CVRF almost stabilize once the inhibitory region width reaches 15. We can also observe that the method with ON-CVRF yields abnormal evaluations in terms of CII and SSIM for \(({d_1},{d_2})=(5,10)\), resulting from the wrongly enhanced dark blocks in Fig. 3a1. Therefore, the OFF-CVRF with \(({d_1},{d_2})=(15,5)\) is selected in our method.

Table 2

Objective evaluation of the proposed method with different CVRFs

| Index | ON-CVRF (5, 10) | ON-CVRF (5, 15) | ON-CVRF (5, 20) | ON-CVRF (5, 25) | OFF-CVRF (10, 5) | OFF-CVRF (15, 5) | OFF-CVRF (20, 5) | OFF-CVRF (25, 5) |
|-------|--------|--------|--------|--------|--------|--------|--------|--------|
| EN | 3.9426 | 4.0665 | 4.0861 | 4.0955 | 4.1530 | 4.1888 | 4.1889 | 4.1889 |
| CII | 1.5370 | 1.6374 | 1.6354 | 1.6336 | 1.6325 | 1.6289 | 1.6288 | 1.6288 |
| SSIM | 0.8159 | 0.9072 | 0.9093 | 0.9099 | 0.9101 | 0.9110 | 0.9110 | 0.9110 |
| EPI | 1.1238 | 1.1829 | 1.1812 | 1.1827 | 1.2027 | 1.2077 | 1.2078 | 1.2078 |

Italic values indicate the best-performance values of each objective evaluation of the methods

3.3 Applications on MRI-T1

To compare the quality of the enhanced images among the various methods, four MRI-T1 images are first tested in the second experiment; the results are presented in Fig. 4 and Table 3. We can observe from Fig. 4 that PMSIRE and our method yield higher brightness and contrast than CLAHE and HF, while CLAHE gives nearly non-enhanced results. In Table 3, our method exceeds the other methods in all indexes except EN. Especially for SSIM, our method is very close to 1 while CLAHE and HF are around 0.5, which indicates that nearly all structural information of the original MRI-T1 images is preserved in the enhanced images. We also find that CLAHE does not enhance the MRI-T1 images except T1-1, because its CII is below 1 for T1-2 to T1-4. Therefore, our method gives better results than the other methods on MRI-T1 images in both visual perception and objective evaluation.

Fig. 4

Visual results of different methods on MRI-T1 images. xi (i = 1, 2, 3, 4) is the original MRI-T1 image T1-i; y1–y4 (y = a, b, c, d) represent CLAHE, HF, PMSIRE and our method, respectively

Table 3

Objective evaluation of various methods for MRI-T1 images

| MRI-T1 | Index | CLAHE | HF | PMSIRE | Ours |
|--------|-------|-------|------|--------|------|
| T1-1 | EN | 4.4679 | 4.6050 | 4.5039 | 4.1683 |
| | CII | 1.0416 | 1.2332 | 1.6222 | 1.6254 |
| | SSIM | 0.4933 | 0.4566 | 0.8945 | 0.9017 |
| | EPI | 1.0277 | 0.9505 | 1.1216 | 1.2526 |
| T1-2 | EN | 4.3268 | 4.7345 | 4.1843 | 4.0943 |
| | CII | 0.8193 | 1.1231 | 1.366 | 1.4003 |
| | SSIM | 0.487 | 0.4410 | 0.9322 | 0.9341 |
| | EPI | 0.9297 | 0.9490 | 1.0023 | 1.0571 |
| T1-3 | EN | 4.3751 | 4.2657 | 4.1210 | 4.0197 |
| | CII | 0.8377 | 1.0632 | 1.3782 | 1.4087 |
| | SSIM | 0.4962 | 0.4512 | 0.9259 | 0.9377 |
| | EPI | 0.9516 | 0.8305 | 0.9762 | 1.0311 |
| T1-4 | EN | 3.353 | 3.535 | 3.306 | 3.1535 |
| | CII | 0.843 | 1.048 | 1.359 | 1.3868 |
| | SSIM | 0.362 | 0.334 | 0.954 | 0.9550 |
| | EPI | 0.919 | 0.843 | 0.999 | 1.0452 |

Italic values indicate the best-performance values of each objective evaluation of the methods

3.4 Applications on MRI-T2

We further evaluate the performance of the proposed method on four MRI-T2 images; the results are illustrated in Fig. 5 and Table 4. The results of our method, HF and PMSIRE present stronger edges and more details than those of CLAHE, and our method gives better visual contrast than HF. As for MRI-T1, the CLAHE results in Fig. 5 still show no obvious enhancement effect on MRI-T2. Furthermore, in terms of objective evaluation, our method presents the best performance in CII, SSIM and EPI for each MRI-T2 image; conversely, CLAHE and HF show the worst scores in all metrics except EN. Therefore, our method achieves higher enhancement performance on MRI-T2 images than the other three methods.

Fig. 5

Visual results of different methods on MRI-T2 images. xi (i = 1, 2, 3, 4) is the original MRI-T2 image T2-i; y1–y4 (y = a, b, c, d) represent CLAHE, HF, PMSIRE and our method, respectively

Table 4

Objective evaluation of various methods for MRI-T2 images

| MRI-T2 | Index | CLAHE | HF | PMSIRE | Ours |
|--------|-------|-------|------|--------|------|
| T2-1 | EN | 4.7529 | 5.0292 | 4.6583 | 4.4479 |
| | CII | 0.8109 | 1.2543 | 1.439 | 1.4705 |
| | SSIM | 0.5004 | 0.4401 | 0.9035 | 0.9150 |
| | EPI | 0.9310 | 0.9911 | 1.0414 | 1.1200 |
| T2-2 | EN | 4.5148 | 5.0064 | 4.5127 | 4.1096 |
| | CII | 0.8726 | 1.3577 | 1.5819 | 1.5949 |
| | SSIM | 0.4697 | 0.4045 | 0.8661 | 0.8999 |
| | EPI | 0.9643 | 1.1697 | 1.2221 | 1.2650 |
| T2-3 | EN | 4.4817 | 4.4762 | 4.4416 | 4.0363 |
| | CII | 0.8375 | 1.3221 | 1.6020 | 1.6177 |
| | SSIM | 0.4700 | 0.4109 | 0.8855 | 0.9131 |
| | EPI | 0.9631 | 0.9701 | 1.0808 | 1.0921 |
| T2-4 | EN | 3.0136 | 3.0128 | 2.7543 | 2.6557 |
| | CII | 0.7890 | 1.2870 | 1.4564 | 1.4606 |
| | SSIM | 0.3192 | 0.2789 | 0.9337 | 0.9486 |
| | EPI | 0.9245 | 1.0519 | 1.0054 | 1.0679 |

Italic values indicate the best-performance values of each objective evaluation of the methods

3.5 Applications on MRI-PD

Besides the MRI-T1 and MRI-T2 images, four MRI-PD images are also used to verify the effectiveness of the proposed method. In Fig. 6, the results of the proposed method and PMSIRE clearly give better enhancement than those of CLAHE and HF, especially for the PD-1 and PD-2 images. Figure 6a4 presents clearer tissue edges than Fig. 6a1–a3, as does Fig. 6b4 compared with Fig. 6b1–b3. In addition, CLAHE still performs worst on the MRI-PD images, just as for MRI-T1 and MRI-T2, given its almost non-enhanced results in Fig. 6a1–d1 and its small CII scores in Table 5 (below 1 for all images except PD-2). This is further verified by the objective evaluation in Table 5: CLAHE shows the worst results among the compared methods in terms of CII, SSIM and EPI, while the proposed method is superior to the others.

Fig. 6

Visual results of different methods on MRI-PD images. xi (i = 1, 2, 3, 4) is the original MRI-PD image PD-i; y1–y4 (y = a, b, c, d) represent CLAHE, HF, PMSIRE and our method, respectively

Table 5

Objective evaluation of various methods for MRI-PD images

| MRI-PD | Index | CLAHE | HF | PMSIRE | Ours |
|--------|-------|-------|------|--------|------|
| PD-1 | EN | 4.4940 | 4.5610 | 4.3065 | 4.2077 |
| | CII | 0.8873 | 1.3495 | 1.6612 | 1.7394 |
| | SSIM | 0.5247 | 0.4669 | 0.8910 | 0.9002 |
| | EPI | 0.9279 | 1.0456 | 1.0889 | 1.1808 |
| PD-2 | EN | 3.8278 | 4.2661 | 3.8457 | 3.7523 |
| | CII | 1.0632 | 1.5703 | 2.0459 | 2.0518 |
| | SSIM | 0.4446 | 0.3907 | 0.8716 | 0.8790 |
| | EPI | 0.9483 | 1.2060 | 1.2735 | 1.4389 |
| PD-3 | EN | 3.8973 | 3.8789 | 3.4918 | 3.5458 |
| | CII | 0.8237 | 1.2510 | 1.4647 | 1.5626 |
| | SSIM | 0.4885 | 0.4382 | 0.9196 | 0.9201 |
| | EPI | 0.9485 | 1.0688 | 1.0728 | 1.2021 |
| PD-4 | EN | 2.9927 | 2.9135 | 2.6902 | 2.5933 |
| | CII | 0.7730 | 1.0755 | 1.3536 | 1.4060 |
| | SSIM | 0.3428 | 0.3202 | 0.9509 | 0.9537 |
| | EPI | 0.9128 | 0.9573 | 1.0233 | 1.0658 |

Italic values indicate the best-performance values of each objective evaluation of the methods

Finally, the average quantitative results of the different methods over all MRI images are provided in Table 6. The proposed method clearly outperforms the other methods, with the largest scores in all indexes except EN. The CII, SSIM and EPI values confirm that the images enhanced by our method present higher contrast and preserve more structures and edges.

Table 6

Average objective evaluation of various methods over all MRI images

| Index | CLAHE | HF | PMSIRE | Ours |
|-------|-------|------|--------|------|
| EN | 4.0415 | 4.1904 | 3.9014 | 3.7320 |
| CII | 0.8666 | 1.2446 | 1.5275 | 1.5604 |
| SSIM | 0.4499 | 0.4028 | 0.9107 | 0.9215 |
| EPI | 0.9457 | 1.0028 | 1.0756 | 1.1516 |

Italic values indicate the best-performance values of each objective evaluation of the methods

4 Conclusion

In this study, we develop an effective MRI image enhancement scheme based on the PCNN and the CVRF with DOG. In this scheme, the enhanced image is produced directly by feeding the MRI image into the PCNN; to perform this process adaptively for different images, the neuron parameters are estimated with the help of the DOG. In addition, owing to the inhibitory effects of the inhibitory region in the CVRF, the pulse coupling among neurons effectively avoids smoothing structures and edges, so that the structures and edges of the original MRI image are well preserved in the enhanced result. Experiments were performed on three types of MRI images; compared with other methods, the proposed method yields better enhancement quality in both visual perception and objective evaluation.


Acknowledgements

This work is supported by the National Natural Science Foundation of China (nos. 61463052 and 61463049) and China Postdoctoral Science Foundation (no. 171740).

Author contributions

RN, MH, JC, DZ and ZL conceived and designed the experiments; RN and MH performed the experiments and analyzed the data; JC, DZ and ZL contributed analysis tools; RN and MH wrote the paper, and JC improved the English writing.

References

  1. Al-Ayyoub M, Al-Mnayyis N, Alsmirat MA et al (2018) SIFT based ROI extraction for lumbar disk herniation CAD system from MRI axial scans. J Ambient Intell Human Comput. https://doi.org/10.1007/s12652-018-0750-2
  2. Bhadu R, Sharma R, Soni SK, Varma N (2017) Sparse representation and homomorphic filter for capsule endoscopy image enhancement. In: 2017 international conference on computing, communication and automation (ICCCA 2017). IEEE, pp 1178–1182. https://doi.org/10.1109/CCAA.2017.8229976
  3. Chaira T (2014) An improved medical image enhancement scheme using type II fuzzy set. Appl Soft Comput 25(C):293–308
  4. Chen HO, Kong NSP, Ibrahim H (2009) Bi-histogram equalization with a plateau limit for digital image enhancement. IEEE Trans Consum Electron 55(4):2072–2080
  5. Chen Y, Park SK, Ma Y, Ala R (2011) A new automatic parameter setting method of a simplified PCNN for image segmentation. IEEE Trans Neural Netw 22(6):880–892
  6. Deng X, Ma YD (2014) PCNN model analysis and its automatic parameters determination in image segmentation and edge detection. Chin J Electron 23(1):97–103
  7. Du S, Huang Y, Ma J, Ma Y (2015) Mammalian visual characteristics inspired perceptual image quantization using pulse-coupled neural networks. Optik Int J Light Electron Opt 126(21):3135–3139
  8. Eckhorn R, Bauer R, Jordan W et al (1988) Coherent oscillations: a mechanism of feature linking in the visual cortex? Biol Cybern 60(2):121–130
  9. Fu X, Liao Y, Zeng D, Huang Y, Zhang XP, Ding X (2015) A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation. IEEE Trans Image Process 24(12):4965–4977
  10. Ganasala P, Kumar V (2016) Feature-motivated simplified adaptive PCNN-based medical image fusion algorithm in NSST domain. J Digit Imaging 29(1):73–85
  11. Han B, Han Y, Gao X et al (2018) Boundary constraint factor embedded localizing active contour model for medical image segmentation. J Ambient Intell Human Comput. https://doi.org/10.1007/s12652-018-0978-x
  12. Hassanpour H, Samadiani N, Salehi SMM (2015) Using morphological transforms to enhance the contrast of medical images. Egypt J Radiol Nucl Med 46(2):481–489
  13. He S, Liu Y, Ma Y, Song W, Deng H (2011) Medical X-ray image enhancement based on PCNN image factorization. J Image Graph 16(1):21–26
  14. He F, Guo Y, Gao C (2017) An improved pulse coupled neural network with spectral residual for infrared pedestrian segmentation. Infrared Phys Technol 87:22–30
  15. Iqbal K, Odetayo MO, James A (2014) Face detection of ubiquitous surveillance images for biometric security from an image enhancement perspective. J Ambient Intell Human Comput 5(1):133–146
  16. Jeon G (2017) Computational intelligence approach for medical images by suppressing noise. J Ambient Intell Human Comput. https://doi.org/10.1007/s12652-017-0627-9
  17. Karumuri R, Kumari SA (2017) Weighted guided image filtering for image enhancement. In: 2017 2nd international conference on communication and electronics systems (ICCES). IEEE, pp 545–548. https://doi.org/10.1109/CESYS.2017.8321137
  18. Kong W, Liu J (2013) Technique for image fusion based on nonsubsampled shearlet transform and improved pulse-coupled neural network. Opt Eng 52(1):7001–7013
  19. Li B, Xie W (2015) Adaptive fractional differential approach and its application to medical image enhancement. Comput Electr Eng 45:324–335
  20. Li GY, Li HG, Wu TH, Dong M (2005) Applications of PCNN and OTSU theories for image enhancement. J Optoelectron Laser 16(3):358–362
  21. Ma YD, Liu Q, Qian ZB (2005) Automated image segmentation using improved PCNN model based on cross-entropy. In: Proceedings of 2004 international symposium on intelligent multimedia, video and speech processing. IEEE, pp 743–746
  22. Ma Y, Lin D, Zhang B, Xia C (2007) A novel algorithm of image enhancement based on pulse coupled neural network time matrix and rough set. In: Fourth international conference on fuzzy systems and knowledge discovery (FSKD 2007), vol 3. IEEE, pp 86–90. https://doi.org/10.1109/FSKD.2007.93
  23. Na Y, Chen H, Yanfeng LI, Hao X (2012) Coupled parameter optimization of PCNN model and vehicle image segmentation. J Transp Syst Eng Inf Technol 12(1):48–54
  24. Park S, Yu S, Kim M, Park K, Paik J (2018) Dual autoencoder network for retinex-based low-light image enhancement. IEEE Access. https://doi.org/10.1109/ACCESS.2018.2812809
  25. Raja NSM, Fernandes SL, Dey N et al (2018) Contrast enhanced medical MRI evaluation using Tsallis entropy and region growing segmentation. J Ambient Intell Human Comput. https://doi.org/10.1007/s12652-018-0854-8
  26. Ranganath HS, Kuntimad G, Johnson JL (1995) Pulse coupled neural networks for image processing. In: Proceedings IEEE Southeastcon '95: Visualize the future. IEEE, pp 37–43
  27. Sasi NM, Jayasree VK (2013) Contrast limited adaptive histogram equalization for qualitative enhancement of myocardial perfusion images. Engineering 5(10):326–331
  28. Sonka M, Hlavac V, Boyle R (1993) Image processing, analysis and machine vision. Springer, Boston
  29. Tao L, Zhu C, Xiang G, Li Y, Jia H, Xie X (2017) LLCNN: a convolutional neural network for low-light image enhancement. In: 2017 IEEE visual communications and image processing (VCIP 2017). IEEE. https://doi.org/10.1109/VCIP.2017.8305143
  30. Tao F, Yang X, Wu W, Liu K, Zhou Z, Liu Y (2018) Retinex-based image enhancement framework by using region covariance filter. Soft Comput 22(5):1399–1420
  31. Wang Z, Gong C (2017) A multi-faceted adaptive image fusion algorithm using a multi-wavelet-based matching measure in the PCNN domain. Appl Soft Comput 61:1113–1124
  32. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612
  33. Wei S, Zhou X, Wu W, Pu Q, Wang Q, Yang X (2017) Medical image super-resolution by using multi-dictionary and random forest. Sustain Cities Soc. https://doi.org/10.1016/j.scs.2017.11.012
  34. Wu FX, Zhang XB (2016) An enhanced method of color image combined PCNN based on NSCT. Aeronaut Comput Tech 46(5):21–25
  35. Xiang T, Yan L, Gao R (2015) A fusion algorithm for infrared and visible images based on adaptive dual-channel unit-linking PCNN in NSCT domain. Infrared Phys Technol 69:53–61
  36. Xu G, Li C, Zhao J, Lei B (2014) Multiplicative decomposition based image contrast enhancement method using PCNN factoring model. In: Proceedings of the 11th world congress on intelligent control and automation (WCICA 2014). IEEE, pp 1511–1516
  37. Yang X, Zhai Y (2014) Image enhancement based on tetrolet transform and PCNN. Comput Eng Appl 50(19):178–181
  38. Yang M, Zhang G (2012) SAR images filtering via sparse optimization. J Image Graph 17(11):1439–1443
  39. Yang J, Liu L, Jiang T, Fan Y (2003) A modified Gabor filter design method for fingerprint image enhancement. Pattern Recognit Lett 24(12):1805–1817
  40. Yang Y, Su Z, Sun L (2010) Medical image enhancement algorithm based on wavelet transform. Electron Lett 46(2):120–121
  41. Yang X, Wu W, Liu K, Chen W, Zhou Z (2018) Multiple dictionary pairs learning and sparse representation-based infrared image super-resolution with improved fuzzy clustering. Soft Comput 22(5):1385–1398
  42. Zhang YD, Wu LN, Wang SH, Wei G (2010) Color image enhancement based on HVS and PCNN. Sci China Inf Sci 53(10):1963–1976
  43. Zhang WG, Zhang Q, Yang CS (2011) Improved bilateral filtering for SAR image despeckling. Electron Lett 47(4):286–288
  44. Zhou D, Shao Y (2017) Region growing for image segmentation using an extended PCNN model. IET Image Proc 12(5):729–737

Copyright information

© The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. School of Information Science and Technology, Yunnan University, Kunming, China
  2. School of Automation, Southeast University, Nanjing, China
  3. Department of Radiology, NYU Langone Health, New York, USA
