Abstract
With the rapid growth of medical big data, medical signal processing and measurement techniques are facing severe challenges. Enormous numbers of medical images are constantly generated by various health monitoring and sensing devices, such as ultrasound and MRI machines. Hence, based on the pulse coupled neural network (PCNN) and the classical visual receptive field (CVRF) modeled by the difference of two Gaussians (DOG), a contrast enhancement method for MRI images is proposed to improve the accuracy of clinical diagnosis for smarter mobile healthcare. First, the parameters of the DOG are estimated from the fundamentals of the CVRF; the PCNN parameters for image enhancement are then estimated with the help of the DOG, so that MRI images can be enhanced adaptively. Due to the exponential decay of the dynamic threshold and the pulse coupling among neurons, the PCNN effectively enhances the contrast of low gray levels in an MRI image. Moreover, because of the inhibitory effects of the inhibitory region in the CVRF, the PCNN also effectively preserves structures such as edges in the enhanced results. Experiments on several MRI images show that the proposed method outperforms other methods by improving contrast while preserving structures well.
1 Introduction
With technological advancements, medical devices are routinely used to detect and record physiological signals that are essential for human health monitoring. Consequently, a huge amount of medical data is generated by these health monitoring and sensing devices, such as ultrasound and MRI machines. Medical big data is now growing so rapidly that medical signal processing and measurement techniques face severe challenges, such as denoising (Jeon 2017), contrast enhancement (Raja et al. 2018), segmentation (Han et al. 2018) and region of interest (ROI) extraction (Al-Ayyoub et al. 2018). It is well known that medical images play an important role in modern disease diagnosis, so image quality directly affects the accuracy of doctors’ diagnoses and treatments. Due to factors such as environmental noise (Jeon 2017), lighting conditions, constrained imaging techniques, and patients’ special conditions (Yang et al. 2010; Hassanpour et al. 2015), low resolution and low contrast are often present in medical images, so that many important structures are not properly visible. In such cases, it is difficult to segment or detect the boundaries of the abnormal structures/lesions or blood vessels present in these poor-quality images. Low resolution can be improved by super-resolution technologies (Yang et al. 2018; Wei et al. 2017), while low contrast must be addressed by image enhancement methods (Iqbal et al. 2014; Tao et al. 2018).
There are many image enhancement methods, and the most common is histogram equalization (HE) (Sonka et al. 1993). In this method, the enhanced image shows a uniform distribution of intensity, but equalization may cause a washed-out effect (Chaira 2014). Thus, several improved versions focusing on contrast enhancement have been proposed, including bi-histogram equalization (BHE) (Chen et al. 2009) and contrast-limited adaptive histogram equalization (CLAHE) (Sasi and Jayasree 2013). However, these techniques do not take the imprecision of gray values into account. Therefore, filtering-based methods (Yang et al. 2003; Karumuri and Kumari 2017; Bhadu et al. 2017) and neural network-based methods (Tao et al. 2017; Park et al. 2018; Ma et al. 2007; Zhang et al. 2010; Xu et al. 2014) have been developed.
The pulse coupled neural network (PCNN), based on the phenomenon of propagating oscillating pulses in the visual cortex of cats (Eckhorn et al. 1988), has the characteristic that groups of neurons with similar stimuli emit synchronous oscillating pulses. PCNN has been successfully applied in image segmentation (Na et al. 2012; Deng and Ma 2014; Zhou and Shao 2017; He et al. 2017), image fusion (Kong and Liu 2013; Xiang et al. 2015; Ganasala and Kumar 2016; Wang and Gong 2017) and so on. Due to the exponential decay of the dynamic threshold and the pulse coupling among neurons, PCNN neurons apply a non-linear transformation to stimuli such as images. Accordingly, the authors in He et al. (2011) introduced an approach to X-ray image enhancement in which the image is directly factorized into an image sequence by PCNN. By replacing the decomposition method, Wu and Zhang (2016) and Yang and Zhai (2014) also performed image enhancement with PCNN. In addition, Li et al. (2005) used PCNN to segment an image and then enhanced the segmented regions with different linear functions. These PCNN-based methods have two main limitations. First, the PCNN parameters are set manually, which limits their universality across different images. Second, because the local synapses among neurons are all positive, edges are not preserved well in the enhanced results, even though the contrast of low gray levels is improved.
Given the above analysis, and focusing on magnetic resonance imaging (MRI), this paper proposes an effective PCNN-based medical image enhancement method. The PCNN parameters are estimated with the help of the classical visual receptive field (CVRF) described by the difference of two Gaussians (DOG). Due to the inhibitory effects of the inhibitory region in the CVRF, the PCNN not only improves the contrast of the original MRI image but also effectively preserves structures such as edges in the enhanced image.
The remainder of the paper is organized as follows. In Sect. 2.1, we briefly introduce PCNN and outline the basic algorithm for MRI image enhancement using PCNN. The CVRF with DOG and its parameter estimation are given in Sect. 2.2. Section 2.3 presents the estimation of the PCNN parameters with the help of the CVRF. Simulation and result analysis are given in Sect. 3. Finally, conclusions are drawn in Sect. 4.
2 Materials and methods
2.1 Pulse coupled neural network
In image processing, the PCNN is generally a single-layer 2-D network in which the pixels, as stimuli, correspond to neurons one-to-one. Ranganath et al. (1995) elaborated the basic PCNN; a simplified PCNN (Ma et al. 2005; Chen et al. 2011), as follows, is then exploited in image processing:
where \(n\) denotes the current iteration and \(\otimes\) is a convolution operator. Each neuron comprises two channels: the F channel receives only the extrinsic stimulus S, while the L channel receives only the coupling pulses Y from its neighbors. Each neuron communicates with its neighbors through the local synapses W. A modulation with linking strength β between the two channel outputs then produces the inner state U. Finally, the inner state is compared with a dynamic threshold θ to judge whether or not the neuron pulses, i.e., whether Y equals 1 or 0. The dynamic threshold decays exponentially with a factor \({\alpha ^\theta }\); once the neuron pulses, its threshold is raised by a large amplitude \({V^\theta }\).
Compared with the basic PCNN, this simplified PCNN discards the exponential decay in the L and F channels, which reduces the number of neuron parameters while retaining the other important properties of the basic PCNN. This kind of PCNN, like the basic PCNN, shows two main characteristics: (1) nonlinear threshold decay. The dynamic threshold decays exponentially from a high level toward the inner state until the neuron pulses; the threshold then steps back to a high level and the process repeats, so the extrinsic stimulus is mapped nonlinearly to a pulse frequency. (2) Local pulse coupling. A neuron receives coupling pulses from its neighbors, which vary its inner state: the neuron pulses earlier if the coupling is excitatory, and later if the coupling is inhibitory. Hence, excitatory pulse coupling raises a neuron's pulse frequency, producing smoothing and feature clustering in an image; conversely, inhibitory pulse coupling decreases its pulse frequency. It is worth noting that the nonlinear map for stimuli in PCNN is not a strict exponential transformation, because the inner state varies continuously with the coupling pulses.
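The simplified PCNN iteration described above can be sketched as follows — a minimal numpy illustration, assuming the common simplified form F = S, L = V^L(W ⊗ Y), U = F(1 + βL), Y = [U > θ], θ ← θe^{-α^θ} + V^θ·Y (the display equations are not reproduced in this text, so the exact form is an assumption); a small padded-convolution helper keeps the sketch dependency-free:

```python
import numpy as np

def convolve2d_same(X, K):
    """Same-size 2-D convolution with zero padding (no scipy dependency)."""
    kh, kw = K.shape
    ph, pw = kh // 2, kw // 2
    Xp = np.pad(X, ((ph, ph), (pw, pw)))
    out = np.zeros_like(X)
    for i in range(kh):
        for j in range(kw):
            # flipped-kernel accumulation implements true convolution
            out += K[kh - 1 - i, kw - 1 - j] * Xp[i:i + X.shape[0], j:j + X.shape[1]]
    return out

def pcnn_iterate(S, W, beta=0.05, V_L=1.0, alpha=0.1, V_theta=400.0, n_iter=1500):
    """Run the simplified PCNN on stimulus S; return the first-firing-time map."""
    theta = np.full(S.shape, 255.0)   # dynamic threshold, starts high
    Y = np.zeros(S.shape)             # pulse output
    fire_time = np.zeros(S.shape)     # iteration at which each neuron first pulses
    for n in range(1, n_iter + 1):
        L = V_L * convolve2d_same(Y, W)       # L channel: coupling pulses from neighbors
        U = S * (1.0 + beta * L)              # modulation of F channel (F = S) by L channel
        Y = (U > theta).astype(float)         # pulse if inner state exceeds threshold
        theta = theta * np.exp(-alpha) + V_theta * Y  # exponential decay + step on pulse
        fire_time[(fire_time == 0) & (Y == 1)] = n
    return fire_time
```

The first-firing-time map illustrates the nonlinear stimulus-to-frequency mapping: brighter pixels cross the decaying threshold sooner, and excitatory coupling pulls neighbors of fired neurons forward in time.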
We set the neuron parameters as \(W=[0.5,1,0.5;0.5,0,0.5;0.5,1,0.5]\), \({V^L}=1\), \(\beta =0.05\), \({\alpha ^\theta }=0.1\), \({V^\theta }=400\), with 1500 PCNN iterations; a simple example of PCNN image processing is then presented in Fig. 1. Whether or not the neurons in PCNN are coupled by local pulses, the results, Fig. 1a2–a4, all present higher contrast than the original image, Fig. 1a1. This implies that the nonlinear map resulting from the nonlinear threshold decay of a neuron can enhance an image. Moreover, we can also see from the histograms, Fig. 1b1–b3, that the nonlinear map leads to serious smoothing in the enhanced image owing to the quantization effects of the dynamic threshold in PCNN (Du et al. 2015). However, Fig. 1b1–b4 indicate that the local pulse coupling among neurons and proper neuron parameters can effectively alleviate this smoothing.
Therefore, we can enhance MRI images through the above nonlinear map for stimuli in PCNN, and the specific process is summarized in Algorithm 1. Parameter estimation is essential for PCNN-based MRI image enhancement. Owing to the coupling pulses in PCNN, the difficulty of parameter estimation lies in how to inhibit over-smoothing of the dominant structures of the MRI image. In this paper, we estimate the neuron parameters with the help of the CVRF.
2.2 CVRF and its parameters estimation
A visual neuron receives visual stimuli from other neurons in a small visual field called the visual receptive field (VRF). The classical VRF (CVRF), with two opponent concentric circles, presents mutually inhibitory characteristics between a central region and a surrounding region. Owing to the opponent functions of the two concentric regions, there are two types of CVRFs, i.e., ON-CVRF and OFF-CVRF. Specifically, the neurons in the central region are excitatory to the central neuron for ON-CVRF while they are inhibitory for OFF-CVRF; conversely, the neurons in the surrounding region are inhibitory and excitatory to the central neuron for ON-CVRF and OFF-CVRF, respectively. In 1965, Rodieck and Stone (Karumuri and Kumari 2017) attempted to model the above phenomena in the CVRF using the difference of two Gaussians (DOG), as follows
where the two terms on the right simulate the functions of the central region and the surrounding region in the CVRF, respectively; \((x,y)\) denotes the position of a neuron relative to the central neuron; A and d1 are the coupling strength and the width of the central region, respectively, and likewise B and d2 are those of the surrounding region. Therefore, \(DOG(x,y)\) describes ON-CVRF if A > B and d1 < d2; conversely, it presents OFF-CVRF if A < B and d1 > d2. The DOG operators for ON-CVRF and OFF-CVRF (see Fig. 2) resemble an upright straw hat and an inverted straw hat, respectively. According to the principle of the CVRF, the sum of the DOG should be 0; however, it is difficult to select proper parameters manually, especially for the discrete DOG.
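The DOG with parameters A, B, d1 and d2 can be evaluated on a discrete grid as follows — a small sketch; the grid radius and parameter values are illustrative assumptions, chosen so that the ON-CVRF conditions A > B, d1 < d2 hold:

```python
import numpy as np

def dog_kernel(radius, A, B, d1, d2):
    """Discrete difference-of-two-Gaussians over a (2r+1) x (2r+1) grid."""
    x, y = np.meshgrid(np.arange(-radius, radius + 1),
                       np.arange(-radius, radius + 1))
    r2 = x**2 + y**2
    # central-region Gaussian minus surrounding-region Gaussian
    return A * np.exp(-r2 / d1**2) - B * np.exp(-r2 / d2**2)
```

For an ON-CVRF setting the kernel is positive (excitatory) at the center and negative (inhibitory) toward the periphery, matching the "upright straw hat" shape of Fig. 2.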
Theorem 1
Given d1 and d2, the range of A/B for the CVRF satisfies
Proof
Suppose at \(({x^{\text{*}}},{y^{\text{*}}})\), \(DOG({x^*},{y^*})=0\), we have
then
On the other hand, ON-CVRF for any \((x,y)\) requires
From Eq. (8), it follows that
From Eqs. (7) and (9), one obtains
Thus, for ON-CVRF, we have
However, for OFF-CVRF, it is
which completes the proof.
Assumption 1
Define \({G_1}(x,y)=\exp( - ({x^2}+{y^2})/d_{1}^{2})\), \({G_2}(x,y)=\exp( - ({x^2}+{y^2})/d_{2}^{2})\). Then, given d1 and d2, we can compute \({D_1}=\sum {{G_1}(x,y)}\) and \({D_2}=\sum {{G_2}(x,y)}\).
Theorem 2
Under Assumption 1, \({A \mathord{\left/ {\vphantom {A B}} \right. \kern-0pt} B}\) in the CVRF satisfies
for \({A \mathord{\left/ {\vphantom {A B}} \right. \kern-0pt} B} \in ({k_1},{k_2})\).
Proof
From Assumption 1, we have
Then, to approximate the final \(A/B\), one obtains
Considering \(A/B \in ({k_1},{k_2})\), the final \(A/B\) can be determined as follows
which completes the proof.
Obviously, given \({d_1}\), \({d_2}\), and A or B in the CVRF, the range of \(A/B\) can be computed by Theorem 1, and the final \(A/B\) can be determined using Theorem 2. In our case, let \({x_+}=x+0.0001\), \({x_ - }=x - 0.0001\).
Suppose B = 0.0195; the results from Theorem 1 and Theorem 2 are shown in Table 1. The sum of the DOG function is nearly equal to 0 if \({D_2}/{D_1}\) lies in the range of \(A/B\) produced by Theorem 1; otherwise it only reaches a minimal value due to the restriction from \({d_1}\) and \({d_2}\) in the CVRF. A small \(\sum {DOG}\) can still be produced even when D1 and D2 are large. Thus, we can easily obtain a desired CVRF using Theorem 1 and Theorem 2.
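The construction in Theorem 2 — taking A/B as D2/D1 and, when necessary, clipping it into the Theorem 1 range (k1, k2) — can be sketched numerically. The bounds k1 and k2 are taken here as assumed inputs, since their closed-form expressions depend on the display equations not reproduced in this text; the grid radius and widths are illustrative:

```python
import numpy as np

def estimate_ab_ratio(radius, d1, d2, k1=None, k2=None):
    """Approximate A/B = D2/D1 (Theorem 2), optionally clipped to (k1, k2)."""
    x, y = np.meshgrid(np.arange(-radius, radius + 1),
                       np.arange(-radius, radius + 1))
    r2 = x**2 + y**2
    D1 = np.exp(-r2 / d1**2).sum()   # sum of the central-region Gaussian
    D2 = np.exp(-r2 / d2**2).sum()   # sum of the surrounding-region Gaussian
    ratio = D2 / D1
    if k1 is not None and k2 is not None:
        ratio = min(max(ratio, k1), k2)   # restrict to the Theorem 1 range
    return ratio, D1, D2
```

With A/B chosen this way, the discrete DOG sum A·D1 − B·D2 vanishes, which is exactly the zero-sum condition stated for the CVRF.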
2.3 PCNN parameters estimation
1. Local synapses W
A neuron and its neighbors in PCNN constitute a VRF, in which the local synapses W represent the stimulating strengths from the neighbors to the neuron. Thus, we use the DOG function of the CVRF to determine W in PCNN, namely
$$W=DOG(x,y).$$(11) This strategy makes a neuron in PCNN receive not only excitatory but also inhibitory stimuli from its neighbors. Due to the inhibitory stimuli, a neuron and its neighbors can present a larger difference, resulting in better-preserved edges and enhanced details. Hence, we set a larger inhibitory region compared with the excitatory region in the CVRF. In our case, \({d_2}=3{d_1}\) is assumed for ON-CVRF, or \({d_1}=3{d_2}\) for OFF-CVRF.
2. Linking strength β
For a PCNN neuron, the internal state U is stable when there are no coupling pulses from neighbors, whereas U is not stable in the presence of coupling pulses on the L channel. Because of the multiplicative modulation between the L and F channels, U increases if \(L>0\), prompting the neuron to pulse earlier; otherwise U decreases if \(L<0\), postponing the neuron's pulse. From Eq. (3), the excitatory or inhibitory effect of the coupling pulses on the central neuron can be measured as
$$\Delta U = F(1+\beta L) - F=\beta FL.$$(12) Suppose the maximal excitatory ability of the neighbors in the CVRF is \(\Delta {U_E}\) and the maximal inhibitory ability is \(\Delta {U_I}\); according to Eq. (12), we have
$$\beta = \frac{{\Delta {U_E}}}{{F{V^L}A{D_1}}} = \frac{{\Delta {U_I}}}{{F{V^L}B{D_2}}}$$(13)where β and \({V^L}\) cannot be estimated separately, so \({V^L}\) is set to 1 in practice. Then, once \(\Delta {U_E}\) or \(\Delta {U_I}\) is given, β can be determined from Eq. (13).
3. Threshold amplitude \({V^T}\)
Once a neuron pulses, its threshold steps to a high level by adding the threshold amplitude \({V^T}\), so that the neuron does not pulse again quickly. From the viewpoint of image enhancement, besides the exponential transformation of the threshold, a threshold amplitude related to the enhanced contrast of edges can further improve the contrast of the image. On the other hand, the CVRF with DOG highlights the contrast of edges in images. Therefore, we use \(DOG(x,y)\) as a convolution kernel on the image I to estimate \({V^T}\), namely
$${V^T} = DOG \otimes I.$$(14)
4. Threshold decay coefficient \({\alpha ^\theta }\)
The threshold decays exponentially over the iterations, so the decayed amplitude is greater at a higher threshold state than at a lower one. That is to say, a neuron would pulse at the same frequency for different low stimuli. Therefore, even for the maximum stimulus 255, the maximum decayed amplitude of the threshold should be limited to 1, namely, \(255 - 255{e^{ - {\alpha ^\theta }}}=1\); then we have
$${\alpha ^\theta } = \ln \frac{{255}}{{254}} \approx 0.0039.$$(15)
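The four estimates of this section can be assembled into one sketch. This is illustrative only: the operating point F in Eq. (13) is taken as the maximum stimulus, an assumption the text leaves implicit; \(V^L=1\) as stated; and the DOG kernel and padded convolution are rebuilt inline so the block is self-contained:

```python
import math
import numpy as np

def convolve_same(X, K):
    """Same-size 2-D convolution with zero padding."""
    kh, kw = K.shape
    Xp = np.pad(X, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(X)
    for i in range(kh):
        for j in range(kw):
            out += K[kh - 1 - i, kw - 1 - j] * Xp[i:i + X.shape[0], j:j + X.shape[1]]
    return out

def estimate_pcnn_params(I, A, B, d1, d2, radius, dU_I=5.0):
    """Estimate W, beta, V_T and alpha for PCNN enhancement from a DOG-based CVRF."""
    x, y = np.meshgrid(np.arange(-radius, radius + 1),
                       np.arange(-radius, radius + 1))
    r2 = x**2 + y**2
    # local synapses: W = DOG(x, y)                       (Eq. 11)
    W = A * np.exp(-r2 / d1**2) - B * np.exp(-r2 / d2**2)
    # linking strength from the maximal inhibitory ability (Eq. 13, V^L = 1);
    # F is taken as the maximum stimulus -- an assumption
    D2 = np.exp(-r2 / d2**2).sum()
    F = I.max() if I.max() > 0 else 1.0
    beta = dU_I / (F * B * D2)
    # threshold amplitude: V_T = DOG convolved with I      (Eq. 14)
    V_T = convolve_same(I.astype(float), W)
    # threshold decay coefficient                          (Eq. 15)
    alpha = math.log(255.0 / 254.0)
    return W, beta, V_T, alpha
```

The returned quantities feed directly into the simplified PCNN iteration of Sect. 2.1, so that the whole enhancement pipeline is parameter-free apart from the CVRF widths and \(\Delta U_I\).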
3 Experiment results and analysis
Extensive experiments were carried out in Matlab 2018a on different MRI images, consisting of four MRI-T1, five MRI-T2 and four MRI-PD images. One of the MRI-T2 images is used only to compare the effectiveness of ON-CVRF and OFF-CVRF in the proposed method. In all experiments, the number of PCNN iterations is set to 680, 690 and 720 for MRI-T1, MRI-T2 and MRI-PD, respectively; the initial threshold of each neuron is 255, and the maximal inhibitory ability \(\Delta {U_I}\) is 5.
To evaluate the performance of the proposed method, it is compared with three typical image enhancement methods: contrast-limited adaptive histogram equalization (CLAHE) (Sasi and Jayasree 2013), homomorphic filtering (HF) (Bhadu et al. 2017), and the probabilistic method with simultaneous illumination and reflectance estimation (PMSIRE) (Fu et al. 2015). Moreover, four quantitative criteria are selected for the objective evaluation of the enhanced images: entropy (EN) (Li and Xie 2015), the contrast improvement index (CII) (Yang and Zhang 2012), the structural similarity (SSIM) (Wang et al. 2004) and the edge preserve index (EPI) (Zhang et al. 2011).
3.1 Evaluation index
Specifically, EN measures the amount of information in the enhanced image; CII reflects the degree of contrast improvement of the enhanced image over the original image; SSIM describes the structural similarity between the original and enhanced images; and EPI indicates how well the salient edges of the original image are preserved in the enhanced image. To facilitate the descriptions of these evaluation metrics, we denote the original and enhanced images as X and Y, respectively.
1. Shannon entropy
Entropy (EN) is a statistical measure of information content, which can be used to characterize the average uncertainty of an image. The index is defined as
$$EN(Y)= - \sum\limits_{{i=0}}^{{255}} {p(i)} {\log _2}(p(i))$$(16)where \(p(i)\) denotes the probability of the pixels with gray-level \(i\) in enhanced image.
2. Contrast improvement index
The contrast improvement index (CII) is a ratio of the global contrasts of the enhanced image and the original image, formulated as
$$CII(X,Y)={{\frac{{\sigma _{Y}^{2}}}{{\sqrt[4]{{{M_Y}}}}}} \mathord{\left/ {\vphantom {{\frac{{\sigma _{Y}^{2}}}{{\sqrt[4]{{{M_Y}}}}}} {\frac{{\sigma _{X}^{2}}}{{\sqrt[4]{{{M_X}}}}}}}} \right. \kern-0pt} {\frac{{\sigma _{X}^{2}}}{{\sqrt[4]{{{M_X}}}}}}}$$(17)where \(\sigma _{X}^{2}\), \(\sigma _{Y}^{2}\), \({M_X}\), \({M_Y}\) are the variances and fourth-order moments of X and Y, respectively.
3. Structural similarity index
The structural similarity index (SSIM) models the image quality as a combination of three terms, namely the luminance term, the contrast term and the structure term. Mathematically, it is described as
$$SSIM(X,Y)=\frac{{(2{\mu _X}{\mu _Y}+{C_1})(2{\sigma _{XY}}+{C_2})}}{{(\mu _{X}^{2}+\mu _{Y}^{2}+{C_1})(\sigma _{X}^{2}+\sigma _{Y}^{2}+{C_2})}}$$(18)where \({\mu _X}\), \({\mu _Y}\), \({\sigma _X}\), \({\sigma _Y}\), \({\sigma _{XY}}\) are the local means, standard deviations, and cross-covariance of X and Y; C1 and C2 are small constants. More details can be found in Wang et al. (2004).
4. Edge preserve index
The edge preserve index (EPI) is the ratio of the sums of horizontal and vertical absolute gradients in the enhanced image and the original image. It is represented as
$$EPI(X,Y)=\frac{{\sum\nolimits_{{i,j}} {\left| {Y(i,j) - Y(i,j+1)} \right|+\left| {Y(i,j) - Y(i+1,j)} \right|} }}{{\sum\nolimits_{{i,j}} {\left| {X(i,j) - X(i,j+1)} \right|+\left| {X(i,j) - X(i+1,j)} \right|} }}$$(19)where \((i,j)\) denotes the location of each pixel in the image. Note that, for all the indexes mentioned above, a larger value indicates a better quality of the enhanced image.
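The four metrics above can be sketched in Python as follows. These are global forms only: Wang et al. (2004) compute SSIM over local windows, so the whole-image computation here (with the conventional constants K1 = 0.01, K2 = 0.03, L = 255) is a simplifying assumption, as is treating the fourth-order moment in CII as a central moment:

```python
import numpy as np

def entropy(Y):
    """Shannon entropy (Eq. 16) of an 8-bit image."""
    p = np.bincount(Y.ravel().astype(np.uint8), minlength=256) / Y.size
    p = p[p > 0]                                  # 0 * log2(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def cii(X, Y):
    """Contrast improvement index (Eq. 17): enhanced over original."""
    def c(Z):
        Z = Z.astype(float)
        m4 = ((Z - Z.mean()) ** 4).mean()         # fourth-order central moment (assumed)
        return Z.var() / m4 ** 0.25
    return c(Y) / c(X)

def ssim_global(X, Y, L=255.0, K1=0.01, K2=0.03):
    """Global SSIM (Eq. 18), computed over the whole image."""
    X, Y = X.astype(float), Y.astype(float)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = X.mean(), Y.mean()
    cov = ((X - mx) * (Y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (X.var() + Y.var() + C2))

def epi(X, Y):
    """Edge preserve index (Eq. 19): absolute-gradient sums of Y over X."""
    def g(Z):
        Z = Z.astype(float)
        return (np.abs(Z[:, :-1] - Z[:, 1:]).sum()    # horizontal differences
                + np.abs(Z[:-1, :] - Z[1:, :]).sum()) # vertical differences
    return g(Y) / g(X)
```

As a sanity check, an image compared with itself yields CII = SSIM = EPI = 1, and an image whose gradients are uniformly doubled yields EPI = 2.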
3.2 Comparison results on ON-CVRF and OFF-CVRF
Both ON-CVRF and OFF-CVRF in our method can be exploited to estimate the neuron parameters. Thus, to compare their abilities for medical image enhancement, the first experiment, in which the total number of PCNN iterations is set to 700, is performed on the MRI-T2 image shown in Fig. 3x. Fixing the excitatory region width at 5 and varying the inhibitory region width from 10 to 25 in ON-CVRF and OFF-CVRF, the results are presented in Fig. 3 and Table 2. Compared with the original image, Fig. 3x, the visual results, Fig. 3a1–a4, b1–b4, show higher brightness and contrast, resulting in stronger edges. It is difficult to distinguish visual differences among these results, except for Fig. 3a1, owing to the presence of small dark blocks that are wrongly enhanced.
On the other hand, it can be seen from the objective evaluations in Table 2 that, as the inhibitory region width \({d_2}\) increases, our method with ON-CVRF presents higher performance than with OFF-CVRF in all metrics except CII. Moreover, the evaluations of the method with OFF-CVRF almost stabilize starting from \({d_2}=15\). We can also observe that the method with ON-CVRF shows abnormal evaluations in terms of CII and SSIM for \(({d_1},{d_2})=(5,10)\), resulting from the wrongly enhanced dark blocks in Fig. 3a1. Therefore, the OFF-CVRF with \(({d_1},{d_2})=(15,5)\) is selected in our method.
3.3 Applications on MRI-T1
To compare the qualities of the enhanced images among the various methods, four MRI-T1 images are first tested in the second experiment, and the results are presented in Fig. 4 and Table 3. We can observe from Fig. 4 that PMSIRE and our method produce higher brightness and contrast than CLAHE and HF, while CLAHE gives nearly non-enhanced results. In Table 3, our method exceeds the other methods in all indexes except EN. Especially for SSIM, our method is very close to 1 while CLAHE and HF are around 0.5, which indicates that nearly all structural information of the original MRI-T1 images has been preserved in the enhanced images. We also find that CLAHE does not enhance the MRI-T1 images except T1-1, because its CII is below 1 for T1-2 to T1-4. Therefore, our method gives better results than the other methods on MRI-T1 images, in both visual perception and objective evaluation.
3.4 Applications on MRI-T2
We further evaluate the performance of the proposed method on four MRI-T2 images, with the results illustrated in Fig. 5 and Table 4. The results of our method, HF and PMSIRE present stronger edges and more details than those of CLAHE, and our method gives better visual contrast than HF. As for MRI-T1, the results from CLAHE in Fig. 5 still show no obvious enhancement effect for MRI-T2. Furthermore, in terms of objective evaluation, our method presents the best performance in CII, SSIM and EPI for each MRI-T2 image; conversely, CLAHE and HF show the worst scores in all metrics except EN. Therefore, our method achieves higher enhancement performance on MRI-T2 images than the other three methods.
3.5 Applications on MRI-PD
Besides the MRI-T1 and MRI-T2 images, another four MRI-PD images are used to verify the effectiveness of the proposed method. In Fig. 6, the results from the proposed method and PMSIRE give better enhancement performance than those from CLAHE and HF, especially for the PD-1 and PD-2 images. Figure 6a4 presents clearer tissue edges than Fig. 6a1–a3, as does Fig. 6b4 compared with Fig. 6b1–b3. In addition, CLAHE still performs poorly on the MRI-PD images, just as on the MRI-T1 and MRI-T2 images, owing to the almost non-enhanced results shown in Fig. 6a1–d1 and the CII values below 1 in Table 5. This is further verified by the objective evaluation in Table 5: CLAHE shows the worst results compared with the other methods in terms of CII, SSIM and EPI, while the proposed method is superior to the others.
Finally, the average quantitative results of the different methods on all MRI images are provided in Table 6. The table clearly shows that the proposed method outperforms the other methods with the largest scores in all indexes except EN. The scores of CII, SSIM and EPI confirm that the images enhanced by our method present higher contrast and preserve more structures and edges.
4 Conclusion
In this study, we developed an effective MRI image enhancement scheme based on PCNN and the CVRF with DOG. In this scheme, the enhanced image is directly produced after inputting an MRI image to the PCNN; to perform this process adaptively for different images, the neuron parameters are estimated with the help of the DOG. In addition, owing to the inhibitory effects of the inhibitory region in the CVRF, the pulse coupling among neurons effectively avoids smoothing structures and edges, so that the structures and edges of the original MRI image are preserved well in the enhanced result. Experiments were performed on three types of MRI images. Compared with other methods, the proposed method presents better enhancement quality, both in visual perception and in objective evaluation.
References
Al-Ayyoub M, Al-Mnayyis N, Alsmirat MA et al (2018) SIFT based ROI extraction for lumbar disk herniation CAD system from MRI axial scans. J Ambient Intell Human Comput. https://doi.org/10.1007/s12652-018-0750-2
Bhadu R, Sharma R, Soni SK, Varma N (2017) Sparse representation and homomorphic filter for capsule endoscopy image enhancement. In: 2017 international conference on computing, communication and automation (ICCCA 2017). IEEE, pp 1178–1182. https://doi.org/10.1109/CCAA.2017.8229976
Chaira T (2014) An improved medical image enhancement scheme using type II fuzzy set. Appl Soft Comput 25(C):293–308
Chen HO, Kong NSP, Ibrahim H (2009) Bi-histogram equalization with a plateau limit for digital image enhancement. IEEE Trans Consum Electron 55(4):2072–2080
Chen Y, Park SK, Ma Y, Ala R (2011) A new automatic parameter setting method of a simplified PCNN for image segmentation. IEEE Trans Neural Netw 22(6):880–892
Deng X, Ma YD (2014) PCNN model analysis and its automatic parameters determination in image segmentation and edge detection. Chin J Electron 23(1):97–103
Du S, Huang Y, Ma J, Ma Y (2015) Mammalian visual characteristics inspired perceptual image quantization using pulse-coupled neural networks. Optik Int J Light Electron Opt 126(21):3135–3139
Eckhorn R, Bauer R, Jordan W et al (1988) Coherent oscillations: a mechanism of feature linking in the visual cortex? Biol Cybern 60(2):121–130
Fu X, Liao Y, Zeng D, Huang Y, Zhang XP, Ding X (2015) A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation. IEEE Trans Image Process 24(12):4965–4977
Ganasala P, Kumar V (2016) Feature-motivated simplified adaptive PCNN-based medical image fusion algorithm in NSST domain. J Digit Imaging 29(1):73–85
Han B, Han Y, Gao X et al (2018) Boundary constraint factor embedded localizing active contour model for medical image segmentation. J Ambient Intell Human Comput. https://doi.org/10.1007/s12652-018-0978-x
Hassanpour H, Samadiani N, Salehi SMM (2015) Using morphological transforms to enhance the contrast of medical images. Egypt J Radiol Nucl Med 46(2):481–489
He S, Liu Y, Ma Y, Song W, Deng H (2011) Medical X-ray image enhancement based on PCNN image factorization. J Image Graph 16(1):21–26
He F, Guo Y, Gao C (2017) An improved pulse coupled neural network with spectral residual for infrared pedestrian segmentation. Infrared Phys Technol 87:22–30
Iqbal K, Odetayo MO, James A (2014) Face detection of ubiquitous surveillance images for biometric security from an image enhancement perspective. J Ambient Intell Human Comput 5(1):133–146
Jeon G (2017) Computational intelligence approach for medical images by suppressing noise. J Ambient Intell Human Comput. https://doi.org/10.1007/s12652-017-0627-9
Karumuri R, Kumari SA (2017) Weighted guided image filtering for image enhancement. In: 2017 2nd International conference on communication and electronics systems (ICCES). IEEE, pp 545–548. https://doi.org/10.1109/CESYS.2017.8321137
Kong W, Liu J (2013) Technique for image fusion based on nonsubsampled shearlet transform and improved pulse-coupled neural network. Opt Eng 52(1):7001–7013
Li B, Xie W (2015) Adaptive fractional differential approach and its application to medical image enhancement. Comput Electr Eng 45:324–335
Li GY, Li HG, Wu TH, Dong M (2005) Applications of PCNN and OTSU theories for image enhancement. J Optoelectron Laser 16(3):358–362
Ma YD, Liu Q, Qian ZB (2005) Automated image segmentation using improved PCNN model based on cross-entropy. In: Proceedings of 2004 international symposium on intelligent multimedia, video and speech processing, 2004, IEEE, pp 743–746
Ma Y, Lin D, Zhang B, Xia C (2007) A novel algorithm of image enhancement based on pulse coupled neural network time matrix and rough set. In: fourth international conference on fuzzy systems and knowledge discovery (FSKD 2007), vol 3, IEEE, pp 86–90. https://doi.org/10.1109/FSKD.2007.93
Na Y, Chen H, Yanfeng LI, Hao X (2012) Coupled parameter optimization of PCNN model and vehicle image segmentation. J Transp Syst Eng Inf Technol 12(1):48–54
Park S, Yu S, Kim M, Park K, Paik J (2018) Dual autoencoder network for retinex-based low-Light image enhancement. IEEE Access. https://doi.org/10.1109/ACCESS.2018.2812809
Raja NSM, Fernandes SL, Dey N et al (2018) Contrast enhanced medical MRI evaluation using Tsallis entropy and region growing segmentation. J Ambient Intell Human Comput. https://doi.org/10.1007/s12652-018-0854-8
Ranganath HS, Kuntimad G, Johnson JL (1995) Pulse coupled neural networks for image processing. In: Proceedings IEEE southeastcon ‘95. Visualize the future, IEEE, pp 37–43
Sasi NM, Jayasree VK (2013) Contrast limited adaptive histogram equalization for qualitative enhancement of myocardial perfusion images. Engineering 5(10):326–331
Sonka M, Hlavac V, Boyle R (1993) Image processing, analysis and machine vision. Springer, Boston
Tao L, Zhu C, Xiang G, Li Y, Jia H, Xie X (2017) LLCNN: A convolutional neural network for low-light image enhancement. In: 2017 IEEE visual communications and image processing (VCIP 2017), IEEE. https://doi.org/10.1109/VCIP.2017.8305143
Tao F, Yang X, Wu W, Liu K, Zhou Z, Liu Y (2018) Retinex-based image enhancement framework by using region covariance filter. Soft Comput 22(5):1399–1420
Wang Z, Gong C (2017) A multi-faceted adaptive image fusion algorithm using a multi-wavelet-based matching measure in the PCNN domain. Appl Soft Comput 61:1113–1124
Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612
Wei S, Zhou X, Wu W, Pu Q, Wang Q, Yang X (2017) Medical image super-resolution by using multi-dictionary and random forest. Sustain Cities Soc. https://doi.org/10.1016/j.scs.2017.11.012
Wu FX, Zhang XB (2016) An enhanced method of color image combined PCNN based on NSCT. Aeronaut Comput Tech 46(5):21–25
Xiang T, Yan L, Gao R (2015) A fusion algorithm for infrared and visible images based on adaptive dual-channel unit-linking PCNN in NSCT domain. Infrared Phys Technol 69:53–61
Xu G, Li C, Zhao J, Lei B (2014) Multiplicative decomposition based image contrast enhancement method using PCNN factoring model. In: Proceeding of the 11th World congress on intelligent control and automation (WCICA 2014), IEEE, pp 1511–1516
Yang X, Zhai Y (2014) Image enhancement based on tetrolet transform and PCNN. Comput Eng Appl 50(19):178–181
Yang M, Zhang G (2012) SAR images filtering via sparse optimization. J Image Graph 17(11):1439–1443
Yang J, Liu L, Jiang T, Fan Y (2003) A modified Gabor filter design method for fingerprint image enhancement. Pattern Recognit Lett 24(12):1805–1817
Yang Y, Su Z, Sun L (2010) Medical image enhancement algorithm based on wavelet transform. Electron Lett 46(2):120–121
Yang X, Wu W, Liu K, Chen W, Zhou Z (2018) Multiple dictionary pairs learning and sparse representation-based infrared image super-resolution with improved fuzzy clustering. Soft Comput 22(5):1385–1398
Zhang YD, Wu LN, Wang SH, Wei G (2010) Color image enhancement based on HVS and PCNN. Sci China Inf Sci 53(10):1963–1976
Zhang WG, Zhang Q, Yang CS (2011) Improved bilateral filtering for SAR image despeckling. Electron Lett 47(4):286–288
Zhou D, Shao Y (2017) Region growing for image segmentation using an extended PCNN model. IET Image Proc 12(5):729–737
Acknowledgements
This work is supported by the National Natural Science Foundation of China (nos. 61463052 and 61463049) and China Postdoctoral Science Foundation (no. 171740).
Author information
Contributions
RN, MH, JC, DZ and ZL conceived and designed the experiments; RN and MH performed the experiments and analyzed the data; JC, DZ and ZL contributed analysis tools; RN and MH wrote the paper, and JC improved the English writing.
Corresponding author
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Nie, R., He, M., Cao, J. et al. Pulse coupled neural network based MRI image enhancement using classical visual receptive field for smarter mobile healthcare. J Ambient Intell Human Comput 10, 4059–4070 (2019). https://doi.org/10.1007/s12652-018-1098-3