Detection of Lung Contour with Closed Principal Curve and Machine Learning
Abstract
Radiation therapy plays an essential role in the treatment of cancer. In radiation therapy, the ideal radiation dose is delivered to the observed tumor while sparing neighboring normal tissues. In three-dimensional computed tomography (3DCT) scans, the contours of tumors and organs-at-risk (OARs) are often manually delineated by radiologists. The task is complicated and time-consuming, and manually delineated results vary among radiologists. We propose a semi-supervised contour detection algorithm, which first uses a few points of the region of interest (ROI) as an approximate initialization. Data sequences are then produced by the closed polygonal line (CPL) algorithm, where each sequence consists of the ordered projection indexes and the corresponding initial points. Finally, a smooth lung contour is obtained by training the data sequences with the back-propagation neural network model (BNNM). We measure the accuracy of the presented method on a private clinical dataset and on the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. On the private dataset, experiments with initial points as few as 15% of the manually delineated points show that the Dice coefficient reaches 0.95 and the global error is as low as 1.47 × 10^{−2}. The proposed algorithm also outperforms the cubic spline interpolation (CSI) algorithm. On the public LIDC-IDRI dataset, our method achieves superior segmentation performance with an average Dice of 0.83.
Keywords
Lung contour · Principal curve · Closed polygonal line algorithm · Machine learning
Introduction
With the advancement of medical imaging technology, the amount of data obtained from clinical images has increased exponentially. Important information about organ diseases can be quantitatively provided by clinical images, although in some clinics quantification is still performed manually. In order to speed up this manual task and reduce workload, combining computer-aided diagnosis (CAD) with automatic detection methods has become a research hotspot. A contour is an ordered set of data points with segments connecting them into a piecewise-defined curve. The obtained contour can be represented in a simple form and is useful in solving various problems such as shape matching and retrieval, character recognition, and medical image analysis [1]. Interference factors such as noise, occlusion, and artifacts make both the detection and representation problems challenging. Therefore, accurate detection of the region of interest (ROI) contour in medical images is necessary.
Most current medical image edge detection techniques can be categorized as feature-classification approaches [2, 3, 4, 5], threshold segmentation approaches [6, 7], and contour curve detection approaches [8, 9, 10, 11]. Tang et al. [12] have developed a splat feature classification method to detect retinal hemorrhage. The authors report an area under the receiver operating characteristic (ROC) curve [13] of 0.96 and 0.87 at the splat and image level, respectively. Maggio et al. [14] have successfully exploited a hybrid feature selection algorithm to prune unimportant features and achieve rapid computation. However, both techniques are tested on only a single dataset. In Ref. [15], Pu et al. have presented a computerized scheme to automatically segment the 3D human airway tree based on multi-threshold selection. However, the authors do not use the Dice coefficient [16, 17], a standard measure of similarity, to assess the performance of the proposed method. The contour curve detection approach is effective for describing the shape of a specific organ, where the contour curve consists of the data points of the edge [18]. Compared with the other two approaches, the results produced by the contour curve detection approach require less storage, while the shape feature of the specific organ can be easily extracted.
The main purpose of contour detection is to use shape representation models to approximately represent a boundary curve [18, 19]. Related work can be found in studies on shape representation [20, 21, 22] and curve approximation [23, 24]. Shepherd et al. [25] have proposed a segmentation method combining a statistical shape model (SSM) with online and offline learning based on shape priors, while Song et al. [26] have devised a method for multi-object segmentation using context and shape priors in a 3D graph-theoretic framework with good accuracy. However, both use only a subset of the whole shape as the prior shape, which ignores some information. Zhang et al. [27] have exploited dictionary learning and a local shape prior model to detect the ROI in whole-body CT with increased overall accuracy. However, the authors did not consider noisy inputs. Heibel et al. [28] have combined Markov random fields with a B-spline curve algorithm to approximate a contour curve whose sequence of points was previously known. Aquino et al. [29] have used edge detection and morphological methods followed by the Circular Hough Transform to approximate the optic disc boundary curve. In Ref. [29], however, they do not compare their result to a ground truth.
Among the many contour detection methods, the principal curve technique is a useful tool for noisy inputs and yields robust results [30]. The principal curve was described by Hastie [31] as a smooth curve that passes through the “middle” of a set of data points. The notion has been successfully utilized in many applications, such as skeletonization [32, 33] and curvilinear feature detection from data points [34]. In Ref. [30], the authors have used principal curves to extract coronary artery centerlines. Further, the artificial neural network [2, 3, 4], treated as a classifier, can be used effectively to distinguish tumor regions from non-tumor regions. Thus, principal curves combined with machine learning are a promising candidate for detecting discriminative information in a dataset [35, 36].
Lavdas et al. [37] have used classification forests (CF), convolutional neural networks (CNN), and a multi-atlas (MA) approach for multi-organ segmentation, respectively. The CNN algorithm can learn complex data associations, but its training configuration is complex. In Ref. [38], Tseng et al. have proposed a deep reinforcement learning (DRL) method for dynamic clinical decision making in adaptive radiotherapy. However, developing the method into a fully credible autonomous system would require further validation on larger multi-institutional datasets. Ma et al. [39] have utilized cascade convolutional neural networks for fully automatic detection of thyroid nodules from 2D ultrasound images, while Shaukat et al. [40] have developed a fully automatic method to detect lung nodules using a hybrid feature set with a Support Vector Machine (SVM) classifier. However, neither can accurately detect micro-nodules (< 3 mm). Considering that deep learning is better suited to large datasets [41, 42], we choose a back-propagation neural network for training.
In this work, we use fewer than 15% of the ROI points as an approximate initialization; the approximate contour of the lung image is then obtained by the combined closed polygonal line and back-propagation neural network model (CPL-BNNM) algorithm. Experimental results show that the obtained lung contours are expressed smoothly and accurately once the relation between the data points and their corresponding projection indexes is identified by training with the BNNM. The computational complexity and the workload of radiologists are thereby reduced. A comparison with the cubic spline interpolation (CSI) algorithm further demonstrates the performance of the proposed algorithm.
Materials and Methods
This section first briefly discusses the two theories relevant to this article, namely the principal curve and machine learning. It then describes the overall process of the proposed algorithm. Finally, the quantitative evaluation measures, consisting of the global error and the Dice coefficient, are defined.
Principal Curve
K-Segment Principal Curve (KSPC)
Kégl et al. [33] give a proof of convergence for the KSPC, which guarantees the learning ability of the principal curve, and propose the polygonal line algorithm for finding the KSPC.
Polygonal Line Algorithm
(1) In the projection step, the data points are classified according to the segment or vertex onto which they project. Let f be a polygonal curve composed of vertices v_{1}, v_{2}, …, v_{k + 1} and line segments s_{1}, s_{2}, …, s_{k}, where s_{i} connects v_{i} and v_{i + 1} for i ∈ {1, …, k}. The dataset X_{n} is divided into 2k + 1 disjoint sets, (V_{1}, V_{2}, …, V_{k + 1}) and (S_{1}, S_{2}, …, S_{k}), containing the sample points that project onto the vertex v_{i} or the line segment s_{i}, respectively.
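The projection step above amounts to finding, for each sample point, the nearest segment (or vertex, when the projection parameter is clamped to an endpoint) of the current closed polygon. A minimal sketch, where `project_point` is a hypothetical helper not taken from the paper, which also returns the normalized arc-length position usable later as the projection index:

```python
import numpy as np

def project_point(p, verts):
    """Project point p onto a closed polygon given by verts (k x 2).

    Returns (distance, t): the distance from p to its projection and the
    normalized arc-length position t in [0, 1) of that projection.
    """
    verts = np.asarray(verts, dtype=float)
    k = len(verts)
    seg_len = np.array([np.linalg.norm(verts[(i + 1) % k] - verts[i])
                        for i in range(k)])
    total = seg_len.sum()
    best = (np.inf, 0.0)
    arc = 0.0
    for i in range(k):
        a, b = verts[i], verts[(i + 1) % k]
        ab = b - a
        # clamp the projection parameter so it stays on the segment [a, b];
        # u == 0 or u == 1 corresponds to projecting onto a vertex
        u = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        q = a + u * ab
        d = np.linalg.norm(p - q)
        if d < best[0]:
            best = (d, (arc + u * seg_len[i]) / total)
        arc += seg_len[i]
    return best
```

Classifying each point by which segment or vertex achieved the minimum reproduces the partition into the 2k + 1 disjoint sets.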
(2) In the vertex optimization step, the position of each vertex is adjusted on the principle that the distance from the sample points to the principal curve is minimized. A gradient-based method that minimizes the penalized distance function moves each vertex, thereby changing the adjoining line segments.
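One gradient step of the vertex optimization can be sketched as follows; this is a simplified illustration (no curvature penalty term, and `update_vertices` and its learning rate `lr` are hypothetical names, not from the paper) that moves each vertex so as to reduce the summed squared distance from the sample points currently assigned to it:

```python
import numpy as np

def update_vertices(verts, points, assign, lr=0.1):
    """One gradient step: for each vertex i, descend on the penalty
    sum ||x - v_i||^2 over the points assigned to it (assign[j] == i)."""
    verts = np.asarray(verts, dtype=float).copy()
    points = np.asarray(points, dtype=float)
    assign = np.asarray(assign)
    for i in range(len(verts)):
        mine = points[assign == i]
        if len(mine):
            # gradient of sum ||x - v||^2 w.r.t. v is -2 * sum(x - v)
            verts[i] += lr * 2.0 * (mine - verts[i]).sum(axis=0)
    return verts
```

In the full algorithm the penalized distance function also includes the curvature penalty factor, which damps sharp turns between adjacent segments.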
Machine Learning
BNNM is a machine learning algorithm for training a multilayer neural network. It trains multilayer feed-forward neural networks by iterative gradient descent. In this section, we summarize the essential equations used to implement the BNNM.
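As a concrete illustration of the BNNM described above, the sketch below trains a one-hidden-layer feed-forward network by back-propagation with gradient descent. The network shape (one input for the projection index t, two outputs for the contour coordinates), the toy circular targets, and the hyper-parameters are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# network: 1 input (projection index t) -> N hidden -> 2 outputs (x, y)
N = 8
w = rng.normal(scale=0.5, size=(1, N))   # input-to-hidden weights w_i
T = np.zeros(N)                          # hidden thresholds T_i
v = rng.normal(scale=0.5, size=(N, 2))   # hidden-to-output weights v_{i,k}
r = np.zeros(2)                          # output thresholds r_k

t = np.linspace(0.0, 1.0, 50).reshape(-1, 1)              # projection indexes
y = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]   # toy closed contour

def global_error():
    out = sigmoid(t @ w - T) @ v - r
    return 0.5 * ((out - y) ** 2).sum() / len(t)

E0 = global_error()
eta = 0.1   # a small learning rate, chosen here for stable convergence
for epoch in range(5000):
    h = sigmoid(t @ w - T)          # hidden activations
    out = h @ v - r                 # linear output layer
    err = out - y                   # deviation of actual from expected output
    dh = (err @ v.T) * h * (1 - h)  # error back-propagated to the hidden layer
    v -= eta * h.T @ err / len(t)   # hidden-to-output update
    r += eta * err.mean(axis=0)
    w -= eta * t.T @ dh / len(t)    # input-to-hidden update
    T += eta * dh.mean(axis=0)

E = global_error()
```

After training, the network maps each projection index to a smooth point on the fitted closed curve, which is the role the BNNM plays in the proposed method.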
Proposed Algorithm
Obtain Data Sequences
In the first step, normalize the dataset {x_{1}, x_{2}, …, x_{n}} and record the coordinates (x_{i}, y_{i}) (i = 1, 2, …, n) of the dataset. For uniformity, the dataset is handled in coordinate form, and all coordinates are normalized to the range {(−1, −1)~(1, 1)}.
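The normalization step can be sketched as a per-axis linear rescale into the stated range (the function name `normalize` is ours; the paper does not specify the exact rescaling formula):

```python
import numpy as np

def normalize(points):
    """Linearly map each coordinate axis into the range [-1, 1]."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    return 2.0 * (pts - lo) / (hi - lo) - 1.0
```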
In the third step, enter the outer loop and calculate the value of the outer-loop distance function.
In the fourth step, run the inner loop and adjust the position of each vertex. When the angle between lines is greater than 90° and the shape is closed, the distance function from the data points to the curve is calculated by projecting the dataset onto the line segments and vertices. As the value of the distance function decreases, the vertex positions are updated according to the vertex optimization step. The value of the current distance function is compared with that of the last inner loop; when the reduction is smaller than the maximum distance deviation ∆s = 0.002, the inner-loop stop condition is reached and the fifth step is executed. Otherwise, a new vertex is added and the fourth step is re-executed.
In the fifth step, the value of the current distance function is compared with that of the previous outer loop; when the reduction is smaller than the maximum distance deviation ∆s = 0.002, the outer-loop stop condition is reached, a closed polygon formed by piecewise straight lines is obtained, and the procedure goes to the sixth step. Otherwise, a new vertex is added and the procedure returns to the third step to re-execute the outer loop.
In the sixth step, the projection indexes {t_{1}, t_{2}, …, t_{n}} of the dataset are obtained by projecting the dataset onto the closed polygon. The data points (x_{i}, y_{i}) (i = 1, 2, …, n) are then sorted by projection index t_{i} in ascending order. Finally, the data sequences, consisting of the ordered projection indexes and the corresponding data points {(t_{i}, (x_{i}, y_{i})), i = 1, 2, …, n, 0 ≤ t_{1} < t_{2} < … < t_{n} ≤ 1}, are obtained.
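The sorting in the sixth step is a plain sort by projection index; a minimal sketch with made-up toy values:

```python
# projection indexes t_i (from the CPL step) paired with their data points
t_idx = [0.7, 0.1, 0.4]
points = [(3, 3), (1, 1), (2, 2)]

# ordered data sequence {(t_i, (x_i, y_i))}, sorted so t_1 < t_2 < ... < t_n
seq = sorted(zip(t_idx, points))
```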
Training
The principal curve approximates the distribution of the data points by a continuous, differentiable, and integrable smooth function. Because the functional relationship is complicated, simple regression methods cannot fit it well. The BNNM minimizes the global error over the dataset to approximate the function and fit the curve, yielding a smooth principal curve.
The steepness parameter λ determines the active region of the activation function. As λ decreases from infinity to zero, the sigmoid function changes from the unit step function to the constant value 0.5.
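The behavior described above corresponds to a sigmoid of the form 1 / (1 + e^{−λx}) (an assumption consistent with the stated limits: λ → ∞ gives the unit step, λ → 0 gives the constant 0.5):

```python
import math

def sigmoid(x, lam):
    """Sigmoid with steepness parameter lam: 1 / (1 + exp(-lam * x))."""
    return 1.0 / (1.0 + math.exp(-lam * x))
```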
The corresponding parameters are denoted as follows:

- N: the number of neurons at the hidden layer
- w_{i}: the weight from the input layer to the i-th neuron at the hidden layer
- T_{i}: the output threshold of the i-th neuron at the hidden layer
- v_{i,k}: the weight from the i-th neuron at the hidden layer to the k-th neuron at the output layer
- r_{k}: the output threshold of the k-th neuron at the output layer
Quantitative Evaluation
In order to confirm the performance of the proposed CPL-BNNM algorithm, the Dice coefficient and the global error are used.
Global Error
In the BNNM, training minimizes the global error E, where E is the sum over the training samples of the mean square errors E_{k}, and E_{k} represents the deviation between the actual output and the expected output of the neural network.
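A minimal sketch of this global error, assuming the common convention E = (1/2n) Σ‖actual − expected‖² (the exact scaling constant is not stated in the text):

```python
import numpy as np

def global_error(actual, expected):
    """Global error E: half the mean over n samples of the squared
    deviation between actual and expected network outputs."""
    actual = np.asarray(actual, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return 0.5 * ((actual - expected) ** 2).sum() / len(actual)
```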
Dice Coefficient
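The Dice coefficient between a detected region A and the reference region B is the standard overlap measure 2|A ∩ B| / (|A| + |B|); a minimal sketch for binary masks (the function name is ours):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum())
```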
Results and Discussions
In this section, in order to demonstrate the performance of the proposed CPL-BNNM algorithm, we use a private high-resolution lung dataset and the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset for contour detection. The private dataset is acquired by 3DCT scans, and the detection results are compared with the manually delineated contours by overlapping ratio. All contours of ROIs are manually delineated by professional radiologists as a reference for evaluation. The detection results of the proposed algorithm are evaluated quantitatively and qualitatively. The anonymized 3DCT dataset is provided by the Second Affiliated Hospital of Soochow University; the dataset format is DICOM and the image size is 512 × 512. The public LIDC-IDRI dataset of CT scans was acquired from the Lung Imaging Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) [44]. The experiments run on a computer with an Intel Core i5-4590 3.3 GHz CPU and 2 GB RAM.
A high momentum parameter causes the system to over-oscillate; on the other hand, a low momentum parameter leads to local minimization, which slows down training. Hence, the momentum parameter α is set to 1 in the BNNM. In the CPL algorithm, the range of the distance deviation is ∆s ∈ [0, 0.002] and the curvature penalty factor is λ′_p = 0.13. In the following experiments on the actual dataset, the blue lines show the contours obtained by the proposed algorithm, and the yellow lines show the manually delineated contours.
Contour Detection under Different Learning Rates on Private Dataset
In the BNNM, the selection of the learning rate is critical. Small learning rates require more steps to achieve acceptable results; conversely, a learning rate that is too large leads to oscillation near extreme points and prevents convergence. To examine the influence of the number of epochs and the number of neurons in the BNNM, a compromise learning rate of η = 0.5 is selected.
Contour Detection under Different Numbers of Neurons at the Hidden Layer on Private Dataset
Contour Detection under Different Numbers of Epochs on Private Dataset
Contour Detection under Different Algorithms on Private Dataset
To prove the effectiveness of the proposed algorithm in lung contour detection, we choose the fitting accuracy as the criterion. Existing algorithms are chosen for comparison with the proposed CPL-BNNM algorithm: the main current algorithms are the least squares (LS) algorithm and the cubic spline interpolation (CSI) algorithm. The LS algorithm is recommended by the International Electrotechnical Commission, and the CSI algorithm, which has high accuracy, can model the closed curve.
Figure 13(A), (B) show the left lung image processed by the CSI and CPL-BNNM algorithms, respectively, and Fig. 13(C), (D) show the right lung image processed by the CSI and CPL-BNNM algorithms, respectively. △(f) denotes the global squared Euclidean distance function, used as the evaluation index. In principle, as △(f) decreases, the curve f is closer to the dataset and the similarity of the result is higher. Since the behavior is similar across the dataset, Lung A is used as an example. In Lung A, for the left lung image in Fig. 13(1A), (1B), the △(f) = 1.44 × 10^{−2} obtained by the CSI algorithm is much larger than the △(f) = 2.83 × 10^{−3} achieved by the proposed CPL-BNNM algorithm. Likewise, for the right lung image in Fig. 13(1C), (1D), the △(f) = 1.78 × 10^{−2} of the CSI algorithm is much larger than the △(f) = 4.36 × 10^{−3} of the proposed CPL-BNNM algorithm. From the nine compared results, it can be concluded that when there are excessively many discrete data points, the inverse matrix becomes more complicated and the fitting result of the CSI algorithm deteriorates. The proposed CPL-BNNM algorithm contains a feed-forward neural network with a hidden layer, which can approximate any continuous function with arbitrary precision. Thus, the fitting precision of the proposed CPL-BNNM algorithm is better than that of the CSI algorithm.
Furthermore, although a closed curve can also be fitted by the CSI algorithm, the sequence of projection indexes must be obtained manually while collecting the dataset, which increases operational complexity. In this paper, the projection indexes are obtained by the proposed CPL-BNNM algorithm. When the dataset is collected, more data points are obtained by the retention algorithms. With the property of passing through the “middle” of the data points, the principal curve has a certain ability to tolerate data errors. Hence, the required data acquisition accuracy is reduced, and the proposed CPL-BNNM algorithm is more favorable for practical application.
According to Fig. 13, the dotted line 1 shows the crack produced by the CSI algorithm, which verifies the superiority of the proposed CPL-BNNM algorithm in handling a closed dataset. We therefore only use the dotted line 2 in Fig. 13 for analysis; Fig. 14 shows the corresponding partial magnification of the curve fitting. Because Lung B has more characteristic features than the others, Lung B is mainly analyzed. In Fig. 14, each pixel point of the dataset is too small to observe; to improve the contrast of the experimental results, a cross is used to denote each pixel of the dataset in this paper. It can be seen from Fig. 14(2A), (2B) that at the turns, the principal curve obtained by the CSI algorithm deviates more from the approximate trajectory of the initial points. The reason is that the robustness of the obtained principal curve is weakened by excessive oscillation. On the contrary, with continued training, the proposed CPL-BNNM algorithm acquires a complete and smooth expression of the principal curve. In addition, the center position at which the principal curve passes through the initial points is recorded as well. The fitting problem for a complex data distribution is thus solved, while a mathematical expression of the lung image based on the principal curve is obtained.
The results of Fig. 14(2C), (2D) show that when the dataset is very large or the curvature of the obtained curve is very high, the curves obtained by both the CSI and the proposed CPL-BNNM algorithm deviate from the center lines. Intuitively, the curve of the CSI algorithm seems closer to the approximate trajectory of the dataset, but the curve obtained by the proposed CPL-BNNM algorithm covers relatively more of the initial points. In addition, the curve achieved by the proposed CPL-BNNM algorithm can repair itself automatically through continued learning, approaching the center of the dataset.
Contour Detection under Different Algorithms on Public LIDC-IDRI Dataset
Comparison of proposed study with previous works (Dice values in mean ± standard deviation)
| Authors, years | Technique/method | Database | Dice coefficient |
|---|---|---|---|
| Proposed method | CPL-BNNM | LIDC-IDRI | 0.83 ± 0.11 |
| Wang et al., 2017 [47] | Multi-view deep convolutional neural network | LIDC-IDRI | 0.78 ± 0.16 |
| Wang et al., 2017 [48] | Central focused convolutional neural networks | LIDC-IDRI | 0.82 ± 0.11 |
| Song et al., 2016 [49] | Toboggan-based growing automatic segmentation | LIDC-IDRI | 0.81 ± 0.04 |
| Kubota et al., 2011 [50] | Distance transform, region growing, convex hull | LIDC-IDRI | 0.69 ± 0.18 |
| Lavdas et al., 2017 [37] | 3D convolutional neural network | Private | 0.81 ± 0.13 |
Conclusion
In CT images, the detection and recognition of organs are classic problems in image processing, and lung contour detection is one of the key problems in CT imaging. In this paper, we evaluate the proposed method on a private dataset and on the public LIDC-IDRI dataset. For the private dataset, the data points manually delineated by radiologists are treated as the initial dataset. The data sequences are generated by the CPL algorithm, where each sequence is made up of the ordered projection indexes and the corresponding data points. Taking the projection index as the independent variable and the initial data points as the dependent variable, the data sequence is trained by the BNNM, after which the smooth contour of the lung is obtained. With the proposed CPL-BNNM algorithm, the computational complexity of contour extraction and the workload of radiologists can be reduced. The quantitative and qualitative experimental results show that our proposed semi-automatic detection method has better extraction accuracy for high-resolution lung datasets obtained by 3DCT scans: a clear lung contour is retrieved by the proposed CPL-BNNM algorithm through training on the data. Compared with other methods on the public LIDC-IDRI dataset, our method achieves superior segmentation performance with an average Dice of 0.83. However, the BNNM can become too complex with excessive training, leading to overfitting. Overfitting is a common phenomenon in machine learning and causes a large deviation between the actual output and the expected output. To address overfitting, we plan to use regularization or dropout to optimize the BNNM in the future. In addition, we plan to extend the proposed two-dimensional method to three-dimensional medical reconstruction based on the contour extraction of two-dimensional lung images.
Acknowledgments
The authors would like to thank the Second Affiliated Hospital of Soochow University for their support.
References
1. Ataer-Cansizoglu E, Bas E, Kalpathy-Cramer J, Sharp GC, Erdogmus D: Contour-based shape representation using principal curves. Pattern Recognition 46:1140–1150, 2013
2. Okumura E, Kawashita I, Ishida T: Computerized Classification of Pneumoconiosis on Digital Chest Radiography Artificial Neural Network with Three Stages. J Digit Imaging 30:1–14, 2017
3. Wang J, Kato F, Yamashita H, Baba M, Cui Y, Li R, Oyama-Manabe N, Shirato H: Automatic Estimation of Volumetric Breast Density Using Artificial Neural Network-Based Calibration of Full-Field Digital Mammography: Feasibility on Japanese Women With and Without Breast Cancer. J Digit Imaging 30:215–227, 2017
4. Kamra A, Jain VK, Singh S, Mittal S: Characterization of Architectural Distortion in Mammograms Based on Texture Analysis Using Support Vector Machine Classifier with Clinical Evaluation. J Digit Imaging 29:104–114, 2016
5. Song Y, Cai W, Zhou Y, Feng DD: Feature-based image patch approximation for lung tissue classification. IEEE Transactions on Medical Imaging 32:797–808, 2013
6. Brosch T, Tang LYW, Yoo Y, Li DKB, Traboulsee A, Tam R: Deep 3D convolutional encoder networks with shortcuts for multiscale feature integration applied to multiple sclerosis lesion segmentation. IEEE Transactions on Medical Imaging 35:1229–1239, 2016
7. Aarle WA, Batenburg KJ, Sijbers J: Optimal threshold selection for segmentation of dense homogeneous objects in tomographic reconstructions. IEEE Transactions on Medical Imaging 30:980–989, 2011
8. Zhao Y, Rada L, Chen K, Harding SP, Zheng Y: Automated vessel segmentation using infinite perimeter active contour model with hybrid region information with application to retinal images. IEEE Transactions on Medical Imaging 34:1797–1807, 2015
9. Barbu A, Suehling M, Xu X, Liu D, Zhou SK, Comaniciu D: Automatic detection and segmentation of lymph nodes from CT data. IEEE Transactions on Medical Imaging 31:240–250, 2012
10. Bates R, Irving B, Markelc B, Kaeppler J, Brown G, Muschel RJ, Brady M, Grau V, Schnabel JA: Segmentation of Vasculature from Fluorescently Labeled Endothelial Cells in Multi-Photon Microscopy Images. IEEE Transactions on Medical Imaging 99:1–10, 2017
11. Ali S, Madabhushi A: An integrated region, boundary, shape-based active contour for multiple object overlap resolution in histological imagery. IEEE Transactions on Medical Imaging 31:1448–1460, 2012
12. Tang L, Niemeijer M, Reinhardt JM, Garvin MK, Abràmoff MD: Splat feature classification with application to retinal hemorrhage detection in fundus images. IEEE Transactions on Medical Imaging 32:364–375, 2013
13. Vogl WD, Waldstein SM, Gerendas BS, Schmidt-Erfurth U, Langs G: Predicting Macular Edema Recurrence from Spatio-Temporal Signatures in Optical Coherence Tomography Images. IEEE Transactions on Medical Imaging 36:1773–1783, 2017
14. Maggio S, Palladini A, Marchi LD, Alessandrini M, Speciale N, Masetti G: Predictive deconvolution and hybrid feature selection for computer-aided detection of prostate cancer. IEEE Transactions on Medical Imaging 29:455–464, 2010
15. Pu J, Fuhrman C, Good WF, Sciurba FC, Gur D: A differential geometric approach to automated segmentation of human airway tree. IEEE Transactions on Medical Imaging 30:266–278, 2011
16. Dai S, Lu K, Dong J, Zhang Y, Chen Y: A novel approach of lung segmentation on chest CT images using graph cuts. Neurocomputing 168:799–807, 2015
17. Zhou H, Goldgof DB, Hawkins S, Wei L, Liu Y, Creighton D, Gillies RJ, Hall LO, Nahavandi S: A robust approach for automated lung segmentation in thoracic CT. Proceedings of the 2015 IEEE Conference on Systems, Man, and Cybernetics (SMC), Kowloon, China, 2015, pp 2267–2272
18. Tu L, Styner M, Vicory J, Elhabian S, Wang R, Hong J, Paniagua B, Prieto JC, Yang D, Whitaker R, Pizer SM: Skeletal Shape Correspondence through Entropy. IEEE Transactions on Medical Imaging 99:1–10, 2017
19. Shao Y, Gao Y, Guo Y, Shi Y, Yang X, Shen D: Hierarchical lung field segmentation with joint shape and appearance sparse learning. IEEE Transactions on Medical Imaging 33:1761–1780, 2014
20. Soliman A, Khalifa F, Elnakib A, El-Ghar MA, Dunlap N, Wang B, Gimel’farb G, Keynton R, El-Baz A: Accurate lungs segmentation on CT chest images by adaptive appearance-guided shape modeling. IEEE Transactions on Medical Imaging 36:263–276, 2017
21. Dhara AK, Mukhopadhyay S, Dutta A, Garg M, Khandelwal N: A combination of shape and texture features for classification of pulmonary nodules in lung CT images. J Digit Imaging 29:466–475, 2016
22. Nguyen HV, Porikli F: Support vector shape: A classifier-based shape representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 35:970–982, 2013
23. Deng C, Lin H: Progressive and iterative approximation for least squares B-spline curve and surface fitting. Computer-Aided Design 47:32–44, 2014
24. Xiao C, Staring M, Shamonin D, Reiber JHC, Stolk J, Stoel BC: A strain energy filter for 3D vessel enhancement with application to pulmonary CT images. Medical Image Analysis 15:112–124, 2011
25. Shepherd T, Prince SJD, Alexander DC: Interactive lesion segmentation with shape priors from offline and online learning. IEEE Transactions on Medical Imaging 31:1698–1712, 2012
26. Song Q, Bai J, Garvin MK, Sonka M, Buatti JM, Wu X: Optimal multiple surface segmentation with shape and context priors. IEEE Transactions on Medical Imaging 32:376–386, 2013
27. Zhang S, Zhan Y, Metaxas DN: Deformable segmentation via sparse representation and dictionary learning. Medical Image Analysis 16:1385–1396, 2012
28. Heibel H, Glocker B, Groher M, Pfister M, Navab N: Interventional tool tracking using discrete optimization. IEEE Transactions on Medical Imaging 32:544–555, 2013
29. Aquino A, Gegúndez-Arias ME, Marín D: Detecting the optic disc boundary in digital fundus images using morphological, edge detection, and feature extraction techniques. IEEE Transactions on Medical Imaging 29:1860–1869, 2010
30. Li Z, Zhang Y, Liu G, Shao H, Li W, Tang X: A robust coronary artery identification and centerline extraction method in angiographies. Biomedical Signal Processing and Control 16:1–8, 2015
31. Hastie T, Stuetzle W: Principal curves. Journal of the American Statistical Association 84:502–516, 1989
32. Bradley RS, Withers PJ: Post-processing techniques for making reliable measurements from curve-skeletons. Computers in Biology and Medicine 72:120–131, 2016
33. Kégl B, Krzyzak A: Piecewise linear skeletonization using principal curves. IEEE Transactions on Pattern Analysis and Machine Intelligence 24:59–74, 2002
34. Yu Y, Wang J: Enclosure Transform for Interest Point Detection From Speckle Imagery. IEEE Transactions on Medical Imaging 36:769–780, 2017
35. Khedher L, Ramírez J, Górriz JM, Brahim A, Segovia F: Early diagnosis of Alzheimer’s disease based on partial least squares, principal component analysis and support vector machine using segmented MRI images. Neurocomputing 151:139–150, 2015
36. Zhu X, Ge Y, Li T, Thongphiew D, Yin FF, Wu QJ: A planning quality evaluation tool for prostate adaptive IMRT based on machine learning. Medical Physics 38:719–726, 2011
37. Lavdas I, Glocker B, Kamnitsas K, Rueckert D, Mair H, Sandhu A, Taylor SA, Aboagye EO, Rockall AG: Fully automatic, multiorgan segmentation in normal whole body magnetic resonance imaging (MRI), using classification forests (CFs), convolutional neural networks (CNNs), and a multi-atlas (MA) approach. Medical Physics 44:5210–5220, 2017
38. Tseng HH, Luo Y, Cui S, Chien JT, Ten Haken RK, Naqa IE: Deep reinforcement learning for automated radiation adaptation in lung cancer. Medical Physics 44:6690–6705, 2017
39. Ma J, Wu F, Jiang TA, Zhu J, Kong D: Cascade convolutional neural networks for automatic detection of thyroid nodules in ultrasound images. Medical Physics 44:1678–1691, 2017
40. Shaukat F, Raja G, Gooya A, Frangi AF: Fully automatic detection of lung nodules in CT images using a hybrid feature set. Medical Physics 44:3615–3629, 2017
41. Taigman Y, Yang M, Ranzato MA, Wolf L: DeepFace: Closing the gap to human-level performance in face verification. Proceedings of the 27th IEEE Conference on Computer Vision and Pattern Recognition, Columbus, USA, 2014, pp 1701–1708
42. Najafabadi MM, Villanustre F, Khoshgoftaar TM, Seliya N, Wald R, Muharemagic E: Deep learning applications and challenges in big data analytics. Journal of Big Data 2:1, 2015
43. Kégl B, Krzyzak A, Linder T, Zeger K: Learning and design of principal curves. IEEE Transactions on Pattern Analysis and Machine Intelligence 22:281–297, 2000
44. Armato SG, McLennan G, Bidaut L et al.: The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans. Medical Physics 38:915–931, 2011
45. Srivastava N, Hinton GE, Krizhevsky A, Sutskever I, Salakhutdinov R: Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15:1929–1958, 2014
46. LeCun Y, Bengio Y, Hinton GE: Deep learning. Nature 521:436–444, 2015
47. Wang S, Zhou M, Gevaert O, Tang Z, Dong D, Liu Z, Tian J: A multi-view deep convolutional neural networks for lung nodule segmentation. Proceedings of the 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju Island, South Korea, 2017, pp 1752–1755
48. Wang S, Zhou M, Liu Z, Liu Z, Gu D, Zang Y, Dong D, Gevaert O, Tian J: Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation. Medical Image Analysis 40:172–183, 2017
49. Song J, Yang C, Fan L, Wang K, Yang F, Liu S, Tian J: Lung lesion extraction using a toboggan based growing automatic segmentation approach. IEEE Transactions on Medical Imaging 35:337–353, 2016
50. Kubota T, Jerebko AK, Dewan M, Salganicoff M, Krishnan A: Segmentation of pulmonary nodules of various densities with morphological approaches and convexity models. Medical Image Analysis 15:133–154, 2011