Deep Learning in Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs on CXR and CT Images

Abstract

Purpose

In this paper, transfer learning is applied to chest X-ray (CXR) and computed tomography (CT) images of various lung diseases, including coronavirus disease 2019 (COVID-19). Identifying COVID-19 is a difficult task that requires careful analysis of a patient's clinical images, since COVID-19 closely resembles viral pneumonia. A transfer learning model is proposed to accelerate the prediction process and to assist medical professionals. The main purpose is to accurately classify COVID-19, pneumonia, and healthy lungs on CXR and CT images.

Methods

Transfer learning makes it possible to learn about the new illness COVID-19 using the knowledge already acquired about viral pneumonia: the knowledge gained by an architecture trained to detect viral pneumonia is transferred to COVID-19 detection. Transfer learning yields markedly different results compared with traditional classification. It is not necessary to build a separate model for COVID-19 classification, which simplifies the problem by adapting an existing model to COVID-19 determination. Automated diagnosis of COVID-19 using Haralick texture features focuses on segmented lung images and abnormal lung patches. Lung patches are also used to augment the COVID-19 image data.

Results

The obtained results are reliable across all processing steps, as the proposed architecture can distinguish between healthy lungs, pneumonia, and COVID-19.

Conclusions

The results suggest that the implemented model improves on other existing models, as the obtained classification accuracy exceeds recently reported results. The architecture implemented in this study is believed to be a small step toward building a refined COVID-19 diagnosis system based on CXR and CT images.

Introduction

Coronavirus

Coronavirus disease 2019 (COVID-19) is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Because the disease is highly infectious and lacks an established treatment, fast detection of COVID-19 is essential to curb its spread and to allocate limited clinical resources appropriately.

Antigen tests are fast but have low sensitivity. At present, reverse transcription-polymerase chain reaction (RT-PCR), which detects the viral nucleic acid, is the reference standard for detecting COVID-19. RT-PCR samples are collected with nasopharyngeal and throat swabs, which are prone to sampling errors and low viral load [1].

Consequently, deep learning (DL) strategies for COVID-19 classification on chest CXR and CT have been thoroughly explored. In [2], an open-source convolutional neural network platform named COVID-Net is proposed and adapted for recognizing COVID-19 cases in CXR and CT images. COVID-Net achieves a sensitivity of 80% for COVID-19 cases [3].

Neural Network Design

In this paper, deep convolutional neural networks (DCNN) are evaluated for diagnosing COVID-19. However, it is challenging to gather a large, clean dataset for constructing deep neural networks. For this reason, the primary objective of this paper is to implement a deep neural network (DNN) architecture suitable for learning from a small dataset while still producing radiologically interpretable results [3].

A systematic investigation requires several imaging features for CXR and CT examination, for example contrast, homogeneity, entropy, and correlation. The training dataset can be extended by extracting several patches from each selected image, so the DNN can be trained successfully without overfitting even on a small dataset. By adding new preprocessing steps that normalize the heterogeneity and data bias, the proposed DNN architecture can be shown, on the same dataset, to offer better sensitivity and interpretability than the existing COVID-Net [3]. There are appreciable differences among the resulting patches: their intensity distribution correlates well with the radiological intensity variations of COVID-19 in CXR and CT. This motivates a new patch-based DNN model with fuzzy area cutting, in which the final classification is obtained by aggregating the results found at several patch locations.

In the neural network design, a fully convolutional DenseNet (FC-DenseNet103) is used for segmentation and a residual network (ResNet-101) for classification. FC-DenseNet103 was implemented with the PyTorch library and ResNet-101 with MATLAB 2021a. The major advantage of Densely Connected Convolutional Networks (DenseNet) is that they have fewer parameters than standard convolutional networks while allowing the depth of deep neural networks to grow.

Figure 1 shows a DenseNet structure with dense blocks; the path from the input layer to the output is quite long. The layers between two adjacent blocks are referred to as transition layers and change the feature-map sizes using convolution and pooling. Within a dense block, every layer is directly connected to every other layer, as shown in Fig. 2: each layer takes all preceding feature maps as input. This makes the network easier to train and mitigates the vanishing-gradient problem as depth increases.
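To make this connectivity pattern concrete, the following is a minimal PyTorch sketch of a dense block; the number of layers and the growth rate are illustrative assumptions, not the exact FC-DenseNet103 configuration used in this work.

```python
import torch
import torch.nn as nn

class SimpleDenseBlock(nn.Module):
    """Toy dense block: every layer receives the concatenation of all
    preceding feature maps, as in DenseNet [6]."""
    def __init__(self, in_channels: int, growth_rate: int = 12, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            channels += growth_rate  # the concatenated input grows with every layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # all preceding features as input
            features.append(out)
        return torch.cat(features, dim=1)

block = SimpleDenseBlock(in_channels=1)
y = block(torch.randn(1, 1, 224, 224))
print(y.shape)  # torch.Size([1, 49, 224, 224]); 1 + 4 * 12 channels
```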

Fig. 1 DenseNet architecture with three blocks [6]

Fig. 2 DenseNet classic architecture [6]

The ResNet-101 architecture is presented in Fig. 3. It is a convolutional neural network (CNN) with 101 layers. A pretrained version of the network can be further trained on more than 12,520 biomedical images of viral pneumonia taken from the Kaggle database [4]. As a result, the network has learned rich feature representations that characterize a wide range of images.

Fig. 3 A regular block and a residual block (top); ResNet-101 architecture (bottom) [9]

In addition, Haralick texture features based on lung morphology, such as contrast, correlation, and entropy, are computed; these highlight the kind of abnormality found inside the lungs.

Proposed Method—Network Architecture Using Transfer Learning Model

Transfer Learning Architecture

Pneumonia is in many ways similar to COVID-19 (Fig. 4) [4]. The suggested transfer learning architecture of the neural network model is presented in Fig. 5. In transfer learning, the network is first trained on a large dataset to perform classification. The CXR and CT images (12,520 images) of a wide variety of lung diseases, including COVID-19 (220 images), are given to the proposed architecture. The first step is preprocessing, to obtain more accurate images: CXR and CT image quality is improved using histogram equalization, and Wiener filters are applied to enhance contrast and reject perturbations. These two enhancement techniques increase the quality of the images. The knowledge obtained by a previously implemented model, for instance the viral pneumonia recognition architecture trained on a specified dataset, is transferred to estimate a different problem with a similar objective (the COVID-19 recognition architecture); this constitutes the transfer learning method [5].
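A minimal preprocessing sketch is given below, assuming the scikit-image and SciPy libraries (the exact preprocessing implementation is not specified in the text); the file name is a hypothetical placeholder.

```python
import numpy as np
from scipy.signal import wiener
from skimage import exposure, img_as_float, io

def preprocess(path: str) -> np.ndarray:
    """Histogram equalization followed by Wiener filtering on a grayscale CXR/CT image."""
    img = img_as_float(io.imread(path, as_gray=True))  # load as grayscale in [0, 1]
    img = exposure.equalize_hist(img)                   # histogram equalization (contrast)
    img = wiener(img, mysize=5)                         # Wiener filter (noise suppression)
    return np.clip(img, 0.0, 1.0)

# enhanced = preprocess("covid_example.jpg")            # hypothetical file name
```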

Fig. 4 a Normal, b lung opacity, c viral pneumonia, d COVID-19

Fig. 5 Proposed architecture for COVID-19 classification

In the following, each network is described separately. The networks were trained sequentially: first the segmentation network, then the classification network. The mathematical background for Haralick texture feature extraction is also presented.

Segmentation Network

The main task of the segmentation network is to extract the lung contour from the CXR and CT images. A fully convolutional FC-DenseNet103 is implemented to perform semantic segmentation [4, 6]. The training target is:

$$\mathop {argmin}\limits_{{\Theta }} { \mathcal{L}}\left( {\Theta } \right)$$
(1)

where \(\mathcal{L}\left(\Theta \right)\) is the cross-entropy loss of multi-categorical semantic segmentation and \(\Theta\) represents the network parameter set, which is composed of filter kernel weights and biases.

\(\mathcal{L}\left(\Theta \right)\) is represented as follows:

$${\mathcal{L}}\left( \Theta \right) = - \mathop \sum \limits_{s} \mathop \sum \limits_{j} \lambda_{s} {\mathbb{1}}\left( {y_{j} = s} \right)\log \left( {p_{{\Theta }} \left( {{\varvec{x}}_{j} } \right)} \right)$$
(2)

where \(\mathbb{1}(\cdot)\) is the indicator function, \(p_{{\Theta }} \left( {{\varvec{x}}_{j} } \right)\) is the softmax probability of the j-th pixel in a CXR or CT image x, and yj represents the corresponding ground-truth label. s represents the class category, i.e., s\(\epsilon\) \(\left\{\mathrm{background},\mathrm{ right lung},\mathrm{ left lung}\right\}\), and λs represents the weight given to each class category.

FC-DenseNet103 was trained on the preprocessed data. The network parameters were randomly initialized, and the SGDM optimizer with an initial learning rate of 0.00001 was used. The segmentation network achieved 88.99% accuracy, 83.42% precision, 85.97% recall, and 84.47% F1-score. The PyTorch library was used for the network implementation [4].
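As a self-contained sketch of this training setup (the weighted cross-entropy of Eq. (2) with SGDM at a learning rate of 0.00001), the loop below uses a one-layer stand-in for FC-DenseNet103 and dummy data so that it runs as written; the class weights and epoch count are illustrative assumptions.

```python
import torch
from torch import nn, optim

# One-layer stand-in for FC-DenseNet103, predicting {background, right lung, left lung}.
model = nn.Conv2d(1, 3, kernel_size=3, padding=1)
class_weights = torch.tensor([0.5, 1.0, 1.0])                     # illustrative lambda_s
criterion = nn.CrossEntropyLoss(weight=class_weights)             # Eq. (2)
optimizer = optim.SGD(model.parameters(), lr=1e-5, momentum=0.9)  # SGDM, lr = 0.00001

# Dummy batch standing in for preprocessed CXR/CT images and their lung masks.
images = torch.randn(4, 1, 256, 256)
masks = torch.randint(0, 3, (4, 256, 256))

for epoch in range(3):                                            # illustrative epoch count
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss = {loss.item():.4f}")
```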

Classification Network

To properly use the classification network, the first step was to apply the lung masks to the preprocessed images.

The second step was to calculate the Haralick texture features of the segmented images, in order to narrow down the candidate class. This aspect is explained in more detail in the next section.

The third step was to randomly extract local patches of 224 × 224 pixels from the segmented lung images, which matches the input size of ResNet-101. The resulting patches were used as network input. Because the patches already have the size expected by ResNet-101, only a color processing step was needed to convert the grayscale Kaggle images to the RGB format the classification network expects. The patch centers were randomly selected within the lung areas to avoid cropping patches from the empty regions of the masked images.
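A small sketch of this patch-extraction step is shown below; it is an assumed implementation, not the exact code used in this work, and the image and mask loading is left as a placeholder.

```python
import numpy as np

def random_lung_patch(image: np.ndarray, lung_mask: np.ndarray,
                      size: int = 224, rng=None) -> np.ndarray:
    """Crop a size x size patch whose center lies inside the lung mask,
    then replicate the gray channel to the RGB format ResNet-101 expects."""
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(lung_mask)                  # candidate centers inside the lungs
    idx = rng.integers(len(ys))
    half = size // 2
    cy = int(np.clip(ys[idx], half, image.shape[0] - half))
    cx = int(np.clip(xs[idx], half, image.shape[1] - half))
    patch = image[cy - half:cy + half, cx - half:cx + half]
    return np.stack([patch] * 3, axis=-1)           # grayscale -> 3-channel RGB

# img, mask = load a 1024 x 1024 grayscale image and its binary lung mask (placeholder)
# patch = random_lung_patch(img, mask)              # shape (224, 224, 3)
```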

The prepared patches now serve as input for the classification network. Because the pretrained classification network outputs 1000 classes, the fully connected layer and the classification layer were replaced with layers for only four classes.
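This step was performed in MATLAB; as a hedged PyTorch equivalent, replacing the 1000-class head of an ImageNet-pretrained ResNet-101 with a four-class head would look roughly as follows.

```python
import torch.nn as nn
from torchvision import models

# Load ResNet-101 pretrained on ImageNet and replace its 1000-class head with a
# 4-class head (normal, lung opacity, viral pneumonia, COVID-19).
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)
```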

To obtain the pretrained parameters of the classification network, ImageNet weights were used for initialization; the network was then trained on CXR and CT images of viral pneumonia. The last step was training the network for COVID-19 classification.

The training options were the stochastic gradient descent with momentum (SGDM) optimizer with a learning rate of 0.01, ten epochs, and a batch size of 10. The classification network was implemented in MATLAB 2021a using the Neural Network Design application.
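The classification training itself was carried out in MATLAB; the sketch below is a rough PyTorch equivalent of these options (SGDM, learning rate 0.01, ten epochs, batch size 10), reusing the four-class ResNet-101 (`model`) from the previous sketch and dummy patch data.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Dummy tensors standing in for the 224 x 224 RGB lung patches and their 4-class labels.
patches = torch.randn(100, 3, 224, 224)
labels = torch.randint(0, 4, (100,))
loader = DataLoader(TensorDataset(patches, labels), batch_size=10, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # SGDM, lr = 0.01
# `model` is the four-class ResNet-101 defined in the previous sketch.

for epoch in range(10):                                           # ten epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```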

Haralick Texture Features Extraction

The gray-level co-occurrence matrix (GLCM) and the associated texture features were introduced in 1973 by Haralick et al. [7]. This technique is often used in biomedical image processing.

The implemented feature-extraction method consists of two steps: first the GLCM is computed, and then the texture features are calculated from it.

The GLCM describes how often a pixel with one gray level occurs at a fixed geometric offset from a pixel with another gray level [7].
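As an illustrative sketch (the feature analysis in this work was done in MATLAB; scikit-image is assumed here), the normalized GLCM can be computed as follows; the features of Eqs. (3) to (12) are then derived from it.

```python
import numpy as np
from skimage.feature import graycomatrix

# Synthetic 8-level image standing in for a segmented lung patch.
img = np.random.randint(0, 8, size=(64, 64), dtype=np.uint8)

# Symmetric, normalized GLCM for a 1-pixel horizontal offset (distance 1, angle 0).
glcm = graycomatrix(img, distances=[1], angles=[0], levels=8,
                    symmetric=True, normed=True)
p_d = glcm[:, :, 0, 0]           # the normalized Ng x Ng matrix p_d used in Eqs. (3)-(7)
print(p_d.shape, p_d.sum())      # (8, 8), sums to 1.0
```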

Contrast (C) measures the gray-level variation between the reference pixel and its neighbor:

$${\varvec{C}} = \mathop \sum \limits_{i} \mathop \sum \limits_{j} \left( {i - j} \right)^{2} p_{d} \left( {i,j} \right)$$
(3)

where \({p}_{d}\) is the normalized symmetric GLCM having a dimension Ng x Ng.

Here Ng represents the number of gray levels and \({p}_{d}\left(i,j\right)\) is the \(\left(i,j\right)\) th element of the normalized GLCM [7].

Local homogeneity (LC) calculates how close the distribution of elements in GLCM is to the diagonal of GLCM. When the contrast decreases, the homogeneity increases:

$$LC = \mathop \sum \limits_{i} \mathop \sum \limits_{j} \frac{1}{{1 + \left( {i - j} \right)^{2} }}p_{d} \left( {i,j} \right)$$
(4)

Homogeneity (H) of an image represents the similarity between pixels:

$$H = \mathop \sum \limits_{i = 0}^{{N_{g} }} \mathop \sum \limits_{j = 0}^{{N_{g} }} \frac{{p\left( {i,j} \right)}}{{1 + \left( {i - j} \right)^{2} }}$$
(5)

Entropy (E) represents the degree of disorder in the image.

Entropy is maximal when all elements of the co-occurrence matrix are equal; lower values are obtained when the elements are unequal.

$$E = - \mathop \sum \limits_{i} \mathop \sum \limits_{j} p_{d} \left( {i,j} \right)\ln p_{d} \left( {i,j} \right)$$
(6)

Correlation (Corr) represents the linear dependency of gray level values in the cooccurrence matrix:

$$Corr = \mathop \sum \limits_{i} \mathop \sum \limits_{j} p_{d} \left( {i,j} \right)\frac{{\left( {i - \mu_{x} } \right)\left( {j - \mu_{y} } \right)}}{{\sigma_{x} \sigma_{y} }}$$
(7)

where \({\mu }_{x}\), \({\mu }_{y}\) are the mean values and \({\sigma }_{x}\), \({\sigma }_{y}\) the standard deviations, given by:

$$\mu_{x} = \mathop \sum \limits_{i} \mathop \sum \limits_{j} ip\left( {i,j} \right)$$
(8)
$$\mu_{y} = \mathop \sum \limits_{i} \mathop \sum \limits_{j} jp\left( {i,j} \right)$$
$$\sigma_{x} = \sqrt {\mathop \sum \limits_{i} \mathop \sum \limits_{j} \left( {i - \mu_{x} } \right)^{2} p_{d} \left( {i,j} \right)}$$
$$\sigma_{y} = \sqrt {\mathop \sum \limits_{i} \mathop \sum \limits_{j} \left( {i - \mu_{y} } \right)^{2} p_{d} \left( {i,j} \right)}$$
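A short NumPy sketch of Eqs. (3) to (8), computed directly from the normalized GLCM p_d of the earlier sketch (an assumed implementation, not the MATLAB code used in this work):

```python
import numpy as np

def haralick_basic(p_d: np.ndarray) -> dict:
    """Contrast, local homogeneity, entropy and correlation from a normalized GLCM."""
    ng = p_d.shape[0]
    i, j = np.meshgrid(np.arange(ng), np.arange(ng), indexing="ij")

    contrast = np.sum((i - j) ** 2 * p_d)                     # Eq. (3)
    local_homogeneity = np.sum(p_d / (1 + (i - j) ** 2))      # Eqs. (4)/(5)
    entropy = -np.sum(p_d * np.log(p_d + 1e-12))              # Eq. (6)

    mu_x, mu_y = np.sum(i * p_d), np.sum(j * p_d)             # Eq. (8)
    sigma_x = np.sqrt(np.sum((i - mu_x) ** 2 * p_d))
    sigma_y = np.sqrt(np.sum((j - mu_y) ** 2 * p_d))
    correlation = np.sum(p_d * (i - mu_x) * (j - mu_y)) / (sigma_x * sigma_y)  # Eq. (7)

    return {"C": contrast, "LC": local_homogeneity, "E": entropy, "Corr": correlation}

# features = haralick_basic(p_d)    # p_d from the GLCM sketch above
```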

The most important basic difference- and sum-statistic texture descriptors are the mean (M), sum entropy (SE), variance (V), and sum variance (SV).

$$M = \mathop \sum \limits_{k} kP_{x - y} \left( k \right)$$
(9)
$$SE = - \mathop \sum \limits_{k} P_{x - y} \left( k \right)\log P_{x - y} \left( k \right)$$
(10)
$$V = \mathop \sum \limits_{i = 1}^{Ng} \mathop \sum \limits_{j = 1}^{Ng} \left( {i - \mu } \right)^{2} p_{d} \left( {i,j} \right)$$
(11)
$$SV = \mathop \sum \limits_{i = 2}^{{2N_{g} }} \left( {i - SE} \right)^{2} P_{x + y} \left( i \right)$$
(12)

where \(\mu\) is the mean value.
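The difference and sum statistics of Eqs. (9) to (12) can be sketched in the same way; the difference and sum histograms are built from p_d, and the zero-based indexing is an implementation choice rather than part of the original definitions.

```python
import numpy as np

def difference_sum_statistics(p_d: np.ndarray) -> dict:
    """Mean, sum entropy, variance and sum variance (Eqs. 9-12) from a normalized GLCM."""
    ng = p_d.shape[0]
    i, j = np.meshgrid(np.arange(ng), np.arange(ng), indexing="ij")

    # Difference histogram P_{x-y}(k), k = 0 .. Ng-1, and sum histogram P_{x+y}(k).
    p_diff = np.array([p_d[np.abs(i - j) == k].sum() for k in range(ng)])
    p_sum = np.array([p_d[(i + j) == k].sum() for k in range(2 * ng - 1)])
    k_diff, k_sum = np.arange(ng), np.arange(2 * ng - 1)

    m = np.sum(k_diff * p_diff)                               # Eq. (9)
    se = -np.sum(p_diff * np.log(p_diff + 1e-12))             # Eq. (10)
    mu = np.sum(i * p_d)                                      # overall GLCM mean
    v = np.sum((i - mu) ** 2 * p_d)                           # Eq. (11)
    sv = np.sum((k_sum - se) ** 2 * p_sum)                    # Eq. (12)
    return {"M": m, "SE": se, "V": v, "SV": sv}

# stats = difference_sum_statistics(p_d)    # p_d from the GLCM sketch above
```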

The Haralick features of 100 images each of normal, viral pneumonia, and COVID-19 cases were calculated and analyzed using MATLAB 2021a and the Statistics and Machine Learning Toolbox. From this analysis (Table 1), it is concluded that the features V, M, and SV should lie within certain ranges, while the other features vary less. The range for an image to be classified as normal is given in Eqs. (13), (14), and (15).

$$22.97 \le V \le 33.81$$
(13)
$$8.17 \le M \le 11.87$$
(14)
$$99.98 \le SV \le 132.45$$
(15)
Table 1 Statistical analysis of Haralick features for normal, COVID-19 and viral pneumonia for segmented lung images (100 images)

For lung images affected by viral pneumonia, the feature values must lie within the ranges shown in Eqs. (16), (17), and (18).

$$36.13 \le V \le 48.19$$
(16)
$$10.77 \le M \le 15.79$$
(17)
$$147.79 \le SV \le 206.52$$
(18)

COVID-19 images have feature values in the ranges shown in Eqs. (19), (20), and (21).

$$33.78 \le V \le 57.99$$
(19)
$$10.44 \le M \le 18.97$$
(20)
$$132.78 \le SV \le 211.54$$
(21)

The values calculated in Table 1 are similar to the feature values obtained in [5].
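To show how these ranges can narrow down the candidate class before the network classification, a small helper is sketched below; the thresholds are those of Eqs. (13) to (21), while the decision logic itself is an assumption, not the exact procedure used in this work.

```python
# Candidate-class check from the V, M and SV ranges of Eqs. (13)-(21).
RANGES = {
    "normal":          {"V": (22.97, 33.81), "M": (8.17, 11.87),  "SV": (99.98, 132.45)},
    "viral pneumonia": {"V": (36.13, 48.19), "M": (10.77, 15.79), "SV": (147.79, 206.52)},
    "COVID-19":        {"V": (33.78, 57.99), "M": (10.44, 18.97), "SV": (132.78, 211.54)},
}

def candidate_classes(v: float, m: float, sv: float) -> list:
    """Return every class whose V, M and SV ranges all contain the measured values."""
    values = {"V": v, "M": m, "SV": sv}
    return [name for name, bounds in RANGES.items()
            if all(lo <= values[f] <= hi for f, (lo, hi) in bounds.items())]

# Overlapping ranges can return more than one candidate, which the network then resolves.
print(candidate_classes(v=40.0, m=12.0, sv=160.0))   # ['viral pneumonia', 'COVID-19']
```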

Sorting System for COVID-19

Usually, the most common cause of pneumonia is a viral infection. In an extensive epidemic caused by a contagious illness, the allocation of clinical resources is of supreme importance.

COVID-19 is spreading fast and exceeding the capacity of the clinical infrastructure in many countries. It is essential to allocate the limited resources according to the sorting system in Fig. 6, which determines the needs and urgency of each patient [4].

Fig. 6 Sorting system for COVID-19 patients [4]

Results

Datasets

The Kaggle dataset was divided into four classes, normal, lung opacity, viral pneumonia, and COVID-19, as specified in Table 2. The images are resized to 224 × 224 pixels for the pretrained CNN. Datastores were assembled for the four classes from image data folders and subfolders, and the images were properly labeled, resized, color processed, and augmented. Augmentation was performed by random patching. This dataset is manageable for this study, considering that the computer used is a Dell G7 7790 gaming laptop (Intel Core i7 @ 4.1 GHz, 17.3″ Full HD, 16 GB RAM, 1 TB HDD + 256 GB SSD, NVIDIA GeForce RTX 2020 6 GB, Windows 10 Home). The training time is acceptable and depends on how the random patching for data augmentation is done; it varies from approximately 55 to 111 min of training on the GPU. If training is performed on a single CPU instead of the GPU, the time is about ten times longer.
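The datastores were built in MATLAB; as an illustrative Python equivalent (the folder layout below is an assumption), the four-class dataset could be assembled with torchvision as follows.

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed folder layout: dataset/{COVID,Lung_Opacity,Normal,Viral_Pneumonia}/*.jpg
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # gray -> 3 channels for ResNet-101
    # Plain random crops as augmentation; the method above additionally restricts
    # patch centers to the lung mask (see the patch-extraction sketch).
    transforms.RandomCrop(224),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("dataset", transform=transform)
loader = DataLoader(dataset, batch_size=10, shuffle=True)
print(dataset.classes)   # ['COVID', 'Lung_Opacity', 'Normal', 'Viral_Pneumonia']
```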

Table 2 Dataset images (1024 × 1024 pixels jpg format) from database Kaggle for CXR (80%) and CT (20%)

Another possibility is training the deep neural networks in the Azure cloud. In this case the processing cost is high, but the time required is very low.

Performance Parameters

After the model is obtained, the final classification is evaluated by computing the confusion matrix. The parameters that measure the classification performance of the ResNet-101 network, accuracy, precision, recall, and F1-score, are described in this section [4].

Table 3 presents the confusion matrices of the traditional classification and of the transfer learning model for the COVID-19 data when tested against viral pneumonia. Table 4 shows the performance parameters precision, recall, and F1-score for the normal (Class 1), lung opacity (Class 2), viral pneumonia (Class 3), and COVID-19 (Class 4) classes. The precision and recall values are encouraging. The F1-score is calculated from precision and recall as in Eq. (25). The trained model classifies the images correctly, and in general transfer learning provides better results than classical classification.

Table 3 Confusion matrix for Resnet-101 network for classification and transfer learning model
Table 4 Performance parameters for Resnet-101 network for classification and transfer learning model

Accuracy is the most intuitive performance measure; it is the fraction of predictions the model gets right [4], i.e., the percentage of predictions that were correct.

$$Accuracy = \;\left( {TP\; + \;TN} \right)/(TP\; + \;TN\; + \;FP\; + \;FN)$$
(22)

Precision evaluates how rigorous a model is in predicting positive labels [4].

$$Precision = TP\;/\;(TP\; + \;FP)$$
(23)

Recall calculates the percentage of actual positives a model correctly identified (True Positive) [4].

$$Recall = TP\;/\left( {TP\; + \;FN} \right)$$
(24)

The F1-score (F-measure) is the harmonic mean of precision and recall; therefore, this score takes both false positives and false negatives into account.

$$F1 = 2\; \times \left( {Precision\; \times \;Recall} \right)/\left( {Precision + Recall} \right)$$
(25)

True positives (TP) are the predictions where the classifier correctly predicts the positive class as positive; for example, TP is the number of COVID-19 images correctly classified as COVID-19.

True negatives (TN) are the predictions where the classifier correctly predicts the negative class as negative.

False positives (FP) are the predictions where the classifier incorrectly predicts the negative class as positive.

False negatives (FN) are the predictions where the classifier incorrectly predicts the positive class as negative.
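To make Eqs. (22) to (25) concrete, the sketch below derives per-class TP, TN, FP, and FN from a 4 × 4 confusion matrix and computes the four metrics; the matrix values are illustrative placeholders, not the values in Table 3.

```python
import numpy as np

# Illustrative 4 x 4 confusion matrix (rows = true class, columns = predicted class).
cm = np.array([[95, 3, 1, 1],
               [4, 90, 4, 2],
               [2, 5, 91, 2],
               [1, 1, 2, 96]])

for c in range(cm.shape[0]):
    tp = cm[c, c]
    fn = cm[c, :].sum() - tp
    fp = cm[:, c].sum() - tp
    tn = cm.sum() - tp - fn - fp
    accuracy = (tp + tn) / (tp + tn + fp + fn)            # Eq. (22)
    precision = tp / (tp + fp)                            # Eq. (23)
    recall = tp / (tp + fn)                               # Eq. (24)
    f1 = 2 * precision * recall / (precision + recall)    # Eq. (25)
    print(f"Class {c + 1}: acc={accuracy:.3f} prec={precision:.3f} "
          f"rec={recall:.3f} F1={f1:.3f}")
```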

Comparison for CORONAVIRUS 2019 Classification Accuracy

Figure 7 shows the loss and accuracy curves. The accuracy increases and the loss decreases during training and testing.

Fig. 7 Accuracy and loss graphs for COVID-19 classification using the ResNet-101 classification network

Table 5 compares the testing accuracy of different COVID-19 studies. According to this performance analysis, the proposed transfer learning model is slightly better than other existing models. Proper labeling, mask segmentation, Haralick feature calculation, and choosing the right patches were extremely important for obtaining higher network performance.

Table 5 Comparison of CORONAVIRUS 2019 classification accuracy

Using adequate training, validation, and test sets is very important for obtaining better network parameters and, ultimately, better COVID-19 classification. The Haralick features made it easier to distinguish between normal lungs, viral pneumonia, and COVID-19. The proper patch size of 224 × 224 [4] was also very important for improving the performance of the classification algorithm. The proposed model achieves 93% precision, 93% recall, and 94.9% accuracy using ResNet-101 with four classification classes.

Conclusions

In this paper, the obtained results have been presented and visualized comprehensively, taking into account the demanding problems of this domain. The COVID-19 recognition architecture benefits from data incorporated from multiple sources [8]. The viral infection is detected by examining characteristic features found in the studied images, and earlier detection of the infection can save lives. The obtained results are reliable across all processing steps, as the proposed architecture can detect healthy lungs, viral pneumonia, and COVID-19, as represented in Fig. 4. The architecture implemented in this study is believed to be a small step toward building a refined COVID-19 diagnosis system using CXR and CT images. In future research, supplemental details and information will be incorporated to further strengthen the suggested transfer learning architecture.

References

1. Varela-Santos, S., & Melin, P. (2020). A new approach for classifying coronavirus COVID-19 based on its manifestation on chest X-rays using texture features and neural networks. Information Sciences. https://doi.org/10.1016/j.ins.2020.09.041

2. Hemdan, E., Shouman, M., & Karar, M. (2020). COVIDX-Net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv preprint.

3. Wang, S., Zha, Y., Li, W., Wu, Q., Li, X., Niu, M., Wang, M., Qiu, X., Li, H., Yu, H., Gong, W., Li, L., Yongbei, Z., Wang, L., & Tian, J. (2020). A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis. European Respiratory Journal. https://doi.org/10.1183/13993003.00775-2020

4. Oh, Y., Park, S., & Ye, J. C. (2020). Deep learning COVID-19 features on CXR using limited training data sets. IEEE Transactions on Medical Imaging. https://doi.org/10.1109/TMI.2020.2993291

5. Perumal, V., Narayanan, V., & Rajasekar, S. J. S. (2020). Detection of COVID-19 using CXR and CT images using transfer learning and Haralick features. Applied Intelligence. https://doi.org/10.1007/s10489-020-01831-z

6. Huang, G., Liu, Z., Van der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. https://doi.org/10.1109/CVPR.2017.243

7. Haralick, R. M., Shanmugam, K., & Dinstein, I. (1973). Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, 3(6), 610–621.

8. Wang, S., Kang, B., Ma, J., Zeng, X., Xiao, M., Guo, J., Cai, M., Yang, J., Li, Y., Meng, X., & Xu, B. (2020). A deep learning algorithm using CT images to screen for coronavirus disease (COVID-19). medRxiv. https://doi.org/10.1101/2020.02.14.20023028

9. Zhang, A., Lipton, Z. C., Li, M., & Smola, A. J. Dive into deep learning. STAT 157, UC Berkeley. http://courses.d2l.ai/berkeley-stat-157/index.html

10. Zhao, J., Zhang, Y., He, X., & Xie, P. (2020). COVID-CT-dataset: A CT scan dataset about COVID-19. arXiv:2003.13865 [cs.LG]. https://github.com/UCSD-AI4H/COVID-CT

11. El Asnaoui, K., & Chawki, Y. (2020). Using X-ray images and deep learning for automated detection of coronavirus disease. Journal of Biomolecular Structure and Dynamics. https://doi.org/10.1080/07391102.2020.1767212

12. Apostolopoulos, I., & Tzani, M. (2020). COVID-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Physical and Engineering Sciences in Medicine, 43, 635–640.

13. Khan, A., Shah, J., & Bhat, M. (2020). CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Computer Methods and Programs in Biomedicine, 196, 105581. https://doi.org/10.1016/j.cmpb.2020.105581

14. Shi, F., Xia, L., Shan, F., Wu, D., Wei, Y., Yuan, H., Jiang, H., Gao, Y., Sui, H., & Shen, D. (2020). Large-scale screening of COVID-19 from community-acquired pneumonia using infection size-aware classification. https://doi.org/10.1088/1361-6560/abe838


Author information

Corresponding author

Correspondence to Mihaela-Ruxandra Lascu.

Ethics declarations

Conflict of interest

The author declares that she has no conflict of interest.

Cite this article

Lascu, MR. Deep Learning in Classification of Covid-19 Coronavirus, Pneumonia and Healthy Lungs on CXR and CT Images. J. Med. Biol. Eng. (2021). https://doi.org/10.1007/s40846-021-00630-2

Keywords

  • COVID-19
  • Pulmonary disease detection
  • X-ray/CT imaging
  • Deep learning
  • Segmentation
  • Classification augmentation
  • Network architecture