1 Introduction

Diagnosis of COVID-19 is typically associated with both the symptoms of pneumonia and chest X-ray tests [25]. Chest X-ray is the first imaging technique to play an important role in the diagnosis of COVID-19 disease. Figure 1 shows a negative example of a normal chest X-ray, a positive case with COVID-19, and a positive case with severe acute respiratory syndrome (SARS).

Fig. 1 Examples of a) normal, b) COVID-19, and c) SARS chest X-ray images

Several classical machine learning approaches have previously been used for the automatic classification of digitised chest images [7, 13]. For example, in [17], three statistical features were calculated from lung texture to discriminate between malignant and benign lung nodules using a Support Vector Machine (SVM) classifier. A grey-level co-occurrence matrix method was used with a backpropagation network [22] to classify images as normal or cancerous. With the availability of enough annotated images, deep learning approaches [1, 3, 30] have demonstrated their superiority over classical machine learning approaches. The convolutional neural network (CNN) is one of the most popular deep learning architectures, with superior achievements in the medical imaging domain [14]. The primary success of CNNs is due to their ability to learn features automatically from domain-specific images, unlike classical machine learning methods. The popular strategy for training a CNN is to transfer learned knowledge from a network pre-trained on one task to a new task [19]. This method is faster and easier to apply than training from scratch and does not require a huge annotated dataset; therefore many researchers tend to apply this strategy, especially in medical imaging. Transfer learning can be accomplished with three major scenarios [16]: a) "shallow tuning", which adapts only the last classification layer to cope with the new task and freezes the parameters of the remaining layers; b) "deep tuning", which retrains all the parameters of the pre-trained network in an end-to-end manner; and c) "fine-tuning", which gradually trains more layers by tuning the learning parameters until a significant performance boost is achieved. Transferring knowledge via the fine-tuning mechanism has shown outstanding performance in chest X-ray image classification [3, 9, 26].
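The three scenarios can be made concrete with a minimal PyTorch sketch (an illustration of ours, not the paper's implementation, which was written in MATLAB; the choice of which layers to unfreeze in the fine-tuning branch is an assumption):

```python
# Sketch of the three transfer-learning scenarios on an ImageNet
# pre-trained model (PyTorch used for illustration only).
import torch.nn as nn
from torchvision import models

def build_model(num_classes: int, mode: str = "shallow") -> nn.Module:
    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    # Replace the final classification layer to match the new task.
    model.classifier[6] = nn.Linear(model.classifier[6].in_features,
                                    num_classes)
    if mode == "shallow":            # train only the new last layer
        for p in model.parameters():
            p.requires_grad = False
        for p in model.classifier[6].parameters():
            p.requires_grad = True
    elif mode == "deep":             # retrain all parameters end-to-end
        for p in model.parameters():
            p.requires_grad = True
    elif mode == "fine":             # unfreeze the classifier block only;
        for p in model.parameters(): # gradually extend this set as needed
            p.requires_grad = False
        for p in model.classifier.parameters():
            p.requires_grad = True
    return model
```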

Class decomposition [33] has been proposed with the aim of enhancing low-variance classifiers by facilitating more flexibility in their decision boundaries. It aims to simplify the local structure of a dataset in a way that copes with any irregularities in the data distribution. Class decomposition has previously been used in various machine learning workflows as a pre-processing step to improve the performance of different classification models. In the medical diagnostic domain, class decomposition has been applied to significantly enhance the classification performance of models such as Random Forests, Naive Bayes, C4.5, and SVM [20, 21, 37].

In this paper, we adapt our previously proposed convolutional neural network architecture based on class decomposition, which we term the Decompose, Transfer, and Compose (DeTraC) model, to improve the performance of pre-trained models on the detection of COVID-19 cases from chest X-ray images. This is achieved by adding a class decomposition layer to the pre-trained models. The class decomposition layer partitions each class within the image dataset into several sub-classes and assigns new labels to the new set, where each subset is treated as an independent class; those subsets are then assembled back to produce the final predictions. For the classification performance evaluation, we used chest X-ray images collected from several hospitals and institutions. The dataset poses challenging computer vision problems due to the intensity inhomogeneity in the images and irregularities in the data distribution.

The paper is organised as follows. In Section 2, we review the state-of-the-art methods for COVID-19 detection. Section 3 discusses the main components of DeTraC and its adaptation to the detection of COVID-19 cases. Section 4 describes our experiments on several chest X-ray images collected from different hospitals. In Section 5, we discuss our findings. Finally, Section 6 concludes the work.

2 Related work

In the last few months, the World Health Organization (WHO) has declared that a new virus, called COVID-19, has spread aggressively in several countries around the world [18]. Diagnosis of COVID-19 is typically associated with the symptoms of pneumonia, which can be revealed by genetic and imaging tests. Fast detection of COVID-19 can contribute to controlling the spread of the disease.

Imaging tests can provide fast detection of COVID-19 and consequently contribute to controlling the spread of the disease. Chest X-ray (CXR) and Computed Tomography (CT) are the imaging techniques that play an important role in the diagnosis of COVID-19 disease. Image diagnostic systems have been comprehensively explored through several approaches, ranging from feature engineering to feature learning.

The convolutional neural network (CNN) is one of the most popular and effective approaches in the diagnosis of COVID-19 from digitised images. Several reviews have been carried out to highlight recent contributions to COVID-19 detection [8, 15, 24]. For example, in [35], a CNN based on the Inception network was applied to detect COVID-19 disease within CT images. In [29], a modified version of the ResNet-50 pre-trained network was used to classify CT images into three classes: healthy, COVID-19, and bacterial pneumonia. CXR images were used in [23] by a CNN constructed from various ImageNet pre-trained models to extract high-level features. Those features were fed into an SVM classifier to detect the COVID-19 cases. Moreover, in [34], a CNN architecture called COVID-Net, based on transfer learning, was applied to classify CXR images into four classes: normal, bacterial infection, non-COVID viral infection, and COVID-19 viral infection. In [4], a dataset of CXR images from patients with pneumonia, confirmed COVID-19 disease, and normal incidents was used to evaluate the performance of state-of-the-art convolutional neural network architectures previously proposed for medical image classification. The study suggested that transfer learning can extract significant features related to COVID-19 disease.

Having reviewed the related work, it is evident that despite the success of deep learning in the detection of COVID-19 from CXR and CT images, data irregularities have not been explored. It is common in medical imaging in particular that datasets exhibit different types of irregularities (e.g. overlapping classes) that affect the resulting accuracy of machine learning models. Thus, this work focuses on dealing with data irregularities, as presented in the following section.

3 DeTraC method

This section describes in sufficient detail the proposed method for detecting COVID-19 from CXR images. Starting with an overview of the architecture and moving through the different components of the method, the section discusses the workflow and formalises the method.

3.1 DeTraC architecture overview

The DeTraC model consists of three phases. In the first phase, we train the backbone pre-trained CNN model of DeTraC to extract deep local features from each image, and then apply the class-decomposition layer of DeTraC to simplify the local structure of the data distribution. In the second phase, training is accomplished using a sophisticated gradient descent optimisation method. Finally, we use the class-composition layer to refine the final classification of the images. As illustrated in Fig. 2, the class decomposition and composition components are added respectively before and after knowledge transformation from an ImageNet pre-trained CNN model. The class decomposition layer partitions each class within the image dataset into k sub-classes, where each subclass is treated independently. Those sub-classes are then assembled back using the class-composition component to produce the final classification of the original image dataset.

Fig. 2 Decompose, Transfer, and Compose (DeTraC) model for the detection of COVID-19 from CXR images

3.2 Deep feature extraction

A shallow-tuning mode was used during the adaptation and training of an ImageNet pre-trained CNN model on the collected CXR image dataset. We used the off-the-shelf CNN features of pre-trained models on ImageNet (where training is accomplished only on the final classification layer) to construct the image feature space. However, due to the high dimensionality associated with the images, we applied PCA [36] to project the high-dimensional feature space onto a lower-dimensional one, where highly correlated features were ignored. This step is important for the class decomposition to produce more homogeneous classes, reduce memory requirements, and improve the efficiency of the framework.
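As an illustration, the feature-extraction and PCA steps can be sketched as follows (PyTorch and scikit-learn for illustration only; the choice of the penultimate layer and the retained variance are assumptions, not the paper's exact settings):

```python
# Sketch: off-the-shelf deep features followed by PCA.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.decomposition import PCA

cnn = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
cnn.eval()
# Drop the final classification layer to expose 4096-d activations.
feature_extractor = nn.Sequential(cnn.features, cnn.avgpool, nn.Flatten(),
                                  *list(cnn.classifier.children())[:-1])

@torch.no_grad()
def extract_features(batch: torch.Tensor) -> torch.Tensor:
    return feature_extractor(batch)          # shape (n, 4096)

# images: a pre-processed tensor of shape (n, 3, 224, 224)
# features = extract_features(images).numpy()
# pca = PCA(n_components=0.95)               # keep 95% of the variance
# A = pca.fit_transform(features)            # the lower-dimensional dataset A
```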

3.3 Class decomposition layer

Now assume that our feature space (the PCA output) is represented by a 2-D matrix (denoted as dataset A), and that L is the set of class labels. A and L can be written as

$$ A= \left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1m}\\ a_{21} & a_{22} & \ldots & a_{2m}\\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nm} \end{matrix} \right], \quad \mathbf{L}= \left\{ l_{1}, l_{2}, \ldots, l_{c} \right\}, $$
(1)

where n is the number of images, m is the number of features, and c is the number of classes. For class decomposition, we used k-means clustering [38] to further divide each class into k homogeneous sub-classes (or clusters), where each pattern in the original class L is assigned to the class label associated with the nearest centroid, based on the squared Euclidean distance (SED):

$$ SED= \sum\limits_{j=1}^{k} \sum\limits_{i=1}^{n} \left\| a_{i}^{\left(j\right)} - c_{j} \right\|^{2}, $$
(2)

where \(a_{i}^{(j)}\) is a data point assigned to cluster j and \(c_{j}\) denotes its centroid.

Once the clustering is accomplished, each class in L is further divided into k subclasses, resulting in a new dataset (denoted as dataset B).

Accordingly, the relationship between dataset A and B can be mathematically described as:

$$ A = (A | \mathbf{L} ) ~ \mapsto~ B= (B | \mathbf{C} ) $$
(3)

where the number of instances in A is equal to that in B, while C encodes the new labels of the subclasses (e.g. \(\mathbf {C} =\{l_{11}, l_{12}, \dots , l_{1k}, l_{21}, l_{22}, \dots , l_{2k}, \dots , l_{ck} \}\)). Consequently, A and B can be rewritten as:

$$ \begin{aligned} A= \left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1m} & l_{1}\\ a_{21} & a_{22} & \ldots & a_{2m} & l_{1}\\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nm} & l_{c} \end{matrix} \right], \\ B= \left[ \begin{matrix} b_{11} & b_{12} & \ldots & b_{1m} & l_{11}\\ b_{21} & b_{22} & \ldots & b_{2m} & l_{11}\\ \vdots & \vdots & \ddots & \vdots & \vdots \\ b_{n1} & b_{n2} & \ldots & b_{nm} & l_{ck} \end{matrix} \right]. \end{aligned} $$
(4)
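A minimal sketch of this decomposition step, assuming scikit-learn's k-means and a simple `original class × k + cluster` labelling scheme of our own (the paper's actual label encoding may differ):

```python
# Sketch of the class-decomposition step: each original class is split
# into k clusters with k-means, and cluster indices become new labels.
import numpy as np
from sklearn.cluster import KMeans

def decompose_classes(A: np.ndarray, labels: np.ndarray, k: int = 2):
    """Return subclass labels C such that class l becomes l*k + cluster."""
    C = np.empty_like(labels)
    for l in np.unique(labels):           # labels assumed to be 0..c-1
        idx = np.where(labels == l)[0]
        clusters = KMeans(n_clusters=k, n_init=10,
                          random_state=0).fit_predict(A[idx])
        C[idx] = l * k + clusters         # e.g. class 0 -> subclasses 0, 1
    return C

# Example: with k = 2, the three original classes (normal, COVID-19, SARS)
# yield six subclasses, matching the setting used later in Table 1.
```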

3.4 Transfer learning

With the high availability of large-scale annotated image datasets, the chance for the different classes to be well represented is high. Therefore, the learned in-between class boundaries are most likely generic enough to generalise to new samples. On the other hand, with the limited availability of annotated medical image data, especially when some classes are under-represented compared to others in size and representation, the generalisation error might increase, because there might be a miscalibration between the minority and majority classes. Large-scale annotated image datasets (such as ImageNet) provide an effective solution to this challenge via transfer learning, where the tens of millions of parameters of CNN architectures do not need to be trained from scratch.

For transfer learning, we used, tested, and compared several ImageNet pre-trained models in both shallow- and deep-tuning modes, as investigated in the experimental study section. With the limited availability of training data, stochastic gradient descent (SGD) can cause the objective/loss function to fluctuate heavily, and hence overfitting can occur. To improve convergence and mitigate overfitting, mini-batch stochastic gradient descent (mSGD) was used to minimise the objective function, E(⋅), with cross-entropy loss

$$ E\left(y^{j}, z\left(x^{j}\right)\right) = -\frac{1}{n}\sum\limits_{j=1}^{n} \left[ y^{j} \ln z\left(x^{j}\right) + \left(1-y^{j}\right) \ln\left(1-z\left(x^{j}\right)\right) \right], $$
(5)

where \(x^{j}\) is the j-th input image in the training set, \(y^{j}\) is its ground-truth label, and \(z(\cdot)\) is the predicted output of a softmax function.
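As an illustrative sketch, a mini-batch SGD training loop with a cross-entropy loss might look as follows (PyTorch for illustration; the hyper-parameters shown are placeholders rather than the settings reported later in Table 2):

```python
# Minimal mini-batch SGD loop with cross-entropy loss.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

def train(model: nn.Module, X: torch.Tensor, y: torch.Tensor,
          epochs: int = 10, batch_size: int = 32, lr: float = 1e-3):
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size,
                        shuffle=True)
    criterion = nn.CrossEntropyLoss()        # multi-class analogue of Eq. (5)
    optimiser = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for xb, yb in loader:
            optimiser.zero_grad()
            loss = criterion(model(xb), yb)  # averaged over the mini-batch
            loss.backward()
            optimiser.step()
    return model
```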

3.5 Evaluation and composition

In the class decomposition layer of DeTraC, we divide each class within the image dataset into several sub-classes, where each subclass is treated as a new independent class. In the composition phase, those sub-classes are assembled back to produce the final prediction based on the original image dataset. For performance evaluation, we adopted Accuracy (ACC), Specificity (SP) and Sensitivity (SN) metrics. They are defined as:

$$ \text{Accuracy (ACC)} = \frac{TP+TN}{n}, $$
(6)
$$ \text{Sensitivity (SN)} = \frac{TP}{TP+FN}, $$
(7)
$$ \text{Specificity (SP)} = \frac{TN}{TN+FP}, $$
(8)

where TP denotes true positives (correctly identified COVID-19 cases), TN denotes true negatives (correctly identified normal or other-disease cases), and FP and FN denote false positives and false negatives, i.e. incorrect model predictions for COVID-19 and the other cases, respectively.

More precisely, in this work we are coping with a multi-class classification problem. Consequently, our model has been evaluated using the multi-class confusion matrix of [28]. Each input image is classified into one of c non-overlapping classes, so the confusion matrix is a (c × c) matrix, and TP, TN, FP, and FN for a specific class i are defined as:

$$ TP_{i} = x_{ii}, $$
(9)
$$ TN_{i} = \sum\limits_{j=1, j\neq i}^{c} \sum\limits_{k=1, k\neq i}^{c} x_{jk}, $$
(10)
$$ FP_{i} = \sum\limits_{j=1, j\neq i}^{c} x_{ji}, $$
(11)
$$ FN_{i} = \sum\limits_{j=1, j\neq i}^{c} x_{ij}, $$
(12)

where \(x_{ii}\) is the i-th diagonal element of the confusion matrix.
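These per-class quantities can be computed directly from the confusion matrix; a small numpy sketch following Eqs. (6)-(12) (illustrative; the function name and the example matrix are ours):

```python
# Per-class TP/TN/FP/FN and the ACC/SN/SP metrics from a c-by-c
# confusion matrix (rows = true classes, columns = predictions).
import numpy as np

def per_class_metrics(M: np.ndarray, i: int):
    TP = M[i, i]                     # diagonal element x_ii, Eq. (9)
    FP = M[:, i].sum() - M[i, i]     # column i excluding x_ii, Eq. (11)
    FN = M[i, :].sum() - M[i, i]     # row i excluding x_ii, Eq. (12)
    TN = M.sum() - TP - FP - FN      # everything outside row/column i
    n = M.sum()
    return {"ACC": (TP + TN) / n,    # Eq. (6)
            "SN":  TP / (TP + FN),   # Eq. (7)
            "SP":  TN / (TN + FP)}   # Eq. (8)

# Example with 3 classes:
# M = np.array([[30, 2, 1], [3, 25, 2], [0, 1, 10]])
# per_class_metrics(M, i=1)   # metrics for class 1 (e.g. COVID-19)
```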

3.6 Procedural steps of DeTraC model

Having discussed the mathematical formulation of the DeTraC model, its procedural steps are summarised in Algorithm 1.

Algorithm 1 Procedural steps of the DeTraC model
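As a rough reconstruction of these steps, the sketches from the previous subsections can be tied together as follows (an assumption-laden sketch of ours, reusing `build_model`, `extract_features`, `decompose_classes`, and `train` from the earlier blocks; Algorithm 1 in the original may differ in detail, and the probability-summing composition rule is our assumption):

```python
# Our reconstruction of the DeTraC pipeline from Sections 3.2-3.5.
import numpy as np
import torch
from sklearn.decomposition import PCA

def detrac_pipeline(images: torch.Tensor, labels: np.ndarray,
                    num_classes: int, k: int = 2):
    # 1) Off-the-shelf features + PCA (Section 3.2).
    feats = extract_features(images).numpy()
    A = PCA(n_components=0.95).fit_transform(feats)
    # 2) Class decomposition into k subclasses per class (Section 3.3).
    C = decompose_classes(A, labels, k=k)
    # 3) Transfer learning on the decomposed labels (Section 3.4).
    model = build_model(num_classes * k, mode="deep")
    model = train(model, images, torch.from_numpy(C).long())
    # 4) Class composition: map subclass probabilities back (Section 3.5).
    with torch.no_grad():
        probs = torch.softmax(model(images), dim=1).numpy()
    n = probs.shape[0]
    class_probs = probs.reshape(n, num_classes, k).sum(axis=2)
    return class_probs.argmax(axis=1)   # predictions over original classes
```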

DeTraC establishes the effectiveness of class decomposition in detecting COVID-19 from CXR images. The main contribution of DeTraC is its ability to deal with data irregularities, one of the most challenging problems in the detection of COVID-19 cases. The class decomposition layer of DeTraC can simplify the local structure of a dataset with class imbalance. This is achieved by investigating the class boundaries of the dataset and adapting the transfer learning accordingly. In the following section, we experimentally validate DeTraC with real CXR images for the detection of COVID-19 cases among normal and SARS cases.

4 Experimental study

This section presents the dataset used in evaluating the proposed method, and discusses the experimental results.

4.1 Dataset

In this work we used a combination of two datasets:

  • 80 samples of normal CXR images (with 4020 × 4892 pixels) from the Japanese Society of Radiological Technology (JSRT) [5, 11].

  • CXR images from [6], which contain 105 COVID-19 and 11 SARS samples (with 4248 × 3480 pixels).

4.2 Parameter settings

All the experiments in our work were carried out in MATLAB 2019a on a PC with the following configuration: a 3.70 GHz Intel(R) Core(TM) i3-6100 CPU, an NVIDIA Quadro P5000 GPU, and 8.00 GB RAM.

We applied different data augmentation techniques to generate more samples, including flipping up/down and right/left, translation, and rotation using five different random angles. This process resulted in a total of 1764 samples. A histogram modification technique was also applied to enhance the contrast of each image. The dataset was then divided into two groups: 70% for training the model and 30% for evaluating the classification performance.
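A sketch of how such an augmentation and contrast-enhancement pipeline might look (numpy/scipy for illustration; the original pipeline was implemented in MATLAB, and the translation offsets, angle range, and plain histogram equalisation below are assumptions):

```python
# Illustrative augmentation: flips, random translation, random rotations,
# plus a simple histogram-equalisation contrast step.
import numpy as np
from scipy import ndimage

def augment(img: np.ndarray, rng: np.random.Generator):
    yield np.flipud(img)                       # flip up/down
    yield np.fliplr(img)                       # flip right/left
    dy, dx = rng.integers(-20, 21, size=2)
    yield ndimage.shift(img, (dy, dx))         # random translation
    for angle in rng.uniform(-15, 15, size=5): # five random angles
        yield ndimage.rotate(img, angle, reshape=False)

def equalise(img: np.ndarray, bins: int = 256) -> np.ndarray:
    hist, edges = np.histogram(img.ravel(), bins=bins)
    cdf = hist.cumsum() / hist.sum()           # normalised cumulative hist
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

# Usage: new_samples = list(augment(equalise(img), np.random.default_rng(0)))
```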

For the class decomposition layer, we used the AlexNet [12] pre-trained network in shallow-tuning mode to extract discriminative features of the three original classes. AlexNet is composed of 5 convolutional layers to represent learned features and 3 fully connected layers for the classification task; it uses 3 × 3 max-pooling layers, ReLU activation functions, and three different kernel sizes. We adapted the last fully connected layer to the three classes and initialised the weight parameters for our specific classification task. For the class decomposition process, we used k-means clustering [38]. In this step, as pointed out in [2], we selected k = 2; hence each class in L is further divided into two clusters (or subclasses), resulting in a new dataset (denoted as dataset B) with six classes (norm_1, norm_2, COVID-19_1, COVID-19_2, SARS_1, and SARS_2); see Table 1.

Table 1 Sample distribution in each class of the CXR dataset before and after class decomposition

In the transfer learning stage of DeTraC, we used different ImageNet pre-trained CNN networks such as AlexNet [12], VGG19 [27], ResNet [31], GoogleNet [32], and SqueezeNet [10]. The parameter settings for each pre-trained model during the training process are reported in Table 2.

Table 2 Parameters settings for each pre-trained model used in our experiments

4.3 Validation and comparisons

To demonstrate the robustness of DeTraC, we used different ImageNet pre-trained CNN models (such as AlexNet, VGG19, ResNet, GoogleNet, and SqueezeNet) for the transfer learning stage in DeTraC. For a fair comparison, we also compared the performance of the different versions of DeTraC directly with the pre-trained models in both shallow- and deep-tuning modes. The results are summarised in Table 3, confirming the robustness and effectiveness of the class decomposition layer when used with pre-trained ImageNet CNN models.

Table 3 COVID-19 Classification performance (on both original and augmented test cases) before and after applying DeTraC, obtained by AlexNet, VGG19, ResNet, GoogleNet, and SqueezeNet in case of shallow- and deep- tuning modes

DeTraC with VGG19 achieved the highest accuracy of 97.35%, a sensitivity of 98.23%, and a specificity of 96.34%. Figure 3 shows the learning-curve accuracy and loss between training and test obtained by DeTraC, and the Area Under the ROC Curve (AUC) is shown in Fig. 4.

Fig. 3 The learning curve accuracy (a) and error (b) obtained by the DeTraC model when VGG19 is used as a backbone pre-trained model

Fig. 4 The ROC analysis curve obtained by training the DeTraC model based on the VGG19 pre-trained model

Based on the above, deep-tuning mode provides better performance in all cases. Thus, the last set of experiments was conducted to show the effectiveness of DeTraC in detecting original COVID-19 cases (i.e. after removing the augmented test images). Table 4 demonstrates the classification performance of all the models used in this work on the original test cases. As illustrated by Table 4, DeTraC outperformed all pre-trained models by a large margin in most cases. Note that in the only case where a deep-tuned pre-trained model (VGG19) showed a higher sensitivity (100%) in detecting COVID-19 cases than DeTraC (87.09%), this was achieved at the cost of a very low specificity (53.44%) compared to DeTraC (100%). This explains the notable accuracy boost when applying DeTraC on the deep-tuned VGG19 architecture (+25.36%).

Table 4 COVID-19 Classification performance (on the original test cases) before and after applying DeTraC, obtained by AlexNet, VGG19, ResNet, GoogleNet, and SqueezeNet in case of deep-tuning mode

5 Discussion

Training CNNs can be accomplished using two different strategies. They can be used as an end-to-end network, where an enormous number of annotated images must be provided (which is impractical in medical imaging). Alternatively, transfer learning usually provides an effective solution given the limited availability of annotated images, by transferring knowledge from pre-trained CNNs (that have been learned from a benchmarked large-scale image dataset) to the specific medical imaging task. Transfer learning can be accomplished via three main scenarios: shallow-tuning, fine-tuning, or deep-tuning. However, data irregularities, especially in medical imaging applications, remain a challenging problem that usually results in miscalibration between the different classes in the dataset. CNNs can provide an effective and robust solution for the detection of COVID-19 cases from CXR images, and this can contribute to controlling the spread of the disease.

Here, we adapted and validated a deep convolutional neural network, called DeTraC, to deal with irregularities in a COVID-19 dataset by exploiting the advantages of class decomposition within CNNs for image classification. DeTraC is a generic transfer learning model that aims to transfer knowledge from a generic large-scale image recognition task to a domain-specific task. In this work, we validated DeTraC on a COVID-19 dataset with imbalanced classes (including 105 COVID-19, 80 normal, and 11 SARS cases). DeTraC achieved a high accuracy of 97.35% (with a sensitivity of 98.23%) with the VGG19 pre-trained ImageNet CNN model, confirming its effectiveness on real CXR images. To demonstrate the robustness and the contribution of the class decomposition layer of DeTraC during the knowledge transformation using transfer learning, we also used different pre-trained CNN models such as AlexNet, VGG, ResNet, GoogleNet, and SqueezeNet. Experimental results showed high accuracy in dealing with the COVID-19 cases used in this work for all the pre-trained models trained in deep-tuning mode. More importantly, significant improvements were demonstrated by DeTraC when compared to the pre-trained models without the class decomposition layer. Finally, this work suggests that the class decomposition layer with a pre-trained ImageNet CNN model (trained in deep-tuning mode) is essential for transfer learning (see Figs. 5 and 6), especially when data irregularities are present in the dataset.

Fig. 5 The accuracy (a) and sensitivity (b), on both original and augmented test cases, obtained by the DeTraC model compared to different pre-trained models in shallow- and deep-tuning modes

Fig. 6 The accuracy (a) and sensitivity (b), on the original test cases only, obtained by the DeTraC model compared to different pre-trained models in shallow- and deep-tuning modes

6 Conclusion and future work

Diagnosis of COVID-19 is typically associated with the symptoms of pneumonia, which can be revealed by genetic and imaging tests. Imaging tests can provide fast detection of COVID-19 and consequently contribute to controlling the spread of the disease. CXR and CT are the imaging techniques that play an important role in the diagnosis of COVID-19 disease. Paramount progress has been made in deep CNNs for medical image classification, due to the availability of large-scale annotated image datasets. CNNs enable learning highly representative and hierarchical local image features directly from data. However, irregularities in annotated data remain the biggest challenge in coping with real COVID-19 cases from CXR images.

In this paper, we adapted DeTraC, a deep CNN architecture that relies on a class decomposition approach, for the classification of COVID-19 images in a comprehensive dataset of CXR images. DeTraC provided an effective and robust solution for the classification of COVID-19 cases and demonstrated its ability to cope with data irregularity and a limited number of training images. We validated DeTraC with different pre-trained CNN models, where the highest accuracy was obtained by VGG19 within DeTraC. With the continuous collection of data, we aim in the future to extend the experimental work by validating the method with larger datasets. We also aim to add an explainability component to enhance the usability of the model. Finally, to increase efficiency and allow deployment on handheld devices, model pruning and quantisation will be utilised.