Abstract
COVID-19 has caused over 6.35 million deaths and over 555 million confirmed cases as of 11 July 2022. It has caused a serious impact on individual health, social and economic activities, and other aspects. Based on the gray-level co-occurrence matrix (GLCM), a four-direction varying-distance GLCM (FDVD-GLCM) is presented. Afterward, a five-property feature set (FPFS) extracts features from FDVD-GLCM. An extreme learning machine (ELM) is used as the classifier to recognize COVID-19. Our model is finally dubbed FECNet. A multiple-way data augmentation method is utilized to boost the training sets. Ten runs of tenfold cross-validation show that this FECNet model achieves a sensitivity of 92.23 ± 2.14, a specificity of 93.18 ± 0.87, a precision of 93.12 ± 0.83, and an accuracy of 92.70 ± 1.13 on the first dataset, and a sensitivity of 92.19 ± 1.89, a specificity of 92.88 ± 1.23, a precision of 92.83 ± 1.22, and an accuracy of 92.53 ± 1.37 on the second dataset. We develop a mobile app integrating the FECNet model; this web app runs on a cloud computing-based client–server architecture. The proposed FECNet and the corresponding mobile app effectively recognize COVID-19, and their performance is better than five state-of-the-art COVID-19 recognition models.
1 Introduction
COVID-19 has caused more than 6.35 million deaths and over 555 million confirmed cases as of 11 July 2022 (see Fig. 1). It has caused a serious impact on individual health, social and economic activities, and other aspects [1]. The main symptoms of the disease are a recent and persistent cough, low fever, and a loss or alteration of smell and taste [2].
Two types of COVID-19 recognition tools are popular. The first is viral testing, which examines the presence of virus-related RNA fragments [3]. Its limitations are: (a) the swab might be contaminated [4], and (b) the tested person has to wait from several hours to several days for the test outcomes. The second type is chest imaging modalities [5], among which chest computed tomography (CCT) [6] provides the highest sensitivity.
Nevertheless, manual labeling by human experts is tedious, inefficient, and laborious, and is inevitably influenced by personal bias. In contrast, computer-aided diagnosis methods are now achieving better outcomes in automatic labeling of COVID-19 than human experts, owing to the progress of computer vision and deep learning.
El-kenawy et al. [7] presented the Feature Selection and Voting Classifier (FSVC) for diagnosing the disease. Ni et al. [8] developed a deep learning approach (DLA) to characterize COVID-19. Wang et al. [9] proposed a tailored deep convolutional neural network (TDCNN) to detect COVID-19. Hou [10] proposed a 6-layer deep convolutional network (6 l-DCN) to detect this disease. Wang [11] combined wavelet entropy (WE) and genetic algorithm (GA) to identify COVID-19. Wang [12] integrated WE and cat swarm optimization (CSO) to identify COVID-19. Pi [13] combined the gray-level co-occurrence matrix (GLCM) and Schmitt network for COVID-19 recognition, achieving an accuracy of 76.33%. Gafoor et al. [14] presented a deep learning model (DLM) to discover the disease from chest X-ray images.
Nevertheless, the performance of the above methods can still be improved. To recognize COVID-19 more precisely, the authors propose a new COVID-19 recognition model and web app (WA). We introduce the GLCM to describe chest CT images, and the four-direction varying-distance GLCM (FDVD-GLCM) is presented for a better description. The five-property feature set (FPFS) is extracted from the FDVD-GLCM matrix, and the extreme learning machine (ELM) neural network is used as the classifier. In all, our method is abbreviated as FECNet, in which F stands for the FDVD-GLCM, E for the ELM classifier, C for COVID-19, and Net for the neural network. To avoid overfitting, we use multiple-way data augmentation (MDA) to boost the training set.
COVID-19-related mobile apps were developed in the past, such as COVIUAM [15]. Tsinaraki et al. [16] investigated Apple's App Store, Google Play, relevant tweets, and digital media outlets, and listed recent mobile apps to fight against COVID-19. Denis et al. [17] developed a self-assessment web app to evaluate trends of the COVID-19 pandemic in France. Kinori et al. [18] developed a web app for emotional management during the COVID-19 pandemic. Smith et al. [19] developed the 'covidscreen' app to assess asymptomatic COVID-19 testing strategies.
Therefore, inspired by previous COVID-19-related mobile apps, we have developed a mobile app based on our FECNet method. In all, the contributions are listed below:
1. The FDVD-GLCM is presented as the feature descriptor.
2. An FPFS is extracted from the FDVD-GLCM descriptor.
3. ELM is introduced as the classifier, and MDA is used to boost the training set.
4. Cross-validation shows this FECNet method is superior to five state-of-the-art methods.
5. We have developed a mobile app for our proposed FECNet method.
2 Dataset and preprocessing
Two COVID-19 datasets (\({E}_{1}\) and \({E}_{2}\)) are used. The details of the two datasets (\({E}_{1}\) and \({E}_{2}\)) are recorded in open access data from references [20, 21]. Table 1 displays the basic information of \({E}_{1}\) and \({E}_{2}\), where \(a+b\) means \(a\) COVID-19 and \(b\) healthy control (HC).
Suppose \({\eta }_{s}\) stands for the number of subjects, and \({\eta }_{i}\) the number of CCT images. It is easy to observe that there are \({\eta }_{i}\left({E}_{1}\right)=296\) images in \({E}_{1}\) and \({\eta }_{i}\left({E}_{2}\right)=640\) images in \({E}_{2}\).
A four-step preprocessing (FSP) method is introduced. The flowchart can be seen in Fig. 2(a), in which the four steps are grayscaling, histogram stretch (HS), margin and text crop (MTC), and down-sample (DS). Here \({V}_{0}\) stands for the raw dataset, \({V}_{k}\left(k=\mathrm{1,2},3\right)\) stands for the dataset at each temporary step, and \(V\) the preprocessed dataset after the last step.
We skip the explanation of grayscaling. Afterward, HS is used to enhance the contrast, since it and its variants have proven their ability in many academic and industrial applications [22, 23]. Suppose \({V}_{1}=\left\{{v}_{1}\left(i\right)\right\}\); we first calculate its lower bound \({v}_{1}^{L}\left(i\right)\) and upper bound \({v}_{1}^{U}\left(i\right)\) as:

$${v}_{1}^{L}\left(i\right)=\underset{x,y}{\mathrm{min}}\,{v}_{1}\left(i|x,y\right),\quad {v}_{1}^{U}\left(i\right)=\underset{x,y}{\mathrm{max}}\,{v}_{1}\left(i|x,y\right),$$

where \(\left(x,y\right)\) traverses all pixel positions, and the HSed image is defined as

$${v}_{2}\left(i\right)={v}_{\text{min}}+\left({v}_{\text{max}}-{v}_{\text{min}}\right)\frac{{v}_{1}\left(i\right)-{v}_{1}^{L}\left(i\right)}{{v}_{1}^{U}\left(i\right)-{v}_{1}^{L}\left(i\right)}.$$
The grayscale range of \({v}_{2}\left(i\right)\) is \(\left[{v}_{\text{min}},{v}_{\text{max}}\right]\). Figure 2(b-c) shows the samples of raw COVID-19 and preprocessed images, respectively. The size of each image in the final DSed dataset \(V=\left\{v\left(i\right)\right\}\) is set to \(\left({v}_{h},{v}_{w}\right)\).
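The four preprocessing steps can be sketched as follows. This is a minimal sketch: the grayscale weights, the fixed crop margin, and the nearest-neighbour down-sampling are illustrative assumptions, since the paper does not specify these implementation details.

```python
import numpy as np

def preprocess(img_rgb, v_min=0, v_max=255, out_size=(256, 256)):
    """Sketch of the four-step preprocessing (FSP): grayscaling,
    histogram stretch (HS), margin/text crop (MTC), down-sample (DS)."""
    # Step 1: grayscaling (standard luminosity weights, an assumption)
    v1 = img_rgb @ np.array([0.299, 0.587, 0.114])
    # Step 2: histogram stretch to [v_min, v_max]
    lo, hi = v1.min(), v1.max()
    v2 = v_min + (v_max - v_min) * (v1 - lo) / (hi - lo)
    # Step 3: margin-and-text crop (fixed border, for illustration only)
    m = 20
    v3 = v2[m:-m, m:-m]
    # Step 4: down-sample to (v_h, v_w) by nearest-neighbour indexing
    r = np.linspace(0, v3.shape[0] - 1, out_size[0]).astype(int)
    c = np.linspace(0, v3.shape[1] - 1, out_size[1]).astype(int)
    return v3[np.ix_(r, c)]
```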
3 Methodology
3.1 GLCM
Table 2 presents the acronym list. Suppose we have a gray-level image \(Q\) of size \(m\times n\) (\(m\) rows and \(n\) columns). Assuming each pixel takes one of \(p\) different values, the GLCM counts how often pairs of pixels with a definite value and offset occur [24].
The motivation for using GLCM is twofold: (i) it is an efficient method of texture analysis that can capture the distance and angular spatial relationships between pixels; (ii) it has been successfully applied in many industrial and medical applications, e.g., lung cancer classification [25] and pneumonia recognition [26], similar to this COVID-19 recognition task.
The GLCM matrix \(G\) is defined as

$${G}_{\Delta x,\Delta y}\left(i,j\right)=\sum_{x=1}^{m}\sum_{y=1}^{n}\left\{\begin{array}{ll}1,& Q\left(x,y\right)=i\wedge Q\left(x+\Delta x,y+\Delta y\right)=j\\ 0,& \text{otherwise,}\end{array}\right.$$

in which \(\wedge\) stands for the 'and' function. \(i\) and \(j\) stand for possible pixel values of the image \(Q\), and they also mean the \(i\)-th row and \(j\)-th column in the matrix \(G\). \(x\) and \(y\) stand for spatial positions in \(Q\); \(\left(\Delta x,\Delta y\right)\) for the offsets determining the spatial relation. Remember \(G\) is a \(p\times p\) matrix, so \({G}_{\Delta x,\Delta y}\in {\mathbb{R}}^{p\times p}\).
Figure 3 gives a GLCM illustration with an offset \(\left(\Delta x,\Delta y\right)=\left(0,1\right)\). For example, the blue circle in \(G\) shows \({G}_{0,1}\left(2,2\right)=1\), denoting there is one instance in \(Q\) where a pair of horizontally adjacent pixels both have the value 2. The purple circle in \(G\) shows \({G}_{0,1}\left(3,8\right)=1\), indicating there is one instance in \(Q\) of a pair of horizontally adjacent pixels with the values \(\left(3,8\right)\).
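This counting definition can be sketched directly; the function below is a straightforward (unoptimized) implementation, with gray levels assumed to be integers in 0..p−1.

```python
import numpy as np

def glcm(Q, dr, dc, p):
    """Count co-occurrences of pixel-value pairs (i, j) at offset
    (dr, dc) in a gray-level image Q with p levels."""
    G = np.zeros((p, p), dtype=int)
    m, n = Q.shape
    for r in range(m):
        for c in range(n):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < m and 0 <= c2 < n:     # pair lies inside Q
                G[Q[r, c], Q[r2, c2]] += 1
    return G
```

With the offset (0, 1), each increment records one horizontally adjacent pixel pair, matching the Fig. 3 illustration.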
3.2 Four-direction varying-distance GLCM
The \(\left(x,y\right)\) are habitually substituted with row and column coordinates \(\left(r,c\right)\). Successively, the offset is stated in the form of \(\left(\Delta r,\Delta c\right)\). From now on, we will use \(\left(r,c\right)\) coordinates. Figure 4 shows the illustration of the four-direction mechanism (FDM) in GLCM.
Suppose \(\theta\) denotes the directional angle of GLCM (DAG). The offset \(\left[0,1\right]\) corresponds to \(\theta =0^\circ\); in the same way, \(\left[-1,1\right]\), \(\left[-1,0\right]\), and \(\left[-1,-1\right]\) correspond to DAGs of \(45^\circ\), \(90^\circ\), and \(135^\circ\), respectively. The DAGs are fixed to the following four default values:

$$\theta =\left[0^\circ ,45^\circ ,90^\circ ,135^\circ \right].$$

Other DAGs \(\theta =\left[180^\circ ,225^\circ ,270^\circ ,315^\circ \right]\) may be selected. Nonetheless, they render the transposes of the outcomes of DAGs \(\theta =\left[0^\circ , 45^\circ ,90^\circ ,135^\circ \right]\), respectively.
All the above four offsets \(\left(\left[0,1\right],\left[-1,1\right],\left[-1,0\right],\left[-1,-1\right]\right)\) have a distance \(d\) of 1. Srivastava et al. [27] discussed the varying-distance mechanism (VDM), in which \(d\) can range from 1 to the width of the image. Normally we set

$$d\in \left\{1,2,\dots ,D\right\},$$

where \(D\) represents the maximum distance (MD). The optimal value of \(D\) is obtained via the trial-and-error method, since a \(d\) larger than \(D\) does not improve the whole system's performance.
Figure 5 displays the VDM where the MD value is set as \(D=3\). Consequently, this study writes the pseudocode of the four-direction varying-distance GLCM (FDVD-GLCM) in Algorithm 1. Note the offset \(\left(\Delta r,\Delta c\right)\) is now expressed in the form of \(\left(d,\theta \right)\), so the GLCM matrix \(G\) can be written as \({G}_{d,\theta }\left(Q\right)\in {\mathbb{R}}^{p\times p}\). The FDVD-GLCM matrix is now obtained as:

$${G}_{\text{FDVD}}\left(Q\right)=E\left[{G}_{d,\theta }\left(Q\right)\right],\quad d=1,\dots ,D,\;\theta \in \left\{0^\circ ,45^\circ ,90^\circ ,135^\circ \right\},$$

where \(E\) denotes the concatenation function.
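Under these conventions, the FDVD-GLCM can be sketched as below. The stacking order of the 4×D matrices is an assumption; any fixed order works as long as it is used consistently for all images.

```python
import numpy as np

def glcm_dtheta(Q, d, theta, p):
    """GLCM of image Q at distance d and direction theta (degrees),
    using the offsets (0,d), (-d,d), (-d,0), (-d,-d)."""
    off = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}[theta]
    G = np.zeros((p, p), dtype=int)
    m, n = Q.shape
    for r in range(m):
        for c in range(n):
            r2, c2 = r + off[0], c + off[1]
            if 0 <= r2 < m and 0 <= c2 < n:
                G[Q[r, c], Q[r2, c2]] += 1
    return G

def fdvd_glcm(Q, D, p):
    """Concatenate (E) the 4*D GLCMs over distances 1..D and the
    four default directions into a (4*D, p, p) stack."""
    return np.stack([glcm_dtheta(Q, d, t, p)
                     for d in range(1, D + 1)
                     for t in (0, 45, 90, 135)])
```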
3.3 Five-property feature set
Founded on FDVD-GLCM, Zhang [28] proposed the FPFS, which extracts \({N}_{F}=5\) different features from each GLCM. The first feature is the contrast \({f}^{1}\), signifying the local gray-level variation between two neighboring pixels:

$${f}_{d,\theta }^{1}=\sum_{i=1}^{p}\sum_{j=1}^{p}{\left(i-j\right)}^{2}{G}_{d,\theta }\left(i,j\right).$$

For a constant image, its contrast is 0: \({f}_{d,\theta }^{1}\left({\mathbb{C}}\right)=0\), where \({\mathbb{C}}\) denotes any constant image.
The second feature \({f}^{2}\) is the correlation, gauging how correlated a pixel is to its neighbor over the whole GLCM:

$${f}_{d,\theta }^{2}=\sum_{i=1}^{p}\sum_{j=1}^{p}\frac{\left(i-{m}_{i}\right)\left(j-{m}_{j}\right){G}_{d,\theta }\left(i,j\right)}{\sqrt{{v}_{i}{v}_{j}}},$$

where \({m}_{i}\) and \({m}_{j}\) are the GLCM means defined as

$${m}_{i}=\sum_{i=1}^{p}\sum_{j=1}^{p}i\cdot {G}_{d,\theta }\left(i,j\right),\quad {m}_{j}=\sum_{i=1}^{p}\sum_{j=1}^{p}j\cdot {G}_{d,\theta }\left(i,j\right).$$

The GLCM variances \({v}_{i}\) and \({v}_{j}\) are defined as:

$${v}_{i}=\sum_{i=1}^{p}\sum_{j=1}^{p}{\left(i-{m}_{i}\right)}^{2}{G}_{d,\theta }\left(i,j\right),\quad {v}_{j}=\sum_{i=1}^{p}\sum_{j=1}^{p}{\left(j-{m}_{j}\right)}^{2}{G}_{d,\theta }\left(i,j\right).$$

The correlation \({f}^{2}\) is in the range of \(\left[-1,1\right]\):

$${h}_{range}\left({f}_{d,\theta }^{2}\right)=\left[-1,1\right],$$

where \({h}_{range}\left(x\right)\) returns the range of \(x\). Here \(-1\) means a perfectly negative correlation, while \(+1\) a perfectly positive correlation.
The third feature, energy \(\left({f}^{3}\right)\), gauges the sum of squared GLCM entries and thus the textural uniformity of an image. Its definition is:

$${f}_{d,\theta }^{3}=\sum_{i=1}^{p}\sum_{j=1}^{p}{G}_{d,\theta }{\left(i,j\right)}^{2}.$$
The fourth, \({f}^{4}\), is homogeneity [29], an amount that assesses the closeness of the distribution of entries in the GLCM to the GLCM diagonal. Its definition is:

$${f}_{d,\theta }^{4}=\sum_{i=1}^{p}\sum_{j=1}^{p}\frac{{G}_{d,\theta }\left(i,j\right)}{1+\left|i-j\right|}.$$
The final property \({f}^{{N}_{F}}={f}^{5}\) is entropy [30]:

$${f}_{d,\theta }^{5}=-\sum_{i=1}^{p}\sum_{j=1}^{p}{G}_{d,\theta }\left(i,j\right)\mathrm{log}\,{G}_{d,\theta }\left(i,j\right).$$
The \({N}_{F}\) properties are computed on all individual GLCM matrices \(\left\{{G}_{d,\theta }\right\}\) and then concatenated to form the FDVD-GLCM feature \({\mathbb{F}}\).
Thus, the number of FPFS features of each GLCM \({G}_{d,\theta }\) is \({N}_{F}\), and the number of FPFS features of total FDVD-GLCM \({\mathbb{F}}\) is \(H=D\times 4\times {N}_{F}\), so \({\mathbb{F}}\) can be written in vectorized way as \({\mathbb{F}}=\left\{{f}_{1},{f}_{2},\dots , {f}_{H}\right\}\).
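The five properties per GLCM can be computed roughly as below, from a GLCM normalized to a probability matrix. Note the normalization step and the log base for entropy are assumptions; the paper does not state either explicitly.

```python
import numpy as np

def fpfs(G):
    """Five GLCM properties (contrast, correlation, energy,
    homogeneity, entropy) from one GLCM G, normalized first."""
    P = G / G.sum()                     # assumed probability normalization
    i, j = np.indices(P.shape)
    f1 = np.sum((i - j) ** 2 * P)                            # contrast
    mi, mj = np.sum(i * P), np.sum(j * P)                    # GLCM means
    vi, vj = np.sum((i - mi) ** 2 * P), np.sum((j - mj) ** 2 * P)
    f2 = np.sum((i - mi) * (j - mj) * P) / np.sqrt(vi * vj)  # correlation
    f3 = np.sum(P ** 2)                                      # energy
    f4 = np.sum(P / (1 + np.abs(i - j)))                     # homogeneity
    f5 = -np.sum(P[P > 0] * np.log2(P[P > 0]))               # entropy
    return np.array([f1, f2, f3, f4, f5])
```

Running this over all 4×D GLCMs and concatenating the outputs yields the H-dimensional vector \({\mathbb{F}}\) described above.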
3.4 Proposed FECNet model
The FPFS features based on FDVD-GLCM, \({\mathbb{F}}=\left\{{f}_{1},{f}_{2},\dots , {f}_{H}\right\}\), are used as the learned features and passed to the extreme learning machine (ELM) [31], a single hidden-layer feedforward network [32] whose structure is shown in Fig. 6. Its hidden nodes can be randomly assigned and never updated, so the output weights are learned in just one step, making it a very fast classifier [33].
Let the \(i\)-th input sample be \({{\varvec{f}}}_{{\varvec{i}}}=({f}_{i1},\dots ,{f}_{iH}{)}^{\mathrm{T}}\in {\mathbb{R}}^{H},i=1,\dots ,N\), where \(N\) is the number of images in the training set (TRS). The output [34] of an ELM with \(L\) hidden neurons is:

$${{\varvec{O}}}_{{\varvec{i}}}=\sum_{j=1}^{L}{{\varvec{\lambda}}}_{{\varvec{j}}}h\left({\boldsymbol{\alpha }}_{{\varvec{j}}}\cdot {{\varvec{f}}}_{{\varvec{i}}}+{\beta }_{j}\right),\quad i=1,\dots ,N,$$

where \(h\) stands for the activation function, which will be defined in Section 4; \({\boldsymbol{\alpha }}_{{\varvec{j}}}=({\alpha }_{j1},{\alpha }_{j2},\dots ,{\alpha }_{jH}{)}^{\mathrm{T}}\) stands for the input weight, \({\beta }_{j}\) the bias, \({{\varvec{\lambda}}}_{{\varvec{j}}}\) the output weight, and \({{\varvec{O}}}_{{\varvec{i}}}={\left({o}_{i1},{o}_{i2},\dots ,{o}_{im}\right)}^{\mathrm{T}}\) the output of the model for the \(i\)-th input sample. Afterward, the model is trained to yield

$$\sum_{j=1}^{L}{{\varvec{\lambda}}}_{{\varvec{j}}}h\left({\boldsymbol{\alpha }}_{{\varvec{j}}}\cdot {{\varvec{f}}}_{{\varvec{i}}}+{\beta }_{j}\right)={{\varvec{t}}}_{{\varvec{i}}},\quad i=1,\dots ,N,$$

where \({{\varvec{t}}}_{{\varvec{i}}}\) is the target vector of the \(i\)-th sample.
Let us rephrase the above equation in matrix form as

$${\varvec{M}}{\varvec{\lambda}}={\varvec{T}},$$

where \({\varvec{M}}\in {\mathbb{R}}^{N\times L}\) is the hidden-layer output matrix with entries \({M}_{ij}=h\left({\boldsymbol{\alpha }}_{{\varvec{j}}}\cdot {{\varvec{f}}}_{{\varvec{i}}}+{\beta }_{j}\right)\), \({\varvec{\lambda}}={\left({{\varvec{\lambda}}}_{1},\dots ,{{\varvec{\lambda}}}_{L}\right)}^{\mathrm{T}}\) the output weight matrix, and \({\varvec{T}}={\left({{\varvec{t}}}_{1},\dots ,{{\varvec{t}}}_{N}\right)}^{\mathrm{T}}\) the target matrix.
It challenges the users to obtain the optimal \({\boldsymbol{\alpha }}_{{\varvec{j}}}\), \({\beta }_{j}\), and \({{\varvec{\lambda}}}_{{\varvec{j}}}\) by iterative training. Instead, ELM fixes \({\boldsymbol{\alpha }}_{{\varvec{j}}}\) and \({\beta }_{j}\) randomly and yields a solution quickly [35] via the pseudoinverse:

$$\widehat{{\varvec{\lambda}}}={{\varvec{M}}}^{\dagger}{\varvec{T}},$$

where \({{\varvec{M}}}^{\dagger}\) signifies the Moore–Penrose pseudoinverse [36] of \({\varvec{M}}\). The pseudocode is shown in Algorithm 2. The flowchart is shown in Fig. 7.
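A minimal ELM sketch following this one-step solution is given below. The sigmoid activation matches Section 4; the one-hot target encoding is an assumption about the paper's exact setup.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random fixed input weights
    and biases; output weights solved in one step via the
    Moore-Penrose pseudoinverse (lambda = M_dagger @ T)."""

    def __init__(self, H, L, seed=0):
        rng = np.random.default_rng(seed)
        self.alpha = rng.standard_normal((H, L))   # input weights (fixed)
        self.beta = rng.standard_normal(L)         # hidden biases (fixed)

    def _hidden(self, F):
        # Hidden-layer output matrix M with sigmoid activation
        return 1.0 / (1.0 + np.exp(-(F @ self.alpha + self.beta)))

    def fit(self, F, y):
        T = np.eye(y.max() + 1)[y]                 # one-hot targets
        self.lam = np.linalg.pinv(self._hidden(F)) @ T
        return self

    def predict(self, F):
        return np.argmax(self._hidden(F) @ self.lam, axis=1)
```

Because only the output weights are solved (a single least-squares problem), training is orders of magnitude faster than backpropagation.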
3.5 Cross-validation
\(T\)-fold cross-validation (CV) is employed to evaluate this proposed FECNet model because of the relatively small size of the two datasets. At the \(r\)-th run \(\left(1\le r\le R\right)\) of the \(T\)-fold CV [37], the whole dataset \(V\) is divided into \(T\) folds. Figure 8 displays the diagram of \(T\)-fold cross-validation. We let \(T=10\), i.e., a ten-fold CV is run.
Furthermore, the \(T\)-fold CV is carried out \(R\) times. Within each run, the division is re-split [38] randomly:

$$V=\bigcup_{t=1}^{T}{V}_{r}\left(t\right),$$

where \({V}_{r}\left(t\right)\) means the \(t\)-th fold of \(V\) at the \(r\)-th run.
In the \(t\)-th \(\left(1\le t\le T\right)\) trial, the \(t\)-th fold is selected as the test set (TES) \({s}_{r}^{test}\left(t\right)\), and the rest \(T-1\) folds are merged as the TRS \({s}_{r}^{train}\left(t\right)\):

$$\left\{\begin{array}{l}{s}_{r}^{test}\left(t\right)={V}_{r}\left(t\right)\\ {s}_{r}^{train}\left(t\right)=\bigcup_{k\ne t}{V}_{r}\left(k\right).\end{array}\right.$$
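This re-splitting T-fold scheme can be sketched as follows; fold sizes may differ by one when the dataset size is not divisible by T.

```python
import numpy as np

def run_cv(n_samples, T=10, R=10, seed=0):
    """Yield (run, fold, train_idx, test_idx) for R runs of T-fold CV,
    re-splitting the sample indices randomly at each run."""
    rng = np.random.default_rng(seed)
    for r in range(R):
        perm = rng.permutation(n_samples)          # fresh random split
        folds = np.array_split(perm, T)
        for t in range(T):
            test = folds[t]                        # t-th fold -> TES
            train = np.concatenate(                # remaining T-1 -> TRS
                [folds[k] for k in range(T) if k != t])
            yield r, t, train, test
```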
3.6 Multiple-way data augmentation
The TRS \({s}_{r}^{train}\left(t\right)\) is enhanced via the MDA described in Algorithm 1 of Ref. [39]. \(\beta =9\) different individual data augmentation (IDA) [39] techniques are employed: rotation, Gamma correction, vertical shear, horizontal shear, Gaussian noise, random translation, scaling, salt-and-pepper noise, and speckle noise. Each IDA method \({z}_{DA}\) generates the same number \(\gamma =30\) of new images. Suppose an image in the TRS is \(v\); the MDA yields an enhanced set:

$$M\left(v\right)=\left\{v,h\left(v\right)\right\}\cup \bigcup_{i=1}^{\beta }{z}_{DA}^{i}\left(v\right)\cup \bigcup_{i=1}^{\beta }{z}_{DA}^{i}\left(h\left(v\right)\right),$$

where \({z}_{DA}^{i},i=1,\dots ,\beta\) denotes the \(i\)-th type of IDA and \(h\left(v\right)\) is the horizontally mirrored image (HMI) of \(v\).
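A schematic sketch of this enhanced set is shown below, with placeholder IDA operators standing in for the real rotation, shear, and noise techniques; the callable signature `op(v, k)` is an illustrative assumption.

```python
import numpy as np

def mda(v, ida_ops, gamma=30, mirror=lambda x: x[:, ::-1]):
    """Schematic multiple-way data augmentation: keep v and its
    horizontal mirror h(v), then apply each of the beta IDA
    operators gamma times to both images."""
    hv = mirror(v)                      # h(v), horizontally mirrored image
    out = [v, hv]
    for op in ida_ops:                  # beta individual DA techniques
        out += [op(v, k) for k in range(gamma)]
        out += [op(hv, k) for k in range(gamma)]
    return out                          # 2 * (1 + beta * gamma) images
```

With β = 9 and γ = 30, each TRS image yields 2 × (1 + 9 × 30) = 542 images, matching the count reported in Section 4.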
All seven measures' mean-and-standard-deviation (MSD) results will be reported on TES. They are sensitivity (\({\eta }_{1}\)), specificity (\({\eta }_{2}\)), precision (\({\eta }_{3}\)), accuracy (\({\eta }_{4}\)), F1 score (\({\eta }_{5}\)), Matthews correlation coefficient (MCC, \({\eta }_{6}\)), and Fowlkes–Mallows index (FMI, \({\eta }_{7}\)).
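The seven indicators can be computed from a binary confusion matrix as follows (COVID-19 taken as the positive class, values in percent):

```python
import numpy as np

def measures(tp, fn, fp, tn):
    """The seven indicators eta_1..eta_7 from a binary confusion
    matrix, in percent."""
    sen = tp / (tp + fn)                      # eta1 sensitivity
    spe = tn / (tn + fp)                      # eta2 specificity
    pre = tp / (tp + fp)                      # eta3 precision
    acc = (tp + tn) / (tp + fn + fp + tn)     # eta4 accuracy
    f1 = 2 * pre * sen / (pre + sen)          # eta5 F1 score
    mcc = (tp * tn - fp * fn) / np.sqrt(      # eta6 MCC
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    fmi = np.sqrt(pre * sen)                  # eta7 FMI
    return 100 * np.array([sen, spe, pre, acc, f1, mcc, fmi])
```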
4 Experiments, results, and discussions
The hyperparameters are listed in Table 3. Most optimal hyperparameter values are obtained by the trial-and-error method, and the others by experience. The minimum and maximum gray values of the HSed images are \(\left(0,255\right)\). The size of the DSed image is \(\left({v}_{h},{v}_{w}\right)=\left(256,256\right)\). The MD of GLCM is set to \(D=5\). The number of gray levels is \(p=8\). The number of FPFS features from each GLCM is \({N}_{F}=5\). The total number of FPFS features generated by FDVD-GLCM is \(H=D\times 4\times {N}_{F}=100\). The activation function in ELM is set to sigmoid [40]. The number of hidden neurons in ELM is \(L=3000\). We run 10 runs of tenfold CV. We have \(\beta =9\) different IDA methods, and each IDA generates \(\gamma =30\) new images, so in total we generate \(2\times \left(1+\beta \gamma \right)=542\) images for each image in the TRS.
4.1 Result of MDA
Taking Fig. 2(c) as the TRS image \(v\), Fig. 9(a-g) displays the outcomes of seven different IDAs. The HMI \(h\left(v\right)\) and its related IDA outcomes are not shown due to the page limit. As we can see from Fig. 9, MDA adds substantial variety to \({s}^{train}\), thereby enhancing the performance of our COVID-19 recognition system FECNet.
4.2 Statistical performance of our FECNet model
The results of 10 runs of the tenfold CV of our FECNet model are reported in Table 4. In each run, the division of the tenfold CV is randomly reset. The MSD values of the seven indicators on the 1st dataset \({E}_{1}\) are listed as: sensitivity (92.23 ± 2.14), specificity (93.18 ± 0.87), precision (93.12 ± 0.83), accuracy (92.70 ± 1.13), F1 (92.66 ± 1.20), MCC (85.43 ± 2.24), and FMI (92.67 ± 1.20). The MSD values of the seven indicators on the 2nd dataset \({E}_{2}\) are recorded as: sensitivity (92.19 ± 1.89), specificity (92.88 ± 1.23), precision (92.83 ± 1.22), accuracy (92.53 ± 1.37), F1 (92.50 ± 1.41), MCC (85.07 ± 2.73), and FMI (92.50 ± 1.41).
After we sum the ten runs' test confusion matrices [41], the consequent confusion matrices (CCMs) are displayed in Fig. 10, from which we can see how many COVID-19 cases are wrongly predicted as HC cases and vice versa for the two datasets.
4.3 Comparison to other models
The proposed FECNet model is compared with eight state-of-the-art (SOTA) COVID-19 recognition models: FSVC [7], DLA [8], TDCNN [9], 6 l-DCN [10], WEGA [11], WECSO [12], GLCM [13], and DLM [14]. The results are plotted in Fig. 11 and itemized in Table 5, in which the best results are emboldened. We see that this proposed FECNet model gains the best outcomes on almost all seven indicators for both datasets.
The success of our FECNet model is attributed to four aspects: (i) the FDVD-GLCM is presented as the feature descriptor; (ii) the FPFS is extracted from the FDVD-GLCM descriptor as a feature reduction method; (iii) ELM is used as the classifier; (iv) MDA is used to boost the training set.
The performance of the proposed FECNet model is better than not only non-deep-learning models but also deep-learning models. DLA [8] achieves the best sensitivity \(\left({\eta }_{1}\right)\) on the second dataset \(\left({E}_{2}\right)\). However, it has poor specificity \(\left({\eta }_{2}\right)\) on both datasets. TDCNN [9] was originally designed for a triple-classification task on X-ray images, so when we adapt it to our binary-classification task on CCT images, the TDCNN [9] model may not work perfectly. 6 l-DCN [10] gains promising results; nevertheless, its performance can be improved, since its network is only six layers deep. DLM [14], a six-layer convolutional neural network, also attains promising results. Again, the model can be made deeper to enhance its performance.
4.4 Mobile app
MATLAB App Designer is used to build a specialized WA for our FECNet model. The input to this WA is any CCT image, and our FECNet model is integrated into the WA, which is accessed through a Google Chrome (Version: 105.0.5195.125) web browser. The WA is based on a cloud computing-based client–server architecture, i.e., the client is delivered services through an off-site server hosted by a third-party cloud facility, namely Microsoft Azure [42].
Figure 12(a) displays the home page, indicating our mobile app is now version 9.0. The URL of the home page is http://kdml.le.ac.uk/webapps/home. Figure 12(b) shows the screenshot of clicking the 'upload custom image' button. A new window pops out and lets the user drag and drop any CCT image. Then, the user needs to click the 'recognize' button. Figure 12(c) displays the screenshot of the recognition result.
This online WA helps hospital clinicians/radiologists make decisions remotely and effectively. The clients can upload their own CCT images; the WA then gives the diagnosis outcome by turning the knob to the predicted label: COVID-19, HC, or None. We add the new 'None' category in case some clients wrongly upload nonsense images, such as cat or dog images.
5 Conclusion
We propose a novel FECNet model for COVID-19 recognition. We use the FPFS feature based on FDVD-GLCM as the feature extractor and ELM as the classifier. MDA is used for data augmentation on TRSs. The ten runs of tenfold CV show that our FECNet model is better than five SOTA models: FSVC [7], 6 l-DCN [10], WECSO [12], GLCM [13], and DLM [14].
After reviewing the FECNet model and its WA, we have observed three points that can be improved: (a) the task is only a binary classification problem, and other chest-related infectious diseases are not considered; (b) advanced classification technologies are not tested; (c) the hyperparameters are not proven to be optimal.
In future studies, we plan to include other chest-related infectious diseases, e.g., pneumonia and tuberculosis. We shall examine recent classification technologies, such as weakly-supervised learning. Finally, a hyperparameter optimization method will be embedded in our future study.
Data availability
The datasets analyzed during the current study are available from the corresponding author on reasonable request.
References
Nguyen HH et al (2022) Impacts of monetary policy transmission on bank performance and risk in the Vietnamese market: does the Covid-19 pandemic matter? Cogent Bus Manag 9(1):Article ID. 2094591
Subawa NS et al (2022) MSMEs envisaged as the economy spearhead for Bali in the covid-19 pandemic situation. Cogent Econ Finance 10(1):Article ID. 2096200
Mauro GD et al (2022) European safety analysis of mRNA and viral vector COVID-19 vaccines on glucose metabolism events. Drug Saf 45(10):1209–1210
Rosa V et al (2022) Pandemic preparedness and response: a foldable tent to safely remove contaminated dental aerosols-clinical study and patient experience. Appl Sci-Basel 12(15):Article ID. 7409
Sharma A et al (2022) Improved interobserver reliability in diagnosing and staging lesions of COVID-19 between radiologist and emergency medicine physicians after an online course. Cureus J Med Sci 14(9):e29216
Islam KR et al (2022) Prognostic model of ICU admission risk in patients with COVID-19 infection using machine learning. Diagnostics 12(9):Article ID. 2144
El-kenawy ESM et al (2020) Novel feature selection and voting classifier algorithms for COVID-19 classification in CT images. IEEE Access 8:179317–179335
Ni QQ et al (2020) A deep learning approach to characterize 2019 coronavirus disease (COVID-19) pneumonia in chest CT images. Eur Radiol 30:6517–6527
Wang LD et al (2020) COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci Rep 10(1):Article ID. 19549
Hou S (2022) COVID-19 detection via a 6-layer deep convolutional neural network. Comput Model Eng Sci 130(2):855–869
Wang J-J (2022) Covid-19 detection by wavelet entropy and genetic algorithm. Intell Comput Theories Appl 13394:588–599
Wang W (2022) Covid-19 Detection by Wavelet Entropy and Cat Swarm Optimization. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. 415, p 479-487
Pi P (2021) Gray level co-occurrence matrix and Schmitt neural network for Covid-19 diagnosis. EAI Endorsed Trans e-Learning 7(22):e3
Gafoor SA et al (2022) Deep learning model for detection of COVID-19 utilizing the chest X-ray images. Cogent Eng 9(1):Article ID. 2079221
Montero-Contreras D et al (2021) COVIUAM: a mobile app to get information about COVID-19 cases. In International Conference on Computational Science and Computational Intelligence (CSCI). Las Vegas, p 1223-1228
Tsinaraki C et al (2021) Mobile apps to fight the COVID-19 crisis. Data 6(10):Article ID. 106
Denis F et al (2021) A self-assessment web-based app to assess trends of the COVID-19 pandemic in france: observational study. J Med Internet Res 23(3):Article ID. e26182
Kinori SGF et al (2022) A web-based app for emotional management during the COVID-19 pandemic: platform development and retrospective analysis of its use throughout two waves of the outbreak in Spain. JMIR Form Res 6(3):Article ID. e27402
Smith J et al (2022) covidscreen: a web app and R package for assessing asymptomatic COVID-19 testing strategies. BMC Public Health 22(1):Article ID. 1361
Wu X (2020) Diagnosis of COVID-19 by wavelet renyi entropy and three-segment biogeography-based optimization. Int J Comput Intell Syst 13(1):1332–1344
Zhang YD (2022) A seven-layer convolutional neural network for chest CT-based COVID-19 diagnosis using stochastic pooling. IEEE Sens J 22(18):17573–17582
Li HX et al (2022) Evaluation of microvascular invasion of hepatocellular carcinoma using whole-lesion histogram analysis with the stretched-exponential diffusion model. Br J Radiol 95(1132):Article ID. 20210631
Zhou JC et al (2022) Underwater image enhancement via two-level wavelet decomposition maximum brightness color restoration and edge refinement histogram stretching. Opt Express 30(10):17290–17306
Mohammadpour P et al (2022) Vegetation mapping with random forest using sentinel 2 and GLCM texture feature-a case study for Lousa Region, Portugal. Remote Sens 14(18):Article ID. 4585
Patil SA et al (2010) Chest X-ray features extraction for lung cancer classification. J Sci Ind Res 69(4):271–277
Kim YJ (2022) Machine learning model based on radiomic features for differentiation between COVID-19 and pneumonia on chest x-ray. Sensors 22(17):Article ID. 6709
Srivastava D et al (2020) Pattern-based image retrieval using GLCM. Neural Comput Appl 32(15):10819–10832
Zhang Y-D (2022) Secondary pulmonary tuberculosis recognition by 4-direction varying-distance GLCM and fuzzy SVM. Mobile Netw Appl. https://doi.org/10.1007/s11036-021-01901-7
Kshirsagar PR et al (2022) Accrual and dismemberment of brain tumours using fuzzy interface and grey textures for image disproportion. Comput Intell Neurosci 2022:Article ID. 2609387
Hussain L et al (2022) Bayesian dynamic profiling and optimization of important ranked energy from gray level co-occurrence (GLCM) features for empirical analysis of brain MRI. Sci Rep 12(1):Article.ID. 15389
Tummalapalli S et al (2022) Detection of web service anti-patterns using weighted extreme learning machine. Comput Stand Interfaces 82:Article ID. 103621
Jegan R et al (2022) MFCC and texture descriptors based stuttering dysfluencies classification using extreme learning machine. Int J Adv Comput Sci Appl 13(8):612–619
Pandey AK et al (2022) Software fault classification using extreme learning machine: a cognitive approach. Evol Intel 15(4):2261–2268
Vasquez-Coronel JA et al (2022) Training of an extreme learning machine autoencoder based on an iterative shrinkage-thresholding optimization algorithm. Appl Sci-Basel 12(18):Article ID. 9021
Demidova LA et al (2022) Classification of program texts represented as Markov chains with biology-inspired algorithms-enhanced extreme learning machines. Algorithms 15(9):Article ID. 329
Moghadam RG et al (2022) Evaluation of discharge coefficient of triangular side orifices by using regularized extreme learning machine. Appl Water Sci 12(7):Article ID. 145
Freire R et al (2022) New predictive resting metabolic rate equations for high-level athletes: a cross-validation study. Med Sci Sports Exerc 54(8):1335–1345
Nguyen D et al (2022) Ensemble learning using traditional machine learning and deep neural network for diagnosis of Alzheimer’s disease. Ibro Neurosci Rep 13:255–263
Wang S-H (2021) ADVIAN: Alzheimer’s disease VGG-inspired attention network based on convolutional block attention module and multiple way data augmentation. Front Aging Neurosci 13:Article ID. 687456
Deepika KK et al (2022) Comparison of principal-component-analysis-based extreme learning machine models for boiler output forecasting. Appl Sci-Basel 12(15):Article ID. 7671
Shinohara I et al (2022) Using deep learning for ultrasound images to diagnose carpal tunnel syndrome with high accuracy. Ultrasound Med Biol 48(10):2052–2059
Perumal K et al (2022) Dynamic resource provisioning and secured file sharing using virtualization in cloud azure. J Cloud Comput-Adv Syst Appl 11(1):Article ID. 46
Funding
This paper is partially supported by National Natural Science Foundation of China (62276092); Key Science and Technology Program of Henan Province, China (212102310084); Key Scientific Research Projects of Colleges and Universities in Henan Province, China (22A520027); Medical Research Council Confidence in Concept Award, UK (MC_PC_17171), Royal Society International Exchanges Cost Share Award, UK (RP202G0230), Hope Foundation for Cancer Research, UK (RM60G0680), Global Challenges Research Fund (GCRF), UK (P202PF11), Sino-UK Industrial Fund, UK (RP202G0289), British Heart Foundation Accelerator Award, UK (AA/18/3/34220), LIAS Pioneering Partnerships award, UK (P202ED10), and Data Science Enhancement Fund, UK (P202RE237).
Author information
Authors and Affiliations
Contributions
Yu-Dong Zhang: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Writing—Original Draft, Visualization, Supervision, Project administration, Funding acquisition.
Vishnuvarthanan Govindaraj: Conceptualization, Software, Formal analysis, Writing—Review & Editing, Project administration.
Ziquan Zhu: Conceptualization, Methodology, Validation, Investigation, Data Curation, Writing—Review & Editing.
Corresponding authors
Ethics declarations
Ethics approval
Ethical approval is exempted since we use open-access datasets.
Competing interests
There are no conflicts of interest regarding the submission of this paper.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Zhang, YD., Govindaraj, V. & Zhu, Z. FECNet: a Neural Network and a Mobile App for COVID-19 Recognition. Mobile Netw Appl (2023). https://doi.org/10.1007/s11036-023-02140-8