1 Introduction

COVID-19, the disease caused by SARS-CoV-2, was formally declared a pandemic by the World Health Organization (WHO) [1, 2] in March 2020. SARS-CoV-2 is highly contagious and can lead to deadly acute respiratory distress syndrome (ARDS). Because only a few specific COVID-19 vaccines are available, most unvaccinated persons remain susceptible to infection. Early diagnosis and isolation of infected individuals by any legal means is one of the most effective strategies to prevent the spread of viral infection among healthy people. Further, an X-ray or CT scan of the patient’s chest is one of the most effective ways to detect this virus. In addition, lung inflammation can be dangerous to people’s health. The rising number of infected persons in the community demands more effective therapies as well as a cost-effective process based on the primary diagnosis. The ability to recognize infected tissue quickly and precisely is critical for optimal patient treatment and survival [3,4,5,6].

CT provides a pathophysiological guide, which may aid in the diagnosis and monitoring of various disease stages. It has developed into a viable diagnostic tool for managing COVID-19-related lung infection in medical practice [7]. According to early studies, chest CT provides high sensitivity for detecting COVID-19 lung disease. According to [8], several organizations have demonstrated the ability to diagnose using CAD systems with an accuracy of up to 95%. Nowadays, medical imaging is exploited for a variety of disease diagnostics. Medical imaging technologies may also be employed as a key pathological tool that helps identify possible diseases.

On the other hand, digital images are commonly processed using computer-based image processing. The intensity of each image element in a digital image is recorded as a numerical integer. In general, the purpose of image processing is to transform the source image into a more informative image by mathematically manipulating these recorded integers. In practice, this is performed by subjecting the source image to appropriate mathematical functions and saving the result of the calculation as a new image. The mathematical mechanisms employed in image processing are almost limitless, but most can be classified into one of four major functions: thematic classification, image restoration, image segmentation, and image enhancement. Among these, image enhancement mechanisms [9] try to improve the detection of patterns or objects in an image. The authors of [10] suggested an image enhancement mechanism and analysis for satellite images. Ram et al. [11] proposed polarization-based spatial filtering for an edge enhancement approach utilizing an S-waveplate. Further, enhancement and classification based on brain MRI images have been carried out in [12]. Zhong et al. [13] recommended an image enhancement technique for black-and-white images based on wavelet analysis and pseudo-color image processing. The author of [14] offered a contrast enhancement mechanism to improve image contrast, descriptive ability, and image appearance by increasing the gray-level range. Enhanced images are usually easier to interpret than the source images.

Image segmentation refers to the division of a digital image into several parts and is a simple yet significant tool in digital image processing. The main purpose of segmentation is to transform the image into more meaningful parts, separating objects from the background and locating image edges. Important applications in this field include image denoising, face detection, video surveillance, fingerprint recognition, iris recognition, machine vision, content-based image retrieval, brake-light detection, locating objects (roads, forests, and crops) in satellite images [15], and, in particular, medical imaging [16]. There are several segmentation methods and procedures in the literature, including edge detection, clustering, thresholding, region-growing methods, dual clustering, histogram-based methods, and watershed transformation. Each of these methods is based on a particular methodology for partitioning regions of the image.

Thresholding is a fascinating object separation mechanism that is also the most widely used, well-known, and dependable technique for image segmentation. This mechanism converts a (possibly noisy) grey image into a binary image using a threshold value, and this threshold value serves as the key feature of the mechanism. Over the past few decades, the choice of threshold has been based on certain statistical characteristics [17, 18], such as the minimum-error approach, entropy approach, moment-based algorithms, and class-variance systems. Class-variance systems, however, suffer from high computational time and cost. Furthermore, some thresholding approaches fail for unimodal distributions and are incapable of determining the threshold for imprecise data in an image; these issues are addressed by non-linear thresholds based on (multi-dimensional) fuzzy rules. Vague or imprecise image objects can be separated using the fuzzy set (FS) theory introduced by Zadeh [19]. The authors of [20] suggested a scheme to extract the image object based on the FS approach instead of the crisp set. As a result, the entire image is represented as an FS, and each of its image elements (grey pixels) has a membership value. Under these circumstances, the membership function plays a vital role in the separation of the image object, and it is defined in terms of the unique characteristics of the image. As evidenced in [21, 22], there is a rich literature on FS-based image thresholding techniques. Later, Atanassov [23] extended the FS to the Intuitionistic Fuzzy Set (IFS), with degrees of membership and non-membership for each element. Moreover, Yager [24, 25] proposed the Pythagorean Fuzzy Set (PFS), an extension of the IFS in which each element is again represented by a pair of membership and non-membership degrees.
Image segmentation employs a variety of membership and non-membership functions [21, 26,27,28], including the Gamma, triangular, Sugeno, and Yager functions.

Notably, in these references, the primary goal is to achieve image segmentation by optimizing the threshold with a fuzzy measure, after calculating an appropriate membership function of the original image. Different fuzzy threshold selection measures, such as the entropy measure [22], the similarity score function [29], and the divergence measure [30], have recently flourished, and these solve the problem of separation in imprecise images. Thresholding based on a similarity measure is an appealing mechanism that is employed as an elementary tool to determine the threshold in an image [29, 31]. PFS-based similarity, distance, and entropy measures are discussed in [32,33,34,35]. The main motivation for using similarity-based thresholding is that a particular object is associated with the same grey levels in the image’s pixels. Thresholding mechanisms like these are used to classify the foreground (Fg) and background (Bg) into different groups.

Inspired by the previous conceptions, this article demonstrates image enhancement based on PFS features and the segmentation of two-dimensional biomedical images, namely lungs affected by the COVID-19 virus. The proposed scheme is divided into two steps: image enhancement and segmentation. By maximizing the Pythagorean fuzzy (PFS) entropy, the aforesaid clinical images are properly enhanced. The enhanced images are then employed in the extraction process, which determines the thresholds that separate the object from the background. Furthermore, the proposed scheme and other object extraction schemes, namely Method1 [4], Method2 [5], Method3 [6], and Method4 [7], are compared with the aid of quality measurement factors. According to the results of this analysis, the proposed PFS-based object extraction technique performs admirably in terms of segmentation quality and factor values.

The following is the overall framework for this study. Section 2 investigates the theoretical background, namely PFS-based image enhancement and segmentation. In Sect. 3, the objective analysis is discussed. Section 4 presents the experimental results and analysis. Finally, Sect. 5 gives the concluding remarks.

2 Theoretical background

2.1 Pythagorean fuzzy set

The Pythagorean fuzzy theory, a fascinating novel system revealed by Yager [24], is a formidable scheme that has recently become attractive. Pythagorean Fuzzy Theory (PFT) extends Fuzzy Set Theory (FST). In PFT, situations are categorized and explored under three factors: membership, non-membership, and indeterminacy. The PFS is a beneficial device for handling indeterminacy and has been employed in image processing applications such as edge detection and segmentation.

2.2 Pythagorean fuzzy image (PFI)

The source RGB image is converted to a grayscale image, and the converted image is then reconfigured as a PFS, which holds the factors membership ’\(\mu\)’, indeterminacy ’\(\pi\)’, and non-membership ’\(\nu\)’. In the grayscale domain, the foreground (Fg) belongs to \(\mu\), the background (Bg) to \(\nu\), and ambiguities or edges (Ed) to \(\pi\). The Fg, Bg, and Ed regions are then obtained by evaluating the functions \(\mu\), \(\nu\), and \(\pi\). The foreground division is carried out in the final stage.

Fig. 1

Schematic representation of the proposed image segmentation process

2.3 Image enhancement

First, the PFI is formed as a PFS built from an IFS. The essential step in constructing the PFI is the formulation of the membership and non-membership functions of the image. To begin, the grayscale image \({\mathbb {A}}\) is fuzzified by the following formula:

$$\begin{aligned} \mu _{{\mathbb {A}}}({gl})=\frac{gl(r, c)-gl_{min}}{gl_{{max}}-{gl}_{{min}}} \end{aligned}$$
(1)

Here, gl(r, c) is the gray picture element at the location (r, c). The notations \({gl}_{{min}}\) and \({gl}_{{max}}\) are the minimum and maximum intensities of the image \({\mathbb {A}}\).
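Equation (1) is a simple min-max normalization; a minimal NumPy sketch is given below (the function name `fuzzify` is illustrative, not from the paper):

```python
import numpy as np

def fuzzify(gl):
    """Eq. (1): min-max fuzzification of a grayscale image.

    Maps each gray level gl(r, c) into the unit interval [0, 1].
    Assumes the image is not constant (gl_max > gl_min)."""
    gl = gl.astype(np.float64)
    gl_min, gl_max = gl.min(), gl.max()
    return (gl - gl_min) / (gl_max - gl_min)
```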

2.3.1 \(\mu\), \(\nu\), and \(\pi\) degrees estimation

Based on the IFS, the degree of membership is estimated by applying the following generator [14]:

$$\begin{aligned} \mu _{{\mathbb {A}}}^{{PFS}}({gl}({r}, {c}); \lambda )=1-\frac{1-\mu _{{\mathbb {A}}}({gl})}{1+(e^{\lambda }-1)\mu _{{\mathbb {A}}}({gl})}, \lambda > 0. \end{aligned}$$
(2)

Then, applying the fuzzy negation \(\Psi ({x})=\frac{1-{x}}{1+(e^{\lambda }-1){x}}\), \(\lambda > 0\), the degree of non-membership in the PFI is estimated as:

$$\begin{aligned} \begin{aligned} \nu _{{\mathbb {A}}}^{{{PFS}}}({gl}({r}, {c}); \lambda )&=\Psi (\mu _{{\mathbb {A}}}^{{PFS}}({gl}({r}, {c}); \lambda ))\\ \nu _{{\mathbb {A}}}^{{PFS}}({gl}({r}, {c}); \lambda )&=\frac{1-\mu _{{\mathbb {A}}}^{{PFS}}({gl}({r}, {c}); \lambda )}{1+(e^{\lambda }-1)\mu _{{\mathbb {A}}}^{{PFS}}({gl}({r}, {c}); \lambda )}, \lambda > 0 \end{aligned} \end{aligned}$$
(3)

Finally, the degree of indeterminacy in the PFI is estimated as

$$\begin{aligned}&\pi _{{\mathbb {A}}}^{{PFS}}({gl}({r}, {c}); \lambda )\nonumber \\&\quad =\sqrt{1-(\mu _{{\mathbb {A}}}^{{PFS}}({gl}({r}, {c}); \lambda ))^2-(\nu _{{\mathbb {A}}}^{{PFS}}({gl}({r}, {c}); \lambda ))^2} \end{aligned}$$
(4)
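Equations (2)–(4) can be sketched compactly in NumPy. The sketch assumes, consistent with the fuzzy negation \(\Psi\) defined above, that the factor \(e^{\lambda }-1\) appears in both the membership and non-membership generators; the function name is illustrative:

```python
import numpy as np

def pfs_degrees(mu, lam):
    """Eqs. (2)-(4): membership, non-membership, and hesitancy of the PFI.

    mu  : fuzzified image in [0, 1] (Eq. 1)
    lam : positive parameter lambda
    """
    k = np.exp(lam) - 1.0
    mu_p = 1.0 - (1.0 - mu) / (1.0 + k * mu)      # Eq. (2)
    nu_p = (1.0 - mu_p) / (1.0 + k * mu_p)        # Eq. (3): fuzzy negation of mu_p
    # Eq. (4); clip guards against tiny negative values from rounding
    pi_p = np.sqrt(np.clip(1.0 - mu_p**2 - nu_p**2, 0.0, None))
    return mu_p, nu_p, pi_p
```

By construction, \(\mu ^2+\nu ^2+\pi ^2=1\) at every pixel, which is the defining constraint of a Pythagorean fuzzy set.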

2.3.2 Entropy

Entropy plays a significant role in image processing. De Luca and Termini [36] first recommended a non-probabilistic entropy in the FS environment. Moreover, the authors of [35] offered numerous entropy measures employing PFT. In this study, the Pythagorean Fuzzy Entropy (PFE) is implemented, and its mathematical formula is as follows:

$$\begin{aligned} {PFE}({\mathbb {A}};\lambda )=\frac{1}{{\mathbb {R}} \times {\mathbb {C}}} \sum _{{r}=0}^{{\mathbb {R}}-1} \sum _{{c}=0}^{{\mathbb {C}}-1} \frac{(\pi _{{\mathbb {A}}}^{{PFS}}({gl}({r},{c});\lambda ))^{2}+1-\left| (\mu _{{\mathbb {A}}}^{{PFS}}({gl}({r},{c});\lambda ))^{2}-(\nu _{{\mathbb {A}}}^{{PFS}}({gl}({r},{c});\lambda ))^{2}\right| }{(\pi _{{\mathbb {A}}}^{{PFS}}({gl}({r},{c});\lambda ))^{2}+1+\left| (\mu _{{\mathbb {A}}}^{{PFS}}({gl}({r},{c});\lambda ))^{2}-(\nu _{{\mathbb {A}}}^{{PFS}}({gl}({r},{c});\lambda ))^{2}\right| } \end{aligned}$$
(5)

PFE is measured using equation (5) for each \(\lambda\) value. The \(\lambda\) value at which PFE attains its maximum is considered the optimum value, which is written as:

$$\begin{aligned} \lambda _{{opt}}= \mathop {\text{arg max}}\limits _{\lambda } ({PFE}({\mathbb {A}}; \lambda )) \end{aligned}$$
(6)

The measured value \(\lambda _{{opt}}\) is then substituted into equation (2), which takes the form:

$$\begin{aligned} \mu _{{\mathbb {A}}}^{{PFS}}({gl})=1-\frac{1-\mu _{ {\mathbb {A}}}({gl})}{1+(e^{\lambda _{{opt}}}-1)\mu _{{\mathbb {A}}}({gl})} \end{aligned}$$
(7)

Further, the Pythagorean fuzzy image is constructed.
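The \(\lambda\)-selection loop of equations (5)–(6) can be sketched as follows. The grid of candidate \(\lambda\) values, the factor \(e^{\lambda }-1\) in the non-membership generator, and all function names are assumptions for illustration:

```python
import numpy as np

def pfs_degrees(mu, lam):
    """Eqs. (2)-(4): PFS membership, non-membership, hesitancy."""
    k = np.exp(lam) - 1.0
    mu_p = 1.0 - (1.0 - mu) / (1.0 + k * mu)
    nu_p = (1.0 - mu_p) / (1.0 + k * mu_p)
    pi_p = np.sqrt(np.clip(1.0 - mu_p**2 - nu_p**2, 0.0, None))
    return mu_p, nu_p, pi_p

def pfe(mu, lam):
    """Eq. (5): Pythagorean fuzzy entropy of the fuzzified image mu."""
    mu_p, nu_p, pi_p = pfs_degrees(mu, lam)
    d = np.abs(mu_p**2 - nu_p**2)
    return np.mean((pi_p**2 + 1.0 - d) / (pi_p**2 + 1.0 + d))

def lambda_opt(mu, lams):
    """Eq. (6): lambda at which PFE is maximal (simple grid search)."""
    return max(lams, key=lambda lam: pfe(mu, lam))
```

The chosen \(\lambda _{opt}\) is then reused in equation (7) to build the final Pythagorean fuzzy image.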

2.3.3 Enhancement

The PFI is utilized in the image enhancement process, whose mathematical form is written as:

$$\begin{aligned} {\mathbb {A}}^{{enh}}({gl}({r}, {c}))=\left\{ \begin{array}{ll} 2[\mu _{{\mathbb {A}}}^{{PFS}}({gl}({r}, {c}))]^{2}, &{} \text {if } \mu _{{\mathbb {A}}}^{{PFS}}({gl}({r}, {c})) \le 0.5 \\ 1-2[1-\mu _{{\mathbb {A}}}^{{PFS}}({gl}({r}, {c}))]^{2}, &{} \text {if } 0.5 < \mu _{{\mathbb {A}}}^{{PFS}}({gl}({r}, {c})) \le 1 \end{array}\right. \end{aligned}$$
(8)

Equation (8) produces the contrast-enhanced image.
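Equation (8) is the classical intensification operator applied to the PFS membership plane; a minimal sketch (function name illustrative):

```python
import numpy as np

def enhance(mu_p):
    """Eq. (8): intensification-style contrast enhancement of the
    PFS membership plane mu_p (values in [0, 1])."""
    return np.where(mu_p <= 0.5,
                    2.0 * mu_p**2,
                    1.0 - 2.0 * (1.0 - mu_p)**2)
```

Values below 0.5 are pushed toward 0 and values above 0.5 toward 1, stretching contrast around the crossover point.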

2.4 Image segmentation

In general, an image may contain unimodal, bimodal, trimodal, or multimodal regions according to its intensity range. Bimodal images need a single threshold for segmentation, while trimodal and multimodal images need two or more thresholds. Here, multimodal images are taken into account. The segmentation algorithm is shown in Fig. 1. In this figure, the best threshold values are obtained using the PFS measure between the manually thresholded image and the enhanced thresholded image. The manually thresholded image is the image in which each pixel strictly belongs to its respective region (object or background); each such pixel has a membership degree of 1 and non-membership and hesitation degrees of 0. The threshold selection algorithm is illustrated by the following steps.

Step 1 Search for the optimal threshold value; this mechanism requires only a single for-loop with t varying from 0 to 255.

Step 2 Consider the enhanced image \({\mathbb {A}}^{{enh}}\) of dimension \({\mathbb {R}} \times {\mathbb {C}}\), and let the symbol ‘\({\mathscr {L}}\)’ indicate the gray levels of the image \({\mathbb {A}}^{{enh}}\), that is, \({\mathscr {L}}=\{0, 1,...,{\mathbb {L}}-1\}\), where \({\mathbb {L}}\) specifies the maximum gray level of \({\mathbb {A}}^{{enh}}\). \({\mathbb {N}}({gl})\) denotes the frequency of the gray level ‘gl’.

Step 3 The pixels of the enhanced image \({\mathbb {A}}^{{enh}}\) are split into two classes, namely foreground (Fg) and background (Bg). The notation ‘Bg’ denotes the set of pixels with intensity values \(\{0, 1,..., {t}\}\), and ‘Fg’ denotes the set of pixels with intensity values \(\{{t}+1, ..., {\mathbb {L}}-1\}\), where ‘t’ specifies the threshold value.

Step 4 The means of the Bg and Fg classes are expressed as follows:

$$\begin{aligned} {m}_1=\frac{\sum \nolimits ^{{t}}_{{gl}=0} {gl} {\mathbb {N}} ({gl})}{\sum \nolimits ^{{t}}_{{gl}=0} {\mathbb {N}} ({gl})} \text{ and } {m}_2 =\frac{\sum \nolimits ^{{\mathbb {L}}-1}_{{gl}={t}+1}{gl} {\mathbb {N}}({gl})}{\sum \nolimits ^{{\mathbb {L}}-1}_{{gl}={t}+1} {\mathbb {N}}({gl})} \end{aligned}$$
(9)

Here \({m}_1\) and \({m}_2\), respectively, denote the average values of Bg and Fg classes.
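Equation (9) computed from a 256-bin gray-level histogram can be sketched briefly (function name illustrative; assumes both classes are non-empty):

```python
import numpy as np

def class_means(hist, t):
    """Eq. (9): mean gray levels m1 of the background {0,...,t} and
    m2 of the foreground {t+1,...,L-1}, from histogram counts N(gl)."""
    gl = np.arange(hist.size, dtype=np.float64)
    m1 = np.sum(gl[:t + 1] * hist[:t + 1]) / np.sum(hist[:t + 1])
    m2 = np.sum(gl[t + 1:] * hist[t + 1:]) / np.sum(hist[t + 1:])
    return m1, m2
```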

Step 5 This paper adopts the Gamma distribution for the experiments because it suits the symmetric nature of the images. Each image element (r, c) of the enhanced image \({\mathbb {A}}^{{enh}}\) has its membership value determined by applying the Gamma distribution [21] as follows:

$$\begin{aligned} \mu _{{\mathbb {A}}}^{{enh}}({gl}({r},{c}))=\left\{ \begin{array}{ll} \exp (-c_{1}^{*}\cdot |{gl}({r},{c})-{m}_{1}|) &{} \text {if } {gl}({r},{c}) \le {t} \\ \exp (-c_{1}^{*}\cdot |{gl}({r},{c})-{m}_{2}|) &{} \text {if } {gl}({r},{c}) > {t} \end{array}\right. \end{aligned}$$
(10)

Here, \({m}_1\) and \({m}_2\) are the mean intensities of the two regions to which the \(({r}, {c})^{th}\) pixel may belong, and the constant \(c_1^*=\frac{1}{\left( \text{ max }({gl}) - \text{ min } ( {gl}) \right) }\). For multilevel thresholding, since there are n regions in the image, \(n-1\) threshold values \((t_1, t_2,..., t_{n-1})\) are chosen such that \(0 \le t_1< t_2<...<t_{n-1} \le {\mathbb {L}}-1\), where \({\mathbb {L}}\) is the image’s maximum grey level. Following the notion of bilevel thresholding, in the case of multilevel thresholding the membership function takes the form:

$$\begin{aligned} \mu _{{\mathbb {A}}}^{{enh}}({gl}({r},{c}))=\left\{ \begin{array}{ll} \exp (-c_{2}^{*}\cdot |{gl}({r},{c})-{m}_{1}|) &{} \text {if } {gl}({r},{c}) \le t_{1} \\ \exp (-c_{2}^{*}\cdot |{gl}({r},{c})-{m}_{2}|) &{} \text {if } t_{1} < {gl}({r},{c}) \le t_{2} \\ \quad \vdots &{} \quad \vdots \\ \exp (-c_{2}^{*}\cdot |{gl}({r},{c})-{m}_{n}|) &{} \text {if } {gl}({r},{c}) > t_{n-1} \end{array}\right. \end{aligned}$$
(11)

Here, the average grey levels of the n regions partitioned by the thresholds \(t_1, t_2, ..., t_{n-1}\) are \({m}_1\), \({m}_2\), ..., \({m}_n\), and the constant \(c_2^*\) is defined like \(c_1^*\) in equation (10).

Step 6 Each image element (r, c) of the enhanced image \({\mathbb {A}}^{{enh}}\) has its non-membership value computed by employing Sugeno’s generator (Sugeno [26]) as follows:

$$\begin{aligned} \nu _{\mathbb {A}}^{{enh}}({gl}({{r, c}}))=\frac{1-\mu _{\mathbb {A}}^{{enh}}({gl}({{r, c}}))}{1+\zeta .\mu _{\mathbb {A}}^{{enh}}({gl}({r, c}))}; \zeta >0. \end{aligned}$$
(12)

Step 7 Each image element (r, c) of the enhanced image \({\mathbb {A}}^{{enh}}\) has its indeterminacy value computed from equations (10) and (12) as follows:

$$\begin{aligned} \pi _{\mathbb {A}}^{{enh}}({gl}({{r, c}}))=\sqrt{1-[(\mu _{\mathbb {A}}^{{enh}}({gl}({{r, c}})))^2+(\nu _{\mathbb {A}}^{{enh}}({gl}({{r, c}})))^2]} \end{aligned}$$
(13)

where \(\pi _{\mathbb {A}}^{{enh}}({gl}({{r, c}}))\) indicates the degree of indeterminacy of the image \({\mathbb {A}}^{{enh}}\).

Step 8 This paper utilizes the measure [32] provided below to determine the degree of similarity between the manually thresholded image and the enhanced thresholded image with threshold t.

$$\begin{aligned} {\mathbb {S}}({\mathbb {A}},{\mathbb {B}})=1-\left\{ \frac{1}{{\mathbb {R}} \times {\mathbb {C}}}\sum _{{r}=0}^{{\mathbb {R}}-1}\sum _{{c}=0}^{{\mathbb {C}}-1}\left[ \frac{1}{2}\max \left\{ \begin{array}{l} \left| (\mu _{{\mathbb {A}}}^{{enh}}({gl}({r},{c})))^2-(\mu _{{\mathbb {B}}}^{{enh}}({gl}({r},{c})))^2\right| ^{\tau },\\ \left| (\nu _{{\mathbb {A}}}^{{enh}}({gl}({r},{c})))^2-(\nu _{{\mathbb {B}}}^{{enh}}({gl}({r},{c})))^2\right| ^{\tau },\\ \left| (\pi _{{\mathbb {A}}}^{{enh}}({gl}({r},{c})))^2-(\pi _{{\mathbb {B}}}^{{enh}}({gl}({r},{c})))^2\right| ^{\tau } \end{array}\right\} +\frac{1}{4}\left\{ \begin{array}{l} \left| (\mu _{{\mathbb {A}}}^{{enh}}({gl}({r},{c})))^2-(\mu _{{\mathbb {B}}}^{{enh}}({gl}({r},{c})))^2\right| ^{\tau }+\\ \left| (\nu _{{\mathbb {A}}}^{{enh}}({gl}({r},{c})))^2-(\nu _{{\mathbb {B}}}^{{enh}}({gl}({r},{c})))^2\right| ^{\tau }+\\ \left| (\pi _{{\mathbb {A}}}^{{enh}}({gl}({r},{c})))^2-(\pi _{{\mathbb {B}}}^{{enh}}({gl}({r},{c})))^2\right| ^{\tau } \end{array}\right\} \right] \right\} ^{\frac{1}{\tau }},\ \tau \ge 1 \end{aligned}$$
(14)

Now, if \({{\mathbb {A}}}^{{enh}}\) is the enhanced thresholded image and \({{\mathbb {B}}}^{{enh}}\) is the manually thresholded image, then \(\mu _{{\mathbb {B}}}^{{enh}}({gl(r, c)}) = 1\), \(\nu _{{\mathbb {B}}}^{{enh}}({gl(r, c)}) = 0\) and \(\pi _{{\mathbb {B}}}^{{enh}}({gl(r, c)}) = 0\). Hence, equation (14) reduces to

$$\begin{aligned} {\mathbb {S}}({\mathbb {A}},{\mathbb {B}})=1-\left\{ \frac{1}{{\mathbb {R}} \times {\mathbb {C}}}\sum _{{r}=0}^{{\mathbb {R}}-1}\sum _{{c}=0}^{{\mathbb {C}}-1}\left[ \frac{1}{2}\max \left\{ \begin{array}{l} \left| (\mu _{{\mathbb {A}}}^{{enh}}({gl}({r},{c})))^2-1\right| ^{\tau },\\ \left| (\nu _{{\mathbb {A}}}^{{enh}}({gl}({r},{c})))^2\right| ^{\tau },\\ \left| (\pi _{{\mathbb {A}}}^{{enh}}({gl}({r},{c})))^2\right| ^{\tau } \end{array}\right\} +\frac{1}{4}\left\{ \begin{array}{l} \left| (\mu _{{\mathbb {A}}}^{{enh}}({gl}({r},{c})))^2-1\right| ^{\tau }+\\ \left| (\nu _{{\mathbb {A}}}^{{enh}}({gl}({r},{c})))^2\right| ^{\tau }+\\ \left| (\pi _{{\mathbb {A}}}^{{enh}}({gl}({r},{c})))^2\right| ^{\tau } \end{array}\right\} \right] \right\} ^{\frac{1}{\tau }},\ \tau \ge 1 \end{aligned}$$
(15)

Step 9 Compute max(\(\mathbb {S(A, B)}\)); the gray value at which this maximum occurs is taken as the optimal threshold t.

Step 10 Apply the obtained optimal threshold t to the enhanced image \({\mathbb {A}}^{{enh}}\) to produce the thresholded (segmented) image \({\mathbb {A}}^{{seg}}\).
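The steps above can be combined into a single search loop. The sketch below is for bilevel thresholding only, with fixed illustrative values for Sugeno's parameter \(\zeta\) and the exponent \(\tau\), and uses the reduced similarity of equation (15) against the ideal image; all names are assumptions:

```python
import numpy as np

def pfs_threshold(img, zeta=1.0, tau=2.0):
    """Bilevel threshold search (Steps 1-10), a sketch of the
    similarity-based selection. img is an enhanced 8-bit grayscale
    array with at least two distinct gray levels."""
    img = img.astype(np.float64)
    hist = np.bincount(img.astype(np.int64).ravel(), minlength=256)
    gl = np.arange(256, dtype=np.float64)
    c1 = 1.0 / (img.max() - img.min())               # constant c1* (Step 5)
    best_t, best_s = 0, -np.inf
    for t in range(1, 255):                          # Step 1: single loop over t
        n_bg, n_fg = hist[:t + 1].sum(), hist[t + 1:].sum()
        if n_bg == 0 or n_fg == 0:
            continue
        m1 = (gl[:t + 1] * hist[:t + 1]).sum() / n_bg   # Eq. (9)
        m2 = (gl[t + 1:] * hist[t + 1:]).sum() / n_fg
        mu = np.where(img <= t,
                      np.exp(-c1 * np.abs(img - m1)),   # Eq. (10)
                      np.exp(-c1 * np.abs(img - m2)))
        nu = (1.0 - mu) / (1.0 + zeta * mu)             # Eq. (12)
        pi2 = np.clip(1.0 - mu**2 - nu**2, 0.0, None)   # Eq. (13), squared
        # Eq. (15): similarity to the ideal image (mu = 1, nu = pi = 0)
        a, b, c = np.abs(mu**2 - 1.0)**tau, (nu**2)**tau, pi2**tau
        term = 0.5 * np.maximum(np.maximum(a, b), c) + 0.25 * (a + b + c)
        s = 1.0 - np.mean(term)**(1.0 / tau)
        if s > best_s:
            best_t, best_s = t, s                       # Step 9
    return best_t
```

The segmented image of Step 10 is then simply `img > best_t`.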

3 Objective analysis

Image quality evaluation measures are applied to assess the efficacy of the proposed scheme in comparison with other existing works.

3.1 Mean absolute error (MAE)

MAE measures the deviation between the thresholded and manually thresholded images; its mathematical form is given below:

$$\begin{aligned} {MAE}=\frac{1}{{\mathbb {R}} \times {\mathbb {C}}} \sum _{{r}=1}^{{\mathbb {R}}} \sum _{{c}=1}^{{\mathbb {C}}} \left| \begin{aligned} {\mathbb {A}}^{{seg}} ({gl(r, c)})-{\mathbb {B}}^{{seg}} ({gl(r, c)}) \end{aligned} \right| \end{aligned}$$
(16)

Here, \({\mathbb {B}}^{{seg}} ({gl(r, c)})\) and \({\mathbb {A}}^{{seg}} ({gl(r, c)})\) denote the manually thresholded image and the image thresholded by the proposed scheme at pixel gl(r, c). If equation (16) delivers a minimum deviation (zero or near zero), then the two images are highly similar; otherwise, the images are not identical.

3.2 Root-mean-square error (RMSE)

The RMSE metric quantifies the root-mean-square error between the images. If an image has eminent quality, the RMSE value should be near zero. RMSE is defined as follows:

$$\begin{aligned} {RMSE}=\sqrt{\frac{1}{{\mathbb {R}} \times {\mathbb {C}}}\sum _{{r}=1}^{{\mathbb {R}}} \sum _{{c}=1}^{{\mathbb {C}}} \left( \begin{aligned} {\mathbb {A}}^{{seg}} ({r}, {c})-{\mathbb {B}}^{{seg}}({r}, {c}) \end{aligned} \right) ^2} \end{aligned}$$
(17)

where the symbols \({\mathbb {R}}, {\mathbb {C}}\) denote the number of rows and columns of the images \({\mathbb {A}}^{{seg}}\) and \({\mathbb {B}}^{{seg}}\), respectively.

3.3 Correlation (CORR)

Another crucial factor assesses the correlation of two images such as \({\mathbb {A}}^{{seg}}\) and \({\mathbb {B}}^{{seg}}\). The factor CORR is defined by the following equation.

$$\begin{aligned} {CORR}=\frac{2 \sum _{{r}=1}^{{\mathbb {R}}} \sum _{{c}=1}^{{\mathbb {C}}} {\mathbb {A}}^{{seg}}({r}, {c}){\mathbb {B}}^{{seg}}({r}, {c})}{\sum _{{r}=1}^{{\mathbb {R}}} \sum _{{c}=1}^{{\mathbb {C}}} ({\mathbb {A}}^{{seg}}({r}, {c}))^2+\sum _{{r}=1}^{{\mathbb {R}}} \sum _{{c}=1}^{{\mathbb {C}}} ({\mathbb {B}}^{{seg}}({r}, {c}))^2} \end{aligned}$$
(18)

Larger values of CORR indicate superior thresholding results. If equation (18) delivers an outcome of 1, the images \({\mathbb {A}}^{{seg}}\) and \({\mathbb {B}}^{{seg}}\) are identical; if it returns a value of 0, the images are not identical.

3.4 Signal to noise ratio (SNR)

$$\begin{aligned}&{SNR}=10 \text{ log}_{10} \left[ \begin{aligned} \frac{\sum _{{r}=1}^{{\mathbb {R}}} \sum _{{c}=1}^{{\mathbb {C}}}({\mathbb {B}}^{{seg}}({r, c}))^2}{\sum _{{r}=1}^{{\mathbb {R}}} \sum _{{c}=1}^{{\mathbb {C}}} ({\mathbb {B}}^{{seg}}({r, c})-{\mathbb {A}}^{{seg}}({r, c}))^2} \end{aligned} \right] \end{aligned}$$
(19)

A large SNR value reveals an excellent outcome, indicating that the images \({\mathbb {A}}^{{seg}}\) and \({\mathbb {B}}^{{seg}}\) are close to identical.

3.5 Peak signal to noise ratio (PSNR)

$$\begin{aligned}&{PSNR}=10 \text{ log}_{10} \left[ \begin{aligned} \frac{255^2}{\frac{1}{\mathbb {RC}} \sum _{{r}=1}^{{\mathbb {R}}} \sum _{{c}=1}^{{\mathbb {C}}} ({\mathbb {B}}^{{seg}}({r, c})-{\mathbb {A}}^{{seg}}({r, c}))^2} \end{aligned} \right] \end{aligned}$$
(20)

A high PSNR value indicates superior performance, meaning the images \({\mathbb {A}}^{{seg}}\) and \({\mathbb {B}}^{{seg}}\) are nearly equivalent.
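The five factors of equations (16)–(20) can be computed together for a pair of segmented images; the sketch below assumes the two images differ in at least one pixel, so the SNR and PSNR denominators are non-zero (function name illustrative):

```python
import numpy as np

def segmentation_metrics(a, b):
    """MAE, RMSE, CORR, SNR, and PSNR (Eqs. 16-20) between a
    segmented image a and its manually thresholded reference b."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    diff = a - b
    mae = np.mean(np.abs(diff))                                   # Eq. (16)
    rmse = np.sqrt(np.mean(diff**2))                              # Eq. (17)
    corr = 2.0 * np.sum(a * b) / (np.sum(a**2) + np.sum(b**2))    # Eq. (18)
    snr = 10.0 * np.log10(np.sum(b**2) / np.sum(diff**2))         # Eq. (19)
    psnr = 10.0 * np.log10(255.0**2 / np.mean(diff**2))           # Eq. (20)
    return mae, rmse, corr, snr, psnr
```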

4 Experimental results and analysis

The capability of the proposed PFS-based thresholding scheme has been inspected on several images of lungs affected by COVID-19 (above 50% and below 50% infection). To exhibit the performance of the proposed scheme, it is compared with four methods, namely Method1 [4], Method2 [5], Method3 [6], and Method4 [7], using the quality measurement factors examined in the preceding section. The values of the quality measurement factors reveal that the proposed method produces greater efficiency than recent sophisticated methods. Besides, the addressed method signifies

Fig. 2

Source images: CT scan of lungs affected by COVID-19 [Above 50% ((1a)-(2j)) and Below 50% ((3a)–(4j))]

Fig. 3

Gray images: CT scan of lungs affected by COVID-19 [Above 50% ((1a)-(2j)) and Below 50% ((3a)–(4j))]

minimal error and high similarity, and is very convenient for real-time applications. Also, the proposed scheme would be highly beneficial in terms of image quality.

Fig. 4

Histogram: CT scan of lungs affected by COVID-19 [Above 50% ((1a)-(2j)) and Below 50% ((3a)–(4j))]

Fig. 5

Enhanced images: CT scan of lungs affected by COVID-19 [Above 50% ((1a)-(2j)) and Below 50% ((3a)–(4j))]

Initially, the dataset containing CT scans of lungs affected by COVID-19 can be found at the following link: https://github.com/UCSD-AI4H/COVID-CT. For this investigation, lungs more than 50% and less than 50% infected by COVID-19 were taken for the experimental analysis. Fig. 2 presents the source RGB images, including both the above-50% and below-50% infected lungs. The resolutions of these images are \(210 \times 150\), \(275 \times 193\), \(273 \times 192\), \(343 \times 188\), \(250 \times 190\), and so on. Further, the Red, Green, and Blue channels of the source images were remodeled into grayscale images, which are exhibited in Fig. 3 (1a)–(4j). Meanwhile, Fig. 4 (1a)–(2j) shows the histograms of the above-50% affected lungs, and the histograms of the remaining below-50% affected lungs are given in Fig. 4 (3a)–(4j). Later, the PFS-based enhancement scheme is applied to Fig. 3 (1a)–(4j), and the enhanced images are displayed in Fig. 5 (1a)–(4j).

Table 1 Threshold Values
Fig. 6

Segmentation results: Lungs affected by above 50% due to COVID-19

Fig. 7

Segmentation results: Lungs affected by below 50% due to COVID-19

In addition, the threshold values for the enhanced first dataset (Fig. 5 (1a)) of above-50% affected lungs are determined by employing the other object extraction schemes and the proposed scheme; these are shown in the first row and second to sixth columns of Table 1. Fig. 6 (1a)–(1e) then shows the object-separated first dataset (Fig. 5 (1a)) after applying the current sophisticated methods and the proposed PFS method, respectively. From Fig. 6 (1e), it is clear that the proposed method is much improved and cleanly separates the foreground of the first dataset. The object extraction errors are estimated by comparing the aforementioned image with the manually extracted object image. Two error rating factors, MAE and RMSE, are applied, and the estimated error values are portrayed in the first row and second to sixth columns of Tables 2 and 3, whereas the CORR, SNR, and PSNR values are presented in the first row and second to sixth columns of Tables 4, 5, and 6. Finally, the obtained MAE, RMSE, SNR, PSNR, and CORR metrics for the distinct methods are plotted as graphs, displayed in Fig. 8 (a)–(e).

On the other hand, the threshold values for the enhanced twenty-first dataset (Fig. 5 (3a)) of below-50% affected lungs are acquired by the several methods, and are provided in the first row and seventh to twelfth columns of Table 1. After implementing these methods, the foreground of the enhanced below-50% image is extracted, and the resulting foregrounds are presented in Fig. 7 (1a)–(1e). Using the resultant foregrounds, the quality measurement factors MAE, RMSE, SNR, PSNR, and CORR are quantified, and these measured values are arranged in the first row and seventh to twelfth columns of Tables 2, 3, 4, 5 and 6. Subsequently, the tabulated outcomes are depicted as plotted graphs, shown in Fig. 8 (a)–(e). From the mentioned rows and columns of Tables 2, 3, 4, 5, 6 and Fig. 8 (a)–(e), the proposed scheme establishes strong outcomes by generating smaller MAE and RMSE error values as well as larger SNR, PSNR, and CORR values.

Table 2 MAE Values
Table 3 RMSE Values
Table 4 CORR Values
Table 5 SNR Values

In the same manner, employing the maximum similarity principle [32], the enhanced images of all remaining above-50% affected lungs (Fig. 5 (1b)–(2j)) are thresholded, and these thresholds are listed in rows two to twenty and columns two to six of Table 1. By applying Method1 [4], Method2 [5], Method3 [6], and Method4 [7], the foregrounds of the enhanced images (Fig. 5 (1b)–(2j)) are separated, and these foregrounds are pictured in Fig. 6 [(2a)–(2d)]–[(20a)–(20d)]. Fig. 6 (2e)–(20e) illustrates the output images found after the foreground extraction procedure of the proposed scheme. The experimental outcomes of the four methods and the proposed method are analyzed, revealing that the proposed PFS method delivers preferable thresholded images compared with current state-of-the-art methods. Further, the aforementioned metric values of the current sophisticated object extraction methods along with the proposed PFS scheme are assessed and listed in rows two to twenty and columns two to six of Tables 2, 3, 4, 5 and 6. Moreover, the listed values are drawn as plotted graphs, exhibited in Fig. 8 (a)–(e). Tables 2, 3, 4, 5, 6 and Fig. 8 report that the foreground separation achieved by the proposed PFS scheme exceeds the other methods mentioned.

Fig. 8

Evaluation metrics

Likewise, the remaining images of below-50% affected lungs are enhanced, and the resultant images are given in Fig. 5 (3b)–(4j). The obtained images were then utilized for the extraction process, which determines the thresholds exhibited in rows two to twenty and columns seven to twelve of Table 1. Besides, the recent sophisticated techniques as well as the proposed one are used to segment the images, which are depicted in Fig. 7 [(2a)–(2e)]–[(20a)–(20e)]. After the segmentation, evaluation metrics were employed to compute the MAE, RMSE, CORR, SNR, and PSNR values, which are tabulated in rows two to twenty and columns seven to twelve of Tables 2, 3, 4, 5 and 6. The tabulated values are also plotted in graphical form, shown in Fig. 8 (a)–(e). Figs. 7, 8, and Tables 2, 3, 4, 5 and 6 demonstrate the performance of the addressed method.

Table 6 PSNR Values

In general, Tables 2 and 3 and Fig. 8 (a)–(b) exhibit the quality measurement factor values found after implementing the various segmentation schemes on enhanced images of lungs affected by COVID-19. It is noteworthy that, from the values in these tables, the proposed PFS scheme delivers minimal error values compared to the state-of-the-art methods. The maximum CORR values (Table 4) for the proposed scheme confirm that the object in the enhanced images (above and below 50%) is clearly separated. Although the first and second sophisticated methods hold high CORR values, the visual quality of the resulting images gained by the proposed scheme outperforms the other methods. Furthermore, it is obvious from the values in Tables 4, 5 and 6 that the proposed method yields higher SNR, CORR, and PSNR values compared to the current sophisticated methods. These margins over the state-of-the-art techniques indicate that the proposed PFS thresholding scheme is most worthy for all images of lungs infected by COVID-19.

5 Conclusion

Recent developments in image analysis for digital images, such as image enhancement and segmentation, have been surveyed within the framework of PFS and FS theory. This research paper deals with the design and analysis of a segmentation scheme based on PFS features for images of lungs infected by COVID-19. The recommended object segmentation scheme combines an image enhancement and a thresholding technique. By employing the PFS entropy, the considered images are enhanced and exhibit adequate quality. Besides, the quality measurement factors confirm significant improvements in image quality; the PFS-based segmentation scheme remains attractive and ensures image quality even if the image is blurry or noisy. Therefore, this work should be useful in efforts to discover more beneficial schemes for image segmentation analysis.