A novel self-learning weighted fuzzy local information clustering algorithm integrating local and non-local spatial information for noise image segmentation

Fuzzy clustering algorithm (FCM) can be directly used to segment images, it takes no account of the neighborhood information of the current pixel and does not have a robust segmentation noise suppression. Fuzzy Local Information C-means Clustering (FLICM) is a widely used robust segmentation algorithm, which combines spatial information with the membership degree of adjacent pixels. In order to further improve the robustness of FLICM algorithm, non-local information is embedded into FLICM algorithm and a fuzzy C-means clustering algorithm has local and non-local information (FLICMLNLI) is obtained. When calculating distance from pixel to cluster center, FLICMLNLI algorithm considers two distances from current pixel and its neighborhood pixels to cluster center. However, the algorithm gives the same weight to two different distances, which incorrectly magnifies the importance of neighborhood information in calculating the distance, resulting in unsatisfactory image segmentation effects and loss of image details. In order to solve this problem, we raise an improved self-learning weighted fuzzy algorithm, which directly obtains different weights in distance calculation through continuous iterative self-learning, then the distance metric with the weights obtained from self-learning is embedded in the objective function of the fuzzy clustering algorithm in order to improve the segmentation performance and robustness of the algorithm. A large number of experiments on different types of images show that the algorithm can not only suppress the noise but also retain the details in the image, the effect of segmenting complex noise images is better, and it provides better image segmentation results than the existing latest fuzzy clustering algorithms.


Introduction
With the rapid development of computer technology, digital image technology has spread to industrial inspection, environmental monitoring, military and space exploration and other multidisciplinary fields, making image processing technology attract the attention of many domestic and foreign scholars [1]. Image segmentation [2,3] is a key link in image processing technology, as well as the key to image understanding and machine vision. It can be used for pre-processing tasks such as scene analysis, image recognition, and object detection. The essence are a series of problems such as the loss of information in imaging, the uncertainty of visual perception gray level, and the fuzziness of the image itself. Therefore, the use of fuzzy theory to solve this series of problems in image segmentation has become a research trend. Among them, fuzzy clustering [24] combined with fuzzy theory [25,26] and clustering technology is a research hotspot. All in all, when extracting and detecting objects in an image, image segmentation technology usually becomes an indispensable step. Therefore, the in-depth study of image segmentation algorithms has important research significance. However, due to many reasons, the image will be disturbed by some noise during the process of acquisition and transmission, and the image itself has problems such as uncertainty and complexity, which greatly reduces the segmentation of the fuzzy clustering algorithm performance. Therefore, it is still of great theoretical significance and research value that how to reduce the noise of noisy images under different noise intensity conditions, and how to restore the detailed information of original images as much as possible while ensuring the robustness of clustering algorithm to noise.
The traditional Fuzzy Mean Clustering(FCM) [27,28] algorithm proposed by Dunn has been widely used in image segmentation. This algorithm has a good segmentation effect when segmenting images that are not affected by noise. FCM uses Euclidean distance and is not robust. It only considers the difference in grayscale between pixels when classifying, and does not consider neighboring pixels. Therefore, the algorithm is extremely sensitive to noise and outliers, and when segmenting an image with a low signal-to-noise ratio(SNR), the segmentation effect is not good. In recent years, many scholars have introduced spatial information on the basis of the original FCM algorithm, thereby improving the robustness of the FCM algorithm and obtaining better segmentation results. Some of the representative algorithms are as follows: Ahmed et al. [29] added the neighborhood information of the pixel to the objective function of the FCM algorithm and proposed a fuzzy mean clustering with spatial constraints(FCM s or BCFCM). Although the BCFCM algorithm can achieve better segmentation results, each iteration needs to calculate the neighborhood information of the pixel, which reduces the segmentation efficiency of the BCFCM algorithm. In response to this situation, Chen and Zhang [30] proposed FCM s1 and FCM s2. The algorithm replaces the neighborhood item with the neighborhood mean or median, which greatly reduces the calculation time. Szilagyi et al. [31] proposed an enhanced FCM(EnFCM), which generates filtered images in advance from the original image and its mean value in its neighborhood, and clustering is performed based on its gray histogram. Each of the above algorithms introduces a spatial parameter, which is manually specified according to the situation. However, it is difficult to determine the optimal parameters because the parameters depend heavily on noise, but the type and intensity of noise are unknown in advance. For this reason, Cai et al. [32] combined local spatial information and pixel gray features, introduced new factors to obtain new linear weighted sum images, and proposed a fast-generalized fuzzy C-means clustering algorithm(FGFCM). In addition, Krinidis and Chatzis [33] proposed fuzzy local information mean clustering(FLICM) with a certain adaptive ability, which integrates spatial information to construct local fuzzy factors to ensure noise sensitivity and preservation of details.
Up to now, there are still many scholars devoted to the study of how to better achieve image segmentation. Haiping Yu et al. [34] proposed a new region-based active contour model. It improves the inaccuracy of segmentation in the local-based model, which only considers the rough local information but does not consider the spatial relationship between the center pixel and its neighborhood. To further accurately segment medical images, they proposed a novel edge-based active contour model (ACM) for medical image segmentation [35]. In addition, in order to improve the image filtering by Gaussian filter, resulting in the loss of image edge gradient information, they also proposed a local region model based on adaptive bilateral filter for noise image segmentation [36]. Recently, Zhang Xiaofeng et al. [37] improved the FLICM algorithm and proposed an algorithm with local and non-local information(FLICMLNLI). The algorithm makes full use of non-local information with the help of self-similarity, while retaining the original information of the picture through back-projection, thereby improving the robustness of the algorithm. The problem with this algorithm is that when the current pixel information and the local information of the pixel appear in the formula for calculating the Euclidean distance, the two have the same weight. This calculation method erroneously magnifies the role of the local information of the pixel in calculating the distance. The problem caused by too much neighborhood information is that the denoising ability is enhanced, but the detailed information of the image is not properly preserved. Based on this algorithm, in view of the incompleteness of the existing fuzzy clustering algorithm [38][39][40][41][42][43], this paper further improves the existing clustering algorithm to enhance the anti-noise robustness of the original algorithm and segmentation accuracy. This paper introduces an iterative self-learning method to calculate the different weights of pixel information and neighborhood information, and solves the problem that the FLICMLNLI algorithm neglects to assign the same weight to the pixel information and the neighborhood information. The three innovations of the proposed robust self-learning weighted fuzzy clustering algorithm are as follows: (1) The improved algorithm combines the local and non-local information of pixels and adopts a more reasonable correlation calculation model to achieve a better image segmentation effect. (2) The improved algorithm can calculate the distance from the pixel to the cluster center more reasonably, because the original information of the pixel itself and the local information of the pixel must not be equally important. Through this improvement, the algorithm can eliminate most of the noise and retain a considerable degree of detailed information. It has better performance than FLICMLNLI algorithm. (3) The improved algorithm solves the problem that the existing robust fuzzy clustering with spatial information constraints cannot automatically select the weighted factor. The algorithm solves this problem by obtaining weights through continuous iterative selflearning methods in the process of clustering.
The structure of this article is described as follows: The second section introduces the FCM algorithm and other classic algorithms; the third section analyzes the FLICMLNLI algorithm in detail; the fourth section presents a self-learning weighted distance metric, embedding improved distance measure into the objective function and gives the corresponding derivation and proof of the convergence process; the fifth part compares the segmentation results of the algorithm in this paper with the classic algorithm and the latest algorithm, and displays the comparison result graph and performance indicators; the final section gives the conclusion of this article.

FCM
FCM algorithm is a fuzzy clustering algorithm based on objective function, which is mainly used for data clustering analysis. The theory is mature and widely used. It is an excellent clustering algorithm [44].
Suppose X = { x i , i = 1, 2, · · · } represents a gray image to be segmented. Where x i is the intensity of the ith pixel, i is the pixel index, n is the total number of pixels.
The widely used optimization model of classic FCM can be expressed as follows: The constraints are: 0 ≤ u ij ≤ 1, Where n, c , x i and v j are the number of samples, the number of clusters, the ith sample in data set X and the clustering center of the jth cluster respectively. U = (u ij ) n×c is a fuzzy partition matrix, u ij denotes the membership of sample x i belonging to the jth cluster. d 2 (x i , v j ) = ||x i − v j || 2 represents the squared Euclidean distance between the sample x i and the clustering center v j . m ∈ [1, +∞) is a fuzzy factor with the typical value of 1.5, 2.0 and 2.5, generally taking m = 2. Using the Lagrange multiplier method to solve the optimization model (1), the membership degree u ij and clustering center v j can be obtained as follows: The algorithm is simple in principle and fast in operation, but it is sensitive to noise and outliers. Therefore, since its proposal, many researchers have devoted themselves to improving the algorithm. In this paper, the definitions of mathematical symbols n, c, v, u, d, m and x are consistent with this section.

BCFCM
FCM does not consider the spatial information of pixels but treats all pixels as isolated pixels. Generally, an image contains rich spatial information, such as local neighborhood means, median information and non-local spatial information. Therefore, it is of great significance to use spatial information to guide pixel clustering for image segmentation. In view of this, many scholars combine spatial neighborhood information with FCM and have obtained a series of corresponding improved algorithms. Ahmed et al. [45] by introducing local spatial constraints in FCM, the BCFCM algorithm is proposed, which can obtain satisfactory results and reduce the influence of noise on the segmentation results. The optimization model of BCFCM is expressed as follows: Where d 2 ij = ||x i − v j || 2 , N i is the collection of adjacent pixels surrounding the ith pixel. N R is the number of neighboring pixels. The neighbors effect term is controlled by the parameter α. Controlling the influence of neighboring pixels on clustering needs to be verified by experiment and error. The constraints are the same as (1). Solve (3) with Lagrange multiplier method, the membership degree u ij and clustering center v j can be obtained as follows:

EnFCM
In order to improve the computational efficiency of the FCM algorithm, Szilagyi et al. [31] proposed an enhanced FCM, and its optimization model is expressed as follows: Where The parameter α controls the intensity of the neighboring effect. θ i represents the number of voxels from the whole stack of slices. The membership degree u ij and clustering center v j can be obtained as follows: and v j =

FLICM
Krinidi et al. [33] proposed the FLICM algorithm by defining the restrictive relationship between the membership of the center pixel and the neighboring pixels, which effectively solved the problem that traditional fuzzy clustering algorithms are sensitive to noise and outliers. The optimization model of the FLICM algorithm is defined as: Where j is the category index. c is the current number of clusters. u ij is the fuzzy membership function, which represents the degree to which the ith pixel belongs to the jth category. m is the fuzzy degree of the algorithm. v j is the cluster center of the kth cluster. ||x i − v j || 2 is the Euclidean distance between the ith pixel and the jth cluster center. G ij is the blur factor can be defined as: Where N i is the collection of adjacent pixels surrounding the ith pixel, i is the index for neighborhood pixels, d ii is the spatial distance between the ith pixel and the i th cluster center. G in this paper is consistent with this section. By minimizing the objective function, its membership degree u ij and clustering center v j can be obtained as follows: (9) In the objective function of the FLICM algorithm, the fuzzy factor calculation method shown in (8) ensures that the membership degrees of the pixels in the local window are similar. This method can reduce the influence of noise on the image. However, it is unreasonable to treat adjacent pixels as one category in general. When the noise level is high, the segmentation effect will be worse. The way to improve this problem is to ensure that similar pixels belong to the same cluster and not limited to adjacent pixels based on the membership of similar pixels. In addition, applying Euclidean distance in (8) will exclude pixels farther from the center pixel, which will further reduce performance.

New image fuzzy clustering and its robust model
The closer the two pixels are, it does not mean that the correlation between the two pixels is high. Therefore, the smaller the distance value, the greater the correlation calculation method is wrong. This paper adopts a more reasonable pixel correlation calculation model, which measures the correlation in the form of considering adjacent image blocks with different weights around the pixels. The specific calculation process of the correlation model is shown in Algorithm 1.
In FLICMLNLI algorithm, the cluster center v j becomes a vector (v j , v N j ). The blur factor is modified to: Where represents the non-local information of the ith pixel. The corrected blur factor G ij can classify the center pixel and similar pixels (not all pixels) in the search window into the same category. The objective function of FLICMLNLI is as follows: FLICMLNLI algorithm improves the correlation calculation and fuzzy factor calculation on the basis of FLICM algorithm, which improves the robustness of the algorithm. In addition, according to c j = 1 u ij = 1. Lagrange multiplier method (LMM) is used to minimize (11). By updating the clustering vector (v j , v N j ) and the degree of membership u ij , the objective function convergence and image segmentation are finally achieved. Algorithm 2 gives the detailed information of FLICMLNLI.
FLICMLNLI algorithm is a relatively advanced fuzzy clustering algorithm at present. The correlation model proposed by this algorithm makes full use of the non-local information of pixels and introduces the local information when calculating the distance between pixels and the clustering center.

Proposed algorithm
FLICMLNLI algorithm introduces local information and non-local information, which improves the robustness of the algorithm to some extent, but there are also some problems. It is obviously unreasonable to assume that the original pixel information has the same importance as the neighborhood information when calculating the distance. Based on FLICMLNLI algorithm, the proposed algorithm retains the correlation model, introduces selflearning method, calculate the different weights of the original information and the neighborhood information when calculating the distance, and improves the algorithm to further improve the robustness of the algorithm.
As shown in Fig. 1, the algorithm proposed in this paper combines the current pixel information and its neighborhood information to obtain the latest weighted Euclidean distance, and introduces the current pixel information and non-local information at the same time. The latest local factors are proposed after fusion. The improved objective function is obtained by combining the traditional algorithm with the innovative local factor, the new cluster centers and membership degree matrix is obtained, through self-learning method constantly iterative calculation, each iteration to get the weight corresponding to the current pixel and its neighborhood information then get the distance measure, to achieve the optimal segmentation result.
The improved algorithm has two advantages. On one hand, it has obvious combinational novelty. In this paper, several concepts, methods, techniques and components, such as image segmentation, fuzzy clustering, distance measurement, self-learning, non-local information and noisy image are skillfully combined, which is a breakthrough. On the other hand, it breaks the inherent calculation mode of distance measurement in the past, gives different weights to the two which have different contributions to the calculation distance, and adopts the selflearning calculation method, which provides a brand new idea for the future research work.

Preliminary knowledge
In the article by Xinmin Tao [46], in order to express fuzzy clustering based on maximum entropy, the entropy criterion is first introduced. Let X be a random variable with probability mass function P (x) = P (X = x), and set of possible values {x i , i = 1, ..., n}. The entropy of random variable X is denoted by H (X) and is defined by H (X) = − P (x) log P (x). Therefore, the structure of maximum entropy inference based on grades of Applied to the FCM algorithm, the FCM algorithm requires finding a set of {u ik } to minimize the loss function of the normalization constraint. The maximum fuzzy clustering based on entropy is realized by using L 1 -norm space and its goal is to maximize Thus we see that in the maximum entropy-based reasoning, fuzzy clustering problem into a set of prototype, so the loss function and membership distribution that satisfy the normalization constraint and the maximum Lagrangian function are minimized.
In the article by Vikas Singh [47], by considering the different contribution of each feature in each cluster, a new entropy-based variable feature weighted fuzzy mean clustering algorithm was proposed. In this method, the weighted fuzzy mean algorithm is combined with membership entropy and feature weighted entropy at the same time. As defined in (18), the first item measures the difference between samples within a class, the second item measures the membership entropy between samples and classes in the clustering process, and the third item, the Consider a data matrix X = [X 1 , X 2 , ...X n ] ∈ R q×n , q and n are the number of features and number of samples respectively. Here, X i = [x i1 , x i2 , ..., x iq ] ∈ R q is the ith sample in the data matrix. To group the data matrix X into c number of clusters, the following objective function can be minimized: w je log w je (13) The constraints are: Where D e ij = (x ie − v je ) 2 , is the Euclidean distance metric between ith sample and jth cluster, is an c × p weight matrix, w je is the weight value of lth feature to jth cluster, η i and μ j are the input parameters used to control the fuzzy partition and feature weight respectively.
The objective function as shown in (12) is a constrained nonlinear optimization problem, the solution of which is unknown. The main aim is to minimize J with respect to U , V and W using alternating optimization method.

Optimization model and solution
Combining the maximum entropy fuzzy clustering and sample attribute weighting in Xinmin Tao and Vikas Singh's article to construct the optimization model of the algorithm proposed in this paper. In the new algorithm proposed in this paper, instead of maximizing the maximum entropy reasoning structure based on the degree of membership, the maximum entropy reasoning structure based on the weight is used to maximize the maximum entropy reasoning structure, so The optimization model of the proposed algorithm is expressed as follows: (14) In (14), the first term measures the difference between samples within the same class, and the entropy of the second term feature weight indicates the degree of certainty of features in cluster recognition.
The constraints are: The improved distance measurement formula contains the original pixel information and the local pixel information and their weights are different, which can better distinguish noise and image details. The improved blur factor is as follows: Where su = s(k, i)(1 − u kj ) m , the followings are the same. In (15), non-local information of pixels is used when calculating pixel correlation. The same applies to the Lagrange multiplier rule, the Lagrange function is: Also use alternate optimization method to minimize. First, use weights to minimize the process as follows: Then get: (20) Because of ω 1 + ω 2 = 1, The final weights are Second, the process of minimizing L with V is as follows: The calculation formula for obtaining cluster centers is as follows: Last, the process of minimizing L with U is as follows: ∂L The calculation formula for the membership degree is as follows: The improved algorithm flow is shown in Algorithm 3.
The improved algorithm uses the non-local information of pixels when calculating the correlation between pixels, and the original information and local information of pixels are used when calculating the distance between pixels and the cluster center, and the two play different roles in calculating the distance, so the weights are different. This improvement makes the proposed algorithm more reasonable and has better segmentation effect than FLICMLNLI algorithm.

Convergence proof of U
The objective function is expressed as follows: 1 and ω 2 are constants. If u * is the minimum of L(U ), then ∂L(u * ) ∂u ij = 0, that is ∂L(u * ) ∂u ij = mu m−1 ij (d 2 ij +G ij ) − λ i = 0, the following formula can be obtained: The calculation process of the Hessian matrix of L(u) at u = u * is as follows: It can be found from (26) that the characteristic of the Hessian matrix of L(u) at u = u * is that the diagonal elements of the matrix are greater than zero, and the nondiagonal elements are zero, which shows that the obtained Hessian matrix is a symmetric positive definite matrix, so u * is a local minimum of L(U ).

Convergence proof of V
The objective function is expressed as follows: The calculation process of the Hessian matrix of L(v) at v = v * is as follows: It can be found from (29) that the characteristic of the Hessian matrix of L(v) at v = v * is that the diagonal elements of the matrix are greater than zero, and the nondiagonal elements are zero, which shows that the obtained Hessian matrix is a symmetric positive definite matrix, so v * is a local minimum of L(V ).

Convergence proof of V N
The objective function is expressed as follows: The calculation process of the Hessian matrix of L(v N ) at v N = v N * is as follows: It can be found from (32) that the characteristic of the Hessian matrix of L(v) at v = v * is that the diagonal elements of the matrix are greater than zero, and the nondiagonal elements are zero, which shows that the obtained Hessian matrix is a symmetric positive definite matrix, so v N * j is a local minimum of L(V N ).
In conclusion, this section confirms the convergence of the algorithm proposed in this article from three aspects.

Evaluation index
The performance of the comparison algorithm can be calculated by calculating the performance index. This section gives the calculation formula of each index.
The calculation formula of partition coefficient (V P C ) and partition entropy (V P E ) are as follows: The segmentation result with the smallest blur is the best, so the larger V P C , the smaller V P E of the segmented image, the better the segmentation effect. In addition, there are four other indicators for comparing the segmented image and the ideal image of each algorithm. They are accuracy(Acc.), sensitivity (Sen.), specificity(Spe.), and precision(P re.) [48].Their calculation formulas are as follows:

Spe. = T N/(T N + F P )
P re. = T P /(T P + F P ) Among them, P , N, T and F represent positive, negative, true and false respectively. Accuracy is the ratio of pixels that correctly classify all pixels relative to all pixels. Sensitivity and specificity reveal the possibility of correctly classifying positive and negative pixels. Therefore, the values of these four measurement methods are between 0 and 1. The greater the accuracy, sensitivity, specificity, and precision, the better the segmentation effect.
Misclassification error(ME) [49] as a quantitative evaluation index can objectively and quantitatively describe the effectiveness and robustness of the algorithm, which is: Where A i represents the set of pixels belonging to the ith category in the ideal segmented image, B j represents the set of pixels belonging to the jth category in the real segmented image obtained by the algorithm. c represents the number of classes. The smaller ME, the closer the segmentation result is to the ideal segmentation result, the better segmentation performance and vice versa.

Parameter selection
In the experiment, it is very important to define the number of classes in advance, because different numbers of classes will present different details [51]. In our experiment, the number of categories in the composite image is known in advance. For other types of images, we determine the value of c based on different segmentation tasks. The values of other parameters in the relevant algorithm are shown in Table 1. The parameters are obtained through many experiments. Each mathematical symbol in the Table 1 corresponds to the preceding sections.

Experimental results and analysis
In this section, different types of noise are used to destroy the original image in Fig. 2. Figure 2c-e are selected from BSDS500 gallery. Figure 2f-m are selected from MedPix database. The segmentation results of the improved algorithm and FCM, BCFCM, EnFCM, FLICM and FLICMLNLI are compared.

Comparison of segmentation performance without noise
As shown in Fig. 3a, three medical images were selected. Medical images often contain a lot of detailed information, and the detailed information of medical images needs to be preserved. The selected medical images were divided into three categories. Comparing Fig. 3c-h with b, it can be found that the algorithm proposed in this paper is closer to the ideal segmentation image and can better classify the details of the original image. On the contrary, the classification results of FLICMLNLI algorithm are relatively rough, ignoring many details and unable to recover the original image well. Furthermore, the evaluation indexes obtained by the segmentation of images by various algorithms are shown in Table 2. The comparison of all indexes also leads to the conclusion that the algorithm proposed in this paper is superior to other algorithms in the segmentation of original medical images. Figure 4a contains three images, one natural image and two medical images. The first row of Fig. 4a is a swan image. Different algorithms are used to segment the swan image polluted by Gaussian noise with a normalized variance of 0.1, and the segmentation result of the algorithm proposed in this paper is the closest to the ideal image. It is obvious that the algorithm proposed in this article has stronger anti-Gaussian noise resistance than other algorithms. The second and third rows in Fig. 4a are two medical images with rich details, compare the segmentation results of various algorithms on medical images after they are contaminated by Gaussian noise. By comparing the result graph of each algorithm with the ideal image, it can be found that the algorithm proposed in this paper has the best restoration effect, which can not only remove most of the noise pollution, but also retain the details of the original image.

Comparison of segmentation performance with Gaussian noise
The performance indicators in Table 3 further confirm the statement that the algorithm performance in this paper is better. It shows that the algorithm in this paper achieves a better balance between noise removal and detail preservation. It has a strong ability to process medical images, which is more effective than other algorithm. Figure 5a shows three different types of original images, where the first row is a composite image, the second is  a natural image, and the third is a medical image. These three kinds of images are polluted by 20% Salt and Pepper noise, as shown in Fig. 5b. The segmentation results of different segmentation algorithms are shown in Fig. 5d-i. By comparing the segmentation results of different algorithms with the ideal image, it is found that the segmentation effect of the algorithm proposed in this paper is closer to the ideal effect. The results of calculating the corresponding evaluation indexes are shown in Table 4, and the algorithm in this paper has the best effect.

Comparison of segmentation performance with Mixed noise
Combine Fig. 6 and Table 5, improved algorithm also works best when processing images contaminated by Mixed noise.

Comparison of segmentation performance with Speckle noise
Combine Fig. 7 and Table 6, the improved algorithm also works best when processing images contaminated by Speckle noise.

Comparison of segmentation performance with Rician noise
The improved algorithm also has the best segmentation effect for images polluted by Rician noise, see Fig. 8 and Table 7.

The influence of Guassian noise of different density on evaluation index
Add Gaussian noise with different normalized variance values (from 0.15 to 1) to multiple pictures, get the partition coefficient and partition entropy of different algorithms, Fig. 9a and b show the change of partition entropy and partition coefficient of each algorithm with the increase of gaussian noise intensity. In addition, these algorithms are quantitatively compared by calculating the clustering indicators of the algorithms, as shown in the Fig. 10 for accuracy, sensitivity, and specificity.

Further comparison with the current state-of-the-art algorithms
By comparing the results of the algorithm in this paper with the classic algorithm in the previous sections, it is found that the optimized algorithm proposed in this paper has better segmentation results when segmenting images contaminated by different types of noise. In this section, we will select several superior algorithms recently published in journals with higher impact factors, including KWFLICM algorithm, APFCM algorithm, TFLICM algorithm, FAL-RCM algorithm and its fast algorithm [52][53][54][55], and compare them with the algorithms in this article. In addition, a new indicator, Jacobian Score (J S) [56] will be introduced to compare the performance of algorithms. The calculation formula of this indicator is as Where P i denote the pixels belonging to the manual segmented image, Q i denote the pixels belonging to the experimental segmented image. The larger the J S, the better the effect of segmentation.
The segmentation results obtained by using different algorithms to segment different types of pictures under the influence of different noises are shown in Fig. 11, and the performance indicators are shown in Table 8. Among them, the first row of Fig. 11 shows the segmentation results of a medical image contaminated by Gaussian noise with a normalized variance of 0.1, the second row of Fig. 11 shows the segmentation results of a medical images contaminated by 30% Salt and Pepper noise, the third row of Fig. 11 shows the segmentation result of a remote sensing images contaminated by Mixed noise(Gaussian noise with a normalized variance of 0.05 and 10% Salt & Pepper noise),     Fig. 11 shows the segmentation results of a natural image contaminated by Speckle noise with a normalized variance of 0.1, The fifth row of Fig. 11 shows the segmentation results of a medical image contaminated by Rician noise with standard deviation of 60. From the comparison of segmented images and performance indicators shown above, it is not difficult to find that compared with the KWFLICM algorithm, APFCM algorithm and TFLCM algorithm, the algorithm proposed in this paper has better results when segmenting images contaminated by noise, and there are fewer noise points in the image. Compared with the FALRCM algorithm and its fast algorithm, which has more powerful anti-noise performance, the algorithm in this paper can retain more detailed image information when segmenting medical images with rich details.

Complexity analysis and test of the improved algorithm
Time complexity is also an important index to evaluate the performance of different algorithms. The higher the time complexity, the higher the calculation cost and the longer the execution time. The complexity of the algorithm proposed in this paper mainly comes from the introduction of non-local information in the new correlation model, the improved weighted Euclidian distance and the iterative process of the algorithm. For a grayscale image with n pixels, the time complexity of correlation calculation according to the correlation model adopted in this paper is O(n × (R 4 + R 2 )). Where R represents the size of the neighborhood window. For an image segmented into c clusters, the time complexity of iterative calculation of the weighted distance Where l is the number of iterations. As a result, the total complexity of the proposed algorithm . The time complexity comparison between the algorithm in this paper and some representative algorithms is shown in Table 9. The symbols in the table are as above. It can be seen from Table 9 that the algorithm proposed in this paper is more complicated than other algorithms, so the time complexity is higher. The specific running time is compared as follows: Firstly, the running time of the traditional algorithm and the improved algorithm is compared as shown in the Table 10. The running times of the first row in Fig. 5, the first row in Fig. 6, and the first and second rows in Fig. 7 are averaged to get the first row in Table 10. The running times of three rows in Fig. 3, the second and third rows in Fig. 4, the third row in Fig. 5 and the two rows in Fig. 8 are averaged to get the second row in Table 10. Take the average running times of the first row in Fig. 4, the second row in Fig. 5, and the second and third rows in Fig. 6 to get the third row in Table 10. The last row in Table 10 shows the running time of the third row in Fig. 7.
Visualizing the data in Tables 10 and 11, the histogram of the running time of all algorithms is shown in Fig. 12.
From Tables 10, 11 and Fig. 12, it can be found that the improved algorithm proposed in this paper requires a relatively long running time. This is mainly due to the introduction of self-learning iterative methods and the simultaneous introduction of local and non-local information of the image. When segmenting different types of images, the execution efficiency of our algorithm is always lower than other comparison algorithms, especially the slowest speed when segmenting remote sensing images. This further shows that the real-time performance of the algorithm is poor. The improved algorithm can obtain better segmentation performance at the expense of real-time performance, which is one of the main deficiencies of the improved algorithm. However, this shortcoming will also provide new ideas for future research, and we will improve the running speed of this algorithm in the next research work.
Secondly, compare the running time of current state-ofthe-art algorithms and improved algorithm. The comparison   Table 11. The first row of data in Table 11 is obtained by calculating the average running time of the first, second, and fifth rows in Fig. 11, the second row of data in Table 11 is the running time of the fourth row in Fig. 11, the third row of data in Table 11 is the running time of the third row in Fig. 11.

Conclusions
Image segmentation has been an active area of research in the fields of computer vision and pattern recognition for the past two decades. In this paper, we propose a novel fuzzy algorithm, which can combine spatial information and iterative self-learning weighting into image segmentation. Based on the FLICMLNLI algorithm proposed by the predecessors, this paper believes that the Euclidean distance calculation formula that assigns the pixel information and its local information the same weight is unreasonable, so the determined different weight values are calculated through the iterative self-learning method. Pixel information and its local information have different weights, which is a more reasonable distance calculation method, and then it is embedded in the objective function to obtain an optimized model of image segmentation. This strategy make the proposed scheme capable of reducing heave noise and preserving more important details. After testing a large number of images with different types and numbers, it is found that no matter what kind of image is contaminated by noise, the improved algorithm is better than previous algorithms in terms of balancing noise removal ability and detail retention ability. Especially when processing detailed medical images, the improved algorithm can not only remove most of the noise, but also will not misprocess the detailed information such as veins and capillaries in the brain image. In general, the performance of the algorithm proposed in this paper is better.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.