Multi-label feature selection based on fuzzy neighborhood rough sets

Multi-label feature selection, a crucial preprocessing step for multi-label classification, has been widely applied in data mining, artificial intelligence and other fields. However, most existing multi-label feature selection methods for mixed data suffer from the following problems: (1) they rarely consider the importance of features from multiple perspectives, so their analysis of features is not comprehensive enough; (2) they select feature subsets according to the positive region while ignoring the uncertainty implied by the upper approximation. To address these problems, a multi-label feature selection method based on fuzzy neighborhood rough sets is developed in this article. First, the fuzzy neighborhood approximation accuracy and the fuzzy decision are defined in the fuzzy neighborhood rough set model, and a new multi-label fuzzy neighborhood conditional entropy is designed. Second, a mixed measure is proposed by combining the fuzzy neighborhood conditional entropy from the information view with the fuzzy neighborhood approximation accuracy from the algebra view, to evaluate the importance of features from both views. Finally, a forward multi-label feature selection algorithm is proposed to remove redundant features and decrease the complexity of multi-label classification. Experimental results on ten multi-label datasets illustrate the validity and stability of the proposed algorithm in multi-label fuzzy neighborhood decision systems when compared with related methods.


Introduction
In recent years, multi-label classification has occupied a very important position in the fields of artificial intelligence and machine learning; it has attracted the attention of more and more scholars, and a series of multi-label classification methods have been proposed [1][2][3][4][5]. In traditional classification learning, each sample has only one category label, namely single-label learning [6,7]. In actual applications, however, a sample may belong to multiple category labels at the same time, which is named multi-label learning [8][9][10]. Multi-label data contain a large number of features, some of which may be irrelevant or redundant; this leads to problems such as high computational cost, overfitting, low classification performance of multi-label learning algorithms and a long classification learning process. Therefore, dimension reduction of multi-label data is a focus of current research. Feature selection is one of the most common dimensionality reduction methods for analyzing high-dimensional multi-label data; it aims to eliminate redundant and irrelevant features in the classification learning task and extract useful information [11][12][13].
Rough set theory is a well-known method for dealing with uncertain data. It requires no prior information beyond the data itself, so it has been widely used in feature selection [45]. However, traditional rough set theory is based on the equivalence relation and is therefore only suitable for discrete data. To solve this problem, some scholars have extended the rough set model. For example, the neighborhood rough set model (NRS), the most common model for numerical data, replaces the equivalence relation with a neighborhood relation. Duan et al. [46] defined the lower approximation and dependency of NRS in multi-label learning and proposed a multi-label feature selection algorithm based on the neighborhood rough set model (MNRS). Unfortunately, NRS cannot deal effectively with the fuzziness of data. Lin et al. [47] therefore used different fuzzy relations to construct a multi-label fuzzy rough set model (MFRS), which estimated the similarity between samples under different labels, directly evaluated the attributes of multi-label data, solved the problem of low separability of the fuzzy similarity and defined a dependency function. But FRS is sensitive to noise: noisy data affect the calculation of the fuzzy lower approximation and limit its practical application [48]. To solve the above problems, the fuzzy neighborhood rough set model (FNRS) was designed. Wang et al. [49] combined NRS with FRS and proposed a feature selection algorithm based on FNRS that selects a feature subset via dependency. Chen et al. [48] designed a multi-label attribute reduction method based on variable-precision FNRS, which used parameterized fuzzy neighborhood granules to define the fuzzy decision and decision classes and calculated the importance of features with a dependency measure; however, reduction based on the positive region does not take into account the influence of the uncertain information in the upper approximation on the importance of an attribute.
Inspired by these observations, this paper designs a multi-label feature selection method based on FNRS, into which the approximation accuracy is introduced.
In recent decades, multi-label feature selection methods have been classified into two views. The first is the algebra view based on approximation accuracy, which considers the effect of features on the labels through the change of approximation accuracy and thereby decides whether these features can be eliminated. For instance, Liang et al. [17] presented the selection of the optimal number of granules in the multi-granularity multi-label decision table, which makes certain positive-region reductions more suitable for multi-label datasets. Li et al. [35] designed a robust MFRS using kernelized information and obtained a lower approximation. The second is the information view based on information entropy, which considers the influence of features on the decision subset through information entropy and decides whether these features can be eliminated. For example, Lin et al. [25] designed a multi-label feature selection method based on neighborhood mutual information, extended neighborhood information entropy to multi-label data, and introduced three new measures. Li et al. [29] developed a multi-label feature selection method based on information gain, which measured the correlation between features and labels. Xu et al. [24] proposed a fuzzy neighborhood conditional entropy for feature selection. Inspired by these contributions, we design a novel fuzzy neighborhood conditional entropy to judge whether to exclude features in multi-label data. However, methods from a single view cannot assess the importance of features accurately and comprehensively from different perspectives. Therefore, Sun et al. [39] developed a multi-label feature selection method that combined neighborhood mutual information with approximation accuracy in multi-label neighborhood decision systems, and this combination of the two views achieved excellent classification performance.
Combining the above contributions, this paper proposes a multi-label feature selection method that joins the fuzzy neighborhood conditional entropy with the fuzzy neighborhood approximation accuracy to evaluate the importance of features from the two views. The major contributions of this article can be briefly described as follows:

-Considering that the similarity of samples is also affected by 0-valued labels, the average value of the decision under different labels is calculated as the fuzzy decision. The concepts of fuzzy neighborhood upper approximation, lower approximation and fuzzy neighborhood approximation accuracy are proposed, which improves the integrity of the multi-label fuzzy neighborhood decision system.

-By improving the single-label fuzzy neighborhood entropy, this work proposes definitions of fuzzy neighborhood information entropy, fuzzy neighborhood joint entropy and fuzzy neighborhood conditional entropy for multi-label data, and discusses their related properties and proofs.

-Combining the fuzzy neighborhood approximation accuracy from the algebra view with the fuzzy neighborhood conditional entropy from the information view, a mixed measure is proposed to evaluate the correlation between a feature subset and the label set in the multi-label fuzzy neighborhood decision system. Finally, a forward multi-label feature selection algorithm based on fuzzy neighborhood rough sets is designed for multi-label classification.
The remainder of this paper is structured as follows. The next section briefly introduces the related knowledge of NRS, MNRS and FNRS. In the subsequent section, the fuzzy neighborhood rough set model, fuzzy neighborhood conditional entropy and hybrid measure are introduced. The multi-label feature selection algorithm is designed in the next section.
Then the experimental results are provided. Finally, the conclusions of our research are provided in the last section.

Classical neighborhood rough sets
Suppose there exists a neighborhood decision system, simplified as NDS = <U, A ∪ D, V, Δ, δ>, where U = {x_1, x_2, ..., x_n} is a nonempty set of samples; A = {a_1, a_2, ..., a_m} is a set of features; D is the decision class of the samples; V = ∪_{a∈A} V_a, where V_a is the value domain of feature a; Δ denotes a distance function; and δ (0 ≤ δ ≤ 1) is a neighborhood radius. Δ satisfies the following properties [50]: for ∀x, y, z ∈ U, (1) Δ(x, y) ≥ 0; (2) Δ(x, y) = 0 if and only if x = y; (3) Δ(x, y) = Δ(y, x); (4) Δ(x, z) ≤ Δ(x, y) + Δ(y, z). Then <U, Δ> is called a metric space. In general, the distance in the metric space can be expressed by the Minkowski distance

Δ_P(x, y) = (Σ_{k=1}^{m} |a_k(x) − a_k(y)|^P)^{1/P}.

Given the nonempty metric space <U, Δ>, for ∀B ⊆ A, δ_B(x) = {y | x, y ∈ U, Δ(x, y) ≤ δ, δ ≥ 0} [46], where Δ(x, y) measures the distance between x and y; δ_B(x) is also called the neighborhood granule of x under B.
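As a small illustration, the δ-neighborhood granule can be computed directly from this definition. The sketch below assumes the Euclidean distance as Δ and [0,1]-scaled features; the function name and toy data are our own, not from the paper:

```python
import numpy as np

def neighborhood(X, i, delta, B):
    # delta-neighborhood of sample i under feature subset B:
    # all y with Delta(x_i, y) <= delta, using Euclidean distance as Delta
    dists = np.linalg.norm(X[:, B] - X[i, B], axis=1)
    return np.flatnonzero(dists <= delta)

# toy data: 4 samples, 2 features, values already scaled to [0, 1]
X = np.array([[0.10, 0.20],
              [0.15, 0.25],
              [0.90, 0.80],
              [0.50, 0.50]])
print(neighborhood(X, 0, 0.2, [0, 1]))  # -> [0 1]: only x_1 and x_2 lie within radius 0.2 of x_1
```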

Multi-label neighborhood rough sets
Suppose there exists a multi-label neighborhood decision system, abbreviated MNDS = <U, A ∪ D, δ>, where D = {d_1, d_2, ..., d_t} is the set of labels and X_j denotes the set of samples with the label d_j. Then the upper and lower approximations of the neighborhood rough sets of D with respect to B are defined [46], respectively, as

upper: N̄_B(D) = ∪_{j=1}^{t} {x ∈ U | δ_B(x) ∩ X_j ≠ ∅},

lower: N_B(D) = ∪_{j=1}^{t} {x ∈ U | δ_B(x) ⊆ X_j}.

Then, for ∀B ⊆ A, the neighborhood entropy of x_i ∈ U is expressed [25] as

NH_B(x_i) = −log_2 (|δ_B(x_i)| / n).

Fuzzy neighborhood rough sets
Suppose there exists a fuzzy neighborhood decision system, written FNDS = <U, A ∪ D, δ>, where U = {x_1, x_2, ..., x_n} is the nonempty set of samples and A is the set of features. For ∀B ⊆ A, a fuzzy binary relation R_B is derived from B [49]. For ∀x, y ∈ U, R_B(x, y) is called a fuzzy similarity relation between samples x and y under the feature set B when it satisfies the following conditions: (1) reflexivity: R_B(x, x) = 1; (2) symmetry: R_B(x, y) = R_B(y, x).
Suppose R_a is a fuzzy similarity relation for ∀a ∈ B; then R_B = ∩_{a∈B} R_a, i.e., R_B(x, y) = min_{a∈B} R_a(x, y). The fuzzy similarity matrix of x with respect to B over U is defined [24] as M(R_B) = (R_B(x_i, x_j))_{n×n}. Let U/D = {D_1, D_2, ..., D_r}. For ∀x, y ∈ U, the parameterized fuzzy neighborhood information granule is constructed as follows:

δ_B(x)(y) = R_B(x, y) if R_B(x, y) ≥ 1 − δ, and δ_B(x)(y) = 0 otherwise,

where δ is called the fuzzy neighborhood radius and satisfies 0 ≤ δ ≤ 1. The fuzzy neighborhood of ∀x ∈ U is thus determined by the fuzzy similarity relation R_B and the neighborhood radius δ. Let FNDS = <U, A ∪ D, δ> be a fuzzy neighborhood decision system with U/D = {D_1, D_2, ..., D_r}. For ∀B ⊆ A, the upper and lower approximations of D with respect to B are expressed, respectively, as

upper: R̄_B(D) = ∪_{i=1}^{r} {x ∈ U | δ_B(x) ∩ D_i ≠ ∅},

lower: R_B(D) = ∪_{i=1}^{r} {x ∈ U | δ_B(x) ⊆ D_i}.

For ∀B ⊆ A, the fuzzy neighborhood approximation accuracy of D with respect to B is described as

α_B(D) = |R_B(D)| / |R̄_B(D)|.
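The fuzzy similarity relation and the parameterized granule can be sketched in a few lines. We assume the common per-feature similarity R_a(x, y) = 1 − |a(x) − a(y)| on [0,1]-scaled data and min-aggregation over B; the exact similarity function used in the paper may differ:

```python
import numpy as np

def fuzzy_similarity(X, B):
    # R_B(x, y) = min over a in B of (1 - |a(x) - a(y)|), one common choice
    R = np.ones((len(X), len(X)))
    for a in B:
        diff = np.abs(X[:, a][:, None] - X[:, a][None, :])
        R = np.minimum(R, 1.0 - diff)
    return R

def fuzzy_neighborhood_granule(R, i, delta):
    # keep R_B(x_i, y) only where it is at least 1 - delta; zero elsewhere
    g = R[i].copy()
    g[g < 1.0 - delta] = 0.0
    return g

X = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80]])
R = fuzzy_similarity(X, [0, 1])
print(fuzzy_neighborhood_granule(R, 0, 0.1))  # -> [1.   0.95 0.  ]
```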

Proposed method
In this section, we improve the multi-label fuzzy neighborhood rough set model based on the relevant basic knowledge introduced in the previous section. First, the parameterized fuzzy similarity relation is used to calculate the fuzzy neighborhood granules. Because a sample in multi-label data may belong to multiple labels at the same time, the multi-label fuzzy decision is obtained by averaging the values over multiple labels, which differs from the single-label fuzzy decision. Second, the fuzzy neighborhood approximation accuracy is introduced to account for the uncertain information of the upper approximation. Then the fuzzy neighborhood conditional entropy for multi-label data is proposed. Finally, the fuzzy neighborhood approximation accuracy and the fuzzy neighborhood conditional entropy are combined into a mixed measure, and the relevant proofs are given.

Multi-label fuzzy neighborhood approximation accuracy and fuzzy decision
Definition 1 Given MFNDS = <U, A ∪ D, δ> with D = {d_1, d_2, ..., d_t}, let U/d_j = {D_0^j, D_1^j} denote the coverage of U determined by the label d_j, where D_p^j represents the set of samples whose value is p in the column of the label d_j, j = 1, 2, ..., t, p = 0, 1. The parameterized fuzzy decision is constructed as follows:

D̃_p^j(x_i) = |δ_B(x_i) ∩ D_p^j| / |δ_B(x_i)|,

where D̃_p^j(x_i) is the fuzzy membership degree of x_i with respect to D_p^j, and D̃_p^j is the fuzzy set of the equivalence decision class of the samples.

Definition 2 The fuzzy decision of a sample over the entire label space is obtained by averaging over all labels:

D̃_p(x_i) = (1/t) Σ_{j=1}^{t} D̃_p^j(x_i),

where D̃_p(x_i) is the fuzzy membership degree of the sample x_i to the decision value p, and {D̃_0, D̃_1} is the fuzzy decision of the samples induced by D.
Definition 3 [49] Let F and R be two fuzzy sets. The inclusion degree of F in R is defined as

P(F, R) = |F ≤ R| / |U|,

where P(F, R) represents the inclusion degree of the fuzzy set F in the fuzzy set R, and |F ≤ R| denotes the number of samples whose membership degree in F is not greater than their membership degree in R. Example 1 Given a set U = {x_1, x_2, ..., x_6} and two fuzzy sets F and R defined on U that specify the membership degrees of the samples, P(F, R) is obtained by counting the samples with F(x) ≤ R(x) and dividing by |U|.
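Under this reading of Definition 3 (counting the samples with F(x) ≤ R(x) and dividing by |U|, which is our assumption about the normalization), the inclusion degree is a one-liner; the membership values below are hypothetical:

```python
def inclusion_degree(F, R):
    # fraction of samples whose membership in F does not exceed that in R
    return sum(f <= r for f, r in zip(F, R)) / len(F)

# hypothetical membership degrees on U = {x_1, ..., x_6}
F = [0.3, 0.8, 0.5, 0.2, 0.9, 0.1]
R = [0.4, 0.7, 0.5, 0.6, 0.9, 0.05]
print(inclusion_degree(F, R))  # 4 of 6 samples satisfy F(x) <= R(x), so 0.666...
```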

Definition 4 Given MFNDS = <U, A ∪ D, δ> with ∀B ⊆ A, D = {d_1, d_2, ..., d_t} represents the set of labels, and δ (0 ≤ δ ≤ 1) is called the fuzzy neighborhood radius. For ∀x, y ∈ U, the parameterized fuzzy neighborhood information granule is constructed as follows:

δ_B(x)(y) = R_B(x, y) if R_B(x, y) ≥ 1 − δ, and δ_B(x)(y) = 0 otherwise,

where R_B is the fuzzy similarity relation induced by B on U.

Definition 5 Given MFNDS = <U, A ∪ D, δ> with ∀B ⊆ A, δ is the fuzzy neighborhood radius and {D̃_0, D̃_1} is the fuzzy decision of the samples induced by D. The upper and lower approximations of the fuzzy neighborhood of D relative to B are defined, separately, as

upper: R̄_B^δ(D) = ∪_{p=0,1} {x ∈ U | δ_B(x) ∩ D̃_p ≠ ∅},

lower: R_B^δ(D) = ∪_{p=0,1} {x ∈ U | δ_B(x) ⊆ D̃_p}.

Definition 6 Given MFNDS = <U, A ∪ D, δ> with ∀B ⊆ A, δ is the fuzzy neighborhood radius, {D̃_0, D̃_1} is the fuzzy decision of the samples induced by D, and R_B is the fuzzy similarity relation induced by B on U. The fuzzy neighborhood approximation accuracy is defined as

α_B^δ(D) = |R_B^δ(D)| / |R̄_B^δ(D)|,

where |·| represents the cardinality of a set.

Property 1 Given MFNDS = <U, A ∪ D, δ> with ∀B ⊆ A, let δ_1 and δ_2 be two fuzzy neighborhood radii. If δ_1 ≤ δ_2, then α_B^{δ_2}(D) ≤ α_B^{δ_1}(D).

Proof For ∀x ∈ U, according to Definition 4, the fuzzy neighborhood information granules satisfy δ_B^{δ_1}(x) ⊆ δ_B^{δ_2}(x) when δ_1 ≤ δ_2. Hence the lower approximation shrinks and the upper approximation grows as the radius increases, so α_B^{δ_2}(D) ≤ α_B^{δ_1}(D).
Example 2 Given the multi-label decision table MDT = <U, A ∪ D> displayed in Table 1, the data in Table 1 are normalized according to the literature [24] so that the values lie in the range [0, 1]. The fuzzy similarity relation R_{a_k} between the samples x_i and x_j relative to the attribute a_k is calculated from the normalized data. Because the fuzzy similarity relation R_{a_k} satisfies reflexivity, R_{a_k}(x_i, x_j) = 1 when i = j. The fuzzy decisions under the labels d_1, d_2, d_3 are calculated as in Definition 1, where D_p^j represents the set of samples whose value is p under the label d_j, j = 1, 2, 3, p = 0, 1. According to Definition 2, we can obtain

D̃_0 = {0.5028, 0.4215, 0.5238, 0.4964, 0.5105, 0.7405},

D̃_1 = {0.4972, 0.5785, 0.4762, 0.5036, 0.4895, 0.2595}.

From the above we can see that D̃_0(x) + D̃_1(x) = 1, so the eventual fuzzy decision of the entire label space is D̃ = {D̃_0, D̃_1}.

Multi-label fuzzy neighborhood conditional entropy
Definition 7 Given MFNDS = <U, A ∪ D, δ> with ∀B ⊆ A, where δ is the neighborhood radius, the fuzzy neighborhood entropy of B is defined as

FNE_δ(B) = −(1/n) Σ_{i=1}^{n} log_2 (|δ_B(x_i)| / n),

where |δ_B(x_i)| represents the number of nonzero values in the fuzzy neighborhood granule of the object x_i, and |δ_B(x_i)|/n represents the probability of the fuzzy neighborhood granule δ_B(x_i) in U.
Definition 8 Given MFNDS = <U, A ∪ D, δ> with B_1, B_2 ⊆ A, where δ_{B_1}(x_i) and δ_{B_2}(x_i) are fuzzy neighborhood granules, the fuzzy neighborhood joint entropy of B_1 and B_2 is defined as

FNE_δ(B_1, B_2) = −(1/n) Σ_{i=1}^{n} log_2 (|δ_{B_1}(x_i) ∩ δ_{B_2}(x_i)| / n).

Definition 9 Given MFNDS = <U, A ∪ D, δ> with B_1, B_2 ⊆ A, where δ_{B_1}(x_i) and δ_{B_2}(x_i) are fuzzy neighborhood granules, the fuzzy neighborhood conditional entropy of B_1 conditioned on B_2 is defined as

FNE_δ(B_1 | B_2) = −(1/n) Σ_{i=1}^{n} log_2 (|δ_{B_1}(x_i) ∩ δ_{B_2}(x_i)| / |δ_{B_2}(x_i)|).

The following property holds: FNE_δ(B_1 | B_2) = FNE_δ(B_1, B_2) − FNE_δ(B_2).

Proof According to Definitions 7 and 8, FNE_δ(B_1, B_2) − FNE_δ(B_2) = −(1/n) Σ_{i=1}^{n} [log_2(|δ_{B_1}(x_i) ∩ δ_{B_2}(x_i)|/n) − log_2(|δ_{B_2}(x_i)|/n)] = −(1/n) Σ_{i=1}^{n} log_2(|δ_{B_1}(x_i) ∩ δ_{B_2}(x_i)| / |δ_{B_2}(x_i)|). Then, from Definition 9, this equals FNE_δ(B_1 | B_2).
Definition 10 Given MFNDS = <U, A ∪ D, δ> with ∀B ⊆ A, where δ_B(x_i) is the fuzzy neighborhood granule and D̃ = {D̃_0, D̃_1} is the fuzzy decision, the conditional entropy of the decision D̃ on the feature subset B is defined as

E_fn(D | B) = −(1/n) Σ_{i=1}^{n} log_2 (|δ_B(x_i) ∩ D̃_p| / |δ_B(x_i)|),

where |δ_B(x_i)| represents the number of nonzero values in the fuzzy neighborhood granule of the object x_i, and |δ_B(x_i) ∩ D̃_p| represents the number of nonzero-membership samples of δ_B(x_i) whose membership degree is not greater than that in D̃_p. Feature selection from the algebra viewpoint or the information viewpoint alone is limited: a subset selected under the information-theoretic definitions may still contain redundant features from the algebraic viewpoint, and a subset selected under the algebraic definitions may still change the conditional entropy. Therefore, we combine the approximation accuracy from the algebra viewpoint with the conditional entropy from information theory to calculate the importance degree of candidate features.
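A rough sketch of such a conditional entropy follows. We use the pointwise-min fuzzy intersection and count nonzero memberships; the paper's exact intersection operator is an assumption here, and the granules and decision vector are toy values:

```python
import numpy as np

def fn_conditional_entropy(granules, D):
    # E(D|B) = -(1/n) * sum_i log2(|g_i ^ D| / |g_i|), where |.| counts
    # nonzero memberships and ^ is the pointwise min (a standard fuzzy intersection)
    total = 0.0
    for g in granules:
        inter = np.count_nonzero(np.minimum(g, D))
        total += np.log2(inter / np.count_nonzero(g))
    return -total / len(granules)

# toy fuzzy neighborhood granules for 3 samples and one fuzzy decision class
granules = np.array([[1.0, 0.9, 0.8],
                     [0.9, 1.0, 0.0],
                     [0.8, 0.0, 1.0]])
D = np.array([1.0, 1.0, 0.0])
print(fn_conditional_entropy(granules, D))  # -> about 0.528
```

The entropy is 0 exactly when every granule is contained in the decision class, and grows as the granules spill outside it.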
Definition 11 Given MFNDS = <U, A ∪ D, δ> with ∀B ⊆ A, where δ_B(x_i) is the fuzzy neighborhood granule and D̃ = {D̃_0, D̃_1} is the fuzzy decision, the mixed measure M_B(D) is defined by combining the fuzzy neighborhood approximation accuracy α_B^δ(D) with the fuzzy neighborhood conditional entropy E_fn(D | B). For ∀a ∈ B, if M_{B−{a}}(D) = M_B(D), then the feature a is unnecessary.

Definition 12
Given MFNDS = <U, A ∪ D, δ> with ∀B ⊆ A, we call B a reduction of A in the fuzzy neighborhood decision information system relative to the decision class D when it satisfies: (1) M_B(D) = M_A(D); (2) for ∀b ∈ B, M_{B−{b}}(D) < M_B(D).

Definition 13 Given MFNDS = <U, A ∪ D, δ> with ∀B ⊆ A, the importance of a feature a ∈ B relative to D is expressed as

Sig(a, B, D) = M_B(D) − M_{B−{a}}(D).

To obtain a reduced subset, the two preconditions of Definition 12 must be met. However, there are many redundant and unrelated features in multi-label datasets, and searching for the minimum reduced subset is an NP-complete problem. Therefore, we set a threshold λ to control subset selection before selecting the final feature subset. If the difference of the mixed measure between the current feature subset and the original feature set is less than λ, then a relatively approximate reduced subset Red is selected, which shall satisfy

|M_A(D) − M_Red(D)| < λ.

Then the importance of a feature a ∈ A − Red relative to D is expressed as

Sig(a, Red, D) = M_{Red∪{a}}(D) − M_Red(D).

Remark 1 Sun et al. [51] considered that the upper and lower approximations of rough sets belong to the viewpoint of algebraic theory, while information entropy and its extensions belong to the viewpoint of information theory. Definition 6 gives the fuzzy neighborhood approximation accuracy α_B^δ(D) from the algebraic point of view, and Definition 10 gives the conditional entropy E_fn(D | B) of the fuzzy decision D̃ on the feature subset B from information theory. Therefore, Definition 11 measures the uncertainty of multi-label fuzzy neighborhood decision systems from both the algebra view and the information view.
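The forward search with the λ stopping rule can be sketched generically. Here `measure` stands in for the paper's mixed measure M_B(D) of Definition 11, and the additive toy measure is purely illustrative:

```python
def forward_selection(features, measure, lam):
    # greedily add the feature with the largest mixed-measure gain until the
    # gap to the measure of the full feature set drops below the threshold lam
    selected, remaining = [], list(features)
    full = measure(list(features))
    while remaining:
        best = max(remaining, key=lambda a: measure(selected + [a]))
        selected.append(best)
        remaining.remove(best)
        if abs(full - measure(selected)) < lam:
            break
    return selected

# toy stand-in: the "measure" of a subset is the sum of per-feature weights
weights = {0: 0.5, 1: 0.3, 2: 0.15, 3: 0.05}
measure = lambda B: sum(weights[a] for a in B)
print(forward_selection([0, 1, 2, 3], measure, 0.1))  # -> [0, 1, 2]
```

Feature 3 is dropped because adding features 0, 1 and 2 already brings the measure within λ = 0.1 of the full set, mirroring how the algorithm discards redundant features.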

Multi-label feature selection algorithm based on fuzzy neighborhood rough sets
According to the relevant definitions in the third section, this paper constructs a multi-label feature selection algorithm based on fuzzy neighborhood rough sets. To clarify the proposed algorithm, the process of feature selection for multi-label classification is described by the framework shown in Fig. 1.
In Algorithm 1, a multi-label feature selection algorithm (MFSFN) based on fuzzy neighborhood rough sets is proposed. Assume that the multi-label fuzzy neighborhood decision system contains n samples, m features and t labels with |D| decision classes. The time complexity of computing the fuzzy similarity relation is O((1/2)n²m), which is the basis of the calculation of the fuzzy decision, whose complexity is O(tn|D|).

Experimental preparation
The main goal of feature selection is to select a smaller feature subset while achieving higher classification performance.
To prove the validity and classification performance of our method, we select ten multi-label datasets from four different fields from http://mulan.sourceforge.net/datasets.html and http://www.uco.es/kdis/mllresources/. The Flags dataset contains details of some countries and their flags; Cal500 is a music dataset composed of 502 songs; Emotions is about music fragments that can evoke emotions; Scene is an image dataset of natural scenes. The basic information of these datasets, including the size of the sample set, the dimensionality of the attribute set, the cardinality of the label set and the domains of the ten multi-label datasets, is demonstrated in Table 2, where LC denotes the label cardinality,

LC = (1/n) Σ_{i=1}^{n} Σ_{j=1}^{t} [d_j(x_i) = +1],

in which d_j(x_i) = +1 denotes that the sample x_i is associated with the label d_j; [·] equals 1 when the condition holds and 0 otherwise [52].
All the following experiments were performed using MATLAB R2016b on Windows 10, with an Intel(R) Core(TM) i5-8500 CPU at 3.00 GHz and 16.00 GB of memory. Two classifiers, MLKNN [52] and MLFE [53], are used to evaluate the classification performance of MFSFN. The smoothing factor is set to 1, and the size of the nearest neighborhood K is set to 10 in MLKNN and MLFE [54]. We select several common evaluation indexes of multi-label classification to evaluate the classification performance of our proposed method in multi-label learning, including the number of selected features (N), average precision (AP), coverage (CV), Hamming loss (HL), one error (OE), ranking loss (RL), macro-averaging F1 (MacF1) and micro-averaging F1 (MicF1) [25,36,40,54]; each of these indexes measures a different aspect of the classification performance. The higher the values of AP, MacF1 and MicF1 are, the better the classification performance is; the lower the values of CV, OE, RL and HL are, the better the classification performance is. In the following experimental results, "↑" represents "the larger the better", and "↓" represents "the smaller the better". Numbers in bold indicate that the algorithm is better than the other algorithms in the corresponding index.

Parameter discussion
Since the parameters δ and λ affect the classification performance of MFSFN, in this subsection we demonstrate the influence of the parameters on the feature selection results in order to obtain the best classification results. The parameter δ represents the fuzzy neighborhood radius, and the parameter λ is the threshold that controls the selection of the feature subset. In this paper, we set the variation range of δ to [0, 0.5] with a step size of 0.05, and the variation range of λ to [0, 1] with a step size of 0.05. As shown in Figs. 2 and 3, the X-axis refers to the neighborhood radius δ, and the Y-axis refers to the threshold λ. We select the Scene dataset to demonstrate the training process of our proposed algorithm MFSFN, that is, the selection of the parameters δ and λ under the two classifiers MLKNN and MLFE; the resulting parameters are listed in Tables 3 and 4. The first portion analyzes the change of the evaluation indexes with the parameters under the classifier MLKNN. Figure 2 illustrates the change of each evaluation index with the parameters on the Scene dataset. For the Scene dataset, when δ = 0.15 and λ = 0.65, the five evaluation indexes AP, CV, RL, OE and N are the most appropriate. Therefore, δ = 0.15 and λ = 0.65 are taken as the best parameters on the Scene dataset. The same process is used to obtain the best parameters for the other nine datasets in Table 2. The parameter values and evaluation index values are displayed in Table 3.
The second portion of this subsection analyzes the change of the evaluation indexes with the parameters under the classifier MLFE. Figure 3 demonstrates the change of each evaluation index with the parameters on the Scene dataset. For the Scene dataset, when δ = 0.05 and λ = 1, the eight evaluation indexes N, AP, HL, CV, OE, RL, MacF1 and MicF1 reach their optimal values. Therefore, δ = 0.05 and λ = 1 are taken as the best parameters on the Scene dataset, and the same procedure is used to obtain the best parameters for the other nine datasets in Table 2.
The parameter values and each evaluation index value are shown in Table 4.

Comparison results of methods under MLKNN
This subsection presents the comparison results of our proposed method with other related algorithms under MLKNN. First, our improved algorithm is compared with eight state-of-the-art multi-label feature selection algorithms on the Scene dataset, including MLNB [55], MDDMspc [56], MDDMproj [56], PMU [57], RF-ML [58], MFNMIopt [25], MFNMIneu [25] and MFNMIpes [25], in terms of AP, CV, HL and RL. The experimental techniques and results provided in [25] are used, where μ is set to 0.5 in MDDMspc. The parameters δ and λ of MFSFN take the optimal values in Table 3. Table 5 shows the experimental results of comparing MFSFN with the other eight algorithms on the Scene dataset. The AP value of MFSFN is optimal, 0.0117 higher than MFNMIopt. On the CV index, MFSFN achieves the lowest value among the algorithms, 0.0292 lower than MDDMspc. The RL value of MFSFN is lower than that of seven other algorithms, and MFSFN is 0.0043 lower than MLNB. In terms of HL, MFSFN ranks 2nd among the algorithms on the Scene dataset, only 0.0002 higher than MFNMIopt, but MFSFN has obvious advantages over MFNMIopt on the indexes AP, CV and RL. Obviously, for the Scene dataset, MFSFN achieves better results in each evaluation index compared with the other eight algorithms, and the validity of the selected parameters δ and λ is proved.
This part of the subsection adopts the classifier MLKNN and proves the validity of MFSFN in terms of N, AP, OE, CV and HL. Our method is compared with ParetoFS [59], ELA-CHI [60], PPT-CHI [61] and MUCO [62] on the Scene and Yeast datasets; the experimental techniques and results in reference [59] are used, as shown in Tables 6 and 7.
According to the experimental results in Table 6, the AP index of the proposed algorithm yields the most competitive performance among the five algorithms. On the CV index, MFSFN has obvious advantages over the other algorithms and is 0.6526 lower than ELA-CHI. On the OE index, the proposed method achieves higher performance than the other algorithms and is 0.2649 lower than ELA-CHI. On the HL index, the proposed algorithm obtains better results than the other algorithms, and MFSFN is 0.0934 lower than MUCO. The fewest features are selected by ParetoFS, but our proposed method performs considerably better than ParetoFS in terms of AP, CV, HL and OE. In Table 7, we can observe that the AP of the proposed algorithm has obvious advantages over the other algorithms on the Yeast dataset; MFSFN is at least 0.0023 and at most 0.0248 larger than the other algorithms. For CV, MFSFN achieves superior performance to the other algorithms except ParetoFS and ranks 2nd, but MFSFN performs better than ParetoFS in terms of AP, OE and HL. As a whole, our proposed algorithm MFSFN has better classification performance than the other algorithms on the Scene and Yeast datasets, and the validity of the selected parameters δ and λ is proved.
Then seven multi-label datasets, Flags, Yeast, Plant, Gnegative, Virus, BBC and Guardian, are selected from Table 2 for a series of experiments comparing the proposed algorithm MFSFN with six advanced related algorithms, including RF-ML, PMU, MDDMproj, MDDMspc, FSRD [63] and MFSMR [20]. The experimental techniques and results in reference [63] are used, and in reference [20] the number of missing labels is set to 0. The classification results in terms of AP, CV, OE and RL are demonstrated in Tables 8, 9, 10 and 11. In Table 8, the AP index of MFSFN shows obvious advantages over the other algorithms on most of the datasets and exhibits superior performance on four datasets: Flags, Yeast, Plant and Virus. As shown in Table 9, on the CV index, MFSFN has obvious advantages over the other five algorithms on the Yeast and Plant datasets. On the Gnegative dataset, MFSFN is inferior to MFSMR and MDDMproj, but it has obvious advantages over the other four algorithms. On the Virus dataset, the CV of MFSFN is 1.2530, in close proximity to the lowest CV value of FSRD, 1.2417, which shows that our method is competitive with the other methods. Additionally, MDDMproj, PMU and RF-ML do not outperform the other algorithms on any dataset. The CV value of MFSFN on the Guardian dataset is slightly worse than that of FSRD and ranks 2nd. In short, our proposed method is superior to the other algorithms in most cases.
As seen from Table 10, on the Yeast and Plant datasets, the RL of the proposed method is obviously better than that of the other algorithms. On the Virus dataset, MFSFN is 0.0186 lower than MDDMspc and 0.0031 lower than RF-ML. On the Gnegative and Guardian datasets, the RL value of MFSFN ranks 2nd. It is clear that when MFSFN ranks 2nd it is slightly inferior to FSRD or MFSMR but better than the other five algorithms.
As shown in Table 11, the OE index of MFSFN exhibits superior performance against the other algorithms on three datasets: Flags, Plant and Virus. On the Yeast dataset, the best OE performance is achieved by FSRD, and our method is only 0.0072 larger than FSRD. On the BBC dataset, MFSFN is 0.268 larger than the lowest value, achieved by MFSMR, and ranks 2nd. On the Guardian dataset, the proposed algorithm is slightly inferior to MDDMspc and RF-ML, but MFSFN is about 0.037 lower than PMU. On the whole, our proposed method compares fairly well with the other algorithms. Comprehensive analysis of Tables 8, 9, 10 and 11 shows that our algorithm has higher classification performance than the other algorithms in AP, CV, RL and OE.
To verify the validity and stability of the proposed algorithm MFSFN, experimental comparisons of multi-label classification on the selected features are carried out by fivefold cross-validation. We select four multi-label datasets of different fields from Table 2: the Yeast, Emotions, Scene and Cal500 datasets. The proposed algorithm MFSFN is compared with MUCO, MDDMproj, MDDMspc, PMU, MFS-KA [64] and RFNMIFS [39] on the four multi-label datasets. The six comparison algorithms verify the validity of our proposed algorithm using the AP, CV, OE, RL and HL measures, and the experimental techniques and results in the literature [39] are used. The classification results are demonstrated in Tables 12, 13, 14, 15 and 16. From Table 12, the AP index of MFSFN apparently outperforms the other algorithms on the four datasets Yeast, Emotions, Cal500 and Scene; for example, on the Scene dataset, the maximum value of MFSFN is 0.0099 lower than that of MDDMspc, while the minimum value of MFSFN is 0.0941 higher than that of MDDMspc. Thus, MFSFN obtains better classification performance than the other algorithms on AP. It can be seen from Table 13 that the CV value of MFSFN has a significant advantage over the other algorithms on three datasets: Yeast, Emotions and Scene. On the Cal500 dataset, the proposed algorithm MFSFN is 0.0917 higher than the minimum value of RFNMIFS, but the maximum value of MFSFN is 0.6717 lower than that of RFNMIFS, so MFSFN is more stable than RFNMIFS. As shown in Table 14, for the Yeast, Emotions and Scene datasets, MFSFN achieves the lowest mean OE values. On the Cal500 dataset, the lowest value of RFNMIFS is 0.0143 lower than that of MFSFN, but the highest value of RFNMIFS is 0.0047 higher than that of MFSFN, which proves that the stability of MFSFN is stronger than that of the other algorithms.
It can be seen from Table 15 that the RL of MFSFN is significantly better than that of the other six algorithms and obtains satisfactory results on the four datasets. From Table 16, the HL of MFSFN is better than that of the other algorithms on the four datasets Yeast, Scene, Emotions and Cal500. The results show that our algorithm can not only eliminate redundant features on the four datasets, but also achieve better performance than the other six algorithms in terms of AP, CV, OE, RL and HL.

Comparison results of methods under MLFE
This subsection illustrates the performance of the proposed method by comparison with other methods under the classifier MLFE. We select three datasets from Table 2: Flags, Yeast and Scene. MFSFN is compared with six state-of-the-art multi-label feature selection methods, namely PCT-CHI2 [19], CSFS [65], SFUS [66], Avg.CHI [67], MCLS [54] and RFNMIFS, on these three multi-label datasets. The algorithm MFSFN is evaluated in terms of AP, CV, OE, RL, MacF1 and MicF1, using the experimental techniques and results in reference [39], as shown in Tables 17, 18 and 19. MFSFN prevails over the other algorithms in the optimal mean values on each evaluation index. It can be seen from Table 17 that the six metrics of MFSFN are better than those of the other algorithms on the Flags dataset. The CV, MacF1 and MicF1 values of MFSFN have obvious advantages over the other six algorithms. On the RL index, MFSFN is 0.0172 higher than the lowest value of RFNMIFS and 0.0112 lower than its highest value. On the whole, MFSFN has better classification performance on the Flags dataset. On the Yeast dataset (Table 18), MFSFN is higher than the lowest value of RFNMIFS but 0.0216 lower than its highest value, so MFSFN is more stable than the other algorithms. It can be seen from Table 19 that the six indicators of MFSFN are significantly better than those of the other algorithms on the Scene dataset. Based on the above analysis of the classification results on the three datasets under the MLFE classifier, the MFSFN algorithm can not only effectively eliminate redundant features of the three datasets, but also achieve higher classification performance than the other algorithms.

Statistical analysis
To systematically analyze the classification performance of MFSFN and to intuitively display the statistical performance of each evaluation index under the various comparison algorithms, the Friedman test [68] and the Bonferroni-Dunn test [69] are used in this section. The Friedman statistic is computed as

$$\chi_F^2 = \frac{12T}{M(M+1)}\left[\sum_{i=1}^{M} R_i^2 - \frac{M(M+1)^2}{4}\right], \qquad F_F = \frac{(T-1)\chi_F^2}{T(M-1)-\chi_F^2},$$

where M and T are the numbers of methods and datasets, respectively, and R_i is the average rank of the i-th method over all datasets. In the Bonferroni-Dunn test, the average rank difference between methods is calculated to evaluate whether there are significant differences between them. The critical difference is expressed as

$$CD = q_\alpha \sqrt{\frac{M(M+1)}{6T}},$$

where q_α indicates the critical tabulated value of the test and α represents the significance level. Following the statistical tests in references [36,70], the mean rank over all datasets is obtained by averaging the ranks on each metric: the optimal value under each index is assigned rank 1, the second best rank 2, and so on. A CD diagram is used to visually display the relationship between MFSFN and the other algorithms: the average rank of each method is drawn along an axis, with rank values increasing from left to right. MFSFN and a compared algorithm are linked by a thick line if their mean rank difference is within one critical difference, indicating that there is no significant difference between them; otherwise, any algorithm that is not connected is considered significantly different from the others. From the classification results in Tables 8, 9, 10 and 11, we obtain the average rankings of MFSFN and the six comparison algorithms on the four metrics AP, CV, RL and OE under the MLKNN classifier; the corresponding F_F values are demonstrated in Table 20.
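As a concrete illustration, the Friedman statistic and critical difference described above can be computed directly. The sketch below is a generic implementation of these standard formulas, not the authors' code; the default q_alpha = 2.394 is the tabulated Bonferroni-Dunn value for seven methods at α = 0.1, as used in this section.

```python
import math

def friedman_statistics(rank_table):
    """rank_table[t][i] is the rank of method i on dataset t.
    Returns the Friedman chi-square and the derived F_F statistic."""
    T, M = len(rank_table), len(rank_table[0])
    avg_ranks = [sum(row[i] for row in rank_table) / T for i in range(M)]
    chi2 = 12 * T / (M * (M + 1)) * (
        sum(r * r for r in avg_ranks) - M * (M + 1) ** 2 / 4)
    f_f = (T - 1) * chi2 / (T * (M - 1) - chi2)
    return chi2, f_f

def critical_difference(M, T, q_alpha=2.394):
    """Bonferroni-Dunn critical difference for M methods and T datasets."""
    return q_alpha * math.sqrt(M * (M + 1) / (6 * T))

# The three settings used in this section's analysis:
print(round(critical_difference(7, 7), 4))  # 2.7644 (7 methods, 7 datasets)
print(round(critical_difference(7, 4), 4))  # 3.6569 (7 methods, 4 datasets)
print(round(critical_difference(7, 3), 4))  # 4.2226 (7 methods, 3 datasets)
```

These reproduce the CD values 2.7644, 3.6569 and 4.2226 reported for the MLKNN and MLFE experiments.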
When the significance level α = 0.1, each indicator rejects the null hypothesis that the seven algorithms have the same performance under the Friedman test. In this case, q_α = 2.394 and CD = 2.7644 (M = 7, T = 7). The accuracy comparison of the seven algorithms by the Bonferroni-Dunn test is demonstrated in Fig. 4, from which it can be seen that MFSFN is significantly better than the other algorithms on the AP and OE evaluation indicators. From Fig. 4a, for the metric AP, the proposed algorithm has obvious advantages over PMU, MDDMspc, MDDMproj and RF-ML, while there is no significant difference between MFSFN and the algorithms FSRD and MFSMR. As can be seen from Fig. 4d, the OE index of MFSFN is distinctly better than that of the other algorithms, and the distinction among the performance of FSRD, MFSMR, RF-ML, PMU and MDDMspc is insignificant. To sum up, the proposed algorithm has better classification performance than the other algorithms. From the classification results illustrated in Tables 12, 13, 14, 15 and 16, we obtain the average rankings of the proposed method and the six comparison algorithms on the five metrics AP, CV, HL, OE and RL under the MLKNN classifier; the corresponding F_F values are displayed in Table 21. When the significance level α = 0.1, each indicator rejects the null hypothesis that the seven algorithms have the same performance under the Friedman test. In this case, q_α = 2.394 and CD = 3.6569 (M = 7, T = 4). The accuracy comparison of the seven algorithms by the Bonferroni-Dunn test is demonstrated in Fig. 5, which shows that MFSFN is significantly better than the other algorithms on each index. Fig. 5a illustrates that, in terms of AP, MFSFN performs significantly better than the four algorithms PMU, MDDMspc, MDDMproj and MUCO, and obtains comparable results against MFS-KA and RFNMIFS.
As can be seen from Fig. 5b, d, the CV and RL of MFSFN outperform those of PMU, MUCO and MDDMproj and are comparable to those of MDDMspc, MFS-KA and RFNMIFS, although there is no full evidence to demonstrate a statistical equivalence with RFNMIFS, MFS-KA, MDDMspc, MDDMproj, MUCO and PMU. As can be seen from Fig. 5c, for the OE index, MFSFN is significantly better than the other algorithms and comparable to RFNMIFS, MFS-KA and MDDMspc; there is no consistent evidence to indicate a statistical equivalence with RFNMIFS, MFS-KA, MDDMspc, PMU and MDDMproj, nor concrete evidence of a significant difference among MFS-KA, MDDMspc, PMU, MDDMproj and MUCO. It can be seen from Fig. 5e that the HL index of MFSFN is better than that of MDDMspc, MDDMproj, MUCO and PMU. In general, MFSFN has strong classification performance compared with the other algorithms under the classifier MLKNN.
The classification results in Tables 17, 18 and 19 are statistically tested under the classifier MLFE, and the F_F values of the six metrics are listed in Table 22. When α = 0.1, q_α = 2.394 and CD = 4.2226 (M = 7, T = 3). The test results are demonstrated in Fig. 6. As can be seen from Fig. 6a, on the AP index, MFSFN performs better than PCT-CHI2 and CSFS and is comparable to RFNMIFS, MCLS, SFUS and Avg.CHI. As can be seen from Fig. 6b, e, there is not enough evidence to suggest a statistical equivalence among MFSFN and RFNMIFS, MCLS, SFUS, CSFS and Avg.CHI in terms of CV and MacF1, while MFSFN is significantly superior to PCT-CHI2. As can be seen from Fig. 6c, there is no obvious difference between MFSFN and RFNMIFS, MCLS, SFUS and CSFS on the OE index, and MFSFN is superior to Avg.CHI and PCT-CHI2. As can be seen from Fig. 6d, on the RL index, MFSFN is comparable to RFNMIFS, MCLS, SFUS and PCT-CHI2, and performs better than Avg.CHI and CSFS. As can be seen from Fig. 6f, in terms of MicF1, MFSFN is comparable to RFNMIFS, MCLS, SFUS, PCT-CHI2 and Avg.CHI, and is significantly superior to CSFS. Therefore, under the classifier MLFE, the algorithm MFSFN has better overall performance than the other algorithms.

Conclusion
In this article, a multi-label feature selection method based on fuzzy neighborhood rough sets was developed by combining the information view with the algebraic view, which achieved high classification performance in the multi-label fuzzy neighborhood decision system. First, a new multi-label fuzzy neighborhood rough set model was proposed by combining NRS with FRS. Second, the fuzzy similarity matrix was obtained by computing the similarity between samples under different condition attributes, a new multi-label fuzzy decision was proposed, and the fuzzy neighborhood approximation accuracy was defined. Then, the fuzzy neighborhood conditional entropy was introduced according to the concept of information entropy in information theory, and a hybrid metric combining the fuzzy neighborhood approximation accuracy with the fuzzy neighborhood conditional entropy was designed to measure the importance of each attribute. Finally, a multi-label feature selection method based on fuzzy neighborhood rough sets was developed, and a novel forward search algorithm for multi-label feature selection was provided. A series of experiments on ten multi-label datasets verified the effectiveness of the proposed algorithm in multi-label classification. In our future work, we will seek multi-label feature selection methods with higher classification performance and more efficient search strategies.