
A variable precision multigranulation rough set model and attribute reduction

  • Published in: Soft Computing (Foundation, algebraic, and analytical methods in soft computing)

Abstract

As a useful extension of rough sets, multigranulation rough sets (MGRSs) can deal with a variety of complex data, and numerous significant advances have been achieved by generalizing them. However, most existing MGRS models are sensitive to misclassification and noise in data, and attribute reduction based on MGRSs has received little attention. To fill these gaps, this paper proposes an extended model of MGRSs, named variable precision multigranulation rough sets (VPMGRSs), by introducing the rough membership function and the approximation parameters of variable precision rough sets (VPRSs) into the multigranulation environment. After establishing some basic properties of VPMGRSs, we investigate the relationships between VPMGRSs and VPRSs, pessimistic MGRSs, and generalized MGRSs. In addition, several VPMGRS-based attribute reductions are introduced, and it is proved that some of them are equivalent when the model parameters meet specific requirements. Finally, we propose a heuristic algorithm for the \(\alpha \)-lower distribution reduct and illustrate its effectiveness and efficiency by comparative experiments on real datasets.
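The approximation operators themselves are short to compute. The following Python sketch (our own illustration; all names and the toy data are assumptions, not the paper's code) builds the \(\alpha \)-lower and \(\beta \)-upper approximations from the weighted-average rough membership form of Eqs. (8) and (9) used throughout the appendix proofs, with the standard Pawlak–Skowron rough membership function:

```python
def equiv_classes(U, attrs, info):
    """Partition U by the values of the attributes in attrs;
    info[x][a] is the value of attribute a on object x."""
    key = lambda x: tuple(info[x][a] for a in attrs)
    return {x: frozenset(y for y in U if key(y) == key(x)) for x in U}

def rough_membership(x, X, classes):
    """mu^{AT}_X(x) = |[x]_AT ∩ X| / |[x]_AT| (Pawlak-Skowron)."""
    block = classes[x]
    return len(block & X) / len(block)

def vpmgrs_approx(U, X, granulations, weights, info, alpha, beta):
    """alpha-lower and beta-upper approximations of X under the
    weighted-average multigranulation membership (0 <= beta < alpha <= 1)."""
    all_classes = [equiv_classes(U, AT, info) for AT in granulations]
    total = sum(weights)

    def avg_mu(x):
        return sum(w * rough_membership(x, X, cls)
                   for w, cls in zip(weights, all_classes)) / total

    lower = {x for x in U if avg_mu(x) >= alpha}   # Eq. (8)
    upper = {x for x in U if avg_mu(x) > beta}     # Eq. (9)
    return lower, upper

# Toy information system with two granulations {a} and {b} (hypothetical data).
U = [1, 2, 3, 4, 5, 6]
info = {1: {'a': 0, 'b': 0}, 2: {'a': 0, 'b': 1}, 3: {'a': 1, 'b': 0},
        4: {'a': 1, 'b': 1}, 5: {'a': 0, 'b': 0}, 6: {'a': 1, 'b': 1}}
X = {1, 2, 5}
lower, upper = vpmgrs_approx(U, X, [['a'], ['b']], [1, 1], info,
                             alpha=0.7, beta=0.3)
print(lower, upper)  # {1, 5} and {1, 2, 3, 5}
```

With equal weights, object 2 has average membership \((1+1/3)/2\approx 0.67<0.7\), so it drops out of the 0.7-lower approximation but survives in the 0.3-upper approximation, which is exactly the noise tolerance the model is designed for.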



Availability of data and materials

Not applicable.

Code Availability

Not applicable.


Funding

This work was supported by the National Natural Science Foundation of China under Grant 62172048.

Author information


Corresponding author

Correspondence to Ping Zhu.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A. Proof of Proposition 5

(1) For any \(x\in \sim X\) and \(AT_i\in \mathcal {A}\), it is obvious that \(\mu ^{{AT}_i} _X(x)<1\), which leads to \(\max \{\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\vert x\in \sim X\}<1\). Hence, \(\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\le \max \{\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\vert x\in \sim X\}<\alpha \), and it follows from Eq. (8) that \(x\notin {\underline{\mathcal {A}}}_\alpha (X)\). This means \(\sim X\subseteq \sim {\underline{\mathcal {A}}}_\alpha (X)\), which is equivalent to \({\underline{\mathcal {A}}}_\alpha (X)\subseteq X\).

(2) It is obvious that \(\mu ^{{AT}_i} _X(x)>0\) for any \(x\in X\) and \(AT_i\in \mathcal {A}\); therefore, \(\min \{\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\vert x\in X\}>0\). Since \(\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\ge \min \{\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\vert x\in X\}\ge \alpha \), we have \(x\in {\underline{\mathcal {A}}}_\alpha (X)\). Thus, \(X\subseteq {\underline{\mathcal {A}}}_\alpha (X)\).

(3) For any \(x\in X\) and \(AT_i\in \mathcal {A}\), the known condition gives \(\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\ge \min \{\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\vert x\in X\}>\beta \); therefore, \(x\in \overline{\mathcal {A}}_\beta (X)\). Thus, \(X\subseteq \overline{\mathcal {A}}_\beta (X)\).

(4) For any \(x\in \sim X\), we have \(\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\le \max \{\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\vert x\in \sim X\}\le \beta \); accordingly, \(x\notin \overline{\mathcal {A}}_\beta (X)\). Hence, \(\sim X\subseteq \sim \overline{\mathcal {A}}_\beta (X)\), which is equivalent to \(\overline{\mathcal {A}}_\beta (X)\subseteq X\).

(5) From the known condition, for any \(x\in U\) there exists \(AT_i\in \mathcal {A}\) such that \(\mu ^{{AT}_i} _X(x)>0\), and \(\min \{\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\vert x\in U\}>0\). Then \(\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i} \ge \min \{\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\vert x\in U\}\ge \alpha \), which means \(x\in {\underline{\mathcal {A}}}_\alpha (X)\). Hence, \(U\subseteq {\underline{\mathcal {A}}}_\alpha (X)\), i.e., \({\underline{\mathcal {A}}}_\alpha (X)=U\). By property (1) of Proposition 3, \(U={\underline{\mathcal {A}}}_\alpha (X)\subseteq \overline{\mathcal {A}}_\beta (X)\), and thus \(\overline{\mathcal {A}}_\beta (X)=U\).

(6) From the known condition, for any \(x\in U\) there exists \(AT_i\in \mathcal {A}\) such that \(\mu ^{{AT}_i} _X(x)<1\), and thus \(\max \{\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\vert x\in U\}<1\). It is obvious that \(\dfrac{\sum \nolimits _{i=1}^s \omega _i\mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\le \max \{\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\vert x\in U\}\le \beta \), which indicates \(x\notin {\overline{\mathcal {A}}}_\beta (X)\). Since x is arbitrary, \({\overline{\mathcal {A}}}_\beta (X)=\emptyset \). By property (1) of Proposition 3, \(\underline{\mathcal {A}}_\alpha (X)\subseteq \overline{\mathcal {A}}_\beta (X)\), and thus \({\underline{\mathcal {A}}}_\alpha (X)=\emptyset \).
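The inclusion \(\underline{\mathcal {A}}_\alpha (X)\subseteq \overline{\mathcal {A}}_\beta (X)\) from Proposition 3 (1), which the proofs of (5) and (6) invoke, can be spot-checked numerically. The sketch below is our own illustration (random partitions, illustrative names), assuming only the weighted-average membership form of Eqs. (8) and (9) and \(\alpha >\beta \):

```python
import random

random.seed(0)

def avg_membership(x, X, partitions, weights):
    """Weighted-average rough membership of x in X over several granulations."""
    total = sum(weights)
    s = 0.0
    for w, part in zip(weights, partitions):
        block = next(b for b in part if x in b)  # equivalence class [x]_{AT_i}
        s += w * len(block & X) / len(block)
    return s / total

def random_partition(U):
    """A random partition of U into at most 3 blocks."""
    labels = {x: random.randrange(3) for x in U}
    return [frozenset(y for y in U if labels[y] == k)
            for k in set(labels.values())]

U = set(range(20))
for _ in range(100):
    partitions = [random_partition(U) for _ in range(3)]
    weights = [random.random() + 0.1 for _ in range(3)]
    X = {x for x in U if random.random() < 0.5}
    alpha, beta = 0.6, 0.4  # requires alpha > beta
    lower = {x for x in U if avg_membership(x, X, partitions, weights) >= alpha}
    upper = {x for x in U if avg_membership(x, X, partitions, weights) > beta}
    assert lower <= upper  # Proposition 3 (1)
print("all 100 random checks passed")
```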

Appendix B. Proof of Proposition 10

(1) According to Eq. (8), if \(x\in \underline{\mathcal {A}}_1(X)\), then \(\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}\ge 1\), which means that \(\mu ^{{AT}_i} _X(x)=1\), i.e., \([x]_{{AT}_i}\subseteq X\), for every \(AT_i\in \mathcal {A}\). Hence \(x\in \underline{\mathcal {A}}^P(X)\), and thus \(\underline{\mathcal {A}}_1(X)\subseteq \underline{\mathcal {A}}^P(X)\). The converse inclusion holds by reversing the argument, so \(\underline{\mathcal {A}}_1(X)=\underline{\mathcal {A}}^P(X)\).

According to Eq. (9), if \(x\in \overline{\mathcal {A}}_0(X)\), then \(\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)}{\sum \nolimits _{i=1}^s \omega _i}> 0\) and \(\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _X(x)>0\). Thus, there must exist \(AT_i\in \mathcal {A}\) such that \([x]_{AT_i}\cap X\ne \emptyset \). Therefore, \(x\in \overline{\mathcal {A}}^P(X)\), and it can be concluded that \(\overline{\mathcal {A}}_0(X)\subseteq \overline{\mathcal {A}}^P(X)\). The converse inclusion holds by reversing the argument, so \(\overline{\mathcal {A}}_0(X)=\overline{\mathcal {A}}^P(X)\).

(2) From Proposition 10 (1), \(\underline{\mathcal {A}}^P(X)=\underline{\mathcal {A}}_1(X)\). In addition, for any \(\alpha \le 1\), Proposition 7 (1) yields \({\underline{\mathcal {A}}}_1(X)\subseteq {\underline{\mathcal {A}}}_\alpha (X)\), since Eq. (8) becomes less restrictive as \(\alpha \) decreases. Consequently, \({\underline{\mathcal {A}}}^P(X)\subseteq {\underline{\mathcal {A}}}_\alpha (X)\). The inclusion relationship between the upper approximations is obtained similarly.

Appendix C. Proof of Proposition 13

(1) (\(\Rightarrow \)) Let \(\mathcal {A}\) be an \(\alpha \)-lower distribution consistent set. For any \(x\in U\), if \(D_j\in \gamma _\mathcal {A}(x)\), then \(\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _{D_j}(x)}{\sum \nolimits _{i=1}^s \omega _i}\ge \alpha _\mathcal {A}\ge \alpha _0\ge \alpha \); therefore, \(x\in \underline{\mathcal {A}}_\alpha (D_j)\). It follows from \(\underline{\mathcal {A}}_\alpha (D_j)=\underline{\mathcal{AT}}_\alpha (D_j)\) that \(x\in \underline{\mathcal{AT}}_\alpha (D_j)\), which means \(\dfrac{\sum \nolimits _{i=1}^m \omega _i \mu ^{{AT}_i} _{D_j}(x)}{\sum \nolimits _{i=1}^m \omega _i}\ge \alpha >0.5\). Then, compared with the other decision classes, the average membership degree of x to \(D_j\) is maximal, so \(D_j\in \gamma _{\mathcal{AT}}(x)\). Thus, \(\gamma _\mathcal {A}(x)\subseteq \gamma _{\mathcal{AT}}(x)\), and \(\gamma _{\mathcal{AT}}(x)\subseteq \gamma _\mathcal {A}(x)\) can be proved in the same way. In conclusion, \(\gamma _{\mathcal{AT}}(x)=\gamma _\mathcal {A}(x)\) for any \(x\in U\); namely, \(\mathcal {A}\) is a maximum distribution consistent set.

(\(\Leftarrow \)) Let \(\mathcal {A}\) be a maximum distribution consistent set. For any \(D_j\in U/D\), if \(x\in \underline{\mathcal {A}}_\alpha (D_j)\), then \(\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _{D_j}(x)}{\sum \nolimits _{i=1}^s \omega _i}\ge \alpha >0.5\) and \(D_j\in \gamma _\mathcal {A}(x)\). Since \(\mathcal {A}\) is a maximum distribution consistent set, \(\gamma _{\mathcal{AT}}(x)=\gamma _\mathcal {A}(x)\), and hence \(D_j\in \gamma _{\mathcal{AT}}(x)\). Thus, \(\dfrac{\sum \nolimits _{i=1}^m \omega _i \mu ^{{AT}_i} _{D_j}(x)}{\sum \nolimits _{i=1}^m \omega _i}\ge \alpha _{\mathcal{AT}}\ge \alpha _0\ge \alpha \) and \(x\in \underline{\mathcal{AT}}_\alpha (D_j)\). Therefore, \(\underline{\mathcal {A}}_\alpha (D_j)\subseteq \underline{\mathcal{AT}}_\alpha (D_j)\); the reverse inclusion \(\underline{\mathcal{AT}}_\alpha (D_j)\subseteq \underline{\mathcal {A}}_\alpha (D_j)\) can be proved similarly. In conclusion, \(\underline{\mathcal{AT}}_\alpha (D_j)=\underline{\mathcal {A}}_\alpha (D_j)\) for all \(D_j\in U/D\); that is, \(\mathcal {A}\) is an \(\alpha \)-lower distribution consistent set.

(2) can be easily proved by (1).

Appendix D. Proof of Proposition 14

(1) (\(\Rightarrow \)) Let \(\mathcal {A}\) be a \(\beta \)-upper distribution consistent set. For any \(x \in U\), if \(D_j\in \delta _\mathcal {A}(x)\), then \(\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _{D_j}(x)}{\sum \nolimits _{i=1}^s \omega _i}\ge \beta _\mathcal {A}\ge \beta _0>\beta \) and \(x\in \overline{\mathcal {A}}_\beta (D_j)\). It follows from \(\overline{\mathcal{AT}}_\beta (D_j)=\overline{\mathcal {A}}_\beta (D_j)\) that \(x\in \overline{\mathcal{AT}}_\beta (D_j)\). Thus \(\dfrac{\sum \nolimits _{i=1}^m \omega _i \mu ^{{AT}_i} _{D_j}(x)}{\sum \nolimits _{i=1}^m \omega _i}>\beta \ge 0\), which means \(D_j\in \delta _{\mathcal{AT}}(x)\). Therefore, \( \delta _\mathcal {A}(x)\subseteq \delta _{\mathcal{AT}}(x)\), and \( \delta _{\mathcal{AT}}(x)\subseteq \delta _\mathcal {A}(x)\) can be proved similarly. In conclusion, \( \delta _\mathcal {A}(x)=\delta _{\mathcal{AT}}(x)\) for any \(x\in U\); that is, \(\mathcal {A}\) is a possible consistent set.

(\(\Leftarrow \)) Let \(\mathcal {A}\) be a possible consistent set. For any \(D_j\in U/D\), if \(x\in \overline{\mathcal {A}}_\beta (D_j)\), then \(\dfrac{\sum \nolimits _{i=1}^s \omega _i \mu ^{{AT}_i} _{D_j}(x)}{\sum \nolimits _{i=1}^s \omega _i}>\beta \ge 0\) and \(D_j\in \delta _\mathcal {A}(x)\). By the known condition \( \delta _{\mathcal{AT}}(x)=\delta _\mathcal {A}(x)\), we have \(D_j\in \delta _{\mathcal{AT}}(x)\), which indicates \(\dfrac{\sum \nolimits _{i=1}^m \omega _i \mu ^{{AT}_i} _{D_j}(x)}{\sum \nolimits _{i=1}^m \omega _i}>0\). Therefore, \(\dfrac{\sum \nolimits _{i=1}^m \omega _i \mu ^{{AT}_i} _{D_j}(x)}{\sum \nolimits _{i=1}^m \omega _i}\ge \beta _{\mathcal{AT}}\ge \beta _0>\beta \) and \(x\in \overline{\mathcal{AT}}_\beta (D_j)\). Thus \(\overline{\mathcal {A}}_\beta (D_j)\subseteq \overline{\mathcal{AT}}_\beta (D_j)\); the reverse inclusion \(\overline{\mathcal{AT}}_\beta (D_j)\subseteq \overline{\mathcal {A}}_\beta (D_j)\) can be proved in the same way. Consequently, \(\overline{\mathcal{AT}}_\beta (D_j)=\overline{\mathcal {A}}_\beta (D_j)\), i.e., \(\mathcal {A}\) is a \(\beta \)-upper distribution consistent set.

(2) can be easily proved by (1).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chen, J., Zhu, P. A variable precision multigranulation rough set model and attribute reduction. Soft Comput 27, 85–106 (2023). https://doi.org/10.1007/s00500-022-07566-y

