Using Positive Region to Reduce the Computational Complexity of Discernibility Matrix Method

  • Feng Honghai
  • Zhao Shuo
  • Liu Baoyan
  • He LiYun
  • Yang Bingru
  • Li Yueli
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4031)

Abstract

The rough set discernibility matrix method is a valid method for attribute reduction. However, it is an NP-hard problem, and although several methods have been proposed to mitigate this, the improvement remains limited. We observe that the idea of the discernibility matrix can be applied not only to the whole data set but also to part of it, so we present a new algorithm that reduces the computational complexity. Firstly, select the condition attribute C with the largest dependency measure γ(C, D), where D is the decision attribute. Secondly, build a discernibility matrix over only the examples in the non-positive region and derive an attribute reduction from it. Thirdly, combine the attributes produced in the two steps into the final attribute reduction set. Additionally, we give a proof of the rationality of our method. The larger the positive region is, the more the complexity is reduced. Experimental results on four data sets indicate that the computational complexity is reduced by 67%, 83%, 41%, and 30% respectively, while the reduced attribute sets are identical to those produced by the standard discernibility matrix method.
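The first step of the algorithm, selecting the condition attribute with the largest dependency degree γ(C, D), can be sketched as follows. This is a minimal illustration only; the toy decision table, attribute names (`a`, `b`, `d`), and helper functions are assumptions for the sketch, not the authors' implementation:

```python
from collections import defaultdict

def positive_region(rows, cond_attrs, dec_attr):
    """Indices of examples whose condition-equivalence class is
    consistent on the decision attribute (rough-set positive region)."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in cond_attrs)].append(i)
    pos = set()
    for members in classes.values():
        decisions = {rows[i][dec_attr] for i in members}
        if len(decisions) == 1:          # class is decision-consistent
            pos.update(members)
    return pos

def gamma(rows, cond_attrs, dec_attr):
    """Degree of dependency gamma(C, D) = |POS_C(D)| / |U|."""
    return len(positive_region(rows, cond_attrs, dec_attr)) / len(rows)

# Toy decision table: condition attributes 'a', 'b'; decision 'd'.
table = [
    {'a': 0, 'b': 0, 'd': 0},
    {'a': 0, 'b': 1, 'd': 0},
    {'a': 1, 'b': 0, 'd': 1},
    {'a': 1, 'b': 1, 'd': 1},
    {'a': 1, 'b': 1, 'd': 0},   # makes the a=1 class inconsistent
]

# Step 1: pick the condition attribute with the largest gamma(C, D).
best = max(['a', 'b'], key=lambda c: gamma(table, [c], 'd'))

# Step 2 would build the discernibility matrix only over the examples
# outside the positive region of the chosen attribute:
nonpos = [i for i in range(len(table))
          if i not in positive_region(table, [best], 'd')]
```

Because the discernibility matrix is quadratic in the number of examples it covers, restricting it to `nonpos` is exactly where the paper's complexity savings come from: the larger the positive region, the fewer example pairs remain.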

Keywords

Decision Attribute, Disjunctive Normal Form, Forest Cover Type, Discernibility Matrix, Chinese Character Recognition

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Feng Honghai 1,2
  • Zhao Shuo 1
  • Liu Baoyan 3
  • He LiYun 3
  • Yang Bingru 2
  • Li Yueli 1

  1. Hebei Agricultural University, Baoding, China
  2. University of Science and Technology Beijing, Beijing, China
  3. China Academy of Traditional Chinese Medicine, Beijing, China
