
Fuzziness and Performance: An Empirical Study with Linguistic Decision Trees

  • Zengchang Qin
  • Jonathan Lawry
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4529)

Abstract

Generally, there are two main streams of theories for studying uncertainty: probability theory and fuzzy set theory. One of the basic questions in fuzzy set theory is how to define and interpret membership functions. In this paper, we study a tree-structured data mining model based on a new interpretation of fuzzy theory in which fuzzy labels are used for modelling. The membership function is interpreted as the appropriateness degree of using a label to describe a fuzzy concept, and each fuzzy concept is modelled by a distribution over the appropriate fuzzy label sets. Previous work has shown that the new model outperforms some well-known data mining models such as Naive Bayes and decision trees. However, the fuzzy labels used in previous work were predefined. Here we study how fuzzy labels with different degrees of overlap influence performance. We test the model on a series of UCI datasets, and the results show that its performance increases almost monotonically with the degree of overlap between fuzzy labels. For this empirical study with the LDT model, we conclude that more fuzziness implies better performance.
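
To make the relationship between label overlap and appropriateness degrees concrete, the following is a minimal Python sketch (not the authors' code): it builds evenly spaced trapezoidal fuzzy labels whose mutual overlap is controlled by a single parameter and computes the appropriateness degree of each label for a given attribute value. The label layout, the attribute range, and the `overlap` parameter are illustrative assumptions, not the exact discretisation used in the paper.

# Minimal sketch (assumptions, not the authors' implementation):
# evenly spaced trapezoidal fuzzy labels on a numeric attribute,
# with a single parameter controlling how much neighbouring labels overlap.

def make_labels(lo, hi, n_labels, overlap):
    """Return n_labels trapezoids (a, b, c, d) covering [lo, hi].

    overlap in [0, 1]: 0 gives crisp, non-overlapping intervals;
    larger values widen each label's support so neighbours overlap more.
    """
    width = (hi - lo) / n_labels
    labels = []
    for i in range(n_labels):
        core_lo = lo + i * width
        core_hi = core_lo + width
        spread = overlap * width  # how far the support extends beyond the core
        labels.append((core_lo - spread, core_lo, core_hi, core_hi + spread))
    return labels

def appropriateness(x, label):
    """Degree to which the label is appropriate for describing value x."""
    a, b, c, d = label
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    if c < x < d:
        return (d - x) / (d - c)
    return 0.0

# Example: three labels ("small", "medium", "large") on [0, 30] with 50% overlap.
labels = make_labels(0.0, 30.0, 3, overlap=0.5)
degrees = [appropriateness(12.0, lab) for lab in labels]
print(degrees)  # [0.6, 1.0, 0.0]: x = 12 is fully "medium" and partly "small"

With overlap = 0 the labels reduce to crisp intervals, the least fuzzy setting; increasing the parameter widens each label's support so that a single value is described, to varying degrees, by several neighbouring labels, which is the kind of increasing fuzziness the experiments vary.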

Keywords

Information Gain · Fuzzy Concept · Focal Element · Classical Decision Tree · Label Semantics



Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • Zengchang Qin (1)
  • Jonathan Lawry (2)
  1. Berkeley Initiative in Soft Computing, Computer Science Division, EECS Department, University of California, Berkeley, CA 94720, USA
  2. Artificial Intelligence Group, Engineering Mathematics Department, University of Bristol, Bristol BS8 1TR, UK
