IWLCS 2003, IWLCS 2004, IWLCS 2005: Learning Classifier Systems, pp. 308–332

Using XCS to Describe Continuous-Valued Problem Spaces

  • David Wyatt
  • Larry Bull
  • Ian Parmee
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4399)

Abstract

Learning classifier systems have previously been shown to have some application in single-step tasks. This paper extends work in the area by applying the classifier system to progressively more complex multi-modal test environments, each exhibiting typical search-space characteristics: convex and non-convex regions of high performance and complex interplay between variables. In particular, two test environments are used to investigate the effects of different degrees of feature sampling, parameter sensitivity, training-set size and rule subsumption. Results show that XCSR is able to deduce the characteristics of such problem spaces to a suitable level of accuracy. This paper provides a foundation for the possible use of XCS as an exploratory tool that can provide information about conceptual design spaces, enabling a designer to identify the best direction for further investigation as well as to obtain a better representation of their design problem through redefinition and reformulation of the design space.
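Since XCSR's handling of continuous-valued inputs underpins these results, the sketch below illustrates the kind of interval condition it uses in place of XCS's ternary alphabet (centre-spread in Wilson's original formulation): each rule carries a centre and spread per input dimension, a point matches when it falls inside every interval, and covering creates a new rule centred on an unmatched point. This is a minimal illustration only, not the authors' implementation; the class and function names are ours, `spread_max` stands in for XCSR's covering-spread parameter, and a full XCSR additionally maintains prediction, error and fitness estimates and evolves rules with a genetic algorithm.

```python
import random

class Rule:
    """One XCSR-style classifier: an interval per input dimension plus a
    predicted class label (e.g. high- vs. low-performance region)."""
    def __init__(self, centers, spreads, action):
        self.centers = centers   # interval centre per input dimension
        self.spreads = spreads   # interval half-width per input dimension
        self.action = action     # class predicted for matching points

    def matches(self, x):
        # A point matches when every coordinate lies in [centre - spread, centre + spread].
        return all(c - s <= xi <= c + s
                   for xi, c, s in zip(x, self.centers, self.spreads))

def cover(x, action, spread_max=0.5):
    """Covering: build a rule whose intervals are centred on an unmatched point."""
    spreads = [random.uniform(0.0, spread_max) for _ in x]
    return Rule(list(x), spreads, action)

# Usage on a two-dimensional design space scaled to [0, 1] per variable.
rule = Rule(centers=[0.3, 0.7], spreads=[0.1, 0.2], action="high")
print(rule.matches([0.35, 0.60]))   # True: inside both intervals
print(rule.matches([0.50, 0.60]))   # False: 0.50 lies outside [0.2, 0.4]
new_rule = cover([0.9, 0.1], action="low")
```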

Keywords

Training Dataset · Test Dataset · Minority Class · Order Interval · Learning Classifier System

Copyright information

© Springer Berlin Heidelberg 2007

Authors and Affiliations

  • David Wyatt (1)
  • Larry Bull (1)
  • Ian Parmee (1)

  1. University of the West of England, Faculty of Computing, Engineering & Mathematical Sciences, Frenchay Campus, Bristol BS16 1QY
