NXCS Experts for Financial Time Series Forecasting

Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 150)


Most early machine learning approaches, e.g., Decision Lists (DL) [1], [2], Decision Trees (DT) [3], Counterfactuals (CFs) [4], and Classification And Regression Trees (CART) [5], apply the divide-and-conquer principle: the input space is recursively partitioned until regions of roughly constant class membership are obtained. The corresponding algorithms yield a monolithic result, enforcing heuristics devised to control the complexity of the search. Beyond this customary interpretation, however, they can also be viewed from a different perspective, in which the partitioning procedure serves as a tool for generating multiple experts. Although with different aims, both the evolutionary-computation and the connectionist communities made this multiple-experts perspective explicit. In the former, the focus was on devising architectures and techniques able to enforce an adaptive behavior on a population of individuals, e.g., Genetic Algorithms (GAs) [6], [7], Learning Classifier Systems (LCSs) [8], [9], and eXtended Classifier Systems (XCSs) [10]. In the latter, the focus was mainly on training techniques and output-combination mechanisms; in particular, let us recall Jordan and Jacobs' Mixtures of Experts (MEs) [11], [12] and Weigend's Gated Experts (GEs) [13].
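The output-combination mechanism behind Mixtures of Experts [11] can be sketched as a softmax gate that assigns input-dependent mixing coefficients to the predictions of local experts. The toy experts and gate parameters below are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixture_predict(x, experts, gate_weights):
    """Combine expert outputs with input-dependent gating weights.

    experts:      list of callables, each mapping x -> scalar prediction
    gate_weights: (n_experts, n_features) array; the gate scores each
                  expert linearly in the input, and a softmax turns the
                  scores into mixing coefficients that sum to one.
    """
    scores = gate_weights @ x                # one score per expert
    g = softmax(scores)                      # mixing coefficients
    preds = np.array([f(x) for f in experts])
    return float(g @ preds)                  # convex combination of outputs

# Two hypothetical local experts, each specialised on part of the input space.
experts = [lambda x: 2.0 * x[0], lambda x: -1.0 * x[0]]
gate_w = np.array([[5.0], [-5.0]])           # gate favours expert 0 when x > 0

y = mixture_predict(np.array([1.0]), experts, gate_w)
```

In a trained ME, both the experts and the gate parameters are fit jointly; here they are fixed by hand so that the gate nearly always selects the first expert on positive inputs.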






References

  1. Rivest, R.L.: Learning Decision Lists. Machine Learning 2(3) (1987) 229–246
  2. Clark, P., Niblett, T.: The CN2 Induction Algorithm. Machine Learning 3(4) (1989) 261–283
  3. Quinlan, J.R.: Induction of Decision Trees. Machine Learning 1 (1986) 81–106
  4. Vere, S.A.: Multilevel Counterfactuals for Generalizations of Relational Concepts and Productions. Artificial Intelligence 14(2) (1980) 139–164
  5. Breiman, L., Friedman, J., Olshen, R., Stone, C.: Classification and Regression Trees. Wadsworth, Belmont, CA (1984)
  6. Holland, J.H.: Adaptation in Natural and Artificial Systems. University of Michigan Press (1975)
  7. Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley (1989)
  8. Holland, J.H.: Adaptation. In: Rosen, R., Snell, F.M. (eds.): Progress in Theoretical Biology 4, New York (1976)
  9. Holland, J.H.: Escaping Brittleness: The Possibilities of General-Purpose Learning Algorithms Applied to Parallel Rule-Based Systems. In: Michalski, R.S., Carbonell, J., Mitchell, T.M. (eds.): Machine Learning II, Morgan Kaufmann (1986) 593–623
  10. Wilson, S.W.: Classifier Fitness Based on Accuracy. Evolutionary Computation 3(2) (1995) 149–175
  11. Jacobs, R.A., Jordan, M.I., Nowlan, S.J., Hinton, G.E.: Adaptive Mixtures of Local Experts. Neural Computation 3 (1991) 79–87
  12. Jordan, M.I., Jacobs, R.A.: Hierarchies of Adaptive Experts. In: Moody, J., Hanson, S., Lippman, R. (eds.): Advances in Neural Information Processing Systems 4, Morgan Kaufmann (1992) 985–993
  13. Weigend, A.S., Mangeas, M., Srivastava, A.N.: Nonlinear Gated Experts for Time Series: Discovering Regimes and Avoiding Overfitting. Int. Journal of Neural Systems 6 (1995) 373–399
  14. Valiant, L.: A Theory of the Learnable. Communications of the ACM 27 (1984) 1134–1142
  15. Vapnik, V.N.: Statistical Learning Theory. John Wiley and Sons, New York (1998)
  16. Krogh, A., Vedelsby, J.: Neural Network Ensembles, Cross Validation, and Active Learning. In: Tesauro, G., Touretzky, D., Leen, T. (eds.): Advances in Neural Information Processing Systems 7, MIT Press (1995)
  17. Breiman, L.: Stacked Regressions. Machine Learning 24 (1996) 41–48
  18. Freund, Y., Schapire, R.E.: A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting. Journal of Computer and System Sciences 55(1) (1997) 119–139
  19. Schapire, R.E.: A Brief Introduction to Boosting. Proc. of the Sixteenth Int. Joint Conference on Artificial Intelligence (1999)
  20. Sun, R., Peterson, T.: Multi-Agent Reinforcement Learning: Weighting and Partitioning. Neural Networks 12(4–5) (1999) 727–753
  21. Breiman, L.: Bias, Variance, and Arcing Classifiers. Technical Report 460, Statistics Dept., Univ. of California at Berkeley, CA (1996)
  22. Schapire, R.E., Freund, Y., Bartlett, P., Lee, W.S.: Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods. Proc. of the Fourteenth Int. Conference on Machine Learning (1997) 322–330
  23. Domingos, P.: A Unified Bias-Variance Decomposition for Zero-One and Squared Loss. Proc. of the Seventeenth National Conference on Artificial Intelligence, Austin, TX (2000) 564–569
  24. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA (1998)
  25. Kovacs, T.: Evolving Optimal Populations with XCS Classifier Systems. MSc Dissertation, Univ. of Birmingham, UK (1996)
  26. Kovacs, T.: Strength or Accuracy? A Comparison of Two Approaches to Fitness. Second Int. Workshop on Learning Classifier Systems during GECCO99 (1999)
  27. Lanzi, P.L.: A Study of the Generalization Capabilities of XCS. Proc. of the Seventh Int. Conference on Genetic Algorithms (ICGA97), Morgan Kaufmann, San Francisco, CA (1997)
  28. Wilson, S.W.: Generalization in the XCS Classifier System. Proc. of the Third Annual Genetic Programming Conference, Morgan Kaufmann, San Francisco, CA (1998) 665–674
  29. Lanzi, P.L.: Adding Memory to XCS. Proc. of the IEEE Conference on Evolutionary Computation (ICEC98) (1998)
  30. Lanzi, P.L., Perrucci, A.: Extending the Representation of Classifier Conditions Part II: From Messy Coding to S-Expressions. Proc. of the Genetic and Evolutionary Computation Conference (GECCO '99) I, Morgan Kaufmann (1999) 345–353
  31. Wilson, S.W.: State of XCS Classifier System Research. Second Int. Workshop on Learning Classifier Systems during GECCO99 (1999)
  32. Wilson, S.W.: Get Real! XCS with Continuous-Valued Inputs. In: Booker, L., Forrest, S., Mitchell, M., Riolo, R. (eds.): Festschrift in Honor of John H. Holland. Center for the Study of Complex Systems, University of Michigan, Ann Arbor, MI, May 15–18 (1999)
  33. Bull, L., O'Hara, T.: Accuracy-Based Neuro and Neuro-Fuzzy Classifier Systems. Technical Report UWELCSG02-001, UWE LCS Group, Faculty of Computing, Engineering and Mathematical Sciences, University of the West of England (2002)
  34. Schwager, J.D.: Technical Analysis. John Wiley & Sons (1995)
  35. Schulenburg, S., Ross, P.: An Adaptive Agent Based Economic Model. Second Int. Workshop on Learning Classifier Systems (IWLCS 99), Springer-Verlag LNCS (2000) 263–282
  36. Schulenburg, S., Ross, P.: Strength and Money: An LCS Approach to Increasing Returns. Third Int. Workshop on Learning Classifier Systems (IWLCS 2000), Springer-Verlag LNCS (2001) 114–137
  37. Armano, G., Murru, A., Roli, F.: Stock Market Prediction by a Mixture of Genetic-Neural Experts. Int. Journal of Pattern Recognition and Artificial Intelligence 16(5) (2002) 501–526
  38. Hancock, P.J.B.: Pruning Neural Nets by Genetic Algorithm. Int. Conference on Artificial Neural Networks, Elsevier (1992) 991–994
  39. Dorsey, R.E., Sexton, R.S.: The Use of Parsimonious Neural Networks for Forecasting Financial Time Series. Journal of Computational Intelligence in Finance 6(1) (1998) 24–31
  40. Giles, C.L., Lawrence, S., Tsoi, A.C.: Rule Inference for Financial Prediction Using Recurrent Neural Networks. Proc. of the IEEE/IAFE Conference on Computational Intelligence for Financial Engineering (CIFE) (1997) 253–259
  41. Weigend, A.S., Zimmermann, H.G.: Exploiting Local Relations as Soft Constraints to Improve Forecasting. Journal of Computational Intelligence in Finance 6 (1998) 14–23
  42. Fahlman, S.E., Lebiere, C.: The Cascade-Correlation Learning Architecture. Technical Report CMU-CS-90-100, Carnegie Mellon University (1990)
  43. Weigend, A.S., Huberman, B.A., Rumelhart, D.E.: Predicting Sunspots and Exchange Rates with Connectionist Networks. Proc. of the 1990 NATO Workshop on Nonlinear Modeling and Forecasting, Santa Fe (1991)
  44. Fama, E.F.: The Behavior of Stock Market Prices. The Journal of Business 38 (1965) 34–105
  45. Sharpe, W.F.: Adjusting for Risk in Portfolio Performance Measurement. Journal of Portfolio Management (1975)
  46. Campolucci, P., Piazza, F.: On-Line Learning Algorithms for Locally Recurrent Neural Networks. IEEE Transactions on Neural Networks 10(2) (1999)
  47. Shynk, J.J.: Adaptive IIR Filtering. IEEE ASSP Magazine (1989)

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  1. DIEE, Dept. of Electrical and Electronic Engineering, University of Cagliari, Cagliari, Italy
