The Repository Method for Chance Discovery in Financial Forecasting

  • Alma Lilia Garcia-Almanza
  • Edward P. K. Tsang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4253)

Abstract

The aim of this work is to forecast future opportunities in financial stock markets. In particular, we focus on situations where positive instances are rare, which falls into the domain of Chance Discovery. Machine learning classifiers extrapolate past experience into the future; however, the imbalance between positive and negative cases poses a serious challenge to machine learning techniques, because it favours negative classifications, which have a high chance of being correct simply due to the nature of the data. Genetic Algorithms have the ability to create multiple solutions for a single problem. To exploit this feature, we propose to analyse the decision trees created by Genetic Programming. The objective is to extract and collect the different rules that classify the positive cases. This allows the rare instances to be modelled in different ways, increasing the possibility of identifying similar cases in the future. To illustrate our approach, we applied it to the prediction of investment opportunities with very high returns. Experimental results show that the Repository Method consistently improves both recall and precision.
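
To make the idea concrete, the sketch below shows one possible reading of a rule repository assembled from rules mined out of evolved GP trees, together with the precision and recall measures used to assess it on imbalanced data. This is a minimal illustration, not the authors' implementation: the names (Rule, RuleRepository, precision_recall), the toy attributes (pe_ratio, volatility) and the novelty test that decides whether a rule enters the repository are assumptions introduced for exposition only.

    # Illustrative sketch of a rule-repository classifier for rare positive cases.
    # Assumption: each GP decision tree has already been decomposed into candidate
    # rules, where a rule is a conjunction of simple conditions over named attributes.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    Case = Dict[str, float]
    Condition = Callable[[Case], bool]

    @dataclass
    class Rule:
        """A conjunction of conditions that predicts the positive class."""
        conditions: List[Condition]

        def fires(self, case: Case) -> bool:
            return all(cond(case) for cond in self.conditions)

    class RuleRepository:
        """Collects distinct positive-classifying rules taken from many GP trees."""

        def __init__(self) -> None:
            self.rules: List[Rule] = []

        def add_if_useful(self, rule: Rule, positives: List[Case]) -> None:
            # Crude novelty test: keep the rule only if it covers at least one
            # positive training case that no stored rule already covers, so the
            # repository ends up modelling the rare class in several different ways.
            if any(rule.fires(p) and not self.classify(p) for p in positives):
                self.rules.append(rule)

        def classify(self, case: Case) -> bool:
            # A case is labelled positive as soon as any stored rule fires.
            return any(rule.fires(case) for rule in self.rules)

    def precision_recall(repo: RuleRepository, cases: List[Case], labels: List[bool]):
        # Precision and recall of the repository on a labelled test set; accuracy is
        # avoided because predicting "negative" everywhere would already score highly.
        preds = [repo.classify(c) for c in cases]
        tp = sum(p and y for p, y in zip(preds, labels))
        fp = sum(p and not y for p, y in zip(preds, labels))
        fn = sum(not p and y for p, y in zip(preds, labels))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    # Toy usage: two hand-written rules, one redundant, screening a new case.
    positives = [{"pe_ratio": 8.0, "volatility": 0.9}]
    repo = RuleRepository()
    repo.add_if_useful(Rule([lambda c: c["pe_ratio"] < 10.0]), positives)
    repo.add_if_useful(Rule([lambda c: c["pe_ratio"] < 10.0,
                             lambda c: c["volatility"] > 0.5]), positives)  # rejected: covers nothing new
    print(repo.classify({"pe_ratio": 7.5, "volatility": 0.2}))  # True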

Keywords

Genetic Programming · Hard Condition · Chance Discovery · Rule Extraction · Imbalanced Data

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Alma Lilia Garcia-Almanza (1)
  • Edward P. K. Tsang (1)

  1. Department of Computer Science, University of Essex, Colchester, UK
