
Dynamic Asset Allocation Exploiting Predictors in Reinforcement Learning Framework

  • Jangmin O
  • Jae Won Lee
  • Jongwoo Lee
  • Byoung-Tak Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3201)

Abstract

Given multiple pattern-based predictors of the stock price, we study a method of dynamic asset allocation that maximizes trading performance. To optimize the proportion of the asset allocated to each predictor's recommendation, we design an asset allocator, called the meta policy, in the Q-learning framework. We use both the predictors' recommendations and the ratio of the stock fund to the total asset to describe the state space efficiently. Experimental results on the Korean stock market show that the trading system with the proposed asset allocator outperforms systems with fixed asset allocation methods. This suggests that reinforcement learning can bring synergy to decision-making problems by exploiting predictors trained with supervised learning.
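
To make the abstract's setup concrete, the meta policy can be pictured as a small tabular Q-learner whose state combines the predictors' signals with the current stock-fund ratio, and whose action is the fraction of the asset to invest. The sketch below is ours, not the authors' exact design: the state encoding in make_state, the allocation fractions in ACTIONS, the epsilon-greedy selection, the hyperparameters, and the reward signal are all illustrative assumptions.

```python
import random
from collections import defaultdict

# Illustrative sketch of a Q-learning "meta policy" asset allocator.
# All design choices here (discretization, action set, reward) are
# assumptions for exposition, not the paper's exact formulation.

ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]   # fraction of total asset to invest
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # assumed hyperparameters

Q = defaultdict(float)  # Q[(state, action)] -> estimated value, default 0.0

def make_state(predictor_signals, stock_ratio):
    """State = each predictor's recommendation (e.g. -1/0/+1) plus the
    stock-fund-to-total-asset ratio, binned coarsely over [0, 1]."""
    ratio_bin = min(int(stock_ratio * 5), 4)  # 5 bins
    return (tuple(predictor_signals), ratio_bin)

def choose_action(state):
    """Epsilon-greedy selection over the allocation fractions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - Q[(state, action)])

# Toy usage: one interaction step with three predictors' signals.
s = make_state([+1, 0, -1], stock_ratio=0.4)
a = choose_action(s)
reward = 0.02  # e.g. portfolio return over the step (assumed reward signal)
s_next = make_state([+1, +1, 0], stock_ratio=a)
update(s, a, reward, s_next)
```

Under these assumptions, the allocator learns how far to trust the predictors' recommendations given how much of the asset is already committed to stock, which matches the abstract's description of the state space.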

Keywords

Stock Price · Total Asset · Trading Performance · Trading System · Asset Allocation


Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Jangmin O (1)
  • Jae Won Lee (2)
  • Jongwoo Lee (3)
  • Byoung-Tak Zhang (1)
  1. School of Computer Science and Engineering, Seoul National University, Seoul, Korea
  2. School of Computer Science and Engineering, Sungshin Women’s University, Seoul, Korea
  3. Department of Multimedia Science, Sookmyung Women’s University, Seoul, Korea
