Weighted Kernel Regression for Predicting Changing Dependencies

  • Steven Busuttil
  • Yuri Kalnishkan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4701)

Abstract

Consider the online regression problem where the dependence of the outcome $y_t$ on the signal $x_t$ changes with time. Standard regression techniques, such as Ridge Regression, do not perform well in tasks of this type. We propose two methods to handle this problem: WeCKAAR, a simple modification of an existing regression technique, and KAARCh, an application of the Aggregating Algorithm. Empirical results on artificial data show that in this setting, KAARCh is superior to WeCKAAR and to standard regression techniques. On options implied volatility data, the performance of both KAARCh and WeCKAAR is comparable to that of the proprietary technique currently used at the Russian Trading System Stock Exchange (RTSSE).
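
The abstract does not define WeCKAAR or KAARCh; those are given in the full text. As a rough illustration of the setting only, the sketch below implements kernel Ridge Regression in dual variables with a hypothetical geometric down-weighting of older examples, so that predictions track a drifting dependence. The function names, the RBF kernel choice, and the decay scheme are assumptions for illustration, not the paper's methods.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weighted_krr_predict(X, y, x_new, a=1.0, decay=0.95, sigma=1.0):
    """Weighted kernel ridge regression prediction for one new signal.

    Minimises  sum_t w_t (y_t - f(x_t))^2 + a ||f||^2  over the RKHS,
    with geometrically decaying weights w_t so that recent examples
    dominate when the underlying dependence drifts.  The dual solution
    is  alpha = (W K + a I)^{-1} W y,  prediction  k(x_new)^T alpha.
    (A generic down-weighting scheme, not the paper's WeCKAAR/KAARCh.)
    """
    T = len(y)
    w = decay ** np.arange(T - 1, -1, -1)   # weight 1 on the newest example
    W = np.diag(w)
    K = rbf_kernel(X, X, sigma)
    alpha = np.linalg.solve(W @ K + a * np.eye(T), W @ y)
    k = rbf_kernel(X, x_new.reshape(1, -1), sigma).ravel()
    return k @ alpha

# Toy drifting dependence: y_t = sin(c_t * x_t) with slowly moving c_t.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
c = np.linspace(1.0, 3.0, 200)              # the dependence changes with time
y = np.sin(c * X[:, 0]) + 0.1 * rng.standard_normal(200)
print(weighted_krr_predict(X, y, np.array([0.5])))
```

With decay=1.0 this reduces to standard kernel Ridge Regression, the baseline the abstract says performs poorly when the dependence changes with time.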

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Steven Busuttil¹
  • Yuri Kalnishkan¹

  1. Computer Learning Research Centre and Department of Computer Science, Royal Holloway, University of London, Egham, Surrey, TW20 0EX, United Kingdom
