Prediction Improvement via Smooth Component Analysis and Neural Network Mixing

  • Ryszard Szupiluk
  • Piotr Wojewnik
  • Tomasz Ząbkowski
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4132)

Abstract

In this paper we derive a novel smooth component analysis algorithm and apply it to prediction improvement. When many prediction models are tested, their results can be treated as a multivariate variable whose latent components have either a constructive or a destructive impact on the predictions. Filtering out the destructive components and properly mixing the constructive ones should improve the final prediction. The filtration can be performed by neural networks whose initial weights are computed by smooth component analysis. The validity and high performance of our approach are demonstrated on a real problem of energy load prediction.
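
The procedure described above can be sketched compactly: stack the models' predictions, decompose them into latent components, suppress the rough (destructive) components, and remix the smooth (constructive) ones. The sketch below is illustrative only; it substitutes scikit-learn's FastICA for the paper's smooth component analysis, uses a mean-squared-first-difference score as a smoothness proxy, and replaces the neural-network mixing with a plain average. All of these choices are assumptions, not the authors' method.

```python
# Illustrative sketch of the decompose-filter-mix idea from the abstract.
# FastICA stands in for the paper's smooth component analysis, and simple
# averaging stands in for the neural-network mixing (both are assumptions).
import numpy as np
from sklearn.decomposition import FastICA

def improve_predictions(P, roughness_threshold=0.5):
    """P: (T, m) array holding m models' predictions over T time steps."""
    ica = FastICA(n_components=P.shape[1], random_state=0)
    S = ica.fit_transform(P)   # latent components, shape (T, m)

    # Smoothness proxy: a small mean squared first difference (relative to
    # the component's variance) marks a smooth, presumably constructive one.
    d = np.diff(S, axis=0)
    roughness = (d ** 2).mean(axis=0) / S.var(axis=0)

    # "Filtration": zero out the rough components, keep the smooth ones.
    S[:, roughness > roughness_threshold] = 0.0

    # Remix back into prediction space and average across the models.
    P_hat = S @ ica.mixing_.T + ica.mean_
    return P_hat.mean(axis=1)
```

For reference when choosing the threshold: unit-variance white noise has a relative roughness of about 2, so thresholds well below that retain only visibly smooth components.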

Keywords

Independent Component Analysis · Blind Source Separation · Nonnegative Matrix Factorisation · Prediction Improvement

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Ryszard Szupiluk (1, 2)
  • Piotr Wojewnik (1, 2)
  • Tomasz Ząbkowski (1, 3)

  1. Polska Telefonia Cyfrowa Ltd., Warsaw, Poland
  2. Warsaw School of Economics, Warsaw, Poland
  3. Warsaw Agricultural University, Warsaw, Poland
