
Smooth Component Analysis as Ensemble Method for Prediction Improvement

  • Conference paper
Independent Component Analysis and Signal Separation (ICA 2007)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4666)

Abstract

In this paper we apply a novel smooth component analysis algorithm as an ensemble method for prediction improvement. When many prediction models are tested, we can treat their results as a multivariate variable whose latent components have either a constructive or a destructive impact on the prediction results. We show that eliminating the destructive components and properly mixing the constructive ones can improve the final prediction. The validity and high performance of our approach are demonstrated on the problem of energy load prediction.
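The decompose-eliminate-remix idea from the abstract can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it uses PCA via SVD as a simple stand-in for smooth component analysis, and a greedy removal heuristic (`improved_prediction`) that is purely illustrative. Stacked model predictions are decomposed into latent components; components whose removal lowers validation error are treated as destructive and dropped before remixing.

```python
import numpy as np

def decompose(P):
    """Decompose stacked predictions P (n_models x T) into latent components.
    PCA via SVD is used here as a stand-in for smooth component analysis:
    P - mean = A @ S, where the rows of S are the latent components."""
    mu = P.mean(axis=1, keepdims=True)
    U, d, Vt = np.linalg.svd(P - mu, full_matrices=False)
    S = Vt        # latent components (n_components x T)
    A = U * d     # mixing matrix, so (P - mu) == A @ S
    return A, S, mu

def improved_prediction(P, y_val, model_idx=0):
    """Greedily drop each component in turn, keeping a removal only if it
    lowers validation MSE for the chosen model, then remix the survivors."""
    A, S, mu = decompose(P)
    best_mse = np.mean((P[model_idx] - y_val) ** 2)
    keep = np.ones(S.shape[0], dtype=bool)
    for k in range(S.shape[0]):
        trial = keep.copy()
        trial[k] = False   # tentatively treat component k as destructive
        rec = A[:, trial] @ S[trial] + mu
        mse = np.mean((rec[model_idx] - y_val) ** 2)
        if mse < best_mse:
            best_mse, keep = mse, trial
    return (A[:, keep] @ S[keep] + mu)[model_idx]
```

Because a component is only dropped when doing so lowers the validation MSE, the remixed prediction is never worse than the original model on the validation set; whether the gain carries over to new data depends on how well the destructive components generalize, which is exactly what the smoothness criterion in the paper is meant to capture.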





Editor information

Mike E. Davies, Christopher J. James, Samer A. Abdallah, Mark D. Plumbley


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Szupiluk, R., Wojewnik, P., Ząbkowski, T. (2007). Smooth Component Analysis as Ensemble Method for Prediction Improvement. In: Davies, M.E., James, C.J., Abdallah, S.A., Plumbley, M.D. (eds) Independent Component Analysis and Signal Separation. ICA 2007. Lecture Notes in Computer Science, vol 4666. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-74494-8_35


  • DOI: https://doi.org/10.1007/978-3-540-74494-8_35

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-74493-1

  • Online ISBN: 978-3-540-74494-8

  • eBook Packages: Computer Science (R0)
