Discrete Cosine Transformation as Alternative to Other Methods of Computational Intelligence for Function Approximation

  • Angelika Olejczak
  • Janusz Korniak
  • Bogdan M. Wilamowski
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10245)

Abstract

The discrete cosine transform (DCT) is well known in signal processing. In this paper, the DCT is applied in computational intelligence to demonstrate its usefulness for function approximation. The proposed DCT-based method reduces the size of the system, which results in faster processing with a limited and controlled loss of precision. The method is compared with other approaches, such as Fuzzy Systems, Neural Networks, and Support Vector Machines, to investigate its ability to solve a sample problem. The results show that the method can be applied successfully and that its accuracy is comparable to, or better than, that achieved by methods considered powerful.
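
As a minimal sketch (our illustration, not the authors' implementation), the core idea can be shown in Python using SciPy's scipy.fft routines: sample a function on a grid, compute its DCT, keep only the K lowest-frequency coefficients, and reconstruct from that reduced representation. The grid size N, the test function, and the value of K below are illustrative assumptions, not values taken from the paper.

    # Sketch of DCT-based function approximation with coefficient truncation.
    # N, the test function, and K are assumptions for illustration only.
    import numpy as np
    from scipy.fft import dct, idct

    N = 256                                   # number of samples (assumed)
    x = np.linspace(0.0, 1.0, N)
    y = np.sin(2 * np.pi * x) + 0.3 * np.cos(9 * np.pi * x)  # test function (assumed)

    # Forward DCT (type II, orthonormal) of the samples.
    coeffs = dct(y, type=2, norm='ortho')

    # Truncate: keep only the K lowest-frequency coefficients.
    K = 16
    truncated = np.zeros_like(coeffs)
    truncated[:K] = coeffs[:K]

    # Reconstruct the function from the reduced representation.
    y_approx = idct(truncated, type=2, norm='ortho')

    # The reconstruction error is controlled by the choice of K.
    rmse = np.sqrt(np.mean((y - y_approx) ** 2))
    print(f"K={K}, RMSE={rmse:.2e}")

Increasing K enlarges the stored representation but reduces the reconstruction error, mirroring the size-versus-precision trade-off described above.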

Keywords

Approximation · Neural networks · Computational intelligence

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Angelika Olejczak (1)
  • Janusz Korniak (1, corresponding author)
  • Bogdan M. Wilamowski (2)
  1. University of IT and Management, Rzeszów, Poland
  2. Auburn University, Auburn, USA
