Rough Restricted Boltzmann Machine – New Architecture for Incomplete Input Data

  • Wojciech K. Mleczko
  • Robert K. Nowicki
  • Rafał Angryk
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9692)

Abstract

In this paper, a rough restricted Boltzmann machine (RRBM) is proposed. It is a hybrid architecture that extends the restricted Boltzmann machine (RBM) with elements of Pawlak's rough set theory. The main goal of this hybridization is to enable the processing of imperfect input data and to express that imperfection in the system's response. One form of imperfection is considered here, namely missing values; however, solutions similar to the one presented can also be designed to handle, e.g., imprecise data. The formal definition of the RRBM is illustrated by experimental results on handwritten digit reconstruction.
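
To give an intuition for the rough-set flavour of the idea, the sketch below (a minimal illustration, not the authors' exact RRBM formulation) shows how lower and upper bounds on an RBM's hidden-unit activations can be obtained when some binary visible units are missing: the unknown inputs are allowed to take either value, so each hidden unit's activation is bracketed between a pessimistic and an optimistic estimate. All function and variable names here are illustrative assumptions.

import numpy as np

def hidden_activation_bounds(v, W, c):
    """Return (lower, upper) sigmoid activations of the hidden units.

    v : (n_visible,) array of binary inputs, with np.nan marking missing values
    W : (n_visible, n_hidden) weight matrix
    c : (n_hidden,) hidden biases
    """
    known = ~np.isnan(v)
    # Contribution of the known visible units only.
    base = c + np.where(known, v, 0.0) @ W
    # Weights attached to the missing visible units.
    missing_w = W[~known, :]
    # An unknown binary input can add at most the positive weights (upper bound)
    # and at least the negative weights (lower bound).
    lower = base + np.clip(missing_w, None, 0.0).sum(axis=0)
    upper = base + np.clip(missing_w, 0.0, None).sum(axis=0)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    return sigmoid(lower), sigmoid(upper)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(6, 4))
    c = np.zeros(4)
    v = np.array([1.0, 0.0, np.nan, 1.0, np.nan, 0.0])  # two missing "pixels"
    lo, hi = hidden_activation_bounds(v, W, c)
    print("lower:", lo.round(3))
    print("upper:", hi.round(3))

The gap between the two bounds can be read as the uncertainty induced by the missing inputs, in the spirit of rough lower and upper approximations; the paper itself develops this idea formally for the RRBM.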

Keywords

Restricted Boltzmann machine · Missing data · Rough set theory · Handwritten digits reconstruction

Notes

Acknowledgment

The project was funded by the Polish National Science Center under decision number DEC-2012/05/B/ST6/03620.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Wojciech K. Mleczko (1)
  • Robert K. Nowicki (1)
  • Rafał Angryk (2)
  1. Institute of Computational Intelligence, Czestochowa University of Technology, Czestochowa, Poland
  2. Department of Computer Science, Georgia State University, Atlanta, USA