
Learning Sparse Feature Representations Using Probabilistic Quadtrees and Deep Belief Nets


Learning sparse feature representations is a useful tool for unsupervised learning. In this paper, we present three labeled handwritten digit datasets, collectively called n-MNIST, created by adding noise to the MNIST dataset, along with three labeled datasets formed by adding noise to the offline Bangla numeral database. We then propose a novel framework for handwritten digit classification that learns sparse representations using probabilistic quadtrees and Deep Belief Nets. On the MNIST, n-MNIST and noisy Bangla datasets, our framework shows promising results and outperforms traditional Deep Belief Networks.
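The probabilistic quadtrees used in this framework build on the classic quadtree data structure of Finkel and Bentley [11], which recursively partitions a 2D region into four quadrants. As an illustrative sketch only, not the authors' exact construction, a variance-driven quadtree decomposition of an image could look like the following; the names `quadtree_leaves`, `threshold`, and `min_size` are assumptions for this example:

```python
import numpy as np

def quadtree_leaves(img, threshold=0.01, min_size=4, x=0, y=0, size=None):
    """Recursively split a square image into quadrants until the pixel
    variance within a block falls below `threshold` or the block reaches
    `min_size`. Returns a list of (x, y, size) leaf blocks."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= threshold:
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):          # visit the four quadrants
        for dx in (0, half):
            leaves += quadtree_leaves(img, threshold, min_size,
                                      x + dx, y + dy, half)
    return leaves
```

Low-variance (e.g. background) regions collapse into single large leaves while detailed regions are subdivided, which is one way such a decomposition can yield a sparse, adaptive representation of a digit image.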




  1. The datasets are available at the web links in [9] and [10], along with a detailed description of the methods and parameters used to create them.

  2. Our test error on MNIST differs from that in [17] because the authors use different architectures, for both their neural network and their DBN.
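Footnote 1 above points to the full description of how the noisy datasets were generated. As a hedged illustration of the general idea only (the actual corruptions and parameters are those documented at the links in [9] and [10]), additive white Gaussian noise can be applied to a batch of images as follows; `add_awgn`, `sigma`, and `seed` are hypothetical names:

```python
import numpy as np

def add_awgn(images, sigma=0.25, seed=0):
    """Add white Gaussian noise to a batch of images with pixel values
    in [0, 1], clipping the result back to the valid range."""
    rng = np.random.default_rng(seed)
    noisy = images + rng.normal(0.0, sigma, images.shape)
    return np.clip(noisy, 0.0, 1.0)
```

Clipping keeps corrupted pixels in the valid intensity range, so the noisy images remain drop-in replacements for the clean ones in a training pipeline.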


  1. Basu S, Karki M, Ganguly S, DiBiano R, Mukhopadhyay S, Nemani R (2015) Learning sparse feature representations using probabilistic quadtrees and deep belief nets. In: Proceedings of the European symposium on artificial neural networks, ESANN

  2. Hinton GE, Osindero S, Teh YW (2006) A fast learning algorithm for deep belief nets. Neural Comput 18(7):1527–1554


  3. Mohamed A, Dahl GE, Hinton GE (2012) Acoustic modeling using deep belief networks. IEEE Trans Audio Speech Lang Process 20(1):14–22


  4. Ranzato M, Boureau Y, Lecun Y (2008) Sparse feature learning for deep belief networks. In: Advances in neural information processing systems

  5. Basu S, Ganguly S, Mukhopadhyay S, DiBiano R, Karki M, Nemani R (2015) Deepsat: a learning framework for satellite imagery. In: Proceedings of the 23rd SIGSPATIAL international conference on advances in geographic information systems, GIS ’15. New York, NY, USA. ACM, pp 37:1–37:10

  6. Basu S, Karki M, DiBiano R, Mukhopadhyay S, Ganguly S, Nemani R, Gayaka S (2016) A theoretical analysis of deep neural networks for texture classification. arXiv preprint arXiv:1605.02699

  7. The MNIST database of handwritten digits. Online

  8. Bhattacharya U, Chaudhuri BB (2009) Handwritten numeral databases of Indian scripts and multistage recognition of mixed numerals. IEEE Trans Pattern Anal Mach Intell 31(3):444–457


  9. The n-MNIST datasets. Online

  10. The noisy Bangla datasets. Online

  11. Finkel RA, Bentley JL (1974) Quad trees: a data structure for retrieval on composite keys. Acta Informatica 4(1):1–9


  12. Bengio Y (2009) Learning deep architectures for AI. Found Trends Mach Learn 2(1):1–127


  13. Koller D, Friedman N (2009) Probabilistic graphical models: principles and techniques - adaptive computation and machine learning. The MIT Press

  14. Carreira-Perpiñán MA, Hinton GE (2005) On contrastive divergence learning. In: Proceedings of the tenth international workshop on artificial intelligence and statistics, AISTATS

  15. Bishop CM (1995) Neural networks for pattern recognition. Oxford University Press Inc

  16. Hassani S (2009) Dirac delta function. In: Mathematical methods. Springer, pp 139–170

  17. Srivastava N, Hinton GE, Krizhevsky A, Sutskever I, Salakhutdinov R (2014) Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 15(1):1929–1958


  18. Hinton GE, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov R (2012) Improving neural networks by preventing co-adaptation of feature detectors. CoRR abs/1207.0580

  19. Lazebnik S, Schmid C, Ponce J (2006) Beyond bags of features: spatial pyramid matching for recognizing natural scene categories. In: CVPR ’06, pp 2169–2178

  20. Lee H, Battle A, Raina R, Ng AY (2007) Efficient sparse coding algorithms. In: Advances in neural information processing systems, pp 801–808

  21. Lee H, Grosse R, Ranganath R, Ng AY (2009) Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In: ICML ’09, pp 609–616



Acknowledgements

The project was supported by the Army Research Office (ARO) under Grant #W911-NF1010495 and the NASA Carbon Monitoring System through Grant #NNH14ZD-A001NCMS. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the ARO or the United States Government. We thank the Computer Vision and Pattern Recognition unit at the Indian Statistical Institute, Kolkata, India for making the offline Bangla handwritten numeral dataset available to us.

Author information



Corresponding author

Correspondence to Saikat Basu.

Additional information

This is an extended version of the paper presented at ESANN 2015 [1].


About this article


Cite this article

Basu, S., Karki, M., Ganguly, S. et al. Learning Sparse Feature Representations Using Probabilistic Quadtrees and Deep Belief Nets. Neural Process Lett 45, 855–867 (2017).



Keywords

  • Deep neural networks
  • Handwritten digit classification
  • Probabilistic quadtrees
  • Deep belief networks
  • Sparse feature representation