
Embedded Prototype Subspace Classification: A Subspace Learning Framework

  • Conference paper
Computer Analysis of Images and Patterns (CAIP 2019)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11679)

Abstract

Handwritten text recognition is a daunting task due to the complex characteristics of handwritten letters. Deep learning based methods have achieved significant advances in recognising challenging handwritten texts because of their ability to learn and accurately classify intricate patterns. However, deep learning has limitations, such as the lack of a well-defined mathematical model and its black-box learning mechanism, which pose challenges. This paper aims to go beyond black-box learning and proposes a novel learning framework, called Embedded Prototype Subspace Classification, that is based on the well-known subspace method and recognises handwritten letters in a fast and efficient manner. The effectiveness of the proposed framework is empirically evaluated on popular datasets using standard evaluation measures.
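The abstract does not describe the classifier in detail, but the well-known subspace method it builds on can be illustrated with a short sketch: each class is modelled by a low-dimensional linear subspace spanned by the leading singular vectors of its training samples, and a test sample is assigned to the class whose subspace captures the largest share of its energy. The following is a minimal CLAFIC-style sketch under that assumption; the class name SubspaceClassifier and the parameter n_components are illustrative, not the authors' implementation.

    # Minimal sketch of a classical subspace (CLAFIC-style) classifier.
    # Illustrative only; not the Embedded Prototype Subspace Classification code.
    import numpy as np

    class SubspaceClassifier:
        def __init__(self, n_components=10):
            self.n_components = n_components  # dimension of each class subspace
            self.bases = {}                   # class label -> orthonormal basis, shape (d, k)

        def fit(self, X, y):
            # Learn one linear subspace per class from that class's training samples.
            for label in np.unique(y):
                Xc = X[y == label]                             # (n_c, d) flattened images
                _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
                self.bases[label] = Vt[:self.n_components].T   # top right singular vectors
            return self

        def predict(self, X):
            # Assign each sample to the class whose subspace captures the largest
            # share of its energy (norm of the orthogonal projection coefficients).
            labels = list(self.bases)
            scores = np.stack([np.linalg.norm(X @ self.bases[l], axis=1) for l in labels], axis=1)
            return np.array(labels)[np.argmax(scores, axis=1)]

In practice each handwritten-letter image would be flattened to a vector before fitting; how prototypes are found and embedded to form the per-class subspaces is specific to the paper and not reproduced here.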


Author information


Corresponding author

Correspondence to Anders Hast.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Hast, A., Lind, M., Vats, E. (2019). Embedded Prototype Subspace Classification: A Subspace Learning Framework. In: Vento, M., Percannella, G. (eds) Computer Analysis of Images and Patterns. CAIP 2019. Lecture Notes in Computer Science, vol 11679. Springer, Cham. https://doi.org/10.1007/978-3-030-29891-3_51

  • DOI: https://doi.org/10.1007/978-3-030-29891-3_51

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-29890-6

  • Online ISBN: 978-3-030-29891-3

  • eBook Packages: Computer Science, Computer Science (R0)
