
Dimension Reduction and Construction of Feature Space for Image Pattern Recognition

Published in: Journal of Mathematical Imaging and Vision


We mathematically and experimentally evaluate the validity of dimension-reduction methods for the computation of similarity in image pattern recognition. Image pattern recognition identifies instances of particular objects and distinguishes differences among images, using pattern recognition techniques for the classification and categorisation of images. In numerical image pattern recognition, images are sampled on an array of pixels; this sampling derives, from each image pattern, a vector in a high-dimensional metric space. Since the time and space complexities of processing depend on the dimension of the data, dimension reduction of these vectors is essential for accurate and efficient pattern recognition. Dimension reduction, however, causes a loss of the topological and geometrical features of image patterns. Through both theoretical and experimental comparisons, we clarify that dimension-reduction methodologies which preserve the topology and geometry of the image pattern space are essential for linear pattern recognition. For practical applications, random projection compares favourably with downsampling, the pyramid transform, the two-dimensional random projection, the two-dimensional discrete cosine transform and nonlinear multidimensional scaling when no a priori information on the input data is available.
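The abstract's observation that random projection needs no a priori information can be illustrated with a short sketch. The data sizes below are illustrative assumptions, not values from the paper: by the Johnson–Lindenstrauss lemma, a scaled Gaussian random matrix approximately preserves pairwise Euclidean distances with high probability, and the projection matrix is drawn independently of the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 image patterns of 64x64 pixels, vectorised to
# d = 4096 dimensions, reduced to k = 256 dimensions.
n, d, k = 100, 4096, 256
X = rng.standard_normal((n, d))  # stand-in for vectorised image patterns

# Random projection matrix with i.i.d. Gaussian entries, scaled by 1/sqrt(k)
# so that pairwise Euclidean distances are approximately preserved.
R = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ R.T

# Relative distortion of one pairwise distance; by Johnson-Lindenstrauss
# this is small with high probability for k of this size.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
distortion = abs(proj - orig) / orig
```

Because `R` is data-independent, the same projection can be applied to training and query patterns alike, which is what makes the method attractive when nothing is known about the input distribution.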


Figures 1–22 (not included in this preview)



  1. In this paper, the 2DDCT is applied to an image without partitioning, while the JPEG and MPEG compression algorithms divide an image into blocks of \(8 \times 8\) pixels before applying the 2DDCT.

  2. This generalised principal component analysis is a different method from the GPCA [17], although they bear the same name.

  3. For iterative methods for the 2DSVD, see Refs. [18, 60].

  4. Note that we use the two-dimensional DCT-II without dividing an image into blocks, while the JPEG and MPEG compression algorithms apply the two-dimensional DCT-II after partitioning an \(N \times N\) image into blocks of \(8\times 8\) pixels.

  5. The MDS embeds data into a low-dimensional space; this embedding is a nonlinear dimension-reduction method. In contrast, the kernel method applies a linear dimension-reduction method in a high-dimensional feature space.
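The whole-image 2DDCT described in footnotes 1 and 4 can be sketched as follows. The image, its size, and the truncation parameter are hypothetical; the sketch builds the orthonormal DCT-II basis explicitly in NumPy, applies it to the full image (no \(8\times 8\) block partitioning), and reduces dimension by keeping only the low-frequency coefficients.

```python
import numpy as np

def dct2_matrix(N):
    """Orthonormal DCT-II matrix: rows are the 1D DCT-II basis vectors."""
    n = np.arange(N)
    D = np.sqrt(2.0 / N) * np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
    D[0, :] /= np.sqrt(2.0)  # normalise the DC row
    return D

N, k = 64, 16
rng = np.random.default_rng(0)
img = rng.random((N, N))          # stand-in for a grey-scale image pattern

D = dct2_matrix(N)                # orthonormal: D @ D.T = I
C = D @ img @ D.T                 # whole-image 2D DCT-II, no 8x8 blocks

# Dimension reduction: keep the k x k low-frequency coefficients only,
# giving a k*k = 256-dimensional feature instead of N*N = 4096.
feature = C[:k, :k].ravel()

# Reconstruct from the truncated spectrum to measure the information loss;
# since D is orthonormal, the untruncated transform is exactly invertible.
C_low = np.zeros_like(C)
C_low[:k, :k] = C[:k, :k]
recon = D.T @ C_low @ D
err = np.linalg.norm(img - recon) / np.linalg.norm(img)
```

Because the transform is applied to the whole image, the retained coefficients capture global low-frequency structure, whereas JPEG-style \(8\times 8\) blocking would only expose local frequency content.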


References

  1. Turk, M., Pentland, A.: Eigenfaces for recognition. J. Cogn. Neurosci. 3(1), 71–86 (1991)

  2. Maeda, K.: From the Subspace Methods to the Mutual Subspace Method. Computer Vision, vol. 285, pp. 135–156. Springer, Berlin (2010)

  3. Murase, H., Nayar, S.K.: Illumination planning for object recognition using parametric eigenspace. IEEE Trans. Pattern Anal. Mach. Intell. 16, 1219–1227 (1994)

  4. Park, C.H., Park, H.: Fingerprint classification using fast Fourier transform and nonlinear discriminant analysis. Pattern Recognit. 38, 495–503 (2005)

  5. Park, H.A., Park, K.R.: Iris recognition based on score level fusion by using SVM. Pattern Recognit. Lett. 28, 2019–2028 (2007)

  6. Csurka, G., Dance, C., Fan, L., Willamowski, J., Bray, C.: Visual categorization with bags of keypoints. In: Proceedings of the ECCV Workshop on Statistical Learning in Computer Vision, pp. 1–22 (2004)

  7. Van der Maaten, L.J.P., Postma, E.O., van den Herik, H.J.: Dimensionality Reduction: A Comparative Review. Technical report, Tilburg University (2009)

  8. Burt, P.J., Adelson, E.H.: The Laplacian pyramid as a compact image code. IEEE Trans. Commun. 31, 532–540 (1983)

  9. Karlsson, A.: Nonexpanding maps, Busemann functions, and multiplicative ergodic theory. In: Rigidity in Dynamics and Geometry, pp. 283–294. Springer (2002)

  10. Borgefors, G., Ramella, G., Sanniti di Baja, G.: Shape and topology preserving multi-valued image pyramids for multi-resolution skeletonization. Pattern Recognit. Lett. 22, 741–751 (2001)

  11. Kropatsch, W.G., Haxhimusa, Y., Pizlo, Z., Langs, G.: Vision pyramids that do not grow too high. Pattern Recognit. Lett. 26, 319–337 (2005)

  12. Lu, H., Plataniotis, K.N., Venetsanopoulos, A.N.: A survey of multilinear subspace learning for tensor data. Pattern Recognit. 44, 1540–1551 (2011)

  13. Yang, J., Zhang, D., Frangi, A.F., Yang, J.-Y.: Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Trans. Pattern Anal. Mach. Intell. 26, 131–137 (2004)

  14. Otsu, N.: Mathematical Studies on Feature Extraction in Pattern Recognition. PhD thesis, Electrotechnical Laboratory (1981)

  15. Aase, S.O., Husoy, J.H., Waldemar, P.: A critique of SVD-based image coding systems. In: Proceedings of the IEEE International Symposium on Circuits and Systems, vol. 4, pp. 13–16 (1999)

  16. Ding, C., Ye, J.: Two-dimensional singular value decomposition (2DSVD) for 2D maps and images. In: Proceedings of the SIAM International Conference on Data Mining, pp. 32–43 (2005)

  17. Ye, J., Janardan, R., Qi, L.: GPCA: an efficient dimension reduction scheme for image compression and retrieval. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 354–363 (2004)

  18. Moore, J.B., Mahony, R.E., Helmke, U.: Numerical gradient algorithms for eigenvalue and singular value calculations. SIAM J. Matrix Anal. Appl. 15, 881–902 (1994)

  19. Lu, H., Plataniotis, K.N., Venetsanopoulos, A.N.: MPCA: multilinear principal component analysis of tensor objects. IEEE Trans. Neural Netw. 19(1), 18–39 (2008)

  20. De Lathauwer, L., De Moor, B., Vandewalle, J.: A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 21(4), 1253–1278 (2000)

  21. Wang, H., Ahuja, N.: Compact representation of multidimensional data using tensor rank-one decomposition. In: Proceedings of the International Conference on Pattern Recognition, vol. 1, pp. 44–47 (2004)

  22. Lu, H., Plataniotis, K.N., Venetsanopoulos, A.N.: Uncorrelated multilinear principal component analysis for unsupervised multilinear subspace learning. IEEE Trans. Neural Netw. 20(11), 1820–1836 (2009)

  23. Allen, G.I.: Sparse higher-order principal components analysis. In: Proceedings of the International Conference on Artificial Intelligence and Statistics, pp. 27–36 (2012)

  24. Johnson, W., Lindenstrauss, J.: Extensions of Lipschitz maps into a Hilbert space. Contemp. Math. 26, 189–206 (1984)

  25. Arya, S., Mount, D.M., Netanyahu, N.S., Silverman, R., Wu, A.Y.: An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. In: Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, pp. 573–582 (1994)

  26. Achlioptas, D., McSherry, F.: Fast computation of low-rank matrix approximations. J. ACM 54(2), 9 (2007)

  27. Sakai, T., Imiya, A.: Practical algorithms of spectral clustering: toward large-scale vision-based motion analysis. In: Machine Learning for Vision-Based Motion Analysis, pp. 3–26. Springer (2011)

  28. Baraniuk, R.G., Wakin, M.B.: Random projections of smooth manifolds. Found. Comput. Math. 9, 51–77 (2009)

  29. Bingham, E., Mannila, H.: Random projection in dimensionality reduction: applications to image and text data. In: Proceedings of the International Conference on Knowledge Discovery and Data Mining, pp. 245–250 (2001)

  30. Achlioptas, D.: Database-friendly random projections: Johnson–Lindenstrauss with binary coins. J. Comput. Syst. Sci. 66, 671–687 (2003)

  31. Watanabe, T., Takimoto, E., Amano, K., Maruoka, A.: Random projection and its application to learning. In: Proceedings of the Workshop on Randomness and Computation, pp. 3–4 (2005)

  32. Matousek, J.: On variants of the Johnson–Lindenstrauss lemma. Random Struct. Algorithms 33, 142–156 (2008)

  33. Ailon, N., Liberty, E.: Almost optimal unrestricted fast Johnson–Lindenstrauss transform. ACM Trans. Algorithms 9, 21:1–21:12 (2013)

  34. Schölkopf, B., Smola, A., Müller, K.-R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 10, 1299–1319 (1998)

  35. Borg, I., Groenen, P.: Modern Multidimensional Scaling: Theory and Applications, 2nd edn. Springer, New York (2005)

  36. Williams, C.K.I.: On a connection between kernel PCA and metric multidimensional scaling. Mach. Learn. 46, 11–19 (2002)

  37. Tenenbaum, J.B., de Silva, V., Langford, J.C.: A global geometric framework for nonlinear dimensionality reduction. Science 290, 2319–2323 (2000)

  38. Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science 290, 2323–2326 (2000)

  39. Venna, J., Kaski, S.: Local multidimensional scaling. Neural Netw. 19, 889–899 (2006)

  40. Vidal, R., Ma, Y., Sastry, S.: Generalized principal component analysis (GPCA). IEEE Trans. Pattern Anal. Mach. Intell. 27, 1945–1959 (2005)

  41. Goh, A., Vidal, R.: Clustering and dimensionality reduction on Riemannian manifolds. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–7 (2008)

  42. Harandi, M.T., Salzmann, M., Hartley, R.: From manifold to manifold: geometry-aware dimensionality reduction for SPD matrices. In: Proceedings of the European Conference on Computer Vision, LNCS 8690, pp. 17–32 (2014)

  43. Fisher, R.A.: The use of multiple measurements in taxonomic problems. Ann. Eugen. 7, 179–188 (1936)

  44. Vapnik, V., Lerner, A.: Pattern recognition using generalized portrait method. Autom. Remote Control 24, 774–780 (1963)

  45. Iijima, T.: Theory of pattern recognition. Electronics and Communications in Japan, pp. 123–134 (1963)

  46. Watanabe, S.: Karhunen–Loeve expansion and factor analysis. In: Transactions of the Fourth Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, pp. 635–660 (1967)

  47. Itoh, H., Sakai, T., Kawamoto, K., Imiya, A.: Topology-preserving dimension-reduction methods for image pattern recognition. In: Proceedings of the Scandinavian Conference on Image Analysis, pp. 195–204 (2013)

  48. Fukui, K., Maki, A.: Difference subspace and its generalization for subspace-based methods. IEEE Trans. Pattern Anal. Mach. Intell., in press (2015)

  49. De Cock, K., De Moor, B.: Subspace angles between ARMA models. Syst. Control Lett. 46, 265–270 (2002)

  50. Hamm, J., Lee, D.D.: Grassmann discriminant analysis: a unifying view on subspace-based learning. In: Proceedings of the International Conference on Machine Learning, pp. 376–383 (2008)

  51. Boser, B.E., Guyon, I., Vapnik, V.: A training algorithm for optimal margin classifiers. In: Proceedings of the Workshop on Computational Learning Theory, pp. 144–152 (1992)

  52. Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)

  53. Hsu, C.-W., Lin, C.-J.: A comparison of methods for multiclass support vector machines. IEEE Trans. Neural Netw. 13, 415–425 (2002)

  54. Sivic, J., Zisserman, A.: Efficient visual search of videos cast as text retrieval. IEEE Trans. Pattern Anal. Mach. Intell. 31, 591–606 (2009)

  55. Jégou, H., Perronnin, F., Douze, M., Sánchez, J., Pérez, P., Schmid, C.: Aggregating local image descriptors into compact codes. IEEE Trans. Pattern Anal. Mach. Intell. 34, 1704–1716 (2012)

  56. Vempala, S.S.: The Random Projection Method. American Mathematical Society, Providence (2004)

  57. Magen, A.: Dimensionality reductions that preserve volumes and distance to affine spaces, and their algorithmic applications. In: Proceedings of Randomization and Approximation Techniques in Computer Science, LNCS 2483, pp. 239–253 (2002)

  58. Agarwal, P.K., Har-Peled, S., Yu, H.: Embeddings of surfaces, curves, and moving points in Euclidean space. In: Proceedings of the Annual Symposium on Computational Geometry, pp. 381–389 (2007)

  59. Dasgupta, S., Gupta, A.: An elementary proof of the Johnson–Lindenstrauss lemma. Technical report, UC Berkeley (1996)

  60. Helmke, U., Moore, J.B.: Singular-value decomposition via gradient and self-equivalent flows. Linear Algebra Appl. 169, 223–248 (1992)

  61. Liang, Z., Shi, P.: An analytical algorithm for generalized low-rank approximations of matrices. Pattern Recognit. 38, 2213–2216 (2005)

  62. Itoh, H., Sakai, T., Kawamoto, K., Imiya, A.: Dimension reduction methods for image pattern recognition. In: Proceedings of the International Workshop on Similarity-Based Pattern Recognition, pp. 26–42 (2013)

  63. Björck, Å., Golub, G.H.: Numerical methods for computing angles between linear subspaces. Math. Comput. 27, 579–594 (1973)

  64. Golub, G.H., Van Loan, C.F.: Matrix Computations. The Johns Hopkins University Press, Baltimore (1996)

  65. Georghiades, A.S., Belhumeur, P.N., Kriegman, D.J.: From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach. Intell. 23, 643–660 (2001)

  66. Samaria, F., Harter, A.: Parameterisation of a stochastic model for human face identification. In: Proceedings of the IEEE Workshop on Applications of Computer Vision (1994)

  67. Leibe, B., Schiele, B.: Analyzing appearance and contour based methods for object categorization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 409–415 (2003)

  68. Mobahi, H., Collobert, R., Weston, J.: Deep learning from temporal coherence in video. In: Proceedings of the International Conference on Machine Learning (2009)

  69. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998)

  70. Saito, T., Yamada, H., Yamada, K.: On the data base ETL9 of handprinted characters in JIS Chinese characters and its analysis. IEICE Trans. J68-D, 757–764 (1985)

  71. Fei-Fei, L., Fergus, R., Perona, P.: Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. Comput. Vis. Image Underst. 106, 59–70 (2007)

  72. Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes Challenge 2010 (VOC2010) Results (2010)

  73. Wright, J., Yang, A.Y., Ganesh, A., Sastry, S.S., Ma, Y.: Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 31, 210–227 (2009)

  74. Kim, T.-K., Kittler, J., Cipolla, R.: Discriminative learning and recognition of image set classes using canonical correlations. IEEE Trans. Pattern Anal. Mach. Intell. 29, 1005–1018 (2007)

  75. Liu, H., Ding, X.: Handwritten character recognition using gradient feature and quadratic classifier with multiple discrimination schemes. In: Proceedings of the International Conference on Document Analysis and Recognition, pp. 19–23 (2005)

  76. He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. In: Proceedings of the European Conference on Computer Vision, pp. 346–361 (2014)

  77. Everingham, M., Eslami, S.M.A., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes challenge: a retrospective. Int. J. Comput. Vis. 111, 98–136 (2015)


Author information

Correspondence to Hayato Itoh.


Cite this article

Itoh, H., Imiya, A. & Sakai, T. Dimension Reduction and Construction of Feature Space for Image Pattern Recognition. J Math Imaging Vis 56, 1–31 (2016).
