Cluster Computing, Volume 21, Issue 1, pp 805–811

A novel pattern recognition technique based on group clustering computing and convex optimization for dimensionality reduction

  • Shiqi Li
  • Mingming Wang
  • Shiping Liu
  • Yan Fu


In pattern recognition and data learning, dimensionality reduction is a necessary step for producing personalized data of manageable size. The growing body of research underscores the importance of data reduction in current trends. Traditional reduction techniques rely on ranking systems; the approach proposed here differs by reducing the data without degrading performance, yielding higher efficiency and a lower rate of errors. Dimensionality reduction is not confined to pattern recognition: it also applies to many high-dimensional data processing tasks such as text categorization, document indexing, and, notably, gene expression data. Reduction comprises two steps: feature extraction and feature selection. The proposed process is a statistical pattern recognition process, and a paradigm for this approach to the problem is summarized herein. The data process comprises four stages: (1) evaluation, (2) acquisition, (3) feature selection, and (4) a statistical model of feature selection. We show how to reduce dimensionality by utilizing a group and convexity model. The paper concludes with an integrated approach to implementing the dimensionality reduction process, with an effort to maximize efficiency and accuracy.
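To make the contrast concrete, the following is a minimal sketch of the ranking-based feature selection that the abstract describes as the traditional baseline: features are scored individually (here by variance) and the top-k columns are kept. All names and the scoring criterion are illustrative assumptions, not taken from the paper, which instead proposes a group clustering and convex optimization model.

```python
# Hypothetical sketch of traditional ranking-based feature selection,
# the baseline the paper contrasts with its group/convexity model.

def variance(col):
    """Population variance of one feature column."""
    m = sum(col) / len(col)
    return sum((x - m) ** 2 for x in col) / len(col)

def select_top_k(rows, k):
    """Rank features by variance and keep the k highest-scoring columns."""
    cols = list(zip(*rows))                       # column-major view of the data
    scores = [variance(c) for c in cols]          # one score per feature
    keep = sorted(range(len(cols)),
                  key=lambda j: scores[j],
                  reverse=True)[:k]               # indices of the top-k features
    keep.sort()                                   # preserve original column order
    return [[row[j] for j in keep] for row in rows]

# Toy data: column 2 is constant and carries no information,
# so ranking by variance discards it.
data = [
    [1.0, 5.0, 0.1],
    [2.0, 5.1, 0.1],
    [3.0, 4.9, 0.1],
    [4.0, 5.0, 0.1],
]
reduced = select_top_k(data, 2)
```

A per-feature ranking like this ignores interactions between features, which is one motivation for the group-based formulation the paper pursues.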


Keywords: Dimensionality reduction · Feature selection · Feature extraction · Group and convexity model



Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, China