
A New Inverse Nth Gravitation Based Clustering Method for Data Classification

  • Conference paper

Smart Health (ICSH 2016)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10219)

Abstract

Data classification is one of the core technologies in pattern recognition and machine learning, with great theoretical significance and application value. As data acquisition, storage, and transmission capabilities improve and data volumes grow, extracting the essential attributes from massive data and classifying data accurately has become an important research topic. The inverse nth-power gravitational field is essentially a generalization of the inverse-square law in physics, and it can effectively describe the interaction between all particles in a gravitational field. This paper proposes a new inverse nth power gravitation (I-n-PG) based clustering method for data classification. Randomly generated data samples as well as several well-known classification data sets are used to verify the proposed I-n-PG classifier. The experiments show that the proposed I-n-PG classifier performs very well on both kinds of test sets.
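The abstract's core idea can be illustrated with a minimal sketch: treat every training sample as a unit-mass particle, sum the inverse-nth-power "gravitational" pull each class exerts on a query point, and assign the point to the class with the largest total pull. Note this is only an assumed reading of the general I-n-PG principle; the paper's exact force definition, mass assignment, and choice of exponent n are not reproduced here, and the function name `inverse_npg_classify` is hypothetical.

```python
import numpy as np

def inverse_npg_classify(X_train, y_train, x, n=2):
    """Assign x to the class exerting the largest total
    inverse-nth-power 'gravitational' pull on it.

    Sketch only: unit masses are assumed for all samples,
    and n is a free parameter (n=2 mimics Newtonian gravity).
    """
    classes = np.unique(y_train)
    forces = []
    for c in classes:
        pts = X_train[y_train == c]
        # Euclidean distance from x to every training point of class c
        d = np.linalg.norm(pts - x, axis=1)
        d = np.maximum(d, 1e-12)      # guard against division by zero
        # force contribution ~ 1 / d**n, summed over the whole class
        forces.append(np.sum(1.0 / d ** n))
    return classes[int(np.argmax(forces))]

# Toy usage: two well-separated clusters
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
print(inverse_npg_classify(X, y, np.array([0.5, 0.5])))  # pulled toward class 0
print(inverse_npg_classify(X, y, np.array([5.5, 5.5])))  # pulled toward class 1
```

Because distance enters as d^(-n), larger n makes the classifier behave more like a nearest-neighbor rule (nearby points dominate), while smaller n gives distant class members more influence.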



Acknowledgment

This work was supported by the National Natural Science Foundation of China under research project 61273290.

Author information

Corresponding author

Correspondence to Huarong Xu.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Xu, H., Hao, L., Jiang, C., Haq, E.U. (2017). A New Inverse Nth Gravitation Based Clustering Method for Data Classification. In: Xing, C., Zhang, Y., Liang, Y. (eds.) Smart Health. ICSH 2016. Lecture Notes in Computer Science, vol. 10219. Springer, Cham. https://doi.org/10.1007/978-3-319-59858-1_2

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-59858-1_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59857-4

  • Online ISBN: 978-3-319-59858-1

  • eBook Packages: Computer Science, Computer Science (R0)
