
Study of Double SMO Algorithm Based on Attributes Reduction

  • Chen Chen
  • Liu Hong
  • Haigang Song
  • Xueguang Chen
  • TieMin Hou
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5552)

Abstract

To solve the classification problem in data mining, this paper proposes a double SMO algorithm based on attribute reduction. First, attribute reduction deletes irrelevant attributes (dimensions), shrinking the dataset; this lowers the total computation, speeds up training, and makes the resulting classification model easier to understand. Second, the SMO algorithm is applied to a sample of the reduced dataset to obtain an approximate separating hyperplane, from which the support vectors of the original dataset are identified. Finally, SMO is run again on those support vectors to produce the final separating hyperplane. Experiments show that the algorithm reduces memory usage, effectively limits the influence of noise points on the final separating hyperplane, and achieves better precision than Decision Tree, Bayesian, and Neural Network classifiers.
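The three-stage procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses the simplified SMO variant with a linear kernel, a variance filter as a crude stand-in for the paper's attribute reduction, and an assumed margin band (|f(x)| ≤ 1.5) to pick candidate support vectors.

```python
import random

def smo_train(X, y, C=1.0, tol=1e-3, max_passes=20):
    """Simplified SMO with a linear kernel: returns weights w, bias b, alphas."""
    n, d = len(X), len(X[0])
    alpha, b = [0.0] * n, 0.0
    K = [[sum(a * c for a, c in zip(X[i], X[j])) for j in range(n)]
         for i in range(n)]

    def f(i):
        return sum(alpha[k] * y[k] * K[k][i] for k in range(n)) + b

    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = f(i) - y[i]
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = random.randrange(n - 1)          # pick a second index j != i
                if j >= i:
                    j += 1
                Ej = f(j) - y[j]
                ai, aj = alpha[i], alpha[j]
                if y[i] != y[j]:                     # box constraints for the pair
                    L, H = max(0.0, aj - ai), min(C, C + aj - ai)
                else:
                    L, H = max(0.0, ai + aj - C), min(C, ai + aj)
                eta = 2 * K[i][j] - K[i][i] - K[j][j]
                if L == H or eta >= 0:
                    continue
                alpha[j] = min(H, max(L, aj - y[j] * (Ei - Ej) / eta))
                if abs(alpha[j] - aj) < 1e-5:
                    continue
                alpha[i] = ai + y[i] * y[j] * (aj - alpha[j])
                b1 = b - Ei - y[i] * (alpha[i] - ai) * K[i][i] - y[j] * (alpha[j] - aj) * K[i][j]
                b2 = b - Ej - y[i] * (alpha[i] - ai) * K[i][j] - y[j] * (alpha[j] - aj) * K[j][j]
                b = b1 if 0 < alpha[i] < C else b2 if 0 < alpha[j] < C else (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = [sum(alpha[k] * y[k] * X[k][dim] for k in range(n)) for dim in range(d)]
    return w, b, alpha

def reduce_attributes(X, var_threshold=1e-9):
    """Stage 1 (stand-in): drop near-constant columns as 'irrelevant' attributes."""
    n, d = len(X), len(X[0])
    keep = []
    for dim in range(d):
        col = [row[dim] for row in X]
        mean = sum(col) / n
        if sum((v - mean) ** 2 for v in col) / n > var_threshold:
            keep.append(dim)
    return [[row[dim] for dim in keep] for row in X], keep

def double_smo(X, y, sample_frac=0.5, C=1.0):
    """Stages 1-3: reduce attributes, SMO on a sample, SMO on candidate SVs."""
    Xr, keep = reduce_attributes(X)
    idx = random.sample(range(len(Xr)), max(2, int(sample_frac * len(Xr))))
    w0, b0, _ = smo_train([Xr[i] for i in idx], [y[i] for i in idx], C)
    # Stage 2 output: keep points near the approximate margin as candidate SVs.
    cand = [i for i in range(len(Xr))
            if abs(sum(wi * xi for wi, xi in zip(w0, Xr[i])) + b0) <= 1.5]
    if len(cand) < 2:           # fall back to the full set if too few remain
        cand = list(range(len(Xr)))
    w, b, _ = smo_train([Xr[i] for i in cand], [y[i] for i in cand], C)
    return w, b, keep
```

On a toy separable dataset with one constant (irrelevant) attribute, the pipeline drops that column, trains on the sample, and retrains on the margin candidates; the second SMO pass works on a smaller set, which is the memory saving the abstract describes.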

Keywords

Data mining · Support vector machine · Attribute reduction · Training algorithm



Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Chen Chen (1)
  • Liu Hong (1)
  • Haigang Song (2)
  • Xueguang Chen (1)
  • TieMin Hou (3)

  1. Institute of System Engineering, Huazhong University of Science and Technology, Wuhan, P.R. China
  2. Basic Research Service of the Ministry of Science and Technology of the P.R. China, Beijing, P.R. China
  3. Key Lab. for Image Processing and Intelligent Control, Huazhong University of Science and Technology, Wuhan, P.R. China
