An Iterative Algorithm for Selecting the Parameters in Kernel Methods

  • Tan Zhiying
  • She Kun
  • Song Xiaobo
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 240)

Abstract

Given a training sample set, the performance of kernel methods depends largely on the choice of kernel function. This motivates learning the kernel and its parameters. In this paper, a parameter-selection algorithm is proposed to improve computational efficiency. The normalized inner product matrix is taken as the approximation target, and an iterative method is used to compute the optimal bandwidth. Adopting the learned bandwidth greatly improves defect-detection efficiency. We applied the algorithm to detect defects on ticket surfaces. The experimental results indicate that our sampling algorithm not only reduces the error rate but also shortens the detection time.
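The abstract's scheme can be illustrated with a minimal sketch. The paper's exact iterative update is not given here, so the sketch below assumes a golden-section search as a simple stand-in iterative method: it seeks the Gaussian bandwidth sigma whose kernel matrix best approximates the normalized inner product (cosine similarity) matrix in Frobenius norm. All function names and the bracket `[lo, hi]` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kernel_fit_error(sigma, d2, T):
    """Frobenius error between the Gaussian kernel matrix K_sigma and target T."""
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.linalg.norm(K - T)

def select_bandwidth(X, lo=1e-3, hi=100.0, tol=1e-6):
    """Iteratively narrow a bracket [lo, hi] around the bandwidth sigma
    minimizing ||K_sigma - T||_F, where T is the normalized inner
    product matrix of the samples (an assumed stand-in for the
    paper's iterative scheme)."""
    G = X @ X.T
    norms = np.sqrt(np.diag(G))
    T = G / np.outer(norms, norms)                # approximation target
    sq = np.diag(G)
    d2 = sq[:, None] + sq[None, :] - 2.0 * G      # pairwise squared distances
    invphi = (np.sqrt(5.0) - 1.0) / 2.0           # golden ratio reciprocal
    a, b = lo, hi
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:                            # golden-section iteration
        if kernel_fit_error(c, d2, T) < kernel_fit_error(d, d2, T):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)
```

The learned bandwidth would then be plugged into a Gaussian kernel for KPCA-based detection; the search assumes the fit error is unimodal in sigma over the bracket.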

Keywords

Kernel methods · Gaussian kernels · Iterative methods · Kernel PCA · Pre-image

Copyright information

© Springer Science+Business Media Dordrecht (outside the USA) 2013

Authors and Affiliations

  1. Changzhou, China
  2. School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
  3. Institute of Advanced Manufacturing Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Changzhou, China