
Swap Kernel Regression

  • Masaharu Yamamoto
  • Koichiro Yamauchi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11727)

Abstract

Recent developments in the field of artificial intelligence have increased the demand for high-performance computing devices. Edge devices, however, are highly restricted not only in computational power but also in memory capacity. This study proposes a method that enables both inference and learning on an edge device. The proposed method is a kernel machine that operates in such restricted environments by collaborating with a secondary storage system. Kernel parameters that are not essential for computing the outputs for upcoming inputs are moved to secondary storage to free space in main memory, and essential kernel parameters held in secondary storage are loaded back into main memory when required. With this strategy, the system can perform recognition/regression tasks without reducing its generalization capability.
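The swap strategy described above can be pictured as a budgeted kernel regressor backed by a secondary store. The following is a minimal Python sketch, not the authors' algorithm: the class name SwapKernelRegressor, the LRU eviction rule, the swap-in threshold, and all parameter names are illustrative assumptions; only the overall idea (evict non-essential kernels to secondary storage, reload essential ones on demand, and predict with a GRNN-style normalized kernel sum) follows the abstract and keywords.

    import numpy as np
    from collections import OrderedDict

    # Sketch of a budgeted kernel regressor backed by a secondary store.
    # Class and parameter names are illustrative assumptions.
    class SwapKernelRegressor:
        def __init__(self, budget=50, bandwidth=1.0):
            self.budget = budget          # max kernels resident in main memory
            self.h = bandwidth            # Gaussian kernel width
            self.memory = OrderedDict()   # kernel id -> (center, target); LRU order
            self.storage = {}             # stand-in for the secondary storage

        def _gauss(self, x, c):
            return np.exp(-np.sum((x - c) ** 2) / (2.0 * self.h ** 2))

        def learn(self, x, y, kid):
            # Place (or refresh) a kernel in main memory; evict the least
            # recently used kernel to secondary storage when over budget.
            self.memory[kid] = (np.asarray(x, dtype=float), float(y))
            self.memory.move_to_end(kid)
            while len(self.memory) > self.budget:
                old, params = self.memory.popitem(last=False)
                self.storage[old] = params

        def _swap_in(self, x):
            # Reload stored kernels close enough to the query to influence
            # the output (the 1e-3 threshold is an arbitrary choice here).
            near = [k for k, (c, _) in self.storage.items()
                    if self._gauss(x, c) > 1e-3]
            for k in near:
                c, y = self.storage.pop(k)
                self.learn(c, y, k)

        def predict(self, x):
            x = np.asarray(x, dtype=float)
            self._swap_in(x)
            if not self.memory:
                return 0.0
            w = np.array([self._gauss(x, c) for c, _ in self.memory.values()])
            ys = np.array([y for _, y in self.memory.values()])
            # GRNN-style normalized (softmax-like) kernel weighting.
            return float(w @ ys / (w.sum() + 1e-12))

As a usage example, learning 100 samples of a sine wave under a budget of 40 resident kernels forces older kernels out to the secondary store, yet a query still recovers a smoothed estimate once the nearby kernels are swapped back in:

    reg = SwapKernelRegressor(budget=40, bandwidth=0.3)
    for i, x in enumerate(np.linspace(0.0, 2.0 * np.pi, 100)):
        reg.learn(np.array([x]), np.sin(x), kid=i)
    print(reg.predict(np.array([1.5])))   # smoothed estimate of sin(1.5)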

Keywords

Swap kernel regression · Regression · Kernel machine · Softmax function · General regression neural network · Secondary storage


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Chubu University, Kasugai, Japan
