Fixed-Budget and Recurrent Data Mining for Online Haptic Perception

  • Lele Cao
  • Fuchun Sun
  • Xiaolong Liu
  • Wenbing Huang
  • Weihao Cheng
  • Ramamohanarao Kotagiri
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10638)


Haptic perception is the task of identifying different targets from haptic input. Haptic data have two prominent features: they arrive sequentially in real time and are temporally correlated, which calls for a fixed-budget and recurrent perception procedure. Building on an efficient and robust spatio-temporal feature representation, we address the problem with a bounded online-sequential learning framework (MBS-ESN) that incorporates the strengths of batch-regularized bootstrapping, a bounded recursive reservoir, and momentum-based estimation. Experimental evaluations show that it outperforms state-of-the-art methods by a large margin in test accuracy, and that its training performance is superior to most compared models in terms of computational complexity and storage efficiency.


Haptic perception · Echo state network · Online learning · Recurrent neural network · Fixed-budget learning
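The core building block named in the abstract, an echo state network (ESN) with a linear readout, can be sketched as follows. All dimensions, weight scales, and the offline ridge-regression readout here are illustrative assumptions for a toy example; the paper's actual MBS-ESN instead trains the readout online under a fixed memory budget with momentum-based estimation, details of which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (input, reservoir, output, sequence length);
# the paper's real configuration is not given on this page.
n_in, n_res, n_out, T = 3, 50, 2, 200

# Random input and recurrent weights; the recurrent matrix is rescaled to
# spectral radius 0.9 (< 1) so the reservoir has the echo state property.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(U):
    """Collect reservoir states for an input sequence U of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)  # leak-free reservoir state update
        states.append(x.copy())
    return np.array(states)

# Synthetic random sequence and targets stand in for real haptic data.
U = rng.standard_normal((T, n_in))
Y = rng.standard_normal((T, n_out))

X = run_reservoir(U)

# Ridge-regression readout (computed in one batch here for simplicity).
lam = 1e-2
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y)
Y_hat = X @ W_out  # predictions, shape (T, n_out)
```

Only `W_out` is trained; the random reservoir is fixed, which is what makes ESN readouts cheap enough to update sequentially as new haptic samples arrive.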



This work is supported by the National Natural Science Foundation of China under grant number 041320190.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Lele Cao (1, 2)
  • Fuchun Sun (1)
  • Xiaolong Liu (1)
  • Wenbing Huang (1)
  • Weihao Cheng (2)
  • Ramamohanarao Kotagiri (2)
  1. Department of Computer Science and Technology, Tsinghua University, Beijing, China
  2. Department of Computing and Information Systems, The University of Melbourne, Melbourne, Australia