Visual–Tactile Fusion Object Recognition Using Joint Sparse Coding
Visual and tactile measurements offer complementary properties, which makes their fusion particularly attractive: it supports the robust and accurate object recognition required by many automation systems. In this chapter, a visual–tactile fusion framework is developed for object recognition tasks. The tactile sequence is represented with a multivariate time-series model, and the image is characterized by a covariance descriptor. Further, a joint group kernel sparse coding method is designed to tackle the intrinsically weak pairing problem in visual–tactile data samples. Finally, a visual–tactile dataset comprising 18 household objects is constructed for validation. The experimental results show that considering both visual and tactile input is beneficial and that the proposed method provides an effective fusion strategy.
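To make the image representation concrete, the following is a minimal sketch of a covariance descriptor: each pixel is mapped to a feature vector and the image is summarized by the covariance matrix of those vectors. The particular feature set used here (intensity, pixel coordinates, and gradient magnitudes) is an illustrative assumption, not necessarily the one used in the chapter.

```python
import numpy as np

def covariance_descriptor(image):
    """Compute a covariance descriptor for a grayscale image.

    Each pixel contributes an illustrative 5-D feature vector
    (intensity, x, y, |Ix|, |Iy|); the descriptor is the 5x5
    covariance matrix of these vectors over the whole image.
    """
    img = image.astype(float)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    Iy, Ix = np.gradient(img)            # image gradients along y and x
    feats = np.stack([
        img.ravel(),
        xs.ravel().astype(float),
        ys.ravel().astype(float),
        np.abs(Ix).ravel(),
        np.abs(Iy).ravel(),
    ], axis=0)                           # shape (5, h*w): one row per feature
    return np.cov(feats)                 # 5x5 symmetric PSD matrix

# Usage on a random test image: the result is a compact, fixed-size
# descriptor regardless of image resolution.
C = covariance_descriptor(np.random.rand(32, 32))
print(C.shape)  # (5, 5)
```

Because covariance descriptors live on the manifold of symmetric positive semi-definite matrices, they are typically compared with manifold-aware metrics or kernels, which is compatible with the kernel sparse coding formulation used in this chapter.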