Smart Playground: A Tangible Interactive Platform with Regular Toys for Young Kids

  • Duc-Minh Pham
  • Thinh Nguyen-Vo
  • Minh-Triet Tran
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 528)

Abstract

In the modern world, children benefit from interactive toys that foster learning and imagination. Our approach, called Smart Playground, augments common toys with additional information and interaction on the surface that contains them. Popular recognition methods rely on the three color channels and local features to recognize objects. However, children's toys often carry varied pictures in different colors drawn on many small components, so depth data is valuable here: each toy usually has a distinctive shape that sets it apart from the others. In this paper, we use an RGB-D sensor to capture both the color and the shape of objects. To learn the training set of toys, a convolutional neural network represents the data (both color and depth) as high-level feature vectors. Combining the two results, the 3D recognition accuracy exceeds 90%.
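The sketch below is not the authors' code; it only illustrates the general pipeline the abstract describes, with a pretrained torchvision network and a scikit-learn linear SVM substituted for the Caffe-based setup used in the paper. All helper names (cnn_features, depth_to_rgb, combined_descriptor) are hypothetical.

```python
# Minimal sketch (assumptions noted above): encode the RGB image and a
# colorized depth map with a pretrained CNN, concatenate the two feature
# vectors, and classify the toy with a linear SVM.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import LinearSVC

# Pretrained CNN used as a fixed feature extractor (classifier head removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def cnn_features(image_hwc: np.ndarray) -> np.ndarray:
    """Return a high-level feature vector for one H x W x 3 uint8 image."""
    with torch.no_grad():
        x = preprocess(image_hwc).unsqueeze(0)
        return backbone(x).squeeze(0).numpy()

def depth_to_rgb(depth_hw: np.ndarray) -> np.ndarray:
    """Normalize a raw depth map to 0-255 and replicate it to 3 channels
    so the same pretrained network can encode shape information."""
    d = depth_hw.astype(np.float32)
    d = 255.0 * (d - d.min()) / max(d.max() - d.min(), 1e-6)
    return np.repeat(d.astype(np.uint8)[:, :, None], 3, axis=2)

def combined_descriptor(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Concatenate color and shape (depth) features for one toy view."""
    return np.concatenate([cnn_features(rgb),
                           cnn_features(depth_to_rgb(depth))])

# Training/prediction: X holds one combined descriptor per toy image, y the labels.
# classifier = LinearSVC().fit(X, y)
# predicted_toy = classifier.predict([combined_descriptor(rgb_new, depth_new)])
```

Keeping the CNN fixed and training only a linear classifier on the concatenated color and depth features mirrors the two-stream "represent, then combine" idea in the abstract while staying small enough to run on a single desktop machine.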

Keywords

RGB-D · 3D recognition · Deep learning · Convolutional neural network

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Duc-Minh Pham¹
  • Thinh Nguyen-Vo¹
  • Minh-Triet Tran¹
  1. Faculty of Information Technology, University of Science, VNU-HCM, Ho Chi Minh City, Vietnam