Vibrotactile Signal Generation from Texture Images or Attributes Using Generative Adversarial Network

Part of the Lecture Notes in Computer Science book series (LNISA, volume 10894)

Abstract

Providing vibrotactile feedback that corresponds to the state of a virtual textured surface allows users to sense its haptic properties. However, hand-tuning such vibrotactile stimuli for every state of the texture is time-consuming. We therefore propose a new approach for building models that automatically generate vibrotactile signals from texture images or attributes. In this paper, we make the first attempt to generate vibrotactile stimuli by leveraging the power of deep generative adversarial training. Specifically, we use conditional generative adversarial networks (GANs) to generate the vibration produced while moving a pen across a surface. A preliminary user study showed that users could not discriminate generated signals from genuine ones and perceived the generated signals as realistic. Our model can thus provide appropriate vibrations according to texture images or their attributes. Our approach is applicable to any case in which users touch various surfaces in a predefined way.
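
To make the setup concrete, below is a minimal sketch of a conditional GAN of the kind the abstract describes: a generator maps a noise vector concatenated with a texture-attribute vector to a vibrotactile signal representation, and a discriminator scores (signal, attribute) pairs. This is an illustrative reconstruction, not the authors' implementation; the framework (PyTorch), layer sizes, attribute dimensionality, and signal representation are all assumptions, and only the conditioning-by-concatenation scheme and the standard cGAN objective follow the paper's description.

# A minimal sketch, NOT the authors' code: a conditional GAN whose generator
# maps noise plus a texture-attribute vector to a flattened vibrotactile
# spectrogram patch, and whose discriminator scores (signal, attribute) pairs.
# All sizes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

NOISE_DIM = 100     # latent noise size (assumed)
ATTR_DIM = 13       # attribute vector length, e.g. one-hot material class (assumed)
SIG_DIM = 32 * 32   # flattened spectrogram patch (assumed shape)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + ATTR_DIM, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 512), nn.ReLU(inplace=True),
            nn.Linear(512, SIG_DIM), nn.Tanh(),  # outputs scaled to [-1, 1]
        )

    def forward(self, z, attr):
        # cGAN-style conditioning: concatenate noise with the attribute vector.
        return self.net(torch.cat([z, attr], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SIG_DIM + ATTR_DIM, 512), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 256), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 1),  # real/fake logit
        )

    def forward(self, sig, attr):
        return self.net(torch.cat([sig, attr], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

# Placeholder batch: recorded pen-on-surface vibration spectrograms would go here.
real_sig = torch.rand(64, SIG_DIM) * 2 - 1
attr = torch.eye(ATTR_DIM)[torch.randint(ATTR_DIM, (64,))]  # one-hot attributes
z = torch.randn(64, NOISE_DIM)

# Discriminator step: real (signal, attribute) pairs -> 1, generated pairs -> 0.
fake_sig = G(z, attr)
d_loss = (bce(D(real_sig, attr), torch.ones(64, 1))
          + bce(D(fake_sig.detach(), attr), torch.zeros(64, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: make the discriminator score generated pairs as real.
g_loss = bce(D(fake_sig, attr), torch.ones(64, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

In a full pipeline, a generated spectrogram-style output would still have to be inverted to a time-domain waveform (for example with a phase-reconstruction method such as Griffin-Lim) before driving a vibrotactile actuator.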

Keywords

  • Vibrotactile signals
  • Generative Adversarial Network

Author information

Corresponding author: Yuki Ban.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Ujitoko, Y., Ban, Y. (2018). Vibrotactile Signal Generation from Texture Images or Attributes Using Generative Adversarial Network. In: Prattichizzo, D., Shinoda, H., Tan, H., Ruffaldi, E., Frisoli, A. (eds) Haptics: Science, Technology, and Applications. EuroHaptics 2018. Lecture Notes in Computer Science, vol. 10894. Springer, Cham. https://doi.org/10.1007/978-3-319-93399-3_3

  • DOI: https://doi.org/10.1007/978-3-319-93399-3_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-93398-6

  • Online ISBN: 978-3-319-93399-3