Estimation of the Pressing Force from Finger Image by Using Neural Network

  • Yoshinori Inoue
  • Yasutoshi Makino
  • Hiroyuki Shinoda
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10894)

Abstract

In this paper, we propose a method that estimates the contact force applied to a hard surface from a single visual image of a finger by using a neural network. In general, it is difficult to estimate the applied force on a hard object from visual images alone, because the object surface hardly moves. We therefore focus on the human side: when a person pushes an object, the posture of the hand reflects how hard he or she is pushing, so observing the state of the body reveals this haptic information. We use a convolutional neural network (CNN) to learn the relationship between the applied force and the finger posture, training a separate model for each individual. The evaluation shows that the root mean square error against the actual force is approximately 0.5 N in the best case, which corresponds to 2.5% of the 0–20 N dynamic range of the applied force.
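To make the pipeline concrete, the sketch below shows a minimal CNN force regressor of the kind the abstract describes: a single finger image in, a scalar force estimate out, trained with a mean-squared-error loss and evaluated with RMSE against the 0–20 N range. The paper does not publish its architecture, so the layer sizes, input resolution, and framework choice (PyTorch) are illustrative assumptions, not the authors' actual model.

import torch
import torch.nn as nn

class ForceRegressionCNN(nn.Module):
    """Maps one grayscale finger image to a scalar pressing force in newtons.
    All layer sizes here are illustrative assumptions, not the paper's design."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2),   # e.g. 128x128 -> 64x64
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                                # global pooling -> 64 features
        )
        self.regressor = nn.Linear(64, 1)  # single continuous output: force (N)

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1))

def rmse(pred, target):
    # Evaluation metric from the abstract: 0.5 N RMSE is ~2.5% of the 0-20 N range.
    return torch.sqrt(torch.mean((pred - target) ** 2))

model = ForceRegressionCNN()   # per the abstract, one model is trained per participant
criterion = nn.MSELoss()       # training loss; RMSE is reported at evaluation time

Because the method trains individual models, one instance of such a network would be fitted separately to each participant's image/force pairs.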

Keywords

Force sensing · Convolutional neural network · Augmented reality

Acknowledgments

This work was supported by JST PRESTO 17939983.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Yoshinori Inoue (1)
  • Yasutoshi Makino (1, 2)
  • Hiroyuki Shinoda (1)

  1. University of Tokyo, Kashiwa, Japan
  2. JST PRESTO, Kashiwa, Japan
