
Weight Estimation of Lifted Object from Body Motions Using Neural Network

  • Tomoki Oji
  • Yasutoshi Makino
  • Hiroyuki Shinoda
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10894)

Abstract

In this paper, we propose a machine-learning-based method that estimates the mass of an object from the body motion performed to lift it. In the field of behavior recognition and prediction, previous studies have focused on estimating the current or future state of a person from his/her motion. In contrast, this research estimates information about an object in contact with a person. Using this method, we can obtain a rough estimate of an object’s mass without using a weighing machine. Such a measurement system would be useful in several applications, for example, estimating the excess weight of baggage before checking in at an airport. We believe that this system could also be used to evaluate haptic illusions such as the size–weight illusion. The proposed system detects human-body joints and uses them as the input data for machine learning. We created a neural network that estimates an object’s mass in real time, using data from a single person for training. The experimental results showed that the proposed system could estimate an object’s mass more accurately than human senses.
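As a rough illustration of the kind of pipeline the abstract describes (and not the authors' exact architecture), the sketch below regresses a scalar mass from a fixed window of tracked joint coordinates. The joint count, window length, layer sizes, use of batch normalization, and the PyTorch framework are all assumptions made for this example.

import torch
import torch.nn as nn

# Minimal sketch of a mass-regression network over tracked body joints.
# Assumptions (not taken from the paper): 25 joints with (x, y, z)
# coordinates, a fixed window of 30 frames flattened into one input
# vector, and a small fully connected architecture.
NUM_JOINTS = 25
NUM_FRAMES = 30
INPUT_DIM = NUM_JOINTS * 3 * NUM_FRAMES

class MassEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(INPUT_DIM, 256),
            nn.BatchNorm1d(256),
            nn.ReLU(),
            nn.Linear(256, 64),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.Linear(64, 1),  # regressed mass (e.g., in kilograms)
        )

    def forward(self, x):
        # x: (batch, NUM_FRAMES, NUM_JOINTS, 3) joint trajectories
        return self.net(x.flatten(start_dim=1))

# Example: one training step on a placeholder batch of lifting motions.
model = MassEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

motions = torch.randn(8, NUM_FRAMES, NUM_JOINTS, 3)  # placeholder pose data
masses = torch.rand(8, 1) * 10.0                     # placeholder labels (kg)

optimizer.zero_grad()
loss = loss_fn(model(motions), masses)
loss.backward()
optimizer.step()

In practice, the joint trajectories would come from a real-time pose-estimation system rather than random tensors, and the window of frames would be cropped around the lifting motion.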

Keyword

Machine learning 

Acknowledgments

This research was supported by JST PRESTO 17939983. We would like to thank Editage (www.editage.jp) for English language editing.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Tomoki Oji (1)
  • Yasutoshi Makino (1, 2)
  • Hiroyuki Shinoda (1)
  1. Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
  2. JST PRESTO, Tokyo, Japan