Detecting rigid links between sensors for automatic sensor space alignment in virtual environments

Original Article

Abstract

Simultaneous use of multiple sensor systems provides improved accuracy and tracking range compared to use of a single sensor system for virtual reality applications. However, calibrating multiple sensor technologies is non-trivial and, at a minimum, requires significant, and likely regular, user-actioned calibration procedures. To enable ambient sensor calibration, we present techniques for automatically identifying relations between rigidly linked 6DoF and 3DoF sensors belonging to different sensor systems for body tracking. These techniques allow for subsequent automatic alignment of the sensor systems. Two techniques are presented, analysed in simulation under varying noise and latency conditions, and applied to two case studies. The first case study matched sensors tracked by a gold-standard rigid-body tracker to one of six rigid bodies tracked by a first-generation Kinect sensor, with each sensor identified correctly in at least 76% of estimates. The second case study was an interactive version of the system that can detect a change in sensor configuration in 1–2 s and requires only movements of less than 15 cm or \(90^\circ\). Our methods represent a key step in creating highly accessible multi-device 3D virtual environments.
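As background intuition for why such automatic identification is possible before the systems are aligned: when two 6DoF sensors are rigidly attached to the same body, their incremental motions \(A_i\) and \(B_i\) satisfy \(A_i X = X B_i\) for some fixed sensor-to-sensor offset \(X\), so the incremental rotations are similar transforms and share the same rotation angle in any coordinate frame. The Python sketch below scores candidate sensor pairs with that invariant; it is a minimal illustration under our own assumptions (hypothetical function names, time-synchronised unit-quaternion streams), not the authors' published algorithm.

```python
import numpy as np

def rotation_angle(q1, q2):
    """Angle (radians) of the relative rotation between unit quaternions."""
    # For unit quaternions, the scalar part of conj(q1) * q2 is dot(q1, q2);
    # abs() folds the double cover (q and -q encode the same rotation).
    d = abs(float(np.dot(q1, q2)))
    return 2.0 * np.arccos(np.clip(d, 0.0, 1.0))

def link_score(quats_a, quats_b, step=10):
    """Mean discrepancy between incremental rotation angles of two streams.

    quats_a, quats_b: (N, 4) arrays of time-synchronised unit quaternions,
    one sensor from each tracking system. Rigidly linked sensors share
    incremental rotation angles regardless of coordinate frame, so a low
    score suggests the two sensors sit on the same rigid body.
    """
    n = min(len(quats_a), len(quats_b)) - step
    diffs = [abs(rotation_angle(quats_a[i], quats_a[i + step]) -
                 rotation_angle(quats_b[i], quats_b[i + step]))
             for i in range(n)]
    return float(np.mean(diffs))
```

In a full system, such a score could be computed for every cross-system sensor pair over a sliding window, with the lowest-scoring pair proposed as the rigid link; the techniques in the paper additionally handle 3DoF sensors and are evaluated under varying noise and latency.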

Keywords

Tracking · Input devices · Calibration · Usability · Sensors

Acknowledgements

This work was supported by an Australian Postgraduate Allowance Scholarship and the Newcastle Robotics Laboratory at The University of Newcastle, Australia.


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2018

Authors and Affiliations

  1. School of Electrical Engineering and Computing, The University of Newcastle, Callaghan, Australia
