Microsoft Kinect™ Range Camera

Chapter in: Time-of-Flight Cameras and Microsoft Kinect™

Abstract

The Microsoft Kinect™ range camera (for simplicity just called Kinect™ in the sequel) is a light-coded range camera capable of estimating the 3D geometry of the acquired scene at 30 fps with VGA (640 × 480) spatial resolution. Besides its light-coded range camera, the Kinect™ also includes a color video camera and an array of microphones. In the context of this book the Kinect™ light-coded range camera is the most interesting component, and the name Kinect™ will often refer to it rather than to the whole product.
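
As a rough illustration of the streams described above, the following sketch shows how one might grab a single VGA depth frame and color frame from a Kinect™. It is not part of the chapter: it assumes the open-source libfreenect driver and its Python wrapper (module name freenect), neither of which is covered in this book.

# Minimal sketch (assumption: the open-source libfreenect driver and its
# Python wrapper are installed and a Kinect is connected over USB).
import freenect

def grab_frames():
    # sync_get_depth() returns the light-coded range camera's depth map as a
    # (480, 640) uint16 array of raw 11-bit values, plus a device timestamp.
    depth, _ = freenect.sync_get_depth()
    # sync_get_video() returns the color camera's frame as a (480, 640, 3)
    # uint8 RGB array, plus a device timestamp.
    rgb, _ = freenect.sync_get_video()
    return depth, rgb

if __name__ == "__main__":
    depth, rgb = grab_frames()
    print("depth frame:", depth.shape, depth.dtype)  # expected: (480, 640) uint16
    print("color frame:", rgb.shape, rgb.dtype)      # expected: (480, 640, 3) uint8

Note that with this driver the raw 11-bit depth values are disparity-like codes rather than metric distances; converting them to depth in millimeters requires an additional calibration step.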

Copyright information

© 2012 The Author(s)

About this chapter

Cite this chapter

Mutto, C.D., Zanuttigh, P., Cortelazzo, G.M. (2012). Microsoft Kinect™ Range Camera. In: Time-of-Flight Cameras and Microsoft Kinect™. SpringerBriefs in Electrical and Computer Engineering. Springer, Boston, MA. https://doi.org/10.1007/978-1-4614-3807-6_3

  • DOI: https://doi.org/10.1007/978-1-4614-3807-6_3

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4614-3806-9

  • Online ISBN: 978-1-4614-3807-6
