
A Survey on 3D Visual Tracking of Multicopters

  • Qiang Fu
  • Xiang-Yang Chen
  • Wei He
Research Article

Abstract

Three-dimensional (3D) visual tracking of a multicopter (where the camera is fixed while the multicopter is moving) means continuously recovering the six-degree-of-freedom pose of the multicopter relative to the camera. It can be used in many applications, such as precision terminal guidance and control algorithm validation for multicopters. However, it is difficult for many researchers to build a 3D visual tracking system for multicopters (VTSM) using cheap, off-the-shelf cameras. This paper first gives an overview of the three key technologies of a 3D VTSM: multi-camera placement, multi-camera calibration, and pose estimation for multicopters. Then, some representative 3D visual tracking systems for multicopters are introduced. Finally, the future development of 3D VTSMs is analyzed and summarized.
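
As a minimal illustration of the pose-estimation step described above, the sketch below recovers the six-degree-of-freedom pose of a multicopter from a single calibrated, fixed camera using OpenCV's PnP solver. It is not the survey's method; the marker layout, pixel detections, and camera intrinsics are hypothetical values chosen only for the example.

```python
# Minimal sketch (illustrative, not the survey's method): estimate the 6-DoF pose
# of a multicopter seen by one calibrated, fixed camera via the PnP problem.
import numpy as np
import cv2

# 3D coordinates of markers rigidly attached to the multicopter, expressed in the
# body frame (metres) -- hypothetical layout.
object_points = np.array([
    [ 0.10,  0.10, 0.0],
    [-0.10,  0.10, 0.0],
    [-0.10, -0.10, 0.0],
    [ 0.10, -0.10, 0.0],
], dtype=np.float64)

# Corresponding 2D detections in the image (pixels) -- assumed to come from a
# marker/LED detector running on the fixed camera's video stream.
image_points = np.array([
    [652.4, 388.1],
    [598.7, 391.0],
    [601.2, 441.5],
    [655.0, 438.3],
], dtype=np.float64)

# Camera intrinsics from an offline calibration (illustrative values).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume negligible lens distortion for this sketch

# Solve the PnP problem: pose of the body frame relative to the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)                 # rotation: body frame -> camera frame
    print("Rotation matrix:\n", R)
    print("Translation (m):", tvec.ravel())    # multicopter position in the camera frame
```

In a real VTSM, such per-frame pose estimates are typically fused across multiple cameras and filtered over time to reject marker occlusions and detection outliers.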

Keywords

Multicopter, three-dimensional (3D) visual tracking, camera placement, camera calibration, pose estimation



Acknowledgements

This work was supported by the National Key Research and Development Program of China (No. 2017YFB1300102) and the National Natural Science Foundation of China (No. 61803025).


Copyright information

© Institute of Automation, Chinese Academy of Sciences and Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China
  2. Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China
