Abstract
This paper focuses on the development of a computer-vision-based system for estimating the motion of an MAV (X, Y, Z, and yaw). The system integrates a set of cameras, image filtering (physical and digital), and position estimation based on system calibration and an algorithm built on experimentally derived equations. It represents a low-cost alternative, both computationally and economically, capable of estimating the position of an MAV with a significantly low error on a millimeter scale, so that almost any camera available on the market can be used. The system was developed to offer an affordable platform for research and development of new autonomous and intelligent systems for indoor environments.
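As a rough illustration of the pipeline described above (multi-camera capture, digital color filtering, calibration, and a pixel-to-millimeter mapping), the following Python/OpenCV sketch shows one way such an estimator could be structured. The camera layout (one top-view camera for X, Y, and yaw; one side-view camera for Z), the colored-marker detection, and the linear calibration gains are illustrative assumptions, not the experimentally found equations reported in the paper.

```python
# Minimal sketch of a multi-camera position estimator for an MAV.
# Assumptions (not from the paper): one top-view camera gives X, Y and yaw,
# one side-view camera gives Z; the MAV carries two colored markers; the
# pixel-to-millimeter mapping is linear with experimentally tuned gains.
import cv2
import numpy as np

# Hypothetical calibration gains (mm per pixel), found during system calibration.
MM_PER_PX_TOP = 1.8
MM_PER_PX_SIDE = 2.1

def marker_centroid(frame_bgr, hsv_low, hsv_high):
    """Digital filtering step: HSV threshold + largest-contour centroid (px)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_low, hsv_high)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    if not cnts:
        return None
    m = cv2.moments(max(cnts, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def estimate_pose(top_frame, side_frame, front_range, rear_range):
    """Return (x_mm, y_mm, z_mm, yaw_rad) or None if a marker is lost."""
    front = marker_centroid(top_frame, *front_range)   # front marker, top view
    rear = marker_centroid(top_frame, *rear_range)     # rear marker, top view
    side = marker_centroid(side_frame, *front_range)   # front marker, side view
    if front is None or rear is None or side is None:
        return None
    center_px = (front + rear) / 2.0
    x_mm, y_mm = center_px * MM_PER_PX_TOP              # linear calibration (assumed)
    z_mm = side[1] * MM_PER_PX_SIDE                     # image row -> height (assumed)
    yaw = np.arctan2(front[1] - rear[1], front[0] - rear[0])
    return x_mm, y_mm, z_mm, yaw
```

In a real setup the gains and offsets would come from the calibration step of the system, and the Z estimate would be referenced to the known camera height rather than the raw pixel row.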
Keywords
- Computer vision
- Camera array
- Quadrotor
- Motion estimation
- Path planning
- MAV
- Control system
Acknowledgement
This work is part of the project “Perception and localization system for autonomous navigation of rotor micro aerial vehicle in gps-denied environments, VisualNavDrone”, 2016-PIC-024, from the Universidad de las Fuerzas Armadas ESPE, directed by Dr. Wilbert G. Aguilar.
Ethics declarations
The authors declare no conflict of interest.
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
About this paper
Cite this paper
Aguilar, W.G., Manosalvas, J.F., Guillén, J.A., Collaguazo, B. (2018). Robust Motion Estimation Based on Multiple Monocular Camera for Indoor Autonomous Navigation of Micro Aerial Vehicle. In: De Paolis, L., Bourdot, P. (eds) Augmented Reality, Virtual Reality, and Computer Graphics. AVR 2018. Lecture Notes in Computer Science(), vol 10851. Springer, Cham. https://doi.org/10.1007/978-3-319-95282-6_39
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-95281-9
Online ISBN: 978-3-319-95282-6