Real-time 3D reconstruction method using massive multi-sensor data analysis and fusion

  • Seoungjae Cho
  • Kyungeun Cho


This paper proposes a method to reconstruct three-dimensional (3D) objects through real-time fusion and analysis of multiple sensor data streams. The goal is a realistic 3D visualization with which a remote pilot can intuitively control an unmanned robot, exploiting the characteristics of massive sensor data. The proposed 3D reconstruction system comprises a 3D and two-dimensional (2D) data segmentation method, a per-object 3D reconstruction method, and a projective texture mapping method. Specifically, we propose applying both a 2D region extraction method and a 3D mesh modeling method to each object. The proposed schemes are implemented as a real-time application to verify their real-time performance, demonstrating that 3D meshes can be modeled in real time with the proposed method. The method enables remote robot control with real-time 3D rendering of remote scenes, which is essential for tasks in areas that humans cannot easily access.
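The paper's own segmentation algorithm is not reproduced on this page; as a minimal sketch of the kind of ground/object separation a point-cloud pipeline like this performs, the following splits a LiDAR-style cloud by a simple height threshold around an assumed ground plane (the function name, threshold values, and toy data are all hypothetical, not the authors' method):

```python
import numpy as np

def segment_ground(points, ground_z=0.0, tol=0.2):
    """Split an N x 3 point cloud into ground and object points
    using a height threshold around an assumed flat ground plane."""
    z = points[:, 2]
    ground_mask = np.abs(z - ground_z) <= tol
    return points[ground_mask], points[~ground_mask]

# Toy cloud: three near-ground points and two elevated object points.
cloud = np.array([
    [0.0, 0.0, 0.05],
    [1.0, 0.0, -0.10],
    [0.0, 1.0, 0.00],
    [0.5, 0.5, 1.20],  # object point
    [0.6, 0.5, 1.50],  # object point
])
ground, objects = segment_ground(cloud)
print(len(ground), len(objects))  # 3 2
```

Real systems (including the one described here) replace the flat-plane assumption with sloped-terrain or local-convexity criteria, but the split into ground and per-object point sets is the common first step before per-object meshing and texture mapping.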


Unmanned ground vehicle · 3D reconstruction · 3D point cloud · Object segmentation · Template mesh



This research was supported by the BK21 Plus project of the National Research Foundation of Korea and by a grant from the Agency for Defense Development, under Contract #UD150017ID.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Multimedia Engineering, Dongguk University-Seoul, Seoul, Republic of Korea
