
Multi-sensor data fusion for accurate surface modeling

  • Methodologies and Application

Abstract

Multi-sensor data fusion is advantageous when combining data from heterogeneous range sensors to scan a scene that contains both fine and coarse details. This paper presents a new multi-sensor range data fusion method that increases the descriptive content of the generated surface model. First, a new training framework for the scanned range dataset is presented, in which a relaxed Gaussian mixture model is solved by applying a convex relaxation technique. The range data are then classified using the trained statistical model. In the data fusion experiments, a laser range sensor and a Kinect (V1) are used. Based on this segmentation criterion, range data fusion is performed by integrating the range data of the finer regions, obtained from the laser range sensor, with the coarser regions of the Kinect range data. The fused range information overcomes the weaknesses of the individual sensors: the laser scanner is accurate but slow, whereas the Kinect is fast but less accurate. The surface model generated from the fused range dataset is a highly accurate, realistic model of the scene. The experimental results demonstrate the robustness of the proposed approach.
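To make the overall pipeline concrete, the following minimal sketch illustrates the idea of segmentation-driven fusion under simplifying assumptions. It is not the authors' implementation: it uses a standard EM-fitted Gaussian mixture (scikit-learn's GaussianMixture) in place of the paper's convex-relaxed formulation, a per-point roughness feature stands in for the trained statistical model, the point clouds are assumed to be already registered in a common frame, and the neighborhood size k=20 and the 2 cm gating threshold are illustrative values only.

```python
# Minimal sketch (not the authors' implementation): classify scene regions as
# "fine" or "coarse" from local geometric variation, then fuse by keeping
# Kinect points in coarse regions and laser points in fine regions.
# Assumes aligned point clouds given as (N, 3) NumPy arrays.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import NearestNeighbors


def local_roughness(points, k=20):
    """Per-point surface variation: share of the smallest eigenvalue of the
    local neighborhood covariance (higher value = rougher / finer detail)."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nbrs.kneighbors(points)
    rough = np.empty(len(points))
    for i, neigh in enumerate(idx):
        cov = np.cov(points[neigh].T)          # 3x3 local covariance
        w = np.linalg.eigvalsh(cov)            # eigenvalues, ascending
        rough[i] = w[0] / max(w.sum(), 1e-12)
    return rough


def fuse(laser_pts, kinect_pts, k=20, gate=0.02):
    # Fit a 2-component GMM on the Kinect roughness values; the component
    # with the higher mean is treated as the "fine detail" class.
    rough = local_roughness(kinect_pts, k).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(rough)
    fine_label = int(np.argmax(gmm.means_.ravel()))
    labels = gmm.predict(rough)

    coarse_part = kinect_pts[labels != fine_label]
    fine_centers = kinect_pts[labels == fine_label]
    if len(fine_centers) == 0:
        return kinect_pts                      # no fine regions detected

    # Replace fine regions with the (more accurate) laser points that fall
    # within the gating distance of them; illustrative 2 cm gate.
    nn = NearestNeighbors(n_neighbors=1).fit(fine_centers)
    d, _ = nn.kneighbors(laser_pts)
    fine_part = laser_pts[d.ravel() < gate]
    return np.vstack([coarse_part, fine_part])
```

In the paper's pipeline, the segmentation is driven by the convex-relaxed Gaussian mixture training described above rather than this off-the-shelf EM fit, and the fused point set is subsequently used to generate the final surface model of the scene.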



Author information

Corresponding author

Correspondence to Mahesh K. Singh.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Communicated by V. Loia.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 50 KB)

About this article

Cite this article

Singh, M.K., Dutta, A. & Venkatesh, K.S. Multi-sensor data fusion for accurate surface modeling. Soft Comput 24, 14449–14462 (2020). https://doi.org/10.1007/s00500-020-04797-9
