ASPIRE: Automatic scanner position reconstruction

Abstract

Recent advances in 3D laser range scanning have led to significant improvements in capturing and modeling 3D environments, enabling the creation of highly expressive and semantically rich 3D models of indoor environments, commonly known as building information models. Although state-of-the-art methods can generate faithful architectural 3D building models, most of them rely explicitly on prior knowledge of the scanner positions in order to reconstruct the models successfully. In real-world applications, however, this metadata is typically lost after point cloud registration, rendering such methods inapplicable in practice. We therefore present a novel pipeline that automatically and accurately reconstructs the original scanner positions under very challenging conditions, without requiring any prior knowledge about the environment or the dataset. Being independent of any laser range scanner manufacturer, it can be applied to almost every real-world LiDAR application. Our method exploits only information derived from the raw point data and is applicable to all scientific and industrial settings in which the original scan positions are typically lost after registration by the proprietary software provided by the scanner manufacturers. We demonstrate the validity of our approach by evaluating it on several real-world and synthetic indoor environments.
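The abstract does not detail the pipeline itself, but the key premise is that the raw point data alone carries cues about where the scanner stood. One such cue: a terrestrial scanner samples surfaces most densely close to itself, so the point density on the floor peaks near the tripod. The following minimal sketch (the function name and the histogram-peak approach are illustrative assumptions, not the authors' actual method) shows how such a density cue could yield a crude ground-plane position estimate:

```python
import numpy as np

def estimate_scan_position_2d(points, cell=0.2):
    """Crude scanner x-y estimate from a raw point cloud.

    Bins the points into a 2D grid on the ground plane and returns the
    center of the densest cell; for a single terrestrial scan, sampling
    density (and hence the histogram peak) is highest near the tripod.
    """
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell).astype(int)     # cell index per point
    counts = np.zeros(idx.max(axis=0) + 1, dtype=int)
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)       # 2D occupancy histogram
    peak = np.unravel_index(counts.argmax(), counts.shape)
    return mins + (np.array(peak) + 0.5) * cell        # center of densest cell

# Synthetic single scan: angular sampling around a scanner at (2, 3)
# gives point density that falls off with range, peaking at the scanner.
rng = np.random.default_rng(0)
r = rng.uniform(0.01, 5.0, 20000)
theta = rng.uniform(0.0, 2.0 * np.pi, 20000)
pts = np.column_stack([2.0 + r * np.cos(theta),
                       3.0 + r * np.sin(theta),
                       np.zeros(20000)])
pos = estimate_scan_position_2d(pts)
```

In a registered multi-scan cloud this single global peak is of course insufficient; a realistic pipeline would have to detect and separate multiple density maxima robustly, which is where robust statistics and shape-fitting techniques come into play.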

Acknowledgements

We acknowledge Dr. Claudio Mura, Prof. Yasutaka Furukawa, Prof. Satoshi Ikehata, Prof. Reinhard Klein and Dr. Sebastian Ochmann for the acquisition of the 3D point clouds. The 3D scanning of the datasets Cottage, Penthouse, G82, Synth1 and Synth2 has partially been supported by the EU FP7 People Programme (Marie Curie Actions) under REA Grant Agreement no. 290227.

Author information

Corresponding author

Correspondence to Georgios-Tsampikos Michailidis.

Ethics declarations

Conflict of interest

G.-T. Michailidis declares that he/she has no conflict of interest. R. Pajarola declares that he/she has no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Michailidis, GT., Pajarola, R. ASPIRE: Automatic scanner position reconstruction. Vis Comput 35, 1209–1221 (2019). https://doi.org/10.1007/s00371-019-01711-9

Keywords

  • LiDAR reconstruction
  • Interiors reconstruction
  • Point cloud processing
  • Point pattern analysis