
Virtualizing 3D Real Environments Using 2D Pictures Based on Photogrammetry


Part of the Lecture Notes in Computer Science book series (LNCS, volume 13264)

Abstract

Virtual creatures are situated agents capable of interacting with the virtual environment they inhabit. Experiments with virtual creatures require an environment in which they can develop. Depending on the task, a scene from the real world may be the best candidate, and it is possible to generate a virtual representation tailored to the specific case study; this process is usually known as 3D reconstruction. This paper focuses on that possibility. It presents a brief survey of the most common approaches to 3D reconstruction, along with some of their strengths and weaknesses. Building on this background, a photogrammetry-based reconstruction workflow is proposed and tested. The workflow is evaluated in both indoor and outdoor settings with respect to its ability to generate a usable environment for virtual-creature experimentation. The results are based on a community database and on a personally generated database used to test the proposed workflow. They show that reconstructing a 3D environment through photogrammetry is possible and that obtaining a virtual representation of a real-world environment is feasible.
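The workflow summarized above rests on standard multi-view geometry: matched image features from calibrated views are lifted back into 3D. As a minimal sketch of that core step, the example below linearly triangulates one 3D point from its projections in two synthetic views. All camera parameters and the point itself are invented for illustration and are not taken from the paper.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 projection matrices."""
    # Each observed pixel contributes two linear constraints on the
    # homogeneous 3D point X; stack them and take the SVD null vector.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two synthetic calibrated cameras: identity pose, and a 1-unit baseline along x.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])  # ground-truth 3D point

# Project the point into each view and normalize to pixel coordinates.
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true, atol=1e-6))
```

In a full photogrammetry pipeline this triangulation is preceded by feature matching and camera-pose estimation (structure from motion) and followed by dense multi-view stereo and surface reconstruction; the noise-free two-view case here only illustrates the geometry.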

Keywords

  • Virtual environment
  • 3D reconstruction
  • Photogrammetry

Supported by CONACYT.



Acknowledgments

This research was made possible by the National Scholarship Program of the National Council for Science and Technology (CONACYT).

Author information


Corresponding author

Correspondence to Rafael Mercado Herrera.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Herrera, R.M., Jiménez, V.M., Corchado, M.A.R., Corchado, F.F.R., Romero, J.R.M. (2022). Virtualizing 3D Real Environments Using 2D Pictures Based on Photogrammetry. In: Vergara-Villegas, O.O., Cruz-Sánchez, V.G., Sossa-Azuela, J.H., Carrasco-Ochoa, J.A., Martínez-Trinidad, J.F., Olvera-López, J.A. (eds) Pattern Recognition. MCPR 2022. Lecture Notes in Computer Science, vol 13264. Springer, Cham. https://doi.org/10.1007/978-3-031-07750-0_16


  • DOI: https://doi.org/10.1007/978-3-031-07750-0_16


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-07749-4

  • Online ISBN: 978-3-031-07750-0
