
Using Aerial Drone Photography to Construct 3D Models of Real World Objects in an Effort to Decrease Response Time and Repair Costs Following Natural Disasters

  • Gil Eckert
  • Steven Cassidy
  • Nianqi Tian
  • Mahmoud E. Shabana
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 943)

Abstract

When a natural disaster occurs, there is often significant damage to vitally important infrastructure. Repair crews must quickly locate the most heavily damaged structures that need immediate attention and determine how to allocate their resources most efficiently, saving time and money without having to assess each area individually. To streamline this process, drone technology can be used to photograph the affected areas. From these photographs, three-dimensional models of the area can be constructed, including point clouds, panoramas, and other three-dimensional representations; this process is called photogrammetry. The first step in constructing a three-dimensional model from two-dimensional photographs is to detect key features that match across all of the photos, using David Lowe's Scale-Invariant Feature Transform (SIFT) algorithm. Pairwise matches are then computed with a k-nearest-neighbor algorithm that compares the images one pair at a time and records the pixel coordinates of matching features. These pixel matches are passed to an algorithm that estimates the relative camera positions of the photos in 3D space, and those positions are used to orient the photos so that a 3D model can be generated. The purpose of this research is to determine the best method for generating a 3D model of a damaged area with maximum clarity, in a relatively short period of time, and at the lowest possible cost, thereby allowing repair crews to allocate resources more efficiently.
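The feature detection, pairwise matching, and relative pose steps described above can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal example assuming OpenCV's bundled SIFT (cv2.SIFT_create, available in recent OpenCV releases), placeholder image file names, and an assumed camera intrinsic matrix K. A full structure-from-motion pipeline would repeat the matching over every image pair and triangulate 3D points from the recovered poses.

```python
# Minimal sketch of SIFT feature matching and relative pose estimation
# between two overlapping aerial photos. File names and the intrinsic
# matrix K are placeholders, not values from the study.
import cv2
import numpy as np

# Load two overlapping aerial photos in grayscale (placeholder names).
img1 = cv2.imread("aerial_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("aerial_002.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect SIFT keypoints and compute their descriptors in each image.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Pairwise matching: for each descriptor, find its two nearest
#    neighbors in the other image and keep matches that pass Lowe's
#    ratio test.
matcher = cv2.BFMatcher()
knn_matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]

# Pixel coordinates of the matched features in each image.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Estimate the relative camera pose from the pixel matches. K is an
#    assumed pinhole intrinsic matrix; real values would come from the
#    drone camera's calibration.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

print("Relative rotation:\n", R)
print("Relative translation (up to scale):\n", t)
```

With the relative poses of all photo pairs known, the matched features can be triangulated into a point cloud and the photos oriented around it to produce the 3D model.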

Keywords

Photogrammetry · Point cloud · Structure from motion

References

  1. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)
  2. Introduction to SIFT (Scale-Invariant Feature Transform). OpenCV documentation. https://docs.opencv.org/3.3.0/da/df5/tutorial_py_sift_intro.html
  3.
  4. What is ASPRS? ASPRS website. https://www.asprs.org/organization/what-is-asprs.html
  5. Point Cloud Data. U.S. Naval Academy website. https://www.usna.edu/Users/oceano/pguth/md_help/html/pt_clouds.htm
  6. Shervais, K.: Structure from Motion Introductory Guide. UNAVCO website. https://www.unavco.org/education/resources/modules-and-activities/field-geodesy/module-materials/sfm-intro-guide.pdf

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Gil Eckert (1)
  • Steven Cassidy (1)
  • Nianqi Tian (1)
  • Mahmoud E. Shabana (1)

  1. Monmouth University, West Long Branch, USA
