Multimedia Tools and Applications, Volume 77, Issue 8, pp 9691–9717

3D reconstruction of disaster scenes for urban search and rescue

  • Styliani Verykokou
  • Charalabos Ioannidis
  • George Athanasiou
  • Nikolaos Doulamis
  • Angelos Amditis


Natural and man-made disasters caused by catastrophic incidents (e.g., earthquakes, explosions, terrorist attacks) often result in humans trapped under rubble piles. In such emergency response situations, Urban Search and Rescue (USaR) teams have to make quick decisions under stress in order to determine the location of possibly trapped victims. Fast 3D modelling of fully or partially collapsed buildings using images from Unmanned Aerial Vehicles (UAVs) can considerably support USaR efforts, thus improving disaster response and increasing survival rates. The a priori establishment of a proper workflow for fast and reliable image-based 3D modelling, and the a priori determination of the parameters that have to be set in each step of the photogrammetric pipeline, are critical for ensuring readiness in an emergency response situation. This paper evaluates powerful commercial and open-source software solutions for the 3D reconstruction of disaster scenes in rapid response situations. The software packages are tested using UAV datasets of a real earthquake scene. A thorough analysis of the parameters of the various modelling steps that may lead to the desired results for USaR tasks is presented, and indicative processing chains are proposed, taking time restrictions into account. Furthermore, weaknesses of the data acquisition process detected during the experiments are outlined, and improvements and additions are proposed, including an initial preprocessing of the images using a graph-based approach.
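The abstract mentions a graph-based preprocessing of the images but does not specify the algorithm. Purely as an illustrative sketch (not the authors' method), one common graph-based way to trim an image set under time pressure is to treat images as nodes, weight edges by pairwise feature-match counts, and keep only the image pairs of a maximum spanning tree, so every image remains connected to the block while redundant overlaps are dropped. The function name and match counts below are hypothetical.

```python
# Illustrative sketch: thin an image-connectivity graph to a maximum
# spanning tree (Kruskal's algorithm on descending weights, with a
# union-find structure to reject cycle-forming pairs).

def max_spanning_tree(num_images, matches):
    """matches: dict {(i, j): feature_match_count}. Returns kept pairs."""
    parent = list(range(num_images))

    def find(x):
        # Find the set representative, with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    kept = []
    # Consider the strongest overlaps first.
    for (i, j), w in sorted(matches.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:          # keep the pair only if it joins two components
            parent[ri] = rj
            kept.append((i, j))
    return kept

# Toy example: 4 overlapping UAV images with hypothetical match counts
pairs = {(0, 1): 850, (1, 2): 920, (2, 3): 400, (0, 2): 300, (1, 3): 100}
print(max_spanning_tree(4, pairs))  # → [(1, 2), (0, 1), (2, 3)]
```

The union-find structure keeps the selection near-linear in the number of candidate pairs, which matters when the time budget of a rapid-response scenario rules out processing every overlapping pair.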


Keywords: Search and rescue · Fast 3D modelling · UAV images · Oblique aerial images · Photogrammetry · Software evaluation



Acknowledgements

Styliani Verykokou would like to acknowledge the Eugenides Foundation for its financial support through a PhD scholarship. This work was supported by the European Commission under INACHUS, a collaborative project under the Seventh Framework Programme (FP7) for research, technological development and demonstration (grant agreement no. 607522). The authors would like to thank all partners within INACHUS for their cooperation and valuable contribution. This work was also supported by the FP7-PEOPLE project Four Dimensional Cultural Heritage World (4D CH World), funded by the European Union Marie Curie Actions under grant agreement no. 324523.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2017

Authors and Affiliations

  1. Institute of Communication and Computer Systems, Athens, Greece
  2. Laboratory of Photogrammetry, School of Rural & Surveying Engineering, National Technical University of Athens, Athens, Greece
