Abstract
In the event of a natural disaster, Japanese local governments investigate the level of damage to buildings and issue damage certificates to the victims. The damage certificate determines the content of the support provided to the victims; hence, certificates must be issued rapidly and accurately. In the past, however, the damage investigation was time-consuming, which delayed support to the victims. Additionally, when investigating the roof of a damaged building, it was difficult for investigators to view the entire roof and calculate the damage rate accurately. To address this issue, we have developed an image processing model that automatically calculates the damage rate of a roof through image recognition from aerial photographs. To circumvent the lack of training data reported in our previous study [1], in this study roof images were divided into individual roof surfaces by deep-learning-based image segmentation, which increased the number of training samples. Our model calculated the damage rate more accurately than the conventional assessment by a field investigator for up to 80% of the roof data.
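The core quantity in the abstract, the damage rate of a roof, can be illustrated with a minimal sketch. The paper does not publish its exact computation; the code below assumes the simplest definition consistent with the description: given binary segmentation masks for a roof surface and for damaged regions (as a segmentation model might output), the damage rate is the fraction of roof pixels that are classified as damaged. The function and mask names are hypothetical, not from the paper.

```python
import numpy as np

def damage_rate(roof_mask: np.ndarray, damage_mask: np.ndarray) -> float:
    """Fraction of roof pixels classified as damaged.

    roof_mask, damage_mask: same-shape arrays, nonzero = positive class,
    e.g. as produced by a semantic-segmentation model on an aerial photo.
    """
    roof = roof_mask.astype(bool)
    # Count damage only where it overlaps the roof surface.
    damaged = damage_mask.astype(bool) & roof
    roof_pixels = roof.sum()
    if roof_pixels == 0:
        return 0.0  # no roof detected in this crop
    return float(damaged.sum()) / float(roof_pixels)

# Toy example: a 4x4 crop with 12 roof pixels, 4 of them damaged.
roof = np.array([[0, 1, 1, 0],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1],
                 [0, 1, 1, 0]], dtype=bool)
damage = np.zeros_like(roof)
damage[1, :2] = True
damage[2, :2] = True
print(damage_rate(roof, damage))  # → 0.3333...
```

In practice the per-surface rates would be aggregated over all segmented roof surfaces of a building before comparison with the field investigator's assessment.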
References
Fujita, S., Hatayama, M.: Estimation method for roof-damaged buildings from aero-photo images during earthquakes using deep learning. Inf. Syst. Front. (2021)
Disaster Management, Cabinet Office in Japan: Guidelines of the Operation of Criteria for Building Damage Investigation in Disasters (2020). (in Japanese)
Disaster Management, Cabinet Office in Japan: Guidance of the Implementation System for Building Damage Investigation in Disasters (2020). (in Japanese)
Cabinet Office in Japan: Outline of the March 2018 Revision. http://www.bousai.go.jp/taisaku/pdf/h3003kaitei.pdf. Accessed 10 May 2021. (in Japanese)
Geospatial Information Authority of Japan: Aerial Photograph. http://www.gsi.go.jp/gazochosa/gazochosa41006.html. Accessed 10 May 2021. (in Japanese)
DRONEBIRD: Drone Rescue Team of Disaster, DRONEBIRD. https://dronebird.org/. Accessed 10 May 2021. (in Japanese)
Vetrivel, A., Gerke, M., Kerle, N., Nex, F., Vosselman, G.: Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning. ISPRS J. Photogram. Remote Sens. 140, 45–59 (2018)
Tu, J., Li, D., Feng, W., Han, O., Sui, H.: Detecting damaged building regions based on semantic scene change from multi-temporal high-resolution remote sensing images. Int. J. Geo-Inf. 6(5) (2017)
Fujita, A., Sakurada, K., Imaizumi, T., Ito, R., Hikosaka, S., Nakamura, R.: Damage detection from aerial images via convolutional neural networks. In: 2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA) (2017)
Inoguchi, M., Tamura, K., Hamamoto, R.: Establishment of work-flow for roof damage detection utilizing drones, human and AI based on human-in-the-loop framework. In: 2019 IEEE International Conference on Big Data (Big Data), pp. 4618–4623 (2019)
Ji, M., Liu, L., Du, R., Buchroithner, M.F.: A comparative study of texture and convolutional neural network features for detecting collapsed buildings after earthquakes using pre- and post-event satellite imagery. Remote Sens. 11, 1202 (2019)
Radhika, S., Tamura, Y., Matsui, M.: Determination of degree of damage on building roofs due to wind disaster from close range remote sensing images using texture wavelet analysis. In: IEEE International Symposium on Geoscience and Remote Sensing (IGARSS) (2018)
Lucks, L., Bulatov, D., Thonnessen, U., Boge, M.: Superpixel-wise assessment of building damage from aerial images. In: 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP) (2019)
Miura, H., Aridome, T., Matsuoka, M.: Deep learning-based identification of collapsed, non-collapsed and blue tarp-covered buildings from post-disaster aerial images. Remote Sens. 12, 1924 (2020)
Ise, T., Minagawa, M., Onishi, M.: Classifying 3 moss species by deep learning, using the “chopped picture” method. Open J. Ecol. 8, 166–173 (2018)
Susaki, J.: Segmentation of shadowed buildings in dense urban areas from aerial photographs. Remote Sens. 4, 911–933 (2012)
Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation. In: ECCV (2018)
He, K., Gkioxari, G., Dollar, P., Girshick, R.: Mask R-CNN. arXiv:1703.06870 (2017)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385 (2015)
Copyright information
© 2022 IFIP International Federation for Information Processing
Cite this paper
Fujita, S., Hatayama, M. (2022). Automatic Calculation of Damage Rate of Roofs Based on Image Segmentation. In: Sasaki, J., Murayama, Y., Velev, D., Zlateva, P. (eds) Information Technology in Disaster Risk Reduction. ITDRR 2021. IFIP Advances in Information and Communication Technology, vol 638. Springer, Cham. https://doi.org/10.1007/978-3-031-04170-9_1
Print ISBN: 978-3-031-04169-3
Online ISBN: 978-3-031-04170-9