An innovative structural health assessment tool for existing precast concrete buildings using deep learning methods and thermal infrared satellite imagery

Currently, there is a limited number of tools that can be used to assess progressive damage of buildings in large-scale study areas. The effectiveness of such tools is also constrained by a lack of sufficient and reliable data from the buildings and the area itself. This research article presents an innovative framework for damage detection and classification of precast concrete (PC) buildings based on satellite infrared (IR) imagery. The framework uses heat leakage changes over time to assess the progressive damage of buildings. Multispectral satellite images are used for a spatial scanning and large-scale assessment of a study area. A deep learning object detection algorithm coupled with two pixel-intensity classification approaches is utilized in the framework. The proposed framework is demonstrated on two case study areas (parts of Karaganda and Almaty cities) in Kazakhstan using a set of multitemporal satellite images. Overall, the proposed framework, in combination with a YOLOv3 algorithm, successfully detects 85% of the PC buildings in the study areas. The use of a peak heat leakage classification approach (in comparison to mean heat leakage classification) over the 4 years showed a good agreement with the proposed framework. On-site visual inspections confirmed that PC buildings that were classified as having "High damage probability" indeed show evident signs of deterioration, as well as more heat leakage than the rest of the buildings in the study areas. Whilst the framework has some limitations, such as its applicability being restricted to extreme continental climates and its low sensitivity to minor damage, the proposed innovative framework showed very promising results in detecting progressive damage in PC buildings. This article contributes towards developing more efficient long-term damage assessment tools for existing buildings in large urban areas.


Introduction
Much of the existing precast concrete (PC) infrastructure around the world has reached the end of its service life, which has led to structural issues. Indeed, the demand for affordable housing after WWII pushed builders to construct PC buildings in a hasty manner. Moreover, constant construction cost reductions, inadequate design and low-quality control during the construction phase led to the poor present structural condition of these buildings [1]. In the former USSR countries alone, an estimated two hundred thousand PC buildings, with an average age of 55 years, exist [2,3]. Previous independent investigations showed that some of these buildings have already reached their critical structural state and require immediate intervention [4,5]. Figure 1 illustrates common structural issues in such PC buildings, with spalling of panel joints being identified as the most critical one [6]. The repetitive freezing and thawing cycles destroy the plastering cover, resulting in deep gaps and corrosion of the internal reinforcement elements and metal joints between panels. These gaps are in turn connected to interior rooms in the buildings, and this produces a high level of heat loss [7][8][9].
However, the large number of buildings and their distribution across the former USSR countries make the assessment of existing PC buildings difficult and impractical with existing methods/techniques. Moreover, no practical tools for large-scale damage assessment of such PC buildings have been developed so far. Therefore, in recent decades, remote sensing technologies, in particular infrared (IR) imagery, computer vision (CV) and deep learning (DL), have shown great potential to help solve the lack of assessment tools [10]. However, the assessment of existing structures experiencing progressive damage is highly complicated, and thus not all optical evaluation methodologies can be directly applied to damage detection of buildings.
Remote sensing (RS) technologies collect information from a distance without any interaction with the object of interest [11]. Among these technologies are the well-known seismic tomography, gravity gradiometry and others [12]. RS technologies that use satellite or aerial multispectral optical imagery [13] provide remarkable results in damage assessment if accompanied by image processing techniques [14]. The rapid development of RS technologies has led to the fast growth of their applications in spatial scanning and damage detection. More recently, these technologies have been used for damage identification of buildings by utilising various space or air platforms fitted with optical [15], radar [16], and LiDAR sensors [10]. All these methodologies concentrate on the use of received images to correlate building damage to visual patterns in pictures and perform damage classification. Many methods have been used for damage classification, applying co-occurrence matrices to satellite pictures [17] or morphological scale spaces [18].
Other studies attempted to optimise the identification of specific building components, such as bricks and roof tiles [19,20]. Higher photographic resolution has enhanced the degree of detail in the images and, as a result, the complexity and variety of the objects that can be identified. Consequently, active learning methods [21] have been utilised to increase the quality of the damage classification for large-scale areas and to automate the process.
Image data processing for large-scale areas is time-consuming and complex. As a result, several methods were proposed to automate data interpretation. These methods range from traditional Fourier analysis to advanced methods based on Markov random fields/Bayesian inference theory or variational/partial differential equations [22]. More recently, analytical models and algorithms based on pattern and geometry recognition have been developed for image interpretation. Computer vision (CV) generally covers these areas, including image processing and analysis [23]. Among the different CV methodologies, Convolutional Neural Networks (CNNs) have huge potential for damage detection in buildings [24][25][26].
For instance, several architectures have been presented in order to enhance the performance of debris detection [27]. However, such architectures have used convolutional autoencoders to deal with a small number of samples [28], which has resulted in modest enhancements in damage categorisation. Other approaches have attempted to use multitemporal data to find differences before and after catastrophic events [29]. However, their practical application has been constrained by the lack of pre-event data. To compensate for the lack of pre-event data, subsequent research [30] used CNNs to define building footprints from multitemporal pre-event images, update such footprints just after the disaster and compare them [10,31].
However, all the above-mentioned approaches are based on pre-event and post-event comparisons of images. Hence, such approaches cannot be easily applied to existing buildings that experience progressive damage. In the last 2 decades, satellite IR imagery has shown the potential to help in this matter [25]. IR imagery relies on the radiation emitted by objects in proportion to their temperature. Most of the applications of thermal sensing lie in the areas of effective energy use [32,33], natural hazard or disaster monitoring [34,35], and climate and environmental studies [36]. However, to date, limited studies have examined the use of IR satellite and airborne imagery to detect damage in buildings. Past research has confirmed that the amount of heat transferred through a material depends heavily on its thermal properties [37,40]. This is particularly true for the thermal conductivity (TC) of a material. The TC greatly depends on the severity of damage in the material, and this in turn is expected to change the pixel intensity in the IR images. Very recently, Zhang [41] applied UAV IR photography for automatic detection of earthquake-damaged buildings. Thermal imagery was utilised to detect external damage and cracking, and a methodology to assess the severity of damage was proposed. Despite the promising results, Zhang's methodology was limited by the maximum altitude of the UAV and, therefore, could not cover large areas in the same assessment. A more detailed review of UAV applications for damage assessment was presented by Rakha [42]. Nevertheless, large-scale post-disaster assessments using IR have been successfully performed in the past. In a pioneering study, Hanada and Yamazaki [43] used the measured average heat leakage from buildings to compare histograms of leakage intensities and detect building damage before and after a tornado.
The results showed that IR images successfully detected damage in 76% of the buildings that actually experienced some damage (as confirmed by on-site inspections). However, the manual processing and handling of the collected data performed by Hanada and Yamazaki makes this process unsuitable to assess damage at a large scale (e.g. large urban areas). Therefore, a new framework to process large amounts of data in a more efficient manner is required.
This article proposes an innovative framework for damage detection and classification of PC buildings based on satellite IR imagery. The framework automatically detects changes of heat leakage in such buildings over time to assess their damage condition. The effectiveness of the framework is demonstrated through the analysis of multitemporal IR satellite images from buildings in two case study areas in Kazakhstan, and the results are compared with on-site visual inspections of the actual buildings. An overview of the proposed framework is introduced in Sect. 2 of this article. Section 3 discusses the case studies that were used to test the effectiveness of the proposed framework. It also introduces the CNN used in this research to automatically detect damaged buildings. Sections 4 and 5 compare and discuss the results from the analysis and on-site visual inspection. Concluding remarks of this investigation are given in Sect. 6. This article contributes towards developing more efficient long-term damage assessment tools for existing buildings in large urban areas.

Methodology
The proposed damage assessment framework is divided into two stages (Stage 1 and Stage 2), as shown in Fig. 2. Each stage consists of several steps:

Stage 1
Step 1. IR imagery pre-processing performs image pre-processing, sorting, and aligning to ensure the same positioning of the view. After that, an IR channel separation is performed to extract only the required IR data.
Step 2. Pan sharpening performs image quality enhancement for automated building extraction.
Step 3. IR histogram normalisation and contrast enhancement performs normalisation and equalisation of the histograms for the subsequent classification stage.

Stage 2
Step 4 and Step 4.1. CNN model training and YOLOv3 building detection. These steps perform the training of the You Only Look Once (YOLO) object detection model. The YOLOv3 model was pretrained on the ImageNet dataset [44]. Sub-step 4.1 trains the CNN model for PC building detection by adopting a transfer learning technique. Transfer learning applies knowledge from source domains to target domains, which might be related but different [45]. The YOLOv3 model's architecture is given in Appendix A. This model is used for automatic PC building detection and extraction over the entire study area from Step 3. It should be noted that YOLOv3 was chosen as it has proven to be a robust and easily deployable model for object detection [46]. In this article, the purpose of the YOLO algorithm is to automatically detect PC buildings over the entire study area (it is not used for damage classification or damage detection itself). Therefore, other object detection models can be used for the same purpose of PC building detection.
Step 5. Heat leakage assessment evaluates the heat leakage intensities and performs damage classification based on pixel intensity values.
Step 6. Production of classification map performs the damage probability classification. It uses classification logical expressions to produce damage probability classification maps to suggest further on-site investigations.
As shown in Fig. 2, Stage 1 consists of three steps and performs the image processing required for the object detection model, such as image aligning, channel separation, pan-sharpening, and histogram normalisation. Step 1 separates the IR image data from the redundant information of the other channels using GIS software (QGIS 3.10 [47]). The far-infrared (FIR) channel was separated from the GeoTIFF file, which represents a four-dimensional tensor. This FIR channel still holds enough information on the existing buildings, including the pixel intensities that can be used for heat leakage evaluation. Equation 1 is the mathematical representation of the IR channel separation:

i_n^FIR = u_FIR, with i_n = (u_1, u_2, ..., u_D)   (1)

where i_n is an original image from a year 'n', u_D are the pixel intensities and D is the number of channels in the multispectral image, as defined in Eq. 2:

D = 5 (the R, G, B, FIR and panchromatic channels)   (2)

The single FIR channel represents a black and white image with pixels of different intensities and a very low spatial resolution (6 mpp). In Step 2, a pan-sharpening technique [48], represented by Eq. 3, was used to increase the image resolution:

PXS(i, j) = (PAN(i, j) / PAN_smooth(i, j)) × S(i, j)   (3)

In this technique, pairs of low-resolution multispectral images (IR spectra, Fig. 3a) and high-resolution panchromatic images (Fig. 3b) are combined to obtain new high-resolution multispectral images (Fig. 3c) of 1 mpp.
where PXS(·) is the modified sharpened image, i,j are the pixel indexes, PAN is a panchromatic image, PAN smooth is a smoothed panchromatic image and S is a low-resolution multispectral image.
After that, the resultant IR images are coloured using the viridis palette, as explained later in Section III (Step 1, Image preparation). Therefore, Step 2 produces IR images of high resolution and with coloured and enhanced features of the multi-channelled satellite images.
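As an illustration, the channel separation (Step 1) and the pan-sharpening of Eq. 3 (Step 2) can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation: the FIR band index, the box filter used for PAN_smooth, and the assumption that the low-resolution IR channel has already been resampled to the panchromatic grid are all illustrative choices.

```python
import numpy as np

def separate_fir(image, fir_index=3):
    """Extract the far-infrared channel from a multispectral array
    of shape (H, W, D).  The band index is an assumption: the actual
    GeoTIFF band order depends on the data provider."""
    return image[:, :, fir_index]

def box_smooth(pan, k=5):
    """Simple box-filter smoothing of the panchromatic image,
    standing in for PAN_smooth in Eq. 3."""
    h, w = pan.shape
    pad = k // 2
    padded = np.pad(pan, pad, mode="edge")
    out = np.empty_like(pan, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def pan_sharpen(low_res_ir, pan):
    """Smoothing-filter-based pan-sharpening, as in Eq. 3:
    PXS(i, j) = PAN(i, j) / PAN_smooth(i, j) * S(i, j),
    where S is the (already upsampled) low-resolution IR channel."""
    eps = 1e-6                      # avoid division by zero
    return pan / (box_smooth(pan) + eps) * low_res_ir
```

In practice, the GeoTIFF bands would be read with a GIS library and the 6 mpp IR channel resampled to the 1 mpp panchromatic grid before applying the ratio.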
Step 3 eliminates discrepancies in the light intensities of the images. Accordingly, the source images from Step 2 (e.g. Fig. 4a) are normalised to reference images (Fig. 4b) through a cumulative-probability distribution histogram matching process [49]. This led to images with similar light intensities (Fig. 4c), which allows for a direct comparison of the heat leakage using pixel intensities.
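The histogram matching of Step 3 can be sketched as follows, assuming a single-channel image; this is an illustrative NumPy version of cumulative-distribution matching, not the exact implementation used in the framework.

```python
import numpy as np

def match_histogram(source, reference):
    """Cumulative-distribution histogram matching: map each source
    intensity to the reference intensity that sits at the same
    cumulative probability (single-channel sketch)."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(
        reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference value at the
    # same quantile.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(source.shape)
```

After matching, the source image's intensity distribution follows that of the reference, so pixel intensities from different acquisition dates become directly comparable.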
Stage 2 in Fig. 2 detects buildings (Step 4), assesses their heat leakage (Step 5), and produces a classification map of the buildings in the study area (Step 6).
Step 4 implements the automatic detection of PC buildings. It should be mentioned that the YOLOv3 algorithm cannot directly handle high-resolution IR satellite imagery. Therefore, a sliding window approach is adopted to handle high-resolution images. The window slides along the X and Y axes to produce cropped images equal to the size of the window. The YOLOv3 algorithm deals only with small, cropped images (see Fig. 5a). It should be also noted that, in the sliding window approach, the initial IR image itself does not shrink after each sliding round. This modification was made to reduce computational difficulties and to shorten the process of building detection. Hence, the detection cycles are reduced to the number of cropped images taken from the original IR image. Each of these cropped images is then fed into an object detection model to determine the coordinates of the detected buildings.
The coordinates of each building determined in Step 4 (on every nth cropped image) are then converted into global coordinates (Glob(X1, X2, Y1, Y2)) using Eq. 5:

Glob(X1, X2, Y1, Y2) = glob(w_n) + loc(b_w)   (5)

where glob(w_n) are the global coordinates of the nth cropped image, and loc(b_w) are the local coordinates of each building on the nth cropped image.
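A minimal sketch of the sliding-window cropping and the local-to-global coordinate conversion described above might look as follows; the window size matches the 150 × 200 pixel value reported later in the article, while the overlap fraction is an assumed parameter.

```python
def sliding_windows(width, height, win_w=150, win_h=200, overlap=0.5):
    """Generate the top-left corners of overlapping crop windows.
    Edge remainders are not handled in this sketch."""
    step_x = max(1, int(win_w * (1 - overlap)))
    step_y = max(1, int(win_h * (1 - overlap)))
    for y in range(0, max(1, height - win_h + 1), step_y):
        for x in range(0, max(1, width - win_w + 1), step_x):
            yield x, y

def to_global(window_xy, local_box):
    """Convert a detection's local box (x1, y1, x2, y2) inside a
    cropped window to global image coordinates by adding the
    window's top-left offset."""
    wx, wy = window_xy
    x1, y1, x2, y2 = local_box
    return (wx + x1, wy + y1, wx + x2, wy + y2)
```

Each window would be passed to the object detector, and every returned local box is shifted by the window's offset to obtain global coordinates.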
When the local and global coordinates are combined, various overlapping boundary boxes appear at the same location due to the multiple object detection process (see Fig. 5b). Therefore, to improve the precision of the building detection, it was decided to overlap each consecutive cropped image. A non-maxima suppression (NMS) method is applied to remove the unnecessary boundary boxes [50]. This method utilises an Intersection over Union (IoU) function (Eq. 4 and Fig. 5b) to find the area of intersection between two different boundary boxes, and then it divides this area of intersection by the total combined area of the two boundary boxes:

IoU = Area(B1 ∩ B2) / Area(B1 ∪ B2)   (4)

The result from Eq. 4 is then compared to a threshold value, so that overlapping boxes with an IoU value higher than the threshold are merged into a unified boundary box for a detected building.
where B1 and B2 are the areas of the two boundary boxes being analysed.
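The IoU computation of Eq. 4 and a greedy NMS pass can be sketched as below; the confidence scores and the use of the 10% threshold (the value adopted later in the article) are illustrative.

```python
def iou(b1, b2):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    union = area1 + area2 - inter
    return inter / union if union else 0.0

def nms(boxes, scores, threshold=0.1):
    """Greedy non-maxima suppression: keep the highest-scoring box
    and discard any remaining box whose IoU with it exceeds the
    threshold (10% here)."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i],
                   reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) <= threshold]
    return keep
```

For example, two heavily overlapping detections of the same building collapse to the single higher-scoring box, while a distant detection is kept.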
Finally, the global coordinates of each building (Fig. 5c) are stored in a CSV file for heat leakage analysis in Step 5.
Step 5 performs heat leakage assessment based on the pixel intensities. Accordingly, pixel intensities are obtained for every building according to their global coordinates. After that, an algorithm is applied to classify buildings according to their damage, as described in the following section.
The last step in the damage assessment framework (Step 6) produces a classification map of the buildings based on the damage class obtained in Step 5.

Damage classification system
In the proposed framework, a damage classification probability is necessary to determine the damage condition of the PC buildings in the study area. The 'damage class' probability of an individual building is defined by an integer number c ∈ {0, 1, 2, 3}: c = 0 (Low damage probability), c = 1 (Minor damage probability), c = 2 (Moderate damage probability), c = 3 (High damage probability). The proposed classification utilizes the hypothesis that internal damage produces higher heat leakage, which is determined from the rate of a building's external heat radiation. Therefore, buildings with higher heat leakage are assumed to have a higher level of damage. Validation of this hypothesis is discussed in the following sections. The exact damage level of a building and its relationship to heat leakage will be obtained in further studies.
In the proposed framework, two approaches of damage assessment are examined so as to determine the best correlation between damage and heat leakage.

Approach 1. For each multitemporal image of an individual building (i_n), the maximum value of the heat leakage λ_max is sought (Eq. 6). Equation 6 defines the peak intensity value of the pixels over the RGB channels:

λ_max(i_n) = max(P_c)   (6)

where P_c are the pixel intensities, and the rest of the variables are as defined before.

Approach 2. As an alternative to the above approach, Eq. 7 defines the mean intensity value of the pixels over the RGB channels for each multitemporal image (i_n):

λ_mean(i_n) = mean(P_c)   (7)

The damage probability of a building is defined based on the peak or mean values of the heat leakage intensities, as defined by Eq. 8:

C = ψ(λ_max/mean)   (8)

where C is a damage gradation coefficient.
It should be noted that C does not represent the severity of a building's damage. Rather, it shows a trend of changes in heat leakage, which can represent progressive damage. Therefore, the following classification is based on the likelihood of damage appearance. Accordingly, ψ(λ_max/mean) is a logical expression that maps the trend of heat leakage changes to a damage class; if none of its conditions is applicable → Low damage probability.
The logical expression above evaluates the trend of heat leakage changes over a specific time between observations. In the above Eq. (8), a consecutive increase in the heat leakage (λ n ) over the years of observation represents a high damage probability. In this framework, it is proposed to adopt a time between observations of approximately twelve months. The highest damage class probability ("High damage probability", c = 3) indicates a constant heat leakage rate growth over an 'n' number of years. This would suggest that high damage probability is expected to be observed in a building, and that further urgent investigations are necessary to confirm the actual severity of the damage. On the contrary, a low damage class ("Low damage probability", c = 0) indicates an inconsistent trend in heat leakage changes, which in turn suggests that a building has low damage probability and, therefore, it is not of serious concern. It should be noted that, since the framework detects the consistency in damage progression over time, even minor damage over a few years could be classified as "High damage probability". Whilst the actual damage in the building may not be severe, this information can be used to propose more thorough on-site inspections to verify the condition of the building. Therefore, even minor damage detected at an early stage could be swiftly repaired before it turns into a major structural issue.
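The two heat leakage measures (Eqs. 6 and 7) and the classification logic ψ can be sketched as follows. Note that the article only states the top rule explicitly (a consecutive yearly increase gives c = 3); mapping the lower classes to the number of year-on-year increases is an assumption made for this sketch.

```python
import numpy as np

def peak_leakage(pixels):
    """Approach 1 (Eq. 6): peak pixel intensity of a building crop."""
    return float(np.max(pixels))

def mean_leakage(pixels):
    """Approach 2 (Eq. 7): mean pixel intensity of a building crop."""
    return float(np.mean(pixels))

def damage_class(leakages):
    """Assign a damage-probability class c from a chronological list
    of multitemporal heat leakage values (one per year).
    A strictly consecutive increase over all years -> c = 3 (High),
    as stated in the text; counting year-on-year increases for the
    lower classes is an assumed rule for illustration."""
    increases = sum(1 for a, b in zip(leakages, leakages[1:]) if b > a)
    if increases == len(leakages) - 1:
        return 3                    # High damage probability
    return min(increases, 2)        # 0 Low, 1 Minor, 2 Moderate (assumed)
```

With four yearly observations, a monotonic rise in leakage yields c = 3 and flags the building for an urgent on-site inspection, whereas an inconsistent trend yields a low class.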
To show the effectiveness of the proposed framework, the following section applies the framework to two case study areas where PC buildings with structural issues have been identified.

Data collection
In this study, the framework was applied to detect damage on existing PC buildings located in Karaganda and Almaty, two of the main cities of Kazakhstan. Karaganda is located in an extreme continental climate zone with a temperature range from −40 to +38 ˚C, whereas Almaty has a milder continental climate (−30 to +30 ˚C). Almaty is also located in the highest seismic zone of Kazakhstan [51].
The multitemporal (years 2015-2018) satellite images from these two cities were provided by a governmental company (JSC Kazakhstan Gharysh Sapary). Each multitemporal image covered parts of Almaty and Karaganda cities equal to 8.61 sq.km and 3.62 sq.km, respectively. Due to the presence of critical security facilities in the studied cities, much of the dataset cannot be publicly shared. Accordingly, only the main results of non-critical areas are presented and discussed in this article. The received images were taken by the satellites KazSat-1 and KazSat-2, which started operating in 2015 and 2016, respectively. Both KazSat-1 and KazSat-2 are geostationary satellites with geosynchronous orbits. Therefore, the nadir direction remains unchanged throughout the entire service life of these satellites. KazSat-1 has a panchromatic camera with a spatial resolution equal to 1 m per pixel (mpp). KazSat-2 can provide multispectral images with a spatial resolution of 6 mpp and has a far-range infrared camera with a spatial resolution of 6 mpp. The signal-to-noise ratio of these satellites is > 100, as reported by the manufacturer [52].
To apply the framework, images were chosen from the same areas considering similar weather conditions and time of the year. In order to avoid any change in the emissivity on a particular day, all selected IR images were taken between 5.00 and 6.30 AM. Therefore, at the time the IR images were taken, all buildings would have been cooling down for at least 10 h and no other heat sources could have disturbed the emissivity. This was done to minimise discrepancies in heat loss values due to variations of exterior temperature at different dates, changes in the operation of heating fixtures inside the buildings, and sunlight heat radiation. The chosen images come from days with an outside temperature below −10 °C, when (according to the local regulations) the central heating station provides maximum power to the heating system. In this way, the maximum value of heat leakage could be observed. Table 1 shows the weather data for the 4 years of images received for both cities, with a minimum of an 11-month interval between the received images. Each set of images consisted of five separate channels: Red, Green, Blue, far-range infrared (FIRc) and panchromatic spectra in GeoTIFF format. Before delivery, each image was pre-processed to overcome spatial distortions due to the spherical shape of the Earth.

Step 1. IR imagery pre-processing
In Step 1, a separation of the single IR channel was performed in QGIS 3.10 software, as explained in the Methodology. The black and white images obtained from the previous step had very low contrast and could not be used directly as an input to the CNN. Therefore, additional colouring was required to achieve a higher contrast. Several palettes (spectral, magma, viridis) were considered in this study (see the palette comparison in Fig. 6a-e).
As shown in Fig. 6, it was found that the viridis palette could keep more contrast data than the other palettes, which helped the YOLOv3 algorithm to detect objects more efficiently. More than that, it was also able to highlight peak heat leakages over the surface of the buildings. Even though the performed colouring process produced excellent results in terms of heat leakage values, additional contrast enhancement was still required (see Step 3).

Step 2. Pan sharpening
In Step 2, a dataset of coloured low-resolution IR images was combined with a dataset of high-resolution panchromatic images. As a result, four high-resolution IR images of Almaty with resolution of 4148 × 2121 pixels and four images of Karaganda with resolution of 1874 × 1866 pixels were obtained.

Step 3. Image normalisation (histogram matching)
A process of peer-to-peer building heat leakage assessment required pixel intensities to be matched and aligned with each other. To perform this process, a cumulative distribution histogram matching was implemented. In some cases, there was a large difference in colours between the source and reference images. This would have implied a too high value of the calculated statistical distribution of the pixel intensities for the correction. Thus, the difference between pixel intensities forced the algorithm to bring a magenta colour to the matched image (Fig. 8c). Therefore, in this step, it is crucial to make a proper decision in terms of reference image selection.

Step 4.1 Training of the YOLOv3 object detection model
A YOLOv3 deep learning object detection model was trained for automatic PC building detection [46,53] based on a manually prepared dataset of images of PC buildings. The prepared dataset initially contained 3200 training and 300 validation images. In this study, the authors were limited to this number of IR satellite images of PC buildings. Therefore, to increase the accuracy of the model, it was decided to increase the size of the dataset through a data augmentation process. This was used to increase the diversity of the training dataset by applying random but realistic changes to the existing images, such as rotation, rescaling, skewing, blurring, mirroring, and cropping. After the data augmentation, the set of training images (90% of the total number of images) and validation images (10%) increased to 6800 and 630 images, respectively. The dataset was manually labelled with LabelIMG software [54] in the Pascal VOC format.
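A data augmentation pass of the kind described above can be sketched with NumPy as follows; only the geometric transformations are shown (skewing and blurring would need an imaging library), and in a detection dataset the bounding-box labels would have to be transformed together with the images. The transformation choices are illustrative, not the authors' pipeline.

```python
import numpy as np

def augment(image, rng):
    """Apply one random, label-relevant geometric transformation:
    rotation, mirroring, or cropping (all assumed choices)."""
    choice = rng.integers(0, 4)
    if choice == 0:
        return np.rot90(image, k=rng.integers(1, 4))   # rotation
    if choice == 1:
        return np.fliplr(image)                        # horizontal mirror
    if choice == 2:
        return np.flipud(image)                        # vertical mirror
    h, w = image.shape[:2]                             # random crop
    top = rng.integers(0, h // 4 + 1)
    left = rng.integers(0, w // 4 + 1)
    return image[top:top + 3 * h // 4, left:left + 3 * w // 4]

def expand_dataset(images, factor, seed=0):
    """Grow the dataset by `factor` augmented copies per image."""
    rng = np.random.default_rng(seed)
    out = list(images)
    for img in images:
        out.extend(augment(img, rng) for _ in range(factor))
    return out
```

Applying such a pass with a suitable factor would grow a 3200-image training set towards the 6800 images reported above.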

Step 4. Object detection
After completion of the YOLOv3 model training, the automatic PC building detection was performed. The size of the sliding window was selected to be 150 × 200 pixels. Each pixel corresponded to 1 m of real size. Thus, a standard building (rough size = 33.6 × 11.52 m) [55] would fit in an area of 150 × 200 m. It was also decided to use a threshold value for the IoU equal to 10%.

Steps 5 and 6
In Step 5, the heat leakage evaluation of the PC buildings was performed based on their pixel intensities. Two CSV files were obtained, one for each case study city. These files contained the global coordinates of the individual buildings obtained in Step 4. For each global coordinate, a set of peak and mean pixel intensity values was recorded. After that, in Step 6, the classification algorithm was applied to the pixel intensity values of the buildings. After the classification, a damage class was assigned to every building. Based on the obtained damage classes, a classification map was created for every studied area. The overall detection accuracy of the CNN model is high and equal to 85%. Accuracy was calculated as the ratio between the number of true detected buildings (automatically detected minus falsely detected) and the overall number of existing buildings. For Almaty city, the number of PC buildings successfully detected by the CNN model was 436 out of 451, whereas the figures were 150 out of 161 for Karaganda city. It should be highlighted that the accuracy of predictions for other study areas can be higher or lower than that achieved here, depending on the amount and quality of data. Despite the minor issues found during the automatic PC building detection process, the outcomes were suitable for heat leakage evaluation. Figure 8a, b compares the YOLOv3 detection map of the PC buildings with the ground truth map (exact position of the buildings) for a part of Almaty (year 2015). It should be noted that the boundaries of the buildings in Fig. 8b were defined manually. As can be seen from Fig. 9a, b and Table 2, the framework performed well and detected the majority of the existing PC buildings in the study area.
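The accuracy metric described above can be written out explicitly; the numbers in the usage note are hypothetical and only illustrate the formula.

```python
def detection_accuracy(auto_detected, false_detected, total_existing):
    """Accuracy as defined in the article: true detections
    (automatic detections minus false ones) divided by the number
    of buildings that actually exist in the study area."""
    return (auto_detected - false_detected) / total_existing
```

For instance, 100 automatic detections containing 15 false positives over 100 existing buildings would give an accuracy of 0.85.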

Automatic PC building detection
Whilst the automatic CNN detection algorithm was highly effective at detecting and highlighting the existing PC buildings within a study area of more than 8 sq. km (Fig. 8a), some features are worth discussing:
- Boundaries overlapping and close appearance for some buildings (Fig. 8a(1)). Due to the limitations of the NMS algorithm and sliding window techniques, some of the sliding windows cut buildings into smaller parts, thus resulting in small boundary boxes. Therefore, the IoU of these boundary boxes was much smaller than the threshold value.
- Some buildings were wrongly detected, as shown in Fig. 8a(2), where the roof of a one-storey house was predicted as a PC building. The wrong detection of these buildings shows that more data is still required to increase the accuracy.
- A few buildings were not recognised as PC buildings, as can be seen in Fig. 8a(3). Non-recognised buildings are highlighted on the ground truth map (Fig. 8b).

Figure 9a, b compares the classification maps of a part of Karaganda city obtained based on peak values (Fig. 9a) and mean values (Fig. 9b) of pixel intensities. As shown in Table 3, for Karaganda city, 35 buildings were found to have "Low damage probability" for both peak and mean values of pixel intensities. 13 out of those 35 buildings actually matched (e.g. compare building 31 in Fig. 9a and b), and thus were confirmed to have "Low damage probability" by both approaches. At the same time, 108 and 97 buildings had "Minor damage probability" if peak and mean pixel intensities were used in the calculations, respectively. 73 buildings matched in both approaches and were, therefore, flagged as having "Minor damage probability" (e.g. compare building 46 in Fig. 9a and b). It was also found that 4 and 15 buildings had a "Moderate damage probability" class for peak and mean pixel intensities, correspondingly, but only three of them matched in both maps.
Finally, "High damage probability" was found by both peak and mean approaches in three buildings, but only one matched in both maps (e.g. compare building 33 in Fig. 9a and b). The study area for Almaty city had 451 PC buildings, and 436 of them were automatically detected with the YOLOv3 algorithm. It was found that 13 and 67 buildings had "Moderate damage probability" for peak and mean pixel intensities, respectively, and eight of them actually matched. Finally, the most critical category of "High damage probability" was found to have two and four buildings, and two of them duplicated between the two approaches.

Damage detection and classification
All "High damage probability" classified buildings in Fig. 9a, b for both classification approaches showed a gradual increase in terms of heat leakage over 4 years of observation. However, not all of them had a uniform increase in heat leakage intensity. For example, Fig. 10 illustrates an example of heat leakage intensities for building #33 in Karaganda city. The framework identified a consecutive heat leakage increase throughout the 4 years of observations and, consequently, building #33 was labelled as having a "high damage probability". The increase in mean values of pixel intensities was considerable between 2015 and 2016 (almost 2%), thus indicating progress of the damage. However, in the last 3 years of observations (2016-2018), the average increase in pixel intensities was less than 0.5% per year on average. At the same time, the peak values calculation approach of the first three years (2015-2017), led to a value of heat leakage intensity increase of 13%. While, between 2017 and 2018, the heat leakage peak value changed drastically  Table 3). However, there are still a few buildings in the study areas that are in a critical state, and therefore, it is proposed to do further urgent assessment.
To further validate the results obtained in this step, an on-site visual inspection of the studied buildings was carried out to assess their actual level of damage. As explained below, both classification approaches performed well, producing almost 50% identical results in terms of damage classification.

Validation of results. On-site visual inspection
An on-site investigation found that most of the existing PC buildings in the case study areas share the following features:
- 5 storeys
- PC sandwich panels as wall systems and facades
- Floor height: 2.5 m
- Average area in plan: 800-850 m2
- Average length and width: 13 × 65 m
- Average number of apartments per building: 70

Figure 11a-c shows examples of three existing PC buildings in Karaganda city. These buildings were identified as having "Minor damage probability", "Moderate damage probability", and "High damage probability" in the analyses presented in the previous section. The visual inspection of these buildings was performed in December 2020 at an outdoor temperature of −23 °C. The images in Fig. 11 were taken with a Canon 5D Mark IV DSLR camera and a FLIR C3 IR camera with a spectral range of 7.5-14.0 µm. In addition, during the investigation, it was found that most of the PC buildings in the research areas had not undergone any retrofitting works in the last 8 years. Therefore, the risk of heat leakage changes being caused by renovation works was minimal.
The outcomes of the visual inspection confirmed the findings from the damage probability classification maps. Indeed, the main structural components of building #25 on Erubayeva street (Fig. 11a) showed only minor signs of deterioration. Overall, the condition of the joints between the PC panels was very good (Fig. 11d), as was the overall condition of the building considering its age. This was confirmed by the minimal traces of heat leakage shown in Fig. 11j. At the same time, building #26 (Fig. 11b) had noticeable signs of joint spalling (Fig. 11e) and a poorer overall condition than building #25. It also showed more traces of heat leakage under the IR camera (Fig. 11h). Building #33 (Fig. 11c) had the highest class of damage probability in the analysis (see Fig. 10). The on-site inspection revealed that building #33 had multiple localised defects that, as a whole, were sufficient to be successfully identified by the proposed framework. It was confirmed that the plastering at the joints between PC panels had spalled severely (Fig. 11f). As a result, deep gaps were visible, and these could communicate with the interior of the building. This condition exposes the joints to freeze-thaw cycles and causes corrosion of the internal reinforcement and of the metal connections between panels, as shown in Fig. 12a, b. The thermal images revealed that building #33 had a higher heat leakage than buildings #25 and #26, as evidenced by the heat leakage patterns on the surface of the building (Fig. 11k). Overall, the average surface temperature of building #33 was 0.8 °C higher than that of the other two buildings. The overall condition of the building was deemed poor, and a further, more detailed inspection is recommended.
The results reported in this article should be considered in the light of some limitations. The damage assessment framework proposed here can only be applied to PC buildings without an outer insulation layer, because only this category of PC buildings experiences heat leakage (through its damaged elements) that can be observed by an IR camera. It is worth mentioning that, in this study, no direct correlation could be established between the nadir-view satellite imagery and the façade damage of buildings, particularly because the facades of many buildings cannot be observed by the KazSat 1 and 2 satellites. However, images or data from other satellites could provide an insight into such a correlation. The proposed damage assessment framework can only be applied to countries with an extreme continental climate, so that the highest level of heat leakage can be observed during winter. It should also be noted that this article focused on large-scale damage assessment. Accordingly, the proposed damage assessment tool is unsuitable for identifying individual small-scale damage (e.g. cracks, concrete spalling, etc.), which can be much smaller than the resolution of satellite images used in civil applications. Despite this, the proposed framework successfully detects increments in heat leakage as damage (either small or large-scale) progresses in a building, and this can be used to suggest further on-site inspections.
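The resolution limitation noted above can be made concrete with a simple Nyquist-style check: a defect must span roughly two pixels of the image's ground sample distance (GSD) to be distinguishable. The GSD value below is an assumed placeholder for illustration, not the actual resolution of the imagery used in this study.

```python
def resolvable(feature_size_m, gsd_m, pixels_required=2.0):
    """A feature is distinguishable only if it spans at least ~2 pixels."""
    return feature_size_m >= pixels_required * gsd_m

GSD_M = 3.0  # assumed thermal-band ground sample distance, metres per pixel

print(resolvable(0.005, GSD_M))  # a 5 mm crack: far below the resolution
print(resolvable(13.0, GSD_M))   # a whole building width (~13 m): resolvable
```

This is why individual cracks or spalled joints cannot be seen directly in the satellite images, whereas the aggregated heat leakage over a whole facade can.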
The proposed framework can also be trained and extended to identify potential changes in emissivity (i.e., changes in pixel intensity) resulting from roof renovations (solar panels, etc.), should this be necessary. Even though the proposed automatic damage assessment framework showed promising results, more conventional non-destructive testing (NDT) methods will be required to assess the buildings' structural condition for further retrofitting or engineering decisions. Ongoing research is examining the use of additional NDT results to further optimise the classification algorithm. Nevertheless, the framework proposed in this article is deemed readily applicable for large-scale damage assessment.

Conclusions
This article presents a novel framework for damage assessment and classification based on satellite IR imagery. The effectiveness of the framework is demonstrated using two case study areas in the cities of Almaty and Karaganda in Kazakhstan. The framework is further verified through on-site visual inspections of actual buildings that the framework flagged as having "Minor damage probability", "Moderate damage probability", and "High damage probability". Based on the results of this study, the following conclusions can be drawn:

(Fig. 12: Examples of metal joint corrosion in damaged PC buildings; a, b show signs of concrete colour change due to corrosion.)

• Satellite IR images can be effectively used for rapid damage assessment and are suitable for producing damage classification maps of the PC buildings in the study areas. The results showed a gradual increase in heat leakage for PC buildings that were classified as having "High damage probability" by the proposed framework.
• In combination with the YOLOv3 algorithm, the proposed framework successfully detected 85% of the PC buildings. Specifically, it detected 88% and 82% of the PC buildings in Almaty and Karaganda cities, respectively. The proposed framework is suitable for detecting buildings in large-scale study areas of up to 8.0 sq. km.
• On-site visits confirmed the findings from the classification maps. Most of the PC buildings classified as having "Moderate damage probability" and "High damage probability" by the framework were visually inspected. PC buildings with "High damage probability" showed clear evidence of progressive damage and had an overall poor structural condition. The most common issue was spalling of the joints at the edges of the PC panels, which is leading to corrosion of the internal reinforcement and of the metal joints between panels.
• The peak pixel intensity classification approach is more conservative, assigning more PC buildings to the "Moderate damage probability" or "High damage probability" classes. It is therefore proposed to use the peak pixel intensity classification approach for rapid spatial scanning of large study areas.
This article contributes to the development of an effective long-term damage assessment tool for existing PC buildings and provides a basis for large-scale urban thermal analysis. The framework can also serve as a source of core data for the energy efficiency analysis of large urban areas.
Appendix A. YOLOv3 structure

Figure 13 shows the YOLOv3 deep learning model used in this study. YOLOv3 is a single-stage CNN consisting of a backbone (DarkNet-53 network) and a head subnet (Feature Pyramid Network) [46]. The backbone computes convolutional feature maps over an input image. The head subnet is built on top of the backbone to perform classification and bounding box regression. The detected large, medium, and small objects (the outputs of the three scales) are merged into one image of predictions.
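As a rough illustration of the three-scale prediction merge, the sketch below counts YOLOv3's raw predictions for a standard 416×416 input (three anchor boxes per cell at 13×13, 26×26, and 52×52 grids) and shows the standard box decoding. This is a generic description of YOLOv3, not code from the study, and the 416-pixel input size is an assumption.

```python
import math

def num_raw_predictions(grid_sizes=(13, 26, 52), anchors_per_scale=3):
    # Each grid cell predicts `anchors_per_scale` boxes; the three scales
    # target large, medium, and small objects and are merged before NMS.
    return sum(g * g * anchors_per_scale for g in grid_sizes)

def decode_box(tx, ty, tw, th, cx, cy, pw, ph, grid, img_size=416):
    # Standard YOLOv3 decoding: sigmoid offsets within cell (cx, cy) scaled
    # by the stride, exponential scaling of the anchor priors (pw, ph).
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    stride = img_size / grid
    bx = (cx + sig(tx)) * stride
    by = (cy + sig(ty)) * stride
    bw = pw * math.exp(tw)
    bh = ph * math.exp(th)
    return bx, by, bw, bh

print(num_raw_predictions())  # total boxes across all three scales
```

For a 416×416 input this yields 10,647 raw boxes, which are then filtered by objectness score and NMS to produce the final building detections.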