Fusion of Optical and SAR Data for Seismic Vulnerability Mapping of Buildings

Part of the Augmented Vision and Reality book series (Augment Vis Real, volume 3)


Seismic risk depends not only on seismic hazard, but also on the vulnerability of the exposed elements; assessing both is essential to provide policy and decision-makers with the information they need to prevent and mitigate losses in lives and property. Currently, the estimation of the seismic vulnerability of buildings relies on accurate, complex models fed with large amounts of in situ data. A limited geographical scope is a natural consequence of such an approach, whereas extensive assessment would be desirable when risk scenarios are concerned. Remote sensing might be fruitfully exploited here, were it not for the gap between the information required by current, accurate, data-hungry vulnerability models and the information derivable from remotely sensed data. In this context the greatest possible amount of information should be collected, and data fusion is more a necessity than an option. Fusion of optical and radar data covers the widest range of information; in this chapter we describe how such information may be extracted and how it can be profitably fed to simplified seismic vulnerability models to assign a seismic vulnerability class to each building. Some examples of real cases are also presented, with a special focus on the test site of Messina, Italy, a notoriously seismic-prone area where an intensive campaign of data collection is in progress within our research group.


Keywords: Data fusion · Very high resolution radar · Building mapping · Seismic vulnerability

1 Introduction

Optical remote sensing has a long history [1] of success in wide-scale classification of land cover, as well as in retrieving features and characteristics of selected items such as vegetation [2] and water [3]. Yet some pieces of information, and some operating conditions, are definitely out of reach for very-short-wavelength remote sensing, such as directly detecting conductive or moving objects, or operating in poor weather.

Apart from these extreme cases, it is well known that optical and radar remote sensing complement each other very well and provide, when exploited together, more information than the sheer sum of the single contributions. In general, improvements in classification accuracy, rejection rate, and interpretation robustness can only be achieved through additional, independent data delivered by different sensors. Data fusion is the concept that formalizes the combination of such measurements. This chapter reviews the fusion of optical and radar data, with specific attention to fusion between very-high-resolution data from the two realms.

2 Fusion of Optical and Radar Data

Data fusion [4] gathers together a large number of methods and mathematical tools, ranging from spectral analysis to plausibility theory. Fusion is not specific to a theme or an application; tools used in a data fusion process for a given application may instead be tailored to the case at hand.

Although the fusion of optical and radar data is potentially very advantageous, the difficulty inherent in combining such widely different types of data has prevented it from becoming commonplace. Optical and radar data may not both be available with the required characteristics at the target site; or they may be available, but with such a long time span between them that some relevant information becomes uncorrelated. Even when suitable data have been retrieved from both sources, the image pair needs to be accurately co-registered, which is not a painless procedure. Traditional, correlation-based methods [5], which work for optical-to-optical image registration, are not applicable to optical-to-radar registration.

Correlation-based methods indeed assume similar types of sensors, and tend to fail when registering optical and radar images because the two have essentially no radiometric correlation, owing to the extremely different wavelengths.

Other approaches were then developed which do not assume radiometric correlation: matching connected groups of pixels (blobs) in the two images [6, 7]; chain-code descriptions of contours [8]; application of active contour models [9, 10, 11].

Even these methods often fail to accurately register optical and radar images, for at least two reasons. The first is, again, that the two images have different radiometric characteristics, and in many cases the contrast between objects can even be reversed; correlations between such dissimilar images rarely yield a peak, even when computed locally. The second important reason for failure lies in speckle noise, which introduces strong distortions in the apparent shape of the areas found in the radar image with respect to the optical one, and this may be sufficient to prevent matching of the corresponding areas. More recently [12], edge-based methods have been proposed which sidestep the problem of radiometric correlation and are capable of providing good geometric agreement between the registered images, at least at the resolutions typical of the older generation of Earth Observation (EO) satellites (i.e., on the order of 10 m).

Nowadays, however, we are witnessing a turnover from the old generation of 10 m resolution radar satellites (ERS, ASAR, JERS) to a new generation of metre-resolution synthetic aperture radar, with the launch of satellites such as COSMO-SkyMed [13], TerraSAR-X [14], and RADARSAT-2 [15]. This new generation of very high resolution (VHR) radar satellites brings the finest achievable radar resolution once more very close to that of optical satellites, making the scales of the two types of data comparable again. It is on urban areas that the fine spatial resolution of such data is best appreciated, given the extreme spatial variability of the urban environment. At these resolutions, details of buildings can be seen in both types of data, and their fusion can theoretically achieve the best results.

Sportouche et al. [16] presented a method for building information extraction, aimed at 3D reconstruction, that exploits data fusion: the optical image is used to obtain the building footprint, after which building detection is validated and height information extracted from the SAR data. Other methods employ high-resolution InSAR data and optical imagery to extract structures such as buildings or bridges [17, 18]. In the case of a seismic event, damage mapping can be very useful, and data fusion is a very powerful tool for this purpose as well [19, 20]. It is possible to exploit both the high revisit rate of the new generation of SAR systems and the fine level of detail available even in a single multiband optical image for change detection [21]. For the same purpose, images acquired at different times can track the construction of a city or the reconstruction of an urban area stricken by a natural disaster [22].

Although still limited in extent by the relative novelty of the data, fusion of very high-resolution optical and radar data clearly represents fertile terrain for building powerful information-extraction tools in remote sensing. This is even more true for urban areas, whose inherent complexity makes the fine spatial discrimination granted by these data highly desirable for large-scale exploitation of the wealth of information they contain.

In the next section, we illustrate some of the issues raised by optical and radar data fusion at very high resolution by analyzing a concrete example.

3 Data Fusion for Vulnerability Assessment

In order to illustrate the usefulness of data fusion, we focus on a particular application, seismic vulnerability assessment, which is particularly interesting because it is relatively new.

3.1 The Aim

Seismic risk depends on both seismic hazard (i.e. how likely an earthquake of given intensity is to occur) and the vulnerability of exposed elements (i.e. how likely a building is to suffer damage of a given extent as a consequence of a seismic input of given intensity), although it is more commonly thought of in terms of hazard alone.

The contribution of remote sensing to seismic hazard computation is generally indirect: it consists of collecting clues on, e.g., seismic fault locations and patterns, to be used as input to probabilistic models, which in turn estimate earthquake probabilities. The information fed in through remote sensing is often replaceable with input from other sources, such as global fault models.

On the other hand, the contribution of remote sensing to seismic vulnerability estimation can be substantial. At a very different scale from the factors connected with seismic hazard, vulnerability assessment can help map seismic risk at a fine level of detail. A capability to map vulnerability over a wide geographical scope can thus improve disaster preparedness on the one hand and, on the other, make early-stage damage estimation more precise and reliable by incorporating vulnerability models into damage estimation algorithms.

As already mentioned, the seismic vulnerability of a structure can be defined as its susceptibility to damage from ground shaking of a given intensity, usually described in terms of a probability of damage over discrete damage levels. Evaluating the vulnerability of the existing building stock is pivotal in this framework, and indeed it has a long history of methods proposed over the years [23], based on empirical, analytical, or hybrid approaches. In general, the various proposed methods need a considerable amount of information to be collected; for example, when the response of a single building is considered, existing approaches essentially require several studies of the structure, such as an accurate examination of the possible local mechanisms of damage and collapse, the selection of a probable nonlinear response mechanism, and so on. This may severely limit the geographic scope of the vulnerability estimation procedure, either because historical data are unavailable at the desired precision or in the desired format, or because in situ data collection is too expensive and time-consuming to be practical. It may become feasible, though, once suitable methods are available and trading precision for geographical scope is a viable option. Recently, new algorithms have been developed for vulnerability assessment which require fewer data, normally available from a census of the building stock, e.g. year of construction, number of storeys, materials, etc. One such method, the Simplified Pushover-Based Earthquake Loss Assessment (SP-BELA) [24], can provide a sensible output for comparison purposes even with a very limited set of inputs. These include the footprint of the building and the number of storeys, the latter parameter being more important than the total height of the structure.
Remote sensing techniques, which by definition can operate on far larger scales than in situ data collection, are in a position to complete the framework [25]. The 3D shape of the building is a most relevant input item. The literature offers many building-height extraction methods, for both optical and SAR imagery. Existing methodologies are based either on shadow analysis or on interferometric data [26, 27]. However, the calculation of the interferogram fails if all of the roof backscattering is sensed before the double-bounce area and therefore superimposes on the ground scattering in the layover region, which is usually the case for tall buildings. To tackle the problem of mixed signals from different altitudes, methods founded on interferometric or polarimetric data or on stereoscopic SAR have been proposed [28, 29]. More recently, methods based on multi-aspect data, where the same area is measured from different flight paths, have been proposed [30]. Generally speaking, as testified by the amount of relevant literature, extracting the 3D shape of a building is quite a complex problem. For our purposes, however, it can be split into two sub-problems: footprint extraction and determination of the number of storeys. The latter problem is quite new in the remote sensing research scenario, and simpler than traditional building-height extraction. Our final intent is wide-area scanning of the urban environment, using optical data to extract building footprints and, thanks to its side-looking nature, SAR data to extract the number of storeys. These pieces of information then represent the basic input to the vulnerability model.

3.2 Remote Sensing as a Tool

It is thus clear that a combination of optical and radar data, both at a very high resolution, can satisfy the information needs related to wide-scale vulnerability assessment in urban areas.

High-resolution (HR) optical data seem a good means to determine items such as shape and size, the footprint of the building, and the relative location and orientation of neighbouring buildings. The main issue with HR optical data is cost, currently around 20 € per square kilometre for archive data, rising to 40–50 € per square kilometre if multi-vantage-point acquisition is involved; the latter is useful, e.g., for cross-checking the height of the building against the value determined from shadow length or from an estimate of the number of floors.

High-resolution SAR data, as already mentioned, are starting to become more widely available thanks to the launch and activation of a new generation of satellites with ground resolution around 1 m. Such systems have started producing radar images of the Earth's surface at an unprecedented spatial resolution, at least for spaceborne systems. This opens up new possibilities, as these systems combine the all-weather, night-and-day operation typical of radar with a fine geometrical resolution that reveals details of the scene previously concealed. This ability allows, for example, accurate updating of maps of disaster-prone areas, because the significant elements can be mapped as soon as the acquired data become available. This is connected with vulnerability in the sense of affording an updated scenario of possible lifelines, escape routes and population distribution. It is, however, difficult to estimate the cost of using such images, as most of the data distribution is still made for scientific purposes only, at subsidized prices.

In order to better illustrate the issues involved in seismic vulnerability determination from combined optical and radar satellite data, we will focus on a specific test site, the city of Messina, Italy. The city is well known to the earthquake-science community because of the disastrous 1908 event, which also triggered a tsunami and resulted in the almost complete destruction of the city. Several studies are underway on this test site, and the 2008 Applied Geophysics Conference took place in Messina to celebrate 100 years of progress in disaster mitigation and management. The vulnerability of the Messina building stock was analysed through a statistical approach in which the assessment unit was the census tract.

Extraction of the building footprint, as well as extraction of the number of storeys, relies extensively on a linear feature extractor termed the W-Filter, part of a feature extraction software package named BREC [31]. The footprint of the building (Fig. 2) was extracted by applying the linear feature extractor to an optical, very-high-resolution image, namely the panchromatic band of a QuickBird image purchased for this specific purpose, whose features are reported in Table 1. A quick-look of the image is shown in Fig. 1.
Table 1
Information on the QuickBird image (sensor vehicle, acquisition date, total off-nadir angle, area max off-nadir angle, area max sun elevation, total cloud cover pct., area cloud cover pct.; imaging bands: Pan + MS1)

Fig. 1

Preview of the purchased image ©DigitalGlobe

Fig. 2

Steps in the generation of the building footprint estimate: (a) the original grayscale image, (b) preliminary feature extraction, (c) feature merging, (d) footprint hypothesis

A procedure has been set up to connect the extracted linear segments into a "reasonable" footprint for the considered building. This procedure outlines the shape and size of the building footprint and determines its across and along dimensions, two of the most important parameters for vulnerability assessment.
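The segment-linking procedure itself is not detailed here. As a loose illustration only, the sketch below (Python with NumPy; all names are hypothetical and the PCA-based bounding box stands in for the actual BREC merging logic) estimates an oriented rectangular footprint, and its along and across sizes, directly from the endpoints of extracted segments.

```python
import numpy as np

def footprint_from_segments(segments):
    """Estimate an oriented rectangular footprint from extracted line segments.

    segments: sequence of (x1, y1, x2, y2) rows, one per extracted segment.
    Returns (corners, along, across): the 4 corners of the oriented bounding
    rectangle in image coordinates, plus its two side lengths.
    """
    pts = np.asarray(segments, dtype=float).reshape(-1, 2)
    c = pts.mean(axis=0)
    # Principal axes of the endpoint cloud give the dominant orientation
    _, _, vt = np.linalg.svd(pts - c)
    local = (pts - c) @ vt.T              # rotate into the building frame
    mn, mx = local.min(axis=0), local.max(axis=0)
    along, across = mx - mn               # the two footprint sizes
    corners_local = np.array([[mn[0], mn[1]], [mx[0], mn[1]],
                              [mx[0], mx[1]], [mn[0], mx[1]]])
    corners = corners_local @ vt + c      # back to image coordinates
    return corners, float(along), float(across)
```

For an idealized rectangular building the recovered along/across sizes match the true side lengths; on real, noisy segment sets a robust merging step would be needed first.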

The following step is the SAR image analysis. As can be seen (Fig. 3a), radar images feature quite apparent rows of scatterers, probably originating from the corner structures formed by protruding balconies, in addition to the corner-reflector structure at the pavement/façade junction. If the footprint of the building is available, so is the dominant direction of the façade in the image. Directional filtering turns such rows of scatterers into a more homogeneous, linear bright area, which can be easily detected by the linear feature extractor, as seen in Fig. 3. Quite apparent here are the three parallel lines which mark the three storeys. Counting the longest parallel lines extracted from the image yields the number of storeys in the building. The overall information flow is shown in Fig. 4.
Fig. 3

(a) SAR image of the selected building, (b) segments extracted from the north-west façade, (c) segments extracted from the north façade

Fig. 4

Flow-chart of the applied method
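The directional filtering step described above can be sketched as follows (Python; a minimal illustration under our own assumptions, not the actual filter used in this work): the SAR chip is rotated so that the façade direction retrieved from the optical footprint becomes horizontal, then a 1D mean filter is run along the rows to merge each row of point scatterers into a more continuous bright line.

```python
import numpy as np
from scipy.ndimage import rotate, uniform_filter1d

def directional_smooth(sar_chip, facade_angle_deg, length=9):
    """Average along the facade direction to merge rows of point scatterers.

    sar_chip:         2D array of SAR amplitudes around the building.
    facade_angle_deg: facade orientation from the optical footprint,
                      counter-clockwise from the image x-axis.
    length:           size (in pixels) of the 1D averaging window.
    """
    # Rotate so the facade runs along image rows (axis 1)
    aligned = rotate(sar_chip, -facade_angle_deg, reshape=False, order=1)
    # Smooth along rows only: isolated scatterers merge into bright lines
    return uniform_filter1d(aligned, size=length, axis=1)
```

The window length is a tuning parameter: it should roughly match the spacing of the scatterers (e.g. balcony corners) along the façade.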

Fig. 5

Steps in number-of-storeys extraction. (a) and (e): original images; (b), (c), (f), (g): after rotation to align reflector lines with the principal direction, and after filtering; (d) and (h): examples of reflector row extraction

Unfortunately, the experiments have shown that, although the number of floors can apparently be extracted by visual interpretation, the procedure as set up is somewhat too simplistic and sometimes fails to deliver the correct number of floors (Fig. 5).

The main problem seems to lie in the directional filtering, which fails to highlight the edges between reflector rows sharply enough for the extractor to work correctly.

This issue was addressed by introducing two important novelties:
  • Use of a hard decision (strong scatterer/no strong scatterer) on each pixel.

  • SAR + SAR + optical fusion instead of SAR + optical alone.

The first modification was introduced to account for the insufficient contrast created by the directional filter. Instead of attempting to make the impulse response of the filter ever sharper, a strategy that proved basically ineffective, a binary logic was introduced. In a preliminary step, each pixel in the image is tested for being a local maximum; if so, it is marked with a "1" in a resulting mask image, and with a "0" otherwise. Strong scatterers, despite their spatially spread response (probably due to the distributed impulse response of the SAR), are thus turned into single 1's in the mask image. The mask image is then rotated by the orientation angle retrieved from the optical image.
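As a rough sketch of this hard-decision step (Python; the neighbourhood size and the mean-level threshold are our own assumptions, not values from the chapter):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def strong_scatterer_mask(amplitude, size=3):
    """Binary mask marking pixels that are local maxima of the SAR amplitude.

    A pixel is marked '1' if it equals the maximum of its size x size
    neighbourhood and stands above the mean image level; the latter test
    rejects flat background, where every pixel trivially equals the
    neighbourhood maximum.
    """
    neighbourhood_max = maximum_filter(amplitude, size=size)
    mask = (amplitude == neighbourhood_max) & (amplitude > amplitude.mean())
    return mask.astype(np.uint8)
```

Each spatially spread scatterer response collapses to a single marked pixel, which is what the subsequent dilation stage expects.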

At this stage, a morphological dilation is performed using a structuring element shaped as a row of pixels, equivalent to extending the "1"-marked areas along rows, given the rotation of the mask image. This merges the scatterers constituting a row that marks a floor boundary. A final stage counts the number of 0–1–0 transitions along each column, as this is expected to correspond to the number of floors. Isolated transitions are discarded, as they may be caused by speckle spikes.
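Assuming the mask has already been rotated so that floor lines run horizontally, the dilation and per-column transition count can be sketched as (Python; the structuring-element length is an illustrative choice, and the isolated-transition rejection is left to the later voting stage):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def storey_transition_counts(rotated_mask, row_len=15):
    """Per-column count of 0-1 transitions after horizontal dilation.

    rotated_mask: binary mask with the floor-marking scatterer rows horizontal.
    row_len:      length of the 1 x row_len structuring element that merges
                  the scatterers of one floor line into a solid row.
    """
    merged = binary_dilation(rotated_mask,
                             structure=np.ones((1, row_len), dtype=bool))
    # each 0-1-0 transition along a column starts with a rising edge
    rising = np.diff(merged.astype(np.int8), axis=0) == 1
    return rising.sum(axis=0)
```

Columns whose count disagrees with the bulk of the others would then be treated as speckle-induced outliers.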

The second modification was introduced to make the overall procedure more robust. On the test site, as already mentioned, more than one SAR image was available, from different vantage points. Thus a second image of the same building, acquired from an azimuth more favourable for viewing a different façade, was considered and underwent the same procedure.

Figure 6 shows a flowchart of this second method used to assess the number of floors of a given building:
Fig. 6

Flow-chart of the second method

3.3 Decision-Level Fusion

A final fusion step between the floor-number estimates is then performed, as shown in Fig. 7. The number of floors results from majority voting among the numbers of transitions extracted from the mask image along its columns, according to the criteria discussed in the previous subsection. The experiments report a large number of errors on single columns, yet with a large majority of correct counts.
Fig. 7

Flow-chart of the final data fusion
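The voting step itself is straightforward; a minimal sketch follows (Python; treating zero counts as "no detection" and discarding them is our own assumption):

```python
from collections import Counter

def vote_floor_number(column_counts):
    """Majority vote over per-column transition counts.

    column_counts: iterable of per-column 0-1-0 transition counts, possibly
    pooled from several images of the same building; zero counts (columns
    where nothing was detected) are discarded before voting.
    """
    votes = [c for c in column_counts if c > 0]
    if not votes:
        return None   # no usable column
    return Counter(votes).most_common(1)[0][0]
```

Because single-column errors are frequent but unsystematic, the mode of the counts is far more reliable than any individual column.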

It can be argued that the method developed is very case-specific, since on the particular site of Messina several airborne radar images were available along different flight lines and thus with different azimuth view angles. This naturally makes exploring scatterers from different sides of the buildings easier. This situation can, however, be effectively reproduced using spaceborne radar images acquired on ascending and descending orbits over the same site. If left- and right-looking capabilities are also available, the total number of images at different azimuth vantage points rises to four, which is probably sufficient for many sites.

This method seems to mark a step forward in the reliability of floor-number estimation.

4 Conclusions

In this chapter, the topic of optical and radar data fusion at very high resolution has been discussed. Fusion of HR SAR and HR optical data has been shown to be useful in making each type of data fill the other's gaps. To mention a few basic examples, severe geometric distortions in radar data may be inverted where near-nadir HR optical data are available, faithfully reproducing the shape of the objects; on the other hand, height information may be more easily extracted from radar shadows than from nadir HR optical data.

In order to discuss the issues related to optical and radar data fusion more specifically, a particular application, seismic vulnerability assessment, has been addressed. A practical case has shown how the optical and radar images complement each other, together providing a fairly complete set of features of an observed building.

Still, the usefulness of VHR optical + radar data fusion is somewhat hindered by the complex behaviour of responses from objects observed at these finest resolutions. The literature on this sort of data fusion remains somewhat scarce, although it is expected to grow considerably in the coming years thanks to the ever-increasing availability of this type of data.



Acknowledgments

The authors wish to acknowledge the support of the Italian Civil Protection Department ("Programma Quadro" 2009–2011 funding of the European Centre for Training and Research in Earthquake Engineering, EUCENTRE, Pavia) and the European Commission (funding of project SAFER, 2009). They also wish to thank the colleagues at the Seismic Risk Section of EUCENTRE, particularly Helen Crowley and Barbara Borzi, for their help with the SP-BELA model.


References

  1. Thrower, N.J.W.: Land use in the Southwestern United States from Gemini and Apollo imagery (map suppl. no. 12). Ann. Assoc. Am. Geogr. 60(1), 208–209 (1970)
  2. Myneni, R.B., Pinty, B., Maggion, S., Kimes, D.S., Iaquinta, J., Privette, J.L., Gobron, N., Verstraete, M., Williams, D.L.: Optical remote sensing of vegetation: modeling, caveats, and algorithms. Remote Sens. Environ. 51, 169–188 (1995)
  3. Smith, R.C., Baker, K.S.: The bio-optical state of ocean waters and remote sensing. Limnol. Oceanogr. 23(2), 247–259 (1978)
  4. Wald, L.: A conceptual approach to the fusion of earth observation data. Surv. Geophys. 21, 177–186 (2000)
  5. Fonseca, L.M.G., Manjunath, B.S.: Registration techniques for multisensor remotely sensed imagery. Photogr. Eng. Remote Sens. 62, 1049–1056 (1996)
  6. Ali, M.A., Clausi, D.A.: Automatic registration of SAR and visible band remote sensing images. In: Proceedings of the Geoscience and Remote Sensing Symposium IGARSS '02, IEEE International, pp. 1331–1333 (2002)
  7. Dare, P., Dowman, I.: A new approach to automatic feature based registration of SAR and SPOT images. Int. Arch. Photogr. Remote Sens. XXXIII, 125–130 (2000)
  8. Dai, X., Khorram, S.: A feature-based image registration algorithm using improved chain-code representation combined with invariant moments. IEEE Trans. Geosci. Remote Sens. 37(5), 2351–2362 (1999)
  9. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: active contour models. Int. J. Comp. Vis. 1(4), 321–331 (1987)
  10. Li, H., Manjunath, B.S., Mitra, S.K.: A contour-based approach to multisensor image registration. IEEE Trans. Image Process. 4(3), 320–334 (1995)
  11. Maitre, H., Wu, Y.: A dynamic programming algorithm for elastic registration of distorted pictures based on autoregressive models. IEEE Trans. Acoust. Speech Signal Process. 37, 288–297 (1989)
  12. Hong, T.D., Schowengerdt, R.A.: A robust technique for precise registration of radar and optical satellite images. Photogr. Eng. Remote Sens. 71(5), 585–593 (2005)
  13. Impagnatiello, F., Bertoni, R., Caltagirone, F.: The SkyMed/COSMO system: SAR payload characteristics. In: Proceedings of IGARSS '98, vol. 2, pp. 689–691, 6–10 July 1998, Seattle, WA (1998)
  14. Roth, A.: TerraSAR-X: a new perspective for scientific use of high resolution spaceborne SAR data. In: Proceedings of the 2nd GRSS/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, pp. 4–7, 22–23 May 2003, Berlin, Germany (2003)
  15. Morena, L.C., James, K.V., Beck, J.: An introduction to the RADARSAT-2 mission. Can. J. Remote Sens. 30(3), 221–234 (2004)
  16. Sportouche, H., Tupin, F., Denise, L.: Building extraction and 3D reconstruction in urban areas from high-resolution optical and SAR imagery. In: 2009 Joint Urban Remote Sensing Event, 20–22 May 2009, pp. 1–11 (2009)
  17. Wegner, J.D., Soergel, U., Thiele, A.: Building extraction in urban scenes from high-resolution InSAR data and optical imagery. In: 2009 Joint Urban Remote Sensing Event, 20–22 May 2009, pp. 1–6 (2009)
  18. Soergel, U., Thiele, A., Gross, H., Thoennessen, U.: Extraction of bridge features from high-resolution InSAR data and optical images. In: Urban Remote Sensing Joint Event, 11–13 April 2007, pp. 1–6 (2007)
  19. Stramondo, S., Bignami, C., Pierdicca, N., Chini, M.: SAR and optical remote sensing for urban damage detection and mapping: case studies. In: Urban Remote Sensing Joint Event, 11–13 April 2007, pp. 1–6 (2007)
  20. Chini, M., Pierdicca, N., Emery, W.J.: Exploiting SAR and VHR optical images to quantify damage caused by the 2003 Bam earthquake. IEEE Trans. Geosci. Remote Sens. 47(1), 145–152 (2009)
  21. Orsomando, F., Lombardo, P., Zavagli, M., Costantini, M.: SAR and optical data fusion for change detection. In: Urban Remote Sensing Joint Event, 11–13 April 2007, pp. 1–9 (2007)
  22. Zhang, J., Wang, X., Chen, T., Zhang, Y.: Change detection for the urban area based on multiple sensor information fusion. In: Proceedings of IGARSS '05, IEEE International, vol. 1, 25–29 July 2005 (2005)
  23. Calvi, G.M., Pinho, R., Bommer, J.J., Restrepo-Vélez, L.F., Crowley, H.: Development of seismic vulnerability assessment methodologies over the past 30 years. ISET J. Earthq. Technol. 43(3), Paper No. 472, 75–104 (2006)
  24. Borzi, B., Crowley, H., Pinho, R.: Simplified pushover-based earthquake loss assessment (SP-BELA) method for masonry buildings. Int. J. Archit. Heritage 2(4), 353–376 (2008)
  25. Polli, D., Dell'Acqua, F., Gamba, P., Lisini, G.: Remote sensing as a tool for vulnerability assessment. In: Proceedings of the 6th International Workshop on Remote Sensing for Disaster Management Applications, Pavia, Italy, 11–12 September 2008
  26. Hill, R., Moate, C., Blacknell, D.: Estimating building dimensions from synthetic aperture radar image sequences. IET Radar Sonar Navig. 2(3), 189–199 (2008)
  27. Bennett, A.J., Blacknell, D.: Infrastructure analysis from high resolution SAR and InSAR imagery. In: 2nd GRSS/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, Berlin, Germany (2003)
  28. Cellier, F., Colin, E.: Building height estimation using fine analysis of altimetric mixtures in layover areas on polarimetric interferometric X-band SAR images. In: International Geoscience and Remote Sensing Symposium (IGARSS), Denver, CO, USA (2006)
  29. Simonetto, E., Oriot, H., Garello, R.: Rectangular building extraction from stereoscopic airborne radar images. IEEE Trans. Geosci. Remote Sens. 43(10), 2386–2395 (2005)
  30. Xu, F., Jin, Y.Q.: Automatic reconstruction of building objects from multiaspect meter-resolution SAR images. IEEE Trans. Geosci. Remote Sens. 45(7), 2336–2353 (2007)
  31. Gamba, P., Dell'Acqua, F., Lisini, G.: BREC: the Built-up area RECognition tool. In: Proceedings of the 2009 Joint Urban Remote Sensing Event (JURSE 2009)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  1. Remote Sensing Group, Department of Electronics, University of Pavia, Pavia, Italy
  2. Telecommunications and Remote Sensing Section, European Centre for Training and Research on Earthquake Engineering (EUCENTRE), Pavia, Italy
