
A support vector machine object based image analysis approach on urban green space extraction using Pleiades-1A imagery

  • Zylshal
  • Sayidah Sulma
  • Fajar Yulianto
  • Jalu Tejo Nugroho
  • Parwati Sofan
Original Article

Abstract

The use of remote sensing data for urban studies has increased along with the availability of Very High-Resolution (VHR) satellite data such as IKONOS, Quickbird, Worldview, and Pleiades. This study aimed to evaluate the use of Pleiades-1A imagery and the object based image analysis (OBIA) method to extract information on urban green spaces in parts of Jakarta, Indonesia. Multiresolution segmentation followed by spectral difference segmentation was applied to the imagery. A Support Vector Machine (SVM) was used in the classification phase, followed by expert-knowledge refinement. The Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI), and Modified Soil Adjusted Vegetation Index (MSAVI) were derived from the imagery to support the classification process. The result consists of two landcover classes: “urban green” and “non-urban green”. Accuracy was then assessed against reference data obtained from visual interpretation followed by field measurements. Using an area-based similarity measurement framework, this study achieved an overall accuracy of 86 %. The similarity measurement showed values of at least 87 % for all 20 samples. This study found that the proposed method gave results that were more “similar” than “dissimilar” to the reference data. The segmentation and classification rule set built in this study still needs further study to determine how effective the proposed method is when applied to other cities with different landuse/landcover characteristics.

Keywords

OBIA · Support vector machine · Urban green space · Feature extraction · Urban

Introduction

Vegetation cover in urban areas plays an important role in human life, as it affects the urban climate in terms of temperature, humidity, wind speed, and noise (Hoffmann et al. 2011). Monitoring vegetation cover is therefore one of the important issues in urban planning. Puissant et al. (2014) describe the importance of green space mapping for managing air and water quality, reducing noise levels, and supporting recreational activities. VHR image data such as IKONOS, Quickbird, Worldview-3, and Pleiades are increasingly available, allowing more detailed information extraction on urban vegetation cover. Since visual interpretation is a labour-intensive task, digital classification offers faster results. Automated/semi-automated classification can also suppress subjectivity and provide a reproducible procedure (Belgiu et al. 2014). However, the spectral variation within object classes in VHR imagery makes it difficult to apply conventional pixel-based classification (Lu et al. 2011; Myint et al. 2011; Zhou and Troy 2008). Methods are therefore needed that exploit not only spectral but also contextual information to optimize the classification process automatically/semi-automatically. In recent years, OBIA has been widely used and accepted as an efficient method for classifying high-resolution image data (Blaschke 2010). The OBIA method basically consists of two phases: segmentation, in which the image is divided into smaller objects according to a certain homogeneity criterion (Baatz and Schape 2000), and classification.

The use of the SVM algorithm combined with OBIA to extract landuse/landcover information from VHR imagery has great potential. For example, Tzotsos and Argialas (2008) used this combination to extract four landuse/landcover classes (water, impervious surface, trees, and grass) from Landsat TM and four other classes (tile roofs, vegetation, asphalt surfaces, and bright roofs) from aerial photographs (Toposys GmbH). Petropoulos et al. (2012) used this method on hyperspectral data (Hyperion) to extract seven landuse/landcover classes (including sea, open land, crops/permanent plantation, heterogeneous agriculture, grass, and sclerophyllous-type vegetation). Guo et al. (2008) used a one-class SVM to extract the class “house” in the area of Oakland, California, from aerial photograph data (0.3 m spatial resolution). Wu et al. (2014) combined a decision tree and SVM on Landsat TM imagery to determine landslide vulnerability in the Three Gorges, China. Similarly, Heumann (2011) used an SVM approach coupled with a decision tree in OBIA to classify mangroves using Worldview-2 data in the Galapagos, obtaining an overall accuracy of 94 %. Foody and Mathur (2006) found that SVM was able to provide better results than discriminant analysis and decision-tree algorithms on aerial photo data. At the time of writing, the SVM algorithm had not yet been applied with the OBIA method to Pleiades-1A imagery to extract urban green space information, particularly in the Jakarta area.

The aim of this study was to evaluate the results of the OBIA method with the SVM algorithm for extracting urban green spaces in parts of Jakarta using Pleiades-1A data.

Test site

The test site used in this study is shown in Fig. 1. The area of interest (AOI) is 4000 × 4000 pixels. The AOI represents a suburban area, generally consisting of low vegetation on a golf course, highways and road networks, several lakes, residential and office land, and some bare soil near the lakes.
Fig. 1

Test site, Jakarta, Indonesia (Pleiades © 2013 Distribution Airbus DS)

Method

Data

This study used a Pleiades-1A orthorectified pan-sharpened image with four multispectral bands. The details are shown in Table 1.
Table 1
Pleiades-1A data specification (CNES 2014)

  Acquisition date      12 July 2013
  Sensors               Multispectral: 2 m spatial resolution
                        Panchromatic: 0.5 m spatial resolution
  Spectral resolution   Blue: 430–550 nm
                        Green: 490–610 nm
                        Red: 600–720 nm
                        Near infrared: 750–950 nm
                        Panchromatic: 480–830 nm
  Processing level      Orthorectified

Pre-processing

Three additional layers were created for use in the classification stage: NDVI, NDWI, and MSAVI. NDVI and NDWI (McFeeters 1996) were derived from the Pleiades-1A imagery and calculated as in Eqs. (1) and (2):
$$NDVI = \frac{NIR - R}{NIR + R}$$
(1)
$$NDWI = \frac{G - NIR}{G + NIR}.$$
(2)
NDWI has been widely used to extract and classify water body features from imagery (Gao 1996; Xu 2006), because it is relatively insensitive to illumination differences and image distortions reflected in inconsistent DN values (Belgiu et al. 2014).
$$MSAVI2 = \frac{2 \times NIR + 1 - \sqrt{\left(2 \times NIR + 1\right)^{2} - 8 \times \left(NIR - R\right)}}{2}$$
(3)

MSAVI2, better known and widely used as MSAVI (Qi et al. 1994), has been shown to give better results in distinguishing different canopy structures over different backgrounds, which sets it apart from other indices such as NDVI or SAVI, particularly in urban areas (Jiang et al. 2007; Pham et al. 2011; Van Delm and Gulinck 2011). Used as an additional layer, MSAVI provides further information where NDVI alone is insufficient.
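As a minimal sketch, the three index layers in Eqs. (1)–(3) can be computed directly from the four bands, assuming they are available as floating-point arrays of the same shape (the band variable names are placeholders):

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)          # Eq. (1)

def ndwi(green, nir):
    return (green - nir) / (green + nir)      # Eq. (2)

def msavi2(nir, red):
    # Eq. (3); self-adjusting soil factor, result stays within [-1, 1]
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2
```

A small epsilon can be added to the denominators if zero-valued pixels are expected.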

Segmentation

To implement the SVM algorithm within OBIA, the first step was to segment the image. In this study, the segmentation was done using the “multiresolution segmentation” algorithm in Trimble® eCognition Developer 8.7. This process produces the primitive objects used in the classification stage. The scale parameter value corresponding to the objects of interest in this study was estimated using the ESP (Estimation of Scale Parameter) tool (Dragut et al. 2010).
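The ESP tool works by plotting the local variance (LV) of object heterogeneity across a series of scale values and flagging peaks in its rate of change (ROC-LV) as candidate scale parameters (Dragut et al. 2010). A minimal sketch of the ROC-LV computation, using hypothetical LV values:

```python
# Rate of change of local variance (ROC-LV) behind the ESP tool:
# ROC-LV_l = (LV_l - LV_{l-1}) / LV_{l-1} * 100; peaks suggest candidate scales.
def roc_lv(lv):
    return [(cur - prev) / prev * 100 for prev, cur in zip(lv, lv[1:])]

scales = list(range(60, 121, 10))           # hypothetical scale series
lv = [5.2, 5.9, 6.4, 7.4, 7.6, 7.7, 7.8]    # hypothetical LV per scale
for s, r in zip(scales[1:], roc_lv(lv)):
    print(f"scale {s}: ROC-LV = {r:.1f}")   # local peak here falls near scale 90
```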

A second segmentation algorithm, “spectral difference segmentation”, was then applied on top of the first segmentation stage. This algorithm is often used in studies related to urban vegetation (Pham et al. 2011; Puissant et al. 2014). It basically serves to merge objects with similar (homogeneous) spectral characteristics, based on a user-determined threshold value.
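Both algorithms are proprietary to eCognition. As a rough open-source analogue only (an assumption, not the authors’ workflow), the same two-stage idea, over-segmentation followed by merging of spectrally similar neighbours, can be sketched with scikit-image:

```python
import numpy as np
from skimage import graph, io, segmentation  # skimage >= 0.20 for skimage.graph

bands = io.imread("pleiades_subset.tif").astype(float)  # hypothetical subset
rgb = bands[:, :, :3]                # rag_mean_color expects a 3-channel image

# Stage 1: deliberate over-segmentation into primitive objects
over = segmentation.slic(rgb, n_segments=5000, compactness=10, channel_axis=-1)

# Stage 2: merge adjacent objects whose mean-colour difference falls below a
# threshold, analogous in spirit to spectral difference segmentation
rag = graph.rag_mean_color(rgb, over)
merged = graph.cut_threshold(over, rag, thresh=25)
```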

Classification

In general, classification using the SVM algorithm is a type of supervised classification. After the landuse/landcover classes were determined, the next step was to take training samples for each class. One advantage of the SVM algorithm is that it requires only a small sample size while still giving very good results (Tzotsos and Argialas 2008; Foody and Mathur 2006).

Support Vector Machine

The classification stage was done by applying the SVM algorithm. The first step was to determine the landuse/landcover classes, which consisted of “vegetation”, “non-vegetation”, and “water body”. SVM basically separates objects of different classes with a “hyperplane” (Karatzoglou and Meyer 2006; Kavzoglu and Colkesen 2009).

By identifying the hyperplane that separates the two classes, SVM is able to provide more accurate classification results. Sometimes classes are not linearly separable (Fig. 2b); in this case, a slack variable ξ is used to combine the margin maximization and error minimization criteria (Tzotsos and Argialas 2008).
Fig. 2

SVM algorithm illustrated on two hypothetical class clusters, represented by squares and circles. a Classes with linear separation, b classes with non-linear separation (Tzotsos and Argialas 2008)

The SVM algorithm was originally designed to separate two classes. To apply it to more than two classes, the multiclass problem must be reduced to a set of binary problems. In this study, we used the “one against one” approach, as it is considered more suitable for multiclass cases such as this one (Hsu and Lin 2002). In principle, the “one against one” approach trains a separate binary classifier for each pair of classes.
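The classification in this study was run inside eCognition; purely as an illustrative sketch, the same one-against-one multiclass SVM scheme can be reproduced on per-object feature vectors with scikit-learn (the feature values below are hypothetical):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-object features: mean B, G, R, NIR, NDVI, NDWI, MSAVI2
X_train = np.array([
    [310, 420, 380, 890, 0.40, -0.36, 0.55],   # vegetation sample
    [560, 610, 640, 700, 0.04, -0.07, 0.08],   # non-vegetation sample
    [200, 260, 210, 120, -0.27, 0.37, -0.33],  # water body sample
])
y_train = np.array(["vegetation", "non-vegetation", "water"])

# SVC trains one binary classifier per class pair (one against one) internally
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=10.0, decision_function_shape="ovo"))
clf.fit(X_train, y_train)

X_new = np.array([[300, 400, 360, 850, 0.41, -0.35, 0.50]])
print(clf.predict(X_new))  # expected: ['vegetation']
```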
Fig. 3

Data processing flowchart

Expert knowledge refinement

The results of the SVM classification still needed improvement, especially for some shadowed objects. An additional refinement algorithm based on expert knowledge was therefore written. Integrating machine learning algorithms with knowledge-based systems in the OBIA method can provide better results (Tzotsos and Argialas 2014; Eisank et al. 2014). The refinement algorithm used several additional parameters: texture, brightness, and contextual information, i.e. “relative border to neighbor”, were calculated and used. The refinement was done mostly to correct asphalt and building shadows misclassified as water bodies. The texture parameter was based on GLCM (gray-level co-occurrence matrix) homogeneity (Haralick 1979) in the red channel, calculated as shown in Eq. (4):
$$\sum_{i,j = 0}^{N - 1} \frac{P_{i,j}}{1 + \left(i - j\right)^{2}}$$
(4)
where i is the row number, j is the column number, P_{i,j} is the normalized value in cell (i, j), and N is the number of rows or columns.
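As a sketch, Eq. (4) corresponds to the standard GLCM homogeneity property and can be computed with scikit-image on an object’s red-channel pixels (the patch below is placeholder data):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

red_patch = (np.random.rand(32, 32) * 255).astype(np.uint8)  # placeholder patch
glcm = graycomatrix(red_patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
homogeneity = graycoprops(glcm, "homogeneity").mean()  # Eq. (4), mean over angles
```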
The brightness formula is shown in Eq. (5) (Trimble 2013):
$$b = \frac{1}{n}\sum_{j = 1}^{n} w_{j} b_{i}^{j}$$
(5)
where b is the brightness, w_j is the (real-valued) weight of layer j, n is the number of layers included in the calculation, and b_i^j is the mean intensity of layer j for image object i.
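Eq. (5) is simply a weighted mean of the object’s per-layer mean intensities; a direct transcription (with hypothetical band means) is:

```python
# Brightness of one object as in Eq. (5); layer_means are hypothetical values.
def brightness(layer_means, weights=None):
    weights = weights or [1.0] * len(layer_means)
    return sum(w * b for w, b in zip(weights, layer_means)) / len(layer_means)

print(brightness([310.0, 420.0, 380.0, 890.0]))  # mean of B, G, R, NIR -> 500.0
```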
Relative border to neighbor calculates the relative length of an object’s boundary that is directly adjacent to neighboring objects. This parameter is defined as the ratio of the border length an image object shares with neighboring image objects assigned to a defined class, to its total border length. If the relative border of an image object to image objects of a certain class is 1, the image object is completely embedded in them; if the relative border is 0.5, half of its border adjoins objects of that class. The formula for relative border to neighbor is shown in Eq. (6) (Trimble 2013):
$$\frac{\sum_{u \in N_{v}\left(d,m\right)} b\left(v,u\right)}{b_{v}}$$
(6)
where b(v,u) is the length of the common border between image objects v and u, N_v(d,m) is the set of neighbors of image object v at distance d (assigned to class m), and b_v is the total border length of image object v. The data processing flowchart is shown in Fig. 3.
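As a sketch under the assumption of a 4-connected label raster (not eCognition’s internal implementation), Eq. (6) can be approximated by counting shared pixel edges:

```python
import numpy as np

def relative_border(labels, classes, v, target_class):
    """Eq. (6) on a label raster: share of object v's (interior) border that
    adjoins objects of target_class. labels: 2-D array of object ids;
    classes: 1-D array mapping object id -> class code."""
    classes = np.asarray(classes)
    shared = total = 0
    h, w = labels.shape
    for dy, dx in ((0, 1), (1, 0)):          # count each pixel edge once
        a, b = labels[:h - dy, :w - dx], labels[dy:, dx:]
        border = (a == v) ^ (b == v)         # edges on v's boundary
        neighbor = np.where(a == v, b, a)    # label on the other side
        total += border.sum()
        shared += (border & (classes[neighbor] == target_class)).sum()
    return shared / total if total else 0.0
```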

Accuracy assessment

This study used visual interpretation of the Pleiades-1A image, followed by a field survey, as reference data. The accuracy assessment conducted in this study was the area-based similarity measurement proposed by Whiteside et al. (2014). Twenty randomly selected point samples with a 100 m radius were used. These circular polygon samples were then used as clip boundaries for cutting both the proposed method’s results and the visually interpreted reference data.

The complete set of accuracy measures used in this study is given in Table 2. C is the area of the classified object and R is the area of the reference object; C∩R is the intersection of C and R, ¬C∩R is the area of R not covered by C, C∩¬R is the area of C not covered by R, and C∪R is the union of C and R.
Table 2
A summary of the area-based measures of similarity/dissimilarity used in this study, as described by Whiteside et al. (2014)

  Author              Measure                     Formula                                    Domain  Notes
  Winter (2000)       s11                         |C ∩ R| / |C ∪ R|                          0–1     Similarity/grade of equality (overall accuracy)
  Zhan et al. (2005)  Overall quality (OQ)        |C ∩ R| / (|C ∩ ¬R| + |¬C ∩ R| + |C ∩ R|)  0–1
                      User’s accuracy (UA)        |C ∩ R| / |C|                              0–1
                      Producer’s accuracy (PA)    |C ∩ R| / |R|                              0–1
  Weidner (2008)      False positive rate (ρfp)   |C ∩ ¬R| / |R|                             0–∞     False alarm rate
                      False negative rate (ρfn)   |¬C ∩ R| / |R|                             0–1

The similarity measure (s11) is associated with the degree of similarity between the classified object and the reference data: a value of 0 indicates no overlapping area (totally disjoint), and a value of 1 indicates identical areas (Winter 2000). Overall quality (OQ), producer’s accuracy (PA), and user’s accuracy (UA) provide information about each class. The overall accuracy (OA), on the other hand, calculates the percentage of all classes extracted correctly relative to the total area (Zhan et al. 2005).

In general, the accuracy assessment in this study can be divided into two parts: class-related measures, represented by the false positive rate, false negative rate, overall quality, user’s accuracy, and producer’s accuracy; and sample-related measures, represented by the similarity measure (s11).
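As a sketch, the measures in Table 2 can be computed from boolean rasters of the classified (C) and reference (R) extents of one class inside a sample polygon, with pixel counts standing in for areas:

```python
import numpy as np

def area_measures(C, R):
    """Area-based measures of Table 2 from boolean masks C (classified)
    and R (reference) of one class within a sample polygon."""
    inter = np.logical_and(C, R).sum()
    union = np.logical_or(C, R).sum()
    c_not_r = np.logical_and(C, ~R).sum()   # |C ∩ ¬R|
    r_not_c = np.logical_and(~C, R).sum()   # |¬C ∩ R|
    return {
        "s11": inter / union,                        # Winter (2000)
        "OQ":  inter / (c_not_r + r_not_c + inter),  # Zhan et al. (2005)
        "UA":  inter / C.sum(),
        "PA":  inter / R.sum(),
        "FP":  c_not_r / R.sum(),                    # Weidner (2008)
        "FN":  r_not_c / R.sum(),
    }
```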

Results

Segmentation results

We first ran the ESP tool to estimate the optimum scale value (Fig. 4). After trial and error around the obtained value, we decided to use 90 as the scale parameter. The complete segmentation parameters and the results are shown in Table 3 and Fig. 5, respectively. The initial objects generated with the multiresolution segmentation algorithm tend to be over-segmented. This was deliberate, to keep relatively small objects identifiable and to accommodate finer boundaries for further sampling in the classification stage (Duro et al. 2012; Smith 2010; Stumpf and Kerle 2011; Puissant et al. 2014; Zylshal et al. 2015). The second segmentation stage was used to merge the more homogeneous objects, such as highways, the golf course, and lakes.
Fig. 4

ESP Tool results. The red circle (90) shows the optimum value calculated by the ESP Tool

Table 3
Segmentation parameters

  Level  Segmentation algorithm            Layer input   Scale  Shape  Compactness
  1      Multiresolution segmentation      B, G, R, NIR  90     0.5    0.3

  Level  Segmentation algorithm            Layer input   Maximum spectral difference
  2      Spectral difference segmentation  B, G, R, NIR  25

Fig. 5

A subset of segmentation results: a multiresolution segmentation algorithm, b spectral difference segmentation (Pleiades © 2013 Distribution Airbus DS)

Classification results

The classification results are shown in Fig. 6. The result consists of three classes: “vegetation”, “non-vegetation” and “water bodies”. The initial classification results still contained some errors, especially in dark areas, as the SVM failed to distinguish some tree and building shadows from water (Fig. 6a). We then created additional rule sets using contextual information on the misclassified objects. The refinement result is shown in Fig. 6b.
Fig. 6

Expert-knowledge refinement of shadows and roads misclassified as water bodies: a yellow marks show some of the misclassified objects, b the same objects after refinement

For the purposes of the accuracy assessment, the water body class was then merged into the non-vegetation class. This was done to match the existing visual interpretation used as reference data. The area-based similarity measurements were then conducted; the summary statistics of the two classes are shown in Tables 4 and 5.
Table 4
Summary statistics (m²) for “vegetation”

              |C ∪ R|     |C|         |C ∩ ¬R|   |¬C ∩ R|   |R|         |C ∩ R|
  Total       314,469.5   289,700.75  21,510.75  24,768.75  292,958.75  268,190
  Percentage  100         92          7          8          93          85
  Mean        1828.31     75.64       5.99       7.53       1566.62     1131.60
  Minimum     0.25        0.25        0.25       0.25       0.25        0.25
  Maximum     29,377.00   29,146.25   1328.00    1190.50    29,276.50   29,146.25

Table 5
Summary statistics (m²) for “non-vegetation”

              |C ∪ R|     |C|         |C ∩ ¬R|   |¬C ∩ R|   |R|         |C ∩ R|
  Total       359,940     338,229.75  24,455     21,710.25  335,485.00  313,774.8
  Percentage  100         94          7          6          93          87
  Mean        3129.91     96.53       7.47       6.36       3321.63     1352.48
  Minimum     0.25        0.25        0.25       0.25       0.25        0.25
  Maximum     29,587.25   29,100.00   1188.00    1328.00    29,169.75   29,100.00

Discussion

The advantage of the proposed method is that the object boundaries it produces are more similar to those of visual interpretation, without the “salt and pepper” effect caused by within-object spectral and textural variation in VHR imagery. Table 6 shows the results of the area-based accuracy assessment. Overall quality (OQ) shows the classification accuracy of one class against the entire study area, whereas overall accuracy (OA) indicates the accuracy of the classification of all classes (vegetation and non-vegetation) over the entire study area (entire image). PA indicates the probability of an object being correctly classified, while UA indicates the probability that an object classified on the map actually represents that class in the field (Congalton 1991). For example, for the vegetation class in this study, 92 % of the reference vegetation was correctly classified as vegetation, and 93 % of the mapped vegetation was actually vegetation in the field.
Table 6
Area-based accuracy assessment summary

  Class           OQ (%)  UA (%)  PA (%)  OA (%)  FP (%)  FN (%)
  Vegetation      85      93      92      86      7       6
  Non-vegetation  87      93      94              8       6

OQ overall quality, UA user’s accuracy, PA producer’s accuracy, OA overall accuracy, FP false positive rate, FN false negative rate

From the similarity measurement results shown in Figs. 7f and 8, the highest s11 value, 0.99, was obtained on sample 6 (black circle in Fig. 7f), and the lowest, 0.87, on sample 3 (white circle in Fig. 7f). The differences between the classification result and the reference data were generally caused by different delineation paths. The reference data, obtained from visual interpretation, tend to be smoother because of the way interpreters handle lines: when digitizing, they do not follow the pixel boundaries of the image but rely more on contextual information. The proposed method, being conducted digitally, follows pixel boundaries when creating an object’s perimeter and thus generates more jagged edges (Fig. 8).
Fig. 7

Testing site, overlaid with the Pleiades-1A image. a Multiresolution segmentation algorithm result, b samples taken for SVM classification for three classes: red for “non-vegetation”, blue for “water”, green for “vegetation”, c SVM classification result, d “water” objects merged with “non-vegetation” for accuracy assessment, e randomly generated sample areas, f similarity measurement (s11) result (Pleiades © 2013 Distribution Airbus DS)

Fig. 8

Similarity measurement (s11) for each sample

All samples gave s11 values above 0.5, indicating that the proposed method tends to produce results that are more “similar” than “dissimilar”. With the lowest s11 value at 0.87, this result indicates that the method can be used to extract urban green space from VHR imagery such as Pleiades-1A (Fig. 9).
Fig. 9

Boundary disagreement between the proposed method and the reference data. The jagged red line was produced by the OBIA method; the blue line is the reference data produced by visual interpretation (Pleiades © 2013 Distribution Airbus DS)

The area-based accuracy assessment used in this study was also able to provide the information needed for OBIA classification results, assessing both the classification accuracy and the accuracy of the object boundaries. This is in line with Winter (2000), who stated that for object-based image classification at least one measure of similarity or dissimilarity is required. The use of random sampling in the accuracy assessment suppressed the dominance of large objects and removed the dependence on reference data covering the entire area. This is in line with what other researchers have previously proposed (Congalton and Green 2009; Foody 2011; Whiteside et al. 2014).

The weakness of the proposed method is that the rule set is still dependent on the image sensor and acquisition date; it cannot be directly applied to a different area with a different sensor and acquisition time. The same principles and procedures can still be applied, with modifications to accommodate the differences. Additional data such as LiDAR (Light Detection and Ranging) could add more options to the classification process and produce more accurate maps. The Pleiades-1A satellite, as part of a constellation with Pleiades-1B, is also able to perform stereo acquisition and can therefore provide photogrammetrically derived elevation information from stereo/tri-stereo imagery. This could also be used to improve the accuracy of the segmentation and classification process. These options are part of our future research.

Conclusion

This study attempted to combine a machine learning algorithm and expert knowledge in an OBIA framework to extract information about urban green space. Using only Pleiades-1A multispectral information, the proposed method was able to produce segmentation and classification results with a high degree of similarity to manual visual interpretation. With a minimum similarity value above 85 % across all generated samples, the proposed method gives results that are more “similar” than “dissimilar”. The overall accuracy of the proposed method was 86 %. The area-based accuracy assessment and similarity measurement of the OBIA results show that this evaluation method is easy to implement and accommodates the accuracy information needed for the ever-growing field of object-based image analysis, in terms of assessing both the classification results and the object boundaries produced by the segmentation process. The OBIA method used in this study can serve as an alternative solution for monitoring urban green space. With its ability to perform the classification digitally, results can be obtained in a much faster timeframe, with acceptable accuracy, compared to conventional pixel-based classification.


Acknowledgments

This study was funded by the budget of DIPA LAPAN activities in 2015, Remote Sensing Application Center, Indonesian National Institute of Aeronautics and Space (LAPAN). We would like to thank Dr. M. Rokhis Khomarudin (Director of the Remote Sensing Application Center, LAPAN) for his support in making this research happen, as well as the Environmental Researcher Team for helping the authors conduct the field survey. We would also like to offer our gratitude to LAPAN’s Remote Sensing Data and Technology Center for providing the Pleiades-1A imagery and conducting the orthorectification process.

References

  1. Baatz M, Schape A (2000) Multiresolution segmentation—an optimization approach for high quality multi-scale image segmentation. In: Strobl J, Blaschke T, Griesebner G (eds) Angewandte Geographische Informations-Verarbeitung XII. Wichmann Verlag, Karlsruhe, pp 12–23
  2. Belgiu M, Dragut L, Strobl J (2014) Quantitative evaluations of variations in rule-based classifications of land cover in urban neighborhoods using WorldView-2 imagery. ISPRS J Photogramm Remote Sens 88:205–215
  3. Blaschke T (2010) Object based image analysis for remote sensing. ISPRS J Photogramm Remote Sens 65(1):2–16
  4. CNES (2014) PLEIADES. France. https://pleiades.cnes.fr/en/PLEIADES/A_produits.htm. Accessed 21 Dec 2015
  5. Congalton RG (1991) A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens Environ 37:35–46
  6. Congalton RG, Green K (2009) Assessing the accuracy of remotely sensed data: principles and practices, 2nd edn. CRC Press, Boca Raton
  7. Dragut L, Tiede D, Levick SR (2010) ESP: a tool to estimate scale parameter for multiresolution image segmentation of remotely sensed data. Int J Geogr Inf Sci 24(6):859–871
  8. Duro DC, Franklin SE, Dube MG (2012) A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens Environ 118:259–272
  9. Eisank C, Holbling D, Friedl B, Chen Y, Chang K (2014) Expert knowledge for object-based landslide mapping in Taiwan. S East Eur J Earth Obs 25(3):347–350
  10. Foody GM (2011) Classification accuracy assessment. IEEE Geoscience and Remote Sensing Newsletter, June, pp 8–14
  11. Foody GM, Mathur A (2006) The use of small training sets containing mixed pixels for accurate hard image classification: training on mixed spectral responses for classification by a SVM. Remote Sens Environ 103:179–189
  12. Gao BC (1996) NDWI—a normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens Environ 58(3):257–266
  13. Guo Q, Du G, Liu Y, Liu D (2008) Integrating object-based classification with one-class support vector machines in mapping a specific land class from high spatial resolution images. In: The international archives of the photogrammetry, remote sensing and spatial information sciences, vol XXXVII, part B4, Beijing, pp 1159–1164
  14. Haralick RM (1979) Statistical and structural approaches to texture. Proc IEEE 67(5):786–804
  15. Heumann BW (2011) An object classification of mangroves using a hybrid decision tree—support vector machine approach. MDPI J Remote Sens 3:2440–2460
  16. Hoffmann P, Strobl J, Nazarkulova A (2011) Mapping green spaces in Bishkek—how reliable can spatial analysis be? MDPI J Remote Sens 3:1088–1103
  17. Hsu CW, Lin CJ (2002) A comparison of methods for multiclass support vector machines. IEEE Trans Neural Netw 13(2):415–425
  18. Jiang Z, Huete AR, Li J, Qi J (2007) Interpretation of the modified soil-adjusted vegetation index isolines in Red-NIR reflectance space. J Appl Remote Sens 1(013503):1–12
  19. Karatzoglou A, Meyer D (2006) Support vector machines in R. J Stat Softw 15:1–28
  20. Kavzoglu T, Colkesen I (2009) A kernel functions analysis for support vector machines for land cover classification. Int J Appl Earth Obs Geoinf 11:352–359
  21. Lu D, Hetrick S, Moran E (2011) Impervious surface mapping with Quickbird imagery. Int J Remote Sens 32(9):2519–2533
  22. McFeeters SK (1996) The use of normalized difference water index (NDWI) in the delineation of open water features. Int J Remote Sens 17:1425–1432
  23. Myint SW, Gober P, Brazel A, Grossman CS, Weng Q (2011) Per-pixel vs. object-based classification of urban landcover extraction using high spatial resolution imagery. Remote Sens Environ 115(5):1145–1161
  24. Petropoulos GP, Kalaitzidis C, Vadrevu KP (2012) Support vector machines and object-based classification for obtaining land-use/cover cartography from Hyperion hyperspectral imagery. Comput Geosci 41:99–107
  25. Pham TTH, Apparicio P, Séguin AM, Gagnon M (2011) Mapping the greenscape and environmental equity in Montreal: an application of remote sensing and GIS. In: Caquart SB, Vaughan L, Cartwright W (eds) Mapping environmental issues in the city: arts and cartography cross perspectives. Springer-Verlag, Berlin, pp 30–48
  26. Puissant A, Rougier S, Stumpf A (2014) Object-oriented mapping of urban trees using Random Forest classifier. Int J Appl Earth Obs Geoinf 26:235–245
  27. Qi J, Chehbouni A, Huete AR, Kerr YH (1994) Modified soil adjusted vegetation index (MSAVI). Remote Sens Environ 48:119–126
  28. Smith A (2010) Image segmentation scale parameter optimization and land cover classification using the random forest algorithm. J Spat Sci 55:69–79
  29. Stumpf A, Kerle N (2011) Object-oriented mapping of landslides using random forests. Remote Sens Environ 115:2564–2577
  30. Trimble (2013) eCognition® Developer reference book. Trimble, München, Germany
  31. Tzotsos A, Argialas D (2008) Support vector machine classification for object-based image analysis. In: Object-based image analysis. Springer, Berlin, Heidelberg, pp 663–667
  32. Tzotsos A, Argialas D (2014) Integrating knowledge-based expert systems and advanced machine learning for object-based image analysis. In: 5th GEOBIA, 21–24 May, Thessaloniki. http://aiolos.survey.ntua.gr/slides/geobia2014. Accessed 18 Feb 2015
  33. Van Delm A, Gulinck H (2011) Classification and quantification of green in the expanding urban and semi-urban complex: application of detailed field data and IKONOS-imagery. Ecol Indic 11:52–60
  34. Whiteside TG, Maier SW, Boggs GS (2014) Area-based and location-based validation of classified image objects. Int J Appl Earth Obs Geoinf 28:117–130
  35. Weidner U (2008) Contribution to the assessment of segmentation quality for remote sensing applications. In: International archives of the photogrammetry, remote sensing and spatial information sciences, vol XXXVII-B7, pp 479–484
  36. Winter S (2000) Location similarity of regions. ISPRS J Photogramm Remote Sens 55:189–200
  37. Wu X, Ren F, Niu R (2014) Landslide susceptibility assessment using object mapping units, decision tree, and support vector machine models in the Three Gorges of China. Environ Earth Sci 71:4725–4738
  38. Xu H (2006) Modification of normalized difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int J Remote Sens 27(14):3025–3033
  39. Zhan Q, Molenaar M, Tempfli K, Shi W (2005) Quality assessment for geo-spatial objects derived from remotely sensed data. Int J Remote Sens 26:2953–2974
  40. Zhou W, Troy A (2008) An object-oriented approach for analyzing and characterizing urban landscape at the parcel level. Int J Remote Sens 29(11):3119–3135
  41. Zylshal, Yulianto F, Pasaribu JM, Prasasti I (2015) Landuse/landcover extraction from SPOT-6 imagery using an object based image analysis approach: a case study of Jakarta, Indonesia. In: Proceedings of the 36th Asian Conference on Remote Sensing 2015, Quezon City, Metro Manila, Philippines, 24–28 October 2015

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Remote Sensing Application Center, Indonesian National Institute of Aeronautics and Space (LAPAN), Jakarta, Indonesia
