Transformation in Scale for Continuous Zooming

This chapter summarizes the theories and methods in continuous zooming for Digital Earth. It introduces the basic concepts of and issues in continuous zooming and transformation in scale (or multiscale transformation). It presents the theories of transformation in scale, including the concepts of multiscale versus variable scale, transformation in the Euclidean space versus the geographical space, and the theoretical foundation for transformation in scale, the Natural Principle. It addresses models for transformations in scale, including space-primary hierarchical models, feature-primary hierarchical models, models of transformation in scale for irregular triangulation networks, and the models for geometric transformation of map data. It also discusses the mathematical solutions to transformations in scale (including upscaling and downscaling) for both raster (numerical and categorical data) and vector (point set data, line data set and area data) data. In addition, some concluding remarks are provided.

The cascade scene seen by the young child is a result of continuous zooming. Such zooming can be realized by continuously displaying a series of Earth images taken at a given position while continuously changing the focal length of the camera lens, or by continuously displaying images taken at different heights with a fixed camera focal length.
In theory, to make the display visually smooth, the difference between two successive images should be sufficiently small; thus, the number of images in such a series is very large, which demands huge data storage. Thus, it is a very difficult, if not impossible, problem.

Transformation in Scale: Foundation of Continuous Zooming
In practice, Earth images are acquired and stored at discrete scales (e.g., 1:500,000, 1:100,000, 1:10,000) or different resolutions (e.g., 100, 10, 1, 0.5 m), leading to the term multiscale representation. Figure 8.1 shows a series of satellite images covering Hong Kong Polytechnic University at six different scales, extracted from Google Maps. If such images at discrete scales are displayed in sequence, there will be a visual jump between two images. The obviousness of the visual jump is dependent on the magnitude of the scale difference. The smaller the difference between the two scales is, the less apparent the visual jump will be.
To minimize the effect of such visual jumps, some techniques are required to smooth the transformations from one scale to another scale to make the display appear like continuous zooming. This transformation in scale is the foundation of continuous zooming. Thus, transformation in scale, also called multiscale transformation, is the topic of this chapter.

Transformation in Scale: A Fundamental Issue in Disciplines Related to Digital Earth
Transformation in scale is one of the most important but unsolved issues in various disciplines related to Digital Earth, such as mapping, geography, geomorphology, oceanography, soil science, social sciences, hydrology, environmental sciences and urban studies. Typical examples are map generalization and the modifiable areal unit problem (MAUP). Although transformation in scale is a traditional topic, it remains a critical issue in the digital era. Transformation in scale has attracted attention from disciplines related to Digital Earth since the 1980s because a few important publications on the scale issue in that period awakened researchers in relevant areas. Openshaw (1984) revisited the MAUP. Abler (1987) reported that multiscale representation was identified as one of the initiatives of the National Center for Geographic Information and Analysis (NCGIA), and noted that zooming and overlay are the two most exciting functions in a geographical information system. Since then, the scale issue has been included in many research agendas (e.g., Rhind 1988; UCGIS 2006) and has become popular in the geo-information community.
The first paper on the scale issue in remote sensing was also published in 1987 (Woodcock and Strahler 1987). Later, in 1993, the issue of scaling from point to regional- or global-scale estimates of the surface energy fluxes attracted great attention at the Workshop on Thermal Remote Sensing held at La Londe les Maures, France, from September 20-24. Scale became a hot topic in remote sensing as well.
As a result, many papers on the scale issue have been published in academic journals and at conferences related to Digital Earth. Other papers have been published in the form of edited books, such as Scaling Up in Hydrology Using Remote Sensing edited by Stewart et al. (1996), Scale in Remote Sensing and GIS edited by Quattrochi and Goodchild (1997), Scale Dependence and Scale Invariance in Hydrology edited by Sposito (1998), Modelling Scale in Geographical Information Science edited by Tate and Atkinson (2001), Scale and Geographic Inquiry: Nature, Society and Method edited by Sheppard and McMaster (2004), Generalisation of Geographic Information: Cartographic Modelling and Applications edited by Mackaness et al. (2007), and Scale Issues in Remote Sensing edited by Weng (2014). Authored research monographs have also been published by researchers, e.g., Algorithmic Foundation of Multi-Scale Spatial Representation by Li (2007) and Integrating Scale in Remote Sensing and GIS by Zhang et al. (2017).

Theories of Transformation in Scale
Transformation in scale is the modeling of spatial data or spatial representations from one scale to another by employing mathematical models and/or algorithms developed based on certain scaling theories and/or principles. This section describes such scaling theories and/or principles.

Transformation in Scale: Multiscale Versus Variable Scale
To facilitate zooming, not necessarily continuous, a common practice of service providers such as Google Maps, Virtual Earth and Tianditu is to organize maps and images into nearly 20 levels (scales or resolutions), from the global level to the street level. Figure 8.2 shows a series of maps covering Hong Kong Polytechnic University at six different scales (extracted from Google Maps). This follows the tradition of organizing maps by national map agencies. For example, the United States Geological Survey (USGS) produces topographic maps at scales of 1:500,000, 1:250,000, 1:100,000, 1:50,000 and 1:24,000; the Chinese State Bureau of Surveying and Mapping produces maps at scales of 1:4,000,000, 1:1,000,000, 1:250,000, 1:50,000 and 1:10,000; the Ordnance Survey of the UK produces maps at scales of 1:50,000, 1:25,000 and 1:10,000; and the German federal states produce maps at 1:1,000,000, 1:250,000, 1:100,000, 1:50,000, 1:25,000 and 1:10,000 scales. These maps at different scales contain information at different levels of detail, and thus are suitable for different applications. Such a scale is also called the cartographic ratio. Similarly, image data and digital elevation models (DEMs) are also produced and stored at discrete scales. In these two cases, the scale is normally indicated by resolution. This kind of representation is called multiscale representation. In such cases, the cartographic ratio is uniform across a map and/or an image. Thus, such representations have multiple cartographic ratios. The cartographic ratio may vary across a representation (e.g., an oblique view), leading to the term variable scale representation; the resolution may also vary across a representation, leading to the term variable resolution representation. As a result, the term multiscale might mean different things to different people, i.e., multi cartographic ratio, variable cartographic ratio, multi resolution and variable resolution.
This leads to nine different kinds of transformations in scale, as shown in Fig. 8.3.

Transformations in Scale: Euclidean Versus Geographical Space
In Euclidean space, an increase in scale will commonly cause an increase in length, area and volume; and a decrease in scale will cause a decrease in length, area and volume, accordingly. Figure 8.4 shows an example of scale reduction and increase in a 2D Euclidean space (Li 2007). In such a transformation in scale, the absolute complexity of a feature or features remains unchanged. That is, the transformations are reversible.
However, the geographical space is fractal. If one measures a coastline using different measurement units, different lengths will be obtained: the smaller the measurement unit is, the longer the length obtained. Similarly, different length values will be obtained when measuring a coastline represented on maps at different scales using identical measurement units at map scale. That is, the transformation in scale in fractal geographical space is quite different from that in Euclidean space.
For a given area on a terrain surface, the size of the graphic representation (or map space) on a smaller scale map is reduced compared with that on larger scale maps. The complexity of the graphics on a smaller scale map remains comparable with that on larger scale maps; however, the absolute complexity is reduced. As a result, if the graphics on a smaller scale map are enlarged back to their size on the larger scale map, the level of complexity of the enlarged representation will appear to be reduced. Figure 8.5 illustrates such a case. In a fractal geographical space, the level of complexity cannot be recovered by an increase in scale. In other words, the transformations in scale in such a geographical space are not reversible.
The transformation in scale is also termed scaling. The process of making the resolution coarser (or making the map scale smaller) is called upscaling. In contrast, the transformation process to make the resolution finer (or map scale larger) is called downscaling.

Theoretical Foundation for Transformation in Scale: The Natural Principle
One question that arises is "does such a transformation follow any principle or law?" The answer is "yes". Li and Openshaw (1993) formulated the Natural Principle for such a transformation in scale in fractal geographical space, using the terrain surface viewed from different heights as an example to illustrate the principle, as follows:
• When one views the terrain surface from the Moon, all terrain variations disappear, and one can only see a blue ball;
• When one views the terrain surface from a satellite, the terrain surface becomes visible, but it looks very smooth;
• When one views the terrain surface from an airplane, the main characteristics of the terrain variations become very clear, but small details do not appear; and
• When one views the terrain surface from a position on the ground, the main characteristics of the terrain variations are lost, and one sees small details.
When the viewpoint is higher, the ground area corresponding to the resolution of the human eye becomes larger, but all detailed variations within this ground area can no longer be seen, and thus the terrain surface appears more abstract. These examples underline a universal principle, termed the Natural Principle by Li and Openshaw (1993). It can be stated as follows: for a given scale of interest, all details about the spatial variations of geographical objects (features) beyond a certain limitation cannot be presented and can thus be neglected.
It follows that a simple corollary to this principle can be used as a basis for transformations in scale. The corollary can be stated as follows (Li and Openshaw 1993): by using a criterion similar to the limitation of the resolution of human eyes, and neglecting all the information about the spatial variations of spatial objects (features) beyond this limitation, zooming (or generalization) effects can be achieved. Li and Openshaw (1992) also termed such a limitation the smallest visible object (SVO); it is called the smallest visible size (SVS) in other literature (Li 2007). Figure 8.6 illustrates the idea of this corollary, that is, that all spatial variations within the SVS can be neglected, no matter how big they are on the ground. Figure 8.7 illustrates a working example of applying the Natural Principle to a terrain surface. Figure 8.7a shows the views of a terrain surface at two different heights based on the Natural Principle, resulting in two quite different representations in terms of complexity. Figure 8.7b, c show the results viewed at levels L_A and L_B, respectively. In these two figures, the zooming (or generalization) effects are very clear.
To apply the Natural Principle, the critical element to be considered is the value of this "certain limitation", or SVS, beyond which all spatial variations (no matter how complicated) can be neglected. Li and Openshaw (1992, 1993) suggested the following formula:

K = S_T × k × (1 − S_S / S_T)  (8.1)

where S_T and S_S are the scale factors of the target and source data, respectively; k is the SVS value in terms of map distance at the target scale; and K is the SVS value in terms of ground distance at the target scale. Through intensive experimental testing, Li and Openshaw (1992) recommended a k value between 0.5 and 0.7 mm, i.e.,

k ∈ [0.5 mm, 0.7 mm]  (8.2)
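As a minimal numeric sketch, the ground SVS can be computed from the source and target scale factors, assuming the Li-Openshaw formulation K = S_T × k × (1 − S_S/S_T); the function name and example scales below are illustrative.

```python
def svs_ground_size(s_target, s_source, k_map=0.0005):
    """Ground SVS (in metres) when transforming from source to target scale.

    s_target, s_source: scale-factor denominators (e.g. 100000 for 1:100,000);
    k_map: SVS as a map distance at the target scale, in metres
    (Li and Openshaw recommend 0.5-0.7 mm, i.e. 0.0005-0.0007 m).
    """
    return s_target * k_map * (1 - s_source / s_target)

# Zooming out from 1:10,000 to 1:100,000 with k = 0.5 mm:
print(svs_ground_size(100000, 10000))  # 45.0 metres on the ground
```

That is, when deriving a 1:100,000 representation from 1:10,000 data, all spatial variations within roughly 45 m on the ground may be neglected.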

Models for Transformations in Scale
To realize a transformation in scale, some transformation models must be adopted and algorithms and/or mathematical functions for these models are applied. The former is the topic of this section and the latter are described in Sect. 8.4.

Data Models for Feature Representation: Space-Primary Versus Feature-Primary
To record features in geographical space, two different viewpoints can be taken: feature-primary and space-primary (Lee et al. 2000).
In a feature-primary view, the geographical space is considered as being tessellated by features and the locations of these features are then determined. This kind of model is also called feature-based. In such a model, features are represented by vectors, leading to the popular term vector data model. Figure 8.8a-c show the representation of points, a line and an area using a vector model.
In a space-primary view, the geographical space is considered as being tessellated by space cells. In such a tessellation (partitioning), square raster cells are popularly employed, leading to the popular term raster data model. In each raster cell, there could be a feature or there might be no features. A point is represented by a pixel (picture element); a line is represented by a string of connected pixels and an area is formed by a set of connected pixels, as shown in Fig. 8.8d-f. The cells can be in any form, regular or irregular. Irregular triangular networks are another popular tessellation.
On a spherical surface, longitude/latitude is the coordinate system for feature-primary representation. Cells with an equal interval in latitude/longitude (e.g., 6° × 6°) are the raster equivalent of spherical tessellation (Fig. 8.9a). However, the actual area of such a cell varies with latitude. To overcome this problem, the quaternary triangular mesh (QTM) (Fig. 8.9b) has been used (e.g., Dutton 1984, 1996). The cells can be any shape (e.g., triangle, hexagon), regular or irregular. Figure 8.9c shows the use of regular hexagons for such a tessellation. For 3D space, the voxel (volume element) is the raster equivalent for space tessellation (Fig. 8.9d).
As the natures of the raster and vector data models are quite different, the models for transformation in scale for these two data models might also differ. Thus, separate subsections are devoted to these topics.

Space-Primary Hierarchical Models for Transformation in Scale
Hierarchical models are popular for the multiscale representation of spatial data at discrete scales. For example, Google Maps, Virtual Earth and Tianditu have all adopted hierarchical models for the representation of images and maps. Figure 8.10 shows the first three zoom levels of the hierarchical model used by Google Maps (Stefanakis 2017). This model has a special name, the pyramid model, which is the result of aggregating 2 × 2 pixels into one pixel. The number of pixels (squares) at the nth level is 4^(n−1). A more general form of aggregation is to transform any N × N pixels into one pixel. A still more general form of transformation to create a hierarchical representation is to transform N × N pixels into M × M pixels, e.g., 5 × 5 into 2 × 2 or 3 × 3 into 2 × 2. In such cases, a resampling process (instead of simple aggregation) is required.
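One "2 × 2 into 1" averaging step of such a pyramid can be sketched as follows; this is an illustrative pure-Python helper (assuming a square grid with an even side length), not code from the cited services.

```python
def pyramid_level_up(grid):
    """Aggregate a 2N x 2N raster into N x N by averaging each 2 x 2 block."""
    n = len(grid) // 2
    return [[(grid[2*i][2*j] + grid[2*i][2*j+1] +
              grid[2*i+1][2*j] + grid[2*i+1][2*j+1]) / 4.0
             for j in range(n)] for i in range(n)]

level0 = [[1, 1, 5, 5],
          [1, 1, 5, 5],
          [2, 2, 8, 8],
          [2, 2, 8, 8]]
level1 = pyramid_level_up(level0)  # [[1.0, 5.0], [2.0, 8.0]]
level2 = pyramid_level_up(level1)  # [[4.0]] -> the mean of the four block means
```

Applying the step repeatedly yields the successive pyramid levels, each with a quarter of the pixels of the previous one.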
With a hierarchical model, the resolution and cartographic ratio at each level are not necessarily uniform. Typical examples of hierarchical models with variable resolutions are shown in Fig. 8.11, i.e., the quadtree and binary tree models.
With the pyramid and quadtree models, the hierarchical levels are fixed and the transformation in scale jumps from one level to another like stairs. To make the transformation absolutely smooth, we need to make the difference between two steps of the stairs infinitely small, to make the stairs become a continuous linear slope (see Fig. 8.12).
For hierarchical representation on a spherical surface, the Open Geospatial Consortium (OGC) approved a new standard called the Discrete Global Grid System (DGGS) (OGC 2019). The hierarchical representation of QTM shown in Fig. 8.9b is an example of such a DGGS.

Feature-Primary Hierarchical Models for Transformation in Scale
Hierarchical models have also been used to represent point, line and area features in feature-primary models. Figure 8.13 shows such a representation for the points on a line. At level 1, only two points, i.e., points (1, 1) and (1, 2), are used to represent the line; at level 2, point 2 is used in addition to the two points at level 1; and at level 3, points (3, 1) and (3, 2) are also used. This kind of model has been employed for the progressive transmission of vector data.

Figure 8.14 shows the hierarchical representation of a river network by the Horton and Shreve models (Li 2007). Figure 8.14a is a hierarchical representation based on river segments. The formation of such a representation starts from the level 1 branches: a segment of level 2 is formed by two or more segments of level 1; similarly, a segment of level 3 is formed by two or more segments of level 2; and all higher level segments are formed by following this principle. Figure 8.14b is a hierarchical representation formed by the Horton model based on river strokes, where a stroke is a concatenation of segments. Figure 8.14c is a hierarchical representation formed by the Shreve model. The numbering in this hierarchy is formed by adding the numbers of the upstream branches. For example, the ranking value of the segment with the highest ranking is 13, which is the result of adding 9 and 4. Such a numbering of rankings is not continuous.

Figure 8.15 shows the hierarchical representation of two transportation networks. In this case, the importance of each road is evaluated based on geometric and/or thematic information, and a ranking value is assigned to each road. Figure 8.16 shows a hierarchical representation of area features. The area features in the whole area are first connected by a minimum spanning tree (MST) as a whole group, i.e., Group A.
Group A is then subdivided into subgroups B and C by breaking the tree at the connection with the largest span. Similarly, Group B is broken into D and E, and Group C is broken into F and G. The subdivision goes on until a criterion is met or until the complete hierarchy is constructed. In the end, a hierarchical representation is formed.
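The Shreve numbering described above (each segment's rank is the sum of the ranks of its upstream branches, with sources counted as 1) can be sketched in a few lines; the network, segment names and function name below are illustrative, not from the source.

```python
def shreve_magnitude(upstream):
    """Shreve magnitude of every river segment.

    upstream maps a segment id to the list of segments that flow into it;
    a source (level 1) segment maps to an empty list.  The magnitude of a
    segment is the sum of the magnitudes of its upstream segments (1 for
    a source), so magnitudes accumulate downstream.
    """
    memo = {}
    def mag(s):
        if s not in memo:
            ups = upstream[s]
            memo[s] = 1 if not ups else sum(mag(u) for u in ups)
        return memo[s]
    return {s: mag(s) for s in upstream}

# A small hypothetical network: a, b, c and d are source segments.
net = {"a": [], "b": [], "c": [], "d": [],
       "ab": ["a", "b"], "cd": ["c", "d"],
       "outlet": ["ab", "cd"]}
print(shreve_magnitude(net)["outlet"])  # 4
```

As in Fig. 8.14c, the resulting ranking values are not continuous: only values reachable as sums of upstream magnitudes occur.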

Models of Transformation in Scale for Irregular Triangulation Networks
An irregular triangulation network is an irregular space tessellation that has been widely used for digital terrain models (DTMs). In such a representation, the resolution varies across the space. Therefore, special models are needed to make the resolution transformable from one level to another. Four basic transformation models have been developed for this purpose (Li 2005):
• Vertex removal: a vertex in the triangular network is removed and new triangles are formed.

Models for Geometric Transformation of Map Data in Scale
The hierarchical model described in Sect. 8.3.2 is suitable for representing raster image data because images are numerical data that naturally record the Earth, and such a recording follows the Natural Principle described in Sect. 8.2.3. Figure 8.18 shows four images at different resolutions, the result of successive "2 × 2 into 1 × 1" aggregations (Li 2005); these images appear very natural. However, for the categorical data of topographic maps, such a simple transformation does not work well, and other transformation models are needed. Topographic maps are produced via a complicated intellectual process that consists of abstraction, symbolization, generalization, selective omission and simplification. During this process, small details are ignored (or grouped together). All features are represented by symbols (geometric or pictorial). The colors of the symbols are not necessarily the natural colors of the features, and the graphic symbols are annotated with text (e.g., the name of a street/town/city). There are requirements for minimum size, minimum separation and minimum differentiation of graphic elements. Thus, when a map at a larger scale (Fig. 8.19c) is simply reduced by 4 times (equivalent to a "2 × 2 into 1" aggregation), the graphics (Fig. 8.19b) become unclear because the minimum requirements can no longer be met. Figure 8.20 illustrates such a situation with the aggregation of buildings as an example. A set of special models is needed for the transformation of map data from one scale to another to make the graphics at the smaller scale clear (Fig. 8.19a).
The transformation of maps from a larger scale to a smaller scale is called map generalization and has long been studied in the cartographic community. Some transformation models have been identified by researchers. In the traditional textbook by Robinson et al. (1984), only four models are listed, i.e., classification, induction, simplification and symbolization. In the 1980s, more models were identified, and a list of 12 models was produced by McMaster and Shea (1992). Many of these models were still too general to be precisely implemented in a computer system.

Typical models for the geometric transformation of map features from a source map at scale 1:S include:
• Elimination: a feature too small to represent is removed.
• Magnification: a feature is enlarged due to its importance.
• Merging: two or more close features are combined together.
• Displacement: one feature is moved away from others, or both are moved away from each other.
• Split: an area is split into two because the connection between them is too narrow.
• Dissolving: a small area is split into pieces, and these pieces are merged into adjacent areas.
• Typification: the typical pattern is retained, e.g., for a group of areas aligned in rows and columns.

Models for Transformation in Scale of 3D City Representations
For the 3D representation of digital cities, CityGML, which was officially adopted by the OGC in 2008, specifies five well-defined consecutive levels of detail (LOD), an example of which is shown in Fig. 8.21 (Kolbe et al. 2008):
• LOD 0: regional, landscape
• LOD 1: city, region
• LOD 2: city districts, projects
• LOD 3: architectural models (outside), landmarks
• LOD 4: architectural models (interior)
For the transformation in scale of 3D features, a set of models is listed in Table 8.7, which summarizes the models proposed in the literature.

Mathematical Solutions for Transformations in Scale
In the previous section, several sets of models for the transformation in scale were described. These models express what is achieved in such transformations, e.g., the shape is simplified, important points are retained, and/or the main structure is preserved.
To make these transformations work, mathematical solutions (e.g., algorithms and mathematical functions) must be developed for each of these transformations. A selection of these solutions is presented in this section.

Mathematical Solutions for Upscaling Raster Data: Numerical and Categorical
For raster-based numerical data such as images and digital terrain models (DTMs), aggregation is widely used to generate hierarchical models. In recent years, the wavelet transform (e.g., Mallat 1989), the Laplacian transform (Burt and Adelson 1983) and other more advanced mathematical solutions have also been employed. The commonly used aggregation methods are by mode, by median, by average, and by Nth cell (i.e., every Nth cell in both the row and column directions). Figure 8.22 shows a "3 × 3 to 1 × 1" aggregation with these four methods; the 6 × 6 grid is aggregated into a 2 × 2 grid. If the new cell interval is not a multiple of the original cell interval, then interpolation must be applied to resample the data. Bilinear and weighted averaging interpolations are widely used for resampling. Figure 8.23 shows the resampling of a 3 × 3 grid into a 2 × 2 grid using weighted averaging interpolation.
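The four aggregation methods can be sketched as follows. The helper below is illustrative (pure Python); for the "Nth cell" variant it takes the centre cell of each block, which is one possible choice of N, not the source's exact rule.

```python
from statistics import mean, median, mode

def aggregate(grid, n, method):
    """Aggregate an (n*m) x (n*m) raster into m x m, one value per n x n block.

    method: 'average', 'median', 'mode' or 'nth' (here: centre cell of the block).
    """
    m = len(grid) // n
    out = []
    for i in range(m):
        row = []
        for j in range(m):
            block = [grid[i*n + di][j*n + dj]
                     for di in range(n) for dj in range(n)]
            if method == 'average':
                row.append(mean(block))
            elif method == 'median':
                row.append(median(block))
            elif method == 'mode':
                row.append(mode(block))
            else:                          # 'nth': centre cell of the block
                row.append(block[len(block) // 2])
        out.append(row)
    return out

g = [[1, 1, 2, 7, 7, 7],
     [1, 2, 2, 7, 9, 7],
     [2, 2, 3, 7, 7, 8],
     [4, 4, 4, 5, 5, 6],
     [4, 0, 4, 5, 6, 6],
     [4, 4, 4, 6, 6, 6]]
print(aggregate(g, 3, 'mode'))  # [[2, 7], [4, 6]]
```

With n = 3, the 6 × 6 grid above is aggregated into a 2 × 2 grid, one output cell per 3 × 3 block, mirroring Fig. 8.22.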
Bilinear interpolation can be performed with any four points (not along a line). The mathematical function is as follows:

z = a_0 + a_1 x + a_2 y + a_3 xy  (8.3)

where a_0, a_1, a_2, a_3 is the set of four coefficients, which are determined by four equations formed by making use of the coordinates of four reference points, i.e., the centers of the four grid cells in Fig. 8.23b: P_1(x_1, y_1, z_1), P_2(x_2, y_2, z_2), P_3(x_3, y_3, z_3) and P_4(x_4, y_4, z_4). The mathematical formula is as follows:

z_i = a_0 + a_1 x_i + a_2 y_i + a_3 x_i y_i,  i = 1, 2, 3, 4  (8.4)

Once the coefficients a_0, a_1, a_2, a_3 are computed, the height z_P of any point P with a given set of coordinates (x_P, y_P) can be obtained by substituting (x_P, y_P) into Eq. (8.3).
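For the common case where the four reference points sit at the corners of an axis-aligned rectangle (as the four old cell centres do), the surface z = a_0 + a_1 x + a_2 y + a_3 xy can be evaluated without explicitly solving for the coefficients, by interpolating along x on both edges and then along y; the two forms are algebraically equivalent. A small illustrative sketch (point layout and function name are assumptions):

```python
def bilinear(x, y, p1, p2, p3, p4):
    """Bilinear interpolation from four reference points p_i = (x_i, y_i, z_i)
    at the corners of an axis-aligned rectangle: p1 lower-left, p2 lower-right,
    p3 upper-left, p4 upper-right."""
    (x1, y1, z1), (x2, _, z2), (_, y3, z3), (_, _, z4) = p1, p2, p3, p4
    tx = (x - x1) / (x2 - x1)      # relative position in x, 0..1
    ty = (y - y1) / (y3 - y1)      # relative position in y, 0..1
    lower = z1 + tx * (z2 - z1)    # interpolate along x on the lower edge,
    upper = z3 + tx * (z4 - z3)    # then on the upper edge,
    return lower + ty * (upper - lower)   # and finally along y

# Heights at the corners of a unit cell; interpolate the cell centre:
print(bilinear(0.5, 0.5, (0, 0, 10.0), (1, 0, 12.0), (0, 1, 14.0), (1, 1, 16.0)))
# 13.0
```

For arbitrary (non-rectangular) reference points, the four coefficients would instead be obtained by solving the 4 × 4 linear system directly.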
The mathematical expression of weighted averaging interpolation is as follows:

z_P = (Σ w_i z_i) / (Σ w_i),  i = 1, ..., n  (8.5)

where w_i is the weight of the ith reference point; z_i is the height of the ith reference point; and n is the total number of reference points used. In the case of Fig. 8.23b, n = 4.

Weights may be determined by using different functions. The simplest weighting function assigns an equal weight to all reference points. However, this seems unfair to those reference points that are closer to the interpolation point, as such points should have a higher influence on the estimate. As a result, distance-based or area-based weighting is more commonly used. The inverse of distance is most popularly used:

w = 1 / d  (8.6)

where d is the distance from a reference point to the interpolation point. In the case of interpolating the height of P in Fig. 8.23b, the four distances from the four (old) cell centers to point P are used. Figure 8.23b also shows that the distance of each cell center to the interpolation point P is directly related to the size of the area contributed by each (old) cell to the new cell. If the area size is denoted as A, the weighting function is

w = A  (8.7)

For example, if the area of the new cell is composed of 100% of the upper left cell, 50% of the upper right cell, 50% of the lower left cell and 25% of the lower right cell, the weights of these four cells are 1.0, 0.5, 0.5 and 0.25, and the result of the interpolation is

z_P = (1.0 z_1 + 0.5 z_2 + 0.5 z_3 + 0.25 z_4) / (1.0 + 0.5 + 0.5 + 0.25)

For raster-based categorical data, the average and median are no longer applicable. The mode (also called the majority in some literature) is still valid and widely used. Figure 8.24b shows such a result. However, the value for the upper right cell is difficult to determine, as there is no mode (majority) in the 3 × 3 window at the upper right corner of the original data (Fig. 8.24a). Notably, some priority rules or orders are in practical use. For example, a river feature is usually given priority because thin rivers are likely to be broken after aggregation.
Figure 8.25 shows the improvement in the connectivity of river pixels with water as the priority. Figure 8.24c-e show the results with different options, e.g., random selection and central pixel. It is also possible to consider the statistical distribution of the original data (e.g., A = 8, T = 10, W = 6, S = 11) to try to maintain the distribution as much as possible.
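One simple way to implement such a priority rule is to let a priority class (e.g., water) win a block whenever it occurs in the block at all, falling back to the plain mode otherwise. This winner-if-present rule, the class codes and the function name below are illustrative assumptions, not the source's exact rule.

```python
from collections import Counter

def aggregate_with_priority(grid, n, priority=("W",)):
    """Mode aggregation of n x n categorical blocks, with priority classes.

    A class listed in priority wins a block as soon as it occurs in it;
    otherwise the block's mode (majority class) is used.
    """
    m = len(grid) // n
    out = []
    for i in range(m):
        row = []
        for j in range(m):
            block = [grid[i*n + di][j*n + dj]
                     for di in range(n) for dj in range(n)]
            chosen = next((c for c in priority if c in block), None)
            if chosen is None:
                chosen = Counter(block).most_common(1)[0][0]
            row.append(chosen)
        out.append(row)
    return out

# 'W' = water, 'A' = arable; the thin river survives because water has priority.
g = [["A", "W", "A", "A"],
     ["A", "W", "A", "A"],
     ["A", "A", "W", "A"],
     ["A", "A", "A", "W"]]
print(aggregate_with_priority(g, 2))  # [['W', 'A'], ['A', 'W']]
```

Without the priority rule, the plain mode would assign 'A' everywhere and the river would vanish, which is exactly the broken-river effect the text describes.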
In the aggregation/resampling process, as illustrated in Figs. 8.22, 8.23 and 8.24, a moving window is used, but the question of the most appropriate window size has rarely been addressed. Li and Li (1999) suggested that the size of the moving window for aggregation/resampling should be computed based on the resolutions (scales) of the input and output, following the Natural Principle (Li and Openshaw 1993) described in Sect. 8.2.3. Mathematically,

W = K / R_in  (8.8)

where R_in is the resolution (scale) of the input data; K is the SVS value in terms of ground distance at the target scale, computed by Eq. (8.1); and W is the size of the window's side in terms of the number of pixels (of the input data).
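A minimal sketch of this rule, assuming the window side W is simply the ground SVS K divided by the input resolution R_in (the function name and example values are illustrative):

```python
def window_size(k_ground, r_in):
    """Side of the aggregation/resampling window, in input pixels: W = K / R_in.

    k_ground: SVS as a ground distance at the target scale (from Eq. 8.1);
    r_in: resolution of the input raster, in the same ground units.
    Rounding W to a whole (often odd) number of pixels is left to the caller.
    """
    return k_ground / r_in

# A 45 m ground SVS over 15 m input pixels calls for a 3 x 3 window:
print(window_size(45.0, 15.0))  # 3.0
```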

Mathematical Solutions for Downscaling Raster Data
Downscaling produces raster data with a finer spatial resolution than that of the input data through prediction. It is possible to use simple resampling (as described in Sect. 8.4.1) to achieve downscaling. However, methods based on spatial statistical analysis are more theoretically grounded and have become popular (Atkinson 2008, 2013), particularly area-to-point prediction (ATPP). Double dictionary learning has also been used (Xu and Huang 2014). Area-to-point Kriging (ATP Kriging or ATPK) (Kyriakidis 2004) is the typical method. ATP Kriging can ensure the coherence of predictions, such as by ensuring that the sum of the downscaled predictions within any given area is equal to the original aggregated count. Some variants of ATP Kriging have also been developed, e.g., ATP Poisson Kriging (Goovaerts 2008, 2009, 2010), indicator cokriging (Boucher and Kyriakidis 2006) and ATP regression Kriging (Wang et al. 2015). In this section, the base version of ATP Kriging is described.
The basic principle behind Kriging is weighted averaging; the weights are optimized by using the semivariogram computed from the original data. The Kriging estimate is

Z_e,p = w_1 Z_1 + w_2 Z_2 + ... + w_n Z_n  (8.9)

where Z_e,p is the estimated (interpolated) value; Z_i is the value of the ith reference point; w_i is the weight of the ith reference point; and Σ w_i = 1. The interpolated value Z_e,p is very likely to deviate from the actual value at point p, Z_a,p. The difference is called the estimation error. The variance of these deviations is expressed by Eq. (8.10):

σ² = E[(Z_e,p − Z_a,p)²]  (8.10)
The basic principle of Kriging is to produce the minimum estimation variance by choosing a set of optimal weights. Such weights are obtained by solving a set of simultaneous equations:

w_1 γ(d_11) + w_2 γ(d_12) + ... + w_n γ(d_1n) + λ = γ(d_1p)
...
w_1 γ(d_n1) + w_2 γ(d_n2) + ... + w_n γ(d_nn) + λ = γ(d_np)
w_1 + w_2 + ... + w_n = 1  (8.11)

where w_i is the weight of the ith reference point; λ is the Lagrange multiplier; and γ(d) is the semivariogram value of points a distance d apart, which can be expressed as follows:

γ(d) = (1 / (2 N(d))) Σ [z(x_i) − z(x_i + d)]²  (8.12)

where N(d) is the number of point pairs separated by distance d. In ATP Kriging, the interpolation finds an estimate for a point at the higher resolution. In such a case, a cell point at the coarser resolution corresponds to an area at the higher resolution. Therefore, the set of simultaneous equations becomes:

w_1 γ(d_11) + w_2 γ(d_12) + ... + w_m γ(d_1m) + λ = γ(d_1A)
...
w_1 γ(d_m1) + w_2 γ(d_m2) + ... + w_m γ(d_mm) + λ = γ(d_mA)
w_1 + w_2 + ... + w_m = 1  (8.13)

where γ(d_iA) is the point-to-block semivariogram value from the ith point to area A. It is the same as the average of the point-to-point semivariogram values between the ith point and the points within A.
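The semivariogram that drives these weights can be estimated empirically from the data. A minimal 1-D sketch for a regularly spaced transect (the transect values and function name are illustrative):

```python
def semivariogram(z, lag):
    """Empirical semivariogram of a regularly spaced 1-D transect at an
    integer lag h: gamma(h) = sum((z[i] - z[i+h])**2) / (2 * N(h)),
    where N(h) is the number of pairs separated by h."""
    pairs = [(z[i] - z[i + lag]) ** 2 for i in range(len(z) - lag)]
    return sum(pairs) / (2 * len(pairs))

z = [2.0, 4.0, 3.0, 5.0, 6.0, 4.0]
print(semivariogram(z, 1))  # 1.4
print(semivariogram(z, 2))  # 1.5
```

Evaluating gamma at several lags and fitting a model (e.g., spherical or exponential) to the resulting points yields the γ(d) values used in the Kriging systems above.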

Mathematical Solutions for Transformation (in Scale) of Point Set Data
As discussed in Sect. 8.3.5, a number of transformations are possible, such as regionalization, aggregation, selective omission, structural simplification, and typification. In both aggregation and regionalization, clustering plays a central role: in aggregation, a cluster is represented by a point; in regionalization, a cluster is represented by an area. Thus, clustering is discussed here. Clustering is one of the most primitive activities of human beings (Anderberg 1973; Xu and Wunsch 2005), and the clustering of spatial points is one of the main tasks in Digital Earth, such as in spatial data mining and exploratory spatial analysis (Estivill-Castro and Lee 2002; Miller and Han 2009; Openshaw et al. 1987). Numerous clustering methods are available. The classic algorithms are the K-means algorithms, and the ISODATA algorithm is an important extension of K-means (Ball and Hall 1967). Classification by K-means is achieved by minimizing the sum of the squared errors over all K clusters (i.e., the objective function):

J = Σ_{k=1..K} Σ_{x ∈ C_k} ||x − C̄_k||²  (8.14)

where C̄_k is the mean of cluster C_k. The procedure of this algorithm is as follows: (1) arbitrarily select K points from the data set (X) as initial cluster centroids; (2) assign each point in X to the cluster whose centroid is closest to the point; (3) compute the new centroid of each cluster; and (4) repeat Steps (2) and (3) until no change can be made.
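Steps (1)-(4) of the procedure can be sketched as follows; this is a plain illustrative implementation for 2-D points, not code from the cited literature.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain K-means on 2-D points, following steps (1)-(4) in the text."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)                 # (1) initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                              # (2) assign to nearest
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        new = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
               if c else centroids[i]
               for i, c in enumerate(clusters)]       # (3) recompute means
        if new == centroids:                          # (4) stop when stable
            break
        centroids = new
    return centroids, clusters

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(pts, 2)
print(sorted(centroids))
```

On this well-separated toy data set the algorithm converges to the two natural cluster means; on real spatial data the result depends on K and the initialization, which is exactly the parameter-sensitivity problem discussed next.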
However, Li et al. (2017) noted that (a) all clustering algorithms discover clusters in a geographical dataset even if the dataset has no natural cluster structure and (b) quite different results are obtained with different sets of parameters for the same algorithm. These two problems make the implications of clustering results difficult to interpret. Consequently, Li et al. (2017) proposed a scale-driven clustering theory, in which scale is modeled as a parameter of the clustering model, the scale dependency in spatial clustering is handled by constructing a hypothesis test, and multiscale significant clusters can be discovered by controlling the scale parameters in an objective manner. The basic model can be written as

$$C = f(A, D)$$

where C is the clustering result; f is the clustering model; A is the analysis scale (the size of clusters or the degree of homogeneity within clusters); and D is the data scale (e.g., resolution and extent).
The clustering consists of two major tasks, i.e., estimation of the density for each point and detection of dense regions. The procedure is as follows:
(1) Control the data scale: determine the SVS (smallest visible size) from the input and output data scales following the Natural Principle, and ignore all points within an SVS in the calculation of point density.
(2) Identify high-density points: the probability density function (PDF) of the dataset is estimated with adaptive analysis scales; the PDF is statistically tested against a null distribution, and points with significantly higher density are identified.
(3) Group the high-density points into clusters: clusters with different densities are formed by adaptively breaking the long edges in the triangulation of the high-density points. The significance of clusters obtained at multiple scales can be statistically evaluated.

Mathematical Solution for Transformation (in Scale) of Individual Lines
As discussed in Sect. 8.3.5, there are eight different types of transformation for individual lines, and the algorithms/mathematical solutions for the transformation models are discussed in detail by Li (2007). In this section, two classic algorithms are described in detail, i.e., the Douglas-Peucker algorithm (Douglas and Peucker 1973) and the Li-Openshaw algorithm (Li and Openshaw 1992). Figure 8.13 presents a hierarchical representation of the points on a line, with the points ordered by the Douglas-Peucker algorithm. The working principle of this algorithm is illustrated in Fig. 8.27. A curve is given as an ordered set of points, and a distance tolerance ε (> 0) is set. The basic idea is to represent the curve by the straight line connecting its first and last points if the deviations of all intermediate points from that straight line are smaller than ε. In this case, only the two end points are selected; all middle points are regarded as insignificant and can be removed.
The algorithm first selects the two end points (i.e., the first and last points). It then searches for the point with the largest deviation from the straight-line segment connecting these two end points, i.e., point 2 in Fig. 8.27. If this deviation is larger than ε, the point is selected; otherwise, all intermediate points are ignored. In this example, point 2 is selected, and it splits the line into two pieces. The search is then carried out on both pieces, and points (3, 1) and (3, 2) are selected. These two points split the whole line into four pieces, and the search is carried out on these four pieces. The process continues until all deviations are smaller than ε.

Visvalingam and Whyatt (1993) and Li (2007) noted that the Douglas-Peucker algorithm may cause large shape distortions. To overcome this problem, Visvalingam and Whyatt (1993) argued that the size of an area "sets a perceptual limit on the significance" and is the most reliable metric for measuring the importance of points, since it simultaneously considers the distance between points and angular measures. They used the effective area of a point as the threshold, as illustrated in Fig. 8.28; for example, the effective area of point 2 is the area of the triangle formed by point 2 and its two neighboring points.

Many researchers (Li and Openshaw 1992; Visvalingam and Whyatt 1993; Weibel 1996) have noted that the Douglas-Peucker algorithm can create self-intersections (with the line itself) and cross-intersections (with neighboring lines). This problem is associated with all algorithms whose objective is point reduction or curve approximation. Li and Openshaw (1992) argued that such algorithms are not suitable for generalization (i.e., transformation in scale) because they are normally evaluated against the original curve as the benchmark, which does not correspond to the curve at other scales.
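The recursive procedure described above can be sketched as follows; the sample line and tolerance are illustrative.

```python
def douglas_peucker(line, eps):
    """Douglas-Peucker sketch: keep a point only if its deviation from the
    chord joining the current end points exceeds the tolerance eps."""
    def dev(p, a, b):
        # Perpendicular distance from p to the chord a-b (chord length > 0).
        (ax, ay), (bx, by), (px, py) = a, b, p
        num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
        return num / ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
    if len(line) < 3:
        return list(line)
    # Find the intermediate point with the largest deviation from the chord.
    i, d = max(((k, dev(line[k], line[0], line[-1]))
                for k in range(1, len(line) - 1)), key=lambda t: t[1])
    if d <= eps:
        return [line[0], line[-1]]           # all middle points insignificant
    left = douglas_peucker(line[:i + 1], eps)   # split at the farthest point
    right = douglas_peucker(line[i:], eps)
    return left[:-1] + right                 # avoid duplicating the split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(line, 1.0)
```

The worst-case cost is quadratic in the number of points, which is acceptable for the line sizes typical in map generalization.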
To perform transformation in scale for line features, the Li-Openshaw algorithm should be employed as this algorithm, "by virtue of its raster structure, implicitly (but not explicitly) avoids self-overlaps" (Weibel 1996). Even for a very complex coastline, it can produce results that are extremely similar to those manually generalized to various scales, as illustrated by Fig. 8.29. Many recent evaluations also indicate that the Li-Openshaw algorithm produces reasonable and genuine results (e.g., Zhu et al. 2007).
The Li-Openshaw algorithm follows the Natural Principle (Li and Openshaw 1993) described in Sect. 8.2.3, i.e., it neglects all spatial variations within the SVS, which is computed from the input and output scales. The SVS is mimicked by a cell or pixel (Li 2007), although other geometric elements are also possible (e.g., hexagons, by Raposo in 2013). The cells can be organized as a non-overlapping tessellation or with overlaps. If there is no overlap, it becomes a pure raster template. Figure 8.30 shows the generalization (transformation) process with a raster template. In this example, each SVS is represented by a raster pixel and the result is represented by pixels, as shown in Fig. 8.30b, or by their geometric centers. Three algorithms were developed by Li and Openshaw (1993) in different modes: raster mode, vector mode and raster-vector mode. The algorithm in raster-vector mode was recommended. Figure 8.31 shows generalization by the Li-Openshaw algorithm in raster-vector mode. The first point to be recorded is the starting point. The second point is somewhere within the second cell; in this implementation, the middle point between the two intersections of the cell grid with the line (Fig. 8.31b) is used. If there are more than two intersections, the first (from the inlet direction) and the last (from the outlet direction) intersections are used to determine the position of the new point (Fig. 8.31c) (Clarke 1995). The final result of the generalization of a complete line is given in Fig. 8.31d. As in raster mode, overlap between SVSs can also be adopted, although it is not critical. Notably, it is not necessary to take the average to represent a cell: it does not matter which point within the cell is used, as the cell itself is an SVS. Thus, it is also possible to take an original point that is considered critical to represent the cell.
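A minimal sketch of the idea in pure raster mode is given below: all variation inside one SVS-sized cell collapses to a single representative point. Following the remark above that any point within the cell may be used, this sketch simply keeps the first original point falling in each cell rather than computing the intersection midpoint of the raster-vector mode.

```python
def li_openshaw_raster(line, svs):
    """Simplified Li-Openshaw sketch in raster mode: each SVS-sized cell
    visited by the line contributes one representative point (here the
    first original point falling in the cell)."""
    out, last_cell = [], None
    for x, y in line:
        cell = (int(x // svs), int(y // svs))
        if cell != last_cell:          # the line has entered a new SVS cell
            out.append((x, y))
            last_cell = cell
    if out[-1] != line[-1]:
        out.append(line[-1])           # always keep the end point
    return out

coast = [(0.1, 0.1), (0.4, 0.2), (0.9, 0.6), (1.2, 0.7), (1.8, 0.9), (2.3, 1.4)]
generalized = li_openshaw_raster(coast, 1.0)
```

The SVS value would in practice be derived from the input and output scales following the Natural Principle; here it is an illustrative constant.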
Some work has also been carried out to downscale the lines, i.e., to add more details to the lines. A typical example of such work is that by Dutton (1981), which adds more details to the line by following the fractal characteristics of the line itself (see Fig. 8.32).

Mathematical Solutions for Transformation (in Scale) of Line Networks
In geographical space, three types of line networks are commonly used, contour line networks, hydrological networks and transportation networks. Some hierarchical models were presented in Sect. 8.3.3. The mathematical solutions for the transformation in scale of these networks are discussed in detail by Li (2007). Here, only the construction of a hierarchy for transportation networks is described.
The first approach is based on the importance of roads. As road networks are stored as segments and intersections in a database, two steps are required: building strokes and ordering strokes, as illustrated in Fig. 8.33. To build strokes means to concatenate continuous and smooth network segments (see Fig. 8.33a) into wholes (see Fig. 8.33b). To order strokes means to rank the strokes in descending order of importance (see Fig. 8.33b). The importance of each stroke can be calculated from various properties, i.e., geometric properties such as length (Chaudhry and Mackaness 2005), topological properties such as degree, closeness and/or betweenness (Jiang and Claramunt 2004), and thematic properties such as road class. A comparative analysis of methodologies for building strokes was carried out by Zhou and Li (2012). With each stroke given an importance value, a stroke-based hierarchy of a line network can be built.
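The two steps (build strokes, then order them) can be sketched as follows. The greedy one-directional chaining, the 45-degree deflection threshold, and the use of geometric length as the importance measure are illustrative choices, not the specific method of any of the cited works.

```python
import math

def angle(p, q):
    return math.atan2(q[1] - p[1], q[0] - p[0])

def deflection(a, b):
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def build_strokes(segments, max_defl=math.radians(45)):
    """Greedy stroke-building sketch: chain segments that continue each
    other smoothly at a shared node, then rank strokes by length."""
    used, strokes = set(), []
    for i, seg in enumerate(segments):
        if i in used:
            continue
        used.add(i)
        stroke = list(seg)
        extended = True
        while extended:                     # extend forward from the tail
            extended = False
            tail_dir = angle(stroke[-2], stroke[-1])
            for j, (p0, p1) in enumerate(segments):
                if j in used:
                    continue
                if p0 == stroke[-1] and deflection(tail_dir, angle(p0, p1)) < max_defl:
                    stroke.append(p1); used.add(j); extended = True; break
                if p1 == stroke[-1] and deflection(tail_dir, angle(p1, p0)) < max_defl:
                    stroke.append(p0); used.add(j); extended = True; break
        strokes.append(stroke)
    # Order strokes by importance, here geometric length.
    strokes.sort(key=lambda s: -sum(math.dist(a, b) for a, b in zip(s, s[1:])))
    return strokes

# A straight road with a gentle bend plus a perpendicular side road.
strokes = build_strokes([((0, 0), (1, 0)), ((1, 0), (2, 0.1)), ((1, 0), (1, 1))])
```

Here the two nearly collinear segments join into one stroke, while the perpendicular segment forms a second, less important stroke.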
The importance of strokes can also be evaluated by the connectivity of strokes in the network; ego-network analysis and weighted ego-network analysis are possible methods (Zhang and Li 2011). Figure 8.34 shows the basic structure of three types of ego-networks and the weight of each link, also called the proportional link strength.
The proportional link strength ($p_{ij}$) of the link from node i to any of its immediate neighbor nodes j can be defined as the reciprocal of the degree of connectivity ($k_i$) of node i. Mathematically,

$$p_{ij} = \frac{1}{k_i}$$

For instance, in Fig. 8.34a, the ego is connected to both alter1 and alter2, so its degree of connectivity is 2; thus, the strengths of the links from the ego to alter1 and to alter2 are both 1/2 = 0.5. The strengths of the other links are also indicated in Fig. 8.34.
If node i and node j are not directly linked but are linked via other nodes q in the neighborhood (ne), the strength of the indirect link from node i to node j is defined as:

$$\sum_{q \in ne} p_{iq} \, p_{qj}$$

The total link strength ($C_{ij}$) from node i to node j is defined as the square of the sum of the direct link strength and the indirect link strength from node i to node j. Mathematically,

$$C_{ij} = \left( p_{ij} + \sum_{q \in ne} p_{iq} \, p_{qj} \right)^2$$

The $C_{ij}$ value reveals the constraint of i by j: the larger the $C_{ij}$ value is, the larger the constraint over i and the smaller the opportunity for i.
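The link-strength definitions above can be sketched as follows; the adjacency lists mirror the open and closed ego-networks of Fig. 8.34, with hypothetical node names.

```python
def proportional_strengths(adj):
    """p_ij = 1 / k_i for each immediate neighbour j of node i."""
    return {i: {j: 1.0 / len(ns) for j in ns} for i, ns in adj.items()}

def total_link_strength(adj, i, j):
    """C_ij = (p_ij + sum over shared neighbours q of p_iq * p_qj)^2."""
    p = proportional_strengths(adj)
    direct = p[i].get(j, 0.0)
    indirect = sum(p[i][q] * p[q][j]
                   for q in adj[i] if q != j and j in adj[q])
    return (direct + indirect) ** 2

# Open ego-network: ego linked to two alters that are not linked to each other.
adj_open = {'ego': ['a1', 'a2'], 'a1': ['ego'], 'a2': ['ego']}
# Closed ego-network: the two alters are also linked to each other.
adj_closed = {'ego': ['a1', 'a2'], 'a1': ['ego', 'a2'], 'a2': ['ego', 'a1']}

c_open = total_link_strength(adj_open, 'ego', 'a1')
c_closed = total_link_strength(adj_closed, 'ego', 'a1')
```

In the open case only the direct link contributes, so C = 0.5² = 0.25; in the closed case the indirect path via the other alter adds 0.25, giving C = 0.75² = 0.5625, i.e., a larger constraint over the ego.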
To apply this concept to a transport network, the physical road network is first converted into a connectivity graph, and the link strength values are computed for each node in the graph. Figure 8.35 shows an example. Roads can then be ranked by their link strength values.
The ego-network is a feasible and effective solution for the formation of hierarchies for road networks. However, Zhang and Li (2011) identified two significant limitations, the deviation of the link intensity definition from reality and the so-called 'degree 1 effect'. They subsequently developed a weighted ego-network analysis method.
Fig. 8.35 (a) A regular road network; (b) the connectivity graph

Another important development is the mesh density-based approach proposed by Chen et al. (2009). A so-called mesh is a closed region surrounded by several road segments. In this approach, the density of each mesh in the road network is computed according to the following formula:

$$density = \frac{P}{A}$$

where P is the perimeter of the mesh and A is the area of the mesh. The meshes with the highest densities are then merged progressively, as illustrated in Fig. 8.36. In this figure, the mesh with a density of 0.64 is first merged into that with a density of 0.42, and segment L is eliminated. The density (0.32) of the new mesh is then updated. The process is iterated until only one mesh is left.
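The merging procedure can be sketched as follows. Meshes are modeled simply as sets of boundary-segment ids (so merging is a symmetric difference that eliminates the shared segment), and the choice of which neighbour receives the merge is an illustrative simplification rather than the specific rule of Chen et al. (2009).

```python
def density(segs, seg_len, area):
    """Mesh density D = P / A, with P the perimeter and A the area."""
    return sum(seg_len[s] for s in segs) / area

def merge_meshes(meshes, areas, seg_len, min_density):
    """Repeatedly merge the densest mesh into a neighbour sharing a
    boundary segment; the shared segment is eliminated by the symmetric
    difference of the two boundaries. Assumes a connected mesh network."""
    meshes, areas = list(meshes), list(areas)
    while len(meshes) > 1:
        dens = [density(m, seg_len, a) for m, a in zip(meshes, areas)]
        i = max(range(len(meshes)), key=lambda k: dens[k])
        if dens[i] < min_density:
            break                       # remaining meshes are sparse enough
        # Neighbour = any mesh sharing a boundary segment with mesh i.
        j = next(k for k in range(len(meshes)) if k != i and meshes[k] & meshes[i])
        meshes[j] = meshes[j] ^ meshes[i]   # drop the shared segment(s)
        areas[j] += areas[i]
        del meshes[i], areas[i]
    return meshes, areas

# Two unit squares sharing segment 3; all segments have unit length.
seg_len = {s: 1.0 for s in range(1, 8)}
m, a = merge_meshes([{1, 2, 3, 4}, {3, 5, 6, 7}], [1.0, 1.0], seg_len,
                    min_density=2.0)
```

Both squares have density 4/1 = 4, so they are merged into one mesh of area 2 whose boundary no longer contains the shared segment.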
Generally, a road network is often a hybrid of linear and areal patterns, thus Li and Zhou (2012) proposed the construction of hybrid hierarchies, i.e., an integration of a line hierarchy and an area hierarchy.

Mathematical Solutions for Transformation of a Class of Area Features
Section 8.3.5 described how a hierarchy of areas can be structured by a minimum spanning tree. In that example, the centroid of a polygon was used to represent the polygon. However, if a polygon is thin and/or irregular, the edge length (the distance between centroids) is not necessarily a good measure of closeness. Densifying points along the polygon edges makes the problem simpler; Fig. 8.37 shows such an example. Figure 8.38 shows the transformation of buildings into suitable representations at different scales. Li (1994) argued that transformation in scale is better performed in raster space (because a scale reduction causes a space reduction, and the raster format takes care of space) and proposed the use of techniques from mathematical morphology for transformation in scale. Li and colleagues have since developed a complete set of algorithms for such transformations based on mathematical morphology.
One such algorithm is the aggregation of areas into groups and their transformation into representations at different scales (Su et al. 1997). The mathematical model for the aggregation is

$$C = (A \oplus B_1) \ominus B_2$$

where A is the representation (image) showing the original area features; $\oplus$ and $\ominus$ denote morphological dilation and erosion, respectively; and $B_1$ and $B_2$ are the two structuring elements.
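The dilation-then-erosion model can be sketched on a cell-based raster as follows. The 3 × 3 structuring element standing in for the SVS and the two-building layout are illustrative assumptions; in practice the element sizes follow from the input and output scales, as discussed below.

```python
def dilate(cells, se):
    """Morphological dilation of a set of raster cells by structuring element se."""
    return {(x + dx, y + dy) for x, y in cells for dx, dy in se}

def erode(cells, se):
    """Morphological erosion: a cell survives only if every translate by se
    is present (se is assumed to contain the origin)."""
    return {(x, y) for (x, y) in cells
            if all((x + dx, y + dy) in cells for dx, dy in se)}

def aggregate(A, B1, B2):
    """The aggregation model sketched above: dilate by B1, then erode by B2
    (a morphological closing when B1 == B2)."""
    return erode(dilate(A, B1), B2)

SE = {(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}   # SVS-sized element
square = lambda x0: {(x0 + i, j) for i in (0, 1) for j in (0, 1)}

# Two 2x2 'buildings' separated by a one-cell gap at x = 2.
merged = aggregate(square(0) | square(3), SE, SE)
```

The gap between the two buildings is smaller than the structuring element, so the closing bridges it and the two areas are aggregated into one.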

Fig. 8.37 Grouping of buildings at a scale of 1:10,000 for generalization to various scales (Li et al. 2004)

Fig. 8.38 Transformation of grouped buildings to various scales (Li et al. 2004): (a) 1:25,000, by typification; (b) 1:50,000, by typification and aggregation; (c) 1:100,000, by aggregation; (d) 1:250,000, by aggregation

The success of applying this model to area combination depends on the proper size and shape of the structuring elements B1 and B2. Su et al. (1997) suggested that the sizes of B1 and B2 should be determined by the input and output scales, following the Natural Principle described in Sect. 8.2.3. Figure 8.39 shows the combination of buildings using this model for two different scales: one for a scale reduction by 7 times and the other by 10 times. The results are also compared with those obtained by simple photographic reduction. The combined results are very reasonable; however, their boundaries are very irregular, and simplification of the boundaries is therefore needed. A detailed description of such a simplification is omitted here but can be found in the work of Su et al. (1997) and the book by Li (2007). The result is shown in Fig. 8.40.

Fig. 8.40 (Su et al. 1997): (a) a settlement with an irregular shape; (b) simplified by the SLLM algorithm

Mathematical Solutions for Transformation (in Scale) of Spherical and 3D Features
In the previous sections, mathematical solutions for the transformation of 2D features were presented. Mathematical solutions for the transformation of spherical (e.g., Dutton 1999) and 3D features (e.g., Anders 2005) have also been researched, although the body of literature is much smaller than that for map generalization. In recent years, more papers have appeared on the generalization of buildings based on CityGML (e.g., Fan and Meng 2012; Uyar and Ulugtekin 2017); details of such methodologies are omitted here due to page limitations.

Transformation in Scale: Final Remarks
The beginning of this chapter emphasized that continuous zooming is at the core of Digital Earth as initiated by Al Gore. Continuous zooming is a kind of transformation of spatial representation in scale. In this chapter, the theoretical foundation for transformations in scale was presented in Sect. 8.2. Models for such transformations were then described in Sect. 8.3 for raster and vector data, images, digital terrain models and map data. A selection of algorithms and/or mathematical functions for achieving these transformations was presented in Sect. 8.4. Notably, this chapter concentrated on the theories and methodology for achieving continuous zooming; some important issues related to transformation in scale, such as temporal scale, scale effects and optimum scale selection, were omitted due to page limitations. Likewise, the models for transformation in scale were discussed with an emphasis on representations, so models of geographical and environmental processes, although important, were also excluded.

Zhilin Li is the Chair Professor of Geo-informatics at The Hong Kong Polytechnic University. His research interests include multi-scale modelling and representation, cartographic language and cartographic information theory, and mapping from satellite images. He was a vice president of the International Cartographic Association.
Haowen Yan is a professor in geographic information science at Lanzhou Jiaotong University, China. His research interests are map generalization, spatial relations and spatial data security. He is one of the three editors-in-chief of the Journal of Geovisualization and Spatial Analysis, published by Springer Nature.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.