A study on skeletonization of complex petroglyph shapes
Abstract
In this paper, we present a study on skeletonization of real-world shape data. The data stem from the cultural heritage domain and represent contact tracings of prehistoric petroglyphs. Automated analysis can support the work of archeologists in the investigation and categorization of petroglyphs. One strategy to describe petroglyph shapes is skeleton-based. The skeletonization of petroglyphs is challenging since their shapes are complex, contain numerous holes and are often incomplete or disconnected. Thus they pose an interesting testbed for skeletonization. We present a large real-world dataset consisting of more than 1100 petroglyph shapes. We investigate their properties and requirements for the purpose of skeletonization, and evaluate the applicability of state-of-the-art skeletonization and skeleton pruning algorithms on this type of data. Experiments show that preprocessing of the shapes is crucial to obtain robust skeletons. We propose an adaptive preprocessing method for petroglyph shapes and improve several state-of-the-art skeletonization algorithms to make them suitable for the complex material. Evaluations on our dataset show that 79.8 % of all shapes can be improved by the proposed preprocessing techniques and are thus better suited for subsequent skeletonization. Furthermore, we observe that a thinning of the shapes produces robust skeletons for 83.5 % of our shapes and outperforms more sophisticated skeletonization techniques.
Keywords
Skeletonization · Petroglyphs · Shape preprocessing · Real-world shape data
1 Introduction
In this paper, we present a study on skeletonization of real-world shape data. Skeletonization is a crucial prerequisite for the robust description and indexing of shapes, and for further search and shape retrieval [24]. The investigated data represent manually performed tracings of prehistoric petroglyphs that pose novel challenges to skeletonization due to their complex topology and structure. Petroglyphs are human-made markings on rock surfaces, which were pecked, scratched or carved into rocks [8]. They can be found all over the world and are preserved, studied, and interpreted by archeologists to gain knowledge about early human history. Petroglyphs are an interesting testbed for skeletonization as they exhibit a number of challenges. Depicted motifs range from simple geometric shapes (e.g. crosses) up to compositions of complex hunting, fighting, and dancing scenes.^{1} The tracings of the petroglyph shapes may be incomplete due to partial abrasion of the rock surface. Since the petroglyphs are made of individual peck marks, they exhibit a complex boundary as well as numerous holes in their interior (see Fig. 1a). Additionally, complex figures may consist of several disconnected parts. Petroglyph shapes can either show filled bodies or just the silhouette of a figure, depending on their artistic style. Finally, over the years figures have been pecked on top of each other, which results in merged shapes.
Petroglyphs are important artifacts that document early human life and development. The digitization and thus permanent preservation of petroglyphs recently gained increasing attention [23, 34]. Recent efforts target the building of retrieval systems that enable the search for similar shapes as well as the automated classification of petroglyphs into predefined shape classes according to archeological typologies [25]. Following the segmentation of photographs of petroglyphs to obtain the shapes of the figures [23], the work in this publication is an essential prerequisite for later automated recognition of the shapes based on skeletal descriptors [24, 25].
Existing skeletonization algorithms are not directly applicable to this type of material and yield poor skeletons, as shown in Fig. 1b. One reason for the poor performance is that existing methods are usually developed on perfectly segmented shapes with continuous contours and continuously filled regions originating, for example, from public datasets such as MPEG-7 Core Experiment CE-Shape-1 and Kimia-99 [3, 4, 28, 33]. Thus most methods do not fulfill the requirements of noisy real-world data such as that employed in this work. Other algorithms are designed for special tasks (e.g. fingerprint recognition) and therefore rely on specific image properties such as parallel ridges and furrows with well-defined frequency, orientation, and line width [2]. This makes such algorithms inappropriate for our material, which shows a high variety in composition, line width, and complexity. Aside from the different applications (e.g. shape retrieval) enabled by robust skeletonization, the employed petroglyph shapes pose a powerful testbed for the further development of skeletonization algorithms.
The paper is structured as follows: In Section 2 we review related work on skeletonization and skeleton pruning and identify suitable algorithms for our task. Section 3 presents our realworld material and its characteristics. We describe our preprocessing approach and the improvements of skeletonization algorithms in Section 4. Experimental setup and results are presented in Sections 5 and 6. Finally we draw conclusions in Section 7.
2 Related work
In this section we review skeletonization and skeleton pruning algorithms, analyze their properties and identify suitable methods for our task. In the literature the usage of terminology for skeletonization is highly ambiguous. Skeletonization, thinning, medial axis transform, and distance transform as methodologies, and skeleton, medial axis or medial line as their results, are used inconsistently [5]. According to Arcelli and Baja [1], algorithms for skeleton computation in discrete space can generally be partitioned into two categories: methods that perform skeletonization by medial axis transform produce skeletons following Blum's definition of the medial axis [6], and techniques employing skeletonization by thinning derive a thin version of a shape [9, 13]. A third category of approaches applies the medial axis transform to polygonal shapes in continuous space [12, 20, 21]. Additionally, there is a group of more recent skeletonization algorithms that utilize physics-based modeling of the shapes [14, 22], for which we suggest a fourth category.
All skeletonization methods are sensitive to boundary noise, i.e. small perturbations of a shape may have a large influence on the skeleton (see Fig. 1). To overcome this problem some form of regularization is required [26]. This regularization process is generally referred to as "skeleton pruning". Shaked and Bruckstein [26] observe that pruning is an essential part of skeletonization algorithms, and most recent developments combine skeletonization and skeleton pruning in one algorithm. Skeleton pruning methods can be consolidated into two major categories: the first covers the pruning of skeleton branches based on a significance value calculated for every single skeleton point, which results in a shortening of all skeleton branches. The second class of skeleton pruning algorithms calculates a significance measure for each branch; based on its significance value a branch is either removed completely or remains in the skeleton [18].
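To make this distinction concrete, the following toy sketch (our own illustration, not any of the cited algorithms) represents a skeleton branch as a list of per-point significance values: point-based pruning thresholds every point and thereby shortens branches, whereas branch-based pruning keeps or discards entire branches.

```python
def prune_pointwise(branch, threshold):
    """Point-based pruning: keep only points whose significance exceeds
    the threshold. Every branch, significant or not, is shortened."""
    return [s for s in branch if s > threshold]

def prune_branchwise(branches, threshold):
    """Branch-based pruning: keep a branch only if its peak significance
    exceeds the threshold; retained branches keep their full length."""
    return [b for b in branches if max(b) > threshold]

branches = [
    [0.9, 0.8, 0.7, 0.2],   # significant branch with a weak tip
    [0.3, 0.2, 0.1],        # spurious branch
]

# Point-based: both branches are shortened, the spurious one survives in part.
shortened = [prune_pointwise(b, 0.25) for b in branches]
# Branch-based: the spurious branch is removed whole, the other stays intact.
retained = prune_branchwise(branches, 0.5)
```

The toy numbers illustrate the trade-off discussed above: point-wise thresholding cannot remove a spurious branch without also clipping the tips of significant ones.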
2.1 Point-based pruning approaches
Montanari first develops a form of regularization to detect the most important skeleton branches [19]. He proposes the use of a threshold for Blum's "propagation velocity of the wavefront". Blum and Nagel extend this idea and propose a boundary/axis weight for the regularization of unwanted branches caused by boundary perturbations [7]. They state, however, that boundary perturbations are not always unwanted distortions but might actually be important features of a shape, and therefore pruning should be carried out with great care. Ho and Dyer propose the computation of the relative prominence of a skeleton point by using geometric relations between the maximum generating disk at the point and the contour of the shape [10]. Ogniewicz and Ilg compare several other regularization methods for skeleton points and propose the generation of a skeleton pyramid for further pruning [21]. Telea and van Wijk introduce a skeletonization algorithm based on a fast marching level set method (Augmented Fast Marching Method, AFMM) [31]. For every skeletal point they determine the length of the boundary segment it originates from and prune skeleton points using a single threshold. Howe applies the work of Telea and van Wijk to handwriting recognition using the contour length as salience measure [11]. Shen et al. compare this and other pixel-based significance measures and introduce a new significance measure for skeleton pruning by calculating the bending potential ratio (BPR) of the contour segment generated by the two points of the maximum inscribed disc that are tangent to the boundary [27]. Telea further improves AFMM by a different saliency metric, and proposes skeletonization for feature-preserving shape smoothing [30].
2.2 Branch-based pruning approaches
The methods summarized in Section 2.1 all compute a significance value for each single point of the skeleton. A thresholding of this value leads to a shortening of the branches. Branch-based methods, in contrast, avoid the shortening of branches and instead use a significance value to remove or retain entire branches. Bai et al. [4] propose a novel method for skeleton pruning based on Discrete Curve Evolution (DCE) introduced by Latecki and Lakämper [16]. They determine the contour points of a shape that have maximum curvature and delete all skeleton branches that do not end at one of these points. This approach inspired numerous other state-of-the-art skeleton pruning algorithms. Bai and Latecki further improve DCE by removing the necessity of prior knowledge about the shape [3]. They compute the DCE-skeleton with a fixed parameter (50 vertices) and subsequently add a reconstruction step, which removes skeleton branches with low contribution to the original shape. Yang et al. use the same methodology and extend the reconstruction algorithm to increase speed and to enable the computation of skeletons from shapes with holes [33]. Shen et al. introduce a normalization factor in the reconstruction step that quantifies the trade-off between the simplicity of their skeletons and the reconstruction error of the shapes [28]. Liu et al. extract the Generalized Voronoi Skeleton of a shape and then apply DCE to perform a first pruning of the obtained skeleton [17]. Subsequently, they prune further by balancing the visual contribution and the reconstruction contribution of each skeleton branch. Liu et al. further devise a skeleton pruning approach that fuses the information of several different branch significance measures [18]. Recently, Krinidis and Krinidis proposed a new skeletonization approach that smoothes the polygonal approximation of a shape iteratively [15]. In each iteration they determine the most important polygon vertices from the angles of their incoming edges and prune the skeletons by deleting those branches that connect less important nodes.
2.3 Comparison of algorithms

We compare the reviewed algorithms with respect to the following criteria:

- Robustness against remaining insignificant branches: Insignificant branches do not contribute essentially to the original shape and should thus be avoided or pruned.
- Robustness against deletion of significant branches: A significant branch contributes essentially to the figure's shape and should thus remain in the skeleton. Its deletion would significantly change the structure of the skeleton.
- Robustness against branch shortening: Branch shortening occurs when insignificant as well as significant skeleton branches are shortened alike. This bears the risk of changing the structure of the skeleton.
- Rotation and scale invariance: The skeleton of a differently scaled and rotated shape should be equivalent.
- Number of parameters: A large number of parameters increases the dependence of an algorithm on user input but at the same time gives more control. We prefer algorithms with a low number of parameters and adequate sensitivity to parameter changes.
- No prior knowledge about the shape needed: Parameters such as the number of endpoints, or absolute values that depend on the size and complexity of the shape, require a priori knowledge and should be avoided.
Comparison of recent skeletonization methods with respect to the identified criteria. Note that for point-based pruning approaches the first two criteria do not apply because they do not distinguish between significant and insignificant branches
| Algorithm | Robust against remaining insignificant branches | Robust against deletion of significant branches | Robust against branch shortening | Rotation and scale invariance | Number of parameters | No prior knowledge needed |
|---|---|---|---|---|---|---|
| *Point-based pruning* | | | | | | |
| Chord residual + skeleton pyramid [21] | – | – | no | yes | 3 | no |
| AFMM [31] | – | – | no | no | 1 | no |
| Boundary length [11] | – | – | no | yes | 1 | no |
| Bending potential ratio [27] | – | – | no | yes | 1 | yes |
| Saliency metric [30] | – | – | no | yes | 1 | yes |
| *Branch-based pruning* | | | | | | |
| DCE-skeleton [4] | no | no | yes | yes | 1 | no |
| Discrete skeleton evolution [3] | yes | yes | yes | yes | 1 | yes |
| Quick stable skeletons [33] | yes | yes | yes | yes | 1 | yes |
| Trade-off reconstruction error / skeleton simplicity [28] | yes | yes | yes | yes | 2 | yes |
| Visual contribution / reconstruction contribution [17] | yes | yes | yes | yes | 3 | no |
| Information fusion [18] | yes | yes | yes | yes | 12 | yes/no |
| Empirical mode decomposition [15] | yes | yes | yes | yes | 1 | yes |
3 Investigated material
Initial experiments of Takaki et al. showed that skeletonization is a useful abstraction of shapes [29]. It thus enables higher-level applications such as similarity search and automated shape classification, which is our ultimate goal. Petroglyphs pose a challenge for skeletonization as they are made of single peck marks and thus have neither a continuous contour nor continuously filled regions. Figure 2 shows that the shapes have highly varying complexity, contain numerous holes due to incompletely pecked areas, often contain very fine structures (horns of deer, feathers of birds, etc.), and have disconnected parts.
4 Approach
As already discussed in Section 1, the characteristics of petroglyph shapes impede skeletonization, which leads to unsatisfactory results (see for example Fig. 1). Robust skeletonization requires a preprocessing of the shapes as well as improvements of the skeletonization techniques. In the following sections we present a fully automated shape preprocessing method and propose a number of improvements for the selected skeletonization algorithms to make them applicable to petroglyph shapes.
4.1 Adaptive shape preprocessing
Initially we resize and pad all shapes to normalize the inputs. Next we apply a median filter of size s_med to the input. Median filtering removes small holes in foreground and background (salt-and-pepper noise) and at the same time slightly smooths the contour. The median filter may, however, generate artifacts by disconnecting weakly connected blobs. To compensate for these artifacts, we apply an area opening and closing as proposed by Vincent [32]^{2} as well as a dilation operation. We use an area size of t_aoc pixels as threshold for area opening and closing, and combine it with a dilation by a disc of radius r_dil to reconnect disjoint parts. We iterate these steps with increasing median filter size until a stopping criterion is met.
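One iteration of this cleaning step can be sketched as follows. This is a SciPy-based sketch, not the authors' implementation; the names s_med, t_aoc and r_dil follow the paper's notation, and the component-labelling area filter is a simple stand-in for Vincent's queue-based area opening/closing.

```python
import numpy as np
from scipy import ndimage

def area_open_close(shape, t_aoc):
    """Remove foreground blobs and background holes smaller than t_aoc
    pixels, via connected-component labelling (a stand-in for Vincent's
    area opening/closing; no structuring element is involved)."""
    out = shape.copy()
    # Area opening: drop small foreground components.
    labels, n = ndimage.label(out)
    sizes = ndimage.sum(out, labels, range(1, n + 1))
    for i, size in enumerate(sizes, start=1):
        if size < t_aoc:
            out[labels == i] = 0
    # Area closing: fill small background components (holes).
    bg = 1 - out
    labels, n = ndimage.label(bg)
    sizes = ndimage.sum(bg, labels, range(1, n + 1))
    for i, size in enumerate(sizes, start=1):
        if size < t_aoc:
            out[labels == i] = 1
    return out

def preprocess_step(shape, s_med, t_aoc, r_dil):
    """One iteration: median filter, area opening/closing, disc dilation."""
    filtered = ndimage.median_filter(shape, size=s_med)
    cleaned = area_open_close(filtered, t_aoc)
    y, x = np.ogrid[-r_dil:r_dil + 1, -r_dil:r_dil + 1]
    disc = (x * x + y * y) <= r_dil * r_dil
    return ndimage.binary_dilation(cleaned, structure=disc).astype(shape.dtype)
```

The outer background is itself a large background component, so the hole-filling pass only affects small enclosed holes, mirroring the intent of the paper's area closing.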
The stopping criterion requires a robust indicator function that is suitable for the differently complex shapes in the dataset. We evaluate different indicator functions such as solidity and circularity of the shape, the number and size of foreground and background blobs, and the number of endpoints in the thinning skeleton. Our preliminary experiments show that the most robust criterion is a combination of the number of foreground blobs and the number of background blobs. If neither number changes over a certain number of iterations n_it, we terminate the preprocessing. The intuition behind this stopping criterion is that if the number of foreground and background blobs remains constant for some time, the figure is likely to be in a robust state where the influence of noise is low.
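The stopping rule can be sketched like this (again our own sketch; `step` stands for one pass of the filtering described above, and the concrete iteration bounds are assumptions):

```python
from scipy import ndimage

def blob_counts(shape):
    """Count foreground blobs and background blobs (outer region + holes)."""
    _, n_fg = ndimage.label(shape)
    _, n_bg = ndimage.label(1 - shape)
    return n_fg, n_bg

def stable(history, n_it):
    """True once the (foreground, background) counts were constant over the
    last n_it iterations."""
    return len(history) >= n_it and len(set(history[-n_it:])) == 1

def preprocess(shape, step, n_it=3, max_iter=20):
    """Iterate `step` with increasing (odd) median filter sizes until the
    blob counts stabilise or max_iter passes have run."""
    history = []
    for s_med in range(3, 3 + 2 * max_iter, 2):
        shape = step(shape, s_med)
        history.append(blob_counts(shape))
        if stable(history, n_it):
            break
    return shape
```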
After the iterative preprocessing has terminated, we smooth the contour by a convolution filter with a Hanning window of size s_conv. We smooth the x- and y-coordinates of the shapes' contour points separately, which removes contour perturbations and thus reduces the likelihood of spurious (insignificant) skeleton branches in subsequent skeletonization.
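The contour smoothing can be sketched as a circular convolution over the closed contour (our sketch; s_conv follows the paper's notation, and the window is normalised so that a constant coordinate sequence stays unchanged):

```python
import numpy as np

def smooth_contour(xs, ys, s_conv):
    """Smooth a closed contour by circular convolution of its x- and
    y-coordinate sequences with a normalised Hanning window of size s_conv."""
    win = np.hanning(s_conv)
    win /= win.sum()

    def circ(v):
        # Wrap the sequence so the convolution treats the contour as closed.
        pad = s_conv // 2
        ext = np.concatenate([v[-pad:], v, v[:pad]])
        return np.convolve(ext, win, mode="same")[pad:pad + len(v)]

    return circ(np.asarray(xs, float)), circ(np.asarray(ys, float))
```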
4.2 Improvements of existing skeletonization methods
5 Experimental setup
From the ongoing annotation process of the investigated material described in Section 3, we derive a dataset of 1181 petroglyph shapes on which we carry out our experiments.^{3} In the following, we describe the selection of the parameters for our adaptive shape preprocessing method and the investigated skeletonization methods, and present our evaluation criteria.
5.1 Selection of parameters
For the selection of suitable values for the parameters defined in Section 4.1, we evaluate the preprocessing method on a reduced dataset consisting of 150 representative shapes from the entire dataset. In the absence of a ground truth of preprocessed shapes, we face difficulties in the selection of suitable parameter values. Hence we choose a heuristic approach to estimate robust parameters for the proposed method.
Additional parameters have to be selected for the skeletonization methods. For DCE [4], we estimate the parameter for the number of vertices adaptively by counting the number of endpoints of the respective thinning skeleton. For the BPR algorithm [27] and the SPT algorithm [28], we take the parameter values as proposed by their authors. The computation of the thinning skeleton is parameter-free as it is a simple morphological operation.
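As a sketch of the thinning and of the adaptive vertex-count estimate, the following uses Zhang-Suen thinning and counts skeleton endpoints as foreground pixels with exactly one 8-connected neighbour. This is our stand-in: the paper does not specify which thinning variant it uses.

```python
import numpy as np

def _neighbours(img, y, x):
    """8-neighbourhood P2..P9, clockwise starting north."""
    return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
            img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

def zhang_suen(shape):
    """Zhang-Suen thinning of a binary image (1 = foreground)."""
    img = np.pad(np.asarray(shape, np.uint8), 1)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y, x in zip(*np.nonzero(img)):
                p = _neighbours(img, y, x)
                b = sum(p)                 # number of foreground neighbours
                # a = number of 0 -> 1 transitions in the circular neighbourhood
                a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                if step == 0:
                    cond = p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0
                else:
                    cond = p[0] * p[2] * p[6] == 0 and p[0] * p[4] * p[6] == 0
                if 2 <= b <= 6 and a == 1 and cond:
                    to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img[1:-1, 1:-1]

def count_endpoints(skeleton):
    """Endpoints = skeleton pixels with exactly one 8-connected neighbour.
    This count serves as the adaptive number of DCE vertices."""
    img = np.pad(np.asarray(skeleton, np.uint8), 1)
    return sum(1 for y, x in zip(*np.nonzero(img))
               if sum(_neighbours(img, y, x)) == 1)
```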
5.2 Evaluation
Due to the absence of ground truth shapes and skeletons, we define several perceptual evaluation measures that can easily be judged by a human observer. Subsequently, we evaluate our preprocessing method and the applied skeletonization algorithms separately on the entire dataset.

We rate a preprocessed shape as correct if:

- All shape parts that are important for visual perception are preserved.
- No independent but closely spaced parts are merged (e.g. legs or feathers).
- Small holes in the shape are closed, disjoint parts are reconnected, and the contour is properly smoothed, i.e. the shape is likely to facilitate subsequent skeletonization.

We rate a skeleton as correct if:

- The skeleton preserves the full structure of the shape.
- It exhibits branches for all important parts of the shape.
- It does not have remaining spurious branches.
6 Results
Results of the quantitative evaluation of the proposed preprocessing method on the entire dataset
| | Not smoothed enough | Parts merged | Details lost | Sum errors | Sum correct |
|---|---|---|---|---|---|
| Weak preprocessing | 9.4 % | 6.9 % | 3.9 % | 20.2 % | 79.8 % |
Evaluation of the four selected skeletonization algorithms on the entire dataset. BPR and thinning perform best on the preprocessed shapes
The skeletons pruned with DCE are in most cases complete (96.3 %), but the algorithm additionally produces many spurious branches (47.7 %). This is related to the fact that the algorithm requires prior knowledge about the shapes (the number of DCE vertices). If this number is set too low, significant branches are deleted even before spurious ones (3.7 % of all shapes). If it is set too high, many spurious branches remain (47.7 %). We set this parameter adaptively (depending on the number of endpoints of the corresponding thinning skeleton) since no unique number is suitable for all shapes.
SPT builds upon DCE and performs similarly. Although it produces even more spurious branches (65.8 % of all shapes), it deletes spurious branches before important ones; thus important parts are lost for only 1.4 % of all shapes. An advantage of SPT over DCE is that it does not require a priori information about the shapes.
The BPR algorithm outperforms DCE and SPT and produces satisfactory skeletons for 86.9 % of all shapes. BPR generates far fewer spurious branches than DCE and SPT (only 6.7 % of all shapes). In some situations, however, the BPR pruning is too strong, so that important branches are removed for 6.4 % of all shapes (see, for example, the tail of the bird in Fig. 10, 2nd row).
A simple thinning results in notably good skeletons for 83.5 % of all shapes, which is nearly as good as the performance of the more sophisticated BPR algorithm. Since petroglyph shapes often resemble stick-like figures, they can be modeled well by the thinning algorithm. Additionally, the contour smoothing in the preprocessing avoids the generation of spurious branches by thinning (in 87.5 % of all cases). The results obtained for thinning show that a proper preprocessing can replace an additional skeleton pruning. In only 4.0 % of all cases are important branches missed by thinning.
7 Conclusion
In this paper, we presented a study on skeletonization of petroglyph shapes. We introduced a large heterogeneous dataset of real-world shapes that exhibits numerous challenges to existing skeletonization algorithms and thus poses an interesting testbed. We studied the applicability of existing skeletonization methods and evaluated their strengths and weaknesses. Existing skeletonization methods were developed and evaluated mainly on ideal shapes and are thus not directly applicable to our real-world data. Therefore we improved several skeletonization algorithms to compensate for the shortcomings that became apparent. Additionally, we proposed an adaptive shape preprocessing method that enables the computation of robust skeletons for the complex and diverse shapes under investigation. We performed a large-scale experiment and showed that a proper preprocessing is crucial for the skeletonization of petroglyph shapes. Experiments on skeletonization showed that preprocessing in combination with a simple thinning yields a good trade-off for robust skeletonization, whereas more sophisticated skeletonization techniques either generate more spurious branches (DCE, SPT) or delete important ones (BPR). Our experiments clearly demonstrated that the presented preprocessing method and the proposed improvements of recent skeletonization methods solve the additional challenges introduced by our complex and noisy real-world shape data for more than 86 % of all investigated shapes.
Footnotes
1. Example: http://3dpitoti.eu.
2. Note that the proposed area opening/closing is fundamentally different from a morphological opening/closing as it does not employ a structuring element.
3. The dataset can be downloaded at: http://mc.fhstp.ac.at/content/petroskel_dataset.
Acknowledgments
Open access funding provided by FH St. Pölten – University of Applied Sciences. The images of petroglyph tracings used in this paper have been kindly provided by the CCSP – Centro Camuno di Studi Preistorici and by Alberto Marretta, whom we thank. This work has been carried out in the project 3D-PITOTI, which is funded by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 600545, 2013–2016. Further information about the project can be found at http://3dpitoti.eu.
References
1. Arcelli C, Baja GSd (1996) Skeletons of planar patterns. In: Kong TY, Rosenfeld A (eds) Topological algorithms for digital image processing, Machine Intell. and Patt. Rec., vol 19. North-Holland, pp 99–143
2. Atul S, Chaudhari ASC (2013) A study and review on fingerprint image enhancement and minutiae extraction. IOSR J Comput Eng 9(6):53–56. doi: 10.9790/06610965356
3. Bai X, Latecki LJ (2007) Discrete skeleton evolution. In: Proceedings of the 6th international conference on energy minimization methods in computer vision and pattern recognition. Springer-Verlag, Berlin, Heidelberg, pp 362–374
4. Bai X, Latecki L, Liu Wy (2007) Skeleton pruning by contour partitioning with discrete curve evolution. IEEE Trans Pattern Anal Mach Intell 29(3):449–462
5. Baja GSd (2006) Skeletonization of digital objects. In: Proceedings of the 11th Iberoamerican conference on progress in pattern recognition, image analysis and applications (CIARP'06). Springer, DE, pp 1–13
6. Blum H (1967) A transformation for extracting new descriptors of shape. In: Models for the perception of speech and visual form, Proceedings of Meeting held in Boston, Nov. 1964. MIT Press, Cambridge, pp 362–380
7. Blum H, Nagel RN (1978) Shape description using weighted symmetric axis features. Pattern Recogn 10(3):167–180
8. Chippindale C, Taçon P (1998) The archaeology of rock-art. New directions in archaeology series. Cambridge University Press
9. Dinneen GP (1955) Programming pattern recognition. In: Proceedings of the March 1-3, 1955, Western Joint Comp. Conf., AFIPS '55 (Western). ACM, NY, pp 94–100
10. Ho S, Dyer C (1984) Medial-axis based shape smoothing. Technical Report 557, University of Wisconsin-Madison Department of Computer Sciences
11. Howe NR (2004) Code implementations by Nicholas R. Howe. http://www.cs.smith.edu/nhowe/research/code/
12. Kirkpatrick D (1979) Efficient computation of continuous skeletons. In: Proceedings of the 20th annual symposium on foundations of computer science. IEEE, San Juan, pp 18–27
13. Kirsch RA, Cahn L, Ray C, Urban GH (1958) Experiments in processing pictorial information with a digital computer. In: Papers and Discussions Presented at the December 9-13, 1957, Eastern Joint Comp. Conf.: Computers with Deadlines to Meet. IRE-ACM-AIEE '57 (Eastern). ACM, New York, pp 221–229
14. Krinidis S, Chatzis V (2009) A skeleton family generator via physics-based deformable models. IEEE Trans Image Process 18(1):1–11
15. Krinidis S, Krinidis M (2013) Empirical mode decomposition on skeletonization pruning. Image Vis Comput 31(8):533–541
16. Latecki LJ, Lakämper R (1999) Polygon evolution by vertex deletion. In: Proceedings of the 2nd international conference on scale-space theories in computer vision. Springer, pp 398–409
17. Liu H, Wu Z, Hsu DF, Peterson BS, Xu D (2012) On the generation and pruning of skeletons using generalized voronoi diagrams. Pattern Recogn Lett 33(16):2113–2119
18. Liu H, Wu ZH, Zhang X, Hsu DF (2013) A skeleton pruning algorithm based on information fusion. Pattern Recogn Lett 34(10):1138–1145
19. Montanari U (1968) A method for obtaining skeletons using a quasi-Euclidean distance. J ACM 15(4):600–624
20. Montanari U (1969) Continuous skeletons from digitized images. J ACM 16(4):534–549
21. Ogniewicz RL, Ilg M (1992) Voronoi skeletons: theory and applications. In: Proceedings of the IEEE Conf. on Comp. Vision and Patt. Rec. (CVPR), pp 63–69
22. Parker JR (2011) Algorithms for image processing and computer vision, 2nd edn. Wiley Publishing, Inc, Indianapolis
23. Seidl M, Breiteneder C (2012) Automated petroglyph image segmentation with interactive classifier fusion. In: Proceedings of the 8th Indian conference on computer vision, graphics and image processing, ICVGIP '12. ACM, New York, pp 66:1–66:8
24. Seidl M, Wieser E, Zeppelzauer M, Pinz A, Breiteneder C (2014) Graph-based similarity of petroglyphs. In: VISART 'Where Computer Vision Meets Art', ECCV 2014. Springer, Zürich
25. Seidl M, Wieser E, Alexander C (2015) Automated classification of petroglyphs. Digital Applications in Archaeology and Cultural Heritage
26. Shaked D, Bruckstein AM (1998) Pruning medial axes. Comput Vis Image Underst 69(2):156–169
27. Shen W, Bai X, Hu R, Wang H, Latecki L (2011) Skeleton growing and pruning with bending potential ratio. Pattern Recognit 44(2):196–209
28. Shen W, Bai X, Yang X, Latecki LJ (2013) Skeleton pruning as trade-off between skeleton simplicity and reconstruction error. Science China Inf Sci 56(4):1–14
29. Takaki R, Toriwaki J, Mizuno S, Izuhara R (2006) Shape analysis of petroglyphs in central Asia. Forma 21:243–258
30. Telea A (2012) Feature preserving smoothing of shapes using saliency skeletons. In: Vis. in Medicine and Life Sciences II. Springer, pp 153–170
31. Telea A, van Wijk JJ (2002) An augmented fast marching method for computing skeletons and centerlines. In: Proc. of the symposium on data visualisation 2002, eurographics ass., VISSYM '02. Aire-la-Ville, Switzerland, pp 251–ff
32. Vincent L (1994) Morphological area openings and closings for greyscale images. In: Shape in picture. Springer, pp 197–208
33. Yang X, Bai X, Yang X, Zeng L (2009) An efficient quick algorithm for computing stable skeletons. In: Proceedings of the 2nd international congress on image and signal processing (CISP'09), pp 1–5
34. Zhu Q, Wang X, Keogh E (2009) Augmenting the generalized hough transform to enable the mining of petroglyphs. In: Proceedings of the 15th ACM SIGKDD international conference on knowledge discovery and data mining, KDD '09. ACM, New York, pp 1057–1066
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.