# Statistical Model of Shape Moments with Active Contour Evolution for Shape Detection and Segmentation


## Abstract

This paper describes a novel method for shape representation and robust image segmentation. The proposed method combines two well-known methodologies, namely statistical shape models and active contours implemented in the level set framework. Shape detection is achieved by maximizing a posterior function that consists of a prior shape probability model and an image likelihood function conditioned on shapes. The statistical shape model is built as a result of a learning process based on nonparametric probability estimation in a PCA-reduced feature space formed by the Legendre moments of training silhouette images. A greedy strategy is applied to optimize the proposed cost function by iteratively evolving an implicit active contour in the image space and subsequently performing constrained optimization of the evolved shape in the reduced shape feature space. Experimental results presented in the paper demonstrate that the proposed method, contrary to many other active contour segmentation methods, is highly resilient to severe random and structural noise that could be present in the data.

## Keywords

Active contour · Statistical shape model · Segmentation · Shape detection

## 1 Introduction

Originally proposed in [13], active contour models have achieved enormous success in image segmentation. The basic idea of an active contour is to iteratively evolve an initial curve towards the boundaries of target objects. Classical curve evolution is normally driven by a combination of internal forces, determined by the geometry of the evolving curve, and external forces induced from an image. A segmentation method using active contours is usually based on minimizing a functional defined in such a way that it attains a small value for curves close to the target object boundary.

Introduction of a prior shape constraint into the image segmentation functional has recently become the focus of intensive research [6, 7, 12, 14, 16, 19, 24]. Early work on this problem was done by Cootes et al. [4]. Their method is based on principal component analysis (PCA) calculated for landmarks selected for a training set of shapes which are assumed to be representative of the shape variations. The method is implemented in the parametric active contour framework, with results strongly depending on the quality of the selected landmarks.

Leventon et al. [17] considered introducing prior shape information using a level set based representation, where landmarks are replaced by signed distance functions calculated for the contours in the training data set, hence providing an intrinsic and parametrization-free shape model. However, it was demonstrated that linear combinations of signed distance functions do not necessarily result in a signed distance function, which can compromise the quality of the solution. Furthermore, all these methods effectively assume that the shape prior has a Gaussian distribution. As a result, these methods cannot handle multi-modal shape distributions and thus are restricted to the segmentation of target objects with limited shape variability.

Instead of using the evolution of an active contour to search for an optimum in the image space, Tsai et al. [25] proposed a method to search for a solution directly in the shape space, built from the signed distance functions of aligned training images and reduced by PCA. In their paper, a few cost functions are proposed and their derivatives with respect to eigen-shape weights and to pose parameters are given, so that the steepest descent algorithm can be applied. In [10], Fussenegger et al. apply a robust and incremental PCA algorithm to binary training masks of the object(s) to define an active shape model which is then “embedded” in a level set implementation. Segmentation (or tracking) is computed using the pre-trained shape model, and the PCA representation is then updated using this result in order to improve the next iteration of the segmentation process. Although this self-improving “looping process” between the image space and the shape space is interesting, PCA of binary training masks requires that these training examples are aligned before learning the implicit shape model. The major limitation of all these methods is the implicit assumption of uniform distribution in the shape space.

Recently, it has been proposed to construct a nonparametric shape prior by extending the Parzen density estimator to the space of shapes. For instance, in [5, 20, 21, 22], the authors proposed a nonlinear statistical shape model for level set segmentation which can be efficiently implemented. Given a set of training shapes, they performed kernel density estimation in a low dimensional subspace. In this way, they were able to combine an accurate model of the statistical shape distribution with efficient optimization in a finite-dimensional subspace. In a Bayesian inference framework, they integrated the nonlinear shape model with a nonparametric intensity model and a set of pose parameters which are estimated in a more direct data-driven manner than in previously proposed level set methods. Kim et al. [14] proposed a nonparametric shape prior model for image segmentation problems. Given example training shapes, they estimate the underlying shape distribution by extending a Parzen density estimator to the space of shapes. Such density estimates are expressed in terms of distances between shapes. The learned shape prior distribution is then incorporated into a maximum a posteriori estimation framework which is solved using active contours.

Recently, Foulonneau et al. [9] proposed an alternative approach for shape prior integration within the framework of parametric snakes, combining a compact, parametric representation of shapes with curve evolution theory. More specifically, they proposed to define a *geometric* shape prior based on a description of the target object shape using Legendre moments. A new shape energy term, defined as the distance between the moments calculated for the evolving active contour and the moments calculated for a fixed reference shape prior, is derived in the mathematical framework of [1] in order to obtain the evolution equation. Initially, the method was designed for a single reference shape prior [8], but in its most recent version it is able to take into account multi-reference shape priors. As a result, the authors have defined an efficient method for region-based active contours integrating static shape prior information. Nevertheless, one of the main drawbacks of such an approach lies in its strong dependence on the shape alphabet used as reference. Indeed, as stated by the authors themselves in [9], this method is more related to *template matching* than to *shape learning*.

Inspired by the aforementioned results, and especially by the approach proposed by Foulonneau et al., the method proposed in this paper optimizes, within the level set framework, a model consisting of a prior shape probability model and an image likelihood function conditioned on shapes. The statistical shape model results from a learning process based on nonparametric estimation of a prior probability, in a low dimensional shape space of Legendre moments built from training silhouette images. Such an approach combines most of the advantages of the aforementioned methods, that is to say, it can handle multi-modal shape distributions, preserves a consistent framework for shape modeling, and is free from any explicit shape distribution model.

The structure of this paper is as follows. Section 2 describes the proposed image segmentation framework. More specifically, in Sect. 2.1 a shape representation using Legendre moments is introduced; the statistical shape model constructed in the space of the Legendre moments is explained in Sect. 2.2; the level set active contour model used in the proposed method is briefly explained in Sect. 2.3; Sect. 2.4 recasts the energy minimization problem in the general maximum a posteriori (MAP) framework, whereas in Sect. 2.5 the proposed strategy for energy minimization is explained in detail. Section 3 demonstrates the performance of the proposed method on binary silhouette and gray scale images, emphasising the resilience of the proposed method with respect to severe random and structural noise present in the image. Finally, conclusions are given in Sect. 4.

## 2 Segmentation Framework

The proposed segmentation framework can be seen as constrained contour evolution, with the evolution driven by an iterative optimization of a posterior probability model that combines a prior shape probability and an image likelihood function, linked with a coupling prior imposing constraints on the contour evolution in the image domain. The method can be implemented with any combination of shape descriptors and dimensionality reduction techniques as long as shape reconstruction is possible from the selected low dimensional representation. Although, for clarity of presentation and due to the analysis in the experimental section comparing the proposed method against [9], Legendre moments are used in this paper, other shape descriptors such as Zernike moments [23] could equally be used.

In this section all the elements of the proposed model along with the proposed optimization procedure are described in detail.

### 2.1 Shape Representation Using Legendre Moments

The method proposed in this paper can utilize any shape descriptor as long as it enables shape reconstruction [18, 23]. However, in order to simplify the description of the method and its comparison with other approaches [2, 9], shapes are encoded, as in [9], by the central-normalized Legendre moments \(\boldsymbol{\lambda}=\{\lambda_{pq},\ p+q\leq N_{o}\}\) of order \(N_{o}\), where *p* and *q* are non-negative integers, and therefore \(\boldsymbol{\lambda}\in\mathbf {R}^{N_{f}}\) with \(N_{f}=(N_{o}+1)(N_{o}+2)/2\). The quality of the shape representation depends on the selected order \(N_{o}\). Figure 1 shows an example of shape reconstruction when different values of \(N_{o}\) are used.

For a shape *Ω* the moments are defined by:

$$ \lambda_{pq} = \frac{1}{|\varOmega|} \int_{\varOmega} L_{pq}(x, y, \varOmega) \, dx\,dy $$

where the kernels \(L_{pq}\) are the tensor product of two 1D central-normalized Legendre polynomials \(L_{p}\) and \(L_{q}\):

$$ L_{pq}(x, y, \varOmega) = L_p \biggl( \frac{x-\bar{x}}{\sqrt{|\varOmega|}} \biggr) L_q \biggl( \frac{y-\bar{y}}{\sqrt{|\varOmega|}} \biggr) $$

The area |*Ω*| and the center of gravity coordinates \((\bar{x}, \bar{y})\) are calculated from:

$$ |\varOmega| = \int_{\varOmega} dx\,dy, \qquad \bar{x} = \frac{1}{|\varOmega|} \int_{\varOmega} x \, dx\,dy, \qquad \bar{y} = \frac{1}{|\varOmega|} \int_{\varOmega} y \, dx\,dy $$

The Legendre polynomials form an orthonormal basis:

$$ \int_{-1}^{1} L_p(x) L_q(x) \, dx = \delta_{pq} $$
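To make the moment computation concrete, the sketch below evaluates central-normalized Legendre moments of a binary mask with NumPy. This is an illustration, not the authors' code: the function name and the pixel-sum approximation of the integral are our choices, and for simplicity the standard (unnormalized) Legendre polynomials from `numpy.polynomial.legendre` are used instead of the orthonormal variant, which changes each moment only by a fixed per-order scale factor.

```python
import numpy as np
from numpy.polynomial import legendre as npleg

def legendre_moments(mask, order):
    """Central-normalized Legendre moments of a binary shape (illustrative sketch).

    mask  : 2D array of 0/1 (the shape indicator function)
    order : maximum total order N_o; returns {lambda_pq : p + q <= N_o}
            as a flat vector of length (N_o + 1)(N_o + 2) / 2.
    NOTE: uses unnormalized Legendre polynomials P_p, not the orthonormal L_p.
    """
    ys, xs = np.nonzero(mask)
    area = len(xs)                          # |Omega| as a pixel count
    u = (xs - xs.mean()) / np.sqrt(area)    # centered, scale-normalized x
    v = (ys - ys.mean()) / np.sqrt(area)    # centered, scale-normalized y

    # 1D Legendre polynomial values P_0..P_No at every shape pixel
    eye = np.eye(order + 1)
    Lu = np.stack([npleg.legval(u, eye[p]) for p in range(order + 1)])
    Lv = np.stack([npleg.legval(v, eye[q]) for q in range(order + 1)])

    moments = []
    for p in range(order + 1):
        for q in range(order + 1 - p):      # enforce p + q <= order
            # lambda_pq = (1/|Omega|) * sum over shape of P_p(u) P_q(v)
            moments.append(np.mean(Lu[p] * Lv[q]))
    return np.array(moments)
```

For a symmetric shape the odd moments vanish, while the zero-order moment equals one by construction, which gives a quick sanity check of an implementation.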

In the following sections the scale and translation invariant moments are used but the method would remain the same if similarity or affine invariant moments were used instead.

### 2.2 Statistical Shape Model of Legendre Moments

The statistical shape model is learned from a set of \(N_{s}\) binary silhouette images with foreground and background represented respectively by ones and zeros. The training data can be obtained from previously segmented images or generated from computer models of the objects of interest. In the first instance the central-normalized Legendre moments \(\{\boldsymbol{\lambda}_{i}\} _{i=1}^{N_{s}}\) are calculated for the shapes \(\{\varOmega_{i} \}_{i=1}^{N_{s}}\) from the training database. Following the methodology proposed in [4] the mean vector \(\bar{\boldsymbol{\lambda}}\) and the \(N_{f}\times N_{f}\) covariance matrix **Q** are estimated using:

$$ \bar{\boldsymbol{\lambda}} = \frac{1}{N_s} \sum_{i=1}^{N_s} \boldsymbol{\lambda}_i, \qquad \mathbf{Q} = \frac{1}{N_s} \sum_{i=1}^{N_s} (\boldsymbol{\lambda}_i - \bar{\boldsymbol{\lambda}}) (\boldsymbol{\lambda}_i - \bar{\boldsymbol{\lambda}})^T $$

Subsequently the \(N_{f}\times N_{c}\) projection matrix **P** is formed by the eigenvectors of the covariance matrix **Q** that correspond to the largest \(N_{c}\) (\(N_{c}\leq\min\{N_{s}, N_{f}\}\)) eigenvalues. The projection of the feature vectors \(\{\boldsymbol{\lambda}_{i}\} _{i=1}^{N_{s}}\) onto the shape space, spanned by the selected eigenvectors, forms the feature vectors \(\{\boldsymbol{\lambda}_{r,i}\}_{i=1}^{N_{s}}\):

$$ \boldsymbol{\lambda}_{r,i} = \mathbf{P}^T (\boldsymbol{\lambda}_i - \bar{\boldsymbol{\lambda}}) $$

The estimation of the prior shape probability \(P(\boldsymbol{\lambda}_{r})\), with \(\boldsymbol{\lambda}_{r}\) defined in the shape space, is performed up to a scale, using \(\boldsymbol{\lambda}_{r,i}\) as samples from the population of shapes and with the isotropic Gaussian function as the Parzen window:

$$ P(\boldsymbol{\lambda}_r) \propto \frac{1}{N_s} \sum_{i=1}^{N_s} \mathcal{N}(\boldsymbol{\lambda}_r; \boldsymbol{\lambda}_{r,i}, \sigma^2) $$
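A minimal sketch of this shape-space construction and the Parzen prior is given below, under the same notation (one moment vector per row, **P** holding the top \(N_c\) eigenvectors). The function names and the normalization of the covariance by \(N_s\) are our assumptions, not the authors' code.

```python
import numpy as np

def build_shape_space(lambdas, n_c):
    """PCA shape space from training moment vectors (illustrative sketch).

    lambdas : (N_s, N_f) array, one Legendre moment vector per training shape
    n_c     : number of retained eigenvectors (N_c)
    Returns the mean vector, the N_f x N_c projection matrix P, and the
    projected training vectors {lambda_r,i}.
    """
    mean = lambdas.mean(axis=0)
    centered = lambdas - mean
    Q = centered.T @ centered / len(lambdas)   # N_f x N_f covariance matrix
    evals, evecs = np.linalg.eigh(Q)           # eigenvalues in ascending order
    P = evecs[:, ::-1][:, :n_c]                # keep the largest N_c eigenvectors
    reduced = centered @ P                     # lambda_r,i = P^T (lambda_i - mean)
    return mean, P, reduced

def parzen_prior(lam_r, samples, sigma2):
    """Parzen estimate of P(lambda_r), up to scale, with an isotropic Gaussian kernel."""
    d2 = ((samples - lam_r) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma2)).mean()
```

Because the eigenvectors are orthonormal, projection and back-projection (`P @ lam_r + mean`) are cheap, which matters in the iterative optimization of Sect. 2.5.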

### 2.3 Level Set Active Contour Model

The density function \(P(\boldsymbol{\lambda}_{r})\), introduced in the previous section, is defined on the shape space of Legendre moments and represents the prior knowledge learned from the training shape examples.

It is assumed that image *I* is formed by regions of approximately constant intensity values and the segmentation is defined as an energy minimization problem, with the energy given by:

$$ E_{cv}(\varOmega) = \int_{\varOmega} (I - \mu_{\varOmega})^2 \, dx\,dy + \int_{\varOmega^c} (I - \mu_{\varOmega^c})^2 \, dx\,dy + \gamma |\partial\varOmega| $$

where \(\varOmega^{c}\) represents the complement of *Ω* in the image domain and \(|\partial\varOmega|\) represents the length of the boundary *∂Ω* of the region *Ω*. The above defined energy minimization problem can be equivalently expressed as maximization of the likelihood function:

$$ P(I|\varOmega) \propto \exp\bigl( -E_{cv}(\varOmega) \bigr) $$

\(P(I|\varOmega)\) could also be interpreted as the probability of observing image *I* when shape *Ω* is assumed to be present in the image. Introducing the level set (embedding) function *ϕ* such that *Ω* can be expressed in terms of *ϕ* as \(\varOmega=\{(x,y): \phi(x,y)\geq 0\}\), as well as \(\varOmega^{c}=\{(x,y): \phi(x,y)<0\}\) and \(\partial\varOmega=\{(x,y): \phi(x,y)=0\}\), the foregoing functional is equivalent to

$$ E_{cv}(\phi) = \int \bigl[ (I - \mu_{\varOmega})^2 H(\phi) + (I - \mu_{\varOmega^c})^2 \bigl( 1 - H(\phi) \bigr) \bigr] \, dx\,dy + \gamma \int |\nabla H(\phi)| \, dx\,dy $$

with *H* representing the Heaviside function. Calculating the Gateaux derivative [1] it can be shown that this energy function is minimized by the function *ϕ* given as a solution of the following PDE:

$$ \frac{\partial\phi}{\partial t} = \delta(\phi) \biggl[ \gamma \, \mathrm{div} \biggl( \frac{\nabla\phi}{|\nabla\phi|} \biggr) - (I - \mu_{\varOmega})^2 + (I - \mu_{\varOmega^c})^2 \biggr] $$

with \(\mu_{\varOmega} = \frac{1}{|\varOmega|}\int_{\varOmega} I \, dx\,dy\) and \(\mu_{\varOmega^{c}} = \frac{1}{|\varOmega^{c}|}\int_{\varOmega^{c}} I \, dx\,dy\) representing respectively the average intensities inside and outside the evolving curve.
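The Chan-Vese evolution PDE can be discretized with a simple explicit scheme. The sketch below is one plausible NumPy implementation with a smoothed delta function; it is not the authors' implementation, and the step size, curvature weight, and regularization constants are our choices.

```python
import numpy as np

def chan_vese_step(phi, img, gamma=0.2, dt=0.5, eps=1.0):
    """One explicit evolution step of the Chan-Vese PDE (illustrative sketch).

    phi : level set function; the shape is {phi >= 0}
    img : gray-scale image, same size as phi
    """
    inside = phi >= 0
    mu_in = img[inside].mean() if inside.any() else 0.0
    mu_out = img[~inside].mean() if (~inside).any() else 0.0

    # curvature term div(grad phi / |grad phi|) via central differences
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    div_y, _ = np.gradient(gy / norm)
    _, div_x = np.gradient(gx / norm)
    curvature = div_x + div_y

    # smoothed delta function restricting the update to a band around phi = 0
    delta = eps / (np.pi * (eps ** 2 + phi ** 2))
    force = gamma * curvature - (img - mu_in) ** 2 + (img - mu_out) ** 2
    return phi + dt * delta * force
```

Starting from a circle that over-covers a bright square, repeated steps shrink the zero level set towards the true boundary, since background pixels inside the contour receive a negative force and object pixels a positive one.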

### 2.4 MAP Framework

In the MAP framework the posterior shape probability is given by:

$$ P(\boldsymbol{\lambda}_r | I) \propto P(I | \boldsymbol{\lambda}_r) P(\boldsymbol{\lambda}_r) $$

where \(P(\boldsymbol{\lambda}_{r})\) and \(P(I|\boldsymbol{\lambda}_{r})\) represent respectively shape and intensity based information. In [26] it was proposed to optimize \(P(\boldsymbol{\lambda}_{r}|I)\) by restricting the shape evolution to the estimated shape space, imposing the following constraint: \(P(I|\boldsymbol{\lambda}_{r}) = P(I|\varOmega)|_{\varOmega= \varOmega(\boldsymbol{\lambda}_{r})}\). As maximizing \(P(\boldsymbol{\lambda}_{r}|I)\) is equivalent to minimizing \(-\ln(P(\boldsymbol{\lambda}_{r}|I))\), Zhang et al. [26] suggested minimizing the energy function:

$$ E(\boldsymbol{\lambda}_r) = E_{image}(\boldsymbol{\lambda}_r) + E_{prior}(\boldsymbol{\lambda}_r) $$

where the prior term \(E_{prior}(\boldsymbol{\lambda}_{r}) = -\ln P(\boldsymbol{\lambda}_{r})\) is estimated from the projections \(\boldsymbol{\lambda}_{r,i}\) of the training shapes \(\varOmega_{i}\) as explained in Sect. 2.2. The image term is defined as:

$$ E_{image}(\boldsymbol{\lambda}_r) = E_{cv}(\varOmega)|_{\varOmega = \varOmega(\boldsymbol{\lambda}_r)} $$

i.e. the Chan-Vese energy \(E_{cv}\) is constrained to shapes *Ω* from the estimated shape space, \(\varOmega= \varOmega(\boldsymbol{\lambda}_{r})\) (\(\varOmega(\boldsymbol{\lambda}_{r})\) denotes a shape from the shape space represented by the Legendre moments \(\boldsymbol{\lambda}= \mathbf{P}\boldsymbol {\lambda}_{r} + \bar{\boldsymbol{\lambda}}\)).

The shape minimizing \(E(\boldsymbol{\lambda}_{r})\) belongs to the shape space and as such may not accurately represent the object of interest. To resolve this, Eq. (15) can be redefined as:

$$ P(\varOmega, \boldsymbol{\lambda}_r | I) \propto P(I | \varOmega) P(\varOmega | \boldsymbol{\lambda}_r) P(\boldsymbol{\lambda}_r) $$

with the shape *Ω* defined in the image space and the vector \(\boldsymbol{\lambda}_{r}\) defined in the shape space. The coupling between these two is achieved by \(P(\varOmega|\boldsymbol{\lambda}_{r})\) defined as:

$$ P(\varOmega | \boldsymbol{\lambda}_r) \propto \exp\biggl( -\alpha \int (\phi - \phi_r)^2 \, dx\,dy \biggr) $$

where *α* is a weighting factor defining the strength of coupling between *Ω* and \(\varOmega(\boldsymbol{\lambda}_{r})\), and \(\phi_{r}\) is a signed distance function representing the shape defined by \(\boldsymbol{\lambda}_{r}\) in the image domain. The overall energy to be minimized is now given by:

$$ E(\varOmega, \boldsymbol{\lambda}_r) = E_{image}(\varOmega) + E_{coupling}(\varOmega, \boldsymbol{\lambda}_r) + E_{prior}(\boldsymbol{\lambda}_r) $$

The details of the optimization procedure for energy *E*(*Ω*,**λ**_{ r }) are given in the next section.
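One plausible way to realize the coupling prior \(P(\varOmega|\boldsymbol{\lambda}_r)\) in a discrete implementation is to build \(\phi_r\) with a Euclidean distance transform and evaluate the quadratic penalty between the two embedding functions. The helper names are ours, and the sketch assumes SciPy's `distance_transform_edt`; it illustrates the coupling term only, not the authors' full energy.

```python
import numpy as np
from scipy import ndimage

def signed_distance(mask):
    """Signed distance function phi_r of a binary shape (positive inside)."""
    inside = ndimage.distance_transform_edt(mask)
    outside = ndimage.distance_transform_edt(1 - mask)
    return inside - outside

def coupling_energy(phi, shape_mask, alpha):
    """-ln P(Omega | lambda_r) up to a constant: alpha * sum (phi - phi_r)^2."""
    phi_r = signed_distance(shape_mask)
    return alpha * ((phi - phi_r) ** 2).sum()
```

The energy vanishes exactly when the evolving level set coincides with the embedding of the shape-space reconstruction, and grows quadratically as the two drift apart, which is what ties the image-domain evolution to the shape space.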

### 2.5 Optimization

In the implementation of the proposed method the energy given in Eq. (22) is minimized using a greedy method where each of the two energy components *E* _{ prior } and *E* _{ image } is minimized in turn. The optimization of the image based energy *E* _{ image } is implemented through evolution of the level set *ϕ* defined by Eq. (24) with **λ**_{ r } fixed. Subsequently the *E* _{ prior } is minimized in the shape space with respect to the **λ**_{ r }. In this approach active contour evolution can be interpreted as a method for transferring the evidence about the shape present in the image into the shape space where it is combined with the shape information derived from the training shape samples.

Each iteration of the algorithm consists of the following steps:

- Projection of the current shape \(\varOmega^{(k)}\) into the shape space:
  $$ \varOmega^{(k)} \rightarrow\boldsymbol{\lambda}_r^{(k)} $$(25)
  where \(\boldsymbol{\lambda}_{r}^{(k)} = \mathbf{P}^{T} (\boldsymbol {\lambda}^{(k)} - \bar{\boldsymbol{\lambda}})\), and the central-normalized Legendre moments in vector \(\boldsymbol{\lambda}^{(k)}\) are calculated using:
  $$ \lambda_{pq}^{(k)} = \frac{1}{|\varOmega^{(k)}|} \int_{\varOmega^{(k)}} L_{pq} \left( x,y,\varOmega^{(k)} \right) \, dxdy $$(26)
  where \(\varOmega^{(k)}\) comes from the previous algorithm iteration;
- Shape space vector update:
  $$ \boldsymbol{\lambda}_r^{(k)} \rightarrow\boldsymbol{\lambda}_r^{\prime(k)} $$(27)
  This step reduces the value of \(E_{prior}\) by moving \(\boldsymbol{\lambda}_{r}^{(k)}\) in the steepest descent direction:
  $$ \boldsymbol{\lambda}_r^{\prime(k)} = \boldsymbol{\lambda}_r^{(k)} - \beta\left. \frac{\partial E_{prior}}{\partial \boldsymbol{\lambda}_r} \right|_{\boldsymbol{\lambda }_r=\boldsymbol{\lambda}_r^{(k)}} $$(28)
  where
  $$ \frac{\partial E_{prior}}{\partial\boldsymbol{\lambda}_r} = \frac{1}{2\sigma^2} \sum_{i = 1}^{N_s} w_i (\boldsymbol{\lambda }_r-\boldsymbol{\lambda}_{r,i}) $$(29)
  with
  $$ w_i = \frac{\mathcal{N}(\boldsymbol{\lambda}_r; \boldsymbol {\lambda}_{r,i}, \sigma^2)}{\sum_{k=1}^{N_s} \mathcal{N} (\boldsymbol{\lambda}_r; \boldsymbol{\lambda}_{r,k}, \sigma^2)} $$(30)
- Shape reconstruction from Legendre moments:
  $$ \boldsymbol{\lambda}_r^{\prime(k)} \rightarrow\varOmega^{\prime(k)} $$(31)
  where the shape \(\varOmega^{\prime(k)}\) is reconstructed from its Legendre moment expansion, with the moments \(\lambda_{pq}^{\prime(k)}\) in vector \(\boldsymbol{\lambda}^{\prime(k)}\) calculated from the shape space vector \(\boldsymbol{\lambda }_{r}^{\prime(k)}\) using \(\boldsymbol{\lambda}^{\prime(k)} = \mathbf{P}\boldsymbol{\lambda }_{r}^{\prime(k)} + \bar{\boldsymbol{\lambda}}\);
- Evolution of \(\varOmega^{\prime(k)}\) according to Eq. (24):
  $$ \varOmega^{\prime(k)} \rightarrow\varOmega^{(k+1)} $$(33)
  where \(\varOmega^{\prime(k)}\) is a shape represented in the shape space and \(\varOmega^{(k+1)}\) is the result of shape evolution in the image domain.
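The shape-space update of Eqs. (28)-(30) amounts to a mean-shift-like step towards the kernel-weighted training samples. A sketch (the function name is ours):

```python
import numpy as np

def prior_descent_step(lam_r, samples, sigma2, beta):
    """One steepest-descent update of lambda_r against E_prior (Eqs. 28-30)."""
    d2 = ((samples - lam_r) ** 2).sum(axis=1)
    k = np.exp(-d2 / (2.0 * sigma2))           # Gaussian kernel values
    w = k / k.sum()                            # normalized weights w_i, Eq. (30)
    grad = (w[:, None] * (lam_r - samples)).sum(axis=0) / (2.0 * sigma2)  # Eq. (29)
    return lam_r - beta * grad                 # Eq. (28)
```

When all training samples coincide, the gradient points straight at them, so the update moves the shape vector a fraction of the way towards the training mode.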

The algorithm iterates the above steps until convergence, i.e. until \(\varOmega^{(k+1)} = \varOmega^{(k)}\).
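The overall greedy loop can be sketched as a skeleton in which the four per-iteration operations are supplied as callables. All names are placeholders: this shows the control flow of the alternating optimization, not the authors' code.

```python
import numpy as np

def greedy_segmentation(omega0, img, project, descent_step, reconstruct, evolve,
                        max_iter=100):
    """Skeleton of the alternating image-space / shape-space optimization.

    project      : image-space shape  -> shape-space vector   (Eqs. 25-26)
    descent_step : shape-space vector -> updated vector       (Eqs. 27-30)
    reconstruct  : shape-space vector -> image-space shape    (Eq. 31)
    evolve       : shape, image       -> evolved shape        (Eqs. 24, 33)
    All four callables are placeholders to be supplied by the caller.
    """
    omega = omega0
    for _ in range(max_iter):
        lam_r = project(omega)                  # Omega^(k) -> lambda_r^(k)
        lam_r = descent_step(lam_r)             # steepest descent on E_prior
        omega_prime = reconstruct(lam_r)        # back to the image domain
        omega_next = evolve(omega_prime, img)   # image-driven evolution
        if np.array_equal(omega_next, omega):   # convergence: Omega^(k+1) = Omega^(k)
            break
        omega = omega_next
    return omega
```

Because the loop only exchanges a shape and a vector between the two spaces, either side (the evolution model or the shape prior) can be swapped without touching the other, which is the flexibility argument made below.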

The proposed strategy provides the maximum flexibility by making the optimizations in image space and shape space two independent processes bridged by shape projection and reconstruction. Thus, changing the curve evolution model in the image space or probability estimation model in the shape space will not affect other procedures. Although Legendre moments and PCA are selected to build the shape space in this paper, other shape descriptors and dimensionality reduction techniques can be easily ‘plugged’ into the optimization framework as long as the shape reconstruction from the shape space is possible. It should be pointed out that, unlike derivative based optimization methods such as [8] and [9], the shape descriptors need *not* be differentiable in the proposed method.

To guarantee convergence of the algorithm the parameter *α* in Eq. (24) should be a non-decreasing function of the iteration index. In that case convergence is guaranteed since, for a large enough value of *α*, the algorithm, if not terminated beforehand, is equivalent to steepest descent in the reduced shape space. In the practical implementation the value of *α* is periodically increased after a predefined number of iterations lapses. With this in mind the proposed algorithm can be interpreted as a mode seeking shape detection procedure. With a small value of *α* the algorithm can relatively easily make long “unconstrained” jumps in the shape space following the shape evidence in the image domain. With a gradually increasing value of *α* the algorithm is restricted to gradually smaller steps, maintaining similarity of the evolving shape in the image domain to the current shape defined in the shape space. It should be noted that in the practical experiments the algorithm converged in just a few iterations, without increasing *α*, for the vast majority of cases. To further improve segmentation results, after the algorithm terminates the image energy can be minimized independently through the contour propagation defined by Eq. (24). In this case the value of the parameter *α* should correspond to the level of noise present in the image, with small values of *α* corresponding to low levels of noise. This is further explained in the experimental section.

## 3 Experimental Results

To evaluate the proposed method, experiments were carried out using binary silhouette and real gray scale images. The main reason for using the silhouette images was to investigate robustness of the proposed technique against severe random and structural noise present in data. The segmentation of such images without any noise is straightforward, as it could be achieved by simple thresholding, providing ready ground truth data. Additionally, any incorrect segmentation of the noisy images can be directly associated with the noise rather than with a specific “non-optimal” type of image intensity descriptor used to compute the external energy in the active contour model. As explained in Sect. 2, the proposed method can be used with any contour evolution equation and as such can be used with colour or even tensor valued data. Here, for illustration purposes, results showing segmentation of real gray scale images are included.

### 3.1 Silhouette Data

Twenty shapes were selected from the *MPEG7 CE shape-1 Part B* database [15]. The first 19 of them were used as training shapes for building the statistical prior model and the remaining image was used for testing (see Fig. 2). The diversity of the training shapes can be clearly noted; rotations in the images were deliberately not removed, in order to test robustness of the proposed method against large shape variability.

The test image with Gaussian noise is shown in Fig. 4(b), where the noise level is so high that even with prior knowledge of the shape it is difficult to find the original silhouette in the noisy image. For the structural noise (Fig. 4(c)), hard alterations are made to the original image in order to emphasize the need for shape constraints. Finally, the last test image is corrupted by both Gaussian and structural noise (Fig. 4(d)).

The results reported here were obtained with \(N_{o}=40\) and \(N_{c}=10\), used to calculate the Legendre moments and the shape space.

For the noise-free test image the final shape *Ω* (shown in red) closely follows the edges of the silhouette. For the noisy images, particularly with the severe random noise, the quality of the segmentation in the image domain slightly deteriorates. This can be understood as a manifestation of the basic tradeoff between fidelity and robustness to noise. In the proposed method this tradeoff is controlled by the *α* parameter, Eq. (24), where a small value of *α* encourages fidelity whereas larger values improve robustness of the solution. Samples of the evolving shape for this specific test image are shown in Fig. 6.

Figure 7 shows the shape space with the prior probability estimated using the Parzen window with \(\sigma^{2}=0.02\). The dots represent the projections of the 19 training shapes.

The three curves in Fig. 7, shown as solid, dashed and dotted lines, respectively demonstrate the trajectories formed by the optimization processes of the proposed method for the test images with Gaussian, structural and hybrid noise. As the same initial circular shape was used for all three test images, all the trajectories start at the same point, marked by a square. All trajectories converge to points scattered near the dot representing the shape shown in the first row and third column of Fig. 2, which is the most similar to the shape present in the test images. Focusing on the dotted trajectory within the feature space, one can match the trajectory steps with the intermediate results shown in Fig. 6. The fact that the convergence points are close to, but not exactly on, the dot indicates that the proposed approach is not template matching. Although the method is designed to search for shapes similar to those seen during the training process, it can recover some unseen shape variations.

The segmentation result for the additive Gaussian noise obtained with the Chan-Vese model, which is well known for its robustness to Gaussian noise, is shown in Fig. 8(b). Inaccurate as it is, the result does provide some reasonable indication of the shape and position of the desired object, shown as a dashed line, which is one of the major reasons why region-based active contour approaches such as the Chan-Vese model are good choices for the image term in the proposed method. Figure 9(b) shows the segmentation result using the multi-reference method from [9], where all 20 training shapes were used as references. The result demonstrates a dilemma for methods with ‘soft’ shape constraints: how, or indeed whether, an appropriate weight can be selected to balance the image term and the shape term. For a noisy image like this, a strong image force can lead to the inaccurate result shown in Fig. 8(b), whereas a strong shape force can result in convergence to a wrong shape at a wrong location due to the lack of guidance from the image force. In this case, a range of different weights was tried, but none of them converged to the right result. A much better result was achieved using the proposed method, as shown in Fig. 5(b). As expected, the resulting shape, living in the reduced feature space, tends to have a more regular appearance.

For images with a large amount of structural noise the Chan-Vese model without a shape constraint fails completely, as shown in Fig. 8(c–d), by following the false structures. Although increasing the weight associated with the length term (*γ* in Eq. (11)) can avoid some of the false structures, it cannot properly locate the desired shape. Again, the multi-reference method failed to converge to the right result, as evident from Fig. 9(c–d).

For these experiments \(\sigma^{2}=0.002\) was used within the Gaussian kernel. Regarding the convergence of the different trajectories, the same conclusions as in the first set of experiments can be drawn.

Although the main objective of the described experiments was to demonstrate the superior robustness of the proposed method with respect to severe random and structural noise, the accuracy of the method was also tested in repeated experiments with different combinations of the target image and structural noise pattern. It transpired that the proposed method was able to localize the object boundary with an average accuracy of 1.2, 1.7 and 2 pixels when operating respectively on images with Gaussian, structural and hybrid noise.

### 3.2 Gray Scale Images

For these experiments the training shapes were again taken from the *MPEG7 CE shape-1 Part B* database. It can be clearly seen that the training shapes integrate a large shape variability, and that different positions of the handle are taken into account (left and right). Results of segmentation using the Chan-Vese, multi-reference and the proposed method are shown in Fig. 13.

This demonstrates that the proposed method is more robust than the other two tested methods with respect to “shape distractions” present in the data. The final result can be seen as a good compromise between the image information and the prior shape constraints imposed by the training data set used.

The training silhouettes were captured at regular increments (a fixed number of degrees) of the rotational viewing angle of the figurine.

The figure also shows the corresponding values of the *PF* function. It can be clearly noticed that the shape priors are activated in both cases, as the stable contours ignore erroneous image-driven shape internal/external region indicators.

## 4 Conclusions

The paper describes a novel method for shape detection and image segmentation. The proposed method can be seen as constrained contour evolution, with the evolution driven by an iterative optimization of the posterior probability function that combines a prior shape probability, the coupling distribution, and the image likelihood function. The prior shape probability function is defined on the subspace of Legendre moments and is estimated, using the Parzen window method, from the training shape samples projected into the shape space estimated beforehand. The likelihood function is constructed from the conditional image probability distribution, with the image modeled as having regions of approximately constant intensities. The coupling distribution is defined as a prior on the image likelihood function which imposes feasible shape changes based on the current shape parametrization in the shape space. The resulting constrained optimization problem is solved using a combination of level set active contour evolution in the image space and steepest descent iterations in the shape space. The decoupling of the optimization into image-space and shape-space processes provides an extremely flexible framework for statistical shape based active contours, in which the evolution function, the statistical model and the shape representation all become configurable. The presented experimental results demonstrate very strong resilience of the proposed method to random as well as structural noise present in the image.

## References

1. Aubert, G., Barlaud, M., Faugeras, O., Jehan-Besson, S.: Image segmentation using active contours: calculus of variations or shape gradients? SIAM J. Appl. Math. **63**, 2128–2154 (2003)
2. Chan, T.F., Vese, L.A.: Active contours without edges. IEEE Trans. Image Process. **10**(2), 266–277 (2001)
3. Chen, Y., Tagare, H., Thiruvenkadam, S., Huang, F., Wilson, D., Gopinath, K., Briggs, R., Geiser, E.: Using prior shapes in geometric active contours in a variational framework. Int. J. Comput. Vis. **50**(3), 315–328 (2002)
4. Cootes, T.F., Taylor, C.J., Cooper, D.H., Graham, J.: Active shape models—their training and application. Comput. Vis. Image Underst. **61**(1), 38–59 (1995)
5. Cremers, D., Osher, S.J., Soatto, S.: Kernel density estimation and intrinsic alignment for shape priors in level set segmentation. Int. J. Comput. Vis. **69**(3), 335–351 (2006)
6. Erdem, E., Tari, S., Vese, L.: Segmentation using the edge strength function as a shape prior within a local deformation model. In: ICIP, pp. 2989–2992 (2009)
7. Etyngier, P., Segonne, F., Keriven, R.: Shape prior using manifold learning techniques. In: ICCV, pp. 1–8 (2007)
8. Foulonneau, A., Charbonnier, P., Heitz, F.: Geometric shape priors for region-based active contours. In: ICIP, vol. 3, pp. 413–416 (2003)
9. Foulonneau, A., Charbonnier, P., Heitz, F.: Multi-reference shape priors for active contours. Int. J. Comput. Vis. **81**(1), 68–81 (2009)
10. Fussenegger, M., Roth, P., Bischof, H., Deriche, R., Pinz, A.: A level set framework using a new incremental, robust Active Shape Model for object segmentation and tracking. Image Vis. Comput. **27**(8), 1157–1168 (2009)
11. Gastaud, M., Barlaud, M., Aubert, G.: Combining shape prior and statistical features for active contour segmentation. IEEE Trans. Circuits Syst. Video Technol. **14**(5), 726–734 (2004)
12. Houhou, N., Lemkaddem, A., Duay, V., Allal, A., Thiran, J.-P.: Shape prior based on statistical MAP for active contour segmentation. In: ICIP, pp. 2284–2287 (2008)
13. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: active contour models. Int. J. Comput. Vis. **1**(4), 321–331 (1988)
14. Kim, J., Çetin, M., Willsky, A.S.: Nonparametric shape priors for active contour-based image segmentation. Signal Process. **87**(12), 3021–3044 (2007)
15. Latecki, L.J., Lakamper, R., Eckhardt, T.: Shape descriptors for non-rigid shapes with a single closed contour. In: CVPR, pp. 424–429 (2000)
16. Lecellier, F., Jehan-Besson, S., Fadili, J., Aubert, G., Revenu, M., Saloux, E.: Region-based active contour with noise and shape priors. In: ICIP, pp. 1649–1652 (2006)
17. Leventon, M., Grimson, W., Faugeras, O.: Statistical shape influence in geodesic active contours. In: CVPR, pp. 316–323 (2000)
18. Mukundan, R., Ramakrishnan, K.: Moment Functions in Image Analysis—Theory and Applications. World Scientific, Singapore (1998). ISBN 981-02-3524-0
19. Prisacariu, A.V., Reid, I.: Nonlinear shape manifolds as shape priors in level set segmentation and tracking. In: CVPR, pp. 2185–2192 (2011)
20. Rousson, M., Cremers, D.: Efficient kernel density estimation of shape and intensity priors for level set segmentation. In: MICCAI, pp. 757–764 (2005)
21. Rousson, M., Paragios, N.: Shape priors for level set representations. In: ECCV, pp. 78–92 (2002)
22. Rousson, M., Paragios, N.: Prior knowledge, level set representations and visual grouping. Int. J. Comput. Vis. **76**(3), 231–243 (2008)
23. Teague, M.: Image analysis via the general theory of moments. J. Opt. Soc. Am. **70**(8), 920–930 (1980)
24. Thiruvenkadam, T.R., Chan, T., Hong, B.-W.: Segmentation under occlusions using selective shape prior. SIAM J. Imaging Sci. **1**(1), 115–142 (2008)
25. Tsai, A., Yezzi, A., Wells, W.M., Tempany, C., Tucker, D., Fan, A., Grimson, W., Willsky, A.: A shape-based approach to the segmentation of medical imagery using level sets. IEEE Trans. Med. Imaging **22**(2), 137–154 (2003)
26. Zhang, Y., Matuszewski, B.J., Histace, A., Precioso, F.: Statistical shape model of Legendre moments with active contour evolution. In: CAIP, pp. 51–58 (2011)