Statistical Density Estimation Using Threshold Dynamics for Geometric Motion
 Kostić, T. & Bertozzi, A. J Sci Comput (2013) 54: 513. doi:10.1007/s10915-012-9615-6
Abstract
Our goal is to estimate a probability density based on discrete point data via segmentation techniques. Since point data may represent certain activities, such as crime, our method can be successfully used for detecting regions of high activity. In this work we design a binary segmentation version of the well-known Maximum Penalized Likelihood Estimation (MPLE) model, as well as a minimization algorithm based on thresholding dynamics originally proposed by Merriman et al. (The Computational Crystal Growers, pp. 73–83, 1992). We also present some computational examples, including one with actual residential burglary data from the San Fernando Valley.
Keywords
Statistical density estimation · Image segmentation · Thresholding · Ginzburg–Landau functional
1 Introduction
This paper is organized as follows. In Sect. 2 we give some background on variational methods for image segmentation and describe the MBO scheme. In Sect. 3 we present some background on MPLE models, discuss the proposed model in more detail, and calculate the time-dependent Euler–Lagrange equations used to minimize the functional (5). Section 4 explains the thresholding dynamics for the minimization of our energy functional. Details of the numerical implementation are presented in Sect. 5, computational examples are given in Sect. 6, and Sect. 7 discusses V-fold cross validation.
2 Background on Variational Methods in Image Segmentation and MBO Scheme

Step 1. Let v(x)=S(δt)u_{n}(x), where S(δt) is the propagator by time δt of the equation $$v_t=\Delta v, $$ with appropriate boundary conditions.

Step 2. Threshold:$$u_{n+1}(x) = \left \{ \begin{array}{l@{\quad}l} 0 & \text{if $v(x) \in(-\infty,\frac{1}{2}]$ }\\[6pt] 1 & \text{if $v(x) \in(\frac{1}{2},\infty)$ }\\ \end{array} \right . $$
The reason we are interested in motion by mean curvature flow is its close relation to the Allen–Cahn equation (10).
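As a concrete illustration, the two steps of the MBO scheme can be sketched in a few lines of NumPy. This is only an illustrative sketch, not the implementation used in the paper: the spectral heat solver with periodic boundary conditions, the grid size, and the time step are all assumptions.

```python
import numpy as np

def mbo_step(u, dt):
    """One MBO iteration: diffuse u by time dt (heat equation v_t = Laplacian(v),
    solved exactly in Fourier space with periodic BCs), then threshold at 1/2."""
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    v = np.fft.ifft2(np.exp(-(kx**2 + ky**2) * dt) * np.fft.fft2(u)).real
    return (v > 0.5).astype(float)

# Example: a square slowly loses its corners and shrinks,
# approximating motion by mean curvature.
u = np.zeros((64, 64))
u[16:48, 16:48] = 1.0
for _ in range(5):
    u = mbo_step(u, dt=2.0)
```

Iterating `mbo_step` with a small `dt` approximates mean curvature flow of the boundary of the set {u = 1}, which is the observation that motivates the schemes below.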

Step 1. Let v(x)=S(δt)u_{n}(x), where S(δt) is the propagator by time δt of the equation $$w_t=\Delta w-2\tilde{\lambda} \bigl(w(c_1-f)^2+(1-w) (c_2-f)^2 \bigr), $$ with appropriate boundary conditions.

Step 2. Set$$u_{n+1}(x) = \left \{ \begin{array}{l@{\quad}l} 0 & \text{if $v(x) \in(-\infty,\frac{1}{2}]$ }\\[6pt] 1 & \text{if $v(x) \in(\frac{1}{2},\infty)$ }\\ \end{array} \right . $$
3 MPLE Methods and Proposed Model
3.1 General Model
For now we are going to focus on the segmentation function u. We assume our segmentation function is the characteristic function of the region Σ, where Σ is an area of larger density. For any given data and any given segmentation function there is a unique density function corresponding to them. With w being the function that approximates the data, the total number of events is approximately equal to ∫w, while the numbers of events inside and outside the region Σ are approximated by ∫wu and ∫w(1−u), respectively. Accordingly, the density c_{1}(u) inside the region Σ is equal to \(\frac{\int wu}{\int u \int w}\) and the density c_{2}(u) in the region Σ^{C} is equal to \(\frac{\int w(1-u)}{\int w \int(1-u)}\). Finally, we write the density function as c_{1}(u)u+c_{2}(u)(1−u). The established correspondence between the segmentation and the density function suggests that building a diffuse interface MPLE model around the segmentation function is possible. As the segmentation function takes only the values 0 and 1, the Ginzburg–Landau functional is a natural choice. As the density is a rescaled segmentation function, using the Ginzburg–Landau functional for u, as opposed to the Ginzburg–Landau functional for the density, seems both reasonable and convenient.
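In discrete form, the densities c_1(u) and c_2(u) above are simple ratios of sums. A minimal NumPy sketch (the toy grid, data placement, and unit cell area are illustrative assumptions):

```python
import numpy as np

def densities(w, u):
    """Discrete analogues of c1(u) and c2(u): the fraction of events
    inside (resp. outside) Sigma, divided by the area of Sigma
    (resp. of its complement), with unit grid cells."""
    c1 = (w * u).sum() / (u.sum() * w.sum())
    c2 = (w * (1 - u)).sum() / (w.sum() * (1 - u).sum())
    return c1, c2

# Toy example: 4 events, all falling inside Sigma (16 cells).
w = np.zeros((8, 8)); w[2:4, 2:4] = 1.0   # data function
u = np.zeros((8, 8)); u[1:5, 1:5] = 1.0   # segmentation of Sigma
c1, c2 = densities(w, u)
rho = c1 * u + c2 * (1 - u)               # estimated density
```

Note that `rho` sums to 1 over the grid, as a probability density should: the rescaling by the total event count ∫w is built into c_1 and c_2.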

If both c_{1}(u) and c_{2}(u) (from now on written c_{1} and c_{2} for simplicity of notation) are nonzero:$$ u_t=2\epsilon\Delta u-\frac{1}{\epsilon}W'(u)+\mu w \biggl[\frac{c_1-c_2}{c_1}u+\frac{c_1-c_2}{c_2}(1-u)+\biggl(\frac{\int{(1-u)w}}{\int{(1-u)}}-\frac{\int{u w}}{\int{u}}\biggr)\biggr]. $$(17)

If c_{1} is equal to zero:$$ u_t=2\epsilon\Delta u-\frac{1}{\epsilon}W'(u)+\mu\biggl[ w(u-1)+\biggl(\frac{\int{(1-u)w}}{\int{(1-u)}}-w\biggr)\biggr]. $$(18)

If c_{2} is equal to zero:$$ u_t=2\epsilon\Delta u-\frac{1}{\epsilon}W'(u)+\mu\biggl[ wu+\biggl(w-\frac{\int{u w}}{\int{u}}\biggr)\biggr]. $$(19)
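For concreteness, one explicit-Euler step of the gradient flow (18), the c_1 = 0 case, might look as follows. The double well W(u) = u²(1−u)², the five-point Laplacian with periodic boundaries, and all parameter values are assumptions of this sketch, not prescriptions from the text.

```python
import numpy as np

def laplacian(u):
    """Five-point Laplacian with periodic boundary conditions, h = 1."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

def dW(u):
    """Derivative of the double well W(u) = u^2 (1-u)^2."""
    return 2 * u * (1 - u) * (1 - 2 * u)

def euler_step_c1_zero(u, w, eps, mu, dt):
    """One explicit-Euler step of Eq. (18), the c1 = 0 case of the flow."""
    avg_out = ((1 - u) * w).sum() / (1 - u).sum()   # mean of w outside Sigma
    rhs = (2 * eps * laplacian(u) - dW(u) / eps
           + mu * (w * (u - 1) + (avg_out - w)))
    return u + dt * rhs
```

In practice an explicit step like this is only stable for small dt; the thresholding dynamics of the next section avoid that restriction.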
3.2 Special Case Model
4 Proposed Dynamics

Step 1. Let v(x)=S(δt)u_{n}(x), where S(δt) is the propagator by time δt of the equation $$y_t=\Delta y-A\bigl(y(\cdot,t)\bigr)y+B\bigl(y(\cdot,t)\bigr), $$ with appropriate boundary conditions.

Step 2. Set$$u_{n+1}(x) = \left \{ \begin{array}{l@{\quad}l} 0 & \text{if $v(x) \in(-\infty,\frac{1}{2}]$ }\\[6pt] 1 & \text{if $v(x) \in(\frac{1}{2},\infty)$ }\\ \end{array} \right . $$
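The propagation step above solves a heat equation with lower-order terms. As a sketch, one semi-implicit spectral substep for an equation of the form y_t = Δy − a·y + b could be written as below, where the scalars `a` and `b` stand in for frozen values of the functionals A(y(·,t)) and B(y(·,t)); periodic boundary conditions and the spectral discretization are assumptions of this sketch.

```python
import numpy as np

def substep(y, a, b, dt):
    """One semi-implicit substep for y_t = Laplacian(y) - a*y + b:
    the Laplacian and the linear term are treated implicitly in
    Fourier space (periodic BCs), the source term b explicitly."""
    n = y.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    yhat = np.fft.fft2(y + dt * b)
    yhat /= 1 + dt * (kx**2 + ky**2 + a)
    return np.fft.ifft2(yhat).real

# Repeated substeps drive a uniform state toward the balance value b/a.
y = np.zeros((16, 16))
for _ in range(60):
    y = substep(y, a=2.0, b=4.0, dt=0.5)
```

Treating the stiff Laplacian implicitly is what allows the sub-timestep to be adapted aggressively, as discussed in the implementation section below.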
5 Numerical Implementation
In this implementation, the data function w is used as an initial condition, along with Dirichlet or Neumann boundary conditions.
5.1 Adaptive Timestepping
The sub-timestep used in the propagation phase can be chosen to optimize performance. Early in the computation it is important to keep the sub-timestep small in order to obtain a good estimate in the propagation phase. However, as the algorithm approaches steady state, the large number of iterations in the propagation phase becomes a burden on the computational time. To speed up the convergence of our algorithm, we therefore use adaptive timestepping, a modified form of the scheme proposed in [1].
We use adaptive timestepping at two different levels: within the propagation phase of each iteration we adapt the sub-timestep, and between iterations we adapt the initial sub-timestep used in subsequent propagation phases. In the propagation phase of any iteration we calculate a dimensionless truncation-error estimate for different propagation times; once this error is smaller than a given tolerance Tol_{1} for a certain number of consecutive iterations, we increase the sub-timestep by 10 %. We also estimate the dimensionless error in every iteration of the algorithm, and if it is smaller than Tol_{2}, the initial sub-timestep in the propagation phase of the next iteration is increased by 10 %. However, we never allow the initial sub-timestep to be larger than \(\frac{1}{8}\) of the timestep. Notice that we are not adapting the timestep itself: the total propagation time in each iteration is the same.
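The bookkeeping described above can be sketched as a small controller. The error estimator, the tolerances, and the streak length here are placeholders (the actual dimensionless truncation-error estimate is the one defined in [1]):

```python
def adapt_subtimestep(sub_dt, err, tol, streak, streak_needed=3, cap=None):
    """Grow sub_dt by 10% once the error estimate err has stayed below
    tol for streak_needed consecutive checks; cap is an optional upper
    bound (e.g. timestep/8 for the initial sub-timestep).
    Returns the updated (sub_dt, streak)."""
    if err < tol:
        streak += 1
        if streak >= streak_needed:
            sub_dt *= 1.1
            streak = 0
    else:
        streak = 0          # a bad step resets the streak
    if cap is not None:
        sub_dt = min(sub_dt, cap)
    return sub_dt, streak
```

The same routine serves both levels of adaptivity: per-substep inside the propagation phase, and per-iteration for the initial sub-timestep, the latter with `cap` set to one eighth of the timestep.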
5.2 Adaptive Resolution
Another way to improve the computational time is to use adaptive resolution. As mentioned before, we use the data function w as the initial condition when solving equation (23). It is reasonable to assume that the more the initial condition “resembles” the solution, the fewer iterations the algorithm takes to obtain it. The main idea is to generate a lower-resolution form of the data set, and then use the low-resolution solution to create a good initial guess for the high-resolution problem. Providing a good initial guess for the higher-resolution problem is particularly useful, as iterations on the higher-resolution versions of the data set tend to be slower.

In this implementation, we typically applied this procedure several times on sparse data sets. At each step we create a coarser form of the given data set, until we reach a version with a satisfactory density; our experiments show that data sets with a total density between 0.05 and 0.2 are optimal for this algorithm. Once a sufficiently dense low-resolution version of the data set is obtained, we run our algorithm to get the low-resolution solution and work our way back up: a higher-resolution approximation of the solution is generated and used as the initial condition for the problem at the next, higher-resolution level. It is important to mention that this process does not alter the original data set. We call this process n-step adaptive resolution, where n is the total number of times we reduce the resolution of the original data set. The number of steps n is closely related to the choice of timestep: when segmenting the region of higher density in our data we noticed, through multiple experiments, that the timestep can often be taken as ω2^{n}, where n is the number of levels in adaptive resolution and ω∈[0.15,0.2].
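A minimal sketch of the coarsening and refinement steps described above (2×2 block sums for the data, nearest-neighbour upsampling for the solution; both concrete choices are assumptions of this sketch):

```python
import numpy as np

def coarsen(w):
    """Halve the resolution of the data function w by summing 2x2
    blocks, so the total number of events is preserved."""
    n, m = w.shape
    return w.reshape(n // 2, 2, m // 2, 2).sum(axis=(1, 3))

def refine(u):
    """Double the resolution of a segmentation u by repeating each
    cell, giving an initial guess for the finer-level problem."""
    return np.repeat(np.repeat(u, 2, axis=0), 2, axis=1)
```

On an n-step run one would call `coarsen` n times, solve at the coarsest level, and then alternate `refine` with a re-solve on the way back up; the original data set itself is never modified.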
When locating the valid region, we usually allow a smaller timestep but a larger number of levels in adaptive resolution. However, starting from a problem whose resolution is significantly lower than the original one can cause problems: decreasing the resolution too much may produce a very different-looking data set, so the segmentation would not perform as expected, i.e. the first initial guess would not be a good approximation of the solution we are trying to find.
6 Computational Examples
6.1 Test Shapes
6.2 Orange County Coastline
6.3 San Fernando Valley Residential Burglary Data
7 V-Fold Cross Validation
8 Conclusion
This work demonstrates that threshold dynamics methods for image segmentation are a powerful tool for statistical density estimation in problems involving two-dimensional geographic information. The efficiency of the method, especially when combined with multiresolution techniques, makes it a practical choice for parameter estimation involving V-fold cross validation, especially when parallel platforms are available. The method is a binary segmentation method that also determines density values; however, it can be naturally generalized to multilevel segmentation. One way to achieve this may be to represent the segmentation function as a linear combination of multiple binary components, similarly to the idea used for generalizing binary to grayscale inpainting in [7]. However, this requires sufficient data to warrant a multilevel segmentation.
Acknowledgements
This paper is dedicated to Prof. Stanley Osher on the occasion of his 70th birthday. We would like to thank Laura Smith and George Mohler for their helpful comments. This work was supported by ONR grant N00014-12-1-0040, ONR grant N00014-10-1-0221, AFOSR MURI grant FA9550-10-1-0569, NSF grant DMS-0968309 and ARO grant W911NF-10-1-0472, reporting number 58344-MA.
Open Access
This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.