Detecting Curved Edges in Noisy Images in Sublinear Time

Journal of Mathematical Imaging and Vision

Abstract

Detecting edges in noisy images is a fundamental task in image processing. Motivated, in part, by various real-time applications that involve large and noisy images, in this paper we consider the problem of detecting long curved edges under extreme computational constraints that allow processing of only a fraction of all image pixels. We present a sublinear algorithm for this task, which runs in two stages: (1) a multiscale scheme to detect curved edges inside a few image strips; and (2) a tracking procedure to estimate their extent beyond these strips. We theoretically analyze the runtime and detection performance of our algorithm and empirically illustrate its competitive results on both simulated and real images.

References

  1. Achieser, N.I.: Theory of Approximation. Dover Publications (1992)

  2. Arbelaez, P., Maire, M., Fowlkes, C., Malik, J.: Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. 33(5), 898–916 (2011)

  3. Arias-Castro, E., Efros, B., Levi, O.: Networks of polynomial pieces with application to the analysis of point clouds and images. J. Approx. Theory 1(162), 94–130 (2010)

  4. Brandt, A., Dym, J.: Fast calculation of multiple line integrals. SIAM J. Sci. Comput. 20(4), 1417–1429 (1999)

  5. Canny, J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. 8(6), 679–698 (1986)

  6. Deriche, R.: Using Canny’s criteria to derive a recursively implemented optimal edge detector. Int. J. Comput. Vis. 1(2), 167–187 (1987)

  7. Desolneux, A., Moisan, L., Morel, J.M.: Edge detection by Helmholtz principle. J. Math. Imaging Vis. 14(3), 271–284 (2001)

  8. Dollár, P., Zitnick, C.: Structured forests for fast edge detection. In: IEEE International Conference on Computer Vision, pp. 1841–1848 (2013)

  9. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 2nd edn. Prentice Hall (2002)

  10. Haupt, J., Castro, R., Nowak, R.: Distilled sensing: adaptive sampling for sparse detection and estimation. IEEE Trans. Inform. Theory 57(9), 6222–6235 (2011)

  11. Horev, I., Nadler, B., Arias-Castro, E., Galun, M., Basri, R.: Detection of long edges on a computational budget: a sublinear approach. SIAM J. Imaging Sci. 8(1), 458–483 (2015)

  12. Kimmel, R., Bruckstein, A.M.: Regularized Laplacian zero crossings as optimal edge integrators. Int. J. Comput. Vis. 53(3), 225–243 (2003)

  13. Kiryati, N., Eldar, Y., Bruckstein, A.M.: A probabilistic Hough transform. Lect. Notes Comput. Sci. 24(4), 303–316 (1991)

  14. Kiryati, N., Kälviäinen, H., Alaoutinen, S.: Randomized or probabilistic Hough transform: unified performance evaluation. Pattern Recogn. Lett. 21(13), 1157–1164 (2000)

  15. Kleiner, I., Keren, D., Newman, I., Ben-Zwi, O.: Applying property testing to an image partitioning problem. IEEE Trans. Pattern Anal. 33(2), 256–265 (2011)

  16. Konishi, S., Yuille, A., Coughlan, J., Zhu, S.C.: Statistical edge detection: learning and evaluating edge cues. IEEE Trans. Pattern Anal. 25(1), 57–74 (2003)

  17. Korman, S., Reichman, D., Tsur, G., Avidan, S.: Fast-match: fast affine template matching. In: Proceedings of the CVPR IEEE, pp. 2331–2338 (2013)

  18. Laurent, B., Massart, P.: Adaptive estimation of a quadratic functional by model selection. Ann. Statist. 28(5), 1302–1338 (2000)

  19. Lebrun, M., Colom, M., Buades, A., Morel, J.: Secrets of image denoising cuisine. Acta Numer. 21, 475–576 (2012)

  20. Liu, C., Freeman, W., Szeliski, R., Kang, S.: Noise estimation from a single image. In: Proceedings of the CVPR IEEE, vol. 1, pp. 901–908. IEEE (2006)

  21. Marr, D., Hildreth, E.: Theory of edge detection. Philos. Trans. R. Soc. B 207(1167), 187–217 (1980)

  22. Ofir, N., Galun, M., Nadler, B., Basri, R.: Fast detection of curved edges at low SNR. In: Proceedings of the CVPR. IEEE (2016)

  23. Rao, K.R., Ben-Arie, J.: Optimal edge detection using expansion matching and restoration. IEEE Trans. Pattern Anal. 16(12), 1169–1182 (1994)

  24. Raskhodnikova, S.: Approximate testing of visual properties. In: Lecture Notes in Computer Science, pp. 370–381. Springer (2003)

  25. Rousseeuw, P., Croux, C.: Alternatives to the median absolute deviation. J. Am. Stat. Assoc. 88(424), 1273–1283 (1993)

  26. Rubinfeld, R., Shapira, A.: Sublinear time algorithms. SIAM J. Discrete Math. 25(4), 1562–1588 (2011)

  27. Shen, J., Castan, S.: An optimal linear operator for step edge detection. Graph. Models Image Process. 54(2), 112–133 (1992)

  28. Tsur, G., Ron, D.: Testing properties of sparse images. In: International Journal in Foundations of Computer Science, pp. 468–477. IEEE (2010)

  29. von Gioi, R.G., Jakubowicz, J., Morel, J.M., Randall, G.: On straight line segment detection. J. Math. Imaging Vis. 32(3), 313–347 (2008)

  30. von Gioi, R.G., Jakubowicz, J., Morel, J.M., Randall, G.: LSD: a fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. 32(4), 722–732 (2008)

  31. Willett, R., Martin, A., Nowak, R.: Backcasting: adaptive sampling for sensor networks. In: Lecture Notes in Computer Science, pp. 124–133. ACM (2004)

  32. Xu, L., Oja, E., Kultanen, P.: A new curve detection method: randomized Hough transform (RHT). Pattern Recogn. Lett. 11(5), 331–338 (1990)

Acknowledgments

Funding was provided by the Division of Mathematical Sciences (Grant No. 0706816).

Author information

Corresponding author

Correspondence to Boaz Nadler.

Additional information

Alain Trouvé and Yali Amit were partially supported by NSF-DMS 0706816.

Appendices

Appendix 1: Consistency Test

Let \((y_k)_{1\le k\le L}\) be a Gaussian random vector

$$\begin{aligned} y_k = \mu _k + \xi _k, \quad k = 1, \ldots , L \end{aligned}$$

where the noise terms \((\xi _k)_{1\le k\le L}\) are i.i.d. with zero mean and known variance \(\sigma ^2\). Under the null hypothesis, its mean vector satisfies \(\mu _1 = \mu _2 = \cdots = \mu _L\). Then the test statistic

$$\begin{aligned} T = \frac{1}{\sigma ^2}\sum _{k = 1}^L \left( y_k - \frac{1}{L}\sum _{i=1}^L y_i\right) ^2 \end{aligned}$$

follows the Chi-squared distribution with \(L-1\) degrees of freedom. We reject the null hypothesis when

$$\begin{aligned} T \ge L - 1 + 2\ln \delta ^{-1} + 2\sqrt{(L-1)\ln \delta ^{-1}}. \end{aligned}$$

The false positive rate of this test is at most \(\delta \) by the tail bound [18]

$$\begin{aligned} \forall t > 0, \;\; \mathbb {P}\left( \frac{T - (L-1)}{L-1} \ge 2t + 2 \sqrt{t}\right) \le e^{-(L-1)t}. \end{aligned}$$
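As an illustration, the test and its Laurent–Massart threshold [18] can be sketched in a few lines of NumPy; the function name and the simulation setup below are ours, not part of the paper:

```python
import numpy as np

def consistency_test(y, sigma, delta):
    """Consistency test of Appendix 1: reject the equal-means hypothesis when
    T exceeds the Laurent-Massart chi-squared tail threshold."""
    y = np.asarray(y, dtype=float)
    L = y.size
    T = np.sum((y - y.mean()) ** 2) / sigma ** 2       # ~ chi2(L - 1) under H0
    log_inv_delta = np.log(1.0 / delta)
    threshold = (L - 1) + 2 * log_inv_delta + 2 * np.sqrt((L - 1) * log_inv_delta)
    return T >= threshold, T, threshold

# Under the null (constant mean), the false positive rate is at most delta.
rng = np.random.default_rng(0)
rejections = sum(
    consistency_test(5.0 + rng.standard_normal(64), sigma=1.0, delta=0.01)[0]
    for _ in range(2000)
)
```

A strong mean shift, e.g. a step of height 5 at the middle of 64 samples, is rejected deterministically, while the simulated null rejection rate stays near the nominal \(\delta\).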

Appendix 2: Endpoint Location

The problem of locating the endpoint t of a candidate edge given its values on an interval of length L can be formulated as follows: let \((y_k)_{1\le k\le L}\) be a Gaussian random vector

$$\begin{aligned} y_k = \mu \cdot 1_{k\le t} + \xi _k, \quad k = 1, \ldots , L \end{aligned}$$

where the noise terms \((\xi _k)_{1\le k\le L}\) are i.i.d., with zero mean and known variance \(\sigma ^2\), but the endpoint t and the contrast \(\mu \ne 0\) are unknown. Their maximum likelihood estimates are

$$\begin{aligned} \mathop {\hbox {argmax}}\limits _{\mu , t} \log p(y_1, \ldots , y_L)= & {} \mathop {\hbox {argmin}}\limits _{\mu , t} \left( \sum _{k = 1}^t (y_k-\mu )^2 + \sum _{k = t+1}^L y_k^2\right) \\= & {} \mathop {\hbox {argmin}}\limits _{\mu , t}\left( \mu ^2 t - 2\mu \sum _{k = 1}^t y_k \right) . \end{aligned}$$

In particular, the estimated endpoint is

$$\begin{aligned} t^* = \mathop {\hbox {argmax}}\limits _{1\le t\le L} \frac{\left( \sum _{k=1}^t y_k\right) ^2}{t}. \end{aligned}$$
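A minimal NumPy sketch of this estimator (the function name is ours):

```python
import numpy as np

def estimate_endpoint(y):
    """Maximum likelihood endpoint estimate (Appendix 2):
    t* = argmax_{1<=t<=L} (sum_{k<=t} y_k)^2 / t, returned as a 1-based index."""
    y = np.asarray(y, dtype=float)
    partial_sums = np.cumsum(y)
    t = np.arange(1, y.size + 1)
    return int(np.argmax(partial_sums ** 2 / t)) + 1

# Noiseless sanity check: a step of contrast mu = 2 ending at t = 40.
signal = np.concatenate([2.0 * np.ones(40), np.zeros(24)])
```

On the noiseless step, \((\sum_{k\le t} y_k)^2/t = 4t\) grows up to \(t = 40\) and decays afterwards, so the estimator recovers the true endpoint; note the formula is invariant to the sign of \(\mu\).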

Appendix 3: Proofs

Proof of Lemma 1

Let \(p(x)\) be the degree \(r\) Taylor expansion of g around the middle point \(x_{L}\) of \(I = [x_0, x_{2L}],\)

$$\begin{aligned} p(x) = \sum _{k=0}^r \frac{g^{(k)}(x_L)}{k!} (x-x_L)^k. \end{aligned}$$

Since \(L = \lambda (I){/}2\), its approximation error satisfies

$$\begin{aligned} \sup _{x\in I}|p(x)- g(x)| \le \frac{b_{r+1}}{n^r}L^{r+1}. \end{aligned}$$

From the coefficient quantization formula (8), it follows that

$$\begin{aligned} \left| \frac{a_k}{M_qL^k} - \frac{g^{(k)}(x_L)}{k!}\right| \le \frac{1}{2M_qL^k}, \quad k = 0, \ldots , r. \end{aligned}$$

Hence, writing \(p_q = \mathcal {P}_{I, M_q}^s(g)\), we find

$$\begin{aligned} \sup _{x\in I}|p(x) - p_q(x)| \le \frac{r+1}{2M_q}. \end{aligned}$$

An application of the triangle inequality concludes the proof. \(\square \)

Proof of Lemma 2

By definition, for each \(g \in \mathcal {R}_{\varvec{b}, n}\), there is a \(\varvec{b}\)-regular function f such that \(g(x) = nf(x{/}n)\). According to (8), the coefficients of its symmetric approximator are

$$\begin{aligned} a_k = \left\lfloor \frac{f^{(k)}(c{/}n)\lambda (I)^kM_q}{k!2(2n)^{k-1}} + \frac{1}{2}\right\rfloor , \quad k = 1, \ldots , r. \end{aligned}$$

Since \(|\lfloor z+\tfrac{1}{2}\rfloor |\le |z|+\tfrac{1}{2}\) holds for all \(z \in \mathbb {R}\), we deduce

$$\begin{aligned} |a_k| \le \left| \frac{f^{(k)}(c{/}n)\lambda (I)^kM_q}{k!2(2n)^{k-1}}\right| + \frac{1}{2}, \quad k = 1, \ldots , r. \end{aligned}$$

Now, since \(f\) is \(\varvec{b}\)-regular,

$$\begin{aligned} |a_k| \le \frac{b_k\lambda (I)^kM_q}{2(2n)^{k-1}} + \frac{1}{2} \quad k = 1, \ldots , r. \end{aligned}$$

As the coefficients \(a_k\) are all integer valued, Eq. (12) follows. \(\square \)

Proof of Lemma 3

First consider the case \(b_1 > 0\) and \(b_k = 0\) for \(k\ge 2\). This corresponds to the assumption that image edges can be well approximated by straight segments. In this case, \(a_{0}\) can have \(nM_{q}\) different values, whereas by Eq. (12), the coefficient \(a_{1}\) can have approximately \(b_{1}M_qn^\kappa {/}2\) different values, and \(a_k = 0\) for \(k\ge 2\). Hence, in this case, the polynomial search space has \(O(n^{1+\kappa })\) candidate curves which are all linear.

Next, consider the general case of curved smooth edges with \(b_k > 0, \; k = 1, \ldots , r\). Here, as above, the coefficients \(a_0\) and \(a_1\) still have O(n) and \(O(n^\kappa )\) possible values. By Eq. (12), the higher-order coefficients have \(O(n^{k\kappa }/n^{k-1})\) possibilities, provided that \(1+k(\kappa -1)\ge 0\); otherwise they have a constant number of possible values, independent of \(n\). Hence in the general case its size is \(O\left( n^{\kappa +1+\sum _{k=2}^r (1+k(\kappa -1))_+}\right) .\)

Let us analyze the behavior of this expression. For \(\kappa \in [0,1{/}2]\), for all \(k\ge 2\), \(1+k(\kappa -1)\le 0\) and we obtain \(O(n^{1+\kappa })\). For \(\kappa \in [1{/}2,2{/}3]\) the term with \(k=2\) also contributes and yields an overall size \(O(n^{3\kappa })\). A value of \(\kappa >2{/}3\) yields a search space of size larger than \(O(n^2)\) and thus not relevant for sublinear edge detection. Thus, Eq. (14) follows. \(\square \)
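As a sanity check on this case analysis, the search-space exponent can be evaluated directly; this is our own illustrative sketch (the function name is ours):

```python
def search_space_exponent(kappa, r=5):
    """Exponent of n in the size of the polynomial search space (Lemma 3):
    kappa + 1 + sum_{k=2}^r (1 + k*(kappa - 1))_+ ."""
    return kappa + 1 + sum(max(0.0, 1 + k * (kappa - 1)) for k in range(2, r + 1))
```

For \(\kappa \in [0, 1/2]\) only the constant and linear coefficients survive (exponent \(1+\kappa\)), while on \([1/2, 2/3]\) the quadratic coefficient contributes and the exponent equals \(3\kappa\), matching Eq. (14).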

Next, to prove Theorem 1 we shall make use of the following two auxiliary results:

Lemma 8

Let h(z) be a twice differentiable function defined on a closed interval \([z_1, z_2]\). Let l(z) be its linear interpolant such that \(l(z_i) = h(z_i), \; i = 1, 2\). Then

$$\begin{aligned} \sup _{z\in [z_1, z_2]}|h(z) - l(z)| \le \frac{(z_2-z_1)^2}{8}\sup _{z\in [z_1, z_2]}|h^{(2)}(z)|. \end{aligned}$$

Proof

Define for any \(t \in (z_1, z_2)\)

$$\begin{aligned} A_t(z) = h(z) - l(z) - \frac{h(t) - l(t)}{(t-z_1)(t-z_2)}(z-z_1)(z-z_2). \end{aligned}$$

By construction, \(A_t(z)\) is twice differentiable and has at least three roots \(\{z_1, t, z_2\}\). Applying Rolle’s theorem twice, there exists \(\theta _t \in (z_1, z_2)\) such that \(A_t''(\theta _t) = 0\) or equivalently

$$\begin{aligned} h''(\theta _t) - 2\frac{h(t) - l(t)}{ (t-z_1)(t-z_2)} = 0. \end{aligned}$$

It follows that

$$\begin{aligned} |h(t) - l(t)| \le \frac{(z_2-z_1)^2}{8} \sup _{z\in [z_1, z_2]}|h''(z)|. \end{aligned}$$

\(\square \)
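A quick numerical check of this classical bound, with our own helper and \(h = \sin\) on \([0, 1]\) (so \(\sup|h''| \le 1\) and the bound is \(1/8\)):

```python
import numpy as np

def interpolation_gap(h, h2_sup, z1, z2, samples=10_001):
    """Compare sup |h - l| on [z1, z2] with the Lemma 8 bound
    (z2 - z1)^2 / 8 * sup |h''|  (numerical check; names are ours)."""
    z = np.linspace(z1, z2, samples)
    # linear interpolant matching h at the two endpoints
    l = h(z1) + (h(z2) - h(z1)) * (z - z1) / (z2 - z1)
    gap = float(np.max(np.abs(h(z) - l)))
    bound = (z2 - z1) ** 2 / 8 * h2_sup
    return gap, bound

gap, bound = interpolation_gap(np.sin, h2_sup=1.0, z1=0.0, z2=1.0)
```

The observed gap is strictly below the bound, as Lemma 8 guarantees; the bound is attained only by functions with constant second derivative.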

Lemma 9

Assume \(\lambda (I)=2md\) for some integers d and m. Let \((x'_k)_{0\le k \le 2m}\) be the \(2m+1\) equally spaced integers with a spacing of d in the interval \(I= [x'_0, x'_{2m}]\). For any function \(g \in \mathcal {R}_{\varvec{b}, n}\), let \(p_q\) be its symmetric approximator of degree r on the same interval. Then the piecewise linear function \(\ell \) that interpolates \(p_q\) at the grid points \(\{x'_0, \ldots , x'_{2m}\}\) satisfies

$$\begin{aligned} \sup _{x\in I} |\ell (x) - p_q(x)| \le \tfrac{1}{8M_qm^2}\sum _{k=2}^r k(k-1)\left\lfloor \tfrac{b_k\lambda (I)^kM_q}{2(2n)^{k-1}} + \tfrac{1}{2} \right\rfloor . \end{aligned}$$

Proof of Lemma 9

Since the piecewise linear approximation \(\ell (x)\) interpolates the polynomial \(p_q\) at \(2m+1\) equidistant points of the interval I, it follows from Lemma 8 that

$$\begin{aligned} \sup _{x\in I}|p_q(x)-\ell (x)| \le \frac{d^2}{8}\sup _{x\in I}|p_q^{(2)}(x)| \end{aligned}$$
(29)

Combining this with the definition of \(p_q(x)\), Eq. (8) and the fact that \(L=md\) gives that

$$\begin{aligned} \sup _{x\in I}|p_q(x)-\ell (x)| \le \frac{1}{8M_qm^2}\sum _{k=2}^r |a_k| k(k-1). \end{aligned}$$

The bound (12) on the coefficients \(a_{k}\) yields the lemma. \(\square \)

Proof of Theorem 1

Let \(g\in \mathcal {R}_{\varvec{b}, n}\), let \(p_{q}\) be its symmetric approximator and let \(\ell (x)\) be its piecewise linear interpolant. By the triangle inequality in the interval I,

$$\begin{aligned} \Vert g-\ell \Vert _\infty \le \Vert g-p_q\Vert _\infty + \Vert p_q-\ell \Vert _\infty \end{aligned}$$

By Lemma 1, the first term on the right-hand side is bounded by (16), whereas by Lemma 9 the second term is bounded by (17). Hence, Eq. (15) of the theorem readily follows.

Next, let us consider how large the spacing \(d\) can be. On the one hand, we would like \(d\) to be as large as possible, as this leads to significant gains in the runtime of the line integral algorithm [4]. In detail, this method calculates the edge responses of all the linear candidate curves in a \(d\times n\) strip in \(O(nd\log d)\) operations, instead of \(O(nd^2)\) by the naive method. While this recursion may induce an approximation error [4], at high noise levels its effect is negligible.

On the other hand, a larger spacing \(d\) yields a larger approximation error of the piecewise linear interpolation. For large values of n, the asymptotically dominant error term in (17) is the first summand with \(k=2\). This error term is approximately equal to \(b_2d^2{/}(4n)\). More precisely,

$$\begin{aligned} \left| \frac{1}{4M_qm^2}\left\lfloor \frac{b_2\lambda (I)^2M_q}{4n} + \frac{1}{2} \right\rfloor - \frac{b_2d^2}{4n} \right| \le \frac{1}{8M_qm^2}. \end{aligned}$$

For this term to be small, the spacing d should thus grow no faster than \(O(\sqrt{n})\).

Armed with these results, we now consider the number of operations to calculate all the edge responses of linearly interpolated candidate curves in the polynomial search space. We first calculate all the linear responses in the 2m contiguous sub-strips of width d. Then, for each candidate curve we sum the corresponding responses to form the final output.

The first step costs \(O(mnd\log d)\) operations. For the second step, consider first the case \(\kappa \in (1{/}2, 2{/}3)\). According to (14), the number of polynomial candidate curves is \(O(n^{3\kappa })\). Hence computing their edge responses requires \(O(mn^{3\kappa })\) additional operations. Since \(\lambda (I)=2md\), the overall complexity is \(O(mnd\log d+mn^{3\kappa })=O(n^{1+\kappa }\log d +n^{4\kappa }{/}d)\). To minimize the complexity without sacrificing accuracy, we thus choose \(d = O(\sqrt{n})\). At this value of \(d\), the first term is negligible and the overall complexity is \(O(n^{4\kappa -1/2})\). For \(\kappa \in [0, 1{/}2]\), a similar analysis shows that with \(d = O(n^{\kappa })\), the complexity is \(O(n^{1+\kappa }\log n)\). As a result, the sublinear constraint implies an upper bound on the strip width exponent, \(\kappa \in [0, 5{/}8)\). \(\square \)
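The case analysis above can be condensed into a small helper; this is our own illustrative sketch (function name and the dropped log factors are our choices):

```python
def runtime_exponent(kappa):
    """Exponent e with strip-processing cost O(n^e) up to log factors
    (proof of Theorem 1): for kappa <= 1/2 take spacing d = n^kappa,
    giving e = 1 + kappa; for kappa in (1/2, 2/3) take d = sqrt(n),
    giving e = 4*kappa - 1/2."""
    if not 0 <= kappa < 2 / 3:
        raise ValueError("analysis covers kappa in [0, 2/3)")
    return 1 + kappa if kappa <= 0.5 else 4 * kappa - 0.5
```

At \(\kappa = 5/8\) the exponent reaches 2, i.e., the cost matches the \(n^2\) pixel count, which is why sublinearity forces \(\kappa < 5/8\).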

Proof of Lemma 5

As soon as the reference curve \(\ell _J^*\) defined in Eq. (32) satisfies

$$\begin{aligned} |\mathbb {E}R(\ell ^*_J)| \ge \tau _d + c_\beta \end{aligned}$$
(30)

where \(c_\beta \) is the standard normal distribution’s \(\beta \)-quantile, the probability of the event \(\{s^* = +\infty \}\), i.e., that no candidate curve fires, is less than \(\beta \). A sufficient condition for Eq. (30) to hold can be obtained using Eq. (7)

$$\begin{aligned} |\mathbb {E}R(\ell ^*_J)|\ge \frac{\mu _{{\varGamma }(g)} (\omega - \gamma ^*) \sqrt{L}}{\sigma \sqrt{2\omega }} \ge \tau _d + c_{\beta } \end{aligned}$$

with \(L = 2^{J}+1\) the number of columns of the full strip. Rearranging this inequality, we find

$$\begin{aligned} \frac{\mu _{{\varGamma }(g)}}{\sigma } \ge \frac{\sqrt{2\omega } (\tau _d + c_\beta )}{(\omega - \gamma ^*) \sqrt{L}}. \end{aligned}$$

The strip width \(L = O(n^{\kappa })\) and the detection threshold \(\tau _d = O(\sqrt{\ln n})\) thus imply the asymptotic minimal detectable edge SNR \(=O(n^{-\kappa /2}\sqrt{\ln n})\). \(\square \)

Proof of Theorem 2

If a pixel in the middle column fires, its signal must be nonzero on the high probability event

$$\begin{aligned} \left\{ \forall h \in \mathcal {T}_J, \;\; |R(h) - \mathbb {E}R(h)| \le \sqrt{2\ln \tfrac{|\mathcal {T}_J|}{\alpha }} \right\} . \end{aligned}$$
(31)

Consequently, its covering ratio is 1 with high probability.

Next, consider the case where \(1\le s^*<\infty .\) The goal is to show that, asymptotically, the curve \(\bar{\ell }_{s^*}\) enjoys a covering ratio close to \(1-\gamma ^*{/}\omega \) with high probability. To this end, we first lower bound the covering ratio in terms of edge responses and then prove that when n is large, this lower bound approaches \(1-\gamma ^*{/}\omega \) with high probability.

First, we define the following \(J+1\) reference curves

$$\begin{aligned} \ell ^*_s = \mathop {\hbox {argmax}}\limits _{\rho (g, \ell _s)=1} |\mathbb {E}R(\ell _s)|,\quad \quad s=0,1,\ldots ,J. \end{aligned}$$
(32)

Their existence is guaranteed by Assumption (20). Let \(L_s\) be the number of columns in the scale s sub-strip. Eq. (7) implies

$$\begin{aligned} |\mathbb {E}R(\ell _s^*)| \sigma \sqrt{2\omega L_s}\ge (\omega - \gamma ^*) \mu _{{\varGamma }(g)} L_s. \end{aligned}$$
(33)

By definition, any scale s candidate curve \(\ell _s\) also satisfies

$$\begin{aligned} |\mathbb {E}R(\ell _s)| \sigma \sqrt{2\omega L_s}\le \omega \mu _{{\varGamma }(g)} \rho (g, \ell _s) L_s \end{aligned}$$
(34)

Combining (33) and (34) gives the following deterministic lower bound on the covering ratio

$$\begin{aligned} \rho (g, \ell _s) \ge \frac{\omega - \gamma ^*}{\omega } \left| \frac{\mathbb {E}R(\ell _s)}{\mathbb {E}R(\ell _s^*)}\right| , \quad s = 0, \ldots , J. \end{aligned}$$
(35)

Two straightforward triangle inequalities then lead to

$$\begin{aligned} \rho (g, \ell _s) \ge \frac{\omega - \gamma ^*}{\omega } \frac{|R(\ell _s)|-|R(\ell _s)-\mathbb {E}R(\ell _s)|}{|R(\ell _s^*)| + |R(\ell _s^*)-\mathbb {E}R(\ell _s^*)|} \end{aligned}$$

As the relation \(|R(\bar{\ell }_s)| \ge |R(\ell _s^*)|\) always holds, we obtain

$$\begin{aligned} \rho (g, \bar{\ell }_s) \ge \frac{\omega - \gamma ^*}{\omega } \frac{|R(\bar{\ell }_s)|-|R(\bar{\ell }_s)-\mathbb {E}R(\bar{\ell }_s)|}{|R(\bar{\ell }_s)| + |R(\ell _s^*)-\mathbb {E}R(\ell _s^*)|} \end{aligned}$$

which then implies

$$\begin{aligned} \rho (g, \bar{\ell }_s) \ge \frac{\omega - \gamma ^*}{\omega } \left( 1- \frac{ v_{s}^* + \bar{v}_s}{|R(\bar{\ell }_s)|}\right) , \end{aligned}$$

where \(v_s^* \,{:}{=}\, |R(\ell _s^*)-\mathbb {E}R(\ell _s^*)|\) and \(\bar{v}_s \,{:}{=}\, |R(\bar{\ell }_s)-\mathbb {E}R(\bar{\ell }_s)|\).

To get the desired result, it suffices to show that when n is large, on a high probability event restricted to \(\{s^* < +\infty \}\), both \(v_{s^*}^*\) and \(\bar{v}_{s^*}\) are small compared to \(|R(\bar{\ell }_{s^*})|\).

Though unknown, the reference curves \(\ell _s^*\) are deterministic. Hence, it follows from Lemma 4 that \(\max _{0\le s\le J}v^*_{s}\) is \(O_P(\sqrt{\ln J}) = O_P(\sqrt{\ln \ln n})\). Since \(v^*_{s^*} \le \max _{0\le s\le J}v^*_{s}\), with high probability \(v^*_{s^*}\) is asymptotically negligible compared to the detection threshold \(\tau _d = O(\sqrt{\ln n})\), and hence also compared to \(|R(\bar{\ell }_{s^*})|\), which by definition exceeds \(\tau _d\).

In contrast, at each scale, the candidate curve with the maximal absolute edge response is random. To control \(\bar{v}_{s^*}\), we show that such candidate curves can only belong to a certain set whose size we upper bound; Lemma 4 can then be applied. To this end, we prove that, thanks to the offset \({\varDelta }\) in the detection threshold, with high probability any firing candidate curve has a guaranteed minimal covering ratio with the edge \(g\) (Lemma 10), and there are at most \(O((\ln n)^{3})\) such candidate curves at any fixed scale (Theorem 3). Theorem 2 now follows. \(\square \)

Lemma 10

On a high probability event, any candidate curve \(\ell \) firing at scale \(s^* \in \{1, \ldots , J\}\), namely \(|R(\ell )|>\tau _d\), satisfies

$$\begin{aligned} \rho (g, \ell ) \ge \frac{(\omega - \gamma ^*)^2 {\varDelta }}{2\sqrt{3}\omega ^2\tau _d}. \end{aligned}$$
(36)

Theorem 3

Let \(g \in \mathcal {R}_{\varvec{b}, n}\). For any \(\kappa \in [0, 5{/}8)\), in a strip of \(O(n^\kappa )\) columns, there is a constant C independent of n and \(\kappa \) such that

$$\begin{aligned} |\{\ell \in \mathcal {S}(\varvec{b}, I, M_q, m, n) \; |\; \rho (g, \ell ) > \rho \}| \le \frac{C}{\rho ^{6}}. \end{aligned}$$

Proof of Lemma 10

Equations (33) and (34) not only yield a lower bound on the covering ratio (35), they also bound the maximal expected edge responses of two successive scales

$$\begin{aligned} |\mathbb {E}R(\ell ^*_s)| \le \frac{\sqrt{3} \omega |\mathbb {E}R(\ell ^*_{s-1})| }{(\omega - \gamma ^*)}, \quad s = 1, \ldots , J \end{aligned}$$
(37)

This holds, because by definition, \(L_s\le 3 L_{s-1}\), for all \(s\ge 1\), with equality at \(s=1\). Combining Eqs. (37) and (35) gives

$$\begin{aligned} \rho (g, \ell _s) \ge \frac{\omega - \gamma ^*}{\omega } \left| \frac{\mathbb {E}R(\ell _s)}{\mathbb {E}R(\ell _s^*)}\right| \ge \frac{(\omega - \gamma ^*)^2}{\sqrt{3}\omega ^2} \left| \frac{\mathbb {E}R(\ell _s)}{\mathbb {E}R(\ell _{s-1}^*)}\right| . \end{aligned}$$

Next, applying the triangle inequality gives, \(\forall s \in \{1, \ldots , J\}\)

$$\begin{aligned} \rho (g, \ell _s) \ge \frac{(\omega - \gamma ^*)^2 ( |R(\ell _{s})| - \max _{\ell ' \in \mathcal {T}_J}|R(\ell ') - \mathbb {E}R(\ell ')| )}{\sqrt{3}\omega ^2(|R(\bar{\ell }_{s-1})| + \max _{0\le s\le J}|R(\ell ^*_{s}) - \mathbb {E}R(\ell ^*_{s})|)}, \end{aligned}$$

Let \(\ell \) be any candidate curve firing at scale \(s^* \in \{1, \ldots , J\}\). Then \(|R(\ell )| > \tau _d\), and all candidate curves up to scale \(s^*-1\) failed to fire. Hence, \(\max _{s<s^*}|R(\bar{\ell }_{s})| \le \tau _d\), and

$$\begin{aligned} \rho (g, \ell ) \ge \frac{(\omega - \gamma ^*)^2 ( \tau _d - \max _{\ell ' \in \mathcal {T}_J}|R(\ell ') - \mathbb {E}R(\ell ')| )}{\sqrt{3}\omega ^2(\tau _d + \max _{0\le s\le J}|R(\ell ^*_{s}) - \mathbb {E}R(\ell ^*_{s})| )}. \end{aligned}$$

On the event (31) which according to Lemma 4 has probability at least \(1-\alpha \), we thus have

$$\begin{aligned} \rho (g, \ell ) \ge \frac{(\omega - \gamma ^*)^2 {\varDelta }}{2\sqrt{3}\omega ^2\tau _d}. \end{aligned}$$

\(\square \)

Proof of Theorem 3

The proof of Lemma 3 effectively shows that sublinear runtime requires \(a_k = 0\) for \(k>2\) when \(n \rightarrow \infty \). We thus assume \(r=2\) in the following. Let \(p_q \in \mathcal {S}_p(\varvec{b}, I, M_q, n)\) be such that \(\ell = \mathcal {I}_{m}(p_q)\) intersects the signal tube of an edge \(g \in \mathcal {R}_{\varvec{b}, n}\) at a point \(x \in I\), that is, \(|\ell (x) - g(x)| \le \omega -1\). By Theorem 1 and the triangle inequality,

$$\begin{aligned} |p_q(x) - \mathcal {P}_{I, M_q}^s(g)(x)|\le & {} |p_q(x) - \ell (x)| + |\ell (x) - g(x)| \\&+\, |g(x) - \mathcal {P}_{I, M_q}^s(g)(x)| \le W \end{aligned}$$

where \(W\,{:}{=}\, \omega - 1 + E_1(I) + E_2(I, m)\). Let

$$\begin{aligned} A(p_q)=\sum _{k=1}^L 1_{|\mathcal {P}_{I, M_q}^s(g)(x_k) - p_q(x_k)| \le W} \end{aligned}$$

where \(L\) is the number of strip columns. Hence, we find

$$\begin{aligned} A(p_q)\ge & {} \sum _{k=1}^L 1_{|g(x_k) - \ell (x_k)| \le \omega - 1} = \rho (g, \ell ) L. \end{aligned}$$

To prove the theorem, it suffices to show that there is some constant C independent of n and \(\kappa \) such that for any \(\rho > 0\),

$$\begin{aligned} \left| \left\{ p_q\in \mathcal {S}_p(\varvec{b}, I, M_q, n), \ A(p_q) > \rho L\right\} \right| \le \frac{C}{\rho ^{6}}. \end{aligned}$$

To this end, we study two separate cases \(\rho L \ge 6\) and \(\rho L < 6\).

First assume \(\rho L \ge 6\). Since \(d_q \,{:}{=}\, \mathcal {P}_{I, M_q}^s(g) - p_q\) is a quantized polynomial of degree at most 2, the set of integers \(\{x_k \in I, \; |d_q(x_k)| \le W\}\) can be divided into at most 2 groups of contiguous integers on which \(d_q\) is monotone. The largest group must therefore have at least \(\lceil \rho L {/}2\rceil \) elements. Let \(I' \subset I\) denote the smallest interval containing these integers. Its length satisfies

$$\begin{aligned} \lambda (I') \ge (\rho L{/}2-1)_+ \mathop {\ge }\limits ^{(i)} \rho L {/}3 > \rho \lambda (I) {/}3 \end{aligned}$$
(38)

where inequality (i) holds thanks to the assumption \(\rho L \ge 6\).

The polynomial \(d_q\) is uniformly bounded by W over \(I'\), which in turn implies a bound on its coefficients. This can be proved, for instance, using Markov’s theorem (see, e.g., [1]).

Theorem 4

(Markov) Consider the universal constants

$$\begin{aligned} C_{r,k} = {\left\{ \begin{array}{ll} \prod _{j= 0}^{k-1}\frac{r^2 - j^2}{2j+1}, &{}\quad k > 0\\ 1, &{}\quad k = 0 \end{array}\right. } \end{aligned}$$

Then for any polynomial p(x) of degree r and any \(k\le r\)

$$\begin{aligned} \sup _{x \in [-1, 1]}|p^{(k)}(x)| \le C_{r,k}\sup _{x \in [-1, 1]}|p(x)|. \end{aligned}$$

The idea is thus to turn the bounds on the coefficients into a bound on the number of such polynomials. Specifically, let \(I' = [y_0, y_1]\) and apply Markov’s theorem to the polynomial

$$\begin{aligned} \bar{d}_q(x) = d_q\left( \frac{y_1-y_0}{2}x+\frac{y_1+y_0}{2}\right) \end{aligned}$$

which is uniformly bounded on the interval \([-1, 1]\) by W. We obtain for \(k=1,2\)

$$\begin{aligned} \sup _{x\in I'}|d_q^{(k)}(x)| \le C_{2, k}\frac{2^kW}{\lambda (I')^k} < C_{2, k}\frac{6^kW}{\rho ^k \lambda (I)^k}, \end{aligned}$$
(39)

where the last inequality follows from Eq. (38). Let c denote the interval I’s midpoint and \(y_* = \hbox {argmin}_{y\in I'}|y - c|\). It is clear \(|y_* - c| < \lambda (I){/}2\). Hence for all \(k = 0, 1, 2\)

$$\begin{aligned} |d_q^{(k)}(c)|\le & {} |d_q^{(k)}(c) - d_q^{(k)}(y_*)| + |d_q^{(k)}(y_*)| \nonumber \\\le & {} \frac{2^k}{\lambda (I)^k}\sum _{i=k}^2 \frac{|d_q^{(i)}(y_*)|\lambda (I)^{i} }{(i-k)!2^{i}}. \end{aligned}$$
(40)

Substituting the estimates of Eq. (39) into (40), we obtain

$$\begin{aligned} |d_q^{(k)}(c)|&\le \frac{2^k}{\lambda (I)^k}\sum _{i=k}^2 \frac{|d_q^{(i)}(y_*)|\lambda (I)^{i} }{(i-k)!2^{i}} \\&< \frac{2^kW}{\lambda (I)^k}\sum _{i=k}^2 \frac{C_{2,i}3^i}{(i-k)!\rho ^{i}}. \end{aligned}$$

Hence if we write \(d_q\) as

$$\begin{aligned} d_q(x) = \sum _{k=0}^2 \frac{\bar{a}_k}{M_q} \left( \frac{x-c}{\lambda (I){/}2}\right) ^k, \end{aligned}$$

its quantized coefficients \((\bar{a}_k)_{0\le k\le 2}\) can be bounded

$$\begin{aligned} |\bar{a}_k| = \frac{M_q\lambda (I)^k}{k!2^k}|d_q^{(k)}(c)| \le \frac{M_qW}{\rho ^2k!}\sum _{i=k}^2 \frac{C_{2, i}3^{i}}{(i-k)!}. \end{aligned}$$

Hence when \(\rho L \ge 6\), the number of polynomials can be upper bounded by \(C \frac{(M_qW)^3}{\rho ^{6}}\) for a universal constant C.

Turn to the case \(\rho L < 6\). If a candidate curve \(\ell \) satisfies \(|\ell (x)-g(x)| \le \omega -1\) at some \(x\in I\), we deduce

$$\begin{aligned} |\ell (x) - g(c)| \le \omega - 1 + \frac{b_1\lambda (I)}{2}. \end{aligned}$$

As a result, we are interested in bounding the size of

$$\begin{aligned} \mathcal {B} = \left\{ p_q \; \Big | \; \min _{x \in I}|\mathcal {I}_{m}(p_q)(x)- g(c)| \le \omega - 1 + \tfrac{b_1\lambda (I)}{2} \right\} . \end{aligned}$$

To this end, writing a generic element in the set \(\mathcal {B}\) as

$$\begin{aligned} p_q(x) = \sum _{k=0}^2 \frac{a_k}{M_q} \left( \frac{x-c}{\lambda (I){/}2}\right) ^k, \end{aligned}$$

we aim to bound \(a_0\). Applying the triangle inequality gives

$$\begin{aligned}&|p_q(c) - g(c)| \\&\quad \le \min _{x \in I} \Big (|p_q(c) - p_q(x)| + |p_q(x)-\,\ell (x)| + |\ell (x) - g(c)|\Big ) \\&\quad \le \frac{1}{M_q} \sum _{k = 1}^2 |a_k| + E_2(I, m) + \omega - 1 +\,\frac{b_1\lambda (I)}{2} \\&\quad \le b_1\lambda (I) (1 + O(1)) \end{aligned}$$

thanks to Eq. (12). Multiplying both sides by \(M_q\), we find that \( |a_0 - g(c)M_q|\le M_qb_1\lambda (I) (1 + O(1)), \) and

$$\begin{aligned} |\mathcal {B}| \le M_qb_1\lambda (I) \left( 1 + O(1) \right) \prod _{k=1}^2\left( \frac{b_k\lambda (I)^kM_q}{(2n)^{k-1}} + 2\right) . \end{aligned}$$

As the condition \(\rho L < 6\) implies \(\lambda (I) < \frac{6}{\rho }\), we conclude \(|\mathcal {B}| \le \frac{C'}{\rho ^3}\) for some constant \(C'\) independent of n and \(\kappa \). \(\square \)

Proof of Lemma 7

From Eq. (24), for all \( k = 1, \ldots , r\)

$$\begin{aligned} \left| \frac{a_k^{(i)} k!}{M_q\lambda (I_i')^k} - g^{(k)}\left( x_i'\right) \right| \le \frac{k!}{2M_q\lambda (I_i')^k}, \quad i = 0, 1. \end{aligned}$$

An application of the triangle inequality leads to

$$\begin{aligned} \left| \tfrac{a_k^{(0)} k!}{M_q\lambda (I_0')^k} -\tfrac{a_k^{(1)} k!}{M_q\lambda (I_1')^k} \right|\le & {} \tfrac{k!}{2M_q\lambda (I_0')^k} + \tfrac{k!}{2M_q\lambda (I_1')^k} \\&+ \left| g^{(k)}\left( x_1' \right) - g^{(k)}\left( x_{0}'\right) \right| . \end{aligned}$$

Since \(\left| g^{(k)}\left( x_1' \right) - g^{(k)}\left( x_{0}'\right) \right| \le \min \left( 2\Vert g^{(k)}\Vert _{\infty }, \lambda (I_0')\Vert g^{(k+1)}\Vert _{\infty } \right) \), Eq. (28) of the lemma readily follows.

Regarding the constant term \(a_0^{(1)}\), thanks to the uniform approximation error bound (16), it satisfies

$$\begin{aligned} \left| \tfrac{a_0^{(1)}}{M_q} - p_q^{(0)}(x_{1}')\right|\le & {} \left| \tfrac{a_0^{(1)}}{M_q} - g(x_{1}')\right| + \left| g(x_{1}')- p_q^{(0)}(x_{1}')\right| \\\le & {} \tfrac{r+2}{2M_q} + \tfrac{b_{r+1}}{n^r} \lambda (I_0')^{r+1}. \end{aligned}$$

\(\square \)

Proof of Eq. (7)

We start by showing that the condition \(\sup _{x \in [x_1, x_L]}|h(x) - g(x)| \le \gamma \) implies that

$$\begin{aligned} \max _{1\le k\le L}|\lfloor h(x_k) \rfloor - \lfloor g(x_k) \rfloor | \le \gamma . \end{aligned}$$
(41)

To this end, note that the following holds for all \((z_1, z_2) \in \mathbb {R}^2\)

$$\begin{aligned} |\lfloor z_1 \rfloor - \lfloor z_2 \rfloor |&\mathop {\le }\limits ^{(i)} |z_1 - z_2| + |(z_2 - \lfloor z_2 \rfloor ) - (z_1 - \lfloor z_1 \rfloor )| \\&\mathop {<}\limits ^{(ii)} |z_1 - z_2| + 1. \end{aligned}$$

(i) follows from the triangle inequality. As \(z_2 - \lfloor z_2 \rfloor \) and \(z_1 - \lfloor z_1 \rfloor \) take values in [0, 1), their difference is strictly smaller than 1 in absolute value, hence inequality (ii). Consequently,

$$\begin{aligned} \max _{1\le k\le L}|\lfloor h(x_k) \rfloor - \lfloor g(x_k) \rfloor |< & {} 1 + \max _{1\le k\le L}|h(x_k) - g(x_k) | \\\le & {} 1 + \sup _{x \in [x_1, x_L]}|h(x) - g(x)| \\\le & {} 1 + \gamma . \end{aligned}$$

Since \(\max _{1\le k\le L}|\lfloor h(x_k) \rfloor - \lfloor g(x_k) \rfloor |\) is an integer, its being strictly smaller than \(\gamma +1\) implies Eq. (41).
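The floor inequality above is easy to probe numerically; this brute-force check is ours, not part of the paper:

```python
import numpy as np

# Probe the inequality |floor(z1) - floor(z2)| < |z1 - z2| + 1, which drives
# Eq. (41), on a large random sample of real pairs.
rng = np.random.default_rng(1)
z1 = rng.uniform(-50.0, 50.0, 100_000)
z2 = rng.uniform(-50.0, 50.0, 100_000)
holds = bool(np.all(np.abs(np.floor(z1) - np.floor(z2)) < np.abs(z1 - z2) + 1.0))
```

The strict inequality cannot be improved to a non-strict one with a smaller constant: taking \(z_1 = 1 - \varepsilon\) and \(z_2 = 1\) gives a floor difference of 1 against \(|z_1 - z_2| = \varepsilon\).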

To prove Eq. (7), we use Eq. (6) which gives

$$\begin{aligned} \mathbb {E}R(h) = \frac{1}{\sigma \sqrt{L}}\sum _{k=1}^L \mathbb {E}\left[ R(x_k, \lfloor h(x_k)\rfloor )\right] . \end{aligned}$$
(42)

If there is only one edge, all the summands in (42) have the same sign. Hence, we deduce from Eqs. (5) and (41) that

$$\begin{aligned} |\mathbb {E}R(h)|= & {} \frac{1}{\sigma \sqrt{L}} \sum _{k=1}^L |\mathbb {E}\left[ R(x_k, \lfloor h(x_k)\rfloor )\right] | \\\ge & {} \frac{\mu _{{\varGamma }(g)} \sqrt{L} (\omega -\gamma )_{+}}{\sigma \sqrt{2\omega }}. \end{aligned}$$

\(\square \)

About this article

Cite this article

Wang, YQ., Trouvé, A., Amit, Y. et al. Detecting Curved Edges in Noisy Images in Sublinear Time. J Math Imaging Vis 59, 373–393 (2017). https://doi.org/10.1007/s10851-016-0689-x

Download citation

  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s10851-016-0689-x

Keywords

Navigation