
Multiscale Support Vector Approach for Solving Ill-Posed Problems


Abstract

Based on the use of compactly supported radial basis functions, we extend in this paper the support vector approach to a multiscale support vector approach (MSVA) scheme for approximating the solution of a moderately ill-posed problem on a bounded domain. Vapnik's \(\epsilon \)-insensitive function is adopted in place of the standard \(l^2\) loss function in the regularization technique used to reduce the error induced by noisy data. A convergence proof for the case of noise-free data is then derived under an appropriate choice of Vapnik's cut-off parameter and the regularization parameter. For the noisy data case, we demonstrate that a corresponding choice of Vapnik's cut-off parameter gives the same order of error estimate as both the a posteriori strategy based on the discrepancy principle and the noise-free a priori strategy. Numerical examples are constructed to verify the efficiency of the proposed MSVA approach and the effectiveness of the parameter choices.
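
To make the structure of the scheme concrete, the following is a minimal sketch of the multiscale residual-correction idea for the special case \(A = I\) (noisy function approximation on \([0,1]\)): at each level the current residual is fitted by \(\epsilon \)-insensitive kernel regression with a compactly supported Wendland kernel whose support radius shrinks from level to level. The kernel scaling, the per-level parameters (C, epsilon, number of levels), and the data are illustrative assumptions and do not reproduce the paper's Algorithm 1 or its parameter choice rules.

import numpy as np
from sklearn.svm import SVR

def wendland_c2(r):
    # Wendland's compactly supported C^2 function (1 - r)_+^4 (4r + 1),
    # positive definite in dimension d <= 3.
    return np.maximum(1.0 - r, 0.0) ** 4 * (4.0 * r + 1.0)

def gram(x, y, eta):
    # Gram matrix of the kernel rescaled to support radius eta.
    r = np.abs(x[:, None] - y[None, :]) / eta
    return wendland_c2(r)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)        # collocation points (kept fixed over levels here)
g = np.sin(2.0 * np.pi * x) + 0.05 * rng.standard_normal(x.size)  # noisy data g^delta

f = np.zeros_like(x)                  # multiscale approximation f_k
eta, eps = 0.5, 0.1
for k in range(4):
    e = g - f                         # residual e_{k-1}^delta
    K = gram(x, x, eta)
    svr = SVR(kernel="precomputed", C=10.0, epsilon=eps)
    svr.fit(K, e)                     # epsilon-insensitive regularized fit of the residual
    f = f + svr.predict(K)            # update f_k = f_{k-1} + s_k
    eta, eps = eta / 2.0, eps / 2.0   # shrink support radius and cut-off per level

print("final residual sup-norm:", np.abs(g - f).max())

In the paper \(A\) is a moderately ill-posed operator and the cut-off \(\epsilon _k\) and regularization weight \(\gamma _k\) (here absorbed into the solver's \(C = 1/(2\gamma _k)\)) are chosen as dictated by the convergence analysis; the sketch only illustrates the level-by-level residual correction with shrinking kernel support.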



Author information


Correspondence to Shuai Lu.

Additional information

This work was partially supported by a grant from the Research Council of the Hong Kong Special Administrative Region, China (Project No. CityU 101211), the Special Funds for Major State Basic Research Projects of China (2015CB856003), the NSFC (Key Project No. 91130004), and the Shanghai Science and Technology Commission (14QA1400400).

Appendix: Proofs of Lemmas 4–11

Proof of Lemma 4

Referring to Algorithm 1, the local reconstructed solution \(s_k^{\epsilon ,\gamma ,\delta }\) at each level \(k\) is determined by the following minimization problem:

$$\begin{aligned} \min \limits _{s\in H^{\tau }(\mathbb {R}^d)}\mathcal {J}^{\delta }_k(s)&= \min \limits _{s\in H^{\tau }(\mathbb {R}^d)}\left( ~\sum _{j=1}^{N_k}\left| e_{k-1}^{\delta } (x_{k,j})-A^{(k)}s(x_{k,j})\right| _{\epsilon _k} +\gamma _k\Vert s\Vert _{\varPhi _k}^2\right) . \end{aligned}$$
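
Here \(|\cdot |_{\epsilon _k}\) denotes Vapnik's \(\epsilon \)-insensitive function, which we recall for completeness:

$$\begin{aligned} |t|_{\epsilon } := \max \{0, |t|-\epsilon \}. \end{aligned}$$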

Taking \(Es_k^{\delta ,*}\in H^{\tau }(\mathbb {R}^d)\) as a feasible candidate, note that for each \(k\)

$$\begin{aligned} \left| e_{k-1}^{\delta }(x_{k,j})-A^{(k)}s(x_{k,j})\right| _{\epsilon _k}\le \left( \Vert e_{k-1}^{\delta }-A^{(k)}s\Vert _{l^{\infty }(X_k)}-\epsilon _k\right) _+. \end{aligned}$$

Then, the estimate

$$\begin{aligned} \Vert e_{k-1}^{\delta } - A^{(k)}s_k^{\delta ,*}\Vert _{l^{\infty }(X_k)}&= \Vert g^{\delta } - A^{(k)}f_{k-1}^{\epsilon ,\gamma ,\delta } - A^{(k)}(f^*-f_{k-1}^{\epsilon ,\gamma ,\delta })\Vert _{l^{\infty }(X_k)}\\&= \Vert g^{\delta } -A^{(k)}f^* \Vert _{l^{\infty }(X_k)}\\&\le \Vert g^{\delta }-g\Vert _{l^{\infty }(X_k)} + \Vert (A-A^{(k)})f^*\Vert _{l^{\infty }(X_k)}\\&\le \delta _k + Kh_k^{r}{\varrho }\, \end{aligned}$$

and the minimizing property of the objective functional yield the estimates. \(\square \)

Proof of Lemma 5

The first estimate follows directly from the minimality of the functional \(\mathcal {J}_k(s_k^{\epsilon ,\gamma })\) in (15), i.e.,

$$\begin{aligned} \gamma _k\Vert s_k^{\epsilon ,\gamma }\Vert _{\varPhi _k}^2\le \mathcal {J}_k(s_k^{\epsilon ,\gamma })\le \mathcal {J}_k(Es_k^*) = \gamma _k\Vert Es_k^{*}\Vert _{\varPhi _k}^2. \end{aligned}$$

The second estimate follows from

$$\begin{aligned} \Vert s_{k+1}^*\Vert _{H^{\tau }(\varOmega )}&= \Vert f^*-f_{k}^{\epsilon ,\gamma }\Vert _{H^{\tau }(\varOmega )} = \Vert f^{*}- (f_{k-1}^{\epsilon ,\gamma } + s_k^{\epsilon ,\gamma })\Vert _{H^{\tau }(\varOmega )} \\&\le \Vert s_k^*\Vert _{H^{\tau }(\varOmega )} + \Vert s_k^{\epsilon ,\gamma }\Vert _{H^{\tau }(\varOmega )}\le \Vert Es_k^*\Vert _{H^{\tau }(\mathbb {R}^d)} + \Vert s_k^{\epsilon ,\gamma }\Vert _{H^{\tau }(\mathbb {R}^d)} \\&\le c_2^{1/2}\eta _k^{-\tau }\left( \Vert Es_k^*\Vert _{\varPhi _k} + \Vert s_k^{\epsilon ,\gamma }\Vert _{\varPhi _k}\right) \\&\le 2c_2^{1/2}\eta _k^{-\tau }\Vert Es_k^*\Vert _{\varPhi _k}. \end{aligned}$$

Utilizing [A5] in Assumption 1 and the second estimate, and noticing that \(\Vert \cdot \Vert _{l^{\infty }(X_k)}\le |\cdot |_{\epsilon _k}+\epsilon _k\), we derive the third estimate as follows:

$$\begin{aligned} \Vert As_{k+1}^*\Vert _{l^{\infty }(X_k)}&= \Vert g - Af_{k}^{\epsilon ,\gamma }\Vert _{l^{\infty }(X_k)} \\&\le \Vert g-A^{(k)}f_{k}^{\epsilon ,\gamma }\Vert _{l^{\infty }(X_k)} + \Vert (A-A^{(k)})f_{k}^{\epsilon ,\gamma }\Vert _{l^{\infty }(X_k)}\\&\le \max _{j=1,\ldots ,N_k}\left| g_j - A^{(k)}f_{k}^{\epsilon ,\gamma }(x_{k,j})\right| _{\epsilon _k}+ \epsilon _k + \Vert (A-A^{(k)})f_{k}^{\epsilon ,\gamma }\Vert _{l^{\infty }(X_k)}\\&\le \sum _{j=1}^{N_k}\left| g_j - A^{(k)}f_{k}^{\epsilon ,\gamma }(x_{k,j})\right| _{\epsilon _k} + \epsilon _k + Kh_k^r\Vert f_{k}^{\epsilon ,\gamma }\Vert _{H^{\tau }(\varOmega )}\\&\le \epsilon _k + \gamma _k\Vert Es_k^*\Vert _{\varPhi _k}^2 + Kh_k^r\left( {\varrho }+ \Vert f^*-f_{k}^{\epsilon ,\gamma }\Vert _{H^{\tau }(\varOmega )}\right) \\&\le \epsilon _k + \gamma _k\Vert Es_k^*\Vert _{\varPhi _k}^2 + Kh_k^r\left( {\varrho }+\Vert s_{k+1}^*\Vert _{H^{\tau }(\varOmega )}\right) \\&\le \epsilon _k + \gamma _k\Vert Es_k^*\Vert _{\varPhi _k}^2 + Kh_k^r\left( {\varrho }+ 2c_2^{1/2}\eta _k^{-\tau }\Vert Es_k^*\Vert _{\varPhi _k}\right) . \end{aligned}$$

\(\square \)

Proof of Lemma 6

For sufficiently small \(h_1\) satisfying \(\eta _{k+1}<\cdots <\eta _1 = \nu h_1^{\beta }\in (0,1)\), utilizing the inequality (13), we have

$$\begin{aligned} \Vert Es_{k+1}^{*}\Vert _{\varPhi _{k+1}}^2&= \int _{\mathbb {R}^d}\frac{|\widehat{Es_{k+1}^{*}}(\omega )|^2}{\widehat{\varPhi _{k+1}}(\omega )}d\omega \\&\le \frac{1}{c_1} \int _{\mathbb {R}^d} |\widehat{Es_{k+1}^{*}}(\omega )|^2(1+\eta _{k+1}^2 \Vert \omega \Vert _2^2)^{\tau } d\omega := \frac{1}{c_1}\left( I_1+I_2\right) \end{aligned}$$

with

$$\begin{aligned} I_1&= \int _{\Vert \omega \Vert _2 \le \frac{1}{\eta _{k+1}}} |\widehat{Es_{k+1}^{*}}(\omega )|^2(1+\eta _{k+1}^2 \Vert \omega \Vert _2^2)^{\tau } d\omega ,\\ I_2&= \int _{\Vert \omega \Vert _2 \ge \frac{1}{\eta _{k+1}}} |\widehat{Es_{k+1}^{*}}(\omega )|^2(1+\eta _{k+1}^2 \Vert \omega \Vert _2^2)^{\tau } d\omega . \end{aligned}$$

From the extension operator in (5), Assumption 1, the estimates in Lemma 3, and the choice strategies for the two parameters \(\epsilon _k\) and \(\gamma _k\), the integral \(I_1\) can be estimated as

$$\begin{aligned} I_1&\le 2^{\tau } \Vert Es_{k+1}^{*}\Vert _{L^2(\mathbb {R}^d)}^2 \le 2^{\tau } C_{0}^2 \Vert s_{k+1}^{*}\Vert _{L^2(\varOmega )}^2 \le \frac{ 2^{\tau } C_{0}^2}{c_{3}^2} \Vert As_{k+1}^*\Vert _{H^{\alpha }(\varOmega )}^2\\&\le \frac{2^{\tau } C_{0}^2 C_d^2}{c_{3}^2}\left( c_{4}h_{k}^{\tau }\Vert s_{k+1}^{*}\Vert _{H^{\tau }(\varOmega )} +h_{k}^{-\alpha }\Vert As_{k+1}^{*}\Vert _{l^{\infty }(X_{k})}\right) ^2\\&\le \frac{2^{\tau } C_{0}^2 C_d^2}{c_{3}^2}\left( \frac{2c_2^{1/2}\left( c_{4}h_{k}^{(1-\beta )\tau } +Kh_k^{r-\alpha -\beta \tau }\right) \mu ^{\beta \tau }}{T^{\tau }}\Vert Es_{k}^{*}\Vert _{\varPhi _{k}}\right. \nonumber \\&\qquad \left. +\frac{\kappa \mu ^{\beta \tau }}{T^{\tau }}\Vert Es_{k}^{*}\Vert _{\varPhi _{k}}^2+ 2K {\varrho }h_k^{r-\alpha }\right) ^2. \end{aligned}$$

Noting that the choice of the parameter \(\beta =\min \{1, \frac{r-\alpha }{\tau }\}\) guarantees that \(1-\beta \ge 0\) and \(r-\alpha -\beta \tau \ge 0\), we thus obtain

$$\begin{aligned} \sqrt{I_1}&\le \frac{2^{\tau /2}C_{0} C_d}{c_{3}}\left( \frac{2c_2^{1/2}\left( c_{4}+K\right) \mu ^{\beta \tau }}{T^{\tau }}\Vert Es_{k}^{*}\Vert _{\varPhi _{k}}+ \frac{\kappa \mu ^{\beta \tau }}{T^{\tau }}\Vert Es_{k}^{*}\Vert _{\varPhi _{k}}^2+ 2K {\varrho }h_k^{r-\alpha }\right) . \end{aligned}$$

For the second integral \(I_2\), we observe that \(\eta _{k+1}\Vert \omega \Vert _2\ge 1\) on the domain of integration, which implies

$$\begin{aligned} \left( 1+\eta _{k+1}^2\Vert \omega \Vert _2^2\right) ^{\tau } \le 2^{\tau }\eta _{k+1}^{2\tau }\Vert \omega \Vert _2^{2\tau }\le 2^{\tau }\eta _{k+1}^{2\tau }\left( 1+\Vert \omega \Vert _2^2\right) ^{\tau }. \end{aligned}$$

Thus, we have

$$\begin{aligned} \sqrt{I_2}&\le 2^{\tau /2}\eta _{k+1}^{\tau }\Vert Es_{k+1}^*\Vert _{H^{\tau }(\mathbb {R}^d)} \le 2^{\tau /2}\eta _{k+1}^{\tau }C_{\tau }\Vert s_{k+1}^*\Vert _{H^{\tau }(\varOmega )}\\&\le 2^{\tau /2+1}c_2^{1/2}C_{\tau } \left( \frac{\eta _{k+1}}{\eta _k}\right) ^{\tau }\Vert Es_{k}^{*}\Vert _{\varPhi _k}\\&\le 2^{\tau /2+1}c_2^{1/2}C_{\tau }\mu ^{\beta \tau }\Vert Es_{k}^{*}\Vert _{\varPhi _k}. \end{aligned}$$
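
Since \(\sqrt{a+b}\le \sqrt{a}+\sqrt{b}\) for \(a,b\ge 0\), the two partial bounds enter through

$$\begin{aligned} \Vert Es_{k+1}^{*}\Vert _{\varPhi _{k+1}} \le c_1^{-1/2}\sqrt{I_1+I_2} \le c_1^{-1/2}\left( \sqrt{I_1}+\sqrt{I_2}\right) . \end{aligned}$$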

Combining the estimates of \(I_1\) and \(I_2\) yields the estimate (18).

Notice that the inequality (14) and the additional assumption on \(\Vert f^*\Vert _{H^{\tau }(\varOmega )}\) imply

$$\begin{aligned} \Vert Es_1^*\Vert _{\varPhi _1} \le c_1^{-1/2}\Vert Es_1^*\Vert _{H^{\tau }(\mathbb {R}^d)} = c_1^{-1/2} \Vert Ef^*\Vert _{H^{\tau }(\mathbb {R}^d)} \le c_1^{-1/2} C_{\tau } \Vert f^*\Vert _{H^{\tau }(\varOmega )} \le 1. \end{aligned}$$

By induction, we then obtain the more precise estimate (19), and the lemma is proven. \(\square \)

Proof of Lemma 7

Referring to the basic estimate (16) in Lemma 4, the minimality of the functional \(\mathcal {J}_k^{\delta }(s_k^{\epsilon ,\gamma ,\delta })\) yields

$$\begin{aligned} \Vert s_k^{\epsilon ,\gamma ,\delta }\Vert _{\varPhi _k}^2 \le \frac{N_k}{\gamma _k}(\delta _k+K{\varrho }h_k^r -\epsilon _k)_+ + \Vert Es_k^{\delta ,*}\Vert _{\varPhi _k}^2. \end{aligned}$$

The first estimate follows directly from the choice of the cut-off parameter \(\epsilon _k \ge \delta _k+K{\varrho }h_k^r\).
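
Indeed, this choice makes the bracket nonpositive, so that

$$\begin{aligned} \left( \delta _k+K{\varrho }h_k^r -\epsilon _k\right) _+ = 0 \quad \text {and hence}\quad \Vert s_k^{\epsilon ,\gamma ,\delta }\Vert _{\varPhi _k} \le \Vert Es_k^{\delta ,*}\Vert _{\varPhi _k}. \end{aligned}$$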

The second estimate follows from the fact that

$$\begin{aligned} \Vert s_{k+1}^{\delta ,*}\Vert _{H^{\tau }(\varOmega )}&= \Vert f^* - f_{k-1}^{\epsilon ,\gamma ,\delta } - s_k^{\epsilon ,\gamma ,\delta }\Vert _{H^{\tau }(\varOmega )}\le \Vert s_k^{\delta ,*}\Vert _{H^{\tau }(\varOmega )}+\Vert s_k^{\epsilon ,\gamma ,\delta }\Vert _{H^{\tau }(\varOmega )}\\&\le 2c_2^{1/2}\eta _k^{-\tau }\Vert Es_k^{\delta ,*}\Vert _{\varPhi _k}. \end{aligned}$$

\(\square \)

Proof of Lemma 8

Referring to the definition of \(s_{k+1}^{\delta ,*}\) and utilizing the second estimate in Lemma 7, it follows that

$$\begin{aligned} \Vert As_{k+1}^{\delta ,*}\Vert _{l^{\infty }(X_k)}&= \Vert g-Af_k^{dis}\Vert _{l^{\infty }(X_k)} \\&\le \delta _k + \Vert g^{\delta } - Af_k^{dis}\Vert _{l^{\infty }(X_k)}\\&\le \delta _k + \Vert g^{\delta } - A^{(k)}f_k^{dis}\Vert _{l^{\infty }(X_k)}+ \Vert ( A^{(k)}-A)f_k^{dis}\Vert _{l^{\infty }(X_k)}\\&\le (C_{dis}+1)\delta _k + Kh_k^r\left( {\varrho }+2c_2^{1/2}\eta _k^{-\tau }\Vert Es_k^{\delta ,*}\Vert _{\varPhi _k}\right) . \end{aligned}$$

\(\square \)

Proof of Lemma 9

The proof follows arguments similar to those in the proof of Lemma 6. For sufficiently small \(h_1\), we have

$$\begin{aligned} \Vert Es_{k+1}^{\delta ,*}\Vert _{\varPhi _{k+1}}^2&\le \frac{1}{c_1} \int _{\mathbb {R}^d} |\widehat{Es_{k+1}^{\delta ,*}}(\omega )|^2(1+\eta _{k+1}^2 \Vert \omega \Vert _2^2)^{\tau } d\omega := \frac{1}{c_1}\left( I_1+I_2\right) \end{aligned}$$

with

$$\begin{aligned} I_1&= \int _{\Vert \omega \Vert _2 \le \frac{1}{\eta _{k+1}}} |\widehat{Es_{k+1}^{\delta ,*}}(\omega )|^2(1+\eta _{k+1}^2 \Vert \omega \Vert _2^2)^{\tau } d\omega ,\\ I_2&= \int _{\Vert \omega \Vert _2 \ge \frac{1}{\eta _{k+1}}} |\widehat{Es_{k+1}^{\delta ,*}}(\omega )|^2(1+\eta _{k+1}^2 \Vert \omega \Vert _2^2)^{\tau } d\omega . \end{aligned}$$

Skipping the detailed calculations, the terms \(\sqrt{I_1}\) and \(\sqrt{I_2}\) can be estimated as

$$\begin{aligned} \sqrt{I_1}&\le \frac{2^{\tau /2}C_{0}C_d}{c_{3}}\left( \frac{2c_2^{1/2}\left( c_{4} +K\right) \mu ^{\beta \tau }}{T^{\tau }}\Vert Es_{k}^{\delta ,*}\Vert _{\varPhi _{k}}+ K {\varrho }h_k^{r-\alpha } + (C_{dis}+1)h_k^{-\alpha }\delta _k\right) \, \end{aligned}$$

and

$$\begin{aligned} \sqrt{I_2}&\le 2^{\tau /2+1}c_2^{1/2}C_{\tau }\mu ^{\beta \tau }\Vert Es_{k}^{\delta ,*}\Vert _{\varPhi _k}. \end{aligned}$$

The combination of both estimates yields the result. It is worth noting that the norm constraint on the exact solution \(\Vert f^{*}\Vert _{H^{\tau }(\varOmega )}\) is not required here. \(\square \)

Proof of Lemma 10

As in the proof of Lemma 5, it follows that

$$\begin{aligned} \Vert As_{k+1}^{\delta ,*}\Vert _{l^{\infty }(X_k)}&= \Vert g-Af_k^{\epsilon ,\gamma ,\delta }\Vert _{l^{\infty }(X_k)} \le \delta _k + \Vert g^{\delta }-Af_k^{\epsilon ,\gamma ,\delta }\Vert _{l^{\infty }(X_k)}\\&\le \delta _k + \Vert g^{\delta }-A^{(k)}f_k^{\epsilon ,\gamma ,\delta }\Vert _{l^{\infty }(X_k)} + \Vert (A-A^{(k)})f_k^{\epsilon ,\gamma ,\delta }\Vert _{l^{\infty }(X_k)}\\&\le \delta _k + \max _{j=1,\ldots ,N_k}\left| g^{\delta }_j-A^{(k)}f_k^{\epsilon ,\gamma ,\delta }(x_{k,j}) \right| _{\epsilon _k}+\epsilon _k + Kh_k^r\Vert f_k^{\epsilon ,\gamma ,\delta }\Vert _{H^{\tau }(\varOmega )}\\&\le \delta _k + \epsilon _k + \gamma _k\Vert Es_k^{\delta ,*}\Vert _{\varPhi _k}^2 + Kh_k^r\left( {\varrho }+ 2c_2^{1/2}\eta _k^{-\tau }\Vert Es_k^{\delta ,*}\Vert _{\varPhi _k}\right) . \end{aligned}$$

The result then follows directly from the choice of the cut-off parameter. \(\square \)

Proof of Lemma 11

Again, for sufficiently small \(h_1\), we have

$$\begin{aligned} \Vert Es_{k+1}^{\delta ,*}\Vert _{\varPhi _{k+1}}^2&\le \frac{1}{c_1} \int _{\mathbb {R}^d} |\widehat{Es_{k+1}^{\delta ,*}}(\omega )|^2\left( 1+\eta _{k+1}^2 \Vert \omega \Vert _2^2\right) ^{\tau } d\omega := \frac{1}{c_1}\left( I_1+I_2\right) \end{aligned}$$

with

$$\begin{aligned} I_1&= \int _{\Vert \omega \Vert _2 \le \frac{1}{\eta _{k+1}}} |\widehat{Es_{k+1}^{\delta ,*}}(\omega )|^2\left( 1+\eta _{k+1}^2 \Vert \omega \Vert _2^2\right) ^{\tau } d\omega ,\\ I_2&= \int _{\Vert \omega \Vert _2 \ge \frac{1}{\eta _{k+1}}} |\widehat{Es_{k+1}^{\delta ,*}}(\omega )|^2\left( 1+\eta _{k+1}^2 \Vert \omega \Vert _2^2\right) ^{\tau } d\omega . \end{aligned}$$

Similarly, for \(\sqrt{I_1}\) and \(\sqrt{I_2}\) we obtain, respectively,

$$\begin{aligned} \sqrt{I_1}&\le 2^{\tau /2}\Vert Es_{k+1}^{\delta ,*}\Vert _{L^2(\mathbb {R}^d)} \le 2^{\tau /2}C_{0}\Vert s_{k+1}^{\delta ,*}\Vert _{L^2(\varOmega )}\le \frac{2^{\tau /2}C_{0}}{c_{3}} \Vert As_{k+1}^{\delta ,*}\Vert _{H^{\alpha }(\varOmega )}\\&\le \frac{2^{\tau /2}C_{0}C_d}{c_{3}}\left( \frac{2c_2^{1/2}\left( c_{4} +K\right) \mu ^{\beta \tau }}{T^{\tau }}\Vert Es_k^{\delta ,*}\Vert _{\varPhi _k}+ \frac{\kappa \mu ^{\beta \tau }}{T^{\tau }}\Vert Es_k^{\delta ,*}\Vert _{\varPhi _k}^2\right. \nonumber \\&\qquad \left. + 2K {\varrho }h_k^{r-\alpha } + 2h_k^{-\alpha }\delta _k\right) , \end{aligned}$$

and

$$\begin{aligned} \sqrt{I_2}&\le 2^{\tau /2}\eta _{k+1}^{\tau }\Vert Es_{k+1}^{\delta ,*}\Vert _{H^{\tau }(\mathbb {R}^d)} \le 2^{\tau /2}\eta _{k+1}^{\tau }C_{\tau }\Vert s_{k+1}^{\delta ,*}\Vert _{H^{\tau }(\varOmega )}\\&\le 2^{\tau /2+1}c_2^{1/2}C_{\tau }\mu ^{\beta \tau }\Vert Es_{k}^{\delta ,*}\Vert _{\varPhi _k}. \end{aligned}$$

Combining the estimates of \(I_1\) and \(I_2\) yields the estimate (25). Moreover, since we have assumed that \(\Vert f^*\Vert _{H^{\tau }(\varOmega )} \le \frac{c_1^{1/2}}{C_{\tau }}\), it follows that

$$\begin{aligned} \Vert Es_{1}^{\delta ,*}\Vert _{\varPhi _1} \le c_1^{-1/2}C_{\tau }\Vert f^*\Vert _{H^{\tau }(\varOmega )}\le 1. \end{aligned}$$

The estimate (26) then follows by induction. \(\square \)


Cite this article

Zhong, M., Hon, Y.C. & Lu, S. Multiscale Support Vector Approach for Solving Ill-Posed Problems. J Sci Comput 64, 317–340 (2015). https://doi.org/10.1007/s10915-014-9934-x
