Abstract
Based on the use of compactly supported radial basis functions, we extend in this paper the support vector approach to a multiscale support vector approach (MSVA) scheme for approximating the solution of a moderately ill-posed problem on a bounded domain. Vapnik's \(\epsilon \)-insensitive loss function is adopted in place of the standard \(l^2\) loss function in the regularization technique to reduce the error induced by noisy data. A convergence proof for the case of noise-free data is then derived under an appropriate choice of Vapnik's cut-off parameter and the regularization parameter. For the noisy data case, we demonstrate that a corresponding choice of Vapnik's cut-off parameter gives the same order of error estimate as both the a posteriori strategy based on the discrepancy principle and the noise-free a priori strategy. Numerical examples verify the efficiency of the proposed MSVA approach and the effectiveness of the parameter choices.
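The \(\epsilon\)-insensitive loss at the heart of the approach can be sketched in a few lines; the following NumPy illustration is ours, not code from the paper, and `vapnik_loss` is a hypothetical helper name.

```python
import numpy as np

def vapnik_loss(residual, eps):
    """Vapnik's epsilon-insensitive loss: residuals inside the eps-tube
    cost nothing; outside the tube the cost grows linearly."""
    return np.maximum(np.abs(residual) - eps, 0.0)

# Residuals within +/- 0.1 are ignored entirely, in contrast to the
# standard l2 loss residual**2, which penalizes every nonzero residual.
r = np.array([-0.3, 0.05, 0.0, 0.2])
loss = vapnik_loss(r, eps=0.1)
```

The dead zone of width \(2\epsilon\) is what makes the cut-off parameter \(\epsilon\) effective against noise: measurement errors below the noise level \(\delta\) produce no penalty at all.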
Additional information
This work was partially supported by a Grant from the Research Council of the Hong Kong Special Administrative Region, China (Project No. CityU 101211), Special Funds for Major State Basic Research Projects of China (2015CB856003), NSFC (Key Project No. 91130004) and the Shanghai Science and Technology Commission (14QA1400400).
Appendix: Proofs of Lemmas 4–11
Proof of Lemma 4
Referring to Algorithm 1, the local reconstructed solution \(s_k^{\epsilon ,\gamma ,\delta }\) at each level \(k\) is determined by the following minimization problem:
Taking \(Es_k^{\delta ,*}\in H^{\tau }(\mathbb {R}^d)\) as a feasible candidate, note that for each \(k\)
Then, the estimate
and the minimizing property of the objective functional yield the estimates.\(\square \)
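The level-by-level reconstruction described above can be sketched numerically. The following is our own minimal illustration, not the paper's Algorithm 1: it uses Wendland's compactly supported \(C^2\) function, shrinks the support radius \(\delta_k\) and regularization parameter \(\gamma_k\) geometrically across levels, and, for brevity, solves a Tikhonov \(l^2\) fit at each level in place of the \(\epsilon\)-insensitive minimization; the names `wendland_c2` and `msva_sketch` are our own.

```python
import numpy as np

def wendland_c2(r):
    # Wendland's compactly supported C^2 radial function: (1 - r)_+^4 (4 r + 1)
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def msva_sketch(x, y, levels=3, delta0=0.5, mu=0.5, gamma0=1e-3, q=0.25):
    """Multiscale residual correction on 1-d data: at each level k, fit the
    current residual with an RBF of support radius delta_k, add the fit to
    the running approximation, then refine the scale for the next level."""
    x = np.asarray(x, dtype=float)
    resid = np.asarray(y, dtype=float).copy()
    approx = np.zeros_like(resid)
    delta, gamma = delta0, gamma0
    for _ in range(levels):
        # kernel matrix at the current support radius
        K = wendland_c2(np.abs(x[:, None] - x[None, :]) / delta)
        # Tikhonov-regularized l2 fit of the residual at this scale
        coef = np.linalg.solve(K + gamma * np.eye(x.size), resid)
        s_k = K @ coef
        approx += s_k
        resid -= s_k
        delta *= mu    # shrink the support radius
        gamma *= q     # tighten regularization at finer scales
    return approx
```

Coarse levels capture the smooth bulk of the solution while finer levels correct the remaining residual, mirroring the telescoping estimates in the proofs.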
Proof of Lemma 5
The first estimate follows directly from the minimality of the functional \(\mathcal {J}_k(s_k^{\epsilon ,\gamma })\) in (15), i.e.,
The second estimate follows from
Utilizing [A5] in Assumption 1 together with the second estimate, and noticing that \(\Vert \cdot \Vert _{l^{\infty }(X_k)}\le |\cdot |_{\epsilon _k}+\epsilon _k\), the third estimate is derived as follows:
\(\square \)
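The elementary inequality \(\Vert v\Vert _{l^{\infty }}\le |v|_{\epsilon }+\epsilon \) used above, where \(|\cdot |_{\epsilon }\) denotes the \(\epsilon\)-insensitive seminorm, can be checked numerically; this sketch and the helper name `eps_seminorm` are ours.

```python
import numpy as np

def eps_seminorm(v, eps):
    # epsilon-insensitive seminorm: the largest amount by which |v_i| exceeds eps
    return np.max(np.maximum(np.abs(v) - eps, 0.0))

rng = np.random.default_rng(0)
v = rng.normal(size=50)
eps = 0.1
# For each i, max(|v_i| - eps, 0) + eps >= |v_i|, hence the sup-norm bound.
bound_holds = np.max(np.abs(v)) <= eps_seminorm(v, eps) + eps
```

The bound is tight whenever the largest entry exceeds \(\epsilon\) in magnitude, which is why the extra \(+\epsilon_k\) term propagates through the error estimates.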
Proof of Lemma 6
For sufficiently small \(h_1\) satisfying \(\eta _{k+1}<\cdots <\eta _1 = \nu h_1^{\beta }\in (0,1)\), utilizing the inequality (13), we have
with
From the extension operator in (5), Assumption 1, the estimates in Lemma 3, and the choice strategies for the two parameters \(\epsilon _k\) and \(\gamma _k\), the integral \(I_1\) can be estimated as
Noting that the choice of the parameter \(\beta =\min \{1, \frac{r-\alpha }{\tau }\}\) guarantees \(1-\beta \ge 0\) and \(r-\alpha -\beta \tau \ge 0\), we obtain
For the second integral \(I_2\), we observe that \(\delta _{k+1}\Vert \omega \Vert _2\ge 1\) implies,
Thus, we have,
Combining the estimates of \(I_1\) and \(I_2\) yields the estimate (18).
Noting that inequality (14) and the additional assumption on \(\Vert f^*\Vert _{H^{\tau }(\varOmega )}\) imply
By induction, we can derive the more precise estimate (19), and the lemma is proven. \(\square \)
Proof of Lemma 7
Referring to the basic estimate (16) in Lemma 4, the minimality of the functional \(\mathcal {J}_k^{\delta }(s_k^{\epsilon ,\gamma ,\delta })\) yields
The first estimate follows directly after the choice of cut-off parameter \(\epsilon _k \ge \delta _k+K{\varrho }h_k^r\).
The second estimate is obtained by the fact that
\(\square \)
Proof of Lemma 8
Referring to the definition of \(s_{k+1}^{\delta ,*}\) and utilizing the second estimate in Lemma 7, it follows that
\(\square \)
Proof of Lemma 9
The proof follows arguments similar to those in Lemma 6. For sufficiently small \(h_1\), we have
with
Skipping the detailed calculation, the terms \(\sqrt{I_1}\) and \(\sqrt{I_2}\) can be estimated as
and
The combination of both estimates yields the result. It is worth noting that the norm constraint on the exact solution \(\Vert f^{*}\Vert _{H^{\tau }(\varOmega )}\) is not required here. \(\square \)
Proof of Lemma 10
Similar to the proof of Lemma 5, it follows that
The result follows directly after the choice of the cut-off parameter. \(\square \)
Proof of Lemma 11
Again, for sufficiently small \(h_1\), we have
with
Similarly we obtain, for \(\sqrt{I_1}\) and \(\sqrt{I_2}\), respectively
and
The combination of both integrals yields the estimate (25). Moreover, since we have assumed that \(\Vert f^*\Vert _{H^{\tau }(\varOmega )} \le \frac{c_1^{1/2}}{C_{\tau }}\), it follows that,
Estimate (26) then follows by induction. \(\square \)
Zhong, M., Hon, Y.C. & Lu, S. Multiscale Support Vector Approach for Solving Ill-Posed Problems. J Sci Comput 64, 317–340 (2015). https://doi.org/10.1007/s10915-014-9934-x
Keywords
- Multiscale support vector approach
- Compactly supported radial basis functions
- Ill-posed problems
- Regularization methods