Approximate Maximum a Posteriori with Gaussian Process Priors
A maximum a posteriori method has been developed for Gaussian priors over infinite-dimensional function spaces. In particular, variational equations based on a generalisation of the representer theorem, together with an equivalent optimisation problem, are presented. This generalises the ordinary Bayesian maximum a posteriori approach, which is nontrivial because infinite-dimensional domains do not admit probability densities. Instead of the gradient of a density, the logarithmic gradient of the probability distribution is used. Galerkin methods are proposed for the approximate solution of the variational equations. In summary, a framework and foundations are provided that are required for the application of numerical approximation to an important class of machine learning problems.
Keywords: Exponential Family · Gaussian Measure · Reproducing Kernel Hilbert Space · Sparse Grid · Gaussian Process Regression