
Optimality of Kernel Density Estimation of Prior Distribution in Bayes Network

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4232)

Abstract

The key problem of inductive learning in Bayes networks is the estimation of the prior distribution. This paper adopts general naive Bayes to handle continuous variables and proposes a kernel function constructed from orthogonal polynomials, which is used to estimate the density function of the prior distribution in a Bayes network. The paper then investigates the optimality of the kernel estimators of the density and its derivatives: for a fixed sample, the estimators remain continuous and smooth, and as the sample size tends to infinity, they attain good convergence rates.
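As a concrete illustration of the kind of estimator described above, the sketch below builds a projection kernel K_m(x, y) = sum_{j<=m} phi_j(x) phi_j(y) from orthonormal Legendre polynomials and averages it over the sample to estimate a one-dimensional density. This is a minimal sketch under stated assumptions, not the paper's exact construction: the choice of Legendre polynomials, the truncation order m, and the rescaling of the data to [-1, 1] are illustrative.

```python
# Sketch of an orthogonal-polynomial projection kernel density estimator.
# Assumptions (not from the paper): Legendre basis on [-1, 1], truncation
# order m, and a linear rescaling of the data onto [-1, 1].
import numpy as np
from numpy.polynomial import legendre


def orthonormal_legendre(j, x):
    """Evaluate the j-th orthonormal Legendre polynomial on [-1, 1]."""
    coef = np.zeros(j + 1)
    coef[j] = 1.0
    return np.sqrt((2 * j + 1) / 2.0) * legendre.legval(x, coef)


def projection_kernel(x, y, m):
    """K_m(x, y) = sum_{j<=m} phi_j(x) phi_j(y), the polynomial kernel."""
    return sum(orthonormal_legendre(j, x) * orthonormal_legendre(j, y)
               for j in range(m + 1))


def kernel_density(sample, grid, m=8):
    """Estimate f(x) on `grid` as (1/n) sum_i K_m(x, X_i)."""
    lo, hi = sample.min(), sample.max()
    scale = lambda t: 2.0 * (t - lo) / (hi - lo) - 1.0
    xs, gs = scale(sample), scale(grid)
    # Average the kernel over the sample at each grid point, then correct
    # for the Jacobian of the rescaling to [-1, 1].
    dens = np.array([projection_kernel(g, xs, m).mean() for g in gs])
    return dens * 2.0 / (hi - lo)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.normal(loc=0.0, scale=1.0, size=500)
    grid = np.linspace(sample.min(), sample.max(), 200)
    print(kernel_density(sample, grid, m=8)[:5])
```

Note that a projection estimator of this kind need not be nonnegative everywhere; in practice it is commonly truncated at zero and renormalized before being used, for example, as a class-conditional density in a naive Bayes classifier.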




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tong, H., Deng, Y., Li, Z. (2006). Optimality of Kernel Density Estimation of Prior Distribution in Bayes Network. In: King, I., Wang, J., Chan, L.W., Wang, D. (eds) Neural Information Processing. ICONIP 2006. Lecture Notes in Computer Science, vol 4232. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893028_74

  • DOI: https://doi.org/10.1007/11893028_74

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-46479-2

  • Online ISBN: 978-3-540-46480-8

  • eBook Packages: Computer Science (R0)
