Summary
The correlation dimension is commonly used to characterize fractals and data-generating processes. To estimate its value in a particular case, the correlation integral is usually approximated by a polynomial (power-law) form and linear regression is then applied to the logarithms of the variables. In this chapter, we show that the correlation integral can be decomposed into functions, each related to a particular point of the data space. These functions admit the same kind of polynomial approximation as the correlation integral itself. The essential difference is that the value of the exponent, which would correspond to the correlation dimension, varies with the position of the point in question. Moreover, we show that the multiplicative constant represents an estimate of the probability density at that point. This finding is used to construct a classifier. Tests on several data sets from the UCI Machine Learning Repository show that this classifier can be very effective.
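The estimation procedure summarized above, fitting log C(r) against log r and then repeating the same fit for the per-point functions, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`correlation_integral`, `local_fit`, `classify`), the Euclidean metric, and the choice of radii are assumptions made for the example.

```python
import numpy as np

def correlation_integral(X, radii):
    """Empirical correlation integral C(r): the fraction of point
    pairs in X whose distance is below r (Grassberger-Procaccia)."""
    n = len(X)
    # pairwise Euclidean distances, upper triangle only (i < j)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    pair_d = d[np.triu_indices(n, k=1)]
    return np.array([(pair_d < r).mean() for r in radii])

def correlation_dimension(X, radii):
    """Slope of log C(r) versus log r, fitted by least squares."""
    C = correlation_integral(X, radii)
    mask = C > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
    return slope

def local_fit(x, X, radii):
    """Per-point decomposition of the correlation integral: p(r) is
    the fraction of points of X within distance r of x.  Fitting
    log p(r) = log c + q log r gives a locally varying exponent q
    and a constant c that estimates (up to normalization) the
    probability density near x."""
    d = np.sqrt(((X - x) ** 2).sum(-1))
    p = np.array([(d < r).mean() for r in radii])
    mask = p > 0
    if mask.sum() < 2:          # too few nonzero counts to fit a line
        return 0.0, 0.0
    q, logc = np.polyfit(np.log(radii[mask]), np.log(p[mask]), 1)
    return q, np.exp(logc)

def classify(x, X_by_class, radii):
    """Assign x to the class whose fitted constant c (the density
    estimate at x) is largest -- a sketch of the classifier idea."""
    return max(X_by_class,
               key=lambda k: local_fit(x, X_by_class[k], radii)[1])
```

Comparing the per-class constants is one natural way to turn the density estimates into a decision rule; the chapter's actual classifier may differ in detail.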
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this chapter
Jiřina, M., Jiřina, M. (2009). Classification by the Use of Decomposition of Correlation Integral. In: Abraham, A., Hassanien, AE., Snášel, V. (eds) Foundations of Computational Intelligence Volume 5. Studies in Computational Intelligence, vol 205. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01536-6_2
DOI: https://doi.org/10.1007/978-3-642-01536-6_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-01535-9
Online ISBN: 978-3-642-01536-6