Real patterns and indispensability
While scientific inquiry crucially relies on the extraction of patterns from data, we still have a far-from-perfect understanding of the metaphysics of patterns—and, in particular, of what makes a pattern real. In this paper we derive a criterion of real-patternhood from the notion of conditional Kolmogorov complexity. The resulting account belongs to the philosophical tradition, initiated by Dennett (J Philos 88(1):27–51, 1991), that links real-patternhood to data compressibility, but is simpler and formally more perspicuous than other proposals previously defended in the literature. It also successfully enforces a non-redundancy principle, suggested by Ladyman and Ross (Every thing must go: metaphysics naturalized, Oxford University Press, Oxford, 2007), that aims to exclude from real-patternhood those patterns that can be ignored without loss of information about the target dataset, a principle that Ladyman and Ross's own account fails to enforce.
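The compressibility idea behind this tradition can be illustrated with a computable stand-in: Kolmogorov complexity itself is uncomputable, but the output length of a real compressor such as zlib gives a computable upper bound on it. The following sketch (our illustration, not the paper's formalism) shows that a dataset exhibiting a pattern admits a description much shorter than itself, while a patternless dataset does not:

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    # Length of the zlib-compressed data: a computable upper bound
    # standing in for the (uncomputable) Kolmogorov complexity.
    return len(zlib.compress(data, 9))

# A highly regular 10,000-byte dataset: it instantiates a pattern,
# so a short program/description suffices to reproduce it.
patterned = b"ab" * 5000

# A pseudo-random 10,000-byte dataset: no exploitable pattern,
# so compression cannot shorten it appreciably.
rng = random.Random(0)
noisy = bytes(rng.randrange(256) for _ in range(10000))

print(compressed_size(patterned))  # much smaller than 10000
print(compressed_size(noisy))      # roughly 10000 or slightly more
```

On this picture, the patterned string "contains" a real pattern precisely because tracking the regularity yields a large compression gain; the noisy string offers no such gain, so no candidate pattern earns its keep.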
Keywords: Kolmogorov complexity · Real patterns · Structure functions · Algorithmic information theory · Metaphysics of science
We would like to thank James Ladyman for his very generous discussion of the topics of this paper. We would also like to thank the participants of the reading group on Real Patterns held at the University of Barcelona in 2018. Two very detailed reviews from two anonymous referees helped us to significantly improve the paper. Abel Suñé also wishes to thank J.P. Grodniewicz for valuable discussion and Pepa Toribio for her support. Manolo Martínez would like to acknowledge research funding awarded by the Spanish Ministry of Economy, Industry and Competitiveness, in the form of grants PGC2018-101425-B-I00 and RYC-2016-20642.