Effective Complexity: In Which Sense is It Informative?

Abstract

This work responds to a criticism of effective complexity made by James McAllister, according to which effective complexity is not an appropriate measure of information content. Roughly, effective complexity, unlike algorithmic complexity, focuses on the regularities of the data rather than on the data as a whole. McAllister’s argument shows that, because the set of relevant regularities for a given object is not unique, one cannot assign unique values of effective complexity to the expressions considered and, therefore, that algorithmic complexity serves better as a measure of information than effective complexity. We accept the problem regarding uniqueness as McAllister presents it, but argue that, if contexts are defined appropriately, one can in principle find unique values of effective complexity. In this light, effective complexity is informative not only about the entity being investigated but also about the context of investigation itself. Furthermore, we argue that effective complexity is an interesting epistemological concept that may be applied to better understand crucial context-dependent issues such as theory choice and emergence, applications that are not available on the basis of algorithmic complexity alone.

Notes

  1. This could be misleading, because there are highly compressible strings that contain no patterns. Take, for example, transcendental numbers like π or Euler's number: strings that represent these objects are extremely compressible, but there does not seem to be any regularity in such representations. As should become clear later, strings do not contain regularities in a strict sense. Rather, we can construct ensembles based on those objects and then define regularities based on the relevant similarities that we may observe between the members of those ensembles. (A short program that generates the digits of π, making this compressibility vivid, is sketched after these notes.)

  2. As explained, two-part code optimization is a procedure for compressing objects in different ways by selecting different kinds of regularities. One may criticize the generality of this approach by arguing that there are ways of compressing data that do not involve finding patterns in it. For example, we may compress the decimal expansion of π even though it would pass any randomness test based on density measurements; it is an object that involves a great deal of randomness. We need not go into this problem in much detail, since our main concern is whether effective complexity offers a good information measure or not. But one brief response to the criticism might go as follows. Consider the set of all known approximations of π, i.e. the data we have about π. Every member of that set is similar to π in that, say, it is nearer to 3.14 than to 3.15. This is a regularity; indeed, it is an exceptionless regularity: all members of the set have 1 and 4 as their first decimal digits. (This holds even if we include numbers similar to π that are not approximations of it.) Thus, we are able to compress π on the grounds of a regularity shared by all its known approximations. (A toy version of this ensemble check appears after these notes.)

  3. As may be clear, we work here on the basis of a very general definition, for two main reasons. First, we want to tackle the issues pointed out by McAllister and, for that purpose, we follow his simple characterization. Second, we think (as McAllister probably does as well) that no further technicalities are needed to assess the kinds of arguments discussed here. In any case, see the “Appendix” for a somewhat more detailed definition.

  4. Dramatic shifts in complexity have also been studied extensively in the context of algorithmic complexity and two-part code optimization on the basis of Kolmogorov’s structure functions.

  5. Consider the extreme case in which a process suddenly starts behaving in a completely random way. Its algorithmic complexity would increase abruptly. But should we see this as a sign of an emergent property? It seems plausible to think that randomness, as such, could be considered emergent here. There would also be a decrease of effective complexity, given the loss of regularity, and emergence would be present in this sense too. However, on the basis of effective complexity, we can also say something about the relevant emergent properties that are associated with the increasing randomness. (A compression-based illustration of this shift appears after these notes.)

  6. Although the irreducibility criterion may not be satisfied for one-part code optimization, based on algorithmic complexity, it could be satisfied using Kolmogorov’s structure function or other forms of two-part code optimization (cf. Kolmogorov 1974; Li and Vitányi 1997; Vereshchagin and Vitányi 2004).
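
To make the compressibility claim in note 1 concrete, here is a minimal Python sketch. It implements Gibbons' unbounded spigot algorithm (2006), a well-known construction: a generator of a few lines that yields the decimal digits of π one at a time. Since such a short program produces arbitrarily long prefixes of the expansion, the digit string is extremely compressible even though no regularity is apparent in the digits themselves.

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot algorithm: yield the decimal digits of pi."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            # Emit a digit and rescale the state (old values on the right-hand side).
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # Otherwise, consume one more term of the underlying series.
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```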
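
The brief response in note 2 can likewise be run as a toy check. The small ensemble below is a hypothetical stand-in for "the set of all known approximations of π"; the final assertion verifies the exceptionless regularity that every member has 1 and 4 as its first two decimal digits.

```python
# Hypothetical ensemble standing in for "all known approximations of pi".
approximations = [22 / 7, 355 / 113, 3.14, 3.1416, 3.14159265358979]

def first_two_decimals(x: float) -> str:
    """Return the first two digits after the decimal point."""
    return f"{x:.10f}".split(".")[1][:2]

# The exceptionless regularity of note 2: every member begins 3.14...
assert all(first_two_decimals(a) == "14" for a in approximations)
```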
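
Finally, the abrupt shift described in note 5 can be illustrated with a computable proxy. Algorithmic complexity itself is uncomputable, so the sketch below uses zlib's compressed size instead, under the crude assumption that compressed length tracks complexity. A periodic record compresses to almost nothing; once the process "goes random", the compressed size jumps, mirroring the increase in algorithmic complexity and the loss of exploitable regularity.

```python
import random
import zlib

def proxy_complexity(s: str) -> int:
    """Compressed size in bytes: a crude, computable stand-in for
    algorithmic complexity."""
    return len(zlib.compress(s.encode()))

random.seed(0)
periodic = "01" * 5000                                      # regular phase
noise = "".join(random.choice("01") for _ in range(10000))  # random phase

print(proxy_complexity(periodic))          # tiny: the regularity compresses away
print(proxy_complexity(periodic + noise))  # jumps once randomness sets in
```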

References

  • Adriaans, P. (2012). Facticity as the amount of self-descriptive information in a data set. arXiv preprint arXiv:1203.2245. Accessed 15 Nov 2019.

  • Antunes, L., Fortnow, L., Van Melkebeek, D., & Vinodchandran, N. V. (2006). Computational depth: Concept and applications. Theoretical Computer Science, 354(3), 391–404.

  • Ay, N., Müller, M., & Szkoła, A. (2010). Effective complexity and its relation to logical depth. IEEE Transactions on Information Theory, 56(9), 4593–4607.

  • Bennett, C. H. (1982). On the logical “depth” of sequences and their reducibilities to random sequences. IBM report.

  • Bennett, C. H. (1995). Logical depth and physical complexity. In R. Herken (Ed.), The universal Turing machine: A half-century survey (2nd ed., pp. 207–235). Berlin: Springer.

  • Birkhoff, G. D. (1933). Aesthetic measure. Cambridge, MA: Harvard University Press.

  • Chaitin, G. J. (1969). On the simplicity and speed of programs for computing infinite sets of natural numbers. Journal of the ACM, 16(3), 407–422.

  • El-Hani, C., & Pereira, A. (2000). Higher-level descriptions: Why should we preserve them? In P. B. Andersen, C. Emmeche, N. O. Finnemann, & P. V. Christiansen (Eds.), Downward causation (pp. 118–142). Aarhus: University of Aarhus Press.

  • Feyerabend, P. (1962). Explanation, reduction, and empiricism. In H. Feigl & G. Maxwell (Eds.), Scientific explanation, space, and time (Minnesota studies in the philosophy of science, Vol. 3, pp. 28–97). Minneapolis, MN: University of Minnesota Press.

  • Fuentes, M. (2014). Complexity and the emergence of physical properties. Entropy, 16(8), 4489–4496.

  • Gell-Mann, M., & Lloyd, S. (1996). Information measures, effective complexity, and total information. Complexity, 2(1), 44–52.

  • Gell-Mann, M., & Lloyd, S. (2003). Effective complexity. Santa Fe Institute working papers (pp. 387–398).

  • Grünwald, P. D., & Vitányi, P. M. (2003). Kolmogorov complexity and information theory. With an interpretation in terms of questions and answers. Journal of Logic, Language and Information, 12(4), 497–529.

  • Kolmogorov, A. N. (1965). Three approaches to the quantitative definition of information. Problemy Peredachi Informatsii, 1(1), 3–11.

  • Kolmogorov, A. N. (1974). Complexity of algorithms and objective definition of randomness. Uspekhi Matematicheskikh Nauk, 29(4), 155.

  • Koppel, M. (1987). Complexity, depth, and sophistication. Complex Systems, 1(6), 1087–1091.

  • Kuhn, T. (1962). The structure of scientific revolutions. Chicago: The University of Chicago Press.

  • Leiber, T. (1999). Deterministic chaos and computational complexity: The case of methodological complexity reductions. Journal for General Philosophy of Science, 30(1), 87–101.

  • Lewis, D. (1973). Counterfactuals. Oxford: Blackwell.

  • Lewis, D. (1994). Humean supervenience debugged. Mind, 103(412), 473–490.

  • Li, M., & Vitányi, P. M. B. (1997). An introduction to Kolmogorov complexity and its applications (2nd ed.). Berlin: Springer.

  • McAllister, J. (2003). Effective complexity as a measure of information content. Philosophy of Science, 70(2), 302–307.

  • Sartenaer, O. (2016). Sixteen years later: Making sense of emergence (again). Journal for General Philosophy of Science, 47(1), 79–103.

  • Vereshchagin, N. K., & Vitányi, P. M. (2004). Kolmogorov’s structure functions and model selection. IEEE Transactions on Information Theory, 50(12), 3265–3290.

  • Vitányi, P. M. (2006). Meaningful information. IEEE Transactions on Information Theory, 52(10), 4617–4626.

Acknowledgements

We would like to thank the anonymous referees for their helpful comments. This work was financially supported by the Chilean Commission for Scientific and Technological Research (CONICYT; FONDECYT projects #3160180, #1181414 and #11180624).

Author information

Correspondence to Esteban Céspedes.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Following Gell-Mann and Lloyd (1996, 2003), we would like to present here a more specific, step-by-step characterization of effective complexity:

Effective complexity. In order to obtain the effective complexity of a string a, we can, according to a theory or an epistemic context C:

  1. construct an ensemble E on the basis of a, i.e. a set of entities that are similar to a, according to C;

  2. determine the random part of E, according to C, as well as its regular part; and

  3. assign, according to C, a value of algorithmic complexity to the regular part of E.

The value obtained with this procedure is the effective complexity of a, according to a particular epistemic context. Each member of an ensemble is assigned a probability, such that the ensemble’s probability distribution expresses the regularities associated with a. On the basis of that distribution, a’s effective complexity, according to C, is the algorithmic complexity of E’s regular part. Epistemic contexts may involve different constraints regarding the description of a, considerations about other theories that may describe a, and information about members of E other than a, given the regularities expressed by the ensemble.
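
To fix ideas, here is a schematic Python rendering of these three steps, with its assumptions stated plainly: algorithmic complexity is uncomputable, so zlib's compressed length serves as a crude proxy, and an epistemic context C is modeled simply as a function that splits a string into a regular part and a random residue. The names (proxy_complexity, periodic_context) are illustrative, not part of Gell-Mann and Lloyd's formalism. The structural point survives these simplifications: different contexts decompose the same string differently and therefore assign it different values of effective complexity.

```python
import zlib

def proxy_complexity(s: str) -> int:
    """Crude computable stand-in for algorithmic complexity."""
    return len(zlib.compress(s.encode()))

def effective_complexity(a: str, context) -> int:
    """Steps 1-3: let the context C build the regular part of the ensemble
    from a, then assign (proxy) algorithmic complexity to that regular part."""
    regular_part, _residue = context(a)
    return proxy_complexity(regular_part)

def periodic_context(a: str):
    """One possible context C: take the period-2 skeleton of a as the
    regularity shared by the ensemble; the residue marks the deviations."""
    regular = a[:2] * (len(a) // 2)
    residue = "".join("1" if x != y else "0" for x, y in zip(a, regular))
    return regular, residue

print(effective_complexity("0101010101010110", periodic_context))
```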

About this article

Cite this article

Céspedes, E., Fuentes, M. Effective Complexity: In Which Sense is It Informative?. J Gen Philos Sci 51, 359–374 (2020). https://doi.org/10.1007/s10838-019-09487-1
