
Compressing Graphs: a Model for the Content of Understanding

  • Original Research
  • Published in Erkenntnis

Abstract

In this paper, I sketch a new model for the format of the content of understanding states, Compressible Graph Maximalism (CGM). In this model, the format of the content of understanding is graphical and compressible. It thus combines ideas from approaches that stress the link between understanding and holistic structure (like Grimm (in: Grimm, Baumberger, and Ammon (eds) Explaining Understanding: New Essays in Epistemology and the Philosophy of Science, Routledge, New York, 2016)), and approaches that emphasize the connection between understanding and compression (like Wilkenfeld (Philosophical Studies 176:2807–2831, 2018)). I argue that the combination of these ideas has several attractive features, and I defend the idea against some challenges.


Figs. 1–4 (figures not reproduced in this version)


Notes

  1. In this paper, I will defend the idea that understanding states in general have graphical contents, that is, that for any state of understanding u, the format of the content of u is graphical. It is possible to qualify this further, but for reasons that will be made clearer later, I think the general version of the view is preferable.

  2. In fact, here we should distinguish between the topic of understanding and the target proper. ‘How cheese is made’ can be taken as a subject matter, a partition of logical space into the various ways in which cheese might be made, or as a particular cell in that partition, that is, the particular way in which cheese is made in the world that is relevant to our assessment of Anne’s understanding.

  3. I will use representation-talk throughout, but I am not really committed to the existence of representations in the classical sense. It is enough for me that something can serve a certain functional profile, namely, one that is similar to that of classical representations (that it can be said to be about something, that it is information-bearing, that it can be at least partially communicated). In any case, at this stage we are simply reviewing the literature on the connection between understanding and structure, and it is patent that there is a strongly representationalist strand to it.

  4. Things quickly become more complicated, since Wittgenstein allows for understanding to be had by inventing connections.

  5. Hazlett (2018) offers an account of the role of structure for understanding where ‘structure’ stands for something objective as well as joint-carving, and is required for understanding as apt theorizing: according to the view, it is not possible to understand as apt theorizing if there is no objective structure.

  6. A different account that stresses the importance of the structure of content is Meynell’s (2020) pictorial account.

  7. Gardiner (2012) aims to attack veritism (the view that epistemic value is concerned only with truth). Her argument asks us to imagine two robots, Alpha and Beta, who ex hypothesi have the same propositional information about some topic, but who differ in that Alpha stores this information as a list of propositions whereas Beta stores it in a format that includes explicit connections, in ‘something akin to hyperlinks’. Gardiner argues that Beta’s epistemic state is more valuable than Alpha’s, because of the integration that this representational format affords. In fact, Gardiner argues (taking some hints from Stroud (1979)) that Alpha’s state could not have the same epistemic value as Beta’s, because of its lack of integration: the relation between the relevant facts simply cannot be represented by any number of propositions. I will return to this point in Sect. 4.4.

  8. A fuller overview of MDL is beyond my purposes here, so the reader is advised to refer to Grünwald (2007) for details. Later, in Sect. 5.4, I will describe in some detail a compression scheme for graphs in line with MDL ideas.

  9. A good introduction to graph theory can be found in Hartsfield and Ringel (2003). For a broader view, see Bondy and Murty (2008), and for an algorithmic approach, see Jungnickel (2005).

  10. I will allow edges to be either directed or undirected. The main difference is that directed edges are ordered pairs of vertices.

  11. This is sometimes called a pseudograph, since it permits loops and parallel edges, which the standard definition of a (simple) graph disallows.
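
For illustration, here is a minimal sketch (mine, in Python, not from the paper) of a structure that, like a pseudograph, permits loops and parallel edges:

```python
from collections import Counter

class PseudoGraph:
    """A graph that permits loops and parallel (multi-)edges.

    An undirected edge is stored as a frozenset, so {u, v} == {v, u};
    a loop at v is the singleton frozenset {v}. A Counter tracks edge
    multiplicity, which simple graphs disallow.
    """

    def __init__(self):
        self.vertices = set()
        self.edges = Counter()  # edge -> multiplicity

    def add_edge(self, u, v):
        self.vertices.update({u, v})
        self.edges[frozenset({u, v})] += 1

g = PseudoGraph()
g.add_edge("a", "a")   # a loop
g.add_edge("a", "b")
g.add_edge("a", "b")   # a parallel edge
```

A directed variant would store ordered pairs `(u, v)` instead of frozensets, matching the remark in note 10.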

  12. There are actually two ways in which we can take this idea. On the one hand, we can say that the content of an understanding state is a kind of graph. On the other hand, we can say that it suffices that the content can be represented by means of a graph. As a reviewer points out, the former is prima facie a far stronger claim. For my purposes here, the weaker claim suffices. However, perhaps it would be better to say the following: the content of an understanding state is a concrete graph, an object (of some kind) with graphical structure (so, it exemplifies what we may call an abstract graph, which is the subject matter of graph theory), similarly to how Fig. 1 (the diagram inscribed in the page) is a concrete graph that exemplifies an abstract graphical structure. In these cases we recognize that these objects have this structure because we seem to be able to represent some of their features in a graphical manner.

  13. A defender of the propositional model could argue the same point, suggesting that propositional contents can be heterogeneous in ways that accommodate the intuitions of those critics of the propositional model. This has been in fact suggested in the case of so-called mental maps. Camp (2018) offers an attack on those attempts.

  14. Cf. Elgin’s (2009) criticism of Kvanvig (2003) concerning the issue of whether certain individuals should be attributed with understanding in an ‘honorific’ but not literal sense, which can be interpreted in terms of whether the subjects are attributable with understanding or merely with something understanding-like. From a broader perspective, we should acknowledge that ‘mere’ understanding-like states can also have epistemological significance.

  15. Cf. Eklund (2008).

  16. The world might be ‘dappled’, as Cartwright (1999) suggests. See Strevens (2017) for a critical discussion.

  17. This suggests that the external graph might itself be a kind of composite, with a layered structure, or even that there could be a multitude of external graphs. In this case, different layers or graphs could be more relevant to different tasks.

  18. In principle, we could ask that the mapping relation satisfies certain constraints, but I think this is more relevant to the issue of understanding attribution (it could be that a subject’s state is only attributable with understanding if the mapping relation satisfies certain conditions). As I said before, I don’t want to offer an account of understanding attribution here.

  19. With mathematical graphs, appearance is a matter of how the graph is embedded in some space (cf. Gross and Tucker (1987) on topological graph theory, which deals with this kind of object). In the present case, there seem to be two options: the internal graph may afford different conditions for mental access, or it may have an imagistic component with spatial properties.

  20. The resulting format is not only graphical, but also in a sense pictorial, which allows us to capture some of the insights from Meynell (2020).

  21. See footnote 7.

  22. On a similar point, one may be reminded of some ingenious arguments in Kitcher and Varzi (2000) to the effect that pictures (and we may try the assumption that at least some pictures are graphs) are worth \(2^{\aleph _0}\) sentences.

  23. Unordered collections of propositions can be represented by a disconnected graph, where every vertex/proposition has a degree of 0, or as a complete graph (from any proposition of the collection you can proceed to any other). Now, this lack of structure is arguably not even true of written text, since there can be an assortment of different relations between the sentences in a textual body. To that extent, the purely propositional model is also a bad representation of the expression of thoughts.
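
The two degenerate representations mentioned in this note can be made concrete. In this sketch (my illustration, with made-up proposition labels), a collection of propositions is encoded first as an edgeless graph, where every vertex has degree 0, and then as a complete graph, where every vertex is adjacent to every other; both encode no distinctive structure:

```python
from itertools import combinations

propositions = ["p1", "p2", "p3", "p4"]

# Edgeless ("disconnected") representation: no connections at all.
edgeless = {"V": set(propositions), "E": set()}

# Complete representation: every proposition linked to every other,
# which is equally uninformative, since no link is privileged.
complete = {"V": set(propositions),
            "E": {frozenset(pair) for pair in combinations(propositions, 2)}}

def degree(graph, v):
    """Number of edges incident to vertex v."""
    return sum(1 for e in graph["E"] if v in e)
```

On four propositions, every vertex has degree 0 in the first graph and degree 3 in the second; in neither case does the edge structure carry information beyond the membership of the collection.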

  24. See Oellermann (1996) and Freitas et al. (2022) for surveys. Some connectivity measures are computationally expensive, so there is some interest in devising connectivity measures that are useful more broadly. Cf. Beineke et al. (2002).

  25. Liu et al. (2018) give an excellent overview of summarization techniques and challenges.

  26. In the full GM model there may be multiple internal graphs, and each could be the product of a different operation where the external graph is compressed.

  27. Representations of representations could in principle be copies of their targets, so they won’t necessarily involve compression.

  28. This can be generalized as an objection to understanding accounts that rely on ideal states in order to account for degrees of understanding (like in Khalifa (2017) and Kelp (2021)); in those accounts, degrees correspond to the distance between a given state and some ideal. If we cannot in general say how a state approximates ideal conditions, we cannot say what degree of understanding it corresponds to (although we could still make relative comparisons, and that is all it takes to give an account of outright attribution). Cf. Baumberger (2019).

  29. \(C(G_I)\) does not need to be smaller than \(G_I\). By the pigeonhole principle, if \(C\) is a lossless operation, it is bound to map some internal graphs to larger representations. In practice, this motivates the use of domain-specific compression algorithms. See Tate (2003).
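
The counting argument behind this note can be checked directly. The following sketch (my illustration) counts all bitstrings of length n and all strictly shorter bitstrings: since there are fewer short strings than strings of length n, no lossless (injective) compressor can shrink every input.

```python
def count_bitstrings(length):
    """Number of distinct bitstrings of exactly the given length."""
    return 2 ** length

def count_shorter(length):
    """Number of bitstrings strictly shorter than the given length
    (including the empty string): 2^0 + 2^1 + ... + 2^(length-1)."""
    return sum(2 ** k for k in range(length))

n = 10
# 2^n possible inputs but only 2^n - 1 shorter outputs: by the
# pigeonhole principle, a lossless code must map at least one input
# to an output that is not shorter than the input itself.
assert count_bitstrings(n) > count_shorter(n)
```

The same argument applies to encodings of graphs, since any encoding scheme ultimately maps them to strings.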

  30. It is already impossible to go back to \(s_3\) from \(s_4\) if it is not known that \(G_{I1}\) and \(G_{I2}\) are supposed to represent the same target (note that this information is tracked in \(M\) in the full GM model). If it is known, one could simply replace \(G_{I1}\) with \(G_{I2}\).

  31. In fact, it is possible that the subject or their cognitive system may learn new compression strategies as a result of this as well.

  32. Besta and Hoefler (2018) give a survey of lossless graph compression schemes, and Liu et al. (2018) one for graph summarization. A more limited survey can be found in Zhou (2015), from which I take the scheme I discuss here.

  33. Strictly speaking, while in \(G_1\) the set of vertices is \(\{a,b,c,d\}\), in \(R\) the set of vertices should be \(\{\{a\}, \{b\}, \{c\}, \{d\}\}\). Similar points can be made about the sets of edges.
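
The role of singleton vertex sets in this note suggests how such a summarization scheme can be sketched. In the toy version below (my own simplification for illustration, not Zhou's actual algorithm, and with a made-up grouping rule), original vertices become singleton supernodes, and vertices with identical neighbourhoods are merged into larger supernodes, yielding a smaller summary graph R:

```python
from collections import defaultdict

def summarize(vertices, edges):
    """Collapse vertices with identical neighbourhoods into supernodes.

    vertices: hashable labels; edges: set of 2-element frozensets.
    Returns (supernodes, superedges): each supernode is a frozenset of
    original vertices (a singleton when nothing merges), and two
    supernodes are linked when some original edge runs between them.
    """
    neighbours = {v: frozenset() for v in vertices}
    for e in edges:
        u, v = tuple(e)
        neighbours[u] |= {v}
        neighbours[v] |= {u}
    # Group vertices by their neighbourhood signature.
    groups = defaultdict(set)
    for v in vertices:
        groups[neighbours[v]].add(v)
    supernodes = {frozenset(g) for g in groups.values()}
    vmap = {v: sn for sn in supernodes for v in sn}
    superedges = {frozenset({vmap[tuple(e)[0]], vmap[tuple(e)[1]]})
                  for e in edges}
    return supernodes, superedges

# A 4-vertex example: a, b share neighbours {c, d}; c, d share {a, b}.
V1 = {"a", "b", "c", "d"}
E1 = {frozenset(p) for p in [("a", "c"), ("a", "d"), ("b", "c"), ("b", "d")]}
R_nodes, R_edges = summarize(V1, E1)
```

Here four vertices and four edges compress to two supernodes, {a, b} and {c, d}, joined by a single superedge; as the note observes, the vertices of R are sets of the original vertices.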

  34. If you remember, this was also the idea behind Wilkenfeld’s MDL-based proposal.

  35. The graphical model is compatible with there being other non-graphical formats for representation; it is not a necessary part of the model that all representation is uniformly graphical. Here I have assumed the weaker thesis that the vehicles of understanding could be uniformly graphical, but I would not think it strange if ultimately the picture had to be more complicated even in this particular case.

  36. This is analogous to the phenomenon of junk theorems. Cf. Hamkins (2020, p. 18).

  37. Cf. Budhathoki and Vreeken (2018), Pranay and Nagaraj (2021), Vreeken (2015) and Wieczorek and Roth (2016) for related work.

  38. This is independent of the issue of whether memory itself could be accounted for non-representationally.

  39. It is worth noticing that compressibility can be increased either by introducing falsehoods or by introducing new truths. Indirectly, one could then argue that someone could further their understanding through falsehoods, but I think we need to keep those two ideas separate, because the contribution of compressibility to the degree of understanding could be indirect; again, in this paper I am not attempting to give an account of the conditions for understanding attribution. On the point of understanding by falsehoods, see Rancourt (2015), Le Bihan (2017), Lawler (2021) and Elgin (2022), among many others.

  40. In this case, there may also be an internal graph for understanding the analogue model as a system, in addition to the model itself encoding graphical structure for understanding the target of the model.

  41. Cf. Baumberger (2019) for an overview of approaches. An important point of convergence in the literature is the importance of context in characterizing the conditions in which attributions are appropriate. In Morales Carbonell (2022) I also give an overview and present my favored approach, which is a form of higher-order contextualism (where the parameters for evaluation are also selected by context).

References

  • Baumberger, C. (2019). Explicating objectual understanding: Taking degrees seriously. Journal for General Philosophy of Science, 50, 367–388.

  • Beineke, L., Oellermann, O., & Pippert, R. (2002). The average connectivity of graphs. Discrete Mathematics, 252, 31–45.

  • Besta, M., & Hoefler, T. (2018). Survey and taxonomy of lossless graph compression and space-efficient graph representations. https://arxiv.org/abs/1806.01799

  • Blumson, B. (2011). Mental maps. Philosophy and Phenomenological Research, 85(2), 413–434.

  • Bondy, J. A., & Murty, U. S. R. (2008). Graph Theory. Springer.

  • Budhathoki, K., & Vreeken, J. (2018). Origo: Causal inference by compression. Knowledge and Information Systems, 56, 285–307.

  • Camp, E. (2018). Why maps are not propositional. In A. Grzankowski & M. Montague (Eds.), Non-Propositional Intentionality (pp. 19–45). Oxford University Press.

  • Camp, E. (2019). Perspectives and frames in pursuit of ultimate understanding. In S. Grimm (Ed.), Varieties of Understanding: New Perspectives from Philosophy, Psychology, and Theology (pp. 17–45). Oxford University Press.

  • Cartwright, N. (1999). The Dappled World: A Study of the Boundaries of Science. Cambridge University Press.

  • Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.

  • Cummins, R. (1991). Meaning and Mental Representation. The MIT Press.

  • Delariviere, S. (2020). Collective understanding: A conceptual defense for when groups should be regarded as epistemic agents with understanding. Avant, 11(2). https://doi.org/10.26913/avant.2020.02.01

  • Eklund, M. (2008). The picture of reality as an amorphous lump. In T. Sider, J. Hawthorne, & D. W. Zimmerman (Eds.), Contemporary Debates in Metaphysics. Blackwell.

  • Elgin, C. (2009). Is understanding factive? In A. Haddock, A. Millar, & D. Pritchard (Eds.), Epistemic Value (pp. 331–339). Oxford University Press.

  • Elgin, C. (2022). Models as felicitous falsehoods. Principia, 26(1), 7–23.

  • Freitas, S., Yang, D., Kumar, S., et al. (2022). Graph vulnerability and robustness: A survey. https://doi.org/10.1109/TKDE.2022.3163672; https://arxiv.org/abs/2105.00419

  • Gardiner, G. (2012). Understanding, integration and epistemic value. Acta Analytica, 27, 163–181.

  • Gelfert, A. (2016). How to Do Science with Models: A Philosophical Primer. Springer.

  • Gladziejewski, P. (2015). Explaining cognitive phenomena with internal representations: A mechanistic perspective. Studies in Logic, Grammar and Rhetoric, 40, 63–90.

  • Gladziejewski, P., & Milkowski, M. (2017). Structural representations: Causally relevant and different from detectors. Biology & Philosophy, 32, 337–355.

  • Gopnik, A., & Glymour, C. (2002). Causal maps and Bayes nets: A cognitive and computational account of theory-formation. In P. Carruthers, S. Stich, & M. Siegal (Eds.), The Cognitive Basis of Science (pp. 117–132). Cambridge University Press.

  • Gopnik, A., Glymour, C., Sobel, D., et al. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review, 111, 3–32.

  • Gray, E., & Tall, D. (2007). Abstraction as a natural process of mental compression. Mathematics Education Research Journal, 19(2), 23–40.

  • Grimm, S. (2016). Understanding and transparency. In S. Grimm, C. Baumberger, & S. Ammon (Eds.), Explaining Understanding: New Essays in Epistemology and the Philosophy of Science (pp. 212–229). Routledge.

  • Gross, J., & Tucker, T. W. (1987). Topological Graph Theory. Dover Publications.

  • Grünwald, P. (2007). The Minimum Description Length Principle. The MIT Press.

  • Hamkins, J. D. (2020). Lectures on the Philosophy of Mathematics. The MIT Press.

  • Hartsfield, N., & Ringel, G. (2003). Pearls in Graph Theory: A Comprehensive Introduction. Dover Publications.

  • Hazlett, A. (2018). Understanding and structure. In Making Sense of the World: New Essays in the Philosophy of Understanding (pp. 135–158). Oxford University Press.

  • Hills, A. (2016). Understanding why. Nous, 50(4), 661–688.

  • Hohwy, J. (2014). The Predictive Mind. Oxford University Press.

  • Hutto, D. D., & Myin, E. (2017). Evolving Enactivism: Basic Minds Meet Content. The MIT Press.

  • Jungnickel, D. (2005). Graphs, Networks and Algorithms. Springer.

  • Kelp, C. (2021). Inquiry, Knowledge and Understanding. Oxford University Press.

  • Khalifa, K. (2017). Understanding, Explanation, and Scientific Knowledge. Oxford University Press.

  • Khalifa, K., Islam, F., Gamboa, J. P., et al. (2022). Integrating philosophy of understanding with the cognitive sciences. Frontiers in Systems Neuroscience, 16. https://doi.org/10.3389/fnsys.2022.764708

  • Kitcher, P., & Varzi, A. (2000). Some pictures are worth \(2^{\aleph _0}\) sentences. Philosophy, 75(3), 377–381.

  • Kvanvig, J. (2003). The Value of Knowledge and the Pursuit of Understanding. Cambridge University Press.

  • Lawler, I. (2021). Scientific understanding and felicitous legitimate falsehoods. Synthese, 198(7), 6859–6887.

  • Le Bihan, S. (2017). Enlightening falsehoods: A modal view of scientific understanding. In S. Grimm, C. Baumberger, & S. Ammon (Eds.), Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science (pp. 111–135). Routledge.

  • Lee, J. (2018). Structural representation and the two problems of content. Mind and Language, 34(5), 606–626.

  • Li, M., & Vitanyi, P. (2019). An Introduction to Kolmogorov Complexity and its Applications (4th ed.). Springer.

  • Liu, Y., Safavi, T., Dighe, A., et al. (2018). Graph summarization methods and applications: A survey. ACM Computing Surveys, 51(3), 1–34.

  • Massimi, M. (2022). Perspectival Realism. Oxford University Press.

  • Meynell, L. (2020). Getting the picture: A new account of scientific understanding. In The Aesthetics of Science: Beauty, Imagination and Understanding (pp. 36–62). Routledge.

  • Morales Carbonell, F. (2022). Understanding attributions: Problems, options, and a proposal. Theoria, 88(3), 558–583. https://doi.org/10.1111/theo.12380

  • Moyal-Sharrock, D. (2021). From deed to word: Gapless and kink-free enactivism. Synthese, 198, 405–425.

  • Navlakha, S., Rastogi, R., & Shrivastava, N. (2008). Graph summarization with bounded error. In SIGMOD '08: Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data (pp. 419–432). ACM.

  • Oellermann, O. (1996). Connectivity and edge-connectivity in graphs: A survey. Congressus Numerantium, 116, 231–252.

  • Pearson, J., & Kosslyn, S. (2015). The heterogeneity of mental representation: Ending the imagery debate. Proceedings of the National Academy of Sciences, 112(33), 10089–10092.

  • Pranay, S., & Nagaraj, N. (2021). Causal discovery using compression-complexity measures. Journal of Biomedical Informatics, 117, 103742.

  • Ramsey, W. (2007). Representation Reconsidered. Cambridge University Press.

  • Rancourt, B. (2015). Better understanding through falsehood. Pacific Philosophical Quarterly. https://doi.org/10.1111/papq.12134

  • Rescorla, M. (2009). Cognitive maps and the language of thought. British Journal for the Philosophy of Science, 60, 377–407.

  • Robinet, V., Lemaire, B., & Gordon, M. B. (2011). MDLChunker: An MDL-based cognitive model of inductive learning. Cognitive Science, 35(7), 1352–1389.

  • Shea, N. (2018). Representation in Cognitive Science. Oxford University Press.

  • Strevens, M. (2017). Dappled science in a unified world. In H. Chao, J. Reiss, & S. Chen (Eds.), Philosophy of Science in Practice: Nancy Cartwright and the Nature of Scientific Reasoning. Springer.

  • Stroud, B. (1979). Inference, belief and understanding. Mind, 88, 179–196.

  • Tate, S. R. (2003). Complexity measures. In K. Sayood (Ed.), Lossless Compression Handbook. Academic Press.

  • Tenenbaum, J., Kemp, C., Griffiths, T., et al. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022), 1279–1285.

  • Thurston, W. P. (1990). Mathematical education. Notices of the AMS, 37, 844–850.

  • Toon, A. (2015). Where is the understanding? Synthese, 192(12), 3859–3875.

  • Vreeken, J. (2015). Causal inference by direction of information. In Proceedings of the 2015 SIAM International Conference on Data Mining (pp. 909–917). Society for Industrial and Applied Mathematics.

  • Weisberg, M. (2013). Simulation and Similarity: Using Models to Understand the World. Oxford University Press.

  • Wieczorek, A., & Roth, V. (2016). Causal compression. https://arxiv.org/abs/1611.00261

  • Wilkenfeld, D. A. (2013). Understanding as representation manipulability. Synthese, 190(6), 997–1016.

  • Wilkenfeld, D. A. (2018). Understanding as compression. Philosophical Studies, 176, 2807–2831. https://doi.org/10.1007/s11098-018-1152-1

  • Wittgenstein, L. (2009). Philosophical Investigations (4th ed.). Wiley-Blackwell.

  • Ylikoski, P. (2014). Agent-based simulation and sociological understanding. Perspectives on Science, 22, 318–335.

  • Zagzebski, L. (2001). Recovering understanding. In M. Steup (Ed.), Knowledge, Truth, and Duty: Essays on Epistemic Justification, Responsibility, and Virtue (pp. 235–251). Oxford University Press.

  • Zhou, F. (2015). Graph compression (Tech. rep.). Department of Computer Science and Helsinki Institute for Information Technology HIIT.

  • Zhou, Y., Zheng, H., Huang, X., et al. (2022). Graph neural networks: Taxonomy, advances, and trends. ACM Transactions on Intelligent Systems and Technology, 13(1), 1–54. https://doi.org/10.1145/3495161


Acknowledgements

This research was funded by ANID (Chile), project 3220017. Earlier versions of this work were presented at the 4th SURe workshop (NY/online, April 2022) and at the CLPS seminar in Leuven (October 2022). I am thankful for the feedback given on those occasions, especially from Conor Mayo-Wilson, Letitia Meynell, Leander Vignero, Jan Heylen, Wouter Termont, Giulia Lorenzi, Kristine Grigoryan, and Lorenz Demey. I am also thankful for the feedback received from reviewers.

Author information


Corresponding author

Correspondence to Felipe Morales Carbonell.

Ethics declarations

Conflict of interest

The author has no conflicts of interest to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Morales Carbonell, F. Compressing Graphs: a Model for the Content of Understanding. Erkenn (2023). https://doi.org/10.1007/s10670-023-00694-3

