
How to turn an MAS into a graphical causal model

Autonomous Agents and Multi-Agent Systems

Abstract

This paper proposes that an appropriately configured multi-agent system (MAS) is formally equivalent to a graphical causal model (GCM, a broad category that includes many formalisms expressed as directed graphs), and offers benefits over other GCMs in modeling a social scenario. MASs often use GCMs to support their operation, but MASs themselves are not usually viewed, in turn, as tools for executing causal models. We argue that the definition of a GCM should include its update mechanism, an often-overlooked component. We review a wide range of GCMs to validate this definition and point out limitations that they face when applied to the social and psychological dimensions of causality. Then we describe Social Causality using Agents with Multiple Perspectives (SCAMP), a causal language and multi-agent simulator that satisfies our definition and overcomes the limitations of other GCMs for social simulation.


Notes

  1. In addition to the author, the SCAMP team included Mike Cox, Jason Greanya, Peggy McCarthy, Jonny Morell, Sri Nadella, and Laura Sappelsa, with consulting input from Kathleen Carley.

  2. Major collaborators include, alphabetically by last name, Rafael Alonso, Ted Belding, Rob Bisson, Sven Brueckner, Mike Cox, Keith Decker, Liz Downs, Jason Greanya, Rainer Hilscher, Hua Li, Bob Matthews, Scott Page, Rich Rohwer, Mike Samples, Laura Sappelsa, John Sauter, Bob Savit, Peter Weinstein, and Andrew Yinger.

  3. SCAMP allows cycles, so the CEG no longer defines a partial order over events, but this discussion excludes such cycles. Their presence strengthens our argument by adding to the space of possible narratives.

References

  1. Albrecht, S., & Stone, P. (2018). Autonomous agents modelling other agents: A comprehensive survey and open problems. Artificial Intelligence, 258, 66–95.


  2. Amorim, L. D. A. F., Fiaccone, R. L., Santos, C. A. S. T., Santos, T. N. d., Moraes, L. T. L. P. d., Oliveira, N. F., Barbosa, S. O., Santos, D. N. d., Santos, L. M. d., Matos, S. M. A., & Barreto, M. L. (2010). Structural equation modeling in epidemiology. Cadernos de Saúde Pública, 26, 2251–2262.


  3. Argonne National Laboratory. (2007). Repast agent simulation toolkit. http://repast.sourceforge.net/

  4. Baez, J. C., & Biamonte, J. D. (2018). Quantum techniques for stochastic mechanics. World Scientific.


  5. Bernstein, D. S., Zilberstein, S., & Immerman, N. (2000) The complexity of decentralized control of Markov decision processes. In C. Boutilier & M. Goldszmidt (Eds.) Sixteenth conference on uncertainty in artificial intelligence (UAI2000) (pp. 32–37). Morgan Kaufmann.

  6. Bonabeau, E., Dorigo, M., & Theraulaz, G. (1999). Swarm intelligence: From natural to artificial systems. SFI studies in the sciences of complexity. Oxford University Press.


  7. Busemeyer, J. R., & Townsend, J. T. (1993). Decision field theory: A dynamic-cognitive approach to decision making in an uncertain environment. Psychological Review, 100(3), 432–459.


  8. Dickau, R. (2020). Counting lattice paths. https://www.robertdickau.com/lattices.html

  9. Díez, F. J., & Druzdzel, M. (2006). Canonical probabilistic models for knowledge engineering. Technical Report CISIAD-06-01, UNED.

  10. Feynman, R. (1948). Space-time approach to non-relativistic quantum mechanics. Reviews of Modern Physics, 20(2), 367–387. http://authors.library.caltech.edu/47756/1/FEYrmp48.pdf

  11. Fisher, W. R. (1989). Human communication as narration: Toward a philosophy of reason, value, and action. University of South Carolina Press.


  12. Forrester, J. W. (1961). Industrial dynamics. MIT Press.


  13. Friedkin, N. E., & Johnsen, E. C. (2011). Social influence network theory: A sociological examination of small group dynamics. Structural analysis in the social sciences. Cambridge University Press.


  14. Gal, Y., & Pfeffer, A. (2003). A language for modeling agents’ decision making processes in games. In Proceedings of the second international joint conference on Autonomous agents and multiagent systems (pp. 265–272). Association for Computing Machinery

  15. Gal, Y., & Pfeffer, A. (2008). Networks of influence diagrams: A formalism for representing agents’ beliefs and decision-making processes. Journal of Artificial Intelligence Research, 33, 109–147.


  16. Grassé, P. P. (1959). La reconstruction du nid et les coordinations inter-individuelles chez Bellicositermes natalensis et Cubitermes sp. La théorie de la stigmergie: Essai d’interprétation du comportement des termites constructeurs. Insectes Sociaux, 6, 41–84.


  17. Gray, S. A., Gray, S., De Kok, J. L., Helfgott, A. E. R., O’Dwyer, B., Jordan, R., & Nyaki, A. (2015). Using fuzzy cognitive mapping as a participatory approach to analyze change, preferred states, and perceived resilience of social-ecological systems. Ecology and Society, 20(2), 11.

  18. Grimm, V., Berger, U., DeAngelis, D. L., Polhill, J. G., Giske, J., & Railsback, S. F. (2010). The ODD protocol: A review and first update. Ecological Modelling, 221(23), 2760–2768. http://www.agsm.edu.au/bobm/teaching/SimSS/ODD_protocol.pdf

  19. Haas, P. J. (2002). Stochastic Petri Nets: Modelling, stability, simulation. Springer.


  20. Hazen, G. B. (2004). Dynamic influence diagrams: Applications to medical decision modeling. In M. L. Brandeau, F. Sainfort & W. P. Pierskalla (Eds.), Operations research and health care: A handbook of methods and applications (pp. 613–638). Springer. https://doi.org/10.1007/1-4020-8066-2_24.

  21. Hermellin, E., & Michel, F. (2016). GPU delegation: Toward a generic approach for developing MABS using GPU programming. In C. M. Jonker, S. Marsella, J. Thangarajah, & K. Tuyls (Eds.), Proceedings of the 2016 international conference on autonomous agents and multiagent systems (pp. 1249–1258).

  22. Heuer, R. J., Jr., & Pherson, R. H. (2010). Structured analytic techniques for intelligence analysis. CQ Press.


  23. Horling, B., Lesser, V., Vincent, R., Wagner, T., Raja, A., Zhang, S., Decker, K., & Garvey, A. (2004). The TÆMS white paper. http://mas.cs.umass.edu/pub/paper_detail.php/182

  24. Howard, R. A., & Matheson, J. E. (1984). Influence diagrams. In R. A. Howard & J. E. Matheson (Eds.), Readings on the principles and applications of decision analysis (Vol. 2, pp. 719–762). Strategic Decisions Group.


  25. Howard, R. A., & Matheson, J. E. (2005). Influence diagrams. Decision Analysis, 2(3), 127–143.


  26. Ishikawa, K., & Loftus, J. H. (1990). Introduction to quality control. 3A Corporation.

  27. Jensen, F. V., Nielsen, T. D., & Shenoy, P. P. (2006). Sequential influence diagrams: A unified asymmetry framework. International Journal of Approximate Reasoning, 42(1), 101–118.


  28. Jensen, F. V., & Vomlelová, M. (2002). Unconstrained influence diagrams. In A. Darwiche & N. Friedman (Eds.), Eighteenth conference on uncertainty in artificial intelligence (UAI’02) (pp. 234–241). Morgan Kaufmann.

  29. Kaelbling, L. P., Littman, M. L., & Cassandra, A. R. (1998). Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1), 99–134. https://doi.org/10.1016/S0004-3702(98)00023-X

  30. Kahneman, D., & Tversky, A. (1982). The simulation heuristic. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 201–208). Cambridge University Press.


  31. Koller, D., & Milch, B. (2001). Multi-agent influence diagrams for representing and solving games. In Proceedings of the 17th international joint conference on artificial intelligence (Vol. 2, pp. 1027–1034). Morgan Kaufmann.

  32. Kosko, B. (1986). Fuzzy cognitive maps. International Journal of Man-Machine Studies, 24(1), 65–75.


  33. Lauritzen, S. L., & Nilsson, D. (2001). Representing and solving decision problems with limited information. Management Science, 47(9), 1235–1251.


  34. Levenshtein, V. I. (1966). Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8), 707–710.


  35. Lindley, C. A. (2005). Story and narrative structures in computer games. In B. Bushoff (Ed.), Developing interactive narrative content (Chap. 10). High Text Verlag.

  36. Mauá, D. D., de Campos, C. P., & Zaffalon, M. (2012). Solving limited memory influence diagrams. Journal of Artificial Intelligence Research, 44, 97–140.


  37. Mumford, S., & Anjum, R. L. (2013). Causation: A very short introduction. Oxford University Press.


  38. Nielsen, T. D., & Jensen, F. V. (1999). Well-defined decision scenarios. In The fifteenth conference on uncertainty in artificial intelligence (UAI 1999) (pp. 502–511). Morgan Kaufmann.

  39. Park, C. Y., Laskey, K., Costa, P. C. G., & Matsumoto, S. (2014). A predictive situation awareness reference model using multi-entity Bayesian networks. In Seventeenth international conference on information fusion (FUSION 2014).

  40. Parunak, H. V. D. (2004). Evolving swarming agents in real time. In Genetic programming theory and practice (GPTP05). Springer. https://www.abcresearch.org/abc/papers/GPTP05.pdf

  41. Parunak, H. V. D. (2006). A survey of environments and mechanisms for human-human stigmergy. In D. Weyns, F. Michel, & H. V. D. Parunak (Eds.), Proceedings of E4MAS 2005. Lecture notes on AI (Vol. 3830, pp. 163–186). Springer.

  42. Parunak, H. V. D. (2020). Psychology from stigmergy. In Computational social science (CSS 2020). CSSSA (forthcoming).

  43. Parunak, H. V. D. (2020). ODD protocol for SCAMP. Report. Wright State Research Institute. https://abcresearch.org/abc/papers/ODD4SCAMP.pdf

  44. Parunak, H. V. D. (2021). Learning actor preferences by evolution. In Computational social science (CSS21). CSSSA.

  45. Parunak, H. V. D. (2021). Social simulation for non-hackers. In K. H. Van Dam & N. Verstaevel (Eds.), 22nd International workshop on multi-agent-based simulation (MABS 2021). Springer.

  46. Parunak, H. V. D., Belding, T., Bisson, R., Brueckner, S., Downs, E., Hilscher, R., & Decker, K. (2009). Stigmergic modeling of hierarchical task networks. In G. D. Tosto & H. V. D. Parunak (Eds.), The tenth international workshop on multi-agent-based simulation (MABS 2009, at AAMAS 2009) (Vol. 5683, pp. 98–109). Springer.

  47. Parunak, H. V. D., & Brueckner, S. (2006). Concurrent modeling of alternative worlds with polyagents. In The seventh international workshop on multi-agent-based simulation (MABS06, at AAMAS06) (pp. 128–141). Springer.

  48. Parunak, H. V. D., Brueckner, S., Downs, L., & Sappelsa, L. (2012). Swarming estimation of realistic mental models. In F. Giardini & F. Amblard (Eds.), Thirteenth workshop on multi-agent based simulation (MABS 2012, at AAMAS 2012) (Vol. 7838, pp. 43–55). Springer.

  49. Parunak, H. V. D., Brueckner, S. A., Matthews, R., & Sauter, J. (2006). Swarming methods for geospatial reasoning. International Journal of Geographical Information Science, 20(9), 945–964.


  50. Parunak, H. V. D., Greanya, J., Morell, J. A., Nadella, S., & Sappelsa, L. (2021). SCAMP’s stigmergic model of social conflict. Computational and Mathematical Organization Theory. https://doi.org/10.1007/s10588-021-09347-8

  51. Parunak, H. V. D., Morell, J. A., Sappelsa, L., & Greanya, J. (2020). SCAMP user manual. Report, Parallax Advanced Research https://www.abcresearch.org/abc/papers/SCAMPUserManual.zip

  52. Parunak, H. V. D., Savit, R., & Riolo, R. L. (1998). Agent-based modeling vs. equation-based modeling: A case study and users’ guide. In N. Gilbert, R. Conte, & J. S. Sichman (Eds.), First International Workshop on Multi-agent systems and agent-based simulation. LNCS (pp. 10–25). Springer.

  53. Pearl, J. (2009). Causality (2nd ed.). Cambridge University Press.


  54. Pearl, J., & Mackenzie, D. (2018). The book of why. Basic Books.


  55. Pfautz, J., Cox, Z., Catto, G., Koelle, D., Campolongo, J., & Roth, E. (2007). User-centered methods for rapid creation and validation of Bayesian belief networks. In K. B. Laskey, S. M. Mahoney, & J. Goldsmith (Eds.), Fifth Bayesian modeling applications workshop (UAI-AW 2007) at BMA ’07 (pp. 37–46).

  56. Polich, K., & Gmytrasiewicz, P. (2007). Interactive dynamic influence diagrams. In 6th International joint conference on autonomous agents and multi-agent systems (pp. 1–3).

  57. Pynadath, D., & Marsella, S. (2005). PsychSim: Modeling theory of mind with decision-theoretic agents. In International joint conference on artificial intelligence (pp. 1181–1186).

  58. Pynadath, D. V., Dilkina, B., Jeong, D. C., John, R. S., Marsella, S. C., Merchant, C., Miller, L. C., & Read, S. J. (2021). Disaster world: Decision-theoretic agents for simulating population responses to hurricanes. Computational & Mathematical Organization Theory (forthcoming).

  59. Pynadath, D. V., & Tambe, M. (2002). The communicative multiagent team decision problem: Analyzing teamwork theories and models. Journal of Artificial Intelligence Research, 16, 389–423.


  60. Raiffa, H., & Schlaifer, R. (1961). Applied statistical decision theory. Harvard University.


  61. Rao, A. S., & Georgeff, M. P. (1995). BDI agents: From theory to practice. In The first international conference on multi-agent systems (ICMAS-95) (pp. 312–319). AAAI.

  62. Richards, W., Finlayson, M. A., & Winston, P. H. (2009). Advancing computational models of narrative. Tech. Rep. MIT-CSAIL-TR-2009-063, MIT CSAIL.

  63. Robins, J. M., Hernán, M., & Brumback, B. (2000). Marginal structural models and causal inference in epidemiology. Epidemiology, 11(5), 550–560.


  64. Rosen, J. A., & Smith, W. L. (1996). Influence net modeling with causal strengths: An evolutionary approach. In Command and control research and technology symposium.

  65. Sappelsa, L., Parunak, H. V. D., & Brueckner, S. (2014). The generic narrative space model as an intelligence analysis tool. American Intelligence Journal, 31(2), 69–78.


  66. Sauter, J. A., Matthews, R., Parunak, H. V. D., & Brueckner, S. (2002). Evolving adaptive pheromone path planning mechanisms. In Autonomous agents and multi-agent systems (AAMAS02) (pp. 434–440). ACM. https://www.abcresearch.org/abc/papers/AAMAS02Evolution.pdf

  67. Savage, E. L., Schruben, L. W., & Yücesan, E. (2005). On the generality of event-graph models. INFORMS Journal on Computing, 17(1), 3–9.


  68. Shapiro, B. P., van den Broek, P., & Fletcher, C. R. (1995). Using story-based causal diagrams to analyze disagreements about complex events. Discourse Processes, 20(1), 51–77.


  69. Sheyner, O. M. (2004). Scenario graphs and attack graphs. Ph.D. thesis, Computer Science

  70. Shivashankar, V. (2015). Hierarchical goal networks: Formalisms and algorithms for planning and acting. Ph.D. thesis, Computer Science.

  71. Shnerb, N. M., Louzoun, Y., Bettelheim, E., & Solomon, S. (2000). The importance of being discrete: Life always wins on the surface. Proceedings of the National Academy of Sciences of the United States of America, 97(19), 10322–10324.


  72. Simon, H. (1969). The sciences of the artificial. MIT Press.


  73. Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99–118.


  74. Sterman, J. (2000). Business dynamics. McGraw-Hill.


  75. Tatman, J. A., & Shachter, R. D. (1990). Dynamic programming and influence diagrams. IEEE Transactions on Systems, Man, and Cybernetics, 20(2), 365–379.


  76. VanderWeele, T. J. (2012). Invited commentary: Structural equation models and epidemiologic analysis. American Journal of Epidemiology, 176(7), 608–612.


  77. Wright, S. (1934). The method of path coefficients. Annals of Mathematical Statistics, 5(3), 161–215.


  78. Xiang, Y., Jensen, F., & Chen, X. (2006). Inference in multiply sectioned Bayesian networks: Methods and performance comparison. IEEE Systems, Man, and Cybernetics, 36(3), 546–558.


  79. Xiang, Y., Poole, D., & Beddoes, M. P. (1993). Multiply sectioned Bayesian networks and junction forests for large knowledge-based systems. Computational Intelligence, 9(2), 171–220.



Acknowledgements

The development of SCAMP was funded by the Defense Advanced Research Projects Agency (DARPA), under Cooperative Agreement HR00111820003. The content of this paper does not necessarily reflect the position or the policy of the US Government, and no official endorsement should be inferred. This paper has benefited from the careful reading and thoughtful suggestions of multiple JAAMAS reviewers.

Author information


Corresponding author

Correspondence to H. Van Dyke Parunak.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Details on other graphical causal models

This appendix describes each formalism (other than SCAMP) listed in Table 1, and gives an example of it. These examples build on the children’s rhyme introduced in Sect. 4:

Little Miss Muffet sat on a Tuffet

Eating her curds and whey.

Along came a spider, and sat down beside her,

And frightened Miss Muffet away.

We illustrate how each formalism might represent the causal relations involving whether or not Miss Muffet is at the Tuffet. The formalisms fall into several classes, grouped by the horizontal lines within Table 1.

1.1 Non-computational

Some formalisms are intended primarily for human examination. The semantics for nodes and their values (and in one case, for edges) are quite loose, and updating U is an informal subjective assessment by the practitioners.

Factor tree analysis (root cause analysis) includes several causal graphs used in industry. These graphs are for human review rather than automatic analysis, so the nodes, values, and update mechanisms are not formalized. One example is the Ishikawa or fishbone diagram [26], used to trace the causes of quality problems in manufacturing. In this model, the top-level causes are pre-defined branches (e.g., Equipment, Process, People, Materials, Environment, Management), to lead analysts to consider different areas where quality problems may arise. Edges leading into each of these branches are primary causes; they in turn support secondary causes, and so forth. The lower level nodes are verbal descriptions of causes, and may be events (e.g., “shipments delayed”), measurable observations (“rusty components”), or even prepositional phrases. Figure 8 shows a simple fishbone diagram for the problem “Miss Muffet not on Tuffet.” This diagram suggests that the problem may result from difficulties with material (lack of curds and whey), personnel (Miss Muffet ill), or the environment (presence of a spider).

Fig. 8 Factor tree for Miss Muffet

Such diagrams contribute greatly to the quality of manufactured products, but are too ambiguous for formal analysis.

The nodes in causal loop diagrams [74] are understood to have scalar values, though they are not evaluated. Edges are labeled with a + if an increase in the source promotes an increase in the destination, and a - if an increase in the source promotes a decrease in the destination. Optionally, a double slash // indicates an unspecified time delay. Figure 9 shows a causal loop diagram for Miss Muffet. The left-hand loop means that she will only sit on the Tuffet if she has something to eat, but the longer she is there, the less curds and whey will remain. The right-hand loop means that she eventually attracts the lonely spider, who scares her away. Causal loop diagrams, unlike many other formalisms, do support causal loops, and the (informal) updating U for a given node is understood to reflect the changes in the node’s value over time.

Fig. 9 Causal loop diagram for Miss Muffet

While the causal loop diagram is qualitative and not quantitative, it is the basis for the stock-and-flow model, discussed in “Other analytic models” section in Appendix 1, which supports a set of ordinary differential equations and can thus yield quantitative results.

1.2 Correlation

Sewall Wright’s path diagrams [77] are the basis for modern structural equation models (SEMs) [2, 76] and marginal structural models (MSMs) [63], and an inspiration for Bayesian causal diagrams [54]. Path diagrams compute correlations between variables connected either directly or indirectly by directed edges. The update mechanism U computes the correlation between the end points of a path as the product of the correlations along that path, and sums the contributions of the distinct paths connecting the same end points. Extensions estimate the values of latent variables from observed variables and compute conditional probabilities throughout the graph under assumptions of causal influence, but say nothing about how these conditional relations arise. Path diagrams do not represent time or allow cycles, and U propagates node values through the graph at a single point in time. Figure 10 shows a simple path diagram for Miss Muffet. The nodes representing the presence of Miss Muffet and of the spider at the Tuffet are binary.

Fig. 10 Path diagram for Miss Muffet
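To make U concrete, the following Python sketch (our illustration; the chain and its coefficients are assumptions, not the values of Fig. 10) computes the implied correlation between two variables as the sum, over all directed paths connecting them, of the product of the path coefficients along each path.

```python
from functools import reduce

# Illustrative path coefficients on directed edges (assumed, not from Fig. 10)
edges = {
    ("curds_and_whey", "muffet_at_tuffet"): 0.6,    # food draws Miss Muffet
    ("muffet_at_tuffet", "spider_at_tuffet"): 0.5,  # her presence attracts the spider
    ("spider_at_tuffet", "muffet_leaves"): 0.8,     # the spider frightens her away
}

def paths(src, dst, seen=()):
    """Enumerate all directed paths from src to dst (the diagram is acyclic)."""
    if src == dst:
        yield ()
        return
    for (a, b) in edges:
        if a == src and b not in seen:
            for rest in paths(b, dst, seen + (b,)):
                yield ((a, b),) + rest

def implied_correlation(src, dst):
    """Sum over connecting paths of the product of coefficients along each."""
    return sum(reduce(lambda r, e: r * edges[e], p, 1.0)
               for p in paths(src, dst))

print(implied_correlation("curds_and_whey", "muffet_leaves"))  # 0.6*0.5*0.8 = 0.24
```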

1.3 Probabilistic

Bayesian causal diagrams [53] are a dominant causal formalism, commonly used in planning experiments and selecting experimental variables. Figure 11 shows a simple causal diagram for Miss Muffet. Nodes carry probabilities, and U evaluates these probabilities by multiplying the probabilities of cause nodes through conditional probability tables attached to each effect node. One application of such graphs relies on convention rather than computation for U: because of the formalism’s probabilistic semantics, the pattern of connections alone can be used for experimental design, identifying which variables should and should not be observed to confirm a causal hypothesis.

Fig. 11 Causal diagram for Miss Muffet
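For the simplest case, a single cause with a conditional probability table (CPT) on its effect, U reduces to marginalizing the parent out. The following sketch is ours, with illustrative probabilities.

```python
# A minimal sketch of U for a two-node Bayesian causal diagram: the effect's
# probability is obtained by multiplying the parent's probability through the
# effect node's CPT. All numbers are illustrative assumptions.

p_spider = 0.3                       # P(spider arrives at the Tuffet)

# CPT: P(Miss Muffet stays at the Tuffet | spider present?)
cpt_muffet_stays = {True: 0.05, False: 0.95}

# Marginalize the parent out: P(stays) = sum_s P(stays | s) P(s)
p_stays = (cpt_muffet_stays[True] * p_spider
           + cpt_muffet_stays[False] * (1 - p_spider))
print(f"P(Miss Muffet stays) = {p_stays:.3f}")   # 0.05*0.3 + 0.95*0.7 = 0.680
```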

These graphs have two shortcomings. First, like path diagrams, the underlying mathematics does not allow cycles or support time intrinsically (though temporal extensions have been proposed [39]). Second, computation over probabilities requires complete conditional probability tables on the nodes, and these are hard to procure, particularly for non-repeatable social situations offering limited data. The latter problem has motivated the development of “canonical models” [9] that make simplifying assumptions about the relations among the nodes in order to reduce the parameters needed to evaluate the model.

One example of a canonical model is the Influence Net [64]: nodes are events with baseline probabilities. Edges can be supporting or inhibitory, and contain two probabilities: that the effect will obtain if the cause is true, and if the cause is false. U propagates these values across a net—again, at a single point in time. The method recognizes the importance of events rather than variables as nodes, but characterizes them simply by probability of occurrence, treating them as propositions of the form “Event X occurred” to which belief values can be assigned.
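A minimal sketch of one step of this propagation, under a single-parent simplification (full Influence Nets combine multiple parents using causal-strength logic [64], which we do not reproduce; all numbers are illustrative):

```python
# Baseline probability of the cause event (assumed)
p_cause = {"spider_arrives": 0.4}

# Each edge carries two probabilities:
# (cause, effect) -> (P(effect | cause true), P(effect | cause false))
edge = {("spider_arrives", "muffet_flees"): (0.9, 0.1)}

def propagate(cause, effect):
    """One forward step of U, by the law of total probability."""
    p_true, p_false = edge[(cause, effect)]
    pc = p_cause[cause]
    return p_true * pc + p_false * (1 - pc)

print(propagate("spider_arrives", "muffet_flees"))   # 0.9*0.4 + 0.1*0.6 = 0.42
```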

Another canonical simplification of the Bayesian causal graph is the Causal Influence Model [55], whose nodes can be Boolean, ordinal, or categorical. They have a baseline probability (if categorical, a baseline probability for each option), and the connection between two nodes is an influence in [− 1, 1] on the probability of the target. U updates the baseline probability of the child node based on a function (typically the mean) of the influences of its parents.
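A sketch of this update follows; the precise adjustment rule is our assumption (a positive mean influence shifts the baseline toward 1, a negative mean toward 0), and all numbers are illustrative.

```python
# Hedged sketch of the Causal Influence Model update U: each node has a
# baseline probability, each incoming edge an influence in [-1, 1], and the
# child's probability is adjusted by a function (here the mean) of the
# parental influences.

def update(baseline, influences):
    m = sum(influences) / len(influences)       # mean parental influence
    if m >= 0:
        return baseline + m * (1 - baseline)    # shift toward certainty
    return baseline + m * baseline              # shift toward impossibility

# Miss Muffet's probability of staying, given a frightening spider (-0.8)
# and appetizing curds and whey (+0.3):
print(update(0.9, [-0.8, 0.3]))   # mean = -0.25 -> 0.9 - 0.25*0.9 = 0.675
```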

1.4 Influence diagrams

Influence diagrams are a large and important family of GCMs. Their central feature is the distinction between decision nodes (reflecting agent choices) and uncertainty or chance nodes (reflecting chance events). Chance nodes may model questions or experiments that the decision maker can perform, with various probabilities of outcomes. This distinction emerged from an earlier formalism, decision trees, which constrained the order of evaluation of the two node types to be a tree, as alternate moves in a game between the decision maker and a partner named “chance” [60]. Influence diagrams ([24], reprinted as [25]) remove this ordering dependency, yielding directed acyclic graphs. In both decision trees and the earliest influence diagrams, the expected payoff from a node is associated with the outgoing edge corresponding to the choice that the decision-maker or chance has made. Later formalisms define value or utility nodes to record these payoffs.

Figure 12 shows a fragment of a decision tree (left) and influence diagram (right) for Miss Muffet. The outcome of the chance node “Is spider at tuffet?” depends on whether Miss Muffet decides to visit the tuffet on her walk, and the presence or absence of the spider determines whether she remains at the tuffet to finish her snack, or leaves prematurely. Influence diagrams are a specialization of probabilistic GCMs, and their update mechanisms U are based in one way or another on the chain rule and Bayes rule, with a variety of algorithmic refinements, such as message passing [33] and variable elimination [36], to name only a few.

Fig. 12 Partial decision tree (left) and influence diagram (right) for Miss Muffet
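The following sketch evaluates this fragment by expected utility, the core of U in this family; the probability and payoffs are our illustrative assumptions, not values from the figure.

```python
# Decision node: Miss Muffet's choice to visit the tuffet.
# Chance node: the spider's presence. Numbers are assumed for illustration.

P_SPIDER = 0.3          # assumed P(spider at tuffet)
U_FINISH = 10           # she finishes her curds and whey in peace
U_FLEE = -5             # frightened away prematurely
U_STAY_HOME = 0         # she skips the tuffet entirely

def expected_utility(visit: bool) -> float:
    """Expected payoff of the decision, averaging over the chance node."""
    if not visit:
        return U_STAY_HOME
    return P_SPIDER * U_FLEE + (1 - P_SPIDER) * U_FINISH

best = max([True, False], key=expected_utility)
print(best, expected_utility(best))   # True 5.5 -> she should risk the visit
```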

Influence diagrams represent two advances over the formalisms considered so far.

  1. They formally introduce a primitive notion of agency. Decision nodes are under the control of one distinguished agent (the decision maker), while the chance node captures both aleatoric uncertainty and epistemic uncertainty (including the actions taken by all other agents). Thus the agent modeled by the diagram has a sense of self vs. other. However, there is no notion of group affiliation, and no individual state.

  2. Chance and decision nodes represent choices among different actions, while value or utility nodes represent variables. Intuitively, variables (the nouns and adjectives in a scenario) do not cause anything. Influence diagrams recognize that causality should be a relation among events, the verbs of the scenario.

Numerous refinements of influence diagrams (IDs) have been developed. For example:

  • partial IDs (PIDs) [38] relax the condition that the decision variables be ordered temporally;

  • limited memory IDs (LIMIDs) [33] relax the assumption that previous observations are remembered and considered in all future decisions;

  • unconstrained IDs (UIDs) [28] and sequential IDs (SIDs) [27] relax the order of observations;

  • dynamic IDs (DIDs) [20] allow chance nodes to change state, and incorporate cycles.

Of particular importance to the MAS community is the extension of influence diagrams to multi-agent scenarios by distinguishing multiple agents. For example:

  • multi-agent IDs (MAIDs) [31] incorporate separate decision and utility variables for each agent in a single influence diagram.

  • ID networks (IDNs) [14] and networks of IDs (NIDs) [15] construct a digraph of MAIDs, then solve them from the leaves to yield a single MAID incorporating relevant chance nodes that represent their solutions.

  • interactive dynamic IDs (I-DIDs) [56] nest DIDs, allowing agents to model one another.

One particularly important derivative of influence diagrams is the Partially Observable Markov Decision Process (POMDP) [29] (e.g., [58]), with three types of nodes. Like a Markov process, a POMDP defines a set S of states and transition probabilities among them. Like a Markov decision process (MDP), it augments the set of states with a set \({\mathbb{A}}\) of actions \(\alpha\) that some agent can take, based on the current state. The agent chooses among available actions to maximize its reward function \(R: S \times {\mathbb{A}} \rightarrow {\mathbb{R}}\), which typically computes the expected time-discounted future reward for each available action. A set of transition probabilities \(T(s'|s,\alpha )\) computes the system’s next state, conditioned on the present state and the action chosen, generalizing the transition probabilities in the simple Markov process. A POMDP further adds a set of observations \(\varOmega\) (corresponding to the chance nodes in a primitive influence diagram) derived from the world state through a set of observation probabilities \(O(\omega |s,\alpha )\) conditioned on the state being observed and the action that brought the agent into that state. Thus the next action in a POMDP is driven by the state of the world probabilistically, not deterministically.

Like the other formalisms in this paper, the POMDP lends itself to representation as a directed graph. The nodes are states, actions, and observations, and edges are conditional probabilities from T and O. Though rewards are strictly speaking a function over actions and states, the common use of influence diagrams as a representation for POMDPs [75] leads to the convention of representing them as additional nodes in the graph. Figure 13 shows a fragment of a POMDP for Miss Muffet in her decision to leave the Tuffet.

Fig. 13 POMDP for Miss Muffet
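The agent's belief over S is updated from \(T\) and \(O\) by the standard Bayes-filter rule \(b'(s') \propto O(\omega |s',\alpha ) \sum _s T(s'|s,\alpha ) b(s)\). The sketch below illustrates one such step on a two-state toy version of the scenario; the states, action, and probabilities are our assumptions, not the contents of Fig. 13.

```python
import numpy as np

states = ["muffet_on_tuffet", "muffet_gone"]

# T[s, s'] for the single action alpha = "spider approaches" (assumed values)
T = np.array([[0.2, 0.8],    # on the tuffet -> likely flees
              [0.0, 1.0]])   # once gone, she stays gone

# O[s', omega] for observations omega in {"seen", "not_seen"} (assumed values)
O = np.array([[0.9, 0.1],
              [0.1, 0.9]])

def belief_update(b, obs_index):
    """One Bayes-filter step: predict through T, weight by O, renormalize."""
    predicted = b @ T
    unnormalized = predicted * O[:, obs_index]
    return unnormalized / unnormalized.sum()

b0 = np.array([1.0, 0.0])               # we start sure she is on the tuffet
print(belief_update(b0, obs_index=1))   # observation "not_seen" -> [0.027 0.973]
```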

Like other Markov processes, POMDPs can revisit a state more than once, and so support cycles. They offer a partial solution to representing time and agency. For tractability, POMDPs in agent-based systems use a discrete time model with a constant period of time between successive actions. Thus the model captures time, but all actions have the same duration. POMDPs also incorporate agency, because an agent performs each action. Extensions [56] assign different agents to different actions. However, agency is at the level of individual agents, without intrinsic support for groups of agents, and state nodes capture (an agent’s view of) the state of the environment (including other agents), not the personal state of the agent. In addition, POMDPs require model builders to think in terms of transition probabilities, rather than psychological primitives from which probabilities are generated by the model.

1.5 Other analytic models

Fuzzy cognitive maps (FCMs) [32] are inspired by feed-forward neural networks. Nodes are concepts, and may include variables, events, and entity names. Values V on FCMs are activation levels derived from the intrinsic value of the category and scaled (depending on the domain) either to [0, 1] or [− 1, 1], and edges are weights in [− 1, 1]. U consists of multiplying the activations of a node’s causes by the edge weights and summing, usually through a thresholding function to maintain activation bounds. This process, unlike the probabilistic theory behind causal diagrams, permits causal cycles (as in a recurrent neural network). Thus U is not simply propagating causality at a single time throughout the graph, but generating the dynamics exhibited by node values over time, though FCMs have no quantitative representation of time.

Figure 14 is a toy FCM for Miss Muffet illustrating the informality of concepts. This imprecision extends to the meaning of activation. If concepts are viewed as events, then activation is reasonably understood as probability of occurrence. If they are statements, activation becomes level of belief. The informality of its semantics and the ease with which multiple models of the same domain can be combined make the method attractive for participatory modeling involving mathematically unsophisticated domain experts [17]. For such users, activation is a surrogate for node probability in indicating the prominence of a concept, but is not constrained to a formal probabilistic semantics.

Fig. 14 Fuzzy cognitive map for Miss Muffet
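A minimal sketch of the FCM update U follows; the concepts, weights, and choice of a sigmoid as the thresholding function are our illustrative assumptions rather than the contents of Fig. 14. Because cycles are permitted, repeated application of U yields a trajectory of activations rather than a single consistent labeling.

```python
import numpy as np

concepts = ["curds_and_whey", "muffet_at_tuffet", "spider_nearby"]

# W[i, j] = causal weight of concept i on concept j, in [-1, 1] (assumed)
W = np.array([
    [0.0,  0.8,  0.0],    # food attracts Miss Muffet
    [-0.4, 0.0,  0.6],    # she eats the food and attracts the spider
    [0.0, -0.9,  0.0],    # the spider drives her away
])

def step(a):
    """One FCM update: weighted sum of causes, squashed to keep bounds."""
    return 1.0 / (1.0 + np.exp(-(a @ W)))

a = np.array([1.0, 0.0, 0.0])   # initially, only the food concept is active
for t in range(5):
    a = step(a)
    print(t, np.round(a, 3))    # activations evolve over (unitless) time
```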

Though causal loop diagrams (“Non-computational” section in Appendix 1) do not directly support computation, they are often the first step to a System Dynamics model [12]. This formalism is inspired by physical theories and is based on ordinary differential equations (ODEs) rather than Bayesian probability. Nodes evaluate to variables that are transformed into one another by the edges, using a metaphor of fluid flow often described as “stocks and flows.” Stocks correspond to variables in an ODE, while flows correspond to first derivatives. Time is intrinsic to the behavior of a differential equation, so these models allow feedback loops and characterize the system’s behavior over time. U consists of integrating the equations through time. As a continuous formalism, ODEs deal more easily with real-valued quantities than Boolean or integer values, and we adjust our running example to accommodate the following variables:

  • c amount of curds and whey available

  • s spider population near Tuffet

  • m Miss Muffet population on Tuffet

Figure 15 shows a few ODEs and the corresponding directed graph for Miss Muffet, based on these variables. \(\mu , \kappa , \sigma , \nu\) are the transition rates for the ODEs.

Fig. 15 ODE model for Miss Muffet
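To show how U integrates such a model through time, here is a sketch using the rates \(\mu , \kappa , \sigma , \nu\); the flow forms below are our assumptions for illustration, not the equations of Fig. 15.

```python
import numpy as np
from scipy.integrate import odeint

# Assumed transition rates (illustrative values)
mu, kappa, sigma, nu = 0.5, 0.3, 0.2, 0.8

def flows(y, t):
    """Stock-and-flow derivatives for (c, s, m); forms are assumed."""
    c, s, m = y
    dc = -kappa * m           # curds and whey consumed while she is present
    ds = sigma * m            # her presence attracts spiders
    dm = mu * c - nu * s * m  # food draws her in; spiders drive her off
    return [dc, ds, dm]

t = np.linspace(0, 20, 201)
trajectory = odeint(flows, y0=[1.0, 0.0, 0.0], t=t)   # U: integrate over time
print(trajectory[-1])         # final values of (c, s, m)
```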

Like probabilistic models and unlike factor trees and fuzzy cognitive maps, the values associated with the nodes of System Dynamics models have a clearly defined mathematical meaning, but with very different semantics: ODEs support cycles and quantitative time, while probabilistic models do not. The difference lies in U: in an ODE, U evolves node values through time, while in a probabilistic model it simply propagates node values to achieve a consistent labeling at a single point in time.

A Stochastic Petri Net (SPN) [19] is a bipartite digraph whose nodes alternate between integer-valued places (thus, variables) and transitions that can have durations (thus modeling the passage of time) and activation probabilities (thus “stochastic”). U is algorithmic rather than analytic: a transition is eligible to fire when all of its input places are greater than 0, and when it fires, it decrements each input place by 1 and augments each output place by 1. If a transition does not have the same number of input and output places, the total value of places in the net is not conserved. A place’s value is sometimes called its marking and represented graphically by dots. Figure 16 is an SPN for a fragment of Miss Muffet. Circles represent places, while rectangles represent transitions.

Fig. 16 Stochastic Petri Net for Miss Muffet

The reader can verify from Fig. 16 that

  • Miss Muffet requires curds and whey to sit on the Tuffet.

  • When she sits down, the amount of curds and whey decreases.

  • The spider requires Miss Muffet to be on the Tuffet in order to come near the Tuffet.

  • Miss Muffet’s departure from the Tuffet requires both that she is on the Tuffet, and that the spider has approached.

  • Miss Muffet and the spider are conserved, while the curds and whey are not.
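A minimal sketch of the algorithmic update U described above follows, on an assumed fragment of the net (the place and transition names are ours, not necessarily those of Fig. 16; transition durations and firing probabilities of a full SPN are omitted). Note that the Muffet and spider tokens are conserved, while the curds and whey are consumed.

```python
import random

marking = {"muffet_off": 1, "muffet_on": 0, "curds": 2,
           "spider_away": 1, "spider_near": 0}

# transition -> (input places, output places); assumed fragment of the net
transitions = {
    "sit_down": (["muffet_off", "curds"], ["muffet_on"]),
    "approach": (["spider_away", "muffet_on"], ["spider_near", "muffet_on"]),
    "flee":     (["muffet_on", "spider_near"], ["muffet_off", "spider_away"]),
}

def enabled(name):
    """A transition is enabled when every input place holds a token."""
    inputs, _ = transitions[name]
    return all(marking[p] > 0 for p in inputs)

def fire(name):
    """Firing decrements each input place and increments each output place."""
    inputs, outputs = transitions[name]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

for _ in range(12):                     # bounded run of the net
    choices = [t for t in transitions if enabled(t)]
    if not choices:                     # deadlock: curds and whey exhausted
        break
    fire_next = random.choice(choices)
    fire(fire_next)
    print(fire_next, marking)
```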

SPNs can be represented (within continuity constraints) as sets of ODEs [4] and thus evaluated by integration. Unlike system dynamics diagrams but like influence diagrams, they distinguish between variables and the events that change them. Unlike influence diagrams, they do not capture agency. In Fig. 16, the markings for Miss Muffet and the spider are not agents that participate in events, but semaphores that enable the events.

1.6 Discussion of conventional methods

None of the four desirable features we summarized at the beginning of Sect. 3 is uniformly supported by all methods. The most common is column 6 in Table 1, probabilistic estimate of effects resulting from causes, formally supported in seven of the 11 formalisms. Column 7 (cycles and feedback) is supported in only six of the formalisms, and a quantitative estimate of time (column 8) in only four. Only influence diagrams (including POMDPs) support agency. One can always encode a particular agent as a causal node, but without a sense of an event, it is difficult to formalize the agent responsible for the event. Even the Influence Net, whose nodes are events, simply estimates the probability of their occurrence rather than representing their action. Even influence diagrams do not capture the effect of an agent’s history on its decisions.

No formalism considered so far has a clear semantics of group agency. Psychological and social features are naturally associated with the different groups of agents involved in a scenario. Again, one can always instantiate a social or psychological feature as a causal node, but the treatment is ad-hoc and not integral to the formalism. This observation suggests that agent-based models, with their explicit semantics of agency and action, can fill an important gap in modeling causality. Influence diagrams do have a natural notion of agency for individual agents, but the representation is restricted to probabilities. SCAMP provides a much richer set of modeling artifacts to capture important psychological and sociological features.

Appendix 2: Symbols

Table 5 summarizes the symbols used in this paper.

Table 5 Symbols used


Cite this article

Van Dyke Parunak, H. How to turn an MAS into a graphical causal model. Auton Agent Multi-Agent Syst 36, 31 (2022). https://doi.org/10.1007/s10458-022-09560-y
