
Approximating behavioral equivalence for scaling solutions of I-DIDs

Regular Paper, published in Knowledge and Information Systems

Abstract

The interactive dynamic influence diagram (I-DID) is a recognized graphical framework for sequential multiagent decision making under uncertainty. I-DIDs concisely represent the problem of how an individual agent should act in an uncertain environment shared with others of unknown types. I-DIDs face the challenge of solving a large number of models that are ascribed to other agents. A known method for solving I-DIDs is to group together models of other agents that are behaviorally equivalent. Identifying model equivalence requires solving the models and comparing their solutions, which are generally represented as policy trees. Because the trees grow exponentially with the number of decision time steps, comparing entire policy trees becomes intractable, thereby limiting the scalability of previous I-DID techniques. In this article, we focus on utilizing partial policy trees for comparison and on measuring the distance between the updated beliefs at the leaves of the trees. We propose a principled way to determine how much of the policy trees to consider, which trades off solution quality for efficiency. We further improve on this technique by allowing the partial policy trees to have paths of differing lengths. We evaluate these approaches in multiple problem domains and demonstrate significantly improved scalability over previous approaches.


Notes

  1. We adapt the single-agent concert problem from the POMDP repository at http://www.cs.brown.edu/research/ai/pomdp/.

  2. The starting locations of the UAVs may differ from those shown in Fig. 21a.

  3. http://www.pomdp.org/pomdp/examples/index.shtml.

References

  1. Brandenburger A, Dekel E (1993) Hierarchies of beliefs and common knowledge. J Econ Theory 59(1):189–198

  2. Andersen S, Jensen F (1989) Hugin: a shell for building belief universes for expert systems. In: International joint conference on artificial intelligence (IJCAI), pp 332–337

  3. Aumann RJ (1999) Interactive epistemology i: Knowledge. Int J Game Theory 28(3):263–300

  4. Bernstein DS, Givan R, Immerman N, Zilberstein S (2002) The complexity of decentralized control of Markov decision processes. Math. Oper. Res. 27(4):819–840

  5. Boyen X, Koller D (1998) Tractable inference for complex stochastic processes. In: The 14th conference on uncertainty in artificial intelligence (UAI), pp 33–42

  6. Chandrasekaran M, Doshi P, Zeng Y (2010) Approximate solutions of interactive dynamic influence diagrams using \(\epsilon \)-behavioral equivalence. In: International symposium on artificial intelligence and mathematics (ISAIM)

  7. Chandrasekaran M, Doshi P, Zeng Y, Chen Y (2014) Team behavior in interactive dynamic influence diagrams with applications to ad hoc teams. In: International conference on autonomous agents and multiagent systems (AAMAS), pp 1559–1560

  8. Chen Y, Doshi P, Zeng Y (2015) Iterative online planning in multiagent settings with limited model spaces and PAC guarantees. In: International conference on autonomous agents and multiagent systems (AAMAS), pp 1161–1169

  9. Chen Y, Hong J, Liu W, Godo L, Sierra C, Loughlin M (2013) Incorporating PGMs into a BDI architecture. In: 16th international conference on principles and practice of multi-agent systems (PRIMA), pp 54–69

  10. Conroy R, Zeng Y, Cavazza M, Chen Y (2015) Learning behaviors in agents systems with interactive dynamic influence diagrams. In: Proceedings of international joint conference on artificial intelligence (IJCAI), pp 39–45

  11. Cover T, Thomas J (1991) Elements of information theory. Wiley, New York

  12. Daskalakis C, Papadimitriou C (2007) Computing equilibria in anonymous games. In: 48th annual ieee symposium on foundations of computer science (FOCS), pp 83–93

  13. Dekel E, Fudenberg D, Morris S (2006) Topologies on types. Theor Econ 1:275–309

  14. Doshi P, Chandrasekaran M, Zeng Y (2010) Epsilon-subjective equivalence of models for interactive dynamic influence diagrams. In: WIC/ACM/IEEE conference on web intelligence and intelligent agent technology (WI-IAT), pp 165–172

  15. Doshi P, Sonu E (2010) GaTAC: a scalable and realistic testbed for multiagent decision making. In: Fifth workshop on multiagent sequential decision making in uncertain domains (MSDM). AAMAS, pp 62–66

  16. Doshi P, Zeng Y (2009) Improved approximation of interactive dynamic influence diagrams using discriminative model updates. In: International conference on autonomous agents and multi-agent systems (AAMAS), pp 907–914

  17. Doshi P, Zeng Y, Chen Q (2009) Graphical models for interactive POMDPs: representations and solutions. J Auton Agents Multi-Agent Syst JAAMAS 18(3):376–416

  18. Gal K, Pfeffer A (2008) Networks of influence diagrams: a formalism for representing agents’ beliefs and decision-making processes. J Artif Intell Res 33:109–147

  19. Gal Y, Pfeffer A (2003) A language for modeling agent’s decision-making processes in games. In: Autonomous agents and multi-agents systems conference (AAMAS), pp 265–272

  20. Gmytrasiewicz P, Doshi P (2005) A framework for sequential planning in multiagent settings. J Artif Intell Res JAIR 24:49–79

  21. Howard RA, Matheson JE (1984) Influence diagrams. In: Howard RA, Matheson JE (eds) Readings on the principles and applications of decision analysis, vol 2. Strategic Decisions Group, Menlo Park, pp 719–762

  22. Kaelbling L, Littman M, Cassandra A (1998) Planning and acting in partially observable stochastic domains. Artif Intell J 101:99–134

  23. Koller D, Milch B (2001) Multi-agent influence diagrams for representing and solving games. In: International joint conference on artificial intelligence (IJCAI), pp 1027–1034

  24. Koller D, Milch B (2011) Multi-agent influence diagrams for representing and solving games. Games Econ Behav 45(1):181–221

  25. Lauritzen SL, Nilsson D (2001) Representing and solving decision problems with limited information. Manag Sci 47:1235–1251

  26. Lipstein B (1965) A mathematical model of consumer behavior. J Mark 2:259–265

  27. Luo J, Yin H, Li B, Wu C (2011) Path planning for automated guided vehicles system via interactive dynamic influence diagrams with communication. In: 9th IEEE international conference on control and automation (ICCA), pp 755–759

  28. Nair R, Tambe M, Yokoo M, Pynadath D, Marsella S (2003) Taming decentralized POMDPs: towards efficient policy computation for multiagent settings. In: International joint conference on artificial intelligence (IJCAI), pp 705–711

  29. Ng B, Meyers C, Boakye K, Nitao J (2010) Towards applying interactive POMDPs to real-world adversary modeling. In: Innovative applications in artificial intelligence (IAAI), pp 1814–1820

  30. Oliehoek FA, Whiteson S, Spaan MT (2013) Approximate solutions for factored Dec-POMDPs with many agents. In: Proceedings of the 2013 international conference on autonomous agents and multi-agent systems (AAMAS), pp 563–570

  31. Oliehoek FA, Witwicki SJ, Kaelbling LP (2012) Influence-based abstraction for multiagent systems. In: Twenty-sixth AAAI conference on artificial intelligence (AAAI), pp 1422–1428

  32. Oliehoek F, Spaan M, Whiteson S, Vlassis N (2008) Exploiting locality of interaction in factored Dec-POMDPs. In: Seventh international conference on autonomous agents and multiagent systems (AAMAS), pp 517–524

  33. Pajarinen J, Peltonen J (2011) Efficient planning for factored infinite-horizon DEC-POMDPs. In: International joint conference on artificial intelligence (IJCAI), pp 325–331

  34. Perry AR (2004) The FlightGear flight simulator. In: UseLinux. http://www.flightgear.org

  35. Pineau J, Gordon G, Thrun S (2006) Anytime point-based value iteration for large POMDPs. J Artif Intell Res 27:335–380

  36. Pynadath D, Marsella S (2007) Minimal mental models. In: Twenty-second conference on artificial intelligence (AAAI). Vancouver, Canada, pp 1038–1044

  37. Rathnasabapathy B, Doshi P, Gmytrasiewicz PJ (2006) Exact solutions to interactive POMDPs using behavioral equivalence. In: Autonomous agents and multi-agents systems conference (AAMAS), pp 1025–1032

  38. Russell S, Norvig P (2010) Artificial intelligence: a modern approach, 3rd edn. Prentice Hall, Englewood Cliffs

  39. Seuken S, Zilberstein S (2008) Formal models and algorithms for decentralized decision making under uncertainty. Auton Agents Multi-Agent Syst 17(2):190–250

  40. Seuken S, Zilberstein S (2008) Formal models and algorithms for decentralized decision making under uncertainty. J Auton Agents Multi-agent Syst

  41. Shachter RD (1986) Evaluating influence diagrams. Oper Res 34(6):871–882

  42. Smallwood R, Sondik E (1973) The optimal control of partially observable Markov decision processes over a finite horizon. Oper Res OR 21:1071–1088

  43. Sonu E, Doshi P (2012) GaTAC: A scalable and realistic testbed for multiagent decision making (demonstration). In: Eleventh international conference on autonomous agents and multiagent systems (AAMAS), DEMO track, pp 1507–1508

  44. Tatman JA, Shachter RD (1990) Dynamic programming and influence diagrams. IEEE Trans Syst Man Cybern 20(2):365–379

  45. Witwicki SJ, Durfee EH (2010) Influence-based policy abstraction for weakly-coupled Dec-POMDPs. In: International conference on automated planning and scheduling (ICAPS), pp 185–192

  46. Woodberry O, Mascaro S (2012) Programming Bayesian network solutions with Netica. Bayesian Intelligence, Brookvale

  47. Zeng Y, Chen Y, Doshi P (2011) Approximating behavioral equivalence of models using top-k policy paths (extended abstract). In: International conference on autonomous agents and multi-agent systems (AAMAS), pp 1229–1230

  48. Zeng Y, Doshi P (2009) Speeding up exact solutions of interactive influence diagrams using action equivalence. In: International joint conference on artificial intelligence (IJCAI)

  49. Zeng Y, Doshi P (2012) Exploiting model equivalences for solving interactive dynamic influence diagrams. J Artif Intell Res JAIR 43:211–255

  50. Zeng Y, Doshi P, Chen Q (2007) Approximate solutions of interactive dynamic influence diagrams using model clustering. In: Twenty second conference on artificial intelligence (AAAI). Vancouver, Canada, pp 782–787

  51. Zeng Y, Doshi P, Pan Y, Mao H, Chandrasekaran M, Luo J (2011) Utilizing partial policies for identifying equivalence of behavioral models. In: Twenty-fifth AAAI conference on artificial intelligence, pp 1083–1088

  52. Zeng Y, Pan Y, Mao H, Luo J (2012) Improved use of partial policies for identifying behavioral equivalences. In: Eleventh international conference on autonomous agents and multiagent systems (AAMAS), pp 1015–1022

Download references

Acknowledgments

This research is supported in part by NSFC Grants 61375070, 61562033, and 61502322. Yinghui would like to acknowledge Grants 20151BAB207021 and 20151BDH80014 from Jiangxi Province, China. Prashant would like to acknowledge support from NSF CAREER Grant IIS-0845036 and ONR Grant N000141310870.

Author information

Correspondence to Yifeng Zeng.

Appendices

Appendix 1: Proofs of propositions

We begin by proving the bound in Proposition 6. We may evaluate the error, \(\rho \), as:

$$\begin{aligned}
\rho &= \left|\alpha^{T-d} \cdot b^{d,k}_{m_{j,l-1}} - \alpha^{T-d} \cdot b^{d,k}_{\hat{m}_{j,l-1}}\right|\\
&= \left|\alpha^{T-d} \cdot b^{d,k}_{m_{j,l-1}} + \hat{\alpha}^{T-d} \cdot b^{d,k}_{m_{j,l-1}} - \hat{\alpha}^{T-d} \cdot b^{d,k}_{m_{j,l-1}} - \alpha^{T-d} \cdot b^{d,k}_{\hat{m}_{j,l-1}}\right| \quad \text{(add zero)}\\
&\le \left|\alpha^{T-d} \cdot b^{d,k}_{m_{j,l-1}} + \hat{\alpha}^{T-d} \cdot b^{d,k}_{\hat{m}_{j,l-1}} - \hat{\alpha}^{T-d} \cdot b^{d,k}_{m_{j,l-1}} - \alpha^{T-d} \cdot b^{d,k}_{\hat{m}_{j,l-1}}\right| \quad (\hat{\alpha}^{T-d} \cdot b^{d,k}_{\hat{m}_{j,l-1}} \ge \hat{\alpha}^{T-d} \cdot b^{d,k}_{m_{j,l-1}})\\
&= \left|b^{d,k}_{m_{j,l-1}} \cdot (\alpha^{T-d} - \hat{\alpha}^{T-d}) - b^{d,k}_{\hat{m}_{j,l-1}} \cdot (\alpha^{T-d} - \hat{\alpha}^{T-d})\right|\\
&= \left|(\alpha^{T-d} - \hat{\alpha}^{T-d}) \cdot (b^{d,k}_{m_{j,l-1}} - b^{d,k}_{\hat{m}_{j,l-1}})\right|\\
&\le \left|\alpha^{T-d} - \hat{\alpha}^{T-d}\right|_\infty \cdot \left|b^{d,k}_{m_{j,l-1}} - b^{d,k}_{\hat{m}_{j,l-1}}\right|_1 \quad \text{(Hölder's inequality)}\\
&\le \left|\alpha^{T-d} - \hat{\alpha}^{T-d}\right|_\infty \cdot 2 D_{KL}\left(b^{d,k}_{m_{j,l-1}} \,\middle\|\, b^{d,k}_{\hat{m}_{j,l-1}}\right) \quad \text{(Pinsker's inequality)}\\
&\le (R_j^{max} - R_j^{min})(T-d) \cdot 2 D_{KL}\left(b^{d,k}_{m_{j,l-1}} \,\middle\|\, b^{d,k}_{\hat{m}_{j,l-1}}\right)\\
&\le (R_j^{max} - R_j^{min})(T-d) \cdot 2\epsilon \quad \text{(by definition of } \epsilon)
\end{aligned}$$

Here, \(R_j^{max}\) and \(R_j^{min}\) are the maximum and minimum rewards of j, respectively. The penultimate inequality holds because every component of a \((T-d)\)-horizon value vector lies between \((T-d)R_j^{min}\) and \((T-d)R_j^{max}\), so \(|\alpha^{T-d} - \hat{\alpha}^{T-d}|_\infty \le (R_j^{max} - R_j^{min})(T-d)\).
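To gauge the magnitude of this bound numerically, the following is a minimal sketch (not part of the paper's implementation) that evaluates the final expression; the reward range, horizon, depth, and \(\epsilon\) in the example are illustrative values only.

```python
def rho_bound(r_max: float, r_min: float, horizon: int, depth: int,
              epsilon: float) -> float:
    """Evaluate rho <= (R_max - R_min) * (T - d) * 2 * epsilon, where epsilon
    bounds the KL divergence between the updated beliefs of the exact model
    and the approximating model at depth d of the partial policy tree."""
    assert 0 <= depth <= horizon and epsilon >= 0.0
    return (r_max - r_min) * (horizon - depth) * 2.0 * epsilon

# Illustrative numbers only: tiger-like rewards in [-100, 10], T = 6, d = 3.
print(rho_bound(10.0, -100.0, horizon=6, depth=3, epsilon=0.01))  # -> 6.6
```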

Appendix 2: Problem domains

Detailed descriptions of all the problem domains utilized in our evaluations, including their I-DID models, are given in the "Multiagent tiger problem" through "UAV reconnaissance and interception problem" sections of this appendix.

1.1 Multiagent tiger problem

As mentioned previously, our multiagent tiger problem is a noncooperative generalization of the well-known single-agent tiger problem [22] to the multiagent setting. It differs from other multiagent versions of the problem [28] in that the agents hear creaks in addition to growls and the reward function does not promote cooperation. Creaks indicate which door was opened by the other agent(s). While we described the problem in Sect. 2, we quantify the different uncertainties here. We assume that creaks are 90 % accurate, while growls are 85 % accurate, as in the single-agent problem. The tiger's location is chosen randomly in the next time step if any agent opened a door in the current step. Figure 7 shows an I-DID unrolled over two time slices for the multiagent tiger problem. We give the CPTs for the different nodes below:

Table 3 CPT of the chance node \(TigerLocation^{t+1}\) in the I-DID of Fig. 7

We assign the marginal distribution over the tiger’s location from agent i’s initial belief to the chance node, \(TigerLocation^t\). The CPT of \(TigerLocation^{t+1}\) in the next time step conditioned on \(TigerLocation^t\), \(A_i^t\), and \(A_j^t\) is the transition function, shown in Table 3. The CPT of the observation node, \( Growl \& Creak^{t+1}\), is shown in Table 4. CPTs of the observation nodes in level 0 DIDs are identical to the observation function in the single-agent tiger problem.

Table 4 CPT of the chance node, \( Growl \& Creak^{t+1}\), in agent i’s I-DID
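To make the preceding CPT descriptions concrete, here is a small, hedged sketch of the transition and observation structure stated in the text: the tiger's location is re-drawn uniformly whenever a door is opened and persists under joint listening, growls are 85 % accurate, and creaks are 90 % accurate. The exact entries of Tables 3 and 4 are not reproduced; the split of the residual creak mass, the uninformative growl under door-opening, and the independence of the two observation factors are our assumptions for illustration.

```python
TIGER_STATES = ("tiger-left", "tiger-right")
OPEN_ACTIONS = ("OL", "OR")  # open-left, open-right; "L" denotes listen

def tiger_transition(state, a_i, a_j):
    """P(TigerLocation^{t+1} | TigerLocation^t, A_i^t, A_j^t): re-drawn
    uniformly if either agent opens a door, unchanged under joint listening."""
    if a_i in OPEN_ACTIONS or a_j in OPEN_ACTIONS:
        return {s: 0.5 for s in TIGER_STATES}
    return {s: (1.0 if s == state else 0.0) for s in TIGER_STATES}

def growl_likelihood(state, a_i):
    """Growls are 85 % accurate when i listens; assumed uninformative otherwise."""
    if a_i != "L":
        return {"GL": 0.5, "GR": 0.5}
    correct = "GL" if state == "tiger-left" else "GR"
    return {g: (0.85 if g == correct else 0.15) for g in ("GL", "GR")}

def creak_likelihood(a_j):
    """Creaks are 90 % accurate about j's door; the 0.05/0.05 split of the
    remaining mass is an illustrative assumption, not a value from Table 4."""
    correct = {"OL": "CL", "OR": "CR", "L": "S"}[a_j]
    return {c: (0.9 if c == correct else 0.05) for c in ("CL", "CR", "S")}

# Under an (assumed) independence of growl and creak, the joint observation
# CPT of Growl&Creak is the product of the two factors above.
```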

Decision nodes, \(A_i^t\) and \(A_i^{t+1}\), contain the possible actions of agent i, namely L, OL, and OR. The model node, \(M_{j,l-1}^t\), contains the different models of agent j, which are DIDs if they are at level 0 and I-DIDs themselves otherwise. The distribution over the associated \(Mod[M_j^t]\) node (see Fig. 8) is the conditional distribution over j's models given the physical state, taken from agent i's initial belief. The CPT of the chance node, \(Mod[M_j^{t+1}]\), in the model node, \(M_{j,l-1}^{t+1}\), reflects which prior model, action, and observation of j result in each model contained in the model node.
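For intentional models, the mapping this CPT encodes is a Bayesian belief update of j's model for each action and observation pair. A hedged sketch of that generic update (not code from the paper; the model's transition and observation CPTs are passed in as nested dictionaries with illustrative names) is:

```python
def updated_model_belief(belief, a_j, o_j, trans, obs):
    """b'(s') is proportional to O(s', a_j, o_j) * sum_s T(s, a_j, s') * b(s).
    `belief` maps states to probabilities; `trans[s][a][s_next]` and
    `obs[s_next][a][o]` are the lower-level model's CPT entries."""
    states = list(belief)
    unnormalized = {
        s_next: obs[s_next][a_j][o_j]
                * sum(trans[s][a_j][s_next] * belief[s] for s in states)
        for s_next in states
    }
    z = sum(unnormalized.values())
    if z == 0.0:
        raise ValueError("observation has zero probability under this model")
    return {s: p / z for s, p in unnormalized.items()}
```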

Table 5 Utility table for node, \(R_i\), in the I-DID

Finally, the utility node, \(R_i\), in the I-DID depends on both agents' actions, \(A_i^t\) and \(A_j^t\), and on the physical state, \(TigerLocation^t\). The utility table is shown in Table 5. These payoffs are analogous to the single-agent version, which assigns a reward of 10 if the correct door is opened, a penalty of 100 if the door hiding the tiger is opened, and a penalty of 1 for listening. A consequence of this assumption is that the other agent's actions do not impact the original agent's payoffs directly, but only indirectly by leading to states that matter to the original agent. The utility tables for the level 0 models are identical to the reward function in the single-agent tiger problem.
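As a compact summary of the payoff structure just described (this is a sketch following the text, not the actual Table 5):

```python
def tiger_reward_i(tiger_location, a_i):
    """Agent i's payoff: -1 for listening, +10 for opening the correct door,
    -100 for opening the door hiding the tiger. j's action does not enter
    directly, matching the noncooperative reward described above."""
    if a_i == "L":
        return -1
    tiger_door = "OL" if tiger_location == "tiger-left" else "OR"
    return -100 if a_i == tiger_door else 10
```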

1.2 Multiagent concert problem

We extend the single-agent concert problem available in the POMDP repository (see note 3) to a two-agent setting. The problem involves a concert organizer who must decide whether to advertise the concert on TV, over the radio, or do nothing. The problem is inspired by real-world marketing problems involving multiple brands, changing attitudes about brands, and the effect of advertising [26].

In the multiagent concert problem, two separate concerts are involved, each of which may be advertised on TV (we denote this action as TV), over the radio (denoted as Radio), or not at all (denoted as Nothing). The state of this problem comprises two attitudes or predispositions that the target audience may hold toward the concerts in general: they may be interested in them (denoted as I) or bored with them (denoted as B). The outcome of the actions is that the target audience definitely wants to attend a particular concert (we denote this observation as Go), may attend it (MayGo), may not attend it (MayNoGo), or definitely does not want to attend it (NoGo).
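For reference, the state, action, and observation spaces just described can be written down directly; the CPT entries themselves (Tables 6, 7, and 8) are not reproduced in this sketch, and the indexing of the transition by both agents' actions follows the I-DID convention and the "joint actions" wording of Table 7.

```python
from itertools import product

ATTITUDES = ("I", "B")                             # interested, bored
MARKETING_ACTIONS = ("TV", "Radio", "Nothing")     # actions per concert
OUT_OBSERVATIONS = ("Go", "MayGo", "MayNoGo", "NoGo")

# Shape of the transition CPT P(Like^{t+1} | Like^t, A_i^t, A_j^t): one
# distribution over ATTITUDES per (attitude, own action, other action) triple.
transition_index = list(product(ATTITUDES, MARKETING_ACTIONS, MARKETING_ACTIONS))
assert len(transition_index) == 18   # 2 attitudes x 3 actions x 3 actions
```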

Figure 25 shows a level l I-DID unrolled over two time-slices for the multiagent concert domain.

Fig. 25 Level l I-DID for concert i in the multiagent concert problem

The decision node, \(A_i\), contains the possible marketing actions for concert i: TV, Radio, or Nothing. The chance node, Like, represents the audience's attitude toward the concerts. Because attitudes vary in general, we begin with a uniform distribution over them as the initial belief. We show the CPT of \(Like^{t+1}\), which is the transition function, in Table 6. It models the fact that a TV campaign changes attitudes, or maintains interest, with higher probability than a radio campaign, while doing nothing may have an adverse impact.

Table 6 CPT of the chance node, \(Like^{t+1}\), in agent i’s I-DID of Fig. 25

The observation node, \(Out_i\), models the observed inclination of the target audience toward attending concert i, through the values Go, MayGo, MayNoGo, and NoGo. The CPT of the node, \(Out_i^{t+1}\), shown in Table 7, models the notion that TV advertisements are more effective than radio advertisements at translating predispositions into a desire to attend the concert, even if the audience is bored. Doing nothing, on the other hand, has little effect and results in a direct translation of predispositions into wanting to attend the concert or not.

Fig. 26 Level l I-DID of agent i for the money laundering problem

Finally, Table 8 shows the reward function of the utility node, \(R_i\), in the I-DID. The rewards combine the cost of the different marketing campaigns, with TV being the most expensive, and a quantified efficacy of the campaigns, with TV being the most effective.

1.3 Money laundering problem

As [29] mention, money laundering is the process of turning "dirty" money into "clean" money through a series of criminal transactions. It normally comprises three steps: placement, layering, and integration. In the placement phase, money launderers introduce the dirty money into common targets of financial systems such as bank accounts, insurance, and securities. In the layering phase, they transfer the money into businesses such as trusts, offshore accounts, and shell companies; these transactions may obscure the money's source. Finally, in the integration phase, the launderers channel the laundered money into more legitimate businesses, including real estate, loans, and casinos. On the other side, law enforcement, acting as an anti-money laundering body, monitors the money laundering flow by placing physical sensors at each possible location of the dirty money. It analyzes the received information and confiscates the dirty money once it correctly identifies the money's location.

Table 7 CPT of the chance node, \(Out^{t+1}\), in concert i’s I-DID. The CPT of the corresponding node in concert j’s I-DID is similar with the joint actions reversed
Table 8 Utility table for node, \(R_i\), in the I-DID

The law enforcement agency and the money launderers are denoted as the blue team (agent i) and the red team (agent j), respectively, in this problem domain. The blue team is represented by the level l I-DID shown in Fig. 26, while the red team is modeled at level \(l-1\). The joint state of the level l I-DID comprises the money location controlled by the red team (11 possible values), \(ML^{t}\), and the sensor location installed by the blue team (9 possible values), \(SL^{t}\). The blue team has 9 possible actions in the decision node, \(A_i^t\), covering the placement of sensors and the confiscation of the dirty money. The CPTs of the chance nodes, \(MoneyLoc^{t+1}\) and \(SensorLoc^{t+1}\), encode the probabilities of sensor installation at the possible money locations. In particular, only when the blue team has placed a sensor at the location to which the dirty money is transferred does it confiscate the money, after which the process restarts.
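A minimal sketch of the joint state space and the confiscation condition just described follows; the location identifiers are placeholders, and the actual CPTs are not reproduced.

```python
from itertools import product

MONEY_LOCATIONS = tuple(f"money_loc_{k}" for k in range(11))   # 11 values of ML^t
SENSOR_LOCATIONS = tuple(f"sensor_loc_{k}" for k in range(9))  # 9 values of SL^t

# Joint state of the level-l I-DID: (money location, sensor location).
JOINT_STATES = tuple(product(MONEY_LOCATIONS, SENSOR_LOCATIONS))
assert len(JOINT_STATES) == 99

def confiscation_occurs(sensor_location, money_location):
    """Per the text, the blue team confiscates the dirty money only when it
    has a sensor at the location the money was transferred to (shared
    location identifiers are assumed here purely for illustration)."""
    return sensor_location == money_location
```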

The blue team receives observations in the form of reports generated by most of the installed sensors. The chance node, \(Report_i^{t+1}\), has 9 values, and its CPT encodes the sensing capability of the blue team. On average, the blue team correctly detects the real location of the dirty money 80 % of the time given a positive report on the location.

The utility node, \(R_i\), gives the reward the blue team receives for acting in the joint state. The blue team gets 100 if it confiscates the dirty money, while it incurs a cost of 10 for placing a sensor at a targeted location. The actual CPT tables are large, and we do not show them here.
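The stated sensing accuracy and payoffs can be summarized in a short sketch; treating the 80 % figure as the likelihood of a correct report and splitting the residual mass uniformly are our assumptions, not values from the model's CPTs.

```python
def report_probability(reported_loc, true_money_loc, num_locations=9):
    """P(Report_i^{t+1} = reported | money location): 0.8 on the true location
    (the stated average accuracy), remaining mass split uniformly over the
    other possible reports (assumed split)."""
    if reported_loc == true_money_loc:
        return 0.8
    return 0.2 / (num_locations - 1)

def blue_team_reward(confiscated, placed_sensor):
    """+100 for confiscating the dirty money, a cost of 10 for placing a sensor."""
    return (100 if confiscated else 0) - (10 if placed_sensor else 0)
```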

1.4 UAV reconnaissance and interception problem

We show a level l I-DID for the multiagent UAV problem in Fig. 27. Models of agent j, which may play the role of a fugitive or a hostile UAV (J) at the lower level, differ in the probability that the fugitive assigns to its position in the grid. The UAV's (agent i's) initial belief is a probability distribution over the relative position of the fugitive, decomposed into the chance nodes, \(FugRelPosX^t\) and \(FugRelPosY^t\), which represent the relative location of the fugitive along the row and column, respectively. Their CPTs assume that each action (except listen) moves the UAV in the intended direction with probability 0.67, while the remaining probability is equally divided among the other neighboring positions. The action listen keeps the UAV in the same position.
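A hedged sketch of this movement model follows: 0.67 toward the intended cell, the remainder split equally among the other in-grid neighbors, and listen keeping the position unchanged. The grid-membership predicate is passed in because the grid dimensions are not reproduced here, and the intended cell is assumed to lie inside the grid.

```python
MOVES = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def uav_move_distribution(position, action, in_grid):
    """P(next position | position, action) for the UAV; `in_grid` is a
    caller-supplied predicate over grid cells."""
    if action == "listen":
        return {position: 1.0}
    x, y = position
    neighbours = [(x + dx, y + dy) for dx, dy in MOVES.values()
                  if in_grid((x + dx, y + dy))]
    dx, dy = MOVES[action]
    intended = (x + dx, y + dy)
    others = [cell for cell in neighbours if cell != intended]
    if not others:                       # degenerate grid: no other neighbours
        return {intended: 1.0}
    distribution = {cell: (1.0 - 0.67) / len(others) for cell in others}
    distribution[intended] = 0.67
    return distribution
```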

Fig. 27 Level l I-DID of agent i for our UAV reconnaissance problem

The observation node, SenFug, represents the UAV's sensing of the relative position of the fugitive in the grid. Its CPT assumes that the UAV senses well (a likelihood of 0.8 for the correct relative location of the fugitive) when the action is listen; otherwise, the UAV receives random observations.
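A sketch of this sensing model follows; the uniform spread of the residual mass under listen, and reading "random observations" as a uniform distribution for other actions, are our assumptions.

```python
def senfug_probability(observed_rel_pos, true_rel_pos, action, all_rel_positions):
    """P(SenFug = observed | true relative position, action): 0.8 on the correct
    relative location when the UAV listens, uniform otherwise."""
    n = len(all_rel_positions)
    if action != "listen":
        return 1.0 / n                   # random observation for non-listen actions
    if observed_rel_pos == true_rel_pos:
        return 0.8
    return 0.2 / (n - 1)                 # assumed uniform split of remaining mass
```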

The decision node, \(A_i\), contains the five actions of the UAV, which include moving in the four cardinal directions and listening. The edge incident on the decision node indicates that the UAV observes the relative position of the fugitive before it acts.

The utility node, \(R_i\), gives the reward assigned to the UAV for its actions given the fugitive's relative position. The UAV receives a reward of 50 if it captures the fugitive and incurs a cost of 5 for performing any other action.

Because the actual CPT tables are very large, we do not show them here. All problem domain files are available upon request.


About this article

Cite this article

Zeng, Y., Doshi, P., Chen, Y. et al. Approximating behavioral equivalence for scaling solutions of I-DIDs. Knowl Inf Syst 49, 511–552 (2016). https://doi.org/10.1007/s10115-015-0912-x
