Introduction

Critical infrastructure networks play a pivotal role in contemporary society; they include water supply networks, communication networks, transportation systems, and power grids. The destruction or degradation of such networks can have severe negative consequences. Because complex network theory provides a holistic understanding of the interconnections among critical infrastructure components [6, 11, 40, 51], numerous researchers and organizations have analysed the protection of critical infrastructure from a network perspective [17, 18, 31, 37]. In practical scenarios involving deliberate opponents, there is increasing research interest in employing game theory to analyse attack and defense strategies in infrastructure networks, as it provides an effective framework for studying strategic interactions [41, 42]. Li et al. [33,34,35,36] applied attack and defense game models to complex networks and evaluated how different factors influence the equilibrium results. Fu et al. [13] developed static and dynamic game models to examine the consequences of cascading failures while incorporating camouflage strategies; they also proposed an evolutionary rule to achieve optimal resource allocation [12]. Zeng et al. [54, 55] applied a Bayesian Stackelberg game model to propose a false-network construction method, focusing on the allocation of resources for defending critical infrastructure networks under asymmetric information. Thompson et al. [48, 49] analysed the influence of intelligent attacks and worst-case interruptions on the US aviation network and established and solved a three-level defender-attacker-defender optimization model. Dui et al. [16] investigated cascading failures in a scale-free network through a multistrategy evolutionary game. Qi et al. [44, 45] studied Stackelberg games in complex networks by investigating the effects of hiding network edges and analysed the implications of this case. Huang et al. [19] used sequential game theory to model attack and defense games in complex networks and proposed a strategy optimization method. Wang et al. [50] integrated the Cournot model with the attack and defense game and found that, under resource constraints, certain specific nodes and network topologies are the primary factors influencing payoffs.

Research gaps and motivations

Although the existing literature has significantly contributed to the study of attack and defense games in infrastructure networks, there are still several limitations that need to be addressed. We summarize these limitations as follows:

  1. Real-world infrastructure networks have many unique topological features, which implies that the consequences of node failure following an attack can differ significantly depending on the choice of measurement metrics [9, 15]. Hence, the assessment of the impact of attack and defense inherently involves fuzziness and uncertainty.

  2. The incorporation of subjective factors and human judgement presents analytical challenges when examining strategic interactions among decision-makers.

  3. In the context of uncertain scenarios, research on the establishment of attack and defense game models for infrastructure networks is limited, posing considerable difficulties in addressing practical issues comprehensively.

  4. There is a lack of detailed analysis regarding the factors that influence the equilibrium of attack and defense games in infrastructure networks in uncertain scenarios.

Considering the aforementioned limitations, the main objective of this paper is to bridge these research gaps. Fuzzy set theory, proposed by Zadeh in 1965 [53], offers a valuable tool for addressing these difficulties. In comparison to fuzzy sets, the intuitionistic fuzzy sets (IFSs) introduced by Atanassov incorporate the notion of nonmembership to represent decision-makers’ dissatisfaction degree [8]. IFSs have been widely applied in various fields, such as transportation problems [24,25,26,27, 29], assignment problems [28], and decision-making [14, 38]. Several extensions of IFSs have been proposed. Smarandache [46] introduced the neutrosophic set, which considers degrees of membership, nonmembership, indeterminacy, and hesitation simultaneously. Yager [52] proposed the Pythagorean fuzzy set to broaden the ranges of membership and nonmembership for realistic decision-making problems. Moreover, numerous studies have investigated different ranges of membership and nonmembership values, thereby enhancing the adaptability and effectiveness of describing fuzzy situations in real-world scenarios [1,2,3,4,5, 20,21,22]. However, in many cases, due to environmental factors or the influence of the decision-makers themselves, the membership and nonmembership of fuzzy quantities cannot be represented by crisp values; therefore, the IFS and its extensions lack the capability to comprehensively reflect human thinking. The interval-valued intuitionistic fuzzy set (IVIFS) treats membership and nonmembership degrees as intervals rather than crisp values, making it more suitable for handling complex scenarios of this nature [7].

Main contributions

This study focuses on attack and defense games in infrastructure networks under uncertain circumstances. The discussion in the preceding subsection indicates the advantages of IVIFSs in addressing such problems. Therefore, the primary objective of this study is to evaluate the payoffs of attack and defense games in infrastructure networks using IVIFSs. Furthermore, we conduct an extensive and in-depth discussion of the variation patterns and underlying reasons for the Nash equilibrium in this particular type of scenario. The main contributions of this research work are as follows:

  1. We establish an attack and defense game model in infrastructure networks with payoffs represented by IVIFSs.

  2. We present a solution method [32] to determine the Nash equilibrium of our model. Subsequently, we investigate the variation patterns of the Nash equilibrium and the main factors influencing it under uncertain environments.

  3. We provide a thorough explanation of the reasons behind the variation patterns of the Nash equilibrium, thereby validating the rationality and applicability of the proposed method.

  4. We find that compared to the existing attack and defense game model with crisp payoffs, the model proposed in this paper leads to a superior Nash equilibrium.

  5. Our study provides a reference for selecting effective strategies to protect infrastructure networks in uncertain environments.

Structure of this paper

The main structure of the article is as follows. In the “Preliminaries” section, some definitions and preliminaries related to IVIFS theory are reviewed. In the “Attack and defense game model based on IVIFS theory” section, we explain the cost model, the strategies, and the payoffs. The solution method for the game is introduced in the “Solving the game model” section. In the “Experiments” section, we present the experimental results. Finally, our conclusions are summarized in the “Conclusions and discussion” section; several directions for future work are also provided.

Fig. 1 Process used to generate the final IVIFS payoff matrix. A network consisting of 10 nodes is used as an illustrative example. Specifically, we examine a particular game result wherein the attacker and defender select strategies involving nodes colored red and blue, respectively. We obtain the IVIFS payoff matrix from the network topologies under various strategy profiles. We also use network metrics and subjective preferences as reference points for our analysis.

Preliminaries

We briefly introduce the definitions and operators of IVIFSs in this section.

Definition 1

As proposed in [8], an IFS A in a universe of discourse U is defined by a set of ordered triplets: \(\left\{ \left\langle x, \mu _A(x), \nu _A(x)\right\rangle \mid x \in U\right\} \), where \({{\mu }_{A}}(x),{{\nu }_{A}}(x):U\rightarrow \left[ 0,1\right] \), and \({\mu }_A(x)\), \({{\nu }_{A}}(x)\), respectively, denote the membership and nonmembership degrees of x to the IFS A, such that \(\forall x\in U\), \(0\le {{\mu }_{A}}(x)+{{\nu }_{A}}(x)\le 1\). The degree to which x is hesitant to belong to A is defined as \((1-{{\mu }_{A}}(x)-{{\nu }_{A}}(x))\).

Definition 2

As proposed in [7], an IVIFS \(\tilde{A}\) in a universe of discourse U is defined by \(\tilde{A}=\left\{ \left\langle x,\left[ \mu _{\tilde{A}}^L(x), \mu _{\tilde{A}}^U(x)\right] ,\right. \right. \left. \left. \left[ \nu _{\tilde{A}}^L(x), \nu _{\tilde{A}}^U(x)\right] \right\rangle \mid x \in U\right\} \), where \(\left[ \mu _{\tilde{A}}^L(x),\mu _{\tilde{A}}^U(x)\right] \in D[0,1]\) and \(\left[ \nu _{\tilde{A}}^L(x),\nu _{\tilde{A}}^U(x)\right] \in D[0,1]\) satisfy the condition \(0 \le \mu _{\tilde{A}}^U(x)+\nu _{\tilde{A}}^U(x) \le 1, \forall x \in U\). The intervals \(\left[ \mu _{\tilde{A}}^L(x),\mu _{\tilde{A}}^U(x)\right] \) and \(\left[ \nu _{\tilde{A}}^L(x), \nu _{\tilde{A}}^U(x)\right] \) denote the membership and nonmembership degrees of element \(x \in U\), respectively, in the IVIFS \(\tilde{A}\). For each element x, the hesitancy degree of \(x \in U\) to \(\tilde{A}\) is defined as \(\left[ 1-\mu _{\tilde{A}}^U(x)-\nu _{\tilde{A}}^U(x), 1\right. \left. -\mu _{\tilde{A}}^L(x)-\nu _{\tilde{A}}^L(x)\right] \).

Definition 3

Let \(\tilde{\xi }_1=\left\langle \left[ \mu _1^L(x), \mu _1^U(x)\right] ,\left[ \nu _1^L(x), \nu _1^U(x)\right] \right\rangle \) and \(\tilde{\xi }_2=\left\langle \left[ \mu _2^L(x), \mu _2^U(x)\right] ,\left[ \nu _2^L(x), \nu _2^U(x)\right] \right\rangle \) be any two IVIFSs. Then [7],

  (i) \(\tilde{\xi }_1\prec \tilde{\xi }_2\left( \tilde{\xi }_1<\tilde{\xi }_2 \right) \) iff \( \mu _1^L(x)<\mu _2^L(x)\), \(\mu _1^U(x)<\mu _2^U(x)\), \(\nu _1^L(x)>\nu _2^L(x)\) and \(\nu _1^U(x)>\nu _2^U(x)\).

  (ii) \(\tilde{\xi }_1=\tilde{\xi }_2\) iff \(\mu _1^L(x)=\mu _2^L(x)\), \(\mu _1^U(x)=\mu _2^U(x)\), \(\nu _1^L(x)=\nu _2^L(x)\) and \(\nu _1^U(x)=\nu _2^U(x)\).

  (iii) \(\tilde{\xi }_1+\tilde{\xi }_2=\left\langle \left[ \mu _1^L(x)+\mu _2^L(x)-\mu _1^L(x) \mu _2^L(x), \mu _1^U(x)+\mu _2^U(x)-\mu _1^U(x) \mu _2^U(x)\right] ,\left[ \nu _1^L(x) \nu _2^L(x),\nu _1^U(x) \nu _2^U(x)\right] \right\rangle \).

  (iv) \(\tilde{\xi }_1\tilde{\xi }_2=\left\langle \left[ \mu _1^L(x) \mu _2^L(x),\mu _1^U(x)\mu _2^U(x)\right] ,\left[ \nu _1^L(x)+\nu _2^L(x)-\nu _1^L(x) \nu _2^L(x), \nu _1^U(x)+\nu _2^U(x)-\nu _1^U(x) \nu _2^U(x)\right] \right\rangle \).

  (v) \(r\tilde{\xi }_1=\left\langle \left[ 1-\left( 1-\mu _1^L(x)\right) ^r, 1-\left( 1-\mu _1^U(x)\right) ^r\right] ,\left[ \left( \nu _1^L(x)\right) ^r, \left( \nu _1^U(x)\right) ^r\right] \right\rangle \).

Definition 4

[23] For an IVIFS \(\tilde{\xi }_t=\left\langle \left[ \mu _t^L(x), \mu _t^U(x)\right] ,\left[ \nu _t^L(x), \nu _t^U(x)\right] \right\rangle \), a score function is defined by Eq. (1), where \(S\left( \tilde{\xi }_t\right) \) is the score value of \(\tilde{\xi }_t\), which satisfies \(-1 \le S\left( \tilde{\xi }_t\right) \le 1\). The score function provides a comprehensive assessment of the degree to which an element belongs to a particular set and thus serves as a basis for comparing two IVIFSs: of two IVIFSs, the one with the lower score value is the smaller IVIFS.

$$\begin{aligned} S\left( \tilde{\xi }_t\right) = \frac{1}{4}\Big [&\left( \mu _t^L(x)\left( 2-\mu _t^U(x)-\nu _t^U(x)\right) +\mu _t^U(x)\left( 2-\mu _t^L(x)-\nu _t^L(x)\right) \right) \left( 2-\nu _t^L(x)-\nu _t^U(x)\right) \\ &-\left( \nu _t^L(x)\left( 2-\mu _t^U(x)-\nu _t^U(x)\right) +\nu _t^U(x)\left( 2-\mu _t^L(x)-\nu _t^L(x)\right) \right) \left( 2-\mu _t^L(x)-\mu _t^U(x)\right) \Big ]\end{aligned}$$
(1)
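
For concreteness, the score function in Eq. (1) can be implemented directly; the following Python sketch (the function name and test values are ours) computes the score of an IVIFS from its membership and nonmembership bounds.

```python
def ivifs_score(mu_L: float, mu_U: float, nu_L: float, nu_U: float) -> float:
    """Score of the IVIFS <[mu_L, mu_U], [nu_L, nu_U]> according to Eq. (1)."""
    pos = (mu_L * (2 - mu_U - nu_U) + mu_U * (2 - mu_L - nu_L)) * (2 - nu_L - nu_U)
    neg = (nu_L * (2 - mu_U - nu_U) + nu_U * (2 - mu_L - nu_L)) * (2 - mu_L - mu_U)
    return (pos - neg) / 4.0

# A larger IVIFS receives a larger score, e.g.:
assert ivifs_score(0.6, 0.8, 0.1, 0.2) > ivifs_score(0.2, 0.3, 0.4, 0.5)
```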

Attack and defense game model based on IVIFS theory

Based on IVIFS theory, we construct an attack and defense game model for infrastructure networks. The process of generating the final IVIFS payoff matrix under this model is shown in Fig. 1.

Basic assumptions

We concentrate on a target network, for instance, a rail transit network. This network is represented by a simple undirected graph, denoted as \(G\left( V,E \right) \), where the set of nodes is denoted by \(V=\{{{v}_{1}},{{v}_{2}},...,{{v}_{N}}\}\) and the set of edges is represented by \(E\subseteq V\times V\) (i.e., the rail transit stations and rail transit lines in the rail transit network, respectively). We define N as the total number of nodes within the network, so \(N = |V|\). We define \(A(G)=\left( a_{i j}\right) _{N \times N}\) as the adjacency matrix for G, where the values of \(a_{i j}\) and \(a_{j i}\) are set to 1 if nodes \(v_i\) and \(v_j\) are adjacent, and 0 otherwise. The basic assumptions made in this model are as follows:

  (i) There is a single attacker who targets specific nodes in the target network to disrupt the system’s performance. There is a single defender whose objective is to maintain the functionality of the network by protecting a subset of nodes. If a node fails due to an attack, the corresponding edges connected to that node are removed.

  (ii) Both players can obtain complete information about the target network, and they have full knowledge of the opponent, which means that they are perfectly informed of all the possible strategies that the opponent can potentially adopt and the decision-makers’ payoffs for each strategy profile.

  (iii) As the game is simultaneous, both the attacker and the defender move without knowing exactly which strategy the opponent plans to choose.

  (iv) The game is played within a single round; it is not subject to repetition over multiple rounds.

Cost model

We denote the attack cost for node \({v}_{i}\) by \(c_{i}^{A}\); the defense cost is represented by \(c_{i}^{D}\). The determination of the cost \(c_{i}^{A}\) or \(c_{i}^{D}\) depends on a specific reference property \(r_{i}\ge 0\) associated with node \({v}_{i}\). This association can be mathematically expressed as:

$$\begin{aligned} c_i^A= & {} r_i^{q_A} \end{aligned}$$
(2)
$$\begin{aligned} c_i^D= & {} r_i^{q_D}, \end{aligned}$$
(3)

where \(q_A\) represents the cost sensitivity parameter of the attacker and \(q_D\) represents the cost sensitivity parameter of the defender. The reference property \(r_i\) in this paper is defined as the node degree of \(v_i\). Consequently, the cost is influenced by both the degree of the node (\(r_i\)) and the players’ cost sensitivity parameters (\(q_A, q_D\)), as indicated by Eqs. (2) and (3). The parameters \(q_A\) and \(q_D\) can be obtained from expert experience and the specific infrastructure network. When \(q_A\) and \(q_D\) are small, e.g., \(q_A=q_D=0\), the costs of attacking and defending nodes are essentially the same regardless of their degrees. When \(q_A\) and \(q_D\) are large, e.g., \(q_A=q_D=1\), attacking or defending a node with a higher degree becomes correspondingly more expensive.

The resources available for both the attacker and the defender are denoted by:

$$\begin{aligned} C^A= & {} \theta _A \sum _{i=1}^N c_i^A=\theta _A \sum _{i=1}^N r_i^{q_A} \end{aligned}$$
(4)
$$\begin{aligned} C^D= & {} \theta _D \sum _{i=1}^N c_i^D=\theta _D \sum _{i=1}^N r_i^{q_D} \end{aligned}$$
(5)

The parameters \(\theta _A\) and \(\theta _D\) denote the cost constraint parameters for attack and defense, respectively, where \(\theta _A, \theta _D\in [0,1]\); they determine the fractions of the total attack and defense costs that are available to the attacker and defender as budgets for their attacking or defending actions. Consequently, as the cost constraint parameters increase, the available resources also increase.
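
As an illustration of Eqs. (2)-(5), the following Python sketch computes the node costs and resource budgets from the degree sequence of a networkx graph; the function name, variable names, and example graph are our own choices.

```python
import networkx as nx

def costs_and_budget(G: nx.Graph, q: float, theta: float):
    """Node costs c_i = r_i**q with r_i the degree of v_i (Eqs. (2)-(3)),
    and the total resource budget C = theta * sum_i c_i (Eqs. (4)-(5))."""
    costs = {v: G.degree(v) ** q for v in G.nodes()}
    budget = theta * sum(costs.values())
    return costs, budget

# Usage: the attacker and defender differ only in their q and theta values.
G = nx.karate_club_graph()                     # placeholder target network
attack_costs, C_A = costs_and_budget(G, q=0.5, theta=0.5)
defense_costs, C_D = costs_and_budget(G, q=1.0, theta=0.5)
```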

Strategies

According to the cost model discussed in the “Cost model” subsection and using the attacker as an illustrative example, the feasible strategies considered in earlier research can be defined as follows [33, 34, 54].

First, we define \(V^A \subseteq V\) as the set of nodes that have been attacked; we represent the attack status of node \(v_i\) as \(x_i=1\) if it has been attacked (\(v_i \in V^A\)), whereas \(x_i=0\) signifies that the node remains unattacked. Then, given that the attacker’s strategy involves selecting a series of target nodes from the target network G, a feasible strategy can be represented as a vector \(X=\left[ x_1, x_2, \ldots , x_N\right] \). All feasible attack strategies can be represented by the set \(S_A\); thus, we have \(X \in S_A\). The cost associated with a specific attack strategy X is denoted as follows:

$$\begin{aligned} C_X=\sum _{v_i \in V^A} c_i^A=\sum _{i=1}^N x_i c_i^A=\sum _{i=1}^N x_i r_i^{q_A} \end{aligned}$$
(6)

From Eq. (4), the cost constraint imposed on the attacker can be defined as:

$$\begin{aligned} C_X=\sum _{i=1}^N x_i r_i^{q_A} \le C^A=\theta _A \sum _{i=1}^N r_i^{q_A} \end{aligned}$$
(7)

Similarly, we suppose that \(Y=\left[ y_1, y_2, \ldots , y_N\right] \in S_D\) is a defense strategy vector; then, the defender’s cost constraint is:

$$\begin{aligned} C_Y=\sum _{i=1}^N y_i r_i^{q_D} \le C^D=\theta _D \sum _{i=1}^N r_i^{q_D} \end{aligned}$$
(8)

We make the assumption that node \(v_i\) fails only in the event of an attack without protection, meaning \(x_i=1\) and \(y_i=0\). Conversely, the node does not fail if it is defended \(\left( y_i=1\right) \).

The strategies for the attacker and defender described in Eqs. (7) and (8) span a large strategy space, especially for networks with a significant number of nodes. For example, in a target network with \(N=100\), \(\theta _A=0.5\), and \(q_A=0\), the attack strategy space \(\left| S_A\right| \) contains at least \(C_N^{N / 2}=(100 \times 99 \times \ldots \times 51) /(50 \times 49 \times \ldots \times 1) \ge 2^{50}\) strategies. The number of strategy profiles, \(\left| S_A\right| \times \left| S_D\right| \), is significantly larger.

In practice, decision-makers tend to choose from a limited set of options. Therefore, to facilitate analysis, we focus on two typical attack and defense strategies in this paper [54]:

  (i) High-degree strategy (HS). Under this strategy, the attacker or defender allocates all resources to the nodes with the highest degrees. Although the number of selected nodes is limited, their importance is comparatively high.

  (ii) Low-degree strategy (LS). Under this strategy, the attacker or defender allocates all resources to the nodes with the lowest degrees. Although these nodes may be less important individually, their overall quantity is substantial.

To construct an HS (LS), the nodes are first sorted by their reference properties in descending (ascending) order. Targets are then added to the attack (defense) set one by one, and the cost constraint is checked after each addition; the procedure stops when adding another node would violate the constraint, as sketched below.
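
A minimal sketch of this greedy construction, assuming the node degree as the reference property (as in the cost model above); the names are ours.

```python
import networkx as nx

def select_nodes(G: nx.Graph, q: float, theta: float, strategy: str = "HS") -> set:
    """Greedily build the HS or LS node set subject to the cost constraint
    of Eq. (7) (attacker) or Eq. (8) (defender)."""
    costs = {v: G.degree(v) ** q for v in G.nodes()}      # Eqs. (2)-(3)
    budget = theta * sum(costs.values())                  # Eqs. (4)-(5)
    # HS: highest-degree nodes first; LS: lowest-degree nodes first.
    order = sorted(G.nodes(), key=G.degree, reverse=(strategy == "HS"))
    chosen, spent = set(), 0.0
    for v in order:
        if spent + costs[v] > budget:   # adding v would violate the constraint
            break
        chosen.add(v)
        spent += costs[v]
    return chosen

G = nx.barabasi_albert_graph(300, 2, seed=1)   # an illustrative scale-free network
hs_nodes = select_nodes(G, q=0.5, theta=0.5, strategy="HS")
ls_nodes = select_nodes(G, q=0.5, theta=0.5, strategy="LS")
```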

Payoffs

To effectively express the inherent uncertainty in the game of attack and defense within infrastructure networks, it may be more suitable to define the decision-makers’ payoffs as IVIFSs. This approach provides a more realistic representation of the uncertainty and vagueness associated with such decision-making scenarios. Let \(\hat{V}\subseteq V\) denote the set of failed nodes and \(\hat{E}\) the set of edges removed because of these failed nodes. After a single round of attack and defense, the remaining network topology can be represented by \(\hat{G}=(V, E-\hat{E})\). From the attacker’s perspective, we consider the change in the network topology, namely the removed edges \(\hat{E}\) of the target network G, as the universe of discourse U. To represent the attacker’s “satisfaction with the effect of the attack” when the attacker selects strategy i and the defender adopts strategy j, we define the IVIFS on the universe of discourse U as \(\left\langle \left[ \mu _{i j}^L, \mu _{i j}^U\right] ,\left[ \nu _{i j}^L, \nu _{ij}^U\right] \right\rangle \). This representation is visually illustrated in Fig. 1. Since we consider only two typical strategies, \(i, j \in \{1,2\}\), where strategy 1 corresponds to the HS and strategy 2 corresponds to the LS.

To provide a comprehensive assessment of the impact of the attack on the target network, we use \(\Psi \) to denote network efficiency [30] and \(\Gamma \) to denote the size of the largest connected component [6], and we use these metrics to characterize the network. The alteration of the network topology resulting from the strategy profile (i, j) can be measured by Eqs. (9) and (10). Moreover, these equations provide a basis for determining the attacker’s IVIFS payoffs.

$$\begin{aligned} \Psi _{i j}= & {} \frac{\Psi (G)-\Psi (\hat{G})}{\Psi (G)} \in [0,1] \end{aligned}$$
(9)
$$\begin{aligned} \Gamma _{i j}= & {} \frac{\Gamma (G)-\Gamma (\hat{G})}{\Gamma (G)} \in [0,1]. \end{aligned}$$
(10)

Obviously, in terms of quantifying the attacker’s degree of satisfaction, both \(\Psi _{i j}\) and \(\Gamma _{i j}\) can be employed as measures of membership. Since the degree of membership may vary within a certain range, it is more suitable to regard it as an interval \(\left[ \mu _{i j}^L, \mu _{i j}^U\right] \). In this study, we first obtain the initial membership interval \(\left[ \min \left\{ \Psi _{i j}, \Gamma _{i j}\right\} , \max \left\{ \Psi _{i j}, \Gamma _{i j}\right\} \right] \) from the network metrics in Eqs. (9) and (10). To tighten this estimate, we then shrink the interval to half its width around its midpoint, which can be calculated as follows:

$$\begin{aligned} \mu _{i j}^L= & {} \frac{1}{4} \max \left\{ \Psi _{i j}, \Gamma _{i j}\right\} +\frac{3}{4} \min \left\{ \Psi _{i j}, \Gamma _{i j}\right\} \end{aligned}$$
(11)
$$\begin{aligned} \mu _{i j}^U= & {} \frac{3}{4} \max \left\{ \Psi _{i j}, \Gamma _{i j}\right\} +\frac{1}{4} \min \left\{ \Psi _{i j}, \Gamma _{i j}\right\} , \end{aligned}$$
(12)

where \(\mu _{i j}^L, \mu _{i j}^U \in [0,1]\) and \(\mu _{i j}^L \le \mu _{i j}^U\).

To incorporate the attacker’s subjective preference, we formulate the nonmembership degree to reflect the attacker’s dissatisfaction. Specifically, the more nodes that are attacked successfully, the greater the attacker’s satisfaction, whereas a larger number of unattacked nodes decreases the attacker’s satisfaction. In this context, we denote the number of successfully attacked nodes by \(n_A\) and the number of nodes that remain unattacked by \(n_U\). As the nonmembership degree reflects the decision-maker’s preference information in terms of opposition, it can be calculated as follows:

$$\begin{aligned} \nu _{i j}^L= & {} \left( 1-\mu _{i j}^U\right) \frac{\left( 1-\mu _{i j}^U\right) }{\left( 1-\mu _{i j}^L\right) } e^{-\frac{n_A}{n_U}} \end{aligned}$$
(13)
$$\begin{aligned} \nu _{i j}^U= & {} \left( 1-\mu _{i j}^U\right) e^{-\frac{n_A}{n_U}}, \end{aligned}$$
(14)

where \(\nu _{i j}^L, \nu _{i j}^U \in [0,1]\) and \(\nu _{i j}^L \le \nu _{i j}^U\). As \(n_A\) increases, the nonmembership degree decreases. As \(n_U\) increases, the nonmembership degree increases. As \(e^{-\frac{n_A}{n_U}} \le 1\), it is evident that the IVIFS we proposed satisfies the condition \(\left( 1-\mu _{i j}^U\right) e^{-\frac{n_A}{n_U}}+\mu _{i j}^U \le 1-\mu _{i j}^U+\mu _{i j}^U=1\), i.e., \(\mu _{i j}^U+\nu _{i j}^U \le 1\).
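Putting Eqs. (9)-(14) together, one entry of the attacker's IVIFS payoff matrix can be computed along the following lines. This is a sketch under our reading of the text: a node fails when it is attacked and unprotected, \(n_A\) counts the failed nodes, and \(n_U\) counts the nodes outside the attack set; the function names are ours.

```python
import math
import networkx as nx

def ivifs_payoff(G: nx.Graph, attacked: set, defended: set):
    """Attacker's IVIFS payoff <[mu_L, mu_U], [nu_L, nu_U]> for one strategy
    profile, following Eqs. (9)-(14)."""
    failed = attacked - defended                        # attacked and unprotected nodes fail
    G_hat = G.copy()
    G_hat.remove_edges_from(list(G.edges(failed)))      # failed nodes lose all incident edges

    psi_0, psi_1 = nx.global_efficiency(G), nx.global_efficiency(G_hat)
    gamma_0 = len(max(nx.connected_components(G), key=len))
    gamma_1 = len(max(nx.connected_components(G_hat), key=len))
    psi = (psi_0 - psi_1) / psi_0                       # Eq. (9)
    gamma = (gamma_0 - gamma_1) / gamma_0               # Eq. (10)

    lo, hi = min(psi, gamma), max(psi, gamma)
    mu_L = 0.25 * hi + 0.75 * lo                        # Eq. (11)
    mu_U = 0.75 * hi + 0.25 * lo                        # Eq. (12)

    n_A = len(failed)                                   # successfully attacked nodes
    n_U = G.number_of_nodes() - len(attacked)           # unattacked nodes (our interpretation)
    decay = math.exp(-n_A / n_U) if n_U > 0 else 0.0
    nu_L = (1 - mu_U) * (1 - mu_U) / (1 - mu_L) * decay  # Eq. (13)
    nu_U = (1 - mu_U) * decay                            # Eq. (14)
    return (mu_L, mu_U), (nu_L, nu_U)
```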

Consequently, a \(2 \times 2\) payoff matrix can be derived based on Eqs. (11), (12), (13), and (14); it is illustrated in Fig. 2, where \(\mu _{i j}=\left[ \mu _{i j}^L, \mu _{i j}^U\right] \) and \(\nu _{i j}=\left[ \nu _{i j}^L, \nu _{i j}^U\right] \) are the membership and nonmembership degrees of the attacker’s payoff when the attacker takes strategy i and the defender chooses strategy j. The row player is the attacker, and the column player is the defender. As this is a zero-sum game, the defender’s loss can be represented by the same IVIFS \(\left\langle \left[ \mu _{i j}^L, \mu _{i j}^U\right] ,\left[ \nu _{i j}^L, \nu _{i j}^U\right] \right\rangle \).

Fig. 2 The attacker’s IVIFS payoff matrix, where the attacker is in the row position and the defender is in the column position.

Solving the game model

Under crisp conditions, a two-player zero-sum game can be solved with two linear programs that identify its Nash equilibrium. Let \(\xi \) and \(\sigma \) denote the expected payoffs of the attacker and the defender, respectively, and let \(U=\left( u_{ij}\right) _{m\times n}\) denote the payoff matrix of the attacker. The probabilities of the attacker and defender choosing strategies i and j are denoted by \(p_i^A\) and \(p_j^D\), respectively. The optimization models of the attacker and the defender are shown in Eqs. (15) and (16) [43]; the Nash equilibrium \(\left( p^{A *}, p^{D*}\right) \) is obtained from the mixed strategies \(p^A, p^D\).

$$\begin{aligned} \begin{aligned}&\max \xi \\&\text{ s.t. } \left\{ \begin{array}{l} \sum _{i=1}^m u_{i j} \cdot p_i^A \ge \xi \quad j=1,2, \ldots , n \\ \sum _{i=1}^m p_i^A=1 \\ p_i^A \ge 0 \quad i=1,2, \ldots , m \end{array}\right. \end{aligned} \end{aligned}$$
(15)
$$\begin{aligned} \begin{aligned}&\min \sigma \\&\text{ s.t. } \left\{ \begin{array}{l} \sum _{j=1}^n u_{i j} \cdot p_j^D \le \sigma \quad i=1,2, \ldots , m \\ \sum _{j=1}^n p_j^D=1 \\ p_j^D \ge 0 \quad j=1,2, \ldots , n \end{array}\right. \end{aligned} \end{aligned}$$
(16)
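
For reference, the crisp programs in Eqs. (15) and (16) can be solved with any linear programming routine. Below is a minimal sketch using scipy.optimize.linprog; the example payoff matrix is made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def solve_crisp_zero_sum(U: np.ndarray):
    """Mixed-strategy equilibrium of a two-player zero-sum game with crisp
    payoff matrix U (rows: attacker strategies), per Eqs. (15)-(16)."""
    m, n = U.shape
    # Attacker: max xi  s.t.  U^T p >= xi, sum(p) = 1, p >= 0.
    # Decision vector [p_1, ..., p_m, xi]; linprog minimizes, so use -xi.
    c = np.append(np.zeros(m), -1.0)
    A_ub = np.hstack([-U.T, np.ones((n, 1))])           # xi - sum_i u_ij p_i <= 0
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)
    res_A = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                    bounds=[(0, None)] * m + [(None, None)])
    # Defender: min sigma  s.t.  U q <= sigma, sum(q) = 1, q >= 0.
    c = np.append(np.zeros(n), 1.0)
    A_ub = np.hstack([U, -np.ones((m, 1))])             # sum_j u_ij q_j - sigma <= 0
    A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)
    res_D = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=[1.0],
                    bounds=[(0, None)] * n + [(None, None)])
    return res_A.x[:m], res_D.x[:n], -res_A.fun         # p^A*, p^D*, game value

p_A, p_D, value = solve_crisp_zero_sum(np.array([[0.0, 0.8], [0.6, 0.0]]))
```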

Inspired by the above optimization models, for the game with payoffs represented by IVIFSs, the Nash equilibrium for the attacker and the defender can be obtained through the resolution of the nonlinear biobjective interval programming model formulated by Eqs. (17) and (18),

$$\begin{aligned} \begin{aligned}&\max \left\{ \left[ \mu _A^L, \mu _A^U\right] \right\} , \min \left\{ \left[ \nu _A^L, \nu _A^U\right] \right\} \\&\text{ s.t. } \left\{ \begin{array}{l} \left[ 1-\prod _{j=1}^n \prod _{i=1}^m\left( 1-\mu _{i j}^L\right) ^{p_i^A p_j^D},\right. \\ \quad \left. 1-\prod _{j=1}^n \prod _{i=1}^m\left( 1-\mu _{i j}^U\right) ^{p_i^A p_j^D}\right] \\ \quad \ge \left[ \mu _A^L, \mu _A^U\right] , for\text { }any\text { }p^D \\ \left[ \prod _{j=1}^n \prod _{i=1}^m\left( \nu _{i j}^L\right) ^{p_i^A p_j^D}, \prod _{j=1}^n \prod _{i=1}^m\left( \nu _{i j}^U\right) ^{p_i^A p_j^D}\right] \\ \quad \le \left[ \nu _A^L, \nu _A^U\right] , for\text { }any\text { }p^D \\ 0 \le \mu _A^U+\nu _A^U \le 1, \sum _{i=1}^m p_i^A=1 \\ \mu _A^L \ge 0, \mu _A^U \ge 0, \nu _A^L \ge 0, \nu _A^U \ge 0, p_i^A \ge 0,\\ \quad i=1,2, \ldots , m \end{array}\right. \end{aligned} \end{aligned}$$
(17)
$$\begin{aligned} \begin{aligned}&\min \left\{ \left[ \mu _D^L, \mu _D^U\right] \right\} , \max \left\{ \left[ \nu _D^L, \nu _D^U\right] \right\} \\&\text{ s.t. } \left\{ \begin{array}{l} \left[ 1-\prod _{j=1}^n \prod _{i=1}^m\left( 1-\mu _{i j}^L\right) ^{p_i^A p_j^D},\right. \\ \quad \left. 1-\prod _{j=1}^n \prod _{i=1}^m\left( 1-\mu _{i j}^U\right) ^{p_i^A p_j^D}\right] \\ \quad \le \left[ \mu _D^L, \mu _D^U\right] , for\text { }any\text { }p^A \\ \left[ \prod _{j=1}^n \prod _{i=1}^m\left( \nu _{i j}^L\right) ^{p_i^A p_j^D}, \prod _{j=1}^n \prod _{i=1}^m\left( \nu _{i j}^U\right) ^{p_i^A p_j^D}\right] \\ \quad \ge \left[ \nu _D^L, \nu _D^U\right] , for\text { }any\text { }p^A \\ 0 \le \mu _D^U+\nu _D^U \le 1, \sum _{j=1}^n p_j^D=1 \\ \mu _D^L \ge 0, \mu _D^U \ge 0, \nu _D^L \ge 0, \nu _D^U \ge 0, p_j^D \ge 0,\\ \quad j=1,2, \ldots , n, \end{array}\right. \end{aligned} \end{aligned}$$
(18)

where:

\(\mu _A^L=\min _{p^D \in Y}\left\{ 1-\prod _{j=1}^n \prod _{i=1}^m\left( 1-\mu _{i j}^L\right) ^{p_i^A p_j^D}\right\} \), \(\mu _A^U=\min _{p^D \in Y}\left\{ 1-\prod _{j=1}^n\prod _{i=1}^m \left( 1{-}\mu _{i j}^U\right) ^{p_i^A p_j^D}\right\} \), \(\nu _A^L{=}\max _{p^D \in Y}\left\{ \prod _{j=1}^n \prod _{i=1}^m\left( \nu _{i j}^L\right) ^{p_i^A p_j^D}\right\} \) and \(\nu _A^U{=}\max _{p^D \in Y}\bigg \{\prod _{j=1}^n \prod _{i=1}^m\left. \left( \nu _{i j}^U\right) ^{p_i^A p_j^D}\right\} \); \(\mu _D^L=\max _{p^A \in X}\left\{ 1-\prod _{j=1}^n \prod _{i=1}^m\left( 1-\right. \right. \)\(\left. \left. \mu _{i j}^L\right) ^{p_i^A p_j^D}\right\} \), \(\mu _D^U=\min _{p^A \in X}\left\{ 1-\prod _{j=1}^n \prod _{i=1}^m\left( 1\right. \right. \)\(\left. \left. -\mu _{i j}^U\right) ^{p_i^A p_j^D}\right\} \), \(\nu _D^L=\max _{p^A \in X}\left\{ \prod _{j=1}^n\prod _{i=1}^m\right. \) \(\left. \left( \nu _{i j}^L\right) ^{p_i^A p_j^D}\right\} \) and \(\nu _D^U=\max _{p^A \in X}\) \(\left\{ \prod _{j=1}^n \prod _{i=1}^m\left( \nu _{i j}^U\right) ^{p_i^A p_j^D}\right\} \).

Li [32] transformed the biobjective interval programming models (Eqs. (17) and (18)) into standard nonlinear programming models, which can be formally rewritten as follows:

$$\begin{aligned} \begin{aligned}&\min \{\xi \} \\&\text{ s.t. }\left\{ \begin{array}{l} \prod _{i=1}^m\left[ \left( 1-\mu _{i j}^L\right) ^\lambda \left( \nu _{i j}^L\right) ^{1-\lambda }\left( 1-\mu _{i j}^U\right) ^\lambda \right. \\ \left. \left( \nu _{i j}^U\right) ^{1-\lambda }\right] ^{p_i^A} \le \xi \quad j=1,2, \ldots , n \\ \sum _{i=1}^m p_i^A=1 \\ p_i^A \ge 0, i=1,2, \ldots , m \end{array}\right. \end{aligned} \end{aligned}$$
(19)
$$\begin{aligned} \begin{aligned}&\max \{\sigma \} \\&\text{ s.t. }\left\{ \begin{array}{l} \prod _{j=1}^n\left[ \left( 1-\mu _{i j}^L\right) ^\lambda \left( \nu _{i j}^L\right) ^{1-\lambda }\left( 1-\mu _{i j}^U\right) ^\lambda \right. \\ \left. \left( \nu _{i j}^U\right) ^{1-\lambda }\right] ^{p_j^D} \ge \sigma \quad i=1,2, \ldots , m \\ \sum _{j=1}^n p_j^D=1 \\ p_j^D \ge 0, j=1,2, \ldots , n, \end{array}\right. \end{aligned} \end{aligned}$$
(20)

where \(\lambda \in [0,1]\) represents the relative weight of the membership-degree constraints versus the nonmembership-degree constraints under the weighted average method. In the remainder of this paper, we set \(\lambda =0.5\) to assign equal weights to the membership and nonmembership degrees.
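
In practice, Eqs. (19) and (20) can be solved by taking logarithms of the products, which yields linear programs (this is the transformation made explicit later in Eqs. (21) and (22)). A sketch using scipy.optimize.linprog follows; the 2 x 2 IVIFS payoff matrix is hypothetical, and entries with \(\mu =1\) or \(\nu =0\) would need special handling before taking logarithms.

```python
import numpy as np
from scipy.optimize import linprog

def solve_ivifs_game(mu_L, mu_U, nu_L, nu_U, lam=0.5):
    """Mixed-strategy equilibrium of the zero-sum game with IVIFS payoffs
    <[mu_L_ij, mu_U_ij], [nu_L_ij, nu_U_ij]> via the log-linearised forms of
    Eqs. (19)-(20)."""
    a = (lam * np.log(1 - mu_L) + (1 - lam) * np.log(nu_L)
         + lam * np.log(1 - mu_U) + (1 - lam) * np.log(nu_U))   # every entry <= 0
    m, n = a.shape
    # Attacker: min xi_bar  s.t.  sum_i a_ij p_i <= xi_bar for each j, sum(p) = 1.
    c = np.append(np.zeros(m), 1.0)
    res_A = linprog(c, A_ub=np.hstack([a.T, -np.ones((n, 1))]), b_ub=np.zeros(n),
                    A_eq=np.append(np.ones(m), 0.0).reshape(1, -1), b_eq=[1.0],
                    bounds=[(0, None)] * m + [(None, 0)])
    # Defender: max sigma_bar  s.t.  sum_j a_ij q_j >= sigma_bar for each i, sum(q) = 1.
    c = np.append(np.zeros(n), -1.0)
    res_D = linprog(c, A_ub=np.hstack([-a, np.ones((m, 1))]), b_ub=np.zeros(m),
                    A_eq=np.append(np.ones(n), 0.0).reshape(1, -1), b_eq=[1.0],
                    bounds=[(0, None)] * n + [(None, 0)])
    return res_A.x[:m], res_D.x[:n]

# Hypothetical IVIFS payoff matrix (strategy 1 = HS, strategy 2 = LS).
mu_L = np.array([[0.00, 0.70], [0.55, 0.00]]); mu_U = np.array([[0.00, 0.85], [0.65, 0.00]])
nu_L = np.array([[1.00, 0.03], [0.09, 1.00]]); nu_U = np.array([[1.00, 0.08], [0.12, 1.00]])
p_A, p_D = solve_ivifs_game(mu_L, mu_U, nu_L, nu_U)
```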

Experiments

Equilibrium strategies with different q and \(\theta \)

Because scale-free networks with large numbers of nodes are widespread in the real world, we take as the target network a scale-free network that follows a power-law degree distribution \(\left( p(k) \sim (\eta -1) m^{\eta -1} k^{-\eta }\right) \) [10]. We set \(N=300\), \(\eta =3\), and \(m=2\), where N is the number of nodes, \(\eta \) is the degree exponent, and m is the number of links connecting each new node to the existing nodes in the network.

The equilibrium strategies of the two players for various values of \(q\left( q_A=q_D\right) \) and \(\theta \left( \theta _A=\theta _D\right) \) are presented in Fig. 3. For the attacker, when q is small, the probability of adopting the HS in the Nash equilibrium is approximately 0.4; when q is large, this probability is approximately 0.8. Thus, as q increases, the attacker’s equilibrium probability of adopting the HS increases. For the defender, the situation is the opposite: when q is small, the probability of adopting the HS is approximately 0.6, whereas when q is large, it is approximately 0.2, so the defender’s equilibrium probability of adopting the HS decreases as q increases. However, when the cost constraint \(\theta \) is varied from 0.1 to 0.9 with q held fixed, the Nash equilibrium for both the attacker and defender remains relatively unchanged.

Fig. 3 Equilibrium strategies with different \(q \left( q_A=q_D\right) \) and \(\theta \left( \theta _A=\theta _D\right) \) values. The target network is a scale-free network characterized by \(N=300\), \(\eta =3\), and \(m=2\). The colors in the blocks represent the probabilities of the attacker and defender adopting the HS when q and \(\theta \) are varied within the interval [0.1, 0.9].

Fig. 4 The attacker’s payoffs represented by IVIFSs for different values of q when \(\theta =0.5\). The target network corresponds to the one depicted in Fig. 3. The x-axis represents different values of q, while the y-axis represents the IVIFS payoffs. \(\mu _{i j}=\left[ \mu _{i j}^L, \mu _{i j}^U\right] \) and \(\nu _{i j}=\left[ \nu _{i j}^L, \nu _{i j}^U\right] \) are the membership and nonmembership degrees of the attacker’s payoff when the attacker chooses strategy i and the defender adopts strategy j.

The payoffs under different strategy profiles

As discussed in the “Equilibrium strategies with different q and \(\theta \)” subsection, the primary factor influencing the Nash equilibrium is the cost sensitivity parameter. To explain more deeply why the Nash equilibrium is affected by q, we illustrate the attacker’s payoffs for all strategy profiles as a function of q when \(\theta =0.5\) in Fig. 4. Owing to the zero-sum nature of the game, the defender’s payoffs are the opposite of the attacker’s payoffs and are not explicitly presented.

Figure 3 shows that the probability that the attacker chooses the HS increases with q, while the probability that the defender selects the HS decreases with q. According to Fig. 4, for the attacker, the IVIFS \(\left\langle \mu _{12}, \nu _{12}\right\rangle \) decreases from \(\langle [0.76,0.89],[0.02,0.05]\rangle \) to \(\langle [0.40,0.64],[0.16,0.26]\rangle \), while \(\left\langle \mu _{21}, \nu _{21}\right\rangle \) increases from \(\langle [0.55,0.67],[0.09,0.12]\rangle \) to \(\langle [0.77,0.87],[0.01,0.01]\rangle \). The values of \(\left\langle \mu _{11}, \nu _{11}\right\rangle \) and \(\left\langle \mu _{22}, \nu _{22}\right\rangle \) are both equal to \(\langle [0,0],[1,1]\rangle \). We explain this behavior mathematically through the changes in the IVIFS payoffs in Fig. 4 as follows.

The nonlinear programming models (19) and (20) can be reformulated into a linear programming model, expressed as follows [32]:

$$\begin{aligned} \begin{aligned}&\min \{\bar{\xi }\} \\&\text{ s.t. }\left\{ \begin{array}{l} \sum _{i=1}^m\left[ \lambda \ln \left( 1-\mu _{i j}^L\right) +(1-\lambda ) \ln \left( \nu _{i j}^L\right) + \right. \\ \left. \lambda \ln \left( 1-\mu _{i j}^U\right) +(1-\lambda ) \ln \left( \nu _{i j}^U\right) \right] p_i^A \le \bar{\xi },\\ j=1,2, \ldots ,n \\ \sum _{i=1}^m p_i^A=1 \\ \bar{\xi } \le 0, p_i^A \ge 0, i=1,2, \ldots , m \end{array}\right. \end{aligned} \end{aligned}$$
(21)

and

$$\begin{aligned} \begin{aligned}&\max \{\bar{\sigma }\} \\&\text{ s.t. }\left\{ \begin{array}{l} \sum _{j=1}^n\left[ \lambda \ln \left( 1-\mu _{i j}^L\right) +(1-\lambda ) \ln \left( \nu _{i j}^L\right) + \right. \\ \left. \lambda \ln \left( 1-\mu _{i j}^U\right) +(1-\lambda ) \ln \left( \nu _{i j}^U\right) \right] p_j^D \ge \bar{\sigma },\\ i=1,2, \ldots , m \\ \sum _{j=1}^n p_j^D=1 \\ \bar{\sigma } \le 0, p_j^D \ge 0, j=1,2, \ldots , n. \end{array}\right. \end{aligned} \end{aligned}$$
(22)

These transformations are valid except when \(\mu _{i j}^L=1\), \(\mu _{i j}^U=1\), \(\nu _{i j}^L=0\), or \(\nu _{i j}^U=0\). Letting \(\bar{p}_i^A=p_i^A / \bar{\xi }\) and \(\bar{p}_j^D=p_j^D / \bar{\sigma }\), Eqs. (21) and (22) can be rewritten as:

$$\begin{aligned} \begin{aligned}&\max \sum _{i=1}^m \bar{p}_i^A \\&\text{ s.t. }\left\{ \begin{array}{l} \sum _{i=1}^m\left[ \lambda \ln \left( 1-\mu _{i j}^L\right) +(1-\lambda ) \ln \left( \nu _{i j}^L\right) +\right. \\ \left. \lambda \ln \left( 1-\mu _{i j}^U\right) +(1-\lambda ) \ln \left( \nu _{i j}^U\right) \right] \bar{p}_i^A \ge 1, \\ j=1,2, \ldots , n \\ \bar{p}_i^A \le 0, i=1,2, \ldots , m \end{array}\right. \end{aligned} \end{aligned}$$
(23)

and

$$\begin{aligned} \begin{aligned}&\min \sum _{j=1}^n \bar{p}_j^D \\&\text{ s.t. }\left\{ \begin{array}{l} \sum _{j=1}^n\left[ \lambda \ln \left( 1-\mu _{i j}^L\right) +(1-\lambda ) \ln \left( \nu _{i j}^L\right) +\right. \\ \left. \lambda \ln \left( 1-\mu _{i j}^U\right) +(1-\lambda ) \ln \left( \nu _{i j}^U\right) \right] \bar{p}_j^D \le 1,\\ i=1,2, \ldots , m \\ \bar{p}_j^D \le 0, j=1,2, \ldots , n, \end{array}\right. \end{aligned} \end{aligned}$$
(24)

where \(p_i^A=\bar{p}_i^A / \sum _{i=1}^m \bar{p}_i^A\) and \(p_j^D=\bar{p}_j^D / \sum _{j=1}^n \bar{p}_j^D\). In Fig. 4, the IVIFS payoffs satisfy \(\mu _{11}^L=\mu _{11}^U=\mu _{22}^L=\mu _{22}^U=0\) and \( \nu _{11}^L=\nu _{11}^U=\nu _{22}^L=\nu _{22}^U=1\). Let \(\lambda \ln \left( 1-\mu _{12}^L\right) +(1-\lambda ) \ln \left( \nu _{12}^L\right) +\lambda \ln \left( 1-\mu _{12}^U\right) +(1-\lambda ) \ln \left( \nu _{12}^U\right) =\alpha _{12}\) and \(\lambda \ln \left( 1-\mu _{21}^L\right) +(1-\lambda ) \ln \left( \nu _{21}^L\right) +\lambda \ln \left( 1-\mu _{21}^U\right) +(1-\lambda ) \ln \left( \nu _{21}^U\right) =\alpha _{21}\). As q increases, the value of \(\alpha _{12}\) increases, whereas the value of \(\alpha _{21}\) decreases; both \(\alpha _{12}\) and \(\alpha _{21}\) remain negative throughout. From Eqs. (23) and (24), we have:

$$\begin{aligned} \begin{aligned}&\max \left( \bar{p}_1^A+\bar{p}_2^A\right) \\&\text{ s.t. }\left\{ \begin{array}{l} \bar{p}_2^A \le 1 / \alpha _{21} \\ \bar{p}_1^A \le 1 / \alpha _{12} \\ \bar{p}_1^A \le 0, \bar{p}_2^A \le 0 \end{array}\right. \end{aligned} \end{aligned}$$
(25)

and

$$\begin{aligned} \begin{aligned}&\min \left( \bar{p}_1^D+\bar{p}_2^D\right) \\&\text{ s.t. }\left\{ \begin{array}{l} \bar{p}_2^D \ge 1 / \alpha _{12} \\ \bar{p}_1^D \ge 1 / \alpha _{21} \\ \bar{p}_1^D \le 0, \bar{p}_2^D \le 0 \end{array}\right. \end{aligned} \end{aligned}$$
(26)

According to Eqs. (25) and (26), given that \(p_i^A=\bar{p}_i^A / \sum _{i=1}^m \bar{p}_i^A\) and \(p_j^D=\bar{p}_j^D / \sum _{j=1}^n \bar{p}_j^D\), as q increases, the attacker’s probability of adopting the HS increases, while the defender’s probability of adopting the HS decreases. This observation is consistent with the experimental results, confirming the agreement between the mathematical model and the empirical findings.

In Fig. 4, the value of \(\left\langle \mu _{12}, \nu _{12}\right\rangle \) decreases while the value of \(\left\langle \mu _{21}, \nu _{21}\right\rangle \) increases as q increases. For \(\left\langle \mu _{12}, \nu _{12}\right\rangle \), owing to the pronounced heterogeneity of the degree distributions of scale-free networks, when the attacker adopts the HS and the defender adopts the LS, increasing q means that the attacker can attack fewer hub nodes, because high-degree nodes become more costly, while the defender can protect more leaf nodes under the same cost constraint. For \(\left\langle \mu _{21}, \nu _{21}\right\rangle \), the situation is the opposite.

Fig. 5 The number of attacked \(\left( n_A\right) \) or unattacked \(\left( n_U\right) \) nodes versus q under two strategy profiles: one where the attacker adopts the HS and the defender adopts the LS (HL), and another where the attacker adopts the LS and the defender adopts the HS (LH). The x-axis represents different values of q, while the y-axis represents the number of nodes. The target network corresponds to the one depicted in Fig. 3.

To verify the proposition above, Fig. 5 shows the number of attacked nodes as a function of q when \(\theta =0.5\) under the relevant strategy profiles. The strategy profiles (HS, HS) and (LS, LS) are not displayed because the number of attacked nodes in those cases is 0. Furthermore, as the number of attacked nodes increases or decreases, the value of \(e^{-\frac{n_A}{n_U}}\) decreases or increases accordingly. Consequently, the interval-valued nonmembership degrees for the attacker decrease or increase, which indicates that the rejection degree of the attacker is lower or higher, respectively.

Fig. 6 Nash equilibrium of the attacker and defender with \(\theta =0.3\), \(\theta =0.5\) and \(\theta =0.7\). The target network corresponds to the one depicted in Fig. 3. In each subfigure, the x-axis represents different values of \(q_D\), while the y-axis represents different values of \(q_A\). The colors in the blocks represent the probabilities of the HS in the mixed-strategy Nash equilibrium.

The equilibrium strategies when \(q_A\) and \(q_D\) are different

When \(q_A\) and \(q_D\) differ, the Nash equilibrium of the attacker and defender is illustrated in Fig. 6. Because each player has only two strategies, the figure reports the probabilities with which the attacker and defender adopt the HS in their mixed strategies, and several patterns emerge for different \(\theta \). When \(\theta =0.3\), for the attacker, the probability of the HS increases with \(q_A\) and \(q_D\); for the defender, the probability of the HS increases with \(q_D\) and decreases with \(q_A\). When \(\theta =0.5\), for the attacker, the probability of the HS increases with \(q_A\) and \(q_D\); for the defender, the probability of the HS decreases with \(q_A\) and \(q_D\). When \(\theta =0.7\), for the attacker, the probability of the HS increases with \(q_D\) and decreases with \(q_A\); for the defender, the probability of the HS decreases with \(q_A\) and \(q_D\).

To facilitate comprehension, let us consider the specific scenario depicted in Fig. 6. Suppose that for a rail transit network, the cost constraint parameter \(\theta \) for the attacker and defender is 0.7. For the defender, an increase in the cost sensitivity parameter \(q_D\) results in a greater cost for selecting important rail transit stations and a lower cost for selecting unimportant rail transit stations, thereby leading to a reduced probability of the defender selecting HS. Correspondingly, the attacker needs to increase the probability of selecting HS to avoid targeting protected stations. The security department can utilize this pattern, combined with actual attack and defense costs, to select a mixed strategy that offers the most effective protection.

The IVIFS payoffs under different \(q_A\) and \(q_D\)

To further explain the phenomenon observed in Fig. 6, we consider special scenarios in which either \(q_A\) or \(q_D\) is fixed, and we explore various values of \(\theta \). As an illustrative example, the fixed parameter (\(q_A\) or \(q_D\)) is assigned a value of 0.2. The resulting equilibrium patterns of the attacker and the defender are shown in Fig. 7.

Fig. 7 The equilibrium patterns of the attacker and the defender in Fig. 6. We illustrate the probabilities of the attacker and the defender adopting the HS versus \(q_A\) or \(q_D\) when \(\theta \) and one of \(q_A\) and \(q_D\) are fixed, in the specific cases a \(\theta =0.3, q_D=0.2\), b \(\theta =0.3, q_A=0.2\), c \(\theta =0.5, q_D=0.2\), d \(\theta =0.5, q_A=0.2\), e \(\theta =0.7, q_D=0.2\) and f \(\theta =0.7, q_A=0.2\). In each subfigure, the x-axis represents different values of \(q_A\) or \(q_D\), while the y-axis represents the probability of adopting the HS.

The results in Fig. 7 can be explained by examining the payoffs in the corresponding cases. As described in the “The payoffs under different strategy profiles” subsection, scale-free networks exhibit high heterogeneity in their degree distributions. Thus, when the attacker (defender) adopts the HS, increasing \(q_A\) (\(q_D\)) means that fewer hub nodes can be attacked (protected), because high-degree nodes become more costly under the same cost constraint. Conversely, when the attacker (defender) adopts the LS, increasing \(q_A\) (\(q_D\)) allows more leaf nodes to be attacked (protected), because low-degree nodes remain relatively cost-effective within the same cost constraint. The changes in the payoffs under different strategy profiles depend on the variations in the number of nodes that are successfully attacked. These changes are shown in Fig. 8, which illustrates only the attacker’s payoffs owing to the zero-sum nature of the game.

Fig. 8 Payoffs of the attacker in each strategy profile versus \(q_A\) or \(q_D\). We consider six cases: a \(\theta =0.3, q_D=0.2\), b \(\theta =0.3, q_A=0.2\), c \(\theta =0.5, q_D=0.2\), d \(\theta =0.5, q_A=0.2\), e \(\theta =0.7, q_D=0.2\) and f \(\theta =0.7, q_A=0.2\). In each subfigure, the x-axis represents different values of \(q_A\) or \(q_D\), while the y-axis represents the payoffs, which are represented by IVIFSs. Darker colors with solid lines depict the upper and lower bounds of the membership degrees, while lighter colors with dashed lines depict the upper and lower bounds of the nonmembership degrees. The target network corresponds to the one depicted in Fig. 3.

From Fig. 8, it is evident that when \(q_A\) and \(q_D\) vary, for all \({i,j}\in \{1,2\}\), the monotonicity of \(\mu _{i j}^L\) is consistent with that of \(\mu _{i j}^U\) and opposite to that of \(\nu _{i j}^L\) and \(\nu _{i j}^U\). According to Eq. (1), the IVIFSs representing the payoffs of all strategy profiles are monotonic and possess a distinct ranking characteristic. However, analyzing the impact of the IVIFS payoffs on the patterns of the Nash equilibria through Eqs. (11) and (12) becomes complex and unintuitive because of the multidimensionality of the IVIFS representation. Moreover, considering the similarity between the optimization models of the two-player zero-sum game with crisp payoffs and the zero-sum game with IVIFS payoffs (in both, the attacker maximizes the gain-floor and the defender minimizes the loss-ceiling), we approximate the size of each IVIFS by the score function described in Eq. (1). This approach allows us to analyze the patterns in Fig. 7 more intuitively from the perspective of a typical two-player zero-sum game. However, the score function may overlook certain intrinsic characteristics of IVIFSs; as a result, it can be used only to analyze the patterns of the Nash equilibria and cannot provide a definitive Nash equilibrium result for an IVIFS game.

Fig. 9 The payoffs versus \(q_D\), given \(\theta =0.7\) and \(q_A=0.2\), represented by the score values obtained by applying the score function defined in Eq. (1). The x-axis represents different values of \(q_D\), while the y-axis represents the payoffs under different strategy profiles. The payoffs are obtained by Eq. (27).

We take the instance \(\theta =0.7\) and \(q_A=0.2\) as a representative case. By employing the score function in Eq. (1), the payoffs under the different strategy profiles in Fig. 8(f) can be transformed into the score values depicted in Fig. 9.

$$\begin{aligned} S_{ij}\left( \tilde{\xi }_{ij}\right) = \frac{1}{4}\Big [&\left( \mu _{ij}^L(x)\left( 2-\mu _{ij}^U(x)-\nu _{ij}^U(x)\right) +\mu _{ij}^U(x)\left( 2-\mu _{ij}^L(x)-\nu _{ij}^L(x)\right) \right) \left( 2-\nu _{ij}^L(x)-\nu _{ij}^U(x)\right) \\ &-\left( \nu _{ij}^L(x)\left( 2-\mu _{ij}^U(x)-\nu _{ij}^U(x)\right) +\nu _{ij}^U(x)\left( 2-\mu _{ij}^L(x)-\nu _{ij}^L(x)\right) \right) \left( 2-\mu _{ij}^L(x)-\mu _{ij}^U(x)\right) \Big ]\end{aligned}$$
(27)

In a zero-sum game with payoffs represented by score values, we can construct a \(2 \times 2\) payoff matrix as shown in Fig. 10. We set the probability of the attacker adopting strategy \(A_1\) to x and the probability of the attacker adopting strategy \(A_2\) to \((1-x)\). Similarly, the defender’s probability of adopting strategy \(D_1\) is y, and the probability of adopting strategy \(D_2\) is \((1-y)\).

Fig. 10 A crisp two-player zero-sum game with two strategies for both the attacker and the defender, illustrated by a \(2\times 2\) payoff matrix.

It has been shown that the games in Fig. 7 do not have pure-strategy Nash equilibria; instead, they exhibit mixed-strategy Nash equilibria. From Eqs. (15) and (16), it is not difficult to deduce that in the mixed-strategy Nash equilibrium, the probabilities of adopting \(A_1\) and \(D_1\) are:

$$\begin{aligned} x=\frac{S_{21}-S_{22}}{S_{12}+S_{21}-S_{11}-S_{22}} \end{aligned}$$
(28)

and

$$\begin{aligned} y=\frac{S_{12}-S_{22}}{S_{12}+S_{21}-S_{11}-S_{22}} \end{aligned}$$
(29)

To analyze the impact of IVIFS payoffs on the patterns of Nash equilibria more conveniently, we can convert Eqs. (28) and (29) into:

$$\begin{aligned} x=\frac{1}{1+\frac{S_{12}-S_{11}}{S_{21}-S_{22}}} \end{aligned}$$
(30)

and

$$\begin{aligned} y=\frac{1}{1+\frac{S_{21}-S_{11}}{S_{12}-S_{22}}} \end{aligned}$$
(31)
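
As a quick numerical check of Eqs. (28)-(31), the mixed-strategy equilibrium of the score-valued 2 x 2 game of Fig. 10 can be computed directly; the score values below are illustrative, not taken from Fig. 9.

```python
def mixed_equilibrium_2x2(S11: float, S12: float, S21: float, S22: float):
    """Probability x of the attacker playing A1 and y of the defender playing D1,
    per Eqs. (28)-(29); Eqs. (30)-(31) are algebraically equivalent forms."""
    denom = S12 + S21 - S11 - S22
    x = (S21 - S22) / denom      # Eq. (28), equivalently Eq. (30)
    y = (S12 - S22) / denom      # Eq. (29), equivalently Eq. (31)
    return x, y

x, y = mixed_equilibrium_2x2(S11=-0.9, S12=0.6, S21=0.4, S22=-0.9)
```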

From Fig. 9, it can be observed that \(S_{11}\) and \(S_{21}\) increase monotonically with \(q_D\). Conversely, \(S_{12}\) decreases monotonically with \(q_D\). However, \(S_{22}\) remains relatively stable and shows minimal variation as \(q_D\) increases.

From Eqs. (30) and (31), it is evident that in Nash equilibria, the adoption probability of the HS for the attacker increases as \(q_D\) increases, while the adoption probability of the HS for the defender decreases as \(q_D\) increases. This is consistent with the patterns of the Nash equilibria shown in Fig. 7. For the other instances depicted in Fig. 7, the patterns of the Nash equilibria can be elucidated by using the aforementioned method.

Comparison analysis

To demonstrate the advantages of interval-valued intuitionistic fuzzy games in infrastructure networks with uncertain environments, we use the existing two-player zero-sum game model under crisp conditions in the following experiment [34]. In this model, we assume that \(\Psi \) and \(\Gamma \) are weighted equally and that the payoffs of the game are given by \(\frac{\Psi _{ij} +\Gamma _{ij}}{2}\), where \(\Psi _{ij}\) and \(\Gamma _{ij}\) are obtained from Eqs. (9) and (10), respectively. We calculate the Nash equilibrium through the minimax theorem [42].

Fig. 11 The Nash equilibrium of the attacker and defender with \(\theta =0.3\), \(\theta =0.5\) and \(\theta =0.7\) when the crisp payoffs are expressed by \(\frac{\Psi _{ij} +\Gamma _{ij}}{2}\). The network in this experiment is the same as in Fig. 6. In each subfigure, the x-axis represents different values of \(q_D\), while the y-axis represents different values of \(q_A\). The colors within the blocks depict the probabilities of the HS in the mixed-strategy Nash equilibrium.

According to Figs. 6 and 11, compared with the model proposed in this paper, utilizing the two-player zero-sum game model under crisp conditions can lead to different equilibrium results.

From Eqs. (17) and (18), we can derive the expected IVIFS payoffs of the attacker when employing the strategy profile \(\left( p^A,p^D\right) \) as Eq. (32).

$$\begin{aligned}&\Delta \left( p^A, p^D\right) \nonumber \\&\quad = \left\langle \left[ 1-\prod _{j=1}^n \prod _{i=1}^m\left( \left( 1-\mu _{i j}^L\right) ^\lambda \right) ^{p_i^A p_j^D},\right. \right. \nonumber \\&\qquad \left. 1-\prod _{j=1}^n \prod _{i=1}^m\left( \left( 1-\mu _{i j}^U\right) ^\lambda \right) ^{p_i^A p_j^D}\right] ,\nonumber \\&\qquad \left. \left[ \prod _{j=1}^n \prod _{i=1}^m\left( \nu _{i j}^L\right) ^{p_i^A p_j^D}, \prod _{j=1}^n \prod _{i=1}^m\left( \nu _{i j}^U\right) ^{p_i^A p_j^D}\right] \right\rangle \end{aligned}$$
(32)

The defender’s expected IVIFS payoff is the complement of \(\Delta \left( p^A, p^D\right) \), obtained by interchanging the membership and nonmembership intervals, as given in Eq. (33).

$$\begin{aligned}&\hat{\Delta }\left( p^A, p^D\right) \nonumber \\&\quad = \left\langle \left[ \prod _{j=1}^n \prod _{i=1}^m\left( \nu _{i j}^L\right) ^{p_i^A p_j^D}, \prod _{j=1}^n \prod _{i=1}^m\left( \nu _{i j}^U\right) ^{p_i^A p_j^D}\right] ,\right. \nonumber \\&\qquad \left[ 1-\prod _{j=1}^n \prod _{i=1}^m\left( \left( 1-\mu _{i j}^L\right) ^\lambda \right) ^{p_i^A p_j^D},\right. \nonumber \\&\qquad \left. \left. 1-\prod _{j=1}^n \prod _{i=1}^m\left( \left( 1-\mu _{i j}^U\right) ^\lambda \right) ^{p_i^A p_j^D}\right] \right\rangle \end{aligned}$$
(33)

We denote the Nash equilibrium obtained in the interval-valued intuitionistic fuzzy scenario by \(\left( p^{A*}, p^{D*}\right) \) (see Fig. 6), and the Nash equilibrium obtained in a crisp environment by \(\left( p^{A_c}, p^{D_c}\right) \) (see Fig. 11).

In order to demonstrate the advantages of the game with IVIFS payoffs in uncertain environments, we consider the Nash equilibria \(\left( p^{A*}, p^{D*}\right) \) and \(\left( p^{A_c}, p^{D_c}\right) \). Based on the principle of the Nash equilibrium, no player has an incentive to unilaterally deviate from their chosen strategy, given the strategies chosen by the others [41]. Therefore, if \(\left( p^{A*}, p^{D*}\right) \) outperforms \(\left( p^{A_c}, p^{D_c}\right) \), the attacker’s expected payoff under the strategy profile \(\left( p^{A*}, p^{D*}\right) \) will be greater than that under the strategy profile \(\left( p^{A_c}, p^{D*}\right) \); similarly, the defender’s expected payoff under the strategy profile \(\left( p^{A*}, p^{D*}\right) \) will surpass that under the strategy profile \(\left( p^{A*}, p^{D_c}\right) \). The score function in Eq. (1) is used to evaluate the expected IVIFS payoffs, and we can obtain \(S\left( \Delta \left( p^{A_c}, p^{D*}\right) \right) \) and \(S\left( \hat{\Delta }\left( p^{A*}, p^{D_c}\right) \right) \). We then compare these values with \(S\left( \Delta \left( p^{A*}, p^{D*}\right) \right) \) and \(S\left( \hat{\Delta }\left( p^{A*}, p^{D*}\right) \right) \) to validate the advantage of the Nash equilibrium \(\left( p^{A*}, p^{D*}\right) \). As an example, we select a representative scenario where \(\theta =0.7\) and \(q_D=0.9\); the scores of the expected payoffs in different cases are depicted in Table 1.
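
The comparison can be scripted by evaluating Eq. (32) as printed (including the \(\lambda \) exponent on the membership terms) and then applying the score function of Eq. (1) to the expected payoffs under the different equilibrium combinations; the strategy vectors and payoff entries below are hypothetical placeholders rather than the values behind Table 1.

```python
import numpy as np

def ivifs_score(mu_L, mu_U, nu_L, nu_U):
    """Score function of Eq. (1)."""
    pos = (mu_L * (2 - mu_U - nu_U) + mu_U * (2 - mu_L - nu_L)) * (2 - nu_L - nu_U)
    neg = (nu_L * (2 - mu_U - nu_U) + nu_U * (2 - mu_L - nu_L)) * (2 - mu_L - mu_U)
    return (pos - neg) / 4.0

def expected_attacker_payoff(mu_L, mu_U, nu_L, nu_U, p_A, p_D, lam=0.5):
    """Attacker's expected IVIFS payoff Delta(p^A, p^D), following Eq. (32) as printed."""
    w = np.outer(p_A, p_D)                       # weights p_i^A * p_j^D
    m_L = 1 - np.prod(((1 - mu_L) ** lam) ** w)
    m_U = 1 - np.prod(((1 - mu_U) ** lam) ** w)
    n_L = np.prod(nu_L ** w)
    n_U = np.prod(nu_U ** w)
    return m_L, m_U, n_L, n_U

# Hypothetical payoff entries and equilibrium strategies for illustration only.
mu_L = np.array([[0.0, 0.70], [0.55, 0.0]]); mu_U = np.array([[0.0, 0.85], [0.65, 0.0]])
nu_L = np.array([[1.0, 0.03], [0.09, 1.0]]); nu_U = np.array([[1.0, 0.08], [0.12, 1.0]])
p_A_star, p_D_star, p_A_c = [0.45, 0.55], [0.50, 0.50], [0.60, 0.40]
s_star = ivifs_score(*expected_attacker_payoff(mu_L, mu_U, nu_L, nu_U, p_A_star, p_D_star))
s_cross = ivifs_score(*expected_attacker_payoff(mu_L, mu_U, nu_L, nu_U, p_A_c, p_D_star))
# Compare s_star with s_cross in the manner of Table 1.
```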

Table 1 Different equilibrium strategy combinations and corresponding expected payoff scores for the attacker and defender, when \(\theta =0.7\), \(q_{D}=0.9\) are fixed while \(q_{A}\) varies

According to Table 1, regardless of the variation in \(q_A\), we consistently have \(S\left( \hat{\Delta }\left( p^{A*}, p^{D*}\right) \right) >S\left( \hat{\Delta }\left( p^{A*}, p^{D_c}\right) \right) \) and \(S\left( \Delta \left( p^{A*}, p^{D*}\right) \right) >S\left( \Delta \left( p^{A_c}, p^{D*}\right) \right) \). This implies that when the attacker or defender unilaterally modifies the strategy profile from \(\left( p^{A*}, p^{D*}\right) \) to \(\left( p^{A_c}, p^{D*}\right) \) or \(\left( p^{A*}, p^{D_c}\right) \), the corresponding expected payoff score decreases. According to the concept of the Nash equilibrium [41], it can be inferred that in an uncertain environment, employing IVIFSs to represent the payoffs of the attacker and defender results in a superior Nash equilibrium compared to the conventional approach of using crisp payoffs.

Conclusions and discussion

Critical infrastructure networks are of vital importance in modern society. Thus, their protection is a matter of great concern. In this paper, we consider fuzziness and uncertainty when analysing attack and defense games in infrastructure networks. We employ IVIFS theory to elucidate the payoffs of the attacker and defender, and five main contributions are made:

  1. We propose a model for attack and defense games in infrastructure networks based on IVIFS theory. The payoffs of the game are evaluated as IVIFSs to demonstrate the impact of the attack on the target network more comprehensively.

  2. We use two nonlinear programs to determine the Nash equilibria of the game with IVIFS payoffs. We examine the variation patterns of the Nash equilibria and find that the cost sensitivity parameters play a crucial role in determining them.

  3. We further study how the patterns of the Nash equilibria vary with respect to the IVIFS payoffs, and we provide mathematical explanations.

  4. We find that compared to the existing attack and defense model with crisp payoffs, the model proposed in this paper leads to a superior Nash equilibrium.

  5. Our study provides valuable insights for selecting effective strategies to protect infrastructure networks in uncertain environments. For example, in rail transit network security, the effects of attacking different stations are represented using IVIFSs. By incorporating these effects into the model proposed in this paper, we can derive the Nash equilibrium results, which represent the optimal defense strategy for the security department.

The study of attack and defense games in infrastructure networks has been extensively explored, and the development of fuzzy theory has become increasingly diverse. Given that the model proposed in this paper represents a preliminary attempt to integrate fuzzy theory with attack and defense games in complex networks, the focus is on establishing the model and investigating the Nash equilibrium. Nevertheless, the proposed model has several limitations:

  1. For more complex scenarios, the ranges of values for the membership and nonmembership degrees in IVIFS theory may be limited and thus may fail to effectively reflect the decision-maker’s preferences.

  2. The model proposed in this paper lacks integration with real-world infrastructure network cases, thereby limiting the validation of its effectiveness from an application perspective. Furthermore, it is essential to adjust the model based on specific application scenarios.

  3. We only consider a limited set of attack and defense strategies, which may not fully capture the complexity of decision-makers’ strategies.

In the future, we plan to extend this work in the following directions:

  1. Numerous methodologies exist for extending the range of membership and nonmembership values, such as SR-fuzzy sets [1], (2,1)-fuzzy sets [2], and (m,n)-fuzzy sets [3]. In future research, these theories can be applied to accommodate the varying preferences of decision-makers.

  2. The model proposed in this paper can be applied to practical infrastructure network scenarios, such as aviation networks, rail transit networks, and power networks. By adapting the model to the specific characteristics of different networks, its applicability can be enhanced.

  3. We intend to expand the strategy sets and investigate effective fuzzy optimization algorithms, such as those proposed in [39, 47], for addressing the IVIFS payoff matrix.