Introduction

As Internet of Things (IoT) technology evolves, users are becoming increasingly interconnected with their electronic devices [1]. The advent of new wireless networks such as the fifth-generation (5G) network, the proliferation of smart devices, and the heavy use of diverse applications such as video streaming, online gaming, and virtual reality have resulted in a profound surge in video traffic. According to a Cisco report [2], video traffic was predicted to account for 79% of all Internet traffic worldwide by 2022. The extensive prevalence of video traffic and the stringent quality of experience (QoE) requirements have put tremendous backhaul pressure on networks [3]. Therefore, minimizing network resource consumption during transmission while simultaneously satisfying user demand has become one of the most critical concerns of network operators [4].

In traditional cloud environments, the service process requires moving data to remote data centers for centralized computing and storage. This leads to high network transmission latency, which can degrade the performance of mobile applications. To address this problem and provide reliable services for latency-sensitive applications, researchers have explored deploying small-scale cloud servers at the edge so that these edge cloud servers can provide resources closer to edge IoT devices [5,6,7]. Edge cloud servers are equipped with finite resources and can deliver bandwidth-optimized services at the edge, enabling fast and immediate service provision [8, 9]. The multi-clouds architecture, comprising a remote cloud and edge clouds, is a promising paradigm to improve user QoE and reduce energy consumption [10, 11], since it facilitates ubiquitous caching and efficient content delivery for end users.

During the content request phase, the network searches for the requested content upon receiving a user’s request. To alleviate traffic congestion, edge caching is an efficient way to cache popular files on edge cloud servers closer to their requesters; it addresses the problem of which content should be cached in the edge cloud [12]. Recent studies have substantiated the effectiveness of collaborative caching, which has attracted considerable attention. Collaborative caching allows edge clouds to collectively distribute content through internal connections. Song et al. [13] presented an adaptive cooperative caching scheme that incorporates an enhanced quantum genetic algorithm to address the energy-delay tradeoff problem. Zhang et al. [14] proposed a spatially cooperative caching strategy for a two-tier heterogeneous network, which aims to minimize storage used for duplicated content while maximizing the likelihood of successful content retrieval (hit probability).

During the content delivery phase, traditional unicast mechanisms for distributing content from the remote cloud to edge clouds and user equipment (UEs) result in inefficient delivery. Multicasting, on the other hand, can leverage the available network bandwidth to deliver the same content to multiple receivers, benefiting from the similarity of content preferences among users in close geographic proximity. This mechanism reduces the traffic generated during delivery by sending the requested file through a single multicast rather than multiple unicasts [15]. Significant efforts have been devoted to video coding and multicast transmission [16,17,18,19]. For instance, Guo et al. [18] proposed a layer-based multi-quality multicast beamforming scheme based on scalable video coding. Wu et al. [19] designed an adaptive video streaming scheme using named data network multicast. However, while addressing video coding and multicast transmission, these algorithms did not consider integrating coded multicasting with caching in a cooperative environment.

Intuitively, caching reduces latency and network bandwidth consumption by serving frequently requested content locally at the edge clouds [10, 20]. Multicasting further reduces bandwidth usage by efficiently delivering popular content to multiple users simultaneously, especially when there are concurrent requests for the same content. Jointly considering caching and multicasting can enhance overall network performance and resource utilization by dynamically allocating caching and multicasting resources based on real-time user demand and network conditions. This adaptive strategy optimizes content availability and delivery efficiency, leading to an improved user experience; notably, it facilitates the deployment of various latency-sensitive applications and services [21, 22]. In the context of large-scale cache-enabled wireless networks, Jiang et al. [23] applied an iterative numerical algorithm to analyze and optimize caching and multicasting. Various coded multicasting mechanisms have been proposed in different scenarios [24,25,26,27]. Nevertheless, in large-scale cooperative caching scenarios, finding a balance between edge caching and multicasting to improve resource efficiency remains challenging.

In this paper, we exploit the benefits of mobile edge caching with multicasting in the multi-clouds environment to reduce network transmission consumption. We investigate collaborative caching among different edge clouds to effectively adapt to dynamic edge environments. We propose a multi-agent deep reinforcement learning (DRL)-based approach for COoperative video CAching and Multicasting, named COCAM, to minimize the average transmission number and thereby enhance video delivery efficiency. Our main contributions are summarized as follows:

  • We investigate the cooperative video edge caching and multicasting issue to reduce the transmission number in the multi-clouds scenario. Moreover, we present the problem formulation as a multi-agent Markov decision process (MDP).

  • A novel multi-agent actor-critic algorithm is designed to address the formulated MDP. Specifically, each agent learns a local caching strategy and further encompasses the observations of neighboring agents as constituents of the overall state. Multiple agents work in collaboration to efficiently adapt to the dynamic network environment.

  • Extensive trace-driven simulations demonstrate that our proposed algorithm outperforms other baselines in terms of video transmission number.

The rest of this paper is organized as follows. In Related work section, we introduce the related works. System model and problem formulation section presents the system model and problem formulation. The details of the COCAM approach are presented in The COCAM approach section. We compare the experimental performance and analyze the results in Performance evaluation section. Conclusion section concludes the paper.

Related work

Caching algorithms

Edge caching stores popular content locally on edge clouds, allowing them to deliver the requested content directly to users, which significantly reduces network latency and resource consumption. Li et al. [28] investigated a cost-effective greedy algorithm that accounts for different video characteristics to optimize mobile edge cache placement for QoE-driven dynamic adaptive video streaming. Tran et al. [29] proposed a federated collaborative caching and processing framework based on integer linear programming to accommodate adaptive bitrate video streams in mobile edge computing networks. The caching decision process in wireless communication networks can be represented as an MDP, and reinforcement learning has been commonly employed in this domain. Based on a multi-agent framework, Wang et al. [30] proposed a deep actor-critic reinforcement learning algorithm for the dynamic control of caching decisions, enabling each edge to learn an optimal policy through self-adaptation. However, existing research primarily focused on content caching policies and did not consider the content delivery process.

Multicasting algorithms

Multicast transmission is extensively utilized in edge networks, where it enhances network performance by reducing bandwidth usage, routing overhead, and cost [31]. Damera et al. [32] constructed a new architectural model that delivers the required content to users via multicell transmission, improving the signal-to-noise ratio; their optimized MEC scheduling algorithm outperformed the existing model. Zahoor et al. [33] proposed an enhanced eMBMS network architecture, built on network function virtualization (NFV) and MEC, to address the significant limitations of the standard eMBMS architecture; the proposed architecture allows the multicasting of crowdsourced live streams. Ren et al. [15] considered the fundamental issues of NFV-enabled multicast in mobile edge clouds and designed a heuristic algorithm. Qin et al. [34] studied multicast traffic for IoT applications in edge networks under the delay-oriented network slicing problem. Nevertheless, these works focused on network architectures and multicast protocols without integrating them with practical applications on edge cloud servers.

Joint caching and multicasting algorithms

Multicast transmission at the base station, which serves distinct user requests for the identical file concurrently, is recognized as a highly effective approach for delivering extensive content over wireless networks and for meeting the constantly increasing demand for content transmission. Maddah-Ali et al. [35] used the joint encoding of multiple files and the multicasting feature of downlink channels to optimize content placement and delivery under coded multicast; they also evaluated the caching gain and demonstrated that the joint optimization problem could improve it. Liao et al. [36] jointly exploited the benefits of multicast content delivery and collaborative content sharing to develop a compound caching technique (multicast-aware cooperative caching). He et al. [37] designed partial caching bulk transmission and partial caching pipelined transmission to reduce the delivery latency of cache-enabled multi-group multicast networks. Somuyiwa et al. [38] combined proactive caching and multicast transmission to model the single-user multi-request problem as an MDP and used a DRL approach to solve it. Since traditional approaches struggle to adapt to the highly diverse and dynamic environment of multi-clouds cooperative caching, we propose the COCAM framework to minimize traffic consumption during the video delivery phase.

System model and problem formulation

In this section, we introduce the cooperative video edge caching and multicasting model and give concrete definitions. Then, we state the corresponding cache decision-making problem. For convenience, we summarize some key modeling parameters and notations in Table 1.

Table 1 Summary of important notations

Network model

We consider the multi-clouds system, which consists of three layers: the remote cloud layer, the edge cloud layer, and the UE layer. We assume that the remote cloud provides all the requested video files \(\mathcal F=\left\{ 1,2,\cdots ,F\right\}\). Since video services generally fragment a video into equally sized chunks, we assume all files are unit-sized. The set of edge cloud servers is denoted as \(\mathcal N=\left\{ 1,2,\cdots ,N\right\}\), and the set of request time slots as \(\mathcal T=\left\{ 1,2,\cdots ,T\right\}\). At each time slot t, the edge clouds receive requests and determine caching decisions. The request received by edge cloud n for file f is denoted as \(q_{t,n}^f \in \{0,1\}\), where \(q_{t,n}^f = 1\) represents a request for file f and \(q_{t,n}^f = 0\) signifies no request. A variable \(x_{t,n}^f\) denotes the transmission decision, i.e., whether the requested video f is transmitted from the remote cloud to edge cloud n at time t. If not, we have \(x_{t,n}^f=0\), and \(x_{t,n}^f\in (0,1]\) otherwise. \(x_{t,n}^f=1\) means a transmission channel is fully occupied by edge cloud n, which occurs only under unicast. Otherwise, if multiple edge cloud servers share a channel under one of the multicast schemes, we assume these edge clouds share the channel equally.

Caching model

At each time t, we assume that only one UE under edge cloud server n requests a video. For each UE, if the requested video has been cached in the serving edge cloud, the edge cloud server can deliver it to the UE directly. Otherwise, the edge cloud server requests the file from the remote cloud.

Each edge cloud has the same maximum capacity C. We use a binary variable \(y_{t,n}^f\) to indicate whether the requested video f has been stored in edge cloud n at time t. If yes, we have \(y_{t,n}^f=1\), and 0 otherwise. Each server stores content limited to its maximum storage capacity:

$$\begin{aligned} \sum _{f\in \mathcal {F}} y_{t,n}^f \le C. \end{aligned}$$
(1)

After the edge cloud obtains the requested video, it decides whether to cache the content. If the storage capacity is not full, the video is stored directly; otherwise, the cache is updated according to the caching policy.
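As a concrete illustration, the following minimal Python sketch captures this admission/replacement rule; the function and variable names are hypothetical, and the replacement index corresponds to the agent action defined later in the MDP formulation:

```python
def update_cache(cache, requested_video, capacity, replace_idx):
    """Cache update rule: store directly if there is free space,
    otherwise replace the slot chosen by the caching policy.
    `replace_idx` follows the action definition used later:
    0 means "do not cache", v >= 1 replaces the v-th cached item."""
    if requested_video in cache:
        return cache                      # already cached, nothing to do
    if len(cache) < capacity:             # free space: store directly
        cache.append(requested_video)
    elif replace_idx > 0:                 # full: evict the chosen slot
        cache[replace_idx - 1] = requested_video
    return cache
```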

Transmission model

The remote cloud delivers videos to the requesting edge clouds. Figure 1 illustrates the four schemes in our cooperative transmission model, described as follows:

  • Localcast (LC): If the requested video has been cached in the local edge cloud server at time t, the UE can fetch it from the edge cloud directly without requesting it from the remote cloud. We use \(\mathcal {N}_{LC}=\{n|y_{t,n}^f=1,\forall n \in \mathcal N,\forall f \in \mathcal F\}\) to denote the set of edge clouds whose UEs can obtain videos at time t through the LC scheme. We have:

    $$\begin{aligned} x_{t,n}^f=0,\forall n \in \mathcal {N}_{LC}, \end{aligned}$$
    (2)

    as shown in the LC part of Fig. 1, where \(N_1\) requests \(f_1\), which is already stored in \(N_1\).

  • Multicast (MC): If the requested video has not been cached in the edge cloud, the edge cloud requests the file from the remote cloud. If other edge clouds request the same video f at time t, these edge clouds can obtain the requested video f together through the MC scheme. We use \(\mathcal {N}_{MC}^f =\{n|q_{t,n}^{f}=1, y_{t,n}^f=0, \forall n \in \mathcal N \setminus \mathcal {N}_{LC} \}\) to denote the set of edge clouds that can use multicast transmission to obtain the requested video f. We have:

    $$\begin{aligned} \sum _{n \in {\mathcal {N}_{MC}^f}}{x_{t,n}^f}=1, \end{aligned}$$
    (3)

    as shown in the MC part of Fig. 1, where \(N_2\) and \(N_3\) simultaneously request \(f_2\), which neither has cached.

  • XOR-cast (XC): We form a special edge cloud set named exclusive OR (XOR) set where each edge cloud in the set stores the video files requested by the other edge clouds. We denote this set as:

    $$\begin{aligned} \begin{array}{r} \mathcal {N}_{X C}^G=\left\{ n \mid q_{t, n}^f=1, y_{t, n}^f=0, y_{t, n^{\prime }}^f=1,\right. \\ \left. \forall n \in \mathcal {N} \backslash \left( \mathcal {N}_{L C} \cup \mathcal {N}_{M C}^f\right) , \forall n^{\prime } \in \mathcal {N}_{X C}^G \setminus n, \forall f \in G\right\} , \end{array} \end{aligned}$$
    (4)

    where the video set through the XC scheme can be denoted as:

    $$\begin{aligned} \begin{array}{r} G=\left\{ f \mid q_{t, n}^f=1, y_{t, n}^f=0, y_{t, n}^{f^{\prime }}=1,\right. \\ \left. \forall n \in \mathcal {N} \backslash \left( \mathcal {N}_{L C} \cup \mathcal {N}_{M C}^f\right) , \forall f^{\prime } \in G\setminus f\right\} . \end{array} \end{aligned}$$
    (5)

    The XOR set receives the XOR-encoded bit stream by one transmission. Then, each edge cloud restores its video by decoding the received bit stream with the contents stored in its cache. We have:

    $$\begin{aligned} \sum _{n \in \mathcal {N}_{X C}^G} \sum _{f \in G} x_{t, n}^f=1, \end{aligned}$$
    (6)

    as shown in the XC part of Fig. 1, where \(N_4\) and \(N_5\) simultaneously request \(f_5\) and \(f_4\), each of which is cached not by the requester itself but by the other; a minimal encode/decode sketch follows this list. We denote the coded XOR information as f. If there are multiple XC combinations, we choose the combination that generates the smallest number of XC sets with the participation of the same number of edge clouds. This preference is based on the effectiveness of our proposed XC approach in significantly reducing internal energy consumption compared to unicast transmission. While this paper does not explicitly model the energy consumption of XOR operations, such operations still entail a non-negligible energy overhead. Given a fixed number of edge clouds, our objective is therefore to minimize the number of XC combinations to mitigate the impact of XOR energy consumption.

  • Unicast (UC): When the relationship between the requests from edge clouds and the cache lists does not satisfy any of the above cases, edge clouds fetch videos directly from the remote cloud by establishing dedicated transmission channels. We denote the UC set as \(\mathcal {N}_{U C}=\left\{ \mathcal {N} \backslash \left( \mathcal {N}_{L C} \cup \mathcal {N}_{M C}^f \cup \mathcal {N}_{X C}^G\right) \right\}\). We have:

    $$\begin{aligned} {x_{t,n}^f}=1,\forall n \in \mathcal {N}_{UC}, \end{aligned}$$
    (7)

    as shown in the UC part of Fig. 1, where the edge cloud fetches the content from the remote cloud.
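To make the XC scheme concrete, here is a minimal, self-contained Python sketch of the XOR encode/decode step; the byte payloads stand in for video chunks and are purely illustrative:

```python
import numpy as np

# Minimal XOR-cast sketch: N4 requests f5 but caches f4;
# N5 requests f4 but caches f5 (as in the XC part of Fig. 1).
f4 = np.frombuffer(b"video-chunk-f4", dtype=np.uint8)
f5 = np.frombuffer(b"video-chunk-f5", dtype=np.uint8)

coded = np.bitwise_xor(f4, f5)      # one multicast transmission of f4 XOR f5

recovered_at_N4 = np.bitwise_xor(coded, f4)   # N4 decodes with its cached f4
recovered_at_N5 = np.bitwise_xor(coded, f5)   # N5 decodes with its cached f5

assert bytes(recovered_at_N4) == b"video-chunk-f5"
assert bytes(recovered_at_N5) == b"video-chunk-f4"
```

A single coded transmission thus serves both edge clouds, whereas unicast would require two.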

Fig. 1 System model

To deliver all the data with fewer transmissions, we use a network coding technique: the transmitted content is encoded at the network nodes and decoded at the destination. Specifically, we adopt XOR coding among edge clouds that have not cached their own requested video but have cached the videos requested by other edge clouds. The caching policy determines what is cached in each edge cloud, and the remote cloud then classifies each transmission based on the cache states of the edge clouds. According to the above four cases, we formulate the joint multicast transmission and cache replacement problem, which aims to minimize the total number of transmissions from the remote cloud to the edge clouds, as:

$$\begin{aligned} \min \quad{} & {} \sum _{n\in \mathcal {N}}\sum _{f\in \mathcal {F}}\sum _{t\in \mathcal {T}}{\frac{q_{t,n}^{f}x_{t,n}^f}{N}}\end{aligned}$$
(8)
$$\begin{aligned} s.t. \quad{} & {} (1), (2), (3), (6), (7)\end{aligned}$$
(9)
$$\begin{aligned} \quad{} & {} 0 \le x_{t,n}^f \le 1-y_{t,n}^f,\end{aligned}$$
(10)
$$\begin{aligned} \quad{} & {} {y_{t,n}^f}\in \{0,1\}. \end{aligned}$$
(11)

The COCAM approach

Our problem is a mixed integer programming (MIP) problem [22], which is strictly NP-hard. Solving MIP problems with traditional computational methods is challenging in practical caching systems owing to low computational efficiency. We therefore adopt a learning approach and explore collaboration between different edge cloud servers with a multi-agent reinforcement learning algorithm to better adapt to dynamic edge environments.

In this section, each edge cloud operates as an independent agent while maintaining a cooperative relationship with the other edge clouds. We model the cache decision-making problem as a multi-agent extension of the MDP and introduce a novel multi-agent actor-critic-based caching approach, which aims to minimize the average number of transmissions during the request process. An MDP is a mathematical framework for modeling sequential decision-making, consisting of states, actions, transition probabilities, and rewards. In multi-agent reinforcement learning, based on the state and the reward from the environment, each agent executes an action according to its own strategy, and the environment then transitions to a new state; each agent learns the optimal decision sequence through continuous interaction with the environment. We define the basic elements of our multi-agent MDP as follows:

State

The state of agent n at time t is denoted as \({{s}_{t,n}}=\{{y_{t,n}},q_{t,n}^{f}\}\), where \(q_{t,n}^{f}\) indicates the current request demands and \(y_{t,n}=\{y_{t,n}^f\}_{\forall f \in \mathcal {F}}\) denotes the caching state of edge cloud n. We define the neighborhood that agent n can observe as \(\mathcal N_n\) and use \(\pi _{t,n}\) to denote the policy of agent n; the policies of the neighboring agents of agent n are thus denoted as \(\pi _{t,\mathcal N_n}\). Each agent can observe the states and policies of its neighborhood. Therefore, the joint state of agent n fed into the input network is \(\hat{s}_{t,n} =\{{s}_{t,m}\}_{\forall m \in \{n,\mathcal N_n\} }\).

Action

An agent decides which video should be replaced in the cache list based on its policy. We denote the action of agent n as \({a}_{t,n}=v\), where \(v\in \{0,1,2,\cdots ,C\}\). If \(v=0\), the requested video is not cached; otherwise, the v-th content in the cache space of edge cloud n is replaced by the currently requested video.

Reward

The goal is to minimize the average transmission number. We define the negative value of the transmission number as the reward:

$$\begin{aligned} r_{t,n} =- \sum _{f\in \mathcal {F}}\frac{q_{t,n}^{f}x_{t,n}^f}{N}. \end{aligned}$$
(12)

So the global reward is calculated as:

$$\begin{aligned} r_{t}=\sum _{n\in \mathcal {N}}{r_{t,n}}. \end{aligned}$$
(13)

Network architecture

As shown in Fig. 2, each agent consists of two parts: an actor network (parameterized by \(\theta\)) and a critic network (parameterized by \(\omega\)). The actor network receives environmental states as input and generates corresponding action outputs, aiming to learn an optimal policy \(\pi _{\theta _n}\) that maximizes the expected return associated with accumulated rewards. The critic network serves as a value function estimator, evaluating the quality of actions chosen by the actor network in a given state; its objective is to learn a value function \(V_{\omega _n}\) that estimates the expected return based on the current state and the actions selected by the actor network. The actor network consists of two fully connected hidden layers with ReLU activation functions, whose dimensions are determined by the cache-dependent state size, and an output layer that is a fully connected layer with a hyperbolic tangent (tanh) activation function. The critic network shares the same hidden architecture, and its output layer consists of a single unit with a linear activation. Each of the actor and critic has a primary network and a target network with identical structure; we use the target network to improve the stability and convergence of training. After the primary network has been updated a certain number of times, its parameters are used to update the parameters of the target network.
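A possible PyTorch realization of this architecture is sketched below; the hidden width (256) and the softmax used to turn the tanh outputs into a sampling distribution are our assumptions, not specified in the text:

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Two ReLU hidden layers and a tanh output layer, as described
    above. The hidden width is an illustrative choice."""
    def __init__(self, state_dim, num_actions, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions), nn.Tanh(),
        )

    def forward(self, state):
        logits = self.net(state)
        # Distribution over the C+1 cache actions; treating the tanh
        # outputs as softmax logits is our assumption.
        return torch.distributions.Categorical(logits=logits)

class Critic(nn.Module):
    """Same hidden structure as the actor; single linear output unit."""
    def __init__(self, state_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state):
        return self.net(state).squeeze(-1)
```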

Fig. 2 The COCAM approach

Agents obtain their policies \(\pi\) from their actor networks. The actor network is a function that seeks the optimal policy \(\pi _{t,n}=\pi _{\theta _n}(a_{t,n}|\hat{s}_{t,n},\pi _{t-1,\mathcal {N}_n})\), where \(\theta _{n}\) denotes the actor network parameters of agent n. An agent obtains its action by random sampling from the policy distribution. We denote the parameters of the critic network of agent n as \({\omega }_{n}\); thus, \(V_{{\omega }_{n}}\) denotes the value function of the critic network, trained as an estimate of the expected reward.

We formulate the expected value equation for edge cloud n as:

$$\begin{aligned} R_{t,n} =r_t+\gamma V'_{\omega _{n}}\left( \hat{s}_{t+1,n}, \pi _{t,\mathcal {N}_n}\right) , \end{aligned}$$
(14)

where \(\gamma\) denotes the discount factor and \(V'\) is computed by the target critic network. At each time t, the agent stores the experience tuple \((\hat{s}_{t,n},{a}_{t,n},{r}_{t,n}, \hat{s}_{t+1,n})\) in the replay memory B.

We use the temporal difference (TD) algorithm to update the critic network. The loss function of the critic network can be calculated as:

$$\begin{aligned} {L}\left( \omega _{n}\right) =\frac{1}{2|B|} \sum _{t}\left( R_{t,n}-V_{\omega _{n}}\left( \hat{s}_{t,n}, \pi _{t-1,\mathcal {N}_n}\right) \right) ^{2}. \end{aligned}$$
(15)

The actor network is updated by the policy gradient (PG) algorithm. The loss function of the actor network can be defined as:

$$\begin{aligned} {L}\left( \theta _{n}\right)= & {} -\frac{1}{|B|} \sum _t(\log \pi _{\theta _{n}}\left( a_{t,n} \mid \hat{s}_{t,n}, \pi _{t-1,\mathcal {N}_n}\right) \tilde{A}_{t, n}\nonumber \\{} & {} \quad -\beta \sum _{a} \pi _{\theta _{n}}\left( a \mid \hat{s}_{t,n}, \pi _{t-1,\mathcal {N}_n}\right) \log \pi _{\theta _{n}}\left( a \mid \hat{s}_{t,n}, \pi _{t-1,\mathcal {N}_n}\right) ), \end{aligned}$$
(16)

where \(\beta\) is a hyperparameter controlling the entropy term, and the advantage function \(\tilde{A}_{t, n}= R_{t,n}-V_{\omega _{n}}\left( \hat{s}_{t,n}, \pi _{t-1,\mathcal {N}_n}\right)\) is the discounted reward minus a baseline.
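The two losses translate almost directly into code; the sketch below (PyTorch, assuming the Actor and Critic modules above with a Categorical policy) is illustrative rather than the authors' implementation:

```python
import torch

def critic_loss(critic, states, targets):
    """Eq. (15): mean squared TD error over a sampled batch."""
    values = critic(states)
    return 0.5 * torch.mean((targets - values) ** 2)

def actor_loss(actor, critic, states, actions, targets, beta=0.01):
    """Eq. (16): policy-gradient loss with an entropy bonus weighted by
    beta; the advantage is the TD target minus the critic baseline."""
    dist = actor(states)
    advantage = (targets - critic(states)).detach()   # A~_{t,n}
    pg_term = dist.log_prob(actions) * advantage
    entropy = dist.entropy()                          # -sum pi log pi
    return -(pg_term + beta * entropy).mean()
```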

Then we update the target network parameters for each agent n as:

$$\begin{aligned} \theta '_n=\zeta \theta _n+(1-\zeta )\theta '_n, \end{aligned}$$
(17)
$$\begin{aligned} \omega '_n=\zeta \omega _n+(1-\zeta )\omega '_n, \end{aligned}$$
(18)

where \(\zeta\) denotes the target network update coefficient. The target networks are updated every \(\tau\) steps.
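In code, this soft update is a few lines (a sketch under the same assumptions as above):

```python
def soft_update(target_net, primary_net, zeta):
    """Eqs. (17)-(18): Polyak averaging of primary parameters into the
    target parameters, applied every tau steps."""
    for tgt, src in zip(target_net.parameters(), primary_net.parameters()):
        tgt.data.copy_(zeta * src.data + (1.0 - zeta) * tgt.data)
```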

After training is completed, each agent can select the most effective caching action according to its own observed state at each execution step.

Algorithm 1 The COCAM Algorithm

The COCAM algorithm is given in Algorithm 1. Each local agent collects experience tuples by following the current policy until enough samples are collected for batch updating (lines 8 to 15). Then a batch is sampled randomly to update the actor and critic networks (lines 17 to 20). Every \(\tau\) steps, the target networks are updated (lines 21 to 23).
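Putting the pieces together, a high-level sketch of the training loop might look as follows; the agent/environment interfaces (`env.step`, `ag.memory`, the optimizers) are hypothetical and rely on the helper functions sketched earlier:

```python
import torch

def train_cocam(agents, env, episodes, batch_size, tau, zeta, gamma):
    """Illustrative outline of Algorithm 1. Each entry of `agents`
    holds primary/target actor and critic networks, optimizers, and a
    replay memory; `env` exposes per-agent joint states and rewards."""
    step = 0
    for ep in range(episodes):
        states = env.reset()
        done = False
        while not done:
            # Each agent acts by sampling from its current policy.
            actions = {n: ag.actor(states[n]).sample()
                       for n, ag in agents.items()}
            next_states, rewards, done = env.step(actions)
            for n, ag in agents.items():
                ag.memory.push(states[n], actions[n], rewards[n],
                               next_states[n])
                if len(ag.memory) >= batch_size:
                    s, a, r, s2 = ag.memory.sample(batch_size)
                    with torch.no_grad():       # TD target, Eq. (14)
                        target = r + gamma * ag.target_critic(s2)
                    ag.critic_opt.zero_grad()
                    critic_loss(ag.critic, s, target).backward()
                    ag.critic_opt.step()
                    ag.actor_opt.zero_grad()
                    actor_loss(ag.actor, ag.critic, s, a, target).backward()
                    ag.actor_opt.step()
            if step % tau == 0:                 # periodic target sync
                for ag in agents.values():
                    soft_update(ag.target_actor, ag.actor, zeta)
                    soft_update(ag.target_critic, ag.critic, zeta)
            step += 1
            states = next_states
```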

Table 2 Simulation parameters

Performance evaluation

Experiment setup

We conduct experiments on a real-world dataset from iQIYI, which contains 300,000 individual videos watched by 2 million users over two weeks. We randomly select 10,000 records from it. Figure 3 plots video popularity in descending order of rank for the iQIYI dataset. The popularity distribution of videos exhibits notable skewness, adhering to a Zipf distribution: a small subset of highly popular videos contributes the majority of access volume, while a large number of other videos receive minimal attention. Popular videos are frequently accessed, necessitating regular updates to their cached content. Conversely, a substantial proportion of less popular videos are rarely accessed, rendering them ineffective for caching purposes; despite their limited popularity, however, these videos still contribute to users’ demand. It is therefore imperative to design an adaptive cooperative caching and multicasting strategy that captures the distribution and dynamics of video popularity. We divide the dataset into 30 edge areas based on geographic information with the K-means algorithm [39] and select 20 of them to deploy edge cloud servers (i.e., agents) that provide the video service for users. By default, we set the cache size to 50. We assume that each agent can observe the states of all the other agents from the environment. The key experimental parameters are listed in Table 2.
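For reference, a Zipf-skewed request stream similar in shape to the observed popularity can be synthesized with a few lines of Python; the exponent below is illustrative and not fitted to the iQIYI trace:

```python
import numpy as np

def sample_requests(num_files, num_requests, alpha=0.8, seed=0):
    """Draw file requests from a rank-based Zipf-like popularity
    distribution with exponent alpha (illustrative value)."""
    rng = np.random.default_rng(seed)
    ranks = np.arange(1, num_files + 1)
    probs = ranks ** (-alpha)
    probs /= probs.sum()
    return rng.choice(num_files, size=num_requests, p=probs)
```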

Fig. 3 Number of requests of a content versus its rank on the iQIYI dataset

Comparisons and results

In multicast delivery, the contents cached in different edge cloud servers are related to each other, which pushes multicast-oriented caches toward storing similar contents. In contrast, for cooperative caching, the contents cached in different units should be mutually exclusive to better utilize the limited storage space. We balance this tension between multicasting and cooperative caching with multi-agent reinforcement learning.

Figure 4 shows the variation of the transmission number of COCAM as the training episodes increase. The transmission number decreases almost linearly at the beginning and converges steadily at around 150 episodes.

Fig. 4 The values of transmission number in the training process of COCAM

According to Eq. (8), we measure the performance of our proposed algorithm by the average number of transmissions over the entire request process. The average number of transmissions reflects the efficiency of multicast transmission, which is affected by the caching decisions. A lower average number of transmissions means fewer channel resources and higher multicast transmission efficiency for the same requests, which effectively relieves network transmission pressure.

To evaluate the performance of the COCAM algorithm, we compare it with other algorithms in different cases.

Comparison with non-cooperative caching algorithms

In Fig. 5, we compare the COCAM algorithm with non-cooperative caching algorithms under cooperative transmission in terms of the number of transmissions:

  • LRU [40]: The new content replaces the cached content that has been least recently requested.

  • LFU [40]: The new content replaces the cached content that has been least frequently requested.

  • FIFO [41]: The new content replaces the cached content that was stored earliest.

  • LeCaR [42]: Chooses between LRU and LFU for each cache update, with weights adapted via a regret-minimization technique.

  • ARC [43]: Dynamically adjusts the sizes of two queues and performs cache updates based on LRU.

Fig. 5
figure 5

Performance comparison with non-cooperative caching algorithms

In these caching algorithms, each edge cloud server caches content individually based on its own decisions, without cooperative caching among the edge clouds.

Figure 5a shows the comparison under different numbers of requests, ranging from 300 to 1500. Our COCAM algorithm performs better than the other baselines, with an average improvement of 2% to 15% in the global objective. Moreover, variations in the number of requests hardly affect performance, except for the LRU algorithm; this is because LRU favors recently popular content and tends to cause cache pollution on datasets with smooth popularity distributions. Figure 5b shows the performance comparison under different edge cloud cache sizes, ranging from 30 to 90.

From Fig. 5b, it is observed that the transmission number decreases for all methods as the edge cloud cache size increases. Since requested videos are more likely to be hit locally or served by a multicast transmission as the cache capacity increases, our COCAM algorithm maintains its advantage over the other baselines.

Figure 5c shows the comparison under different numbers of edge clouds, ranging from 5 to 25, with a cache size of 50. We compare the results after 1500 requests. COCAM achieves the minimum transmission number, performing significantly better than the other algorithms and remaining stable even when there are few edge clouds.

Comparison with cooperative caching algorithms

In Fig. 6, we compare the COCAM algorithm with the A2C algorithm, which applies cooperative caching under cooperative transmission. A2C [44]: this algorithm uses a single-agent advantage actor-critic method to select the action with the best reward.

Fig. 6 Performance comparison with cooperative caching algorithms

As seen in the figure, both cooperative caching algorithm curves converge, with COCAM significantly outperforming the A2C algorithm: our proposed algorithm yields an average improvement of 4%. This is mainly because COCAM makes more intelligent decisions by learning the dynamic request pattern from the global state. The learning-based algorithm adapts well to the multicast environment and is not significantly affected by variations in the number of edge clouds. Through the cooperation of different agents, COCAM shows better and more stable performance.

Comparison of algorithms with different multicasting schemes

Figure 7 shows the performance of multicast transmission and coded transmission during the delivery phase in the cooperative caching scenario. COCAM-w/o-MC&XC: COCAM with both the MC and XC schemes disabled. COCAM-w/o-XC: COCAM with the XC scheme disabled.

Fig. 7 Performance comparison in different parts

The experimental results show that our proposed COCAM algorithm outperforms its ablated variants, confirming that the MC and XC schemes effectively reduce the transmission number. As shown in the figure, the two ablated curves are close to each other, indicating that the MC scheme is less effective on this dataset. This can be attributed to the observation that users within the same region tend to have similar request preferences, while their accesses to the same content may occur in different time slots. The XC scheme effectively leverages this pattern to achieve superior performance in the integrated caching and multicasting scenario.

Conclusion

In this paper, we have proposed a joint cache replacement and multicast transmission strategy for the multi-clouds scenario, which efficiently reduces the transmission number for video delivery. We have designed a multi-agent actor-critic algorithm named COCAM, enabling multiple edge clouds to cooperate in making intelligent caching decisions. In addition, we have conducted experiments on a real-world dataset; the evaluation results show that, through cooperation between agents, COCAM reduces the average transmission number compared with other baselines. In future work, we will further enhance the reinforcement learning algorithm to achieve improved adaptation in large-scale, resource-constrained, and bandwidth-limited multi-clouds environments.