Federated learning based on stratified sampling and regularization

Federated learning (FL) is a new distributed learning framework that differs from traditional distributed machine learning in three respects: (1) differences in communication, computing, and storage performance among devices (device heterogeneity), (2) differences in data distribution and data volume (data heterogeneity), and (3) high communication cost. Under heterogeneous conditions, client data distributions vary greatly, which slows the convergence of the training model and can prevent it from converging to the global optimal solution. In this work, an FL algorithm based on stratified sampling and regularization (FedSSAR) is proposed. In FedSSAR, a density-based clustering method divides the client population into clusters; available clients are then drawn proportionally from each cluster to participate in training, which realizes unbiased sampling over the client population and reduces the variance of the clients' aggregation weights. At the same time, when computing the local loss function, we constrain the update direction of the model with a regular term, so that heterogeneous clients are optimized toward the global optimum. We prove the convergence of FedSSAR theoretically and experimentally, and demonstrate its superiority by comparing it with other FL algorithms on public datasets.


Introduction
Federated learning is a new distributed machine learning paradigm [1,2] that allows multiple devices (called clients) to collaboratively train a global model without uploading their local data [3,4]. Compared with traditional distributed machine learning, the main differences are as follows: (1) clients have independent control over their local devices and data; (2) clients are often unreliable (edge nodes are frequently disconnected due to equipment and communication problems); (3) communication cost is higher than computing cost; (4) the data in FL are not independent and identically distributed (non-IID); (5) the distribution of local data is uneven [5]. These new features challenge the design and analysis of FL algorithms. One of the major challenges is client heterogeneity, including data and device heterogeneity, which exists extensively in real-world FL [6]: (1) heterogeneous data distribution: data are generated locally, so the sample-generation mechanism may differ between clients (e.g., in different countries or regions); (2) covariate shift: for example, in handwriting recognition, different people write the same word differently; (3) label-distribution skew (prior-probability shift): for example, Chinese characters are used widely in China but much less elsewhere; (4) quantity skew or imbalance. In real life, all kinds of situations can produce non-IID data. Traditional machine learning is based on the assumption of IID data, but FL differs from centralized machine learning in that, in the absence of centralized data, the data on each node are non-IID.
Consider a real situation: when we use FL to train a mobile input-method model [7,8], different phones differ in processing speed, local data, network conditions, and so on. The latest phones compute and transmit faster than older ones, and phones in locations with good signal, such as towns, transmit more stably than phones in rural areas or areas with signal interference. During training, older phones train slowly and often fail to complete the training task on time.
Phones with poor network conditions are also more prone to connection loss when transmitting the model, so the distribution of updates received by the parameter server differs from the actual distribution. Because of client heterogeneity, some classes of data participate more frequently in the training process, which biases the training data.
To reduce the impact of client heterogeneity, we propose the FedSSAR algorithm, which consists of two parts: client selection and a regular-term restriction. Compared with the traditional method of randomly selecting the clients that participate in each round of training, we divide the clients into multiple clusters according to the similarity of their local data, without requiring access to that data, and then draw a certain number of available clients from each cluster for aggregation. At the same time, we add a regularizer, centered on the global model, that suppresses local updates that move far from the global model. By setting appropriate parameters, we prove an O(1/√T) convergence rate for the algorithm. We performed experiments on benchmark datasets, including MNIST, EMNIST, Cifar-10, and Cifar-100, to explore the performance of the algorithms under different levels of data heterogeneity, and compared FedSSAR with FedAvg [2], FedProx [6], and SCAFFOLD [9]. The experiments show that our algorithm converges faster and reaches higher training accuracy. The main contributions of this paper are as follows:
• We demonstrate that, for a convex objective function, the traditional FL algorithm cannot converge to the global optimum under client heterogeneity even if exact (rather than stochastic) gradient descent is used; in particular, when data are highly heterogeneous, the traditional FL algorithm may diverge.
• We propose the FedSSAR algorithm. Its main idea is to use stratified sampling for client selection and a regular term to constrain model updates. Clients sampled by clustering better represent the overall data distribution and reduce the variance of the aggregated model. At the same time, regularization changes the optimization objective of each device, bringing it close to the optimization objective of the global loss function.
• We theoretically identify the reason for the divergence of the traditional FL algorithm and prove convergence results for FedSSAR; our analysis shows an O(1/√T) convergence rate.
• We evaluate FedSSAR on the public MNIST, EMNIST-L, Cifar-10, and Cifar-100 datasets against FedAvg, FedProx, and SCAFFOLD. The results validate the superiority of FedSSAR in terms of a smoother training process, faster convergence, and lower training loss.
The remainder of this paper is organized as follows. In section "Related works", we introduce the background research of FL and an overview of related studies on heterogeneous federated learning; in section "Algorithm design", we theoretically analyse the effect of client heterogeneity on model convergence and propose the framework for FedSSAR; in section "Convergence analysis", we discuss the improvements provided by stratified sampling and regularization and give a theoretical proof of the convergence of FedSSAR; in section "Experiments", we comprehensively evaluate FedSSAR using different neural network models on multiple datasets.

Related works
With the improvement of the storage and computing capability of devices in the big-data regime, there is an ever-increasing trend toward distributed optimization [10-13]. Traditional distributed machine learning concentrates training data in one machine or one data center and then updates the model. However, with the increasing local computing power of mobile phones, smart wearable devices, sensors, and other devices, as well as recent restrictions on user data privacy [14], it is more efficient for distributed clients to train models locally and upload model parameters to parameter servers for aggregation than to transfer client data directly. This line of research is known as FL, and it must address challenges such as large-scale training data, privacy protection, and heterogeneous data and devices [1,15-17].
McMahan et al. [2] proposed a distributed learning method based on iterative averaging of the training model and developed the Federated Averaging (FedAvg) algorithm: the learning task is jointly trained by the participating devices and coordinated by a central server, organized like a loose federation. Compared with data-centric distributed machine learning, one of the main advantages of FL is that it separates model training from the need for direct access to raw data, which is very important when data privacy is strictly required or it is difficult to share data centrally. Meanwhile, FedAvg accelerates learning by using multiple local iterations, which is very helpful in reducing communication costs.
Kairouz et al. [18] discussed open issues and challenges in FL: (1) data heterogeneity (non-IID data); (2) client data privacy; (3) communication restrictions; (4) robustness of the algorithm; (5) fairness and efficiency. One of the most fundamental challenges is the heterogeneity of client data.
To solve the problem of differing data distributions in FL, Zhao et al. [19] improved the FedAvg algorithm. They showed that when client data are heterogeneously distributed, FedAvg suffers a large accuracy loss, which can be explained by weight divergence, and proposed creating a small globally shared dataset among all devices. Although this method can reduce the impact of data skew, it is equivalent to artificially adding errors; moreover, such data sharing essentially violates the data-privacy principle of federated learning. Yan et al. [20] considered the intermittent availability of clients: under heterogeneous conditions, different clients participate in training at different times, which skews the trained model toward the data of training-intensive clients. They therefore prefer clients with fewer completed training sessions during selection, so that every client participates in training as often as possible. However, the assumptions of that work are strong, and enforcing an equal average number of training sessions per client is susceptible to slow nodes, resulting in much longer training times.
Li et al. [6] proposed FedProx, which starts from the objective function and reduces the impact of data heterogeneity by adding a restriction to the local objective, so that each client does not deviate too far from the global model during local updates; this alleviates heterogeneity at an acceptable extra computing cost on the client. With a similar idea, Acar et al. [21] proposed FedDyn, which, when updating the local model, adds regularization terms based on the global model and the models from previous communication rounds; a penalty term dynamically modifies the device objectives so that, in the limit, the model parameters converge to a stationary point of the global empirical loss. SCAFFOLD [9] introduces variance-reduction techniques to deal with data heterogeneity: it maintains control variates on the server and on each client to estimate the update directions of the server model and the client models, and uses the difference between the two directions to correct the drift of local training. Adding such an optimizer to the client and global update rules can significantly improve training performance, but the method is very sensitive to its parameters, and tuning them is a difficult problem across different datasets.
Huang et al. [22] start from another direction: they argue that, under data heterogeneity, a single global model of sufficient precision cannot be obtained, so the model should be personalized, using local data to further train the global model and obtain a higher-quality personalized model. Similarly, Ghosh et al. [23] and Sattler et al. [24] proposed dividing clients into different clusters and training a separate global model within each cluster, grouping clients by their local empirical loss functions or node gradients. Because the clients within each cluster are highly similar, each per-cluster global model attains high accuracy. These methods depart from traditional federated learning in that they train multiple global models; each model is accurate within its own cluster but inaccurate in the others, which gives poor generalization. Nevertheless, they provide the useful idea that clients with similar data distributions can be clustered. Fraboni et al. [25] demonstrated that clustered sampling represents clients better and proposed two client-aggregation methods based on sample size and model similarity; by using different sampling methods to construct a homogeneous distribution of client data, model performance on heterogeneous datasets improves greatly. However, the computational complexity is too high, and sampling the clients' local datasets seems to violate the rule of federated learning that local data must not leave the client.

Algorithm design
In FL, we consider the following standard optimization model:

min_w F(w) = Σ_{k∈S_t} ρ_k F_k(w),

where S_t ⊆ {1, 2, ..., N} represents the set of clients participating in model-parameter aggregation in round t, N is the total number of devices, and ρ_k is the weight of the k-th device, so that ρ_k ≥ 0 and Σ_{k∈S_t} ρ_k = 1. Commonly, we define ρ_k = n_k / M, where M = Σ_{k∈S_t} n_k is the total number of samples held by the clients participating in round t and n_k is the number of samples on client k. Suppose the k-th device holds the training data D_k and ξ^k_t is a sample uniformly selected from the local data. Here, we describe the standard FedAvg algorithm. First, the central server broadcasts the latest model w_t to all devices. Then, every device sets w^k_t = w_t and performs local updates:

w^k_{t+1} = w^k_t − η_t ∇F_k(w^k_t, ξ^k_t),

where η_t is the learning rate, assuming that K clients (1 ≤ K ≤ N) are selected to participate in the training process.
The aggregation step is as follows:

w_{t+1} = Σ_{k∈S_t} ρ_k w^k_{t+1}.

The global data are a mixture of all local data: D = Σ_{k=1}^N ρ_k D_k. When the client data are independent and identically distributed, D_k = D for all k ∈ {1, ..., N}. However, in real life the data distributions of different clients differ, so our theoretical analysis is based on the assumption that the data are not independent and identically distributed.
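For concreteness, the broadcast, local-update, and aggregation steps above can be sketched as follows; `grad_fn`, the client data layout, and all hyperparameters are illustrative choices, not part of the FedAvg specification:

```python
import numpy as np

def local_update(w, data, lr, local_steps, grad_fn):
    """Run SGD from the broadcast model w on one client's local data."""
    w_k = w.copy()  # the device sets w^k_t = w_t
    for _ in range(local_steps):
        xi = data[np.random.randint(len(data))]  # uniformly sampled local example
        w_k -= lr * grad_fn(w_k, xi)             # w^k <- w^k - eta * grad F_k(w^k, xi)
    return w_k

def fedavg_round(w, clients, lr, local_steps, grad_fn):
    """One FedAvg round: broadcast, local SGD, sample-weighted averaging."""
    local_models = [local_update(w, d, lr, local_steps, grad_fn) for d in clients]
    sizes = np.array([len(d) for d in clients], dtype=float)
    rho = sizes / sizes.sum()                    # rho_k = n_k / M
    return sum(r * wk for r, wk in zip(rho, local_models))
```

As a sanity check, running this on the mean-estimation problem of Example 1 (where the per-sample gradient of (1/2)(w − ξ)² is w − ξ) drives w to the weighted mean of the client data.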

Effects of data heterogeneity
Example 1. Consider a distributed optimization problem with N clients and convex objective functions. Our goal is to learn the mean of the clients' one-dimensional data. A sample of local data is ξ^k_t ∼ D_k, with mean e_k = E[ξ^k]. We can express this learning problem as minimizing the mean squared error (MSE):

min_x F(x) = Σ_{k=1}^N ρ_k E_{ξ^k}[(x − ξ^k)²].

For convenience of calculation, we assume that each client holds the same amount of data, which gives the optimal solution

x* = (1/N) Σ_{k=1}^N e_k.

Assume that τ_k is the weight offset caused by communication loss, client-device differences, etc., and use x̃* to denote the value to which the objective actually converges. The objective function will then converge to

x̃* = Σ_{k=1}^N (1/N + τ_k) e_k.

Proof of Example 1
Since the objective function is convex, with a small enough learning rate the iterates converge to the optimal solution. Setting the derivative of the objective function to zero gives

Σ_{k=1}^N ρ_k (x − e_k) = 0.

By the assumption that each client holds the same amount of data, ρ_k = 1/N for every k ∈ {1, ..., N}, therefore x* = (1/N) Σ_{k=1}^N e_k. When computing the convergence value of the objective under real conditions, ρ_k = 1/N + τ_k (τ_k being the weight offset of client k), and we get x̃* = Σ_{k=1}^N (1/N + τ_k) e_k. Hence x̃* = x* only when e_1 = e_2 = ... = e_N (the data distributions are IID) or when τ_k = 0 for all k ∈ {1, 2, ..., N}. Therefore, traditional federated learning algorithms can perform poorly in the case of heterogeneous clients.
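The bias in Example 1 can be checked numerically; the means e_k and offsets τ_k below are hypothetical values chosen only to illustrate the closed-form expressions:

```python
import numpy as np

# Hypothetical client means e_k for N = 4 heterogeneous clients.
e = np.array([1.0, 2.0, 5.0, 8.0])
N = len(e)

# Ideal optimum of the MSE objective: the unweighted mean of the client means.
x_star = e.mean()

# Hypothetical weight offsets tau_k (e.g. from unreliable clients); they sum
# to zero, so the perturbed weights 1/N + tau_k are still valid weights.
tau = np.array([0.10, 0.05, -0.05, -0.10])
x_tilde = ((1.0 / N + tau) * e).sum()

bias = x_tilde - x_star  # nonzero unless all e_k are equal or all tau_k are zero
print(x_star, x_tilde, bias)  # -> 4.0 3.15 -0.85
```

With equal e_k (IID data) or zero offsets, the same computation gives bias = 0, matching the condition derived in the proof.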

FedSSAR architecture
As shown in section "Effects of data heterogeneity", data and device heterogeneity significantly reduce the performance of traditional FL algorithms. In FL, the overall data distribution is a mixture of the clients' local distributions, and in the FedAvg algorithm the aggregation weight is the sample weight. This setting only considers differences in the clients' data volumes, not their hardware and communication differences, as in the classic federated-learning example of training a mobile input-method model: the latest phones run and transmit faster than older ones, and phones in locations with good signal, such as cities and towns, transmit more stably than phones in rural areas or areas with signal interference. This results in differences between the data distribution received by the parameter server and the actual distribution. Because of client heterogeneity, some classes of data participate more frequently in the training process, which biases the training data. To alleviate this problem, we ensure that all types of data are used in each training round with essentially the same probability, so that the training data distribution is an unbiased mixture of the clients' sample distributions; at the same time, regular terms are added to limit the model updates. In this way, we can eliminate the bias in the training data and establish a convergence result.
Let us recall the training steps of federated learning. The parameter server first initializes the global model and broadcasts w_0 to all clients. Each client trains on a sample of local data ξ^k based on the received model to obtain the local model parameters w^k_1:

w^k_1 = w_0 − η_1 ∇F_k(w_0, ξ^k_1).

We can see that when computing w^k_1, the parameters w_0 and η_1 are the same for all clients, so w^k_1 depends only on ξ^k_1; that is, w^k_1 contains the data-distribution information of client k. This indicates that we can group the model parameters to divide clients by their local data similarity.
The detailed process of FedSSAR is given in Algorithm 1. The client-selection principle of FedSSAR is to select available clients from different clusters (lines 2-8). After training, the parameter server collects the local model parameters of each client and divides the clients into groups using the OPTICS (Ordering Points To Identify the Clustering Structure) clustering method [26].¹ In each training round, available clients are drawn from each cluster proportionally to participate in the training, ensuring that all types of data participate in each round and reducing the impact of client heterogeneity. After a client receives the latest global model parameters from the parameter server, as shown in line 12, a regular term is added to the client's local loss function when minimizing it, so that the new local model parameters do not deviate from the previous global model parameters; the degree of correction is controlled by the parameter α. The latest parameters are then sent back to the parameter server, which weights and averages them (lines 10-15).

¹ OPTICS is a density-based clustering algorithm. It defines a cluster as a maximal set of density-connected points and divides regions of sufficient density into clusters. Compared to K-means and BIRCH, OPTICS can detect clusters of any shape in noisy spatial data; compared to DBSCAN, it is less sensitive to input parameters and yields more stable clusters. Thus, OPTICS has several advantages over other clustering methods: (1) it does not need to know the number of clusters in advance; (2) it can find clusters of any shape; (3) it can detect noise points and eliminate the influence of malicious attack nodes; (4) it is insensitive to input parameters.

Assumption 2 (Bounded dissimilarity). We assume that, for some ε ≥ 1, E_k ||∇F_k(w)||² ≤ ε² ||∇F(w)||². This quantifies the difference between the local loss functions and the global loss function.
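The cluster-then-sample client selection of Algorithm 1 might be sketched as follows. For self-containedness, a simplified greedy density grouping stands in for the OPTICS implementation, and the names `cluster_clients`, `stratified_select`, and the threshold `eps` are illustrative, not from the paper:

```python
import numpy as np

def cluster_clients(params, eps):
    """Greedy density grouping of flattened first-round client parameters.

    A simplified stand-in for OPTICS: a client joins a cluster if its
    parameter vector lies within distance eps of some member of that cluster.
    """
    clusters = []
    for i, p in enumerate(params):
        for c in clusters:
            if min(np.linalg.norm(p - params[j]) for j in c) < eps:
                c.append(i)
                break
        else:
            clusters.append([i])  # start a new cluster for an isolated client
    return clusters

def stratified_select(clusters, per_cluster, rng):
    """Draw available clients from every cluster (here, per_cluster each)."""
    return [idx for c in clusters
            for idx in rng.choice(c, size=min(per_cluster, len(c)), replace=False)]
```

Because every cluster contributes clients in every round, each data-distribution mode is represented in each aggregation, which is the property the stratified-sampling analysis below relies on.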

Stratified sampling
As we know, the model parameters contain the data-distribution information of the clients. We assume there are m different data distributions in the overall data and, again for convenience of analysis, that each client holds the same amount of local data. Through clustering, the client data distribution within each cluster is the same, and therefore the trained neural-network models share the same parameter information, namely

w^k_t = w^{c_i}_t for every client k in cluster c_i.

According to Eq. (5), we use w^{c_i}_t to represent the model parameters in cluster i after t rounds of training, and n_{c_i} to denote the number of clients in cluster i. We have

w_{t+1} = Σ_{i=1}^m (n_{c_i}/N) w^{c_i}_{t+1}.

We first prove that the sampling method of FedSSAR is unbiased, that is,

E_{S_t}[ Σ_{k∈S_t} w_k(S_t) w^k_{t+1} ] = Σ_{i=1}^m (n_{c_i}/N) w^{c_i}_{t+1},

where w_k(S_t) represents the aggregation weight of client k in the subset S_t.

Proof.
Summing the model parameters of each training round by cluster and using Eq. (7), we obtain

E_{S_t}[ Σ_{k∈S_t} w_k(S_t) w^k_{t+1} ] = Σ_{i=1}^m (n_{c_i}/N) w^{c_i}_{t+1},   (9)

which proves that the stratified sampling method realizes unbiased sampling of all the data. Next we show that stratified sampling also reduces the aggregation-weight variance of the clients, making model updates more stable. For the traditional random sampling method, the aggregation weight of each client equals its sample weight, ρ_k = n_k/M, where n_k is the sample count of client k and M is the overall sample count; under the assumption of this section that all clients hold the same amount of local data, this simplifies to ρ_k = 1/N. Since the overall data contains m different distributions, we set the number of clients participating in each round to m. In random sampling, m clients are selected according to the Bernoulli distribution B(ρ_k), and the aggregation-weight variance of client k is Var(w_k(S_t)). In stratified sampling, we first divide the clients into m cluster sets and then select one client from each set to participate in training; we use Var_c(w_k(S_t)) to denote the aggregation-weight variance of client k under stratified sampling. It can be shown that Var(w_k(S_t)) ≥ Var_c(w_k(S_t)), with equality only when m = 1 (the data distributions of all clients are essentially the same).
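The variance reduction argued above can be observed in a small simulation of the aggregated update; the cluster means, noise scale, and client counts below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_per = 5, 20                                   # m clusters, 20 clients each
cluster_means = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
# Each client's scalar "update" is its cluster mean plus small local noise.
clients = np.repeat(cluster_means, n_per) + 0.1 * rng.standard_normal(m * n_per)
labels = np.repeat(np.arange(m), n_per)

def random_round():
    """Random sampling: m clients drawn uniformly from the whole population."""
    picks = rng.choice(len(clients), size=m, replace=False)
    return clients[picks].mean()

def stratified_round():
    """Stratified sampling: one client drawn from each of the m clusters."""
    picks = [rng.choice(np.where(labels == c)[0]) for c in range(m)]
    return clients[picks].mean()

R = 2000
var_rand = np.var([random_round() for _ in range(R)])
var_strat = np.var([stratified_round() for _ in range(R)])
# Stratified sampling removes the between-cluster component of the variance,
# so var_strat is far smaller than var_rand.
```

The aggregated update of a random draw fluctuates with the mix of clusters it happens to hit, while the stratified draw fixes that mix every round.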

Regularization
In general, for federated learning under heterogeneous conditions, a client's local optimum does not coincide with the global optimum. The global optimum w* satisfies ∇F(w*) = 0, and the local optimum w*_k of each client satisfies ∇F_k(w*_k) = 0. However, due to client heterogeneity, the local and global data distributions are not the same, so the global and local optima are generally different; that is, ∇F_k(w*) ≠ 0. This means that updating the model by optimizing only the local empirical loss cannot make the model converge to the global optimal solution.
As shown in Algorithm 1, we add a regular term when optimizing the client's local loss function, so that the client's local optimum approaches the global optimum. More specifically, when calculating the local empirical loss of client k, we add the restriction (α/2)||w − w_t||², giving

h_k(w) = F_k(w) + (α/2)||w − w_t||².

Minimizing h_k modifies the optimal point of each client, bringing it close to the global optimum.
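A minimal sketch of the regularized local update, assuming a generic per-sample gradient `grad_fn` (an illustrative placeholder): the gradient of the added term, α(w − w_t), is what pulls every local step back toward the broadcast global model.

```python
import numpy as np

def prox_local_update(w_global, data, alpha, lr, steps, grad_fn, rng):
    """Local SGD on h_k(w) = F_k(w) + (alpha/2) * ||w - w_t||^2."""
    w = w_global.copy()
    for _ in range(steps):
        xi = data[rng.integers(len(data))]
        # grad h_k = grad F_k + alpha * (w - w_global)
        w -= lr * (grad_fn(w, xi) + alpha * (w - w_global))
    return w
```

For a quadratic local loss with minimizer far from w_t, the update with alpha = 0 runs to the local minimizer, while alpha > 0 stops at a point between the local minimizer and the global model, as the analysis above intends.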

Convergence result
In this section, we discuss the convergence of FedSSAR when all devices participate in the aggregation step. Assume that FedSSAR terminates after T iterations and returns w_T as the output. Our theoretical analysis proves the convergence of FedSSAR in the non-convex setting.

Theorem 1 (Non-convex FedSSAR convergence: All devices participate).
Let Assumptions 1 to 3 hold and let L, α, σ, and ε be defined as before. Assume the functions F_k are non-convex and that there exists ᾱ = α − L > 0; the resulting bound is derived in "A.1 Proof of Theorem 1". In full client-participation mode, FL is seriously affected by the "straggler effect" (all nodes wait for the slowest node), so partial client participation has more practical applications. Suppose there are m different data distributions in the overall dataset and that S_t is a subset of clients containing m indices in round t, one randomly selected from each cluster. We can then obtain the convergence of FedSSAR with partial client participation.

Experiments
In this section, we evaluate the performance of FedSSAR across various datasets, models, and availability settings, and compare it with the FedAvg, FedProx, and SCAFFOLD algorithms.

Datasets
We experimented with several public datasets, the benchmark datasets used in previous related work (MNIST, Cifar-10, Cifar-100, and EMNIST-L). Taking the MNIST dataset as an example, we distributed the 60,000 training examples to 100 clients, with 600 examples (1%) per client. The IID split uses independent, random division; as shown in Fig. 1a, the data distribution in each client is then essentially the same. To simulate different degrees of data heterogeneity, the non-IID and non-IID2 splits use biased sampling: the overall data are sorted by label and divided into slices so that each slice contains only one kind of label, and the slices are then randomly assigned to clients. The slice size determines how many label types each client holds: each client in non-IID contains approximately two types of data (Fig. 1b), while each client in non-IID2 contains approximately only one type (Fig. 1c), which simulates extreme data heterogeneity. Figure 1 shows the data distribution of the first 20 clients for the MNIST dataset. We use a similar division method for the Cifar-10 and EMNIST-L datasets (details are shown in "Appendix B.1"). In addition, we conducted a large-scale experiment with 1000 clients on the MNIST, EMNIST, Cifar-10, and Cifar-100 datasets. Figure 2 shows the data distribution of the first 20 clients for the Cifar-100 dataset.
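The label-sorted sharding described above might be implemented as follows; `shard_partition` is an illustrative sketch, not the exact split script used in our experiments:

```python
import numpy as np

def shard_partition(labels, n_clients, shards_per_client, rng):
    """Sort-by-label sharding used to simulate non-IID splits.

    Sort example indices by label, cut them into equal shards (so each shard
    is dominated by one class), then deal shards_per_client shards to each
    client: 2 shards per client approximates the non-IID split, 1 shard the
    non-IID2 split.
    """
    order = np.argsort(labels, kind="stable")
    n_shards = n_clients * shards_per_client
    shards = np.array_split(order, n_shards)
    shard_ids = rng.permutation(n_shards)
    return [np.concatenate([shards[s] for s in
                            shard_ids[i * shards_per_client:(i + 1) * shards_per_client]])
            for i in range(n_clients)]
```

With 600 MNIST-like examples per client and 2 shards each, every client ends up holding at most two label classes, matching the non-IID setting of Fig. 1b.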

Implementation
We select the FedAvg, FedProx, and SCAFFOLD algorithms as baselines. The value of the parameter α in FedSSAR is selected from the set {1, 0.5, 0.1, 0.05, 0.01, 0.005, 0.001}; in "Appendix C.1" we test the sensitivity of α. The proportion of clients selected per round is 10%; the initial learning rate is 0.1, and the learning rate η_t decays with the round number t; we set the weight decay to 10^-3; we set μ = 10^-3 for FedProx; K = 12 and K = 1 are selected for SCAFFOLD under the moderate and massive device numbers, respectively. To ensure that the samples drawn each round are unbiased estimates of the population, we randomly draw the target number of clients from the population in each round (for the FedSSAR algorithm, from each cluster).

Models
We use fully connected multilayer neural networks for MNIST and EMNIST-L, with 2 hidden layers of 200 and 100 neurons. For the Cifar-10 and Cifar-100 datasets, we use a CNN model (2 convolutional layers with 64 5 × 5 filters and 2 fully connected hidden layers of 394 and 192 neurons, followed by a softmax layer).
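A minimal sketch of the MNIST/EMNIST-L architecture, assuming 784 flattened input pixels and 10 output classes as for MNIST (EMNIST-L would change the output width); the initialization scale is an arbitrary illustrative choice:

```python
import numpy as np

def init_mlp(rng, sizes=(784, 200, 100, 10)):
    """MLP for MNIST: two hidden layers of 200 and 100 units, softmax output."""
    return [(0.1 * rng.standard_normal((n_in, n_out)), np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """ReLU hidden layers followed by a numerically stable softmax."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = params[-1]
    z = x @ W + b
    z -= z.max(axis=-1, keepdims=True)  # subtract row max before exp for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

It is the flattened parameter vectors of such per-client models that Algorithm 1 clusters.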

Clustering results of model parameters
According to Algorithm 1, taking the MNIST (100 clients) dataset as an example, the clustering results of the model parameters under different heterogeneous data conditions are shown in Fig. 3.
As shown in Fig. 3a, for the IID split all clients belong to the same cluster: the local data distributions of the clients have high similarity, so the local data distribution of each client can be considered the same as the overall distribution of the dataset. Under the heterogeneous settings (Fig. 3b, c), clients are divided into different clusters. As shown in Fig. 3b, in the first heterogeneous data setting the clients are divided into 29 groups, and the model parameters within each set are highly similar. Figure 3c shows the results of the second heterogeneous data setting: the clients are divided into 10 cluster sets, and compared with the first setting, the model parameters of different clusters differ more markedly.

Experimental results
We first tested the performance of FedSSAR against the above baselines (FedAvg, FedProx, and SCAFFOLD) on different datasets under different conditions. The validation performance of each task is shown in Figs. 4 and 5. Tables 1 and 2 summarize the validation performance on MNIST and EMNIST-L after 50 rounds of training and on Cifar-10 and Cifar-100 after 500 rounds of training.
Due to space constraints, plots of the model training losses are shown in "Appendix C.2".
Non-IID split. The non-IID split simulates a medium degree of data heterogeneity. From Table 1, it can be seen that FedSSAR achieves the best results within the specified training rounds in all task settings. In terms of dataset structure, MNIST and EMNIST are handwritten-character image recognition, whereas Cifar-10 and Cifar-100 are real-world color-object image recognition; the latter therefore exhibit larger distribution differences across picture classes and higher heterogeneity of the whole dataset, and experiments on the Cifar datasets require more training rounds to converge than those on the MNIST and EMNIST datasets. At the same time, FedSSAR outperforms the competing methods more clearly on the more heterogeneous datasets and in the larger-scale device scenarios, such as Cifar-10 and Cifar-100 with a massive number of devices. In addition to faster training and higher accuracy, as shown in Fig. 4, the training process of FedSSAR is more stable. As shown in section "Improvements provided by stratified sampling and regularization", the stratified sampling method reduces the bias of the clients' aggregation weights, while the regular term in the client optimization function limits the direction of the model-parameter updates, making the updates more stable. Especially on highly heterogeneous datasets such as the Cifar datasets, the training curves of the traditional algorithms fluctuate sharply, while FedSSAR maintains a stable growth curve.
[Fig. 5 caption: Validation accuracy of FedSSAR and the other methods under moderate and massive device numbers, using a decreasing learning rate η_t and weight decay 10^-3, over the last 50 training rounds for MNIST and EMNIST-L and 500 training rounds for Cifar-10 and Cifar-100; bold marks the best experimental result.]
Non-IID2 split. The non-IID2 split simulates extreme heterogeneity of the client data; as shown in Fig. 1 and "Appendix B.1", different types of clients have completely different local data distributions. According to the clustering results ("Appendix B.2"), the trained model parameters of clients in different clusters differ greatly; when the model parameters are aggregated, the aggregation variance of the participating models increases, which greatly affects the training efficiency of the model. As can be seen from Table 2, the model performance of all training tasks decreases to varying degrees compared with the non-IID split. However, under extreme heterogeneity, FedSSAR shows the better performance: as the degree of data heterogeneity increases, FedSSAR improves more over the traditional federated learning algorithms.

Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A.1 Proof of Theorem 1
Let w_t^k denote the model parameters of the k-th client in training round t, so that the local update of each client can be written in terms of w_t^k. By Assumption 1, the function F_k is L-smooth and non-convex: there exists L > 0 such that ∇²F_k ⪰ −L·I. Choosing α such that α − L > 0, the regularized objective h_k is therefore α-strongly convex. From Assumptions 2 and 3, together with Assumption 1, we obtain the bound in Eq. (23), which completes the proof.
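The extraction omits the explicit form of the regularized local objective h_k. A FedProx-style proximal form, consistent with the α-strong-convexity argument above, would read as follows; this is an assumed reconstruction, not recovered verbatim from the source:

```latex
% Assumed form of the regularized local objective h_k: F_k plus a
% proximal term anchored at the current global model w_t. With
% \nabla^2 F_k \succeq -L I (Assumption 1) and \alpha - L > 0, the
% Hessian bound below gives \alpha-strong convexity of h_k.
h_k(w; w_t) = F_k(w) + \frac{\alpha}{2}\,\lVert w - w_t \rVert^2,
\qquad
\nabla^2 h_k = \nabla^2 F_k + \alpha I \succeq (\alpha - L)\, I \succ 0 .
```

A term of this kind is also what "limits the update direction of the model" in the abstract: minimizing h_k penalizes local iterates that drift far from the current global model w_t.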

A.2 Proof of Theorem 2
FedSSAR does not require all devices to participate in training. Based on the clustering results, suppose there are m data-distribution modes among all devices; we then select m clients to participate in each training round.
We denote by S_t the set of clients selected in round t and define w_{t+1} = Σ_{k∈S_t} ρ_k w_{t+1}^k; to bound F(w_{t+1}), it suffices to bound E_{S_t}[F(w_{t+1})]. Applying Assumption 1 and taking expectations over the subset S_t, then combining Eqs. (27), (28) and (29) with Eq. (21), and following the same argument as in the proof of Theorem 1, we obtain the claimed bound.
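The aggregation step used in this proof, w_{t+1} = Σ_{k∈S_t} ρ_k w_{t+1}^k, can be sketched in a few lines. This is an illustrative sketch only; the weights ρ_k are assumed given (e.g. proportional to local data volume) and are renormalized over the sampled subset S_t so they sum to one.

```python
def aggregate(local_models, weights):
    """Weighted server aggregation: w_{t+1} = sum_k rho_k * w_{t+1}^k.

    local_models: list of parameter vectors (lists of floats), one per
                  selected client in S_t
    weights:      aggregation weights rho_k for the clients in S_t;
                  renormalized here so they sum to one over the subset
    """
    total = sum(weights)
    norm = [r / total for r in weights]
    dim = len(local_models[0])
    # Coordinate-wise convex combination of the local parameter vectors.
    return [sum(norm[k] * local_models[k][i] for k in range(len(local_models)))
            for i in range(dim)]
```

With the stratified subset S_t, the expectation of this combination over the sampling distribution matches the full-participation average, which is the unbiasedness property the proof relies on.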

B.1 Supplementary data distribution map
In the MNIST dataset, for the massive device number, we distributed 60000 training samples to 1000 clients, approximately 60 samples (0.1%) per client. In the EMNIST-L dataset, for the moderate device number, we distributed 48000 training samples to 100 clients, 480 samples (1%) per client; for the massive device number, we distributed 48000 training samples to 1000 clients, 48 samples (0.1%) per client. In the Cifar-10 dataset, for the moderate device number, we distributed 50000 training samples to 100 clients, 500 samples (1%) per client; for the massive device number, we distributed 50000 training samples to 1000 clients, 50 samples (0.1%) per client. In the Cifar-100 dataset, for the moderate device number, we distributed 50000 training samples to 100 clients, 500 samples (1%) per client; for the massive device number, we distributed 50000 training samples to 1000 clients, 50 samples (0.1%) per client (Figs. 6, 7 and 8).
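The equal-sized splits above (e.g. 60000 MNIST samples over 1000 clients, 60 samples each) can be produced with a simple shuffled partition of sample indices. This is a minimal sketch of that bookkeeping, not the paper's data pipeline; the non-IID label skew described in the splits would be layered on top of (or replace) the uniform shuffle used here.

```python
import random

def partition(num_samples, num_clients, seed=0):
    """Evenly split sample indices across clients.

    Shuffles the indices once, then hands each client a contiguous
    shard of num_samples // num_clients indices, matching the equal
    per-client quantities described in Appendix B.1.
    """
    idx = list(range(num_samples))
    random.Random(seed).shuffle(idx)
    per = num_samples // num_clients
    return [idx[i * per:(i + 1) * per] for i in range(num_clients)]
```

For example, `partition(60000, 1000)` yields 1000 disjoint shards of 60 indices each, i.e. 0.1% of the training set per client.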

C.1 α sensitivity analysis of FedSSAR
In FedSSAR, α is an important parameter. α changes the update direction of the local model, making the heterogeneous clients update in the direction of global optimization. As shown in Theorems 1 and 2, the value of α affects the convergence speed of the model, so it is necessary to explore the sensitivity of α.
To explore the sensitivity of α, we consider MNIST and EMNIST-L under the non-IID2 split, with 100 clients and a 10% participation setting. Figure 10 shows the best test accuracy achieved under different values of α while all other parameters remain the same. We can see that the best test accuracy is obtained when α = 10⁻³ (Fig. 11).

Fig. 12 Training loss of FedSSAR and other methods, using a decreasing η_t and weight decay to achieve the best training performance during the last 50 training rounds for MNIST and EMNIST-L and 500 for Cifar-10 and Cifar-100 (non-IID split)
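The sensitivity study above amounts to a one-dimensional grid search over α with all other hyperparameters fixed. A hedged sketch of that procedure, assuming a hypothetical `train_fn` that runs a full FedSSAR training job for a given α and returns the final test accuracy:

```python
def select_alpha(train_fn, alphas):
    """Grid search over the regularization strength alpha.

    train_fn: callable alpha -> test accuracy (assumed to run the full
              training pipeline with all other hyperparameters fixed)
    alphas:   candidate values, e.g. [1e-1, 1e-2, 1e-3, 1e-4]
    Returns the best alpha and the accuracy achieved for each candidate.
    """
    results = {a: train_fn(a) for a in alphas}
    best = max(results, key=results.get)
    return best, results
```

In practice each `train_fn` call is a complete federated training run, so the grid is kept small (a few orders of magnitude around the expected value, as in Fig. 10).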

Fig. 13 Training loss of FedSSAR and other methods (moderate and massive device numbers), using a decreasing η_t and weight decay to achieve the best training performance during the last 50 training rounds for MNIST and EMNIST-L and 500 for Cifar-10 and Cifar-100 (non-IID2 split)