1 Introduction

In order to properly compare and evaluate new machine learning algorithms [7], testing and validation need to be done on public data. This makes it possible to reproduce results and assess the performance of the algorithms in a fair way. This is in fact the case for all data-driven methods. Publicly available repositories, such as the UCI Machine Learning repository [5], have been used for this purpose for a long time.

Recent research in distributed machine learning has the same requirements. In this case, however, data sets need to satisfy additional constraints, because standard assumptions on data sets do not hold. In standard machine learning and statistical learning it is usual to assume that data sets contain observations that are independent and identically distributed (i.i.d.). The i.i.d. assumption is convenient for the statistical properties of the methods [3]. Unfortunately, this assumption cannot be considered true in general in distributed machine learning and, more particularly, in federated learning (see e.g. [9]).

Federated learning [1, 2, 9, 12] is a distributed machine learning framework in which a set of agents collaborate in a machine learning task. Each agent has (or is) a device, and the data is naturally distributed and stored in each device. Typically, the goal is to build a classification model from the devices' data. This model is usually a deep learning model and, thus, internally represented by the matrices of weights associated with the model.

Federated learning assumes that there is a server that leads the process of building the machine learning model. This model is built in a distributed way. The server bootstraps the process with an initial model, which is transmitted to the devices. The devices then use their own data to locally train the received model and transmit to the central server the difference between their updated model and the one they received. This process is repeated until the central model converges. Following this standard procedure, the data from each device is not transmitted to the central server and, instead, is kept private on the device. Because of that, the approach is more respectful of the users from a privacy perspective. This has made federated learning a hot research topic. Different research directions exist for federated learning, including research on more efficient algorithms as well as on algorithms that are private by design (e.g., PySyft [10]).

Federated learning usually considers that the devices are heterogeneous. Heterogeneity comes in different flavors. The research literature shows that its most important source is related to the computational capabilities of the devices: they are typically resource-constrained in terms of communication capabilities (and access to the internet), computational power, memory, and storage. In addition, the research literature [9] also discusses that agents are heterogeneous with respect to the data they keep in their devices. This is modeled by considering that the data is not i.i.d. In other words, different devices have data that have been generated by different (random) processes. A typical scenario in federated learning consists of a population of mobile phones that gather data related to texting (SMS messaging, social network messaging, etc.), and the central machine learning model learned using the decentralized approach is for predictive text. It is clear that different devices contain different types of data, as agents (i.e., people) use, e.g., different (natural) languages. So, at least the textual data gathered from people using different languages will follow different distributions. In other words, the data will not be independent and identically distributed.

Some of the data sets currently used in the federated learning literature are not i.i.d.; others are, but are transformed in an ad-hoc way so that they no longer follow the i.i.d. assumption. Other experiments simply ignore this issue. These are some relevant examples of data sets used in federated learning. LEAF [2] considers the use of FEMNIST, Sentiment140, and Shakespeare. For these data sets, LEAF considers 3550, 660120, and 1129 devices, respectively. The FEMNIST data set consists of data for 62 different classes. The Shakespeare data set includes data for 715 different characters from Shakespeare plays. When each character is considered independently, this corresponds to 715 non-i.i.d. data sources. Here the number of classes depends on the classification problem considered. For example, classifying which character produced a text means 715 classes, but for text-prediction purposes the number of classes will be different. Chai et al. [4] use MNIST [8] and Fashion-MNIST [13]. Both contain 60,000 training images (and 10,000 test images) of 28x28 pixels corresponding to 10 classes. They also use CIFAR-10 (10 classes) and FEMNIST. Sarkar et al. [11] also consider MNIST and sampled-FEMNIST (10,000 samples, 10 classes), as well as other data sets with a smaller number of classes, in particular VSN (68,532 samples, 2 classes) and HAR (15,762 samples, 6 classes). The number of clients they use in their experiments is rather small: 10, 10, 23, and 30, respectively (for MNIST, sampled-FEMNIST, VSN, and HAR). The use of MNIST and similar data sets is quite common in the federated learning literature. An arbitrary subset of MNIST will satisfy the same properties as the whole MNIST: if the latter is considered i.i.d., an arbitrary subset (selected by, e.g., a random partition) will also be i.i.d.

Most open data sets in machine learning repositories do not provide non-i.i.d. data. Properly speaking, machine learning and statistical research using open data sets usually assumes that the instances of a single database (of these open data sets) have been generated by, or belong to, independent and identically distributed random variables. Then, when we partition the database into random subsets (generated using, e.g., a uniform distribution over the instances of the database) to assign a subset to each device, these subsets will also be i.i.d. Because of that, we need to follow a different approach to build this partition. We summarize our goal and contribution as follows.

  • Our goal is to provide an approach for building subsets for training and testing that is as systematic as the current approaches used for cross-validation/k-fold validation.

  • To achieve this goal, our approach generates several disjoint data sets that are non-identically distributed from a given data set (that is assumed to be i.i.d.). Each subset will contain instances of only a subset of the whole set of classes. In this way, we build a partition of the original data set in a systematic way so that the resulting sets no longer satisfy the i.i.d. condition.

The structure of this paper is as follows. In Sect. 2 we describe our approach to generate non-identically distributed data from a single data set. In Sect. 3 we discuss the complexity of the approach and report some example computations. The paper concludes with a discussion and research directions.

2 Generation of a non-identically distributed data set

This section describes our systematic approach to create data sets that do not satisfy the i.i.d. condition from an original data set that is i.i.d. The premise is that devices have instances corresponding to different classes. For example, device number 1 has instances of the first and the second classes but no instances of the other classes, while device number 2 has instances of the first and the third classes but no instances of the other classes.

We will use the following notation. The original data set has n instances of l classes. Then, there are \(n_j\) instances associated with each class j. Naturally, \(\sum _{j=1}^l n_j = n\). In addition, one of the parameters of the approach is the number of different classes each device can handle. In the example above, devices 1 and 2 each have only two different classes. Let cxd denote the number of classes per device. Then, there are

$$\begin{aligned} \textit{osdd} = \binom{l}{cxd} = \frac{l!}{cxd!\,(l-cxd)!} \end{aligned}$$

possible devices when there are l different classes in the data set. We call this number osdd for One Set of Different Devices. We create nCopies copies of each of these devices, each with different probabilities for each class, to fulfill the non-i.i.d. requirement. In this way, the total number of devices will be \(d = \textit{nCopies} \cdot \textit{osdd}\).
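To make this counting concrete, the following small helper (a hypothetical illustration, not part of the original implementation; the function name is ours) computes osdd and d from l, cxd, and nCopies:

```python
# Hypothetical helper illustrating the counting above (not the authors' code).
from math import comb

def number_of_devices(l, cxd, n_copies):
    osdd = comb(l, cxd)        # one set of different devices: C(l, cxd)
    return n_copies * osdd     # d = nCopies * osdd

print(number_of_devices(3, 2, 2))   # IRIS setting of Sect. 2.1: 6 devices
print(number_of_devices(10, 5, 1))  # e.g., MNIST with cxd = 5: 252 device types
```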

Table 1 For each device (\(dev_i\)), probability that instances belong to given classes \(c_1, \ldots , c_l\)

Taking all these assumptions into consideration, we create the new data by finding a set of probabilities \(p_{ij}\) for \(i=1, \ldots , d\) and \(j=1, \ldots , l\). Here, \(p_{ij}\) represents the probability that an instance of the ith device belongs to the jth class. We represent these probabilities in Table 1. We define and solve an optimization problem to find these probabilities. These probabilities need to satisfy several constraints.

First, for a given device, only the selected classes can have a nonzero probability. For example, according to the example above, device number 1 will have \(p_{1j}=0\) for \(j\ge 3\), and device number 2 will have \(p_{22}=0\) and \(p_{2j}=0\) for \(j \ge 4\). The other probabilities can be nonzero. This is modeled by means of a set \({{{\mathcal {N}}}}\) that contains all the probabilities forced to be null. In this example, \({{{\mathcal {N}}}}\) includes at least \(p_{1j}\) for \(j\ge 3\), \(p_{22}\), and \(p_{2j}\) for \(j \ge 4\).

Second, for each device, the probabilities add to one. I.e., for all i, we have \(\sum _{j=1}^l p_{ij} =1\).

Third, the probabilities associated with each class are also constrained by the number of instances in the data. For example, if the original data set contains as many instances of class one as of class two (i.e., \(n_1 = n_2\)), then the proportions of instances assigned to the 1st and to the 2nd class should be the same. Naturally, the sum of probabilities for the jth class is \(\sum _{i=1}^d p_{ij}\). It should be clear that the sum of all probabilities over all classes and devices is d (because there are d devices and the probabilities of each device add to 1). That is, \(\sum _{j=1}^l \sum _{i=1}^d p_{ij} = d\). Therefore, we need the proportion \(\sum _{i=1}^d p_{ij} / d\) to be equal to \(n_j / n\). In other words, we need that for each class j:

$$\begin{aligned} \sum _{i=1}^d p_{ij} = d ~n_j / n. \end{aligned}$$
(1)

Finally, we also need the probabilities to be nonnegative. That is, \(p_{ij} \ge 0\) for all \(i=1, \ldots , d\) and \(j=1, \ldots , l\).

Different assignments satisfy these constraints. Within a device, we prefer the probabilities to be distributed among the different (selected) classes. In our example, for device number 1 we prefer \(p_{11}=0.5\) and \(p_{12}=0.5\) over the solution \(p'_{11}=1.0\) and \(p'_{12}=0\). Similarly for the second device: our goal is to distribute the non-null probability mass among classes 1 and 3 (i.e., \(p_{21} + p_{23}=1\) but also \(p_{21}\ne 0\) and \(p_{23}\ne 0\)). From an optimization point of view, this means that we do not want the solutions to be at the vertices of the polyhedron of feasible solutions. We define a quadratic objective function to achieve this effect. The best solutions are the ones in which the probabilities for a device are equally distributed. Therefore, an interim expression for the objective function associated with the ith device is the following one:

$$\begin{aligned} \sum _{j=1}^l (p_{ij} - 1/cxd)^2. \end{aligned}$$
(2)

This results in a quadratic (and, thus, convex) optimization problem whose solutions are, in general, not at the vertices of the feasible polyhedron.

Recall that nCopies represents the number of devices with the same set of classes. In our example, when nCopies is two, we have two devices with only classes 1 and 2. In order to have a non-identically distributed data set, these two devices need different probabilities for classes 1 and 2. Nevertheless, Eq. 2 will produce the same probabilities for all devices with the same classes. To avoid this, we modify the objective function using random numbers. Let \(\alpha _{ij}\) be a random number drawn from a uniform distribution in [0,1]; then we replace 1/cxd by \(\alpha _{ij}\). I.e.,

$$\begin{aligned} \sum _{j=1}^l (p_{ij} - \alpha _{ij})^2. \end{aligned}$$
(3)

Naturally, this is also a quadratic objective function. Let p be the vector of all probabilities \(p_{ij}\). This vector has dimension \(d \cdot l\). Then, as \((p_{ij} - \alpha _{ij})^2 = p_{ij}^2 - 2\alpha _{ij}p_{ij} + \alpha _{ij}^2\), and the constant term \(\alpha _{ij}^2\) does not affect the minimization, the objective function of our problem can be expressed using the square matrix \(Q=Id\) (i.e., the identity matrix of size \(d \cdot l\)) and the vector \(L=-2\textrm{A}\), where \(\textrm{A}=(\alpha _{11}, \alpha _{12}, \ldots , \alpha _{dl})\). That is, the objective function is \(p^T Q p + p^T L\), where \(p^T\) denotes the transpose of p.

Putting it all together, we need to solve:

$$\begin{aligned} \begin{array}{ll} \text {Minimize}~~&{} p^T Q p + p^T L \\ \text {subject to}~~&{} \\ &{} \sum _{i=1}^d p_{ij} = d n_j / n ~~ \text {for each}~~ j = 1, \ldots , l\\ &{} \sum _{j=1}^l p_{ij} =1 ~~ \text {for each}~~i = 1, \ldots , d \\ &{} p_{ij}\ge 0 ~~ \text {for each}~~i = 1, \ldots , d \text {~and~} j=1,\ldots , l \\ &{} p_{ij} = 0 ~~ \text {for each} ~~p_{ij} \in {{{\mathcal {N}}}} \\ \end{array} \end{aligned}$$
(4)
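As a concrete illustration, the following sketch builds and solves problem (4) with cvxopt's solvers.qp (the solver mentioned in Sect. 3.1). It is only a sketch: the function name, the flattening of p as \((p_{11}, \ldots , p_{1l}, p_{21}, \ldots)\), and the decision to drop one redundant class constraint (the class-sum and device-sum constraints are linearly dependent, and solvers.qp requires the equality matrix to have full row rank) are assumptions of this illustration, not the authors' implementation.

```python
# Minimal sketch of problem (4) with cvxopt.solvers.qp; names and the dropped
# redundant class constraint are assumptions of this sketch.
from itertools import combinations
import numpy as np
from cvxopt import matrix, solvers

def build_partition_probabilities(n_per_class, cxd, n_copies, seed=0):
    """Return a d x l matrix of probabilities p_ij solving problem (4)."""
    rng = np.random.default_rng(seed)
    l, n = len(n_per_class), sum(n_per_class)
    devices = list(combinations(range(l), cxd)) * n_copies  # d = nCopies * osdd
    d = len(devices)

    # Objective p^T Q p + p^T L with Q = Id and L = -2*alpha; cvxopt minimizes
    # (1/2) x^T P x + q^T x, hence P = 2*Id and q = -2*alpha.
    alpha = rng.uniform(0.0, 1.0, size=d * l)
    P, q = matrix(2.0 * np.eye(d * l)), matrix(-2.0 * alpha)

    # Inequalities p_ij >= 0, written as -p <= 0.
    G, h = matrix(-np.eye(d * l)), matrix(np.zeros(d * l))

    rows, rhs = [], []
    for i in range(d):                       # sum_j p_ij = 1 for each device
        row = np.zeros(d * l); row[i * l:(i + 1) * l] = 1.0
        rows.append(row); rhs.append(1.0)
    for j in range(1, l):                    # sum_i p_ij = d * n_j / n per class;
        row = np.zeros(d * l)                # the constraint for j = 0 is implied
        row[j::l] = 1.0                      # by the others and is dropped so the
        rows.append(row)                     # equality matrix has full row rank.
        rhs.append(d * n_per_class[j] / n)
    for i, classes in enumerate(devices):    # p_ij = 0 for classes not in device i
        for j in range(l):
            if j not in classes:
                row = np.zeros(d * l); row[i * l + j] = 1.0
                rows.append(row); rhs.append(0.0)
    A, b = matrix(np.vstack(rows)), matrix(np.array(rhs))

    sol = solvers.qp(P, q, G, h, A, b)
    return np.array(sol['x']).reshape(d, l)

# IRIS-like setting of Sect. 2.1: 3 classes of 50 instances, cxd = 2, nCopies = 2.
probs = build_partition_probabilities([50, 50, 50], cxd=2, n_copies=2)
```

For the IRIS setting of Sect. 2.1 this returns a 6x3 matrix of probabilities; the exact values depend on the random vector \(\alpha\).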

A solution of this problem will be a matrix of probabilities \(p_{ij}\) for \(i=1, \ldots , d\) and \(j=1, \ldots , l\), as in Table 1. From this matrix of probabilities we can compute the expected number of instances of each class assigned to each device. Let us denote this number by \(n_{ij}\). That is, \(n_{ij}\) represents the expected number of instances of class j that we assign to device i. Then, \(n_{ij}\) is defined as follows:

$$\begin{aligned} n_{ij} = p_{ij}\cdot n / d. \end{aligned}$$
(5)

It is easy to see that, by construction, \(\sum _{i=1}^d n_{ij} = n_j\) for any class j. Observe (using Eq. 1):

$$\begin{aligned} \sum _{i=1}^d n_{ij} = \sum _{i=1}^d p_{ij} \frac{n}{d} = (n/d) \sum _{i=1}^d p_{ij} = (n/d) d n_j / n = n_j. \end{aligned}$$

2.1 An example: the IRIS data set

As an example, we consider the IRIS data set [5], which consists of 150 instances of 3 classes represented by 4 numerical features. The three classes are Iris setosa, Iris virginica, and Iris versicolor, but we represent them here by \(c_1,c_2,c_3\). There are 50 instances for each class. Therefore, \(n_1=n_2=n_3=50\). Then, if we consider that each device has 2 classes (i.e., \(cxd=2\)), there are fundamentally 3 types of devices. Type 1 has classes \(c_2\) and \(c_3\), type 2 has classes \(c_1\) and \(c_3\), and type 3 has classes \(c_1\) and \(c_2\). Therefore, \(osdd=3\). If we select nCopies=2, we will have two devices of each type and, thus, a total of 6 devices (i.e., \(d=2 \cdot osdd = 6\)). Let \(dev_1\) and \(dev_4\) be of type 1, \(dev_2\) and \(dev_5\) of type 2, and \(dev_3\) and \(dev_6\) of type 3.

Then, we need the constraints \(p_{11}=0\), \(p_{22}=0\), \(p_{33}=0\), \(p_{41}=0\), \(p_{52}=0\), and \(p_{63}=0\) to prevent the devices from including instances of classes that are not allowed. Then, we have 6 constraints for the six devices requiring that their probabilities add to one (i.e., \(p_{12}+p_{13}=1\), \(p_{21}+p_{23}=1\), \(p_{31}+p_{32}=1\), \(p_{42}+p_{43}=1\), \(p_{51}+p_{53}=1\), \(p_{61}+p_{62}=1\)). Finally, we also have the equalities for each class. As we have \(n_j = 50\) instances for each class j, the probabilities for each class add to \(d \cdot n_j / n = 6\cdot 50 / 150 = 2\). So, \(p_{21}+p_{31}+p_{51}+p_{61}=2\), \(p_{12}+p_{32}+p_{42}+p_{62}=2\), and \(p_{13}+p_{23}+p_{43}+p_{53}=2\).

In addition, we have the inequalities \(p_{ij}\ge 0\), and the objective function defined by \(Q=Id\) (the identity matrix of size \(d \cdot l = 6 \cdot 3 = 18\), as there are 18 probabilities \(p_{ij}\)) and L, a vector of length 18 with random numbers in [0,1] multiplied by \(-2\). The problem built in this way minimizes

$$\begin{aligned} \sum _{i=1}^d\sum _{j=1}^l (p_{ij}-\alpha _{ij})^2 \end{aligned}$$

for random numbers \(\alpha _{ij}\) uniformly distributed in [0,1].

Solving this problem we get the following solution:

$$\begin{aligned} \left( \begin{array}{ccc} 0.0 &{}\quad 0.89029774 &{}\quad 0.10970226\\ 0.25199663 &{}\quad 0.0 &{}\quad 0.74800337\\ 0.33080903 &{}\quad 0.66919097 &{}\quad 0.0 \\ 0.0 &{}\quad 0.30016059 &{}\quad 0.69983941\\ 0.55754504 &{}\quad 0.0 &{}\quad 0.44245496\\ 0.8596493 &{}\quad 0.1403507 &{}\quad 0.0 \\ \end{array} \right) \end{aligned}$$

Now, using Eq. 5, we obtain the following expected number of instances for each pair (device, class).

$$\begin{aligned} \left( \begin{array}{ccc} 0.0 &{}\quad 22.25744352&{}\quad 2.74255648\\ 6.29991574 &{}\quad 0.0 &{}\quad 18.70008426\\ 8.27022579 &{}\quad 16.72977421&{}\quad 0.0 \\ 0.0 &{}\quad 7.50401479&{}\quad 17.49598521\\ 13.93862595 &{}\quad 0.0 &{}\quad 11.06137405\\ 21.49123252 &{}\quad 3.50876748 &{}\quad 0.0 \\ \end{array} \right) \end{aligned}$$
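As a quick sanity check (a hypothetical snippet using the matrix printed above), the column sums of this matrix recover the class sizes \(n_j=50\) and every row sums to \(n/d=25\):

```python
# Sanity check on the expected-counts matrix above (values as printed).
import numpy as np

N = np.array([[ 0.0,         22.25744352,  2.74255648],
              [ 6.29991574,   0.0,        18.70008426],
              [ 8.27022579,  16.72977421,  0.0       ],
              [ 0.0,          7.50401479, 17.49598521],
              [13.93862595,   0.0,        11.06137405],
              [21.49123252,   3.50876748,  0.0       ]])
print(N.sum(axis=0))   # class totals: ~[50, 50, 50]
print(N.sum(axis=1))   # per-device totals: ~[25, 25, 25, 25, 25, 25]
```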

2.2 Assignment of instances to devices

In this way, we can partition the whole data set of n instances by randomly assigning to each device an appropriate number of instances, taking into account the classes this device needs to consider.

This can be easily implemented in the following way. Let us consider class 1 with \(n_1\) instances; then, we can assign its instances to the devices as follows.

  1. Generate a random sample (with replacement!) of size \(n_1\) of values in the set of devices \(\{1, \ldots , d\}\). The probability of selecting device \(i_0\) is \(p_{i_01}/\sum _{i=1}^d p_{i1}\). Let \((d_1, \ldots , d_{n_1})\) be the names of these devices according to the sample. Observe that, as there are \(n_1\) instances of the first class, this process results in an assignment of each instance to a device.

  2. Assign each instance in the data set with class \(c_1\) to the devices according to the random sample. For example, assign the first instance with class \(c_1\) to \(d_1\), the second instance with class \(c_1\) to \(d_2\), etc.

For the first step, in our implementation we have used the function choice from Python's package numpy (random). An alternative would be to draw \(n_1\) values from a uniform distribution and then use the inverse of the cumulative distribution function to map each value to a device.
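A possible form of this assignment step is sketched below. It assumes (our naming, not the paper's code) that `probs` is the d x l matrix of probabilities and `labels` holds the integer class index of each instance; only numpy's random sampling is used, as in the implementation described above.

```python
# Sketch of Sect. 2.2: per class, sample a device for each instance with
# probabilities p_ij / sum_i p_ij (names are assumptions of this sketch).
import numpy as np

def assign_instances(labels, probs, seed=0):
    rng = np.random.default_rng(seed)
    d, l = probs.shape
    device_of = np.empty(len(labels), dtype=int)
    for j in range(l):
        idx = np.where(labels == j)[0]          # instances of class j
        w = np.clip(probs[:, j], 0.0, None)     # clip tiny negative solver noise
        w = w / w.sum()
        device_of[idx] = rng.choice(d, size=len(idx), replace=True, p=w)
    return device_of                            # device index for every instance
```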

2.3 Unbalanced number of instances for devices

The optimization problem formulated above does not make any explicit requirement on the number of instances associated with each device. Nevertheless, Eq. 4 results in all devices having, on average, the same number of instances. In practice, the realized numbers are not always exactly the same, because instances are assigned by random sampling and the constraints (including the definition of the set of null probabilities \({{{\mathcal {N}}}}\)) can lead to some sets having more instances than others.

The fact that Eq. 4 leads to all devices having the same expected number of instances follows from Eq. 1. Observe that the solution in Sect. 2.1 satisfies this property: all devices have on average exactly 25 instances.

Equation 1 can be rewritten equivalently as:

$$\begin{aligned} \sum _{i=1}^d \frac{1}{d} p_{ij} = n_j / n ~~ \text {for each}~~ j = 1, \ldots , l \end{aligned}$$

Expressed in this way, the equation considers d devices, each with weight 1/d. We can, thus, consider different weights for different devices by introducing parameters \(w_{i}\) for \(i=1, \ldots , d\). These weights need to be positive and add to one (i.e., \(\sum _{i=1}^d w_i = 1\)). Then, we can rewrite the previous set of equations as follows:

$$\begin{aligned} \sum _{i=1}^d w_i p_{ij} = n_j / n ~~ \text {for each}~~ j = 1, \ldots , l \end{aligned}$$
(6)

Then, the optimization problem becomes:

$$\begin{aligned} \begin{array}{ll} \text {Minimize}~~&{} p^T Q p + p^T L \\ \text {subject to}~~&{} \\ &{} \sum _{i=1}^d w_i p_{ij} = n_j / n ~~ \text {for each}~~ j = 1, \ldots , l\\ &{} \sum _{j=1}^l p_{ij} =1 ~~ \text {for each}~~i = 1, \ldots , d \\ &{} p_{ij}\ge 0 ~~ \text {for each}~~i = 1, \ldots , d \text {~and~} j=1,\ldots , l \\ &{} p_{ij} = 0 ~~ \text {for each}~~ p_{ij} \in {{{\mathcal {N}}}} \\ \end{array} \end{aligned}$$
(7)

A solution of this optimization problem is, again, a set of probabilities \(p_{ij}\) that an instance of device i belongs to class j. Then, from these \(p_{ij}\) we need to compute the average number of instances for each pair (device, class). In the first formulation of the problem, this was achieved by multiplying \(p_{ij}\) by n/d. In the present situation, we need to multiply \(p_{ij}\) by n and by the weight of the device. Using the notation above, with \(n_{ij}\) the average number of instances of class j in device i, we have

$$\begin{aligned} n_{ij} = p_{ij} n w_i \end{aligned}$$

for all \(i=1, \ldots , d\) and \(j=1, \ldots , l\).

We can also observe that this definition is consistent because we can prove that \(\sum _{i=1}^d n_{ij}=n_j\) for any class j. Taking into account Eq. 6, we can write:

$$\begin{aligned} \sum _{i=1}^d n_{ij} = \sum _{i=1}^d p_{ij} n w_i = n \sum _{i=1}^d p_{ij} w_i = n n_j / n = n_j. \end{aligned}$$

This solution is implemented in the same way as for the previous problem. That is, using the description in Sect. 2.2. Both optimization problems just produce a set of probabilities and an average number of instances consistent with the constraints.
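Relative to the sketch given for problem (4), only two pieces change in this weighted variant (again an illustration with assumed names, not the authors' code): the class-sum rows of the equality matrix are weighted by \(w_i\) with right-hand side \(n_j/n\), and the expected counts become \(n_{ij} = p_{ij}\, n\, w_i\).

```python
# Weighted variant (problem (7)): only these pieces differ from the earlier sketch.
import numpy as np

def weighted_class_row(j, weights, l):
    """Row encoding sum_i w_i p_ij = n_j / n for class j (right-hand side n_j / n)."""
    d = len(weights)
    row = np.zeros(d * l)
    row[j::l] = weights
    return row

def expected_counts(probs, weights, n):
    """n_ij = p_ij * n * w_i; row i sums to roughly n * w_i instances."""
    return probs * n * np.asarray(weights)[:, None]
```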

2.4 An example: unbalanced case for the IRIS data set

Let us consider again the IRIS data set, but now with the goal of creating an unbalanced number of instances per device. For illustration, we consider the same parameters as above (i.e., 6 devices). In addition, we consider that devices have different weights (so that we expect them to have different numbers of instances): the first device has three times the weight of the last one, the second and the third twice the weight of the last one, and the fourth and the fifth have the same weight as the last one. More formally, we consider the following weights.

$$\begin{aligned} w = (w_1, w_2, w_3, w_4, w_5, w_6) = (3/10, 2/10, 2/10, 1/10, 1/10, 1/10) \end{aligned}$$

Then, for this optimization problem, using the following random matrix \(\textrm{A}\) of values \(\alpha _{ij}\):

$$\begin{aligned} \textrm{A}= \left( \begin{array}{ccc} 0.86342481 &{}0.97109034 &{}0.04367299 \\ 0.87511626 &{}0.05392879 &{}0.83590614 \\ 0.064926 &{}0.46682201 &{}0.759437 \\ 0.5029502 &{}0.03387364 &{}0.77129996\\ 0.92267684 &{}0.67803432 &{}0.31014253 \\ 0.06771505 &{}0.22552457 &{}0.62034422 \end{array} \right) \end{aligned}$$

we get the following probabilities \(p_{ij}\) and assignments of instances to devices \(n_{ij}\) (for \(i=1, \ldots , d\) and \(j=1, \ldots , l\)).

$$\begin{aligned} P= \left( \begin{array}{ccc} 0.0 &{}\quad 6.2211e-01 &{}\quad 3.7789e-01 \\ 4.0017e-01 &{}\quad 0.0 &{}\quad 5.9983e-01 \\ 4.9594e-01 &{}\quad 5.0406e-01 &{}\quad 0.0 \\ 0.0 &{}\quad 1.5414e-09 &{}\quad 9.9999e-01 \\ 9.9999e-01 &{}\quad 0.0 &{}\quad 4.1341e-06\\ 5.41108787e-01 &{}\quad 4.5889e-01 &{}\quad 0.0 \end{array} \right) \qquad N= \left( \begin{array}{ccc} 0.0 &{}\quad 27.9948 &{}\quad 17.0051 \\ 12.0052 &{}\quad 0.0 &{}\quad 17.9948 \\ 14.8782 &{}\quad 15.1217 &{}\quad 0.0 \\ 0.0 &{}\quad 2.3121e-08 &{}\quad 14.9999 \\ 14.9999 &{}\quad 0.0 &{}\quad 6.2012e-05 \\ 8.1166 &{}\quad 6.8834 &{}\quad 0.0 \\ \end{array} \right) \end{aligned}$$

It can be seen that instances are assigned to devices so that the number of instances per device follows the weights \(w_i\) for \(i=1, \ldots , d\). Observe that the row sums \(\sum _{j=1}^l n_{ij}\) are, respectively,

$$\begin{aligned} (45, 30, 30, 15, 15, 15), \end{aligned}$$

which, in this case, satisfy the proportions given above by

$$\begin{aligned} w=(3/10,2/10,2/10,1/10,1/10,1/10). \end{aligned}$$

In this example, there are a few probabilities with a value that is close to zero. This is caused by the weights assigned to the devices, but also by the random vector \(\textrm{A}\). Different choices of \(\textrm{A}\) will produce different probabilities; some of them will avoid these values close to zero and, thus, enforce diversity in these devices.

We now describe these results in words. The first device has the majority of the instances (45 instances): 28 in class \(c_2\) and 17 in class \(c_3\). Then, the second and third devices both have 30 instances. Device 2 has 12 instances of class \(c_1\) and 18 instances of class \(c_3\). Device 3 has 15 instances of class \(c_1\) and 15 instances of class \(c_2\). Finally, we have devices 4, 5, and 6, all with 15 instances. In our results only the last device has instances of two classes (8 of class \(c_1\) and 7 of class \(c_2\)). Devices 4 and 5 have instances of only one class: device 4 of class \(c_3\) (15 instances) and device 5 of class \(c_1\) (15 instances).

3 Computational complexity and experiments

In this section we discuss the computational complexity of our approach and some experiments we have performed to generate data sets.

3.1 Computational complexity of the approach

We have given our initial formulation of the problem in Eq. 4 and the revised version in Eq. 7. The optimization problem has been defined taking into account that the problem has l classes, with \(n_1, \ldots , n_l\) instances per class. Then, we have also considered the number of classes per device (cxd) and the number of copies of each type of device (nCopies) as input parameters of our approach.

It can be observed that the computational complexity of both definitions is the same. Both problems are quadratic with linear constraints. Both problems have the same number of variables and equations.

To solve this type of problem, quadratic programming solvers can be used. We have implemented our approach in Python and solved the optimization problem using the library cvxopt. In particular, we have used the function solvers.qp. This function requires the specification of the matrix and vector of the objective function, and the matrices and vectors of equalities and inequalities. The software is available at [14].

The optimization problem has one constraint for each class (i.e., l equations), one constraint for each device (i.e., d equations), inequalities for each probability (\(l \cdot d\) inequalities), and equalities for each \(p_{ij} \in {{{\mathcal {N}}}}\). The latter equations just set the probability to zero. In our case we have established the number of devices as

$$\begin{aligned} d = \textit{nCopies} \cdot \textit{osdd} = \textit{nCopies} \cdot \binom{l}{cxd} = \textit{nCopies} \cdot \frac{l!}{cxd!\,(l-cxd)!}. \end{aligned}$$

So, the total number of relevant constraints (i.e., ignoring the ones that set a probability to zero) is \(l + d + l \cdot d\), where d is defined as above.

The number of variables to be determined is naturally \(l \cdot d\).
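The following small helper (hypothetical, mirroring the counts above) reports the size of the optimization problem for given l, cxd, and nCopies:

```python
# Size of the optimization problem, ignoring the p_ij = 0 equalities.
from math import comb

def problem_size(l, cxd, n_copies=1):
    d = n_copies * comb(l, cxd)
    return {"variables": l * d,
            "equalities": l + d,       # one per class plus one per device
            "inequalities": l * d}     # one nonnegativity constraint per p_ij

print(problem_size(3, 2))    # IRIS: 9 variables, 6 equalities, 9 inequalities
print(problem_size(10, 4))   # MNIST, cxd = 4: 2100 variables
```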

It can be observed that the number of constraints depends mainly on the number of classes of the problem. More particularly, it is linear in the number of combinations built from the number of classes. In contrast, the number of instances does not affect the computational cost.

For problems with a limited number of classes, the solution of this optimization problem does not pose any computational difficulty.

3.2 Experiments

We have illustrated our approach with the Iris data set. This data set corresponds to a classification problem that consists of 150 instances described by 4 features and belonging to 3 classes. We have seen that, considering 2 classes for each device, there are only 3 different types of devices. Then, for nCopies=1 we have

  • \(l=3\) equations, one for each class,

  • \(d=3\) equations, one for each different type of device, and

  • \(l \cdot d=9\) inequalities, one for each probability (i.e., pair (class, device)).

Therefore, it is an optimization problem with 15 constraints and 9 variables.

We have also considered the case of MNIST. This data set consists of 60,000 training instances (images of 28x28 pixels each) that correspond to 10 classes. Therefore, \(l=10\). Then, d will depend on the number of classes we require for each device. We give in Table 2 the number of different types of devices that would be generated considering different numbers of classes in each device. The table also includes the number of variables of the optimization problem. As we have described above, the total number of constraints is the sum of the three values \(l=10\), d, and \(l \cdot d\).

Table 2 Number of different types of device d for the case of MNIST (10 classes) and different number of classes per device (cxd). The number of variables and the number of equalities/inequalities for the corresponding optimization problem are also given

For illustration, we give mean computation times of our implementation on a regular laptop (characteristics: lat7400n, 31.2 GiB, Intel Core i7-8665U CPU @ 1.90GHz x 8, 1.0 TB, Ubuntu 20.04.2 LTS, 64-bit) when we require devices to have instances of 4 and 5 different classes (i.e., cxd=4 and cxd=5). Note that these are the cases in which the optimization problem has the largest number of constraints and variables.

  • Case of cxd = 4. Mean execution time: 21.2033. Execution times of 5 different runs: (6.6630, 21.2373, 22.8042, 28.1049, 27.2072)

  • Case of cxd = 5. Mean execution time: 26.8747. Execution times of 5 different runs: (28.7177, 31.4002, 23.5477, 26.7277, 23.9802)

3.3 Simple extensions

In our analysis of the problem complexity we have assumed that the number of classes associated with each device is the same for all devices (and equal to cxd). Considering different numbers of classes for different devices (e.g., devices with cxd' classes for cxd' \(\ge \) cxd) will produce additional types of devices and the corresponding constraints, as sketched below. Nevertheless, the whole process will be similar to the one described in Sect. 2.
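For instance, mixing class subsets of several sizes yields the enlarged set of device types directly (a small sketch with assumed names):

```python
# Sketch of the extension: device types with different numbers of classes,
# obtained by mixing class subsets of several sizes.
from itertools import combinations

def mixed_device_types(l, cxd_values):
    types = []
    for cxd in cxd_values:
        types += list(combinations(range(l), cxd))
    return types                          # each entry: allowed classes of a device type

print(len(mixed_device_types(3, [2, 3])))   # IRIS classes: C(3,2) + C(3,3) = 4 types
```

The constraints of problem (4) or (7) are then built from this list of device types exactly as before.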

4 Discussion and research directions

In this paper we have presented an approach to generate non-independent and identically distributed data from a given data set. The approach is based on creating different data sets for different devices so that each device has only a subset of the classes. We have formulated this solution in terms of an optimization problem: a quadratic problem with linear constraints. We have provided two formulations: one in which all devices have the same expected number of instances, and a second in which we can generate different numbers of instances for different devices. The second problem is, naturally, a generalization of the first, and we have seen that it does not add complexity to the optimization problem.

The goal was to define a systematic way to create these data sets, in line with other standard machine learning approaches to partitioning data sets for testing and evaluation, such as generating sets for cross-validation/k-fold validation.

This approach has been defined for classification data sets. We have considered that all devices share the features of the data set; that is, our approach provides horizontally distributed data.

As future work, we consider alternatives to the use of random values \(\alpha _{ij}\) in Eq. 3. In particular, as a referee suggested, we may consider values diverging from 1/cxd (as used in Eq. 2). Another direction is the generation of non-i.i.d. partitions for vertically distributed data. A similar optimization problem can be defined by considering a partition of the features; selecting different features for each device will also provide non-i.i.d. data.