An ensemble learning model based on differentially private decision tree

Using differential privacy to provide privacy protection for classification algorithms has become a research hotspot in data mining. In this paper, we analyze the defects in the differentially private decision tree named Maxtree, and propose an improved model, DPtree. DPtree uses the Fayyad theorem to process continuous features quickly, and adjusts the privacy budget adaptively according to the sample category distribution in each leaf node. Moreover, to overcome the inevitable decline in the classification ability of differentially private decision trees, we propose an ensemble learning model for DPtree, namely En-DPtree. For the voting process of En-DPtree, we propose a multi-population quantum genetic algorithm, introducing immigration operators and elite groups to search for the optimal weights of the base classifiers. Experiments show that DPtree outperforms Maxtree, and that En-DPtree is consistently superior to other competitive algorithms.


Introduction
With the rapid development of information technology, in addition to the government, many companies now hold a large amount of data about citizens' personal information. As a powerful data analysis tool, data mining can identify and extract implicit, unknown, novel, and potentially useful knowledge and rules from large amounts of incomplete and noisy data. Data mining makes great contributions in scientific research, business decision-making, medical research, and other fields [1][2][3]. At the same time, it also produces the inevitable problem of privacy disclosure, which has attracted more and more attention from industry and society. For example, mining medical case records with data mining and machine learning can expose sensitive information, such as patients' diseases. Privacy protection technology can mitigate the privacy threat caused by data analysis; its main purpose is to enable analysis without disclosing private information. In recent years, many privacy protection technologies have emerged, such as k-anonymity, l-diversity, and t-proximity. K-anonymity [4] makes every individual record contained in an anonymized dataset indistinguishable from at least k − 1 other records. However, k-anonymity cannot prevent attribute disclosure, so attackers can obtain sensitive information through background knowledge attacks and consistency attacks [5,6]. The idea of l-diversity [7] is that the values of sensitive attributes should be diverse, so as to ensure that a user's sensitive information cannot be inferred from background knowledge. In a real dataset, however, attribute values are likely to be skewed or semantically similar, while l-diversity only guarantees diversity and does not account for semantic similarity among attribute values. Therefore, l-diversity is vulnerable to similarity attacks [6].
T-proximity [8] requires that the distribution of a sensitive attribute within a group be close to the distribution of that attribute in the whole dataset, with the distance not exceeding a threshold t. Because t-proximity can prevent attribute disclosure but not identity disclosure, k-anonymity and l-diversity may still be required in practice. Moreover, t-proximity incurs a large amount of information loss and is more difficult to generalize. In addition, there are some new attack models, such as the combined attack [9] and the foreground knowledge attack [10]. These new attack models pose a severe challenge to the effectiveness of the above methods. Differential privacy (DP) is a widely recognized, rigorous privacy protection technology, first proposed by Dwork [11]. It ensures that malicious adversaries cannot infer a user's sensitive information even if they know the published results. DP has become the de facto privacy standard around the world in recent years, with the U.S. Census Bureau using it in their Longitudinal Employer-Household Dynamics Program in 2008 [12], and the technology company Apple implementing DP in their latest operating systems and applications [13]. Applying DP to machine learning models can protect training data from model inversion attacks when model parameters are released.
Classification technology plays a key role in data prediction and analysis, and the decision tree (DT) is a typical representative of classification models. A DT is a non-parametric supervised learning method, and it makes no assumptions about the distribution of the underlying data [14]. However, the transparency of a DT can be exploited by attackers to steal personal information. Suppose two adjacent datasets, differing in at most one record, are used to train two trees. Adversaries can then infer sensitive personal information from the database by comparing the counting results. To solve the problem of privacy disclosure in DTs, scholars apply DP to the construction of DTs to realize privacy protection.
Although scholars have put forward many DPDT models, several problems remain to be overcome:
• When inner nodes use the Laplace mechanism to realize privacy protection, no matter what split criterion is adopted, many fine-grained counting queries are required, which inevitably leads to the accumulation of noise.
• In the process of privacy budget allocation, none of the DPDT models takes into account the importance of leaf nodes in the final classification prediction. When the difference between the number of samples in the largest category and in the second-largest category of a leaf node is small, adding more noise directly leads to erroneous classification results. Conversely, when the difference is large, the leaf node can bear more noise, which improves the privacy protection ability of the DPDT. Obviously, we should formulate a unique privacy budget allocation strategy for each leaf node.
• Many DPDT models can only deal with discrete attributes, and the others directly use the exponential mechanism to select partition points when discretizing continuous attributes. The exponential mechanism needs to traverse all potential partition points of each continuous attribute, which leads to low efficiency and consumes privacy budget.
• Bootstrap sampling makes the training sets intersect, so the total privacy budget needs to be evenly allocated to each base classifier. The smaller the privacy budget, the greater the noise, and the worse the classification performance of the base classifiers.
• In ensemble learning models based on DPDT, the voting strategy always uses majority voting or assigns weights by the accuracy of the base classifiers. To improve privacy protection, a DPDT has to introduce noise when constructing nodes, which leads to weak classification ability in some base learners. Therefore, it is necessary to set an appropriate weight for each base classifier to obtain an ensemble model with strong classification ability.
To overcome the above issues, we propose a DPDT model called DPtree. In addition, to improve the classification ability of DPtree, we propose an ensemble learning model based on DPtree, called En-DPtree. The main contributions of this article are as follows:
• The exponential mechanism is utilized in inner nodes, which can obtain the split attribute with only one calculation instead of multiple counting queries, thereby avoiding noise accumulation.
• We formulate an adaptive privacy budget allocation strategy for each leaf node according to its sample category distribution, which not only ensures that the classification result of each leaf node is not distorted, but also improves the privacy protection ability of DPtree.
• The Fayyad theorem is applied to quickly locate the best partition points of continuous attributes by comparing the adjacent boundary points of different categories, which greatly improves computational efficiency and does not consume privacy budget.
• To economize on the privacy budget, we use sampling without replacement to obtain the training sets. Therefore, the privacy budget of each DPtree equals the total privacy budget.
• We design a multi-population quantum genetic algorithm (MPQGA) to search for an appropriate weight for each base classifier, so as to improve the classification ability of En-DPtree. In MPQGA, the individuals of each population evolve toward the optimal solution of their own population, which reduces the possibility of the algorithm finally falling into a local optimum. In addition, we design immigration operators and elite groups to avoid premature convergence of each population.
The rest of this paper is organized as follows. The section "Related work" presents previous works related to DPDT. The section "Differential privacy" introduces the fundamentals of DP. The section "Analysis on MaxTree algorithm" analyzes MaxTree. The section "Differentially private decision tree" describes the specific implementation of DPtree. The section "The ensemble learning based on DPtree" describes En-DPtree. The section "Time complexity analysis" gives the time complexity analysis of DPtree and En-DPtree. The section "Conclusion" presents the conclusion.

Related work
The representative schemes among DPDT models are SuLQ-based ID3, DiffP-C4.5, and DiffGen. In 2005, Blum et al. first introduced DP into DTs based on the SuLQ framework and proposed SuLQ-based ID3 [15]. This algorithm achieves privacy protection by adding Laplace noise to the query results in ID3. However, adding Laplace noise to each count in the information gain computation leads to a large amount of accumulated noise and wastes the privacy budget. To address these problems, Friedman and Schuster proposed PINQ-based ID3 [16]. This algorithm runs ID3 on multiple mutually exclusive data subsets, so it can use the privacy budget effectively. However, PINQ-based ID3 still needs to add Laplace noise in the calculation of information gain, which does not significantly reduce the noise. Friedman and Schuster further proposed DiffP-ID3 using the exponential mechanism [17]. Since the exponential mechanism can evaluate attributes with only one query, DiffP-ID3 reduces the amount of noise introduced. Because ID3 can only deal with discrete data, Friedman and Schuster also proposed DiffP-C4.5 [17], which can deal with continuous data based on the exponential mechanism. Different from the above methods, DiffGen [18] uses a classification tree to divide all records in the dataset into leaf nodes from top to bottom, and then adds Laplace noise to the count values in leaf nodes. DiffGen improves classification accuracy, but each classification attribute corresponds to a classification tree; when the dimension of the sample attributes is very large, this leads to inefficient selection based on the exponential mechanism and may exhaust the privacy budget. In addition, there are many other DPDT models. During the construction of a DT, the number of instances per layer usually decreases with depth. If each layer is given the same privacy budget, the signal-to-noise ratio across the DT inevitably becomes unbalanced. Therefore, Liu et al.
[19] designed a budget allocation strategy in which less noise is added at greater depths to balance true counts against noise. To reduce the influence of the noise introduced by the Laplace mechanism, Wu et al. [20] designed top-down and bottom-up approaches to reduce the number of nodes in a random DT. However, no matter which strategy is used to build a DPDT, introducing noise during node generation cannot be avoided, resulting in weak and unstable classification ability. How to protect privacy while also improving the classification ability of the DT is a difficult problem.
Luckily, ensemble learning can in most cases significantly improve the generalization ability of base learners by training multiple learners and combining their results. Therefore, scholars use ensemble learning to improve the classification performance of DPDT. Random forest under DP was first proposed by Jagannathan et al. [21]. In this model, the base classifier is an ID3 DPDT. However, they found that obtaining reasonable privacy protection required a great loss of prediction accuracy. Therefore, they proposed an improved ensemble learning model based on random DTs, which we refer to as En-RDT. Fletcher and Islam [22] proposed a DP decision forest that took advantage of a theorem for the local sensitivity of the Gini index. In addition, they used the theory of signal-to-noise ratios to automatically tune the model parameters [23]. However, the split features are randomly selected, which might make the trees worse. Patil and Singh [24] designed a random forest algorithm satisfying DP, called DiffPRF. DiffPRF first uses entropy to discretize continuous attributes, and then uses DiffP-ID3 as the base learner to generate the random forest. Subsequently, Yin et al. [25] proposed DiffP-RFs based on DiffPRF. DiffP-RFs does not need to preprocess the dataset, and it extends DiffPRF, which can only deal with discrete attributes, with a monotonic privacy budget allocation strategy. Compared with random forests, boosting with DP has rarely been researched. Privacy-preserving boosting usually uses distributed training: each party uses its own data to train a DP boosting classifier, and the classifiers are finally aggregated through a trusted third party without sharing the datasets [26][27][28]. This method allows users to define privacy levels according to their own needs, but the training data are often limited. Li et al. [29] proposed gradient boosting based on DP.
They filter the data based on gradients and use geometric leaf clipping to ensure a smaller sensitivity bound. Shen [30] proposed a DP-AdaBoost using single-layer ID3. This algorithm does not apply noise to the counting function directly, but considers the weight of each record at the same time. However, it ignores the influence of tree depth on classification ability. Jia and Qiu [31] proposed a DP-AdaBoost based on CART, and this model can handle continuous features.

Differential privacy
DP is a strict privacy definition against the individual privacy leakage that guarantees the outcome of a calculation to be insensitive to any particular record in the dataset. In the following, we introduce the definition of DP and two important theorems.
Definition 1 (Differential privacy [11]) We say a randomized computation F provides ε-DP if, for any adjacent datasets D_1 and D_2 with symmetric difference |D_1 Δ D_2| = 1, and any set of possible outcomes S ⊆ Range(F),

Pr[F(D_1) ∈ S] ≤ e^ε × Pr[F(D_2) ∈ S].    (1)

The parameter ε is called the privacy budget and is inversely proportional to the strength of privacy protection.
Definition 2 (Sensitivity [32]) Given an arbitrary function f : D → R^d, the sensitivity of f is

Δf = max_{D_1, D_2} || f(D_1) − f(D_2) ||_1,

where D_1 and D_2 differ in one record and d is the dimension of the function f.

Theorem 1 (Laplace mechanism [32]) Given an arbitrary function f : D → R^d over an arbitrary domain D, the mechanism F provides ε-DP if F satisfies

F(D) = f(D) + Lap(Δf/ε)^d,

where the noise Lap(Δf/ε) is drawn from a Laplace distribution with scale Δf/ε, and d is the dimension of the function f.
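For illustration, the Laplace mechanism of Theorem 1 can be sketched in Python for a count query, which has sensitivity 1; the function names are ours and not part of any cited implementation:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from Lap(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(records, predicate, epsilon: float) -> float:
    """epsilon-DP count query: a count has sensitivity 1, so the
    Laplace mechanism adds Lap(1/epsilon) noise to the true count."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Noisy count of positive labels; the true count here is 3.
data = [("a", 1), ("b", 0), ("c", 1), ("d", 1)]
print(noisy_count(data, lambda r: r[1] == 1, epsilon=1.0))
```

A smaller epsilon yields a larger noise scale 1/epsilon, matching the statement that the privacy budget is inversely related to protection strength.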
Theorem 2 (Exponential mechanism [33]) Given a random mechanism F whose input is a dataset D and whose output is an entity object r ∈ Range, let q(D, r) be a score function that assigns each output r a score, and let Δq be the sensitivity of the score function. Then the mechanism F maintains ε-DP if F selects r with probability proportional to

exp( ε q(D, r) / (2Δq) ).
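A minimal sketch of the exponential mechanism follows; the candidate attributes and their scores are hypothetical stand-ins for information gain rates, not values from the paper:

```python
import math
import random

def exponential_mechanism(dataset, candidates, score_fn, sensitivity, epsilon):
    """Select one candidate r with probability proportional to
    exp(epsilon * q(D, r) / (2 * sensitivity)), as in Theorem 2."""
    scores = [score_fn(dataset, r) for r in candidates]
    m = max(scores)  # subtract max for numerical stability; same distribution
    weights = [math.exp(epsilon * (s - m) / (2.0 * sensitivity)) for s in scores]
    total = sum(weights)
    pick = random.uniform(0.0, total)
    acc = 0.0
    for r, w in zip(candidates, weights):
        acc += w
        if pick <= acc:
            return r
    return candidates[-1]

# Hypothetical per-attribute scores standing in for information gain rates:
gain_rate = {"age": 0.12, "income": 0.40, "zip": 0.05}
attr = exponential_mechanism(None, list(gain_rate), lambda D, r: gain_rate[r],
                             sensitivity=1.0, epsilon=1.0)
print(attr)
```

Note that one invocation evaluates all candidates with a single draw, which is why the mechanism avoids the per-count noise of the Laplace approach in inner nodes.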

Analysis on MaxTree algorithm
The Maxtree algorithm [19] proposed by Liu et al. is shown in Algorithm 1. When constructing the DT, it introduces noise into the relevant counting results through the Laplace mechanism to realize privacy protection. Generally, as the tree depth increases, the node counting results decrease accordingly. If all nodes are allocated the same privacy budget, the signal-to-noise ratio of nodes at different depths is necessarily unbalanced. To overcome this defect, Maxtree allocates more privacy budget to deeper nodes. Specifically, assuming that only k attributes are used to build branch nodes, 1/(k − i) shares of the privacy budget are allocated to each inner node of the ith layer and 1 share to each leaf node. The total number of privacy budget shares of the DT is therefore

s_t = Σ_{i=0}^{k−1} 1/(k − i) + 1.

Let ε denote the total privacy budget of the DT; then Maxtree allocates (ε/s_t) × 1/(k − i) privacy budget to each inner node in the ith layer. At this point, there are k − i attributes that need to be evaluated through the attribute evaluation function, so the budget of each inner node must be divided again among these candidate attributes. In internal nodes, the use of the Laplace mechanism must involve counting queries. The more counting queries there are, the more shares the privacy budget is divided into, and the greater the noise disturbance for each query. Maxtree selects the maximum label vote rather than information gain in the attribute evaluation process, which relatively reduces the number of queries. However, after allocating the privacy budget to each layer, Maxtree still needs to divide the budget again among the features to be evaluated. Obviously, the use of the Laplace mechanism in internal nodes cannot fundamentally avoid quadratic partition of the privacy budget.
In leaf nodes, the query datasets do not intersect, so the privacy budget allocated to leaf nodes does not need to be divided again. Because leaf nodes are the key to classification, Maxtree allocates the most privacy budget to each leaf node, that is, one full share. However, the larger the privacy budget, the less noise is generated, and the weaker the privacy protection capability. Therefore, the privacy budget strategy in the leaf nodes of Maxtree leads to weak privacy protection ability.
On the whole, there are several difficulties in constructing DPDT. First, the counting queries of attributes in internal nodes need to subdivide privacy budget, resulting in high noise and low accuracy of DT. Second, assigning the same share of privacy budget to all leaf nodes only takes into account the reduction of noise, but leads to weak privacy protection ability.

Differentially private decision tree
In this section, we design an improved DPDT algorithm called DPtree. DPtree uses the Fayyad theorem to quickly discretize continuous attributes. Besides, DPtree uses the exponential mechanism and the Laplace mechanism to construct inner nodes and leaf nodes, respectively. Each inner node has the same privacy budget, and each leaf node adaptively adjusts its privacy budget according to the sample category distribution. The generation process of DPtree can not only ensure that the classification results are not distorted, but also improve the privacy protection ability. The key to constructing a DPDT lies in the generation of the tree structure and the allocation of the privacy budget. In the section "Generation of tree structure", we introduce some important problems involved in the generation of the tree structure. In the section "Privacy budget allocation strategy", we introduce a practical privacy budget allocation strategy. The privacy analysis is given in the section "Privacy analysis".

Algorithm 1 Maxtree algorithm

Require: D denotes a training set, A = {A_1, A_2, ..., A_d} denotes a set of attributes, C denotes the class attribute, k denotes the pre-defined number of attributes, s_t denotes the total budget shares, and ε denotes the privacy budget
Ensure: Decision tree with differential privacy protection
1: procedure MAXTREE(D, A, C, k, ε)
2: Partition(D, ∀c ∈ C)
9: ∀c ∈ C: classify the node according to argmax_c(n_c + Lap(1/ε))
10: end if
11: for every attribute A ∈ A do
12: ∀c ∈ C and ∀a ∈ A: score_A = max(n_{a,c}) + Lap(1/ε_a)
13: end for
14: return a tree with a root node labeled A and edges labeled a from 1 to |A|, each going to Subtree_a
18: end procedure

Generation of tree structure
In the process of growing a DT, two kinds of nodes are involved, namely inner nodes and leaf nodes. According to the analysis of Maxtree, if inner nodes use the Laplace mechanism, then no matter which rule is used to evaluate attributes, some fine-grained queries are inevitably needed. Fine-grained queries lead to the accumulation of noise and ultimately harm classification accuracy. Therefore, we choose the exponential mechanism in inner nodes, with the information gain rate as the quality function. The exponential mechanism does not need multiple counting queries; it needs only one calculation to acquire the split attribute. Obviously, the exponential mechanism economizes on the privacy budget and overcomes the inherent defects of the Laplace mechanism in inner nodes. In leaf nodes, we still use the Laplace mechanism to compare the number of samples in each category.
In addition, continuous attributes are usually involved in the construction of a DT. The C4.5 algorithm [34] has a great advantage over the ID3 algorithm: it can deal with both discrete and continuous attributes. C4.5 utilizes dichotomy to discretize continuous attributes. The specific process is as follows: assume that there are m different values of a continuous attribute A in dataset D, and sort them in ascending order to get {a_1, a_2, ..., a_m}. Calculate the middle point t = (a_i + a_{i+1})/2 of every two adjacent elements; each t is a potential partition point (there are m − 1 potential partition points in total). Among the m − 1 potential partition points, the point with the largest information gain rate is the optimal partition point of attribute A. Obviously, C4.5 needs to traverse all potential partition points of each continuous attribute, which leads to low efficiency. How to quickly locate the best partition point of continuous attributes has become an urgent problem to be solved.
We use the Fayyad theorem [35] to quickly locate the best partition point of each continuous attribute. According to the Fayyad theorem, the best partition point always appears at a boundary point between two adjacent heterogeneous instances. Therefore, there is no need to compare every threshold point; only the adjacent boundary points of different categories need be compared to discretize continuous attributes. Suppose the instance attribute values after ascending sorting are {a_1, a_2, ..., a_9}, in which the first three samples belong to category c_1, the middle three belong to category c_2, and the last three belong to category c_3. According to the Fayyad theorem, only the boundary points a_3 and a_6 are potential partition points, and the potential partition point with the largest information gain rate is the best partition point of attribute A. If we use the dichotomy, there are eight potential partition points. Obviously, the continuous attribute discretization method based on the Fayyad theorem can greatly improve efficiency.
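The boundary-point pruning described above can be sketched as follows; this is a simple illustration under the assumption that candidates are midpoints between adjacent samples of different classes:

```python
def fayyad_candidates(values, labels):
    """Candidate split thresholds per the Fayyad theorem: only midpoints
    between adjacent sorted samples whose class labels differ are kept."""
    pairs = sorted(zip(values, labels))
    candidates = []
    for (v1, c1), (v2, c2) in zip(pairs, pairs[1:]):
        if c1 != c2 and v1 != v2:
            candidates.append((v1 + v2) / 2.0)
    return candidates

# Nine sorted values in three class runs, as in the example above:
vals = [1, 2, 3, 4, 5, 6, 7, 8, 9]
labs = ["c1"] * 3 + ["c2"] * 3 + ["c3"] * 3
print(fayyad_candidates(vals, labs))  # two candidates instead of eight
```

Only the two class boundaries survive, so the information gain rate needs to be evaluated at two points rather than eight.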
During the construction of DPtree, it is necessary to determine whether the current node is a leaf node or an inner node. If all the samples of the current node belong to the same category, or the attribute set to be selected is empty, or the current node reaches the maximum depth, the current node is a leaf node. In addition, Laplace noise is added to the number of samples contained in the current node, that is, N_node = NoisyCount_{ε_0}(node). If N_node is smaller than the threshold τ, the current node is a leaf node; otherwise, the current node is an inner node. For a leaf node, the Laplace mechanism is used to add noise to the count of every category of the node, and the node is classified according to argmax_c(n_c + Lap(1/ε_l)). For an inner node, the exponential mechanism is used to select the split attribute, that is, A = ExpMech_{ε_0}(A, q), where the score function q of the exponential mechanism is the information gain rate. On the whole, we grow a DPtree model as shown in Algorithm 2, and an overview of DPtree is shown in Fig. 1. DPtree discretizes continuous attributes based on the Fayyad theorem, and adopts the exponential mechanism and the Laplace mechanism in internal nodes and leaf nodes, respectively.

Privacy budget allocation strategy
For inner nodes, when we use the exponential mechanism to select split attributes, the selection has little correlation with the number of instances, so each inner node is allocated an equal privacy budget ε_0. In addition, since the classification of leaf nodes directly affects the final classification results, enough privacy budget is allocated to leaf nodes, that is, 2ε_0, so that less noise is introduced and the classification results are not distorted. From the above analysis of Maxtree, allocating the same, sufficient privacy budget to every leaf node only considers noise reduction, but leads to weak privacy protection ability. Therefore, we should formulate a unique privacy budget allocation strategy for each leaf node.
When there is a large difference between the number of samples in the maximum category and in the submaximum category of a leaf node, the classification result will not be distorted even if large noise is added. In this case, less privacy budget can be allocated to the node, so as to improve the privacy protection ability. Suppose there are two leaf nodes, n_a and n_b, each containing two categories of data. In the leaf node n_a, ten samples belong to category c_1 and two samples belong to category c_2. In the leaf node n_b, seven samples belong to category c_1 and five samples belong to category c_2. Suppose the Laplace noise added to the counts of category c_1 and category c_2 in a leaf node is L_1 and L_2, respectively. When L_2 < 8 + L_1, the classification result of the leaf node n_a is not distorted; when L_2 < 2 + L_1, the classification result of the leaf node n_b is not distorted. Obviously, the leaf node n_a can tolerate more noise than the leaf node n_b. Therefore, we formulate an adaptive privacy budget allocation strategy for each leaf node according to its sample category distribution. Let α be the ratio of the size of the submaximum category to that of the maximum category; then the privacy budget of the leaf node is ε_l = α × 2ε_0. On the one hand, when the difference between the submaximum and the maximum is very small, the privacy budget is almost equal to 2ε_0, which ensures that the classification result is not distorted. On the other hand, the greater the difference between them, the lower the privacy budget and the stronger the privacy protection ability. Therefore, this adaptive privacy budget allocation scheme can not only ensure that the classification result of each leaf node is not distorted, but also improve the privacy protection ability.
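The adaptive allocation rule ε_l = α × 2ε_0 can be sketched directly; the function name and the choice of ε_0 = 0.5 below are illustrative:

```python
def leaf_budget(class_counts, epsilon0):
    """Adaptive leaf budget: alpha = (second-largest count) / (largest count),
    and the leaf receives eps_l = alpha * 2 * epsilon0."""
    counts = sorted(class_counts.values(), reverse=True)
    second = counts[1] if len(counts) > 1 else 0
    alpha = second / counts[0] if counts[0] > 0 else 1.0
    return alpha * 2.0 * epsilon0

# The two leaves from the example: n_a can tolerate more noise than n_b,
# so it is given a smaller budget (stronger protection).
eps_a = leaf_budget({"c1": 10, "c2": 2}, epsilon0=0.5)  # alpha = 0.2
eps_b = leaf_budget({"c1": 7, "c2": 5}, epsilon0=0.5)   # alpha = 5/7
print(eps_a, eps_b)
```

For n_a the budget is 0.2, well below the 2ε_0 = 1.0 ceiling, while for n_b it stays close to the ceiling, preserving the classification result.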

Privacy analysis
DP has two important properties, namely sequential composition and parallel composition, which play an important role in privacy budget allocation. Suppose a randomized mechanism F_i applied to a subset of database D provides ε_i-DP. When the subsets overlap, sequential composition implies that the whole mechanism provides (Σ_i ε_i)-DP [36]. When the subsets are disjoint, parallel composition implies that the whole mechanism provides (max_i ε_i)-DP [36].

Theorem 3 DPtree satisfies ε-DP.
Proof Assume adjacent datasets D and D′ with symmetric difference |D Δ D′| = 1, and let F(D) and F(D′) denote the outputs of the random algorithm on the two datasets. Each attribute A_x has |r_x| division methods. For the exponential mechanism used in an inner node with budget ε_0, the ratio Pr[F(D) = A_x] / Pr[F(D′) = A_x] is bounded by e^{ε_0}, so the degree of differential privacy protection for each inner node is ε_0. Since the maximal depth is d, the total privacy budget of inner nodes is d·ε_0.
Laplace noise is added to the class counting function Count(D), and the output after disturbance is denoted as NoisyCount(D). Each leaf node consumes a budget of at most 2ε_0, and the query datasets of leaf nodes are disjoint, so the total privacy budget of leaf nodes is 2ε_0. Besides, each node of DPtree needs to be judged as an inner node or a leaf node, and this process consumes (d + 1)·ε_0 in total. The total privacy budget of DPtree is therefore d·ε_0 + 2ε_0 + (d + 1)·ε_0 = (2d + 3)·ε_0; with ε_0 set to ε/(2d + 3), DPtree satisfies ε-DP.

The ensemble learning based on DPtree
DP improves the security of the DT, but accuracy decreases. By combining multiple DT models into an ensemble, a decision forest reduces, to a certain extent, the impact of the large variance of DT models on generalization performance. Moreover, since a greedy strategy is used in the construction of each DT, a decision forest can obtain better performance through ensembling. Therefore, we use the bagging technique to build the ensemble model based on DPtree.
Generally, the bagging technique [37] uses bootstrap sampling to obtain a set of training sets {D_1, D_2, ..., D_M}; each DT is trained on one of these training sets, and test samples are then predicted by a voting method. Assume the total privacy budget is ε. Because the training sets intersect, the privacy budget allocated to each DT is only ε/M. To economize on the privacy budget, we instead use sampling without replacement to obtain the training sets {D_1, D_2, ..., D_M}. Therefore, the privacy budget of each DT is ε. Furthermore, the weighted voting scheme usually sets the weights manually or uses the confidence of base learners as the weights, which are usually not optimal. The weights of base classifiers play a great role in the final classification results, and it is necessary to obtain a better set of weights by searching the weight space. Therefore, we design an intelligent optimization algorithm to generate a set of better weights for the ensemble model.
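The disjoint-subset construction that enables parallel composition can be sketched as follows; the round-robin partition after shuffling is one simple way to sample without replacement, not a detail prescribed by the paper:

```python
import random

def disjoint_subsets(dataset, m, seed=0):
    """Split the training set into M pairwise disjoint subsets by sampling
    without replacement, so each base DPtree can spend the full budget
    epsilon (parallel composition) instead of epsilon / M."""
    rng = random.Random(seed)
    shuffled = list(dataset)
    rng.shuffle(shuffled)
    return [shuffled[i::m] for i in range(m)]

parts = disjoint_subsets(range(12), m=3)
assert sum(len(p) for p in parts) == 12           # every record used once
assert len(set().union(*map(set, parts))) == 12   # subsets are disjoint
```

Because the subsets do not overlap, Theorem 4's argument applies: the whole forest still consumes only ε.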
The genetic algorithm (GA) [38] is good at solving global optimization problems, and it can efficiently jump out of local optima to find the global optimum. However, when the selection, crossover, and mutation operators are not appropriate, GA costs many iterations and suffers from slow convergence, premature convergence, and many other defects. In recent years, using quantum theory to further improve intelligent optimization algorithms has become a very popular research direction [39,40]. The quantum genetic algorithm (QGA) [41] is a fusion algorithm that introduces efficient quantum computing into GA: it uses quantum states for encoding and the quantum rotation gate to perform genetic operations. Compared with ordinary GA, QGA can preserve population diversity thanks to quantum superposition states, and it simplifies the computation and reduces the number of operation steps through the quantum rotation gate. With the help of QGA, we can generate appropriate weights for each DT, so as to further improve the performance of the ensemble model.
In the evolution process of QGA, the individuals of each iteration are updated in the direction of the current optimal solution. If the current optimal solution is a local solution, the algorithm may eventually fall into a local optimal solution. Therefore, we design a multi-population strategy. The individuals of each population evolve to the optimal solution of their own population, which will reduce the possibility of the algorithm finally falling into the local solutions. In addition, an immigration operation is designed to exchange the individuals with the largest fitness and the smallest fitness between populations, which can avoid premature convergence of each population.
When we use MPQGA to optimize the M weights of base classifiers, each chromosome is encoded by qubits as a sequence of amplitude pairs

q_j^t = [(α_1, β_1), (α_2, β_2), ..., (α_{kM}, β_{kM})],

where q_j^t is the jth chromosome in the population of the tth generation, k represents the number of qubits encoding each gene, and M is the number of genes on the chromosome, which is equal to the number of weights to be optimized. The qubit code of each individual in the population is initialized to (α, β) = (1/√2, 1/√2), which means that all possible states expressed by each chromosome are equally probable. Besides, the prediction output of each DPtree T_i represents the output of T_i on category c. The fitness function in the MPQGA is defined in terms of E and T(x), which are the error and the prediction label of the ensemble model, respectively. The ensemble model based on MPQGA is shown in Algorithm 3, and an overview of it is shown in Fig. 2.
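A minimal sketch of the qubit encoding and measurement steps is given below; the constants K and M and the mapping from a K-bit gene to a weight in [0, 1] are our assumptions for illustration:

```python
import math
import random

K = 4   # qubits per gene (assumed)
M = 3   # genes, i.e., one weight per base classifier (assumed)

def init_chromosome():
    """Each qubit is an amplitude pair (alpha, beta) with alpha^2 + beta^2 = 1;
    initializing both to 1/sqrt(2) makes every bit string equally likely."""
    a = 1.0 / math.sqrt(2.0)
    return [[(a, a) for _ in range(K)] for _ in range(M)]

def measure(chromosome, rng):
    """Collapse each qubit to 0/1 (probability of 1 is beta^2), then map
    each K-bit gene to a weight in [0, 1]."""
    weights = []
    for gene in chromosome:
        bits = [1 if rng.random() < beta * beta else 0 for (_, beta) in gene]
        value = sum(b << i for i, b in enumerate(bits))
        weights.append(value / (2 ** K - 1))
    return weights

rng = random.Random(0)
w = measure(init_chromosome(), rng)
assert len(w) == M and all(0.0 <= x <= 1.0 for x in w)
```

In the full MPQGA, the amplitudes would subsequently be steered by the quantum rotation gate toward the best individual of each population; that update step is omitted here.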

Theorem 4 En-DPtree satisfies ε-DP.
Proof We have proven that DPtree satisfies ε-DP. Besides, each private tree depends only on its corresponding training subset; different trees are built on disjoint data subsets. According to the parallel composition of DP, the privacy budget consumed by all the base learners is still ε. Besides, because DP is immune to post-processing, the voting step of the ensemble model does not weaken the degree of privacy protection. Obviously, En-DPtree satisfies ε-DP.

Algorithm 3 The ensemble of DPtree based on MPQGA(En-DPtree)
Require: D_tr denotes a training set, D_te denotes a test set, A = {A_1, A_2, ..., A_d} denotes a set of attributes, C denotes the class attribute, d denotes the maximal tree depth, and ε denotes the privacy budget
Ensure: Labels of D_te
1: Generate training subsets D_tr = {D_1, D_2, ..., D_M} from D_tr by sampling without replacement
2: for every training subset D_i in D_tr do
3:   T_i = DPtree(D_i, A, C, d, ε)
4: end for
5: Initialize each population and generate chromosomes encoded by qubits
6: repeat
7:   Calculate the fitness function value of each population
8:   Renew each population by the quantum rotation gate
9:   Perform the immigration operation between populations
10:  Put the best individual of each population into the elite group
11: until the maximum number of iterations is reached or the best individual in the elite group remains stable
12: Decode the optimal individual to obtain the weights w* = {w_1, w_2, ..., w_M}
13: Classify D_te by the weighted voting method based on w*
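The final weighted-voting step of Algorithm 3 can be sketched as follows. The function name `weighted_vote` and the array layout are illustrative assumptions; the weights are those decoded from the optimal MPQGA individual.

```python
import numpy as np

def weighted_vote(predictions, weights, n_classes):
    """Final ensemble label by weighted voting.

    predictions[i, n] = class index predicted by base classifier T_i
    for test sample n; weights is the decoded optimum w*.
    """
    M, N = predictions.shape
    scores = np.zeros((N, n_classes))
    for i in range(M):
        # Each classifier adds its weight to the score of its predicted class.
        scores[np.arange(N), predictions[i]] += weights[i]
    return scores.argmax(axis=1)
```

Because this step only post-processes the (already private) base predictions, it consumes no additional privacy budget, which is exactly the post-processing argument used in the proof of Theorem 4.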

Time complexity analysis
Suppose the maximal depth of DPtree is d, the number of attributes is |A|, and the number of records is |D|. When the optimal splitting attribute is selected at a branch node, the time complexity is O(|A| · |D|). Besides, the time complexity of privacy budget allocation is O(1). Therefore, the time complexity of DPtree is O(d · |A| · |D|). In En-DPtree, M DPtree models need to be constructed, so the time complexity of En-DPtree is O(M · d · |A| · |D|).

Performance evaluation
To evaluate the performance of the proposed En-DPtree, we select four datasets from the UCI Machine Learning Repository [42], as shown in Table 1. In this section, we select accuracy, F1, and micro-F1 to evaluate the classification ability of the classifiers. In the following, we first analyze the impact of the privacy budget allocation strategy on classification performance by comparing our proposed DPtree and En-DPtree with Maxtree [19], Maxforest [19], En-RDT [21], DiffP-RFs [25], DP-AdaBoost, and AdaBoost. Second, we test the effect of the ensemble learning method based on MPQGA. Finally, we examine how the classification accuracy of En-DPtree changes with the number of iterations of MPQGA.

Comparison with other competitive classifiers
The privacy budget is a measure of privacy protection capability. The smaller the privacy budget, the stronger the privacy protection of the algorithm, but the classification ability is usually reduced because a large amount of noise is introduced. Figures 3 and 4 show the experimental results of DPtree, En-DPtree, Maxtree, Maxforest, En-RDT, DiffP-RFs, DP-AdaBoost, and AdaBoost on the datasets in Table 1, with the privacy budget ranging from 0.1 to 1.
Obviously, in the binary classification tasks, the performances of the classification algorithms on Mushroom are better than those on Adult. The likely reason is that all attributes of Mushroom are discrete, while Adult has both discrete and continuous attributes, so each algorithm must discretize the continuous attributes of Adult before classifying. The discretization process may lose underlying information or extract features insufficiently, reducing classification ability. In addition, if an algorithm consumes part of the privacy budget while discretizing continuous attributes, less budget remains for selecting split attributes, which also affects its classification accuracy.
There is no differential privacy involved in the AdaBoost algorithm, so its classification capability remains unchanged. It can be seen from Figs. 3 and 4 that as the privacy budget increases, the classification ability of each differentially private classifier increases on the whole, because a larger privacy budget means less noise in each classifier. However, this also weakens the classifier's privacy protection. Therefore, in practical applications, we usually need to sacrifice some classification accuracy to ensure the security of the classification model. As the privacy budget grows, the classification ability of En-RDT strengthens rapidly. Compared with the other algorithms, En-RDT performs relatively well on Mushroom and Nursery but relatively poorly on Adult and Car. The likely reason for this instability is that every DT of En-RDT is constructed by randomly selecting split attributes. Besides, Maxtree and DPtree have relatively poor classification ability: they are single classifiers, and the introduction of DP adds a large amount of noise to the node generation process of the trees, which interferes with the results. The classification ability of Maxforest and En-DPtree is greatly improved over Maxtree and DPtree. Ensemble learning comprehensively considers the outputs of multiple base learners through an appropriate voting scheme, so training multiple learners usually improves on the classification ability of the individual base learners. The larger the privacy budget ε, the closer En-DPtree's classification capability is to that of AdaBoost. On the whole, En-DPtree has the strongest classification ability.

Effect of the weighted voting scheme based on MPQGA
To examine the necessity of the weighted voting scheme based on MPQGA, we perform experiments by DPtree, DPtree-1, DPtree-2, and En-DPtree based on the above datasets. DPtree-1 and DPtree-2, respectively, mean that the voting scheme of ensemble learning is based on the majority voting and original QGA.
The classification results are shown in Figs. 5 and 6. The classification ability of DPtree-1, DPtree-2, and En-DPtree is always better than that of DPtree, so designing an ensemble learning process for DPtree is necessary. Moreover, En-DPtree has the strongest classification ability, and DPtree-2 outperforms DPtree-1 on these datasets. QGA can search for appropriate weights for the base classifiers through its evolution process, which is why DPtree-2 obtains better classification ability than DPtree-1. Besides, compared with QGA, which evolves only one population, MPQGA evolves multiple populations, so it has stronger global search capability and helps En-DPtree achieve the strongest classification ability. These results confirm that the improvements we designed in En-DPtree are effective.

Effect of the number of iterations in MPQGA
To illustrate the effect of the number of iterations of MPQGA on finding better weights, we run En-DPtree on the above datasets, as shown in Figs. 7 and 8. The horizontal axis is the number of iterations of MPQGA, and the vertical axis is the classification accuracy, F1, or micro-F1 of En-DPtree. It can be seen that as the number of iterations increases, the classification ability of En-DPtree shows an overall upward trend. In addition, the classification ability of En-DPtree fluctuates strongly when the number of iterations is low. The likely reason is that MPQGA cannot find a stable evolution point in the early stage of iteration and still needs a series of gene changes to reach a stable state. In general, MPQGA can obtain the optimal weights within a certain number of iterations. Therefore, using MPQGA in En-DPtree can find appropriate weights for the base classifiers, thereby improving the classification ability of En-DPtree.

Conclusion
By analyzing the problems existing in Maxtree, we propose a new privacy budget allocation strategy and introduce quantum ensemble learning to form the En-DPtree scheme. Compared with existing works, we introduce the Fayyad theorem to quickly locate the best partition points of continuous attributes. In addition, we design an adaptive privacy budget allocation strategy for each leaf node according to its sample category distribution, which not only ensures that the classification result of each leaf node is not distorted, but also improves the privacy protection ability. Moreover, we design MPQGA to generate a set of better weights for the ensemble model, so as to improve the classification performance of En-DPtree. Finally, we carry out several experiments on real datasets. The experimental results show that the classification ability of En-DPtree is usually superior to other state-of-the-art classifiers. Besides, En-DPtree can obtain the optimal weights within a certain number of genetic iterations.

Data availability statement
The data that support the findings of this study are openly available in the UCI Machine Learning Repository at http://archive.ics.uci.edu/ml, reference number [42].

Conflict of interest
The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.