Fairness in graph-based semi-supervised learning

Machine learning is widely deployed across society, and its power in a wide range of applications has grown with the advent of big data. One emerging problem is discrimination learned from data, which is reflected in the eventual decisions made by algorithms. Recent studies have shown that increasing the size of the training (labeled) data can improve fairness criteria while maintaining model performance. In this work, we explore the more general case where large quantities of unlabeled data are also available, leading to a new learning paradigm: fair semi-supervised learning. Given the popularity of graph-based approaches in semi-supervised learning, we study this problem both for the conventional label propagation method and for graph neural networks, into which various fairness criteria can be flexibly integrated. Our algorithms are shown to be non-trivial extensions of existing supervised models with fairness constraints. Extensive experiments on real-world datasets show that our methods achieve a better trade-off between classification accuracy and fairness than the compared baselines.


Introduction
Machine learning algorithms, as useful decision-making tools, are widely used in society. These algorithms are often assumed to be paragons of objectivity. However, many studies show that the decisions made by these models can be biased against certain groups of people. For example, Abid et al. observed that large-scale language models capture undesirable racial bias [1], and Vigdor et al. [2] reported gender bias in the credit assessment of the Apple Card. These cases show that discrimination can arise from machine learning, and one of the most important sources of discrimination is the data itself, including data collection (imbalanced training sets) and data preparation (biased content in the training set) [3]. Given the widespread use of machine learning to support decisions over loan allocations, insurance coverage, and many other basic precursors to equity, fairness in machine learning has become a significantly important issue [4]. Thus, how to design big-data-enabled machine learning algorithms that treat all groups equally is critical.
In recent years, many fairness metrics have been proposed to define what fairness means in machine learning. Popular fairness metrics include statistical fairness [5,6], individual fairness [7][8][9][10] and causal fairness [11,12]. Meanwhile, many algorithms have been developed to address fairness issues in both supervised settings [6,13,14] and unsupervised settings [15][16][17][18]. Generally, these studies have focused on two key issues: how to formalize the concept of fairness in the context of machine learning tasks, and how to design efficient algorithms that strike a desirable trade-off between accuracy and fairness. What is lacking is research that considers semi-supervised learning (SSL) scenarios.
In real-world machine learning tasks, a large amount of training data is necessary, and it is often a combination of labeled and unlabeled data. Therefore, fair SSL is a vital area of development. As in other learning settings, achieving a balance between accuracy and fairness is a key issue. According to [19], increasing the size of the training set can create a better trade-off. This finding sparked the idea that the trade-off might also be improved via unlabeled data. Unlabeled data is abundant in the era of big data and, if it could be used as training data, we may be able to strike a better compromise between fairness and accuracy. To achieve this goal, two challenges must be addressed: (1) how to achieve fair learning from both labeled and unlabeled data; and (2) how to assign labels to unlabeled data so that learning moves in a fair direction.
To solve these challenges, we propose two approaches to improve the trade-off with unlabeled data in graph-based SSL which is one of the most prominent methods in SSL. Graph-based SSL first constructs a graph, where nodes represent all samples, and weighted edges reflect the similarity between a pair of nodes. Then the label information of unlabeled samples can be inferred from the graphs based on the manifold assumption. Graph-based SSL mainly includes two lines, graph-based regularization [20][21][22] and graph neural networks (GNNs) [20,23], and thus we design two approaches to achieve fairness in these two lines.
Graph-based SSL shares the assumption that smoothness (e.g., the labels of adjacent nodes are likely to be the same) should be present in the local and global graph structure [22]. Regularization methods are used to smooth the predictions or feature representations over local neighborhoods. Our first approach, fair semi-supervised margin classifiers (FSMC), is formulated as an optimization problem whose objective function includes a loss for both the classifier and label propagation, together with fairness constraints over labeled and unlabeled data. The classification loss optimizes training accuracy; the label propagation loss optimizes the label predictions on unlabeled data; and the fairness constraint steers the optimization towards fairness. The optimization includes two steps. In the first step, the fairness constraint forces the weight update towards a fair direction. This step can be solved as a convex problem when disparate impact is the fairness metric, and via convex-concave programming when disparate mistreatment is used. In the second step, the updated weights further direct the labels assigned to unlabeled data in a fair direction by label propagation, and the labels for unlabeled data can be calculated in closed form. In this way, labeled and unlabeled data are used together to achieve a better trade-off between accuracy and fairness.
GNNs, such as convolutional GNNs and recurrent GNNs [23], have been widely used in supervised and semi-supervised learning tasks. In SSL, GNNs aim to classify the data in a graph using a small subset of labeled data and the features of all data. Adding a large amount of unlabeled data to model training helps exploit the structural and feature information of all data, and thus improves classification accuracy. Our second approach, fair graph neural networks (FGNN), is built on GNNs with a loss function that includes a classification loss and a fairness loss. The classification loss optimizes classification accuracy over all labeled data, and the fairness loss enforces fairness over labeled and unlabeled data. GNN models combine graph structure and features, and our method allows them to distribute gradient information from both the classification loss and the fairness loss. Thus, fair representations of nodes can be learned from labeled and unlabeled data to achieve the desired trade-off between accuracy and fairness.
With the aim of achieving fair graph-based SSL, the contributions of this paper are as follows.
• First, we conduct the study of algorithmic fairness in the setting of graph-based SSL, including graph-based regularization and graph neural networks. These approaches enable the use of unlabeled data to achieve a better trade-off between fairness and accuracy.
• Second, we propose algorithms to solve the optimization problems when disparate impact and disparate mistreatment are integrated as fairness metrics in graph-based regularization.
• Third, we consider different cases of fairness constraints on labeled and unlabeled data. This helps us understand the impact of unlabeled data on model fairness, and how to control the fairness level in practice.
• Fourth, we conduct extensive experiments to validate the effectiveness of our proposed methods.
The rest of this paper is organized as follows. Preliminaries are given in Sect. 2. The first proposed method, FSMC, is given in Sect. 3, and the second proposed method, FGNN, in Sect. 4. The experiments are set out in Sect. 5. Related work appears in Sect. 6, and the conclusion in Sect. 7.

Notations
Let $X = \{x_1, \dots, x_k\}^T \in \mathbb{R}^{k \times v}$ denote the training data matrix, where $k$ is the number of data points and $v$ is the number of unprotected attributes; $z = \{z_1, \dots, z_k\} \in \{0,1\}^k$ denotes the protected attribute, e.g., gender or race. The labeled dataset is denoted as $D_l$ with $k_l$ data points, and $y_l = \{y_{l,1}, \dots, y_{l,k_l}\}^T \in \{0,1\}^{k_l}$ is its label vector. The unlabeled dataset is denoted as $D_u$ with $k_u$ data points, and $y_u = \{y_{u,1}, \dots, y_{u,k_u}\}^T \in \{0,1\}^{k_u}$ holds the predicted labels for the unlabeled dataset.
Given the whole dataset, an adjacency matrix is denoted as $A = [\theta_{ij}] \in \mathbb{R}^{k \times k}$, $\forall i, j \in \{1, \dots, k\}$ $(k = k_l + k_u)$, where $\theta_{ij}$ is the weight that evaluates the relationship between two data points. The degree matrix $D$ is a diagonal matrix whose $i$-th diagonal element is $d_{ii} = \sum_{j=1}^{k} \theta_{ij}$. We use $L$ to denote the Laplacian matrix, calculated as $L = D - A$. Our objective is to learn a classification model $f(\cdot)$ with model parameters $w$ (or $W$) and predicted labels $y_u$ over the discriminatory datasets $D_l$ and $D_u$ that delivers high accuracy with low discrimination.

Fairness metrics
In our framework, we have applied disparate impact and disparate mistreatment as the fairness metrics [6,24].

Disparate impact
A classification model does not suffer disparate impact if $\Pr(\hat{y} = 1 \mid z = 1) = \Pr(\hat{y} = 1 \mid z = 0)$, where $\hat{y}$ is the predicted label. When the rate of positive predictions is the same for both groups $z = 1$ and $z = 0$, there is no disparate impact.

Disparate mistreatment
A binary classifier does not suffer disparate mistreatment if the misclassification rates of the groups with different values of the sensitive attribute $z$ are the same. Three kinds of disparate mistreatment are adopted to evaluate discrimination:
• Overall misclassification rate (OMR): $\Pr(\hat{y} \neq y \mid z = 1) = \Pr(\hat{y} \neq y \mid z = 0)$
• False positive rate (FPR): $\Pr(\hat{y} = 1 \mid y = 0, z = 1) = \Pr(\hat{y} = 1 \mid y = 0, z = 0)$
• False negative rate (FNR): $\Pr(\hat{y} = 0 \mid y = 1, z = 1) = \Pr(\hat{y} = 0 \mid y = 1, z = 0)$
In most cases, a classifier exhibits discrimination in terms of disparate impact or disparate mistreatment. The discrimination level is defined as the difference in these rates between groups.

Definition 1 (Discrimination level)
Let $\gamma_z$ denote the rate of interest under a given fairness metric for group $z$ on a model $f$ trained on a dataset $D$. The discrimination level $\Delta(\hat{y})$ of a model $f$ trained on $D$ is measured by the difference between groups: $\Delta(\hat{y}) = |\gamma_1 - \gamma_0|$. Taking disparate impact as an example, we have $\gamma_1 = \Pr(\hat{y} = 1 \mid z = 1)$ and $\gamma_0 = \Pr(\hat{y} = 1 \mid z = 0)$, so the discrimination level is $|\Pr(\hat{y} = 1 \mid z = 1) - \Pr(\hat{y} = 1 \mid z = 0)|$.
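As a concrete illustration, the discrimination level under disparate impact can be computed from predicted labels and the protected attribute (a minimal sketch; the function name is ours):

```python
import numpy as np

def discrimination_level(y_pred, z):
    """Absolute difference in positive-prediction rates between the
    group with z = 1 and the group with z = 0 (disparate impact)."""
    y_pred, z = np.asarray(y_pred), np.asarray(z)
    gamma_1 = y_pred[z == 1].mean()  # Pr(y_hat = 1 | z = 1)
    gamma_0 = y_pred[z == 0].mean()  # Pr(y_hat = 1 | z = 0)
    return abs(gamma_1 - gamma_0)
```

A value of 0 means the positive-prediction rates of the two groups coincide; larger values indicate more discrimination.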

Fairness constraints
Many fairness constraints [6,24,25] have been proposed to enforce various fairness metrics, such as disparate impact and disparate mistreatment, and these fairness constraints can be used in our framework. The basic idea behind them is that bounding the covariance between the users' sensitive attributes and the signed distances from the feature vectors to the decision boundary restricts the correlation between sensitive attributes and classification results. This can be described as
$\left| \frac{1}{k} (z - \bar{z}\mathbf{1})^T g_w \right| \le c$, (6)
where $g_w \in \mathbb{R}^k$ is a vector of signed distances between the feature vectors and the decision boundary of the classifier, $z$ denotes the vector of the protected attribute, and $\bar{z}$ denotes its mean value. The details of deriving Eq. (6) can be found in [6]. The form of $g_w$ differs across fairness metrics (with labels $y \in \{-1,1\}$ and $\odot$ the element-wise product):
• Disparate impact: $g_w = Xw$ (7)
• Overall misclassification rate: $g_w = \min(0, \; y \odot Xw)$ (8)
• False positive rate: $g_w = \min\!\left(0, \; \tfrac{\mathbf{1}-y}{2} \odot y \odot Xw\right)$ (9)
• False negative rate: $g_w = \min\!\left(0, \; \tfrac{\mathbf{1}+y}{2} \odot y \odot Xw\right)$ (10)
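The covariance proxy of Eq. (6) is cheap to evaluate. Below is a minimal sketch for the disparate impact case, where the signed distance vector is Xw (the function name is ours):

```python
import numpy as np

def covariance_constraint(w, X, z):
    """Empirical covariance proxy of Eq. (6) for disparate impact:
    |(z - z_bar)^T (X @ w)| / k, where X @ w are the signed
    distances of the samples to the decision boundary."""
    z = np.asarray(z, dtype=float)
    g_w = X @ w                      # signed distances, one per sample
    return np.abs((z - z.mean()) @ g_w) / len(z)
```

Enforcing fairness then amounts to requiring this value to stay below the threshold c during training.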

Graph-based regularization
In graph-based regularization, the goal is to find a function $f$ on the graph that satisfies two criteria simultaneously: (1) it should be as close to the given labels as possible, and (2) it should be smooth on the entire constructed graph. The graph stores the geometric structure of the data (such as similarity or proximity), and this structure is used as a regularizer to infer the labels of unlabeled data. Generally, graph-based regularization methods adopt the objective function
$J = J_C + \alpha J_L$, (11)
where $J_C$ is the classification loss, $\alpha$ is a balancing parameter, and $J_L$ is a graph-based regularizer. Different methods use different variants of the regularizer. In this paper, we consider the Laplacian regularizer, as it is the most commonly used one, calculated by
$J_L = \frac{1}{2} \sum_{i,j=1}^{k} \theta_{ij} \left(f_i - f_j\right)^2$. (12)
Here, $\theta_{ij}$ is a graph-based weight: the edge between each pair of data points $i$ and $j$ is weighted, and the closer the two points are in Euclidean distance $d_{ij}$, the greater the weight $\theta_{ij}$. In this paper, we choose a Gaussian similarity function to calculate the weights, given as
$\theta_{ij} = \exp\!\left(-\frac{d_{ij}^2}{2\sigma^2}\right)$, (13)
where $\sigma$ is a length-scale parameter. This parameter affects the graph structure; hence, the value of $\sigma$ needs to be selected carefully [21].
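Under these definitions, the adjacency, degree and Laplacian matrices can be constructed as follows (a minimal NumPy sketch; the exact Gaussian form, with a 2σ² denominator, is an assumption):

```python
import numpy as np

def build_graph(X, sigma=0.5):
    """Gaussian-similarity adjacency A, and unnormalized Laplacian
    L = D - A, for a data matrix X of shape (k, v)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    A = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(A, 0.0)          # no self-loops
    D = np.diag(A.sum(axis=1))        # degree matrix
    return A, D - A
```

Note that each row of L sums to zero by construction, which is what makes y^T L y a smoothness penalty over the graph.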

GNN-based SSL
Another method that has received a lot of attention recently is GNNs [23,26]. The main idea is that the representation vector of a node can contain information from the structure of the graph as well as any associated feature information. A graph neural network aggregates the neighboring nodes' features into a hidden representation for a central node. This aggregation operation can also be imposed on the hidden representations to form a deeper neural network. In general, a single aggregation operation can be represented as
$H^{(l+1)} = \upsilon\!\left(\mathrm{AGG}\!\left(A, H^{(l)}\right) W^{(l)}\right)$, (14)
where $H^{(l)}$ is the hidden representation at the $l$-th layer, $W^{(l)}$ is the trainable weight matrix at layer $l$, $\upsilon$ is the activation function, and $\mathrm{AGG}(\cdot)$ denotes the rule for aggregating neighboring information. Predictions for each node are made on top of the hidden representation of the last layer.

Fairness constraints in SSL on margin classifiers
In this section, we first present the proposed framework in Sect. 3.1. Then fairness metrics of disparate impact and disparate mistreatment in logistic regression are analyzed in Sect. 3.2, and finally a discussion is given in Sect. 3.3.

The proposed framework
We formulate the framework of fair SSL as follows, combining the classification loss, the label propagation loss and fairness constraints:
$\min_{w, y_u} \; J_C + \alpha J_L \quad \text{s.t.} \; s(w) \le c$, (15)
where $J_C$ is the classification loss between predicted labels and true labels; $J_L$ is the loss of label propagation from labeled to unlabeled data; $\alpha$ is a parameter balancing the two losses; $s(w)$ is the fairness constraint; and $c$ is a threshold.

Classification loss
A classification loss function evaluates how well a specific algorithm models the given dataset. When different algorithms are used to train datasets, such as logistic regression or neural networks, a corresponding loss function is applied to evaluate the accuracy of the model.

Label propagation loss
According to [22], when the Laplacian regularizer is used, the label propagation loss $J_L$ for SSL can be expressed as
$J_L = \mathrm{Tr}\!\left(y^T L y\right)$, (16)
where $\mathrm{Tr}$ denotes the trace, and the vector $y = [y_l; y_u] \in \mathbb{R}^k$ stacks the labels of the labeled and unlabeled data.

Fairness constraints
Adding fairness constraints is a useful in-processing method to enforce fair learning. In SSL, labeled and unlabeled data have different impacts on discrimination for two reasons: (1) predicting labels for unlabeled data introduces label noise; and (2) labeled and unlabeled data may follow different data distributions. Therefore, the discrimination inherent in unlabeled data differs from that in labeled data. For these reasons, we impose fairness constraints on labeled and unlabeled data separately to examine their different impacts. We consider four cases of fairness constraints enforced on the training data:
• 1. Labeled constraint: the fairness constraint is on labeled data.
• 2. Unlabeled constraint: the fairness constraint is on unlabeled data.
• 3. Combined constraint: the fairness constraint is on labeled data and unlabeled data separately.
• 4. Mixed constraint: the fairness constraint is on labeled and unlabeled data together.

Fair SSL of logistic regression
In this section, we propose algorithms to solve the optimization problem (15) with a binary logistic regression (LR) classifier. (Other margin classifiers can also be applied in our method; we give another example with support vector machines in the supplemental material.) The classifier is subjected to the fairness metric of disparate impact with mixed labeled and unlabeled data. The objective function of LR is defined as
$J_C = -\left(y^T \log p + (\mathbf{1} - y)^T \log(\mathbf{1} - p)\right)$, (17)
where $p = \frac{1}{1 + e^{-w^T X}}$ is the probability of mapping $X$ to the class label $y$, and $\mathbf{1}$ denotes a column vector of ones. Given the logistic regression loss, the label propagation loss and the fairness metric, the optimization problem (15) adopts the form
$\min_{w, y_u} \; -\left(y^T \log p + (\mathbf{1} - y)^T \log(\mathbf{1} - p)\right) + \alpha \mathrm{Tr}\!\left(y^T L y\right) \quad \text{s.t.} \; s(w) \le c$. (18)
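For reference, the logistic regression loss above can be evaluated directly (a minimal sketch; the function name is ours):

```python
import numpy as np

def lr_loss(w, X, y):
    """Negative log-likelihood of binary logistic regression,
    J_C = -(y^T log p + (1 - y)^T log(1 - p)), p = sigmoid(X @ w)."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return -(y @ np.log(p) + (1 - y) @ np.log(1 - p))
```

With zero weights the predicted probability is 0.5 for every sample, so the loss equals k·log 2, a useful sanity check.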

Disparate impact
First, we solve the optimization problem with disparate impact as the fairness metric. The optimization of problem (18) has two parts: learning the weights $w$ and the predicted labels $y_u$ of the unlabeled data. The basic idea of the solution is that, because of the fairness constraint, the weight $w$ is updated towards a fair direction, and using the updated $w$ to update $y_u$ also ensures that $y_u$ is directed towards fairness. The problem is solved by updating $w$ and $y_u$ iteratively as follows.
Solving $w$ when $y_u$ is fixed, problem (18) becomes
$\min_{w} \; J_C \quad \text{s.t.} \; \left|\frac{1}{k}(z - \bar{z}\mathbf{1})^T Xw\right| \le c$. (19)
Note that problem (19) is a convex problem that can be written as a regularized optimization problem by moving the fairness constraint into the objective function. The optimal $w^*$ can then be calculated using the Karush-Kuhn-Tucker (KKT) conditions.
Solving $y_u$ when $w$ is fixed, problem (18) becomes
$\min_{y_u} \; \mathrm{Tr}\!\left(y^T L y\right)$. (20)
Given that problem (20) is also convex, the optimal $y_u$ can be obtained by differentiating with respect to $y_u$. To calculate $y_u$ conveniently, we split the Laplacian matrix $L$ into four blocks after the $l$-th row and the $l$-th column: $L = \begin{bmatrix} L_{ll} & L_{lu} \\ L_{ul} & L_{uu} \end{bmatrix}$. Taking the derivative of Eq. (20) w.r.t. $y_u$ and setting it to zero, we have
$L_{ul} y_l + L_{uu} y_u = 0$. (21)
Note that $L$ is a symmetric matrix and, after simplification, the closed-form update of $y_u$ can be derived as
$y_u = -L_{uu}^{-1} L_{ul} y_l$. (22)
The computed optimal $y_u$ is fractional and cannot be used to update $w$ directly, because only integer labels are allowed in the next update of $w$. We therefore convert $y_u$ from fractional values to integers before updating $w$: the value of each $y_{u,i} \in y_u$, $i = 1, \dots, k_u$, is set to
$y_{u,i} = \begin{cases} 1, & y_{u,i} \ge \xi \\ 0, & y_{u,i} < \xi \end{cases}$, (23)
where $\xi$ is the threshold that determines the classification result. The optimization problem (18) can then be solved by optimizing $w$ and $y_u$ iteratively. Algorithm 1 summarizes the solution of optimization problem (18) with disparate impact.
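The y_u update is just a linear solve. Below is a minimal NumPy sketch of the closed-form update of Eq. (22) followed by the thresholding of Eq. (23); the function name is ours:

```python
import numpy as np

def propagate_labels(L, y_l, xi=0.5):
    """Closed-form label propagation: split the Laplacian L after the
    l-th row/column, solve y_u* = -L_uu^{-1} L_ul y_l (Eq. 22), then
    threshold the fractional scores at xi (Eq. 23)."""
    k_l = len(y_l)
    L_uu = L[k_l:, k_l:]
    L_ul = L[k_l:, :k_l]
    y_u = -np.linalg.solve(L_uu, L_ul @ y_l)   # continuous scores
    return (y_u >= xi).astype(int)             # integer labels for next w-update
```

For example, an unlabeled node strongly connected to a positively labeled node receives a score near 1 and is thresholded to label 1.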

Algorithm 1
The algorithm of optimizing problem (18)
Input: labeled dataset $D_l$, unlabeled dataset $D_u$, fairness threshold $c$
Parameters: $\xi$, $\sigma$
Initialize: initial values of $y_u$ by label propagation
Output: $w$ and $y_u$
1: Calculate the adjacency matrix $A$ according to Eq. (13)
2: repeat
3: Fix $y_u$ and update $w$ by solving problem (19)
4: Fix $w$ and update $y_u$ by Eq. (22)
5: Set each $y_{u,i} \in y_u$ to 0 or 1 by Eq. (23)
6: until the optimization problem (18) converges

Disparate mistreatment
Disparate mistreatment metrics include overall misclassification rate, false positive rate and false negative rate. For simplicity, overall misclassification rate is used to analyze disparate mistreatment. However, false positive rate and false negative rate can also be analyzed easily, and the result of three disparate mistreatment metrics are presented in the experiment.
With the overall misclassification rate as the fairness metric, the objective function is denoted as
$\min_{w, y_u} \; J_C + \alpha \mathrm{Tr}\!\left(y^T L y\right) \quad \text{s.t.} \; \left|\frac{1}{k}(z - \bar{z}\mathbf{1})^T g_w\right| \le c, \; g_w = \min(0, \; y \odot Xw)$. (24)
Note that the fairness constraints of disparate mistreatment are non-convex, so solving the optimization problem (24) is more challenging than solving problem (18). We therefore convert these constraints into a Disciplined Convex-Concave Program (DCCP), so that problem (24) can be solved efficiently with recent advances in convex-concave programming [27]. The fairness constraint of disparate mistreatment can be split into two terms,
$\frac{1}{k} \sum_{i=1}^{k} (z_i - \bar{z})\, g_w(y_i, x_i) = \frac{1}{k} \left[ \sum_{i \in D_1} (1 - \bar{z})\, g_w(y_i, x_i) - \sum_{i \in D_0} \bar{z}\, g_w(y_i, x_i) \right]$, (25)
where $D_0$ and $D_1$ are the subsets of the labeled dataset $D_l$ and unlabeled dataset $D_u$ with $z = 0$ and $z = 1$, respectively. With $k_0$ and $k_1$ defined as the numbers of data points in $D_0$ and $D_1$, $\bar{z}$ can be rewritten as $\bar{z} = \frac{0 \cdot k_0 + 1 \cdot k_1}{k} = \frac{k_1}{k}$. The fairness constraint of disparate mistreatment can then be rewritten as
$\left| \frac{k_0}{k^2} \sum_{i \in D_1} g_w(y_i, x_i) - \frac{k_1}{k^2} \sum_{i \in D_0} g_w(y_i, x_i) \right| \le c$. (26)
Solving $w$ when $y_u$ is fixed, problem (24) becomes
$\min_{w} \; J_C \quad \text{s.t. constraint (26)}$. (27)
The optimization problem (27) is a Disciplined Convex-Concave Program for any convex loss, and can be solved with some efficient heuristics [27]. Solving $y_u$ when $w$ is fixed, problem (24) becomes
$\min_{y_u} \; \mathrm{Tr}\!\left(y^T L y\right)$, (28)
whose solution is the same as Eq. (22). The integer labels of $y_u$ are obtained via Eq. (23), and the optimization problem (24) is then solved by updating $y_u$ and $w$ iteratively. Algorithm 2 summarizes this process.
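The split constraint can be evaluated directly from group-wise sums. Below is a minimal sketch for the OMR case, assuming labels in {-1, +1} and g_w = min(0, y ⊙ Xw); the function name is ours. The split form agrees with the direct covariance form:

```python
import numpy as np

def omr_constraint(w, X, y, z):
    """Split-form covariance for the overall-misclassification-rate
    constraint: g_w = min(0, y * (X @ w)) with labels y in {-1, +1},
    summed separately over groups D_1 (z = 1) and D_0 (z = 0)."""
    g = np.minimum(0.0, y * (X @ w))   # nonzero only for misclassified points
    k = len(z)
    k1 = int(z.sum())
    k0 = k - k1
    s1 = g[z == 1].sum()               # sum over D_1
    s0 = g[z == 0].sum()               # sum over D_0
    return np.abs(k0 * s1 - k1 * s0) / k ** 2
```

Keeping the two group sums separate is what allows the constraint to be expressed in the disciplined convex-concave form required by the DCCP solver.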

Algorithm 2
The algorithm of optimizing problem (24)
Input: labeled dataset $D_l$, unlabeled dataset $D_u$, fairness threshold $c$
Parameters: $\xi$, $\sigma$
Initialize: initial values of $y_u$ by label propagation
Output: $w$ and $y_u$
1: Calculate the adjacency matrix $A$ according to Eq. (13)
2: Choose a metric of disparate mistreatment
3: repeat
4: Divide $D$ into $D_0$ and $D_1$
5: Calculate $k_0$ and $k_1$
6: Fix $y_u$ and update $w$ with DCCP
7: Fix $w$ and update $y_u$ by Eq. (22)
8: Set each $y_{u,i} \in y_u$ to 0 or 1 by Eq. (23)
9: until the optimization problem (24) converges

Discussion
Based on the above analysis, some conclusions can be drawn:
1. Since unlabeled data do not contain any label information, they do not carry biased label information, so we can take advantage of unlabeled data to improve the trade-off between accuracy and fairness. In our framework, the fairness constraint updates the weight $w$ towards a fair direction, and using the updated $w$ to update $y_u$ also ensures that $y_u$ is directed towards fairness. In this way, fairness is enforced over labeled and unlabeled data by updating $w$ and $y_u$ iteratively. The labels of unlabeled data are therefore calculated in a fair way, which benefits both the accuracy and the fairness of the classifier.
2. Fairness constraints on labeled data and on unlabeled data have different impacts on the training result, because labeled and unlabeled data may exhibit different covariance between the sensitive attribute and the signed distances from the feature vectors to the decision boundary.

Fairness regularizers in SSL on graph neural networks
In this section, we present the proposed method for achieving fair SSL with GNNs. The main idea is to impose fairness regularizers on GNNs trained in the SSL setting. In this way, GNN models can allocate gradient information from the classification loss and the fairness loss to ensure fairness. We first introduce a framework for fair SSL on GNNs, and then present a case of fair graph convolutional networks.

The proposed methods
Our goal is to learn a neural network function $f(W)$ that optimizes two main objectives: classification accuracy and fairness. The loss function of the model is defined as
$\min_W \; J(D; W) + \beta J_F(D; W)$, (29)
where $J(D; W)$ denotes the classification loss, and $J_F(D; W)$ denotes the fairness loss that imposes fairness regularizers on the output of the model. $\beta$ adjusts the trade-off between the fairness and accuracy losses. Typically, the cross-entropy loss is used as the classification loss.

Fairness constraints
The second term in the loss function exerts fairness on the learned function. Since the fairness constraints in Eqs. (7)-(10) are not differentiable, fairness regularizers are defined according to the literal definitions of the fairness metrics; these regularizers can handle and optimize different fairness definitions, so the appropriate definition can be chosen for the application. The fairness regularizer of disparate impact is defined as
$J_F = \left( \frac{1}{k_1} \sum_{i: z_i = 1} p_i - \frac{1}{k_0} \sum_{i: z_i = 0} p_i \right)^2$, (30)
where $p_i$ denotes the predicted probability that the $i$-th data point belongs to the positive class, calculated by the softmax function in the last layer of the network. The regularizers for disparate mistreatment (OMR, FPR and FNR) are defined analogously by replacing $p_i$ with the predicted probability of misclassification, computed over all data points, over the data points with $y_i = 0$, and over the data points with $y_i = 1$, respectively.
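A minimal sketch of the disparate impact regularizer as a squared gap between group-wise mean predicted probabilities (the squared form and function name are our assumptions; any differentiable gap penalty fits the framework):

```python
import numpy as np

def di_regularizer(p, z):
    """Differentiable disparate-impact regularizer: squared gap between
    the mean predicted positive probabilities of the two groups."""
    p, z = np.asarray(p, dtype=float), np.asarray(z)
    gap = p[z == 1].mean() - p[z == 0].mean()
    return gap ** 2
```

Because the regularizer is built from predicted probabilities rather than hard labels, its gradient with respect to the network output is well defined and can be added to the classification loss with weight β.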

Fair SSL of convolutional GNN
In this section, we study a case of fair graph convolutional networks (GCN), where a multi-layer graph convolutional network is used to optimize the classification loss in Eq. (29). We take GCN as the example because it achieves high performance in SSL tasks, but our method can also be applied to other GNNs. The GCN model combines the graph structure and vertex features in the convolution, in which the features of unlabeled vertices are mixed with those of neighboring labeled vertices and then propagated through the graph over multiple layers.
The propagation rule of a multi-layer GCN is defined as [26]
$H^{(l+1)} = \upsilon\!\left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right)$, (31)
where $\tilde{A} = A + I_N$ is the adjacency matrix of the undirected graph with added self-connections, and $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$. The model used in this paper is a two-layer GCN with a softmax classifier applied to the output features,
$Z = \mathrm{softmax}\!\left( \hat{A} \, \mathrm{ReLU}\!\left( \hat{A} X W^{(0)} \right) W^{(1)} \right)$, (32)
where $\hat{A} = \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$. The loss function is defined as the cross-entropy error over all labeled data points,
$J = -\sum_{i \in Y_L} \sum_{f=1}^{F} Y_{if} \ln Z_{if}$, (33)
where $Y_L$ is the set of indices of labeled vertices and $F$ is the number of classes. Given the GCN loss and the fairness regularizer, Eq. (29) adopts the form
$\min_W \; -\sum_{i \in Y_L} \sum_{f=1}^{F} Y_{if} \ln Z_{if} + \beta J_F(D; W)$. (34)
The model parameters $W$ can be trained via gradient descent; in this paper, batch gradient descent is used for each training iteration.
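The two-layer forward pass can be sketched in NumPy as follows (the weights W0 and W1 are placeholders; in practice they are trained by gradient descent on the combined loss):

```python
import numpy as np

def gcn_forward(A, X, W0, W1):
    """Two-layer GCN forward pass:
    Z = softmax(A_hat @ relu(A_hat @ X @ W0) @ W1),
    with A_hat = D_tilde^{-1/2} (A + I) D_tilde^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])            # add self-connections
    d = A_tilde.sum(axis=1)
    A_hat = A_tilde / np.sqrt(np.outer(d, d))   # symmetric normalization
    H = np.maximum(0.0, A_hat @ X @ W0)         # ReLU hidden layer
    logits = A_hat @ H @ W1
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)     # row-wise softmax
```

Dividing A_tilde element-wise by sqrt(d_i d_j) is equivalent to the matrix product D̃^{-1/2} Ã D̃^{-1/2}, and each output row is a probability distribution over the F classes.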

Discussion
1. GCN naturally combines the structure and features of the graph in the convolution, and thus avoids explicit graph Laplacian regularization. Our method allows the GCN model to allocate gradient information from the classification loss and the fairness loss. Therefore, fair representations of nodes with labeled and unlabeled data can be learned to achieve fair SSL.
2. The parameter $\beta$ adjusts the discrimination level. A higher $\beta$ imposes a higher penalty on the fairness loss and thus decreases the discrimination level. However, a very large $\beta$ may destroy the expressive ability of the model.

Experiment
In this section, we first describe the experimental setup, including datasets, baselines, and parameters. Then, we evaluate our method on three real-world datasets under the fairness metrics of disparate impact and disparate mistreatment (including OMR, FNR and FPR). The aim of our experiments is to assess: the effectiveness of our methods in achieving fair semi-supervised learning; the impact of different fairness constraints on fairness; and the extent to which unlabeled data can balance fairness with accuracy.

Dataset
Our experiments involve three real-world datasets: the Health dataset, the Titanic dataset and the Bank dataset. When GNN models are used for training, structured datasets need to be processed into graphs. To construct graph-structured data from structured data, we need to build an adjacency matrix that describes the topological relationships. In our experiments, for simplicity, we use the Gaussian similarity of Eq. (13) over Euclidean distances as the adjacency matrix.
• The task in the Health dataset is to predict whether people will spend time in the hospital. To convert the problem into a binary classification task, we only predict whether people will spend any day in the hospital. After data preprocessing, the dataset contains 27,000 data points with 132 features. We divide patients into two groups based on age (≥65 years) and consider 'Age' to be the sensitive attribute.
• The Bank dataset contains a total of 41,188 records with 20 attributes and a binary label, which indicates whether the client has subscribed (positive class) or not (negative class) to a term deposit. We consider 'Age' as the sensitive attribute.
• The Titanic dataset comes from a Kaggle competition whose goal is to analyze which sorts of people were likely to survive the sinking of the Titanic. We consider 'Gender' as the sensitive attribute. After data preprocessing, we extract 891 data points with 9 features.

Parameters
The sensitive attributes are excluded from the training set to ensure fairness between groups and are only used to evaluate discrimination in the test phase. The Health, Bank and Titanic datasets are fully labeled. In the Health and Bank datasets, we sample 4,000 data points as the labeled set and 4,000 data points as the test set, leaving the rest as the unlabeled set. In the Titanic dataset, we sample 200 data points as the labeled set and 200 data points as the test set, leaving the rest as the unlabeled set. Therefore, $D_l$ and $D_u$ are drawn from similar data distributions.
In the experiments, the results are an average of 10 results by randomly sampling labeled dataset, test dataset and unlabeled dataset.
We set $\alpha = 1$ and $\xi = 0.5$ for all datasets. $\sigma$ is a length-scale parameter that affects the graph structure; we set $\sigma = 0.5$ for the Health and Bank datasets and $\sigma = 0.1$ for the Titanic dataset by binary search. $\tau$ and $\mu$ are parameters of DCCP. $\tau$ trades off satisfying the constraints against minimizing the objective; we set $\tau = 0.05$ for the Bank dataset and $\tau = 1$ for the Titanic dataset by binary search. $\mu$ sets the rate at which $\tau$ increases inside the algorithm, and we use the default value $\mu = 1.2$ for the Bank and Titanic datasets.

Baseline methods
The methods chosen for comparison are listed below. PS, US and FES only apply to the disparate impact metric, so they are compared with our methods under disparate impact. FC and FMLP are compared with our methods under both disparate impact and disparate mistreatment. It is worth noting that [28] also used unlabeled data for fairness; however, they only applied the equal opportunity metric, which differs from ours, so we do not compare against them.
• Fairness Constraints (FC): fairness constraints are used to ensure fairness for classifiers.
With the same discrimination, accuracy is higher. For example, at the same level of accuracy on the Titanic dataset, our method FS-LR has a discrimination level of around 0.08, while the FC method has a discrimination level of 0.11. A similar observation can be made from the results of the PS method (yellow cross), the US method (blue cross) and the FES method (green cross). Note that the discrimination-level curve (red line) for LR in the Health dataset does not extend further because discrimination does not increase as $c$ grows. Figure 2 shows the accuracy and discrimination level of the proposed fair GCN (FGCN) and the baseline method FMLP as $\beta$ varies. The result shows that FGCN achieves a better trade-off between accuracy and discrimination than FMLP, which is attributed to GCN's effective utilization of the structural and feature information of unlabeled data.

Different fairness constraints
Our next set of experiments determines the impact of different fairness constraints. For these tests, the size of the unlabeled set is 12,000 data points in the Health dataset and 400 data points in the Titanic dataset. Due to space limitations, we only report the results for LR, which appear in Tables 1 and 2. The results show that, when varying the covariance threshold $c$, different fairness constraints on labeled and unlabeled data have different impacts on the training results. As the covariance threshold increases, both accuracy and discrimination level increase before leveling off. In terms of accuracy, this is because a larger $c$ allows a larger space in which to find better weights $w$ for classification. In terms of discrimination, a larger $c$ tends to admit more discrimination from noise.
It is also observed that the fairness constraint on mixed data generally performs best in the trade-off between accuracy and discrimination. The other three constraints have very similar accuracy and discrimination levels. We attribute this to the assumption that labeled and unlabeled data follow similar distributions; the mixed fairness constraint over labeled and unlabeled data therefore gives the best estimate of the covariance between sensitive attributes and the signed distances from feature vectors to the decision boundary.

The impact of unlabeled data
For these experiments, we set the covariance threshold c = 1 for the Health and Titanic datasets, and the parameter β = 0.5 in the Health dataset and β = 0.8 in the Titanic dataset. Figure 7 shows how accuracy and discrimination level vary with the amount of unlabeled data for the FS-LR and FGCN methods on both datasets. As shown, accuracy increases as the amount of unlabeled data increases in both datasets before stabilizing at its peak. The discrimination level drops sharply almost immediately, then stabilizes or continues to decrease. Following [19,31], we can explain why unlabeled data helps to reduce discrimination: discrimination can be decoupled into discrimination in bias, discrimination in variance and discrimination in noise, and with an increasing amount of unlabeled data, discrimination in variance decreases, so overall discrimination decreases.
This indicates that our framework provides a better trade-off between accuracy and discrimination under the three metrics most of the time. For example, at the same level of accuracy (Acc = 0.885) on the Bank dataset under OMR, our method with FS-LR has a discrimination level of around 0.045, while the FC method has a discrimination level of 0.06. We also observe that the discrimination level differs considerably across fairness metrics. For example, the discrimination level can reach 0.17 under FNR, while it is only 0.01 under FPR. In addition, accuracy and discrimination level behave differently across models: in the Bank dataset, FGCN generally has lower accuracy and lower discrimination than FS-LR.
(Figure: The trade-off between accuracy and discrimination for the proposed methods FS-LR and FGCN (red) and the baselines FC and FMLP (blue) on two datasets under the metric of overall misclassification rate.)
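The metric-dependent discrimination levels discussed above can be made concrete. The following is a hedged sketch of our own (not the paper's code): it measures the disparate-mistreatment gap as the absolute difference between the two sensitive groups in their overall misclassification rate (OMR), false positive rate (FPR), or false negative rate (FNR).

```python
import numpy as np

def rate(y_true, y_pred, group_mask, kind):
    """Per-group error rate: overall misclassification (OMR),
    false positive (FPR), or false negative (FNR) rate."""
    yt, yp = y_true[group_mask], y_pred[group_mask]
    if kind == "OMR":
        return np.mean(yt != yp)
    if kind == "FPR":
        neg = yt == 0
        return np.mean(yp[neg] == 1) if neg.any() else 0.0
    if kind == "FNR":
        pos = yt == 1
        return np.mean(yp[pos] == 0) if pos.any() else 0.0
    raise ValueError(kind)

def discrimination_level(y_true, y_pred, sensitive, kind="OMR"):
    """Disparate-mistreatment gap: |rate(group 1) - rate(group 0)|."""
    s = np.asarray(sensitive, dtype=bool)
    return abs(rate(y_true, y_pred, s, kind) - rate(y_true, y_pred, ~s, kind))
```

Because each metric conditions on a different subset of examples, the same classifier can show a near-zero gap under one metric and a large gap under another, which is exactly the spread between the FPR and FNR results reported above.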
The results demonstrate that our methods, using unlabeled data, achieve a better trade-off between accuracy and discrimination. Tables 3 and 4 show that different fairness constraints on labeled and unlabeled data have different impacts on the training results. Due to space limitations, we report the results only for FS-LR under the OMR metric on the Bank and Titanic datasets. For these tests, the size of the unlabeled data is set to 4,000 data points in the Bank dataset and 400 data points in the Titanic dataset. As shown, when the covariance threshold c is varied, fairness constraints on labeled and unlabeled data affect the training results very differently. When the fairness constraint is enforced on labeled data, accuracy and discrimination increase with c in the Titanic dataset. This is because a smaller c enforces the lowest discrimination level, which results in lower accuracy. However, when the fairness constraint is enforced on unlabeled data, accuracy and discrimination can decrease as c increases. This is because the labels of the unlabeled data appear in the disparate-mistreatment fairness constraint and are updated during training, so the distribution of the unlabeled data is not described well during training. As a result, the fairness constraint on unlabeled data is less effective.

The impact of unlabeled data under OMR
For these experiments, we show the impact of unlabeled data under OMR. The covariance threshold is set to c = 1 for the Bank and Titanic datasets. Figure 7 shows how accuracy and discrimination level vary with the size of the unlabeled data for FS-LR and FGCN on the two datasets. As shown, accuracy increases with the amount of unlabeled data in both datasets until a peak is reached. The discrimination level decreases at first and then stabilizes in the Titanic dataset. These results indicate that discrimination in variance decreases as the amount of unlabeled data in the training set increases.

Discussion
We now compare the two methods, which provides some guidance on which to use in practice. 1) FGCN is suitable for training on a large dataset, while FSMC may not work because the DCCP solver struggles to handle a large number of data points. 2) FGCN is suitable for multi-class classification problems, while FSMC cannot be directly applied to them. 3) FSMC admits a closed-form solution, which makes it attractive in practice due to its low computational cost, while FGCN is generally more computationally expensive.

Summary
From these experiments, we can draw some conclusions. 1) The proposed methods, FSMC and FGCN, can make use of unlabeled data to achieve a better trade-off between accuracy and discrimination. 2) In FSMC, the fairness constraint on mixed labeled and unlabeled data provides the best trade-off between accuracy and discrimination.
(Figure: The impact of the amount of unlabeled data in the training set on accuracy (red) and discrimination level (blue) under the fairness metric of overall misclassification rate with FS-LR and FGCN on two datasets. The x-axis is the size of the unlabeled dataset; the left y-axis is accuracy; the right y-axis is the discrimination level.)

Related work

Fair supervised learning
Methods for fair supervised learning include pre-processing, in-processing and post-processing methods. In pre-processing, discrimination is eliminated by guiding the distribution of the training data in a fairer direction [29] or by transforming the training data into a new space [14,32-34]. Subsequent studies extended fair representations to more fairness metrics and more general tasks [35-38]. The main advantage of pre-processing methods is that they require no changes to the machine learning algorithm, making them very simple to use. In in-processing, discrimination is constrained by fairness constraints or regularizers during the training phase. For example, Kamishima et al. [39] used a regularizer term to penalize discrimination in the learning objective. Konstantinov et al. showed that fairness regularizers applied during training can greatly improve the fairness of rankings [40]. The works [6,24,41] designed a convex fairness constraint, called decision boundary covariance, to achieve fair classification. Some work cast the constrained optimization problem as a two-player game and formalized the definition of fairness as a linear inequality [42-45]. This is more flexible for optimizing different fairness constraints, and solutions using this approach are considered the most robust. Recent work has extended in-processing methods to more complex cases [46-48]. For example, Perrone et al. proposed a general constrained Bayesian optimization framework to optimize model performance [47]. Chikahara et al. studied individual fairness with a path-specific causal-effect constraint [48].
A third approach to achieving fairness is post-processing, where a learned classifier is modified to adjust its decisions to be non-discriminatory across groups [13,49,50]. Post-processing requires no changes to the classifier, but it cannot guarantee an optimal classifier. Awasthi et al. further studied equalized-odds post-processing with a perturbed attribute [51]. Putzel et al. adjusted the predictions of a black-box machine learning classifier to achieve fairness in a multi-class setting [52].

Fair unsupervised learning
Chierichetti et al. [15] were the first to study fairness in clustering problems. Their solution, under both the k-center and k-median objectives, required every group to be (approximately) equally represented in each cluster. Many subsequent works have since addressed fair clustering. Among these, Rosner et al. [18] extended fair clustering to more than two groups. Schmidt et al. [53] considered the fair k-means problem in the streaming model, defined fair coresets and showed how to compute them in a streaming setting, resulting in a significant reduction in the input size. Bera et al. [54] presented a more general approach to fair clustering, providing a tunable notion of fairness. Li et al. [55] defined a new fairness metric for clustering and incorporated group fairness into the algorithmic centroid clustering problem.

Comparing with other work
Existing fair learning methods mainly focus on supervised and unsupervised learning and cannot be directly applied to SSL. As far as we know, only [28,30,31] have explored fairness in SSL. Chzhen et al. [28] studied the Bayes classifier under the fairness metric of equal opportunity, where labeled data is used to learn the output conditional probability and unlabeled data is used to calibrate the threshold in the post-processing phase. However, unlabeled data is not fully used to eliminate discrimination, and the proposed method applies only to equal opportunity. In [30], the proposed method is built on neural networks for SSL in the in-processing phase, where unlabeled data is assigned labels via pseudo-labeling. Zhang et al. [31] proposed a pre-processing framework that combines pseudo-labeling, re-sampling and ensemble learning to remove discrimination. Our solution focuses on margin-based classifiers in the in-processing stage, as in-processing methods have demonstrated good flexibility in balancing fairness and supporting multiple classifiers and fairness metrics. A few studies have examined fair graph learning. For example, Rahman et al. studied how to learn fair node representations [56], while we focus on fair graph-based SSL. Kang et al. studied individual fairness in graph mining [57], while we focus on group fairness in graph-based SSL.

Conclusion
In this paper, we study how to improve the trade-off between fairness and accuracy with unlabeled data. We propose two fair graph-based SSL methods that operate during the in-processing phase. Our first method is formulated as an optimization problem whose goal is to find the weights and to label the unlabeled data by minimizing the loss function subject to fairness constraints. We analyze several different cases of fairness constraints for their effects on the optimization problem, as well as on the accuracy and discrimination level of the results. The second method is built on GNN models with fairness regularizers, ensuring that fair representations of nodes can be learned from labeled and unlabeled data. Our experiments confirm this analysis, showing that the proposed framework provides high levels of both accuracy and fairness in semi-supervised settings.
Funding Open Access funding enabled and organized by CAUL and its Member Institutions

Declarations
Conflict of interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.