Confidence interval for micro-averaged F1 and macro-averaged F1 scores

Binary classification problems are common in the medical field, where sensitivity, specificity, accuracy, and negative and positive predictive values are often used to measure the performance of a binary predictor. In computer science, a classifier is usually evaluated with precision (positive predictive value) and recall (sensitivity). As a single summary measure of a classifier's performance, the F1 score, defined as the harmonic mean of precision and recall, is widely used in the evaluation of information retrieval and information extraction because it possesses favorable characteristics, especially when prevalence is low. Some statistical methods for inference have been developed for the F1 score in binary classification problems; however, they have not been extended to multi-class classification. There are three types of multi-class F1 scores, and their statistical properties have hardly been discussed. We propose methods based on the large-sample multivariate central limit theorem for estimating F1 scores with confidence intervals.


Introduction
In the medical field, binary classification problems are common, and we often use sensitivity, specificity, accuracy, and negative and positive predictive values as measures of the performance of a binary predictor. In computer science, a classifier is usually evaluated with precision and recall, which are equal to the positive predictive value and sensitivity, respectively. For measuring the performance of text classification in the field of information retrieval and of a classifier in machine learning, the F score (F measure) has been widely used. In particular, the F1 score, defined as the harmonic mean of precision and recall, has been popular [1,2]. The F1 score is rarely used in diagnostic studies in medicine despite its favorable characteristics. As a single performance measure, the F1 score may be preferred to specificity and accuracy, which may be artificially high even for a poor classifier with a high false negative probability when disease prevalence is low. The F1 score is especially useful when identification of true negatives is relatively unimportant, because the true negative rate enters the computation of neither precision nor recall.
To evaluate a multi-class classification, a single summary measure is often sought, and as extensions of the F1 score for binary classification, two types of such measures exist: the micro-averaged F1 score and the macro-averaged F1 score [2]. The micro-averaged F1 score pools per-sample classifications across classes and then calculates the overall F1 score. In contrast, the macro-averaged F1 score computes a simple average of the F1 scores over classes. Sokolova and Lapalme [3] gave an alternative definition of the macro-averaged F1 score as the harmonic mean of the simple averages of precision and recall over classes. Both micro-averaged and macro-averaged F1 scores have a simple interpretation as an average of precision and recall, with different ways of computing the averages. Moreover, as will be shown in Section 2, the micro-averaged F1 score has an additional interpretation as the total probability of true positive classifications.
For binary classification, some statistical methods for inference about the F1 score have been proposed (e.g., [4]); however, the methodology has not been extended to the multi-class F1 scores. To our knowledge, methods for computing variance estimates of the micro-averaged and macro-averaged F1 scores have not been reported. Thus, computing confidence intervals for the multi-class F1 scores has not been possible, and inference about them is usually based solely on point estimates, which is highly limited in practical utility. For example, consider the results of an analysis reported by Dong et al. [5]. The authors calculated point estimates of macro-averaged F1 scores for four classifiers and concluded that one classifier outperformed the others by comparing the point estimates without taking their uncertainty into account. Others have also used multi-class F1 scores but reported only point estimates without confidence intervals [6-16].
To address this knowledge gap, we provide herein methods for computing the variances of these multi-class F1 scores, so that estimating the micro-averaged and macro-averaged F1 scores with confidence intervals becomes possible in multi-class classification.
The rest of the manuscript is organized as follows. The definitions of the micro-averaged F1 score and macro-averaged F1 score are reviewed in Section 2. In Section 3, variance estimates and confidence intervals for the multi-class F1 scores are derived. A simulation study investigating the coverage probabilities of the proposed confidence intervals is presented in Section 4. Our method is then applied to a real study as an example in Section 5, followed by a brief discussion in Section 6.

Averaged F1 scores
This section introduces notation and definitions of the multi-class F1 scores, namely the macro-averaged and micro-averaged F1 scores. Consider an r × r contingency table for a nominal categorical variable with r classes (r ≥ 2). The columns indicate the true conditions, and the rows indicate the predicted conditions. Such a table is also called a confusion matrix. The problem is called binary classification when r = 2 and multi-class classification when r > 2. We consider multi-class classification, i.e., r > 2, and denote the cell probabilities and marginal probabilities by p_ij, p_i·, and p_·j, respectively (i, j = 1, ⋯, r). For each class (i = 1, ⋯, r), the true positive rate (TP_i), the false positive rate (FP_i), and the false negative rate (FN_i) are defined as follows: TP_i = p_ii is the i-th diagonal element, FP_i = Σ_{j≠i} p_ij is the sum of the off-diagonal elements of the i-th row, and FN_i = Σ_{j≠i} p_ji is the sum of the off-diagonal elements of the i-th column. Note that TP_i + FP_i = p_i· and TP_i + FN_i = p_·i.
In the current and following sections, we will use the simple 3-by-3 confusion matrix in Table 1 as an example to demonstrate various computations. Columns represent the true state, and rows represent the predicted classification. The total sample size is 100.
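To make these quantities concrete, the sketch below computes TP_i, FP_i, and FN_i in Python from a hypothetical 3-by-3 count matrix; the matrix is chosen to be consistent with the summary values quoted in this and later sections, but it is not necessarily identical to the paper's Table 1.

```python
# Per-class TP, FP, FN rates from an r x r confusion matrix of counts.
# Hypothetical matrix: rows = predicted class, columns = true class.
counts = [[2, 4, 1],
          [3, 70, 3],
          [1, 1, 15]]
n = sum(sum(row) for row in counts)           # overall sample size
p = [[c / n for c in row] for row in counts]  # cell probabilities p_ij

r = len(p)
TP = [p[i][i] for i in range(r)]                                   # diagonal elements
FP = [sum(p[i][j] for j in range(r) if j != i) for i in range(r)]  # off-diagonal row sums
FN = [sum(p[j][i] for j in range(r) if j != i) for i in range(r)]  # off-diagonal column sums

row_marg = [sum(p[i]) for i in range(r)]                       # p_i.
col_marg = [sum(p[j][i] for j in range(r)) for i in range(r)]  # p_.i

# Sanity checks: TP_i + FP_i = p_i. and TP_i + FN_i = p_.i
assert all(abs(TP[i] + FP[i] - row_marg[i]) < 1e-12 for i in range(r))
assert all(abs(TP[i] + FN[i] - col_marg[i]) < 1e-12 for i in range(r))
```

The final checks mirror the identities TP_i + FP_i = p_i· and TP_i + FN_i = p_·i stated above.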

Micro-averaged F1 score
The micro-averaged precision (miP) and micro-averaged recall (miR) are defined as

miP = Σ_i TP_i / Σ_i (TP_i + FP_i) = Σ_i p_ii,
miR = Σ_i TP_i / Σ_i (TP_i + FN_i) = Σ_i p_ii.

Note that for both miP and miR, the denominator is the sum of all the elements (diagonal and off-diagonal) of the confusion matrix, and it equals 1. Finally, the micro-averaged F1 score is defined as the harmonic mean of these two quantities:

miF1 = 2 · miP · miR / (miP + miR) = Σ_i p_ii.    (1)

This definition is commonly used (e.g., [6, 8-12, 14, 15]).
By definition, miP, miR, and miF1 all equal the sum of the diagonal elements, which, in our example, is 0.87.
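As a sketch, the micro-averaged quantities can be computed from cell probabilities directly; the probabilities below are hypothetical, chosen so that the diagonal sums to 0.87 as in the running example (they are not necessarily the paper's Table 1).

```python
# Micro-averaged precision, recall, and F1 from a hypothetical 3x3 matrix of
# cell probabilities (rows = predicted class, columns = true class).
p = [[0.02, 0.04, 0.01],
     [0.03, 0.70, 0.03],
     [0.01, 0.01, 0.15]]
r = len(p)

total = sum(sum(row) for row in p)  # denominator of miP and miR: all cells, equals 1
miP = sum(p[i][i] for i in range(r)) / total
miR = sum(p[i][i] for i in range(r)) / total
miF1 = 2 * miP * miR / (miP + miR)  # harmonic mean; collapses to the diagonal sum
```

Because both denominators equal 1, miP, miR, and miF1 all reduce to the same diagonal sum, as noted in the text.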

Macro-averaged F1 score
To define the macro-averaged F1 score (maF1), first consider the following precision (P_i) and recall (R_i) within each class, i = 1, ⋯, r:

P_i = p_ii / p_i·,   R_i = p_ii / p_·i.

For our example, the within-class precision and recall follow directly from Table 1. The F1 score within each class (F1i) is defined as the harmonic mean of P_i and R_i, that is,

F1i = 2 · P_i · R_i / (P_i + R_i) = 2 p_ii / (p_i· + p_·i).

The macro-averaged F1 score is defined as the simple arithmetic mean of the F1i:

maF1 = (1/r) Σ_i F1i.    (2)

This score, like miF1, is frequently reported (e.g., [5-10, 13]).
F1i and maF1 in our example are: F11 = 0.308, F12 = 0.927, F13 = 0.833, and maF1 = (0.308 + 0.927 + 0.833)/3 = 0.689.
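A minimal Python sketch of the per-class and macro-averaged computations, using hypothetical cell probabilities consistent with the within-class F1 values quoted above (not necessarily the paper's Table 1):

```python
# Per-class precision/recall/F1 and the macro-averaged F1 (simple mean of F1i).
# Hypothetical 3x3 cell probabilities: rows = predicted, columns = true.
p = [[0.02, 0.04, 0.01],
     [0.03, 0.70, 0.03],
     [0.01, 0.01, 0.15]]
r = len(p)
row = [sum(p[i]) for i in range(r)]                        # p_i.
col = [sum(p[j][i] for j in range(r)) for i in range(r)]   # p_.i

P = [p[i][i] / row[i] for i in range(r)]                   # within-class precision
R = [p[i][i] / col[i] for i in range(r)]                   # within-class recall
F1 = [2 * P[i] * R[i] / (P[i] + R[i]) for i in range(r)]   # = 2 p_ii / (p_i. + p_.i)
maF1 = sum(F1) / r                                         # simple arithmetic mean
```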

Alternative definition of the macro-averaged F1 score
Sokolova and Lapalme [3] gave an alternative definition of the macro-averaged F1 score (maF1*). First, the macro-averaged precision (maP) and macro-averaged recall (maR) are defined as simple arithmetic means of the within-class precision and recall, respectively:

maP = (1/r) Σ_i P_i,   maR = (1/r) Σ_i R_i.

And maF1* is defined as the harmonic mean of these quantities:

maF1* = 2 · maP · maR / (maP + maR).    (3)
This version of the macro-averaged F1 score is less frequently used (e.g., [11, 12, 16]). In our example, maF1* = 0.691. In this example, the micro-averaged F1 score is higher than the macro-averaged F1 scores because both within-class precision and recall are much lower for the first class than for the other two. Micro-averaging puts only a small weight on the first class because its sample size is relatively small. This numeric example shows a shortcoming of summarizing the performance of a multi-class classification with a single number when within-class precision and recall vary substantially. Nevertheless, aggregate measures such as the micro-averaged and macro-averaged F1 scores are useful for quantifying the performance of a classifier as a whole.
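The alternative definition can be sketched the same way; with the same hypothetical cell probabilities used earlier (not necessarily the paper's Table 1), maF1* lands near the 0.691 quoted above.

```python
# Alternative macro-averaged F1 (Sokolova and Lapalme): harmonic mean of the
# class-averaged precision and recall. Hypothetical 3x3 cell probabilities.
p = [[0.02, 0.04, 0.01],
     [0.03, 0.70, 0.03],
     [0.01, 0.01, 0.15]]
r = len(p)
row = [sum(p[i]) for i in range(r)]                        # p_i.
col = [sum(p[j][i] for j in range(r)) for i in range(r)]   # p_.i

maP = sum(p[i][i] / row[i] for i in range(r)) / r   # mean within-class precision
maR = sum(p[i][i] / col[i] for i in range(r)) / r   # mean within-class recall
maF1_star = 2 * maP * maR / (maP + maR)             # harmonic mean of maP and maR
```

Note the order of operations: maF1 averages the harmonic means, whereas maF1* takes the harmonic mean of the averages, so the two generally differ.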

Variance estimate and confidence interval
In this section, we derive the confidence intervals for miF1, maF1, and maF1*. We assume that the observed frequencies n_ij (1 ≤ i ≤ r, 1 ≤ j ≤ r) follow a multinomial distribution with sample size n and probability vector p = (p_11, ⋯, p_1r, p_21, ⋯, p_2r, ⋯, p_r1, ⋯, p_rr)^T, where "T" denotes the transpose; that is,

(n_11, n_12, ⋯, n_rr) ~ Multinomial(n; p).
The expectation, variance, and covariance for i, j, k, l = 1, ⋯, r are

E[n_ij] = n p_ij,
Var(n_ij) = n p_ij (1 − p_ij),
Cov(n_ij, n_kl) = −n p_ij p_kl for i ≠ k or j ≠ l,

where n = Σ_{i,j} n_ij is the overall sample size. The maximum likelihood estimate (MLE) of p_ij is p̂_ij = n_ij / n. Using the multivariate central limit theorem, we have

√n (p̂ − p) ⩪ N(0_{r²}, diag(p) − p p^T),

where 0_{r²} is the r² × 1 vector whose elements are all 0, diag(p) is the r² × r² diagonal matrix whose diagonal elements are p, and "⩪" denotes "approximately distributed as." By the invariance property of MLEs, the maximum likelihood estimates of miF1, maF1, maF1*, and the other quantities in the previous section are obtained by substituting p̂_ij for p_ij.
In the following subsections, we use the multivariate delta method to derive the large-sample distributions of the estimators of miF1, maF1, and maF1*.

Confidence interval for miF1
As shown in (1), miF1 = Σ_i p_ii, and the maximum likelihood estimate (MLE) of miF1 is

miF̂1 = Σ_i p̂_ii.

Using the multivariate delta method (Appendix A), we have

Var(miF̂1) = miF1 (1 − miF1) / n.    (4)

And a (1 − α) × 100% confidence interval for miF1 is

miF̂1 ± Z_{1−α/2} √V̂ar(miF̂1),

where V̂ar(miF̂1) is Var(miF̂1) with miF1 replaced by miF̂1, and Z_p denotes the 100p-th percentile of the standard normal distribution. Computation of V̂ar(miF̂1) for our numeric example is straightforward using (4): V̂ar(miF̂1) = 0.87 × (1 − 0.87)/100 ≈ 0.00113, and the 95% confidence interval is 0.87 ± 1.96 × 0.0336 = (0.804, 0.936).
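The closed-form interval is easy to compute; a sketch using the example's miF1 = 0.87 and n = 100:

```python
import math

# 95% CI for miF1 using the delta-method variance miF1(1 - miF1)/n.
n = 100
miF1_hat = 0.87                      # diagonal sum from the running example
var_hat = miF1_hat * (1 - miF1_hat) / n
se = math.sqrt(var_hat)
z = 1.959964                          # approx. 97.5th percentile of N(0, 1)
ci = (miF1_hat - z * se, miF1_hat + z * se)
```

This is the usual Wald interval for a proportion, since miF1 is just the total diagonal probability.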

Confidence interval for maF1
The MLE of maF1 is obtained by substituting p̂_ii, p̂_i·, and p̂_·i for p_ii, p_i·, and p_·i in (2).
Again by the multivariate delta method (Appendix B), the variance of maF̂1 is

Var(maF̂1) = (1/n) g^T (diag(p) − p p^T) g,

where g is the gradient of maF1 with respect to p, with elements

g_ii = (2/r) (p_i· + p_·i − 2 p_ii) / (p_i· + p_·i)²,
g_kl = −(2/r) [ p_kk / (p_k· + p_·k)² + p_ll / (p_l· + p_·l)² ] for k ≠ l.

And a (1 − α) × 100% confidence interval for maF1 is

maF̂1 ± Z_{1−α/2} √V̂ar(maF̂1),

where V̂ar(maF̂1) is Var(maF̂1) with {p_ij} replaced by {p̂_ij}. This computation is complex even for a small 3-by-3 table; the R code in Appendix D computes the variance estimate and a 95% confidence interval for maF1.
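The paper supplies exact R code in Appendix D; as an illustrative alternative, the delta-method variance can be approximated with a numerical gradient. The sketch below (hypothetical cell probabilities, not the paper's Table 1) forms g by central differences and evaluates g^T (diag(p) − p p^T) g / n; the same recipe applies to maF1* by swapping in that statistic.

```python
import math

# Delta-method variance of maF1 via a numerical gradient.
p = [[0.02, 0.04, 0.01],
     [0.03, 0.70, 0.03],
     [0.01, 0.01, 0.15]]
n, r = 100, 3
flat = [p[i][j] for i in range(r) for j in range(r)]  # vectorized cell probabilities

def maF1(q):
    """maF1 as a function of the flattened probability vector."""
    m = [q[i * r:(i + 1) * r] for i in range(r)]
    row = [sum(m[i]) for i in range(r)]
    col = [sum(m[j][i] for j in range(r)) for i in range(r)]
    return sum(2 * m[i][i] / (row[i] + col[i]) for i in range(r)) / r

# Gradient g by central differences.
eps = 1e-6
grad = []
for k in range(len(flat)):
    hi, lo = flat[:], flat[:]
    hi[k] += eps
    lo[k] -= eps
    grad.append((maF1(hi) - maF1(lo)) / (2 * eps))

# Var = g^T (diag(p) - p p^T) g / n, expanded into two sums.
quad = sum(grad[k] ** 2 * flat[k] for k in range(len(flat))) \
     - sum(grad[k] * flat[k] for k in range(len(flat))) ** 2
var_hat = quad / n
se = math.sqrt(var_hat)
z = 1.959964  # approx. 97.5th percentile of N(0, 1)
ci = (maF1(flat) - z * se, maF1(flat) + z * se)
```

A numerical gradient sidesteps transcribing the analytic g, at the cost of a small discretization error that is negligible here.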

Confidence interval for maF1*
To obtain the MLE of maF1*, we first substitute p̂_ii, p̂_i·, and p̂_·i into the expressions for maP and maR, and then use the resulting MLEs in (3). The variance of maF̂1* again follows from the multivariate delta method, and a (1 − α) × 100% confidence interval is constructed in the same way as for maF1.

Simulation
We performed a simulation study to assess the coverage probability of the confidence intervals proposed in Section 3. We set r = 3 (classes 1, 2, 3) and generated data according to multinomial distributions with the p summarized in Table 2. The total sample size, n, was set to 25, 50, 100, 500, 1,000, and 5,000. For each combination of the true distribution and sample size, we generated 1,000,000 datasets, each time computing 95% confidence intervals for miF1, maF1, and maF1*.
In scenario 2, the true condition of class 1 has a higher probability than the others (80% vs 10%), and the recall and precision of class 1 are also higher than the others (80% vs 40%, and 91% vs 27%, respectively). miF1 gives equal weight to each per-sample classification decision, whereas maF1 gives equal weight to each class. Thus, large classes dominate small classes in computing miF1 [2], and miF1 is larger than maF1 (miF1 = 0.72, maF1 = 0.50, maF1* = 0.51) in scenario 2 because class 1 has a higher probability as well as higher precision and recall.
In scenario 3, the true condition of class 1 has a higher probability than the others (80% vs 10%). The precision of class 1 is higher than the others (94% vs 24%), and the recall of class 1 is lower than the others (40% vs 80%). Compared to the other two scenarios, the diagonal entries are relatively small, which makes miF1 small (miF1 = 0.48, maF1 = 0.44, and maF1* = 0.55).

Table 3 shows the coverage probability of the proposed 95% confidence intervals for each scenario. The coverage probabilities for miF1, maF1, and maF1* are close to the nominal 95% when the sample size is large. When n is small, the coverage probabilities tend to fall below 95%, especially for maF1 and maF1*.
Moreover, computing a confidence interval for maF1* for small n is often impossible because maF1* is undefined when either p̂_i· = 0 or p̂_·j = 0 for any i or j. In typical applications where these F1 scores are computed, n is large, and the small-n problem is unlikely to occur.
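A scaled-down version of such a coverage experiment for the miF1 interval can be sketched as follows; the true probabilities are hypothetical (not the paper's Table 2), and far fewer replications are used than the paper's 1,000,000.

```python
import math
import random

# Simulate multinomial draws, form the closed-form 95% Wald CI for miF1,
# and count how often it covers the true value.
random.seed(1)
p = [0.02, 0.04, 0.01, 0.03, 0.70, 0.03, 0.01, 0.01, 0.15]  # flattened 3x3 table
diag_idx = [0, 4, 8]                       # positions of the diagonal cells
true_miF1 = sum(p[k] for k in diag_idx)    # true value, 0.87

def multinomial(n, probs):
    """Draw one multinomial sample by inverse-CDF lookup per observation."""
    counts = [0] * len(probs)
    for _ in range(n):
        u, acc = random.random(), 0.0
        for k, q in enumerate(probs):
            acc += q
            if u <= acc:
                counts[k] += 1
                break
        else:                      # guard against floating-point shortfall
            counts[-1] += 1
    return counts

n, reps, z, covered = 500, 2000, 1.959964, 0
for _ in range(reps):
    c = multinomial(n, p)
    est = sum(c[k] for k in diag_idx) / n
    se = math.sqrt(max(est * (1 - est), 1e-12) / n)  # Var = miF1(1 - miF1)/n
    if est - z * se <= true_miF1 <= est + z * se:
        covered += 1
coverage = covered / reps
```

With n = 500, the empirical coverage should sit near the nominal 95%, consistent with the large-sample behavior reported in Table 3.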

Example
As an example, we applied our method to the temporal sleep stage classification data provided by Dong et al. [5]. They proposed a new approach based on a Mixed Neural Network (MNN) to classify sleep into five stages: one awake stage (W), three sleep stages (N1, N2, N3), and one rapid eye movement stage (REM). In addition to the MNN, they evaluated three other classifiers: Support Vector Machine (SVM), Random Forest (RF), and Multilayer Perceptron (MLP). The data came from 62 healthy subjects, and classification by a single sleep expert was used as the gold standard. The staging is based on a 30-second window of physiological signals called an EEG (electroencephalography) epoch; thus, each subject contributes a large number of epochs to be classified. The total number of epochs depends on the classifier and is about 59,000. The performance of each classifier was evaluated using maF1 along with precision, recall, and overall accuracy, and the authors concluded that the MNN outperformed the competitors by comparing the point estimates of maF1 and overall accuracy. We provide 95% confidence intervals for miF1, maF1, and maF1* for each of the four methods, as summarized in Table 4. As n is large in this example, the confidence intervals are narrow, and the intervals for the MNN do not overlap with those for the other three methods, providing further evidence that the MNN is superior.

Discussion
We derived large-sample variance estimates of miF1, maF1, and maF1* in terms of the observed cell probabilities and the sample size. This enabled us to derive large-sample confidence intervals.
Coverage probabilities of the proposed confidence intervals were assessed through the simulation study. According to its results, when n is larger than 100, the coverage probability was close to the nominal level; however, for n < 100, the coverage probabilities tended to be smaller than the target. Moreover, with an extremely small sample size, maF1* could not be estimated, as its computation requires all margins to be non-zero. Zhang et al. [17] considered interval estimation of miF1 and maF1 and proposed the highest density interval within a Bayesian framework. In contrast, we have proposed confidence intervals for miF1, maF1, and maF1* within a frequentist framework using a large-sample approximation.
An inherent drawback of the multi-class F1 scores is that they do not summarize the data appropriately when large variability exists between classes. This was demonstrated in the numeric example in Section 2, for which the within-class F1 values are 0.308, 0.927, and 0.833, while miF1, maF1, and maF1* are 0.870, 0.689, and 0.691, respectively. Reporting the within-class F1 scores may be an option, as done in [18] and [19]; however, an aggregate measure is useful for evaluating the overall performance of a classifier across classes. Another limitation of the F1 scores is that they do not take the true negative rate into consideration and may not be appropriate measures when true negatives are important.
As future work, we are developing hypothesis testing procedures for miF1, maF1, and maF1* based on the variance estimates proposed in this article.
R code for computing confidence intervals for miF1, maF1, and maF1* is presented in Appendix D.

Table 2 Simulation study: True cell probabilities

Table 3 Simulation study: Coverage probability