New fat-tail normality test based on conditional second moments with applications to finance

In this paper we introduce an efficient fat-tail measurement framework that is based on conditional second moments. We construct a goodness-of-fit statistic that has a direct interpretation and can be used to assess the impact of fat tails on the conditional dispersion of the central part of the data. Next, we show how to use this framework to construct a powerful normality test. In particular, we compare our methodology to various popular normality tests, including the Jarque--Bera test that is based on the third and fourth moments, and show that in many cases our framework outperforms all others, both on simulated and market stock data. Finally, we derive asymptotic distributions for the conditional mean and variance estimators, and use this to show asymptotic normality of the proposed test statistic.


Introduction
It has been recently shown in Jaworski and Pitera (2016) that for a normal random variable and a unique ratio close to 20/60/20 the conditional dispersion in the tail sets is the same as in the central set. In other words, if we split a big normal sample into three sets, one corresponding to the worst 20% of outcomes, one corresponding to the middle 60% of outcomes, and one corresponding to the best 20% of outcomes, then the conditional variance on each of those subsets is approximately the same.
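This balance is easy to check by simulation. The following Python sketch (illustrative, not code from the paper) splits a large sorted normal sample at the approximate 19.8%/80.2% empirical quantiles and compares the three conditional variances:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.standard_normal(1_000_000))

# Split at the (approximate) 19.8% / 80.2% empirical quantiles;
# the exact ratio is close to 0.198, as discussed below.
n = len(x)
k = int(0.198 * n)
L, M, R = x[:k], x[k:n - k], x[n - k:]

vL, vM, vR = L.var(), M.var(), R.var()
# For a standard normal all three values are close to ~0.218.
print(vL, vM, vR)
```

With a sample this large the three values agree to roughly two decimal places, which is the dispersion balance exploited throughout the paper.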
In this paper we show that this property could be used to construct an efficient goodness-of-fit testing framework that has a direct (financial) interpretation. The impact of tail dispersion on central dispersion is a natural measure of tail heaviness and can serve as an alternative to other methods, which are typically based on tail limit analysis or higher order moments; see Alexander (2009) and Jarque and Bera (1980). In particular, in contrast to the Jarque-Bera normality test, which is based on the third and fourth moments, our test relies on conditional second moments, which are often easier to estimate.
Testing for normality has a long history and many remarkable methods have been developed. This includes general distribution-fit frameworks like the Anderson-Darling test, based on the distance between the theoretical and empirical distribution functions (Anderson and Darling, 1954), or the Shapiro-Wilk test, relying on a regression coefficient (Wilk and Shapiro, 1965); see Madansky (2012), Henze (2002) or Thode (2002) for a comprehensive overview of normality testing procedures.
Most empirical studies suggest that the normality tests should be chosen carefully as their statistical power varies depending on the context; see e.g. Thadewald and Büning (2007), Romão et al. (2010) or Brockwell and Davis (2016). This is why the existing procedures are constantly refined and new ones are being developed; for example, a recent revision of the Jarque-Bera testing framework based on the second-order analogue of skewness and kurtosis could be found in Desgagné and Lafaye de Micheaux (2018).
The approach presented in this paper draws attention to an interesting and previously unexploited aspect of the normal distribution that could be used for efficient normality testing. In particular, we show that our approach outperforms and/or complements multiple benchmark normality testing frameworks when popular (financial) alternative distributions, such as Student's t or logistic, are considered. More explicitly, we show that our test usually has the best power if one expects that a sample comes from a symmetric distribution which has heavier (or lighter) tails in comparison with the normal distribution. We illustrate this with a financial market data example; see Section 6 for details.
Finally, it is worth mentioning that the 20/60/20 division leads to a very accurate data clustering when it comes to the tail assessment performed in reference to the central set normality assumption. Consequently, our method could be embedded into data analytics frameworks based on cluster analysis, help to refine data mining techniques, etc.; see Romesburg (2004); Kaufman and Rousseeuw (2009); Hair et al. (2013) for an overview. In fact, the good performance of our test statistic on market data could be linked to a popular financial stylised fact saying that typical financial asset returns can be seen as normal, but the extreme returns are more frequent and of greater magnitude than the ones resulting from the normal fit; see Cont (2001) and Sheikh and Qiao (2010) for details.

This paper is organised as follows: In Section 2 we briefly recall the concept of the 20-60-20 Rule, while in Section 3 we outline the construction of the test statistic and discuss its basic properties. Section 4 provides a high-level discussion of the test power, and Section 5 discusses in detail the mathematical background, including the derivation of the asymptotic distribution of the proposed test statistic. Next, in Section 6, we present a simple market data case-study and discuss the application of our framework in the financial context. We conclude in Section 7. For brevity, we moved the closed-form formula for the normalising constant introduced in Section 3 to Appendix A.
The 20-60-20 Rule for the univariate normal distribution
Let us assume that X is a normally distributed random variable. We define the left, middle, and right partitioning sets of X by
L := {X ≤ F_X^{-1}(q̄)},  M := {F_X^{-1}(q̄) < X < F_X^{-1}(1 − q̄)},  R := {X ≥ F_X^{-1}(1 − q̄)},  (2.1)
where F_X denotes the distribution function of X and q̄ ≈ 0.2. It has been shown in Jaworski and Pitera (2016) that
σ²_L = σ²_M = σ²_R  (2.2)
for this unique 20/60/20 ratio, where σ²_A denotes the conditional variance of X on the set A.¹
This specific division, together with the associated set of equalities in (2.2), creates a dispersion balance for the conditioned populations. This property might be linked to the statistical phenomenon known as the 20-60-20 Rule: a principle that is widely recognised by practitioners and used, e.g., for efficient management or clustering. In fact, a similar statement is true in the multivariate case: the conditional covariance matrices of a multivariate normal vector are equal to each other when the conditioning is based on the values of any linear combination of the margins and the 20/60/20 ratio is maintained. For more details, see Jaworski and Pitera (2016) and references therein.

Test statistic
Let us assume we have a sample from X at hand. Then, based on (2.2), we define a test statistic
N := √n (σ̂²_L + σ̂²_R − 2σ̂²_M) / (ρ σ̂²),  (3.1)
where σ̂² is the sample variance, σ̂²_A is the conditional sample variance on the set A (where the conditioning is based on empirical quantiles), n is the sample size, and ρ ≈ 1.8 is a fixed normalising constant; see Figure 1 for the R implementation code. We refer to Section 5 for more details, including rigorous definitions of the conditional variance σ̂²_A, the constant ρ, etc. It is not hard to see that under the normality assumption N is a pivotal quantity. Furthermore, in Section 5 we show that the distribution of N is asymptotically normal; see Theorem 5.1 therein. In Figure 2, we illustrate this by computing the Monte Carlo density of N under the normality assumption for samples of size 50, 100, and 250.
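To make the construction concrete, here is a minimal Python sketch of the statistic; the helper name n_stat and the plain 0.2/0.8 empirical split are simplifications of the exact order-statistic definition given in Section 5. It also makes the pivotality easy to check: shifting and rescaling a sample leaves the value unchanged.

```python
import numpy as np

def n_stat(x, rho=1.7885):
    """Sketch of the test statistic N: scaled difference between the tail
    and central conditional sample variances (cf. Sections 3 and 5)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    k = int(0.2 * n)
    vL = x[:k].var()          # worst 20%
    vM = x[k:n - k].var()     # middle 60%
    vR = x[n - k:].var()      # best 20%
    return np.sqrt(n) * (vL + vR - 2.0 * vM) / (rho * x.var())

rng = np.random.default_rng(1)
z = rng.standard_normal(250)
# Pivotality under normality: N is invariant to location and scale, so
# the same sample shifted and rescaled gives (numerically) the same value.
print(n_stat(z), n_stat(5.0 + 3.0 * z))
```

The invariance holds exactly because both the empirical quantiles and all the variance estimators transform consistently under affine maps.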
Test statistic N has a clear interpretation: the difference between the tail and central conditional variances could be seen as a measure of tail fatness, i.e. the bigger the value of N, the fatter the tails.
1 In fact, this equality is true for the ratio very close to 20/60/20, i.e. for upper and lower quantiles equal to approximately 0.198. For transparency, we have decided to use the rounded numbers here; see Section 5 for details.
N <- function(x) {
    n <- length(x)
    q1 <- quantile(x, 0.2)
    q2 <- quantile(x, 0.8)
    # conditional sample variances on the left tail, middle, and right tail
    N <- var(x[x <= q1]) + var(x[x >= q2]) - 2 * var(x[x > q1 & x < q2])
    N <- N * sqrt(n) / (var(x) * 1.8)
    return(N)
}
Figure 1. Simplified R source code that was used to create a function that computes the test statistic N given an input sample x.
To illustrate this, we compute the values of N for a bigger sample size, n = 500, for three different fat-tail distributions and three different slim-tail distributions. For the fat-tail comparison we picked the logistic, Student's t with five degrees of freedom, and Laplace distributions, while for the slim-tail comparison we considered the generalised normal distribution with shape parameter s ∈ {2.5, 3, 5}; the (standardised) generalised normal density for s ∈ R₊ is given by
f_s(x) := (s / (2Γ(1/s))) e^{−|x|^s},  x ∈ R;  (3.2)
we refer to Nadarajah (2005) or Tumlinson et al. (2016) for more details.
The results presented in Figure 3 confirm that the behaviour of N is as expected.
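A hedged Python sketch reproduces the qualitative behaviour; the helpers n_stat and gen_normal are illustrative rather than the code used for Figure 3, and the generalised normal draw uses the standard fact that |X|^s follows a Gamma(1/s, 1) distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

def n_stat(x, rho=1.8):
    # Simplified version of the statistic N with a plain 0.2/0.8 split.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    k = int(0.2 * n)
    vL, vM, vR = x[:k].var(), x[k:n - k].var(), x[n - k:].var()
    return np.sqrt(n) * (vL + vR - 2.0 * vM) / (rho * x.var())

def gen_normal(size, s):
    """Generalised normal sample via |X|^s ~ Gamma(1/s, 1)."""
    g = rng.gamma(1.0 / s, 1.0, size=size)
    return rng.choice([-1.0, 1.0], size=size) * g ** (1.0 / s)

n, reps = 500, 400
fat  = np.mean([n_stat(rng.standard_t(5, n)) for _ in range(reps)])
slim = np.mean([n_stat(gen_normal(n, 5.0))   for _ in range(reps)])
# Fat-tailed alternatives push N up; slim-tailed ones push it down.
print(fat, slim)
```

Scale does not matter here, since N is scale-invariant; only the sign and magnitude of the tail-versus-centre dispersion gap drive the statistic.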
Based on the value of the statistic N, one can construct a one-sided or two-sided statistical test with normality (N = 0) as the null hypothesis. For brevity, we refer to such a test as the N normality test or simply the N test.

Power of the test
In this section we check the power of the proposed N test in a controlled environment. We focus on symmetric distributional alternatives (used e.g. in finance) when one wants to abandon the normality assumption due to fat-tail or slim-tail phenomena. Namely, we consider the Cauchy distribution, the Logistic distribution, the Laplace distribution, the Student's t distribution with v ∈ {2, 5, 10, 20, 30} degrees of freedom parameter, and the generalised normal distribution (GN) with the shape parameter s ∈ {1.5, 2.5, 3, 5, 10}. Note that GN with s = 1 and s = 2 correspond to Laplace and normal distribution, respectively; see (3.2) for the GN density definition. In all cases, the location parameter is set to 0 and the scale parameter is set to 1.
For completeness, we compare the results of the N test with well-established benchmark normality tests: the Jarque-Bera test, the Anderson-Darling test, and the Shapiro-Wilk test. It should be noted that, in contrast to these frameworks, the statistic N allows one to consider a specific heavy-tail (or slim-tail) alternative, i.e. positive (or negative) values of N point to a heavy-tail (or slim-tail) alternative hypothesis. Consequently, we decided to construct a right-sided (or left-sided) critical region for the fat-tailed (or slim-tailed) distributions. Nevertheless, for completeness, we include the results for two-sided critical regions in all cases.
For all alternative distribution choices we consider four different sample sizes, i.e. n = 20, 50, 100, 250. For each n, we simulate a strong Monte Carlo sample of size 2 000 000 and check for what proportion of the simulations each test rejects normality at significance level α = 5%.
All computations are performed in R 3.5.2. For benchmark normality testing we use multiple add-on R packages, including gnorm (for GN simulation), stats (for the Shapiro-Wilk test), nortest (the Anderson-Darling test), and tseries (the Jarque-Bera test). For better comparability, we used test-wise simulated rejection thresholds instead of the theoretical p-values returned by the R functions; for these computations, we used a big strong Monte Carlo sample of size 10 000 000. In particular, note that while the Jarque-Bera test statistic has an asymptotic χ² distribution (with 2 degrees of freedom) under normality, this approximation may be inaccurate for small samples and lead to non-meaningful (non-adjusted) p-values.
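The simulated-threshold idea can be sketched in a few lines of Python; this is an illustration with a much smaller Monte Carlo sample than the one described above, and the helper n_stat is a simplified version of the statistic.

```python
import numpy as np

rng = np.random.default_rng(3)

def n_stat(x, rho=1.8):
    x = np.sort(x)
    n = len(x)
    k = int(0.2 * n)
    vL, vM, vR = x[:k].var(), x[k:n - k].var(), x[n - k:].var()
    return np.sqrt(n) * (vL + vR - 2.0 * vM) / (rho * x.var())

# Test-wise simulated rejection threshold under the normality null,
# used instead of an asymptotic approximation.
n, reps, alpha = 50, 4000, 0.05
null_stats = np.array([n_stat(rng.standard_normal(n)) for _ in range(reps)])
threshold = np.quantile(null_stats, 1.0 - alpha)  # right-sided critical value

# Sanity check: the rejection rate on fresh normal samples is close to alpha.
fresh = np.array([n_stat(rng.standard_normal(n)) for _ in range(reps)])
print(threshold, np.mean(fresh > threshold))
```

The same recipe applies to any of the benchmark tests: simulate the statistic under the null for the exact sample size of interest and read off the empirical critical value.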
It should be noted that our framework specification is consistent with the one presented in Desgagné and Lafaye de Micheaux (2018), where a comprehensive comparison of normality tests is made. In particular, the results presented in Appendix C therein are perfectly consistent with the results presented here (for all benchmark tests).²
For transparency, we have decided to consider the fat-tail and slim-tail cases separately. The results for fat-tailed distributions are presented in Table 1. The right-sided N test shows the best performance for almost all considered distributions. The fatter the tails, the bigger the absolute difference between the power of the N test and that of the JB test, which could be considered the second best choice. To check whether the test statistic N brings some novel results, we decided to check the proportion of simulations in which the normality assumption was rejected uniquely by N among all considered tests; for comparison purposes, we checked the same for all other tests. The results for three selected distributions are presented in Table 2. It can be observed that the unique rejection proportion for the N test is the highest among all tests, which points to the fact that the statistic N takes into account sample properties that are not exploited by the other tests.³ Finally, it should be noted that the performance of the two-sided N test is also quite good: the test outperforms both AD and SW in most of the considered cases.
² Please note that the results presented for the Asymmetric Power Distribution (APD) with symmetry parameter α = 0.5 correspond to the results presented for the Generalised Normal (GN) distribution; we refer to Section 2 in Desgagné and Lafaye de Micheaux (2018) for details.
The results for slim-tailed distributions are presented in Table 3. Both the left-sided and the two-sided test based on the N statistic substantially outperform all other tests on all datasets. This suggests that the proposed framework is also suitable for assessing slim-tailed distributions.

Mathematical framework and asymptotic results
In this section, we provide the explicit formulas for the conditional variance estimators, study their asymptotic behaviour, and show that N is asymptotically normal.
First, we introduce the basic notation and provide more explicit formulas for sets L, M , and R that were given in Section 2; see (2.1).
We assume that X ∼ N (µ, σ) for mean parameter µ and standard deviation parameter σ. We use F X to denote the distribution of X, Φ to denote the standard normal distribution, and φ to denote the standard normal density. Following the usual convention, for any n ∈ N, we use (X 1 , . . . , X n ) to denote the random sample from X and for i = 1, . . . , n, we use X (i) to denote the sample ith order statistic.
For fixed partition parameters α, β ∈ R, where 0 ≤ α < β ≤ 1, we define the conditioning set
A[α, β] := {F_X^{-1}(α) < X ≤ F_X^{-1}(β)}.
For brevity and with a slight abuse of notation, we often write A instead of A[α, β]. Then, the explicit formulas for the sets L, M, and R given in (2.1) are
L = A[0, q̄],  M = A[q̄, 1 − q̄],  R = A[1 − q̄, 1],  (5.1)
where q̄ := Φ(x̄) and x̄ is the unique negative solution of the equation
−xΦ(x) − φ(x)(1 − 2Φ(x)) = 0.  (5.2)
The approximate value of q̄ is 0.19809; we refer to (Jaworski and Pitera, 2016, Lemma 3.3) for details. Note that (5.2) could be seen as a specific form of the differential equation −xy − y′(1 − 2y) = 0, where y(x) := Φ(x); this could be used to determine similar ratios for other distributions. Next, we give the exact definition of the conditional sample variance. For a fixed set A, where A = A[α, β], the conditional variance estimator on the set A is given by
σ̂²_A := (1/([nβ] − [nα])) Σ_{i=[nα]+1}^{[nβ]} (X_(i) − X̄_A)²,  (5.3)
where [x] := max{k ∈ Z : k ≤ x} denotes the floor of x ∈ R and
X̄_A := (1/([nβ] − [nα])) Σ_{i=[nα]+1}^{[nβ]} X_(i)  (5.4)
is the conditional sample mean. In particular, we set σ̂² := σ̂²_{A[0,1]}. Recall that the test statistic N is given by
N := √n (σ̂²_L + σ̂²_R − 2σ̂²_M) / (ρ σ̂²),  (5.5)
where the normalising constant ρ in (5.5) is approximately equal to 1.7885; we refer to Appendix A for the closed-form formula for ρ. Now, we are ready to state the main result of this section, i.e. Theorem 5.1.
Theorem 5.1. Let (X_1, …, X_n) be a random sample from X ∼ N(µ, σ). Then, as n → ∞, we get N →^d N(0, 1), where N is given in (5.5), and ρ is a fixed normalising constant independent of µ, σ, and n.
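As a quick numerical aside, the ratio q̄ ≈ 0.19809 can be recovered by root-finding. The sketch below (illustrative, using math.erf for Φ) solves the sign-flipped but equivalent form xΦ(x) + φ(x)(1 − 2Φ(x)) = 0 of the relation −xy − y′(1 − 2y) = 0 with y = Φ:

```python
import math

# Standard normal cdf and density via math.erf.
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
phi = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
g = lambda x: x * Phi(x) + phi(x) * (1.0 - 2.0 * Phi(x))

# Bisection for the unique negative root; g > 0 far left, g < 0 near 0.
lo, hi = -2.0, -0.1
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

q_bar = Phi(0.5 * (lo + hi))
print(q_bar)   # approximately 0.19809
```

The root sits near x̄ ≈ −0.848, i.e. q̄ = Φ(x̄) ≈ 0.19809, matching the value quoted above.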
Before we present the proof of Theorem 5.1, let us introduce a series of lemmas and additional notation; the proof techniques are partially based on those introduced in Stigler (1973). To ease the notation, for a fixed set A, where A = A[α, β], we define
m_n := [nβ] − [nα].
Additionally, we set
A_n := Σ_{i=1}^{n} ½_{(−∞, F_X^{-1}(α)]}(X_i),  B_n := Σ_{i=1}^{n} ½_{(−∞, F_X^{-1}(β)]}(X_i),
where ½_C is the indicator function of the set C. It is useful to note that A_n and B_n follow the binomial distributions B(n, α) and B(n, β), respectively; note that for α = 0 and β = 1 the distributions are degenerate with A_n ≡ 0 and B_n ≡ n. Finally, for any sequence (a_i), we introduce the notation of the directed sum, given for k ≤ l by Σ_{i=k}^{→l} a_i := Σ_{i=k+1}^{l} a_i, and for k > l by Σ_{i=k}^{→l} a_i := −Σ_{i=l+1}^{k} a_i. In Lemma 5.2, we show the consistency of the conditional sample mean. Note that the statement of Lemma 5.2 does not explicitly rely on the normality assumption. In fact, the proof remains valid under very weak conditions imposed on X (e.g. continuity of the distribution function of X); a similar statement is true for the other lemmas presented in this section. Also, it should be noted that Lemma 5.2 and Lemma 5.3 show the consistency and asymptotic distribution of the standard non-parametric Expected Shortfall estimator; see e.g. McNeil et al. (2010) for details.

Lemma 5.2. For any A = A[α, β], we get X̄_A →^P µ_A as n → ∞, where µ_A := E[X | X ∈ A] denotes the conditional mean of X on the set A.
and, by the Law of Large Numbers, (1/n)A_n − α →^P 0, we conclude the proof of (5.6). The proof of (5.7) is similar to the proof of (5.6) and is omitted for brevity. Next, observe that, noting µ_A = E[X | X ∈ A], we get (5.9). Combining (5.6), (5.8), and (5.9), we conclude the proof.
Next, we focus on the asymptotic distribution of the conditional sample mean; note that Lemma 5.3 is a slight modification of the result of Stigler (1973) for trimmed means. For completeness, we present the full proof.

Lemma 5.3. For any A = A[α, β], the statistic √n (X̄_A − µ_A) converges in distribution, as n → ∞, to a normal random variable with zero mean and finite variance.
Proof. For brevity, we assume that α > 0 and β < 1. The remaining degenerate cases could be treated in a similar manner.
As in the proof of Lemma 5.2, observe that, due to the consistency of the empirical quantiles, it is sufficient to show that (A_n − [nα])/(m_n/√n) converges in distribution to some nondegenerate distribution. Note that, by (5.12) and the Central Limit Theorem applied to A_n ∼ B(n, α), and using Slutsky's Theorem (see e.g. (Ferguson, 1996, Theorem 6′)), we conclude the proof of (5.11). Similarly, one can show that we can rewrite (5.10) with a remainder r_n →^P 0, where for i = 1, …, n, we set the corresponding summands Z_i^A. Finally, noting that n(β − α)/m_n →^P 1 as n → ∞, and combining the Central Limit Theorem applied to (Z_i^A) with Slutsky's Theorem, we conclude the proof; note that the (Z_i^A) are i.i.d. with zero mean and finite variance.
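As a Monte Carlo illustration of Lemmas 5.2 and 5.3 (not part of the proof), the sketch below, with a hypothetical helper cond_mean, checks that for the middle set M of a standard normal sample the conditional sample mean concentrates around µ_M = 0, while its √n-scaled fluctuations keep a stable, nondegenerate spread:

```python
import numpy as np

rng = np.random.default_rng(4)

def cond_mean(x, a=0.2, b=0.8):
    """Conditional sample mean over A[a, b]: the average of the order
    statistics with indices [na]+1, ..., [nb] (a sketch of (5.4))."""
    x = np.sort(x)
    n = len(x)
    return x[int(n * a):int(n * b)].mean()

# For the middle set of a standard normal, mu_M = 0; the sqrt(n)-scaled
# deviations should have roughly the same spread for both sample sizes.
for n in (200, 2000):
    devs = np.sqrt(n) * np.array([cond_mean(rng.standard_normal(n))
                                  for _ in range(2000)])
    print(n, devs.mean(), devs.std())
```

The near-constant standard deviation across n is exactly the √n-rate of Lemma 5.3 at work; the limiting spread here is the trimmed-mean asymptotic variance from Stigler (1973).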
Next, we show that in the conditional variance estimator one can substitute the sample mean with the true mean without impacting the asymptotics. For any A, where A = A[α, β], the conditional variance estimator with known mean is given by
ŝ²_A := (1/m_n) Σ_{i=[nα]+1}^{[nβ]} (X_(i) − µ_A)².
Lemma 5.4. For any A = A[α, β], it follows that √n (σ̂²_A − ŝ²_A) →^P 0, n → ∞.
Proof. As in the proof of Lemma 5.3, we focus on the case 0 < α < β < 1. Let A = A[α, β] and note that the last summand equals 0, since Σ_{i=[nα]+1}^{[nβ]} (X_(i) − X̄_A) = 0. Thus, using Lemma 5.2 combined with Lemma 5.3, we conclude the proof.
Now, we study the asymptotic behaviour of the conditional variance estimator; this is a key lemma that will be used in the proof of Theorem 5.1. Moreover, this result may be of independent interest, since it allows one to construct asymptotic confidence intervals for the conditional variance.

Lemma 5.5. For any A = A[α, β], the statistic √n (σ̂²_A − σ²_A) converges in distribution, as n → ∞, to a normal random variable with zero mean and finite variance, where σ²_A denotes the conditional variance of X on the set A.
Proof. Due to Lemma 5.4, it is enough to consider ŝ²_A instead of σ̂²_A. Thus, recalling (5.14), we can rewrite (5.16) with a remainder r_n →^P 0. Next, for i = 1, …, n, we define the summands Y_i^A via (5.17), and by straightforward computations we get the first two moments of Y_i^A; using the Central Limit Theorem combined with Slutsky's Theorem, we conclude the proof.
Finally, we are ready to show the proof of Theorem 5.1.
Proof of Theorem 5.1. For the conditioning sets L, M, and R given in (5.1), we define the associated sequences of random variables (Y_i^L), (Y_i^M), (Y_i^R) using (5.17). For any n ∈ N, we set S_n accordingly, where q̄ is defined via (5.2). By the multivariate Central Limit Theorem (cf. (Ferguson, 1996, Theorem 5)), S_n converges in distribution. Consequently, by arguments similar to the ones presented in the proof of Lemma 5.5 (see (5.18)), we can rewrite (5.19) as S_n = M_n Z_n + r_n, where r_n →^P 0. To conclude the proof of Theorem 5.1, we need to show that ρ is independent of µ and σ.

Combining this with the equalities
we conclude the proof of (5.21). Now, using (5.21) for L, M, and R, and expressing Σ/σ⁴ accordingly, we see that Σ/σ⁴ does not depend on µ and σ. Finally, recalling (5.20) and the definition of ρ, we conclude the proof of Theorem 5.1; we refer to Appendix A for the closed-form formula for ρ.
The results presented in this section could be directly applied to various other non-parametric quantile estimators and to the unbiased variance estimators. This is summarised in the next two remarks.
Remark 5.6. The standard formula for the whole-sample (unbiased) variance uses n − 1 instead of n in the denominator. In the conditional case, this would be reflected in a different formula for (5.3), where m_n is replaced by m_n − 1. Note that the statement of Theorem 5.1 remains valid for the modified conditional variance estimator due to the combination of Slutsky's Theorem and the fact that (m_n − 1)/m_n → 1.
Remark 5.7. When defining the conditional sample variance (5.3), we used [nα] + 1 and [nβ] as the limits of the summation in (5.3) and (5.4). This choice corresponds to the non-parametric α-quantile estimator given by
q̂_n(α) := X_([nα]+1).
In the literature there exist many different formulas for non-parametric quantile estimators, most of which are bounded by the nearest order statistics; see Hyndman and Fan (1996) for details. It is relatively easy to show that all the results presented in this section hold true if we replace [nα] and [nβ] by suitably chosen sequences that correspond to different empirical quantile choices. For completeness, we provide a more detailed description of this statement.
Consider sequences (α_n) and (β_n) such that nα − α_n and β_n − nβ are bounded, and define m̄_n := β_n − α_n. The corresponding conditional sample mean and variance are given by
X̄*_A := (1/m̄_n) Σ_{i=α_n+1}^{β_n} X_(i)  and  σ̂^{2,*}_A := (1/m̄_n) Σ_{i=α_n+1}^{β_n} (X_(i) − X̄*_A)².
Then, we can replace X̄_A and σ̂²_A by X̄*_A and σ̂^{2,*}_A in Theorem 5.1, as well as in all lemmas presented in this section.
Instead of showing a full proof, we briefly comment on how to show the consistency of the quantile estimators, as well as on the counterparts of (5.7) and (5.12). All the proofs could be adapted using very similar logic.
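Remark 5.7 is easy to see in action. The Python sketch below (an illustration with a hypothetical helper cond_var, not part of the proofs) compares two summation-limit conventions that differ by O(1) indices; for a large sample the resulting conditional variance estimates are practically indistinguishable:

```python
import numpy as np

rng = np.random.default_rng(5)

def cond_var(x, lo, hi):
    """Conditional sample variance over the order statistics lo+1, ..., hi."""
    s = np.sort(x)[lo:hi]
    return s.var()

# Shifting the summation limits by O(1), as different empirical-quantile
# conventions do, leaves the estimator asymptotically unchanged.
n = 100_000
x = rng.standard_normal(n)
a, b = int(0.2 * n), int(0.8 * n)
v1 = cond_var(x, a, b)          # limits [na]+1, ..., [nb]
v2 = cond_var(x, a + 1, b - 1)  # a slightly different convention
print(v1, v2, abs(v1 - v2))
```

Both values sit near the population conditional variance of the middle 60% of a standard normal (about 0.215), and their gap is orders of magnitude below the sampling error, consistent with the √n-negligibility claimed in the remark.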

Empirical example: case study of market stock returns
In this section we apply the proposed framework to stock market returns performing a basic sanity-check verification. Before we do that, let us comment on the connection between the 20-60-20 Rule and financial time series.
Assuming that X describes financial asset return rates, we can split the population using the 20/60/20 ratio and check the behaviour of the returns within each subset. If non-normal perturbations are observed only for extreme events, the 20/60/20 break might identify the regime switch and provide a good spatial clustering; this could be linked to a popular financial stylised fact saying that average financial asset returns tend to be normal, but the extreme returns are not; see Cont (2001) and Sheikh and Qiao (2010) for details. It should be emphasised that, to the authors' best knowledge, the link between this property and data non-normality has not been discussed in the literature before.
The easiest way to verify this hypothesis is to take stock return samples for different periods, make the quantile-quantile plots (with the standard normal as a reference distribution), and check if the clustering is accurate. In Figure 4, we present exemplary results for two major US stocks, namely GOOGL and AAPL, and two major stock indices, namely S&P500 and DAX; we took time series of length 250 for different time intervals ranging in the period from 10/2015 to 01/2018. From Figure 4 we see that this division is surprisingly accurate: a very good normal fit is observed in the M set (the middle 60% of observations), while the fit in the tail sets L and R (the bottom and top 20% of observations) is poor. By taking different sample sizes, time horizons, and stocks, we can confirm that this property is systematic, i.e. the results are almost always similar to the ones presented in Figure 4.
While the presence of fat tails in asset return distributions is a well-known observation in the financial world, it is quite surprising that the non-normal behaviour can be seen for approximately 40% of the data. Also, the test statistic N can be used to formally quantify this phenomenon and to measure tail heaviness: the bigger the conditional standard deviation in the tails (in reference to the central part), the fatter the tails.
In the following, we focus on assessing the performance of the test statistic N on market data. We perform a simple empirical study and take the returns of all stocks listed in the S&P500 index on 16.06.2018 that have full historical data in the period from 01.2000 to 05.2018. This way we get full data (4610 daily adjusted close price returns) for 381 stocks. Next, for a given sample size n ∈ {50, 100, 250}, we split the returns into disjoint sets of length n, and for each subset we compare the value of N with the corresponding empirical quantiles presented in Figure 2. More precisely, using N we perform a right-sided statistical test and reject the normality (null) hypothesis if the computed value is greater than the empirical value F_n^{-1}(1 − α), for α ∈ {1%, 2.5%, 5%}. To assess test performance, we compare the results with the other benchmark normality tests: the Jarque-Bera test, the Anderson-Darling test, and the Shapiro-Wilk test. While the non-normality of returns is a well-known fact, and all testing frameworks should show good performance, we want to check if our framework leads to some new interesting results. We check the normality hypothesis and compute two supplementary metrics that are used for performance assessment:
- Statistic T gives the total rejection ratio of a given test. It corresponds to the proportion of data on which the normality assumption was rejected at a given significance level; it is the ratio of rejected data subsamples to all data subsamples.
- Statistic U gives the unique rejection ratio of a given test. It corresponds to the proportion of data on which the normality assumption was rejected at a given significance level only by the considered test (among all four tests); it is the ratio of uniquely rejected data subsamples to all data subsamples.
The combined results for all values of n and α are presented in Table 4. One can see that the statistic N performs very well and gives the best results for all choices of n.
Surprisingly, our testing framework allows one to detect non-normal behaviour in cases where the other tests fail: the outcomes of measure U are material in all cases. For example, for n = 50 and α = 5%, the value of U was equal to 5.1%; this corresponds to almost 11% of all rejected samples. The results are especially striking for n = 250, where the normality assumption was rejected in almost all cases (ca. 90%). While one might think that for such a big sample size the three classical tests should detect all abnormalities, our test still uniquely rejected normality in multiple cases. For α = 1%, normality was rejected for an additional 262 samples (3.8% of the population). For transparency, in Figure 5 we show an exemplary data subset for which this happened.

Concluding remarks and other applications
In this paper we have shown that the test statistic N introduced in (3.1) could be used to measure the heaviness of the tails in reference to the central part of the distribution and could serve as an efficient goodness-of-fit normality test statistic. The test statistic N is based on conditional second moments, performs quite well on financial market data, and allows one to detect non-normal behaviour where the other benchmark tests fail.
As mentioned in the introduction, most empirical studies suggest that normality tests should be chosen carefully, as their statistical power varies depending on the context. Our proposal proves to have the best test power in cases where the true distribution is assumed to be symmetric and to have tails that are fatter or slimmer than the normal ones. It should be noted that our test is in fact based on an implicit distributional symmetry assumption. Indeed, in (3.1), the impact of the left and right tail is taken with the same weight. Nevertheless, this could be easily generalised, e.g. by considering only one of the tail variances; we comment on that later.
In Theorem 5.1 we proved that the asymptotic distribution of N is normal under the normality null hypothesis. This allows us to study the shape of rejection intervals for sufficiently large samples. To obtain this result, in Lemma 5.5 we derived the asymptotic distribution of the conditional sample variance.
Also, we showed that the 20-60-20 Rule explains the financial stylised fact related to non-normal tail behaviour and provides a surprisingly accurate clustering of asset return time series. Quite surprisingly, the non-normality is visible for almost 40% of the observations.
In summary, we believe that tail-impact tests based on the conditional second moments are very promising and provide a nice alternative to the classical framework based e.g. on the third and fourth moments.
For example, the multivariate extension of the test statistic N could be defined using the results presented in Jaworski and Pitera (2016), e.g. to assess the adequacy of using the correlation structure for dependence modeling. Also, this could be extended to any multivariate elliptic distribution using the results from Jaworski and Pitera (2017).
The construction of N shows how to use conditional second moments for statistical purposes. In fact, one might introduce various other statistics that test the underlying distributional assumptions. Let us present a couple of examples:
- We can test only the (left) low-tail impact on the central part by considering a one-sided analogue of N, based on the difference σ̂²_L − σ̂²_M (with a suitably adjusted normalising constant).
- For any quantile-based conditioning sets A and B, and any elliptical distribution, one can introduce a statistic comparing σ̂²_A with λσ̂²_B, where λ ∈ R is a constant depending on the quantiles that define the conditioning sets and on the underlying distribution. Assuming that A = L and B = R (the whole space), we get the proportion between the tail dispersion and the overall dispersion. In this specific case, in the normal framework, we get
λ = 1 − Φ^{-1}(0.2) φ(Φ^{-1}(0.2))/0.2 − (φ(Φ^{-1}(0.2)))²/(0.2)²;
see (Jaworski and Pitera, 2016, Section 3) for details.
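The constant λ for the normal lower-tail case is easy to evaluate and cross-check by simulation; the sketch below (illustrative, assuming the conditioning at the exact 20% quantile) compares the closed-form value with a Monte Carlo estimate of the conditional variance of the worst 20%:

```python
import math
import numpy as np

# Closed-form lambda for A = L, B the whole space (standard normal case,
# cf. the formula above): the conditional-to-total variance ratio.
a = -0.8416212335729143          # Phi^{-1}(0.2)
phi_a = math.exp(-a * a / 2.0) / math.sqrt(2.0 * math.pi)
lam = 1.0 - a * phi_a / 0.2 - (phi_a / 0.2) ** 2

# Monte Carlo counterpart: Var[X | X <= Phi^{-1}(0.2)] for a big sample.
rng = np.random.default_rng(6)
x = rng.standard_normal(2_000_000)
mc = x[x <= np.quantile(x, 0.2)].var()
print(lam, mc)
```

Both numbers land near 0.219, i.e. the worst 20% of a standard normal carries only about a fifth of the overall variance, which is the benchmark a tail-versus-total comparison would be tested against.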
Note that under the normality assumption all the proposed statistics are pivotal quantities, which facilitates easy and efficient hypothesis testing; the asymptotic distribution for all the statistics could be derived using reasoning similar to that presented in Theorem 5.1.
Appendix A. Closed-form formula for the normalising constant
In this section, we present the closed-form formula for the normalising constant ρ from Theorem 5.1. For brevity, we omit the detailed calculations and only present the outcome.
Corollary A.1. The normalising constant ρ from Theorem 5.1 is given by a closed-form expression in which, for A ∈ {L, M, R}, the relevant conditional moments enter through the constants C_1, C_2, and C_3. Approximately, the value of ρ is equal to 1.7885.

Funding
Part of the work of the second author was supported by the National Science Centre, Poland, via project 2016/23/B/ST1/00479.