1 Introduction

Thick data refers to datasets in which the number of variables exceeds the number of observations. In many modern research scenarios (e.g. engineering, manufacturing, genetics, neuroscience and other medical fields) the number of variables can be much larger than the number of observations. Classic parametric methods (e.g. generalized linear models) are underdetermined when the number of parameters exceeds the number of observations and are therefore unable to provide valid inference. Nonparametric permutation tests for a suitable global null hypothesis in one-factorial designs have been proposed, for example a distance-based approach by Mielke and Berry (2007), a rank-based approach (Bathke and Harrar 2008; Harrar and Bathke 2008a, b; Liu et al. 2011) implemented in the R package npmv (Ellis et al. 2017), and the nonparametric combination procedure (NPC) by Pesarin and Salmaso (2010). These methods differ fundamentally in their approach: the distance-based and rank-based methods proceed top-down, first calculating a global test statistic and relying on post-hoc tests for the marginal univariate hypotheses. The NPC starts with test statistics for the marginal hypotheses and combines them (hence the name nonparametric combination procedure) to obtain p-values for global tests. Combining functions for p-values have a long history, with some of the best known being Fisher’s (Fisher 1932), Stouffer’s (Stouffer et al. 1949) and Tippett’s (Tippett 1931).

One has to differentiate between p-value combination in a meta-analytic sense and p-value combination in a multivariate sense. The first concerns testing the same hypothesis on different datasets, the second testing multiple hypotheses on the same dataset. In the first scenario, the one for which p-value combination was originally developed, there are theoretical works on the optimality of combining functions for different alternative hypotheses, for example Birnbaum (1954) and Heard and Rubin-Delanchy (2018). In the second case, there is much less guidance regarding the optimal choice of combining function, especially in the context of the NPC. Pesarin and Salmaso (2010, p. 133) essentially advise using Tippett’s function when we expect only one or a few, but not all, sub-alternatives to occur, Stouffer’s when we expect all sub-alternatives to be jointly true, and Fisher’s when no specific knowledge is available.

After a quick review of each method, we compare their performance in different simulation settings, concentrating on settings in which the number of variables is larger than the sample size and the ratio of informative to uninformative variables is small. Particular attention is paid to the NPC and to the influence of different combining functions on its performance, with the other two methods serving as competitors where suitable. Lastly, we demonstrate the usefulness of multivariate testing compared with simple univariate testing under multiplicity control using a real-life dataset.

2 Methods

Let us briefly introduce the basic principles of the Nonparametric combination procedure.

Assume we observe p-dimensional data with components \(X_{ij}^{(k)}\), where \(i = 1,\dots ,a\) is the group index, \(j = 1,\dots ,n_i\) is the subject index, and \(k = 1,\dots ,p\) is the variable index. We then use \({\mathbf {X}} = \{{\mathbf {X}}(i) | i = 1,\dots ,n\}\) to denote the dataset, where \(n = \sum _{i = 1}^{a} n_i\), the index i now runs over all n units, and \({\mathbf {X}}(i) = (X^{(1)}(i),\dots ,X^{(p)}(i))^{\top }\), such that the first \(n_1\) observations belong to the first group, the next \(n_2\) to the second group and so on, as illustrated in Table 1.

Table 1 Unit-by-unit representation of the data

This is the so-called unit-by-unit representation. Because we want to exchange columns of this table, we need multivariate exchangeability under the null hypothesis. A suitable choice for \(H_0\) is therefore

$$\begin{aligned} H_0:\, \{{\mathbf {X}}_1 \overset{d}{=} \cdots \overset{d}{=} {\mathbf {X}}_a\} \end{aligned}$$

where \({\mathbf {X}}_i\) denotes a random vector with the same distribution as the one from which the observations of the i-th group were drawn and \(\overset{d}{=}\) denotes equality in distribution. The core principle of the NPC is to first test a number of sub-hypotheses and then conduct a combined test of the intersection hypothesis.

A natural choice in our case is given by the hypotheses of equality of the marginal distributions

$$\begin{aligned} H_0^{(k)}:\, \{F_{1k} = \cdots = F_{ak}\} \end{aligned}$$

where \(F_{ik}\) is the marginal distribution of variable k in group i. Note that while

$$\begin{aligned} H_0 \subseteq \bigcap _{k = 1}^{p} H_0^{(k)} =: H_0^{mar} \end{aligned}$$

the reverse inclusion is not necessarily true. Thus, some alternatives in \(H_0^{C}\) might not be in the consistency region of the test.

The multivariate distributions are stochastically comparable if \(\forall i, i^{\prime } = 1,\ldots ,a\)

$$\begin{aligned} {\mathbf {X}}_i \overset{d}{\le } {\mathbf {X}}_{i^{\prime }} \text { or } {\mathbf {X}}_i \overset{d}{\ge } {\mathbf {X}}_{i^{\prime }}, \end{aligned}$$

where \({\mathbf {X}}_i \overset{d}{\le } {\mathbf {X}}_{i^{\prime }}\) means that for all bounded and increasing functions \(f:\, {\mathbb {R}}^{p} \longrightarrow {\mathbb {R}}\) we have \(E(f({\mathbf {X}}_i)) \le E(f({\mathbf {X}}_{i^{\prime }}))\). It is worth pointing out that under the assumption that the multivariate distributions are stochastically comparable, the equality between \(H_0\) and \(H_0^{mar}\) does hold (see Baccelli and Makowski 1989). This makes the NPC especially suitable for testing hypotheses of stochastic dominance as we shall see later.

After the choice of sub-hypotheses has been made, let \({\mathbf {T}} = (T_1,\dots ,T_p)\) be a vector of test statistics, where \(T_k\) tests the hypothesis \(H_0^{(k)}\), such that \(H_0 = \bigcap _{k = 1}^{p} H_0^{(k)}\). Let \(\phi \) be a p-value combining function as described below. We first compute the test statistics on the original data \({\mathbf {X}}\) (see Table 2).

Table 2 Test statistic computation

We then randomly draw B permutations \(u:\, (1,\dots ,n) \mapsto (u_1,\dots ,u_n)\) of \(\{1,\dots ,n\}\) and compute the test statistics again for each permuted dataset \({\mathbf {X}}^{*} := \{{\mathbf {X}}(u_i) | i = 1,\dots ,n\}\) (see Table 3).

Table 3 Test statistic permutation distribution

Next, the empirical significance level function (an upper-tail, mid-p corrected counterpart of the empirical cumulative distribution function) of each of the p test statistics among the B permutations is calculated as

$$\begin{aligned} {\hat{L}}_k(t) = \frac{\frac{1}{2} + \frac{1}{2} \sum _{b = 1}^{B} {\mathbf {I}}(T_{kb} = t) + \sum _{b = 1}^{B} {\mathbf {I}}(T_{kb} > t)}{B + 1} \quad k = 1,\dots ,p; \end{aligned}$$

and a p-value for each of the p tests is defined as

$$\begin{aligned} {\hat{\lambda }}_k = {\hat{L}}_k(T_k^{o}). \end{aligned}$$

In other words, p partial p-values are obtained (here \(T_k^{o}\) denotes the test statistic computed on the original data), together with their simulated permutation distributions \(\{{\hat{L}}_{kb}={\hat{L}}_{k}(T_{kb}),~b=1,\ldots ,B,~k=1,\ldots ,p\}\), as illustrated in Table 4.

Table 4 P-value permutation distribution

For each permutation and for the original data, we then combine the partial p-values into

$$\begin{aligned} T^{\prime o}&:= \phi ({\hat{\lambda }}_1,\dots ,{\hat{\lambda }}_p) \quad \text {and}\\ T_b^{\prime }&:= \phi ({\hat{L}}_{1b},\dots ,{\hat{L}}_{pb}) \end{aligned}$$

We again calculate the empirical significance level function

$$\begin{aligned} {\hat{L}}^{\prime }(t) := \frac{\frac{1}{2} + \frac{1}{2} \sum _{b = 1}^{B} {\mathbf {I}}(T_b^{\prime } = t) + \sum _{b = 1}^{B} {\mathbf {I}}(T_b^{\prime } > t)}{B + 1} \end{aligned}$$

and finally calculate the global p-value

$$\begin{aligned} {\hat{\lambda }}^{\prime } := {\hat{L}}^{\prime }(T^{\prime o}). \end{aligned}$$
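To make the steps above concrete, the whole procedure can be written in a few lines of R. The following is a minimal sketch for two groups, using the sum over the first group as partial test statistic and Fisher’s combining function; both are illustrative choices of ours, and this is not the authors’ implementation.

```r
## Minimal NPC sketch for two groups (illustrative, not the authors' code):
## partial statistics T_k = sum of the group-1 values of variable k,
## Fisher's function in the combination step, mid-p ties correction throughout.
npc_fisher <- function(X, n1, B = 1000) {
  n <- nrow(X); p <- ncol(X)
  phi  <- function(lambda) -2 * sum(log(lambda))            # Fisher's combining function
  stat <- function(Xp) colSums(Xp[1:n1, , drop = FALSE])    # T_k on a (permuted) dataset
  T_obs  <- stat(X)
  T_perm <- t(replicate(B, stat(X[sample(n), , drop = FALSE])))  # B x p matrix
  ## empirical significance level function L_k(t), as defined in the text
  L_hat <- function(tb, t) (0.5 + 0.5 * sum(tb == t) + sum(tb > t)) / (B + 1)
  lambda_obs  <- vapply(1:p, function(k) L_hat(T_perm[, k], T_obs[k]), numeric(1))
  lambda_perm <- sapply(1:p, function(k)
    vapply(1:B, function(b) L_hat(T_perm[, k], T_perm[b, k]), numeric(1)))
  ## combination step: same construction applied to the combined statistics
  Tc_obs  <- phi(lambda_obs)
  Tc_perm <- apply(lambda_perm, 1, phi)
  (0.5 + 0.5 * sum(Tc_perm == Tc_obs) + sum(Tc_perm > Tc_obs)) / (B + 1)
}

## Example: 10 vs 10 observations on p = 30 variables, group 1 shifted by 0.5
set.seed(1)
X <- rbind(matrix(rnorm(300, mean = 0.5), 10), matrix(rnorm(300), 10))
npc_fisher(X, n1 = 10)
```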

The choice of combining function \(\phi (\cdot )\) affects the definition and potentially the power of the combined test used to assess the global problem of interest. This is therefore a critical step.

According to Pesarin and Salmaso (2010), a combining function \(\phi (\cdot )\) should satisfy a few fundamental properties:

  • \(\phi (\cdot )\) should be non-increasing in each argument (i.e. \(\phi (\ldots , {\hat{\lambda }}_k, \ldots )\ge \phi (\ldots , {\hat{\lambda }}_k^1, \ldots )\) if \({\hat{\lambda }}_k \le {\hat{\lambda }}_k^1\)) and possibly symmetric (i.e. \(\phi ({\hat{\lambda }})=\phi ({\hat{\lambda }}_u)\), where \({\hat{\lambda }}_u\) is any permutation of the components of \({\hat{\lambda }}\))

  • \(\phi (\cdot )\) should reach its supremum value \(\phi _{sup}\) even when only a single partial p-value attains 0

  • for each significance level \(\alpha \), the related critical value \(T'_{\alpha }\) should be finite and lower than \(\phi _{sup}\)

  • the rejection region of the combined test achieved using \(\phi (\cdot )\) should be convex.

In this paper we consider some of the most popular combining functions which satisfy these properties and investigate, by means of a simulation study, how the choice of combining function affects the power of the combined tests under several different scenarios.

Let us now introduce these combining functions, a special case of alternative hypothesis, and the aforementioned competing nonparametric methods.

2.1 P-value combining functions

Five different combining functions are considered in this paper and they are briefly described in this section.

2.1.1 Fisher and truncated Fisher

Fisher’s combining function is defined as

$$\begin{aligned} \phi _{Fisher} :\, (0, 1)^{p} \longrightarrow {\mathbb {R}} \quad \quad ({\hat{\lambda }}_1,\dots ,{\hat{\lambda }}_p) \mapsto -2 \cdot \sum _{k = 1}^{p} \log ({\hat{\lambda }}_k) \end{aligned}$$

or, equivalently (since \(-2\log \) is monotone), as a test based on the product of the p-values.

This approach was extended by Zaykin et al. (2002) to the so-called truncated product method (TPM). Here only those p-values below a certain prespecified threshold \(\tau \) are considered, that is

$$\begin{aligned} \phi _{TPM} :\, (0, 1)^{p} \longrightarrow {\mathbb {R}} \quad \quad ({\hat{\lambda }}_1,\dots ,{\hat{\lambda }}_p) \mapsto -2 \cdot \sum _{k = 1}^{p} \log ({\hat{\lambda }}_k) \cdot {\mathbf {I}}({\hat{\lambda }}_k \le \tau ) \end{aligned}$$

Truncation generally helps gain power, especially with highly dependent data (Arboretti Giancristofaro et al. 2016).

2.1.2 Stouffer

Stouffer’s method, sometimes also attributed to Lipták, is based on the quantile function of the standard normal distribution (\(\Phi ^{-1}\)):

$$\begin{aligned} \phi _{Stouffer} :\, (0, 1)^{p} \longrightarrow {\mathbb {R}} \quad \quad ({\hat{\lambda }}_1,\dots ,{\hat{\lambda }}_p) \mapsto \sum _{k = 1}^{p} \Phi ^{-1}(1 - {\hat{\lambda }}_k) \end{aligned}$$

Here we choose the version with \(1 - {\hat{\lambda }}_k\) instead of \({\hat{\lambda }}_k\) as the argument of \(\Phi ^{-1}\) because for the NPC, large values should be significant.

2.1.3 Tippett

Tippett’s method looks at the smallest p-value (or in the case of the NPC, at one minus the smallest p-value):

$$\begin{aligned} \phi _{Tippett} :\, (0, 1)^{p} \longrightarrow {\mathbb {R}} \quad \quad ({\hat{\lambda }}_1,\dots ,{\hat{\lambda }}_p) \mapsto 1 - \min \{{\hat{\lambda }}_1,\dots ,{\hat{\lambda }}_p\} \end{aligned}$$

If the global null is true, the values of Tippett’s combining function obtained during the combination step of the NPC are distributed as the maximum of p (generally dependent) transformed partial p-values. The use of this combining function is recommended when only a small number of informative variables is expected (see Pesarin and Salmaso 2010, p. 133).

2.1.4 Logistic combining function

The logistic combining function (LCF) is similar to Fisher’s function. In the orientation where large values are significant, consistent with the functions above, it is defined as:

$$\begin{aligned} \phi _{LCF} :\, (0, 1)^{p} \longrightarrow {\mathbb {R}} \quad \quad ({\hat{\lambda }}_1,\dots ,{\hat{\lambda }}_p) \mapsto \sum _{k = 1}^{p} \log \left( \frac{1 - {\hat{\lambda }}_k}{{\hat{\lambda }}_k} \right) \end{aligned}$$
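For reference, the combining functions of this section translate directly into R; `lambda` is the vector of partial p-values, and all functions are oriented so that large values are significant, as the NPC requires.

```r
## Direct transcriptions of the closed-form combining functions above.
fisher_cf   <- function(lambda) -2 * sum(log(lambda))
tpm_cf      <- function(lambda, tau = 0.2)              # truncated product method
  -2 * sum(log(lambda) * (lambda <= tau))
stouffer_cf <- function(lambda) sum(qnorm(1 - lambda))
tippett_cf  <- function(lambda) 1 - min(lambda)
logistic_cf <- function(lambda) sum(log((1 - lambda) / lambda))

lambda <- c(0.01, 0.20, 0.64)                           # toy partial p-values
c(Fisher = fisher_cf(lambda), TPM = tpm_cf(lambda),
  Stouffer = stouffer_cf(lambda), Tippett = tippett_cf(lambda),
  Logistic = logistic_cf(lambda))
```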

2.1.5 Iterative combination

Different combining functions will of course lead to different p-values. This leaves the researcher with considerable freedom of choice and raises the already mentioned question of which combining function is ‘the best’. One way to address this problem is to formulate specific advice based on the performance of different combining functions under different scenarios. Another is to sidestep the question entirely by using the so-called iterative combination. Assume one has conducted the NPC as described above on a given dataset using f different combining functions \(\phi _1,\dots ,\phi _f\). Instead of one vector consisting of a p-value and the B values of the empirical significance level function as above, we obtain f such vectors, one for each combining function (see Table 5).

Table 5 Multiple combining functions permutation distributions

This is exactly the situation of the traditional NPC before the combination step, except that instead of p rows resulting from the p different test statistics, we now have f rows resulting from the f different combining functions. We can simply apply the combining functions again to obtain second-iteration p-values and values of the empirical significance level functions

$$\begin{aligned}&T_{gb}^{\prime \prime } := \phi _g({\hat{L}}_{1b}^{\prime },\ldots ,{\hat{L}}_{fb}^{\prime }) \\&{\hat{L}}_{gb}^{\prime \prime } = \frac{\frac{1}{2} + \frac{1}{2} \sum _{b^{\prime } = 1}^{B} {\mathbf {I}}(T_{gb^{\prime }}^{\prime \prime } = T_{gb}^{\prime \prime }) + \sum _{b^{\prime } = 1}^{B} {\mathbf {I}}(T_{gb^{\prime }}^{\prime \prime } > T_{gb}^{\prime \prime })}{B + 1} \\&g = 1,\dots ,f; \quad b = 1,\ldots ,B \end{aligned}$$

Our simulation study suggests that a two-fold convergence occurs if this iterative procedure is repeated: firstly, the p-values resulting from the f combining functions converge towards each other, and secondly, they all converge together to a certain fixed point. One could define this point as the ultimate global p-value. Of course, researchers retain some degree of freedom in that they still have to choose the set of combining functions to iterate; indeed, the number of possible sets of combining functions is even larger than the number of individual combining functions.

In the simulation study we used an iterative procedure based on Fisher’s, Stouffer’s and the logistic combining function. Convergence was considered to have occurred when the difference between the largest and smallest of the three p-values was less than 0.01. In this case the result from Fisher’s combining function at the iteration at which convergence was achieved was returned as the iterated p-value. If convergence did not occur after 10 iterations, a missing value was returned.
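A sketch of this iteration in R might look as follows. It assumes that `lambda_obs` and `lambda_perm` hold the partial p-values and their B x p permutation distribution from a completed NPC run (as in Table 4), and it implements the convergence rule just described: Fisher’s result is returned once the three p-values agree to within 0.01, and a missing value after 10 iterations.

```r
## Sketch of the iterative combination (inputs from a completed NPC run).
iterate_npc <- function(lambda_obs, lambda_perm, tol = 0.01, max_iter = 10) {
  cfs <- list(fisher   = function(l) -2 * sum(log(l)),
              stouffer = function(l) sum(qnorm(1 - l)),
              logistic = function(l) sum(log((1 - l) / l)))
  B <- nrow(lambda_perm)
  L_hat <- function(tb, t) (0.5 + 0.5 * sum(tb == t) + sum(tb > t)) / (B + 1)
  for (iter in 1:max_iter) {
    p_obs  <- numeric(length(cfs))
    p_perm <- matrix(NA_real_, B, length(cfs))
    for (g in seq_along(cfs)) {                 # combine with each function
      T_obs  <- cfs[[g]](lambda_obs)
      T_perm <- apply(lambda_perm, 1, cfs[[g]])
      p_obs[g]    <- L_hat(T_perm, T_obs)
      p_perm[, g] <- vapply(1:B, function(b) L_hat(T_perm, T_perm[b]), numeric(1))
    }
    if (max(p_obs) - min(p_obs) < tol)
      return(p_obs[1])                          # Fisher's result at convergence
    lambda_obs  <- p_obs                        # feed the f p-values back in
    lambda_perm <- p_perm
  }
  NA_real_                                      # no convergence within max_iter
}
```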

2.2 Stochastic ordering

A special example of an alternative hypothesis for the comparison of more than two groups is the so-called stochastic ordering alternative, which postulates a trend across the populations that imposes an order relation on the groups. Let us assume we observe univariate data \(X_{ij}\), with \(i = 1,\dots ,a\) being the group index and \(j = 1,\dots ,n_i\) being the subject index. Again let us use the unit-by-unit representation and write \({\mathbf {X}} = \{X(i) | i = 1,\dots ,n\}\), where \(n = \sum _{i = 1}^{a} n_i\), and assume the first \(n_1\) observations belong to group 1, the next \(n_2\) to group 2 and so on. We want to test the hypothesis

$$\begin{aligned} H_0:\, \{{\mathbf {X}}_1 \overset{d}{=} \cdots \overset{d}{=} {\mathbf {X}}_a\} \end{aligned}$$

against one or both of

$$\begin{aligned} H_1^{\le }:\, \{{\mathbf {X}}_1 \overset{d}{\le } {\mathbf {X}}_2 \overset{d}{\le } \cdots \overset{d}{\le } {\mathbf {X}}_a\} \quad H_1^{\ge }:\, \{{\mathbf {X}}_1 \overset{d}{\ge } {\mathbf {X}}_2 \overset{d}{\ge } \cdots \overset{d}{\ge } {\mathbf {X}}_a\} \end{aligned}$$

with at least one inequality being strict. For this purpose one can partition the null hypothesis into \(a - 1\) sub-hypotheses corresponding to pairwise comparisons of pooled groups. To do so, use \(Y_{1c} = {\mathbf {X}}_1 \cup \cdots \cup {\mathbf {X}}_c\) to denote the artificial group of observations belonging to the first c original groups, and \(Y_{2c}\) to denote the observations not belonging to the first c groups. Then the original testing problem becomes (for \(H_1^{\le }\); the case of \(H_1^{\ge }\) works equivalently)

$$\begin{aligned} H_0:\, \{{\mathbf {X}}_1 \overset{d}{=} \cdots \overset{d}{=} {\mathbf {X}}_a\}&= \bigcap _{c = 1}^{a - 1} \{Y_{1c} \overset{d}{=} Y_{2c}\} \quad \text {vs} \\ H_1^{\le }:\, \{{\mathbf {X}}_1 \overset{d}{\le } {\mathbf {X}}_2 \overset{d}{\le } \cdots \overset{d}{\le } {\mathbf {X}}_a\}&= \bigcup _{c = 1}^{a - 1}\{Y_{1c} \overset{d}{\le } Y_{2c}\} \end{aligned}$$

This approach can easily be extended to a multivariate setting. Under the assumption that multivariate distributions are stochastically comparable to one another, equality of the marginal distributions implies equality of the multivariate distributions. The reverse is of course always true. In this case, therefore, the global null can simply be written as an intersection of local null hypotheses of equality of marginal distribution and the NPC can be applied again.
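In code, this partition amounts to building, for each cut point c, the index sets of the two pooled pseudo-groups. The helper below is a hypothetical illustration of ours (not part of any cited package), assuming integer group labels.

```r
## For group labels g in {1, ..., a}, each cut point c yields the split
## Y_1c = units in groups 1..c versus Y_2c = units in groups (c+1)..a.
ordering_splits <- function(g) {
  a <- max(g)
  lapply(1:(a - 1), function(c) list(Y1 = which(g <= c), Y2 = which(g > c)))
}

g <- rep(1:3, each = 10)        # three groups of 10 in unit-by-unit order
splits <- ordering_splits(g)    # two splits: {1} vs {2,3} and {1,2} vs {3}
## each split defines one sub-hypothesis Y_1c =d Y_2c, whose partial
## p-values are then fed into the usual NPC combination step
```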

2.3 Competing methods

The distance-based approach by Mielke and Berry (2007) and the rank-based approach by Ellis et al. (2017) are also considered in this study and are briefly described in this section.

2.3.1 Rank-based approach

Assume we observe realizations of random variables \(X_{ij}^{(k)}\), where \(i = 1,\dots ,a\) is the group index, \(j = 1,\dots ,n_i\) is the subject index, and \(k = 1,\dots ,p\) is the variable index. Set \(N := \sum _{i = 1}^{a} n_i\) as the total number of observational units and \({\mathbf {X}}_{ij} := (X_{ij}^{(1)},\dots ,X_{ij}^{(p)})^{\top }\) as the observation vector for each observational unit. We assume that all \({\mathbf {X}}_{ij}\) are independent and within each group are identically distributed as \({\mathbf {X}}_{ij} \sim F_i\). We want to test the global null hypothesis

$$\begin{aligned} H_0:\, F_1 = \cdots = F_a. \end{aligned}$$

Let \(R_{ij}^{(k)}\) be the mid-rank of \(X_{ij}^{(k)}\) among all N observational units and let \({\mathbf {R}}_{ij} := (R_{ij}^{(1)}, \dots , R_{ij}^{(p)})\) be the vector of these ranks associated with each observational unit. Then define the within group means \(\bar{{\mathbf {R}}}_{i.} := \frac{1}{n_i} \sum _{j=1}^{n_i} {\mathbf {R}}_{ij}\) and the total unweighted mean \(\bar{{\mathbf {R}}}_{..} := \frac{1}{a} \sum _{i=1}^{a} \bar{{\mathbf {R}}}_{i.}\). Finally

$$\begin{aligned} {\mathbf {H}}&:= \frac{1}{a-1} \sum _{i=1}^{a} \left( \bar{{\mathbf {R}}}_{i.} - \bar{{\mathbf {R}}}_{..}\right) \left( \bar{{\mathbf {R}}}_{i.} - \bar{{\mathbf {R}}}_{..}\right) ^{\top }\\ {\mathbf {G}}&:= \frac{1}{a} \sum _{i=1}^{a} \frac{1}{n_i(n_i - 1)} \sum _{j=1}^{n_i} \left( {\mathbf {R}}_{ij} - \bar{{\mathbf {R}}}_{i.}\right) \left( {\mathbf {R}}_{ij} - \bar{{\mathbf {R}}}_{i.}\right) ^{\top } \end{aligned}$$

Define further

$$\begin{aligned} {\hat{f}}_1 := \frac{tr({\mathbf {G}})^2}{tr({\mathbf {G}}^2)} \quad \text {and} \quad {\hat{f}}_2 := \frac{a^2}{(a - 1) \sum _{i = 1}^{a} \frac{1}{n_i - 1}} {\hat{f}}_1. \end{aligned}$$

The ANOVA-type test statistic used is then defined as \(T := \displaystyle \frac{tr({\mathbf {H}})}{tr({\mathbf {G}})}\) and we have approximately

$$\begin{aligned} T \sim F({\hat{f}}_1, {\hat{f}}_2). \end{aligned}$$

Note that Bathke et al. (2008) define more than just this test statistic in their paper; however, this is the only one that remains applicable when the number of variables exceeds the number of observations. The approximation of the sampling distribution given above is derived in Bathke et al. (2008), where simulations also show excellent small-sample performance. For the simulations presented here, we do not use the approximation by the F-distribution but rather a permutation version of T, whose p-values are therefore calculated as

$$\begin{aligned} \frac{\frac{1}{2} + \frac{1}{2} \sum _{b = 1}^{B} {\mathbf {I}}(T^{*}_b = T) + \sum _{b = 1}^{B} {\mathbf {I}}(T^{*}_b > T)}{B + 1} \end{aligned}$$

where T is the test statistic from the original sample and \(T^{*}_b; \; b = 1,\ldots ,B\) is the test statistic for the b-th permuted sample.
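A direct R transcription of the statistic and of this permutation p-value might look as follows (a sketch for illustration; the npmv package provides the complete method).

```r
## ANOVA-type statistic T = tr(H)/tr(G) from the definitions above.
rank_stat <- function(X, g) {
  g  <- factor(g)
  a  <- nlevels(g)
  R  <- apply(X, 2, rank)                            # mid-ranks over all N units
  Rb <- apply(R, 2, function(r) tapply(r, g, mean))  # a x p matrix of mean ranks
  H  <- crossprod(sweep(Rb, 2, colMeans(Rb))) / (a - 1)
  G  <- Reduce(`+`, lapply(levels(g), function(l) {
    Ri <- R[g == l, , drop = FALSE]; ni <- nrow(Ri)
    crossprod(sweep(Ri, 2, colMeans(Ri))) / (ni * (ni - 1))
  })) / a
  sum(diag(H)) / sum(diag(G))
}

## Permutation p-value exactly as in the displayed formula
perm_pvalue <- function(X, g, B = 1000) {
  T_obs  <- rank_stat(X, g)
  T_perm <- replicate(B, rank_stat(X, sample(g)))
  (0.5 + 0.5 * sum(T_perm == T_obs) + sum(T_perm > T_obs)) / (B + 1)
}
```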

2.3.2 Distance-based approach

The basis of this approach (see Mielke and Berry 2007) is the standard analysis of variance (ANOVA). We observe realizations of real-valued variables \(X_{ij}\), where \(i = 1,\dots ,a\) is the group index and \(j = 1,\dots ,n_i\) is the subject index. Define

$$\begin{aligned} MSE_{Between}&:= \frac{1}{a - 1} \sum _{i = 1}^{a} n_i (X_{i.} - X_{..})^2\\ MSE_{Within}&:= \frac{1}{N - a} \sum _{i = 1}^{a} \sum _{j = 1}^{n_i} (X_{ij} - X_{i.})^2. \end{aligned}$$

The F statistic is then

$$\begin{aligned} F = \frac{MSE_{Between}}{MSE_{Within}} = \frac{2 \sum _{i = 1}^{a} \sum _{j = 1}^{n_i} (X_{ij} - X_{..})^2 - (N - a) \delta }{(a - 1) \delta }. \end{aligned}$$
(1)

where

$$\begin{aligned} \delta = \sum _{i = 1}^{a} C_i \xi _i, \quad C_i = \frac{n_i - 1}{N - a}, \quad \xi _i = \left( {\begin{array}{c}n_i\\ 2\end{array}}\right) ^{-1} \sum _{j < l} \left( X_{ij} - X_{il} \right) ^2. \end{aligned}$$

Note that \(\delta = 2MSE_{Within}\) (see Mielke and Berry 2007, page 52). Since all other quantities in (1) are invariant under permutation of the data, F is a decreasing function of \(\delta \), so one only needs to compute \(\delta \) in a permutation approach: permute the data B times, obtaining \(\delta ^{*}_1, \dots , \delta ^{*}_B\), and estimate the p-value as \(\frac{1}{B} \sum _{b = 1}^{B} {\mathbf {I}}(\delta ^{*}_b \le \delta )\). In this paper we estimated the p-value as

$$\begin{aligned} \frac{\frac{1}{2} + \frac{1}{2} \sum _{b = 1}^{B} {\mathbf {I}}(\delta ^{*}_b = \delta ) + \sum _{b = 1}^{B} {\mathbf {I}}(\delta ^{*}_b < \delta )}{B + 1} \end{aligned}$$

for a fair comparison with the NPC and the rank-based method.

This approach can be turned into a multivariate approach with a simple modification: when observing multivariate data \({\mathbf {X}}_{ij} = (X_{ij}^{(1)}, \dots , X_{ij}^{(p)})\), one computes \(d({\mathbf {X}}_{ij}, {\mathbf {X}}_{il})\) for a chosen metric d instead of \((X_{ij} - X_{il})^{2}\). Thus in the multivariate setting the statistic \(\delta \) is computed using

$$\begin{aligned} \xi _i = \left( {\begin{array}{c}n_i\\ 2\end{array}}\right) ^{-1} \sum _{j < l} d\left( {\mathbf {X}}_{ij}, {\mathbf {X}}_{il}\right) . \end{aligned}$$

If the variables are measured in different units, they will first need to be standardized (see Mielke and Berry 2007, p. 53). Assume we observe \(X_{j}^{(k)}\), \(j = 1,\dots ,N\), for the k-th variable and want to use the Euclidean metric (\(v = 2\)) for the test statistic \(\delta \). We can then standardize this variable by defining

$$\begin{aligned} Y_{j}^{(k)} := \frac{X_{j}^{(k)}}{(\sum _{j < l} |X_{j}^{(k)} - X_{l}^{(k)}|^{v})^{\frac{1}{v}}} \end{aligned}$$

which results in

$$\begin{aligned} \sum _{j < l} |Y_{j}^{(k)} - Y_{l}^{(k)}|^{v} = 1. \end{aligned}$$
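Putting the pieces together, a sketch of the multivariate \(\delta \) statistic with the Euclidean metric (\(v = 2\)), the above standardization, and the permutation p-value could read as follows; this is our illustration, not Mielke and Berry’s code.

```r
## Multivariate delta statistic with Euclidean distances and the
## standardization above (each variable scaled so sum_{j<l}|diff|^v = 1).
delta_stat <- function(X, g, standardize = TRUE, v = 2) {
  if (standardize)
    X <- apply(X, 2, function(x) x / sum(dist(x)^v)^(1 / v))
  D <- as.matrix(dist(X))                 # pairwise Euclidean distances
  N <- nrow(X); a <- length(unique(g))
  sum(vapply(sort(unique(g)), function(l) {
    Di <- D[g == l, g == l]; ni <- nrow(Di)
    xi <- mean(Di[upper.tri(Di)])         # choose(n_i, 2)^{-1} sum_{j<l} d(., .)
    (ni - 1) / (N - a) * xi               # C_i * xi_i
  }, numeric(1)))
}

## small delta means homogeneous groups, so count permutations with
## delta* below the observed value (mid-p version, as in the text)
delta_pvalue <- function(X, g, B = 1000) {
  d_obs  <- delta_stat(X, g)
  d_perm <- replicate(B, delta_stat(X[sample(nrow(X)), ], g))
  (0.5 + 0.5 * sum(d_perm == d_obs) + sum(d_perm < d_obs)) / (B + 1)
}
```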

3 Simulation study

Before turning to the simulation scenarios, it is worth noting that while the NPC can easily accommodate one-sided hypotheses, the competitor procedures as described above only test two-sided hypotheses. We decided to focus our investigation on one-sided tests; in a stochastic dominance situation this is the only possible hypothesis. However, to provide a fair comparison, we also conducted two-sided versions of the tests. With this in mind, we explored the simulation scenarios discussed below.

3.1 Scenario descriptions

In this section we present the scenarios considered in the simulation study. For all scenarios we generated 1000 Monte Carlo samples. B was set to 1000 and we used the same permutations for each method. For the truncated product method, we set \(\tau = 0.2\). Since there were always 30 informative variables under the alternative, we did not consider Tippett’s combining function, because its use is recommended for cases with only a few such variables, as mentioned in the corresponding section. For all simulations we used the statistical software package R (R Core Team 2021). The t-distributed variables were created using the package mvtnorm (Genz et al. 2009, 2021). Ordinal variables were created using the package GenOrd (Barbiero and Ferrari 2015), which allows for the creation of ordinal variables with a prespecified correlation structure and marginal distributions.

3.1.1 Two groups–continuous–uncorrelated–one-sided alternative

In this scenario there are two groups, each with a sample size of 10. All variables in this scenario follow a t-distribution with two degrees of freedom and are independent of each other. We created blockwise response variables, one block consisting of 30 variables. In the alternative scenario the variables in the first block have mean 0.25 in the first group and mean zero in the second group. In the null hypothesis scenario all variables in the first block have mean zero. We then added between 0 and 3 blocks of uninformative variables, i.e. blocks where all variables have mean zero, thereby increasing the number of variables from 30 to 60, 90 and 120 respectively. We did this for both the alternative and null hypothesis scenario. For example, adding two blocks of uninformative variables results in three blocks (90 variables) for each scenario: one informative and two uninformative blocks in the alternative scenario and three uninformative blocks in the null hypothesis scenario.

For the NPC we tested the hypothesis \(H_0:\, \{{\mathbf {X}}_1 \overset{d}{=} {\mathbf {X}}_2\}\) versus \(H_1:\, \{{\mathbf {X}}_1 \overset{d}{>} {\mathbf {X}}_2\}\), where \({\mathbf {X}}_i\) is a random vector distributed as the observations from the i-th group. As a test statistic for each variable we used the sum of the first 10 observations.
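For illustration, the data for this scenario can be generated along the following lines. This is a sketch: the helper name and seed are ours, and we draw the independent \(t_2\) variates with rt() for simplicity (the correlated scenarios below would instead use mvtnorm::rmvt with the equicorrelated scale matrix of Sect. 3.1.4).

```r
## 2 groups of 10; blocks of 30 t_2 variables; under the alternative the
## first block is shifted by 0.25 in group 1, extra blocks are uninformative.
make_data <- function(extra_blocks = 0, alternative = TRUE) {
  p <- 30 * (1 + extra_blocks)
  X <- matrix(rt(20 * p, df = 2), nrow = 20)     # independent t_2 variates
  if (alternative)
    X[1:10, 1:30] <- X[1:10, 1:30] + 0.25        # informative block, group 1
  X
}

set.seed(1)
X <- make_data(extra_blocks = 2)                 # 90 variables, 30 informative
## correlated analogue: mvtnorm::rmvt(20, sigma = 0.5 + 0.5 * diag(15), df = 2)
```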

3.1.2 Two groups–continuous–uncorrelated–two-sided alternative

Here everything is exactly the same as in the previous scenario, except we tested \(H_0:\, \{{\mathbf {X}}_1 \overset{d}{=} {\mathbf {X}}_2\}\) versus \(H_1:\, \{{\mathbf {X}}_1 \overset{d}{\ne } {\mathbf {X}}_2\}\), using the absolute value of the sum of the first 10 observations minus the sum of the other 10 observations as the permutation test statistic.

Here we also tested the rank and distance-based methods. For the distance-based approach we used the Euclidean and the Manhattan metric. Since all variables are on the same scale, we did not standardize.

3.1.3 Two groups–mixed–uncorrelated–one-sided alternative

Again there are two groups of 10 observations each and blockwise variables are created in the same way as in the two groups–continuous–uncorrelated scenarios. Each block consists of 15 t-distributed variables with two degrees of freedom and 15 discrete variables uniformly distributed on \(\{1, 2, 3, 4, 5\}\) under the null hypothesis. Under the alternative, the first block of the first group was shifted by 0.25 for the t-distributed variables, and the discrete variables were distributed as

$$\begin{aligned} P(X = 1)= & {} \frac{1}{15}, \, P(X = 2) = \frac{2}{15}, \, P(X = 3) = \frac{3}{15}, \, P(X = 4) = \frac{4}{15}, \,\\ P(X = 5)= & {} \frac{5}{15} \end{aligned}$$

The test statistics used for the NPC in this scenario were the sum of the first 10 observations for the continuous variables and, for the discrete variables, the one-sided Anderson–Darling test statistic, defined as:

$$\begin{aligned} T^{(k)} = \sum _{i=1}^{2} \sum _{j=1}^{n_i} \frac{{\hat{F}}_{2}^{(k)}(X_{ij}^{(k)}) - {\hat{F}}_{1}^{(k)}(X_{ij}^{(k)})}{{\hat{F}}^{(k)}(X_{ij}^{(k)})(1-{\hat{F}}^{(k)}(X_{ij}^{(k)}))}, \,\forall k=1,\ldots ,p \end{aligned}$$

where \({\hat{F}}_{1}^{(k)}(t) = \frac{1}{n_1}\sum \nolimits _{j=1}^{n_1}{\mathbf {I}}(X_{1j}^{(k)}\le t)\), \({\hat{F}}_{2}^{(k)}(t)= \frac{1}{n_2}\sum \nolimits _{j=1}^{n_2}{\mathbf {I}}(X_{2j}^{(k)}\le t)\), \({\hat{F}}^{(k)}(t)= \frac{1}{N} (n_1 {\hat{F}}_{1}^{(k)}(t) + n_2 {\hat{F}}_{2}^{(k)}(t))\), and \(N=n_1+n_2\).
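A sketch of this statistic in R follows. Note that a literal transcription would divide by zero whenever \(X_{ij}^{(k)}\) equals the pooled maximum, so we evaluate the pooled ecdf with denominator \(N + 1\); this guard is our choice and is not stated in the text.

```r
## One-sided two-sample Anderson-Darling type statistic; large values
## support group 1 being stochastically larger than group 2.
ad_stat <- function(x1, x2) {
  x  <- c(x1, x2); N <- length(x)
  F1 <- ecdf(x1); F2 <- ecdf(x2)
  Fp <- function(t) (length(x1) * F1(t) + length(x2) * F2(t)) / (N + 1)
  sum((F2(x) - F1(x)) / (Fp(x) * (1 - Fp(x))))
}

set.seed(1)
x1 <- sample(1:5, 10, replace = TRUE, prob = (1:5) / 15)  # alternative distribution
x2 <- sample(1:5, 10, replace = TRUE)                     # uniform on {1,...,5}
ad_stat(x1, x2)
```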

3.1.4 Two groups–mixed–correlated–one-sided alternative

This scenario is the same as the previous one, except we introduced a dependency structure. The dependency holds only among variables of the same type (continuous or discrete) within the same block. The correlation/scale matrix used was

$$\begin{aligned} \Sigma _{i, j} = {\left\{ \begin{array}{ll} 1 &{}\text { if } i = j\\ 0.5 &{}\text { if } i \ne j \end{array}\right. } \quad i, j = 1,\dots ,15. \end{aligned}$$

3.1.5 Three groups–mixed–correlated–two-sided alternative

Here there are 3 groups with 10 observations each. The setup was the same as in the previous scenario, i.e. the alternative influenced only group 1, the other two groups being unaffected.

For the NPC we tested for equality of distribution versus any kind of stochastic ordering. The test statistic used by the NPC for the continuous variables was the standard ANOVA F statistic. For the discrete variables we used the a-sample version of the Anderson–Darling test statistic:

$$\begin{aligned} T^{(k)} = \sum _{i = 1}^{a} \sum _{j = 1}^{n_i} \frac{({\hat{F}}_{i}^{(k)}(X_{ij}^{(k)}) - {\hat{F}}^{(k)}(X_{ij}^{(k)}))^2}{{\hat{F}}^{(k)}(X_{ij}^{(k)})(1 - {\hat{F}}^{(k)}(X_{ij}^{(k)}))}, \, \forall k=1,\ldots ,p \end{aligned}$$

where \({\hat{F}}_{i}^{(k)}(t)= \frac{1}{n_i} \sum \nolimits _{j=1}^{n_i}{\mathbf {I}}(X_{ij}^{(k)}\le t)\), \({\hat{F}}^{(k)}(t)= \frac{1}{N}\sum _{i=1}^a n_i {\hat{F}}_{i}^{(k)}(t)\), and \(N=\sum _{i=1}^a n_i\).

Here we also tested the rank and distance-based methods. For the distance-based approach we again used the Euclidean and Manhattan metric. Since we have a mixture of different variable types, we also standardized them using the Euclidean metric.

3.1.6 Three groups–mixed–correlated–stochastic ordering alternative

The setup is the same as in the previous scenario but the alternative is different. Under the alternative in this scenario, the continuous variables in group 3 had mean zero and the discrete variables were uniformly distributed on \(\{1, 2, 3, 4, 5\}\). The continuous variables in the first block of the second group had mean 0.25 and the corresponding discrete variables were distributed as

$$\begin{aligned} P(X = 1)= & {} \frac{1}{15}, \, P(X = 2) = \frac{2}{15}, \, P(X = 3) = \frac{3}{15}, \, P(X = 4) = \frac{4}{15}, \,\\ P(X = 5)= & {} \frac{5}{15}. \end{aligned}$$

The continuous variables in the first block of the first group had mean 0.5 and the discrete variables in the same block were distributed as

$$\begin{aligned} P(X = 1)= & {} \frac{1}{35}, \, P(X = 2) = \frac{3}{35}, \, P(X = 3) = \frac{6}{35}, \, P(X = 4) = \frac{10}{35}, \,\\ P(X = 5)= & {} \frac{15}{35}. \end{aligned}$$

The test statistics used by the NPC are the same as in the two groups–mixed–uncorrelated–one-sided alternative and two groups–mixed–correlated–one-sided alternative scenarios. Here we tested for a prespecified stochastic order, that is \(H_0:\, \{{\mathbf {X}}_1 \overset{d}{=} {\mathbf {X}}_2 \overset{d}{=} {\mathbf {X}}_3\}\) versus \(H_1:\, \{{\mathbf {X}}_1 \overset{d}{\ge } {\mathbf {X}}_2 \overset{d}{\ge } {\mathbf {X}}_3\}\) with at least one inequality being strict.

3.2 Results

In this section we present the results of the simulation study for each scenario.

3.2.1 Two groups–continuous–uncorrelated–one-sided alternative

Figure 1a shows the rejection rate at \(\alpha = 0.05\) under the null and alternative hypotheses for all tests. Figure 1b, c illustrate the actual rejection rate under different significance levels for the null and alternative hypotheses respectively.

There are some differences between the combining functions: the Fisher and TPM functions perform worse than the others when there are no uninformative variables, but perform similarly when the number of uninformative variables is high. All NPC variants suffer greatly in terms of power as the number of uninformative variables increases. All methods kept the nominal level.

Fig. 1 Two groups–continuous–uncorrelated–one-sided alternative scenario

3.2.2 Two groups–continuous–uncorrelated–two-sided alternative

Figure 2 shows rejection rates at \(\alpha = 0.05\) for all tests. Figure 3 illustrates the level of the different tests under the null hypothesis for different levels of \(\alpha \), and Fig. 4 does the same under the alternative hypothesis.

As can be seen, the power of the NPC variants is much smaller than in the two groups–continuous–uncorrelated–one-sided alternative scenario, which is to be expected. Within the NPC methods, the Stouffer and Logistic methods seem to be slightly inferior to the other three. Again, a reduction in power is seen when adding uninformative blocks; however, after one added block, the power is already so close to the rejection rate under the null that no substantial differences can be seen between one, two or three blocks of uninformative variables. The distance-based approaches are very similar in power to the NPC variants, whereas the rank-based method seems to be slightly more powerful. All methods kept the nominal level.

Fig. 2 Rejection rates at \(\alpha = 0.05\) for the two groups–continuous–uncorrelated–two-sided alternative scenario

Fig. 3 Actual rejection rate versus nominal level for the two groups–continuous–uncorrelated–two-sided alternative scenario when the null hypothesis is true

Fig. 4 Actual rejection rate versus nominal level for the two groups–continuous–uncorrelated–two-sided alternative scenario when the alternative hypothesis is true

3.2.3 Two groups–mixed–uncorrelated–one-sided alternative

Figure 5a shows the rejection rate at \(\alpha = 0.05\) under the null and alternative hypotheses for all tests. Figure 5b, c illustrate the actual rejection rate for different \(\alpha \)-levels under the null and alternative hypotheses respectively.

The power to detect the alternative is much larger here, probably because the effect on the discrete variables is picked up more easily than that on the continuous variables. There does not seem to be any great difference between the combining functions. Fisher’s combining function and the iterative method perform slightly better than the others when a large number of uninformative variables is present. All tests kept the nominal level.

Fig. 5 Two groups–mixed–uncorrelated–one-sided alternative scenario

3.2.4 Two groups–mixed–correlated–one-sided alternative

Figure 6a shows the rejection rate at \(\alpha = 0.05\) under the null and alternative hypotheses for all tests. Figure 6b, c show the actual rejection rate for different significance levels under the null and alternative hypotheses respectively.

It is clear from these results that introducing a strong correlation between variables greatly reduces the power of the tests. No large difference is present between combining functions. Fisher’s combining function and the iterative method, this time jointly with the truncated product method, slightly outperform the others when a large number of uninformative variables is present. All methods kept the nominal level.

Fig. 6 Two groups–mixed–correlated–one-sided alternative scenario

3.2.5 Three groups–mixed–correlated–two-sided alternative

Figure 7 shows the rejection rate at \(\alpha = 0.05\) under the null and alternative hypotheses for all tests. Figure 8 illustrates the actual rejection rate for different \(\alpha \)-levels under the null hypothesis and Fig. 9 does the same for the alternative hypothesis.

The overall power of the tests is increased somewhat in comparison to the previous scenario, likely due to the higher overall sample size. Again the Stouffer and Logistic methods are slightly inferior to the others. The rank-based method is essentially identical in power to the better performing NPC methods. Under this scenario, the distance-based methods show superior performance to the rank-based method. Indeed, the power of the rank-based method appears to be negatively affected by the presence of mixed variable types. All methods kept the nominal level.

Fig. 7 Rejection rates at \(\alpha = 0.05\) for the three groups–mixed–correlated–two-sided alternative scenario

Fig. 8 Actual rejection rate versus nominal level for the three groups–mixed–correlated–two-sided alternative scenario when the null hypothesis is true

Fig. 9 Actual rejection rate versus nominal level for the three groups–mixed–correlated–two-sided alternative scenario when the alternative hypothesis is true

3.2.6 Three groups–mixed–correlated–stochastic ordering alternative

Figure 10a shows the rejection rate at \(\alpha = 0.05\) under the null and alternative hypotheses for all tests. Figure 10b, c illustrate the actual rejection rate for different significance levels under the null and alternative hypotheses respectively.

The overall power is again increased compared to the previous scenario due to the larger effect size. The inferiority of the Stouffer and Logistic methods is even more evident now, while the others are comparable in power. All methods kept the nominal level.

Fig. 10 Three groups–mixed–correlated–stochastic ordering alternative scenario

4 Application to a real case study

Finally, we applied the NPC procedure (using all the aforementioned combining functions) and the competing methods to a real case study in the medical field to further compare their performances.

4.1 Description of the dataset

The dataset we used to illustrate the different procedures is a small random sample taken from a much larger dataset gathered by the Departments of Internal Medicine I, Geriatrics, Neurology, and Pneumology and the Institute of Medical and Chemical Laboratory Diagnostics of the Paracelsus Medical University, Salzburg, Austria in the context of the Paracelsus 10,000 project (see https://salk.at/16707.html). The goal of the project is to collect epidemiological data and establish a biobank containing samples of 10,000 inhabitants of the city of Salzburg and neighboring areas. For the purposes of this study, we extracted the laboratory data of ten male and ten female participants.

4.2 Statistical analysis

We transformed the non-numeric variables by ordering the observed values and coding them from 1 to l, where l is the number of unique observed values. The null hypothesis was equality of the multivariate distributions of the observed variables between men and women. For the NPC we used the absolute value of the difference between the sum of the first 10 observations and the sum of the second 10 observations as the test statistic for the continuous variables. For the ordinal variables we used the two-sided version of the Anderson–Darling test statistic given in Sect. 3.1.5. We permuted the data 10,000 times.

4.3 Results

The global null was rejected by all methods. The p-values (rounded to four digits) can be found in Table 6. All of them allow for a rejection of the null hypothesis at \(\alpha = 0.05\). The different combining functions of the NPC produce a range of p-values from 0.0171 for the iterative combination to 0.0365 for Stouffer’s combining function. The rank-based approach and the Euclidean-distance-based approach generate p-values at the lower end of the NPC spectrum, and the Manhattan-distance-based approach produced the smallest p-value of all. One should be careful not to read too much into the differences in p-values in this individual case; however, we would like to point out that the order of the p-values corresponds roughly to the power of the different combining functions in the two groups–mixed–correlated–one-sided alternative scenario, which is also the scenario that is probably closest to the structure of these data. We would also like to point out that when using only the univariate p-values produced by the NPC and correcting them with the Bonferroni–Holm procedure, no significant difference can be detected in any variable (see Table 7).

Table 6 Global p-values
Table 7 Univariate p-values used for the NPC

5 Conclusions

Testing hypotheses of equality in distribution in a multivariate context when the number of variables is much greater than the sample size is a challenging task. Several nonparametric tests have been proposed in the literature to address this type of problem and among them are the distance-based approach by Mielke and Berry (2007), the rank-based approach by Ellis et al. (2017), and the Nonparametric Combination procedure by Pesarin and Salmaso (2010).

In this paper we conducted a simulation study to evaluate the performance of these methods under several different scenarios and then applied them to a real case study, placing particular emphasis on the NPC.

The choice of combining function is a critical step in the NPC procedure which can affect its performance. For this reason, in our study we also tried to evaluate the impact of this choice on the power of the testing procedure, with the aim of providing practitioners with a few guidelines to help them make this decision.

In those simulation scenarios in which we could compare different methods, and not simply different variants of the NPC, we observed the following:

  1. The rank-based approach is similar in performance to the best performing NPC variants.

  2. The distance-based approach was slightly inferior with only two groups present, but outperformed all others with three groups present.

When comparing the NPC methods with one another, the following statements can be made:

  1. Differences between combining functions are not very large but do exist.

  2. The truncated product method performs worse than the others when no correlation between variables is present, but is among the best performing methods when such a correlation is introduced.

  3. Overall, Fisher’s combining function and the iterative combining function have a favourable profile, regardless of dependency structure.

The advice we can give, therefore, is the following: the NPC with Fisher’s combining function or with an iteration of several combining functions, as well as the rank-based approach, performed robustly across all scenarios, never being among the worst methods. While the performance of the distance-based methods was not as good as that of the other procedures in the two groups–continuous–uncorrelated–two-sided alternative scenario, they outperformed all others in the three groups–mixed–correlated–two-sided alternative scenario. Either the nature of the variables (continuous vs mixed), the number of groups (two vs three) or the correlation structure (uncorrelated vs correlated) might be responsible for this; our conjecture is that the difference in correlation structure is the cause.

We can thus state, with some reservation, that the rank-based approach or the NPC with the Fisher or iterative combining function is a solid and reliable choice. When a strong correlation between variables is suspected, or a very large number of uninformative variables is believed to be present, the truncated product method also performs well.

Unfortunately, all methods suffer from a large loss in power when the number of uninformative variables is increased. We should also point out that, due to computational costs, the maximum ratio of uninformative to informative variables in our simulations is 3:1. It is of course possible that one of the investigated methods would show better performance were this ratio increased.

In the Application section we can see that all the presented methods are valuable tools for multivariate data analysis. While a univariate approach, with the necessary adjustment for multiple testing, failed to show any difference between groups, the difference was clearly significant with all multivariate methods.