
Nonparametric Combination Tests for Dentistry Applications

  • Rosa Arboretti
  • Eleonora Carrozzo
  • Luigi Salmaso
Chapter
Part of the SpringerBriefs in Statistics book series (BRIEFSSTATIST)

Abstract

In this chapter we present a brief overview of multivariate permutation tests useful for dentistry applications. Particular attention is given to problems with repeated measurements and/or missing data. Testing hypothesis problems for repeated measurements and missing data are examined by means also of a real case study related to a preliminary double-blind, placebo controlled, randomized clinical trial with a 6-month follow-up period. The purpose of this trial is to evaluate the effectiveness of type A botulinum toxin to treat myofascial pain symptoms and to reduce muscle hyperactivity in bruxers.

Keywords

Botulinum toxin · Partial test · Main treatment effect · Permutation distribution · Actual sample size

Within the field of standard parametric or rank-based nonparametric methods, a large number of univariate problems may be faced effectively. Under relatively mild conditions, their permutation counterparts are generally asymptotically as good as the best parametric ones (Lehmann 2009), and for most sample sizes of practical interest the relative lack of efficiency of permutation solutions may be compensated by the lack of approximation affecting parametric asymptotic counterparts. Consider, for instance, the situation where the responses are multivariate normal and there are too many nuisance parameters to estimate and remove, since each estimate implies a reduction of the degrees of freedom in the overall analysis (note that “responses,” “variables,” “outcomes,” and “end points” are often used synonymously); here the permutation solution may even be more efficient than its parametric counterpart. Moreover, most parametric methods rest on several assumptions that rarely hold in real contexts, so that the consequent inferences, when not improper, are necessarily approximate, and their approximations are often difficult to assess. For instance, too often and without any justification, researchers assume multivariate normality, random sampling from a given population, homoscedasticity of responses also under the alternative, etc., simply so that a likelihood function can be written down and a variance–covariance matrix estimated. As a result, the consequent inferences lack real credibility.

The assumptions that parametric methods generally require are therefore stringent and often quite unrealistic, unclear, and difficult to justify; sometimes they are set merely on an ad hoc basis for a specific inferential analysis. They appear to be related mostly to the availability of the methods one wishes to apply rather than to well-discussed necessities obtained from a rational analysis of reality, in accordance with the idea of modifying a problem so that a known method is applicable rather than modifying methods in order to deal properly with the problem. With nonparametric approaches, on the contrary, the assumptions are kept at a lower, workable level, avoiding those which are difficult to justify or interpret, and possibly without excessive loss of inferential efficiency. Being based on more realistic foundations for statistical inference, these approaches are intrinsically robust and the consequent inferences credible.

However, there are many complex multivariate problems (quite common in clinical trials, epidemiology, and biostatistics) which are difficult to solve outside the conditional framework and in particular outside the method of nonparametric combination (NPC) of dependent permutation tests.

We refer to Pesarin and Salmaso (2010) for an extended treatment of the theory presented here. This chapter summarizes the concepts needed to apply multivariate permutation tests, in particular to repeated measures designs, which are widely used in follow-up studies in dentistry applications.

5.1 Repeated Measures Problems and the Nonparametric Combination

In this section, we deal with observational or experimental situations where each subject is observed on a finite or at most a countable number of occasions, usually according to time or space. Thus, successive responses of one unit are dependent and may be viewed as obtained by a discrete or discretized stochastic process. This kind of problem is known as repeated measures design. With reference to each specific subject, repeated observations are also called the response profiles, and may be viewed as a multivariate variable.

Without loss of generality, we discuss general problems which can be referred to in terms of a one-way multivariate analysis of variance (MANOVA) layout for response profiles. Hence, we refer to testing problems for treatment effects when units are partitioned into C groups or samples, where C is given by the levels of a treatment and measurements are typically repeated k times on the same units. We want to test whether the observed profiles do or do not depend on treatment levels. It is presumed that responses may depend on time or space and that related effects are not of primary interest. From here onward, we refer to time occasions of observation, where time means any sequentially ordered entity including: space, lexicographic ordering, etc.

In the context of this chapter, repeated measurements, panel data, longitudinal data, response trajectories, and profiles are considered synonyms. The proposed solutions essentially employ the method of NPC of dependent permutation tests, each obtained by a partial analysis of the data observed on the same ordered occasion (time-to-time analysis). Hence, we assume that the permutation testing principle holds, i.e., that under the null hypothesis, where treatment does not induce differences with respect to levels, the individual response profiles are exchangeable with respect to groups.

Formalizing, let us refer to a problem in which we have C groups of size \(n_{j}\geq 2\), \(j\,=\,1,\ldots,C\), with \(n=\sum_{j}n_{j}\) and a univariate variable X is observed. Units belonging to the jth group are presumed to receive a treatment at the jth level. All units are observed at k fixed ordered occasions \(\tau _{1},\ldots,\tau _{k}\), where k is an integer. For simplicity, we refer to time occasions by using t to mean \(\tau _{t}\), \(t\,=\,1,\ldots,k\). Hence, for each unit, we observe the discrete or discretized profile of a stochastic process, and profiles related to different units are assumed to be stochastically independent. Thus, within the hypothesis that treatment levels have no effect on response distributions, profiles are exchangeable with respect to groups.

5.2 Modeling Repeated Measurements

Let us consider a univariate stochastic time model with additive effects. Extensions of the proposed solution to multivariate response profiles are generally straightforward, by analogy with those given for the one-way MANOVA layout.

Let us refer to a two-way layout of univariate observations \(X=\{X_{ji}(t)\), \(i=1,\ldots,n_{j}\), \(j=1,\ldots,C\), \(t=1,\ldots,k\}\) or alternatively, when effects due to time are not of primary interest, to a one-way layout of profiles \(X=\{X_{ji}\), \(i=1,\ldots,n_{j}\), \(j=1,\ldots,C\}\), where \(X_{ji}=\{X_{ji}(t)\), \(t=1,\ldots,k\}\) indicates the jith observed profile.

Consider the general additive response model:
$$ X_{ji}\left(t\right) =\mu +\eta _{j}\left(t\right) +\Delta _{ji}(t)+\sigma (\eta _{j}(t)) \cdot Z_{ji}(t)$$
\(i=1,\ldots,n_{j},~j=1,\ldots,C,~t=1,\ldots,k\), where μ is a population constant; the coefficients \(\eta _{j}(t)\) represent the main treatment effects, which may depend on time through any kind of function, but are independent of units; the quantities \(\Delta _{ji}(t)\) represent the so-called individual effects; and \(\sigma (\eta _{j}(t))\) are time-varying scale coefficients which may depend, through monotonic functions, on the main treatment effects \(\eta _{j}\), provided that the resulting cumulative distribution functions (CDFs) are pairwise-ordered so that they do not cross each other, as in \(X_{j}(t)\overset{d}{<}(\)or \(\overset{d}{>})\) \(X_{r}(t)\), \(t=1,\ldots,k\), and \(j\neq r=1,\ldots,C\). The \(Z_{ji}(t)\) are generally non-Gaussian error terms distributed as a stationary stochastic process with null mean and unknown distribution \(P_{\mathbf{Z}}\) (i.e., a generic white noise process). These error terms are assumed to be exchangeable with respect to units and treatment levels but, of course, not independent of time. When the \(\Delta _{ji}(t)\) are stochastic, we assume that they have null mean values and distributions which may depend on main effects, units, and treatment levels. Hence, the random effects \(\Delta _{ji}(t)\) are determinations of an unobservable stochastic process or, equivalently, of a k-dimensional variable \(\mathbf{\Delta}=\{\Delta (t),~t=1,\ldots,k\}\). In this context, we assume that \(\mathbf{\Delta}_{j}\sim \mathcal{D}_{k}\{\mathbf{0},\mathbf{\beta}(\mathbf{\eta}_{j})\}\), where \(\mathcal{D}_{k}\) is any unspecified distribution with null mean vector and unknown dispersion matrix \(\mathbf{\beta}\), indicating how unit effects vary with respect to the main effects \(\mathbf{\eta}_{j}=\{\eta _{j}(t),~t=1,\ldots,k\}\).
Regarding the dispersion matrix \(\mathbf{\beta}\), we assume that the resulting treatment effects are pairwise stochastically ordered, as in \(\Delta _{j}(t)\overset{d} {<}(\)or \(\overset{d}{>})\) \(\Delta _{r}(t)\), \(t=1,\ldots,k\), and \(j\neq r=1,\ldots,C\). Moreover, we assume that the underlying bivariate stochastic processes \(\{\Delta _{ji}(t),\) \(\sigma (\eta _{j}(t))\cdot Z_{ji}(t),t=1,\ldots,k\}\) of individual stochastic effects and error terms, in the null hypothesis, are exchangeable with respect to groups. This property is easily justified when subjects are randomized to treatments.
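To make the setting concrete, the following sketch simulates profiles from a simple special case of the additive response model above. The Gaussian individual effects and errors, the constant scale, and the helper name `simulate_profiles` are our illustrative assumptions; the model itself allows non-Gaussian, time-dependent components.

```python
import numpy as np

def simulate_profiles(n_j, k, mu=0.0, eta=None, sigma=1.0, seed=0):
    """Simulate X_ji(t) = mu + eta_j(t) + Delta_ji(t) + sigma * Z_ji(t)
    under a simple special case: i.i.d. standard normal individual
    effects Delta and errors Z, and a constant scale sigma."""
    rng = np.random.default_rng(seed)
    C = len(n_j)
    if eta is None:
        eta = np.zeros((C, k))  # no treatment effect: the null hypothesis
    X, groups = [], []
    for j in range(C):
        delta = rng.standard_normal((n_j[j], k))  # individual effects
        z = rng.standard_normal((n_j[j], k))      # error terms
        X.append(mu + eta[j] + delta + sigma * z)
        groups += [j] * n_j[j]
    return np.vstack(X), np.array(groups)
```

Passing a nonzero `eta` row for one group introduces a main treatment effect that may vary over the k occasions.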

This setting is consistent with a general form of dependent random effects fitting a very large number of processes that are useful in most practical situations. In particular, it may interpret a number of the so-called growth processes. Of course, when \(\beta =0\) with probability 1 for all t, the resulting model has fixed effects. When dispersion matrices Σ and \(\beta\) have no known simple structure, the underlying model may not be identifiable and, thus, no parametric inference is possible. Also, when \(k\geq n\), the problem cannot admit any parametric solution (see Chung and Fraser 1958 and Blair et al. 1994).

Among the many possible specifications of models for individual effects, one of these assumes that terms \(\Delta _{ji}(t)\) behave according to an AR(1) process:
$$ \Delta _{ji}(0)=0;~\Delta _{ji}(t)=\gamma (t)\cdot \Delta _{ji}(t-1)+\beta (\eta _{j}(t))\cdot W_{ji}(t),$$
\(i=1,\ldots,n_{j},~j=1,\ldots,C,~t=1,\ldots,k\), where \(W_{ji}(t)\) represent random contributions interpreting deviates of individual behavior; \(\gamma (t)\) are autoregressive parameters which are assumed to be independent of treatment levels and units, but not of time; and \(\beta (\eta _{j}(t)),\, t=1,\ldots,k\), are time-varying scale coefficients of the autoregressive parameters, which may depend on the main effects. By assumption, the terms \(W_{ji}(t)\) have null mean value, unspecified distributions, and are possibly time-dependent, so that they may behave as a stationary stochastic process.
A simplification of the previous model considers a regression-type form such as
$$ \Delta _{ji}(t)=\gamma _{j}(t)+\beta (t)\cdot W_{ji}(t),~i=1,\ldots,n_{j},j=1,\ldots,C,t=1,\ldots,k.$$
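The AR(1) scheme for the individual effects can be sketched as follows; constant \(\gamma\) and \(\beta\) and standard normal deviates \(W_{ji}(t)\) are simplifying assumptions of ours (the model allows both to vary with time and, for \(\beta\), with the main effects).

```python
import numpy as np

def simulate_ar1_effects(n, k, gamma=0.6, beta=1.0, seed=0):
    """Individual effects following the AR(1) scheme
    Delta_i(0) = 0;  Delta_i(t) = gamma * Delta_i(t-1) + beta * W_i(t),
    with standard normal deviates W_i(t) and constant gamma, beta."""
    rng = np.random.default_rng(seed)
    delta = np.zeros((n, k + 1))
    for t in range(1, k + 1):
        w = rng.standard_normal(n)                  # zero-mean W_i(t)
        delta[:, t] = gamma * delta[:, t - 1] + beta * w
    return delta[:, 1:]                             # drop the t = 0 start
```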
Of course, many other models of dependence errors might be taken into consideration, including situations where matrices \(\mathbf{\Sigma}\) and \(\mathbf{\beta}\) are both full.
The hypotheses we wish to test are
$$ H_{0}:\left\{\mathbf{X}_{1}\overset{d}{=}\ldots \overset{d} {=}\mathbf{X}_{C}\right\} =\left\{X_{1}(t)\overset{d}{=}\ldots \overset{d}{=}X_{C}(t),~t=1,\ldots,k\right\}$$
against \(H_{1}:\) \(\{\bigcup_{t}[H_{0t}\) is not true\(]\}\).

The global null hypothesis can be written with reference to the so-called time-to-time analysis, i.e., it can be decomposed into k subhypotheses according to time: \(H_{0}:\left\{\bigcap\limits_{t=1}^{k} \left[X_{1}(t)\overset{d}{=}\ldots \overset{d}{=}X_{C}(t)\right] \right\} =\left\{\bigcap\limits_{t=1}^{k}H_{0t}\right\}\) against \(H_{1}=\{\bigcup_{t}H_{1t}\}\). Note that \(H_{0}\) is true if and only if all the subhypotheses are jointly true, and the alternative is true if at least one of the k alternatives is true. By this decomposition, each subproblem is reduced to a one-way ANOVA and, from this point of view, the associated two-way ANOVA, in which effects due to time are not of interest, becomes equivalent to a one-way MANOVA.

Distributional assumptions imply that \(X=X_{1} \biguplus\ldots\biguplus X_{C}\) is a set of sufficient statistics for the problem under \(H_{0}\). The permutation testing principle can be applied to the observed time profiles because \(H_{0}=\{X_{1}\overset{d}{=}\ldots \overset{d}{=}X_{C}\}\) implies that the observed profiles are exchangeable with respect to treatment levels.

Thus, in the given conditions, let us consider the k partial tests \(T_{t}=\sum_{j}n_{j}\cdot (\bar{X}_{j}(t))^{2}\), where \(\bar{X}_{j}(t)=\sum_{i}X_{ji}(t)/n_{j}\), \(t=1,\ldots,k\), which are appropriate for the time-to-time subhypotheses \(H_{0t}\) against \(H_{1t}\). In order to compute the k p-values, we need the permutation distribution of \(\left(T_{1},{\ldots},T_{k}\right)\) under \(H_{0}\). We estimate this distribution by permuting the original profiles among the groups B times and computing at each permutation the statistics \(T_{t}^{\ast b}=\sum_{j}n_{j}\cdot (\bar{X}_{j}^{\ast b}(t))^{2}\), \(\bar{X}_{j}^{\ast b}(t)=\sum_{i}X_{ji}^{\ast b}(t)/n_{j}\), \(t=1,\ldots,k\), \(b=1,{\ldots},B\), where the symbol “*” means that the statistics are computed on a permutation of the data. From this null distribution, we can compute the p-values of the k partial tests as \(\frac{\#(T_{t}^{\ast}\geq T_{t})}{B}\), i.e., the proportion of times in which we observe a value of \(T_{t}^{\ast}\) greater than or equal to the value \(T_{t}\) observed on the original data. We can then achieve a global solution for \(H_{0}\) against \(H_{1}\) by combining all these partial tests. Of course, due to the complexity of the problem and to the unknown k-dimensional distribution of \((T_{1},\ldots,T_{k})\) (see Crowder and Hand 1990; Diggle et al. 2002), we are generally unable to evaluate all dependence relations among the partial tests directly from \(\mathbf{X}\).
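The Monte Carlo estimation of the k partial p-values just described can be sketched as follows; the helper name and the group-label encoding are ours.

```python
import numpy as np

def partial_pvalues(X, groups, B=500, seed=0):
    """Monte Carlo p-values of the k partial tests
    T_t = sum_j n_j * (mean of group j at time t)^2,
    estimated by permuting whole profiles (rows of X) among groups."""
    rng = np.random.default_rng(seed)
    labels = np.unique(groups)
    n_j = np.array([(groups == lab).sum() for lab in labels])

    def stat(g):
        means = np.array([X[g == lab].mean(axis=0) for lab in labels])
        return (n_j[:, None] * means ** 2).sum(axis=0)   # length-k vector

    T_obs = stat(groups)
    count = np.zeros(X.shape[1])
    for _ in range(B):
        count += stat(rng.permutation(groups)) >= T_obs  # T_t^{*b} >= T_t
    return count / B
```

Note that whole profiles are permuted, not single time points, so the within-unit dependence across the k occasions is preserved under every permutation.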
Therefore, this combination should be nonparametric and may be obtained through any combining function \(\psi \in C\), where C is a class of combining functions characterized by the following properties: (1) a combining function must be nonincreasing in each argument: \(\psi ({\ldots},\lambda _{t},{\ldots})\geq \psi ({\ldots},\lambda _{t}^{\prime},{\ldots})\) if \(\lambda _{t}<\lambda _{t}^{\prime}\), where \(\lambda _{t}\), \(t\in \left\{1,{\ldots},k\right\}\), is the p-value related to the tth partial hypothesis; it is also generally desirable that ψ is symmetric, i.e., invariant with respect to rearrangements of the entry arguments: \(\psi (\lambda _{u_{1}},{\ldots},\lambda _{u_{k}})=\psi (\lambda _{1},{\ldots},\lambda _{k})\), where \(\left(u_{1},{\ldots},u_{k}\right)\) is any permutation of \(\left(1,{\ldots},k\right)\); (2) every combining function must attain its supremum value \(\bar{\psi}\), possibly not finite, even when only one argument attains zero: \(\psi ({\ldots},\lambda _{t},{\ldots})\longrightarrow \bar{\psi}\) if \(\lambda _{t}\longrightarrow 0\), \(t\in \left\{1,{\ldots},k\right\}\); (3) \(\forall \alpha>0\), the critical value \(T_{\alpha}^{\prime \prime}\) of every ψ is assumed to be finite and strictly smaller than \(\bar{\psi}\): \(T_{\alpha}^{\prime \prime}<\bar{\psi}\). Some practical examples of combining functions are:
  • Fisher omnibus combining function based on the statistic \(\psi _{F}=-2\sum\limits_{t=1}^{k}\log \left(\lambda _{t}\right)\);

  • Liptak combining function based on the statistic \(\psi _{L}=\sum\limits_{t=1}^{k}\Phi ^{-1}\left(1-\lambda _{t}\right)\), where \(\Phi\) is the standard normal CDF;

  • Tippett combination function based on the statistic \(\psi _{T}=\max_{1\leq t\leq k}\left(1-\lambda _{t}\right)\).
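These three combining functions are straightforward to code; a minimal sketch (function names ours):

```python
import numpy as np
from statistics import NormalDist

def fisher_comb(lams):
    """Fisher omnibus: -2 * sum_t log(lambda_t)."""
    return -2.0 * float(np.sum(np.log(lams)))

def liptak_comb(lams):
    """Liptak: sum_t Phi^{-1}(1 - lambda_t), Phi the standard normal CDF."""
    inv = NormalDist().inv_cdf
    return sum(inv(1.0 - l) for l in lams)

def tippett_comb(lams):
    """Tippett: max_t (1 - lambda_t); driven by the smallest p-value."""
    return max(1.0 - l for l in lams)
```

All three are significant for large values; in practice, the significance of the combined statistic is assessed against its own permutation distribution, obtained from the same permutations used for the partial tests.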

Of course, when the underlying model is not identifiable, so that some or all of the coefficients cannot be estimated, this NPC becomes unavoidable. Moreover, when all observations come from only one type of variable (continuous, discrete, nominal, or ordered categorical), and thus partial tests are homogeneous, a direct combination of standardized partial tests, such as \(T_{t}^{\ast}=\sum_{j}n_{j}\cdot [\bar{X}_{j}^{\ast}(t)-\bar{X}_{\bullet}(t)]^{2}/\sum_{ji}[X_{ji}^{\ast}(t)-\bar{X}_{j}^{\ast}(t)]^{2}\), may be appropriate, especially when k is large. This may not be the case when observations are on variables of different types, e.g., some continuous and others categorical.

5.3 Analysis of Case–Control Designs

Let us consider a particular case of the problem in the previous section. Suppose we have C = 2 groups and we are interested in testing whether the first process is stochastically dominated by the second: \(\{X_{1}(t)\overset{d}{<}X_{2}(t),\) \(t=1,\ldots,k\}\). This kind of problem is known as a two-sample dominance problem. In such a case, referring to models with stochastic coefficients, we want to test the following hypothesis:
$$ H_{0}:\left\{\bigcap_{t=1}^{k}\left[X_{1}(t)\overset{d}{=}X_{2}(t)\right] \right\} =\left\{\bigcap\limits_{t=1}^{k}\mathbf{[}\eta _{1}(t)=\eta _{2}(t) \mathbf{]}\right\} =\left\{\bigcap\limits_{t=1}^{k}H_{0t}\right\}$$
against \(H_{1}:\{\bigcup_{t}[X_{1}(t)\overset{d}{<} X_{2}(t)]\}=\{\bigcup_{t}[\eta _{1}(t)<\eta _{2}(t)]\}=\{\bigcup_{t}H_{1t}\}\), where \(\eta _{j}(t),~j=1,2,\) represent the main treatment effects and may depend on time. Note that the stochastic dominance problem is represented by a suitable decomposition of the hypotheses. Observe that the alternative is now broken down into k one-sided (restricted) sub-alternatives. Hence, for each subhypothesis, a one-tailed partial test for the comparison of two locations should be considered.

The overall solution for this is now straightforward because according to the permutation principle, the exchangeability of individual profiles with respect to treatment levels is assumed in H 0. A set of permutation partial test statistics might be \(\{T_{t}^{\ast}=\bar{X}_{2}^{\ast}(t)\), \(t=1,\ldots,k\}\). Thus, we are able to estimate the distribution of \(\left(T_{1},{\ldots},T_{k}\right)\) so that we can compute the related partial p-values. These partial tests are marginally unbiased, exact, significant for large values, and consistent. Consequently, we can obtain the overall solution by NPC of partial tests.
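A minimal sketch of these one-sided partial tests (helper name ours; the NPC of the resulting partial p-values into a global one via a combining function is omitted here):

```python
import numpy as np

def dominance_pvalues(X1, X2, B=500, seed=0):
    """One-sided time-to-time permutation tests of X1(t) <d X2(t).
    Partial statistic T_t* = mean of (permuted) group 2 at time t;
    large values support the one-sided alternative."""
    rng = np.random.default_rng(seed)
    n1 = X1.shape[0]
    pooled = np.vstack([X1, X2])          # whole profiles are permuted
    T_obs = X2.mean(axis=0)
    count = np.zeros(pooled.shape[1])
    for _ in range(B):
        perm = rng.permutation(pooled)    # shuffles rows (unit profiles)
        count += perm[n1:].mean(axis=0) >= T_obs
    return count / B
```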

5.4 Testing for Repeated Measurements with Missing Data

Consider a problem with repeated measures, where data are grouped into \(C>2\) groups and some of the data are missing. We want to test whether the profiles depend on the treatment level.

Assuming that under the null hypothesis both observed and missing data are exchangeable with respect to the groups associated with treatment levels, such multivariate testing problems are solvable by the NPC of dependent permutation tests. Thus, the hypotheses are broken down into a set of subhypotheses, and the related partial tests are assumed to be marginally unbiased, significant for large values, and consistent. In this section, this NPC solution is also compared with two different parametric approaches to the problem of missing values: Hotelling’s \(T^{2}\) with deletion of units with at least one missing datum, and Hotelling’s \(T^{2}\) with data imputation by the EM algorithm (Dempster et al. 1977; Little and Rubin 1987). First of all, we distinguish two different situations: the first in which data are missing completely at random (MCAR) and the second in which data are missing not at random (MNAR).

Although some solutions presented in this chapter are exact, the most important of them are approximate because the permutation distributions of the test statistics concerned are not exactly invariant with respect to permutations of missing data, as we shall see. However, the approximations are quite accurate in all situations, provided that the number of effective data in all data permutations is not too small. To this end, we may remove from the permutation sample space, associated with the whole data set, all data permutations in which the actual sample sizes of really observed data are not sufficient for approximations. We must establish a kind of restriction on the permutation space, provided that this restriction does not imply biased effects on inferential conclusions.

In all kinds of problems, missing data are usually assumed to originate from an underlying random process, which may or may not be related to the observation process. Thus, within a parametric approach, in order to make valid inferences in the presence of missing data, this process must in general be properly specified. But, when we assume that the probability of a datum being missing does not depend on its unobserved value, so that the missing data are missing at random, then we may ignore this process and so need not specify it.

5.4.1 Data Missing Completely at Random

Let θ be the parameter regulating the distribution of the observable variable and let φ denote the missing data process; thus, the vector \((\theta,\phi)\) identifies the whole probability distribution of observed data within a family P of non-degenerate distributions. The ignorability of the missing data process depends on the method of inference and on three conditions which the data-generating process must satisfy.

According to Donald Rubin: “The missing data are missing at random (MAR) if for each possible value of the parameter φ, the conditional probability of the observed pattern of missing data given the missing data and the value of the observed data, is the same for all possible values of the missing data. The observed data are observed at random (OAR) if for each possible value of the missing data and the parameter φ, the conditional probability of the observed pattern of missing data given the missing data and the observed data, is the same for all possible values of the observed data. The parameter φ is distinct from θ if there are no a priori ties, via parametric space restrictions or prior distributions, between φ and θ.”

If the missing data are MAR and the observed data are OAR, the missing data are missing completely at random (MCAR). In this case, missingness does not depend on observed or unobserved values, and observed values may be considered as a random subsample of the complete data set. In these situations, therefore, it is appropriate to ignore the process that causes missing data when making inferences on θ.

5.4.2 Data Missing Not at Random

Let us think about sample surveys where it is very common to observe missing responses. These are situations in which circumstances behind nonresponses are varied and complex. Thus, the missing data might be missing not at random (MNAR). In order to make valid parametric inferences, the missing data process must be properly specified. Typically, in experimental situations this occurs when the treatment acts on the missing mechanism either on the missingness of a datum or on its observability. In general, it is very unlikely that a single model may correctly reflect all the implications of nonresponses in all instances. Thus, the analysis of MNAR missing data is much more complicated than that of MCAR data because inferences must be made by taking into consideration the data set as a whole and by specifying a proper model for each specific situation. In any case, the specification of a model which correctly represents the missing data process seems the only way to eliminate the inferential bias caused by nonresponses in a parametric framework.

In the literature, various models have been proposed, most of which concern cases in which nonresponses are confined to a single variable.

Let us present the permutation solution, considering a one-way MANOVA layout. Thus, the hypothesis to be tested is that of equality of \(C\geq 2\) V-dimensional distributions. To this end, consider C groups of exchangeable V-dimensional responses \(X_{j}=\{X_{ji}=(X_{hji},~h=1,\ldots,V)\), \(i=1,\ldots,n_{j}\}\), \(j=1,\ldots,C\), with distribution functions \(P_{j}\), \(X_{ji}\in R^{V}\), where \(n=\sum_{j}n_{j}\) is the total sample size. Some of the data are supposed to be missing. Formalizing, the null hypothesis is \(H_{0}:\{P_{1}=\ldots =P_{C}=P\}=\{X_{1}\overset{d}{=}\ldots \overset{d}{=} X_{C}\}\) against the alternative \(H_{1}:\{H_{0}\) is not true\(\}\).

Assume that under the null hypothesis, data can be considered exchangeable with respect to C groups. This requirement concerns both observed and missing data. Let us assume that the model for treatment effects is such that resulting CDFs satisfy the pairwise dominance condition, so that locations of suitable transformations \(\varphi _{h}\), \(h=1,\ldots,V\), of the data are useful for discrimination, where \(\varphi _{h}\) may be specific to the hth variable. This assumption leads us to consider sampling means of transformed data as proper indicators for treatment effects. The reason for this kind of statistical indicator, and consequently for this kind of assumption, is that in this situation we are able to derive an effective solution. Therefore, we assume that the analysis is based on the transformed data:
$$ \mathbf{Y=\{}Y_{hji}=\varphi _{h}(X_{hji}),~i=1,\ldots,n_{j},~j=1,\ldots,C,~h=1,\ldots,V\}.$$
Hence, consequent permutation partial tests should be based on proper functions of sampling totals \(S_{hj}^{\ast}=\sum_{i\leq n_{j}}Y_{hji}^{\ast}\), \(j=1,\ldots,C\), \(h=1,\ldots,V\).
Since for whatever reason some of the data are missing, we must also consider the associated inclusion indicator, which represents the observed configuration in the data set:
$$ O=\{O_{hji},~i=1,\ldots,n_{j},~j=1,\ldots,C,~h=1,\ldots,V\},$$
where \(O_{hji}=1\) if X hji has been observed and collected, otherwise \(O_{hji}=0\).

Hence, we can write the whole set of observed data as the pair of associated matrices \((Y,O)\), and we can also define the actual sample size of the really observed data in the jth group relative to the hth variable and the total actual sample size of the really observed data relative to the hth variable, respectively by \(\nu _{hj}=\sum_{i}O_{hji}\), \(j=1,\ldots,C,~\) \(h=1,\ldots,V\) and \(\nu _{h\mathbf{\bullet}}=\sum_{j}\nu _{hj}\), \(h=1,\ldots,V\).
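Computing \(\nu _{hj}\) and \(\nu _{h\bullet}\) from the inclusion indicator is immediate; a small sketch (helper name ours):

```python
import numpy as np

def actual_sample_sizes(O, groups):
    """nu_hj = sum_i O_hji and nu_h. = sum_j nu_hj from the inclusion
    indicator O (rows = units, columns = the V variables; 1 = observed)."""
    labels = np.unique(groups)
    nu = np.array([O[groups == lab].sum(axis=0) for lab in labels]).T
    return nu, nu.sum(axis=1)   # nu is (V, C); row sums give nu_h.
```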

Note that we may express the hypotheses of interest as
$$ H_{0}:\left\{(\mathbf{Y}_{1},\mathbf{O}_{1}\mathbf{)}\overset{d}{=}\ldots \overset{d}{=}(\mathbf{Y}_{C}\mathbf{,O}_{C}\mathbf{)}\right\}$$

against the alternative \(H_{1}:\{H_{0}\) is not true\(\}\).

The complexity of this testing problem is such that it is very difficult to find a single overall test statistic. This kind of problem may be tackled by means of the NPC of a set of dependent permutation tests. To this end, we observe that the null hypothesis may be equivalently written in the form

$$ H_{0}:\left\{\bigcap\limits_{h=1}^{V}\left[(Y_{h1},O_{h1})\overset{d}{=} \ldots \overset{d}{=}(Y_{hC},O_{hC})\right] \right\} =\left\{\bigcap\limits_{h=1}^{V}H_{0~h}\right\},$$

where, as usual, a suitable and meaningful breakdown of \(H_{0}\) is emphasized. Hence, the hypothesis \(H_{0}\) against \(H_{1}\) is broken down into V subhypotheses \(H_{0~h}\) against \(H_{1~h}\), \(h=1,\ldots,V\), in such a way that \(H_{0}\) is true if all the \(H_{0~h}\) are jointly true and \(H_{1}\) is true if at least one among the \(H_{1~h}\) is true, so that \(H_{1}=\bigcup_{h}H_{1~h}\).

Thus, to test \(H_{0}\) against \(H_{1}\), we consider a V-dimensional vector of real-valued test statistics \(\mathbf{T}=\{T_{1},\ldots,T_{V}\}\), the hth component of which is the univariate partial test for the hth subhypothesis \(H_{0~h}\) against \(H_{1~h}\). Without loss of generality, we assume that partial tests are non-degenerate, marginally unbiased, consistent, and significant for large values. Hence, the combined test is a function of V dependent partial tests. Of course, the combination must be nonparametric, particularly with regard to the underlying dependence relation structure, because in this setting only very rarely may the dependence structure among partial tests be effectively analyzed.

Let us start by considering an MNAR model for missing data, where it is assumed that, under the alternative, the symbolic treatment may influence missingness. In fact, the treatment may affect the distributions both of the variables Y and of the inclusion indicator O. Thus, in this setting, the null hypothesis has to take into consideration the joint distributional equality, in the C groups, of the missing data process giving rise to O and of the response variables Y conditional on O, i.e.,

$$ H_{0}:\left\{\left[\mathbf{O}_{1}\overset{d}{=}\ldots \overset{d}{=} \mathbf{O}_{C}\right] \bigcap \left[\left.\left(\mathbf{Y}_{1}\overset{d}{=} \ldots \overset{d}{=}\mathbf{Y}_{C}\right)\right\vert \mathbf{O}\right] \right\}.$$

Under the null hypothesis, the assumption of exchangeability of the n individual data vectors in \((\mathbf{Y},\mathbf{O})\) with respect to the C groups is satisfied, because we assume that there is no difference in distribution for the multivariate inclusion indicator variables \(\mathbf{O}_{j}\), \(j=1,\ldots,C\), and, conditionally on \(\mathbf{O}\), for the actually observed variables \(\mathbf{Y}\). As a consequence, it is not necessary to specify both the missing data process and the data distribution, provided that marginally unbiased permutation tests are available. In particular, it is not necessary to specify the dependence relation structure in \((\mathbf{Y},\mathbf{O})\) because it is nonparametrically processed. In this framework, the hypotheses may be broken down into the 2V sub-hypotheses

$$\begin{aligned} H_{0} &:&\left\{\left[\bigcap\nolimits_{h}\left(O_{h1} \overset{d}{=}\ldots \overset{d}{=}O_{hC}\right) \right] \bigcap \left[\bigcap\nolimits_{h}\left(Y_{h1}\overset{d}{=}\ldots \overset{d}{=} Y_{hC}\right) \left\vert \mathbf{O}\right. \right] \right\} \\ &=&\left\{H_{0}^{\mathbf{O}}\bigcap H_{0}^{\mathbf{Y|O}}\right\} =\left\{\left(\bigcap\nolimits_{h}H_{0~h}^{\mathbf{O}}\right) \bigcap \left(\bigcap\nolimits_{h}H_{0~h}^{\mathbf{Y|O}}\right) \right\}\end{aligned}$$

against

$$ H_{1}:\left\{\left(\bigcup\nolimits_{h}H_{1~h}^{\mathbf{O}}\right) \bigcup \left(\bigcup\nolimits_{h}H_{1~h}^{\mathbf{Y|O}}\right) \right\},$$
where \(H_{0~h}^{\mathbf{O}}\) indicates the equality in distribution among the C levels of the hth marginal component of the inclusion (missing) indicator process, and \(H_{0~h}^{\mathbf{Y|O}}\) indicates the equality in distribution of the hth component of \(\mathbf{Y}\), conditional on \(\mathbf{O}\).

For each of the V sub-hypotheses \(H_{0~h}^{\mathbf{O}}\), a permutation test statistic such as Pearson’s \(X^{2}\), or another suitable test for binary categorical data, is generally appropriate (for testing with categorical variables, see Cressie and Read 1988; Agresti 2002). For each of the V sub-hypotheses \(H_{0~h}^{\mathbf{Y|O}}\), \(\mathbf{O}\) is fixed at its observed value, so that we may proceed conditionally.

Let us consider now the situation where missing data are MCAR. Note that in this setting, we assume that O does not provide any discriminative information about treatment effects. Thus, we can proceed according to Donald Rubin, i.e., conditionally with respect to the observed inclusion indicator O and ignore \(H_{0}^{\mathbf{O}}\). The null hypothesis can be written as:

$$ H_{0}=H_{0}^{\mathbf{Y|O}}:\left\{\bigcap\nolimits_{h}\left[\left(Y_{h1} \overset{d}{=}\ldots \overset{d}{=}Y_{hC}\right) \left\vert \mathbf{O} \right. \right] \right\} =\left\{\bigcap\nolimits_{h}H_{0~h}^{\mathbf{Y|O}}\right\}$$

against

$$ H_{1}:\left\{\bigcup\nolimits_{h}H_{1~h}^{\mathbf{Y|O}}\right\}.$$

Of course, this problem is solved by NPC \(\psi _{\mathbf{Y}}\left(\lambda _{1}^{\mathbf{Y|O}},\ldots,\lambda _{V}^{\mathbf{Y|O}}\right)\).

In order to deal with this problem using a permutation strategy, it is necessary to consider the role of permuted inclusion indicators \(\mathbf{O}^{\ast}=\{O_{hji}^{\ast}\), \(i=1,\ldots,n_{j},\) \(j=1,\ldots,C\), \(h=1,\ldots,V\}\), especially with respect to numbers of missing data, in all points of the permutation sample space \((\mathcal{Y},\mathcal{O})_{/(\mathbf{Y,O)}}\) associated with the pair \((\mathbf{Y,O)}\).

Note that units with missing data participate in the permutation mechanism just like all other units, so that the permutation actual sample sizes of valid data for each component variable within each group, \(\nu _{hj}^{\ast}=\sum_{i}O_{hji}^{\ast}\), \(j=1,\ldots,C\), \(h=1,\ldots,V\), vary according to the random attribution of unit vectors, and of their missing data, to the C groups.

Thus, the key to a suitable solution is to use partial test statistics whose permutation distributions are at least approximately invariant with respect to the permutation actual sample sizes of valid data. Such tests are derived in what follows; they are also presented in Pesarin and Salmaso (2010).

Let us first consider an MCAR model. Let T be a vector of partial test statistics, based on functions of sampling totals of valid data, and \(F[t|(Y,O)]\), \(t\in R^{V}\), its multivariate permutation distribution. The set of possible permuted inclusion indicators \(O^{\ast}\) of O, obtained by the random attribution of data to the C groups, leads to a partition of the whole permutation sample space \((\mathcal{Y},\mathcal{O})_{/(\mathbf{Y,O})}\) into suborbits, each characterized by points exhibiting the same matrix of permutation actual sample sizes of valid data \(\{\nu _{hj}^{\ast},\) \(j=1,\ldots,C,\) \(h=1,\ldots,V\}\).

Under this partition, two points \((\mathbf{Y}_{1}^{\ast},\mathbf{O}_{1}^{\ast})\) and \((\mathbf{Y}_{2}^{\ast},\mathbf{O}_{2}^{\ast})\) lie on the same suborbit if the respective permutation actual sample sizes of valid data \(\nu _{1hj}^{\ast}=\sum_{i}O_{1hji}^{\ast}\) and \(\nu _{2hj}^{\ast}=\sum_{i}O_{2hji}^{\ast}\) are equal for every h and j, \(h=1,\ldots,V\), \(j=1,\ldots,C.\)

Of course, if the permutation subdistributions of the whole matrix of sampling totals \(\mathbf{\{}S_{hj}^{\ast}=\sum_{i}Y_{hji}^{\ast}\cdot O_{hji}^{\ast}\), \(j=1,\ldots,C\), \(h=1,\ldots,V\}\), where it is assumed that \(O_{hji}^{\ast}=0\) implies \(Y_{hji}^{\ast}\cdot O_{hji}^{\ast}=0\), are invariant with respect to the suborbits induced by \(\mathbf{O}^{\ast}\), then we may evaluate \(F[t|(Y,O)]\) for instance by a simple CMC procedure, i.e., by ignoring the partition into induced suborbits.

Thus, the equality

$$ F[\mathbf{t}|(\mathbf{Y,O)}]=F[\mathbf{t}|(\mathbf{Y,O}^{\ast}\mathbf{)]}$$

is satisfied for every \(t\in R^{V}\), for every specific permutation \(O^{\ast}\) of O, and for all data sets Y, due to the distributional invariance of the sampling totals \(S^{\ast}\) with respect to permuted inclusion indicators \(O^{\ast}\). Note that, for one-dimensional problems, this distributional invariance may become exact in MCAR models because, conditionally, we are allowed to ignore missingness by removing all unobserved units from the data set. But with V-dimensional (\(V>1\)) problems, this distributional invariance can be satisfied exactly only under some particular conditions, or for very large sample sizes.

Moreover, when problems involve multivariate paired data, so that the numbers of missing differences are permutationally invariant quantities, the related tests become exact. Otherwise, in general, we must look for approximate solutions.

Let \(\nu =\{\nu _{hj},\) \(j=1,\ldots,C,\) \(h=1,\ldots,V\}\) be the \(V\times C\) matrix of actual sample sizes of valid data in the observed inclusion indicator O, and consider test statistics based on permutation sampling totals of valid data \(\{S_{hj}^{\ast}=\sum_{i}Y_{hji}^{\ast}\cdot O_{hji}^{\ast}\), \(j=1,\ldots,C\), \(h=1,\ldots,V\}\). Note that the following distributional equality

$$ F[t|(Y,\nu)]=F[t|(Y,\nu ^{\ast})],$$

where \(\nu ^{\ast}=\{\nu _{hj}^{\ast},~j=1,\ldots,C,~h=1,\ldots,V\}\) represents the \(V\times C\) matrix of permutation actual sample sizes of valid data associated with \(O^{\ast}\), holds. In fact, the permutation distribution of the sampling total \(S_{hj}^{\ast}\), conditional on the whole data set \((Y,O)\) considered as a finite population, depends essentially on the number \(\nu _{hj}^{\ast}\) of summands. Hence, we have to find test statistics whose permutation null subdistributions are invariant with respect to \(\nu ^{\ast}\) and for all Y.

In general, this condition is exactly satisfied in very few situations, so that we must consider an approximate solution. Thus, we must look for statistics T whose means and variances are invariant with respect to the suborbits induced by \(O^{\ast}\) on the permutation sample space \((\mathcal{Y},\mathcal{O})_{/(\mathbf{Y,O})}\). Let us suppose, without loss of generality, that we have a univariate variable Y, so that there is only one test statistic T. Considering permutation tests based on univariate sampling totals of valid data, \(S_{j}^{\ast}=\sum_{i}Y_{ji}^{\ast}\cdot O_{ji}^{\ast},\) \(j=1,\ldots,C\), the overall total \(S=\sum\nolimits_{j}S_{j}\), which is assumed to be nonnull, is permutationally invariant because, in \((\mathcal{Y},\mathcal{O})_{/(\mathbf{Y,O})}\), the equation

$$ S=\sum\nolimits_{ji}Y_{ji}\cdot O_{ji}=\sum\nolimits_{j}S_{j}^{\ast}$$

is always satisfied.

Let us now consider the two-sample case (C = 2) and assume that the test statistic for \(H_{0}^{\mathbf{Y|O}}\) against \(H_{1}^{\mathbf{Y|O}}\) is a linear combination of \(S_{1}^{\ast}\) and \(S_{2}^{\ast}\). Thus, the test is expressed in the form

$$ T^{\ast}(a^{\ast},b^{\ast}|\mathbf{\nu}^{\ast})=a^{\ast}\cdot S_{1}^{\ast}-b^{\ast}\cdot S_{2}^{\ast},$$

where \(a^{\ast}\) and \(b^{\ast}\) are two coefficients which are independent of the actually observed data \(\mathbf{Y}\) but which may be permutationally noninvariant. These coefficients must be determined assuming that, in the null hypothesis, the variance \(\mathbb{V[}T^{\ast}(a^{\ast},b^{\ast})|\mathbf{\nu}^{\ast}]=\zeta ^{2}\) is constant, in the sense that it is independent of the permutation of actual sample sizes \(\nu _{j}^{\ast}\), \(j=1,2\), and that the mean values should identically satisfy the condition \(\mathbb{E[}T^{\ast}(a^{\ast},b^{\ast})|\mathbf{\nu}^{\ast}]=0\).

In accordance with the technique of without-replacement random sampling from \(\mathbf{(Y,O)}\) which, due to conditioning, is assumed to play the role of a finite population, we can write the following set of equations:
  • \(\nu _{1}^{\ast}+\nu _{2}^{\ast}=\nu,\)

  • \(S_{1}^{\ast}+S_{2}^{\ast}=S,\)

  • \(\mathbb{E(}S_{j}^{\ast})=S\cdot \nu _{j}^{\ast}/\nu, j=1,2,\)

  • \(\mathbb{V(}S_{j}^{\ast})=\sigma ^{2}\cdot \nu _{j}^{\ast}(\nu -\nu _{j}^{\ast})/(\nu -1)=V(\mathbf{\nu}^{\ast}), j=1,2,\)

where V is a positive function, and \(\nu =\nu _{1}+\nu _{2}\), \(S=S_{1}+S_{2}\) and \(\sigma ^{2}\) = \(\sum_{ji}(Y_{ji}-S/\nu)^{2}\cdot O_{ji}/\nu\) are permutationally invariant nonnull quantities. Thus, for any given pair of positive permutation actual sample sizes \((\nu _{1}^{\ast},\nu _{2}^{\ast})\), the two permutation sampling totals \(S_{1}^{\ast}\) and \(S_{2}^{\ast}\) have the same variance and their correlation coefficient is \(\rho (S_{1}^{\ast},S_{2}^{\ast})=-1\), because their sum S is a permutation invariant quantity. Hence, we may write:
  • \(\mathbb{E[}T^{\ast}(a^{\ast},b^{\ast})]=a^{\ast}\cdot S\cdot \nu _{1}^{\ast}/\nu -b^{\ast}\cdot S\cdot \nu _{2}^{\ast}/\nu =0,\)

  • \(\mathbb{V[}T^{\ast}(a^{\ast},b^{\ast})]=a^{\ast 2}V(\mathbf{\nu} ^{\ast})+2a^{\ast}b^{\ast}V(\mathbf{\nu}^{\ast})+b^{\ast 2}V(\mathbf{\nu}^{\ast})=(a^{\ast}+b^{\ast})^{2}V\mathbf{(\nu}^{\ast}).\)

The solutions to these equations are \(a^{\ast}=(\nu _{2}^{\ast}/\nu _{1}^{\ast})^{1/2}\) and \(b^{\ast}=(\nu _{1}^{\ast}/\nu _{2}^{\ast})^{1/2}\), ignoring an inessential positive coefficient.
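Substituting these coefficients back into the two conditions confirms the required invariance (a worked check added here for clarity):

```latex
\begin{aligned}
\mathbb{E}[T^{\ast}] &= a^{\ast}\,\frac{S\,\nu_{1}^{\ast}}{\nu}
  - b^{\ast}\,\frac{S\,\nu_{2}^{\ast}}{\nu}
  = \frac{S}{\nu}\left(\sqrt{\nu_{1}^{\ast}\nu_{2}^{\ast}}
  - \sqrt{\nu_{1}^{\ast}\nu_{2}^{\ast}}\right) = 0, \\[4pt]
\mathbb{V}[T^{\ast}] &= (a^{\ast}+b^{\ast})^{2}\,V(\boldsymbol{\nu}^{\ast})
  = \frac{(\nu_{1}^{\ast}+\nu_{2}^{\ast})^{2}}{\nu_{1}^{\ast}\nu_{2}^{\ast}}
    \cdot \sigma^{2}\,\frac{\nu_{1}^{\ast}\nu_{2}^{\ast}}{\nu-1}
  = \frac{\sigma^{2}\nu^{2}}{\nu-1},
\end{aligned}
```

so that both mean and variance are free of \((\nu_{1}^{\ast},\nu_{2}^{\ast})\), as required.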

Hence, for C = 2 and V = 1, the test statistic, the sub-distributions of which are approximately invariant with respect to permutation of actual sample sizes of valid data because they are permutationally invariant in mean value and variance, takes the form

$$ T^{\ast}=S_{1}^{\ast}\cdot (\nu _{2}^{\ast}/\nu _{1}^{\ast})^{1/2}-S_{2}^{\ast}\cdot (\nu _{1}^{\ast}/\nu _{2}^{\ast})^{1/2}.$$

If there are no missing values, so that \(\nu _{j}^{\ast}=n_{j}\), \(j=1,2\), the latter test is permutationally equivalent to the standard two-sample permutation test for comparison of locations \(T^{\ast}\approx \sum_{i}Y_{1i}^{\ast}\).
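For concreteness, the two-sample statistic \(T^{\ast}\) and its CMC p-value can be sketched as follows (an illustrative Python sketch with hypothetical names, missing values coded as NaN; not the authors' software):

```python
import numpy as np

def perm_test_missing(y1, y2, n_perm=2000, seed=0):
    """CMC permutation test for two samples with MCAR missing data, using
    T* = S1* (nu2*/nu1*)^{1/2} - S2* (nu1*/nu2*)^{1/2}."""
    rng = np.random.default_rng(seed)
    y = np.concatenate([y1, y2])
    o = ~np.isnan(y)                 # inclusion indicator O
    yv = np.where(o, y, 0.0)         # Y.O, with O = 0 forcing Y.O = 0
    n1, n = len(y1), len(y)

    def stat(idx):
        s1, s2 = yv[idx[:n1]].sum(), yv[idx[n1:]].sum()
        v1, v2 = o[idx[:n1]].sum(), o[idx[n1:]].sum()
        if v1 == 0 or v2 == 0:       # degenerate permutation: no valid data
            return 0.0
        return s1 * np.sqrt(v2 / v1) - s2 * np.sqrt(v1 / v2)

    t_obs = stat(np.arange(n))
    t_perm = np.array([stat(rng.permutation(n)) for _ in range(n_perm)])
    # two-sided p-value, observed point included in the permutation set
    return (np.sum(np.abs(t_perm) >= abs(t_obs)) + 1.0) / (n_perm + 1.0)
```

Since \(\mathbb{E}[T^{\ast}]=0\) on every suborbit, a two-sided p-value based on \(|T^{\ast}|\) is meaningful even though the permutation actual sample sizes vary from permutation to permutation.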

In the case of \(C>2\) and again with V = 1, one approximate solution is

$$ T_{C}^{\ast}=\sum\limits_{j=1}^{C}\left\{S_{j}^{\ast}\cdot \left[(\nu -\nu _{j}^{\ast})/\nu _{j}^{\ast}\right]^{1/2}-(S-S_{j}^{\ast})\cdot \left[\nu _{j}^{\ast}/(\nu -\nu _{j}^{\ast})\right]^{1/2}\right\} ^{2}.$$

This test statistic may be seen as a direct combination of C dependent partial tests, each obtained by a permutation comparison of the jth group with all other C − 1 groups pooled together. Moreover, in the case of complete data, this test is equivalent to the permutation test for a standard one-way ANOVA layout, provided that sample sizes are balanced, \(n_{j}=m\), \(j=1,\ldots,C,\) whereas in the unbalanced case the two solutions, although not coincident, are very close to each other.
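The statistic \(T_{C}^{\ast}\) itself is straightforward to compute from group totals and valid-data counts (a univariate Python sketch with a hypothetical function name; missing values coded as NaN, and each group assumed to contain at least one valid observation):

```python
import numpy as np

def t_c_statistic(groups):
    """T_C* = sum_j { S_j [(nu - nu_j)/nu_j]^{1/2}
                     - (S - S_j) [nu_j/(nu - nu_j)]^{1/2} }^2
    for C groups of univariate data with NaN marking missing values."""
    S_j = np.array([np.nansum(g) for g in groups])        # valid-data totals
    nu_j = np.array([np.sum(~np.isnan(g)) for g in groups])  # valid counts
    S, nu = S_j.sum(), nu_j.sum()
    # assumes 0 < nu_j < nu for every group, otherwise a term degenerates
    terms = (S_j * np.sqrt((nu - nu_j) / nu_j)
             - (S - S_j) * np.sqrt(nu_j / (nu - nu_j)))
    return float(np.sum(terms ** 2))
```

Its permutation distribution would be obtained, as above, by recomputing the statistic over random reassignments of the unit vectors (with their missing data) to the groups.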

One more solution may be obtained by the direct NPC of all pairwise comparisons:

$$ T_{2~C}^{\ast}=\sum\nolimits_{r<j}\left(T_{rj}^{\ast}\right) ^{2},$$

where \(T_{rj}^{\ast}=S_{r}^{\ast}\cdot (\nu _{j}^{\ast}/\nu _{r}^{\ast})^{1/2}-S_{j}^{\ast}\cdot (\nu _{r}^{\ast}/\nu _{j}^{\ast})^{1/2},\) \(1\leq r<j\leq C\).
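Similarly, the pairwise-combination statistic can be sketched as (same hypothetical conventions as above):

```python
import numpy as np
from itertools import combinations

def t_2c_statistic(groups):
    """T_{2C}* as the sum of squared pairwise statistics
    T_rj* = S_r (nu_j/nu_r)^{1/2} - S_j (nu_r/nu_j)^{1/2},
    univariate sketch with NaN coding for missing values."""
    S = np.array([np.nansum(g) for g in groups])
    nu = np.array([np.sum(~np.isnan(g)) for g in groups])  # assumed positive
    return float(sum(
        (S[r] * np.sqrt(nu[j] / nu[r]) - S[j] * np.sqrt(nu[r] / nu[j])) ** 2
        for r, j in combinations(range(len(groups)), 2)))
```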

Of course, if \(V>1\), a nonparametric combination results. Hence, to test \(H_{0}:\{\bigcap\nolimits_{h}H_{0~h}^{\mathbf{Y|O}}\}\) against \(H_{1}:\{\bigcup\nolimits_{h}H_{1~h}^{\mathbf{Y|O}}\},\) the solution becomes \(T^{\prime \prime}=\psi (\lambda _{1},\ldots,\lambda _{V})\), where ψ is any member of the class \(\mathcal{C}\) and \(\lambda _{h}\) is the partial p-value of either

$$ T_{Ch}^{\ast}=\sum\limits_{j=1}^{C}\left\{S_{hj}^{\ast}\cdot \left(\frac{\nu _{h}-\nu _{hj}^{\ast}}{\nu _{hj}^{\ast}}\right) ^{1/2}-(S_{h}-S_{hj}^{\ast})\cdot \left(\frac{\nu _{hj}^{\ast}}{\nu _{h}-\nu _{hj}^{\ast}}\right) ^{1/2}\right\} ^{2},$$

or

$$ T_{2Ch}^{\ast}=\sum\nolimits_{r<j}\left(S_{hr}^{\ast}\cdot (\nu _{hj}^{\ast}/\nu _{hr}^{\ast})^{1/2}-S_{hj}^{\ast}\cdot (\nu _{hr}^{\ast}/\nu _{hj}^{\ast})^{1/2}\right) ^{2},$$

each relative to the hth component variable, \(h=1,\ldots,V\).

For MNAR models, again in a nonparametric way, we must also combine the V test statistics on the components of the inclusion indicator \(\mathbf{O}\), provided that all partial tests are marginally unbiased (see Sect.  4.2.1). More specifically, to test \(H_{0}:\{[\bigcap\nolimits_{h}H_{0~h}^{\mathbf{O}}]\bigcap [\bigcap\nolimits_{h}H_{0~h}^{\mathbf{Y|O}}]\}\) against \(H_{1}:\{[\bigcup \nolimits_{h}H_{1~h}^{\mathbf{O}}]\bigcup [\bigcup\nolimits_{h}H_{1~h}^{\mathbf{Y|O}}]\}\) we must now combine V tests \(T_{h}^{\ast \mathbf{O}}\) and V tests \(T_{h}^{\ast \mathbf{Y|O}}\), \(h=1,\ldots,V\). Hence (with obvious notation)

$$ T^{\prime \prime}=\psi (\lambda _{1}^{\mathbf{O}},\ldots,\lambda _{V}^{\mathbf{O}};\lambda _{1}^{\mathbf{Y|O}},\ldots,\lambda _{V}^{\mathbf{Y|O}}).$$

For each of the V subhypotheses \(H_{0~h}^{\mathbf{O}}\) against \(H_{1~h}^{\mathbf{O}}\), a permutation statistic such as Pearson’s chi-square or any other suitable test statistic for proper testing of categorical data may be used (for instance, when C = 2 and restricted alternatives are of interest, Fisher’s exact probability test may be appropriate). This combined permutation test has good general asymptotic properties. In particular, under very mild conditions, if best univariate partial tests are used, then the combined test is asymptotically best in the same sense.
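The full two-phase NPC algorithm, here with Fisher's combining function \(\psi = -2\sum_{h}\log \lambda_{h}\), can be sketched for complete data as follows (an illustrative Python sketch; function name and data layout are hypothetical, and the observed point is included in the permutation set, which is slightly conservative):

```python
import numpy as np

def npc_fisher(Y1, Y2, n_perm=2000, seed=0):
    """Nonparametric combination of V partial two-sample permutation tests
    via Fisher's psi; Y1, Y2 are (n_j, V) arrays of complete data."""
    rng = np.random.default_rng(seed)
    Y = np.vstack([Y1, Y2])
    n1, n = len(Y1), len(Y1) + len(Y2)
    B = n_perm
    # Phase 1: partial statistics (per-variable group-1 sums), observed in row 0
    T = np.empty((B + 1, Y.shape[1]))
    T[0] = Y[:n1].sum(axis=0)
    for b in range(1, B + 1):
        T[b] = Y[rng.permutation(n)[:n1]].sum(axis=0)
    T = np.abs(T - T.mean(axis=0))           # two-sided partial statistics
    # Phase 2: transform every permutation into partial significances lambda_h
    lam = np.stack([(col[None, :] >= col[:, None]).mean(axis=1)
                    for col in T.T], axis=1)
    # Phase 3: Fisher combination T'' = -2 sum_h log(lambda_h), global p-value
    T2 = -2.0 * np.log(lam).sum(axis=1)
    return float((T2 >= T2[0]).mean())
```

The same skeleton applies to the missing-data case by replacing the group sums with the rescaled valid-data totals of the previous section, and to MNAR models by appending the V inclusion-indicator partial tests before combining.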

5.5 Botulinum Data

In this section, we consider a real case study, related to a preliminary double-blind, placebo-controlled, randomized clinical trial with a 6-month follow-up period. The purpose of this trial is to evaluate the effectiveness of type A botulinum toxin to treat myofascial pain symptoms and to reduce muscle hyperactivity in bruxers. To this end, 20 patients (10 males, 10 females; aged between 25 and 45) with a clinical diagnosis of bruxism and with myofascial pain of the masticatory muscles were enrolled. These patients were randomly divided into two groups of 10 patients. One group received botulinum toxin type A (BTX-A) injections (treated group) and the other group was treated with saline placebo injections (control group). Several clinical variables (called "end points" in the medical literature) were assessed at baseline and at the 1-week, 1-month, and 6-month follow-up appointments, along with electromyographic (EMG) recordings of muscle activity in different conditions. The clinical end points are as follows:
  • pain at rest (DR), at phoning (DF), and at chewing (DM), assessed by means of a visual analog scale (VAS) from 0 to 10, with the extremes being “no pain” and “pain as bad as the patient has ever experienced” respectively;

  • mastication efficiency (CM), assessed by a VAS from 0 to 10, the extremes of which were “eating only semiliquid” and “eating solid hard food”;

  • maximum nonassisted (Mas) and assisted (Maf) mouth opening (in millimeters), protrusive (Mp), and laterotrusive left (Mll) and right (Mlr) movements (in millimeters);

  • functional limitation (LF) during usual jaw movements (0, absent; 1, slight; 2, moderate; 3, intense; 4, severe).

At the same time as the clinical evaluations, all patients underwent EMG recordings of left and right anterior and posterior temporalis muscles at rest (LTA, RTA, LTP, RTP, respectively) and left and right masseter muscles at rest (LMM, RMM); left and right anterior temporalis muscles during maximum voluntary clenching (LTA11, RTA11) and during clenching on cotton rolls (LTA11c, RTA11c); masseter muscles during maximum voluntary clenching (LMM11, RMM11) and during clenching on cotton rolls (LMM11c, RMM11c).
Hence, we are in the presence of a multivariate problem with repeated measures and missing data. In particular, for each of the n = 20 \((n_{1}=n_{2}=10)\) units in C = 2 experimental situations (the two levels of the treatment), a V-dimensional nondegenerate variable (V = 24) is observed on k = 4 time occasions. Note that in this longitudinal study the number of observed variables across time points is much larger than the number of subjects \((V\cdot k\gg n)\), so that parametric tests are not available. Furthermore, since all variables may be informative for differentiating the two groups, the NPC approach properly applies to these data. Classic parametric tests, or even rank tests, may fail in such situations to account for the dependence structure across variables and time points.
The whole data set is denoted by:
$$\begin{aligned} \mathbf{X} &=&\{X_{hji}(t),\ t=1,\ldots,k,\ i=1,\ldots,n_{j},\ j=1,2,\ h=1,\ldots,V\} \\ &=&\{\mathbf{X}_{hji},\ i=1,\ldots,n_{j},\ j=1,2,\ h=1,\ldots,V\},\end{aligned}$$

where \(\mathbf{X}_{hji}=\{X_{hji}(t),\) \(t=1,\ldots,k\}.\)

In order to take account of the different baseline observations, assumed to play the role of covariates, the k − 1 V-dimensional differences \(D_{hji}(t)=X_{hji}(1)-X_{hji}(t),\) \(t=2,\ldots,k,\) \(i=1,\ldots,n_{j},\) \(j=1,2,\) \(h=1,\ldots,V,\) are considered in the analysis. Hence the hypothesis testing problem related to the hth variable may be formalized as

$$ H_{0~h}:\left\{\bigcap_{t=2}^{k}\left[D_{h1}(t)\overset{d}{=}D_{h2}(t) \right] \right\} =\left\{\bigcap\limits_{t=2}^{k}H_{0ht}\right\},\quad h=1,\ldots,V,$$

against the alternative:

$$ H_{1~h}:\left\{\bigcup\limits_{t=2}^{k}H_{1ht}\right\},\quad h=1,\ldots,V,$$

where \(H_{1ht}:D_{h1}(t)\overset{d}{>}D_{h2}(t)\) or \(D_{h1}(t)\overset{d}{<}D_{h2}(t)\) according to which kind of stochastic dominance is of interest for the hth variable. The alternative hypothesis is that patients treated with the botulinum toxin have lower values than those treated with the placebo (i.e., the differences between baseline and follow-up values tend to increase, for which the \(\overset{d}{>}\) dominance is appropriate), except for the variables ME, Mas, Maf, Mp, Mll, Mlr, E, and T, for which the placebo group is expected to have lower values than the toxin group, so that the \(\overset{d}{<}\) dominance is appropriate.
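Each directional partial test \(H_{0ht}\) versus \(H_{1ht}\) can be sketched as a one-sided permutation comparison of the difference variables (an illustrative complete-data Python sketch; the function name and interface are hypothetical):

```python
import numpy as np

def directional_p(d1, d2, direction=">", n_perm=2000, seed=0):
    """One-sided permutation p-value for D_{h1}(t) vs D_{h2}(t).
    direction '>' tests whether the group-1 differences are stochastically
    larger; '<' tests the opposite dominance."""
    rng = np.random.default_rng(seed)
    d = np.concatenate([d1, d2])
    n1, n = len(d1), len(d1) + len(d2)
    sign = 1.0 if direction == ">" else -1.0
    # under permutation of all units the group-1 mean is equivalent, as a test
    # statistic, to the difference of group means (the overall total is fixed)
    t_obs = sign * d[:n1].mean()
    t_perm = np.array([sign * d[rng.permutation(n)[:n1]].mean()
                       for _ in range(n_perm)])
    return (np.sum(t_perm >= t_obs) + 1.0) / (n_perm + 1.0)
```

The \(V\times (k-1)\) partial p-values obtained this way would then be combined with a combining function ψ, as in the previous sections, to test \(H_{0~h}\) and the global null hypothesis.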

References

  1. Agresti A. Categorical data analysis. Hoboken: Wiley; 2002.
  2. Blair RC, Higgins JJ, Karniski W, Kromrey JD. A study of multivariate permutation tests which may replace Hotelling's T2 test in prescribed circumstances. Multivar Behav Res. 1994;29:141–63.
  3. Chung JH, Fraser DAS. Randomization tests for a multivariate two-sample problem. J Am Stat Assoc. 1958;53:729–35.
  4. Cressie NAC, Read TRC. Goodness of fit statistics for discrete multivariate data. New York: Springer; 1988.
  5. Crowder MJ, Hand DJ. Analysis of repeated measures. London: Chapman & Hall; 1990.
  6. Diggle PJ, Liang KY, Zeger SL. Analysis of longitudinal data. Oxford: Oxford University Press; 2002.
  7. Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc B. 1977;39:1–38.
  8. Lehmann EL. Parametric versus nonparametrics: two alternative methodologies. J Nonparametr Stat. 2009;21:397–405.
  9. Little RJA, Rubin DB. Statistical analysis with missing data. New York: Wiley; 1987.
  10. Pesarin F, Salmaso L. Permutation tests for complex data: theory, applications and software. Chichester: Wiley; 2010.

Copyright information

© The Author(s) 2014

Authors and Affiliations

  • Rosa Arboretti, Eleonora Carrozzo, and Luigi Salmaso: University of Padova, Padova, Italy
