A formula for hidden regular variation behavior for symmetric stable distributions

We develop a formula for the power-law decay of various sets for symmetric stable random vectors in terms of how many vectors from the support of the corresponding spectral measure are needed to enter the set. One sees different decay rates in "different directions", illustrating the phenomenon of hidden regular variation. We give several examples and obtain quite varied behavior, including sets which do not have exact power-law decay.

1 Introduction

For any integer k ≥ 1 and any E as above, letting C_α be a constant defined in Section 2, we define the quantities I_1(E, k, α), I_2(E, k, α) and I_3(E, k, α). We let I_δ(E, k, α) denote the expression inside the last limit in (3).
Theorem 1.1. For any X, E and k as above, the chain of bounds in (4) holds.

Remarks 1.2.
• Since 0 ∉ Ē, for small δ > 0 and s_1 > 0 we have that s_1 S^{n−1} ∩ E_{δ,+} = ∅. This implies in particular that the integrand in (3) is equal to zero for small δ and s_1, removing the singularity at s_1 = 0, and hence I_3(E, 1, α) is always finite.
• Taking E to be the set {x ∈ R^n : min_i x_i > 1} and k = 1, one obtains Theorem 4.4.1 in [11] in the symmetric case (see equation (4.4.2) in [11]).
• If we let E := Cone(A) := {x ∈ R^n : ||x||_2 > 1 and x/||x||_2 ∈ A} for some A ⊆ S^{n−1} with Λ(∂A) = 0, and then apply Theorem 1.1, with k = 1, to both E and to the complement of the unit ball, we recover Corollary 6.20 in [1] in the symmetric case, stating that

lim_{h→∞} P(X ∈ Cone(A), ||X||_2 > h) / P(||X||_2 > h) = Λ(A)/Λ(S^{n−1}).
• When the spectral measure Λ of X is finitely supported, some asymptotic behavior of the corresponding probability density function f (x) in different directions is obtained in [7]. However, since the convergence in this case is not known to be uniform, this cannot be used to get a version of Theorem 1.1 for finitely supported Λ.
• Our motivation for looking at Theorem 1.1 and its consequences (see Section 4) was to understand which threshold stable vectors can be obtained as divide and color processes in the sense of [12]. These applications, as well as a study of which threshold Gaussian vectors can be obtained as divide and color processes, will be carried out in [5].
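Before turning to the background, here is a quick numerical illustration (ours, not from the paper) of the "different decay rates in different directions" phenomenon. For a vector with two independent standard Cauchy components (α = 1, where the tail constant C_α equals 2/π), the set {x : max_i x_i > 1} is reached by one large stable summand while {x : min_i x_i > 1} needs two, and the exact Cauchy distribution function makes the resulting h^{−α} versus h^{−2α} decay directly visible:

```python
import math

def cauchy_tail(h):
    """P(S > h) for S standard Cauchy (alpha = 1, scale 1)."""
    return 0.5 - math.atan(h) / math.pi

# X = (S1, S2) with independent standard Cauchy components.
# {max_i x_i > 1}: one large component suffices -> decay h^{-alpha}.
# {min_i x_i > 1}: both components must be large -> decay h^{-2 alpha}.
for h in (1e3, 1e4, 1e5):
    p_max = 1.0 - (1.0 - cauchy_tail(h)) ** 2   # P(max(S1, S2) > h)
    p_min = cauchy_tail(h) ** 2                  # P(min(S1, S2) > h)
    print(h, h * p_max, h ** 2 * p_min)

# h   * P(max > h) -> 2/pi   ≈ 0.6366  (one vector needed)
# h^2 * P(min > h) -> 1/pi^2 ≈ 0.1013  (two vectors needed)
```

The two rescaled probabilities converge to different positive constants, so the set that is "hidden" behind the dominant h^{−α} tail still has an exact power-law decay, just with the faster rate h^{−2α}.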

2 Background
We now give some relevant definitions. These will be very brief as we assume the reader is familiar with the basics of stable vectors. For a more thorough introduction to stable random vectors, we refer the reader to [11].
Definition 2.1. A random vector X := (X_i)_{1≤i≤n} in R^n has a symmetric stable distribution if X is symmetric (invariant under x → −x) and if for all k ≥ 1 there exists a_k > 0 so that if X^{(1)}, ..., X^{(k)} are k i.i.d. copies of X, then

X^{(1)} + X^{(2)} + ... + X^{(k)} =_d a_k X.

It is well known that for any symmetric stable vector X there exists α ∈ (0, 2], called the stability index, so that a_k = k^{1/α} for all k ≥ 1. The stability index α = 2 corresponds to Gaussian random vectors.

If n = 1, then besides α there is only one parameter, the scale parameter σ, and in this case the characteristic function φ_X(θ) is given by

φ_X(θ) = exp(−σ^α |θ|^α).

(When α = 2, σ corresponds to the standard deviation divided by √2, an irrelevant scaling.) When σ = 1, we denote this distribution by S_α.

For stable vectors, the picture is somewhat more complicated. A random vector X in R^n has a symmetric stable distribution with stability index α if and only if its characteristic function φ_X(θ) has the form

φ_X(θ) = exp(−∫_{S^{n−1}} |⟨θ, s⟩|^α Λ(ds))    (6)

for some finite measure Λ on the unit sphere S^{n−1} which is invariant under x → −x. Λ is called the spectral measure of X. If (6) holds for some α and Λ, we write X ∼ S_α(Λ). For fixed α ∈ (0, 2), different Λ's yield different distributions; this is not true for α = 2.

When S_1, S_2, ..., S_m are i.i.d. random variables with distribution S_α, S := (S_1, ..., S_m), and A is an n × m matrix, then the vector X := (X_1, ..., X_n) defined by X := AS is a symmetric α-stable random vector. To describe the spectral measure of X, consider the columns of A as elements of R^n, denoted by ŷ_1, ..., ŷ_m. Then Λ is obtained by placing, for each i ∈ [m], a mass of weight ||ŷ_i||_2^α / 2 at each of ±ŷ_i/||ŷ_i||_2. See p. 69 in [11].
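To make the X = AS construction concrete, the following simulation sketch (ours, not from the paper) samples S_α by the Chambers–Mallows–Stuck method, under the parameterization in which S_α has characteristic function exp(−|θ|^α); the function names are our own, and the check on the empirical characteristic function is only a Monte Carlo sanity test:

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for S_alpha (symmetric, scale 1,
    characteristic function exp(-|theta|^alpha))."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    if alpha == 1.0:
        return np.tan(U)  # alpha = 1 is the standard Cauchy case
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def stable_vector(A, alpha, size, rng):
    """X = A S with S i.i.d. S_alpha: a symmetric alpha-stable vector whose
    spectral measure puts mass ||y_i||_2^alpha / 2 at +-y_i/||y_i||_2
    for each column y_i of A."""
    m = A.shape[1]
    S = sym_stable(alpha, (size, m), rng)
    return S @ A.T

rng = np.random.default_rng(0)
alpha = 0.7
S = sym_stable(alpha, 400_000, rng)
# empirical characteristic function at theta = 1: should be close to
# exp(-|1|^alpha) = exp(-1) ≈ 0.368
print(np.mean(np.cos(S)))
```

One can then feed any n × m matrix A into `stable_vector` to simulate the finitely supported spectral measures used in Section 3.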
Finally, we need the following facts. If X ∼ S_α, then

P(X > λ) ∼ (C_α/2) λ^{−α} as λ → ∞,    (7)

where there is an exact formula for C_α; see e.g. page 17 in [11]. The exact formula for this constant will not be relevant to us, and so we will express quantities in terms of C_α. Moreover, if we let f denote the probability density function of X, then

f(x) ∼ (α C_α/2) x^{−(α+1)} as x → ∞;    (8)

see [4]. Also, f(x) is decreasing in x for x > 0; see Theorem 2.7.4 on page 128 in [13].
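The constants in the tail and density asymptotics can be sanity-checked in the one case where everything is elementary: α = 1, where S_α is the standard Cauchy distribution and C_1 = 2/π (the standard value, see [11] p. 17). The following check is ours, not from the paper; it verifies λ^α P(S > λ) → C_α/2 and x^{α+1} f(x) → α C_α/2:

```python
import math

C1 = 2 / math.pi  # C_alpha at alpha = 1

def tail(lam):
    """P(S > lam) for S standard Cauchy (= S_1)."""
    return 0.5 - math.atan(lam) / math.pi

def dens(x):
    """Density of the standard Cauchy distribution."""
    return 1.0 / (math.pi * (1.0 + x * x))

# both ratios should tend to 1 as lam grows (alpha = 1 here)
for lam in (10.0, 100.0, 1000.0):
    print(lam, lam * tail(lam) / (C1 / 2), lam ** 2 * dens(lam) / (C1 / 2))
```

Note that the two limits are consistent: differentiating the tail asymptotics formally recovers the density asymptotics, since α C_α/2 · x^{−(α+1)} = −d/dx [(C_α/2) x^{−α}].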
3 Proof of Theorem 1.1

The proof of Theorem 1.1 is somewhat simpler in the case when the spectral measure is finitely supported in addition to being symmetric. We therefore first give a proof in this simpler setting, which is also sufficient for the examples covered in Section 4.
Proof of Theorem 1.1 for symmetric and finitely supported spectral measures. Suppose that Λ is symmetric and has support contained in {±y_1, ..., ±y_m} ⊆ S^{n−1}. For i = 1, 2, ..., m, let ŷ_i := (2Λ(y_i))^{1/α} y_i, and let S_1, S_2, ..., S_m ∼ S_α be i.i.d. Then we have (see Section 2) that X =_d Σ_{i=1}^m S_i ŷ_i. The rest of the proof will be divided into two steps. In the first step, we give a proof under an additional assumption, (9), on Ē. In the second step, we show that this additional assumption can be removed.
Step 1. Assume that (9) holds. Given this assumption, we make the following observations.
(O1) The assumption on Ē in (9) implies that there is ε_0 > 0 such that if s_1, s_2, ..., s_m are such that Σ_{i=1}^m s_i ŷ_i ∈ Ē, then there is a set I ⊆ [m], with |I| ≥ k, such that |s_i| > ε_0 for all i ∈ I.
(O2) It follows from the previous observation and (7) that

(O3) For any ε′ > 0,

For each δ > 0, recall that E_{δ,−} = {x ∈ E : d(x, ∂E) > δ}. Using the observations above, it follows that for any ε′ ∈ (0, ε_0),

Fix δ > 0 arbitrarily and set ε′ := ε_0 ∧ δ/((m − k) sup_{i∈[m]} ||ŷ_i||_2). Note that for each set I, the event in question implies that

Let f denote the common probability density function of S_1, S_2, ..., S_m. By (O1), we have that for a fixed set I of size k, the probability P(Σ_{i∈I} S_i ŷ_i ∈ hE_{δ,−}) can be written as an integral of the product of the densities over the set {(s_1, ..., s_k) ∈ R^k : Σ_{i∈I} s_i ŷ_i ∈ hE_{δ,−}}. Using first (8) and then (O1), it follows that

Making the change of variables (2Λ(y_i))^{1/α} s_i/h → s_i, we obtain

Summing over all I ⊆ [m] with |I| = k, and using that each set I of size k can be ordered in exactly k! ways, we get
Using this, it follows that the previous expression equals

and hence, by taking h to infinity and then δ to zero,

Using the monotone convergence theorem, this implies in particular that

and hence the lower bound in Theorem 1.1 holds. The proof of the upper bound is completely analogous, in fact slightly easier, and is hence omitted here.
Step 2. It now remains only to show that the assumption on Ē given in (9) can be removed. So we now assume that (9) fails. Then it is easy to see that the integral in (3) is infinite for every δ > 0, and hence the upper bound holds without the assumption on Ē. We now show that the lower bound also holds without this assumption. To this end, assume first that there is a suitable point t = (t_1, ..., t_m), and assume further that ℓ is the smallest integer for which such a point t exists. Then, for all sufficiently small δ > 0, we have that Σ_{i=1}^m t_i ŷ_i ∈ E_{δ,−}, and

Since, by the first part of the proof, we have that

the lower bound is still valid in this case. If no such point t exists, then we have that

Using Step 1, this implies in particular that for all δ > 0 we have that

Since I_1(E_{δ,−}, k, α) is monotone in δ, the desired conclusion follows by applying the monotone convergence theorem. This concludes the proof.
Remark 3.1. We observe that we have shown the following: if there is a matrix A = (ŷ_1, ŷ_2, ..., ŷ_m) such that X =_d A(S_1, ..., S_m), where S_1, S_2, ..., S_m ∼ S_α are i.i.d. (or, equivalently, if the spectral measure is finitely supported), then, for any set E ⊆ R^n,

With only small adjustments to the proof above, the assumption that X is symmetric can be dropped. To do this, one replaces the matrix representation used above with the corresponding representation for non-symmetric X (i.e., one defines A by A(·, i) := (Λ(y_i))^{1/α} y_i, where S_i is a so-called totally skewed α-stable random variable with scale one), and then adjusts the proof accordingly. This is not as easy to do, however, when Λ is not finitely supported.
Remark 3.3. By Theorem 1(ii) in [3], any multivariate stable distribution X ∼ S_α(Λ) can be approximated by a multivariate stable distribution X_ε ∼ S_α(Λ_ε). Here Λ_ε is chosen by partitioning the unit sphere into a finite number of sets of small diameter, and then concentrating all of the mass that Λ gives each such set at an arbitrarily chosen point of the set.
This result, together with the proof for the finitely supported case, is however not sufficient to draw the same conclusion for an arbitrary spectral measure. To see this, let E and Λ be as in Example 4.4, and let α ∈ (0, 1), so that Example 4.4 gives that lim_{h→∞} h^{2α} P(X ∈ hE) ∈ (0, ∞). Then there are Λ_ε as above which are arbitrarily close to Λ, but for which the corresponding limit is infinite by Theorem 1.1.
To be able to give the proof of Theorem 1.1 in the general setting, we will first need the following lemma. The special case k = 2 was stated in [11] (see Equation 1.4.8 on p. 27), but no proof is given there. A sketch of the proof of this particular case was provided in private correspondence with one of the authors.
Lemma 3.4. Let (Γ_i)_{i≥1} be the arrival times of a Poisson process with rate one, and let (ε_i)_{i≥1} and (W_i)_{i≥1} be two further i.i.d. sequences, where we assume that these three sequences are independent of each other. Next, let α ∈ (0, 2), let k ≥ 2 be an integer and let ε ∈ (0, min(α, (k − 1)(2 − α))). Then

Proof of Lemma 3.4. To simplify notation, write β := (k − 1)α + ε. We then need to show that

To this end, note first that for any fixed m ≥ k we have that

By (11), the conclusion of the lemma will thus follow if we can prove that

Now recall that for any real-valued random variables,

Since β/(2(k − 1)) = ((k − 1)α + ε)/(2(k − 1)) < 1 by the assumption on ε, we can apply Jensen's inequality to bound this expression from above by

Now we can again use the fact that β/(2(k − 1)) < 1, together with the so-called c_r-inequality (see e.g. Theorem 2.2 in [6]), to move this exponent into the summands and bound the previous expression from above by

In particular, this implies that it now only remains to show that the corresponding sum over i_1, ..., i_{k−1} is finite. To do this, first fix γ ∈ R_+. If i ∈ Z_+, then E[Γ_i^{−γ}] < ∞ if and only if i > γ. Moreover, for such i and γ we easily have that E[Γ_i^{−γ}] = Γ(i − γ)/Γ(i). By Stirling's formula, it follows that for fixed γ we have E[Γ_i^{−γ}] ∼ i^{−γ}, and hence E[Γ_i^{−γ}] < C_γ i^{−γ} for some constant C_γ ≥ 1 and all i > γ.

Now assume that 1 ≤ i_1 ≤ ... ≤ i_{k−1} is a sequence of integers. Then for j = 2, 3, ..., k − 1 we have that Γ_{i_j} ≥ Γ_{i_1} and Γ_{i_j} ≥ Γ_{i_j} − Γ_{i_{j−1}}. The random variables Γ_{i_j} − Γ_{i_{j−1}} are independent, and Γ_{i_j} − Γ_{i_{j−1}} is equal in distribution to Γ_{i_j − i_{j−1}} when i_j ≠ i_{j−1}, noting that Γ_0 = 0. Using this, it follows that for any fixed integer M > 0 we have that

If i_1 > (k − 1)γ and M > γ, then, using the above, this is bounded from above by

In particular, if we let γ := β/(α(k − 1)) = (α(k − 1) + ε)/(α(k − 1)) > 1 and M := m, then for i_1 ≥ m we have that i_1 ≥ m > (β/α) · k/(k − 1) = kγ > (k − 1)γ and, since k ≥ 2, also M = m > γ. Hence it follows that (12) is bounded from above by

This implies in particular that it only remains to show that the remaining sum over i_1, ..., i_{k−1} is finite. To see this, we first change the order of summation as follows. First, we sum over all possible choices of i_1. Then we sum over the number G of terms in the product, which ranges between 0 and k − 2. Finally, we sum over the possible choices of ℓ_j := i_j − i_{j−1} appearing in the product, which range from m to infinity. To sum over all possible sequences m ≤ i_1 ≤ ... ≤ i_{k−1}, we find an upper bound on the number of ways to choose the differences i_j − i_{j−1} which are smaller than m, and also on the number of ways to choose which of the differences are larger than or equal to m. The former quantity is clearly bounded from above by m^{k−2}, and the latter is equal to the binomial coefficient (k−2 choose G). Putting these observations together, we get

Since β/(α(k − 1)) = (α(k − 1) + ε)/(α(k − 1)) > 1, the desired conclusion now follows.
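The identity E[Γ_i^{−γ}] = Γ(i − γ)/Γ(i) used in the proof is easy to check numerically, since the i-th arrival time Γ_i of a rate-one Poisson process has the Gamma(i, 1) distribution. A small sketch (ours, not from the paper):

```python
import math
import random

def exact(i, gamma):
    """E[Gamma_i^(-gamma)] = Gamma(i - gamma) / Gamma(i), valid for i > gamma,
    where Gamma_i ~ Gamma(i, 1) is the i-th arrival of a rate-1 Poisson process."""
    return math.gamma(i - gamma) / math.gamma(i)

def mc(i, gamma, n, seed=0):
    """Monte Carlo estimate of E[Gamma_i^(-gamma)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        g = sum(rng.expovariate(1.0) for _ in range(i))  # Gamma(i, 1) as a sum of i Exp(1)
        total += g ** (-gamma)
    return total / n

i, gamma = 5, 1.5
print(exact(i, gamma), mc(i, gamma, 200_000))
# Stirling asymptotics: i^gamma * E[Gamma_i^(-gamma)] -> 1 as i grows
print(exact(50, gamma) * 50 ** gamma)
```

The second printout illustrates the Stirling step E[Γ_i^{−γ}] ∼ i^{−γ} used to obtain the bound with the constant C_γ.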
We now state the following lemma which will be used in the proof of Theorem 1.1. For a proof of this lemma we refer the reader to [11].
Lemma 3.5 (Theorem 3.10.1 in [11]). Let Λ be a symmetric spectral measure on S^{n−1}, and let C_α be as in Section 2. Let (Γ_i)_{i≥1} be the arrival times of a rate one Poisson process, and let (W_i)_{i≥1} be i.i.d., each with distribution Λ̄ := Λ/Λ(S^{n−1}) (the normalized spectral measure), independent of the Poisson process. Then the corresponding series converges almost surely to a random vector with distribution S_α(Λ).
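Lemma 3.5 can be illustrated by truncating the series; here we assume the prefactor (C_α Λ(S^{n−1}))^{1/α} that appears in the proof of Theorem 1.1 below. In the simplest instance, n = 1 and α = 1 with Λ placing mass 1/2 at each of ±1 (so Λ(S^0) = 1 and W_i are fair signs), the limit is the standard Cauchy distribution, which we can check through the empirical characteristic function. A simulation sketch, ours rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
C1 = 2 / np.pi                     # C_alpha at alpha = 1
n_samples, n_terms = 10_000, 800   # truncation level for the series

# Arrival times Gamma_i of rate-1 Poisson processes, one row per sample.
gaps = rng.exponential(1.0, (n_samples, n_terms))
Gamma = np.cumsum(gaps, axis=1)
# W_i i.i.d. from the normalized spectral measure; here Lambda puts
# mass 1/2 at +1 and -1, so the W_i are independent fair signs.
W = rng.choice([-1.0, 1.0], (n_samples, n_terms))

# Truncated LePage-type series:
# (C_alpha * Lambda(S^{n-1}))^{1/alpha} * sum_i W_i * Gamma_i^{-1/alpha}
X = C1 * np.sum(W / Gamma, axis=1)

# X should be approximately standard Cauchy, so E[cos(X)] ≈ exp(-1)
print(np.mean(np.cos(X)))
```

The terms Γ_i^{−1/α} decay like i^{−1/α}, so for α < 2 the truncated tail of the series has small variance and the truncation bias is minor at this depth.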
We now give a proof of Theorem 1.1 using Lemmas 3.4 and 3.5.
Proof of Theorem 1.1. Let C_α, (Γ_i) and (W_i) be as in Lemma 3.5, and define X = (X_1, X_2, ..., X_n) by the series in Lemma 3.5. Then Lemma 3.5 implies that X has distribution S_α(Λ). By Markov's inequality, for any j = 1, 2, ..., n and all h > 0 and ε > 0 we have that

By picking ε sufficiently small and applying Lemma 3.4 with k + 1 (noting that by symmetry, W_i(j) has the same distribution as ε_i |W_i(j)| with the two factors independent), it follows that

This implies in particular that for any ε′ > 0,

Now, for any δ > 0, let E_{δ,−} := {x ∈ E : d(x, ∂E) > δ}. Setting δ := √n ε′, we then have

Similarly, we have the corresponding identity for P((X_1, X_2, ..., X_n) ∈ hE), now involving the prefactor (C_α Λ(S^{n−1}))^{1/α} from Lemma 3.5. To simplify these expressions, first recall that if Γ_1, Γ_2, ..., Γ_{k+1} are the first k + 1 arrivals of a rate one Poisson process and U_1, U_2, ..., U_k ∼ Uniform(0, 1) are independent, then (Γ_1/Γ_{k+1}, ..., Γ_k/Γ_{k+1}) is distributed as the vector of order statistics of U_1, ..., U_k, independently of Γ_{k+1}. Using this, and now letting U_1, U_2, ..., U_k be i.i.d. uniforms defined on the same probability space as everything else but independent of them, we see that for E_{δ,·} = E_{δ,+} or E_{δ,·} = E_{δ,−}, we have that

If, for each fixed x, we make a change of variables, we arrive at the expression in (15). Note that the integral above is increasing in h. Combining the previous equation with (13) and (14) and applying the monotone convergence theorem, it follows that for any δ > 0,

Noting that the integrand in (15) is monotone in δ and converges pointwise to the integrand in I_1(E, k, α), the desired conclusion follows by letting δ → 0 and applying the monotone convergence theorem.
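The distributional fact about Poisson arrivals and uniform order statistics that the proof relies on can itself be checked by simulation: the ratios Γ_j/Γ_{k+1} should behave like the order statistics U_(j) of k i.i.d. uniforms, so in particular E[Γ_j/Γ_{k+1}] = j/(k + 1). A quick sketch (ours):

```python
import random

rng = random.Random(7)
k, n = 3, 100_000
sums = [0.0] * k
for _ in range(n):
    # first k + 1 arrivals of a rate-1 Poisson process
    arrivals, t = [], 0.0
    for _ in range(k + 1):
        t += rng.expovariate(1.0)
        arrivals.append(t)
    for j in range(k):
        sums[j] += arrivals[j] / arrivals[k]

# E[Gamma_j / Gamma_{k+1}] should match E[U_(j)] = j / (k + 1)
print([s / n for s in sums])   # ≈ [0.25, 0.5, 0.75] for k = 3
```

The same conditioning argument is what converts the first k arrival times in the truncated series into the k-fold uniform (hence Lebesgue) integral appearing in I_1(E, k, α).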

4 Examples
We will now apply Theorem 1.1 to a few examples.
Case (i) Suppose that X has independent components. Then it is easy to see that I_1(E, 2) = I_3(E, 2), and furthermore this common value is

Applying Theorem 1.1 with k = 2, we obtain

which is of course consistent with what independence yields.

Case (ii) Let A ⊆ S^1 ∩ (ε, ∞)^2 for some ε > 0, and let C_A := {x ∈ R^2 : ||x||_2 > 1 and x/||x||_2 ∈ A} be the cone above A. Then we have the following.
Proposition 4.2. Let X, A and C_A be as above, and assume in addition that the boundary of A has zero (one-dimensional) measure. Then

Proof. We begin with the following computation, which is valid for any set A contained in S^1 ∩ (ε, ∞)^2.
For any set U, letting U° be the interior of U, one easily checks that

keeping in mind that the interiors and closures are taken with respect to different spaces: in one case R^2 and in the other S^1. Therefore the above computation shows that

where, for the latter equation, we also used the fact that the S^1 piece adds nothing to the relevant integral. Now, using the fact that the boundary of A has measure zero, we conclude that I_1(C_A, 2) = I_2(C_A, 2). Since ε is fixed, it is easy to see that I_δ(C_A, 2) is finite for sufficiently small δ, allowing us to conclude that I_1(C_A, 2) = I_3(C_A, 2). Theorem 1.1 with k = 2 now yields the result.

Remark 4.3. This improves on (5) in this case, since it yields the correct decay rate and demonstrates the hidden regular variation behavior; the former result would only give lim_{h→∞} h^α P(X ∈ hC_A) = 0. Not surprisingly, when A is taken as large as possible for a given ε, the integral tends to infinity as ε goes to 0; this is because we are getting closer to the support of the spectral measure.
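As an informal numerical companion to Proposition 4.2 (ours, not from the paper): take α = 1 with two independent standard Cauchy components, so the spectral measure sits on the coordinate axes, and take A = {s ∈ S^1 : s_1, s_2 > 1/2} (so ε = 1/2 and the cone avoids the spectral support). The probability P(X ∈ hC_A) should then decay like h^{−2α} = h^{−2}, which a crude Monte Carlo estimate can detect through the ratio p(2h)/p(h) ≈ 1/4:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2_000_000
# X = (S1, S2), independent standard Cauchy (alpha = 1); the spectral
# measure is then supported on the four points (+-1, 0), (0, +-1).
X = np.tan(rng.uniform(-np.pi / 2, np.pi / 2, (N, 2)))

def p_cone(h):
    """Empirical P(X in h * C_A) for A = {s in S^1 : s_1, s_2 > 1/2},
    a cone around the diagonal, away from the spectral support."""
    r = np.hypot(X[:, 0], X[:, 1])
    hit = (r > h) & (X[:, 0] > r / 2) & (X[:, 1] > r / 2)
    return np.mean(hit)

p20, p40 = p_cone(20.0), p_cone(40.0)
# hidden regular variation: decay h^{-2 alpha} = h^{-2}, so the ratio
# p(40)/p(20) should be near (1/2)^2 = 0.25 (up to Monte Carlo noise)
print(p20, p40, p40 / p20)
```

This is only a sanity check on the exponent; the constant in Proposition 4.2 involves the integral over the cone, which the simulation does not attempt to evaluate.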
Case (iii) This example, while fairly simple, has three different values arising in (4) when k = 1; in particular, Theorem 1.1 yields nonmatching upper and lower bounds. We let E be chosen as follows. It is easy to check that for any α ∈ (0, 2) we have I_1(E, 1, α) = 0 and I_2(E, 1, α) = I_3(E, 1, α) = C_α/2, while, using the independence of the components, it is immediate that the middle terms in (4) when k = 1 are C_α/4.

Our next example illustrates a number of interesting phenomena, which we summarize in Proposition 4.5 after giving the example. It provides an example where (i) the decay rate has three possible behaviors depending on α, (ii) I_1(E, k, α) ≠ I_3(E, k, α), and (iii) the tail behavior can drastically change due to a modification of the set E in an arbitrarily small neighborhood of one point, namely (1, 1). It is also a "baby version" of the example following it, which will be crucially used in [5].

Example 4.4. Let S_1, S_2 ∼ S_α be i.i.d. and let X := (1, 1)S_1 − (0, 1)S_2. Then X is a symmetric α-stable random vector, and the spectral measure Λ of X has mass 2^{α/2}/2 at ±(1, 1)/√2 and mass 1/2 at ±(0, 1). Let E be defined as follows. We mention that it is straightforward to show that for all α, I_3(E, 1, α) = 0.
Proposition 4.5. Let Λ, X and E be as above.
Proof. We only prove (i) and (ii); (iii) and (iv) are fairly straightforward and left to the reader. We start with the proof of (ii). It is easy to see that I_1(E, 2, α) = I_2(E, 2, α) and that their common value is

This integral is easily verified to be infinite if and only if α ≥ 1, and strictly positive for all α ∈ (0, 1). Recognizing the integrand as (up to a constant) the probability density function of a Beta distribution with parameters 2α and 1 − α, we see that the last expression is equal to

The fact that I_3(E, 2, α) = ∞ is seen by noting that for any fixed δ > 0, the term Λ^2({(x^1, x^2) ∈ S^1 × S^1 : s_1 x^1 + s_2 x^2 ∈ E_{δ,+}}) is uniformly bounded away from 0 for arbitrarily small s_2, and hence the integral diverges. This finishes the proof of (ii).

We now move to (i). Since X = (1, 1)S_1 − (0, 1)S_2, we have

and so for any α, we have

We now proceed with the case α ∈ (0, 1). It is not hard to show that for every ε > 0,

and hence, by Theorem 1.1,

Letting ε → 0, we can apply the monotone convergence theorem to both sides (using the fact that E is open) and conclude (17), as desired.

Now instead let α = 1. It is not hard to show that for every ε > 0, by breaking up the following integral into [0, h] and [h, ∞) and using the fact that f is decreasing, we have

Noting that (7) implies that the second term goes to 0 as h → ∞, and that (8) easily implies that

as well as applying (8),

Using (8) and (20), the limit of the last term as h → ∞ equals C_1^2/(4(1 + ε)^2). Hence for every ε > 0 we have

and we can then let ε → 0 to complete the proof in this case.

Finally, we treat the case α ∈ (1, 2). Using the fact that f is decreasing and using (8), we have

establishing the upper bound in (19). For the lower bound, fixing ε > 0, we have

It follows that

One can now let ε → 0, obtaining the lower bound in (19) and completing the proof.
Remark 4.6. If X is as in our first example, where we have independent components, one can construct a set, depending on a parameter a ∈ (0, 1), which exhibits behavior similar to that in the above proposition. However, the above example, when generalized to three variables, is what we need in another context, and so we proceeded in this way.
Remark 4.7. With the previous result in mind, one might wonder whether any threshold for events of the type {X ∈ hE} must occur at α = 1. To show that this is not the case, fix α ∈ (0, 2) and σ > 0, and define E_σ := {x ∈ R^2 : 1 < x_1 < 1 + x_2^σ}.
Further, let S_1, S_2 ∼ S_α be i.i.d. and consider the decay rate of P(X ∈ hE_σ) as h → ∞. Then, using an argument very similar to that in the proof of Proposition 4.5, one can show that there is a phase transition in the behavior of the decay rate of P(X ∈ hE_σ) at α = σ, with an explicit limit available when α < σ.
In our next, and final, example we study one of the simplest three-dimensional permutation-invariant multivariate stable distributions, and show that it exhibits the same behavior as our previous example. Here we only study the case α ∈ (0, 1) in detail, but the cases α = 1 and α > 1 can be handled similarly to the proof of Proposition 4.5.
Proof sketch. It is easy to see that I_1(E, 2, α) = I_2(E, 2, α) and to compute their common value. The rest of the proof follows the proof of Proposition 4.5 exactly, and is hence omitted here.