A formula for hidden regular variation behavior for symmetric stable distributions

We develop a formula for the power-law decay of various sets for symmetric stable random vectors in terms of how many vectors from the support of the corresponding spectral measure are needed to enter the set. One sees different decay rates in “different directions”, illustrating the phenomenon of hidden regular variation. We give several examples and obtain quite varied behavior, including sets which do not have exact power-law decay.

Multivariate stable distributions exhibit so-called hidden regular variation (see Resnick (2007)), meaning that one has different power-law decay in different directions. Ideally, one would like to capture the correct decay rate in each such direction. Our main result, Theorem 1, describes such behavior for symmetric stable distributions. The needed definitions and background will be given in Section 2.
Let α ∈ (0, 2) and let X be an n-dimensional symmetric α-stable random vector with spectral measure Λ (see Eq. 4). Then Λ is a bounded measure on the unit sphere S^{n−1} in R^n. Let E ⊆ R^n be a Borel set with 0 ∉ Ē, where Ē and E^o denote the closure and interior of E respectively. With d being the Euclidean distance, define the δ-neighborhood of E by E^{δ,+} := {x ∈ R^n : d(x, E) < δ}.
For any integer k ≥ 1 and any E as above, letting C_α be a constant defined in Section 2 and Λ^k := Λ × · · · × Λ (k times), define

  L(E, k, α) := (C_α^k / k!) ∫_{(0,∞)^k} Λ^k({(x_1, …, x_k) ∈ (S^{n−1})^k : Σ_{i=1}^k s_i x_i ∈ E}) Π_{i=1}^k α s_i^{−(1+α)} ds_1 ⋯ ds_k.    (1)

Theorem 1 For any α, X, E and k as above,

  L(E^o, k, α) ≤ liminf_{h→∞} h^{kα} P(X ∈ hE) ≤ limsup_{h→∞} h^{kα} P(X ∈ hE) ≤ lim_{δ→0} L(E^{δ,+}, k, α).    (2)
Remark 2 Taking E to be the set {x ∈ R^n : min_i x_i > 1} and k = 1, one obtains Theorem 4.4.1 in Samorodnitsky and Taqqu (1994) in the symmetric case (see equation (4.4.2) in Samorodnitsky and Taqqu (1994)).
Remark 3 If, for x ∈ R^n, we let ‖x‖_2 := (x_1^2 + … + x_n^2)^{1/2} and define E := Cone(A) := {x ∈ R^n : ‖x‖_2 > 1 and x/‖x‖_2 ∈ A} for some A ⊆ S^{n−1} with Λ(∂A) = 0, and then apply Theorem 1 to both E and to the complement of the unit ball with k = 1, we recover Corollary 6.20 in Araujo and Gine (1980) in the symmetric case, stating that

  lim_{h→∞} h^α P(X ∈ h Cone(A)) = C_α Λ(A).    (3)

Remark 4 Our motivation for looking at Theorem 1 and its consequences (see Section 4) was to understand which threshold stable vectors can be obtained as divide and color processes in the sense of Steif and Tykesson (2019). These applications, as well as a study of which threshold Gaussian vectors can be obtained as divide and color processes, are carried out in Forsström and Steif (in preparation).
Remark 5 One might guess that Theorem 1 would generalize to so-called regularly varying random vectors (see e.g. Proposition 2.2.20 on p. 57 in Mikosch and Wintenberger (2019)). This is however not the case. To see this, let α ∈ (0, 2) and let X_1 be an α-stable random vector in R^2 whose spectral measure Λ_1 has mass 1/4 at each of ±(1, 0) and ±(0, 1). Further, let α′ ∈ (α, 2α) ∩ (0, 2) and let X_2 be an α′-stable random vector in R^2 with uniform spectral measure, independent of X_1. Then X_1 and X_1 + X_2 are both regularly varying with the same index and the same limiting measure. However, if we let E := Cone(π/8, 3π/8), then using Theorem 1 one easily obtains

  P(X_1 ∈ hE) ≍ h^{−2α}  while  P(X_1 + X_2 ∈ hE) ≍ h^{−α′}.

(As usual, ≍ denotes two quantities whose ratio is bounded away from zero and infinity.)

Stable random vectors
In this section we give some relevant definitions. These will be brief, as we assume the reader is familiar with the basics of stable vectors. For a more thorough introduction to stable random vectors, we refer the reader to Samorodnitsky and Taqqu (1994).
It is well known that for any symmetric stable vector X there exists α ∈ (0, 2], called the stability index, so that for all k ≥ 1 the scaling constants in the definition of stability are given by a_k = k^{1/α}. The stability index α = 2 corresponds to Gaussian random vectors. If n = 1, then besides α there is only one parameter, the scale parameter σ, and in this case the characteristic function φ_X(θ) is given by

  φ_X(θ) = exp(−σ^α |θ|^α).

(When α = 2, σ corresponds to the standard deviation divided by √2, an irrelevant scaling.) When σ = 1, we denote this distribution by S_α. For stable vectors, the picture is somewhat more complicated. A random vector X in R^n has a symmetric stable distribution with stability exponent α if and only if its characteristic function φ_X(θ) has the form

  φ_X(θ) = exp(−∫_{S^{n−1}} |⟨θ, x⟩|^α Λ(dx))    (4)

for some finite measure Λ on the unit sphere S^{n−1} which is invariant under x → −x. Λ is called the spectral measure of X. If Eq. 4 holds for some α and Λ, we write X ∼ S_α(Λ). For α ∈ (0, 2) fixed, different Λ's yield different distributions. This is not true for α = 2. When S_1, S_2, …, S_m are i.i.d. random variables with distribution S_α, S := (S_1, …, S_m), and A is an n × m matrix, then the vector X := (X_1, …, X_n) defined by X := AS is a symmetric α-stable random vector. To describe the spectral measure of X, consider the columns of A as elements of R^n, denoted by ŷ_1, …, ŷ_m. Then Λ is obtained by placing, for each i ∈ [m] := {1, 2, …, m}, a mass of weight ‖ŷ_i‖_2^α/2 at ±ŷ_i/‖ŷ_i‖_2. See p. 69 in Samorodnitsky and Taqqu (1994).
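The recipe above can be made concrete in code. The following is a minimal sketch (our illustration, not part of the original text) that computes the atoms of the spectral measure of X := AS from the columns of A, placing mass ‖ŷ_i‖_2^α/2 at ±ŷ_i/‖ŷ_i‖_2:

```python
import math

def spectral_atoms(A, alpha):
    """Spectral measure of X = A S for i.i.d. S_alpha coordinates S:
    a mass ||y_i||_2^alpha / 2 at each of +-y_i/||y_i||_2, where the
    y_i are the columns of A (zero columns contribute nothing)."""
    n = len(A)
    m = len(A[0])
    atoms = []  # list of (unit_vector, mass) pairs
    for i in range(m):
        y = [A[r][i] for r in range(n)]
        norm = math.sqrt(sum(c * c for c in y))
        if norm == 0:
            continue
        u = tuple(c / norm for c in y)
        mass = norm ** alpha / 2
        atoms.append((u, mass))
        atoms.append((tuple(-c for c in u), mass))
    return atoms
```

For A the 2 × 2 identity matrix (two independent S_α components), this places mass 1/2 at each of ±e_1 and ±e_2, for a total mass of 2.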
Finally, we need the following facts. If X ∼ S_α, then

  P(X > h) ∼ (C_α/2) h^{−α}  as h → ∞,    (5)

where there is an exact formula for C_α; see e.g. page 17 in Samorodnitsky and Taqqu (1994). The exact formula for this constant will not be relevant to us and so we will express quantities in terms of C_α. Moreover, if we let f denote the probability density function of X, then

  f(x) ∼ (αC_α/2) x^{−(1+α)}  as x → ∞;    (6)

see Fofack and Nolan (1999). Also, f(x) is decreasing in x for x > 0; see Theorem 2.7.4 on page 128 in Zolotarev (1983).
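In the Cauchy case α = 1 everything in Eq. 5 is explicit: C_1 = 2/π (a standard value; see Samorodnitsky and Taqqu (1994)), and a standard symmetric 1-stable variable has distribution function 1/2 + arctan(h)/π. The following quick numerical sanity check of Eq. 5 is our illustration, not part of the original text:

```python
import math

# Standard symmetric 1-stable (Cauchy) random variable with scale 1:
# characteristic function exp(-|t|), tail P(S > h) = 1/2 - atan(h)/pi.
C_1 = 2 / math.pi  # the constant C_alpha at alpha = 1

def cauchy_tail(h):
    return 0.5 - math.atan(h) / math.pi

# Eq. 5 predicts h * P(S > h) -> C_1 / 2 = 1/pi as h -> infinity.
for h in [10.0, 100.0, 1000.0]:
    print(h, h * cauchy_tail(h), C_1 / 2)
```

Already at h = 1000 the product h · P(S > h) agrees with 1/π to roughly seven digits.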

Related work
When the spectral measure Λ of X is finitely supported, some asymptotic behavior of the corresponding probability density function f(x) in different directions is obtained in Hiraba (2003). However, since the convergence in this case is not known to be uniform, this result cannot be used to get a version of Theorem 1 for finitely supported Λ. We also mention that results in Watanabe (2007) can be used to find the correct normalizing function above for many sets, but these results cannot be used to find an expression for the limit as given by Theorem 1, since only upper and lower bounds are given there. In both Hiraba (2003) and Watanabe (2007), the proofs are analytical, while our proofs are more probabilistic.

Proof of Theorem 1
The proof of Theorem 1 is somewhat simpler in the case when the spectral measure is finitely supported in addition to being symmetric. We therefore first give a proof in this simpler setting, which is also sufficient for the examples covered in Section 4.
Proof of Theorem 1 for symmetric and finitely supported spectral measures Suppose that Λ is symmetric and has support {±y_1, …, ±y_m} ⊆ S^{n−1}. For i = 1, 2, …, m, let ŷ_i := (2Λ(y_i))^{1/α} y_i and let S_1, S_2, …, S_m ∼ S_α be i.i.d. Then we have (see Section 2) that X has the same distribution as Σ_{i=1}^m S_i ŷ_i. The rest of the proof will be divided into two steps. In the first step, we give a proof under the additional assumption that, for some positive integer k,

  Ē contains no point of the form Σ_{i=1}^m s_i ŷ_i in which fewer than k of the coefficients s_i are nonzero.    (7)

In the second step, we show that this additional assumption can be removed.
Step 1 Assume that Eq. 7 holds. Given this assumption, we make the following observations.
(O1) The assumption on Ē in Eq. 7 implies that there is ε_0 > 0 such that if s_1, s_2, …, s_m are such that Σ_{i=1}^m s_i ŷ_i ∈ Ē, then there is a set J ⊆ [m], with |J| ≥ k, such that |s_i| > ε_0 for all i ∈ J.

(O2) It follows from the previous observation and Eq. 5 that

  P(X ∈ hĒ) ≤ P(|S_i| > ε_0 h for at least k indices i) = O(h^{−kα}).

(O3) For any ε > 0,

  P(|S_i| > εh for at least k + 1 indices i) = O(h^{−(k+1)α}) = o(h^{−kα}).

For each δ > 0, recall E^{δ,+} = {x ∈ R^n : d(x, E) < δ} and define E^{δ,−} := {x ∈ E : d(x, ∂E) > δ}. Using the observations above, it follows that for any ε ∈ (0, ε_0), P(X ∈ hE) can, up to an error of size o(h^{−kα}), be bounded from below by

  Σ_{J ⊆ [m], |J| = k} P(Σ_{i∈J} S_i ŷ_i ∈ hE^{δ,−}, |S_i| ≤ εh for all i ∉ J).

Let f denote the common probability density function of S_1, S_2, …, S_m. By (O1), we have that for a fixed set J of size k with elements i_1 < … < i_k,

  P(Σ_{i∈J} S_i ŷ_i ∈ hE^{δ,−}) = ∫_{(s_1,…,s_k) ∈ R^k : Σ_{j=1}^k s_j ŷ_{i_j} ∈ hE^{δ,−}} Π_{j=1}^k f(s_j) ds_1 ⋯ ds_k.

Using first Eq. 6 and then (O1), it follows that the main contribution to this integral comes from s_j with |s_j| of order h, where f(s_j) behaves like (αC_α/2)|s_j|^{−(1+α)}. Making the change of variables s_j ↦ (2Λ(y_{i_j}))^{1/α} s_j and rescaling by h, one arrives at h^{−kα} times an integral of the form appearing in Eq. 1. Now note that

1. each pair of points ±y_i, i = 1, 2, …, m, is counted only once in the last equation, and
2. each set J of size k can be ordered in exactly k! ways.
Using this, and symmetry, it follows that the previous equation is equal to L(E^{δ,−}, k, α) h^{−kα}(1 + o(1)), and hence, by taking h to infinity and then δ to zero,

  liminf_{h→∞} h^{kα} P(X ∈ hE) ≥ lim_{δ→0} L(E^{δ,−}, k, α).

Using the monotone convergence theorem, this implies in particular that

  liminf_{h→∞} h^{kα} P(X ∈ hE) ≥ L(E^o, k, α),

and hence the lower bound in Theorem 1 holds. The proof of the upper bound is completely analogous and slightly easier, and is hence omitted here.
Step 2 It now remains only to show that the assumption on Ē given in Eq. 7 can be removed. So we now assume that Ē contains a point of the form Σ_{i=1}^m s_i ŷ_i in which fewer than k of the coefficients s_i are nonzero. Then it is easy to see that the integral in the definition of L(E^{δ,+}, k, α) is infinite for every δ > 0, and hence the upper bound holds without the assumption on Ē. We now show that the lower bound holds also without the assumption on Ē. To this end, assume first that there is t = (t_1, …, t_m) with fewer than k nonzero coordinates such that Σ_{i=1}^m t_i ŷ_i ∈ E^o, and assume further that k′ < k is the smallest number of nonzero coordinates for which such a point t exists. Then, for all sufficiently small δ > 0, we have that Σ_{i=1}^m t_i ŷ_i ∈ E^{δ,−} and L(E^{δ,−}, k′, α) > 0. Since, by the first part of the proof,

  liminf_{h→∞} h^{k′α} P(X ∈ hE) ≥ L(E^{δ,−}, k′, α) > 0,

we have that lim_{h→∞} h^{kα} P(X ∈ hE) = ∞ = L(E^o, k, α), and hence the lower bound is still valid in this case. If no such point t exists, then Eq. 7 holds with Ē replaced by the closure of E^{δ,−} for every δ > 0. Using Step 1, this implies in particular that for all δ > 0, we have that

  liminf_{h→∞} h^{kα} P(X ∈ hE) ≥ L(E^{δ,−}, k, α).

Since L(E^{δ,−}, k, α) is monotone in δ, the desired conclusion follows by applying the monotone convergence theorem. This concludes the proof.

Remark 6 We observe that we have shown that if there is an n × m matrix A such that X = AS for a vector S of i.i.d. S_α random variables (or, equivalently, that the spectral measure is finitely supported), then the conclusion of Theorem 1 holds for any Borel set E as above.

Remark 7 With only small adjustments of the proof above, the assumption that X is symmetric can be dropped. To do this, one replaces the matrix representation used above with the corresponding representation for when X is not symmetric (i.e. one defines A by A(·, i) = (Λ(y_i))^{1/α} y_i and lets S_i be a so-called totally skewed α-stable random variable with scale one), and then adjusts the proof accordingly. This is not as easy to do, however, when Λ is not finitely supported.
Here Λ ε is chosen by partitioning the unit sphere into a finite number of sets of small diameter, and then concentrating all the mass of Λ in each such set at an arbitrarily chosen point in the set.
This result, together with the proof for the finitely supported case, is however not sufficient to draw the same conclusion for an arbitrary spectral measure. To see this, let E and Λ be as in Example 2, and let α ∈ (0, 1), so that Example 2 gives that lim_{h→∞} h^{2α} P(X ∈ hE) ∈ (0, ∞). Then there are Λ_ε as above which are arbitrarily close to Λ but for which the corresponding limit is infinite by Theorem 1.
To be able to give the proof of Theorem 1 in the general setting, we will first need the following lemma. The special case k = 2 was stated in Samorodnitsky and Taqqu (1994) (see Equation 1.4.8 on p. 27), but no proof is given there. A sketch of the proof of this particular case was provided in private correspondence with one of the authors.
Lemma 1 Let (Γ_i)_{i≥1} be the arrival times of a Poisson process with rate one, let (ε_i)_{i≥1} be i.i.d. symmetric random signs and let (W_i)_{i≥1} be i.i.d. with |W_i| ≤ 1, where we assume that these three sequences are independent of each other. Next let α ∈ (0, 2), let k ≥ 2 be an integer and let ε ∈ (0, min({α, (k − 1)(2 − α)})). Then

  E[|Σ_{i=k}^∞ Γ_i^{−1/α} ε_i W_i|^{(k−1)α+ε}] < ∞.

Proof of Lemma 1 To simplify notation, write β := (k − 1)α + ε. We then need to show that the expectation above is finite. To this end, note first that the corresponding moment of the partial sum up to any fixed m ≥ k is finite. Now recall that for any real-valued random variables X and Y with E|X|^β < ∞ and E|Y|^β < ∞ we have that E[|X + Y|^β] < ∞. Using Eq. 9, the conclusion of the lemma will thus follow if we can prove that the tail sum over i > m has finite β-th moment. Since β/(2(k − 1)) = ((k − 1)α + ε)/(2(k − 1)) < 1 by the assumption on ε, we can apply Jensen's inequality to bound this expression from above. Now we can again use the fact that β/(2(k − 1)) < 1 and the so-called c_r-inequality (see e.g. Theorem 2.2 in Gut (2013)) to move this exponent into the summands. In particular, this implies that it now only remains to show that the resulting sum over indices i_1, …, i_{k−1} is finite. To do this, first fix γ ∈ R_+.
Recall that E[Γ_i^{−γ}] = Γ(i − γ)/Γ(i) whenever i > γ. By Stirling's formula, it follows that for a fixed γ we have that E[Γ_i^{−γ}] ≍ i^{−γ} for large i. Using this, it follows that for any fixed integer M > 0 the expectations of the summands can be bounded in terms of powers of the indices, and if i_1 > (k − 1)γ and M > γ, then, using the above, this bound applies. In particular, if we let γ = β/(α(k − 1)) = (α(k − 1) + ε)/(α(k − 1)) > 1 and M = m, then for i_1 ≥ m we have that i_1 ≥ m > (β/α) · k/(k − 1) = kγ > (k − 1)γ and therefore, since k ≥ 2, M = m > γ. Hence it follows that Eq. 10 is bounded from above by a constant times the sum below, and it thus only remains to show that

  Σ_{i_1,…,i_{k−1} : m ≤ i_1 ≤ … ≤ i_{k−1}} i_1^{−β/(α(k−1))} Π_{j ∈ {2,3,…,k−1} : i_j − i_{j−1} ≥ m} (i_j − i_{j−1})^{−β/(α(k−1))} < ∞.

To see this, we first change the order of summation as follows. First, we sum over all possible choices of i_1. Then we sum over the number G of terms in the product, which will range between 0 and k − 2. Finally, we sum also over the possible choices of the differences i_j − i_{j−1} in the product, which will range from m to infinity. To sum over all possible sequences m ≤ i_1 ≤ … ≤ i_{k−1}, we find an upper bound on the number of ways to choose the differences i_j − i_{j−1} which are smaller than m and also on the number of ways to choose which of the differences are larger than or equal to m. The former of these quantities is clearly bounded from above by m^{k−2}, and the latter is equal to the binomial coefficient (k−2 choose G) < 2^{k−2}. Since β/(α(k − 1)) = (α(k − 1) + ε)/(α(k − 1)) > 1, the desired conclusion now follows by putting these observations together.
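The negative moments of the arrival times used above admit the closed form E[Γ_i^{−γ}] = Γ(i − γ)/Γ(i) for γ < i, since Γ_i has a Gamma(i, 1) distribution. The following small numerical cross-check of this identity by quadrature is our illustration (the parameter values i = 5, γ = 1.3 are arbitrary):

```python
import math

def neg_moment_gamma(i, gamma, n_steps=200000, t_max=60.0):
    """Numerically integrate E[Gamma_i^{-gamma}] for Gamma_i ~ Gamma(i, 1),
    i.e. the integral of t^{i-1-gamma} e^{-t} / (i-1)! over (0, infinity).
    Requires gamma < i for convergence at the origin."""
    dt = t_max / n_steps
    total = 0.0
    for j in range(n_steps):
        t = (j + 0.5) * dt  # midpoint rule, avoids t = 0
        total += t ** (i - 1 - gamma) * math.exp(-t) * dt
    return total / math.factorial(i - 1)

i, gamma = 5, 1.3
exact = math.gamma(i - gamma) / math.gamma(i)  # closed form Gamma(i-g)/Gamma(i)
approx = neg_moment_gamma(i, gamma)
print(exact, approx)  # the two agree to several digits
```

By Stirling's formula the exact value Γ(i − γ)/Γ(i) behaves like i^{−γ} for large i, which is the asymptotic used in the proof.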
We now state the following lemma which will be used in the proof of Theorem 1. For a proof of this lemma we refer the reader to Samorodnitsky and Taqqu (1994).
Lemma 2 (Theorem 3.10.1 in Samorodnitsky and Taqqu (1994)) Let Λ be a symmetric spectral measure on S^{n−1}. Furthermore, let C_α be defined by P(Y ≥ h) ∼ C_α h^{−α}/2 for Y ∼ S_α, let (Γ_i)_{i≥1} be the arrival times of a rate one Poisson process and let (W_i)_{i≥1} be i.i.d., each with distribution Λ̃ := Λ/Λ(S^{n−1}) (the normalized spectral measure), independent of the Poisson process. Then

  C_α^{1/α} Λ(S^{n−1})^{1/α} Σ_{i=1}^∞ Γ_i^{−1/α} W_i

converges almost surely to a random vector with distribution S_α(Λ).
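Lemma 2 translates directly into a simulation recipe for S_α(Λ) with discrete Λ: generate Poisson arrival times, sample directions from the normalized spectral measure, and truncate the almost surely convergent series. The sketch below is our illustration: the closed form used for C_α (valid for α ≠ 1) is the standard one from Samorodnitsky and Taqqu (1994), and the truncation at n_terms introduces an approximation error.

```python
import math
import random

def lepage_sample(alpha, atoms, n_terms=2000, rng=None):
    """Approximate sample from S_alpha(Lambda) via the (truncated) series of
    Lemma 2: C^{1/a} Lambda(S^{n-1})^{1/a} * sum_i Gamma_i^{-1/a} W_i, where
    Gamma_i are Poisson arrival times and W_i ~ Lambda/Lambda(S^{n-1}).
    `atoms` is a list of (unit_vector, mass) pairs for a discrete symmetric
    spectral measure Lambda.  Assumes alpha != 1."""
    rng = rng or random.Random()
    # C_alpha = (1 - a) / (Gamma(2 - a) cos(pi a / 2)) for a != 1.
    C = (1 - alpha) / (math.gamma(2 - alpha) * math.cos(math.pi * alpha / 2))
    total_mass = sum(m for _, m in atoms)
    weights = [m / total_mass for _, m in atoms]
    n = len(atoms[0][0])
    x = [0.0] * n
    arrival = 0.0
    for _ in range(n_terms):
        arrival += rng.expovariate(1.0)  # next Poisson arrival time Gamma_i
        w = rng.choices([v for v, _ in atoms], weights=weights)[0]
        scale = arrival ** (-1.0 / alpha)
        for j in range(n):
            x[j] += scale * w[j]
    factor = (C * total_mass) ** (1.0 / alpha)
    return [factor * c for c in x]

# Example (arbitrary choices): mass 1/2 at each of +-e_1, +-e_2, alpha = 0.7.
atoms = [((1.0, 0.0), 0.5), ((-1.0, 0.0), 0.5),
         ((0.0, 1.0), 0.5), ((0.0, -1.0), 0.5)]
sample = lepage_sample(0.7, atoms, rng=random.Random(1))
```

For α close to 2 the series converges slowly and the truncation error can be substantial; the sketch illustrates the structure of the representation rather than a production sampler.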
We now give a proof of Theorem 1 using Lemmas 1 and 2.
Proof of Theorem 1 Let C_α, (Γ_i) and (W_i) be as in Lemma 2. Define

  X = (X_1, X_2, …, X_n) := C_α^{1/α} Λ(S^{n−1})^{1/α} Σ_{i=1}^∞ Γ_i^{−1/α} W_i.

Then Lemma 2 implies that X has distribution S_α(Λ). For j ∈ [n] and i ≥ 1, let W_i(j) denote the j-th component of W_i. By Markov's inequality, for any j = 1, 2, …, n and all h > 0 and ε > 0, the probability that |Σ_{i=k+1}^∞ Γ_i^{−1/α} W_i(j)| exceeds εh is bounded by (εh)^{−β} times the β-th moment of this tail sum. By picking β = kα + ε′ with ε′ sufficiently small and applying Lemma 1 with k + 1 in place of k (noting that, by symmetry, W_i(j) has the same distribution as ε_i|W_i(j)| with the two factors ε_i and |W_i(j)| independent), it follows that this moment is finite. This implies in particular that for any ε > 0,

  P(|Σ_{i=k+1}^∞ Γ_i^{−1/α} W_i(j)| > εh) = o(h^{−kα}).

Similarly, we have that

  P((X_1, X_2, …, X_n) ∈ hE) = P((X_1, X_2, …, X_n) ∈ hE, C_α^{1/α} Λ(S^{n−1})^{1/α} |Σ_{i=k+1}^∞ Γ_i^{−1/α} W_i(j)| ≤ εh for all j) + o(h^{−kα}).

To be able to simplify these expressions, first recall that if (Γ_1, Γ_2, …, Γ_{k+1}) are the first k + 1 arrivals of a mean one Poisson process and U_1, U_2, …, U_k ∼ unif(0, 1) are independent, then, given Γ_{k+1}, the vector (Γ_1, …, Γ_k) has the same distribution as the increasing rearrangement of (Γ_{k+1}U_1, …, Γ_{k+1}U_k). Using this, and now letting U_1, U_2, …, U_k be i.i.d. uniforms defined on the same probability space as everything else but independent of them, we see that for E^{δ,·} = E^{δ,+} or E^{δ,·} = E^{δ,−}, the relevant probabilities can be rewritten in terms of Γ_{k+1}, U_1, …, U_k and W_1, …, W_k. If, for each fixed x, we make a change of variables in the radial parts, the resulting integral is increasing in h. Combining the previous equation with Eqs. 11 and 12 and applying the monotone convergence theorem, it follows that the corresponding limits hold for any δ > 0. Noting that the integrand in Eq. 13 is monotone in δ and converges pointwise to the integrand in L(E^o, k, α), the desired conclusion follows by letting δ → 0 and applying the monotone convergence theorem.

Examples
We will now apply Theorem 1 to a few examples.
Case (i) Let X have two independent S_α components, so that Λ has mass 1/2 at each of ±e_1 and ±e_2, and let E = {x ∈ R^2 : x_1, x_2 > 1} (Fig. 1). Then it is easy to see that L(E^o, 2, α) = lim_{δ→0} L(E^{δ,+}, 2, α), and furthermore this common value is (C_α/2)^2. Applying Theorem 1 with k = 2, we obtain

  lim_{h→∞} h^{2α} P(X ∈ hE) = (C_α/2)^2,

which is of course consistent with what independence yields.
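For α = 1 (independent Cauchy components) this limit can be checked by hand, since P(S > h) = 1/2 − arctan(h)/π exactly. The following small computation is our illustration; it confirms that h^{2α} P(X ∈ hE) approaches (C_1/2)^2 = 1/π^2:

```python
import math

def cauchy_tail(h):
    # P(S > h) for a standard symmetric 1-stable (Cauchy) variable
    return 0.5 - math.atan(h) / math.pi

# For X = (S_1, S_2) with independent Cauchy components and
# E = {x : x_1 > 1, x_2 > 1}, P(X in hE) = P(S > h)^2, so
# h^2 P(X in hE) should approach (C_1/2)^2 = 1/pi^2.
for h in [10.0, 100.0, 1000.0]:
    print(h, (h * cauchy_tail(h)) ** 2, 1 / math.pi ** 2)
```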

Case (ii)
Let A ⊆ S^1 ∩ (ε, ∞)^2 for some ε > 0 and let C_A := {x ∈ R^2 : ‖x‖_2 > 1 and x/‖x‖_2 ∈ A} be the cone above A (Fig. 2). Then we have the following.

Proposition 1 Let X, A and C_A be as above, and assume in addition that the boundary of A has zero (one-dimensional) measure. Then

  lim_{h→∞} h^{2α} P(X ∈ hC_A) = L(C_A, 2, α) ∈ (0, ∞).

Proof We begin with the following computation, which is valid for any set A contained in S^1 ∩ (ε, ∞)^2.
For any set U, letting U^o be the interior of U, one easily checks that (C_U)^o = C_{U^o} and that the closure of C_U equals C_{Ū} together with a piece of S^1, keeping in mind that the interiors and closures are with respect to different spaces, in one case R^2 and in the other S^1. Therefore the above computation shows that L((C_A)^o, 2, α) = L(C_{A^o}, 2, α) and that the closure of C_A yields L(C_{Ā}, 2, α), where for the latter equation we also used the fact that the S^1 piece adds nothing to the relevant integral. Now, using the fact that the boundary of A has measure zero, we conclude that L((C_A)^o, 2, α) = L(C_A, 2, α). Since ε is fixed, it is easy to see that L((C_A)^{δ,+}, 2, α) is finite for sufficiently small δ, allowing us to conclude that L((C_A)^o, 2, α) = lim_{δ→0} L((C_A)^{δ,+}, 2, α). Theorem 1 with k = 2 now yields the result.
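The quantity L(C_A, 2, α) can be evaluated numerically for a concrete cone. The sketch below is our illustration: it assumes X has two independent S_α components (so Λ places mass 1/2 at each of ±e_1 and ±e_2), takes A to be the arc between the angles π/8 and 3π/8, uses the form of the integral defining L from Eq. 1, and uses the standard closed form for C_α with α ≠ 1; all of these specific choices are ours.

```python
import math

def L_cone(alpha, theta_lo, theta_hi, s_max=50.0, n=400):
    """Numerically evaluate L(C_A, 2, alpha) for the cone over the arc
    A = {angles in (theta_lo, theta_hi)} when Lambda puts mass 1/2 at each
    of +-e_1, +-e_2.  Only the ordered atom pairs (e_1, e_2) and (e_2, e_1)
    can produce a point s_1 x_1 + s_2 x_2 = (s_1, s_2) in the open first
    quadrant, each pair carrying Lambda^2-mass 1/4."""
    C = (1 - alpha) / (math.gamma(2 - alpha) * math.cos(math.pi * alpha / 2))
    ds = s_max / n
    integral = 0.0
    for i in range(n):
        s1 = (i + 0.5) * ds
        for j in range(n):
            s2 = (j + 0.5) * ds
            r = math.hypot(s1, s2)
            theta = math.atan2(s2, s1)
            if r > 1 and theta_lo < theta < theta_hi:
                integral += (alpha * s1 ** (-1 - alpha)) \
                    * (alpha * s2 ** (-1 - alpha)) * ds * ds
    # (C_alpha^2 / 2!) * [2 ordered pairs, each of mass 1/4] * integral
    return (C ** 2 / 2) * 0.5 * integral

value = L_cone(0.7, math.pi / 8, 3 * math.pi / 8)
print(value)  # finite and strictly positive
```

The value is strictly positive and finite, in line with Proposition 1: along this cone the probability decays at the hidden rate h^{−2α} rather than at the rate h^{−α} seen along the coordinate axes.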
Remark 9 This improves on Eq. 3 in this case, since it yields the correct decay rate and demonstrates the hidden regular variation behavior; the former result would only give lim_{h→∞} h^α P(X ∈ hC_A) = 0. Not surprisingly, when A is as large as possible with ε fixed, the integral tends to infinity as ε goes to 0; this is because we are getting closer to the support of the spectral measure.

Case (iii) This example, while fairly simple, has three different values arising in Eq. 2 when k = 1, and, in particular, Theorem 1 yields nonmatching upper and lower bounds. We let E be the set depicted in Fig. 3. It is easy to check that for any α ∈ (0, 2), we have that L(E^o, 1, α) = 0 and L(Ē, 1, α) = lim_{δ→0} L(E^{δ,+}, 1, α) = C_α/2, while using the independence of the components, it is immediate that the middle terms in Eq. 2 when k = 1 are C_α/4. Our next example illustrates a number of interesting phenomena, which we summarize in Proposition 2 after giving the example. This provides an example where (i) the decay rate has three possible behaviors depending on α, (ii) L(E^o, k, α) ≠ lim_{δ→0} L(E^{δ,+}, k, α) and (iii) the tail behavior can change drastically due to a modification of the set E in an arbitrarily small neighborhood of one point, namely (1, 1). It is also a "baby version" of the example following it, which will be crucially used in Forsström and Steif (in preparation).
Proposition 2 Let Λ, X and E be as above.
Proof We only prove (i) and (ii); (iii) and (iv) are fairly straightforward and left to the reader. We start with the proof of (ii). It is easy to see that L(E^o, 2, α) = L(Ē, 2, α), and their common value is easily verified to be infinite if and only if α ≥ 1 and strictly positive for all α ∈ (0, 1). Recognizing the integrand as the probability density function (up to a constant) of a Beta distribution with parameters 2α and 1 − α, the common value can, for α ∈ (0, 1), be expressed in closed form in terms of Gamma functions. The fact that lim_{δ→0} L(E^{δ,+}, 2, α) = ∞ is seen by noting that for any fixed δ > 0, the term Λ^2({(x_1, x_2) ∈ (S^1)^2 : s_1x_1 + s_2x_2 ∈ E^{δ,+}}) is uniformly bounded away from 0 for arbitrarily small s_2, and hence the integral diverges. This finishes the proof of (ii).
We now move to (i). Since X = (1, 1)S_1 − (0, 1)S_2, we have X = (S_1, S_1 − S_2), and this representation will be used for every α. We now proceed with the α ∈ (0, 1) case. It is not hard to show that, for every ε > 0, the event {X ∈ hE} can be compared with the corresponding events for slightly enlarged and slightly shrunken sets, and hence Theorem 1 gives matching bounds up to an ε-dependent factor. Letting ε → 0, we can apply the monotone convergence theorem to both sides (using the fact that E is open) and conclude Eq. 15 as desired. Now instead let α = 1. It is not hard to show that, for every ε > 0, by breaking up the relevant integral into [0, h] and [h, ∞) and using the fact that f is decreasing, one obtains the desired upper bound. Noting that Eq. 5 implies that the second term goes to 0 as h → ∞, and that Eq. 6 easily controls the first term, we can then let ε → 0 to complete the proof.
Finally, we do the case α ∈ (1, 2). Using the fact that f is decreasing and using Eq. 6, we obtain an upper bound of order h^{−(1+α)}, establishing the upper bound in Eq. 17. For the lower bound, fixing ε > 0, we obtain a bound of the same order, with an ε-dependent constant (Eq. 20). One can now let ε → 0, obtaining the lower bound in Eq. 17 and completing the proof.
Remark 10 If X is as in our first example, where we have independent components, one can construct a set, namely E := {x ∈ R^2 : x_1 > 1, x_2 > a(x_1 − 1)} for a ∈ (0, 1), which exhibits similar behavior to that in the above proposition. However, the above example, when generalized to three variables, is what we need in another context, and so we proceeded in this way.

Fig. 5 The figure above shows the set 2E^σ in Remark 11, for two different values of σ, together with the four points (in red) at which the spectral measure Λ from the same remark has support.

Remark 11 With the previous result in mind, one might wonder if any threshold for events of the type {X ∈ hE} will occur at α = 1. To show that this is not the case, fix α ∈ (0, 2) and σ > 0, and define the set E^σ as in Fig. 5.
Further, let S_1, S_2 ∼ S_α be i.i.d. and consider the decay rate of P(X ∈ hE^σ) as h → ∞. Then, using an argument very similar to the one in the proof of Proposition 2, one can show that there is a phase transition in the behavior of the decay rate of P(X ∈ hE^σ) at α = σ, with an exact power-law limit when α < σ.
In our next, and final, example we study one of the simplest three-dimensional permutation-invariant multivariate stable distributions, and show that it exhibits the same behavior as our previous example. Here we only study the case α ∈ (0, 1) in detail, but the cases α = 1 and α > 1 can be handled similarly to the proof of Proposition 2.
The proof of the following proposition follows the proof of Proposition 2 exactly, and therefore we only give a sketch of the proof here.