Holonomy of the Planar Brownian Motion in a Poisson Punctured Plane

We define a family of diffeomorphism-invariant models of random connections on principal G-bundles over the plane, whose curvatures are concentrated on singular points. We study the limit when the number of points grows to infinity whilst the singular curvature on each point diminishes, and prove that the holonomy along a Brownian trajectory converges towards an explicit limit.


Introduction
This paper is concerned with the way in which the Brownian motion winds around points in the plane. This question was studied in the past mostly from a homological point of view, that is, by studying the winding numbers of the Brownian motion around the points. One can mention in particular the celebrated result of Spitzer [26] about the Cauchy behaviour of the asymptotic winding around one point, when the Brownian motion runs up to a time that tends to infinity. It paved the way for an exact computation by Yor [29] of the distribution of this winding, at a fixed time, and also for further limit theorems when time tends to infinity [22]. Asymptotic windings around a family of points are also well known from the homological perspective (see e.g. [21]).
In [9-11, ...] (see also [13]), the effect of magnetic impurities on an electron in a 2-dimensional medium is studied by looking at the total winding (i.e. the sum of the windings) between this Brownian electron X and the Poisson-distributed impurities. Particular attention is paid to the limit when the number of impurities (i.e. the intensity of the Poisson process) goes to infinity. Following ideas of Werner on the set of points with given winding [27,28], we showed in [24] that, when the intensity K of the Poisson process P of impurities goes to infinity, X-almost surely, the averaged winding converges in distribution (with respect to P) towards a Cauchy variable centered at the Lévy area of X.
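The objects in play can already be simulated in the abelian setting. The following sketch discretizes a Brownian loop, samples Poisson points, and sums their winding numbers; the helper `winding`, the box size, and the division by K are illustrative choices, and the exact normalization and centering used in [24] may differ.

```python
import numpy as np

def winding(path, z0):
    """Winding number of a discretized closed path (complex array) around z0,
    obtained by summing the turning angles of path[k] - z0."""
    w = path - z0
    return np.sum(np.angle(w[1:] / w[:-1])) / (2 * np.pi)

def total_winding_over_K(K, n_steps=4000, seed=0):
    """Sum of windings of a closed Brownian loop around Poisson points of
    intensity K in a box, divided by K (indicative normalization only)."""
    rng = np.random.default_rng(seed)
    dz = (rng.normal(size=n_steps) + 1j * rng.normal(size=n_steps)) * np.sqrt(1.0 / n_steps)
    z = np.concatenate([[0.0 + 0j], np.cumsum(dz)])                    # Brownian path from 0
    z = np.concatenate([z, z[-1] * (1 - np.linspace(0, 1, 200)[1:])])  # close with a chord
    box = 2.0
    pts = rng.uniform(-box, box, size=(rng.poisson(K * (2 * box) ** 2), 2))
    total = sum(winding(z, x + 1j * y) for x, y in pts)
    return total / K
```

The chord closing the loop is discretized as well, so that each angular increment stays below π and the winding count remains exact for points near the chord.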
The model presented here corresponds to a more general interaction, associated with an arbitrary connected compact Lie group G, typically non-abelian. We consider a random flat G-bundle over R^2 \ P, each point of P being associated with a monodromy element whose distance to 1 in G is proportional to K^{-1}; this corresponds in the literature to a Markovian holonomy field [16,20], for which the underlying Lévy process is a Poisson one.
Our main theorem proves that the same kind of law arises in the limit as in the abelian case from [24]: X-almost surely, the holonomy along a Brownian curve X independent from P converges in distribution, as K → ∞, towards the analogue in G of a Cauchy distribution, first introduced in [14].
Apart from the insight this gives into the structure of windings of the planar Brownian motion, we are motivated initially by the relation with the Yang-Mills field (see e.g. [12,17,19] for constructions and studies of the Yang-Mills field, or [7] for a more recent treatment built upon Parisi-Wu stochastic quantization and regularity structures). The 2-dimensional Yang-Mills field over the plane is a random map from a set of sufficiently smooth loops on R^2 to a compact connected Lie group G. This map satisfies the algebraic properties of the holonomy function associated to a smooth connection on the trivial G-bundle over R^2, but with a much lower regularity. The Yang-Mills field is therefore heuristically understood as the holonomy map of a very irregular random connection on this bundle. On the Sobolev scale, the regularity of this random connection is expected to be H^{-ε} (in a suitable gauge choice), which is by far too low to allow for a reasonable definition of its holonomy along typical trajectories of the Brownian motion.
However, being able to define this holonomy along the Brownian motion would have a significant impact in quantum field theory, for it would allow one to define probabilistically and compute the k-point functions of a Gaussian free field coupled with the Yang-Mills field through Symanzik identities (see e.g. [18, Theorem 7.3] for a discrete analogue).
The idea of replacing the Yang-Mills field with a Markovian holonomy field can be found in [1].
Besides, the Yang-Mills holonomy along a Brownian path was already studied long ago, in the commutative case, by Albeverio and Kusuoka [2], with a radically different approach from ours. Their renormalization procedure consists in cutting off the high-frequency component of the Yang-Mills field, computing the Brownian holonomy in G, embedded as a matrix group so that one can average this holonomy, and then rescaling this average to get a non-trivial limit as the cut-off is removed.
In the particular case where the group G is abelian with Lie algebra g, the Yang-Mills field can be built from a g-valued white noise φ on R^2. For a loop ℓ based at 0 and smooth enough, the Yang-Mills holonomy along ℓ is then simply given by φ(θ_ℓ), where θ_ℓ is the winding function associated with ℓ. The pairing φ(θ_ℓ) is well-defined for θ_ℓ ∈ L^2(R^2), and in particular for ℓ ∈ C^1.
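As an illustration of this abelian construction (here with g = R), the Gaussian variable φ(θ_ℓ) has variance ‖θ_ℓ‖²_{L²}; the grid discretization below is an assumption of this sketch, not the paper's construction. For a circle of radius 1 traversed once, θ_ℓ is the indicator of the unit disk, so the variance is π.

```python
import numpy as np

def pairing_variance(theta, cell_area):
    """Variance of the white-noise pairing φ(θ): the squared L² norm of θ,
    approximated on a grid where theta holds one value per cell."""
    return np.sum(theta ** 2) * cell_area

n = 1000
xs = np.linspace(-1.5, 1.5, n)
X, Y = np.meshgrid(xs, xs)
theta = (X ** 2 + Y ** 2 < 1.0).astype(float)   # winding function of the unit circle
cell_area = (xs[1] - xs[0]) ** 2
var = pairing_variance(theta, cell_area)        # ≈ π
sample = np.random.default_rng(0).normal(0.0, np.sqrt(var))   # one sample of φ(θ_ℓ)
```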
Another way to define a map from a set of loops based at 0 to G in a multiplicative way is to take a Poisson process P, and then to define a group homomorphism h from the fundamental group π_1(R^2 \ P, 0) to G. We will define a family of such random pairs (P, h), indexed by two parameters K (the intensity of the point process) and ι (the curvature or charge associated with each point, that is, the distance between 1_G and the holonomy of a small loop winding once around one of these points), in such a way that their law is invariant under volume-preserving diffeomorphisms and gauge transformations. When ι is proportional to K^{-1/2} and K tends to infinity, these random connections converge towards a Yang-Mills field, in the sense of finite-dimensional marginals. Our goal, however, is to study another scaling regime, when ι is proportional to K^{-1} instead. Then, the holonomy of any given smooth loop converges in distribution towards 1_G, but the holonomy along the Brownian curve (made into a loop by joining the endpoints smoothly) does not. Instead, it converges towards a 1-stable distribution in G.

Definition of the Model and Presentation of the Main Result
Let G be a connected compact Lie group, of which we denote the unit element by 1. We endow its Lie algebra g with a bi-invariant scalar product, and we denote by ‖·‖ the associated norm. This Euclidean structure on g gives rise to a bi-invariant Riemannian structure on G, and we denote by d_G the corresponding distance. For a positive real number K, the uniform probability distribution on the sphere of radius 1/K of g is denoted by μ_K, and we abbreviate μ_1 into μ.
Let P^G_K be a Poisson process on R^2 × g, with intensity Kλ ⊗ μ_K, where λ is the Lebesgue measure on R^2. Let also P_K be the projection of P^G_K on R^2. The projection π : P^G_K → P_K is almost surely injective, and we define h_K : P_K → G as the unique map such that (Id_{R^2}, h_K) = π^{-1}. It can be uniquely extended into a group homomorphism h̃_K from the free group F_{P_K} generated by P_K to G.
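A minimal sketch of sampling this object, under the assumption g ≅ R^3 (e.g. G = SU(2)): puncture locations form a Poisson process of intensity Kλ on a box, and each carries a charge drawn from μ_K, the uniform law on the sphere of radius 1/K in g. The box restriction and the function name are conveniences of this sketch.

```python
import numpy as np

def sample_punctures(K, box=1.0, seed=0):
    """Sample the punctures P_K in [0, box]^2 and their g-valued charges."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(K * box * box)               # number of punctures in the box
    pts = rng.uniform(0.0, box, size=(n, 2))     # the projected set P_K
    v = rng.normal(size=(n, 3))                  # isotropic directions in g
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return pts, v / K                            # charges drawn from μ_K

pts, charges = sample_punctures(K=50.0)
```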
To define a group homomorphism from the fundamental group π_1(R^2 \ P, 0) to G, it thus suffices to fix an isomorphism between F_{P_K} and π_1(R^2 \ P, 0). This, however, cannot be done in a canonical way. For any locally finite set P ⊂ R^2, we will define (Definition 3.6) a family Prop_P ⊂ Iso(π_1(P), F_P) of group isomorphisms from π_1(P) := π_1(R^2 \ P, 0) to F_P that we will call proper. This family has the following properties:
• It is uniquely defined from the data of the smooth oriented surface R^2 and the subset P. In particular, it is invariant by volume-preserving diffeomorphisms.
• If (h_x)_{x∈P} ∈ G^P is a collection of independent random variables with conjugation-invariant distributions, then for any (deterministic) φ, ψ ∈ Prop_P, the homomorphisms h̃ ∘ φ and h̃ ∘ ψ are equal in distribution.
Then, for any family of deterministic loops (ℓ_i)_{i∈I} such that λ(⋃_{i∈I} Range(ℓ_i)) = 0, a family of G-valued random variables (ω_i)_{i∈I} is defined by ω_i = h̃_K(φ([ℓ_i])). The distribution of this family does not depend on the specific choice of φ ∈ Prop_{P_K}, provided that φ is independent from P^G_K conditionally on P_K. Let also X : [0, 1] → R^2 be a planar Brownian motion started from 0, independent from P^G_K, and defined on a probability space (Ω_X, F_X, P_X), and let X̄ be its concatenation with the straight line segment between its endpoints. Then, X̄ almost surely avoids P_K, and we set ω_{φ,K}(X̄) = h̃_K(φ([X̄])). The main result of this paper is the following.
Theorem 1. Assume that the proper isomorphism φ between π_1(P_K) and F_{P_K} is independent of (X, P^G_K) conditionally on P_K. Then, P_X-almost surely, as K → ∞, ω_{φ,K}(X̄) converges in distribution towards a 1-stable distribution on G that does not depend on φ.
The limiting distribution ν_*, which we call 1-stable, can be described as follows. Let d denote the dimension of G, and let ν_σ be the symmetric 1-stable probability distribution on g with scale parameter σ, defined by its density with respect to the Lebesgue measure,

dν_σ/dλ(x) = c_d σ / (σ^2 + ‖x‖^2)^{(d+1)/2},    (1)

where c_d is a normalization constant. There exists then a g-valued symmetric 1-stable process Ỹ such that Ỹ(1) is distributed according to ν_σ. This process has jumps, but we can 'fill' them with straight-line segments to obtain a continuous curve Y : [0, 1] → g, defined up to increasing reparametrization of [0, 1]. Such a parametrization can be chosen [5, Theorem 4.1] in such a way that Y has finite p-variation for every p > 1. In particular, taking p < 2 allows one to define the Cartan development y of Y on G. Then, ν_* is equal to the distribution of y(1).
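A sample of ν_* can be approximated numerically. The sketch below assumes G = SO(3), g ≅ R^3: it draws i.i.d. isotropic Cauchy increments (a symmetric 1-stable process on a grid of mesh 1/n) and develops them by multiplying the corresponding exponentials, which is the discrete counterpart of filling the jumps and taking the Cartan development.

```python
import numpy as np

def so3_exp(v):
    """Rodrigues formula: exponential of v ∈ R^3 ≅ so(3) into SO(3)."""
    t = np.linalg.norm(v)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    if t < 1e-12:
        return np.eye(3) + K
    return np.eye(3) + np.sin(t) / t * K + (1 - np.cos(t)) / t ** 2 * (K @ K)

def sample_nu_star(sigma=0.1, n=1000, seed=0):
    """Approximate sample of y(1): develop n isotropic Cauchy increments,
    each of scale sigma/n (1-stable increments over time 1/n)."""
    rng = np.random.default_rng(seed)
    Z = rng.normal(size=(n, 3)) * (sigma / n)    # Gaussian numerator
    W = np.abs(rng.normal(size=(n, 1)))
    dY = Z / W                                   # isotropic Cauchy increments
    y = np.eye(3)
    for v in dY:
        y = y @ so3_exp(v)                       # one development step
    return y                                     # a rotation, approx. ~ ν_*
```

The ratio of a Gaussian vector by an independent absolute normal is the standard multivariate (isotropic) Cauchy distribution.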
Remark 2.1. The informed reader will have recognized that the random group homomorphism h̃_K ∘ φ can be described as a Markovian holonomy field (see e.g. [16,20] for the definition of Markovian holonomy fields).
To be more specific, let P(R^2) be a set of smooth enough paths in R^2 (say, Lipschitz-continuous, although some other slightly different classes of paths can be used as well) and let A : P(R^2) → G be a Markovian holonomy field, whose distribution is determined by its associated G-valued Lévy process Z, which jumps at rate K from Z_{t−} to exp_G(V_t) Z_{t−}, where V_t is uniformly distributed on ∂B(0, K^{-1}) ⊂ g, and stays constant in between these jumps. Then, it can be shown that there exists an underlying set P such that the restriction of A to loops which avoid P passes to the quotient into a group homomorphism from π_1(R^2 \ P) to G. This quotient is then equal in distribution to the group homomorphism h̃_K ∘ φ we will define.
It is conceptually not difficult to see that these two objects agree in distribution, but it is technical in practice, and it is not the goal of this paper, which is indeed very much inspired by the theory of Markovian holonomy fields (and in particular by [20]) but does not use it directly. Besides, the fact that the associated Lévy process is pure jump and only contains big jumps allows for much simplification compared to the general theory of Markovian holonomy fields. In fact, it is technically closer to random ramified coverings (see [20]) than to Markovian holonomy fields (although, since G isn't discrete, it does not define a covering but a gauge-equivalence class of flat G-bundles instead). In particular, for any continuous loop, the holonomy along that loop is well defined with probability 1, whilst for general Markovian holonomy fields the loop needs a bit of regularity (such as Lipschitz-continuity).
We thus provide an independent proof that the distribution of h̃_K ∘ φ does not depend on the specific choice of φ. This would follow automatically from the equality in distribution we just mentioned. We would like to point out, however, that this independent proof extends beyond the framework of Markovian holonomy fields, since it applies to any locally finite point process, which need not be Poisson distributed (hence corresponding to a non-Markovian holonomy field).

Strategy of the proof.
In order to guide us through the proof, we first give an idea of why the main theorem holds, and of the main difficulty that arises, before giving a summary of the steps involved in solving it.
Once a proper isomorphism has been fixed, the homotopy class [ℓ] ∈ π_1(P_K) of a loop ℓ can be understood as a word in the letters {x, x^{-1} : x ∈ P_K}. If we count algebraically each occurrence of a letter x in [ℓ], the signed integer that we obtain is equal to the winding number θ(x, ℓ) of ℓ around the point x. If the order in which these letters appear were not important, the class [ℓ] would thus be equal to x_1^{θ(x_1,ℓ)} ⋯ x_n^{θ(x_n,ℓ)}, where {x_1, …, x_n} = P_K. In Sect. 4, we will prove that Theorem 1 does hold if we replace ω_{φ,K}(X̄) with g_1^{θ(x_1,X̄)} ⋯ g_n^{θ(x_n,X̄)}, where {(x_1, g_1), …, (x_n, g_n)} = P^G_K is an ordering of P^G_K, independent of P^G_K conditionally on P_K. This will follow rather easily from the corresponding result with G replaced with R, and indeed proves the theorem in the case where the group G is abelian.
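The algebraic letter count above is just the abelianization of the word; a minimal sketch (the word encoding as (letter, ±1) pairs is an assumption of this illustration):

```python
from collections import Counter

def exponent_sums(word):
    """word: list of (letter, ±1) pairs encoding a word in F_{P_K};
    returns letter -> signed count, i.e. the winding numbers θ(x, ℓ)."""
    counts = Counter()
    for letter, eps in word:
        counts[letter] += eps
    return dict(counts)

# e.g. the word x1 x2 x1^{-1} x1 x2 abelianizes to x1^1 x2^2
word = [("x1", 1), ("x2", 1), ("x1", -1), ("x1", 1), ("x2", 1)]
```

When G is commutative, these signed counts are the only data needed to evaluate the holonomy, which is why the order of the letters can be forgotten.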
Let us now explain why it is reasonable to think that the order indeed doesn't matter. First of all, with the kind of scaling that we have, and with the heavy-tailed Cauchy distribution appearing, only the extremely large exponents (of the order of K) should contribute to the limit.
If a point appears once with an exponent of the order of K, it likely has a winding number which is at least of the order of K. The number of such points in P is only of the order of 1. Let us consider one of them, x. Then, the Brownian motion X must go extremely close to x. Let us say that [s, t] is some small interval of time such that X_s and X_t are relatively far from x, but during which X goes very close to x. During that time, it might wind a large number of times, say θ̃(x), around x. Then, it is very unlikely that there is a second period of time during which X goes close to x, and since this is the only reasonable way for X to accumulate winding around x, we can deduce that θ̃(x) ≈ θ(x).
During the same period [s, t], the homotopy class of X will vary a lot: it will go from, say, g to g h x^{θ̃(x)} h′. Actually, it is possible to choose s and t so that additionally h = h′ = 1, provided that the proper isomorphism is chosen so that x corresponds to the path going straight ahead from 0 to x, winding once around x, and then going back straight to 0. If x_1, …, x_k are these special points that appear once with a large exponent, we end up with a writing of [X̄] as

[X̄] = [X̄_{0,s_1}] x_1^{θ(x_1)} [X̄_{t_1,s_2}] x_2^{θ(x_2)} ⋯ x_k^{θ(x_k)} [X̄_{t_k,1}],

where X̄_{s,t} is the loop that starts from 0, goes straight to X_s, then follows X up to X_t, and then goes straight back to 0. Since the loops X̄_{t_i,s_{i+1}} do not have large winding around any point, it is expected that ω_{φ,K}([X̄_{0,s_1}] ⋯ [X̄_{t_k,1}]) is close to 1_G. From the conjugation invariance of the variables associated with x_1, …, x_k, one can then expect that the distribution of ω_{φ,K}([X̄]) is very close to that of g_1^{θ(x_1)} ⋯ g_k^{θ(x_k)}. One of the main problems we face when trying to make this rigorous is to actually get some control on the words [X̄_{t_i,s_{i+1}}]: their length cannot be easily bounded, and we expect it to be way too large in general for a naive approach to be conclusive. What happens is that each time the Brownian motion does a 'large winding' around x ∈ P (that is, a winding during which it is not especially close to x), what will appear in [X̄_{t_i,s_{i+1}}] is not only x^{±1}, but instead a conjugate c x^{±1} c^{-1}, where c is a word whose length is of order K in general. Though the variables corresponding to x and c x c^{-1} are identical in law, replacing one with the other would break the independence between the variables: as soon as X does a second turn around x, we get both c x c^{-1} and c′ x c′^{-1} appearing in [X̄]. This independence is needed to apply some law of large numbers and show that ω_{φ,K}([X̄_{0,s_1}] ⋯ [X̄_{t_k,1}]) converges to 1_G. We solve this problem by finding a hierarchical structure between the different points that allows us to simultaneously keep some independence and ignore the length of the conjugators c.
To be more specific, we will order the points in P = {x_1, x_2, …} and decompose the class h = [X̄] into a product h = h_1 ⋯ h_n, and decompose further each h_i as

h_i = h_{i,1} ⋯ h_{i,j_i},    (2)

in such a way that each h_{i,j} is a conjugate of x_i^{±1}, say h_{i,j} = c_{i,j} x_i^{±1} c_{i,j}^{-1}, with only the letters (x_k)_{k<i} allowed to appear in c_{i,j}. The goal of Sect. 5 is to show that such a decomposition is always possible, and to understand some of its aspects.
It is only in Sect. 8, with Proposition 8.3, that we will study the probabilistic interest of such a decomposition. To each x_i will be associated a random element X_i ∈ G, very close to the unit, and we will study the product X_h = X_{i_1} ⋯ X_{i_k}, where h = x_{i_1} ⋯ x_{i_k}. Roughly speaking, we then manage to control the distance from X_h to 1 in a way that depends only on the j_i that appear in (2), and with a bound which is almost the same as when the group is commutative.
The geometric interest of this decomposition is studied in Sect. 6. Here, we make a specific choice for the order between the points of P_K: we order them according to their distance to the origin. This choice allows us to show that for each point x_i, the number j_i cannot be larger than some number θ_{1/2}(x_i) which, roughly speaking, counts a number of half-turns of X around x_i. The point is that this number θ_{1/2}(x_i) depends on X and on x_i, but not on the other points of P_K.
In Sect. 7, we finally use the fact that our path is Brownian. We will see that the quantity θ_{1/2}(x_i) is then strongly related to the winding number θ(x_i), and that it is of order roughly θ(x_i)^2. We will also consider the partition β_{i,l} of j_i, defined by the indices k such that the conjugators c_{i,k} and c_{i,k+1} are different. We will give various upper bounds (holding with large probability) on the highest values β_{i,1}, β_{i,2}, β_{i,3}, …, and on the possible size of the set of indices i for which these values can be large.
In Sect. 8, we will arrange these bounds together to finally prove the main theorem. Section 3 is devoted to proving that the distribution of ω_{φ,K} does not depend on φ.

Invariance by Volume-Preserving Diffeomorphism: From a Point Set to a Connection
In this section, we define the family Prop_P of proper group isomorphisms from π_1(P) to the group F_P freely generated by P. This family does not depend on any additional data, such as a Riemannian structure on the plane. We then show that the random connection ω_{φ,K} defined in the previous section does not depend, in distribution, on the isomorphism φ ∈ Prop_P. We will see that the braid group acts freely and transitively on Prop_P, so that we should first understand how braids act on the distribution of random variables.
3.1. An easy lemma. Let B_n be the Artin braid group with n strands, generated by b_1, …, b_{n-1} subject to the relations b_i b_{i+1} b_i = b_{i+1} b_i b_{i+1} and b_i b_j = b_j b_i for |i − j| ≥ 2. It admits a group homomorphism onto the symmetric group S_n, which maps the generator b_i to the transposition σ(b_i) = σ_i = (i, i+1); we denote by σ this projection. It also admits a natural action on G^n, given by

b_i • (g_1, …, g_n) = (g_1, …, g_{i−1}, g_{i+1}, g_{i+1}^{-1} g_i g_{i+1}, g_{i+2}, …, g_n),

as well as an action ρ on F_n, given by ρ(b_i) : x_i ↦ x_{i+1}, x_{i+1} ↦ x_{i+1}^{-1} x_i x_{i+1}, and x_j ↦ x_j for j ∉ {i, i+1}. We propose here a simple lemma, which we do not use directly but which is somehow the key point to understand the distributional invariance of ω_{φ,K}. It was first proved in [6, Lemma 6.12].
Lemma 3.1. Let (X_1, …, X_n) be a family of independent G-valued random variables, each with a conjugation-invariant distribution. Then, the following equalities in distribution hold:
(i) for all i ∈ {1, …, n−1}, b_i • (X_1, …, X_n) = (X_1, …, X_{i−1}, X_{i+1}, X_i, X_{i+2}, …, X_n);
(ii) for all b ∈ B_n, b • (X_1, …, X_n) = σ(b) • (X_1, …, X_n), where σ(b) acts by permuting the coordinates.

Proof. For the first item, by the independence assumption, it suffices to show that (X_{i+1}, X_{i+1}^{-1} X_i X_{i+1}) and (X_{i+1}, X_i) are equal in distribution. For j ∈ {i, i+1}, let P_j be the law of X_j. Then, for all bounded measurable f : G^2 → R,

E[f(X_{i+1}, X_{i+1}^{-1} X_i X_{i+1})] = ∫∫ f(y, y^{-1} x y) P_i(dx) P_{i+1}(dy) = ∫∫ f(y, x) P_i(dx) P_{i+1}(dy) = E[f(X_{i+1}, X_i)],

where the middle equality uses the conjugation invariance of P_i, which concludes the proof of the first item.
We can thus apply the first item in the lemma to this family, and using also the recursion hypothesis, we deduce the equality in distribution for b, which concludes the recursion and hence the proof.
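The braid action on G^n used above can be checked numerically. The sketch below verifies the braid relation b_1 b_2 b_1 = b_2 b_1 b_2 on generic invertible matrices (standing in for elements of G); the encoding of tuples as lists of NumPy arrays is a convenience of this illustration.

```python
import numpy as np

def act(i, g):
    """Action of the braid generator b_{i+1} (0-indexed i) on a tuple in G^n:
    (g_i, g_{i+1}) -> (g_{i+1}, g_{i+1}^{-1} g_i g_{i+1})."""
    g = list(g)
    gi, gj = g[i], g[i + 1]
    g[i], g[i + 1] = gj, np.linalg.inv(gj) @ gi @ gj
    return g

rng = np.random.default_rng(0)
g = [np.eye(2) + 0.5 * rng.normal(size=(2, 2)) for _ in range(3)]  # generic invertible matrices
lhs = act(0, act(1, act(0, g)))   # apply b1, then b2, then b1
rhs = act(1, act(0, act(1, g)))   # apply b2, then b1, then b2
# lhs and rhs agree entrywise, as the braid relation predicts
```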

3.2. Finite set of punctures.
The plane is endowed with its differential structure and orientation, and a point 0 is fixed. Let P be a finite subset of R^2 \ {0}.
The fundamental group π_1(P) = π_1(R^2 \ P, 0) is a free group with #P generators. A basis can be obtained by first choosing #P non-intersecting and non-self-intersecting smooth paths (γ_x)_{x∈P}, parameterized by [0, 1], with nowhere vanishing derivative γ̇_x, and such that for all x ∈ P, γ_x is a path from 0 to x. Any such path γ_x then determines a unique element [ℓ_x] in π_1(P), a representative of which is given by a path that goes from 0 to a small neighbourhood of x following γ_x, then turns once around x in the trigonometric direction, and finally goes back to 0 following γ_x backward. The family ([ℓ_x])_{x∈P} then freely generates π_1(P), and therefore determines a unique group isomorphism φ((γ_x)_{x∈P}) between π_1(P) and F_P. Figure 1 below represents a possible choice for the γ_x, and a path in one of the corresponding classes [ℓ_x]. We define Prop_P as the set of isomorphisms obtained in this way, which we call proper.
For a given map h : P → G, in order to understand how ω_{φ,h} := h̃ ∘ φ (where h̃ : F_P → G is the homomorphism extending h) depends on φ, one should first understand how proper isomorphisms are related to one another. To this end, let us define Aut as the set of orientation-preserving homeomorphisms of R^2 that map 0 to itself and P to itself (possibly permuting the elements of P). A function f ∈ Aut induces a group automorphism ρ(f) of π_1(P), given by ρ(f)([ℓ]) = [f ∘ ℓ]. It also induces a group automorphism σ(f) of F_P, which maps x ∈ P to f(x). Finally, it defines a transformation χ(f) of Prop_P given by χ(f)(φ) = σ(f) ∘ φ ∘ ρ(f)^{-1}. The three maps ρ, σ and χ are then easily seen to be group actions of Aut.

Lemma 3.2. The action χ of Aut on Prop_P is transitive.

Proof. We fix a Euclidean structure on R^2. Let φ, ψ be two proper isomorphisms, respectively obtained from the families of paths (γ_x)_{x∈P} and (δ_x)_{x∈P}; if f ∈ Aut maps each γ_x to the corresponding δ path, then χ(f)(φ) = ψ. Thus, it suffices to show that such a homeomorphism f of R^2 exists for all families (γ_x)_{x∈P} and (δ_x)_{x∈P}. We can assume that for all x ∈ P, δ_x is the piecewise linear path from 0 to x.
Figure 2 below pictures the different steps we use to build such a homeomorphism.
For each x ∈ P and ε > 0, let γ^ε_x be the linear segment of length ε, starting from x and with direction γ̇_x(1). For ε_0 > 0 small enough, none of the paths γ^{ε_0}_x crosses any of the paths γ_y for y ∈ P, nor any γ^{ε_0}_y for y ≠ x. For x ∈ P, let then U_x be the ball of center x and radius ε_1 = ε_0/2. In particular, these balls are disjoint. Let x̂ = x + ε_1 γ̇_x(1) be the point where γ^{ε_0}_x reaches the boundary of U_x. Let then U_∞ be a relatively compact, connected and simply connected neighbourhood of the union of the ranges of the γ_x, with a smooth boundary, small enough for all the points x̂ to lie outside its closure, and set K_∞ = R^2 \ U_∞. The boundary of U_∞ is piecewise smooth, thus homeomorphic to a circle, which induces a cyclic order between the points x̂. Since K_∞ is homeomorphic to R^2 \ B(0, 1), one can find for each x a continuous path γ̃_x from x̂ to its successor s(x̂) = s(x), in such a way that these paths stay in K_∞ and do not intersect each other.
Then, for each x, the concatenation of γ_x, γ^{ε_1}_x, γ̃_x and γ_{s(x)}^{-1} is a continuous loop, and the Jordan curve theorem ensures the existence of a homeomorphism from its interior to an open triangle. Applying this to each of these loops, and to the unbounded component delimited by the concatenation γ̃_x γ̃_{s(x)} γ̃_{s^2(x)} ⋯, we obtain the desired homeomorphism.

Lemma 3.3. Let Aut_0 be the set of elements of Aut isotopic to the identity. Then, the restriction of χ to Aut_0 is trivial. That is, for every f ∈ Aut_0 and φ ∈ Prop_P, χ(f)(φ) = φ. Stated otherwise, χ passes to the quotient into a (transitive) action χ̄ of the mapping class group π_0(Aut) = Aut / Aut_0.

Proof. Let f ∈ Aut_0, and let h be an isotopy from f to the identity. Then, for all x ∈ P, the isotopy provides a homotopy between f ∘ γ_x and γ_x, from which χ(f)(φ) = φ follows.

This allows us to deduce the following result.

Lemma 3.4. Let P be a finite set and (h_x)_{x∈P} ∈ G^P be a family of independent random variables, each having a distribution invariant by conjugation. Then, the law of ω_{φ,h} does not depend on φ ∈ Prop_P.
Proof. Since the mapping class group π_0(Aut) acts transitively on Prop_P, it suffices to show that, for one given proper isomorphism φ and for all f ∈ Aut, ω_{χ(f)(φ),h} and ω_{φ,h} are equal in distribution. The mapping class group π_0(Aut) is known (see e.g. [3,4]) to be isomorphic to the Artin braid group B_n. An isomorphism can be obtained as follows. Fix an identification between the plane and C, such that the n points of P are identified with 1, …, n, and the origin is located in {z : ℜ(z) < −3/4}. For j ∈ {1, …, n−1}, we define the function f_j ∈ Aut which maps j + 1/2 + re^{iθ} to j + 1/2 + re^{i(θ−ψ(r))}, where ψ : R_+ → [0, π] is a continuous function equal to π for r ≤ 1/2 and to 0 for r ≥ 3/4. This function maps x to itself for x ∈ {1, …, n} \ {j, j+1} and exchanges j with j+1 (see Fig. 3 below), so that σ(f_j) = σ_j. In fact, taking for γ_x the straight line segment from the origin to x (and for φ ∈ Prop_P the element associated to this family of paths), we see that χ(f_j) acts on φ as the braid generator b_j. The family ([f_j])_{j∈{1,…,n−1}} generates π_0(Aut), and the isomorphism ι between π_0(Aut) and B_n is obtained by mapping [f_j] to the generator b_j.
In particular, since this family is generating, it suffices to show that for all j ∈ {1, …, n−1}, ω_{χ(f_j)(φ),h} and ω_{φ,h} are equal in distribution. Since these are group morphisms and since (φ^{-1}(x_i))_{i∈{1,…,n}} generates π_1(P), it suffices to compare the two families of values on these generators. The left hand side is simply (h_1, …, h_n). The right hand side is given by the braid action of b_j on (h_1, …, h_n). We conclude as in Lemma 3.1, that is, by using the fact that the (h_x)_{x∈P} are independent with conjugation-invariant distributions.

Remark 3.5. If we assume that the variables (h_x)_{x∈P} are identically distributed, the proof can be slightly simplified by using the fact that both the action of B_n at the source and the action of S_n at the target preserve the distribution of ω_{φ,h}. We now extend Lemma 3.4 to the case when P is no longer finite but only locally finite.

3.3. Locally finite set of punctures.
Definition 3.6. Let P be a locally finite subset of R^2. We say that a group morphism φ : π_1(P) → F_P is proper if there exists a family (γ_x)_{x∈P} which satisfies the following conditions.
• For all x ∈ P, γ_x : [0, 1] → R^2 is a path from 0 to x, with no self-intersection, a nowhere vanishing derivative, and such that φ([ℓ_x]) = x ∈ F_P, where [ℓ_x] is the previously defined homotopy class associated to γ_x.
• For all distinct x, y ∈ P, the ranges of γ_x and γ_y intersect only at 0.
• φ is a group isomorphism.
We set Prop_P the set of proper isomorphisms.
Proper isomorphisms always exist: if no two points of P are collinear with 0, then we can take each γ_x to be the straight-line path from 0 to x. Otherwise, and since the notion of proper isomorphism does not rely on the specific Euclidean structure of R^2, it suffices to apply a diffeomorphism to return to the case where no two points of P are collinear with 0.

Fig. 4. The family ([ℓ_{x_i}])_{x∈P} can be free but not generating. In this example, the paths γ_{x_i} are drawn one after the other. The path going to x_{i+1} passes above the point x_{i+2}, and then navigates until it reaches x_{i+1}. The black loop which appears on the first picture does not lie in the group generated algebraically by the [ℓ_{x_i}]. Formally, it would be described as w[ℓ_{x_1}]w^{-1} with w a bi-infinitely long word in the [ℓ_{x_i}].

Fig. 5. The two proper isomorphisms corresponding to the paths in red and blue are not related by the action of any braid.

Remark 3.7. Contrary to the case when P is finite, the last condition is not superfluous. The subtlety is that the group generated by ([ℓ_x])_{x∈P} is a direct limit of groups, whilst π_1(P) is an inverse limit, in general larger. As a counter-example, one can consider the case pictured in Fig. 4.
Besides, neither the direct limit of the finite braid groups nor π_0(Aut(R^2 \ P)) acts transitively on Prop_P anymore. Figure 5 shows two proper isomorphisms which are not related by any such braid.
A proper basis b = ([ℓ_x])_{x∈P} induces an isomorphism between π_1(P) and the free group F_P, and therefore allows us to extend the definition of ω_b to the case when P is locally finite.

Lemma 3.8. Let P be a locally finite set and (h_x)_{x∈P} ∈ G^P be a family of independent random variables, each having a distribution invariant by conjugation. Then, the finite-dimensional distributions of ω_{φ,h} do not depend on φ ∈ Prop_P.
Proof. During this proof, for P′ ⊆ P, we write [ℓ_x]_{P′} for the homotopy class of ℓ_x in R^2 \ P′.
Let φ, ψ ∈ Prop_P be two proper isomorphisms, associated respectively with the families ([ℓ_x]_P)_{x∈P} and ([ℓ′_x]_P)_{x∈P}. Let g_1, …, g_k ∈ π_1(P). Since ([ℓ_x]_P)_{x∈P} generates π_1(P), there exists a finite set P′ ⊂ P such that for all i ∈ {1, …, k}, g_i is spanned by ([ℓ_x]_P)_{x∈P′}. Thus, to show that (ω_{φ,h}(g_i))_{i∈{1,…,k}} and (ω_{ψ,h}(g_i))_{i∈{1,…,k}} are equal in distribution, it suffices to show that for all finite sets P′ ⊂ P, the families (ω_{φ,h}([ℓ_x]_P))_{x∈P′} and (ω_{ψ,h}([ℓ_x]_P))_{x∈P′} are equal in distribution. Since ([ℓ′_x]_P)_{x∈P} also generates π_1(P), for all x ∈ P′, there exist n_x ∈ N, y_{x,1}, …, y_{x,n_x} ∈ P and ε_{x,1}, …, ε_{x,n_x} ∈ {±1} such that

[ℓ_x]_P = [ℓ′_{y_{x,1}}]_P^{ε_{x,1}} ⋯ [ℓ′_{y_{x,n_x}}]_P^{ε_{x,n_x}}.

Let P″ be the finite set {y_{x,i} : x ∈ P′, i ∈ {1, …, n_x}}. For x ∈ P′, the homotopy class of ℓ_x in π_1(P″) is given by the image of this decomposition under the canonical projection π : [ℓ]_P ↦ [ℓ]_{P″} from π_1(P) to π_1(P″). Since φ, as an element of Prop_P, is associated with the family ([ℓ_x]_P)_{x∈P}, it induces an element of Prop_{P″}, and similarly for ψ. Since P″ is finite, one can apply Lemma 3.4 to these morphisms in Prop_{P″}, and deduce that the two families above are equal in distribution, which concludes the proof.

Proposition 3.9. Assume that P^G_K is a Poisson process with intensity Kλ ⊗ ν, where λ is the Lebesgue measure on R^2 and ν is a conjugation-invariant measure on G. Let P_K be the projection of P^G_K on R^2, and let h_K : P_K → G be the associated map, defined as before. Then, the finite-dimensional distributions of ω_{φ,h_K} do not depend on the σ(P_K)-measurable proper isomorphism φ.
Besides, it is invariant under orientation- and volume-preserving diffeomorphisms, in the sense that for every deterministic orientation- and volume-preserving diffeomorphism ψ of R^2 with ψ(0) = 0, and for all continuous oriented loops ℓ_1, …, ℓ_k : [0, 1] → R^2 based at 0 and whose ranges have vanishing Lebesgue measure,

(ω_{φ,h_K}([ψ ∘ ℓ_1]), …, ω_{φ,h_K}([ψ ∘ ℓ_k])) = (ω_{φ,h_K}([ℓ_1]), …, ω_{φ,h_K}([ℓ_k])) in distribution.

Remark that the vanishing-measure assumption on the ranges is necessary for the ω_{φ,h_K}([ℓ_i]) to be almost surely well-defined.
Proof. Let φ, ψ be two σ(P_K)-measurable elements of Prop_{P_K}. Conditionally on σ(P_K), the variables (h_K(x))_{x∈P_K} are independent and each have a law ν invariant by conjugation. We can therefore apply Lemma 3.8 to deduce that, conditionally on σ(P_K), the finite-dimensional distributions of ω_{φ,h_K} and ω_{ψ,h_K} coincide. The first assertion follows by integrating over σ(P_K).
To prove the second assertion, let ψ̃ : F_{P_K} → F_{ψ(P_K)} be the unique group homomorphism that maps x ∈ P_K to ψ(x), and let ψ_* : π_1(P_K) → π_1(ψ(P_K)) be the morphism induced by ψ. Then, we have the following commutative diagram.
For P a locally finite subset of R^2 and h ∈ G^P, let us denote P^G = {(x, h(x)) : x ∈ P}, and let F_φ(P^G) stand for the corresponding family of holonomies ω_{φ,h}. The transformation ψ^G = ψ × Id_G of R^2 × G preserves the measure Kλ ⊗ ν, so that the sets P^G_K and ψ^G(P^G_K) have the same distributions. We deduce that the finite-dimensional distributions of F_φ(P^G_K) and F_φ(ψ^G(P^G_K)), which do not depend on the choice of proper isomorphism, are equal, and the second assertion follows.

In the following, we assume that R^2 is endowed with a Euclidean structure. For all K, almost surely, no pair of distinct points in P_K is collinear with 0, and a proper isomorphism φ_K is obtained by taking, for all x ∈ P_K, the straight-line segment from 0 to x as the path γ_x. We set ω_K = ω_{φ_K,h_K}. We implicitly identify π_1(P_K) with F_{P_K} using this isomorphism φ_K.

A Much Simpler Problem
Let us recall that our goal is to study ω_K([X̄]), where X̄ is the concatenation of the Brownian motion X with a straight line segment between its endpoints. This quantity can be written as a very large product of random elements of G. If the group G were commutative, we could arrange this product as

ω_K([X̄]) = g_1^{θ(x_1, X̄)} ⋯ g_n^{θ(x_n, X̄)},    (3)

where ((x_i, g_i))_{1≤i≤n} is an enumeration of our random set P^G_K and θ(x, X̄) is the integer winding number of X̄ around x. Our goal, as we explained earlier, is actually to reduce our problem to the study of this simpler expression (3), even when G is not commutative. In this section, we show that the expression given by (3) does converge in distribution, as K → ∞, and we identify the limit. Recall that the 1-stable law ν_σ is defined by (1).

Definition 4.1. We say that a probability measure ν on g lies in the strong attraction domain of ν_σ if ν is ad-invariant and there exist δ > 0 and a coupling (X, Y) : Ω → g × g of ν and ν_σ. We then write ν ∈ D_δ(ν_σ), and we write [ν, ν_σ] for the law of such a coupling (X, Y).

Proposition 4.2. Let G be a compact Lie group with Lie algebra g endowed with a bi-invariant scalar product. Let ν ∈ D_δ(ν_σ), for some σ > 0.
Let (X_i)_{i≥1} be an i.i.d. sequence of random variables distributed according to ν, and let n_λ be an integer-valued random function of λ, independent from the sequence (X_i), and such that n_λ/λ converges in probability towards a deterministic t > 0 as λ → ∞. Then, as λ → ∞, the product exp_G(X_1/λ) ⋯ exp_G(X_{n_λ}/λ) converges in distribution towards ν_*(tσ).

We now introduce very quickly the notion of geometric solutions to ODEs driven by càdlàg paths with finite p-variation, for p < 2. This provides us with a rigorous definition of the distribution ν_*, and allows us to write a detailed version of Proposition 4.2 with intermediary steps to guide us through its proof.
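The product in Proposition 4.2 is easy to simulate. The sketch below assumes G = SU(2) with g ≅ R^3 embedded via v ↦ i v·σ (σ the Pauli matrices), and takes ν = ν_σ itself (an isotropic Cauchy law, which is ad-invariant and trivially in its own attraction domain), with the deterministic choice n_λ = ⌊tλ⌋.

```python
import numpy as np

SIGMA = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])

def su2_exp(v):
    """exp_G of i v·σ: since (v·σ)² = |v|² I, this is cos|v| I + i (sin|v|/|v|) v·σ."""
    t = np.linalg.norm(v)
    A = np.tensordot(v, SIGMA, axes=1)           # v·σ, a Hermitian 2x2 matrix
    if t < 1e-12:
        return np.eye(2, dtype=complex) + 1j * A
    return np.cos(t) * np.eye(2) + 1j * np.sin(t) / t * A

def product_limit_sample(sigma=0.1, t=1.0, lam=500, seed=0):
    """One sample of exp_G(X_1/λ) ⋯ exp_G(X_{n_λ}/λ) with isotropic Cauchy X_i."""
    rng = np.random.default_rng(seed)
    n = int(t * lam)
    X = rng.normal(size=(n, 3)) * sigma / np.abs(rng.normal(size=(n, 1)))
    U = np.eye(2, dtype=complex)
    for v in X:
        U = U @ su2_exp(v / lam)
    return U                                     # ≈ distributed as ν_*(tσ)
```

Despite each factor being close to the identity, the heavy tails of the steps keep the product from concentrating at 1_G, in line with the proposition.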
For a C 1 curve ℓ : [0, 1] → T 1 G ≅ g starting at 0, consider the ordinary differential equation For p < 2, Young integration theory teaches us that the solution map I : ℓ → γ extends continuously to the space C p−var of continuous paths with finite p-variation (endowed with the p-variation distance). In fact, it can be extended further to the space D p−var of càdlàg paths with finite p-variation, endowed with the Skorokhod-type p-variation metric α p ([8, Definition 3.7, Proposition 3.10 (ii) and Proposition 3.18]). We still write I for the extended map, also known as the Marcus (or geometric) canonical solution to Eq. (4), and usually denoted When ℓ is a càdlàg semimartingale (in particular when ℓ is a Cauchy process), this also agrees with the stochastic interpretation of the equation, defined in [14] prior to the notion of geometric canonical solution.
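For a driver with piecewise-linear pieces and jumps, the canonical (Marcus) solution traverses each jump along the exponential of the jump, so the whole solution is a product of group exponentials of increments. A sketch in SO(3), using the Rodrigues formula (all names are ours; this is an illustration of the construction, not the paper's notation):

```python
import numpy as np

def hat(v):
    """so(3) matrix of a vector v in R^3."""
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def exp_so3(v):
    """Rodrigues formula: exponential of hat(v) in SO(3)."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return np.eye(3)
    K = hat(v / t)
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

def marcus_solution(increments):
    """Canonical solution of d(gamma) = gamma dl for a piecewise driver:
    each increment (continuous piece or jump) is traversed along its
    exponential, so jumps are interpolated geodesically, in Marcus's sense."""
    g = np.eye(3)
    for dv in increments:
        g = g @ exp_so3(np.asarray(dv, float))
    return g

# Sanity check: a driver moving along a single direction gives the
# exponential of the total displacement, matching gamma' = gamma * l'.
incs = [(0.1, 0.0, 0.0)] * 7
assert np.allclose(marcus_solution(incs), exp_so3(np.array([0.7, 0.0, 0.0])), atol=1e-10)
```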
The distribution ν * (tσ ) is then the distribution of γ (t) when the driving process ℓ is a symmetric 1-stable process on g, whose distribution is uniquely determined by the property that ℓ(1) is distributed as ν σ . Notice that the definition of ν * (s) does not depend on the choice of t and σ such that s = tσ , precisely because ℓ is 1-stable.
Proposition. Let G be a compact Lie group with Lie algebra g endowed with a biinvariant scalar product. Let ν ∈ D δ (ν σ ) for some σ > 0.
Then, the following hold.
(5) For all t > 0, the random variable I ( ℓ̃ λ )(t) converges in distribution, as λ → ∞, toward I ( ℓ )(t). (6) Let n λ be an integer-valued random function of λ, independent from the sequence (X i ), such that n λ /λ converges in probability toward a deterministic t as λ → ∞. Then, as λ → ∞, the random variable exp G (X 1 /λ) · · · exp G (X n λ /λ) converges in distribution, and the limit law is that of I ( ℓ )(t).
Proof. The first point follows from the fact that the family ( (1/λ) ∑ j i=1 X i ) j∈N is distributed as ( ℓ( j/λ) ) j∈N , and that when x : [0, λ −1 ] → g is a straight-line segment from a to b, For the second item, we use the following interpolation inequality [15, Proposition 5.5]: for all p ∈ (1, +∞), for all continuous f , where We also use the following inequality [23, Lemma 2].³
Lemma 4.3. Let X 1 , . . ., X n be a family of independent and centered R-valued random variables with a finite moment of order p < 2. Then, ³ Despite the title of the cited article, the given inequality does concern the case p < 2.
We then obtain, for p > 1: We now fix an arbitrary q ∈ (1, 1 + δ) and apply Doob's maximal inequality: This proves the second item.
For the third point, since ℓ̃ is piecewise linear with interpolation points (0, λ −1 , 2λ −1 , . . .), it suffices to show that and thus to show that the two families of increments are identical in distribution. On both sides, the increments are i.i.d., and in both cases distributed as ν σ , which is enough to conclude.
The fourth item is a special case of Proposition 4.15 in [8].
For the fifth item, we use items 2 and 4, from which we deduce that ℓ̃ (λ) converges in distribution in the α p -topology toward ℓ. Take a probability space on which this convergence holds almost surely. Then, on the event that the convergence occurs and t is a continuity point of ℓ, we can use [8, Theorem 3.13], from which we deduce that I ( ℓ̃ (λ) )(t) converges toward I ( ℓ )(t). We conclude by noticing that t is almost surely a continuity point of ℓ.
Under this event, the distance in G between Z λ and I ( ℓ̃ λ )(t) is bounded by We deduce that, on the event E, Since X 1 is 1-stable, there exists a constant C such that for all z > 0, Besides, which together with item 5 concludes the proof.
Before we explain how we will use Proposition 4.2, we need the following definition.
Definition 4.4. For a point x outside the range of X̄ , the planar Brownian motion concatenated with a straight segment, we define θ(x) ∈ Z as the winding number of X̄ around the point x.
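For a piecewise-linear loop the winding number of Definition 4.4 can be computed by accumulating signed angle increments. A small self-contained sketch (our own implementation, intended only to make the definition concrete):

```python
import math

def winding_number(path, x):
    """Integer winding number of a closed polyline `path` around the point x,
    computed by summing the signed angles subtended by each segment."""
    total = 0.0
    n = len(path)
    for i in range(n):
        p = (path[i][0] - x[0], path[i][1] - x[1])
        q = (path[(i + 1) % n][0] - x[0], path[(i + 1) % n][1] - x[1])
        # signed angle from p to q, in (-pi, pi]
        total += math.atan2(p[0] * q[1] - p[1] * q[0], p[0] * q[0] + p[1] * q[1])
    return round(total / (2 * math.pi))

# A circle traversed twice winds twice around an interior point,
# and not at all around an exterior one.
circle2 = [(math.cos(4 * math.pi * k / 400), math.sin(4 * math.pi * k / 400))
           for k in range(400)]
assert winding_number(circle2, (0.0, 0.0)) == 2
assert winding_number(circle2, (3.0, 0.0)) == 0
```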
Theorem 4.5. Let P X be the law of the Brownian motion (from [0, 1] to R 2 ). For R > 0, let x be a point distributed uniformly on B(0, R). Then, P X -almost surely on the event ‖X‖ ∞ ≤ R, the random variable θ(x) lies in the strong attraction domain of a Cauchy distribution with scale parameter 1/(2π R 2 ).
Proof. This is a direct corollary of Theorem 1.1 in our previous paper [24].

Proof of Lemma 4.6. One can, and we will, assume that s = 1. The general case is then recovered by a scaling argument.
Let S be a Cauchy random variable such that Since ] < +∞, it suffices to show that there exists a random variable Z distributed as ν σ with E[ ‖SU − Z‖ 1+δ ] < +∞. Actually, we will show that there exists Z distributed as ν σ such that SU − Z is bounded.
For all x ∈ [0, +∞), let ψ(x) be the unique positive real solution of the equation The constants are tuned so that both sides are equal to 1 for x = 0 = ψ(0). It is then easily seen that ψ defines a continuous increasing bijection of [0, +∞). Set φ = ψ −1 . We set Z φ = sgn(S)φ(|S|)U . We also let Z σ be a random variable distributed as ν σ .
Remark that ‖Z σ ‖ and Z σ /‖Z σ ‖ are independent variables, and that the latter is distributed uniformly over the unit sphere. The same is true with Z σ replaced by Z φ , and to show that Z φ and Z σ have the same distribution, it thus suffices to show that their norms do, which follows from a simple computation. Indeed, .
On the other hand, It follows that Z φ is indeed distributed as Z σ .
Let us now tune the parameter σ so that sgn(S)φ(|S|) and S are close to each other. Since +∞ ψ(x) , or equivalently that ψ(x) ∼ For sgn(S)φ(|S|) and S to be close to each other, including when S is large, this limit must be one, so that we must set σ := A simple computation gives We deduce that It follows that ψ(x) − x is bounded near +∞, and by symmetry it is bounded on R. Thus, sgn(S)φ(|S|) − S is bounded, and so is

Corollary 4.7. Let ι K : P K → N be a (random) bijection, independent from (X, P G K ) conditionally on P K . Then, P X -almost surely, as K → ∞, the product

Proof. Let x 1 , . . . be the enumeration of P K ∩ B(0, ‖X‖ ∞ ) such that ι K (x 1 ) < ι K (x 2 ) < · · ·, and let G i ∈ g be the unique element such that (x i , G i ) ∈ P g K , and g i = exp G (G i ). Set n K := |P K ∩ B(0, ‖X‖ ∞ )|, and let σ i be the elementary permutation of {1, . . ., n K } which switches the indices i and i + 1. Conditionally on (X, P K , ι K ), the random variables g θ(x i ) i are conjugation-invariant and globally independent. As we have seen already, it follows that, conditionally on (X, P K , ι K ), In particular, Thus, for any permutation σ which is independent from P G K conditionally on (X, P K , ι K ), .
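The invariance used in this proof, namely that the law of a product of independent, conjugation-invariant group elements does not depend on the order of the factors, can be checked by exact enumeration in a small nonabelian group. A sketch in S 3 (our own illustration; uniform laws on conjugacy classes stand in for the conjugation-invariant variables of the paper):

```python
from itertools import product as cartesian
from collections import Counter

# Permutations of {0,1,2} as tuples; composition (p*q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

transpositions = [(1, 0, 2), (2, 1, 0), (0, 2, 1)]   # one conjugacy class of S3
three_cycles = [(1, 2, 0), (2, 0, 1)]                # another conjugacy class

# If g1, g2 are independent with conjugation-invariant laws (here: uniform on
# a conjugacy class), the law of the product is the same in either order.
law_12 = Counter(compose(a, b) for a, b in cartesian(transpositions, three_cycles))
law_21 = Counter(compose(b, a) for a, b in cartesian(transpositions, three_cycles))
assert law_12 == law_21
```

The point is exactly the one exploited in Corollary 4.7: conjugation invariance lets one shuffle the factors of the product without changing its distribution.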
We take σ uniformly distributed among the permutations of {1, . . ., n K }, independent from (X, P G K , ι K ) conditionally on n K , and we set y i = x σ (i) , H i = G σ (i) , and h i = g σ (i) . Then, conditionally on n K , the sequence (y i , H i ) i∈{1,...,n K } is independent from X and i.i.d. Furthermore, each (y i , H i ) is composed of two independent variables, with y i distributed uniformly on B(0, ‖X‖ ∞ ) and H i distributed as ν σ .
Notice that this is in contrast with the initial sequence (x i , G i ) i∈{1,...,n K } , for which the choice of the bijection ι K does not even ensure that x 1 is uniformly distributed on B(0, ‖X‖ ∞ ).
It then follows from Theorem 4.5 that P X -almost surely, for all i, conditionally on the event i ≤ n K , θ i := θ(y i ) lies in the strong attraction domain of a Cauchy distribution with scale parameter 1/(2π ‖X‖ 2 ∞ ). By applying Lemma 4.6, we deduce that P X -almost surely, θ i K H i lies in the strong attraction domain of ν σ , for σ = By extending, if necessary, the probability space on which P G K is defined, we can extend the sequence (θ i , H i ) i≤n K into a sequence (θ i , H i ) i∈N of i.i.d. random variables, globally independent from n K conditionally on ‖X‖ ∞ . Since n K /K almost surely converges toward π ‖X‖ 2 ∞ as K → ∞, we can apply Proposition 4.2. We deduce that P X -almost surely, conditionally on n K , as K → ∞, the G-valued random variable This concludes the proof.
The problem that we now face, in order to prove the main theorem, is to replace this product with ω K ([ X̄ ]).

Free Groups
For two groups K , K′ , we denote by K * K′ their free product. For a set E, we denote by F E the free group freely generated by E.
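Elements of F E are reduced words over E, multiplied by concatenation followed by cancellation of adjacent inverse pairs. A minimal implementation (our own, for concreteness; letters are pairs (generator, ±1)):

```python
def reduce_word(word):
    """Reduce a word over a free generating set: adjacent inverse pairs cancel.
    Letters are pairs (generator, exponent) with exponent +1 or -1."""
    out = []
    for letter in word:
        if out and out[-1][0] == letter[0] and out[-1][1] == -letter[1]:
            out.pop()          # cancel g g^{-1} or g^{-1} g
        else:
            out.append(letter)
    return out

def multiply(u, v):
    """Product in the free group: concatenate then reduce."""
    return reduce_word(u + v)

x, X = ("x", 1), ("x", -1)
y, Y = ("y", 1), ("y", -1)

# (xy)(y^{-1}x^{-1}) = 1, while xy != yx: the free group is nonabelian.
assert multiply([x, y], [Y, X]) == []
assert multiply([x], [y]) != multiply([y], [x])
```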

Free groups as semi-direct products.
Let P = {x 1 , . . ., x k } be a totally ordered finite set. We will build a group isomorphism between the free group F P and a semi-direct product of free groups. Let K be a group and let π : K * Z → K be the canonical projection. For simplicity, the canonical injections from K into F K and from K into K * Z are not written explicitly. We write 1 for the image of 1 ∈ Z in K * Z by the canonical injection. Here 1 is a given generator of Z, but we will use multiplicative notations otherwise (so for example 1 k is the usual k ∈ Z). Notice also that 1 is different from the unit element of K * Z. (2) The homomorphism c : is a split short exact sequence, and K * Z ≅ K ⋉ F K .
Let g ∈ K * Z. Then, there exist ε 1 , . . ., ε n ∈ {±1} and h 1 , . . ., h n+1 ∈ K such that Second point: We only need to check that c is injective. Any a ∈ F K \ {1} can be written as which is a non-empty reduced word whose letters are all different from 1, and lie alternately in Z and K. Thus c(a) ≠ 1, which concludes the proof.
Remark 5.2. We are very thankful to the anonymous reviewer who suggested this elementary proof of the second point to us, thereby drastically simplifying our initial proof, and who also noticed that the proof works for a general group K rather than only for the specific free groups we apply it to.
Although we have not been able to find this lemma in the literature, we have little doubt that it must be written somewhere; we apologize to the author whom we should have cited.
In the following, we write π k for the canonical projection from F P to F P\{x k } , and c k for the application from When we want to make explicit the canonical inclusion of a set Q into the free group F Q , we call it ι Q .
By iteratively applying Lemma 5.1 (notice that F P ≅ F P\{x k } * Z), we obtain an isomorphism where F ∅ is the group with a single element. We write φ j (g) for the element on the component F F {x 1 ,...,x j−1 } , so that To be more down-to-earth, the only thing we are doing here is to write g ∈ F P as ), with g′ , w 1 , . . ., w n ∈ F P\{x k } , and we then iterate this procedure to further decompose g′ in a similar manner (but with x k replaced by x k−1 ). As an example, the corresponding writing of g is and more generally g is decomposed as a power of x 1 , followed by a product of conjugates of x 2 (where the "conjugator" only uses the letter x 1 ), followed by a product of conjugates of x 3 (where the conjugator only uses the letters x 1 , x 2 ), and so on.
For x ∈ Q and g ∈ F Q , let I x (g) be the set of indices i such that a i (g) = x: For k ∈ {1, . . ., |I x (g)|}, we let i x k (g) be the k-th element of I x (g), so that We also set α x k (g) = α i x k (g) (g), and for k > |I x (g)| we set α x k (g) = 0. This gives us an ultimately vanishing sequence α x (g) = (α x k (g)) k∈N , the sequence of the exponents with which x appears in g. As an example, for g = x 7 y 5 x −4 , I x (g) = {1, 3}, i x 1 = 1, i x 2 = 3, α x 1 = 7, and α x 2 = −4. We endow the set of integer-valued, ultimately vanishing sequences with the order obtained as the reflexive and transitive closure of the relation R given by If α and α′ are partitions, α ⊴ α′ means that α is a finer partition than α′, but in our case the sequences α and α′ can take negative values, so that they are not partitions in general.
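The exponent sequence α x (g) can be extracted mechanically from a reduced word by grouping it into syllables. A short sketch (our own code), checked against the paper's example g = x 7 y 5 x −4 :

```python
def syllables(word):
    """Group a reduced word (list of (generator, +-1) letters) into syllables
    (a_i, alpha_i), where alpha_i is the total exponent of the run of a_i."""
    out = []
    for g, e in word:
        if out and out[-1][0] == g:
            out[-1] = (g, out[-1][1] + e)
        else:
            out.append((g, e))
    return [s for s in out if s[1] != 0]

def alpha(word, x):
    """Sequence of exponents with which x appears in the word, in order."""
    return [e for g, e in syllables(word) if g == x]

# The paper's example g = x^7 y^5 x^{-4}: alpha^x(g) = (7, -4), alpha^y(g) = (5).
g = [("x", 1)] * 7 + [("y", 1)] * 5 + [("x", -1)] * 4
assert alpha(g, "x") == [7, -4]
assert alpha(g, "y") == [5]
```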
We will use this lemma in the following way. Given the set P, choose a specific order on P with some geometric relevance. Then, instead of studying α j (c j ∘ φ j (g)), study the sequence α j (π ≤ j (g)). If g is given as the homotopy class of γ in π 1 (P), then π ≤ j (g) is given as the homotopy class of γ in π 1 ({x 1 , . . ., x j }), which might be easier to study if the order has been chosen appropriately.

Relations with Paths
In this section, we let P be a finite (deterministic) subset of R 2 \ {0}, and γ : [0, 1] → R 2 \ P be a continuous function satisfying γ (0) = 0. It is assumed that there is no pair of distinct points x, y ∈ P aligned with 0 (i.e. such that the angle ∠x0y vanishes), and that there is no pair of distinct points x, y ∈ P with equal norms. One can think of γ as being our Brownian motion X , and of P as one of our Poisson processes P K , but the results hold in full generality. For x, y ∈ R 2 , we use the notation [x, y] for the oriented line segment from x to y, and γ • γ′ for the concatenation of γ with γ′ , when the endpoint of γ is equal to the starting point of γ′ . For notational purposes, it is often practical to fix a parametrisation of the oriented curves we deal with, but the specific choices do not matter and we do not make them explicit.
The homotopy class in π ), and its homotopy class in a more general subset Remark that [γ s,t ] is ill-defined as soon as one of the line segments [0, γ s ] and [0, γ t ] intersects P. Since we ultimately want γ to be a Brownian motion, the set of times s and t for which this happens is uncountable, and it is not possible to extend [γ s,t ], or even [γ t ], by right or left continuity. In the following, each time we write [γ s,t ] or [γ s,t ] x in an equation, it is implicitly assumed that the equation holds provided that these classes are well-defined.
Remark that

6.1. Following the Cayley geodesic along the path. Let us set h
the geodesic walk from 1 to [γ ] in the Cayley graph of F P ≅ π 1 (P), with respect to the generating family P, and with multiplication on the right: (g, g′) is an edge of the graph if g′ = g y ±1 for some y ∈ P. In other words, h i is the prefix of length i of the word For i ∈ {1, . . ., N }, we also define (y i , ε i ) ∈ P × {±1} as the unique pair such that h i = h i−1 y ε i i and Be careful that, when γ is random, the T i are, in general, not stopping times with respect to the filtration F = (F t ) t∈[0,1] generated by γ : they are only stopping times with respect to the enlarged filtration (σ Indeed, the stochastic process ([γ t ]) t∈[0,1] has steps in the graph, in the sense that for any s < t, there exist n ∈ N and s = s ) is an edge of the graph. In particular, for any g ∈ the graph \ {1} and any set ⊂ \ {1, g} which disconnects 1 from g, [γ t ] = g ⇒ ∃s ∈ (0, t) : [γ s ] ∈ . Since the graph is a tree, for all i ∈ {2, . . ., N }, {h i−1 } disconnects h i from 1, and we deduce that indeed 0 and For i ∈ {0, . . ., N }, we set For i ∈ {1, . . ., N }, we set U i the possibly infinite time

Lemma 6.1. Let i < k be such that y i = y k . Assume that there exists j 0 such that i < j 0 < k and y j 0 ≠ y i . Then, U i < T − k .

Proof. Let l = min{ j > i : y j ≠ y i } − 1 = max{ j ≥ i : y j′ = y i for all j′ ∈ {i, . . ., j}} and m = min{ j > l : y j = y i }. Then, l ≥ i and the existence of j 0 ensures m ≤ k. Thus, U i ≤ U l and T − m ≤ T − k , and it suffices to show that U l < T − m . We assume that γ T l and γ T m both lie inside B(y i , δ), since otherwise the result is trivial.
The conditions on the angles are such that the triangle with vertices 0, γ T + l , and γ T − m does not contain any point of P \ {y i }. Thus, the loop In both cases, we can deduce that there exists s ∈ [T + l , T − m ] such that γ s ∉ B(y i , δ). Since U i is smaller than such an s, we deduce that U i < T − m .
6.2. Half-turns. For x ∈ P, we now define an integer θ 1/2 (x, γ ), the number of half-turns of γ around x. Let d 1 x and d 2 x be the two half-lines delimited by x and orthogonal to the vector from 0 to x (see Fig. 6 below). Let t 0 = 0 and let t 1 be the (possibly infinite) first time γ hits d 1 x . Times t 2 , t 3 , . . . are then defined recursively by the formulas Only finitely many of these times are less than 1, after which they are all infinite. The integer θ 1/2 (x, γ ) is then defined as the maximal index i such that t i is finite, plus 1. This additional 1 accounts for the potential winding of γ before it reaches d 1 x for the first time.
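For a polyline, counting half-turns amounts to scanning the crossings of the line d 1 x ∪ d 2 x and keeping those that alternate between the two half-lines. A sketch of this reading of the definition (our own code; off-by-one conventions are ours and may differ from the paper's at degenerate starting positions):

```python
import math

def half_turns(path, x):
    """Count alternating hits of the two half-lines d1, d2 through x,
    orthogonal to the segment [0, x], for a polyline `path`; return the
    count plus 1, as in the definition of theta_{1/2}."""
    ux, uy = x
    nrm = math.hypot(ux, uy)
    ux, uy = ux / nrm, uy / nrm                  # unit vector from 0 to x
    # signed distance to the line, and position along the line, per vertex
    proj = [(p[0] - x[0]) * ux + (p[1] - x[1]) * uy for p in path]
    side = [(p[0] - x[0]) * (-uy) + (p[1] - x[1]) * ux for p in path]
    target = 1                                   # look for d1 first, then alternate
    hits = 0
    for i in range(len(proj) - 1):
        if proj[i] * proj[i + 1] < 0:            # the segment crosses the line
            s = 0.5 * (side[i] + side[i + 1])    # which half-line (approximately)
            if (s > 0) == (target == 1):
                hits += 1
                target = -target
    return hits + 1

# A small circle around x = (1, 0) traversed `loops` times hits d1 and d2
# alternately once each per loop: theta_{1/2} = 2 * loops + 1.
x = (1.0, 0.0)
loops = 2
path = [(1.0 + 0.5 * math.cos(t), 0.5 * math.sin(t))
        for t in [2 * math.pi * loops * (k + 0.5) / 499 for k in range(500)]]
assert half_turns(path, x) == 2 * loops + 1
```

This makes concrete the heuristic that one full turn around x contributes two half-turns.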
In the following, the set P is ordered by the norms of its elements. Since we assumed that all the points of P have different norms, this order is total. What is somehow the key idea of this paper is that, when P is endowed with this order, the number of half-turns θ 1/2 does not depend on P but only on x and γ , which makes it much easier to control when γ is a Brownian motion.

Proposition 6.2. The following inequality holds: (x, γ ), and since for all g, g′ , it suffices to prove that ‖α x ([γ t i ,t i+1 ] x )‖ 1 cannot exceed 1.
We recommend that the reader convince himself or herself of this before looking at the proof. Figure 6 pictures the proof. We assume that i is odd and i ≠ n − 1. The case when i is even, i ≠ 0 and i ≠ n − 1, is dealt with by switching d 1 x and d 2 x . Minor modifications, which are left to the reader, are required for the cases i = 0 and i + 1 = n.
Since i is odd, γ t i lies in d 1 x . Set Let B′ be the closed ball centered at 0 and with radius (‖x‖ + max{‖y‖ : y ∈ P x \ {x}})/2. In particular, B′ contains all the points of P x but x, and ). Thus, we can continuously deform the loop x ), hence inside R 2 \ P x , into a loop γ′ based at 0 and with values in B′ .
It follows that the classes [γ t i ,s i ] P x and [γ t i ,s i ] P x \{x} agree (using, once again, the identifications and inclusions π 1 (P x \{x}) ≅ F P x \{x} ⊂ F P x ≅ π 1 (P x )), and therefore Let us now look at [γ s i ,t i+1 ] x . The path γ [s i ,t i+1 ] is contained in one of the two closed half-spaces H, H′ delimited by the line d 1 x ∪ d 2 x . If it lies in H′ , one can repeat the argument above and deduce that α x ([γ s i ,t i+1 ] x ) = 0. Otherwise, since there is no point of P x ∩ H but x, and since H \ {x} is contractible, one can deform γ [s i ,t i+1 ] , continuously and with fixed endpoints, in H \ P x , into any curve with the same endpoints staying inside H \ {x}. Replacing it with a curve γ′ with monotonic angle around 0, it is clear that ‖α x ([γ s i ,t i+1 ] x )‖ 1 = 1, which concludes the proof.

Fig. 6. There is no universal bound on the number of times the letter x appears in [γ t i ,t i+1 ], but it can appear at most once in [γ t i ,t i+1 ] x . In the pictured case, [γ s 1 ,t 2 ] = w 1 x w 2 y −1 w 3 x −1 w 4 z −1 x w 5 for some words w 1 , . . ., w 5 ∈ F P x \{x} , but [γ s 1 ,t 2 ] x is simply given by w 1 x w 2 , with w 1 , w 2 ∈ F P x \{x} .

Let x ∈ P, g ∈ F P . For a positive integer i, we now define S x i (g) as the sum of all but the i − 1 largest values of the sequence |α x (g)|. In particular, S x 1 (g) = ‖α x (g)‖ 1 .

Definition 6.3. Let n = max{i : α x i (g) ≠ 0}, and σ ∈ S n be such that Extend σ into a bijection of N * by setting σ (i) = i for i > n.
Then, we define Remark 6.4. Because of the absolute values, the sequence β x (g) = (β x i (g)) i≥1 is not always uniquely determined by (x, g). One can for example enforce the condition in order to get uniqueness, but the particular choice of β x (g) we make does not play a specific role in the remaining part of the paper.

Lemma 6.5. Let u 1 , . . ., u n be a finite sequence of positive real numbers and i a positive integer.
Let σ be a permutation of {1, . . ., n} such that u σ (1) ≥ u σ (2) ≥ · · · ≥ u σ (n) , and let that is, S i is the sum of all but the i − 1 largest elements of the sequence.
Then, there exist 0 Proof. Define recursively j l as the minimal index j ≥ j l−1 such that or j l = ∞ if such an index does not exist (in particular, if j l−1 = ∞), and assume by contradiction that l 0 := min{l : j l = ∞} satisfies l 0 ≤ i. For all l < l 0 , By summing over l ∈ {1, . . ., l 0 − 1}, we deduce that We deduce that j l 0 ≤ n, which is in contradiction with the assumption that j l 0 = ∞.
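The quantity S i of Lemma 6.5 is elementary to compute: sort the sequence, discard the i − 1 largest entries, and sum the rest. A two-line sketch (our own code, with a worked example):

```python
def tail_sum(u, i):
    """S_i: the sum of all but the i-1 largest elements of the sequence u."""
    return sum(sorted(u, reverse=True)[i - 1:])

u = [5, 1, 4, 2, 3]
assert tail_sum(u, 1) == 15        # S_1 is the full sum
assert tail_sum(u, 2) == 10        # drop the largest element, 5
assert tail_sum(u, 3) == 6         # drop 5 and 4
```

With u = |α x (g)| this is exactly the tail S x i (g) controlled in the next subsections.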
Finally, one can deduce the following proposition.

Proposition 6.6. Let x ∈ P, and let N , i be two positive integers. Assume that S Then, there exist 0

Proof. We will show that there exist 0 Since θ 1/2 (x, γ t,s ) is monotonic in s and t, the two properties then still hold with s k replaced by u k and t k replaced by u k−1 , which allows us to conclude. By Proposition 6.2, the first of these two conditions can be replaced with the condition that for all k ∈ {1, . . ., i}, , and for l ∈ {1, . . ., n} let p x l be the length of the largest prefix g′ of [γ ] x such that α x l (g′) = 0, and q x l be the length of the smallest prefix g′ of [γ ] x such that α x l (g′) = α x l ([γ ] x ). For example, if [γ ] x = x 3 y 4 z x 5 y, then p x 1 = 0, q x 1 = 3, p x 2 = 3 + 4 + 1, and q x 2 = 3 + 4 + 1 + 5. Then, . By Lemma 6.5, there exist 0 = j 1 ≤ · · · ≤ j i+1 ≤ n such that for all l ∈ {1, . . ., i}, The first property then holds with t l = T − p x j l +1 +1 and s l = T + q x j l+1 .
As for the u l , their existence is then ensured by Lemma 6.1.

The Case of Brownian Paths
7.1. Minimal spacing in a Poisson set. We finally consider the case given by P = P K . First, we need a bound on δ, the minimal distance between two points of P K ∪ {0}.
Since P K is infinite, δ(P K ) actually vanishes. Fortunately, one can freely replace P K with its intersection with some large ball containing the Brownian motion X . With this in mind, we set F R the event where ‖X‖ is the maximum of t → ‖X t ‖ and For a given R, the cardinality of P R K is equivalent (in distribution) to π R 2 K as K goes to infinity, and we will see that δ(P R K ) is of order at least K −1 , so we also set We could have replaced log(K ) with any function f diverging slowly toward infinity, the goal being that E R has probability arbitrarily close to 1 as K goes to infinity.
Lemma 7.1. As K goes to infinity, Proof. We cover B(0, R) with N = C(K log(K )) 2 balls B 1 , . . ., B N of radius (K log(K )) −1 (with C a constant depending only on R). This can be done for example by choosing the centers on a properly scaled grid. Let then B′ 1 , . . ., B′ N be the balls with the same centers x 1 , . . ., x N and radius 2(K log(K )) −1 .
If δ(P R K ) ≤ (K log(K )) −1 , there exist two distinct points x, y ∈ P R K ∪ {0} at distance less than (K log(K )) −1 from each other. Let then i be such that x ∈ B i . Then, both x and y lie in B′ i , and we deduce that For a given i, #(P K ∩ B′ i ) is a Poisson variable with intensity of order K × π(K log(K )) −2 = π K −1 log(K ) −2 ; the probability that it is at least 2 is of order K −2 log(K ) −4 , so that It is left to the reader to show that the probability of the event {P K ∩ B(0, (K log(K )) −1 ) ≠ ∅} is small as well.

7.2. Controlling the largest exponents.
It is now time to use the fact that our path X is Brownian. Let us denote θ(x) = θ(x, X̄ ) the winding number of X̄ around x, which counts algebraically the number of turns of X̄ around x, and recall that θ 1/2 (x) counts the absolute number of half-turns of X̄ around x. We also set θ s,t (x) as the winding number of X̄ s,t around x. Working with a Brownian motion allows for two things. On the one hand, it allows us to easily relate θ and θ 1/2 , as we will show in the next subsection. On the other hand, it allows us to use some already established estimates on θ .
We will use the following idea. For the Brownian motion to wind a large number of times around a given point x, it has to go extremely close to x. The windings are then mostly due to the small fraction of time it spends close to x. Since it is highly unlikely that the Brownian motion goes close to x twice, the windings are actually mostly due to a very small interval of time, during which it does not wind around any other point.
In particular, if we wait for the Brownian motion to wind a lot around x, and then to go far from x, it is unlikely that it will wind a lot around x after that. The following lemma gives a way to quantify this statement.

Lemma 7.2. Let n be a positive integer. Then, there exists a constant C such that for all r ≤ 1, for all x ∈ R 2 with ‖x‖ ≥ r , for all positive integers N 1 , . . ., N n ,

Proof. In [25, p. 117], we can find the following inequality on the maximal possible value of the winding function θ 0,t (1) as t varies: when t log(N ) is large, From Brownian scaling, we deduce that, provided r −2 log(N ) is large, which is our case, sup x: ‖x‖ ≥ r P( sup

Proof. In the three cases, we control the probability of the given event intersected with E R ∩ F R . Since P(E R ∩ F R ) can be made arbitrarily close to 1, this is sufficient to conclude. Let G 1 be the event On the event G 1 ∩ E R ∩ F R , let us write [ X̄ ] x as g 1 x β 1 g 2 x β 2 g 3 (the case [ X̄ ] x = g 1 x β 2 g 2 x β 1 g 3 is treated identically). Let h 0 , . . ., h N be the geodesic path from 1 to [ X̄ ] x , let i, j, k, l be the indices such that , and for m ∈ {i, j, k, l}, let T̃ m be the infimum of the times t such that the class of X 0,t • [X t , X 0 ] in P x is equal to h m . Then, the loop winds β 1 times around x, and the loop X |[ T̃ k , T̃ l ] • [X T̃ l , X T̃ k ] winds β 2 times around x.
According to Lemma 6.1, there exists U ∈ [ T̃ j , T̃ k ] such that the distance between X U and x is at least δ. Therefore, the considered event implies or the similar event with the exponents 2/3 and 1/2 − ε switched. We then apply Corollary 7.3, and we get a probability smaller than C K log(K ) 4 K −2/3 K −1/2+ε , which goes to 0 as K → ∞ since ε < 1/6. This proves the first convergence. The second convergence is obtained in a similar manner; we leave it to the reader. For the third one, we enumerate P R K = {x 1 , . . ., x #P R K } independently from X , and we set conventionally x i = x 1 for i > #P R K (as before). For a given i ∈ N, let H i be the event Applying, as before, Lemma 6.1 and then Lemma 7.2 (instead of Corollary 7.3), we obtain that Therefore, From the Markov inequality, we deduce that This concludes the proof.
The previous bounds tell us about the largest values β x 1 , β x 2 , β x 3 , but they do not tell us anything about the behaviour of the 'tails' S x i . Controlling these tails is our next goal.

7.3. Controlling the tails S x i . Our strategy is to apply Proposition 6.6, which relates S x i to θ 1/2 (x), to bound the probability that θ 1/2 (x) is large in terms of the probability that the winding number θ(x) is large, and then to use the already presented bound on the probability that θ(x) is large. In this strategy, only the middle piece, given by the following lemma, is still missing.

Lemma 7.5. There exists a finite constant C such that for all N ≥ 1 and all x ∈ R 2 ,

Proof. In short, it follows from the fact that θ(x) is, up to an error of ±1, the sum of θ 1/2 (x) i.i.d. centered Bernoulli variables. Let us recall that the stopping times t i , defined by (11), correspond to times when the Brownian motion hits the half-lines d 1 x , d 2 x . It is easily seen that the real-valued windings θ(x, X |[0,t i ] ) and θ(x, X |[0,t i+1 ] ) are related by , 1}. Since the t i are stopping times, we can apply the reflection property of the Brownian motion at the times t i , with reflection around the axis d 1 x ∪ d 2 x . We obtain a new Brownian motion with the same stopping times t i , and we can deduce that, conditionally on θ 1/2 (x), the variables (ε i ) i∈{1,...,θ 1/2 (x)} are i.i.d. symmetric Bernoulli variables. Hence, for all N ≤ M, converges in probability toward 1/π as K goes to infinity. This proves the first convergence. The second is obtained in an identical way.
We now consider H R , the large probability event

End of the Proof
In order to finally conclude the proof of the main theorem, we split P R K into a disjoint union P R K = P 0 ⊔ P 1 ⊔ P 2 ⊔ P 3 , with The elements of P 0 are somehow the most important, in the sense that they are the ones that we expect to contribute to the limit. Besides, for each of the points in P 0 , there is one particular place where it appears with a large exponent, and this place is the most important.
For k ∈ {1, . . ., #P 0 }, let y k = x u(k) be the k-th element of P 0 (with P 0 inheriting the order of P K ), which is also the u(k)-th element of P K . For y = y k , let i k be the unique index such that α In the following, for g K , h K ∈ π 1 (P K ), we write g K ≈ h K if, for any compact and connected Lie group G, ω K (g K ) converges in distribution if and only if ω K (h K ) converges in distribution, and the limit distributions are the same if they exist. We also write g K ∼ h K if ω K (g K h −1 K ) converges in distribution toward 1, which is a stronger statement.

Proposition 8.1.
[ X̄ ] converges in distribution, in which case the limiting distributions are the same.
Proof. Remark that Since Y 1 · · · Y n K converges to 1, the left-hand side of (15) converges if and only if converges as well, and the limits are the same. This last expression is equal in distribution to the product X 1 · · · X n K , hence the lemma.
Let G be a compact Lie group, endowed with a biinvariant metric. There exist C, c > 0 such that for any totally ordered finite set P with #P ≥ 4, for all K ≥ 1, and all g ∈ F P This proves the first relation of (14).
We now look at the second one, which follows from Lemma 8.2 if one can apply it to the variables We need to check that Recall that for a set of independent random variables (Z 1 , . . ., Z n ) which are each conjugation-invariant, and measurable functions f This can be proved easily by induction. By applying this to the sequence of points in P G K , with f x k = Y k , which only depends on the previous variables, we deduce (16). Now we need to check the second assumption of Lemma 8.2, which amounts to showing that We want to apply Proposition 8.3, for which we need to bound For x ∈ P 1 , we know that β 2 + ε. Since we also know that #P 2 ≤ K 5ε , Since we also know that #P 3 ≤ #P ≤ K log(K ), Combining these three results, we obtain, for K large enough, This is enough to apply Proposition 8.3, and to deduce that It follows that ω K #P 0 +1 k=1 P k converges to 1 in probability, which proves the second relation.
The last two relations are proved in a similar, but easier, way. First, The last inequality follows from the events G R (recall the first bound in Proposition 7.7) and H R (recall Lemma 7.8). This proves the third relation.
For the fourth one, we apply Lemma 8.2 once more, with Here the independence assumption is even easier to check. The only problem left is to show that the second assumption of Lemma 8.2 holds, that is, to show that the product ∏ x∈P K \P 0 h K (x) θ(x, X̄ ) converges toward 1 (recall that h K is the map from P K to G such that (x, h K (x)) ∈ P G K ). Since we can use Proposition 8.3 to conclude. Remark that the situation here is actually much easier than for the second relation: the letters here appear in the right order (so that we could actually avoid the use of Proposition 8.3).
It only remains to prove Proposition 8.3 to conclude the proof of Theorem 1.

An Inequality for Words in Random Matrices
This section is devoted to the proof of the last missing piece, Proposition 8.3.
Several times, we will need to compare the exponential of a sum in g with the product in G of the exponentials.

Lemma 9.1. There exists a constant C such that for all positive integers n and all X 1 , . . ., X n ∈ g,

Proof. First, let us remark that, from Dynkin's formula and the compactness of G, there exists a constant C such that for all X, Y ∈ g, For j ∈ {0, . . ., n}, let Y j = ∑ j i=1 X i and z j = ∏ n i=n−j+1 exp G (X i ). We are thus trying to bound d(z n , exp G (Y n )). Remark that z n−j = exp G (X j+1 ) z n−j−1 and Y j+1 = X j+1 + Y j . Applying the triangle inequality between the points z n = exp G (Y 1 ) z n−1 , . . ., exp G (Y n−1 ) z 1 , exp G (Y n ), we obtain Under the assumptions on g in Proposition 8.3, for all x ∈ P, d G (h K (c x ∘ φ x (g)), 1) < c, so that the logarithm of h K (c x ∘ φ x (g)) is well-defined as an element of g, provided that c is small enough. Throughout this section, we will call this logarithm X x (g), so that exp G (X x (g)) = h K (c x ∘ φ x (g)).
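The comparison in Lemma 9.1 can be observed numerically: in SO(3), the discrepancy between a product of exponentials and the exponential of the sum is of second order in the norms, by a Baker-Campbell-Hausdorff / Dynkin estimate. A sketch (our own illustration, with arbitrary small inputs):

```python
import numpy as np

def hat(v):
    """so(3) matrix of a vector v in R^3."""
    x, y, z = v
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def exp_so3(v):
    """Rodrigues formula for the exponential in SO(3)."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return np.eye(3)
    K = hat(v / t)
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

X = [np.array([0.01, 0.0, 0.0]),
     np.array([0.0, 0.02, 0.0]),
     np.array([0.0, 0.0, 0.015])]

prod = np.eye(3)
for v in X:
    prod = prod @ exp_so3(v)
exp_sum = exp_so3(sum(X))

# The factors do not commute, so the error is nonzero, but it is of the
# order of the pairwise products of the norms (~1e-4 here), not of the norms.
err = np.linalg.norm(prod - exp_sum)
assert 0 < err < 1e-3
```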
Lemma 9.2. Let G be a compact Lie group, endowed with a biinvariant metric. Let (H x ) x∈P be a family of g-valued random variables, each supported on the ball of radius K −1 , and assume that for all x, E[H x | σ ((H y ) y<x )] = 0.

Let h K : F P → G be the random group morphism determined by h K (e i ) = exp G (H i ).
There exist c > 0 and a constant C such that for any totally ordered finite set P, for all K ≥ 1 and all g ∈ F P with |g| 2 ≤ cK ,

Proof. Let us recall that c x ∘ φ x (g) is a product of S x 1 (π ≤x (g)) conjugates of either x or x −1 : ∏ S x 1 (π ≤x (g)) k=1 w x,k x ε x,k w −1 x,k , with ε x,k ∈ {−1, 1} and w x,k ∈ F {y:y<x} . Set x ).
From triangle inequalities and Lemma 9.1, $X_x(g)$ is well approximated by $\sum_k \varepsilon_{x,k}\, \mathrm{Ad}_{h_K(w_{x,k})}(H_x)$. In order to control this remaining term, let us remark that
$$\mathbb{E}\big[\mathrm{Ad}_{h_K(w_{x,k})}(H_x) \mid \sigma((H_y)_{y<x})\big] = h_K(w_{x,k})\, \mathbb{E}\big[H_x \mid \sigma((H_y)_{y<x})\big]\, h_K(w_{x,k})^{-1} = 0,$$
and therefore $\mathbb{E}\big[X_x(g) \mid (X_y(g))_{y<x}\big] = 0$. The finite sequence $(S_x = \sum_{y<x} X_y)_{x \in P}$ is thus a martingale. Let $(S_x^\alpha)_\alpha$ be the components of $S_x$ in some orthonormal basis $(e_\alpha)_\alpha$ of $\mathfrak{g}$. Then, for each $\alpha$, $(S_x^\alpha)_x$ is a martingale as well, whose $i$-th step $X_{x_i}^\alpha$ is supported on the interval $[-S_{x_i}^1(\pi_{\le x_i}(g)) K^{-1},\ S_{x_i}^1(\pi_{\le x_i}(g)) K^{-1}]$. Thus, by orthogonality of the martingale increments,
$$\mathbb{E}\big[(S^\alpha)^2\big] \le K^{-2} \sum_{x \in P} S_x^1(\pi_{\le x}(g))^2.$$
We sum over $\alpha \in \{1, \dots, \dim(\mathfrak{g})\}$ to conclude.
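The key point of the proof is that martingale increments are orthogonal in $L^2$, so that the second moment of the sum is the sum of the second moments, with no cross terms. The following toy simulation, with arbitrary parameters, illustrates this for one coordinate $S^\alpha$; independent uniform increments stand in for the increments of the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, trials = 50, 10.0, 20000

# Toy martingale increments: independent, mean-zero, each supported
# in [-1/K, 1/K] -- a stand-in for one coordinate S^alpha of the proof.
steps = rng.uniform(-1.0 / K, 1.0 / K, size=(trials, n))
S = steps.sum(axis=1)

# Orthogonality of martingale increments gives
# E[S^2] = sum_i E[X_i^2] <= n / K^2, with no cross terms.
print(np.mean(S**2), n / K**2)
```

The empirical second moment matches $\sum_i \mathbb{E}[X_i^2] = n/(3K^2)$ for uniform increments, well below the worst-case bound $n/K^2$ used in the lemma.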
We now intend to prove Proposition 8.3. We invite the reader to look at the similarity between Lemma 9.2, which we have just proved, and Proposition 8.3: they differ in that the former takes place in the Lie algebra whilst the latter takes place in the Lie group. Some logarithmic (in the size of $P$) corrections also appear in the Proposition. We believe that the same result holds without these corrections, but we were not able to prove it.

Proof of Proposition 8.3. Let us first take a naive approach to the problem. We want to show that $\mathbb{E}[d_G(h_K(g), 1)^2]$ is small. From Lemma 9.2, we already have a bound on $\mathbb{E}[\|\sum_{x \in P} X_x(g)\|^2]$. Besides, Lemma 9.1 allows us to compare $h_K(g)$ with $\exp_G(\sum_{x \in P} X_x(g))$. The main problem is that we would like the square to be under the sum, and the best we can get seems to carry an extra factor $\#P$, in Eq. (19).$^7$ We will now try to reduce this factor $\#P$. The previous estimate is enough to conclude if $\#P < 16$, and we now assume $\#P \ge 16$, so that $\log(\log(\#P)) \ge 1$. We use a divide-and-conquer strategy. We split the alphabet $P$ into $M$ subsets of size roughly $\#P / M$. The choice $M \approx \log(\#P)$ gives a nearly optimal result. Each of the subsets is then recursively split in a similar way, until each subset contains a single element.
Therefore, we consider a $\lceil \log(\#P) \rceil$-regular rooted tree $T$ of depth $D = \lceil \log(\#P) / \log(\log(\#P)) \rceil$ (see Fig. 7 below).

$^7$ Actually, $\#P$ can easily be replaced with $\sqrt{\#P}$ in Eq. (19). It suffices to remark that we can replace $\sum_{x \in P} \|X_x(g)\|^2$ with $\sum_{x \in P} \|X_x(g)\| \max_x \big\| \sum_{y<x} X_y(g) \big\|$ in Lemma 9.2, and get a maximal inequality in Lemma 9.1.
Since $\#P = \log(\#P)^{\log(\#P)/\log(\log(\#P))} \le \lceil \log(\#P) \rceil^D$, the tree $T$ has more than $\#P$ leaves. We fix some depth-first traversal of $T$, and we enumerate the leaves accordingly. We label the $i$-th leaf by $X_{x_i}$ if $i \le \#P$, and by $0$ otherwise. Then, we inductively label all the internal vertices, from the bottom to the root, by the sum of the labels of their children: for any internal vertex $v$, setting $\sigma(v)$ the set of children of $v$ and $l(v)$ the set of leaves descending from $v$,
$$X_v = \sum_{w \in \sigma(v)} X_w = \sum_{i \in l(v)} X_{x_i}.$$
In particular, the root $r$ is labeled by $X_r = \sum_{x \in P} X_x$.
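The identity $\#P = \log(\#P)^{\log(\#P)/\log(\log(\#P))}$ used above is a one-line computation, which we spell out for completeness:

```latex
\log\Big( \log(\#P)^{\frac{\log(\#P)}{\log(\log(\#P))}} \Big)
  = \frac{\log(\#P)}{\log(\log(\#P))} \cdot \log(\log(\#P))
  = \log(\#P),
```

so taking exponentials on both sides gives the claimed identity.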
For k ∈ {1, . . ., D}, we define T k = {v : d T (v, r ) = k − 1} the set of vertices at depth exactly k, which is naturally ordered, and h k = v∈T k exp G (X v ).In particular, h 1 = exp G x∈P X x and h D = exp G (X x 1 ) . . .exp G (X x #P ).We already know by ( 17) that h 1 is close to 1, and our goal is to show that h D is also close to 1.It thus suffices to show that d G (h i , h i+1 ) is small for all i.
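The interpolation $h_1, \dots, h_D$ can be sketched concretely. The following toy implementation, again in $\mathfrak{so}(3)$ and not part of the proof, groups the leaf labels into consecutive $b$-ary blocks (here $b = 3 = \lceil \log 20 \rceil$, matching the $\lceil \log(\#P) \rceil$-regular tree for $\#P = 20$), labels each internal vertex by the sum of its children, and forms the product of exponentials level by level.

```python
import numpy as np

def skew(v):
    # Map a vector in R^3 to the corresponding element of so(3).
    a, b, c = v
    return np.array([[0.0, -c, b], [c, 0.0, -a], [-b, a, 0.0]])

def expm(A, terms=24):
    # Truncated exponential series -- adequate for the small matrices here.
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def level_products(X, b):
    """Greedy b-ary grouping of the list X into a rooted tree.

    Each internal vertex is labeled by the sum of its children's labels.
    Returns h_k = product of exp of the labels at level k, from the root
    (h_1 = exp of the total sum) down to the leaves (h_D = ordered
    product of the individual exponentials), as in the proof."""
    levels = [list(X)]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        nxt = [sum(cur[i:i + b], np.zeros((3, 3))) for i in range(0, len(cur), b)]
        levels.append(nxt)
    levels.reverse()  # root first
    hs = []
    for level in levels:
        h = np.eye(3)
        for A in level:
            h = h @ expm(A)
        hs.append(h)
    return hs

rng = np.random.default_rng(2)
X = [0.01 * skew(rng.normal(size=3)) for _ in range(20)]
hs = level_products(X, b=3)
# Consecutive levels h_i, h_{i+1} stay close: each step only replaces
# exp(sum over a block) by the product of exponentials over that block.
gaps = [np.linalg.norm(hs[i] - hs[i + 1]) for i in range(len(hs) - 1)]
print(max(gaps))
```

With 20 leaves and $b = 3$, the levels have sizes $1, 3, 7, 20$, and each gap $d_G(h_i, h_{i+1})$ is controlled by Lemma 9.1 applied block by block, which is exactly the mechanism of the proof below.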
Since $d_G$ is biinvariant, it satisfies the property that $d_G(ab, cd) \le d_G(a, c) + d_G(b, d)$ for any $a, b, c, d \in G$. It follows that
$$d_G(h_i, h_{i+1}) \le \sum_{v \in T_i} d_G\Big(\exp_G(X_v), \prod_{w \in \sigma(v)} \exp_G(X_w)\Big) \le C \sum_{v \in T_i} \Big(\sum_{w \in \sigma(v)} \|X_w\|\Big)^2 \le C\, \lceil \log(\#P) \rceil \sum_{w \in T_{i+1}} \|X_w\|^2,$$
using Lemma 9.1 for the second inequality, and the Cauchy–Schwarz inequality together with $\#\sigma(v) \le \lceil \log(\#P) \rceil$ for the third.

For $w \in T$, let $g_w = \prod_{i \in l(w)} c_{x_i} \circ \phi_{x_i}(g) \in F_P$, so that $h_K(g_w) = \prod_{i \in l(w)} \exp_G(X_{x_i})$. For $i \in l(w)$, $X_{x_i}(g_w) = X_{x_i}(g)$, whilst for $i \notin l(w)$, $X_{x_i}(g_w) = 0$. We also set $X_w = \sum_{i \in l(w)} X_{x_i}$. By applying Lemma 9.2 to $g_w$, we get a bound on $\mathbb{E}[\|X_w\|^2]$ in terms of the quantities $S_x^1(\pi_{\le x}(g))^2$, $x \in l(w)$. Summing over $w \in T_{i+1}$, and then over the depths $i \in \{1, \dots, D-1\}$, we end up with the bound of Proposition 8.3, in which the factor $\#P$ of the naive approach is replaced with the announced logarithmic corrections.

Fig. 2. Black lines: the different paths $\gamma_x$. Red lines: the geodesic continuations $\gamma_x^\varepsilon$. Blue circles: the neighbourhoods $U_x$. Red crosses: the points $x$. Purple set: the set $U_\infty$. Red curves: the curves $\tilde{\gamma}_x$. Eighth picture: image of the previous one after application of a homeomorphism on each of the connected components

Fig. 3. The map $f_1$, and the corresponding braid $b_1$ acting on a proper isomorphism. On the right-hand side, the colored loops can be extended into 'sheets' that do not cross each other except on the axis above $0$

Fig. 4. When $P$ is no longer finite, the family $([x])_{x \in P}$ can be free but not generating. In this example, the paths $\gamma_{x_i}$ are drawn one after the other. The path going to $x_{i+1}$ passes above the point $x_{i+2}$, and then navigates until it reaches $x_{i+1}$. The black loop which appears on the first picture does not lie in the group generated algebraically by the $[x_i]$. Formally, it would be described as $w [x_1] w^{-1}$ with $w$ a bi-infinitely long word in the $[x_i]$

Lemma 4.6. Let $R$ be a random variable in the strong attraction domain $D_\delta(C)$ of a (real-valued) Cauchy distribution $C$ with scale parameter $s$. Let $U$ be a random variable independent from $R$, and distributed uniformly on the unit sphere of $\mathfrak{g}$. Then, $RU$ lies in the attraction domain $D_\delta(\nu_\sigma)$ of the symmetric 1-stable distribution $\nu_\sigma$ with $\sigma =$

Lemma 5.1. (1) Let $H$ be the smallest normal subgroup of $K * \mathbb{Z}$ containing 1. Then, the kernel of $\pi$ is $H$.

Remark that this proposition, together with Corollary 4.7, allows us to conclude the proof of Theorem 1, for which it would suffice to have $\approx$ relations everywhere. The arguments that we use to show that $\sim$ relations hold are very different from the arguments that we use to show that $\approx$ relations hold: $\prod_k X_k \approx \prod_k Y_k$ typically holds when $X_k$ is very close to $Y_k$ for all $k$, whilst $\prod_k X_k \sim \prod_k Y_k$ typically holds when $X_k$ is very close to a conjugate of $Y_k$ for all $k$. To prove Proposition 8.1, we only lack two more lemmata, in which the compact Lie group $G$ finally appears. Let $G$ be a Lie group. For all $K$, let $n_K$ be a random integer, and let $(X_i)_{i \in \{1, \dots, n_K\}}$ and $(Y_i)_{i \in \{1, \dots, n_K+1\}}$ be two sequences of $G$-valued random variables. Assume that the product $Y_1 \cdots Y_{n_K+1}$ converges in distribution towards $1$ as $K$ goes to infinity, and that