Abstract
We investigate tensor products of random matrices, and show that independence of entries leads asymptotically to \(\varepsilon \)-free independence, a mixture of classical and free independence studied by Młotkowski and by Speicher and Wysoczański. The particular \(\varepsilon \) arising is prescribed by the tensor product structure chosen, and conversely, we show that with suitable choices an arbitrary \(\varepsilon \) may be realized in this way. As a result, we obtain a new proof that \(\mathcal {R}^\omega \)-embeddability is preserved under graph products of von Neumann algebras, along with an explicit recipe for constructing matrix models.
Introduction
\(\varepsilon \)-free probability was originally introduced by Młotkowski [14]^{Footnote 1} to provide a mixture of classical and free independence; it was shown, for example, that the q-deformed Gaussians can be realized as central limit variables in this framework. Relations of this type were further studied by Speicher and Wysoczański [18], where cumulants describing \(\varepsilon \)-freeness were introduced; this led to later work on partial commutation relations in quantum groups studied by Speicher and Weber [17].
The theory of \(\varepsilon \)-freeness is also connected with graph products of groups, a construction which has been imported into operator algebras and studied, for example, by Caspers and Fima [5] and by Atkinson [1], where certain stability properties (e.g., the Haagerup property) are shown to be preserved under graph products. It was also shown by Caspers [4] that graph products preserve \(\mathcal {R}^\omega \)-embeddability. The connection with our setting is this: algebras are \(\varepsilon \)-freely independent in their G-product when \(\varepsilon \) is the adjacency matrix of G.
The goal of this short note is to show that \(\varepsilon \)-independence describes the asymptotic behaviour of tensor products of random matrices; as a consequence, we obtain a method of producing nice matrix models for \(\varepsilon \)-independent \(\mathcal {R}^\omega \)-embeddable noncommutative random variables, and another proof that graph products preserve \(\mathcal {R}^\omega \)-embeddability.
This document is organized as follows. In Section 2, we present the necessary background on \(\varepsilon \)-free independence. In Section 3, we describe the sort of matrix model we will be considering, and state the main theorem describing the asymptotic behaviour of such models. In Section 4, we compute the necessary moments to obtain the result. Finally, in Section 5, we discuss some applications of this work to \(\mathcal {R}^\omega \)-embeddability, and show how the techniques may be adapted to study orthogonally invariant matrix models instead of unitarily invariant ones.
Preliminaries
We discuss here the notion of \(\varepsilon \)-freeness, originally introduced by Młotkowski [14] and later examined by Speicher and Wysoczański [18]. This is meant to be a mixture of classical and free independence defined for a family of algebras, where the choice of relation between each pair of algebras is prescribed by a given symmetric matrix \(\varepsilon \). Note that we follow the convention of Speicher and Wysoczański in using \(\varepsilon \) rather than Młotkowski’s \(\Lambda \).
Let \(\mathcal {I}\) be a set of indices. Fix a symmetric matrix \(\varepsilon = (\varepsilon _{ij})_{i, j \in \mathcal {I}}\) consisting of 0’s and 1’s with 0’s on the diagonal. We will think of nondiagonal entries as prescribing free independence by 0 and classical independence by 1; the choice of 0’s on the diagonal was made for later convenience.
Definition 1
Let \(\mathcal {I}\) be a set of indices. The set \(I^\varepsilon _n\) consists of \(n\)-tuples of indices from \(\mathcal {I}\) for which neighbours are different modulo the implied commutativity. That is, \((i_1, \ldots , i_n) \in I^\varepsilon _n\) if and only if, whenever \(i_j = i_\ell \) with \(1\le j < \ell \le n\), there is some k with \(j< k < \ell \), \(i_j \ne i_k\), and \(\varepsilon _{i_ji_k} = 0\).
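To make the combinatorial condition concrete, here is a small Python sketch (an illustration of our own; the function name and the encoding of \(\varepsilon \) as a nested dictionary are conventions chosen for the example, not notation from the text):

```python
def in_I_eps(word, eps):
    """Check whether a tuple of indices lies in I^eps_n.

    A tuple fails exactly when some pair of equal indices i_j == i_l
    (j < l) has no k with j < k < l, i_k != i_j, and eps[i_j][i_k] == 0.
    """
    n = len(word)
    for j in range(n):
        for l in range(j + 1, n):
            if word[j] != word[l]:
                continue
            if not any(word[k] != word[j] and eps[word[j]][word[k]] == 0
                       for k in range(j + 1, l)):
                return False
    return True
```

For instance, with two free indices (\(\varepsilon _{12} = 0\)) the tuple \((1, 2, 1)\) is admissible, while if the two indices commute (\(\varepsilon _{12} = 1\)) it is not.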
Definition 2
Let \((\mathcal {A}, \varphi )\) be a noncommutative probability space. Then the unital subalgebras \(\mathcal {A}_i \subseteq \mathcal {A}\) are \(\varepsilon \)-independent if \(\mathcal {A}_i\) and \(\mathcal {A}_j\) commute whenever \(\varepsilon _{ij} = 1\), and, whenever we take \(a_1, \ldots , a_n \in \mathcal {A}\) with \(a_j \in \mathcal {A}_{i_j}\), \(\varphi (a_j) = 0\), and \((i_1, \ldots , i_n) \in I^\varepsilon _n\), we have \(\varphi (a_1\cdots a_n) = 0\).
For the purpose of introducing matrix models, we introduce the notion of \(\varepsilon \)-independence for a disjoint collection of sets of elements (i.e., noncommutative random variables) in \((\mathcal {A}, \varphi )\), as well as an asymptotic counterpart.
Definition 3
Let \((\mathcal {A}, \varphi )\) be a noncommutative probability space and \(a_{ij}, (i,j)\in \mathcal {I}\times J\), a family of elements of \(\mathcal {A}\). They are called \(\varepsilon \)-independent (or \(*\)-\(\varepsilon \)-independent) if the unital \(*\)-subalgebras \(\mathcal {A}_i\) generated by \(a_{ij}, j\in J\), are \(\varepsilon \)-independent.
If one takes a sequence of noncommutative probability spaces \((\mathcal {A}_n, \varphi _n)\) and \(a_{ij}^{(n)}\in \mathcal {A}_n\) for \((i,j)\in \mathcal {I}\times J\), then one defines asymptotic \(\varepsilon \)-independence as the convergence of the \(a_{ij}^{(n)}\) in \(*\)-moments to \(*\)-\(\varepsilon \)-independent variables.
For an \(\varepsilon \)-independent family \(a_{ij}, (i,j)\in \mathcal {I}\times J\), of elements of \(\mathcal {A}\), we say that it has a matrix model if the \(\mathcal {A}_n\) above can be taken to be matrix algebras.
Description of the model
Sufficient conditions to exhibit a model
We are interested in producing a way to model \(\varepsilon \)-freeness using approximations in finite-dimensional matrix algebras. In particular, we will be interested in models with the following data:

1.
for each \(N \in \mathbb {N}\), an algebra \(\mathcal {M}^{(N)}\) which is the tensor product of a fixed number of copies of \(M_N(\mathbb {C})\), indexed by a nonempty finite set S;

2.
for each \(\iota \in \mathcal {I}\), a nonempty subset \(\mathcal {K}_\iota \subseteq S\), with algebras
$$\begin{aligned} \mathcal {M}_\iota ^{(N)} := \bigotimes _{s \in \mathcal {K}_\iota }M_N(\mathbb {C}) \otimes \bigotimes _{s \in S {\setminus } \mathcal {K}_\iota } \mathbb {C}1_N \end{aligned}$$viewed as included in \(\mathcal {M}^{(N)}\) in a way consistent with their indices;

3.
a family of independent unitaries \((U_\iota )_{\iota \in \mathcal {I}}\), with \(U_\iota \) Haar-distributed in \(\mathcal {U}\left( \mathcal {M}_\iota ^{(N)}\right) \).
Our goal for the next few sections of this paper will be to establish the following characterisation of the asymptotic behaviour of such models.
Theorem 4
With the notation used above, let \((a_{\iota ,j}^{(N)})_N \in U_\iota \mathcal {M}_\iota ^{(N)}U_\iota ^*\) be a collection of sequences of random matrices, indexed by \((\iota , j) \in \mathcal {I}\times J\), such that for each \(\iota \), the collection \(\left( (a_{\iota ,j}^{(N)})_N\right) _j\) has a joint limiting \(*\)-distribution in the large N limit as per Definition 3. Then the entire family \(\left( (a_{\iota ,j}^{(N)})_N\right) _{\iota ,j\in \mathcal {I}\times J}\) has a joint limiting \(*\)-distribution in the large N limit, and the original families are \(\varepsilon \)-free where
$$\begin{aligned} \varepsilon _{\iota \iota '} = {\left\{ \begin{array}{ll} 1 &{} \text {if } \mathcal {K}_\iota \cap \mathcal {K}_{\iota '} = \emptyset ,\\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
This theorem is proven below as Theorem 7(2). We will also indicate some ways in which such choices may be made to realise a desired matrix \(\varepsilon \).
Model A. Suppose \(\varepsilon \) is prescribed. Let \(S = \left\{ \left\{ i,j\right\} : i, j \in \mathcal {I}\text { with } \varepsilon _{i,j} = 0\right\} \), and \(\mathcal {K}_\iota = \left\{ s \in S : \iota \in s\right\} \); note that our choice that \(\varepsilon _{i,i}=0\) ensures that each \(\mathcal {K}_\iota \) is nonempty since \(\left\{ \iota \right\} \in \mathcal {K}_\iota \). It is clear that \(\mathcal {K}_i\cap \mathcal {K}_j = \emptyset \) precisely when \(\varepsilon _{i,j}=1\), as we desire.
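The construction of Model A is easy to express in code; the following Python sketch (the names `model_A` and `Kcal` are ours, as is the dictionary encoding of \(\varepsilon \)) builds S and the \(\mathcal {K}_\iota \) from a given \(\varepsilon \) and lets one check the disjointness property directly:

```python
def model_A(eps, indices):
    # Model A: one tensor leg per unordered pair {i, j} (i = j allowed)
    # with eps[i][j] == 0; K_i collects the legs containing i.
    S = {frozenset((i, j)) for i in indices for j in indices
         if eps[i][j] == 0}
    Kcal = {i: {s for s in S if i in s} for i in indices}
    return S, Kcal

# Two indices that commute (eps_{12} = 1) versus two free indices:
S1, K1 = model_A({1: {1: 0, 2: 1}, 2: {1: 1, 2: 0}}, [1, 2])
S0, K0 = model_A({1: {1: 0, 2: 0}, 2: {1: 0, 2: 0}}, [1, 2])
```

In the commuting case the leg sets \(\mathcal {K}_1, \mathcal {K}_2\) are disjoint; in the free case they share the leg \(\{1,2\}\), as the text requires.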
Model B. Suppose \(\varepsilon \) is prescribed. Let \(\mathcal {G}\) be the graph on \(\mathcal {I}\) with adjacency matrix \(\varepsilon \); that is, indices \(i, j \in \mathcal {I}\) are connected in \(\mathcal {G}\) if and only if \(\varepsilon _{i,j} = 1\); moreover, let \(\mathcal {G}^\circ \) be the (self-loop-free) complement graph of \(\mathcal {G}\), i.e., the graph with vertex set \(\mathcal {I}\) where two (distinct) vertices are connected if and only if they are not connected in \(\mathcal {G}\). Take S to be a clique edge cover of \(\mathcal {G}^\circ \): any set of cliques in \(\mathcal {G}^\circ \) such that each edge in \(\mathcal {G}^\circ \) is contained in at least one element of S, and set \(\mathcal {K}_\iota = \left\{ K \in S : \iota \in K\right\} \). If we choose S in such a way that each vertex is included in at least one clique, each \(\mathcal {K}_\iota \) will be nonempty. Further, if \(\varepsilon _{i,j} = 0\), then \(\{i,j\}\) is an edge in \(\mathcal {G}^\circ \) and hence contained in at least one clique \(K \in S\); then \(K \in \mathcal {K}_i \cap \mathcal {K}_j\), which is therefore nonempty. It follows that our model will be asymptotically \(\varepsilon \)-free.
One may, for example, take S to be the set of all edges and all vertices in \(\mathcal {G}^\circ \) (that is, to consist of all cliques of size one or two in \(\mathcal {G}^\circ \)) which will recover Model A. Alternatively, one may take the set of all maximal cliques in \(\mathcal {G}^\circ \) (equivalently, the set of all maximal anticliques in \(\mathcal {G}\)). Since \(\mathcal {M}^{(N)} \cong M_{N^{\#S}}(\mathbb {C})\), selecting a smaller set S leads to a much lower rate of dimension growth as \(N\rightarrow \infty \); unfortunately, finding a minimum clique edge cover is in general very hard [13].^{Footnote 2}
An example
Let us thoroughly work out an example. We will take \(\varepsilon \) to be the following matrix:
$$\begin{aligned} \varepsilon = \begin{pmatrix} 0 &{} 1 &{} 0 &{} 0 &{} 0\\ 1 &{} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 1 &{} 0 &{} 1 &{} 1\\ 0 &{} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0 &{} 0 \end{pmatrix}. \end{aligned}$$
The corresponding graph and its complement are drawn below.
The maximal anticliques in \(\mathcal {G}\) are \(\left\{ 1, 4, 5\right\} \), \(\left\{ 2, 4, 5\right\} \), and \(\left\{ 1,3\right\} \). Therefore, if we build Model B for this graph using the set of maximal anticliques for S (ordering the tensor factors as \(\left\{ 1,4,5\right\} , \left\{ 2,4,5\right\} , \left\{ 1,3\right\} \)), we wind up with
$$\begin{aligned} \mathcal {M}_1^{(N)}&= M_N(\mathbb {C}) \otimes \mathbb {C}1_N \otimes M_N(\mathbb {C}),\\ \mathcal {M}_2^{(N)}&= \mathbb {C}1_N \otimes M_N(\mathbb {C}) \otimes \mathbb {C}1_N,\\ \mathcal {M}_3^{(N)}&= \mathbb {C}1_N \otimes \mathbb {C}1_N \otimes M_N(\mathbb {C}),\\ \mathcal {M}_4^{(N)} = \mathcal {M}_5^{(N)}&= M_N(\mathbb {C}) \otimes M_N(\mathbb {C}) \otimes \mathbb {C}1_N. \end{aligned}$$
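The maximal anticliques just listed can be recovered mechanically; here is a brute-force Python sketch (our own illustration, only sensible for small index sets; \(\varepsilon \) is encoded as a nested dictionary whose nonzero entries are exactly the pairs not contained in any of the three anticliques):

```python
from itertools import combinations

def maximal_anticliques(eps, vertices):
    # Maximal anticliques of G = maximal cliques of the complement G°.
    def anticlique(vs):
        return all(eps[a][b] == 0 for a, b in combinations(vs, 2))
    cands = [frozenset(c) for r in range(1, len(vertices) + 1)
             for c in combinations(vertices, r) if anticlique(c)]
    return {a for a in cands if not any(a < b for b in cands)}

# The example graph: the pairs below are exactly those NOT contained in
# any of the anticliques {1,4,5}, {2,4,5}, {1,3}.
eps = {i: {j: 0 for j in range(1, 6)} for i in range(1, 6)}
for a, b in [(1, 2), (2, 3), (3, 4), (3, 5)]:
    eps[a][b] = eps[b][a] = 1

anti = maximal_anticliques(eps, range(1, 6))
```

Running this reproduces the three anticliques above.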
In particular, if we take \(X_i^{(N)}\) to be independent GUE matrices in \(\mathcal {M}_i^{(N)}\), we have that \((X_i^{(N)})_i\) converges in law to a family \((S_i)_i\) of semicircular variables which are \(\varepsilon \)-free.
It is sometimes useful to visualize S as labelling a set of parallel strings, and elements of \(\mathcal {M}_\iota ^{(N)}\) as being drawn on the strings corresponding to \(\mathcal {K}_\iota \); then, for example, elements commute when they can slide past each other on the strings. So for example, one could imagine the product \(X_1^{(N)}X_3^{(N)}X_5^{(N)}X_2^{(N)}X_1^{(N)}X_4^{(N)}X_2^{(N)}X_4^{(N)}\) as follows:
Note that \((1,3,5,2,1,4,2,4) \in I_8^\varepsilon \) (pictorially, no two columns of variables of the same colour may be slid next to each other along the strings), so in particular, asymptotic \(\varepsilon \)-freeness implies
$$\begin{aligned} \lim _{N\rightarrow \infty } E\left[ {\text {tr}}\left( X_1^{(N)}X_3^{(N)}X_5^{(N)}X_2^{(N)}X_1^{(N)}X_4^{(N)}X_2^{(N)}X_4^{(N)}\right) \right] = 0. \end{aligned}$$
Here, we took Gaussian unitary ensembles, but it is enough to take matrices that are centred (in expectation) and unitarily invariant, as we will see further down.
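To see this behaviour numerically, here is a Python/NumPy sketch of the model of Section 3.2 (a simulation of our own: the leg assignments follow the anticliques above, while the size N, the seed, and the tolerance are arbitrary choices); it samples GUE matrices on the appropriate tensor legs and evaluates the normalized trace of the word considered above:

```python
import numpy as np

rng = np.random.default_rng(0)

def gue(n):
    # GUE matrix normalized so that the normalized trace of X^2 is ~1.
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / (2.0 * np.sqrt(n))

def embed(X, legs, N, num_legs):
    # Extend X, acting on the (sorted) tensor legs in `legs`, by the
    # identity on the remaining legs of an num_legs-fold tensor power.
    rest = [s for s in range(num_legs) if s not in legs]
    full = np.kron(X, np.eye(N ** len(rest)))
    order = list(legs) + rest              # leg carried by each axis
    perm = list(np.argsort(order))         # axis carrying leg 0, 1, ...
    T = full.reshape([N] * (2 * num_legs))
    T = T.transpose(perm + [num_legs + p for p in perm])
    return T.reshape(N ** num_legs, N ** num_legs)

# Leg sets of the example, legs ordered ({1,4,5}, {2,4,5}, {1,3}):
K = {1: [0, 2], 2: [1], 3: [2], 4: [0, 1], 5: [0, 1]}
N, LEGS = 8, 3
word = [1, 3, 5, 2, 1, 4, 2, 4]            # a tuple in I_8^epsilon

vals = []
for _ in range(4):
    X = {i: embed(gue(N ** len(legs)), legs, N, LEGS)
         for i, legs in K.items()}
    P = np.eye(N ** LEGS, dtype=complex)
    for i in word:
        P = P @ X[i]
    vals.append((np.trace(P) / N ** LEGS).real)
mean_tr = float(np.mean(vals))
```

Already at \(N = 8\) per leg, the averaged trace is small, consistent with the vanishing limit; note also that \(X_1\) and \(X_2\) commute exactly, since they act on disjoint legs.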
The formula
Weingarten integral over tensors
Let us first fix some notation. On any matrix algebra, we call \({\text {tr}}\) the normalized trace. For a permutation \(\sigma \in S_k\) and a matrix algebra \(\mathcal {M}_N\) of any dimension, we define a \(k\)-linear map \({\text {tr}}_{\sigma }: \mathcal {M}_N^k\rightarrow \mathbb {C}\) by
$$\begin{aligned} {\text {tr}}_{\sigma }(A_1, \ldots , A_k) := \prod _{(j_1\, j_2\, \cdots \, j_m) \text { cycle of } \sigma } {\text {tr}}\left( A_{j_1}A_{j_2}\cdots A_{j_m}\right) , \end{aligned}$$
where the product in each cycle is taken according to the cyclic order defined by \(\sigma \). For example, \({\text {tr}}_{(1, 2)(3)}(A_1,A_2,A_3)={\text {tr}}(A_1A_2){\text {tr}}A_3\). Note that, by traciality, this formula does not depend on the choice of representatives for the cycle decomposition.
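Numerically, \({\text {tr}}_\sigma \) is straightforward to evaluate; a short NumPy sketch (our own, with \(\sigma \) given by its list of cycles in the 1-based labelling used above):

```python
import numpy as np

def tr_sigma(cycles, mats):
    # tr_sigma(A_1, ..., A_k): one normalized-trace factor per cycle
    # of sigma, multiplying the A's along the cycle (1-based labels).
    out = 1.0 + 0.0j
    for cyc in cycles:
        P = np.eye(mats[0].shape[0], dtype=complex)
        for j in cyc:
            P = P @ mats[j - 1]
        out *= np.trace(P) / P.shape[0]
    return out

A1, A2, A3 = np.diag([1.0, 2.0]), np.diag([3.0, 4.0]), np.diag([5.0, 6.0])
val = tr_sigma([(1, 2), (3,)], [A1, A2, A3])   # tr(A1 A2) * tr(A3)
```

By traciality, listing the cycle \((1\,2)\) as \((2\,1)\) gives the same value.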
As in Section 3.1, we fix an index set \(\mathcal {I}\) and a finite set S, and, for \(N \in \mathbb {N}\), take
$$\begin{aligned} \mathcal {M}^{(N)} := \bigotimes _{s \in S} M_N(\mathbb {C}). \end{aligned}$$
We also fix \(\emptyset \ne \mathcal {K}_\iota \subseteq S\) for each \(\iota \in \mathcal {I}\), and set \(\mathcal {M}_\iota ^{(N)}\) and \((U^{(N)}_\iota )_{\iota \in \mathcal {I}}\) as above, so that \(U_\iota ^{(N)}\) has Haar distribution on \(\mathcal {U}(\mathcal {M}_\iota ^{(N)})\).
Let us fix \(\iota _1, \ldots , \iota _k \in \mathcal {I}\). The purpose of this section is to compute, for E the expectation with respect to all \(U_\iota ^{(N)}\),
$$\begin{aligned} E\left[ {\text {tr}}\left( U_1A_1U_1^*\cdots U_kA_kU_k^*\right) \right] , \end{aligned}$$(1)
where \(A_j \in \mathcal {M}_{\iota _j}^{(N)}\) and \(U_j = U_{\iota _j}^{(N)}\).
This integral can be computed thanks to the Weingarten formula which we recall here; for further details about Weingarten calculus, we refer the interested reader to [6, 9]. The Weingarten calculus says that there exists a central function \({\text {Wg}}: S_k\times \mathbb {N}\rightarrow \mathbb {C}\) such that
$$\begin{aligned} \int u_{i_1i_1'}\cdots u_{i_ki_k'}\,\overline{u_{j_1j_1'}}\cdots \overline{u_{j_kj_k'}}\,du = \sum _{\sigma , \tau \in S_k} \delta _{\mathbf {i}, \sigma \cdot \mathbf {j}}\,\delta _{\mathbf {i'}, \tau \cdot \mathbf {j'}}\,{\text {Wg}}(\sigma \tau ^{-1}, N), \end{aligned}$$
where the integral is over the Haar measure on the unitary group \(\mathcal {U}_N\), and the action of permutations is understood as \(\tau \cdot i' = (i_{\tau ^{-1}(1)}', \ldots , i_{\tau ^{-1}(k)}')\).^{Footnote 3} For the purpose of this paper, we just need to know that this function is well defined for \(N\ge k\) and that there exists a function \(\mu : S_k\rightarrow \mathbb {Z}_*\) so that
$$\begin{aligned} {\text {Wg}}(\sigma , N) = N^{-k-\left| \sigma \right| }\left( \mu (\sigma ) + O(N^{-2})\right) . \end{aligned}$$
Here, we use the notation \(\left| \sigma \right| \) for the minimal number of transpositions needed to realize \(\sigma \) as a product of transpositions. This quantity is known to satisfy \(\left| \sigma \right| = k-\# (\sigma )\), where \(\# (\sigma )\) is the number of cycles of the permutation \(\sigma \) in the cycle decomposition (counting the fixed points too). As for \(\mu \), although it is irrelevant to this paper, it is closely related to Speicher’s noncrossing Möbius function, and more details can also be found in the above references.
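The identity \(\left| \sigma \right| = k - \#(\sigma )\) is easy to exercise in code; the following Python sketch (our own; permutations are encoded in one-line notation on \(\{0, \ldots , k-1\}\)) computes \(\left| \sigma \right| \) by counting cycles:

```python
def transposition_length(perm):
    # |sigma| = k - #(sigma): k minus the number of cycles of sigma
    # (fixed points counted); perm[i] = sigma(i) on {0, ..., k-1}.
    k, seen, cycles = len(perm), set(), 0
    for i in range(k):
        if i in seen:
            continue
        cycles += 1
        j = i
        while j not in seen:
            seen.add(j)
            j = perm[j]
    return k - cycles
```

The identity is two transpositions short of a 3-cycle, one short of a transposition, and so on.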
In order to be able to extract asymptotic information when we compute the quantity in (1), we introduce some notation:

\(Z=(1\cdots k)\) is the full cycle of the symmetric group \(S_k\).

\({\tilde{S}}_k\) is the subgroup of permutations \(\sigma \) which satisfy \(\iota _j = \iota _{\sigma (j)}\) for all j, i.e., that stabilize the word \((\iota _1, \ldots , \iota _k)\).

For each \(s \in S\), we define \(J_s := \left\{ j : s \in \mathcal {K}_{\iota _j}\right\} \); that is, \(J_s\) consists of those indices j whose label's leg set contains s; we further denote \(k_s := \#J_s\). In words, for the string s, \(J_s\) is the subset of \(\{1, \ldots , k\}\) of indices that act on the leg s.

Any \(\sigma \in {\tilde{S}}_k\) preserves each \(J_s\) and so induces a permutation \(\sigma _s \in S_{J_s} \cong S_{k_s}\). We will abuse this notation slightly and write \(Z_s\) for the full cycle sending each element of \(J_s\) to the next one according to the cyclic order.

When we need \(\#(\cdot )\) and it is necessary to stress which set a permutation is acting on, we may write something such as \(\#_{k_s}(\cdot )\) to mean that the permutation is viewed as an element of \(S_{k_s}\) and not as the induced permutation in \(S_k\) which is constant off of \(J_s\).
Example 5
Let us briefly note how this notation behaves in the context of the example in Section 3.2. Recall that in this case, we have \(k = 8\) and \((\iota _1, \ldots , \iota _8) = (1,3,5,2,1,4,2,4)\). Then \({\tilde{S}}_k\) is generated by the transpositions \((1\ 5)\), \((4\ 7)\), and \((6\ 8)\). Letting \(s_1 = \left\{ 1,4,5\right\} , s_2 = \left\{ 2,4,5\right\} \), and \(s_3 = \left\{ 1,3\right\} \) (so that \(S = \left\{ s_1, s_2, s_3\right\} \)), we have
$$\begin{aligned} J_{s_1} = \left\{ 1,3,5,6,8\right\} , \quad J_{s_2} = \left\{ 3,4,6,7,8\right\} , \quad J_{s_3} = \left\{ 1,2,5\right\} , \end{aligned}$$
which is readily seen from the diagram above. We also have
$$\begin{aligned} Z_{s_1} = (1\ 3\ 5\ 6\ 8), \quad Z_{s_2} = (3\ 4\ 6\ 7\ 8), \quad Z_{s_3} = (1\ 2\ 5), \end{aligned}$$
and if \(\sigma = (6\ 8)(4\ 7)\), we have
$$\begin{aligned} \sigma _{s_1} = (6\ 8), \quad \sigma _{s_2} = (6\ 8)(4\ 7), \quad \sigma _{s_3} = {\text {id}}. \end{aligned}$$
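The bookkeeping in this example is mechanical enough to script; the Python sketch below (our own; the leg-set dictionary is read off from the anticliques of Section 3.2, and permutations are encoded as dictionaries of moved points) recomputes the sets \(J_s\) and the induced permutations \(\sigma _s\):

```python
word = (1, 3, 5, 2, 1, 4, 2, 4)               # (iota_1, ..., iota_8)
# Leg sets K_iota, reading the three anticliques as legs s1, s2, s3:
K = {1: {"s1", "s3"}, 2: {"s2"}, 3: {"s3"},
     4: {"s1", "s2"}, 5: {"s1", "s2"}}

def J(s):
    # J_s: the (1-based) positions j with s in K_{iota_j}.
    return [j for j, i in enumerate(word, start=1) if s in K[i]]

def induced(sigma, s):
    # The permutation sigma_s induced on J_s by sigma (a dict mapping
    # moved points; omitted points are fixed).
    return {j: sigma.get(j, j) for j in J(s)}

sigma = {6: 8, 8: 6, 4: 7, 7: 4}              # sigma = (6 8)(4 7)
```

Evaluating `J` on each leg reproduces the sets above, and `induced(sigma, "s3")` is the identity on \(J_{s_3}\).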
Finally, we will introduce a modification of the Weingarten function, specific to our notation. First, for \(\iota \in \mathcal {I}\), we denote by \(B_\iota := \left\{ j : \iota _j = \iota \right\} \) and by \(\Pi \) the partition \(\left\{ B_\iota \right\} _{\iota \in \mathcal {I}}\) of \(\left\{ 1, \ldots , k\right\} \) (ignoring any empty blocks). Note that \({\tilde{S}}_k\) is the stabilizer of \(\Pi \). We define the function \(\overset{\sim }{{\text {Wg}}}\) as follows: for \(\sigma \in {\tilde{S}}_k\),
$$\begin{aligned} \overset{\sim }{{\text {Wg}}}(\sigma , N) := \prod _{\iota \in \mathcal {I}} {\text {Wg}}\left( \sigma |_{B_\iota }, N^{\#\mathcal {K}_\iota }\right) , \end{aligned}$$
where \(\sigma |_{B_\iota }\) denotes the permutation of \(B_\iota \) induced by \(\sigma \).
We are now able to state our formula:
Theorem 6
The following equation always holds true:
$$\begin{aligned} E\left[ {\text {tr}}\left( U_1A_1U_1^*\cdots U_kA_kU_k^*\right) \right] = \sum _{\sigma , \tau \in {\tilde{S}}_k} \overset{\sim }{{\text {Wg}}}(\sigma \tau ^{-1}, N)\,{\text {tr}}_{\sigma }(A_1, \ldots , A_k) \prod _{s \in S} N^{\#_{k_s}(\sigma _s) + \#(\tau _s^{-1}Z) - 1}. \end{aligned}$$
We remark that if \(k_s > 0\), then \(\#(\tau _s^{-1}Z) = \#(\tau _s^{-1}Z_s)\) where the left hand side is viewed in \(S_k\) and the right in \(S_{k_s}\).
Proof
Let us use the following vector notation: \(\mathbf {i}=(i_1,\ldots , i_k)\), where \(i_1,i_2,\ldots \) are \(\#S\)-tuples described as \(i_1= (i_1[1], \ldots , i_1[\#S])\), and similarly for \(\mathbf {i'},\mathbf {j},\mathbf {j'}\). With this notation, we can expand our product integral:
where the product over l is understood modulo k. Next, we use the fact that the a's are deterministic while the u's from different families are independent:
We wish to apply the Weingarten formula to each expectation above. Since the unitaries in the \(\iota \)th term are Haar distributed on \(\#\mathcal {K}_\iota \) strings, our resulting dimension is \(N^{\#\mathcal {K}_\iota }\); meanwhile the sum will be over permutations of \(B_\iota \), and importantly the delta functions involved \(\delta _{\cdot , \sigma \cdot (\cdot \cdot )}\) and \(\delta _{\cdot , \tau \cdot (\cdot \cdot )}\) will apply only on strings in \(\mathcal {K}_\iota \), while the other strings must have equal row and column indices; that is, each term in the product will be of the form
Since we are independently choosing permutations on each \(B_\iota \), we may instead sum over all \(\sigma , \tau \in {\tilde{S}}_k\) outside of the product. We may then group the Weingarten terms into \(\overset{\sim }{{\text {Wg}}}(\sigma \tau ^{-1}, N)\). We are left with the following:
Let us make some observations to simplify the above expression. First, notice that the final product means \(\mathbf {i'}\) is entirely determined by \(\mathbf {i}\), and in fact, \(\mathbf {i'} = Z^{-1}\cdot \mathbf {i}\). The penultimate product means that our choice of \(\mathbf {j}\), \(\mathbf {j'}\) is partially constrained: when choosing \(\mathbf {j}_\ell \), we are free to choose any value on the strings corresponding to \(\mathcal {K}_{\iota _\ell }\), but must set \(j_\ell [s] = i_\ell [s]\) for \(s \notin \mathcal {K}_{\iota _\ell }\). Then because we have a factor of \(a_{j_\ell , j_\ell '}\) which vanishes as soon as \(j_\ell [s]\) and \(j_\ell '[s]\) differ for any \(s \notin \mathcal {K}_{\iota _\ell }\), we have \(j_\ell '[s] = i_\ell [s]\) for \(s \notin \mathcal {K}_{\iota _\ell }\) too; that is, a valid choice for \(\mathbf {j}\) is a valid choice for \(\mathbf {j'}\) and vice versa. Let us denote by \(V_{\mathbf {i}}\) the set of valid choices of \(\mathbf {j}\) for a given \(\mathbf {i}\):
When we restrict to summing over this set, we will need to keep the condition that \(\mathbf {i}_\ell [s]=\mathbf {j}_\ell [s]=\mathbf {j'}_\ell [s]=\mathbf {i'}_\ell [s]\); although the first two equalities are satisfied if we restrict \(\mathbf {j},\mathbf {j'}\in V_{\mathbf {i}}\), the last is not. Let us also adopt the following shorthand:
Rearranging some terms, we arrive at the expression below:
Let us first consider the innermost sum. Note first that each \(a_{j_\ell j_\ell '}\) depends only on \(j_\ell [s]\) for \(s \in \mathcal {K}_{\iota _\ell }\), so this sum does not actually depend on \(\mathbf {i}\). Moreover, if we factor the sum based on the cycles of \(\sigma \) (both the a's and the delta functions in \(\Delta _{\mathbf {j},\sigma \cdot \mathbf {j'}}\)), we notice that we have exactly a non-normalized trace (over \(\mathcal {M}_\iota ^{(N)}\) for the corresponding \(\iota \)) of the A's corresponding to each cycle of \(\sigma \) (since the condition we are enforcing is that, on the relevant strings, the column index \(j_\ell '[s]\) equals the row index \(j_{\sigma (\ell )}[s]\)). That is, for any \(\mathbf {i}\),
Next, we turn to the middle sum in (2). Note that if we look at the conditions along a single string, we find that we need \(\mathbf {i}_\ell [s] = \mathbf {i}_{\tau (\ell )+1}[s]\) when \(s \in \mathcal {K}_{\iota _\ell }\), while \(\mathbf {i}_\ell [s] = \mathbf {i}_{\ell +1}[s]\) when \(s \notin \mathcal {K}_{\iota _\ell }\). That is, we need \(\mathbf {i}[s] = Z^{-1}\tau _s\cdot \mathbf {i}[s]\). The number of ways we have of satisfying this condition is precisely \(N^{\#(Z^{-1}\tau _s)}\), and so we have
Putting everything together, we have
$$\begin{aligned} E\left[ {\text {tr}}\left( U_1A_1U_1^*\cdots U_kA_kU_k^*\right) \right] = \sum _{\sigma , \tau \in {\tilde{S}}_k} \overset{\sim }{{\text {Wg}}}(\sigma \tau ^{-1}, N)\,{\text {tr}}_{\sigma }(A_1, \ldots , A_k) \prod _{s \in S} N^{\#_{k_s}(\sigma _s) + \#(\tau _s^{-1}Z) - 1}, \end{aligned}$$
as desired.
\(\square \)
Note that some readers might have found it easier to perform the above calculation with graphical calculus as introduced in [8]. As a consequence, we can prove the following:
Theorem 7
The following two statements hold true:

1.
$$\begin{aligned} \begin{aligned}&E({\text {tr}}(U_1A_1U_1^*\cdots U_kA_kU_k^*)) \\&\quad =\sum _{\sigma ,\tau \in {\tilde{S}}_k}{\text {tr}}_{\sigma }(A_1, \ldots , A_k)\, \mu (\sigma ^{-1}\tau )\\&\qquad \left( \prod _{s\in S} N^{\left| Z_s\right| -\left| \sigma _s\right| -\left| \sigma _s^{-1}\tau _s\right| -\left| \tau _s^{-1}Z_s\right| }\right) (1+O(N^{-2})). \end{aligned} \end{aligned}$$(3)

2.
If \((\iota _1, \dots , \iota _k) \in I_k^\varepsilon \) and \({\text {tr}}(A_i) = o(1)\) for all i, then \(E({\text {tr}}(U_1A_1U_1^*\cdots U_kA_kU_k^*)) = o(1).\)
Proof
For the first claim, we start with the formula of Theorem 6:
Given that
we learn
Assembling the above, we have
Let us turn our attention to the exponent of N in each term of the product. We note the following things: first, that \(\#(\sigma _s) - k_s = -\left| \sigma _s\right| \); second, that if \(k_s = 0\), then \(\#(\tau _s^{-1}Z)-1 = 0 = \left| Z_s\right| - \left| \tau _s^{-1}Z_s\right| \); and third, that if \(k_s > 0\), then \(\#(\tau _s^{-1}Z)-1 = \#(\tau _s^{-1}Z_s) - 1 = k_s-\left| \tau _s^{-1}Z_s\right| - (k_s-\left| Z_s\right| ) = \left| Z_s\right| - \left| \tau _s^{-1}Z_s\right| \). Putting this together, we arrive at:
As for the second claim, on the one hand, the \(A_i\) are bounded in \(*\)-distribution by assumption, which implies that the quantities \({\text {tr}}_{\sigma }(A_1, \ldots , A_k)\) are uniformly bounded in \(\sigma \) and N. On the other hand, \(\left| Z_s\right| -\left| \sigma _s\right| -\left| \sigma _s^{-1}\tau _s\right| -\left| \tau _s^{-1}Z_s\right| \le 0\). Indeed, this can be reformulated as
$$\begin{aligned} \left| \sigma _s\right| + \left| \sigma _s^{-1}\tau _s\right| + \left| \tau _s^{-1}Z_s\right| \ge \left| Z_s\right| , \end{aligned}$$
which is obvious since \(\left| \sigma \tau \right| \le \left| \sigma \right| +\left| \tau \right| \). In addition, it is known that in case of equality, \(\sigma _s,\tau _s\) form noncrossing partitions. This is a classical result in combinatorics; we refer to [2] for one of the first uses of this fact in random matrix theory.
So, the summands of Theorem 6 with a crossing contribution are of order o(1), and we may restrict our sum to noncrossing contributions up to an o(1) error. However, according to Proposition 8, each such summand involves a factor \({\text {tr}}(A_i)\) and so is actually also o(1), which proves the vanishing as \(N\rightarrow \infty \) of the expression \(E({\text {tr}}(U_1A_1U_1^*\cdots U_kA_kU_k^*))\). \(\square \)
Note that for the purpose of this proof, we only need to know that \(\mu (\sigma _s^{-1}\tau _s)\) is a function, and its value is irrelevant. It turns out to be an integer, the Biane–Speicher Möbius function. Let us also remark that Eq. (3) of Theorem 7 is the \(\varepsilon \)-free moment-cumulant formula of Speicher and Wysoczański [18].
A fixed point
Proposition 8
Suppose that \((\iota _1, \ldots , \iota _k) \in I_k^\varepsilon \). Suppose further that \(\sigma \in {\tilde{S}}_k\) is such that \(\sigma _s\) describes a noncrossing partition for each \(s \in S\). Then \(\sigma \) has at least one fixed point.
Proof
Let \(B_0 \in \pi _\sigma \) be a block such that \(\max B_0 - \min B_0\) is minimal (here \(\pi _\sigma \) denotes the partition of \(\left\{ 1, \ldots , k\right\} \) into the cycles of \(\sigma \)): that is, it is a block of minimal length. If \(B_0\) is a singleton, it is a fixed point of \(\sigma \) and we are done; therefore let us suppose that \(B_0\) contains distinct elements \(i < j\). As \((\iota _1, \ldots , \iota _k) \in I_k^\varepsilon \) and \(\iota _i = \iota _j\), there must be some \(\ell \) with \(i< \ell < j\), \(\iota _\ell \ne \iota _i\), and \(\varepsilon _{\iota _i, \iota _\ell } = 0\); moreover, there is some \(s \in \mathcal {K}_{\iota _i}\cap \mathcal {K}_{\iota _\ell }\). Let \(B_1\in \pi _\sigma \) be the block containing \(\ell \); as \(\iota _i \ne \iota _\ell \), \(B_0 \ne B_1\). Now since \(\sigma _s\) describes a noncrossing partition of which both \(B_0\) and \(B_1\) are blocks, we must have \(B_1 \subset \left\{ i+1, \ldots , j-1\right\} \), contradicting the minimality of the length of \(B_0\). \(\square \)
Further considerations
The orthogonal case
The preprint [15] of Morampudi and Laumann (which we were not aware of when preparing this manuscript) contains a result quite similar to Theorem 9, with motivations arising from quantum many-body systems, and a graphical approach to their arguments. The preprint explicitly raises the question of the behaviour of such models with different symmetry groups; we will show that our techniques may be readily adapted to also describe the asymptotic distribution of orthogonally invariant matrix ensembles.
Let us adopt the setting from Section 3, and let \((O_\iota )_{\iota \in \mathcal {I}}\) be a family of independent orthogonal matrices with \(O_\iota \) Haar-distributed in \(\mathcal {O}(\mathcal {M}_\iota ^{(N)})\).
Theorem 9
Let \((a_{\iota ,j}^{(N)})_N \in O_\iota \mathcal {M}_\iota ^{(N)}O_\iota ^*\) be a collection of sequences of random matrices, indexed by \((\iota , j) \in \mathcal {I}\times J\), such that for each \(\iota \), the collection \(\left( (a_{\iota ,j}^{(N)})_N\right) _j\) has a joint limiting \(*\)-distribution in the large N limit as per Definition 3. Then the entire family \(\left( (a_{\iota ,j}^{(N)})_N\right) _{\iota ,j\in \mathcal {I}\times J}\) has a joint limiting \(*\)-distribution in the large N limit, and the original families are \(\varepsilon \)-free where
$$\begin{aligned} \varepsilon _{\iota \iota '} = {\left\{ \begin{array}{ll} 1 &{} \text {if } \mathcal {K}_\iota \cap \mathcal {K}_{\iota '} = \emptyset ,\\ 0 &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$
The thrust of the argument will be to show that the combinatorics are in correspondence with the unitary setting. We replace the Weingarten function with its orthogonal analogue, but show that the differences caused by this vanish asymptotically, and we are left in a situation where the computation may proceed as in the unitary case.
We recall now some useful results from [9] about the orthogonal Weingarten calculus. Let \(P_{2k}\) be the set of pairings on the set \(\left\{ 1, \ldots , 2k\right\} \), and note that each such pairing induces a permutation of order two; then for \(N \ge k\), the orthogonal Weingarten function \({\text {Wg}}_\mathcal {O}(\cdot , \cdot , N) : P_{2k}\times P_{2k} \rightarrow \mathbb {C}\) satisfies
$$\begin{aligned} \int u_{i_1j_1}u_{i_2j_2}\cdots u_{i_{2k}j_{2k}}\,du = \sum _{p, q \in P_{2k}} \Delta _p(\mathbf {i})\,\Delta _q(\mathbf {j})\,{\text {Wg}}_\mathcal {O}(p, q, N), \qquad \Delta _p(\mathbf {i}) := \prod _{\left\{ a,b\right\} \in p}\delta _{i_a, i_b}, \end{aligned}$$
where the u's are the entries of a Haar-distributed orthogonal matrix in \(\mathcal {O}_N\). The asymptotic behaviour of \({\text {Wg}}_\mathcal {O}\) is given by
$$\begin{aligned} {\text {Wg}}_\mathcal {O}(p, q, N) = N^{\frac{1}{2}\#(pq)-2k}\left( \mu (p, q) + o(1)\right) , \end{aligned}$$
where \(\mu : P_{2k}^2 \rightarrow \mathbb {Z}_*\) is some function.
We now point out the following: the delta functions arising for any fixed \(s \in S\) are independent of all the other strings in S. The delta functions arising from those indices \(2\ell +1, 2\ell +2\) for which \(s \in \mathcal {K}_{\iota _\ell }\) correspond precisely to the delta functions one encounters when computing the expected trace of a corresponding product of matrices in a single matrix algebra \(M_N(\mathbb {C})\), conjugated by matrices in \(\mathcal {O}(N)\); as in the unitary case, we also have the restriction that the choices of these indices constrain the choices of all indices corresponding to \(2\ell +1, 2\ell +2\) for \(s \notin \mathcal {K}_{\iota _\ell }\). But conjugation by independent Haar-distributed orthogonal or independent Haar-distributed unitary matrices both lead to asymptotic freeness (see [9]), i.e.,
$$\begin{aligned} \lim _{N\rightarrow \infty } E\left[ {\text {tr}}\left( Q_1B_1Q_1^*\cdots Q_mB_mQ_m^*\right) \right] = \lim _{N\rightarrow \infty } E\left[ {\text {tr}}\left( V_1B_1V_1^*\cdots V_mB_mV_m^*\right) \right] \end{aligned}$$
whenever the Q’s are drawn from a pool of independent Haar-distributed matrices in \(\mathcal {O}(N)\), the V’s are drawn similarly from \(\mathcal {U}(N)\) (with the same pattern of independence or equality as the choice of Q’s), and the B’s are arbitrary. It follows from a linear independence argument that the only pairings which contribute asymptotically to the sum arising from expanding the left hand side are those of the same type as in the unitary case, i.e., those which never match two elements of the same parity. (A more direct argument may be found in [9, Lemma 5.1].)
It follows that exactly the same computation carries through as in the unitary case, and we arrive at the same result. We point out that although the \(\frac{1}{2}\#(\sigma \tau )\) in the exponent of the asymptotic expansion of the Weingarten function appears different from the unitary case, it merely accounts for the fact that in this picture every cycle is doubled, depending on whether one begins with an odd or even index.
Applications of the main results
Let us quote two corollaries of our main results. We start with a corollary of operator algebraic flavor:
Corollary 10
Let K be any loop-free and multiplicity-free unoriented graph on k vertices, and \((M_1,\tau _1),\ldots , (M_k,\tau _k)\) be tracial von Neumann noncommutative probability spaces, i.e., each \(M_i\) is a finite von Neumann algebra and \(\tau _i\) is a normal faithful trace. If all \((M_i,\tau _i)\) satisfy the Connes embedding property, then so does their von Neumann \(\varepsilon \)-product.
Note here that we did not define the von Neumann \(\varepsilon \)-product; it is just the completion of the \(\varepsilon \)-product of the \((M_i,\tau _i)\) in the GNS representation associated to the product trace.
Proof
It is enough to assume that each \(M_i\) is finitely generated. The fact that \((M_i,\tau _i)\) satisfies the Connes embedding property means that \(M_i\) embeds in \(R^{\omega }\) in a trace-preserving way, and that its generators admit a matrix model. See, for example, [7]. We use our construction with this matrix model to conclude. \(\square \)
Let us finish with a corollary in geometric group theory of the above corollary. We recall that a group is hyperlinear if its group von Neumann algebra satisfies the Connes embedding property. We refer to [3, 16] for more details on the notion of hyperlinearity (and the Connes embedding problem). The notion of a graph product of groups has been introduced and studied in group theory; cf., in particular, [10,11,12] for early works on this topic. We can state the following:
Corollary 11
If \(G_1, \ldots , G_k\) are hyperlinear groups and K is any loop-free and multiplicity-free unoriented graph on k vertices, then the graph product of these groups over K is also hyperlinear.
This is just a consequence of the fact that the group algebra of the graph product is the \(\varepsilon \)-product of the group \(*\)-algebras of the \(G_i\), and an application of the above corollary.
Notes
 1.
In fact, Młotkowski referred to this concept as “\(\Lambda \)-free probability”, but we follow Speicher and Wysoczański here and use the term “\(\varepsilon \)-free probability” instead.
 2.
Even approximating a minimum clique edge cover within a factor of 2 cannot be done in polynomial time unless \(P=NP\).
 3.
In the literature, it is more common to take the right action of \(S_k\); however, using the left action makes some of our later arguments cleaner and does not affect the defining property of the Weingarten function, since it is constant on conjugacy classes and so invariant under replacing \(\sigma \tau ^{-1}\) by \(\tau \sigma ^{-1}\).
References
 1.
Atkinson, S.: On graph products of multipliers and the Haagerup property for \({C}^*\)-dynamical systems. Ergodic Theory Dyn. Syst. 1–29 (2019)
 2.
Biane, P.: Representations of symmetric groups and free probability. Adv. Math. 138(1), 126–181 (1998)
 3.
Capraro, V., Lupini, M.: Introduction to Sofic and Hyperlinear Groups and Connes’ Embedding Conjecture. Lecture Notes in Mathematics, vol. 2136. Springer, Cham (2015)
 4.
Caspers, M.: Connes embeddability of graph products. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 19(1), 1650004 (2016)
 5.
Caspers, M., Fima, P.: Graph products of operator algebras. J. Noncommut. Geom. 11(1), 367–411 (2017)
 6.
Collins, B.: Moments and cumulants of polynomial random variables on unitary groups, the Itzykson–Zuber integral, and free probability. Int. Math. Res. Not. 17, 953–982 (2003)
 7.
Collins, B., Dykema, K.: A linearization of Connes’ embedding problem. New York J. Math. 14, 617–641 (2008)
 8.
Collins, B., Nechita, I.: Random quantum channels I: graphical calculus and the Bell state phenomenon. Comm. Math. Phys. 297(2), 345–370 (2010)
 9.
Collins, B., Śniady, P.: Integration with respect to the Haar measure on unitary, orthogonal and symplectic group. Comm. Math. Phys. 264(3), 773–795 (2006)
 10.
Droms, C.: Graph groups, coherence, and threemanifolds. J. Algebra 106(2), 484–489 (1987)
 11.
Droms, C.: Isomorphisms of graph groups. Proc. Amer. Math. Soc. 100(3), 407–408 (1987)
 12.
Droms, C.: Subgroups of graph groups. J. Algebra 110(2), 519–522 (1987)
 13.
Kou, L.T., Stockmeyer, L.J., Wong, C.K.: Covering edges by cliques with regard to keyword conflicts and intersection graphs. Comm. ACM 21(2), 135–139 (1978)
 14.
Młotkowski, W.: \(\Lambda \)-free probability. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 7(01), 27–41 (2004)
 15.
Morampudi, S.C., Laumann, C.R.: Many-body systems with random spatially local interactions. Phys. Rev. B 100(24), 245152 (2019)
 16.
Pestov, V.G.: Hyperlinear and sofic groups: a brief guide. Bull. Symbolic Logic 14(4), 449–480 (2008)
 17.
Speicher, R., Weber, M.: Quantum groups with partial commutation relations. Indiana Univ. Math. J. 68(6), 1849–1883 (2019)
 18.
Speicher, R., Wysoczański, J.: Mixtures of classical and free independence. Arch. Math. (Basel) 107(4), 445–453 (2016)
Acknowledgements
This research project was initiated at PCMI during a Random Matrix program in 2017, continued at IPAM during an Operator Algebra program in 2018, and completed during visits of the authors at each other’s institutions in 2019. We thank all these institutions for a fruitful working environment.
Finally, we would like to express our gratitude to Guillaume Cébron: after we released the first version of this manuscript, he pointed out a preprint of Morampudi and Laumann [15], which we were not aware of. Although the motivations and perspective are rather different—[15] is of physical motivation and nature—it turns out to be quite relevant because it studies similar random objects and describes related phenomena to those considered here. We hope that the approach through noncommutative probability contained in this manuscript will trigger further interactions with the study of quantum manybody systems and theoretical physics.
IC was supported by NSF DMS grant 1803557, and BC was supported by JSPS KAKENHI 17K18734, 17H04823, 15KK0162, and ANR14CE250003.
Cite this article
Charlesworth, I., Collins, B.: Matrix models for \(\varvec{\varepsilon }\)-free independence. Arch. Math. 116, 585–600 (2021). https://doi.org/10.1007/s00013-020-01569-7
Keywords
 Free independence
 Lambda-freeness
 Random matrices
Mathematics Subject Classification
 46L54