1 Introduction

Gauge theories are the main ingredients of the current standard model (SM) of particle physics, which unifies the electromagnetic, weak and strong interactions. Despite the tremendous success of the SM, first-principles calculations in the non-Abelian gauge theories underlying it, for instance quantum chromodynamics (QCD), which describes the strongly interacting part of the SM, remain challenging. Over the last decades, lattice field theoretical methods have been developed and optimised with great success, providing a non-perturbative approach for the investigation of such gauge theories using Monte Carlo (MC) methods.

However, studying QCD at finite density or its real time dynamics is difficult, if not impossible, with MC methods, either due to the sign problem or because Euclidean space-time is used. Here, methods based on the Hamiltonian formalism in Minkowski space-time can provide a way out. In fact, tensor network (TN) methods have seen very rapid developments in recent years towards the possibility of simulations in \(2+1\) and \(3+1\) space-time dimensions [1, 2], and the number of qubits available on real quantum devices keeps increasing. This offers the prospect of studying gauge theories with tensor network methods or on quantum computers in the not too distant future.

The Hamiltonian formalism for non-Abelian gauge theories with or without matter content was presented a long time ago in Ref. [3]. Its implementation with TN methods or on a quantum computer, however, requires some form of digitisation of SU(N).

There are different ways to digitise SU(N), or more specifically SU(2), which we study in this paper. One can, for instance, choose a discrete subgroup of SU(2). In the early days of lattice gauge theory simulations such discrete subgroups were already investigated to improve the efficiency of the simulation programmes. It was soon realised that, due to the finite number of elements in such subgroups, a so-called freezing phase transition occurs at some critical \(\beta \)-value [4, 5]. For \(\beta \)-values larger than this critical value MC simulations are no longer reliable, because they result in the wrong distribution (for results in a \(Z_N\) gauge theory see Refs. [6, 7]). There exist different approaches to overcome this problem: one is to choose a subgroup with a larger number of elements, if available; the alternative is to improve the action in order to be able to simulate at relevant values of the lattice spacing. We follow the former here, because it can be applied in addition to improved actions: the two approaches are in some sense orthogonal.

With the rising interest in quantum computation, interest in digitisations of gauge groups also increased again: in Ref. [8] a geodesic mesh was used to discretise SU(2). The authors study systematic effects of this discretisation in detail around \(\beta =2\), a choice motivated by the onset of the scaling region.

For the gauge group SU(3) particular choices of digitisations were first studied in Refs. [9,10,11,12]. For this gauge group, however, improving the action is mandatory [10], otherwise simulations at values of the lattice spacing in the interesting region are not possible. Recently it was shown that with modified gauge action and a particular SU(3) subgroup MC simulations are feasible with sufficiently small lattice spacing values [13,14,15,16].

While discrete subgroups have the advantage of being closed under multiplication, they offer no flexibility in the number of group elements. This motivates using the isomorphism between SU(2) and the sphere \(S_3\) in four dimensions. The aim is then to find sets of points on \(S_3\), depending on some parameter m, which become dense in \(S_3\) as m approaches infinity.

In this paper we investigate all the discrete subgroups of SU(2) and several representative discretisations of \(S_3\), including discretisations which have not been studied before; in this respect we go significantly beyond Ref. [8]. We study the freezing transition as a function of the number of elements in these discretisations and show that the discretisation based on so-called Fibonacci lattices behaves optimally. Moreover, we determine the critical \(\beta \)-value of the freezing transition for the different discretisations and compare and connect to the analytical understanding of this phase transition.

2 Lattice action

We work on a hypercubic, Euclidean lattice with the set of lattice sites

$$\begin{aligned} \Lambda \ =\ \{n=(n_0,\ldots , n_{d-1})\in {\mathbb {N}}_0^d: n_\mu = 0, 1, \ldots , L-1\}, \end{aligned}$$

with \(L\in {\mathbb {N}}\). At every site there are \(d\ge 2\) link variables \(U_\mu (n)\in {\mathrm {SU}}(2)\) connecting to sites in forward direction \(\mu =0, \ldots , d-1\). We define the plaquette operator as

$$\begin{aligned} P_{\mu \nu }(n)\ =\ U^{~}_\mu (n) U^{~}_\nu (n+{\hat{\mu }}) U^\dagger _\mu (n+{\hat{\nu }}) U^\dagger _\nu (n), \end{aligned}$$
(1)

where \({\hat{\mu }}\in {\mathbb {N}}_0^d\) is the unit vector in direction \(\mu \). In terms of \(P_{\mu \nu }\) we can define Wilson’s lattice action [17]

$$\begin{aligned} S = -\frac{\beta }{2}\sum _n \sum _{\mu <\nu } {\mathrm {Re}}\,{\mathrm {Tr}}\,P_{\mu \nu }(n), \end{aligned}$$
(2)

with \(\beta \) the inverse squared gauge coupling. We will use the Metropolis Markov Chain Monte Carlo algorithm to generate chains of sets \({\mathcal {U}}_i\) of link variables \({\mathcal {U}} = \{U_\mu (n): n\in \Lambda , \mu =0,\ldots , d-1\}\) distributed according to

$$\begin{aligned} {\mathbb {P}}({\mathcal {U}})\ \propto \ \exp [-S({\mathcal {U}})]. \end{aligned}$$
(3)

The main observable we will study in this paper is the plaquette expectation value defined as

$$\begin{aligned} \langle P\rangle = \frac{1}{N}\sum _{i=1}^N\ P({\mathcal {U}}_i) \end{aligned}$$
(4)

with

$$\begin{aligned} P({\mathcal {U}}) = \frac{2}{d(d-1) L^d}\sum _n \sum _{\mu <\nu } {\mathrm {Re}}\,{\mathrm {Tr}}\,P_{\mu \nu }(n). \end{aligned}$$
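To make the observable concrete, the following minimal Python sketch evaluates \(P({\mathcal {U}})\) for a given configuration. The array layout (one complex \(2\times 2\) matrix per site and direction) and the periodic boundary conditions are assumptions of this sketch, not specifications from the text.

```python
import numpy as np

def plaquette_average(U, L, d):
    """Normalised plaquette P(U) entering Eq. (4) for SU(2) links.

    U is assumed to hold one complex 2x2 matrix per site and direction,
    i.e. to have shape (L,)*d + (d, 2, 2); boundaries are taken periodic.
    """
    total = 0.0
    for n in np.ndindex(*(L,) * d):
        for mu in range(d):
            for nu in range(mu + 1, d):
                n_mu = list(n); n_mu[mu] = (n[mu] + 1) % L  # n + mu-hat
                n_nu = list(n); n_nu[nu] = (n[nu] + 1) % L  # n + nu-hat
                # plaquette P_{mu nu}(n) of Eq. (1)
                P = (U[n][mu] @ U[tuple(n_mu)][nu]
                     @ U[tuple(n_nu)][mu].conj().T @ U[n][nu].conj().T)
                total += P.trace().real
    # normalisation 2 / (d (d-1) L^d) from the definition above
    return 2.0 * total / (d * (d - 1) * L ** d)
```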

3 SU(2) partitionings

In Monte Carlo simulations of lattice SU(N) gauge theories using the Metropolis algorithm or some variant of it, one typically requires a proposal gauge link at site n in direction \(\mu \) obtained as

$$\begin{aligned} U'_\mu (n) = V\cdot U_\mu (n). \end{aligned}$$

Here, V is a random element of SU(N) with average distance \(\delta \) to the identity element. The average distance, measured using some norm, determines the acceptance rate of the MC algorithm.

The actual value of \(\delta \) needs to be adjusted to tune the acceptance rate to about 50%, which implies that for \(\beta \rightarrow \infty \) one needs to decrease \(\delta \) further and further.

In numerical simulations, one nowadays represents an element U of SU(N) by an \(N\times N\) complex-valued matrix constrained to be unitary with unit determinant. Every complex number is then represented by two floating point numbers with accuracy limited by the adopted data type (usually double precision floating point numbers). The results obtained with this quasi-continuous representation of SU(2) will be referred to as reference results in the following. For \(\beta \)-values of practical relevance it imposes no restriction on the possible elements \(U'\): small enough distances \(\delta \) are possible.
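As an illustration of the proposal step, a hypothetical helper drawing V close to the identity could look as follows; the concrete distribution of the rotation angle and axis is our choice, not a prescription from the text.

```python
import numpy as np

_SIGMA = (np.array([[0, 1], [1, 0]], dtype=complex),      # Pauli matrices
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex))

def proposal_near_identity(delta, rng):
    """Random V = exp(i eps n.sigma) in SU(2), at distance ~eps <= delta
    from the identity (one possible distribution, assumed here)."""
    eps = delta * rng.random()
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)                                 # random axis on S_2
    H = sum(ni * si for ni, si in zip(n, _SIGMA))          # n.sigma, H^2 = 1
    return np.cos(eps) * np.eye(2) + 1j * np.sin(eps) * H  # exp(i eps n.sigma)
```

Tuning \(\delta \) in such a helper then directly controls the acceptance rate, as described above.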

Table 1 Quaternionic representation of \({\overline{D}}_4\), \({\overline{T}}\), \({\overline{O}}\) and \({\overline{I}}\) as found in [19], where \(\varphi = \frac{1+\sqrt{5}}{2}\) denotes the golden ratio

However, this is not necessarily the case if a finite set of elements of SU(N) is to be used, like for instance a finite subgroup of SU(N). Here, there is a lower bound for the distance between two available elements, which significantly restricts the possible proposal gauge links. For too large \(\beta \)-values, therefore, the acceptance drops to (almost) zero, an effect that was dubbed freezing transition [4].

This transition can be pushed towards larger and larger \(\beta \)-values by increasing the number of elements in the set. Since there are in general no finite subgroups of SU(N) with arbitrarily many elements available, one needs to resort to sets of elements which do not form a subgroup of SU(N), but which lie asymptotically dense and are distributed as isotropically as possible in the group. We will call these sets partitionings of SU(N).

Focusing on SU(2), we discuss first some finite subgroups followed by other partitionings of SU(2).

3.1 Finite subgroups of SU\((2)\)

Due to the double-cover relation between SU(2) and SO(3), the finite subgroups of SU(2) can be obtained as the preimages of the finite subgroups of SO(3) under the covering map. The subgroups of SO(3) are obtained by considering the symmetry transformations of regular polygons, as well as the rotational symmetries of the platonic solids [18].

In the following we will consider the binary tetrahedral group \({\overline{T}}\), the binary octahedral group \({\overline{O}}\) and the binary icosahedral group \({\overline{I}}\), with 24, 48 and 120 elements, respectively. Their elements are evenly distributed across the whole group, and their behaviour has already been investigated [4]. The fourth finite subgroup we consider is the binary dihedral group \({{\overline{D}}}_4\) with 8 elements. One possible representation of these groups can be found in Table 1.

3.2 Asymptotically dense partitionings of SU(2)

For generating partitionings of SU(2), we use the isomorphism between SU(2) and the sphere \(S_3\) in four dimensions, which is defined by

$$\begin{aligned} x\in S_3\ \Leftrightarrow \ \begin{pmatrix} x_0 + \mathrm {i}x_1 &{} x_2 + \mathrm {i}x_3 \\ -x_2 + \mathrm {i}x_3 &{} x_0 - \mathrm {i}x_1\\ \end{pmatrix}\in {\mathrm {SU}}(2). \end{aligned}$$
(5)

For such partitionings, the number of elements can be increased very easily, i.e. the discretisation of SU(2) can be made arbitrarily fine.
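As a minimal illustration, the map of Eq. (5) reads in code (the function name is ours):

```python
import numpy as np

def s3_to_su2(x):
    """Map a point x = (x0, x1, x2, x3) on S_3 to SU(2), cf. Eq. (5)."""
    x0, x1, x2, x3 = x
    return np.array([[x0 + 1j * x1, x2 + 1j * x3],
                     [-x2 + 1j * x3, x0 - 1j * x1]])

# sanity check: any unit 4-vector yields a unitary matrix with determinant 1
x = np.random.default_rng(0).normal(size=4)
x /= np.linalg.norm(x)
U = s3_to_su2(x)
assert np.allclose(U @ U.conj().T, np.eye(2)) and np.isclose(np.linalg.det(U), 1)
```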

The reduction of SU(2) to a sphere can be generalised to U(N) and SU(N), which can be expressed as products of spheres. To this end, we note that U(1) is isomorphic to \(S_1\) and \( \text {U}(N)\cong \text {SU}(N)\rtimes \text {U}(1)\), where \(\rtimes \) denotes the semi-direct product. This follows from the existence of the short exact sequence \(1\rightarrow \text {SU}(N)\rightarrow \text {U}(N){\mathop {\longrightarrow }\limits ^{\det }}\text {U}(1)\rightarrow 1\).

With respect to SU(N), we note that SU(N) acts transitively on \(S_{2N-1}\): the point \((1,0,0,\ldots ,0)\) is mapped to a point z by any element of SU(N) whose first column is z, and the isotropy subgroup of \((1,0,0,\ldots ,0)\) is the SU\((N-1)\) embedding

$$\begin{aligned} \begin{pmatrix} 1&{}0\\ 0&{}\text {SU}(N-1) \end{pmatrix}. \end{aligned}$$

Hence, we obtain

$$\begin{aligned} \text {SU}(N-1)\rightarrow \text {SU}(N)\rightarrow \text {SU}(N)/\text {SU}(N-1)\cong S_{2N-1} \end{aligned}$$

which implies that SU(N) is a principal bundle over \(S_{2N-1}\) with fibre SU\((N-1)\). Thus, by induction with SU\((2)\cong S_3\), we can express SU(N) as a product of odd-dimensional spheres \(S_3\), \(S_5\), \(\ldots \), \(S_{2N-1}\), and U(N) as a product of odd-dimensional spheres \(S_1\), \(S_3\), \(\ldots \), \(S_{2N-1}\).

Our aim is therefore to find a discretisation scheme of the k-dimensional sphere \(S_k\), depending on some parameter m, such that the discretising set \(S^m_k\) becomes dense in \(S_k\) as m goes to infinity. The following examples all meet this requirement, yet they differ in the measure or probabilistic weight attributed to each point. This measure w is defined as the volume of the Voronoi cell [21, 22] of the point, using the canonical metric on \(S_k\) derived from the Euclidean distance, i.e. the measure is the volume of that part of the sphere closer to the given point than to any other point.

3.2.1 Genz points

A first, quite intuitive, partitioning is given by the Genz points [23] setting \(S^m_k=G_m(k)\) where we define

$$\begin{aligned} G_m(k)&:=\left\{ \left( s_0\sqrt{\frac{j_0}{m}},\dots ,s_k\sqrt{\frac{j_k}{m}}\right) \left| \,\sum _{i=0}^kj_i=m,\right. \right. \nonumber \\&\qquad \,\left. \forall i\in \{0,\dots ,k\}:\,s_i\in \{\pm 1\},\,j_i\in {\mathbb {N}}\right\} , \end{aligned}$$
(6)

that is, all integer partitions \(\{ j_0,\dots ,j_k\}\) of \(m\ge 1\), including all permutations and all possible sign combinations. Whenever the argument is dropped, we implicitly set \(k=3\). The nearest neighbours of a Genz point can be found (up to sign changes) by choosing all pairs \(i,l\in \{0,\dots ,k\}\) with \(j_i>0\) and \(j_l<m\) and replacing

$$\begin{aligned} j_i\mapsto j_i-1,\quad j_l\mapsto j_l+1. \end{aligned}$$
(7)

Note that all components of a Genz point other than the i-th and l-th remain unchanged by such a replacement, because the denominator \(\sqrt{m}\) is fixed. The square distance between neighbouring points reads

$$\begin{aligned} d(j_i,j_l)^2 = \left| \left( \sqrt{\frac{j_i}{m}}-\sqrt{\frac{j_i-1}{m}},\,\sqrt{\frac{j_l}{m}}-\sqrt{\frac{j_l+1}{m}}\right) \right| ^2, \end{aligned}$$
(8)

which can be evaluated to (see Appendix A for details)

$$\begin{aligned} d(j_i,j_l)^2 = \frac{1}{m} \left( \frac{1}{4j_i}+\frac{1}{4j_l}+{{\,\mathrm{{\mathcal {O}}}\,}}\left( j_i^{-2}\right) +{{\,\mathrm{{\mathcal {O}}}\,}}\left( j_l^{-2}\right) \right) , \end{aligned}$$
(9)

which is highly anisotropic: in the regions where both \(j_i\) and \(j_l\) are of the order m the distance scales as \(d\sim \frac{1}{m}\), whereas smaller values of \(j_i\) and \(j_l\) lead to \(d\sim \frac{1}{\sqrt{m}}\). The minimal distance is \(d(\frac{m}{2},\frac{m}{2})=\frac{1}{m} +{{\,\mathrm{{\mathcal {O}}}\,}}\left( m^{-3/2}\right) \) and the maximum is reached at \(d(1,0)=\sqrt{\frac{2}{m}}\). We thus find that the distance depends not only on the position of the point but also on the choice of the neighbour. Therefore even an approximation of the measure w would require a product over the distances of a given point to all its neighbours.

As a concluding remark we note that in k dimensions the weights of different points differ by a factor of up to

$$\begin{aligned} \frac{w_\text {max}}{w_\text {min}} \sim m^{k/2} \end{aligned}$$
(10)

where the lowest density (largest measure) occurs where many of the \(j_i\) are zero. This is in particular the case near the poles.
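For illustration, a direct (unoptimised) enumeration of \(G_m(k)\) from Eq. (6) might look as follows; duplicate points arising from sign flips on zero components are removed by the set.

```python
import itertools
import numpy as np

def genz_points(m, k=3):
    """All Genz points G_m(k) of Eq. (6): signed square roots j_i/m of
    the compositions (j_0, ..., j_k) of m (a direct, unoptimised sketch)."""
    pts = set()
    for cut in itertools.combinations(range(m + k), k):  # stars and bars
        j = np.diff((-1,) + cut + (m + k,)) - 1          # sum(j) == m
        for s in itertools.product((1, -1), repeat=k + 1):
            pts.add(tuple(si * np.sqrt(ji / m) for si, ji in zip(s, j)))
    return np.array(sorted(pts))
```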

3.2.2 Linear discretisation

In order to avoid the aforementioned anisotropy, we consider the following linearly discretised set of points, equivalent to the geodesic mesh used in [8]:

$$\begin{aligned}&L_m(k) :=\left\{ \frac{1}{M}\left( s_0j_0,\dots ,s_kj_k\right) \left| \,\sum _{i=0}^kj_i=m,\right. \right. \nonumber \\&\qquad \qquad \,\left. \forall i\in \{0,\dots ,k\}:\,s_i\in \{\pm 1\},\,j_i\in {\mathbb {N}}\right\} , \end{aligned}$$
(11)
$$\begin{aligned}&M :=\sqrt{\sum _{i=0}^kj_i^2}. \end{aligned}$$
(12)

M takes values \(\frac{m}{\sqrt{k+1}}\le M \le m\). The lower bound is attained when all the \(j_i\) are equal and the upper one when all but one \(j_i\) are zero. Note that \(L_1\) happens to coincide with the finite subgroup \({\bar{D}}_4\).
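The enumeration mirrors the Genz sketch above; only the normalisation changes (again a sketch under the same assumptions):

```python
import itertools
import numpy as np

def linear_points(m, k=3):
    """The linear partitioning L_m(k) of Eq. (11): the same compositions
    as for the Genz points, normalised by M of Eq. (12)."""
    pts = set()
    for cut in itertools.combinations(range(m + k), k):
        j = np.diff((-1,) + cut + (m + k,)) - 1
        M = np.sqrt(float((j ** 2).sum()))       # m/sqrt(k+1) <= M <= m
        for s in itertools.product((1, -1), repeat=k + 1):
            pts.add(tuple(si * ji / M for si, ji in zip(s, j)))
    return np.array(sorted(pts))
```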

We find the nearest neighbours as before, Eq. (7), and we obtain the change in M from neighbour to neighbour as

$$\begin{aligned} \Delta M = \frac{j_l-j_i}{M} + {{\,\mathrm{{\mathcal {O}}}\,}}\left( \frac{1}{m}\right) \end{aligned}$$
(13)

yielding the inverse change

$$\begin{aligned} \frac{1}{M} - \frac{1}{M+\Delta M}= & {} \frac{\Delta M}{M^2}+{{\,\mathrm{{\mathcal {O}}}\,}}\left( \frac{1}{M^3}\right) \nonumber \\= & {} \frac{j_l-j_i}{M^3} + {{\,\mathrm{{\mathcal {O}}}\,}}\left( \frac{1}{m^3}\right) . \end{aligned}$$
(14)

With this we can again calculate the square distance (with a definition equivalent to Eq. (8); for details see Appendix A)

$$\begin{aligned} d(j_i,j_l)^2 = \frac{(j_l-j_i)^2}{M^4}+\frac{2}{M^2}+{{\,\mathrm{{\mathcal {O}}}\,}}\left( \frac{1}{m^3}\right) . \end{aligned}$$
(15)

It follows from \(|j_l-j_i|\le M\) that \(\frac{\sqrt{2}}{M}\le d \le \frac{\sqrt{3}}{M}\) to leading order. Thus the distance has only a weak dependence on the direction and it always scales as \(d\sim \frac{1}{m}\) with a difference of at most a factor \(\sqrt{k+1}\) between different points. This difference is governed by the range of M. We therefore find the largest density of points (smallest distance) with the largest values of M at the poles.

Fig. 1 Fibonacci lattices on \(S_2\) with 20 (blue), 100 (orange) and 500 (green) vertices

A good approximation for the weights is given by

$$\begin{aligned} w \approx \left( \frac{\sqrt{2}}{M}\right) ^k \end{aligned}$$
(16)

with the largest deviation

$$\begin{aligned} \frac{w_\text {max}}{w_\text {min}} \sim (k+1)^{k/2}. \end{aligned}$$
(17)

3.2.3 Volleyball

A third partitioning is reminiscent of a volleyball. It is the simplest class of geodesic polytopes [24] to construct, with its points given by

$$\begin{aligned} V_m(k) :=\left\{ \frac{1}{M} \left( s_0 j_0, \dots , s_k j_k \right) \,\Big |\, (j_0, \dots , j_k) \text { a permutation of } (m, a_1, \dots , a_k),\ s_i\in \{\pm 1\},\ a_i \in \{0, \dots , m\} \right\} \end{aligned}$$
(18)

with M defined in Eq. (12), which takes values \(m \le M \le \sqrt{k+1} \, m\).

Additionally, the corners of the hypercube, in four dimensions also called \(C_8\), form

$$\begin{aligned} V_0(k)&:=\left\{ \frac{1}{\sqrt{k+1}} \left( s_0, \dots , s_k \right) | \,s_i\in \{\pm 1\}\right\} . \end{aligned}$$
(19)
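A sketch of the enumeration for \(m\ge 1\) follows (the \(m=0\) hypercube of Eq. (19) is a separate, trivial set):

```python
import itertools
import numpy as np

def volleyball_points(m, k=3):
    """The Volleyball partitioning V_m(k) of Eq. (18) for m >= 1: one
    component has modulus m, the rest range over {0, ..., m}."""
    pts = set()
    for a in itertools.product(range(m + 1), repeat=k):
        for j in set(itertools.permutations((m,) + a)):
            M = np.sqrt(float(sum(ji ** 2 for ji in j)))  # Eq. (12)
            for s in itertools.product((1, -1), repeat=k + 1):
                pts.add(tuple(si * ji / M for si, ji in zip(s, j)))
    return np.array(sorted(pts))
```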

For \(m\ge 1\) nearest neighbours can be obtained by \(j_i \pm 1\), as long as the conditions from above hold. The corresponding change in M is computed to be

$$\begin{aligned} \Delta M = \frac{\pm j_i}{M} + {{\,\mathrm{{\mathcal {O}}}\,}}\left( \frac{1}{m}\right) , \end{aligned}$$
(20)

yielding the inverse change

$$\begin{aligned} \frac{1}{M} - \frac{1}{M+\Delta M}&= \frac{\Delta M}{M^2}+{{\,\mathrm{{\mathcal {O}}}\,}}\left( \frac{1}{M^3}\right) \nonumber \\&= \frac{\pm j_i}{M^3} + {{\,\mathrm{{\mathcal {O}}}\,}}\left( \frac{1}{m^3}\right) . \end{aligned}$$
(21)

The square distance in this case reads (see again Appendix A for details)

$$\begin{aligned} d(j_i, j_l)^2 = \frac{j_i^2}{M^4}+\frac{1}{M^2}+{{\,\mathrm{{\mathcal {O}}}\,}}\left( \frac{1}{m^3}\right) , \end{aligned}$$
(22)

where from \(|j_i|\le M\) it follows that \(\frac{1}{M}\le d \le \frac{\sqrt{2}}{M}\) to leading order. Thus, like for the linear partitioning \(L_m(k)\), the distance has only a weak direction dependence and it always scales as \(d\sim \frac{1}{m}\) with a difference of at most a factor \(\sqrt{k+1}\) between different points. This difference is governed by the range of M. We therefore find the largest density of points (smallest distance) at the largest values of M, which here are reached on the diagonals, i.e. at the corners of the hypercube rather than at the poles. Then, a good approximation for the weights is given by

$$\begin{aligned} w \approx \left( \frac{1}{M}\right) ^k \end{aligned}$$
(23)

with the largest deviation

$$\begin{aligned} \frac{w_\text {max}}{w_\text {min}} \sim (k+1)^{k/2}. \end{aligned}$$
(24)

3.2.4 Fibonacci lattice

The final discretisation of SU\((2)\) considered in this work is a higher-dimensional version of the so-called Fibonacci lattice. It offers an elegant and deterministic solution to the problem of distributing a given number of points on a two-dimensional surface. Fibonacci lattices are used in numerous fields of research such as numerical analysis or computer graphics, mostly to approximate spheres (as shown e.g. in Fig. 1). Mainly inspired by [25], we will now construct a similar lattice for \(S_3\).

The two-dimensional Fibonacci lattice is usually constructed within a unit square \([0,1)^2\) as

$$\begin{aligned} \Lambda _n^2&= \left\{ {\tilde{t}}_m \big | 0 \le m < n, \, \, m \in {\mathbb {N}} \right\} \\ \text {with} \qquad {\tilde{t}}_m&= \begin{pmatrix}x_m\\ y_m\end{pmatrix} = \left( \frac{m}{\tau } \quad {\mathrm {mod}} \quad 1, \frac{m}{n} \right) ^t,\\ \tau&= \frac{1+\sqrt{5}}{2}. \end{aligned}$$

This can be generalised to the hypercube \([0,1)^k\) embedded in \({\mathbb {R}}^k\):

$$\begin{aligned} \Lambda _n^k&= \left\{ t_m \,\big |\, 0 \le m < n, \, \, m \in {\mathbb {N}} \right\} \\ t_m&= \left( t_m^1, t_m^2, \dots , t_m^k \right) ^t = \left( \frac{m}{n},\ a_1 m \bmod 1,\ \dots ,\ a_{k-1} m \bmod 1 \right) ^t \end{aligned}$$

with

$$\begin{aligned} \frac{a_i}{a_j} \notin {\mathbb {Q}} \quad \text {for} \ i \ne j \text {,} \end{aligned}$$

where \({\mathbb {Q}}\) denotes the field of rational numbers. The square roots of the prime numbers provide a simple choice for the constants \(a_i\):

$$\begin{aligned} (a_1, a_2 ,a_3, \dots ) = (\sqrt{2}, \sqrt{3}, \sqrt{5}, \dots ). \end{aligned}$$

The points in \(\Lambda _n^k\) are then evenly distributed within the given volume. All that is left to do is to map these points onto a given compact manifold M, in our case SU\((2)\). In order to maintain the even distribution of the points, such a map \(\Phi \) needs to be volume preserving in the sense that

$$\begin{aligned} \int _{\Omega \subseteq [0,1)^k} {{\mathrm {d}}}^k x = \frac{1}{{{\mathrm {Vol}}}(M)} \int _{\Phi (\Omega ) \subseteq M} {{\mathrm {d}}}V_M \end{aligned}$$
(25)

holds for all measurable sets \(\Omega \).

To find such a map for \(S_3\) (and therefore SU\((2)\)) we start by introducing spherical coordinates

$$\begin{aligned} z (\psi , \theta , \phi ) = \left( \begin{array}{l} \cos \psi \\ \sin \psi \cos \theta \\ \sin \psi \sin \theta \cos \phi \\ \sin \psi \sin \theta \sin \phi \end{array}\right) \end{aligned}$$
(26)

with

$$\begin{aligned} \psi \in [0,\pi ),\ \theta \in [0,\pi ),\ \phi \in [0,2\pi ). \end{aligned}$$

The metric tensor \(g_{ij}\) in terms of the spherical coordinates \((y_1, y_2, y_3) :=(\psi , \theta , \phi )\) is then given by

$$\begin{aligned} g_{ij}&= \frac{\partial z^a}{\partial y^i} \frac{\partial z^b}{\partial y^j} \delta _{ab} = \begin{pmatrix} 1 &{} 0 &{} 0 \\ 0 &{} \sin ^2 \psi &{} 0 \\ 0 &{} 0 &{} \sin ^2 \psi \sin ^2 \theta \\ \end{pmatrix}_{ij} \text {.} \end{aligned}$$

From this one can calculate the Jacobian \(\sqrt{|g|}\) to be

$$\begin{aligned} \sqrt{|g|}&= \sin ^2 \psi \sin \theta \text {.} \end{aligned}$$

As \(\sqrt{|g|}\) factorizes nicely into functions only dependent on one coordinate, one can construct a bijective map \(\Phi ^{-1}\) mapping \(S_3\) to \([0,1)^3\) given by \(\Phi ^{-1}(\psi ,\theta ,\phi ) = \left( \Phi _1^{-1} (\psi ), \Phi _2^{-1}(\theta ), \Phi _3^{-1} (\phi ) \right) \) with

$$\begin{aligned} \Phi _1^{-1} (\psi )&= \frac{\int _0^{\psi }{\text {d}}{\tilde{\psi }} \sin ^2 {\tilde{\psi }}}{\int _0^\pi {\text {d}} {\tilde{\psi }} \sin ^2 {\tilde{\psi }}}= \frac{1}{\pi } \left( \psi - \frac{1}{2} \sin ( 2 \psi ) \right) \\ \Phi _2^{-1} (\theta )&= \frac{\int _0^{\theta }{\text {d}}{\tilde{\theta }} \sin {\tilde{\theta }}}{\int _0^\pi {\text {d}}{\tilde{\theta }} \sin {\tilde{\theta }}} = \frac{1}{2} \left( 1-\cos (\theta ) \right) \\ \Phi _3^{-1} (\phi )&= \frac{\int _0^{\phi }{\text {d}}{\tilde{\phi }} }{\int _0^{2 \pi } {\text {d}} {\tilde{\phi }}} = \frac{1}{2 \pi } \phi . \end{aligned}$$

Looking at some measurable set \(\Omega = \Phi ^{-1} ({\tilde{\Omega }})\), one can see that the inverse map \((\Phi ^{-1})^{-1} \equiv \Phi \) trivially fulfils Eq. (25). A Fibonacci-like lattice on \(S_3\) is therefore given by

$$\begin{aligned} F_n&= \left\{ z\left( \psi _m(t_m^1), \theta _m(t_m^2), \phi _m(t_m^3)\right) \big | \, 0 \le m < n, \, \, m \in {\mathbb {N}} \right\} , \end{aligned}$$

with

$$\begin{aligned} \begin{aligned} \psi _m(t_m^1)&= \Phi _1 \left( t_m^1 \right) = \Phi _1 \left( \frac{m}{n}\right) ,\\ \theta _m(t_m^2)&= \Phi _2\left( t_m^2 \right) = \cos ^{-1}\left( 1-2(m\sqrt{2}\mod 1)\right) , \\ \phi _m (t_m^3)&= \Phi _3( t^3_m) = 2 \pi (m\sqrt{3} \mod 1). \end{aligned} \end{aligned}$$
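Putting the pieces together, a minimal sketch of \(F_n\) follows; since \(\Phi _1\) has no closed-form inverse, it is inverted numerically (here by bisection, a choice of ours).

```python
import numpy as np

def phi1(t, iters=60):
    """Invert t = (psi - sin(2 psi)/2) / pi for psi in [0, pi) by bisection."""
    lo, hi = 0.0, np.pi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (mid - 0.5 * np.sin(2.0 * mid)) / np.pi < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def fibonacci_s3(n):
    """The n-point Fibonacci-like lattice F_n on S_3; each row is a unit
    4-vector that maps to SU(2) via Eq. (5)."""
    pts = np.empty((n, 4))
    for m in range(n):
        psi = phi1(m / n)
        theta = np.arccos(1.0 - 2.0 * ((m * np.sqrt(2.0)) % 1.0))
        phi = 2.0 * np.pi * ((m * np.sqrt(3.0)) % 1.0)
        pts[m] = (np.cos(psi),
                  np.sin(psi) * np.cos(theta),
                  np.sin(psi) * np.sin(theta) * np.cos(phi),
                  np.sin(psi) * np.sin(theta) * np.sin(phi))
    return pts
```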

4 Methods

In order to test the performance of the finite subgroups and the partitionings discussed in the last section in Monte Carlo simulations, we use a standard Metropolis Monte Carlo algorithm. It consists of the following steps at site n in direction \(\mu \):

  1. Generate a proposal \(U_\mu '(n)\) from \(U_\mu (n)\).

  2. Compute \(\Delta S = S(U_\mu '(n)) - S(U_\mu (n))\).

  3. Accept with probability

     $$\begin{aligned} {\mathbb {P}}_{{\mathrm {acc}}} = \min \left\{ 1,\ \exp (-\Delta S)\frac{w(U_\mu '(n))}{w(U_\mu (n))}\right\} . \end{aligned}$$
     (27)

This procedure is repeated \(N_{{\mathrm {hit}}}\) times per n and \(\mu \) before moving on to the next \((n,\mu )\) pair. As reference we use an algorithm based on the double precision floating point representation of the two complex numbers a, b needed to represent an SU(2) matrix

$$\begin{aligned} U = \begin{pmatrix} a &{} b\\ -b^\star &{} a^\star \\ \end{pmatrix} \end{aligned}$$
(28)

with the additional constraint \(aa^\star + bb^\star = 1\). In this case \(w(U) =1\,\forall \ U\) and the proposal is generated via \(U_\mu '(n) = V\cdot U_\mu (n)\), as explained above. The algorithm can be tested, for instance, in the strong coupling limit \(\beta \rightarrow 0\) against the strong coupling expansion derived in Refs. [26, 27], which in d dimensions reads for the plaquette expectation value

$$\begin{aligned} \langle P\rangle (\beta )= & {} \frac{1}{4}\beta - \frac{1}{96}\beta ^3 + \left( \frac{d}{96} - \frac{5}{288}\right) \frac{3}{16}\beta ^5\nonumber \\&+ \left( -\frac{d}{96} + \frac{29}{1440}\right) \frac{1}{16}\beta ^7 + {\mathcal {O}}(\beta ^9). \end{aligned}$$
(29)
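For illustration, the accept step Eq. (27) and the expansion Eq. (29) used for validation take only a few lines (a sketch; the function names are ours):

```python
import numpy as np

def metropolis_accept(delta_S, w_new, w_old, rng):
    """Accept/reject with the weight-corrected probability of Eq. (27)."""
    return rng.random() < min(1.0, np.exp(-delta_S) * w_new / w_old)

def plaquette_sce(beta, d):
    """Strong coupling expansion Eq. (29) for <P>, up to O(beta^9)."""
    return (beta / 4.0 - beta ** 3 / 96.0
            + (d / 96.0 - 5.0 / 288.0) * 3.0 / 16.0 * beta ** 5
            + (-d / 96.0 + 29.0 / 1440.0) / 16.0 * beta ** 7)
```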

In the upper panel of Fig. 2 we show the plaquette expectation value as a function of \(\beta \) in \(d=1+1\) dimensions. In the lower panel we compare to the corresponding strong coupling expansion and find very good agreement.

Fig. 2 Plaquette expectation value as a function of \(\beta \) in \(1+1\) dimensions. In the lower panel we compare to the strong coupling expansion (SCE) Eq. (29)

For the subgroups the proposal step is implemented by multiplication of \(U_\mu (n)\) with one of the elements V of the subgroup adjacent to the identity. Also in this case the weights w are constant.

In the case of the Genz points, the linear discretisation and the Volleyball, neighbouring points in the partitioning can be found by geometric considerations as explained in the previous section. A proposal is chosen uniformly at random from the set of neighbouring points. For the Genz points we do not take the weights w into account, because of their complex dependence on the direction and the point itself. For the linear and the Volleyball discretisations we compare simulations with and without taking the approximate weights Eqs. (16) and (23) into account.

Due to the locally irregular structure of the Fibonacci lattices, finding the appropriate neighbouring elements is not as straightforward as in the case of the other partitionings. Therefore, we pre-generate a neighbour list for each element of the Fibonacci set based on the geometric distance; this list is read and used during the update process, as sketched below.
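A possible pre-generation step, assuming the partitioning is given as an array of unit 4-vectors (e.g. from the Fibonacci construction of Sect. 3.2.4); the neighbour count is a free parameter of this sketch:

```python
import numpy as np

def neighbour_lists(points, n_neigh):
    """For each row of `points`, the indices of its n_neigh geometrically
    closest elements (O(n^2) memory; fine for the set sizes used here)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)                # exclude the point itself
    return np.argsort(d2, axis=1)[:, :n_neigh]  # shape (n, n_neigh)
```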

In order to study the freezing transition, we proceed as follows. For a given \(\beta \)-value we perform a hot (random gauge field) and a cold (unit gauge field) start separately. This is repeated for \(\beta \)-values from \(\beta _i\ll 1\) to \(\beta _f\) in steps of \(\Delta \beta \). The phase transition is indicated either by hot and cold starts failing to equilibrate to the same average plaquette expectation value for \(\beta \ge \beta _c\), with one of the two (typically the cold start) deviating from the reference result, or by a significant deviation from the reference result for \(\beta \ge \beta _c\) in both the hot and the cold start.

For the purpose of this paper we define the critical value of \(\beta \), denoted \(\beta _c\), as the smallest value of \(\beta \) for which the two branches (hot and cold start) do not agree within errors. In practice, this will only be a lower bound for \(\beta _c\).

Statistical errors are computed based on the so-called \(\Gamma \)-method detailed in Ref. [28] and implemented in the publicly available software package hadron [29].

Finally, we would like to point out one important difference in methodology compared to Ref. [8]: we run the MC algorithm directly on the discrete set of SU(2) elements, while the authors of Ref. [8] run what we call the reference algorithm and project onto the discrete set afterwards, studying different ways to perform this projection.

5 Results

5.1 Influence of weights

One important difference between finite subgroups and the partitionings discussed above is the need for weights in the case of the partitionings. In order to study the influence of the weights, we compare here Genz points with the linear discretisation, for simplicity in \(d=1+1\) dimensions on \(L^2=100^2\) lattices.

Fig. 3 Comparison of plaquette expectation values for the Genz partitioning \(G_m\) and the linear partitioning \(L_m\) with and without weighting in \(1+1\) dimensions on a \(L^2=100^2\) lattice for \(\beta =4.0\), as a function of m

In Fig. 3 we compare the plaquette expectation values obtained from MC simulations with Genz points to those with the linear discretisation, with and without weighting taken into account, for \(\beta =4.0\). The comparison is performed for values of m in the range from 5 to 200, which adjusts the fineness of the partitioning. The reference result, generated with the reference algorithm as discussed above, is indicated by the solid red line and the corresponding statistical uncertainty by the dashed red lines. This \(\beta \)-value is representative; only at very small \(\beta \)-values is no dependence on m observed.

For the Genz points we observe a strong influence of the missing weights: as expected from our estimate of the weights, the deviation from the reference result increases with increasing m.

In contrast, the linear discretisation without weighting converges towards the reference result with increasing m. The small deviations from the reference result at small m-values can be reduced significantly (if not removed completely) by including the weights in the MC simulation. This observation is largely independent of \(\beta \).

We conclude from these results that it is not worthwhile to consider the Genz points further. For the linear partitioning it turns out that the weights are important for small m-values, but become negligible for large m. However, this might also depend on the observable.

Note that there are alternative methods to the reweighting described here to avoid biases, for instance projection schemes [8]. We find the weight-based method more intuitive and computationally more efficient, though.

Fig. 4 Hysteresis loops for the Fibonacci partitioning \(F_{88}\) and the linear partitioning with weights included, \(L_3\). Both have \(n=88\) elements

5.2 Freezing transition

We study the freezing transition using simulations of the SU(2) gauge theory in \(3+1\) dimensions with \(L^4=8^4\) lattice volume. We look at \(\beta \in \{ 0.1,0.2,\dots , 9.9,10.0\}\). For each value of \(\beta \), 7000 sweeps are performed, once with a hot, and once with a cold starting configuration. During a single sweep every lattice site and direction is probed \(N_{\mathrm {hit}} = 10\) times. The plaquette is then measured by averaging over the last 3000 iterations.

Such scans in \(\beta \) can be found in Fig. 4. \(\beta _c\) is then estimated as the last value before a significant jump in \(\langle P \rangle \), or before a significant disagreement between the hot and cold start. We have checked that the critical \(\beta \)-values determined in this way do not depend significantly on the volume.

In Fig. 5 we show the \(\beta _c\)-values at which the freezing transition takes place as a function of the number n of elements in the set of points or the subgroups. We compare the Fibonacci, the linear and the Volleyball partitioning, and the finite subgroups of SU(2). For the linear and the Volleyball partitioning we also distinguish between results with and without weighting to correct for the different Voronoi cell volumes. The corresponding results are also tabulated in Tables 2, 3, 4 and 5. For the Fibonacci partitioning we tabulate only results for selected n values.

Also note that our \(\beta _c\)-values for the finite subgroups \({\overline{T}}\), \({\overline{O}}\) and \({\overline{I}}\) reproduce the ones given in Ref. [4].

Fig. 5 The critical value \(\beta _c\) as a function of the number n of elements in the set. The lines represent the approximation Eq. (30) where the order \({\tilde{N}}(n)\) is obtained from Eq. (33)

Figure 5 suggests that all our SU(2) discretisations behave qualitatively similarly. However, at fixed n the subgroups and the linear and Volleyball partitionings have smaller \(\beta _c\)-values than the Fibonacci lattice. Moreover, we observe a significant difference between simulations with and without weighting, which increases with increasing n.

Table 2 \(\beta _c\)-values for selected Fibonacci lattice partitionings of SU(2) for \(d=4\) and \(8^4\) lattices. Orders \({{\tilde{N}}}\) are approximations according to Eq. (33), rounded to one digit
Table 3 \(\beta _c\)-values for the weighted and unweighted linear discretisation \(L_m\) for \(1\le m\le 5\), determined for \(d=4\) on \(10^4\) lattices. Orders \({\tilde{N}}\) are approximations according to Eq. (33), rounded to one digit
Table 4 \(\beta _c\)-values for the discrete subgroups of SU(2). N are the exact cyclic orders and \({\tilde{N}}\) are approximations according to Eq. (33)

In Fig. 5 we have also added a second x-axis indicating the number of qubits \(n_{{\mathrm {qubits}}}\) per link that would be needed to represent the corresponding discretisation on a quantum device. We remark that for an SU(2) gauge theory the relevant region of \(\beta \)-values is around \(\beta =2\), where one enters the scaling region; as argued in Refs. [13,14,15], reaching this region is sufficient, and traditional MC simulations can do so.

In Ref. [4] the authors find that the critical \(\beta \)-value can be computed theoretically, at least approximately, for the finite subgroups. The computation is based on an analytical calculation of \(\beta _c(N)\) for \(Z_N\), generalised to finite subgroups as follows: for a subgroup G, the authors define the set C(G) of elements closest to the identity, excluding the identity itself. Close to the freezing transition, plaquettes are made of identity links or of \(g, g^{-1}\in C(G)\), giving minimal changes compared to unit plaquettes. Next, they define the cyclic order N as the minimal integer for which \(g^N=1\); the subgroup generated by g is isomorphic to \(Z_N\). For the four groups \({\bar{D}}_4\), \({\overline{T}}\), \({\overline{O}}\) and \({\overline{I}}\) one finds \(N=4, 6, 8\) and 10, respectively. This leads to the following expectation for the critical \(\beta \)-value as a function of N:

$$\begin{aligned} \beta _c(N) \approx \frac{\ln \left( 1+\sqrt{2}\right) }{1-\cos (2\pi /N)}. \end{aligned}$$
(30)

However, for Fibonacci, linear and Volleyball partitionings we no longer deal with subgroups. In particular, taking one of the elements e closest to the identity element, it is not guaranteed that there is an \(N\in {\mathbb {N}}\) for which \(e^N=1\).

Thus, we have to approximate the order N. For (approximately) isotropic discretisations, such as the finite subgroups and the Fibonacci partitioning, a global average over the point density should yield a good approximation for the elements in C(G) and therefore for N. The volume of the three-dimensional unit sphere \(S_3\) is \(2\pi ^2\). If we then assume a locally primitive cubic lattice, the average distance of n points in \(S_3\) becomes

$$\begin{aligned} d(n) = \left( \frac{2\pi ^2}{n}\right) ^{1/3}. \end{aligned}$$
(31)

Two points of this distance together with the origin form a triangle with the opening angle

$$\begin{aligned} \alpha (n) = 2\arcsin \frac{d(n)}{2}, \end{aligned}$$
(32)

thus a first approximation of the cyclic order is obtained by

$$\begin{aligned} {\tilde{N}}(n) = \frac{2\pi }{\alpha (n)}, \end{aligned}$$
(33)

which solely depends on the number n of elements in the partition.
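Numerically, Eqs. (30)–(33) combine into a short function (a sketch of ours; the rescaling factor relevant for the Fibonacci partitioning is discussed below):

```python
import numpy as np

def beta_c_prediction(n, rescale=1.0):
    """Predicted freezing coupling for an n-element partitioning of SU(2),
    combining Eqs. (31)-(33) with Eq. (30); use rescale = sqrt(3/2) for
    the Fibonacci partitioning."""
    d = (2.0 * np.pi ** 2 / n) ** (1.0 / 3.0)  # average distance, Eq. (31)
    alpha = 2.0 * np.arcsin(d / 2.0)           # opening angle, Eq. (32)
    N = rescale * 2.0 * np.pi / alpha          # approximate order, Eq. (33)
    return np.log(1.0 + np.sqrt(2.0)) / (1.0 - np.cos(2.0 * np.pi / N))

print(beta_c_prediction(120))  # e.g. the binary icosahedral group, n = 120
```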

Note that the assumption of a primitive cubic lattice is, even asymptotically, incorrect for all the partitionings discussed in this work and at best a good approximation. How good an approximation it is can only be checked numerically; in specific cases it needs further refinement.

In particular, in the case of the Fibonacci partitioning the approximation has to be adjusted. Since the points are distributed irregularly in this case, a path going around the sphere does not lie in a two-dimensional plane. Instead it follows some zigzag route which is longer than the straight path. Assuming the optimal maximally dense packing, we expect the points to lie at the vertices of tetrahedra locally tiling the sphere. The length of the straight path would then correspond to the height of the tetrahedron whereas the length of the actual path corresponds to the edge length. Their ratio is \(\sqrt{\frac{3}{2}}\), so \({{\tilde{N}}}\) has to be rescaled by this factor to best describe \(\beta _c\) for the Fibonacci partitioning.

We show the curve Eq. (30) using \({\tilde{N}}(n)\) and \(\sqrt{3/2}{\tilde{N}}\), respectively, in addition to the data in Fig. 5. The version with \({\tilde{N}}\) is in very good agreement with the results obtained for the finite subgroups while the rescaled version matches the values for the Fibonacci partitioning remarkably well.

The unweighted simulations of the Volleyball and the weighted simulations of the linear discretisation also yield results compatible with the unscaled version of Eqs. (30) and (33). On the other hand, the weighted Volleyball and the unweighted linear discretisations deviate clearly (Table 5).

Table 5 \(\beta _c\)-values for the weighted and unweighted Volleyball discretisation \(V_m\) for \(0\le m\le 1\), determined for \(d=4\) on \(10^4\) lattices. Orders \({{\tilde{N}}}\) are approximations according to Eq. (33), rounded to one digit

6 Discussion and outlook

Some of the results presented in the previous section deserve separate discussion. Figure 5 shows that the Fibonacci lattice discretisation has larger \(\beta _c\)-values at fixed n than the finite subgroups and the other partitionings. This can be understood from the irregularity of the points in the Fibonacci lattices: at fixed n, this irregularity generates minimal distances between points which are smaller than those of the other discretisations. Thus, the freezing transition appears only at comparably larger \(\beta \)-values, because smaller values of \(|\Delta S|\) are available.

Also the difference in \(\beta _c\) between simulations with and without weights for the linear and Volleyball partitionings, respectively, can be understood qualitatively. Assume the system freezes in the weighted case at some value \(\beta _c^w\). Switching off the weighting, there will be subsets of points with, on average, smaller (or larger) distances between elements than the average distance. In these regions the \(|\Delta S|\) values required for acceptance are smaller than the average \(|\Delta S|\) value at this \(\beta \), and it is reasonable to assume that these regions are also reached during equilibration. Thus, the critical \(\beta \)-value for the unweighted simulation, \(\beta _c^{nw}\), must be larger than or equal to \(\beta _c^w\).

Though this trend is universal, we find the linear discretisation to be additionally superior to the Volleyball discretisation. We expect this to be a consequence of the denser packing of the linear discretisation, where most points have twelve neighbours, whereas the majority of points in the Volleyball discretisation have only six.

We have obtained excellent predictions for the \(\beta _c\)-values of the finite subgroups and the Fibonacci partitionings. For the finite subgroups the prediction using \({\tilde{N}}\) is even better than the one using N, in particular for larger n. For the Fibonacci partitionings the rescaling by the factor \(\sqrt{3/2}\) suggests that the Fibonacci elements are close to maximally densely packed; this is strongly backed up by the numerical evidence. Based on this closest-packing assumption, we postulate that there is no discretisation scheme yielding a significantly later freezing transition than the Fibonacci partitioning at an equal number of points.

Finally, the predicted \(\beta _c\)-values do not agree with our observations for the unweighted linear and the weighted Volleyball discretisations, respectively. We do not fully understand these discrepancies, but they suggest that the linear discretisation has subsets of elements which are close to maximally densely packed, while the Volleyball discretisation is sub-optimal in this regard.

In Sect. 3.2 we have explained how SU(N) can be expressed as a product of odd-dimensional spheres \(S_3\), \(S_5\), \(\ldots \), \(S_{2N-1}\). Since the k-dimensional hypervolume \(H(S_k)\equiv 2\pi ^{\frac{k+1}{2}}/\Gamma \left( \frac{k+1}{2}\right) \) of the k-sphere is well known, we can generalise the prediction of \(\beta _c\) to \(N>2\) by adjusting the average distance from Eq. (31)

$$\begin{aligned} d_N(n) = \left( \frac{1}{n}\prod _{k=3,\text {odd}}^{2N-1}H(S_k)\right) ^{1/(N^2-1)} \end{aligned}$$
(34)

and applying Eq. (33) and Eq. (30) successively as before. In particular, this formula readily predicts critical couplings for the finite subgroups of SU(3), which have been determined by Bhanot and Rebbi [30]. We show how our prediction compares to the values obtained by Bhanot and Rebbi in Fig. 6. In addition to the results for \(\beta _c\) given in the paper originally (black circles), we plot the leftmost bounds of the hysteresis loops they visualised, denoting the minimal possible value of \(\beta _c\) (red squares). The systematic effect stemming from different estimations of \(\beta _c\) is remarkably large. We therefore refrain from any conclusion as to the quantitative correctness of our prediction. Nevertheless, it seems well suited to predict the qualitative scaling of the freezing transition and it provides the correct order of magnitude.
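A hedged sketch of this generalised prediction (the function name is ours; it is valid only for partitionings fine enough that \(d_N(n)\le 2\)):

```python
import numpy as np
from math import gamma

def beta_c_prediction_sun(n, N):
    """Predicted freezing coupling for an n-element partitioning of SU(N):
    average spacing d_N(n) of Eq. (34) from the odd-sphere volumes H(S_k),
    then Eqs. (32), (33) and (30) as before."""
    H = lambda k: 2.0 * np.pi ** ((k + 1) / 2) / gamma((k + 1) / 2)
    vol = np.prod([H(k) for k in range(3, 2 * N, 2)])   # H(S_3)...H(S_{2N-1})
    d = (vol / n) ** (1.0 / (N ** 2 - 1))               # Eq. (34)
    Ntilde = 2.0 * np.pi / (2.0 * np.arcsin(d / 2.0))   # Eqs. (32), (33)
    return np.log(1.0 + np.sqrt(2.0)) / (1.0 - np.cos(2.0 * np.pi / Ntilde))
```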

In this light it is also interesting to discuss the number of qubits \(n_{{\mathrm {qubits}}}\) per link needed to represent these discretisations on a quantum computer. \(n_{{\mathrm {qubits}}}\) is of course related to the number of elements via \(n_{{\mathrm {qubits}}} = \lceil \log _2(n)\rceil \). For SU(2) (see Fig. 5) the usage of the Fibonacci discretisation would mean only a single qubit improvement per link compared to the other discretisations. While this does not sound like much, it might be crucial on so-called near-term noisy quantum devices. Moreover, the added flexibility might be of great help.

As argued in Refs. [13,14,15] it is in principle sufficient to reach the scaling region, which for SU(2) starts around \(\beta =2\). However, with a quantum device one could in principle reach very large \(\beta \)-values without suffering from the limitations of MC algorithms. At such \(\beta \)-values one could then seamlessly connect to perturbation theory. Here, the added flexibility of the Fibonacci discretisation could, again, be of large help. Moreover, if indeed \(\sqrt{3/2}{\tilde{N}}\) is relevant also for SU(3) with the Fibonacci discretisation, the saving in the number of qubits per link is much larger than for SU(2) and might become highly relevant (compare the black dashed with the red dotted line in Fig. 6!).

Fig. 6 The critical value \(\beta _c\) in SU(3) as a function of the number n of elements in the set. The lines represent the approximation Eq. (30) where the order \({\tilde{N}}(n)\) is obtained from Eq. (34) and Eq. (33). The reference data by Bhanot and Rebbi [30] come from their Table 1 (“original”) and from the leftmost points of the hysteresis loops in their Figures 1–3 (“minimal”), respectively

7 Summary

In this paper we have presented several asymptotically dense partitionings of SU(2), which do not represent subgroups of SU(2) but which have adjustable numbers of elements. The discussed partitionings are not necessarily isotropically distributed in the group, which requires in principle the inclusion of additional weight factors in the Monte Carlo algorithms. We have investigated whether or not the partitionings without and, if possible, with weights included can be used in Monte Carlo simulations of SU(2) lattice gauge theories by comparing the plaquette expectation value as a function of \(\beta \) to reference results of a standard lattice gauge simulation.

This comparison rules out the usage of the so-called Genz partitioning: the weights are difficult to compute, and the bias due to omitting them increases with the number of elements. Thus, Monte Carlo simulations with fine Genz partitionings of SU(2) are not feasible.

For the other partitionings considered, this comparison gives good agreement with the standard simulation code, in particular when the weights are included. Moreover, the finer the discretisation (and the larger the number of elements), the smaller the deviation between simulations with and without weighting.

In addition we have investigated the so-called freezing transition for the partitionings and for all finite subgroups of SU(2). The main result, visualised in Fig. 5, is that the partitioning \(F_n\) based on Fibonacci lattices allows for a flexible choice of the number of elements by adjusting n and at the same time yields larger \(\beta _c\)-values compared to the finite subgroups and the other discussed partitionings. Thus, Fibonacci-based discretisations provide the largest simulatable \(\beta \)-range at fixed n.

Coming back to the introduction, using the partitionings proposed here does not pose any problem even at very large \(\beta \)-values at least in Monte Carlo simulations. This leaves us optimistic for their applicability in the Hamiltonian formalism for tensor network or quantum computing applications.

Finally, the generalisation of the partitionings discussed here to the case of SU(3), relevant for quantum chromodynamics, is straightforward, and we expect that the results obtained in this paper directly translate to this larger group. In the SU(3) case one additionally needs to improve the lattice action to avoid the freezing problem. However, the saving due to the Fibonacci discretisation would be much larger than for SU(2) if our prediction turns out to be correct.