Optimisation of complex integration contours at higher order

We continue our study of contour deformation as a practical tool for dealing with the sign problem, using the $d$-dimensional Bose gas with non-zero chemical potential as a toy model. We derive explicit expressions for contours up to second order with respect to a natural small parameter and generalise these contours to an ansatz for which the evaluation of the Jacobian is fast ($O(1)$). We examine the behaviour of the various proposed contours as a function of space-time dimensionality, the chemical potential, and lattice size and geometry, using the mean phase factor as a measure of the severity of the sign problem. It turns out that this method leads to a substantial reduction of the sign problem and that it becomes more efficient as the space-time dimensionality is increased. Correlations among contributions to $\operatorname{Im}\langle S \rangle$ play a key role in determining the mean phase factor, and we examine these correlations in detail.


Introduction
While perturbative field theory gives some of the most accurate predictions in science, at strong coupling it loses its efficiency and we are usually forced to use numerical simulations. The main approach for numerical simulations of field theories, as well as of other systems (e.g. in condensed matter), is the Monte Carlo method. In this approach the Euclidean $e^{-S}$ factor is interpreted as an unnormalised probability density and high dimensional integrals are evaluated using importance sampling. While this method generically works very well, in some cases the action is not real and the naive interpretation of $e^{-S}$ as a probability density fails. Although this problem can be dealt with using phase quenching, when the imaginary part of the action becomes large, the expressions that one has to evaluate fluctuate and an exponentially large number of configurations is needed in order to obtain a reliable result. This is the sign problem. Many approaches have been proposed for dealing with the sign problem, see for example [1][2][3][4][5][6][7][8][9][10][11]. One such approach relies on the use of Lefschetz thimbles [12] (see [13] for more on Lefschetz thimbles). Lefschetz thimbles are manifolds of real dimension $N$ that live in a complex space of dimension $N$, which comes from the complexification of the original $N$ real degrees of freedom of the system. Over each thimble the imaginary part of the action is constant, hence the sign problem is avoided when integrating over it. Also, Lefschetz thimbles form a basis of integration cycles in the complexified space. Hence, the original integration contour can be deformed, using (a multi-dimensional version of) Cauchy's theorem, to a linear combination of thimbles. Nonetheless, this method also has some drawbacks. In particular, while the computational cost is significantly reduced as compared to that of a system with a sign problem, it can still be relatively high, generically $O(N^4)$.
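Phase quenching and reweighting, mentioned above, can be sketched generically as follows. The sampled Im(S) values here are synthetic Gaussian draws (an assumption for illustration, not data from the paper), but they show how the mean phase factor $\langle e^{-i\,\operatorname{Im} S}\rangle$ decays as the fluctuations of the phase grow:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_phase_factor(imS):
    """<exp(-i Im S)> over phase-quenched configurations."""
    return np.mean(np.exp(-1j * imS))

# Synthetic samples of Im(S) from a phase-quenched ensemble. As the
# width sigma grows, the mean phase factor is suppressed (for Gaussian
# Im S it equals exp(-sigma^2/2)), signalling a worsening sign problem.
for sigma in (0.5, 1.0, 2.0):
    imS = rng.normal(0.0, sigma, size=200_000)
    mpf = mean_phase_factor(imS)
    print(f"sigma = {sigma}: mean phase factor ~ {mpf.real:.3f}")
```

When the mean phase factor approaches zero, the reweighted estimate of any observable divides by a vanishing, noisy number, which is the practical manifestation of the sign problem.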
This issue and others led to attempts to generalise the thimble method, e.g. [14,15], and to attempts to use contour deformations that do not rely on thimbles at all [16][17][18][19][20][21].
In particular, in [19] (henceforth "paper I") we studied the one dimensional Bose gas with non-zero chemical potential, which is often used as a toy model for studying the sign problem [12,22]. Here, we attempt to go beyond what was accomplished in paper I and generalise its results in several ways. In paper I we derived an expression for the contour at first order with respect to an expansion that we defined. Then, we generalised the obtained expression to ansätze and studied the various contours. Here, we describe the expansion at higher order and generalise the ansätze appropriately. Also, in paper I we studied only the one dimensional case. Here, we examine the same theory in various dimensions and examine the effectiveness of our approach as a function of dimensionality, with various variables kept fixed. Finally, in paper I we identified an obstacle towards the efficient ($O(N)$) implementation of our approach coming from the treatment of boundary terms. We proposed several strategies for handling this issue, all of which had some drawbacks. Here, we propose another approach and examine its efficiency and its limitations.
The rest of the paper is organised as follows: In the next section we review the results of paper I and develop the second order of the series expansion for the deformed contour. We then propose a method for a fast evaluation of the Jacobian and construct ansätze that generalise the systematic expansion. In section 3 we present simulation results. We perform a thorough examination of our approach, varying different parameters. Our purpose is to understand the strength of our approach as well as its limitations, in order to be able to efficiently utilise it in the future for other, more realistic and important systems. We discuss our results in section 4.
to other systems, while obtaining a large enough reduction of the sign problem to make the evaluation of observables practical. We return to the evaluation of the Jacobian in section 2.3, where we identify that the periodic boundary conditions pose a challenge to its efficient evaluation. In paper I we proposed several approaches for dealing with this issue, none of which was completely satisfactory. Here we propose a refinement of one of the methods of that paper. We also define an ansatz that generalises the systematic second order expansion, as well as the first order ansatz of paper I.
Throughout the construction, our purpose is not to decrease individual phase factors in the action. In fact, we find in section 3 that the expectation value of individual phase factors is larger for our contours than for the undeformed one, but they cancel each other out. Hence, we try to turn the sign problem on its head: Instead of having large factors that cancel each other among different configurations, we attempt to have, for any given configuration, large cancellations of the imaginary part among different contributions to the phase. While the systematic expansion is explicitly constructed in order to achieve this goal, ansätze have the potential to do even better in this respect, since they can lead to cancellations of higher order terms, Jacobian contributions to the phase, and contributions from our special treatment of the boundary, described below.

The d dimensional Bose gas with chemical potential
Consider the theory of a Bose gas with non-zero chemical potential on a d-dimensional cubic lattice. After rescaling the fields the action takes the form, and m is the mass parameter entering the lattice action before rescaling the fields. In the rest of the paper we set λ = 1 for simplicity. We assume periodic boundary conditions, with period $L$ for the time direction and period $\tilde{L}$ for all the spatial directions. The total number of lattice points is then, We can also rewrite the action in terms of two real fields, u, v, related to Φ in the usual way, Without deforming the contour, the imaginary part of the action is, We attempt to deal with the sign problem by complexifying the component fields u and v and defining a manifold by specifying the imaginary parts of u and v as functions of the real parts. Hence, we write, In terms of the complex field Φ we can express the complexification by substituting in the action, Note that complex conjugation does not act on the i in front of the $\psi_r$. More accurately, $\bar{\Phi}_r$ is not the complex conjugate of $\Phi_r$, and we obtain an extended space by considering $\Phi_r$ and $\bar{\Phi}_r$ as independent variables. We can now write, and look for an expression for the contour by specifying $\psi_r = \psi_r\{\phi_r\}$. This is not the most general form for a contour, but it should be sufficiently general for establishing whether this approach is useful. With the substitution (2.7), the imaginary part of the action for an arbitrary contour is given by (2.9). Note that exact solutions to this equation exist [19], Moreover, the Jacobian for these solutions is constant, so there is no residual sign problem. However, for these solutions not only the imaginary part but actually the whole action vanishes, as can be seen by inspecting (2.7): In this case either $\Phi_r$ vanishes identically or $\bar{\Phi}_r$ vanishes identically. Thus, these solutions do not lead to convergent integrals. This can also be seen by directly inspecting the asymptotic regions of integration.

Defining the expansion
Instead of looking for exact solutions, we attempt to solve for $\psi_r$ in terms of a power series with respect to α, (2.11) Inspecting (2.9), we can expect that for large enough µ the dominant term contributing to the expansion would be proportional to $e^{\mu}\alpha$. Thus, the approach should be useful at least up to values of µ of the order of, Hence, we can interpret this expansion either as an expansion around m = ∞ or as one around d = ∞. For example, while for d = 1, m = 1 one can expect the expansion to work up to around $\mu = \ln 3 \approx 1.1$, for d = 4, m = 1 one could expect to get to about $\mu = \ln 9 \approx 2.2$. Note that this is only a rule of thumb. In fact, we can get to higher values of µ. Substituting the series (2.11) in the expression for the imaginary part of the action (2.9), we obtain at the lowest order, where we defined, (2.14) A simple solution exists even before summation and before taking the real part of the expression, We refer to this choice as the "simple first order contour". Note that this is not the most general solution one can obtain even before summation. Using the fact that only the real part should vanish, we find that an extra piece can be added to it, where $f_r$ is an arbitrary continuous real function. This looks as if we add to our perturbative solution a component in the direction of the exact (bad) solution (2.10). But one can choose $f_r$ in such a way that the obtained expression is well behaved. Interesting choices are $f_r = -\sinh\mu\, d_r$, which for close values of $\phi_r$ and $\phi_{r+0}$ cancels the first term on the r.h.s. of (2.16), and $f_r = \sinh\mu\, d_r$, which leads to the cancellation of some of the terms in (2.9). We now set $f_r = 0$ for simplicity. However, we keep in mind that this option exists and use it later on as a starting point for turning the expressions for $\psi_r$ into a more general ansatz, with free parameters that can be chosen such that (2.9) is reduced as much as possible.
We can write the solution (2.15) in terms of components, We notice that the deformation depends only on nearest neighbours in the temporal direction. Indeed, since the source of the phase in the undeformed case comes from this direction, this should be the only coordinate relevant at the lowest order. Note that the solution includes asymptotic regions in which $\psi_r$ approaches infinity, namely, regions for which $\phi_{r+0} \to \infty$ with bounded $\phi_r$. While such regions do not lead to inconsistencies, as long as the integral remains absolutely convergent, they can still be problematic for the following reasons:
1. Contours that go to infinity and back might lead to terms which would mostly cancel each other and hence to a mild sign problem, similar to the global sign problem that is obtained in the Lefschetz thimble approach.
2. As |ψ| becomes large so does the Jacobian and in particular, the phase of the Jacobian can become large. This could lead to a residual sign problem.
3. Our approach is perturbative with respect to ψ. Large values of |ψ| can potentially break the validity of the perturbative approach.
In light of these issues, it could be worthwhile to generalise (2.15) to an ansatz for which ψ is always bounded. We propose such ansätze in the next subsection. At the next order ($\alpha^2$) we obtain, Substituting (2.15) into this equation, while rewriting some terms in light of the fact that only the real part contributes, leads to, where we defined, We can write a solution for the second order term (which together with (2.15) defines what we call "the simple second order contour"), We see that the expansion in powers of α turns out to be also an expansion in neighbour distance. At the second order there are contributions from terms with distance 2 in the temporal direction and terms with distance 1 in both the temporal direction and one spatial direction. The total distance (the sum of distances) of terms that contribute at the second order is at most 2. Expressing (2.21) in terms of components we obtain, While the expressions in terms of complex functions are easier to manipulate, the expressions in terms of components (2.22) can be useful for performing simulations, although one can use the complex variables in the simulations as well. Here and below we write expressions in terms of the real variables for the sake of completeness.
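To make the role of the Jacobian concrete, here is a minimal numerical sketch: for an arbitrary deformation ψ{φ}, the contour point is φ + iψ{φ} and the measure picks up $\det J$ with $J_{rs} = \partial(\phi_r + i\psi_r)/\partial\phi_s$, whose phase contributes to the residual sign problem. The deformation profile `psi` below is hypothetical (a bounded nearest-neighbour example for illustration, not the contour (2.15)):

```python
import numpy as np

def jacobian_fd(psi, phi, eps=1e-6):
    """Finite-difference Jacobian J_{rs} = d(phi_r + i psi_r)/d(phi_s)."""
    N = phi.size
    J = np.zeros((N, N), dtype=complex)
    base = phi + 1j * psi(phi)
    for s in range(N):
        step = np.zeros(N)
        step[s] = eps
        J[:, s] = (phi + step + 1j * psi(phi + step) - base) / eps
    return J

# Hypothetical deformation, for illustration only: psi_r depends on
# phi_{r+1} (periodic), echoing the nearest-neighbour structure of the
# first order contour, but with an arbitrary bounded profile.
def psi(phi):
    return 0.1 * np.tanh(np.roll(phi, -1))

phi = np.linspace(-1.0, 1.0, 8)
J = jacobian_fd(psi, phi)
# The residual phase of a configuration includes arg det J:
print("arg det J =", np.angle(np.linalg.det(J)))
```

Because each ψ_r depends only on the field to its right, J here is the identity plus a single cyclic off-diagonal band; breaking the cycle is exactly what the boundary treatment below is about.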
If we choose to retain the arbitrary functions $f_r$ of (2.16) and look for similar expressions at the second order we obtain, (2.23) Now the solution depends (at every lattice point $r$) on two arbitrary continuous real functions, $f_r$ and $f^{(2)}_r$. Again, this expression can guide us towards ansätze generalising the simple second order contour (2.21).

Fast evaluation of the Jacobian
In order to obtain a fast algorithm we attempt to obtain an upper-block-triangular Jacobian matrix. First, we have to specify the ordering of lattice points that defines the entries of this matrix. Most of the obtained expressions depend only on neighbours to the right of the point in the time direction. Hence, we use a lexicographic ordering in which the most significant weight is that of the time direction. If it were not for the periodic boundary conditions, this would have sufficed for obtaining a matrix of the desired form at the leading α order.
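A sketch of such an ordering (the function name and conventions are ours, not the paper's):

```python
def lex_index(t, s, Ls):
    """Lexicographic index with the time coordinate most significant.

    t: time coordinate in range(L); s: tuple of spatial coordinates,
    each in range(Ls); Ls: common spatial extent (L-tilde in the text).
    """
    idx = t
    for si in s:
        idx = idx * Ls + si
    return idx

# On a 4x3x3 lattice (L = 4, L-tilde = 3), neighbours to the "right" in
# time always get a larger index, which is what a (block-)triangular
# Jacobian ordering needs.
assert lex_index(0, (0, 0), 3) == 0
assert lex_index(1, (0, 0), 3) == 9      # one time step = 3*3 spatial points
assert lex_index(1, (2, 2), 3) < lex_index(2, (0, 0), 3)
print("ordering ok")
```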
In paper I we proposed several ways to deal with the problems coming from the periodic boundary conditions, none of which was completely satisfactory. Here, we propose another approach, which turns out to be successful in a given range of parameters, but also has some limitations. Let us first recall and discuss the proposals described in paper I: • We can use general algorithms for the evaluation of the Jacobian. This would lead to slow simulations. This can always be achieved, and one can manage in this way lattices of up to the order of a hundred points. However, simulations on large lattices become impractical.
• The boundary conditions can be changed to Dirichlet boundary conditions, or one could choose to retain the periodic boundary conditions, but to avoid modifying the contour at the rightmost lattice point. In both cases, a large phase factor would be introduced by this "last" point. Moreover, for d > 1 this is already a problem not at a single point, but at a co-dimension one hyper-surface. Thus, a significant sign problem would remain in this case, making the approach inapplicable.
• Block operations can be performed on the Jacobian matrix that bring it to the desired upper-block-triangular form. One drawback of this approach is that explicit expressions obtained in this way for d > 1 at the second order, or for ansätze generalising it, would be very cumbersome. Another problem is that the evaluation of the expressions obtained this way generically has a cost of $O(N^2)$ per sweep. While this is significantly better than the previous proposals, it is still not as good as an $O(N)$ algorithm. It was further proposed to evaluate the Jacobians of the small blocks using a particular algorithm that, by changing the way they are stored during the run, leads to the desired $O(N)$ behaviour. However, it turned out that this algorithm suffers in some cases from numerical instability, stemming from the fact that the inverses of some, potentially singular, matrices have to be evaluated. Again, the generalisation of this approach to the d > 1 case would be quite cumbersome. Nonetheless, for the simple first order contour (2.15) it was demonstrated in paper I that the instability does not occur. Moreover, in this case lattice points with spatial separation do not influence the contour. Hence this method is applicable in this case for general d. We compare this method to the new method described below in order to evaluate the range of validity of the new method.
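The payoff of an upper-block-triangular Jacobian is that its determinant is just the product of the diagonal-block determinants, so the cost is linear in the number of blocks. A small numerical sanity check (with a synthetic matrix, not the paper's Jacobian):

```python
import numpy as np

rng = np.random.default_rng(2)

def det_block_triangular(diag_blocks):
    """Determinant of a block-triangular matrix from its diagonal blocks.
    Off-diagonal blocks never enter, so the cost is O(N) for fixed block
    size, instead of the O(N^3) of a dense determinant."""
    d = 1.0
    for B in diag_blocks:
        d *= np.linalg.det(B)
    return d

# Assemble an explicit upper-block-triangular matrix and cross-check.
k, n = 2, 5                       # five 2x2 diagonal blocks
blocks = [rng.normal(size=(k, k)) for _ in range(n)]
M = np.zeros((k * n, k * n))
for i, B in enumerate(blocks):
    M[i*k:(i+1)*k, i*k:(i+1)*k] = B
for i in range(n):
    for j in range(i + 1, n):     # arbitrary strictly-upper blocks
        M[i*k:(i+1)*k, j*k:(j+1)*k] = rng.normal(size=(k, k))

assert np.isclose(np.linalg.det(M), det_block_triangular(blocks))
print("determinants agree")
```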
In light of all that it seems that an approach in which the problematic terms are absent from the Jacobian matrix would be desirable. This can be obtained by rewriting the terms before summation such that each variable would depend only on variables to its right (in the time direction). We now propose a way to achieve this goal, for the first order expansion and then for a second order expansion and for the generalising ansätze.

First order
Consider again the lowest order equation (2.13), with the solution (2.15) everywhere except on the summands defined on the hyper-surfaces $r = (1, s)$ and $r = (L, s)$. The remaining equation is now, (2.24) We can rewrite this equation as, Thus, we choose the form (2.15) for all points not on these hyper-surfaces, while on the hyper-surfaces we choose, Since the Jacobian matrix is now block diagonal, the Jacobian is a product of local terms: For $r = (L, s)$ we immediately get $J_r = 1$. For the evaluation of $J_r$ outside this hyper-surface we can either express everything in terms of the real variables, or work directly with the complex variables using,
Recalling that, we can evaluate the first and last Jacobians, obtaining, Hence, (2.29) reduces to, This expression is more general than what we need here and will also be useful for the evaluation of the Jacobians of the ansätze that we introduce in what follows. However, as long as the dependence on $\phi_r$ comes only from dependence on the real combination $d_r$ (2.14), as is the case here, the second and third terms cancel out and (2.32) is given by Plugging (2.15) into this expression, we obtain for $r = (t, s)$ and $1 < t < L$, and for $r = (1, s)$ we similarly obtain, (2.35)

Second order
For simplicity, we illustrate the construction of second order expressions with fast evaluation of the Jacobian for the one dimensional case. Since we modified our choice for $\psi^{(1)}_1$ and $\psi^{(1)}_L$, we have to examine again all the terms in (2.18) in which they appear, namely the terms with $r = 1$, $L - 1$, or $L$. All terms in the range $1 < r < L - 1$ vanish upon the substitution of (2.21). Also note that the $s_r$ terms, which are the source of additional challenges for a fast evaluation of the Jacobian, are absent for d = 1.
The three special terms give, Here, the first line comes from the r = 1 term, the second is the r = L − 1 term, and the third one is the r = L term. Simplifying this expression leads to, (2.37) Note the last term. The presence of $d_1$ in the denominator prevents us from writing a $\psi^{(2)}$ that depends only on components to its right. Thus, we cannot obtain an upper-block-triangular form for the Jacobian. We can evaluate the efficiency of using the second order expressions either with a slow algorithm, or by ignoring the last term in (2.37).
Ignoring the last term in (2.37), we can choose $\psi^{(2)}_1$ to obey the generic equation (2.21), while setting the second order corrections at the special points to zero. This implies that $J_{L-1}$ and $J_L$ do not change as compared to their first order values. As for the case $1 < r < L - 1$, let us note that the expression that now enters the evaluation of the Jacobian is, Hence, we now have, Similarly, we obtain, (2.41)

A more general ansatz
In paper I we suggested to use an ansatz that generalises the functional form of the first order expression. Similarly, here we suggest to use an ansatz that generalises the form of the second order expression. Again, we consider the one dimensional case for simplicity. We impose the natural U(1) symmetry of the complex variables and examine only expressions that can be simulated efficiently, i.e. expressions whose Jacobian is upper-block-triangular. The proposed ansatz takes the following form, Here, we defined, and the $a_i$ and $b_i$ are ten real parameters subject to the constraint $\forall k,\ b_k \geq 0$. The simple second order contour (2.39) is obtained by setting in the ansatz, If instead we set $a_5 = 0$ we obtain the simple first order contour (2.15). Again, the expression (2.42) cannot be used for $r = L$ or $r = L - 1$ if we want to obtain an upper-block-triangular Jacobian matrix. At most, we can set, where we dropped terms that would have led to a non-upper-block-triangular Jacobian in the numerator and replaced them by a constant c (to be discussed below) in the denominator. We can attempt to compensate for the missing terms as we did in the previous subsection. However, the ansatz we use was not obtained from setting to zero the imaginary part of the action (at some order). Hence, it is not clear what the form of the compensating terms should be in this case. In order to choose these terms we pretend that the ansatz (2.42) was obtained by setting to zero first and second order terms similar in form to the actual expressions obtained before. We write $\psi_r = \psi^{(1)}_r + \psi^{(2)}_r$. Then, we pretend that the first order term solves, and the second order term solves, This is a natural generalisation of the expressions obtained before for the first two orders of the expansion, which would have led to a solution of the form of the ansatz (2.42).
Repeating the methods used in the previous subsection and summing the results, we get the special values that include compensating terms, While we managed to compensate for all the terms that appear in the numerators in (2.45), we could not do that for the terms in the denominators. Thus, in the denominators of (2.45) and (2.48b), we replaced terms of the form $|\phi_r|^2$ that would have destroyed the upper-block-triangular form of the Jacobian by a constant $c \geq 0$. One can decide to set $c = 0$, that is, to ignore these contributions. However, this would lead to denominators which are too small and hence to deformations that are too large. In order to prevent the problems that this can cause, one could prefer to take the limit $c \to \infty$, which amounts to completely dropping these terms. This, however, could also result in contours that are quite far from their desired form. A natural compromise between these two extreme cases would be to choose $c = |\phi_r|^2$. Alternatively, one can add the constant c to the list of a's and b's defining the ansatz. This constant is, however, somewhat different, since it does not influence all lattice points.
Using (2.32) for the evaluation of the Jacobian for the ansatz, we obtain, where we defined, The special cases (2.45) and (2.48) correspond to special contributions to the Jacobian. These still take the form (2.49), only with the following definitions (the expressions for $p_1$ and $p_2$ are the standard ones, but we write them anyway for completeness),

Simulation results
In this section we perform simulations in order to examine the proposed approach, identifying both its strengths and its weaknesses. We concentrate on the mean phase factor as a characteristic of the sign problem. We examine its behaviour as a function of µ in 3.1, for the simple first order contour with the proposed algorithm for treating the special point, as well as using the methods of paper I, where all points are treated in the same way. This is important since, while for the first order contour we can use the method of paper I, for the more sophisticated contours a numerical instability, described in paper I, prevents us from doing so. Thus, in the case of this contour, we can disentangle problems stemming from increasing µ and problems whose source is the special point. We recognise that at large values of µ the treatment of the special point reduces the mean phase factor significantly. In 3.2 we thoroughly examine the origin of this reduction of the mean phase factor.
Next, we examine the dependence of the mean phase factor on lattice geometry in 3.3 and on dimensionality in 3.4 and in 3.5. In particular we examine whether the method works better or worse as d is increased when the special point prescription is used, since two opposite effects exist in this case: On the one hand, our expansion can be interpreted as one around d = ∞, which suggests that as we increase d the behaviour should improve. On the other hand, the special point at d = 1 becomes, for d > 1, a co-dimension one hyper-surface, over which the phase accumulates. This suggests that the behaviour should worsen for larger d.
Then, in 3.6 we compare the various proposed contours, as a function of µ and of lattice size. We find that the second order ansatz behaves significantly better than the first order ansatz of paper I.
Note that for the proposed ansätze, large values for the parameters in the numerator can cause a runaway toward wrong asymptotic regions in the complexified space, which could lead to erroneous results, unless these parameters are accompanied by large enough values of the matching parameters in the denominator. Hence, we limit the values of the parameters in the denominator to be not too small.
In all the simulations performed in this work the imaginary part of the phase is consistent with zero, that is, it is small and within a couple of standard deviations from zero. Hence, when we mention the mean phase factor we implicitly refer to its real part. Similar remarks apply to the observables S and n mentioned below.
Simulations used for determining optimal parameters for the ansätze were performed with 50,000 sweeps on small (L = 8) lattices and then fine tuned on larger lattices with longer simulation times. All other simulations were performed with a short thermalisation (10,000-30,000 sweeps) followed by 300,000 sweeps. Running times (on a standard laptop) were approximately linear in lattice size for fixed d and were somewhat longer for larger d; they varied from about a quarter of a minute for the short lattices (d = 1, L = 8) up to several hours for the very large ones (d = 1, L = 7,500), with several simulations running simultaneously.
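For orientation, a phase-quenched Metropolis run of the kind used to measure mean phase factors can be sketched on a one-site toy action $S = x^2/2 - i\lambda x$ (our illustration, unrelated to the Bose gas action), for which the exact mean phase factor is $e^{-\lambda^2/2}$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Phase-quenched Metropolis for the toy action S = x^2/2 - i*lam*x.
# We sample with weight exp(-Re S) and measure the phase exp(-i Im S),
# here exp(i*lam*x) since Im S = -lam*x.
def run(lam, sweeps=100_000, step=1.5):
    x, phases = 0.0, []
    for _ in range(sweeps):
        xp = x + rng.uniform(-step, step)
        if rng.random() < np.exp(0.5 * (x * x - xp * xp)):
            x = xp
        phases.append(np.exp(1j * lam * x))
    return np.mean(phases)

lam = 1.0
mpf = run(lam)
print(f"measured {mpf.real:.3f} vs exact {np.exp(-lam**2 / 2):.3f}")
```

The same structure (sample the quenched weight, accumulate the phase) underlies the lattice measurements; only the action and the update are more involved.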

Varying µ
We proposed a new way to obtain an efficient evaluation of the Jacobian. However, this method treats one point (for d = 1) or a specific hyper-surface (for d > 1) differently. This can potentially lead to problematic behaviour, especially for large values of µ. We compared this method to that of paper I for the simple first order contour, for which the method of paper I is applicable (this contour has no numerical instability problem). We examine the behaviour of the mean phase factor as a function of µ for both contours, as well as for the undeformed contour for comparison, for the d = 1, L = 8, m = 1 case in fig. 1. We also examine for these contours an observable, the expectation value of the action, in order to verify that the simulation results of both methods are consistent, in fig. 2. The Silver Blaze phenomenon is well demonstrated in this plot. The fact that the new method does not work well in the current simple case (the simple first order contour at d = 1) for too large values of µ seems to suggest that this is not the best possible method for large µ. However, for values up to about µ = 1, this method, which is relatively simple to use for d > 1 as well as for the ansätze, works well.
Of course, the values of the mean phase factors for fixed lattice size do not capture the whole story. We expect these factors to decay exponentially as a function of lattice size, as long as the parameter range with a strong sign problem is avoided. A main factor in the evaluation of the efficiency of a particular contour is this decay rate. We compare the decay rates for µ = 1 and µ = 1.5 by finding linear fits to the logarithm of the mean phase factor.

Figure 3. The mean phase factor as a function of lattice size for µ = 1 (left) and µ = 1.5 (right) on a logarithmic scale. We compare the results of the old algorithm, which treats all points uniformly, the new algorithm that treats one point differently, and the undeformed contour. We observe that the linear fit works very well in all cases, as long as we do not include in the fit points with very low values, i.e., we did not take into account points for which the mean phase factor is below 0.002, since then the sign problem already becomes significant. We observe that the slopes of the two deformed contours are practically identical (but very different from the undeformed case).
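The decay-rate extraction can be sketched as follows; the data here are synthetic, with a made-up decay rate, and the 0.002 cutoff mirrors the one used in the fits:

```python
import numpy as np

# Synthetic mean-phase-factor data decaying as exp(-kappa * V), for
# illustration of the fitting procedure only (kappa_true is made up).
V = np.array([8, 16, 24, 32, 48, 64])
kappa_true = 0.05
mpf = np.exp(-kappa_true * V)

# Fit log <e^{-i Im S}> = a - kappa*V, dropping points below a cutoff
# (0.002) where the sign problem dominates and the data become noisy.
mask = mpf > 0.002
slope, intercept = np.polyfit(V[mask], np.log(mpf[mask]), 1)
print(f"fitted decay rate kappa = {-slope:.4f}")
```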

The origin of the mean phase factor
We want to understand the origin of the difference in the phases between the two approaches. It is somewhat unexpected, since in both cases the first order term in the expansion of Im(S) exactly vanishes. To that end we examine contributions to the phase for three cases: the undeformed contour and the first order ansatz with a particular choice of parameters, with and without a special point. For all these cases we evaluate the rms of the contribution to Im(S) coming from different sites, as well as from terms involving nearest neighbours. We refer to the nearest neighbour pairs as "half-integer lattice sites", with the value being in between the two nearest neighbours involved. In all cases d = 1, L = 16, and m = 1. In order to see the effect clearly we work with a large value of µ = 2. We choose for the ansatz $a_1 = a_2 = 0.604$, $b_1 = 0.9$, $b_2 = 0.2$, with all other parameters set to zero. These values are chosen since they lead to a large mean phase factor that decays slowly, for the given choice of m and µ. For the case with no special points the mean phase factor is 0.72, while with a special point it is 0.099, which is comparable to the case of the undeformed contour, where it equals 0.051. However, the phase in the case of an ansatz with a special point decays much more slowly than that of the undeformed contour, and the decay rate differs from the case with no special point by only about 2.5%. This is similar to the behaviour observed in fig. 3. In fig. 4 we present the result for the undeformed contour. In this case the only contributions to Im(S) come from nearest neighbour interactions (2.5). We observe that all 16 contributions are of the same order of magnitude, of about 3.5. Had all these contributions been independent we would have obtained a total rms value of Im(S) of the order of $3.5\sqrt{16} = 14$. In fact we observe that the rms value of Im(S) is about 3, which is significantly lower.
This implies that there are negative correlations between these contributions. Indeed, we observe that there are small and almost uniform negative correlations between contributions from all pairs of "half-integer lattice sites". Consider now the contour that corresponds to the ansatz with no special point. We present contributions to the rms of Im(S) coming from different lattice sites and from nearest neighbour terms in fig. 5. It is somewhat surprising that the rms contributions from this contour, which has a much higher mean phase factor, are actually larger than those of the undeformed contour. This can only happen if there are stronger negative correlations to balance the large values. We present these correlations in fig. 6. We observe that the phase cancellations result from a delicate and uniform correlation among the different contributions to the phase. One could even claim that a good ansatz is one that maximises the cancellations.
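The rms-and-correlation diagnostic used here can be sketched on synthetic data; the anticorrelated contributions below come from a made-up noise model, chosen only to show how strong negative nearest-neighbour correlations suppress the total rms of Im(S) relative to the independent-contribution estimate:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic per-site contributions to Im(S): n_cfg configurations of
# L site contributions, with built-in negative nearest-neighbour
# correlations (neighbours share a noise source with opposite signs).
n_cfg, L = 50_000, 16
eta = rng.normal(size=(n_cfg, L + 1))
contrib = eta[:, 1:] - eta[:, :-1]

rms_per_site = np.sqrt(np.mean(contrib**2, axis=0))
rms_total = np.sqrt(np.mean(contrib.sum(axis=1)**2))
corr = np.corrcoef(contrib, rowvar=False)   # L x L correlation matrix

print("typical site rms :", rms_per_site.mean())
print("independent guess:", rms_per_site.mean() * np.sqrt(L))
print("actual total rms :", rms_total)      # much smaller than the guess
print("nn correlation   :", corr[0, 1])     # close to -0.5 here
```

In this toy model the cancellation is almost complete (the sum telescopes); in the data of figs. 5 and 6 the mechanism is the same, only the correlation pattern is set by the contour rather than by construction.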
For the same contour with a special point the situation is similar in the bulk but differs near the special point. The rms contributions to Im(S) are presented in fig. 7, and the correlations are presented in fig. 8. The fact that the rms values at a single site are quite large, together with the breaking of uniformity of both the rms and the correlations in some range around the special point, is enough to understand the reduction of the phase when a special point is present. All the observed effects are very sensitive to the value of µ. Hence, for smaller values of µ the effect is not as dramatic as here.

Figure 5. Contributions to Im(S) for the contour corresponding to the ansatz with no special point. The rms values at the (integer) lattice sites are now non-zero, although they are still smaller than the contributions from nearest neighbour pairs ("half-integer lattice sites").

Dependence of the mean phase factor on geometry
For a lattice in d > 1 of the form defined by (2.3) one could naively expect the mean phase factor to be independent of geometry, that is, to behave in the same way regardless of the choice of L and L̃, as long as V remains the same. This expectation relies on the fact that our contours are defined locally, and each lattice point has exactly the same interactions with its neighbours and the same contributions to Im(S) from these interactions. In fact there are at least two effects that can modify this expectation. First, as already mentioned, if we use an algorithm with a special point, there would be an extra contribution from these points, which is proportional to their number, L̃^{d−1}. On the other hand, for small values of L, the kinetic term and the periodic boundary conditions would limit the fluctuations in the temporal direction.
Since contributions to Im(S) come from these fluctuations (2.1), a short temporal extent L should result in larger values of the mean phase factor. One could expect the second effect to dominate for small values of µ and the first effect to dominate for large values of µ. We examine these expectations for the simple first order contour: in fig. 9 we present the results for the two dimensional case, and in fig. 10 for the three dimensional case. These results are consistent with our expectations.

Dependence of the mean phase factor on d for fixed α
An important question is how the proposed approach depends on space-time dimensionality. The question by itself is not even well defined, since field theories can behave very differently in different dimensions. But even before examining the continuum limit, the question arises of which parameters should be kept fixed for such a comparison. A natural possibility for the case at hand is to examine theories with similar values of α, since this is the parameter used in our expansion. Moreover, the terms contributing to Im(S) depend on α, and at leading order this contribution does not depend on d for fixed lattice size V. However, we have already noticed that even a change of geometry that does not change the dimensionality can lead to a change in the mean phase factor. Thus, it is natural to expect that there would be a difference, but of what nature, and how significant would it be?

Figure 6. Correlations among contributions to Im(S) from integer and half-integer lattice sites. Positive correlations appear as blue dots and negative ones as red; the size of a dot represents the magnitude of the correlation. There are small positive and negative correlations between almost all possible pairs, but the most dominant feature (other than the trivial unit correlation on the diagonal) is the strong negative correlation between the phase coming from an integer lattice site r and the half-integer site to its right, r + 1/2. The breaking of symmetry between left and right stems from the form of our ansatz (2.42), which, like the simple first order contour (2.15), includes only the dependence of ψ_r on the value of the field to its right, φ_{r+1}. These strong anti-correlations are not enough to explain the phase cancellations, since the sizes of the contributions at integer and half-integer sites differ (recall fig. 5). An added effect is the smaller negative correlations between two neighbouring half-integer sites, r + 1/2 and r + 3/2, as well as between two neighbouring integer sites, r and r + 1. This effect is stronger than the small positive correlations between sites r and r + 3/2 and between r and r − 1/2, again in light of the different sizes of contributions from integer and half-integer sites.
The theory at a larger value of d is the same as one at lower d with some spatial links added. While these links do not contribute at leading order, they do contribute, especially for large values of µ, and these contributions are not accounted for in the simple first order contour. Thus, at least for this contour, it is expected that a larger d would result in a lower mean phase factor. We examine this expectation in fig. 11.
While it makes sense to compare different dimensions at fixed α from the point of view of the expansion, observables behave differently in these cases. In fig. 12 we compare two observables, the action and the density, as a function of µ for several lattices with fixed α. In all cases we observe the Silver Blaze effect⁶, but since the values of m differ, the effect begins around different values of µ.

We note that while the L = 4 case behaves better than the other two cases, those two cases are almost indistinguishable, in line with the naive expectation that only the total volume matters. We can conclude that for L = 16 the effect of the boundary conditions on uniformisation of the field is negligible. The effect of the special point is also not visible: since the lattice is large, contributions to Im(S) from the bulk reduce the mean phase factor to near zero before this effect becomes important. An inset with the range 0.1 ≤ µ ≤ 0.8 is presented in order to better distinguish the points in this range.

In order to avoid the geometry factor of the previous subsection we compare three lattices with L = 4, at d = 2, 3, 4. We see that indeed the d = 4 case has a lower value of the mean phase factor than the d = 3 case, whose mean phase factor is in turn lower than that of the d = 2 theory. The geometry factor can change this behaviour. Indeed, we see that the d = 2 lattice with L = 64, whose mean phase factor is consistently lower than that of the d = 2, L = 4 case, behaves better than the d = 3 theory with L = 4 for small µ, but this changes around µ = 1.2.
Dependence of the mean phase factor on d for fixed m
As already suggested, a more natural parameter to fix for the comparison of the behaviour at different values of d is the mass parameter m. As suggested in section 2, we expect to obtain in this case better behaviour for larger values of d, since from this point of view the expansion can be interpreted as an expansion around d = ∞. We examine this expectation by comparing the mean phase factors in fig. 13. We then examine the Silver Blaze phenomenon in fig. 14. Both figures are for m = 1 and lattices of 256 sites.

Comparing different contours
In 3.1 we compared the simple first order contour to the undeformed contour and identified that at large µ the phase is reduced when we use the special point prescription. We defined several other contours: the second order simple contour and the first and second order ansätze. Here we want to compare these contours. Since in some of these cases we have to use the special point prescription, we use it for all contours, for consistency. We limit the analysis to small and moderate values of µ, in order to avoid the region in which the special point becomes the dominant variable. For simplicity we examine the d = 1 case with m = 1.

The two lattices at d = 3 behave similarly; there seems to be no strong dependence on the geometry in this case. As µ is increased the error bars become larger, in light of the reduction of the mean phase factor, which results in a stronger sign problem.
The ansätze depend on several parameters, and there are several local maxima of the mean phase factor in parameter space. We use several different starting points in parameter space when searching for optimal values of the parameters. In particular, we use the parameters that define the respective simple contours as starting points, as well as previously obtained values of the parameters at nearby values of µ. For the second order ansatz we also use as starting points the optimal parameters obtained for the first order ansatz. From all these starting points we probe nearby points in parameter space, searching for parameter values that increase the mean phase factor. We observe that several different values of the parameters can result in similar local maxima of the mean phase factor. We are not claiming that the values we use for the parameters are the absolute optimal ones, but they are probably not far from it and, in any case, they are values that we managed to obtain easily, which is what we aim for in this approach.
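The multi-start search just described can be sketched as follows. The function mean_phase_factor below is a hypothetical stand-in (a smooth toy surrogate with two local maxima); in the actual procedure each evaluation requires a Monte Carlo estimate of the mean phase factor for the given ansatz parameters.

```python
import itertools
import numpy as np

def mean_phase_factor(p):
    # Toy surrogate, NOT the lattice estimator: two Gaussian bumps in
    # parameter space, mimicking competing local maxima.
    return 0.7 * np.exp(-np.sum((p - 0.6) ** 2)) + 0.5 * np.exp(-np.sum((p + 0.3) ** 2))

def local_search(p0, step=0.05, shrink=0.5, tol=1e-3):
    """Greedy coordinate probing: accept any +/- step along an axis that helps,
    and refine the probe size once no single-axis move improves the value."""
    p, best = p0.copy(), mean_phase_factor(p0)
    while step > tol:
        improved = False
        for i, s in itertools.product(range(len(p)), (step, -step)):
            q = p.copy()
            q[i] += s
            f = mean_phase_factor(q)
            if f > best:
                p, best, improved = q, f, True
        if not improved:
            step *= shrink
    return p, best

# Starting points: the simple-contour parameters, plus values that stand in
# for optima obtained at nearby mu (illustrative numbers only).
starts = [np.zeros(4), np.full(4, 0.6), np.array([0.9, 0.2, 0.0, 0.0])]
results = [local_search(p0) for p0 in starts]
p_best, f_best = max(results, key=lambda r: r[1])
```

Different starting points can converge to different local maxima; as in the text, one keeps whichever parameter set gives the largest mean phase factor, without claiming global optimality.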
In fig. 15 we compare the results of the first and second order simple contours and ansätze for the m = 1, L = 8 case. We see that for small µ going to the second order is more important than generalising to an ansatz. This is not surprising, since this is where we expect the expansion to be particularly useful. Also, this is where the special point is least important. As µ is increased the ansätze begin to behave better than the respective simple contours and the gap between the simple second order contour and the first order ansatz closes. This is again as expected.
Nonetheless, it might seem that the benefit from using the simple second order contour or the ansätze, as compared to the simple first order contour, is not significant. This impression is incorrect, since what actually matters is the decay rate of the phase as a function of lattice size, as we examined in fig. 3. We repeat the same analysis for the four contours in the problem at hand; the results are shown in fig. 16. In paper I we claimed that the first order ansatz is reliable up to about L = 1000. This is consistent with the current result.

Figure 13. The mean phase factor as a function of µ for lattices with 256 sites at various dimensions for m = 1 (α decreases from α = 1/3 for d = 1 to α = 1/9 for d = 4), evaluated using the simple first order contour. The lattices are identified as L × L̃^{d−1}. The expected behaviour of a higher mean phase factor for larger d is indeed observed. This can be masked by geometry factors; indeed, the phase factors of the 4 × 64 lattice are larger than those of the 16 × 4² one. On the other hand, it seems that the effect of the increase of the size of the boundary, where additional contributions to Im(S) are present, is not very significant, at least in the current case.
We note that the current treatment of the special point hardly changes anything in this respect. Somewhat surprisingly, the second order simple contour does better than the first order ansatz even at this, not too small, value of µ. Not surprisingly, the second order ansatz behaves better still. We can expect it to give reliable results at least up to L = 3000; that is, by generalising the first order ansatz we can triple the size of the lattice that can be used. In order to examine this expectation we plot the action density, S/L, as a function of lattice size, retaining m = 1 and choosing µ = 1, for the various contours, in fig. 17. For these sizes of the lattice this observable is expected to be L-independent. We observe that all four contours, and especially the second order ansatz, give quite reliable results even beyond the point where the sign problem is expected to become significant.

Figure 17. The action density, S/L, as a function of lattice size for the various contours for m = 1, µ = 1. For each contour we present the results up to somewhat beyond the point where the sign problem begins to become significant. A constant dashed line at the average value of the observable is shown for convenience. For the first order simple contour and the first order ansatz the results are consistent with those of paper I, which implies that the treatment of the special point is not of significant importance.
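The scaling behind such statements can be made concrete under the common assumption that the mean phase factor decays exponentially with lattice size, ⟨e^{i Im S}⟩ ∼ e^{−cL}: the number of configurations needed for a fixed relative error then grows like e^{2cL}, so reducing the decay rate c by some factor enlarges the reachable lattice by the same factor. The constants below are illustrative, not fitted values.

```python
import numpy as np

# Back-of-the-envelope sketch with illustrative constants: if the mean phase
# factor decays like exp(-c * L), the statistics needed for a fixed relative
# error scale like 1 / <e^{i Im S}>^2 = exp(2 * c * L).

def max_usable_L(c, budget=1e8):
    """Largest L for which exp(2 c L) stays within the configuration budget."""
    return np.log(budget) / (2.0 * c)

c_first_order = 1e-2               # illustrative decay rate, not a fitted value
c_second_order = c_first_order / 3.0   # a contour with a 3x slower phase decay
# max_usable_L(c_second_order) is exactly three times max_usable_L(c_first_order)
```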

Discussion
We studied the method of contour deformation and established that it is a viable option for dealing with the sign problem. Indeed, we managed to get reliable results for quite large lattices for generic values of the parameters. To that end we used both a systematic expansion and ansätze generalising this expansion. The expansion used can be regarded as one around m = ∞, around d = ∞, or as an expansion with respect to neighbour distance. Each one of these interpretations can be used for the construction of generalisations of our approach to other theories.
Both the expansion and the ansätze have a large degree of arbitrariness. Our purpose was not to construct an elaborate mathematical framework, but to establish a practical tool for dealing with the sign problem. From this perspective the arbitrariness is not a problem but a blessing: one can deform the contour in many ways and choose different values for the parameters, and they would all be good enough for obtaining the desired results. Thus, it should be relatively easy to use this method in practice, and we believe that similar results would be obtained by applying this approach to other systems as well.
One guiding principle that we used, which restricted this arbitrariness, is the requirement of obtaining a computationally efficient algorithm. This led to a restriction of the form of the contour deformation to one that leads to Jacobians that are simple to evaluate. As in paper I, a stumbling block towards an upper-block-triangular form of the Jacobian matrix originated from the periodic boundary conditions. We proposed an approach that improves on one of the options considered in paper I, namely not deforming the contour related to the special point at all. This new approach turned out to be quite adequate for not very large values of the chemical potential. However, for large values of µ the contributions to the phase from the special point become significant. This contribution becomes more problematic in higher dimensions, even though higher dimensions generally behave better. We identified that, especially for the ansätze, the origin of this problem is the non-uniformity of the correlations that stems from treating a specific point (or hyper-surface) as special. Thus, it would be advisable to devise contours that do not treat any point in a special way. We are currently examining such possibilities.
Another important direction is the examination of the efficiency of the contour deformation approach for fermionic theories. The local nature of the deformations, which we imposed in order to obtain simple and computationally efficient expressions, as well as the general requirement of computational efficiency, are challenged in this case by the fermionic determinant. Moreover, it was argued in [11] that in many models with fermionic sign problems contour deformation would be ineffective. Since there are known examples of contour deformations, e.g., Lefschetz thimbles, that improve sign problems in fermionic cases, it would be interesting to examine the current method in several such cases. We hope to study this issue in the future.