Scattering from production in 2d

In 1968, Atkinson proved the existence of functions that satisfy all S-matrix axioms in four spacetime dimensions. His proof is constructive and, to our knowledge, it is the only result of this type. Remarkably, the constructive methods used in the proof were never implemented in practice. In the present paper, we test the applicability of those methods in the simpler setting of two-dimensional S-matrices. We solve the problem of reconstructing the scattering amplitude starting from a given particle production probability. We do this by implementing two numerical iterative schemes (fixed-point iteration and Newton's method), which, by iterating unitarity and dispersion relations, converge to solutions of the S-matrix axioms. We characterize the region in amplitude space in which our algorithms converge, and discover a fractal structure connected to the so-called CDD ambiguities, which we call the "CDD fractal". To our surprise, the question of convergence naturally connects to the recent study of coupling maximization in the two-dimensional S-matrix bootstrap. The methods exposed here pave the way for applications to higher dimensions, and expose some of the potential challenges that will have to be overcome.


Introduction
Scattering and particle production are deeply connected in relativistic theories. 1 In two-dimensional spacetime, d = 2, scattering without production leads to integrability [1]. In higher dimensions, d > 2, scattering without production is impossible [2]. In this paper we will be interested in the converse question: Given particle production, can we reconstruct scattering?
In the simplest case of 2 → 2 scattering, the familiar relation between the two is expressed by unitarity 2
$$\underbrace{2\,\mathrm{Disc}_s\, T_{2\to2} - |T_{2\to2}|^2}_{\text{Scattering}} = \mathrm{(MP)}, \tag{1.1}$$
where $T_{2\to2}$ is the two-to-two scattering amplitude, $\mathrm{Disc}_s$ stands for the discontinuity with respect to the Mandelstam variable $s$, and $\mathrm{(MP)} = \sum_{n>2} |T_{2\to n}|^2$ is the multi-particle contribution that describes particle production in a given scattering process. The nontrivial problem we are interested in here is to solve unitarity (1.1) combined with analyticity and crossing (see section 2.1 for the precise formulation). Following the pioneering works by Atkinson [3][4][5][6][7], we treat the particle production (MP) as fixed, given data, and solve for $T_{2\to2}$ as a function of (MP). This is the central idea of the present work. In practice, we do this by implementing numerically an iterative procedure developed by Atkinson, which converges to a solution of (1.1). Remarkably, provided (MP) is not too large, 3 the solutions to (1.1) can be shown to exist and to be reachable by the simple iteration procedures that we describe in detail below.
In this paper, we present and implement numerically two algorithms to reconstruct scattering from production in two spacetime dimensions. Our first method relies on applying iteratively the unitarity equation (1.1) to find a fixed-point thereof, our second method is the standard Newton method applied to this equation.
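To fix ideas before the functional setting, the two schemes can be compared on a scalar caricature of the unitarity equation, $2t = t^2 + p$, where the number $p \ge 0$ stands in for the production input (MP). This is an illustrative sketch, not the paper's functional implementation:

```python
# Scalar caricature of eq. (1.1): solve 2*t = t**2 + p for t, treating the
# production term p as given. Both schemes are toy versions of the
# functional iterations used in the paper.

def fixed_point(p, t0=0.0, n_iter=100):
    """Iterate the 'unitarity map' t -> (t**2 + p)/2."""
    t = t0
    for _ in range(n_iter):
        t = 0.5 * (t * t + p)
    return t

def newton(p, t0=0.0, n_iter=25):
    """Newton's method on Psi(t) = (t**2 + p)/2 - t."""
    t = t0
    for _ in range(n_iter):
        f = 0.5 * (t * t + p) - t
        t = t - f / (t - 1.0)          # derivative of Psi is t - 1
    return t

p = 0.5
exact = 1 - (1 - p) ** 0.5             # the smaller root of t**2 - 2t + p = 0
assert abs(fixed_point(p) - exact) < 1e-12
assert abs(newton(p) - exact) < 1e-12
```

Note that for $p > 1$ the real fixed point disappears, mirroring the requirement that the production input not be too large.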
Two dimensions is an obvious starting point because S-matrices are much simpler: they have only one kinematic invariant, and no phase-space integral in the unitarity equation. Furthermore, for scattering of identical particles the problem can be solved analytically and therefore the performance and limitations of the numerical algorithms can be explored in great depth. While in two dimensions our algorithm simply recovers known solutions in a novel way, its power lies in the fact that it can be easily generalized to higher dimensions where no solutions are known.
We call a function of Mandelstam invariants that satisfies the fundamental constraints of analyticity, unitarity, and crossing an amplitude-function. It is one of the aims of the nonperturbative S-matrix program to characterize the space of such functions. 4 An amplitude-function does not necessarily describe scattering in some physical theory, but the converse is definitely true: every physical scattering amplitude is an amplitude-function. Constructing amplitude-functions and characterizing the space of physical parameters that they span, which we can loosely call the space of couplings, is an interesting and important problem, sometimes also called the primal problem; see [12] for a recent discussion of both the primal and dual problems in 2d. Even more ambitiously, by exploring the space of amplitude-functions, we might hope to find physical theories at its boundary and in this way try to solve them.

1 By "scattering" we mean elastic scattering amplitudes n → n. By "production" we mean inelastic scattering amplitudes n → m.
2 For illustration purposes, we suppress the phase space integrals and simple kinematical pre-factors.
3 For d = 2 the precise statement can be found in the bulk of the paper. For d = 4 Atkinson proved sufficient conditions for convergence in [3][4][5][6].
4 Note that the basic principles discussed here are known to lead to many nontrivial constraints on the physical parameters of the theory; see [8][9][10][11] for some recent works. In these works, however, only some of the constraints coming from unitarity are implemented. In the problem we are considering, we aim at imposing unitarity fully at the level of the two-to-two scattering amplitude in a gapped theory (no massless particles in the spectrum).
An important step in this direction has been taken in [13][14][15][16][17], where inelastic unitarity constraints were fully implemented and nonperturbative constraints on the scattering amplitudes derived. Remarkably, in two spacetime dimensions this led to re-discovering the actual physical S-matrices using purely bootstrap methods. Finding such exact, solvable S-matrices in higher dimensions is still elusive.
To complete the program of constructing amplitude-functions in higher dimensions, one extra condition has still to be implemented for massive theories: elastic unitarity. Elastic unitarity is the statement of unitarity at energies where no particle production is possible. In this energy range (e.g. $4m^2 < s < 16m^2$ for pion scattering), (MP) vanishes in (1.1). When combined with analyticity and crossing, elastic unitarity is known to give many powerful and nontrivial constraints on the scattering amplitudes, see e.g. [18] for a recent discussion of the question.
The methods that have been developed so far do not allow one to impose elastic unitarity, due to its nonlinear nature. It therefore remains an open problem to explore the space of amplitudes that satisfy both elastic and inelastic unitarity, analyticity, and crossing.
In this context, one remarkable result that has so far attracted little attention is a constructive proof by Atkinson [3][4][5][6] of the existence of functions satisfying all of the S-matrix axioms (crossing, elastic and inelastic unitarity) in four dimensions. The mathematical proof is based on constructing an iterative sequence and exhibiting sufficient conditions for its convergence. To our knowledge (see also [19]), it is the only result about two-to-two scattering amplitudes in gapped theories that has all the desired unitarity properties.
The main message of our paper is that the approach of imposing unitarity advocated in the papers by Atkinson is actually suitable for efficient numerical implementation. It therefore offers an exciting avenue to explore the space of scattering amplitudes that satisfy all the desired unitarity properties in higher dimensions, and to overcome some of the present limitations of the S-matrix program. If implemented in four dimensions, it would be the first method able to construct amplitude-functions which satisfy all axioms of the S-matrix program, and we therefore believe that this idea deserves further attention [20].
From a more philosophical standpoint, choosing (MP) seems to imply a huge amount of indeterminacy in the resulting scattering amplitude-function. Nevertheless, we see three main aspects that justify this approach. Firstly, when deriving bounds on the space of low-energy S-matrix parameters, we expect the detailed form of the multi-particle input to be irrelevant. By elevating the particle production (MP) to the role of an input parameter and observing how the 2 → 2 S-matrix depends on it as we vary its parameters, we can test this idea very explicitly. Secondly, to bootstrap particular physical theories, we have the possibility to input particle production data obtained by other means, and in this way narrow the search for the desired amplitude in the potentially vast space of amplitude-functions. Thirdly, it provides an interesting middle ground between perturbative and nonperturbative methods: in solving (1.1), we can for example use perturbative methods to get some insight into the precise form of the particle production term (MP), which can then be turned into the scattering amplitude using the nonperturbative methods of the present paper.
The main results of this paper are:

• We implemented numerically for the first time iterative solutions to unitarity, analyticity, and crossing with a given inelasticity (MP) in two space-time dimensions. We recovered the known integrable S-matrices, as well as analogues with inelasticity which match the known solutions to unitarity in 2d.
• We characterized in detail the range of convergence of the two methods we used to solve (1.1): fixed-point iteration, described in section 3, and Newton's method, described in section 4. We see that most of the parameter space of theories can be captured, apart from a slice near the edge, for which our algorithms do not converge. Covering the complete space of solutions in two space-time dimensions requires methods that go beyond the ones described in this paper.
• In d = 2, particle production (MP) does not specify the scattering amplitude $T_{2\to2}$ uniquely. In fact, there are infinitely many scattering amplitudes $T_{2\to2}$ with a given inelasticity. This space of solutions to our problem is characterized by CDD ambiguities [21][22][23]. We explored how different starting points of Newton's method allow us to recover S-matrices with different CDD factors.
The plan of the paper is as follows. In section 2, we review the basics of S-matrices in 2d and the analytic solutions which we will recover through our numerical implementation. In section 3 we describe our numerical implementation of the fixed-point iteration in two dimensions, and in section 4 we explain how we used Newton's method to answer the question posed at the beginning of the paper. Section 5 presents the discussion of the results and some open directions.

Two-dimensional S-matrices
In this section, we review the basic properties of two-dimensional S-matrices and formulate precisely the problem of scattering from production that we would like to solve. We review the analytic solution to the problem which involves the notion of CDD S-matrices [23], central to this work. Then, we present our numerical iterative solution to the problem which is the subject of the present paper.

Problem
We consider scattering of identical massive bosons in two spacetime dimensions. The two-to-two kinematics is captured by one kinematic invariant, the center-of-mass energy $s$, while $t = 0$ and $u = 4m^2 - s$. The S-matrix is therefore a function of $s$ only, and the scattering amplitude $T$ is defined by
$$S(s) = 1 + \frac{i\,T(s)}{2\sqrt{s(s-4m^2)}}. \tag{2.1}$$
In the free theory, $T(s) = 0$. The inverse factor of $\sqrt{s(s-4m^2)}$ comes from the Jacobian that relates $\delta^{(2)}(p_1 + p_2 + p_3 + p_4)$ to $\delta^{(2)}(p_1 + p_3)\,\delta^{(2)}(p_2 + p_4)$ in two dimensions (see for instance [24]). We assume that $S(s) \equiv \lim_{\epsilon\to0} S(s + i\epsilon)$ satisfies Mandelstam analyticity, namely that it is holomorphic outside of the unitarity cuts $s < 0$ and $s \geq 4m^2$, apart from potential poles that correspond to bound states located at $0 < s < 4m^2$. Real analyticity implies that $T(s^*) = T^*(s)$. Crossing symmetry takes the form
$$S(s) = S(4m^2 - s). \tag{2.2}$$
Finally, assuming that particle production starts from $s_0$, unitarity takes the following form:
$$|S(s)|^2 = 1 \quad (4m^2 \le s < s_0), \qquad |S(s)|^2 \le 1 \quad (s \ge s_0). \tag{2.3}$$
In a theory with a single stable particle of mass $m$, $s_0 = 9m^2$ and $s = 16m^2$ stand for the three- and four-particle thresholds, respectively. The explicit value of $s_0$ does not play an important role in the subsequent analysis.
To characterize particle production, we explicitly introduce inelasticity as follows:
$$|S(s)|^2 = 1 - f_i(s), \tag{2.4}$$
where $f_i(s)$ is defined for real, positive $s$ and by assumption has nontrivial support starting at $s \geq s_0$. It characterizes the amount of particle production; unitarity (2.3) requires
$$0 \le f_i(s) \le 1. \tag{2.5}$$
Below, for concreteness, we set $s_0 = 16m^2$, which corresponds to a situation where $T_{2\to3} = 0$.
In terms of $T(s)$, unitarity and inelasticity read as in (2.6) and (2.7). At this point we can formulate precisely the problem that we would like to solve.

Explicit solution
It turns out that an explicit solution to the problem stated above exists in two dimensions. We consider the cases $v_i(s) = 0$ and $v_i(s) \neq 0$ separately.
Elastic amplitudes ($v_i(s) = 0$). In two dimensions, it is possible to have scattering without particle production. Such theories are integrable, and they are characterized by an S-matrix which is a pure phase. The building blocks thereof are known as CDD factors. Following the terminology of [14], we will distinguish CDD poles and CDD zeros. The corresponding S-matrices are given by
$$S^{\rm pole}_{\rm CDD}(s) = \frac{\sqrt{s(4m^2-s)} + \sqrt{m_p^2(4m^2-m_p^2)}}{\sqrt{s(4m^2-s)} - \sqrt{m_p^2(4m^2-m_p^2)}}, \qquad S^{\rm zero}_{\rm CDD}(s) = \frac{\sqrt{s(4m^2-s)} - \sqrt{m_z^2(4m^2-m_z^2)}}{\sqrt{s(4m^2-s)} + \sqrt{m_z^2(4m^2-m_z^2)}}. \tag{2.9}$$
These S-matrices satisfy Mandelstam analyticity, crossing, and unitarity, $|S^{\rm pole}_{\rm CDD}(s)| = |S^{\rm zero}_{\rm CDD}(s)| = 1$. It is easy to check that $S^{\rm pole}_{\rm CDD}(s)$ has a pole at $s = m_p^2$, and that $S^{\rm zero}_{\rm CDD}(m_z^2) = 0$.
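These properties are easy to verify numerically. The sketch below uses the standard two-dimensional CDD factor in units $m = 1$; its normalization conventions may differ from those of (2.9):

```python
import numpy as np

# Numerical check of the CDD properties: unitarity on the cut, crossing,
# pole/zero locations. Standard 2d CDD factor in units m = 1; conventions
# may differ from the paper's eq. (2.9).

def cdd(s, m2, sign):
    """sign=+1: CDD pole at s = m2; sign=-1: CDD zero at s = m2."""
    rs = np.sqrt(s * (4 - s) + 0j)          # crossing-symmetric square root
    rm = np.sqrt(m2 * (4 - m2) + 0j)
    return (rs + sign * rm) / (rs - sign * rm)

m2z = 3.0
s_cut = np.linspace(4.5, 50, 200) + 1e-12j  # just above the right cut
assert np.allclose(np.abs(cdd(s_cut, m2z, -1)), 1.0)     # |S| = 1: unitarity
assert abs(cdd(3.0 + 0j, m2z, -1)) < 1e-12               # zero at s = m_z^2
s = 1.3 + 0.2j                                           # generic point
assert np.isclose(cdd(s, m2z, -1), cdd(4 - s, m2z, -1))  # crossing S(s) = S(4-s)
```

The crossing check works because the combination $s(4m^2 - s)$ is invariant under $s \to 4m^2 - s$.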
It is also possible for $S(s)$ to have a pair of complex conjugate zeros. 5 This can be achieved either by choosing a single CDD factor with $m_z^2 = 2m^2 + i\alpha$, $\alpha \in \mathbb{R}$, or by taking a product of two $S^{\rm zero}_{\rm CDD}$ with complex conjugate zeros. It is easy to see that any purely elastic S-matrix in 2d is a product of CDD poles and CDD zeros. Firstly, map one half of the complex plane ($\mathrm{Re}(s) > 2m^2$), minus the right cut $s > 4m^2$, to the unit disk (see [15, fig. 1]). Suppose that there exists an S-matrix $\tilde S(z)$ with poles and zeros at some locations in the complex plane, and define $f(z)$ to be the ratio of $\tilde S(z)$ to the product of CDD-pole and CDD-zero S-matrices with the same poles and zeros. $f(z)$ is therefore holomorphic, and since it possesses no zeros in the unit disk, $1/f(z)$ is also holomorphic. Since this map sends the cut $s > 4m^2$ to the boundary of the unit disk (again, see [15, fig. 1]), $|f(\exp(i\theta))| = 1$ for $\theta \in [0, 2\pi]$. The application of the maximum modulus principle to $f$ and $1/f$ gives that $f(z) = 1$ everywhere. Crossing symmetry allows us to extend this to the full complex plane.
Inelastic amplitudes ($v_i(s) \neq 0$). Following [14,22], we can immediately write down a general solution (2.10) to eq. (2.6), in which $|S_{\rm elastic}(s)| = 1$ is a purely elastic S-matrix given by a product of CDD zeros and CDD poles, together with a possible overall sign factor. It is immediate to check that (2.10) satisfies all the conditions described previously. Note that this formula assumes that $|\log(1 - f_i(s))| < s$ at large $s$. This will be enough for the purposes of this paper. 6

Properties of $T(s)$. Below we will be interested in the properties of the scattering amplitude $T(s)$, in addition to those of $S(s)$. Let us list a few of its relevant properties, concerning its behavior close to the two-particle threshold $s = 4m^2$ as well as its high-energy limit $s \to \infty$. Consider an amplitude with $N_{\rm zeros} \geq 0$ CDD zeros, $N_{\rm poles} \geq 0$ CDD poles, and (possibly zero) inelasticity. For $s \to \infty$, this amplitude goes to a constant,
$$\lim_{s\to\infty} T(s) := c_\infty, \tag{2.11}$$
given by the straightforward computation in (2.12). Close to the threshold, $s \approx 4m^2$, the behaviour of the amplitude depends on whether the total number of CDD poles and zeros, $N_{\rm tot} = N_{\rm zeros} + N_{\rm poles}$, is even or odd, as given in (2.13), where $c_0$ is a real coefficient, depending on the location of the zeros and poles, whose explicit value is not important for us but straightforward to work out.

5 In [14] such pairs of zeros were called CDD resonances.

Iterative solution
Next, we present an iterative solution to the problem above, which we implement numerically in the following sections. Following the basic idea of Atkinson, we define the iterative map $\Phi$ as the right-hand side of the unitarity equation (2.14). The RHS of (2.14) explicitly depends on the real part of the amplitude: using dispersion relations (2.15), it can be expressed in terms of the imaginary part of the amplitude $\mathrm{Im}\,T(s)$. This fact justifies writing $\Phi(\mathrm{Im}\,T)(s)$. Here, for simplicity, we assumed that only a single bound state is present in the spectrum; analogous formulas can easily be written for an arbitrary number of bound states. The first iterative strategy that we consider is therefore to define a sequence
$$\mathrm{Im}\,T_{n+1} = \Phi(\mathrm{Im}\,T_n),$$
which, whenever it converges, converges to a fixed point $\mathrm{Im}\,T = \Phi(\mathrm{Im}\,T)$ that, by construction, satisfies unitarity. Crossing is ensured by (2.15). Our second iterative strategy is the Newton-Kantorovich method, a functional analogue of Newton's method, which we apply to find the roots of the map
$$\Psi = \Phi - \mathrm{id},$$
where $\mathrm{id}$ is the identity functional. In that case we look at the following iteration:
$$\mathrm{Im}\,T_{n+1} = \mathrm{Im}\,T_n - \left[\Psi'(\mathrm{Im}\,T_n)\right]^{-1} \Psi(\mathrm{Im}\,T_n),$$
where $\Psi'$ is the functional derivative of $\Psi$ with respect to $\mathrm{Im}\,T$. The roots of this operator clearly also satisfy unitarity and crossing. We make this more explicit in the section dedicated to Newton's method below. Note also that in our implementation we deal with a discretized version of this algorithm, which is simply called "Newton's method". As we mentioned above, the map $\Phi$ explicitly involves the real part of the amplitude. Therefore, it contains an implicit step of reconstructing the real part from the imaginary part via dispersion relations, which can be recast as (2.19), where taking the real part prescribes that the integral be understood in the sense of its principal value.
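In discretized form, the Newton-Kantorovich update reduces to the familiar linear solve of multivariate Newton's method. A minimal self-contained sketch, with a toy componentwise map standing in for the discretized $\Psi$ and its Jacobian:

```python
import numpy as np

# Discretized Newton(-Kantorovich) step: given a vector map Psi whose root
# we seek, iterate rho -> rho - J(rho)^{-1} Psi(rho). The toy Psi below
# merely stands in for the discretized unitarity map.

def newton_solve(Psi, jacobian, rho0, tol=1e-12, max_iter=50):
    rho = rho0.astype(float).copy()
    for _ in range(max_iter):
        F = Psi(rho)
        if np.max(np.abs(F)) < tol:
            break
        rho = rho - np.linalg.solve(jacobian(rho), F)  # the Newton update
    return rho

# toy example: componentwise root of rho**2 - c = 0
c = np.array([1.0, 4.0, 9.0])
root = newton_solve(lambda r: r**2 - c, lambda r: np.diag(2 * r), np.ones(3))
assert np.allclose(root, [1.0, 2.0, 3.0])
```

In the algorithm of section 4, $\Psi$ would be the discretized $\Phi - \mathrm{id}$, and the Jacobian a dense matrix rather than the diagonal toy used here.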
In a practical implementation, the challenge that one faces is that T n (s), and consequently Re T n (s), should vanish when s → 4m 2 (see (2.13)), but the dispersion relation above does not guarantee this vanishing. There are two strategies to enforce this condition, which we describe now.
Most of the time in our algorithm, by convention, we will keep the constant at infinity $c_\infty$ fixed throughout the iteration and consider it an input of the algorithm. We will then use the vanishing of the real part at threshold, $\mathrm{Re}\,T_n(4m^2) = 0$, to define a sequence of couplings $g_n$ according to (2.20). Note that this result is actually generic: for an amplitude with only one bound state, the coupling is given by the dispersion integral above. Generalizing to more bound states is straightforward. One can, for instance, following [14], keep all couplings fixed but one and define $g_n$ in a very similar manner, by adding the contribution from the other poles at $s = 4m^2$ in the parenthesis on the right-hand side of (2.20). However, in the cases where there are no poles, we cannot keep $c_\infty$ fixed, and we instead have to define a sequence of constants at infinity $c_n$ (2.21). To sum up, in the case where we have one bound state, we construct the iteration procedure by iterating the unitarity map (2.22a), the dispersion relation (2.23), and the threshold condition (2.24). The input data required for the iteration procedure is:
• the mass of the bound state $m_p$,
• the constant at infinity $c_\infty$.
Given the input data and an arbitrarily chosen initialization amplitude $T_0(s)$, we can iterate the three equations above. The unitarity relation allows us to compute the discontinuity of the amplitude at the next step of the iteration. The essential step of this procedure is the reconstruction of the real part of $T_n(s)$ via dispersion relations (2.23). The condition $\mathrm{Re}\,T_n(4m^2) = 0$ (2.24), which is a consequence of elastic unitarity (2.3), guarantees that the dispersion integral is well-defined at every iteration. Depending on our choice of the problem, we can alternatively fix $g^2$ and iterate $c_\infty$ instead.
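Schematically, one pass of the iteration can be organized as below. The names B (discretized dispersion kernel), pole_vec (pole function on the grid), v_i (inelasticity) and sqrt_fac (kinematic factor) are hypothetical placeholders for the objects defined in the text, and the normalization of the unitarity step is illustrative only:

```python
import numpy as np

# Schematic of one sweep of the iteration (2.22a)-(2.24). All names are
# placeholders for the discretized objects defined in the text; the
# normalization of the unitarity step is illustrative, not the paper's.

def sweep(rho, B, pole_vec, v_i, sqrt_fac):
    """One iteration: dispersion -> coupling from Re T(threshold) = 0 -> unitarity."""
    re_disp = B @ rho                      # dispersive part of Re T on the grid
    g2 = -re_disp[-1] / pole_vec[-1]       # enforce Re T = 0 at the last point, x_N = 1
    re_T = re_disp + g2 * pole_vec         # full real part, pole term included
    # unitarity map: Im T <- (|T|^2 / kinematic factor + inelasticity) / 2
    rho_new = 0.5 * ((re_T**2 + rho**2) / sqrt_fac + v_i)
    return rho_new, g2
```

Starting from any initialization vector, repeated calls to `sweep` implement the fixed-point strategy; the bound-state coupling is recomputed at every step from the threshold condition.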
In implementing the iterative solution above, we will have to address two principal questions:
• When does a given numerical strategy converge?
• What does it converge to, as a function of the input data and the initialization amplitude $T_0(s)$?
These questions will be the subject of the following sections dedicated to the numerical implementation of the iterative algorithm.

Ambiguities in the reconstruction
Given the input data, and looking back at the explicit solution (2.10), it is immediate to see that the inelastic contribution is fully determined, and so is the pole contribution of $S_{\rm elastic}$. However, it is not difficult to realize that a solution of interest can have an arbitrary number of CDD zeros, as long as they are arranged so that the sum rule (2.12) is respected, i.e. (2.25). Whenever convergent, we will find below that the fixed-point iteration algorithm converges to the $N_{\rm zeros} = 1$ solution independently of the initialization amplitude $T_0(s)$. On the other hand, we will also see that Newton's method can converge to arbitrary $N_{\rm zeros}$ depending on the initialization amplitude $T_0(s)$. Finally, there are amplitudes for which neither of the methods converges, and some other method should be used.
A related source of ambiguity is the fact that, at finite numerical resolution, 7 one cannot distinguish amplitudes which differ by the insertion of an even number of CDD zeros arbitrarily close to $s = 4m^2$. This ambiguity will not play any role in our discussion below, but the reader should keep in mind that adding such undetectable CDD zeros is always formally possible when going from the discretized problem back to the continuum.
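This near-threshold blindness can be illustrated numerically: a double CDD zero at $m_z^2 = 4 - \delta$ changes the S-matrix only within a window of size $\sim \delta$ around threshold. The sketch below assumes the standard 2d CDD-zero form, in units $m = 1$:

```python
import numpy as np

# A double CDD zero at m_z^2 = 4 - delta is invisible away from threshold,
# while S still vanishes exactly at the zero itself. Standard 2d CDD-zero
# form, units m = 1 (illustrative conventions).

delta = 1e-10

def cdd_zero(s, m2):
    rs = np.sqrt(s * (4 - s) + 0j)
    rm = np.sqrt(m2 * (4 - m2) + 0j)
    return (rs - rm) / (rs + rm)

def pair(s):                      # even number of zeros: a double zero
    return cdd_zero(s, 4 - delta) ** 2

s_away = np.linspace(4.01, 40, 100) + 1e-9j
assert np.max(np.abs(pair(s_away) - 1)) < 0.01   # undetectable on a coarse grid
assert abs(pair(4 - delta + 0j)) < 1e-12         # yet S vanishes at the zero
```

Away from threshold the pair deviates from unity only by an amount of order $\sqrt{\delta}$, far below any realistic grid resolution for small $\delta$.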

Fixed-point iteration
In this section, we implement numerically the fixed-point iteration algorithm described above, see (2.22a). Remarkably, we find a wide class of amplitudes for which the algorithm converges. 8 Below, and for the rest of the paper, we will set $m = 1$. It will also be convenient to map the domain $s \in [4, +\infty)$ to $(0, 1]$ via $x = 4/s$. To avoid confusion, we should use a different name for functions of $x$, for instance $\tilde T(x) \equiv T(4/x)$.
Hoping that it will not confuse the reader, below we omit the tildes and simply write $T(x)$. In this variable, the two-particle threshold at $s = 4$ is mapped to $x = 1$, infinite $s$ is mapped to $x = 0$, and the four-particle inelastic threshold at $s = 16$ maps to $x = 1/4$. Finally, we introduce $\rho(x)$ for the imaginary part of the amplitude. The steps of the numerical implementation are as follows:
1. Discretization. We discretize the values of the function that we iterate on a grid of length $N$. The grid spacing controls the precision of the overall agreement with the analytic solutions.
2. Interpolation. We adopt two methods of interpolation: linear interpolation, and interpolation with Bernstein polynomials; see respectively appendices A and B. Linear interpolation allows easier experimentation with various grids and local increases of precision near points of interest. The grid we used for the results of this paper with linear interpolants is given in (A.1). Bernstein polynomials require a uniformly-spaced grid but result in smoother functions and are known to converge uniformly upon increasing the resolution of the grid. The values of the function $\rho_n$ at step $n$ of the iteration on the grid are called $\rho_{n,i}$. Here $x_0 = 0$ and $x_N = 1$.
3. Dispersion integral (2.23). The dispersion integral (2.23) reconstructs the real part of $T(x)$ from its imaginary part. Given a grid and an interpolation scheme, this dispersion integral reduces to the following matrix operation on $\rho_n$, just like in [14] (3.5). The reconstruction matrix $B_{ij}$ is computed once and for all, given the discretization and interpolation schemes. The explicit form of this matrix for linear interpolation and for interpolation with Bernstein polynomials can be found in the dedicated appendices.
The coupling $g_n$ in (3.5) is defined by demanding that $\mathrm{Re}\,T_{n,N} = 0$ (recall that $x_N = 1$ corresponds to $s = 4$); this is the discretized version of (2.24). Once this is done, $\mathrm{Re}\,T_n$ can be written in terms of the pole function $P(x)$ in the $x$-variable (3.9).
4. Iteration. The resulting map assumes the final form (3.10). This is the discretized version of (2.22a)-(2.24).
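The principal-value integral against a piecewise-linear interpolant can be evaluated segment by segment in closed form; this is the building block from which such a reconstruction matrix can be assembled. A minimal sketch (grid and names are illustrative, not the paper's conventions):

```python
import numpy as np

# Closed-form principal-value integration against a piecewise-linear
# interpolant, in the spirit of the dispersion reconstruction of step 3.

def pv_linear(y, f, x):
    """PV of the integral of f(t)/(t - x) over [y[0], y[-1]], with f piecewise
    linear on the nodes y; x must not coincide with a grid node."""
    total = 0.0
    for a, b, fa, fb in zip(y[:-1], y[1:], f[:-1], f[1:]):
        c1 = (fb - fa) / (b - a)                  # slope on the segment
        c0 = fa - c1 * a                          # intercept
        # exact: PV int (c0 + c1 t)/(t - x) dt
        #      = c1 (b - a) + (c0 + c1 x) log|(b - x)/(a - x)|
        total += c1 * (b - a) + (c0 + c1 * x) * np.log(abs((b - x) / (a - x)))
    return total

y = np.linspace(0.0, 1.0, 101)
x = 0.305                                          # strictly between two nodes
# f = 1: PV integral is log((1 - x)/x), recovered to machine precision
assert abs(pv_linear(y, np.ones_like(y), x) - np.log((1 - x) / x)) < 1e-10
```

Applying `pv_linear` to each hat basis function of the grid yields, row by row, a matrix playing the role of $B_{ij}$ in (3.5).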
With the explicit form of the map (3.10) at hand, we can apply and implement numerically standard theorems about the convergence of such maps. In finite dimensions, the iteration of a map $\vec\rho \to \Phi(\vec\rho)$ converges locally to a fixed point $\vec\rho^* = \Phi(\vec\rho^*)$ if the largest eigenvalue (in modulus) of the Jacobian $\Phi'$ at the fixed point, also called the spectral radius, is strictly smaller than one. This result is standard and easily proven: upon iteration, in a small neighborhood of the fixed point, any large eigenvalue will drive the iteration away, while small eigenvalues will make it converge. 9 In our case, the Jacobian of the map can be computed explicitly and is given by (3.11), where $i$ is not summed. The eigenvalues of the Jacobian are easily computed numerically, and we verify very accurately that, when the parameters are such that the spectral radius of the iteration is bigger than one, the algorithm diverges, while when it converges it does so at the exponential rate set by the largest eigenvalue. 10 We now present our results, starting with a few examples of iterations.
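The local convergence criterion can be checked on a toy finite-dimensional map with a known fixed point: the iteration decays precisely at the exponential rate set by the spectral radius of the Jacobian. A sketch:

```python
import numpy as np

# Toy check of the convergence criterion: iterate rho -> Phi(rho) near a
# fixed point at the origin and compare the decay rate with the spectral
# radius of the Jacobian there.

A = np.array([[0.5, 0.3],
              [0.0, 0.8]])                 # Jacobian at the fixed point

def Phi(rho):
    return A @ rho + 0.1 * rho**2          # small nonlinearity, fixed point at 0

spectral_radius = max(abs(np.linalg.eigvals(A)))
assert spectral_radius < 1                 # local convergence guaranteed

rho = np.array([0.2, -0.1])
errs = []
for _ in range(60):
    rho = Phi(rho)
    errs.append(np.max(np.abs(rho)))
rate = errs[-1] / errs[-2]
assert abs(rate - spectral_radius) < 1e-3  # asymptotic decay rate = spectral radius
```

With an eigenvalue of $A$ larger than one, the same loop would instead diverge exponentially, which is exactly the diagnostic used for the full map (3.10).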

Convergent amplitudes
Here we present examples of amplitudes for which the iteration converges. In terms of the analytical solution (2.10), we consider mainly two cases: no bound state (and no CDD zeros), and one bound state (and one CDD zero). We have also determined the convergence criterion of the algorithm in these cases, which we present below.

No bound states
We start with the case where we have no bound state below the two-particle threshold $s = 4$. This is a small subset of the amplitudes we study, one for which $S_{\rm elastic} = 1$ in (2.10). For these amplitudes, convergence is easy to understand, and the algorithm simply stops converging when $f_i > 1$, because there are no such S-matrices, as is evident from eq. (2.5). Note also that we need to have some inelasticity, otherwise the algorithm simply converges to zero. Besides, since we have no bound state, we need to turn temporarily to the other strategy described around (2.21), and adjust the constant at infinity at each step of the iteration so as to cancel the real part at $s = 4$.
In figure 1, we display the results of the iteration for the following inelasticity, parametrized by $H$ (see (2.7) for the relation between $v_i$ and $f_i$), starting from an arbitrary initial point. The parameter $H > 0$ reduces the space of inelastic inputs to one dimension. Inelasticity starts at $x = 1/4$, which corresponds to the four-particle threshold.
It is remarkable to observe that the speed of convergence is exactly given by the maximum eigenvalue of the Jacobian. This can be seen on figure 2.
Convergence is lost when $v_i$ becomes so large that (2.5) cannot be satisfied. In our parametrization, this corresponds to $H = 197$. Near $H = 197$, convergence is no longer exponential, and eventually gives way to divergence, as expected.

Figure 2: Convergence of the algorithm is dictated by the spectral radius $\xi$ of the Jacobian. The horizontal axis labels the number of iterations; $\|\cdot\|_\infty$ represents the maximum of the elements of the discretized S-matrix, seen as a vector of its values.

A rather wide class of starting points can be used. We tried a variety of functions, but we did not play the game of trying to characterize this space exactly, since the result of the iteration is unique and determined by (2.10); therefore there cannot be any subtleties in the final answer. In particular, one can start from a null initial amplitude $\rho_0(x) = 0$.

Examples with one bound state: with and without inelasticity
We now introduce one bound state at mass $s = m_p^2$ ($x = 4/m_p^2$) into the analysis. From now on, we use the version of the algorithm where we keep $c_\infty$ fixed and update the coupling; therefore we have a sequence $g_n$ as well as a sequence $\rho_n(x)$.
We provide two examples to illustrate the dynamics of the algorithm, with and without inelasticity. We then discuss the function space in which convergence is achieved below in section 3.2.
No inelasticity. In the case without inelasticity, the input of the algorithm is solely the mass of the bound state and the constant at infinity. Given this data, and for a wide class of input functions, the fixed-point iteration systematically produced amplitudes with one zero. An example thereof is provided in figure 3.
The algorithm converges to an amplitude with one zero, whose location is exactly the one fixed by (2.12); in this particular example we found $m_z^2 \approx 3.594$. Figure 3 shows the perfect agreement between this solution and the result of the iteration. With the given input data, other options would have included 2 or more CDD zeros whose contributions to the constant at infinity add up according to (2.25). As alluded to above, many such solutions are possible. However, we also observed that for those solutions the spectral radius is always larger than one; hence, the version of the algorithm considered in this section does not converge to them. We discuss this in more detail below.
With inelasticity. In the presence of inelasticity, we similarly found that the iteration converges to the solution (2.10) with only one zero. We show an example of the iteration, for the sake of the reader's curiosity, in fig. 4 below. As we vary some of the parameters (inelasticity when present, or the constant at infinity), convergence is lost, and this draws the boundary of a domain of convergence, which we describe at length in section 3.2.2 below.

More pole-zero pairs
Implementing the algorithm for more bound states presents no technical difficulty and can be done easily. If we have $n$ bound states, the input data of the algorithm, in the spirit of [14], will be the locations of the poles $m_{p_1}, \ldots, m_{p_n}$, the couplings of all poles except the first one, $g_2, \ldots, g_n$, and the constant at infinity $c_\infty$. The simplest convergent possibility in this case is the amplitude with $n$ CDD zeros. The problem is therefore to determine the coupling $g_1$ and the locations of the zeros $m_{z_1}, \ldots, m_{z_n}$. Each coupling $g_i$ at given values of the $m_{p_j}$'s furnishes one constraint on the $m_{z_j}$'s; therefore we have $n - 1$ constraints coming from the couplings and an extra constraint from the constant at infinity, which allows us to determine exactly the end-point of the algorithm in advance.
We implemented this algorithm for two poles (linear interpolant) and observed perfect convergence to the two-zero solutions for a variety of initial points, inelasticities, constants at infinity, and couplings $g_2$, which leads us to conjecture that the systematics is exactly identical. Only the shape of the domain in which convergence is achieved changes. We have not performed an exhaustive study of this shape in the case with more poles.

Spectral radius and convergence of the algorithm
Let us now turn to the analysis of the region in parameter space in which the algorithm converges.

Odd CDD sector divergence
In the convergent examples above, the total number of CDD factors (poles and zeros), $N_{\rm tot}$, was always even. It is easy to understand why we never converged to solutions with $N_{\rm tot}$ odd: this is related to the near-threshold behavior (2.13). When $N_{\rm tot}$ is odd, the amplitude has the near-threshold behavior (3.13). With this near-threshold behaviour, it is straightforward to show that there exists at least one eigenvector of the Jacobian with eigenvalue close to 2, and therefore the iterative map is divergent, as explained after the presentation of the fixed-point algorithm, around (3.11).
To demonstrate this explicitly, let us consider the behavior of the Jacobian (3.11) near $x = 1$. From the behaviors (3.13) and (3.11) it immediately follows that (3.14) holds, which implies that $(J^T)_{ij}\,\delta_{j,N-1} \simeq 2\,\delta_{i,N-1}$, and therefore $J$ has an eigenvalue close to 2. As a result, even if we start exactly on the solution, finite grid precision induces a small deviation from the actual solution, which grows exponentially and eventually causes the map to diverge after a finite number of iterations (empirically, 5-10). When we compute the eigenvalues of the iterative map explicitly, we discover an intricate pattern of complex eigenvalues, but confirm the presence of eigenvalues close to 2. We show one example in figure 5. The eigenvalues are intriguingly contained in a circle of radius one centered around one, but unfortunately the criterion for convergence is that they should lie within the unit disk centered around zero. This might be related to the eigenvalues of Newton's method, but in that case the criterion for convergence is different: as we explain below, the divergence then comes from the fact that the Jacobian becomes singular.
Within the CDD-even class, we have found convergence only when the number of zeros equals the number of poles, and within a certain range of parameters among them, which we discuss in the subsection below. For amplitudes with an even number of CDD factors but different numbers of poles and zeros, there seems to be no region where the iteration converges. We leave a more systematic understanding of this for future work.

Convergence of the 1-pole 1-zero solution. Relation to the optimal coupling analysis
In this section we present one of the main results of the paper, summarized in figure 6. We analyzed the shape of the convergence space for the one-zero one-pole amplitudes. Remarkably, it happens to be given by translates of the optimal coupling curve of [14]. We also found that, both with linear and Bernstein interpolants, the full space of one-zero one-pole amplitudes cannot be covered: the algorithm stops converging at a small but nonzero distance away from the boundary of the full space (in section 4 we show how Newton's method allowed us to extend this domain).
Taking for granted that our algorithm, when fed with one pole as in (2.23), converges only to a 1-pole solution (see sec. 3.2.3 below), the problem of determining the shape of the convergence space becomes effectively one-dimensional. Indeed, the inelastic contribution is fixed entirely by (2.12), and the only remaining unknown is the position of the zero, or equivalently the coupling, i.e. the residue of the pole (the two are related, hence determining one fixes the other). As there is only one unknown, we can choose to represent it as the location of the zero, the value of the coupling, or the value of the constant at infinity, since all these quantities are related to each other. To allow a more direct comparison with [14], we choose to represent the coupling, or rather the logarithm of the coupling squared.
To better understand the results, we first need to describe two phenomena. Firstly, take a 1-pole 1-zero amplitude. At fixed m_p (mass of the bound state), the coupling increases as the CDD zero moves towards 4, and becomes maximal when the zero sits exactly at 4: that is the optimal coupling amplitude [14,22] (black curve in figure 6). This fact is easy to check by extracting explicitly the residue of the corresponding amplitude. Furthermore, as the constant at infinity is the input of our algorithm and is related to the position of the zero by (2.12) or (3.16), increasing the constant at infinity can also be seen to push the zero towards 4 and, correspondingly, to increase the coupling, until it reaches its maximal value. Relatedly, when decreasing the constant at infinity, the zero approaches 2 and eventually moves off into the complex plane, with Re(m_z^2) = 2 and its conjugate partner at m_z̄^2 = 4 − m_z^2, so that the constant at infinity remains real but can then be arbitrarily negative while keeping ρ(x = 0) = 0. (Footnote 11: A complex constant at infinity implies that the imaginary part of the amplitude does not decay to zero, and hence the dispersion integral needs to be written with subtractions. We do not consider such amplitudes in this work.) Secondly, for a fixed constant at infinity, we found (empirically, at first) that increasing the magnitude of the inelasticity increases the coupling. As can be read off the explicit solution (2.10), the coupling of the solution is given by (3.15) and the position of the zero is constrained by (3.16). In (3.16), we see that increasing the inelasticity shifts the zero towards 4, hence increases the factor g_elastic above. To determine whether g increases overall, one needs to look at the exponential. It turns out that for most inelasticities the exponent is numerically very small because of the fast 1/s^2 decay of the integrand, so the integral is hardly different from zero. Therefore, from this perspective, increasing the inelasticity increases the coupling.
We can now describe our results. Our first result is that we mapped the domain of the convergence space without inelasticity by applying the process described above: at a fixed value of m_p, for which values of the constant at infinity does the map converge? As expected, we found that the constant at infinity can be arbitrarily negative, which simply pushes the zero into the complex plane with real part 2. We find that the actual limit of convergence is reached for positive values of the constant at infinity, which are such that the coupling of the resulting amplitude is a certain fraction of the optimal coupling, g^2/g^2_max ∼ c. (3.17) With linear interpolants, we found that fraction to be of order 0.5, while with Bernstein polynomials we had a slight improvement and observed a factor of 0.55. These are the thin blue and yellow curves in figure 6.
We also explored the convergence space in the presence of inelasticity; this is our second result. Surprisingly, we found that inelasticity does not play a role: whatever function we used, we could only reach the same maximal coupling as without inelasticity. Later, when exploring Newton's method in the next section, we will again find that, remarkably, the convergence region is well characterized by the criterion that g^2/g^2_max is less than some number. Furthermore, we have not been able to improve these bounds by refining our grid, and we believe that the bound is strict: in two dimensions the fixed-point iteration should not cover all of parameter space. In the next section we will enlarge the region of convergence using Newton's method.
To produce our results, we applied the bisection method by hand for a given grid of mass values m_p: in this way we could find an optimal c_∞ at which the algorithm is at the edge of no longer converging. Our criterion was that, after a few hundred iterations, the S-matrix should be of modulus one to 10^−3 accuracy over the last 10 iterations. We came up with this criterion by empirical search. It turned out to correspond pretty accurately to the point where the spectral radius becomes equal to 1 (within 10^−2).
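The bisection in c_∞ described above can be sketched as follows. Here `converges` is a hypothetical stand-in predicate (in practice it would run a few hundred iterations of the map and apply the modulus-one criterion); the value 0.55 in the demo is arbitrary.

```python
def find_edge(converges, lo, hi, tol=1e-6):
    """Bisection for the largest parameter value at which `converges`
    still holds; assumes converges(lo) is True and converges(hi) is False."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if converges(mid):
            lo = mid      # still converging: move the lower bracket up
        else:
            hi = mid      # diverging: move the upper bracket down
    return 0.5 * (lo + hi)

# Stand-in predicate: pretend the map converges for c_inf < 0.55
edge = find_edge(lambda c: c < 0.55, 0.0, 1.0)
print(round(edge, 4))  # ≈ 0.55
```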

Non-convergence of 1-pole, n > 1-zero amplitudes
We now look at amplitudes with one pole and more zeros and ask whether our fixed-point iteration can converge to them. The first observation is that one can always take an amplitude with n zeros and send n − 1 of them towards 4, in such a way that, up to our numerical precision, the amplitude becomes indistinguishable from the 1-zero amplitude. This is the sort of CDD ambiguity about which we have nothing to say. For instance, setting the zero of the 1-pole 1-zero problem to its maximum value, we observe that, within our numerical accuracy, a second zero would have to be located at 3.99999, i.e. 10^−5 away from 4, in order to have a spectral radius below one and hence possible convergence. At this point, the S-matrices look completely identical, which relates to the question of the numerical accuracy of the algorithm.
Excluding this pathological behaviour, we have scanned extensively the bulk of parameter space of the 1-pole n-zero amplitude (n even and odd) up to n = 5.
We first designed a grid of n-tuples (m_p, m_z1, ..., m_zn) of 10 evenly spaced, distinct elements within [2.1, 3.9] (11 for n = 5). We computed systematically the spectral radius of the Jacobian for all the elements (for n = 5, there are more than 400 tuples) and found all spectral radii to be equal to two, to within 10^−2 accuracy for linear interpolants. For n = 2 zeros we also computed systematically the spectral radius allowing for identical elements (which include zeros of higher order). This represents around 1000 eigenvalue computations, all of which ended up being equal to 2 to the same accuracy.
Note that while we presented an argument around (3.14) for the existence of such an eigenvalue for an odd total number of CDD factors, we do not have such an argument for N_tot even. It would be interesting to understand the origin of this fact.
Given the observed similarity of the eigenvalue patterns between even and odd n, and knowing that for n odd the spectral radius is at least two, we are led to conjecture that, for all n, the spectral radius of the Jacobian of the map for the 1-pole n-zero amplitude is exactly 2. It would be very interesting to have a proof of this fact.

Newton's method
In the previous section we explored fixed-point mapping (2.22a) to solve unitarity and crossing. We observed convergence of the algorithm for a wide class of amplitudes and within some range of parameters (positions of poles and zeros, constant at infinity). We gave evidence that the fixed point iteration diverges at a finite distance away from the boundary of amplitude space, and explained that improving the grid did not decrease this distance.
The aim of this section is to present results obtained using a different iteration scheme, with better convergence properties: Newton's method (2.22b).
In one dimension, Newton's method aims at finding a root of a function f(x) via the sequence x_{n+1} = x_n − f(x_n)/f'(x_n). For a function continuously differentiable near a simple root, there always exists a neighborhood of the root in which Newton's method converges to the root. In this case, the convergence is "quadratic". 12 The convergence of Newton's method is a rich mathematical domain. One famous result that perhaps best illustrates this point is the fractal structure of the basins of attraction of the method when determining complex roots of polynomials; see appendix C for details. In this section we will observe similar patterns, which we relate to CDD ambiguities and discuss in section 4.2.
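The one-dimensional version of the method can be sketched in a few lines; the example root f(x) = x^2 − 2 is ours, chosen only to exhibit the quadratic convergence.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # quadratic convergence: the step size
            break             # roughly squares at each iteration
    return x

# Root of f(x) = x^2 - 2, i.e. sqrt(2), starting from x0 = 1
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)  # ≈ 1.4142135623730951
```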
The point that interests us here is that Newton's method can be used to solve a fixed-point equation x* = g(x*) by searching for the roots of f = id − g, where id is the identity function, id(x) = x. Atkinson suggested, see e.g. [7], that when the iteration method described in the previous section reaches the boundary of convergence, one could extend convergence by using Newton's method. The conditions for convergence of Newton's method are indeed better than those of the fixed-point iteration, because the gradient helps to direct the iteration.
The steps of the iteration as we implemented it are as follows. Steps 1-3, discretization, interpolation, and the dispersion integral, are implemented in the same way as before. At step 4 the algorithm changes, as follows: 4'. Newton's method iteration. The map (2.22b) is implemented as in eq. (4.2) (suppressing the i, j indices), where Ψ ≡ id − Φ, id is the identity operator, Φ is the iterative map defined in the previous section in (3.10), and J^Ψ_{n,ij} is the Jacobian of the map Ψ. In effect, this way of implementing Newton's method is equivalent to the form whose continuous version was eq. (2.18), but eq. (4.2) has an immense advantage in terms of numerical cost and stability. 13 The Jacobian is defined explicitly in terms of J, which was defined in (3.11). In Mathematica, we use LinearSolve[J_n, b], where b = J_n ρ_n − Ψ(ρ_n), which performs very well in terms of speed for the grids of size ∼1000 that we worked with (less than a second, where matrix inversion would take on the order of minutes).
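The step 4' above, solving a linear system instead of inverting the Jacobian, can be sketched as follows. The toy map Φ(ρ) = cos(ρ) is ours, picked only because its fixed point (the Dottie number) is easy to verify; in the actual algorithm Φ would be the discretized map (3.10) and the solve plays the role of Mathematica's LinearSolve.

```python
import numpy as np

def newton_fixed_point(Phi, jac_Phi, rho0, tol=1e-12, max_iter=50):
    """Solve rho = Phi(rho) by Newton's method on Psi = id - Phi.

    Each step solves J_Psi * delta = Psi(rho) with a linear solve,
    never forming the inverse Jacobian explicitly.
    """
    rho = np.asarray(rho0, dtype=float)
    I = np.eye(rho.size)
    for _ in range(max_iter):
        psi = rho - Phi(rho)
        J_psi = I - jac_Phi(rho)            # Jacobian of Psi = id - Phi
        delta = np.linalg.solve(J_psi, psi)  # analogue of LinearSolve[J, b]
        rho = rho - delta
        if np.linalg.norm(delta) < tol:
            break
    return rho

# Toy map: Phi(rho) = cos(rho) componentwise; its fixed point is the
# Dottie number in every component.
Phi = np.cos
jac_Phi = lambda rho: np.diag(-np.sin(rho))
rho_star = newton_fixed_point(Phi, jac_Phi, np.full(4, 0.5))
print(rho_star)  # each entry ≈ 0.7390851332
```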
We present now the results we obtained with Newton's method. Firstly we describe the extension of the convergence range in the one-pole one-zero sector, see figure 7. Next we describe the phenomenon of CDD ambiguity, which arises because Newton's method allows convergence in a range where many solutions are possible given the input data, and the end-point of the iteration is fixed by the starting point. We find that the dependence on the starting point hints at a standard fractal structure of basins of attraction, which we describe in figure 11.

Improved convergence for the 1-pole 1-zero solution
We first consider the case discussed in detail in section 3.2.2. The S-matrix of interest has one CDD pole and one CDD zero. When doing the fixed-point iteration, we observed that the algorithm stopped converging as the CDD zero approached the two-particle threshold, and convergence was lost when an eigenvalue of the map became bigger than one. Let us now show the results of our analysis using Newton's method.
We found that Newton's method can be used to significantly extend the region of convergence of the fixed-point mapping. This is an important result from two points of view. From the perspective of Atkinson's program, it remained unknown whether the convergence radius of the fixed-point iteration could be extended at all. Our successful implementation of Newton's method shows that it can. Secondly, from the point of view of our bootstrapping procedure, it is important to be able to describe a wider class of amplitudes.
We depict our extended convergence region by plotting again the maximal coupling as a function of the mass of the bound state in figure 7. With both linear and Bernstein interpolants, we could reach g^2/g^2_max ∼ 0.75, a significant improvement over the g^2/g^2_max ∼ 0.5−0.55 we could achieve using the fixed-point iteration. Interestingly, linear interpolation gave better results than Bernstein's this time.
In the bulk of the convergence region, we observe (see figure 8) that the convergence speed of Newton's method is extremely fast, as expected from the fact that it typically converges quadratically: we converge to 10^−60 accuracy in 5-10 iterations in the bulk of the convergence range. This counterbalances the time lost in computing the Jacobian.

Figure 9: Distribution of eigenvalues of J^Ψ for a purely elastic scattering amplitude consisting of a single CDD pole and a CDD zero. We plot it close to the boundary of convergence, here m_p^2 = 2.4 and m_z^2 = 3.98 with linear interpolation (left), and m_z^2 = 3.97 with Bernstein polynomial interpolation (right). As we move the location of the zero closer to 4, the support of the distribution on the right panel moves to negative values for interpolation with Bernstein polynomials; it has a more obscure pattern for linear interpolants. Past the limit of convergence, the amplitude functions look more and more jagged, with an instability growing near x = 0.9, as in figure 10.
A natural question one can then ask is the following: as we improve the resolution of the grid, can we fill in the gray region in figure 7? To our surprise, given the efficiency of Newton's method, for the several grid resolutions that we used we could not improve the results in a clear-cut way. This leads us to conclude that there is a set of amplitudes which the default implementation of Newton's method is not able to capture. We motivate this intuition as follows. In analyzing how convergence is lost for one-zero one-pole amplitudes, we found that, as m_z^2 approaches 4 (i.e. as we increase the coupling), the Jacobian J^Ψ becomes singular and develops a zero eigenvalue. We plot the distribution of eigenvalues of J^Ψ close to the boundary of convergence in figure 9.
We observed that the distribution of eigenvalues does not change qualitatively as we refine the grid: it simply becomes denser along the lines that are clearly visible in both plots of figure 9. In particular, for Bernstein interpolants the picture is very clear: the eigenvalues populate a continuum on the real line, and once the minimal eigenvalue has passed 0, the Jacobian becomes uniformly singular. The situation is a little more complex for linear interpolants, where one could think that if a line of eigenvalues crosses the origin, the algorithm might converge again. We observed that this is not the case: past this point, the algorithm either diverges or yields pathological results.
We therefore believe that, in order to fill the gray region that remains in figure 7, a more sophisticated method of solving unitarity is needed. It would be interesting to develop such a method, but we do not pursue this further in the present paper. For illustrative purposes, in figure 10 we display an example of non-convergent iteration past the convergence limit for linear interpolants. The functions exhibit an instability growing near x = 1 and look more and more jagged or dented, while on the coupling plot one can see oscillations appearing.

Accessing different CDD sectors and a CDD fractal
Given that we are now in possession of an algorithm that converges for a much broader set of amplitudes, we can return to the question considered in section 3.1. Given input data consisting of the constant at infinity and the positions of the poles, how does the algorithm determine the number of CDD zeros in the final solution? Indeed, many combinations of CDD-zero factors can produce the same constant at infinity, and it cannot be known in advance what the algorithm will converge to. This is one aspect of the CDD ambiguity.
The answer to this question is actually simple: it depends on the starting point. We have been able to observe this very explicitly. By choosing generic initial conditions we found that the algorithm converges to the one pole and one zero solution. By tuning the initial data close to another many-zero solution, we could reach the solutions with multiple zeros as we demonstrate in figure 11.
In this figure, we report the results of the following experiment. We chose a particular CDD amplitude with one pole at m_p^2 = 2.1 and one zero at m_z^2 = 2.2. We further chose a CDD amplitude with the same pole and three zeros at m_z1^2 = 3.8, m_z2^2 = 3.9, and the last one, m_z3^2, fixed by requiring that the constants at infinity of both amplitudes be identical. Numerically, this gives m_z3^2 ≈ 3.938. We then considered a series of starting points of the iteration algorithm, Im T_0(x) = f_λ(x), that interpolate between the one-zero amplitude and the three-zero amplitude, as defined in eq. (4.6).

Figure 11: CDD fractal. (a) Position of the zero(s) of the amplitude as the initial point, determined by λ in eq. (4.6), is varied. For λ < 1/2 we obtain the one-zero amplitude, and for λ ≥ 1/2 we find three zeros which move in the complex plane. (b) The real part of the real zero, Re(m_z1), as a function of λ, on two superposed x-axes (ticks on top and bottom). Panels (c) and (d) are projections of the three-dimensional plot (a). These plots illustrate the roughness of the curve for λ ≥ 1/2, at various scales, around λ = 0.7 (red) and λ = 0.851 (blue). This is indicative of some chaotic fractal behaviour, hence the name: CDD fractal.
We asked the following question: how does the result of Newton's method iterations depend on λ?
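The family of starting points can be sketched as follows. Since eq. (4.6) is not reproduced above, the linear interpolation form is an assumption on our part, and f1, f3 are hypothetical stand-ins for Im T_0 of the one-zero and three-zero amplitudes (the true inputs would come from the explicit CDD solution (2.10)).

```python
import numpy as np

def make_interpolated_start(f1, f3):
    """Family of starting points f_lambda between two amplitudes.
    Linear form assumed; eq. (4.6) is not reproduced in the text."""
    def f_lambda(lam, x):
        return (1.0 - lam) * f1(x) + lam * f3(x)
    return f_lambda

# Hypothetical stand-ins for Im T_0 of the two amplitudes:
f1 = lambda x: x * (1.0 - x)
f3 = lambda x: x * (1.0 - x) ** 3

f_lam = make_interpolated_start(f1, f3)
lam_grid = np.arange(0.0, 1.0001, 0.05)       # the 0.05-spaced scan of the text
x = np.linspace(0.0, 1.0, 201)
starts = [f_lam(lam, x) for lam in lam_grid]  # each is fed to Newton's method
print(len(starts), starts[0].shape)
```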
In the first part of our analysis, we studied a uniform grid in λ-space with increments of 0.05. We discovered that for λ < 1/2, we systematically converge to the one-zero solution. We probed more values in this range, in particular close to λ ≈ 1/2, and confirmed this behaviour. With linear interpolants, the first signs of different amplitudes arise very close to λ = 1/2: for instance, at λ = 0.499992 we find a one-zero solution, λ = 0.499993 yields a 7-zero solution, while λ = 0.499994 gives just a 3-zero solution. With Bernstein interpolants, the transition happens just above 0.5, but the rough behaviour is identical. For the other evenly spaced values of λ ≥ 1/2, we observed that this roughness persists. The amplitude converges most of the time to a 3-zero amplitude, and in figure 11(a) we represent the positions of its zeros in the complex plane as a function of λ in three dimensions. The position of the real zero on the blue-shaded plane, in particular, is visibly erratic in this plot. Although we do not show it, the complex zeros undergo a similarly erratic movement in the complex plane.
The abrupt change around 1/2 and the appearance of very different amplitudes within a small range of λ pushed us to repeat the experiment near other points λ > 1/2 with a greater resolution (we did this with linear interpolants). We chose λ = 0.7 and λ = 0.85 for no special reason. We observed a similar roughness at various scales, as we illustrate in figure 11(b), where we represent only the real part of the real zero. This figure shows the two series of points, λ = 0.7 and λ = 0.85, in red and blue respectively, whose x-graduations are read on the top and bottom axes, respectively. A zoomed version of a transition near 0.8510005 is shown as a yellow-background subgraph. At other selected points we also found amplitudes with more zeros (5). Overall, this is strongly indicative of chaotic fractal behaviour: within flat basins of attraction of various sizes, sudden transitions occur. It is very similar to the fractal structures classically found when using Newton's method to find roots of one-dimensional functions, see appendix C. It is therefore not surprising to find a similar behaviour here, in the context of the CDD ambiguity, which one could refer to as a "CDD fractal". We did not try to characterize this phenomenon in more detail. It would be interesting to see if a similar phenomenon occurs for higher-dimensional amplitudes as well.
As we take the continuum limit, several scenarios are possible. It might be that the sizes of all of the flat basins of attraction shrink to zero, giving a nowhere-continuous curve. It might also be that finite-size basins of attraction remain at all scales. This is an interesting open question.
Finally, using Newton's method, we were able to obtain solutions with odd N_tot. This is remarkable because it was not possible using the fixed-point iteration. This fact also excludes the possibility that the square-root behaviour near s = 4 presents a problem for the algorithm, because all amplitudes with odd N_tot exhibit the same near-threshold behavior (3.13), and not just the pure CDD-pole amplitude for which Newton's method does not converge.

Discussion of the different strategies
We used two different iterative strategies to solve equation (1.1) together with analyticity and crossing: fixed-point method (2.22a), and Newton's method (2.22b). We further used two different interpolating strategies for each of the algorithms: linear interpolation described in appendix A, and interpolation with Bernstein polynomials described in appendix B. Here we review the pros and cons of various strategies.

Fixed-point vs Newton
To remind the reader, the iterative map Φ was defined in (3.10), and Newton's method was applied to find the roots of Ψ ≡ id − Φ. While both methods lead to the same result, namely a solution to unitarity and crossing, the domains in which the algorithms converge are not identical. We could partially characterize the domains of convergence in both cases. For the fixed-point iteration, we related convergence of the algorithm to the spectral radius (the maximum eigenvalue in modulus) of the Jacobian of Φ, (3.11): the fixed-point iteration converges whenever the spectral radius at the fixed point is smaller than one. For Newton's method, convergence is lost whenever the Jacobian of Ψ becomes singular. There exist adaptations of Newton's method which can deal with singular Jacobians. We did not attempt this: since the problem of unitarity and crossing in two dimensions is already solved theoretically, improving this aspect did not seem pressing.
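The difference between the two convergence criteria can be seen already in one dimension. The logistic map below is our own toy example: its fixed point has |Φ'| > 1, so the fixed-point iteration leaves it, while Newton's method on Ψ = id − Φ still converges.

```python
def fixed_point_iterate(g, x0, n=60):
    x = x0
    for _ in range(n):
        x = g(x)
    return x

def newton_root(f, fprime, x0, n=60):
    x = x0
    for _ in range(n):
        x = x - f(x) / fprime(x)
    return x

# Logistic map g(x) = a x (1 - x) with a = 3.2: the fixed point is
# x* = 1 - 1/a = 0.6875, but |g'(x*)| = |2 - a| = 1.2 > 1, so plain
# fixed-point iteration escapes it (it settles on a 2-cycle), while
# Newton's method on Psi(x) = x - g(x) converges to x*.
a = 3.2
g = lambda x: a * x * (1.0 - x)
gp = lambda x: a * (1.0 - 2.0 * x)

x_fp = fixed_point_iterate(g, 0.68)
x_newton = newton_root(lambda x: x - g(x), lambda x: 1.0 - gp(x), 0.68)
print(x_newton)            # ≈ 0.6875 = 1 - 1/a
print(abs(x_fp - 0.6875))  # O(0.1): the fixed-point iterate has left x*
```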
Newton's method is also slightly more complex to implement, as one needs to compute the Jacobian of the map at every step. Since the Jacobian was easily computable in our case, we could implement the method efficiently. If the Jacobian is not available or hard to compute, one can use so-called quasi-Newton methods. Note also that the time spent computing the Jacobian is traded off against the much faster convergence of Newton's method.

Results
We observed that the implementation of Newton's method vastly increased the range of convergence of the fixed-point iterative algorithm. It extended convergence in two directions. Firstly, it allowed us to describe more 1-zero 1-pole amplitudes, see figure 7. Secondly, it allowed us to describe amplitudes with an arbitrary number of CDD zeros. In the latter case, we observed a rich pattern of convergence to various solutions as a function of the starting point, with a fractal structure, see figure 11.
Newton's method also converged much faster than the fixed-point iteration, due to the help of the gradient. Compare for instance figures 2 and 8. From Atkinson's program perspective, these results are truly new, as it had not been shown, even theoretically, that one could go beyond the convergence radius of the fixed-point iteration. Our results show that it can be done.

Linear interpolation vs Bernstein polynomials
We used two different types of interpolants: linear, and polynomial (Bernstein). For both, we were able to devise grids that yield results with good accuracy, within a certain range of parameters, with laptop computing power in Mathematica. The finite grid-size results we obtained were very good, and in principle we could converge to an even better accuracy by increasing the size of the grid (see below).
The advantage of the linear interpolants is that we were able to place more points near x = 1 (s → 4+). It was however a delicate task to know where specifically to add points to improve convergence, while with Bernstein polynomials it is more straightforward, as they have a uniform spacing. With our tweaked, 200-point grid in the linear case we could converge to 10^−3 − 10^−4 accuracy, with all the error coming from the trapezoidal rule in the dispersion integral. With Bernstein polynomials, we obtained clear uniform convergence as we increased the grid size.
In both iteration schemes, fixed-point and Newton's method, both interpolation strategies exhibit, near the edge of convergence, jagged solutions with stable O(1) oscillations similar to those displayed in figure 10. Eventually, the oscillations become so large that the S-matrix no longer satisfies unitarity. While spikes could have been expected with linear interpolants, which have no notion of smoothness, this is more surprising for the Bernstein polynomials, which one might have expected to be smoother.
Finally, in terms of performance, linear interpolants performed better for the Newton method. Bernstein polynomials performed better with fixed-point iteration.

Extension to higher dimensions
The most exciting aspect of the method described in the present paper is that it generalizes to higher dimensions. Indeed, an algorithm similar to the one described in section 2.3 can be numerically implemented in four dimensions [20]. 14 This refers both to the fixed-point iterations, as well as to Newton's method.
Technically, the main difference is that higher-dimensional amplitudes are functions of both energy and angle, or, equivalently, s and t, which makes the problem two-dimensional. Upon discretization and interpolation, the problem again reduces to multiplication of pre-computed matrices. Increasing the resolution of the two-dimensional grid is computationally more costly; however, we believe it is feasible on a cluster, since most of the computations can be parallelized. The implications of elastic unitarity are much richer in higher dimensions, and the method described in this paper is, to the best of our knowledge, the only available tool to implement the correct structure of the support of the double spectral density and to satisfy the Mandelstam elastic unitarity equation. In particular, it would be very interesting to construct amplitude functions that both satisfy elastic unitarity and maximize the coupling. The results of [15] suggest that the maximal coupling amplitudes have negligible inelasticity; therefore we expect that such amplitudes are excellent candidates to be approached using the methods described in the present paper. We hope to report on this in the near future.
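The reduction of the dispersion integral to pre-computed matrices mentioned above can be sketched in one dimension. The kernel normalization and the crude principal-value cutoff below are our own simplifications for illustration, not the paper's actual discretization.

```python
import numpy as np

def dispersion_matrix(x, eps=1e-3):
    """Precompute a trapezoid-style kernel K so that a dispersion
    integral of the schematic form (1/pi) P.V. int dy rho(y)/(y - x_i)
    becomes the matrix-vector product K @ rho. The principal value is
    regulated crudely by dropping points within eps of the singularity."""
    n = len(x)
    w = np.gradient(x)                  # quadrature weights on the grid
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if abs(x[j] - x[i]) > eps:  # crude principal-value cutoff
                K[i, j] = w[j] / (x[j] - x[i]) / np.pi
    return K

x = np.linspace(0.0, 1.0, 201)
K = dispersion_matrix(x)                # computed once, reused every step
rho = np.sqrt(1.0 - x)                  # a sample spectral density
reT = K @ rho                           # one matrix product per iteration
print(K.shape, reT.shape)
```

The point of this structure is that K is computed once, so each iteration of the map costs only a matrix-vector product; in two dimensions K would carry four indices but the same logic applies.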

Relation to perturbation theory
It is instructive to compare iterations discussed in this paper to the standard perturbation theory. The diagrammatic representation of the iterative map is given in figure 12. The black dot represents inelastic processes, namely it has support only in the multi-particle region, and is kept fixed during the iteration process. This is in sharp contrast with the usual perturbation theory where the number of new multi-particle diagrams grows factorially, see e.g. [27,28] and [29] for a more recent discussion, and as a result the perturbation theory is typically divergent (in two dimensions, integrable theories are notable exceptions [1,30]).
In the iterative solution to unitarity, we can relate the number of graphs at step n to the number of graphs at step n − 1 according to (5.3). The n-th step of the iteration does not involve n-loop diagrams, but L = 2^n loops. It is easy to see that (5.3) implies that log_2(N_n) ∼ 2^n ∼ L. Therefore, we have e^{c_0 L} graphs at L loops, which is far fewer than the usual L! Feynman graphs at L loops. In particular, denoting the inelasticity coupling by λ, the series becomes Σ_L λ^L e^{c_0 L} and has a finite radius of convergence given by the condition λ e^{c_0} < 1. While in two dimensions the systematics of this re-summation is rather easy to understand and depict, in higher dimensions it becomes more intricate due to the additional s−t crossing (as opposed to only s−u crossing in two dimensions). Rather than summing the one-dimensional chains of bubbles generated by the iterations, with beads of inelasticity v_i, higher-dimensional iterations naturally lead to two-dimensional diagrams. It would be very interesting to study them in detail. Nevertheless, we expect that the basic scaling of the number of diagrams as a function of the number of loops, with the coupling measured by inelasticity, stays the same, and therefore, given that the inelasticity is not too big, the iterations should converge. This picture is consistent with Atkinson's rigorous results on the existence of a nonzero convergence radius in four dimensions [3][4][5][6].
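The scaling log_2(N_n) ∼ 2^n can be checked with a toy count. Since the recursion (5.3) is not reproduced above, the quadratic rule below is an assumed stand-in, motivated only by the fact that each unitarity iteration is quadratic in the previous amplitude.

```python
import math

def graph_counts(steps, N1=2):
    """Assumed stand-in for (5.3): each iteration squares the amplitude,
    so the graph count grows roughly quadratically per step."""
    counts = [N1]
    for _ in range(steps - 1):
        counts.append(counts[-1] ** 2 + counts[-1])
    return counts

counts = graph_counts(6)
for n, N in enumerate(counts, start=1):
    # The ratio log2(N_n) / 2^n tends to a constant c0 / ln 2, i.e.
    # e^{c0 L} graphs at L = 2^n loops: far fewer than L! Feynman graphs.
    print(n, math.log2(N) / 2 ** n)
```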
One immediate observation regarding standard perturbation theory versus unitarity iterations in higher dimensions is that the scattering amplitude at any fixed order in the coupling is not consistent with Gribov's theorem, see e.g. [18] for a recent review, and thus cannot be used as an input for the convergent iterations. In other words, for the iterative algorithm to work, it is crucial to have as input a UV-improved model for inelastic effects. We leave exploration of this aspect for the future.

Figuring out exact bounds of Atkinson's convergence in 2d
In the present paper we analyzed convergence of the iterative map (2.22a)-(2.24) numerically in the discretized setting. It would be interesting to establish convergence criteria of the functional map directly, in the spirit of Atkinson's proofs [3][4][5][6]. We were not able to generalize the proof technique of the original papers in four dimensions to the two-dimensional case.
Atkinson's proof is nicely exemplified by the forward limit of the amplitude iteration in the lectures [7]. It proceeds in two steps. Firstly, one proves the existence of an open set of functions that is mapped to itself under the map, which gives existence of a solution. Then one shows that the map is contracting within this set, which ensures uniqueness of the solution.
The problem we encounter in reproducing the proof in 2d is that the map involves a factor of 1/√(1 − x), while in 4d the square root is in the numerator, √(1 − x). 15 Defining ρ' = Φ(ρ), for the first step we want to compare ||ρ'|| = ||Φ(ρ)|| to ||ρ||, where the norm is the Hölder norm, ||ρ|| = sup_x |ρ(x)| + sup_{x ≠ y} |ρ(x) − ρ(y)| / |x − y|^μ, for some parameter 0 < μ < 1. The factor 1/√(1 − x) in the unitarity condition, however, seems to allow the function ρ' to grow parametrically bigger than ||ρ|| near x = 1. Where Atkinson could find a way out by simply defining the set as the ball of radius b, ||ρ|| < b, we need to specify more data about the function to ensure that ||ρ|| < b implies ||ρ'|| < b.
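On a grid, the Hölder norm above can be estimated directly; this discrete sketch is ours (the continuum norm is what actually enters the proof), and the sample functions are arbitrary test cases.

```python
import numpy as np

def holder_norm(rho, x, mu=0.5):
    """Discrete estimate of the Hoelder norm
       ||rho|| = sup|rho| + sup_{x != y} |rho(x) - rho(y)| / |x - y|^mu
    over all pairs of grid points."""
    sup = np.max(np.abs(rho))
    dx = np.abs(x[:, None] - x[None, :])
    drho = np.abs(rho[:, None] - rho[None, :])
    mask = dx > 0
    seminorm = np.max(drho[mask] / dx[mask] ** mu)
    return sup + seminorm

x = np.linspace(0.0, 1.0, 101)
print(holder_norm(x, x))                       # rho(x) = x on [0,1]: 1 + 1 = 2
print(holder_norm(np.sqrt(1.0 - x), x))        # sqrt stays Hoelder-1/2 bounded
```

Note that the square-root threshold function has a finite Hölder-1/2 norm even though its derivative blows up at x = 1, which is why the Hölder norm, and not a derivative norm, is the natural choice in this context.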
Looking back, we know that the fixed-point iteration diverges for 1-pole 3-zero amplitudes, even though they closely resemble the convergent 1-pole 1-zero amplitudes, both from the perspective of the near-threshold behaviour, eq. (2.13), and the fact that they have similar Hölder norms. Therefore, it is perhaps not surprising that a generic proof à la Atkinson, which assumes nothing but bounded Hölder norms, should fail without further assumptions, for instance on the derivatives of the functions.
It would be interesting to understand this in detail and rigorously establish the space of inelasticities and input parameters that lead to convergent iterations for the continuous map.
Theories with multiple stable particles An obvious direction to generalize the present analysis is to consider theories with several stable particles, e.g. theories with nontrivial flavor symmetry [31][32][33][34]. In this case the analog of the analytic solution (2.10) is not readily available and the numerical techniques developed in the present work might be useful.
Theories with massless particles In theories with massless particles there is no separation in energy between the elastic and inelastic contributions, both starting at $s = 4m^2$. Nevertheless, we can still write the unitarity equation as $\text{Disc}_s\, T_{2\to2} - |T_{2\to2}|^2 = \text{Multi-particle}$. (5.5) Then we can again ask: given a particular form of the inelasticity, can we construct an amplitude that satisfies unitarity and crossing? It would be very interesting to understand whether the iterative techniques of the present paper can be generalized to this case. Clearly, nothing prevents us from repeating the analysis in two dimensions. In higher dimensions the situation is less clear. Such an iterative scheme would be particularly desirable for theories where the inelasticity is expected to display some universality, e.g. in gravitational theories where, at high energies, black holes are produced in the collision [35].
complexity of computing integrals at each step to a matrix action on a vector. This very same step renders the calculations [14] amenable to efficient numerics.
To be complete, we should also mention that we developed a mixed-interpolation strategy, using square-root interpolants, in order to obtain better convergence properties in the $N_\text{tot}$ even sector, whereby, close to $x = 1$, we would resort to the following interpolants.
We did not observe a significant gain in accuracy, so we do not report the related results, which are essentially identical to those obtained with the pure linear interpolant. In particular, the pure-pole CDD factor is still not a convergent fixed point of either algorithm.

B Interpolation with Bernstein polynomials
Here we describe the interpolation method based on Bernstein polynomials. We use it in the one-dimensional case relevant for the present paper, but it can be straightforwardly adapted to higher-dimensional cases as well.
Bernstein polynomials are defined as $b_{\nu,n}(x) = \binom{n}{\nu} x^{\nu} (1-x)^{n-\nu}$. Let us now consider the following ansatz for the imaginary part of the amplitude $\rho(x)$. Imposing decay at infinity, $\rho(0) = 0$, and elastic unitarity at $x = 1$, i.e. $\lim_{x \to 1} \rho(x) = 8\,\delta_{N_\text{tot},\text{even}} \sqrt{1-x}\,(1 + O(1-x))$, we can rewrite this ansatz as follows, where the presence of $\delta_{N_\text{tot},\text{even}}$ signifies the difference in the threshold behavior of the amplitude for $N_\text{tot}$ even versus odd. The precise coefficient 8 is fixed by elastic unitarity, see (2.13).
Alternatively, we can also use (B.4) without the √ 1 − x factor in front. We tried both options in implementing the iterative algorithms. While the performance of both algorithms is very similar for N tot odd, for N tot even (B.4) works much better because it correctly captures the near-threshold behavior of the amplitude.
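As a concrete sketch (the helper `rho_ansatz` and its coefficient convention are illustrative, not the paper's exact implementation), the Bernstein basis and an ansatz of the form $\sqrt{1-x}\,\sum_{\nu \geq 1} c_\nu\, b_{\nu,n}(x)$, with the $\nu = 0$ term dropped so that $\rho(0) = 0$ holds automatically, can be written as:

```python
from math import comb

def bernstein(nu, n, x):
    # Bernstein basis polynomial b_{nu,n}(x) = C(n, nu) * x**nu * (1 - x)**(n - nu)
    return comb(n, nu) * x**nu * (1 - x) ** (n - nu)

def rho_ansatz(coeffs, x):
    # Illustrative ansatz: rho(x) = sqrt(1 - x) * sum_{nu>=1} c_nu * b_{nu,n}(x).
    # Dropping nu = 0 enforces rho(0) = 0; the sqrt(1 - x) prefactor matches
    # the even-N_tot near-threshold behavior discussed above.
    n = len(coeffs)
    return (1 - x) ** 0.5 * sum(
        c * bernstein(nu, n, x) for nu, c in enumerate(coeffs, start=1)
    )
```

A useful sanity check is the partition-of-unity property $\sum_{\nu=0}^{n} b_{\nu,n}(x) = 1$, which makes the basis numerically well behaved on $[0,1]$.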
The dispersion integral of $b_{\nu,n}$ relevant for the iterations evaluates to the ratio $\frac{\Gamma(n+1)\,\Gamma(n-\nu+\frac{3}{2})}{\Gamma(n+\frac{3}{2})\,\Gamma(n-\nu+1)}$. This integral is singular for $\nu = 0$; however, in the present paper we work with functions satisfying $\rho(0) = 0$, so the integral is always effectively finite. Using this result we can immediately write down the matrix $B_{ij}$ that enters the discretized unitarity relation.

C Newton's method: oscillations and fractals
In this short section we describe two properties of Newton's method.

C.1 Oscillations
Sometimes, Newton's method results in oscillations of fixed amplitude. This is not surprising, and a similar phenomenon is easily observed in one dimension. Let $f$ be a function such that $f(x) \sim |x|^{\alpha}$ as $x \to 0$. It is known that the root at $x = 0$ can be reached by the standard Newton method only for $\alpha > 1/2$. For $\alpha = 1/2$, one is stuck in a two-point cycle, whose amplitude is therefore $O(1)$ if the starting point is $O(1)$ away from the root. For $0 < \alpha < 1/2$, the Newton method overshoots the root and eventually diverges.
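The one-dimensional phenomenon is easy to reproduce (a minimal sketch): for $f(x) = |x|^{\alpha}$ the Newton update simplifies to $x \mapsto (1 - 1/\alpha)\,x$, so all three regimes are read off from the magnitude of $1 - 1/\alpha$.

```python
def newton_step(x, alpha):
    # f(x) = |x|**alpha, f'(x) = alpha * |x|**(alpha - 1) * sign(x),
    # so the Newton update is x - f/f' = x * (1 - 1/alpha).
    f = abs(x) ** alpha
    fp = alpha * abs(x) ** (alpha - 1) * (1.0 if x > 0 else -1.0)
    return x - f / fp

def run_newton(x0, alpha, steps):
    x = x0
    for _ in range(steps):
        x = newton_step(x, alpha)
    return x
```

For $\alpha = 1/2$ the factor is exactly $-1$, giving the two-point cycle; for $\alpha > 1/2$ it lies inside $(-1, 1)$ and the iteration converges to the root; for $\alpha < 1/2$ its magnitude exceeds 1 and the iterates run away.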

C.2 Fractals
Here we review a simple fact about the fractal structure of the Newton algorithm. It is a well-known property of Newton's method that it produces fractal basins of attraction (Fatou sets), separated by the so-called Julia set. Newton's method applied to the roots of the polynomial $p(z) = z^3 - 1$ splits the complex plane into a fractal structure whose boundary is a Julia set, which we depict below in figure 13.
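A minimal sketch of this classic example (the iteration cap and tolerances are arbitrary choices): label each starting point by the cube root of unity to which Newton's method for $p(z) = z^3 - 1$ converges; evaluating the label over a grid of starting points reproduces the three intertwined fractal basins.

```python
import cmath

def newton_basin(z, max_iter=100, tol=1e-8):
    # Label each starting point by the root of p(z) = z**3 - 1 that the
    # Newton iteration z -> z - p(z)/p'(z) converges to (0, 1 or 2),
    # or -1 if it fails to settle (e.g. near the Julia-set boundary).
    roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
    for _ in range(max_iter):
        if abs(z) < 1e-12:  # p'(0) = 0: the Newton step is undefined
            return -1
        z = z - (z**3 - 1) / (3 * z**2)
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k
    return -1
```

Evaluating `newton_basin(x + 1j*y)` over a fine grid of `(x, y)` and coloring by the returned label gives the familiar three-lobed fractal picture.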

D Coupling maximization

D.1 Coupling maximization
It is interesting to consider the following variation of the coupling maximization problem. Consider a scattering amplitude with a single bound state located at $m_p$. Let us assume in addition that the amplitude satisfies the following bound at high energies, where $\alpha \leq 1$ is a consequence of unitarity. Following [14], we would like to find the amplitude that maximizes the coupling $g^2$, defined as the residue of the amplitude at $s = m_p^2$. It is easy to see that the coupling is maximized by minimizing the inelasticity. Indeed, we can write [14]  where $g^2_{\text{elastic}}$ comes from $S_{\text{elastic}}(s)$ in (2.10). It is clear from (D.2) that the argument of the exponent is nonpositive and is minimized by setting $f_i(s) = 0$. It is then easy to see that adding CDD zeros can only reduce the coupling as well, and as a result the maximal coupling is achieved by a single CDD pole, $\pm S^{\text{pole}}_{\text{CDD}}(s)$. This argument, however, is not valid for general $\alpha$. Indeed, a single CDD pole corresponds to (D.1) with $\alpha = 0$. Therefore, if we want to consider amplitudes with a different behavior in the UV, we should modify this amplitude. Let us argue, however, that such modifications do not affect the maximal value of the coupling. In other words, we can introduce a small inelasticity that implements (D.1) while making its contribution to (D.2) arbitrarily small.
Let us demonstrate this very explicitly for the case $\alpha = -1$. To implement such an amplitude, we would like to introduce an inelasticity that cancels the constant piece in the expansion of the CDD pole. To this end we obtain the following equation. Therefore, at least in 2d, we see that there are infinitely many amplitudes satisfying all the required properties arbitrarily close to the maximal value of the coupling. In some sense, we can hide the high-energy properties of the amplitude so far in the UV that they do not affect the IR. It would be very interesting to understand whether this principle continues to hold in higher dimensions as well.