On simulation of continuous determinantal point processes

We review how to simulate continuous determinantal point processes (DPPs) and improve the current simulation algorithms in several important special cases as well as detail how certain types of conditional simulation can be carried out. Importantly we show how to speed up the simulation of the widely used Fourier based projection DPPs, which arise as approximations of more general DPPs. The algorithms are implemented and published as open source software.


Introduction
Since the seminal work by A. Kulesza and B. Taskar [22], discrete DPPs defined on a finite set {1, ..., N} have become popular objects in the machine learning community to generate diverse random subsets, the main application being recommendation systems. Discrete DPPs are defined through a kernel matrix of size N × N, which is often assumed to be Hermitian. The default algorithm for perfect simulation of discrete DPPs, the so-called spectral algorithm of [21], starts from an orthonormal eigendecomposition of this matrix. Since this decomposition may be costly to obtain, and even unfeasible for large N, other options have been developed, from approximate algorithms [2; 28] to alternative perfect simulation strategies [15; 14; 36; 23].
On the other hand, continuous DPPs generally refer to DPPs defined on the Euclidean space R^d. They constitute the initial setting of O. Macchi [30], who introduced these processes in their current form to model the distribution of a fermion system in statistical physics. Continuous DPPs find applications in machine learning [1], spatial statistics [25], telecommunications [13; 31] and Monte Carlo approximations [4; 5; 10]. They are defined through a kernel K, which in this setting is a function K(x, y), x, y ∈ R^d, that must satisfy some properties to ensure existence of the continuous DPP, as detailed in Section 2. The spectral algorithm of [21] to simulate continuous DPPs on a compact set S ⊂ R^d is in general feasible only if we know the spectral representation (also called Mercer representation) of K on S. While this representation always exists in theory, it is in general intractable. Unfortunately, the aforementioned alternative perfect simulation strategies developed in the discrete case do not apply to the continuous case, and their extension to this setting is a difficult open challenge. For these reasons, there is still room to improve the simulation of continuous DPPs. This paper addresses this question for projection DPPs and other important classes of DPPs, including the β-Ginibre process, the Gaussian-type DPP and the Bessel-type DPP.
A projection DPP on a bounded subset S ⊂ R^d corresponds to the special case where K satisfies the identity ∫_S K(x, z)K(z, y)dz = K(x, y). This class of DPPs is of particular interest for several reasons. First, unlike general DPPs, projection DPPs generate a fixed number of points in S, equal to n = ∫_S K(x, x)dx. Second, they correspond to the most repulsive DPPs, since their (non-null) eigenvalues in their spectral representation are all identically equal to 1, the maximal possible value. Third, the simulation algorithm of [21] for a general DPP, provided its spectral representation is known, boils down to the simulation of an associated projection DPP. A first contribution of this paper, carried out in Section 3, is to review and improve the simulation algorithms of projection DPPs. An important particularity of these processes is that they can be perfectly simulated without knowing the spectral representation of their kernel; see Algorithm 1. Nonetheless, if we have access to this representation, then the well-known spectral algorithm (Algorithm 2) is more efficient. As a special case, we give a particular account of the case where the eigenfunctions of the spectral decomposition are the Fourier basis functions on S. This case plays an important role both because it gives rise to a homogeneous projection DPP on S and because it is at the heart of the spectral approximation of general translation-invariant kernels in [25]. For this basis, we refine the spectral algorithm to decrease the computation time by up to 45% in dimension d = 1 and up to 25% in d = 2; see Algorithm 3. Finally, we show how Algorithm 1 and Algorithm 2 can be combined to perform conditional simulation of a projection DPP given a subset of points in S, including in-painting conditional simulation.
For general (non-projection) DPPs, the standard simulation procedure consists in approximating the spectral representation of their kernel in order to apply the algorithm of [21]; see [25; 1]. We do not discuss these kinds of approximation here; as a second contribution, we rather focus on important classes of DPPs for which the spectral representation can be obtained explicitly on some subset S. In particular, we investigate in detail in Section 4 the β-Ginibre process [17; 13], a generalisation of the famous standard Ginibre process [16]. For this model, we provide a simulation algorithm based on the exact spectral representation of the kernel, and we compare it to the simulation based on the eigenvalues of a random matrix (sometimes referred to as the truncated β-Ginibre approximation). Moreover, we also derive in Section 5 the spectral representation of Gaussian kernels and Bessel-type kernels, two widely used parametric models in spatial statistics, opening the way to perfect simulation for these models.
All algorithms presented in Sections 3 and 4 are available in an R package [37] currently available on https://github.com/rubak/dppsim and planned for submission to the Comprehensive R Archive Network (CRAN). The package includes functions for simulation and conditional simulation of any projection DPP, and simulation of the β-Ginibre process by the spectral algorithm and the truncated β-Ginibre approximation.

Preliminaries
Throughout the manuscript the notation |·| has different meanings: for a complex number z ∈ C, |z| denotes its modulus; for a vector x, |x| is its infinity norm; if J is a finite set, then |J| stands for its cardinality; if S is an infinite set, then |S| is its volume. On the other hand, ‖·‖ will refer either to the vectorial ℓ² norm or to the functional L² norm, depending on the context. For two vectors x and y, x·y denotes their inner product. For a matrix A, we denote by A' its transpose and by A* its conjugate transpose.

DPPs and basic properties
Let X be a point process on R^d (we refer to [33] for background material on spatial point processes). If there exists a non-negative function ρ_n such that

E[ Σ^≠_{x_1,...,x_n ∈ X} f(x_1, ..., x_n) ] = ∫_{(R^d)^n} ρ_n(x_1, ..., x_n) f(x_1, ..., x_n) dx_1 ⋯ dx_n

for all locally integrable functions f : (R^d)^n → R, where the symbol ≠ means that the sum is over distinct x_i, then ρ_n is called the n-th order joint intensity function of X. Intuitively, ρ_n(x_1, ..., x_n) is the infinitesimal probability that X contains points (among others) located at x_1, ..., x_n.
We say that X is a DPP on S ⊆ R^d with kernel K : S × S → C if for all n ≥ 1, the joint intensity ρ_n of X exists and is of the form

ρ_n(x_1, ..., x_n) = det[K(x_k, x_l)]_{1≤k,l≤n}   (1)

for all {x_1, ..., x_n} ⊂ S, where [K(x_k, x_l)]_{1≤k,l≤n} denotes the matrix with entries K(x_k, x_l). Some conditions on K are necessary to ensure the existence of X, which depend on whether X is defined on all of R^d or on a compact subset S ⊂ R^d. First, assume that S is a compact set and that K is Hermitian and continuous on S × S; then by Mercer's theorem it admits the spectral representation

K(x, y) = Σ_{k≥1} λ_k Φ_k(x) \overline{Φ_k(y)},   (2)

for some eigenvalues λ_k and orthonormal eigenfunctions Φ_k on S, for k = 1, 2, .... (In typical applications k ∈ Z^d, but for convenience we use a natural number index here.) A necessary and sufficient condition for the existence of a DPP X on S with kernel K is that λ_k ∈ [0, 1] for all k = 1, 2, .... We refer the reader to [21] for details and more results. Second, assume that K is invariant under translation on R^d, i.e. K(x, y) = K_0(x − y) for some function K_0; then a sufficient condition for the existence of the DPP with kernel K on R^d (and then on any subset S ⊂ R^d) is that K_0 is a continuous covariance function on R^d whose Fourier transform is less than 1; see [25]. Note that to check the latter condition it is not necessary to know the spectral representation (2) of K on any subset S. In the following, we will assume that the considered DPPs exist (conditions will be recalled for each specific example).
Let us recall some important properties of DPPs [25].
(i) The (first order) intensity of a DPP X on S is by definition ρpxq " Kpx, xq, x P S. We say that X is homogeneous if ρpxq is constant.
(iii) Suppose we apply an independent thinning of X with retention probabilities p(x), x ∈ S; then the resulting process is still a DPP with kernel K(x, y)√(p(x)p(y)). In particular, the restriction of X to any subset S' ⊂ S is the DPP with kernel K(x, y)1_{x∈S'}1_{y∈S'}.
(iv) Any smooth transformation of X remains a DPP. For example, let T(x) = Ax + b, x ∈ R^d, be an affine transformation on R^d with det(A) ≠ 0; then T(X) is the DPP on T(S) with kernel K(A^{−1}(x − b), A^{−1}(y − b))/|det(A)|.
As mentioned in the introduction, an important special class of DPPs is the class of projection DPPs on a bounded set S ⊂ R^d. Their kernel K satisfies ∫_S K(x, z)K(z, y)dz = K(x, y) and they generate a fixed number of n points in S, where n = ∫_S K(x, x)dx. Their eigenvalues in (2) take only two possible values, 0 or 1, exactly n of them being 1 and the others being 0.
Assume now that all eigenvalues of K in (2) are strictly less than 1. Then the probability density that X has n points located at x_1, ..., x_n (sometimes called the likelihood or Janossy density of X) is proportional to det[L(x_k, x_l)]_{1≤k,l≤n}, where L is the kernel defined by

L(x, y) = Σ_{k≥1} (λ_k/(1 − λ_k)) Φ_k(x) \overline{Φ_k(y)}.

It is common in the machine learning community to define X from this L-kernel instead of its K-kernel, meaning that X is defined through its Janossy densities instead of its joint intensities ρ_n. If λ_k < 1 for all k, both points of view are equivalent and the spectral representation of K can be deduced from that of L, and vice-versa. On the one hand, the advantage of defining X from L is that the eigenvalues of L are not restricted to be less than 1 for X to be well defined, and the likelihood function is easier to understand. On the other hand, defining X through L implies that the moments of X become unknown, contrary to (i) and (ii) above. Moreover some DPPs, including projection DPPs for which λ_k ∈ {0, 1}, cannot be defined through an L-kernel. Note however that in [42], extended L-ensembles are introduced and fill this gap (the main setting of that paper concerns discrete DPPs but it is argued in the conclusion that the extension to the continuous setting is straightforward).
In this paper we start from the K-kernel and we consider the simulation of X given K.

Projection DPPs
For projection DPPs on a compact set S ⊂ R^d, perfect simulation algorithms are available, whether or not the spectral representation of the kernel is known. We detail these algorithms in Section 3.1. The idea is to generate the first point of the DPP with respect to the unnormalised density K(x, x) on S, then to generate the second point given the first one with respect to the associated conditional density on S, and so on. The difficulty in this procedure is to be able to simulate, at each step, a point with respect to the conditional density given the previous points. Figure 1 shows examples of such conditional densities at intermediate steps of the algorithm when simulating a homogeneous projection DPP having 121 points on the unit square with K(x, y) = Σ_{|j|≤5} e^{i j·(x−y)} for x, y ∈ [0, 1]² and j ∈ Z² (where i = √−1 and the unit square in R² is identified with the unit square in C). The top left-hand plot is the density of the 20th point given the first 19 points, while the top right-hand plot shows the density of the last (121st) point given the first 120 points. The bottom row shows the same plots on log scale. The simulation with respect to these conditional densities is commonly achieved by rejection sampling where the proposal is the unnormalised density K(x, x). This procedure can be extremely costly in the last steps of the algorithm, where many rejections may occur due to the complicated shape of the conditional density.
Currently there seems to be no general way to avoid the costly rejection sampling step, but in Section 3.2 we show how to significantly accelerate it in the important particular case where the eigenfunctions of K correspond to the Fourier basis functions. Moreover, the fact that it is possible to simulate a projection DPP without knowing its spectral representation opens the way to conditional simulation, including in-painting, as we discuss in Section 3.3.

Non-projection DPPs
If the DPP is not a projection DPP, then the standard algorithm to generate it on S, due to [21], starts from the spectral representation (2) of K on S. It consists in, first, generating a sequence of Bernoulli variables B_k with respective rates λ_k (note that only a finite number of B_k's are non-zero since Σ_{k≥1} λ_k < ∞), and second, given (B_k)_{k≥1}, generating the projection DPP with kernel Σ_{k≥1} B_k Φ_k(x) \overline{Φ_k(y)}. In Sections 4 and 5, we show that the exact spectral representation (2) of the β-Ginibre kernel, the Gaussian kernel, and the Bessel-type kernel is available on some sets S, opening the possibility of perfect simulation for the associated DPPs on these sets (and any subset thereof). We in particular treat in detail the β-Ginibre process, for which we provide efficient simulation algorithms in Section 4.
However, for most kernels and sets S, the expansion (2) is not known. To overcome this issue we may:

• Consider an approximation of (2). For any translation invariant kernel on a rectangular set S, a Fourier series approximation is proposed in [25], and when the DPP is defined through the L-kernel, a Nyström based approximation is suggested in [1].
• Using property (iv) of the previous section, consider a smooth transformation Y = T(X), where the spectral representation of Y on T(S) is known or can be approximated, then generate Y on T(S) and deduce X = T^{−1}(Y). In particular, by this property, the simulation of a DPP on any rectangular (resp. any ellipsoidal) set S boils down to the simulation on the unit square (resp. on the unit disc).
• Using property (iii), view X as an independent thinning of some DPP Y whose spectral representation is known or can be approximated, then generate Y and get X by thinning. This includes in particular the simulation of X on any subset S' of S, provided we know how to generate X on S. This strategy is also possible for any non-stationary DPP X whose kernel reads K_0(x − y)√(ρ(x)ρ(y)) with K_0(0) = 1, provided ρ(x) is uniformly bounded on S.
If X is defined through its L-kernel, the simulation challenge is similar: one needs to know or approximate the spectral representation of L in order to deduce K and use the procedure above. Let us however mention an alternative solution that does not require the spectral representation of L: knowing L on S gives access to the Papangelou intensity of X on S. This opens the possibility of using a CFTP algorithm to perfectly simulate X on S, provided ∫_S L(x, x)dx < ∞, as considered in [11, Section 5.2]. Unfortunately this procedure can be very slow.

Basic algorithms
Assume that X = DPP(K) is a projection DPP with cardinality n on a compact set S ⊂ R^d. Then we can simulate X on S with Algorithm 1 below. This algorithm is justified in [21]; see also [29, Section 3.5] and Appendix A.1. Note that explicit knowledge of the spectral representation (2) of K is not required. This is useful in some situations where we know that DPP(K) exists but no spectral representation, and even no accurate approximation of it, is available for K. This will be the case for the kernel K involved in the in-painting conditional simulation of Section 3.3.
Algorithm 1 Projection DPP without using the spectral representation
sample X_n from the distribution with density p_n(x) = K(x, x)/n, x ∈ S
for i = (n − 1) to 1 do
    set k_i(x) = (K(x, X_n), ..., K(x, X_{i+1})) and K_i = (K(X_j, X_l))_{n≥j,l≥i+1}
    sample X_i from the distribution with density
        p_i(x) = (K(x, x) − k_i(x) K_i^{−1} k_i(x)*)/i, x ∈ S   (3)
end for
return {X_1, ..., X_n}

On the other hand, if we have access to the spectral representation of K, the conditional density p_i in (3) can be recast to get the more efficient Algorithm 2. Some justification for this is given in Appendix A.1 and more details can be found in [25]. Here, we assume without loss of generality that the spectral representation of K reads

K(x, y) = Σ_{k=1}^{n} Φ_k(x) \overline{Φ_k(y)},   (4)

and we set v(x) = (Φ_1(x), ..., Φ_n(x))'.
Algorithm 2 Projection DPP using the spectral representation (4)
sample X_n from the distribution with density p_n(x) = ‖v(x)‖²/n, x ∈ S
set e_1 = v(X_n)/‖v(X_n)‖
for i = (n − 1) to 1 do
    sample X_i from the distribution with density
        p_i(x) = (1/i) [ ‖v(x)‖² − Σ_{j=1}^{n−i} |e_j* v(x)|² ], x ∈ S   (5)
    set w = v(X_i) − Σ_{j=1}^{n−i} (e_j* v(X_i)) e_j and e_{n−i+1} = w/‖w‖
end for
return {X_1, ..., X_n}
To appreciate why the computation of p_i is faster with Algorithm 2 than with Algorithm 1, take for instance the evaluation of the second term in (3). For each proposed x it requires the computation of k_i(x), which itself requires n − i evaluations of the vector v(x) along with the n − i vectors v(X_j) for i + 1 ≤ j ≤ n. In contrast, only one evaluation of v(x) is required for the same term in Algorithm 2, since the e_j's are stored from the previous step.
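To make the comparison concrete, here is a minimal base-R sketch (ours, not the interface of the dppsim package) of the two equivalent evaluations of i·p_i(x), for points represented as scalars or complex numbers; the function names are illustrative only.

```r
# Kernel form used in Algorithm 1: i*p_i(x) = K(x,x) - k_i(x) K_i^{-1} k_i(x)*.
# X_prev is the vector of already generated points X_{i+1}, ..., X_n.
ip_kernel <- function(K, x, X_prev) {
  kx <- sapply(X_prev, function(z) K(x, z))        # k_i(x)
  Ki <- outer(X_prev, X_prev, Vectorize(K))        # matrix (K(X_j, X_l))
  Re(K(x, x) - drop(kx %*% solve(Ki) %*% Conj(kx)))
}

# Spectral form used in Algorithm 2: i*p_i(x) = ||v(x)||^2 - sum_j |e_j* v(x)|^2,
# where the columns of E are the orthonormal vectors e_1, ..., e_{n-i} built so far.
ip_spectral <- function(v, x, E) {
  vx <- v(x)
  sum(Mod(vx)^2) - sum(Mod(crossprod(Conj(E), vx))^2)
}
```

For a real kernel the Conj calls are no-ops; the point of the second form is that the kernel evaluations at the previously generated points are replaced by stored orthonormal vectors.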
From a practical point of view, we need to be able to generate a point with respect to p_i, whether we use Algorithm 1 or 2. This is in fact the bottleneck of these simulation methods. The standard procedure consists in rejection sampling, where the proposal can be deduced from the bound i p_i(x) ≤ K(x, x). Two strategies are then possible:

• Strategy 1: If we know how to simulate with respect to the distribution on S with density K(x, x)/n, then this distribution can be chosen as the proposal. Specifically, if Z is a point generated by this proposal and U ~ U([0, 1]) is independent of Z, we accept Z if i p_i(Z)/K(Z, Z) > U. The probability of acceptance is i/n.
• Strategy 2: If we only know an upper bound for K(x, x), i.e. K(x, x) ≤ M for all x ∈ S, then p_i(x) ≤ M/i and rejection sampling can be applied where the proposal is the uniform distribution on S. Specifically, if Z is generated uniformly on S and U ~ U([0, 1]) is independent of Z, we accept Z if i p_i(Z)/M > U. The probability of acceptance is i/(M |S|).
The second strategy is always an option in practice, provided we initially approximate the upper bound M numerically. Note that, given S and K, this bound can be found offline and once and for all.
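As an illustration, the following base-R sketch implements Strategy 2 for a box-shaped S; it is not the dppsim interface, and the bound M and the function ip_i (returning i·p_i(x), for instance one of the sketches above) are supplied by the user.

```r
# Rejection sampling for p_i with a uniform proposal on the box S = prod_m [lower_m, upper_m].
sample_pi_uniform <- function(ip_i, M, lower, upper) {
  repeat {
    Z <- runif(length(lower), lower, upper)   # uniform proposal on S
    U <- runif(1)
    if (ip_i(Z) / M > U) return(Z)            # accept with probability i*p_i(Z)/M
  }
}
```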

Remark 3.1. A natural and commonly used idea in spatial statistics consists in leveraging the local contribution of a point process to generate a new point
given the other ones. Unfortunately, the conditional density p_i is not well localised, especially in the presence of many conditioning points, which corresponds to the smallest values of i, or equivalently to the last (most time-consuming) steps of the simulation algorithms, ruling out the idea of localisation. This is clearly illustrated in Figure 1: while in the first steps of the algorithm (left-hand plots) p_i seems to be localised around the conditioning points, the picture looks different in the right-hand plots, where some "empty" regions are associated with very small values of the density p_i, whereas other seemingly similar regions are associated with a high density. This effect is particularly evident in the bottom right-hand plot on log scale, which shows quite stunning long-range dependencies in p_i. This demonstrates that not only the neighbouring points of these regions impact the values of p_i, but really the whole point pattern.

Refinement for the Fourier basis
The Fourier basis plays an important special role for continuous DPPs, because of its simplicity, the homogeneity and the repulsiveness it yields [10], and the fact that it is at the heart of the spectral approximation of translation-invariant kernels in [25]. Let us assume for simplicity that S = [0, 1]^d (remember that the simulation on any rectangular window boils down to this case). The kernel of a projection DPP on S based on the Fourier basis reads

K(x, y) = Σ_{j∈J} e^{i j·(x−y)}, x, y ∈ S,

where J ⊂ Z^d is some finite subset of Z^d with cardinality |J| = n.
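For concreteness, here is a small base-R sketch (ours, not the package's) of this kernel for the index set J = {j ∈ Z^d : |j| ≤ ℓ}; the 2π factor below corresponds to the standard orthonormal Fourier basis e^{2πi j·x} on [0, 1]^d, and conventions that absorb this factor into j lead to the same simulation algorithm.

```r
ell <- 5; d <- 2
J <- as.matrix(expand.grid(rep(list(-ell:ell), d)))   # n = (2*ell+1)^d frequencies
n <- nrow(J)
v <- function(x) exp(2i * pi * drop(J %*% x))          # v(x) = (phi_j(x))_{j in J}
K <- function(x, y) sum(v(x) * Conj(v(y)))             # K(x, y) = sum_j phi_j(x) conj(phi_j(y))
# K(x, x) = n for every x, so M = n in Strategy 2.
```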
In this case, the more efficient Algorithm 2 is feasible and the two aforementioned strategies for the simulation with respect to p_i coincide, since K(x, x) = n (so M = n in Strategy 2). The algorithm builds on rejection sampling with the uniform distribution as a proposal. The costly part of this procedure is the evaluation of p_i(Z) for any proposed point Z generated from the uniform distribution on S, which becomes all the more problematic in the last steps of the i-loop in Algorithm 2, when many rejections are expected, the probability of acceptance then being of order 1/n.
To accelerate the procedure, we suggest exploiting the following bound, established in Appendix B:

i p_i(x) ≤ min_{i+1≤k≤n} P(x − X_k),   (6)

where P is a second-order polynomial in d variables. Note that the coefficients of P only depend on the index set J and can thus be computed offline before running Algorithm 2. The inequality (6) provides a simple first bound to test against in the rejection sampling procedure, avoiding the more costly evaluation of p_i for many proposed points. The refined rejection sampling procedure is detailed in Algorithm 3.
Algorithm 3 Refined rejection sampling for p_i with the Fourier basis
repeat
    sample Z from the uniform distribution on S
    generate U ~ U([0, 1]) independent of Z
    if min_{i+1≤k≤n} P(Z − X_k)/n < U then
        reject Z (using only the bound (6))
    else if i p_i(Z)/n > U then
        accept Z
    end if
until Z is accepted

We tested the performance of this refinement by simulation. Table 1 reports the fraction of time needed to generate different point patterns in dimension d = 2 by using Algorithm 3 compared to using simple rejection sampling. The entries are averaged over 500 replications. The tested models are, first, the projection DPP with kernel K(x, y) = Σ_{|j|≤ℓ} e^{i j·(x−y)}, which leads to a cardinality of n = (2ℓ + 1)² points and which we call the "most repulsive" kernel with intensity ρ = n in Table 1, and second, the Gaussian-type DPP (see Section 5.1) with a comparable intensity ρ = n and an interaction parameter α = α_max (strong repulsion) and α = α_max/2 (mild repulsion). Here α_max = 1/√(ρπ) is the maximal possible value of α for a Gaussian-type DPP with intensity ρ, and the simulation exploits the Fourier approximation of (2) as introduced in [25], making the use of Algorithm 3 relevant. Table 1 also reports the "bound rate", which summarises how often the bound (6) was used on average. This rate was empirically quite stable across the different values of ρ that we implemented, which is why only a single number is given. Specifically, a bound rate of 0.40 means that 40% of the rejections in the algorithm were done directly thanks to the bound (6). This rate can be viewed as the maximal theoretical gain we can expect thanks to Algorithm 3, if the only computation time were due to this rejection step in Algorithm 2.
From Table 1, we conclude that using Algorithm 3 in dimension d = 2 may lead to a speed improvement of up to almost 30% (for the most repulsive kernel and about 1000 points), and never really slows down the simulation even when no clear improvement can be expected (as for the mildly repulsive Gaussian-type DPP, for which the bound rate is only 0.06). We have carried out similar simulations in dimension d = 1, not detailed here, where we observed a bound rate of 65% and a gain of up to 45% for the most repulsive kernel with 1000 points. In dimension d = 3, comparable simulations showed a bound rate of 25% and a speed improvement of up to 16% for the most repulsive kernel with 1000 points. We expect the improvements to be smaller in higher dimensions, making the use of Algorithm 3 superfluous in this case.
Finally, let us mention an additional possible refinement in dimension d = 1. In this case, the bound (6) can be manipulated to provide a more efficient proposal than the uniform distribution, which can easily be simulated by the inversion method. The details are given in Appendix B.1. With this new proposal, the acceptance probability in the rejection sampling for the last step of Algorithm 2, i.e. for the simulation of the last point given the n − 1 first ones, can be 3 times higher than the one associated with a uniform proposal; see Appendix B.1. Unfortunately, this procedure does not generalise easily to higher dimensions, in the sense that it seems difficult to derive a similarly efficient proposal from (6) in dimension d > 1.
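The sketch below illustrates the two-stage test of Algorithm 3 in base R; it is only schematic and not the dppsim implementation. The function P_bound, standing for the polynomial P of (6), is hypothetical here since its coefficients (which depend on J) are not reproduced in this section, and ip_i returns i·p_i(x) as before.

```r
# Refined rejection sampling for p_i on S = [0,1]^d with the Fourier basis.
# X_prev is a matrix with one row per conditioning point X_{i+1}, ..., X_n.
sample_pi_fourier <- function(ip_i, n, P_bound, X_prev) {
  d <- ncol(X_prev)
  repeat {
    Z <- runif(d)                                               # uniform proposal on [0,1]^d
    U <- runif(1)
    cheap <- min(apply(X_prev, 1, function(xk) P_bound(Z - xk))) / n
    if (cheap < U) next                                         # reject using the bound (6) only
    if (ip_i(Z) / n > U) return(Z)                              # otherwise run the full (costly) test
  }
}
```

Because i·p_i(Z) ≤ min_k P(Z − X_k), the cheap test can only trigger rejections that the full test would also make, so the accepted points have exactly the target distribution.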

Conditional simulation
Let again X be a projection DPP on S ⊂ R^d with cardinality n.
By construction, the last i steps of Algorithms 1 and 2 provide an exact simulation of X \ {X_n, ..., X_{i+1}} given that {X_n, ..., X_{i+1}} ⊂ X. This conditional simulation may be useful in practice if we observe a point pattern in S with missing points and we wish to generate some realisations of the full point pattern in S. This is possible if we agree that the complete unobserved point pattern can be modelled by a projection DPP, to be chosen, and that the missing points are due to an independent thinning with known retention probability q(x), x ∈ S.
As an example, assume that X is a projection DPP on S = [0, 1]² with kernel K(x, y) = Σ_{|j|≤ℓ} e^{i j·(x−y)}, so cardinality n = (2ℓ + 1)², where we need to choose ℓ (or equivalently n), and that we observe only m points of X, m ≤ n, coming from an independent thinning of X with known retention probability q(x), x ∈ S. Using the relation E(m) = ∫_S q(x)K(x, x)dx = n ∫_S q(x)dx, we deduce a plausible value of n by replacing E(m) by the observed value of m. Then we can use Algorithm 2 to generate the n − m remaining points given the m observed ones. This example is illustrated in the leftmost plot of Figure 2, where the red triangles represent a possible realisation of the unobserved points in S = [0, 1]² given the observed point pattern formed by the black circles. Here we assumed that q(x) = 1/2, so that n is twice the number of observed black points.
We chose in the above example a specific form of K, up to n, that may not be relevant for other applications. A general procedure to select a projection DPP X that fits well with the observed conditional points is discussed in Appendix A.3.
Alternatively, we may consider in-painting conditional simulation, that is the simulation of X ∩ A given that X ∩ A^c = {X_1, ..., X_m}, m ≤ n, where A and A^c constitute a partition of S. (Here we agree that if m = 0, X ∩ A^c = ∅.) Since X is a projection DPP, it is easily verified (see Appendix A.2) that the latter conditional point process is a projection DPP with cardinality n − m and kernel, when m > 0,

K_{X_1,...,X_m}(x, y) = [K(x, y) − k_m(x) K_m^{−1} k_m(y)*] 1_A(x) 1_A(y),   (7)

where k_m(x) = (K(x, X_1), ..., K(x, X_m)) and K_m = (K(X_j, X_l))_{1≤j,l≤m}. If m = 0, K_∅(x, y) = K(x, y)1_A(x)1_A(y). The spectral representation of the conditional kernel (7) is unlikely to be known, even if the spectral representation of K on S is explicitly given. Nonetheless the simulation can be done by Algorithm 1.
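A possible base-R rendering of the conditional kernel (7) is sketched below (again, not the package interface): given the base kernel K, the observed points X_obs outside A (one per row) and an indicator function in_A, it returns the kernel that can then be fed to Algorithm 1.

```r
inpainting_kernel <- function(K, X_obs, in_A) {
  m <- nrow(X_obs)
  Km <- matrix(0, m, m)
  for (j in 1:m) for (l in 1:m) Km[j, l] <- K(X_obs[j, ], X_obs[l, ])  # K_m
  Km_inv <- solve(Km)
  function(x, y) {
    if (!in_A(x) || !in_A(y)) return(0)
    kx <- apply(X_obs, 1, function(z) K(x, z))                          # k_m(x)
    ky <- apply(X_obs, 1, function(z) K(y, z))                          # k_m(y)
    K(x, y) - drop(kx %*% Km_inv %*% Conj(ky))                          # equation (7) on A
  }
}
```

For a real-valued kernel the Conj call is a no-op; for a complex Hermitian kernel it corresponds to the conjugate transpose in (7).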
As an illustration, we have generated in the rightmost plot of Figure 2 the red triangles in the inner square [1/4, 3/4]² given the point pattern of black circles outside this square. The full point pattern was assumed to be the same DPP as before, where n was fixed to the number of observed black circles times 4/3 (that is, the ratio of the volume of the full region to the volume of the observed region). A more adaptive choice of K based on the observed points is discussed in Appendix A.3.

β-Ginibre process
The β-Ginibre process, for β > 0, is the DPP on the complex plane C with kernel

K(x, y) = ρ e^{x ȳ/β} e^{−(|x|² + |y|²)/(2β)}, x, y ∈ C.   (8)

Note that we could equivalently consider this process on R², but it is commonly defined on C, a point of view which we will respect in the following. The β-Ginibre process is a stationary and isotropic point process with intensity ρ > 0, which exists if and only if ρβπ ≤ 1. The standard Ginibre process introduced in [16] corresponds to the particular case ρ = 1/π and β = 1. The β-Ginibre process, initially considered in [17] and [13], can be viewed as an independent thinning of the Ginibre process with retention probability ρβπ, followed by the homothety with ratio √β. Given ρ > 0, the β-Ginibre process includes the Poisson point process with intensity ρ as a limiting case when β → 0, and when β = 1/(ρπ), it is one of the most repulsive stationary DPPs with intensity ρ in the sense of [32].
We discuss and compare in this section two strategies to generate the β-Ginibre process on the ball B_R = {|x| < R} in C, for any R > 0. Remember that the simulation on any compact set S ⊂ B_R can then be performed by removing the points in B_R \ S. The first strategy is a standard procedure that takes advantage of the representation of the Ginibre process as the distribution of certain eigenvalues of an infinite matrix. The second strategy exploits the (unusual) fact that the spectral decomposition (2) of K is explicitly known on B_R, so that the spectral algorithm of [21] can be employed.
Remark 4.1. The second order properties of the Ginibre process, i.e. the intensity and the pair correlation function, are similar to those of the Gaussian-type DPP with parameters ρ and α (see Section 5.1) with the correspondence β = α²/2. Note however that the Ginibre process can reach more repulsiveness in view of the existence condition β ≤ 1/(ρπ), while α²/2 ≤ 1/(2ρπ) is required for existence of the Gaussian-type DPP.

Approximation by the truncated β-Ginibre process
The Ginibre process can be viewed as the distribution of the eigenvalues of an infinite matrix with i.i.d. standard complex Gaussian entries [16]. To simulate (approximately) the Ginibre process on B_R, the standard procedure consists of generating a finite n × n matrix of i.i.d. standard complex Gaussians, where n has to be chosen according to R, and then computing its n complex eigenvalues. The simulation of the β-Ginibre process is easily deduced by the thinning and rescaling steps explained at the beginning of this section. Given the importance of this procedure, we provide some details below and we discuss the choice of n.
The distribution of the n eigenvalues above (followed by the thinning and scaling steps) corresponds to the distribution of the truncated β-Ginibre process, which is the DPP on C whose kernel (9) is the truncation at k = n − 1 of the expansion (11) defined in Section 4.2 below. We refer to [12] for some discussion of this process and of other modified versions of the Ginibre process. As already noticed in [16], this DPP is not stationary and is mostly concentrated on the disc with radius √(nβ). Inside this disc the truncated β-Ginibre process and the genuine β-Ginibre process are very similar, their differences arising mostly close to the border. In particular, the intensity of the truncated β-Ginibre process is always less than ρ (the intensity of the β-Ginibre process), and for any |x| ≤ √(nβ) the difference between the two intensities is bounded by a quantity M_β(|x|, n) made explicit in (10); see [16]. To simulate approximately the β-Ginibre process on B_R, the standard procedure consists in simulating the truncated β-Ginibre process for n large enough so that R ≤ √(nβ) and so that the approximation is satisfying. To our knowledge, there is no clear recommendation for the choice of n. Choosing n = ⌈R²/β⌉ would lead to a poor approximation close to the border of the disc. We instead suggest choosing n so that the upper bound in (10) is uniformly less than a prescribed error ε on B_R. Specifically, we choose n = ⌈n_ε⌉ where n_ε is such that M_β(R, n_ε) = ε. In the simulations presented in Section 4.3, we fixed ε = 10^{−10}. The full procedure is summarised in Algorithm 4.
Algorithm 4 β-Ginibre process on B_R by the eigenvalues method
set n = ⌈n_ε⌉ where n_ε solves M_β(R, n_ε) = ε; see (10)
sample the matrix M = [(A_{k,l} + i B_{k,l}) √(β/2)]_{1≤k,l≤n}, where i = √−1 and the A_{k,l} and B_{k,l} are all independent N(0, 1) random variables
compute the eigenvalues {Z_1, ..., Z_n} of M
set X = ∅
for i = 1 to n do
    with probability ρβπ, and if |Z_i| < R, add Z_i to X
end for
return X
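A minimal base-R sketch of Algorithm 4 could look as follows; it is not the dppsim implementation, and the truncation order n is passed directly instead of being derived from the bound (10).

```r
rbeta_ginibre_eig <- function(rho, beta, R, n) {
  A <- matrix(rnorm(n * n), n, n)
  B <- matrix(rnorm(n * n), n, n)
  M <- (A + 1i * B) * sqrt(beta / 2)            # i.i.d. scaled complex Gaussian entries
  Z <- eigen(M, only.values = TRUE)$values      # eigenvalues of the truncated ensemble
  Z <- Z[runif(n) < rho * beta * pi]            # independent thinning, retention rho*beta*pi
  Z[Mod(Z) < R]                                 # keep the points falling in B_R
}
```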

Spectral algorithm
Starting from Σ_{k≥0} (x ȳ/β)^k / k! = exp(x ȳ/β), it is not difficult to verify that the spectral representation of the β-Ginibre kernel (8) on C reads

K(x, y) = ρβπ Σ_{k≥0} Φ_k(x) \overline{Φ_k(y)}, x, y ∈ C,   (11)

where Φ_k(x) = x^k e^{−|x|²/(2β)} / √(π β^{k+1} k!), k ≥ 0. As noticed in [12], the functions Φ_k satisfy the unusual property of remaining orthogonal on the ball B_R for any R > 0. So the spectral representation of K on B_R is simply the restriction of (11) to B_R:

K(x, y) = Σ_{k≥0} λ_k Φ̃_k(x) \overline{Φ̃_k(y)}, x, y ∈ B_R,   (12)

where

λ_k = ρβπ ‖Φ_k‖²_R with ‖Φ_k‖²_R = γ(k + 1, R²/β)/k!,   (13)

γ(a, z) = ∫_0^z t^{a−1} e^{−t} dt being the lower incomplete gamma function, and

Φ̃_k(x) = Φ_k(x)/‖Φ_k‖_R,   (14)

so that the Φ̃_k's are orthonormal on B_R. To apply the spectral algorithm of [21], we need to first simulate a sequence of Bernoulli random variables B_k with respective parameters λ_k, k ≥ 0. We know that M = max_{k≥0} {B_k ≠ 0} is finite because Σ λ_k < ∞, but the simulation of M is not straightforward; see [24, Appendix D]. A simple solution consists in setting a maximal number k_max of Bernoulli variables that we generate. This amounts to truncating the representation (12) to k_max − 1, which gives exactly the same kernel as in (9) but restricted to B_R. A clever choice of k_max can then be carried out as for the choice of n in (9), that is k_max = ⌈k_ε⌉ where k_ε is the solution of M_β(R, k_ε) = ε. Here ε is a prescribed error (ε = 10^{−10} in Section 4.3) that guarantees through (10) that the choice of k_max has little impact on the expected number of points. The details of this spectral approach are given in Algorithm 5.
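Under the representation (12)–(14) reconstructed above, the eigenvalues on B_R only involve the regularised incomplete gamma function, which in R is pgamma. The sketch below (ours, not the package interface) computes them and draws the Bernoulli variables of the spectral algorithm, with k_max passed directly rather than derived from (10).

```r
ginibre_spectral_bernoullis <- function(rho, beta, R, k_max) {
  k <- 0:(k_max - 1)
  lambda <- rho * beta * pi * pgamma(R^2 / beta, shape = k + 1)  # lambda_k in (13)
  B <- rbinom(k_max, size = 1, prob = lambda)                    # Bernoulli selection of eigenfunctions
  list(lambda = lambda, B = B, n_points = sum(B))                # sum(B) points remain to be generated
}
```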

Simulation study
In order to compare Algorithms 4 and 5, we have performed several simulations of the β-Ginibre process, for different values of the parameters ρ and β, on the ball B_R of area 1, where R = 1/√π, so that the expected number of points coincides with the intensity ρ. An example using each algorithm is shown in Figure 3, where the gray points on the left correspond to the deleted points in the process of Algorithm 4, while the black points represent the final simulated point pattern.

Algorithm 5 β-Ginibre process on B_R by the spectral method
set k_max = ⌈k_ε⌉ where k_ε solves M_β(R, k_ε) = ε; see (10)
sample independently B_0 ~ B(λ_0), ..., B_{k_max−1} ~ B(λ_{k_max−1}), where the λ_k's are given by (13)
if B_k = 0 for all k = 0, ..., k_max − 1 then
    return the empty point configuration ∅
else
    given B_0, ..., B_{k_max−1}, generate the projection DPP with kernel Σ_{k=0}^{k_max−1} B_k Φ̃_k(x) \overline{Φ̃_k(y)} using Algorithm 2, where the Φ̃_k's are given by (14)
end if
Table 2 displays the average computation time (over 500 replications) of each algorithm.The picture is quite clear: the spectral algorithm 5 is more efficient to generate a moderate number of points, while Algorithm 4 becomes preferable for a high number of points.On the other hand, the effect of β is mild for the spectral algorithm, whereas it has a strong impact on the efficiency of Algorithm 4 (the smaller β is, the less efficient it is), which is consistent with the fact that when β is small, many points are generated to be afterwards deleted by the thinning step of Algorithm 4, slowing down the process.

Exact spectral expansion for other kernels
This section aims at paving the way to (nearly) perfect simulation of the Gaussian-type DPP and the Bessel-type DPP, two widely used DPPs in spatial statistics.In both cases we provide the explicit spectral representation of the kernel on the unit ball, which opens the way to perfect simulation by the spectral algorithm.The numerical evaluation of the detailed spectral representations may pose some separate numerical challenges which are left for future research.

Inhomogeneous kernel
We call the DPP with the following kernel an inhomogeneous Gaussian-type DPP:

K_σ(x, y) = ρ √(p_σ(x) p_σ(y)) e^{−‖y−x‖²/α²}, x, y ∈ R^d,   (15)

where ρ > 0 is the intensity parameter, α > 0 is the range parameter, and p_σ denotes the density of a centered isotropic normal distribution on R^d with standard deviation σ > 0, i.e. p_σ(x) = (2πσ²)^{−d/2} e^{−‖x‖²/(2σ²)}.
As verified in Appendix C, this DPP exists if and only if condition (16) holds, or equivalently 2σ²/α² ≥ ρ^{2/d} − ρ^{1/d}. Since the intensity is K_σ(x, x) = ρ p_σ(x), x ∈ R^d, this DPP is an inhomogeneous finite point process on R^d with expected cardinality ρ. Accordingly, realised point patterns are concentrated near the origin. Specifically, let χ²_d(p) denote the quantile of order p of the chi-squared distribution with d degrees of freedom; then for any 0 ≤ ε ≤ 1, the expected number of points in the centered ball with radius σ√(χ²_d(1 − ε)) is (1 − ε)ρ. For instance in dimension d = 2, about 99% of the points are in the centered ball with radius 3σ.
A special property of the inhomogeneous Gaussian-type DPP is that the spectral representation of its kernel is explicitly known on R^d, as explained below (see for instance [43, Section 4.3.1]). This decomposition reads

K_σ(x, y) = Σ_{j∈N^d} λ_j Φ_j(x) Φ_j(y),   (17)

where, setting a = σ^{−2}/4, b = α^{−2}, c = √(a² + 2ab), A = a + b + c and B = b/A, for j = (j_1, ..., j_d) ∈ N^d and x = (x_1, ..., x_d) ∈ R^d,

λ_j = ρ (2a/A)^{d/2} B^{j_1+⋯+j_d}   (18)

and

Φ_j(x) = ∏_{m=1}^{d} (2c/π)^{1/4} (2^{j_m} j_m!)^{−1/2} e^{−c x_m²} H_{j_m}(√(2c) x_m),

H_k being the (physicists') Hermite polynomial of order k. The spectral decomposition (17) is a tensor product and is deduced from the decomposition when d = 1. The latter is easily verified using a classical generating identity for Hermite polynomials, and the orthonormality of the eigenfunctions is in turn a consequence of the orthogonality of the Hermite polynomials. Given (17), the spectral algorithm is in theory feasible to simulate the inhomogeneous Gaussian-type DPP on R^d. Two approximations are however needed in practice. The first one is the truncation of (17) to |j| ≤ ℓ, for some ℓ such that the expected number of generated points Σ_{|j|≤ℓ} λ_j is sufficiently close to ρ. In fact, we know in theory the exact distribution of the index ℓ of the maximal non-null eigenvalue, and perfect simulation of ℓ could in principle be considered, see Appendix D in [24]. But this distribution is in practice impossible to generate and manual truncation is mandatory. This truncation allows us to generate the (2ℓ + 1)^d Bernoulli variables B_j that are needed in the first step of the spectral algorithm. Then, given the B_j's, the simulation boils down to generating the projection DPP with kernel K_proj(x, y) = Σ_{|j|≤ℓ} B_j Φ_j(x)Φ_j(y) using Algorithm 2. But in this algorithm, the simulation with respect to p_i given by (5) cannot easily be performed by Strategy 1 of Section 3.1, because the simulation with respect to the unnormalised density K_proj(x, x) is not straightforward. Instead we need to apply Strategy 2, where the proposal in the rejection sampling is the uniform distribution. To do so, we need a second approximation, which is the choice of a compact set S that contains the generated point pattern with high probability. The natural choice for S is the centered ball with radius σ√(χ²_d(1 − ε)) for a prescribed error ε.
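As a sketch of the first approximation step, the following base-R code (ours) evaluates the eigenvalues (18) over the truncation set {j ∈ N^d : max_m j_m ≤ ℓ} and reports how much of the expected cardinality ρ is captured; the exact truncation set and interface used in the package may differ.

```r
gauss_type_eigenvalues <- function(rho, alpha, sigma, d, ell) {
  a <- 1 / (4 * sigma^2); b <- 1 / alpha^2
  cc <- sqrt(a^2 + 2 * a * b)              # the constant c of (17)-(18)
  A <- a + b + cc; B <- b / A
  J <- as.matrix(expand.grid(rep(list(0:ell), d)))       # multi-indices j in N^d
  lambda <- rho * (2 * a / A)^(d / 2) * B^rowSums(J)      # eigenvalues (18)
  list(J = J, lambda = lambda, captured = sum(lambda))    # sum(lambda) approaches rho as ell grows
}
```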

Homogeneous Gaussian-type kernel
The homogeneous Gaussian-type kernel is defined for any x, y ∈ R^d by

K(x, y) = ρ e^{−‖y−x‖²/α²}, x, y ∈ R^d.   (19)

The DPP with this kernel exists on R^d if and only if ρ α^d π^{d/2} ≤ 1 [25], and it is stationary with intensity ρ. Note that the homogeneous Gaussian-type DPP is the limit in distribution, when σ → ∞, of the inhomogeneous Gaussian-type DPP with intensity parameter ρ σ^d (2π)^{d/2}. This comes from the fact that the convergence of the associated kernel holds uniformly on all compact sets, which implies that the Laplace transforms of the two DPPs asymptotically coincide; see [38, Proposition 3.10]. Despite this property, the spectral representation of (19) is not known, neither on R^d nor on any subset. For this reason, the standard approach to simulate the homogeneous Gaussian-type DPP is to resort to the spectral approximation by a Fourier series.
We explain below an alternative simulation procedure, which is arguably much more accurate than the Fourier series approximation, although more time demanding. The idea is to first generate an inhomogeneous Gaussian-type DPP using the exact spectral representation (17), and then apply an independent thinning procedure to get the final point pattern.
Assume that we wish to simulate the homogeneous Gaussian-type DPP with parameters ρ and α on the unit ball B (the procedure extends straightforwardly to any compact set). Let ρ_σ = ρ sup_{x∈B} p_σ^{−1}(x) = ρ σ^d (2π)^{d/2} e^{1/(2σ²)}.

(i) Choose σ = σ_0 minimising ρ_σ subject to the existence condition (16) for the inhomogeneous Gaussian-type DPP with intensity parameter ρ_σ and range parameter α.
(ii) Generate the inhomogeneous Gaussian-type DPP with intensity parameter ρ_{σ_0}, range parameter α and standard deviation σ_0, on R^d, using the spectral algorithm (see Section 5.1.1).
(iii) Apply an independent thinning procedure with retention probability q(x) = ρ 1_B(x)/(ρ_{σ_0} p_{σ_0}(x)), x ∈ R^d.

The first step aims at choosing the value of σ that minimises the intensity of the inhomogeneous Gaussian-type DPP, thus optimising the computational cost, while ensuring the existence of the process through (16). It is easily verified that the final point pattern is a DPP on R^d with kernel

ρ_{σ_0} e^{−‖y−x‖²/α²} √(p_{σ_0}(x) p_{σ_0}(y)) × √(q(x) q(y)) = ρ e^{−‖y−x‖²/α²} 1_B(x) 1_B(y),

that is, a homogeneous Gaussian-type DPP on B. This procedure provides a nearly perfect simulation algorithm for this DPP, up to the two mild approximations discussed in Section 5.1.1 used in the second step. It can however be very time consuming, and we think that it should be considered instead of the Fourier series approximation only if distributional accuracy is of main concern.
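A sketch of step (iii) in base R (ours), for a realisation X of the inhomogeneous Gaussian-type DPP stored as a matrix with one point per row, using the retention probability written above:

```r
thin_to_homogeneous <- function(X, rho, sigma0) {
  d <- ncol(X)
  p_sigma0 <- function(x) (2 * pi * sigma0^2)^(-d / 2) * exp(-sum(x^2) / (2 * sigma0^2))
  rho_sigma0 <- rho * sigma0^d * (2 * pi)^(d / 2) * exp(1 / (2 * sigma0^2))
  q <- apply(X, 1, function(x) {
    if (sum(x^2) > 1) 0 else rho / (rho_sigma0 * p_sigma0(x))   # q(x), zero outside the unit ball B
  })
  X[runif(nrow(X)) < q, , drop = FALSE]                          # independent thinning
}
```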

Bessel-type kernel
The Bessel-type kernel is defined for any x, y ∈ R^d by

K(x, y) = ρ 2^{d/2} Γ(1 + d/2) J_{d/2}(2‖y − x‖/α) / (2‖y − x‖/α)^{d/2}, x, y ∈ R^d,   (20)

where ρ > 0 is the intensity, α > 0 is the range parameter and J_{d/2} is the Bessel function of the first kind. This kernel defines a DPP on R^d whenever ρ α^d π^{d/2} Γ(1 + d/2) ≤ 1; see [6]. Given ρ > 0, the maximal possible value of α corresponds to the most repulsive stationary DPP on R^d with intensity ρ, while α → 0 corresponds to the Poisson point process with intensity ρ.
As explained below, the spectral representation of this kernel is explicit on the unit ball B and involves the so-called generalized prolate spheroidal functions introduced and studied in [40].This representation opens the way to perfect simulation, by the spectral algorithm, of the Bessel-type DPP on any ball, and therefore on any bounded set (by first simulating on an outer ball and then taking the restriction to the desired set).
The generalized prolate spheroidal functions are the orthogonal eigenfunctions of the truncated Fourier transform, which means that given an arbitrary c ∈ R, each such function ψ satisfies, for a certain γ ∈ C,

∫_{‖y‖≤1} ψ(y) e^{c i x·y} dy = γ ψ(x), x ∈ R^d.   (21)

The eigenvalues γ and the eigenfunctions ψ depend of course on c. These functions are proved to be real-valued, so iterating the latter equality and its conjugate we obtain

|γ|² ψ(x) = ∫_{‖z‖≤1} ( ∫_{‖y‖≤1} ψ(y) e^{−c i z·y} dy ) e^{c i x·z} dz.

If we let K_c denote the Fourier transform of the unit ball, i.e. (see [19]) K_c(x) = ∫_{‖z‖≤1} e^{c i x·z} dz = (2π)^{d/2} J_{d/2}(c‖x‖)/(c‖x‖)^{d/2}, the above identity means that ψ is an eigenfunction of the kernel K_c(x − y) on the unit ball, with eigenvalue |γ|². By choosing c = 2/α, we deduce that ψ is an eigenfunction of the Bessel-type kernel (20) associated with the eigenvalue ρ Γ(1 + d/2) |γ|²/π^{d/2}. The generalized prolate spheroidal functions ψ and the eigenvalues γ have rather complicated expressions. In the one-dimensional case (d = 1), these expressions are obtained in [41] and detailed in [34], where the eigenvalues form a sequence γ_n associated with the eigenfunctions ψ_n. To make the connection with our setting, note that λ_n(c) in [34] corresponds to c|γ_n|²/(2π), and the associated eigenfunction ψ_n(c, x) corresponds to ψ_n(x). Note finally that the norm of ψ_n(c, x) on the unit ball is √(λ_n(c)) (see (9) in [34]), so that in dimension d = 1 the spectral expansion of the Bessel-type kernel on the unit ball is

K(x, y) = (ρπα/2) Σ_{n≥0} λ_n(2/α) · [ψ_n(2/α, x)/√(λ_n(2/α))] [ψ_n(2/α, y)/√(λ_n(2/α))], ∀x, y ∈ [−1, 1],

using the notation of [34]. Several algorithms are available to evaluate λ_n and ψ_n; see for instance [34], [35], [8] and the references therein.
In dimension d ≥ 2, following [40], the sequence of eigenvalues can be written {γ_{N,n}, N ≥ 0, n ≥ 0}, where the eigenvalues γ_{0,n} are simple while the eigenvalues γ_{N,n} for N ≥ 1 have multiplicity h(N, d) = (2N + d − 2)(N + d − 3)!/((d − 2)! N!). Accordingly, their associated eigenfunctions form a sequence ψ_{N,n,l} where l = 1, ..., h(N, d). All formulas are given in [40] and we do not reproduce them in the general case. We only provide some details in Appendix D for the case d = 2; see also [39] for this particular case. This results in the following spectral representation, for any x, y ∈ B:

K(x, y) = Σ_{n≥0} λ_{0,n} ψ_{0,n}(x) ψ_{0,n}(y) + Σ_{N≥1} Σ_{n≥0} λ_{N,n} [ψ_{N,n,1}(x) ψ_{N,n,1}(y) + ψ_{N,n,2}(x) ψ_{N,n,2}(y)],   (22)

where the specific forms of λ_{0,n}, ψ_{0,n}, λ_{N,n}, ψ_{N,n,1} and ψ_{N,n,2} are given in Appendix D. Contrary to the case d = 1, numerical algorithms to compute these eigenvalues and eigenfunctions in dimension d = 2 are not mature yet, but they are the subject of active current research and software is becoming available online [39; 20; 27]. This will hopefully make the computation of the above spectral representation straightforward in the near future.

A.1 Palm distribution and Algorithms 1 and 2
Let X be a DPP on S with kernel K and let y_1, ..., y_m ∈ S. Remember that the conditional distribution of X \ {y_1, ..., y_m} given that {y_1, ..., y_m} ⊂ X is formally called the reduced Palm distribution of X given y_1, ..., y_m. The intensity function of order n of the reduced Palm distribution is

ρ^{(n)}_{y_1,...,y_m}(x_1, ..., x_n) = ρ^{(n+m)}(x_1, ..., x_n, y_1, ..., y_m) / ρ^{(m)}(y_1, ..., y_m)   (23)

if ρ^{(m)}(y_1, ..., y_m) > 0, and 0 otherwise; see [9]. Since X is a DPP, we may simplify the expression of ρ^{(n+m)} thanks to (1) and the standard block-determinant formula

det [ A  B ; C  D ] = det(A) det(D − C A^{−1} B),   (24)

where det(A) = ρ^{(m)}(y_1, ..., y_m), whereby we deduce that the reduced Palm distribution of X given y_1, ..., y_m is still a DPP with kernel

K_{y_1,...,y_m}(x, y) = K(x, y) − k_m(x) K_m^{−1} k_m(y)*, x, y ∈ S,   (25)

where k_m(x) = (K(x, y_1), ..., K(x, y_m)) and K_m = (K(y_j, y_l))_{1≤j,l≤m}. This result is not enough to simulate a general DPP, because it says nothing about its cardinality. But if X is a projection DPP on S, then we know its cardinality n = ∫_S K(x, x)dx, and ρ^{(n)} becomes the unnormalised density of the n-tuple of points of X, with respect to the Lebesgue measure on S^n. In this case, for any i = n − 1, ..., 1, by (23) with (m, n) replaced by (n − i, i), ρ^{(i)}_{y_{i+1},...,y_n} is the unnormalised density of X \ {y_{i+1}, ..., y_n} given that {y_{i+1}, ..., y_n} ⊂ X. We can thus apply a sequential conditional simulation: generate the first point y_n of X, the marginal density of which is K(x, x)/n, then the second point y_{n−1} given the first one, according to K_{y_n}(x, x)/(n − 1), then the third one given the first two points, according to K_{y_n,y_{n−1}}(x, x)/(n − 2), and so on. This procedure leads to Algorithm 1.
To accelerate this algorithm, the expression of K_{y_{i+1},...,y_n}(x, x), or equivalently of i p_i(x) in (3), can be rewritten to get (5) in Algorithm 2, provided we know the spectral representation of K given by (4). Indeed, the matrix of the projection onto the orthogonal complement of span{v(X_n), ..., v(X_{i+1})} is P_i = I_n − V(V*V)^{−1}V* = I_n − V K_i^{−1} V*, where V is the n × (n − i) matrix with columns v(X_n), ..., v(X_{i+1}). Then, starting from (3), it is not difficult to verify that i p_i(x) = ‖P_i v(x)‖². This means that i p_i may be obtained by successive orthogonalisation, which is exploited in Algorithm 2 (see [25] for more details).

A.2 Conditional distribution given an outside set
Let A and A^c be a partition of S, i.e. S = A ∪ A^c and A ∩ A^c = ∅. If X is a projection DPP on S with cardinality n, then the density of X ∩ A given that X ∩ A^c = {y_1, ..., y_m}, m < n, is

f_{A|A^c}(X ∩ A = {x_1, ..., x_{n−m}} | X ∩ A^c = {y_1, ..., y_m}) = f_S(X = {x_1, ..., x_{n−m}, y_1, ..., y_m}) / f_{A^c}(X ∩ A^c = {y_1, ..., y_m})

if {x_1, ..., x_{n−m}} ⊂ A and 0 otherwise. We have excluded the trivial case m = n since X ∩ A is simply the empty set in this case. Since X is a projection DPP on S, the numerator above is (up to a multiplicative constant) the determinant of the 2 × 2 block matrix

[ [K(y_i, y_j)]_{1≤i,j≤m}            [K(y_i, x_j)]_{1≤i≤m, 1≤j≤n−m} ;
  [K(x_i, y_j)]_{1≤i≤n−m, 1≤j≤m}     [K(x_i, x_j)]_{1≤i,j≤n−m} ].
By (24) and using the same notation as in (25), we deduce that

f_{A|A^c}(X ∩ A = {x_1, ..., x_{n−m}} | X ∩ A^c = {y_1, ..., y_m}) ∝ det[K_{y_1,...,y_m}(x_i, x_j)]_{1≤i,j≤n−m},

where the constant of proportionality does not depend on {x_1, ..., x_{n−m}}. This proves that, given X ∩ A^c = {y_1, ..., y_m}, X ∩ A is a projection DPP with cardinality n − m and kernel K_{y_1,...,y_m}(x, y)1_A(x)1_A(y).

A.3 How to choose a projection DPP for conditional simulation?
We start with a simple setting. Assume that we observe m points of a point process Y that is an independent thinning of a homogeneous point process X on S = [0, 1]^d with constant thinning probability τ ∈ [0, 1]. Assume moreover that we know the value of τ. Then a way to model X as a projection DPP on S that fits with the observed thinned point process Y is as follows.
Choose for X a parametric stationary DPP model with kernel K(x, y) = ρ K_α(x − y), with K_α(0) = 1, for which we know the Fourier transform φ_α of K_α, for instance the Gaussian-type DPP model [25]. Here ρ is the intensity of X and α stands for the remaining parameters of the model; typically α is a univariate range parameter. Under this model, Y is a DPP with kernel K_thin(x, y) = τ K(x, y), so its intensity is τρ while its pair correlation function is 1 − K²_α(x − y). Consequently we can estimate from Y the intensity of X by ρ̂ = m/τ, and the remaining parameter α by a contrast estimation method based on the pcf or on Ripley's K function of Y, to get α̂ (see [25; 7]). Using the Fourier approximation of [25], we may then approximate X by a projection DPP with random kernel, conditional on Σ_k B_k ≥ m, where the B_k's are independent Bernoulli variables with respective rates ρ̂ φ_α̂(k).

The above setting can be generalized to the case where the thinning probability is a known function q(x), x ∈ S. For instance we may have q(x) = τ a(x)/(sup_{x∈S} a(x)), where τ ∈ [0, 1] is known and a(x) is an observed auxiliary variable. Note that in-painting conditional simulation in A given a point pattern in A^c corresponds to the particular case where q(x) = 1_{A^c}(x). Then Y becomes a DPP with kernel K_thin(x, y) = ρ √(q(x)) K_α(x − y) √(q(y)) and we can deduce ρ̂ = m/(∫_S q(x)dx) and α̂ by minimum contrast estimation. The approximation of X by a projection DPP can then be carried out as above.
A further generalization is possible in the case where X is inhomogeneous with intensity ρ_θ(x), where θ is an unknown parameter. As before, we assume that we observe m points of Y, which is a thinning of X with thinning probability q(x) = τ a(x)/(sup_{x∈S} a(x)), where τ ∈ [0, 1] is known and a(x) is an observed auxiliary variable. We can then start with a parametric DPP model for X with kernel K(x, y) = √(ρ_θ(x)) K_α(x − y) √(ρ_θ(y)), from which we deduce that Y is an inhomogeneous DPP with kernel √(q(x)ρ_θ(x)) K_α(x − y) √(ρ_θ(y)q(y)). Based on the observation of Y, we may deduce the estimates θ̂ and α̂ by a two-step estimation method as in [26]. Remember that X can be seen as a thinning of a homogeneous DPP X⁺ with kernel K⁺(x, y) = ρ⁺ K_α(x − y) and thinning probability ρ_θ(x)/ρ⁺, where ρ⁺ is an upper bound of ρ_θ(x) on S. Following the same scheme as before, we may approximate X⁺ by a projection DPP with random kernel, conditional on ρ̂⁺, α̂ and Σ_{k≥0} B_k ≥ m, where the B_k's are independent Bernoulli variables with respective rates ρ̂⁺ φ_α̂(k). A conditional simulation of X⁺ given Y is then possible. To finally get a realisation of X given Y, it suffices to thin the realisation of X⁺ \ Y with thinning probability ρ_θ̂(x)/ρ̂⁺.

B.1 A more efficient proposal in dimension d = 1

The proposal has density f(x)/F(1) and the rate of acceptance is i/F(1). Simulation from the proposal can be done by the inversion method: if U ~ U([0, 1]), F^{−1}(F(1)U) is distributed according to the proposal.
To complete this procedure, it remains to provide the formulas for F(1) and F^{−1}(x). By construction of {x_1, ..., x_p}, the value of F can be computed at each x_k; for k = 1, ..., p, we denote y_k^− = F(x_k − √n/a), y_k = F(x_k) and y_k^+ = F(x_k + √n/a). For x_k − √n/a ≤ x ≤ x_k, F admits an explicit expression, and the same holds true when x_k ≤ x ≤ x_k + √n/a, from which we deduce the expression of F^{−1}(y) for y_k^− ≤ y ≤ y_k and for y_k ≤ y ≤ y_k^+. Finally, if x_k + √n/a ≤ x ≤ x_{k+1} − √n/a, then F(x) = y_k^+ + n(x − (x_k + √n/a)) = nx − (4k/(3a)) n^{3/2}, so that F^{−1}(y) = (y + (4k/(3a)) n^{3/2})/n for y_k^+ ≤ y ≤ y_{k+1}^−.
With this new proposal, the acceptance rate in the rejection sampling for the last step of Algorithm 2, i.e. for the simulation of the last point given the n − 1 first ones, becomes 1/F(1) instead of 1/n for a uniform proposal. For the most repulsive kernel (i.e. J = {−ℓ, ..., ℓ}, implying n = 2ℓ + 1), we obtain a² ≤ π²n(n² − 1)/3, so that this new rate can be 3 times more effective; see (27).

C Existence of the inhomogeneous Gaussian-type DPP
The existence is equivalent to 0 ≤ λ_j ≤ 1 for any j ∈ N^d, where λ_j is given by (18). By this formula, since B ≤ 1, we have λ_j ≤ λ_0 for any j, and the condition is verified if and only if ρ(2a/A)^{d/2} ≤ 1. Since A = a + b + √(a² + 2ab), this is equivalent to a(2ρ^{2/d} − 1) − b ≤ √(a² + 2ab), which is (16).
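The two equivalent formulations of the existence condition can be cross-checked numerically; the following base-R sketch (ours) returns both.

```r
check_existence_inhom_gauss <- function(rho, alpha, sigma, d) {
  a <- 1 / (4 * sigma^2); b <- 1 / alpha^2
  A <- a + b + sqrt(a^2 + 2 * a * b)
  c(lambda0_condition    = rho * (2 * a / A)^(d / 2) <= 1,                   # lambda_0 <= 1
    sigma_alpha_condition = 2 * sigma^2 / alpha^2 >= rho^(2 / d) - rho^(1 / d))
}
```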

D Generalised prolate spheroidal wave functions in dimension d = 2
We provide in this appendix some details on the spectral representation (22) of the Bessel-type kernel when d = 2. As explained in Section 5.2, this amounts to describing the eigenvalues and the associated generalized prolate spheroidal eigenfunctions in (21), where c = 2/α. The following formulas come from [40] and [39]. When d = 2, the sequence of eigenvalues in (21) is {γ_{N,n}, N ≥ 0, n ≥ 0}, where the eigenvalues γ_{0,n}, n ≥ 0, are simple, and the eigenvalues γ_{N,n} for N ≥ 1 and n ≥ 0 are associated with h(N, 2) = 2 eigenfunctions. Turning to polar coordinates by setting x = (r cos θ, r sin θ) with r ∈ [0, 1] and θ ∈ [0, 2π], the eigenfunctions read, for any n ≥ 0,

ψ_{0,n}(x) = (1/√(2π)) R_{0,n}(r), ∀x = (r cos θ, r sin θ) ∈ B,

and, for any N ≥ 1 and n ≥ 0 and all x = (r cos θ, r sin θ) ∈ B,

ψ_{N,n,1}(x) = (1/√π) R_{N,n}(r) cos(Nθ), ψ_{N,n,2}(x) = (1/√π) R_{N,n}(r) sin(Nθ).

In these expressions R_{N,n}(r) is given in terms of a series of Zernike polynomials:

R_{N,n}(r) = Σ_{k≥0} d_k^{N,n} T_{N,k}(r), ∀r ∈ [0, 1].   (28)

In our notation the Zernike polynomials are defined for any r ∈ [0, 1] by T_{N,k}(r) = √(2(2k + N + 1)) r^{N+1/2} P_k^{N,0}(1 − 2r²), where the P_k^{N,0} are Jacobi polynomials, and they satisfy ⟨T_{N,k}, T_{N,l}⟩ = δ_{k,l}. (Note that the Zernike polynomials in [40] are not normalised, unlike in [39] and in our case, explaining a slight difference in the definition.) Concerning the coefficients (d_k^{N,n})_{k≥0} in (28), they satisfy the recurrence relation

b^N_{k,k−1} d^{N,n}_{k−1} + b^N_{k,k} d^{N,n}_{k} + b^N_{k,k+1} d^{N,n}_{k+1} = χ_{N,n} d^{N,n}_{k},

where χ_{N,n} ∈ R is an unknown quantity that does not depend on k and where the coefficients b^N_{i,j}, whose expressions are given in [40] and [39], are symmetric (b^N_{i,j} = b^N_{j,i}) and do not depend on n. This means that, given N, the (infinite) vector (d^{N,n}_k)_{k≥0} is the n-th eigenvector, associated with the eigenvalue χ_{N,n}, of the symmetric tridiagonal (infinite) matrix B_N with entries b^N_{i,j} (recall that we have set c = 2/α).
For the numerical computation of (28), one needs, for each N, to truncate the matrix B_N at some size k_max, depending on the expected accuracy, which allows one to deduce all coefficients d^{N,n}_k for k ≤ k_max − 1 and n ≤ k_max − 1. The choice of k_max is discussed in [20]. Note that, in view of the orthonormality of the Zernike polynomials, i.e. ⟨T_{N,k}, T_{N,l}⟩ = δ_{k,l}, the eigenfunctions ψ_{0,n}, ψ_{N,n,1} and ψ_{N,n,2} are orthonormal whenever the coefficients (d^{N,n}_k)_{k≥0} are normalised, i.e. Σ_{k≥0} (d^{N,n}_k)² = 1, which is numerically achieved by normalising the eigenvectors of the (truncated) matrix B_N.
Concerning the eigenvalues in (22), they are equal to 2πρα λ²_{N,n}(2/α), where λ_{N,n}(c), c = 2/α, satisfies the relation (30) (see (12) in [39]), the equivalence coming from the fact that for a Jacobi polynomial P^{N,0}_k(1) = C(N + k, k). Evaluating the right-hand side of (30) at r = 0 and using (28) again, together with the facts that T_{N,0}(r) = √(2(N + 1)) r^{N+1/2} and ⟨T_{N,0}, T_{N,k}⟩ = δ_{0,k}, we obtain (32). In view of (30), the two right-hand side expressions in (31) and (32) are equal, which gives the formula (33) for λ_{N,n}(c). For the numerical aspects, formula (33) may be implemented directly, or only for λ_{N,0} and then a recursive relation can be used to get λ_{N,n} for n ≥ 1, as advised in dimension d = 1 in [35] through their relation (7.13), an approach generalised to d ≥ 2 in [27; 20].

Figure 1: Conditional density p_i at intermediate steps of Algorithms 1 and 2, for the Fourier basis and n = 121 on the unit square. The top row shows untransformed values while the bottom row is on log scale. Left: density of the 20th point given the first 19 points (superimposed as white circles); right: density of the last point given the preceding 120 points.

Figure 2: Conditional simulation of the red points (shown as triangles) given the black points, assuming that the full point pattern formed by the union of the black and red points is a projection DPP. Left: simulation in the whole domain. Right: simulation in the inner square (in-painting).

Figure 3: Realisations of two β-Ginibre processes generated with Algorithms 4 (left) and 5 (right). In all cases ρ = 200. In the top row β = β_max and in the bottom row β = β_max/2. For Algorithm 4 on the left the gray points represent the points deleted by the algorithm.