Multiscale matrix pencils for separable reconstruction problems

The nonlinear inverse problem of exponential data fitting is separable, since the fitting function is a linear combination of parameterized exponential functions; this allows one to solve for the linear coefficients separately from the nonlinear parameters. The matrix pencil method, which reformulates the problem as a generalized eigenvalue problem for the nonlinear parameters and a structured linear system for the linear parameters, is generally considered the most stable way to solve the problem computationally. In Section 2 the matrix pencil associated with the classical complex exponential fitting or sparse interpolation problem is summarized, and the concepts of dilation and translation are introduced to obtain matrix pencils at different scales. Exponential analysis was earlier generalized to the use of several polynomial basis functions and some operator eigenfunctions. However, in most generalizations a computational scheme in terms of an eigenvalue problem is lacking. In the subsequent Sections 3--6 the matrix pencil formulation, including the dilation and translation paradigm, is generalized to more functions. Each of these periodic, polynomial or special function classes needs a tailored approach, in which optimal use is made of the properties of the parameterized elementary or special function appearing in the sparse interpolation problem under consideration. With each generalization a structured matrix pencil is associated, immediately leading to a computational scheme for the nonlinear and linear parameters, respectively from a generalized eigenvalue problem and one or more structured linear systems. Finally, in Section 7 we illustrate the new methods.


Introduction
The nonlinear inverse problems of complex exponential analysis [1,2] and sparse polynomial interpolation [3,4] from uniformly sampled values can both be traced back to the exponential fitting method of de Prony from the 18th century [5,6]:

f(t_j) = Σ_{i=1}^n α_i exp(ϕ_i t_j),  α_i, ϕ_i ∈ C,  t_j = j∆ ∈ R,  j = 0, …, 2n−1.  (1)

The French nobleman de Prony solved the problem by obtaining the n nonlinear parameters ϕ_i from the roots of a polynomial and the n coefficients α_i as the solution of a Vandermonde structured linear system. Almost 200 years later this basic fitting problem, which plays an important role [7,8] in many computational science disciplines, engineering applications and digital signal processing, was reformulated in terms of a generalized eigenvalue problem [9]. This reformulation, referred to as the matrix pencil method, is generally the most reliable way to solve the exponential analysis problem computationally. It is the property exp(ϕ_i t_{j+1}) = exp(ϕ_i ∆) exp(ϕ_i t_j) of the building blocks exp(ϕ_i t) in (1) that allows one to split the nonlinear interpolation problem (1) into two numerical linear algebra problems: the separate computation of the nonlinear parameters ϕ_i from a generalized eigenvalue problem on the one hand, and of the linear coefficients α_i from a structured linear system on the other.
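For readers who want to experiment, the two-step scheme just described (generalized eigenvalues for the ϕ_i, a Vandermonde system for the α_i) can be sketched in a few lines of Python. This is a toy illustration with invented parameter values, not the numerically hardened procedure of [9]:

```python
import numpy as np
from scipy.linalg import eig, hankel, lstsq

# Invented test signal: f(t) = sum_i alpha_i * exp(phi_i * t).
n = 2
alpha = np.array([1.0, 2.5])
phi = np.array([-0.3 + 2.0j, -0.1 - 1.0j])
dt = 0.1
j = np.arange(2 * n)
f = (alpha[None, :] * np.exp(np.outer(j * dt, phi))).sum(axis=1)

# Shifted Hankel pencil (H1, H0); its generalized eigenvalues are exp(phi_i*dt).
H0 = hankel(f[:n], f[n - 1:2 * n - 1])
H1 = hankel(f[1:n + 1], f[n:2 * n])
lam = eig(H1, H0, right=False)
phi_est = np.log(lam) / dt          # unique since |Im(phi_i)| * dt < pi

# Linear coefficients from the (transposed) Vandermonde system.
V = np.exp(np.outer(j * dt, np.sort_complex(phi_est)))
alpha_est = lstsq(V, f)[0]
```

The pencil (H1, H0) is filled with only the 2n samples f_0, …, f_{2n−1}; `scipy.linalg.eig` solves the generalized problem H1 v = λ H0 v directly.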
Problem statement (1) was partially generalized, to the use of non-standard polynomial bases such as the Pochhammer basis and the Chebyshev and Legendre polynomials [10-14], and to the use of some eigenfunctions of linear operators [15-17]. Many of these generalizations are unified in the algebraic framework described in [18].
What is lacking in most of the generalizations above is a reformulation in terms of numerical linear algebra problems. In this paper we carry the generalized eigenvalue formulation of (1), so essentially the matrix pencil method, over to linear combinations of the trigonometric cosine and sine functions, the hyperbolic cosine and sine functions, the Chebyshev (1st, 2nd, 3rd, 4th kind) and spread polynomials, the Gaussian function, and the sinc and gamma functions. In addition, we introduce the paradigm of a selectable dilation σ and translation τ of the interpolation points, as used in refinable function theory. All of the above functions namely satisfy a property similar to exp(ϕ_i t_{τ+(j+1)σ}) = exp(ϕ_i t_τ) exp(ϕ_i σ∆) exp(ϕ_i t_{jσ}), which allows one to separate the effect of the scale σ and the shift τ on the estimation of the parameters ϕ_i and α_i. This multiscale option will prove to be useful in several situations, as further detailed in Section 2.2.
In each of the subsequent sections, on the trigonometric and hyperbolic functions, polynomial functions, the Gaussian distribution, and some special functions, a different approach is required to express the nonlinear inverse problem under consideration as a generalized eigenvalue problem, tailored to the particular properties of the building block g(ϕ_i; t) in use. The interpolant is always computed directly from the evaluations f_j, where the t_j follow some regular interpolation point pattern associated with the specific function g(ϕ_i; t).

Exponential fitting
We first lay out how the whole theory works for the exponential problem, where g(ϕ_i; t) = exp(ϕ_i t).
From exp(ϕ_i t_{j+1}) = exp(ϕ_i ∆) exp(ϕ_i t_j) we find that

f(t_{j+1}) = Σ_{i=1}^n α_i exp(ϕ_i ∆) exp(ϕ_i t_j),

or more generally, for σ ∈ N and τ ∈ Z, that

f(t_{τ+jσ}) = Σ_{i=1}^n (α_i exp(ϕ_i τ∆)) (exp(ϕ_i σ∆))^j.  (5)

Hence we see that the scaling σ and the shift τ are separated in a natural way when evaluating (3) at t_{τ+jσ}, a property that plays an important role in the sequel. The freedom to choose σ and τ when setting up the sampling scheme allows one to stretch, shrink and translate the otherwise uniform progression of sampling points dictated by the sampling step ∆. The aim is now to estimate the model order n and the parameters ϕ_1, …, ϕ_n and α_1, …, α_n in (3) from samples f_j at a selection of points t_j.

Generalized eigenvalue formulation
In this and the next subsection we assume for a moment that n has been determined. With n, σ ∈ N and τ ∈ Z we define the Hankel matrix

τσH_n := (f_{τ+(k+ℓ)σ})_{k,ℓ=0}^{n−1}.

It is well-known that the Hankel matrix τσH_n can be decomposed as

τσH_n = V_n D_τ A_n V_n^T,  V_n = (exp(ϕ_i kσ∆))_{k=0,…,n−1; i=1,…,n},  A_n = diag(α_1, …, α_n),  D_τ = diag(exp(ϕ_1 τ∆), …, exp(ϕ_n τ∆)).  (7)

This decomposition on the one hand translates (5) and on the other hand connects it to a generalized eigenvalue problem: the values exp(ϕ_i σ∆) can be retrieved [9] as the generalized eigenvalues of the problem

σσH_n v_i = exp(ϕ_i σ∆) 0σH_n v_i,  i = 1, …, n,  (8)

where the v_i are the generalized right eigenvectors. Setting up this generalized eigenvalue problem requires the 2n samples f_{jσ}, j = 0, …, 2n−1. A similar statement holds for the values exp(ϕ_i τ∆), obtained from the linear pencil (τσH_n, 0σH_n). In [5,9] the choices σ = 1 and τ = 1 are made, and then, from the generalized eigenvalues exp(ϕ_i ∆), the complex numbers ϕ_i can be retrieved uniquely because of the restriction |ℑ(ϕ_i)|∆ < π.
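A small sketch of the dilated pencil (with invented values and real ϕ_i, so that no aliasing ambiguity occurs): the matrices are simply filled with the samples f_{jσ}, and the eigenvalues come out as exp(ϕ_i σ∆).

```python
import numpy as np
from scipy.linalg import eig, hankel

# Invented example: real decay rates, dilation sigma = 4.
alpha = np.array([1.0, 2.0])
phi = np.array([-0.2, -0.5])
dt, sigma, n = 0.1, 4, 2
j = np.arange(2 * n)
f = (alpha[None, :] * np.exp(np.outer(j * sigma * dt, phi))).sum(axis=1)

# Pencil filled with the dilated samples f_{j*sigma}.
H0 = hankel(f[:n], f[n - 1:2 * n - 1])
H1 = hankel(f[1:n + 1], f[n:2 * n])
lam = eig(H1, H0, right=False).real     # exp(phi_i * sigma * dt)
phi_est = np.sort(np.log(lam) / (sigma * dt))[::-1]
```

With complex ϕ_i the logarithm would only determine ϕ_i modulo 2πi/(σ∆), which is exactly the aliasing issue treated next.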
With σ > 1 the ϕ_i cannot necessarily be retrieved uniquely from the generalized eigenvalues exp(ϕ_i σ∆), since |ℑ(ϕ_i)|σ∆ may well be larger than π. Let us indicate how to solve that problem, which is called aliasing.

Vandermonde structured linear systems
For chosen σ and with τ = 0, the α_i are computed from the interpolation conditions

Σ_{i=1}^n α_i exp(ϕ_i t_{jσ}) = f_{jσ},  j = 0, …, 2n−1,  (9)

either by solving the system in the least squares sense, in the presence of noise, or by solving a subset of n interpolation conditions in the case of noiseless samples. The samples of f(t) occurring in (9) are the same samples as the ones used to fill the Hankel matrices in (8). Note that exp(ϕ_i t_{jσ}) = (exp(ϕ_i σ∆))^j, and that for fixed σ the coefficient matrix of (9) is therefore a transposed Vandermonde matrix with nodes exp(ϕ_i σ∆). In a noisy context the Hankel matrices in (8) can also be extended to rectangular N × ν matrices with N > ν ≥ n, and the generalized eigenvalue problem can be considered in a least squares sense [26]. In that case the index j in (9) runs from 0 to N + ν − 1.
Next, for chosen nonzero τ, a shifted set of at least n samples f_{τ+jσ} is interpreted as

Σ_{i=1}^n (α_i exp(ϕ_i τ∆)) exp(ϕ_i t_{jσ}) = f_{τ+jσ},  j = k, …, k+n−1,  (10)

where k ∈ {0, 1, …, n} is fixed. Note that (10) is merely a shifted version of the original problem (3), where the effect of the shift is pushed into the coefficients of (3). The latter is possible because of (5). From (10), having the same (but maybe fewer) coefficient matrix entries as (9), we compute the unknown coefficients α_i exp(ϕ_i τ∆). From α_i and α_i exp(ϕ_i τ∆) we then obtain

exp(ϕ_i τ∆) = (α_i exp(ϕ_i τ∆)) / α_i,

from which again the ϕ_i cannot necessarily be extracted unambiguously if τ > 1. But the following can be proved [19].
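The intersection step that undoes the aliasing can be sketched as follows for a single purely imaginary ϕ = iω; the construction of the candidate sets from the principal argument is our own illustration of the idea, with invented values:

```python
import numpy as np

# Hypothetical setting: gcd(sigma, tau) = 1, and a frequency that is
# aliased at scale sigma (|omega| * sigma * dt > pi) but admissible overall.
dt, sigma, tau = 0.1, 7, 3
omega_true = 25.0                       # |omega| * dt < pi
e_sigma = np.exp(1j * omega_true * sigma * dt)
e_tau = np.exp(1j * omega_true * tau * dt)

def candidates(e, s):
    """All omega in [-pi/dt, pi/dt] with exp(1j*omega*s*dt) == e."""
    base = np.angle(e) / (s * dt)
    k = np.arange(-s, s + 1)
    c = base + 2 * np.pi * k / (s * dt)
    return c[np.abs(c) * dt <= np.pi]

S = candidates(e_sigma, sigma)          # candidates from exp(i*omega*sigma*dt)
T = candidates(e_tau, tau)              # candidates from exp(i*omega*tau*dt)
# The (essentially unique) common candidate is the de-aliased frequency.
omega_est = min(S, key=lambda w: np.min(np.abs(T - w)))
```

Because gcd(σ, τ) = 1, the two candidate grids have spacings 2π/(σ∆) and 2π/(τ∆) and meet only once inside the admissible band.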
So at this point the nonlinear parameters ϕ_i, i = 1, …, n and the linear coefficients α_i, i = 1, …, n in (3) are computed through the solution of (8) and (9), and if σ > 1 also (10). It remains to discuss how to determine n.

Determining the sparsity
What can be said about the number of terms n in (3), which is also called the sparsity? From [27, p. 603] and [28] we know for general σ that det 0σH_ν = 0 for ν > n, while for ν ≤ n the determinant is nonzero, vanishing only accidentally when ν < n. The regularity of 0σH_n persists for any value of ∆ used when collecting the samples to fill the matrix, while an accidental singularity of 0σH_ν with ν < n only occurs for an unfortunate choice of ∆ that makes the determinant zero. A standard approach to make use of this statement is to compute a singular value decomposition of the Hankel matrix 0σH_ν, and this for increasing values of ν. In the presence of noise and/or clustered eigenvalues, this technique is not always reliable, and we either need to consider rather large values of ν for a correct estimate of n [24] or turn our attention to some validation add-on [25].
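The singular value approach can be sketched as follows; the signal, the noise level and the rank threshold are all invented for illustration:

```python
import numpy as np
from scipy.linalg import hankel

# Invented sparse signal with n = 3 terms, lightly perturbed.
rng = np.random.default_rng(0)
alpha = np.array([1.0, -2.0, 0.5])
phi = np.array([0.4j, -1.1j, 2.3j])
dt, N = 0.2, 20
t = np.arange(2 * N) * dt
f = (alpha[None, :] * np.exp(np.outer(t, phi))).sum(axis=1)
f += 1e-8 * rng.standard_normal(2 * N)

# Numerical rank of an oversized Hankel matrix estimates the sparsity.
H = hankel(f[:N], f[N - 1:2 * N - 1])
s = np.linalg.svd(H, compute_uv=False)
n_est = int(np.sum(s > 1e-6 * s[0]))    # illustrative relative threshold
```

With clustered eigenvalues or larger noise the gap in the singular value sequence blurs, which is exactly why the validation add-ons cited above exist.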
With σ = 1 and in the absence of noise, the exponential fitting problem can be solved from 2n samples for α_1, …, α_n and ϕ_1, …, ϕ_n, plus at least one additional sample to confirm n. As pointed out already, it may be worthwhile to take σ > 1 and throw in at least an additional n values f_{τ+jσ} to remedy the aliasing. Moreover, if max_{i=1,…,n} |ℑ(ϕ_i)| is quite large, then ∆ may become so small that collecting the samples f_j becomes expensive, and so it may be more feasible to work with a larger sampling interval σ∆.
We remark that, although the sparse interpolation problem can be solved from the 2n samples f_j, j = 0, …, 2n−1 when σ = 1, we need at least an additional n samples at the shifted locations t_{τ+jσ}, j = k, …, k+n−1 when σ > 1. The former is Prony's original problem statement in [5] and the latter is the generalization presented in [19]. The factorization (7) allows some alternative computational schemes, which may deliver a better numerical accuracy but demand somewhat more samples.
First we remark that the use of a shift τ can of course be replaced by the choice of a second scale factor σ′ relatively prime with σ. But this option requires the solution of two generalized eigenvalue problems, of which the generalized eigenvalues need to be matched in a combinatorial step. Also, the sampling scheme looks different and requires the 4n − 1 sampling points t_{jσ}, j = 0, …, 2n−1, and t_{jσ′}, j = 1, …, 2n−1. A better option is to set up the generalized eigenvalue problem

τσH_n v_i = exp(ϕ_i τ∆) 0σH_n v_i,  i = 1, …, n,  (12)

which in a natural way connects each eigenvalue exp(ϕ_i τ∆), bringing forth the set T_i, to its associated eigenvector v_i, bringing forth the set S_i. The latter is derived from the quotient of any two consecutive entries in the vector 0σH_n v_i, which is a scalar multiple of α_i (1, exp(ϕ_i σ∆), …, exp(ϕ_i (n−1)σ∆))^T.
Such a scheme requires the 4n − 2 samples f_{jσ}, j = 0, …, 2n−2, and f_{τ+jσ}, j = 0, …, 2n−2. Note that the generalized eigenvectors v_i are actually insensitive to the shift τ: the eigenvectors of (8) and (12) are identical. This is a remarkable fact that reappears in each of the subsequent (sub)sections dealing with other choices for g(ϕ_i; t).
We now turn our attention to the identification of other families of parameterized functions and patterns of sampling points. We distinguish between trigonometric and hyperbolic, polynomial, and other important functions. Our focus here is on the derivation of the mathematical theory and not on the practical aspects of the numerical computation.

Trigonometric functions
The generalized eigenvalue formulation (8), incorporating the scaling parameter σ, was generalized to g(ϕ_i; t) = cos(ϕ_i t) in [11] for integer ϕ_i only. Here we present a more elegant full generalization for cos(ϕ_i t), including the use of a shift τ as in (10) to restore uniqueness of the solution if necessary. In addition we generalize the scale and shift approach to the sine, hyperbolic cosine and hyperbolic sine functions.

Cosine function
Let g(ϕ_i; t) = cos(ϕ_i t) with ϕ_i ∈ R, where (13) holds, and consider the sparse interpolation problem

f(t) = Σ_{i=1}^n α_i cos(ϕ_i t).  (14)

Since cos(ϕ_i t) = cos(−ϕ_i t), we are only interested in the |ϕ_i|, i = 1, …, n, disregarding the sign of each ϕ_i. With t_j = j∆ we still denote f_j = f(t_j), and because of

cos(ϕ_i (τ+jσ)∆) + cos(ϕ_i (τ−jσ)∆) = 2 cos(ϕ_i τ∆) cos(ϕ_i jσ∆),

we now also introduce, for fixed chosen σ and τ,

F(σ, τ; t_j) = F_{τ+jσ} := 1/2 (f_{τ+jσ} + f_{τ−jσ}) = Σ_{i=1}^n α_i cos(ϕ_i τ∆) cos(ϕ_i jσ∆).  (16)

Relation (15) deals with the case σ = 1 and τ = 1, while the expression F_{τ+jσ} is a generalization of (15) for general σ and τ. Observe the achieved separation in (16) of the scaling σ and the shift τ. We emphasize that σ and τ are fixed before defining the F(σ, τ; t_j). Otherwise the index j cannot be associated uniquely with the value 1/2 (f_{τ+jσ} + f_{τ−jσ}).
Besides the Hankel structured τσH_n, we introduce the Toeplitz structured τσT_n := (f_{τ+(k−ℓ)σ})_{k,ℓ=0}^{n−1}, which is symmetric when τ = 0. Now consider the structured matrix

τσC_n := 1/4 ( τσH_n + τ,−σH_n + τσT_n + τ,−σT_n ),

where τ,−σT_n = τσT_n^T. When τ = 0, the first two matrices in the sum coincide and the latter two do as well. Note that working directly with the cosine function, instead of expressing it in terms of the exponential as cos x = (exp(ix) + exp(−ix))/2, reduces the size of the matrices involved in the pencil from 2n to n.

Theorem 1. The matrix τσC_n factorizes as

τσC_n = W_n A_n D_τ W_n^T,  W_n = (cos(ϕ_i kσ∆))_{k=0,…,n−1; i=1,…,n},  A_n = diag(α_1, …, α_n),  D_τ = diag(cos(ϕ_1 τ∆), …, cos(ϕ_n τ∆)).

Proof. The proof is a verification of the matrix product entry at position (k+1, ℓ+1) for k, ℓ = 0, …, n−1:

Σ_{i=1}^n α_i cos(ϕ_i τ∆) cos(ϕ_i kσ∆) cos(ϕ_i ℓσ∆) = 1/4 ( f_{τ+(k+ℓ)σ} + f_{τ−(k+ℓ)σ} + f_{τ+(k−ℓ)σ} + f_{τ−(k−ℓ)σ} ). □

This matrix factorization translates (16) and opens the door to the use of a generalized eigenvalue problem: the cosine equivalent of (8) becomes

σσC_n v_i = cos(ϕ_i σ∆) 0σC_n v_i,  i = 1, …, n,  (18)

where the v_i are the generalized right eigenvectors. Setting up (18) takes 2n evaluations f_{jσ}, as in the exponential case. Before turning our attention to the extraction of the ϕ_i from the generalized eigenvalues cos(ϕ_i σ∆), we solve two structured linear systems of interpolation conditions. The coefficients α_i in (14) are computed from

Σ_{i=1}^n α_i cos(ϕ_i jσ∆) = f_{jσ},  j = 0, …, 2n−1.

Making use of (16), the coefficients α_i cos(ϕ_i τ∆) are obtained from the shifted interpolation conditions

Σ_{i=1}^n (α_i cos(ϕ_i τ∆)) cos(ϕ_i jσ∆) = F_{τ+jσ},  j = k, …, k+n−1,  (19)

where k ∈ {0, 1, …, n} is fixed. While for σ = 1 the sparse interpolation problem can be solved from 2n samples taken at the points t_j = j∆, j = 0, …, 2n−1, for σ > 1 additional samples are required at the shifted locations t_{τ±jσ} = (τ ± jσ)∆ in order to resolve the ambiguity that arises when extracting the nonlinear parameters ϕ_i from the values cos(ϕ_i σ∆). The quotient (α_i cos(ϕ_i τ∆))/α_i delivers the values cos(ϕ_i τ∆), i = 1, …, n. Neither from cos(ϕ_i σ∆) nor from cos(ϕ_i τ∆) can the parameters ϕ_i necessarily be extracted uniquely when σ > 1 and τ > 1. But the following result is proved in the appendix.
If gcd(σ, τ) = 1, the sets S_i and T_i, containing all the candidate arguments for ϕ_i consistent with cos(ϕ_i σ∆) and cos(ϕ_i τ∆) respectively, have at most two elements in their intersection. Here Arccos(·) ∈ [0, π] denotes the principal branch of the arccosine function. In case two elements are found, it suffices to extend (19) to j = k, …, k+n, which only requires the additional sample f_{τ+(k+n)σ}, as f_{τ−(k+n−2)σ} is already available. From this extension, cos(ϕ_i (σ+τ)∆) can be obtained in the same way as cos(ϕ_i τ∆). As explained in the appendix, only one of the two elements in the intersection of S_i and T_i fits the computed cos(ϕ_i (σ+τ)∆), since gcd(σ, τ) = 1 implies that also gcd(σ, σ+τ) = 1 = gcd(τ, σ+τ).
So the unique identification of the ϕ_i can require 2n − 1 additional samples at the shifted locations (τ ± jσ)∆, j = 0, …, n−1, if the intersections S_i ∩ T_i are all singletons, or 2n additional samples, namely at (τ ± jσ)∆, j = 0, …, n−1 and (τ + nσ)∆, if at least one of the intersections S_i ∩ T_i is not a singleton.
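A minimal numerical sketch of a cosine pencil of this kind, for σ = 1 and τ = 1 and with invented values; the entrywise average of four samples and the pencil (C(1), C(0)) below are our reconstruction based on the product-to-sum identity, not necessarily the paper's exact definition:

```python
import numpy as np
from scipy.linalg import eig

# Invented cosine sum f(t) = sum_i alpha_i * cos(phi_i * t).
alpha = np.array([1.0, 3.0])
phi = np.array([0.7, 1.9])
dt, n = 0.25, 2
f = lambda j: (alpha * np.cos(phi * j * dt)).sum()

def C(tau):
    # Entry (k, l): average of four samples; by 4*cos(a)cos(b)cos(c) =
    # cos(c+a+b)+cos(c+a-b)+cos(c-a+b)+cos(c-a-b) this factorizes as
    # W diag(alpha_i cos(phi_i tau dt)) W^T with W[k, i] = cos(phi_i k dt).
    return np.array([[(f(tau + k + l) + f(tau + k - l)
                       + f(tau - k + l) + f(tau - k - l)) / 4
                      for l in range(n)] for k in range(n)])

lam = eig(C(1), C(0), right=False).real        # cos(phi_i * dt)
phi_est = np.sort(np.arccos(np.sort(lam)[::-1]) / dt)
```

With σ = 1 the arccosine is unambiguous; for σ > 1 the candidate-set intersection described above would follow this step.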
The factorization in Theorem 1 immediately allows us to formulate the following cosine analogue of (11).
To round up our discussion, we mention that, from the factorization in Theorem 1, it is clear that, for the generalized eigenvector v_i of the different generalized eigenvalue problem

τσC_n v_i = cos(ϕ_i τ∆) 0σC_n v_i,  i = 1, …, n,

the vector 0σC_n v_i is a scalar multiple of α_i (1, cos(ϕ_i σ∆), …, cos(ϕ_i (n−1)σ∆))^T. This immediately leads to a computational variant of the proposed scheme, similar to the one given in Section 2.4 for the exponential function, requiring somewhat more samples though. Let us now turn our attention to other trigonometric functions.

Sine function
Let g(ϕ_i; t) = sin(ϕ_i t) and let (13) hold. With t_j = j∆ we denote f_j = f(t_j), and because of

sin(ϕ_i (τ+jσ)∆) − sin(ϕ_i (τ−jσ)∆) = 2 cos(ϕ_i τ∆) sin(ϕ_i jσ∆),

we introduce, for fixed chosen σ and τ,

F(σ, τ; t_j) = F_{τ+jσ} := 1/2 (f_{τ+jσ} − f_{τ−jσ}) = Σ_{i=1}^n α_i cos(ϕ_i τ∆) sin(ϕ_i jσ∆).  (21)

We fill the matrices τσH_n and the Toeplitz matrices τσT_n, now with row index k = 1, …, n and column index ℓ = 0, …, n−1, and define

τσB_n := 1/4 ( τσH_n + −τ,σH_n + τσT_n + −τ,σT_n ).  (22)

Theorem 2. The structured matrix τσB_n factorizes as

τσB_n = U_n A_n D_τ W_n^T,  U_n = (sin(ϕ_i kσ∆))_{k=1,…,n; i=1,…,n},  W_n = (cos(ϕ_i ℓσ∆))_{ℓ=0,…,n−1; i=1,…,n},  A_n = diag(α_1, …, α_n),  D_τ = diag(cos(ϕ_1 τ∆), …, cos(ϕ_n τ∆)).

Proof. The proof is again a verification of the matrix product entry, at position (k, ℓ+1) with k = 1, …, n and ℓ = 0, …, n−1:

Σ_{i=1}^n α_i cos(ϕ_i τ∆) sin(ϕ_i kσ∆) cos(ϕ_i ℓσ∆) = 1/4 ( f_{τ+(k+ℓ)σ} + f_{−τ+(k+ℓ)σ} + f_{τ+(k−ℓ)σ} + f_{−τ+(k−ℓ)σ} ). □

Note that the factorization involves precisely the building blocks in the shifted evaluation (21) of the help function F(σ, τ; t). From this decomposition we find that the cos(ϕ_i σ∆), i = 1, …, n are obtained as the generalized eigenvalues of the problem

σσB_n v_i = cos(ϕ_i σ∆) 0σB_n v_i,  i = 1, …, n.

We point out that setting up this generalized eigenvalue problem requires samples of f(t) at the points t_{(−n+1)σ}, …, t_{2nσ}. Since f(t_{jσ}) = −f(t_{−jσ}) and f(0) = 0, it costs 2n samples. Unfortunately, at this point we cannot compute the α_i, i = 1, …, n from the linear system of interpolation conditions as we usually do, because we do not have the matrix entries sin(ϕ_i jσ∆) at our disposal. It is however easy to obtain the values cos(ϕ_i jσ∆), because cos(ϕ_i jσ∆) = cos(±j Arccos(cos(ϕ_i σ∆))), where Arccos(cos(ϕ_i σ∆)) returns the principal branch value. The proper way to proceed is the following. From Theorem 2 we get 0σB_n^T = W_n A_n U_n^T. So we can obtain the α_i sin(ϕ_i σ∆), in the first column of A_n U_n^T, from the structured linear system

W_n X = 0σB_n^T,  X = A_n U_n^T,

where W_n is filled with the computable values cos(ϕ_i ℓσ∆). From the generalized eigenvalues cos(ϕ_i σ∆), i = 1, …, n and the α_i sin(ϕ_i σ∆), we can now recursively compute, for j = 1, …, n,

α_i sin(ϕ_i jσ∆) = α_i sin(ϕ_i (j−1)σ∆) cos(ϕ_i σ∆) + cos(ϕ_i (j−1)σ∆) α_i sin(ϕ_i σ∆).
The system of shifted linear interpolation conditions

Σ_{i=1}^n (α_i sin(ϕ_i jσ∆)) cos(ϕ_i τ∆) = F_{τ+jσ},  j = 1, …, n,

can then be looked at as having a coefficient matrix with entries α_i sin(ϕ_i jσ∆) and unknowns cos(ϕ_i τ∆). In order to retrieve the ϕ_i uniquely from the values cos(ϕ_i σ∆) and cos(ϕ_i τ∆) with gcd(σ, τ) = 1, one proceeds as in the cosine case. Finally, the α_i are obtained from the expressions α_i sin(ϕ_i σ∆) after plugging in the correct arguments ϕ_i in sin(ϕ_i σ∆) and dividing by it. So, compared to the previous sections, the intermediate computation of the α_i before knowing the ϕ_i is replaced by the intermediate computation of the α_i sin(ϕ_i σ∆). In the end, the α_i are revealed in a division, without the need to solve an additional linear system. From the factorization in Theorem 2, the following sine analogue of (11) follows immediately for the matrix 0σB_n defined in (22) (Corollary 2). For completeness we mention that one also finds from this factorization that, for v_i in the generalized eigenvalue problem τσB_n v_i = cos(ϕ_i τ∆) 0σB_n v_i, the vector 0σB_n v_i is a scalar multiple of α_i (sin(ϕ_i σ∆), …, sin(ϕ_i nσ∆))^T.

Phase shifts in cosine and sine.
It is possible to include phase shift parameters in the cosine and sine interpolation schemes. We explain how, by working out the sparse interpolation of

f(t) = Σ_{i=1}^n α_i sin(ϕ_i t + ψ_i).  (25)

Since sin t = (exp(it) − exp(−it))/(2i), we can write each term in (25) as

α_i sin(ϕ_i t + ψ_i) = (α_i exp(iψ_i)/(2i)) exp(iϕ_i t) − (α_i exp(−iψ_i)/(2i)) exp(−iϕ_i t).

So the sparse interpolation of (25) can be solved by considering the exponential sparse interpolation problem with the 2n building blocks exp(±iϕ_i t) and coefficients ±α_i exp(±iψ_i)/(2i). The computation of the ϕ_i, through the generalized eigenvalues ζ_i = exp(iϕ_i ∆), remains separated from that of the α_i and ψ_i. The latter are obtained as α_i = 2|c_i| and ψ_i = Arg(2i c_i), where c_i denotes the computed coefficient of exp(iϕ_i t).
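This reduction can be sketched for a single phase-shifted sine term, with invented values; the coefficient extraction at the end mirrors the division α_i = 2|c_i|, ψ_i = Arg(2i c_i):

```python
import numpy as np
from scipy.linalg import eig, hankel, lstsq

# Invented term f(t) = alpha * sin(phi*t + psi): two complex exponentials.
alpha, phi, psi = 1.5, 2.0, 0.6
dt = 0.1
j = np.arange(4)                        # 2 * 2 samples for the 2 exponentials
f = alpha * np.sin(phi * j * dt + psi)

n = 2                                   # one sine term -> two exponentials
H0 = hankel(f[:n], f[n - 1:2 * n - 1])
H1 = hankel(f[1:n + 1], f[n:2 * n])
lam = eig(H1, H0, right=False)          # exp(+/- 1j * phi * dt)
phi_est = np.abs(np.log(lam) / dt)[0]

V = np.vander(lam, N=2 * n, increasing=True).T
c = lstsq(V, f.astype(complex))[0]      # c = +/- alpha * exp(+/- 1j*psi)/(2j)
k = np.argmax(np.log(lam).imag)         # coefficient of exp(+1j*phi*t)
alpha_est = 2 * np.abs(c[k])
psi_est = np.angle(2j * c[k])
```

The frequency comes out of the pencil, while amplitude and phase are read off from a single complex coefficient.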

Hyperbolic functions
For g(ϕ_i; t) = cosh(ϕ_i t) the computational scheme parallels that of the cosine, and for g(ϕ_i; t) = sinh(ϕ_i t) that of the sine. We merely write down the main issues.
When g(ϕ_i; t) = cosh(ϕ_i t), let f_j = f(t_j) and, for fixed chosen σ and τ, let

F_{τ+jσ} := 1/2 (f_{τ+jσ} + f_{τ−jσ}) = Σ_{i=1}^n α_i cosh(ϕ_i τ∆) cosh(ϕ_i jσ∆).

Subsequently the definition of the structured matrix τσC_n is used, and in the factorization of Theorem 1 the cosine function is everywhere replaced by the hyperbolic cosine.
Similarly, when g(ϕ_i; t) = sinh(ϕ_i t), let f_j = f(t_j) and, for fixed chosen σ and τ, let

F_{τ+jσ} := 1/2 (f_{τ+jσ} − f_{τ−jσ}) = Σ_{i=1}^n α_i cosh(ϕ_i τ∆) sinh(ϕ_i jσ∆).

Now the definition of the structured matrix τσB_n is used, and in the factorization of Theorem 2 the occurrences of cos are replaced by cosh and those of sin by sinh.

Polynomial functions
The orthogonal Chebyshev polynomials were among the first polynomial basis functions to be explored for use in combination with a scaling factor σ, in the context of sparse interpolation in symbolic-numeric computing [11]. We elaborate on the topic further for numerical purposes and for lacunary or supersparse interpolation, making use of the scale factor σ and the shift term τ. We also extend the approach to other polynomial bases and connect it to generalized eigenvalue formulations.

Chebyshev 1st kind
Let g(m_i; t) = T_{m_i}(t) of degree m_i, defined by T_{m_i}(cos θ) = cos(m_i θ), and consider the interpolation problem

f(t) = Σ_{i=1}^n α_i T_{m_i}(t).  (26)

The Chebyshev polynomials T_m(t) satisfy the recurrence relation T_{m+1}(t) = 2t T_m(t) − T_{m−1}(t) and the property T_m(T_k(t)) = T_{mk}(t). With 0 ≤ m_1 < m_2 < … < m_n < M we choose t_j = cos(j∆), where 0 < ∆ ≤ π/M. Note that the points t_j are allowed to occupy much more general positions than in [11]. If M is extremely large and n is small, in other words if the polynomial is very sparse, then it is a good idea to recover the actual m_i, i = 1, …, n in two tiers, as we explain now. Let gcd(σ, τ) = 1. We denote f_j = f(t_j) and introduce, for fixed σ and τ,

F_{τ+jσ} := 1/2 (f_{τ+jσ} + f_{τ−jσ}) = Σ_{i=1}^n α_i cos(m_i τ∆) cos(m_i jσ∆),  (27)

in order to separate the effect of σ and τ in the evaluation. With the same matrices τσH_n, τσT_n and τσC_n as in the cosine subsection, now filled with the F_{τ+jσ} from (27), the values T_{m_i}(cos(σ∆)) are the generalized eigenvalues of the problem

σσC_n v_i = T_{m_i}(cos(σ∆)) 0σC_n v_i,  i = 1, …, n.  (28)

From the values T_{m_i}(cos(σ∆)) = cos(m_i σ∆) the integer m_i cannot necessarily be retrieved unambiguously. We need to find out which of the elements in the candidate set S_i is the one satisfying (26), where Arccos(cos(m_i σ∆))/(σ∆) ≤ M/σ. Depending on the relationship between σ and M (relatively prime, generator, divisor, …) the set S_i may contain one or more candidate integers for m_i evaluating to the same value cos(m_i σ∆). To resolve the ambiguity we consider the Vandermonde-like system

Σ_{i=1}^n α_i cos(m_i jσ∆) = f_{jσ},  j = 0, …, 2n−1,

for the α_i, i = 1, …, n, and the shifted problem

Σ_{i=1}^n (α_i cos(m_i τ∆)) cos(m_i jσ∆) = F_{τ+jσ},  j = k, …, k+n−1.

If the intersection of the set S_i with the candidate set T_i obtained from cos(m_i τ∆) is processed as in Section 3.1, then one can eventually identify the correct m_i. An illustration thereof is given in Section 7.3.
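Since sampling at t_j = cos(j∆) turns (26) into a sparse cosine sum, the cosine construction applies verbatim; a toy sketch with invented degrees, reusing the same four-sample average as in our cosine illustration (σ = 1, τ = 1, so the arccosine already resolves the degrees):

```python
import numpy as np
from scipy.linalg import eig

# Invented sparse Chebyshev sum: f(cos(j*dt)) = sum_i alpha_i * cos(m_i*j*dt).
m = np.array([3, 11])
alpha = np.array([2.0, -1.0])
n, dt = 2, np.pi / 24                   # dt <= pi / M with M > 11
f = lambda j: (alpha * np.cos(m * j * dt)).sum()

def C(tau):
    # Same structured average of four samples as in the cosine case.
    return np.array([[(f(tau + k + l) + f(tau + k - l)
                       + f(tau - k + l) + f(tau - k - l)) / 4
                      for l in range(n)] for k in range(n)])

lam = eig(C(1), C(0), right=False).real  # cos(m_i * dt) = T_{m_i}(cos(dt))
m_est = sorted(round(x) for x in np.arccos(np.sort(lam)[::-1]) / dt)
```

For a supersparse polynomial one would instead take σ > 1, accept the resulting candidate sets S_i, and prune them with a shift τ as described above.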
When replacing (28) by the generalized eigenvalue problem

τσC_n v_i = T_{m_i}(cos(τ∆)) 0σC_n v_i,  i = 1, …, n,

we find that for v_i it holds that 0σC_n v_i is a scalar multiple of

α_i (1, cos(m_i σ∆), …, cos(m_i (n−1)σ∆))^T.

This offers an alternative algorithm, similar to the alternative in Section 3.1 on the cosine function.

Chebyshev 2nd, 3rd and 4th kind
While the Chebyshev polynomials T_{m_i}(t) of the first kind are intrinsically related to the cosine function, the Chebyshev polynomials U_{m_i}(t) of the second kind can be expressed using the sine function:

U_{m_i}(cos θ) = sin((m_i + 1)θ) / sin θ.

Therefore the sparse interpolation problem

f(t) = Σ_{i=1}^n α_i U_{m_i}(t)

can be solved along the same lines as in Section 4.1, but now using the samples sin(j∆) f(cos(j∆)) instead of the f_{τ+jσ}. In a very similar way, the sparse interpolation problems involving the Chebyshev polynomials V_{m_i}(t) and W_{m_i}(t) of the third and fourth kind respectively, given by

V_{m_i}(cos θ) = cos((m_i + 1/2)θ) / cos(θ/2),  W_{m_i}(cos θ) = sin((m_i + 1/2)θ) / sin(θ/2),

can be solved.

Spread polynomials
Let g(m_i; t) equal the degree m_i spread polynomial S_{m_i}(t) on [0, 1], which is defined by S_{m_i}(sin²θ) = sin²(m_i θ). The spread polynomials S_m(t) are related to the Chebyshev polynomials of the first kind by 1 − 2S_m(t) = T_m(1 − 2t), and they satisfy the recurrence relation S_{m+1}(t) = 2(1 − 2t)S_m(t) − S_{m−1}(t) + 2t and the property S_m(S_k(t)) = S_{mk}(t). We consider the interpolation problem

f(t) = Σ_{i=1}^n α_i S_{m_i}(t),  t_j = sin²(j∆),  j = 0, 1, 2, …

As in Section 4.1 we present a two-tier approach, which for σ = 1 reduces to one step and avoids the additional evaluations required for the second step. However, as indicated above, the two-tier scheme offers some additional possibilities. We denote f_j = f(t_j) and, combining the samples f_{τ+jσ} and f_{τ−jσ} by means of the identity sin²(a+b) + sin²(a−b) = 2 sin²a + 2 sin²b − 4 sin²a sin²b, we obtain the evaluations F_{τ+jσ}. So the effect of the scale factor σ on the one hand and of the shift term τ on the other can again be separated in the evaluation F_{τ+jσ}.
Proof. The factorization is again verified at the level of the matrix entries, now making use of property (29), which is slightly more particular. □

This factorization paves the way to obtaining the values S_{m_i}(sin²σ∆) = sin²(m_i σ∆) as the generalized eigenvalues of the associated structured matrix pencil. Filling the matrices in this matrix pencil requires the 2n + 1 evaluations f_{jσ}, j = 1, …, 2n+1. From these generalized eigenvalues we cannot necessarily deduce the values of the indices m_i uniquely. Instead, we can obtain for each i = 1, …, n the set S_i of elements characterising all the possible values for m_i consistent with the sparse spread polynomial interpolation problem. Fortunately, with gcd(σ, τ) = 1, we can proceed as follows.
The additional values F_{τ+jσ} lead to a second system of interpolation conditions, which delivers the coefficients α_i S_{m_i}(sin²τ∆). Dividing the two solution vectors of these linear systems componentwise delivers the values S_{m_i}(sin²τ∆), i = 1, …, n, from which we obtain sets T_i that have the correct m_i in their intersection with the respective S_i. The proof of this statement follows a completely similar course as the one for the cosine building block g(ϕ_i; t), given in the Appendix.
The factorization in Theorem 3 allows us to write down a spread polynomial analogue of (11).
Corollary 3 states this analogue for the matrices σJ_n and 0σK_n defined in (30). To round up the discussion, we mention that, from Theorem 3 and the corresponding generalized eigenvalue problem, we also find that σJ_n v_i is a scalar multiple of

α_i (S_{m_i}(sin²σ∆), …, S_{m_i}(sin²nσ∆))^T.
At the expense of some additional samples, this eigenvalue and eigenvector combination again offers an alternative computational scheme.

Distribution functions
In [29, pp. 85-91] Prony's method is generalized from g(ϕ_i; t) = exp(ϕ_i t) with ϕ_i ∈ C to g(ϕ_i; t) = exp(−(t − ϕ_i)²), to solve the interpolation problem

f(t) = Σ_{i=1}^n α_i exp(−(t − ϕ_i)²/(2w²))

with given fixed Gaussian peak width w. Here we further generalize the algorithm to include the new scale and shift paradigm. The scheme is useful when modelling phenomena using Gaussian functions, as illustrated in Section 7.1. Without loss of generality we put 2w² = 1. The easy adaptation of the formulas to include a fixed constant width factor is left to the reader.
We again assume that (4) holds, but now for 2∆. With t_j = j∆, the Gaussian samples are

f_j = f(t_j) = Σ_{i=1}^n α_i exp(−(t_j − ϕ_i)²).

Let us take a closer look at the evaluation of f(t) at t_{τ+jσ} = (τ + jσ)∆, j = 0, 1, … with σ ∈ N and τ ∈ Z:

f_{τ+jσ} = Σ_{i=1}^n α_i exp(−((τ + jσ)∆ − ϕ_i)²).  (31)

With the auxiliary function F(σ, τ; t_j) := f(t_{τ+jσ}) exp(t_{τ+jσ}²), for which

F(σ, τ; t_j) = Σ_{i=1}^n (α_i exp(−ϕ_i²)) exp(2ϕ_i τ∆) (exp(2ϕ_i σ∆))^j,

we obtain a perfect separation of σ and τ, and the problem can be solved using Prony's method. With fixed chosen σ and τ, the value F(σ, τ; t_j) is denoted by F_{τ+jσ}.

Theorem 4. The Hankel structured matrix τσG_n := (F_{τ+(k+ℓ)σ})_{k,ℓ=0}^{n−1} factorizes analogously to (7).

Proof. The proof is again by verification of the entry F_{τ+(k+ℓ)σ} in τσG_n at position (k+1, ℓ+1) for k = 0, …, n−1 and ℓ = 0, …, n−1. □

With τ = 0 and chosen σ, the values exp(2ϕ_i σ∆) are retrieved as a factor of the generalized eigenvalues of the problem

σσG_n v_i = exp(2ϕ_i σ∆) 0σG_n v_i,  i = 1, …, n.

As we know from the exponential case, the ϕ_i cannot necessarily be identified unambiguously from exp(2ϕ_i σ∆) when σ > 1. In order to remedy that, we turn our attention to two structured linear systems. The first one, where τ = 0, delivers the α_i exp(−ϕ_i²) after rewriting it as

Σ_{i=1}^n (α_i exp(−ϕ_i²)) (exp(2ϕ_i σ∆))^j = F_{jσ},  j = 0, …, 2n−1.

The coefficient matrix of this linear system is Vandermonde structured with entry (exp(2ϕ_i σ∆))^j at position (j+1, i). The second linear system, where τ > 0, delivers the α_i exp(−(τ∆ − ϕ_i)²) through (31). Here the coefficient matrix is structured identically to the one in the first linear system. From both solutions we obtain

exp(τ²∆²) (α_i exp(−(τ∆ − ϕ_i)²)) / (α_i exp(−ϕ_i²)) = exp(2ϕ_i τ∆).

From the values exp(2ϕ_i σ∆), i = 1, …, n and exp(2ϕ_i τ∆), i = 1, …, n the parameters 2ϕ_i can be extracted as explained in Section 2, under the condition that gcd(σ, τ) = 1.
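The key step, multiplying the Gaussian samples by exp(t_j²) so that an ordinary exponential sum appears, can be sketched as follows (σ = 1, τ = 0, 2w² = 1, invented peak locations):

```python
import numpy as np
from scipy.linalg import eig, hankel

# Invented Gaussian mixture with fixed width: f(t) = sum_i a_i exp(-(t-phi_i)^2).
alpha = np.array([1.0, 0.8])
phi = np.array([0.5, 1.3])              # unknown peak locations
dt, n = 0.15, 2
t = np.arange(2 * n) * dt
fs = (alpha[None, :] * np.exp(-(t[:, None] - phi[None, :])**2)).sum(axis=1)

# F_j = f(t_j) * exp(t_j^2) = sum_i (alpha_i e^{-phi_i^2}) (e^{2 phi_i dt})^j.
F = fs * np.exp(t**2)

H0 = hankel(F[:n], F[n - 1:2 * n - 1])
H1 = hankel(F[1:n + 1], F[n:2 * n])
lam = eig(H1, H0, right=False).real     # exp(2 * phi_i * dt)
phi_est = np.sort(np.log(lam) / (2 * dt))
```

Once the peak locations are known, the amplitudes follow from a Vandermonde system in the transformed samples, exactly as in the exponential case.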
The values exp(2ϕ_i τ∆) and exp(2ϕ_i σ∆) can also be retrieved, respectively, from the generalized eigenvalues and the generalized eigenvectors of the alternative problem

τσG_n v_i = exp(2ϕ_i τ∆) 0σG_n v_i,  i = 1, …, n,

with 0σG_n v_i being a scalar multiple of α_i (1, exp(2ϕ_i σ∆), …, exp(2ϕ_i (n−1)σ∆))^T, thereby requiring at least 4n − 2 samples instead of 3n samples. To conclude, the following analogue of (11) can be given.
Corollary 4 formulates this analogue of (11) for the matrix 0σG_n given in Theorem 4.

Some special functions
The sinc function is widely used in digital signal processing, especially in seismic data processing, where it is a natural interpolant. There are several similarities between the narrowing sinc function and the Dirac delta function, among which the shape of the pulse. A large number of papers, among which [30], already discuss the determination of a so-called train of Dirac spikes and their amplitudes, which is essentially an exponential fitting problem. This is generalized here to the use of the sinc function, including a matrix pencil formulation. The gamma function first arose in connection with the interpolation problem of finding a function that equals n! when the argument is a positive integer. Nowadays the function plays an important role in mathematics, physics and engineering. In [10] sparse interpolation or exponential analysis was already generalized to the Pochhammer basis (t)_m = t(t+1)⋯(t+m−1), also called the rising factorial, which is related to the gamma function by (t)_m = Γ(t+m)/Γ(t) for t ∉ Z⁻ ∪ {0}. Here we generalize the method to the direct use of the gamma function and present a matrix pencil formulation as well.

The sampling function sin(x)/x
Let g(ϕ_i; t) = sinc(ϕ_i t), where sinc(t) is historically defined by sinc(t) = sin(t)/t. So our sparse interpolation problem is

f(t) = Σ_{i=1}^n α_i sinc(ϕ_i t),

with the same assumptions for the ϕ_i and ∆ as in Section 3. In order to solve this inverse problem of identifying the ϕ_i and α_i for i = 1, …, n, we introduce

F(t) := t f(t) = Σ_{i=1}^n (α_i/ϕ_i) sin(ϕ_i t)

and apply the technique from Section 3.2 for the separate identification of the nonlinear parameters ϕ_i and the linear parameters α_i/ϕ_i in the sparse sine interpolation.
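A toy sketch of this reduction with invented values; for brevity we resolve the resulting sine sum through its 2n-term complex-exponential form rather than through the size-n sine pencil of Section 3.2:

```python
import numpy as np
from scipy.linalg import eig, hankel

# Invented sinc sum: f(t) = sum_i alpha_i * sinc(phi_i * t).
alpha = np.array([2.0, 1.0])
phi = np.array([0.9, 2.1])
dt, n = 0.2, 2
t = np.arange(1, 4 * n + 1) * dt        # avoid t = 0
f = (alpha[None, :] * np.sinc(np.outer(t, phi) / np.pi)).sum(axis=1)

# F(t) = t * f(t) = sum_i (alpha_i/phi_i) sin(phi_i t): a sparse sine sum,
# i.e. 2n complex exponentials with nodes exp(+/- 1j * phi_i * dt).
F = t * f
m = 2 * n
H0 = hankel(F[:m], F[m - 1:2 * m - 1])
H1 = hankel(F[1:m + 1], F[m:2 * m])
lam = eig(H1, H0, right=False)
phi_est = np.sort(np.unique(np.round(np.abs(np.log(lam).imag / dt), 6)))
```

Note that NumPy's `np.sinc(x)` is the normalized sin(πx)/(πx), hence the division by π in the argument.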

The gamma function Γ(z)
With the new tools obtained so far, it is also possible to extend the theory to other functions, such as the gamma function Γ(z). The function g(ϕ_i; z) = Γ(z + ϕ_i) with z, ϕ_i ∈ C satisfies the relation

Γ(z + 1 + ϕ_i) = (z + ϕ_i) Γ(z + ϕ_i).  (32)

Our interest is in the sparse interpolation of

f(z) = Σ_{i=1}^n α_i Γ(z + ϕ_i),

where the α_i, ϕ_i, i = 1, …, n are unknown. In the sample point z = ∆ we define F_j := f(∆ + j), j = 0, 1, 2, …. If, by the choice of ∆, one or more of the ∆ + ϕ_i, i = 1, …, n accidentally belong to the set of nonpositive integers, then one cannot sample f(z) at z = ∆. In that case a complex shift τ can help out. It suffices to shift the arguments ∆ + ϕ_i away from the negative real axis. We then redefine F_j := f(τ + ∆ + j), or in other words

F_j = Σ_{i=1}^n α_i Γ(τ + ∆ + j + ϕ_i).

Using (32) we find

F_{j+1} = Σ_{i=1}^n α_i (τ + ∆ + j + ϕ_i) Γ(τ + ∆ + j + ϕ_i).

As soon as the samples at τ + ∆ + j are all well-defined, we can start the algorithm for the computation of the unknown linear parameters α_i and the nonlinear parameters ϕ_i. We further introduce the structured matrices τ,k1H_n, k = 0, 1, filled with these evaluations.

Proof. With the matrix factorization given, the proof consists of an easy verification of the matrix product with the matrix τ,k1H_n. □

Filling the matrices τ,01H_n and τ,11H_n requires the evaluation of f(z) at z = τ + ∆ + j, j = 0, …, 2n−1, which are points on a straight line parallel to the real axis in the complex plane.
The nonlinear parameters ϕ_i are now obtained from the generalized eigenvalues of the pencil (τ,11H_n, τ,01H_n), where the v_i, i = 1, …, n are the right generalized eigenvectors. Afterwards the linear parameters α_i are obtained from the linear system of interpolation conditions

Σ_{i=1}^n α_i Γ(τ + ∆ + j + ϕ_i) = F_j,  j = 0, …, 2n−1,

by computing the coefficients α_i Γ(τ + ∆ + ϕ_i) and dividing those by the function values Γ(τ + ∆ + ϕ_i), which are known because ∆, τ and the ϕ_i, i = 1, …, n are known.
From Theorem 5 we find that for the generalized eigenvectors v_i of (33), the vectors H_n^{(τ,0)} v_i have a known structure. This allows one to validate the computation of the ϕ_i, i = 1, ..., n obtained as generalized eigenvalues, if desired.

Pochhammer basis connection
Results on sparse polynomial interpolation using the Pochhammer basis (t)_m, where usually the interpolation points are positive integers and t ∈ R_+, were published in [10, 28], but no matrix pencil method for its solution was presented. Such a method can now easily be obtained using a similar approach as for the gamma function. We consider more generally the interpolation of

f(t) = Σ_{i=1}^n α_i (t)_{m_i}.

For complex values z, the Pochhammer basis or rising factorial (z)_m satisfies the recurrence relation

(z)_{m+1} = (z + m)(z)_m.

For real ∆, a complex shift τ could move the problem statement away from the negative real axis, as with the gamma function, but it is much simpler here to immediately consider ∆ ∈ C \ {0, −1, −2, ...}. Let the values F_j, j = 0, ..., 2n − 1 be obtained from the evaluations of f in such a way that

F_j = Σ_{i=1}^n α_i m_i^j (∆)_{m_i}.

With the evaluations F_j we fill the Hankel matrices H_n^{(0)} and H_n^{(1)}. These Hankel matrices decompose as in Theorem 5, but now with the degrees m_i in the role of the generalized eigenvalues. So the nonlinear parameters m_i, i = 1, ..., n are obtained as the generalized eigenvalues of

H_n^{(1)} v_i = m_i H_n^{(0)} v_i,

where the v_i are the right generalized eigenvectors, for which it holds that H_n^{(0)} v_i is a multiple of the vector (1, m_i, m_i^2, ..., m_i^{n−1})^T. From the latter, the estimates of the m_i can be validated by computing the quotient of successive entries in the vector. The linear parameters α_i, i = 1, ..., n are obtained from the linear system

Σ_{i=1}^n α_i m_i^j (∆)_{m_i} = F_j,   j = 0, ..., 2n − 1.
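A minimal sketch under stated assumptions (illustrative degrees m_i and coefficients α_i): the operator L[g](t) = t(g(t+1) − g(t)) maps (t)_m to m (t)_m by the recurrence relation, so j-fold application at the points ∆, ∆+1, ... produces values F_j = Σ_i α_i m_i^j (∆)_{m_i}, matching the linear system above (the exact construction of the F_j in the text may differ in presentation). A classical Hankel pencil on these F_j then delivers the degrees m_i:

```python
import numpy as np
from scipy.linalg import eig, hankel

def poch(z, m):                        # rising factorial (z)_m
    return np.prod(z + np.arange(m))

alpha = np.array([2.0, -1.0])          # assumed coefficients
ms = np.array([3, 7])                  # assumed sparse Pochhammer degrees
n, Delta = len(ms), 0.5
f = lambda t: sum(a * poch(t, m) for a, m in zip(alpha, ms))

# L[g](t) = t (g(t+1) - g(t)) maps (t)_m to m (t)_m, hence
# F_j = L^j[f](Delta) = sum_i alpha_i m_i^j (Delta)_{m_i}
S = np.array([f(Delta + k) for k in range(2 * n)])
F = [S[0]]
for _ in range(2 * n - 1):
    S = (Delta + np.arange(S.size - 1)) * (S[1:] - S[:-1])
    F.append(S[0])
F = np.array(F)

# Classical Hankel pencil on F_j: generalized eigenvalues are the degrees m_i
H0 = hankel(F[:n], F[n - 1:2 * n - 1])
H1 = hankel(F[1:n + 1], F[n:2 * n])
m_rec = np.sort(eig(H1, H0, right=False).real)

# Linear parameters from sum_i alpha_i m_i^j (Delta)_{m_i} = F_j
W = np.vander(m_rec, 2 * n, increasing=True).T       # entries m_i^j
c = np.linalg.lstsq(W, F, rcond=None)[0]
alpha_rec = c / np.array([poch(Delta, int(round(m))) for m in m_rec])
```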

Numerical illustrations
We present some examples to illustrate the main novelties of the paper, including the multiscale facilities:
• an illustration of sparse interpolation by Gaussian distributions with fixed width but unknown peak locations;
• an illustration of the new generalized eigenvalue formulation for use with several trigonometric functions and the sinc;
• an illustration of the use of the scale and shift strategy for the supersparse interpolation of polynomials.
As stated earlier, our focus is on the mathematical generalizations and not on the numerical issues.

Fixed width sparse Gaussian fitting
Consider the expression illustrated in Figure 1, with the parameters α_i, ϕ_i ∈ R. From the plot it is not obvious that the signal has two peaks. We first show the output of the widely used [31] Matlab state-of-the-art peak fitting program peakfit.m, which calls an unconstrained nonlinear optimization algorithm to decompose an overlapping peak signal into its components [32, 33].
In peakfit.m the user needs to formulate a guess for the number of peaks and supply this as input. If one does not have any idea of the number of peaks, the usual practice is to try different possibilities and compare the corresponding results. Of course, a good estimate of the number of peaks may lead to a good fit of the data. In addition to the peak position ϕ_i, its height α_i and width w, the program also returns a goodness-of-fit (GOF).
The peakfit.m algorithm can work without assuming a fixed width, or the width can be passed as an argument.We do the latter as our algorithm also assumes a known fixed peak width w.
Let ∆ = 0.1 and let us collect 20 samples f_0, f_1, ..., f_19. When passing the width to peakfit.m and guessing the number of peaks, it returns estimates for the peak positions and heights together with the GOF. From the latter experiment it is easy to formulate some desired features for a new algorithm:
• a built-in guess of the number of peaks in the signal,
• and reliable output from a smaller number of samples.
So let us investigate the technique developed in Section 5. Take σ = 1 and τ = 0, since there is no periodic component in the Gaussian signal, which has only real parameters. With the 20 samples f_0, f_1, ..., f_19 we define the samples F_j = exp(j²∆²) f_j and compose the Hankel matrix G_10. Its singular value decomposition, illustrated in Figure 2, clearly reveals that the rank of the matrix is 2, and so we deduce that there are n = 2 peaks. The new method clearly provides both an automatic guess of the number of peaks and a reliable estimate of the signal parameters, all from only 20 samples. What remains to be done is to investigate the numerical behaviour of the method on a large collection of different input signals, which falls outside the scope of this paper, where we provide the mathematical details.
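The rank-revealing step and the subsequent parameter extraction can be sketched as follows, with assumed peak locations and heights and, for simplicity, a width scaled so that g(ϕ; t) = exp(−(t − ϕ)²); then F_j = exp(j²∆²) f_j turns the signal into an exponential sum with nodes e^{2ϕ_i∆}:

```python
import numpy as np
from scipy.linalg import eig, hankel

alpha = np.array([1.0, 0.8])    # assumed peak heights
phi = np.array([0.6, 1.0])      # assumed peak locations
Delta = 0.1
t = np.arange(20) * Delta
f = (alpha * np.exp(-(t[:, None] - phi) ** 2)).sum(axis=1)

# Unfold the Gaussians: F_j = exp(t_j^2) f_j = sum_i alpha_i e^{-phi_i^2} e^{2 phi_i t_j}
F = np.exp(t ** 2) * f

# Numerical rank of a Hankel matrix built from all 20 samples reveals the number of peaks
G = hankel(F[:10], F[9:20])                 # 10 x 11 (one possible shape)
sv = np.linalg.svd(G, compute_uv=False)
n = int(np.sum(sv > 1e-8 * sv[0]))

# Matrix pencil on 2n samples: generalized eigenvalues e^{2 phi_i Delta}
H0 = hankel(F[:n], F[n - 1:2 * n - 1])
H1 = hankel(F[1:n + 1], F[n:2 * n])
lam = np.sort(eig(H1, H0, right=False).real)
phi_rec = np.log(lam) / (2 * Delta)

# Linear parameters: the pencil coefficients are c_i = alpha_i e^{-phi_i^2}
V = np.vander(lam, 2 * n, increasing=True).T
c = np.linalg.lstsq(V, F[:2 * n], rcond=None)[0]
alpha_rec = c * np.exp(phi_rec ** 2)
```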

Supersparse Chebyshev interpolation
We consider a polynomial f(t) which is clearly supersparse when expressed in the Chebyshev basis. We sample f(t) at t_j = cos(j∆), where ∆ = π/100000 with M = 50000. A first challenge is to retrieve an indication of the sparsity n.
Take σ = 1 and collect 15 samples f_j, j = 0, ..., 14 to form the matrix C_8. From its singular value decomposition, computed in double precision arithmetic and illustrated on the log-plot in Figure 5, one may erroneously conclude that f(t) has only 2 terms, a consequence of the fact that, relatively speaking, the degrees m_1 = 6 and m_2 = 7 are close to one another and appear as one cluster.
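The mechanism can be reproduced on a small assumed example: since T_m(cos θ) = cos(mθ), the samples f(cos(j∆)) form a sparse cosine sum with frequencies m_i, which (as one possible realization) can be unfolded into a 2n-term exponential sum and handed to the classical pencil. Here the assumed degrees are well separated, so σ = 1 already succeeds; with clustered degrees such as 6 and 7 relative to ∆ = π/100000, the singular values merge as described above:

```python
import numpy as np
from scipy.linalg import eig, hankel

# Assumed Chebyshev-sparse polynomial (illustrative, not the paper's example):
# f(t) = 2 T_5(t) + T_120(t) - 3 T_801(t), sampled at t_j = cos(j Delta)
alpha = np.array([2.0, 1.0, -3.0])
degs = np.array([5, 120, 801])
n, Delta = len(degs), np.pi / 1000      # m Delta < pi keeps the degrees identifiable

j = np.arange(4 * n)
# T_m(cos(j Delta)) = cos(m j Delta): a sparse cosine sum
f = (alpha * np.cos(np.outer(j * Delta, degs))).sum(axis=1)

# Unfold cos into e^{+-i m Delta}: a 2n-term exponential sum, classical Hankel pencil
H0 = hankel(f[:2 * n], f[2 * n - 1:4 * n - 1])
H1 = hankel(f[1:2 * n + 1], f[2 * n:4 * n])
lam = eig(H1, H0, right=False)
ang = np.angle(lam)
degs_rec = np.sort(np.round(ang[ang > 0] / Delta)).astype(int)

# Coefficients from the Vandermonde system; cos = (e^{i.} + e^{-i.})/2
z = np.exp(1j * np.concatenate([degs_rec, -degs_rec]) * Delta)
V = z[None, :] ** j[:, None]
c = np.linalg.lstsq(V, f.astype(complex), rcond=None)[0]
alpha_rec = (2 * c[:n]).real
```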

Conclusion
Let us summarize the sparse interpolation formulas obtained in the preceding sections in a table. For each parameterized univariate function g(ϕ_i; t) we list in the columns 1 to 5:
1. the minimal number of samples required to solve the sparse interpolation without running into ambiguity problems, meaning for the choice σ = 1,
2. the minimal number of samples required for the choice σ > 1 (if applicable), thereby involving a shift τ ≠ 0 to restore uniqueness of the solution,
3. the linear matrix pencil (A, B) in the generalized eigenvalue formulation Av_i = λ_i Bv_i of the sparse interpolation problem involving the g(ϕ_i; t),
4. the generalized eigenvalues in terms of τ, as they can be read directly from the structured matrix factorizations presented in Theorems 1-5,
5. the information that can be computed from the associated generalized eigenvectors, as indicated at the end of each (sub)section.
We further denote

(Table: summary of the sparse interpolation formulas, one row per parameterized function g(ϕ_i; t).)
Then the possible solutions for ϕ to C_σ = cos(ϕσ∆) are in Φ_{σ,1} ∪ Φ_{σ,2}. Analogously, the possible solutions to C_τ = cos(ϕτ∆) are in Φ_{τ,1} ∪ Φ_{τ,2}. One statement is obvious: whatever the choice for σ and τ, both Φ_{σ,1} ∪ Φ_{σ,2} and Φ_{τ,1} ∪ Φ_{τ,2} contain the unknown value for ϕ which produced C_σ and C_τ. What remains open is the question whether (Φ_{σ,1} ∪ Φ_{σ,2}) ∩ (Φ_{τ,1} ∪ Φ_{τ,2}) is a singleton. And in case it is not, we want to find an algorithm that can identify the correct ϕ.
When either ϕ_σ = 0 or ϕ_σ = R/σ, the sets Φ_{σ,1} and Φ_{σ,2} coincide, and similarly for ϕ_τ. On the other hand, if these sets do not coincide, they are disjoint. So the true value of the unknown parameter ϕ can belong to any of the intersections Φ_{σ,i} ∩ Φ_{τ,j}, 1 ≤ i, j ≤ 2. A sequence of lemmas will lead to the conclusion that the four intersections do not deliver more than two distinct elements. Thereafter we indicate how to identify the only true value for the unknown ϕ.
In general, when at least 3 of the 4 sets Φ_{σ,1}, Φ_{σ,2}, Φ_{τ,1}, Φ_{τ,2} are distinct, then at most 2 of the 4 intersections Φ_{σ,i} ∩ Φ_{τ,j}, 1 ≤ i, j ≤ 2 are nonempty, with each of the nonempty intersections being a singleton. Further down we illustrate the actual existence of a case where two intersections are nonempty and consequently the true value of the unknown ϕ cannot be identified from the evaluations cos(ϕσ∆) and cos(ϕτ∆) with gcd(σ, τ) = 1.
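The identification procedure can be illustrated numerically; all values below are assumed for the sake of the example. Given only the evaluations C_σ = cos(ϕσ∆) and C_τ = cos(ϕτ∆) with gcd(σ, τ) = 1, we build both candidate sets and intersect them; in this instance the intersection is a singleton and identifies ϕ:

```python
import numpy as np

Delta = 0.1
R = np.pi / Delta                     # admissible range: phi in [0, R)
phi_true = 23.7                       # assumed unknown parameter
sigma, tau = 5, 7                     # coprime scale factors, gcd(sigma, tau) = 1

def candidates(s, C):
    """All phi in [0, R) with cos(phi * s * Delta) = C."""
    theta = np.arccos(C)              # principal solution in [0, pi]
    k = np.arange(s + 1)
    cand = np.concatenate([(theta + 2 * np.pi * k) / (s * Delta),
                           (-theta + 2 * np.pi * k) / (s * Delta)])
    return cand[(cand >= 0) & (cand < R)]

C_sigma = np.cos(phi_true * sigma * Delta)
C_tau = np.cos(phi_true * tau * Delta)
A = candidates(sigma, C_sigma)        # plays the role of Phi_{sigma,1} U Phi_{sigma,2}
B = candidates(tau, C_tau)            # plays the role of Phi_{tau,1} U Phi_{tau,2}
common = sorted({round(x, 8) for x in A} & {round(x, 8) for x in B})
```

With other choices of σ, τ and ϕ the intersection can contain two elements, which is exactly the ambiguous case discussed above.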

Figure 1: Plot of the Gaussian example function.

Figure 5: Log-plot of the singular values of C_8 (σ = 1).

Imposing a 3-term model on f(t) instead of the erroneously suggested 2-term one does not improve the computation, as the matrix C_8 is ill-conditioned, with a condition number of the order of 10^10. So 3 generalized eigenvalues cannot be extracted reliably from the samples. For completeness we mention the unreliable double precision results, rounded to integer values: m_1 = 6, m_2 = 39999, m_3 = 25119. Now choose σ = 3125 and τ = 16. The singular value decomposition of C_8 at scale σ = 3125, shown on the log-plot in Figure 6, reveals that f(t) indeed consists of 3 terms. Also, the conditioning of the involved matrix has improved to the order of 10^3.