Phase retrieval and system identification in dynamical sampling via Prony’s method

Phase retrieval in dynamical sampling is a novel research direction, where an unknown signal has to be recovered from phaseless measurements with respect to a dynamical frame, i.e., a sequence of sampling vectors constructed by the repeated action of an operator. The loss of the phase here turns the well-posed dynamical sampling into a severely ill-posed inverse problem. In the existing literature, the involved operator is usually completely known. In this paper, we combine phase retrieval in dynamical sampling with the identification of the system. For instance, if the dynamical frame is based on a repeated convolution, then we want to recover the unknown convolution kernel in advance. Using Prony's method, we establish several recovery guarantees for signal and system, whose proofs are constructive and yield algebraic recovery methods. The required assumptions are satisfied by almost all signals, operators, and sampling vectors. Studying the sensitivity of the recovery procedures, we establish error bounds for the approximate Prony method with respect to complex exponential sums.

In this paper, we consider phase retrieval in the context of dynamical sampling. Dynamical sampling is a novel research direction motivated by the work of Vetterli et al. [ , ] and was introduced in [ , , , ]. The topic instantly attracted attention in the scientific community, see for instance [ , , , , , , , , , ] for further studies. Formulated in the setting of finite-dimensional spaces, the main question in dynamical sampling is to find conditions on the system A ∈ C^{N×N} and the sampling vectors {φ_k}_{k=0}^{K−1} ⊂ C^N such that each signal x ∈ C^N can be stably recovered from the spatiotemporal samples ⟨x, A^ℓ φ_k⟩ or such that {A^ℓ φ_k}_{ℓ,k=0}^{L−1,K−1} forms a frame for some K, L ∈ N. Note that many structured measurements like the discrete Gabor transform may be interpreted as dynamical samples. For the Gabor transform, A would be a diagonal matrix corresponding to the modulation operator, and φ_k would be shifts of a window function. We refer to [ , ] for motivations about this particular question.
Different from classical finite-dimensional dynamical sampling, we consider the phaseless measurements |⟨x, A^ℓ φ_k⟩| with ℓ = 0, …, L−1 and k = 0, …, K−1 for some K, L ∈ N. The main question is again: under which conditions on A and φ_k can x be recovered from the given measurements? Due to the loss of the phase, this problem becomes far more challenging since the recovery is now severely ill-posed.

Related work
Phase retrieval in dynamical sampling has already been studied. In [ , ], the authors pose conditions on the operator A defined on a real Hilbert space and on the sampling vectors φ_k to ensure that the dynamical phase retrieval problem has a unique solution. The main strategy here is to ensure that the sequence {A^ℓ φ_k}_{ℓ,k=0}^{L−1,K−1} has the complementary property, meaning that each subset or its complement spans the entire space. The restriction to the real-valued problem is crucial since the complementary property is not sufficient to allow phase retrieval in the complex case. Further, the results are of a theoretical nature, and the question of how to recover the signal numerically remains open.
An approach to a numerical recovery procedure based on polarization identities has been considered in [ ], where the measurement vectors φ_k have been designed to allow phase retrieval. The key idea has been to consider interfering measurement vectors that allow the recovery of the missing phase by polarization such that we obtain a classical dynamical sampling problem, which can be solved in a second step. The presented reconstruction technique works for almost all real or complex signals.

Contribution
Besides the recovery of the real or complex signal x, we want to identify the unknown operator A from a certain class beforehand. For instance, if the operator A = circ(a) corresponds to the convolution with a, we want to recover the kernel a or the spectrum â of a, where ·̂ denotes the discrete Fourier transform. The theoretical requirements to allow phase retrieval alongside system identification are our main contribution and the focus of this paper. The combination of phase retrieval, dynamical sampling, and system identification is, to our knowledge, a new research topic. Our workhorse to establish the recovery guarantees for signal and system is Prony's method, which allows us to recover the wanted entities from the given measurements. As a result, all our proofs contain analytic recovery methods. The required assumptions are satisfied by almost all signals, spectra, and sampling vectors. Using several sampling vectors, phase retrieval and system identification are possible from only linearly many samples. The basic idea here generalizes to the infinite-dimensional setting. Moreover, we study the sensitivity of the applied Prony method, resulting in error bounds that are interesting in their own right outside the context of dynamical sampling. On this basis, we moreover study the sensitivity of the proposed analytic recovery procedures.

Roadmap
This paper is organized as follows. In Section , we introduce the required notation. In Section , we recall Prony's method, and we explain how this method enables us to recover the missing information. In Section , we provide conditions to retrieve an unknown signal when the underlying dynamical frame is known. Section  is devoted to the system identification in case the signal x is already known. In Section , we suppose that both the signal and the spectrum of A are unknown. In particular, we establish recovery guarantees when the operator A corresponds to a convolution with a low-pass filter as kernel. In Section , we consider multiple sampling vectors, which finally allow us to recover both signal and operator. In Section , we adapt our results to the infinite-dimensional setting. The sensitivity of the analytic reconstructions is investigated in Section . In Section , we provide numerical examples to accompany our theoretical results. Section  concludes the paper with a number of final remarks.

Preliminaries
In this section, we introduce the notation and definitions that are needed throughout this paper. All finite-dimensional vectors and matrices are set in bold print. The zero matrix of dimension M × N is denoted by 0_{M×N}, and the (N × N)-dimensional identity by I_N. If the dimension is clear from the context, we usually skip the indices. A matrix A ∈ C^{N×N} is called diagonalizable if there exist an invertible matrix V, whose columns consist of eigenvectors of A, and a diagonal matrix D with the eigenvalues of A on its diagonal, such that A = V D V^{−1}. Throughout the paper, we always use this eigenvalue decomposition, where V does not have to be orthogonal, implying that the columns of V only form a (maybe non-orthogonal) basis. Further, if the eigenvalues are pairwise distinct, we say that a given vector v ∈ C^N depends on all eigenspaces of A if V^{−1} v does not vanish anywhere, i.e. if all coordinates with respect to the basis in V are non-zero. Note that in this case V is unique up to permutation and global phase of the columns.
For a ∈ C^N, we denote by circ(a) the circulant matrix whose first column is a. Note that the multiplication with circ(a) results in the (circular) convolution with a, i.e. circ(a) x = a * x. All circulant matrices are diagonalizable with respect to the discrete Fourier transform. More precisely, we have circ(a) = F^{−1} diag(â) F with F^{−1} = N^{−1} F̄, where F = (e^{−2πi jk/N})_{j,k=0}^{N−1} denotes the Fourier matrix and â = F a the discrete Fourier transform of a. Given a vector z ∈ C^N and M ∈ N, we define the rectangular Vandermonde matrix V_M(z) = (z_k^j)_{j=0,…,M−1; k=0,…,N−1}. For M = N, we drop the subscript and denote the Vandermonde matrix by V or V(z).
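As a small numerical illustration of these identities, the following numpy sketch (the dimension and the vectors are arbitrary choices, not taken from the paper) checks circ(a) x = a * x and the diagonalization circ(a) = F^{−1} diag(â) F:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
a = rng.standard_normal(N)   # convolution kernel
x = rng.standard_normal(N)   # test vector

# Circulant matrix whose first column is a.
C = np.array([[a[(j - k) % N] for k in range(N)] for j in range(N)])

# Multiplication by circ(a) is circular convolution with a,
# which the discrete Fourier transform turns into a pointwise product.
conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(x))

# Diagonalization circ(a) = F^{-1} diag(a_hat) F with the Fourier matrix F.
F = np.fft.fft(np.eye(N))    # F[j, k] = exp(-2 pi i j k / N)
C_diag = np.linalg.inv(F) @ np.diag(np.fft.fft(a)) @ F
```

Both identities hold up to floating-point round-off; in particular, the eigenvalues of circ(a) can be read off from the diagonal factor np.fft.fft(a).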
Recall that the finite-dimensional p-norm is defined as ‖v‖_p = (Σ_j |v_j|^p)^{1/p}. Moreover, the maximum norm is defined by ‖v‖_∞ = max_j |v_j|. Against this background and for notational convenience, we define the minimum "norm" ‖v‖_{−∞} = min_j |v_j|, although this expression is clearly not a norm.
The non-zero complex numbers are denoted by C_*. Without loss of generality, we always choose the phase arg(·) of a complex number within the interval [−π, π). Especially for calculations with phases, we denote by · mod 2π the remainder within [−π, π), i.e. we add or subtract a multiple of 2π to obtain a number in the considered interval.
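This phase convention can be implemented in one line; the following minimal sketch (the function name is mine, not the paper's) reduces an arbitrary angle to its representative in [−π, π):

```python
import math

def wrap(theta: float) -> float:
    """Representative of theta modulo 2*pi in the half-open interval [-pi, pi)."""
    return (theta + math.pi) % (2 * math.pi) - math.pi
```

Note that the boundary angle π is mapped to −π, so the interval is indeed half-open as required.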
For a given vector x = (x_0, …, x_{N−1}), we call the set of relative phases arg(x_j x̄_{j+1}) the winding direction of x. Figuratively, the winding direction describes how the phase changes while traveling through the components of x. We say that a vector x can be uniquely recovered up to the winding direction if the relative phases are only reconstructable up to a global sign. If x_0 is real, a vector with the opposite winding direction can be computed by conjugating all components of x, i.e. by changing the sign of all relative phases.
Finally, we denote by #[·] the cardinality of a set.

The Prony method
In a nutshell, Prony's method [ ] allows us to recover the non-zero coefficients c_j ∈ C_* and the pairwise distinct bases z_j ∈ C_* of an exponential sum f(ℓ) := Σ_{j=0}^{M−1} c_j z_j^ℓ from the equispaced sampled data h_ℓ := f(ℓ) with ℓ = 0, …, 2M − 1. The so-called Prony polynomial p: C → C is the monic polynomial whose zeros are the unknown bases, i.e. p(z) := Π_{j=0}^{M−1} (z − z_j).
Since the occurring Vandermonde and diagonal matrices have full rank, we have rank H = M, meaning dim(ker H) = 1. Thus, H possesses the simple singular value zero. Considering ( ) for ℓ = 0, …, L − M − 1, we obtain ( ). Since the Vandermonde matrix V_{L−M} has full rank due to the assumptions on ( ), the equivalence follows immediately.
Lemma . is the theoretical justification why Prony's method always yields the parameters of ( ) for exact measurements h_ℓ. In practice, the measurements h̃_ℓ := h_ℓ + ε_ℓ are disturbed by some small error ε_ℓ; so we only have access to the disturbed rectangular Hankel matrix H̃ = H + E, where E is the rectangular error Hankel matrix. If L > 2M, the kernel of the perturbed Hankel matrix H̃ will be trivial almost surely. For this reason, Potts & Tasche [ ] propose to approximate the kernel using the singular value decomposition. This approach is supported by the Lidskii–Weyl perturbation theorem for singular values, see [ , Prob. III. . ] or [ ], yielding max_j |σ_j(H̃) − σ_j(H)| ≤ ‖E‖_2. If the non-zero singular values of H are greater than 2‖E‖_2, the singular vector to the smallest singular value of H̃ seems to be a valid approximation for the kernel of H. Summarized, we obtain the so-called approximate Prony method [ , Alg. . ], here written down for complex exponential sums.
Finally, we would like to note that alternative methods to obtain the unknown bases of the exponential sum in ( ) can be employed, for instance matrix pencil methods [ , ], the ESPRIT estimation method [ ], or Cadzow's denoising method [ ].
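The SVD-based approximate Prony method described above can be sketched in a few lines of numpy; the concrete values of M, L, the bases, and the coefficients below are illustrative choices, and the noise level is assumed small compared to the smallest non-zero singular value of the Hankel matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential sum h_l = sum_j c_j z_j^l with M = 3 terms, sampled for
# l = 0, ..., L-1 where L > 2M, plus a small measurement error.
M, L = 3, 12
z = np.exp(2j * np.pi * np.array([0.11, 0.35, 0.72]))  # pairwise distinct bases
c = np.array([1.0, -0.5 + 0.3j, 2.0])                  # non-zero coefficients
ell = np.arange(L)
h = (z[None, :] ** ell[:, None]) @ c
h = h + 1e-8 * rng.standard_normal(L)                  # disturbed samples

# Rectangular Hankel matrix with M + 1 columns; for exact data its kernel is
# spanned by the coefficient vector of the Prony polynomial.
H = np.array([h[l:l + M + 1] for l in range(L - M)])

# Approximate Prony: the right singular vector of the smallest singular value
# approximates the kernel of the unperturbed Hankel matrix.
gamma = np.linalg.svd(H)[2].conj()[-1]

# Roots of the Prony polynomial recover the bases ...
z_rec = np.roots(gamma[::-1])

# ... and a Vandermonde least-squares problem recovers the coefficients.
c_rec = np.linalg.lstsq(z_rec[None, :] ** ell[:, None], h, rcond=None)[0]
```

The recovered bases come in arbitrary order, which reflects the index ambiguity of Prony's method discussed later in the paper.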

Exploiting the known system
In the following, we assume that A ∈ C^{N×N} is diagonalizable, i.e. A = V D V^{−1}. For a fixed signal x ∈ C^N and a fixed sampling vector φ ∈ C^N, the given phaseless measurements are then of the form |⟨x, A^ℓ φ⟩|² = Σ_{m,n=0}^{N−1} c_m c̄_n (λ_m λ̄_n)^ℓ, where c_j := ȳ_j φ̃_j with y := V* x and φ̃ := V^{−1} φ. Notice that ( ) is an exponential sum with coefficients c_m c̄_n and bases λ_m λ̄_n. In the following, we require that the exponential sum has exactly N² unique bases. Therefore, we call a set Λ = {λ_0, …, λ_{N−1}} ⊂ C
• collision-free if the products λ_m λ̄_n are pairwise distinct for m, n ∈ {0, …, N−1},
• absolutely collision-free if Λ is collision-free and if the products |λ_m| |λ_n| are pairwise distinct for m > n.
Note that a matrix with collision-free eigenvalues is always invertible, and that the eigenvalue decomposition becomes unique up to permutations and global phases of the columns of V. If the system or the matrix A is known, we can usually recover the signal x using one sampling vector φ.

Theorem . . Let A ∈ C^{N×N} be known and diagonalizable with collision-free eigenvalues, and let φ ∈ C^N depend on all eigenspaces of A. Then every x ∈ C^N can be recovered from the samples {|⟨x, A^ℓ φ⟩|}_{ℓ=0}^{N²−1} up to global phase.

Proof. Assume that A has the eigenvalue decomposition A = V D V^{−1}, and denote the coordinates of φ with respect to V by φ̃ = V^{−1} φ. The given measurements have the form |⟨x, A^ℓ φ⟩|² = Σ_{m,n=0}^{N−1} c_m c̄_n (λ_m λ̄_n)^ℓ with c_j = ȳ_j φ̃_j as shown in ( ). Due to the distinctness of the products λ_m λ̄_n, the coefficients c_m c̄_n may be calculated by solving a linear equation system based on an invertible Vandermonde matrix. The products c_m c̄_n contain the absolute values |c_j| and the relative phases arg(c_m c̄_n); so the factors c_j are determined up to global phase. Since the components of φ̃ are non-zero, and since V is invertible, we finally obtain x up to global phase.
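The constructive proof above translates directly into a numerical sketch; the notation below (y = V*x, φ̃ = V⁻¹φ, c_j = ȳ_j φ̃_j) follows my reconstruction of the paper's conventions, and the eigenvalues are fixed deterministic choices whose products λ_m λ̄_n are pairwise distinct:

```python
import numpy as np

# Known system A = V D V^{-1}; moduli and phases of the eigenvalues are chosen
# such that the products lam_m * conj(lam_n) are pairwise distinct.
N = 3
lam = np.array([0.9, 1.0, 1.1]) * np.exp(2j * np.pi * np.array([0.13, 0.41, 0.77]))
rng = np.random.default_rng(2)
V = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = V @ np.diag(lam) @ np.linalg.inv(V)

phi = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # sampling vector
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)    # unknown signal

# Phaseless measurements b_l = |<x, A^l phi>|^2 for l = 0, ..., N^2 - 1.
b = np.empty(N * N)
v = phi.astype(complex)
for l in range(N * N):
    b[l] = abs(x.conj() @ v) ** 2
    v = A @ v

# The known bases lam_m * conj(lam_n) give an invertible Vandermonde system
# for the products c_m * conj(c_n), where c_j = conj(y_j) * phi_tilde_j.
w = np.array([lam[m] * lam[n].conj() for m in range(N) for n in range(N)])
p = np.linalg.solve(w[None, :] ** np.arange(N * N)[:, None], b + 0j)

# The products p[j*N] = c_j * conj(c_0) determine c up to a global phase.
c = p[::N] / np.sqrt(p[0].real)
y = (c / np.linalg.solve(V, phi)).conj()   # c_j = conj(y_j) * phi_tilde_j
x_rec = np.linalg.solve(V.conj().T, y)     # y = V* x
```

The reconstruction x_rec coincides with x only after aligning the free global phase, which mirrors the "up to global phase" statement of the theorem.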

Corollary . . For almost all a ∈ C^N and almost all φ ∈ C^N, the signal x ∈ C^N can be recovered from the samples {|⟨x, (circ a)^ℓ φ⟩|}_{ℓ=0}^{N²−1} up to global phase.

Proof. The eigenvalues of circ(a) are just given by the discrete Fourier transform â, and for almost all vectors a ∈ C^N or, equivalently, â ∈ C^N, the products â_m ā̂_n are pairwise distinct. Further, the vectors φ that are orthogonal to one column of the Fourier matrix form a hyperplane.
We would like to note that phase retrieval from the samples {|⟨x, (circ a)^ℓ φ_k⟩|} is possible with much fewer than N² temporal measurements if more spatial measurement vectors φ_k and polarization techniques are employed [ ].

Exploiting the known signal
The other way round, if the signal x is known, then we can usually identify the eigenvalues of the system A = V D V^{−1}; i.e., we assume that the eigenvector basis V of the system is known. For a convolutional system A = circ a, for instance, the eigenvectors are just the columns of the Fourier matrix.
Theorem . . Let A = V D V^{−1} be diagonalizable with a known eigenvector basis V, and assume that the eigenvalues are collision-free. Let φ ∈ C^N depend on all eigenspaces of A, and let x ∈ C^N be given. If the coefficients c_j defined in ( ) are collision-free too, then the eigenvalues λ_0, …, λ_{N−1} of A are determined by the samples {|⟨x, A^ℓ φ⟩|}_{ℓ=0}^{2N²−1} up to global phase.

Proof. The measurements again have the form Σ_{m,n} c_m c̄_n (λ_m λ̄_n)^ℓ as shown in ( ). By assumption, the bases λ_m λ̄_n of this exponential sum are pairwise distinct, and the coefficients c_m c̄_n are non-zero. Thus the products λ_m λ̄_n and c_m c̄_n are determinable by Prony's method. Note that Prony's method gives only the values but not the corresponding indices m and n. Exploiting that the products c_m c̄_n are known since x, φ, and V are known, we can however deduce these indices. Similarly to the proof of Theorem . , the products λ_m λ̄_n contain the absolute values |λ_j| and the relative phases arg(λ_m λ̄_n); so the eigenvalues λ_j are determined up to global phase.
Proof. Again, the vectors φ that are orthogonal to one column of the Fourier matrix form a hyperplane. Further, for almost all x and φ, the products c_m c̄_n in ( ) are pairwise distinct. As discussed in the proof of Corollary . , almost all vectors φ ∈ C^N satisfy the assumptions of Theorem . .

Signal & system
If either the signal x or the spectrum of A is known, we can recover the respective unknown information from the temporal samples of only one sampling vector. To a certain degree, we may even determine some information if both the signal and the spectrum are unknown. Using one sampling vector, we however lose the order of the components; so we only obtain the unordered spectrum of A.
Theorem . . Let A = V D V^{−1} be diagonalizable with a known eigenvector basis V, and assume that the eigenvalues are absolutely collision-free. Let φ ∈ C^N depend on all eigenspaces of A, and let y = V* x be elementwise non-zero for the unknown x ∈ C^N. Then the spectrum of A is determined by the samples {|⟨x, A^ℓ φ⟩|}_{ℓ=0}^{2N²−1} up to global phase and winding direction.
Proof. Since the coefficients c_j = ȳ_j φ̃_j with y = V* x and φ̃ = V^{−1} φ are non-zero, and since the eigenvalues are absolutely collision-free, the measurements have the form ( ).

[Figure: Propagating the phase in the proof of Theorem . . The points λ_0 and λ_1 are already known. Using the relative phases ± arg(λ_j λ̄_0) and ± arg(λ_j λ̄_1), starting from λ_0 and λ_1, we obtain two possible candidates (×) for λ_j respectively, since |λ_j| is known too. Further, since λ_1 cannot also be real by assumption, exactly two candidates coincide, yielding λ_j. For the other winding direction, i.e. choosing λ̄_1 instead of λ_1, we obtain λ̄_j.]
In the following, we denote the recovered bases in absolutely decreasing order by ω_j, i.e. ( ), and recover the permuted eigenvalues step by step. Our assumption guarantees that λ_m λ̄_n differs from λ̄_m λ_n for m ≠ n, i.e. the imaginary part cannot vanish; so the real values among the ω_j correspond to the squared magnitudes |λ_j|². The absolute collision-freedom now allows us to recover the products λ_m λ̄_n and λ̄_m λ_n corresponding to |λ_m| and |λ_n|. We now assume that λ_0 is real and positive because the global phase cannot be recovered. Considering λ_0 λ̄_1 and λ_1 λ̄_0, we obtain the relative phase arg(λ_0) − arg(λ_1) mod 2π up to sign. At this point, we have to choose one winding direction for the phase. For j = 2, …, N−1, we may consider the relative phases between λ_j and the recovered λ_0 and λ_1, see Figure , which uniquely determines the remaining phases.
Remark . . Note that the spectrum retrieved in Theorem . is an unordered set, i.e. the relation to the known eigenvectors in V is not revealed. Applying the recovered relations between the bases, we may also recover the coefficients c_j in ( ) up to global phase and winding direction. However, without knowing the actual order of the eigenvalues and coefficients, the recovery of the unknown signal is hopeless.
Supposing that the unknown complex eigenvalues of the operator A have a clearly recognizable structure like increasing or decreasing absolute values leads to highly artificial side conditions. A nevertheless interesting special case are real-valued convolutional systems with symmetrically decreasing kernels in the frequency domain. For the following theorem, we therefore restrict the setup to real-valued signals x ∈ R^N, real-valued convolution operators circ a with a ∈ R^N, and real-valued sampling vectors φ ∈ R^N. We call a kernel a strictly, symmetrically decreasing when â ∈ R^N_{++}, â_j = â_{−j}, and â_m > â_n for m, n ∈ {0, …, ⌊N/2⌋} with m < n. The negative indices are here considered modulo N, and R_{++} denotes the real positive half axis. Strictly, symmetrically decreasing kernels correspond to low-pass filters, whose identification in dynamical sampling has been studied in [ ]. Note that the kernel a is then real and symmetric too. We call the kernel collision-free in frequency if the products â_m â_n are unique for m, n ∈ {0, …, ⌊N/2⌋} with m ≥ n. This definition differs from the collision-freedom of complex sets. In order to recover both signal and kernel, we employ two sampling vectors φ_1 and φ_2. We call φ_1 and φ_2 pointwise independent (in the frequency domain) when φ̂_{1,j} and φ̂_{2,j}, interpreted as two-dimensional real vectors, are linearly independent for j = 1, …, ⌊N/2⌋. For this specific setting, the identification of the system and the signal is usually possible.
Theorem . . Let a ∈ R^N be strictly, symmetrically decreasing and collision-free in frequency, let φ_1, φ_2 ∈ R^N be pointwise independent, and let x ∈ R^N satisfy ℜ[x̂_j φ̂̄_{k,j}] ≠ 0 for j = −⌊(N−1)/2⌋, …, ⌊N/2⌋ and k = 1, 2. Then a and x can be recovered from the samples ( ).

Proof. To simplify the notation, we first study the temporal samples with respect to an arbitrary sampling vector φ. Exploiting the symmetry of â and the conjugate symmetry of (x̂_j φ̂_j)_{j=−⌊(N−1)/2⌋}^{⌊N/2⌋} caused by the Fourier transform, we combine the bases appearing several times in ( ) to obtain ( ) with bases ω_j related to â_m â_n and coefficients d_j, where the multipliers are given by ( ). This exponential sum has exactly ½ (⌊N/2⌋+1) (⌊N/2⌋+2) distinct bases since a is collision-free in frequency. Applying Prony's method, we compute the bases ω_j and coefficients d_j. Because the bases ω_j are all real and non-negative, we need a different procedure than before to reveal the relation to the factors â_m â_n. Let Ω be the set of recovered bases, which we arrange in decreasing order. (i) The strict, symmetric decrease of a ensures ω_0 = â_0². Now, remove ω_0 from Ω. (ii) The next largest basis ω_1 corresponds to â_0 â_1, allowing the recovery of â_1. Remove ω_1 = â_0 â_1 and â_1² from Ω. (iii) The largest remaining basis corresponds to â_0 â_2, which gives us â_2. Remove all products â_0 â_2, â_1 â_2, â_2² of â_2 with the recovered components from Ω.
Alongside the kernel, we also obtain the relation between the coefficients in ( ). Remark . . Note that the assumption ℜ[x̂_j φ̂̄_{k,j}] ≠ 0 for j = 0, …, N−1 may be weakened to hold only for one sampling vector φ_1 or φ_2 as long as ℜ[x̂_0 φ̂̄_{k,0}] ≠ 0 for both. In this case, the exponential sum corresponding to the temporal samples of the other sampling vector may consist of fewer than ½ (⌊N/2⌋+1) (⌊N/2⌋+2) bases. Exploiting that the coefficients d_j of the missing bases are zero, and spreading the sign between the non-zero coefficients, we can nevertheless recover x.
It is also possible to identify the strictly, symmetrically decreasing kernel alongside a complex signal and to allow complex sampling vectors. In this case, the temporal samples corresponding to one sampling vector φ possess the form ( ). Similarly to the proof of Theorem . , we may recover the kernel a from the temporal samples of one sampling vector if ( ) holds for m, n = 0, …, ⌊N/2⌋. Additionally, the signal x may be recovered if four sampling vectors are employed. In this case, the coefficient of â_0² is just |x̂_0 φ̂_{k,0}|; so fixing the phase for the coefficient d_{1,0}, we may spread the phase to d_{k,0} with k = 2, 3, 4, where the first index stands for the related sampling vector, i.e. all coefficients d_{k,0} are known. If the equation system ( ) with k = 1, …, 4 is solvable, we obtain x̂ and thus x. Notice that the recovery of â here is not a special case of Theorem . since â is not collision-free as a complex set. In sum, the following statement can be established: if ( ) are linearly independent for each j = 1, …, ⌊(N−1)/2⌋, then a and x can be recovered from the samples ( ) up to global phase.
Remark . . The strictly, symmetrically decreasing kernels form a (⌊N/2⌋+1)-dimensional manifold. Further, the non-collision-free kernels live on a union of submanifolds of strictly smaller dimension; so almost all strictly, symmetrically decreasing kernels are collision-free. Moreover, almost all vectors x and φ_k satisfy the posed conditions in the real as well as in the complex setting.

Multiple sampling vectors
Let us return to the parameter identification of arbitrary systems after this brief digression to strictly, symmetrically decreasing convolution kernels. Revisiting the statement of Theorem . , our main problem has been that we cannot recover the order of the spectrum from merely one sampling vector if both signal and eigenvalues are unknown. Since our analysis is based on Prony's method, we have moreover always relied on a squared number of measurements. To surmount these shortcomings, we consider specifically constructed sets of sampling vectors. Instead of assuming that the sampling vectors φ_k depend on all eigenspaces of the system matrix A, we now assume that φ_k might only depend on a small set of eigenspaces. Considering the temporal samples for such a sampling vector, in analogy to ( ), we have |⟨x, A^ℓ φ_k⟩|² = Σ_{m,n∈I_k} c_{k,m} c̄_{k,n} (λ_m λ̄_n)^ℓ, where y = V* x, φ̃_k = V^{−1} φ_k, c_{k,j} = ȳ_j φ̃_{k,j}, and I_k := supp φ̃_k. Since φ_k only captures a small part of the spectrum, the last sum consists of only #[I_k]² exponentials instead of N² and allows the recovery of a specific part of the spectrum. To combine these partial pieces of information and to overcome the mentioned issues, we suppose that the sampling vectors {φ_k}_{k=0}^{K−1} satisfy the following three properties: (i) full cover: ∪_{k=0}^{K−1} supp φ̃_k = {0, …, N−1}; (ii) index separation and phase propagation: #[supp φ̃_{k−1} ∩ supp φ̃_k] ≥ 2 for k = 1, …, K−1, i.e. there is an overlap of at least two elements; (iii) winding direction determination: ( ). If the sampling vectors φ_k fulfill all three assumptions, we say that the sampling set allows parameter identification and phase retrieval (up to global phase).
Theorem . . Let A = V D V^{−1} be diagonalizable with a known eigenvector basis V, and assume that the eigenvalues are absolutely collision-free. Let {φ_k}_{k=0}^{K−1} ⊂ C^N allow parameter identification and phase retrieval, and let y = V* x be elementwise non-zero for the unknown x ∈ C^N. Then the eigenvalues λ_0, …, λ_{N−1} of A and the signal x are determined by the spatiotemporal samples ( ) up to global phase.
Proof. Using the procedure of the proof of Theorem . , we recover the unblocked part Λ_k := {λ_j : j ∈ I_k} of the spectrum of A for each k = 0, …, K−1 up to global phase and winding direction. Note that we do not know which value in Λ_k corresponds to which index. However, since the eigenvalues are absolutely collision-free, and since the sampling set allows index separation, we have ( ), where the absolute value is applied element by element. Thus the true indices of the eigenvalues are revealed.
Using that the sampling set allows phase propagation, we align the global phase and winding direction of the sets Λ_k as follows. First, we fix the global phase and winding direction of Λ_0. There are at least two eigenvalues λ_{j_1} and λ_{j_2} that are contained in Λ_0 and Λ_1. The collision-freedom ensures arg(λ_{j_1} λ̄_{j_2}) ≢ 0 mod π. Using λ_{j_1} and λ_{j_2}, which can be identified by their absolute values, the global phase and winding direction are uniquely transferable from Λ_0 to Λ_1, i.e. we obtain the eigenvalues in Λ_0 ∪ Λ_1 up to global phase and winding direction. Repeating this argument, we propagate the phase information to the remaining subsets Λ_k, which results in the recovery of all eigenvalues λ_0, …, λ_{N−1} up to global phase and winding direction.
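The phase propagation step above can be sketched numerically as follows; the eigenvalues, offsets, and supports below are illustrative choices (with pairwise distinct absolute values standing in for the absolute collision-freedom), and each subset is assumed known only up to a global phase and a possible conjugation:

```python
import numpy as np

# Eigenvalues with pairwise distinct absolute values.
lam = np.array([1.0 + 0.5j, -0.6 + 1.1j, 0.8 - 0.9j, -1.2 - 0.4j, 1.4 + 0.1j])

# Two overlapping unblocked parts of the spectrum; each is only known up to
# a global phase and a possible conjugation (winding direction).
I0, I1 = [0, 1, 2], [1, 2, 3, 4]
S0 = lam[I0] * np.exp(0.7j)             # reference subset, phase already fixed
S1 = np.conj(lam[I1]) * np.exp(-1.3j)   # wrong winding direction, unknown phase

def align(ref, sub):
    """Rotate (and conjugate if necessary) sub so that it coincides with ref
    on the two shared eigenvalues, identified via their absolute values."""
    for cand in (sub, np.conj(sub)):    # try both winding directions
        shared = [(i, j) for i, r in enumerate(ref) for j, s in enumerate(cand)
                  if abs(abs(r) - abs(s)) < 1e-9]
        (i1, j1), (i2, j2) = shared[:2]
        rot = ref[i1] / cand[j1]        # phase offset from the first shared value
        if np.isclose(cand[j2] * rot, ref[i2]):  # second one decides the direction
            return cand * rot
    raise ValueError("subsets cannot be aligned")

S1_aligned = align(S0, S1)              # now consistent with the phase of S0
```

The second shared eigenvalue is what rules out the wrong winding direction, exactly as the requirement arg(λ_{j_1} λ̄_{j_2}) ≢ 0 mod π does in the proof.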
The ambiguity with respect to the winding direction occurs since we have not been able to determine whether the true relative phase between λ_m and λ_n corresponds to arg(λ_m λ̄_n) or to arg(λ̄_m λ_n). Let us now consider the indices k_1, k_2, j_1, j_2 in the winding direction property ( ) of {φ_k}_{k=0}^{K−1}. Notice that both λ_{j_1} and λ_{j_2} are captured by the sampling vectors φ_{k_1} and φ_{k_2}. Due to the missing winding direction, the coefficients c_{k_1,j_1} c̄_{k_1,j_2} and c_{k_2,j_1} c̄_{k_2,j_2} can only be identified up to conjugation; so we merely obtain ℜ[c_{k_1,j_1} c̄_{k_1,j_2}] and ℜ[c_{k_2,j_1} c̄_{k_2,j_2}], which however are given by ( ). Our assumption guarantees that this equation system has the unique solution ȳ_{j_1} y_{j_2}, which yields c_{k_1,j_1} c̄_{k_1,j_2} and c_{k_2,j_1} c̄_{k_2,j_2} without conjugation ambiguity.

Remark . . The absolute collision-freedom of the eigenvalues can be weakened. More precisely, we only require the absolute collision-freedom on the non-blocked parts of the spectrum with respect to {φ_k}_{k=0}^{K−1}, i.e. we only require that the sets Λ_k are absolutely collision-free. In order to propagate the phase, there have to be at least two common indices between consecutive non-blocked parts.

Theorem . not only allows us to recover the signal and the system's eigenvalues simultaneously but also reduces the required number of samples. In the statements before, the number of measurements to apply Prony's method is always a multiple of the squared dimension, i.e. we require O(N²) samples. In Theorem . the number of spatiotemporal samples mainly correlates with the support sparsity s := max{#[I_k] : k = 0, …, K−1}; the number of samples is thus bounded by 2Ks². Notice that we need at most N vectors to build a sampling set allowing parameter identification and phase retrieval. For instance, the sampling vectors may be constructed such that supp φ̃_k = {k, …, k+s−1} for k = 0, …, N−s and s ≥ 3. We then employ only O(Ns²) measurements. For a fixed sparsity s, we thus only need linearly many spatiotemporal samples.

Corollary . . Under the assumptions of Theorem . , the eigenvalues of A ∈ C^{N×N} and the unknown signal x ∈ C^N are identifiable from O(N) spatiotemporal samples.
The idea of blocking a part of the spectrum to reduce the number of required spatiotemporal samples clearly transfers to Theorems . and . . The indices of the recovered eigenvalues are then determined by the strict, symmetric decay; so the index separation, phase propagation, and winding direction determination are not required, although the supports of {φ̂_k}_{k=0}^{K−1} should still form a full cover. Considering Theorem . exemplarily, we instead need that, for every j ∈ {0, …, N−1}, there exists at least one index k such that ℜ[x̂_j φ̂̄_{k,j}] ≠ 0 to recover all components of â, and two indices k_1, k_2 such that φ̂_{k_1,j} and φ̂_{k_2,j}, interpreted as two-dimensional real vectors, are linearly independent to recover all components of x̂.

Phase retrieval & system identification in infinite dimensions
Up to this point, we have only considered the finite-dimensional setting. The central idea of applying Prony's method to identify the eigenvalues of the system and the unknown signal simultaneously is, however, extendable to the infinite-dimensional setting too. In the following, we consider an infinite-dimensional, complex Hilbert space H and call an invertible, bounded, linear operator A: H → H diagonalizable if A can be factorized into A = S Λ S^{−1}, where S: ℓ₂(Z) → H is an invertible, bounded, linear operator, and Λ is a diagonal operator containing the eigenvalues of A. Similarly to the finite-dimensional setting, the temporal samples for one sampling vector φ_k are given by ( ), where y = S* x, φ̃_k = S^{−1} φ_k, and I_k := supp φ̃_k ⊂ Z. If supp φ̃_k is finite, the sum on the right-hand side becomes finite such that Prony's method may be applied to recover the present eigenvalues (without indices). In order to determine the complete spectrum, the finite supports of φ̃_k have to form a full cover of Z, which is only possible for infinitely many sampling vectors, i.e. K = ∞. To align the recovered subsets, we rely again on the parameter identification and phase retrieval properties in ( – ). In sum, we obtain the following recovery guarantee for infinite-dimensional Hilbert spaces.

Theorem . . Let A: H → H with absolutely collision-free eigenvalues be diagonalizable by a known S: ℓ₂(Z) → H, where H is an infinite-dimensional Hilbert space and Z an infinite countable index set. Let {φ_k}_{k=0}^{∞} ⊂ H allow parameter identification and phase retrieval with finitely supported S^{−1} φ_k, and let y = S* x be elementwise non-zero for the unknown x ∈ H. Then the eigenvalues λ_j with j ∈ Z of A and the signal x are determined by the spatiotemporal samples ( ) up to a global phase.
Since the statement can be established with the construction in the proof of Theorem . , we omit the proof. Furthermore, Remark . carries over to the infinite-dimensional setting as well. Note that the non-zero assumption on S* x is crucial since otherwise a part of the spectrum is blocked in all spatiotemporal measurements and thus cannot be recovered.
An example for the infinite-dimensional Hilbert space setting is the repeated convolution of periodic functions. For this, let H be the Hilbert space L₂(T) of all square-integrable, one-periodic functions on the torus T. The convolution operator with respect to an absolutely integrable function a ∈ L₁(T) is defined by (conv_a f)(t) := ∫_T a(t−s) f(s) ds for t ∈ T. The convolution operator conv_a is here an isomorphism on L₂(T) due to Young's convolution inequality, see e.g. [ ], and is diagonalized by the Fourier transform F: L₂(T) → ℓ₂(ℤ) given by (F f)_j := ∫_T f(t) e^{−2πi jt} dt. More precisely, we have S^{−1} = F, Z = ℤ, and Λ: c ↦ â ⊙ c, where ⊙ denotes the elementwise multiplication. Due to the support constraints on the Fourier coefficients, the sampling vectors {φ_k}_{k=0}^{∞} are trigonometric polynomials.

Corollary . . Let a ∈ L₁(T) with absolutely collision-free Fourier coefficients â be unknown, let {φ_k}_{k=0}^{∞} be a set of trigonometric polynomials allowing parameter identification and phase retrieval, and let x̂ be elementwise non-zero for the unknown x ∈ L₂(T). Then a and x are determined by the spatiotemporal samples ( ) up to global phase.
The proposed eigenvalue and signal identification can be generalized to arbitrary Banach spaces X that are isomorphic to a sequence space like ℓ_p(Z). In this case, the inner products have to be replaced by appropriate dual pairings.

Sensitivity
In the previous sections, we have shown that the dynamical phase retrieval and system identification problem is solvable from exact measurements under certain assumptions. In the following, we study the situation for disturbed measurements. Since our constructive proofs have heavily relied on Prony's method, the sensitivity also mainly depends on it. On the basis of Potts & Tasche [ ], we first consider the sensitivity of the approximate Prony method; hereby, we follow the proofs of [ ] for real-valued exponential sums and generalize them to the complex setting. In a second step, we analyse the error propagation in dynamical phase retrieval.
Sensitivity of Prony's method. Essentially, the (approximate) Prony method is a two-step approach to determine the parameters of the exponential sum ( ). In the first step, the unknown bases z_j are recovered using a singular value decomposition and determining the roots of the Prony polynomial. In the second step, the unknown coefficients c_j are computed by solving a linear least-squares problem. To analyse the sensitivity of the first step, we require the following lemma estimating the norm of a rectangular Vandermonde matrix by the maximal radius of the bases.

Lemma . . For z ∈ C^N, the Vandermonde matrix V_M(z) satisfies ( ) and thus ( ).

Proof. The assertion immediately follows from ( ).

Further, we need a left inverse of the rectangular Vandermonde matrix. The inverse of a quadratic Vandermonde matrix has been well studied in the literature [ , , , , , , , ] and is given by ( ), where σ_k^{(ℓ)} denotes the kth elementary symmetric polynomial without the ℓth variable, which is more precisely defined by ( ), and where d_ℓ is the product of differences ( ). The classical elementary symmetric polynomials are based on all elements of z, i.e. without the condition j_1, …, j_k ≠ ℓ, and are denoted by σ_k(z).
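The explicit inversion formula can be checked numerically; the following sketch uses my reconstruction of the classical formula (sign convention and monomial ordering are assumptions, not taken from the paper) and compares it against the defining identity V^{−1} V = I:

```python
import numpy as np

def esym(vals):
    """Elementary symmetric polynomials e_0, ..., e_n of vals, read off from
    the coefficients of prod_j (t - vals_j): e_k = (-1)^k * coeff[k]."""
    coeff = np.poly(vals)
    return np.array([(-1) ** k * coeff[k] for k in range(len(vals) + 1)])

def vandermonde_inverse(z):
    """Entries of V(z)^{-1} for V(z) = (z_k^j)_{j,k}, assembled from the
    elementary symmetric polynomials without one variable and the
    difference products d_l = prod_{m != l} (z_l - z_m)."""
    N = len(z)
    inv = np.zeros((N, N), dtype=complex)
    for k in range(N):
        others = np.delete(z, k)            # all nodes except z_k
        e = esym(others)                    # e_0, ..., e_{N-1} without z_k
        d = np.prod(z[k] - others)          # difference product d_k
        for j in range(N):
            inv[k, j] = (-1) ** (N - 1 - j) * e[N - 1 - j] / d
    return inv

rng = np.random.default_rng(0)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # distinct nodes a.s.
V = z[None, :] ** np.arange(4)[:, None]                   # V[j, k] = z_k^j
```

Row k of the inverse holds the monomial coefficients of the kth Lagrange basis polynomial, which is exactly why the difference products d_ℓ enter the norm bound of the following proposition.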

Lemma . (G [ ])
. The elementary symmetric polynomials are bounded by

Proof. For convenience, we give the brief proof from [ ]. On the basis of Vieta's formula, the elementary symmetric polynomials are related to the polynomial

Choosing  = −1, we obtain the assertion for real and positive   ,  = 0, . . .,  − 1. The general assertion then follows from

Defining the product radius   and the minimal separation   of the bases in  as

the norm of the inverse Vandermonde matrix is bounded as follows.
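As a numerical aside to the proof above, Vieta's relation and the resulting bound are easy to check; the helper below (our naming, using the sign convention of expanding the product of (x + z_j) rather than substituting x = −1) computes all elementary symmetric polynomials at once.

```python
import numpy as np

def elementary_symmetric(z):
    """All elementary symmetric polynomials e_0(z), ..., e_M(z) via
    Vieta: prod_j (x + z_j) = sum_k e_k(z) * x**(M - k)."""
    return np.poly(-np.asarray(z))   # np.poly(r) expands prod_j (x - r_j)

z = np.array([0.5 + 0.2j, -0.3j, 1.1])
e = elementary_symmetric(z)
e_abs = elementary_symmetric(np.abs(z))
# Triangle inequality: |e_k(z)| <= e_k(|z|) for every k, and evaluating
# the product at x = 1 sums these bounds to prod_j (1 + |z_j|).
assert np.all(np.abs(e) <= e_abs + 1e-12)
assert np.isclose(e_abs.sum(), np.prod(1 + np.abs(z)))
```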
Proposition . . For  ∈ C * with distinct elements, the inverse of the quadratic Vandermonde matrix  () satisfies

Proof. The bound follows immediately from the inversion formula ( ) and from applying Lemma . to the sum over the elementary symmetric polynomials  (ℓ) with fixed ℓ as well as multiplying the estimates for the row sums by the missing factor

The norm estimates regarding the Vandermonde matrix allow us to study the quality of the Prony polynomial for perturbed measurements. If the error is small, the true bases are nearly roots; so we may hope that the first two steps of Algorithm . approximate the bases well. Recall that the approximate Prony method is based on the assumption that the measurement error  with |ℎ ℓ +  ℓ | ≤  is small enough such that the singular values of the unperturbed Hankel matrix fulfil   ( ) ≥ 2  2. The spectral norm is here bounded by

Theorem . . Let  > 2, and let γ be a normalized right singular vector to the smallest singular value σ of the perturbed Hankel matrix ( ) with respect to the exponential sum ( ). Then the corresponding polynomial P () = Σ γ   satisfies

Proof. Let ν be the corresponding left singular vector, i.e. H γ = σ ν. Incorporating ( ) and ( ) into this equation, we obtain

for ℓ = 0, . . .,  −  − 1. In matrix-vector form, these equations are given by

Multiplying with the left inverse, we obtain

Taking the squared Euclidean norm, bounding the spectral norm by the row-sum norm, and applying Proposition . yields the assertion.
Theorem . . Let  > 2, let γ be a normalized right singular vector to the smallest singular value σ of the perturbed Hankel matrix ( ) with |ℎ ℓ − hℓ | ≤ , and let  −1 be the smallest non-zero singular value of the unperturbed Hankel matrix ( ). Then the corresponding polynomial P () = Σ γ   satisfies

Proof. First assume γ ∉ ker  . Letting  = proj ker  γ, the projection  is a possibly unnormalized right singular vector for the singular value zero. Lemma . implies that the polynomial  () = Σ     has the roots  0 , . . .,  −1. Therefore, we can write

Now, since ( γ −  ) ⊥ ker  , we obtain

Combining the above inequalities and applying Lemma . , we establish the assertion.
For the remaining case γ ∈ ker  , the bases   are roots of P by Lemma . .
Remark . . The above Theorems . and . essentially state that the true bases are nearly roots of the perturbed Prony polynomial. Therefore, we nurture the hope that the perturbed roots are close. Although this seems plausible for generic polynomials, one can construct pathological cases of very sensitive polynomials, where already slight disturbances of the coefficients have tremendous effects on the roots. In [ ], the author tries to establish an explicit bound on the reconstruction error regarding the roots of the Prony polynomial, which we initially wanted to adapt to our setting. Unfortunately, the key theorem there, studying a linear perturbation of the coefficients of a polynomial, cannot be applied to our setting, since here the perturbations  ℓ in the measurements hℓ = ℎ ℓ +  ℓ lead to non-linear perturbations of the coefficients of the Prony polynomial.
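The statement of the preceding remark is easy to observe numerically: for slightly perturbed samples, evaluating the Prony polynomial obtained from the perturbed Hankel matrix at the true bases yields values of the order of the perturbation. The parameter choices below are ours and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.array([0.9 * np.exp(0.4j), 1.05 * np.exp(-1.0j), 0.95 * np.exp(2.0j)])
c = np.array([1.0 + 1.0j, -2.0 + 0.0j, 0.5 - 1.0j])
N, M = 16, 3
h = np.array([np.sum(c * z**l) for l in range(N)])

# slightly perturbed samples
h_pert = h + 1e-10 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Prony polynomial coefficients from the kernel of the perturbed Hankel matrix
H = np.array([[h_pert[l + m] for m in range(M + 1)] for l in range(N - M)])
g = np.linalg.svd(H)[2][-1].conj()

# the TRUE bases are nearly roots of the perturbed Prony polynomial
vals = np.polynomial.polynomial.polyval(z, g)
assert np.max(np.abs(vals)) < 1e-6
```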
In the third step of Prony's method, the coefficients  of the exponential sum ( ) are determined by solving   ()  = h in the least-squares sense, i.e. we have to determine the minimizer of ‖  ()  − h‖ 2. The minimizer is given by  †  () h, where  †  () denotes the Moore-Penrose inverse. To estimate the reconstruction error with respect to , we need to estimate the norm of the Moore-Penrose inverse. For this, we exploit that the Moore-Penrose inverse inverts the matrix on its range and vanishes on the orthogonal complement of the range. For an arbitrary full-rank matrix, the Moore-Penrose inverse is therefore the left inverse with the smallest norm.

Proposition . . Let  ∈ C × with  ≥  be a full-rank matrix, and let  + be an arbitrary left inverse. For every 1 ≤  ≤ ∞, the Moore-Penrose inverse then satisfies

Proof. Since every left inverse  + fulfils  +  =  , all left inverses coincide on the range of . The Moore-Penrose inverse is now the unique zero continuation from the range to the whole space C , which geometrically means that the Moore-Penrose inverse is the projection onto ran  composed with the unique inverse on the range. For the induced matrix norm, this means

This argumentation holds for all induced matrix norms and not only for the -norm.
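The minimal-norm property can be verified numerically for the spectral norm; the sketch below (a random instance of our construction) builds an arbitrary left inverse by adding a term that vanishes on the range of the matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))             # full-rank tall matrix
A_pinv = np.linalg.pinv(A)                  # Moore-Penrose inverse
assert np.allclose(A_pinv @ A, np.eye(3))   # it is a left inverse

# Every other left inverse differs by a term vanishing on ran(A):
P = np.eye(5) - A @ A_pinv                  # projection onto ran(A)-perp
B = A_pinv + rng.standard_normal((3, 5)) @ P
assert np.allclose(B @ A, np.eye(3))        # still a left inverse

# ... but its spectral norm can only be larger.
assert np.linalg.norm(A_pinv, 2) <= np.linalg.norm(B, 2) + 1e-12
```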
Using this property of the Moore-Penrose inverse, we may immediately estimate the condition number ‖ †  ()‖ 2 ‖  ()‖ 2 of the Vandermonde matrix   () if the bases  are known.

Proposition . . The condition number of the Vandermonde matrix   () is bounded by

Proof. The bound follows from Lemma . and from Propositions . and . with the left inverse  + ().

Let  and  be the parameters of the exponential sum ( ). The least-squares solution η of the perturbed equation system satisfies

Proof. The inequality follows immediately from ‖ − η‖ ∞ ≤ ‖ †  ()‖ ∞ ‖ − h‖ ∞ and from applying Propositions . and . with the left inverse  + ().

Certainly, the computed bases β are themselves only approximations of  in practice. Therefore, besides the right-hand side h, the Vandermonde matrix   ( β) is perturbed too. For studying the effect on the recovered coefficients, we need the following lemmata.
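To illustrate how strongly the conditioning of the Vandermonde system depends on the node configuration, the following sketch (our construction) compares well-separated nodes on the unit circle with clustered nodes.

```python
import numpy as np

def vander_cond(z, N):
    """Spectral condition number of the rectangular Vandermonde matrix
    with entries z_j**l, l = 0, ..., N-1 (notation ours)."""
    V = np.vander(z, N, increasing=True).T
    return np.linalg.cond(V)

# Well-separated nodes on the unit circle give orthogonal columns ...
z_sep = np.exp(2j * np.pi * np.arange(4) / 4)
# ... while clustered nodes inflate the condition number drastically.
z_clu = np.array([0.99, 1.0, 1.01, 1.02])
assert vander_cond(z_sep, 20) < 1.01
assert vander_cond(z_clu, 20) > 1e3
```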

Lemma . . For  ∈ C , and for β ∈ C with ‖ − β‖ ∞ ≤ , it holds

Proof. The lemma is established by

Lemma . .

Proof. Using the triangle inequality, we may estimate the minimal separation by

Lemma . .

Proof. We use the following complex mean value theorem [ , Thm . ]: Let  be a holomorphic function defined on an open convex set  ⊂ C, and let  and  be two distinct points in . Then there exist  1 ,  2 ∈ (, ) such that

where (, ) denotes the open line segment

On the basis of this complex mean value theorem, we obtain

Theorem . . Let  and  be the parameters of the exponential sum ( ). The least-squares solution η of the perturbed equation system   ( β) η = h with ‖ − h‖ ∞ ≤ , ‖ − β‖ ∞ ≤ , and  <  /2 satisfies

Proof. Due to  <  /2, the perturbed Vandermonde matrix   ( β) has full rank. Further, the reconstruction error may be estimated by

The first factor may be estimated by applying Proposition . with the perturbed left inverse, followed by Proposition . , Lemma . , and Lemma . , yielding

Using Lemma . and that ‖‖ ∞ ≤ ‖ †  ()‖ ∞ ‖‖ ∞ together with Propositions . and . , we finally arrive at

Sensitivity of spectrum and signal

On the basis of the sensitivity analysis of Prony's method, we analyse the error propagation in dynamical phase retrieval. For this, we assume that the unknown bases   λ and coefficients   c of the exponential sum describing the measurements ( ) have been approximately computed.
In the following, we denote the true bases and coefficients by

where the bijective map  : {0, . . .,  − 1} × {0, . . .,  − 1} → {0, . . .,  2 − 1} describes the relation between the indices. Assuming that the recovered bases β and coefficients η satisfy ‖ β − ‖ ∞ ≤  and ‖ η − ‖ ∞ ≤ , where  should be small enough such that the mapping  can be recovered up to the winding direction by the above constructive proofs, i.e. the error is small enough such that the order of the absolute values |   | remains unchanged, we want to estimate the errors in the recovered spectrum λ and signal x. Note that β ( ,) and η ( ,) are simply conjugated for the opposite winding direction.
In line with the above procedures, where firstly the magnitudes of the unknown variables are determined, and secondly the phase is propagated between the elements, we decouple the sensitivity analysis of absolute value and phase. Further, we first discuss the sensitivity of the unknown operator spectrum, followed by the analysis of the unknown signal, and finally the error propagation for multiple sampling vectors.

Sensitivity of the spectrum
The recovered bases β already contain estimates of the squared modulus of the spectrum . After recovering the relation  (up to the winding direction), the magnitude of the spectrum is easily obtained by taking the square root, i.e.
( )

The sensitivity of the magnitude computation may easily be estimated via the mean value theorem.
Lemma . .

Proof. The statement immediately follows from applying the mean value theorem and the reverse triangle inequality via

The second assertion is a trivial consequence.
Recall that for the phase of λ , we first find the element with the largest magnitude, say λ , then set the phase of λ to be zero due to the global phase ambiguity, and finally propagate the phase to λ using the relative phase encoded in   ( ,). More precisely, exploiting β ( ,) ≈   λ , we retrieve the phase of λ by

where | λ | has been computed by ( ) in the first step. Note that this phase propagation is a very simple method, which however allows us to analyse the propagation error. For doing this, we assume that the map  given in ( ) has been identified with respect to the true winding direction. Otherwise, we consider the conjugated recovered spectrum λ without loss of generality. For simplicity, we first consider the phase propagation only between two elements. To estimate the sine of the phase difference, we exploit the geometrical relation between   ( ,) and β ( ,) schematically presented in Figure . Using the well-known sine relation of the right-angled triangle, we have

Coupling the recovery of absolute values and the phase, we may estimate the total recovery error for the spectrum , which mainly depends on  −∞.

Proposition . . Assume ‖ β − ‖ ∞ ≤ , and estimate  by ( ) and ( ), where the true winding direction is used without loss of generality, and where the phase is propagated from the element largest in magnitude. If  <  2 −∞ , then we have

Proof. Let α ,   be the phases of λ ,   respectively. We decouple the phase and magnitude error by

The magnitude error may simply be estimated using Lemma . via

For the phase error, assume that   is the eigenvalue with the largest magnitude, set arg(  ) = 0, and propagate the phase from   to the remaining   by ( ). The difference between the unimodular exponentials is now

where the last inequality holds by Lemma . .
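Measured against exact products, the two-step spectrum recovery just analysed (magnitudes from the diagonal products, phase propagated from the element largest in magnitude) can be sketched as follows. The index map is represented by the matrix index pair itself for readability, and all names are ours.

```python
import numpy as np

def recover_spectrum(beta):
    """Recover lam (up to global phase) from the exact products
    beta[j, k] = lam_j * conj(lam_k)."""
    # Step 1: magnitudes from the 'diagonal' products |lam_j|**2.
    mag = np.sqrt(beta.diagonal().real)
    # Step 2: set the phase of the element largest in magnitude to zero
    # and propagate the phase via beta[j, K] = lam_j * conj(lam_K).
    K = int(np.argmax(mag))
    return beta[:, K] / mag[K]

lam = np.array([0.8 * np.exp(0.3j), 1.2 * np.exp(-2.1j), 1.0 * np.exp(1.0j)])
rec = recover_spectrum(np.outer(lam, lam.conj()))
```

The result coincides with the true spectrum up to the global phase fixed in the second step; for perturbed products, the error behaves as described by the preceding proposition.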

Sensitivity of the signal
As discussed in the previous sections, the components of η are in line with the structure of ( ) meaning η ( ,) ≈   c with   = ȳ   = ( * )  ( −1 )  .
With respect to the above proofs, we recover the transformed signal  =  *  similarly to the spectrum . Thus, we first recover the magnitudes via the real and positive values η ( ,) , then assume that the component c largest in magnitude is real and positive, and spread the phase from c to every other c  using the relative phase encoded in η ( ,). Because of the phase propagation between the partial spectra, an error will appear in the phase of the eigenvalues. Fortunately, the magnitude of the eigenvalues is not affected.
To demonstrate the issue in more detail, let us, for the moment, consider two partial spectra Λ0 and Λ1 and assume

For simplicity, we assume that the winding directions are already aligned. If we now propagate the phase from Λ0 over λ0, and λ1, to Λ1 , then the phases in Λ1 have to be shifted by arg( λ0, ) − arg( λ1, ). Since the phase of λ1, is already defective, the error within Λ1 may accumulate to at most 2. If we want to align the global phase of the entire spectrum, we may take the element with the largest magnitude in Λ0 , look for the shortest path over the partial spectra Λ to λ , and propagate the phase along this path. The error of arg( λ ) may then accumulate to at most [1 + 2( − 1)], where  is the number of the employed spectra  . A schematic example of this procedure is shown in Figure . For the phase of the transformed signal , we may apply the same procedure.
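The linear accumulation of the phase error along such a propagation path can be illustrated with a small toy computation (our construction): each hop contributes a relative phase that is off by at most δ, so after k hops the accumulated phase error is at most kδ.

```python
import numpy as np

rng = np.random.default_rng(2)
delta = 1e-3
true_phases = rng.uniform(-np.pi, np.pi, 8)

# relative phases between consecutive elements, each off by at most delta
rel = np.diff(true_phases) + rng.uniform(-delta, delta, 7)

# propagate the phase from the first element along the chain
prop = true_phases[0] + np.concatenate(([0.0], np.cumsum(rel)))

# the phase error grows at most linearly in the number of hops
err = np.abs(prop - true_phases)
assert np.all(err <= delta * np.arange(8) + 1e-12)
```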

Numerical examples
The constructive proofs of the uniqueness guarantees for phase retrieval and system identification can immediately be implemented to obtain numerical algorithms. Because of the sensitivity of Prony's method as the cornerstone of the proofs, these methods will however be vulnerable to noise. Nevertheless, we provide some small numerical examples to accompany the theoretical results and to show that simultaneous identification of system and signal is possible in principle. All numerical experiments have been implemented in Julia.
Example . (Prony's method). First, we apply the approximate Prony method in Algorithm . in the complex setting. For this, we generate exponential sums ( ) by choosing the coefficients and bases from a ring in the complex plane. More precisely, the absolute values are drawn with respect to the uniform distribution, and the phases from U((−π, π]) independently. The mean maximal reconstruction errors for different numbers of addends  and numbers of samples  over the performed reconstructions are recorded in Tables and . For a small number of addends, the parameters are identified fairly well. Increasing the number of addends however leads to a significant loss of accuracy. To some degree, this may be compensated by employing more samples. We repeat this experiment with small additive noise |  | ∼ U([0, 10 −10 ]) and arg(  ) ∼ U((−π, π]), see Tables and .

Example . (Simultaneous signal & system identification). In this numerical example, we consider the recovery of real-valued signals and convolution kernels as discussed in Section . The true, unknown kernel  ∈ R 6 is here chosen as â = (cos(2 )) for  = −3, . . ., 2, where the indices are considered modulo 6. Besides the strictly, symmetrically decreasing kernel, the unknown signal  ∈ R 6 and the known measurement vectors  1 ,  2 ∈ R 6 have been randomly generated such that the requirements for the reconstruction are fulfilled, i.e.  1 and  2 are pointwise independent in the frequency domain, and the assumption [ x φ, ] ≠ 0 is satisfied for  = 0, . . ., 5,  = 1, 2. For reproducibility, the employed signals are shown in Table . Choosing 4 · 6² + 1 = 145 samples to encounter the numerical sensitivity of Prony's method, we now apply the procedure in the constructive proof of Theorem . . The reconstructions ã and x̃ of the true kernel  and signal  are shown in Figure . Aligning the overall sign, we are able to recover the unknown quantities up to errors of ‖â − ã‖ ∞ = 8.650 · 10 −5 and ‖ − x̃‖ ∞ = 1.141 · 10 −3. The theoretical procedure behind Theorem . thus allows the simultaneous recovery of signal and kernel numerically, at least for small instances.
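The random test instances of the Prony example above can be generated along the following lines; the ring radii and the generator are illustrative choices of ours, not the exact distribution used in the experiments.

```python
import numpy as np

def random_exponential_sum(M, N, radii=(0.9, 1.1), rng=None):
    """Draw bases and coefficients from a ring in the complex plane and
    return N samples h[l] = sum_j c_j * z_j**l."""
    rng = np.random.default_rng(rng)
    z = rng.uniform(*radii, M) * np.exp(1j * rng.uniform(-np.pi, np.pi, M))
    c = rng.uniform(*radii, M) * np.exp(1j * rng.uniform(-np.pi, np.pi, M))
    h = np.array([np.sum(c * z**l) for l in range(N)])
    return z, c, h

z, c, h = random_exponential_sum(5, 25, rng=0)
```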
Example . (Multiple sampling vectors). Finally, we consider the identification of complex-valued signals and convolution kernels, i.e.  = circ(), using multiple sampling vectors. For the experiment, the true but unknown signal  ∈ C 50 and kernel  ∈ C 50 have been randomly generated such that  has a non-vanishing Fourier transform and  is absolutely collision-free, see Figure . Further, we generate sampling vectors   ∈ C 50 such that supp φ = {, . . .,  + 3}. Since the support of two consecutive sampling vectors is shifted by one, the generated sampling vectors allow index separation ( ) and phase propagation ( ). Additionally, we ensure that the winding direction determination property ( ) is satisfied for  1 = 0,  1 = 1,  1 = 1,  2 = 2. Further, we employ for each sampling vector a number of samples that is around twice the minimal number required to apply Prony's method. Next, we apply the construction behind the proof of Theorem . line by line, where the procedure in the proof of Theorem . is used to identify the partial spectrum of  with respect to  . The recovered signal x̃ and kernel ã are shown in Figure . Aligning the phase of the true and recovered vectors at the first component, we here observe the reconstruction errors ‖â − ã‖ ∞ = 1.897 · 10 −3 and ‖ − x̃‖ ∞ = 1.563 · 10 −4. As shown in this example, the techniques behind the theoretical proofs may be applied to recover signal and kernel from noise-free samples.

Table : The randomly generated unknown signal  and the known measurement vectors  1 ,  2 in Example . satisfying the assumptions of Theorem . .

Conclusion
Phase retrieval in dynamical sampling is a novel research direction that emerged a few years ago. As for most phase retrieval problems, the main issue is the ill-posedness, especially emerging in the non-uniqueness of the solution. Besides the phase retrieval of the unknown signal, we additionally identify the unknown involved operator from a certain operator class. We have shown that both tasks (phase retrieval and system identification) are in principle simultaneously possible if the spectrum of the operator is (absolutely) collision-free. The employed conditions ensuring the uniqueness of the combined phase and system identification hold for almost all signals, spectra, and measurement vectors. Our workhorse has been the approximate Prony method for complex exponential sums. As a consequence, all proofs are constructive and give explicit analytic reconstruction methods. Unfortunately, Prony's method is notorious for its instability. We have studied its sensitivity in more detail, yielding error bounds that are interesting by themselves outside the context of dynamical sampling. The recovery error of phase and system here centrally depends on the separation of the pairwise products of the spectrum and on how far the involved entities are away from zero. Especially for high-dimensional instances, the separation deteriorates since the pairwise products start to cluster; numerically, the analytic reconstructions can therefore only be applied to small instances or to a series of specially constructed sampling vectors. The main contributions of this paper are the theoretical uniqueness guarantees, whereas the question of practical recovery methods remains open for further research. In particular for phase retrieval, it would be interesting to adapt Prony's method to the occurring quadratic structure or to replace it by a more suitable method.

[ ] R. Alaifari and M. Wellershoff. Stability estimates for phase retrieval from discrete Gabor measurements. J. Fourier Anal. Appl., ( ): - , .

Figure : Schematic example for the propagation of the phase from some starting element over some path to another element. The elements of Λ with the largest magnitude are marked in red. In each partial spectrum, the phase error gets worse by at most 2.
Unknown signal .

Figure : The true and reconstructed signal and kernel in Example . by applying the procedure provided in Theorem . .
Phase of the signal in frequency.

Figure : The true and reconstructed signal and kernel in Example . by applying the procedure behind Theorem . .
the coefficients   of the Prony polynomial by solving a linear equation system. Knowing the Prony polynomial, we may extract the unknown bases   via its roots. The coefficients   of the exponential sum are then determined by an overdetermined linear equation system. To improve the numerical performance, the number of measurements may be increased [ , , ]. On the basis of the rectangular Hankel matrix, the Prony polynomial Σ     = Π ( −   ) with leading coefficient 1, and the linear equations Σ   ℎ ℓ+ =

Lemma . . For the exact samples ℎ ℓ with ℓ = 0, . . .,  − 1, the rectangular Hankel matrix ( ) is of rank , and the following assertions are equivalent: (i) the polynomial  () = Σ ℓ  ℓ  ℓ has the  distinct roots  0 , . . .,  −1 ; (ii) the vector  = ( ℓ ) ℓ spans ker( ), i.e.  = 0.

Proof. With  = (  ) −1 =0 and  = (  ) −1 =0 , we may factorize the Hankel matrix ( ) into

one of the products   1 , 1 c 1 , 2 and   2 , 1 c 2 , 2 has a non-vanishing imaginary part, again due to ( ). The corresponding basis reveals the true winding direction, resulting in the recovery of  0 , . . .,  −1 up to global phase. Considering the coefficients of the temporal samples for each   , we determine   with  ∈ supp   up to global phase. The recovered components of  may now be aligned due to the overlap between the supports in ( ), yielding  up to global phase. Applying the inverse of  * , we finally obtain the wanted signal  up to global phase.
is a multiplication operator, and Z is an infinite countable set like N or Z. The elementwise multiplication operator  : ℓ 2 (Z) → ℓ 2 (Z) is defined by () = λ  for  ∈ Z, with bounded eigenvalues λ ∈ C *, i.e. sup ∈Z |λ | < ∞.