1 Introduction

In this report, I describe some frequency domain modeling techniques using a continuous time approach, with a highly abbreviated version of the model in Taub (2018) to motivate the technical elements. The main tools are the Laplace and Fourier transforms and the continuous-time analogue of the Wiener–Hopf equation. These tools are described in sections 6.A (pp. 216–220), 7.1–7.2 (pp. 221–228), and 7.A (pp. 262–264) of Kailath et al. (2000). An additional reference is Hansen et al. (1991).

2 White noise and serially correlated processes in continuous time

Consider a continuous time process n(t); we would like to treat this as the continuous-time analogue of discrete-time white noise, that is, a Gaussian, zero-mean, and serially uncorrelated process. We are interested in processes that are convolutions of n(t):

$$\begin{aligned} u(t) = \int _0^\infty \nu (\tau ) n(t-\tau ) \mathrm{d}\tau , \end{aligned}$$
(1)

where \(\nu (t)\) is some suitably well-behaved function such as an exponential function of time, and which is an element of the space of square-integrable functions, taking account of discounting:

$$\begin{aligned} L_2(r) \equiv \left\{ f(\cdot ) \Big | \int _0^\infty e^{-rt} \left| f(t)\right| ^2 \mathrm{d}t < \infty \right\} . \end{aligned}$$
(2)

Technically speaking, the formulation in (1) is mathematically ill-defined: the sample paths of white noise in continuous time are not continuous or even measurable. To address this issue, Davis (1977, pp. 79–83) and Doob (1953, p. 426) begin their treatments of white noise by constructing Brownian motion via limiting arguments; for example, Davis develops Brownian motion as the sums of products of Haar functions (a type of step function) with Gaussian random variables. He later provides a second derivation using Ornstein–Uhlenbeck processes, which are well-defined via their correlation functions.

Davis then examines the limiting properties of the Fourier transform of the Ornstein–Uhlenbeck covariance function. The limit of the Fourier transform is simply a constant, which is the Fourier transform of the Dirac \(\delta \)-function. The \(\delta \)-function is also an ill-posed object, but its integral is a step function, which is tractable. Davis then argues that Brownian motion can be well-defined as the integral of white noise, which by the limiting argument has a covariance function that is a step function, and is thus well posed. He concludes that integration of white noise, either in its pure form (yielding Brownian motion) or as convolutions with a filter such as \(\nu \) (yielding a stationary process such as an Ornstein–Uhlenbeck process) is mathematically sound:

“The conclusion we arrive at from the above discussion is that we cannot represent mathematically white noise itself, but if it appears in integrated form then Brownian motion is an appropriate model” (Davis 1977, p. 82).

Doob and also Hansen and Sargent draw a similar conclusion.

The process u(t) with the exponential filter \(\nu (\tau ) = \mathrm{e}^{-\eta \tau }\) (developed further below) can be viewed as white noise in the limit as \(\eta \) goes to infinity. Given that white noise is itself ill-defined, it is mathematically more sound to view the Laplace transform of u(t) as occurring before taking that limit, that is, as the transform of the integrated process:

$$\begin{aligned} u(t) \equiv \int _0^\infty e^{-\eta \tau } \mathrm{d}N(t-\tau ), \end{aligned}$$

where \(N(t-\tau )\) is a Brownian motion. If \(\eta <\infty \), the Laplace transform of the integral is well-defined, and so one can think of it that way, taking the limit of the Laplace transform as \(\eta \) tends to infinity. This is what Davis (1977, p. 80) does: specifically, he examines the Fourier transform of the (auto)covariance function of the Ornstein–Uhlenbeck process, that is

$$\begin{aligned} \hbox {cov}(u(t), u(s)) \equiv \sigma ^2 \mathrm{e}^{-\alpha |t-s|} \end{aligned}$$

then lets the salient parameter (\(\alpha \) in his notation) tend to infinity, with the variance \(\sigma ^2\) scaled in proportion to \(\alpha \) so that total power is held fixed; this yields a flat spectrum, which he notes is the transform of a delta function, which is the correlation function of white noise. Hansen and Sargent carry out a similar operation: see their first example (Hansen et al. 1991, p. 213).
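This limiting argument can be illustrated symbolically. The following sketch is my own illustration: it computes the Fourier transform of the Ornstein–Uhlenbeck covariance under the normalization \(\sigma ^2 = \alpha /2\) (a choice that holds total power fixed as \(\alpha \) grows, rather than Davis's exact notation) and confirms that the spectrum flattens to a constant.

```python
import sympy as sp

alpha, omega, tau = sp.symbols('alpha omega tau', positive=True)

# Fourier transform of the Ornstein-Uhlenbeck covariance
# (alpha/2)*exp(-alpha*|tau|): by symmetry, twice the one-sided
# cosine transform of the exponential.
S = 2 * (alpha / 2) * sp.integrate(sp.exp(-alpha * tau) * sp.cos(omega * tau),
                                   (tau, 0, sp.oo))
print(sp.simplify(S))                           # alpha**2/(alpha**2 + omega**2)

# As alpha -> oo the spectrum flattens to the constant 1: the Fourier
# transform of the Dirac delta, i.e., the white noise limit.
print(sp.limit(sp.simplify(S), alpha, sp.oo))   # 1
```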

As Kailath, Sayed, and Hassibi, and also Hansen and Sargent note, the Laplace transform is a special case of the Fourier transform, and care must be taken to ensure that the integration inherent in the transforms converges. Kailath, Sayed, and Hassibi refer to exponential boundedness of the processes; this is a specialization of the more general requirement that the functions in question have no poles in the domain of interest. Specifically, as elaborated by Hansen and Sargent, I assume that functions are analytic in a strip along the imaginary line, and the imaginary line itself is in this strip because of discounting. Moreover the endogenous processes that will later be generated by optimization and equilibrium will automatically satisfy this criterion.

For modeling purposes it is useful to assume that the process u(t) is an Ornstein–Uhlenbeck process, that is, the analogue of an autoregressive process in discrete time settings. The weighting filter is then in exponential form:

$$\begin{aligned} \nu (\tau ) = \mathrm{e}^{-\eta \tau }. \end{aligned}$$

In the limit, this process becomes a Brownian motion at \(\eta =0\), and a white noise process in the limit at the other extreme, \(\eta =\infty \).
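These two limits can be seen in simulation. The sketch below is a minimal illustration of my own (not taken from the references): it discretizes u(t) by Euler–Maruyama, using the fact that u solves \(\mathrm{d}u = -\eta u \, \mathrm{d}t + \mathrm{d}N\), and compares the sample autocovariance with the theoretical value \(\mathrm{e}^{-\eta |\tau |}/(2\eta )\); the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ou(eta, dt=1e-3, T=200.0):
    """Euler-Maruyama discretization of du = -eta*u dt + dN, i.e. of
    u(t) = int_0^oo exp(-eta*tau) dN(t - tau)."""
    n = int(T / dt)
    u = np.empty(n)
    u[0] = rng.normal(0.0, np.sqrt(1.0 / (2 * eta)))  # stationary start
    dN = rng.normal(0.0, np.sqrt(dt), size=n - 1)     # Brownian increments
    for k in range(n - 1):
        u[k + 1] = u[k] - eta * u[k] * dt + dN[k]
    return u

eta, dt = 2.0, 1e-3
u = simulate_ou(eta, dt)
for lag in (0.0, 0.5, 1.0):
    k = int(lag / dt)
    sample = np.mean(u[: -k or None] * u[k:])
    theory = np.exp(-eta * lag) / (2 * eta)
    print(f"lag {lag}: sample {sample:.4f}  theory {theory:.4f}")
```

Raising \(\eta \) makes the sample autocovariance die off almost immediately (the white noise limit), while \(\eta \) near zero produces the long memory of a near-Brownian path.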

Using the continuous time transform as described in Kailath et al. (2000, p. 217), and also in Sect. 3, the s-transform of the filter for a typical input process of an economic model—an Ornstein–Uhlenbeck process, again the analogue of a first-order autoregressive process—is \(\varPhi (s) = \frac{1}{s+\rho }\) (see Davis 1977, p. 80); similarly, the s-transform of a white noise process is the identity matrix I (again, see Davis 1977, p. 80).

3 Fourier transforms of continuous-time processes

The Ornstein–Uhlenbeck process is the continuous-time analogue of the discrete autoregressive process:

$$\begin{aligned} \mathrm{d}x = a x \mathrm{d}t + \mathrm{d}z. \end{aligned}$$

The integral representation is:

$$\begin{aligned} x(t) = \int _{-\infty }^t \mathrm{e}^{a(t-s)} \mathrm{d}z(s). \end{aligned}$$

The Fourier transform of \(\mathrm{e}^{a(t-s)}\) is:

$$\begin{aligned} \frac{1 }{ i\omega - a} \end{aligned}$$

and the covariance function of dz (which corresponds to white noise) is the \(\delta \) function, whose Fourier transform is flat. (A reference is Igloi and Terdik 1999, p. 4.) The spectral density is:

$$\begin{aligned} \frac{1 }{ a^2 + \omega ^2}. \end{aligned}$$

The Fourier transform (and the corresponding Laplace transform) resembles the pole form \(1/(z-a)\) in discrete time models. The building blocks in the s-domain are, therefore, also rational functions, except that causality is associated with poles in the left half plane instead of inside the unit circle. An additional reference is Hansen et al. (1991).

Observe that \(a=0\) yields the Fourier transform of a standard Brownian motion.
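Both the transform and the spectral density can be checked symbolically. The following sketch is a minimal verification of my own, assuming \(a<0\) so that the kernel \(\mathrm{e}^{at}\) is stable:

```python
import sympy as sp

t, omega = sp.symbols('t omega', real=True)
a = sp.Symbol('a', negative=True)   # stable kernel exp(a*t), a < 0

# One-sided Fourier transform of the causal kernel exp(a*t), t >= 0.
F = sp.integrate(sp.exp((a - sp.I * omega) * t), (t, 0, sp.oo))
print(sp.simplify(F - 1 / (sp.I * omega - a)))    # 0

# Spectral density: |F|^2 = F * conj(F) = 1/(a**2 + omega**2).
S = sp.simplify(F * sp.conjugate(F))
print(sp.simplify(S - 1 / (a**2 + omega**2)))     # 0
```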

4 Optimizing in the frequency domain

Whiteman (1985) constructed a discrete time model and then converted the objective itself into z-transform form. The optimization was then over linear operators or filters that were found via a variational derivative of the transformed objective. This was achieved by imposing the constraint that the controls must be a linear filter of the information, and taking the expectation of the objective prior to optimizing over those filters. However, it is essential to reduce the covariance function of the fundamental processes—the white noise fundamentals—to a scalar covariance matrix. In continuous time, the equivalent operation is to make the fundamental covariance function \(R_x(t)\) a Dirac \(\delta \)-function.

An essential step in the variational approach is to transform the conventional time-domain objective—typically a conditional expected value of profit or utility—to the frequency domain. If the fundamental processes are serially uncorrelated—as is the case here by the white noise assumption—then the expectation of such an objective leaves an integral in which the integrand consists of products of functions. Fourier transforming these objects then yields a convolution in the frequency domain, and the variational derivative of these convolutions can then be calculated. Proceeding in this way with abstract functions f and g that are elements of \(L_2\):

$$\begin{aligned} \int _0^\infty \mathrm{e}^{-rt} f(t) g(t) \mathrm{d}t = \int _{a-i \infty }^{a+i\infty } F(s) G^*(r-s^*) \mathrm{d}s, \end{aligned}$$
(3)

where F and G are elements of the Hardy space \(H^2\), the set of functions on the right half plane that are square integrable with respect to the associated inner product:

$$\begin{aligned} H^2(r) = \left\{ F: \int _{a-i \infty }^{a+i\infty } \hbox {tr}\,F(s) F^*(r-s^*) \mathrm{d}s < \infty \right\} \end{aligned}$$

and where the notation \(G^*\) signifies the complex conjugate transpose of G, as in \(G^*(r-s^*)\); the \(r-s^*\) argument captures discounting. The integration is along a line parallel to the imaginary axis on which \(\hbox {Re}(s) = a\). The functions F and G are analytic in the right half plane—that is, F and G have no poles or singularities in the region \(\hbox {Re}(s) > - r\)—and a is chosen small enough to avoid poles and thus yield convergence, that is, \(a<r\). There are two parts to the integrand: the causal part F(s) and the anti-causal part \(G^*(r-s^*)\), reflecting the inner product that is expressed in the objective.
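As a concrete check of (3), take \(f(t)=\mathrm{e}^{-bt}\) and \(g(t)=\mathrm{e}^{-ct}\), so that both sides have closed forms. The sketch below is my own verification; it assumes the frequency-domain integral carries the standard \(1/2\pi i\) normalization, so that closing the contour to the left reduces the right side of (3) to the residue at the causal pole \(s=-b\).

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
s = sp.Symbol('s')
r, b, c = sp.symbols('r b c', positive=True)

# f(t) = exp(-b*t) and g(t) = exp(-c*t), with s-transforms F and G.
F = 1 / (s + b)
Gstar = 1 / (r - s + c)     # G*(r - s*) for a G with real coefficients

# Time-domain side of (3).
lhs = sp.integrate(sp.exp(-r * t) * sp.exp(-b * t) * sp.exp(-c * t),
                   (t, 0, sp.oo))

# Frequency-domain side: close the contour to the left of Re(s) = a and
# pick up the residue at the causal pole s = -b.
rhs = sp.residue(F * Gstar, s, -b)

print(sp.simplify(lhs - rhs))   # 0, both sides equal 1/(r + b + c)
```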

4.1 An objective in the frequency domain

Combining these ingredients yields the s-transform of an objective; as an example, a much-simplified version of the objective in Taub (2018) is:

$$\begin{aligned} \max _{\{B\}} - \int _{a-i\infty }^{a+i\infty } \hbox {tr}\,\Bigg \{ \begin{pmatrix} \varPhi - B (I+\varGamma ) \varLambda \\ - (I+\varGamma ) \varLambda \end{pmatrix} \begin{pmatrix} (1+\varGamma ^*) B^*&\varGamma ^* \end{pmatrix} R \Bigg \} ds, \end{aligned}$$
(4)

where \(\varPhi \) is the s-transform of an exogenous driving process, and \(\varLambda \) and \(\varGamma \) are elements of \(H^2\); the causal and anti-causal parts again reflect the inner product expressed in the objective, and R is the covariance matrix function of the Dirac-\(\delta \) fundamentals e(t) and u(t). The covariance function R is block diagonal:

$$\begin{aligned} R = \begin{pmatrix} R_e &{} 0 \\ 0 &{} R_u \end{pmatrix}. \end{aligned}$$
(5)

It is key that the optimization in Eq. (4) is now over the function B.

4.1.1 The variational condition and solution

Following the steps in Bernhardt et al. (2010), the first-order conditions of the s-transformed objectives can be stated. First, the notation

$$\begin{aligned} {\mathscr {A}}^* \end{aligned}$$

denotes an arbitrary anti-causal function in the s-domain, that is, a function whose poles lie in the right half plane, so that its inverse transform vanishes for \(t>0\).

The variational first-order condition for B in the large shareholder’s objective (4) can be expressed as:

$$\begin{aligned} B \Big [\varLambda (I+\varGamma ) (1+\varGamma ^*) + (1+\varGamma ) (I+\varGamma ^*) \varLambda ^*\Big ] \sigma _{e}^2 = \varPhi (1+\varGamma ^*) \sigma _{e}^2 + {\mathscr {A}}^* \end{aligned}$$
(6)

which is a Wiener–Hopf equation. The solution methods for continuous-time Wiener–Hopf equations outlined in Kailath, Sayed, and Hassibi section 7.A can now be applied.

The equation could be solved if we could simply invert the coefficient of B. However, this inverse would involve values of s in the right half plane that correspond to putting weights on future values of the underlying stochastic processes, which cannot be predicted. This asymmetry in future versus past values of the underlying processes prevents solution via such direct inversion. The solution, therefore, requires an indirect method.

There are three elements of the solution method: factorization, inversion, and projection. In the factorization step, the coefficient matrix is factored into two factors, one that is a function whose zeroes are only in the left half plane, and one whose zeroes are only in the right half plane. There are two inversion steps. In the first inversion step the factor with zeroes in the right half plane is inverted: as a result of the inversion the right-half-plane zeroes then become poles. The projection step is then undertaken: the projection or annihilator operator is applied to eliminate terms that have poles in the right half plane, while preserving elements with poles in the left half plane. Finally, the factor that has zeroes in the left half plane is inverted, yielding a solution that only has poles in the left half plane, corresponding to functions that operate only on the current and past history of the underlying stochastic processes.
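For scalar rational functions, the projection step can be mechanized directly. The helper below is a minimal sketch of my own (reused in the worked example of Sect. 5): it expands its argument in partial fractions and keeps only the terms whose poles lie in the left half plane.

```python
import sympy as sp

s = sp.Symbol('s')

def causal_part(F):
    """Projection {F}_+ for a strictly proper rational F: expand in
    partial fractions, keep the terms with left-half-plane poles."""
    return sp.Add(*[term for term in sp.Add.make_args(sp.apart(sp.cancel(F), s))
                    if all(sp.re(p) < 0
                           for p in sp.roots(sp.fraction(term)[1], s))])

# Example: the pole at +2 is annihilated, the pole at -1 survives.
print(causal_part(1 / ((s + 1) * (s - 2))))   # keeps -1/(3*(s + 1))
```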

Thus, to solve (6), propose a factorization

$$\begin{aligned} GG^* \equiv \varLambda (I+\varGamma ) (1+\varGamma ^*) + (1+\varGamma ) (I+\varGamma ^*) \varLambda ^*, \end{aligned}$$
(7)

where by standard results G can be chosen to be analytic and invertible. Then, the solution is:

$$\begin{aligned} B = \left\{ \varPhi (1+\varGamma ^*) {G^*}^{-1}\right\} _+ G^{-1}, \end{aligned}$$
(8)

where the projection or annihilator operator \(\left\{ \cdot \right\} _+\) retains the part of its argument whose poles lie in the left half plane; it annihilates any purely anti-causal function:

$$\begin{aligned} \left\{ F(s)\right\} _+ = 0 \end{aligned}$$

whenever all the poles of F lie in the region \(\hbox {Re}(s) > 0\).

In the next sections, I describe how to further characterize the solution via spectral factorization, and how to exploit the Ornstein–Uhlenbeck structure of \(\varPhi \) to effectively remove the annihilator operator from the right-hand side.

5 Practical details of spectral factorization and annihilator operations in continuous time

In this section, I examine how factorization and the annihilation operator work in practical examples. I begin by briefly recapitulating an example from Kailath et al., pp. 263–264. Kailath et al. posit a model which has the following Wiener–Hopf equation:

$$\begin{aligned} K(s) S_y(s) = S_{sy}(s) e^{s\lambda } - G(s). \end{aligned}$$
(KSH 7.A.4)

Here, K(s) is the Laplace transform (s-transform) of the unknown filter that is to be found; G(s) is the Laplace transform of a purely anticausal but otherwise arbitrary function g(t)—that is, \(g(t)=0\) for \(t>0\), so that G(s) is analytic on the left half plane only—corresponding to the principal part function \(\sum _{-\infty }^{-1}\) in the discrete time setting; \(S_{sy}(s)\) and \(S_y(s)\) are the Laplace transforms of covariance functions:

$$\begin{aligned} S_y(s) = {\mathscr {L}}\{R_y\} \qquad S_{sy}(s) = {\mathscr {L}}\{R_{sy}\} \end{aligned}$$

with

$$\begin{aligned} R_y(\tau ) \equiv E [\mathbf{y}(t) \mathbf{y}(t-\tau )] \qquad R_{sy}(\tau ) \equiv E [\mathbf{s}(t) \mathbf{y}(t-\tau )]. \end{aligned}$$

Note that Kailath, Sayed, and Hassibi use potentially confusing notation: the process \(\mathbf{s}(t)\) is in boldface, while the argument s of the Laplace-transformed functions is a completely different object. Thus, \(S_y\) is the Laplace transform of the covariance function of the observed process, \(\mathbf{s}(t)\) is the signal process that the observer wants to extract, and \(R_{sy}\) is the cross-covariance function between the signal and observed processes.

The exponential term appears in the Wiener–Hopf equation because the original equation is shifted:

$$\begin{aligned} R_{sy}(t+\lambda ) = \int _0^\infty k(\tau )R_y(t-\tau )\mathrm{d}\tau , \qquad t>0 \end{aligned}$$

which captures the idea of time-lagged observations.

To solve the problem (Kailath et al. 2000, 7.A.4), first factor \(S_y\). Abstractly, this factorization is:

$$\begin{aligned} S_y(s) = L(s) R L^*(-s^*), \end{aligned}$$
(KSH 7.A.2)

where R is a positive constant, and L(s) is causal, that is, both L and \(L^{-1}\) are analytic on the right half plane.

Now write the solution:

$$\begin{aligned} K(s) = L(s)^{-1} \left\{ {L^*(-s^*)}^{-1} R^{-1} S_{sy}(s) e^{s\lambda }\right\} _+. \end{aligned}$$
(KSH 7.A.7)

The remaining agenda is to carry out a factorization for a practical problem and to demonstrate how the annihilation operation works in that practical setting.

Kailath et al. posit a signal process with Fourier transform spectral density:

$$\begin{aligned} S_s(f) = {\mathscr {F}} \left\{ \mathrm{e}^{-\alpha |t|} \right\} = \frac{2\alpha }{\alpha ^2 + 4 \pi ^2 f^2}. \end{aligned}$$

Note that there is a distinction between the Fourier and Laplace representations. Defining \(s\equiv 2\pi i f\), the equivalent bilateral Laplace transform is:

$$\begin{aligned} S_s(s) = {\mathscr {L}} \left\{ \mathrm{e}^{-\alpha |t|} \right\} = \frac{2\alpha }{\alpha ^2 - s^2}. \end{aligned}$$

The noise process \(\mathbf{v}(t)\) is white noise, which has a flat spectrum:

$$\begin{aligned} S_v(s)=1 \end{aligned}$$

and the sum of the signal and noise, \(\mathbf{y}(t) = \mathbf{s}(t) + \mathbf{v}(t)\), is (because the Laplace transform of a sum is the sum of the Laplace transforms)

$$\begin{aligned} S_y(s) = S_s(s) + S_v(s) = \frac{2\alpha }{\alpha ^2 - s^2} + 1 = \frac{s^2 - \alpha ^2 - 2\alpha }{s^2 - \alpha ^2} = L(s) R L^*(-s^*). \end{aligned}$$

The denominator of this expression is the product \((s - \alpha )(s + \alpha )\). Restate the entire expression as a product:

$$\begin{aligned} \frac{ s + \sqrt{\alpha ^2 + 2\alpha }}{s + \alpha } \frac{ s - \sqrt{\alpha ^2 + 2\alpha }}{s - \alpha } \end{aligned}$$

so that

$$\begin{aligned} L(s)= \frac{ s + \sqrt{\alpha ^2 + 2\alpha }}{s + \alpha } . \end{aligned}$$

(Of course this is just one of the potential factorizations.) Note that L is analytic in the right half plane because its pole, \(-\alpha \), is in the left half plane, and the inverse is analytic in the right half plane because the zero, \(- \sqrt{\alpha ^2 + 2\alpha }\), is in the left half plane.

The final step is to apply the annihilator. To do this, a partial fractions calculation must be done. With \(\lambda =0\) and \(R=1\) in this example, the argument of the annihilator operator is:

$$\begin{aligned} \frac{s - \alpha }{ s - \sqrt{\alpha ^2 + 2\alpha }} \frac{2\alpha }{\alpha ^2 - s^2}. \end{aligned}$$

Writing out the factors in the denominator and canceling terms, rewrite this with partial fractions:

$$\begin{aligned} = \frac{-\frac{2\alpha }{\alpha + \sqrt{\alpha ^2 + 2\alpha }}}{s - \sqrt{\alpha ^2 + 2\alpha }} + \frac{\frac{2\alpha }{\alpha + \sqrt{\alpha ^2 + 2\alpha }}}{\alpha + s} . \end{aligned}$$

The annihilator kills elements that have poles in the right half plane; the first term will therefore be killed:

$$\begin{aligned} \left\{ \frac{-\frac{2\alpha }{\alpha + \sqrt{\alpha ^2 + 2\alpha }}}{s - \sqrt{\alpha ^2 + 2\alpha }} + \frac{\frac{2\alpha }{\alpha + \sqrt{\alpha ^2 + 2\alpha }}}{\alpha + s} \right\} _+ = \frac{\frac{2\alpha }{\alpha + \sqrt{\alpha ^2 + 2\alpha }}}{\alpha + s} . \end{aligned}$$

Therefore, the solution of the Wiener–Hopf equation is

$$\begin{aligned} K(s) = \frac{s + \alpha }{ s + \sqrt{\alpha ^2 + 2\alpha }} \frac{\frac{2\alpha }{\alpha + \sqrt{\alpha ^2 + 2\alpha }}}{\alpha + s} = \frac{\frac{2\alpha }{\alpha + \sqrt{\alpha ^2 + 2\alpha }}}{ s + \sqrt{\alpha ^2 + 2\alpha }} . \end{aligned}$$

This is the Laplace transform of a filter. The actual filter is obtained by inverting the transform: here the inverse is the exponential \(k(t) = \frac{2\alpha }{\alpha + \sqrt{\alpha ^2 + 2\alpha }} \mathrm{e}^{-\sqrt{\alpha ^2 + 2\alpha }\,t}\) for \(t \ge 0\).
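The entire example can be reproduced mechanically. The following sketch is my own verification (with the concrete choice \(\alpha = 1/4\), for which \(\sqrt{\alpha ^2 + 2\alpha } = 3/4\) is rational): it rebuilds the factorization, applies the partial-fraction projection from Sect. 4.1.1, and confirms the closed form for K(s).

```python
import sympy as sp

s = sp.Symbol('s')

def causal_part(F):
    """Keep the partial-fraction terms of F with left-half-plane poles
    (the scalar annihilator sketch from Sect. 4.1.1)."""
    return sp.Add(*[term for term in sp.Add.make_args(sp.apart(sp.cancel(F), s))
                    if all(sp.re(p) < 0
                           for p in sp.roots(sp.fraction(term)[1], s))])

alpha = sp.Rational(1, 4)                # concrete value for illustration
beta = sp.sqrt(alpha**2 + 2 * alpha)     # = 3/4, the zero of the factor

S_s = 2 * alpha / (alpha**2 - s**2)      # signal spectrum
S_y = sp.cancel(S_s + 1)                 # observed spectrum, S_v = 1
L = (s + beta) / (s + alpha)             # causal spectral factor, R = 1

# Check the factorization S_y(s) = L(s) * L(-s).
print(sp.simplify(S_y - L * L.subs(s, -s)))           # 0

# Argument of the annihilator (lambda = 0): L*(-s*)^{-1} * S_s(s).
arg = sp.cancel(S_s / L.subs(s, -s))
K = sp.cancel(causal_part(arg) / L)

K_closed = (2 * alpha / (alpha + beta)) / (s + beta)  # formula in the text
print(sp.simplify(K - K_closed))                      # 0
```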

5.1 A small lemma about the annihilator

The annihilator operator is a linear operator and can be expressed as an integral (Kailath, Sayed, and Hassibi, p. 263):

$$\begin{aligned} \left\{ F(s)\right\} _+ = \int _0^\infty \left[ \frac{1}{2\pi i} \int F(p) \mathrm{e}^{pt} \mathrm{d}p \right] e^{-st} \mathrm{d}t. \end{aligned}$$
(9)

The interpretation is straightforward: perform the inverse Laplace transform with the inner integral (which in conventional situations is integrated along the imaginary axis). Then perform the one-sided Laplace transform on the result, which picks up only the part of the function defined for positive t—the part whose transform is analytic in the right half plane. The following small lemma holds, which is a variation of Whittle’s theorem.

Lemma 1

Let F be analytic in the right half plane, and let \(a>0\). Then

$$\begin{aligned} \left\{ F^*(r-s^*) \frac{1}{s+a}\right\} _+ = F(r+a)\frac{1}{s+a}. \end{aligned}$$

Proof

I will first demonstrate this for a simple version of F, namely \(F(s)=\frac{1}{s+b}\), \(b>0\)—that is, when F is also the filter for an Ornstein–Uhlenbeck process. In that case, the inner integral of (9) is:

$$\begin{aligned} \frac{1}{2\pi i} \int \frac{1}{-p + r + b} \frac{1}{p+a} \mathrm{e}^{pt} \mathrm{d}p. \end{aligned}$$

Now do partial fractions:

$$\begin{aligned} = \frac{1}{2\pi i} \int \left( \frac{\frac{1}{r+b+a}}{-p +r +b} + \frac{\frac{1}{r+b+a}}{p+a} \right) \mathrm{e}^{pt} \mathrm{d}p. \end{aligned}$$

The integration is along the imaginary axis. This is equivalent (via a Möbius transform) to integrating around the unit circle. Consequently Cauchy’s theorem can be invoked: a function that is holomorphic except for a pole in the right half plane integrates to zero. The pole of the first term in the expression is at \(p=r+b\), which is in the right half plane, and therefore the integral of the first term is zero. The remaining expression is:

$$\begin{aligned} \frac{1}{r+b+a} \mathrm{e}^{-at}. \end{aligned}$$

Now take the outer integral

$$\begin{aligned} \int _0^\infty \left[ \frac{1}{r+b+a} \mathrm{e}^{-at} \right] \mathrm{e}^{-st} \mathrm{d}t = \frac{1}{r+b+a} \frac{1}{s+a}. \end{aligned}$$

This completes the proof for this simple case.

More generally, suppose that f can be represented as a sum of decaying exponentials:

$$\begin{aligned} f(\tau ) = \sum _{k=0}^\infty f_k \mathrm{e}^{-b_k \tau }. \end{aligned}$$

The s-transform of this function is:

$$\begin{aligned} F(s) = \sum _{k=0}^{\infty } f_k \frac{1}{s+b_k}. \end{aligned}$$

Now proceed as in the proof above for each k. \(\square \)
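The simple case of the lemma can also be verified symbolically. The sketch below is my own check, reusing the partial-fraction projection from Sect. 4.1.1: for \(F(s) = 1/(s+b)\) it confirms that the projection of \(F^*(r-s^*)/(s+a)\) equals \(F(r+a)/(s+a)\).

```python
import sympy as sp

s = sp.Symbol('s')
r, a, b = sp.symbols('r a b', positive=True)

def causal_part(F):
    """Keep the partial-fraction terms of F with left-half-plane poles
    (the scalar annihilator sketch from Sect. 4.1.1)."""
    return sp.Add(*[term for term in sp.Add.make_args(sp.apart(sp.cancel(F), s))
                    if all(sp.re(p) < 0
                           for p in sp.roots(sp.fraction(term)[1], s))])

F = lambda z: 1 / (z + b)       # the simple case of the lemma

lhs = causal_part(F(r - s) / (s + a))   # F*(r - s*) = F(r - s): real coefficients
rhs = F(r + a) / (s + a)
print(sp.simplify(lhs - rhs))           # 0
```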

This result is stated and proved in greater generality for matrix systems in Seiler and Taub (2008), Lemma C.18, using state space methods. When general compound expressions of the sort \(\left\{ F G^*\right\} _+\), where both F and G are analytic (that is, their poles are in the left half plane), are viewed from a state space perspective, it is clear that the product will be a function with poles in the left half plane inherited from F and poles in the right half plane inherited from \(G^*\). The annihilator removes the latter poles, while the poles of F survive.

6 Conclusion

Frequency domain methods are a useful tool for modeling economic situations in which there are dynamics, noise, and strategic behavior. After mapping a typical model to an appropriate function space, optimization by agents and the establishment and characterization of equilibria can be tractably carried out via variational methods and fixed point methods, respectively, using operations that are essentially algebraic in nature. Furthermore, numerical simulations of the resulting models are a straightforward application of complementary methods that have been developed in the applied engineering literature.