Abstract
We will now study the basic rules of Fourier Series. You could study these basic principles for years in order to completely understand their many subtleties. The goal of this book is to understand Fourier Series quickly, at a level which allows the reader to study their many applications. The reader is then encouraged to look further for a more in-depth understanding of the topics in the many other resources (Benedetto, Harmonic Analysis and Applications, CRC Press, 1996, [1]; Bracewell, The Fourier Transform and Its Applications, 1999, [3]; Cheney, Approximation Theory, Chelsea Publishing Company, New York, [5]; Rudin, Real and Complex Analysis, McGraw-Hill, 1974, [17]; Tolstov, Fourier Series, Dover, London, 1962, [20]).
2.1 Fourier Series on \(L^2[a, b]\)
We begin with the most basic Fourier Series and outline how we can adjust them to other intervals. We will then outline the mathematics which forms the basis for these claims, and the many implications and structures which give background to the study of Fourier Series.
We must begin with the basic definition of \(L^2[a, b]\):
Definition 2.1.1
The set of functions \(f(t): [a, b] \rightarrow {\mathbb R} \) whose squared integral is finite, i.e., \(\int _a^b f(t)^2 \, dt < \infty \), is referred to as \(L^2[a, b]\), or the square integrable functions on [a, b].
We now state the most basic theorem of this section.
Theorem 2.1.1
Thus, you can represent any function in \(L^2[-\pi ,\pi ]\) as a sum of sines and cosines. We can even state more than this:
Theorem 2.1.2
Note that Theorem 2.1.2 is just a generalization of Theorem 2.1.1. To see this, let \(a=-\pi \), \(b =\pi \), which implies that \(H = \pi \), and \(h = 0\). Let us add a simplified version of Theorem 2.1.2, where the interval is centered about the origin, or where \(a=-T\) and \(b=T\).
Corollary 2.1.3
Theorem 2.1.1 is relatively easy to remember, while the author cannot remember Theorem 2.1.2 without recreating it. It is much easier to understand than it is to memorize, so let us understand how you can get Theorem 2.1.2 from Theorem 2.1.1 without memorization. To do this, we need pictures.
The key which is illustrated in Figure 2.1 is that the first cosine and the first sine in Theorem 2.1.1 (i.e., the cosine and sine terms with k = 1) have exactly one cycle between \(-\pi \) and \(\pi \). If we make the interval \([a, b] = [-2\pi , 2\pi ]\), then we would have to change the cosine and sine terms to be \(\cos (kt/2 )\) and \(\sin ( kt/2)\). This assures that when t reaches the edge of the interval, or \(\pm 2\pi \), we will have \( k t/2 = \pm k \pi \). This assures that the first cosine and sine terms of our new series (using \(\cos (kt/2 )\) with \(k=1\)) will also have exactly one cycle.
2.1.1 Calculating a Fourier Series
The obvious problem with Theorem 2.1.1 is that we do not just have to calculate one integral, but an infinite number of them. The reality is that oftentimes we can calculate all of these integrals simultaneously. The second reality is that except for a certain number of simple functions, the Fourier Series cannot easily be calculated by hand. The Fourier Series will be evaluated by the computer in general, but that is a topic for a later chapter.
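Since in practice these coefficient integrals are evaluated numerically, here is a minimal sketch of computing \(a_k\) and \(b_k\) on \([-\pi ,\pi ]\). The function name and the midpoint-rule discretization are our own choices, not the book's method:

```python
import numpy as np

def fourier_coefficients(f, n_terms, n_samples=20_000):
    """Approximate a_k = (1/pi) * int f(t) cos(kt) dt and b_k similarly
    on [-pi, pi] by the midpoint rule; a[0] stores the average of f."""
    dt = 2 * np.pi / n_samples
    t = -np.pi + (np.arange(n_samples) + 0.5) * dt   # symmetric grid
    ft = f(t)
    a = np.zeros(n_terms + 1)
    b = np.zeros(n_terms + 1)
    a[0] = np.sum(ft) * dt / (2 * np.pi)             # average value of f
    for k in range(1, n_terms + 1):
        a[k] = np.sum(ft * np.cos(k * t)) * dt / np.pi
        b[k] = np.sum(ft * np.sin(k * t)) * dt / np.pi
    return a, b

# For f(t) = t, all a_k vanish and b_k = 2(-1)^(k+1)/k.
a, b = fourier_coefficients(lambda t: t, 5)
```

The loop computes each coefficient independently, which is exactly the "infinite number of integrals" issue, truncated to a finite number of terms.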
Example 1:
Definition 2.1.2
(Even and Odd Functions) A function f(t) is said to be even if \(f(-t) = f(t)\). A function f(t) is said to be odd if \(f(-t) = -f(t)\).
The first obvious reason why we care about even and odd functions is that \(\cos (kt)\) is even for all k and \(\sin (kt)\) is odd for all k. Another set of functions which separate nicely into even and odd functions is the monomials \(t^k\). If k is even, \(t^k\) is even. If k is odd, \(t^k\) is odd.
To understand why this helps, remember that if f(t) is odd, \(\int _{-T}^T f(t) \, dt = 0\). Now, remember that the product of two even functions is even. The product of an even function and an odd function is odd. Finally, the product of an odd function with an odd function is even. These correspond directly to the products of positive and negative numbers, where even is positive and odd is negative, for some rather obvious reasons.
Now, consider the formulas for \(a_k\) and \(b_k\) above. If f(t) is even, then \(f(t) \sin (kt)\) is odd, so the \(b_k\) terms will all be zero. This makes sense because an even function can be represented entirely by cosine (even) terms. Similarly, if f(t) is odd, \(f(t)\cos (kt)\) will be odd, so the \(a_k\) terms are all zero. Thus, the Fourier Series also separates our functions nicely into even and odd terms.
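A quick numerical sketch of this separation, using a midpoint-rule approximation of the integrals on \([-\pi ,\pi ]\) (the test functions below are our own choices): the sine coefficients of an even function and the cosine coefficients of an odd function should vanish.

```python
import numpy as np

n = 100_000
dt = 2 * np.pi / n
t = -np.pi + (np.arange(n) + 0.5) * dt   # grid symmetric about 0

even_f = np.cos(t) + t**2                # even: f(-t) = f(t)
odd_f = np.sin(t) + t**3                 # odd:  f(-t) = -f(t)

b1_even = np.sum(even_f * np.sin(t)) * dt / np.pi   # ~0 (odd integrand)
a1_odd = np.sum(odd_f * np.cos(t)) * dt / np.pi     # ~0 (odd integrand)
a1_even = np.sum(even_f * np.cos(t)) * dt / np.pi   # nonzero; exactly -3
```

The value \(a_1 = -3\) for \(\cos (t) + t^2\) follows from \(\int _{-\pi }^{\pi } \cos ^2(t)\, dt = \pi \) and \(\int _{-\pi }^{\pi } t^2 \cos (t)\, dt = -4\pi \).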
To investigate further we check the numerical approximation of \(\chi (t)\) with 20, 50, and 100 terms. The numerical results for the error are 0.03, 0.01, and 0.008. Thus, the error is going to zero, although very slowly. These results are shown in Figure 2.3. We will spend extensive time in the future understanding the rate at which these series converge, and the many implications of these convergence rates.
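The convergence experiment can be sketched as follows, assuming \(\chi (t)\) is the square pulse equal to 1 for \(|t| < \pi /2\) and 0 otherwise (our assumption; the exact pulse is defined earlier in the chapter), whose cosine coefficients are \(a_k = \frac{2}{k\pi }\sin (k\pi /2)\):

```python
import numpy as np

n = 20_000
dt = 2 * np.pi / n
t = -np.pi + (np.arange(n) + 0.5) * dt
chi = (np.abs(t) < np.pi / 2).astype(float)   # assumed square pulse

def partial_sum(n_terms):
    """Partial Fourier sum of the pulse; b_k = 0 since chi is even."""
    s = np.full_like(t, 0.5)                  # a_0 term: average of chi
    for k in range(1, n_terms + 1):
        a_k = 2 * np.sin(k * np.pi / 2) / (k * np.pi)
        s += a_k * np.cos(k * t)
    return s

# Discrete L2 error for 20, 50, and 100 terms: decreasing, but slowly.
errors = [np.sqrt(np.sum((chi - partial_sum(N))**2) * dt)
          for N in (20, 50, 100)]
```

The slow decrease reflects the discontinuity of the pulse, a theme the chapter returns to when discussing convergence rates.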
Example 2:
Before we move on to the next topic, we will address a second example, pointing the way to the solution which is left as an exercise. Let us represent the extremely simple function \(f(t) = t\) in terms of its Fourier Series, on the simple interval \([-\pi ,\pi ]\). Let us explain why we call this the simple interval. This is because you do not need anything inside \(\cos (kt)\) or \(\sin (kt)\). It can be argued that \([-1,1]\) is a simpler interval, but then you have \(\cos (k\pi t)\), etc. Neither is very difficult.
is not easy for most functions and impossible for some. As a result, we will study methods to calculate these coefficients by other means, most often with numerical approximations.
2.1.2 Periodicity and Equality
For this reason, we can only say that \(f(t) = S(f)\) on the interval \([-\pi ,\pi ]\). This is completely reasonable, because the coefficients \(a_k\) and \(b_k\) only depend on f(t) in that interval. Secondly, remember that f(t) does not need to be periodic at all, and so it is obvious that we cannot expect that S(f)(t) would represent f(t) outside of the defined interval. Perhaps we should refer to S(f)(t) as \(S_\pi (f)(t)\), meaning that it is the Fourier Series on \([-\pi ,\pi ]\), and \(S_{[a, b]}(f)(t)\) to be the Fourier Series on [a, b]. When it is important to make the distinction, we will use this altered notation.
To emphasize this point, we should look at \(S_\pi (f)(t)\), or one of its approximations, on \([-2\pi , 2\pi ]\). This is shown in Figure 2.5.
There are many different books which discuss the relative convergence rates of Fourier Series. These are important and interesting, and the reader is encouraged to look into these topics [1, 3, 17, 20]. They are beyond the scope of this book, however. We will consider only Fourier Series in \(L^2\) of some type or another. We will then use these convergence rates to understand some of the subtleties of the applications.
2.1.3 Problems and Exercises
 1. Compute the Fourier Series for the function
$$\begin{aligned} \chi _{\pi /4}(t) = \left\{ \begin{array}{c} 1 \text{ if } |t| < \frac{\pi }{4} \\ 0 \text{ if } |t| > \frac{\pi }{4} \end{array} \right. \end{aligned}$$(2.10)
on \([-\pi ,\pi ]\) by using the above example as a guide. Also, plot the first few terms of the expansion (a) on \([-\pi ,\pi ]\) and (b) on \([-2\pi , 2\pi ]\).
 2.
Compute the Fourier Series for \(\chi _{\pi /4}(t)\) on \([-2\pi , 2\pi ]\). Plot this on the interval \([-6\pi , 6\pi ]\) or a larger interval and compare to the plot of the function expanded on \([-\pi ,\pi ]\). Realize that it should converge on the whole interval \([-2\pi , 2\pi ]\), and you must use the appropriate sine and cosine terms \( \cos (k t/2) \) and \( \sin (k t/2)\).
 3.
Compute two Fourier Series for the function \(f(t) = t\) (a) on \([-\pi ,\pi ]\) and (b) on \([-2\pi , 2\pi ]\). Plot the approximations using 5, 10, and 15 terms on \([-4\pi , 4\pi ]\).
 4.
(a) Find the necessary cosine and sine terms for expanding on \([-1,1]\) and the appropriate coefficient formulas to approximate a function on \([-1,1]\). (b) Compute the expansion of \(f(t) = t^2\) on \([-1,1]\), and plot the first few terms on \([-3,3]\).
 5.
(a) Figure out the necessary cosine and sine terms and the appropriate coefficient formulas to approximate a function on the interval [1, 3]. (b) Using the result from 4 above, plot the first few terms of the expansion of t on [1, 3] and on \([-1,5]\).
 6.
Find the expansion for \(f(t) = t^3\) on \([-1,1]\). Plot the first few terms on the interval \([-3,3]\).
 7.
Compute the expansion of \(f(t) = \cos (t/2)\) on \([-\pi ,\pi ]\). Plot the first few terms on this interval.
 8.
Compute the expansion of \(f(t) = \cos ^2(t/2)\) on \([-\pi ,\pi ]\). Plot the first few terms on this interval.
 9.
Compute the expansion of \(f(t) = \sin (t/2)\) on \([-\pi ,\pi ]\). Plot the first few terms on this interval.
 10. Compute the expansion of
$$\begin{aligned} f(t) = \left\{ \begin{array}{l@{\quad }l} 0 &{} \text{ for } |t| > \pi /2 \\ t+\pi /2 &{} \text{ for } t\in [-\pi /2,0] \\ \pi /2-t &{} \text{ for } t\in [0,\pi /2] \end{array} \right. \end{aligned}$$(2.11)
on the interval \([-\pi ,\pi ]\). Plot the first few terms on this interval.
 11. Compute the expansion of
$$\begin{aligned} f(t) = \left\{ \begin{array}{l@{\quad }l} 0 &{} \text{ for } |t| > \pi /2 \\ -1 &{} \text{ for } t\in [-\pi /2,0] \\ 1 &{} \text{ for } t\in [0,\pi /2] \end{array} \right. \end{aligned}$$(2.12)
on the interval \([-\pi ,\pi ]\). Plot the first few terms on this interval.
 12. Compute the expansion of
$$\begin{aligned} f(t) = \left\{ \begin{array}{l@{\quad }l} 0 &{} \text{ for } t <0 \\ 1 &{} \text{ for } t \ge 0 \end{array} \right. \end{aligned}$$(2.13)
on the interval \([-\pi ,\pi ]\). Plot the first few terms on this interval. Plot them on \([-2\pi , 2\pi ]\) also.
2.2 Orthogonality and Hilbert Spaces
We simply stated the basics of Fourier Series in the last section. We intentionally presented no proofs, and we also skipped what we will refer to as the geometry of Fourier Series. We now return to the discussion of orthogonality which we touched on in the first chapter. Orthogonality generally comes up in Linear Algebra. Its extension to studying functions here is straightforward. We will quickly review the basic concepts. The first and most obvious statement or question is "What is the difference between orthogonality and perpendicularity?" The answer is there is no difference. They are interchangeable, but mathematicians and most others generally switch to orthogonality when dealing with higher-dimensional vector spaces, i.e., \({\mathbb R}^n\), where \(n> 3\), or function spaces, such as \(L^2[a, b]\) as we defined in the last section.
Let us first begin with some notation. From this point forward, we will cease to recognize the difference between the common dot product, as we have in basic Linear Algebra, and the inner product, which we have yet to define. The dot product of two vectors is given by \(\vec {a} \cdot \vec {b} = \vec {a}^t \vec {b} = \sum _k a_k b_k \). Namely, we multiply and add. Remember that length (common Euclidean distance) is given by \( \Vert \vec {a} \Vert = \sqrt{ \vec {a}\cdot \vec {a} }\). Recall also that the angle \(\theta \) between two vectors is given by \( \vec {a} \cdot \vec {b} = \Vert \vec {a}\Vert \, \Vert \vec {b}\Vert \cos ( \theta ) \).
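These definitions can be checked mechanically; a small sketch with a concrete pair of vectors (our own example values):

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([4.0, 3.0])

dot = a @ b                      # multiply and add: 3*4 + 4*3 = 24
length_a = np.sqrt(a @ a)        # Euclidean length: sqrt(9 + 16) = 5
cos_theta = dot / (np.sqrt(a @ a) * np.sqrt(b @ b))   # 24 / 25
theta = np.arccos(cos_theta)     # the angle between a and b
```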
Definition 2.2.1
Note that we had to use the complex conjugate in the inner product definition above. This allows us to deal with complex-valued functions. If we did not use this, note that a function which is purely imaginary would have a negative squared length, which we do not want. We will oftentimes not include the t in the integral and just write \(\langle f , g \rangle \) for the inner product.
 1.
(Linearity) If f and g are any two functions in an inner product space, then \(c_1 f + c_2 g\) is also a function in that inner product space.
 2.
If \(f \ne 0\), then \(\Vert f \Vert \ne 0\).
 3.
(Triangle Rule) If f and g are in an inner product space, \( \Vert f + g \Vert \le \Vert f \Vert + \Vert g\Vert \).
 4. (Cauchy–Schwarz Inequality) If f and g are in an inner product space,
$$\begin{aligned} | \langle f , g \rangle | \le \Vert f \Vert \Vert g \Vert . \end{aligned}$$(2.14)
Moreover, equality holds only if f is a constant multiple of g, or \(f(t) = c g(t)\).
Let us recall that the Cauchy–Schwarz inequality is an outgrowth of the standard formula in Linear Algebra \( \vec {a} \cdot \vec {b} = \Vert \vec {a}\Vert \, \Vert \vec {b}\Vert \cos (\theta ) \). Since \(|\cos (\theta )|\) is always less than or equal to one, the inequality holds.
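A numerical spot-check of the inequality and its equality case, using a midpoint-rule approximation of the \(L^2[-\pi ,\pi ]\) inner product (the grid size and test functions are our own choices):

```python
import numpy as np

n = 10_000
dt = 2 * np.pi / n
t = -np.pi + (np.arange(n) + 0.5) * dt

f = np.sin(t) + t
g = np.cos(3 * t)

inner = np.sum(f * g) * dt                 # <f, g>
norm_f = np.sqrt(np.sum(f * f) * dt)       # ||f||
norm_g = np.sqrt(np.sum(g * g) * dt)       # ||g||

holds = abs(inner) <= norm_f * norm_g      # Cauchy-Schwarz

# Equality case: g = c f gives |<f, g>| = ||f|| ||g|| exactly (c = 2).
inner_eq = np.sum(f * (2 * f)) * dt
norm_cf = np.sqrt(np.sum((2 * f)**2) * dt)
```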
We will now focus on orthogonality, since it is a key which we must keep in mind. Simple functions such as the monomials \( 1, t, t^2, t^3 \dots \) are in \(L^2\) of any interval, but they are not even close to being orthogonal. Let us now formalize this idea which was introduced in Chapter 1.
Definition 2.2.2
Two functions f(t) and g(t) in \(L^2[a, b]\) are said to be orthogonal if \(\langle f, g \rangle = 0\). A collection or set of functions \(\{o_k(t)\}_{k=0}^N\) is said to be orthogonal if for any pair of functions from the set, \( \langle o_j , o_i \rangle = 0\) as long as \( i \ne j \). In addition, we can say that a set of functions is orthonormal if they are orthogonal, and they all have length one, or \( \Vert o_j \Vert = 1\).
One of the keys to utilizing Fourier Series is that sines and cosines, when adjusted for an interval as in Theorem 2.1.1 and Theorem 2.1.2, are naturally orthogonal. This makes the computation used in Theorem 2.1.1 work. Let us first state the result.
Theorem 2.2.1
(Orthogonality) The functions \(\{\cos (kt)\}_{k=0}^\infty \), and \(\{\sin (kt)\}_{k=1}^\infty \) are orthogonal on the interval \([-\pi ,\pi ]\). In addition, the functions used in Theorem 2.1.2 are also orthogonal on the corresponding interval [a, b].
To prove this, we will need to remember the trigonometric addition identities: \(\sin ( \alpha \pm \beta ) =\sin (\alpha )\cos (\beta ) \pm \cos (\alpha )\sin (\beta ) \) and \(\cos ( \alpha \pm \beta ) =\cos (\alpha )\cos (\beta ) \mp \sin (\alpha )\sin (\beta )\).
Proof: We will proceed to prove three things. (1) All of the cosines are orthogonal to all of the sines, and (2) The cosines are orthogonal to each other, and (3) The sines are orthogonal to each other.
Proof of (3): This is identical to the Proof of (2) with the exception of using \(\cos (mt-nt) - \cos (mt+nt)\) to cancel the cosine terms on the right.
We have proven orthogonality of the functions in Theorem 2.1.1. The proof for the functions in Theorem 2.1.2 is identical as long as you note the following. The functions in Theorem 2.1.2 are adjusted so that, just as in the Proof of part (2) above, the sine terms will be zero at the endpoints. This is due to the fact that the sine terms are adjusted to have exactly one cycle for \(k=1\) and multiple cycles for \(k>1\) (k is an integer).
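The orthogonality relations just proven are easy to verify numerically; a sketch using a midpoint-rule inner product on \([-\pi ,\pi ]\) (the grid size and the range of frequencies checked are arbitrary choices):

```python
import numpy as np

n = 20_000
dt = 2 * np.pi / n
t = -np.pi + (np.arange(n) + 0.5) * dt

def inner(u, v):
    """Midpoint-rule approximation of the L2[-pi, pi] inner product."""
    return np.sum(u * v) * dt

# Check every pair: cos-sin always orthogonal; cos-cos and sin-sin
# orthogonal whenever the frequencies differ.
worst = 0.0
for m in range(1, 6):
    for k in range(1, 6):
        worst = max(worst, abs(inner(np.cos(m * t), np.sin(k * t))))
        if m != k:
            worst = max(worst, abs(inner(np.cos(m * t), np.cos(k * t))))
            worst = max(worst, abs(inner(np.sin(m * t), np.sin(k * t))))

norm_sq = inner(np.cos(3 * t), np.cos(3 * t))   # should be pi, not zero
```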
2.2.1 Orthogonal Expansions
We have avoided discussions of completeness, or whether or not we have enough functions for the expansions which we are suggesting. An interested student should inquire into this, but it is beyond the scope of this book.
Theorem 2.2.2
This theorem is true in any inner product space, but we state it for our current setting. We will emphasize further generalizations when relevant. From now on, we will refer to any topologically complete inner product space as a Hilbert space. We will not go into the details of topological completeness. It suffices to state that all inner product spaces in this book are Hilbert spaces.
2.2.2 Problems and Exercises:
 1.
Prove that the representation in (2.15) is correct. Specifically, show that the norm, or length, of the functions \(\cos (kt)\) and \(\sin (kt)\) is \(\sqrt{\pi }\) on \([-\pi ,\pi ]\), and that the length of 1 is \(\sqrt{2\pi }\) on this interval.
 2.
Challenging: Rewrite the expansion in Theorem 2.1.2 so that it is an orthonormal expansion such as in (2.15). Show that these functions are orthogonal on [a, b]. Find the norms of these functions on [a, b], and give the altered version of Theorem 2.1.2. This theorem should distribute the normalization factors as in (2.15).
2.3 The Pythagorean Theorem
One of the oldest and best-known theorems, familiar to elementary students and others, dates back to the Greeks. Namely, in a right triangle, or a triangle in which one of the angles is \(90^\circ \), the sum of the squares of the lengths of the two legs is the square of the length of the third side, or \(a^2 + b^2 = c^2\). This simple elementary school identity has fundamental importance in Fourier Analysis and Linear Analysis in general. It is also very simple to prove.
Theorem 2.3.1
Stated simply, the squared length of f is identical to the sum of the squared lengths of the sides, or \(\sum _k | \langle f , o_k \rangle |^2\). This is exactly the Pythagorean theorem which is taught in elementary school. The interesting thing is that the proof can be understood in middle school (perhaps), but it is exactly the same in this highly abstract setting.
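The theorem can be illustrated numerically: for \(f(t)=t\) on \([-\pi ,\pi ]\), the sum of the squared coefficients against the orthonormal system \(1/\sqrt{2\pi }\), \(\cos (kt)/\sqrt{\pi }\), \(\sin (kt)/\sqrt{\pi }\) should approach \(\Vert f\Vert ^2 = 2\pi ^3/3\). A midpoint-rule sketch, with truncation and grid sizes that are arbitrary choices:

```python
import numpy as np

n = 20_000
dt = 2 * np.pi / n
t = -np.pi + (np.arange(n) + 0.5) * dt
f = t  # exact squared norm is 2*pi^3/3

# Squared coefficients against the orthonormal system.
coeff_sq = (np.sum(f) * dt / np.sqrt(2 * np.pi))**2
for k in range(1, 1000):
    coeff_sq += (np.sum(f * np.cos(k * t)) * dt / np.sqrt(np.pi))**2
    coeff_sq += (np.sum(f * np.sin(k * t)) * dt / np.sqrt(np.pi))**2

norm_sq = np.sum(f * f) * dt   # direct squared length of f
```

The small remaining gap between `coeff_sq` and `norm_sq` is the tail of the series beyond 1000 terms.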
2.3.1 The Isometry between \(L^2[a, b]\) and \(l^2\)
We will now explore the idea of an isometry. Stated plainly, an isometry is a rule, or transform, which maps one Hilbert space into another Hilbert space, while preserving two things: (a) all distances and (b) all inner products. A simple example in the two-dimensional plane \({\mathbb R}^2\) is a rotation by \(90^\circ \), or by any fixed number of degrees. Since the plane remains fixed, with all of the vectors rotating the same number of degrees, it is an isometry. Other examples are flips about the x- or y-axis, or any other line through the origin.
We will now introduce a couple of definitions which will allow us to express ourselves more easily in the future. We begin with the formal definition of a linear operator.
Definition 2.3.1
(Linear Operator) A linear operator, or linear map, from one Hilbert space H to another space G is a rule which uniquely assigns an element \(g \in G\) to each \(h \in H\). In addition, it must be linear. We will generally denote these by \(\mathcal{L} : H \rightarrow G\), meaning that \(\mathcal{L}(f) = g\), with \(f\in H\), and \(g\in G\). Moreover, linearity means that \(\mathcal{L}( c_1 h_1 + c_2 h_2 ) = c_1 \mathcal{L}(h_1) + c_2 \mathcal{L}(h_2)\).
We will now formally define an isometry between Hilbert spaces.
Definition 2.3.2
The fundamental nature of an isometry is that you can measure distances and angles for functions in one space, say H, by measuring the distances and angles of the images of the functions in the corresponding space, G. This is critical for Fourier Analysis. A great deal of the reason why we use Fourier Series, and in the future the Fourier Transform is that we can often measure something easily in one space, and not so easily in another space. Thus, we choose the place where things are easiest, and then the result extends to the more difficult space. This allows us to analyze the relationships between similar functions, images, or objects in two different ways, and then choose the way which is most clear.
We now define the critical Hilbert space \(l^2\). We are interested in sequences of constants which are either real or complex. We will denote these for now by \(\{ c_k \}\). For the purposes of this book, we will generally have k be either \(k = 0 , 1, 2, 3, \dots \), or \(k = \dots , -2, -1, 0, 1, 2, \dots \).
Definition 2.3.3
The notation \(\overline{b}\) in the inner product is the complex conjugate of b. Namely, if b is complex, \( b= \alpha + i \beta \), then \(\overline{b} = \alpha - i \beta \). This is necessary so that the length of complex sequences is positive. Suppose that a sequence has only one nonzero element b; then its squared length would be \(\Vert b\Vert _2^2 = b \overline{b} = ( \alpha + i \beta )( \alpha - i \beta ) = \alpha ^2 + \beta ^2\).
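A one-line check of this computation in Python's complex arithmetic, with \(b = 3 + 4i\):

```python
# With b = alpha + i*beta, the squared length b * conj(b) is
# alpha^2 + beta^2, a nonnegative real number, whereas b * b
# would in general be complex.
b = 3 + 4j
length_sq = (b * b.conjugate()).real   # 3^2 + 4^2 = 25
naive = b * b                          # (3+4j)^2 = -7+24j: not a length
```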
We now state the critical theorem of this section, which is certainly one of the critical ideas of Fourier Analysis.
Theorem 2.3.2
where we remember that for \(k<0\), \(a_k = b_{-k}\).
Proof: It turns out that we have already proven this theorem. Note that the Pythagorean Theorem 2.3.1 guarantees us that anytime you have a sequence of complete orthonormal functions in a Hilbert space, you can calculate the length or distance of a function, and inner products between two functions, from the orthogonal coefficients of the function.
Thus, the Pythagorean theorem states that any set of orthonormal functions naturally generates an isometry between the elements of the original Hilbert space and the Hilbert space \(l^2\).
2.3.2 Complex Notation
We will now start using \(\exp ( i kt ) \equiv \cos (kt ) + i\sin (kt )\) as the standard functions on \([-\pi ,\pi ]\). Now, we will state the Fourier Series theorem in complex notation.
Theorem 2.3.3
Note first of all that this is much simpler than Theorem 2.1.1 (with the exception of getting used to complex notation). There is only one set of coefficients instead of two. There is only one set of functions (rather than cosines and sines). Also note that \(\exp (ikt)+\exp (-ikt) = 2\cos (kt)\) and \(\exp (ikt)-\exp (-ikt) = 2i\sin (kt)\), so the cosine and sine coefficients and functions are easily recovered from the new functions and coefficients (which will generally be complex).
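These recovery formulas are easy to spot-check numerically (the sample points and frequencies below are arbitrary choices):

```python
import cmath
import math

# Verify exp(ikt) + exp(-ikt) = 2 cos(kt) and
#        exp(ikt) - exp(-ikt) = 2i sin(kt) at a few points.
dev_cos = max(abs(cmath.exp(1j*k*t) + cmath.exp(-1j*k*t) - 2*math.cos(k*t))
              for k in (1, 2, 5) for t in (0.3, -1.2, 2.9))
dev_sin = max(abs(cmath.exp(1j*k*t) - cmath.exp(-1j*k*t) - 2j*math.sin(k*t))
              for k in (1, 2, 5) for t in (0.3, -1.2, 2.9))
```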
We must also redefine the inner product for complex-valued functions at this time.
Definition 2.3.4
One thing to keep in mind with the complex inner product is that it is linear in the first argument (f(t) above) but conjugate linear in the second argument (g(t) above), i.e., \( \langle f(t) , c g(t) \rangle = \overline{c} \langle f(t), g(t) \rangle \). This is necessary to make distances positive for complex vectors or functions.
We now restate the Fourier isometry from \(L^2[a, b]\) to \(l^2\) using the complex notation, which we will tend to use from now on.
Definition 2.3.5
Theorem 2.3.4
We can state the above theorem for \(L^2[a, b]\), by merely changing the expansions. Thus, this holds on \(L^2[a, b]\).
Proof: The first portion is just a restatement of the Pythagorean theorem. The second portion follows from direct substitution.
Differences between Notations
We know from Theorem 2.3.1 that whenever we have a complete orthonormal set of functions, we have a corresponding orthonormal expansion. The question arises, "Why do we have two expansions for Fourier Series, and how do they differ?" Both represent the same series; we just want you to be able to recognize the difference in notation.
2.3.3 Estimating Truncation Errors
An Example:
2.3.4 Problems and Exercises:
 1.
Prove that if f(t) and \(g(t)\in L^2[a, b]\), then for any constants \(\alpha \) and \(\beta \), the function \(h(t) = \alpha f(t) + \beta g(t)\) is also in \(L^2[a, b]\), i.e., \(\Vert h \Vert < \infty \).
 2.
Use basic trigonometric identities to verify that \(e^{ix} e^{iy} = e^{i(x+y)}\).
 3.
Prove the inner product statement in Theorem 2.3.1.
 4. Show that the representation in Theorem 2.3.3 is a valid orthonormal representation. Specifically, show that the functions
$$\frac{1}{\sqrt{2\pi }} e^{ikt}$$
are orthonormal on \([-\pi ,\pi ]\).
 5.
Find the orthonormal representation for Theorem 2.1.2, such as we rewrote Theorem 2.1.1 in Theorem 2.3.3. First, find the appropriate complex exponentials for [a, b], as are suggested by the sine and cosines of Theorem 2.1.2. Make sure that your altered complex exponentials are orthogonal. Add in the normalization factors which will make them orthonormal. Finally, write the final form of the new theorem, as in Theorem 2.3.3.
 6.
Verify that the Fourier isometry holds on \([-\pi ,\pi ]\) for \(f(t) = t\). To do this, (a) calculate the coefficients of the orthogonal Fourier Series from the representation in (2.17), (b) calculate the sum of the squared coefficients, and (c) calculate the norm of the function as \(\int _{-\pi }^\pi f(t)^2 \, dt.\) How many terms in the Fourier Series are necessary to have the isometry error be under \(5 \%\)? How many until you are under \(3 \%\), or \(1 \%\)?
 7.
Verify that the Fourier isometry holds on \([-\pi ,\pi ]\) for \(f(t) = \chi _{\pi /4}(t)\). To do this, (a) calculate the coefficients of the orthogonal Fourier Series from the representation in (2.17), (b) calculate the sum of the squared coefficients numerically, or analytically if possible, and (c) calculate the norm of the function as \(\int _{-\pi }^\pi f(t)^2 \, dt.\) How many terms in the Fourier Series are necessary to have the isometry error be under \(5 \%\)? How many until you are under \(3 \%\), or \(1 \%\)?
 8.
Verify that the Fourier isometry holds on \([-\pi ,\pi ]\) for \(f(t) = t^2\). To do this, (a) calculate the coefficients of the orthogonal Fourier Series from the representation in (2.17), (b) calculate the sum of the squared coefficients numerically, or analytically if possible, and (c) calculate the norm of the function as \(\int _{-\pi }^\pi f(t)^2 \, dt.\) How many terms in the Fourier Series are necessary to have the isometry error be under \(5 \%\)? How many until you are under \(3 \%\), or \(1 \%\)?
2.4 Differentiation and Convergence Rates
2.4.1 A Quandary between Calculus and Fourier Analysis
This is not a valid argument for functions in \(L^2[-\pi ,\pi ]\). If we were dealing with only the continuous functions, then the argument would be valid. The problem is that the integration by parts above assumes that \(f'(t)\in L^2[-\pi ,\pi ]\), but nothing more about \(f'(t)\). We utilized the Fundamental Theorem of Calculus, but that assumes that \(f'(t)\) is continuous. A very extensive resolution of this quandary is given in Tolstov's book [20]. The theorems of Jackson and other approaches are elegantly described in Cheney [5]. The above argument is "morally" correct, and the details have been extensively studied. The reader is encouraged to look more extensively into these details. For the purposes of this book, we will concentrate on the criterion of equation (2.29). This is sufficient to understand Fourier Series, derivatives, and the rates of decay of Fourier Series, which are essential for the applications in later chapters.
2.4.2 Derivatives and Rates of Decay
Now, let us consider some notation. We will use this to consider more general sequences, or to compare one sequence to another. This will be necessary to understand the nature of Fourier coefficients.
Definition 2.4.1
These are known as little "o" and big "O" notation. Thus, if one sequence is smaller than another in the limit, the little "o" applies, implying one is smaller than the other. If the sequences are comparable, then the "O" notation applies, implying that they are of similar magnitude, up to a constant M of some size (which might be huge). Note that \(O(b_k)\) is weaker than \(o(b_k)\): anytime something is little "o" of another sequence it is also big "O" of it, with the constant M as small as you like. The little "o" notation is therefore stronger and preferable when available.
There are many different varieties of the statements for defining when a function is differentiable in terms of its coefficients. Equation (2.30) is correct. Trying to move it to limit equations on the \(c_k\)s becomes difficult mathematically and has led to an incredible number of true theorems. They are all within a very small \(\epsilon \) of the general rule, which is illustrated in (2.31). They are all different, however, and cause confusion. Equation (2.30) is a very good and true guideline. One is referred to [5, 20] and many other books and publications for the details of more exact relationships.
Higher Derivatives
Theorem 2.4.1
Let us investigate the consequences of the series in (2.32). As before, a good rule of thumb is that the terms \(k^{2n} c_k^2\) should be asymptotically smaller than 1/k, since the sum of 1/k is infinite. This would imply that \(k^{2n} c_k^2 = o(1/k)\), or that \(c_k^2 = o(1/k^{2n+1})\), implying that \(c_k = o(1/k^{n+1/2})\). While this is essentially correct, one can make up counterexamples, as we did before, showing that we can have convergence without this condition.
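As a concrete illustration, using coefficient formulas for two standard examples which can be derived by integration by parts: the sine coefficients of \(f(t)=t\) on \([-\pi ,\pi ]\) are \(b_k = 2(-1)^{k+1}/k\), which are \(O(1/k)\) but not \(o(1/k)\), while the cosine coefficients of \(f(t)=t^2\) are \(a_k = 4(-1)^k/k^2 = o(1/k)\):

```python
import numpy as np

ks = np.arange(1, 200)
b_t = 2.0 / ks            # |b_k| for f(t) = t: decays like 1/k
a_t2 = 4.0 / ks**2        # |a_k| for f(t) = t^2: decays like 1/k^2

ratio_t = b_t * ks        # stays at the constant 2: O(1/k), not o(1/k)
ratio_t2 = a_t2 * ks      # tends to 0: a_k = o(1/k)
```

Multiplying by k and watching whether the result stays bounded away from zero or tends to zero is exactly the distinction between big "O" and little "o" of 1/k.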
We can state something more definitive, without being exhaustive on all of the conditions necessary to have a term by term derivative.
Theorem 2.4.2
This clears up some of the problems. We do not claim to be exhaustive on all of the possible conditions which may make term by term differentiation work. Once again, please refer to the references at the end of this book, and the many other possible Fourier Analysis resources. We believe that this basic understanding is enough to move forward.
2.4.3 Fourier Derivatives and Induced Discontinuities
First Example: Let us consider the simple example of \(f(t) = t\) on \([-\pi ,\pi ]\). While this function seems to be absolutely continuous, its periodic extension, illustrated in Figure 2.4, is discontinuous at every odd multiple of \(\pi \). It fails the basic test \(f(-\pi ) = f(\pi )\), and thus the decay rate cannot be fast. We will leave the prediction of this decay rate and the verification as an exercise.
Since it is continuous, and its derivative is \(f'(t) = 2t\), we expect its series to converge much faster. Certainly, this satisfies the Fundamental Theorem of Calculus. Once again, we will let the reader predict the decay rate, and verify this with the Fourier expansion.
2.4.4 Problems and Exercises:
 1.
(a) Predict the decay rate of the coefficients for the Fourier expansion of \(f(t) = t\) on \([-\pi ,\pi ]\). (b) Calculate the Fourier Series for this and compare this to your prediction. (c) Were you correct?
 2.
(a) Predict the decay rate of the coefficients for the Fourier expansion of \(f(t) = t^2\) on \([-\pi ,\pi ]\). (b) Calculate the Fourier Series for this and compare this to your prediction. (c) Were you correct?
 3. Consider the function
$$\begin{aligned} f(t) = \left\{ \begin{array}{l@{\quad }l} t+1 &{} \text{ for } t \in [-1,0] \\ 1-t &{} \text{ for } t\in [0,1] \end{array} \right. \end{aligned}$$(2.34)
(a) Does f(t) have a valid Fourier derivative in \([-1,1]\)? (b) What is that derivative if it exists? (c) Calculate the Fourier Series of the function f(t) on \([-1,1]\). (d) Take the term by term derivative of this series. (e) Plot this function and state whether it is consistent with the derivative of the function.
 4. Challenging: Consider the function \(g(t) = f(t)^2\) where
$$\begin{aligned} f(t) = \left\{ \begin{array}{l@{\quad }l} t+1 &{} \text{ for } t \in [-1,0] \\ 1-t &{} \text{ for } t\in [0,1] \end{array} \right. \end{aligned}$$(2.35)
(a) Does g(t) have a valid Fourier derivative in \([-1,1]\)? (b) What is that derivative if it exists? (c) Calculate the Fourier Series of the function g(t) on \([-1,1]\). (d) Take the term by term derivative of this series. (e) Plot this function and state whether it is consistent with the derivative of the function.
 5. Consider the function
$$\begin{aligned} f(t) = \left\{ \begin{array}{l@{\quad }l} 0&{} \text{ for } |t| \ge \pi /2 \\ \cos (t) &{} \text{ for } |t| \le \pi /2 \end{array} \right. \end{aligned}$$(2.36)
(a) Does f(t) have a valid Fourier derivative in \([-\pi ,\pi ]\)? (b) What is that derivative if it exists? (c) Calculate the Fourier Series of the function f(t) on \([-\pi ,\pi ]\). (d) Take the term by term derivative of this series. (e) Plot this function and state whether it is consistent with the derivative of the function.
2.5 Sine and Cosine Series
We have talked about Fourier Series on \(L^2[a, b]\). By that we are implicitly using both sines and cosines. We have talked about adjusting the sines and cosines to the length of the interval. In addition, we have also talked about the fact that an even function, when expanded about 0, will have only cosine terms, and an odd function, expanded about 0, will have only sine terms.
We will now look to expand functions which are neither odd, nor even, in either cosine or sine series. This cannot be done in the manner described until now in this chapter.
To do this, let us return to an example we had earlier, namely \(f(t) = t\). We expanded this on \([-\pi ,\pi ]\) using the Fourier Series in Figure 2.4. Since this function was odd, this expansion involved only sine terms. Now, we would like to consider expanding it on the half interval \([0,\pi ]\). The earlier sine expansion will obviously still converge on this interval.
We can also expand this function on \([0,\pi ]\) using cosine terms, should we choose. We do this by considering the function \(g(t) = |t|\) on \([-\pi ,\pi ]\) and expanding it in a traditional Fourier Series. Because g(t) is even, this expansion will have only cosine terms. But since \(g(t) = f(t)\) on \([0,\pi ]\), this cosine expansion will converge to f(t) on \([0,\pi ]\).
Thus, we have a way to expand a function f(t) on \([0,\pi ]\) using either a cosine or sine series.
 If you want to expand in a cosine series, you define$$\begin{aligned} g_e(t) = \left\{ \begin{array}{c} f(t) \text{ if } t > 0 \\ f(-t) \text{ if } t < 0 \end{array} \right. \end{aligned}$$(2.37)Thus, you are artificially creating a g(t) which is an even extension of f(t).
 If you want to expand in a sine series, you define$$\begin{aligned} g_o(t) = \left\{ \begin{array}{c} f(t) \text{ if } t > 0 \\ -f(-t) \text{ if } t < 0 \end{array} \right. \end{aligned}$$(2.38)Thus, you are artificially creating a g(t) which is an odd extension of f(t).
The Fourier Series for both of these functions will converge on \([-\pi ,\pi ]\), and thus will both be equal to f(t) on \([0,\pi ]\) since \(g_e(t) = g_o(t) = f(t)\) for \(t \in [0,\pi ]\).
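To make these two options concrete, here is a brief numerical sketch (our own illustrative code, not taken from the text; the helper names are ours) comparing the cosine and sine expansions of \(f(t) = t\) on \([0,\pi ]\), with coefficients computed by numerical integration:

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule, kept explicit for portability
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

def cosine_partial_sum(f, t, n_terms):
    # cosine series on [0, pi] (even extension g_e):
    # f ~ a_0/2 + sum a_k cos(kt), a_k = (2/pi) int_0^pi f(s) cos(ks) ds
    s = np.linspace(0, np.pi, 4001)
    out = np.full_like(t, trapezoid(f(s), s) / np.pi)   # this is a_0/2
    for k in range(1, n_terms + 1):
        out += 2/np.pi * trapezoid(f(s)*np.cos(k*s), s) * np.cos(k*t)
    return out

def sine_partial_sum(f, t, n_terms):
    # sine series on [0, pi] (odd extension g_o):
    # f ~ sum b_k sin(kt), b_k = (2/pi) int_0^pi f(s) sin(ks) ds
    s = np.linspace(0, np.pi, 4001)
    out = np.zeros_like(t)
    for k in range(1, n_terms + 1):
        out += 2/np.pi * trapezoid(f(s)*np.sin(k*s), s) * np.sin(k*t)
    return out

f = lambda t: t
t = np.linspace(0.1, np.pi - 0.1, 400)   # stay away from the endpoints
err_cos = np.max(np.abs(cosine_partial_sum(f, t, 30) - f(t)))
err_sin = np.max(np.abs(sine_partial_sum(f, t, 30) - f(t)))
print(err_cos, err_sin)
```

With 30 terms the cosine series is already far more accurate than the sine series, since the even extension is continuous while the periodized odd extension has a jump.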
The even extension exchanges the jump discontinuity of the periodic extension for only a discontinuity in the derivative. Thus, we are now dealing with a continuous, not a discontinuous, function. This makes the approximations much better.
Let us remember at this point that we are approximating an even extension of t on \([0,\pi ]\), namely the \(2\pi \)-periodic extension of \(|t|\). The odd extension of t would have simply been t itself, extended \(2\pi \)-periodically. Thus, let us examine the two periodic extensions, \(g_e(t)\) and \(g_o(t)\), in Figure 2.9.
Consider now the term-by-term derivative of this series. Note that it quite apparently converges to a function which is \(\pm 1\). This is exactly the derivative of the extension \(g_e(t)\) shown in Figure 2.9.
Note that we can similarly define the derivative of the odd extension \(g_o(t)\). You will not be able to write an analogue of the equation (2.43), however, due to the jump discontinuity in the function \(g_o(t)\).
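This convergence can be checked directly. The cosine series of the even extension is the known expansion \(|t| = \pi /2 - \frac{4}{\pi }\sum _{k \text{ odd}} \cos (kt)/k^2\), and its term-by-term derivative is the classical square-wave series; a short sketch (our own code):

```python
import numpy as np

# Term-by-term derivative of the cosine series of g_e(t) = |t|:
#   |t| = pi/2 - (4/pi) sum_{k odd} cos(kt)/k^2
# differentiates to
#   (4/pi) sum_{k odd} sin(kt)/k,
# which converges to sign(t), the +/-1 function described above.
def deriv_partial_sum(t, n_odd_terms):
    out = np.zeros_like(np.asarray(t, dtype=float))
    for k in range(1, 2*n_odd_terms, 2):     # odd frequencies only
        out += 4/np.pi * np.sin(k*t) / k
    return out

print(deriv_partial_sum(np.array([1.0, -1.0]), 200))  # approaches [1, -1]
```

Away from the discontinuity at 0, the partial sums settle quickly onto \(\pm 1\); near 0 they exhibit the Gibbs ringing discussed in Section 2.7.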
We summarize by stating the following
Theorem 2.5.1
Proof: The proof follows directly from the above discussions and the original Fourier Series theorems.
2.5.1 Problems and Exercises:
 1.
Calculate the sine and cosine series for \(f(t) = t\) on [0, 1]. Plot the first 30 terms of these series on \([-3,3]\). Estimate the error between both series and the function on [0, 1] after 30 terms. How many more terms of the sine series are necessary to achieve the same error as was achieved with the cosine series and 30 terms?
 2.
Plot the first 30 terms of the derivatives of both the sine and cosine series in the above problem. What do you observe? Do either of them converge, and why?
 3.Prove that both the cosines \(\{1,\cos (kt)\}_{k=1}^\infty \) and the sines \(\{ \sin (kt)\}_{k=1}^\infty \) are orthogonal on \([0,\pi ]\). Given that$$\{1,\cos (kt),\sin (kt)\}_{k=1}^\infty $$is complete in \(L^2[-\pi ,\pi ]\), show that both \(\{1,\cos (kt)\}_{k=1}^\infty \) and \(\{ \sin (kt)\}_{k=1}^\infty \) are complete in \(L^2[0,\pi ]\).
 4.Compute the sine and cosine series for$$\begin{aligned} f(t) = \left\{ \begin{array}{l@{\quad }l} t &{} \text{ if } t \le \pi /2 \\ \pi /2 - t &{} \text{ if } t > \pi /2 \end{array} \right. \end{aligned}$$(2.44)on \([0,\pi ]\). Which converges faster? Plot the terms. Why do you think this is?
2.6 Perhaps Cosine Series Only
In the last section, we showed how a cosine series, which is naturally even, could far more efficiently approximate an odd function. We would like to push this idea one step further, and try to use only cosine series, which are naturally even, to represent an arbitrary function, which is neither odd nor even.
What we showed above in the last section is that an even extension of an odd function is more easily representable than the original odd function. Secondly, we have shown that \(t^2\) is more easily representable than t, or that even functions are more easily representable.
The question then is, can’t we make everything even? The answer is yes. We will call this the Compression Series. The algorithm is simple.
 1.
Calculate the cosine series coefficients for \(f_e(t)\).
 2.
Calculate the cosine series coefficients for \(f_o(t)\).
 3.
Figure out how many terms are necessary to represent \(f_e(t)\) and \(f_o(t)\) within a desired accuracy, and keep only these terms.
The reconstruction algorithm is similarly simple.
 1.
Calculate \(f_e(t)\) within the desired accuracy from the above stored coefficients.
 2.
Calculate \(f_o(t)\) within the desired accuracy on \([0,\pi ]\) from the above stored coefficients, and extend it to negative t by setting \(f_o(-t) = -f_o(t)\).
 3.
Calculate \(f(t) = f_e(t) + f_o(t)\) within the desired accuracy with very few coefficients.
Theoretically, we have suggested that you can store, or transmit, many fewer coefficients using the above algorithm than by blindly using the Fourier Series. Let us calculate a test example, just to understand what the savings might be. We will use the very simple function \(f(t) = 2 + 3t \), on the interval \([-\pi ,\pi ]\).
The even and odd components of this function are obvious, namely \(f_e(t) = 2\) and \(f_o(t) = 3t\). Now, let us try to represent this function with a minimal number of coefficients, using both the standard Fourier Series and the cosine series.
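The steps above can be sketched numerically on this test function. The helper names and the coefficient threshold below are our own illustrative choices, not the book's exact algorithm:

```python
import numpy as np

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

def cosine_coeffs(g, n_terms):
    # a_0 = (1/pi) int_0^pi g(s) ds, a_k = (2/pi) int_0^pi g(s) cos(ks) ds,
    # so that g(t) ~ a_0 + sum_{k>=1} a_k cos(kt) on [0, pi]
    s = np.linspace(0, np.pi, 4001)
    a = [trapezoid(g(s), s) / np.pi]
    a += [2/np.pi * trapezoid(g(s)*np.cos(k*s), s) for k in range(1, n_terms+1)]
    return np.array(a)

def cosine_eval(a, t):
    out = np.full_like(t, a[0])
    for k in range(1, len(a)):
        out += a[k]*np.cos(k*t)
    return out

f  = lambda t: 2 + 3*t                    # the test function from the text
fe = lambda t: (f(t) + f(-t)) / 2         # even part, here 2
fo = lambda t: (f(t) - f(-t)) / 2         # odd part, here 3t

ae, ao = cosine_coeffs(fe, 40), cosine_coeffs(fo, 40)
ae[np.abs(ae) < 1e-3] = 0                 # "compress": drop negligible terms
ao[np.abs(ao) < 1e-3] = 0

t = np.linspace(0.05, np.pi - 0.05, 300)
approx = cosine_eval(ae, t) + cosine_eval(ao, t)      # rebuild f on [0, pi]
# on [-pi, 0] one would use cosine_eval(ae, -t) - cosine_eval(ao, -t)
print(np.max(np.abs(approx - f(t))))
```

Notice that the even part needs exactly one coefficient, and the odd part needs only the slowly varying odd-frequency cosine terms, so very few coefficients survive the threshold.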
2.6.1 Induced Discontinuities vs. True Discontinuities
2.6.2 Problems and Exercises:
 1.
Challenging: Calculate the Fourier Series for the above function. How many terms are necessary to represent it with a RMSE of less than .01?
 2.
Challenging: Remove the discontinuity from the above function, and represent it using only cosine series, and the removed discontinuity. How many terms are necessary to represent it with a RMSE of less than .01?
2.7 Gibbs Ringing Phenomenon
Throughout all of our Fourier Series examples, we have noticed a ringing phenomenon at the discontinuities of the functions. This phenomenon is referred to as Gibbs ringing in honor of Josiah Willard Gibbs, the American mathematician and physicist who wrote about it in the late 1800s [9]. We will address this phenomenon from a more analytical perspective in Chapter 4.
In Figure 2.13, we examine the maxima and minima of the approximations to this function. We notice that they all overshoot or undershoot the desired function by nearly the same amount, regardless of the number of terms in the approximation. This is very disturbing, since we have proven that the Fourier Series must converge, in the \(L^2\) sense, for all functions in \(L^2[a, b]\). It seems that no matter how many terms are added, there is still a large difference between the maxima, minima, and the desired values of 1 and 0.
There are a number of observations that can be made from Figure 2.13. Putting all of them together allows us to understand what seems like an anomaly which is not consistent with our theorems. Let us note some observations.
 1.
The maxima and minima of the partial sums do not seem to decrease and approach the final desired value.
 2.
The locations of those maxima and minima seem to get closer to the discontinuity as the number of terms increases.
 3.
The area of the error created by this “ringing” seems to decrease.
We will prove these observations in Chapter 4, where we will have the tools to describe the phenomenon accurately.
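A quick experiment (our own illustrative code, using the standard square-wave series) makes the first two observations concrete: the overshoot of the partial sums hovers near the Gibbs value \((2/\pi )\,\mathrm {Si}(\pi ) \approx 1.179\) no matter how many terms are used, while its location moves toward the jump:

```python
import numpy as np

# Partial sums of the square-wave series sum_{k odd} (4/(pi k)) sin(kt),
# which jumps from -1 to 1 at t = 0. The maximum of each partial sum
# stays near (2/pi) Si(pi) ~ 1.179, and its location moves toward the jump.
def square_partial(t, n_odd_terms):
    out = np.zeros_like(t)
    for k in range(1, 2*n_odd_terms, 2):
        out += 4/(np.pi*k) * np.sin(k*t)
    return out

t = np.linspace(1e-4, 0.5, 20000)
for n in (25, 50, 100, 200):
    s = square_partial(t, n)
    print(n, s.max(), t[np.argmax(s)])  # overshoot persists, peak moves toward 0
```

The maximum never decays toward 1, but the peak narrows, which is why the squared error of the ringing still goes to zero.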
2.7.1 Problems and Exercises

 Gibbs ringing Calculate the Fourier Series for \(\chi _{1/2}(t)\) on \([-1,1]\), and plot the first 50 terms. Find the maximum value of the series with 30, 40, 50, and 100 terms. Guess at the limit of this maximum value. Find the location of the maximum value to the right of 0, with 30, 40, 50, and 100 terms. Where is the maximum value headed? Finally, estimate the squared error \(\int _{-1}^1 |\chi _{1/2}(t) - S_n(\chi _{1/2})(t)|^2dt\) between the function and the series on \([-1,1]\) with the same numbers of terms (do this numerically).
2.8 Convolution and Correlation
There are two very common operations which come up in association with Fourier Series. They are generally associated with localized averages of, or operations on, a function. We will begin by noting that if \(f(t) \in L^2[-T, T]\), then the extension of f given by \(f = 0\) for \(|t|>T\) allows us to consider \(f(t) \in L^2[-B, B]\) for any \(B>T\). Oftentimes in this section, we will want \(B = 2T\). Now, let us define convolution.
Definition 2.8.1
We would like to characterize the Fourier Series of these two functions. This leads to the following theorem
Theorem 2.8.1
Proof: We must first demonstrate that \(f*g \in L^2[-2T, 2T]\). For simplicity’s sake, we will assume that \(T = \pi \), since the general proof follows immediately.
We will now prove the second assertion of the theorem. We will do this by simply calculating the Fourier coefficients of the convolution.
2.8.1 A Couple of Classic Examples
One of the problems with Fourier Series is that they are very hard to compute. You have to be able to solve the integral equations, which is not easy for most functions. For this reason, we oftentimes use other means to try to calculate the Fourier Series. The convolution and correlation theorems do provide us with one such method.
Example 1: We begin by picking an example, which we can calculate directly, through the integral equations. We are also able to calculate this example by using the convolution equation. This is a very simple example, but gives a very direct idea of what convolution is.
We start with one of our favorite functions, \(f(t) = \chi _\pi (t)\), on \([-\pi ,\pi ]\). We want to consider its extension to the whole real line, namely we want to consider it to be zero outside of \([-\pi ,\pi ]\). Now, we want to consider the convolution \(f*f\). Note that in this case, since \(\chi _\pi (t)\) is an even function, convolution and correlation are equal. We will present the result and then leave the calculations as very important exercises.
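Before doing the calculation analytically, one can sanity-check the result numerically. The sketch below (our own code) measures the overlap integral defining \(\chi _\pi *\chi _\pi \) and compares it with the hat function \(2\pi - |t|\) from the exercises:

```python
import numpy as np

# chi_pi is 1 on [-pi, pi] and 0 elsewhere. Its self-convolution
# (f*f)(t) = int chi_pi(s) chi_pi(t - s) ds is the length of the overlap
# of [-pi, pi] with [t - pi, t + pi], which should equal 2*pi - |t|.
def conv_chi(t, n=20001):
    s = np.linspace(-np.pi, np.pi, n)    # points where chi_pi(s) = 1
    ds = s[1] - s[0]
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.array([np.sum(np.abs(ti - s) <= np.pi) * ds for ti in t])

for ti in (0.0, 1.5, np.pi):
    print(ti, conv_chi(ti)[0], 2*np.pi - abs(ti))   # the two columns agree
```

This also shows why convolving two box functions produces a continuous hat: the overlap length varies linearly with the shift.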
Example 2: We now consider another example. We choose a simple Gaussian which has been corrupted by noise. We then use a window to do a simple moving average of the function to greatly reduce the noise. This is illustrated in Figure 2.17.
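Example 2 can be reproduced in a few lines; the noise level and window length below are our own illustrative choices, not figures from the text:

```python
import numpy as np

# A Gaussian corrupted by noise, smoothed by convolving with a short
# uniform averaging window (a moving average), as in Figure 2.17.
rng = np.random.default_rng(0)
t = np.linspace(-np.pi, np.pi, 2001)
clean = np.exp(-t**2)
noisy = clean + 0.2 * rng.standard_normal(t.size)

w = 51                                   # window length in samples (odd)
window = np.ones(w) / w                  # uniform averaging window
smoothed = np.convolve(noisy, window, mode="same")

print(np.std(noisy - clean), np.std(smoothed - clean))
```

Averaging over \(w\) samples reduces the standard deviation of independent noise by roughly \(\sqrt{w}\), at the cost of slightly blurring the underlying Gaussian.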
2.8.2 Problems and Exercises:
 1.
Gibbs Ringing Revisited Return to the Fourier Series for \(\chi _{\pi /2}(t)\), on \([-\pi ,\pi ]\), which we calculated in 2.8. Calculate a new function, with the Fourier coefficients from 2.8 multiplied by \(\exp (-k^2/100)\) for each k. Use 30, 50, and 100 terms. Compare the series with and without the added exponential term. Do you see a reduction in Gibbs ringing? Why?
 2.
Prove that \(f(t) = 2\pi - |t|\) on \([-2\pi , 2\pi ]\) is the convolution of \(\chi _\pi (t) \) with itself, or \(\chi _\pi *\chi _\pi (t) \).
 3.
Calculate the Fourier Series for the hat function \(f(t) = 2\pi - |t|\) on \([-2\pi , 2\pi ]\) in two ways: (a) by calculating its Fourier Series directly; (b) by calculating the Fourier Series for \(\chi _\pi (t) \) and using the convolution theorem.
 4.Challenging Consider the two functions$$\begin{aligned} f(t) = \left\{ \begin{array}{c@{\quad }c@{\quad }c} \cos (t) &{} \text{ if } &{} |t| < \pi /2 \\ 0 &{} \text{ if } &{} |t| \ge \pi /2 \end{array} \right. \end{aligned}$$(2.58)and$$\begin{aligned} g(t) = \left\{ \begin{array}{c@{\quad }c@{\quad }c} 0 &{} \text{ if } &{} |t| > .1 \\ 10 &{} \text{ if } &{}t\in [-.1,0] \\ -10 &{} \text{ if } &{}t\in [0,.1]\end{array} \right. \end{aligned}$$(2.59)(a) Compute the Fourier Series of both functions on \([-\pi ,\pi ]\). (b) Compute the convolution \(f*g\) via the convolution theorem. (c) Plot the first 25 terms of the result. (d) Is the result similar to the derivative of f? (e) Why do you think this is?
2.9 Chapter Project:
 1.
Compute the Fourier Series for the function \(f(t) = t\) which converges a) on \([-\pi ,\pi ]\) and b) on \([-2\pi , 2\pi ]\). Plot the approximations using 5, 10, and 15 terms on \([-4\pi , 4\pi ]\). These are two different series and should look different.
 2.
Figure out the necessary cosine and sine terms, and the appropriate coefficient formulas, to approximate a function on the interval [1, 3].
 3.
Using the result from Problem 2 above, calculate the first few terms of the expansion of \(f(t)=t\) on [1, 3]. Plot this result on \([-1,5]\).
 4.
Verify that the Fourier isometry holds on \([-\pi ,\pi ]\) for \(f(t) = t\). To do this, a) calculate the coefficients of the orthonormal Fourier Series from the orthonormal series representation, b) calculate the sum of the squared coefficients, and c) calculate the norm of the function as \(\int _{-\pi }^\pi |f(t)|^2 dt.\) They must be equal. How many terms of the Fourier Series are necessary for the sum of squared coefficients to be within \(5 \%\) of the norm? How many until you are within \(3 \%\), or \(1 \%\)?
 5.
Gibbs ringing: Calculate the Fourier Series for \(\chi _{1/2}(t)\) on \([-1,1]\), and plot the first 50 terms. Find the maximum value of the series with 30, 40, 50, and 100 terms. Guess at the limit of this maximum value. Find the location of the maximum value to the right of 0, with 30, 40, 50, and 100 terms. Where is the maximum value headed? Finally, estimate the squared error \(\int _{-1}^1 |\chi _{1/2}(t) - S_n(\chi _{1/2})(t)|^2dt\) between the function and the series on \([-1,1]\) with the same numbers of terms (do this numerically).
 6.
Sine and cosine series: Calculate the sine and cosine series for \(f(t) = t\) on [0, 1]. Plot the first 30 terms of these series on \([-3,3]\). Estimate the error between both series and the function on [0, 1] after 30 terms. How many more terms of the sine series are necessary to achieve the same error as was achieved with the cosine series and 30 terms?
 7.
Sine and cosine series: Plot the first 30 terms of the derivatives of both the sine and cosine series in the above problem. What do you observe? Do both of them converge?
 8.
Convolution: Calculate the Fourier Series for the hat function \(f(t) = 2\pi - |t|\) on \([-2\pi , 2\pi ]\) in two ways: a) by calculating its Fourier Series directly; b) by calculating the Fourier Series for \(\chi _\pi (t) \) and using the convolution theorem. They must be equal! Use trig identities...
2.10 Summary of Expansions:

Fourier expansion for \(f(t) \in L^2[\pi ,\pi ]\):
Basic Formula:
$$ f(t) = \frac{a_0}{2} + \sum _{k=1}^\infty a_k \cos (kt) + b_k \sin (kt) , $$
where
$$ a_k = \frac{1}{\pi }\int _{-\pi }^\pi f(t) \cos (kt)\, dt \text{ and } b_k = \frac{1}{\pi }\int _{-\pi }^\pi f(t) \sin (kt)\, dt . $$
Orthonormal Expansion:
$$ f(t) = a_0 \frac{1}{\sqrt{2\pi }} + \sum _{k=1}^\infty a_k \frac{\cos (kt)}{\sqrt{\pi }} + b_k \frac{\sin (kt)}{\sqrt{\pi }} , $$
where \( a_0 = \frac{1}{\sqrt{2\pi }} \int _{-\pi }^\pi f(t)\, dt \) and for \(k\ge 1\)
$$ a_k = \int _{-\pi }^\pi f(t) \frac{\cos (kt)}{\sqrt{\pi }}\, dt \text{ and } b_k = \int _{-\pi }^\pi f(t)\frac{\sin (kt)}{\sqrt{\pi }}\, dt . $$
Fourier expansion for \(f(t) \in L^2[-T, T]\):
Basic Formula:
$$ f(t) = \frac{a_0}{2} + \sum _{k=1}^\infty a_k \cos \left( \frac{k\pi t}{T}\right) + b_k \sin \left( \frac{k\pi t}{T}\right) , $$
where
$$ a_k = \frac{1}{T}\int _{-T}^T f(t) \cos \left( \frac{k\pi t}{T}\right)\, dt \text{ and } b_k = \frac{1}{T}\int _{-T}^T f(t) \sin \left( \frac{k\pi t}{T}\right)\, dt . $$
Orthonormal Expansion:
$$ f(t) = a_0 \frac{1}{\sqrt{2T}} + \sum _{k=1}^\infty a_k \frac{\cos (\frac{k\pi t}{T})}{\sqrt{T}} + b_k \frac{\sin (\frac{k\pi t}{T})}{\sqrt{T}} , $$
where \( a_0 = \frac{1}{\sqrt{2T}} \int _{-T}^T f(t)\, dt \) and for \(k\ge 1\)
$$ a_k = \int _{-T}^T f(t) \frac{\cos (\frac{k\pi t}{T})}{\sqrt{T}}\, dt \text{ and } b_k = \int _{-T}^T f(t)\frac{\sin (\frac{k\pi t}{T})}{\sqrt{T}}\, dt . $$
Fourier expansion for \(f(t) \in L^2[a, b]\): Let \(m = (a+b)/2\), and \(L = (b-a)/2\).
Basic Formula:
$$ f(t) = \frac{a_0}{2} + \sum _{k=1}^\infty a_k \cos \left( \frac{k\pi (t-m)}{L}\right) + b_k \sin \left( \frac{k\pi (t-m)}{L}\right) , $$
where
$$ a_k = \frac{1}{L}\int _a^b f(t) \cos \left( \frac{k\pi (t-m)}{L}\right)\, dt \text{ and } b_k = \frac{1}{L}\int _a^b f(t) \sin \left( \frac{k\pi (t-m)}{L}\right)\, dt . $$
Orthonormal Expansion:
$$ f(t) = a_0 \frac{1}{\sqrt{2L}} + \sum _{k=1}^\infty a_k \frac{\cos \left( \frac{k\pi (t-m)}{L}\right) }{\sqrt{L}} + b_k \frac{\sin \left( \frac{k\pi (t-m)}{L}\right) }{\sqrt{L}} , $$
where \( a_0 = \frac{1}{\sqrt{2L}} \int _{a}^b f(t)\, dt \) and for \(k\ge 1\)
$$ a_k = \int _{a}^b f(t) \frac{\cos \left( \frac{k\pi (t-m)}{L}\right) }{\sqrt{L}}\, dt \text{ and } b_k = \int _a^b f(t)\frac{\sin \left( \frac{k\pi (t-m)}{L}\right) }{\sqrt{L}}\, dt . $$

Cosine and Sine expansions for \(f(t) \in L^2[0,\pi ]\):
Basic Formulas:
$$ f(t) = \frac{a_0}{2} + \sum _{k=1}^\infty a_k \cos (kt) , $$
and
$$ f(t) = \sum _{k=1}^\infty b_k \sin (kt) , $$
where
$$ a_k = \frac{2}{\pi }\int _{0}^\pi f(t) \cos (kt)\, dt \text{ and } b_k = \frac{2}{\pi }\int _{0}^\pi f(t) \sin (kt)\, dt . $$
Orthonormal Expansion:
$$ f(t) = a_0 \frac{1}{\sqrt{\pi }} + \sum _{k=1}^\infty a_k \sqrt{\frac{2}{\pi }} \cos (kt) $$
and
$$ f(t) = \sum _{k=1}^\infty b_k \sqrt{\frac{2}{\pi }} \sin (kt) , $$
where \( a_0 = \frac{1}{\sqrt{\pi }} \int _{0}^\pi f(t)\, dt \) and for \(k\ge 1\)
$$ a_k = \int _{0}^\pi f(t)\sqrt{\frac{2}{\pi }} \cos (kt)\, dt \text{ and } b_k = \int _{0}^\pi f(t) \sqrt{\frac{2}{\pi }}\sin (kt)\, dt . $$
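As a final sanity check on the orthonormal expansions above, the Fourier isometry can be verified numerically for \(f(t) = t\) on \([-\pi ,\pi ]\), whose orthonormal sine coefficients work out to \(2\sqrt{\pi }(-1)^{k+1}/k\) (a sketch, our own code):

```python
import numpy as np

# Isometry check for f(t) = t on [-pi, pi]: the sum of squared
# orthonormal coefficients should approach ||f||^2 = int t^2 dt = 2*pi^3/3.
ks = np.arange(1, 100001)
b = 2*np.sqrt(np.pi) * (-1)**(ks + 1) / ks   # orthonormal sine coefficients
print(np.sum(b**2), 2*np.pi**3/3)            # the two values nearly agree
```

Since the squared coefficients decay like \(1/k^2\), the partial sums of \(\sum b_k^2\) converge to the norm, but only at rate \(1/N\), which is exactly the slow convergence explored in the chapter project.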