
The Method of Least Squares and Signal Analysis

Abstract

Often one needs to describe experimental data with a mathematical function containing parameters that must be adjusted to give the best fit. The least squares method is one way to accomplish this goal. We use the least squares method to derive how to approximate a function by a sum of oscillating functions: the Fourier series. Fourier analysis is then developed in detail, including aliasing, Fourier transforms, the power spectrum, and the autocorrelation function. We show how to use these methods to analyze noisy data, and we explore how noise arises, considering examples such as Johnson noise and shot noise. We conclude with a discussion of stochastic resonance.


Notes

  1. The parameters \(a_{k}\) can either be included in the parameter list, or the values of \(a_{k}\) for each trial set \(b_{k}\) can be determined by linear least squares.

  2. Although we speak of \(T\) and time, the technique can be applied to any independent variable if the dependent variable repeats as in Eq. 11.10. Zebra stripes are (almost) periodic functions of position.

  3. For equally spaced data and \(N\) even, there are actually \(n=N/2+1\) values of \(a_k\) and \(n=N/2-1\) values of \(b_k\). (We will find from Eq. 11.26c that \(b_k\) for \(k=N/2\) is identically zero.) Thus there are \(N\) coefficients for \(N\) data points. We will ignore this point in this chapter, since for large \(N\) it makes little difference.

  4. The time average of a variable will be denoted by \(\left \langle {}\right \rangle \) brackets.

  5. One virtue of the complex notation is that these addition formulae become the standard rule for multiplying exponentials: \(e^{i(x+y)}=e^{ix}e^{iy}\).

  6. A rigorous but relatively elementary mathematical treatment is given by Lighthill (1958).

  7. See Press et al. (1992), Cohen (2006), or Mainardi et al. (2006).

  8. The technique works only for a linear system. If the system is not linear, the output will not be sinusoidal.

  9. The bel is the logarithm to the base 10 of the power ratio. The decibel is one tenth as large as the bel. Since the power ratio is the square of the voltage ratio or gain, the factor in Eq. 11.82 is 20.

  10. References can be found in the articles by Wiesenfeld and Jaramillo (1998) and by Astumian and Moss (1998).

  11. See Astumian (1997); Astumian and Moss (1998); Wiesenfeld and Jaramillo (1998); Gammaitoni et al. (1998); Adair et al. (1998); Glass (2001).

References

  • Acton FS (1990) Numerical methods that work. Mathematical Association of America, Washington DC

  • Adair RK, Astumian RD, Weaver JC (1998) Detection of weak electric fields by sharks, rays and skates. Chaos 8(3):576–587

  • Adair EC, Hobbie SE, Hobbie RK (2010) Single-pool exponential decomposition models: potential pitfalls in their use in ecological studies. Ecology 91(4):1225–1236

  • Anderka M, Declercq ER, Smith W (2000) A time to be born. Am J Pub Health 90(1):124–126

  • Astumian RD (1997) Thermodynamics and kinetics of a Brownian motor. Science 276:917–922

  • Astumian RD, Moss F (1998) Overview: the constructive role of noise in fluctuation driven transport and stochastic resonance. Chaos 8(3):533–538

  • Bevington PR, Robinson DK (2003) Data reduction and error analysis for the physical sciences, 3rd edn. McGraw-Hill, New York

  • Blackman RB, Tukey JW (1958) The measurement of power spectra. Dover, New York, pp 32–33

  • Bracewell RN (1990) Numerical transforms. Science 248:697–704

  • Bracewell RN (2000) The Fourier transform and its applications, 3rd edn. McGraw-Hill, Boston

  • Cohen A (2006) Biomedical signals: origin and dynamic characteristics; frequency-domain analysis. In: Bronzino JD (ed) The biomedical engineering handbook, vol 2, 3rd edn. CRC, Boca Raton, pp 1-1–1-22

  • Cooley JW, Tukey JW (1965) An algorithm for the machine calculation of complex Fourier series. Math Comput 19:297–301

  • DeFelice LJ (1981) Introduction to membrane noise. Plenum, New York

  • Feynman RP, Leighton RB, Sands M (1963) The Feynman lectures on physics, vol 1, Chap 46. Addison-Wesley, Reading

  • Gammaitoni L, Hänggi P, Jung P, Marchesoni F (1998) Stochastic resonance. Rev Mod Phys 70(1):223–287

  • Gingl Z, Kiss LB, Moss F (1995) Non-dynamical stochastic resonance: theory and experiments with white and arbitrarily coloured noise. Europhys Lett 29(3):191–196

  • Glass L (2001) Synchronization and rhythmic processes in physiology. Nature 410:277–284

  • Guyton AC (1991) Textbook of medical physiology, 8th edn. Saunders, Philadelphia

  • Hämäläinen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV (1993) Magnetoencephalography—theory, instrumentation, and applications to noninvasive studies of the working human brain. Rev Mod Phys 65(2):413–497

  • Kaiser IH, Halberg F (1962) Circadian periodic aspects of birth. Ann N Y Acad Sci 98:1056–1068

  • Kaplan D, Glass L (1995) Understanding nonlinear dynamics. Springer, New York

  • Lighthill MJ (1958) An introduction to Fourier analysis and generalised functions. Cambridge University Press, Cambridge

  • Lybanon M (1984) A better least-squares method when both variables have uncertainties. Am J Phys 52:22–26

  • Mainardi LT, Bianchi AM, Cerutti S (2006) Digital biomedical signal acquisition and processing. In: Bronzino JD (ed) The biomedical engineering handbook, vol 2, 3rd edn. CRC, Boca Raton, pp 2-1–2-24

  • Maughan WZ, Bishop CR, Pryor TA, Athens JW (1973) The question of cycling of the blood neutrophil concentrations and pitfalls in the statistical analysis of sampled data. Blood 41:85–91

  • Milnor WR (1972) Pulsatile blood flow. N Engl J Med 287:27–34

  • Nedbal L, Březina V (2002) Complex metabolic oscillations in plants forced by harmonic irradiance. Biophys J 83:2180–2189

  • Nyquist H (1928) Thermal agitation of electric charge in conductors. Phys Rev 32:110–113

  • Orear J (1982) Least squares when both variables have uncertainties. Am J Phys 50:912–916

  • Packard GC (2009) On the use of logarithmic transformations in allometric analyses. J Theor Biol 257:515–518

  • Press WH, Teukolsky SA, Vetterling WT, Flannery BP (1992) Numerical recipes in C: the art of scientific computing, 2nd edn. Cambridge University Press, New York (reprinted with corrections, 1995)

  • Visscher PB (1996) The FFT: Fourier transforming one bit at a time. Comput Phys 10(5):438–443

  • Wiesenfeld K, Jaramillo F (1998) Minireview of stochastic resonance. Chaos 8(3):539–548


Author information

Correspondence to Russell K. Hobbie.

Appendices

Symbols Used in Chap. 11

Table 6

Problems

11.2.1 Section  11.1

Problem 1

Find the least squares straight line fit to the following data:

$$ \begin{tabular} [c]{p{0.5in}p{0.5in}}$x$ & $y$\\ & \\[-10pt]0 & 2\\ 1 & 5\\ 2 & 8\\ 3 & 11 \end{tabular} $$
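As a cross-check, the fit can be carried out numerically. This is a minimal sketch (not part of the original problem) using the closed-form normal equations; for these data the fit happens to be exact.

```python
import numpy as np

# Data from the problem statement
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 5.0, 8.0, 11.0])

# Closed-form least-squares slope and intercept for y = a*x + b
N = len(x)
a = (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / (N * np.sum(x**2) - np.sum(x)**2)
b = (np.sum(y) - a * np.sum(x)) / N
print(a, b)  # -> 3.0 2.0
```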

Problem 2

Suppose that you wish to pick one number to characterize a set of data \(x_{1},x_{2},\dots ,x_{N}\). Prove that the mean \(\overline {x}\), defined by

$$ \overline{x}=\dfrac{1}{N}{\displaystyle\sum\limits_{j=1}^{N}} x_{j}, $$

minimizes the mean square error

$$ Q=\dfrac{1}{N}{\displaystyle\sum\limits_{j=1}^{N}} (x_{j}-\overline{x})^{2}. $$
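The proof hinges on one derivative. Treating the single number as a free parameter \(c\) and setting \(dQ/dc=0\),

$$ \frac{dQ}{dc}=-\frac{2}{N}{\displaystyle\sum\limits_{j=1}^{N}}(x_{j}-c)=0\quad\Rightarrow\quad Nc={\displaystyle\sum\limits_{j=1}^{N}}x_{j}, $$

which is the definition of \(\overline{x}\).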

Problem 3

Derive Eqs. 11.5.

Problem 4

Suppose that the experimental values \(y(x_{j})\) are exactly equal to the calculated values plus random noise for each data point: \(y(x_{j})=y_{\text {calc}}(x_{j})+n_{j}.\) What is \(Q\)?

Problem 5

You wish to fit a set of data \((x_{j},y_{j})\) with an expression of the form \(y=Bx^{2}\). Differentiate the expression for \(Q\) to find an equation for \(B\).

Problem 6

Assume a dipole \(\mathbf {p}\) is located at the origin and is directed in the \(xy\) plane. The \(z\) component of the magnetic field, \(B_{z}\), produced by this dipole is measured at nine points on the surface \(z=50\operatorname {~mm}.\) The data are

$$ \begin{tabular} [c]{p{0.3in}p{0.7in}p{0.7in}p{0.5in}}$i$ & $x_{i} \ \text{(mm)}$ & $y_{i}\text{ (mm)}$ & $B_{zi}\text{ (fT)}$\\ 1 & $-$50 & $-$50 & $-$154\\ 2 & ~~~~~0 & $-$50 & $-$170\\ 3 & ~~~50 & $-$50 & ~~$-$31\\ 4 & $-$50 & ~~~~~0 & $-$113\\ 5 & ~~~~~0 & ~~~~~0 & ~~~~~~~0\\ 6 & ~~~50 & ~~~~~0 & ~~~113\\ 7 & $-$50 & ~~~50 & ~~~~~31\\ 8 & ~~~~~0 & ~~~50 & ~~~170\\ 9 & ~~~50 & ~~~50 & ~~~154 \end{tabular} $$

The magnetic field of a dipole is given by Eq. 8.17, which in this case is

$$ B_{z}=\frac{\upmu_{0}}{4\pi}\left[ \frac{p_{x}y_{i}}{\left( x_{i}^{2}+y_{i}^{2}+z_{i}^{2}\right) ^{3/2}}-\frac{p_{y}x_{i}}{\left( x_{i}^{2}+y_{i}^{2}+z_{i}^{2}\right) ^{3/2}}\right] . $$

Use the method of least squares to fit the data to the equation, and determine \(p_{x}\) and \(p_{y}\).

Problem 7

Consider the data

$$ \begin{tabular} [c]{p{0.5in}p{0.5in}}$x$ & $y$\\ $100$ & $4004$\\ $101$ & $4017$\\ $102$ & $4039$\\ $103$ & $4063$ \end{tabular} \ \ $$
  1. (a)

    Fit these data with a straight line \(y=ax+b\) using Eqs. 11.5a and 11.5b to find \(a\).

  2. (b)

    Use Eq. 11.5c to determine \(a.\) Your result should be the same as in part (a).

  3. (c)

    Repeat parts (a) and (b) while rounding all the intermediate numbers to four significant figures. Do Eqs. 11.5a and 11.5b give the same result as Eq. 11.5c? If not, which is more accurate? To explore more about how numerical errors can creep into computations, see Acton (1990).
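Part (c) can be simulated directly. The sketch below (an illustration, not the book's code) rounds the four sums to four significant figures and compares two algebraically equivalent slope formulas; the raw-sums form suffers catastrophic cancellation because \(x\approx 100\).

```python
import numpy as np

def round4(v):
    """Round a number to four significant figures."""
    return float("%.3e" % v)

x = np.array([100.0, 101.0, 102.0, 103.0])
y = np.array([4004.0, 4017.0, 4039.0, 4063.0])
N = len(x)

# Form 1: raw sums, each rounded to four significant figures before use
Sx, Sy = round4(x.sum()), round4(y.sum())
Sxx, Sxy = round4((x * x).sum()), round4((x * y).sum())
a_naive = (N * Sxy - Sx * Sy) / (N * Sxx - Sx ** 2)

# Form 2: sums of deviations from the means, which subtracts the large
# common part of x and y before any rounding can destroy it
dx, dy = x - x.mean(), y - y.mean()
a_centered = np.sum(dx * dy) / np.sum(dx * dx)

print(a_naive, a_centered)  # the naive form is wildly wrong; the centered form gives ~19.9
```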

Problem 8

This problem is designed to show you what happens when the number of parameters exceeds the number of data points. Suppose that you have two data points:

$$ \begin{tabular} [c]{p{0.5in}p{0.5in}}$x$ & $y$\\ & \\[-10pt]0 & 1\\ 1 & 4 \end{tabular} \ \ \ $$

Find the best fits for one parameter (the mean) and two parameters \((y=ax+b)\). Then try to fit the data with three parameters (a quadratic). What happens when you try to solve the equations?

Problem 9

The strength-duration curve for electrical stimulation of a nerve is described by Eq. 7.45: \(i=i_{R}(1+t_{C}/t),\) where \(i\) is the stimulus current, \(i_{R}\) is the rheobase, and \(t_{C}\) is the chronaxie. During an experiment you measure the following data:

$$ \begin{tabular}[c]{ll} t \text{ (ms)} & i\text{ (mA)}\\ 0.5 & 2.004\\ 1.0 & 1.248\\ 1.5 & 0.997\\ 2.0 & 0.879\\ 2.5 & 0.802\\ 3.0 & 0.749 \end{tabular} $$

Determine the rheobase and chronaxie by fitting these data with Eq. 7.45. Hint: let \(a=i_{R}\) and \(b=i_{R}t_{C},\) so that the equation is linear in \(a\) and \(b\): \(i=a+b/t\). Use the linear least squares method to determine \(a\) and \(b\). Plot \(i\) vs \(t,\) showing both the theoretical expression and the measured data points.
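A sketch of the suggested linearization (illustrative only; it uses NumPy's polynomial fit rather than the hand formulas):

```python
import numpy as np

t = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])              # ms
i = np.array([2.004, 1.248, 0.997, 0.879, 0.802, 0.749])  # mA

# With u = 1/t the model i = a + b*u is linear:
# intercept a = i_R (rheobase), slope b = i_R * t_C
b, a = np.polyfit(1.0 / t, i, 1)
iR = a        # rheobase (mA)
tC = b / a    # chronaxie (ms)
```

For these data the fit lands near \(i_R\approx 0.5\) mA and \(t_C\approx 1.5\) ms.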

11.2.2 Section  11.2

Problem 10

  1. (a)

    Obtain equations for the linear least-squares fit of \(y=Bx^{m}\) to data by making a change of variables.

  2. (b)

    Apply the results of (a) to the case of Problem 5. Why does it give slightly different results?

  3. (c)

    Carry out a numerical comparison of Problem 5 and part (b) with the data points

$$ \begin{tabular} [c]{p{0.5in}p{0.5in}}$x$ & $y$\\ & \\[-10pt]1 & 3\\ 2 & 12\\ 3 & 27 \end{tabular} \ \ \ $$

Repeat with

$$ \begin{tabular} [c]{p{0.5in}p{0.5in}}$x$ & $y$\\ & \\[-10pt]1 & 2.9\\ 2 & 12.1\\ 3 & 27.1 \end{tabular} \ \ \ $$

Problem 11

Consider the data given in Problem 2.40 relating molecular weight \(M\) and molecular radius \(R\). Assume the radius is determined from the molecular weight by a power law: \(R=BM^{n}.\) Fit the data to this expression to determine \(B\) and \(n\). Hint: take logarithms of both sides of the equation.
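Since the Problem 2.40 data are not reproduced in this excerpt, the sketch below applies the hinted log transformation to synthetic \((M,R)\) pairs; the constants 0.1 and 1/3 are arbitrary stand-ins, not values from the book.

```python
import numpy as np

# Hypothetical data generated from R = 0.1 * M**(1/3); stand-ins only
M = np.array([1.0e3, 1.0e4, 1.0e5, 1.0e6])
R = 0.1 * M ** (1.0 / 3.0)

# Taking logs of R = B*M**n gives log R = log B + n log M,
# which is a straight line in log-log space
n, logB = np.polyfit(np.log(M), np.log(R), 1)
B = np.exp(logB)
```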

Problem 12

In Problem 6 the dipole strength and orientation were determined by fitting the equation for the magnetic field of a dipole to the data, using the linear least squares method. In that problem the location of the dipole was known. Now, suppose the location of the dipole \((x_{0},y_{0},z_{0})\) is not known. Derive an equation for \(B_{z}(p_{x},p_{y},x_{0},y_{0},z_{0})\) in this more general case. Determine which parameters can be found using linear least squares, and which must be determined using nonlinear least squares.

11.2.3 Section  11.4

Problem 13

Write a computer program to verify Eqs. 11.20–11.24.

Problem 14

Consider Eqs. 11.17–11.19 when \(n=N\) and show that all equations for \(m>N/2\) reproduce the equations for \(m<N/2\).

Problem 15

The secretion of the hormone cortisol by the adrenal gland is subject to a 24-h (circadian) rhythm (Guyton 1991). Suppose the concentration of cortisol in the blood, \(K\) (in \(\upmu g \) per \(100\;{\text{ml}}\)), is measured as a function of the time \(t\) (in hours, with 0 being midnight and 12 being noon), resulting in the following data:

$$ \begin{tabular} [c]{p{0.5in}p{0.5in}}$t$ & $K$\\ & \\[-10pt]0 & 10.3\\ 4 & 16.1\\ 8 & 18.3\\ 12 & 13.7\\ 16 & 7.9\\ 20 & 6.0 \end{tabular} $$

Fit these data to the function \(K=a+b\cos \left ( 2\pi t/24\right ) +c\sin \left ( 2\pi t/24\right ) \) using the method of least squares, and determine \(a,\) \(b\), and \(c\).
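Because the model is linear in \(a\), \(b\), and \(c\), an ordinary least-squares solve suffices. A sketch (illustrative, not the book's worked method):

```python
import numpy as np

t = np.array([0.0, 4.0, 8.0, 12.0, 16.0, 20.0])   # hours
K = np.array([10.3, 16.1, 18.3, 13.7, 7.9, 6.0])  # micrograms per 100 ml

# Design matrix for K = a + b*cos(2*pi*t/24) + c*sin(2*pi*t/24)
w = 2.0 * np.pi * t / 24.0
A = np.column_stack([np.ones_like(t), np.cos(w), np.sin(w)])
(a, b, c), *_ = np.linalg.lstsq(A, K, rcond=None)
```

With the samples equally spaced over the 24-h period the three basis columns are orthogonal, so \(a\) is simply the mean of the data.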

Problem 16

Verify that Eqs. 11.29 follow from Eqs. 11.27.

Problem 17

This problem provides some insight into the fast Fourier transform. Start with the expression for an \(N\)-point Fourier transform in complex notation, \(Y_{k}\) in Eq. 11.29a. Show that \(Y_{k}\) can be written as the sum of two \(N/2\)-point Fourier transforms: \(Y_{k}=\frac{1}{2}\left[ Y_{k}^{e}+W^{k}Y_{k}^{o}\right]\), where \(W=\exp \left( -i2\pi /N\right)\), superscript \(e\) stands for even values of \(j\), and \(o\) stands for odd values.
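The decomposition can be checked numerically. This sketch assumes the normalized convention \(Y_{k}=\frac{1}{N}\sum_{j}y_{j}e^{-i2\pi jk/N}\) for Eq. 11.29a (an assumption consistent with the \(\frac{1}{2}\) in the split):

```python
import numpy as np

def dft(y):
    """Normalized DFT: Y_k = (1/N) * sum_j y_j exp(-2*pi*i*j*k/N)."""
    n = len(y)
    idx = np.arange(n)
    return np.array([np.sum(y * np.exp(-2j * np.pi * idx * k / n)) / n
                     for k in range(n)])

rng = np.random.default_rng(0)
N = 8
y = rng.standard_normal(N)

Y = dft(y)
Ye = dft(y[0::2])   # N/2-point transform of the even-index samples
Yo = dft(y[1::2])   # N/2-point transform of the odd-index samples
W = np.exp(-2j * np.pi / N)
k = np.arange(N // 2)

# The relation to be proved, checked for k < N/2
combined = 0.5 * (Ye + W**k * Yo)
```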

Problem 18

The following data from Kaiser and Halberg (1962) show the number of spontaneous births vs time of day. Note that the point for 2300–2400 is much higher than for 0000–0100. This is probably due to a bias: if a woman has been in labor for a long time and the baby is born a few minutes after midnight, the birth may be recorded in the previous day. Fit these data with a 24-h period and again including an 8-h period as well. Make a correction for the midnight bias.

Table 7

Problem 19

Calculate the discrete Fourier transform of the data \(y_i =\) 0.00, 0.25, 0.50, 0.75, 0.50, 0.25 using Eq. 11.26.
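A hand calculation can be checked against NumPy's FFT. Note that `np.fft.fft` omits the \(1/N\) factor, so the division below is an assumption about the normalization of Eq. 11.26.

```python
import numpy as np

y = np.array([0.0, 0.25, 0.5, 0.75, 0.5, 0.25])
N = len(y)

# Normalized transform; real/imaginary parts carry the cosine/sine coefficients
Y = np.fft.fft(y) / N
a = 2.0 * Y.real    # a_k (a_0 and a_{N/2} still need an extra factor of 1/2)
b = -2.0 * Y.imag   # b_k

# These samples are even about j = 3, so every b_k should vanish
```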

11.2.4 Section  11.5

Problem 20

Let \(y(t)\) be a periodic function with period \(T\):

$$ y(t)=t,\ \ 0<t<T. $$
  1. (a)

    Plot \(y(t)\) over the range \(-2T<t<2T\).

  2. (b)

    Use Eqs. 11.30 and 11.34 to calculate the Fourier series for \(y(t)\).

  3. (c)

    Plot the Fourier series using only the term \(k=0\), then using \(k=0\) and \(k=1\), and finally \(k=0,k=1\) and \(k=2\). Compare these plots to the plot in part (a).

Problem 21

Let \(y(t)\) be a periodic function with period \(T\):

$$ y(t)=\sin(\pi t/T),\ \ 0<t<T. $$
  1. (a)

    Plot \(y(t)\) over the range \(-2T<t<2T\).

  2. (b)

    Use Eqs. 11.30 and 11.34 to calculate the Fourier series for \(y(t)\).

  3. (c)

    Plot the Fourier series using only the term \(k=0\), then using \(k=0\) and \(k=1\), and finally \(k=0,k=1\) and \(k=2\). Compare these plots to the plot in part (a).

Problem 22

Use Eqs. 11.34 to derive Eq. 11.36.

11.2.5 Section  11.6

Problem 23

Calculate the power spectrum for the function given in Problem 20.

11.2.6 Section  11.7

Problem 24

Suppose that \(y(x,t)=y(x-vt)\). Calculate the cross correlation between signals \(y(x_{1})\) and \(y(x_{2})\).

Problem 25

Calculate the cross-correlation, \(\phi _{12}\), for the example in Fig. 11.21:

$$ \begin{aligned} y_{1}(t) & =\left\{ \begin{array}[c]{c} +1,\qquad0<t<T/2\\ -1,\qquad T/2<t<T \end{array} \right. \\ y_{2}(t) & =\sin\left( \frac{2\pi t}{T}\right) . \end{aligned} $$

Both functions are periodic.

Problem 26

Suppose you measure some noisy signal every hour for several weeks. Explain how you could use the autocorrelation function to search for a circadian rhythm: a component of the signal that varies with a period of one day.
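A sketch of the idea with synthetic data (the one-day period, the noise level, and the record length are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(28 * 24)                   # four weeks of hourly samples
hidden = np.sin(2.0 * np.pi * hours / 24.0)  # circadian component buried in noise
y = hidden + rng.standard_normal(hours.size)

ybar = y - y.mean()
def phi(tau):
    """Estimate the autocorrelation <y(t) y(t + tau)> from the record."""
    return np.mean(ybar[:-tau] * ybar[tau:]) if tau else np.mean(ybar * ybar)

# A circadian rhythm shows up as a peak in phi near a lag of 24 h
# (lags near zero are excluded, since phi always peaks at tau = 0)
lags = np.arange(12, 37)
best = lags[np.argmax([phi(k) for k in lags])]
```

The peak recurs at multiples of 24 h, which is what distinguishes a real rhythm from a chance fluctuation.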

11.2.7 Section  11.8

Problem 27

Fill in the missing steps to show that the autocorrelation of \(y_{1}(t)\) is given by Eq. 11.51.

Problem 28

Consider a square wave of amplitude \(A\) and period \(T\).

  1. (a)

    What are the coefficients in a Fourier-series expansion?

  2. (b)

    What is the power spectrum?

  3. (c)

    What is the autocorrelation of the square wave?

  4. (d)

    Find the Fourier-series expansion of the autocorrelation function and compare it to the power spectrum.

Problem 29

The series of pulses shown is an approximation for the concentration of follicle-stimulating hormone (FSH) released during the menstrual cycle.

  1. (a)

    Determine \(a_{0}\), \(a_{k}\), and \(b_{k}\) in terms of \(d\) and \(T\).

  2. (b)

    Sketch the autocorrelation function.

  3. (c)

    What is the power spectrum?

Problem 30

Consider the following simplified model for the periodic release of follicle-stimulating hormone (FSH). At \(t=0\) a substance is released so the plasma concentration rises to value \(C_{0}\). The substance is cleared so that \(C(t)=C_{0}e^{-t/\tau }\). Thereafter the substance is released in like amounts at times \(T\), \(2T\), and so on. Furthermore, \(\tau \ll T\).

  1. (a)

    Plot \(C(t)\) for two or three periods.

  2. (b)

    Find general expressions for \(a_{0}\), \(a_{k}\), and \(b_{k}\). Use the fact that integrals from \(0\) to \(T\) can be extended to infinity because \(\tau \ll T\). Use the following integral table:

    $$\begin{array}{*{20}{c}} {\mathop \smallint \limits_0^\infty {e^{ - ax}}{\mkern 1mu} dx = \frac{1}{a},} \\ {\mathop \smallint \limits_0^\infty {e^{ - ax}}\cos mx{\mkern 1mu} dx = \frac{a}{{{a^2} + {m^2}}},} \\ {\mathop \smallint \limits_0^\infty {e^{ - ax}}{\mkern 1mu} \sin mx{\mkern 1mu} dx = \frac{m}{{{a^2} + {m^2}}}.}\end{array}$$
  3. (c)

    What is the “power” at each frequency?

  4. (d)

    Plot the “power” for \(k=1,10,100\) for two cases: \(\tau /T=0.1\) and \(0.01\). Compare the results to the results of Problem 29.

  5. (e)

    Discuss qualitatively the effect that making the pulses narrower has on the power spectrum. Does the use of Fourier series seem reasonable in this case? Which description of the process is easier—the time domain or the frequency domain?

  6. (f)

    It has sometimes been said that if the transform for a given frequency is written as \(A_{k}\cos (k\omega _{0}t-\phi _{k})\) that \(\phi _{k}\) gives timing information. What is \(\phi _{1}\) in this case? \(\phi _{2}\)? Do you agree with the statement?

Problem 31

Calculate the autocorrelation function and the power spectrum for the previous problem.

11.2.8 Section  11.9

Problem 32

Calculate the Fourier transform of \(\exp [-(at)^{2}]\) using complex notation (Eq. 11.59). Hint: complete the square.

Problem 33

Figure 11.24 implies that two different functions can have the same autocorrelation, so that taking the autocorrelation is a one-way process. Show this by calculating the autocorrelation of \(A\cos (\omega t)\) and comparing it to the autocorrelation of \(A\sin (\omega t)\) given in Eq. 11.49.
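A quick numeric confirmation (a sketch; one full period is sampled uniformly, so the periodic average is essentially exact):

```python
import numpy as np

A, omega = 2.0, 2.0 * np.pi                       # amplitude; period is 1
t = np.linspace(0.0, 1.0, 4000, endpoint=False)   # one full period

def autocorr(y, shift):
    """phi(tau) = <y(t) y(t + tau)> for a periodic signal, via np.roll."""
    return np.mean(y * np.roll(y, -shift))

shifts = np.arange(0, 2000, 50)
tau = t[shifts]
phi_sin = np.array([autocorr(A * np.sin(omega * t), s) for s in shifts])
phi_cos = np.array([autocorr(A * np.cos(omega * t), s) for s in shifts])
# Both equal (A**2/2) * cos(omega * tau): the phase information is lost
```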

11.2.9 Section  11.10

Problem 34

Prove that

$$ \begin{gathered} \delta(t)=\delta(-t),\\ t\,\delta(t)=0,\\ \delta(at)=\dfrac{1}{|a|}\,\delta(t). \end{gathered} $$

11.2.10 Section  11.11

Problem 35

Rewrite Eqs. 11.61 in terms of an amplitude and a phase. Plot them.

Problem 36

Find the Fourier transform of

$$ f(t)=\left\{ \begin{array}[c]{cc} 1, & -a\leq t\leq a,\\ 0, & \text{everywhere else.} \end{array} \right. $$

Problem 37

Find the Fourier transform of

$$ y=\left\{ \begin{array}[c]{cc} e^{-at}\sin\omega_{0}t, & t\geq0,\\ 0, & t<0. \end{array} \right. $$

Determine \(C(\omega )\), \(S(\omega )\), and \(\Phi ^{\prime }(\omega )\) for \(\omega>0\) if the term that peaks at negative frequencies can be ignored for positive frequencies.

11.2.11 Section  11.14

Problem 38

Here are some data.

$$ \begin{tabular} [c]{p{0.3in}p{0.6in}p{0.3in}p{0.6in}p{0.3in}p{0.6in}}\multicolumn{1}{l}{$t$} & \multicolumn{1}{l}{$y$} & \multicolumn{1}{l}{$t$} & \multicolumn{1}{l}{$y$} & \multicolumn{1}{l}{$t$} & \multicolumn{1}{l}{$y$}\\ $2$ & $~~~1.39$ & $14$ & $~~~5.01$ & $26$ & $~~~0.91$\\ $3$ & $~~~0.67$ & $15$ & $~~~0.75$ & $27$ & $~~~1.32$\\ $4$ & $-1.38$ & $16$ & $~~~0.90$ & $28$ & $~~~1.92$\\ $5$ & $-0.76$ & $17$ & $-0.42$ & $29$ & $~~~0.57$\\ $6$ & $~~~5.23$ & $18$ & $~~~3.68$ & $30$ & $~~~2.30$\\ $7$ & $~~~1.31$ & $19$ & $~~~4.15$ & $31$ & $~~~1.09$\\ $8$ & $~~~2.63$ & $20$ & $~~~1.45$ & $32$ & $-0.71$\\ $9$ & $~~~1.03$ & $21$ & $-2.44$ & $33$ & $-1.72$\\ $10$ & $~~~4.62$ & $22$ & $~~~4.44$ & $34$ & $~~~4.22$\\ $11$ & $~~~1.98$ & $23$ & $-0.08$ & $35$ & $~~~3.20$\\ $12$ & $~~~0.47$ & $24$ & $~~~2.34$ & $36$ & $~~~1.69$ \end{tabular} \ \ $$
  1. (a)

    Plot them.

  2. (b)

    If you are told that there is a signal in these data with a period of 4 s, you can group them together and average them. This is equivalent to taking the cross correlation with a series of \(\delta \) functions. Estimate the signal shape.
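The grouping described in (b) can be sketched directly from the table (note that \(t=13\) and \(t=25\) s are absent, so the phase bins have unequal counts):

```python
import numpy as np

t = np.array([2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
              14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24,
              26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36])
y = np.array([1.39, 0.67, -1.38, -0.76, 5.23, 1.31, 2.63, 1.03, 4.62, 1.98, 0.47,
              5.01, 0.75, 0.90, -0.42, 3.68, 4.15, 1.45, -2.44, 4.44, -0.08, 2.34,
              0.91, 1.32, 1.92, 0.57, 2.30, 1.09, -0.71, -1.72, 4.22, 3.20, 1.69])

# Average all samples falling at the same phase of the 4-s period;
# this is the cross correlation with a train of delta functions
period = 4
avg = np.array([y[t % period == p].mean() for p in range(period)])
```

The four averages trace out one cycle of the underlying signal shape.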

11.2.12 Section  11.15

Problem 39

Verify that Eqs. 11.80 and 11.81 are solutions of Eq. 11.79.

Problem 40

Equation 11.81 is plotted on log–log graph paper in Fig. 11.43. Plot it on linear graph paper.

Problem 41

If the frequency response of a system were proportional to \(1/\left [ 1+(\omega /\omega _{0})^{3}\right ] \), what would be the high frequency roll-off in decibels per octave for \(\omega \gg \omega _{0}\)?

Problem 42

Consider a signal \(y=A\cos \omega t\). What is the time derivative? For a fixed value of \(A\), how does the derivative compare to the original signal as the frequency is increased? Repeat these considerations for the integral of \(y(t)\).
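The two operations the problem asks about are, term by term,

$$ \frac{d}{dt}\left( A\cos \omega t\right) =-A\omega \sin \omega t,\qquad \int A\cos \omega t\,dt=\frac{A}{\omega }\sin \omega t+C, $$

so differentiation scales the amplitude by \(\omega\) while integration scales it by \(1/\omega\).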

11.2.13 Section  11.16

Problem 43

Show that integration of Eq. 11.102 over all shift times is consistent with the integration of the \(\delta \) function that is obtained in the limit \(\tau _{1}\rightarrow 0\).

11.2.14 Section  11.18

Problem 44

Show that the net clockwise rate of rotation of the Feynman ratchet is given by Eq. 11.103.


Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Hobbie, R., Roth, B. (2015). The Method of Least Squares and Signal Analysis. In: Intermediate Physics for Medicine and Biology. Springer, Cham. https://doi.org/10.1007/978-3-319-12682-1_11

