Finite Difference Computing with PDEs
Vibration ODEs
Abstract
Vibration problems lead to differential equations with solutions that oscillate in time, typically in a damped or undamped sinusoidal fashion. Such solutions put certain demands on the numerical methods compared to other phenomena whose solutions are monotone or very smooth. Both the frequency and amplitude of the oscillations need to be accurately handled by the numerical schemes. The forthcoming text presents a range of different methods, from classical ones (Runge-Kutta and midpoint/Crank-Nicolson methods), to more modern and popular symplectic (geometric) integration schemes (Leapfrog, Euler-Cromer, and Störmer-Verlet methods), but with a clear emphasis on the latter. Vibration problems occur throughout mechanics and physics, but the methods discussed in this text are also fundamental for constructing successful algorithms for partial differential equations of wave nature in multiple spatial dimensions.
1.1 Finite Difference Discretization
Many of the numerical challenges faced when computing oscillatory solutions to ODEs and PDEs can be captured by the very simple ODE \(u^{\prime\prime}+u=0\). This ODE is thus chosen as our starting point for method development, implementation, and analysis.
1.1.1 A Basic Model for Vibrations
In vibrating mechanical systems modeled by (1.1), u(t) very often represents a position or a displacement of a particular point in the system. The derivative \(u^{\prime}(t)\) then has the interpretation of velocity, and \(u^{\prime\prime}(t)\) is the associated acceleration. The model (1.1) is not only applicable to vibrating mechanical systems, but also to oscillations in electrical circuits.
1.1.2 A Centered Finite Difference Scheme
To formulate a finite difference method for the model problem (1.1), we follow the four steps explained in Section 1.1.2 in [9].
Step 1: Discretizing the domain
The domain is discretized by introducing a uniformly partitioned time mesh. The points in the mesh are \(t_{n}=n\Delta t\), \(n=0,1,\ldots,N_{t}\), where \(\Delta t=T/N_{t}\) is the constant length of the time steps. We introduce a mesh function \(u^{n}\) for \(n=0,1,\ldots,N_{t}\), which approximates the exact solution at the mesh points. (Note that n = 0 corresponds to the known initial condition, so \(u^{0}\) is identical to the mathematical u at this point.) The mesh function \(u^{n}\) will be computed from algebraic equations derived from the differential equation problem.
Step 2: Fulfilling the equation at discrete time points
Step 3: Replacing derivatives by finite differences
Step 4: Formulating a recursive algorithm
Computing the first step
The computational algorithm
Remark on using w for ω in computer code
In the code, we use w as the symbol for ω. The reason is that the authors prefer the short name w, which reads well in comparison with the mathematical ω, over the full word omega as variable name.
Operator notation
1.2 Implementation
1.2.1 Making a Solver Function
The algorithm from the previous section is readily translated to a complete Python function for computing and returning \(u^{0},u^{1},\ldots,u^{N_{t}}\) and \(t_{0},t_{1},\ldots,t_{N_{t}}\), given the input I, ω, \(\Delta t\), and T:
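The function itself is not reproduced in this excerpt. A minimal sketch consistent with the scheme derived above (the names solver, u, and t follow the surrounding text, but details of the book's actual file may differ) could read:

```python
import numpy as np

def solver(I, w, dt, T):
    """Solve u'' + w**2*u = 0 for t in (0, T], u(0)=I and u'(0)=0,
    by a central finite difference method with time step dt."""
    dt = float(dt)
    Nt = int(round(T/dt))
    u = np.zeros(Nt + 1)
    t = np.linspace(0, Nt*dt, Nt + 1)

    u[0] = I
    u[1] = u[0] - 0.5*dt**2*w**2*u[0]      # special formula for the first step
    for n in range(1, Nt):
        u[n+1] = 2*u[n] - u[n-1] - dt**2*w**2*u[n]
    return u, t
```

A call like u, t = solver(I=1, w=2*np.pi, dt=0.05, T=5) then returns the mesh function and the mesh points.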
We have imported numpy and matplotlib under the names np and plt, respectively, as this is very common in the Python scientific computing community and a good programming habit (since we explicitly see where the different functions come from). An alternative is to do from numpy import * and a similar ‘‘import all’’ for Matplotlib to avoid the np and plt prefixes and make the code as close as possible to MATLAB. (See Section 5.1.4 in [9] for a discussion of the two types of import in Python.)
A function for plotting the numerical and the exact solution is also convenient to have:
A corresponding main program calling these functions to simulate a given number of periods (num_periods) may take the form
Adjusting some of the input parameters via the command line can be handy. Here is a code segment using the ArgumentParser tool in the argparse module to define option-value pairs on the command line:
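The code segment is not included in this excerpt; a hedged sketch of such an argparse-based reader (parameter names follow the surrounding text, the default values are assumptions) may look like:

```python
import argparse
from math import pi

def command_line_options(argv=None):
    """Read I, w, dt, and num_periods from the command line.
    Defaults here are illustrative, not the book's values."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--I', type=float, default=1.0,
                        help='initial condition u(0)')
    parser.add_argument('--w', type=float, default=2*pi,
                        help='angular frequency omega')
    parser.add_argument('--dt', type=float, default=0.05,
                        help='time step')
    parser.add_argument('--num_periods', type=int, default=5,
                        help='number of periods to simulate')
    return parser.parse_args(argv)

# Mimic "python vib_undamped.py --w 6.28 --num_periods 9"
opts = command_line_options(['--w', '6.28', '--num_periods', '9'])
```

Passing an explicit argv list, as above, is convenient for testing; calling command_line_options() with no argument reads sys.argv as usual.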
Such parsing of the command line is explained in more detail in Section 5.2.3 in [9].
A typical execution goes like
Computing \(u^{\prime}\)
Typical (scalar) code is
Since the loop is slow for large \(N_{t}\), we can get rid of the loop by vectorizing the central difference. The above code segment goes as follows in its vectorized version (see Problem 1.2 in [9] for explanation of details):
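The vectorized code is not shown in this excerpt; a sketch of a centered difference for \(u^{\prime}\), with one-sided differences at the end points (the function name differentiate is our choice), could read:

```python
import numpy as np

def differentiate(u, dt):
    """Return du/dt at all mesh points: centered differences at the
    interior points, one-sided differences at the two end points."""
    dudt = np.zeros_like(u)
    dudt[1:-1] = (u[2:] - u[:-2])/(2*dt)   # vectorized central difference
    dudt[0] = (u[1] - u[0])/dt             # forward difference at t=0
    dudt[-1] = (u[-1] - u[-2])/dt          # backward difference at t=T
    return dudt
```

The slicing expression u[2:] - u[:-2] computes all the differences u[n+1] - u[n-1] in one array operation, which is what removes the slow Python loop.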
1.2.2 Verification
Manual calculation
The simplest type of verification, which is also instructive for understanding the algorithm, is to compute \(u^{1}\), \(u^{2}\), and \(u^{3}\) with the aid of a calculator and make a function for comparing these results with those from the solver function. The test_three_steps function in the file vib_undamped.py shows the details of how we use the hand calculations to test the code:

- the function name begins with test_
- the function takes no arguments
- the test is formulated as a boolean condition and executed by assert
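The function itself is not included in this excerpt. A self-contained sketch of the same idea follows; instead of the book's precomputed numbers, the "hand calculation" is spelled out here as explicit step-by-step arithmetic:

```python
import numpy as np

def test_three_steps():
    I, w, dt = 1.0, 2*np.pi, 0.1

    # "Hand" calculation: apply the scheme formulas one step at a time
    u0 = I
    u1 = u0 - 0.5*dt**2*w**2*u0                # special first step
    u2 = 2*u1 - u0 - dt**2*w**2*u1             # general scheme
    u3 = 2*u2 - u1 - dt**2*w**2*u2
    u_by_hand = np.array([u0, u1, u2, u3])

    # Solver: three steps of the same scheme in a loop
    Nt = 3
    u = np.zeros(Nt + 1)
    u[0] = I
    u[1] = u[0] - 0.5*dt**2*w**2*u[0]
    for n in range(1, Nt):
        u[n+1] = 2*u[n] - u[n-1] - dt**2*w**2*u[n]

    tol = 1e-14
    assert np.abs(u_by_hand - u).max() < tol

test_three_steps()
```

Note that the function obeys all three conventions above: the name starts with test_, it takes no arguments, and the test is an assert on a boolean condition.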
Testing very simple polynomial solutions
Constructing test problems where the exact solution is constant or linear helps initial debugging and verification, as one expects any reasonable numerical method to reproduce such solutions to machine precision. Second-order accurate methods will often also reproduce a quadratic solution. Here \([D_{t}D_{t}t^{2}]^{n}=2\), which is the exact result. A solution \(u=t^{2}\) leads to \(u^{\prime\prime}+\omega^{2}u=2+(\omega t)^{2}\neq 0\). We must therefore add a source term to the equation: \(u^{\prime\prime}+\omega^{2}u=f\), to allow a solution \(u=t^{2}\) for \(f=2+(\omega t)^{2}\). By simple insertion we can show that the mesh function \(u^{n}=t_{n}^{2}\) is also a solution of the discrete equations. Problem 1.1 asks you to carry out all details to show that linear and quadratic solutions are solutions of the discrete equations. Such results are very useful for debugging and verification. You are strongly encouraged to do this problem now!
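The claim can also be checked numerically. The sketch below extends the scheme with a source term; the first-step formula \(u^{1}=u^{0}+\Delta tV+\frac{1}{2}\Delta t^{2}(f(t_{0})-\omega^{2}u^{0})\), with \(V=u^{\prime}(0)\), is our assumption for a second-order accurate start:

```python
import numpy as np

def solver_with_source(I, V, f, w, dt, T):
    """Central scheme for u'' + w**2*u = f(t), u(0)=I, u'(0)=V."""
    Nt = int(round(T/dt))
    u = np.zeros(Nt + 1)
    t = np.linspace(0, Nt*dt, Nt + 1)
    u[0] = I
    u[1] = u[0] + dt*V + 0.5*dt**2*(f(t[0]) - w**2*u[0])
    for n in range(1, Nt):
        u[n+1] = 2*u[n] - u[n-1] + dt**2*(f(t[n]) - w**2*u[n])
    return u, t

w = 0.5
f = lambda t: 2 + (w*t)**2        # manufactured source for u = t**2
u, t = solver_with_source(I=0, V=0, f=f, w=w, dt=0.1, T=2)
# u should equal t**2 at the mesh points to machine precision
```

Inserting \(u^{n}=t_{n}^{2}\) in the discrete equation shows why: the \(\omega^{2}\) terms cancel and \(2t_{n}^{2}-t_{n-1}^{2}+2\Delta t^{2}=t_{n+1}^{2}\) exactly.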
Checking convergence rates
Empirical computation of convergence rates yields a good method for verification. The method and its computational details are explained in detail in Section 3.1.6 in [9]. Readers not familiar with the concept should look up this reference before proceeding.

- perform m simulations, halving the time step in each: \(\Delta t_{i}=2^{-i}\Delta t_{0}\), \(i=0,1,\ldots,m-1\), where \(\Delta t_{i}\) is the time step used in simulation i;
- compute the \(L^{2}\) norm of the error, \(E_{i}=\sqrt{\Delta t_{i}\sum_{n=0}^{N_{t}-1}(u^{n}-u_{\mbox{\footnotesize e}}(t_{n}))^{2}}\), in each case;
- estimate the convergence rates \(r_{i}\) based on two consecutive experiments \((\Delta t_{i-1},E_{i-1})\) and \((\Delta t_{i},E_{i})\), assuming \(E_{i}=C(\Delta t_{i})^{r}\) and \(E_{i-1}=C(\Delta t_{i-1})^{r}\), where C is a constant. From these equations it follows that \(r=\ln(E_{i-1}/E_{i})/\ln(\Delta t_{i-1}/\Delta t_{i})\). Since this r will vary with i, we equip it with an index and call it \(r_{i-1}\), where i runs from 1 to \(m-1\).
All the implementational details of computing the sequence \(r_{0},r_{1},\ldots,r_{m2}\) appear below.
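The implementation is not reproduced in this excerpt; a self-contained sketch following the three steps above (the parameter values w=0.35 and I=0.3 are simply some chosen numbers, and the solver is restated so the example runs on its own) could read:

```python
import numpy as np

def solver(I, w, dt, T):
    """Central finite difference scheme for u'' + w**2*u = 0, u(0)=I, u'(0)=0."""
    Nt = int(round(T/dt))
    u = np.zeros(Nt + 1)
    t = np.linspace(0, Nt*dt, Nt + 1)
    u[0] = I
    u[1] = u[0] - 0.5*dt**2*w**2*u[0]
    for n in range(1, Nt):
        u[n+1] = 2*u[n] - u[n-1] - dt**2*w**2*u[n]
    return u, t

def convergence_rates(m, solver_function, num_periods=8):
    """Return m-1 empirical convergence rates, halving dt each time."""
    w = 0.35; I = 0.3                      # just some chosen values
    dt = 2*np.pi/w/30                      # 30 time steps per period
    T = 2*np.pi/w*num_periods
    dt_values, E_values = [], []
    for i in range(m):
        u, t = solver_function(I, w, dt, T)
        u_e = I*np.cos(w*t)
        E = np.sqrt(dt*np.sum((u_e - u)**2))   # discrete L2 norm of the error
        dt_values.append(dt)
        E_values.append(E)
        dt = dt/2
    r = [np.log(E_values[i-1]/E_values[i]) /
         np.log(dt_values[i-1]/dt_values[i]) for i in range(1, m)]
    return r
```

Calling convergence_rates(4, solver) returns three rate estimates, all close to 2.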
The error analysis in Sect. 1.4 is quite detailed and suggests that r = 2. It is also an intuitively reasonable result, since we used a second-order accurate finite difference approximation \([D_{t}D_{t}u]^{n}\) to the ODE and a second-order accurate finite difference formula for the initial condition for \(u^{\prime}\).
In the present problem, when \(\Delta t_{0}\) corresponds to 30 time steps per period, the returned r list has all its values equal to 2.00 (if rounded to two decimals). This amazingly accurate result means that all \(\Delta t_{i}\) values are well into the asymptotic regime where the error model \(E_{i}=C(\Delta t_{i})^{r}\) is valid.
We can now construct a proper test function that computes convergence rates and checks that the final (and usually the best) estimate is sufficiently close to 2. Here, a rough tolerance of 0.1 is enough. Later, we will argue for an improvement by adjusting omega and include also that case in our test function here. The unit test goes like
The complete code appears in the file vib_undamped.py.
Visualizing convergence rates with slope markers
Tony S. Yu has written a script plotslopes.py that is very useful for indicating the slope of a graph, especially a graph like \(\ln E=r\ln\Delta t+\ln C\) arising from the model \(E=C\Delta t^{r}\). A copy of the script resides in the src/vib directory. Let us use it to compare the original method for \(u^{\prime\prime}+\omega^{2}u=0\) with the same method applied to the equation with a modified ω. We make log-log plots of the error versus \(\Delta t\). For each curve we attach a slope marker using the slope_marker((x,y), r) function from plotslopes.py, where (x,y) is the position of the marker and r the slope, given as a pair \((r,1)\), here (2,1) and (4,1).
1.2.3 Scaled Model
It is advantageous to use dimensionless variables in simulations, because fewer parameters need to be set. The present problem is made dimensionless by introducing dimensionless variables \(\bar{t}=t/t_{c}\) and \(\bar{u}=u/u_{c}\), where t _{ c } and u _{ c } are characteristic scales for t and u, respectively. We refer to Section 2.2.1 in [11] for all details about this scaling.
We can easily check this assertion: the solution of the scaled problem is \(\bar{u}(\bar{t})=\cos(2\pi\bar{t})\). The formula for u in terms of \(\bar{u}\) gives \(u=I\cos(\omega t)\), which is nothing but the solution of the original problem with dimensions.
The scaled model can be run by calling solver(I=1, w=2*pi, dt=dt, T=T). Each period is now 1 and T simply counts the number of periods. Choosing dt as 1./M gives M time steps per period.
1.3 Visualization of Long Time Simulations

- The numerical solution seems to have correct amplitude.
- There is an angular frequency error which is reduced by decreasing the time step.
- The total angular frequency error grows with time.
1.3.1 Using a Moving Plot Window
In vibration problems it is often of interest to investigate the system’s behavior over long time intervals. Errors in the angular frequency accumulate and become more visible as time grows. We can investigate long time series by introducing a moving plot window that moves along with the p most recently computed periods of the solution. The SciTools package contains a convenient tool for this: MovingPlotWindow. Typing pydoc scitools.MovingPlotWindow shows a demo and a description of its use. The function below utilizes the moving plot window and is in fact called by the main function in the vib_undamped module if the number of periods in the simulation exceeds 10.
We run the scaled problem (the default values for the command-line arguments I and w correspond to the scaled problem) for 40 periods with 20 time steps per period:
The moving plot window is invoked, and we can follow the numerical and exact solutions as time progresses. From this demo we see that the angular frequency error is small in the beginning, and that it becomes more prominent with time. A new run with \(\Delta t=0.1\) (i.e., only 10 time steps per period) clearly shows that the phase errors become significant even earlier in the time series, deteriorating the solution further.
1.3.2 Making Animations
Producing standard video formats
The visualize_front function stores all the plots in files whose names are numbered: tmp_0000.png, tmp_0001.png, tmp_0002.png, and so on. From these files we may make a movie. The Flash format is popular and can be produced by an ffmpeg command of the form ffmpeg -r 25 -i tmp_%04d.png -c:v flv movie.flv.
The ffmpeg program can be replaced by the avconv program in the above command if desired (but at the time of this writing there seems to be more momentum in the ffmpeg project). The -r option should come first and describes the number of frames per second in the movie (even if we would like to have slow movies, keep this number as large as 25, otherwise files are skipped from the movie). The -i option describes the name of the plot files. Other formats can be generated by changing the video codec and equipping the video file with the right extension:
Format  Codec and filename

Flash  -c:v flv movie.flv
MP4  -c:v libx264 movie.mp4
WebM  -c:v libvpx movie.webm
Ogg  -c:v libtheora movie.ogg
The video file can be played by some video player like vlc, mplayer, gxine, or totem, e.g.,
A web page can also be used to play the movie. Today’s standard is to use the HTML5 video tag:
Modern browsers do not support all of the video formats. MP4 is needed to successfully play the videos on Apple devices that use the Safari browser. WebM is the preferred format for Chrome, Opera, Firefox, and Internet Explorer v9+. Flash was a popular format, but older browsers that required Flash can play MP4. All browsers that work with Ogg can also work with WebM. This means that to have a video work in all browsers, the video should be available in the MP4 and WebM formats. The proper HTML code reads
The MP4 format should appear first to ensure that Apple devices will load the video correctly.
Caution: number the plot files correctly
To ensure that the individual plot frames are shown in correct order, it is important to number the files with zero-padded numbers (0000, 0001, 0002, etc.). The printf format %04d specifies an integer in a field of width 4, padded with zeros from the left. A simple Unix wildcard file specification like tmp_*.png will then list the frames in the right order. If the numbers in the filenames were not zero-padded, the frame tmp_11.png would appear before tmp_2.png in the movie.
Playing PNG files in a web browser
The scitools movie command can create a movie player for a set of PNG files such that a web browser can be used to watch the movie. This interface has the advantage that the speed of the movie can easily be controlled, a feature that scientists often appreciate. The command for creating an HTML with a player for a set of PNG files tmp_*.png goes like
The fps argument controls the speed of the movie (‘‘frames per second’’).
To watch the movie, load the video file vib.html into some browser, e.g.,
Click on Start movie to see the result. Moving this movie to some other place requires moving vib.html and all the PNG files tmp_*.png:
Making animated GIF files
The convert program from the ImageMagick software suite can be used to produce animated GIF files from a set of PNG files:
The -delay option takes the delay between frames, measured in units of 1/100 s, so a delay of 25 gives 4 frames per second. Note, however, that in this particular example with \(\Delta t=0.05\) and 40 periods, making an animated GIF file out of the large number of PNG files is a very heavy process and not considered feasible. Animated GIFs are best suited for animations with not so many frames and where you want to see each frame and play them slowly.
1.3.3 Using Bokeh to Compare Graphs
Instead of a moving plot frame, one can use tools that allow panning by the mouse. For example, we can show four periods of several signals in several plots and then scroll with the mouse through the rest of the simulation simultaneously in all the plot windows. The Bokeh plotting library offers such tools, but the plots must be displayed in a web browser. The documentation of Bokeh is excellent, so here we just show how the library can be used to compare a set of u curves corresponding to long time simulations. (By the way, the guidance to correct pronunciation of Bokeh in the documentation and on Wikipedia is not directly compatible with a YouTube video …).
Imagine we have performed experiments for a set of \(\Delta t\) values. We want each curve, together with the exact solution, to appear in a plot, and then arrange all plots in a grid-like fashion:
Furthermore, we want the axes to be coupled such that if we move into the future in one plot, all the other plots follow (note the displaced t axes!):
A function for creating a Bokeh plot, given a list of u arrays and corresponding t arrays, is implemented below. The code combines data from different simulations, described compactly in a list of strings legends.
A particular example using the bokeh_plot function appears below.
1.3.4 Using a Line-by-Line Ascii Plotter
Plotting functions vertically, line by line, in the terminal window using ascii characters only is a simple, fast, and convenient visualization technique for long time series. Note that the time axis then is positive downwards on the screen, so we can let the solution be visualized ‘‘forever’’. The tool scitools.avplotter.Plotter makes it easy to create such plots:
The call p.plot returns a line of text, with the t axis marked and a symbol + for the first function (u) and o for the second function (the exact solution). Here we append to this text a time counter reflecting how many periods the current time point corresponds to. A typical output (\(\omega=2\pi\), \(\Delta t=0.05\)) looks like this:
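The output itself is not reproduced in this excerpt. Since scitools may not be installed, the sketch below is a stripped-down, hypothetical stand-in for scitools.avplotter.Plotter (not the real API) that produces lines of the kind described:

```python
import numpy as np

class AsciiPlotter:
    """Minimal stand-in for scitools.avplotter.Plotter: map function
    values in [ymin, ymax] to character positions on one text line."""
    def __init__(self, ymin, ymax, width=60, symbols='+o'):
        self.ymin, self.ymax = float(ymin), float(ymax)
        self.width = width
        self.symbols = symbols

    def plot(self, t, *y):
        """Return one line of text: the t axis marked by | and the
        function values marked by the symbols (+ and o)."""
        line = [' ']*self.width
        scale = (self.width - 1)/(self.ymax - self.ymin)
        line[int((0 - self.ymin)*scale)] = '|'      # the t (vertical) axis
        for value, symbol in zip(y, self.symbols):
            pos = int((value - self.ymin)*scale)
            line[min(max(pos, 0), self.width - 1)] = symbol
        return ''.join(line)

w, dt, I = 2*np.pi, 0.05, 1
p = AsciiPlotter(-1.2*I, 1.2*I)
for n in range(6):
    t = n*dt
    u = I*np.cos(0.95*w*t)          # stand-in for the numerical solution
    u_e = I*np.cos(w*t)             # exact solution
    print(p.plot(t, u, u_e) + ' %4.2f' % (t*w/(2*np.pi)))
```

Each printed line corresponds to one time point, with the period counter appended, so the time axis runs downwards on the screen as described above.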
1.3.5 Empirical Analysis of the Solution
For oscillating functions like those in Fig. 1.2 we may compute the amplitude and frequency (or period) empirically. That is, we run through the discrete solution points \((t_{n},u_{n})\) and find all maxima and minima points. The distance between two consecutive maxima (or minima) points can be used as an estimate of the local period, while half the difference between the u value at a maximum and a nearby minimum gives an estimate of the local amplitude.
Note that the two returned objects are lists of tuples.
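The function itself is not included in this excerpt; a sketch of such a minmax function, returning the minima and maxima as lists of (t, u) tuples by comparing each interior point with its two neighbors, could read:

```python
import numpy as np

def minmax(t, u):
    """Return lists of (t, u) tuples for all local minima and maxima
    of the mesh function u, found at interior mesh points."""
    minima, maxima = [], []
    for n in range(1, len(u) - 1):
        if u[n-1] > u[n] < u[n+1]:
            minima.append((t[n], u[n]))
        if u[n-1] < u[n] > u[n+1]:
            maxima.append((t[n], u[n]))
    return minima, maxima

t = np.linspace(0, 4, 401)
u = np.cos(2*np.pi*t)             # four periods of a cosine
minima, maxima = minmax(t, u)
```

For this example, the interior maxima lie near t = 1, 2, 3 and the minima near t = 0.5, 1.5, 2.5, 3.5 (the end points are excluded by construction).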
Let \((t_{i},e_{i})\), \(i=0,\ldots,M-1\), be the sequence of all the M maxima points, where \(t_{i}\) is the time value and \(e_{i}\) the corresponding u value. The local period can be defined as \(p_{i}=t_{i+1}-t_{i}\). With Python syntax this reads
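The code itself is missing from this excerpt; a sketch with hypothetical example data (the time values of the maxima would in practice come from the list of (t, u) tuples above) is:

```python
import numpy as np

# Time values of the maxima points, e.g. t_max = [tm for tm, um in maxima]
t_max = [1.0, 2.001, 2.999, 4.0]               # hypothetical example data

p = [t_max[i+1] - t_max[i] for i in range(len(t_max) - 1)]
p = np.array(p)            # convert to array for further computations
omega = 2*np.pi/p          # corresponding angular frequency estimates
```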
The list p created by a list comprehension is converted to an array since we probably want to compute with it, e.g., find the corresponding frequencies 2*pi/p.
Having the minima and the maxima, the local amplitude can be calculated as the difference between two neighboring minimum and maximum points:
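This code is also missing from the excerpt; a sketch (the pairing convention of the i-th maximum with the i-th minimum is our assumption and may differ from the book's file) could read:

```python
import numpy as np

def amplitudes(minima, maxima):
    """Local amplitude estimates: half the difference between the u value
    at a maximum and the u value at a neighboring minimum."""
    a = [0.5*(u_max - u_min)
         for (t_max, u_max), (t_min, u_min) in zip(maxima, minima)]
    return np.array(a)

# Hypothetical example data: (t, u) tuples as returned by minmax
minima = [(0.5, -0.98), (1.5, -0.95)]
maxima = [(1.0, 0.97), (2.0, 0.94)]
a = amplitudes(minima, maxima)
```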
The code segments are found in the file vib_empirical_analysis.py.
Since a[i] and p[i] correspond to the ith amplitude estimate and the ith period estimate, respectively, it is most convenient to visualize the a and p values with the index i on the horizontal axis. (There is no unique time point associated with either of these estimates since values at two different time points were used in the computations.)
In the analysis of very long time series, it is advantageous to compute and plot p and a instead of u to get an impression of the development of the oscillations. Let us do this for the scaled problem and \(\Delta t=0.1,0.05,0.01\). A ready-made function
computes the empirical amplitudes and periods, and creates a plot where the amplitudes and angular frequencies are visualized together with the exact amplitude I and the exact angular frequency w. We can make a little program for creating the plot:
1.4 Analysis of the Numerical Scheme
1.4.1 Deriving a Solution of the Numerical Scheme
After having seen the phase error grow with time in the previous section, we shall now quantify this error through mathematical analysis. The key tool in the analysis will be to establish an exact solution of the discrete equations. The difference equation (1.7) has constant coefficients and is homogeneous. Such equations are known to have solutions of the form \(u^{n}=CA^{n}\), where A is some number to be determined from the difference equation and C is found from the initial condition (C = I). Recall that n in \(u^{n}\) is a superscript labeling the time level, while n in \(A^{n}\) is an exponent.
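The determination of A can be sketched with sympy (symbol names are ours): inserting \(u^{n}=A^{n}\) in \((u^{n+1}-2u^{n}+u^{n-1})/\Delta t^{2}+\omega^{2}u^{n}=0\) and dividing by \(A^{n-1}\) gives a quadratic equation for A:

```python
import sympy as sym

A, dt, w = sym.symbols('A dt w', positive=True)

# Quadratic equation for A after division by A**(n-1):
roots = sym.solve(A**2 - 2*A + 1 + w**2*dt**2*A, A)
# For w*dt < 2 the two roots are complex conjugates with |A| = 1,
# so the numerical solution u^n = C*A^n keeps a constant amplitude.
```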
The physically relevant numerical solution can be taken as the real part of this complex expression.
1.4.2 The Error in the Numerical Frequency
The first observation following (1.18) tells that there is a phase error since the numerical frequency \(\tilde{\omega}\) never equals the exact frequency ω. But how good is the approximation (1.18)? That is, what is the error \(\omega-\tilde{\omega}\) or \(\tilde{\omega}/\omega\)? A Taylor series expansion for small \(\Delta t\) may give an expression that is easier to understand than the complicated function in (1.18):
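The expansion is easily reproduced with sympy; a sketch (assuming the formula \(\tilde{\omega}=(2/\Delta t)\sin^{-1}(\omega\Delta t/2)\) from (1.18)) is:

```python
import sympy as sym

dt, w = sym.symbols('dt w', positive=True)

# Numerical frequency from (1.18): w_tilde = (2/dt)*asin(w*dt/2)
w_tilde = 2/dt*sym.asin(w*dt/2)
w_tilde_series = w_tilde.series(dt, 0, 4).removeO()
# The expansion gives w_tilde ≈ w*(1 + (1/24)*w**2*dt**2),
# i.e., the numerical frequency is too large by a relative error w**2*dt**2/24.
```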
The plot shows that at least \(N_{P}\sim 25\)–30 points per period are necessary for reasonable accuracy, but this depends on the length of the simulation (T) as the total phase error due to the frequency error grows linearly with time (see Exercise 1.2).
1.4.3 Empirical Convergence Rates and Adjusted ω
Adjusting ω is an ideal trick for this simple problem, but when adding damping and nonlinear terms, we have no simple formula for the impact on ω, and therefore we cannot use the trick.
1.4.4 Exact Discrete Solution
The error mesh function is ideal for verification purposes and you are strongly encouraged to make a test based on (1.20) by doing Exercise 1.11.
1.4.5 Convergence
Also (1.19) can be used to establish that \(\tilde{\omega}\rightarrow\omega\) when \(\Delta t\rightarrow 0\). It then follows from the expression(s) for e ^{ n } that \(e^{n}\rightarrow 0\).
1.4.6 The Global Error
To achieve more analytical insight into the nature of the global error, we can Taylor expand the error mesh function (1.21). Since \(\tilde{\omega}\) in (1.18) contains \(\Delta t\) in the denominator we use the series expansion for \(\tilde{\omega}\) inside the cosine function. A relevant sympy session is
Series expansions in sympy have the inconvenient O() term that prevents further calculations with the series. We can use the removeO() command to get rid of the O() term:
Using this w_tilde_series expression for \(\tilde{\omega}\) in (1.21), dropping I (which is a common factor), and performing a series expansion of the error yields
Since we are mainly interested in the leadingorder term in such expansions (the term with lowest power in \(\Delta t\), which goes most slowly to zero), we use the .as_leading_term(dt) construction to pick out this term:
1.4.7 Stability
Looking at (1.20), it appears that the numerical solution has constant and correct amplitude, but an error in the angular frequency. A constant amplitude is not necessarily the case, however! To see this, note that if only \(\Delta t\) is large enough, the magnitude of the argument to \(\sin^{-1}\) in (1.18) may be larger than 1, i.e., \(\omega\Delta t/2>1\). In this case, \(\sin^{-1}(\omega\Delta t/2)\) has a complex value and therefore \(\tilde{\omega}\) becomes complex. Type, for example, asin(x) in wolframalpha.com to see basic properties of \(\sin^{-1}(x)\).
A complex \(\tilde{\omega}\) can be written \(\tilde{\omega}=\tilde{\omega}_{r}+i\tilde{\omega}_{i}\). Since \(\sin^{-1}(x)\) has a negative imaginary part for x > 1, \(\tilde{\omega}_{i}<0\), which means that \(e^{i\tilde{\omega}t}=e^{-\tilde{\omega}_{i}t}e^{i\tilde{\omega}_{r}t}\) will lead to exponential growth in time because \(e^{-\tilde{\omega}_{i}t}\) with \(\tilde{\omega}_{i}<0\) has a positive exponent.
Stability criterion
1.4.8 About the Accuracy at the Stability Limit
An interesting question is whether the stability condition \(\Delta t<2/\omega\) is unfortunate, or more precisely: would it be meaningful to take larger time steps to speed up computations? The answer is a clear no. At the stability limit, we have \(\sin^{-1}(\omega\Delta t/2)=\sin^{-1}1=\pi/2\), and therefore \(\tilde{\omega}=\pi/\Delta t\). (Note that the approximate formula (1.19) is very inaccurate for this value of \(\Delta t\) as it predicts \(\tilde{\omega}\approx 2.34/\Delta t\), which is a 25 percent reduction.) The corresponding period of the numerical solution is \(\tilde{P}=2\pi/\tilde{\omega}=2\Delta t\), which means that there is just one time step \(\Delta t\) between a peak (maximum) and a trough (minimum) in the numerical solution. This is the shortest possible wave that can be represented in the mesh! In other words, it is not meaningful to use a larger time step than the stability limit.
Summary
 1. The key parameter in the formulas is \(p=\omega\Delta t\). The period of oscillations is \(P=2\pi/\omega\), and the number of time steps per period is \(N_{P}=P/\Delta t\). Therefore, \(p=\omega\Delta t=2\pi/N_{P}\), showing that the critical parameter is the number of time steps per period. The smallest possible \(N_{P}\) is 2, showing that \(p\in(0,\pi]\).
 2. Provided \(p\leq 2\), the amplitude of the numerical solution is constant.
 3. The ratio of the numerical angular frequency and the exact one is \(\tilde{\omega}/\omega\approx 1+\frac{1}{24}p^{2}\). The error \(\frac{1}{24}p^{2}\) leads to wrongly displaced peaks of the numerical solution, and the error in peak location grows linearly with time (see Exercise 1.2).
1.5 Alternative Schemes Based on 1st-Order Equations
1.5.1 The Forward Euler Scheme
The reasoning above does not imply that the Forward Euler scheme is not correct, but rather that it is almost equivalent to a second-order accurate scheme for the second-order ODE formulation, and that the error committed has to do with a wrong sampling point.
1.5.2 The Backward Euler Scheme
1.5.3 The Crank-Nicolson Scheme
1.5.4 Comparison of Schemes
We can easily compare methods like the ones above (and many more!) with the aid of the Odespy package. Below is a sketch of the code.
There is quite some more code dealing with plots also, and we refer to the source file vib_undamped_odespy.py for details. Observe that keyword arguments in f(u,t,w=1) can be supplied through a solver parameter f_kwargs (dictionary of additional keyword arguments to f).
Specification of the Forward Euler, Backward Euler, and Crank-Nicolson schemes is done like this:
The vib_undamped_odespy.py program makes two plots of the computed solutions with the various methods in the solvers list: one plot with u(t) versus t, and one phase plane plot where v is plotted against u. That is, the phase plane plot is the curve \((u(t),v(t))\) parameterized by t. Analytically, \(u=I\cos(\omega t)\) and \(v=u^{\prime}=-\omega I\sin(\omega t)\). The exact curve \((u(t),v(t))\) is therefore an ellipse, which often looks like a circle in a plot if the axes are automatically scaled. The important feature, however, is that the exact curve \((u(t),v(t))\) is closed and repeats itself for every period. Not all numerical schemes are capable of doing that, meaning that the amplitude instead shrinks or grows with time.
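Since Odespy may not be available to the reader, the qualitative behavior can also be reproduced with a minimal NumPy sketch of the three schemes for the first-order system (the 2x2 implicit systems are solved exactly by hand here, which is our simplification):

```python
import numpy as np

def solve_system(scheme, w, dt, T, I=1.0):
    """Integrate u' = v, v' = -w**2*u with u(0)=I, v(0)=0 by the
    Forward Euler ('FE'), Backward Euler ('BE'), or Crank-Nicolson
    ('CN') scheme."""
    Nt = int(round(T/dt))
    u = np.zeros(Nt + 1)
    v = np.zeros(Nt + 1)
    u[0], v[0] = I, 0.0
    for n in range(Nt):
        if scheme == 'FE':
            u[n+1] = u[n] + dt*v[n]
            v[n+1] = v[n] - dt*w**2*u[n]
        elif scheme == 'BE':               # implicit 2x2 system solved exactly
            d = 1 + dt**2*w**2
            u[n+1] = (u[n] + dt*v[n])/d
            v[n+1] = (v[n] - dt*w**2*u[n])/d
        elif scheme == 'CN':               # trapezoidal rule, solved exactly
            d = 1 + (dt*w/2)**2
            c = 1 - (dt*w/2)**2
            u[n+1] = (c*u[n] + dt*v[n])/d
            v[n+1] = (c*v[n] - dt*w**2*u[n])/d
    return u, v

# Energy E = 0.5*v**2 + 0.5*w**2*u**2 after 5 periods:
w, dt, T = 2*np.pi, 0.01, 5
for scheme in ('FE', 'BE', 'CN'):
    u, v = solve_system(scheme, w, dt, T)
    E = 0.5*v[-1]**2 + 0.5*w**2*u[-1]**2
    print(scheme, E/(0.5*w**2))   # FE grows, BE decays, CN stays close to 1
```

In the phase plane, the Forward Euler curve spirals outwards, the Backward Euler curve spirals inwards, and the Crank-Nicolson curve stays on a closed ellipse, which is exactly the behavior described above.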
1.5.5 Runge-Kutta Methods
The visual impression is that the 4th-order Runge-Kutta method is very accurate, under all circumstances in these tests, while the 2nd-order scheme suffers from amplitude errors unless the time step is very small.
1.5.6 Analysis of the Forward Euler Scheme
The amplitude goes like \(1+\frac{1}{2}n\omega^{2}\Delta t^{2}\), clearly growing linearly in time (with n).
We can also investigate the error in the angular frequency by a series expansion:
The error in the angular frequency is of the same order as in the scheme (1.7) for the secondorder ODE, but the error in the amplitude is severe.
1.6 Energy Considerations
1.6.1 Derivation of the Energy Expression
E(t) is closely related to the system’s energy
The equation \(mu^{\prime\prime}+ku=0\) can be divided by m and written as \(u^{\prime\prime}+\omega^{2}u=0\) for \(\omega=\sqrt{k/m}\). The energy expression \(E(t)=\frac{1}{2}(u^{\prime})^{2}+\frac{1}{2}\omega^{2}u^{2}\) derived earlier is then \(\tilde{E}(t)/m\), i.e., mechanical energy per unit mass.
Energy of the exact solution
Growth of energy in the Forward Euler scheme
1.6.2 An Error Measure Based on Energy
A vectorized Python implementation of \(e_{E}^{n}\) takes the form
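The implementation itself is not included in this excerpt. A vectorized sketch follows, under the assumption that \(e_{E}^{n}\) compares a discrete energy, with \(u^{\prime}\) replaced by a centered difference at the interior mesh points, with the exact initial energy:

```python
import numpy as np

def energy_error(u, t, w, I, V=0.0):
    """Sketch (assumed definition): e_E^n = E^n - E(0), where
    E^n = 0.5*((u[n+1]-u[n-1])/(2*dt))**2 + 0.5*w**2*u[n]**2
    is evaluated at the interior mesh points."""
    dt = t[1] - t[0]
    dudt = (u[2:] - u[:-2])/(2*dt)            # vectorized u'
    E = 0.5*dudt**2 + 0.5*w**2*u[1:-1]**2     # discrete energy
    E0 = 0.5*V**2 + 0.5*w**2*I**2             # exact initial energy
    return E - E0
```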
The convergence rates of the quantity e_E_norm can be used for verification. The value of e_E_norm is also useful for comparing schemes through their ability to preserve energy. Below is a table demonstrating the relative error in total energy for various schemes (computed by the vib_undamped_odespy.py program). The test problem is \(u^{\prime\prime}+4\pi^{2}u=0\) with \(u(0)=1\) and \(u^{\prime}(0)=0\), so the period is 1 and \(E(t)\approx 4.93\). We clearly see that the Crank-Nicolson and the Runge-Kutta schemes are superior to the Forward and Backward Euler schemes already after one period.
Method  T  \(\Delta t\)  \(\max\left|e_{E}^{n}\right|/e_{E}^{0}\)

Forward Euler  1  0.025  \(1.678\cdot 10^{0}\)
Backward Euler  1  0.025  \(6.235\cdot 10^{-1}\)
Crank-Nicolson  1  0.025  \(1.221\cdot 10^{-2}\)
Runge-Kutta 2nd-order  1  0.025  \(6.076\cdot 10^{-3}\)
Runge-Kutta 4th-order  1  0.025  \(8.214\cdot 10^{-3}\)
However, after 10 periods, the picture is much more dramatic:
Method  T  \(\Delta t\)  \(\max\left|e_{E}^{n}\right|/e_{E}^{0}\)

Forward Euler  10  0.025  \(1.788\cdot 10^{4}\)
Backward Euler  10  0.025  \(1.000\cdot 10^{0}\)
Crank-Nicolson  10  0.025  \(1.221\cdot 10^{-2}\)
Runge-Kutta 2nd-order  10  0.025  \(6.250\cdot 10^{-2}\)
Runge-Kutta 4th-order  10  0.025  \(8.288\cdot 10^{-3}\)
The Runge-Kutta and Crank-Nicolson methods hardly change their energy error with T, while the error in the Forward Euler method grows to huge levels and a relative error of 1 in the Backward Euler method points to \(E(t)\rightarrow 0\) as t grows large.
Running multiple values of \(\Delta t\), we can get some insight into the convergence of the energy error:
Method  T  \(\Delta t\)  \(\max\left|e_{E}^{n}\right|/e_{E}^{0}\)

Forward Euler  10  0.05  \(1.120\cdot 10^{8}\)
Forward Euler  10  0.025  \(1.788\cdot 10^{4}\)
Forward Euler  10  0.0125  \(1.374\cdot 10^{2}\)
Backward Euler  10  0.05  \(1.000\cdot 10^{0}\)
Backward Euler  10  0.025  \(1.000\cdot 10^{0}\)
Backward Euler  10  0.0125  \(9.928\cdot 10^{-1}\)
Crank-Nicolson  10  0.05  \(4.756\cdot 10^{-2}\)
Crank-Nicolson  10  0.025  \(1.221\cdot 10^{-2}\)
Crank-Nicolson  10  0.0125  \(3.125\cdot 10^{-3}\)
Runge-Kutta 2nd-order  10  0.05  \(6.152\cdot 10^{-1}\)
Runge-Kutta 2nd-order  10  0.025  \(6.250\cdot 10^{-2}\)
Runge-Kutta 2nd-order  10  0.0125  \(7.631\cdot 10^{-3}\)
Runge-Kutta 4th-order  10  0.05  \(3.510\cdot 10^{-2}\)
Runge-Kutta 4th-order  10  0.025  \(8.288\cdot 10^{-3}\)
Runge-Kutta 4th-order  10  0.0125  \(2.058\cdot 10^{-3}\)
A striking fact from this table is that the error of the Forward Euler method is reduced by the same factor as \(\Delta t\) is reduced by, while the error in the Crank-Nicolson method has a reduction proportional to \(\Delta t^{2}\) (we cannot say anything for the Backward Euler method). However, for the RK2 method, halving \(\Delta t\) reduces the error by almost a factor of 10 (!), and for the RK4 method the reduction seems proportional to \(\Delta t^{2}\) only (and the trend is confirmed by running smaller time steps, so for \(\Delta t=3.9\cdot 10^{-4}\) the relative error of RK2 is a factor 10 smaller than that of RK4!).
1.7 The EulerCromer Method
While the Runge-Kutta methods and the Crank-Nicolson scheme work well for the vibration equation modeled as a first-order ODE system, both were inferior to the straightforward centered difference scheme for the second-order equation \(u^{\prime\prime}+\omega^{2}u=0\). However, there is a similarly successful scheme available for the first-order system \(u^{\prime}=v\), \(v^{\prime}=-\omega^{2}u\), to be presented below. The ideas of the scheme and their further developments have become very popular in particle and rigid body dynamics and hence are widely used by physicists.
1.7.1 Forward-Backward Discretization
The scheme (1.54)–(1.55) goes under several names: forward-backward scheme, semi-implicit Euler method ^{14}, semi-explicit Euler, symplectic Euler, Newton-Störmer-Verlet, and Euler-Cromer. We shall stick to the latter name.
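The equations themselves appear above this excerpt; for ease of reference, a reconstruction consistent with the first-step calculations in Sect. 1.7.2 (the v-update is applied first) reads

```latex
v^{n+1} = v^{n} - \Delta t\,\omega^{2} u^{n},\qquad
u^{n+1} = u^{n} + \Delta t\, v^{n+1},
```

i.e., a Forward Euler step for \(v^{\prime}=-\omega^{2}u\) followed by a Backward Euler step for \(u^{\prime}=v\).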
How does the Euler-Cromer method preserve the total energy? We may run the example from Sect. 1.6.2:
Method  T  \(\Delta t\)  \(\max\left|e_{E}^{n}\right|/e_{E}^{0}\)
Euler-Cromer  10  0.05  \(2.530\cdot 10^{-2}\)
Euler-Cromer  10  0.025  \(6.206\cdot 10^{-3}\)
Euler-Cromer  10  0.0125  \(1.544\cdot 10^{-3}\)
The relative error in the total energy decreases as \(\Delta t^{2}\), and the error level is slightly lower than for the Crank-Nicolson and Runge-Kutta methods.
1.7.2 Equivalence with the Scheme for the Second-Order ODE
We shall now show that the Euler-Cromer scheme for the system of first-order equations is equivalent to the centered finite difference method for the second-order vibration ODE (!).
A different view can also be taken. If we approximate \(u^{\prime}(0)=0\) by a backward difference, \((u^{0}-u^{-1})/\Delta t=0\), we get \(u^{-1}=u^{0}\), and when combined with (1.7), it results in \(u^{1}=u^{0}-\omega^{2}\Delta t^{2}u^{0}\). This means that the Euler-Cromer method based on (1.55)–(1.54) corresponds to using only a first-order approximation to the initial condition in the method from Sect. 1.1.2.
Correspondingly, using the formulation (1.52)–(1.53) with \(v^{0}=0\) leads to \(u^{1}=u^{0}\), which can be interpreted as using a forward difference approximation for the initial condition \(u^{\prime}(0)=0\). Both Euler-Cromer formulations lead to slightly different values for \(u^{1}\) compared to the method in Sect. 1.1.2. The error is \(\frac{1}{2}\omega^{2}\Delta t^{2}u^{0}\).
1.7.3 Implementation
Solver function
The function below, found in vib_undamped_EulerCromer.py, implements the Euler-Cromer scheme (1.54)–(1.55):
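The function itself is not reproduced in this excerpt. A minimal sketch, consistent with the forward-backward idea (Forward Euler for v, then Backward Euler for u, assuming \(u(0)=I\) and \(u^{\prime}(0)=0\)), could read:

```python
import numpy as np

def solver(I, w, dt, T):
    """Solve u'=v, v'=-w**2*u for t in (0,T], u(0)=I, v(0)=0,
    by the Euler-Cromer method."""
    dt = float(dt)
    Nt = int(round(T/dt))
    t = np.linspace(0, Nt*dt, Nt+1)
    u = np.zeros(Nt+1)
    v = np.zeros(Nt+1)
    u[0], v[0] = I, 0
    for n in range(Nt):
        v[n+1] = v[n] - dt*w**2*u[n]   # Forward Euler step for v
        u[n+1] = u[n] + dt*v[n+1]      # Backward Euler step for u (new v)
    return u, v, t
```

Note how the freshly computed v[n+1] enters the update of u[n+1]; using v[n] instead would give the plain (unstable) Forward Euler method.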
Verification
Since the Euler-Cromer scheme is equivalent to the finite difference method for the second-order ODE \(u^{\prime\prime}+\omega^{2}u=0\) (see Sect. 1.7.2), the performance of the above solver function is the same as for the solver function in Sect. 1.2. The only difference is the formula for the first time step, as discussed above. This deviation in the Euler-Cromer scheme means that the discrete solution listed in Sect. 1.4.4 is not a solution of the Euler-Cromer scheme!
To verify the implementation of the Euler-Cromer method we can adjust v[1] so that the computer-generated values can be compared with the formula (1.20) from Sect. 1.4.4. This adjustment is done in an alternative solver function, solver_ic_fix in vib_EulerCromer.py. Since we now have an exact solution of the discrete equations available, we can write a test function test_solver for checking the equality of computed values with the formula (1.20):
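The test function is not reproduced in this excerpt. As a hedged illustration of the idea, the same kind of check can be written for the equivalent centered scheme, using the exact discrete solution \(u^{n}=I\cos(\tilde{\omega}t_{n})\) with \(\tilde{\omega}=(2/\Delta t)\sin^{-1}(\omega\Delta t/2)\) (cf. (1.19)–(1.20)):

```python
import numpy as np

def solver_2nd_order(I, w, dt, T):
    # Centered scheme for u'' + w**2*u = 0, u(0)=I, u'(0)=0 (Sect. 1.1.2)
    Nt = int(round(T/dt))
    t = np.linspace(0, Nt*dt, Nt+1)
    u = np.zeros(Nt+1)
    u[0] = I
    u[1] = u[0] - 0.5*dt**2*w**2*u[0]   # second-order first step
    for n in range(1, Nt):
        u[n+1] = 2*u[n] - u[n-1] - dt**2*w**2*u[n]
    return u, t

def u_exact_discrete(t, I, w, dt):
    w_tilde = (2.0/dt)*np.arcsin(w*dt/2.0)  # numerical frequency, cf. (1.19)
    return I*np.cos(w_tilde*t)              # exact discrete solution, cf. (1.20)

def test_solver():
    I, w, dt, T = 1.2, 2.0, 0.1, 4.0
    u, t = solver_2nd_order(I, w, dt, T)
    diff = np.abs(u - u_exact_discrete(t, I, w, dt)).max()
    assert diff < 1e-12   # should hold to machine precision
```

The point of such a test is that the tolerance can be set close to machine precision, since the formula solves the discrete equations exactly.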
Another function, demo, visualizes the difference between the Euler-Cromer scheme and the scheme (1.7) for the second-order ODE, arising from the mismatch at the first time level.
Using Odespy
The Euler-Cromer method is also available in the Odespy package. The important thing to remember, when using this implementation, is that we must order the unknowns as v and u, so the u vector at each time level consists of the velocity v as first component and the displacement u as second component:
Convergence rates
We may use the convergence_rates function in the file vib_undamped.py to investigate the convergence rate of the Euler-Cromer method, see the convergence_rate function in the file vib_undamped_EulerCromer.py. Since we could eliminate v to get a scheme for u that is equivalent to the finite difference method for the second-order equation in u, we would expect the convergence rates to be the same, i.e., r = 2. However, measuring the convergence rate of u in the Euler-Cromer scheme shows that r = 1 only! Adjusting the initial condition does not change the rate. Adjusting ω, as outlined in Sect. 1.4.2, gives a 4th-order method there, while there is no increase in the measured rate in the Euler-Cromer scheme. It is obvious that the Euler-Cromer scheme is dramatically better than the two other first-order methods, Forward Euler and Backward Euler, but this is not reflected in the convergence rate of u.
1.7.4 The Störmer-Verlet Algorithm
Another very popular algorithm for vibration problems, especially for long time simulations, is the Störmer-Verlet algorithm. It has become the method of choice among physicists for molecular simulations as well as particle and rigid body dynamics.
 1. solve \(v^{\prime}=-\omega^{2}u\) by a Forward Euler step in \([t_{n},t_{n+\frac{1}{2}}]\)
 2. solve \(u^{\prime}=v\) by a Backward Euler step in \([t_{n},t_{n+\frac{1}{2}}]\)
 3. solve \(u^{\prime}=v\) by a Forward Euler step in \([t_{n+\frac{1}{2}},t_{n+1}]\)
 4. solve \(v^{\prime}=-\omega^{2}u\) by a Backward Euler step in \([t_{n+\frac{1}{2}},t_{n+1}]\)
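The four sub-steps can be sketched in code for \(u^{\prime\prime}+\omega^{2}u=0\) (a minimal illustration, assuming \(u(0)=I\), \(u^{\prime}(0)=0\); not the book's implementation):

```python
import numpy as np

def stormer_verlet(I, w, dt, T):
    """Stormer-Verlet steps for u'=v, v'=-w**2*u, u(0)=I, v(0)=0."""
    Nt = int(round(T/dt))
    t = np.linspace(0, Nt*dt, Nt+1)
    u = np.zeros(Nt+1)
    v = np.zeros(Nt+1)
    u[0], v[0] = I, 0
    for n in range(Nt):
        v_half = v[n] - 0.5*dt*w**2*u[n]      # 1: Forward Euler for v on [t_n, t_{n+1/2}]
        u_half = u[n] + 0.5*dt*v_half         # 2: Backward Euler for u on [t_n, t_{n+1/2}]
        u[n+1] = u_half + 0.5*dt*v_half       # 3: Forward Euler for u on [t_{n+1/2}, t_{n+1}]
        v[n+1] = v_half - 0.5*dt*w**2*u[n+1]  # 4: Backward Euler for v on [t_{n+1/2}, t_{n+1}]
    return u, v, t
```

Note that steps 2 and 3 combine to \(u^{n+1}=u^{n}+\Delta t\,v^{n+\frac{1}{2}}\), the familiar kick-drift-kick structure.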
The most numerically stable scheme, with respect to accumulation of rounding errors, is (1.61)–(1.62). It has, according to [6], better properties in this regard than the direct scheme for the second-order ODE.
1.8 Staggered Mesh
A more intuitive discretization than the Euler-Cromer method, yet equivalent, employs solely centered differences in a natural way for the 2 × 2 first-order ODE system. The scheme is in fact fully equivalent to the second-order scheme for \(u^{\prime\prime}+\omega^{2}u=0\), also for the first time step. Such a scheme needs to operate on a staggered mesh in time. Staggered meshes are very popular in many physical applications, perhaps foremost fluid dynamics and electromagnetics, so the topic is important to learn.
1.8.1 The Euler-Cromer Scheme on a Staggered Mesh
We can eliminate the v values and get back the centered scheme based on the secondorder differential equation \(u^{\prime\prime}+\omega^{2}u=0\), so all these three schemes are equivalent. However, they differ somewhat in the treatment of the initial conditions.
1.8.2 Implementation of the Scheme on a Staggered Mesh
Implementation with integer indices
Translating the schemes (1.68) and (1.67) to computer code faces the problem of how to store and access \(v^{n+\frac{1}{2}}\), since arrays only allow integer indices with base 0. We must then introduce a convention: \(v^{n+\frac{1}{2}}\) is stored in v[n], while \(v^{n-\frac{1}{2}}\) is stored in v[n-1]. We can then write the algorithm in Python as
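A minimal sketch of such an algorithm for \(u^{\prime\prime}+\omega^{2}u=0\) with \(u(0)=I\), \(u^{\prime}(0)=0\) (an illustration following the convention above, not necessarily the file's exact code):

```python
import numpy as np

def solver(I, w, dt, T):
    """Staggered Euler-Cromer; v[n] holds v at t_{n+1/2}."""
    dt = float(dt)
    Nt = int(round(T/dt))
    t = np.linspace(0, Nt*dt, Nt+1)   # mesh points for u
    t_v = t[:-1] + dt/2               # staggered mesh points for v
    u = np.zeros(Nt+1)
    v = np.zeros(Nt)
    u[0] = I
    v[0] = -0.5*dt*w**2*u[0]          # v^{1/2}: special step of length dt/2
    for n in range(1, Nt+1):
        u[n] = u[n-1] + dt*v[n-1]     # u^n from v^{n-1/2}
        if n < Nt:
            v[n] = v[n-1] - dt*w**2*u[n]   # v^{n+1/2} from u^n
    return u, t, v, t_v
```

The first step gives \(u^{1}=u^{0}(1-\frac{1}{2}\omega^{2}\Delta t^{2})\), which is exactly the first step of the centered scheme for the second-order ODE.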
Note that u and v are returned together with the mesh points such that the complete mesh function for u is described by u and t, while v and t_v represent the mesh function for v.
Implementation with half-integer indices
Some prefer to see a closer relationship between the code and the mathematics for the quantities with halfinteger indices. For example, we would like to replace the updating equation for v[n] by
This is easy to do if we can be sure that n+half means n and n-half means n-1. A possible solution is to define half as a special object such that an integer plus half results in the integer, while an integer minus half equals the integer minus 1. A simple Python class may realize the half object:
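A minimal sketch of such a class (the class name HalfInt is illustrative):

```python
class HalfInt:
    """Emulate the constant 1/2 in integer array indices:
    n + half yields n, while n - half yields n - 1."""
    def __radd__(self, other):
        # invoked for expressions like n + half
        return other
    def __rsub__(self, other):
        # invoked for expressions like n - half
        return other - 1

half = HalfInt()
```

With this object, the updating formula for v can be written as `v[n+half] = v[n-half] - dt*w**2*u[n]`, mirroring the mathematical notation \(v^{n+\frac{1}{2}}=v^{n-\frac{1}{2}}-\Delta t\,\omega^{2}u^{n}\).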
The __radd__ function is invoked for all expressions n+half (″right add″ with self as half and other as n). Similarly, the __rsub__ function is invoked for n-half and results in n-1.
Using the half object, we can implement the algorithms in an even more readable way:
Verification of this code is easy as we can just compare the computed u with the u produced by the solver function in vib_undamped.py (which solves \(u^{\prime\prime}+\omega^{2}u=0\) directly). The values should coincide to machine precision since the two numerical methods are mathematically equivalent. We refer to the file vib_undamped_staggered.py for the details of a unit test (test_staggered) that checks this property.
1.9 Exercises and Problems
Problem 1.1 (Use linear/quadratic functions for verification)
 a) Discretize this equation according to \([D_{t}D_{t}u+\omega^{2}u=f]^{n}\) and derive the equation for the first time step (\(u^{1}\)).
 b) For verification purposes, we use the method of manufactured solutions (MMS) with the choice \(u_{\mbox{\footnotesize e}}(t)=ct+d\). Find restrictions on c and d from the initial conditions. Compute the corresponding source term f. Show that \([D_{t}D_{t}t]^{n}=0\) and use the fact that the \(D_{t}D_{t}\) operator is linear, \([D_{t}D_{t}(ct+d)]^{n}=c[D_{t}D_{t}t]^{n}+[D_{t}D_{t}d]^{n}=0\), to show that \(u_{\mbox{\footnotesize e}}\) is also a perfect solution of the discrete equations.
 c) Use sympy to do the symbolic calculations above. Here is a sketch of the program vib_undamped_verify_mms.py:
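The sketch itself is not reproduced in this excerpt; a minimal, runnable illustration of the idea from b) (names are illustrative, not necessarily those of vib_undamped_verify_mms.py):

```python
import sympy as sym

t, w, dt, c, d = sym.symbols('t w dt c d')

def ode_source_term(u):
    # fit f such that u'' + w**2*u = f is fulfilled by u
    return sym.diff(u, t, 2) + w**2*u

def DtDt(u, dt):
    # centered second difference operator applied to the expression u(t)
    return (u.subs(t, t + dt) - 2*u + u.subs(t, t - dt))/dt**2

def residual_discrete_eq(u):
    # residual of [DtDt u + w**2 u = f]^n with u inserted
    R = DtDt(u, dt) + w**2*u - ode_source_term(u)
    return sym.simplify(R)

u_e = c*t + d                       # linear manufactured solution
print(residual_discrete_eq(u_e))    # prints 0: u_e solves the discrete equations
```

The quadratic case in d) amounts to calling residual_discrete_eq with \(bt^{2}+ct+d\) instead.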
Fill in the various functions such that the calls in the main function work.
 d) The purpose now is to choose a quadratic function \(u_{\mbox{\footnotesize e}}=bt^{2}+ct+d\) as exact solution. Extend the sympy code above with a function quadratic for fitting f and checking if the discrete equations are fulfilled. (The function is very similar to linear.)
 e) Will a polynomial of degree three fulfill the discrete equations?
 f) Implement a solver function for computing the numerical solution of this problem.
 g) Write a test function for checking that the quadratic solution is computed correctly (to machine precision, though the roundoff errors accumulate and increase with T) by the solver function.
Filename: vib_undamped_verify_mms.
Exercise 1.2 (Show linear growth of the phase with time)
Consider an exact solution \(I\cos(\omega t)\) and an approximation \(I\cos(\tilde{\omega}t)\). Define the phase error as the time lag between the peak I in the exact solution and the corresponding peak in the approximation after m periods of oscillations. Show that this phase error is linear in m.
Filename: vib_phase_error_growth.
Exercise 1.3 (Improve the accuracy by adjusting the frequency)
According to (1.19), the numerical frequency deviates from the exact frequency by a (dominating) amount \(\omega^{3}\Delta t^{2}/24>0\). Replace the w parameter in the algorithm in the solver function in vib_undamped.py by w*(1 - (1./24)*w**2*dt**2) and test how this adjustment in the numerical algorithm improves the accuracy (use \(\Delta t=0.1\) and simulate for 80 periods, with and without adjustment of ω).
Filename: vib_adjust_w.
Exercise 1.4 (See if adaptive methods improve the phase error)
Adaptive methods for solving ODEs aim at adjusting \(\Delta t\) such that the error is within a userprescribed tolerance. Implement the equation \(u^{\prime\prime}+u=0\) in the Odespy ^{15} software. Use the example from Section 3.2.11 in [9]. Run the scheme with a very low tolerance (say 10^{−14}) and for a long time, check the number of time points in the solver’s mesh (len(solver.t_all)), and compare the phase error with that produced by the simple finite difference method from Sect. 1.1.2 with the same number of (equally spaced) mesh points. The question is whether it pays off to use an adaptive solver or if equally many points with a simple method gives about the same accuracy.
Filename: vib_undamped_adaptive.
Exercise 1.5 (Use a Taylor polynomial to compute u ^{1})
With \(u^{\prime\prime}=-\omega^{2}u\) and \(u^{\prime}(0)=0\), show that this method also leads to (1.8). Generalize the condition on \(u^{\prime}(0)\) to be \(u^{\prime}(0)=V\) and compute \(u^{1}\) in this case with both methods.
Filename: vib_first_step.
Problem 1.6 (Derive and investigate the velocity Verlet method)
 1. step u forward from \(t_{n}\) to \(t_{n+1}\) using a three-term Taylor series,
 2. replace \(u^{\prime\prime}\) by \(-\omega^{2}u\),
 3. discretize \(v^{\prime}=-\omega^{2}u\) by a Crank-Nicolson method.
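One possible realization of these steps reads as follows (a hedged sketch with initial conditions \(u(0)=I\), \(u^{\prime}(0)=V\), meant as an illustration of the construction rather than the derivation the problem asks for):

```python
import numpy as np

def velocity_verlet(I, V, w, dt, T):
    Nt = int(round(T/dt))
    t = np.linspace(0, Nt*dt, Nt+1)
    u = np.zeros(Nt+1)
    v = np.zeros(Nt+1)
    u[0], v[0] = I, V
    for n in range(Nt):
        # steps 1-2: three-term Taylor series for u with u'' replaced by -w**2*u
        u[n+1] = u[n] + dt*v[n] - 0.5*dt**2*w**2*u[n]
        # step 3: Crank-Nicolson (trapezoidal) step for v' = -w**2*u
        v[n+1] = v[n] - 0.5*dt*w**2*(u[n] + u[n+1])
    return u, v, t
```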
Problem 1.7 (Find the minimal resolution of an oscillatory function)
Sketch the function on a given mesh which has the highest possible frequency. That is, this oscillatory ‘‘cos-like’’ function has its maxima and minima at every two grid points. Find an expression for the frequency of this function, and use the result to find the largest relevant value of \(\omega\Delta t\) when ω is the frequency of an oscillating function and \(\Delta t\) is the mesh spacing.
Filename: vib_largest_wdt.
Exercise 1.8 (Visualize the accuracy of finite differences for a cosine function)
Plot E as a function of \(p=\omega\Delta t\). The relevant values of p are \([0,\pi]\) (see Exercise 1.7 for why p > π does not make sense). The deviation of the curve from unity visualizes the error in the approximation. Also expand E as a Taylor polynomial in p up to fourth degree (use, e.g., sympy).
Filename: vib_plot_fd_exp_error.
Exercise 1.9 (Verify convergence rates of the error in energy)
We consider the ODE problem \(u^{\prime\prime}+\omega^{2}u=0\), \(u(0)=I\), \(u^{\prime}(0)=V\), for \(t\in(0,T]\). The total energy of the solution \(E(t)=\frac{1}{2}(u^{\prime})^{2}+\frac{1}{2}\omega^{2}u^{2}\) should stay constant. The error in energy can be computed as explained in Sect. 1.6.
Make a test function in a separate file, where code from vib_undamped.py is imported, but the convergence_rates and test_convergence_rates functions are copied and modified to also incorporate computations of the error in energy and the convergence rate of this error. The expected rate is 2, just as for the solution itself.
Filename: test_error_conv.
Exercise 1.10 (Use linear/quadratic functions for verification)
This exercise is a generalization of Problem 1.1 to the extended model problem (1.71) where the damping term is either linear or quadratic. Solve the various subproblems and see how the results and problem settings change with the generalized ODE in case of linear or quadratic damping. By modifying the code from Problem 1.1, sympy will do most of the work required to analyze the generalized problem.
Filename: vib_verify_mms.
Exercise 1.11 (Use an exact discrete solution for verification)
Write a test function in a separate file that employs the exact discrete solution (1.20) to verify the implementation of the solver function in the file vib_undamped.py.
Filename: test_vib_undamped_exact_discrete_sol.
Exercise 1.12 (Use analytical solution for convergence rate tests)
The purpose of this exercise is to perform convergence tests of the problem (1.71) when \(s(u)=cu\), \(F(t)=A\sin\phi t\) and there is no damping. Find the complete analytical solution to the problem in this case (most textbooks on mechanics or ordinary differential equations list the various elements you need to write down the exact solution, or you can use symbolic tools like sympy or wolframalpha.com). Modify the convergence_rate function from the vib_undamped.py program to perform experiments with the extended model. Verify that the error is of order \(\Delta t^{2}\).
Filename: vib_conv_rate.
Exercise 1.13 (Investigate the amplitude errors of many solvers)
Use the program vib_undamped_odespy.py from Sect. 1.5.4 (utilize the function amplitudes) to investigate how well famous methods for 1st-order ODEs can preserve the amplitude of u in undamped oscillations. Test, for example, the 3rd- and 4th-order Runge-Kutta methods (RK3, RK4), the Crank-Nicolson method (CrankNicolson), the 2nd- and 3rd-order Adams-Bashforth methods (AdamsBashforth2, AdamsBashforth3), and a 2nd-order backward scheme (Backward2Step). The relevant governing equations are listed in the beginning of Sect. 1.5.
Filename: vib_amplitude_errors.
Problem 1.14 (Minimize memory usage of a simple vibration solver)
We consider the model problem \(u^{\prime\prime}+\omega^{2}u=0\), \(u(0)=I\), \(u^{\prime}(0)=V\), solved by a second-order finite difference scheme. A standard implementation typically employs an array u for storing all the \(u^{n}\) values. However, at some time level n+1 where we want to compute u[n+1], all we need of previous u values are those from levels n and n-1. We can therefore avoid storing the entire array u, and instead work with u[n+1], u[n], and u[n-1], named as u, u_n, u_nm1, for instance. Another possible naming convention is u, u_n[0], u_n[1]. Store the solution in a file for later visualization. Make a test function that verifies the implementation by comparing with another code for the same problem.
Filename: vib_memsave0.
Problem 1.15 (Minimize memory usage of a general vibration solver)
The program vib.py stores the complete solution \(u^{0},u^{1},\ldots,u^{N_{t}}\) in memory, which is convenient for later plotting. Make a memory minimizing version of this program where only the last three values \(u^{n+1}\), \(u^{n}\), and \(u^{n-1}\) are stored in memory under the names u, u_n, and u_nm1 (this is the naming convention used in this book). Write each computed \((t_{n+1},u^{n+1})\) pair to file. Visualize the data in the file (a cool solution is to read one line at a time and plot the u value using the line-by-line plotter in the visualize_front_ascii function; this technique makes it trivial to visualize very long time simulations).
Filename: vib_memsave.
Exercise 1.16 (Implement the Euler-Cromer scheme for the generalized model)
 a) Implement the Euler-Cromer method from Sect. 1.10.8.
 b) We expect the Euler-Cromer method to have a first-order convergence rate. Make a unit test based on this expectation.
 c) Consider a system with m = 4, \(f(v)=b|v|v\), b = 0.2, \(s(u)=2u\), F = 0. Compute the solution using the centered difference scheme from Sect. 1.10.1 and the Euler-Cromer scheme for the longest possible time step \(\Delta t\). We can use the result from the case without damping, i.e., the largest \(\Delta t=2/\omega\), \(\omega\approx\sqrt{0.5}\) in this case, but since b will modify the frequency, we take the longest possible time step as a safety factor 0.9 times \(2/\omega\). Refine \(\Delta t\) three times by a factor of two and compare the two curves.
Filename: vib_EulerCromer.
Problem 1.17 (Interpret \([D_{t}D_{t}u]^{n}\) as a forward-backward difference)
Show that the difference \([D_{t}D_{t}u]^{n}\) is equal to \([D_{t}^{+}D_{t}^{-}u]^{n}\) and \([D_{t}^{-}D_{t}^{+}u]^{n}\). That is, instead of applying a centered difference twice one can alternatively apply a mixture of forward and backward differences.
Filename: vib_DtDt_fw_bw.
Exercise 1.18 (Analysis of the Euler-Cromer scheme)
The Euler-Cromer scheme for the model problem \(u^{\prime\prime}+\omega^{2}u=0\), \(u(0)=I\), \(u^{\prime}(0)=0\), is given in (1.55)–(1.54). Find the exact discrete solutions of this scheme and show that the solution for \(u^{n}\) coincides with that found in Sect. 1.4.
Hint
1.10 Generalization: Damping, Nonlinearities, and Excitation
There are two main types of damping (friction) forces: linear \(f(u^{\prime})=bu^{\prime}\), or quadratic \(f(u^{\prime})=b|u^{\prime}|u^{\prime}\). Spring systems often feature linear damping, while air resistance usually gives rise to quadratic damping. Spring forces are often linear: \(s(u)=cu\), but nonlinear versions are also common; the most famous is the gravity force on a pendulum, which acts as a spring with \(s(u)\sim\sin(u)\).
1.10.1 A Centered Scheme for Linear Damping
1.10.2 A Centered Scheme for Quadratic Damping
When \(f(u^{\prime})=b|u^{\prime}|u^{\prime}\), we get a quadratic equation for \(u^{n+1}\) in (1.73). This equation can be straightforwardly solved by the well-known formula for the roots of a quadratic equation. However, we can also avoid the nonlinearity by introducing an approximation with an error of order no higher than what we already have from replacing derivatives with finite differences.
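The approximation in question is a geometric mean (cf. the remark about the geometric mean in Sect. 1.10.5); a sketch of the idea is

```latex
\left[u^{\prime}|u^{\prime}|\right]^{n}\approx
\left|\frac{u^{n}-u^{n-1}}{\Delta t}\right|\,\frac{u^{n+1}-u^{n}}{\Delta t},
```

which is linear in the unknown \(u^{n+1}\), so the update formula can still be solved explicitly.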
1.10.3 A Forward-Backward Discretization of the Quadratic Damping Term
1.10.4 Implementation
The algorithm arising from the methods in Sects. 1.10.1 and 1.10.2 is very similar to the undamped case in Sect. 1.1.2. The difference is basically a question of different formulas for \(u^{1}\) and \(u^{n+1}\). This is actually quite remarkable: the equation (1.71) is normally impossible to solve by pen and paper, except for some special choices of F, s, and f, yet the complexity of the nonlinear generalized model (1.71) versus the simple undamped model is not a big deal when we solve the problem numerically!
 1.
\(u^{0}=I\)
 2.
 3.
The complete code resides in the file vib.py.
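Since the formulas themselves are elided in this excerpt, here is a hedged sketch of what a solver for the linear damping case, \(mu^{\prime\prime}+bu^{\prime}+s(u)=F(t)\), can look like (function name and argument list are illustrative, not necessarily those of vib.py; the quadratic-damping branch is omitted):

```python
import numpy as np

def solver_linear_damping(I, V, m, b, s, F, dt, T):
    """Centered scheme for m*u'' + b*u' + s(u) = F(t), u(0)=I, u'(0)=V.
    s and F are Python functions of u and t, respectively."""
    dt = float(dt)
    Nt = int(round(T/dt))
    t = np.linspace(0, Nt*dt, Nt+1)
    u = np.zeros(Nt+1)
    u[0] = I
    # special formula for the first time step (incorporates u'(0)=V)
    u[1] = u[0] + dt*V + dt**2/(2.0*m)*(F(t[0]) - b*V - s(u[0]))
    for n in range(1, Nt):
        u[n+1] = (2*m*u[n] + (b*dt/2.0 - m)*u[n-1] +
                  dt**2*(F(t[n]) - s(u[n])))/(m + b*dt/2.0)
    return u, t
```

The constant-solution test from Sect. 1.10.5 (choose \(u_{\mbox{\footnotesize e}}=I\), V = 0, \(F(t)=s(I)\)) is a convenient first check of such an implementation.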
1.10.5 Verification
Constant solution
For debugging and initial verification, a constant solution is often very useful. We choose \(u_{\mbox{\footnotesize e}}(t)=I\), which implies V = 0. Inserted in the ODE, we get \(F(t)=s(I)\) for any choice of f. Since the discrete derivative of a constant vanishes (in particular, \([D_{2t}I]^{n}=0\), \([D_{t}I]^{n}=0\), and \([D_{t}D_{t}I]^{n}=0\)), the constant solution also fulfills the discrete equations. The constant should therefore be reproduced to machine precision. The function test_constant in vib.py implements this test.
Linear solution
Quadratic solution
Choosing \(u_{\mbox{\footnotesize e}}=bt^{2}+Vt+I\), with b arbitrary, fulfills the initial conditions and fits the ODE if F is adjusted properly. The solution also solves the discrete equations with linear damping. However, this quadratic polynomial in t does not fulfill the discrete equations in case of quadratic damping, because the geometric mean used in the approximation of this term introduces an error. Doing Exercise 1.10 will reveal the details. One can fit F ^{ n } in the discrete equations such that the quadratic polynomial is reproduced by the numerical method (to machine precision).
Catching bugs

- Use m instead of 2*m in the denominator of u[1]: the code works for a constant solution, but fails (as it should) for a quadratic one.
- Use b*dt instead of b*dt/2 in the updating formula for u[n+1] in case of linear damping: constant and quadratic solutions both fail.
- Use F[n+1] instead of F[n] in case of linear or quadratic damping: the constant solution works, the quadratic fails.
1.10.6 Visualization
The functions for visualizations differ significantly from those in the undamped case in the vib_undamped.py program because, in the present general case, we do not have an exact solution to include in the plots. Moreover, we have no good estimate of the periods of the oscillations as there will be one period determined by the system parameters, essentially the approximate frequency \(\sqrt{s^{\prime}(0)/m}\) for linear s and small damping, and one period dictated by F(t) in case the excitation is periodic. This is, however, nothing that the program can depend on or make use of. Therefore, the user has to specify T and the window width to get a plot that moves with the graph and shows the most recent parts of it in long time simulations.
The vib.py code contains several functions for analyzing the time series signal and for visualizing the solutions.
1.10.7 User Interface
The main function is changed substantially from the vib_undamped.py code, since we need to specify the new data c, s(u), and F(t). In addition, we must set T and the plot window width (instead of the number of periods we want to simulate as in vib_undamped.py). To figure out whether we can use one plot for the whole time series or if we should follow the most recent part of u, we can use the plot_empirical_freq_and_amplitude function's estimate of the number of local maxima. This number is now returned from the function and used in main to decide on the visualization technique.
The program vib.py contains the above code snippets and can solve the model problem (1.71). As a demo of vib.py, we consider the case I = 1, V = 0, m = 1, c = 0.03, \(s(u)=\sin(u)\), \(F(t)=3\cos(4t)\), \(\Delta t=0.05\), and T = 140. The relevant command to run is
1.10.8 The Euler-Cromer Scheme for the Generalized Model
1.10.9 The Störmer-Verlet Algorithm for the Generalized Model
1.10.10 A Staggered Euler-Cromer Scheme for a Generalized Model
1.10.11 The PEFRL 4th-Order Accurate Algorithm
1.11 Exercises and Problems
Exercise 1.19 (Implement the solver via classes)
Reimplement the vib.py program using a class Problem to hold all the physical parameters of the problem, a class Solver to hold the numerical parameters and compute the solution, and a class Visualizer to display the solution.
Hint
Use the ideas and examples from Sections 5.5.1 and 5.5.2 in [9]. More specifically, make a superclass Problem for holding the scalar physical parameters of a problem and let subclasses implement the s(u) and F(t) functions as methods. Try to call up as much existing functionality in vib.py as possible.
Filename: vib_class.
Problem 1.20 (Use a backward difference for the damping term)
Filename: vib_gen_bwdamping.
Exercise 1.21 (Use the forwardbackward scheme with quadratic damping)
What we learn from this exercise is that the first-order differences and the linearization trick play together in ‘‘the right way’’ such that the scheme is as good as when we (in Sect. 1.10.10) carefully apply centered differences and a geometric mean on a staggered mesh to achieve second-order accuracy. There is a difference in the handling of the initial conditions, though, as explained at the end of Sect. 1.7.
Filename: vib_gen_bwdamping.
1.12 Applications of Vibration Models
The following text derives some of the most well-known physical problems that lead to second-order ODE models of the type addressed in this book. We consider a simple spring-mass system; thereafter extended with nonlinear spring, damping, and external excitation; a spring-mass system with sliding friction; a simple and a physical (classical) pendulum; and an elastic pendulum.
1.12.1 Oscillating Mass Attached to a Spring
Since only one scalar mathematical quantity, u(t), describes the complete motion, we say that the mechanical system has one degree of freedom (DOF).
Scaling
The physics
The typical physics of the system in Fig. 1.17 can be described as follows. Initially, we displace the body to some position I, say at rest (V = 0). After releasing the body, the spring, which is extended, will act with a force \(-kI\boldsymbol{i}\) and pull the body to the left. This force causes an acceleration and therefore increases the velocity. The body passes the point x = 0, where u = 0, and the spring will then be compressed and act with a force \(-kx\boldsymbol{i}\) against the motion and cause retardation. At some point, the motion stops and the velocity is zero, before the spring force \(-kx\boldsymbol{i}\) has worked long enough to push the body in the positive direction. The result is that the body accelerates back and forth. As long as there are no friction forces to damp the motion, the oscillations will continue forever.
1.12.2 General Mechanical Vibrating System
The most common models for the spring and dashpot are linear: \(f(\dot{u})=b\dot{u}\) with a constant b ≥ 0, and \(s(u)=ku\) for a constant k.
Scaling
1.12.3 A Sliding Mass Attached to a Spring
1.12.4 A Jumping Washing Machine
A washing machine is placed on four springs with efficient dampers. If the machine contains just a few clothes, the circular motion of the machine induces a sinusoidal external force from the floor and the machine will jump up and down if the frequency of the external force is close to the natural frequency of the machine and its spring-damper system.
1.12.5 Motion of a Pendulum
Simple pendulum
The motion is governed by Newton’s 2nd law, so we need to find expressions for the forces and the acceleration. Three forces on the body are considered: an unknown force S from the wire, the gravity force mg, and an air resistance force, \(\frac{1}{2}C_{D}\varrho A|v|v\), hereafter called the drag force, directed against the velocity of the body. Here, \(C_{D}\) is a drag coefficient, \(\varrho\) is the density of air, A is the cross section area of the body, and v is the magnitude of the velocity.

- Wire force: \(-S\boldsymbol{i}_{r}\)
- Gravity force: \(-mg\boldsymbol{j}=mg(-\sin\theta\,\boldsymbol{i}_{\theta}+\cos\theta\,\boldsymbol{i}_{r})\)
- Drag force: \(-\frac{1}{2}C_{D}\varrho A|v|v\,\boldsymbol{i}_{\theta}\)
Equation (1.126) fits the general model used in (1.71) in Sect. 1.10 if we define u = θ, \(f(u^{\prime})=\frac{1}{2}C_{D}\varrho AL|\dot{u}|\dot{u}\), \(s(u)=L^{-1}mg\sin u\), and F = 0. If the body is a sphere with radius R, we can take \(C_{D}=0.4\) and \(A=\pi R^{2}\). Exercise 1.25 asks you to scale the equations and carry out specific simulations with this model.
Physical pendulum
1.12.6 Dynamic Free Body Diagram During Pendulum Motion
Usually one plots the mathematical quantities as functions of time to visualize the solution of ODE models. Exercise 1.25 asks you to do this for the motion of a pendulum in the previous section. However, sometimes it is more instructive to look at other types of visualizations. For example, we have the pendulum and the free body diagram in Figs. 1.20 and 1.21. We may think of these figures as animations in time instead. Especially the free body diagram will show both the motion of the pendulum and the size of the forces during the motion. The present section exemplifies how to make such a dynamic body diagram. Two typical snapshots of free body diagrams are displayed below (the drag force is magnified 5 times to become more visible!).
Dynamic physical sketches, coupled to the numerical solution of differential equations, require a program to produce a sketch of the situation at each time level. Pysketcher ^{18} is such a tool. In fact (and not surprisingly!) Figs. 1.20 and 1.21 were drawn using Pysketcher. The details of the drawings are explained in the Pysketcher tutorial ^{19}. Here, we outline how this type of sketch can be used to create an animated free body diagram during the motion of a pendulum.
Pysketcher is actually a layer of useful abstractions on top of standard plotting packages. This means that we in fact apply Matplotlib to make the animated free body diagram, but instead of dealing with a wealth of detailed Matplotlib commands, we can express the drawing in terms of more high-level objects, e.g., objects for the wire, angle θ, body with mass m, arrows for forces, etc. When the positions of these objects are given through variables, we can just couple those variables to the dynamic solution of our ODE and thereby make a unique drawing for each θ value in a simulation.
Writing the solver
A suitable function for computing (1.128)–(1.131) is listed below.
Drawing the free body diagram
The sketch function below applies Pysketcher objects to build a diagram like that in Fig. 1.21, except that we have removed the rotation point \((x_{0},y_{0})\) and the unit vectors in polar coordinates as these objects are not important for an animated free body diagram.
Making the animated free body diagram
It now remains to couple the simulate and sketch functions. We first run simulate:
The next step is to run through the time levels in the simulation and make a sketch at each level:
The individual sketches are (by the sketch function) saved in files with names tmp_%04d.png. These can be combined to videos using (e.g.) ffmpeg. A complete function animate for running the simulation and creating video files is listed below.
1.12.7 Motion of an Elastic Pendulum
Consider a pendulum as in Fig. 1.20, but this time the wire is elastic. The length of the wire when it is not stretched is L _{0}, while L(t) is the stretched length at time t during the motion.
Stretching the elastic wire a distance \(\Delta L\) gives rise to a spring force \(k\Delta L\) in the opposite direction of the stretching. Let n be a unit normal vector along the wire from the point \(\boldsymbol{r}_{0}=(x_{0},y_{0})\) and in the direction of i _{θ}, see Fig. 1.21 for definition of \((x_{0},y_{0})\) and i _{θ}. Obviously, we have \(\boldsymbol{n}=\boldsymbol{i}_{\theta}\), but in this modeling of an elastic pendulum we do not need polar coordinates. Instead, it is more straightforward to develop the equation in Cartesian coordinates.
The other forces are the gravity and the air resistance, just as in Fig. 1.21. For motion in air we can neglect the added mass and buoyancy effects. The main difference is that we have a model for S in terms of the motion (as soon as we have expressed \(\Delta L\) by r). For simplicity, we drop the air resistance term (but Exercise 1.27 asks you to include it).
Remarks about an elastic vs a non-elastic pendulum
Note that the derivation of the ODEs for an elastic pendulum is more straightforward than for a classical, non-elastic pendulum, since we avoid the details with polar coordinates and instead work with Newton’s second law directly in Cartesian coordinates. The reason we can do this is that the elastic pendulum undergoes a general two-dimensional motion where all the forces are known or expressed as functions of x(t) and y(t), such that we get two ordinary differential equations. The motion of the non-elastic pendulum, on the other hand, is constrained: the body has to move along a circular path, and the force S in the wire is unknown.
The non-elastic pendulum therefore leads to a differential-algebraic equation, i.e., ODEs for x(t) and y(t) combined with an extra constraint \((x-x_{0})^{2}+(y-y_{0})^{2}=L^{2}\) ensuring that the motion takes place along a circular path. The extra constraint (equation) is compensated by an extra unknown force \(-S\boldsymbol{n}\). Differential-algebraic equations are normally hard to solve, especially with pen and paper. Fortunately, for the non-elastic pendulum we can do a trick: in polar coordinates the unknown force S appears only in the radial component of Newton’s second law, while the unknown degree of freedom for describing the motion, the angle \(\theta(t)\), is completely governed by the azimuthal component. This allows us to decouple the unknowns S and θ. But this is a kind of trick and not a widely applicable method. With an elastic pendulum we use straightforward reasoning with Newton’s second law and arrive at a standard ODE problem that (after scaling) is easy to solve on a computer.
Initial conditions
The complete ODE problem
Scaling
The elastic pendulum model can be used to study both an elastic pendulum and a classical, non-elastic pendulum. The latter problem is obtained by letting \(k\rightarrow\infty\). Unfortunately, a serious problem with the ODEs (1.135)–(1.136) is that for large k, we have a very large factor \(k/m\) multiplied by a very small number \(1-L_{0}/L\), since for large k, \(L\approx L_{0}\) (very small deformations of the wire). The product is subject to significant round-off errors for many relevant physical values of the parameters. To circumvent the problem, we introduce a scaling. This will also remove physical parameters from the problem such that we end up with only one dimensionless parameter, closely related to the elasticity of the wire. Simulations can then be done by setting just this dimensionless parameter.
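The cancellation can be demonstrated numerically. The snippet below is an illustration (not the book's code), using only the static elongation \(L=L_{0}+mg/k\); the rewritten form \((mg/k)/L\) is algebraically identical to \(1-L_{0}/L\) but avoids subtracting two nearly equal numbers.

```python
def relative_stretch(k, L0=1.0, m=1.0, g=9.81):
    """Return 1 - L0/L computed naively and in a cancellation-free form,
    for the static elongation L = L0 + m*g/k (illustration only)."""
    L = L0 + m*g/k
    naive = 1 - L0/L        # subtracts two nearly equal numbers for large k
    safe = (m*g/k)/L        # algebraically identical, no cancellation
    return naive, safe

for k in 1e3, 1e9, 1e15:
    naive, safe = relative_stretch(k)
    print('k=%g  relative discrepancy: %.1e' % (k, abs(naive - safe)/safe))
```

For a stiff wire (large k) the naive expression loses several significant digits, which is exactly the round-off problem the scaling removes.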
Remark on the non-elastic limit
1.12.8 Vehicle on a Bumpy Road
1.12.9 Bouncing Ball
A bouncing ball is a ball in free vertical fall until it impacts the ground. During the impact, some kinetic energy is lost, and a new motion upwards with reduced velocity starts. When this motion has been retarded by gravity, a new free fall starts, and the process is repeated. At some point the velocity close to the ground becomes so small that the ball is considered to be finally at rest.
 1.
The ball impacts the ground, recognized by a sufficiently large negative velocity (\(v<-\epsilon_{v}\)). The velocity then changes sign and is reduced by a factor \(C_{R}\), known as the coefficient of restitution. For plotting purposes, one may set h = 0.
 2.
The motion stops, recognized by a sufficiently small velocity (\(|v|<\epsilon_{v}\)) close to the ground.
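A minimal time-stepping sketch of these two events is shown below. It is an illustration with assumed parameter values, not the book's code: impact is detected here when the ball crosses the ground with negative velocity, and the ball is declared at rest when the rebound velocity falls below \(\epsilon_v\).

```python
import numpy as np

def bounce(h0=1.0, C_R=0.8, g=9.81, eps_v=0.3, dt=1e-4, T=6.0):
    """Free fall h' = v, v' = -g with instantaneous impacts v -> -C_R*v
    at the ground, until the rebound velocity falls below eps_v."""
    N = int(round(T/dt))
    t = np.linspace(0, T, N + 1)
    h = np.zeros(N + 1); v = np.zeros(N + 1)
    h[0] = h0
    mode = 'free fall'
    for n in range(N):
        if mode == 'rest':
            h[n+1] = 0.0; v[n+1] = 0.0
            continue
        # Euler-Cromer step for the free fall
        v[n+1] = v[n] - dt*g
        h[n+1] = h[n] + dt*v[n+1]
        if h[n+1] <= 0 and v[n+1] < 0:     # impact
            h[n+1] = 0.0                    # for plotting purposes
            v[n+1] = -C_R*v[n+1]            # reverse and damp the velocity
            if v[n+1] < eps_v:              # rebound too slow: at rest
                mode = 'rest'; v[n+1] = 0.0
    return t, h, v, mode
```

With \(C_R=0.8\) the peak height after each bounce drops by a factor \(C_R^2=0.64\), which gives a quick sanity check on the output.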
1.12.10 TwoBody Gravitational Problem
Consider two astronomical objects A and B that attract each other by gravitational forces. A and B could be two stars in a binary system, a planet orbiting a star, or a moon orbiting a planet. Each object is acted upon by the gravitational force due to the other object. Consider motion in a plane (for simplicity) and let \((x_{A},y_{A})\) and \((x_{B},y_{B})\) be the positions of object A and B, respectively.
The governing equations
Scaling
Solution in a special case: planet orbiting a star
1.12.11 Electric Circuits
1.13 Exercises
Exercise 1.22 (Simulate resonance)
 a)
Figure out how the solver function in vib.py can be called for the scaled ODE (1.122).
 b)
Run γ = 5,1.5,1.1,1 for β = 0.005,0.05,0.2. For each β value, present an image with plots of u(t) for the four γ values.
Filename: resonance.
Exercise 1.23 (Simulate oscillations of a sliding box)
 a)
Plot \(g(u)=\alpha^{-1}\tanh(\alpha u)\) for various values of α. Assume \(u\in[-1,1]\).
 b)
Scale the equations using I as scale for u and \(\sqrt{m/k}\) as time scale.
 c)
Implement the scaled model in b). Run it for some values of the dimensionless parameters.
Filename: sliding_box.
Exercise 1.24 (Simulate a bouncing ball)
Section 1.12.9 presents a model for a bouncing ball. Choose one of the two ODE formulations, (1.151) or (1.152)–(1.153), and simulate the motion of a bouncing ball. Plot h(t). Think about how to plot v(t).
Hint
A naive implementation may get stuck in repeated impacts for large time step sizes. To avoid this situation, one can introduce a state variable that holds the mode of the motion: free fall, impact, or rest. Two consecutive impacts imply that the motion has stopped.
Filename: bouncing_ball.
Exercise 1.25 (Simulate a simple pendulum)
 a)
Scale the model. Set up the dimensionless governing equation for θ and expressions for dimensionless drag and wire forces.
 b)
Write a function for computing θ and the dimensionless drag force and the force in the wire, using the solver function in the vib.py file. Plot these three quantities below each other (in subplots) so the graphs can be compared. Run two cases, first one in the limit of Θ small and no drag, and then a second one with Θ = 40 degrees and α = 0.8.
Filename: simple_pendulum.
Exercise 1.26 (Simulate an elastic pendulum)
 a)
Write a function simulate that can simulate an elastic pendulum using the scaled model. The function should have the following arguments:
To set the total simulation time and the time step, we use our knowledge of the scaled, classical, nonelastic pendulum: \(u^{\prime\prime}+u=0\), with solution \(u=\Theta\cos\bar{t}\). The period of these oscillations is \(P=2\pi\) and the frequency is unity. The time for simulation is taken as num_periods times P. The time step is set as P divided by time_steps_per_period.
The simulate function should return the arrays of x, y, θ, and t, where \(\theta=\tan^{-1}(x/(1-y))\) is the angular displacement of the elastic pendulum corresponding to the position \((x,y)\).
If plot is True, make a plot of \(\bar{y}(\bar{t})\) versus \(\bar{x}(\bar{t})\), i.e., the physical motion of the mass at \((\bar{x},\bar{y})\). Use an equal aspect ratio on the axes such that we get a physically correct picture of the motion. Also make a plot of \(\theta(\bar{t})\), where θ is measured in degrees. If Θ < 10 degrees, add a plot that compares the solutions of the scaled, classical, non-elastic pendulum and the elastic pendulum (\(\theta(t)\)).
Although the mathematics here employs a bar over scaled quantities, the code should feature plain names x for \(\bar{x}\), y for \(\bar{y}\), and t for \(\bar{t}\) (rather than x_bar, etc.). These variable names make the code easier to read and compare with the mathematics.
Hint 1
Equal aspect ratio is set by plt.gca().set_aspect('equal') in Matplotlib (import matplotlib.pyplot as plt) and in SciTools by the command plt.plot(..., daspect=[1,1,1], daspectmode='equal') (provided you have done import scitools.std as plt).
Hint 2
If you want to use Odespy to solve the equations, order the ODEs like \(\dot{\bar{x}},\bar{x},\dot{\bar{y}},\bar{y}\) such that odespy.EulerCromer can be applied.
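For concreteness, here is a sketch of simulate with a hand-written Euler-Cromer loop (no Odespy). The right-hand sides and initial conditions encode an assumed form of the scaled model: \(\ddot{\bar{x}}=-\frac{\beta}{1-\beta}(1-\beta/\bar{L})\bar{x}\), \(\ddot{\bar{y}}=\frac{\beta}{1-\beta}(1-\beta/\bar{L})(1-\bar{y})-\beta\) with \(\bar{L}=\sqrt{\bar{x}^{2}+(1-\bar{y})^{2}}\), \(\bar{x}(0)=(1+\epsilon)\sin\Theta\), and \(\bar{y}(0)=1-(1+\epsilon)\cos\Theta\); verify these against your own scaling before relying on the code.

```python
import numpy as np

def simulate(beta=0.9, Theta=30, epsilon=0, num_periods=6,
             time_steps_per_period=60, plot=True):
    """Euler-Cromer scheme for the (assumed) scaled elastic pendulum.
    Theta is the initial angle in degrees; beta in (0,1) measures elasticity."""
    Theta_r = np.radians(Theta)
    P = 2*np.pi                         # period of u'' + u = 0
    dt = P/time_steps_per_period
    N = num_periods*time_steps_per_period
    t = np.linspace(0, N*dt, N + 1)
    x = np.zeros(N + 1)
    y = np.zeros(N + 1)
    x[0] = (1 + epsilon)*np.sin(Theta_r)      # assumed initial stretch
    y[0] = 1 - (1 + epsilon)*np.cos(Theta_r)
    vx = vy = 0.0
    for n in range(N):
        L = np.sqrt(x[n]**2 + (1 - y[n])**2)
        c = beta/(1 - beta)*(1 - beta/L)      # common factor in both RHS
        vx += dt*(-c*x[n])
        vy += dt*(c*(1 - y[n]) - beta)
        x[n+1] = x[n] + dt*vx
        y[n+1] = y[n] + dt*vy
    theta = np.degrees(np.arctan2(x, 1 - y))  # angular displacement
    if plot:
        import matplotlib.pyplot as plt
        plt.plot(x, y)
        plt.gca().set_aspect('equal')
        plt.show()
    return x, y, theta, t
```

With Theta=0 and epsilon=0 the computed motion should be identically zero, which is exactly the test requested below.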
 b)
Write a test function for testing that Θ = 0 and ϵ = 0 gives x = y = 0 for all times.
 c)
Write another test function for checking that the pure vertical motion of the elastic pendulum is correct. Start with simplifying the ODEs for pure vertical motion and show that \(\bar{y}(\bar{t})\) fulfills a vibration equation with frequency \(\sqrt{\beta/(1-\beta)}\). Set up the exact solution.
Write a test function that uses this special case to verify the simulate function. There will be numerical approximation errors present in the results from simulate so you have to believe in correct results and set a (low) tolerance that corresponds to the computed maximum error. Use a small \(\Delta t\) to obtain a small numerical approximation error.
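A sketch of such a test is shown below. It assumes the scaled model \(\ddot{\bar{y}}=\frac{\beta}{1-\beta}(1-\beta/\bar{L})(1-\bar{y})-\beta\) with \(\bar{L}=1-\bar{y}\) for pure vertical motion (check against your own derivation); for \(\bar{x}=0\) the nonlinear terms cancel exactly, leaving \(\ddot{\bar{y}}=-\frac{\beta}{1-\beta}\bar{y}\), so only the discretization error separates the numerical and exact solutions.

```python
import numpy as np

def vertical_motion_error(beta=0.9, I=0.1, steps_per_period=2000):
    """Solve the (assumed) scaled elastic pendulum restricted to x = 0
    with Euler-Cromer over one period and return the max deviation from
    the exact solution y = I*cos(omega*t), omega = sqrt(beta/(1-beta))."""
    omega = np.sqrt(beta/(1 - beta))
    P = 2*np.pi/omega
    dt = P/steps_per_period
    N = steps_per_period
    t = np.linspace(0, N*dt, N + 1)
    y = np.zeros(N + 1)
    y[0] = I
    v = 0.0
    for n in range(N):
        L = 1 - y[n]                       # wire length when x = 0 (y < 1)
        a = beta/(1 - beta)*(1 - beta/L)*(1 - y[n]) - beta   # full RHS
        v += dt*a
        y[n+1] = y[n] + dt*v
    return np.abs(y - I*np.cos(omega*t)).max()

def test_vertical_motion():
    # Tolerance chosen to match the Euler-Cromer error at this resolution.
    assert vertical_motion_error() < 1e-3
```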
 d)
Make a function demo(beta, Theta) for simulating an elastic pendulum with a given β parameter and initial angle Θ. Use 600 time steps per period to get very accurate results, and simulate for 3 periods.
Filename: elastic_pendulum.
Exercise 1.27 (Simulate an elastic pendulum with air resistance)
This is a continuation of Exercise 1.26. Air resistance on the body with mass m can be modeled by the force \(-\frac{1}{2}\varrho C_{D}A|\boldsymbol{v}|\boldsymbol{v}\), where \(C_{D}\) is a drag coefficient (0.2 for a sphere), \(\varrho\) is the density of air (1.2 \(\hbox{kg}\,\hbox{m}^{-3}\)), A is the cross section area (\(A=\pi R^{2}\) for a sphere, where R is the radius), and \(\boldsymbol{v}\) is the velocity of the body. Include air resistance in the original model, scale the model, write a function simulate_drag that is a copy of the simulate function from Exercise 1.26, but with the new ODEs included, and show plots of how air resistance influences the motion.
Filename: elastic_pendulum_drag.
Remarks
Test functions are challenging to construct for the problem with air resistance. You can reuse the tests from Exercise 1.26 for simulate_drag, but these tests do not verify the new terms arising from air resistance.
Exercise 1.28 (Implement the PEFRL algorithm)
 a)
It is easy to show that x(t) and y(t) go like sine and cosine functions. Use this idea to derive the exact solution.
 b)
One believes that a planet may orbit a star for billions of years. We are now interested in how accurate our methods actually need to be for such calculations. A first task is to determine what the time interval of interest is in scaled units. Take the earth and sun as typical objects and find the characteristic time used in the scaling of the equations (\(t_{c}=\sqrt{L^{3}/(mG)}\)), where m is the mass of the sun, L is the distance between the sun and the earth, and G is the gravitational constant. Find the scaled time interval corresponding to one billion years.
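The arithmetic can be carried out directly; the numerical values below for G, the solar mass, and the sun-earth distance are standard physical constants, not taken from the text. Since one orbit lasts one year and corresponds to \(2\pi\) scaled time units, the scaled interval comes out near \(2\pi\times 10^{9}\).

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
m = 1.989e30       # mass of the sun [kg]
L = 1.496e11       # sun-earth distance, one astronomical unit [m]

t_c = math.sqrt(L**3/(m*G))          # characteristic time [s]
year = 365.25*24*3600.0              # one year [s]
scaled_year = year/t_c               # one orbit in scaled units (about 2*pi)
scaled_interval = 1e9*scaled_year    # one billion years in scaled time

print('t_c = %.3g s, one billion years = %.3g scaled units'
      % (t_c, scaled_interval))
```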
 c)
Solve the equations using 4th-order Runge-Kutta and the Euler-Cromer methods. You may benefit from applying Odespy for this purpose. With each solver, simulate 10,000 orbits and print the maximum position error and CPU time as a function of time step. Note that the maximum position error does not necessarily occur at the end of the simulation. The position error achieved with each solver will depend heavily on the size of the time step. Let the time step correspond to 200, 400, 800 and 1600 steps per orbit, respectively. Are the results as expected? Explain briefly. When you develop your program, keep in mind that it will be extended with an implementation of the other algorithms (as requested in d) and e) later) and experiments with this algorithm as well.
 d)
Implement a solver based on the PEFRL method from Sect. 1.10.11. Verify its 4th-order convergence using the equation \(u^{\prime\prime}+u=0\).
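A sketch of the PEFRL update for a scalar problem \(u^{\prime\prime}=f(u)\) is given below, with the standard coefficients from Omelyan, Mryglod and Folk (2002). The convergence check measures the error in the phase-space point \((u,u^{\prime})\) after one period of \(u^{\prime\prime}+u=0\); halving the step size should reduce it by about \(2^{4}\).

```python
import numpy as np

# PEFRL coefficients (Omelyan, Mryglod and Folk, 2002)
xi  =  0.1786178958448091
lam = -0.2123418310626054
chi = -0.06626458266981849

def pefrl(f, u0, v0, T, n):
    """Integrate u'' = f(u) from (u0, v0) over [0, T] in n PEFRL steps."""
    h = T/n
    u, v = float(u0), float(v0)
    for _ in range(n):
        u += xi*h*v
        v += (1 - 2*lam)*(h/2)*f(u)
        u += chi*h*v
        v += lam*h*f(u)
        u += (1 - 2*(chi + xi))*h*v
        v += lam*h*f(u)
        u += chi*h*v
        v += (1 - 2*lam)*(h/2)*f(u)
        u += xi*h*v
    return u, v

# Convergence check on u'' + u = 0, u(0) = 1, u'(0) = 0, over one period
f = lambda u: -u
T = 2*np.pi
def err(n):
    u, v = pefrl(f, 1.0, 0.0, T, n)
    return np.hypot(u - 1.0, v)   # phase-space error vs exact (1, 0)
e1, e2 = err(100), err(200)
rate = np.log2(e1/e2)             # observed convergence rate, near 4
```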
 e)
The simulations done previously with 4th-order Runge-Kutta and Euler-Cromer are now to be repeated with the PEFRL solver, so the code must be extended accordingly. Then run the simulations and comment on the performance of PEFRL compared to the other two.
 f)
Use the PEFRL solver to simulate 100,000 orbits with a fixed time step corresponding to 1600 steps per period. Record the maximum error within each subsequent group of 1000 orbits. Plot these errors and fit (least squares) a mathematical function to the data. Print also the total CPU time spent for all 100,000 orbits.
Now, predict the error and required CPU time for a simulation of 1 billion years (orbits). Is it feasible on today’s computers to simulate the planetary motion for one billion years?
Filename: vib_PEFRL.
Remarks
This exercise investigates whether it is feasible to predict planetary motion for the life time of a solar system.
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.