
Arbitrarily Accurate Spline Based Approximations for the Hyperbolic Tangent Function and Applications

  • Original Paper
  • Published in: International Journal of Applied and Computational Mathematics

Abstract

Two-point spline based approximations for \(\mathrm{tanh}(x)\), valid over the interval \([0,\infty)\), which can be made arbitrarily accurate, which converge uniformly and which have better convergence than existing series, are detailed. Explicit formulas for the errors in the approximations are specified. Applications are detailed and these include, first, upper and lower bound functions for \(\mathrm{tanh}(x)\) with arbitrary accuracy. Second, a rapidly convergent series for Catalan's constant. Third, approximations for \(\mathrm{sech}(x)\), \({\mathrm{sech}}\left(x\right)^{2}\), \({\mathrm{tanh}\left(x\right)}^{2}\), \(\mathrm{ln}[\mathrm{cosh}(x)]\) and \(\mathrm{ln}\left[\mathrm{sech}\left(x\right)\right]\), along with approximations, valid with arbitrary accuracy over the positive real line, for the integrals, without known closed forms, of \(\mathrm{ln}\left[\mathrm{cosh}\left(x\right)\right]\), \(\mathrm{ln}\left[\mathrm{sech}\left(x\right)\right]\), \(\mathrm{cosh}\left({k}_{o}x\right)\mathrm{tanh}\left(x\right)\) and \(\mathrm{sech}{\left({k}_{o}x\right)}^{2}\mathrm{tanh}\left(x\right)\). Fourth, approximations for the integral of \(x\mathrm{tanh}\left(x\right)\) which have better convergence properties than the standard series approximation. Fifth, an approximation to the response of a damped second order system to a hyperbolic tangent input signal. Sixth, simple approximations for the Laplace transform of the hyperbolic tangent function and approximations for the response of a linear filter to a hyperbolic tangent input signal. Seventh, a structure for a comb filter that extracts the odd harmonics, and suppresses the even harmonics, of a signal. Finally, explicit approximate analytical expressions for the output power and harmonic distortion arising, for the sinusoidal case, from a hyperbolic tangent nonlinearity.


Data availability

Not applicable.

Code availability

Not applicable.

References

  1. Bagul, Y. J.: New inequalities involving circular, inverse circular, hyperbolic, inverse hyperbolic and exponential functions, Advances in Inequalities and Applications, Article ID 5, (2018). https://doi.org/10.28919/aia/3556.

  2. Bagul, Y.J., Chesneau, C.: Some new simple inequalities involving exponential, trigonometric and hyperbolic functions. CUBO 21(1), 21–35 (2019). https://doi.org/10.4067/S0719-06462019000100021

  3. Barnett, J.H.: Enter, stage center: The early drama of the hyperbolic functions. Math. Mag. 77(1), 15–30 (2004)

  4. Basokur, A.T.: Designing frequency selective filters via the use of hyperbolic tangent functions. Bullet. Earth Sci. Appl. Res. Centre Hacettepe Univ. 32(1), 69–88 (2011)

  5. Bercu, G., Wu, S.: Refinements of certain hyperbolic inequalities via the Padé approximation method. J. Nonlinear Sci. Appl. 9, 5011–5020, (2016)

  6. Bhayo, B.A., Klén, R., Sándor, J.: New trigonometric and hyperbolic inequalities. Miskolc Math. Notes 18(1), 125–137 (2017). https://doi.org/10.18514/MMN.2017.1560

  7. Brandt, M., Bitzer, J.: Hum removal filters: Overview and analysis, 132nd Convention of the Audio Engineering Society, April 26–29, 2012, Budapest, Hungary, 6 pages, (2012)

  8. Champeney, D.C.: A Handbook of Fourier Theorems. Cambridge University Press, Cambridge (1987)

  9. Erdélyi, A. (ed.): Table of Integral Transforms, vol. 1, McGraw Hill, (1954).

  10. Gilbert, B.: The multi-tanh principle: A tutorial overview. IEEE J. Solid-State Circuits 33, 2–17 (1998)

  11. Gradshteyn, I. S., Ryzhik, I. M.: Tables of Integrals, Series and Products, Edited by Jeffrey, A. & Zwillinger, D., 7th ed. Academic Press, (2007)

  12. Howard, R.M.: Dual Taylor series, spline based function and integral approximation and applications. Math. Comput. Appl. 24, 35 (2019). https://doi.org/10.3390/mca24020035

  13. Johannesson, T., Distner, M.: Dynamic loading of synchronous belts. J. Mech. Des. 124, 79–85 (2002). https://doi.org/10.1115/1.1426088

  14. Kalman, B. L., Kwasny, S. C.: Why tanh: choosing a sigmoidal function. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN), Baltimore, MD, USA, vol. 4, pp. 578–581, (1992). https://doi.org/10.1109/IJCNN.1992.227257.

  15. Malfliet, W., Hereman, W.: The tanh method: I Exact solutions of nonlinear evolution and wave equations. Phys. Scripta 54, 563–568 (1996)

  16. Malfliet, W.: The tanh method: a tool for solving certain classes of nonlinear evolution and wave equations. J. Comput. Appl. Math. 164–165, 529–541 (2004). https://doi.org/10.1016/S0377-0427(03)00645-9

  17. Marichev, O., Sondow, J., Weisstein, E. W.: Catalan's Constant, eqn 33, From MathWorld--A Wolfram Web Resource. https://mathworld.wolfram.com/CatalansConstant.html. Accessed 14 Sept. 2020

  18. Meher, J., Meher, P., Dash, G.: Improved comb filter based approach for effective prediction of protein coding regions in DNA sequences. J. Signal Inf. Process. 2, 88–99 (2011). https://doi.org/10.4236/jsip.2011.22012

  19. Nwankpa, C., Ijomah, W., Gachagan, A., Marshall, S.: Activation Functions: Comparison of trends in Practice and Research for Deep Learning, http://arxiv.org/abs/1602.07360 (2018)

  20. Özbal, S., Südor, H. C., Keskin, A. Ü.: Chaotic dynamics of a jerk function with hyperbolic tangent nonlinearity, 2018 Medical Technologies National Congress (TIPTEKNO), Magusa, pp. 1–4, (2018). https://doi.org/10.1109/TIPTEKNO.2018.8596866

  21. Paris, R. B.: Struve and Related Functions, ch. 11, NIST Handbook of Mathematical Functions, Editors: Olver, F. W., Lozier, D. W., Boisvert, R. F., & Clark, C. W., National Institute of Standards and Technology and Cambridge University Press, (2010)

  22. Paul, S.K., Choubey, C.K., Tiwari, G.: Low power analog comb filter for biomedical applications. Analog Int. Circ. Sig. Process 97, 371–386 (2018). https://doi.org/10.1007/s10470-018-1329-8

  23. Roy, R., Olver, F. W. J.: Elementary functions, ch. 4, NIST Handbook of Mathematical Functions, Editors: Olver, F. W., Lozier, D. W., Boisvert, R. F., Clark, C. W., National Institute of Standards and Technology and Cambridge University Press, (2010)

  24. Schuck Jr., A., Bodman, B. E. J.: Audio nonlinear modeling through hyperbolic tangent functionals. In: Proceedings of the 19th International Conference on Digital Audio Effects (DAFx-16), Brno, Czech Republic, September 5–9, 2016, pp. DAFX-103 to DAFX-108, (2016)

  25. Spiegel, M. R., Lipshutz, S., Liu, J.: Mathematical Handbook of Formulas and Tables, Third Ed., McGraw Hill, (2009)

  26. Weisstein, E. W.: Hyperbolic Cotangent, From MathWorld--A Wolfram Web Resource. https://mathworld.wolfram.com/HyperbolicCotangent.html. Accessed September 14, 2020

  27. Zhu, L., Sun, J.: Six new Redheffer-type inequalities for circular and hyperbolic functions. Comput. Math. Appl. 56, 522–529 (2008)

Acknowledgements

The author is pleased to acknowledge the support of Prof. A. Zoubir, SPG, Technische Universität Darmstadt, Darmstadt, Germany, who hosted a visit where the research underpinning this paper, along with the writing of the paper, was completed.

Funding

This research was not supported by any external funding.

Author information

Contributions

RH is the sole contributor to the manuscript.

Corresponding author

Correspondence to Roy M. Howard.

Ethics declarations

Conflict of interest

The author declares that he has no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Proof of Theorem 3.1

The approximations for the hyperbolic tangent function arise from utilizing the spline based approximation, defined by Eq. (30), to approximate the function

$${f}_{1}\left({x}_{1}\right)=\frac{{x}_{1}}{2-{x}_{1}}\;\;{ x}_{1}\in [\mathrm{0,1}]$$
(175)

over the interval \([\mathrm{0,1}]\). The \(n\mathrm{th}\) order approximation is

$$\begin{aligned}{f}_{1,n}\left({x}_{1}\right)&={\left(1-{x}_{1}\right)}^{n+1} \sum_{k=0}^{n}\frac{{x}_{1}^{k}}{k!}\cdot {f}_{1}^{(k)}(0)\cdot \left[\sum_{i=0}^{n-k}\frac{(n+i)!}{i!\cdot n!}\cdot {x}_{1}^{i}\right]\\ &\quad+\,{x}_{1}^{n+1}\sum_{k=0}^{n}\frac{(-1)^{k}{\left(1-{x}_{1}\right)}^{k}}{k!}\cdot {f}_{1}^{(k)}(1)\cdot \left[\sum_{i=0}^{n-k}\frac{(n+i)!}{i!\cdot n!}\cdot {\left(1-{x}_{1}\right)}^{i}\right]\end{aligned}$$
(176)

As

$${f}_{1}^{(k)}\left({x}_{1}\right)=\frac{2\cdot k!}{{\left(2-{x}_{1}\right)}^{k+1}} \quad k\in \{1,2,\ldots \}$$
(177)

which implies \({f}_{1}^{(k)}(0)=k!/{2}^{k}\) and \({f}_{1}^{(k)}(1)=2\cdot k!\), it follows that

$$\begin{aligned}{f}_{1,n}\left({x}_{1}\right)&={\left(1-{x}_{1}\right)}^{n+1}\sum_{k=0}^{n}\frac{{x}_{1}^{k}}{{2}^{k}}\cdot \left[\sum_{i=0}^{n-k}\frac{(n+i)!}{i!\cdot n!}\cdot {x}_{1}^{i}\right]\\ &\quad +\,2{x}_{1}^{n+1}\sum_{k=0}^{n}(-1)^{k}{\left(1-{x}_{1}\right)}^{k}\left[\sum_{i=0}^{n-k}\frac{(n+i)!}{i!\cdot n!}\cdot {\left(1-{x}_{1}\right)}^{i}\right]\end{aligned}$$
(178)

Explicit approximations, of orders zero to six, are:

$${f}_{\mathrm{1,0}}\left({x}_{1}\right)={x}_{1}\qquad { f}_{\mathrm{1,1}}\left({x}_{1}\right)=\frac{{x}_{1}}{2}+\frac{{x}_{1}^{3}}{2}$$
(179)
$${f}_{\mathrm{1,2}}\left({x}_{1}\right)=\frac{{x}_{1}}{2}+\frac{{x}_{1}^{2}}{4}+\frac{{x}_{1}^{3}}{4}-\frac{{x}_{1}^{4}}{4}+\frac{{x}_{1}^{5}}{4}\;\;{ f}_{\mathrm{1,3}}\left({x}_{1}\right)=\frac{{x}_{1}}{2}+\frac{{x}_{1}^{2}}{4}+\frac{{x}_{1}^{3}}{8}+\frac{{x}_{1}^{5}}{4}-\frac{{x}_{1}^{6}}{4}+\frac{{x}_{1}^{7}}{8}$$
(180)
$${f}_{\mathrm{1,4}}\left({x}_{1}\right)=\frac{{x}_{1}}{2}+\frac{{x}_{1}^{2}}{4}+\frac{{x}_{1}^{3}}{8}+\frac{{x}_{1}^{4}}{16}+\frac{{x}_{1}^{5}}{16}-\frac{{x}_{1}^{6}}{8}+\frac{{x}_{1}^{7}}{4}-\frac{3{x}_{1}^{8}}{16}+\frac{{x}_{1}^{9}}{16}$$
(181)
$${f}_{\mathrm{1,5}}\left({x}_{1}\right)=\frac{{x}_{1}}{2}+\frac{{x}_{1}^{2}}{4}+\frac{{x}_{1}^{3}}{8}+\frac{{x}_{1}^{4}}{16}+\frac{{x}_{1}^{5}}{32}+\frac{3{x}_{1}^{7}}{32}-\frac{3{x}_{1}^{8}}{16}+\frac{7{x}_{1}^{9}}{32}-\frac{{x}_{1}^{10}}{8}+\frac{{x}_{1}^{11}}{32}$$
(182)
$$\begin{aligned}{f}_{\mathrm{1,6}}\left({x}_{1}\right)&=\frac{{x}_{1}}{2}+\frac{{x}_{1}^{2}}{4}+\frac{{x}_{1}^{3}}{8}+\frac{{x}_{1}^{4}}{16}+\frac{{x}_{1}^{5}}{32}+\frac{{x}_{1}^{6}}{64}+\frac{{x}_{1}^{7}}{64}-\frac{3{x}_{1}^{8}}{64}+\frac{9{x}_{1}^{9}}{64}-\frac{13{x}_{1}^{10}}{64}\\ &\quad+\,\frac{11{x}_{1}^{11}}{64}-\frac{5{x}_{1}^{12}}{64}+\frac{{x}_{1}^{13}}{64}\end{aligned}$$
(183)

The associated approximations for \(\mathrm{tanh}(x)\), as stated in Eqs. (37) to (43), then arise from the result \({f}_{n}(x)={f}_{1,n}\left[{h}^{-1}(x)\right]\) with \({h}^{-1}(x)=1-\mathrm{exp}(-2x)\).

To prove the general form for the coefficients, consider the coefficient array associated with the approximations for \(\mathrm{tanh}(x)\), as stated in Eqs. (37) to (43) for \({c}_{n,k}\), \(n\le k\le 2n+1\):

(184)

The general definition for the coefficients then follows according to

$$\begin{aligned} & {c}_{0,0}=1,\;{c}_{0,1}=-1 \qquad {c}_{1,0}=1,\;{c}_{1,1}=-2,\;{c}_{1,2}=\frac{3}{2},\;{c}_{1,3}=\frac{-1}{2}\\ & {c}_{n,0}=1 \qquad {c}_{n,k}=2(-1)^{k} \quad 1\le k\le n\\ & {c}_{n,k}=\frac{{c}_{n-1,k-2}-{c}_{n-1,k-1}}{2} \quad n+1\le k\le 2n\\ & {c}_{n,2n+1}=\frac{-1}{{2}^{n}}\end{aligned}$$
(185)
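
For numerical work it is convenient to implement Eq. (185) directly. The following Python sketch (the function names are illustrative, not from the text) generates the coefficients \({c}_{n,k}\) and evaluates the associated approximation \({f}_{n}(x)=\sum_{k=0}^{2n+1}{c}_{n,k}{e}^{-2kx}\) of Theorem 3.1:

```python
import numpy as np

def tanh_spline_coeffs(n):
    """Coefficients c_{n,k}, k = 0..2n+1, generated via Eq. (185)."""
    if n == 0:
        return np.array([1.0, -1.0])
    if n == 1:
        return np.array([1.0, -2.0, 1.5, -0.5])
    c_prev = tanh_spline_coeffs(n - 1)
    c = np.zeros(2 * n + 2)
    c[0] = 1.0
    for k in range(1, n + 1):
        c[k] = 2.0 * (-1.0) ** k                      # c_{n,k} = 2(-1)^k, 1 <= k <= n
    for k in range(n + 1, 2 * n + 1):
        c[k] = (c_prev[k - 2] - c_prev[k - 1]) / 2.0  # central recursion
    c[2 * n + 1] = -1.0 / 2 ** n
    return c

def f_n(x, n):
    """nth order approximation to tanh(x), x >= 0 (Theorem 3.1)."""
    c = tanh_spline_coeffs(n)
    k = np.arange(len(c))
    return float(np.sum(c * np.exp(-2.0 * k * x)))

for x in (0.25, 1.0, 3.0):
    print(x, np.tanh(x), f_n(x, 4), abs(np.tanh(x) - f_n(x, 4)))
```

For \(n=4\) the observed error is bounded by \({e}^{-10x}/16\), consistent with the error bound arising from Eq. (44).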

Appendix 2: Proof of Theorem 3.2

Consider the error function

$${e}_{1,n}\left({x}_{1}\right)=\mathrm{tanh }\left[h\left({x}_{1}\right)\right]-{f}_{1,n}\left({x}_{1}\right)\quad {x}_{1}\in [\mathrm{0,1}]$$
(186)

associated with the approximations for \({f}_{1}\left({x}_{1}\right)=\mathrm{tanh }\left[h\left({x}_{1}\right)\right]=\frac{{x}_{1}}{2-{x}_{1}}\), where \({x}_{1}={h}^{-1}(x)=1-\mathrm{exp}(-2x)\). The \(n\mathrm{th}\) order approximation, \({f}_{1,n}\), is specified by Eq. (178). Simplification (e.g. via use of Mathematica) leads to

$${e}_{1,n}\left({x}_{1}\right)=\frac{(-1{)}^{n+1}{\left(1-{x}_{1}\right)}^{n+1}{x}_{1}^{n+1}}{{2}^{n}\left(2-{x}_{1}\right)}\quad{ x}_{1}\in [\mathrm{0,1}]$$
(187)

Explicit expressions, valid for \({x}_{1}\in [\mathrm{0,1}]\), are:

$${e}_{\mathrm{1,0}}\left({x}_{1}\right)=\frac{-\left(1-{x}_{1}\right){x}_{1}}{2-{x}_{1}}\qquad{ e}_{\mathrm{1,1}}\left({x}_{1}\right)=\frac{{\left(1-{x}_{1}\right)}^{2}{x}_{1}^{2}}{2\left(2-{x}_{1}\right)}$$
(188)
$${e}_{\mathrm{1,2}}\left({x}_{1}\right)=\frac{-{\left(1-{x}_{1}\right)}^{3}{x}_{1}^{3}}{4\left(2-{x}_{1}\right)}\qquad{ e}_{\mathrm{1,3}}\left({x}_{1}\right)=\frac{{\left(1-{x}_{1}\right)}^{4}{x}_{1}^{4}}{8\left(2-{x}_{1}\right)}$$
(189)

The corresponding errors in the approximations for tanh(x) follow according to Lemma 1:

$$\begin{aligned}{e}_{n}(x)&=\text{tanh} (x)-{f}_{n}(x)=\text{tanh} \left[h\left({x}_{1}\right)\right]-{f}_{n}\left[h\left({x}_{1}\right)\right]\\ &=\text{tanh} \left[h\left({x}_{1}\right)\right]-{f}_{1,n}\left({x}_{1}\right)={e}_{1,n}\left({x}_{1}\right)\end{aligned}$$
(190)

with the transformation of \(x=h\left({x}_{1}\right){, x}_{1}={h}^{-1}(x)\). Substitution and simplification yields

$${e}_{n}(x)=\frac{(-1{)}^{n+1}{e}^{-2(n+1)x}{\left[1-{e}^{-2x}\right]}^{n+1}}{{2}^{n}\left[1+{e}^{-2x}\right]}$$
(191)

which is the required result.
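
As an illustrative numerical check, the closed form error of Eq. (191) can be compared with \(\mathrm{tanh}(x)-{f}_{n}(x)\); the hard-coded coefficients in the sketch below are the \(n=2\) values arising from Eq. (185):

```python
import numpy as np

C2 = np.array([1.0, -2.0, 2.0, -1.75, 1.0, -0.25])   # c_{2,k}, k = 0..5, from Eq. (185)
k = np.arange(6)

def f2(x):
    return float(np.sum(C2 * np.exp(-2.0 * k * x)))

def e2(x):
    # Eq. (191) with n = 2
    u = np.exp(-2.0 * x)
    return -u ** 3 * (1.0 - u) ** 3 / (4.0 * (1.0 + u))

for x in (0.1, 0.5, 1.0, 2.0):
    print(x, np.tanh(x) - f2(x), e2(x))   # the two error columns agree
```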

Appendix 3: Proof of Theorem 4.1

First, convergence of the approximation stated in Theorem 3.1 implies

$$\underset{n \to \infty }{\mathrm{lim}}\; x{e}^{-x}\left[\sum_{k=0}^{2n+1}{c}_{n,k}{e}^{-2kx}\right]=x{e}^{-x}\mathrm{tanh}\left(x\right) \quad x\in [0,\infty )$$
(192)

and, consistent with Theorem 3.3, the convergence is again uniform as \(x{e}^{-x}\) is bounded above for \(x\in [0,\infty )\). It then follows, consistent with Lemma 3, that function convergence leads to integral convergence according to

$$\begin{aligned}\underset{n \to \infty }{\mathrm{lim}}\left[\int\limits_{0}^{x}\uplambda {e}^{-\uplambda }\left[\stackrel{2n + 1}{\sum_{k = 0}}{c}_{n,k}{e}^{-2k\uplambda }\right]d\uplambda \right]&=\left[\int\limits_{0}^{x} \underset{n \to \infty }{\mathrm{lim}}\uplambda {e}^{-\uplambda }\left[\stackrel{2n +1 }{\sum_{k = 0}}{c}_{n,k}{e}^{-2k\uplambda }\right]d\uplambda \right]\\&=\int\limits_{0}^{x}\uplambda {e}^{-\uplambda }\mathrm{tanh }\left(\uplambda \right)d\uplambda \end{aligned}$$
(193)

As

$$\int\limits_{0}^{\infty}\uplambda {e}^{-\uplambda }{e}^{-2k\uplambda } d\uplambda =\frac{1}{(2k+1{)}^{2}}$$
(194)

the required result follows, on letting \(x\to \infty \) and using the definition specified in Eq. (58):

$$\underset{n \to \infty }{\mathrm{lim}} \sum_{k=0}^{2n+1}\frac{{c}_{n,k}}{(2k+1)^{2}}=\int\limits_{0}^{\infty }\uplambda {e}^{-\uplambda }\mathrm{tanh}(\uplambda )\,d\uplambda =2G-1$$
(195)
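
A short sketch (illustrative, using the \(n=2\) coefficients from Eq. (185)) showing the rapid convergence of the series in Eq. (195) to \(2G-1\):

```python
import numpy as np

C2 = np.array([1.0, -2.0, 2.0, -1.75, 1.0, -0.25])    # c_{2,k} from Eq. (185)
k = np.arange(len(C2))

series = np.sum(C2 / (2 * k + 1) ** 2)                # Eq. (195), n = 2
G = 0.9159655941772190                                # Catalan's constant
print(series, 2 * G - 1)                              # ~0.8323 vs 0.8319...
```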

Appendix 4: Proof of Theorem 4.2

These results follows from Theorem 3.1 and the result \(\frac{\mathrm{d}}{\mathrm{d}x}\mathrm{tanh }(x)=\mathrm{sech}(x{)}^{2}\). Consider Eq. (44)

$$\mathrm{tanh }(x)=\stackrel{2n + 1}{\sum_{k = 0}}{c}_{n,k}{e}^{-2kx}+\frac{(-1{)}^{n+1}{e}^{-2(n+1)x}{\left[1-{e}^{-2x}\right]}^{n+1}}{{2}^{n}\left[1+{e}^{-2x}\right]}$$
(196)

for the case of \(x>0\). Differentiation yields

$$\mathrm{sech} (x{)}^{2}=-2\stackrel{2n + 1}{\sum_{ k = 1}} k{c}_{n,k}{e}^{-2kx}+{e}_{2,n}(x)$$
(197)

where the error in the approximation is

$${e}_{2,n}(x)=\frac{(-1{)}^{n+1}{e}^{-2(n+1)x}{\left[1-{e}^{-2x}\right]}^{n}}{{2}^{n-1}{\left[1+{e}^{-2x}\right]}^{2}}\cdot \left[-n-1+(n+2){e}^{-2x}+(2n+1){e}^{-4x}\right]$$
(198)

This error has the following bound

$$|{e}_{2,n}(x)|\le\frac{1}{{2}^{n-1}}\cdot [-n-1+(n+2)+(2n+1)]=\frac{n+1}{{2}^{n-2}}\quad x >0$$
(199)

which clearly converges to zero as the order of approximation increases. The convergence is uniform over \((0,\infty )\).

For the case of \(x=0\), \({e}_{2,n}(0)=0\) and the stated approximation for \({\mathrm{sech}\left(x\right)}^{2}\) is valid at this point. This result is consistent with the summation

$$\sum_{k = 1}^{2n+1} k{c}_{n,k}=\frac{-1}{2}\quad n\in \{1, 2,\ldots \}$$
(200)

and this result can readily be confirmed.
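For example, the identity of Eq. (200) and the \(\mathrm{sech}{(x)}^{2}\) approximation of Eq. (197) can be checked with the \(n=2\) coefficients (an illustrative sketch):

```python
import numpy as np

C2 = np.array([1.0, -2.0, 2.0, -1.75, 1.0, -0.25])    # c_{2,k} from Eq. (185)
k = np.arange(6)

print(np.sum(k * C2))                                 # Eq. (200): -0.5

def sech2_approx(x):
    # Eq. (197) with the error term e_{2,n}(x) dropped
    return -2.0 * float(np.sum(k[1:] * C2[1:] * np.exp(-2.0 * k[1:] * x)))

for x in (0.5, 1.0, 2.0):
    print(x, 1.0 / np.cosh(x) ** 2, sech2_approx(x))
```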

Appendix 5: Proof of Theorem 4.3

First, \(\mathrm{tanh}(x)\), and the approximations for \(\mathrm{tanh}(x)\) as stated in Theorem 3.1, satisfy the conditions specified in Lemma 3. Thus, function convergence leads to integral convergence, i.e.

$$\underset{n\to \infty }{\mathrm{lim}}{ g}_{n}\left(x\right)=\underset{n\to \infty }{\mathrm{lim}} \int\limits_{0}^{x}{f}_{n}(\uplambda )d\uplambda =\int\limits_{0}^{x}\mathrm{tanh }(\uplambda )d\uplambda =\mathrm{ln }[\mathrm{cosh}(x)]$$
(201)

Direct integration of \({f}_{n}\), as defined by Eq. (35), yields:

$$\begin{aligned}{g}_{n}(x)&=x+\stackrel{2n+ 1}{\sum_{k = 1}}\frac{{c}_{n,k}}{2k}\cdot \left[1-{e}^{-2kx}\right]\\&={k}_{0}(n)+x-\stackrel{2n + 1}{\sum_{k = 1}}\frac{{c}_{n,k}}{2k}\cdot {e}^{-2kx}\qquad { k}_{0}(n)=\stackrel{2n + 1}{\sum_{k = 1}}\frac{{c}_{n,k}}{2k}\end{aligned}$$
(202)

and it is the case that \({g}_{n}\left(0\right)=0\).

Convergence of \({g}_{n}\), for all finite intervals, according to \(\underset{n\to \infty }{\mathrm{lim}} {g}_{n}(x)=\mathrm{ln }[\mathrm{cosh }(x)]\) implies, consistent with Lemma 3, that

$$\underset{n\to \infty }{\mathrm{lim}} {h}_{n}\left(x\right)=\underset{n\to \infty }{\mathrm{lim}}\int\limits_{0}^{x}{g}_{n}\left(\uplambda \right)d\uplambda =\int\limits_{0}^{x}\mathrm{ln }[\mathrm{cosh }(\uplambda )]d\uplambda $$
(203)

Direct integration then yields the approximations for \(h\):

$$\begin{aligned}{h}_{n}(x)&={k}_{0}(n)x+\frac{{x}^{2}}{2}+\stackrel{2n + 1}{\sum_{k = 1}}\frac{{c}_{n,k}}{4{k}^{2}}\cdot \left[{e}^{-2kx}-1\right]\\&=-{k}_{1}(n)+{k}_{0}(n)x+\frac{{x}^{2}}{2}+\stackrel{2n + 1}{\sum_{k = 1}}\frac{{c}_{n,k}}{4{k}^{2}}\cdot {e}^{-2kx}\qquad {k}_{1}(n)=\stackrel{2n + 1}{\sum_{k = 1}}\frac{{c}_{n,k}}{4{k}^{2}}\end{aligned}$$
(204)
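
A minimal sketch (illustrative, \(n=2\)) of the \(\mathrm{ln}[\mathrm{cosh}(x)]\) approximation \({g}_{n}\) of Eq. (202):

```python
import numpy as np

C2 = np.array([1.0, -2.0, 2.0, -1.75, 1.0, -0.25])    # c_{2,k} from Eq. (185)
k = np.arange(1, 6)
k0 = np.sum(C2[1:] / (2 * k))                          # k_0(n) of Eq. (202)

def lncosh_approx(x):
    # g_n(x) of Eq. (202)
    return k0 + x - float(np.sum(C2[1:] / (2 * k) * np.exp(-2 * k * x)))

for x in (0.5, 1.0, 3.0):
    print(x, np.log(np.cosh(x)), lncosh_approx(x))
```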

Appendix 6: Proof of Theorem 4.4

Consider the differential equation

$${y}^{{{\prime}}}\left(t\right)+{f}_{n}\left(t\right)y\left(t\right)={f}_{n}\left(t\right) \quad y(0)=0$$
(205)

for the case where \({f}_{n}\) is the \(n\mathrm{th}\) order approximation to \(\mathrm{tanh}\) defined, consistent with Theorem 3.1, according to

$${f}_{n}(t)=\stackrel{2n + 1}{\sum_{k = 0}}{c}_{n,k}{e}^{-2kt}\quad t\ge 0$$
(206)

The solution to the differential equation can be found by considering a signal form defined according to

$${y}_{n}\left(t\right)=1-\mathrm{exp }\left[{{\alpha }}_{0}+{{\alpha }}_{1}t+\stackrel{2n + 1}{\sum_{k = 1}}{d}_{n,k}{e}^{-2kt}\right] \quad t\ge 0$$
(207)

which implies

$${y}_{n}^{{{\prime}}}(t)=\left[-{{\alpha }}_{1}+2\stackrel{2n+1}{\sum_{k=1}} {kd}_{n,k}{e}^{-2kt}\right]\mathrm{exp}\left[{{\alpha }}_{0}+{{\alpha }}_{1}t+\stackrel{2n + 1}{\sum_{k = 1}} {d}_{n,k}{e}^{-2kt}\right]\quad t\ge 0$$
(208)

Substitution of these expressions into the differential equation yields

$$\mathrm{exp} \left[{{\alpha }}_{0}+{{\alpha }}_{1}t+\stackrel{2n + 1}{\sum_{k = 1}}{d}_{n,k}{e}^{-2kt}\right]\left[-{{\alpha }}_{1}+2\stackrel{2n + 1}{\sum_{k = 1}}k{d}_{n,k}{e}^{-2kt}-\stackrel{2n + 1}{\sum_{k = 0}}{c}_{n,k}{e}^{-2kt}\right]=0$$
(209)

As \({c}_{n,0 }=1\), it follows that \({{\alpha }}_{1}=-1\) and the requirement is then for

$$2\stackrel{2n + 1}{\sum_{k = 1}}k{d}_{n,k}{e}^{-2kt}=\stackrel{2n + 1}{\sum_{k=1}}{c}_{n,k}{e}^{-2kt} \Rightarrow {d}_{n,k}=\frac{{c}_{n,k}}{2k}$$
(210)

The initial condition of \(y\left(0\right)=0\) implies

$${{\alpha }}_{0}+\stackrel{2n+1}{\sum_{k=1}}{d}_{n,k}=0 \Rightarrow {\alpha }_{0}=-\stackrel{2n + 1}{\sum_{k=1}}\frac{{c}_{n,k}}{2k}$$
(211)

As the solution of the differential equation

$${y}^{{{\prime}}}\left(t\right)+\mathrm{tanh }\left(t\right)y\left(t\right)=\mathrm{tanh }\left(t\right)\quad y(0)=0$$
(212)

is \(y(t)=1-\mathrm{sech }(t)\), it follows that an \(n\mathrm{th}\) order approximation to \(\mathrm{sech}(t)\) is given by

$${S}_{n}(t)=\mathrm{exp}\left[-{k}_{0}(n)-t+\stackrel{2n+1}{\sum_{k=1}}\frac{{c}_{n,k}}{2k}\cdot {e}^{-2kt}\right]\qquad{ k}_{0}(n)=\stackrel{2n + 1}{\sum_{k = 1}}\frac{{c}_{n,k}}{2k}$$
(213)
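
A brief numerical check of Eq. (213) (an illustrative sketch with the \(n=2\) coefficients from Eq. (185)):

```python
import numpy as np

C2 = np.array([1.0, -2.0, 2.0, -1.75, 1.0, -0.25])    # c_{2,k} from Eq. (185)
k = np.arange(1, 6)
k0 = np.sum(C2[1:] / (2 * k))                          # k_0(n)

def sech_approx(t):
    # S_n(t) of Eq. (213)
    return np.exp(-k0 - t + np.sum(C2[1:] / (2 * k) * np.exp(-2 * k * t)))

for t in (0.5, 1.0, 3.0):
    print(t, 1.0 / np.cosh(t), sech_approx(t))
```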

Proof of Convergence for Sech

The error in the approximation to \(\mathrm{sech}(x)\), as defined by \({\upvarepsilon }_{S,n}(x)=\mathrm{sech }(x)-{S}_{n}(x)\), is

$${\upvarepsilon }_{S,n}(x)=\mathrm{sech }(x)-\mathrm{exp}\left[-{k}_{0}(n)-x+\stackrel{2n + 1}{\sum_{k= 1}}\frac{{c}_{n,k}}{2k}\cdot {e}^{-2kx}\right]$$
(214)

The argument of the exponential function is the negative of the integral of the \(n\mathrm{th}\) order approximation, \({f}_{n}\), for \(\mathrm{tanh}(x)\) as specified in Theorem 3.1, i.e.

$$\int\limits_{0}^{x}{f}_{n}(\lambda )d\lambda ={k}_{0}(n)+x-\stackrel{2n+1}{\sum_{k=1}}\frac{{c}_{n,k}\cdot {e}^{-2kx}}{2k}\;\;{ k}_{0}(n)=\stackrel{2n + 1}{\sum_{k=1}}\frac{{c}_{n,k}}{2k}\;\;x\ge 0$$
(215)

It then follows that

$${\upvarepsilon }_{S,n}(x)=\mathrm{sech }(x)-\mathrm{exp }\left[-\int\limits_{0}^{x}{f}_{n}(\lambda )d\lambda \right]$$
(216)

with the error being zero when \(x=0\). As

$$\int\limits_{0}^{x}{f}_{n}(\lambda )d\lambda =\int\limits_{0}^{x}\left[\mathrm{tanh }(\uplambda )-{\upvarepsilon }_{n}(\uplambda )\right]d\uplambda =\mathrm{ln }[\mathrm{cosh }(x)]-\int\limits_{0}^{x}{\upvarepsilon }_{n}(\uplambda )d\uplambda $$
(217)

it follows that

$${\upvarepsilon }_{S,n}(x)=\mathrm{sech }(x)\left[1-\mathrm{exp }\left[\int\limits_{0}^{x}{\upvarepsilon }_{n}(\uplambda )d\uplambda \right]\right]$$
(218)

From Eq. (44) it follows that \({\upvarepsilon }_{n}\) has the bound

$$\left|{\upvarepsilon }_{n}(x)\right|\le \frac{{e}^{-2(n+1)x}}{{2}^{n}}$$
(219)

and it then follows that

$$\int\limits_{0}^{x}\left|{\upvarepsilon }_{n}(\uplambda )\right|d\uplambda \le \int\limits_{0}^{x}\frac{{e}^{-2(n+1)\uplambda }}{{2}^{n}}d\uplambda =\frac{1-{e}^{-2(n+1)x}}{{2}^{n+1}(n+1)}\le \frac{1}{{2}^{n+1}(n+1)}$$
(220)

Thus, as \(n\) increases, the approximation

$$\mathrm{exp}\left[\int\limits_{0}^{x}{\upvarepsilon }_{n}(\uplambda )d\uplambda \right]\approx 1-\int\limits_{0}^{x}{\upvarepsilon }_{n}(\uplambda )d\uplambda $$
(221)

becomes increasingly valid and, hence:

$${\upvarepsilon }_{S,n}\left(x\right)\approx \mathrm{sech }(x)\left[\int\limits_{0}^{x}{\upvarepsilon }_{n}(\uplambda )d\uplambda \right] \Rightarrow \left|{\upvarepsilon }_{S,n}(x)\right|<\frac{{k}_{S}}{{2}^{n+1}(n+1)}$$
(222)

for a constant \({k}_{S}\) which is close to one. Clearly, the convergence is uniform for \(x\in (0,\infty )\). It also follows that the relative error in the approximation for \(\mathrm{sech}(x)\) has the bound

$$\left|\mathrm{re} \left(x\right)\right|=\frac{|{\upvarepsilon }_{S,n}(x)|}{\mathrm{sech}\left(x\right)}\approx \left|\int\limits_{0}^{x}{\upvarepsilon }_{n}(\uplambda )d\uplambda \right|<\frac{{k}_{S}}{{2}^{n+1}(n+1)}$$
(223)

Proof of Convergence for ln[Sech] and Integral of lnSech

Uniform convergence of \({S}_{n}\) to \(\mathrm{sech}\) for \(x\ge 0\) is consistent with: \(\forall\upvarepsilon >0\), \(\exists N>0\) such that, for all \(n>N\), it is the case that

$$\left|\mathrm{sech }(x)-{S}_{n}(x)\right|<\upvarepsilon \;\; x\ge 0$$
(224)

It then follows, for \(n\) suitably large, that

$$\begin{aligned}\text{ln} [\text{sech} (x)]&=\text{ln} \left[{S}_{n}(x)+{\upvarepsilon }_{S,n}(x)\right]=\text{ln}\left[{S}_{n}(x)\left[1+\frac{{\upvarepsilon }_{S,n}(x)}{{S}_{n}(x)}\right]\right]=\text{ln} \left[{S}_{n}(x)\right]+\text{ln} \left[1+\frac{{\upvarepsilon }_{S,n}(x)}{{S}_{n}(x)}\right]\\ &=\text{ln} \left[{S}_{n}\left(x\right)\right]+\text{ln} \left[1+\frac{{\upvarepsilon }_{S,n}\left(x\right)}{\mathrm{sech}\left(x\right)}\cdot \frac{\mathrm{sech}\left(x\right)}{{S}_{n}\left(x\right)}\right]\approx \text{ln} \left[{S}_{n}(x)\right]+\frac{{\upvarepsilon }_{S,n}(x)}{\mathrm{sech }(x)}\cdot \frac{\mathrm{sech }(x)}{{S}_{n}(x)}\end{aligned}$$
(225)

where the approximation is valid as \(\frac{\mathrm{sech}\left(x\right)}{{S}_{n}\left(x\right)}\approx 1\) and \(\frac{{\upvarepsilon }_{S,n}\left(x\right)}{\mathrm{sech}\left(x\right)}\ll 1\) (Eq. 223) as \(n\) increases. Hence, the error in the approximation \(\mathrm{ln }\left[\mathrm{sech }\left(x\right)\right]\approx \mathrm{ln }\left[{S}_{n}(x)\right]\) is

$${\upvarepsilon }_{L,n}\left(x\right)=\mathrm{ln }\left[\mathrm{sech }\left(x\right)\right]-\mathrm{ln }\left[{S}_{n}\left(x\right)\right]\approx \frac{{\upvarepsilon }_{S,n}(x)}{\mathrm{sech }(x)}$$
(226)

and this has the bound, consistent with Eq. (223), of

$$\left|{\upvarepsilon }_{L,n}(x)\right|<\frac{{k}_{L}}{{2}^{n+1}(n+1)}$$
(227)

where \({k}_{L}\) is similar in magnitude to \({k}_{S}\). Thus, the convergence of \(\mathrm{ln} [{S}_{n}(x)]\) to \(\mathrm{ln} [\mathrm{sech}(x)]\) is uniform for \(x>0\).

Convergence of \({L}_{n}\), for all finite intervals, according to \(\underset{n\to \infty }{\mathrm{lim}}{ L}_{n}(x)=\mathrm{ln }[\mathrm{sech }(x)]\) implies, consistent with Lemma 3, that

$$\underset{n\to \infty }{\mathrm{lim}} {I}_{n}(x)=\underset{n\to \infty }{\mathrm{lim}}\int\limits_{0}^{x}{L}_{n}(\uplambda )d\uplambda =\int\limits_{0}^{x} \mathrm{ln}[\mathrm{sech }(\uplambda )]d\uplambda $$
(228)

Appendix 7: Proof of Theorem 4.5

The approximation for the hyperbolic tangent function, specified in Theorem 3.1, yields the \(n\mathrm{th}\) order approximation to the integral of \(\mathrm{cosh }\left({k}_{o}x\right)\mathrm{tanh }(x)\) according to

$${I}_{n}\left({k}_{o},x\right)=\int\limits_{0}^{x}\left[\frac{{e}^{{k}_{o}\uplambda }+{e}^{-{k}_{o}\uplambda }}{2}\right]\left[\stackrel{2n + 1}{\sum_{k = 0}} {c}_{n,k}{e}^{-2k\uplambda }\right]d\uplambda $$
(229)

Standard analysis, with the restriction \({k}_{o}^{2}-4{k}^{2}\ne 0,k\in \{\mathrm{0,1},\ldots ,2n+1\}\) , leads to

$${I}_{n}\left({k}_{o},x\right)=\stackrel{2n + 1}{\sum_{k=0}}\frac{{c}_{n,k}}{2\left[{k}_{o}^{2}-4{k}^{2}\right]}\cdot \left[\left[{k}_{o}+2k\right]{e}^{\left({k}_{o}-2k\right)x}-\left({k}_{o}-2k\right){e}^{-\left({k}_{o}+2k\right)x}-4k\right]$$
(230)

and the alternative form then follows:

$${I}_{n}\left({k}_{o},x\right)=\stackrel{2n+1}{\sum_{k=0}}\frac{{c}_{n,k}}{{k}_{o}^{2}-4{k}^{2}}\left[{e}^{-2kx}\left[{k}_{o}\mathrm{sinh }\left({k}_{o}x\right)+2k\mathrm{cosh }\left({k}_{o}x\right)\right]-2k\right]$$
(231)

Convergence is guaranteed as

$$\underset{n\to \infty }{\mathrm{lim}} \left[\stackrel{2n+1}{\sum_{k=0}}{c}_{n,k}{e}^{-2kx}\right]\mathrm{cosh }\left({k}_{o}x\right)=\mathrm{cosh }\left({k}_{o}x\right)\mathrm{tanh }\left(x\right) \quad x\ge 0$$
(232)

and, consistent with Lemma 3, function convergence leads to integral convergence, i.e.

$$\begin{aligned}\underset{n\to \infty }{\mathrm{lim}} {I}_{n}\left({k}_{o},x\right)&=\underset{n\to \infty }{\mathrm{lim}}\int\limits_{0}^{x}\left[\stackrel{2n+1}{\sum_{k=0}}{c}_{n,k}{e}^{-2k\uplambda }\right]\mathrm{cosh }\left({k}_{o}\uplambda \right)d\uplambda\\& =\int\limits_{0}^{x}\mathrm{tanh }\left(\uplambda \right)\mathrm{cosh }\left({k}_{o}\uplambda \right)d\uplambda \;\;{ k}_{o}>0,\quad x\ge 0 \end{aligned}$$
(233)
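
The closed form of Eq. (231) is readily compared with numerical quadrature (an illustrative sketch with the \(n=2\) coefficients; the restriction \({k}_{o}\ne 2k\) is respected by the choice \({k}_{o}=0.5\)):

```python
import numpy as np
from scipy.integrate import quad

C2 = np.array([1.0, -2.0, 2.0, -1.75, 1.0, -0.25])    # c_{2,k} from Eq. (185)

def I_approx(ko, x):
    # Eq. (231), n = 2
    total = 0.0
    for k, c in enumerate(C2):
        total += c / (ko ** 2 - 4 * k ** 2) * (
            np.exp(-2 * k * x) * (ko * np.sinh(ko * x) + 2 * k * np.cosh(ko * x)) - 2 * k)
    return total

ko, x = 0.5, 2.0
numeric, _ = quad(lambda lam: np.cosh(ko * lam) * np.tanh(lam), 0.0, x)
print(I_approx(ko, x), numeric)
```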

Appendix 8: Proof of Theorem 4.6

Substitution of the approximations for \(\mathrm{tanh}(x)\) and \(\mathrm{sech }{\left(x\right)}^{2}\), as defined by Theorems 3.1 and 4.2, yields the approximation

$$\int\limits_{0}^{x}\mathrm{sech }{\left({k}_{o}\uplambda \right)}^{2}\mathrm{tanh }\left(\uplambda \right)d\uplambda \approx {I}_{n}\left({k}_{o},x\right)=-2\int\limits_{0}^{x}\left[\stackrel{2n+1}{\sum_{k=1}} k{c}_{n,k}{e}^{-2k{k}_{o}\lambda }\right]\left[\stackrel{2n + 1}{\sum_{i=0}}{c}_{n,i}{e}^{-2i\lambda }\right]d\lambda $$
(234)

Interchanging the order of summation and integration leads to

$${I}_{n}\left({k}_{o},x\right)=-\stackrel{2n+1 }{\sum_{i=0}}\stackrel{2n+1}{\sum_{k=1}}\frac{k{c}_{n,i}{c}_{n,k}}{k{k}_{o}+i}\cdot \left[1-{e}^{-2\left(k{k}_{o}+i\right)x}\right]$$
(235)

and the required result:

$${I}_{n}\left({k}_{o},x\right)={C}_{0}(n)+\stackrel{2n+1}{\sum_{i=0}} \stackrel{2n+1}{\sum_{k=1}} \frac{k{c}_{n,i}{c}_{n,k}}{k{k}_{o}+i} \cdot {{e}^{-2\left(k{k}_{o}+i\right)x} } \qquad {C}_{0}(n)= -\stackrel{2n+1 }{\sum_{i=0}}\stackrel{2n+1 }{\sum_{k=1} } \frac{k{c}_{n,i}{c}_{n,k}}{k{k}_{o}+i}$$
(236)

Convergence is guaranteed as

$$\underset{n\to \infty }{\mathrm{lim}}\left[-2\stackrel{2n+ 1}{\sum_{k=1}}k{c}_{n,k}{e}^{-2k{k}_{o}x}\right]\left[\stackrel{2n+1}{\sum_{i=0}}{c}_{n,i}{e}^{-2ix}\right]\!=\!\mathrm{sech} {\left({k}_{o}x\right)}^{2}\mathrm{tanh }(x)\quad\! { k}_{o}\!>\! 0,x\!>\! 0$$
(237)

and, consistent with Lemma 3, function convergence leads to integral convergence, i.e.

$$\underset{n\to \infty }{\mathrm{lim}} {I}_{n}\left({k}_{o},x\right)=\underset{n\to \infty }{\mathrm{lim}} \int\limits_{0}^{x}{S}_{2,n}\left({k}_{o}\uplambda \right){f}_{n}(\uplambda )d\uplambda =\int\limits_{0}^{x}\mathrm{sech}{\left({k}_{o}\uplambda \right)}^{2}\mathrm{tanh }(\uplambda )d\uplambda \qquad {k}_{o}>0,x>0$$
(238)

Appendix 9: Proof of Theorem 4.7

These results arise from the approximations for \(\mathrm{tanh }(x)\) specified in Theorem 3.1 and the Laplace transform result:

$${e}^{-pt}u\left(t\right)\iff \frac{1/p}{1+s/p}$$
(239)

To prove convergence, note, consistent with Eq. (44), that

$$\begin{aligned}{F}_{n}(s)&=\int\limits_{0}^{\infty}{f}_{n}(x){e}^{-sx}dx=\int\limits_{0}^{\infty}\left[\mathrm{tanh }(x)-{\upvarepsilon }_{n}(x)\right]{e}^{-sx}dx\quad Re(s)>0\\ &=\int\limits_{0}^{\infty} \mathrm{tanh }(x){e}^{-sx}dx-\int\limits_{0}^{\infty}\left[\frac{(-1{)}^{n+1}{e}^{-2(n+1)x}{\left[1-{e}^{-2x}\right]}^{n+1}}{{2}^{n}\left[1+{e}^{-2x}\right]}\right]{e}^{-sx}dx\end{aligned}$$
(240)

It then follows, with \(s=\upsigma +j\upomega \), that

$$\begin{aligned}& \left|\int\limits_{0}^{\infty}\left[\frac{(-1{)}^{n+1}{e}^{-2(n+1)x}{\left[1-{e}^{-2x}\right]}^{n+1}}{{2}^{n}\left[1+{e}^{-2x}\right]}\right]{e}^{-sx}dx\right| \le \int\limits_{0}^{\infty}\left[\frac{{e}^{-2(n+1)x}{\left[1-{e}^{-2x}\right]}^{n+1}}{{2}^{n}\left[1+{e}^{-2x}\right]}\right]{e}^{-\sigma x}dx\\ &\quad <\frac{1}{{2}^{n}}\int\limits_{0}^{\infty}{e}^{-[\upsigma +2(n+1)]x}dx=\frac{1}{{2}^{n}\cdot [\sigma +2(n+1)]}\end{aligned}$$
(241)

which clearly converges to zero as \(n\) increases for \(\upsigma \ge 0\). Thus, for \(\mathrm{Re} \left(s\right)>0\), it is the case that \(\underset{n\to \infty }{\mathrm{lim}} {F}_{n}(s)=F(s)\) , as required.
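
An illustrative check of the Laplace transform approximation of Theorem 4.7, in the form of Eq. (244) with \(\upgamma =1\) and the \(n=2\) coefficients, against direct numerical evaluation of \(F(s)\):

```python
import numpy as np
from scipy.integrate import quad

C2 = np.array([1.0, -2.0, 2.0, -1.75, 1.0, -0.25])    # c_{2,k} from Eq. (185)
k = np.arange(1, 6)

def F_approx(s):
    # F_n(s) = 1/s + sum_{k>=1} c_{n,k}/(s + 2k), Re(s) > 0
    return 1.0 / s + np.sum(C2[1:] / (s + 2 * k))

for s in (0.5, 1.0, 4.0):
    numeric, _ = quad(lambda x: np.tanh(x) * np.exp(-s * x), 0.0, np.inf)
    print(s, F_approx(s), numeric)
```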

Appendix 10: Proof of Theorem 4.8

The integral of \(x\,\mathrm{tanh}(x)\) is given by

$$\begin{aligned} \int\limits_{0}^{x} \lambda \text{tanh} \left(\uplambda \right)d\lambda& =-\int\limits_{0}^{x} \left(x-\uplambda \right)\text{tanh} \left(\uplambda \right)d\lambda +x\int\limits_{0}^{x} \text{tanh} \left(\uplambda \right)d\lambda \\&=x \text{ln} [\text{cosh} (x)]-\int\limits_{0}^{x} (x-\lambda )\text{tanh} (\lambda )d\lambda \end{aligned}$$
(242)

as \(\int\limits_{0}^{x}\mathrm{ tanh}(\uplambda )d\uplambda =\mathrm{ln }[\mathrm{cosh}(x)]\) . The unknown convolution integral has the Laplace transform

$$\int\limits_{0}^{x}(x-\uplambda )\mathrm{tanh}(\uplambda )d\uplambda \iff \frac{1}{{s}^{2}}\cdot {\varvec{L}}[\mathrm{tanh}(x)]$$
(243)

where \({\varvec{L}}\) is the Laplace transform operator. Utilizing the Laplace transform approximation for \(\mathrm{tanh}\), as specified in Theorem 4.7, it follows that

$$\frac{1}{{s}^{2}}\cdot {\varvec{L}}\left[\mathrm{tanh }\left(x\right)\right]\approx \frac{1}{{s}^{2}}\cdot \left[\frac{1}{s}+\stackrel{2n+1}{\sum_{k=1}} \frac{{c}_{n,k}}{s+2k}\right]$$
(244)

The partial fraction expansion

$$\frac{1}{{s}^{2}}\cdot \frac{1}{s+2k}=\frac{-1}{4{k}^{2}s}+\frac{1}{2k{s}^{2}}+\frac{1}{4{k}^{2}(s+2k)}$$
(245)

leads to

$$\begin{aligned}&\frac{1}{{s}^{2}}\cdot L\left[\mathrm{tanh }\left(x\right)\right]\approx \frac{1}{{s}^{3}}-\stackrel{2n+1}{\sum_{k=1}}\frac{{c}_{n,k}}{4{k}^{2}s}+\frac{{c}_{n,k}}{2k{s}^{2}}+\frac{{c}_{n,k}}{4{k}^{2}(s+2k)}\\ &\iff \frac{{x}^{2}}{2}-{k}_{1}(n)+{k}_{0}(n)x+\stackrel{2n+1}{\sum_{k=1}}\frac{{c}_{n,k}{e}^{-2kx}}{4{k}^{2}} \quad {k}_{1}(n)=\stackrel{2n+1}{\sum_{k=1}}\frac{{c}_{n,k}}{4{k}^{2}} \quad\\ &{k}_{0}(n)=\stackrel{2n+1}{\sum_{k=1}}\frac{{c}_{n,k}}{2k}\end{aligned}$$
(246)

where the results

$$u\left(t\right)\iff \frac{1}{s}\qquad tu\left(t\right)\iff \frac{1}{{s}^{2}}\qquad \frac{{t}^{2}}{2}u\left(t\right)\iff \frac{1}{{s}^{3}}\qquad {e}^{-pt}u(t)\iff \frac{1}{s+p}$$
(247)

have been used. The required result then follows:

$$\int\limits_{0}^{x} \lambda \mathrm{tanh }\left(\uplambda \right)d\uplambda =x \mathrm{ln }[\mathrm{cosh}(x)]-\left[-{k}_{1}(n)+{k}_{0}(n)x+\frac{{x}^{2}}{2}+\stackrel{2n + 1}{\sum_{k=1}}\frac{{c}_{n,k}{e}^{-2kx}}{4{k}^{2}}\right]$$
(248)

Proof for Second Approximation

Consider the approximation for \(\mathrm{tanh}(x)\) specified in Theorem 3.1 which implies

$$\int\limits_{0}^{x} \lambda \mathrm{tanh}(\uplambda )d\uplambda =\int\limits_{0}^{x}\uplambda \left[ \underset{n\to \infty }{\mathrm{lim}} \stackrel{2n+1}{\sum_{k=0}}{c}_{n,k}{e}^{-2k\uplambda }\right]d\lambda =\underset{n\to \infty }{\mathrm{lim}} \int\limits_{0}^{x}\uplambda \left[\stackrel{2n+1}{\sum_{k=0}}{c}_{n,k}{e}^{-2k\uplambda }\right]d\uplambda $$
(249)

where the interchange of integral and limit is valid consistent with the dominated convergence theorem. It then follows that

$$\begin{aligned}\int\limits_{0}^{x} \lambda \mathrm{tanh}(\uplambda )d\uplambda &=\underset{n\to \infty }{\mathrm{lim}}\stackrel{2n + 1}{\sum_{k= 0}}{c}_{n,k}\int\limits_{0}^{x}\uplambda {e}^{-2k\uplambda } d\uplambda \\&=\underset{n\to \infty }{\mathrm{lim}} \left[\frac{{x}^{2}}{2}+\stackrel{2n + 1}{\sum_{k=1}}\frac{{c}_{n,k}}{4{k}^{2}}\left[1-{e}^{-2kx}(1+2kx)\right]\right]\end{aligned}$$
(250)

as

$$\int\limits_{0}^{x}\uplambda {e}^{-a\uplambda } d\uplambda =\frac{1}{{a}^{2}}-\frac{{e}^{-ax}}{{a}^{2}}\cdot (1+ax)$$
(251)

The second approximation also follows from the first approximation by utilizing the approximation for \(\mathrm{ln} [\mathrm{cosh}(x)]\) specified in Theorem 4.3. As this approximation is convergent it follows from the convergence of the second approximation that the first approximation is also convergent over the positive real line.
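
A short sketch (illustrative, \(n=2\)) of the second approximation, Eq. (250), checked against quadrature:

```python
import numpy as np
from scipy.integrate import quad

C2 = np.array([1.0, -2.0, 2.0, -1.75, 1.0, -0.25])    # c_{2,k} from Eq. (185)
k = np.arange(1, 6)

def int_x_tanh(x):
    # second approximation of Eq. (250), truncated at n = 2
    return x ** 2 / 2 + float(np.sum(C2[1:] / (4 * k ** 2) *
                                     (1 - np.exp(-2 * k * x) * (1 + 2 * k * x))))

for x in (1.0, 3.0):
    numeric, _ = quad(lambda lam: lam * np.tanh(lam), 0.0, x)
    print(x, int_x_tanh(x), numeric)
```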

Appendix 11: Proof of Theorem 4.9

The response of a system, with an impulse response defined by Eq. (142), to an exponential input signal, defined by \({x}_{m}(t)={e}^{-mt}u(t)\), is

$${z}_{m}(t)=\int\limits_{0}^{t}h(\uplambda ){x}_{m}(t-\uplambda )d\uplambda =\frac{{\upomega }_{n}{e}^{-mt}}{\sqrt{1-{\upxi }^{2}}}\cdot \int\limits_0^{\text{t}}{e}^{\left[m-{\upomega }_{n}\xi \right]\uplambda }\mathrm{sin}\left[{\upomega }_{n}\sqrt{1-{\upxi }^{2}}\uplambda \right]d\uplambda $$
(252)

The integral result (e.g. [25], Eq. 17.25.10)

$$ \int\limits_0^{\text{t}} {e}^{ax}\mathrm{sin}(bx)dx=\frac{{e}^{at}[a\mathrm{sin}(bt)-b\mathrm{cos}(bt)]+b}{{a}^{2}+{b}^{2}}$$
(253)

implies

$$\begin{aligned}{z}_{m}(t)&=\frac{{\upomega }_{n}{e}^{-mt}}{\sqrt{1-{\upxi }^{2}}}\cdot \frac{1}{{\left[m-{\upomega }_{n}\upxi \right]}^{2}+{\upomega }_{n}^{2}\left(1-{\upxi }^{2}\right)}\cdot \left[{\upomega }_{n}\sqrt{1-{\upxi }^{2}}+\right.\\ &\left.{e}^{\left[m-{\upomega }_{n}\upxi \right]t}\left[\left(m-{\upomega }_{n}\upxi \right)\mathrm{sin}\left({\upomega }_{n}\sqrt{1-{\upxi }^{2}}t\right)-{\upomega }_{n}\sqrt{1-{\upxi }^{2}}\mathrm{cos}\left({\upomega }_{n}\sqrt{1-{\upxi }^{2}}t\right)\right]\right]\end{aligned}$$
(254)

As

$${\left[m-{\upomega }_{n}\upxi \right]}^{2}+{\upomega }_{n}^{2}\left(1-{\upxi }^{2}\right)={\upomega }_{n}^{2}\left[1+\frac{{m}^{2}}{{\upomega }_{n}^{2}}-\frac{2m\upxi }{{\upomega }_{n}}\right]$$
(255)

the output simplifies to

$${z}_{m}(t)=\frac{{e}^{-mt}}{1+\frac{{m}^{2}}{{\upomega }_{n}^{2}}-\frac{2m\upxi }{{\upomega }_{n}}}+{e}^{-{\upomega }_{n}\upxi t}\left[\frac{\left[\frac{m}{{\upomega }_{n}}-\upxi \right]\mathrm{sin }\left({\upomega }_{n}\sqrt{1-{\upxi }^{2}}t\right)}{\sqrt{1-{\xi }^{2}}\left[1+\frac{{m}^{2}}{{\upomega }_{n}^{2}}-\frac{2m\upxi }{{\upomega }_{n}}\right]}-\frac{\mathrm{cos }\left({\upomega }_{n}\sqrt{1-{\upxi }^{2}}t\right)}{1+\frac{{m}^{2}}{{\upomega }_{n}^{2}}-\frac{2m\upxi }{{\upomega }_{n}}}\right]$$
(256)

The linearity of the system, and the approximation of the hyperbolic tangent function input signal according to \(\sum_{k=0}^{2n+1} {c}_{n,k}\cdot {e}^{-2kt/\upgamma }\) (Theorem 3.1), lead to

$$\begin{aligned} {y}_{n}(t)&=\sum_{k=0}^{2n+1}{c}_{n,k}\cdot {\left.\int\limits_{0}^{t}h(\uplambda ){x}_{m}(t-\uplambda )\,d\uplambda \right|}_{m=2k/\upgamma }\\ &=\sum_{k=0}^{2n+1}{c}_{n,k}\cdot \left[\frac{{e}^{-2kt/\upgamma }}{1+\frac{4{k}^{2}/{\upgamma }^{2}}{{\upomega }_{n}^{2}}-\frac{4k\upxi /\upgamma }{{\upomega }_{n}}}+{e}^{-{\upomega }_{n}\upxi t}\left[\frac{\left[\frac{2k/\upgamma }{{\upomega }_{n}}-\upxi \right]\cdot \mathrm{sin}\left({\upomega }_{n}\sqrt{1-{\upxi }^{2}}t\right)}{\sqrt{1-{\upxi }^{2}}\cdot \left[1+\frac{4{k}^{2}/{\upgamma }^{2}}{{\upomega }_{n}^{2}}-\frac{4k\upxi /\upgamma }{{\upomega }_{n}}\right]}-\frac{\mathrm{cos}\left({\upomega }_{n}\sqrt{1-{\upxi }^{2}}t\right)}{1+\frac{4{k}^{2}/{\upgamma }^{2}}{{\upomega }_{n}^{2}}-\frac{4k\upxi /\upgamma }{{\upomega }_{n}}}\right]\right]\end{aligned}$$
(257)

which is the required result.

To prove convergence, consider

$$\underset{n\to \infty }{\mathrm{lim}}{ y}_{n}(t)=\underset{n\to \infty }{\mathrm{lim}} \int\limits_{0}^{t}\left[\sum_{k=0}^{2n+1}{c}_{n,k}\cdot {e}^{-2k(t-\uplambda )/\upgamma }\right]h(\uplambda )\,d\uplambda =\int\limits_{0}^{t}\mathrm{tanh}\left[\frac{t-\uplambda }{\upgamma }\right]h(\uplambda )\,d\uplambda $$
(258)

The interchange of limit and integration is valid, consistent with Lemma 3, for \(0<\upxi <1\), as the integrand comprises differentiable bounded functions.

Appendix 12: Proof of Theorem 4.10

Using the approximation to the hyperbolic tangent function specified by Theorem 3.1, and the Laplace transform result

$$\frac{{t}^{i-1}{e}^{-t/\beta }}{(i-1)!{\upbeta }^{i}}\cdot u\left(t\right) \iff \frac{1}{(1+s\upbeta {)}^{i}}$$
(259)

it follows that

$$\mathrm{tanh}\left[\frac{t}{\upgamma }\right]\approx 1+\sum_{k=1}^{2n+1}{c}_{n,k}{e}^{-2kt/\upgamma } \iff \frac{1}{s}+\sum_{k=1}^{2n+1}{c}_{n,k}\cdot \frac{\upgamma /2k}{1+\frac{\upgamma s}{2k}}$$
(260)

and the Laplace transform of the \(n\mathrm{th}\) order approximation to the output signal is

$${Y}_{n}(s)=\left[\frac{1}{s}+\sum_{k=1}^{2n+1}{c}_{n,k}\cdot \frac{\upgamma /2k}{1+\frac{\upgamma s}{2k}}\right]\cdot \frac{1}{(1+s\uptau )^{r}}$$
(261)

As

$$\frac{1}{s}\cdot \frac{1}{(1+s\beta {)}^{m}}=\frac{1}{s}-\stackrel{m}{\sum_{i=1}}\frac{\beta }{(1+s\beta {)}^{i}}$$
(262)
$$\frac{1}{1+s\alpha }\cdot \frac{1}{(1+s\beta )^{m}}=\frac{1}{(1-\beta /\alpha )^{m}(1+s\alpha )}-\sum_{i=1}^{m}\frac{\beta /\alpha }{(1-\beta /\alpha )^{m-i+1}(1+s\beta )^{i}}\quad \beta \ne \alpha $$
(263)

it follows, with \({\alpha }=\gamma /2k\), \(\upbeta =\uptau \), that

$$\begin{aligned}{Y}_{n}(s)&=\frac{1}{s}-\stackrel{r}{\sum_{i=1}}\frac{\uptau }{(1+s\uptau {)}^{i}}\\&+ \stackrel{2n+1}{\sum_{k=1}}{c}_{n,k}\cdot \frac{\upgamma }{2k}\cdot \left[\frac{1}{(1-2k\uptau /\upgamma {)}^{r}(1+s\upgamma /2k)}-\stackrel{r}{\sum_{i=1}}\frac{2k\uptau /\upgamma }{(1-2k\uptau /\upgamma {)}^{r-i+1}(1+s\uptau {)}^{i}}\right]\end{aligned}$$
(264)

Taking the inverse Laplace transform yields the required result:

$$\begin{aligned}{y}_{n}(t)&=1-\stackrel{r}{\sum_{i=1}}\frac{{t}^{i-1}{e}^{-t/\uptau }}{(i-1)!{\uptau }^{i-1}}\\&+\stackrel{2n + 1}{\sum_{k=1}}{c}_{n,k}\cdot \left[\frac{{e}^{-2kt/\upgamma }}{(1-2k\uptau /\upgamma {)}^{r}}-\stackrel{r}{\sum_{i = 1}}\frac{{t}^{i-1}{e}^{-t/\uptau }}{(i-1)!{\uptau }^{i-1}(1-2k\uptau /\upgamma {)}^{r-i+1}}\right]\end{aligned}$$
(265)

To prove convergence, consider the time domain form for the output signal:

$$\underset{n\to \infty }{\mathrm{lim}} {y}_{n}\left(t\right)=\underset{n\to \infty }{\mathrm{lim}} \int\limits_{0}^{t}\left[\stackrel{2n + 1}{\sum_{k=0}}{c}_{n,k}\cdot {e}^{-2k(t-\uplambda )/\upgamma }\right]h(\uplambda )d\uplambda =\int\limits_{0}^{t}\mathrm{tanh}\left[\frac{t-\uplambda }{\upgamma }\right]h(\uplambda )d\uplambda $$
(266)

where \(h(t)=\frac{{t}^{r-1}{e}^{-t/\uptau }}{(r-1)!{\uptau }^{r}}\cdot u(t)\). The interchange of limit and integration is valid, consistent with Lemma 3, as the integrand comprises differentiable bounded functions.

Appendix 13: Proof of Theorem 4.11

Consider the transfer function

$${H}_{n}(s)=\stackrel{2n + 1}{\sum_{k=0}}{c}_{n,k}{e}^{-2k\uptau s}$$
(267)

where \({c}_{n,0 }=1\). The \(n\mathrm{th}\) order transfer function is defined by the approximation to the hyperbolic tangent function specified in Theorem 3.1. The impulse response follows from the Laplace transform relationship

$$\updelta \left(t-a\right)\iff {e}^{-as}$$
(268)

where \(\updelta \) is the Dirac delta.

The magnitude response of the transfer function, by definition, is

$$\begin{aligned}{H}_{M}(f)&={\left.\left|{H}_{n}(s)\right|\right|}_{s=j2\uppi f}=\left|\sum_{k=0}^{2n+1}{c}_{n,k}{e}^{-j4k\uptau \uppi f}\right|=\sqrt{\sum_{i=0}^{2n+1} \sum_{k=0}^{2n+1} {c}_{n,i}{c}_{n,k}{e}^{-j4\left(i-k\right)\uptau \uppi f}}\\ &=\sqrt{\sum_{i=0}^{2n+1} \sum_{k=0}^{2n+1}{c}_{n,i}{c}_{n,k}\mathrm{cos}[4\left(i-k\right)\uptau \uppi f]}\\&=\sqrt{\sum_{k=0}^{2n+1}{c}_{n,k}^{2}+2\sum_{i=0}^{2n} \sum_{k=i+1}^{2n+1} {c}_{n,i}{c}_{n,k}\mathrm{cos}[4(k-i)\uptau \uppi f]}\end{aligned}$$
(269)

To determine the values of \(f\) where the magnitude response is zero, consider

$$\frac{\mathrm{d}}{\mathrm{d}f}\left[{H}_{M}^{2}(f)\right]=-8{\pi \tau }\stackrel{2n}{\sum_{i = 0}} \stackrel{2n + 1}{\sum_{k=i+1}} {c}_{n,i}{c}_{n,k}(k-i)\mathrm{sin}[4(k-i){\tau \pi }f]$$
(270)

A sufficient condition for this to be zero is for

$$4\left(k-i\right){\tau \pi }f\in \left\{0,\uppi ,2\uppi ,\ldots \right\} \quad \forall i,k\in \{0, 1,\ldots , 2n+1\},k>i$$
(271)

and this is satisfied when

$$f\in \left\{0,\frac{1}{4\uptau },\frac{2}{4\uptau },\frac{3}{4\uptau },\frac{1}{\uptau },\ldots \right\}$$
(272)

For the case of \(f\in \{0, 1/2\uptau ,1/\uptau ,\ldots ,r/2\uptau ,\ldots \}\) , the magnitude response is a minimum with a value of zero as

$${H}_{M}\left[\frac{r}{2\uptau }\right]=\left|\stackrel{2n + 1}{\sum_{k=0}}{c}_{n,k}{e}^{-j2rk\uppi }\right|=\left|\stackrel{2n + 1}{\sum_{k = 0}}{c}_{n,k}\right|=0\;\; r\in \{0, 1, 2,\ldots \}$$
(273)

where the equality to zero follows from Eq. (46).

For the case of \(f\in \{1/4\uptau ,3/4\uptau ,\ldots ,(2r+1)/4\uptau ,\ldots \}\) , the magnitude response has the value

$$\begin{aligned}{H}_{M}\left[\frac{2r+1}{4\uptau }\right]&=\left|\stackrel{2n+1}{\sum_{k=0}}{c}_{n,k}{e}^{-jk\left(2r+1\right)\uppi }\right|=\left|\stackrel{2n+1}{\sum_{k=0}}(-1{)}^{k}{c}_{n,k}\right|\;\; r\in \{0, 1, 2,\ldots \}\\ &=2+3n\end{aligned}$$
(274)

where the final result can readily be confirmed numerically. Thus, the frequencies \(f\in \left\{0,\frac{1}{4\uptau },\frac{2}{4\uptau },\frac{3}{4\uptau },\ldots \right\}\) are consistent with alternating minima and maxima points and a comb filter structure as detailed in Fig. 23.
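
The null/peak pattern is easy to confirm numerically (an illustrative sketch; \(n=2\), \(\uptau =1\)):

```python
import numpy as np

C2 = np.array([1.0, -2.0, 2.0, -1.75, 1.0, -0.25])    # c_{2,k} from Eq. (185), n = 2
tau = 1.0
k = np.arange(len(C2))

def H_mag(f):
    # magnitude response of Eq. (269)
    return abs(np.sum(C2 * np.exp(-1j * 4 * np.pi * k * tau * f)))

print(H_mag(0.0), H_mag(1 / (2 * tau)), H_mag(1 / tau))   # nulls: 0 (Eq. (273))
print(H_mag(1 / (4 * tau)), H_mag(3 / (4 * tau)))         # peaks: 2 + 3n = 8 (Eq. (274))
```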

Appendix 14: Proof of Theorem 4.12

The odd nature of the hyperbolic tangent function, along with the approximations detailed in Theorem 3.1, yield the following explicit approximate expression for the output signal:

$${y}_{n}(t)=\left\{\begin{array}{ll}\sum\limits_{k=0}^{2n+1}{c}_{n,k}{e}^{-2ka \mathrm{sin}(2\uppi {f}_{o}t)}& \mathrm{sin}\left(2\uppi {f}_{o}t\right)\ge 0\\ -\sum\limits_{k=0}^{2n+1}{c}_{n,k}{e}^{2ka \mathrm{sin}(2\uppi {f}_{o}t)}& \mathrm{sin}\left(2\uppi {f}_{o}t\right)<0\end{array}\right.$$
(275)

Utilizing the first half cycle of the signal, the output power has the associated approximation

$${P}_{n}=\frac{2}{T}\cdot \sum_{i=0}^{2n+1}\sum_{k=0}^{2n+1} {c}_{n,i}{c}_{n,k} \int\limits_{0}^{T/2}{e}^{-2(k+i)a \mathrm{sin}\left(2\uppi {f}_{o}t\right)}\,dt \quad T=1/{f}_{o}$$
(276)

Using the change of variable \(\lambda ={f}_{o}t\), the normalized form follows:

$$\begin{aligned}{P}_{n}&=2\sum_{i=0}^{2n+1} \sum_{k=0}^{2n+1}{c}_{n,i}{c}_{n,k}\int\limits_{0}^{1/2}{e}^{-2\left(k+i\right)a \mathrm{sin}(2\uppi \lambda )}\,d\lambda \\ &= \sum_{i=0}^{2n+1} \sum_{k=0}^{2n+1} {c}_{n,i}{c}_{n,k}\cdot \left[{I}_{0}[2(k+i)a]-{L}_{0}[2(k+i)a]\right]\end{aligned}$$
(277)

and the solution of the integral in terms of \({I}_{0}\) and \({L}_{0}\) arises from Mathematica.

The rth harmonic of the output signal is given by

$${c}_{r}=\frac{\sqrt{2}}{\sqrt{T}}\cdot \int\limits_0^{\text{T}}\mathrm{tanh }\left[a \mathrm{sin}\left(2\uppi {f}_{o}t\right)\right]\mathrm{sin }\left(2\uppi r{f}_{o}t\right)dt\quad r\in \{1, 2, 3,\ldots \}$$
(278)

and can be approximated according to

$$\begin{aligned}{c}_{r}&=\frac{\sqrt{2}}{\sqrt{T}} \int\limits_0^{\text{T}/2}\left[1+\stackrel{2n+1}{\sum_{k=1}}{c}_{n,k}{e}^{-2ka\mathrm{sin}\left(2\uppi {f}_{o}t\right)}\right]\mathrm{sin}\left(2\uppi r{f}_{o}t\right)dt\\&\quad+\,\frac{\sqrt{2}}{\sqrt{T}} \int\limits_{T/2}^{\text{T}}\left[-1-\stackrel{2n+1}{\sum_{k=1}}{c}_{n,k}{e}^{2ka\mathrm{sin}\left(2\uppi {f}_{o}t\right)}\right]\sin\left(2\uppi r{f}_{o}t\right)dt\\ &=\sqrt{2T}\left[ \int\limits_0^{\text{1}/2}\mathrm{sin}(2\uppi r\uplambda )d\uplambda -\int\limits_{1/2}^{1}\mathrm{ sin}(2\uppi r\uplambda )d\uplambda \right]\\&\quad+\, \sqrt{2T}\stackrel{2n + 1}{\sum_{k=1}}{c}_{n,k}\left[ \int\limits_0^{\text{1/2}}{e}^{-2ka \mathrm{sin}(2{\pi \lambda })}\mathrm{sin}(2\uppi r\uplambda )d\uplambda - \int\limits_{1/2}^{\text{1}}{e}^{2ka \mathrm{sin}(2{\pi \lambda })}\mathrm{sin}(2\uppi r\uplambda )d\uplambda \right]\end{aligned}$$
(279)

after the change of variable \(\lambda ={f}_{o}t\). With the further change of variable \(\upgamma =\uplambda -1/2\) in the second integrals, it follows that

$$\begin{aligned}c_{r}&=\sqrt{2T}\left[\int\limits_{0}^{1/2}\mathrm{sin}(2\uppi r\uplambda )d\uplambda -\int\limits_{0}^{1/2}\mathrm{sin}(2\uppi r\upgamma +{\pi r })d\upgamma \right]\\ &+\sqrt{2T}\stackrel{2n+1}{\sum_{k = 1}}{c}_{n,k}\left[\int\limits_{0}^{1/2}{e}^{-2ka \mathrm{sin }(2{\pi \lambda } )}\mathrm{sin}(2\uppi r\uplambda )d\uplambda -\int\limits_{0}^{1/2}{e}^{2ka \mathrm{sin}(2{\pi \gamma }+\uppi )}\mathrm{sin}(2\uppi r\upgamma +\uppi r)d\upgamma \right]\end{aligned}$$
(280)

Hence

$$c_r=\left\{\begin{array}{ll}0& r\in \{\mathrm{0,2},4,\ldots \}\\ \frac{2\sqrt{2T}}{\uppi r}+2\sqrt{2T}\stackrel{2n + 1}{\sum_{k=1}}{c}_{n,k} \int\limits_{0}^{1/2}{e}^{-2ka \mathrm{sin}(2{\pi \lambda })}\mathrm{sin}(2{\pi r\lambda })d\uplambda & r{\in \{\mathrm{1,3},5,\ldots \}} \end{array}\right.$$
(281)

as

$$\int\limits_{0}^{1/2}\mathrm{sin}(2\uppi r\uplambda )d\uplambda =\frac{-[\mathrm{cos}(\uppi r)-1]}{2\uppi r}=\frac{1}{\uppi r}\;\; \ r\in \{\mathrm{1, 3, 5,\ldots }\} $$
(282)

Evaluation, via Mathematica, of the integral in Eq. (281) yields the coefficient expressions detailed in Eqs. (169) to (172).
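
For the output power, Eq. (277) can be evaluated with standard Bessel and modified Struve routines and compared against direct numerical integration of the power of \(\mathrm{tanh}[a\,\mathrm{sin}(2\uppi {f}_{o}t)]\) (an illustrative sketch; \(n=2\) and the amplitude \(a=1\) is arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0, modstruve

C2 = np.array([1.0, -2.0, 2.0, -1.75, 1.0, -0.25])    # c_{2,k} from Eq. (185)
a = 1.0                                               # input amplitude (illustrative)

# Eq. (277): P_n = sum_i sum_k c_i c_k [I_0(2(k+i)a) - L_0(2(k+i)a)]
P_n = 0.0
for i, ci in enumerate(C2):
    for k, ck in enumerate(C2):
        z = 2.0 * (k + i) * a
        P_n += ci * ck * (i0(z) - modstruve(0, z))

# direct average power over one period (f_o = 1, T = 1)
P_direct, _ = quad(lambda t: np.tanh(a * np.sin(2 * np.pi * t)) ** 2, 0.0, 1.0)
print(P_n, P_direct)
```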

Cite this article

Howard, R.M. Arbitrarily Accurate Spline Based Approximations for the Hyperbolic Tangent Function and Applications. Int. J. Appl. Comput. Math 7, 215 (2021). https://doi.org/10.1007/s40819-021-01088-1
