Challenges in approximating the Black and Scholes call formula with hyperbolic tangents

In this paper, we introduce the concept of standardized call function and we obtain a new approximating formula for the Black and Scholes call function through the hyperbolic tangent. Unlike other solutions proposed in the literature, this formula is invertible; hence, it is useful for pricing and risk management as well as for extracting the implied volatility from quoted options. The latter is of particular importance since it indicates the risk of the underlying and is the main component of the option's price. That is what trading desks focus on. Further, we estimate numerically the approximation error of the suggested solution and, by comparing our results in computing the implied volatility with the most common methods available in the literature, we discuss the challenges of this approach.


Introduction
For investors and traders, a key component in their decision making is to assess the risk they run. A common way to do so is to focus on the dispersion of returns. However, this measure has the problem that it is computed on past performance and may have little to do with the current level of risk. Within the Black-Scholes framework (Black and Scholes 1973), later extended by Merton (1973), it is possible to identify a relation between the value of an asset and the option written on it. Though such a formula is subject to strong criticism (Derman and Taleb 2005), and notwithstanding its pitfalls, the formula and its related approximations (Taylor or delta-gamma) are widely used in risk management (Estrella 1995) and "in many respects the story of the establishment of the Black-Scholes-Merton model (BSM) simply marks the emergence of contemporary financial risk management" (Millo and MacKenzie 2009). Many other models have been developed to address some of the BSM weaknesses, such as those proposed by Engle and Mustafa (1992), Heston (1993), Duan (1995), Ritchken and Trevor (1999), Christoffersen and Jacobs (2004) and Duan et al. (2006), as well as models based on neural networks (for a review see Mostafa et al. 2017), but those are beyond the scope of the present paper. This is for the simple reason that we share the same experience of most quants such as Paul Wilmott: when it comes to pricing and managing the risk of thousands of options in real time, "better" models are not a suitable choice. Problems should be streamlined and models should be of practical use. Therefore constant volatility Black-Scholes is a common choice, while tail risk is managed through worst-case scenarios. Last but not least, the BSM is, simply put, another way to ensure that "a portfolio consisting of a long position in a call and a short position in a put, valued by the traditional discounted expected value of their payoffs, must statically replicate a forward contract" (Derman and Taleb 2005).
Therefore, irrespectively of its shortcomings, it is widely used as an arbitrage free tool by dealers or market-makers in options who need to stay consistent with the value of their "raw supplies" (Derman and Taleb 2005).
As the usage of the BSM formula has become widespread in financial markets (Millo and MacKenzie 2009), options are priced and traded in terms of their risk or, in other words, the so-called implied volatility (i.e. the actual volatility embedded in the option's price). To be precise, given the value S of an underlying asset, the strike price K, the time to maturity T, the interest rate r and the volatility σ, the BSM formula derives the price C of a European call option through the formula

C = S N(d₁) − X N(d₂),   (1.1)

where X = K e^{−rT} is the present value of the strike price, N(x) is the cumulative distribution function of the standard normal, i.e.

N(x) := (1/√(2π)) ∫_{−∞}^{x} e^{−t²/2} dt,   (1.2)

and

d₁ := log(S/X)/(σ√T) + σ√T/2,   d₂ := d₁ − σ√T

are the first and the second parameter of probability, i.e. respectively, "the factor by which the present value of contingent receipt of the stock, contingent on exercise, exceeds the current value of the stock" (Nielsen 1993) and the risk-adjusted probability of exercise.
Since the BSM formula involves the use of the cumulative distribution function of the standard normal N (x), then we have to face two kinds of problems: (a) how to compute the price of the option for given S, K , T , r and σ ; (b) given S, K , T , r , how to compute the so-called "implied volatility", i.e. the volatility corresponding to the current price of the option; in other terms, how to invert the call function, i.e. the function C = C (σ ) which gives the price of the option as a function of the volatility.
The first problem may be faced in several ways. An approach is to use the standard software based on the power series expansion of the cumulative normal distribution function (cndf). Alternatively, N can be approximated with suitable functions, generally of rational form. In this framework, there is a variety of contributions, see for example Bowling et al. (2009), Choudhury (2014) and Eidous and Abu-Shareefa (2020), who focus on the choice of the best coefficients. Hofstetter and Selby (2001) obtained approximations by replacing the cndf with the logistic distribution; Li (2008) developed a heuristic closed-form method based on rational functions which requires one or two steps of the Newton-Raphson algorithm to improve the accuracy. Unfortunately it works only in a limited domain. Jacquier and Lorig (2015) compute the option price by a quadrature of the inverse Fourier transform and the "true implied volatility by fitting the SVI parametrization to it". Other closed-form approximations that work only in the case S = X are the Pólya approximation (Pólya 1949; Matić et al. 2017) and the logistic approximation (Pianca 2005). A completely different point of view is due to Fan and Mancini (2009), who derive the price of a European call by a "nonparametric regression".
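As a concrete reference point for problem (a), the textbook pricing formula (1.1)-(1.2) can be sketched in a few lines of Python. This is a minimal illustration of the standard Black-Scholes call, not the approximation developed in this paper:

```python
import math

def norm_cdf(x):
    # standard normal cdf, expressed via the error function (cf. (1.2))
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call; X = K*exp(-r*T) is the
    # present value of the strike, as in (1.1)
    X = K * math.exp(-r * T)
    srt = sigma * math.sqrt(T)
    d1 = math.log(S / X) / srt + 0.5 * srt
    d2 = d1 - srt
    return S * norm_cdf(d1) - X * norm_cdf(d2)
```

With the classic textbook parameters S = 100, K = 100, T = 1, r = 5% and σ = 20%, this returns a call price of about 10.45.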
Concerning the second problem, the inverse of the call function does not have an analytical representation and therefore the problem can only be approached by numerical methods (Newton-Raphson and other iterative schemes) and, in some instances, even those methods may fail for technical reasons (Orlando and Taglialatela 2017; Dura and Moşneagu 2010; Lorig et al. 2014; Liu et al. 2019). Recently, Liu et al. (2019) and Cao et al. (2019) suggested new numerical methods to reconstruct the volatility through neural networks.
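A minimal Newton-Raphson inversion for problem (b) can be sketched as follows (the pricing helper is restated so the snippet is self-contained; here X already denotes the discounted strike). Real implementations need the safeguards discussed above, since the iteration can fail when vega is tiny:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, X, T, sigma):
    # X denotes the discounted strike K*exp(-r*T)
    srt = sigma * math.sqrt(T)
    d1 = math.log(S / X) / srt + 0.5 * srt
    return S * norm_cdf(d1) - X * norm_cdf(d1 - srt)

def implied_vol(C, S, X, T, sigma=0.5, tol=1e-10, max_iter=100):
    """Newton-Raphson on sigma -> C(sigma); may diverge deep ITM/OTM."""
    for _ in range(max_iter):
        diff = bs_call(S, X, T, sigma) - C
        if abs(diff) < tol:
            break
        srt = sigma * math.sqrt(T)
        d1 = math.log(S / X) / srt + 0.5 * srt
        vega = S * math.sqrt(T) * math.exp(-0.5 * d1 * d1) / math.sqrt(2.0 * math.pi)
        sigma -= diff / vega
    return sigma
```

Round-tripping a price generated at σ = 25% recovers the volatility to high precision when the option is near the money; far from the money the same loop can overshoot, which is precisely the failure mode the cited papers address.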
A different approach to the solution of both problems is to approximate directly the call function with a suitable function, and then to use the inverse of such a function to derive the approximated implied volatility. The rationale of such an approach is that "The cost and inconvenience of iterating also motivate the search for explicit formulas. For example, traders often need to plot intra-day implied volatility in real time. In this case, the non-numerical approach such as using an explicit formula is a must" (Li 2005). In this way, practitioners can use spreadsheets to derive the price of the call as a function of the volatility and the implied volatility as a function of the price. This is of particular importance since only volatility matters in option valuation while the direction of the asset price does not. Moreover, when traders agree on the volatility they agree, also, on the option value and personal preferences are not relevant.
Generally, such closed form approximations rely on Taylor approximations or on the power series expansion of the cumulative normal distribution function (cndf) [see for example Manaster and Koehler (1982), Brenner and Subrahmanyam (1998), Bharadia et al. (1995), Chance (1996), Corrado and Miller (1996a, b), Liang and Tahara (2009) and Li (2005)] [for a review of the subject see Orlando and Taglialatela (2017)]. In particular, Li proposes to approximate the call function with its third or second order Taylor polynomial and derives the implied volatility by solving a third or second degree equation. Obviously, the Taylor polynomials are local in character, so the approximations are useful only in the case S = X or S close to X and only for restricted values of σ and C.
In this paper, we propose a different approach; to be precise, for every value of S, X and T , the call function C = C (σ ) is a sigmoidal function with finite limits at 0 and +∞; so we propose to approximate globally C with a function which has the same behaviour at 0, at +∞ and at the inflection point of C . Moreover, since we have to use the inverse of the approximating function to approximate the inverse of C , we need the approximating function to have an inverse which is easily analytically representable. So we have been led to look for the approximating function by means of the hyperbolic tangent as in Orlando and Taglialatela (2020), which is the prototype of a sigmoidal function with an easy inverse.
The paper is organized as follows. In the first section, we consider the case S ≠ X and we introduce the so-called "standardized call function", which is a single-parameter function representing the general family of calls, and we propose the desired approximations of the call functions. In the second section we propose the approximation of the call function in the case S = X. The third section is devoted to deriving the approximations of the implied volatility, again in the cases S ≠ X and S = X. In the fourth section, we present some results of numerical simulations and compare such results with the ones given by other authors. The performance of this closed form solution as well as empirical tests on market data are included. Finally, the last section summarizes the work and concludes.

The standardized call function
In order to simplify the presentation, we introduce a family of standardized call functions χ_α, depending on a single parameter α > 0.
The following Proposition contains the main properties of the mappings χ α .

Proof of (i)
It is a trivial consequence of the limits at ±∞ of N (x).
Proof of (ii) Computing the derivative, we derive that χ_α(x) is strictly increasing in ]0, +∞[.
For S, X and T fixed, the relationship between the call function C = C(σ) and the family of functions (χ_α)_{α>0} is contained in the following Proposition 2.2. Let us fix S > 0, X > 0 and T > 0, with X ≠ S, and let us put α := √(2 |log(S/X)|); then C(σ) can be expressed through the function χ_α. Proof. Assume X > S; since α²/2 = log(X/S) = −log(S/X), and therefore X = S e^{α²/2}, the claim follows. Assume X < S; since α²/2 = log(S/X) = −log(X/S), and therefore S = X e^{α²/2}, the claim follows analogously.
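The standardization of Proposition 2.2 can be sketched numerically; reading α²/2 = |log(S/X)| off the proof above, the parameter depends only on the absolute log-moneyness, so S/X and X/S map to the same α:

```python
import math

def alpha(S, X):
    """Standardization parameter of Proposition 2.2: alpha^2/2 = |log(S/X)|."""
    if S == X:
        raise ValueError("alpha is defined only for S != X")
    return math.sqrt(2.0 * abs(math.log(S / X)))
```

The symmetry in S and X is what lets a single family (χ_α) cover both in-the-money and out-of-the-money calls.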

Construction of the approximating functions
From Proposition 2.2 it follows that, in order to get a good approximation of the call functions C(σ), it is sufficient to give, for all α > 0, a good approximation χ̃_α of χ_α.
For the sake of simplicity, let us fix α > 0 so that we can omit the index α and simply denote by χ and χ̃ the mappings χ_α and χ̃_α.
By Proposition 2.1, we know that χ has a sigmoidal shape. This fact suggests that we look for an approximation based on the hyperbolic tangent, which has a similar shape and has the advantage of having a very simple inverse function

arctanh(x) = (1/2) log((1 + x)/(1 − x)). (2.4)

In our case, we look for an approximating function χ̃ built by composing the hyperbolic tangent with a function ϕ : ]0, +∞[ → ℝ which is strictly increasing and satisfies the conditions ϕ(0⁺) = −∞ and ϕ(+∞) = +∞, so that χ̃ is strictly increasing and tends to 0 as x tends to 0 and to 1 as x tends to +∞. For example, we can choose ϕ depending on constants c₁ > 0 and c₂ > 0 so that ϕ(x) is strictly increasing and has the desired behaviour at 0 and +∞. Obviously, we have to choose the constants c₁, c₂, c₃ in such a way that χ̃ gives the best approximation of χ; hence we impose that both χ and χ̃ have an inflection point at x = 1 with the same tangent lines there, i.e. we impose that c₁, c₂ and c₃ satisfy the conditions (2.7) and (2.8). If we put β = e^{α²/2} N(−α), then we can rewrite such equations so that Eq. (2.10) gives (2.11) and consequently (2.12). Finally, from (2.8) it follows (2.13). Hence, c₁, c₂ and c₃ are uniquely determined and depend only on α. We have only to check that c₁ and c₂ are positive.
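The matching idea, imposing the value and the slope of the target sigmoid at a chosen point, can be illustrated with a generic two-parameter tanh fit. This is a sketch of the principle only: the paper's ϕ, with its constants c₁, c₂ and c₃, also matches the behaviour at 0 and +∞, which this toy does not:

```python
import math

def tanh_match(f, fprime, x0):
    """Fit g(x) = 0.5*(1 + tanh(a*(x - x0) + b)) so that g(x0) = f(x0)
    and g'(x0) = f'(x0)."""
    t = 2.0 * f(x0) - 1.0                  # this must equal tanh(b)
    b = math.atanh(t)
    a = 2.0 * fprime(x0) / (1.0 - t * t)   # since g'(x0) = a*(1 - tanh(b)^2)/2
    return a, b

# example: match the standard normal cdf and its density at x0 = 0
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
a, b = tanh_match(Phi, phi, 0.0)
```

At x0 = 0 the fit gives b = 0 and a = √(2/π), i.e. the leading coefficient of the classical tanh approximations of the normal cdf.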
The positivity of c 2 follows from (2.11); the proof of the positivity of c 1 is a little more involved.
In fact, the sign of c₁ is the same as the sign of a product of two factors. The second factor is positive; hence, the sign of c₁ is the same as the sign of the first one, given in (2.15). So, in order to prove that c₁ is positive, it is sufficient to prove the corresponding inequality. Such an inequality is an immediate consequence of the Komatsu-Pollak estimate (Komatsu 1955; Pollak 1956). For the reader's convenience, we provide a proof in "Appendix".

The approximating call functions
Hence, for all α > 0 we have found a good approximation to χ_α, namely the function χ̃_α, where c₁(α), c₂(α) and c₃(α) are given by (2.12), (2.11) and (2.13). From this and from Proposition 2.2, we can conclude that for all S > 0 and X > 0, with S ≠ X, a good approximation to C(σ) is the corresponding function C̃(σ).

Approximating the call function when S = X
In the special case S = X, we have C(σ) = S (2N(σ√T/2) − 1). Recalling the error function, defined by

erf(x) := (2/√π) ∫₀^x e^{−t²} dt,

we can also write C(σ) = S erf(σ√T/(2√2)). Hence, in order to approximate the call function, we have to approximate the function N(x) or, equivalently, erf(x). There are a lot of approximate formulae for N(x) as well as for erf(x); see Choudhury (2014), Eidous and Abu-Shareefa (2020) and the references therein. However, not all of these formulas are useful for our purpose, since the inverse is not easy to compute (e.g. Eidous and Abu-Shareefa 2020) or they are already similar to the hyperbolic tangent [see Page (1977) or the almost identical formula in Bowling et al. (2009)].
We propose to approximate erf(x) with a hyperbolic-tangent-based function which has the same limit at +∞ and the same third-order Taylor expansion at 0 as the error function, and therefore to approximate C(σ) accordingly. We also consider the approximations F(x) and P(x), obtained, respectively, in Fairclough (2000) and Page (1977) by an optimization procedure. Numerical tests show that our approximation, F(x) and P(x) provide approximations of the same order of accuracy (see Sect. 5.2 for details).
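For intuition on why tanh-type formulas work well here, one can check numerically how close a Page-style tanh approximation of the normal cdf is to the exact one. The cubic coefficient below is the value commonly quoted for this family of formulas; treat it as an assumption, not as the paper's calibrated constant:

```python
import math

def Phi(x):
    # exact standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_tanh(x):
    # Page-style approximation: 0.5*(1 + tanh(sqrt(2/pi)*(x + c*x^3)))
    c = 0.044715  # commonly quoted coefficient for this family (assumption)
    return 0.5 * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + c * x ** 3)))

# maximum absolute deviation on a grid over [-4, 4]
max_err = max(abs(Phi(x / 100.0) - Phi_tanh(x / 100.0)) for x in range(-400, 401))
```

On [−4, 4] the deviation stays well below 10⁻³, which is of the same order as the accuracies reported in Sect. 5.2 for this class of approximations.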

Approximation of the implied volatility
In order to find the implied volatility we should invert the call function C, i.e. we should be able to solve the equation C(σ) = C. Since C̃ is a good approximation of C, we approximate the implied volatility by solving the equation C̃(σ) = C. (4.1)

Case S ≠ X
In this case, by (2.18), Eq. (4.1) is equivalent to an equation in the hyperbolic tangent which, according to (2.5), is in turn equivalent to an equation in ϕ. On the other hand, for any λ ∈ ℝ the latter equation has a unique positive solution. Thus, the implied volatility can be approximated in closed form, where c₁(α), c₂(α) and c₃(α) are given by (2.12), (2.11) and (2.13).

Case S = X
In this case, the equation C̃(σ) = C is equivalent to a cubic equation. Now it is well known that for all p > 0 the equation x³ + px = q has a unique real solution, given by Cardano's formula. Hence, the approximating value of the implied volatility, i.e. the unique solution to Eq. (4.4), is given by the corresponding closed-form expression.
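The unique-real-root claim corresponds to a depressed cubic with positive linear coefficient, for which Cardano's formula can be sketched as follows (the reduction of Eq. (4.4) to this form uses the paper's constants, which we do not reproduce here):

```python
import math

def cardano_root(p, q):
    """Unique real root of x**3 + p*x = q when p > 0 (the cubic is then
    strictly increasing, so exactly one real root exists)."""
    if p <= 0:
        raise ValueError("requires p > 0")
    d = math.sqrt(q * q / 4.0 + p ** 3 / 27.0)
    cbrt = lambda t: math.copysign(abs(t) ** (1.0 / 3.0), t)  # real cube root
    return cbrt(q / 2.0 + d) + cbrt(q / 2.0 - d)
```

Since p > 0 forces the discriminant under the square root to be positive, no complex arithmetic is needed, which is what makes the ATM inversion a genuinely explicit formula.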

Numerical results
Volatility in the markets has been calculated by many. Mo and Wu (2007), on a sample from January 3, 1996 to August 14, 2002 of 346 weekly observations for each index, reported that the implied volatility on the S&P 500 Index (SPX), the FTSE 100 Index (FTS) and the Nikkei-225 Stock Average (NKY) ranges from 15 to 35%, with S/X between 80 and 120% and maturity 1 m, 3 m, 6 m and 12 m. Glasserman and Wu (2011), on a sample consisting of 2049 data points from August 9, 2001 to June 16, 2009, found that the implied volatility of currency options on EURUSD, GBPUSD and USDJPY ranges from 5 to 43% for the 10-delta put (P10d), 25-delta put (P25d), At-The-Money (ATM), 25-delta call (C25d) and 10-delta call (C10d). Figure 1 shows the overall daily distribution of the VIX (CBOE 2020) from inception up to 2015. Throughout these years, median volatility has been ∼ 18%, the mode ∼ 13%, the average ∼ 20%, and high values are rare: volatilities above ∼ 30% belong to the 10th decile, while the remaining ∼ 90% of the VIX distribution lies between 0 and ∼ 30%.

On the estimation error
First of all notice that, for all α > 0, the approximation error E_α(x) between χ_α and χ̃_α corresponds to the moneyness S/X = e^{α²/2} > 1 as well as to the moneyness S/X = e^{−α²/2} < 1.
We start by showing (Fig. 2) the graph of E_α(x) for α = 0.3124, corresponding to the moneyness S/X = 1.05 as well as S/X = 0.95. There it can be seen that the error of the approximation E_α(x) is greater than 0.01 only for x > 2, e.g. for T = 0.25, when σ > 1.2495, which is an unrealistic level of the volatility. More generally, Tables 1 and 2 report the error E_α(x) for some significant values of x and α in absolute and percentage terms. Notice that the chosen values of α correspond approximately to S/X equal to 1.03, 1.13, 1.32, 1.65, 2.18 as well as to S/X equal to 0.97, 0.88, 0.75, 0.61, 0.46, respectively. These tables show that the approximation is sufficiently good, especially for the small values of x, which correspond to realistic values of σ.
We have also tried to compute the maximum value of the approximation error as a function of α. Indeed, for every α > 0 the function E_α(x) is smooth in ]0, +∞[, and tends to 0 as x tends to 0 and to +∞. Thus, the maximum value of E_α(x) in the interval ]0, +∞[ exists and can be located by finding the zeros of its derivative, where c₁(α), c₂(α) and c₃(α) are given by (2.12), (2.11) and (2.13). Unfortunately, finding the zeros of E′_α analytically is an almost impossible task. The same applies to finding a function of α bounding the maximum error for every α > 0. For the said reason, for a given α > 0, in order to compute the maximum we resort to numerical methods, namely to the Nelder-Mead simplex algorithm (Nelder and Mead 1965; Gao and Han 2012), which, even though it is slower than gradient-based algorithms, works well when the call function to approximate is steep or flat [for issues related to those cases see Orlando and Taglialatela (2017)]. Table 3 reports the maximum value of the approximation error for the same values of α considered in Tables 1 and 2.
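For a one-dimensional error profile the simplex search degenerates to a two-point scheme; the following sketch (our own minimal variant, not the Gao-Han implementation) locates a maximum by minimizing the negated function:

```python
import math

def nm_minimize_1d(f, x0, step=0.1, tol=1e-10, max_iter=1000):
    """Tiny 1-D Nelder-Mead: the simplex is a pair of points (a, b)."""
    a, b = x0, x0 + step
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        if fa > fb:
            a, b, fa, fb = b, a, fb, fa      # keep a as the best point
        r = 2.0 * a - b                      # reflect worst point through best
        fr = f(r)
        if fr < fa:
            e = 3.0 * a - 2.0 * b            # expansion
            fe = f(e)
            b, fb = (e, fe) if fe < fr else (r, fr)
        elif fr < fb:
            b, fb = r, fr
        else:
            b = 0.5 * (a + b)                # contraction toward the best point
            fb = f(b)
        if abs(b - a) < tol:
            break
    return a if fa <= fb else b

# demo: maximize g(x) = x*exp(-x) on x > 0 (true maximizer is x = 1)
xmax = nm_minimize_1d(lambda x: -x * math.exp(-x), 0.5)
```

Being derivative-free, the scheme keeps working where the error profile is very steep or very flat, which is the reason given above for preferring it to gradient-based methods.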

Monte Carlo analysis on the estimation error
Notice that, for all α > 0, T > 0, σ > 0, the quantity considered represents the approximation error between the call C(σ) and its approximation C̃(σ) for the case X = 1 and S = e^{α²/2} > 1, as well as for the case S = 1 and X = e^{α²/2} > 1.
The following Table 4 shows that such an approximation error is very good for the most traded combination of T , S/X and σ .
We have also performed a Monte Carlo analysis on the error for 10,000 values of α and 500 instances of σ, uniformly distributed in the interval ]0, 1.25]. We first take the whole interval ]0, 1.25] and then we divide it into five parts (each containing 100 values of σ) to illustrate where the differences are higher or lower.
In Tables 5, 6 and 7, we show such an analysis for the cases where time equals one month, one quarter and one semester, respectively.
Notice that the biggest errors are when σ ∈ [0.75, 1.25], i.e. for those cases in which the volatility is extremely rare (see Fig. 1). Results were obtained by simulating randomly 10,000 occurrences of α for a lattice of 500 σ; the full matrix of 10,000 × 500 has been divided into 5 parts, each composed of 10,000 × 100 data. Note that the biggest errors are for high (and unlikely) levels of volatility.
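The bucketing scheme of this Monte Carlo experiment can be sketched as follows. As a stand-in error function we use the gap between the exact ATM call (with S = X = 1) and a crude first-order tanh proxy for erf; this is an illustration of the experimental design, not the paper's calibrated formula:

```python
import math
import random

def atm_call(sigma, T=0.25):
    # exact ATM call with S = X = 1: erf(sigma*sqrt(T)/(2*sqrt(2)))
    return math.erf(sigma * math.sqrt(T) / (2.0 * math.sqrt(2.0)))

def atm_call_tanh(sigma, T=0.25):
    # crude stand-in approximation: first-order tanh proxy for erf
    y = sigma * math.sqrt(T) / (2.0 * math.sqrt(2.0))
    return math.tanh(2.0 * y / math.sqrt(math.pi))

random.seed(42)
edges = [0.0, 0.25, 0.5, 0.75, 1.0, 1.25]
bucket_max = []
for lo, hi in zip(edges, edges[1:]):
    errs = [abs(atm_call(s) - atm_call_tanh(s))
            for s in (random.uniform(lo, hi) for _ in range(2000))]
    bucket_max.append(max(errs))
```

Even for this toy proxy, the error grows with σ, so the largest deviations concentrate in the last bucket, echoing the pattern reported in Tables 5, 6 and 7.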

Call when S = X
To show the approximation's error, in Fig. 3 we plot the graph of the difference between the function erf(x) and its proposed approximation as defined in Sect. 3. As previously mentioned, there is a large number of approximations for the standard normal distribution or, equivalently, for the error function. In Table 8, we compare our approximation with Bowling et al. (2009) [which is equivalent to Page (1977)] and Eidous and Abu-Shareefa (2020). As shown, the approximations are very close and ours has the smallest RMSE.

Monte Carlo analysis on the estimation error
Now we consider the lattice L composed of the pairs (σ, T) with σ ∈ [0, 1.25] and T = k/12, with k = 1, …, 24. From it, we randomly extract 10,000 samples. The statistics on the errors reported in Table 9 confirm the good quality of the approximation.

Implied volatility
In Orlando and Taglialatela (2017), we compared the results derived with the Brenner and Subrahmanyam (1998), Corrado and Miller (1996a, b) and Li (2005) formulae. As we found that the latter is the most accurate, we take S. Li's formula as our benchmark. Notice that, for the reader's convenience, the ranges are similar to the ones published in AMC (see Li 2005). This is because we wanted to make it easier for the reader to compare with one of the most precise approximations available in the literature.

Case when S ≠ X
In Table 10, we compare the results obtained by S. Li's formula, denoted by σ_L, with those obtained with formula (4.3), denoted by σ̃. The prices of all calls have been generated with the BSM model and, then, the implied volatility has been derived by using the inversion formulae (Table 10: S = $100, X = $125, risk-free rate 5% p.a.; Table 11: S = $100, X = $105, risk-free rate 0.5% p.a.). The table has been divided into four parts: the first and the last show the differences when z is very small or very large, the second (more granular) part shows some significant values for those cases in which the volatility is more common, and the third (less granular) part covers high volatility. Each column provides the results of the said formulae for maturities T from 0.1 to 1.5 versus the true volatility. As shown, σ̃ is available even when σ_L is not, and it approximates the implied volatility better for all maturities, moneyness and levels of σ, except where σ is very high (an occurrence which is very unlikely, as shown in Fig. 1). Moreover, the error of the σ̃ formula is more consistent.
When S is close to X, the proposed approximating function is structurally less precise than the S. Li formula. This is because his formula is derived from the Taylor polynomial with center S/X = 1. Nevertheless, in Table 11 we report some results for the case S/X = 1.05 and low levels of σ, where our formula produces a result close to the true value for short-term traded options (i.e. the most traded), while the benchmark fails to provide any value.

Case when S = X
Let us now consider the ATM case. In Table 12, we compare the implied volatility σ̃ as defined in (4.5) with σ_L calculated with S. Li's formula (Li 2005). Notice that the precision of our formula is of the order of 10⁻⁶. The implied volatilities σ_L and σ̃ have been calculated with S. Li's formula and (4.5), respectively, for S = $100, X = $100, time to expiration T = 0.1, 0.2, …, 1.5, risk-free rate 5% p.a.
Furthermore, in Table 13, we show a Monte Carlo simulation where the prices of all calls are generated with the BSM model for volatility ranging from 15 to 125% and maturity between 0.1 and 1.5 years. The left part of Table 13 displays some statistics on the error between the true volatility and the estimated one.

Empirical results on SPDR S&P 500 ETF TRUST
In Table 14 we show how Eq. (4.3) compares with Li (2005) and the widely used Brenner and Subrahmanyam (1998) formula (σ_B). As shown, S. Li's formula often does not provide any value, whereas the Brenner and Subrahmanyam formula and ours give an estimate in all cases. Unfortunately, the Brenner and Subrahmanyam formula is precise only for ATM options, which may exist only at inception.

Computational performance of the algorithm: hybrid method versus closed form solution
With regard to computational performance, we recall that the scope of this work is limited to analytical approximations. Among the reasons for this choice is the practical need to obtain a fast result. In order to assess that, we take as a benchmark a hybrid numeric algorithm able to quickly compute the implied volatility in a large number of cases, such as the one proposed in Orlando and Taglialatela (2017). While we do not want to discuss all potential alternatives in this instance, we limit ourselves to mentioning that there are other solutions, such as the built-in blsimpv available in Matlab (2020) or the heuristic hybrid method proposed by Li (2008) as implemented by Whirdy (2020). The idea in both papers, Li (2008) and Orlando and Taglialatela (2017), is to derive numerically the implied volatility from a suitable starting point with the following characteristics: as close as possible to the true value and able to "guide" a gradient-based algorithm so that it does not diverge. Apart from the said similarity, the two approaches are quite different in their implementation. In fact, the first one calculates the Black-Scholes implied volatility by using M. Li's rational function approximations for the initial estimate, followed by a third-order Householder root finder (i.e. using Vega, Vomma and Ultima) (Li 2008; Whirdy 2020). The second adopts the Newton-Raphson algorithm by taking as a starting point the inflection of the call function (Orlando and Taglialatela 2017). The problem with the first approach is that M. Li's formula (Li 2008) works only in a very strict domain, while the second approach (Orlando and Taglialatela 2017) works almost every time (unless the objective function is flat or vertical). Table 15, for a given true volatility of 10%, displays the implied volatility obtained with the two above-mentioned approaches. As shown, once more, the Li (2005) formula in some instances does not provide a solution and the Whirdy algorithm (Li 2008; Whirdy 2020) shoots off.
As the chosen parameters are not uncommon to find in the current market environment, we discard (Li 2008;Whirdy 2020) in favour of Orlando and Taglialatela (2017).
Having demonstrated that the best available hybrid solution is Orlando and Taglialatela (2017), in this work we compare the said algorithm with the proposed analytical solution. In Table 16 we show the performance of the Newton-Raphson method as described in Orlando and Taglialatela (2017) versus σ̃ on a test run over 10,000 options. The closed form solution, executed in VBA, is quite fast in finding the 10,000 volatilities for the corresponding ATM, ITM and OTM options and, as expected, it is faster than its corresponding numerical algorithm.
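The shape of such a benchmark is easy to reproduce. The sketch below times a Newton-Raphson inversion against a closed-form estimate over 10,000 ATM options, using the Brenner-Subrahmanyam formula as the stand-in closed form, since the paper's own formula (4.5) involves constants not reproduced here:

```python
import math
import time

def atm_call(S, sigma, T):
    # ATM (S = X) Black-Scholes call: S*(2*N(sigma*sqrt(T)/2) - 1)
    return S * math.erf(sigma * math.sqrt(T) / (2.0 * math.sqrt(2.0)))

def iv_newton(C, S, T, sigma=0.5):
    # Newton-Raphson with the exact ATM vega S*sqrt(T)*phi(sigma*sqrt(T)/2)
    for _ in range(50):
        diff = atm_call(S, sigma, T) - C
        if abs(diff) < 1e-12:
            break
        srt = sigma * math.sqrt(T)
        vega = S * math.sqrt(T) * math.exp(-srt * srt / 8.0) / math.sqrt(2.0 * math.pi)
        sigma -= diff / vega
    return sigma

def iv_bs(C, S, T):
    # Brenner-Subrahmanyam closed-form estimate for ATM options
    return C / S * math.sqrt(2.0 * math.pi / T)

prices = [atm_call(100.0, 0.10 + 0.0001 * i, 0.25) for i in range(10000)]
t0 = time.perf_counter()
iv_n = [iv_newton(C, 100.0, 0.25) for C in prices]
t1 = time.perf_counter()
iv_c = [iv_bs(C, 100.0, 0.25) for C in prices]
t2 = time.perf_counter()
```

The iterative branch is essentially exact but pays for several cdf evaluations per option, while the closed form does one multiplication per option at the cost of a small bias that grows with σ, the same trade-off Table 16 quantifies for the paper's formula.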

Stress tests
Stress testing is a key activity for managing risk. Risk is represented by σ, which is the only variable that may jump suddenly. By sampling the approximating functions in different segments of σ, stress testing boils down to taking one of those intervals. The tables in Sects. 5.1.2, 5.2.1 and 5.3.1 list the possible outcomes for both volatility and strike. Alternative formulas (Corrado and Miller 1996a, b; Li 2005, 2008) cannot provide any solution for some values.

Conclusions
In regard to the BSM formula "in many respects the story of the establishment of the Black-Scholes-Merton model simply marks the emergence of contemporary financial risk management" (Millo and MacKenzie 2009). Despite the many pitfalls of the BSM formula, there are several reasons that keep it alive. For example "the BSMs biggest strength is the possibility of estimating market volatility of an underlying asset generally as a function of price and time without, for example, direct reference to expected yield, risk aversion measures or utility functions. The second greatest strength aspect is its self-replicating strategy or hedging: explicit trading strategy in underlying assets and risk-less bonds whose terminal payoff, which equals the payoff of a derivative security at maturity. In other words, theoretically an investor can continuously buy and sell derivatives in the strategy and never incur loss. It is also simple and mathematically tractable as compared to some of its more recent variations" (Yalincak 2012). Therefore, even though in many respects BSM is outdated and has a number of shortcomings, it is still widely used especially for real time computations and for extracting the implied volatility.
In this work, we have recalled the importance of calculating the value of the call for pricing as well as for inferring the implied volatility. To be precise, a standardized call function has been introduced to represent the whole family of calls and to simplify the calculations. Then we have shown how the approximation of the aforementioned standardized call can be performed through the hyperbolic tangent instead of the usual Taylor truncation. This allows a greater accuracy for extreme values of σ, which makes this approach particularly suitable for stress testing and hedging purposes.
Finally, we have derived some explicit formulae for approximating the implied volatility that seem to be better than the ones proposed in the literature so far and are valid regardless of an option's moneyness. Therefore, because of its higher accuracy and flexibility, this approach could replace current methods with little additional effort.
Funding Open access funding provided by Università degli Studi di Bari Aldo Moro within the CRUI-CARE Agreement.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
On the other hand, since h(−∞) = 0 and h(+∞) = 0, it is sufficient to prove that there exists x₁ > 0 such that h is strictly increasing in ]−∞, x₁] and strictly decreasing in [x₁, +∞[.
Therefore, in order to study the sign of h′, we need to study the sign of the polynomial p(u) := u² − 2(π − 1)u + 1 in the interval ]−1, 1[. Since p(u) has two real roots, u₁ and u₂, such that 0 < u₁ < 1 < u₂, one has p(u) > 0 if u ∈ ]−1, u₁[ and p(u) < 0 if u ∈ ]u₁, 1[. On the other hand, it is easy to see that u(x) is strictly increasing; therefore, if we take x₁ such that u(x₁) = u₁, then h′ is positive in ]−∞, x₁[ and negative in ]x₁, +∞[. Thus, h is strictly increasing in ]−∞, x₁] and strictly decreasing in [x₁, +∞[, as desired.