
Implementation of Newton’s Algorithm Using FORTRAN


Much of today's software in engineering and the life sciences has to deal with non-linear problems, and many state-of-the-art methods exist to reduce such problems to linear ones. This work focuses on the implementation of Newton's algorithm (also known as Newton's method) to determine the roots of a given function within a user-defined interval. The implementation language is FORTRAN. Even though FORTRAN is considered outdated, it still sees wide application due to its long history and the existing legacy code. The program is written so that a user can provide a function and an interval; the code then iterates over the interval and displays all the possible roots within it. The results are compared at the end for accuracy. The program succeeds in finding all the roots within an interval.


Introduction

FORTRAN was one of the earliest programming languages in widespread use. Learning it therefore gives a better grasp of the programming languages that followed it. As a learning exercise, the task was to write a FORTRAN program that uses Newton's method to approximate the roots of a function over a fixed interval (both the function and the interval are supplied by the user). The paper proceeds as follows:

  • The theory behind Newton’s method is discussed.

  • The algorithm for the implementation of Newton’s method is described.

  • The program’s results are compared with actual data.

We examine three variations of Newton's method for finding the roots of a function of a single variable: the method in higher dimensions, a higher-order method, and the continuous method. A brief account of the development of Newton's method is given; no originality is claimed for this material. Emphasis is placed on applications of Newton's method in abstract analysis, in particular on surjectivity of functions between finite-dimensional Banach spaces [1].

Important theoretical results on Newton's method concerning the convergence properties, the error estimates, the numerical stability and the computational complexity of the algorithm have been surveyed. The Newton (or Newton–Raphson) method solves the nonlinear equation F(x) = 0, where F′ is the Fréchet derivative of F and X and Y are Banach spaces. If F is a real function, the geometric interpretation of the Newton method is well known [2].

The proposed method is shown to be efficient by means of numerical experiments and comparisons. The first experiment, for finding roots of scalar functions, is a numerical comparison between the new Newton formulas, Newton's method and a third-order Newton method. The offered algorithm is effective for one-dimensional real functions [3].

Theoretical Background

Like many other root-finding methods, Newton's method, also known as the Newton–Raphson method, is a mathematical technique for finding the best possible approximations to the roots of a real-valued function. For many simpler equations (e.g. linear, quadratic) there already exist closed-form formulas for the exact roots. But where the equations are far more complex, this method is very useful for obtaining quick and accurate results.

The passage from Newton's method to its higher-dimensional analogue (our first variation of Newton's method) and to its applications is not as straightforward. Why should one consider only the constant and the linear term of the Taylor expansion as an approximation of F? Be that as it may, Newton's method can be used to show that the equation F(x) = y admits a solution [1].

The first and second derivatives of F are denoted by F′ and F″ in the sense of Fréchet. A vast number of extensions and variants of these results have appeared in the literature. The Kantorovich theorem, by its sheer significance and by its original and powerful proof technique, is a masterpiece. Intensive research on Newton and related methods was initiated by the results of Kantorovich and his school [2].

The Newton–Kantorovich theorem, also referred to as the convergence theorem, was established by Kantorovich in 1948. An inexact Newton method was suggested by Dembo et al. [4]; this technique solves the linear Newton equation only approximately at each step. Convergence theorems for Newton's method in various cases were established by Fourier, Cauchy, and Fine. Newton first used the Newton iteration to solve a cubic equation in 1669 [3].

Figure 1 gives a basic explanation of how Newton’s method works:

  • At the start, the root is approximated with an initial guess.

  • The value of the function is calculated at that approximation.

  • The vertical distance (Δy) between the point on the graph and the x-axis is calculated.

  • If Δy is still larger than the tolerance, the slope of the function at the current approximation is calculated.

  • The point where the extended tangent line cuts the x-axis is the next approximation.

  • If the results of successive iterations are almost the same, we can assume that the root of the function has been reached.

Fig. 1 Graphical representation of Newton’s method. Source: Adapted from [5]

The above process can also be written in the following mathematical expression:

$$x_{1} = x_{0} - \frac{{f\left( {x_{0} } \right)}}{{f^{\prime}\left( {x_{0} } \right)}}.$$

And the general form of this process to find the nth approximation is:

$$x_{n} = x_{n - 1} - \frac{{f\left( {x_{n - 1} } \right)}}{{f^{\prime}\left( {x_{n - 1} } \right)}}.$$
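The recurrence above is straightforward to implement. The paper's program is written in FORTRAN; as a language-neutral illustration, the following Python sketch (the names `newton`, `f`, `df` are ours, not the authors') applies the update x_n = x_{n-1} - f(x_{n-1})/f'(x_{n-1}) until f(x_n) is close enough to zero:

```python
# Minimal sketch of the Newton iteration described above. This is an
# illustrative Python version, not the paper's FORTRAN code.

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Approximate a root of f starting from the initial guess x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:       # f(x) is close enough to zero: accept x
            return x
        x = x - fx / df(x)      # Newton step: x_n = x - f(x)/f'(x)
    return x                    # best approximation after max_iter steps

# Example: the root of f(x) = x^2 - 2 near x0 = 1 is sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```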

Constraints of Newton’s Method

Newton’s method is considered one of the fastest methods of this kind, since it normally requires fewer iterations to reach a result. Even so, there are some conditions, listed below, under which the method does not perform well.

  1. Function with no roots

    One of the most common cases in which the method fails is a function with no roots. The iterations keep running indefinitely: each time the value of the function is evaluated, it is never close to zero, so the search for the next possible root continues forever. The figure below shows why this is a problem (Fig. 2).

  2. Problems with the derivative

    To find the next approximation, the method needs not only the value of the function but also the value of its derivative. In the formulas above, the derivative appears in the denominator. Whenever the derivative is zero (a horizontal tangent line), the tangent can never intersect the x-axis, so the iteration cannot proceed. A problem can also arise if the derivative is very large: the next iteration then gives almost the same result as the previous one, and the process makes no progress. The following figure shows the zero-slope case (Fig. 3).

  3. Cyclic problem

    In some cases the iteration gets stuck between two points: one value leads to another, and that new value leads back to the old one. This loop prevents the process from advancing, so no roots can be found for the function (Fig. 4).
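A practical implementation has to guard against all three failure modes above. The sketch below is illustrative Python, not the paper's FORTRAN program; the tolerances and names are our assumptions. It returns `None` when the derivative vanishes, when a two-point cycle is detected, or when the iteration budget is exhausted without convergence:

```python
# Hedged sketch of Newton's method with guards for the three failure
# modes discussed above. Illustrative only; thresholds are assumptions.

def safe_newton(f, df, x0, tol=1e-10, max_iter=50):
    history = [x0]                       # past iterates, for cycle detection
    x = x0
    for _ in range(max_iter):
        d = df(x)
        if abs(d) < 1e-14:               # (b) zero slope: step is undefined
            return None
        x_new = x - f(x) / d
        if abs(x_new - x) < tol:         # successive iterates agree: root found
            return x_new
        if len(history) >= 2 and abs(x_new - history[-2]) < tol:
            return None                  # (c) cycle between two points
        history.append(x_new)
        x = x_new
    return None                          # (a) budget exhausted: possibly no root
```

For example, f(x) = x³ − 2x + 2 started at x₀ = 0 cycles between 0 and 1, which the cycle guard detects.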

Fig. 2 Case with no possible roots. Source: Adapted from [6]

Fig. 3 Function with zero slope. Source: Adapted from [7]

Fig. 4 Cyclic problem. Source: Adapted from [8]

Algorithm of the Program

The flowchart below describes how the program proceeds in order to calculate the roots of a function over an interval (Fig. 5).

Fig. 5

Algorithm for Newton’s method

The flow chart can be broken down into two flow charts for easier understanding and implementation (Fig. 6).

Fig. 6

Algorithm for Newton’s method (broken down)

Variables of the Program

In order to understand any program, it is necessary to first know what its variables stand for. The following table describes them (Table 1).

Table 1 Variables and their description


Before writing the algorithm, it is necessary to identify possible errors that could be problematic for the program; methods to prevent them are then incorporated into the program. The following error conditions were considered essential to handle:

  • The lower bound must be different from the upper bound.

  • The lower bound must be less than the upper bound.

  • The step width must be a positive number.

  • The number of iterations must stay below the maximum allowable number of iterations.

  • The maximum number of roots must be greater than zero.
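These checks can be gathered in a small validation routine run before the root search begins. The Python sketch below is illustrative; the names and messages are ours, not the FORTRAN program's:

```python
# Hedged sketch of the input checks listed above. An empty result means
# the inputs are acceptable; otherwise the messages describe the errors.

def validate_inputs(lower, upper, step, max_iter, max_roots):
    errors = []
    if lower == upper:
        errors.append("lower bound must differ from the upper bound")
    if lower > upper:
        errors.append("lower bound must be less than the upper bound")
    if step <= 0:
        errors.append("step width must be a positive number")
    if max_iter <= 0:
        errors.append("maximum number of iterations must be positive")
    if max_roots <= 0:
        errors.append("maximum number of roots must be greater than zero")
    return errors
```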

Comparison of Results

In order to test the credibility and accuracy of the program, it is necessary to compare its results with known values. For this purpose, the program was run on three given functions (three cases), and the roots it calculated were compared with those seen on a plotted graph. Online tools were used for plotting.

  • Case-1

    The function for case-1 is given below:

    $$\frac{{\left( {1 - 5x - 2x^{2} } \right) \times \tan \left( {2x} \right)}}{{\left( {1 - x} \right)\sin \left( {3x} \right) \times \cos \left( {4x} \right)}}.$$

    The results obtained by the program are shown by the figure below (Fig. 7).

    The following graph shows the roots for the same interval as given in the program for case 1 (Fig. 8).

  • Case-2

    The function for case-2 is given below:

    $$\frac{{x^{5} }}{2} + 2x^{4} + 3x^{3} - 8x^{2} - 5x - 1.$$

    The results obtained by the program are shown by the figure below (Fig. 9).

    The following graph shows the roots for the same interval as given in the program for case-2 (Fig. 10).

  • Case-3

    The function for case-3 is given below:

    $$e^{ - x/5} \left( {\left( {1 - 2x} \right) \times \cos \left( {3x} \right)} \right) + e^{x/5} \left( {\left( {1 + 2x} \right) \times \sin \left( {2x} \right)} \right).$$

    The results obtained by the program are shown by the figure below (Fig. 11).

    The following graph shows the roots for the same interval as given in the program for case-3 (Fig. 12).

Fig. 7

Results by the program for Case-1

Fig. 8

Plot of the function in between the interval for Case-1

Fig. 9

Results by the program for Case-2

Fig. 10

Plot of the function in between the interval for Case-2

Fig. 11

Results by the program for Case-3

Fig. 12

Plot of the function in between the interval for Case-3

Discussion and Conclusion

The previous section shows, side by side, the results of the program and the actual roots of those functions. The comparison shows that the number of roots and their values match those on the graphs. It is therefore safe to say that the program is accurate and gives reliable results.

Finding the roots of complex equations can be difficult and tedious. With this technique, good estimates can be obtained easily, provided that no calculation errors occur within the interval.


  1. Chill R. Three variations on Newton’s method. Math Stud. 2008;79:215.

  2. Galántai A. The theory of Newton’s method. J Comput Appl Math. 2000;124(1–2):25–44.

  3. Saheya B, Chen G, Sui Y, Wu C. A new Newton-like method for solving nonlinear equations. Springerplus. 2016.

  4. Dembo R, Eisenstat S, Steihaug T. Inexact Newton methods. SIAM J Numer Anal. 1982;19:400–8.

  5. Koblitz N. Calculus. University of Washington; 1998.

  6. Department of Biochemistry and Molecular Biophysics. Roots of quadratic equations and the quadratic formula. 2016.

  7. Collier N, Kaw A, Paul J, Keteltas M. Zero slope—Newton–Raphson method. 2013.

  8. Burton A. Newton’s method and fractals. 2020.



Open Access funding enabled and organized by Projekt DEAL.

Author information



Corresponding author

Correspondence to Shahida Anusha Siddiqui.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Computational Statistics” guest edited by Anish Gupta, Mike Hinchey, Vincenzo Puri, Zeev Zalevsky and Wan Abdul Rahim.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Siddiqui, S.A., Ahmad, A. Implementation of Newton’s Algorithm Using FORTRAN. SN COMPUT. SCI. 1, 348 (2020).



Keywords

  • Newton’s Algorithm
  • Roots of a function
  • Non-linear problems