1 Introduction

An Archimedean spiral is a curve that flows outward from a point, winding around it with a fixed distance between successive turns. A formal definition of spirals is given by Lockwood (1967): “The word spiral, in its mathematical sense, means, properly speaking, a plane curve traced by a point which winds about a fixed pole from which it continually recedes.”

The Archimedean spiral is used in handwriting analysis, for example to identify patients diagnosed with Parkinson’s disease and to assess the severity of their condition (Pereira et al. 2016). The spiral can also be found in jewellery, clocks, cars and spiral wound springs (Pickover 1988).

In Sect. 5 we study the efficiency of our methodology on data from an important study of Parkinson’s disease, in which the handwriting of patients was assessed clinically by having them draw Archimedean spirals and straight lines (Pereira et al. 2016).

Consider an Archimedean spiral in two-dimensional space along which a sequence of points is observed. Several approaches in the literature fit an Archimedean spiral by least squares, such as Mishra (2004) and Jinting et al. (2018).

In this paper, we study a set of points in \({\mathbb {R}}^2\) that exhibits an approximately Archimedean spiral structure. A mathematical Archimedean spiral in two dimensions can be given by the function

$$\begin{aligned} {\varvec{g}}(t)=\left[ \begin{array}{c} r \cos t \hspace{10pt} r \sin t \end{array}\right] ^T \end{aligned}$$

where the radius r is defined by the equation \(r(t)=a t+b\), with \(a,b\in {\mathbb {R}}\) and t the time. Measurements subject to statistical error determine a statistical Archimedean spiral. The spiral equations are presented in more detail in Sect. 2.

The main goal of this paper is to revisit the estimation problem for a statistical Archimedean spiral using least squares optimization under certain assumptions. The optimization algorithm needs starting estimates of the parameters to begin its iterations; the methodology is introduced in Sect. 3. A good initial point matters, because a bad choice can lead the algorithm to a local minimum. Furthermore, these techniques also work well on the logarithmic spiral, by using a Taylor approximation and modifying the results accordingly.

Three methods are proposed to estimate the initial parameters for the optimization algorithm; they are presented in detail in Sect. 3.3. We also describe a fourth method, due to Mishra (2004), for estimating the spiral curve parameters; in fact, Mishra’s approach to fitting the Archimedean spiral inspired our own. However, Mishra’s method does not work well with clockwise data, whereas our proposed methods work with both clockwise and counter-clockwise data. Therefore, we reverse clockwise data into counter-clockwise order before applying Mishra’s method. These initial estimates are then updated by least squares in Sect. 3.4. The full procedure is laid out clearly in Sect. 3.

In Sect. 4, numerical examples with various choices of the radius parameters are used to form datasets on which the methods are applied. In addition, we apply our approach to real data from Pereira et al. (2016) in Sect. 5.

2 Archimedean Spiral Model

In general, a spiral curve winds around a definite point, usually called the pole or the center of the spiral, say \((\alpha ,\beta )\). If this point is selected as the pole of a polar coordinate system, then the general equation of a spiral is \(r=f(t)\), where f is a continuous function, r is the length of the radius from the centre and t is the angular position (amount of rotation) of the radius.

There are many types of spiral, and among the most common are Archimedean spirals. The general form of the Archimedean spiral is:

$$\begin{aligned} r(t)=a t+b, \end{aligned}$$
(1)

where b and a are the parameters that determine the initial radius of the spiral and the distance between its successive turns, respectively. The radius r increases as the time t increases.

In Cartesian coordinates, the Archimedean spiral (1) with center \((\alpha ,\beta )\) is described by the pair of equations

$$\begin{aligned} x=(at+b)\cos (t)+\alpha , \hspace{15pt} y=(at+b)\sin (t)+\beta , \end{aligned}$$
(2)

where \(r=\sqrt{(x-\alpha )^2+(y-\beta )^2}\).

Throughout the paper, we work with n pairs of data \((x_i, y_i)_{i=0,\dots ,n-1}\), where \(x_i,y_i\in {\mathbb {R}}\), and we assume that they resemble the trace of a spiral.

A statistical spiral is obtained from (2) by adding noise at equally spaced time points \(t_i=t_0+i\lambda\), where \(\lambda\) is the turn angle, to give data

$$\begin{aligned} \begin{array}{cc} x_i=(at_i +b)\cos (t_i)+\alpha +\epsilon _{1,i},\\ y_i=(at_i +b)\sin (t_i)+\beta +\epsilon _{2,i}, \end{array} \bigg \} \end{aligned}$$
(3)

where \(i=0, \dots , n-1\), and \(\varvec{\epsilon }_{i}=[\epsilon _{1,i},\epsilon _{2,i}]^T\) are small noise terms. We assume these noise terms follow independent normal distributions

$$\begin{aligned} \varvec{\epsilon }_{i} \sim N_{2}\left( {\textbf{0}}, \sigma ^{2} I\right) \end{aligned}$$
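
To make the model concrete, the following R sketch simulates data from Eq. (3); all settings (sample size, step, parameters, seed) are illustrative choices, not values taken from the paper.

```r
## A minimal simulation of Eq. (3): an Archimedean spiral r(t) = a*t + b,
## sampled at equally spaced angles t_i = t0 + i*lambda, with independent
## N(0, sigma^2) noise added to each coordinate. All settings are illustrative.
set.seed(1)
n <- 150; lambda <- 0.1; t0 <- 0.1           # number of points, step size, start angle
a <- 1/3; b <- 1                             # radius parameters
alpha <- 0; beta <- 0; sigma <- sqrt(0.05)   # centre and noise standard deviation

ti <- t0 + (0:(n - 1)) * lambda
x  <- (a * ti + b) * cos(ti) + alpha + rnorm(n, 0, sigma)
y  <- (a * ti + b) * sin(ti) + beta  + rnorm(n, 0, sigma)

plot(x, y, asp = 1)                          # the points trace a noisy spiral
```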

In the previous parametric equations, let \(t_i=t'_i+2k_i\pi\), where \(t'_i=\arctan ((y_i-\beta )/(x_i-\alpha ))\), \(0 \le t'_i < 2\pi\), and \(k_i\) is a non-negative integer for each i. Then Eqs. (3) can also be written as

$$\begin{aligned} \begin{array}{cc} x_i=(a(t'_i+2k_i\pi )+b)\cos (t'_i)+\alpha +\epsilon _{1,i},\\ y_i=(a(t'_i+2k_i\pi )+b)\sin (t'_i)+\beta +\epsilon _{2,i}. \end{array} \bigg \} \end{aligned}$$
(4)

The structural parameters of the spiral data are: the center point \((\alpha ,\beta )\) (sometimes referred to as the shift parameters), the values of \(k_i\), the initial radius b, and the distance parameter a between successive turns.

3 The Suggested Methods

In this section, we present methods to compute initial values of the parameters in the spiral equation and then use them to obtain a good fit. The performance of the least squares method in fitting a spiral curve depends on estimating its center and the positions of some points; we give more details below.

Before the data can be used for estimation, they must be preprocessed so that the point (0, 0) lies inside the inner turn of the data and is not itself one of the data points, and the turn angle \(\lambda\) must be known or estimated.

The data fitting procedure follows six steps. Each of these steps will be discussed in some detail and illustrated with examples later. These steps are:

1. Estimating the center of the spiral,

2. Estimating the step size \(\lambda\) (the turn angle),

3. Estimating the values of \(k_i\) in the Eq. (4),

4. Getting initial values of the parameters a and b,

5. Using least squares method,

6. Getting a better estimation of the center of the spiral.

3.1 Estimating the Center of the Spiral

The problem of shifting data points by an unknown amount has received considerable attention in many models. Our aim is to estimate the point \((\alpha ,\beta )\) without prior knowledge of the pattern of the data. Ferris (2000) and Mishra (2006) have discussed this problem in some detail.

The difficulties in fitting a spiral to data intensify when \(z_i= (x_i, y_i)\) are not measured from the origin (0, 0). Plot the data and inspect them carefully; if the point (0, 0) is not inside the inner turn of the data, the data need to be adjusted.

We begin by recognizing that \(z'_i= (x'_i, y'_i)\) are measured from \((\alpha ,\beta )\ne (0,0)\). Let \(z_i= (x_i, y_i)\) be the points measured from the true (0, 0), so that \(z_i+(\alpha ,\beta )=z'_i\). Here \(\alpha\) is the constant by which the measured \(x'_i\) is shifted from the true \(x_i\), and \(\beta\) is the constant by which the measured \(y'_i\) is shifted from the true \(y_i\). Once the values of \(\alpha ,\beta\) are obtained, we translate \((x'_i, y'_i)\) into \((x_i, y_i)\).

First, we choose values of \(\alpha\) and \(\beta\) by inspecting the plot of the data, or by taking \((\sum x_i /n,\sum y_i /n)\) as the first estimate of \((\alpha ,\beta )\). Then, based on inspection of the plot of the shifted data \((x'_i-\alpha , y'_i-\beta )\), we may need to adjust these values to make sure the point (0, 0) is inside the inner loop and is not one of the data points. For example, the point (0, 0) is outside the inner loop of the data in both Figs. 1 and 2. Figure 2 shows that the approximation \((\sum x_i /n,\sum y_i /n)\) of the center is good. For the data in Fig. 1, estimating the center by the mean was not adequate, so we modified it by using the first data point and then fitted the new data set \((x'_i-x_1, y'_i-y_1)\). We use this approach to ensure that the point (0, 0) is inside the inner loop for simulated dataset 3 in Sect. 3.2.
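
As a hedged illustration, the following R sketch carries out this first centring step, assuming x and y hold the observed coordinates (for instance those simulated in Sect. 2); the fallback to the first data point mirrors the adjustment used for Fig. 1.

```r
## First estimate of the centre: the coordinate means; then shift the data.
alpha0 <- mean(x); beta0 <- mean(y)
xc <- x - alpha0                      # shifted x-coordinates
yc <- y - beta0                       # shifted y-coordinates

## Inspect the shifted data: (0, 0) should lie inside the inner loop.
plot(xc, yc, asp = 1); points(0, 0, col = "red", pch = 19)

## If it does not (as in Fig. 1), shift by the first data point instead:
# xc <- x - x[1]; yc <- y - y[1]
```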

Fig. 1

a Plot of the original data points in blue and the point (0, 0) in red; the point (0, 0) is close to the outer turns of the data. b The data points after shifting by the first estimate \((\alpha ,\beta )=(2.2331, 2.9269)\), obtained from the means. c Plot of the data points after shifting by the adjusted shift parameters (1.8331, 2.9269); we adjust the first estimate so that the first data point lies to the right of the point (0, 0)

Fig. 2

a Plot of the original data points in blue and the point (0, 0) in red. b Plot of the data after shifting by the data means \((\alpha ,\beta )=(-3.6484,0.9524)\)

3.2 Estimating the Step Size \(\lambda\) and the Values of \(k_i\)

Estimating the step size \(\lambda\): This step finds one of the most important parameters of the model, the constant angle \(\lambda\). Note that not every value of \(\lambda\) will produce a spiral.

To compute the angles of the points accurately, the point (0, 0) must lie inside the inner loop of the data. To estimate the step size, we apply the following steps (a short R sketch follows the list):

  • Step 1: Find \(t'_i\) for each i.

  • Step 2: Compute \(\lambda _i:=t'_{i+1}-t'_i\) where \(i=0, \dots ,n-2\).

  • Step 3: Remove the values of \(\lambda _i\) that come from points near the positive x-axis, and compute \(n':=n-1-c\), where c is the number of removed values.

  • Step 4: Compute the mean of the rest, \(\lambda \approx \sum \limits _{1\le i \le n'} \lambda _i/ n'\).
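
A minimal R sketch of these four steps, assuming xc and yc hold the centred coordinates from the sketch in Sect. 3.1:

```r
## Step 1: angles t'_i in [0, 2*pi), measured from the positive x-axis.
tp <- atan2(yc, xc) %% (2 * pi)

## Step 2: successive differences lambda_i = t'_{i+1} - t'_i.
dlam <- diff(tp)

## Step 3: drop the differences coming from points near the positive x-axis,
## where t'_i jumps by roughly 2*pi between consecutive points.
keep <- abs(dlam) < pi

## Step 4: average the remaining differences.
lambda_hat <- mean(dlam[keep])
```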

Estimating the values of \(k_i\): We need to find where the data intersect the positive x-axis (if they do). Let the numbers \(m_1, \dots , m_l\) mark these intersections; that is, for each \(j=1, \dots , l\), of the two points \((x_{m_j},y_{m_j})\) and \((x_{m_j+1},y_{m_j+1})\), one lies above and the other below the positive x-axis. Then

$$\begin{aligned} k_i:= {\left\{ \begin{array}{ll} 0 &{}\quad \text {if } i\le m_1, \\ 1 &{}\quad \text {if } m_1 <i\le m_2, \\ \vdots &{} \\ j-1 &{}\quad \text {if } m_{j-1}<i\le m_j, \\ j &{}\quad \text {if } m_{j}<i\le n-1, \\ \end{array}\right.} \end{aligned}$$

where \(j=1, \dots , l\), with the last case applying for \(j=l\). Recall that the point (0, 0) must lie inside the inner loop of the data and must not be one of the data points. We therefore locate the intersections of the data with the positive x-axis as follows (see the R sketch after the list):

  • Step 1: Find \(t'_i\) and \(r_i\) for each i.

  • Step 2: Arrange the \(r_i\) in ascending order of magnitude, so that when the data are plotted the curve is drawn in a single direction without doubling back. The points near the positive x-axis, for which \(x_i>0\) and \(y_i\) is almost zero, are the most important ones in this step, because these points may oscillate around the x-axis.

  • Step 3: Compute \(\lambda _i:=t'_{i+1}-t'_i\) where \(i=0, \dots ,n-2\).

  • Step 4: Find all i such that \(\text {abs}( \lambda _i)\ge \pi\); this happens only when one of the points \(z_{i},z_{i+1}\) lies above the x-axis and the other below it. These values of i are the \(m_j\)’s.
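
A hedged R sketch of this procedure, reusing tp and dlam from the step-size sketch and assuming Steps 1–3 have already been carried out:

```r
## Step 4: positions m_1, ..., m_l of the crossings of the positive x-axis.
m <- which(abs(dlam) >= pi)

## k_i = number of crossings before point i (k_0 = 0), matching the piecewise
## definition above; then unwrap the angles and compute the radii.
ki <- c(0, cumsum(abs(dlam) >= pi))
ti <- tp + 2 * pi * ki            # t_i = t'_i + 2*k_i*pi, as in Eq. (4)
ri <- sqrt(xc^2 + yc^2)           # radii r_i, used in Sect. 3.3
```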

Remark 1

When \(m_1\) is very small compared with n (in other words, when \(t_0\) is close to \(2\pi\)), it is better to delete the first \(m_1\) points from the data. In all the numerical examples that we studied, we obtained a better fit by deleting the first \(m_1\) points in this case. For example, see Fig. 3.

Remark 2

In Step 2 above, after arranging the points, we may have several points in the same turn with almost the same angles, especially in the first turn. It is better to delete all points with almost the same angle except the one with the largest radius. For example, see Fig. 4. In Fig. 4a the first nine points, in the first turn of the spiral, have equal angles; we therefore omit the first nine points in Fig. 4b.

Fig. 3

a Plot of the original data in blue and the point (0, 0) in red. The first intersection is after the second point, \(m_1=2\), and the third and fourth points are too close to the x-axis. b Plot of the data after deleting the first four points

Fig. 4

a Plot of the original data in blue and the point (0, 0) in red. The first nine points, in the first turn of the spiral, have equal angles, so we need to omit them. b Plot of the data after deleting the first nine points

3.3 Getting Initial Values of the Parameters a, b

In this part, we present four methods to obtain initial values of the parameters a and b, denoted by \({\hat{a}}\) and \({\hat{b}}\) respectively. Recall that we start by completing the previous steps, i.e. finding \(r_i\), \(t'_i\) and \(t_i\) for all i, with all data points measured from the point (0, 0). From equation (3), the radius of each point in the data satisfies

$$\begin{aligned} r_i\approx at_i+b=a(t_0+i\lambda )+b, \hspace{15pt} i=0,\dots , n-1. \end{aligned}$$
(5)

Now, we can choose any one of the following methods for evaluating \({\hat{a}}\) and \({\hat{b}}\) (an R sketch of all four methods follows the list):

  (1)

    Method 1: When the noise in the data is small compared with the radius, equation (5) implies that the difference between the radii of successive points is almost constant. We use this fact to approximate a and b as follows:

    (i)

      Find \(a_i:=(r_{i+1}-r_i)/\lambda\) for each \(i=0,\dots , n-2\).

    (ii)

      Compute \({\hat{a}}:=\big (\sum \limits _{i=0}^{n-2} a_i \big )/(n-1)\).

    (iii)

      Find \({\hat{b}}:=\big (\sum \limits _{i=0}^{n-1} (r_i-{\hat{a}}t_i)\big )/n\).

  (2)

    Method 2: By the geometric properties of Archimedean spirals centered at (0, 0), any line passing through the origin intersects the spiral curve infinitely many times. We use this fact to approximate a and b as follows:

    (i)

      For each i, check whether there exists j such that \(t'_i\) is almost equal to \(t'_j\). Choose i, j such that \(\text {abs}(t'_i-t'_j)\) is smallest.

    (ii)

      For the values i, j from the previous step, calculate \({\hat{a}}:=\big (r_{i}-r_j\big )/\big ( 2\pi (k_i-k_j)\big )\).

    (iii)

      Find \({\hat{b}}=\big (\sum \limits _{i=0}^{n-1} (r_i-{\hat{a}}t_i)\big )/n\).

  (3)

    Method 3: From the definition of an Archimedean spiral, we have \(\frac{dr}{dt}=a\). In this method, we use a numerical approximation of the first derivative of the radius to obtain an initial value of a:

    $$\begin{aligned} {\hat{a}}:=\frac{\sum \limits ^{n-2}_{i=1} (r_{i+1}-r_{i-1})}{2\lambda (n-2)}, \hspace{15pt} {\hat{b}}:=\frac{\sum \limits ^{n-1}_{i=0} (r_i-{\hat{a}}t_i)}{n}. \end{aligned}$$

    The first derivative is approximated by the centered divided difference formula (Faires and Burden 1998).

  (4)

    Method 4 (Mishra’s algorithm): Mishra (2004) presented an algorithm to compute an initial value of a in which a different procedure is used to determine the values of \(k_i\), and the data are assumed to be measured from the origin (0, 0). In this method, the initial values are:

    $$\begin{aligned} {\hat{a}}:=\sum \limits ^{n-1}_{i=0} \frac{ r_i}{ t_i}, \hspace{15pt} {\hat{b}}:=\frac{\sum \limits ^{n-2}_{i=1} (r_i-{\hat{a}}t_i)}{n}. \end{aligned}$$
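
The following R sketch implements the four initial estimates above, assuming ri, ti and lambda_hat are available from the earlier sketches. Method 2 is simplified here to comparing two points roughly one full turn apart, Method 4 follows the formula as written, and both \({\hat{b}}\) formulas are averaged over all points for simplicity.

```r
n <- length(ri)

## Method 1: average the scaled successive radius differences.
a1 <- mean(diff(ri)) / lambda_hat
b1 <- mean(ri - a1 * ti)

## Method 2 (simplified): compare the first point with the point whose
## unwrapped angle is closest to one full turn later.
j  <- which.min(abs(ti - (ti[1] + 2 * pi)))
a2 <- (ri[j] - ri[1]) / (ti[j] - ti[1])
b2 <- mean(ri - a2 * ti)

## Method 3: centred divided differences for dr/dt.
a3 <- mean(ri[3:n] - ri[1:(n - 2)]) / (2 * lambda_hat)
b3 <- mean(ri - a3 * ti)

## Method 4 (Mishra-type): the text gives a plain sum of r_i / t_i;
## an average, mean(ri / ti), is a common variant.
a4 <- sum(ri / ti)
b4 <- mean(ri - a4 * ti)
```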

3.4 Least Squares Optimization

The estimates of all the parameters in Sect. 3.3 can be improved using least squares (LS) optimization. To do so, we use the optimization routine nlm in R (R Core Team 2014) and the routine lsquares\(\_\)estimates in WxMaxima (Timberlake and Mixon 2016).

In this subsection, we assume that all the data points are measured from the point (0, 0), so \(r_i\approx at_i+b\). The least squares method estimates the values of the parameters a and b that minimize the residual sum of squares (RSS), that is, the sum over all i of \((r_i-at_i-b)^2\). Starting from good initial values of these parameters, the algorithm reaches the solution vector more rapidly.

After computing \(r_i\) and \(t_i\) for all i, both the nlm and the lsquares\(\_\)estimates procedures optimize over the two parameters a and b, starting from initial values obtained by any of the methods in Sect. 3.3. A minimal nlm sketch follows.
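
The sketch below refines the Method 1 values with nlm; the only assumption is that ri, ti, a1 and b1 exist from the earlier sketches. The lsquares_estimates routine in WxMaxima plays the same role there.

```r
## Residual sum of squares as a function of the parameter vector p = (a, b).
rss <- function(p) sum((ri - p[1] * ti - p[2])^2)

## Refine the initial values (here those from Method 1) with nlm.
fit <- nlm(rss, p = c(a1, b1))
a_hat <- fit$estimate[1]
b_hat <- fit$estimate[2]
fit$minimum                      # RSS at the optimum
```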

3.5 Getting a Better Estimate of the Center of the Spiral

The equations (3) can be written in matrix form as \(Z=AB+\Upsilon\) where

$$\begin{aligned} Z=\left[ \begin{array}{c} x_0,\ldots , x_{n-1}, y_0, \ldots , y_{n-1} \end{array} \right] ^T, B= \left[ \begin{array}{c} a, b, \alpha , \beta \end{array} \right] ^T,\\ \\ \Upsilon =\left[ \begin{array}{cc} \epsilon _{1,0}, \ldots , \epsilon _{1,n-1}, \epsilon _{2,0}, \ldots , \epsilon _{2,n-1} \end{array} \right] ^T,\\ \\ A =\left[ \begin{array}{cccc} t_0\cos (t_0)&{}\cos (t_0)&{}1&{}0\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ t_{n-1}\cos (t_{n-1})&{}\cos (t_{n-1})&{}1&{}0\\ t_0\sin (t_0)&{}\sin (t_0)&{}0&{}1\\ \vdots &{}\vdots &{}\vdots &{}\vdots \\ t_{n-1}\sin (t_{n-1})&{}\sin (t_{n-1})&{}0&{}1 \end{array} \right] . \end{aligned}$$

Calculating the values \(t_i\) for all i and then applying the LS method to the equation \(Z=AB+\Upsilon\), we obtain the approximation:

$$\begin{aligned} B \approx \big ( A^TA\big )^{-1}A^TZ. \end{aligned}$$
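
A hedged R sketch of this closed-form update, assuming x and y are the original (unshifted) coordinates and ti the estimated angles from Sect. 3.2:

```r
## Design matrix A and response Z as defined above; each data point
## contributes one row for its x-equation and one row for its y-equation.
A <- rbind(cbind(ti * cos(ti), cos(ti), 1, 0),
           cbind(ti * sin(ti), sin(ti), 0, 1))
Z <- c(x, y)

## Solve the normal equations for B = (a, b, alpha, beta).
B <- solve(t(A) %*% A, t(A) %*% Z)
B
```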

4 Numerical Examples

Here we present a few examples to illustrate the efficiency of the suggested methods and to identify any problems that may arise in finding a good fit. In addition, we compare the suggested methods with one another.

In this section, we simulate 10 000 datasets based on four settings and apply our methodology to them. The first dataset was created from the spiral \(r=t/3+1\), with \(n=150\), \(\lambda =0.1=t_0\) and true center \((-4,1)\). The second was built from the curve \(r=t/3\) without shift, with \(n=150\) and \(\lambda =0.1=t_0\). The third was drawn from the spiral \(r=t/4\), with \(n=100\), \(\lambda =0.2=t_0\) and true center (2, 3). The last was created from the spiral \(r=2t+1\), with \(n=100\), \(\lambda =0.2=t_0\) and true center (0, 0). All datasets are subject to normally distributed noise with \(\mu =0\) and \(\sigma ^2=0.05\).
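
For reproducibility, the first setting could be simulated as in the R sketch below; the seed and code details are our own, and only the stated parameter values come from the text.

```r
## One dataset from the first setting: r = t/3 + 1, n = 150,
## lambda = t0 = 0.1, true centre (-4, 1), noise variance 0.05.
set.seed(123)
n <- 150; lambda <- 0.1; t0 <- 0.1
ti <- t0 + (0:(n - 1)) * lambda
x  <- (ti / 3 + 1) * cos(ti) - 4 + rnorm(n, 0, sqrt(0.05))
y  <- (ti / 3 + 1) * sin(ti) + 1 + rnorm(n, 0, sqrt(0.05))
```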

Example 1

In order to use Mishara’s method and our methods for fitting, we need the point (0,0) be inside the first loop. Therefore, we shift the data by its means and then we remove the first 6 points as in Fig. 2. Deleting the first 6 points gives a better fit when we use Mishara method. After that we apply our methodology as in Sect. 3. The step size is estimated to be \(0.19909\approx 0.1\), and the positions of the intersections with the positive x-axis are in \(m_1=31, m_2=62\) and \(m_3=94\).

Table 1 shows the initial estimates of the parameters a and b obtained by the four methods of Sect. 3.3, together with the updated least squares estimates based on these initial values and the corresponding residual sums of squares. After applying the method of Sect. 3.4, we get \((\alpha ,\beta )\approx (0.83,-0.29)\). Figure 8 gives four fitted plots of one simulated spiral, one for each method. Our methods give a much better fit than Mishra’s method. Figures 9 and 10 present the distributions of the estimates of a and b, respectively. These figures clearly show that the distributions of the estimates from our methods are centered around the true values, whereas Mishra’s are far away. Table 1 also shows that methods 1, 2 and 3 have estimates close to \(a=1/3\) and \(b=1\). The variances over the 10 000 datasets are very small. Method 3 has the best estimates, with the minimum variances among all methods, followed by method 1. For method 2, the hardest part is choosing the values of i and j, as in Sect. 3.3. Overall, with any choice of initial values of a and b, we obtain a better fit using the least squares method.

Table 1 The initial estimates of the spiral parameters, fitted curve equation and RSS of the data spiral after shifting (as needed) and deleting 6 points

Example 2

We applied the methodology to the 10 000 datasets simulated from the setting of the data in Fig. 2. The step size is estimated to be \(0.106\approx 0.1\), and the positions of the intersections with the positive x-axis are \(m_1=62\) and \(m_2=125\).

In this example, we apply our methods three times: without deleting or shifting, deleting nine points without shifting, and finally shifting without deleting. The best initial values are obtained from our three methods in all three runs, the true values being \(a=1/3\) and \(b=0\). Table 1 and Figs. 11, 12 and 13 summarize our findings. Methods 1, 2 and 3 give estimates close to the true values with small variances across the 10 000 datasets. On the other hand, Mishra’s estimate of b is far from the true value, and the variances of a and b are much larger than those of the other methods. We obtain a much better fit using the least squares method, with RSS\(=0.243\).

Example 3

Consider the data in Fig. 14, where the point (0, 0) lies among the outer turns of the data. The first choice of \((\alpha ,\beta )\) is the first point of the data, since this places the first data point to the right of the point (0, 0) and gives a better estimate when Mishra’s method is used. Table 1 shows the results obtained after shifting the original data; the step size is estimated to be \(0.196\approx 0.2\), and the positions of the intersections with the positive x-axis are \(m_1=31, m_2=62\) and \(m_3=94\).

As in the previous examples, our methods give much better initial estimates of the parameters than Mishra’s method. The three methods give similar results, with method 1 having the minimum variances. We obtain a much better fit using least squares, with RSS\(=0.589\) (Figs. 15, 16).

Example 4

We fit the data in Fig. 3b, where the point (0, 0) is inside the inner loop, and apply our methodology. The step size is estimated to be \(0.2014\approx 0.2\), and the positions of the intersections with the positive x-axis are \(m_1=2, m_2=31, m_3=62\) and \(m_4=94\). As explained before, we deleted the first 6 points; the positions of the intersections of the new data are then \(m_1=27, m_2=58\) and \(m_3=90\). Figure 17 shows plots of one simulated spiral fitted by the four methods.

Table 1 and Figs. 18 and 19 show the results obtained after deleting the first 6 points. The best initial values are obtained from our methods and are close to the true values \(a=2\) and \(b=1\). The best of these methods is method 3, which has the minimum variances.

5 Real Datasets

The purpose of this section is to evaluate the performance of the methodology on real datasets. We use two different datasets, both available from Pereira et al. (2016), where the data were collected by having patients draw over sketches of an Archimedean spiral. The datasets are shown in Fig. 5. Plot (a) shows a very clear hand-drawn spiral, whereas plot (b) shows an unclear hand-drawn spiral. There are 1908 and 2716 points in the data of panels (a) and (b), respectively.

In all the previous simulated data, the spiral turns counter-clockwise, but the datasets from Pereira et al. (2016) are all clockwise. Mishra’s method is designed for counter-clockwise turns, so we reversed the direction of the real data by starting from the last point instead of the first. After applying all four methods to each dataset in both directions, the results obtained from the counter-clockwise direction were clearly better, and the fourth method gave the worst estimates.

Since the large datasets take a long time to analyse and to obtain initial solutions for, random sampling is an appropriate approach. After selecting many random samples (each containing at least twenty points) and applying the methodology, we obtained good results. The sample sizes are 1080 and 636 for data 1 and data 2, respectively.

Table 2 shows that the parameter estimates for all samples are close, which implies that our algorithm fits well. The updated estimates from the least squares procedure, with 95% confidence intervals (C.I.), are provided. The first dataset gives \({\hat{a}}=10.45\) with C.I. (10.336, 10.564) and \({\hat{b}}=4.12\) with C.I. (2.742, 5.498). The second dataset gives \({\hat{a}}=7.95\) with C.I. (7.688, 8.204) and \({\hat{b}}=6.8\) with C.I. (3.693, 9.895). Figures 6 and 7 present the 2D spirals of two patients as points, after shifting by (200, 215), together with the fitted spirals as lines.

Fig. 5

Panels (a) and (b) present two different Parkinson’s disease datasets

Table 2 The fitting results of the data sets in Figs. 6 and 7

6 Conclusion

We have established an approach to fitting an Archimedean spiral, which starts by finding initial values of the spiral parameters a and b. We presented four different methods to estimate these initial values: Mishra’s method (Mishra 2004) and three proposed methods. These values are then updated by least squares. We also discussed a methodology for analysing spiral data in two dimensions.

The errors are assumed to be independent and identically normally distributed with mean 0 and variance \(\sigma ^2\). In the numerical examples we assumed \(\sigma ^2 = 0.01\) and 0.1. The algorithm also works for larger values such as \(\sigma ^2=1\). The results show that the best initial starting points are obtained by the first and third methods. In general, our methods perform better on both the simulated and the real data.

In the future, it would be interesting to fit a three-dimensional spiral model, which is commonly seen in many engineering designs.