On the use of the Infinity Computer architecture to set up a dynamic precision floating-point arithmetic

We devise a variable precision floating-point arithmetic by exploiting the framework provided by the Infinity Computer. This is a computational platform implementing the Infinity Arithmetic system, a positional numeral system which can handle both infinite and infinitesimal quantities, symbolized by the positive and negative finite powers of the radix grossone. The computational features offered by the Infinity Computer allow us to dynamically change the accuracy of representation and of floating-point operations during the flow of a computation. When suitably implemented, this possibility turns out to be particularly advantageous when solving ill-conditioned problems. In fact, compared with a standard multi-precision arithmetic, here the accuracy is improved only when needed, without significantly affecting the overall computational effort. An illustrative example concerning the solution of a nonlinear equation is also presented.


Introduction
The Arithmetic of Infinity was introduced by Y.D. Sergeyev with the aim of devising a new coherent computational environment able to handle finite, infinite and infinitesimal quantities, and to execute arithmetical operations with them. It is based on a positional numeral system with the infinite radix ①, called grossone and representing, by definition, the number of elements of the set of natural numbers N (see, for example, [13,17] and the survey paper [15]). Similarly to the standard positional notation for finite real numbers, a number in this system is recorded as

c_m ①^{p_m} + ... + c_1 ①^{p_1} + c_0 ①^{p_0} + c_{-1} ①^{p_{-1}} + ... + c_{-k} ①^{p_{-k}}.   (1)

The coefficients c_i, called grossdigits, are real numbers, while the grosspowers p_i, sorted in decreasing order

p_m > ... > p_1 > p_0 = 0 > p_{-1} > ... > p_{-k},

may be finite, infinite or infinitesimal even though, for our purposes, only finite integer grosspowers will be considered.
Notice that, since ① 0 = 1 by definition, the set of real numbers and the related operations are naturally included in this new system. In this respect, the Arithmetic of Infinity should be perceived as a more powerful tool that improves the ability of observing and describing mathematical outcomes that the standard numeral system could not properly handle. In particular, the new system allows us to better inspect the nature of the infinite objects we are dealing with. For example, while ∞ + 1 = ∞ in the standard thinking, if we are in the position to specify as, say ①, the kind of infinity we are observing using the new methodology, such an equality could be better replaced with ① + 1 > ①. According to the principle that the part is less than the whole, this novel perception of the infinite dimensionality has proved successful in resolving a number of paradoxes involving infinities and infinitesimals, the most famous being Hilbert's paradox of the Grand Hotel (see [13,17]).
The Arithmetic of Infinity paradigm is rooted in three methodological postulates and its consistency has been rigorously recognized in [11]. Its theoretical and practical implications are formidable also considering that the final goal is to make the new computing system available through a dedicated processing unit. The computational device that implements the Infinity Arithmetic has been called Infinity Computer and is patented in EU, USA, and Russia (see, for example, [21]).
Among the many fields of research to which this new methodology has been successfully applied, we mention numerical differentiation and optimization [5,14,22], the numerical solution of differential equations [18,2,19,12,7], models for percolation and biological processes [20,9], and cellular automata [10,4].

The aim of the present study is to devise a dynamic precision floating-point arithmetic by exploiting the computational platform provided by the Infinity Computer. In contrast with standard variable precision arithmetics, here not only may the accuracy be dynamically changed during the execution of a given algorithm, but variables stored with different accuracies may be combined through the usual algebraic operations. This strategy is explored and addressed to the accurate solution of ill-conditioned/unstable problems [3,8].
One interesting application is the possibility of handling ill-conditioned problems, or even of implementing algorithms that are labeled as unstable in standard floating-point arithmetic. One example in this direction has been illustrated in [1]: it consists in using iterative refinement to improve the accuracy of a computed solution to an ill-conditioned linear system until a prescribed input accuracy is achieved.
The paper is organized as follows. In the next section we highlight those features of the Infinity Computer that play a key role to set up the variable-precision arithmetic. This latter is discussed in Section 3 together with a few illustrative examples. As an application in Numerical Analysis, in Section 4 we consider the problem of finding the zero of a nonlinear function affected by ill-conditioning issues. Finally, some conclusions are drawn in Section 5.

Background
As is the case with the standard floating-point arithmetic, the Infinity Computer handles both numbers and operations numerically (not symbolically). Consequently, it can efficiently sustain the massive amount of computation needed to solve a wide variety of real-life problems. On the other hand, a roundoff error proportional to the machine accuracy is generated during the representation of data (i.e. the coefficients c_i and p_i in (1)) and the execution of the basic operations. We will give a more detailed description of how the representation of grossdigits and the floating-point operations should be carried out in the next section. Here, for the sake of simplicity, we will neglect these sources of errors.
The grossnumbers that will be considered in the sequel are those that admit an expansion in terms of integer powers of ①^{-1} and, thus, take the form

X = x_0 + x_1 ①^{-1} + ... + x_T ①^{-T} = \sum_{i=0}^{T} x_i ①^{-i},   (2)

where T denotes the maximum order of infinitesimal appearing in X. For this special set, the arithmetic operations on the Infinity Computer follow the same rules defined for the polynomial ring. For example, given the two grossnumbers

X = x_0 + x_1 ①^{-1},    Y = y_0 + y_1 ①^{-1} + y_2 ①^{-2},   (3)

we get

X + Y = (x_0 + y_0) + (x_1 + y_1) ①^{-1} + y_2 ①^{-2},
X · Y = x_0 y_0 + (x_0 y_1 + x_1 y_0) ①^{-1} + (x_0 y_2 + x_1 y_1) ①^{-2} + x_1 y_2 ①^{-3},

and analogously for the division X/Y. Notice that, on the Infinity Computer, variables may coexist with different storage requirements. Leaving aside the (negative) powers of ① which, as we will see, need not be stored in our usage, the variable Y displays infinitesimal quantities up to order 2, thus requiring one extra record to store the grossdigit y_2, compared with the variable X, which only contains a first-order infinitesimal. This circumstance also influences the computational complexity associated with each single floating-point operation. As a consequence of the different amounts of memory allocated for storing grossnumbers, the global computational complexity associated with a given algorithm performed on the Infinity Computer cannot be merely estimated in terms of how many flops are executed, but should also take into account how many grossdigits are involved in each operation. If X is chosen as in (2), we denote by X^{(q)}, 0 ≤ q ≤ T, its section obtained by neglecting, in the sum, all the infinitesimals of order greater than q, that is

X^{(q)} = \sum_{i=0}^{q} x_i ①^{-i}.   (4)

For example, choosing q = 0 and X and Y as in (3), we see that X^{(0)} + Y^{(0)} = x_0 + y_0 and X^{(0)} · Y^{(0)} = x_0 y_0 would resemble the floating-point addition and multiplication in standard arithmetic, respectively, while additional effort is needed if other powers of ①^{-1} are successively involved. More precisely, the computational cost associated with a single operation on two grossnumbers will depend on how many infinitesimals are considered.
Assuming q < p and denoting by d_j the grossdigits associated with Y, the operations on the two sections X^{(q)} and Y^{(p)} read

X^{(q)} + Y^{(p)} = \sum_{i=0}^{q} (x_i + d_i) ①^{-i} + \sum_{i=q+1}^{p} d_i ①^{-i},   (5)

X^{(q)} · Y^{(p)} = \sum_{i=0}^{q+p} \Big( \sum_{j=\max(0, i-p)}^{\min(i, q)} x_j d_{i-j} \Big) ①^{-i}.   (6)

Consequently, the addition requires q + 1 additions of grossdigits, while the multiplication amounts to (q + 1)(p + 1) multiplications and qp − q(q − 1)/2 additions/subtractions of grossdigits. It is worth noticing that, since in both operations all the coefficients of ①^{-j} may be independently calculated, there is room for a huge parallelization. We will not consider this aspect in detail in the present study.
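These polynomial-ring rules are easy to emulate in software: a grossnumber of the form (2) can be stored as the list of its grossdigits, with position i holding the coefficient of ①^{-i}, so that addition is componentwise and multiplication is a convolution. The following Python sketch is purely illustrative (our own code, not the authors' emulator; all function names are ours):

```python
# A grossnumber x0 + x1*①^(-1) + ... + xT*①^(-T) is stored as the
# list [x0, x1, ..., xT] of its grossdigits.

def gadd(X, Y):
    """Componentwise addition, as in the polynomial ring."""
    n = max(len(X), len(Y))
    X = X + [0.0] * (n - len(X))
    Y = Y + [0.0] * (n - len(Y))
    return [a + b for a, b in zip(X, Y)]

def gmul(X, Y):
    """Convolution product: the coefficient of ①^(-i) is sum_j X[j]*Y[i-j]."""
    Z = [0.0] * (len(X) + len(Y) - 1)
    for i, a in enumerate(X):
        for j, b in enumerate(Y):
            Z[i + j] += a * b
    return Z

def section(X, q):
    """The section X^(q): drop all infinitesimals of order greater than q."""
    return X[:q + 1]

# X = x0 + x1*①^-1 and Y = y0 + y1*①^-1 + y2*①^-2, as in (3)
X = [2.0, 3.0]
Y = [1.0, 4.0, 5.0]
print(gadd(X, Y))                           # [3.0, 7.0, 5.0]
print(gmul(X, Y))                           # [2.0, 11.0, 22.0, 15.0]
print(gmul(section(X, 0), section(Y, 0)))   # [2.0], i.e. x0*y0 only
```

Working with sections simply means truncating the operand lists before calling these routines, which is the mechanism exploited below to trade accuracy for speed.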
On the Infinity Computer, therefore, numbers with different accuracies may be simultaneously represented and combined. The idea is to let ①^{-1} and its powers act as machine infinitesimal quantities when related to the classical floating-point system. These infinitesimal entities, if suitably activated or deactivated, may be conveniently exploited to increase or decrease the required accuracy during the flow of a given computation. This strategy may be used to automatically detect ill-conditioning issues during the execution of a code that solves a given problem, and to change the accuracy accordingly, in order to optimize the overall computational effort under the constraint that the resulting error in the output solution should fit a given input tolerance. A formal introduction of the new dynamic precision arithmetic is discussed hereafter.

Machine numbers and their storage in the Infinity Computer
Let t and T be two given non-negative integers and N = (T + 1)(t + 1) − 1. The set of machine numbers we are interested in is

F = { ±β^p · d_0 . d_1 d_2 ... d_N },   (7)

where β ≥ 2 denotes the base of the numeral system, the integer exponent p ranges in a given finite interval, and the d_i ∈ {0, ..., β − 1} are the significant digits, with d_0 ≠ 0 (normalization condition). Starting from d_0, we group the digits d_i in T + 1 adjacent strings, each of length t + 1:

X = ±β^p · (d_0 . d_1 ... d_t)(d_{t+1} ... d_{2t+1}) ... (d_{T(t+1)} ... d_N).   (8)

The representation of the numbers X as in (7), under the shape (8), suggests an interesting application of the Infinity Computer. Introducing the new symbol ❶, called dark grossone, as

❶ = β^{t+1},   (9)

and setting

x_i = d_{i(t+1)} . d_{i(t+1)+1} ... d_{i(t+1)+t},   i = 0, ..., T,   (10)

the number X in (8) may be rewritten as

X = ±β^p \sum_{i=0}^{T} x_i ❶^{-i}.   (11)

Its section of index q is then given by

X^{(q)} = ±β^p \sum_{i=0}^{q} x_i ❶^{-i},   q ≤ T.   (12)

We assume that a real number x is represented by a floating-point number X in the form (11) by truncating it, or rounding it to nearest (ties to even), after the digit d_N. This is the highest attainable accuracy during the data representation phase but, in general, a lower accuracy (and hence faster execution times) will be required while processing the data, which will be achieved by involving sections of X of suitable indices q during the computations.
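The grouping above is easy to verify numerically. The short Python sketch below (our own illustrative code, with β = 2; names are ours) splits a mantissa of (T + 1)(t + 1) binary digits into its T + 1 grossdigits and reassembles the value with ❶ = 2^{t+1}:

```python
def split_mantissa(bits, t):
    """Split the digit string d0 d1 ... dN, N = (T+1)*(t+1) - 1, into T+1
    grossdigits of t+1 bits each; grossdigit i is read as the fixed-point
    value d_{i(t+1)} . d_{i(t+1)+1} ... d_{i(t+1)+t}, as in (10)."""
    w = t + 1
    assert len(bits) % w == 0
    return [int(bits[k:k + w], 2) / 2 ** t for k in range(0, len(bits), w)]

def assemble(grossdigits, t):
    """Value of sum_i x_i * D**(-i), with the dark grossone D = 2**(t+1)."""
    D = 2 ** (t + 1)
    return sum(x * D ** (-i) for i, x in enumerate(grossdigits))

# t = 3, T = 2: twelve digits split into three grossdigits of four bits
bits = "110110101011"
xs = split_mantissa(bits, 3)
print(xs)                                       # [1.625, 1.25, 1.375]
print(assemble(xs, 3) == int(bits, 2) / 2**11)  # True: exact reconstruction
```

Since every quantity involved is dyadic, the reconstruction is exact in ordinary binary floating point.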
Echoing the symbol ①, the new symbol ❶ emphasizes the formal analogy between a machine number and a grossnumber (compare (11) with (2) and (12) with (4)). This correspondence suggests that the computational platform provided by the Infinity Computer may be conveniently exploited to host the set F defined at (7) and to execute operations on its elements using a novel methodology. This is accomplished by formally identifying the two symbols, which means that, though they refer to two different definitions, they are treated in the same way in relation to the storage and execution of the basic operations. In accord with the features outlined in Section 2, the Infinity Computer will then be able to: (a) store floating-point numbers at different accuracy levels, by involving different infinitesimal terms, according to the need; (b) easily access sections of floating-point numbers as defined in (12); (c) perform computations involving numbers stored with different accuracies.
The affinity between the meaning of the two symbols goes even beyond what has been stated above. We have already observed that the case q = 0 in (12) resembles the standard set of floating-point numbers with t + 1 significant figures. This means that when the Infinity Computer works with numbers of the form X (0) it precisely matches the structure designed following the principles of the IEEE 754 standard. In this mode, the operational accuracy is set at its minimum value and the upper bound on the relative error due to rounding (unit roundoff) is ❶ −1 . In other words, ❶ −1 will be perceived as an infinitesimal entity which cannot be handled unless we let numbers in the form X (1) come into play. This argument can then be iterated to involve ❶ −i , i = 2, . . . , T . Mimicking the same concept expressed by the use of ①, negative powers of ❶ act like lenses to observe and combine numbers using different accuracy levels.
Remark 1 What about the role of ❶ as an infinite-like quantity? Consider again the basic operational mode with numbers in the form X^{(0)}. If we ask the computer to count the integer numbers according to the scheme

    n=0
    while n+1>n
        n=n+1
    end

it would stop at n = ❶, yielding a further similarity with the definition of ① in the Arithmetic of Infinity. Again, involving sections of higher index, the counting process could be safely continued.
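The same experiment can be reproduced in any fixed-precision binary arithmetic. In IEEE double precision, for instance, with its 53 significant bits, the loop stops at n = 2^53, which plays for that arithmetic exactly the role of the dark grossone ❶ = β^{t+1} (here β = 2 and t + 1 = 53). A minimal Python check:

```python
# Counting with IEEE double precision floats (53-bit significand): the loop
# of Remark 1 stops as soon as n + 1 rounds back to n, i.e. at n = 2**53.
n = 2.0 ** 53 - 3         # start a few units below the saturation point
while n + 1 > n:
    n = n + 1
print(int(n) == 2 ** 53)  # True
```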
In conclusion, the role of ❶ could be interpreted as an inherent feature of the machine architecture which, consistently with the Infinity Arithmetic methodology, could activate suitable powers of ❶ to get, when needed, a better perception of numbers. The examples included in the sequel further elucidate this aspect.

Floating-point operations
We have seen that, through the formal identification of ❶ with ①, it is possible to store the elements of F as if they were grossnumbers and, consequently, to take advantage of the facilities provided by the Infinity Computer in accessing their sections and performing the four basic operations on them, according to the rules described in Section 2 (see, for example, (5) and (6)). For these reasons, in the sequel, we shall use ① in place of ❶ when working on the Infinity Computer, even though, due to the finite nature of ❶, the result of a given operation may not be in the form (12), so that a normalization procedure has to be considered. Hereafter, we report a few examples in order to elucidate this aspect. In all cases, a binary base has been adopted for data representation.
Addition. Set t = 3 and T = 2 (three grossdigits, each with four significant digits), and consider the sum of two floating-point normalized numbers. Table 1 summarizes the procedure by a sequence of commented steps.

Table 1 Scheme of the addition of two positive floating-point numbers.

First of all, the two numbers are stored in memory by distributing their digits along the powers ①^0, ①^{-1} and ①^{-2} (step (a)). Before summing the two numbers, an alignment is performed to make the two exponents equal (step (b)). Notice that shifting the digits of the second number to the right causes a redistribution of the digits along the three mantissas. Step (c) performs a polynomial-like sum of the two numbers. The contribution of each term has to be consistently redistributed (step (d)), in order to take into account possible carry bits, and the three mantissas accordingly updated (step (e)). Steps (f) and (g) conclude the computation by normalizing and rounding the result.
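Steps (c)-(e) reduce to a digit-wise sum followed by a carry propagation between adjacent grossdigits: an overflow of grossdigit i beyond its t + 1 digits is worth one unit in the last place of grossdigit i − 1. A simplified Python model of these two steps (our own sketch: exponents already aligned, each grossdigit stored as a non-negative (t+1)-bit integer):

```python
def add_chunked(A, B, t):
    """Digit-wise sum of two aligned mantissas stored as lists of (t+1)-bit
    grossdigit integers (step (c)), followed by carry redistribution
    (steps (d)-(e)). Returns (carry_out, digits); a nonzero carry_out is
    absorbed by the final normalization step."""
    n = max(len(A), len(B))
    A = A + [0] * (n - len(A))
    B = B + [0] * (n - len(B))
    S = [a + b for a, b in zip(A, B)]   # polynomial-like sum, step (c)
    base = 1 << (t + 1)                 # one unit of the next grossdigit up
    carry_out = 0
    for i in range(n - 1, -1, -1):      # propagate carries, steps (d)-(e)
        if S[i] >= base:
            S[i] -= base
            if i == 0:
                carry_out += 1
            else:
                S[i - 1] += 1
    return carry_out, S

# t = 3: mantissas 1101 1010 and 0110 1001; the sum overflows in both chunks
print(add_chunked([0b1101, 0b1010], [0b0110, 0b1001], 3))  # (1, [4, 3])
```

Indeed 13·16 + 10 = 218 and 6·16 + 9 = 105 sum to 323 = 1·256 + 4·16 + 3, matching the returned carry and digits.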
Subtraction. As usual, floating-point subtraction between two numbers sharing the same sign is performed by inverting the sign bit of the second number, converting its mantissa to 2's complement, and then performing the addition as outlined above. It is well known that subtracting two close numbers may lead to cancellation issues. We consider an example where the accuracy may be dynamically changed in order to overcome them. We assume we work with the arithmetic resulting from setting t = 7 and T = 3 (four grossdigits, each consisting of one byte) with truncation. It turns out that, for a floating-point number X representing an input real number x, its section X^{(0)} may be interpreted as the single precision representation of x, while X^{(1)}, X^{(2)} and X^{(3)} ≡ X are its double, triple and quadruple precision approximations, respectively. Loss of accuracy, resulting from a subtraction between two numbers having the same sign, will be detected during the normalization phase, when it requires shifting the mantissa by a large number of bits.
Consider the simple problem of evaluating the function f(x, y, z) = x + y + z, which computes the sum of three real numbers, and assume that the user requires single precision accuracy in the result. In the examples below, we discuss three different situations.

Example 1 The three real numbers

x = 2^{-1} · 1.0001100000010111111001001110110...,
y = 2^0 · 1.0010101010110010110101001101011...,
z = 2^0 · 1.1011011010111011011011010111001...,

are represented on the Infinity Computer in the form (11). Since we are adding positive numbers, no control on the accuracy is needed here, and the result is obtained with a relative error E^{(0)} ≈ 1.1 · 2^{-10}, as is expected in single precision.
Example 2 Given the three real numbers defined in the previous example, we now want to evaluate f(x, y, −z), again requiring an eight-bit accuracy in the result. Table 2 shows the sequence of steps performed to achieve the desired result.

Table 2 Avoiding cancellation issues when evaluating the function f(x, y, z) = x + y + z for the input data in Example 2.

The computation in single precision, as in the previous example, is described in step (a): it leads to a clear cancellation phenomenon and, once this is detected, the accuracy is improved by letting the ①^{-1} terms enter into play (step (b)). However, the relative error remains higher than the prescribed tolerance, and the accuracy needs to be improved by also considering the ①^{-2} terms. The computation is then repeated at step (c) and the correct result is finally achieved. Notice that, in performing steps (b) and (c), one can evidently exploit the work already carried out in the previous step. The overall procedure thus requires 6 additions/subtractions of grossdigits, the same number that would be needed by directly working with a 24-bit register which, for this case, is the minimum accuracy requirement to obtain eight correct bits in the result. This means that no extra effort is introduced during the steps. As a further remark, we stress again that a parallelization through the steps is also possible, even though we will not discuss this issue here.

Example 3 We want to evaluate f(x, −y, −z), requiring an eight-bit accurate result, now choosing

x = 2^0 · 1.0010101101010111111001001110110...,
y = 2^0 · 1.0010100010110010110101001101011...,
z = 2^{-7} · 1.0101000011011110010010110001010....

Table 3 shows the sequence of steps performed to achieve the desired result for this case. When working in single precision, an accuracy improvement is already needed when subtracting the first two terms X^{(0)} and Y^{(0)} and, consequently, step (a) is stopped. At step (b), the difference x − y is evaluated in double precision which, on balance, assures an eight-bit accuracy in the result. However, a new cancellation issue emerges when subtracting Z^{(0)} from X^{(1)} − Y^{(1)}, suggesting that the two terms need to be represented more accurately. This is done in step (c), evaluating x − y in triple precision and representing z in double precision. The overall procedure requires 5 additions/subtractions of grossdigits. This example, compared with the previous one, reveals the coexistence of variables combined with different precisions.
Summarizing the three examples above, we observe how the accuracy of representation and combination of variables may be dynamically changed, in order to overcome possible loss of significant figures in the result when evaluating a function. Of course, for this strategy to work, it is necessary that the input data are stored with high precision and a technique to detect the loss of accuracy be available. In Section 4 we will illustrate this procedure applied to the accurate determination of zeros of functions (a further example may be found in [1]).
Concerning the computational complexity, it should be noticed that Example 1 reflects the normal situation where the use of the standard precision is enough to produce a correct result, while Examples 2 and 3 highlight less frequent events.
Multiplication. Set t = 3 and T = 2 (three grossdigits, each with four significant digits), and consider the product of the two floating-point normalized numbers

X = 2^0 · 1.01101111100,    Y = 2^0 · 1.10111111101.

Table 4 summarizes the procedure by a sequence of commented steps. After expanding the input data along the negative powers of ① for data storage (step (a)), the convolution product described in (6) is performed (step (b)). At step (c), the contribution of each term is redistributed, and a sum is then needed to update the mantissas (step (d)). Steps (e) and (f) conclude the computation by normalizing and rounding the result. Notice that step (e) may be carried out by applying the rules for the addition described in Table 1. Again, we stress that the terms in the convolution product, as well as in the subsequent sum, may be computed in parallel.
Table 3 Avoiding cancellation issues when evaluating the function f(x, y, z) = x + y + z for the input data in Example 3.

Table 4 Scheme of the multiplication of two floating-point numbers.

Division. The division of two floating-point numbers X and Y is reduced to the multiplication of X by the reciprocal of Y. The latter, in turn, is obtained with the aid of the Newton-Raphson method applied to find the zero of the function f(Z) = 1/Z − Y. Hereafter, without loss of generality, we assume Y > 0. Starting from a suitable initial guess Z_0, the Newton iteration then reads

Z_{k+1} = Z_k (2 − Y Z_k),   k = 0, 1, ....   (13)

The relative error E_k = |1 − Y Z_k| satisfies

E_{k+1} = E_k^2,   (14)

which means that, as is expected in the presence of simple zeros, the sequence Z_k eventually converges quadratically to 1/Y, and the number of correct figures doubles at each iteration. This feature makes the division procedure extremely efficient in our context, since the required accuracy may be easily increased to an arbitrary level. In order to obtain such a good convergence rate starting from the very beginning of the sequence, the numerator X and the denominator Y are scaled by a suitable factor β^s so that Y := β^s Y lies in the interval [0.5, 1]. In the literature, the minimax linear polynomial approximation is often used to estimate the reciprocal of Y. The resulting initial guess is

Z_0 = 48/17 − (32/17) Y,

which assures an initial error E_0 ≤ 1/17. Taking into account the equality (14), the relative error at step k decreases as E_k = E_0^{2^k} and, consequently, assuming β = 2, a q-bit accurate approximation is obtained by setting

k = ⌈ log_2( (q + 1) / log_2 17 ) ⌉,

where ⌈·⌉ denotes the ceiling function. As an example, four iterations suffice to get an approximation with at least 32 correct digits. Table 5 shows the sequence generated from the scheme above applied to find, on the Infinity Computer, the reciprocal of the binary number Y = (1010)_2 (1/10 in decimal base), under the choice t = 3 and T = 7 (eight grossdigits, each with four significant figures).
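A plain double-precision Python sketch of the scheme (ours, not the emulator) confirms both the minimax initial guess and the iteration-count formula:

```python
from math import ceil, log2

def reciprocal(Y, q):
    """Approximate 1/Y to (at least) q-bit accuracy by the Newton iteration
    Z_{k+1} = Z_k * (2 - Y*Z_k); Y is assumed already scaled into [0.5, 1]."""
    Z = 48.0 / 17.0 - (32.0 / 17.0) * Y   # minimax initial guess, E_0 <= 1/17
    k = ceil(log2((q + 1) / log2(17)))    # iterations needed for q-bit accuracy
    for _ in range(k):
        Z = Z * (2.0 - Y * Z)
    return Z

# 1/0.625 = 1.6, requested to 32 bits: four iterations suffice
Z = reciprocal(0.625, 32)
print(abs(Z - 1.6) < 2.0 ** -32)   # True
```

With q = 32 the formula indeed yields k = 4, matching the remark above; the residual error is then limited only by the double-precision arithmetic of the host.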

Implementation details
We have developed a Matlab prototype emulating the Infinity Computer environment, interfaced with a module that performs the carrying, normalization and rounding processes required by the identification of ① with ❶, so as to ensure the proper functioning of the resulting dynamic floating-point arithmetic.
The emulator represents input real numbers using a set of binary grossdigits, whose length and number are defined by the two input parameters t and T . This latter parameter is used to define the maximum available accuracy for storing variables. In accord with formulae such as (5) and (6), the actual accuracy used to execute a single operation will depend on the accuracy of the two operands but cannot exceed T .
At the moment, the emulator implements the four basic operations following the strategies described above, plus some simple functions. The vectorization issue, to speed up the execution time associated with each floating-point operation, has not yet been addressed, so that all operations between grossdigits are executed sequentially.

Table 5 Newton iteration to compute the reciprocal of Y = 2^0 · 1010 on the Infinity Computer.
All computations reported in the present paper, including the results presented in the next section, have been carried out on an Intel i5 quad-core computer with 16GB of memory, running Matlab R2019b.

A numerical illustration
As an application highlighting the potentialities of the dynamic precision arithmetic introduced above, we consider the problem of determining accurate approximations of the zeros of a function f : [a, b] → R, in the case where this problem suffers from ill-conditioning issues.
The finite arithmetic representation of the function f introduces perturbation terms of different nature: analytical errors, errors in the coefficients or parameters involved in the definition of the function, or roundoff errors introduced during its evaluation.
From a theoretical point of view, these sources of errors may be accounted for by introducing a perturbation function g(x) and analyzing its effects on the zeros of the perturbed function f̃(x) := f(x) + ε g(x), where the factor ε has the size of the unit roundoff. Under regularity assumptions on f, if α ∈ (a, b) is a zero with multiplicity d > 0, it turns out that f̃(x) admits a perturbed zero α + δα, with the perturbing term δα satisfying, in first approximation,

|δα| ≈ | ε d! g(α) / f^{(d)}(α) |^{1/d}.   (15)

As an example, consider the polynomial

p(x) = x^5 − 5x^4 + 10x^3 − 10x^2 + 5x − 1,   (16)

that admits α = 1 as unique root with multiplicity d = 5 (indeed p(x) = (x − 1)^5). For this problem, from formula (15) we get

|δα| ≈ | ε g(1) |^{1/5}.   (17)

Working with 64-bit IEEE arithmetic, i.e. with roundoff unit u = 2^{-53}, we expect a breakdown of the relative error proportional to u^{1/5} ≈ 6.4 · 10^{-4} > 0.5 · 10^{-3}, so that, assuming |g(1)|^{1/d} ≈ 1, the approximation of the zero α only contains 3 or 4 correct figures. This is confirmed by the two plots in Figure 1. They display the relative error of the approximations to α generated by applying the Newton method

x_{k+1} = x_k − p(x_k)/p'(x_k),   k = 0, 1, ...,   (18)

to the problem p(x) = 0, choosing x_0 = 2 as initial guess. The solid line refers to the implementation of the iteration on the Infinity Computer using t = 52 and T = 0. This choice mimics the default double precision arithmetic in Matlab, which uses a 64-bit register to store a normalized binary number, 52 bits being dedicated to the (fractional part of the) mantissa. As a matter of fact, the dashed line, coming out from the implementation of the scheme using the standard Matlab arithmetic, precisely overlaps with the solid line as long as the error decreases, while the two lines slightly depart from each other when they reach the saturation level right below 10^{-3}, namely starting from step 32.
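This saturation phenomenon is reproducible in any IEEE double-precision environment. The following Python sketch (ours) applies the Newton iteration to the expanded coefficients of p(x) = (x − 1)^5: the error stalls a few orders of magnitude above machine precision, close to the predicted u^{1/5} level (the step-size guard is our own safeguard against noise-dominated corrections):

```python
# Newton's method on the expanded coefficients of p(x) = (x - 1)**5 in plain
# IEEE double precision: roundoff in evaluating p and p' limits the accuracy
# of the multiple root to roughly u**(1/5), not u.
coeffs  = [1.0, -5.0, 10.0, -10.0, 5.0, -1.0]   # p(x)  = (x - 1)**5 expanded
dcoeffs = [5.0, -20.0, 30.0, -20.0, 5.0]        # p'(x) = 5*(x - 1)**4 expanded

def horner(cs, x):
    p = 0.0
    for c in cs:
        p = p * x + c
    return p

x = 2.0
for _ in range(100):
    dp = horner(dcoeffs, x)
    if dp == 0.0:
        break
    step = horner(coeffs, x) / dp
    if abs(step) > 1.0:      # guard: correction dominated by roundoff noise
        break
    x = x - step
print(abs(x - 1.0))   # stalls roughly around 1e-3, far above 1e-16
```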
We now want to improve the accuracy of the approximation to the zero α = 1 of (16) by exploiting the new computational platform. Hereafter, the 53-bit precision used above will be referred to as single precision. The dashed lines in Figure 2 show the relative error reduction when the Newton method is implemented on the Infinity Computer working with multiple fixed precisions. From top to bottom, we can see the five saturation levels corresponding to the stagnation of the error at E_1 ≈ 6.8 · 10^{-4} in single precision, E_2 ≈ 3.7 · 10^{-7} in double precision, E_3 ≈ 2.0 · 10^{-10} in triple precision, E_4 ≈ 1.5 · 10^{-13} in quadruple precision, and E_5 ≈ 6.7 · 10^{-18} in quintuple precision. These saturation values are consistently predicted by formula (17), after replacing ε with 2^{-53k}, for k = 1, ..., 5. Now suppose we want 53 correct binary digits in the approximation (i.e., about 15 to 16 correct decimal digits). From the discussion above, it turns out that we have to activate the quintuple precision, thus setting t = 52 and T = 4 (five grossdigits, each consisting of a 53-bit register). However, the computational effort may be significantly reduced if we increase the accuracy by involving new negative grosspowers only when they are really needed. In a dynamic usage of the accuracy, starting from x_0, we can initially activate the single precision mode until we reach the first saturation level and, thereafter, switch to double precision until the second saturation level is reached, and so forth until we get the desired accuracy in the approximation. Denoting by err(k) the estimated error at step k, and by prec the current precision, initially set equal to 1, the points where an increase of the accuracy is needed may be automatically detected by employing a simple control scheme such as

    if err(k)>=s*err(k-1) and prec<=T
        prec=prec+1
    end

where s ≤ 1 is a positive safety factor that we have set equal to 1.
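The same control scheme can be mimicked with any multi-precision library. The sketch below is entirely ours (Python's decimal module, with an arbitrary step of 16 decimal digits per precision level, rather than the 53-bit grossdigits of the emulator); it escalates the working precision whenever the error estimate stagnates, exactly as prescribed above with s = 1:

```python
from decimal import Decimal, getcontext

coeffs  = [1, -5, 10, -10, 5, -1]    # p(x)  = (x - 1)**5 expanded
dcoeffs = [5, -20, 30, -20, 5]       # p'(x)

def horner(cs, x):
    p = Decimal(0)
    for c in cs:
        p = p * x + Decimal(c)
    return p

unit = 16                            # decimal digits per precision level (our choice)
prec_level, max_level = 1, 5
getcontext().prec = unit * prec_level
x = Decimal(2)
err_prev = Decimal("Infinity")
for _ in range(400):
    dp = horner(dcoeffs, x)
    if dp == 0:
        break
    x_new = x - horner(coeffs, x) / dp
    err = abs(x_new - x)
    if err >= err_prev and prec_level < max_level:
        prec_level += 1              # stagnation detected: raise the precision
        getcontext().prec = unit * prec_level
    x, err_prev = x_new, err
print(prec_level, abs(x - Decimal(1)))   # level 5 reached; error far below 1e-10
```

Each saturation level triggers one escalation, so the expensive high-precision arithmetic is only used in the final stretch of the iteration, in the same spirit as the dynamic mode described above.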
The solid line in Figure 2 shows the corresponding reduction of the error, and we can see that the change-of-precision scheme described above works quite well for this example, since all saturation levels are correctly detected and overcome. At step 162 the error reaches its minimum value of 2.2 · 10^{-16} and the iteration could be stopped by the standard criterion err(k) < 10^{-15} even though, for clarity, we have generated additional points to reveal the last saturation level, corresponding to prec = T + 1 = 5. Now, let us compare the computational cost of the dynamic implementation versus the fixed quintuple-precision one, considering that to reach the highest precision each mode requires 162 Newton iterations (see Figure 2). On the basis of the formula reported right below (6), the dynamic implementation would take about 2.4 · 10^3 grossdigit multiplications, while the fixed quintuple-precision implementation requires 2.0 · 10^4 grossdigit multiplications. It follows that the former mode would reduce the execution times by a factor of at least eight with respect to the latter. Actually, it does much better: the dynamic usage of variables and operations, understood as the ability of handling variables with different accuracies and executing operations on them, makes the resulting arithmetic definitely much more efficient than what emerged from the comparison above.
In carrying out the computation above, for the dynamic precision mode we have assumed that all floatingpoint operations were executed with the current selected precision. For example, under this assumption, the computational effort per step of the two modes would become equivalent starting from step 139 onwards since, at that step, the dynamic mode activates the quintuple precision to overcome the threshold level E 4 in Figure 2.
There is, however, one fundamental aspect that we have not yet considered. In fact, to overcome the ill-conditioning of the problem, the higher precision is only needed during the evaluation of p(x_k) and p'(x_k) in (18), while the single 53-bit precision is enough to handle the sequence x_k. In other words, to minimize the overall computational effort, we may improve the accuracy only in the part of the code that implements the Horner rule to evaluate the polynomial p(x) and its derivative.
Interestingly, we do not have to instruct the Infinity Computer to switch between single and quintuple precision: everything is done automatically and naturally and, more importantly, even during the evaluation of p(x_k) and p'(x_k), the transition from single to quintuple precision is gradual, in that all the intermediate precisions are actually involved only when really needed, which makes the whole machinery much more efficient.

Table 6 The Horner method for evaluating p(x) in (16) at x = 2^0 · 1.0000000000000000000000000000000000000000000001000010.
To better elucidate this aspect, we illustrate the sequence produced by the Horner rule to evaluate p(x_k) at step k = 145, where the quintuple precision is activated. The first column in Table 6 reports the five steps of the Horner method applied to evaluate the polynomial p(x) in (16) at the floating-point single precision number x = x_145 (its value is in the caption of the table). The variable p is initialized with the leading coefficient of the polynomial, but is allowed to store five grossdigits, each 53 bits long, to host floating-point numbers up to quintuple precision. From the table we see that, as the iteration scheme proceeds, new negative grosspowers appear in the values taken by the variable p. More precisely, at step k the variable p stores a k-fold precision floating-point number, for k = 1, ..., 5.
The increase of the precision by one unit at each step evidently arises from the product p · x, since x remains a single-precision variable and no rounding occurs. Let us better examine what happens at the last step. The product p · x generates a quintuple-precision number whose expansion along negative grosspowers matches the number 1 up to ①^{-3}. Consequently, the last operation p − 1 only contains significant digits in the coefficient of ①^{-4} so that, after normalization, p will again store a single-precision number that can be consistently combined inside formula (18).
In conclusion, the Horner procedure, though being enabled to operate in quintuple precision, actually involves lower precision numbers, except at the very last step. The five steps reported in Table 6 require 15 multiplications of grossdigits, with a clear saving of time, if we consider that the fixed quintuple-precision mode would require 125 multiplications of grossdigits. Comparing the execution times in Matlab over 162 steps, we found out that the dynamic-precision implementation is about 1.75 times slower than the single-precision implementation (which, however, stagnates at level E_1) and about 19 times faster than the quintuple-precision mode, thus confirming the expected efficiency.

Conclusions
We have proposed a variable precision floating-point arithmetic able to simultaneously store numbers and execute operations with different accuracies. This feature allows one to dynamically change the accuracy during the execution of a code, in order to counter inherent ill-conditioning issues associated with a given problem. In this context, the Infinity Computer has been recognized as a natural computational environment that can easily host such an arithmetic. The assumption that makes this paradigm work is the identification of the two symbols ① and ❶. The latter, defined as ❶ = β^{t+1}, is evidently a finite quantity for our numeral system but, in many respects, its reciprocal behaves as an infinitesimal-like entity in the numeral system induced by a floating-point arithmetic operating with t + 1 significant figures. In the same spirit as the Infinity Computer, it turns out that negative powers of ❶ may be used as "lenses" to increase and decrease the accuracy when needed. An emulator of this dynamic precision floating-point arithmetic has been developed in Matlab, and an application to the accurate solution of (possibly ill-conditioned) scalar nonlinear equations has been discussed.