Abstract
Properties of divisor functions \(\sigma _k(n)\), defined as sums of kth powers of all divisors of n, are studied through the analysis of Ramanujan’s differential equations. This system of three differential equations is singular at \(x=0\). Solution techniques suitable to tackle this singularity are developed and the problem is transformed into an analysis of a dynamical system. Number theoretical consequences of the presented dynamical system analysis are then discussed, including recursive formulas for divisor functions.
Introduction
In 1916, Ramanujan [8] showed that certain arithmetic functions satisfy a system of three singular differential equations. Denoting the scaled Eisenstein series by
where \(\ell \in {{\mathbb {N}}}\), \(x<1\), \(\sigma _k(n) = \sum _{d \mid n} d^k\) and \(c_\ell \) is a scaling constant, Ramanujan formulated his differential equations in terms of three dependent variables
with the choice of scaling constants
as the following system
System (1.4) may be derived alternatively through a triple product identity and a quintuple product identity [3], and that approach was later extended to derive similar differential equations for Eisenstein series of level 2 [10]. Ramanujan’s differential equations were previously mapped into a first order Riccati differential equation by [5, 6], with the solutions expressed in terms of hypergeometric functions after a sequence of transformations. Along these lines, Zudilin [12] provides further connections between Eisenstein series and hypergeometric functions. A similar method for cubic theta functions was given by Huber [7]. In the present paper we take a different approach, utilising a series development about the singular point \(x=0\). One benefit of the presented approach is that we are able to extract information about the Eisenstein series and divisor functions through recursive calculation of the series coefficients.
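As a concrete reference point for the series (1.1), the divisor functions can be computed directly from their definition; a minimal Python sketch:

```python
# Divisor function sigma_k(n): the sum of the k-th powers of all positive divisors of n.
# A minimal direct implementation; the series (1.1) are built from these values.

def sigma(k: int, n: int) -> int:
    """Return the sum of d**k over all positive divisors d of n."""
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

# The coefficients of the three scaled Eisenstein series involve sigma_1, sigma_3, sigma_5:
print([sigma(1, n) for n in range(1, 7)])  # [1, 3, 4, 7, 6, 12]
print([sigma(3, n) for n in range(1, 7)])  # [1, 9, 28, 73, 126, 252]
print([sigma(5, n) for n in range(1, 7)])  # [1, 33, 244, 1057, 3126, 8052]
```

Enumerating divisors only up to \(\sqrt{n}\) would be faster for large \(n\), but the direct sum suffices for illustration.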
The paper is organized as follows. In Sect. 2, we rewrite Ramanujan’s system (1.4) in terms of variables \(U_1\), \(U_2\), and \(U_3\) defined by (1.1) and derive a recursive relation for their solutions. It is natural to wonder if there are other solution branches, and in Sect. 3, we transform this singular system of differential equations to a regular system by changing the independent variable from x to \(t = - \log x\). The large-time behaviour of the resulting system is then investigated, with different steady states corresponding to different initial conditions at \(x=0\) for the original system. We present the number theoretical consequences of our analysis in Sect. 4, and conclude with a discussion of the obtained results in Sect. 5.
Recursive formula to solve Ramanujan’s differential equations
We outline the series development which will prove useful in solving Ramanujan’s differential equations. It will be helpful to work in terms of the functions \(U_\ell \) rather than P, Q, R, and substituting (1.2) into (1.4), we obtain
Denoting the vector \({{{\mathbf {U}}}} = [U_1,U_2,U_3]^{\mathrm {T}}\), this system of equations can be equivalently described in a matrix form as
where matrix \(A \in {{{\mathbb {R}}}}^{3 \times 3}\) and vector-valued function \({{{\mathbf {b}}}}: {{{\mathbb {R}}}}^{3} \rightarrow {{{\mathbb {R}}}}^{3}\) are defined by
and
We observe that the matrix A has eigenvalues 1 (with multiplicity 1) and 0 (with multiplicity 2). The eigenvector corresponding to eigenvalue 1 is proportional to
i.e., it contains information about scaling constants (1.3) used by Ramanujan.
Considering the differential equation system (2.2) on its own in the context of the theory of differential equations, the first two fundamental questions would concern the existence and uniqueness of its solutions, i.e.:

(i)
Does system (2.2) have a solution?

(ii)
If a solution exists, is it unique?
The answer to the first question is trivial, because the system (2.2) is derived for \(U_1\), \(U_2,\) and \(U_3\) defined by (1.1), i.e. there is at least one solution given by series (1.1). The second question is more important here, because if we can prove the uniqueness of solutions to the differential equation system (2.2), then any properties of solutions which can be obtained by analyzing differential equations (2.2) will also immediately give us properties of arithmetic functions (1.1).
To obtain uniqueness of solutions, the standard Picard theorem [4] for first-order ordinary differential equations can be applied, because the right hand side of (2.2) is Lipschitz continuous for any interval not containing \(x=0\). That is, if we specify the value of \({{{\mathbf {U}}}}(x_0)\) at a given point \(x_0 \ne 0\) as the initial condition of the system (2.2), then the application of the Picard theorem would imply the existence and uniqueness of solutions of (2.2) on an interval containing \(x_0\). Unfortunately, the knowledge of \({{{\mathbf {U}}}}(x_0)\) at \(x_0 \ne 0\) requires one to know some nontrivial information about functions \(U_1\), \(U_2\), and \(U_3\) defined by (1.1). Thus, we consider the singular case, \(x=0\), as our initial condition, for which the standard Picard theorem is not applicable, but the value of \({{{\mathbf {U}}}}(0)\) can be easily obtained. Substituting \(x=0\) in definition (1.1), we get
Considering our differential equation system (2.2) with initial condition (2.6), we observe that it has at least two solutions: one of them is given by the functions \(U_1\), \(U_2\), and \(U_3\) defined by (1.1), and the other is the trivial solution where the functions \(U_1\), \(U_2\), and \(U_3\) are all identically equal to zero. This non-uniqueness is caused by the singularity which the differential equation system (2.2) has on the left hand side when \(x=0\), making the Picard theorem inapplicable. To look for all possible analytic solutions, we assume that the solution of (2.2) is written as the series expansion
where \({{{\mathbf {a}}}}(n) \equiv [a_1(n),a_2(n),a_3(n)]^{\mathrm {T}}\) are coefficients to be determined. Substituting (2.7) into (2.2), we obtain
where I is the identity matrix. Since matrix A, given by (2.3), has eigenvalues 0 (with multiplicity 2) and 1 (with multiplicity 1), matrix \(n I - A\) on the left hand side of (2.8) has eigenvalues n (with multiplicity 2) and \(n-1\) (with multiplicity 1) for \(n=1,2,3,\ldots .\) Moreover, the eigenvector corresponding to the eigenvalue \(n-1\) is proportional to \({{{\mathbf {c}}}}\) given by (2.5).
Next, we compare coefficients in front of the corresponding terms \(x^n\) on the left and right hand sides of Eq. (2.8). Using (2.4), we observe that \({{{\mathbf {b}}}}\) is quadratic. Therefore the right hand side of Eq. (2.8) has no terms of the lowest order, \(x^1\), while the corresponding coefficient on the left hand side yields
Since matrix \(I - A\) has eigenvalue 0 (with multiplicity 1), we can express the solutions of this system as
where \({{{\mathbf {c}}}}\) is given by (2.5) and \(\alpha \in {{{\mathbb {R}}}}\) is a constant. Formula (2.10) includes both solutions which we are already aware of: the trivial zero solution corresponds to \(\alpha =0\) and the solution given by (1.1) corresponds to \(\alpha =1.\) We also observe that there is a possibility that the system could have solutions for other values of \(\alpha \). To show that this is indeed the case, we consider the coefficients in front of the corresponding terms \(x^2\) on the left and right hand sides of Eq. (2.8). We have
Since matrix \(2 I - A\) has eigenvalues 2 and 1, it is invertible. Thus, we have
which gives us \(\alpha \)-dependent solutions for coefficient \({{{\mathbf {a}}}}(2)\). Repeating this for all orders \(x^n\), we arrive at the following lemma.
Lemma 1
The system of differential equations (2.2) with the initial condition (2.6) has the one-parameter family of series solutions parametrized by \(\alpha \in {{{\mathbb {R}}}},\) where the first coefficient \({{{\mathbf {a}}}}(1)\) is given by (2.10), the second coefficient \({{{\mathbf {a}}}}(2)\) by (2.11) and other coefficients can be obtained iteratively by
We note that Formula (2.12) reduces to (2.11) for \(n=2.\) In Fig. 1, we plot functions \(U_1(x)\) and \(U_2(x)\) for representative values of parameter \(\alpha \). A qualitatively similar plot can also be obtained for \(U_3(x)\) (graph not shown). We observe that functions \(U_\ell (x)\), for \(\ell =1,2,3\), are increasing functions of x with \(U_\ell (x) \rightarrow \infty \) as \(x \rightarrow 1^{-}\). For a fixed value of x, the value of \(U_\ell (x)\) is also an increasing function of parameter \(\alpha .\) We use the first one hundred terms in the series expansion (2.7) to approximate \({{{\mathbf {U}}}}(x)\) numerically. Considering additional terms would not change the computed result (to the machine precision) in the visualized interval \(x \in [0,0.5]\).
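To make the recursion of Lemma 1 concrete, the following sketch computes the coefficients \({{{\mathbf {a}}}}(n)\) for \(\alpha = 1\) in exact rational arithmetic. Since the explicit forms of \(A\) and \({{{\mathbf {b}}}}\) in (2.3)–(2.4) depend on the chosen normalization, the code assumes \(U_\ell (x) = \sum _{n \ge 1} \sigma _{2\ell -1}(n)\, x^n\) (so that \(P = 1 - 24 U_1\), \(Q = 1 + 240 U_2\), \(R = 1 - 504 U_3\)); the matrix and the quadratic nonlinearity below are re-derived from (1.4) under this assumption and may differ from (2.3)–(2.4) by scaling.

```python
from fractions import Fraction as F

# Under the assumed normalization U_l(x) = sum_n sigma_{2l-1}(n) x^n, Ramanujan's
# system (1.4) becomes x U' = A U + b(U) with this linear part A:
A = [[F(1, 6),   F(5, 6),   F(0)],
     [F(-1, 30), F(1, 3),   F(7, 10)],
     [F(1, 42),  F(10, 21), F(1, 2)]]

def quad(a, n):
    """Coefficient of x^n in the quadratic nonlinearity b(U)."""
    c11 = sum(a[m][0] * a[n - m][0] for m in range(1, n))  # [U1^2]_n
    c12 = sum(a[m][0] * a[n - m][1] for m in range(1, n))  # [U1*U2]_n
    c22 = sum(a[m][1] * a[n - m][1] for m in range(1, n))  # [U2^2]_n
    c13 = sum(a[m][0] * a[n - m][2] for m in range(1, n))  # [U1*U3]_n
    return [-2 * c11, -8 * c12, F(400, 7) * c22 - 12 * c13]

def solve3(M, r):
    """Solve the 3x3 linear system M y = r exactly by Gauss-Jordan elimination."""
    M = [row[:] + [ri] for row, ri in zip(M, r)]
    for i in range(3):
        p = next(k for k in range(i, 3) if M[k][i] != 0)
        M[i], M[p] = M[p], M[i]
        for k in range(3):
            if k != i:
                f = M[k][i] / M[i][i]
                M[k] = [mk - f * mi for mk, mi in zip(M[k], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

N = 10
a = [[F(0)] * 3, [F(1), F(1), F(1)]]   # a(1) = (1,1,1), i.e. alpha = 1 in (2.10)
for n in range(2, N + 1):              # recursion (n I - A) a(n) = [b(U)]_n
    nIA = [[n * (i == j) - A[i][j] for j in range(3)] for i in range(3)]
    a.append(solve3(nIA, quad(a, n)))

# The coefficients recover the divisor functions sigma_1, sigma_3, sigma_5:
sigma = lambda k, n: sum(d**k for d in range(1, n + 1) if n % d == 0)
assert all(a[n] == [sigma(1, n), sigma(3, n), sigma(5, n)] for n in range(1, N + 1))
```

With \(\alpha = 1\), the recursion reproduces the divisor functions exactly, as the final assertion checks.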
Although Lemma 1 states that there is a one-parameter family of solutions, all these solutions are self-similar and can be collapsed into one by rescaling the independent variable x accordingly. This is formalized in the next lemma.
Lemma 2
All analytic solutions of the system of differential equations (2.2) with initial condition (2.6) are given as
where \(U_1\), \(U_2,\) and, \(U_3\) are functions defined by (1.1) and \(\alpha \in {{{\mathbb {R}}}}\).
The trivial zero solution is recovered from the solution Formula (2.13) for \(\alpha =0\). Increasing \(\alpha \) from 0 to 1, the solution Formula (2.13) connects the zero solution with Ramanujan’s solution given by (1.1). Lemma 2 is a statement of uniqueness of solutions which will help us to translate some properties of the differential equation system into properties of series (1.1) and divisor functions. The iterative Formula (2.12) in Lemma 1 will give us an iterative formula for divisor functions. We will discuss such number theoretical consequences in Sect. 4.
We note that the statement of uniqueness of solutions in Lemma 2 would not hold if we replaced our condition that solutions are analytic with a weaker condition that solutions were only differentiable. Indeed, a solution given by (2.13) with \(\alpha =1\) for \(x \ge 0\) could be continued for \(x<0\) by (2.13) for any other value of parameter \(\alpha \in {{{\mathbb {R}}}}\).
Dynamical system analysis of Ramanujan’s differential equations
Through the series development of Sect. 2 we have been able to use series in a neighbourhood of \(x=0\) in order to extract salient features of the solutions to Ramanujan’s differential equations, identifying a one-parameter family of solutions (2.13) to Eq. (2.2). Due to the singular nature of these equations, it is not clear whether the only solutions originate with \(U_\ell (0)=0\) for \(\ell =1,2,3\), or whether there are other solution branches which are fundamentally singular. In this section, we change the independent variable and treat Ramanujan’s differential equations as a dynamical system which evolves toward a condition as \(x\rightarrow 0^{+}\).
Consider the system of differential equations (2.2) and transform the independent variable x to t by using \(t = - \log x\). Then the limit \(x\rightarrow 0^+\) corresponds to \(t\rightarrow \infty \). We have
where \({{{\mathbf {V}}}} \equiv {{{\mathbf {V}}}}(t) = {{{\mathbf {U}}}}(\exp (-t))\), matrix \(A \in {{{\mathbb {R}}}}^{3 \times 3}\) is given by (2.3), and vector-valued function \({{{\mathbf {b}}}}: {{{\mathbb {R}}}}^{3} \rightarrow {{{\mathbb {R}}}}^{3}\) is given by (2.4). Initial conditions (2.6) transform to limiting values of function \({{{\mathbf {V}}}} = [V_1,V_2,V_3]\) at \(t=\infty \), namely
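As a sanity check of this change of variables, one can integrate the transformed system numerically and compare with the scaled Eisenstein series evaluated at \(x = \exp (-t)\). The sketch below again assumes the normalization \(U_\ell (x) = \sum _{n\ge 1} \sigma _{2\ell -1}(n)\, x^n\); the explicit \(A\) and \({{{\mathbf {b}}}}\) are re-derived from (1.4) under this assumption rather than quoted from (2.3)–(2.4).

```python
import numpy as np

# Integrating dV/dt = -(A V + b(V)) forward in t (classical RK4) should track
# U(exp(-t)), where U is the scaled Eisenstein series (1.1), assuming the
# normalization U_l(x) = sum_n sigma_{2l-1}(n) x^n.
A = np.array([[1/6, 5/6, 0.0], [-1/30, 1/3, 7/10], [1/42, 10/21, 1/2]])

def b(v):
    u1, u2, u3 = v
    return np.array([-2*u1**2, -8*u1*u2, 400/7*u2**2 - 12*u1*u3])

def rhs(v):
    return -(A @ v + b(v))

sigma = lambda k, n: sum(d**k for d in range(1, n + 1) if n % d == 0)

def U(x, terms=40):
    """Truncated series (1.1) under the assumed normalization."""
    return np.array([sum(sigma(k, n) * x**n for n in range(1, terms + 1))
                     for k in (1, 3, 5)])

x0 = 0.2                    # start on the Ramanujan trajectory at x = 0.2
v = U(x0)
h, steps = 1e-3, 1000       # integrate over a t-interval of length 1
for _ in range(steps):
    k1 = rhs(v); k2 = rhs(v + h/2*k1); k3 = rhs(v + h/2*k2); k4 = rhs(v + h*k3)
    v = v + h/6*(k1 + 2*k2 + 2*k3 + k4)

# After time t = 1, the state should equal U(x0 * exp(-1)):
assert np.allclose(v, U(x0 * np.exp(-1)), atol=1e-6)
```

The trajectory contracts toward the origin as \(t\) grows, consistent with the limiting values at \(t=\infty\) discussed above.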
To get some insights into this limiting behaviour, we investigate the steady states of our differential equation system (3.1). To do this, we denote the Jacobian matrix of the vector-valued function \({{{\mathbf {b}}}}\) in definition (2.4) by
Solving the steady state equations
corresponding to the system (3.1) and analyzing the stability of the steady states found, we obtain the following lemma.
Lemma 3
All steady state solutions of the differential equation system (3.1) are given as a curve, parametrized by \(\beta \in {{{\mathbb {R}}}},\) in the form
Denoting \({{{\mathbf {V}}}} = {{{\mathbf {s}}}}(\beta ) + {{{\mathbf {v}}}}\) and linearizing the system (3.1) around the steady state (3.4) corresponding to \(\beta \in {{{\mathbb {R}}}},\) we obtain a linear system of differential equations,
where the matrix \(-\big (A + J({{{\mathbf {s}}}}(\beta ))\big )\) has eigenvalues \(0\) (with multiplicity \(2\)) and \(-\beta \) (with multiplicity \(1\)). The eigenvector corresponding to the eigenvalue \(-\beta \) is given as
In Fig. 2, we plot the steady state curve for a range of positive values of parameter \(\beta \). We visualize it as a black dotted-dashed line in Fig. 2a which shows its projection to the \((V_1,V_2)\)-plane. We also plot solutions converging to a representative selection of steady states (highlighted as black dots). Considering \(\beta =1\) in (3.4), we obtain the zero steady state \({{{\mathbf {s}}}}(1)=(0,0,0)^{\mathrm {T}}\) corresponding to limiting values (3.2). It has one linearly stable direction (with eigenvalue \(-1\)) and the corresponding eigenvector (3.5) is proportional to vector \({{{\mathbf {c}}}}\) given by (2.5), which is also the first coefficient of the series solution (2.7), see Eq. (2.10). This series solution is visualized as the red trajectory in Fig. 2—two different branches correspond to positive and negative values of parameter \(\alpha \). We note that all solutions converging to \({{{\mathbf {s}}}}(1)=(0,0,0)^{\mathrm {T}}\) are given by Lemma 2, which means that they are all represented by the red line in Fig. 2—they only correspond to different rescalings of the independent variable x. In fact, if we included \(\alpha \) in the transformation of the independent variable x to t by \(t = - \log (\alpha x)\), we would obtain the same differential equation (3.1).
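The steady-state structure in Lemma 3 can be verified numerically. Under the same assumed normalization \(U_\ell (x) = \sum _n \sigma _{2\ell -1}(n)\, x^n\), the steady states correspond to \((P,Q,R) = (\beta , \beta ^2, \beta ^3)\) in (1.2), which gives an explicit parametrization of the curve; our labeling of \(\beta \) may differ from (3.4) by a relabeling:

```python
import numpy as np

# Numerical check of Lemma 3 under the assumed normalization
# U_l = sum_n sigma_{2l-1}(n) x^n: the steady states correspond to
# (P, Q, R) = (beta, beta^2, beta^3), and -(A + J(s(beta))) should have
# eigenvalues {0, 0, -beta}.
A = np.array([[1/6, 5/6, 0.0], [-1/30, 1/3, 7/10], [1/42, 10/21, 1/2]])

def b(v):
    u1, u2, u3 = v
    return np.array([-2*u1**2, -8*u1*u2, 400/7*u2**2 - 12*u1*u3])

def J(v):
    """Jacobian matrix of the quadratic nonlinearity b."""
    u1, u2, u3 = v
    return np.array([[-4*u1,   0.0,      0.0],
                     [-8*u2,   -8*u1,    0.0],
                     [-12*u3,  800/7*u2, -12*u1]])

def s(beta):
    """Steady-state curve: P = beta, Q = beta^2, R = beta^3 in (1.2)."""
    return np.array([(1 - beta)/24, (beta**2 - 1)/240, (1 - beta**3)/504])

for beta in [0.5, 1.0, 2.0, 3.0]:
    assert np.allclose(A @ s(beta) + b(s(beta)), 0)      # steady state of (3.1)
    eig = np.sort(np.linalg.eigvals(-(A + J(s(beta)))).real)
    assert np.allclose(eig, sorted([0.0, 0.0, -beta]), atol=1e-6)
```

The zero eigenvalue has algebraic multiplicity two but geometric multiplicity one, so the numerically computed pair near zero can split by roughly the square root of machine precision; the tolerance above accounts for this.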
In Fig. 2, we also plot solutions converging to the steady states \({{{\mathbf {s}}}}(\beta )\) for \(\beta \ne 1\). They are visualized as blue lines. Their long-time behaviour satisfies
where \({{{\mathbf {s}}}}(\beta )\) is given by (3.4). Transforming back to the original variable \(x = \exp (-t)\), the limiting condition (3.6) is equivalent to the initial condition
In particular, we generalize (2.7) to the series solution
where \({{{\mathbf {a}}}}(n; \beta ) \equiv [a_1(n; \beta ),a_2(n; \beta ),a_3(n; \beta )]^{\mathrm {T}}\) are coefficients to be determined. Substituting (3.8) into (2.2) and using \(A \, {{{\mathbf {s}}}}(\beta ) + {{{\mathbf {b}}}}({{{\mathbf {s}}}}(\beta )) = 0\), we obtain
which is the generalization of (2.8) to the case \(\beta \ne 1\). Using Lemma 3, matrix \(A + J({{{\mathbf {s}}}}(\beta ))\) has eigenvalues \(\beta \) (with multiplicity 1) and 0 (with multiplicity 2). Thus the matrix \(n \beta I - A - J({{{\mathbf {s}}}}(\beta ))\) on the left hand side of (3.9) has eigenvalues \(n \beta \) (with multiplicity 2) and \((n-1) \beta \) (with multiplicity 1) for \(n=1,2,3,\ldots .\) Comparing coefficients of the lowest order \(x^{\beta }\) on the left and right hand sides of Eq. (3.9), we generalize (2.9) to
Since matrix \(\beta I - A - J({{{\mathbf {s}}}}(\beta ))\) has eigenvalue 0 (with multiplicity 1) with the corresponding eigenvector proportional to (3.5), we can express the solutions of this system as
Moreover, we can generalize Lemma 1 to the following result giving us a recursive formula for finding series solutions (3.8) for general values of parameter \(\beta .\)
Lemma 4
The system of differential equations (2.2) with the initial condition (3.7) has the one-parameter family of series solutions (3.8) parametrized by \(\alpha \in {{{\mathbb {R}}}},\) where the first coefficient \({{{\mathbf {a}}}}(1; \beta )\) is given by (3.10) and other coefficients can be obtained iteratively by
Considering \(\beta =1\), Lemma 4 reduces to Lemma 1, i.e. \({{{\mathbf {a}}}}(n)\) in (2.12) is equal to \({{{\mathbf {a}}}}(n; 1)\) given by (3.11). Considering general values of \(\beta \), we use the recursive Formula (3.11) in Lemma 4 for \(\alpha =1\) and \(\alpha =-1\) to obtain solutions converging to \({{{\mathbf {s}}}}(\beta )\) which are visualized in Fig. 2 as blue trajectories. The solution for any other positive (resp. negative) value of \(\alpha \) corresponds to the case \(\alpha =1\) (resp. \(\alpha =-1\)), because the parameter \(\alpha \) rescales the independent variable in a similar way to what we have already observed in Lemma 2 for the case \(\beta =1.\) In Fig. 2b, we use a higher number of representative blue trajectories (than in Fig. 2a) and observe that we have generalized the scaled Eisenstein series (1.1) (red line in Fig. 2b) to the blue surface in the \((V_1,V_2,V_3)\)-phase space (the surface swept by blue trajectories).
Number theoretical consequences
The evaluation of sums of the form \(\sum \sigma (m)\sigma (n-m)\) has attracted interest in the literature [1, 2], and we use our results to calculate certain sums of this type in terms of the coefficients of solutions to Ramanujan’s differential equations. Considering \(\alpha =1\) in Formula (2.12) and comparing with (1.1), we obtain the following iterative relation between divisor functions
where \(c_\ell \) is given by (2.5). This formula can be iteratively used to compute the values of \(\sigma _1(n)\), \(\sigma _3(n)\), and \(\sigma _5(n)\). It can also be rewritten in the form of convolution identities. Following Ramanujan’s notation [8], we denote
where the definition of \(\sigma _k(n)\) is extended to \(n=0\) by \(\sigma _k(0) = \zeta (-k)/2,\) namely
where \(c_\ell \) is given by (2.5). Multiplying the iterative Formula (4.1) by the diagonal matrix with vector \([c_1,c_2,c_3]\) on the diagonal, then by matrix \(A - nI\) and using (2.4), we obtain
The first two lines of this vector system yield formulas for \({\varSigma }_{1,1}(n)\) and \({\varSigma }_{1,3}(n)\) which appear in Table IV of Ramanujan’s paper [8]. The last line is also consistent with his results. In the same table, he writes
Using this result to calculate \(84 \, {\varSigma }_{1,5}(n) - 400 \, {\varSigma }_{3,3}(n)\), we obtain the last line of our vector system (4.3).
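As a concrete instance of these convolution identities, the classical evaluation of \(\sum _{m=1}^{n-1} \sigma _1(m)\, \sigma _1(n-m)\), which underlies the \({\varSigma }_{1,1}(n)\) entry of Ramanujan’s Table IV (here written without the \(\sigma _k(0)\) boundary terms), can be checked directly:

```python
# Check of the classical convolution identity behind the first line of (4.3):
#   12 * sum_{m=1}^{n-1} sigma_1(m) sigma_1(n-m) = 5 sigma_3(n) + (1 - 6n) sigma_1(n).
sigma = lambda k, n: sum(d**k for d in range(1, n + 1) if n % d == 0)

for n in range(2, 60):
    lhs = sum(sigma(1, m) * sigma(1, n - m) for m in range(1, n))
    assert 12 * lhs == 5 * sigma(3, n) + (1 - 6 * n) * sigma(1, n)
```

For example, \(n = 10\) gives \(\sum _{m=1}^{9} \sigma _1(m)\sigma _1(10-m) = 384 = (5 \cdot 1134 - 59 \cdot 18)/12\).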
Considering general values of the parameter \(\beta >0\), we can also connect the coefficients calculated by the general recursive Formula (3.11) with divisor functions.
Lemma 5
Let \(\beta >0\). Consider the system of differential equations (2.2) with the initial condition (3.7). Assume \(\alpha =\beta \) and consider the solution \({{{\mathbf {U}}}}(x; \beta )\) given by series (3.8) which is calculated using Formula (3.11) in Lemma 4. Then the coefficients \({{{\mathbf {a}}}}(n; \beta )\) are related to divisor functions by
Relation (4.4) implies
which connects the general solution \({{{\mathbf {U}}}}(x; \beta )\) for \(\alpha =\beta \) with the scaled Eisenstein series \({{{\mathbf {U}}}}(x)\) given by (1.1). Consequently, the recursive Formula (3.11) in Lemma 4 can also be rewritten as a recursive formula for calculating \(\sigma _1(n)\), \(\sigma _3(n)\), and \(\sigma _5(n)\), in a similar way as we did when deriving (4.1) in the special case \(\beta =1\).
Discussion
We have employed both a series development and a dynamical systems approach to better understand solutions of Ramanujan’s equations (1.4). Our results imply the existence of a one-parameter family of solutions to these equations which comprise a similarity scaling of the scaled Eisenstein series (1.1), in addition to another class of solutions which is not zero at \(x=0\). This latter class of solutions can, however, be brought into the form of the scaled Eisenstein series through a shift of the dependent variable and a scaling of the independent variable. This suggests that the vital information encoded in these series through their coefficients is invariant under Ramanujan’s differential equations, modulo shifting and scaling, and that the value of specific divisor functions remains encapsulated in these series solutions. In addition to their intrinsic interest, Ramanujan’s differential equations (1.4) give information about certain Eisenstein series, and we demonstrate that our results give an alternate approach to obtain formulae involving sums of products of divisor functions.
The results we obtain can be used to better understand solutions of related differential equations of relevance to the Eisenstein series. In addition to the Eisenstein series which satisfy Ramanujan’s differential equations (1.4), we remark that solutions of various second-order differential equations with coefficients involving the Eisenstein series have also attracted some attention [9]. Treating the Eisenstein series in the manner of (2.7), one can then solve such second-order differential equations with a series, making use of the Cauchy product of the series for the unknown function with our series representation for the Eisenstein series.
The algebraic independence of the functions P, Q, R in (1.2) and hence of \(U_1\), \(U_2\), \(U_3\) in (1.1) was discussed in [11]. It is worth noting that additional relations exist between \(U_\ell \) for \(\ell \ge 4\), with the first several of these shown in Table I of [8]. One can then express \(U_\ell \) for \(\ell \ge 4\) in terms of algebraic combinations of the \(U_1\), \(U_2\), and \(U_3\) variables. As an example, from entry 4 in Table I of [8] we have that \(1+480 \, U_{4}=Q^2 = \left( 1+U_2\right) ^2\). Defining \(S=Q^2\), we see that
Rewriting this as an equation involving \(U_4\) by taking \(c_4 = 480\), one obtains a fourth-order analogue of the third-order system (2.1). Continuing in this manner, one may obtain higher-order analogues of system (2.1) involving \(U_1, U_2, \ldots , U_N\) for \(N\ge 4\), and using the approach we outline for (2.1), one may obtain the series coefficients recursively in a similar manner, providing alternate derivations for formulae analogous to (4.3).
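As an illustration, comparing coefficients of \(x^n\) in \(1 + 480 \sum _n \sigma _7(n) x^n = \big (1 + 240 \sum _n \sigma _3(n) x^n\big )^2\) yields the classical identity \(\sigma _7(n) = \sigma _3(n) + 120 \sum _{m=1}^{n-1} \sigma _3(m)\, \sigma _3(n-m)\), which the following sketch checks:

```python
# Coefficient comparison in 1 + 480*sum sigma_7(n) x^n = (1 + 240*sum sigma_3(n) x^n)^2,
# i.e. entry 4 of Table I of Ramanujan's paper, gives:
#   sigma_7(n) = sigma_3(n) + 120 * sum_{m=1}^{n-1} sigma_3(m) sigma_3(n-m).
sigma = lambda k, n: sum(d**k for d in range(1, n + 1) if n % d == 0)

for n in range(1, 40):
    conv = sum(sigma(3, m) * sigma(3, n - m) for m in range(1, n))
    assert sigma(7, n) == sigma(3, n) + 120 * conv
```

For instance, \(n = 2\) gives \(\sigma _7(2) = 129 = 9 + 120 \cdot 1\), expressing \(\sigma _7\) entirely in terms of \(\sigma _3\).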
References
 1.
Alaca, S., Williams, K.: Evaluation of the convolution sums \(\sum _{l+6m=n} \sigma (l) \sigma (m)\) and \(\sum _{2l+3m=n} \sigma (l) \sigma (m)\). J. Number Theory 124(2), 491–510 (2007)
 2.
Aygin, Z., Hong, N.: Ramanujan’s convolution sum twisted by Dirichlet characters. Int. J. Number Theory 15(1), 137–152 (2019)
 3.
Chan, H.: Triple product identity, quintuple product identity and Ramanujan’s differential equations for the classical Eisenstein series. Proc. Am. Math. Soc. 135(7), 1987–1992 (2007)
 4.
Coddington, E., Levinson, N.: Theory of Ordinary Differential Equations. McGraw-Hill Book Company, Inc., New York (1955)
 5.
Hill, J., Berndt, B., Huber, T.: Solving Ramanujan’s differential equations for Eisenstein series via a first order Riccati equation. Acta Arith. 128(3), 281–294 (2007)
 6.
Huber, T.: Basic representations for Eisenstein series from their differential equations. J. Math. Anal. Appl. 350(1), 135–146 (2009)
 7.
Huber, T.: Differential equations for cubic theta functions. Int. J. Number Theory 7(7), 1945–1957 (2011)
 8.
Ramanujan, S.: On certain arithmetical functions. Trans. Camb. Philos. Soc. 22, 159–184 (1916)
 9.
Sebbar, A., Sebbar, A.: Eisenstein series and modular differential equations. Can. Math. Bull. 55(2), 400–409 (2012)
 10.
Toh, P.: Differential equations satisfied by Eisenstein series of level 2. Ramanujan J. 25, 179–194 (2011)
 11.
Zudilin, W.: Thetanulls and differential equations. Mat. Sb. 191(12), 77–122 (2000)
 12.
Zudilin, W.: The hypergeometric equation and Ramanujan functions. Ramanujan J. 7(4), 435–447 (2003)
Acknowledgements
Radek Erban would like to thank the Royal Society for a University Research Fellowship.
Erban, R., Van Gorder, R.A. On the sum of positive divisors functions. Res. number theory 7, 25 (2021). https://doi.org/10.1007/s40993021002406
Keywords
 Eisenstein series
 Ramanujan differential equations
 Singular equations
Mathematics Subject Classification
 11M36
 34C05
 34A12