# On the sum of positive divisors functions

## Abstract

Properties of divisor functions $$\sigma _k(n)$$, defined as sums of k-th powers of all divisors of n, are studied through the analysis of Ramanujan’s differential equations. This system of three differential equations is singular at $$x=0$$. Solution techniques suitable to tackle this singularity are developed and the problem is transformed into an analysis of a dynamical system. Number theoretical consequences of the presented dynamical system analysis are then discussed, including recursive formulas for divisor functions.

## Introduction

In 1916, Ramanujan showed that certain arithmetic functions satisfy a system of three singular differential equations. Denoting the scaled Eisenstein series by

\begin{aligned} U_\ell (x) = c_\ell \sum _{n=1}^\infty \sigma _{2\ell -1}(n) \, x^n, \end{aligned}
(1.1)

where $$\ell \in {{\mathbb {N}}}$$, $$|x|<1$$, $$\sigma _k(n) = \sum _{d|n} d^k$$ and $$c_\ell$$ is a scaling constant, Ramanujan formulated his differential equations in terms of three dependent variables

\begin{aligned} P = 1 - U_1, \quad Q = 1 + U_2, \quad R = 1 - U_3 \end{aligned}
(1.2)

with the choice of scaling constants

\begin{aligned} c_1 = 24, \quad c_2 = 240, \quad c_3 = 504 \end{aligned}
(1.3)

as the following system

\begin{aligned} x \frac{\mathrm{d}P}{\mathrm{d}x} = \frac{P^2 - Q}{12}, \end{aligned}
(1.4a)
\begin{aligned} x \frac{\mathrm{d}Q}{\mathrm{d}x} = \frac{P Q - R}{3}, \end{aligned}
(1.4b)
\begin{aligned} x \frac{\mathrm{d}R}{\mathrm{d}x} = \frac{P R - Q^2}{2}. \end{aligned}
(1.4c)

System (1.4) may alternatively be derived through a triple product identity and a quintuple product identity, and that approach was later extended to derive similar differential equations for Eisenstein series of level 2. Ramanujan’s differential equations were previously mapped into a first-order Riccati differential equation [5, 6], with the solutions expressed in terms of hypergeometric functions after a sequence of transformations. Along these lines, Zudilin provides further connections between Eisenstein series and hypergeometric functions. A similar method for cubic theta functions was given by Huber. In the present paper we take a different approach, utilising a series development about the singular point $$x=0$$. One benefit of this approach is that we can extract information about the Eisenstein series and divisor functions through recursive calculation of the series coefficients.

The paper is organized as follows. In Sect. 2, we rewrite Ramanujan’s system (1.4) in terms of the variables $$U_1$$, $$U_2$$, and $$U_3$$ defined by (1.1) and derive a recursive relation for their solutions. It is natural to wonder whether there are other solution branches, and in Sect. 3, we transform this singular system of differential equations into a regular system by changing the independent variable from x to $$t = - \log x$$. The large-t behaviour of the resulting system is then investigated, with different steady states corresponding to different initial conditions at $$x=0$$ for the original system. We present the number theoretical consequences of our analysis in Sect. 4, and conclude with a discussion of the obtained results in Sect. 5.

## Recursive formula to solve Ramanujan’s differential equations

We outline the series development which will prove useful in solving Ramanujan’s differential equations. It will be helpful to work in terms of the functions $$U_\ell$$ rather than P, Q, and R; substituting (1.2) into (1.4), we obtain

\begin{aligned} x \frac{\mathrm{d}U_1}{\mathrm{d}x} = \frac{2 U_1 + U_2 - U_1^2}{12}, \end{aligned}
(2.1a)
\begin{aligned} x \frac{\mathrm{d}U_2}{\mathrm{d}x} = \frac{- U_1 + U_2 + U_3 - U_1 U_2}{3}, \end{aligned}
(2.1b)
\begin{aligned} x \frac{\mathrm{d}U_3}{\mathrm{d}x} = \frac{U_1 + 2 U_2 + U_3 - U_1 U_3 + U_2^2}{2}. \end{aligned}
(2.1c)

Denoting the vector $${{{\mathbf {U}}}} = [U_1,U_2,U_3]^{\mathrm {T}}$$, this system of equations can be equivalently described in a matrix form as

\begin{aligned} x \frac{\mathrm{d}{{{\mathbf {U}}}}}{\mathrm{d}x} = A \, {{{\mathbf {U}}}} + {{{\mathbf {b}}}}({{{\mathbf {U}}}}), \end{aligned}
(2.2)

where matrix $$A \in {{{\mathbb {R}}}}^{3 \times 3}$$ and vector-valued function $${{{\mathbf {b}}}}: {{{\mathbb {R}}}}^{3} \rightarrow {{{\mathbb {R}}}}^{3}$$ are defined by

\begin{aligned} A = \frac{1}{12} \left( \begin{matrix} 2 &amp; 1 &amp; 0 \\ -4 &amp; 4 &amp; 4 \\ 6 &amp; 12 &amp; 6 \end{matrix} \right) \end{aligned}
(2.3)

and

\begin{aligned} {{{\mathbf {b}}}}({{{\mathbf {u}}}}) = \frac{1}{12} \left( \begin{matrix} - u_1^2 \\ - 4 u_1 u_2 \\ 6 u_2^2 - 6 u_1 u_3 \end{matrix} \right) \quad \text{ for } \quad {{{\mathbf {u}}}} = \left( \begin{matrix} u_1 \\ u_2 \\ u_3 \end{matrix} \right) . \end{aligned}
(2.4)

We observe that the matrix A has eigenvalues 1 (with multiplicity 1) and 0 (with multiplicity 2). The eigenvector corresponding to eigenvalue 1 is proportional to

\begin{aligned} {{{\mathbf {c}}}} = [c_1,c_2,c_3]^{\mathrm {T}} = [24, 240, 504]^{\mathrm {T}}, \end{aligned}
(2.5)

i.e., it contains information about scaling constants (1.3) used by Ramanujan.
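These spectral properties are quick to confirm with exact rational arithmetic. The following sketch (our own code, plain Python with the standard `fractions` module; all variable names are ours) checks that $$A{{{\mathbf {c}}}} = {{{\mathbf {c}}}}$$ and that the characteristic polynomial of A is $$\lambda ^2(\lambda - 1)$$, so the eigenvalues are indeed 1 and 0 (the latter with multiplicity 2):

```python
from fractions import Fraction as F

# Ramanujan's matrix A from (2.3), with exact rational entries
A = [[F(v, 12) for v in row] for row in ([2, 1, 0], [-4, 4, 4], [6, 12, 6])]
c = [24, 240, 504]  # scaling constants (1.3) = vector c of (2.5)

# A c = c, i.e. c is an eigenvector for the eigenvalue 1
Ac = [sum(A[i][j] * c[j] for j in range(3)) for i in range(3)]
assert Ac == c

# characteristic polynomial: lambda^3 - tr(A) lambda^2 + m2 lambda - det(A)
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

tr = A[0][0] + A[1][1] + A[2][2]
m2 = ((A[0][0] * A[1][1] - A[0][1] * A[1][0])
    + (A[0][0] * A[2][2] - A[0][2] * A[2][0])
    + (A[1][1] * A[2][2] - A[1][2] * A[2][1]))
print(tr, m2, det3(A))  # 1 0 0, so the polynomial is lambda^2 (lambda - 1)
```

The kernel of A is one-dimensional (spanned by $$[1,-2,3]^{\mathrm {T}}$$), so the zero eigenvalue is defective; this is consistent with the centre-manifold behaviour seen in Sect. 3.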

Considering the differential equation system (2.2) on its own in the context of the theory of differential equations, the first two fundamental questions would concern the existence and uniqueness of its solutions, i.e.:

1. Does system (2.2) have a solution?

2. If a solution exists, is it unique?

The answer to the first question is immediate: system (2.2) was derived for $$U_1$$, $$U_2,$$ and $$U_3$$ defined by (1.1), so there is at least one solution, given by the series (1.1). The second question is more important here, because if we can prove uniqueness of solutions to the differential equation system (2.2), then any properties of solutions obtained by analyzing the differential equations (2.2) immediately translate into properties of the arithmetic functions (1.1).

To obtain uniqueness of solutions, the standard Picard theorem for first-order ordinary differential equations can be applied, because the right hand side of (2.2) is Lipschitz continuous on any interval not containing $$x=0$$. That is, if we specify the value of $${{{\mathbf {U}}}}(x_0)$$ at a given point $$x_0 \ne 0$$ as the initial condition of system (2.2), then the Picard theorem implies the existence and uniqueness of solutions of (2.2) on an interval containing $$x_0$$. Unfortunately, the knowledge of $${{{\mathbf {U}}}}(x_0)$$ at $$x_0 \ne 0$$ requires non-trivial information about the functions $$U_1$$, $$U_2$$, and $$U_3$$ defined by (1.1). Thus, we consider the singular case, $$x=0$$, as our initial condition, for which the standard Picard theorem is not applicable, but the value of $${{{\mathbf {U}}}}(0)$$ can be easily obtained. Substituting $$x=0$$ in definition (1.1), we get

\begin{aligned} U_1(0)=U_2(0)=U_3(0)=0. \end{aligned}
(2.6)

Considering our differential equation system (2.2) with initial condition (2.6), we observe that it has at least two solutions: one is given by the functions $$U_1$$, $$U_2$$, and $$U_3$$ defined by (1.1), and the other is the trivial solution in which $$U_1$$, $$U_2$$, and $$U_3$$ are identically equal to zero. This non-uniqueness is caused by the singularity of the differential equation system (2.2) on the left hand side at $$x=0$$, making the Picard theorem inapplicable. To look for all possible analytic solutions, we assume that the solution of (2.2) can be written as the series expansion

\begin{aligned} {{{\mathbf {U}}}}(x) = \sum _{n=1}^\infty {{{\mathbf {a}}}}(n) \, x^n, \end{aligned}
(2.7)

where $${{{\mathbf {a}}}}(n) \equiv [a_1(n),a_2(n),a_3(n)]^{\mathrm {T}}$$ are coefficients to be determined. Substituting (2.7) into (2.2), we obtain

\begin{aligned} \sum _{n=1}^\infty (n I - A) \, {{{\mathbf {a}}}}(n) \, x^n = {{{\mathbf {b}}}} \left( \sum _{n=1}^\infty {{{\mathbf {a}}}}(n) \, x^n\right) , \end{aligned}
(2.8)

where I is the identity matrix. Since matrix A, given by (2.3), has eigenvalues 0 (with multiplicity 2) and 1 (with multiplicity 1), matrix $$n I - A$$ on the left hand side of (2.8) has eigenvalues n (with multiplicity 2) and $$n-1$$ (with multiplicity 1) for $$n=1,2,3,\ldots .$$ Moreover, the eigenvector corresponding to the eigenvalue $$n-1$$ is proportional to $${{{\mathbf {c}}}}$$ given by (2.5).

Next, we compare coefficients in front of the corresponding terms $$x^n$$ on the left and right hand sides of Eq. (2.8). Using (2.4), we observe that $${{{\mathbf {b}}}}$$ is quadratic. Therefore the right hand side of Eq. (2.8) has no terms of the lowest order, $$x^1$$, while the corresponding coefficient on the left hand side yields

\begin{aligned} (I - A) \, {{{\mathbf {a}}}}(1) = 0. \end{aligned}
(2.9)

Since matrix $$I - A$$ has eigenvalue 0 (with multiplicity 1), we can express the solutions of this system as

\begin{aligned} {{{\mathbf {a}}}}(1) = \alpha \, {{{\mathbf {c}}}}, \end{aligned}
(2.10)

where $${{{\mathbf {c}}}}$$ is given by (2.5) and $$\alpha \in {{{\mathbb {R}}}}$$ is a constant. Formula (2.10) includes both solutions which we are already aware of: the trivial zero solution corresponds to $$\alpha =0$$ and the solution given by (1.1) corresponds to $$\alpha =1.$$ We also observe that there is a possibility that the system could have solutions for other values of $$\alpha$$. To show that this is indeed the case, we consider the coefficients in front of the corresponding terms $$x^2$$ on the left and right hand sides of Eq. (2.8). We have

\begin{aligned} (2 I - A) \, {{{\mathbf {a}}}}(2) = \frac{1}{12} \left( \begin{matrix} - a_1(1)^2 \\ - 4 a_1(1) a_2(1) \\ 6 a_2(1)^2 - 6 a_1(1) a_3(1) \end{matrix} \right) = {{{\mathbf {b}}}}({{{\mathbf {a}}}}(1)). \end{aligned}

Since matrix $$2 I - A$$ has eigenvalues 2 and 1, it is invertible. Thus, we have

\begin{aligned} {{{\mathbf {a}}}}(2) = (2 I - A)^{-1} \, {{{\mathbf {b}}}}({{{\mathbf {a}}}}(1)) = (2 I - A)^{-1} \, {{{\mathbf {b}}}}(\alpha \, {{{\mathbf {c}}}}), \end{aligned}
(2.11)

which gives us $$\alpha$$-dependent solutions for coefficient $${{{\mathbf {a}}}}(2)$$. Repeating this for all orders $$x^n$$, we arrive at the following lemma.

### Lemma 1

The system of differential equations (2.2) with the initial condition (2.6) has the one-parameter family of series solutions parametrized by $$\alpha \in {{{\mathbb {R}}}},$$ where the first coefficient  $${{{\mathbf {a}}}}(1)$$ is given by (2.10), the second coefficient  $${{{\mathbf {a}}}}(2)$$ by (2.11) and other coefficients can be obtained iteratively by

\begin{aligned} {{{\mathbf {a}}}}(n) = \frac{1}{12} \, (n I - A)^{-1} \displaystyle \sum _{j=1}^{n-1} \left( \begin{matrix} - \, a_1(j) \, a_1(n-j) \\ - \, 4 \, a_1(j) \, a_2(n-j) \\ 6 \, a_2(j) \, a_2(n-j) - 6 \, a_1(j) \, a_3(n-j) \end{matrix} \right) . \end{aligned}
(2.12)

We note that Formula (2.12) reduces to (2.11) for $$n=2.$$ In Fig. 1, we plot functions $$U_1(x)$$ and $$U_2(x)$$ for representative values of parameter $$\alpha$$. A qualitatively similar plot can also be obtained for $$U_3(x)$$ (graph not shown). We observe that functions $$U_\ell (x)$$, for $$\ell =1,2,3$$, are increasing functions of x with $$U_\ell (x) \rightarrow \infty$$ as $$x \rightarrow 1^{-}$$. For a fixed value of x, the value of $$U_\ell (x)$$ is also an increasing function of parameter $$\alpha .$$ We use the first one hundred terms in the series expansion (2.7) to approximate $${{{\mathbf {U}}}}(x)$$ numerically. Considering additional terms would not change the computed result (to the machine precision) in the visualized interval $$x \in [0,0.5]$$.
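The recursion of Lemma 1 can be implemented directly. The following sketch (our own code, in Python with exact `fractions` arithmetic and a hand-rolled Cramer solver rather than any particular linear-algebra library) computes $${{{\mathbf {a}}}}(n)$$ for $$\alpha =1$$ and checks the coefficients against brute-force divisor sums, anticipating the relation $$a_\ell (n) = c_\ell \, \sigma _{2\ell -1}(n)$$ used in Sect. 4:

```python
from fractions import Fraction as F

c = [24, 240, 504]                                   # scaling constants (2.5)
A = [[F(v, 12) for v in r] for r in ([2, 1, 0], [-4, 4, 4], [6, 12, 6])]

def solve3(M, rhs):
    # Cramer's rule for a 3x3 system with rational entries
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(M)
    return [det([[rhs[i] if j == k else M[i][j] for j in range(3)]
                 for i in range(3)]) / d for k in range(3)]

def coefficients(N, alpha=F(1)):
    a = {1: [alpha * ci for ci in c]}                # a(1) = alpha c, Eq. (2.10)
    for n in range(2, N + 1):
        r = [F(0)] * 3                               # convolution on the rhs of (2.12)
        for j in range(1, n):
            u, v = a[j], a[n - j]
            r[0] += -u[0] * v[0]
            r[1] += -4 * u[0] * v[1]
            r[2] += 6 * u[1] * v[1] - 6 * u[0] * v[2]
        M = [[(F(n) if i == j else F(0)) - A[i][j] for j in range(3)]
             for i in range(3)]
        a[n] = solve3(M, [ri / 12 for ri in r])      # (nI - A) a(n) = rhs / 12
    return a

def sigma(k, n):                                     # brute-force divisor sum
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

a = coefficients(8)
for n in range(1, 9):
    assert [a[n][0] / 24, a[n][1] / 240, a[n][2] / 504] \
           == [sigma(1, n), sigma(3, n), sigma(5, n)]
print("a_l(n) = c_l sigma_{2l-1}(n) verified for n <= 8")
```

The matrix $$nI-A$$ has determinant $$n^2(n-1)$$, nonzero for $$n \ge 2$$, so each solve is well posed; exact rationals avoid any floating-point drift in the convolution.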

Although Lemma 1 states that there is a one-parameter family of solutions, all these solutions are self-similar and can be collapsed into one by rescaling the independent variable x accordingly. This is formalized in the next lemma.

### Lemma 2

All analytic solutions of the system of differential equations (2.2) with initial condition (2.6) are given as

\begin{aligned} U_1(\alpha x), \quad U_2(\alpha x), \quad U_3(\alpha x), \end{aligned}
(2.13)

where $$U_1$$, $$U_2,$$ and $$U_3$$ are the functions defined by (1.1) and $$\alpha \in {{{\mathbb {R}}}}$$.

The trivial zero solution is recovered from the solution Formula (2.13) for $$\alpha =0$$. Increasing $$\alpha$$ from 0 to 1, the solution Formula (2.13) connects the zero solution with Ramanujan’s solution given by (1.1). Lemma 2 is a statement of uniqueness of solutions which will help us to translate some properties of the differential equation system into properties of series (1.1) and divisor functions. The iterative Formula (2.12) in Lemma 1 will give us an iterative formula for divisor functions. We will discuss such number theoretical consequences in Sect. 4.

We note that the statement of uniqueness of solutions in Lemma 2 would not hold if we replaced our condition that solutions are analytic with a weaker condition that solutions were only differentiable. Indeed, a solution given by (2.13) with $$\alpha =1$$ for $$x \ge 0$$ could be continued for $$x<0$$ by (2.13) for any other value of parameter $$\alpha \in {{{\mathbb {R}}}}$$.
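Lemma 2 can be spot-checked numerically: if the coefficients of $$U_\ell (\alpha x)$$ are $$c_\ell \, \sigma _{2\ell -1}(n) \, \alpha ^n$$, they must satisfy the coefficient relations behind (2.12) for every $$\alpha$$. A minimal sketch (our own Python code, exact arithmetic; $$\alpha = 1/2$$ is an arbitrary sample value):

```python
from fractions import Fraction as F

A = [[F(v, 12) for v in r] for r in ([2, 1, 0], [-4, 4, 4], [6, 12, 6])]
c = [24, 240, 504]

def sigma(k, n):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def conv(a, n):
    # coefficient of x^n in b(U), i.e. the right hand side of (2.12)
    s = [F(0)] * 3
    for j in range(1, n):
        u, v = a[j], a[n - j]
        s[0] += -u[0] * v[0]
        s[1] += -4 * u[0] * v[1]
        s[2] += 6 * u[1] * v[1] - 6 * u[0] * v[2]
    return [si / 12 for si in s]

alpha = F(1, 2)  # sample rescaling parameter
N = 8
# claimed family (2.13): coefficients of U_l(alpha x) are c_l sigma_{2l-1}(n) alpha^n
a = {n: [c[l] * sigma(2 * l + 1, n) * alpha**n for l in range(3)]
     for n in range(1, N + 1)}

for n in range(1, N + 1):
    lhs = [n * a[n][i] - sum(A[i][j] * a[n][j] for j in range(3))
           for i in range(3)]
    assert lhs == conv(a, n)  # (nI - A) a(n) equals the convolution of (2.12)
print("U(alpha x) satisfies Ramanujan's system for alpha = 1/2")
```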

## Dynamical system analysis of Ramanujan’s differential equations

The series development of Sect. 2 allowed us to extract salient features of the solutions to Ramanujan’s differential equations in a neighbourhood of $$x=0$$, identifying a one-parameter family of solutions (2.13) to Eq. (2.2). Due to the singular nature of these equations, it is not clear whether the only solutions are those originating with $$U_\ell (0)=0$$ for $$\ell =1,2,3$$, or whether there are other solution branches which are fundamentally singular. In this section, we change the independent variable and treat Ramanujan’s differential equations as a dynamical system, whose limiting behaviour as $$x\rightarrow 0^{+}$$ we then investigate.

Consider the system of differential equations (2.2) and transform the independent variable x to t by using $$t = - \log x$$. Then the limit $$x\rightarrow 0^+$$ corresponds to $$t\rightarrow \infty$$. We have

\begin{aligned} \frac{\mathrm{d}{{{\mathbf {V}}}}}{\mathrm{d}t} = - A \, {{{\mathbf {V}}}} - {{{\mathbf {b}}}}({{{\mathbf {V}}}}), \end{aligned}
(3.1)

where $${{{\mathbf {V}}}} \equiv {{{\mathbf {V}}}}(t) = {{{\mathbf {U}}}}(\exp (-t))$$, matrix $$A \in {{{\mathbb {R}}}}^{3 \times 3}$$ is given by (2.3), and vector-valued function $${{{\mathbf {b}}}}: {{{\mathbb {R}}}}^{3} \rightarrow {{{\mathbb {R}}}}^{3}$$ is given by (2.4). Initial conditions (2.6) transform to limiting values of the function $${{{\mathbf {V}}}} = [V_1,V_2,V_3]^{\mathrm {T}}$$ as $$t\rightarrow \infty$$, namely

\begin{aligned} \lim _{t \rightarrow \infty } V_\ell (t) = 0, \quad \text{ for } \quad \ell = 1,2,3. \end{aligned}
(3.2)

To gain insight into this limiting behaviour, we investigate the steady states of our differential equation system (3.1). To do this, we denote the Jacobian matrix of the vector-valued function $${{{\mathbf {b}}}}$$ in definition (2.4) by

\begin{aligned} J({{{\mathbf {u}}}}) = \frac{1}{12} \left( \begin{matrix} - 2 u_1 &amp; 0 &amp; 0 \\ - 4 u_2 &amp; - 4 u_1 &amp; 0 \\ - 6 u_3 &amp; 12 u_2 &amp; - 6 u_1 \end{matrix} \right) . \end{aligned}
(3.3)

Solving the steady state equations

\begin{aligned} A \, {{{\mathbf {V}}}} + {{{\mathbf {b}}}}({{{\mathbf {V}}}}) = 0 \end{aligned}

corresponding to the system (3.1) and analyzing the stability of the steady states found, we obtain the following lemma.

### Lemma 3

All steady state solutions of the differential equation system (3.1) are given as a curve, parametrized by $$\beta \in {{{\mathbb {R}}}},$$ in the form

\begin{aligned} {{{\mathbf {s}}}}(\beta ) = \left( \begin{matrix} s_1(\beta ) \\ s_2(\beta ) \\ s_3(\beta ) \end{matrix} \right) = \left( \begin{matrix} 1 - \beta \\ -(1-\beta ^2) \\ 1 - \beta ^3 \end{matrix} \right) = (1-\beta ) \left( \begin{matrix} 1 \\ -(1 + \beta ) \\ 1 + \beta + \beta ^2 \end{matrix} \right) . \end{aligned}
(3.4)

Denoting $${{{\mathbf {V}}}} = {{{\mathbf {s}}}}(\beta ) + {{{\mathbf {v}}}}$$ and linearizing the system (3.1) around the steady state (3.4) corresponding to $$\beta \in {{{\mathbb {R}}}},$$ we obtain a linear system of differential equations,

\begin{aligned} \frac{\mathrm{d}{{\mathbf {v}}}}{\mathrm{d}t} = - \big (A + J({{{\mathbf {s}}}}(\beta )) \big )\, {{{\mathbf {v}}}}, \end{aligned}

where the matrix $$- \big (A + J({{{\mathbf {s}}}}(\beta ))\big )$$ has eigenvalues 0 (with multiplicity 2) and $$-\beta$$ (with multiplicity 1). The eigenvector corresponding to the eigenvalue $$-\beta$$ is given as

\begin{aligned} \left( \begin{matrix} 1 \\ 10 \beta \\ 21 \beta ^2 \end{matrix} \right) = \frac{1}{24} \left( \begin{matrix} c_1 \\ \beta \, c_2 \\ \beta ^2 c_3 \end{matrix} \right) . \end{aligned}
(3.5)
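The claims of Lemma 3 are straightforward to verify symbolically for sample values of $$\beta$$. The following sketch (our own Python code with exact rationals) checks the steady state equation $$A{{{\mathbf {s}}}}(\beta ) + {{{\mathbf {b}}}}({{{\mathbf {s}}}}(\beta )) = 0$$ and the eigenvector relation $$\big (A + J({{{\mathbf {s}}}}(\beta ))\big ) {{{\mathbf {v}}}} = \beta {{{\mathbf {v}}}}$$ for the vector (3.5):

```python
from fractions import Fraction as F

A = [[F(v, 12) for v in r] for r in ([2, 1, 0], [-4, 4, 4], [6, 12, 6])]

def b(u):
    # nonlinearity (2.4)
    u1, u2, u3 = u
    return [-u1 * u1 / 12, -u1 * u2 / 3, (u2 * u2 - u1 * u3) / 2]

def J(u):
    # Jacobian (3.3) of b
    u1, u2, u3 = u
    return [[-u1 / 6, 0, 0], [-u2 / 3, -u1 / 3, 0], [-u3 / 2, u2, -u1 / 2]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

for beta in [F(-1), F(0), F(1, 2), F(1), F(2)]:
    s = [1 - beta, beta**2 - 1, 1 - beta**3]          # steady state curve (3.4)
    As = matvec(A, s)
    assert all(As[i] + b(s)[i] == 0 for i in range(3))  # A s + b(s) = 0
    M = [[A[i][j] + J(s)[i][j] for j in range(3)] for i in range(3)]
    v = [1, 10 * beta, 21 * beta**2]                   # eigenvector (3.5)
    assert matvec(M, v) == [beta * vi for vi in v]     # eigenvalue beta of A + J(s)
print("steady states and eigenvectors of Lemma 3 verified")
```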

In Fig. 2, we plot the steady state curve for a range of positive values of parameter $$\beta$$. We visualize it as a black dotted-dashed line in Fig. 2a, which shows its projection to the $$(V_1,V_2)$$-plane. We also plot solutions converging to a representative selection of steady states (highlighted as black dots). Considering $$\beta =1$$ in (3.4), we obtain the zero steady state $${{{\mathbf {s}}}}(1)=(0,0,0)^{\mathrm {T}}$$ corresponding to the limiting values (3.2). It has one linearly stable direction (with eigenvalue $$-1$$), and the corresponding eigenvector (3.5) is proportional to vector $${{{\mathbf {c}}}}$$ given by (2.5), which is also the first coefficient of the series solution (2.7), see Eq. (2.10). This series solution is visualized as the red trajectory in Fig. 2, with the two branches corresponding to positive and negative values of parameter $$\alpha$$. We note that all solutions converging to $${{{\mathbf {s}}}}(1)=(0,0,0)^{\mathrm {T}}$$ are given by Lemma 2, which means that they are all represented by the red line in Fig. 2; they only correspond to different re-scalings of the independent variable x. In fact, if we included $$\alpha$$ in the transformation of the independent variable x to t by $$t = - \log (\alpha x)$$, we would obtain the same differential equation (3.1).

In Fig. 2, we also plot solutions converging to the steady states $${{{\mathbf {s}}}}(\beta )$$ for $$\beta \ne 1$$, visualized as blue lines. Their long-time behaviour satisfies

\begin{aligned} \lim _{t \rightarrow \infty } {{{\mathbf {V}}}}(t) = {{{\mathbf {s}}}}(\beta ), \end{aligned}
(3.6)

where $${{{\mathbf {s}}}}(\beta )$$ is given by (3.4). Transforming back to the original variable $$x = \exp (-t)$$, the limiting condition (3.6) is equivalent to the initial condition

\begin{aligned} {{{\mathbf {U}}}}(0) = {{{\mathbf {s}}}}(\beta ). \end{aligned}
(3.7)

In particular, we generalize (2.7) to the series solution

\begin{aligned} {{{\mathbf {U}}}}(x; \beta ) = {{{\mathbf {s}}}}(\beta ) + \sum _{n=1}^\infty {{{\mathbf {a}}}}(n; \beta ) \, x^{n \beta }, \end{aligned}
(3.8)

where $${{{\mathbf {a}}}}(n; \beta ) \equiv [a_1(n; \beta ),a_2(n; \beta ),a_3(n; \beta )]^{\mathrm {T}}$$ are coefficients to be determined. Substituting (3.8) into (2.2) and using $$A \, {{{\mathbf {s}}}}(\beta ) + {{{\mathbf {b}}}}({{{\mathbf {s}}}}(\beta )) = 0$$, we obtain

\begin{aligned} \sum _{n=1}^\infty \Big ( n \beta I - A - J \big ( {{{\mathbf {s}}}}(\beta ) \big ) \Big ) \, {{{\mathbf {a}}}}(n; \beta ) \, x^{n \beta } = {{{\mathbf {b}}}} \left( \sum _{n=1}^\infty {{{\mathbf {a}}}}(n; \beta ) \, x^{n \beta } \right) , \end{aligned}
(3.9)

which is the generalization of (2.8) to the case $$\beta \ne 1$$. Using Lemma 3, matrix $$- A - J({{{\mathbf {s}}}}(\beta ))$$ has eigenvalues $$-\beta$$ (with multiplicity 1) and 0 (with multiplicity 2). Thus the matrix $$n \beta I - A - J({{{\mathbf {s}}}}(\beta ))$$ on the left hand side of (3.9) has eigenvalues $$n \beta$$ (with multiplicity 2) and $$(n-1) \beta$$ (with multiplicity 1) for $$n=1,2,3,\ldots .$$ Comparing coefficients of the lowest order $$x^{\beta }$$ on the left and right hand sides of Eq. (3.9), we generalize (2.9) to

\begin{aligned} \Big (\beta I - A - J\big ({{{\mathbf {s}}}}(\beta ) \big )\Big ) \, {{{\mathbf {a}}}}(1; \beta ) = 0. \end{aligned}

Since matrix $$\beta I - A - J({{{\mathbf {s}}}}(\beta ))$$ has eigenvalue 0 (with multiplicity 1) with the corresponding eigenvector proportional to (3.5), we can express the solutions of this system as

\begin{aligned} {{{\mathbf {a}}}}(1; \beta ) = \alpha \, \left( \begin{matrix} c_1 \\ \beta \, c_2 \\ \beta ^2 c_3 \end{matrix} \right) . \end{aligned}
(3.10)

Moreover, we can generalize Lemma 1 to the following result giving us a recursive formula for finding series solutions (3.8) for general values of parameter $$\beta .$$

### Lemma 4

The system of differential equations (2.2) with the initial condition (3.7) has the one-parameter family of series solutions (3.8) parametrized by $$\alpha \in {{{\mathbb {R}}}},$$ where the first coefficient $${{{\mathbf {a}}}}(1; \beta )$$ is given by (3.10) and other coefficients can be obtained iteratively by

\begin{aligned} \begin{aligned} {{{\mathbf {a}}}}(n; \beta )&= \frac{1}{12} \Big ( n \beta I - A - J\big ( {{{\mathbf {s}}}}(\beta ) \big ) \Big )^{-1} \\&\quad \times \displaystyle \sum _{j=1}^{n-1} \left( \begin{matrix} - \, a_1(j; \beta ) \, a_1(n-j; \beta ) \\ - \, 4 \, a_1(j; \beta ) \, a_2(n-j; \beta ) \\ 6 \, a_2(j; \beta ) \, a_2(n-j; \beta ) - 6 \, a_1(j; \beta ) \, a_3(n-j; \beta ) \end{matrix} \right) . \end{aligned} \end{aligned}
(3.11)

Considering $$\beta =1$$, Lemma 4 reduces to Lemma 1, i.e. $${{{\mathbf {a}}}}(n)$$ in (2.12) is equal to $${{{\mathbf {a}}}}(n; 1)$$ given by (3.11). Considering general values of $$\beta$$, we use the recursive Formula (3.11) in Lemma 4 for $$\alpha =1$$ and $$\alpha =-1$$ to obtain solutions converging to $${{{\mathbf {s}}}}(\beta )$$, which are visualized in Fig. 2 as blue trajectories. The solution for any other positive (resp. negative) value of $$\alpha$$ corresponds to the case $$\alpha =1$$ (resp. $$\alpha =-1$$), because the parameter $$\alpha$$ rescales the independent variable in a similar way to what we have already observed in Lemma 2 for the case $$\beta =1.$$ In Fig. 2b, we use a higher number of representative blue trajectories (than in Fig. 2a) and observe that we have generalized the scaled Eisenstein series (1.1) (red line in Fig. 2b) to the blue surface in the $$(V_1,V_2,V_3)$$-phase space (the surface swept by the blue trajectories).

## Number theoretical consequences

The evaluation of sums of the form $$\sum \sigma (m)\sigma (n)$$ has attracted interest in the literature [1, 2], and we use our results to calculate certain sums of this type in terms of the coefficients of solutions to Ramanujan’s differential equations. Considering $$\alpha =1$$ in Formula (2.12) and comparing with (1.1), we obtain the following iterative relation between divisor functions

\begin{aligned} \left( \begin{matrix} \sigma _{1}(n)\\ \sigma _{3}(n) \\ \sigma _{5}(n) \end{matrix} \right) = \left( \begin{matrix} 1/c_1 &amp; 0 &amp; 0 \\ 0 &amp; 1/c_2 &amp; 0 \\ 0 &amp; 0 &amp; 1/c_3 \end{matrix} \right) (A - n I)^{-1} \sum _{j=1}^{n-1} \left( \begin{matrix} 48 \, \sigma _1(j) \, \sigma _1(n-j) \\ 1920 \, \sigma _1(j)\, \sigma _3(n-j) \\ 6048 \, \sigma _1(j) \, \sigma _5(n-j) - 28800 \, \sigma _3(j) \, \sigma _3(n-j) \end{matrix} \right) , \end{aligned}
(4.1)

where $$c_\ell$$ is given by (2.5). This formula can be used iteratively to compute the values of $$\sigma _1(n)$$, $$\sigma _3(n)$$, and $$\sigma _5(n)$$. It can also be rewritten in the form of convolution identities. Following Ramanujan’s notation, we denote

\begin{aligned} {\varSigma }_{k,s}(n) = \sum _{j=0}^{n} \sigma _k(j) \, \sigma _s(n-j), \end{aligned}
(4.2)

where the definition of $$\sigma _k(n)$$ is extended to $$n=0$$ by $$\sigma _k(0) = \zeta (-k)/2,$$ namely

\begin{aligned} \sigma _{2 \ell - 1}(0) = \frac{(-1)^\ell }{c_\ell }, \quad \text{ for } \; \ell = 1,2,3, \end{aligned}

where $$c_\ell$$ is given by (2.5). Multiplying the iterative Formula (4.1) by the diagonal matrix with vector $$[c_1,c_2,c_3]$$ on the diagonal, then by matrix $$A-nI$$ and using (2.4), we obtain

\begin{aligned} \left( \begin{matrix} - 6 \, n &amp; 5 &amp; 0 \\ 0 &amp; - 10 \, n &amp; 7 \\ 0 &amp; 0 &amp; - 7 \, n \end{matrix} \right) \left( \begin{matrix} \sigma _{1}(n)\\ \sigma _{3}(n) \\ \sigma _{5}(n) \end{matrix} \right) = \left( \begin{matrix} 12 \, {\varSigma }_{1,1}(n) \\ 80 \, {\varSigma }_{1,3}(n) \\ 84 \, {\varSigma }_{1,5}(n) - 400 \, {\varSigma }_{3,3}(n) \end{matrix} \right) . \end{aligned}
(4.3)

The first two lines of this vector system yield formulas for $${\varSigma }_{1,1}(n)$$ and $${\varSigma }_{1,3}(n)$$ which appear in Table IV of Ramanujan’s paper. The last line is also consistent with his results. In the same table, he writes

\begin{aligned} {\varSigma }_{1,5}(n) = \frac{10 \, \sigma _7(n) - 21 \, n \, \sigma _5(n)}{252}, \quad {\varSigma }_{3,3}(n) = \frac{\sigma _7(n)}{120}. \end{aligned}

Using this result to calculate $$84 \, {\varSigma }_{1,5}(n) - 400 \, {\varSigma }_{3,3}(n)$$, we obtain the last line of our vector system (4.3).
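All three rows of (4.3) can be verified directly from the definitions, using brute-force divisor sums and the boundary convention $$\sigma _{2\ell -1}(0) = (-1)^\ell /c_\ell$$. A small sketch (our own Python code, exact rationals):

```python
from fractions import Fraction as F

def sigma(k, n):
    if n == 0:  # boundary convention sigma_{2l-1}(0) = zeta(1-2l)/2 = (-1)^l / c_l
        return {1: F(-1, 24), 3: F(1, 240), 5: F(-1, 504)}[k]
    return F(sum(d**k for d in range(1, n + 1) if n % d == 0))

def Sig(k, s, n):
    # Sigma_{k,s}(n) of (4.2), with the j = 0 and j = n boundary terms included
    return sum(sigma(k, j) * sigma(s, n - j) for j in range(n + 1))

for n in range(1, 30):
    assert -6 * n * sigma(1, n) + 5 * sigma(3, n) == 12 * Sig(1, 1, n)
    assert -10 * n * sigma(3, n) + 7 * sigma(5, n) == 80 * Sig(1, 3, n)
    assert -7 * n * sigma(5, n) == 84 * Sig(1, 5, n) - 400 * Sig(3, 3, n)
print("all three rows of (4.3) hold for n < 30")
```

The boundary terms at $$j=0$$ and $$j=n$$ are exactly what converts the finite convolution of (4.1) into Ramanujan's $${\varSigma }_{k,s}$$ notation.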

Considering general values of the parameter $$\beta >0$$, we can also connect the coefficients calculated by the general recursive Formula (3.11) with divisor functions.

### Lemma 5

Let $$\beta >0$$. Consider the system of differential equations (2.2) with the initial condition (3.7). Assume $$\alpha =\beta$$ and consider the solution $${{{\mathbf {U}}}}(x; \beta )$$ given by series (3.8) which is calculated using Formula (3.11) in Lemma 4. Then the coefficients $${{{\mathbf {a}}}}(n; \beta )$$ are related to divisor functions by

\begin{aligned} \sigma _{2\ell -1}(n) = \frac{a_\ell (n; \beta )}{c_\ell \, \beta ^\ell }, \quad \text{ for } \quad \ell = 1,2,3. \end{aligned}
(4.4)

Relation (4.4) implies

\begin{aligned} {{{\mathbf {U}}}}(x) = {{{\mathbf {U}}}}(x; 1) = \left( \begin{matrix} 1/\beta &{} 0 &{} 0 \\ 0 &{} 1/\beta ^2 &{} 0 \\ 0 &{} 0 &{} 1/\beta ^3 \end{matrix} \right) \left( \begin{matrix} U_1(x^{1/\beta }; \beta ) - s_1(\beta ) \\ U_2(x^{1/\beta }; \beta ) - s_2(\beta ) \\ U_3(x^{1/\beta }; \beta ) - s_3(\beta ) \end{matrix} \right) , \end{aligned}

which connects the general solution $${{{\mathbf {U}}}}(x; \beta )$$ for $$\alpha =\beta$$ with the scaled Eisenstein series $${{{\mathbf {U}}}}(x)$$ given by (1.1). Consequently, the recursive Formula (3.11) in Lemma 4 can also be rewritten as a recursive formula for calculating $$\sigma _1(n)$$, $$\sigma _3(n)$$, and $$\sigma _5(n)$$, in a similar way as we did when deriving (4.1) in the special case $$\beta =1$$.
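Lemma 5 can be tested by running the recursion (3.11) with $$\alpha =\beta$$ for several values of $$\beta$$ and comparing against brute-force divisor sums via (4.4). A sketch (our own Python code, exact rational arithmetic; the helper names are ours):

```python
from fractions import Fraction as F

c = [24, 240, 504]
A = [[F(v, 12) for v in r] for r in ([2, 1, 0], [-4, 4, 4], [6, 12, 6])]

def sigma(k, n):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def J(u):  # Jacobian (3.3)
    u1, u2, u3 = u
    return [[-u1 / 6, 0, 0], [-u2 / 3, -u1 / 3, 0], [-u3 / 2, u2, -u1 / 2]]

def solve3(M, rhs):
    # Cramer's rule for a 3x3 rational system
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(M)
    return [det([[rhs[i] if j == k else M[i][j] for j in range(3)]
                 for i in range(3)]) / d for k in range(3)]

def coefficients(beta, N):
    s = [1 - beta, beta**2 - 1, 1 - beta**3]              # steady state (3.4)
    Js = J(s)
    a = {1: [beta * c[0], beta**2 * c[1], beta**3 * c[2]]}  # (3.10), alpha = beta
    for n in range(2, N + 1):
        r = [F(0)] * 3
        for j in range(1, n):
            u, v = a[j], a[n - j]
            r[0] += -u[0] * v[0]
            r[1] += -4 * u[0] * v[1]
            r[2] += 6 * u[1] * v[1] - 6 * u[0] * v[2]
        M = [[(n * beta if i == j else F(0)) - A[i][j] - Js[i][j]
              for j in range(3)] for i in range(3)]
        a[n] = solve3(M, [ri / 12 for ri in r])           # recursion (3.11)
    return a

for beta in [F(1), F(2), F(1, 3)]:
    a = coefficients(beta, 6)
    for n in range(1, 7):
        assert [a[n][l] / (c[l] * beta**(l + 1)) for l in range(3)] \
               == [sigma(2 * l + 1, n) for l in range(3)]   # relation (4.4)
print("(4.4) verified for beta = 1, 2, 1/3 and n <= 6")
```

The matrix in (3.11) has determinant $$n^2(n-1)\beta ^3$$, nonzero for $$\beta >0$$ and $$n \ge 2$$, so the recursion is well defined.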

## Discussion

We have employed both a series development and a dynamical systems approach to better understand solutions of Ramanujan’s equations (1.4). Our results imply the existence of a one-parameter family of solutions to these equations which comprise a similarity scaling of the scaled Eisenstein series (1.1), in addition to another class of solutions which is not zero at $$x=0$$. This latter class of solutions can, however, be brought into the form of the scaled Eisenstein series through a shift of the dependent variable and a scaling of the independent variable. This suggests that the vital information encoded in these series through their coefficients is invariant under Ramanujan’s differential equations, modulo shifting and scaling, and that the value of specific divisor functions remains encapsulated in these series solutions. In addition to their intrinsic interest, Ramanujan’s differential equations (1.4) give information about certain Eisenstein series, and we demonstrate that our results give an alternate approach to obtain formulae involving sums of products of divisor functions.

The results we obtain can be used to better understand solutions of related differential equations of relevance to the Eisenstein series. In addition to the Eisenstein series which satisfy Ramanujan’s differential equations (1.4), we remark that solutions of various second-order differential equations with coefficients involving the Eisenstein series have also attracted some attention. Treating the Eisenstein series in the manner of (2.7), one can then solve such second-order differential equations with a series, making use of the Cauchy product of the series for the unknown function with our series representation for the Eisenstein series.

The algebraic independence of the functions P, Q, R in (1.2), and hence of $$U_1$$, $$U_2$$, $$U_3$$ in (1.1), has been discussed previously. It is worth noting that additional relations exist between $$U_\ell$$ for $$\ell \ge 4$$, with the first several of these shown in Table I of Ramanujan’s paper. One can then express $$U_\ell$$ for $$\ell \ge 4$$ in terms of algebraic combinations of the $$U_1$$, $$U_2$$, and $$U_3$$ variables. As an example, from entry 4 in Table I we have that $$1+480 \sum _{n=1}^{\infty } \sigma _7(n) \, x^n = Q^2 = \left( 1+U_2\right) ^2$$. Defining $$S=Q^2$$, we see that

\begin{aligned} x \, \frac{\mathrm {d}S}{\mathrm {d}x} = 2 x \, Q \, \frac{\mathrm {d}Q}{\mathrm {d}x} = \frac{2}{3}\left( PQ^2 - QR\right) = \frac{2}{3}\left( PS -QR \right) . \end{aligned}

Rewriting this as an equation involving $$U_4$$ by taking $$c_4 = 480$$, one obtains a fourth-order analogue of the third-order system (2.1). Continuing in this manner, one may obtain higher-order analogues of system (2.1) involving $$U_1, U_2, \ldots , U_N$$ for $$N\ge 4$$, and using the approach we outline for (2.1), one may obtain the series coefficients recursively in a similar manner, providing alternate derivations for formulae analogous to (4.3).
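The underlying identity, $$\left( 1+240\sum _{n} \sigma _3(n)\,x^n\right) ^2 = 1+480\sum _{n} \sigma _7(n)\,x^n$$, can be checked at the level of coefficients: comparing $$x^n$$ terms gives $$480\,\sigma _7(n) = 480\,\sigma _3(n) + 240^2 \sum _{j=1}^{n-1} \sigma _3(j)\,\sigma _3(n-j)$$. A quick sketch (our own Python code):

```python
def sigma(k, n):
    # brute-force sum of k-th powers of divisors of n
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

# comparing x^n coefficients in (1 + 240 sum sigma_3 x^n)^2 = 1 + 480 sum sigma_7 x^n
for n in range(1, 40):
    conv = sum(sigma(3, j) * sigma(3, n - j) for j in range(1, n))
    assert 480 * sigma(7, n) == 480 * sigma(3, n) + 240**2 * conv
print("Q^2 identity verified for n < 40")
```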

## References

1. Alaca, S., Williams, K.: Evaluation of the convolution sums $$\Sigma _{l+6m=n} \sigma (l) \sigma (m)$$ and $$\Sigma _{2l+3m=n} \sigma (l) \sigma (m)$$. J. Number Theory 124(2), 491–510 (2007)

2. Aygin, Z., Hong, N.: Ramanujan’s convolution sum twisted by Dirichlet characters. Int. J. Number Theory 15(1), 137–152 (2019)

3. Chan, H.: Triple product identity, quintuple product identity and Ramanujan’s differential equations for the classical Eisenstein series. Proc. Am. Math. Soc. 135(7), 1987–1992 (2007)

4. Coddington, E., Levinson, N.: Theory of Ordinary Differential Equations. McGraw-Hill Book Company, Inc., New York (1955)

5. Hill, J., Berndt, B., Huber, T.: Solving Ramanujan’s differential equations for Eisenstein series via a first order Riccati equation. Acta Arith. 128(3), 281–294 (2007)

6. Huber, T.: Basic representations for Eisenstein series from their differential equations. J. Math. Anal. Appl. 350(1), 135–146 (2009)

7. Huber, T.: Differential equations for cubic theta functions. Int. J. Number Theory 7(7), 1945–1957 (2011)

8. Ramanujan, S.: On certain arithmetical functions. Trans. Camb. Philos. Soc. 22, 159–184 (1916)

9. Sebbar, A., Sebbar, A.: Eisenstein series and modular differential equations. Can. Math. Bull. 55(2), 400–409 (2012)

10. Toh, P.: Differential equations satisfied by Eisenstein series of level 2. Ramanujan J. 25, 179–194 (2011)

11. Zudilin, W.: Thetanulls and differential equations. Mat. Sb. 191(12), 77–122 (2000)

12. Zudilin, W.: The hypergeometric equation and Ramanujan functions. Ramanujan J. 7(4), 435–447 (2003)


## Acknowledgements

Radek Erban would like to thank the Royal Society for a University Research Fellowship.

## Author information

### Corresponding author

Correspondence to Radek Erban.


## Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
