1 Introduction

The quantum difference operator allows us to deal with classes of non-differentiable functions. It has applications in many mathematical fields, such as the calculus of variations, orthogonal polynomials, basic hypergeometric functions, quantum mechanics, and the theory of scale relativity; see, e.g., [3, 5, 7, 13, 14].

The general quantum difference operator \(D_{\beta }\) generalizes the Jackson q-difference operator \(D_{q}\) and the Hahn difference operator \(D_{q,\omega }\); see [1, 2, 4, 8, 12]. It is defined in [10, p. 6] by

$$ {D}_{\beta }f(t)=\textstyle\begin{cases} \frac{f(\beta (t))-f(t)}{\beta (t)-t},& {t}\neq {s_{0}}, \\ {{f'}(s_{0})},& {t}={s_{0}}, \end{cases} $$

where \(f:I\rightarrow \mathbb{X}\) is a function defined on an interval \(I\subseteq {\mathbb{R}}\), \(\mathbb{X}\) is a Banach space, and \(\beta:I\rightarrow I\) is a strictly increasing continuous function defined on I that has only one fixed point \(s_{0}\in {I}\) and satisfies the inequality \((t-s_{0})(\beta (t)-t)\leq 0\) for all \(t\in I\). The function f is said to be β-differentiable on I if the ordinary derivative \({f'}\) exists at \(s_{0}\). The general quantum difference calculus was introduced in [10]. The exponential, trigonometric, and hyperbolic functions associated with \(D_{\beta }\) were presented in [9]. The existence and uniqueness of solutions of the first-order β-initial value problem were established in [11]. In [6], the existence and uniqueness of solutions of the β-Cauchy problem of the second-order β-difference equations were proved. Also, a fundamental set of solutions for the second-order linear homogeneous β-difference equations when the coefficients are constants was constructed, and the different cases of the roots of their characteristic equations were studied. Moreover, the Euler–Cauchy β-difference equation was derived.
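For orientation, this definition is easy to evaluate numerically. The sketch below is an illustrative helper (not from [10]); it takes the Jackson case \(\beta (t)=qt\) with \(0<q<1\), whose unique fixed point is \(s_{0}=0\), and checks the known identity \(D_{q}t^{2}=(q+1)t\).

```python
def D_beta(f, t, beta, s0, fprime=None):
    """General quantum difference operator D_beta applied to f at t."""
    if t == s0:
        return fprime(s0)          # ordinary derivative at the fixed point
    bt = beta(t)
    return (f(bt) - f(t)) / (bt - t)

q = 0.5
beta = lambda t: q * t             # Jackson case: strictly increasing, fixed point 0
f = lambda t: t ** 2

# D_q t^2 = (q^2 t^2 - t^2) / (q t - t) = (q + 1) t
t = 2.0
assert abs(D_beta(f, t, beta, 0.0) - (q + 1) * t) < 1e-12
assert D_beta(f, 0.0, beta, 0.0, fprime=lambda t: 2 * t) == 0.0
```

Setting \(\beta (t)=qt+\omega \) instead recovers the Hahn operator \(D_{q,\omega }\), whose fixed point is \(s_{0}=\omega /(1-q)\).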

The paper is organized as follows. In Sect. 2, we present the needed preliminaries of the β-calculus from [6, 9–11]. In Sect. 3, we give sufficient conditions for the existence and uniqueness of solutions of the β-Cauchy problem of the nth-order β-difference equations. Also, we construct the fundamental set of solutions for the homogeneous linear β-difference equations when the coefficients \(a_{j}\) (\(0\leq j \leq n\)) are constants. Furthermore, we introduce the β-Wronskian, which is an effective tool to determine whether a set of solutions is a fundamental set, and prove its properties. Finally, we study the undetermined coefficients, the variation of parameters, and the annihilator methods for the non-homogeneous linear β-difference equations.

Throughout this paper, J is a neighborhood of the unique fixed point \(s_{0}\) of β, \(S(y_{0}, b)=\{y\in \mathbb{X}:\Vert y-y_{0}\Vert \leq b \}\), and \(R=\{(t,y)\in {{I}\times \mathbb{X}}:|t-s_{0}|\leq {a},\Vert y-y _{0}\Vert \leq {b}\}\) is a rectangle, where a, b are fixed positive real numbers and \(\mathbb{X}\) is a Banach space. Furthermore, \(D_{\beta }^{0}f=f\) and \(D_{\beta }^{n}f=D _{\beta }(D_{\beta }^{n-1}f)\), \(n\in \mathbb{N}\), where f is β-differentiable n times over I; here \(\mathbb{N}\) is the set of natural numbers and \(\mathbb{N}_{0}=\mathbb{N} \cup \{0\}\). We use the symbol T for the transpose of a vector or a matrix.

2 Preliminaries

Lemma 2.1

([10])

The following statements are true:

(i)

    The sequence of functions \(\{\beta^{k}(t)\}_{k=0}^{\infty }\) converges uniformly to the constant function \(\hat{\beta }(t):=s _{0}\) on every compact interval \(V \subseteq I\) containing \(s_{0}\).

(ii)

    The series \(\sum_{k=0}^{\infty }|\beta^{k}(t)-\beta^{k+1}(t)|\) is uniformly convergent to \(|t-s_{0}| \) on every compact interval \(V \subseteq I\) containing \(s_{0}\).

Lemma 2.2

([10])

If \(f:I\rightarrow \mathbb{X} \) is a continuous function at \(s_{0}\), then the sequence \(\{f(\beta^{k}(t))\}_{k=0}^{\infty }\) converges uniformly to \(f(s_{0})\) on every compact interval \(V\subseteq I\) containing \(s_{0}\).

Theorem 2.3

([10])

If \(f:I \rightarrow \mathbb{X} \) is continuous at \(s_{0}\), then the series \(\sum_{k=0}^{\infty }\| (\beta^{k}(t)- \beta^{k+1}(t) ) f(\beta^{k}(t))\|\) is uniformly convergent on every compact interval \(V \subseteq I\) containing \(s_{0}\).

Theorem 2.4

([10])

Assume that \(f:{I}\rightarrow \mathbb{X}\) and \(g:{I} \rightarrow \mathbb{R}\) are β-differentiable at \(t\in {I}\). Then:

(i)

    The product \(fg:I\rightarrow \mathbb{X}\) is β-differentiable at t and

    $$\begin{aligned} {D}_{\beta }(fg) (t) &=\bigl({D}_{\beta }f(t) \bigr)g(t)+f\bigl(\beta (t)\bigr){D}_{\beta }g(t) \\ & =\bigl({D}_{\beta }f(t)\bigr)g\bigl(\beta (t)\bigr)+f(t){D}_{\beta }g(t), \end{aligned} $$
(ii)

    \(f/g\) is β-differentiable at t and

    $$ {D}_{\beta } ({f}/{g} ) (t)=\frac{({D}_{\beta }f(t))g(t)-f(t) {D}_{\beta }g(t)}{g(t)g(\beta (t))}, $$

    provided that \(g(t)g(\beta (t))\neq {0}\).

Theorem 2.5

([10])

Assume that \(f:{I}\to \mathbb{X}\) is continuous at \(s_{0}\). Then the function F defined by

$$ F(t)=\sum_{k=0}^{\infty } \bigl( \beta^{k}(t)-\beta^{k+1}(t) \bigr)f\bigl(\beta ^{k}(t) \bigr), \quad t\in {I} $$
(2.1)

is a β-antiderivative of f with \(F(s_{0})=0\). Conversely, any β-antiderivative F of f that vanishes at \(s_{0}\) is given by (2.1).

Definition 2.6

([10])

The β-integral of \(f:{I}\rightarrow {\mathbb{X}}\) from a to b, \(a,b\in {I}\), is defined by

$$ \int^{b}_{a}f(t)\,d_{\beta }{t}= \int^{b}_{s_{0}}f(t)\,d_{\beta }{t}- \int ^{a}_{s_{0}}f(t)\,d_{\beta }{t}, $$

where

$$ \int^{x}_{s_{0}}f(t)\,d_{\beta } {t}=\sum ^{\infty }_{k=0} \bigl(\beta^{k}(x)- \beta^{k+1}(x) \bigr)f\bigl(\beta ^{k}(x)\bigr),\quad x\in {I}, $$

provided that the series converges at \(x=a\) and \(x=b\). The function f is called β-integrable on I if the series converges at a and b for all \(a,b\in {I}\). Clearly, if f is continuous at \(s_{0}\in {I}\), then f is β-integrable on I.
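For a concrete check (an illustration, not from [10]): with \(\beta (t)=qt\) and \(s_{0}=0\), the series above reduces to the Jackson q-integral, and \(\int_{0}^{x} t\,d_{\beta }t=x^{2}/(1+q)\). A minimal sketch:

```python
q, x, N = 0.5, 1.0, 200
beta = lambda t: q * t            # Jackson case, fixed point s0 = 0
f = lambda t: t

# beta-integral from s0 = 0 to x: sum_k (beta^k(x) - beta^{k+1}(x)) f(beta^k(x))
s, bk = 0.0, x
for _ in range(N):
    bk1 = beta(bk)
    s += (bk - bk1) * f(bk)
    bk = bk1

# For beta(t) = q t this is the Jackson integral: int_0^x t d_beta t = x^2 / (1 + q)
assert abs(s - x ** 2 / (1 + q)) < 1e-12
```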

Definition 2.7

([9])

The β-exponential functions \(e_{p,\beta }(t)\) and \(E_{p,\beta }(t)\) are defined by

$$\begin{aligned} e_{p,\beta }(t)=\frac{1}{\prod_{k=0}^{\infty }[1-p(\beta^{k} (t))( \beta^{k}(t)-\beta^{k+1}(t))]} \end{aligned}$$
(2.2)

and

$$\begin{aligned} E_{p,\beta }(t)=\prod_{k=0}^{\infty } \bigl[1+ p\bigl(\beta^{k}(t)\bigr) \bigl(\beta ^{k} (t) - \beta^{k+1}(t) \bigr) \bigr], \end{aligned}$$
(2.3)

where \(p:I \rightarrow \mathbb{C}\) is a continuous function at \(s_{0}\). Moreover, \(e_{p,\beta }(t)=\frac{1}{E_{-p,\beta }(t)}\).

Both products in (2.2) and (2.3) converge to a non-zero number for every \(t\in I\) since \(\sum_{k=0}^{\infty } | p( \beta^{k}(t)) (\beta^{k}(t)-\beta^{k+1}(t) ) |\) is uniformly convergent.

Definition 2.8

([9])

The β-trigonometric functions are defined by

$$\begin{aligned}& \cos_{p,\beta }(t) =\frac{ e_{ip,\beta }(t)+e_{-ip,\beta }(t)}{2}, \\& \sin_{p,\beta } (t) =\frac{ e_{ip,\beta }(t)-e_{-ip,\beta }(t)}{2i}, \\& \operatorname{Cos}_{p,\beta } (t) =\frac{E_{ip,\beta }(t)+E_{-ip,\beta }(t)}{2}, \\& \text{and} \quad \operatorname{Sin}_{p,\beta }(t) =\frac{E_{ip,\beta }(t)-E_{-ip,\beta } (t)}{2i}. \end{aligned}$$

Theorem 2.9

([9])

The β-exponential functions \(e_{p,\beta }(t)\) and \(E_{p,\beta }(t)\) are the unique solutions of the first-order β-difference equations

$$\begin{aligned}& D_{\beta }y(t)=p(t)y(t), \qquad y(s_{0})=1, \\& D_{\beta }y(t)=p(t)y\bigl(\beta (t)\bigr), \qquad y(s_{0})=1, \end{aligned}$$

respectively.
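These facts can be checked numerically in the Jackson case \(\beta (t)=qt\) with a constant function \(p(t)=z\). The sketch below (an illustration, not from [9]) truncates the infinite products (2.2) and (2.3) and verifies both the identity \(e_{p,\beta }=1/E_{-p,\beta }\) and the first β-difference equation of Theorem 2.9 at a point \(t\neq s_{0}\).

```python
q, z = 0.5, 0.3
beta = lambda t: q * t                 # fixed point s0 = 0

def e_beta(t, N=400):
    """Truncated product (2.2) with constant p(t) = z."""
    prod, bk = 1.0, t
    for _ in range(N):
        bk1 = beta(bk)
        prod *= 1.0 - z * (bk - bk1)
        bk = bk1
    return 1.0 / prod

def E_beta(p, t, N=400):
    """Truncated product (2.3) with constant p."""
    prod, bk = 1.0, t
    for _ in range(N):
        bk1 = beta(bk)
        prod *= 1.0 + p * (bk - bk1)
        bk = bk1
    return prod

t = 1.0
assert abs(e_beta(t) * E_beta(-z, t) - 1.0) < 1e-12
# Theorem 2.9: D_beta e_{z,beta}(t) = z e_{z,beta}(t) for t != s0
lhs = (e_beta(beta(t)) - e_beta(t)) / (beta(t) - t)
assert abs(lhs - z * e_beta(t)) < 1e-9
```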

Theorem 2.10

([9])

Assume that \(f:I\rightarrow \mathbb{X}\) is continuous at \(s_{0}\). Then the solution of the equation \(D_{\beta }y(t)= p(t)y(t)+f(t)\), \(y(s_{0})=y_{0}\in \mathbb{X}\), has the form

$$ y(t)=e_{p,\beta }(t) \biggl[y_{0}+ \int_{s_{0}}^{t}f(\tau)E_{-p,\beta }\bigl( \beta ( \tau)\bigr)\,d_{\beta }\tau \biggr]. $$

Theorem 2.11

([11])

Let \(z\in \mathbb{C}\) be a constant. Then the function \(\phi (t)\) defined by

$$ \phi (t)=\sum_{k=0}^{\infty }z^{k} \alpha_{k}(t) $$

is the unique solution of the β-IVP

$$ D_{\beta }{y(t)}=zy(t),\qquad y(s_{0})=1, $$

where

$$\alpha_{k}(t) =\textstyle\begin{cases} \sum_{i_{1},i_{2},i_{3},\dots,i_{k-1}=0}^{\infty } (\prod_{l=1} ^{k-1}(\beta,\beta)_{\sum_{j=1}^{l}{i_{j}}} ) ( \beta^{\sum_{j=1}^{k-1}{i_{j}}}(t)-s_{0} ),&\textit{if } k\geq 2, \\ t-s_{0},&\textit{if } k=1, \\ 1,&\textit{if } k=0, \end{cases} $$

with \((\beta,\beta)_{i}=\beta^{i}(t)-\beta^{i+1}(t)\).

Proposition 2.12

([11])

Let \(z\in \mathbb{C}\). The β-exponential function \(e_{z,\beta }(t)\) has the expansion

$$ e_{z,\beta }(t)=\sum_{k=0}^{\infty }z^{k} \alpha_{k}(t). $$

Theorem 2.13

([11])

Assume that \(f:R\rightarrow {\mathbb{X}}\) is continuous at \((s_{0},y_{0})\in {R}\) and satisfies the Lipschitz condition (with respect to y)

$$ \bigl\Vert f(t,y_{1})-f(t,y_{2})\bigr\Vert \leq {L} \Vert y_{1}-y_{2}\Vert \quad \textit{for all } (t,y_{1}),(t,y_{2})\in {R}, $$

where L is a positive constant. Then the sequence defined by

$$ \phi_{k+1}(t)=y_{0}+ \int_{s_{0}}^{t}f \bigl(\tau,\phi_{k}(\tau) \bigr)\,d_{\beta }{\tau },\qquad \phi_{0}(t)=y_{0}, \quad \vert t-s_{0}\vert \leq {\delta }, k\geq {0} $$
(2.4)

converges uniformly on the interval \(|t-s_{0}|\leq {\delta }\) to a function ϕ, the unique solution of the β-IVP

$$ D_{\beta }{y(t)}=f(t,y),\qquad y(s_{0})=y_{0}, \quad t\in {I}, $$
(2.5)

where \(\delta =\min \{a,\frac{b}{Lb+M}, \frac{\rho }{L}\}\) with \(\rho \in (0,1)\) and \(M=\sup_{(t,y)\in {R}}\Vert f(t,y)\Vert <\infty \).
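To see the scheme (2.4) in action, the sketch below (illustrative, not from [11]) runs the iteration for \(f(t,y)=zy\), \(y_{0}=1\), in the Jackson case \(\beta (t)=qt\), \(s_{0}=0\). Polynomials are stored by their coefficients, using the exact β-integral \(\int_{0}^{t}\tau ^{m}\,d_{\beta }\tau =t^{m+1}(1-q)/(1-q^{m+1})\); the iterates converge to \(e_{z,\beta }(t)\).

```python
q, z, y0 = 0.5, 0.7, 1.0

def beta_integrate(coeffs):
    """beta-integral from 0 to t of sum_m c_m tau^m, for beta(t) = q t:
    int_0^t tau^m d_beta tau = t^(m+1) (1 - q) / (1 - q^(m+1))."""
    return [0.0] + [c * (1 - q) / (1 - q ** (m + 1)) for m, c in enumerate(coeffs)]

# Picard iterates (2.4) for D_beta y = z y, y(0) = y0
phi = [y0]                                     # phi_0(t) = y0
for _ in range(30):
    phi = [y0] + beta_integrate([z * c for c in phi])[1:]

# compare with e_{z,beta}(1) computed from the product formula
t, prod, bk = 1.0, 1.0, 1.0
for _ in range(200):
    prod *= 1.0 - z * (bk - q * bk)
    bk *= q
e_z = 1.0 / prod
val = sum(c * t ** m for m, c in enumerate(phi))
assert abs(val - e_z) < 1e-9
```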

Theorem 2.14

([6])

Let \(f_{i}(t,y_{1},y_{2}):I \times \prod_{i=1}^{2} S_{i}(x _{i}, b_{i})\rightarrow {\mathbb{X}}\), \(i=1,2\), be functions, where \(s_{0}\in I\), such that the following conditions are satisfied:

(i)

    For \(y_{i}\in S_{i}(x_{i},b_{i})\), \(i=1,2\), \(f_{i}(t,y_{1},y _{2})\) are continuous at \(t=s_{0}\).

(ii)

    There is a positive constant A such that, for \(t\in I\), \(y_{i}, \tilde{y}_{i}\in S_{i}(x_{i},b_{i})\), \(i=1,2\), the following Lipschitz condition is satisfied:

$$ \bigl\Vert f_{i}(t,y_{1},y_{2})-f_{i}(t, \tilde{y}_{1},\tilde{y}_{2}) \bigr\Vert \leq A \sum _{i=1}^{2} \Vert y_{i}- \tilde{y}_{i} \Vert . $$

Then there exists a unique solution of the β-initial value problem (β-IVP)

$$\begin{aligned} D_{\beta }y_{i}(t)=f_{i}\bigl(t,y_{1}(t),y_{2}(t) \bigr), \qquad y_{i}(s_{0})=x_{i}\in { \mathbb{X}},\quad i =1,2, t \in I. \end{aligned}$$

Corollary 2.15

([6])

Let \(f(t,y_{1},y_{2})\) be a function defined on \(I\times \prod_{i=1}^{2} S_{i}(x_{i},b_{i})\) such that the following conditions are satisfied:

(i)

    For any values of \(y_{i}\in S_{i}(x_{i},b_{i})\), \(i=1,2\), f is continuous at \(t=s_{0}\).

(ii)

    f satisfies the Lipschitz condition

    $$ \bigl\Vert f(t,y_{1},y_{2})-f(t,\tilde{y}_{1}, \tilde{y}_{2}) \bigr\Vert \leq A\sum_{i=1}^{2} \Vert y_{i} -\tilde{y}_{i} \Vert , $$

where \(A>0\), \(y_{i},\tilde{y}_{i}\in S_{i}(x_{i},b_{i})\), \(i=1,2\), and \(t \in I\). Then

$$\begin{aligned} D_{\beta }^{2}y(t)=f\bigl(t,y(t),D_{\beta }y(t)\bigr), \qquad D_{\beta }^{i-1}y(s_{0})=x_{i},\quad i=1,2, \end{aligned}$$

has a unique solution on \([s_{0},s_{0} +\delta ]\).

Corollary 2.16

([6])

Assume that the functions \(a_{j}(t):I\rightarrow \mathbb{C}\), \(j=0,1,2\), and \(b(t):I\rightarrow {\mathbb{X}}\) satisfy the following conditions:

(i)

    \(a_{j}(t)\), \(j=0,1,2\), and \(b(t)\) are continuous at \(s_{0}\) with \(a_{0}(t)\neq 0\) for all \(t \in I\),

(ii)

    \(a_{j}(t)/a_{0}(t)\) is bounded on I, \(j=1,2\). Then

$$ a_{0}(t)D_{\beta }^{2}y(t)+ a_{1}(t)D_{\beta }y(t)+a_{2}(t)y(t)=b(t), \qquad D_{\beta }^{i-1}y(s_{0})= x_{i},\quad x_{i} \in {\mathbb{X}}, i=1,2, $$

has a unique solution on a subinterval \(J\subseteq I\), \(s_{0}\in J\).

3 Main results

In this section, we give the sufficient conditions for the existence and uniqueness of solutions of the β-Cauchy problem of the nth-order β-difference equations. We also present the fundamental set of solutions for the homogeneous linear β-difference equations when the coefficients \(a_{j}\) (\(0\leq j \leq n \)) are constants. Furthermore, we introduce the β-Wronskian. Finally, we study the undetermined coefficients, the variation of parameters, and the annihilator methods for the non-homogeneous linear β-difference equations.

3.1 Existence and uniqueness of solutions

Theorem 3.1

Let I be an interval containing \(s_{0}\), and let \(f_{i}(t,y_{1},\ldots,y _{n}):I \times \prod_{i=1}^{n}S_{i}(x_{i},b_{i})\rightarrow \mathbb{X}\), \(i=1,\ldots,n\), be functions such that the following conditions are satisfied:

(i)

    For \(y_{i}\in S_{i}(x_{i},b_{i})\), \(i=1,\ldots,n\), \(f_{i}(t,y_{1},\ldots,y_{n})\) are continuous at \(t=s_{0}\).

(ii)

    There is a positive constant A such that, for \(t \in I\), \(y_{i},\tilde{y}_{i}\in S_{i}(x_{i},b_{i})\), \(i=1,\ldots,n\), the following Lipschitz condition is satisfied:

$$ \bigl\Vert f_{i}(t,y_{1},\ldots,y_{n})-f_{i}(t, \tilde{y}_{1},\ldots,\tilde{y}_{n}) \bigr\Vert \leq A \sum _{i=1}^{n} \Vert y_{i}- \tilde{y}_{i} \Vert . $$

Then there exists a unique solution of the β-initial value problem (β-IVP)

$$\begin{aligned} D_{\beta }y_{i}(t)=f_{i}\bigl(t,y_{1}(t), \ldots,y_{n}(t)\bigr),\qquad y_{i}(s_{0})=x_{i} \in \mathbb{X},\quad i=1,\ldots,n, t \in I. \end{aligned}$$

Proof

See the proof of Theorem 2.14. □

The proofs of the following two corollaries are the same as those of Corollaries 2.15 and 2.16.

Corollary 3.2

Let \(f(t,y_{1},\ldots,y_{n})\) be a function defined on \(I\times \prod_{i=1}^{n} S_{i}(x_{i},b_{i})\) such that the following conditions are satisfied:

(i)

For any values of \(y_{i}\in S_{i}(x_{i},b_{i})\), \(i=1,\ldots,n\), f is continuous at \(t=s_{0}\).

(ii)

    f satisfies the Lipschitz condition

    $$ \bigl\Vert f (t,y_{1},\ldots,y_{n})-f(t, \tilde{y}_{1},\ldots,\tilde{y}_{n}) \bigr\Vert \leq A\sum _{i=1}^{n} \Vert y_{i}- \tilde{y}_{i} \Vert , $$

where \(A>0\), \(y_{i},\tilde{y}_{i}\in S_{i}(x_{i},b_{i})\), \(i=1, \ldots,n\), and \(t \in I\). Then

$$\begin{aligned}& \begin{aligned} &D_{\beta }^{n} y(t) =f \bigl(t,y(t),D_{\beta }y(t),\ldots,D_{\beta }^{n-1}y(t)\bigr), \\ &D_{\beta }^{i-1}y(s_{0}) =x_{i},\quad i=1, \ldots,n, \end{aligned} \end{aligned}$$
(3.1)

has a unique solution on \([s_{0},s_{0} +\delta ]\).

The following corollary gives us the sufficient conditions for the existence and uniqueness of solutions of the β-Cauchy problem (3.1).

Corollary 3.3

Assume that the functions \(a_{j}(t):I\rightarrow \mathbb{C}\), \(j=0,1, \ldots,n\), and \(b(t):I\rightarrow \mathbb{X}\) satisfy the following conditions:

(i)

    \(a_{j}(t)\), \(j=0,1,\ldots,n\), and \(b(t)\) are continuous at \(s_{0}\) with \(a_{0}(t)\neq 0\) for all \(t \in I \),

(ii)

    \(a_{j}(t)/a_{0}(t)\) is bounded on I, \(j=1,\ldots,n\). Then

$$\begin{aligned}& a_{0}(t)D_{\beta }^{n}y(t)+a_{1}(t)D_{\beta }^{n-1}y(t)+ \cdots +a_{n}(t)y(t)=b(t), \\& D_{\beta }^{i-1}y(s_{0})=x_{i},\quad i=1, \ldots,n, \end{aligned}$$

has a unique solution on a subinterval \(J\subset I\) containing \(s_{0}\).

3.2 Homogeneous linear β-difference equations

Consider the nth-order homogeneous linear β-difference equation

$$ a_{0}(t)D_{\beta }^{n}y(t)+a_{1}(t)D_{\beta }^{n-1}y(t)+ \cdots + a _{n-1}(t)D_{\beta }y(t)+a_{n}(t)y(t)=0, $$
(3.2)

where the coefficients \(a_{j}(t)\), \(0\leq j\leq n\), are assumed to satisfy the conditions of Corollary 3.3. Equation (3.2) may be written as \(L_{n}y=0\), where

$$ L_{n}=a_{0}(t)D_{\beta }^{n}+a_{1}(t)D_{\beta }^{n-1}+ \cdots +a_{n-1}(t)D _{\beta }+a_{n}(t). $$

The following lemma is an immediate consequence of Corollary 3.3.

Lemma 3.4

If y is a solution of equation (3.2) such that \(D_{\beta } ^{i-1}y(s_{0})=0\), \(1\leq i\leq n\), then \(y(t)=0\) for all \(t\in J\).

Theorem 3.5

The nth-order homogeneous linear scalar β-difference equation (3.2) is equivalent to the first-order homogeneous linear system of the form

$$ D_{\beta }Y(t)=A(t)Y(t), $$

where

$$\begin{aligned} Y= \left ( \begin{matrix} y_{1} \\ \vdots \\ y_{n} \end{matrix} \right ) \quad \textit{and}\quad A= \left ( \begin{matrix} 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \ldots & \vdots \\ 0 & 0 & & 1 \\ -\frac{a_{n}}{a_{0}} & -\frac{a_{n-1}}{a_{0}} & \ldots & -\frac{a_{1}}{a _{0}} \end{matrix} \right ) . \end{aligned}$$

Proof

Let

$$\begin{aligned}& \begin{aligned} &y_{1} =y, \\ &y_{2} = D_{\beta }y, \\ &\vdots \\ &y_{n-1} =D_{\beta }^{n-2}y, \\ &y_{n} =D_{\beta }^{n-1}y. \end{aligned} \end{aligned}$$
(3.3)

β-differentiating (3.3), we have

$$\begin{aligned} D_{\beta }y=D_{\beta }y_{1} ,\qquad D_{\beta }^{2}y=D_{\beta }y_{2}, \qquad \ldots,\qquad D_{\beta }^{n-1}y=D_{\beta }y_{n-1}, \qquad D_{\beta }^{n}y=D_{\beta }y_{n}. \end{aligned}$$
(3.4)

Then

$$\begin{aligned} D_{\beta }y_{1}=y_{2}, \qquad D_{\beta }y_{2}=y_{3}, \qquad \ldots,\qquad D_{\beta }y _{n-1}=y_{n}. \end{aligned}$$
(3.5)

Since \(a_{0}(t) \neq 0\) on J, (3.2) is equivalent to

$$ D_{\beta }^{n}y=-\frac{a_{n}(t)}{a_{0}(t)}y- \frac{a_{n-1}(t)}{a_{0}(t)}D_{\beta }y- \cdots - \frac{a_{1}(t)}{a_{0}(t)}D_{\beta }^{n-1} y, $$

from (3.3) and (3.4), we have

$$\begin{aligned} D_{\beta }y_{n}= -\frac{a_{n}(t)}{a_{0}(t)}y_{1}- \frac{a_{n-1}(t)}{a _{0}(t)}y_{2}-\cdots -\frac{a_{1}(t)}{a_{0}(t)}y_{n}. \end{aligned}$$
(3.6)

Combining (3.5) and (3.6), we get

$$\begin{aligned}& \begin{aligned} &D_{\beta }y_{1}= y_{2}, \\ &\vdots \\ &D_{\beta }y_{n-1}= y_{n}, \\ &D_{\beta }y_{n}= -\frac{a_{n}(t)}{a_{0}(t)}y_{1}- \frac{a_{n-1}(t)}{a _{0}(t)}y_{2}-\cdots -\frac{a_{1}(t)}{a_{0}(t)}y_{n}. \end{aligned} \end{aligned}$$
(3.7)

This is equivalent to the homogeneous linear vector β-difference equation

$$\begin{aligned} D_{\beta }Y(t)=A(t)Y(t), \end{aligned}$$
(3.8)

where

$$\begin{aligned} Y= \left ( \begin{matrix} y_{1} \\ \vdots \\ y_{n} \end{matrix} \right ) \quad \textit{and}\quad A= \left ( \begin{matrix} 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \ldots & \vdots \\ 0 & 0 & & 1 \\ -\frac{a_{n}}{a_{0}} & -\frac{a_{n-1}}{a_{0}} & \ldots & -\frac{a_{1}}{a _{0}} \end{matrix} \right ). \end{aligned}$$

 □
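The reduction in the proof amounts to forming the companion matrix of the coefficients. A small helper (illustrative; the function name `companion` is ours, not from the text) that builds A from \(a_{0},\ldots,a_{n}\):

```python
from fractions import Fraction

def companion(a):
    """Companion matrix A of a0 D^n y + a1 D^(n-1) y + ... + an y = 0:
    ones on the superdiagonal, last row (-an/a0, ..., -a1/a0)."""
    a = [Fraction(x) for x in a]
    n = len(a) - 1
    A = [[Fraction(int(j == i + 1)) for j in range(n)] for i in range(n - 1)]
    A.append([-a[n - j] / a[0] for j in range(n)])
    return A

# a0 = 1, a1 = -5, a2 = 8, a3 = -4
A = companion([1, -5, 8, -4])
assert A[0] == [0, 1, 0] and A[1] == [0, 0, 1]
assert A[2] == [4, -8, 5]      # (-a3/a0, -a2/a0, -a1/a0)
```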

Theorem 3.6

Consider equation (3.2) and the corresponding system (3.8). If f is a solution of (3.2) on J, then \(\phi = (f,D_{\beta }f,\ldots,D_{\beta }^{n-1}f )^{T}\) is a solution of (3.8) on J. Conversely, if \(\phi = (\phi_{1}, \ldots,\phi_{n} )^{T}\) is a solution of (3.8) on J, then its first component \(\phi_{1}\) is a solution f of (3.2) on J and \(\phi = (f,D_{\beta }f,\ldots,D_{\beta }^{n-1}f )^{T}\).

Proof

Suppose that f satisfies equation (3.2). Then

$$\begin{aligned} a_{0}(t)D_{\beta }^{n}f(t)+\cdots +a_{n-1}(t)D_{\beta }f(t)+a_{n}(t)f(t)=0,\quad t\in J. \end{aligned}$$
(3.9)

Consider

$$\begin{aligned} \phi (t)= \bigl(\phi_{1}(t),\ldots, \phi_{n}(t) \bigr)^{T}= \bigl(f(t),D_{\beta }f(t), \ldots,D_{\beta }^{n-1}f(t) \bigr)^{T}. \end{aligned}$$
(3.10)

From (3.9) and (3.10), we have

$$\begin{aligned} \begin{aligned}&D_{\beta }\phi_{1}(t) =\phi_{2}(t), \\ &\vdots \\ &D_{\beta }\phi_{n-1}(t) =\phi_{n}(t), \\ &D_{\beta }\phi_{n}(t) = -\frac{a_{n}(t)}{a_{0}(t)} \phi_{1}(t)-\frac{a _{n-1}(t)}{a_{0}(t)}\phi_{2}(t)-\cdots - \frac{a_{1}(t)}{a_{0}(t)}\phi _{n}(t). \end{aligned} \end{aligned}$$
(3.11)

Comparing (3.11) with (3.7), we see that ϕ defined by (3.10) satisfies system (3.7). Conversely, suppose that \(\phi (t)= (\phi_{1}(t),\ldots,\phi_{n}(t) )^{T}\) satisfies system (3.7) on J. Then (3.11) holds for all \(t \in J\). The first \(n - 1\) equations of (3.11) give

$$\begin{aligned} \begin{aligned}&\phi_{2}(t) =D_{\beta }\phi_{1}(t), \\ &\phi_{3}(t) =D_{\beta }\phi_{2}(t)=D_{\beta }^{2} \phi_{1}(t), \\ &\vdots \\ &\phi_{n}(t) = D_{\beta }\phi_{n-1}(t)=D_{\beta }^{2} \phi_{n-2}(t)= \cdots =D_{\beta }^{n-1} \phi_{1}(t), \end{aligned} \end{aligned}$$
(3.12)

and so \(D_{\beta }\phi_{n}(t)=D_{\beta }^{n}\phi_{1}(t)\). The last equation of (3.11) becomes

$$ a_{0}(t)D_{\beta }^{n}\phi_{1}(t)+a_{1}(t)D_{\beta }^{n-1} \phi_{1}(t)+ \cdots +a_{n-1}(t)D_{\beta } \phi_{1}(t) +a_{n}(t)\phi_{1}(t)=0. $$

Thus \(\phi_{1}\) is a solution f of equation (3.2); and moreover, (3.12) shows that \(\phi (t)= (f(t),D_{\beta }f(t), \ldots, D_{\beta }^{n-1}f(t) )^{T}\). □

The following corollary is an immediate consequence of Theorem 3.6.

Corollary 3.7

If f is the solution of equation (3.2) on J satisfying the initial condition \(D_{\beta }^{i-1}f(s_{0})=x_{i}\), \(1\leq i\leq n\), then \(\phi = (f,D_{\beta }f,\ldots,D_{\beta }^{n-1}f )^{T}\) is the solution of system (3.8) on J satisfying the initial condition \(\phi (s_{0})=(x_{1},\dots,x_{n})^{T}\). Conversely, if \(\phi =(\phi_{1},\dots,\phi_{n})^{T}\) is the solution of (3.8) on J satisfying the initial condition \(\phi (s_{0})=(x _{1},\dots,x_{n})^{T}\), then \(\phi_{1}\) is the solution f of (3.2) on J satisfying the initial condition \(D_{\beta }^{i-1}f(s _{0} ) =x_{i}\), \(1\leq i\leq n\).

Theorem 3.8

A linear combination \(y=\sum_{k=1}^{m}c_{k}y_{k}\) of m solutions \(y_{1},\ldots,y_{m}\) of the homogeneous linear β-difference equation (3.2) is also a solution of it, where \(c_{1},\ldots,c _{m}\) are arbitrary constants.

Proof

The proof is straightforward. □

Definition 3.9

(A fundamental set)

A set of n linearly independent solutions of the nth-order homogeneous linear β-difference equation (3.2) is called a fundamental set of equation (3.2).

By analogy with the theory of ordinary differential equations, we can easily prove the following theorems.

Theorem 3.10

If the solutions \(y_{1},\ldots,y_{n}\) of the homogeneous linear β-difference equation (3.2) are linearly independent on J, then the corresponding solutions

$$ \phi_{1}= \bigl(y_{1},D_{\beta }y_{1}, \ldots,D_{\beta }^{n-1}y_{1} \bigr)^{T},\qquad \ldots,\qquad \phi_{n}= \bigl(y_{n}, D_{\beta }y_{n}, \ldots, D_{ \beta }^{n-1}y_{n} \bigr)^{T} $$

of system (3.8) are linearly independent on J; and conversely.

Theorem 3.11

Any arbitrary solution y of homogeneous linear β-difference equation (3.2) on J can be represented as a suitable linear combination of a fundamental set of solutions \(y_{1},\ldots,y_{n}\) of (3.2).

Now, we are concerned with constructing the fundamental set of solutions of equation (3.2) when the coefficients are constants. Equation (3.2) can be written as

$$\begin{aligned} L_{n} y(t)=a_{0} D_{\beta }^{n}y(t)+a_{1}D_{\beta }^{n-1}y(t)+ \cdots +a_{n} y(t)=0, \end{aligned}$$
(3.13)

where \(a_{j}\), \(0\leq j \leq n\), are constants. From Theorem 3.5, equation (3.13) is equivalent to the system

$$ D_{\beta }{Y(t)}=AY(t), $$
(3.14)

where

$$ A= \left ( \begin{matrix} 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \ldots & \vdots \\ 0 & 0 & & 1 \\ -\frac{a_{n}}{a_{0}} & -\frac{a_{n-1}}{a_{0}} & \ldots &-\frac{a_{1}}{a _{0}} \end{matrix} \right ) . $$

The characteristic polynomial of equation (3.13) is given by

$$ P(\lambda)=a_{0}\det (\lambda \mathcal{I}-A)=a_{0} \lambda^{n}+a_{1} \lambda^{n-1}+\cdots +a_{n}, $$
(3.15)

where \(\mathcal{I}\) is the unit square matrix of order n, and \(\lambda_{i}\), \(1\leq i \leq k\), are the distinct roots of \(P(\lambda)=0\) with multiplicities \(m_{i}\), so that \(\sum_{i=1}^{k}m_{i}=n\).

Theorem 3.12

Let A be a constant \(n\times n\) matrix. Then the function \(\Phi (t)\) defined by

$$ \Phi (t)=\sum_{r=0}^{\infty }A^{r} \alpha_{r}(t) $$

is the unique solution of the β-IVP

$$ D_{\beta }{Y(t)}=AY(t),\quad Y(s_{0})= \mathcal{I}, $$

where \(\mathcal{I}\) is the unit square matrix of order n and

$$\begin{aligned} \alpha_{r}(t)= \textstyle\begin{cases} \sum_{i_{1},i_{2},i_{3},\ldots,i_{r-1}=0}^{\infty }(\prod_{l=1}^{r-1}( \beta,\beta)_{\sum_{j=1}^{l}i_{j} })(\beta^{\sum_{j=1}^{r-1}i_{j}} {(t)-s_{0}}),& \textit{if } r \geq 2, \\ t-s_{0},& \textit{if } r=1, \\ \mathcal{I},& \textit{if } r=0, \end{cases}\displaystyle \end{aligned}$$

with \((\beta,\beta)_{i}=\beta^{i}(t)-\beta^{i+1}(t)\).

Proof

By using successive approximations with \(\Phi_{0}(t)=\mathcal{I}\), we obtain the desired result. See the proof of Theorem 2.11. □

Corollary 3.13

Let A be a constant \(n\times n\) matrix with characteristic polynomial (3.15). Then \(\Phi (t)=e_{A,\beta }(t)=\sum_{r=0}^{\infty }A^{r} \alpha_{r}(t)\) is the unique solution of (3.13) satisfying the initial conditions

$$ \Phi (s_{0})=\mathcal{I},\qquad D_{\beta }\Phi (s_{0})=A,\qquad \ldots,\qquad D_{\beta } ^{n-1}\Phi (s_{0})=A^{n-1}. $$

Proof

The proof is straightforward. □

It follows from the above that, when the roots of (3.15) are simple (\(k=n\)), the functions

$$ y_{i}(t)=e_{\lambda_{i},\beta }(t)=\sum_{r=0}^{\infty } \lambda_{i} ^{r}\alpha_{r}(t),\quad 1\leq i \leq n, $$

form a fundamental set of solutions of equation (3.13).

Example 3.14

Consider the homogeneous linear system

$$\begin{aligned} D_{\beta }Y(t)= \left ( \begin{matrix} 3 &1 & -1 \\ 1& 3 & -1 \\ 3 & 3& -1 \end{matrix} \right ) Y(t). \end{aligned}$$
(3.16)

Let \(Y(t)=\gamma e_{\lambda,\beta }(t)\), where \(\gamma =(\gamma_{1},\gamma_{2},\gamma_{3})^{T}\) is a constant vector. The characteristic equation is

$$ \lambda^{3}-5\lambda^{2}+8\lambda -4=0, $$

with roots \(\lambda_{1}=1\), \(\lambda_{2}=\lambda_{3}=2\). Then

$$ y_{1}(t)= \left ( \begin{matrix} 1 \\ 1 \\ 3 \end{matrix} \right ) e_{1,\beta }(t),\qquad y_{2}(t)= \left ( \begin{matrix} 1 \\ -1 \\ 0 \end{matrix} \right ) e_{2,\beta }(t) \quad \mbox{and}\quad y_{3}(t)= \left ( \begin{matrix} 1 \\ 0 \\ 1 \end{matrix} \right ) e_{2,\beta }(t) $$

are the solutions of (3.16). The general solution of system (3.16) is

$$ Y(t)=c_{1} \left ( \begin{matrix} e_{1,\beta }(t) \\ e_{1,\beta }(t) \\ 3 e_{1,\beta }(t) \end{matrix} \right ) +c_{2} \left ( \begin{matrix} e_{2,\beta }(t) \\ -e_{2,\beta }(t) \\ 0 \end{matrix} \right ) +c_{3} \left ( \begin{matrix} e_{2,\beta }(t) \\ 0 \\ e_{2,\beta }(t) \end{matrix} \right ) , $$

where \(c_{1}\), \(c_{2}\), and \(c_{3}\) are arbitrary constants.
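The linear algebra behind Example 3.14 can be verified directly (a plain numerical check, not part of the example): the claimed λ's are roots of the characteristic equation, and each vector is an eigenvector of the coefficient matrix.

```python
A = [[3, 1, -1],
     [1, 3, -1],
     [3, 3, -1]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

p = lambda lam: lam ** 3 - 5 * lam ** 2 + 8 * lam - 4

# roots of the characteristic equation
assert p(1) == 0 and p(2) == 0
# A gamma = lambda gamma for each eigenpair used in the example
for lam, g in [(1, [1, 1, 3]), (2, [1, -1, 0]), (2, [1, 0, 1])]:
    assert matvec(A, g) == [lam * x for x in g]
```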

Example 3.15

Consider the homogeneous linear system

$$\begin{aligned} D_{\beta }Y(t)= \left ( \begin{matrix} 4 & 3 & 1 \\ -4 & -4 & -2 \\ 8 & 12& 6 \end{matrix} \right ) Y(t). \end{aligned}$$
(3.17)

Assume that \(Y=\gamma e_{\lambda,\beta }(t)\). The characteristic equation is

$$ \lambda^{3}-6\lambda^{2}+12\lambda -8=0, $$

with the repeated root \(\lambda_{1}=\lambda_{2}=\lambda_{3}=2\). Two linearly independent solutions are

$$ y_{1}(t)= \left ( \begin{matrix} 1 \\ 0 \\ - 2 \end{matrix} \right ) e_{2,\beta }(t) \quad \mbox{and}\quad y_{2}(t)= \left ( \begin{matrix} 0 \\ 1 \\ - 3 \end{matrix} \right ) e_{2,\beta }(t). $$

Let \(y_{3}(t)=(\gamma t+\nu)e_{2,\beta }(t)\),

$$ \gamma= \left ( \begin{matrix} k_{1} \\ k_{2} \\ -2k_{1} - 3k_{2} \end{matrix} \right ) \quad \mbox{and}\quad \nu = \left ( \begin{matrix} \nu_{1} \\ \nu_{2} \\ \nu_{3} \end{matrix} \right ) , $$

where \(k_{1}\) and \(k_{2}\) are constants, and γ and ν satisfy

$$ (A-\lambda \mathcal{I})\gamma =0 $$

and

$$ (A-\lambda \mathcal{I})\nu =\gamma. $$

Therefore,

$$ y_{3}(t)=\left [ \left ( \begin{matrix} 1 \\ -2 \\ 4 \end{matrix} \right ) t+ \left ( \begin{matrix} 0 \\ 0 \\ 1 \end{matrix} \right ) \right ] e_{2,\beta }(t). $$

The general solution of system (3.17) is

$$ Y(t)=c_{1} \left ( \begin{matrix} e_{2,\beta }(t) \\ 0 \\ - 2e_{2,\beta }(t) \end{matrix} \right ) +c_{2} \left ( \begin{matrix} 0 \\ e_{2,\beta }(t) \\ - 3e_{2,\beta }(t) \end{matrix} \right ) +c_{3} \left ( \begin{matrix} te_{2,\beta }(t) \\ -2te_{2,\beta }(t) \\ (4t + 1)e_{2,\beta }(t) \end{matrix} \right ) , $$

where \(c_{1}\), \(c_{2}\), \(c_{3}\) are arbitrary constants.
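As with the previous example, the eigen-computations can be checked directly (an illustrative verification): both eigenvectors lie in the kernel of \(A-2\mathcal{I}\), and ν solves the generalized eigenvector equation \((A-2\mathcal{I})\nu =\gamma \).

```python
A = [[4, 3, 1],
     [-4, -4, -2],
     [8, 12, 6]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# B = A - 2I
B = [[A[i][j] - (2 if i == j else 0) for j in range(3)] for i in range(3)]

# (A - 2I) gamma = 0 for both eigenvectors
assert matvec(B, [1, 0, -2]) == [0, 0, 0]
assert matvec(B, [0, 1, -3]) == [0, 0, 0]
# with k1 = 1, k2 = -2: gamma = (1, -2, 4)^T, and (A - 2I) nu = gamma
gamma, nu = [1, -2, 4], [0, 0, 1]
assert matvec(B, gamma) == [0, 0, 0]
assert matvec(B, nu) == gamma
```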

3.3 β-Wronskian

Definition 3.16

Let \(y_{1},\dots,y_{n}\) be functions that are \((n-1)\) times β-differentiable on I. Then the β-Wronskian of \(y_{1},\ldots,y_{n}\) is defined by

$$ W_{\beta }(y_{1},\ldots,y_{n}) (t)=\left \vert \begin{matrix} y_{1}(t) & \ldots & y_{n}(t) \\ D_{\beta }y_{1}(t)& \ldots & D_{\beta }y_{n} (t) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{n-1}y_{1}(t) & \ldots & D_{\beta }^{n-1}y_{n}(t) \end{matrix} \right \vert . $$

Throughout this paper, we write \(W_{\beta }\) instead of \(W_{\beta }(y _{1},\ldots,y_{n})\).

Lemma 3.17

Let \(y_{1}(t),\ldots,y_{n}(t)\) be n-times β-differentiable functions defined on I. Then, for any \(t\in I\), \(t \neq s_{0}\),

$$ D_{\beta }W_{\beta }(y_{1},\ldots,y_{n}) (t)= \left \vert \begin{matrix} y_{1}(\beta (t)) & \ldots & y_{n}(\beta (t)) \\ D_{\beta }y_{1} (\beta (t)) & \ldots & D_{\beta }y_{n} (\beta (t)) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{n-2}y_{1}(\beta (t)) & \ldots & D_{\beta }^{n-2} y_{n}( \beta (t)) \\ D_{\beta }^{n} y_{1}(t) & \ldots & D_{\beta }^{n} y_{n} (t) \end{matrix} \right \vert . $$
(3.18)

Proof

We proceed by induction on n. The lemma is trivial when \(n =1\). Suppose that it is true for \(n=k\); we show that it holds for \(n=k+1\).

We expand \(W_{\beta }(y_{1},\ldots,y_{k+1})\) in terms of the first row to obtain

$$ W_{\beta }(y_{1},\ldots,y_{k+1})=\sum _{j=1}^{k+1}(-1)^{j+1} y_{j}(t) W_{\beta }^{(j)}(t), $$

where

$$ W_{\beta }^{(j)}=\textstyle\begin{cases} W_{\beta }(D_{\beta }y_{2},\ldots,D_{\beta }y_{k+1}),& j=1, \\ W_{\beta }(D_{\beta }y_{1},\ldots,D_{\beta }y_{j-1},D_{\beta }y_{j+1}, \ldots,D_{\beta }y_{k+1} ),& 2\leq j\leq k, \\ W_{\beta }(D_{\beta }y_{1},\ldots,D_{\beta }y_{k}),& j=k+1. \end{cases} $$

Consequently,

$$ D_{\beta }W_{\beta }(y_{1},\ldots,y_{k+1}) (t)= \sum_{j=1}^{k+1}(-1)^{j+1}D _{\beta }y_{j}(t)W_{\beta }^{(j)} (t)+\sum _{j=1}^{k+1}(-1)^{j+1}y_{j} \bigl( \beta (t)\bigr) D_{\beta }W_{\beta }^{(j)}(t). $$

We have

$$ \sum_{j=1}^{k+1}(-1)^{j+1}D_{\beta }y_{j} (t)W_{\beta }^{(j)}(t)= \left \vert \begin{matrix} D_{\beta }y_{1}(t) &\ldots &D_{\beta }y_{k+1}(t) \\ D_{\beta }y_{1}(t) & \ldots & D_{\beta }y_{k+1}(t) \\ D_{\beta }^{2} y_{1}(t) & \ldots & D_{\beta }^{2} y_{k+1}(t) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{k-1} y_{1}(t) & \ldots & D_{\beta }^{k-1}y_{k+1}(t) \\ D_{\beta }^{k} y_{1}(t) & \ldots & D_{\beta }^{k}y_{k+1}(t) \end{matrix} \right \vert =0, $$

and from the induction hypothesis we have

$$\begin{aligned}& \sum_{j=1}^{k+1}(-1)^{j+1}y_{j} \bigl(\beta (t)\bigr)D_{\beta }W_{\beta }^{(j)}(t) \\& \quad = \sum_{j=1}^{k+1}(-1)^{j+1}y_{j} \bigl(\beta (t)\bigr) \\& \qquad {}\times \left \vert \begin{matrix} D_{\beta }y_{1}(\beta (t)) & \ldots & D_{\beta }y_{j-1}(\beta (t)) & D _{\beta }y_{j+1}(\beta (t)) & \ldots &D_{\beta }y_{k+1}(\beta (t)) \\ D_{\beta }^{2} y_{1}(\beta (t)) & \ldots &D_{\beta }^{2} y_{j-1}( \beta (t)) &D_{\beta }^{2} y_{j+1}(\beta (t)) & \ldots & D_{\beta } ^{2}y_{k+1}(\beta (t)) \\ \vdots & \ddots & \vdots & \ddots & \vdots & \vdots \\ D_{\beta }^{k-1} y_{1}(\beta (t)) & \ldots &D_{\beta }^{k-1} y_{j-1}( \beta (t)) & D_{\beta }^{k-1}y_{j+1}(\beta (t)) & \ldots & D_{\beta } ^{k-1} y_{k+1}(\beta (t)) \\ D_{\beta }^{k+1}y_{1}(t) & \ldots & D_{\beta }^{k+1}y_{j-1}(t) &D_{ \beta }^{k+1}y_{j+1}(t) & \ldots & D_{\beta }^{k+1} y_{k+1}(t) \end{matrix} \right \vert , \end{aligned}$$
(3.19)

where at \(j=1\) the determinant of (3.19) starts with \(D_{\beta }y_{2}(\beta (t))\) and at \(j= k+1\) the determinant ends with \(D_{\beta }^{k+1}y_{k}(t)\). So,

$$ \sum_{j=1}^{k+1}(-1)^{j+1}y_{j} \bigl(\beta (t)\bigr)D_{\beta }W_{\beta }^{(j)}(t)= \left \vert \begin{matrix} y_{1}(\beta (t)) & \ldots & y_{k+1} (\beta (t)) \\ D_{\beta } y_{1} (\beta (t)) & \ldots & D_{\beta } y_{k+1} (\beta (t)) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{k-1} y_{1}(\beta (t)) & \ldots & D_{\beta }^{k-1} y_{k+1}( \beta (t)) \\ D_{\beta }^{k+1}y_{1} (t) & \ldots &D_{\beta }^{k+1} y_{k+1}(t) \end{matrix} \right \vert . $$

Thus, we have

$$ D_{\beta }W_{\beta }(y_{1},\ldots,y_{k+1}) (t)= \left \vert \begin{matrix} y_{1}(\beta (t)) & \ldots & y_{k+1} (\beta (t)) \\ D_{\beta }y_{1} (\beta (t)) & \ldots & D_{\beta }y_{k+1} (\beta (t)) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{k-1} y_{1}(\beta (t)) & \ldots & D_{\beta }^{k-1} y_{k+1}( \beta (t)) \\ D_{\beta }^{k+1}y_{1} (t) & \ldots &D_{\beta }^{k+1} y_{k+1}(t) \end{matrix} \right \vert $$

as required. □
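For \(n=2\), formula (3.18) reads \(D_{\beta }W_{\beta }(y_{1},y_{2})(t)=y_{1}(\beta (t))D_{\beta }^{2}y_{2}(t)-y_{2}(\beta (t))D_{\beta }^{2}y_{1}(t)\), an exact identity for the difference operator. The sketch below (our illustration, Jackson case \(\beta (t)=qt\)) confirms it at a point \(t\neq s_{0}\).

```python
q = 0.5
beta = lambda t: q * t
# beta-derivative as a higher-order function, valid for t != s0 = 0
D = lambda f: (lambda t: (f(beta(t)) - f(t)) / (beta(t) - t))

y1 = lambda t: t ** 2
y2 = lambda t: t ** 3 + 1.0

# 2x2 beta-Wronskian
W = lambda t: y1(t) * D(y2)(t) - y2(t) * D(y1)(t)

t = 1.3
lhs = D(W)(t)
rhs = y1(beta(t)) * D(D(y2))(t) - y2(beta(t)) * D(D(y1))(t)  # (3.18) with n = 2
assert abs(lhs - rhs) < 1e-9
```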

Theorem 3.18

If \(y_{1}(t),\ldots,y_{n}(t)\) are solutions of equation (3.2) in J, then their β-Wronskian satisfies the first-order β-difference equation

$$ D_{\beta }W_{\beta }(t)=-P(t)W_{\beta }(t),\quad \forall t\in J \backslash \{s_{0}\}, $$
(3.20)

where

$$ P(t)= \sum_{k=0} ^{n-1} \bigl(t-\beta (t) \bigr)^{k}a_{k+1}(t)/a_{0}(t). $$

Proof

First, we show by induction that the following relation

$$ D_{\beta }W_{\beta }(y_{1},\ldots,y_{n})=\sum _{k=1}^{n}(-1)^{k-1}\bigl(t- \beta (t)\bigr)^{k-1} \left \vert \begin{matrix} y_{1}(t)& \ldots & y_{n}(t) \\ D_{\beta }y_{1}(t) & \ldots & D_{\beta }y_{n}(t) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{n-k-1}y_{1}(t) & \ldots & D_{\beta }^{n-k-1}y_{n}(t) \\ D_{\beta }^{n-k+1}y_{1}(t) & \ldots & D_{\beta }^{n-k+1}y_{n}(t) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{n}y_{1}(t) & \ldots & D_{\beta }^{n}y_{n}(t) \end{matrix} \right \vert $$
(3.21)

holds. Indeed, (3.21) clearly holds for \(n=1\). Assume that (3.21) holds for \(n=m\). From Lemma 3.17,

$$\begin{aligned} D_{\beta }W_{\beta }(y_{1},\ldots,y_{m+1}) (t) =& \left \vert \begin{matrix} y_{1}(\beta (t)) & \ldots & y_{m+1}(\beta (t)) \\ D_{\beta }y_{1}(\beta (t)) & \ldots & D_{\beta }y_{m+1}(\beta (t)) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{m-1}y_{1}(\beta (t)) & \ldots & D_{\beta }^{m-1}y_{m+1}( \beta (t)) \\ D_{\beta }^{m+1}y_{1}(t) & \ldots & D_{\beta }^{m+1}y_{m+1}(t) \end{matrix} \right \vert \\ =&\sum_{j=1}^{m+1}(-1)^{j+1}y_{j} \bigl(\beta (t)\bigr)W_{\beta }^{\ast (j)}(t), \end{aligned}$$

where

$$ W_{\beta }^{\ast (j)}= \textstyle\begin{cases} D_{\beta }W_{\beta }(D_{\beta }y_{2},\ldots,D_{\beta }y_{m+1}),& j=1, \\ D_{\beta }W_{\beta }(D_{\beta }y_{1},\ldots,D_{\beta }y_{j-1},D_{\beta }y _{j+1},\ldots,D_{\beta }y_{m+1}),& 2\leq j\leq m, \\ D_{\beta }W _{\beta }(D_{\beta }y_{1},\ldots,D_{\beta }y_{m}),& j=m+1. \end{cases} $$

One can see that \(W_{\beta }^{\ast (j)}(t)=\sum_{k=1}^{m}(-1)^{k-1} (t-\beta (t) )^{k-1}R_{jk}\), where

$$\begin{aligned}& R_{jk}= \left \vert \begin{matrix} D_{\beta }y_{1}(t) & \ldots & D_{\beta }y_{j-1}(t) & D_{\beta }y_{j+1}(t) & \ldots & D_{\beta }y_{m+1}(t) \\ D_{\beta }^{2} y_{1}(t) & \ldots & D_{\beta }^{2} y_{j-1}(t) & D_{ \beta }^{2} y_{j+1}(t)& \ldots & D_{\beta }^{2} y_{m+1}(t) \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ D_{\beta }^{m-k} y_{1}(t) & \ldots & D_{\beta }^{m-k} y_{j-1}(t) & D _{\beta }^{m-k} y_{j+1}(t) & \ldots & D_{\beta }^{m-k} y_{m+1}(t) \\ D_{\beta }^{m-k+2} y_{1}(t) & \ldots & D_{\beta }^{m-k+2} y_{j-1}(t) & D_{\beta }^{m-k+2} y_{j+1}(t) & \ldots & D_{\beta }^{m-k+2} y_{m+1}(t) \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ D_{\beta }^{m+1} y_{1}(t) & \ldots & D_{\beta }^{m+1} y_{j-1}(t) & D _{\beta }^{m+1} y_{j+1}(t)& \ldots & D_{\beta }^{m+1} y_{m+1}(t) \end{matrix} \right \vert , \\& \quad 2\leq j\leq m, \\& R_{jk}= \left \vert \begin{matrix} D_{\beta }y_{2}(t) & \ldots & D_{\beta }y_{m+1}(t) \\ D_{\beta }^{2} y_{2}(t)& \ldots & D_{\beta }^{2} y_{m+1}(t) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{m-k} y_{2}(t) & \ldots & D_{\beta }^{m-k} y_{m+1}(t) \\ D_{\beta }^{m-k+2} y_{2}(t) & \ldots & D_{\beta }^{m-k+2} y_{m+1}(t) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{m+1} y_{2}(t) & \ldots & D_{\beta }^{m+1} y_{m+1}(t) \end{matrix} \right \vert ,\quad j=1, \\& R_{jk}= \left \vert \begin{matrix} D_{\beta }y_{1}(t) & \ldots & D_{\beta }y_{m}(t) \\ D_{\beta }^{2} y_{1}(t)& \ldots & D_{\beta }^{2} y_{m}(t) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{m-k} y_{1}(t) & \ldots & D_{\beta }^{m-k} y_{m}(t) \\ D_{\beta }^{m-k+2} y_{1}(t) & \ldots & D_{\beta }^{m-k+2} y_{m}(t) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{m+1} y_{1}(t) & \ldots & D_{\beta }^{m+1} y_{m}(t) \end{matrix} \right \vert ,\quad j=m+1. \end{aligned}$$

It follows that

$$\begin{aligned} D_{\beta } W_{\beta }(y_{1},\ldots,y_{m+1}) =& \sum_{j=1}^{m+1}(-1)^{j+1}\bigl[y _{j}(t)-\bigl(t-\beta (t)\bigr)D_{\beta }y_{j}(t) \bigr] \\ &{} \times \sum_{k=1}^{m}(-1)^{k-1} \bigl(t- \beta (t)\bigr)^{k-1}R_{jk} \\ =&\sum_{k=1}^{m}(-1)^{k-1} \bigl(t-\beta (t)\bigr)^{k-1}\sum_{j=1}^{m+1}(-1)^{j+1}y _{j}(t)R_{jk} \\ &{}+ \sum_{k=1}^{m}(-1)^{k} \bigl(t-\beta (t)\bigr)^{k}\sum_{j=1}^{m+1}(-1)^{j+1} D_{\beta }y_{j}(t)R_{jk} \\ =&\sum_{k=1}^{m}(-1)^{k-1} \bigl(t-\beta (t)\bigr)^{k-1}M(k)+ \sum_{k=1}^{m}(-1)^{k} \bigl(t-\beta (t)\bigr)^{k}L(k), \end{aligned}$$
(3.22)

where

$$\begin{aligned}& M(k)=\sum_{j=1}^{m+1}(-1)^{j+1}y_{j}(t)R_{jk}= \left \vert \begin{matrix} y_{1}(t) & \ldots & y_{m+1}(t) \\ D_{\beta }y_{1}(t) & \ldots & D_{\beta }y_{m+1}(t) \\ D_{\beta }^{2} y_{1}(t)& \ldots & D_{\beta }^{2} y_{m+1}(t) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{m-k} y_{1}(t) & \ldots & D_{\beta }^{m-k} y_{m+1}(t) \\ D_{\beta }^{m-k+2} y_{1}(t) & \ldots & D_{\beta }^{m-k+2} y_{m+1}(t) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{m+1} y_{1}(t) & \ldots & D_{\beta }^{m+1} y_{m+1}(t) \end{matrix} \right \vert , \end{aligned}$$
(3.23)
$$\begin{aligned}& L(k)=\sum_{j=1}^{m+1}(-1)^{j+1} D_{\beta }y_{j}(t)R_{jk}=\textstyle\begin{cases} 0,& \mbox{if } (k=1,\ldots,m-1), \\ \left \vert {\scriptsize\begin{matrix}{} D_{\beta }y_{1}(t) & \ldots & D_{\beta }y_{m+1}(t) \cr D_{\beta }^{2} y_{1}(t)& \ldots & D_{\beta }^{2} y_{m+1}(t) \cr \vdots & \ddots & \vdots \cr D_{\beta }^{m+1} y_{1}(t) & \ldots & D_{\beta }^{m+1} y_{m+1}(t) \end{matrix}} \right \vert ,& \mbox{if } k=m. \end{cases}\displaystyle \end{aligned}$$
(3.24)

Using relations (3.23) and (3.24) and substituting in (3.22), we obtain relation (3.21) at \(n = m+1\). Since each \(y_{j}\) is a solution of equation (3.2), we have \(D_{\beta }^{n}y_{j}(t)=-\sum_{i=1}^{n} (a_{i}(t)/a_{0}(t) )D _{\beta }^{n-i}y_{j}(t)\), and it follows that

$$\begin{aligned} D_{\beta }W_{\beta }(t) =&\sum_{k=1}^{n}(-1)^{k-1} \bigl(t-\beta (t)\bigr)^{k-1} \biggl(\frac{-a_{k}(t)}{a_{0}(t)} \biggr) \left \vert \begin{matrix} y_{1}(t) & \ldots & y_{n}(t) \\ D_{\beta }y_{1}(t) & \ldots & D_{\beta }y_{n}(t) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{n-k-1} y_{1}(t)& \ldots & D_{\beta }^{n-k-1} y_{n}(t) \\ D_{\beta }^{n-k+1} y_{1}(t) & \ldots & D_{\beta }^{n-k+1} y_{n}(t) \\ \vdots & \ddots & \vdots \\ D_{\beta }^{n-1} y_{1}(t) & \ldots & D_{\beta }^{n-1} y_{n}(t) \\ D_{\beta }^{n-k} y_{1}(t) & \ldots & D_{\beta }^{n-k} y_{n}(t) \end{matrix} \right \vert \\ =&\sum_{k=1}^{n}(-1)^{2(k-1)} \bigl(t-\beta (t) \bigr)^{k-1} \biggl(\frac{-a _{k}(t)}{a_{0}(t)} \biggr)W_{\beta }(t) \\ =&-\sum_{k=0}^{n-1} \bigl(t-\beta (t) \bigr)^{k} \biggl(\frac{a_{k+1}(t)}{a _{0}(t)} \biggr)W_{\beta }(t) =-P(t)W_{\beta }(t), \end{aligned}$$

which is the desired result. □
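
As a quick numerical sanity check of relation (3.20), the following Python sketch propagates two solutions of \(D_{\beta }^{2}y-3D_{\beta }y-4y=0\) along the orbit \(t,\beta (t),\beta ^{2}(t),\ldots \) and verifies \(W_{\beta }(\beta (t))= [1+(t-\beta (t))P(t) ]W_{\beta }(t)\). The choice \(\beta (t)=qt+\omega \) (a Hahn-type function with fixed point \(s_{0}=\omega /(1-q)\)) and all numerical values are illustrative assumptions, not part of the paper.

```python
# Numerical sanity check of Theorem 3.18 for a concrete beta.
# Assumed setup: beta(t) = q*t + w with 0 < q < 1, fixed point s0 = w/(1-q),
# and the test equation D_b^2 y - 3 D_b y - 4 y = 0 (so a0=1, a1=-3, a2=-4).
q, w = 0.5, 0.3
beta = lambda t: q * t + w

def orbit_values(y0, dy0, t0, n):
    """Propagate y along t0, beta(t0), beta^2(t0), ... by solving the
    equation D_b^2 y = 3 D_b y + 4 y for y at the next orbit point."""
    ts = [t0]
    for _ in range(n + 1):
        ts.append(beta(ts[-1]))
    ys = [y0, y0 + (ts[1] - ts[0]) * dy0]      # y(t0), y(beta(t0))
    for k in range(n - 1):
        h = ts[k + 1] - ts[k]
        dy = (ys[k + 1] - ys[k]) / h           # D_b y at ts[k]
        d2y = 3 * dy + 4 * ys[k]               # the equation at ts[k]
        dy_next = d2y * h + dy                 # D_b y at beta(ts[k])
        ys.append(ys[k + 1] + (ts[k + 2] - ts[k + 1]) * dy_next)
    return ts, ys

t0, n = 2.0, 8
ts, y1 = orbit_values(1.0, 0.0, t0, n)
_,  y2 = orbit_values(0.0, 1.0, t0, n)

def wronskian(k):
    h = ts[k + 1] - ts[k]
    d1 = (y1[k + 1] - y1[k]) / h
    d2 = (y2[k + 1] - y2[k]) / h
    return y1[k] * d2 - y2[k] * d1             # W_b(y1, y2) at ts[k]

P = lambda t: -3.0 - 4.0 * (t - beta(t))       # P(t) for a1 = -3, a2 = -4

# Check W(beta(t)) = [1 + (t - beta(t)) P(t)] W(t) along the orbit:
for k in range(n - 2):
    lhs = wronskian(k + 1)
    rhs = (1 + (ts[k] - ts[k + 1]) * P(ts[k])) * wronskian(k)
    assert abs(lhs - rhs) < 1e-9
```

By construction the two propagated sequences satisfy the equation exactly at each orbit point, so the identity holds to machine precision.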

The following theorem gives us Liouville’s formula for β-difference equations.

Theorem 3.19

Assume that \((\beta (t)-t)P(t)\neq 1\), \(t\in J\). Then the β-Wronskian of any set of solutions \(\{y_{i}(t)\}_{i=1}^{n}\), valid in J, is given by

$$ W_{\beta }(t)=\frac{W_{\beta }(s_{0})}{\prod_{k=0}^{\infty } [1+ P( \beta^{k}(t)) (\beta^{k} (t)-\beta^{k+1}(t) ) ]}, \quad t\in J. $$
(3.25)

Proof

Relation (3.20) implies that

$$ W_{\beta }\bigl(\beta (t)\bigr)= \bigl[1+\bigl(t-\beta (t)\bigr)P(t) \bigr]W_{\beta }(t),\quad t \in J\backslash \{s_{0}\}. $$

Hence,

$$\begin{aligned} W_{\beta }(t) =&\frac{W_{\beta }(\beta (t))}{1+(t-\beta (t))P(t)} \\ =&\frac{W_{\beta }(\beta^{m}(t))}{\prod_{k=0}^{m-1} [1+ P(\beta ^{k}(t)) (\beta^{k} (t) - \beta^{k+1}(t) ) ]},\quad m \in \mathbb{N}. \end{aligned}$$

Letting \(m\rightarrow \infty \) and using that \(\beta^{m}(t)\rightarrow s_{0}\) together with the continuity of \(W_{\beta }\) at \(s_{0}\), we get

$$ W_{\beta }(t)=\frac{W_{\beta }(s_{0})}{\prod_{k=0}^{\infty } [1+ P( \beta^{k}(t)) (\beta^{k} (t) - \beta^{k+1}(t) ) ]},\quad t\in J. $$

 □

Example 3.20

We calculate the β-Wronskian of the β-difference equation

$$ D_{\beta }^{2}y(t)+y(t)=0. $$
(3.26)

The functions \(y_{1}(t)=\cos_{1,\beta }(t)\) and \(y_{2}(t)=\sin_{1, \beta }(t)\) are solutions of equation (3.26) subject to the initial conditions \(y_{1}(s_{0})=1\), \(D_{\beta }y_{1}(s_{0})=0\) and \(y _{2}(s_{0})=0\), \(D_{\beta }y_{2}(s_{0})=1\), respectively. Here, \(P(t)=t-\beta (t)\), so \((\beta (t)- t)P(t)=-(\beta (t)-t)^{2}\leq 0 \neq 1\) for all \(t\in J\). We have

$$ W_{\beta }(s_{0})= \left \vert \begin{matrix} \cos_{1,\beta }(s_{0}) & \sin_{1,\beta }(s_{0}) \\ -\sin_{1,\beta }(s_{0})& \cos_{1,\beta }(s_{0}) \end{matrix} \right \vert =\left \vert \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \right \vert =1. $$

Therefore, \(W_{\beta }(t)=\frac{1}{\prod_{k=0}^{\infty } [1+ (\beta^{k}(t)-\beta ^{k+1}(t) )^{2} ]}\).
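
The convergence of this infinite product can be observed numerically. The sketch below truncates the product under the illustrative assumption \(\beta (t)=qt+\omega \) (so that \(\beta ^{k}(t)\rightarrow s_{0}\) geometrically) and checks that the truncations stabilize and that \(W_{\beta }(s_{0})=1\).

```python
# Truncated evaluation of W_beta(t) from Example 3.20 with an assumed
# Hahn-type beta(t) = q*t + w; s0 = w/(1-q) is the fixed point.
q, w = 0.5, 0.3
s0 = w / (1 - q)
beta = lambda t: q * t + w

def w_beta(t, terms=200):
    """1 / prod_{k<terms} [1 + (beta^k(t) - beta^{k+1}(t))^2]."""
    prod, x = 1.0, t
    for _ in range(terms):
        prod *= 1.0 + (x - beta(x)) ** 2
        x = beta(x)
    return 1.0 / prod

# beta^k(t) -> s0 geometrically, so the product converges fast:
assert abs(w_beta(2.0, 50) - w_beta(2.0, 200)) < 1e-12
# At the fixed point every factor is 1, recovering W_beta(s0) = 1:
assert abs(w_beta(s0) - 1.0) < 1e-12
```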

The following corollary can be deduced directly from Theorem 3.19.

Corollary 3.21

Let \(\{y_{i}\}_{i=1}^{n}\) be a set of solutions of equation (3.2) in J. Then one of the following two possibilities holds:

(i) \(W_{\beta }(t)\neq 0\) in J if and only if \(\{y_{i} \} _{i=1}^{n}\) is a fundamental set of equation (3.2) valid in J;

(ii) \(W_{\beta }(t)=0\) in J if and only if \(\{y_{i}\}_{i=1} ^{n}\) is not a fundamental set of equation (3.2) valid in J.

3.4 Non-homogeneous linear β-difference equations

The nth-order non-homogeneous linear β-difference equation has the form

$$ a_{0}(t)D_{\beta }^{n}y(t)+a_{1}(t)D_{\beta }^{n-1}y(t)+ \cdots +a_{n-1}(t)D _{\beta }y(t)+a_{n}(t)y(t)=b(t), $$
(3.27)

where the coefficients \(a_{j}(t)\), \(0\leq j\leq n\), and \(b(t)\) are assumed to satisfy the conditions of Corollary 3.3. We may write this as

$$ L_{n}y=b(t), $$
(3.28)

where, as before, \(L_{n}=a_{0}(t)D_{\beta }^{n}+a_{1}(t)D_{\beta } ^{n-1}+\cdots +a_{n-1}(t)D_{\beta }+a_{n}(t)\).

As in the theory of differential equations, if \(y_{1}(t)\) and \(y_{2}(t)\) are two solutions of the non-homogeneous equation (3.28), then \(y_{1}- y_{2}\) is a solution of the corresponding homogeneous equation (3.2). Also, by Theorem 3.11, if \(y_{1}(t),\ldots,y _{n}(t)\) form a fundamental set for equation (3.2) and \(\varphi (t)\) is a particular solution of equation (3.27), then for any solution of equation (3.27) there are constants \(c_{1},\ldots,c_{n}\) such that

$$ y (t)=\varphi (t)+c_{1}y_{1}(t)+\cdots +c_{n}y_{n}(t). $$
(3.29)

Therefore, if we can find any particular solution \(\varphi (t)\) of equation (3.27), then (3.29) gives a general formula for all solutions of equation (3.27).

Theorem 3.22

Let \(\varphi_{i}\) be a particular solution of \(L_{n}y=b_{i}(t)\), \(i=1, \ldots,m\). Then \(\sum_{i=1}^{m}\zeta_{i}\varphi_{i}\) is a particular solution of the equation \(L_{n}y=\sum_{i=1}^{m}\zeta_{i}b_{i}(t)\), where \(\zeta_{1},\ldots,\zeta_{m}\) are constants.

Proof

The proof is straightforward. □

3.4.1 Method of undetermined coefficients

We illustrate the method of undetermined coefficients by simple examples in the case when the coefficients \(a_{j}\) (\(0\leq j \leq n \)) of the non-homogeneous linear β-difference equation (3.27) are constants.

Example 3.23

Find a particular solution of

$$ D_{\beta }^{2}y(t)-3D_{\beta }y(t)-4y(t)=3e_{2,\beta }(t). $$
(3.30)

Assume that

$$ \varphi (t)=\zeta e_{2,\beta }(t), $$
(3.31)

where the coefficient ζ is a constant to be determined. To find ζ, we calculate

$$ D_{\beta }\varphi (t)=2\zeta e_{2,\beta }(t),\qquad D_{\beta }^{2}\varphi (t)=4\zeta e_{2,\beta }(t) $$
(3.32)

Substituting (3.31) and (3.32) into (3.30) yields \(-6\zeta e_{2,\beta }(t)=3e_{2,\beta }(t)\), so \(\zeta =-1/2\). Thus a particular solution is

$$ \varphi (t)=-1/2e_{2,\beta }(t). $$
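
The computation in Example 3.23 can be checked numerically. The sketch below again takes the illustrative Hahn-type choice \(\beta (t)=qt+\omega \) (an assumption, not part of the paper) and realizes \(e_{2,\beta }\) through the product obtained by iterating \(e(\beta (t))= [1+2(\beta (t)-t) ]e(t)\), \(e(s_{0})=1\), which is equivalent to \(D_{\beta }e=2e\); the residual of (3.30) at a sample point then vanishes to machine precision.

```python
# Hedged numerical check of Example 3.23 with an assumed beta(t) = q*t + w.
q, w = 0.5, 0.3
beta = lambda t: q * t + w

def e_beta(z, t, terms=200):
    """beta-exponential from iterating e(beta(t)) = [1 + z(beta(t)-t)] e(t),
    normalized by e(s0) = 1 (truncated infinite product)."""
    prod, x = 1.0, t
    for _ in range(terms):
        prod *= 1.0 + z * (beta(x) - x)
        x = beta(x)
    return 1.0 / prod

def D(f, t):
    """The operator D_beta away from the fixed point."""
    return (f(beta(t)) - f(t)) / (beta(t) - t)

phi = lambda t: -0.5 * e_beta(2.0, t)        # candidate particular solution
t = 2.0
residual = (D(lambda s: D(phi, s), t) - 3 * D(phi, t) - 4 * phi(t)
            - 3 * e_beta(2.0, t))            # LHS minus RHS of (3.30)
assert abs(residual) < 1e-9
```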

For the different cases of the roots of the characteristic equation of a second-order linear homogeneous β-difference equation with constant coefficients, used in the following example, we refer the reader to [6].

Example 3.24

Find the general solution of

$$ D_{\beta }^{2}y-3D_{\beta }y-4y=2\sin_{1,\beta }(t). $$
(3.33)

The corresponding homogeneous equation of (3.33) is

$$ D_{\beta }^{2}y-3D_{\beta }y-4y=0. $$
(3.34)

Then the characteristic equation of (3.34) is

$$ \lambda^{2}-3\lambda -4=(\lambda -4) (\lambda +1)=0. $$
(3.35)

Therefore,

$$ y_{h}(t)=c_{1}e_{4,\beta }(t)+c_{2}e_{-1,\beta }(t). $$

Now, assume that

$$ \varphi (t)=\zeta_{1}\sin_{1,\beta }(t)+\zeta_{2} \cos_{1,\beta }(t), $$
(3.36)

where \(\zeta_{1}\) and \(\zeta_{2}\) are to be determined. Then

$$\begin{aligned}& \begin{aligned}D_{\beta }\varphi (t) =\zeta_{1} \cos_{1,\beta }(t)-\zeta_{2} \sin_{1,\beta }(t), \\ D_{\beta }^{2}\varphi (t) =-\zeta_{1} \sin_{1,\beta }(t)-\zeta_{2} \cos_{1,\beta }(t). \end{aligned} \end{aligned}$$
(3.37)

Substituting (3.36) and (3.37) into (3.33) and equating the coefficients of \(\sin_{1,\beta }(t)\) and \(\cos_{1,\beta }(t)\), we get \(\zeta_{1}=-5/17\) and \(\zeta_{2}=3/17\). Hence a particular solution is

$$ \varphi (t)=-5/17\sin_{1,\beta }(t)+3/17\cos_{1,\beta }(t). $$

Then the general solution of (3.33) is

$$ y(t)=c_{1}e_{4,\beta }(t)+c_{2}e_{-1,\beta }(t)-5/17 \sin_{1,\beta }(t)+3/17\cos _{1,\beta }(t). $$
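
For Example 3.24, equating the coefficients of \(\sin_{1,\beta }(t)\) and \(\cos_{1,\beta }(t)\) after substituting (3.36), (3.37) into (3.33) gives the linear system \(-5\zeta_{1}+3\zeta_{2}=2\), \(-3\zeta_{1}-5\zeta_{2}=0\). A small exact-arithmetic sketch solving it by Cramer's rule:

```python
# Coefficient-matching system of Example 3.24, solved exactly.
#   -5*z1 + 3*z2 = 2   (coefficient of sin)
#   -3*z1 - 5*z2 = 0   (coefficient of cos)
from fractions import Fraction as F

a, b = F(-5), F(3)
c, d = F(-3), F(-5)
r1, r2 = F(2), F(0)

det = a * d - b * c               # = 25 + 9 = 34
z1 = (r1 * d - b * r2) / det      # = -10/34 = -5/17
z2 = (a * r2 - r1 * c) / det      # =   6/34 =  3/17
assert (z1, z2) == (F(-5, 17), F(3, 17))
```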

In the following example, we show the solution in the case when \(b(t)\) is a linear combination of exponential and trigonometric functions.

Example 3.25

Find the general solution of

$$ D_{\beta }^{2}y-2D_{\beta }y-3y=2e_{1,\beta }(t)-10 \sin_{1,\beta }(t). $$
(3.38)

The corresponding homogeneous equation of (3.38) has the solution

$$ {y}_{h}(t)=c_{1}e_{3,\beta }(t)+c_{2}e_{-1,\beta }(t). $$

The non-homogeneous term \(2e_{1,\beta }(t)-10 \sin_{1,\beta }(t)\) is a linear combination of the two functions \(e_{1,\beta }(t)\) and \(\sin_{1,\beta }(t)\).

Let

$$\begin{aligned} \varphi (t)=\zeta_{1}e_{1,\beta }(t)+ \zeta_{2} \sin_{1,\beta }(t)+\zeta_{3} \cos_{1,\beta }(t) \end{aligned}$$
(3.39)

be a particular solution of (3.38), where \(\zeta_{1}\), \(\zeta_{2}\), \(\zeta_{3}\) are undetermined coefficients. Then

$$\begin{aligned}& \begin{aligned}D_{\beta }\varphi (t) =\zeta_{1}e_{1,\beta }(t)+\zeta_{2} \cos_{1,\beta }(t)-\zeta _{3}\sin_{1,\beta }(t), \\ D_{\beta }^{2}\varphi (t) =\zeta_{1}e_{1,\beta }(t)-\zeta_{2} \sin_{1,\beta }(t)-\zeta _{3}\cos_{1,\beta }(t). \end{aligned} \end{aligned}$$
(3.40)

Substituting (3.39) and (3.40) into (3.38), we get \(\zeta_{1}=-1/2\), \(\zeta_{2}=2\), \(\zeta_{3}=-1\); hence the particular solution is \(\varphi (t)=-1/2e_{1,\beta }(t)+2\sin_{1,\beta }(t)- \cos_{1,\beta }(t)\). Thus the general solution of (3.38) is

$$ y(t)= c_{1}e_{3,\beta }(t)+c_{2}e_{-1,\beta }(t)-1/2e_{1,\beta }(t)+2 \sin_{1,\beta }(t)-\cos_{1,\beta }(t). $$

Example 3.26

Find the general solution of

$$ D_{\beta }^{2}y-3D_{\beta }y+2y=e_{3,\beta }(t) \sin_{4,\beta }(t). $$
(3.41)

The corresponding homogeneous equation of (3.41) has the solution

$$ y_{h}(t)=c_{1}e_{2,\beta }(t)+c_{2}e_{1,\beta }(t). $$

Let

$$\begin{aligned} \varphi (t)=Ae_{3,\beta }(t)\sin_{4,\beta }(t)+ Be_{3,\beta }(t) \cos_{4,\beta }(t) \end{aligned}$$
(3.42)

be a particular solution of (3.41), where A and B are constants. Then

$$\begin{aligned}& D_{\beta }\varphi (t) =3Ae_{3,\beta }(t)\sin_{4,\beta }(t)+4A e_{3, \beta }\bigl(\beta (t)\bigr)\cos_{4,\beta }(t) \\& \hphantom{D_{\beta }\varphi (t) =}{}-3Be_{3,\beta }(t)\cos_{4,\beta }(t)-4Be_{3,\beta } \bigl(\beta (t)\bigr) \sin_{4,\beta }(t), \end{aligned}$$
(3.43)
$$\begin{aligned}& D_{\beta }^{2}\varphi (t) =9Ae_{3,\beta }(t) \sin_{4,\beta }(t)+12A e _{3,\beta }\bigl(\beta (t)\bigr) \cos_{4,\beta }(t) \\& \hphantom{D_{\beta }^{2}\varphi (t) =}{}+12A e_{3,\beta }\bigl(\beta (t)\bigr)\cos_{4,\beta } \bigl(\beta (t)\bigr)-16A e_{3, \beta }\bigl(\beta (t)\bigr) \sin_{4,\beta }(t) \\& \hphantom{D_{\beta }^{2}\varphi (t) =}{} +9Be_{3,\beta }(t)\cos_{4,\beta }(t)-12Be_{3,\beta } \bigl(\beta (t)\bigr) \sin_{4,\beta }(t) \\& \hphantom{D_{\beta }^{2}\varphi (t) =}{} -12Be_{3,\beta }\bigl(\beta (t)\bigr)\sin_{4,\beta } \bigl(\beta (t)\bigr)-16Be_{3, \beta }\bigl(\beta (t)\bigr) \cos_{4,\beta }(t). \end{aligned}$$
(3.44)

Substituting (3.42), (3.43), and (3.44) into (3.41), we get \(A=\frac{1}{2}\) and \(B=0\). Then the particular solution is \(\varphi (t)=1/2e_{3,\beta }(t) \sin_{4,\beta }(t)\). Thus the general solution of (3.41) is

$$ y(t)= c_{1}e_{2,\beta }(t)+c_{2}e_{1,\beta }(t)+1/2e_{3,\beta }(t) \sin_{4,\beta }(t). $$

3.4.2 Method of variation of parameters

We use the method of variation of parameters to obtain a particular solution \(\varphi (t)\) of the non-homogeneous linear β-difference equation (3.27); it applies whether the coefficients \(a_{j}\) (\(0\leq j \leq n \)) are functions or constants. The idea is to replace the constants \(c_{r}\) in relation (3.29) by functions \(\zeta_{r}(t)\). Hence, we seek a solution of the form

$$ \varphi (t)=\zeta_{1}(t)y_{1} (t)+\cdots + \zeta_{n}(t)y_{n} (t). $$
(3.45)

Our objective is to determine the functions \(\zeta_{r} (t)\). We have

$$ D_{\beta }^{i-1}\varphi (t)=\sum_{j=1}^{n} \zeta_{j}(t)D_{\beta }^{i-1}y _{j}(t), \quad 1\leq i\leq n, $$
(3.46)

provided that

$$ \sum_{j=1}^{n}D_{\beta } \zeta_{j}(t)D_{\beta }^{i-1}y_{j}\bigl(\beta (t)\bigr)=0,\quad 1\leq i\leq n-1. $$
(3.47)

Putting \(i = n\) in (3.46) and operating on it by \(D_{\beta }\), we obtain

$$ D_{\beta }^{n}\varphi (t)= \sum_{j=1}^{n} \bigl[\zeta_{j}(t)D_{\beta }^{n}y _{j}(t)+D_{\beta } \zeta_{j}(t)D_{\beta }^{n-1}y_{j}\bigl(\beta (t)\bigr)\bigr]. $$
(3.48)

Since \(\varphi (t)\) satisfies equation (3.27), it follows that

$$ a_{0}(t) D_{\beta }^{n}\varphi (t)+a_{1}(t)D_{\beta }^{n-1} \varphi (t)+ \cdots +a_{n}(t)\varphi (t)=b(t). $$
(3.49)

Substituting (3.46) and (3.48) into (3.49) and using that each \(y_{j}\) satisfies equation (3.2), we obtain

$$ \sum_{j=1}^{n}D_{\beta } \zeta_{j}(t)D_{\beta }^{n-1}y_{j}\bigl(\beta (t)\bigr)=\frac{b(t)}{a _{0}(t)}. $$

Thus, we get the following system:

$$\begin{aligned}& \begin{aligned} &D_{\beta }\zeta_{1}(t)y_{1}\bigl( \beta (t)\bigr)+\cdots + D_{\beta }\zeta_{n}(t)y _{n} \bigl(\beta (t)\bigr)=0, \\ &\vdots \\ &D_{\beta }\zeta_{1}(t)D_{\beta }^{n-2}y_{1} \bigl(\beta (t)\bigr)+\cdots +D_{ \beta }\zeta_{n}(t)D_{\beta }^{n-2}y_{n} \bigl(\beta (t)\bigr)=0, \\ &D_{\beta }\zeta_{1}(t)D_{\beta }^{n-1}y_{1} \bigl(\beta (t)\bigr)+\cdots +D_{ \beta }\zeta_{n}(t)D_{\beta }^{n-1}y_{n} \bigl(\beta (t)\bigr)= \frac{b(t)}{a_{0}(t)}. \end{aligned} \end{aligned}$$
(3.50)

Consequently,

$$ D_{\beta }\zeta _{r}(t) =\frac{W_{r}(\beta (t))}{W_{\beta }(\beta (t))}\times \frac{b(t)}{a _{0}(t)}, \quad t \in I, $$

where \(1\leq r\leq n\) and \(W_{r}(\beta (t))\) is the determinant obtained from \(W_{\beta }(\beta (t))\) by replacing the rth column by \((0,\ldots,0,1)\). It follows that

$$ \zeta_{r}(t)= \int_{s_{0}}^{t}\frac{W_{r}(\beta (\tau))}{W_{\beta }( \beta (\tau))}\times \frac{b(\tau)}{a_{0}(\tau)}\,d_{\beta }\tau,\quad r=1,\ldots,n. $$
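
The functions \(\zeta_{r}\) can be evaluated numerically once the β-integral is available. The sketch below assumes the Nörlund/Jackson-type form of the β-integral from [10], \(\int_{s_{0}}^{t}f(\tau)\,d_{\beta }\tau =\sum_{k=0}^{\infty }(\beta ^{k}(t)-\beta ^{k+1}(t))f(\beta ^{k}(t))\), together with the illustrative choice \(\beta (t)=qt+\omega \), and checks the fundamental-theorem identity \(D_{\beta }\int_{s_{0}}^{t}f(\tau)\,d_{\beta }\tau =f(t)\) on which the construction of \(\zeta_{r}\) relies.

```python
# beta-integral sketch under the assumed Norlund/Jackson-type definition
# and an assumed Hahn-type beta(t) = q*t + w with fixed point s0.
q, w = 0.5, 0.3
s0 = w / (1 - q)
beta = lambda t: q * t + w

def beta_integral(f, t, terms=200):
    """Truncated sum_k (beta^k(t) - beta^{k+1}(t)) f(beta^k(t))."""
    total, x = 0.0, t
    for _ in range(terms):
        total += (x - beta(x)) * f(x)
        x = beta(x)
    return total

def D(f, t):
    """The operator D_beta away from the fixed point."""
    return (f(beta(t)) - f(t)) / (beta(t) - t)

# Fundamental-theorem check: D_beta of the integral returns the integrand.
f = lambda t: t ** 2 + 1.0
t = 1.7
assert abs(D(lambda s: beta_integral(f, s), t) - f(t)) < 1e-9
```

The identity holds exactly by telescoping (the sums for \(t\) and \(\beta (t)\) differ by the single term \((t-\beta (t))f(t)\)), so the numerical residual is pure truncation and roundoff.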

Example 3.27

Consider the equation

$$ D_{\beta }^{2}y(t)+z^{2}y(t)=b(t), $$
(3.51)

where \(z\in \mathbb{C}\setminus \{0\}\). It is known that \(\cos_{z, \beta } (t)\) and \(\sin_{z,\beta }(t)\) are solutions of the corresponding homogeneous equation of (3.51). We can easily show that

$$\begin{aligned} \varphi (t)=\frac{1}{z} \biggl[\sin_{z,\beta }(t) \int_{s_{0}}^{t}b( \tau)\operatorname{Cos}_{z,\beta } \bigl( \beta (\tau)\bigr)\,d_{\beta }\tau -\cos_{z,\beta } (t) \int_{s_{0}}^{t}b(\tau)\operatorname{Sin}_{z,\beta }\bigl(\beta (\tau)\bigr)\,d_{\beta }\tau \biggr]. \end{aligned}$$

It follows that every solution of equation (3.51) has the form

$$\begin{aligned} y(t) =&c_{1}\cos_{z,\beta }(t)+c_{2} \sin_{z,\beta }(t) \\ &{}+\frac{1}{z} \biggl[\sin_{z,\beta }(t) \int_{s_{0}}^{t}b(\tau)\operatorname{Cos}_{z, \beta } \bigl(\beta (\tau)\bigr)\,d_{\beta }\tau -\cos_{z,\beta } (t) \int_{s_{0}} ^{t}b(\tau)\operatorname{Sin}_{z,\beta }\bigl(\beta (\tau)\bigr)\,d_{\beta }\tau \biggr]. \end{aligned}$$

3.4.3 Annihilator method

In this section, we use the annihilator method to obtain a particular solution of the non-homogeneous linear β-difference equation (3.27) when the coefficients \(a_{j}\) (\(0\leq j \leq n \)) are constants.

Definition 3.28

We say that \(f:I\rightarrow \mathbb{C}\) can be annihilated if there exists an operator of the form

$$ L(D)=\rho_{n}D_{\beta }^{n}+\rho_{n-1}D_{\beta }^{n-1}+ \cdots +\rho _{0}\mathcal{I} $$

such that \(L(D)f(t)=0\), \(t\in I\), where \(\rho_{i}\), \(0 \leq i\leq n\), are constants, not all zero, and \(\mathcal{I}\) is the identity operator.

Example 3.29

Since \((D_{\beta }-4\mathcal{I})e_{4,\beta }(t)=0\), \(D_{\beta }-4\mathcal{I}\) is an annihilator for \(e_{4,\beta }(t)\).

Table 1 indicates a list of some functions and their annihilators.

Table 1 A list of some functions and their annihilators

Example 3.30

Consider the equation

$$ D_{\beta }^{2}y(t)-4D_{\beta }y(t)+3y(t)=e_{5,\beta }(t). $$
(3.52)

Equation (3.52) can be rewritten in the form

$$ (D_{\beta }-3\mathcal{I}) (D_{\beta }-\mathcal{I})y(t)= e_{5,\beta }(t). $$

Applying the annihilator \((D_{\beta }-5\mathcal{I})\) to both sides and using that constant-coefficient operators commute, we get that if \(y(t)\) is a solution of (3.52), then \(y(t)\) satisfies

$$ (D_{\beta }-3\mathcal{I}) (D_{\beta }-\mathcal{I}) (D_{\beta }-5 \mathcal{I})y(t)=0. $$

Hence,

$$ y(t)=c_{1}e_{3,\beta }(t)+c_{2}e_{1,\beta }(t)+c_{3}e_{5,\beta }(t). $$

One can see that \(\varphi (t)=(1/8)e_{5,\beta }(t)\) is a solution of equation (3.52): since \(D_{\beta }e_{5,\beta }(t)=5e_{5,\beta }(t)\), substituting \(\varphi (t)=\zeta e_{5,\beta }(t)\) into (3.52) gives \(\zeta (25-20+3)e_{5,\beta }(t)=e_{5,\beta }(t)\), i.e., \(\zeta =1/8\). Therefore, the general solution of equation (3.52) has the following form:

$$ y(t)=c_{1}e_{3,\beta }(t)+c_{2}e_{1,\beta }(t)+(1/8)e_{5,\beta }(t). $$
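
The undetermined coefficient \(1/8\) can also be read off from the characteristic polynomial: since \(D_{\beta }e_{z,\beta }=ze_{z,\beta }\), a constant-coefficient operator \(p(D_{\beta })\) acts on \(e_{z,\beta }\) as multiplication by \(p(z)\). A minimal sketch:

```python
# For (3.52), p(lam) = lam^2 - 4*lam + 3 and the RHS is e_{5,beta}, so the
# particular solution is (1/p(5)) e_{5,beta} provided p(5) != 0.
p = lambda lam: lam ** 2 - 4 * lam + 3
assert p(3) == 0 and p(1) == 0   # roots give the homogeneous solutions e_3, e_1
assert p(5) == 8                 # so phi(t) = (1/8) e_{5,beta}(t)
```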

4 Conclusion

In this paper, the sufficient conditions for the existence and uniqueness of solutions of the β-Cauchy problem were given. Also, a fundamental set of solutions for the homogeneous linear β-difference equations when the coefficients \(a_{j}\) (\(0\leq j \leq n\)) are constants was constructed. Moreover, the β-Wronskian and its properties were introduced. Finally, the undetermined coefficients, the variation of parameters, and the annihilator methods for the non-homogeneous case were presented.