1 Introduction

Integral and differential inequalities have proved to be important tools in the study of the differential and integral equations that arise in nature or are constructed by mathematicians (see [1,2,3]). Whenever the boundedness, global existence, or stability of solutions of differential and integral equations is discussed, it cannot be denied that the Gronwall–Bellman inequality, the Bihari inequality, and their various generalizations play a critical role in providing explicit bounds for differential, integral, and difference equations. In 1919, Gronwall [4] established his famous inequality for functions r satisfying

$$ 0\leq r(v)\leq \int _{m}^{v} \bigl[lr(u)+k\bigr]\,du, \quad v\in M, $$
(1)

where r is a continuous function defined on the interval \(M = [m, m + n]\) and m, l, k, n are nonnegative constants. Inequality (1) has played a significant part in the study of differential and difference equations. Subsequently, Bellman [5] proved the following linear version of Gronwall’s inequality (1):

$$ r(v)\leq b+ \int _{m}^{v} j(u)r(u)\,du, \quad v\in [m,n], $$
(2)

where r and j are nonnegative continuous functions defined on \([m,n]\) and b is a nonnegative constant. Bellman [5] further generalized (2) by replacing the constant b with a nondecreasing function \(b(v)\):

$$ r(v)\leq b(v)+ \int _{0}^{v} j(u)r(u)\,du, \quad v\in \mathbb{R_{+}}, $$
(3)

where \(r, b, j\in C(\mathbb{R_{+}},\mathbb{R_{+}})\). Since the appearance of Gronwall’s fundamental inequality, numerous mathematicians have studied inequalities of this type and their applications, for which we refer to [6,7,8,9,10,11] and the references cited therein. Ferreira et al. [12] studied the following inequality:

$$ \chi \bigl(r(v)\bigr)\leq b(v)+ \int _{0}^{v} \bigl[j(v,x)\varsigma _{1} \bigl(r(x)\bigr)\varsigma _{2}\bigl(r(x)\bigr)+d(v,x)\varsigma _{1}\bigl(r(x)\bigr)\bigr]\,dx, \quad v\in \mathbb{R_{+}}. $$
(4)

Sometimes the above-mentioned inequalities are not directly applicable in the study of certain nonlinear retarded differential and integral equations. It is therefore desirable to find new estimates in which the non-retarded argument v is replaced by a retarded argument \(\rho (v)\). Closely related to this setting, the following retarded nonlinear integral inequalities were established by El-Owaidy et al. [13]:

$$ r(v)\leq b(v)+ \int _{m}^{v} j(u)r^{\alpha }(u)\,du+ \int _{m}^{\rho (v)} q(u)r ^{\alpha }(u)\,du $$

and

$$ r^{\alpha }(v)\leq b^{\alpha }(v)+ \int _{m}^{v} j(u)r^{\alpha }(u)\,du+ \int _{m}^{\rho (v)} q(u)r^{\beta }(u)\,du. $$

This century has seen significant and productive research on retarded integral inequalities and their applications in various branches of science (see [14, 15]).

In recent years there has been an increasing interest in norm inequalities for evolution equations, in the boundedness of integral operators on function spaces with variable exponents, and in applications of the boundedness properties of singular integral operators with discontinuous coefficients to the regularity theory of partial differential equations [16,17,18].

Differential equations with maxima are a special class of differential equations that involve the maximum of the unknown function. In recent years, several authors have studied integral inequalities with maxima in order to obtain explicit bounds (see [19,20,21,22]). In 2010, Hristova et al. [23] considered the integral inequality with maxima of the form

$$\begin{aligned} \textstyle\begin{cases} r(v)\leq b+\int _{v_{0}}^{v}[j(u)r(u)+d(u)\max_{\phi \in [u-n,u]}r( \phi )]\,du \\ \hphantom{r(v)}\quad{} +\int _{\rho (v_{0})}^{\rho (v)}[g(u)r(u)+a(u)\max_{\phi \in [u-n,u]}r(\phi )]\,du, \quad \text{for } v\in [v_{0},T), \\ r(v)\leq \psi (v),\quad \text{for } v \in [\rho (v_{0})-n,v_{0}], \end{cases}\displaystyle \end{aligned}$$

where \(v_{0}>0\), \(T\geq v_{0}\) (T may equal ∞), \(n>0\) and \(b>0\) are constants, and \(b\leq \max_{v\in [{\rho (v_{0})}-n,{\rho (v_{0})}]}\psi (v)\).

On the other hand, Hilger [24] was the first to develop the calculus of time scales, which unifies the theories of difference and differential equations in a consistent way. Over the past years, many dynamic inequalities have been studied by different authors (see [25,26,27,28,29] and the references therein). Bohner et al. [30] extended (3) to time scales, thereby unifying its continuous and discrete forms:

$$ r(v)\leq b(v)+ \int _{v_{0}}^{v} j(u)r(u)\Delta u, \quad v\in \mathbb{T}, $$

where r and b are right-dense continuous functions, \(j\geq 0\) is a regressive right-dense continuous function, and \(\mathbb{T}\) is a time scale.

Later in 2009, Li [31] obtained the following integral inequality:

$$ r(v)\leq r_{0}+ \int _{v_{0}}^{v} \bigl[j(u)r(u)+p(u) \bigr]\Delta u+ \int _{v_{0}}^{v}f(u) \biggl[ \int _{v_{0}}^{u}g(u,z)r(z)\Delta z \biggr] \Delta u, $$

where \(g(u,z)\geq 0\), \(g^{\Delta }(u,z)\geq 0\) for \(u,z\in \mathbb{T}\) with \(z\leq u\). Recently, Feng et al. [32] extended inequality (4) to time scales:

$$ \begin{aligned} \chi \bigl(r(v)\bigr)&\leq b(v)+ \int _{0}^{v} \bigl[j(v,x)\varsigma _{1} \bigl(r(x)\bigr)\varsigma _{2}\bigl(r(x)\bigr)+d(v,x)\varsigma _{2}\bigl(r(x)\bigr)\bigr]\Delta x \\&\quad{} + \int _{0}^{v} \int _{0} ^{x}a (\xi )\varsigma _{2} \bigl(r(\xi )\bigr)\Delta \xi \Delta x. \end{aligned} $$

To the best of our knowledge, there are not many papers in which nonlinear integral inequalities on time scales involving ‘maxima’ have been established. To fill this gap, and motivated by the work described above, in this article we investigate the following nonlinear integral inequalities on time scales, in which the maxima of the unknown scalar function appear inside the integrals:

$$\begin{aligned}& \textstyle\begin{cases} \varsigma _{1}(r(v)) \leq b+\int _{v_{0}}^{v}j(v,x)\varsigma _{2}(r(x)) \Delta x \\ \hphantom{\varsigma _{1}(r(v))}\quad{} +\int _{\rho ({v_{0}})}^{\rho ({v})}d(v,s)\varsigma _{1}(\max_{\phi \in [\tau x,x]_{\mathbb{T}}}r(\phi ))\circ \rho ^{-1}(x)\bar{ \Delta x}, \quad v\in \mathbb{T}_{0}, \\ \varsigma _{1}(r(v)) \leq \psi (v),\quad v \in [\tau \mu ,v_{0}]_{ \mathbb{T}}, \end{cases}\displaystyle \end{aligned}$$
(5)
$$\begin{aligned}& \textstyle\begin{cases} r(v) \leq b(v)+\int _{v_{0}}^{v}j(v,x)\varsigma _{1}(r(x)) \\ \hphantom{r(v)}\quad{}\times [r^{\beta }(x)+\int _{\rho ({v_{0}})}^{\rho ({v})}d(s) \varsigma _{2}(\max_{\phi \in [\tau \lambda ,\lambda ]_{ \mathbb{T}}}r(\phi ))\circ \rho ^{-1}(\lambda )\bar{\Delta \lambda } ]^{\alpha }\Delta x, \quad v\in \mathbb{T}_{0}, \\ r(v) \leq \psi (v),\quad v \in [\tau \mu ,v_{0}]_{\mathbb{T}}, \end{cases}\displaystyle \end{aligned}$$
(6)
$$\begin{aligned}& \textstyle\begin{cases} r^{\alpha }(v) \leq b^{\alpha }+\int _{v_{0}}^{v}j(v,x)\varsigma _{1}(r(x)) \\ \hphantom{r^{\alpha }(v)}\quad{} \times [\varsigma _{2}(r(x))+\int _{\rho ({v_{0}})}^{\rho ({v})}d(x,s) \varsigma _{2}(\max_{\phi \in [\tau \lambda ,\lambda ]_{ \mathbb{T}}}r(\phi ))\circ \rho ^{-1}(\lambda )\bar{\Delta \lambda } ]\Delta x, \quad v\in \mathbb{T}_{0},\hspace{-20pt} \\ r(v) \leq \psi (v),\quad v \in [\tau \mu ,v_{0}]_{\mathbb{T}}, \end{cases}\displaystyle \end{aligned}$$
(7)
$$\begin{aligned}& \textstyle\begin{cases} \varsigma _{1}(r(v)) \leq b(v)+\int _{v_{0}}^{v}\varsigma _{2}(r(x)) [j(v,x)\varsigma _{1}(\max_{\phi \in [\tau x,x]_{ \mathbb{T}}}r(\phi ))+p(x) ]\Delta x \\ \hphantom{\varsigma _{1}(r(v))}\quad{} +\int _{v_{0}}^{v}\varsigma _{2}(r(x))j(v,x) [\int _{\rho ({v_{0}})} ^{\rho ({v})}d(x,s)\varsigma _{1}(\max_{\phi \in [\tau \lambda ,\lambda ]_{\mathbb{T}}}r(\phi )) \circ \rho ^{-1}(\lambda )\bar{\Delta \lambda } ]\Delta x,\hspace{-20pt} \\ \hphantom{\varsigma _{1}(r(v))}\quad v \in \mathbb{T}_{0}, \\ \varsigma _{1}(r(v)) \leq \psi (v),\quad v \in [\tau \mu ,v_{0}]_{ \mathbb{T}}. \end{cases}\displaystyle \end{aligned}$$
(8)

Inequalities of this kind have many applications, for instance when one wants to study the existence and uniqueness of solutions of differential equations (see [33,34,35]). A distinctive feature of our results is that the maxima are taken over the intervals \([\tau v,v]\), where \(0<\tau <1\), whereas in most previous papers the maxima were taken over intervals \([v-n,v]\) with \(n>0\). At the end of this article, some applications are presented to examine the uniqueness and global existence of solutions for the following class of boundary value problems for nonlinear delay dynamic integral equations:

$$ \bigl(r^{\alpha }\bigr)^{\Delta }(v)= G \biggl(v,x,r(x), \int _{v_{0}}^{v}Z\Bigl(v,x, \max_{\phi \in [\tau x,x]_{\mathbb{T}}}r( \phi )\Bigr)\Delta x \biggr), $$
(9)

provided that

$$\begin{aligned} \textstyle\begin{cases} r(v_{0}) =k, \\ r(v) \leq \psi (v),\quad v \in [\tau \mu ,v_{0}]_{\mathbb{T}}. \end{cases}\displaystyle \end{aligned}$$
(10)

The remaining parts of the paper are organized as follows. In Sect. 2, we present fundamental facts and preliminary lemmas that are key tools for our main results. The main results, together with some remarks, are presented in Sect. 3. The last section is devoted to illustrating applications of the abstract results.

2 Basic concepts and lemmas on time scales

A time scale \(\mathbb{T}\) is a nonempty closed subset of the real line \(\mathbb{R}\). For \(v\in \mathbb{T}\), the forward jump operator \(\sigma :\mathbb{T}\rightarrow \mathbb{R}\) is defined by \(\sigma (v)= \inf \{ n\in \mathbb{T}: n> v \} \), the backward jump operator \(\varsigma :\mathbb{T}\rightarrow \mathbb{R}\) by \(\varsigma (v)=\sup \{ n\in \mathbb{T}: n< v \} \), and the graininess function \(\xi :\mathbb{T}\rightarrow [0,\infty )\) by \(\xi (v)=\sigma (v)-v\). An element \(v\in \mathbb{T}\) is said to be right-dense if \(\sigma (v)=v\), right-scattered if \(\sigma (v)>v\), left-dense if \(\varsigma (v)=v\), and left-scattered if \(\varsigma (v)< v\). If \(\mathbb{T}\) has a left-scattered maximum g, then \(\mathbb{T}^{k}=\mathbb{T}- \{ g \} \); otherwise, \(\mathbb{T}^{k}= \mathbb{T}\). ℜ denotes the set of all regressive and rd-continuous functions, and \(\Re ^{+}= \{ q\in \Re : 1+\xi (v)q(v)>0, v\in \mathbb{T} \} \). The reader is assumed to be familiar with the fundamental notions of the calculus on time scales. For further details on time scale analysis, we refer the reader to the excellent monograph by Bohner [36], which summarizes and organizes much of the time scale calculus.
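
For orientation, it is useful to keep in mind the two standard special cases: for \(\mathbb{T}=\mathbb{R}\) every point is dense, whereas for \(\mathbb{T}=\mathbb{Z}\) every point is scattered, and

$$\begin{aligned}& \mathbb{T}=\mathbb{R}:\quad \sigma (v)=v, \qquad \xi (v)=0, \qquad r^{\Delta }(v)=r^{\prime }(v), \qquad \int _{m}^{n}r(u)\Delta u= \int _{m}^{n}r(u)\,du, \\& \mathbb{T}=\mathbb{Z}:\quad \sigma (v)=v+1, \qquad \xi (v)=1, \qquad r^{\Delta }(v)=r(v+1)-r(v), \qquad \int _{m}^{n}r(u)\Delta u=\sum_{u=m}^{n-1}r(u). \end{aligned}$$

Thus the delta derivative and the delta integral simultaneously cover the ordinary derivative and Riemann integral as well as the forward difference and finite sum.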

Next, we list some essential lemmas on time scales which will be needed in the proofs of this paper.

Lemma 2.1

([26])

Let \(m,n\in \mathbb{T}\) and \(\alpha >1\). Assume that \(r:\mathbb{T}\rightarrow \mathbb{R}\) is delta differentiable at \(v\in \mathbb{T}^{k}\) and is a nonnegative increasing function on \([m,n]_{\mathbb{T}}\). Then

$$ \alpha r^{\alpha -1}(v)r^{\Delta }(v)\leq \bigl(r^{\alpha }(v) \bigr)^{\Delta } \leq \alpha \bigl(r^{\sigma }(v)\bigr)^{\alpha -1}r^{\Delta }(v). $$

Lemma 2.2

([31])

If \(j\in \Re \) and \(v_{0}\in \mathbb{T}\), then the exponential function \(e_{j}(v,v_{0})\) is the unique solution of the following initial value problem:

$$ \textstyle\begin{cases} r^{\Delta }(v)= j(v)r(v), \\ r(v_{0})=1. \end{cases} $$
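
In particular, for \(v\geq v_{0}\) the exponential function has the familiar closed forms

$$ e_{j}(v,v_{0})=\exp \biggl( \int _{v_{0}}^{v}j(u)\,du \biggr) \quad \text{if } \mathbb{T}=\mathbb{R}, \qquad e_{j}(v,v_{0})=\prod_{u=v_{0}}^{v-1}\bigl(1+j(u)\bigr) \quad \text{if } \mathbb{T}=\mathbb{Z}, $$

which indeed solve \(r^{\prime }(v)=j(v)r(v)\) and \(r(v+1)-r(v)=j(v)r(v)\), respectively, with \(r(v_{0})=1\).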

Lemma 2.3

([30])

Assume that \(r:\mathbb{T}\rightarrow \mathbb{R}\) is delta differentiable at \(v\in \mathbb{T}^{k}\). Then

$$ r^{\sigma }(v)=r(v)+\xi (v)r^{\Delta }(v). $$

Lemma 2.4

([30])

Assume that \(r:\mathbb{T}\rightarrow \mathbb{R}\) is delta differentiable at \(v\in \mathbb{T}^{k}\). Then

$$ \bigl(r^{k}\bigr)^{\Delta }(v)= \Biggl\{ \sum _{s=0}^{k-1}r^{s}(v)\bigl[r^{\sigma }(v) \bigr]^{k-1-s} \Biggr\} r^{\Delta }(v). $$
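
For instance, taking \(k=2\) in Lemma 2.4 and using Lemma 2.3 gives

$$ \bigl(r^{2}\bigr)^{\Delta }(v)=\bigl[r(v)+r^{\sigma }(v)\bigr]r^{\Delta }(v)=\bigl[2r(v)+\xi (v)r^{\Delta }(v)\bigr]r^{\Delta }(v), $$

which reduces to the usual rule \((r^{2})^{\prime }=2rr^{\prime }\) when \(\mathbb{T}=\mathbb{R}\).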

Lemma 2.5

([30] Chain rule)

Let \(r:\mathbb{R}\rightarrow \mathbb{R}\) be continuously differentiable, and suppose that \(\varUpsilon :\mathbb{T}\rightarrow \mathbb{R}\) is delta differentiable. Then \(r\circ \varUpsilon : \mathbb{T}\rightarrow \mathbb{R}\) is delta differentiable, and

$$ (r\circ \varUpsilon )^{\Delta }(v)= \biggl\{ \int _{0}^{1}r^{\prime }\bigl( \varUpsilon (v)+h\xi (v)\varUpsilon ^{\Delta }(v)\bigr)\,dh \biggr\} \varUpsilon ^{\Delta }(v). $$
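
As a quick check, for \(\mathbb{T}=\mathbb{R}\) we have \(\xi (v)=0\) and Lemma 2.5 collapses to the classical chain rule, while for \(\mathbb{T}=\mathbb{Z}\) the integral can be evaluated explicitly:

$$ (r\circ \varUpsilon )^{\Delta }(v)=r^{\prime }\bigl(\varUpsilon (v)\bigr)\varUpsilon ^{\prime }(v) \quad (\mathbb{T}=\mathbb{R}), \qquad (r\circ \varUpsilon )^{\Delta }(v)=r\bigl(\varUpsilon (v+1)\bigr)-r\bigl(\varUpsilon (v)\bigr) \quad (\mathbb{T}=\mathbb{Z}). $$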

Lemma 2.6

([31])

Let \(v_{0}\in \mathbb{T}^{k}\) and \(\varTheta :\mathbb{T} \times \mathbb{T}^{k} \rightarrow \mathbb{R}\) be continuous at \((v,v)\), where \(v>v_{0}\) and \(v\in \mathbb{T}^{k}\). Assume that \(\varTheta ^{\Delta }(v,\cdot )\) is rd-continuous on \([v_{0},\sigma (v)]_{ \mathbb{T}}\). Suppose that, for every \(\epsilon >0\), there exists a neighborhood Ω of v, independent of \(\eta \in [v_{0},\sigma (v)]_{ \mathbb{T}}\), such that

$$ \bigl\vert \bigl[\varTheta \bigl(\sigma (v),\eta \bigr)-\varTheta (s,\eta ) \bigr]-\varTheta ^{\Delta }(v, \eta )\bigl[\sigma (v)-s\bigr] \bigr\vert \leq \epsilon \bigl\vert \sigma (v)-s \bigr\vert , \quad s\in \varOmega , $$

where \(\varTheta ^{\Delta }\) denotes the derivative of Θ with respect to the first variable. Then

$$ r(v)= \int _{v_{0}}^{v}\varTheta (v,\eta )\Delta \eta $$

yields

$$ r^{\Delta }(v)= \int _{v_{0}}^{v}\varTheta ^{\Delta }(v,\eta )\Delta \eta + \varTheta \bigl(\sigma (v),v\bigr). $$
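
In the continuous case \(\mathbb{T}=\mathbb{R}\), where \(\sigma (v)=v\), Lemma 2.6 is precisely the Leibniz rule for differentiating under the integral sign:

$$ \frac{d}{dv} \int _{v_{0}}^{v}\varTheta (v,\eta )\,d\eta = \int _{v_{0}}^{v}\frac{\partial \varTheta }{\partial v}(v,\eta )\,d\eta +\varTheta (v,v). $$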

Lemma 2.7

([30])

If \(r^{\Delta }(v)\geq 0\), then \(r(v)\) is nondecreasing.

3 Results and discussion

Throughout this work, we denote \(\mathbb{R}_{+}=[0,\infty )\), fix \(v_{0}\in \mathbb{T}\) with \(v_{0}\geq 0\), and write \(\mathbb{T}_{0}=[v_{0},\infty )\cap \mathbb{T}\) and \([u,\zeta ]_{\mathbb{T}}=[u,\zeta ]\cap \mathbb{T}\). \(C_{\mathrm{rd}}\) denotes the set of rd-continuous functions. Moreover, for a strictly increasing function \(\rho : \mathbb{T}\rightarrow \mathbb{R}\), \(\bar{\mathbb{T}}=\rho ( \mathbb{T})\) is a time scale, where \(\bar{\mathbb{T}}\subseteq \mathbb{T}\). For \(j\in C_{\mathrm{rd}}( \mathbb{T},\mathbb{R})\), the composition of two functions on time scales is defined by

$$ j(u)\circ \rho ^{-1}(\lambda )=j\bigl(\rho ^{-1}(\lambda ) \bigr), \quad u\in \mathbb{T},\lambda \in \bar{\mathbb{T}}. $$

Example 3.1

Let \(j(s)=5^{s^{2}}\) for \(s\in \mathbb{T}=\mathbb{N}_{0}^{ \frac{1}{2}}= \{ \sqrt{n}:n\in \mathbb{N}_{0} \} \) and \(\rho (s)=s^{2}\) for \(s\in \mathbb{T}\). Then \(\rho ^{-1}(s)= \sqrt{s}\) for \(s\in \bar{\mathbb{T}}=\mathbb{N}_{0}\) and

$$ j(u)\circ \rho ^{-1}(\lambda )=\bigl(5^{u^{2}}\bigr)\circ \sqrt{ \lambda }= 5^{ \lambda }, \quad \lambda \in \bar{\mathbb{T}}. $$

To prove our main results, we first list the following assumptions:

  1. (D1)

    \(\psi \in C_{\mathrm{rd}}([\tau \mu ,v_{0}]_{\mathbb{T}}, \mathbb{R}_{+})\), where \(0<\tau <1\) and \(\mu =\min [v_{0},\rho (v_{0})]\).

  2. (D2)

    The function \(\rho \in C_{\mathrm{rd}}(\mathbb{T}_{0},\mathbb{R} _{+})\) is strictly increasing.

  3. (D3)

    \(\varsigma _{1}, \varsigma _{2}\in C_{\mathrm{rd}}(\mathbb{R}_{+}, \mathbb{R}_{+})\) are continuous, nondecreasing functions with \(\varsigma _{i}(v)>0\) for \(v>0\), (\(i=1,2\)).

  4. (D4)

    \(r\in C_{\mathrm{rd}}( [\tau \mu ,\infty )_{ \mathbb{T}}, \mathbb{R}_{+})\).

  5. (D5)

    The functions \(j(v,x), j^{\Delta }(v,x), d(v,x), d^{\Delta }(v,x)\in C_{\mathrm{rd}}(\mathbb{T}_{0}\times \mathbb{T}_{0}, \mathbb{R}_{+})\).

We are now in a position to state and prove our main theorems.

Theorem 3.2

If conditions (D1)(D5) and relation (5) hold with \(b\geq 0\), then

$$ r(v)\leq \varsigma _{1}^{-1} \biggl\{ \varLambda ^{-1} \biggl(\varPhi ^{-1} \biggl[ \varPhi \biggl( \varLambda \bigl(\varsigma _{1}(H)\bigr)+ \int _{v_{0}}^{v}j(v,x)\Delta x \biggr)+ \int _{v_{0}}^{v}d(v,x)\rho ^{\Delta }(x)\Delta x \biggr] \biggr) \biggr\} , $$
(11)

with

$$\begin{aligned}& \varPhi \biggl(\varLambda \bigl(\varsigma _{1}(H)\bigr)+ \int _{v_{0}}^{v}j(v,x)\Delta x \biggr)+ \int _{v_{0}}^{v}d(v,x)\rho ^{\Delta }(x)\Delta x \in \operatorname{Dom}\bigl(\varPhi ^{-1}\bigr), \\ & H=\max \Bigl\{ \varsigma _{1}(b),\max _{x\in [\tau \mu ,v_{0}]_{\mathbb{T}}}\psi (x) \Bigr\} , \end{aligned}$$
(12)
$$\begin{aligned}& (\varLambda \circ y)^{\Delta }(v)=\frac{y^{\Delta }(v)}{\varsigma _{2}( \varsigma _{1}^{-1}(y(v)))}, \end{aligned}$$
(13)
$$\begin{aligned}& (\varPhi \circ z)^{\Delta }(v)=\frac{\varsigma _{2}(\varsigma _{1}^{-1}( \varLambda ^{-1}(z(v))))z^{\Delta }(v)}{\varLambda ^{-1}(z(v))}, \end{aligned}$$
(14)

where Λ and Φ are increasing bijective functions.
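
To indicate how such a pair \(\varLambda \), \(\varPhi \) can be produced, note that in the continuous case \(\mathbb{T}=\mathbb{R}\) relations (13) and (14) are satisfied, for example, by the Bihari-type functions

$$ \varLambda (w)= \int _{w_{0}}^{w}\frac{ds}{\varsigma _{2}(\varsigma _{1}^{-1}(s))}, \qquad \varPhi (w)= \int _{w_{0}}^{w}\frac{\varsigma _{2}(\varsigma _{1}^{-1}(\varLambda ^{-1}(s)))}{\varLambda ^{-1}(s)}\,ds, \quad w\geq w_{0}, $$

for any fixed constant \(w_{0}>0\), since then \((\varLambda \circ y)^{\prime }(v)=y^{\prime }(v)/\varsigma _{2}(\varsigma _{1}^{-1}(y(v)))\) and \((\varPhi \circ z)^{\prime }(v)=\varsigma _{2}(\varsigma _{1}^{-1}(\varLambda ^{-1}(z(v))))z^{\prime }(v)/\varLambda ^{-1}(z(v))\). On a general time scale, (13) and (14) are simply the defining requirements imposed on the increasing bijections \(\varLambda \) and \(\varPhi \).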

Proof

Define a function \(y: [\tau \mu ,\infty )_{\mathbb{T}} \rightarrow \mathbb{R}_{+}\) by

$$ y(v)= \textstyle\begin{cases} \varsigma _{1}(H)+\int _{v_{0}}^{v}j(v,x)\varsigma _{2}(r(x))\Delta x \\ \quad{} + \int _{\rho ({v_{0}})}^{\rho ({v})}d(v,s)\varsigma _{1}(\max_{\phi \in [\tau x,x]_{\mathbb{T}}}r(\phi ))\circ \rho ^{-1}(x)\bar{ \Delta x}, \quad v\in \mathbb{T}_{0}, \\ \varsigma _{1}(H),\quad \hspace{168pt}v \in [\tau \mu ,v_{0}]_{\mathbb{T}}, \end{cases} $$

where H is as given in (12). Since the function \(y(v)\) is nondecreasing, by inequality (5) the relation

$$ \varsigma _{1}\bigl(r(v)\bigr)\leq y(v)\quad \Rightarrow\quad r(v)\leq \varsigma _{1}^{-1}\bigl(y(v)\bigr) $$
(15)

is satisfied. For \(v\in \mathbb{T}_{0}\) and \(x\in [\rho (v_{0}), \rho (v)]_{\mathbb{T}}\), we get

$$\begin{aligned} \varsigma _{1}\Bigl(\max_{\phi \in [\tau x,x]_{\mathbb{T}}}r(\phi )\Bigr) \circ \rho ^{-1}(x) &\leq \varsigma _{1}\Bigl(\max _{\phi \in [\tau x,x]_{\mathbb{T}}}\varsigma _{1}^{-1}\bigl(y(\phi ) \bigr)\Bigr) \circ \rho ^{-1}(x) \\ &=\varsigma _{1}\Bigl(\max_{\phi \in [\tau \rho ^{-1}( x),\rho ^{-1}(x)]_{\mathbb{T}}} \varsigma _{1}^{-1}\bigl(y(\phi )\bigr)\Bigr) \\ &=y(s)\circ \rho ^{-1}(x). \end{aligned}$$

Inequalities (5), (15) and the above analysis yield

$$\begin{aligned} y(v) &\leq \varsigma _{1}(H)+ \int _{v_{0}}^{v}j(v,x)\varsigma _{2}\bigl( \varsigma _{1}^{-1}\bigl(y(x)\bigr)\bigr)\Delta x \\ &\quad{} + \int _{\rho ({v_{0}})}^{\rho ({v})}d(v,s)\varsigma _{1}\Bigl( \max_{\phi \in [\tau x,x]_{\mathbb{T}}}\bigl(\varsigma _{1}^{-1}y(\phi ) \bigr)\Bigr) \circ \rho ^{-1}(x)\bar{\Delta x} \\ &=\varsigma _{1}(H)+ \int _{v_{0}}^{v}j(v,x)\varsigma _{2}\bigl( \varsigma _{1} ^{-1}\bigl(y(x)\bigr)\bigr)\Delta x+ \int _{\rho ({v_{0}})}^{\rho ({v})}d(v,s)y(s) \circ \rho ^{-1}(x) \bar{\Delta x} \\ &=\varsigma _{1}(H)+ \int _{v_{0}}^{v}j(v,x)\varsigma _{2}\bigl( \varsigma _{1} ^{-1}\bigl(y(x)\bigr)\bigr)\Delta x+ \int _{v_{0}}^{v}d(v,x)y(x)\rho ^{\Delta }(x) \Delta x. \end{aligned}$$
(16)

Taking the delta derivative of (16) with respect to v and using Lemma 2.6, we deduce

$$\begin{aligned} y^{\Delta }(v) &\leq \int _{v_{0}}^{v}j^{\Delta }(v,x)\varsigma _{2}\bigl( \varsigma _{1}^{-1}\bigl(y(x)\bigr) \bigr)\Delta x+j\bigl(\sigma (v),v\bigr)\varsigma _{2}\bigl( \varsigma _{1}^{-1}\bigl(y(v)\bigr)\bigr) \\ &\quad {}+ \int _{v_{0}}^{v}d^{\Delta }(v,x)y(x)\rho ^{\Delta }(x)\Delta x+d\bigl( \sigma (v),v\bigr)y(v)\rho ^{\Delta }(v) \\ &\leq \biggl\{ \int _{v_{0}}^{v}j^{\Delta }(v,x)\Delta x+j\bigl( \sigma (v),v\bigr) \biggr\} \varsigma _{2}\bigl(\varsigma _{1}^{-1}\bigl(y(v)\bigr)\bigr) \\ &\quad{} + \biggl\{ \int _{v_{0}}^{v}d^{\Delta }(v,x)\Delta x+d\bigl( \sigma (v),v\bigr) \biggr\} y(v)\rho ^{\Delta }(v), \end{aligned}$$

which leads to

$$\begin{aligned} \frac{y^{\Delta }(v)}{\varsigma _{2}(\varsigma _{1}^{-1}(y(v)))} &\leq \biggl\{ \int _{v_{0}}^{v}j(v,x)\Delta x \biggr\} ^{\Delta }+ \frac{ \{\int _{v_{0}}^{v}d(v,x)\Delta x \}^{\Delta }y(v)\rho ^{\Delta }(v)}{ \varsigma _{2}(\varsigma _{1}^{-1}(y(v)))}. \end{aligned}$$
(17)

By integrating both sides of (17) from \(v_{0}\) to v, we have

$$ \varLambda \bigl(y(v)\bigr)-\varLambda \bigl(y(v_{0})\bigr)\leq \int _{v_{0}}^{v}j(v,x)\Delta x+ \int _{v_{0}}^{v} \biggl\{ \int _{v_{0}}^{s}d(s,\eta )\Delta \eta \biggr\} ^{\Delta }\frac{y(s)\rho ^{\Delta }(s)}{\varsigma _{2}(\varsigma _{1} ^{-1}(y(s)))}\Delta s. $$

Since Λ is increasing and using the fact that \(y(v_{0})= \varsigma _{1}(H)\), the last inequality takes the form

$$ y(v)\leq \varLambda ^{-1} \biggl[\varLambda \bigl( \varsigma _{1}(H)\bigr)+ \int _{v_{0}} ^{v}j(v,x)\Delta x+ \int _{v_{0}}^{v} \biggl\{ \int _{v_{0}}^{s}d(s,\eta ) \Delta \eta \biggr\} ^{\Delta }\frac{y(s)\rho ^{\Delta }(s)}{\varsigma _{2}(\varsigma _{1}^{-1}(y(s)))}\Delta s \biggr], $$
(18)

where we denote

$$ z(v)=\varLambda \bigl(\varsigma _{1}(H)\bigr)+ \int _{v_{0}}^{v}j(v,x)\Delta x+ \int _{v_{0}}^{v} \biggl\{ \int _{v_{0}}^{s}d(s,\eta )\Delta \eta \biggr\} ^{\Delta }\frac{y(s)\rho ^{\Delta }(s)}{\varsigma _{2}(\varsigma _{1} ^{-1}(y(s)))}\Delta s. $$
(19)

From (18) and (19), it is easy to observe that

$$ y(v)\leq \varLambda ^{-1}\bigl(z(v)\bigr) $$
(20)

and

$$ z(v_{0})=\varLambda \bigl(\varsigma _{1}(H) \bigr)+ \int _{v_{0}}^{v}j(v,x)\Delta x. $$
(21)

Delta differentiating (19) with respect to v and using (20) and (21), it follows that

$$\begin{aligned} z^{\Delta }(v) &= \biggl\{ \int _{v_{0}}^{v}d^{\Delta }(v,x)\Delta x+d\bigl( \sigma (v),v\bigr) \biggr\} \frac{y(v)\rho ^{\Delta }(v)}{\varsigma _{2}( \varsigma _{1}^{-1}(y(v)))} \\ &\leq \biggl\{ \int _{v_{0}}^{v}d(v,x)\Delta x \biggr\} ^{\Delta } \frac{ \varLambda ^{-1}(z(v))\rho ^{\Delta }(v)}{\varsigma _{2}(\varsigma _{1}^{-1}( \varLambda ^{-1}(z(v))))}, \end{aligned}$$

which gives

$$ \frac{{\varsigma _{2}(\varsigma _{1}^{-1}(\varLambda ^{-1}(z(v))))}z^{ \Delta }(v)}{\varLambda ^{-1}(z(v))}\leq \biggl\{ \int _{v_{0}}^{v}d(v,x) \Delta x \biggr\} ^{\Delta } \rho ^{\Delta }(v). $$
(22)

Integrating (22) from \(v_{0}\) to v and using (14) and (21), we obtain

$$ z(v)\leq \varPhi ^{-1} \biggl[\varPhi \biggl(\varLambda \bigl(\varsigma _{1}(H)\bigr)+ \int _{v_{0}}^{v}j(v,x)\Delta x \biggr)+ \int _{v_{0}}^{v}d(v,x)\rho ^{\Delta }(x) \Delta x \biggr], $$
(23)

the conclusion in (11) follows upon substituting (23) into (20) and (15). Details are omitted. □
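
As a simple consistency check, consider the continuous case \(\mathbb{T}=\mathbb{R}\) with \(\varsigma _{1}(r)=\varsigma _{2}(r)=r\). Then one may take \(\varLambda (w)=\ln w\) and \(\varPhi (w)=w\), and the bound (11) reduces to the familiar Gronwall-type estimate

$$ r(v)\leq H\exp \biggl( \int _{v_{0}}^{v}j(v,x)\,dx+ \int _{v_{0}}^{v}d(v,x)\rho ^{\prime }(x)\,dx \biggr), \quad v\geq v_{0}, $$

with H as in (12).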

Some important remarks on Theorem 3.2, in the case \(\tau \rightarrow 1\) (i.e., without maxima), are listed below:

Remark 3.3

By taking \(b=a(v)\) nondecreasing, \(\varsigma _{1}(r)=r^{p}\), \(\varsigma _{2}(r)=\varsigma (r)\), and \(d(v,s)=0\), Theorem 3.2 reduces to [32], Theorem 3.1; if, in addition, \(j(v,x)=b(v)f(s)\), then it reduces to [32], Theorem 3.2.

Remark 3.4

It is worth noting that, as a particular case, Theorem 3.2 reduces to [6], Theorem 3.1 by putting \(b=r_{0}\), \(\varsigma _{1}(r)=r^{p}\), \(\varsigma _{2}(r)=r^{q}\), \(j(v,x)=h(x)\), \(d(v,s)=f(s)\), \(\rho (v_{0})=0\), \(\rho (v)\leq v\), and \(\mathbb{T}=\mathbb{R}\).

Remark 3.5

If \(\varsigma _{1}(r)=r^{p}\), \(d(v,s)=0\), \(b=a(v)\), \(v_{0}=0\), and \(j(v,x)=b(v)f(x)\), then Theorem 3.2 reduces to the inequality proved by Pachpatte [7], Theorem 4\((d_{3})\) with \(\mathbb{T}=\mathbb{Z}\) and to [12], Theorem 2\((b_{3})\) with \(\mathbb{T}= \mathbb{R}\).

Remark 3.6

If \(\mathbb{T}=\mathbb{R}\), \(\varsigma _{1}(r)=r\), \(v_{0}=0\), \(b=k(v)\), \(\varsigma _{2}(r)=\varsigma (r)\), \(d(v,s)=0\), then we can easily get the inequality in [28], Theorem 2.1 from Theorem 3.2.

Theorem 3.7

Suppose that the relations in (6) and conditions (D1)–(D4) are fulfilled and, in addition:

  1. (i)

    \(b(v)\in C_{\mathrm{rd}}(\mathbb{T}_{0},(0,\infty ))\) is nondecreasing,

  2. (ii)

    The functions \(j(v,x), j^{\Delta }(v,x)\in C_{\mathrm{rd}}( \mathbb{T}_{0}\times \mathbb{T}_{0},\mathbb{R}_{+})\), \(d(v)\in C_{ \mathrm{rd}}(\mathbb{T}_{0},\mathbb{R}_{+})\),

  3. (iii)

    \(\max_{v\in [\tau \mu ,v_{0}]_{\mathbb{T}}}\psi (v) \leq b(v_{0})\), then

    $$ \begin{aligned}[b] r(v)&\leq \varLambda _{1}^{-1} \biggl\{ \varPhi _{1}^{-1} \biggl[\varPhi _{1} \biggl( \varLambda _{1}\bigl(b(v)\bigr)+ \int _{v_{0}}^{v}d(x)\rho ^{\Delta }(x)\Delta x \biggr) \\ &\quad{} + \int _{v_{0}}^{v}j(v,x)\rho ^{\Delta }(x)\Delta x \biggr] \biggr\} , \end{aligned} $$
    (24)

    such that \(\alpha >0\), \(\beta \geq 1\), \(\alpha +\beta >1\), and

    $$\begin{aligned}& \varPhi _{1} \biggl(\varLambda _{1}\bigl(b(v)\bigr)+ \int _{v_{0}}^{v}d(x)\rho ^{\Delta }(x) \Delta x \biggr)+ \int _{v_{0}}^{v}j(v,x)\rho ^{\Delta }(x)\Delta x\in \operatorname{Dom}\bigl(\varPhi _{1}^{-1}\bigr), \\& (\varLambda _{1}\circ Q)^{\Delta }(v)= \frac{Q^{\Delta }(v)}{\varsigma _{2}(Q(v))}, \end{aligned}$$
    (25)
    $$\begin{aligned}& (\varPhi _{1}\circ L)^{\Delta }(v)= \frac{\varsigma _{2}(\varLambda _{1}^{-1}(L(v)))L ^{\Delta }(v)}{\varsigma _{1}(\varLambda _{1}^{-1}(L(v)))[\varLambda _{1}^{-1}(L( \sigma (v)))]^{\frac{\beta +\alpha \beta -1}{\beta }}}. \end{aligned}$$
    (26)

Proof

Fix an arbitrary \(v^{\ast }\in \mathbb{T}_{0}\). For \(v\in [v_{0}, v^{\ast }]\cap \mathbb{T}\), define a function \(y: [\tau \mu ,\infty )_{\mathbb{T}}\rightarrow \mathbb{R}_{+}\) associated with (6) by

$$\begin{aligned} y(v)= \textstyle\begin{cases} b(v^{\ast })+\int _{v_{0}}^{v}j(v,x)\varsigma _{1}(r(x)) \\ \quad{} \times [r^{\beta }(x)+\int _{\rho ({v_{0}})}^{\rho ({v})}d(s) \varsigma _{2}(\max_{\phi \in [\tau \lambda ,\lambda ]_{ \mathbb{T}}}r(\phi ))\circ \rho ^{-1}(\lambda )\bar{\Delta \lambda } ]^{\alpha }\Delta x, \\ \hphantom{b(v_{0}),\,} \quad v\in [v_{0}, v^{\ast }]\cap \mathbb{T}, \\ b(v_{0}),\quad v \in [\tau \mu ,v_{0}]_{\mathbb{T}}. \end{cases}\displaystyle \end{aligned}$$

Clearly \(y(v)\) is nondecreasing, so that, by (6), the relation

$$ r(v)\leq y(v) $$
(27)

holds. For \(v\in [v_{0}, v^{\ast }]\cap \mathbb{T}\) and \(x\in [\rho (v _{0}),\rho (v)]_{\mathbb{T}}\), we have

$$\begin{aligned} \max_{\phi \in [\tau \lambda ,\lambda ]_{\mathbb{T}}}r(\phi ) \circ \rho ^{-1}(x) &\leq \max _{\phi \in [\tau \lambda ,\lambda ]_{\mathbb{T}}}y(\phi ) \circ \rho ^{-1}(x) \\ &=\max_{\phi \in [\tau \rho ^{-1}( \lambda ),\rho ^{-1}(\lambda )]_{ \mathbb{T}}}y(\phi ) \\ &=y(s)\circ \rho ^{-1}(\lambda ). \end{aligned}$$
(28)

The combination of (6), (27), and (28) gives

$$\begin{aligned} y(v) &\leq b\bigl(v^{\ast }\bigr)+ \int _{v_{0}}^{v}j(v,x) \varsigma _{1} \bigl(y(x)\bigr) \biggl[y^{\beta }(x)+ \int _{\rho ({v_{0}})}^{\rho ({v})}d(s)\varsigma _{2}\bigl(y(s) \bigr)\circ \rho ^{-1}(\lambda )\bar{\Delta \lambda } \biggr]^{ \alpha }\Delta x \\ &= b\bigl(v^{\ast }\bigr)+ \int _{v_{0}}^{v}j(v,x) \varsigma _{1} \bigl(y(x)\bigr) \biggl[y ^{\beta }(x)+ \int _{v_{0}}^{x}d(s)\varsigma _{2}\bigl(y(s) \bigr) \rho ^{\Delta }(s) \Delta s \biggr]^{\alpha }\Delta x. \end{aligned}$$

The definition of \(y(v)\) and Lemma 2.6 imply that

$$\begin{aligned} \begin{aligned}[b] y^{\Delta }(v) &= \biggl\{ \int _{v_{0}}^{v}j^{\Delta }(v,x)\Delta x+j\bigl( \sigma (v),v\bigr) \biggr\} \biggl[y^{\beta }(v)+ \int _{v_{0}}^{v}d(s)\varsigma _{2}\bigl(y(s) \bigr) \rho ^{\Delta }(s)\Delta s \biggr]^{\alpha }\varsigma _{1}\bigl(y(v)\bigr) \\ &= \biggl\{ \int _{v_{0}}^{v}j(v,x)\Delta x \biggr\} ^{\Delta }Q^{\alpha }(v)\varsigma _{1}\bigl(y(v)\bigr), \end{aligned} \end{aligned}$$
(29)

where

$$ Q(v)=y^{\beta }(v)+ \int _{v_{0}}^{v}d(s)\varsigma _{2}\bigl(y(s) \bigr) \rho ^{ \Delta }(s)\Delta s, $$
(30)

and

$$\begin{aligned}& y^{\beta }(v)\leq Q(v)\quad \Rightarrow \quad y(v)\leq Q^{\frac{1}{\beta }}(v), \end{aligned}$$
(31)
$$\begin{aligned}& y^{\sigma }(v)\leq Q^{\frac{1}{\beta }}\bigl(\sigma (v)\bigr) \end{aligned}$$
(32)

for all \(v\in [v_{0}, v^{\ast }]\cap \mathbb{T}\), since \(\beta \geq 1\). Delta differentiating (30) and utilizing \(Q(v)\leq Q^{\sigma }(v)\), Lemma 2.1, Lemma 2.6, (29), (31), and (32), we derive that

$$\begin{aligned} Q^{\Delta }(v) &= \beta \bigl(y^{\sigma }(v)\bigr)^{\beta -1}y^{\Delta }(v)+d(v) \varsigma _{2}\bigl(y(v)\bigr)\rho ^{\Delta }(v) \\ &\leq \beta \bigl(Q^{\sigma }(v)\bigr)^{\frac{\beta -1}{\beta }} \biggl\{ \int _{v_{0}}^{v}j(v,x)\Delta x \biggr\} ^{\Delta }Q^{\alpha }(v) \varsigma _{1}\bigl(Q(v)\bigr)+d(v) \varsigma _{2}\bigl(Q(v)\bigr)\rho ^{\Delta }(v) \\ &=\beta \biggl\{ \int _{v_{0}}^{v}j(v,x)\Delta x \biggr\} ^{\Delta } \varsigma _{1}\bigl(Q(v)\bigr) \bigl(Q^{\sigma }(v) \bigr)^{\frac{\beta +\alpha \beta -1}{ \beta }}+d(v)\varsigma _{2}\bigl(Q(v)\bigr)\rho ^{\Delta }(v). \end{aligned}$$

Since \(\varsigma _{2}(Q(v))>0\) for \(v>0\), we have

$$ \frac{Q^{\Delta }(v)}{\varsigma _{2}(Q(v))}\leq \beta \biggl\{ \int _{v _{0}}^{v}j(v,x)\Delta x \biggr\} ^{\Delta } \frac{\varsigma _{1}(Q(v))(Q ^{\sigma }(v))^{\frac{\beta +\alpha \beta -1}{\beta }}}{\varsigma _{2}(Q(v))}+d(v) \rho ^{\Delta }(v). $$
(33)

Integrating both sides of (33) from \(v_{0}\) to v and using (25) and \(Q(v_{0})=b^{\beta }(v^{\ast })\), we get

$$\begin{aligned} Q(v) &\leq \varLambda _{1}^{-1} \biggl(\varLambda _{1}\bigl(b^{\beta }\bigl(v^{\ast }\bigr)\bigr)+ \int _{v_{0}}^{v^{\ast }}d(x)\rho ^{\Delta }(x)\Delta x \\ &\quad{} + \beta \int _{v _{0}}^{v} \biggl\{ \int _{v_{0}}^{v}j(v,x)\Delta x \biggr\} ^{\Delta } \frac{ \varsigma _{1}(Q(v))(Q^{\sigma }(v))^{\frac{\beta +\alpha \beta -1}{ \beta }}}{\varsigma _{2}(Q(v))}\Delta x \biggr) \\ &= \varLambda _{1}^{-1}\bigl(L(v)\bigr), \end{aligned}$$
(34)

where

$$ \begin{aligned}[b] L(v)&= \varLambda _{1}\bigl(b^{\beta } \bigl(v^{\ast }\bigr)\bigr)+ \int _{v_{0}}^{v^{\ast }}d(x) \rho ^{\Delta }(x)\Delta x \\ &\quad{} + \beta \int _{v_{0}}^{v} \biggl\{ \int _{v_{0}} ^{v}j(v,x)\Delta x \biggr\} ^{\Delta } \frac{\varsigma _{1}(Q(v))(Q^{ \sigma }(v))^{\frac{\beta +\alpha \beta -1}{\beta }}}{\varsigma _{2}(Q(v))} \Delta x, \end{aligned} $$
(35)

and \(L(v)\) is a positive nondecreasing function with

$$\begin{aligned}& L\bigl(v^{\ast }\bigr)= \varLambda _{1} \bigl(b^{\beta }\bigl(v^{\ast }\bigr)\bigr)+ \int _{v_{0}}^{v^{ \ast }}d(x)\rho ^{\Delta }(x)\Delta x, \end{aligned}$$
(36)
$$\begin{aligned}& Q^{\sigma }(v)\leq \varLambda _{1}^{-1} \bigl(L\bigl(\sigma (v)\bigr)\bigr). \end{aligned}$$
(37)

Following the same steps from (29)–(33) with suitable changes and substituting (25), (34), (36) in (35), we have

$$ L\bigl(v^{\ast }\bigr)\leq \varPhi _{1}^{-1} \biggl[\varPhi _{1} \biggl(\varLambda _{1}\bigl(b \bigl(v^{ \ast }\bigr)\bigr)+ \int _{v_{0}}^{v^{\ast }}d(x)\rho ^{\Delta }(x)\Delta x \biggr)+ \int _{v_{0}}^{v^{\ast }}j(v,x)\rho ^{\Delta }(x)\Delta x \biggr], $$
(38)

Since \(v^{\ast }\in \mathbb{T}_{0}\) was chosen arbitrarily, the required estimate in (24) is obtained by combining (27), (31), (34), and (38). The proof is complete. □

Remark 3.8

If \(\mathbb{T}=\mathbb{R}\), τ tends to 1, \(b(v)=r_{0}\) (any constant), \(\alpha =p\), \(\beta =q\), \(v_{0}=\rho (v_{0})=0\) and \(j(v,x)=f(v)\) in Theorem 3.7, then we get the inequality obtained by Abdeldaim and El-Deeb in [15], Theorem 2.5 without maxima.

Theorem 3.9

If relation (7) under assumptions (D1)(D5) is satisfied, then

$$ r(v)\leq \biggl[\varTheta ^{-1} \biggl\{ \varOmega ^{-1} \biggl(\varOmega \bigl(\varTheta \bigl(H ^{\star }\bigr) \bigr)+ \int _{v_{0}}^{v}j(v,x) \biggl[1+ \int _{v_{0}}^{x}d(x,\lambda )\rho ^{\Delta }( \lambda )\Delta \lambda \biggr]\Delta x \biggr) \biggr\} \biggr]^{\frac{1}{\alpha }}, $$
(39)

with

$$\begin{aligned}& \varOmega \bigl(\varTheta \bigl(H^{\star }\bigr)\bigr)+ \int _{v_{0}}^{v}j(v,x) \biggl[1+ \int _{v _{0}}^{x}d(x,\lambda )\rho ^{\Delta }( \lambda )\Delta \lambda \biggr] \Delta x \in \operatorname{Dom}\bigl(\varOmega ^{-1}\bigr), \\& H^{\star }={\max } \Bigl\{ b^{\alpha },\max _{x\in [\tau \mu ,v_{0}]_{\mathbb{T}}}\psi (x) \Bigr\} , \end{aligned}$$
(40)

where Θ and Ω are increasing bijective functions defined by

$$\begin{aligned}& (\varTheta \circ q)^{\Delta }(v)=\frac{q^{\Delta }(v)}{\varsigma _{1}(q ^{\frac{1}{\alpha }}(v))}, \end{aligned}$$
(41)
$$\begin{aligned}& (\varOmega \circ a)^{\Delta }(v)=\frac{a^{\Delta }(v)}{\varsigma _{2}( \varTheta ^{-1}(a^{\frac{1}{\alpha }}(v)))} \end{aligned}$$
(42)

for \(\alpha >1\).
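
In the continuous case \(\mathbb{T}=\mathbb{R}\), relations (41) and (42) are satisfied, for instance, by

$$ \varTheta (w)= \int _{w_{0}}^{w}\frac{ds}{\varsigma _{1}(s^{\frac{1}{\alpha }})}, \qquad \varOmega (w)= \int _{w_{0}}^{w}\frac{ds}{\varsigma _{2}(\varTheta ^{-1}(s^{\frac{1}{\alpha }}))} $$

for a suitably chosen constant \(w_{0}\geq 0\). This is the construction that appears in Example 4.3 below, where \(\alpha =1\), \(\varsigma _{1}\equiv 1\), \(\varsigma _{2}(r)=\sqrt{r}\), and \(w_{0}=0\) give \(\varTheta (w)=w\), \(\varOmega (w)=2\sqrt{w}\), and \(\varOmega ^{-1}(w)=\frac{1}{4}w^{2}\).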

Proof

Defining \(q: [\tau \mu ,\infty )_{\mathbb{T}}\rightarrow \mathbb{R}_{+}\) by

$$ q(v)= \textstyle\begin{cases} H^{\star }+\int _{v_{0}}^{v}j(v,x)\varsigma _{1}(r(x)) [\varsigma _{2}(r(x)) \\ \quad{} +\int _{\rho ({v_{0}})}^{\rho ({v})}d(x,s)\varsigma _{2}( \max_{\phi \in [\tau \lambda ,\lambda ]_{\mathbb{T}}}r(\phi )) \circ \rho ^{-1}(\lambda )\bar{\Delta \lambda } ]\Delta x, \quad v \in \mathbb{T}_{0}, \\ H^{\star },\quad\hspace{198pt} v \in [\tau \mu ,v_{0}]_{\mathbb{T}}, \end{cases} $$

where \(H^{\star }\) is as given in (40). Obviously \(q(v)\) is a nondecreasing function, so, by (7),

$$ r^{\alpha }(v)\leq q(v)\quad \Rightarrow \quad r(v)\leq q^{\frac{1}{\alpha }}(v). $$
(43)

For \(v\in \mathbb{T}_{0}\) and \(x\in [\rho (v_{0}),\rho (v)]_{ \mathbb{T}}\), we get

$$\begin{aligned} \max_{\phi \in [\tau \lambda ,\lambda ]_{\mathbb{T}}}r(\phi ) \circ \rho ^{-1}(\lambda ) &\leq \max_{\phi \in [\tau \lambda ,\lambda ]_{\mathbb{T}}}q^{\frac{1}{ \alpha }}(\phi )\circ \rho ^{-1}( \lambda ) \\ &=\max_{\phi \in [\tau \rho ^{-1}( \lambda ),\rho ^{-1}(\lambda )]_{ \mathbb{T}}}q^{\frac{1}{\alpha }}(\phi ) \\ &=q^{\frac{1}{\alpha }}\bigl(\rho ^{-1}(\lambda )\bigr)=q^{\frac{1}{\alpha }}(s) \circ \rho ^{-1}(\lambda ). \end{aligned}$$

Inequalities (7), (43), and the last estimate show that

$$\begin{aligned} q(v) &\leq H^{\star }+ \int _{v_{0}}^{v}j(v,x)\varsigma _{1} \bigl(q^{\frac{1}{ \alpha }}(x)\bigr) \biggl[\varsigma _{2} \bigl(q^{\frac{1}{\alpha }}(x)\bigr)+ \int _{\rho ({v_{0}})}^{\rho ({v})}d(x,s)\varsigma _{2} \bigl(q^{\frac{1}{ \alpha }}(s)\bigr)\circ \rho ^{-1}(\lambda )\bar{\Delta \lambda } \biggr] \Delta x \\ &=H^{\star }+ \int _{v_{0}}^{v}j(v,x)\varsigma _{1} \bigl(q^{\frac{1}{\alpha }}(x)\bigr) \biggl[\varsigma _{2} \bigl(q^{\frac{1}{\alpha }}(x)\bigr)+ \int _{v_{0}}^{v}d(x,s) \varsigma _{2} \bigl(q^{\frac{1}{\alpha }}(s)\bigr) \rho ^{\Delta }(\lambda ) \Delta \lambda \biggr]\Delta x. \end{aligned}$$
(44)

Delta differentiating (44) and applying Lemma 2.6, we notice that

$$ \frac{q^{\Delta }(v)}{\varsigma _{1}(q^{\frac{1}{\alpha }}(v))}\leq \biggl\{ \int _{v_{0}}^{v}j(v,x)\Delta x \biggr\} ^{\Delta } \biggl[ \varsigma _{2}\bigl(q^{\frac{1}{\alpha }}(v)\bigr)+ \int _{v_{0}}^{v}d(v,s) \varsigma _{2} \bigl(q^{\frac{1}{\alpha }}(s)\bigr) \rho ^{\Delta }(\lambda ) \Delta \lambda \biggr], $$

which, by employing (41), implies the estimate

$$\begin{aligned} q(v) &\leq \varTheta ^{-1} \biggl\{ \varTheta \bigl(H^{\star } \bigr)+ \int _{v_{0}}^{v}j(v,x) \biggl(\varsigma _{2} \bigl(q^{\frac{1}{\alpha }}(v)\bigr)+ \int _{v_{0}}^{v}d(v,s) \varsigma _{2} \bigl(q^{\frac{1}{\alpha }}(s)\bigr) \rho ^{\Delta }(\lambda ) \Delta \lambda \biggr)\Delta x \biggr\} \\ &\leq \varTheta ^{-1}\bigl(a(v)\bigr) \end{aligned}$$
(45)

where

$$ a(v)\leq \varTheta \bigl(H^{\star }\bigr)+ \int _{v_{0}}^{v}j(v,x) \biggl(\varsigma _{2} \bigl(q^{\frac{1}{\alpha }}(v)\bigr)+ \int _{v_{0}}^{v}d(v,s)\varsigma _{2}\bigl(q ^{\frac{1}{\alpha }}(s)\bigr) \rho ^{\Delta }(\lambda )\Delta \lambda \biggr) \Delta x. $$
(46)

Similarly, taking the delta derivative of (46), using Lemma 2.6, and then integrating the resulting inequality from \(v_{0}\) to v with \(a(v_{0})=\varTheta (H^{\star })\), we obtain

$$ a(v)\leq \varOmega ^{-1} \biggl(\varOmega \bigl(\varTheta \bigl(H^{\star } \bigr)\bigr)+ \int _{v_{0}}^{v}j(v,x) \biggl[1+ \int _{v_{0}}^{x}d(x,\lambda )\rho ^{\Delta }( \lambda )\Delta \lambda \biggr]\Delta x \biggr). $$

The desired bound in (39) is obtained by substituting the above inequality into (45) and (43). The proof is complete. □

Remark 3.10

Inequality (7) of Theorem 3.9 reduces to the inequality given in [14], Theorem 2.1 without maxima, if we let τ tend to 1 and take \(b(v)=r_{0}\) (any constant), \(\alpha =1\), \(v_{0}=\rho (v_{0})=0\), \(j(v,x)=f(x)\), \(d(x,s)=d(s)\), \(\varsigma _{1}= \varsigma _{2}=\varsigma \), and \(\mathbb{T}=\mathbb{R}\).

Theorem 3.11

Assume conditions (D1)(D5),

  1. (i)

    of Theorem 3.7,

  2. (ii)

    \(p\in C_{\mathrm{rd}}(\mathbb{T}_{0},\mathbb{R}_{+})\),

  3. (iii)

    \(\max_{v\in [\tau \mu ,v_{0}]_{\mathbb{T}}}\psi (v) \leq \varsigma _{1}(b(v_{0}))\), and relation (8) hold. Then

    $$\begin{aligned} r(v) &\leq \varsigma _{1}^{-1} \biggl\{ \varLambda ^{-1} \biggl(M^{-1} \biggl[M \biggl(\varLambda \biggl(\varsigma _{1}\bigl(b(v)\bigr)+ \int _{v_{0}}^{v}p(x)\Delta x\biggr) \biggr) \\ &\quad{} + \int _{v_{0}}^{v}j(v,x) \biggl(1+ \int _{v_{0}}^{x}d(x,\lambda ) \rho ^{\Delta }( \lambda )\Delta \lambda \biggr)\Delta x \biggr] \biggr) \biggr\} , \end{aligned}$$

    provided that

    $$ \begin{aligned} & M \biggl(\varLambda \biggl(\varsigma _{1}\bigl(b(v)\bigr)+ \int _{v_{0}}^{v}p(x)\Delta x\biggr) \biggr) \\ &\quad{} + \int _{v_{0}}^{v}j(v,x) \biggl(1+ \int _{v_{0}}^{x}d(x,\lambda ) \rho ^{\Delta }( \lambda )\Delta \lambda \biggr)\Delta x\in \operatorname{Dom}\bigl(M^{-1} \bigr), \end{aligned} $$

    Λ is defined as in (13) and

    $$ (M\circ z)^{\Delta }(v)=\frac{z^{\Delta }(v)}{\varLambda ^{-1}(z(v))}. $$

Proof

Fix \(v^{\ast }\in \mathbb{T}_{0}\). For \(v\in [v_{0}, v^{\ast }]\cap \mathbb{T}\), define a function \(y: [\tau \mu ,\infty )_{\mathbb{T}}\rightarrow \mathbb{R}_{+}\) associated with (8) by

$$\begin{aligned} y(v)= \textstyle\begin{cases} \varsigma _{1}(b(v^{\ast }))+\int _{v_{0}}^{v}\varsigma _{2}(r(x)) [j(v,x) \varsigma _{1}(\max_{\phi \in [\tau x,x]_{\mathbb{T}}}r(\phi ))+p(x) ]\Delta x \\ \quad{} +\int _{v_{0}}^{v}\varsigma _{2}(r(x))j(v,x) [\int _{\rho ({v_{0}})} ^{\rho ({v})}d(x,s)\varsigma _{1}(\max_{\phi \in [\tau \lambda ,\lambda ]_{\mathbb{T}}}r(\phi )) \circ \rho ^{-1}(\lambda )\bar{\Delta \lambda } ]\Delta x, \\ \quad \hspace{39pt}v \in [v_{0}, v^{\ast }]\cap \mathbb{T}, \\ \varsigma _{1}(b(v_{0})),\quad v \in [\tau \mu ,v_{0}]_{\mathbb{T}}. \end{cases}\displaystyle \end{aligned}$$

Since \(y(v)\) is nondecreasing, from (8) we have

$$ r(v)\leq \varsigma _{1}^{-1}\bigl(y(v)\bigr). $$

For \(v\in [v_{0}, v^{\ast }]\cap \mathbb{T}\) and \(x\in [v_{0},v]_{ \mathbb{T}}\), we get

$$ \varsigma _{1}\Bigl(\max_{\phi \in [\tau x,x]_{\mathbb{T}}}r(\phi )\Bigr) \leq \varsigma _{1}\Bigl(\max_{\phi \in [\tau x,x]_{\mathbb{T}}} \varsigma _{1}^{-1} \bigl(y(\phi )\bigr)\Bigr)=y(x). $$

Similarly, for \(v\in [v_{0}, v^{\ast }]\cap \mathbb{T}\) and \(x\in [\rho (v_{0}),\rho (v)]_{\mathbb{T}}\), we have

$$\begin{aligned} \max_{\phi \in [\tau \lambda ,\lambda ]_{\mathbb{T}}}r(\phi ) \circ \rho ^{-1}(x) &\leq \max _{\phi \in [\tau \lambda ,\lambda ]_{\mathbb{T}}}y(\phi ) \circ \rho ^{-1}(x) \\ &=\max_{\phi \in [\tau \rho ^{-1}( \lambda ),\rho ^{-1}(\lambda )]_{ \mathbb{T}}}y(\phi ) \\ &=y(s)\circ \rho ^{-1}(\lambda ). \end{aligned}$$

The remaining proof of Theorem 3.11 follows from a suitable application of Theorem 3.2. Here, we omit the details. □

Remark 3.12

As a special case on time scales without maxima, if \(\varsigma _{1}(r)=r\), \(\varsigma _{2}=1\), \(\rho (v)\leq v\), \(j(v,x)=f(v)\), \(b(v)=r_{0}\), and \(\tau \rightarrow 1\) in Theorem 3.11, then it reduces to Theorem 3.1 of Li [31].

Remark 3.13

In addition to the assumptions of the above remark, if \(\rho (v)=0\) and \(d(x,\lambda )=g(\lambda )\), then Theorem 3.11 reduces to Theorem 1 of [8].

Remark 3.14

From Theorem 3.11 we can obtain the continuous and discrete versions of the inequalities studied by Pachpatte [7], Theorem 2.1 \((a_{1})\) and Theorem 2.3 \((c_{1})\), by taking \(\mathbb{T}=\mathbb{R}\) and \(\mathbb{T}=\mathbb{Z}\), respectively, together with the assumptions of Remark 3.12 and \(\rho (v)=0\).

4 Application

In this section, we illustrate some applications of Theorem 3.9 to study certain properties of solutions of dynamic equations with maxima. Let us consider (9)–(10), where \(G\in C_{\mathrm{rd}}(\mathbb{T}_{0}^{2}\times \mathbb{R}^{2}, \mathbb{R})\), \(Z\in C_{\mathrm{rd}}(\mathbb{T}_{0}^{2}\times \mathbb{R},\mathbb{R})\), \(r\in C_{\mathrm{rd}}(\mathbb{T}_{0}, \mathbb{R}_{+})\), \(\psi \in C_{\mathrm{rd}}([\tau v_{0},v_{0}]_{\mathbb{T}}, \mathbb{R})\), \(0<\tau <1\), and k, μ are constants such that \(\tau \mu \leq v_{0}\).

The subsequent corollary deals with the global existence of solutions of (9).

Corollary 4.1

Suppose that:

  1. (i)

    There exists a strictly increasing function \(\rho \in C_{\mathrm{rd}}(\mathbb{T}_{0},\mathbb{R}_{+})\) such that \(\rho (\mathbb{T}) =\bar{\mathbb{T}}\) is a time scale and \(\mu =\min [v_{0},\rho (v_{0})]\).

  2. (ii)

    The functions \(n(v,x), n^{\Delta }(v,x), m(v,x), m^{\Delta }(v,x)\in C_{\mathrm{rd}}(\mathbb{T}_{0}\times \mathbb{T}_{0},\mathbb{R}_{+})\) and \(\rho ^{\Delta }\in C_{\mathrm{rd}}(\mathbb{T}_{0},\mathbb{R}_{+})\).

  3. (iii)

    \(\xi _{1}, \xi _{2}\in C_{\mathrm{rd}}(\mathbb{R}_{+},\mathbb{R}_{+})\) are continuous and nondecreasing functions with \(\xi _{j}(v)>0\) for \(v>0\) (\(j=1,2\)), and

    $$\begin{aligned}& \bigl\vert G(v,x,y,z) \bigr\vert \leq n(v,x)\xi _{1} \vert y \vert \bigl[\xi _{2} \vert y \vert + \vert z \vert \bigr], \end{aligned}$$
    (47)
    $$\begin{aligned}& \bigl\vert Z(v,x,y) \bigr\vert \leq m(v,x)\xi _{2} \vert y \vert \rho ^{\Delta }(x) \end{aligned}$$
    (48)

    for \(v\in \mathbb{T}_{0}\) and \(y,z\in \mathbb{R}\). Then any solution \(r(v)\) of (9)–(10) satisfies the estimate

    $$ \begin{aligned}[b] \bigl\vert r(v) \bigr\vert &\leq \biggl[\bar{\varTheta }^{-1} \biggl\{ \bar{\varOmega }^{-1} \biggl(\bar{ \varOmega } \bigl(\bar{\varTheta }(Q)\bigr) \\ &\quad{} + \int _{v_{0}}^{v}n(u,x) \biggl[1+ \int _{v_{0}} ^{u}m(u,\lambda )\rho ^{\Delta }( \lambda )\Delta \lambda \biggr]\Delta x \biggr) \biggr\} \biggr]^{\frac{1}{\alpha }}, \end{aligned} $$
    (49)

    where \(\alpha >1\),

    $$\begin{aligned}& \bar{\varOmega }\bigl(\bar{\varTheta }(Q)\bigr)+ \int _{v_{0}}^{v}n(u,x) \biggl[1+ \int _{v_{0}}^{u}m(u,\lambda )\rho ^{\Delta }( \lambda )\Delta \lambda \biggr]\Delta x \in \operatorname{Dom}\bigl(\bar{\varOmega }^{-1}\bigr), \\& Q=\max \Bigl\{ k^{\alpha },\max_{x\in [\tau \mu ,v_{0}]_{\mathbb{T}}}\psi (x) \Bigr\} , \\& (\bar{\varTheta }\circ q)^{\Delta }(v)=\frac{q^{\Delta }(v)}{\xi _{1}(q ^{\frac{1}{\alpha }}(v))}, \\& (\bar{\varOmega }\circ a)^{\Delta }(v)=\frac{a^{\Delta }(v)}{\xi _{2}( \varTheta ^{-1}(a^{\frac{1}{\alpha }}(v)))}. \end{aligned}$$

Proof

Clearly, a solution \(r(v)\) of (9) with (10) can be written in the equivalent integral form

$$ r^{\alpha }(v) = k^{\alpha }+ \int _{v_{0}}^{v} G\biggl(u,x,r(x), \int _{v_{0}}^{u}Z\Bigl(u,s,\max_{\phi \in [\tau x,x]_{\mathbb{T}}}r( \phi )\Bigr)\Delta s \biggr)\Delta x. $$
(50)

Employing conditions (47) and (48), it follows from (50) that

$$\begin{aligned} \bigl\vert r^{\alpha }(v) \bigr\vert &\leq \biggl\vert k^{\alpha }+ \int _{v_{0}}^{v} G\biggl(u,x,r(x), \int _{v_{0}}^{u}Z\Bigl(u,s,\max_{\phi \in [\tau x,x]_{\mathbb{T}}}r( \phi )\Bigr)\Delta s \biggr)\Delta x \biggr\vert \\ &\leq \bigl\vert k^{\alpha } \bigr\vert + \biggl\vert \int _{v_{0}}^{v} G\biggl(u,x,r(x), \int _{v_{0}} ^{u}Z\Bigl(u,s,\max_{\phi \in [\tau x,x]_{\mathbb{T}}}r( \phi )\Bigr) \Delta s \biggr)\Delta x \biggr\vert \\ &\leq \bigl\vert k^{\alpha } \bigr\vert + \int _{v_{0}}^{v} \biggl\vert G\biggl(u,x,r(x), \int _{v_{0}} ^{u}Z\Bigl(u,s,\max_{\phi \in [\tau x,x]_{\mathbb{T}}}r( \phi )\Bigr) \Delta s \biggr) \biggr\vert \Delta x \\ &\leq \bigl\vert k^{\alpha } \bigr\vert + \int _{v_{0}}^{v}n(u,x)\xi _{1} \bigl\vert r(x) \bigr\vert \biggl[\xi _{2} \bigl\vert r(x) \bigr\vert \\ &\quad{} + \int _{v_{0}}^{u}m(u,s)\xi _{2}\Bigl(\max _{\phi \in [\tau x,x]_{\mathbb{T}}} \bigl\vert r(\phi ) \bigr\vert \Bigr)\rho ^{\Delta }(s) \Delta s \biggr]\Delta x. \end{aligned}$$

Applying the same argument as in the proof of Theorem 3.9 to the above inequality yields

$$ \bigl\vert r(v) \bigr\vert \leq \biggl[\bar{\varTheta }^{-1} \biggl\{ \bar{\varOmega }^{-1} \biggl(\bar{ \varOmega }\bigl(\bar{\varTheta }(Q) \bigr)+ \int _{v_{0}}^{v}n(u,x) \biggl[1+ \int _{v_{0}} ^{u}m(u,\lambda )\rho ^{\Delta }( \lambda )\Delta \lambda \biggr]\Delta x \biggr) \biggr\} \biggr]^{\frac{1}{\alpha }}, $$

which is the desired estimate in (49). This completes the proof. □

Corollary 4.2

Let the following conditions be satisfied:

$$\begin{aligned}& \begin{aligned}[b] \bigl\vert G(v,x,y_{1},z_{1})-G(v,x,y_{2},z_{2}) \bigr\vert &\leq n(v,x) \bigl(\xi _{1} \vert y_{1} \vert -\xi _{1} \vert y_{2} \vert \bigr) \\ &\quad{} \times \bigl[\bigl(\xi _{2} \vert y_{1} \vert -\xi _{2} \vert y_{2} \vert \bigr)+\bigl( \vert z_{1} \vert + \vert z_{2} \vert \bigr)\bigr], \end{aligned} \end{aligned}$$
(51)
$$\begin{aligned}& \begin{aligned}[b] \bigl\vert Z(v,x,y_{1})-Z(v,x,y_{2}) \bigr\vert &\leq m(v,x) \bigl(\xi _{2} \vert y_{1} \vert -\xi _{2} \vert y_{2} \vert \bigr) \rho ^{\Delta }(x), \end{aligned} \end{aligned}$$
(52)

where m, n, \(\xi _{1}\), \(\xi _{2}\), ρ, r are defined as in Corollary 4.1. Then the delay dynamic equation (9) with (10) has at most one solution.

Proof

If \(r_{1}(v)\) and \(r_{2}(v)\) are solutions of (9), then we obtain

$$\begin{aligned} r_{1}^{\alpha }(v)-r_{2}^{\alpha }(v) &= \int _{v_{0}}^{v} G\biggl(u,x,r _{1}(x), \int _{v_{0}}^{u}Z\Bigl(u,s,\max_{\phi \in [\tau x,x]_{\mathbb{T}}}r_{1}( \phi )\Bigr)\Delta s \biggr) \Delta x \\ &\quad{} - \int _{v_{0}}^{v} G\biggl(u,x,r_{2}(x), \int _{v_{0}}^{u}Z\Bigl(u,s, \max_{\phi \in [\tau x,x]_{\mathbb{T}}}r_{2}( \phi )\Bigr)\Delta s \biggr)\Delta x. \end{aligned}$$

Applying hypotheses (51) and (52) to the previous equation, we have

$$\begin{aligned} \begin{aligned} \bigl\vert r_{1}^{\alpha }(v)-r_{2}^{\alpha }(v) \bigr\vert &\leq \int _{v_{0}}^{v}n(u,x) \bigl( \xi _{1} \bigl\vert r_{1}(x) \bigr\vert -\xi _{1} \bigl\vert r_{2}(x) \bigr\vert \bigr) \biggl[\bigl(\xi _{2} \bigl\vert r_{1}(x) \bigr\vert -\xi _{2} \bigl\vert r _{2}(x) \bigr\vert \bigr) \\ &\quad{} + \int _{v_{0}}^{u}m(u,s) \Bigl(\xi _{2}\max _{\phi \in [\tau x,x]_{\mathbb{T}}} \bigl\vert r_{1}(\phi ) \bigr\vert -\xi _{2} \max_{\phi \in [\tau x,x]_{\mathbb{T}}} \bigl\vert r_{2}(\phi ) \bigr\vert \Bigr) \rho ^{\Delta }(s)\Delta s \biggr]\Delta x. \end{aligned} \end{aligned}$$

Applying the same procedure of Theorem 3.9 with suitable changes to the function \(\vert r_{1}^{\alpha }(v)-r_{2}^{\alpha }(v) \vert \) in the last inequality, we get

$$ \bigl\vert r_{1}^{\alpha }(v)-r_{2}^{\alpha }(v) \bigr\vert \leq 0, \quad v\in \mathbb{T}_{0}. $$

Hence \(r_{1}(v)=r_{2}(v)\). Thus the delay dynamic equation (9) with (10) has at most one solution. The proof is complete. □

Now, we will examine the nonlinear delay integral equation (9) with the condition \(r(v)\leq \varPsi (v)\), \(v\in [\tau \mu , v_{0}]_{ \mathbb{T}}\), and \(\alpha =1\).

Example 4.3

Assume that conditions (i) and (ii) of Corollary 4.1 are fulfilled, and also that

$$ \bigl\vert G(v,x,r,y) \bigr\vert \leq n(v,x)\bigl[\sqrt{ \vert r \vert }+m(v,x)\sqrt{ \vert y \vert }\rho ^{\Delta }(x)\bigr]. $$
(53)

Then the solution \(r(v)\) satisfies

$$ \bigl\vert r(v) \bigr\vert \leq \frac{1}{4} \biggl(2 \sqrt{ \bigl\vert \varPsi (v_{0}) \bigr\vert }+ \int _{v_{0}}^{v}n(u,x) \biggl[1+ \int _{v_{0}}^{u}m(u,s)\rho ^{\Delta }(s)\Delta s \biggr]\Delta x \biggr)^{2}. $$
(54)

Proof

It is easy to see that \(r(v)\) satisfies the following integral equation:

$$ r(v) = \varPsi (v_{0})+ \int _{v_{0}}^{v} G\biggl(u,x,r(x), \int _{v_{0}} ^{u}Z\Bigl(u,s,\max_{\phi \in [\tau x,x]_{\mathbb{T}}}r( \phi )\Bigr) \Delta s \biggr)\Delta x. $$

From (53) and the above equation, we get

$$\begin{aligned} \bigl\vert r(v) \bigr\vert &\leq \bigl\vert \varPsi (v_{0}) \bigr\vert + \int _{v_{0}}^{v} \biggl\vert G\biggl(u,x,r(x), \int _{v_{0}}^{u}Z\Bigl(u,s,\max_{\phi \in [\tau x,x]_{\mathbb{T}}}r( \phi )\Bigr)\Delta s \biggr) \biggr\vert \Delta x \\ &\leq \bigl\vert \varPsi (v_{0}) \bigr\vert + \int _{v_{0}}^{v}n(u,x) \biggl[\sqrt{ \bigl\vert r(x) \bigr\vert }+ \int _{v_{0}}^{u}m(u,s)\sqrt{\max_{\phi \in [\tau x,x]_{\mathbb{T}}} \bigl\vert r(\phi ) \bigr\vert }\rho ^{\Delta }(s) \Delta s \biggr]\Delta x. \end{aligned}$$

The required inequality (54) is obtained by following the proof of Theorem 3.9 with suitable modifications applied to the previous inequality. Comparing Theorem 3.9 with (54), we see that here \(b= \vert \varPsi (v_{0}) \vert \), \(\alpha =1\), \(\varsigma _{1}(r)=1\), \(\varsigma _{2}(r)=\sqrt{r}\), \(\varOmega (r)=2 \sqrt{r}\), and \(\varOmega ^{-1}(r)=\frac{1}{4}r^{2}\). □