1 Introduction

Commutation relations of the form

$$\begin{aligned} AB=B F(A), \end{aligned}$$
(1)

where A, B are elements of an associative algebra and F is a function on the elements of the algebra, play a significant role in various areas of mathematics, physics, and engineering, and appear in various contexts under different names, such as covariance relations, crossed product type relations, semi-direct product type relations or skew-polynomial ring commutation relations.

In quantum mechanics, quantum field theory, quantum information and quantum computing, many models are described in terms of matrices or linear operators on Hilbert spaces or other normed linear spaces satisfying such commutation relations for various functions F, or other commutation relations transformable to this form after changes of variables, factorizations (such as polar decomposition) or passing to suitable functions of the operators. For example, exponentiating linear operators satisfying Heisenberg’s canonical commutation relations of quantum mechanics yields operators satisfying the commutation relations (1) with F a degree 1 polynomial given by multiplication by a scalar. For Lie algebras, q-deformations of Lie algebras, various variants of quantum harmonic oscillator algebras, quantum groups, skew-polynomial rings and algebras and many other non-commutative rings and algebras, the main methods are based on such a transition to the commutation relations of the form (1). The commutation relations of the form (1) allow efficient reordering in non-commutative expressions employing iterations of the mapping F. This leads to a broad interplay with the theory of dynamical systems obtained by iterations of the mappings F, both in the systematic investigation, spectral analysis and classification of operator representations, and in the study of the structure and properties of the corresponding algebras. Systems of imprimitivity and induced representations of groups and algebras, which play important roles in physics, analysis and algebra, are based on semi-direct product type constructions associated to actions of groups, with the commutation relations of the form (1) being fundamental for the algebraic structure and for the description and classification of their operator representations [28,29,30, 44].
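To make the relation (1) concrete, the scalar case \(F(z)=qz\) arising from such exponentiation can be realized by finite matrices. The following is a minimal numerical sketch with matrices of our own choosing (not taken from the cited references): A is diagonal with eigenvalues forming an orbit of F, and B is a shift matrix.

```python
import numpy as np

# Toy representation of AB = B F(A) with F(z) = q z (illustrative choice).
q, n = 0.5, 4
# A is diagonal with entries chosen so that a_{i+1} = F(a_i) = q * a_i.
A = np.diag([q**i for i in range(n)])
# B is the lower shift matrix: B e_i = e_{i+1}, and the last basis
# vector is mapped to zero.
B = np.diag(np.ones(n - 1), k=-1)

F = lambda M: q * M  # F applied to a matrix argument
assert np.allclose(A @ B, B @ F(A))  # the covariance relation (1) holds
```

The same construction works for any F, provided the diagonal entries of A are chosen along an orbit of F, which already hints at the interplay with dynamical systems generated by iterations of F.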
The commutation relations of the form (1) are also fundamental for symmetry analysis, since many symmetry groups have generators satisfying relations of this form. For example, in the dihedral groups, the generators geometrically associated with rotations and reflections satisfy relations of the form (1) for the rational function \(F(z)=\frac{1}{z}\), which inverts elements of the group and the corresponding elements of its group algebra. In general, relations of the form (1) are built into the structure of many important groups, semigroups, rings and algebras that are semi-direct products defined using actions of groups or semigroups on other groups, semigroups, rings or algebras. Such semi-direct product structures and their matrix and operator representations are important in solid-state physics in connection with crystallographic structures, and in the description, classification and symmetry analysis of atomic lattices and of complex molecular and chemical structures and processes. There are also advanced coding algorithms and quantum computing algorithms based on non-commutative structure and computations in non-commutative semi-direct products and skew-polynomial rings and their matrix representations, providing applications of the relations (1) in computer science, quantum computing, cryptography and advanced methods for secure information transmission.
The commutation relations of the form (1) also play an important role in operator methods for the solution of differential and difference equations, as many specific transforms, such as the Fourier, Laplace or wavelet transforms, satisfy relations of the form (1) with appropriate shift, weighted shift, weighted composition, differential or difference type operators. This leads to unified broad operator methods in harmonic analysis, as well as to general operator methods and symmetry analysis of differential, difference and integral equations in many applications in physics and engineering. Linear operators satisfying commutation relations are called representations of the commutation relations. Representations of covariance commutation relations of the form (1) by linear operators on finite-dimensional and infinite-dimensional linear spaces, Hilbert spaces and other normed linear spaces play an important role in the investigation of actions and induced representations of groups and semigroups, crossed product operator algebras, dynamical systems, harmonic analysis, wavelet and fractal analysis, non-commutative geometry and in applications in physics, engineering and computer science (see [4, 5, 22,23,24, 28,29,30, 36, 37, 44] and references therein).

\(*\)-Representations of the relation (1) by bounded and unbounded linear operators on a Hilbert space have been extensively studied using functional calculus and spectral theory for self-adjoint, unitary and more general normal operators, the theory of crossed product \(C^*\)-algebras, \(W^*\)-algebras and Banach \(*\)-algebras, and their interplay with dynamical systems and iterated function systems generated by iteration of the maps involved in the defining commutation relations, which describe the actions of groups and semigroups (see [3, 8,9,10, 12, 13, 15,16,17,18,19, 31,32,36, 38,39,42, 44,45,57] and references therein).

However, for many important classes of linear operators defined on Banach spaces or normed linear spaces other than Hilbert spaces, spectral theory and functional calculus are not available to the same extent as for operators on Hilbert spaces. For example, the generalization of spectral theory to the classification of families of operators up to equivalences, such as unitary equivalence for families of self-adjoint or unitary operators on Hilbert spaces, becomes problematic in Banach and normed spaces which are not Hilbert spaces. In many contexts and applications, the classification of operator representations of the commutation relations (1), and of the algebras defined by such relations, up to some equivalence might be difficult and of secondary interest, while the main mathematical goal, and the main interest from the side of applications, is the direct construction of operator models using specific important classes of operators satisfying the crossed product type covariance commutation relations (1) on various specific Banach spaces and other normed linear spaces.

The development of new, more direct methods and constructions of representations of the commutation relations by operators of important classes on various important linear spaces, without relying on classifications of operators based on spectral theory, is one of the general goals of our investigation in this paper. Representations of commutation relations of the form (1) by integral operators on function spaces have not been systematically investigated in the literature, and the general results and new examples of operator representations of (1) by integral operators obtained in this paper are, to our knowledge, new.

The fundamental role and applications of the commutation relations (1) in many areas of mathematics and applications come mainly from the fact that the relations (1) allow effective reordering of non-commuting elements or operators satisfying the relation. This is important in computations with non-commutative expressions and functions of such elements or operators, and in the theories and results built on the possibility of obtaining and effectively using such non-commutative computations and reordering formulas. For operator representations by operators from specific classes on specific spaces, it is thus important to express, if possible, such reordering formulas directly in terms of the defining parameters, functions and operations of the operators, that is, in the case of the integral operator representations considered in this article, in terms of the defining kernel functions of the integral operators.
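As a sanity check of such reordering, note that (1) implies \(A^k B = B\,F(A)^k\), hence \(g(A)B = B\,g(F(A))\) for any polynomial g, and \(AB^m = B^m F^{\circ m}(A)\), where \(F^{\circ m}\) denotes the m-fold composition. A small numerical sketch with illustrative matrices of our own choosing (a toy representation with \(F(z)=qz\), so \(F^{\circ m}(z)=q^m z\)):

```python
import numpy as np

# Toy matrices satisfying AB = B F(A) with F(z) = q z (our own choice).
q, n = 0.5, 5
A = np.diag([q**i for i in range(n)])
B = np.diag(np.ones(n - 1), k=-1)  # lower shift: B e_i = e_{i+1}

# A^k B = B F(A)^k for all k, hence g(A) B = B g(F(A)) for polynomials g.
FA = q * A
for k in range(1, 4):
    assert np.allclose(np.linalg.matrix_power(A, k) @ B,
                       B @ np.linalg.matrix_power(FA, k))

# A B^m = B^m F^{m-fold}(A); here the m-fold composition is q**m * z.
for m in range(1, 4):
    Bm = np.linalg.matrix_power(B, m)
    assert np.allclose(A @ Bm, Bm @ (q**m * A))
```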

There are also many interesting open problems and directions connected to our approach and general results, as well as to the concrete pairs of operator representations of (1) by integral operators that we found as examples for our results. Further study of operator-theoretic properties of these operators, including their spectra, invariant subspaces, reducibility, operator factorizations and other decompositions, and commutants (operators commuting with these operators), and establishing the interplay with orbit spaces, invariant or quasi-invariant measures and other properties of the dynamical systems generated by the iterations of the mappings F, could be very interesting. We chose in this paper to consider operators on \(L_p\) spaces, as these are natural spaces where many general and concrete classes of integral operators are well defined. Many integral operators act between important Banach or normed spaces which are not Hilbert spaces, and the development of general systematic methods for finding, and further studying, integral operators satisfying the commutation relations of the form (1) is an important open direction, both for the study of their spectral and other properties and for their possible applications in the various areas of mathematics and applications where the commutation relations (1) and their operator representations are important.

In this paper, we develop general methods for the direct construction of representations of the covariance commutation relations of the form (1) by bounded linear integral operators on the Banach spaces \(L_p\). We focus on the construction and properties of nontrivial (\(B\ne 0\)) representations of the commutation relations (1). We consider representations by linear integral operators defined by kernels satisfying various conditions, and derive conditions on the kernel functions so that the corresponding operators satisfy (1) for a general polynomial F and for important special cases, such as arbitrary affine or quadratic polynomials, or monomials of any degree. Furthermore, we derive composition and reordering formulas for the operators involved. This paper consists of four sections. After the introduction, we present in Sect. 2 some preliminaries, notations, basic definitions and a lemma used throughout the article. In Sect. 3, we present our main methods and the results describing conditions on the kernels of the integral operators in order for them to satisfy the commutation relations (1), and construct several interesting new examples of representations of (1), for general and concrete polynomials F, when both operators A and B are linear integral operators acting on the Banach spaces \(L_p\). In Sect. 4, we obtain useful general reordering formulas for integral operators satisfying the general commutation relations of the form (1), expressed in terms of their kernel functions.

2 Preliminaries and notations

In this section we present some necessary preliminaries, basic definitions and notations for this article [1, 2, 6, 20, 25, 26, 43].

Let \({\mathbb {R}}\) be the set of all real numbers, \( S\subseteq {\mathbb {R}}\) be a nonempty Lebesgue measurable set, and \((S,\varSigma , {\tilde{m}})\) be a \(\sigma \)-finite measure space, where \(\varSigma \) is a \(\sigma \)-algebra of Lebesgue measurable subsets of S, \({\tilde{m}}\) is the Lebesgue measure, and S can be covered by at most countably many disjoint sets \(E_1,E_2,E_3,\ldots \) such that \( E_i\in \varSigma , \, {\tilde{m}}(E_i)<\infty \), \(i=1,2,\ldots \). For \(1\le p<\infty ,\) we denote by \(L_p(S)\) the set of all classes of equivalent measurable functions \(f:S\rightarrow {\mathbb {R}}\) (functions differing only on a set of measure zero) such that \(\int \nolimits _{S} |f(t)|^p dt < \infty .\) This is a Banach space (Hilbert space when \(p=2\)) with norm \(\Vert f\Vert _p= \left( \int \nolimits _{S} |f(t)|^p dt \right) ^{\frac{1}{p}}.\) We denote by \(L_\infty (S)\) the set of all classes of equivalent measurable functions \(f:S\rightarrow {\mathbb {R}}\) that are essentially bounded, that is, \(\mathop {{{\,\mathrm{ess\;sup}\,}}}_{t\in S} |f(t)|<\infty \). This is a Banach space with norm \(\Vert f\Vert _{\infty }=\mathop {{{\,\mathrm{ess\;sup}\,}}}_{t\in S} |f(t)|.\)

We denote by

$$\begin{aligned} \mu _{[\varsigma _1,\varsigma _2]}(u,v)=\int \limits _{\varsigma _1}^{\varsigma _2} u(t)v(t)dt \end{aligned}$$
(2)

for real numbers \(\varsigma _1<\varsigma _2\) and real-valued functions \(u,v\).

Definition 2.1

Let \(f:\, {\mathbb {R}}\rightarrow {\mathbb {R}}\) be any function. Consider the family of all open sets \(\{ \varOmega _i\}_{i\in I}\) in \({\mathbb {R}}\), such that \(f=0\) almost everywhere on \(\varOmega _i\), \(i\in I\). Set \(\varOmega =\bigcup \nolimits _{i\in I} \varOmega _i\). The support of f is the complement of the set \(\varOmega \) and we denote it by \( \mathrm{supp \, } f. \)

We now state a useful lemma for integral operators.

Lemma 2.1

([14]) Let \(1\le p \le \infty \) and \(\alpha _1,\beta _1,\alpha _2,\beta _2\in {\mathbb {R}}\), \(\alpha _1<\beta _1\), \(\alpha _2<\beta _2\), and let \(f:[\alpha _1,\beta _1]\rightarrow {\mathbb {R}}\), \(g: [\alpha _2,\beta _2]\rightarrow {\mathbb {R}}\) be measurable functions such that for all \(x\in L_p({\mathbb {R}})\), the integrals \( \int \nolimits _{\alpha _1}^{\beta _1} f(t)x(t)dt\) and \(\int \nolimits _{\alpha _2}^{\beta _2} g(t)x(t)dt \) are finite. Let \( G=[\alpha _1,\beta _1]\cap [\alpha _2,\beta _2]. \) Then the following statements are equivalent:

  1.

    For \(1\le p \le \infty \) and for all \(x\in L_p({\mathbb {R}})\),

    $$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} f(t)x(t)dt=\int \limits _{\alpha _2}^{\beta _2} g(t)x(t)dt. \end{aligned}$$
    (3)
  2.

    The following conditions hold:

  •    (a) for almost every \(t\in G\), \(f(t)=g(t)\);

  •    (b) for almost every \(t \in [\alpha _1,\beta _1]{\setminus } G,\ f(t)=0;\)

  •    (c) for almost every \(t \in [\alpha _2,\beta _2]{\setminus } G,\ g(t)=0.\)

Proof

\(2 \Rightarrow 1\) follows by direct computation.

Suppose that 1 holds. Take \(x(t)=I_{G_1}(t)\), the indicator function of the set \(G_1=[\alpha _1,\beta _1]\cup [\alpha _2,\beta _2]\). For this function we have, for some constant \(\eta \),

$$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} f(t)x(t)dt=\int \limits _{\alpha _2}^{\beta _2} g(t)x(t)dt= \int \limits _{\alpha _1}^{\beta _1} f (t)dt=\int \limits _{\alpha _2}^{\beta _2} g (t)dt =\eta . \end{aligned}$$

Now by taking \(x(t)=I_{[\alpha _1,\beta _1]{\setminus } G}(t)\) we get

$$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} f(t)x(t)dt=\int \limits _{\alpha _2}^{\beta _2} g(t)x(t)dt= \int \limits _{[\alpha _1,\beta _1]{\setminus } G} f (t)dt=\int \limits _{\alpha _2}^{\beta _2} g (t)\cdot 0dt =0. \end{aligned}$$

Then \( \int \nolimits _{[\alpha _1,\beta _1]{\setminus } G} f (t)dt=0. \) If instead \(x(t)=I_{[\alpha _2,\beta _2]{\setminus } G}(t)\), then \( \int \nolimits _{[\alpha _2,\beta _2]{\setminus } G} g (t)dt=0. \) We claim that \(f(t)=0\) for almost every \(t\in [\alpha _1,\beta _1]{\setminus } G\) and \(g(t)=0\) for almost every \(t\in [\alpha _2,\beta _2]{\setminus } G\). We take an arbitrary partition \(G_1{\setminus } G=\bigcup S_i\) such that \(S_i\cap S_j=\emptyset \) for \(i\not =j\) and each set \(S_i\) has positive measure. For each \(x_i(t)=I_{S_i}(t)\) with \(S_i\subseteq [\alpha _1,\beta _1]{\setminus } G\), we have

$$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} f(t)x_i(t)dt=\int \limits _{\alpha _2}^{\beta _2} g(t)x_i(t)dt \Rightarrow \int \limits _{S_i} f (t)dt=\int \limits _{\alpha _2}^{\beta _2} g (t)\cdot 0dt =0. \end{aligned}$$

Thus, for each such \(S_i,\) \(\int \nolimits _{S_i} f (t)dt=0. \) Since we can choose an arbitrary partition with positive measure on each of its elements, \(f(t)=0\) for almost every \(t\in [\alpha _1,\beta _1]{\setminus } G.\) Analogously, \(g(t)=0\) for almost every \(t\in [\alpha _2,\beta _2]{\setminus } G.\) Then,

$$\begin{aligned} \eta = \int \limits _{\alpha _1}^{\beta _1} f (t)dt=\int \limits _{\alpha _2}^{\beta _2} g (t)dt =\int \limits _G f(t)dt=\int \limits _G g (t)dt. \end{aligned}$$

Then, for all \(x\in L_p({\mathbb {R}})\) we have

$$\begin{aligned} \int \limits _G f (t)x(t)dt=\int \limits _G g (t)x(t)dt \Longleftrightarrow \int \limits _G (f(t)-g(t))x(t)dt=0. \end{aligned}$$

By taking \(x(t)=\left\{ \begin{array}{ll} 1, &{}\quad \text{ if } f(t)-g(t)>0, \\ -1, &{}\quad \text{ if } f(t)-g(t)<0, \\ 0, &{}\quad \text{ if } f(t)-g(t)=0, \end{array}\right. \) for almost every \(t\in G\), and \(x(t)=0\) for almost every \(t\in {\mathbb {R}}{\setminus } G\), we get \(\int \nolimits _{G} |f(t)-g(t)|dt=0\). This implies that \(f(t)=g(t)\) for almost every \(t\in G\). \(\square \)
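The equivalence in Lemma 2.1 can be illustrated numerically. In the sketch below (an example of our own, with \([\alpha _1,\beta _1]=[0,2]\), \([\alpha _2,\beta _2]=[1,3]\), \(G=[1,2]\)), the two integrals in (3) agree when conditions (a)–(c) hold, and disagree once condition (b) is violated:

```python
import numpy as np

def integral(h, a, b, n=20000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    s = a + (np.arange(n) + 0.5) * (b - a) / n
    return float(np.sum(h(s)) * (b - a) / n)

# f on [0,2] and g on [1,3]: equal on G = [1,2] and zero off G,
# so conditions (a)-(c) of Lemma 2.1 hold.
f = lambda s: np.where((s >= 1) & (s <= 2), np.sin(np.pi * s), 0.0)
g = lambda s: np.where((s >= 1) & (s <= 2), np.sin(np.pi * s), 0.0)
x = lambda s: np.exp(-s) + s**2  # an arbitrary test function

lhs = integral(lambda s: f(s) * x(s), 0, 2)
rhs = integral(lambda s: g(s) * x(s), 1, 3)
assert abs(lhs - rhs) < 1e-6  # equality (3) holds

# Violating condition (b) -- f nonzero on [0,1), outside G -- breaks (3).
f_bad = lambda s: f(s) + np.where(s < 1, 1.0, 0.0)
lhs_bad = integral(lambda s: f_bad(s) * x(s), 0, 2)
assert abs(lhs_bad - rhs) > 0.1
```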

Remark 2.1

Lemma 2.1 can also be used in the particular case when \(f\in L_q([\alpha _1,\beta _1])\), \(g\in L_q([\alpha _2,\beta _2])\), \(1\le q\le \infty \), since for \(1\le p \le \infty \) with \(\frac{1}{p}+\frac{1}{q}=1\) and for all \(x\in L_p({\mathbb {R}})\), the integrals \( \int \nolimits _{\alpha _1}^{\beta _1} f(t)x(t)dt,\ \int \nolimits _{\alpha _2}^{\beta _2} g(t)x(t)dt \) are finite by Hölder's inequality.

Remark 2.2

When operators are given in abstract form, we use the notation \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), meaning that the operator A is well defined from \(L_p({\mathbb {R}})\) to \(L_p({\mathbb {R}})\), without discussing sufficient conditions for this to hold. For instance, for the integral operator \( (Ax)(t)= \int \nolimits _{{\mathbb {R}}} k(t,s)x(s)ds \) there are sufficient conditions on the kernel \(k(\cdot ,\cdot )\) such that the operator A is well defined from \(L_p({\mathbb {R}}) \) to \(L_p({\mathbb {R}})\) and bounded [11, 20, 21]. In particular, [20, Theorem 6.18] states the following: if \(1\le p\le \infty \) and \(k:{\mathbb {R}}\times [\alpha ,\beta ]\rightarrow {\mathbb {R}}\) is a measurable function, \(\alpha ,\beta \in {\mathbb {R}}\), \(\alpha <\beta \), and there is a constant \(\lambda >0\) such that

$$\begin{aligned} \mathop {{{\,\mathrm{ess\;sup}\,}}}_{s\in [\alpha ,\beta ]}\int \limits _{{\mathbb {R}}}|k(t,s)| dt\le \lambda , \quad \mathop {{{\,\mathrm{ess\;sup}\,}}}_{t\in {\mathbb {R}}}\int \limits _{\alpha }^\beta |k(t,s)| ds \le \lambda , \end{aligned}$$
(4)

then A is well defined from \(L_p({\mathbb {R}})\) to \(L_p({\mathbb {R}})\), \(1\le p \le \infty \), and bounded. When \(p=1\) or \(p=\infty \), only one of the conditions in (4) suffices.
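Conditions (4) can be checked numerically for a concrete kernel. The sketch below uses the illustrative kernel \(k(t,s)=e^{-(t-s)^2}\) on \({\mathbb {R}}\times [0,1]\) (our own example, not from the cited references), for which both suprema are bounded by \(\lambda =\sqrt{\pi }\):

```python
import numpy as np

# Illustrative kernel k(t,s) = exp(-(t-s)^2) on R x [0, 1] (our own example).
alpha, beta = 0.0, 1.0
T = np.linspace(-20.0, 21.0, 8001)   # truncation of R; the tails are negligible
S = np.linspace(alpha, beta, 401)
dT, dS = T[1] - T[0], S[1] - S[0]
K = np.exp(-(T[:, None] - S[None, :]) ** 2)   # K[i, j] = k(T[i], S[j])

# The two conditions in (4): sup over s of the t-integral, and vice versa.
sup_over_s = np.max(np.sum(np.abs(K), axis=0) * dT)
sup_over_t = np.max(np.sum(np.abs(K), axis=1) * dS)
lam = np.sqrt(np.pi)   # since the Gaussian integral over R equals sqrt(pi)
assert sup_over_s <= lam + 1e-3
assert sup_over_t <= lam + 1e-3
```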

3 Representations by linear integral operators

In this section we consider representations of the covariance type commutation relation (1) when both operators A and B are linear integral operators on the Banach space \(L_p({\mathbb {R}})\), \(1\le p\le \infty \), defined as follows:

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} k(t,s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s)x(s)ds, \end{aligned}$$

where \(\alpha _i,\beta _i\in {\mathbb {R}}, i=1,2\), \(\alpha _1<\beta _1\), \(\alpha _2<\beta _2\), and \( k(t,s):{\mathbb {R}}\times [\alpha _1,\beta _1]\rightarrow {\mathbb {R}} \) and \({\tilde{k}}(t,s):{\mathbb {R}}\times [\alpha _2,\beta _2]\rightarrow {\mathbb {R}}\) are measurable functions.

Theorem 3.1

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:\, L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} k(t,s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s)x(s)ds, \end{aligned}$$

where \(\alpha _i,\beta _i\in {\mathbb {R}}\), \(i=1,2\), \(\alpha _1<\beta _1\), \(\alpha _2<\beta _2\), and \( k(t,s):{\mathbb {R}}\times [\alpha _1,\beta _1]\rightarrow {\mathbb {R}} \) and \({\tilde{k}}(t,s):{\mathbb {R}}\times [\alpha _2,\beta _2]\rightarrow {\mathbb {R}}\) are measurable functions. Consider a polynomial defined by \(F(z)=\sum \nolimits _{j=0}^{n} \delta _j z^j\), where \(\delta _j \in {\mathbb {R}}\), \(j=0,\ldots ,n\). Set \(G=[\alpha _1,\beta _1]\cap [\alpha _2,\beta _2]\) and

$$\begin{aligned} k_0(t,s)= & {} k(t,s), \quad k_m(t,s)=\int \limits _{\alpha _1}^{\beta _1} k(t,\tau )k_{m-1}(\tau ,s)d\tau ,\quad m=1,\ldots ,n, \\ F_0(k(t,s))= & {} 0,\ F_n(k(t,s))=\sum \limits _{j=1}^{n} \delta _j k_{j-1}(t,s),\quad \text{ if } n\ge 1. \end{aligned}$$

Then, \( AB=BF(A) \) if and only if the following conditions are fulfilled:

  1.

    for almost every \((t,\tau )\in {\mathbb {R}}\times G\),

    $$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} k(t,s){\tilde{k}}(s,\tau )ds-\delta _0{\tilde{k}}(t,\tau ) = \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s) F_n(k(s,\tau ))ds; \end{aligned}$$
  2.

    for almost every \((t,\tau )\in {\mathbb {R}}\times ([\alpha _2,\beta _2]{\setminus } G)\),

    $$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} k(t,s){\tilde{k}}(s,\tau )ds=\delta _0{\tilde{k}}(t,\tau ); \end{aligned}$$
  3.

    for almost every \((t,\tau )\in {\mathbb {R}}\times ([\alpha _1,\beta _1]{\setminus } G)\),

    $$\begin{aligned} \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s) F_n(k(s,\tau ))ds=0. \end{aligned}$$

Proof

By applying the Fubini theorem from [1] and iterated kernels from [27], we have

$$\begin{aligned} (A^2x)(t)= & {} \int \limits _{\alpha _1}^{\beta _1} k(t,s)(Ax)(s)ds= \int \limits _{\alpha _1}^{\beta _1} k(t,s)\left( \ \int \limits _{\alpha _1}^{\beta _1} k(s,\tau )x(\tau )d\tau \right) ds\\= & {} \int \limits _{\alpha _1}^{\beta _1} \left( \ \int \limits _{\alpha _1}^{\beta _1} k(t,s)k(s,\tau )ds\right) x(\tau ) d\tau = \int \limits _{\alpha _1}^{\beta _1} k_1(t,\tau )x(\tau )d\tau ,\\{} & {} \text{ where } k_1(t,s)=\int \limits _{\alpha _1}^{\beta _1} k(t,\tau )k(\tau ,s)d\tau ;\\ (A^3x)(t)= & {} \int \limits _{\alpha _1}^{\beta _1} k(t,s)(A^2x)(s)ds =\int \limits _{\alpha _1}^{\beta _1} k(t,s)\left( \ \int \limits _{\alpha _1}^{\beta _1} k_1(s,\tau )x(\tau )d\tau \right) ds\\= & {} \int \limits _{\alpha _1}^{\beta _1} \left( \ \int \limits _{\alpha _1}^{\beta _1} k(t,s)k_1(s,\tau )ds\right) x(\tau )d\tau =\int \limits _{\alpha _1}^{\beta _1} k_2(t,\tau )x(\tau )d\tau ,\\{} & {} \text{ where } k_2(t,s)=\int \limits _{\alpha _1}^{\beta _1} k(t,\tau )k_1(\tau ,s)d\tau , \end{aligned}$$

and in general for all \(n\ge 1\),

$$\begin{aligned} (A^n x)(t)&=\int \limits _{\alpha _1}^{\beta _1} k_{n-1}(t,s)x(s)ds,\nonumber \\ \text{ where } \ k_m(t,s)&=\int \limits _{\alpha _1}^{\beta _1} k(t,\tau )k_{m-1}(\tau ,s)d\tau ,\ m=1,\ldots ,n,\ k_0(t,s)=k(t,s). \end{aligned}$$
(5)

It follows that

$$\begin{aligned} (F(A)x)(t)= & {} \delta _0 x(t)+ \sum \limits _{j=1}^{n} \delta _j (A^j x)(t)=\delta _0 x(t)+\sum \limits _{j=1}^{n} \delta _{j} \int \limits _{\alpha _1}^{\beta _1} k_{j-1}(t,s)x(s)ds \nonumber \\= & {} \delta _0 x(t)+\int \limits _{\alpha _1}^{\beta _1} F_n(k(t,s))x(s)ds, \nonumber \\{} & {} \text{ where } F_0(k(t,s))=0,\ F_n(k(t,s))=\sum \limits _{j=1}^{n} \delta _j k_{j-1}(t,s),\ \text{ if } n\ge 1. \end{aligned}$$
(6)

We now compute BF(A)x and (AB)x. We have

$$\begin{aligned} (BF(A)x)(t)= & {} \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s)(F(A)x)(s)ds\\= & {} \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s) \left( \delta _0 x(s)+\int \limits _{\alpha _1}^{\beta _1} F_n(k(s,\tau ))x(\tau )d\tau \right) ds \\= & {} \delta _0\int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s)x(s)ds+ \int \limits _{\alpha _1}^{\beta _1} \left( \ \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s)F_n(k(s,\tau ))ds\right) x(\tau ) d\tau \\= & {} \delta _0\int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s)x(s)ds+ \int \limits _{\alpha _1}^{\beta _1} k_{BFA}(t,\tau )x(\tau )d\tau \\{} & {} \text{ where } k_{BFA}(t,\tau )=\int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s) F_n(k(s,\tau ))ds,\\ (ABx)(t)= & {} \int \limits _{\alpha _1}^{\beta _1} k(t,s)(Bx)(s)ds=\int \limits _{\alpha _1}^{\beta _1} k(t,s)\left( \ \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(s,\tau )x(\tau )d\tau \right) ds\\= & {} \int \limits _{\alpha _2}^{\beta _2} \left( \ \int \limits _{\alpha _1}^{\beta _1} k(t,s){\tilde{k}}(s,\tau )ds\right) x(\tau )d\tau = \int \limits _{\alpha _2}^{\beta _2} k_{AB}(t,\tau ) x(\tau )d\tau ,\\{} & {} \text{ where } k_{AB}(t,\tau )=\int \limits _{\alpha _1}^{\beta _1} k(t,s){\tilde{k}}(s,\tau )ds. \end{aligned}$$

We thus have \((ABx)(t)=(BF(A)x)(t)\) for all \(x\in L_p({\mathbb {R}})\) if and only if

$$\begin{aligned} \int \limits _{\alpha _2}^{\beta _2} [ k_{AB}(t,\tau )-\delta _0 {\tilde{k}}(t,\tau )] x(\tau )d\tau =\int \limits _{\alpha _1}^{\beta _1} k_{BFA}(t,\tau )x(\tau )d\tau . \end{aligned}$$

By applying [7, Corollary 3.4.2] and Lemma 2.1 we have \(AB=BF(A)\) if and only if the following conditions hold

  1.

    for almost every \((t,\tau )\in {\mathbb {R}}\times G\),

    $$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} k(t,s){\tilde{k}}(s,\tau )ds-\delta _0{\tilde{k}}(t,\tau ) = \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s) F_n(k(s,\tau ))ds; \end{aligned}$$
  2.

    for almost every \((t,\tau )\in {\mathbb {R}}\times ([\alpha _2,\beta _2]{\setminus } G)\),

    $$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} k(t,s){\tilde{k}}(s,\tau )ds=\delta _0{\tilde{k}}(t,\tau ). \end{aligned}$$
  3.

    for almost every \((t,\tau )\in {\mathbb {R}}\times ([\alpha _1,\beta _1]{\setminus } G)\),

    $$\begin{aligned} \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s) F_n(k(s,\tau ))ds=0. \end{aligned}$$

\(\square \)

Remark 3.1

In Theorem 3.1, when \([\alpha _1,\beta _1]=[\alpha _2,\beta _2]=G\), conditions 2 and 3 are imposed on a set of measure zero, so they can be ignored. Thus, only condition 1 remains. When \([\alpha _1,\beta _1]\not =[\alpha _2,\beta _2]\), we also need to check conditions 2 and 3 outside the intersection \(G=[\alpha _1,\beta _1]\cap [\alpha _2,\beta _2]\). Moreover, condition 3, which is, for almost every \((t,\tau )\in {\mathbb {R}}\times ([\alpha _1,\beta _1]{\setminus } G)\),

$$\begin{aligned} \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s) F_n(k(s,\tau ))ds=0. \end{aligned}$$
(7)

does not imply \(B\left( \sum \nolimits _{k=1}^{n} \delta _k A^k\right) =0\), because the corresponding kernel has to satisfy (7) only on the set \({\mathbb {R}}\times ([\alpha _1,\beta _1]{\setminus } G)\) and not on its whole domain of definition. On the other hand, the same kernel has to satisfy condition 1, that is, for almost every \((t,\tau )\in {\mathbb {R}}\times G\),

$$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} k(t,s){\tilde{k}}(s,\tau )ds-\delta _0{\tilde{k}}(t,\tau ) = \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s) F_n(k(s,\tau ))ds. \end{aligned}$$

Note that Theorem 3.1 does not imply \(\sum \nolimits _{k=1}^n \delta _k A^k=0\). In fact, \(\sum \nolimits _{k=1}^n \delta _k A^k=0\) would imply \( B\left( \sum \nolimits _{k=1}^{n} \delta _k A^k\right) =0\), which, as mentioned above, need not hold in general.

Example 3.1

Consider \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p<\infty \) defined as follows, for almost every t,

$$\begin{aligned} (Ax)(t)= \int \limits _0^\pi k(t,s)x(s)ds,\quad (Bx)(t)= \int \limits _0^\pi {\tilde{k}}(t,s)x(s)ds, \end{aligned}$$

where for almost every \( (t,s)\in {\mathbb {R}}\times [0,\pi ]\),

$$\begin{aligned} k(t,s)=\frac{2}{\pi }(\cos t \cos s+\sin t\sin s+\cos t\sin s),\quad {\tilde{k}}(t,s)=\frac{2}{\pi }(\cos t \cos s+2\sin t\sin s). \end{aligned}$$

These operators cannot serve as an example for Theorem 3.1, because A and B are not well defined as linear operators from \(L_p({\mathbb {R}})\) to \(L_p({\mathbb {R}})\) for any p such that \(1\le p<\infty \). In fact, there are functions \(x_0(\cdot )\) in \(L_p({\mathbb {R}})\) such that \((Ax_0)(\cdot )\) is not in \(L_p({\mathbb {R}})\), \(1\le p<\infty \). For instance, for \(x_0(t)=\textrm{exp}(-|t|)\), \(t\in {\mathbb {R}}\), we have

$$\begin{aligned} \Vert Ax_0\Vert ^p= & {} \int \limits _{-\infty }^{\infty }\left| \int \limits _{0}^{\pi }\frac{2}{\pi }(\cos t \cos s+\sin t\sin s+\cos t\sin s )\textrm{exp}(-|s|)ds\right| ^p dt \\= & {} \left( \frac{2}{\pi }\right) ^p\int \limits _{-\infty }^{\infty }\left| 2\gamma _0 \cos t+\gamma _0 \sin t\right| ^p dt\\= & {} \left| \frac{2}{\pi }\gamma _0\sqrt{5}\right| ^p \int \limits _{-\infty }^{\infty }\left| \cos \left( t-\arctan (\frac{1}{2})\right) \right| ^p dt\\= & {} \left| \frac{2}{\pi }\gamma _0\sqrt{5}\right| ^p \int \limits _{-\infty }^{\infty }\left| \cos t\right| ^p dt, \end{aligned}$$

where

$$\begin{aligned} \gamma _0=\int \limits _{0}^{\pi } \cos (s)\, \textrm{exp}(-|s|) ds=\int \limits _{0}^{\pi } \sin (s) \,\textrm{exp}(-|s|) ds=\frac{1}{2}+\frac{1}{2}e^{-\pi }. \end{aligned}$$
(8)

Using the fact that for fixed p such that \(1\le p <\infty \), the function \(|\cos t|^p\) is periodic with period \(\pi \) and \(|\cos t|^p>0\), for all \(t\in ]\frac{\pi }{2},\frac{3}{2}\pi [\), we have

$$\begin{aligned} \Vert Ax_0\Vert ^p= & {} \left| \frac{2}{\pi }\gamma _0\sqrt{5}\right| ^p \int \limits _{-\infty }^{\infty }\left| \cos (t)\right| ^p dt \ge \left| \frac{2}{\pi }\gamma _0\sqrt{5}\right| ^p \int \limits _{0}^{\infty }\left| \cos t\right| ^p dt \\\ge & {} \left| \frac{2}{\pi }\gamma _0\sqrt{5}\right| ^p \sum \limits _{k=0}^{\infty } \int \limits _{\frac{\pi }{2}+k\pi }^{\frac{\pi }{2}+(k+1)\pi }\left| \cos t\right| ^p dt=\infty . \end{aligned}$$

Therefore \((Ax_0)(\cdot )\not \in L_p({\mathbb {R}})\) for any p such that \(1\le p<\infty \).
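The value of the constant \(\gamma _0\) in (8) and the divergence mechanism above are easy to confirm numerically; a small sketch:

```python
import numpy as np

# Midpoint-rule check of (8): both integrals equal (1 + exp(-pi)) / 2.
n = 200000
s = (np.arange(n) + 0.5) * np.pi / n   # midpoints of [0, pi]
w = np.pi / n
gamma_cos = float(np.sum(np.cos(s) * np.exp(-s)) * w)
gamma_sin = float(np.sum(np.sin(s) * np.exp(-s)) * w)
target = 0.5 * (1.0 + np.exp(-np.pi))
assert abs(gamma_cos - target) < 1e-8
assert abs(gamma_sin - target) < 1e-8

# |cos t|^p has period pi, so each period contributes the same positive
# amount to the integral over [0, infinity), forcing divergence; e.g. p = 2:
one_period = float(np.sum(np.abs(np.cos(s)) ** 2) * w)
assert abs(one_period - np.pi / 2) < 1e-6   # integral of cos^2 over [0, pi]
```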

For operator B we have

$$\begin{aligned} \Vert Bx_0\Vert ^p= & {} \int \limits _{-\infty }^{\infty }\left| \int \limits _{0}^{\pi }\frac{2}{\pi }(\cos t \cos s+2\sin t\sin s)\textrm{exp}(-|s|)ds\right| ^p dt \\= & {} \left| \frac{2}{\pi }\right| ^p \int \limits _{-\infty }^{\infty } |\gamma _0\cos t+2\gamma _0 \sin t|^pdt\\= & {} \left| \frac{2}{\pi }\gamma _0 \sqrt{5}\right| ^p \int \limits _{-\infty }^{\infty } | \cos (t-\arctan 2)|^p dt=\infty \end{aligned}$$

for any \(1\le p<\infty \), by an argument similar to that for \(\Vert Ax_0\Vert ^p\), where \(\gamma _0\) is given in (8). Then \((Bx_0)(\cdot )\not \in L_p({\mathbb {R}})\) for any \(1 \le p<\infty \).

Nevertheless, the operators A and B are well defined as linear operators from \(L_\infty ({\mathbb {R}})\) to \(L_\infty ({\mathbb {R}})\). In fact, the corresponding kernels satisfy the conditions of [20, Theorem 6.18], that is,

$$\begin{aligned} \mathop {{{\,\mathrm{ess\;sup}\,}}}_{t\in {\mathbb {R}} } \bigg (\int \limits _{\alpha _1}^{\beta _1} |k(t,s)| ds\bigg )= & {} \mathop {{{\,\mathrm{ess\;sup}\,}}}_{t\in {\mathbb {R}} } \bigg (\int \limits _{0}^{\pi } \bigg |\frac{2}{\pi }(\cos t \cos s+\sin t\sin s+\cos t\sin s) \bigg | ds\bigg ) \le 6, \\ \mathop {{{\,\mathrm{ess\;sup}\,}}}_{t\in {\mathbb {R}} } \bigg (\int \limits _{\alpha _2}^{\beta _2} |{\tilde{k}}(t,s)| ds\bigg )= & {} \mathop {{{\,\mathrm{ess\;sup}\,}}}_{t\in {\mathbb {R}} } \bigg (\int \limits _{0}^{\pi } \bigg |\frac{2}{\pi }(\cos t \cos s+2\sin t\sin s) \bigg | ds\bigg ) \le 6. \end{aligned}$$
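These bounds can also be verified numerically; a brief sketch (the grids and the period-based restriction of t to \([0,2\pi ]\) are our own choices):

```python
import numpy as np

# The kernels are 2*pi-periodic in t, so it suffices to scan t over [0, 2*pi].
t = np.linspace(0.0, 2.0 * np.pi, 501)[:, None]
n = 2000
s = ((np.arange(n) + 0.5) * np.pi / n)[None, :]   # midpoints of [0, pi]
w = np.pi / n

k  = (2/np.pi) * (np.cos(t)*np.cos(s) + np.sin(t)*np.sin(s) + np.cos(t)*np.sin(s))
kt = (2/np.pi) * (np.cos(t)*np.cos(s) + 2*np.sin(t)*np.sin(s))

# sup over t of the s-integrals of |k| and |k~| stay below the constant 6.
assert np.max(np.sum(np.abs(k),  axis=1) * w) <= 6.0
assert np.max(np.sum(np.abs(kt), axis=1) * w) <= 6.0
```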

Consider the polynomial \(F(z)=z^2\). These operators satisfy \(AB=BF(A)\). In fact, by applying Theorem 3.1 we have \(\delta _0=\delta _1=0\), \(\delta _2=1\), \(n=2\),

$$\begin{aligned} k_{AB}(t,\tau )= & {} \int \limits _{0}^{\pi } k(t,s){\tilde{k}}(s,\tau )ds\\= & {} \frac{4}{\pi ^2} \int \limits _{0}^{\pi } (\cos (t)\cos (s)+\sin (t)\sin (s) + \cos (t)\sin (s))\\{} & {} \cdot (\cos (s)\cos (\tau )+2\sin s\sin \tau )ds\\= & {} \frac{4}{\pi }\left( \frac{\cos t\cos \tau }{2}+\cos t\sin \tau +\sin t \sin \tau \right) \\= & {} \frac{2}{\pi } ({\cos t\cos \tau }+2\cos t\sin \tau +2\sin t \sin \tau ), \end{aligned}$$

for almost every \((t,\tau )\in {\mathbb {R}}\times [0,\pi ]\).

Moreover,

$$\begin{aligned} F_2(k(t,s))= & {} k_1(t,s) = \int \limits _{0}^{\pi } k(t,\tau )k(\tau ,s)d\tau \\= & {} \frac{4}{\pi ^2}\int \limits _{0}^{\pi } (\cos t \cos \tau +\sin t\sin \tau +\cos t\sin \tau )\\{} & {} \cdot (\cos \tau \cos s+\sin \tau \sin s+\cos \tau \sin s)d\tau \\= & {} \frac{4}{\pi } \left( \frac{\cos t\cos s}{2}+\cos t\sin s+\frac{\sin t\sin s}{2}\right) \\= & {} \frac{2}{\pi } ({\cos t\cos s}+2\cos t\sin s+{\sin t\sin s}), \end{aligned}$$

for almost every \((t,s)\in {\mathbb {R}}\times [0,\pi ]\).

Therefore,

$$\begin{aligned} k_{BFA}(t,\tau )= & {} \int \limits _{0}^{\pi } {\tilde{k}}(t,s)F_2(k(s,\tau ))ds\\= & {} \frac{4}{\pi ^2} \int \limits _{0}^{\pi } (\cos (t)\cos (s)+2\sin (t)\sin (s))\\{} & {} \cdot ({\cos s\cos \tau }+2\cos s\sin \tau +{\sin s\sin \tau })ds\\= & {} \frac{4}{\pi } \left( \frac{\cos t\cos \tau }{2}+\cos t\sin \tau +\sin t \sin \tau \right) \\= & {} \frac{2}{\pi } ({\cos t\cos \tau }+2\cos t\sin \tau +2\sin t \sin \tau ), \end{aligned}$$

for almost every \((t,\tau )\in {\mathbb {R}}\times [0,\pi ]\), which coincides with the kernel \(k_{AB}\). Thus, the conditions of Theorem 3.1 are fulfilled, and so \(AB=BA^2\).
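The kernel identity \(k_{AB}=k_{BFA}\) above can also be checked numerically. The following sketch (an illustrative choice: plain midpoint quadrature with an arbitrary number of nodes and tolerance, not part of the original argument) compares both iterated kernels with the closed form obtained above at a few sample points.

```python
import math

def k(t, s):
    # kernel of A from Example 3.1
    return (2/math.pi)*(math.cos(t)*math.cos(s)
                        + math.sin(t)*math.sin(s)
                        + math.cos(t)*math.sin(s))

def kt(t, s):
    # kernel of B from Example 3.1 (k-tilde)
    return (2/math.pi)*(math.cos(t)*math.cos(s)
                        + 2*math.sin(t)*math.sin(s))

def integrate(f, lo, hi, n=600):
    # composite midpoint rule
    h = (hi - lo)/n
    return h*sum(f(lo + (i + 0.5)*h) for i in range(n))

def k_AB(t, tau):
    # kernel of AB
    return integrate(lambda s: k(t, s)*kt(s, tau), 0.0, math.pi)

def k_A2(t, s):
    # kernel of A^2, i.e. F_2(k): one iteration of k
    return integrate(lambda tau: k(t, tau)*k(tau, s), 0.0, math.pi)

def k_BFA(t, tau):
    # kernel of B F(A) = B A^2
    return integrate(lambda s: kt(t, s)*k_A2(s, tau), 0.0, math.pi)

def k_closed(t, tau):
    # closed form computed in the text
    return (2/math.pi)*(math.cos(t)*math.cos(tau)
                        + 2*math.cos(t)*math.sin(tau)
                        + 2*math.sin(t)*math.sin(tau))

for t, tau in [(0.3, 1.1), (-2.0, 2.5), (5.0, 0.7)]:
    assert abs(k_AB(t, tau) - k_closed(t, tau)) < 1e-3
    assert abs(k_BFA(t, tau) - k_closed(t, tau)) < 1e-3
```

Both numerically computed kernels agree with the closed form, confirming \(k_{AB}=k_{BFA}\) at the sampled points.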

We compute

$$\begin{aligned} (A^2x)(t)= & {} \frac{2}{\pi } \int \limits _{0}^{\pi } (\cos t \cos s+\sin t\sin s+\cos t\sin s)Ax(s)ds\\= & {} \frac{2}{\pi } \int \limits _{0}^{\pi } (\cos t \cos s+\sin t\sin s+\cos t\sin s) \\{} & {} \cdot \bigg (\frac{2}{\pi }\int \limits _{0}^{\pi } (\cos s \cos \tau +\sin s\sin \tau +\cos s\sin \tau )x(\tau )d\tau \bigg ) ds\\= & {} \frac{4}{\pi ^2}\int \limits _{0}^{\pi } \bigg (\int \limits _{0}^{\pi } (\cos t \cos s+\sin t\sin s+\cos t\sin s) \\{} & {} \cdot (\cos s \cos \tau +\sin s\sin \tau +\cos s\sin \tau )ds \bigg ) x(\tau )d\tau \\= & {} \frac{2}{\pi }\int \limits _{0}^{\pi } \left( {\cos t\cos \tau }+2\cos t\sin \tau +{\sin t\sin \tau }\right) x(\tau )d\tau . \end{aligned}$$

Therefore,

$$\begin{aligned} (BA^2x)(t)= & {} \frac{2}{\pi }\int \limits _{0}^{\pi } (\cos t\cos s+2\sin t\sin s)(A^2x)(s)ds\\= & {} \frac{2}{\pi }\int \limits _{0}^{\pi } (\cos t\cos s+2\sin t\sin s)\\{} & {} \cdot \bigg ( \frac{2}{\pi }\int \limits _{0}^{\pi } \bigg ( {\cos s\cos \tau }+2\cos s\sin \tau +{\sin s\sin \tau }\bigg )x(\tau )d\tau \bigg )ds\\= & {} \frac{4}{\pi ^2} \int \limits _{0}^{\pi }\bigg ( \int \limits _{0}^{\pi } (\cos t\cos s+2\sin t\sin s)\\{} & {} \cdot \bigg ( {\cos s\cos \tau }+2\cos s\sin \tau +{\sin s\sin \tau }\bigg )ds \bigg ) x(\tau )d\tau \\= & {} \frac{2}{\pi } \int \limits _{0}^{\pi } ({\cos t\cos \tau }+2\cos t\sin \tau +2\sin t \sin \tau ) x(\tau )d\tau . \end{aligned}$$

Relation \((AB)x=(BA^2)x\) is satisfied for all \(x\in L_{\infty }({\mathbb {R}})\), but this does not imply that \( A^2=0\); indeed, in this case

$$\begin{aligned} (A^2x)(t)= \frac{2}{\pi }\int \limits _{0}^{\pi } \left( {\cos t\cos \tau }+2\cos t\sin \tau +{\sin t\sin \tau }\right) x(\tau )d\tau , \end{aligned}$$

which is not the zero operator, and \(A^2=0\) is not a conclusion of Theorem 3.1 either; see Remark 3.1.

Example 3.2

With some minor modifications of the kernels, one obtains integral operators acting on \(L_p({\mathbb {R}})\) for \(1<p<\infty \). Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1< p<\infty \) be defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _0^\pi k(t,s)x(s)ds,\quad (Bx)(t)= \int \limits _0^\pi {\tilde{k}}(t,s)x(s)ds, \end{aligned}$$

for almost every t, where \(k(t,s)=I_{[\alpha ,\beta ]}(t)\frac{2}{\pi }(\cos t \cos s+\sin t\sin s+\cos t\sin s)\), \({\tilde{k}}(t,s)=I_{[\alpha ,\beta ]}(t)\frac{2}{\pi }(\cos t \cos s+2\sin t\sin s)\), for almost every \( (t,s)\in {\mathbb {R}}\times [0,\pi ]\), \(\alpha ,\,\beta \in {\mathbb {R}}\) are such that \(\alpha \le 0\), \(\beta \ge \pi \), and \(I_{E}(t)\) is the indicator function of the set E. These operators are well defined and bounded on \(L_p({\mathbb {R}})\), \(1<p<\infty \), by [21, Theorem 3.4.10], since the kernels have compact support in \({\mathbb {R}}\times [0,\pi ]\) and, for \(q > 1\) such that \(\frac{1}{p}+\frac{1}{q}=1\),

$$\begin{aligned} \int \limits _{{\mathbb {R}}}\bigg (\int \limits _{0}^{\pi }|k(t,s)|^q ds\bigg )^{p/q}dt= & {} \int \limits _{\alpha }^\beta \bigg (\int \limits _{0}^{\pi }\bigg |\frac{2}{\pi }(\cos t \cos s+\sin t\sin s+\cos t\sin s)\bigg |^q ds\bigg )^{p/q}dt\\\le & {} \int \limits _{\alpha }^\beta \frac{6^p\pi ^{p/q}}{\pi ^{p}}dt=\frac{6^p(\beta -\alpha )}{\pi }<\infty , \\ \int \limits _{{\mathbb {R}}}\bigg (\int \limits _{0}^{\pi }|{\tilde{k}}(t,s)|^q ds\bigg )^{p/q}dt= & {} \int \limits _{\alpha }^\beta \bigg (\int \limits _{0}^{\pi }\bigg |\frac{2}{\pi }(\cos t \cos s+2\sin t\sin s)\bigg |^q ds\bigg )^{p/q}dt\\\le & {} \int \limits _{\alpha }^\beta \frac{6^p\pi ^{p/q}}{\pi ^{p}}dt=\frac{6^p(\beta -\alpha )}{\pi } <\infty . \end{aligned}$$

In the estimates above we used \(\frac{p}{q}=p-1\) and the following inequalities:

$$\begin{aligned} |2(\cos t \cos s+\sin t\sin s+\cos t\sin s)|^q\le & {} 2^q\cdot 3^q=6^q,\quad 1<q<\infty , \\ |2(\cos t \cos s+2\sin t\sin s)|^q\le & {} 2^q\cdot 3^q=6^q, \ 1<q<\infty . \end{aligned}$$

Note that in this case conditions 1, 2, 3 of Theorem 3.1 reduce to condition 1, because \([\alpha _1,\beta _1]=[\alpha _2,\beta _2]=[0,\pi ]\), so that \(G=[0,\pi ]\) and \([\alpha _1,\beta _1]{\setminus } G=[\alpha _2,\beta _2]{\setminus } G=\emptyset \). Therefore, according to Remark 3.1, conditions 2 and 3 concern only sets of measure zero and are automatically fulfilled.

Consider the polynomial \(F(z)=z^2\). These operators satisfy \(AB=BF(A)\). In fact, by applying Theorem 3.1, we have \(\delta _0=\delta _1=0\), \(\delta _2=1\), \(n=2\),

$$\begin{aligned} k_{AB}(t,\tau )= & {} \int \limits _{0}^{\pi } k(t,s){\tilde{k}}(s,\tau )ds\\= & {} \frac{4}{\pi ^2} \int \limits _{0}^{\pi } I_{[\alpha ,\beta ]}(t)(\cos (t)\cos (s)+\sin (t)\sin (s) + \cos (t)\sin (s))\\{} & {} \cdot I_{[\alpha ,\beta ]}(s)(\cos (s)\cos (\tau )+2\sin s\sin \tau )ds\\= & {} \frac{4}{\pi }I_{[\alpha ,\beta ]}(t)\left( \frac{\cos t\cos \tau }{2}+\cos t\sin \tau +\sin t \sin \tau \right) \\= & {} \frac{2}{\pi }I_{[\alpha ,\beta ]}(t)({\cos t\cos \tau }+2\cos t\sin \tau +2\sin t \sin \tau ), \end{aligned}$$

for almost every \((t,\tau )\in {\mathbb {R}}\times [0,\pi ]\).

Moreover, for almost every \((t,s)\in {\mathbb {R}}\times [0,\pi ]\),

$$\begin{aligned} F_2(k(t,s))= & {} k_1(t,s) = \int \limits _{0}^{\pi } k(t,\tau )k(\tau ,s)d\tau \\= & {} \frac{4}{\pi ^2}\int \limits _{0}^{\pi } I_{[\alpha ,\beta ]}(t)(\cos t \cos \tau +\sin t\sin \tau +\cos t\sin \tau )\\{} & {} \cdot I_{[\alpha ,\beta ]}(\tau )(\cos \tau \cos s+\sin \tau \sin s+\cos \tau \sin s)d\tau \\= & {} \frac{4}{\pi } I_{[\alpha ,\beta ]}(t)\left( \frac{\cos t\cos s}{2}+\cos t\sin s+\frac{\sin t\sin s}{2}\right) \\= & {} \frac{2}{\pi } I_{[\alpha ,\beta ]}(t)({\cos t\cos s}+2\cos t\sin s+{\sin t\sin s}). \end{aligned}$$

Therefore, for almost every \((t,\tau )\in {\mathbb {R}}\times [0,\pi ]\),

$$\begin{aligned} k_{BFA}(t,\tau )= & {} \int \limits _{0}^{\pi } {\tilde{k}}(t,s)F_2(k(s,\tau ))ds\\= & {} \frac{4}{\pi ^2} \int \limits _{0}^{\pi } I_{[\alpha ,\beta ]}(t)(\cos (t)\cos (s)+2\sin (t)\sin (s))\\{} & {} \cdot I_{[\alpha ,\beta ]}(s)({\cos s\cos \tau }+2\cos s\sin \tau +{\sin s\sin \tau })ds\\= & {} \frac{4}{\pi } I_{[\alpha ,\beta ]}(t)\left( \frac{\cos t\cos \tau }{2}+\cos t\sin \tau +\sin t \sin \tau \right) \\= & {} \frac{2}{\pi } I_{[\alpha ,\beta ]}(t)({\cos t\cos \tau }+2\cos t\sin \tau +2\sin t \sin \tau ), \end{aligned}$$

which coincides with the kernel \(k_{AB}\). Thus, the conditions of Theorem 3.1 are fulfilled, and so \(AB=BA^2\).

Corollary 3.1

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} k(t,s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s)x(s)ds, \end{aligned}$$

where \(\alpha _i,\beta _i\in {\mathbb {R}}\), \(i=1,2\), \(\alpha _1<\beta _1\), \(\alpha _2<\beta _2\), and \( k(t,s):{\mathbb {R}}\times [\alpha _1,\beta _1]\rightarrow {\mathbb {R}}, \) \({\tilde{k}}(t,s):{\mathbb {R}}\times [\alpha _2,\beta _2]\rightarrow {\mathbb {R}}\) are measurable functions. Consider a polynomial defined by \(F(z)=\delta _0+\delta _1 z\), where \(\delta _0,\ \delta _1\in {\mathbb {R}}\). Set \(G=[\alpha _1,\beta _1]\cap [\alpha _2,\beta _2].\) Then

$$\begin{aligned} AB-\delta _1 BA=\delta _0 B \end{aligned}$$

if and only if the following conditions are fulfilled:

  1. 1.

    for almost every \((t,\tau )\in {\mathbb {R}}\times G\),

    $$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} k(t,s){\tilde{k}}(s,\tau )ds-\delta _0{\tilde{k}}(t,\tau ) = \delta _1\int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s) k(s,\tau )ds. \end{aligned}$$
  2. 2.

    for almost every \((t,\tau )\in {\mathbb {R}}\times ([\alpha _2,\beta _2]{\setminus } G)\),

    $$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} k(t,s){\tilde{k}}(s,\tau )ds=\delta _0{\tilde{k}}(t,\tau ). \end{aligned}$$
  3. 3.

    for almost every \((t,\tau )\in {\mathbb {R}}\times ([\alpha _1,\beta _1]{\setminus } G)\),

    $$\begin{aligned} \delta _1\int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s) k(s,\tau )ds=0. \end{aligned}$$

Proof

This follows by Theorem 3.1. \(\square \)

Corollary 3.2

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} k(t,s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s)x(s)ds, \end{aligned}$$

where \(\alpha _i,\beta _i\in {\mathbb {R}}\), \(i=1,2\), \(\alpha _1<\beta _1\), \(\alpha _2<\beta _2\), and \( k(t,s):{\mathbb {R}}\times [\alpha _1,\beta _1]\rightarrow {\mathbb {R}} \), \({\tilde{k}}(t,s):{\mathbb {R}}\times [\alpha _2,\beta _2]\rightarrow {\mathbb {R}}\) are measurable functions. Consider a polynomial \(F:{\mathbb {R}}\rightarrow {\mathbb {R}}\) defined by \(F(z)=\delta z^d\), where \(\delta \not =0\) is a real number and d is a positive integer. Set

$$\begin{aligned} G= & {} [\alpha _1,\beta _1]\cap [\alpha _2,\beta _2], \\ k_0(t,s)= & {} k(t,s), \quad k_m(t,s)=\int \limits _{\alpha _1}^{\beta _1} k(t,\tau )k_{m-1}(\tau ,s)d\tau ,\quad m=1,\ldots ,d. \end{aligned}$$

Then \( AB=\delta BA^d \) if and only if the following conditions are fulfilled:

  1. 1.

    for almost every \((t,\tau )\in {\mathbb {R}}\times G\),

    $$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} k(t,s){\tilde{k}}(s,\tau )ds = \delta \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s) k_{d-1}(s,\tau )ds. \end{aligned}$$
  2. 2.

    for almost every \((t,\tau )\in {\mathbb {R}}\times ([\alpha _2,\beta _2]{\setminus } G)\),

    $$\begin{aligned} \int \limits _{\alpha _1}^{\beta _1} k(t,s){\tilde{k}}(s,\tau )ds=0. \end{aligned}$$
  3. 3.

    for almost every \((t,\tau )\in {\mathbb {R}}\times ([\alpha _1,\beta _1]{\setminus } G)\),

    $$\begin{aligned} \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s) k_{d-1}(s,\tau )ds=0. \end{aligned}$$

Proof

This follows by Theorem 3.1. \(\square \)

Remark 3.2

Examples 3.1 and 3.2 describe specific cases for Corollary 3.2 when \([\alpha _1,\beta _1]=[\alpha _2,\beta _2]=[0,\pi ]\), \(\delta =1\), \(d=2\).

In the following theorem we consider the particular case of the operators in Theorem 3.1 in which the kernels have separated variables.

Theorem 3.2

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha _2}^{\beta _2} c(t)e(s)x(s)ds, \end{aligned}$$
(9)

where \(\alpha _i,\beta _i\in {\mathbb {R}}\), \(i=1,2\), \(\alpha _1<\beta _1\), \(\alpha _2<\beta _2\), \(a,c\in L_p({\mathbb {R}})\), \(b\in L_q([\alpha _1,\beta _1])\), \(e\in L_q([\alpha _2,\beta _2])\), \(1\le q\le \infty \), \(\frac{1}{p}+\frac{1}{q}=1\). Consider a polynomial defined by \(F(z)=\sum \nolimits _{j=0}^{n} \delta _j z^j\), where \(\delta _j \in {\mathbb {R}}\), \(j=0,\ldots ,n\). Set

$$\begin{aligned} G= & {} [\alpha _1,\beta _1]\cap [\alpha _2,\beta _2],\\ k_1= & {} \sum \limits _{j=1}^{n} \delta _j \mu _{[\alpha _1,\beta _1]} (a,b)^{j-1}\mu _{[\alpha _2,\beta _2]} (a,e), \quad k_2=\mu _{[\alpha _1,\beta _1]} (b,c), \end{aligned}$$

where \(\mu _{[\varsigma _1,\varsigma _2]}(u,v)\) is defined by (2). Then, \(AB=BF(A)\) if and only if the following conditions are fulfilled:

  1. 1.
    1. (a)

      for almost every \((t,s)\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\times [({{\,\mathrm{\textrm{supp}\,}\,}}\, e)\cap G]\),

      1. (i)

        if \(k_2\not =0\), then \(b(s)k_1= \lambda e(s)\) and \(a(t)=\frac{(\delta _0+\lambda )c(t)}{k_2}\) for some real scalar \(\lambda \),

      2. (ii)

        if \(k_2=0\), then \(k_1b(s)=-\delta _0e(s)\);

    2. (b)

      If \(t\not \in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\) then either \(k_2=0\) or \(a(t)=0\) for almost all \(t\not \in {{\,\mathrm{\textrm{supp}\,}\,}}\, c \).

    3. (c)

      If \(s\in G{\setminus } {{\,\mathrm{\textrm{supp}\,}\,}}\, e\) then either \(k_1=0\) or \(b(s)=0\) for almost all \(s\in G{\setminus } {{\,\mathrm{\textrm{supp}\,}\,}}\, e;\)

  2. 2.

    \(k_2 a(t)-\delta _0 c(t)=0\) for almost every \(t\in {\mathbb {R}}\) or \(e(s)=0\) for almost every \(s\in [\alpha _2,\beta _2]{\setminus } G\);

  3. 3.

    \(k_1=0\) or \(b(s)=0\) for almost every \(s\in [\alpha _1,\beta _1]{\setminus } G\).

Proof

Since \(a,c\in L_p({\mathbb {R}})\), \(1\le p\le \infty \), \(b\in L_q([\alpha _1,\beta _1])\) and \(e\in L_q([\alpha _2,\beta _2])\), where \(1\le q\le \infty \) and \(\frac{1}{p}+\frac{1}{q}=1\), the operators A and B are well defined and bounded by [20, Theorem 6.18]. By direct calculation, we have

$$\begin{aligned} (A^2x)(t)= & {} \int \limits _{\alpha _1}^{\beta _1} a(t)b(s)(Ax)(s)ds=\int \limits _{\alpha _1}^{\beta _1} a(t)b(s)a(s)ds\int \limits _{\alpha _1}^{\beta _1} b(\tau _1)x(\tau _1)d\tau _1\\= & {} \mu _{[\alpha _1,\beta _1]}(a,b)(Ax)(t), \\ (A^3x)(t)= & {} A(A^2x)(t)=\mu _{[\alpha _1,\beta _1]}(a,b)(A^2x)(t)=\mu _{[\alpha _1,\beta _1]}(a,b)^2(Ax)(t) \end{aligned}$$

for almost every t. We show by induction that, for almost every t,

$$\begin{aligned} (A^{m}x)(t)=\mu _{[\alpha _1,\beta _1]}(a,b)^{m-1} (Ax)(t),\quad m=1,2,\ldots . \end{aligned}$$
(10)

Indeed, if (10) holds for some m, then, for almost every t,

$$\begin{aligned} (A^{m+1}x)(t)= & {} A(A^{m}x)(t)=\mu _{[\alpha _1,\beta _1]}(a,b)^{m-1} (A^2x)(t)=\mu _{[\alpha _1,\beta _1]}(a,b)^{m} (Ax)(t). \end{aligned}$$

Then, we compute

$$\begin{aligned} (ABx)(t)&=\int \limits _{\alpha _1}^{\beta _1} a(t)b(s)c(s)ds\int \limits _{\alpha _2}^{\beta _2} e(\tau _1) x(\tau _1)d\tau _1= k_2 \int \limits _{\alpha _2}^{\beta _2} a(t) e(\tau _1) x(\tau _1)d\tau _1, \end{aligned}$$
(11)
$$\begin{aligned} (F(A)x)(t)&=\delta _0 x(t)+a(t)\sum \limits _{j=1}^{n} \delta _j \left( \mu _{[\alpha _1,\beta _1]}(a,b) \right) ^{j-1}\int \limits _{\alpha _1}^{\beta _1} b(\tau ) x(\tau ) d\tau \nonumber \\ (BF(A)x)(t)&=\delta _0 c(t)\int \limits _{\alpha _2}^{\beta _2} e(\tau _1)x(\tau _1)d\tau _1\nonumber \\&\quad +c(t)\sum \limits _{j=1}^{n} \delta _j \left( \mu _{[\alpha _1,\beta _1]}(a,b) \right) ^{j-1}\int \limits _{\alpha _2}^{\beta _2} e(\tau )a(\tau )d\tau \int \limits _{\alpha _1}^{\beta _1} b(\tau _1) x(\tau _1) d\tau _1 \nonumber \\&= \delta _0 c(t)\int \limits _{\alpha _2}^{\beta _2} e(\tau _1) x(\tau _1)d\tau _1+c(t)k_1\int \limits _{\alpha _1}^{\beta _1} b(\tau _1) x(\tau _1)d\tau _1. \end{aligned}$$
(12)

Thus, \((ABx)(t)=(BF(A)x)(t)\) for all \(x\in L_p( {\mathbb {R}})\) if and only if

$$\begin{aligned} \int \limits _{\alpha _2}^{\beta _2} [k_2a(t)-\delta _0 c(t)]e(s)x(s)ds= \int \limits _{\alpha _1}^{\beta _1} k_1 c(t)b(s)x(s)ds. \end{aligned}$$

Then, by applying [7, Corollary 3.4.2] and Lemma 2.1, we have \(AB=BF(A)\) if and only if the following conditions are satisfied:

  1. 1.

    for almost every \((t,s)\in {\mathbb {R}}\times G\),

    $$\begin{aligned}{}[k_2a(t)-\delta _0 c(t)]e(s)=k_1 c(t)b(s); \end{aligned}$$
    (13)
  2. 2.

    \(k_2a(t)-\delta _0 c(t)=0\) for almost every \(t\in {\mathbb {R}}\) or \(e(s)=0\) for almost every \(s\in [\alpha _2,\beta _2]{\setminus } G\);

  3. 3.

    \(k_1=0\) or \(c(t)=0\) for almost every \(t\in {\mathbb {R}}\) or \(b(s)=0\) for almost every \(s\in [\alpha _1,\beta _1]{\setminus } G\).

We can rewrite the first condition as follows:

  1. (a)

    Suppose \((t,s)\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\times [({{\,\mathrm{\textrm{supp}\,}\,}}\, e)\cap G]\).

    1. (i)

      If \(k_2\ne 0,\) then \(k_1\frac{b(s)}{e(s)}=k_2\frac{a(t)}{c(t)}-\delta _0=\lambda \) for some real scalar \(\lambda \). From this, it follows that \(k_1 b(s)=e(s)\lambda \) and \(a(t)=\frac{\delta _0+\lambda }{k_2} c(t)\).

    2. (ii)

      If \(k_2=0\) then \(-\delta _0 c(t)e(s)=k_1 c(t)b(s)\) from which we get that \(k_1b(s)=-\delta _0e(s)\).

  2. (b)

    If \(t\not \in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\) then \(k_2a(t)e(s)=0\) from which we get that either \(k_2=0\) or \(a(t)=0\) for almost all \(t\not \in {{\,\mathrm{\textrm{supp}\,}\,}}\, c \) or \(e(s)=0\) almost everywhere (this implies \(B=0\)).

  3. (c)

    If \(s\in G{\setminus } {{\,\mathrm{\textrm{supp}\,}\,}}\, e,\) then \(k_1c(t)b(s)=0\) which implies that either \(k_1=0\) or \(b(s)=0\) for almost all \(s\in G{\setminus } {{\,\mathrm{\textrm{supp}\,}\,}}\, e\), or \(c(t)=0\) almost everywhere (this implies that \(B=0\)).

\(\square \)
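The reduction \((A^m x)(t)=\mu _{[\alpha _1,\beta _1]}(a,b)^{m-1}(Ax)(t)\) in (10) is easy to illustrate numerically for a rank-one kernel \(k(t,s)=a(t)b(s)\). In the sketch below, the interval, the functions a and b, the test function, and the quadrature scheme are all illustrative assumptions, not choices made in the text.

```python
import math

ALPHA, BETA = 0.0, 1.0             # assumed interval [alpha_1, beta_1]

def a(t): return math.exp(-t)      # assumed a in L_p(R)
def b(s): return s*s               # assumed b in L_q([0, 1])

def integrate(f, lo, hi, n=4000):
    # composite midpoint rule
    h = (hi - lo)/n
    return h*sum(f(lo + (i + 0.5)*h) for i in range(n))

def apply_A(x):
    # (Ax)(t) = a(t) * integral of b(s) x(s) over [ALPHA, BETA]
    coef = integrate(lambda s: b(s)*x(s), ALPHA, BETA)
    return lambda t: a(t)*coef

mu_ab = integrate(lambda s: a(s)*b(s), ALPHA, BETA)   # mu(a, b)

x = lambda u: math.cos(u)          # arbitrary test function
Ax = apply_A(x)

Am_x = Ax
for m in range(2, 6):
    Am_x = apply_A(Am_x)           # A^m x by direct iteration
    # compare with the closed form mu(a,b)^(m-1) * (Ax)
    assert abs(Am_x(0.37) - mu_ab**(m - 1)*Ax(0.37)) < 1e-9
```

Each application of A multiplies the rank-one coefficient by \(\mu _{[\alpha _1,\beta _1]}(a,b)\), exactly as in the induction step of the proof.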

Remark 3.3

Observe that the operators A and B as defined in (9) take the form \((Ax)(t)=a(t)\phi (x)\) and \((Bx)(t)=c(t)\psi (x)\) for some \(a,c\in L_p({\mathbb {R}})\), \(1\le p \le \infty \), and linear functionals \(\phi ,\psi :L_p({\mathbb {R}})\rightarrow {\mathbb {R}}.\) In this case \(AB=BF(A)\) if and only if \(\phi (\psi (x)c(t))a(t)=\psi (F(\phi (x)a(t)))c(t)\) in \(L_p({\mathbb {R}})\), \(1\le p\le \infty \).

Corollary 3.3

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \) be nonzero operators defined by

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha }^{\beta } a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha }^{\beta } c(t)e(s)x(s)ds, \end{aligned}$$

where \(\alpha ,\beta \in {\mathbb {R}}\), \(\alpha <\beta \), \(a, c \in L_p({\mathbb {R}})\), \(b, e\in L_q([\alpha ,\beta ])\), \(1\le q\le \infty \), \(\frac{1}{p}+\frac{1}{q}=1\). Consider a polynomial \(F(z)=\delta _0+\cdots +\delta _n z^n\) with \(\delta _j \in {\mathbb {R}}\) for \(j=0,\ldots ,n\). Set

$$\begin{aligned} k_1=\sum \limits _{j=1}^{n} \delta _j \mu _{[\alpha ,\beta ]}(a,b)^{j-1}\mu _{[\alpha ,\beta ]}(a,e), \quad k_2=\mu _{[\alpha ,\beta ]}(b,c). \end{aligned}$$

Then, \(AB=BF(A)\) if and only if the following is true:

  1. 1.

    for almost every \((t,s)\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\times {{\,\mathrm{\textrm{supp}\,}\,}}\, e\), we have

    1. (a)

      If \(k_2\ne 0,\) then \(k_1 b(s)=e(s)\lambda \) and \(a(t)=\frac{\delta _0+\lambda }{k_2} c(t)\) for some \(\lambda \in {\mathbb {R}}\).

    2. (b)

      If \(k_2=0\) then \(k_1b(s)=-\delta _0e(s)\);

  2. 2.

    If \(t\not \in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\) then either \(k_2=0\) or \(a(t)=0\) for almost all \(t\not \in {{\,\mathrm{\textrm{supp}\,}\,}}\, c.\)

  3. 3.

If \(s\in [\alpha ,\beta ]{\setminus } {{\,\mathrm{\textrm{supp}\,}\,}}\, e,\) then either \(k_1=0\) or \(b(s)=0\) for almost all \(s\in [\alpha ,\beta ]{\setminus } {{\,\mathrm{\textrm{supp}\,}\,}}\, e\).

Proof

This follows by Theorem 3.2 as \(\alpha _1=\alpha _2=\alpha \), \(\beta _1=\beta _2=\beta \) and \(G=[\alpha ,\beta ]\) in this case. \(\square \)

Remark 3.4

From Theorem 3.2 and Corollary 3.3 we observe that if \(k_1,k_2 \ne 0\), then, given an operator B as defined in (9), we can obtain the kernel of the operator A from the relations \(a(t)=\frac{\delta _0+\lambda }{k_2}c(t)\) and \(b(s)=\frac{\lambda }{k_1}e(s)\) for some \(\lambda \in {\mathbb {R}}\). In the next two propositions, we state necessary and sufficient conditions on the choice of \(\lambda \).

Proposition 3.1

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha }^{\beta } a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha }^{\beta } c(t)e(s)x(s)ds, \end{aligned}$$

where \(\alpha ,\beta \in {\mathbb {R}}\), \(\alpha <\beta \), \(a, c \in L_p({\mathbb {R}})\), \(b, e\in L_q([\alpha ,\beta ])\), \(1\le q\le \infty \), \(\frac{1}{p}+\frac{1}{q}=1\). Consider a polynomial \(F(z)=\delta _0+\cdots +\delta _n z^n\), where \(\delta _j\in {\mathbb {R}}\), \(j=0,\ldots ,n\). Set

$$\begin{aligned} k_1=\sum \limits _{j=1}^{n} \delta _j \mu _{[\alpha ,\beta ]}(a,b)^{j-1}\mu _{[\alpha ,\beta ]}(a,e), \quad k_2=\mu _{[\alpha ,\beta ]}(b,c). \end{aligned}$$

Suppose that \(AB=BF(A).\) If \(k_2\not =0\) and \(k_1\not =0\) in condition 1a in Corollary 3.3, then the corresponding nonzero \(\lambda \) satisfies

$$\begin{aligned} F(\lambda +\delta _0)=\lambda +\delta _0. \end{aligned}$$
(14)

Proof

By definition

$$\begin{aligned} k_1=\sum \limits _{j=1}^{n} \delta _j \mu _{[\alpha ,\beta ]}(a,b)^{j-1}\mu _{[\alpha ,\beta ]}(a,e), \quad k_2=\mu _{[\alpha ,\beta ]}(b,c). \end{aligned}$$

If \(k_1\not =0\) and \(k_2\not =0\), by condition 1a in Corollary 3.3 we have

$$\begin{aligned} a(t)=\frac{\lambda +\delta _0}{k_2}c(t),\quad b(s)=\frac{\lambda }{k_1}e(s) \end{aligned}$$

almost everywhere. If \(\lambda \not =0\), then we substitute \(k_2=\mu _{[\alpha ,\beta ]}(b,c)=\mu _{[\alpha ,\beta ]}(\frac{\lambda }{k_1} e, c)\) into the equality

$$\begin{aligned} k_1=\sum \limits _{j=1}^{n} \delta _j \mu _{[\alpha ,\beta ]}\left( \frac{\lambda +\delta _0}{k_2}c,\frac{\lambda }{k_1}e \right) ^{j-1}\mu _{[\alpha ,\beta ]}\left( \frac{\lambda +\delta _0}{k_2}c,e\right) . \end{aligned}$$

Then, by using the bilinearity of \(\mu _{[\cdot ,\cdot ]}(\cdot ,\cdot )\) and after simplification, this is equivalent to

$$\begin{aligned} \lambda =\sum \limits _{j=1}^{n}\delta _j(\lambda +\delta _0)^j. \end{aligned}$$

By adding \(\delta _0\) on both sides we can write this as (14). \(\square \)
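Condition (14) states that \(\lambda +\delta _0\) must be a fixed point of F. A small numerical sketch of this selection rule follows; the polynomial \(F(z)=z^2+z-1\) (the one used in Example 3.3) and the candidate grid are illustrative assumptions.

```python
def F(z, deltas):
    # F(z) = delta_0 + delta_1 z + ... + delta_n z^n
    return sum(d*z**j for j, d in enumerate(deltas))

def admissible_lambdas(deltas, candidates):
    # nonzero lambdas with F(lambda + delta_0) = lambda + delta_0, cf. (14)
    d0 = deltas[0]
    return [lam for lam in candidates
            if lam != 0 and abs(F(lam + d0, deltas) - (lam + d0)) < 1e-9]

# F(z) = z^2 + z - 1: deltas = (delta_0, delta_1, delta_2) = (-1, 1, 1)
deltas = (-1.0, 1.0, 1.0)
# F(lam - 1) = lam - 1 reduces to lam^2 - 2 lam = 0, so lam in {0, 2};
# only the nonzero root is admissible
lams = admissible_lambdas(deltas, [float(v) for v in range(-3, 4)])
assert lams == [2.0]
```

The scan over integer candidates recovers exactly the nonzero fixed-point shift \(\lambda =2\) used later in Example 3.3.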

Proposition 3.2

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha }^{\beta } a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha }^{\beta } c(t)e(s)x(s)ds, \end{aligned}$$

where \(\alpha ,\beta \in {\mathbb {R}}\), \(\alpha <\beta \), \(a, c \in L_p({\mathbb {R}})\), \(b, e\in L_q([\alpha ,\beta ])\), \(1\le q\le \infty \), \(\frac{1}{p}+\frac{1}{q}=1\). Consider a polynomial \(F(z)=\delta _0+\cdots +\delta _n z^n\), \(\delta _j\in {\mathbb {R}}\), \(j=0,\ldots ,n\). Suppose that for almost every \((t,s)\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\times {{\,\mathrm{\textrm{supp}\,}\,}}\, e\) we have

$$\begin{aligned} a(t)=\frac{\lambda +\delta _0}{k_2}c(t),\quad b(s)=\frac{\lambda }{k_1}e(s) \end{aligned}$$
(15)

for nonzero constants \(\lambda ,\) \(k_1\) and \(k_2\). If \(F(\lambda +\delta _0)=\lambda +\delta _0\) and \(k_2=\frac{\lambda }{k_1}\mu _{[\alpha ,\beta ]}(e,c)\), then

  1. 1.

    \(A=\frac{\lambda +\delta _0}{\mu _{[\alpha ,\beta ]}(e,c)}B,\)

  2. 2.

    for all \(x\in L_p({\mathbb {R}})\) and almost all \(t\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\),

    $$\begin{aligned} (ABx)(t)=(BF(A)x)(t). \end{aligned}$$

Proof

We have, for almost every \(t \in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\),

$$\begin{aligned} (Ax)(t)=\int \limits _{\alpha }^{\beta } a(t)b(s)x(s)ds=\frac{(\lambda +\delta _0)\lambda }{k_1k_2}\int \limits _{\alpha }^{\beta } c(t)e(s)x(s)ds=\frac{(\lambda +\delta _0)\lambda }{k_1k_2}(Bx)(t). \end{aligned}$$

Moreover, for almost every \(t\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\),

$$\begin{aligned} (ABx)(t)= & {} \frac{(\lambda +\delta _0)\lambda }{k_1k_2}(B^2x)(t)=\frac{(\lambda +\delta _0)\lambda }{k_1k_2}\mu _{[\alpha ,\beta ]}(c,e)(Bx)(t), \\ (A^2x)(t)= & {} \left( \frac{(\lambda +\delta _0)\lambda }{k_1k_2}\right) ^2(B^2x)(t)= \left( \frac{(\lambda +\delta _0)\lambda }{k_1k_2}\right) ^2\mu _{[\alpha ,\beta ]}(e,c)(Bx)(t). \end{aligned}$$

Similarly, for \(m \ge 2\), for almost every \(t \in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\),

$$\begin{aligned} (A^mx)(t)&= \left( \frac{(\lambda +\delta _0)\lambda }{k_1k_2}\right) ^m \mu _{[\alpha ,\beta ]}(c,e)^{m-1}(Bx)(t), \\ (F(A)x)(t)&= \delta _0 x(t)+\sum \limits _{j=1}^{n}\delta _j \left( \frac{(\lambda +\delta _0)\lambda }{k_1k_2}\right) ^j \mu _{[\alpha ,\beta ]}(c,e)^{j-1}(Bx)(t). \end{aligned}$$

Therefore, for almost every t,

$$\begin{aligned} (BF(A)x)(t)= & {} \delta _0 (Bx)(t)+\sum \limits _{j=1}^{n}\delta _j \left( \frac{(\lambda +\delta _0)\lambda }{k_1k_2}\right) ^j \mu _{[\alpha ,\beta ]}(c,e)^{j-1}(B^2x)(t)\\= & {} \delta _0 (Bx)(t)+\sum \limits _{j=1}^{n}\delta _j \left( \frac{(\lambda +\delta _0)\lambda }{k_1k_2}\right) ^j \mu _{[\alpha ,\beta ]}(c,e)^{j}(Bx)(t) \\= & {} F\left( \frac{(\lambda +\delta _0)\lambda }{k_1k_2}\mu _{[\alpha ,\beta ]}(c,e)\right) (Bx)(t). \end{aligned}$$

So, \((ABx)(t)=(BF(A)x)(t),\) for all \(x\in L_p({\mathbb {R}})\) and almost all \(t\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\) if and only if, for almost every \(t\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\),

$$\begin{aligned} \frac{(\lambda +\delta _0)\lambda }{k_1k_2}\mu _{[\alpha ,\beta ]}(c,e)= F\left( \frac{(\lambda +\delta _0)\lambda }{k_1k_2}\mu _{[\alpha ,\beta ]}(c,e)\right) . \end{aligned}$$
(16)

If \(k_2=\frac{\lambda }{k_1}\mu _{[\alpha ,\beta ]}(c,e)\) and \(\lambda \) satisfies (14), then (16) holds. \(\square \)

Corollary 3.4

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1< p<\infty \) be nonzero operators defined by

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha }^{\beta } a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha }^{\beta } c(t)e(s)x(s)ds, \end{aligned}$$

where \(\alpha ,\beta \in {\mathbb {R}}\), \(\alpha <\beta \), \(a, c \in L_p({\mathbb {R}})\), \(b, e\in L_q([\alpha ,\beta ])\), \(1<q<\infty \), \(\frac{1}{p}+\frac{1}{q}=1\). Consider a polynomial \(F(z)=\delta _0+\delta _1 z+\delta _2 z^2\), \(\delta _j \in {\mathbb {R}}\), \(j=0,1,2\). Suppose that for almost every \((t,s)\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\times {{\,\mathrm{\textrm{supp}\,}\,}}\, e\), we have

$$\begin{aligned} a(t)=\frac{\lambda +\delta _0}{k_2}c(t),\quad b(s)=\frac{\lambda }{k_1}e(s) \end{aligned}$$
(17)

for nonzero constants \(\lambda ,\) \(k_1\) and \(k_2\). If \(k_2=\frac{\lambda }{k_1}\mu _{[\alpha ,\beta ]}(e,c)\), then

$$\begin{aligned} (ABx)(t)=(BF(A)x)(t), \end{aligned}$$

for all \(x\in L_p({\mathbb {R}})\) and almost all \(t\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\) if either \(\delta _0\delta _2<0\), or \(\delta _0\delta _2\geqslant 0\) and either \(\delta _1\geqslant 1+2\sqrt{\delta _0\delta _2}\) or \(\delta _1\leqslant 1-2\sqrt{\delta _0\delta _2}.\)

Proof

From Propositions 3.1 and 3.2 we have that \(AB=BF(A)\) if \(F(\lambda +\delta _0)=\lambda +\delta _0.\) This is equivalent to

$$\begin{aligned} \delta _2\lambda ^2+(2\delta _0\delta _2+\delta _1-1)\lambda +\delta _2\delta _0^2+\delta _1\delta _0=0. \end{aligned}$$
(18)

Equation (18) has real solutions if and only if \((\delta _1-1)^2-4\delta _0\delta _2\geqslant 0.\) This is equivalent to either \(\delta _0\delta _2<0\), or \(\delta _0\delta _2\geqslant 0\) and either \(\delta _1\geqslant 1+2\sqrt{\delta _0\delta _2}\) or \(\delta _1\leqslant 1-2\sqrt{\delta _0\delta _2}.\) \(\square \)
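The equivalence between the discriminant criterion for (18) and the restated condition on \(\delta _0,\delta _1,\delta _2\) can be spot-checked numerically; the sample coefficient triples below are arbitrary choices, not taken from the text.

```python
import math

def has_real_lambda(d0, d1, d2):
    # Eq. (18): d2*lam^2 + (2 d0 d2 + d1 - 1) lam + d2 d0^2 + d1 d0 = 0
    # has a real root iff its discriminant (d1 - 1)^2 - 4 d0 d2 >= 0
    return (d1 - 1)**2 - 4*d0*d2 >= 0

def condition_of_corollary(d0, d1, d2):
    # restated condition: d0 d2 < 0, or d0 d2 >= 0 and d1 lies outside
    # the open interval (1 - 2 sqrt(d0 d2), 1 + 2 sqrt(d0 d2))
    p = d0*d2
    if p < 0:
        return True
    r = 2*math.sqrt(p)
    return d1 >= 1 + r or d1 <= 1 - r

# arbitrary sample triples (delta_0, delta_1, delta_2)
for trial in [(-1, 1, 1), (1, 3.5, 1), (1, 0, 1), (0, 2, 5), (2, -3, 0.5)]:
    assert has_real_lambda(*trial) == condition_of_corollary(*trial)
```

The two predicates agree on every trial, matching the algebraic equivalence claimed in the proof.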

Example 3.3

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1< p<\infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _0^1 a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _0^1 c(t)e(s)x(s)ds, \end{aligned}$$

where \(a \in L_p({\mathbb {R}})\), \(b\in L_q([0,1])\), \(1<q<\infty \), \(\frac{1}{p}+\frac{1}{q}=1,\) and \(c(t)=tI_{[0,1]}(t)\), \(e(s)=s+1.\) Consider the polynomial \(F(z)=z^2+z-1\) and suppose that for almost every \((t,s)\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\times {{\,\mathrm{\textrm{supp}\,}\,}}\, e\), we have

$$\begin{aligned} a(t)=\frac{\lambda +\delta _0}{k_2}c(t),\quad b(s)=\lambda e(s) \end{aligned}$$
(19)

for nonzero constants \(\lambda \) and \(k_2=\lambda \mu _{[0,1]}(e,c)=\frac{5}{6}\lambda \). From Propositions 3.1 and 3.2 we have that \(AB=BF(A)\) if \(F(\lambda -1)=\lambda -1,\) that is, \(\lambda ^2-2\lambda =0.\) Therefore we take \(\lambda =2.\) Then

$$\begin{aligned} A=\frac{\lambda +\delta _0}{\mu _{[0,1]}(e,c)}B=\frac{6}{5}B. \end{aligned}$$

Therefore

$$\begin{aligned} A^2=\left( \frac{6}{5}B\right) \left( \frac{6}{5}B\right) =\left( \frac{6}{5}\right) ^2B^2. \end{aligned}$$

But

$$\begin{aligned} (B^2x)(t)=\int \limits _0^1 tI_{[0,1]}(t)(s+1)\,sI_{[0,1]}(s)\,ds\int \limits _0^1(\tau +1)x(\tau )d\tau =\frac{5}{6}(Bx)(t). \end{aligned}$$

Therefore \(A^2=(\frac{6}{5})^2B^2=\frac{6}{5}B=A.\) It follows that

$$\begin{aligned} F(A)= & {} A^2+A-I=2A-I=\frac{12}{5}B-I, \\ BF(A)= & {} B\left( \frac{12}{5}B-I\right) =\frac{12}{5}B^2-B=\frac{12}{5}\cdot \frac{5}{6}B-B=B. \end{aligned}$$

Finally,

$$\begin{aligned} AB=\frac{6}{5}B^2=\frac{6}{5}\cdot \frac{5}{6}B=B=BF(A). \end{aligned}$$
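As a sanity check, the computations of Example 3.3 can be reproduced numerically. In the sketch below the quadrature scheme, node count, and test function are illustrative choices; everything else (\(c\), \(e\), \(\lambda \), \(\delta _0\), and \(F\)) is taken from the example.

```python
import math

def integrate(f, lo, hi, n=5000):
    # composite midpoint rule
    h = (hi - lo)/n
    return h*sum(f(lo + (i + 0.5)*h) for i in range(n))

c = lambda t: t if 0.0 <= t <= 1.0 else 0.0   # c(t) = t I_[0,1](t)
e = lambda s: s + 1.0                          # e(s) = s + 1

mu_ec = integrate(lambda s: e(s)*c(s), 0.0, 1.0)
assert abs(mu_ec - 5.0/6.0) < 1e-6             # mu_[0,1](e, c) = 5/6

lam, d0 = 2.0, -1.0                            # lambda = 2, delta_0 = -1
ratio = (lam + d0)/mu_ec                       # A = (6/5) B
assert abs(ratio - 6.0/5.0) < 1e-6

def apply_B(x):
    coef = integrate(lambda s: e(s)*x(s), 0.0, 1.0)
    return lambda t: c(t)*coef

def apply_A(x):
    Bx = apply_B(x)
    return lambda t: ratio*Bx(t)

x = lambda u: math.sin(u)                      # arbitrary test function
t0 = 0.4

ABx = apply_A(apply_B(x))
Ax = apply_A(x)
A2x = apply_A(Ax)
FAx = lambda t: A2x(t) + Ax(t) - x(t)          # F(A)x for F(z) = z^2 + z - 1
BFAx = apply_B(FAx)

assert abs(A2x(t0) - Ax(t0)) < 1e-9            # A^2 = A
assert abs(ABx(t0) - BFAx(t0)) < 1e-9          # AB = B F(A)
```

The numerics confirm \(\mu _{[0,1]}(e,c)=\frac{5}{6}\), the idempotency \(A^2=A\), and the commutation relation \(AB=BF(A)\) at the sampled point.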

Remark 3.5

From Proposition 3.2 we obtain the following. Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha }^{\beta } a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha }^{\beta } c(t)e(s)x(s)ds, \end{aligned}$$

where \(\alpha ,\beta \in {\mathbb {R}}\), \(\alpha <\beta \), \(a, c \in L_p({\mathbb {R}})\), \(b, e\in L_q([\alpha ,\beta ])\), \(1\le q\le \infty \), \(\frac{1}{p}+\frac{1}{q}=1\), and let \(F(z)=\delta _0+\delta _1z+\cdots +\delta _n z^n\), \(\delta _j\in {\mathbb {R}}\), \(j=0,\ldots ,n\). Suppose that, for almost every \((t,s)\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\times {{\,\mathrm{\textrm{supp}\,}\,}}\, e\), \(a(t)=\frac{\lambda +\delta _0}{k_2}c(t)\) and \(b(s)=\frac{\lambda }{k_1}e(s)\) for some nonzero constants \(\lambda \), \(k_1\) and \(k_2\). If \(F(\lambda +\delta _0)=\lambda +\delta _0\) and \(k_2=\frac{\lambda }{k_1}\mu _{[\alpha ,\beta ]}(e,c)\), then \(A=\frac{\lambda +\delta _0}{\mu _{[\alpha ,\beta ]}(e,c)}B\) and \(AB=BF(A).\) Now suppose instead that \(A=\omega B\) for some \(\omega \in {\mathbb {R}}\); then \(AB=BF(A)\) if and only if

$$\begin{aligned} F(\omega \mu _{[\alpha ,\beta ]}(c,e))=\omega \mu _{[\alpha ,\beta ]}(c,e). \end{aligned}$$
(20)

This relation is the same as (16) with \(\omega =\frac{(\lambda +\delta _0)\lambda }{k_1k_2}.\)

Corollary 3.5

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha _2}^{\beta _2} c(t)e(s)x(s)ds, \end{aligned}$$

where \(\alpha _i,\beta _i\), \(i=1,2\) are real numbers, with \(\alpha _1<\beta _1\), \(\alpha _2<\beta _2\), \(a, c \in L_p({\mathbb {R}})\), \(b\in L_q([\alpha _1,\beta _1])\), \(e\in L_q([\alpha _2,\beta _2])\), \(1\le q\le \infty \) and \(\frac{1}{p}+\frac{1}{q}=1\). Consider a polynomial \(F(z)=\delta _0+\delta _1z+\cdots +\delta _n z^n\), \(\delta _j\in {\mathbb {R}}\), \(j=0,\ldots ,n\). Set

$$\begin{aligned} G= & {} [\alpha _1,\beta _1]\cap [\alpha _2,\beta _2],\\ k_1= & {} \sum \limits _{j=1}^{n}\delta _j\mu _{[\alpha _1,\beta _1]} (a,b)^{j-1}\mu _{[\alpha _2,\beta _2]} (a,e), \quad k_2=\mu _{[\alpha _1,\beta _1]} (b,c). \end{aligned}$$

Then,

  1. 1.

    if \(k_1\not =0\), \(k_2\not =0\), then \(AB=BF(A)\) if and only if \(A=\omega B\), for some constant \(\omega \) which satisfies (20);

  2. 2.

    if \(k_2=0\), then \(AB=0\), and \(AB=BF(A)\) if and only if \(BF(A)=0\). Moreover,

    1. (a)

      if \(k_1\not =0\) then \(BF(A)=0\) if and only if

      $$\begin{aligned} b(s)=-\frac{\delta _0}{k_1}e(s)I_{G}(s) \end{aligned}$$

      almost everywhere.

    2. (b)

      if \(k_1=0\) then \(AB=BF(A)\) if \(\delta _0=0\), that is, \(F(t)=\sum \nolimits _{j=1}^{n}\delta _j t^{j}\).

  3. 3.

    if \(k_2\not =0\) and \(k_1=0\) then \(AB=BF(A)\) if and only if \(AB=\delta _0 B\), that is

    $$\begin{aligned} (Ax)(t)=\frac{\delta _0}{k_2}\int \limits _{\alpha _1}^{\beta _1}c(t)b(s)x(s)ds. \end{aligned}$$

Proof

  1. 1.

    By applying Theorem 3.2, if \(k_1\not =0\) and \(k_2\not =0\), then \(AB=BF(A)\) if and only if the following conditions hold:

    1. (i)

      for almost every \(t\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\)

      $$\begin{aligned} a(t)=\frac{\delta _0+\lambda }{k_2}c(t) \end{aligned}$$

      and \(b(s)=\frac{\lambda }{k_1}e(s)\) for almost every \(s\in G \cap {{\,\mathrm{\textrm{supp}\,}\,}}\, e\) and some nonzero constant \(\lambda \) satisfying (16);

    2. (ii)

      \(e(s)=0\) for almost every \(s\in [\alpha _2,\beta _2]{\setminus } G\);

    3. (iii)

      \(b(s)=0\) for almost every \(s\in [\alpha _1,\beta _1]{\setminus } G\);

    from which we have,

    $$\begin{aligned} (Ax)(t)= & {} \int \limits _G a(t)b(s)x(s)ds+\int \limits _{[\alpha _1,\beta _1]{\setminus } G} a(t)b(s)x(s)ds\\= & {} \frac{(\lambda +\delta _0)\lambda }{k_1k_2}\int \limits _G c(t)e(s)x(s)ds =\frac{(\lambda +\delta _0)\lambda }{k_1k_2} (Bx)(t) \end{aligned}$$

    almost everywhere. If \(\lambda =0\), then \(A=0\).

  2. 2.

    If \(k_2=0\), then from (11) we have \(AB=0\), and hence \(AB=BF(A)\) if and only if \(BF(A)=0\). Moreover, by applying Theorem 3.2 we have that

    1. (a)

      if \(k_1\not =0\), then \(AB=BF(A)\) if and only if for almost every \(s\in {{\,\mathrm{\textrm{supp}\,}\,}}\, e\cap G\),

      $$\begin{aligned} b(s)=-\frac{\delta _0}{k_1}e(s), \end{aligned}$$

      \(b(s)=0\) for almost every \(s\in G{\setminus } {{\,\mathrm{\textrm{supp}\,}\,}}\, e\), \(e(s)=0\) for almost every \(s\in [\alpha _2,\beta _2]{\setminus } G\) and \(b(s)=0\) for almost every \(s\in [\alpha _1,\beta _1]{\setminus } G\). Therefore,

      $$\begin{aligned} b(s)=-\frac{\delta _0}{k_1}e(s)I_G(s) \end{aligned}$$

      almost everywhere.

    2. (b)

      if \(k_1=0\) and \(\delta _0=0\), then \(AB=BF(A)\).

  3. 3.

    By applying Theorem 3.2, if \(k_2\not =0\) and \(k_1=0\), then \(AB=BF(A)\) if and only if for almost every \(t\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\),

    $$\begin{aligned} a(t)=\frac{\delta _0+\lambda }{k_2}c(t) \end{aligned}$$

    and \(\lambda e(s)=0\) for almost every \(s\in G \cap {{\,\mathrm{\textrm{supp}\,}\,}}\, e\), from which we get \(\lambda =0\). Therefore, for almost every \(t \in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\),

    $$\begin{aligned} a(t)=\frac{\delta _0}{k_2}c(t). \end{aligned}$$

    So we can write, for almost every \(t \in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\),

    $$\begin{aligned} (Ax)(t)=\int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds=\frac{\delta _0}{k_2}\int \limits _{\alpha _1}^{\beta _1} c(t)b(s)x(s)ds. \end{aligned}$$

    Hence, for almost every t,

    $$\begin{aligned} (ABx)(t)= & {} \frac{\delta _0}{k_2}\int \limits _{\alpha _1}^{\beta _1} c(t)b(s)\left( \int \limits _{\alpha _2}^{\beta _2} c(s)e(\tau )x(\tau )d\tau \right) ds=\frac{\delta _0}{k_2} c(t)\int \limits _{\alpha _1}^{\beta _1} c(s)b(s)ds\int \limits _{\alpha _2}^{\beta _2} e(\tau )x(\tau )d\tau \\= & {} \frac{\delta _0}{k_2}\mu _{[\alpha _1,\beta _1]}(b,c)\int \limits _{\alpha _2}^{\beta _2}c(t) e(\tau )x(\tau )d\tau =\delta _0(Bx)(t). \end{aligned}$$

    On the other hand, from (12) it follows that \(BF(A)=\delta _0B\) if \(k_1=0\).

\(\square \)

Example 3.4

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1< p<\infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{0}^{1} a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{0}^{1} c(t)e(s)x(s)ds, \end{aligned}$$

where \(a(t)=t^2I_{[0,1]}(t),\) \(b(s)=s^3\), \(c(t)=-6t^2I_{[0,1]}(t)\) and \(e(s)=s\). Consider a polynomial \(F(z)=\delta _0+\delta _1z+\delta _2 z^2\), \(\delta _j\in {\mathbb {R}}\), \(j=0,1,2\). We have

$$\begin{aligned} k_2= \mu _{[0,1]}(b,c)=\int \limits _{0}^{1} b(s)c(s)ds=\int \limits _{0}^{1} -6s^3s^2ds=-1. \end{aligned}$$

If \(k_1=\delta _1\mu _{[0,1]}(a,e)+\delta _2 \mu _{[0,1]}(a,b)\mu _{[0,1]}(a,e)=0\), then we choose \(\delta _i\), \(i=1,2\) such that \(0=\delta _1+\delta _2\mu _{[0,1]}(a,b)=\delta _1-\frac{1}{6}\delta _2\mu _{[0,1]}(c,b)=\delta _1+\frac{1}{6}\delta _2\). Thus \(\delta _2=-6\delta _1\) and \(\frac{\delta _0}{k_2}=-\frac{1}{6}\) from which we get \(\delta _0=\frac{1}{6}\). Hence \(F(z)=-6\delta _1 z^2+\delta _1 z+\frac{1}{6}\). We have, for almost every t,

$$\begin{aligned} (Ax)(t)=\int \limits _{0}^{1} t^2I_{[0,1]}(t)s^3 x(s)ds,\quad (Bx)(t)=-6\int \limits _{0}^{1} t^2I_{[0,1]}(t)s x(s)ds, \end{aligned}$$

and thus, for almost every t,

$$\begin{aligned} (ABx)(t)=\int \limits _{0}^{1} t^2I_{[0,1]}(t)s^3 \left( -6\int \limits _{0}^{1} s^2I_{[0,1]}(s)\tau x(\tau )d\tau \right) ds=\frac{1}{6}(Bx)(t),\\ (A^2x)(t)=\int \limits _{0}^{1} t^2I_{[0,1]}(t)s^3 \left( \int \limits _{0}^{1} s^2I_{[0,1]}(s)\tau ^3x(\tau )d\tau \right) ds=\frac{1}{6}(Ax)(t). \end{aligned}$$

Finally, we have

$$\begin{aligned} BF(A)=B\left( -6\delta _1 A^2+\delta _1 A+\frac{1}{6}I\right) =-\delta _1 BA+\delta _1 BA+\frac{1}{6}B=\frac{1}{6}B=AB. \end{aligned}$$
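
The identities of Example 3.4 are easy to confirm numerically. The sketch below discretizes the operators by the midpoint rule and takes \(\delta _1=1\) (any \(\delta _1\) works), so that \(F(z)=-6z^2+z+\frac{1}{6}\); both \(AB=\frac{1}{6}B\) and \(AB=BF(A)\) should hold up to discretization error.

```python
import numpy as np

# Discretize [0, 1] by the midpoint rule: kernel K(t, s) -> matrix K * h.
N = 2000
h = 1.0 / N
t = (np.arange(N) + 0.5) * h
I = np.eye(N)

A = np.outer(t**2, t**3) * h          # kernel a(t) b(s) = t^2 s^3
B = np.outer(-6 * t**2, t) * h        # kernel c(t) e(s) = -6 t^2 s

d1 = 1.0                              # arbitrary choice of delta_1
FA = -6 * d1 * (A @ A) + d1 * A + I / 6   # F(A) with F(z) = -6 d1 z^2 + d1 z + 1/6

ok1 = np.allclose(A @ B, B / 6, atol=1e-6)    # AB = (1/6) B
ok2 = np.allclose(A @ B, B @ FA, atol=1e-6)   # AB = B F(A)
```

Since \(A^2=\frac{1}{6}A\), the matrix \(F(A)\) collapses to \(\frac{1}{6}I\), which is exactly why the relation holds for every \(\delta _1\).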

Example 3.5

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1< p<\infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha }^{\beta } a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha }^{\beta } c(t)e(s)x(s)ds, \end{aligned}$$

where \(\alpha ,\beta \in {\mathbb {R}}\), \(\alpha <\beta \), \(a,c\in L_p({\mathbb {R}})\), \(b,e\in L_q([\alpha ,\beta ])\), with \(1<q<\infty \) such that \(\frac{1}{p}+\frac{1}{q}=1\). Consider a polynomial \(F(z)=\delta _0+\delta _1z+\delta _2 z^2\), \(\delta _j\in {\mathbb {R}}\), \(j=0,1,2\). Set

$$\begin{aligned} k_2= \mu _{[\alpha ,\beta ]}(b,c)=\int \limits _{\alpha }^{\beta } b(s)c(s)ds, \quad k_1= \delta _1\mu _{[\alpha ,\beta ]}(a,e)+\delta _2\mu _{[\alpha ,\beta ]}(a,b) \mu _{[\alpha ,\beta ]}(a,e). \end{aligned}$$

If \(k_2\not =0\) and \(k_1=0\), then either \(\mu _{[\alpha ,\beta ]}(a,e)=0\) or we choose \(\delta _i\), \(i=1,2\), such that \(\delta _1+\delta _2\mu _{[\alpha ,\beta ]}(a,b)=0\). Thus, from Corollary 3.5, for almost every t,

$$\begin{aligned} a(t)=\frac{\delta _0}{k_2}c(t) \end{aligned}$$

Thus \(k_1=0\) implies that either

$$\begin{aligned} \mu _{[\alpha ,\beta ]}(a,e)=0\ \text{ or }\ \delta _1+\frac{\delta _0}{k_2}\delta _2 k_2=0. \end{aligned}$$

We choose coefficients \(\delta _j\), \(j=0,1,2\) such that \(\delta _1=-\delta _0\delta _2\), and hence \(F(z)=\delta _2 z^2-\delta _0\delta _2 z+\delta _0\). Then, the operators

$$\begin{aligned} (Ax)(t)=\frac{\delta _0}{k_2}\int \limits _{\alpha }^{\beta } c(t)b(s) x(s)ds,\quad (Bx)(t)=\int \limits _{\alpha }^{\beta } c(t)e(s) x(s)ds \end{aligned}$$

for almost every t, satisfy the relation

$$\begin{aligned} AB=\delta _2 BA^2-\delta _0\delta _2 BA+\delta _0B. \end{aligned}$$
(21)

In fact, for almost every t,

$$\begin{aligned} (ABx)(t)= & {} \frac{\delta _0}{k_2}\int \limits _{\alpha }^{\beta } c(t)b(s) \left( \int \limits _{\alpha }^{\beta } c(s)e(\tau ) x(\tau )d\tau \right) ds=\delta _0(Bx)(t),\\ (A^2x)(t)= & {} \frac{\delta _0}{k_2}\int \limits _{\alpha }^{\beta } c(t)b(s) \left( \frac{\delta _0}{k_2}\int \limits _{\alpha }^{\beta } c(s)b(\tau )x(\tau )d\tau \right) ds=\delta _0(Ax)(t). \end{aligned}$$

Finally, we have

$$\begin{aligned} BF(A)=B\left( \delta _2 A^2-\delta _2 \delta _0 A+\delta _0I\right) =\delta _2\delta _0 BA-\delta _2\delta _0 BA+\delta _0 B=\delta _0B=AB. \end{aligned}$$

In particular, if \(\alpha =0\), \(\beta =1\), \(b(s)=s\) and \(c(t)=t^2I_{[0,1]}(t)\), \(e(s)=s^3\) we have \(k_2=\mu _{[0,1]}(b,c)=\frac{1}{4} \). Hence, the operators

$$\begin{aligned} (Ax)(t)=4\delta _0\int \limits _{0}^{1} t^2I_{[0,1]}(t)sx(s)ds,\ (Bx)(t)=\int \limits _{0}^{1} t^2I_{[0,1]}(t)s^3x(s)ds \end{aligned}$$
(22)

satisfy the commutation relation (21). In particular, if \(\delta _2=1\) and \(\delta _0=-1\), that is, \(F(z)=z^2+z-1\), then the corresponding operators in (22) satisfy

$$\begin{aligned} AB=BA^2+BA-B. \end{aligned}$$

Corollary 3.6

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha _2}^{\beta _2} c(t)e(s)x(s)ds, \end{aligned}$$

where \(\alpha _i,\beta _i\in {\mathbb {R}}\), \(i=1,2\), \(\alpha _1<\beta _1\), \(\alpha _2<\beta _2\), \(a, c \in L_p({\mathbb {R}})\), \(b\in L_q([\alpha _1,\beta _1])\), \(e\in L_q([\alpha _2,\beta _2])\), \(1\le q\le \infty \) and \(\frac{1}{p}+\frac{1}{q}=1\). Let \(F(z)=\delta _0+\delta _1z+\cdots +\delta _n z^n\) be a polynomial with \(\delta _j\in {\mathbb {R}}\) for \(j=0,\ldots ,n\). Set

$$\begin{aligned} G= & {} [\alpha _1,\beta _1]\cap [\alpha _2,\beta _2],\\ k_1= & {} \sum \limits _{j=1}^{n}\delta _j\mu _{[\alpha _1,\beta _1]} (a,b)^{j-1}\mu _{[\alpha _2,\beta _2]} (a,e), \quad k_2=\mu _{[\alpha _1,\beta _1]} (b,c). \end{aligned}$$

If \(k_2\not =0\) and \(\mu _{[\alpha _2,\beta _2]} (a,e)=0\), then \(AB=BF(A)\) if and only if \(AB=\delta _0 B\), that is, for almost every t, \(a(t)=\frac{\delta _0}{k_2}c(t).\)

Proof

This follows by Corollary 3.5 since \(k_2\not =0\) and \(k_1=0\). \(\square \)

Corollary 3.7

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha _2}^{\beta _2} c(t)e(s)x(s)ds, \end{aligned}$$

where \(\alpha _i,\beta _i\in {\mathbb {R}}\), \(i=1,2\), \(\alpha _1<\beta _1\), \(\alpha _2<\beta _2\), \(a, c \in L_p({\mathbb {R}})\), \(b\in L_q([\alpha _1,\beta _1])\), \(e\in L_q([\alpha _2,\beta _2])\), \(1\le q\le \infty \) and \(\frac{1}{p}+\frac{1}{q}=1\). Consider a monomial \(F(z)=\delta z^d\), where d is a positive integer and \(\delta \not =0\) is a real number. Set

$$\begin{aligned} G= & {} [\alpha _1,\beta _1]\cap [\alpha _2,\beta _2],\\ k_1= & {} \delta \mu _{[\alpha _1,\beta _1]} (a,b)^{d-1}\mu _{[\alpha _2,\beta _2]} (a,e), \quad k_2=\mu _{[\alpha _1,\beta _1]} (b,c). \end{aligned}$$

Then, \(AB=\delta BA^d\) if and only if the following conditions are fulfilled:

  1. 1.
    1. (a)

      for almost every \((t,s)\in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\times [({{\,\mathrm{\textrm{supp}\,}\,}}\, e)\cap G]\) we have the following:

      1. (i)

        If \(k_2\ne 0,\) then \(k_1 b(s)=e(s)\lambda \) and \(a(t)=\frac{\lambda }{k_2} c(t)\) for some real scalar \(\lambda \).

      2. (ii)

        If \(k_2=0\) then either \(k_1=0\) or \(b(s)=0\) for almost all \(s\in {{\,\mathrm{\textrm{supp}\,}\,}}\, e\cap G\).

    2. (b)

      If \(t\not \in {{\,\mathrm{\textrm{supp}\,}\,}}\, c\) then either \(k_2=0\) or \(a(t)=0\) for almost all \(t\not \in {{\,\mathrm{\textrm{supp}\,}\,}}\, c \).

    3. (c)

      If \(s\in G{\setminus } {{\,\mathrm{\textrm{supp}\,}\,}}\, e\) then either \(k_1=0\) or \(b(s)=0\) for almost all \(s\in G{\setminus } {{\,\mathrm{\textrm{supp}\,}\,}}\, e.\)

  2. 2.

    \(k_2 =0\), or \(e(s)=0\) for almost every \(s\in [\alpha _2,\beta _2]{\setminus } G\).

  3. 3.

    \(k_1=0\) or \(b(s)=0\) for almost every \(s\in [\alpha _1,\beta _1]{\setminus } G\).

Proof

This follows from Theorem 3.2 and the fact that \(\delta _0=0\) in this case. \(\square \)

Example 3.6

Let \(A:L_2([\alpha ,\beta ])\rightarrow L_2([\alpha ,\beta ])\) and \(B:L_2([\alpha ,\beta ])\rightarrow L_2([\alpha ,\beta ])\) be defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha }^{\beta } a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha }^{\beta } c(t)e(s)x(s)ds, \end{aligned}$$

where \(\alpha ,\beta \in {\mathbb {R}}\), \(\alpha <\beta \), and \(a, b, c, e \in L_2([\alpha ,\beta ])\), such that \(a\perp b\) and \(b \perp c\), that is,

$$\begin{aligned} \int \limits _{\alpha }^{\beta } a(t)b(t)dt=\int \limits _{\alpha }^{\beta } b(t)c(t)dt=0. \end{aligned}$$

Then the above operators satisfy \(AB=\delta BA^d\), \(d=2,3,\ldots \). In fact, by using Corollary 3.7 and putting

$$\begin{aligned} F(z)= & {} \delta z^d,\quad d=2,3,\ldots \\ k_1= & {} \delta \mu _{[\alpha ,\beta ]}(a,b)^{d-1}\mu _{[\alpha ,\beta ]}(a,e), \quad k_2=\mu _{[\alpha ,\beta ]}(b,c), \end{aligned}$$

we get \(k_1=k_2=0\). So we have all conditions in Corollary 3.7 satisfied. In particular, if \(a(t)=\left( \frac{5}{3}t^3-\frac{3}{2}t\right) I_{[-1,1]}(t)\), \(b(s)=\frac{3}{2}s^2-\frac{1}{2}\) and \(c(t)=tI_{[-1,1]}(t)\), then the operators

$$\begin{aligned} (Ax)(t)= & {} \int \limits _{-1}^1 \left( \frac{5}{3}t^3-\frac{3}{2}t\right) I_{[-1,1]}(t)\left( \frac{3}{2}s^2-\frac{1}{2}\right) x(s)ds, \\ (Bx)(t)= & {} \int \limits _{-1}^1 tI_{[-1,1]}(t) e(s) x(s)ds \end{aligned}$$

satisfy the relation \(AB=BA^d\), \(d=2,3,\ldots \), since \(a\perp b\) and \(b \perp c\) in \(L_2([-1,1])\), so that \(k_1=k_2=0\).
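
A numerical sketch of Example 3.6 on a symmetric midpoint grid over \([-1,1]\). Only \(a\perp b\) and \(b\perp c\) are used; the factor \(e(s)=s^2\) is an arbitrary illustrative choice (any \(e\in L_2([-1,1])\) works). Both \(AB\) and \(BA^d\) then vanish.

```python
import numpy as np

# Symmetric midpoint grid on [-1, 1]; odd-function integrals cancel pairwise.
N = 2000
h = 2.0 / N
t = -1.0 + (np.arange(N) + 0.5) * h

a = (5.0 / 3.0) * t**3 - 1.5 * t      # odd, hence a ⊥ b
b = 1.5 * t**2 - 0.5                  # even
c = t                                 # odd, hence b ⊥ c
e = t**2                              # hypothetical choice of e

A = np.outer(a, b) * h
B = np.outer(c, e) * h

ok_orth = abs(np.sum(a * b) * h) < 1e-10 and abs(np.sum(b * c) * h) < 1e-10
ok_d2 = np.allclose(A @ B, B @ A @ A, atol=1e-10)        # d = 2: both sides ~ 0
ok_d3 = np.allclose(A @ B, B @ A @ A @ A, atol=1e-10)    # d = 3: both sides ~ 0
```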

Corollary 3.8

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \) be nonzero operators defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds,\quad (Bx)(t)= \int \limits _{\alpha _2}^{\beta _2} c(t)e(s)x(s)ds, \end{aligned}$$

where \(\alpha _i,\beta _i\), \(i=1,2\) are real numbers with \(\alpha _1<\beta _1\), \(\alpha _2<\beta _2\), \(a, c \in L_p({\mathbb {R}})\), \(b\in L_q([\alpha _1,\beta _1])\), \(e\in L_q([\alpha _2,\beta _2])\), \(1\le q\le \infty \) and \(\frac{1}{p}+\frac{1}{q}=1\). Consider the constant polynomial \(F(z)=\delta _0\), \(\delta _0\in {\mathbb {R}}\). Then, \(AB=BF(A)\) if and only if

$$\begin{aligned} a(t)=\frac{\delta _0 c(t)}{k_2},\quad k_2=\mu _{[\alpha _1,\beta _1]} (b,c)\not =0 \end{aligned}$$

or \(\delta _0=0\).

Proof

This follows by Theorem 3.2. \(\square \)

Remark 3.6

In general, it is not true that for all integral operators \(A_1, A_2, B\) and polynomials F,

$$\begin{aligned} A_iB=BF(A_i), \quad i=1,2 \end{aligned}$$
(23)

implies

$$\begin{aligned} (A_1+A_2)B=BF(A_1+A_2). \end{aligned}$$
(24)

Indeed, let \(A_1:L_p[0,1]\rightarrow L_p[0,1]\), \(A_2:L_p[0,1]\rightarrow L_p[0,1]\), \(B:L_p[0,1]\rightarrow L_p[0,1]\) with \(1\le p\le \infty \) be defined as follows

$$\begin{aligned} (A_1x)(t)=(A_2x)(t)=(Bx)(t)=\int \limits _0^1 x(s)ds,\quad F(z)=z^2. \end{aligned}$$

We have

$$\begin{aligned} (A_1+A_2)B\equiv A_1+A_2 \not = B(A_1+A_2)^2\equiv A_1+A_2+2A_1A_2, \end{aligned}$$

where

$$\begin{aligned} (A_1A_2x)(t)=\int \limits _{0}^{1}x(s)ds\not \equiv 0. \end{aligned}$$

Nevertheless, there are non-zero operators \(A_1\), \(A_2\) and B which satisfy (23) and (24) for a nonlinear function \(F(z)=z^n\). For example, for \(A_i:L_p[0,1]\rightarrow L_p[0,1]\), \(i=1,2\), and \(B:L_p[0,1]\rightarrow L_p[0,1]\), \(1\le p<\infty \), defined as follows

$$\begin{aligned} (A_1x)(t)= & {} \int \limits _{0}^{1} t(3s-2)x(s)ds,\quad (A_2x)(t)=\int \limits _{0}^{1} t(2-3s)x(s)ds,\quad \\ (Bx)(t)= & {} \int \limits _{0}^{1} t(4-6s)x(s)ds. \end{aligned}$$

we have

$$\begin{aligned}{} & {} A_iA_j=0, \quad i,j=1,2,\\{} & {} A_iB=BA^n_i=0,\quad i=1,2,\quad n=1,2,3,\ldots \\{} & {} (A_1+A_2)B=B(A^n_1+A^n_2)=B(A_1+A_2)^n=0, \quad n=1,2,3,\ldots \end{aligned}$$

where 0 is the zero operator.
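
The first counterexample of Remark 3.6 is easy to reproduce numerically: with \(A_1=A_2=B=P\), where \((Px)(t)=\int _0^1 x(s)ds\), and \(F(z)=z^2\), each \(A_i\) satisfies (23), yet (24) fails.

```python
import numpy as np

# Discretized averaging operator P on [0, 1]: every row equals h, so P^2 = P.
N = 1000
h = 1.0 / N
P = np.full((N, N), h)

A1 = A2 = B = P
# (23) holds for each operator separately: A_i B = B A_i^2  (both equal P)
individually = np.allclose(A1 @ B, B @ A1 @ A1, atol=1e-10)
# (24) fails for the sum: (A1+A2)B = 2P, but B(A1+A2)^2 = 4P
S = A1 + A2
sum_fails = not np.allclose(S @ B, B @ S @ S, atol=1e-3)
```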

4 Reordering formulas

We present reordering formulas for linear integral operators satisfying the commutation relation \(AB=BF(A)\).

Proposition 4.1

([33, 40]) Let A, B be linear operators and \(F:{\mathbb {R}}\rightarrow {\mathbb {R}}\) a function with appropriate functional calculus such that \(AB=BF(A)\). Then for any polynomial h we have

$$\begin{aligned} A^jB= & {} B[F(A)]^j,\quad j=1,2,\ldots ,\\ h(A)B^k= & {} B^k\left( h\circ F^{\circ (k)}(A) \right) ,\quad k=1,2,3,\ldots , \end{aligned}$$

where \(F^{\circ (k)} \) is the k-fold composition of a function F with itself.

Lemma 4.1

Let \(\alpha _1,\beta _1\in {\mathbb {R}},\ \alpha _1<\beta _1\) and \(k(t,s):{\mathbb {R}}\times [\alpha _1,\beta _1]\rightarrow {\mathbb {R}}\) be a measurable function such that the operator \((Ax)(t)=\int \nolimits _{\alpha _1}^{\beta _1}k(t,s)x(s)ds\), for almost every t, is defined on \(L_p({\mathbb {R}})\), \(1\le p\le \infty \). Consider a polynomial defined by \(F(z)=\delta _0+\delta _1 z +\cdots +\delta _m z^m\), where \(\delta _0,\ldots ,\delta _m\in {\mathbb {R}}\), and let

$$\begin{aligned} k_0(t,s)= & {} k(t,s), \quad k_j(t,s)=\int \limits _{\alpha _1}^{\beta _1} k(t,\tau )k_{j-1}(\tau ,s)d\tau ,\quad j=1,\dots ,m, \nonumber \\ F_{m,1}(k(t,s))= & {} \sum \limits _{j=1}^{m} \delta _j k_{j-1}(t,s),\nonumber \\ F_{m,i}(k(t,s))= & {} \int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,\tau ))F_{m,i-1}(k(\tau ,s))d\tau ,\quad i=2,3,\dots . \end{aligned}$$
(25)

Then, for each \(i=0,1,2,\dots \), the operator \(G_i:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \), defined by

$$\begin{aligned} (G_ix)(t)=\left\{ \begin{array}{ll} x(t), &{}\quad i=0, \\ \int \nolimits _{\alpha _1}^{\beta _1} F_{m,i}(k(t,s))x(s)ds, &{}\quad i=1,2,\dots \end{array} \right. \end{aligned}$$
(26)

is linear and satisfies the following semigroup property:

$$\begin{aligned} G_{i}(G_jx)(t)=(G_{i+j}x)(t),\quad i,j=0,1,2,\ldots \end{aligned}$$
(27)

Proof

By definition the operator \(G_i\) is well-defined and linear. If at least one of the indices i, j is zero, then the statement is trivial. Suppose that \(i\not =0\) and \(j\not =0\).

We fix \(j\not =0\) and proceed by induction on i. For \(i=1\), by Fubini's theorem we have

$$\begin{aligned} G_1(G_jx)(t)= & {} \int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,s))\left( \int \limits _{\alpha _1}^{\beta _1} F_{m,j}(k(s,\tau ))x(\tau )d\tau \right) ds\\= & {} \int \limits _{\alpha _1}^{\beta _1} \left( \int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,s))F_{m,j}(k(s,\tau ))ds \right) x(\tau )d\tau = \int \limits _{\alpha _1}^{\beta _1} F_{m,j+1}(k(t,\tau )) x(\tau )d\tau \\= & {} (G_{j+1}x)(t). \end{aligned}$$

Suppose that for \(i=l\), \(G_l(G_jx)(t)=(G_{l+j}x)(t)\). Combining this with the same idea as in the case \(G_{1}G_j\) we have

$$\begin{aligned} G_{l+1}(G_jx)(t)=G_1(G_l(G_jx))(t)=G_1(G_{l+j}x)(t)=(G_{l+1+j}x)(t). \end{aligned}$$

Since j was arbitrarily chosen, this is true for all \(i,j=0,1,2,\ldots \). \(\square \)
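
The recursion (25) and the semigroup property (27) can be illustrated numerically. The kernel \(k(t,s)=t+s\) on \([0,1]\) and the coefficients \(m=2\), \(\delta _1=1\), \(\delta _2=2\) below are hypothetical choices; in the midpoint discretization, the kernel recursion for \(F_{2,i}\) becomes repeated multiplication by the matrix of \(G_1\).

```python
import numpy as np

# Midpoint grid on [0, 1]; a kernel kappa(t, s) acts as the matrix kappa * h.
N = 1000
h = 1.0 / N
t = (np.arange(N) + 0.5) * h

k0 = np.add.outer(t, t)               # k_0(t, s) = t + s
k1 = (k0 * h) @ k0                    # k_1(t, s) = int k(t, tau) k_0(tau, s) dtau
F1 = 1.0 * k0 + 2.0 * k1              # F_{2,1} = delta_1 k_0 + delta_2 k_1

M1 = F1 * h                           # matrix of G_1
M = [np.eye(N), M1]                   # M[i] is the matrix of G_i; G_0 = identity
for i in range(2, 6):
    M.append(M1 @ M[-1])              # recursion (25) in matrix form

# semigroup property (27): G_i G_j = G_{i+j}
ok = np.allclose(M[2] @ M[3], M[5])
```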

Proposition 4.2

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \), be defined, for almost every t, by

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1}k(t,s) x(s)ds, \end{aligned}$$
(28)

where \(\alpha _1,\beta _1\in {\mathbb {R}}\), and \(k(t,s):{\mathbb {R}}\times [\alpha _1,\beta _1]\rightarrow {\mathbb {R}}\) is a measurable function. Consider a polynomial defined by \(F(z)=\delta _0+\delta _1 z +\cdots +\delta _m z^m\), where \(\delta _0,\ldots ,\delta _m \in {\mathbb {R}}\). Let

$$\begin{aligned} k_0(t,s)= & {} k(t,s), \quad k_j(t,s)=\int \limits _{\alpha _1}^{\beta _1} k(t,\tau )k_{j-1}(\tau ,s)d\tau ,\quad j=1,\dots ,m, \nonumber \\ F_{m,1}(k(t,s))= & {} \sum \limits _{j=1}^{m} \delta _j k_{j-1}(t,s),\nonumber \\ F_{m,i}(k(t,s))= & {} \int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,\tau ))F_{m,i-1}(k(\tau ,s))d\tau ,\quad i=2,3,\dots \end{aligned}$$
(29)

Then, we have

$$\begin{aligned} (F^{\circ (n)}(A)x)(t)=\left\{ \begin{array}{ll}\sum \nolimits _{i=0}^{n}\left( {\begin{array}{c}n\\ i\end{array}}\right) \delta ^{n-i}_0 (G_ix)(t),&{}\quad \text{ if } \delta _0\not =0,\\ \\ (G_nx)(t),\quad &{}\quad \text{ otherwise, } \end{array}\right. \end{aligned}$$
(30)

where \(F^{\circ (0)}\) is the identity operator, \(G_i\) is given by (26) and \(\left( {\begin{array}{c}n\\ i\end{array}}\right) =\frac{n!}{i!(n-i)!}.\)

Proof

We proceed by induction, supposing first that \(\delta _0\not =0\). For \(n=0\) the statement is trivial. For \(n=1\), the right hand side is

$$\begin{aligned} \sum \limits _{i=0}^{1}\left( {\begin{array}{c}1\\ i\end{array}}\right) \delta ^{1-i}_0 (G_ix)(t)= & {} \delta _0x(t)+(G_1x)(t)=\delta _0x(t)+\int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,s))x(s)ds\\= & {} \delta _0 x(t)+\int \limits _{\alpha _1}^{\beta _1} \sum \limits _{j=1}^{m} \delta _j k_{j-1}(t,s)x(s)ds=(F(A)x)(t) \end{aligned}$$

as we can check from relation (6).

For \(n=2\) we start from the left hand side. By using Fubini’s theorem,

$$\begin{aligned} (F(F(A))x)(t)&=\delta _0 (F(A)x)(t)+\int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,s))(F(A)x)(s)ds\\&=\delta _0\left( \delta _0 x(t)+\int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,s))x(s)ds \right) \\&\quad + \int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,s)) \left( \delta _0 x(s)+\int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(s,\tau ))x(\tau )d\tau \right) ds\\&=\delta _0^2 x(t)+2\delta _0\int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,s))x(s)ds \\&\quad +\int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,s))\left( \int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(s,\tau ))x(\tau )d\tau \right) ds\\&=\delta _0^2 x(t)+2\delta _0\int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,s))x(s)ds +\int \limits _{\alpha _1}^{\beta _1} F_{m,2}(k(t,s))x(s)ds\\&=\delta _0^2 (G_0x)(t)+2\delta _0 (G_1x)(t)+(G_2x)(t). \end{aligned}$$

Suppose now that the formula is true for \(n=j\). Then, by Fubini's theorem,

$$\begin{aligned} (F^{\circ (j+1)}(A)x)(t)= & {} ((F(F^{\circ (j)})(A))x)(t)\\= & {} \delta _0 (F^{\circ (j)})(A)x)(t)+\int \limits _{\alpha _1}^{ \beta _1} F_{m,1}(k(t,s))(F^{\circ (j)})(A)x)(s)ds\\= & {} \delta _0 \sum \limits _{i=0}^{j}\left( {\begin{array}{c}j\\ i\end{array}}\right) \delta ^{j-i}_0 (G_ix)(t)+\int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,s))\sum \limits _{i=0}^{j}\left( {\begin{array}{c}j\\ i\end{array}}\right) \delta ^{j-i}_0 (G_ix)(s)ds\\= & {} \delta _0^{j+1}x(t)+\delta _0 \sum \limits _{i=1}^{j}\left( {\begin{array}{c}j\\ i\end{array}}\right) \delta ^{j-i}_0 (G_ix)(t)\\{} & {} +\delta _0^j \int \limits _{\alpha _1}^{\beta _1}F_{m,1}(k(t,s))x(s)ds+ \sum \limits _{i=1}^{j}\left( {\begin{array}{c}j\\ i\end{array}}\right) \delta ^{j-i}_0 \int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,s))(G_ix)(s)ds\\= & {} \delta _0^{j+1}x(t)+ \sum \limits _{i=1}^{j}\left( {\begin{array}{c}j\\ i\end{array}}\right) \delta ^{j+1-i}_0 \int \limits _{\alpha _1}^{\beta _1} F_{m,i}(k(t,s))x(s)ds\\{} & {} +\delta _0^j \int \limits _{\alpha _1}^{\beta _1}F_{m,1}(k(t,s))x(s)ds+ \sum \limits _{i=1}^{j}\left( {\begin{array}{c}j\\ i\end{array}}\right) \delta ^{j-i}_0 \int \limits _{\alpha _1}^{\beta _1} F_{m,i+1}(k(t,s))x(s)ds\\= & {} \delta _0^{j+1}x(t)+ j\delta ^j_0\int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,s))x(s)ds \\{} & {} +\sum \limits _{i=2}^{j}\left( {\begin{array}{c}j\\ i\end{array}}\right) \delta ^{j+1-i}_0 \int \limits _{\alpha _1}^{\beta _1} F_{m,i}(k(t,s))x(s)ds+ \delta _0^j \int \limits _{\alpha _1}^{\beta _1}F_{m,1}(k(t,s))x(s)ds \\{} & {} +\sum \limits _{i=1}^{j-1}\left( {\begin{array}{c}j\\ i\end{array}}\right) \delta ^{j-i}_0 \int \limits _{\alpha _1}^{\beta _1} F_{m,i+1}(k(t,s))x(s)ds +\int \limits _{\alpha _1}^{\beta _1} F_{m,j+1}(k(t,s))x(s)ds. \end{aligned}$$

By noticing that

$$\begin{aligned} \sum \limits _{i=1}^{j-1}\left( {\begin{array}{c}j\\ i\end{array}}\right) \delta ^{j-i}_0 \int \limits _{\alpha _1}^{\beta _1} F_{m,i+1}(k(t,s))x(s)ds= \sum \limits _{i=2}^{j}\left( {\begin{array}{c}j\\ i-1\end{array}}\right) \delta ^{j+1-i}_0 \int \limits _{\alpha _1}^{\beta _1} F_{m,i}(k(t,s))x(s)ds \end{aligned}$$

and by using the binomial coefficients formula \( \left( {\begin{array}{c}j\\ i\end{array}}\right) +\left( {\begin{array}{c}j\\ i-1\end{array}}\right) =\left( {\begin{array}{c}j+1\\ i\end{array}}\right) ,\) for \(i=2,\ldots ,j\), we get

$$\begin{aligned} (F^{\circ (j+1)}(A)x)(t)= & {} \sum \limits _{i=0}^{j+1} \left( {\begin{array}{c}j+1\\ i\end{array}}\right) \delta _0^{j+1-i} \int \limits _{\alpha _1}^{\beta _1}F_{m,i}(k(t,s))x(s)ds\\= & {} \sum \limits _{i=0}^{j+1} \left( {\begin{array}{c}j+1\\ i\end{array}}\right) \delta _0^{j+1-i} (G_ix)(t). \end{aligned}$$

In the same way, the formula (30) can be proved when \(\delta _0=0\). \(\square \)

Example 4.1

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1<p<\infty \), be defined, for almost all t, by

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1}k(t,s) x(s)ds, \end{aligned}$$

where \(k(t,s):{\mathbb {R}}\times [\alpha _1,\beta _1]\rightarrow {\mathbb {R}}\) (\(\alpha _1,\beta _1\in {\mathbb {R}}\)) is a measurable function.

Consider a polynomial defined by \(F(z)=\delta _0+\delta _1 z \), where \(\delta _0,\delta _1\in {\mathbb {R}}\). Let

$$\begin{aligned} F_{1,1}(k(t,s))=\delta _1k(t,s), \ F_{1,i}(k(t,s))=\int \limits _{\alpha _1}^{\beta _1} \delta _1k(t,\tau )F_{1,i-1}(k(\tau ,s))d\tau ,\ i=2,3,\ldots \end{aligned}$$

Then, we have

$$\begin{aligned} (F^{\circ (n)}(A)x)(t)= \left\{ \begin{array}{ll} \delta _0^nx(t)+ \sum \nolimits _{i=1}^{n}\left( {\begin{array}{c}n\\ i\end{array}}\right) \delta ^{n-i}_0 \int \nolimits _{\alpha _1}^{\beta _1} F_{1,i}(k(t,s))x(s)ds &{}\quad \text{ if }\ \delta _0\not =0\\ \int \nolimits _{\alpha _1}^{\beta _1} F_{1,n}(k(t,s))x(s)ds &{}\quad \text{ otherwise }.\end{array} \right. \end{aligned}$$

In fact, this follows by Proposition 4.2.

Example 4.2

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1<p<\infty \), be defined, for almost all t, by

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1}k(t,s) x(s)ds, \end{aligned}$$

where \(k(t,s):{\mathbb {R}}\times [\alpha _1,\beta _1]\rightarrow {\mathbb {R}}\) (\(\alpha _1,\beta _1\in {\mathbb {R}}\)) is a measurable function. Consider a polynomial defined by \(F(z)=\delta z^d \), where d is a positive integer and \(\delta \not =0\) is a real number. Let

$$\begin{aligned} k_0(t,s)= & {} k(t,s), \quad k_j(t,s)=\int \limits _{\alpha _1}^{\beta _1} k(t,\tau )k_{j-1}(\tau ,s)d\tau ,\quad j=1,\ldots ,d \\ F_{d,1}(k(t,s))= & {} \delta k_{d-1}(t,s), \\ F_{d,i}(k(t,s))= & {} \int \limits _{\alpha _1}^{\beta _1} \delta k_{d-1}(t,\tau )F_{d,i-1}(k(\tau ,s))d\tau ,\quad i=2,3, \cdots \end{aligned}$$

Then, we have

$$\begin{aligned} (F^{\circ (n)}(A)x)(t)= \int \limits _{\alpha _1}^{\beta _1}F_{d,n}(k(t,s)) x(s)ds. \end{aligned}$$

In fact, this follows immediately from Proposition 4.2.

Corollary 4.1

Consider the operator \(A:L_p([\alpha _1,\beta _1])\rightarrow L_p([\alpha _1,\beta _1])\), \(1\le p\le \infty \) defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds, \end{aligned}$$

where \(\alpha _1,\beta _1 \in {\mathbb {R}}\), \(a\in L_p([\alpha _1,\beta _1])\), \(b\in L_q([\alpha _1,\beta _1])\), \(1\le q\le \infty \), \(\displaystyle \frac{1}{p}+\frac{1}{q}=1\).

Consider \(F(z)=\sum \nolimits _{j=0}^{m} \delta _j z^j\), where \(\delta _0,\ldots ,\delta _m \in {\mathbb {R}}\). Then,

$$\begin{aligned} (F^{\circ (n)}(A)x)(t)=\sum \nolimits _{j=0}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \delta ^{n-j}_0 (G_jx)(t)=\delta _0^n x(t)+\zeta (Ax)(t), \end{aligned}$$
(31)

where

$$\begin{aligned} \zeta= & {} \left\{ \begin{array}{ll} \sum \nolimits _{j=1}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \delta ^{n-j}_0 \sum \nolimits _{i_1=1}^{m}\ldots \sum \nolimits _{i_j=1}^{m} \mu ^{\alpha _j-1}\prod \nolimits _{l=1}^{j}\delta _{i_l}, &{}\quad \text{ if }\ \delta _0\not =0\\ \sum \nolimits _{i_1=1}^{m}\ldots \sum \nolimits _{i_n=1}^{m} \mu ^{\alpha _n-1}\prod \nolimits _{l=1}^{n}\delta _{i_l},&{}\quad \text{ otherwise, } \end{array}\right. \\ \mu= & {} \mu _{[\alpha _1,\beta _1]}(a,b)=\int \limits _{\alpha _1}^{\beta _1} a(s)b(s)ds, \ \alpha _j=\sum \limits _{l=1}^{j} i_l. \end{aligned}$$

Proof

Suppose that \(\delta _0 \not =0\). By applying Proposition 4.2, we have

$$\begin{aligned} k_0(t,s)= & {} a(t)b(s), \\ k_1(t,s)= & {} \int \limits _{\alpha _1}^{\beta _1} a(t)b(\tau )a(\tau )b(s)d\tau =k_0(t,s)\int \limits _{\alpha _1}^{\beta _1} k_0(\tau ,\tau )d\tau =k_0(t,s)\mu , \end{aligned}$$

where \(\mu =\int \limits _{\alpha _1}^{\beta _1} k_0(\tau ,\tau )d\tau =\int \limits _{\alpha _1}^{\beta _1} a(\tau )b(\tau )d\tau \). Computation of the iterated kernels yields

$$\begin{aligned} k_2(t,s)= & {} \int \limits _{\alpha _1}^{\beta _1} k_0(t,\tau )k_1(\tau ,s)d\tau =\mu \int \limits _{\alpha _1}^{\beta _1} k_0(t,\tau )k_0(\tau ,s)d\tau =\mu ^2 k_0(t,s),\\ k_j(t,s)= & {} \mu ^j k_0(t,s),\quad \text{ for } j=0,\ldots ,m. \end{aligned}$$

Now, computing \(F_{m,i}(k(t,s))\), we get

$$\begin{aligned} F_{m,1}(k(t,s))= & {} \sum \limits _{j=1}^{m} \delta _j k_{j-1}=k_0(t,s) \sum \limits _{j=1}^{m} \delta _j \mu ^{j-1} \\ F_{m,2}(k(t,s))= & {} \int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,\tau ))F_{m,1}(k(\tau ,s))d\tau \\= & {} \int \limits _{\alpha _1}^{\beta _1} \left( k_0(t,\tau ) \sum \limits _{j=1}^{m} \delta _j \mu ^{j-1} k_0(\tau ,s) \sum \limits _{l=1}^{m} \delta _l \mu ^{l-1} \right) d\tau \\= & {} k_0(t,s)\sum \limits _{j=1}^{m} \sum \limits _{l=1}^{m} \delta _j \delta _l \mu ^{j+l-1}. \end{aligned}$$

We claim that

$$\begin{aligned} F_{m,j}(k(t,s))= & {} \int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,\tau ))F_{m,j-1}(k(\tau ,s))d\tau \\= & {} k_0(t,s)\sum \limits _{i_1=1}^{m}\ldots \sum \limits _{i_j=1}^{m} \mu ^{\alpha _j-1}\prod \limits _{l=1}^{j}\delta _{i_l}, \end{aligned}$$

where \(\alpha _j=i_1+\cdots +i_j\). We will prove this claim by induction. We have shown that it holds true for \(j=1\) and \(j=2\). Suppose it holds for j. Then we have

$$\begin{aligned} F_{m,j+1}(k(t,s))= & {} \int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,\tau ))F_{m,j}(k(\tau ,s))d\tau \\= & {} \int \limits _{\alpha _1}^{\beta _1} \left( k_0(t,\tau )\sum \limits _{r=1}^{m} \delta _r \mu ^{r-1} k_0(\tau ,s)\sum \limits _{i_1=1}^{m}\ldots \sum \limits _{i_j=1}^{m} \mu ^{\alpha _j-1}\prod \limits _{l=1}^{j}\delta _{i_l}\right) d\tau \\= & {} \sum \limits _{r=1}^{m} \delta _r \mu ^{r-1} \sum \limits _{i_1=1}^{m}\ldots \sum \limits _{i_j=1}^{m} \mu ^{\alpha _j-1}\prod \limits _{l=1}^{j}\delta _{i_l} \int \limits _{\alpha _1}^{\beta _1} k_0(t,\tau )k_0(\tau ,s)d\tau \\= & {} k_0(t,s)\mu \sum \limits _{r=1}^{m} \delta _r \mu ^{r-1} \sum \limits _{i_1=1}^{m}\ldots \sum \limits _{i_j=1}^{m} \mu ^{\alpha _j-1}\prod \limits _{l=1}^{j}\delta _{i_l}\\= & {} k_0(t,s)\sum \limits _{r=1}^{m} \sum \limits _{i_1=1}^{m}\ldots \sum \limits _{i_j=1}^{m} \mu ^{\alpha _j+r-1} \delta _r\prod \limits _{l=1}^{j}\delta _{i_l} \\= & {} k_0(t,s)\sum \limits _{i_1=1}^{m}\ldots \sum \limits _{i_{j+1}=1}^{m} \mu ^{\alpha _{j+1}-1}\prod \limits _{l=1}^{j+1}\delta _{i_l}. \end{aligned}$$

Applying Proposition 4.2, we have

$$\begin{aligned} (F^{\circ (n)}(A)x)(t)=\sum \limits _{j=0}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \delta ^{n-j}_0 (G_jx)(t)=\delta _0^n x(t)+\zeta (Ax)(t), \end{aligned}$$

where

$$\begin{aligned} \zeta= & {} \sum \limits _{j=1}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \delta ^{n-j}_0 \sum \limits _{i_1=1}^{m}\ldots \sum \limits _{i_j=1}^{m} \mu ^{\alpha _j-1}\prod \limits _{l=1}^{j}\delta _{i_l}, \\ (Ax)(t)= & {} \int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds,\quad \mu =\mu _{[\alpha _1,\beta _1]}(a,b)=\int \limits _{\alpha _1}^{\beta _1} a(s)b(s)ds, \qquad \alpha _j=\sum \limits _{l=1}^{j} i_l. \end{aligned}$$

In the same way we prove the case when \(\delta _0=0\). \(\square \)

Example 4.3

Consider the operator \(A:L_p([\alpha _1,\beta _1])\rightarrow L_p([\alpha _1,\beta _1])\), \(1<p<\infty \) defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds, \end{aligned}$$

where \(\alpha _1,\beta _1\) are real numbers and \(a\in L_p([\alpha _1,\beta _1])\), \(b\in L_q([\alpha _1,\beta _1])\), \(1<q<\infty \), \(\displaystyle \frac{1}{p}+\frac{1}{q}=1\). Consider a polynomial \(F(z)=\delta _0+\delta _1 z\), where \(\delta _0,\delta _1\in {\mathbb {R}}\). Then

$$\begin{aligned} (F^{\circ (n)}(A)x)(t)= \delta _0^n x(t)+ \zeta (Ax)(t), \end{aligned}$$

where

$$\begin{aligned} \zeta = \left\{ \begin{array}{ll}\sum _{j=1}^{n} \left( {\begin{array}{c}n\\ j\end{array}}\right) \delta _0^{n-j}\delta _1^j \mu ^{j-1}, &{}\quad \text{ if } \delta _0\not =0\\ \\ \delta _1^n \mu ^{n-1}, &{}\quad \text{ otherwise, } \end{array}\right. \end{aligned}$$

with \( \mu =\mu _{[\alpha _1,\beta _1]}(a,b)\). Indeed, this follows from Corollary 4.1.

Example 4.4

Consider the operator \(A:L_p([\alpha _1,\beta _1])\rightarrow L_p([\alpha _1,\beta _1])\), \(1<p<\infty \) defined as follows

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds, \end{aligned}$$

for almost every t, where \(\alpha _1,\beta _1\) are real numbers and \(a\in L_p([\alpha _1,\beta _1])\), \(b\in L_q([\alpha _1,\beta _1])\), \(1<q<\infty \), \(\displaystyle \frac{1}{p}+\frac{1}{q}=1\). Consider \(F(z)=\delta z^d\), where d is a positive integer and \(\delta \not =0\) is a real number. Then

$$\begin{aligned} (F^{\circ (n)}(A)x)(t)=\delta ^n \mu ^{dn-1}(Ax)(t), \end{aligned}$$

where \(\mu =\mu _{[\alpha _1,\beta _1]}(a,b)\). Indeed, this follows from Corollary 4.1.
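This monomial case can also be checked numerically (a sketch, not from the paper): since \(\delta _0=0\), the iterate is \(G_n\), which for this kernel is the n-fold product of \(\delta A^d\). All data below are illustrative assumptions.

```python
import numpy as np

# trapezoidal grid and rank-one kernel on [0, 1] (illustrative data)
N = 150
t = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1)); w[0] *= 0.5; w[-1] *= 0.5
a, b = 1.0 + t, np.sin(t) + 2.0
M = np.outer(a, b * w)                     # discretized A
mu = float(np.sum(a * b * w))              # μ

delta, d, n = 0.8, 3, 4                    # F(z) = δ z^d
# iterate = n-fold product of δ A^d, i.e. (δ M^d)^n = δ^n M^{dn}
lhs = np.linalg.matrix_power(delta * np.linalg.matrix_power(M, d), n)
rhs = delta ** n * mu ** (d * n - 1) * M   # claimed closed form δ^n μ^{dn-1} A
print(np.allclose(lhs, rhs))  # True
```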

Proposition 4.3

Consider operators A defined by (28) and \(B:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1<p<\infty \) defined for almost every t by

$$\begin{aligned} (Bx)(t)=\int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,s)x(s)ds, \end{aligned}$$

where \({\tilde{k}}:{\mathbb {R}}\times [\alpha _2,\beta _2]\rightarrow {\mathbb {R}}\) is a measurable function. Consider a polynomial \(F(z)=\sum \nolimits _{j=0}^{m}\delta _j z^j\), where \(\delta _0,\ldots ,\delta _m \in {\mathbb {R}}\). Then

$$\begin{aligned} (A^r B^l x)(t)= & {} \int \limits _{\alpha _2}^{\beta _2} k_{r,l}(t,s)x(s)ds, \end{aligned}$$
(32)
$$\begin{aligned} B^l\left( (F^{\circ (n)})^k (A)x\right) (t)= & {} \left\{ \begin{array}{ll} \sum \nolimits _{i=0}^{kn} \delta _0^{kn-i} \left( {\begin{array}{c}kn\\ i\end{array}}\right) \int \nolimits _{\alpha _2}^{\beta _2} {\tilde{k}}_{l-1}(t,s) (G_{i}x)(s)ds, &{}\quad \text{ if } \ \delta _0\not =0\\ \int \nolimits _{\alpha _2}^{\beta _2} {\tilde{k}}_{l-1}(t,s) (G_{nk}x)(s)ds, &{}\quad \text{ otherwise, } \end{array}\right. \end{aligned}$$
(33)

where

$$\begin{aligned} k_{r,l}(t,s)= & {} \int \limits _{\alpha _1}^{\beta _1} k_{r-1}(t,\tau ){\tilde{k}}_{l-1}(\tau ,s)d\tau ,\quad l,r=1,2,3,\ldots ,\\ k_{j}(t,s)= & {} \int \limits _{\alpha _1}^{\beta _1} k(t,\tau )k_{j-1}(\tau ,s)d\tau , \quad j=1,2,\ldots ,\quad k_0(t,s)=k(t,s),\\ {\tilde{k}}_{j}(t,s)= & {} \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}(t,\tau ){\tilde{k}}_{j-1}(\tau ,s)d\tau , \quad j=1,2,\ldots ,\quad {\tilde{k}}_0(t,s)={\tilde{k}}(t,s) \end{aligned}$$

and \(G_i\), \(i=0,\ldots ,nk\) are given by (26).

Proof

From (5) we have that

$$\begin{aligned} (A^rx)(t)=\int \limits _{\alpha _1}^{\beta _1} k_{r-1}(t,s)x(s)ds,\quad (B^lx)(t)=\int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}_{l-1}(t,s)x(s)ds,\quad r,l=1,2,\ldots \end{aligned}$$

Using Fubini's theorem, we have

$$\begin{aligned} (A^r B^lx)(t)= & {} A^r(B^l x)(t)=\int \limits _{\alpha _1}^{\beta _1} k_{r-1}(t,s)\left( \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}_{l-1}(s,\tau )x(\tau )d\tau \right) ds\\= & {} \int \limits _{\alpha _2}^{\beta _2} \left( \int \limits _{\alpha _1}^{\beta _1} k_{r-1}(t,s) {\tilde{k}}_{l-1}(s,\tau )ds\right) x(\tau )d\tau =\int \limits _{\alpha _2}^{\beta _2} k_{r,l}(t,\tau )x(\tau )d\tau . \end{aligned}$$

For the formula (33) we have, for \(l=1,2,\ldots \) and for \(\delta _0\not =0\),

$$\begin{aligned} B^l\left( (F^{\circ (n)})^k (A)x\right) (t)= & {} \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}_{l-1}(t,s)((F^{\circ (n)})^k (A)x)(s)ds\\= & {} \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}_{l-1}(t,s)((F^{\circ (nk)}) (A)x)(s)ds\\= & {} \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}_{l-1}(t,s) \sum \limits _{i=0}^{kn} \delta _0^{kn-i} \left( {\begin{array}{c}kn\\ i\end{array}}\right) (G_{i}x)(s)ds\\= & {} \sum \limits _{i=0}^{kn} \delta _0^{kn-i} \left( {\begin{array}{c}kn\\ i\end{array}}\right) \int \limits _{\alpha _2}^{\beta _2} {\tilde{k}}_{l-1}(t,s) (G_{i}x)(s)ds. \end{aligned}$$

In the same way one can prove the corresponding formula when \(\delta _0=0\). \(\square \)
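Formula (32) can be illustrated numerically (a sketch, not part of the proof). To work on a single grid we take both kernels on the same interval \([0,1]\), an assumption made only so that the composition is defined pointwise; the kernels below are illustrative.

```python
import numpy as np

# one shared trapezoidal grid on [0, 1] (assumption: α1 = α2 = 0, β1 = β2 = 1)
N = 120
t = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1)); w[0] *= 0.5; w[-1] *= 0.5
T, S = np.meshgrid(t, t, indexing="ij")
K = np.exp(-(T - S) ** 2)                # k(t,s), illustrative kernel
Kt = np.cos(T * S)                       # k̃(t,s), illustrative kernel
W = np.diag(w)

r, l = 3, 2
# discrete analogues: A^r ≙ (K W)^r, and the kernel k_{r-1} ≙ (K W)^{r-1} K
k_r1 = np.linalg.matrix_power(K @ W, r - 1) @ K      # k_{r-1}
kt_l1 = np.linalg.matrix_power(Kt @ W, l - 1) @ Kt   # k̃_{l-1}
k_rl = k_r1 @ W @ kt_l1                              # k_{r,l} as in (32)

lhs = np.linalg.matrix_power(K @ W, r) @ np.linalg.matrix_power(Kt @ W, l)
rhs = k_rl @ W                           # the operator with kernel k_{r,l}
print(np.allclose(lhs, rhs))  # True
```

In the discrete model the identity is an exact rearrangement of matrix products, mirroring the Fubini argument in the proof.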

Example 4.5

Let \(1 \le p\le \infty \) and

$$\begin{aligned} A:L_p([\alpha _1,\beta _1])\rightarrow L_p([\alpha _1,\beta _1]), \quad B:L_p([\alpha _2,\beta _2])\rightarrow L_p([\alpha _2,\beta _2]) \end{aligned}$$

be operators defined, for almost every t, by

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds, \quad (Bx)(t)= \int \limits _{\alpha _2}^{\beta _2} c(t)e(s)x(s)ds, \end{aligned}$$

where \(\alpha _i,\beta _i\in {\mathbb {R}}\), \(i=1,2\) and \(a\in L_p([\alpha _1,\beta _1])\), \(c\in L_p([\alpha _2,\beta _2])\), \(b\in L_q([\alpha _1,\beta _1])\), \(e\in L_q([\alpha _2,\beta _2])\), where \(1 \le q\le \infty \) is such that \(\displaystyle \frac{1}{p}+\frac{1}{q}=1\). Consider a polynomial \(F(z)=\sum \nolimits _{j=0}^{m} \delta _j z^j\), where \(\delta _0,\ldots ,\delta _m\in {\mathbb {R}}\). From (10) we have

$$\begin{aligned} (A^r x)(t)= & {} \left( \int \limits _{\alpha _1}^{\beta _1} a(s)b(s)ds \right) ^{r-1} \int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds=\mu ^{r-1}\int \limits _{\alpha _1}^{\beta _1} a(t)b(s)x(s)ds, \\ (B^l x)(t)= & {} \left( \int \limits _{\alpha _2}^{\beta _2} c(s)e(s)ds \right) ^{l-1} \int \limits _{\alpha _2}^{\beta _2} c(t)e(s)x(s)ds=\nu ^{l-1}\int \limits _{\alpha _2}^{\beta _2} c(t)e(s)x(s)ds, \end{aligned}$$

for \(l,r=1,2,\ldots \), where \( \mu =\mu _{[\alpha _1,\beta _1]}(a,b),\ \nu =\mu _{[\alpha _2,\beta _2]}(c,e).\) We therefore have

$$\begin{aligned} (A^r B^l x)(t)= & {} \mu ^{r-1}\nu ^{l-1}\int \limits _{\alpha _1}^{\beta _1} a(t)b(s)\left( \int \limits _{\alpha _2}^{\beta _2} c(s)e(\tau )x(\tau )d\tau \right) ds\\= & {} \mu ^{r-1}\nu ^{l-1}\sigma \int \limits _{\alpha _2}^{\beta _2} a(t)e(\tau )x(\tau )d\tau , \end{aligned}$$

where \(\sigma =\mu _{[\alpha _1,\beta _1]}(b,c).\)
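This rank-one composition identity can be confirmed numerically (a sketch, not from the paper). For a single grid we place both operators on the same interval \([0,1]\), which is an assumption; the test functions are illustrative.

```python
import numpy as np

# shared trapezoidal grid on [0, 1] (assumption so that A B is defined here)
N = 100
t = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1)); w[0] *= 0.5; w[-1] *= 0.5
a, b = np.exp(t), 1.0 + t                # illustrative test functions
c, e = np.cos(t), np.sin(t) + 1.5

A = np.outer(a, b * w)                   # (Ax)(t) = ∫ a(t)b(s)x(s)ds
B = np.outer(c, e * w)                   # (Bx)(t) = ∫ c(t)e(s)x(s)ds
mu = float(np.sum(a * b * w))            # μ
nu = float(np.sum(c * e * w))            # ν
sigma = float(np.sum(b * c * w))         # σ = ∫ b(s)c(s)ds

r, l = 2, 3
lhs = np.linalg.matrix_power(A, r) @ np.linalg.matrix_power(B, l)
rhs = mu ** (r - 1) * nu ** (l - 1) * sigma * np.outer(a, e * w)
print(np.allclose(lhs, rhs))  # True
```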

By using Proposition 4.3 and formula (31) we have

$$\begin{aligned} B^l\left( (F^{\circ (n)})^k(A)x\right) (t)= & {} \int \limits _{\alpha _2}^{\beta _2} \nu ^{l-1} c(t)e(s)\sum \limits _{j=0}^{nk} \delta _0^{nk-j} \left( {\begin{array}{c}nk\\ j\end{array}}\right) (G_jx)(s)ds\\= & {} \nu ^{l-1}\delta _0^{kn}\int \limits _{\alpha _2}^{\beta _2}c(t)e(\tau )x(\tau )d\tau \\{} & {} +\nu ^{l-1} \sum \limits _{j=1}^{nk} \delta _0^{nk-j} \left( {\begin{array}{c}nk\\ j\end{array}}\right) \int \limits _{\alpha _2}^{\beta _2} c(t)e(s)\left( \int \limits _{\alpha _1}^{\beta _1} a(s)b(\tau ) \theta _j x(\tau )d\tau \right) ds\\= & {} c(t)\nu ^{l-1}\left( \delta _0^{nk}\int \limits _{\alpha _2}^{\beta _2}e(\tau )x(\tau )d\tau +\sum \limits _{j=1}^{nk}\delta _0^{nk-j}\left( {\begin{array}{c}nk\\ j\end{array}}\right) \theta _j \gamma \int \limits _{\alpha _1}^{\beta _1} b(\tau )x(\tau )d\tau \right) , \end{aligned}$$

for \(l=1,2,\ldots \) and for \(\delta _0\not =0\), where

$$\begin{aligned} \gamma =\mu _{[\alpha _2,\beta _2]}(a,e)=\int \limits _{\alpha _2}^{\beta _2} a(s)e(s)ds,\ \theta _j=\sum \limits _{i_1=1}^{m}\ldots \sum \limits _{i_j=1}^{m}\mu ^{\alpha _j-1}\prod \limits _{l=1}^{j}\delta _{i_l},\ \alpha _j=\sum \limits _{l=1}^{j} i_l. \end{aligned}$$

If \(\delta _0=0\), then using Proposition 4.3 and formula (31) we have

$$\begin{aligned} B^l\left( (F^{\circ (n)})^k(A)x\right) (t)= & {} \int \limits _{\alpha _2}^{\beta _2} \nu ^{l-1} c(t)e(s) (G_{nk}x)(s)ds =\nu ^{l-1}\theta _{nk} \gamma c(t) \int \limits _{\alpha _1}^{\beta _1} b(\tau )x(\tau )d\tau . \end{aligned}$$
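The case \(\delta _0\not =0\) above can also be verified numerically (a sketch, not part of the example). We again place everything on one interval \([0,1]\), an assumption so all compositions live on a single grid; the polynomial and test functions are illustrative, and the iterate is evaluated via its binomial expansion in \(G_1=\sum _{i\ge 1}\delta _i A^i\).

```python
import itertools, math
import numpy as np

# one trapezoidal grid on [0, 1] for all four functions (assumption)
N = 100
t = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1)); w[0] *= 0.5; w[-1] *= 0.5
a, b, c, e = np.exp(t), 1.0 + t, np.cos(t), np.sin(t) + 1.5

A, B = np.outer(a, b * w), np.outer(c, e * w)
mu = float(np.sum(a * b * w))            # μ
nu = float(np.sum(c * e * w))            # ν
gamma = float(np.sum(a * e * w))         # γ = ∫ a(s)e(s)ds

delta, m = [0.5, 1.1, -0.3], 2           # F(z) = δ0 + δ1 z + δ2 z² (illustrative)
n, k, l = 2, 2, 2

theta = lambda j: sum(mu ** (sum(idx) - 1) * math.prod(delta[i] for i in idx)
                      for idx in itertools.product(range(1, m + 1), repeat=j))

# left-hand side: B^l applied to the nk-th iterate of A
G1 = sum(delta[i] * np.linalg.matrix_power(A, i) for i in range(1, m + 1))
Fnk = sum(math.comb(n * k, j) * delta[0] ** (n * k - j)
          * np.linalg.matrix_power(G1, j) for j in range(n * k + 1))
lhs = np.linalg.matrix_power(B, l) @ Fnk

# right-hand side: the closed form displayed above
coef = sum(delta[0] ** (n * k - j) * math.comb(n * k, j) * theta(j) * gamma
           for j in range(1, n * k + 1))
rhs = nu ** (l - 1) * (delta[0] ** (n * k) * np.outer(c, e * w)
                       + coef * np.outer(c, b * w))
print(np.allclose(lhs, rhs))  # True
```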

Remark 4.1

A different but equivalent way of writing the operator \(\displaystyle ((B^l(F^{\circ (n)})^k(A))x)(t)\) is obtained if, instead of using \((F^{\circ (n)})^k(A)=F^{\circ (nk)}(A)\), we expand the left-hand side directly from the definition. In this case we obtain the following formula for the corresponding composition.

Proposition 4.4

Let \(A:L_p({\mathbb {R}})\rightarrow L_p({\mathbb {R}})\), \(1\le p\le \infty \), be defined as follows, for almost all t,

$$\begin{aligned} (Ax)(t)= \int \limits _{\alpha _1}^{\beta _1}k(t,s) x(s)ds, \end{aligned}$$

where \(\alpha _1,\beta _1\in {\mathbb {R}}\) and \(k:{\mathbb {R}}\times [\alpha _1,\beta _1]\rightarrow {\mathbb {R}}\) is a measurable function. Consider a polynomial defined by \(F(z)=\delta _0+\delta _1 z +\cdots +\delta _m z^m\), where \(\delta _0,\ldots ,\delta _m\in {\mathbb {R}}\). Let

$$\begin{aligned} k_0(t,s)= & {} k(t,s), \quad k_j(t,s)=\int \limits _{\alpha _1}^{\beta _1} k(t,\tau )k_{j-1}(\tau ,s)d\tau ,\quad j=1,\dots ,m \\ F_{m,1}(k(t,s))= & {} \sum \limits _{j=1}^{m} \delta _j k_{j-1}(t,s),\\ F_{m,i}(k(t,s))= & {} \int \limits _{\alpha _1}^{\beta _1} F_{m,1}(k(t,\tau ))F_{m,i-1}(k(\tau ,s))d\tau ,\quad i=2,3,\dots \end{aligned}$$

Then we have

$$\begin{aligned} \left( (F^{\circ (n)})^k(A)x\right) (t)=\left\{ \begin{array}{ll} \sum \nolimits _{i_1=0}^{n}\ldots \sum \nolimits _{i_k=0}^{n} \delta _0^{kn-\alpha _k}\prod \nolimits _{l=1}^{k} \left( {\begin{array}{c}n\\ i_l\end{array}}\right) (G_{\alpha _k}x)(t), &{}\quad \text{ if } \delta _0\not =0 \\ (G_{nk}x)(t), &{}\quad \text{ otherwise } , \end{array}\right. \end{aligned}$$
(34)

where \(G_i\) is given by (26) and \(\alpha _k=\sum \nolimits _{j=1}^{k} i_j,\ k=1,2,\ldots \)

Proof

Suppose that \(\delta _0\not =0\). We proceed by induction. When \(k=1\), the right-hand side is

$$\begin{aligned} \sum \limits _{i_1=0}^{n} \delta _0^{n-\alpha _1}\left( {\begin{array}{c}n\\ i_1\end{array}}\right) (G_{\alpha _1}x)(t)=(F^{\circ (n)}(A)x)(t), \end{aligned}$$

since \(\alpha _1=i_1\).

When \(k=2\) we have

$$\begin{aligned} \left( (F^{\circ (n)})(F^{\circ (n)}(A)x)\right) (t)&= \sum \limits _{i=0}^{n}\left( {\begin{array}{c}n\\ i\end{array}}\right) \delta ^{n-i}_0 \left( G_i (F^{\circ (n)}(A)x) \right) (t)\\&=\sum \limits _{i=0}^{n}\left( {\begin{array}{c}n\\ i\end{array}}\right) \delta ^{n-i}_0 G_i\left( \sum \limits _{j=0}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \delta ^{n-j}_0 (G_j x)(t) \right) \\&=\sum \limits _{i=0}^{n}\left( {\begin{array}{c}n\\ i\end{array}}\right) \delta ^{n-i}_0 \sum \limits _{j=0}^{n}\left( {\begin{array}{c}n\\ j\end{array}}\right) \delta ^{n-j}_0 G_i(G_jx)(t)\\&{\mathop {=}\limits ^{(\text {27})}}\sum \limits _{i=0}^{n}\sum \limits _{j=0}^{n}\left( {\begin{array}{c}n\\ i\end{array}}\right) \delta ^{n-i}_0 \left( {\begin{array}{c}n\\ j\end{array}}\right) \delta ^{n-j}_0 (G_{i+j}x)(t)\\&=\sum \limits _{i_1=0}^{n}\sum \limits _{i_2=0}^n \delta ^{2n-(i_1+i_2)}_0 \prod \limits _{l=1}^2 \left( {\begin{array}{c}n\\ i_l\end{array}}\right) (G_{i_1+i_2}x)(t). \end{aligned}$$

Suppose now that the formula is true for \(k=w\). Then for \(k=w+1\) we have

$$\begin{aligned}{} & {} \left( \left( F^{\circ (n)}\right) ^{w+1} (A)x\right) (t)=\left( F^{\circ (n)}\left( (F^{\circ (n)})^w(A)\right) x\right) (t)\\{} & {} \quad = \sum \limits _{r=0}^{n} \left( {\begin{array}{c}n\\ r\end{array}}\right) \delta ^{n-r}_0 G_r\left( \left( (F^{\circ (n)})^w(A)\right) x\right) (t)\\{} & {} \quad =\sum \limits _{r=0}^{n} \left( {\begin{array}{c}n\\ r\end{array}}\right) \delta ^{n-r}_0 \sum \limits _{i_1=0}^{n}\ldots \sum \limits _{i_w=0}^{n} \delta _0^{wn-\alpha _w}\prod \limits _{l=1}^{w} \left( {\begin{array}{c}n\\ i_l\end{array}}\right) G_r(G_{\alpha _w}x)(t)\\{} & {} \quad {\mathop {=}\limits ^{(\text {27})}}\sum \limits _{i_1=0}^{n}\ldots \sum \limits _{i_{w+1}=0}^{n} \delta _0^{(w+1)n-\alpha _{w+1}}\prod \limits _{l=1}^{w+1} \left( {\begin{array}{c}n\\ i_l\end{array}}\right) (G_{\alpha _{w+1}}x)(t). \end{aligned}$$

In the same way we can prove formula (34) when \(\delta _0=0\). \(\square \)

Example 4.6

Under the conditions of Proposition 4.4, we compute \(((F^{\circ (n)})^k(A)x)(t)\) for \(n=2\), \(k=3\) and for \(\delta _0\not =0\). By using formula (34) we have:

$$\begin{aligned} ((F^{\circ (2)})^3(A)x)(t)= & {} \sum \limits _{i_1=0}^{2}\sum \limits _{i_2=0}^{2}\sum \limits _{i_3=0}^{2} \delta _0^{6-(i_1+i_2+i_3)} \prod \limits _{l=1}^{3}\left( {\begin{array}{c}2\\ i_l\end{array}}\right) (G_{(i_1+i_2+i_3)}x)(t)\\= & {} \delta _0^6 x(t)+2\delta _0^5 (G_1x)(t)+ \delta _0^4 (G_2x)(t)+2\delta _0^5 (G_1x)(t)\\{} & {} + 4\delta _0^4 (G_2x)(t)+ 2\delta _0^3 (G_3x)(t)+\delta _0^4 (G_2x)(t)+2\delta _0^3 (G_3x)(t)\\{} & {} +\delta _0^2 (G_4x)(t)+2\delta _0^5 (G_1x)(t)+4\delta _0^4 (G_2x)(t)+2\delta _0^3 (G_3x)(t)\\{} & {} +4\delta _0^4 (G_2x)(t)+8\delta _0^3 (G_3x)(t) +4\delta _0^2 (G_4x)(t)+ 2\delta _0^3 (G_3x)(t)\\{} & {} +4\delta _0^2 (G_4x)(t)+2\delta _0 (G_5x)(t)+\delta _0^4 (G_2x)(t)+2\delta _0^3 (G_3x)(t)\\{} & {} +\delta _0^2 (G_4x)(t)+2\delta _0^3 (G_3x)(t)+4\delta _0^2 (G_4x)(t)+2\delta _0 (G_5x)(t)\\{} & {} +\delta _0^2 (G_4x)(t)+2\delta _0 (G_5x)(t) +(G_6x)(t)\\= & {} \delta _0^6 x(t)+6\delta _0^5 (G_1x)(t) +15\delta _0^4 (G_2x)(t)+20\delta _0^3 (G_3x)(t)\\{} & {} + 15\delta _0^2 (G_4x)(t)+ 6\delta _0 (G_5x)(t)+(G_6x)(t). \end{aligned}$$

By using formula (30) we get

$$\begin{aligned} ((F^{\circ (2)})^3(A)x)(t)= & {} (F^{\circ (6)}(A)x)(t)=\sum \limits _{i=0}^{6} \delta _0^{6-i} \left( {\begin{array}{c}6\\ i\end{array}}\right) (G_{i}x)(t)\\= & {} \delta _0^6 x(t)+6\delta _0^5 (G_1x)(t) +15\delta _0^4 (G_2x)(t)+20\delta _0^3 (G_3x)(t)\\{} & {} +15\delta _0^2 (G_4x)(t) +6\delta _0 (G_5x)(t)+(G_6x)(t), \end{aligned}$$

which agrees with formula (34).
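The agreement of the two expansions reduces to the Vandermonde-type coefficient identity \(\sum _{i_1+\cdots +i_k=s}\prod _{l=1}^{k}\binom{n}{i_l}=\binom{nk}{s}\), which can be checked directly for \(n=2\), \(k=3\) (a sketch, not from the paper):

```python
import itertools, math

# group the triple sum of (34) by s = i1+i2+i3 and compare with C(6, s)
n, k = 2, 3
coeff = [0] * (n * k + 1)
for idx in itertools.product(range(n + 1), repeat=k):
    coeff[sum(idx)] += math.prod(math.comb(n, i) for i in idx)

print(coeff)                                                   # [1, 6, 15, 20, 15, 6, 1]
print(all(coeff[s] == math.comb(n * k, s) for s in range(n * k + 1)))  # True
```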