1 Introduction

From the point of view of probability theory, this paper concerns the theory of weak generalized convolutions. This theory was introduced and developed by K. Urbanik in his papers [25, 26, 28, 29], motivated by the work of Kingman [9]. Roughly speaking, a generalized convolution is a commutative and associative operation \(\diamond \) on measures such that the generalized convolution \(\delta _x \diamond \delta _y\) of point-mass measures \(\delta _x\) and \(\delta _y\) need not equal \(\delta _{x+y}\), as it does for the usual convolution.

The weak generalized convolutions are defined by weakly stable distributions. The study of weakly stable distributions was initiated by Kucharczak and Urbanik (see [11, 27]) and followed by a series of papers by Urbanik, Kucharczak, Panorska, and Vol’kovich (see, e.g., [10, 12, 19, 30,31,32]). Misiewicz, Oleszkiewicz and Urbanik [16] gave a characterization of weakly stable distributions with non-trivial discrete part and proved some uniqueness properties of weakly stable distributions. For further information on generalized convolutions and weakly stable laws see [2,3,4,5,6,7, 15, 17, 18, 20].

This paper is a continuation of the papers by K. Oleszkiewicz [18], G. Mazurkiewicz and J. Misiewicz [15], and W. Jarczyk and J. Misiewicz [2], and it is a substantial step toward a characterization of the distributions that are stable, but not strictly stable, with respect to a weak generalized convolution.

From the functional equation theory point of view this paper concerns solutions of the following three functional equations.

  • The target one is the equation

    $$\begin{aligned} \begin{array}{c} \forall \, r,s \ge 0 \; \exists \, c(r,s) \ge 0 \; \exists \, d(r,s) \ge 0\, \forall \, t \ge 0 \\ \varphi (rt) \varphi (st) = \varphi (c(r,s)t) \psi (d(r,s)t), \end{array} \end{aligned}$$
    (A)

    where \(\varphi \) and \(\psi \) are unknown symmetric characteristic functions. This equation implies that, without loss of generality, we may assume that for all \(r,s,t >0\) we have \(c(st,rt) = c(rt, st) = tc(s,r)\), \(c(1,0) = 1\), and \(d(st,rt) = d(rt, st) = td(s,r)\), \(d(1,0) = 0\), and by the interpretation of functions \(\psi , \varphi \) as generalized characteristic functions of some probability distributions we have the continuity of all considered functions \(\varphi , \psi , c, d\) (for details see [2, 15]). Since the cases \(d \equiv 0\) and \(d(r,s)=0\) for some \(r,s>0\) were solved in [2, 15] we assume here that \(d(r,s) >0\) for all \(r,s>0\). We solve (A) under the assumption that for fixed \(\psi \) and some functions \(c, d: [0,\infty )^2 \rightarrow [0,\infty )\) there exist at least two different solutions \(\varphi \) of (A).

  • To determine solutions of (A) we will also find all Lebesgue measurable and all Baire measurable functions \(f: (0,\infty ) \rightarrow {\mathbb {R}}\) satisfying the equation

    $$\begin{aligned} \begin{array}{l} \bigl (f(t(x+y)) - f(tx)\bigr ) \bigl (f(x+y) - f(y)\bigr ) \\ \quad = \bigl (f(t(x+y)) - f(ty)\bigr ) \bigl (f(x+y) - f(x)\bigr ), \end{array} \quad x,y,t> 0. \end{aligned}$$
    (B)

    The first step here is to show that any measurable solution of (B) is in fact infinitely differentiable.

  • As an auxiliary result we also solve the equation

    $$\begin{aligned} \phi (rt) \phi (st) = \phi (c(r,s)t), \qquad r,s,t\ge 0, \end{aligned}$$
    (C)

    which is known as the equation characterizing strictly stable characteristic functions. Here, however, we consider it in a more general class of functions and apply the obtained results to study equation (A).
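Before any theory is developed, the equations above can be probed numerically. The following sketch (ours, not taken from the papers cited) checks that two natural candidates, power functions together with their affine images, and the logarithm, satisfy equation (B) on a sample grid:

```python
import math

def lhs_minus_rhs(f, x, y, t):
    """Difference of the two sides of equation (B) for a candidate f."""
    s = x + y
    lhs = (f(t * s) - f(t * x)) * (f(s) - f(y))
    rhs = (f(t * s) - f(t * y)) * (f(s) - f(x))
    return lhs - rhs

candidates = [
    lambda u: u ** 0.7,          # power function
    lambda u: 3.0 * u ** 2 - 1,  # affine image of a power function
    math.log,                    # logarithm
]

grid = [0.3, 1.0, 2.5]
for f in candidates:
    for x in grid:
        for y in grid:
            for t in grid:
                assert abs(lhs_minus_rhs(f, x, y, t)) < 1e-8
```

Indeed, for \(f(u) = u^{\alpha }\) both sides of (B) reduce to \(t^{\alpha }\bigl ((x+y)^{\alpha } - x^{\alpha }\bigr )\bigl ((x+y)^{\alpha } - y^{\alpha }\bigr )\), and for \(f = \ln \) both sides equal \(\ln \frac{x+y}{x} \ln \frac{x+y}{y}\).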

We will use the following notation. By \({\mathcal {P}}({\mathbb {E}})\) we denote the set of all probability measures on a separable Banach space \({\mathbb {E}}\) (with dual \({\mathbb {E}}^{*}\)). For simplicity we write \({\mathcal {P}}\) for the set of all probability measures on \({\mathbb {R}}\) and \({\mathcal {P}}_{+}\) for the set of probability measures on \([0,\infty )\). The symbol \(\delta _{{\textbf{x}}}\) denotes the probability measure concentrated at the point \({\textbf{x}} \in {\mathbb {E}}\).

Given a random vector (or random variable) \({\textbf{X}}\) we write \({\mathcal {L}} ({\textbf{X}})\) for the distribution of \({\textbf{X}}\). For \(\lambda \in {\mathcal {P}}\), \(\lambda = {\mathcal {L}} (\theta )\) for some random variable \(\theta \), we define \(|\lambda |{:}{=} {\mathcal {L}} (|\theta |)\). If \(\mu = {\mathcal {L}}({\textbf{X}})\in {\mathcal {P}} ({\mathbb {E}})\), then the characteristic function \({\widehat{\mu }}: {\mathbb {E}}^{*} \rightarrow {\mathbb {C}}\) of the measure \(\mu \) (of the random vector \({\textbf{X}}\)) is defined by

$$\begin{aligned} {\widehat{\mu }}(\xi ) = {\textbf{E}} \exp \left( i\langle \xi , {\textbf{X}}\rangle \right) = \int _{{\mathbb {E}}} \exp \left( i \langle \xi , x \rangle \right) \mu (dx), \end{aligned}$$

where \(\langle \xi , x\rangle \) is the value of the linear functional \(\xi \) on the element x. If \({\mathbb {E}} = {\mathbb {E}}^{*} = {\mathbb {R}}^n\) then we have \(\langle \xi ,x\rangle = \sum _{k=1}^n \xi _k x_k\), for \(x=(x_1,\dots ,x_n)\) and \(\xi = (\xi _1,\dots ,\xi _n)\). Given random vectors \({\textbf{X}}, {\textbf{Y}}\), we write \({\textbf{X}} {\mathop {=}\limits ^{d}} {\textbf{Y}}\) whenever \({\mathcal {L}}({\textbf{X}}) = {\mathcal {L}} ({\textbf{Y}})\) (here and in what follows \({\mathop {=}\limits ^{d}}\) denotes the equality of distributions). If \({\textbf{X}}\) and \({\textbf{Y}}\) are independent random vectors, then \({\mathcal {L}}({\textbf{X}} + {\textbf{Y}})\) is the convolution of \({\mathcal {L}}({\textbf{X}})\) and \({\mathcal {L}}({\textbf{Y}})\), denoted by \({\mathcal {L}}({\textbf{X}}) *{\mathcal {L}}({\textbf{Y}})\).

For \(t \in {\mathbb {R}}\), the rescaling operator \(T_t: {\mathcal {P}}({\mathbb {E}}) \rightarrow {\mathcal {P}}({\mathbb {E}})\) is defined as follows:

$$\begin{aligned} T_t \mu (B) = {\left\{ \begin{array}{ll} \mu ({B/t}) & \hbox { if } t \in {\mathbb {R}} \setminus \{0\}, \\ \delta _0(B) & \hbox { if } t=0, \end{array}\right. } \end{aligned}$$

for every Borel subset B of \({\mathbb {E}}\). It is easy to see that if \(\mu = {\mathcal {L}} ({\textbf{X}})\) then \(T_t \mu = {\mathcal {L}}(t {\textbf{X}})\). The scale mixture \(\mu \circ \lambda \) of the measure \(\mu \in {\mathcal {P}} ({\mathbb {E}})\) with respect to the measure \(\lambda \in {\mathcal {P}}\) is defined by

$$\begin{aligned} \mu \circ \lambda (B) = \int _{{\mathbb {R}}} T_t \mu (B) \lambda (dt). \end{aligned}$$

If \(\mu = {\mathcal {L}}({\textbf{X}})\) and \(\lambda = {\mathcal {L}}(\theta )\) with independent \({\textbf{X}}\) and \(\theta \), then \(\mu \circ \lambda = {\mathcal {L}} ({\textbf{X}} \theta )\).
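For a discrete mixing law \(\lambda = \sum _i p_i \delta _{s_i}\) this identity reads, at the level of characteristic functions, \(\widehat{\mu \circ \lambda }(t) = \sum _i p_i {\widehat{\mu }}(t s_i)\). A minimal numerical sketch, assuming a standard Gaussian \(\mu \) and a hypothetical two-atom \(\lambda \) of our choosing, compares this mixture formula with a direct computation of \({\textbf{E}} \exp (it \theta X)\):

```python
import math

def cf_gauss(t):
    """Characteristic function of the standard normal kernel mu."""
    return math.exp(-t * t / 2.0)

# hypothetical discrete mixing law: lambda = 0.3*delta_1 + 0.7*delta_2
mixing = [(0.3, 1.0), (0.7, 2.0)]   # pairs (weight p_i, atom s_i)

def cf_mixture(t):
    """cf of the scale mixture via the formula sum_i p_i psi(t s_i)."""
    return sum(p * cf_gauss(t * s) for p, s in mixing)

def cf_direct(t, n=20000, a=10.0):
    """E exp(i t theta X) by trapezoidal integration over the normal density;
    the imaginary part vanishes by symmetry, so only the cosine is integrated."""
    dx = 2.0 * a / n
    total = 0.0
    for p, s in mixing:
        acc = 0.0
        for k in range(n + 1):
            x = -a + k * dx
            w = 0.5 if k in (0, n) else 1.0
            acc += w * math.cos(t * s * x) * math.exp(-x * x / 2.0)
        total += p * acc * dx / math.sqrt(2.0 * math.pi)
    return total

for t in [0.2, 0.9, 1.7]:
    assert abs(cf_mixture(t) - cf_direct(t)) < 1e-8
```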

A random vector \({{\textbf{X}}}\) with the distribution \(\mu \) on a real separable Banach space \({\mathbb {E}}\) is called weakly stable if

$$\begin{aligned} \forall \, a,b \in {\mathbb {R}} \,\, \exists \, \theta \, \qquad a {{\textbf{X}}} + b \, {{\textbf{X}}}' {\mathop {=}\limits ^{d}} {{\textbf{X}} } \theta , \end{aligned}$$

where \({{\textbf{X}}}'\) is an independent copy of \({{\textbf{X}}}\) and the random variable \(\theta \) is independent of \({{\textbf{X}}}\). For measures on the positive half-line, K. Urbanik called such distributions \({\mathfrak {B}}\)-stable probability distributions (see [24]). It was proved in [16] that \({{\textbf{X}}}\) is weakly stable if and only if the following (second) condition holds:

$$\begin{aligned} \forall \, \theta _1, \theta _2 \,\, \exists \, \theta \quad \theta _1 {{\textbf{X}}} + \theta _2 {{\textbf{X}}}' {\mathop {=}\limits ^{d}} {{\textbf{X}}} \theta , \end{aligned}$$

where \(\theta _1, \theta _2\) are real random variables such that \(\theta _1, \theta _2, {{\textbf{X}}}, {{\textbf{X}}}'\) are independent and the random variable \(\theta \) is independent of \({{\textbf{X}}}\). In the language of probability measures, defining \(\mu \) as a weakly stable measure we use the “second condition” and write:

$$\begin{aligned} \forall \, \lambda _1, \lambda _2 \in {\mathcal {P}} \,\, \exists \, \lambda \in {\mathcal {P}} \quad (\mu \circ \lambda _1) *(\mu \circ \lambda _2) = \mu \circ \lambda , \end{aligned}$$

for \(\lambda , \lambda _i\) being, respectively, the distributions of \(\theta , \theta _i\), \(i=1,2\). It was shown in [16] that the measure \(\lambda \) is uniquely determined if the measure \(\mu \) is not symmetric. For a symmetric measure \(\mu \) we have only the uniqueness of the measure \(|\lambda |\).

It was proved in [16] that if a weakly stable distribution \(\mu \) contains a discrete part, then it is discrete and either \(\mu = \delta _0\), or \(\mu = \frac{1}{2} \delta _a + \frac{1}{2} \delta _{-a}\) for some \(a \in {\mathbb {E}}{\setminus } \{0\}\). From now on we will assume that the considered weakly stable measure \(\mu \) is non-trivial in the sense that it is not discrete.
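As a concrete illustration (our example, not needed for the argument): a centered Gaussian variable is weakly stable, and the mixing variable \(\theta \) may be taken deterministic, since \(aX + bX' {\mathop {=}\limits ^{d}} \sqrt{a^2+b^2}\, X\) for independent \(X, X' \sim N(0,1)\). At the level of characteristic functions this is a one-line identity, checked below on a small grid:

```python
import math

def cf_gauss(t):
    """Characteristic function of the standard normal: E exp(itX)."""
    return math.exp(-t * t / 2.0)

def cf_sum(a, b, t):
    """cf of aX + bX' for independent X, X' ~ N(0,1)."""
    return cf_gauss(a * t) * cf_gauss(b * t)

def cf_scaled(a, b, t):
    """cf of theta*X for the deterministic theta = sqrt(a^2 + b^2)."""
    return cf_gauss(math.hypot(a, b) * t)

for a, b in [(1.0, 2.0), (0.5, 0.5), (3.0, -1.0)]:
    for t in [0.1, 0.7, 1.3]:
        assert abs(cf_sum(a, b, t) - cf_scaled(a, b, t)) < 1e-12
```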

We can now define a weak generalized convolution \(\otimes = \otimes _{\mu }\) for any nontrivial weakly stable measure \(\mu \).

Definition 1.1

Let \({\textbf{X}}\) be a nontrivial random vector with the weakly stable distribution \(\mu \). The weak generalized convolution \(\otimes = \otimes _{\mu }\) of measures \(\lambda _1,\lambda _2 \in {\mathcal {P}}\) is defined by

$$\begin{aligned} \lambda _1 \otimes \lambda _2 = {\left\{ \begin{array}{ll} \lambda & \text { if } \mu \text { is not symmetric,} \\ |\lambda | & \text { if } \mu \text { is symmetric}, \end{array}\right. } \end{aligned}$$

where \(\lambda \in {\mathcal {P}}\) is such that \((\mu \circ \lambda _1) *(\mu \circ \lambda _2) = \mu \circ \lambda \). For two independent random variables \(\theta _1\) and \(\theta _2\) with distributions \(\lambda _1\) and \(\lambda _2\), respectively, the weak generalized sum \(\theta _1 \oplus \theta _2\) is a random variable defined by

$$\begin{aligned} \theta _1 {{\textbf{X}}} + \theta _2 \textbf{X}' {\mathop {=}\limits ^{d}} {{\textbf{X}}} \left( \theta _1 \oplus \theta _2 \right) , \end{aligned}$$

where \(\theta _1, \theta _2, {{\textbf{X}}}, {{\textbf{X}}}'\) are independent, the random variable \(\theta _1 \oplus \theta _2\) is independent of \({{\textbf{X}}}\) and \({\mathcal {L}}(\theta _1 \oplus \theta _2) = \lambda _1 \otimes \lambda _2\).

Notice that the random variable \(\theta _1 \oplus \theta _2\) is not defined by an explicitly written operation on the variables \(\theta _1, \theta _2\) which in the case of classical convolution is given by \(\theta _1 + \theta _2\). The object \(\theta _1 \oplus \theta _2\) is defined only up to equality of distributions.

The operation \(\otimes \) in \({\mathcal {P}}\) is commutative and associative. Moreover, as shown in [17], the following conditions hold:

(i) the measure \(\delta _0\) is the unit element, i.e. \(\delta _0 \otimes \lambda = \lambda \) for all \(\lambda \in {\mathcal {P}}\) if \(\mu \) is not symmetric and \(\delta _0 \otimes \lambda = |\lambda |\) for all \(\lambda \in {\mathcal {P}}\) if \(\mu \) is symmetric;

(ii) \((p \lambda _1 + q \lambda _2) \otimes \lambda = p (\lambda _1 \otimes \lambda ) + q (\lambda _2 \otimes \lambda )\), whenever \(\lambda , \lambda _1, \lambda _2 \in {\mathcal {P}}\) and \(p\ge 0\), \(q\ge 0\), \(p+q = 1\) (linearity);

(iii) \((T_a \lambda _1) \otimes (T_a \lambda _2) = T_a (\lambda _1 \otimes \lambda _2)\) for any \(\lambda _1, \lambda _2 \in {\mathcal {P}}\) and \(a>0\) (homogeneity);

(iv) if \(\lambda _n \rightarrow \lambda _0\) then \(\lambda _n \otimes \lambda \rightarrow \lambda _0 \otimes \lambda \) for all \(\lambda \in {\mathcal {P}}\) (continuity with respect to weak convergence of measures).

The idea of generalized convolutions has been extensively studied since it was introduced by K. Urbanik in 1964 [25]. The definition proposed by K. Urbanik is as follows:

A commutative and associative binary operation \(\diamond : {\mathcal {P}}_{+} \times {\mathcal {P}}_{+} \rightarrow \mathcal { P}_{+}\) is called a generalized convolution if it satisfies conditions (i)–(iv) with \(\otimes \) replaced by \(\diamond \) and the following condition holds:

(v) there exists a sequence \((c_n)_{n\in {\mathbb {N}}}\) of positive numbers such that the sequence \((T_{c_n} \delta _1^{\diamond n})_{n \in {\mathbb {N}}}\) weakly converges to a measure different from \(\delta _0\).

2 Stable Distributions in the Sense of Generalized Convolution

Definition 2.1

Let \(\mu \) be a non-trivial symmetric, weakly stable measure on a separable Banach space \({\mathbb {E}}\). A measure \(\lambda \in {\mathcal {P}}\) is called stable with respect to the weak generalized convolution \(\otimes = \otimes _{\mu }\) if

$$\begin{aligned} \forall \,\, r,s\ge 0 \,\, \exists \, c(r,s)\ge 0, d(r,s) \in {\mathbb {R}} \quad (T_r \lambda ) \otimes (T_s \lambda ) = (T_{c(r,s)} \lambda ) \otimes \delta _{d(r,s)}. \end{aligned}$$

If for every \(r,s>0\) we have \(d(r,s) = 0\), then we say that \(\lambda \) is strictly stable with respect to \(\otimes \).

If the symbol \(\otimes \) in the above equality denotes the classical convolution, then Definition 2.1 reduces to the classical functional equation defining stable distributions; for details see e.g. [22, 33]. It can be easily seen that the weak generalized convolution defined by a symmetric weakly stable random vector \({\textbf{X}}\) coincides with the one defined by any non-trivial one-dimensional projection of \({\textbf{X}}\), and thus without loss of generality we may assume that \(\mu \in {\mathcal {P}}\). According to [2] we can also assume that c and d are homogeneous.

The condition described in Definition 2.1 can be written in the language of generalized characteristic functions for the generalized convolution \(\otimes _{\mu }\):

$$\begin{aligned} \varphi (rt) \varphi (st) = \varphi (c(r,s)t) \psi (d(r,s)t), \quad r,s\ge 0, \,\,t \in {\mathbb {R}}, \end{aligned}$$
(A)

where \(\psi \) is the characteristic function of the measure \(\mu \) and \(\varphi \) is the characteristic function of the measure \(\mu \circ \lambda \), i.e.

$$\begin{aligned} \varphi (t) = \int _{{\mathbb {R}}} \psi (ts) \lambda (ds). \end{aligned}$$

It was shown in [2, 15] that by the probabilistic interpretation of equation (A) we have: \(c(rt,st) = c(st,rt) = t c(r,s)\) for all \(r,s,t \ge 0\), \(c(1,0) = 1\), \(d(rt,st) = d(st,rt) = t d(r,s)\) for all \(r,s,t \ge 0\) and \(d(1,0) = 0\). Some partial solutions of equation (A) are already known:

  • If \(d(r,s) = 0\) for all \(r,s>0\), i.e. if the measure \(\lambda \) is strictly stable with respect to the generalized convolution \(\otimes _{\mu }\), then, as K. Urbanik proved (see [25]), for every generalized convolution \(\otimes \) (not only for a weak generalized convolution) there exist \(\alpha > 0\) and \(A\ge 0\) such that \(\varphi (t) = \exp \left( -At^{\alpha } \right) \) for all \(t\ge 0\). It was also shown in [25, 26] that \(\alpha \le \kappa (\otimes )\), where \(\kappa (\otimes )\) is the characteristic exponent of the generalized convolution \(\otimes \).

  • The case when the function \(\psi \) is fixed (up to a scale) and \(\psi (t) = \exp \left( - t^p\right) \) for every \(t\ge 0\) was considered by K. Oleszkiewicz in [18] for \(p=2\) and by G. Mazurkiewicz and J. Misiewicz in [15] for arbitrary \(p \in (0,2]\). In both cases the authors showed that there exist \(q >0\) and positive constants \(A,B >0\) such that \(\varphi (t) = \exp \left( - At^p - Bt^q\right) \). The difficulty is that not all configurations of the parameters \(p, q, A, B\) are possible, and large parts of both papers are devoted to the description of the admissible configurations.

  • If \(d \not \equiv 0\) but there exist \(r,s>0\) such that \(d(r,s) = 0\), then W. Jarczyk and J.K. Misiewicz showed in [2] that there exist \(\alpha > 0\) and continuous functions \(H,K: (0,\infty ) \rightarrow (0,\infty )\) such that \(H(t) = H(rt) = H(st)\), \(K(t) = K(rt) = K(st)\) for every \(t>0\) and \(\varphi (t) = \exp \left( - t^{\alpha } H(t)\right) \), \(\psi (t) = \exp \left( - t^{\alpha } K(t)\right) \). However, if the group generated by the set \(\{ {r/s}: r,s>0, d(r,s)=0\}\) is dense in \((0,\infty )\), then the functions \(H, K\) are constant and, consequently, the only possible stable distributions with respect to \(\otimes _{\mu }\) are \(\delta _x\) for some \(x>0\).
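For the case \(\psi (t) = \exp (-t^p)\), \(\varphi (t) = \exp (-At^p - Bt^q)\) considered in [15, 18], the algebra can be made explicit: matching the two exponents in (A) forces \(c(r,s) = (r^q+s^q)^{1/q}\) and \(d(r,s) = A^{1/p}\bigl ((r^p+s^p) - (r^q+s^q)^{p/q}\bigr )^{1/p}\). This reconstruction of \(c\) and \(d\) is ours, and the parameter configuration below is a hypothetical choice made only to display the algebra; the admissibility analysis is the subject of [15, 18]. The sketch checks equation (A) on a grid:

```python
import math

p, q = 1.5, 2.0   # exponents with 0 < p < q (hypothetical choice)
A, B = 0.8, 1.3   # positive scale constants (hypothetical choice)

def phi(t):
    return math.exp(-A * t ** p - B * t ** q)

def psi(t):
    return math.exp(-t ** p)

def c(r, s):
    # homogeneous of degree 1; matches the t^q terms in (A)
    return (r ** q + s ** q) ** (1.0 / q)

def d(r, s):
    # nonnegative because the p-norm dominates the q-norm for q > p
    return (A * ((r ** p + s ** p) - (r ** q + s ** q) ** (p / q))) ** (1.0 / p)

grid = [0.4, 1.0, 2.2]
for r in grid:
    for s in grid:
        for t in grid:
            lhs = phi(r * t) * phi(s * t)
            rhs = phi(c(r, s) * t) * psi(d(r, s) * t)
            assert abs(lhs - rhs) < 1e-12
```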

In this paper we consider the missing case: \(d(r,s) \ne 0\) for all \(r,s>0\). The main results of this paper are obtained for continuous functions \(c,d, \varphi , \psi \) under the following assumptions:

$$\begin{aligned} \left\{ \begin{array}{l} c(rt,st) = t c(r,s), \, d(rt, st) = t d(r,s) \text { for all } r,s,t>0, \\ d(r,s)= d(s,r)> 0, d(r,0) = 0 \quad \text { for all } r,s >0, \\ \text {functions } \psi \in \Phi \text { and } c,d \text { are fixed}, \\ \exists \, \varphi _1, \varphi _2 \in \Phi , \varphi _1 \ne \varphi _2, \text { satisfying equation (A)}, \end{array} \right\} \end{aligned}$$
(1)

where \(\Phi \) usually denotes the set of all generalized characteristic functions with respect to \(\diamond \) but for this paper it is enough to assume that

$$\begin{aligned} \Phi = \left\{ \varphi : {\mathbb {R}} \rightarrow {\mathbb {R}} \,\Bigg |\, \begin{array}{l} \varphi (0)=1,\ \varphi (t) = \varphi (-t) \ \ \text {for } t \in {\mathbb {R}}, \\ \varphi \hbox { is uniformly continuous on } {\mathbb {R}}, \\ \left( \exists \, t_n \searrow 0 \ \ \varphi (t_n) = 1\right) \Rightarrow \varphi \equiv 1 \end{array} \right\} . \end{aligned}$$

The last condition in the description of the set \(\Phi \) replaces the condition of positive definiteness of the characteristic function, which is much more difficult to check. In this work, such a restriction is not essential (see e.g. [14]). We will be solving equation (A) in the set \(\Phi _2\) of pairs of characteristic functions \((\varphi , \psi )\) belonging to the set

$$\begin{aligned} \Phi _2 = \left\{ (\varphi , \psi ) \in \Phi ^2: \exists \,\, \lambda \in {\mathcal {P}}_{+} \,\,\, \forall \,\, t \in {\mathbb {R}}\,\,\,\varphi (t) = \int _{{\mathbb {R}}} \psi (ts) \lambda (ds) \right\} . \end{aligned}$$

The assumption about the existence of two different solutions \(\varphi _1, \varphi _2\) of (A) is meaningful. It is easy to see that if the pair \((\varphi (\cdot ), \psi (\cdot ))\in \Phi _2\) is a solution of equation (A), then for each \(a>0\) the pair of rescalings \((\varphi (a \cdot ), \psi (a \cdot ))\) also belongs to \(\Phi _2\) and is a solution of (A). However, if we fix the functions \(c, d\) and \(\psi \), then it may happen that equation (A) has exactly one solution (if any).

Because of the symmetry of the functions considered, we restrict the functions \(\varphi , \psi \) to the nonnegative half-line \([0,\infty )\).

3 Equation (C) and its Solution

For any real characteristic function \(\varphi \) we define \(a_{\varphi } {:}{=} \inf \left\{ t > 0: \varphi (t) = 0\right\} \). We start with the following useful observation.

Lemma 3.1

Assume that the functions \(c,d,\psi \) are fixed. If \(\varphi \) is a solution of equation (A) and \(a_{\varphi }<\infty \), then equation (A) has exactly one solution \(\varphi \) and there exists \(\alpha >0\) such that

(1) \(c(r,s) = \max \{ r,s\}\) and \(d(r,s) = \alpha \min \{r,s\}\) for all \(r,s \ge 0\);

(2) \(\varphi (t) = \psi (\alpha t)\) for all \(t \ge 0\).

Proof

Of course \(a_{\varphi }>0\), since \(\varphi (0)=1\) and \(\varphi \) is continuous. Recall that \(d(1,0) = 0\) and \(d(r,s)>0\) for all \(r,s>0\); in particular \(d(1,1) >0\). We define \(\psi _0(r) {:}{=} \psi (d(1,1) r)\) and \(d_0(r,s) {:}{=} {{d(r,s)}/{d(1,1)}}\) for all \(r,s\ge 0\). Then \(d_0(1,1) = 1\) and equation (A) takes the form

$$\begin{aligned} \varphi (rt) \varphi (st) = \varphi (c(r,s)t) \psi _0(d_0(r,s)t), \quad r,s\ge 0,\,\, t \in {\mathbb {R}}. \end{aligned}$$

Since \(a_{\varphi }< \infty \), for every \(s,r\in [0,a_{\varphi })\) we have

$$\begin{aligned} \varphi (s) \varphi (r) = \varphi (c(s,r))\, \psi _0(d_0(s,r)) \ne 0. \end{aligned}$$
(2)

Because of the continuity of c and the equality \(c(0,0)=0\) we see that

$$\begin{aligned} \begin{array}{ll} c(s,r) < a_{\varphi } &\quad \hbox { for all } s,r \in (0, a_{\varphi }), \\ \psi _0(d_0(s,r)) \ne 0 &\quad \hbox { for all } s,r \in (0, a_{\varphi }). \end{array} \end{aligned}$$

If \(c(1,s) > 1\) for some \(s \in (0,1)\), then by (2) we have

$$\begin{aligned} \varphi \left( \frac{a_{\varphi }}{c(1,s)}\right) \varphi \left( \frac{a_{\varphi }s}{c(1,s)}\right) = \varphi (a_{\varphi }) \psi _0 \left( \frac{a_{\varphi }}{c(1,s)}\, d_0(1,s) \right) = 0, \end{aligned}$$

contrary to the definition of \(a_{\varphi }\), since \({a_{\varphi }}/{c(1,s)} < a_{\varphi }\) and \({a_{\varphi }s}/{c(1,s)} <a_{\varphi }\). Consequently, \(c(1,s) \le 1\) for every \(s \in (0,1)\). If \(c(1,s) <1\) for some \(s \in (0,1)\) then, applying (2), for every \(r \ge 0\) we obtain

$$\begin{aligned} \varphi (r) \varphi (r s) = \varphi (r c(1,s))\, \psi _0( r d_0(1,s)). \end{aligned}$$
(3)

For every \(r < a_{\varphi }\) the left hand side of this equality is positive, and thus \(a_{\psi _0} \ge a_{\varphi } d_0(1,s)\). On the other hand

$$\begin{aligned} \varphi (a_{\varphi }) \varphi (a_{\varphi } s) = \varphi (a_{\varphi } c(1,s))\, \psi _0(a_{\varphi } d_0(1,s)) =0. \end{aligned}$$

Since \(\varphi (a_{\varphi }) =0\) and \(\varphi (a_{\varphi } c(1,s)) \ne 0\) we conclude that \(\psi _0 (a_{\varphi } d_0(1,s)) =0\) and hence \(a_{\varphi } d_0(1,s) \ge a_{\psi _0}\). Consequently, \(a_{\psi _0} = a_{\varphi } d_0(1,s)\). At the same time, for all \(r\ge 0\) we have

$$\begin{aligned} \varphi (r)^2 = \varphi (r c(1,1)) \psi _0(r d_0(1,1)) = \varphi (r c(1,1)) \, \psi _0(r). \end{aligned}$$
(4)

This implies that for every \(r \in (0, a_{\varphi })\) we have \(\psi _0(r) \ne 0\), hence \(a_{\psi _0} \ge a_{\varphi }\). By equalities (4) we see that either \(c(1,1) = 1\), and then \(\psi _0 = \varphi \) on \((0,a_{\varphi }]\), \(\psi _0 (a_{\varphi }) = 0\) and \(a_{\psi _0} = a_{\varphi } = a_{\varphi } d_0(1,s)\), or \(c(1,1) < 1\), and then \(\psi _0(a_{\varphi }) =0\) and again \(a_{\psi _0} = a_{\varphi } = a_{\varphi } d_0(1,s)\).

Now, dividing equality (4) by equality (3) side by side, for \(r \ge 0\) (provided the denominators are nonzero) we obtain

$$\begin{aligned} \frac{\varphi (r)}{\varphi (rs)} = \frac{\varphi (r c(1,1))}{\varphi (rc(1,s))}. \end{aligned}$$

The left-hand side of this equality is nonzero on \((0, a_{\varphi })\) and zero at \(a_{\varphi }\); thus the right-hand side has the same properties, and consequently \(c(1,1) = 1\) in either case. By (4) we get \(\varphi = \psi _0\) on \((0,\infty )\), and by (3) we obtain \(\psi _0(r s) = \psi _0(r c(1,s))\) on \((0, \infty )\). Rewriting the last equality in terms of a random variable X with the characteristic function \(\psi _0\) we get \(sX {\mathop {=}\limits ^{d}} c(1,s) X\), hence \(s = c(1,s)\).

Finally, for each \(s \in [0,1]\) we see that either \(c(1,s) = 1\), or \(c(1,s) = s\). Since the function c is continuous and \(c(1,0) = 1\) we conclude that \(c(1,s) = 1\) for every \(s \in [0,1]\) and for \(0<r<s\)

$$\begin{aligned} c(r,s) = s c\left( 1, \frac{r}{s} \right) = s = \max \{ r,s\}. \end{aligned}$$

By (2) we get \(d_0(r,s) = \min \{r,s\} = {{d(r,s)}/{d(1,1)}}\) for \(r,s \ge 0\). Coming back to the functions \(\psi , d\) and using the fact that \(\varphi = \psi _0\), for every \(r,s>0\) we get

$$\begin{aligned} \psi (\alpha r) \psi (\alpha s) = \psi (\alpha \max \{ r,s\}) \psi (\alpha \min \{ r,s\}), \end{aligned}$$
(5)

where \(\alpha = d(1,1) >0\). \(\square \)

Remark 1

Assume that \(c\) is different from the \(\max \) function or that \(d\) is not proportional to the \(\min \) function. It follows from Lemma 3.1 that if \(\varphi \) is a real-valued characteristic function satisfying equation (A) with some \(\psi \), then \(a_\varphi = a_\psi = \infty \).

Remark 2

Note that every characteristic function \(\psi \) satisfies equation (5). This means that for each generalized convolution with probability kernel \(\psi \) the point-mass measure \(\delta _{\alpha }\) is stable, but not strictly stable, with the characteristic representation (cf. Definition 2.1)

$$\begin{aligned} T_r \delta _{\alpha } \diamond T_s \delta _{\alpha } = T_{\max \{r,s\}} \delta _{\alpha } \diamond T_{\min \{r,s\}} \delta _{\alpha }. \end{aligned}$$
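The representation above holds because \(\{\max \{r,s\}, \min \{r,s\}\} = \{r,s\}\), so both sides multiply the same two values. A sketch checking the corresponding instance of equation (A), with \(c = \max \), \(d = \alpha \min \) and \(\varphi = \psi (\alpha \, \cdot )\) as in Lemma 3.1; the kernel below, our choice, is Pólya's triangular characteristic function, for which \(a_{\psi } < \infty \):

```python
def psi(t):
    """Polya's triangular characteristic function, vanishing for |t| >= 1."""
    return max(1.0 - abs(t), 0.0)

alpha = 0.6                              # alpha = d(1,1), hypothetical value
phi = lambda t: psi(alpha * t)           # phi = psi(alpha .), Lemma 3.1 (2)
c = lambda r, s: max(r, s)               # Lemma 3.1 (1)
d = lambda r, s: alpha * min(r, s)       # Lemma 3.1 (1)

grid = [0.0, 0.3, 0.8, 1.5, 4.0]
for r in grid:
    for s in grid:
        for t in grid:
            lhs = phi(r * t) * phi(s * t)
            rhs = phi(c(r, s) * t) * psi(d(r, s) * t)
            assert abs(lhs - rhs) < 1e-12
```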

Note that if condition (1) holds and \(a_{\varphi _1} = a_{\varphi _2} = \infty \), then \( a_{\psi } = \infty \) and the equalities

$$\begin{aligned} \varphi _1 (rt) \varphi _1 (st) = \varphi _1 \bigl (tc(r,s)\bigr ) \psi \bigl ( td(r,s)\bigr ) \end{aligned}$$

and

$$\begin{aligned} \varphi _2 (rt) \varphi _2 (st) = \varphi _2 \bigl (tc(r,s)\bigr ) \psi \bigl ( td(r,s)\bigr ) \end{aligned}$$

can be divided side by side. Defining \(\phi = {\varphi _1}/{\varphi _2}\) we obtain

$$\begin{aligned} \phi (rt) \phi (st) = \phi \bigl (tc(r,s)\bigr ), \qquad r,s,t \ge 0. \end{aligned}$$
(C)

This functional equation resembles the equation describing the characteristic function of a strictly stable distribution in the sense of the classical convolution (see e.g. [21, 22]), except that here \( \phi \) need not be a characteristic function. For this reason we will solve equation (C) in the class of continuous functions \(\phi \) defined on \( [0, \infty ) \) such that \( \phi (0) = 1\). We will do this via a series of lemmas.

Lemma 3.2

Assume that condition (1) holds, \(a_{\varphi _1} = a_{\varphi _2} = \infty \) and let \(\phi = {\varphi _1}/{\varphi _2}\) be a solution of equation (C). Then the function \(s \mapsto c(1,s)\) is one-to-one on \([0,\infty )\).

Proof

By assumption (1) we have \(\varphi _1 \not \equiv \varphi _2\), and thus \(\phi \) is a non-constant solution of equation (C) on \([0,\infty )\). Suppose that for some \(0<s<r\) we have \(c(1,s) = c(1,r)\). Since \(\phi \) is continuous and \(\phi (t) > 0\) for all \(t \ge 0\), by (C) we get

$$\begin{aligned} \phi (t c(1,s)) = \phi (tc(1,r)) \,\, \Longrightarrow \,\,\phi (ts) = \phi (tr), \quad t \ge 0. \end{aligned}$$

Consequently, for every \(t \ge 0\) we have

$$\begin{aligned} \phi (t) = \phi \left( t \frac{s}{r} \right) = \phi \left( t \left( \frac{s}{r}\right) ^n \right) {\mathop {\longrightarrow }\limits ^{n\rightarrow \infty }} \phi (0) = 1. \end{aligned}$$

This means that \(\phi \equiv 1\) on \([0, \infty )\); a contradiction. \(\square \)

Lemma 3.3

If (1) holds, \(a_{\varphi _1} = a_{\varphi _2} = \infty \) and \(\phi = {\varphi _1}/{\varphi _2}\) is a solution of equation  (C), then the function \(c(1,\cdot )\) is strictly increasing on \([0,\infty )\).

Proof

Since \(c(1, \cdot )\) is continuous and one-to-one, it is strictly monotonic. Suppose that \(c(1, \cdot )\) is strictly decreasing. Then \(c(1,s) < c(1,0) = 1\) for every \(s \in (0,\infty )\). On the other hand, \(c(1,s) = s c(1, s^{-1}) < s\) for all \(s \in (0,1)\). Letting \(s \rightarrow 0\), we get

$$\begin{aligned} 1 =c(1,0)= \lim _{s \rightarrow 0} c(1,s) \le \lim _{s \rightarrow 0} s = 0, \end{aligned}$$

which is impossible. Therefore the function \(c(1,\cdot )\) is strictly increasing. \(\square \)

Lemma 3.4

If condition (1) is satisfied, \(a_{\varphi _1} = a_{\varphi _2} = \infty \) and \(\phi = {\varphi _1}/{\varphi _2}\) is a solution of (C), then \(\phi ^{-1}(\{1\}) = \{ 0\},\) i.e. \(\varphi _1(s) \ne \varphi _2(s)\) for each \(s>0\).

Proof

Let \(t>0\) be such that \(\phi (t) \ne 1\) and let

$$\begin{aligned} a = \sup \{x\in [0,t]: \phi (x) = 1 \}, \quad b = \inf \{ x\in (t,\infty ): \phi (x) = 1\}. \end{aligned}$$

Since \(\phi (0)=1\) and \(\phi \) is continuous, we have \(0 \le a < b\). Suppose that \(b<\infty \). Without loss of generality, possibly replacing the function \(\phi \) by \(1/\phi \), we can assume that \(\phi (u)>1\) for every \(u \in (a,b)\). Since \(\phi (x) >0\) for all \(x \in [0,\infty )\) and

$$\begin{aligned} 1 = \phi (b) = [\phi \left( {b/{c(1,1)}}\right) ]^2, \end{aligned}$$

we have \(\phi ( b/{c(1,1)}) =1\) and \({b/{c(1,1)}} <b\). Consequently, \({b/{c(1,1)}} \le a\). Notice that the function \(s \rightarrow {b/{c(1,s)}}\) is strictly decreasing on the interval [0, 1], \({b/{c(1,0)}} = b\) and \({b/{c(1,1)}} \le a\). Thus there exists \(s_0 \in (0,1]\) such that \({b/{c(1,s_0)}}=a\). By equality (C) we have

$$\begin{aligned} \phi \left( \frac{b}{c(1,s)}\right) \phi \left( \frac{bs}{c(1,s)}\right) = \phi (b) = 1, \quad \quad s \in [0,s_0]. \end{aligned}$$

Since \({b/{c(1,s)}} \in (a,b)\) implies that \( \phi ({b/{c(1,s)}}) > 1 \) for all \(s \in (0,s_0)\), we conclude that \(\phi ({bs/{c(1,s)}}) <1\) for every \(s \in (0,s_0)\). In particular, this means that there exists \(t_0>0\) such that \(\phi (t) <1\) for each \(t\in (0,t_0)\). By (C) we have \([\phi (t)]^2 = \phi (t c(1,1))\), and thus \(\phi (t) <1\) for each \(t \in (0, t_0 c(1,1))\). Consequently,

$$\begin{aligned} \phi (t) <1 \quad \hbox { for all } \quad t \in (0, t_0 c(1,1)^n) \hbox { and } n \in {{\mathbb {N}}}, \end{aligned}$$

that is for all \(t \in (0,\infty )\), since \(c(1,1) > c(1,0)=1\). This contradicts the inequality \(\phi (t)>1\) for all \(t\in (a,b)\). Therefore \(b=\infty \).

Now we see that \(\phi (u) > 1\) for every \(u > a\). If \(a>0\) then \(1= [\phi (a)]^2 = \phi (a c(1,1)) >1\) which, together with the inequality \(a c(1,1)>a\), completes the proof. \(\square \)

Lemma 3.5

If condition (1) is satisfied, \(a_{\varphi _1} = a_{\varphi _2} = \infty \) and \(\phi = {\varphi _1}/{\varphi _2}\) is a solution of equation (C), then the function \(\phi \) is one-to-one.

Proof

By Lemma 3.4 we can assume without loss of generality that \(\phi (t)>1\) for each \(t>0\). Suppose that there exist \(0<s<r\) such that \(\phi (s) = \phi (r)\). Since \(c(s,0) = s < r\) and \(c(s,r) = r c(1, {s/r}) > r\), by the continuity of \(c(1, \cdot )\) there exists \(u \in (0,r)\) such that \(c(s,u) = r\). By equality (C) we have

$$\begin{aligned} \phi (s) \phi (u) = \phi (c(s,u)) = \phi (r) = \phi (s), \end{aligned}$$

thus \(\phi (u) = 1\) which is impossible. \(\square \)

Lemma 3.6

Let \(G: (0,\infty )^2 \rightarrow (0,\infty )\) be a solution of the equation

$$\begin{aligned} G\Bigl ( t, G(s,x) \Bigr ) = G(st,x) \end{aligned}$$

which is continuous with respect to each variable and such that the function \(G(t, \cdot )\) is additive for every \(t \in (0,\infty )\). Then there exists \(p \in {\mathbb {R}}\) such that

$$\begin{aligned} G(t,x) = t^p x, \quad t,x \in (0,\infty ). \end{aligned}$$

Proof

Since any continuous additive mapping from \((0,\infty )\) to \((0,\infty )\) is of the form \(x \rightarrow cx\) (see e.g. [13]), for each fixed \(t\in (0,\infty )\) there exists \(H(t)>0\) such that

$$\begin{aligned} G(t,x) = H(t) x, \quad x \in (0,\infty ). \end{aligned}$$

This defines a function \(H: (0,\infty ) \rightarrow (0,\infty )\) which, by our assumptions, is continuous and satisfies the condition

$$\begin{aligned} H(t) H(s) x = H(st) x, \quad \quad s,t,x \in (0,\infty ). \end{aligned}$$

Therefore H is a continuous solution of the multiplicative Cauchy equation and the assertion follows. \(\square \)
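Conversely, \(G(t,x) = t^p x\) is additive in \(x\) and satisfies the translation equation, so Lemma 3.6 describes exactly this family. A short numerical confirmation for one hypothetical exponent:

```python
p = 0.5  # hypothetical exponent

def G(t, x):
    return t ** p * x

grid = [0.2, 1.0, 3.0]
for t in grid:
    for s in grid:
        for x in grid:
            # translation equation: G(t, G(s, x)) = G(st, x)
            assert abs(G(t, G(s, x)) - G(s * t, x)) < 1e-12
            # additivity of G(t, .)
            assert abs(G(t, x + 0.7) - (G(t, x) + G(t, 0.7))) < 1e-12
```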

Proposition 3.7

(Solution of equation (C)). If condition (1) holds, \(\varphi _1,\varphi _2 \in \Phi \), \( a_{\varphi _1} = a_{\varphi _2} = \infty \), then the function \(\phi = {\varphi _1}/{\varphi _2}: (0,\infty ) \rightarrow (0,\infty )\) is a solution of equation (C) if and only if there are numbers \(\beta \in {\mathbb {R}} {\setminus } \{0\}\) and \(p>0\) such that \(c(a,b) = (a^p + b^p)^{1/p}\) for all \(a,b \in [0,\infty )\) and

$$\begin{aligned} \phi (t) = \exp \left( \beta t^p \right) , \quad \quad t \in [0,\infty ). \end{aligned}$$

Proof

We know that the function \(\phi = {{\varphi _1}/{\varphi _2}}\) is continuous, one-to-one and \(\phi (0) =1\). It follows that either \(\phi ([0,\infty )) \subset (0,1]\), or \(\phi ([0,\infty )) \subset [1,\infty )\). Since both \(\phi \) and \(1/\phi \) satisfy equality (C), we may assume that the second possibility holds.

Put \(F {:}{=} \ln {\phi }\). Then F maps \([0,\infty )\) into itself and equality (C) gives

$$\begin{aligned} F(at) + F(bt) = F\bigl ( c(a,b) t \bigr ), \end{aligned}$$

that is

$$\begin{aligned} c(a,b) t = F^{-1} \bigl ( F(at) + F(bt) \bigr ) \end{aligned}$$

for all \(a,b,t \in (0,\infty )\). Consequently,

$$\begin{aligned} F^{-1} \bigl ( F(a) + F(b) \bigr ) t = F^{-1} \bigl ( F(at) + F(bt)\bigr ), \quad a,b,t \in (0,\infty ). \end{aligned}$$

Putting \(x=F(a)\) and \(y=F(b)\) we see that \(x,y \in F((0,\infty )) = (0, \xi )\) for some \(\xi \in (0, \infty ]\) and we obtain

$$\begin{aligned} F \bigl ( t F^{-1} \bigl ( x+y \bigr ) \bigr ) = F\bigl (t F^{-1}(x)\bigr ) + F\bigl (t F^{-1}(y)\bigr ), \quad x,y, x+y \in (0,\xi ),\ t \in (0,\infty ). \end{aligned}$$

This means that

$$\begin{aligned} G(t, x+y) = G(t,x) + G(t,y), \quad x,y,t \in (0,\infty ), \end{aligned}$$

where \(G: (0,\infty )^2 \mapsto (0,\infty )\) is given by \(G(t,x) = F\bigl (t F^{-1}(x)\bigr )\). Moreover,

$$\begin{aligned} G\bigl (t,G(s,x)\bigr ) = F\bigl (ts F^{-1}(x) \bigr ) = G(st,x), \quad x,s,t \in (0,\infty ). \end{aligned}$$

Therefore, by Lemma 3.6 we infer that \(F(tF^{-1}(x)) = t^p x\) for all \(t,x>0\) with some \(p\in {\mathbb {R}}\). Taking here \(x=\beta =F(1)\) we get that

$$\begin{aligned} \phi (t) = \exp \left( F(t) \right) = \exp \left( \beta t^p \right) , \quad t \in (0,\infty ). \end{aligned}$$

Since \(\phi \not \equiv 1\) we have that \(\beta \ne 0\); moreover, as \(\phi \) is continuous, one-to-one, and \(\phi (0) = 1\) we deduce that \(p>0\). \(\square \)
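The statement of Proposition 3.7 can be sanity-checked numerically. In the sketch below \(\beta\) and \(p\) are arbitrary sample parameters; the check confirms that \(\phi(t)=\exp(\beta t^p)\) together with \(c(a,b)=(a^p+b^p)^{1/p}\) satisfies \(\phi(at)\,\phi(bt)=\phi(c(a,b)t)\), the multiplicative form of equation (C) used in the proof above.

```python
import math

# Illustration of Proposition 3.7: phi(t) = exp(beta * t**p) with
# c(a, b) = (a**p + b**p)**(1/p) satisfies phi(a*t)*phi(b*t) = phi(c(a,b)*t).
beta, p = -0.8, 1.5   # arbitrary sample parameters (beta != 0, p > 0)

def phi(t):
    return math.exp(beta * t**p)

def c(a, b):
    return (a**p + b**p) ** (1.0 / p)

for a, b, t in [(1.0, 2.0, 0.3), (0.4, 0.9, 2.0), (3.0, 3.0, 1.1)]:
    lhs = phi(a * t) * phi(b * t)
    rhs = phi(c(a, b) * t)
    assert abs(lhs - rhs) < 1e-12
print("equation (C) verified on samples")
```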

As a simple consequence of Proposition 3.7 and the previous considerations we obtain the following result.

Theorem 3.8

Let \(\psi \in \Phi \) and \(c, d\) be functions such that for all \(r, s, t > 0\) we have

$$\begin{aligned} c(st, rt) = c(rt, st) = tc(s, r),\,\, c(1, 0) = 1, \end{aligned}$$

and

$$\begin{aligned} d(st, rt) = d(rt, st) = td(s, r),\,\, d(1, 0) = 0, \,\,d(s,r)>0 \end{aligned}$$

whenever \(sr>0\).

If there are two different solutions \(\varphi _1, \varphi _2 \in \Phi \) of equation (A), then \(a_{\varphi _1} = a_{\varphi _2} = a_{\psi } = \infty \), and for some real \(\beta \) and \(p>0\),

$$\begin{aligned} c(r,s) = (r^p + s^p)^{1/p}, \qquad r,s > 0, \end{aligned}$$

and

$$\begin{aligned} \varphi _2(t) = \exp \left( \beta \vert t\vert ^p\right) \, \varphi _1 (t), \qquad t \in {\mathbb {R}}. \end{aligned}$$

If \(\varphi \in \Phi \) is a solution of equation (A) and \(a_{\varphi } < \infty \), then \(\varphi \) is the unique solution of (A) and

$$\begin{aligned} \varphi (t) = \psi (t), \qquad c(r,s) = \max \{ r,s\} \quad \text{ and } \quad d(r,s) = \min \{r,s\} \end{aligned}$$

for all \(r,s,t \ge 0\).
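The uniqueness case in Theorem 3.8 admits a direct numerical illustration: with \(\psi=\varphi\), \(c=\max\) and \(d=\min\), the two sides of equation (A) contain the same two factors, merely reordered. The sample \(\varphi\) below (a triangular, Polya-type characteristic function with \(a_\varphi<\infty\)) is chosen only for illustration.

```python
# Illustration of the uniqueness case of Theorem 3.8: with psi = phi,
# c = max and d = min, equation (A) holds for ANY phi, since
# {r*t, s*t} = {max(r,s)*t, min(r,s)*t}.  Sample phi: triangular
# (a Polya-type characteristic function vanishing for |t| >= 1).
phi = lambda t: max(0.0, 1.0 - abs(t))

for r, s, t in [(0.2, 0.7, 1.0), (1.5, 0.3, 0.4), (2.0, 2.0, 0.1)]:
    lhs = phi(r * t) * phi(s * t)
    rhs = phi(max(r, s) * t) * phi(min(r, s) * t)
    assert lhs == rhs
print("max/min case of (A) verified")
```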

4 From Equation (A) to Equation (B)

By Theorem 3.8 we know that the assumption (1) and \(a_{\varphi _1} = a_{\varphi _2} = \infty \) imply that there exists \(p>0\) such that \(c(a,b)^p = a^p + b^p\) for all \(a,b>0\). Consequently, equation (A) takes the form

$$\begin{aligned} \begin{array}{c} \forall \, a,b> 0 \,\, \exists \, d(a,b) \in {\mathbb {R}} \, \forall \, t> 0 \\ \varphi (at) \varphi (bt) = \varphi \left( t (a^p+b^p)^{1/p}\right) \psi (t d(a,b)). \end{array} \end{aligned}$$

Multiplying this equality by \(\exp \left( \beta \vert t\vert ^p (a^p + b^p) \right) \) for any real \(\beta \) and \(p>0\) we see that the function \(t \rightarrow \chi (t){:}{=} \exp \left( \beta \vert t\vert ^p\right) \varphi (t)\) is a solution of this equation provided that \(\varphi \) is. However, the function \(\chi \) can be a solution of equation (A) only if \(\chi \in \Phi \), which is usually hard to verify.
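The cancellation behind this observation can be checked numerically. In the sketch below \(\varphi\) is an arbitrary sample function (chosen only for illustration) and \(\beta\), \(p\) are sample parameters; the point is that the exponential factor cancels from the relevant quotient, so \(\chi\) satisfies the equation exactly when \(\varphi\) does.

```python
import math

# Illustration: if chi(t) = exp(beta*|t|**p) * phi(t), then the ratio
# chi(a*t)*chi(b*t) / chi(c*t), with c = (a**p+b**p)**(1/p), equals the
# corresponding ratio for phi -- the exponential factors cancel exactly.
beta, p = 0.3, 2.0                      # arbitrary sample parameters
phi = lambda t: math.exp(-t * t / 2)    # sample function, illustration only

def chi(t):
    return math.exp(beta * abs(t)**p) * phi(t)

for a, b, t in [(1.0, 2.0, 0.7), (0.5, 1.5, 1.2)]:
    cab = (a**p + b**p) ** (1.0 / p)
    r_chi = chi(a * t) * chi(b * t) / chi(cab * t)
    r_phi = phi(a * t) * phi(b * t) / phi(cab * t)
    assert abs(r_chi - r_phi) < 1e-9
print("exponential factor cancels")
```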

Let \(x=t^p a^p\), \(rx=t^p b^p\), \({\tilde{h}}(x) = \varphi (x^{1/p})\), \({\tilde{g}}(x) = \psi ( x^{1/p})\) and \(D(x,y) = d(x^{1/p}, y^{1/p})^p\). We see that \(D(xt,yt) = t D(x,y)\) for all \(x,y,t>0\) and equation (A) can be rewritten in the form

$$\begin{aligned} {\tilde{h}}(x)\, {\tilde{h}}(rx) = {\tilde{h}}\bigl ( (1+r)x \bigr )\, {\tilde{g}}\bigl ( D(x, rx) \bigr ), \quad \quad r,x>0. \end{aligned}$$
(\(A'\))

We know that the functions \(\varphi , \psi \) and, consequently, \({\tilde{h}}\), \({\tilde{g}}\) do not vanish. Moreover, the functions \({\tilde{h}}\) and \({\tilde{g}}\) are not constant because otherwise \(\varphi \equiv \psi \equiv 1\), which we excluded as a trivial case. Now we can define

$$\begin{aligned} h = \ln \circ {\tilde{h}} \quad \hbox { and } \quad g = \ln \circ {\tilde{g}}. \end{aligned}$$

Since we focus on the case when \(D(0,y)=0\) for every \(y \ge 0\) and \(D(x,y) \ne 0\) for each \(x,y>0\), we put \(q(r) = D(1,r)\) and, rescaling the function g if necessary, we may assume that \(q(1) = D(1,1)=1\). We obtain

$$\begin{aligned} h(t) + h(rt) - h\bigl ( (1+r)t \bigr ) = g\bigl ( q(r)t \bigr ), \quad \quad r,t>0. \end{aligned}$$
(\(A''\))

Notice that by our assumptions all functions appearing in this equation are continuous.
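As a concrete (purely illustrative, non-probabilistic) example of the equation \(h(t)+h(rt)-h((1+r)t)=g(q(r)t)\), one may take \(h(t)=-t^2\), \(g(u)=2u^2\) and \(q(r)=\sqrt r\) (sample choices that ignore the sign constraints coming from characteristic functions): then \(h(t)+h(rt)-h((1+r)t)=2rt^2=g(q(r)t)\), while \(q\) is continuous, one-to-one and satisfies \(q(r)=r\,q(1/r)\) and \(q(1)=1\). The sketch below verifies these identities numerically.

```python
import math

# Sample (purely illustrative) solution triple of the derived equation
#   h(t) + h(r*t) - h((1+r)*t) = g(q(r)*t):
h = lambda t: -t * t
g = lambda u: 2 * u * u
q = lambda r: math.sqrt(r)

for r, t in [(0.5, 1.3), (2.0, 0.4), (3.7, 2.2)]:
    lhs = h(t) + h(r * t) - h((1 + r) * t)
    assert abs(lhs - g(q(r) * t)) < 1e-9
    # homogeneity-type symmetry of q used later in Theorem 4.1:
    assert abs(q(r) - r * q(1 / r)) < 1e-12
assert q(1) == 1.0
print("sample triple satisfies the equation")
```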

Remark 3

In this paper we assume that the function \(d: [0,\infty )^2 \rightarrow [0,\infty )\) is continuous and such that \(d(tr,ts) = d(ts,tr) = t d(r,s)\) for all \(r,s,t \ge 0\), \(d(0,t)=0\) for all \(t \ge 0\) and \(d(t,s) >0\) whenever \(ts \ne 0\). Consequently, we have \(q(0) = 0\) and \(q(r)>0\) for all \(r>0\). If we additionally assume that the function q is one-to-one, then it is increasing and, consequently, differentiable a.e. (with respect to the Lebesgue measure) on \((0,\infty )\).

Remark 4

Notice that by our assumptions, putting \(r=1\) in equation (\(A''\)), we have \( g(t) = 2h(t) - h(2t)\), and thus the differentiability of h implies the differentiability of g. However, in general the differentiability of g does not imply the differentiability of h. Notice also that the differentiability of h and \({\tilde{h}}\), as well as that of g and \({\tilde{g}}\), are equivalent.

Moreover, since \({\tilde{h}}: [0,\infty )\rightarrow (0,1]\) and \({\tilde{h}}(0) =1\), the additional assumption that h is one-to-one implies that h is strictly decreasing and then differentiable almost everywhere. Consequently, the function \({\tilde{g}}\) is also differentiable almost everywhere.

Assume that g is one-to-one and, consequently, the function \(\psi \) is one-to-one. Since at the beginning of this paper (see Sect. 2) we assumed that \((\varphi , \psi ) \in \Phi _2\), we see that the function \(\varphi \) is one-to-one as well. Finally, we infer that if any of the functions h, g (equivalently \({\tilde{h}}, {\tilde{g}}\)) is one-to-one, then they both are strictly decreasing and differentiable almost everywhere on \((0,\infty )\).

Theorem 4.1

Assume that equation (\(A''\)) is satisfied by a differentiable function h, a continuous function g and a continuous, one-to-one function q such that \(q(r) = rq(1/r)>0\) for all \(r>0\) and \(q(0) =0\). Then for all \(r,s,t>0\) we have

$$\begin{aligned} \Bigl ( h'(rt) - h'\bigl ((1+r)t\bigr ) \Bigr ) \Bigl ( h'(st) - h'\bigl ((1+r)st\bigr ) \Bigr ) = \Bigl ( h'(rst) - h'\bigl ((1+r)st\bigr ) \Bigr ) \Bigl ( h'(t) - h'\bigl ((1+r)t\bigr ) \Bigr ). \end{aligned}$$

In the proof we will use the following auxiliary result.

Lemma 4.2

Under the assumptions of Theorem 4.1 the function q is differentiable on \((0,\infty )\).

Proof

By Remark 4 we see that the function g is differentiable. Thus both sides of equation (\(A''\)) are differentiable. Denote them by L(r, x) and R(r, x), respectively. For a function \(f: (0,\infty )^2 \rightarrow {\mathbb {R}}\) and arbitrarily fixed numbers \(r,x \in (0,\infty )\) define (in a sufficiently small vicinity of 0) a function \(f_{r,x}\) by the formula

$$\begin{aligned} f_{r,x} (\Delta ) = \frac{1}{\Delta } ( f(r+\Delta , x+\Delta ) - f(r,x)). \end{aligned}$$

Then

$$\begin{aligned}{} & {} L_{r,x} (\Delta ) = \frac{h(x+\Delta ) - h(x)}{\Delta }\\{} & {} \quad + \frac{h\bigl ((r + \Delta )(x+\Delta )\bigr ) - h(r x)}{(r + \Delta )(x+\Delta ) - r x}\, \frac{(r + \Delta )(x+\Delta ) - r x}{\Delta } \\{} & {} \quad -\frac{h\bigl ((1 + r + \Delta )(x+\Delta )\bigr ) - h\bigl ((1+ r) x\bigr )}{(1+ r + \Delta )(x+\Delta ) - (1+ r) x}\, \frac{(1+ r + \Delta )(x+\Delta ) - (1+r) x}{\Delta } \\{} & {} \quad \rightarrow h'(x) + h'(r x)(r +x) - h'\bigl ((1+r)x\bigr ) (1+ r + x) \qquad \hbox { as } \quad \Delta \rightarrow 0, \end{aligned}$$

since the function h is differentiable. It follows that the respective limit for the function \(R_{r,x}\) also exists. Note that

$$\begin{aligned} R_{r,x} (\Delta ) = \frac{g(q(r + \Delta )(x+\Delta )) - g(q(r)x)}{q(r + \Delta )(x+\Delta ) - q(r)x} \left[ x\, \frac{q(r + \Delta ) - q(r)}{\Delta } + q(r + \Delta ) \right] . \end{aligned}$$

Since g is not constant (cf. the beginning of the present section) we can find \(x\in (0, \infty )\) such that \(g' (q (r) x) \ne 0\). Then letting \(\Delta \rightarrow 0\) we conclude that q is differentiable at an arbitrary point \(r\in (0,\infty )\). \(\square \)

Proof of Theorem 4.1

By our assumptions \(q(r) >0\) for each \(r>0\). Substituting \(t \rightarrow {t/{q(r)}}\) in (\(A''\)) we obtain

$$\begin{aligned} h\left( \frac{t}{q(r)} \right) + h\left( \frac{rt}{q(r)} \right) - h\left( \frac{(1+r)t}{q(r)} \right) = g(t), \quad \quad r,t>0. \end{aligned}$$

Differentiating both sides of this equality with respect to r and then dividing both sides by t we see that

$$\begin{aligned}{} & {} \frac{-q'(r)}{q^2(r)}\, h'\! \left( \frac{t}{q(r)} \right) + \frac{q(r) - r q'(r)}{q^2(r)}\, h'\! \left( \frac{rt}{q(r)} \right) \\{} & {} \quad \quad \quad = \frac{q(r) - (1+r) q'(r)}{q^2(r)}\, h'\! \left( \frac{(1+r)t}{q(r)} \right) . \end{aligned}$$

Rearranging parts of this equality and multiplying them by \(q^2(r)\) we have

$$\begin{aligned} \left( q(r) - r q'(r)\right) \! \left[ h'\!\left( \frac{rt}{q(r)} \right) \! - h'\!\left( \frac{(1+r)t}{q(r)} \right) \! \right] \! = q'(r)\! \left[ h'\!\left( \frac{t}{q(r)}\right) \! - h'\!\left( \frac{(1+r)t}{q(r)} \right) \! \right] . \end{aligned}$$

Since \(q({1/r})= r^{-1} q(r)\), we have \(q'({1/r}) = q(r) - r q'(r)\) and we can rewrite the last equality in the form

$$\begin{aligned} q'({1/r})\! \left[ h'\! \left( \frac{t}{q({1/r})} \right) \! - h'\! \left( \frac{(1+r^{-1})t}{q({1/r})} \right) \right] \! = q'(r) \! \left[ h'\! \left( \frac{t}{q(r)}\right) \! - h'\! \left( \frac{(1+r)t}{q(r)} \right) \right] . \end{aligned}$$

Putting tq(r) instead of t in the previous equation we obtain

$$\begin{aligned} q'({1/r}) \Bigl [ h'\! \left( rt \right) - h'\! \left( (1+r)t \right) \Bigr ] = q'(r) \Bigl [ h'\! \left( t\right) - h'\! \left( (1+r)t \right) \Bigr ], \quad r,t>0. \end{aligned}$$

We can also write for every \(s>0\)

$$\begin{aligned} q'(r) \Bigl [ h'\! \left( st\right) - h'\! \left( (1+r)st \right) \Bigr ] = q'({1/r}) \Bigl [ h'\! \left( rst \right) - h'\! \left( (1+r)st \right) \Bigr ], \quad r,s,t>0. \end{aligned}$$

Multiplying the respective sides of these two equations we obtain

$$\begin{aligned}{} & {} q'({1/r}) q'(r) \Bigl ( h'\! \left( rt \right) - h'\!\left( (1+r)t \right) \Bigr ) \Bigl ( h'\! \left( st \right) - h'\! \left( (1+r)s t \right) \Bigr ) \\{} & {} \quad = q'({1/r}) q'(r) \Bigl ( h'\! \left( rst \right) - h'\! \left( (1+r)st \right) \Bigr ) \Bigl ( h'\! \left( t\right) - h'\! \left( (1+r) t \right) \Bigr ). \end{aligned}$$

By Remark 3 and Lemma 4.2 we have \(q'(r)>0\) for each \(r>0\) and we can divide both sides of this equation by \(q'({1/r}) q'(r)\). \(\square \)
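The product identity obtained at the end of the proof can be sanity-checked numerically; the block below uses the sample derivative \(h'(t)=-2t\) (coming from the illustrative solution \(h(t)=-t^2\), an assumption made only for this check) and verifies the identity for several triples \((r,s,t)\).

```python
# Numerical check (illustration) of the identity
#   (h'(rt) - h'((1+r)t)) * (h'(st) - h'((1+r)st))
#     = (h'(rst) - h'((1+r)st)) * (h'(t) - h'((1+r)t))
# for the sample derivative h'(t) = -2t.
dh = lambda t: -2.0 * t

for r, s, t in [(0.5, 2.0, 1.0), (3.0, 0.25, 0.8), (1.7, 1.7, 2.5)]:
    lhs = (dh(r * t) - dh((1 + r) * t)) * (dh(s * t) - dh((1 + r) * s * t))
    rhs = (dh(r * s * t) - dh((1 + r) * s * t)) * (dh(t) - dh((1 + r) * t))
    assert abs(lhs - rhs) < 1e-9
print("product identity holds on samples")
```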

5 Regularity of Measurable Solutions of (B)

The equation

$$\begin{aligned} \left. \begin{array}{l} \bigl (f(t(x+y)) - f(tx)\bigr ) \bigl (f(x+y) - f(y)\bigr ) \\ \quad = \bigl (f(t(x+y)) - f(ty)\bigr ) \bigl (f(x+y) - f(x)\bigr ), \end{array}\right. \quad x,y,t >0, \end{aligned}$$
(B)

is interesting from the point of view of the theory of functional equations. Thus we decided to solve it in a general situation, with no significant assumptions on the regularity of the considered functions. We will study Lebesgue measurable solutions and Baire measurable solutions of this equation.

Notice that any function \(f:(0, \infty )\rightarrow {\mathbb {C}}\) satisfying the equation

$$\begin{aligned} f(xy)=f(x)+f(y) \end{aligned}$$

or the equation

$$\begin{aligned} f(xy)=f(x)f(y), \end{aligned}$$

also satisfies equation (B). Since both of these equations have Lebesgue and Baire non-measurable solutions, we conclude that (B) also has some non-measurable solutions, which are not considered in this work.
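Both families of solutions can be tested directly against (B); the sketch below checks \(f=\ln\) (a solution of \(f(xy)=f(x)+f(y)\)) and \(f(x)=x^{\alpha}\) with an arbitrary sample exponent (a solution of \(f(xy)=f(x)f(y)\)).

```python
import math

def satisfies_B(f, samples, tol=1e-9):
    """Check equation (B) on a list of (x, y, t) triples."""
    for x, y, t in samples:
        lhs = (f(t * (x + y)) - f(t * x)) * (f(x + y) - f(y))
        rhs = (f(t * (x + y)) - f(t * y)) * (f(x + y) - f(x))
        if abs(lhs - rhs) > tol:
            return False
    return True

samples = [(1.0, 2.0, 0.5), (0.3, 0.9, 4.0), (2.2, 2.2, 1.7)]
assert satisfies_B(math.log, samples)          # f(xy) = f(x) + f(y)
assert satisfies_B(lambda x: x**1.3, samples)  # f(xy) = f(x) f(y)
print("both families solve (B) on samples")
```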

Let us recall that the Lebesgue density of a measurable set C at a point c is the limit of the Lebesgue measure of the intersection of C with a ball centered at c, divided by the Lebesgue measure of the ball, as the radius of the ball goes to 0. By Lebesgue's density theorem it exists and equals 1 at almost all points of C, and it exists and equals 0 at almost all points of the complement of C. We will use this fact in the proof of Theorem 5.1.

Remark 5

Let \(f: (0,\infty ) \mapsto {\mathbb {C}}\). If f is a solution of equation (B) which is constant almost everywhere (with respect to the Lebesgue measure), then it is constant everywhere.

To see this assume that \(f = c\) almost everywhere for some \(c \in {\mathbb {C}}\) and suppose that there exists \(x\in (0,\infty )\) such that \(f(x)\not =c\). Substituting \(t=x/y\) in (B) we obtain

$$\begin{aligned} \left. \begin{array}{l} \Bigl (f\bigl (x+{{x^2}/{y}}\bigr )-f\bigl ( {{x^2}/{y}} \bigr ) \Bigr )\Bigl ( f(x+y)-f(y)\Bigr ) \\ \quad = \Bigl ( f\bigl ( x+{{x^2}/{y}}\bigr ) - f(x) \Bigr ) \Bigl ( f(x+y)-f(x)\Bigr ). \end{array} \right. \end{aligned}$$
(6)

Since \(f(y) = c\) for almost all \(y>0\), also the following conditions hold for almost all \(y>0\):

$$\begin{aligned} f(y) = c, \quad f(x+y)=c \quad \hbox {and} \quad f(x+x^2/y)=c. \end{aligned}$$

This means that for almost all \(y \in (0,\infty )\) the left hand side of (6) equals zero while the right hand side equals \((c-f(x))^2 \ne 0\), a contradiction.

In the proof of Theorem 5.2 we need the following version of the Lusin theorem. By \(\ell \) we denote here the Lebesgue measure on \({\mathbb {R}}\).

Lusin theorem If \(f: (0,\infty ) \rightarrow {\mathbb {C}}\) is a Lebesgue measurable function, then for every \(\varepsilon >0\) and every measurable set \(B \subset (0,\infty )\) of finite Lebesgue measure there exists a compact set \(E\subset B\) with \(\ell (B \setminus E) < \varepsilon \) such that the function \(f\!\mid _E\) is continuous with respect to the topology on E inherited from \({\mathbb {R}}\).

Now we are able to formulate and prove the following result.

Theorem 5.1

Let \(f: (0,\infty ) \mapsto {\mathbb {C}}\). If f is a Lebesgue measurable solution of equation (B) then it is continuous.

Proof

In view of Remark 5 it is enough to consider f which is not constant almost everywhere. With this assumption the essence of the proof is to show that the term

$$\begin{aligned} Q = Q(x,t,y) {:}{=} f\bigl (t(x+y)\bigr )-f(ty) \end{aligned}$$
(7)

is nonzero for a “large” set of triples (xty). Hence we may divide both sides of equation (B) by Q and apply the general methods of [8], especially a variant of [8, Thm. 8.1]. We will also use the ideas of the proofs of Theorems 3.8, 3.9, 3.10 from [8], but for simplicity we will give here a self-contained proof, using the ideas but not the results of [8].

Step 1. Since f is not almost everywhere constant, there exists a positive integer N such that \(f\!\mid _{( 0,N) }\) is not almost everywhere constant on (0, N). Let us consider a countable base \({\mathcal {B}}\) of open sets in \({\mathbb {C}}\). Let

$$\begin{aligned} G = \bigcup _{B \in {\mathcal {A}}} B, \end{aligned}$$

where, for \(\ell \) denoting the Lebesgue measure on \({\mathbb {R}}\),

$$\begin{aligned} {\mathcal {A}} = \left\{ B \in {\mathcal {B}}: \ell \left( f^{-1}(B) \cap (0,N) \right) = 0\right\} . \end{aligned}$$

Since \({\mathcal {A}}\) is countable, it follows that

$$\begin{aligned} \ell \Bigl ( f^{-1}(G) \cap (0,N) \Bigr )= & {} \ell \left( f^{-1}\left( \bigcup _{A \in {\mathcal {A}}} A \right) \cap (0,N) \right) \le \sum _{A \in {\mathcal {A}}} \ell \Bigl ( f^{-1}(A) \cap (0,N) \Bigr ) = 0. \end{aligned}$$

Step 2. The complement \(G' = {\mathbb {C}} \setminus G\) has at least two points, say \(b^-\not =b^+\). Indeed, suppose that \(G' = \emptyset \) or \(G' = \{ b\}\) for some \(b \in {\mathbb {C}}\). If \(G' = \emptyset \), then \(G = {\mathbb {C}}\) and we would have the following contradiction:

$$\begin{aligned} N = \ell ((0,N)) = \ell \left( f^{-1}({\mathbb {C}} ) \cap (0,N)\right) = 0. \end{aligned}$$

If \(G' = \{ b\}\), then \(f\! \!\mid _{(0,N)} =b\) except on the set \(f^{-1}( {\mathbb {C}} \setminus \{b\}) \cap (0,N)\) of Lebesgue measure zero, in contradiction with the choice of the interval (0, N).

Step 3. Take any different \(b^-, b^+ \in {\mathbb {C}} {\setminus } G\). There exist disjoint open neighborhoods \(U, V \in {\mathcal {B}} {\setminus } {\mathcal {A}}\) of \(b^-\) and \(b^+\), respectively, such that

$$\begin{aligned} \ell ( f^{-1}(U)) \ge \ell ( f^{-1}(U)\cap (0,N)) > 0 \end{aligned}$$

and

$$\begin{aligned} \ell (f^{-1}(V)) \ge \ell ( f^{-1}(V)\cap (0,N)) > 0. \end{aligned}$$

By the Lusin theorem there exist compact subsets \(C^- \subset f^{-1}(U)\) and \(C^+ \subset f^{-1}(V)\) having positive measure such that \(f\!\!\mid _{C^-}\) and \(f\!\!\mid _{C^+}\) are continuous. Let \(c^-\) and \(c^+\) be density 1 points of the sets \(C^-\) and \(C^+\), respectively. Without loss of generality we may assume that \(c^-< c^+\) (otherwise we may interchange \(b^-\) and \(b^+\)). Clearly if \(0<c^-<c^+<a\) for some \(a>0\), then for \(c{:}{=}c^+-c^-\) we also have \(0<c<a\).

Step 4. Let \(x_0>0\) be arbitrary. We prove that f is continuous at \(x_0\). Let \(t_0{:}{=}c/x_0\) and \(y_0{:}{=}c^-/t_0\); we have \(t_0x_0=c\), \(t_0y_0=c^-\) and \(t_0(x_0+y_0)=c^+\). Let us fix \(a\ge N\) such that \(a>x_0+y_0\) and hence \(a>x_0\) and \(a>y_0\). We choose \(\eta >0\) such that

$$\begin{aligned} \begin{array}{l} \ell \bigl ([c^+-3t_0\eta ,c^+ + 3t_0\eta ]\cap C^+\bigr )\,> \, 0.99 \cdot 6t_0\eta , \\ \ell \bigl ([c^--2t_0\eta ,c^-+2t_0\eta ]\cap C^-\bigr ) \, > \, 0.99 \cdot 4t_0\eta , \\ X:=[x_1,x_2]:=[x_0- \eta ,x_0+ \eta ] \, \subset \,( 0,a), \\ Y:=[y_1,y_2]:=[y_0- \eta ,y_0+ \eta ]\, \subset \, ( 0,a) \end{array} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{ll} {[c^+ - 3t_0 \eta ,c^+ + 3t_0 \eta ] \subset ( 0,a), }&{} { [c^- -2t_0 \eta , c^- + 2t_0 \eta ]\subset ( 0,a),} \\ {[c-2t_0 \eta , c+2t_0 \eta ] \subset ( 0,a),} &{} {[x_0+y_0-2 \eta ,x_0 + y_0 + 2\eta ] \subset ( 0,a). }\\ \end{array} \end{aligned}$$

Let us choose \(t_1,t_2\) such that

$$\begin{aligned}{} & {} t_0 \max \left\{ \frac{1}{2},\, \frac{x_0+y_0-3\eta }{x_0+y_0-2\eta },\, \frac{y_0-2\eta }{y_0-\eta }, \frac{x_0-2\eta }{x_0-\eta } \right\} \le t_1 \\{} & {} \quad< t_0 < t_2 \le t_0 \min \left\{ \frac{x_0+y_0+3\eta }{x_0+y_0+2\eta },\, \frac{y_0+2\eta }{y_0+\eta }, \, \frac{x_0+2\eta }{x_0+\eta } \right\} . \end{aligned}$$

Let \(T{:}{=}[t_1,t_2]\). Because the function \(g_3\), defined by \(g_3(x,t,y){:}{=}tx\), is strictly monotonic in t and x, and

$$\begin{aligned} t_1 x_1\ge t_0 \frac{x_0-2\eta }{x_0-\eta }(x_0-\eta )=c-2t_0\eta ,\quad t_2 x_2\le t_0 \frac{x_0+2\eta }{x_0+\eta }(x_0+\eta )=c+2t_0\eta , \end{aligned}$$

it maps \(\Delta {:}{=} X \times T\times Y\) into \([c-2t_0\eta ,c+2t_0\eta ]\). Similarly, putting

$$\begin{aligned} \begin{array}{c} g_1(x,t,y){:}{=}x+y, \,\, g_2(x,t,y){:}{=}y, \,\, g_4(x,t,y){:}{=}ty, \text{ and } g_5(x,t,y){:}{=}t(x+y) \end{array} \end{aligned}$$

we see that

$$\begin{aligned} \begin{array}{l} g_1 \text{ maps } \Delta \hbox { into } [x_0+y_0-2\eta ,x_0+y_0+2\eta ], \\ g_2 \text{ maps } \Delta \hbox { into } [y_0-\eta ,y_0+\eta ], \\ g_4 \text{ maps } \Delta \hbox { into } [c^--2t_0\eta ,c^-+2t_0\eta ], \end{array} \end{aligned}$$

and

$$\begin{aligned} \begin{array}{l} g_5 \hbox { maps } \Delta \hbox { into } [c^+-3t_0\eta ,c^+ + 3t_0\eta ]. \end{array} \end{aligned}$$

Let \(\delta >0\) be such that \(\delta <(t_2-t_1)x_1/2\) and \(\delta <\eta /4\). Using the Lusin theorem we may choose a compact set \(C\subset ( 0,a) \) such that \(\ell \bigl (( 0,a) {\setminus } C\bigr )<\delta \) and \(f\!\mid _C\) is continuous. Now for any \(x\in X\) let

$$\begin{aligned}&T \times Y \supset D_x \\ {}&\quad {:}=\, \bigl \{(t,y): g_i(x,t,y)\in C \text{ for }\, \, i=1,2,3,\,\, g_4(x,t,y)\in C^-\text{, } g_5(x,t,y)\in C^+\bigr \}, \end{aligned}$$

and, moreover,

$$\begin{aligned} D{:}{=}\bigcup _{x\in X}\{x\}\times D_x. \end{aligned}$$

Clearly, if \((x,t,y)\in D\), then equation (B) can be rewritten in the form

$$\begin{aligned} f(x)=f(x+y)+ \bigl ( f(y) - f(x+y) \bigr ) \frac{f\bigl (t(x+y)\bigr )-f(tx)}{f\bigl (t(x+y)\bigr )-f(ty)}. \end{aligned}$$
(8)

Using the notation

$$\begin{aligned} h(z_1,z_2,z_3,z_4,z_5){:}{=}z_1+(z_2-z_1) \frac{z_5-z_3}{z_5-z_4} \end{aligned}$$
(9)

equation (8) takes the form

$$\begin{aligned} f(x)=h\Bigl (f\bigl (g_1(x,t,y)\bigr ),f\bigl (g_2(x,t,y)\bigr ),\ldots ,f\bigl (g_5(x,t,y)\bigr )\Bigr ). \end{aligned}$$
(10)

We will prove that f is continuous at \(x_0\), i.e., that for each \(\varepsilon > 0 \) there exists \(\gamma >0\) such that if \( |x-x_0 |< \gamma \), then \( |f(x) - f(x_0) |< \varepsilon \). We will use equation (10). Because h is continuous on its open domain \(\bigl \{(z_1,z_2,z_3,z_4,z_5)\in {\mathbb {C}}^5:z_4\not =z_5\bigr \}\), it is uniformly continuous on the compact set \(f(C)\times f(C)\times f(C)\times f(C^-)\times f(C^+)\); hence there exists \(\alpha >0\) such that if \((z_1,z_2,z_3,z_4,z_5)\) and \((z'_1, z'_2, z'_3, z'_4, z'_5)\) are \(\alpha \)-close in this set (i.e. the distance between these points is less than \(\alpha \) in the Euclidean or in any other norm), then \(h(z_1, z_2, z_3, z_4, z_5)\) and \(h(z'_1, z'_2, z'_3, z'_4, z'_5)\) are \(\varepsilon \)-close. By the uniform continuity of f on the compact set \(C\cup C^- \cup C^+\), there exists \(\beta >0\) such that if u and \(u'\) are in this set and \( |u-u' |< \beta \), then \(|f(u)-f(u')|<\alpha \). Since the functions \(g_i\), \(i=1,2,3,4,5\), are continuous, and hence uniformly continuous on the compact set \(X\times T\times Y\), there exists \(\gamma >0\) such that if (x, t, y) and \((x',t',y')\) are \(\gamma \)-close in this set, then \(g_i(x,t,y)\) and \(g_i(x',t',y')\) are \(\beta \)-close for \(i=1,2,3,4,5\). We will choose \(x'=x_0\), \(t'=t\) and \(y'=y\). Hence the proof will be complete once we show that for each \(x\in X\) the set \(D_x\cap D_{x_0}\) is nonempty: then, whenever x and \(x_0\) are \(\gamma \)-close in X, equation (10) implies that f(x) and \(f(x_0)\) are \(\varepsilon \)-close, which gives the continuity of f at \(x_0\).

Step 5. To prove this, let \(T_x{:}{=}\{t\in T:tx\in C\}\). The set \(\{x\} \times (T\setminus T_x) \times Y\) is mapped by \(g_3\) into a set having measure \(x\ell (T\setminus T_x)\) but contained in \(( 0,a) {\setminus } C\). Hence \(\ell (T {\setminus } T_x)<\delta /x\le \delta /x_1<(t_2-t_1)/2\). Applying this for any \( x \in X \) and then for \( x = x_0 \) we obtain that \(T_x\) and \(T_{x_0}\) both have \(\ell \)-measure greater than \((t_2-t_1)/2\), hence their intersection is non-empty. Let us fix t from this intersection. We will investigate the sets \(Y_{1,x,t}{:}{=}\{y\in Y:x+y\in C\}\), \(Y_{2,x,t}{:}{=} \{y\in Y:y\in C\}\), \(Y_{4,x,t}{:}{=}\{y\in Y:t(x+y)\in C^+\}\), \(Y_{5,x,t}{:}{=}\{y\in Y:ty\in C^-\}\). The measures of the first two sets are greater than \(2\eta -\delta \). For the third set, \(Y\setminus Y_{4,x,t}\) is mapped by \(g_5\) into a set having measure \(t\ell (Y \setminus Y_{4,x,t}) \) but contained in \([c^+-3t_0\eta ,c^+ + 3t_0\eta ]\setminus C^+\), which has measure less than \(0.06t_0\eta \). Hence \(\ell (Y{\setminus } Y_{4,x,t})<0.06t_0\eta /t\le 0.06t_0\eta /t_1 \le 0.06t_0\eta /(t_0/2)=0.12\eta \). Similarly, we obtain that \(\ell (Y\setminus Y_{5,x,t})<0.08\eta \). Of course, the same estimates hold true for the special case \(x=x_0\). Hence we get that the set

$$\begin{aligned} \bigcap _{i=1,2,4,5}\bigl (Y_{i,x,t}\cap Y_{i,x_0,t}\bigr ) \end{aligned}$$

has measure at least \(2\eta -4\delta -0.4\eta >0.6\eta \), so it is non-empty. \(\square \)

Remark 6

Let \(f: (0,\infty ) \rightarrow {\mathbb {C}}\). If f is a solution of equation (B) which is constant everywhere except for a set of first category, then it is constant everywhere.

This follows from arguments similar to those used in Remark 5.

Theorem 5.2

Let \(f: (0,\infty ) \rightarrow {\mathbb {C}}\). If f is a Baire measurable solution of equation (B) then it is continuous.

Proof

In view of Remark 6 it is enough to consider the case of a Baire measurable solution of (B) which is not constant on the complement of any set of first category. The proof makes use of ideas similar to those in the proof of the previous theorem, but it is somewhat simpler: we prove that for such a solution the term \(Q(x,t,y) = f\bigl (t(x+y)\bigr )-f(ty)\) is nonzero for a “large” set of triples (x, t, y); hence we may divide by it and apply the general methods of [8], especially a variant of [8, Thm. 9.1]. We will also use the ideas of the proofs of Theorems 4.1, 4.2 and 4.3 from [8], but for simplicity we will give here a self-contained proof, using only the ideas of [8], not the results.

If a function \(f:( 0,+\infty ) \rightarrow {\mathbb {C}}\) is Baire measurable, then there exists a set \(E\subset ( 0,+\infty ) \) with complement of the first category such that \(f\vert _E\) is continuous. By our assumption the function \(f\vert _E\) is not constant, hence there exist \(c^-\) and \(c^+\) in E such that \(f(c^-)\not =f(c^+)\). Without loss of generality we may assume that \(c^-<c^+\). Let \(c{:}{=}c^+-c^-\). We will prove that f is continuous at any fixed \(x_0>0\).

Let us choose disjoint open neighborhoods \(V^-\subset ( 0,+\infty ) \) and \(V^+\subset ( 0,+\infty ) \) of \(c^-\) and \(c^+\), respectively, such that each element of \(f(E\cap V^-)\) is different from any element of \(f(E\cap V^+)\). Let X, Y and T be neighborhoods of \(x_0\), \(x_0c^-/c\) and \(c/x_0\), respectively, such that for each \(x\in X\), \(y\in Y\) and \(t\in T\) we have \(ty\in V^-\) and \(t(x+y)\in V^+\). Let us choose \(t_0\in T\) such that \(t_0x_0\in E\) and then \(y_0\in Y\) such that \(y_0\in E\), \(x_0+y_0\in E\), \(t_0y_0\in E\) and \(t_0(x_0+y_0)\in E\). Because for \(z{:}{=}f(x_0)\), \(z_1{:}{=}f(x_0+y_0)\), \(z_2{:}{=}f(y_0)\), \(z_3{:}{=}f(t_0x_0)\), \(z_4{:}{=}f(t_0y_0)\) and \(z_5{:}{=}f \bigl ( t_0(x_0+y_0) \bigr )\) we have \(z_4\not =z_5\) and by the equality

$$\begin{aligned} z=z_1+(z_2-z_1)\frac{z_5-z_3}{z_5-z_4}, \end{aligned}$$

for an arbitrary neighborhood W of z there exist neighborhoods \(W_1\), \(W_2\), \(W_3\), \(W_4\) and \(W_5\) of \(z_1\), \(z_2\), \(z_3\), \(z_4\) and \(z_5\), respectively, such that for any \(z'_1\in W_1\), \(z'_2\in W_2\), \(z'_3\in W_3\), \(z'_4\in W_4\) and \(z'_5\in W_5\) we have \(z'_4\not = z'_5\) and

$$\begin{aligned} z'=z'_1 + (z'_2-z'_1) \frac{z'_5-z'_3}{z'_5-z'_4}\in W. \end{aligned}$$

Because \(f\vert _E\) is continuous, there exist open sets \(V_1, \ldots , V_5\), neighborhoods of \(x_0+y_0\), \(y_0\), \(t_0x_0\), \(t_0y_0\) and \(t_0(x_0+y_0)\), respectively, such that for \(z'_i\in E\cap V_i\) we have \(f(z'_i)\in W_i\) for \(i=1,2,3,4,5\). Now there exist neighborhoods \(X_0\), \(Y_0\) and \(T_0\) of \(x_0\), \(y_0\) and \(t_0\), respectively, contained in X, Y and T, such that \(x+y\in V_1\), \(y\in V_2\), \(tx\in V_3\), \(ty\in V_4\) and \(t(x+y)\in V_5\) whenever \(x\in X_0\), \(y\in Y_0\) and \(t\in T_0\). Fixing any \(x\in X_0\) let us choose \(t\in T_0\) such that \(tx\in E\). Then let us also choose \(y\in Y_0\) such that \(y\in E\), \(x+y\in E\), \(ty\in E\) and \(t(x+y)\in E\). For \(z'{:}{=}f(x)\), \(z'_1{:}{=}f(x+y)\), \(z'_2{:}{=}f(y)\), \(z'_3{:}{=}f(tx)\), \(z'_4{:}{=}f(ty)\) and \(z'_5{:}{=}f\bigl (t(x+y)\bigr )\) we have \(z'_4\not =z'_5\) and, by equation (B),

$$\begin{aligned} z'=z'_1+(z'_2-z'_1) \frac{z'_5-z'_3}{z'_5-z'_4}\in W. \end{aligned}$$

This proves that f is continuous at \(x_0\). \(\square \)

In the next two theorems we will use the following result.

Theorem 1.28 from [8] Let X, Y, and Z be open subsets of \({\mathbb {R}}^r\), \({\mathbb {R}}^s\), and \({\mathbb {R}}^t\), respectively, let D be an open subset of \(X\times Y\) and let W be an open subset of \(D\times Z^n\), where \(r,s,t,n \in {\mathbb {N}}\). Let \(f:X\rightarrow Z\), \(g_i= (g_{i,1}, \dots , g_{i,r}): D\rightarrow X\) for all \(i=1,2,\dots ,n\), and \(h: W\rightarrow Z\). Assume that

  1. (i)

    h and \(g_1, \ldots ,g_n\) are of class \({\mathcal {C}}^{\infty }\);

  2. (ii)

    there exists a compact subset C of X such that for each \(i=1,2,\dots ,n\) and for each \({x}\in X\) there exists \(y=(y_1,\dots ,y_s)\) for which \((x,y)\in D\), \(g_i(x,y)\in C\) and the matrix

    $$\begin{aligned} \frac{\partial g_i}{\partial y}({x},{y}) = \Bigl [\frac{\partial g_{i,k}}{\partial y_j}({x},{y})\Bigr ]_{k \le r, j \le s} \end{aligned}$$

    has rank r;

  3. (iii)

if \(({x},{y})\in D\) then \(\Bigl ( ({x},{y}), f\bigl ( g_1 ({x}, {y}) \bigr ), \dots ,f\bigl (g_n({x},{y})\bigr ) \Bigr )\in W\) and

$$\begin{aligned} f({x})=h\Bigl ( (x,y), f\bigl (g_1(x,y)\bigr ), \dots , f\bigl ( g_n(x,y) \bigr ) \Bigr ). \end{aligned}$$

If f is Lebesgue or Baire measurable, then it is of class \({\mathcal {C}}^{\infty }\).

Theorem 5.3

Let \(f:( 0,\infty ) \rightarrow {\mathbb {C}}\) be a one-to-one Lebesgue measurable (or Baire measurable) solution of equation (B). Then f is of class \({{\mathcal {C}}}^{\infty }\).

Proof

We may use the previous two theorems to infer that Lebesgue measurable solutions and Baire measurable solutions are continuous. More simply, expressing the term f(x) from equation (B), we may use Theorem 1.25 from [8] to obtain that f is continuous. For real valued solutions, Theorem 1.26 from [8] can be applied to obtain \(f\in {{\mathcal {C}}}^{\infty }\), because one-to-one and continuous solutions are strictly monotonic, hence almost everywhere differentiable; this method, however, does not work for complex-valued solutions. So instead we are going to apply Theorem 1.28 from [8], but this cannot be done directly. The reason is, roughly speaking, that if x is large then \(x+y\) is even larger, hence we cannot keep it in a compact set if x moves in a larger open set. We may try to express \(f(x+y)\) from equation (B) and introduce a new free variable, say \(x'=x+y\) instead of, say, x; but in this case we have a problem if \(x'\) is small: y is even smaller, hence we cannot keep it in a compact subset of \(( 0,+\infty ) \) if \(x'\) moves in a larger open set.

The basic idea is to remove the term \(f(x+y)\) completely. From  (B) we obtain

$$\begin{aligned} \left. \begin{array}{l} f(x+y)\bigl [f(tx)-f(ty)\bigr ] \\ \quad =f(x)\bigl [f(t(x+y)) -f(ty) \bigr ] -f(y) \bigl [f(t(x+y))-f(tx)\bigr ]. \end{array} \right. \end{aligned}$$
(11)

Replacing t, x, y by \(t',x',y'\), respectively, we obtain

$$\begin{aligned} \left. \begin{array}{l} f(x'+y')\bigl [f(t'x')-f(t'y')\bigr ] \\ = f(x')\bigl [f(t'(x'+y'))-f(t'y')\bigr ] -f(y')\bigl [f(t'(x'+y'))-f(t'x')\bigr ]. \end{array} \right. \end{aligned}$$
(12)

Assume that \(x+y = x'+y'\). Multiplying (11) by \(f(t'x')-f(t'y')\) and (12) by \(f(tx)-f(ty)\), and then subtracting the second of the resulting equalities from the first one, we get

$$\begin{aligned}{} & {} \bigl (f(t'x')-f(t'y')\bigr )\Bigl [\bigl (f(x) - f(y) \bigr ) f(t(x+y)) \!- \! f(x) f(ty) + f(y) f(tx) \Bigr ] \nonumber \\{} & {} \quad = \bigl (f(tx)\! - \! f(ty)\bigr ) \Bigl [ \bigl ( f(x') - f(y') \bigr )f(t'(x+y))\! - \! f(x') f(t'y') + f(y') f(t'x')\Bigr ].\nonumber \\ \end{aligned}$$
(13)

Of course \(x'\) is no longer a free variable but a function of \(x,y,y'\), i.e., \(x'=x+y-y'\). The functional equation (13) is satisfied for all \(x,y,t,y', t'>0\) for which \(x+y-y'>0\). Expressing f(x) from equation (13) we obtain the following functional equation

$$\begin{aligned} \left. \begin{array}{lll} f(x) &{}=f(y) \, \displaystyle {\frac{f(t(x+y))\! - \!f(tx)}{f (t(x+y))\! - \! f(ty)}} + \bigl (f(tx)\! - \! f(ty)\bigr ) \\ &{}\quad \times \displaystyle {\frac{\Bigl [f(x') \bigl (f(t'(x+y))\! - \! f(t'y')\bigr ) -f(y')\bigl (f(t'(x+y))\! - \! f(t'x')\bigr )\Bigr ]}{\bigl (f(t'x')\! - \! f(t'y')\bigr ) \bigl (f(t(x+y))\! - \! f(ty)\bigr )}} \end{array} \right. \nonumber \\ \end{aligned}$$
(14)

satisfied on an open set D, where

$$\begin{aligned} D = \left\{ (x,y,t,y',t'): x,y,t,y',t'>0, y'<x+y, x'\not =y'\right\} . \end{aligned}$$
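As a sanity check, identities (11)–(13) can be verified numerically on a known solution of (B), e.g. \(f(x)=\ln x\) (cf. Theorem 6.4 with \(q=0\)). A minimal sketch; the sample ranges are our own choice:

```python
import math, random

f = math.log        # a known solution of (B): Theorem 6.4 with q = 0
random.seed(0)
for _ in range(100):
    x, y, t, tp = (random.uniform(0.1, 5.0) for _ in range(4))
    yp = random.uniform(0.05, x + y - 0.05)   # keep x' = x + y - y' > 0
    xp = x + y - yp
    T, Tp = t * (x + y), tp * (x + y)
    # equation (B)
    assert abs((f(T) - f(t*x)) * (f(x+y) - f(y))
               - (f(T) - f(t*y)) * (f(x+y) - f(x))) < 1e-9
    # identity (11)
    assert abs(f(x+y) * (f(t*x) - f(t*y))
               - f(x)*(f(T) - f(t*y)) + f(y)*(f(T) - f(t*x))) < 1e-9
    # identity (13), using x' + y' = x + y
    lhs = (f(tp*xp) - f(tp*yp)) * ((f(x) - f(y))*f(T) - f(x)*f(t*y) + f(y)*f(t*x))
    rhs = (f(t*x) - f(t*y)) * ((f(xp) - f(yp))*f(Tp) - f(xp)*f(tp*yp) + f(yp)*f(tp*xp))
    assert abs(lhs - rhs) < 1e-9
print("(B), (11) and (13) hold for f = ln")
```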

Fix arbitrary \(a>0\) and \(b \ge 18a\). We will apply Theorem 1.28 from [8] to deduce that \(f\in {{\mathcal {C}}}^{\infty }\bigl (( a,b) \bigr )\). It is clear that for each of the functions \(g_1, \ldots , g_9\), given by

$$\begin{aligned} \begin{array}{ll} g_1(x,y,t,y',t')=y, &{} g_2(x,y,t,y',t')=tx, \\ g_3(x,y,t,y',t')=ty, &{} g_4(x,y,t,y',t')=x+y-y', \\ g_5(x,y,t,y',t')=y', &{} g_6(x,y,t,y',t')=t'(x+y), \\ g_7(x,y,t,y',t')=t'y', &{} g_8(x,y,t,y',t')=t'(x+y-y'), \\ g_9(x,y,t,y',t')=t(x+y), &{} \\ \end{array} \end{aligned}$$

respectively, the rank of the matrix \({\partial g_i}/{\partial (t,y,t',y')}\) is 1 whenever \(x,y,t,y',t'>0\). Hence it is enough to prove that for each \(a<x<b\) there exist \(y,t,y',t'\) such that \((x,y,t,y',t')\in D\) and all the values \(g_i(x,y,t,y',t')\), \(i=1,\ldots ,9\), lie in the compact subset \([2a,b-a]\) of the open interval \((a,b)\). We consider three cases.

If \(a<x\le 2a\) then let \(y=x+2a\), \(y'=2a\) (then \(x'=2x\)), let \(t=2\) and \(t'=1\) (then \(tx=2x\), \(ty=4a+2x\), \(t(x+y)=4a+4x\), \(t'x'=2x\), \(t'y'=2a\), \(t'(x+y)=2a+2x\)).

If \(2a<x<b-4a\) then let \(y=3a\), \(y'=2a\) (then \(x'=x+a\)), let \(t=1\) and \(t'=1\) (then \(tx=x\), \(ty=3a\), \(t(x+y)=3a+x\), \(t'x'=x+a\), \(t'y'=2a\), \(t'(x+y)=3a+x\)).

If \(b-4a\le x<b\) then let \(y=8a\), \(y'=x-2a\) (then \(x'=10a\)), let \(t=1/2\) and \(t'=1/2\) (then \(tx=x/2\), \(ty=4a\), \(t(x+y)=4a+x/2\), \(t'x'=5a\), \(t'y'=x/2-a\), \(t'(x+y)=4a+x/2\)).

Now Theorem 1.28 in [8] and equation (14) imply that \(f\in {{\mathcal {C}}}^{\infty }\bigl (( a,b) \bigr )\). Because \(a>0\) and \(b\ge 18a\) were arbitrary we obtain that \(f \in {{\mathcal {C}}}^{\infty } \bigl (( 0,+\infty ) \bigr )\). \(\square \)
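The case analysis above can be checked mechanically. The sketch below fixes \(a=1\), \(b=18\) and verifies, for one admissible choice of parameters in each range (the concrete values are our own selection), that all nine inner functions land in \([2a, b-a]\) and that the quintuple lies in D:

```python
a, b = 1.0, 18.0  # b >= 18a, as in the text

def gs(x, y, t, tp, yp):
    """Values of g_1, ..., g_9 at (x, y, t, t', y'), and x' = x + y - y'."""
    xp = x + y - yp
    return [y, t*x, t*y, xp, yp, tp*(x+y), tp*yp, tp*xp, t*(x+y)], xp

n = 2000
for i in range(n):
    x = a + (b - a) * (i + 0.5) / n            # sample x throughout (a, b)
    if x <= 2*a:                                # first range
        y, yp, t, tp = x + 2*a, 2*a, 2.0, 1.0
    elif x < b - 4*a:                           # second range
        y, yp, t, tp = 3*a, 2*a, 1.0, 1.0
    else:                                       # third range
        y, yp, t, tp = 8*a, x - 2*a, 0.5, 0.5
    vals, xp = gs(x, y, t, tp, yp)
    assert yp < x + y and xp != yp              # membership in D
    assert all(2*a <= v <= b - a for v in vals) # all g_i in [2a, b-a]
print("all sampled points admit parameters in D")
```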

We would like to extend the previous theorem to the case where f is not assumed to be one-to-one, again using the ideas of the previous proof. First we prove an auxiliary result.

Lemma 5.4

Let X be any set and \(f:( 0,\infty ) \rightarrow X\). If f is not a constant function, then for an arbitrary nonempty interval \(( a,b) \subset ( 0,1) \) there exist \(\gamma \in ( a,b) \) and \(u,v>0\) such that \(f(u)\not =f(v)\) and \(u/v=\gamma \).

Proof

Suppose on the contrary that there exists a nonempty interval \((a,b) \subset (0,1)\) for which such \(\gamma ,u,v\) do not exist. In particular, this means that for any choice of \(u,v>0\), \(u<v\) we have either \(f(u) = f(v)\) or \(u/v \not \in (a,b)\). Consequently, we have that if \(u,v >0\) and \(u/v \in (a,b)\) then \(f(u) = f(v)\).

Let \(u_1>0\) and \(f(u_1) = c\). For every \(\gamma \in (a,b)\) we have \(f(u_1\gamma ) = f(u_1) = c\), because the quotient of \(u_1\gamma \) and \(u_1\) equals \(\gamma \); similarly \(f(u_1/\gamma ) = c\). Combining one step of each kind we obtain \(f(u_1 s) = c\) for every \(s \in ({a}/{b}, {b}/{a})\): indeed, every such s can be written as \(\gamma _1/\gamma _2\) with \(\gamma _1, \gamma _2 \in (a,b)\). Since \(({a}/{b}, {b}/{a})\) is an open neighbourhood of 1, writing \(s = s_1 \cdots s_n\) with \(s_i = s^{1/n}\) and using mathematical induction we obtain \(f(u_1 s) = c\) for every \(s \in \bigl ( ({a}/{b})^{n}, ({b}/{a})^{n} \bigr )\) and every \(n \in {\mathbb {N}}\). Consequently,

$$\begin{aligned} f^{-1} \bigl ( \{c\} \bigr ) \supset \bigcup _{n=1}^{\infty } \left( u_1 \Bigl ( \frac{a}{b} \Bigr )^{n}, u_1 \Bigl ( \frac{b}{a} \Bigr )^{n} \right) = (0, \infty ). \end{aligned}$$

This means that f is constant, contrary to the assumption. \(\square \)
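The covering step can be illustrated numerically: any ratio \(s>0\) is a product of finitely many factors lying in \((a/b, b/a)\), so the relation \(u/v \in (a,b) \Rightarrow f(u)=f(v)\) propagates \(f(u_1)=c\) to the whole half-line. A small sketch (the helper and the sample values are our own):

```python
import math

def decompose(s, a, b):
    """Write s > 0 as a product of n equal factors, each inside (a/b, b/a)."""
    hi = b / a
    n = max(1, math.ceil(abs(math.log(s)) / math.log(hi)) + 1)  # +1 gives slack
    return [s ** (1.0 / n)] * n

a, b = 0.9, 0.95                      # any nonempty interval (a, b) in (0, 1)
for s in (0.001, 0.5, 1.0, 7.3, 1234.5):
    factors = decompose(s, a, b)
    assert all(a / b < fa < b / a for fa in factors)
    assert abs(math.prod(factors) - s) / s < 1e-9
print("every positive ratio decomposes into admissible factors")
```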

Theorem 5.5

Let \(f:( 0,+\infty ) \rightarrow {\mathbb {C}}\) be a continuous solution of equation (B). Then f is of class \({{\mathcal {C}}}^{\infty }\).

Proof

We may assume that f is nonconstant. We will use the notations of the proof of the previous theorem. By Lemma 5.4 there exist positive \(\gamma _1, u_1,v_1\) and \(\gamma _2, u_2, v_2\) such that

$$\begin{aligned} \begin{array}{ll} \frac{u_1}{v_1} = \gamma _1 \in \bigl ( \frac{\sqrt{5}-1}{2}, 1 \bigr ), \quad &{} \quad f(u_1) \ne f(v_1), \\ \frac{u_2}{v_2} = \gamma _2 \in \bigl ( 0, \frac{1}{3}\bigr ), \quad &{} \quad f(u_2) \ne f(v_2). \end{array} \end{aligned}$$

Then \(\gamma _1 > \gamma _2\), \(u_1< v_1 < \frac{\sqrt{5}+1}{2} \, u_1\) and \(v_2 > 3 u_2\). Considering

$$\begin{aligned} w_1{:}=v_1-u_1, \quad z_1{:}=u_1+v_1, \quad w_2{:}=v_2-u_2 \end{aligned}$$

we see that \(0<w_1<u_1<v_1<z_1\) and \(u_2<w_2<v_2\). Now we choose \(a>0\) small enough and \(b>0\) large enough to have

$$\begin{aligned} w_1,u_1,v_1,z_1,u_2,w_2,v_2\in (a,b) \quad \hbox { and } \quad b(1-\gamma _1)/{\gamma _1}>a(1-\gamma _2)/{\gamma _2}. \end{aligned}$$

Since the function \(\gamma \mapsto (1-\gamma ^2)/{\gamma }\) is strictly decreasing on the interval (0, 1), we have

$$\begin{aligned} \alpha {:}{=} \frac{a(1-\gamma _1^2)}{\gamma _1} < a \biggl ( 1 - \Bigl ( \frac{\sqrt{5}-1}{2} \Bigr )^2\biggr ) \frac{2}{\sqrt{5}-1}=a \end{aligned}$$

and

$$\begin{aligned} \beta : = b (1-\gamma _2)(1+\gamma _1)>b \frac{2}{3}\left( 1+ \frac{\sqrt{5}-1}{2}\right) = b \frac{1+\sqrt{5}}{3}>b. \end{aligned}$$

We will apply Theorem 1.28 from [8] to infer that f is in \({{\mathcal {C}}}^{\infty }\) on \(( \alpha ,\beta ) \). Let D be the set of all quintuples \((x,t,y,t',y')\) for which \(\alpha<x<\beta \), the values of all the inner functions \(g_i\), \(i=1,\ldots ,9\), are greater than a and less than b, i.e.

$$\begin{aligned} a<y,tx,ty,t(x+y),x',y',t'x',t'y',t'(x+y)<b, \end{aligned}$$

where \(x'=x+y-y'\), and moreover \(f \bigl ( t(x+y) \bigr ) \not =f(ty)\) and \(f(t'x')\not =f(t'y')\). Of course, if \((x,t,y,t',y')\in D\), then equation (13) can be rewritten in the form (14). Hence we may apply the above-mentioned theorem with the compact set \(C{:}{=}[a,b]\), provided we prove that for each \(x\in X{:}{=}( \alpha ,\beta ) \) there exist \(t,y,t',y'\) with \((x,t,y,t',y')\in D\). There are two possible cases.

We start with the first one, when \(\alpha< x < b (1-\gamma _1)/{\gamma _1} \). In this case we choose t and y such that \(t(x+y)=v_1\) and \(ty=u_1\), i.e., \(y/(x+y)=\gamma _1\), \(y{:}{=}\gamma _1 x/(1-\gamma _1)\) and \( t{:}{=} u_1(1-\gamma _1)/(\gamma _1 x)\). Then \(tx=w_1\). Next we choose \(t'\) and \(y'\) such that \(t'y'=v_1\) and \(t'x'=u_1\), i.e., \(x'/y'=(x+y)/y'-1=\gamma _1\),

$$\begin{aligned} 1+\gamma _1= \frac{x+y}{y'} = \frac{x+y}{y} \frac{y}{y'}= \frac{1}{\gamma _1} \frac{y}{y'}, \end{aligned}$$

hence \(y'{:}{=}y/{\bigl (\gamma _1(1+\gamma _1)\bigr )}=x/(1-\gamma _1^2)\) and \(t': = v_1 (1-\gamma _1^2)/x\). Then \(t'(x+y)=t'(x'+y')=z_1\). We have to check that \(y,y',x'\in (a,b)\). Because \(x'<y'\) and

$$\begin{aligned} \gamma _1(1+\gamma _1) > \frac{\sqrt{5}-1}{2}\frac{\sqrt{5}+1}{2}=1, \end{aligned}$$

we have \(x'<y'<y\), hence it is enough only to give a lower estimate of \(x'\):

$$\begin{aligned} x'=\gamma _1y' = \frac{\gamma _1x}{1-\gamma _1^2} > \frac{\gamma _1\alpha }{1-\gamma _1^2}=a. \end{aligned}$$

Similarly, the upper estimate is \( y = x \gamma _1/(1-\gamma _1) < b \).

Now we pass to the second case, when \( b (1-\gamma _1)/{\gamma _1} \le x < \beta \). Here we choose t and y such that \(t(x+y)=v_2\) and \(ty=u_2\), i.e., \( y / (x+y) = \gamma _2\), \( y{:}{=} \gamma _2 x/(1-\gamma _2)\) and \( t{:}{=} u_2 (1-\gamma _2)/(\gamma _2 x)\). Then \(tx=w_2\). We also choose \(t'\) and \(y'\) such that \(t'y'=u_1\) and \(t'x'=v_1\), i.e., \(1/{\gamma _1}=x'/y'=(x+y)/y'-1\),

$$\begin{aligned} 1+1/{\gamma _1}= \frac{x+y}{y'} = \frac{x+y}{y} \frac{y}{y'} = \frac{1}{\gamma _2} \frac{y}{y'}, \end{aligned}$$

hence \(y'{:}{=}y/{\bigl (\gamma _2(1+1/{\gamma _1})\bigr )}=x/{\bigl ((1-\gamma _2)(1+1/{\gamma _1})\bigr )}\) and \(t'{:}{=}u_1(1-\gamma _2)(1+1/{\gamma _1})/x\). Then \( t'(x+y) = t'(x'+y') = z_1 \). We have to check that \(y,y',x'\in (a,b)\). Because \(y'<x'\) and

$$\begin{aligned} \gamma _2(1+1/{\gamma _1})< \frac{1}{3} \Bigl (1+ \frac{2}{\sqrt{5}-1} \Bigr ) = \frac{\sqrt{5}+3}{6}<1, \end{aligned}$$

we have \(y<y'<x'\), hence we find the lower estimate \(y=x\gamma _2/(1-\gamma _2)>a\) and the upper estimate

$$\begin{aligned} x'= \frac{y'}{\gamma _1} = \frac{x}{(1-\gamma _2)(\gamma _1+1)} < \frac{\beta }{(1-\gamma _2)(1+\gamma _1)}=b. \end{aligned}$$

This proves that \( f \in {{\mathcal {C}}}^{\infty } \bigl ( (\alpha ,\beta ) \bigr )\), in particular \( f \in {{\mathcal {C}}}^{\infty } \bigl ( (a,b) \bigr )\). Because a can be arbitrarily small and b can be arbitrarily large, we obtain \(f\in {{\mathcal {C}}}^{\infty }\bigl (( 0,+\infty ) \bigr )\). \(\square \)

Corollary 5.6

Every Lebesgue measurable and every Baire measurable solution \(f: (0,\infty ) \rightarrow {\mathbb {C}}\) of equation (B) is of class \({\mathcal {C}}^{\infty }\).

6 Solution of Equation (B)

In this section we give all measurable solutions of the equation

$$\begin{aligned} \left. \begin{array}{l} \bigl (f(t(x+y)) - f(tx)\bigr ) \bigl (f(x+y) - f(y)\bigr ) \\ \quad = \bigl (f(t(x+y)) - f(ty)\bigr ) \bigl (f(x+y) - f(x)\bigr ), \end{array}\right. \quad x,y,t >0. \end{aligned}$$
(B)

We will use the following auxiliary fact.

Proposition 6.1

If \(f:(0, \infty )\rightarrow {{\mathbb {R}}}\) is a Lebesgue measurable or Baire measurable solution of equation (B), then f is either constant or strictly monotonic.

The basic tool in the proof of Proposition 6.1 is the following “folk lemma”.

Lemma 6.2

Let \(I\subset {\mathbb {R}}\) be an open interval and \(f:I\rightarrow {\mathbb {R}}\) be a continuous function. If \(c \in I\) is a point of a local extremum of  f, then for every positive \(\varepsilon \) small enough there is a point \(x \in I\) such that \(x+ \varepsilon \in I\), \(x\le c \le x+\varepsilon \) and \(f\left( x+\varepsilon \right) =f(x)\).

Proof

Assume that c is a point of local maximum of f and choose a positive \(\varepsilon _0\) such that \(\left( c-\varepsilon _0, c+\varepsilon _0 \right) \subset I\) and \(f(c)\ge f(x)\) for every \(x \in \left( c- \varepsilon _0, c+\varepsilon _0 \right) \). Fix any \(\varepsilon \in \left( 0, \varepsilon _0 \right) \) and define a function \(\varphi _{\varepsilon }: \left[ c-\varepsilon ,c \right] \rightarrow {\mathbb {R}}\) by \(\varphi _{\varepsilon } (x)=f\left( x+\varepsilon \right) -f(x)\). Then, since \(\varphi _{\varepsilon }\) is continuous,

$$\begin{aligned} \varphi _{\varepsilon }(c-\varepsilon )=f(c)-f(c-\varepsilon )\ge 0 \end{aligned}$$

and

$$\begin{aligned} \varphi _{\varepsilon }(c)=f(c+\varepsilon )-f(c)\le 0, \end{aligned}$$

we can find an \(x \in [c-\varepsilon ,c]\) such that \(\varphi _{\varepsilon }(x)=0\). This implies that \(x\le c \le x+\varepsilon \), \(x, x+\varepsilon \in I\), and \(f(x+\varepsilon )=f(x)\). \(\square \)
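Lemma 6.2 is constructive: the root of \(\varphi _{\varepsilon }\) can be located by bisection. A small sketch (the sample function and values are our own, not from the text):

```python
def find_equal_pair(f, c, eps, iters=80):
    """Bisection for x in [c - eps, c] with f(x + eps) = f(x), as in Lemma 6.2."""
    phi = lambda x: f(x + eps) - f(x)
    lo, hi = c - eps, c                 # phi(lo) >= 0 >= phi(hi) near a local max
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if phi(lo) * phi(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

f = lambda x: -(x - 1.0) ** 2           # local maximum at c = 1
x = find_equal_pair(f, 1.0, 0.3)
assert abs(f(x + 0.3) - f(x)) < 1e-12
assert x <= 1.0 <= x + 0.3              # the pair straddles the extremum
```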

Now we are ready to prove Proposition 6.1, but first let us recall that a microperiodic function is a periodic function without a smallest positive period. The Dirichlet function \({\textbf{1}}_{{\mathbb {Q}}}\) serves as an example. A folk theorem states that any continuous microperiodic function mapping \({\mathbb {R}}\) into \({\mathbb {R}}\) is constant. There are plenty of variants and generalizations of this result. Some of them rely on weakening the assumption of continuity, others on extending the class of admissible domains. Sometimes Polish mathematicians connect it with the names of Stanisław Ruziewicz and/or Antoni Łomnicki.

Proof of Proposition 6.1

By Theorems 5.1 and 5.2 we know that f is continuous. Assume that it is not strictly monotonic. Since f is continuous, it follows that it is not one-to-one and, consequently, has at least one point of local extremum. Let \(c \in (0,\infty )\) denote any such point of local extremum.

We prove that the limit \(f(0+)\) exists and \(f(0+)=f(c)\). By Lemma 6.2 we can find a positive number \(\varepsilon _0\) such that for every \(\varepsilon \in \left( 0, \varepsilon _0\right) \) there is an \(x_{\varepsilon }\in (0, \infty )\) satisfying the conditions

$$\begin{aligned} x_{\varepsilon }\le c\le x_{\varepsilon }+\varepsilon \qquad \text{ and } \qquad f\left( x_{\varepsilon }+\varepsilon \right) = f\left( x_{\varepsilon }\right) . \end{aligned}$$

If \(f(\varepsilon )=f\left( x_{\varepsilon }\right) \) for every \({\varepsilon }\) from a right vicinity of 0, then \(f(\varepsilon ) =f\left( x_{\varepsilon }\right) \rightarrow f(c)\) as \(\varepsilon \) tends to 0, so \(f(0+)\) exists and equals f(c). In the opposite case there exists a sequence \(\left( \varepsilon (n)\right) _{n \in {\mathbb {N}}}\) of numbers from \(\left( 0, \varepsilon _0\right) \), tending to 0 and such that \(f(\varepsilon (n))\not = f\left( x_{\varepsilon (n)}\right) \) for all \(n \in {\mathbb {N}}\). Then \(f(\varepsilon (n)) \not = f\left( x_{\varepsilon (n)}+\varepsilon (n)\right) \) and, using (B) for \(x=x_{\varepsilon (n)}\) and \(y=\varepsilon (n)\), we see that

$$\begin{aligned} f\left( t\left( x_{\varepsilon (n)}+\varepsilon (n) \right) \right) -f\left( tx_{\varepsilon (n)}\right) =0, \quad t \in (0, \infty ),\, n \in {{\mathbb {N}}}, \end{aligned}$$

that is

$$\begin{aligned} f(t)=f\left( \alpha _n t\right) , \qquad t \in (0, \infty ),\, n \in {{\mathbb {N}}}, \end{aligned}$$

where \(\alpha _n=\frac{x_{\varepsilon (n)}}{x_{\varepsilon (n)}+\varepsilon (n)}\). Since \({\lim \nolimits _{n\rightarrow \infty }} \alpha _n=\frac{c}{c+0}=1\), we conclude that the function \(f\circ \exp \) is microperiodic. Being continuous, it is constant, and so is f. In particular, \(f(0+)=f(c)\).

Let \(d=\sup \left\{ x \in (0, \infty ): \, f(x)=f(c)\right\} \). Since c was taken as an arbitrary point of local extremum of f, the equality \(f(0+)=f(c)\) implies that f takes exactly one local extremum value, viz. f(c). Thus, since \(f(0+)= f(c)=f(d-)\) and f is continuous, it follows that the function \(f\vert _{(0,d)}\) is constant. Suppose that \(d<\infty \). Take any \(x \in \left( d/2,d\right) \) and \(\varepsilon \in (0, d-x)\), and put \(y=x+\varepsilon \). Then \(0<x<y<d<x+y\), so \(f(x)=f(y)=f(c)\) and \(f(x+y)\not =f(c)\), whence \(f(x+y)-f(x)=\) \(f(x+y)-f(y)\not =0\). Now (B) gives

$$\begin{aligned} f\left( t\left( x+y\right) \right) -f\left( tx\right) =f\left( t\left( x+y\right) \right) -f\left( ty\right) , \end{aligned}$$

that is

$$\begin{aligned} f(ty)=f(tx) \end{aligned}$$

for every \(t>0\), hence, for every \( t > 0 \)

$$\begin{aligned} f\left( t\right) =f\left( \frac{x}{y}t\right) =f\left( \left( \frac{x}{y}\right) ^nt\right) ={\underset{n\rightarrow \infty }{\lim }}f\left( \left( \frac{x}{y}\right) ^nt\right) =f(0+). \end{aligned}$$

This means that f is constant. \(\square \)

Another useful tool in the proof of Theorem 6.4 is provided by the lemma below.

Lemma 6.3

Let \(\varrho : {\mathbb {N}} \rightarrow {\mathbb {R}}\), \(q \in {\mathbb {R}}\) and let \(g: (0,\infty ) \rightarrow {\mathbb {R}}\) be a continuous function such that

$$\begin{aligned} g(nx) = g(x)+ \varrho (n)x^q, \quad x> 0,\,\, n \in {\mathbb {N}}. \end{aligned}$$
(15)

Then there exist \(a,b \in {\mathbb {R}}\) such that either \(q=0\) and

$$\begin{aligned} g(x) = a \ln x + b, \quad x>0, \end{aligned}$$

or \(q \ne 0\) and

$$\begin{aligned} g(x) = a x^q + b, \quad x>0. \end{aligned}$$

Proof

For every \(m,n \in {\mathbb {N}}\) and every \(x>0\) we have \(g(mnx) = g(x) + \varrho (mn)x^q\) and, on the other hand,

$$\begin{aligned} g(mnx) = g(nx) + \varrho (m) (nx)^q = g(x) + \varrho (n) x^q + \varrho (m) n^q x^q. \end{aligned}$$

Consequently,

$$\begin{aligned} \varrho (mn) = \varrho (n) + \varrho (m)n^q = \varrho (m) + \varrho (n)m^q, \quad m,n \in {\mathbb {N}}. \end{aligned}$$

This implies that

$$\begin{aligned} \varrho (n) (m^q -1) = \varrho (m) (n^q - 1), \qquad m,n \in {\mathbb {N}}. \end{aligned}$$

If \(q \ne 0\) then this means that there exists a number a such that \(\varrho (n) = a(n^q - 1)\). Using equation (15) twice we obtain

$$\begin{aligned} g\left( \frac{m}{n}\, x \right) = g(x) + \bigl ( \varrho (m) - \varrho (n) \bigr ) \left( \frac{x}{n} \right) ^q = g(x) + a \left( \left( \frac{m}{n} \right) ^q - 1 \right) x^q \end{aligned}$$

for all \(n,m \in {\mathbb {N}}\) and every \(x>0\), that is

$$\begin{aligned} g\left( r x \right) = g(x) + a \left( r^q - 1 \right) x^q, \qquad x>0, \, r \in {\mathbb {Q}}_+. \end{aligned}$$

Substituting \(x=1\) and using the continuity of the function g, we now obtain that for all \(t>0\) we have \(g(t) = a t^q + (g(1) - a)\). If \(q =0\) then \(\varrho (mn) = \varrho (m) + \varrho (n)\) for all \(m,n\in {\mathbb {N}}\) and

$$\begin{aligned} g\left( \frac{m}{n}\, x \right) = g(x) + \bigl ( \varrho (m) - \varrho (n) \bigr ) \end{aligned}$$

for all \(m,n \in {\mathbb {N}}\) and every \(x>0\). Substituting \(x=1\) we obtain that \(\varrho (m) - \varrho (n) = g({m/n}) - g(1)\) for all \(m,n \in {\mathbb {N}}\) and, consequently,

$$\begin{aligned} \bigl ( g(rx) -g(1) \bigr ) = \bigl ( g(x) -g(1) \bigr ) + \bigl ( g(r) -g(1)\bigr ), \quad x> 0,\; r \in {\mathbb {Q}}_{+}. \end{aligned}$$

The continuity of the function g implies that this equality holds for all \(x,r>0\) and, consequently (see [13], Theorem 3.1.5), there exists \(a \in {\mathbb {R}}\) such that

$$\begin{aligned} g(x) - g(1) = a \ln x, \quad x>0, \end{aligned}$$

which ends the proof. \(\square \)
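Both solution families in Lemma 6.3 can be checked directly against (15); a minimal numeric sketch (the coefficients are arbitrary choices of ours):

```python
import math, random

random.seed(1)

def check(g, rho, q, trials=200):
    """Verify g(n x) = g(x) + rho(n) x**q on random samples, cf. (15)."""
    for _ in range(trials):
        x = random.uniform(0.01, 50.0)
        n = random.randint(1, 20)
        assert abs(g(n * x) - g(x) - rho(n) * x ** q) < 1e-8

a, b = 1.7, -0.4
# q = 0: g(x) = a ln x + b, with rho(n) = a ln n
check(lambda x: a * math.log(x) + b, lambda n: a * math.log(n), 0.0)
# q != 0: g(x) = a x**q + b, with rho(n) = a (n**q - 1)
q = 1.5
check(lambda x: a * x ** q + b, lambda n: a * (n ** q - 1.0), q)
print("both families satisfy (15)")
```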

Now we are able to prove the main result of this section.

Theorem 6.4

Let \(f: (0,\infty ) \rightarrow {\mathbb {R}}\) be a Lebesgue measurable or Baire measurable solution of equation  (B). Then there exist numbers \(a, b, q \in {\mathbb {R}}\) such that either

$$\begin{aligned} q=0 \quad \hbox { and } \quad f(x) = a \ln {x} + b, \quad x>0, \end{aligned}$$

or

$$\begin{aligned} q \ne 0 \quad \hbox { and } \quad f(x) = ax^q + b, \quad x>0. \end{aligned}$$

Proof

Theorems 5.1 and 5.2 imply that f is a continuous function. By Proposition 6.1 the function f is either constant or strictly monotonic. In the latter case equation (B) can be rewritten in the following way:

$$\begin{aligned} \frac{f(tx) - f(ty)}{f(x) - f(y)} = \frac{f(tx + ty) - f(ty)}{f(x+y) - f(y)}, \quad x,y,t \in (0,\infty ), x \ne y. \end{aligned}$$

Consequently, by mathematical induction, for all \(n \in {\mathbb {N}}\) we have

$$\begin{aligned} \frac{f(tx) - f(ty)}{f(x) - f(y)} = \frac{f(tx + nty) - f(ty)}{f(x+n y) - f(y)}, \quad x,y,t \in (0,\infty ),\,\, x \ne y. \end{aligned}$$
(16)

Case 1. \(f(0+) = b \in {\mathbb {R}}\). Without loss of generality, replacing f by \(f - b\) if necessary, we may assume that \(b=0\). Assume, for instance, that f is strictly increasing and \(f(r) > 0\) for every \(r>0\). By the continuity of f, letting x tend to 0 in equality (16), we see that

$$\begin{aligned} \frac{f(ty)}{f(y)} = \frac{f(nty) - f(ty)}{f(ny) - f(y)}, \qquad y,t>0, \, n \ge 2. \end{aligned}$$

Hence, after some elementary calculations, we come to the condition

$$\begin{aligned} \frac{f(nty)}{f(ty)} = \frac{f(ny)}{f(y)}, \quad y,t>0,\; n \in {\mathbb {N}}. \end{aligned}$$

Since t can be chosen arbitrarily we see that \(\frac{f(nx)}{f(x)} = \frac{f(n)}{f(1)}\) for each \(n\in {\mathbb {N}}\). Taking the logarithm of both sides of this equality we obtain

$$\begin{aligned} \ln f(nx) - \ln f(x) = \varrho (n), \quad x>0, \; n \in {\mathbb {N}}, \end{aligned}$$

where \(\varrho (n){:}{=} \ln {f(n)} - \ln {f(1)}\). By Lemma 6.3 we infer that \(\ln f(x) = q \ln x +d\), \(x>0\), for some real q and, consequently, \(f(x) = a x^q\) for \(x>0\) with \(a = \textrm{e}^d\). Undoing the initial shift, i.e. adding back \(b = f(0+)\), we obtain \(f(x) = a x^q +b\) for \(x>0\).

Case 2. \(f(0+) = \infty \). Now equation (16) can be rewritten in the form

$$\begin{aligned} \frac{f(tx + nty) - f(ty)}{f(x+ n y) - f(y)} = \frac{f(tx)}{f(x)} \frac{1 - \frac{f(ty)}{f(tx)}}{1 - \frac{f(y)}{f(x)}}, \quad x,y,t \in (0,\infty ), x \ne y. \end{aligned}$$

Letting \(x\rightarrow 0+\) we see that \({{f(ty)}/{f(tx)}} \rightarrow 0\) and \({{f(y)}/{f(x)}}\rightarrow 0\); thus both sides of this equality have limits and we obtain

$$\begin{aligned} \lim _{x \rightarrow 0+} \frac{f(tx)}{f(x)} = \frac{f(nty) - f(ty)}{f(n y) - f(y)} \end{aligned}$$

for every \(t>0\) and every \(n \ge 2\). Thus the function f is regularly varying at 0 (cf. [1], Sec. 1.4.1, or [23], Sec. 1.3). According to the basic theory of such functions there exists \(q \in {\mathbb {R}}\) such that

$$\begin{aligned} \lim _{x \rightarrow 0+} \frac{f(tx)}{f(x)} = t^q, \qquad t>0. \end{aligned}$$

Now for every \(n \in {\mathbb {N}}\) we have

$$\begin{aligned} \frac{f(nty) - f(ty)}{f(n y) - f(y)} = t^q, \quad y,t \in (0,\infty ). \end{aligned}$$

Substituting \(ty=x\) we see that

$$\begin{aligned} \frac{f(nx) - f(x)}{x^q} = \frac{f(ny) - f(y)}{y^q}, \quad x,y > 0. \end{aligned}$$

This means that for each \(n \in {\mathbb {N}}\) there exists a number \(\varrho (n)\) such that \(f(nx) = f(x) + \varrho (n) x^q\) for all \(x >0\) and a direct application of Lemma 6.3 gives \(f(x) = ax^q +b\) for some \(a,b,q \in {\mathbb {R}}\). This completes the proof. \(\square \)
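The two solution families of Theorem 6.4 can be tested against equation (B) directly; a numeric sketch (the parameter values are arbitrary):

```python
import math, random

random.seed(2)

def satisfies_B(f, trials=300):
    """Check equation (B) at random points, up to a relative tolerance."""
    for _ in range(trials):
        x, y, t = (random.uniform(0.1, 10.0) for _ in range(3))
        lhs = (f(t*(x+y)) - f(t*x)) * (f(x+y) - f(y))
        rhs = (f(t*(x+y)) - f(t*y)) * (f(x+y) - f(x))
        if abs(lhs - rhs) > 1e-7 * max(1.0, abs(lhs), abs(rhs)):
            return False
    return True

a, b = 2.3, -1.1
assert satisfies_B(lambda x: a * math.log(x) + b)     # q = 0
assert satisfies_B(lambda x: a * x ** 0.7 + b)        # q > 0
assert satisfies_B(lambda x: a * x ** (-1.3) + b)     # q < 0
assert not satisfies_B(lambda x: math.sin(x) + 2*x)   # not of either form
print("verified")
```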

7 Solving Equation (A)

Recall that the probability kernel \(\psi \) for the generalized convolution \(\diamondsuit \) is equal to the generalized characteristic function of this convolution for the point mass measure \(\delta _t\) (in particular \(\psi \not \equiv 1\)), and \(\varphi \) is the generalized characteristic function of the measure \(\mu \). Recall also that for any continuous real function f we define \(a_{f} {:}{=} \inf \{ t >0: f(t) =0\}\). Since \(a_{\psi } < \infty \) if and only if \(a_{\varphi } <\infty \), Lemma 3.1 is equivalent to the result below.

Theorem 7.1

For every generalized convolution \(\diamondsuit \) on \({\mathcal {P}}_+\), every point mass measure \(\delta _t\), \(t >0\), is stable but not strictly stable with respect to the convolution \(\diamondsuit \), with \(c(r,s) = \max \{r,s\}\) and \(d(r,s) = \min \{r,s\}\), since

$$\begin{aligned} \delta _{rt} \diamondsuit \delta _{st} = \delta _{t\max \{r,s\}} \diamondsuit \delta _{t\min \{r, s\}} \qquad r,s,t >0. \end{aligned}$$

If \(a_{\psi } < \infty \) then the point mass measures are the only stable but not strictly stable measures with respect to the generalized convolution \(\diamondsuit \) corresponding to the probability kernel \(\psi \).

By Proposition 3.7 we know that under assumptions (1) if \(a_{\psi } = \infty \) then there exists \(p>0\) such that

$$\begin{aligned} c(r,s) = (r^p + s^p )^{1/p}, \qquad r,s \ge 0. \end{aligned}$$

Recall that in this paper the function \(d: [0,\infty )^2 \rightarrow {\mathbb {R}}\) is homogeneous, \(d(0, t) = 0\) for all \(t\ge 0\), and \(d(s,t)>0\) for all \(s,t >0\).

Our aim is to find the general solution of equation (A) under assumptions (1), with \((\varphi , \psi ) \in \Phi _2\). However, in this paper we can do it only by using the solution of equation (B), and the implication \((A) \Rightarrow (B)\) (Sect. 4) was shown only under the additional assumptions described in Theorem 4.1. Consequently, our solution is not truly general.

The main result of the probabilistic part of this work, taking into account the assumptions of Theorem 4.1, is the following theorem.

Theorem 7.2

Assume that condition (1) holds and let \((\varphi ,\psi ) \in \Phi _2\) be a pair of real and symmetric functions satisfying equation (A). Then

  (a)

    \(c(r,s) = (r^p + s^p)^{1/p}\), \(r,s\ge 0\), for some \(p>0\). If, in addition, the function \( d(1,\cdot )\) is one-to-one and

    1.

      \(\varphi \) is differentiable on \((0,\infty )\) and \(\psi \) is one-to-one on \((0, \infty )\), or

    2.

      \(\psi \) is differentiable and one-to-one on \((0,\infty )\),

    then

  (b)

    there are \(A, B, p, r >0\), \(r \ne 1\), such that

    $$\begin{aligned} \varphi (x)=\textrm{e}^{-A x^{rp} - B x^p } \quad \hbox { and } \quad \psi (x)=\textrm{e}^{-A x^{rp} }, \quad x \in [0, \infty ), \end{aligned}$$

    and

    $$\begin{aligned} d(x,y)^{rp} = x^{rp}+y^{rp}- \left( x^p+y^p\right) ^{r}, \quad x, y \in [0, \infty ). \end{aligned}$$
If the function \(d(1,\cdot )\) is one-to-one, one of conditions 1. and 2. holds and \(\psi \) is a characteristic function (i.e. the corresponding measure \(\mu \) such that \(\psi = {\widehat{\mu }}\) is weakly stable and the corresponding generalized convolution is weak), then the representation (b) holds with \(r \in (0, \min \{ 1, {2/p}\}) \cup (1, {2/p}]\).

Proof

Notice that if condition (1) holds, then \(a_{\varphi } = a_{\psi } = \infty \), since otherwise, by Lemma 3.1, we would have \(\varphi _1 \equiv \varphi _2\). By Theorem 3.8 there exists \(p> 0\) such that \(c(r,s)^p = r^p + s^p\), which completes the proof of (a).

Observe also that the assumptions 1. and 2. are equivalent. The implication \(1\Rightarrow 2\), stating that differentiability of \(\varphi \) implies differentiability of \(\psi \), was already noticed in Remark 4. To see that differentiability of \(\psi \) implies differentiability of \(\varphi \) note first that if \((\varphi ,\psi ) \in \Phi _2\) then there exists \(\lambda \in {\mathcal {P}}_+\) such that

$$\begin{aligned} \varphi (t) = \int _0^{\infty } \psi (st) \lambda (ds), \quad t>0. \end{aligned}$$

Moreover, we know that \(a_{\varphi } = a_{\psi } = \infty \), \(\psi (0) = 1 = \max \{ \psi (x): x \ge 0\}\), and thus the assumption that \(\psi \) is one-to-one implies that \(\psi \) is strictly decreasing and bounded from below, so \(\varphi \) is also strictly decreasing on \((0, \infty )\) and differentiable on \((0, \infty )\). Moreover, \(\psi '\) is negative and increasing. If \(\psi \) is a characteristic function of some weakly stable measure \(\mu \), then \(g {:}{=} \lim _{x\rightarrow \infty } \psi (x) = 0\) in view of Theorem 6 in [16].

Using now Theorem 4.1 for the function \(f = h'\), where \(h(x) = \ln \varphi (x^{1/p})\), and then Theorem 6.4, we infer that there exist \(a, b, q \in {\mathbb {R}}\) such that either

$$\begin{aligned} q=0 \quad \hbox {and} \quad f(x) = a \ln x +b, \qquad x> 0, \end{aligned}$$

or

$$\begin{aligned} q\ne 0 \quad \hbox {and} \quad f(x) = a x^q +b, \qquad x> 0. \end{aligned}$$

Since \(\varphi \) is positive on \((0,\infty )\) and \(\varphi '\) is negative on \((0,\infty )\), the function f is negative on \((0,\infty )\) and, consequently, either \(q=0,\, a=0,\, b<0\), or \(q \ne 0,\, a \in {\mathbb {R}},\, b<0\). To finish the proof we consider the following three possible cases.

Case 1. If the function f is constant, say \(-f \equiv A\), then \(A >0\) and \(\varphi (t) = \exp \left( - A t^p\right) \). This implies that \(d(r,s) = 0\) for all \(r,s >0\), which means that the considered measure is strictly stable with respect to the generalized convolution, contrary to our assumptions.

Case 2. If the limit \(f(0+) = b\) exists and f is not constant, then \(a,b <0\) and \(q>0\) are such that \(f(x) = a x^q +b \) for all \(x>0\). Then

$$\begin{aligned} \varphi (t) = \exp \left( - A t^{rp} - B t^p \right) , \quad \psi (x)=\textrm{e}^{-A x^{rp}}, \qquad t>0, \end{aligned}$$

where \(A = -\frac{a}{q+1}\), \(r = q+1>1\) and \(B=-b\). If \(\psi \) is a characteristic function, then we see that \(rp\in (0, 2]\) and, consequently, \(r \in (1, {2/p}]\).

Case 3. If \(f(0 +) = - \infty \) then \(q<0\), \(a<0\), \(f(t) = a t^q + b\) for all \(t>0\) and

$$\begin{aligned} \varphi (t) = \exp \left( - A t^{rp} - B t^p \right) , \quad \psi (x)=\textrm{e}^{-A x^{rp} } \qquad t>0, \end{aligned}$$

where \(A = - \frac{a}{q+1}\), \(r = q+1\) and \(B=-b\). If \(\psi \) is a characteristic function we see that \(A>0\) and \(r p \in (0,2]\). Then \(a < 0\) and \(r \in (0, \min \{ 1, 2/p\})\).

In Cases 2 and 3 it is enough to substitute the obtained function \(\varphi \) into equation (A) to get the explicit formulas for \(\psi \) and d. \(\square \)
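As a consistency check, the representation in (b) can be tested numerically against equation (A). The sketch below uses sample parameters of our own with \(r<1\) (the regime of Case 3, where the expression under the \(rp\)-th root in the formula for d is nonnegative):

```python
import math, random

random.seed(3)
A, B, p, r = 0.8, 1.3, 1.0, 0.5    # illustrative parameters, r < 1

phi = lambda x: math.exp(-A * x**(r*p) - B * x**p)
psi = lambda x: math.exp(-A * x**(r*p))
c = lambda u, v: (u**p + v**p) ** (1.0 / p)
d = lambda u, v: (u**(r*p) + v**(r*p) - (u**p + v**p)**r) ** (1.0 / (r*p))

for _ in range(300):
    u, v, t = (random.uniform(0.05, 5.0) for _ in range(3))
    lhs = phi(u*t) * phi(v*t)                 # left-hand side of (A)
    rhs = phi(c(u, v)*t) * psi(d(u, v)*t)     # right-hand side of (A)
    assert abs(math.log(lhs) - math.log(rhs)) < 1e-9
print("representation (b) satisfies (A) for these parameters")
```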

Remark 7

Notice that Theorem 7.2 presents only solutions of the functional equation (A). We still do not know whether the resulting form of the function \(\varphi \) guarantees that \(\varphi \) is a characteristic function. In Case 2, for \(p \in (0,2]\) and \(r \in (1, {2/p}]\), every configuration of parameters \(A,B >0\) is admissible, which means that the obtained function \(\varphi \) is a characteristic function. To see this notice first that \(rp \in (0,2]\). Thus it follows from the theory of stable distributions that for each \(A>0\) the function \(\psi \) is a characteristic function. Now it is enough to notice that

$$\begin{aligned} \varphi (t) = \exp \left( - A t^{rp} - B t^p \right) = \int _0^{\infty } \textrm{e}^{-A (st)^{rp} } \lambda (ds), \end{aligned}$$

where \(\lambda = \delta _{1} *\gamma \), with \(\gamma = {\mathcal {L}}(\theta ^{1/{rp}})\) for a positive random variable \(\theta \) with the \(r^{-1}\)-stable distribution whose Laplace transform is \(\exp \left( -BA^{-{1/r}} t^{1/r} \right) \). Now it is enough to know that any scale mixture of a characteristic function with respect to a probability measure is also a characteristic function.
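The exponent bookkeeping behind this mixture can be checked directly. If \(\theta \) has the Laplace transform \(\exp \left( -BA^{-1/r}u^{1/r}\right) \), then evaluating it at \(u = A t^{rp}\) gives

$$\begin{aligned} {\mathbb {E}}\, \textrm{e}^{-A t^{rp} \theta } = \exp \bigl ( -BA^{-1/r} (A t^{rp})^{1/r} \bigr ) = \textrm{e}^{-B t^{p}}, \end{aligned}$$

since \((A t^{rp})^{1/r} = A^{1/r} t^{p}\); multiplying by the \(\delta _1\)-factor \(\textrm{e}^{-A t^{rp}}\) reproduces \(\varphi (t)\).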

Remark 8

In all other situations, i.e. if \(p>2\) or \(r<1\), not every configuration of parameters \(A,B >0\) is admissible. For a discussion of admissible and non-admissible pairs (A, B) we refer the interested reader to the papers [18] by K. Oleszkiewicz and [15] by G. Mazurkiewicz and J.K. Misiewicz.