1 Introduction

We consider a measure space \((X,{\mathcal {A}},\mu )\) and two measurable functions \(f,g:E\rightarrow {\mathbb {R}}\), for some \(E\in {\mathcal {A}}\). The aim of this paper is to characterize the property of having equal p-norms when p belongs to a sequence \(P=(p_j)_{j=1}^{\infty }\) of distinct real numbers \(p_j\ge 1\).

Before stating our main results, let us recall the standard notation for the norms in \({\mathcal {L}}^p(E)\):

$$\begin{aligned} ||f||_p=\left( \int _E|f|^p\,\mathrm {d}\mu \right) ^{\frac{1}{p}}\quad \text { if }1\le p<+\infty ,\qquad \qquad ||f||_{\infty }={\mathop {{\hbox {ess sup}}}\limits _{E}} |f|. \end{aligned}$$

Moreover, for a measurable function \(f:E\rightarrow {\mathbb {R}}\), we write

$$\begin{aligned} \begin{aligned}&\{f>\alpha \}=\{x\in E:f(x)>\alpha \},\qquad \mu (f>\alpha )=\mu (\{f>\alpha \}). \end{aligned} \end{aligned}$$

Here is our main result.

Theorem 1

Suppose \(f,g\in {\mathcal {L}}^{p}(E)\) for all \(p\ge 1\), and \(P=(p_j)_{j=1}^{\infty }\) is a sequence of distinct real numbers \(p_j\ge 1\). Suppose also that at least one of the following two conditions holds:

  a)

    P has an accumulation point in \((1,+\infty )\,\);

  b)

    \(f,g\in {\mathcal {L}}^{\infty }(E)\) and

    $$\begin{aligned} \sum _{j=1}^{\infty }\frac{p_j-1}{(p_j-1)^2+1}=+\infty . \end{aligned}$$
    (1)

Then the following two statements are equivalent:

  i)

    \(||f ||_p=||g ||_p\ \quad \text {for all } p \in P\) ;

  ii)

    \(\mu (|f|>\alpha )=\mu (|g|>\alpha )\quad \text { for all }\alpha \ge 0.\)

Observe that, if \(\lim _{j} p_j=+\infty \), then (1) is equivalent to

$$\begin{aligned} \sum _{j=1}^{\infty }\frac{1}{p_j}=+\infty . \end{aligned}$$

Notice moreover that, in this case, if we drop the boundedness hypothesis on f and g, the result is no longer true, as we will show with a counterexample in Sect. 4. Theorem 1 applies for example if

$$\begin{aligned} P=\left\{ j\in {\mathbb {N}},\; j\ge 100\right\} ,\quad P=\left\{ 1+\frac{1}{\sqrt{j}} : j\in {\mathbb {N}}{\setminus }\{0\}\right\} , \quad \text {or}\quad P=\left\{ 2+\frac{1}{j} : j\in {\mathbb {N}}{\setminus }\{0\}\right\} . \end{aligned}$$

On the contrary, the set \(P=\left\{ j^2 : j\in {\mathbb {N}}{\setminus }\{0\}\right\} \) is not admissible, since in this case the series in (1) converges.
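Although divergence of a series can of course not be certified by a finite computation, the contrast between these examples is already visible numerically. The following sketch (ours, using only the formula for the terms of (1)) compares partial sums for \(P=\{1+1/\sqrt{j}\}\) and \(P=\{j^2\}\).

```python
import math

def muntz_term(p):
    # the general term of the series (1) for a given exponent p >= 1
    return (p - 1.0) / ((p - 1.0) ** 2 + 1.0)

# P = {1 + 1/sqrt(j)}: the terms behave like 1/sqrt(j), so the partial
# sums keep growing (roughly like 2*sqrt(N))
S_small = sum(muntz_term(1.0 + 1.0 / math.sqrt(j)) for j in range(1, 10**3))
S_large = sum(muntz_term(1.0 + 1.0 / math.sqrt(j)) for j in range(1, 10**6))

# P = {j^2}: the terms behave like 1/j^2, so the partial sums stabilize
T_small = sum(muntz_term(float(j**2)) for j in range(1, 10**3))
T_large = sum(muntz_term(float(j**2)) for j in range(1, 10**6))

assert S_large - S_small > 100.0   # still growing between N = 10^3 and 10^6
assert T_large - T_small < 0.01    # essentially settled: the series converges
```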

It can be seen, as a consequence of the Chebyshev inequality, that ii) implies the identity \(||f ||_p=||g ||_p\) for all \(p \in [1,+\infty )\), for any functions \(f,g\in {\mathcal {L}}^p(E)\). Hence, the main interest of our result lies in proving that i) implies ii).

The paper is organized as follows. In Sect. 2, we recall some results in measure theory and complex analysis. Section 3 is devoted to the proof of Theorem 1 by the use of the “Full Müntz Theorem in \({\mathcal {C}}[0,1]\)”, elementary complex analysis and the Mellin transform. In Sect. 4, we construct the counterexample to the conclusion of Theorem 1 if the boundedness hypothesis on f and g is dropped in b). In the last section, we provide some complementary results and final remarks.

2 Some preliminaries for the proof

We will need the following preliminary results.

Lemma 2

Suppose \(P=(p_j)_{j=1}^{\infty }\) is a sequence of distinct real numbers \(p_j\ge 1\) satisfying (1) and having at most 1 as a finite accumulation point. If \(\varphi \in {\mathcal {L}}^1[0,1]\) satisfies

$$\begin{aligned} \int _{0}^1\varphi (x)x^{p-1}\,\mathrm {d}x=0\quad {\text { for all } } \;p\in P, \end{aligned}$$
(2)

then \(\varphi (x)=0\) for almost every \(x\in [0,1]\).

Proof

By the assumptions given on P, one of the following two cases occurs:

(b1):

P has a strictly increasing subsequence \((p_{j_k})_k\) such that

$$\begin{aligned} \displaystyle p_{j_k}\rightarrow +\infty \quad \hbox { and } \quad \sum _{k=1}^{\infty }\frac{1}{p_{j_k}}=+\infty \,; \end{aligned}$$
(b2):

P has a strictly decreasing subsequence \((p_{j_k})_k\) such that

$$\begin{aligned} p_{j_k} \rightarrow \ 1 \quad \hbox { and } \quad \sum _{k=1}^{\infty }(p_{j_k}-1)=+\infty . \end{aligned}$$

With no loss of generality, we may replace P by the subsequence \(p_{j_k}\), in either case.

Suppose that (b1) holds; then by (2) we have

$$\begin{aligned} \int _{0}^1\varphi (x)x^{p_1-1}x^{p-p_1}\,\mathrm {d}x=0. \end{aligned}$$

By the “Full Müntz Theorem in \({\mathcal {C}}[0,1]\)” given in [1, Theorem 2.1] the linear span of \(\{x^{p-p_1},p\in P\}\) is a dense subset in \({\mathcal {C}}^0[0,1]\) because in this case

$$\begin{aligned} \sum _{p\in P}\frac{p-p_1}{(p-p_1)^2+1}=+\infty , \end{aligned}$$

since, in case (b1), \(\sum _{j}1/p_j=+\infty \). Hence, for any \(h\in {\mathcal {C}}^0[0,1]\) we have:

$$\begin{aligned} \int _0^1\varphi (x)x^{p_1-1}h(x)\,\mathrm {d}x=0. \end{aligned}$$

Extend \(\varphi \) by setting it equal to 0 outside [0, 1]. Let \((\chi _{\varepsilon })_{\varepsilon }\) be a family of mollifying kernels on \({\mathbb {R}}\) (as shown for example in the article [3] or [9, Chapter 9.2]). Then, picking \(h(x)=\chi _{\varepsilon }(x-y)\) and setting \(k(x)=\varphi (x)x^{p_1-1}\), we obtain:

$$\begin{aligned} \int _0^1k(x)h(x)\,\mathrm {d}x= \int _{{\mathbb {R}}}k(x)\chi _{\varepsilon }(x-y)\,\mathrm {d}x= (k*\chi _{\varepsilon })(y)=0\qquad \text {for every }y\in {\mathbb {R}}. \end{aligned}$$

Now, by [9, Chapter 9, Theorem 9.6],

$$\begin{aligned} k*\chi _{\varepsilon }\rightarrow k\qquad \text {in }{\mathcal {L}}^1[0,1]. \end{aligned}$$

Consequently, \(k=0\) almost everywhere, and hence \(\varphi (x)=0\) for almost every \(x\in [0,1]\).

If case (b2) holds then we can define a sequence of functions \((f_j)_j\) by

$$\begin{aligned} f_j(x)=\varphi (x)x^{p_j-1}. \end{aligned}$$

The sequence \((f_j)_j\) converges to \(\varphi \) pointwise. Moreover, choosing \(\sigma (x)=|\varphi (x)|\), we obtain that \(\sigma \in {\mathcal {L}}^1\) and

$$\begin{aligned} |f_j(x)|\le \sigma (x)\qquad \text {for all } x\in [0,1] \text { and all } j\in {\mathbb {N}},j\ge 1. \end{aligned}$$

By the Dominated Convergence theorem, we obtain that

$$\begin{aligned} \int _{0}^1\varphi (x)\,\mathrm {d}x=\lim _{j\rightarrow \infty }\int _{0}^1\varphi (x)x^{p_j-1}\,\mathrm {d}x=0. \end{aligned}$$

Consequently, we can suppose, without loss of generality, that 1 belongs to P, and the proof proceeds as before, applying the “Full Müntz Theorem in \({\mathcal {C}}[0,1]\)” [1, Theorem 2.1]. \(\square \)

Remark 3

If \(f\in {\mathcal {L}}^p(E)\) for all \(p\ge 1\), let \(\mu _f:(0,+\infty )\rightarrow {\mathbb {R}}\) be defined by

$$\begin{aligned} \mu _f(t)=\mu \left( |f|>t\right) . \end{aligned}$$

It is well known that \(\mu _f\) is a monotone nonincreasing function, continuous from the right, and that

$$\begin{aligned} \int _E|f|^p\mathrm {d}\mu =p\int _{0}^{+\infty }\mu _f(t)t^{p-1}\mathrm {d}t, \end{aligned}$$

for every \(p\in [1,+\infty )\), cf. [9, Theorem 5.51]. Note in particular that \(\mu _f\in {\mathcal {L}}^1(0,+\infty )\).
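As a sanity check of the layer-cake formula above, consider a toy example (ours, not from the text): \(f(x)=x\) on \(E=[0,1]\) with the Lebesgue measure. Here \(\mu _f(t)=1-t\) for \(0<t<1\) and \(\mu _f(t)=0\) for \(t\ge 1\), and both sides of the formula equal \(1/(p+1)\). A numerical sketch:

```python
def layer_cake_rhs(p, n=200000):
    # midpoint-rule approximation of p * integral_0^1 (1 - t) * t^(p-1) dt,
    # the right-hand side of the layer-cake formula for f(x) = x on [0,1]
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += (1.0 - t) * t ** (p - 1.0)
    return p * total * h

for p in (1.0, 2.0, 3.5, 7.0):
    lhs = 1.0 / (p + 1.0)   # integral_0^1 x^p dx, computed exactly
    assert abs(layer_cake_rhs(p) - lhs) < 1e-6
```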

To deal with the first part of Theorem 1, namely when case a) holds, we have to recall some tools in complex analysis. The Mellin transform of a function \(\upsilon (t)\) is defined as

$$\begin{aligned} \{{\mathcal {M}}\upsilon \}(z) = F(z) = \int _0^{\infty }\upsilon (t) t^{z-1} \mathrm {d}t, \quad z\in {\mathbb {C}}, \end{aligned}$$

whenever the integral exists for at least one value \(z_0\) of z (cf. [8, 10, 11]).
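A classical example, recalled here only as an illustration: the Mellin transform of \(\upsilon (t)=\mathrm{e}^{-t}\) is the Gamma function, \(\{{\mathcal {M}}\upsilon \}(z)=\Gamma (z)\). A short numerical sketch for real \(z\ge 1\):

```python
import math

def mellin_exp(z, upper=50.0, n=200000):
    # midpoint-rule approximation of integral_0^upper e^{-t} t^{z-1} dt,
    # i.e. the Mellin transform of e^{-t} at a real point z >= 1;
    # the tail beyond `upper` is negligible for the moderate z used below
    h = upper / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.exp(-t) * t ** (z - 1.0)
    return total * h

for z in (1.0, 2.0, 3.5, 5.0):
    assert abs(mellin_exp(z) - math.gamma(z)) < 1e-4
```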

Lemma 4

Let \(\upsilon :[0,+\infty )\rightarrow {\mathbb {R}}\) be a function such that

$$\begin{aligned} \upsilon (t)\, t^{z-1} \in {\mathcal {L}}^1([0,+\infty ))\qquad \text {for all }\,z\ge 1. \end{aligned}$$

Then \({\mathcal {M}}\upsilon \) is analytic in \(S=\{w\in {\mathbb {C}}: \mathfrak {R}(w)>1\}.\)

Proof

Let \(\gamma :[0,1]\rightarrow {\mathbb {C}}\) be a triangular path in S, and let \(\Phi (s,z)=\upsilon (s)s^{z-1}\). Then,

$$\begin{aligned} \int _{\gamma } F=\int _{\gamma }\left( \int _0^{+\infty }\Phi (s,z)\,\mathrm {d}s\right) \mathrm {d}z=\int _0^1\left( \int _0^{+\infty }\Phi (s,\gamma (t))\gamma '(t)\,\mathrm {d}s\right) \mathrm {d}t, \end{aligned}$$

with \(\gamma '(t)\) defined for all but three points \(t\in [0,1]\). Observe that

$$\begin{aligned} \int _{\gamma } |F|\le \int _0^1\left( \int _0^{+\infty }|\Phi (s,\gamma (t))\gamma '(t)|\,\mathrm {d}s\right) \mathrm {d}t\le \int _0^1\left( \int _0^{+\infty }|\upsilon (s)||s^{\gamma (t)-1}|\,|\gamma '(t)|\,\mathrm {d}s\right) \mathrm {d}t. \end{aligned}$$

Since \(\gamma \) is a triangle, \(|\gamma '(t)|\) is constant on every side, and so there exists \(C_1\) such that \(|\gamma '(t)|<C_1\) for all \(t\in [0,1]\) where the tangent vector is defined. Let \(R>0\) be such that \({{\,\mathrm{Supp}\,}}(\gamma )\subseteq B(0,R)\). Then,

$$\begin{aligned} |s^{\gamma (t)-1}|=s^{\mathfrak {R}[\gamma (t)]-1}\le 1+s^{R+1}, \end{aligned}$$

since the exponent \(\mathfrak {R}[\gamma (t)]-1\) lies in \((0,R+1)\), and so

$$\begin{aligned} \int _0^1\left( \int _0^{+\infty }|\upsilon (s)||s^{\gamma (t)-1}|\,|\gamma '(t)|\,\mathrm {d}s\right) \mathrm {d}t\le C_1\int _0^1\left( \int _0^{+\infty }|\upsilon (s)|\left( 1+s^{R+1}\right) \,\mathrm {d}s\right) \mathrm {d}t. \end{aligned}$$

By hypothesis, \(\upsilon (s)s^p\) is Lebesgue integrable for all \(p\ge 0\), so

$$\begin{aligned} C_1\int _0^1\left( \int _0^{+\infty }|\upsilon (s)|\left( 1+s^{R+1}\right) \,\mathrm {d}s\right) \mathrm {d}t\le C_1 C_R <+\infty . \end{aligned}$$

Then, by Fubini-Tonelli Theorem,

$$\begin{aligned} \int _{\gamma } F\!=\!\int _{\gamma }\left( \int _0^{+\infty }\Phi (s,z)\,\mathrm {d}s\right) \mathrm {d}z\!=\! \int _0^{+\infty }\left( \int _{\gamma }\Phi (s,z)\,\mathrm {d}z\right) \mathrm {d}s= \int _0^{+\infty }\left( \int _{\gamma }\upsilon (s)s^{z-1}\,\mathrm {d}z\right) \mathrm {d}s. \end{aligned}$$

But now \(\upsilon (s)s^{z-1}\) is a holomorphic function of z, so by the Cauchy integral theorem

$$\begin{aligned} \int _{\gamma }\upsilon (s)s^{z-1}\,\mathrm {d}z=0, \end{aligned}$$

and then

$$\begin{aligned} \int _{\gamma } F=0, \end{aligned}$$

for every triangular path. Consequently, by Morera’s theorem for triangles (see for example [2]), F is holomorphic on \(\{w\in {\mathbb {C}}: \mathfrak {R}(w)>1\}.\)\(\square \)

3 Proof of Theorem 1

Since f and g belong to \({\mathcal {L}}^p(E)\) for every \(p\in [1,+\infty )\), the functions

$$\begin{aligned} t\rightarrow \mu \left( |f|>t^{\frac{1}{p}}\right) \qquad \text { and }\qquad t\rightarrow \mu \left( |g|>t^{\frac{1}{p}}\right) \end{aligned}$$

are finite almost everywhere, the function

$$\begin{aligned} \begin{aligned} {\mathcal {I}}(p)=&\int _E\left[ \,|f|^p-|g|^p\,\right] \,\mathrm {d}\mu \end{aligned} \end{aligned}$$

is well defined and finite for every \(p\in [1,+\infty )\), and

$$\begin{aligned} \begin{aligned} {\mathcal {I}}(p)=&\int _E\big (|f|^p-|g|^p\big )\mathrm {d}\mu \\ =&\int _0^{+\infty }\big [\mu (|f|^p>t)-\mu (|g|^p>t)\big ]\mathrm {d}t\!=\! \int _0^{+\infty }\left[ \mu \left( |f|>t^{\frac{1}{p}}\right) \!-\!\mu \left( |g|>t^{\frac{1}{p}}\right) \right] \mathrm {d}t. \end{aligned} \end{aligned}$$

Substituting \(z=t^{\frac{1}{p}}\), the integral becomes

$$\begin{aligned} \begin{aligned}&{\mathcal {I}}(p)=p\int _0^{+\infty }\big [\mu (|f|>z)-\mu (|g|>z)\big ]z^{p-1}\, \mathrm {d}z=p\int _0^{+\infty }\varphi (z)z^{p-1}\, \mathrm {d}z, \end{aligned} \end{aligned}$$
(3)

where

$$\begin{aligned} \varphi (z)=\mu (|f|>z)-\mu (|g|>z). \end{aligned}$$

Notice that \(\varphi :[0,{+\infty })\rightarrow {\mathbb {R}}\) is continuous from the right and that

$$\begin{aligned} ||f ||_p=||g ||_p\; \Leftrightarrow \;{\mathcal {I}}(p)=0. \end{aligned}$$

Suppose a) holds. Since \(\varphi :[0,+\infty )\rightarrow {\mathbb {R}}\) is the difference of two monotone functions, it is differentiable almost everywhere, and hence continuous almost everywhere. Moreover, it is of bounded variation on \([a,+\infty )\) for all \(a>0\), and hence of bounded variation in a neighborhood of each \(y\in (0,+\infty )\). Notice that the integral on the right-hand side of (3) is the Mellin transform of \(\varphi ,\) hence

$$\begin{aligned} \begin{aligned} {\mathcal {I}}(p)=0\;\Leftrightarrow \;\{{\mathcal {M}}\varphi \}(p)=0. \end{aligned} \end{aligned}$$

By [10, Chapter 6.9, Theorem 28], for every \(c\in (1,+\infty )\),

$$\begin{aligned} \frac{1}{2}\left[ \varphi (x+0)+\varphi (x-0)\right] =\frac{1}{2\pi i}\lim _{T\rightarrow +\infty }\int _{c-iT}^{c+iT}\{{\mathcal {M}}\varphi \}(p)x^{-p}\, \mathrm {d}p, \end{aligned}$$
(4)

where

$$\begin{aligned} \varphi (x+0)=\lim _{t\rightarrow x^+}\varphi (t)\qquad \text { and }\qquad \varphi (x-0)=\lim _{t\rightarrow x^-}\varphi (t). \end{aligned}$$

By Lemma 4, \({\mathcal {M}}\varphi \) is holomorphic on \(\{w\in {\mathbb {C}}: \mathfrak {R}(w)>1\}.\) But

$$\begin{aligned} \{{\mathcal {M}}\varphi \}(p)=0 \qquad \text {for all }\ p\in P, \end{aligned}$$

and, by a), P has an accumulation point in \(\{w\in {\mathbb {C}}: \mathfrak {R}(w)>1\}.\) Then, by the identity theorem of complex analytic functions,

$$\begin{aligned} {\mathcal {M}}\varphi \equiv 0 \qquad \text {on }\{w\in {\mathbb {C}}: \mathfrak {R}(w)>1\}. \end{aligned}$$

The inversion formula (4) then becomes

$$\begin{aligned} \frac{1}{2}\left[ \varphi (x+0)+\varphi (x-0)\right] =\frac{1}{2\pi i}\lim _{T\rightarrow +\infty }\int _{c-iT}^{c+iT}0\cdot \, x^{-p}\, \mathrm {d}p=0. \end{aligned}$$

Since \(\varphi \) is continuous almost everywhere, we have that \(\varphi (x)=0\) for almost every x. The conclusion easily follows.

Assume now that b) holds and that P has no accumulation points in \((1,+\infty )\). If \(||f ||_{\infty }=0\) or \(||g ||_{\infty }=0,\) then either of i) and ii) implies that \(f=g=0\) almost everywhere, and the result is achieved. Without loss of generality, we can suppose \(||f ||_{\infty }\le ||g ||_{\infty }=1\). Indeed,

$$\begin{aligned} \begin{aligned} ||f ||_p=||g ||_p&\Leftrightarrow ||f ||_p^p=||g ||_p^p\\&\Leftrightarrow \int _E||g ||_{\infty }^p\left( \frac{|f|}{||g ||_{\infty }}\right) ^p\mathrm {d}\mu = \int _E||g ||_{\infty }^p\left( \frac{|g|}{||g ||_{\infty }}\right) ^p\mathrm {d}\mu \\&\Leftrightarrow \int _E\left( \frac{|f|}{||g ||_{\infty }}\right) ^p\mathrm {d}\mu = \int _E\left( \frac{|g|}{||g ||_{\infty }}\right) ^p\mathrm {d}\mu . \end{aligned} \end{aligned}$$

In this case,

$$\begin{aligned} {\mathcal {I}}(p)=p\int _0^{1}\varphi (z)z^{p-1}\, \mathrm {d}z, \end{aligned}$$

and \(\varphi \in {\mathcal {L}}^1[0,1]\). By Lemma 2, we have that \({\mathcal {I}}(p)=0\) for all \(p\in P\) if and only if \(\varphi (z)=0\) for almost every \(z\in [0,1]\). By the right continuity, we conclude. \(\square \)
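The implication from ii) to i) can be illustrated on a toy example (ours, not from the paper): \(f(x)=x\) and \(g(x)=1-x\) on [0, 1] with the Lebesgue measure have the same distribution function, hence equal p-norms for every p. A numerical sketch:

```python
n = 100000
h = 1.0 / n
xs = [(i + 0.5) * h for i in range(n)]   # midpoint grid on [0,1]
f = xs
g = [1.0 - x for x in xs]

def pnorm(values, p):
    # midpoint-rule approximation of (integral_0^1 |v(x)|^p dx)^(1/p)
    return (sum(abs(v) ** p for v in values) * h) ** (1.0 / p)

# equidistributed functions have equal p-norms
for p in (1.0, 2.0, 5.0):
    assert abs(pnorm(f, p) - pnorm(g, p)) < 1e-9
```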

4 Construction of the counterexample

In this section, we show that, in general, the boundedness hypothesis on f and g in Theorem 1, when b) holds, cannot be removed. In the first part, we give some definitions to set the problem in a more general frame; then we develop the counterexample. Precisely, we first build a continuous function \(\varphi \), defined on the positive real semiaxis, which is orthogonal to every monomial (and hence, by linearity, to every polynomial). Then, we prove that this function is of bounded variation on \([0,+\infty )\), so that it can be written as the difference of two strictly decreasing functions; their inverses are the functions we are looking for. To conclude, we show, as corollaries of independent interest, that by slightly modifying this function \(\varphi \) we can make it arbitrarily regular, and that it can be made orthogonal to every rational power of x with fixed denominator. For an in-depth analysis of this argument, see e.g., [5, 7].

Lemma 5

The function \(\varphi :[0,+\infty )\rightarrow {\mathbb {R}}\) defined as

$$\begin{aligned} \varphi (x)=\mathrm{e}^{-\root 4 \of {x}}\sin \left( \root 4 \of {x}\right) \end{aligned}$$

is such that

$$\begin{aligned} \int _{0}^{+\infty }x^n\varphi (x)\,\mathrm {d}x=0\qquad {\text { for all } } n\in {\mathbb {N}}. \end{aligned}$$

Proof

This result was already known to Stieltjes, who gave it in his work on continued fractions [6, page 105]. For the convenience of the reader, we sketch a self-contained proof relying only on elementary complex analysis.

Set

$$\begin{aligned} I_n=\int _{0}^{+\infty }x^n \mathrm{e}^{-(1-i)x} \mathrm {d}x. \end{aligned}$$

Since \(|x^n \mathrm{e}^{-(1-i)x}|=x^n \mathrm{e}^{-x}\), the integral \(I_n\) is well defined for all n. Moreover, letting \(z=1-i\) and performing the change of variables \(zx=y\), we obtain:

$$\begin{aligned} I_n=\int _0^{+\infty }x^n \mathrm{e}^{-(1-i)x} \, \mathrm {d}x= \int _{0}^{+\infty }x^n \mathrm{e}^{-zx} \, \mathrm {d}x= z^{-n-1}\int _{\gamma }y^n \mathrm{e}^{-y} \, \mathrm {d}y, \end{aligned}$$

where \(\gamma \) is the half-line starting at the origin and containing the point \(1-i\). Consider the triangular path \(T_N\) in the complex plane joining the points 0, N, \(N-iN\). Since \(y^n \mathrm{e}^{-y}\) is analytic on the interior of \(T_N\), the integral along \(T_N\) is 0. Moreover:

$$\begin{aligned} \Bigg |\int _{N}^{N-iN}y^n \mathrm{e}^{-y} \, \mathrm {d}y \Bigg |= \Bigg |\int _{0}^{N} (N-it)^n \mathrm{e}^{-N+it} \, \mathrm {d}t\Bigg |\le \int _{0}^{N} |N^2+t^2|^{\frac{n}{2}} \mathrm{e}^{-N} \, \mathrm {d}t\rightarrow 0, \end{aligned}$$

and so

$$\begin{aligned} \int _{0}^{N}y^n \mathrm{e}^{-y} \, \mathrm {d}y+ \int _{N}^{N-iN}y^n \mathrm{e}^{-y} \, \mathrm {d}y+ \int _{N-iN}^{0}y^n \mathrm{e}^{-y} \, \mathrm {d}y=0. \end{aligned}$$

Then, passing to the limit as N tends to \(+\infty \), the first term tends to \(\Gamma (n+1)\), the second term tends to 0, and the third tends to \(-z^{n+1}I_n\); hence:

$$\begin{aligned} I_n=z^{-n-1}\Gamma (n+1)=z^{-n-1}n!. \end{aligned}$$

Then,

$$\begin{aligned} \begin{aligned} I_n&=n!\cdot (1-i)^{-n-1}=n!\cdot (1+i)^{n+1}\cdot 2^{-n-1}=\\&=n!\cdot \left[ \frac{(1+i)}{\sqrt{2}}\right] ^{n+1}\cdot 2^{-n-1}\cdot 2^{\frac{n+1}{2}}=n! \cdot \mathrm{e}^{\frac{(n+1)i\pi }{4}}\cdot 2^{-\frac{n+1}{2}}. \end{aligned} \end{aligned}$$

So,

$$\begin{aligned} \begin{aligned}&I_{4p+3}\in {\mathbb {R}}\quad \text {for all }p\in {\mathbb {N}}, \end{aligned} \end{aligned}$$

and then

$$\begin{aligned} \begin{aligned}&\mathfrak {I}\left( I_{4p+3}\right) =0\quad \text {for all }p\in {\mathbb {N}}, \end{aligned} \end{aligned}$$

so that

$$\begin{aligned} 0=\mathfrak {I}(I_{4p+3})=\int _{0}^{+\infty }x^{4p+3}\mathrm{e}^{-x} \mathfrak {I}(\mathrm{e}^{ix}) \, \mathrm {d}x=\int _{0}^{+\infty }x^{4p+3}\mathrm{e}^{-x} \sin (x)\, \mathrm {d}x\quad \text {for all }\ p\in {\mathbb {N}}. \end{aligned}$$

Letting \(x=u^{\frac{1}{4}},\) we arrive at

$$\begin{aligned} \int _{0}^{+\infty }u^{p}\mathrm{e}^{-\root 4 \of {u}}\sin ({\root 4 \of {u}}) \, \mathrm {d}u=0\quad \text {for all }\ p\in {\mathbb {N}}. \end{aligned}$$

The function

$$\begin{aligned} \varphi (x)= \mathrm{e}^{-\root 4 \of {x}}\sin ({\root 4 \of {x}}) \end{aligned}$$

has the requested properties. \(\square \)
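The vanishing of these oscillatory moments can be cross-checked numerically. The following sketch (ours, not part of the argument) approximates \(\int _0^{+\infty }x^{4p+3}\mathrm{e}^{-x}\sin (x)\,\mathrm {d}x\) for the first few values of p, normalizing by \(n!\), the natural size of the integrand:

```python
import math

def moment(n, upper=80.0, steps=400000):
    # midpoint-rule approximation of integral_0^upper x^n e^{-x} sin(x) dx;
    # the integrand is negligible beyond `upper` for the small n used below
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x ** n * math.exp(-x) * math.sin(x)
    return total * h

for p in (0, 1, 2):
    n = 4 * p + 3
    # the normalized moment should vanish up to quadrature error
    assert abs(moment(n)) / math.gamma(n + 1) < 1e-5
```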

Lemma 6

The function \(\varphi \) defined in Lemma 5 belongs to \(BV([0,+\infty ))\).

Proof

Observe preliminarily that \(\varphi (0)=0\) and \(\varphi \) tends to 0 at infinity; moreover,

$$\begin{aligned} \varphi '(x)=\frac{\mathrm{e}^{-\root 4 \of {x}} \cos \left( \root 4 \of {x}\right) }{4 x^{3/4}}-\frac{\mathrm{e}^{-\root 4 \of {x}} \sin \left( \root 4 \of {x}\right) }{4 x^{3/4}}= \frac{\sqrt{2}\mathrm{e}^{-\root 4 \of {x}}}{4 x^{3/4}} \sin \left( \frac{\pi }{4}-\root 4 \of {x}\right) , \end{aligned}$$

and so

$$\begin{aligned} \varphi '(x)=0\;\Leftrightarrow \; \frac{\sqrt{2}\mathrm{e}^{-\root 4 \of {x}}}{4 x^{3/4}} \sin \left( \frac{\pi }{4}-\root 4 \of {x}\right) =0 \;\Leftrightarrow \; \root 4 \of {x}=\frac{\pi }{4}+k\pi \quad \text {for } k\in {\mathbb {N}}. \end{aligned}$$

The second derivative of \(\varphi \) is given by

$$\begin{aligned} \varphi ''(x)=\frac{\mathrm{e}^{-\root 4 \of {x}} \left( 3 \sin \left( \root 4 \of {x}\right) -\left( 2 \root 4 \of {x}+3\right) \cos \left( \root 4 \of {x}\right) \right) }{16 \,x^{7/4}}. \end{aligned}$$

Letting

$$\begin{aligned} x_n=\left( \frac{\pi }{4}+n\pi \right) ^4, \end{aligned}$$

we see that \((\varphi ''(x_n))_n\) has alternating signs, since

$$\begin{aligned} \varphi ''(x_n)=(-1)^{n+1}\frac{256 \sqrt{2} \mathrm{e}^{\pi \left( -\left( n+\frac{1}{4}\right) \right) }}{(4 \pi n+\pi )^6}. \end{aligned}$$

So, the total variation of \(\varphi \) on \([x_0,+\infty )\) is the sum of the variations between consecutive stationary points:

$$\begin{aligned} \begin{aligned} V_{[x_0,+\infty )}(\varphi )&= \sum _{n\ge 0}|\varphi (x_{n+1})-\varphi (x_{n})|\\&=\sum _{n\in 2{\mathbb {N}}}|\varphi (x_{n+1})-\varphi (x_{n})|+\sum _{n\in 2{\mathbb {N}}+1}|\varphi (x_{n+1})-\varphi (x_{n})| \\&=\sum _{n\in {\mathbb {N}}}|\varphi (x_{2n+1})-\varphi (x_{2n})|+\sum _{n\in {\mathbb {N}}}|\varphi (x_{2n+2})-\varphi (x_{2n+1})| \\&=\sum _{n\in {\mathbb {N}}}\Bigg |-\frac{\mathrm{e}^{-\frac{1}{4} \pi (8 n+5)}}{\sqrt{2}}-\frac{\mathrm{e}^{-\frac{1}{4} \pi (8 n+1)}}{\sqrt{2}}\Bigg |+\sum _{n\in {\mathbb {N}}}\Bigg |\frac{\mathrm{e}^{-\frac{1}{4} \pi (8 n+9)}}{\sqrt{2}}+\frac{\mathrm{e}^{-\frac{1}{4} \pi (8 n+5)}}{\sqrt{2}}\Bigg |\\&=\frac{\mathrm{e}^{-\frac{\pi }{4}} \left( \mathrm{e}^{\pi }+1\right) }{\sqrt{2} \left( \mathrm{e}^{\pi }-1\right) }. \end{aligned} \end{aligned}$$

Since \(\varphi \) is monotone on \([0,x_0]\), with variation \(\varphi (x_0)-\varphi (0)=\mathrm{e}^{-\pi /4}/\sqrt{2}\), the total variation of \(\varphi \) on \([0,+\infty )\) is finite as well, and so \(\varphi \in BV([0,+\infty ))\). \(\square \)
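The value of the series of variations between the stationary points can be cross-checked numerically against the closed form above; a short sketch (ours):

```python
import math

def phi(x):
    r = x ** 0.25
    return math.exp(-r) * math.sin(r)

# stationary points x_n = (pi/4 + n*pi)^4; the terms of the series decay
# like e^{-2*pi*n}, so 60 of them are far more than enough
x = [(math.pi / 4 + n * math.pi) ** 4 for n in range(61)]
series = sum(abs(phi(x[n + 1]) - phi(x[n])) for n in range(60))

closed = (math.exp(-math.pi / 4) * (math.exp(math.pi) + 1)
          / (math.sqrt(2) * (math.exp(math.pi) - 1)))
assert abs(series - closed) < 1e-12
```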

We are now ready to construct the counterexample. Define

$$\begin{aligned} \phi (t)=P(t,+\infty )+\frac{1}{(t+1)},\qquad \psi (t)=N(t,+\infty )+\frac{1}{(t+1)}, \end{aligned}$$

where \(P(t,+\infty )\) and \(N(t,+\infty )\) are, respectively, the positive and the negative variation of \(\varphi \) on \((t,+\infty )\). The functions \(\phi \) and \(\psi \) are positive, strictly decreasing, bounded, and achieve their maximum at 0. Moreover,

$$\begin{aligned} \lim _{t\rightarrow +\infty }\phi (t)=\lim _{t\rightarrow +\infty }\psi (t)=0, \end{aligned}$$

and

$$\begin{aligned} \phi (t)-\psi (t)=\varphi (t). \end{aligned}$$

Restricting the codomain of \(\phi \) to \((0,\phi (0)]\) and that of \(\psi \) to \((0,\psi (0)]\,\), we obtain two invertible functions

$$\begin{aligned} {\hat{\phi }}:[0,+\infty )\rightarrow (0,\phi (0)],\qquad {\hat{\psi }}:[0,+\infty )\rightarrow (0,\psi (0)].\\ \end{aligned}$$

Moreover, their inverses are also nonnegative decreasing functions. Define

$$\begin{aligned} f={\hat{\phi }}^{-1}:(0,\phi (0)]\rightarrow [0,+\infty ),\qquad g={\hat{\psi }}^{-1}:(0,\psi (0)]\rightarrow [0,+\infty ),\\ \end{aligned}$$

and notice that

$$\begin{aligned} \lim _{x\rightarrow 0^+}f(x)=\lim _{x\rightarrow 0^+}g(x)=+\infty . \end{aligned}$$

Extend f and g to all of \({\mathbb {R}}\) by setting them equal to 0 outside their domains, and call the extensions \({\tilde{f}}\) and \({\tilde{g}}\). These are the functions we are looking for: indeed, \(\mu (|{\tilde{f}}|>\alpha )-\mu (|{\tilde{g}}|>\alpha )=\phi (\alpha )-\psi (\alpha )=\varphi (\alpha )\), so that, by Lemma 5 and formula (3), \(||{\tilde{f}} ||_p=||{\tilde{g}} ||_p\) for every integer \(p\ge 1\). We now prove that \(\mu (|{\tilde{f}}|>\alpha )\) and \(\mu (|{\tilde{g}}|>\alpha )\) cannot coincide for almost every \(\alpha \ge 0\). By contradiction, suppose that

$$\begin{aligned} \mu (|{\tilde{f}}|>\alpha )=\mu (|{\tilde{g}}|>\alpha )\qquad \text { for a.e. }\alpha \ge 0 . \end{aligned}$$

Since \({\tilde{f}}\) and \({\tilde{g}}\) are nonnegative, and their superlevel sets coincide with those of f and g, we have

$$\begin{aligned} \mu (f>\alpha )=\mu (g>\alpha )\qquad \text { for a.e. }\alpha \ge 0 . \end{aligned}$$

But f and g are strictly decreasing, so \(\{f>\alpha \}=(0,f^{-1}(\alpha ))\) and \(\{g>\alpha \}=(0,g^{-1}(\alpha ))\), hence

$$\begin{aligned} f^{-1}(\alpha )=g^{-1}(\alpha )\qquad \text { for a.e. }\alpha \ge 0 . \end{aligned}$$

Since \(f^{-1}=\phi \) and \(g^{-1}=\psi \),

$$\begin{aligned} \phi (\alpha )=\psi (\alpha )\qquad \text { for a.e. }\alpha \ge 0 . \end{aligned}$$

Recall then the definition of \(\phi \) and \(\psi \) to obtain

$$\begin{aligned} P(\alpha ,+\infty )+\frac{1}{(\alpha +1)}=N(\alpha ,+\infty )+\frac{1}{(\alpha +1)}\qquad \text { for a.e. }\alpha \ge 0, \end{aligned}$$

so

$$\begin{aligned} P(\alpha ,+\infty )=N(\alpha ,+\infty )\qquad \text { for a.e. }\alpha \ge 0, \end{aligned}$$

and then

$$\begin{aligned} \varphi (\alpha )=P(\alpha ,+\infty )-N(\alpha ,+\infty )=0\qquad \text { for a.e. }\alpha \ge 0, \end{aligned}$$

a contradiction, since \(\varphi \) is continuous and not identically equal to 0. The proof is then completed. \(\square \)

In the following corollary, we want to extend Lemma 5 to find a continuous function orthogonal to every fractional power of x with fixed denominator.

Corollary 7

Fix \(q\in {\mathbb {N}}{\setminus }\{0\}\). There exists a continuous function \(\varphi _q:(0,+\infty )\rightarrow {\mathbb {R}}\), not identically equal to 0, such that

$$\begin{aligned} \int _{0}^{+\infty }x^{\frac{n}{q}}\varphi _q(x)\,\mathrm {d}x=0\qquad {\text { for all } } n\in {\mathbb {N}}. \end{aligned}$$

Proof

Define \(I_n\) as before. We have

$$\begin{aligned} \int _{0}^{+\infty }x^{4p+3}\mathrm{e}^{-x} \sin (x) \, \mathrm {d}x=0\quad \text {for all }p\in {\mathbb {N}}. \end{aligned}$$

Letting \(x=u^{\frac{1}{4q}}\), we arrive at

$$\begin{aligned} \int _{0}^{+\infty }u^{\frac{p}{q}}\mathrm{e}^{-\root 4q \of {u}}\sin ({\root 4q \of {u}}) \,u^{\frac{1-q}{q}} \, \mathrm {d}u=0\quad \text {for all }p\in {\mathbb {N}}. \end{aligned}$$

The function

$$\begin{aligned} \varphi _q(x)=\mathrm{e}^{-\root 4q \of {x}}\sin ({\root 4q \of {x}})\, x^{\frac{1-q}{q}} \end{aligned}$$

is the one we were looking for. \(\square \)

The aim of the subsequent lemma is to show that, if we multiply the functions \(\varphi (x)\) and \(\varphi _q(x)\), found, respectively, in Lemma 5 and Corollary 7, by a suitable power of x, we obtain two new functions that maintain the same orthogonality property but are arbitrarily regular. We achieve this result by applying Faà di Bruno’s formula.

Lemma 8

Let \(w\in {\mathcal {C}}^{\infty }({\mathbb {R}})\) with \(w(0)=0\), and let \(0<\alpha <1\). Then the function \(g_n:[0,+\infty )\rightarrow {\mathbb {R}}\),

$$\begin{aligned} g_n(x)=x^nw(x^{\alpha }), \end{aligned}$$

is of class \({\mathcal {C}}^n\) on \([0,+\infty )\), with \(g_n^{(j)}(0)=0\) for all \(j=0,1,2,\dots ,n\).

Proof

A central tool of this proof will be Faà di Bruno’s formula that we will recall briefly. Let w and u be \({\mathcal {C}}^m\) real-valued functions such that the composition \(w\circ u\) is defined; then \((w\circ u)(x)\) is of class \({\mathcal {C}}^m\) and for \(x>0\) we have

$$\begin{aligned} (w\circ u)^{(j)}(x)=j!\sum _{k=1}^j\Bigg [\frac{w^{(k)}(u(x))}{k!}\sum _{h_1+\cdots + h_k=j}\frac{u^{(h_1)}(x)}{h_1!}\cdots \frac{u^{(h_k)}(x)}{h_k!}\Bigg ], \end{aligned}$$

or

$$\begin{aligned} (w\circ u)^{(j)}(x)&=\sum _ {k_1+2k_2+\cdots +jk_j=j} \frac{j!}{k_1!\cdots k_j!}\, w^{(k_1+\cdots +k_j)}(u(x))\cdot \left( \frac{u'(x)}{1!}\right) ^{k_1}\\&\quad \cdot \left( \frac{u''(x)}{2!}\right) ^{k_2}\cdots \left( \frac{u^{(j)}(x)}{j!}\right) ^{k_j}. \end{aligned}$$

For a proof of this formula look at [4]. We have that

$$\begin{aligned} g^{(j)}_n(x)=\sum _{h=0}^j\left( {\begin{array}{c}j\\ h\end{array}}\right) n(n-1)\cdots (n-h+1)x^{n-h}\left[ w(x^{\alpha })\right] ^{(j-h)}, \end{aligned}$$

and each term of this sum is of the form

$$\begin{aligned} C\,x^{n-h}[w(x^{\alpha })]^{(j-h)}, \end{aligned}$$
(5)

where C is a real number depending on j, h and n. Now we use Faà di Bruno’s formula to express the derivatives of \(w(x^{\alpha })\). In our case \(u(x)=x^{\alpha }\) and so

$$\begin{aligned} u^{(h)}(x)=(\alpha )_h\, x^{\alpha -h} \qquad \text {where}\qquad (\alpha )_h=\alpha (\alpha -1)\cdots (\alpha -h+1). \end{aligned}$$

Consequently

$$\begin{aligned} u^{(h_1)}(x)\cdots u^{(h_k)}(x)= (\alpha )_{h_1}(\alpha )_{h_2}\cdots (\alpha )_{h_k} x^{\alpha -h_1}x^{\alpha -h_2}\cdots x^{\alpha -h_k}, \end{aligned}$$

and if \(h_1+\cdots +h_k=j,\)

$$\begin{aligned} u^{(h_1)}(x)\cdots u^{(h_k)}(x)=C(h_1,\dots h_k)x^{k\alpha -j}. \end{aligned}$$

So, applying Faà di Bruno’s formula to (5), each term has the form

$$\begin{aligned} x^{n-h}\sum _{k=1}^{j-h}C_k w^{(k)}(x^{\alpha })\cdot x^{k\alpha -(j-h)}=\sum _{k=1}^{j-h}C_k w^{(k)}(x^{\alpha })\cdot x^{k\alpha +n-j}. \end{aligned}$$

To conclude, observe that, for \(j\le n\),

$$\begin{aligned} g_n^{(j)}(x)=\sum _{k=0}^{j}C'_k w^{(k)}(x^{\alpha })\, x^{k\alpha +n-j}, \end{aligned}$$

where the \(k=0\) term comes from \(h=j\) in the Leibniz expansion. When \(j<n\) every exponent is positive, while for \(j=n\) the \(k=0\) term is \(n!\,w(x^{\alpha })\), which tends to \(n!\,w(0)=0\). Hence \(g_n^{(j)}(x)\rightarrow 0\) as \(x\rightarrow 0^+\), and we conclude by applying the theorem on the limit of the derivative. \(\square \)

As a consequence of Lemma 8, we have that the function \(\varphi :[0,+\infty )\rightarrow {\mathbb {R}}\) in the statement of Lemma 5 can be chosen to be arbitrarily regular (but not \({\mathcal {C}}^{\infty }\)). For example, taking

$$\begin{aligned} \varphi (x)= \mathrm{e}^{-\root 4 \of {x}}\sin ({\root 4 \of {x}})x^n,\qquad n\in {\mathbb {N}}, \end{aligned}$$

by Lemma 8, choosing \(w(x)=\mathrm{e}^{-x}\sin (x)\) and \(\alpha =\frac{1}{4}\), we see that \(\varphi (x)\) is of class \({\mathcal {C}}^n\). The same reasoning, with the same w and \(\alpha =\frac{1}{4q}\), allows us to conclude that the function \(\varphi _q\) also becomes of class \({\mathcal {C}}^n\) when multiplied by \(x^{n+1}\).

5 Final remarks

As a direct consequence of Theorem 1, we have the following.

Corollary 9

Suppose \(\mu (E)<+\infty \), \(f\in {\mathcal {L}}^{p}(E)\) for all \(p \ge 1\) and \(P=(p_j)_{j=1}^{\infty }\) is a sequence of distinct real numbers \(p_j\ge 1\). Let C be a nonnegative constant, and suppose that either a) or b) holds. Then, the following two conditions are equivalent:

  i)

    \( \displaystyle \left( \frac{1}{\mu (E)}\right) ^{\frac{1}{p}}||f ||_p= C \qquad \text {for all }\,p\in P\,; \)

  ii)

    \( \displaystyle |f(x)|= C \qquad \qquad \qquad \text { for a.e. } x\in E. \)

Indeed, Theorem 1 applies taking as g a constant function.

Theorem 1 remains valid if we assume the existence of an accumulation point of P in (0, 1] and suppose \(f,g\in {\mathcal {L}}^p(E)\) for all \(p>0\). In this case, \(||f||_p\) is formally defined as before, although it is no longer a norm when \(p<1\). To prove this, first notice that, without loss of generality, we can always assume that \(f,g\ge 0\,\). Let \(\delta \in (0,1]\) be an accumulation point of P. For all \(p\in P\cap \left( \frac{\delta }{2},2\right) \),

$$\begin{aligned} \int _E|f|^p\mathrm {d}\mu&=\int _E|g|^p\mathrm {d}\mu \;\;\Leftrightarrow \;\; \left( \int _E{|f^{{\frac{\delta }{2}}}|}^{\frac{2p}{\delta }}\mathrm {d}\mu \right) ^{\frac{\delta }{2p}}=\left( \int _E{|g^{{\frac{\delta }{2}}}|}^{\frac{2p}{\delta }}\mathrm {d}\mu \right) ^{\frac{\delta }{2p}}\;\\&\Leftrightarrow \;\; ||f^{\frac{\delta }{2}} ||_{\frac{2p}{\delta }}=||g^{\frac{\delta }{2}} ||_{\frac{2p}{\delta }}. \end{aligned}$$

The set \({\widetilde{P}}=\{\frac{2p}{\delta }:p\in P\cap \left( \frac{\delta }{2},2\right) \}\) is contained in \((1,+\infty )\) and has the accumulation point \(2=\frac{2\delta }{\delta }\) there. We can now apply Theorem 1, in the case when a) holds, to find that

$$\begin{aligned} \mu (|f^{\frac{\delta }{2}}|>\alpha )=\mu (|g^{\frac{\delta }{2}}|>\alpha )\qquad \text { for all } \alpha \ge 0, \end{aligned}$$

and so \(\mu (|f|>\alpha )=\mu (|g|>\alpha )\) for all \(\alpha \ge 0\).

In the last part of this section, we consider the special case of \(\ell ^p\) spaces and show that Theorem 1 can be generalized in this setting. We recall that, for a sequence \(A=(a_n)_n\), the \(\ell ^p\) norms are defined as follows:

$$\begin{aligned} ||A||_p=\left( \sum _{n=0}^{\infty }|a_n|^p\right) ^{\frac{1}{p}},\qquad ||A||_{\infty }=\sup _{n}|a_n|. \end{aligned}$$

Here is our result.

Theorem 10

Let \(A=(a_n)_n\) and \(B=(b_n)_n\) be two sequences of real numbers in \(\ell ^1\). If \(P=(p_j)_{j=1}^{\infty }\) is a sequence of distinct real numbers \(p_j\ge 1\) having an accumulation point in \((1,+\infty ]\) and

$$\begin{aligned} ||A ||_p=||B ||_p\ \quad {\text { for all} } \,p \in P, \end{aligned}$$

then the sequences

$$\begin{aligned} |A|=(|a_n|)_n \qquad {\text { and}}\qquad |B|=(|b_n|)_n \end{aligned}$$

can be obtained from one another by a permutation, after appending or removing some zeroes.

Proof

Suppose that the accumulation point is finite. Choose \(X={\mathbb {N}}\), \({\mathcal {A}}={\mathcal {P}}({\mathbb {N}})\) and \(\mu \) the counting measure. By Theorem 1, we have that

$$\begin{aligned} \#\left( |A|>\alpha \right) =\#\left( |B|>\alpha \right) \qquad \text { for all }\alpha \ge 0. \end{aligned}$$

Without loss of generality, we can suppose that \(a_n,b_n> 0\) for all \(n\in {\mathbb {N}}\). Since \((a_n)_n\) and \((b_n)_n\) are absolutely convergent, we can rearrange them so that A and B become nonincreasing, without modifying the \(\ell ^p\) norms, thus obtaining \({\hat{A}}=({\hat{a}}_n)_n\) and \({\hat{B}}=({\hat{b}}_n)_n\), respectively. Clearly

$$\begin{aligned} \#\left( A>\alpha \right) =\#\left( {\hat{A}}>\alpha \right) \quad \text {and}\quad \#\left( B>\alpha \right) =\#\left( {\hat{B}}>\alpha \right) . \end{aligned}$$

If \({\hat{a}}_n={\hat{b}}_n\) for all n, then the theorem is proved. Assume by contradiction that \({\hat{A}}\ne {\hat{B}}\) and let \({\bar{n}}\) be the smallest index such that \({\hat{a}}_{{\bar{n}}}\ne {\hat{b}}_{{\bar{n}}}\). Suppose for instance that \({\hat{a}}_{{\bar{n}}}>{\hat{b}}_{{\bar{n}}}\) and choose

$$\begin{aligned} \alpha =\frac{{\hat{a}}_{{\bar{n}}}+{\hat{b}}_{{\bar{n}}}}{2}. \end{aligned}$$

With this choice, we have

$$\begin{aligned} \#\left( {\hat{A}}>\alpha \right) \ge {\bar{n}}+1>{\bar{n}}= \#\left( {\hat{B}}>\alpha \right) , \end{aligned}$$

a contradiction. Suppose now that \(+\infty \) is the only accumulation point of P. We have that

$$\begin{aligned} \lim _j\Vert {\hat{A}}\Vert _{p_j}=\Vert {\hat{A}}\Vert _{\infty }={\hat{a}}_0, \end{aligned}$$

and similarly for \({\hat{B}}\), obtaining that \({\hat{a}}_0={\hat{b}}_0\). Define \({\hat{A}}_1=({\hat{a}}_n)_{n\ge 1}\) and \({\hat{B}}_1=({\hat{b}}_n)_{n\ge 1}\) and notice that:

$$\begin{aligned} \Vert {\hat{A}}_1\Vert _{p_j}^{p_j}= \sum _{n=0}^{\infty }{\hat{a}}_n^{p_j}-{\hat{a}}_0^{p_j}= \Vert {\hat{A}}\Vert _{p_j}^{p_j}-{\hat{a}}_0^{p_j}= \Vert {\hat{B}}\Vert _{p_j}^{p_j}-{\hat{b}}_0^{p_j}=\Vert {\hat{B}}_1\Vert _{p_j}^{p_j}, \end{aligned}$$

and consequently \(\Vert {\hat{A}}_1\Vert _{p_j}=\Vert {\hat{B}}_1\Vert _{p_j}\) for all j. By the same reasoning, we obtain that

$$\begin{aligned} \lim _j\Vert {\hat{A}}_1\Vert _{p_j}=\Vert {\hat{A}}_1\Vert _{\infty }={\hat{a}}_1, \end{aligned}$$

and the same for \({\hat{B}}_1,\) to obtain \({\hat{a}}_1={\hat{b}}_1\). Proceeding by induction, we conclude that \({\hat{A}}={\hat{B}}\). \(\square \)
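The peeling argument can be illustrated numerically on made-up finite sequences, padded with zeros (the choice of entries below is ours): the \(\ell ^p\) norms agree for every p, the p-norm approaches the sup norm as \(p\rightarrow \infty \), which reveals the largest entry, and the nonzero entries agree up to permutation.

```python
def lp_norm(a, p):
    # max-factored evaluation, so that large p does not overflow
    m = max(abs(x) for x in a)
    if m == 0.0:
        return 0.0
    return m * sum((abs(x) / m) ** p for x in a) ** (1.0 / p)

A = [3.0, 1.0, 2.0, 0.0]
B = [2.0, 0.0, 3.0, 0.0, 1.0]   # same nonzero entries, permuted, extra zeros

# equal p-norms for several exponents p >= 1
for p in (1.0, 2.0, 3.5, 10.0):
    assert abs(lp_norm(A, p) - lp_norm(B, p)) < 1e-9

# ||A||_p -> ||A||_infinity = 3, the first step of the peeling
assert abs(lp_norm(A, 200.0) - 3.0) < 1e-9

# the nonzero entries coincide up to permutation
assert sorted(x for x in A if x != 0.0) == sorted(x for x in B if x != 0.0)
```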

Notice that, choosing for example \(p_j=j!\) for all \(j\ge 1\), the condition \(\Vert {\hat{A}}\Vert _{p_j}=\Vert {\hat{B}}\Vert _{p_j}\) for all j implies that \({\hat{A}}={\hat{B}}\), but in this case the series in (1) converges. Notice moreover that Theorem 10 remains valid if 1 is the only accumulation point of P, supposing that (1) holds.

Let us show with an example that, even if \(\mu (E)\) is finite, condition (1) need not be necessary for the conclusion of Theorem 1. Let \(X=\{0\}\), \({\mathcal {A}}={\mathcal {P}}(X)\) and \(\mu \) the counting measure. Let \(P=(j^2)_{j\ge 1}\) and let \(f,g:\{0\}\rightarrow {\mathbb {R}}\) be two measurable functions. Suppose moreover that \(\Vert f\Vert _{j^2}=\Vert g\Vert _{j^2}\) for all \(j\ge 1\). Without loss of generality, we can assume that \(f(0)=\alpha _0\ge 0\) and \(g(0)=\beta _0\ge 0\). Then,

$$\begin{aligned} \Vert f\Vert _{j^2}=\alpha _0 \qquad \text { and }\qquad \Vert g\Vert _{j^2}=\beta _0\qquad \text { for all } j\ge 1, \end{aligned}$$
(6)

and consequently \(\alpha _0=\beta _0.\) However, condition (1) is not fulfilled.

The problem of the necessity of condition (1) remains open, e.g., when \(X={\mathbb {R}}\) and \(\mu \) is the Lebesgue measure.