1 Introduction

Signals in the natural world are often real-valued, and may depend on a single variable or on several variables. For one-dimensional real-valued signals, it is often advantageous to define an associated complex signal whose real part is the original real-valued signal. In Cohen’s book [2], for instance, it is shown that one motivation for defining a complex signal is to define the phase, and then the instantaneous frequency as the phase derivative. Frequency is a crucial concept: various kinds of frequencies serve as important references for understanding signals. The classical notion of frequency is the so-called Fourier frequency, defined through the Fourier transformation. Another popular, yet controversial, notion of frequency is related to a particular type of complex signal, viz., the analytic signal introduced by Gabor in [7]. For a real-valued square-integrable signal \(f\) on the whole time range, the function \(\frac{1}{2}(f+\mathbf{i}\mathbf{H}f)\) is called the analytic signal associated with \(f,\) where \(\mathbf{H}\) is the Hilbert transformation on the line. It is, in fact, the boundary value of the Cauchy integral of \(f\), analytic in the upper-half complex plane.

Writing the analytic signal in the polar coordinate representation

$$\begin{aligned} f^{+}=\frac{1}{2}(f+\mathbf{i}\mathbf{H}f)=A(t)[\cos \theta (t)+\mathbf{i}\sin \theta (t)]=A(t)\mathrm{e}^{\mathbf{i}\theta (t)}, \end{aligned}$$

then \(A(t)=\frac{1}{2}\sqrt{f^2+(\mathbf{H}f)^2}\) is called the instantaneous amplitude, and \(\theta (t)=\arctan \frac{ \mathbf{H}f}{f}\) the instantaneous phase. The derivative of the phase, \(\theta '(t),\) is usually defined to be the instantaneous frequency. Clearly, \(\theta '(t)=\mathrm{Re}\left\{ \left[ \frac{1}{\mathbf{i}}\frac{\mathrm{d}}{\mathrm{d}t} f^{+}(t)\right] [{f^{+}(t)}]^{-1}\right\} \).
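These definitions can be illustrated numerically. The sketch below (our construction; the pure-tone test signal and the grid are assumptions, not part of the original text) builds the analytic signal of a cosine by keeping only the positive Fourier frequencies, and recovers \(\theta '(t)\) as the derivative of the unwrapped phase:

```python
import numpy as np

# Analytic signal f+ = (f + i H f)/2 of a pure tone via the FFT:
# its spectrum is the positive-frequency half of the spectrum of f.
N = 1024
t = np.arange(N) / N                 # one period, dt = 1/N
omega0 = 2 * np.pi * 5.0             # true angular frequency (5 cycles)
f = np.cos(omega0 * t)

F = np.fft.fft(f)
xi = np.fft.fftfreq(N)               # frequency grid (cycles per sample)
f_plus = np.fft.ifft(np.where(xi > 0, F, 0))   # samples of f+ = A e^{i theta}

A = np.abs(f_plus)                   # instantaneous amplitude
theta = np.unwrap(np.angle(f_plus))  # instantaneous phase
inst_freq = np.gradient(theta, t)    # theta'(t)
```

For \(f(t)=\cos \omega _0 t\) with an integer number of cycles, \(f^{+}=\frac{1}{2}\mathrm{e}^{\mathbf{i}\omega _0 t}\) exactly, so the recovered amplitude is the constant \(\frac{1}{2}\) and \(\theta '(t)\approx \omega _0\).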

To study and characterize signals, it is often useful to establish fundamental quantitative relations. For example, the total energy of a signal is defined to be the integral of the energy density function \(|f(t)|^2\) over the entire time domain, viz., \(\int \nolimits _{-\infty }^{\infty }|f(t)|^2 \mathrm{d}t.\) To describe where the density is concentrated and how closely it clusters around its average, the average time and the standard deviation are defined by [2]

$$\begin{aligned} \langle t\rangle =\int \limits _{-\infty }^{\infty }t|f(t)|^2\mathrm{d}t \end{aligned}$$

and

$$\begin{aligned} T^2=\sigma _t^2=\int \limits _{-\infty }^{\infty }(t-\langle t\rangle )^2|f(t)|^2 \mathrm{d}t. \end{aligned}$$

If \(f\) is of unit energy, then

$$\begin{aligned} \sigma _t^2=\langle t^2\rangle -\langle t\rangle ^2. \end{aligned}$$
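As a quick numerical illustration (a sketch of ours with an assumed Gaussian test signal), the discretized moments reproduce the known values \(\langle t\rangle =t_0\) and \(\sigma _t^2=1/(4a)\) for the unit-energy Gaussian \(f(t)=(2a/\pi )^{1/4}\mathrm{e}^{-a(t-t_0)^2}\):

```python
import numpy as np

a, t0 = 2.0, 0.7
dt = 1e-3
t = np.arange(-8.0, 8.0, dt)
f = (2 * a / np.pi) ** 0.25 * np.exp(-a * (t - t0) ** 2)

p = np.abs(f) ** 2 * dt                # energy density weights; total energy ~ 1
mean_t = np.sum(t * p)                 # <t>
var_t = np.sum((t - mean_t) ** 2 * p)  # sigma_t^2 = <t^2> - <t>^2
```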

Similarly, for a signal of unit energy, if \(|\hat{f}(\omega )|^2\) represents the Fourier frequency density, then the average frequency and the Fourier bandwidth are defined by

$$\begin{aligned} \langle \omega \rangle =\frac{1}{2\pi } \int \limits _{-\infty }^{\infty }\omega |\hat{f}(\omega )|^2 \mathrm{d}\omega \end{aligned}$$

and

$$\begin{aligned} \sigma _{\omega }^2=B^2&= \frac{1}{2\pi }\int \limits _{-\infty }^{\infty } (\omega -\langle \omega \rangle )^2|\hat{f}(\omega )|^2 \mathrm{d}\omega \\&= \langle \omega ^2\rangle -\langle \omega \rangle ^2. \end{aligned}$$

In time-frequency analysis, there are instructive formulas revealing relations between the Fourier frequency and the phase-derivative frequency, involving the mean and bandwidth of frequency, two forms of covariance, etc. By virtue of such formulas, for instance, the average Fourier frequency and the Fourier bandwidth can be computed without invoking the Fourier transform. The following results can be found in [2]:

$$\begin{aligned} \langle \omega \rangle =\int \limits _{-\infty }^{\infty }\frac{\mathrm{d}\theta (t)}{\mathrm{d}t}|f(t)|^2 \mathrm{d}t \end{aligned}$$
(1.1)

and

$$\begin{aligned} \sigma _{\omega }^2=\int \limits _{-\infty }^{\infty }\left( \frac{\mathrm{d}A(t)}{\mathrm{d}t}\right) ^2\mathrm{d}t+ \int \limits _{-\infty }^{\infty }\left( \frac{\mathrm{d}\theta (t)}{\mathrm{d}t}-\langle \omega \rangle \right) ^2 A^2(t)\mathrm{d}t. \end{aligned}$$
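Formula (1.1) and the bandwidth identity above can be tested numerically. The sketch below (our construction; the chirped Gaussian \(f=A\mathrm{e}^{\mathbf{i}\theta }\) and the grid parameters are assumptions) computes the frequency-domain moments with the \(\frac{1}{2\pi }|\hat{f}(\omega )|^2\) density and compares them with the time-domain expressions through \(A(t)\) and \(\theta (t)\):

```python
import numpy as np

a, beta, omega0 = 1.0, 5.0, 2 * np.pi * 4
N, dt = 2048, 16.0 / 2048            # grid on [-8, 8)
t = (np.arange(N) - N // 2) * dt
Amp = (2 * a / np.pi) ** 0.25 * np.exp(-a * t**2)   # unit-energy envelope A(t)
theta = omega0 * t + 0.5 * beta * t**2              # linear-chirp phase
f = Amp * np.exp(1j * theta)

# frequency-domain moments with the |fhat|^2 d(omega)/(2 pi) density
F = np.fft.fft(f)
w = 2 * np.pi * np.fft.fftfreq(N, dt)
P = np.abs(F) ** 2 * dt / N          # discrete weights; sum ~ total energy = 1
mean_w = np.sum(w * P)
bw2 = np.sum((w - mean_w) ** 2 * P)

# time-domain counterparts: <w> = int theta' |f|^2 dt and
# sigma_w^2 = int (A')^2 dt + int (theta' - <w>)^2 A^2 dt
dtheta = omega0 + beta * t           # theta'(t)
dAmp = -2 * a * t * Amp              # A'(t), known analytically here
mean_w_time = np.sum(dtheta * Amp**2) * dt
bw2_time = np.sum(dAmp**2) * dt + np.sum((dtheta - mean_w_time) ** 2 * Amp**2) * dt
```

With \(a=1\) and \(\beta =5\), both computations give \(\langle \omega \rangle \approx \omega _0\) and \(\sigma _{\omega }^2\approx a+\beta ^2/(4a)=7.25\).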

Similarly, writing \(\hat{f}(\omega )=B(\omega )\mathrm{e}^{\mathbf{i}\psi (\omega )}\) in polar form, we have

$$\begin{aligned} \langle t\rangle =-\frac{1}{2\pi }\int \limits _{-\infty }^{\infty } \frac{\mathrm{d}\psi (\omega )}{\mathrm{d}\omega }|\hat{f}(\omega )|^2 \mathrm{d}\omega \end{aligned}$$

and

$$\begin{aligned} T^2=\sigma _t^2=\frac{1}{2\pi }\int \limits _{-\infty }^{\infty }\left( \frac{\mathrm{d}B(\omega )}{\mathrm{d}\omega }\right) ^2 \mathrm{d}\omega +\frac{1}{2\pi }\int \limits _{-\infty }^{\infty } \left( \frac{\mathrm{d}\psi (\omega )}{\mathrm{d}\omega }+\langle t\rangle \right) ^2B^2(\omega ) \mathrm{d}\omega . \end{aligned}$$

For a real-valued signal, since \(\hat{f}(-\omega )=\overline{\hat{f}(\omega )}\), the mean \(\langle \omega \rangle \) is always zero. In that case, \(\langle \omega \rangle \) does not show where the Fourier frequency density concentrates. One, instead, uses \(\langle \omega \rangle ^+\) to study the mean of the Fourier frequency [2], that is,

$$\begin{aligned} \langle \omega \rangle ^{+}&= \frac{1}{2\pi }\int \limits _{0}^{\infty }\omega |\hat{f}(\omega )|^2 \mathrm{d}\omega \\&= \frac{1}{2\pi }\int \limits _{-\infty }^{+\infty }\omega |\widehat{f^+}(\omega )|^2 \mathrm{d}\omega . \end{aligned}$$
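As a numerical check (our sketch; the Gaussian-windowed tone and the grid are assumptions), for a real signal of unit energy the full-spectrum mean vanishes, while the positive-frequency integral \(\frac{1}{2\pi }\int _0^{\infty }\omega |\hat{f}(\omega )|^2 \mathrm{d}\omega \) picks out the oscillation: for a tone at \(\omega _0\) it is close to \(\omega _0/2\), since only half of the unit energy lies on \(\omega >0\).

```python
import numpy as np

a, omega0 = 1.0, 2 * np.pi * 8       # envelope width and tone frequency
N, dt = 2048, 16.0 / 2048            # grid on [-8, 8)
t = (np.arange(N) - N // 2) * dt
# unit-energy real signal: sqrt(2) * Gaussian envelope * cosine
f = np.sqrt(2) * (2 * a / np.pi) ** 0.25 * np.exp(-a * t**2) * np.cos(omega0 * t)

F = np.fft.fft(f)
w = 2 * np.pi * np.fft.fftfreq(N, dt)
P = np.abs(F) ** 2 * dt / N          # ~ |fhat(w)|^2 dw / (2 pi); sums to ~1

mean_full = np.sum(w * P)                 # <omega>: cancels by spectral symmetry
mean_pos = np.sum(w[w > 0] * P[w > 0])    # (1/2pi) int_0^inf omega |fhat|^2 dw
```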

The uncertainty principle, on the other hand, reveals certain relations between the bandwidths of time and frequency. It can also be represented by the phase derivative through a related covariance.

The work [3] extends the above-mentioned results to non-smooth signals via Hardy space decomposition.

For a real-valued square-integrable signal \(f\) on the higher-dimensional Euclidean space \(\mathbf{R}^m\), one defines the associated Clifford monogenic signal, analogous to the complex analytic signal, to be the boundary value of the Cauchy integral of \(f\). By virtue of the notion of monogenic signal, one can define various types of signal phases [5, 6]; until [12], however, the phase derivative as frequency and the related theoretical aspects in higher-dimensional spaces had not been studied. Various vector-valued phases are easy to define, but they possess fewer characteristic properties in relation to applications. The work [12] makes two contributions: one is a well-defined notion of scalar-valued phase; the other is an analysis of the scalar-valued phase derivative as frequency.

In this paper, with the scalar-valued phase notion, we extend the fundamental results of [3] to higher dimensions. We include an uncertainty principle for real-valued signals in higher dimensions. For vector-valued signals of the axial form, we obtain an uncertainty principle, involving the defined scalar-valued phase derivative, with an improved lower bound.

Our writing plan is as follows. Section 2 contains the basic knowledge of Clifford analysis required by this study. In Sect. 3, we study the mean of Fourier frequency in terms of the scalar-valued phase derivative. In the final section, we prove the two types of uncertainty principle in higher dimensions.

2 Preliminary

The basic knowledge and notation concerning Clifford algebras cited in this section can be found in [1] and [4]. The formulation of the scalar-valued phase derivative is taken from [12].

Let \(\mathbf{e}_1,\ldots , \mathbf{e}_m \) be basic elements satisfying \(\mathbf{e}_i\mathbf{e}_j+\mathbf{e}_j\mathbf{e}_i=-2\delta _{ ij }\), where \(\delta _{ ij }=1\) if \(i=j;\) and \(\delta _{ ij }=0\) otherwise, \(i, j=1, 2, \ldots , m.\) Let

$$\begin{aligned} \mathbf{R}_1^m =\{x_0+\underline{x} : \underline{x}\in \mathbf{R}^m \}, \end{aligned}$$

where

$$\begin{aligned} \mathbf{R}^m =\{\underline{x}=x_1 \mathbf{e}_1 + \cdots + x_m \mathbf{e}_m : x_j \in \mathbf{R}, j=1, 2, \ldots , m \} \end{aligned}$$

is identified with the usual Euclidean space \(\mathbf{R}^m\).

An element in \(\mathbf{R}^m\) is called a vector. The real (resp. complex) Clifford algebra generated by \(\mathbf{e}_1, \mathbf{e}_2, \ldots , \mathbf{e}_m\), denoted by \(\mathbf{R}_{m}\) (resp. \(\mathbf{C}_{m}\)), is an associative algebra over the real field \(\mathbf{R}\) (resp. the complex field \(\mathbf{C}\)). A general element of \(\mathbf{R}_{m}\), therefore, is of the form \(x=\sum \nolimits _S x_S \mathbf{e}_S,\) where \(x_S\in \mathbf{R}\), \(\mathbf{e}_S=\mathbf{e}_{i_1}\mathbf{e}_{i_2}\cdots \mathbf{e}_{i_l},\) and \(S\) runs over all the ordered subsets of \(\{1,2,\ldots ,m\},\) namely

$$\begin{aligned} S=\{ i_1 , i_2, \ldots , i_l \},\quad 1 \le i_1 <i_2< \cdots < i_l \le m, \quad 1\le l \le m. \end{aligned}$$

For a Clifford number \(x\), we use \(\mathrm{{Sc}}[x]\) to denote the scalar part of \(x\) and \(\mathrm{{Nsc}}[x]\) the non-scalar part of \(x\). The multiplication of two vectors \(\underline{x}=\sum \nolimits _{j=1}^{m}x_j \mathbf{e}_j\) and \(\underline{y}=\sum \nolimits _{j=1}^{m}y_j \mathbf{e}_j\) is given by

$$\begin{aligned} \underline{x}\underline{y}=\underline{x}\cdot \underline{y}+\underline{x}\wedge \underline{y}\end{aligned}$$

with

$$\begin{aligned} \underline{x}\cdot \underline{y}=-\sum _{j=1}^{m}x_jy_j=\frac{1}{2}(\underline{x}\underline{y}+\underline{y}\underline{x})=-\langle \underline{x}, \underline{y}\rangle \end{aligned}$$

and

$$\begin{aligned} \underline{x}\wedge \underline{y}=\sum _{i<j}\mathbf{e}_{ij}(x_iy_j-x_jy_i) = \frac{1}{2}(\underline{x}\underline{y}-\underline{y}\underline{x}), \end{aligned}$$

being a scalar and a bi-vector, respectively. In particular, we have \(\underline{x}^2=-\langle \underline{x}, \underline{x}\rangle =-|\underline{x}|^2=-\sum \nolimits _{j=1}^{m}x_j^2.\)
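The multiplication rules above are easy to verify by machine. The following minimal sketch (our own implementation, not taken from [1] or [4]) encodes a basis blade \(\mathbf{e}_S\) as a bitmask, with bit \(j\) standing for \(\mathbf{e}_{j+1}\), and checks on random vectors that the scalar part of \(\underline{x}\,\underline{y}\) is \(-\langle \underline{x}, \underline{y}\rangle \), that the bivector part is \(\underline{x}\wedge \underline{y}\), and that \(\underline{x}^2=-|\underline{x}|^2\):

```python
import numpy as np

def blade_mul(a, b):
    """Product of basis blades given as bitmasks (bit j <-> e_{j+1}),
    under the relations e_i e_j + e_j e_i = -2 delta_ij."""
    sign = 1
    for i in range(a.bit_length()):
        if (a >> i) & 1:
            # e_i hops over every lower-index generator of b: one sign per swap
            sign *= (-1) ** bin(b & ((1 << i) - 1)).count("1")
    sign *= (-1) ** bin(a & b).count("1")   # repeated generators: e_i^2 = -1
    return sign, a ^ b

def mv_mul(x, y):
    """Multiply multivectors stored as {blade_bitmask: coefficient} dicts."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_mul(a, b)
            out[blade] = out.get(blade, 0.0) + s * ca * cb
    return out

rng = np.random.default_rng(0)
m = 5
xv, yv = rng.standard_normal(m), rng.standard_normal(m)
x = {1 << j: xv[j] for j in range(m)}       # vector x = sum x_j e_{j+1}
y = {1 << j: yv[j] for j in range(m)}

xy, yx = mv_mul(x, y), mv_mul(y, x)
dot = xy.get(0, 0.0)                        # scalar part: x . y = -<x, y>
x_sq = mv_mul(x, x)                         # should be the scalar -|x|^2
```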

We define, respectively, the conjugation and the reversion of \(\mathbf{e}_S\) to be \(\overline{\mathbf{e}}_{S}=\overline{\mathbf{e}}_{i_l}\cdots \overline{\mathbf{e}}_{i_1}\), with \(\overline{\mathbf{e}}_j=-\mathbf{e}_j\), and \(\widetilde{\mathbf{e}}_{S}=\mathbf{e}_{i_l}\cdots \mathbf{e}_{i_1}\). For example, the Clifford conjugate of a vector \(\underline{x}\) is \(\overline{\underline{x}}=-\underline{x},\) and the Clifford reversion of a vector \(\underline{x}\) is \(\widetilde{\underline{x}}=\underline{x}\). Both operations extend linearly to the whole real Clifford algebra. For an element \(x=\sum \nolimits _S x_S \mathbf{e}_S\) of the complex Clifford algebra, we define \(\overline{x}=\sum \nolimits _S \overline{x}_S \overline{\mathbf{e}}_S,\) where \(\overline{x}_S\) is the complex conjugate of \(x_S.\) It is easy to verify that \(0 \not = x \in \mathbf{R}^m_1\) implies

$$\begin{aligned} x^{-1} = \frac{\overline{x}}{|x|^2}. \end{aligned}$$

The natural inner product between \(x\) and \(y\) in \(\mathbf{C}_{m},\) denoted by \(\langle x, y\rangle ,\) is defined to be the complex number \(\sum \nolimits _Sx_S\overline{y_S},\) where \(x=\sum \nolimits _Sx_S\mathbf{e}_S\) and \(y=\sum \nolimits _Sy_S\mathbf{e}_S.\) The norm associated with this inner product is

$$\begin{aligned} |x|=\langle x, x\rangle ^{1\over 2}=\left( \sum \limits _S|x_S|^2\right) ^{{1\over 2}}. \end{aligned}$$

We will study functions defined in \(\mathbf{R}^m\) taking values in \(\mathbf{C}_{m}.\) Such functions are of the form \(f(\underline{x})=\sum \nolimits _S f_S (\underline{x}) \mathbf{e}_S,\) where \(f_S\) are complex-valued functions. The definitions of several types of Clifford monogenic functions in \(\mathbf{R}^m\) are based on the Dirac operator \(\underline{D}={\partial \over \partial x_1}\mathbf{e}_1+\cdots +{\partial \over \partial x_m}\mathbf{e}_m.\)

First, we specify the “left” and “right” roles of the operators \(\underline{D}\) by, respectively,

$$\begin{aligned} \underline{D} f = \sum ^m_{i=1} \sum _S {\partial f_S \over \partial x_i} \mathbf{e}_i \mathbf{e}_S \end{aligned}$$

and

$$\begin{aligned} f\underline{D} = \sum ^m_{i=1} \sum _S {\partial f_S \over \partial x_i} \mathbf{e}_S \mathbf{e}_i. \end{aligned}$$

If \(\underline{D} f=0\) in a domain (an open and connected set) \(\Omega \), then we say that \(f\) is left-monogenic in \(\Omega \); if \(f \underline{D}=0\) in \(\Omega \), we say that \(f\) is right-monogenic in \(\Omega \). If \(f\) is both left- and right-monogenic, then we say that \(f\) is monogenic.

We recall that

$$\begin{aligned} E(\underline{x})={\overline{\underline{x}} \over |\underline{x}|^{m}} \end{aligned}$$

is the Cauchy kernel in \(\mathbf{R}^{m}\). It is easy to see that \(E(\underline{x})\) is a monogenic function in \(\mathbf{R}^{m}\setminus \{0\}\).

If \(f\in L^1{(\mathbf{R}^m; \mathbf{C}_m)}\), we define the Fourier transform of \(f\) by

$$\begin{aligned} \hat{f}(\underline{\xi })=\int \limits _{\mathbf{R}^m}\mathrm{e}^{-\mathbf{i}\langle \underline{x},\underline{\xi }\rangle }f(\underline{x})\mathrm{d}\underline{x}\end{aligned}$$

and, formally, the inverse Fourier transform of \(\hat{f}\) by

$$\begin{aligned} f(\underline{x})=\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m} \hbox {e}^{\mathbf{i}\langle \underline{x},\underline{\xi }\rangle }\hat{f}(\underline{\xi }) \mathrm{d}\underline{\xi }. \end{aligned}$$

For square-integrable functions, the Plancherel theorem holds:

$$\begin{aligned} \int \limits _{\mathbf{R}^m} |f(\underline{x})|^2 \mathrm{d}\underline{x}=\frac{1}{(2 \pi )^m}\int \limits _{\mathbf{R}^m} |\hat{f}(\underline{\xi })|^2 \mathrm{d}\underline{\xi }. \end{aligned}$$
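The transform pair and the Plancherel identity can be checked on a grid. In the sketch below (for \(m=1\); the test function is an arbitrary choice of ours), \(\hat{f}\) is approximated by \(\mathrm{d}x\) times the FFT, and the two sides of the identity agree to machine precision:

```python
import numpy as np

N, dx = 512, 0.05
x = (np.arange(N) - N // 2) * dx
f = np.exp(-x**2) * (1 + 1j * x)      # any square-integrable test function

fhat = dx * np.fft.fft(f)             # samples of fhat(xi), up to a phase factor
dxi = 2 * np.pi / (N * dx)            # spacing of the dual frequency grid

lhs = np.sum(np.abs(f) ** 2) * dx                     # int |f|^2 dx
rhs = np.sum(np.abs(fhat) ** 2) * dxi / (2 * np.pi)   # (2 pi)^{-1} int |fhat|^2 dxi
```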

The monogenic signal associated with \(f(\underline{x})\) is defined to be the non-tangential boundary value of the Cauchy integral of \(f\) as a monogenic function in the upper-half space. The boundary value reads \(\frac{1}{2}\left[ f(\underline{x})+ H[f](\underline{x})\right] \), where

$$\begin{aligned} H[f]&= -\sum _{j=1}^{m}R_j(f)(\underline{x}){\mathbf{e}_j},\\ R_j(f)(\underline{x})&= \lim _{\varepsilon \rightarrow 0^{+}}\int \limits _{|\underline{x}-\underline{\xi }|>\varepsilon } \frac{x_j-\xi _j}{|\underline{x}-\underline{\xi }|^{m+1}}f(\underline{\xi })\mathrm{d}\underline{\xi }\end{aligned}$$

is the \(j\)th Riesz transform of \(f\) [11]. Clearly, if \(f(\underline{x})\) is real-valued, then \(H[f](\underline{x})\) is vector-valued. We will restrict ourselves to real-valued functions \(f.\)

Write \(f^+(\underline{x})=\frac{1}{2}\left[ f(\underline{x})+ H[f](\underline{x})\right] \) in the polar form \(A(f)\mathrm{e}^{\left[ \frac{{H}[f]}{|{H}[f]|}\theta (\underline{x})\right] }.\) Then in [12], \(A(f)=\frac{1}{2}\sqrt{f^2+|{H}[f]|^2}\) is called the amplitude, \(\theta (\underline{x})=\arctan \frac{|{H}[f]|}{f}\) the phase, defined between \(0\) and \(\frac{\pi }{2},\) \(\frac{{H}[f]}{|{H}[f]|}\theta (\underline{x})\) the phase vector, and \(\mathrm{e}^{\left[ \frac{{H}[f]}{|{H}[f]|}\theta (\underline{x})\right] }\) the phase direction. We also define the directional phase derivative to be \(\mathrm{{Sc}}\left\{ [\underline{D}\theta (\underline{x})] \frac{{H}[f]}{|{H}[f]|}\right\} ,\) and the phase derivative or instantaneous frequency to be

$$\begin{aligned} \mathrm{{Sc}}\left\{ [{\underline{D}f^{+}(\underline{x})}][{f^{+}(\underline{x})}]^{-1}\right\} . \end{aligned}$$

Remark 2.1

One reason for promoting the scalar-valued phase derivative is as follows. In terms of the so-defined phase derivative, we can prove higher-dimensional counterparts [12] of results in [2]. Formulas like (1.1) exhibit significant relations between the phase derivative and the Fourier frequency, providing reasons to regard the phase derivative as the instantaneous frequency (IF). Furthermore, some nice properties of the proposed scalar-valued phase derivative are proved, including positivity of the phase derivative of the Cauchy kernel [12].

Remark 2.2

In the one-dimensional case, the above-defined directional phase derivative and the phase derivative coincide; in higher dimensions, however, they differ. Their relation is given by the equation [12]

$$\begin{aligned}&\mathrm{{Sc}}\left\{ [{\underline{D}f^{+}(\underline{x})}][{f^{+}(\underline{x})}]^{-1}\right\} \\&\quad =\mathrm{{Sc}}\left\{ [\underline{D}\mathrm{e}^{\frac{H[f]}{|H[f]|}\theta (\underline{x})}][\mathrm{e}^{\frac{H[f]}{|H[f]|}\theta (\underline{x})}]^{-1}\right\} \\&\quad =\mathrm{{Sc}}\left\{ [\underline{D}\frac{{H}[f]}{|{H}[f]|}]\sin \theta (\underline{x})\cos \theta (\underline{x})\right\} +\mathrm{{Sc}}\left\{ [\underline{D}\theta (\underline{x})]\frac{{H}[f]}{|{H}[f]|}\right\} . \end{aligned}$$

Example 2.1

For the Poisson kernel \(f(\underline{x})=\frac{s}{|s+\underline{x}|^{m+1}},\) where \(s>0\), we have \(f^{+}=\frac{1}{2}\frac{\overline{s+\underline{x}}}{|s+\underline{x}|^{m+1}}\), which is proportional to the Cauchy kernel in \(\mathbf{R}_1^m\). A direct calculation shows that the instantaneous frequency of the Poisson kernel signal is \(\frac{ ms}{|s+\underline{x}|^2},\) which is clearly positive.

In [8] it is shown that

$$\begin{aligned} f^{\pm }(\underline{x})&= \frac{1}{2}\left[ f(\underline{x})\pm H[f](\underline{x})\right] \\&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}\mathrm{e}^{\mathbf{i}\langle \underline{x},\underline{\xi }\rangle } \frac{1}{2}\left( 1\pm \mathbf{i}\frac{\underline{\xi }}{|\underline{\xi }|}\right) \hat{f}(\underline{\xi })\mathrm{d}\underline{\xi }\\&= \lim _{x_0\rightarrow 0^{\pm }}\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m} \mathrm{e}^{\pm }(x_0+\underline{x}, \underline{\xi })\hat{f}(\underline{\xi })\mathrm{d}\underline{\xi }, \end{aligned}$$

where

$$\begin{aligned} \mathrm{e}^{\pm }(x_0+\underline{x}, \underline{\xi })=\mathrm{e}^{\mp x_0|\underline{\xi }|}\mathrm{e}^{\mathbf{i}\langle \underline{x},\underline{\xi }\rangle }\frac{1}{2}\left( 1\pm \mathbf{i}\frac{\underline{\xi }}{|\underline{\xi }|}\right) \end{aligned}$$

are left-monogenic in \(\mathbf{R}_1^{m},\) being the Fourier transforms of the Cauchy kernels in, respectively, the upper and the lower half spaces. This indicates that \(\frac{1}{2}\left[ f(\underline{x})\pm H[f](\underline{x})\right] \in H_2^{\pm }(\mathbf{R}^m)\) are the boundary values of left-monogenic functions in, respectively, the upper and lower half spaces of \(\mathbf{R}_1^m\). Clearly, \(f=f^{+}+f^{-}\). We also have \(\widehat{H[f]}(\underline{\xi })=\mathbf{i}\frac{\underline{\xi }}{|\underline{\xi }|}\hat{f}(\underline{\xi })\).

Let \(\chi _{\pm }(\underline{\xi })=\frac{1}{2}(1\pm \mathbf{i}\frac{\underline{\xi }}{|\underline{\xi }|}).\) The functions \(\chi _{\pm }\) enjoy the usual projection properties \(\chi _{\pm }^2=\chi _{\pm }, \chi _{+}+\chi _{-}=1, \chi _{+}\chi _{-}=\chi _{-}\chi _{+}=0.\) Moreover, \( \overline{\chi _{\pm }}=\chi _{\pm }\) and \(|\underline{\xi }|\chi _{\pm }(\underline{\xi })=\pm \mathbf{i}\underline{\xi }\chi _{\pm }(\underline{\xi })\).
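For \(m=1\), the projection properties of \(\chi _{\pm }\) can be checked with a real \(2\times 2\) matrix model of \(\mathbf{e}_1\) (an illustration of ours; any matrix \(E\) with \(E^2=-I\) would serve), under which \(\mathbf{i}\frac{\underline{\xi }}{|\underline{\xi }|}\) acts as \(\mathbf{i}\,\mathrm{sgn}(\xi )E\) and squares to \(+I\):

```python
import numpy as np

E = np.array([[0.0, -1.0], [1.0, 0.0]])   # represents e_1; satisfies E @ E = -I
I = np.eye(2)
xi = -2.5                                  # any nonzero frequency
u = 1j * np.sign(xi) * E                   # i xi/|xi|; satisfies u @ u = +I

chi_p = 0.5 * (I + u)                      # chi_+
chi_m = 0.5 * (I - u)                      # chi_-
```

The assertions below confirm \(\chi _{\pm }^2=\chi _{\pm }\), \(\chi _{+}+\chi _{-}=1\), \(\chi _{+}\chi _{-}=0\) and \(|\underline{\xi }|\chi _{\pm }=\pm \mathbf{i}\underline{\xi }\chi _{\pm }\) in this model.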

Throughout this paper, we assume \(\Vert f\Vert _{L^2}=1\).

3 Mean and variance of Fourier frequency in terms of monogenic phase derivatives

Definition 3.1

Let \(f(\underline{x})\) be a square-integrable signal and \(|\hat{f}(\underline{\xi })|^2\) the density of the Fourier frequency. We define the mean of the Fourier frequency by

$$\begin{aligned} \langle \underline{\xi }\rangle =\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}\mathbf{i}\underline{\xi }|\hat{f}(\underline{\xi })|^2 \mathrm{d}\underline{\xi }, \end{aligned}$$

and the Fourier bandwidth by

$$\begin{aligned} B^2=\sigma _{\underline{\xi }}^2&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}(\mathbf{i}\underline{\xi }-\langle \underline{\xi }\rangle )^2|\hat{f}(\underline{\xi })|^2 \mathrm{d}\underline{\xi }\\&= \langle \underline{\xi }^2\rangle -\langle \underline{\xi }\rangle ^2, \end{aligned}$$

where \(\langle \underline{\xi }^2\rangle =\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}-{\underline{\xi }}^2|\hat{f}(\underline{\xi })|^2 \mathrm{d}\underline{\xi }\).

Clearly, if \(f(\underline{x})\) is real-valued, then \(\hat{f}(-\underline{\xi })=\overline{\hat{f}(\underline{\xi })}\). Therefore, \(\langle \underline{\xi }\rangle \) is always zero.

Definition 3.2

Assume \(f, \underline{D}f \in L^2(\mathbf{R}^m)\) with the decomposition \(f=f^++f^-,\ \widehat{f^{\pm }}=\chi _{\pm }\hat{f}.\) We define

$$\begin{aligned} \langle \underline{\xi }\rangle ^{\pm }=\frac{1}{(2\pi )^m} \int \limits _{\mathbf{R}^m}\pm |\underline{\xi }||\widehat{f^{\pm }}(\underline{\xi })|^2 \mathrm{d}\underline{\xi }. \end{aligned}$$

Lemma 3.1

[12] Assume \(f, \underline{D}f \in L^2(\mathbf{R}^m)\) with the decomposition \(f=f^++f^-, \widehat{f^{\pm }}=\chi _{\pm }\hat{f}.\) Then

$$\begin{aligned} \langle \underline{\xi }\rangle ^{\pm }=\int \limits _{\mathbf{R}^m}\mathrm{{Sc}} \left\{ [\underline{D}f^{\pm }(\underline{x})][f^{\pm }(\underline{x})]^{-1}\right\} |f^{\pm }(\underline{x})|^2 \mathrm{d}{\underline{x}}. \end{aligned}$$

Next, we will study relations involving the mean of the Fourier frequency, the Fourier bandwidth and the phase derivative.

Theorem 3.1

Assume \(f, \underline{D}f \in L^2(\mathbf{R}^m)\) with the decomposition \(f=f^++f^-, \widehat{f^{\pm }}=\chi _{\pm }\hat{f}.\) Then the mean Fourier frequency \(\langle \underline{\xi }\rangle \) is identical with

$$\begin{aligned} \langle \underline{\xi }\rangle \!=\!\int \limits _{\mathbf{R}^m}\mathrm{{Sc}} \left\{ [\underline{D}f^{+}(\underline{x})][f^{+}(\underline{x})]^{-1}\right\} |f^{+}(\underline{x})|^2 \mathrm{d}{\underline{x}}\!+\!\int \limits _{\mathbf{R}^m} \mathrm{{Sc}}\left\{ [\underline{D}f^{-} (\underline{x})][f^{-}(\underline{x})]^{-1}\right\} |f^{-}(\underline{x})|^2 \mathrm{d}{\underline{x}}. \end{aligned}$$

Proof

Since \(f, \underline{D}f \in L^2(\mathbf{R}^m)\), clearly \(\hat{f}(\underline{\xi }), \ \underline{\xi }\hat{f}(\underline{\xi })\in L^2(\mathbf{R}^m)\). Hölder's inequality then implies \(\underline{\xi }|\hat{f}(\underline{\xi })|^2 \in L^1(\mathbf{R}^m)\); hence \(\langle \underline{\xi }\rangle \) is well defined. Applying the properties of \(\chi _{\pm }\), we have

$$\begin{aligned} \langle \underline{\xi }\rangle&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}\mathbf{i}\underline{\xi }|\hat{f}(\underline{\xi })|^2 \mathrm{d}\underline{\xi }\\&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}\mathbf{i}\underline{\xi }\chi _+(\underline{\xi })|\hat{f}(\underline{\xi })|^2 \mathrm{d} \underline{\xi }+\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}\mathbf{i}\underline{\xi }\chi _-(\underline{\xi })|\hat{f}(\underline{\xi })|^2 \mathrm{d}\underline{\xi }\\&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|\chi _+(\underline{\xi })|\hat{f}(\underline{\xi })|^2 \mathrm{d} \underline{\xi }-\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|\chi _-(\underline{\xi })|\hat{f}(\underline{\xi })|^2 \mathrm{d}\underline{\xi }\\&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|\chi _+(\underline{\xi })\hat{f}(\underline{\xi })\overline{\chi _+(\underline{\xi }) \hat{f}(\underline{\xi })} \mathrm{d}\underline{\xi }-\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|\chi _-(\underline{\xi }) \hat{f}(\underline{\xi })\overline{\chi _-(\underline{\xi })\hat{f}(\underline{\xi })} \mathrm{d}\underline{\xi }\\&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }||\widehat{f^{+}}(\underline{\xi })|^2 \mathrm{d}\underline{\xi }+\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}(-|\underline{\xi }|)|\widehat{f^{-}}(\underline{\xi })|^2 \mathrm{d}\underline{\xi }\\&= \langle \underline{\xi }\rangle ^{+}+\langle \underline{\xi }\rangle ^{-}. \end{aligned}$$

Applying Lemma 3.1, we complete the proof.

Remark 3.1

If \(f(\underline{x})\) is real-valued, then \(f^{\pm }=\frac{1}{2}[f\pm H[f]]\) and \(\widehat{f^{\pm }} =\frac{1}{2}(1\pm \mathbf{i}\frac{\underline{\xi }}{|\underline{\xi }|})\hat{f}\). So \(|\widehat{f^{+}}|=|\widehat{f^{-}}|\). By Theorem 3.1, we have \(\langle \underline{\xi }\rangle =0\). Hence, as in the classical case, we adopt \(\langle \underline{\xi }\rangle ^{+}\) when we study the mean of the Fourier frequency.

Example 3.1

For the Poisson kernel \(f(\underline{x})=\frac{s}{|s+\underline{x}|^{m+1}}\), we have \(\hat{f}(\underline{\xi })=\frac{\pi ^{\frac{m+1}{2}}}{\Gamma (\frac{m+1}{2})}\mathrm{e}^{-s|\underline{\xi }|}\) and \(H[f](\underline{x})=\frac{\bar{\underline{x}}}{|s+\underline{x}|^{m+1}}\). So \(f^{\pm }=\frac{1}{2}\frac{\overline{s\pm \underline{x}}}{|s+\underline{x}|^{m+1}}\), being proportional to the Cauchy kernel in \(\mathbf{R}_1^m\). Through direct computation, we have \(|f^{\pm }|^2=\frac{1}{4}\frac{1}{|s+\underline{x}|^{2m}}\) and \(\underline{D}f^{\pm }(\underline{x})[f^{\pm }(\underline{x})]^{-1}=\frac{\pm ms-\underline{x}}{|s+\underline{x}|^2}\).

Using the definition of \(\langle \underline{\xi }\rangle \), we have \(\langle \underline{\xi }\rangle =\frac{1}{(2\pi )^m}\int \nolimits _{\mathbf{R}^m}\mathbf{i}\underline{\xi }\frac{\pi ^{m+1}}{\Gamma ^2(\frac{m+1}{2})} \hbox {e}^{-2s|\underline{\xi }|} \mathrm{d}\underline{\xi }=0\).

On the other hand,

$$\begin{aligned} \langle \underline{\xi }\rangle ^{+}+\langle \underline{\xi }\rangle ^{-}&= \frac{1}{4}\int \limits _{\mathbf{R}^m}\frac{ms}{|s+\underline{x}|^{2m+2}} \mathrm{d}{\underline{x}} +\frac{1}{4}\int \limits _{\mathbf{R}^m}\frac{-ms}{|s+\underline{x}|^{2m+2}} \mathrm{d}{\underline{x}}\\&= 0. \end{aligned}$$

The following theorem gives a similar result for \(\langle \underline{\xi }^2\rangle \).

Theorem 3.2

Assume \(f, \underline{D}f \in L^2(\mathbf{R}^m)\) with the decomposition \(f=f^++f^-, \widehat{f^{\pm }}=\chi _{\pm }\hat{f}.\) Then

$$\begin{aligned} \langle \underline{\xi }^2\rangle&= \int \limits _{\mathbf{R}^m}|\underline{D}f^{+}(\underline{x})|^2 \mathrm{d}\underline{x}+\int \limits _{\mathbf{R}^m}|\underline{D}f^{-}(\underline{x})|^2 \mathrm{d}\underline{x}\\&= \int \limits _{\mathbf{R}^m}|\underline{D}f^{+}(\underline{x})+\underline{D}f^{-}(\underline{x})|^2 \mathrm{d}\underline{x}\\&= \int \limits _{\mathbf{R}^m}|\underline{D}f(\underline{x})|^2 \mathrm{d}\underline{x}\\&= \int \limits _{\mathbf{R}^m}\left| [\underline{D}f^{+}(\underline{x})][f^{+}(\underline{x})]^{-1}\right| ^2| f^{+}(\underline{x})|^2 \mathrm{d}{\underline{x}} +\int \limits _{\mathbf{R}^m}\left| [\underline{D}f^{-}(\underline{x})][f^{-}(\underline{x})]^{-1}\right| ^2| f^{-}(\underline{x})|^2\mathrm{d}{\underline{x}}. \end{aligned}$$

Proof

Since \(f, \underline{D}f \in L^2(\mathbf{R}^m)\), clearly \(\underline{\xi }\hat{f}(\underline{\xi })\in L^2(\mathbf{R}^m)\) and \(|\underline{\xi }|^2|\hat{f}(\underline{\xi })|^2\in L^1(\mathbf{R}^m)\); hence \(\langle \underline{\xi }^2\rangle \) is well defined.

$$\begin{aligned} \langle \underline{\xi }^2\rangle&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}-{\underline{\xi }}^2|\hat{f}(\underline{\xi })|^2 \mathrm{d}\underline{\xi }\\&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|^2|\hat{f}(\underline{\xi })|^2\mathrm{d}\underline{\xi }\\&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|^2\chi _+(\underline{\xi })\hat{f}(\underline{\xi })\overline{\chi _+(\underline{\xi })\hat{f}(\underline{\xi })} \mathrm{d}\underline{\xi }+\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|^2\chi _-(\underline{\xi })\hat{f}(\underline{\xi })\overline{\chi _-(\underline{\xi })\hat{f}(\underline{\xi })}\mathrm{d}\underline{\xi }\\&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|^2|\widehat{f^{+}}(\underline{\xi })|^2\mathrm{d}\underline{\xi }+\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|^2|\widehat{f^{-}}(\underline{\xi })|^2\mathrm{d}\underline{\xi }\\&= \langle \underline{\xi }^2\rangle ^{+}+\langle \underline{\xi }^2\rangle ^{-}. \end{aligned}$$

Applying the properties of \(\chi _{\pm }\) and the Plancherel theorem, we have

$$\begin{aligned} \langle \underline{\xi }^2\rangle ^{\pm }&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|^2|\widehat{f^{\pm }}(\underline{\xi })|^2\mathrm{d}\underline{\xi }\\&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|\widehat{f^{\pm }}(\underline{\xi })\overline{|\underline{\xi }|\widehat{f^{\pm }}(\underline{\xi })}\mathrm{d}\underline{\xi }=\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|\chi _{\pm }(\underline{\xi })\hat{f}(\underline{\xi })\overline{|\underline{\xi }|\chi _{\pm }(\underline{\xi })\hat{f}(\underline{\xi })}\mathrm{d}\underline{\xi }\\&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}\mathbf{i}\underline{\xi }\chi _{\pm }(\underline{\xi })\hat{f}(\underline{\xi })\overline{\mathbf{i}\underline{\xi }\chi _{\pm }(\underline{\xi })\hat{f}(\underline{\xi })}\mathrm{d}\underline{\xi }\\&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}\widehat{\underline{D}f^{\pm }}(\underline{\xi })\overline{\widehat{\underline{D}f^{\pm }}(\underline{\xi })}\mathrm{d}\underline{\xi }=\int \limits _{\mathbf{R}^m}{\underline{D}f^{\pm }}(\underline{x})\overline{{\underline{D}f^{\pm }}(\underline{x})}\mathrm{d}\underline{x}\\&= \int \limits _{\mathbf{R}^m}|\underline{D}f^{\pm }(\underline{x})|^2\mathrm{d}\underline{x}=\int \limits _{\mathbf{R}^m}\left| [\underline{D}f^{\pm }(\underline{x})][f^{\pm }(\underline{x})]^{-1}\right| ^2 |f^{\pm }(\underline{x})|^2 \mathrm{d}{\underline{x}}. \end{aligned}$$

Therefore,

$$\begin{aligned} \langle \underline{\xi }^2\rangle&= \int \limits _{\mathbf{R}^m}|\underline{D}f^{+}(\underline{x})|^2\mathrm{d}\underline{x}+\int \limits _{\mathbf{R}^m}|\underline{D}f^{-}(\underline{x})|^2\mathrm{d}\underline{x}\\&= \int \limits _{\mathbf{R}^m}|\underline{D}f^{+}(\underline{x})+\underline{D}f^{-}(\underline{x})|^2\mathrm{d}\underline{x}=\int \limits _{\mathbf{R}^m}|\underline{D}f(\underline{x})|^2\mathrm{d}\underline{x}. \end{aligned}$$

Example 3.2

For the Poisson kernel as in Example 3.1, using the definition of \(\langle \underline{\xi }^2\rangle \), we have

$$\begin{aligned} \langle \underline{\xi }^2\rangle&= \frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|^2|\hat{f}(\underline{\xi })|^2\mathrm{d}\underline{\xi }=\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|^2\frac{\pi ^{m+1}}{\Gamma ^2(\frac{m+1}{2})}\hbox {e}^{-2s|\underline{\xi }|}\mathrm{d}\underline{\xi }\\&= \frac{\omega _{m-1}}{s^{m+2}}\frac{\pi \Gamma (m+2)}{2^{2m+2}\Gamma ^2\left( \frac{m+1}{2}\right) }. \end{aligned}$$

On the other hand,

$$\begin{aligned} \langle \underline{\xi }^2\rangle ^{+}+\langle \underline{\xi }^2\rangle ^{-}&= \int \limits _{\mathbf{R}^m}|\underline{D}f^{+}(\underline{x})|^2 \mathrm{d}\underline{x}+\int \limits _{\mathbf{R}^m}|\underline{D}f^{-}(\underline{x})|^2\mathrm{d}\underline{x}\\&= \int \limits _{\mathbf{R}^m}|[\underline{D}f^{+}(\underline{x})][f^{+}(\underline{x})]^{-1}|^2|f^{+}(\underline{x})|^2\mathrm{d}{\underline{x}}\\&\quad +\int \limits _{\mathbf{R}^m}|[\underline{D}f^{-}(\underline{x})][f^{-}(\underline{x})]^{-1}|^2|f^{-}(\underline{x})|^2\mathrm{d}{\underline{x}}\\&= \frac{1}{2}\int \limits _{\mathbf{R}^m}\frac{m^2s^2+|\underline{x}|^2}{|s+\underline{x}|^{2m+4}}\mathrm{d}\underline{x}\\&= \frac{1}{4}\frac{\omega _{m-1}}{s^{m+2}}\left[ \frac{m^2\Gamma (\frac{m}{2})\Gamma \left( \frac{m}{2}+2\right) +\Gamma ^2\left( \frac{m}{2}+1\right) }{\Gamma (m+2)}\right] \\&= \frac{\omega _{m-1}}{s^{m+2}}\frac{\pi \Gamma (m+2)}{2^{2m+2}\Gamma ^2\left( \frac{m+1}{2}\right) }\\&= \langle \underline{\xi }^2\rangle . \end{aligned}$$

Theorem 3.3

Assume \(f, \underline{D}f \in L^2(\mathbf{R}^m).\) With the decomposition \(f=f^++f^-, \widehat{f^{\pm }}=\chi _{\pm }\hat{f},\) we have

$$\begin{aligned} B^2&= \int \limits _{\mathbf{R}^m}\{\mathrm{{Sc}}[(\underline{D}f^+(\underline{x}))(f^+(\underline{x}))^{-1}]-\langle \underline{\xi }\rangle \}^2|f^+(\underline{x})|^2\mathrm{d}\underline{x}\\&\quad +\int \limits _{\mathbf{R}^m}\{\mathrm{{Sc}}[(\underline{D}f^-(\underline{x}))(f^-(\underline{x}))^{-1}]-\langle \underline{\xi }\rangle \}^2|f^-(\underline{x})|^2\mathrm{d}\underline{x}\\&\quad +\int \limits _{\mathbf{R}^m}\left| \mathrm{{Nsc}}[(\underline{D}f^+(\underline{x}))(f^+(\underline{x}))^{-1}]\right| ^2|f^+(\underline{x})|^2\mathrm{d}\underline{x}\\&\quad +\int \limits _{\mathbf{R}^m}\left| \mathrm{{Nsc}}[(\underline{D}f^-(\underline{x}))(f^-(\underline{x}))^{-1}]\right| ^2|f^-(\underline{x})|^2\mathrm{d}\underline{x}. \end{aligned}$$

Proof

Applying Theorems 3.1 and 3.2, we have

$$\begin{aligned} B^2&= \langle \underline{\xi }^2\rangle -\langle \underline{\xi }\rangle ^2\\&= \langle \underline{\xi }^2\rangle -2\langle \underline{\xi }\rangle [\langle \underline{\xi }\rangle ^{+}+\langle \underline{\xi }\rangle ^{-}]+ \langle \underline{\xi }\rangle ^2\int \limits _{\mathbf{R}^m}|f(\underline{x})|^2\mathrm{d}\underline{x}\\&= \int \limits _{\mathbf{R}^m}\left| [\underline{D}f^{+}(\underline{x})][f^{+}(\underline{x})]^{-1}\right| ^2|f^{+}(\underline{x})|^2\mathrm{d}{\underline{x}}\\&\quad +\int \limits _{\mathbf{R}^m}\left| \left[ \underline{D}f^{-}(\underline{x})\right] \left[ f^{-}(\underline{x})\right] ^{-1}\right| ^2|f^{-}(\underline{x})|^2\mathrm{d}{\underline{x}}\\&\quad -2\langle \underline{\xi }\rangle \int \limits _{\mathbf{R}^m}\mathrm{{Sc}}\left\{ [\underline{D}f^{+}(\underline{x})][f^{+}(\underline{x})]^{-1}\right\} |f^{+}(\underline{x})|^2\mathrm{d}{\underline{x}}\\&\quad -2\langle \underline{\xi }\rangle \int \limits _{\mathbf{R}^m}\mathrm{{Sc}}\left\{ [\underline{D}f^{-}(\underline{x})][f^{-}(\underline{x})]^{-1}\right\} |f^{-}(\underline{x})|^2\mathrm{d}{\underline{x}}\\&\quad +\int \limits _{\mathbf{R}^m}\langle \underline{\xi }\rangle ^2|f^+(\underline{x})|^2\mathrm{d}\underline{x}+\int \limits _{\mathbf{R}^m}\langle \underline{\xi }\rangle ^2|f^-(\underline{x})|^2\mathrm{d}\underline{x}. \end{aligned}$$

It is easy to see that

$$\begin{aligned}&\int \limits _{\mathbf{R}^m}\left| [\underline{D}f^{+}(\underline{x})][f^{+}(\underline{x})]^{-1}\right| ^2|f^{+}(\underline{x})|^2\mathrm{d}{\underline{x}} +\int \limits _{\mathbf{R}^m}\left| [\underline{D}f^{-}(\underline{x})][f^{-}(\underline{x})]^{-1}\right| ^2|f^{-}(\underline{x})|^2\mathrm{d}{\underline{x}}\\&\quad =\int \limits _{\mathbf{R}^m}\{\mathrm{{Sc}}[(\underline{D}f^+(\underline{x}))(f^+(\underline{x}))^{-1}]\}^2|f^+(\underline{x})|^2\mathrm{d}\underline{x}\\&\quad +\int \limits _{\mathbf{R}^m}\{\mathrm{{Sc}}[(\underline{D}f^-(\underline{x}))(f^-(\underline{x}))^{-1}]\}^2|f^-(\underline{x})|^2\mathrm{d}\underline{x}\\&\quad +\int \limits _{\mathbf{R}^m}\left| \mathrm{{Nsc}}[(\underline{D}f^+(\underline{x}))(f^+(\underline{x}))^{-1}]\right| ^2|f^+(\underline{x})|^2\mathrm{d}\underline{x}\\&\quad +\int \limits _{\mathbf{R}^m}\left| \mathrm{{Nsc}}[(\underline{D}f^-(\underline{x}))(f^-(\underline{x}))^{-1}]\right| ^2|f^-(\underline{x})|^2\mathrm{d}\underline{x}. \end{aligned}$$
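This rests on the pointwise splitting of a Clifford number into its scalar and non-scalar parts: writing \(q=\mathrm{{Sc}}(q)+\mathrm{{Nsc}}(q)\), the modulus convention used throughout gives

$$\begin{aligned} |q|^2=[\mathrm{{Sc}}(q)]^2+|\mathrm{{Nsc}}(q)|^2, \end{aligned}$$

applied pointwise with \(q=[\underline{D}f^{\pm }(\underline{x})][f^{\pm }(\underline{x})]^{-1}\).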

Therefore, we have

$$\begin{aligned} B^2&= \int \limits _{\mathbf{R}^m}\{\mathrm{{Sc}}[(\underline{D}f^+(\underline{x}))(f^+(\underline{x}))^{-1}]-\langle \underline{\xi }\rangle \}^2|f^+(\underline{x})|^2\mathrm{d}\underline{x}\\&\quad +\int \limits _{\mathbf{R}^m}\{\mathrm{{Sc}}[(\underline{D}f^-(\underline{x}))(f^-(\underline{x}))^{-1}]-\langle \underline{\xi }\rangle \}^2|f^-(\underline{x})|^2\mathrm{d}\underline{x}\\&\quad +\int \limits _{\mathbf{R}^m}\left| \mathrm{{Nsc}}[(\underline{D}f^+(\underline{x}))(f^+(\underline{x}))^{-1}]\right| ^2|f^+(\underline{x})|^2\mathrm{d}\underline{x}\\&\quad +\int \limits _{\mathbf{R}^m}\left| \mathrm{{Nsc}}[(\underline{D}f^-(\underline{x}))(f^-(\underline{x}))^{-1}]\right| ^2|f^-(\underline{x})|^2\mathrm{d}\underline{x}. \end{aligned}$$

The proof is complete.

Remark 3.2

If, in particular, \(\frac{Hf}{|Hf|}=\frac{\bar{\underline{x}}}{|\underline{x}|}\), then we have

$$\begin{aligned} \mathrm{{Nsc}}[(\underline{D}f^{\pm }(\underline{x}))(f^{\pm }(\underline{x}))^{-1}]=\frac{\underline{D}A(f^{\pm })}{A(f^{\pm })}+\frac{(m-1)\underline{x}}{|\underline{x}|^2}\sin ^2\theta (\underline{x}). \end{aligned}$$

When \(m=1\), \(B^2\) reduces to the classical case.

4 Uncertainty principle

Definition 4.1

Assume \(f(\underline{x})\in L^2(\mathbf{R}^m)\). Define the mean of the space variable \(\underline{x}\) by

$$\begin{aligned} \langle \underline{x}\rangle =\int \limits _{\mathbf{R}^m}\mathbf{i}\underline{x}|f(\underline{x})|^2\mathrm{d}\underline{x}, \end{aligned}$$

and the duration by

$$\begin{aligned} \sigma _{\underline{x}}^2=\int \limits _{\mathbf{R}^m}(\mathbf{i}\underline{x}-\langle \underline{x}\rangle )^2|f(\underline{x})|^2\mathrm{d}\underline{x}. \end{aligned}$$
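For orientation, the following is a minimal numerical sketch of Definition 4.1 in the case \(m=1\), dropping the Clifford unit \(\mathbf{i}\) and normalizing to unit energy explicitly; the grid and the Gaussian test signal are illustrative assumptions, not part of the theory.

```python
import numpy as np

# Discrete mean and duration of Definition 4.1 at m = 1 (real form,
# Clifford unit dropped, explicit unit-energy normalization).
def mean_and_duration(x, f):
    dx = x[1] - x[0]                        # uniform grid spacing
    energy = np.sum(np.abs(f)**2) * dx
    density = np.abs(f)**2 / energy         # unit-energy density
    mean = np.sum(x * density) * dx
    duration = np.sum((x - mean)**2 * density) * dx
    return mean, duration

x = np.linspace(-20.0, 20.0, 200001)
f = np.exp(-0.5 * (x - 1.0)**2)             # Gaussian centered at 1
mu, dur = mean_and_duration(x, f)
print(mu, dur)                              # ≈ 1.0, 0.5
```

For this Gaussian, \(|f|^2=\mathrm{e}^{-(x-1)^2}\) has mean \(1\) and variance \(\frac{1}{2}\), which the sketch reproduces.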

Similarly to Theorems 3.1 and 3.2, we have

Theorem 4.1

Assume \(f(\underline{x}), \ \underline{x}f(\underline{x})\in L^2(\mathbf{R}^m)\) with the decomposition \(\hat{f}(\underline{\xi })=\hat{f}^+(\underline{\xi })+\hat{f}^-(\underline{\xi })\), \({\hat{f}^{\pm }}(\underline{\xi })=\widehat{[\chi _{\mp }{f}]}(\underline{\xi }).\) Then the mean of the space variable \(\underline{x}\) is given by

$$\begin{aligned} \langle \underline{x}\rangle&= -\int \limits _{\mathbf{R}^m}\mathrm{{Sc}}\left\{ [\underline{D}\hat{f}^{+}(\underline{\xi })][\hat{f}^{+}(\underline{\xi })]^{-1}\right\} |\hat{f}^{+}(\underline{\xi })|^2\mathrm{d}{\underline{\xi }}\\&-\, \int \limits _{\mathbf{R}^m}\mathrm{{Sc}}\left\{ [\underline{D}\hat{f}^{-}(\underline{\xi })][\hat{f}^{-}(\underline{\xi })]^{-1}\right\} |\hat{f}^{-}(\underline{\xi })|^2\mathrm{d}{\underline{\xi }}. \end{aligned}$$

Theorem 4.2

Assume \(f(\underline{x}), \ \underline{x}f(\underline{x})\in L^2(\mathbf{R}^m)\) with the decomposition \(\hat{f}(\underline{\xi })=\hat{f}^+(\underline{\xi })+\hat{f}^-(\underline{\xi }),\) \( {\hat{f}^{\pm }}(\underline{\xi })=\widehat{[\chi _{\mp }{f}]}(\underline{\xi }).\) Then

$$\begin{aligned} \langle \underline{x}^2\rangle&= \int \limits _{\mathbf{R}^m}|\underline{D}\hat{f}^{+}(\underline{\xi })|^2\mathrm{d}\underline{\xi }+\int \limits _{\mathbf{R}^m}|\underline{D}\hat{f}^{-}(\underline{\xi })|^2\mathrm{d}\underline{\xi }\\&= \int \limits _{\mathbf{R}^m}|\underline{D}\hat{f}^{+}(\underline{\xi })+\underline{D}\hat{f}^{-}(\underline{\xi })|^2\mathrm{d}\underline{\xi }\\&= \int \limits _{\mathbf{R}^m}\left| [\underline{D}\hat{f}^{+}(\underline{\xi })][\hat{f}^{+}(\underline{\xi })]^{-1}\right| ^2|\hat{f}^{+}(\underline{\xi })|^2\mathrm{d}{\underline{\xi }} +\int \limits _{\mathbf{R}^m}\left| [\underline{D}\hat{f}^{-}(\underline{\xi })][\hat{f}^{-}(\underline{\xi })]^{-1}\right| ^2|\hat{f}^{-}(\underline{\xi })|^2\mathrm{d}{\underline{\xi }}. \end{aligned}$$

Next, we simplify the existing proof of the known uncertainty principle [10], as follows.

Theorem 4.3

For a real-valued signal \(f(\underline{x})\), if \(f(\underline{x}), \underline{D}f, \underline{x}f(\underline{x})\in L^2(\mathbf{R}^m)\), then \(\sigma _{\underline{x}}\sigma _{\underline{\xi }}\ge \frac{m}{2}\). Moreover, equality holds if and only if

$$\begin{aligned} f(\underline{x})=\hbox {e}^{-\frac{s}{2}(|\underline{x}|^2-2\mathbf{i}\langle \underline{x}\rangle \cdot \underline{x})}, \quad s>0. \end{aligned}$$

Proof

Since \(f\) is assumed to be a real-valued signal, we have \(\langle \underline{\xi }\rangle =0\) (see Remark 3.1). The bandwidth then reduces to

$$\begin{aligned} \sigma _{\underline{\xi }}^2=\frac{1}{(2\pi )^m}\int \limits _{\mathbf{R}^m}|\underline{\xi }|^2|\hat{f}(\underline{\xi })|^2\mathrm{d}\underline{\xi }. \end{aligned}$$

Recall that the duration is

$$\begin{aligned} \int \limits _{\mathbf{R}^m}(\mathbf{i}\underline{x}-\langle \underline{x}\rangle )^2|f(\underline{x})|^2\mathrm{d}\underline{x}=\int \limits _{\mathbf{R}^m}|\underline{x}+\mathbf{i}\langle \underline{x}\rangle |^2|f(\underline{x})|^2\mathrm{d}\underline{x}. \end{aligned}$$

From Theorem 3.2, and using Hölder's inequality, we have

$$\begin{aligned} \sigma _{\underline{x}}^2\sigma _{\underline{\xi }}^2&= \int \limits _{\mathbf{R}^m}|\underline{x}+\mathbf{i}\langle \underline{x}\rangle |^2|f(\underline{x})|^2\mathrm{d}\underline{x}\times \int \limits _{\mathbf{R}^m}|\underline{D}f(\underline{x})|^2\mathrm{d}\underline{x}\\&\ge \left| \int \limits _{\mathbf{R}^m}[\underline{D}f(\underline{x})]\overline{(\underline{x}+\mathbf{i}\langle \underline{x}\rangle ) f(\underline{x})}\mathrm{d}\underline{x}\right| ^2\\&= \left| \int \limits _{\mathbf{R}^m}[\underline{D}f(\underline{x})] f(\underline{x})(\overline{\underline{x}}-\mathbf{i}\langle \underline{x}\rangle )\mathrm{d}\underline{x}\right| ^2. \end{aligned}$$

It is easy to see that

$$\begin{aligned}{}[\underline{D}f(\underline{x})]f(\underline{x})(\overline{\underline{x}} -\mathbf{i}\langle \underline{x}\rangle )=\frac{1}{2}\underline{D}[f^2(\underline{x})(\bar{\underline{x}}-\mathbf{i}\langle \underline{x}\rangle )]-\frac{m}{2}f^2(\underline{x}). \end{aligned}$$
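For orientation, the \(m=1\) instance of this identity (with \(\langle \underline{x}\rangle =0\)) is the elementary product rule

$$\begin{aligned} f'(x)\,f(x)\,x=\frac{1}{2}\,\frac{\mathrm{d}}{\mathrm{d}x}\left[ f^2(x)\,x\right] -\frac{1}{2}\,f^2(x). \end{aligned}$$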

On the other hand,

$$\begin{aligned} \int \limits _{\mathbf{R}^m}\underline{D}[f^2(\underline{x})(\overline{\underline{x}}-\mathbf{i}\langle \underline{x}\rangle )]\mathrm{d}\underline{x}= \mathbf{i}\underline{\xi }[f^2(\underline{x})(\bar{\underline{x}}-\mathbf{i}\langle \underline{x}\rangle )\hat{]}(\underline{\xi })|_{\underline{\xi }=0}= 0. \end{aligned}$$
(4.1)

Therefore,

$$\begin{aligned} \sigma _{\underline{x}}^2\sigma _{\underline{\xi }}^2&\ge \left| \int \limits _{\mathbf{R}^m}[\underline{D}f(\underline{x})]f(\underline{x})(\overline{\underline{x}}-\mathbf{i}\langle \underline{x}\rangle )\mathrm{d}\underline{x}\right| ^2\\&= \left| -\frac{m}{2}\int \limits _{\mathbf{R}^m}f^2(\underline{x})\mathrm{d}\underline{x}\right| ^2\\&= \left( \frac{m}{2}\right) ^2. \end{aligned}$$

The last step uses the unit energy assumption of \(f(\underline{x})\).

The use of Hölder's inequality implies that equality holds if and only if \((\underline{x}+\mathbf{i}\langle \underline{x}\rangle )f(\underline{x})\) and \(\underline{D}f(\underline{x})\) are linearly dependent, which yields \(f(\underline{x})=\mathrm{e}^{-\frac{s}{2}(|\underline{x}|^2-2\mathbf{i}\langle \underline{x}\rangle \cdot \underline{x})}\). The proof is complete.
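As a numerical sanity check (a sketch for \(m=1\), \(s=1\), \(\langle \underline{x}\rangle =0\); the grid parameters are arbitrary choices), the Gaussian extremal attains the bound \(\sigma _{\underline{x}}\sigma _{\underline{\xi }}=\frac{1}{2}\), with the bandwidth computed through \(\Vert \underline{D}f\Vert ^2/\Vert f\Vert ^2\) as in Theorem 3.2:

```python
import numpy as np

# m = 1, s = 1, <x> = 0: the extremal f(x) = exp(-x^2/2) should attain
# sigma_x * sigma_xi = m/2 = 1/2.
x = np.linspace(-15.0, 15.0, 300001)
dx = x[1] - x[0]
f = np.exp(-0.5 * x**2)

energy = np.sum(f**2) * dx
sigma_x2 = np.sum(x**2 * f**2) * dx / energy        # duration (mean is 0)

# Bandwidth via the derivative form ||f'||^2 / ||f||^2 (m = 1 case)
fp = np.gradient(f, dx)
sigma_xi2 = np.sum(fp**2) * dx / energy

print(np.sqrt(sigma_x2 * sigma_xi2))                # ≈ 0.5
```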

Example 4.1

For the Poisson kernel \(f(\underline{x})=\frac{s}{|s+\underline{x}|^{m+1}}\), by direct computation, we have \(\sigma _{\underline{x}}\sigma _{\underline{\xi }}=\frac{\sqrt{m(m+1)}}{2}> \frac{m}{2}\).
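This value can be checked numerically for \(m=1\), \(s=1\) (a sketch only; it uses \(f(x)=\frac{1}{1+x^2}\) and its classical Fourier transform \(\hat{f}(\xi )=\pi \mathrm{e}^{-|\xi |}\), normalizing by the energy so the unit-energy assumption is not needed):

```python
import numpy as np

# Poisson kernel at m = 1, s = 1: f(x) = 1/(1 + x^2).
# Example 4.1 predicts sigma_x * sigma_xi = sqrt(m(m+1))/2 = sqrt(2)/2.
x = np.linspace(-400.0, 400.0, 2_000_001)
dx = x[1] - x[0]
f = 1.0 / (1.0 + x**2)

E = np.sum(f**2) * dx
sigma_x2 = np.sum(x**2 * f**2) * dx / E             # duration (mean is 0)

# Frequency side: the classical Fourier transform is pi * exp(-|xi|)
xi = np.linspace(-60.0, 60.0, 600001)
dxi = xi[1] - xi[0]
fhat = np.pi * np.exp(-np.abs(xi))
Ef = np.sum(fhat**2) * dxi
sigma_xi2 = np.sum(xi**2 * fhat**2) * dxi / Ef      # bandwidth (mean is 0)

print(np.sqrt(sigma_x2 * sigma_xi2))                # ≈ 0.707 > 0.5
```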

Remark 4.1

The definition of the phase derivative and the above results are also valid for signals of the axial form \(f(\underline{x})=U(r)+\frac{\bar{\underline{x}}}{r}V(r)=\rho (r) \mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi (r)}\) with \(r=|\underline{x}|\). Here \(\rho (r)=\sqrt{U^2(r)+V^2(r)}\) and \(\phi (r)=\arctan {\frac{V(r)}{U(r)}}\). If an axial \(f(\underline{x})\) is also a monogenic signal, we call it an axial monogenic signal.

Lemma 4.1

$$\begin{aligned} \mathrm{{Nsc}}\left\{ [\underline{D}\mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi (r)}]\mathrm{e}^{-\frac{\bar{\underline{x}}}{r}\phi (r)}\right\} \bar{\underline{x}} =(m-1)\sin ^2\phi (r). \end{aligned}$$

Proof

By direct computation, we have

$$\begin{aligned}&[\underline{D}\mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi (r)}]\mathrm{e}^{-\frac{\bar{\underline{x}}}{r}\phi (r)}\\&\quad =\underline{D}\left[ \cos \phi (r)+\frac{\bar{\underline{x}}}{r}\sin \phi (r)\right] \left[ \cos \phi (r)-\frac{\bar{\underline{x}}}{r}\sin \phi (r)\right] \\&\quad =\left[ -\sin \phi (r)\underline{D}\phi (r)+(\underline{D}\frac{\bar{\underline{x}}}{r})\sin \phi (r)+\cos \phi (\underline{D} \phi (r))\frac{\bar{\underline{x}}}{r}\right] \left[ \cos \phi (r)-\frac{\bar{\underline{x}}}{r}\sin \phi (r)\right] \\&\quad =\left[ -\sin \phi (r)\underline{D}\phi (r)+\frac{m-1}{r}\sin \phi (r) +\cos \phi (\underline{D}\phi (r))\frac{\bar{\underline{x}}}{r}\right] \left[ \cos \phi (r)-\frac{\bar{\underline{x}}}{r}\sin \phi (r)\right] \\&\quad =-\sin \phi \cos \phi \underline{D}\phi (r)+\sin ^2\phi \underline{D}\phi (r)\frac{\bar{\underline{x}}}{r}+\frac{m-1}{r}\sin \phi \cos \phi \\&\quad \quad -\frac{(m-1)\bar{\underline{x}}}{r^2}\sin ^2\phi +\underline{D}\phi (r)\frac{\bar{\underline{x}}}{r}\cos ^2\phi +\underline{D}\phi (r)\sin \phi \cos \phi \\&\quad =\sin ^2\phi \,\phi '(r)+\frac{m-1}{r}\sin \phi \cos \phi -\frac{(m-1)\bar{\underline{x}}}{r^2}\sin ^2\phi . \end{aligned}$$

Then

$$\begin{aligned}\mathrm{{Nsc}}\left\{ [\underline{D}\mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi (r)}]\mathrm{e}^{-\frac{\bar{\underline{x}}}{r}\phi (r)}\right\} \bar{\underline{x}} =-\frac{(m-1)\bar{\underline{x}}}{r^2}\sin ^2\phi \bar{\underline{x}}=(m-1)\sin ^2\phi (r). \end{aligned}$$

The proof is complete.

For axial monogenic signals, we obtain an improved uncertainty principle involving the phase derivative as follows.

Theorem 4.4

Let \(f(\underline{x})\) be an axial monogenic signal with the form \(U(r)+\frac{\bar{\underline{x}}}{r}V(r).\) If \(f(\underline{x}), \ \underline{D}f\) and \(\underline{x}f(\underline{x})\in L^2(\mathbf{R}^m)\), then \(\sigma _{\underline{x}}\sigma _{\underline{\xi }}\ge \sqrt{[-\frac{m}{2}+(m-1)\int \nolimits _{\mathbf{R}^m}V^2\mathrm{d}\underline{x}]^2+\mathrm{Cov}_{\underline{\xi }\underline{x}}^2}\), where

$$\begin{aligned} \mathrm{Cov}_{\underline{\xi }\underline{x}}=\langle \underline{x}\mathrm{{Sc}}[(\underline{D}f)f^{-1}]\rangle -\langle \underline{x}\rangle \langle \underline{\xi }\rangle , \end{aligned}$$
$$\begin{aligned} \langle \underline{x}\mathrm{{Sc}}[(\underline{D}f)f^{-1}]\rangle =\int \limits _{\mathbf{R}^m}\mathbf{i}\underline{x}\mathrm{{Sc}}{[(\underline{D}f)f^{-1}]}|f(\underline{x})|^2\mathrm{d}\underline{x}. \end{aligned}$$

Proof

Without loss of generality, we may assume \(\langle \underline{x}\rangle =0, \langle \underline{\xi }\rangle =0\). As in the proof of Theorem 4.3, we have

$$\begin{aligned} \sigma _{\underline{x}}^2\sigma _{\underline{\xi }}^2&\ge \left| \int \limits _{\mathbf{R}^m}[\underline{D}f(\underline{x})]\overline{f(\underline{x})}\overline{\underline{x}}\mathrm{d}\underline{x}\right| ^2. \end{aligned}$$

If we write \(f(\underline{x})\) in the polar form \(\rho \mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi }\), we have

$$\begin{aligned}&[\underline{D}f(\underline{x})]\overline{f(\underline{x})}\overline{\underline{x}}\\&\quad =\left[ \underline{D}(\rho \mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi })\right] \rho \mathrm{e}^{-\frac{\bar{\underline{x}}}{r}\phi }\overline{\underline{x}}\\&\quad =\left[ (\underline{D}\rho ) \mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi }+\rho (\underline{D}\mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi })\right] \rho \mathrm{e}^{-\frac{\bar{\underline{x}}}{r}\phi }\overline{\underline{x}}\\&\quad =(\underline{D}\rho )\rho \overline{\underline{x}}+\rho ^2[(\underline{D}\mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi })\mathrm{e}^{-\frac{\bar{\underline{x}}}{r}\phi }]\overline{\underline{x}}\\&\quad =\frac{1}{2}\underline{D}[\rho ^2\bar{\underline{x}}]-\frac{m}{2}\rho ^2+\rho ^2 [\underline{D}\mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi (r)}]\mathrm{e}^{-\frac{\bar{\underline{x}}}{r}\phi (r)}\overline{\underline{x}}. \end{aligned}$$

For the same reason as in the proof of Theorem 4.3, we have

$$\begin{aligned} \int \limits _{\mathbf{R}^m}\underline{D}[\rho ^2\overline{\underline{x}}]\mathrm{d}\underline{x}=0. \end{aligned}$$

Then

$$\begin{aligned} \int \limits _{\mathbf{R}^m}[\underline{D}f(\underline{x})]\overline{f(\underline{x})}\overline{\underline{x}}\mathrm{d}\underline{x}&= -\frac{m}{2}\int \limits _{\mathbf{R}^m}\rho ^2\mathrm{d}\underline{x}+ \int \limits _{\mathbf{R}^m}\rho ^2 \left[ \underline{D}\mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi (r)}\right] \mathrm{e}^{-\frac{\bar{\underline{x}}}{r}\phi (r)}\bar{\underline{x}}\mathrm{d}\underline{x}\\&= -\frac{m}{2}+\int \limits _{\mathbf{R}^m}\rho ^2\mathrm{{Sc}}{\left[ (\underline{D}f) f^{-1}\right] }\bar{\underline{x}}\mathrm{d}\underline{x}\\&\quad +\int \limits _{\mathbf{R}^m}\rho ^2\mathrm{{Nsc}}\left\{ [\underline{D}\mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi (r)}]\mathrm{e}^{-\frac{\bar{\underline{x}}}{r}\phi (r)}\right\} \bar{\underline{x}}\mathrm{d}\underline{x}. \end{aligned}$$

Remark 2.2 is used in the second equality. Applying Lemma 4.1, we have

$$\begin{aligned}&\int \limits _{\mathbf{R}^m}\rho ^2\mathrm{{Nsc}}\left\{ [\underline{D}\mathrm{e}^{\frac{\bar{\underline{x}}}{r}\phi (r)}]\mathrm{e}^{-\frac{\bar{\underline{x}}}{r}\phi (r)}\right\} \bar{\underline{x}}\mathrm{d}\underline{x}\\&\quad =(m-1)\int \limits _{\mathbf{R}^m}\rho ^2\sin ^2\phi \mathrm{d}\underline{x}\\&\quad =(m-1)\int \limits _{\mathbf{R}^m}V^2(r)\mathrm{d}\underline{x}. \end{aligned}$$

This implies

$$\begin{aligned} \sigma _{\underline{x}}^2\sigma _{\underline{\xi }}^2&\ge \left| \int \limits _{\mathbf{R}^m}[\underline{D}f(\underline{x})]\overline{f(\underline{x})}\overline{\underline{x}}\mathrm{d}\underline{x}\right| ^2\\&= \left| -\frac{m}{2}+(m-1)\int \limits _{\mathbf{R}^m}V^2\mathrm{d}\underline{x}+\mathbf{i}\int \limits _{\mathbf{R}^m}\mathbf{i}\underline{x}\mathrm{{Sc}}{[(\underline{D}f)f^{-1}]}|f(\underline{x})|^2\mathrm{d}\underline{x}\right| ^2\\&= \left| -\frac{m}{2}+(m-1)\int \limits _{\mathbf{R}^m}V^2\mathrm{d}\underline{x}+\mathbf{i}\mathrm{{Cov}}_{\underline{\xi }\underline{x}}\right| ^2. \end{aligned}$$

The proof is complete.

Remark 4.2

When \(m=1\), Theorem 4.4 reduces to the classical uncertainty principle.
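Indeed, at \(m=1\) the term \((m-1)\int \nolimits _{\mathbf{R}^m}V^2\mathrm{d}\underline{x}\) vanishes, and the bound of Theorem 4.4 becomes

$$\begin{aligned} \sigma _{\underline{x}}\sigma _{\underline{\xi }}\ge \sqrt{\frac{1}{4}+\mathrm{{Cov}}_{\underline{\xi }\underline{x}}^2}=\frac{1}{2}\sqrt{1+4\mathrm{{Cov}}_{\underline{\xi }\underline{x}}^2}, \end{aligned}$$

the covariance-refined form of the classical Heisenberg inequality.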

For the axial monogenic signal \(f(\underline{x})=U(r)+\frac{\bar{\underline{x}}}{r}V(r)\), we have \(\frac{\bar{\underline{x}}}{r}V(r)=HU(r)\). Therefore,

$$\begin{aligned} \int \limits _{\mathbf{R}^m}U^2(r)\mathrm{d}\underline{x}=\int \limits _{\mathbf{R}^m}V^2(r)\mathrm{d}\underline{x}=\frac{1}{2}\Vert f\Vert ^2=\frac{1}{2}. \end{aligned}$$

Then we have

Corollary 4.1

Let \(f(\underline{x})\) be an axial monogenic signal. If \(f(\underline{x}),\ \underline{D}f, \ \underline{x}f(\underline{x})\in L^2(\mathbf{R}^m)\), then

$$\begin{aligned} \sigma _{\underline{x}}\sigma _{\underline{\xi }}\ge \frac{1}{2} \sqrt{1+4\mathrm{{Cov}}_{\underline{\xi }\underline{x}}^2}. \end{aligned}$$