1 Introduction

For many decades, simple electronic circuits have been used in connection with classical and, more recently, quantum mechanics, since they provide concrete devices in which some physical effect can be observed, or modeled. A well-known example of such a map between electronics and classical mechanics is given by any RLC-circuit (RLCc), which is dynamically equivalent to a damped harmonic oscillator (DHO). This is because the time evolution of the charge in an RLCc, i.e., a circuit with an inductance, a resistance, and a capacitor in series, is driven by exactly the same equation of motion as that of a DHO, with the dissipative effect of the resistance playing the role of the friction, see, e.g., [1, 2] and references therein. In [3,4,5,6], among others, a quantum version of the DHO, and therefore of the RLCc, has been considered both for purely mathematical interest and in view of possible applications. In particular, due to its particularly simple form, a deep comprehension of the DHO/RLCc is surely a first step toward a better understanding of the correct quantization procedure for generic dissipative systems, see [7, 8], which are so relevant in several physical contexts.
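As a quick check of this mapping, one can verify symbolically that the series RLC equation \(L\ddot{q}+R\dot{q}+q/C=0\) turns into the DHO equation under the standard identification \(L\leftrightarrow m\), \(R\leftrightarrow \gamma \), \(1/C\leftrightarrow k\). The following sympy sketch (not part of the original analysis) does exactly this:

```python
import sympy as sp

t = sp.symbols('t', real=True)
m, gamma, k = sp.symbols('m gamma k', positive=True)
L, R, C = sp.symbols('L R C', positive=True)

x = sp.Function('x')(t)   # oscillator coordinate
q = sp.Function('q')(t)   # charge on the capacitor

dho = m*x.diff(t, 2) + gamma*x.diff(t) + k*x   # m x'' + gamma x' + k x
rlc = L*q.diff(t, 2) + R*q.diff(t) + q/C       # L q'' + R q' + q/C

# the dictionary L->m, R->gamma, 1/C->k maps one equation onto the other
mapped = rlc.subs(q, x).subs({L: m, R: gamma, C: 1/k})
assert sp.simplify(mapped - dho) == 0
```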

A similar interest underlies other papers connected with related problems, [9,10,11]: in all these papers, electronic circuits are analyzed, either in connection with PT-quantum mechanics and exceptional points, [12], or because their quantization produces interesting results, both for its physical consequences and for its many mathematical consequences.

For instance, some interesting mathematics appears when quantizing a DHO, and diagonalizing its Bateman Hamiltonian, [13], by means of ladder operators. In particular, in [14, 15] the authors claimed they could construct two biorthonormal bases of square integrable eigenfunctions for the Hamiltonian of the DHO. However, as shown in [16, 17], this claim was wrong, since in particular the vacua of the lowering operators needed in their construction are not square integrable functions, but Dirac delta distributions. Hence, distributions are more relevant than functions in this context. And, in fact, this was not the first appearance of distributions, and of the Dirac delta in particular, in the analysis of some physical system. For instance, they are the main object of research when dealing rigorously with quantum fields, [18]. More recently, distributions have proved to be the natural tool to use when looking for the (generalized) eigenstates of certain Hamiltonians, mostly connected with what we have called weak pseudo-bosons, see, e.g., [19,20,21] for a recent monograph on this and related topics.

In this paper, we show how an extended version of the Bateman Lagrangian can be introduced and studied, and the role that distributions play in this analysis. More in detail, we start by replacing the original pair of uncoupled oscillators described by Bateman with different pairs of possibly interacting oscillators, one of which is still damped (hence, a loss system), while the other is amplified (so that it can be interpreted as a gain system). The Bateman system can be recovered as a special case of our settings. The main physical result of our analysis is that we will be able to quantize the system and to find the eigenvalues and the eigenvectors of its related Hamiltonian H. However, these eigenvectors turn out not to be functions. Indeed, they are distributions and, as such, they require some extra mathematical care to be properly considered, also in view of their relations with the eigenvectors of \(H^\dagger \), the adjoint of H. In particular, biorthonormality of these two families of eigenvectors (Footnote 1) should be defined, since no ordinary scalar product can be naturally introduced in our analysis. This aspect is discussed in many concrete situations in the literature, [12, 21], but mostly in a Hilbert space setting. Here, Hilbert spaces are not enough. For this reason, to analyze biorthonormality of these vectors, we need to consider a new concept of scalar product, which extends the usual one in \({{{\mathcal {L}}}^2(\mathbb {R})}\). This is possibly the most interesting (mathematical) result of this paper. This definition will be applied to our specific system, and indeed a kind of biorthonormality will be deduced. Surprisingly enough, the notion of Abel summation will be quite relevant in this analysis.

The paper is organized as follows: in Sect. 2, we review some previous results on the DHO, to introduce the notation and to stress some essential aspects of what has already been discussed in the literature. This preliminary analysis is also useful because the extended system considered in this paper reduces to that one after a suitable change of variables. In Sect. 3, we introduce our gain-loss linear circuit, and we show how to introduce ladder operators in its description. As already stated, this forces us to deal with distributions. This motivates our analysis in Sect. 4, where we propose a new definition of multiplication between distributions, and we deduce some of its properties. In Sect. 5, we then show how this multiplication can be used in the analysis of our specific gain-loss system. Section 6 contains our conclusions. To keep the paper self-contained, we list a series of definitions and properties of pseudo-bosonic ladder operators in “Appendix A,” while in “Appendix B” we prove some useful identities used in Sect. 5. Finally, “Appendix C” contains some further technical results.

2 Preliminaries

We devote this section to a brief review of what was discussed in [16, 17]. This is relevant in view of what follows, since we will show that the system introduced in Sect. 3 can be rewritten as the one we are going to consider here.

The classical equation for the DHO is \(m\ddot{x}+\gamma \dot{x}+kx=0\), in which \(m,\gamma \) and k are the positive physical quantities of the oscillator: the mass, the friction coefficient, and the spring constant (Footnote 2). The Bateman Lagrangian, [13], is

$$\begin{aligned} L_0=m\dot{x}\dot{y}+\frac{\gamma }{2}(x\dot{y}-\dot{x}y)-kxy, \end{aligned}$$
(2.1)

which, besides the previous equation, also produces \(m\ddot{y}-\gamma \dot{y}+ky=0\), the differential equation associated with the virtual amplified harmonic oscillator (AHO), see also [24]. The conjugate momenta are

$$\begin{aligned} p_x=\frac{\partial L_0}{\partial \dot{x}}=m\dot{y}-\frac{\gamma }{2}\,y,\qquad p_y=\frac{\partial L_0}{\partial \dot{y}}=m\dot{x}+\frac{\gamma }{2}\,x, \end{aligned}$$

and the corresponding classical Hamiltonian is

$$\begin{aligned} H_0=p_x\dot{x}+p_y \dot{y}-L_0=\frac{1}{m} p_xp_y+\frac{\gamma }{2m}\left( yp_y-xp_x\right) +\left( k-\frac{\gamma ^2}{4m}\right) xy. \end{aligned}$$
(2.2)
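The Legendre transform leading to (2.2) can be reproduced with sympy, treating the velocities as symbols and solving for them in terms of the momenta. This is only a sanity check of the formula above:

```python
import sympy as sp

m, gamma, k = sp.symbols('m gamma k', positive=True)
x, y, xd, yd, px, py = sp.symbols('x y xdot ydot p_x p_y')

L0 = m*xd*yd + (gamma/2)*(x*yd - xd*y) - k*x*y

# conjugate momenta p_x = dL0/dxdot, p_y = dL0/dydot, solved for the velocities
sol = sp.solve([sp.Eq(px, sp.diff(L0, xd)), sp.Eq(py, sp.diff(L0, yd))], [xd, yd])

H0 = (px*sol[xd] + py*sol[yd] - L0.subs(sol)).expand()
target = (px*py/m + (gamma/(2*m))*(y*py - x*px) + (k - gamma**2/(4*m))*x*y).expand()
assert sp.simplify(H0 - target) == 0
```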

By introducing the new variables \(x_1\) and \(x_2\) through

$$\begin{aligned} x=\frac{1}{\sqrt{2}}(x_1+x_2), \qquad y=\frac{1}{\sqrt{2}}(x_1-x_2), \end{aligned}$$
(2.3)

\(L_0\) and \(H_0\) can be written as follows:

$$\begin{aligned} L_0=\frac{m}{2}(\dot{x}_1^2-\dot{x}_2^2)+\frac{\gamma }{2}(x_2\dot{x}_1-x_1\dot{x}_2)-\frac{k}{2}(x_1^2-x_2^2) \end{aligned}$$

and

$$\begin{aligned} H_0=\frac{1}{2m}\left( p_1-\frac{\gamma }{2}x_2\right) ^2-\frac{1}{2m}\left( p_2+\frac{\gamma }{2}x_1\right) ^2+\frac{k}{2}(x_1^2-x_2^2), \end{aligned}$$

where \(p_1=\frac{\partial L_0}{\partial \dot{x}_1}=m\dot{x}_1+\frac{\gamma }{2}\,x_2\) and \(p_2=\frac{\partial L_0}{\partial \dot{x}_2}=-m\dot{x}_2-\frac{\gamma }{2}\,x_1\). By putting \(\omega ^2=\frac{k}{m}\,-\frac{\gamma ^2}{4m^2}\), we can rewrite \(H_0\) as follows:

$$\begin{aligned} H_0=\left( \frac{1}{2m}p_1^2+\frac{1}{2}m\omega ^2x_1^2\right) -\left( \frac{1}{2m}p_2^2+\frac{1}{2}m\omega ^2x_2^2\right) -\frac{\gamma }{2m}(p_1x_2+p_2x_1). \end{aligned}$$
(2.4)
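A similar symbolic check, now starting from \(L_0\) in the variables \(x_1,x_2\), confirms the expression (2.4), with the momenta \(p_j=\partial L_0/\partial \dot{x}_j\) computed automatically:

```python
import sympy as sp

m, gamma, k = sp.symbols('m gamma k', positive=True)
x1, x2, x1d, x2d, p1, p2 = sp.symbols('x1 x2 x1dot x2dot p1 p2')

L0 = sp.Rational(1, 2)*m*(x1d**2 - x2d**2) + (gamma/2)*(x2*x1d - x1*x2d) \
     - sp.Rational(1, 2)*k*(x1**2 - x2**2)

# p_j = dL0/dx_j-dot, solved for the velocities
sol = sp.solve([sp.Eq(p1, sp.diff(L0, x1d)), sp.Eq(p2, sp.diff(L0, x2d))], [x1d, x2d])
H0 = (p1*sol[x1d] + p2*sol[x2d] - L0.subs(sol)).expand()

w2 = k/m - gamma**2/(4*m**2)   # omega^2
target = (p1**2/(2*m) + m*w2*x1**2/2) - (p2**2/(2*m) + m*w2*x2**2/2) \
         - (gamma/(2*m))*(p1*x2 + p2*x1)
assert sp.simplify(H0 - target.expand()) == 0
```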

We will here only consider \(\omega ^2>0\). The case \(\omega ^2\le 0\) has been briefly considered in [16].

Following [14], we impose the canonical quantization rules between \(x_j\) and \(p_k\): \([x_j,p_k]=i\delta _{j,k}1 \!\! 1\), working in units where \(\hbar =1\). Here \(1 \!\! 1\) is the identity operator. This is equivalent to the choice in [24]. Ladder operators can now be easily introduced:

$$\begin{aligned} a_k=\sqrt{\frac{m\omega }{2}}\,x_k+i\sqrt{\frac{1}{2m\omega }}\,p_k, \end{aligned}$$
(2.5)

\(k=1,2\). These are bosonic operators, since they satisfy the canonical commutation rules: \([a_j,a^\dagger _k]=\delta _{j,k}1 \!\! 1\). Furthermore, they are densely defined, since they act on any Schwartz test function. In particular, \([a_j,a^\dagger _k]\varphi (x)=\delta _{j,k}\varphi (x)\), for all \(\varphi (x)\in {{\mathcal {S}}}(\mathbb {R})\). It might be useful to recall that \({{\mathcal {S}}}(\mathbb {R})\) is the set of all \(C^\infty \) functions which decrease, together with their derivatives, faster than any inverse power of x, [22]:

$$\begin{aligned} {{\mathcal {S}}}({\mathbb {R}})= \left\{ g(x)\in C^\infty : \lim _{|x|\rightarrow \infty } {|x|^k g^{(l)} (x)}=0 \quad \forall k,l \in {\mathbb {N}_0}\right\} , \end{aligned}$$

where \(\mathbb {N}_0=\mathbb {N}\cup \{0\}\).
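On \({{\mathcal {S}}}(\mathbb {R})\), the operators (2.5) act as first-order differential operators, since \(p=-i\,\textrm{d}/\textrm{d}x\). A minimal sympy sketch (with \(m\omega \) kept as an arbitrary positive symbol) verifies \([a,a^\dagger ]\varphi =\varphi \) on a concrete Schwartz function:

```python
import sympy as sp

x = sp.symbols('x', real=True)
mw = sp.symbols('momega', positive=True)   # the product m*omega

def a(f):      # a = sqrt(m*omega/2) x + (1/sqrt(2 m*omega)) d/dx, since p = -i d/dx
    return sp.sqrt(mw/2)*x*f + sp.sqrt(1/(2*mw))*sp.diff(f, x)

def adag(f):   # a^dagger = sqrt(m*omega/2) x - (1/sqrt(2 m*omega)) d/dx
    return sp.sqrt(mw/2)*x*f - sp.sqrt(1/(2*mw))*sp.diff(f, x)

phi = (1 + x**3)*sp.exp(-x**2)             # a Schwartz test function
comm = sp.simplify(a(adag(phi)) - adag(a(phi)))
assert sp.simplify(comm - phi) == 0        # [a, a^dagger] phi = phi
```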

In terms of these operators, the quantum version of the Hamiltonian \(H_0\) in (2.4) can be written as

$$\begin{aligned} H_0=\omega \left( a_1^\dagger a_1-a_2^\dagger a_2\right) +\frac{i\gamma }{2m}\left( a_1a_2-a_1^\dagger a_2^\dagger \right) . \end{aligned}$$
(2.6)

Following again [14], we introduce the operators:

$$\begin{aligned} A_1=\frac{1}{\sqrt{2}}(a_1-a_2^\dagger ), \quad A_2=\frac{1}{\sqrt{2}}(-a_1^\dagger +a_2), \end{aligned}$$
(2.7)

as well as

$$\begin{aligned} B_1=\frac{1}{\sqrt{2}}(a_1^\dagger +a_2), \quad B_2=\frac{1}{\sqrt{2}}(a_1+a_2^\dagger ). \end{aligned}$$
(2.8)

They satisfy the following requirements:

$$\begin{aligned}{}[A_j,B_k]\varphi (x)=\delta _{j,k}\varphi (x), \end{aligned}$$
(2.9)

\(\forall \varphi (x)\in {{\mathcal {S}}}(\mathbb {R})\). We observe that \(B_j\ne A_j^\dagger \), \(j=1,2\). Moreover, \(A_1=-A_2^\dagger \) and \(B_1=B_2^\dagger \). It might be useful to stress that the map in (2.7)–(2.8) is reversible, since \(a_j\) and \(a_j^\dagger \) can be recovered out of \(A_j\) and \(B_j\).

In [21, 23], operators of this kind, named pseudo-bosonic, were analyzed in detail, producing several interesting results mainly connected with their nature of ladder operators.

In terms of these operators, \(H_0\) can now be written as follows:

$$\begin{aligned} H_0=\omega \left( B_1A_1-B_2A_2\right) +\frac{i\gamma }{2m}\left( B_1A_1+B_2A_2+1 \!\! 1\right) , \end{aligned}$$
(2.10)

which only depends on the pseudo-bosonic number operators \(N_j=B_jA_j\), [23]. This is exactly the same Hamiltonian found in [14], and it is equivalent to that given in [5, 24] and in many other papers on this subject. In [16], we proved the following result, stating that the pseudo-bosonic lowering operators \(A_1\), \(A_2\), \(B_1^\dagger \) and \(B_2^\dagger \) do not admit square-integrable vacua.
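Both the rules (2.9) and the equivalence of (2.6) and (2.10) can be tested numerically by truncating the two-mode Fock space. The truncation (the value of N below is illustrative, as are the parameter values) only spoils the canonical commutation relations near the cutoff, so the checks are performed on a low-lying state:

```python
import numpy as np

N = 12                                       # Fock-space truncation (illustrative)
omega, gamma, m = 1.3, 0.7, 2.0              # sample positive parameters
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # one-mode annihilation matrix
I = np.eye(N)
a1, a2 = np.kron(a, I), np.kron(I, a)        # a1 = a x 1, a2 = 1 x a
ad1, ad2 = a1.T, a2.T                        # adjoints (the matrices are real)

A1 = (a1 - ad2)/np.sqrt(2); A2 = (-ad1 + a2)/np.sqrt(2)   # (2.7)
B1 = (ad1 + a2)/np.sqrt(2); B2 = (a1 + ad2)/np.sqrt(2)    # (2.8)

comm = lambda X, Y: X @ Y - Y @ X
v = np.zeros(N*N); v[2*N + 3] = 1.0          # |n1=2, n2=3>, far from the cutoff

# (2.9): [A_j, B_k] acts as delta_{jk} on low-lying states
assert np.allclose(comm(A1, B1) @ v, v) and np.allclose(comm(A2, B2) @ v, v)
assert np.allclose(comm(A1, B2) @ v, 0) and np.allclose(comm(A2, B1) @ v, 0)
assert np.allclose(A1, -A2.T) and np.allclose(B1, B2.T)   # B_j != A_j^dag, but these hold

# (2.6) and (2.10) agree away from the truncation edge
Id = np.eye(N*N)
H_26 = omega*(ad1@a1 - ad2@a2) + 1j*gamma/(2*m)*(a1@a2 - ad1@ad2)
H_210 = omega*(B1@A1 - B2@A2) + 1j*gamma/(2*m)*(B1@A1 + B2@A2 + Id)
assert np.allclose(H_26 @ v, H_210 @ v)
```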

Proposition 1

There is no nonzero function \(\varphi _{00}(x_1,x_2)\) satisfying

$$\begin{aligned} A_1\varphi _{00}(x_1,x_2)=A_2\varphi _{00}(x_1,x_2)=0. \end{aligned}$$

Also, there is no nonzero function \(\psi _{00}(x_1,x_2)\) satisfying

$$\begin{aligned} B_1^\dagger \psi _{00}(x_1,x_2)=B_2^\dagger \psi _{00}(x_1,x_2)=0. \end{aligned}$$

We refer to [16, 17] for further results on these problems. In particular, it was shown that the vacua of \(A_j\) and \(B_j^\dagger \), \(j=1,2\), are, respectively, \(\varphi _{00}(x_1,x_2)=\alpha \delta (x_1-x_2)\) and \(\psi _{00}(x_1,x_2)=\beta \delta (x_1+x_2)\): they are not functions, but distributions, with \(\alpha \) and \(\beta \) playing the role of normalization constants. Here we just want to stress that, in these latter papers, our analysis stopped at this level because our interest was mainly focused on proving that it was not possible to find square-integrable eigenfunctions of \(H_0\), contrary to what was claimed in [14, 15]. What is more interesting for us, now, is the possibility to answer the following questions:

  • Is it possible to replace the pair DHO-AHO with some more general system sharing similar properties, and a similar quantization procedure?

  • Is it possible to construct a set of biorthonormal-like distributions out of \(\varphi _{00}(x_1,x_2)\) and \(\psi _{00}(x_1,x_2)\) using the pseudo-bosonic raising operators as described in “Appendix A”?

We will see that both questions can be answered in the affirmative, and in particular that the second one forces us to introduce an interesting mathematical tool, which can be used to define a class of multiplications between distributions.

3 Our system

In this section, we consider the first question raised above, proposing a simple classical Lagrangian which generalizes the one in (2.1), and which describes, in view of its interpretation as a gain-loss system, a pair of coupled DHO and AHO. The idea is very simple: we just add to \(L_0\) in (2.1) another term, \(L_1\), which is again quadratic in the variables x and y. With a proper choice of \(L_1\), and of the parameters of the system, we will be able to describe a new pair of coupled oscillators and to quantize the system along the same lines as in Sect. 2, facing similar problems.

Let us consider

$$\begin{aligned} L_1=A(m\dot{x}^2-kx^2)+B(m\dot{y}^2-ky^2) \end{aligned}$$
(3.1)

and

$$\begin{aligned} L=L_0+L_1=m\dot{x}\dot{y}+\frac{\gamma }{2}(x\dot{y}-\dot{x}y)-kxy+A(m\dot{x}^2-kx^2)+B(m\dot{y}^2-ky^2). \end{aligned}$$
(3.2)

Here A and B are constants whose values will be constrained later.

Remark

The Lagrangian in (3.2) is a particular case of a more general choice \(L=L_0+L_1\), with \(L_1=f(y,\dot{y})+g(x,\dot{x})\). Not surprisingly, also in this general case, if we write the Hamiltonian \(H=p_x\dot{x}+p_y\dot{y}-L\), and we compute its time derivative, we get \(\dot{H}=0\), which can be interpreted as some kind of energy conservation for the coupled system.
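The conservation claim of the Remark, here for the specific choice (3.1), and the equations of motion (3.3) that follow from L, can be verified with sympy:

```python
import sympy as sp

t = sp.symbols('t')
m, k, gamma, A, B = sp.symbols('m k gamma A B', real=True)
x, y = sp.Function('x')(t), sp.Function('y')(t)
xd, yd = x.diff(t), y.diff(t)

L = m*xd*yd + (gamma/2)*(x*yd - xd*y) - k*x*y \
    + A*(m*xd**2 - k*x**2) + B*(m*yd**2 - k*y**2)        # (3.2)

# Euler-Lagrange equations: these reproduce (3.3) after rearrangement
el_x = (sp.diff(L, xd).diff(t) - sp.diff(L, x)).expand()
el_y = (sp.diff(L, yd).diff(t) - sp.diff(L, y)).expand()
eq1 = m*x.diff(t, 2) + gamma*xd + k*x + 2*B*(m*y.diff(t, 2) + k*y)
eq2 = m*y.diff(t, 2) - gamma*yd + k*y + 2*A*(m*x.diff(t, 2) + k*x)
assert sp.simplify(el_y - eq1) == 0 and sp.simplify(el_x - eq2) == 0

# On shell, H = p_x xdot + p_y ydot - L is conserved: dH/dt = 0
acc = sp.solve([el_x, el_y], [x.diff(t, 2), y.diff(t, 2)])
H = xd*sp.diff(L, xd) + yd*sp.diff(L, yd) - L
assert sp.simplify(H.diff(t).subs(acc)) == 0
```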

From L in (3.2), we get the following set of coupled differential equations:

$$\begin{aligned} \left\{ \begin{array}{ll} m\ddot{x}+\gamma \dot{x}+kx=-2B\left( m\ddot{y}+ky\right) \\ m\ddot{y}-\gamma \dot{y}+ky=-2A\left( m\ddot{x}+kx\right) ,\\ \end{array} \right. \end{aligned}$$
(3.3)

which can also be rewritten, after some minor manipulations, as

$$\begin{aligned} \left\{ \begin{array}{ll} m'\ddot{x}+\gamma \dot{x}+k'x=-2B\gamma \dot{y}\\ m'\ddot{y}-\gamma \dot{y}+k'y=2A\gamma \dot{x},\\ \end{array} \right. \end{aligned}$$
(3.4)

where \(m'=m(1-4AB)\) and \(k'=k(1-4AB)\), which are both positive if \(AB<\frac{1}{4}\). It is now possible to rewrite L in a form which is quite close to \(L_0\) in (2.1), with a change of variable \((x,y)\rightarrow (X_1,Y_1)\):

$$\begin{aligned} \left\{ \begin{array}{ll} x=\alpha _x X_1+\beta _x Y_1,\\ y=\alpha _y X_1+\beta _y Y_1,\\ \end{array} \right. \end{aligned}$$
(3.5)

where \(\alpha _x\), \(\alpha _y\), \(\beta _x\) and \(\beta _y\) must satisfy the condition \(\alpha _x\beta _y-\beta _x\alpha _y\ne 0\), in order to have an invertible transformation. From now on, we take

$$\begin{aligned} \alpha _x=-\,\frac{\alpha _y}{2A}\left( 1-\sqrt{1-4AB}\right) , \qquad \beta _x=-\,\frac{\beta _y}{2A}\left( 1+\sqrt{1-4AB}\right) , \end{aligned}$$
(3.6)

so that \(\alpha _x\beta _y-\beta _x\alpha _y=\frac{\alpha _y\beta _y}{A}\sqrt{1-4AB}\), which is different from zero if \(\alpha _y,\beta _y\ne 0\), under our constraint on AB. After some manipulation, we get that

$$\begin{aligned} L= m_1\dot{X}_1\dot{Y}_1+\frac{\gamma _1}{2}(X_1\dot{Y}_1-\dot{X}_1Y_1)-k_1X_1Y_1, \end{aligned}$$
(3.7)

where we have introduced

$$\begin{aligned} m_1=\frac{m\alpha _y\beta _y}{A}(4AB-1), \quad k_1=\frac{k\alpha _y\beta _y}{A}(4AB-1),\quad \gamma _1=\frac{\gamma \alpha _y\beta _y}{A}\sqrt{1-4AB}. \end{aligned}$$
(3.8)

Recalling that \(4AB-1<0\), it is clear that \(k_1,m_1>0\) only if \(\frac{\alpha _y\beta _y}{A}<0\), which is what we will assume from now on. However, under this condition, it follows that \(\gamma _1=-|\gamma _1|<0\). We rewrite L in (3.7) as

$$\begin{aligned} L= m_1\dot{X}_1\dot{Y}_1+\frac{|\gamma _1|}{2}(Y_1\dot{X}_1-\dot{Y}_1X_1)-k_1X_1Y_1. \end{aligned}$$
(3.9)

This Lagrangian describes again a coupled DHO-AHO as the original one in (2.1), where \(Y_1\) is the coordinate of the DHO (\(Y_1\rightleftarrows x\)), while \(X_1\) is that of the AHO (\(X_1\rightleftarrows y\)). Hence, we can repeat the same steps as in Sect. 2, and in particular quantize the system and diagonalize the Hamiltonian in terms of pseudo-bosonic operators. Formula (2.3) is replaced here by

$$\begin{aligned} Y_1=\frac{1}{\sqrt{2}}(x_1+x_2), \qquad X_1=\frac{1}{\sqrt{2}}(x_1-x_2). \end{aligned}$$

Then, introducing \(p_1\) and \(p_2\) as before, \(p_j=\frac{\partial L}{\partial \dot{x}_j}\), we get \(p_1=m_1\dot{x}_1+\frac{|\gamma _1|}{2}\,x_2\) and \(p_2=-m_1\dot{x}_2-\frac{|\gamma _1|}{2}\,x_1\), and the classical Hamiltonian \(H_0\) in (2.4) is now replaced by

$$\begin{aligned} H=\left( \frac{1}{2m_1}p_1^2+\frac{1}{2}m_1\omega _1^2x_1^2\right) -\left( \frac{1}{2m_1}p_2^2+\frac{1}{2}m_1\omega _1^2x_2^2\right) -\frac{|\gamma _1|}{2m_1}(p_1x_2+p_2x_1), \end{aligned}$$
(3.10)

where \(\omega _1^2=\frac{k_1}{m_1}\,-\frac{|\gamma _1|^2}{4m_1^2}=\frac{k}{m}\,-\frac{\gamma ^2}{4m^2(1-4AB)}\), which we assume here to be positive.
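All the steps above, from the substitution (3.5)-(3.6) to the coefficients (3.8) and to the expression for \(\omega _1^2\), can be verified symbolically. The following sympy sketch mirrors the notation of the text:

```python
import sympy as sp

m, k, gamma = sp.symbols('m k gamma', positive=True)
A, B, ay, by = sp.symbols('A B alpha_y beta_y', real=True, nonzero=True)
X1, Y1, X1d, Y1d = sp.symbols('X1 Y1 X1dot Y1dot')

r = sp.sqrt(1 - 4*A*B)
ax = -(ay/(2*A))*(1 - r)             # alpha_x, from (3.6)
bx = -(by/(2*A))*(1 + r)             # beta_x

x, y = ax*X1 + bx*Y1, ay*X1 + by*Y1  # the change of variables (3.5)
xd, yd = ax*X1d + bx*Y1d, ay*X1d + by*Y1d

L = m*xd*yd + (gamma/2)*(x*yd - xd*y) - k*x*y \
    + A*(m*xd**2 - k*x**2) + B*(m*yd**2 - k*y**2)

m1 = m*ay*by*(4*A*B - 1)/A           # (3.8)
k1 = k*ay*by*(4*A*B - 1)/A
g1 = gamma*ay*by*r/A
target = m1*X1d*Y1d + (g1/2)*(X1*Y1d - X1d*Y1) - k1*X1*Y1   # (3.7)
assert sp.simplify(sp.expand(L - target)) == 0

# omega_1^2 = k1/m1 - gamma_1^2/(4 m1^2) = k/m - gamma^2/(4 m^2 (1-4AB))
w1sq = k1/m1 - g1**2/(4*m1**2)
assert sp.simplify(w1sq - (k/m - gamma**2/(4*m**2*(1 - 4*A*B)))) == 0
```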

Next we quantize the system, requiring that \([x_j,p_k]=i\delta _{j,k}1 \!\! 1\), and we introduce the bosonic operators

$$\begin{aligned} a_k=\sqrt{\frac{m_1\omega _1}{2}}\,x_k+i\sqrt{\frac{1}{2m_1\omega _1}}\,p_k, \end{aligned}$$
(3.11)

\(k=1,2\), and their combinations

$$\begin{aligned} A_1=\frac{1}{\sqrt{2}}(a_1-a_2^\dagger ), \quad A_2=\frac{1}{\sqrt{2}}(-a_1^\dagger +a_2),\quad B_1=\frac{1}{\sqrt{2}}(a_1^\dagger +a_2), \quad B_2=\frac{1}{\sqrt{2}}(a_1+a_2^\dagger ). \end{aligned}$$
(3.12)

These operators satisfy, see (2.9), the commutation rule

$$\begin{aligned}{}[A_j,B_k]\varphi (x)=\delta _{j,k}\varphi (x), \end{aligned}$$
(3.13)

\(\forall \varphi (x)\in {{\mathcal {S}}}(\mathbb {R})\), as well as the other properties stated in Sect. 2. An essential consequence is that H is diagonal in these operators,

$$\begin{aligned} H=\omega _1\left( B_1A_1-B_2A_2\right) +\frac{i|\gamma _1|}{2m_1}\left( B_1A_1+B_2A_2+1 \!\! 1\right) , \end{aligned}$$
(3.14)

and Proposition 1 applies. In particular, the vacua of \(A_j\) and \(B_j^\dagger \), \(j=1,2\), are, respectively, \(\varphi _{00}(x_1,x_2)=\alpha \delta (x_1-x_2)\) and \(\psi _{00}(x_1,x_2)=\beta \delta (x_1+x_2)\).
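That these distributions are indeed annihilated by \(A_1\) and \(B_1^\dagger \) can be checked weakly, i.e., by pairing with a test function: multiplication operators act directly, while derivatives hit the test function with a minus sign. A sympy sketch (with an illustrative value for \(m_1\omega _1\) and a particular Gaussian test function) gives zero in both cases:

```python
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t', real=True)
mw = sp.Rational(3, 2)                  # illustrative value of m1*omega1
c1, c2 = sp.sqrt(mw/2), sp.sqrt(1/(2*mw))

f = sp.exp(-x1**2 - x2**2 + x1)         # a Schwartz test function

def on_diag(g):                         # delta(x1-x2)[g] = int g(t, t) dt
    return sp.integrate(g.subs({x1: t, x2: t}), (t, -sp.oo, sp.oo))

def on_antidiag(g):                     # delta(x1+x2)[g] = int g(t, -t) dt
    return sp.integrate(g.subs({x1: t, x2: -t}), (t, -sp.oo, sp.oo))

# A1 = (1/sqrt(2))[c1 (x1-x2) + c2 (d1+d2)]: weak action on delta(x1-x2)
A1_vac = (c1*on_diag((x1 - x2)*f)
          - c2*on_diag(sp.diff(f, x1) + sp.diff(f, x2)))/sp.sqrt(2)
# B1^dagger = (1/sqrt(2))[c1 (x1+x2) + c2 (d1-d2)]: weak action on delta(x1+x2)
B1d_vac = (c1*on_antidiag((x1 + x2)*f)
           - c2*on_antidiag(sp.diff(f, x1) - sp.diff(f, x2)))/sp.sqrt(2)

assert sp.simplify(A1_vac) == 0 and sp.simplify(B1d_vac) == 0
```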

Going back to our first question at the end of Sect. 2, we have seen here that it is indeed possible to construct more general systems (Footnote 3) which, after a certain change of variables, turn out not to be different from the pair of oscillators described by the Bateman Lagrangian. Next, because of the role distributions play in our analysis, we present a mathematical interlude on a possible definition of a class of multiplications between distributions. We should maybe stress that, in fact, the content of Sect. 4 is (in our opinion) the most relevant mathematical result of this paper.

4 Multiplication of distributions

In [22], a possible way to introduce a multiplication between distributions was discussed. It is based on the simple fact that the scalar product between two good functions f(x) and g(x), for instance, \(f(x),g(x)\in {{\mathcal {S}}}(\mathbb {R})\), can be written in terms of a convolution between \(\overline{f(x)}\) and \(\tilde{g}(x)=g(-x)\): \(\langle f,g\rangle =(\overline{f}* \tilde{g})(0)\). Hence, it is natural to define the scalar product between two elements \(F(x), G(x)\in {{\mathcal {S}}}'(\mathbb {R})\) as the following convolution:

$$\begin{aligned} \langle F,G\rangle =(\overline{F}* \tilde{G})(0), \end{aligned}$$
(4.1)

whenever this convolution exists, which is not always true. Notice that, in order to compute \(\langle F,G\rangle \), it is first necessary to compute \((\overline{F}* \tilde{G})[f]\), \(f(x)\in {{\mathcal {S}}}(\mathbb {R})\), and this can be done by using the equality \((\overline{F}* \tilde{G})[f]=\langle F,G*f\rangle \) which, again, is not always well defined. It is maybe useful to stress that \((\overline{F}* \tilde{G})[f]\) represents here the action of \((\overline{F}* \tilde{G})(x)\) on the function f(x).
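For ordinary functions, (4.1) reduces to the usual scalar product. A quick numerical check, with two arbitrary real Gaussian-type functions, illustrates this:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2)*(1 + x)          # two real "good" functions
g = lambda x: np.exp(-(x - 0.5)**2)

# ordinary scalar product <f, g> (f real, so conjugation is trivial)
inner, _ = quad(lambda x: f(x)*g(x), -np.inf, np.inf)

# (fbar * gtilde)(0) with gtilde(x) = g(-x): int fbar(y) gtilde(0-y) dy
conv_at_0, _ = quad(lambda y: f(y)*g(-(0 - y)), -np.inf, np.inf)
assert abs(inner - conv_at_0) < 1e-10
```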

This approach has been used in some concrete situations in recent years, mainly to check if the generalized eigenstates of some non self-adjoint operator \({\hat{H}}\) are biorthonormal (with respect to this generalized product) to those of \({\hat{H}}^\dagger \). Some results in this direction can be found in [19,20,21].

However, this approach does not seem flexible enough to also cover the situation discussed in this paper, i.e., to deal with the set of weak eigenvectors of the Hamiltonian of the system in Sect. 3. This is because, among other difficulties, it is very hard to properly handle the domains of the various operators involved in this analysis, and also because it is quite complicated to perform explicit computations even in simple cases. For this reason, we now introduce a different multiplication between distributions, and we analyze some of its properties. In Sect. 5, we will show how this new definition works for our gain-loss system. To be general, we work here with \({{{\mathcal {L}}}^2(\mathbb {R}^d)}\), \(d\ge 1\). First of all, we introduce an orthonormal, total, set of vectors in \({{{\mathcal {L}}}^2(\mathbb {R}^d)}\):

$$\begin{aligned} {{\mathcal {F}}}_e=\{e_{\underline{n}}(\underline{x})\in {{\mathcal {S}}}(\mathbb {R}^d), \quad {\underline{n}}=(n_1,n_2,\ldots ,n_d) \}, \end{aligned}$$
(4.2)

where each \(n_j=0,1,2,3,\ldots \). For instance, if \(d=1\), the set \({{\mathcal {F}}}_e\) could be the set of the eigenstates of the quantum harmonic oscillator. If \(d\ge 2\), \({{\mathcal {F}}}_e\) can be constructed as tensor product of these 1-d functions, and so on. Due to the nature of \({{\mathcal {F}}}_e\), for all \(f(x), g(x)\in {{{\mathcal {L}}}^2(\mathbb {R}^d)}\), we have that

$$\begin{aligned} \langle f , g\rangle =\sum _{{\underline{n}}}\langle f , e_{\underline{n}}\rangle \langle e_{\underline{n}} , g\rangle =\sum _{{\underline{n}}} \overline{f}[e_{\underline{n}}]\,g[\overline{e_{\underline{n}}}]=\sum _{{\underline{n}}} \overline{f}[e_{\underline{n}}]\,g[{e_{\underline{n}}}], \end{aligned}$$

if we further assume that each \(e_{\underline{n}}(\underline{x})\) is real, for simplicity. We are using the following notation:

$$\begin{aligned} \overline{h}[c]=\int \limits _{\mathbb {R}^d} \overline{h({\underline{x}})}\,c({\underline{x}})\, \textrm{d}{\underline{x}}=\langle h , c\rangle . \end{aligned}$$
(4.3)

In the Parseval identity above, the particular choice of \({{\mathcal {F}}}_e\) is not relevant, as long as \(f(x), g(x)\in {{{\mathcal {L}}}^2(\mathbb {R}^d)}\). Now, the fact that \(e_{\underline{n}}({\underline{x}})\in {{\mathcal {S}}}(\mathbb {R}^d)\) implies that, for all \(K({\underline{x}})\in {{\mathcal {S}}}'(\mathbb {R}^d)\), the set of tempered distributions, [22], the following quantity is well defined:

$$\begin{aligned} \overline{K}[e_{\underline{n}}]=\langle K , e_{\underline{n}}\rangle . \end{aligned}$$
(4.4)
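For \(d=1\), taking for \({{\mathcal {F}}}_e\) the Hermite functions, both the Parseval identity and the coefficients (4.4) can be computed numerically. The following sketch (truncation and tolerance are illustrative) recovers \(\langle f,g\rangle \) from the series:

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import Hermite
from scipy.integrate import quad

def e(n):
    """n-th Hermite function: real, orthonormal in L2(R), and in S(R)."""
    c = 1.0/sqrt(2.0**n * factorial(n) * sqrt(pi))
    H = Hermite([0]*n + [1])                  # physicists' Hermite polynomial H_n
    return lambda x: c*H(x)*np.exp(-x**2/2)

f = lambda x: (1 + x)*np.exp(-x**2)           # real f, g in L2(R)
g = lambda x: np.exp(-(x - 0.3)**2)

pair = lambda u, v: quad(lambda x: u(x)*v(x), -12, 12, limit=200)[0]
parseval = sum(pair(f, e(n))*pair(e(n), g) for n in range(40))
assert abs(parseval - pair(f, g)) < 1e-5
```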

What might exist, or not, is the following sum

$$\begin{aligned} \langle F , G\rangle _e=\sum _{{\underline{n}}}\overline{F}[e_{\underline{n}}]G[e_{\underline{n}}], \end{aligned}$$
(4.5)

where \(F({\underline{x}}), G({\underline{x}})\in {{\mathcal {S}}}'(\mathbb {R}^d)\). This suggests the following:

Definition 2

Two tempered distributions \(F({\underline{x}}), G({\underline{x}})\in {{\mathcal {S}}}'(\mathbb {R}^d)\) are \({{\mathcal {F}}}_e\)-multiplicable if the series in (4.5) converges.

What we have discussed before implies that all the square integrable functions are mutually \({{\mathcal {F}}}_e\)-multiplicable, and the result is independent of the specific choice of \({{\mathcal {F}}}_e\). This means that Definition 2 makes sense on a large set of tempered distributions, all those defined by ordinary square-integrable functions. Moreover, at least for these functions, (4.5) and (4.1) coincide. We will show in the next section that \(\langle F , G\rangle _e\) is also well defined in other cases. However, for generic elements of \({{\mathcal {S}}}'(\mathbb {R})\), it is not granted a priori that \(\langle F , G\rangle _e\) is independent of the choice of \({{\mathcal {F}}}_e\).
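A simple one-dimensional example beyond \({{{\mathcal {L}}}^2(\mathbb {R})}\) is \(F=\delta (x-a)\), which is \({{\mathcal {F}}}_e\)-multiplicable (Hermite basis) with any Schwartz function G: here \(\overline{F}[e_{\underline{n}}]=e_{\underline{n}}(a)\), and the series (4.5) converges to G(a), as the following numerical sketch (illustrative truncation) suggests:

```python
import numpy as np
from math import factorial, pi, sqrt
from numpy.polynomial.hermite import Hermite
from scipy.integrate import quad

def e(n):                                      # real Hermite functions, e_n in S(R)
    c = 1.0/sqrt(2.0**n * factorial(n) * sqrt(pi))
    H = Hermite([0]*n + [1])
    return lambda x: c*H(x)*np.exp(-x**2/2)

a = 0.7
g = lambda x: (2 + np.sin(x))*np.exp(-x**2)    # a Schwartz function

# F = delta(x-a) is a tempered distribution, not in L2(R); Fbar[e_n] = e_n(a)
coeff = lambda n: quad(lambda x: g(x)*e(n)(x), -12, 12, limit=200)[0]
series = sum(e(n)(a)*coeff(n) for n in range(60))
assert abs(series - g(a)) < 1e-5
```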

The following results are natural extensions of the properties of any ordinary scalar product to \(\langle . , .\rangle _e\).

Result \(\sharp 1\): If \(F({\underline{x}}), G({\underline{x}})\in {{\mathcal {S}}}'(\mathbb {R}^d)\) are such that \(\langle F , G\rangle _e\) exists, then also \(\langle G , F\rangle _e\) exists and

$$\begin{aligned} \langle F , G\rangle _e=\overline{\langle G , F\rangle _e}. \end{aligned}$$
(4.6)

Indeed, we have, recalling that the series in (4.5) converges,

$$\begin{aligned} \overline{\langle F , G\rangle _e}=\sum _{{\underline{n}}}\overline{\langle F , e_{\underline{n}}\rangle }\,\overline{\langle e_{\underline{n}} , G\rangle }=\sum _{{\underline{n}}}\langle G , e_{\underline{n}}\rangle \langle e_{\underline{n}} , F\rangle =\langle G , F\rangle _e, \end{aligned}$$

which in particular implies that \(\langle G , F\rangle _e\) exists, too.

Result \(\sharp 2\): If \(F({\underline{x}}), G({\underline{x}}), L({\underline{x}})\in {{\mathcal {S}}}'(\mathbb {R}^d)\) are such that \(\langle F , G\rangle _e\) and \(\langle F , L\rangle _e\) exist, then also \(\langle F , \alpha G+\beta L\rangle _e\) exists, for all \(\alpha ,\beta \in \mathbb {C}\), and

$$\begin{aligned} \langle F , \alpha G+\beta L\rangle _e=\alpha \,\langle F , G\rangle _e+\beta \langle F , L\rangle _e. \end{aligned}$$
(4.7)

Then the \({{\mathcal {F}}}_e\)-multiplication is linear in the second variable. The proof is trivial and will not be given here. Of course, the \({{\mathcal {F}}}_e\)-multiplication is anti-linear in the first variable.

Result \(\sharp 3\): If \(F({\underline{x}})\in {{\mathcal {S}}}'(\mathbb {R}^d)\) is such that \(\langle F , F\rangle _e\) exists, then \(\langle F , F\rangle _e\ge 0\). In particular, if \(\langle F , F\rangle _e=0\), then \(F[f]=0\) for all \(f(x)\in {{\mathcal {L}}}_e\), the linear span of the \(e_{\underline{n}}({\underline{x}})\)’s.

In fact, from (4.5) we have

$$\begin{aligned} \langle F , F\rangle _e=\sum _{{\underline{n}}}|F[e_{\underline{n}}]|^2, \end{aligned}$$

which is never negative. Moreover, \(\langle F , F\rangle _e=0\) if and only if \(F[e_{\underline{n}}]=0\) for all \({\underline{n}}\), which, because of the linearity of F, implies our claim.

Remark

It is not possible to conclude that \(F=0\), even if \(\langle F , F\rangle _e=0\). The reason is that, to conclude that \(F=0\), we should check that \(F[g]=0\) for all \(g({\underline{x}})\in {{\mathcal {S}}}(\mathbb {R}^d)\). Now, since \({{\mathcal {S}}}(\mathbb {R}^d)\subset {{{\mathcal {L}}}^2(\mathbb {R}^d)}\), it is clear that \(g({\underline{x}})=\Vert \cdot \Vert -\lim _{N_1,\ldots ,N_d\rightarrow \infty }\sum _{{\underline{n}}=\underline{0}}^{\underline{N}}\langle e_{\underline{n}} , g\rangle \,e_{\underline{n}}({\underline{x}})\), where \(\underline{N}=(N_1,N_2,\ldots ,N_d)\), with each \(N_j<\infty \), \(\underline{0}=(0,0,\ldots ,0)\), and the convergence is in the norm of \({{{\mathcal {L}}}^2(\mathbb {R}^d)}\). But this convergence does not imply that the same sequence converges in the topology \(\tau _{{\mathcal {S}}}\) of \({{\mathcal {S}}}(\mathbb {R}^d)\). Hence, the continuity of F is not sufficient to conclude that

$$\begin{aligned} F[g]=\lim F\left[ \sum _{{\underline{n}}=\underline{0}}^{\underline{N}}\langle e_{\underline{n}} , g\rangle \,e_{\underline{n}}({\underline{x}})\right] =0. \end{aligned}$$

It might be interesting to observe that for square-integrable functions f(x) and g(x) the adjoint of any bounded operator X satisfies the equality \(\langle X^\dagger f , g\rangle _e=\langle f , Xg\rangle _e\). This is a consequence of the analogous relation for \(\langle . , .\rangle \) and of the identity \(\langle f , g\rangle =\langle f , g\rangle _e\), true \(\forall f({\underline{x}}),g({\underline{x}})\in {{{\mathcal {L}}}^2(\mathbb {R}^d)}\). However, this is no longer granted for \(F({\underline{x}}), G({\underline{x}})\in {{\mathcal {S}}}'(\mathbb {R}^d)\). In fact, even if F and G are \({{\mathcal {F}}}_e\)-multiplicable, and if \(X^\dagger F, XG\in {{\mathcal {S}}}'(\mathbb {R}^d)\), there is no general reason for \(\langle X^\dagger F , G\rangle _e\) and \(\langle F , XG\rangle _e\) to exist, and to be equal. This is, in our opinion, one of the many points of the \({{\mathcal {F}}}_e\) multiplication which deserves a deeper investigation. Another relevant aspect of this multiplication concerns the optimal choice of \({{\mathcal {F}}}_e\), if any. We will comment on this particular aspect later on.

5 Orthogonality of eigenstates

In Sect. 3, we have deduced the vacua of the pseudo-bosonic lowering operators \(A_j\) and \(B_j^\dagger \). We will now use the standard pseudo-bosonic strategy, in its weak form, see [19,20,21], to construct a set of distributions which are the (generalized) eigenstates of H in (3.14) and of its adjoint. In what follows, we will use what we have discussed in Sect. 4, focusing on the case \(d=2\).

In analogy with (A.2), after finding the vacua, the second step in our construction consists in using the raising pseudo-bosonic operators to construct, out of the vacua, two families of vectors. In particular, we put

$$\begin{aligned} \varphi _{n_1,n_2}(x_1,x_2)=\frac{1}{\sqrt{n_1!\,n_2!}}B_1^{n_1}B_2^{n_2}\varphi _{0,0}(x_1,x_2), \end{aligned}$$
(5.1)

and

$$\begin{aligned} \psi _{n_1,n_2}(x_1,x_2)=\frac{1}{\sqrt{n_1!\,n_2!}}(A_1^\dagger )^{n_1}(A_2^\dagger )^{n_2}\psi _{0,0}(x_1,x_2), \end{aligned}$$
(5.2)

where \(n_1,n_2=0,1,2,3,\ldots \). As in Sect. 4, we will often use the notation \({\underline{n}}=(n_1,n_2)\) and \({\underline{x}}=(x_1,x_2)\). These vectors are, clearly, not square-integrable functions: they are tempered distributions, as is already clear from their expressions for \({\underline{n}}=(0,0)\). Because of formulas (3.11), (3.12), and the fact that \(p_k=-i\frac{\partial }{\partial x_k}\), \(k=1,2\), it follows that \(\varphi _{n_1,n_2}(x_1,x_2)\) and \(\psi _{n_1,n_2}(x_1,x_2)\) are deduced from the vacua by acting on them with weak derivatives and multiplication operators, all operations mapping \({{\mathcal {S}}}'(\mathbb {R}^2)\) into itself. This implies that the two sets \({{\mathcal {F}}}_\varphi =\{\varphi _{{\underline{n}}}({\underline{x}}), \, n_1,n_2\ge 0\}\) and \({{\mathcal {F}}}_\psi =\{\psi _{{\underline{n}}}({\underline{x}}), \, n_1,n_2\ge 0\}\) are both sets of tempered distributions: \(\varphi _{{\underline{n}}}({\underline{x}}),\psi _{{\underline{n}}}({\underline{x}})\in {{\mathcal {S}}}'(\mathbb {R}^2)\), \(\forall {\underline{n}}\). We would now like to check whether, and in which sense, we can recover for these vectors the analogue of formula (A.4). In other words, we would like to understand whether (and, again, in which sense) \({{\mathcal {F}}}_\varphi \) and \({{\mathcal {F}}}_\psi \) are biorthonormal families of vectors.

Remark

It is not hard to check that (4.1) can be used to define an extended scalar product between \(\varphi _{{\underline{0}}}({\underline{x}})\) and \(\psi _{{\underline{0}}}({\underline{x}})\), and to check that \(\langle \varphi _{{\underline{0}}} , \psi _{{\underline{0}}}\rangle =1\). However, this same definition does not allow any simple computation of the other scalar products \(\langle \varphi _{{\underline{n}}} , \psi _{{\underline{m}}}\rangle \), in general, and this is the main reason why we prefer to adopt here the definition of the \({{\mathcal {F}}}_e\)-multiplication proposed in Sect. 4, see Definition 2. This analysis has an important side effect, since it allows us to check, in a rather concrete situation, that the definition of \(\langle . , .\rangle _e\) also works outside \({{{\mathcal {L}}}^2(\mathbb {R}^d)}\). Hence, \(\langle . , .\rangle _e\) can really be seen as an extension of the ordinary scalar product in \({{{\mathcal {L}}}^2(\mathbb {R}^d)}\).

Let \({\mathcal {G}}_e=\{e_n(x), \,n\ge 0\}\) be the usual orthonormal basis of eigenfunctions of the harmonic oscillator,

$$\begin{aligned} {\tilde{H}}_0=\frac{p^2}{2m_1}+\frac{1}{2}\,m_1\omega _1^2x^2, \end{aligned}$$

where \(m_1\) and \(\omega _1\) are those introduced in Sect. 3. It is well known that \(e_n(x)\in {{\mathcal {S}}}(\mathbb {R})\) for all \(n\ge 0\). Then, we consider

$$\begin{aligned} e_{\underline{n}}({\underline{x}})=e_{n_1}(x_1)e_{n_2}(x_2), \end{aligned}$$
(5.3)

and \({{\mathcal {F}}}_e=\{e_{\underline{n}}\}\). This is an orthonormal basis of \({{{\mathcal {L}}}^2(\mathbb {R}^2)}\) made of functions, all belonging to \({{{\mathcal {S}}}(\mathbb {R}^2)}\). Further, and very importantly, the vectors of this basis obey ladder equalities with respect to the bosonic operators \(a_j\) and \(a_j^\dagger \) in (3.11). For instance,

$$\begin{aligned} a_1^\dagger e_{\underline{n}}=\sqrt{n_1+1}\,e_{n_1+1,n_2},\qquad a_2 e_{\underline{n}}=\sqrt{n_2}\,e_{n_1,n_2-1}, \end{aligned}$$

(or \( a_2 e_{\underline{n}}=0\) if \(n_2=0\)), and so on.
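These ladder equalities can be verified symbolically. The sketch below is purely illustrative: it works in one dimension, in units where \(m_1\omega _1=\hbar =1\) (an assumption made only to keep the formulas clean; the general case just rescales \(x\)), and checks that \(a^\dagger e_n=\sqrt{n+1}\,e_{n+1}\) and \(a\, e_{n+1}=\sqrt{n+1}\,e_n\) for the normalized Hermite functions.

```python
import sympy as sp

x = sp.symbols('x', real=True)

def e(n):
    # Normalized harmonic oscillator eigenfunction (Hermite function),
    # in units where m1*omega1 = hbar = 1 (illustrative choice).
    return (sp.hermite(n, x) * sp.exp(-x**2 / 2)
            / sp.sqrt(2**n * sp.factorial(n) * sp.sqrt(sp.pi)))

def a_dag(f):
    # raising operator: a^dagger = (x - d/dx)/sqrt(2)
    return (x * f - sp.diff(f, x)) / sp.sqrt(2)

def a(f):
    # lowering operator: a = (x + d/dx)/sqrt(2)
    return (x * f + sp.diff(f, x)) / sp.sqrt(2)

for n in range(4):
    assert sp.simplify(a_dag(e(n)) - sp.sqrt(n + 1) * e(n + 1)) == 0
    assert sp.simplify(a(e(n + 1)) - sp.sqrt(n + 1) * e(n)) == 0
```

The two-dimensional relations for \(e_{\underline{n}}({\underline{x}})=e_{n_1}(x_1)e_{n_2}(x_2)\) then follow factor by factor.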

From now on, we will use this particular basis to define \(\langle . , .\rangle _e\) as in (4.5). The reason is simple: our raising operators \(A_j^\dagger \) and \(B_j\) can be written, see (3.12), in terms of \(a_j\) and \(a_j^\dagger \), and their action on each \(e_{\underline{n}}({\underline{x}})\) in (5.3) is simple. Our main task is to check that \(\langle \psi _{{\underline{k}}} , \varphi _{{\underline{l}}}\rangle _e\) makes sense, and that it is equal to \(\delta _{{\underline{k}},{\underline{l}}}\). This would imply that our multiplication in (4.5) is useful, defined on a large set, and that the families \({{\mathcal {F}}}_\varphi \) and \({{\mathcal {F}}}_\psi \) are biorthonormal (with respect to this extended scalar product).

To prove this claim, we need to compute

$$\begin{aligned} \langle \psi _{{\underline{k}}} , \varphi _{{\underline{l}}}\rangle _e=\sum _{\underline{n}}\overline{\psi _{{\underline{k}}}}[e_{\underline{n}}]\,\varphi _{{\underline{l}}}[e_{\underline{n}}], \end{aligned}$$
(5.4)

using (4.5). A general proof of the existence of this quantity is not easy. On the other hand, a direct check that \(\langle \psi _{{\underline{k}}} , \varphi _{{\underline{l}}}\rangle _e\) exists, and that it is equal to \(\delta _{{\underline{k}},{\underline{l}}}\), is not hard if we restrict to a few (low) values of \({\underline{k}}\) and \({\underline{l}}\). To do so, it is convenient to check first the following equalities:

$$\begin{aligned} \left\{ \begin{array}{ll} \varphi _{{\underline{0}}}[e_{\underline{n}}]=\alpha \delta _{n_1,n_2}, \qquad \qquad \qquad \qquad \qquad \varphi _{0,1}[e_{\underline{n}}]=\alpha \sqrt{2n_2}\delta _{n_1,n_2-1},\\ \varphi _{1,0}[e_{\underline{n}}]=\alpha \sqrt{2(n_2+1)}\delta _{n_1,n_2+1},\quad \qquad \varphi _{{\underline{1}}}[e_{\underline{n}}]=\alpha (1+2n_2)\delta _{n_1,n_2},\\ \end{array} \right. \end{aligned}$$
(5.5)

and

$$\begin{aligned} \left\{ \begin{array}{ll} \overline{\psi _{{\underline{0}}}}[e_{\underline{n}}]=\overline{\beta } (-1)^{n_2} \delta _{n_1,n_2}, \qquad \qquad \qquad \qquad \qquad \overline{\psi _{0,1}}[e_{\underline{n}}]=\overline{\beta } (-1)^{n_2+1}\sqrt{2n_2}\, \delta _{n_1,n_2-1},\\ \overline{\psi _{1,0}}[e_{\underline{n}}]=\overline{\beta } (-1)^{n_2}\sqrt{2(n_2+1)}\delta _{n_1,n_2+1},\quad \qquad \overline{\psi _{{\underline{1}}}}[e_{\underline{n}}]=\overline{\beta } (-1)^{n_2+1}(1+2n_2)\delta _{n_1,n_2}.\\ \end{array} \right. \end{aligned}$$
(5.6)

The proof of (some of) these identities is given in “Appendix B.”
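The Kronecker-delta structure of (5.5) and (5.6) is consistent with vacua acting, up to the constants \(\alpha \) and \(\beta \), like \(\delta (x_1-x_2)\) and \(\delta (x_1+x_2)\). Treating this as an assumption made only for illustration (the exact form of the vacua is fixed earlier in the paper), the \({\underline{n}}={\underline{0}}\) rows of (5.5)–(5.6) reduce to the one-dimensional overlaps \(\int e_{n_1}(x)e_{n_2}(\pm x)\,dx\), which can be checked symbolically (again in units with \(m_1\omega _1=\hbar =1\)):

```python
import sympy as sp

x = sp.symbols('x', real=True)

def e(n):
    # normalized Hermite function, units m1*omega1 = hbar = 1
    return (sp.hermite(n, x) * sp.exp(-x**2 / 2)
            / sp.sqrt(2**n * sp.factorial(n) * sp.sqrt(sp.pi)))

for n1 in range(3):
    for n2 in range(3):
        # action of delta(x1 - x2) on e_{n1}(x1) e_{n2}(x2):
        same = sp.integrate(e(n1) * e(n2), (x, -sp.oo, sp.oo))
        # action of delta(x1 + x2), using e_{n2}(-x) = (-1)^{n2} e_{n2}(x):
        flip = sp.integrate(e(n1) * e(n2).subs(x, -x), (x, -sp.oo, sp.oo))
        assert sp.simplify(same - (1 if n1 == n2 else 0)) == 0
        assert sp.simplify(flip - ((-1)**n2 if n1 == n2 else 0)) == 0
```

The orthonormality of the \(e_n\) gives the factor \(\delta _{n_1,n_2}\), while the parity of the Hermite functions produces the sign \((-1)^{n_2}\) appearing in (5.6).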

We can now use (5.5) and (5.6) to check a few of the orthonormality results needed to conclude that \({{\mathcal {F}}}_\varphi \) and \({{\mathcal {F}}}_\psi \) are \({{\mathcal {F}}}_e\)-biorthonormal. In particular, it is easy to check that the vectors \(\psi _{\underline{k}}\) and \(\varphi _{\underline{l}}\) are \({{\mathcal {F}}}_e\)-orthogonal whenever \({\underline{k}}\ne {\underline{l}}\), for \({\underline{k}},{\underline{l}}=(j_1,j_2)\), \(j_1,j_2=0,1\). For instance, we have

$$\begin{aligned} \langle \psi _{1,0} , \varphi _{\underline{0}}\rangle _e=\sum _{\underline{n}}\overline{\psi _{1,0}}[e_{\underline{n}}]\varphi _{{\underline{0}}}[e_{\underline{n}}]=\overline{\beta }\alpha \sum _{\underline{n}}(-1)^{n_2}\sqrt{2(n_2+1)}\delta _{n_1,n_2+1}\delta _{n_1,n_2}=0, \end{aligned}$$

clearly. Similarly, we have

$$\begin{aligned} \langle \psi _{{\underline{1}}} , \varphi _{1,0}\rangle _e=\sum _{\underline{n}}\overline{\psi _{{\underline{1}}}}[e_{\underline{n}}]\varphi _{1,0}[e_{\underline{n}}]=\overline{\beta }\alpha \sum _{\underline{n}}(-1)^{n_2+1}(1+2n_2)\delta _{n_1,n_2}\sqrt{2(n_2+1)}\delta _{n_1,n_2+1}=0, \end{aligned}$$

too, and so on. Much more interesting is the proof that \(\langle \psi _{{\underline{k}}} , \varphi _{\underline{l}}\rangle _e=1\) when \({\underline{k}}={\underline{l}}\), even in these few simple cases. Using the results in (5.5) and (5.6), we get the following identities:

$$\begin{aligned} \left\{ \begin{array}{ll} \langle \psi _{\underline{0}} , \varphi _{\underline{0}}\rangle _e=\overline{\beta }\alpha \sum _k(-1)^k,\\ \langle \psi _{1,0} , \varphi _{1,0}\rangle _e=\langle \psi _{0,1} , \varphi _{0,1}\rangle _e=2\overline{\beta }\alpha \sum _k(-1)^k(k+1),\\ \langle \psi _{{\underline{1}}} , \varphi _{{\underline{1}}}\rangle _e=-\overline{\beta }\alpha \sum _k(-1)^k(2k+1)^2.\\ \end{array} \right. \end{aligned}$$
(5.7)

None of these series converges in the ordinary sense. However, they are all Abel-convergent. In fact, since

$$\begin{aligned} A-\sum _k(-1)^k=\frac{1}{2}, \qquad A-\sum _k(-1)^kk=-\frac{1}{4}, \qquad A-\sum _k(-1)^kk^2=0, \end{aligned}$$

if we take \(\alpha \overline{\beta }=2\), we conclude that

$$\begin{aligned} \langle \psi _{\underline{0}} , \varphi _{\underline{0}}\rangle _e=\langle \psi _{1,0} , \varphi _{1,0}\rangle _e=\langle \psi _{0,1} , \varphi _{0,1}\rangle _e=\langle \psi _{{\underline{1}}} , \varphi _{{\underline{1}}}\rangle _e=1. \end{aligned}$$

Of course, what we have explicitly checked here is not a general result. In other words, this is just an indication that \(\langle \psi _{{\underline{k}}} , \varphi _{{\underline{l}}}\rangle _e\) in (5.4) exists and is equal to \(\delta _{{\underline{k}},{\underline{l}}}\). This is what we will discuss next.
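The Abel sums used above, and the resulting normalizations with \(\alpha \overline{\beta }=2\), can also be checked numerically. The sketch below is a crude numerical stand-in for the limit \(t\rightarrow 1^-\): it evaluates long partial sums of \(\sum _k a_k t^k\) at \(t\) slightly below 1 (the values of \(t\) and the truncation are arbitrary choices for this illustration).

```python
def abel_sum(coeff, t=0.999, terms=60000):
    """Numerical stand-in for the Abel sum of sum_k coeff(k):
    a long partial sum of sum_k coeff(k) * t**k at t slightly below 1."""
    return sum(coeff(k) * t**k for k in range(terms))

S0 = abel_sum(lambda k: (-1)**k)            # Abel sum is 1/2
S1 = abel_sum(lambda k: (-1)**k * k)        # Abel sum is -1/4
S2 = abel_sum(lambda k: (-1)**k * k**2)     # Abel sum is 0

assert abs(S0 - 0.5) < 1e-2
assert abs(S1 + 0.25) < 1e-2
assert abs(S2) < 1e-2

# With alpha * conj(beta) = 2, the three scalar products in (5.7)
# all come out close to 1:
ab = 2.0
assert abs(ab * S0 - 1) < 1e-2
assert abs(2 * ab * abel_sum(lambda k: (-1)**k * (k + 1)) - 1) < 1e-2
assert abs(-ab * abel_sum(lambda k: (-1)**k * (2 * k + 1)**2) - 1) < 1e-2
```

Of course, this only probes the limit at one value of \(t\); it is a sanity check, not a proof of Abel convergence.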

5.1 From few to many

Our aim now is to generalize, as much as possible, the formulas in (5.7), so as to check that \({{\mathcal {F}}}_\varphi \) and \({{\mathcal {F}}}_\psi \) are indeed \({{\mathcal {F}}}_e\)-biorthonormal. However, as we will see, the existence of \(\langle \psi _{{\underline{k}}} , \varphi _{{\underline{l}}}\rangle _e\) will be, for us, a working assumption, motivated by the results deduced previously. We hope to be able to produce a general proof of this existence in the near future. It is quite likely that the difficulty in proving this result is connected with the fact that the various series above are not convergent in the usual sense, but only Abel-convergent. For this reason, we believe that the preliminary analysis proposed here and before is relevant and useful for a deeper understanding of the situation.

We start by proving that, calling \(N_j=B_jA_j\) and \(N_j^\dagger \) its adjoint, the following weak eigenvalue equations are satisfied:

$$\begin{aligned} \langle \Phi , N_j\varphi _{\underline{l}}\rangle =l_j\langle \Phi , \varphi _{\underline{l}}\rangle , \qquad \langle \Phi , N_j^\dagger \psi _{\underline{l}}\rangle =l_j\langle \Phi , \psi _{\underline{l}}\rangle , \end{aligned}$$
(5.8)

for all \({\underline{l}}\in \mathbb {N}_0^2\), for all \(\Phi (x_1,x_2)\in {{{\mathcal {S}}}(\mathbb {R}^2)}\), and for \(j=1,2\).

Using the definition of \(N_1=B_1A_1\) in terms of multiplication and derivative operators, see (3.11) and (3.12), and using the fact that \(\Phi (x_1,x_2)\in {{{\mathcal {S}}}(\mathbb {R}^2)}\) and that \({{{\mathcal {S}}}(\mathbb {R}^2)}\) is stable under the action of the various operators involved, we have

$$\begin{aligned} \langle \Phi , N_1\varphi _{\underline{n}}\rangle =\langle N_1^\dagger \Phi , \varphi _{\underline{n}}\rangle =\frac{1}{\sqrt{n_1!}}\langle N_1^\dagger \Phi , B_1^{n_1}\varphi _{0,n_2}\rangle =\frac{1}{\sqrt{n_1!}}\langle {B_1^\dagger }^{n_1}N_1^\dagger \Phi , \varphi _{0,n_2}\rangle . \end{aligned}$$

Now, since \(\Phi (x_1,x_2)\in {{{\mathcal {S}}}(\mathbb {R}^2)}\), which is stable under these operators, we can safely rewrite

$$\begin{aligned} {B_1^\dagger }^{n_1}N_1^\dagger \Phi =\left( \left[ {B_1^\dagger }^{n_1},N_1^\dagger \right] +N_1^\dagger {B_1^\dagger }^{n_1}\right) \Phi =\left( n_1{B_1^\dagger }^{n_1}+N_1^\dagger {B_1^\dagger }^{n_1}\right) \Phi , \end{aligned}$$

where the last equality can be proved by induction on \(n_1\). Hence, using again the definition of the weak derivative, we have

$$\begin{aligned} \frac{1}{\sqrt{n_1!}}\langle {B_1^\dagger }^{n_1}N_1^\dagger \Phi , \varphi _{0,n_2}\rangle =\frac{1}{\sqrt{n_1!}}\langle (n_1{B_1^\dagger }^{n_1}+N_1^\dagger {B_1^\dagger }^{n_1})\Phi , \varphi _{0,n_2}\rangle =n_1\langle \Phi , \varphi _{\underline{n}}\rangle , \end{aligned}$$

since, in particular, \(\langle N_1^\dagger {B_1^\dagger }^{n_1}\Phi , \varphi _{0,n_2}\rangle =\langle {B_1^\dagger }^{n_1}\Phi , N_1\varphi _{0,n_2}\rangle =0\). The other equalities in (5.8) can be proved in a similar way.
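The commutator identity \(\left[ {B_1^\dagger }^{n_1},N_1^\dagger \right] =n_1{B_1^\dagger }^{n_1}\) used above follows from the commutation relation \([A_1,B_1]=1\) alone. A minimal symbolic sketch, in the purely illustrative representation \(B_1^\dagger =d/dx\), \(A_1^\dagger =x\) (which satisfies \([B_1^\dagger ,A_1^\dagger ]=1\), so \(N_1^\dagger =A_1^\dagger B_1^\dagger \) acts as \(g\mapsto x\,g'\)), checks it on a smooth test function:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.exp(-x**2) * sp.sin(x)   # an arbitrary Schwartz-class test function

# Illustrative representation: B1^dag = d/dx, A1^dag = x, so
# N1^dag = A1^dag B1^dag acts as g -> x * g'.
Ndag = lambda g: x * sp.diff(g, x)

for n in range(1, 5):
    # [B1^dag^n, N1^dag] f, computed explicitly ...
    lhs = sp.diff(Ndag(f), x, n) - Ndag(sp.diff(f, x, n))
    # ... should equal n * B1^dag^n f:
    rhs = n * sp.diff(f, x, n)
    assert sp.simplify(lhs - rhs) == 0
```

The same computation, done abstractly by induction on \(n_1\), gives the identity used in the proof.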

This result has interesting consequences like those given in the rest of this section.

Proposition 3

Assume that, for some \({\underline{k}}\) and \({\underline{l}}\), \(\langle \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e\) exists. Then, \(\langle \psi _{\underline{k}} , N_j\varphi _{\underline{l}}\rangle _e\) and \(\langle N_j^\dagger \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e\), \(j=1,2\), also exist and

$$\begin{aligned} \langle \psi _{\underline{k}} , N_j\varphi _{\underline{l}}\rangle _e=l_j\langle \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e, \qquad \langle N_j^\dagger \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e=k_j\langle \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e. \end{aligned}$$
(5.9)

Proof

By definition, we have, for instance,

$$\begin{aligned} \langle \psi _{\underline{k}} , N_1\varphi _{\underline{l}}\rangle _e=\sum _{\underline{n}}\overline{\psi _{{\underline{k}}}}[e_{\underline{n}}]\,(N_1\varphi _{{\underline{l}}})[e_{\underline{n}}]. \end{aligned}$$

But, since \(e_{\underline{n}}({\underline{x}})\in {{{\mathcal {S}}}(\mathbb {R}^2)}\), we can use (5.8) and we get \((N_1\varphi _{{\underline{l}}})[e_{\underline{n}}]=\langle e_{\underline{n}} , N_1\varphi _{{\underline{l}}}\rangle =l_1\langle e_{\underline{n}} , \varphi _{{\underline{l}}}\rangle \). Hence,

$$\begin{aligned} \langle \psi _{\underline{k}} , N_1\varphi _{\underline{l}}\rangle _e=l_1\sum _{\underline{n}}\overline{\psi _{{\underline{k}}}}[e_{\underline{n}}]\,\varphi _{{\underline{l}}}[e_{\underline{n}}]=l_1\langle \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e, \end{aligned}$$

as we had to prove. The other equalities in (5.9) can be proved in a similar way. \(\square \)

We have already commented that, in general, given F and G in \({{\mathcal {S}}}'(\mathbb {R}^2)\) and an operator X, even if \(X^\dagger F, XG\in {{\mathcal {S}}}'(\mathbb {R}^2)\), there is no general reason for \(\langle X^\dagger F , G\rangle _e\) and \(\langle F , XG\rangle _e\) to exist, nor, when they do exist, to coincide: \(\langle X^\dagger F , G\rangle _e=\langle F , XG\rangle _e\). However, this may happen in some particular cases. In fact, we can check that

$$\begin{aligned} \langle N_j^\dagger \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e=\langle \psi _{\underline{k}} , N_j\varphi _{\underline{l}}\rangle _e \end{aligned}$$
(5.10)

\(\forall {\underline{k}},{\underline{l}}\), \(j=1,2\). The proof of this equality is rather technical and is given in “Appendix C.” It is now clear that, as in the standard Hilbert space setting, the following result holds:

Proposition 4

If \({\underline{k}}\ne {\underline{l}}\) then

$$\begin{aligned} \langle \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e=0. \end{aligned}$$
(5.11)

Proof

The proof is identical to the usual one, using the (nontrivial, here) identities in (5.9) and (5.10). \(\square \)
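Explicitly, the usual argument runs as follows: combining (5.9) and (5.10),

$$\begin{aligned} l_j\langle \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e=\langle \psi _{\underline{k}} , N_j\varphi _{\underline{l}}\rangle _e=\langle N_j^\dagger \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e=k_j\langle \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e, \end{aligned}$$

so that \((l_j-k_j)\langle \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e=0\) for \(j=1,2\). If \({\underline{k}}\ne {\underline{l}}\), then \(l_j\ne k_j\) for at least one j, and \(\langle \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e=0\) follows.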

Summarizing, what we get here is a strong indication that the results deduced before, using (5.5) and (5.6), can be generalized, and that the sets \({{\mathcal {F}}}_\varphi \) and \({{\mathcal {F}}}_\psi \) are indeed two \({{\mathcal {F}}}_e\)-biorthonormal sets of tempered distributions, consisting of weak eigenstates of the number operators \(N_j\) and \(N_j^\dagger \) and, therefore, of the Hamiltonian H in (3.14) and of \(H^\dagger \), respectively. In our analysis, we were somehow forced to introduce a new, potentially interesting, multiplication between distributions.

6 Conclusions

There are several open points in the analysis proposed in this paper. First of all, we believe that the idea of the \({{\mathcal {F}}}_e\)-multiplication of distributions may have interesting consequences and applications. Moreover, its properties should be analyzed in more detail than we have done here. We hope to consider this aspect of our analysis soon.

There are also other aspects of the system considered here which should be clarified: if, on the one hand, we were able to compute explicitly some of the scalar products \(\langle \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e\), showing that, as expected, they are zero or one, we have no general argument showing that \(\langle \psi _{\underline{k}} , \varphi _{\underline{l}}\rangle _e\) exists for all \({\underline{k}}\) and \({\underline{l}}\). Still, under the hypothesis that it exists, we were able to deduce several interesting results, including the biorthogonality of the vectors \(\psi _{\underline{k}}\) and \(\varphi _{\underline{l}}\). However, the nature of \({{\mathcal {F}}}_\varphi \) and \({{\mathcal {F}}}_\psi \) as possible bases (of some kind, see [23]) is also still to be understood.

Another interesting aspect is the role of the Abel summation appearing in several of our computations. This raises further questions: is it related to the physical system? How?

Finally, on a more physical side, the Lagrangian considered in (3.1) and (3.2) is just one particular choice among many possibilities. What changes if we consider a different \(L_1\)? Which kind of physics can we describe? And what is the role of distributions, if any, for general dissipative systems?

We hope to be able to answer some of these questions soon.