1 Introduction

Concepts and techniques of information theory (IT) [1–8] have been successfully applied to explore the molecular electron probabilities and the associated patterns of chemical bonds, e.g., [9–18]. In Schrödinger’s quantum mechanics the electronic state is determined by the system wave function, the (complex) amplitude of the particle probability distribution, which carries the resultant information content. Both the electron density, or its shape factor, the probability distribution determined by the wave-function modulus, and the system current distribution, related to the gradient of the wave-function phase, ultimately contribute to the quantum information descriptors of molecular states. The former reveals the classical information term, while the latter determines its non-classical complement in the overall information measures [9, 10, 16, 17]. A phenomenological IT description of equilibria in molecular subsystems has also been proposed [11, 19–22], which formally resembles ordinary thermodynamics [23].

In the present analysis we emphasize the non-classical, (phase/current)-related contributions to the quantum information measures of electronic states in molecules. The main purpose of this work is to identify the non-classical supplements of the classical cross (relative) entropy (information-distance) descriptors within both the Fisher and Shannon measures of the information content, and to explore the role of the phase dependence of scattering amplitudes in the orbital communication theory (OCT) [12, 13, 18, 24–27]. The information-cascade (bridge) propagation of electronic probabilities in molecular information systems, which generates the indirect bond contributions due to orbital intermediaries [13, 28–32], will also be examined.

Throughout the article the following tensor notation is used: \(A\) denotes a scalar quantity, \({{\varvec{A}}}\) stands for a row or column vector, and A represents a square or rectangular matrix. The logarithm of the Shannon-type information measure is taken to an arbitrary but fixed base. In keeping with the custom in works on IT, the logarithm taken to base 2 corresponds to the information measured in bits (binary digits), while selecting log = ln expresses the amount of information in nats (natural units): 1 nat = 1/ln 2 bits ≈ 1.443 bits.
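This base convention amounts to a simple rescaling of all entropy values. As a minimal illustration (ours, not part of the original development), the following Python sketch evaluates the Shannon entropy of Eq. (16) below in both units and checks the nat-to-bit conversion:

```python
import math

def shannon_entropy(p, base=2.0):
    """Shannon entropy of a discrete distribution p [Eq. (16)]."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]
S_bits = shannon_entropy(p, base=2)        # 1.5 bits
S_nats = shannon_entropy(p, base=math.e)   # ~1.0397 nats
assert abs(S_nats / math.log(2) - S_bits) < 1e-12   # 1 nat = 1/ln 2 bits
```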

2 Probability and current descriptors of electronic states

Consider the electron density \(\rho ({{\varvec{r}}}) = Np({{\varvec{r}}})\), or its shape (probability) factor \(p({{\varvec{r}}})\), and the current density \({{\varvec{j}}}({{\varvec{r}}})\) in the quantum state \(\Psi (N)\) of \(N\) electrons,

$$\begin{aligned} \Psi \!\left( N\right) =R\!\left( N \right) \hbox {exp}\!\left[ \hbox {i}\Phi \!\left( N \right) \right] \equiv R\!\left( {\mathbf{r}^{N};{\varvec{\sigma }}^{N}} \right) \hbox {exp}\!\left[ \hbox {i}\Phi \!\left( {\mathbf{r}^{N};{\varvec{\sigma }}^{N}} \right) \right] . \end{aligned}$$
(1)

Here \(R(N)\) and \(\Phi (N)\) stand for the wave-function modulus and (spatial) phase parts, respectively, \({\varvec{\sigma }}^{N} = (\sigma _{1}, \sigma _{2}, {\ldots }, \sigma _{N})\) groups the spin orientations of all \(N\) (indistinguishable) electrons and \(\mathbf{r}^{N} = ({{\varvec{r}}}_{1}, {{\varvec{r}}}_{2}, {\ldots }, {{\varvec{r}}}_{N})\equiv ({{\varvec{r}}}_{1}, \mathbf{r}^{\prime })\) combines their spatial positions. These quantities are defined by the quantum-mechanical expectation values,

$$\begin{aligned} \rho ({{\varvec{r}}})= \langle \Psi |{\hat{\uprho }}({{\varvec{r}}})|\Psi \rangle \quad {\text {and}}\quad {{\varvec{j}}}({{\varvec{r}}}) = \langle \Psi |{\hat{\mathbf{j}}}({{\varvec{r}}})|\Psi \rangle \equiv \sum \limits _{k=1}^{N} \langle \Psi |{\hat{\mathbf{j}}}_{k}({{\varvec{r}}})|\Psi \rangle =\sum \limits _{k=1}^{N}{{\varvec{j}}}_{k}({{\varvec{r}}}), \end{aligned}$$
(2)

of the corresponding observables in the position representation,

$$\begin{aligned} \hat{{\uprho }}({{\varvec{r}}})&= \sum \limits _{k=1}^N {\delta ({{\varvec{r}}}_k -{{\varvec{r}}})} \equiv \sum \limits _{k=1}^N {\hat{{\uprho }}_k ({{\varvec{r}}})} , \nonumber \\ {\hat{\mathbf{j}}}({{\varvec{r}}})&= \frac{1}{2m}\sum _{k=1}^N \,{\left[ {\delta ({{\varvec{r}}}_k -{{\varvec{r}}}){\hat{\mathbf{p}}}_k +{\hat{\mathbf{p}}}_k \delta ({{\varvec{r}}}_k {-{\varvec{r}}})} \right] }\nonumber \\&= \frac{\hbar }{2m\hbox {i}}\sum _{k=1}^N \,\,{\left[ {\delta ({{\varvec{r}}}_k -{{\varvec{r}}})\nabla _k +\nabla _k \delta ({{\varvec{r}}}_k -{{\varvec{r}}})} \right] } \equiv \sum _{k=1}^N {{\hat{\mathbf{j}}}_k ({{\varvec{r}}})} , \end{aligned}$$
(3)

where \(m\) denotes the electronic mass and the momentum operator \({\hat{\mathbf{p}}}_k =-\hbox {i}\hbar \nabla _k\).

The modulus part of the wave function generates the state electron distribution,

$$\begin{aligned} \rho ({{\varvec{r}}})=N\sum _\sigma {\int {R^{2}({{\varvec{r}}},\mathbf{r}^{\prime };\varvec{\sigma }^{N})\, d\mathbf{r}^{\prime }} } =Np({{\varvec{r}}}), \end{aligned}$$
(4)

where the summation is over the admissible spin-orientations of all electrons,

$$\begin{aligned} \sigma \in \{\sigma _k \in \left( {\hbox {spin-}up, \hbox {spin-}down} \right) \}, \end{aligned}$$
(5)

and the integration covers the positions \(\mathbf{r}^{\prime } = ({{\varvec{r}}}_{2}, {{\varvec{r}}}_{3}, {\ldots }, {{\varvec{r}}}_{N})\) of all these indistinguishable fermions except the representative “first” electron located at \({{\varvec{r}}}\), \({{\varvec{r}}}_{1} ={{\varvec{r}}}\), as enforced by the Dirac delta function. The probability current density is similarly shaped by the state phase gradient:

$$\begin{aligned} {{\varvec{j}}}({{\varvec{r}}})=\frac{\hbar }{m}N\sum _\sigma {\int {R^{2}({{\varvec{r}}},\mathbf{r}^{\prime };\varvec{\sigma }^{N}) \nabla _{{\varvec{r}}} \Phi ({{\varvec{r}}},\mathbf{r}^{\prime };\varvec{\sigma } ^{N}) d\mathbf{r}^{\prime }} } . \end{aligned}$$
(6)

These expressions assume particularly simple forms in the MO approximation,

$$\begin{aligned} \Psi (N)=(1/\sqrt{N!}) \det (\psi _1 ,\psi _2 , \ldots , \psi _N )\equiv \left| {\psi _1 ,\psi _2 , \ldots , \psi _N } \right| , \end{aligned}$$
(7)

e.g., in the familiar Hartree–Fock (HF) or Kohn–Sham (KS) self-consistent field (SCF) theories, in which \(\Psi (N)\) is given by the antisymmetrized product (Slater determinant) of \(N\) one-particle functions, spin molecular orbitals (SMO),

$$\begin{aligned} \{\psi _k \!\left( {{\varvec{q}}} \right) =\psi _k ({{\varvec{r}}};\sigma )\equiv R_k \!\left( {{\varvec{r}}} \right) \hbox {exp}[\hbox {i}\phi _k \!\left( {{\varvec{r}}} \right) ]\,\zeta _k (\sigma )\equiv \varphi _k \!\left( {{\varvec{r}}} \right) \zeta _k (\sigma ),\quad k= 1, 2,\ldots ,N\}, \end{aligned}$$
(8)

each determined by a product of the associated spatial function (MO) \(\varphi _{k}({{\varvec{r}}})\) and the corresponding spin-state

$$\begin{aligned} \zeta _k (\sigma )\in \{\alpha (\sigma ), \hbox {spin-}\textit{up}\,\hbox {state }(\uparrow );\beta (\sigma ), \hbox {spin-}\,\textit{down}\,\hbox {state }(\downarrow )\}. \end{aligned}$$
(9)

Indeed, since the observables of Eq. (2) combine one-electron operators, their expectation values are given by sums of the corresponding orbital expectation values:

$$\begin{aligned} \rho ({{\varvec{r}}})&= \sum _{k=1}^N {\left\langle {\varphi _k } \right| {\hat{{\uprho }}}_k ({{\varvec{r}}})\left| {\varphi _k } \right\rangle } =\sum _k {\left| {\varphi _k ({{\varvec{r}}})} \right| ^2} =\sum _k {R_k^2 ({{\varvec{r}}})} =\sum _k {\rho _k ({{\varvec{r}}})} ,\end{aligned}$$
(10)
$$\begin{aligned} {{\varvec{j}}}({{\varvec{r}}})&= \sum _{k=1}^N {\left\langle {\varphi _k } \right| {\hat{\mathbf{j}}}_k ({{\varvec{r}}})\left| {\varphi _k } \right\rangle } =\frac{\hbar }{m}\sum _k {\rho _k ({{\varvec{r}}})\nabla \phi _k ({{\varvec{r}}})} =\sum _k {{{\varvec{j}}}_k ({{\varvec{r}}})} , \end{aligned}$$
(11)

where \(\{\rho _k ({{\varvec{r}}})\}\) and \(\{{{\varvec{j}}}_k({{\varvec{r}}})\}\) denote the orbital contributions to the system electron density and current distributions, respectively.

In the simplest case of a single (\(N = 1\)) electron in a general state described by the complex MO,

$$\begin{aligned} \varphi \!\left( {{\varvec{r}}} \right) =R\!\left( {{\varvec{r}}} \right) \,\hbox {exp}[\hbox {i}\phi \!\left( {{\varvec{r}}} \right) ], \end{aligned}$$
(12)

the modulus factor of this wave function determines the particle spatial probability/density distribution,

$$\begin{aligned} \rho \!\left( {{\varvec{r}}} \right)&= \langle \varphi |\hat{\uprho }({{\varvec{r}}})| \varphi \rangle =\varphi ^{*}({{\varvec{r}}})\varphi ({{\varvec{r}}})=R^{2}({{\varvec{r}}})=p({{\varvec{r}}}),\nonumber \\ {\hat{{\uprho }}}({{\varvec{r}}})&= \delta ({{\varvec{r}}}^{\prime }-{{\varvec{r}}}),\,\quad \int {p({{\varvec{r}}})\,d{{\varvec{r}}}={1},} \end{aligned}$$
(13)

while the gradient of its phase component generates the associated current density:

$$\begin{aligned}&{{\varvec{j}}}\!\left( {{\varvec{r}}} \right) =\langle \varphi |\hat{\mathbf{j}}({{\varvec{r}}})|\varphi \rangle =\frac{\hbar }{2mi}[\varphi ^{*}({{\varvec{r}}})\nabla \varphi ({{\varvec{r}}})-\varphi ({{\varvec{r}}})\nabla \varphi ^{*}({{\varvec{r}}})]=\frac{\hbar p({{\varvec{r}}})}{m}\nabla \phi ({{\varvec{r}}}), \nonumber \\&{\hat{\mathbf{j}}}({{\varvec{r}}})=\frac{\hbar }{2mi}[ \delta ({{\varvec{r}}}^{\prime }-{{\varvec{r}}})\nabla _{{{\varvec{r}}}^{\prime }} +\nabla _{{{\varvec{r}}}^{\prime }} \delta ({{\varvec{r}}}^{\prime }-{{\varvec{r}}})]. \end{aligned}$$
(14)

The phase gradient is thus proportional to the current-per-particle, “velocity” field \({{\varvec{ V}}}({{\varvec{r}}}) ={{\varvec{j}}}({{\varvec{r}}})/p({{\varvec{r}}})\):

$$\begin{aligned} \nabla \phi \!\left( {{\varvec{r}}} \right) =\!\left( m/\hbar \right) {{\varvec{V}}}\!\left( {{\varvec{r}}} \right) . \end{aligned}$$
(15)
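For example (an illustrative special case added here), consider a state whose probability density is modulated by a plane-wave phase, \(\varphi ({{\varvec{r}}}) = \sqrt{p({{\varvec{r}}})}\,\hbox {exp}(\hbox {i}{{\varvec{k}}}\cdot {{\varvec{r}}})\). Equations (14) and (15) then give a uniform velocity field:

$$\begin{aligned} \nabla \phi \!\left( {{\varvec{r}}} \right) ={{\varvec{k}}},\qquad {{\varvec{j}}}\!\left( {{\varvec{r}}} \right) =\frac{\hbar {{\varvec{k}}}}{m}\,p\!\left( {{\varvec{r}}} \right) ,\qquad {{\varvec{V}}}\!\left( {{\varvec{r}}} \right) =\frac{\hbar {{\varvec{k}}}}{m}. \end{aligned}$$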

The probability and current densities manifest the complementary facets of electron distributions in molecules. They respectively generate the classical and non-classical contributions to the generalized measures of the information content in quantum electronic states [9, 10, 16, 17, 33], which we shall briefly summarize in the next section.

3 Information measures

In Sects. 3–5 we provide a short overview of the pertinent concepts and techniques of IT, including the classical measures of the information content and their quantum generalizations capable of tackling the complex probability amplitudes (wave functions). As already remarked above, these generalized quantities have to be used in diagnosing the full, quantum information content of electronic states [9, 10], exploring molecular equilibria [16, 17], probing the chemical bond multiplicities due to orbital bridges, and treating the associated multiple (cascade) communications in molecular information channels. Some rudiments of the classical communication systems and the associated amplitude channels will also be given, and the entropic descriptors of the orbital networks will be linked to the chemical bond multiplicities and their covalent/ionic composition.

The key element of the IT approach to the molecular electronic structure is an adequate definition of a generalized measure of the information content in the given (generally complex) quantum state of electrons in molecules. The system electron distribution, related to the wave-function modulus, reveals only the classical, probability aspect of the molecular information content [1–7], while the phase (current) component gives rise to the non-classical entropy/information terms [9, 10, 16, 17, 33]. The resultant quantum measure then allows one to monitor the full information content of the non-equilibrium (variational) quantum states, thus providing a complete information description of their evolution towards the final equilibrium.

In density functional theory (DFT) [34, 35] one often refers to the density-constrained principles [9, 10, 36] and states [37–40], which correspond to the fixed electronic probability distribution. These define the so-called vertical equilibria, shaped solely by the non-classical (current-related) entropy/information functionals [9, 10, 16, 17]. The density-unrestricted principles associated with the resultant information measure similarly determine the horizontal (unconstrained) equilibria in molecules [9–11].

Also of interest in the electronic structure theory are the cross (or relative) entropy quantities, which measure the information distance between two probability distributions and reflect the information similarity between different states or molecules, as well as the descriptors of the information propagation between bonded atoms and orbitals in the system chemical bonds. One further aims at formulating adequate measures of the chemical bond multiplicity, consistent with the prevailing chemical intuition, for the system as a whole and its constituent fragments, as well as the entropy/information descriptors of the bond covalent/ionic composition.

The spread of information in molecular communication networks, among atoms-in-molecules (AIM) or between the AO they contribute to the molecular bond system, is investigated in OCT, in which molecular systems are regarded as AO-resolved information channels. Their conditional-entropy (communication-noise) and mutual-information (information-flow) descriptors [3, 7, 11–13] then provide a chemical resolution of the resultant IT bond multiplicities into the covalent and ionic bond components, respectively. One is also interested in the mutual relations between analogous concepts developed within the SCF MO and IT approaches. Together they offer a deeper insight into the complex phenomenon of chemical bonding.

The Shannon entropy [3] of the normalized probability vector \({{\varvec{p}}} = \{p_{i}\}, \sum _{i} p_{i} = 1\),

$$\begin{aligned} S({{\varvec{p}}})=-\sum _i p_i \,\hbox {log} p_i , \end{aligned}$$
(16)

where the summation extends over labels of the elementary events determining the discrete probability scheme in question, provides a measure of the average indeterminacy in the argument probability distribution. One similarly introduces the associated functional of the spatial probability distribution \(p({{\varvec{r}}}) = {\vert }\varphi ({{\varvec{r}}}){\vert }^{2}\), for the continuous labels of the electron locality events \(\{{{\varvec{r}}}\}\):

$$\begin{aligned} S^{class.}[\varphi ] =S [p]=-\int \!\!{p({{{\varvec{r}}}})\,\log \! p({{{\varvec{r}}}})\,\, d{{{\varvec{r}}}}} \equiv \int \!\!{p({{{\varvec{r}}}}) \, S^{class.}({{{\varvec{r}}}})\,\, d{{{\varvec{r}}}}} \equiv \int \!\!{s^{class.}({{{\varvec{r}}}})\,\, d{{{\varvec{r}}}}.}\nonumber \\ \end{aligned}$$
(17)

These electron “uncertainty” quantities also measure the corresponding amounts of information, \(I^\mathrm{S}(p)=S(p)\) or \(I^\mathrm{S}[p] = S[p]\), obtained when the distribution indeterminacy is removed by an appropriate measurement (experiment). This familiar (global) information measure is classical in character, being determined by probabilities alone. This property distinguishes it from the corresponding quantum concept of the non-classical entropy contribution due to the phase of the complex quantum state in question. As argued elsewhere [9, 10, 16, 17], for a single particle in the MO state of Eq. (12) the density of the non-classical entropy complement to the classical Shannon entropy of Eq. (17) is proportional to the local magnitude of the phase function, \({\vert }\phi {\vert } = (\phi ^{2})^{1/2}\), the square root of the phase-density \(\phi ^{2}\), with the local particle probability density providing the relevant weighting factor:

$$\begin{aligned} S^{nclass.}[\varphi ]&= -{2}\int \!\!{p({{{\varvec{r}}}})|\phi \!\left( {{{\varvec{r}}}}\right) \!|\,d{{{\varvec{r}}}}} \equiv S[p,\phi ]\equiv \int \!\! p\!\left( {{\varvec{r}}} \right) S^{nclass.}\!\left( {{{\varvec{r}}}}\right) d{{{\varvec{r}}}} \nonumber \\&\equiv \int \!\! {s^{nclass.}}\!\left( {{{\varvec{r}}}} \right) d{{{\varvec{r}}}} \equiv -{2}\langle |\phi |\rangle . \end{aligned}$$
(18)

Therefore, for the given quantum state \(\varphi \) of an electron the two components \(S[p] = S^{class.}[\varphi ]\) and \(S[p, \phi ] = S^{nclass.}[\varphi ]\) determine the resultant entropy descriptor:

$$\begin{aligned} S[\varphi ] =S^{class.} [\varphi ] +S^{nclass.} [\varphi ] =S [p]+S[p,\phi ]. \end{aligned}$$
(19)

The classical Fisher information for locality events [1, 2], also called the intrinsic accuracy, historically predates the Shannon entropy by about 25 years, having been proposed at about the same time as the final form of modern quantum mechanics was taking shape. This classical gradient measure of the information content in the probability density \(p({{\varvec{r}}})\) reads:

$$\begin{aligned} I\!\!\left[ p \right] =\int \!\!{p({{\varvec{r}}}) [\nabla \hbox {ln} \,p({{\varvec{r}}})]^{{2}}\,d{{\varvec{r}}}} =\int \!\!{[\nabla p({{\varvec{r}}})]^{{2}}/p({{\varvec{r}}})\,\, d{{\varvec{r}}}} =4\int \!\!{[\nabla A({{\varvec{r}}})]^{{2}}\,d{{\varvec{r}}}\equiv I\!\left[ A \right] ,} \end{aligned}$$
(20)

where \(A({{\varvec{r}}}) = \sqrt{p({{\varvec{r}}})}\) denotes the classical amplitude of this probability distribution.

This Fisher information is reminiscent of von Weizsäcker’s [41] inhomogeneity correction to the electronic kinetic energy in the Thomas–Fermi theory. It characterizes the compactness of the probability density \(p({{\varvec{r}}})\). For example, the Fisher information in the normal distribution measures the inverse of its variance, called the invariance, while the complementary Shannon entropy is proportional to the logarithm of the variance, thus monotonically increasing with the spread of the Gaussian distribution. The Shannon entropy and the intrinsic accuracy thus describe complementary facets of the probability density: the former reflects the distribution’s “spread” (delocalization, “disorder”), while the latter measures its “narrowness” (localization, “order”).
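These complementary trends can be made explicit for the normal distribution (a standard result quoted here for illustration, with the entropy expressed in nats):

$$\begin{aligned} p(x)=(2\pi \sigma ^{2})^{-1/2}\,\hbox {exp}[-(x-\mu )^{2}/(2\sigma ^{2})]:\qquad I[p]=\frac{1}{\sigma ^{2}},\qquad S[p]=\frac{1}{2}\,\hbox {ln}(2\pi e\sigma ^{2}). \end{aligned}$$

Narrowing the distribution (decreasing \(\sigma \)) thus raises \(I[p]\) and lowers \(S[p]\), and vice versa.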

This classical amplitude form of Eq. (20) is then naturally generalized into the domain of the quantum (complex) probability amplitudes, the wave functions of Schrödinger’s quantum mechanics. For example, for the one-electron state of Eq. (12), when \(p({{\varvec{r}}}) = \varphi ^{*}({{\varvec{r}}})\varphi ({{\varvec{r}}}) = {\vert }\varphi ({{\varvec{r}}}){\vert }^{2} = [R({{\varvec{r}}})]^{2}\), i.e., \(A({{\varvec{r}}}) = R({{\varvec{r}}})\), this generalized measure is given by the following MO functional related to the average kinetic energy \(T[\varphi ]\):

$$\begin{aligned} I[\varphi ]= 4\int \!\! {\left| {\nabla \varphi \!\left( {{\varvec{r}}} \right) } \right| ^{2}\, d{{\varvec{r}}}=\frac{8m}{\hbar ^{2}}\,T[\varphi ],} \end{aligned}$$
(21)

where the spatial integration by parts gives

$$\begin{aligned} T[\varphi ]\equiv \left\langle \varphi \right| \hat{\mathrm{T}}\left| \varphi \right\rangle =-\frac{\hbar ^{2}}{2m}\int \!\! {\varphi ^{*}({{\varvec{r}}})\Delta \varphi ({{\varvec{r}}}) }\, d{{\varvec{r}}}=\frac{\hbar ^{2}}{2m}\int \!\!{\left| {\nabla \varphi ({{\varvec{r}}})} \right| ^{2}}\, d{{\varvec{r}}}. \end{aligned}$$
(22)

This quantum kinetic energy also consists of the classical Fisher contribution, depending solely upon the electron probability distribution \(p({{\varvec{r}}})\),

$$\begin{aligned} T^{class.}[\varphi ] =T [p]=\frac{\hbar ^{2}}{8m}\int \!\!{\frac{\left| {\nabla p({{\varvec{r}}})} \right| ^{2}}{p({{\varvec{r}}})}}\,\, d{{\varvec{r}}}= \frac{\hbar ^{2}}{2m}\int \!{\!\left( {\nabla R({{\varvec{r}}})} \right) ^{2}}\, d{{\varvec{r}}}, \end{aligned}$$
(23)

and the non-classical, (phase/current)-related term,

$$\begin{aligned} T^{nclass.}[\varphi ]&= T [p,{{\varvec{j}}}]=\frac{m}{2}\int {\left( \,\, {{{\varvec{j}}}({{\varvec{r}}})/R({{\varvec{r}}})} \right) ^{2}}\,\, d{{\varvec{r}}}=\frac{\hbar ^{2}}{2m}\int \!\! {R^{2}({{\varvec{r}}})\!\left( {\nabla \phi ({{\varvec{r}}})} \right) ^{2}}\,\, d{{\varvec{r}}},\qquad \end{aligned}$$
(24)
$$\begin{aligned} T[\varphi ]&= T^{class.} [\varphi ] +T^{nclass.} [\varphi ] =T [p] +T [p,{{\varvec{j}}}]. \end{aligned}$$
(25)

Expressing the information functional of Eq. (21) in terms of the modulus and phase components of the argument MO state similarly gives:

$$\begin{aligned} I[\varphi ]&= I [p]+{4}\int {p({{\varvec{r}}})[\nabla \phi ({{\varvec{r}}})]^{{2}}\,\, d{{\varvec{r}}}\equiv I[p]+I[p,\phi ]}\nonumber \\&= I[p]+4 \left( {\frac{m}{\hbar }} \right) ^{2}\int {{{\varvec{j}}}^{2}({{\varvec{r}}})} /p({{\varvec{r}}}) \, d{{\varvec{r}}}=I\!\left[ p \right] +I[p,{{\varvec{j}}}]\equiv I^{class.}[\varphi ] +{I^{nclass.}} [\varphi ] \nonumber \\&\equiv \int {p({{\varvec{r}}})[I^{class.}({{\varvec{r}}})+I^{nclass.}({{\varvec{r}}})]\,\, d{{\varvec{r}}}} \equiv \int \!\! {[f^{class.}({{\varvec{r}}})+f^{nclass.}({{\varvec{r}}})] \, d{{\varvec{r}}}.} \end{aligned}$$
(26)

Here, the two information densities-per-electron read:

$$\begin{aligned}&I^{class.}\!\left( {{\varvec{r}}}\right) =[\nabla p\!\left( {{\varvec{r}}}\right) /p\!\left( {{\varvec{r}}}\right) ]^{{2}}=4[\nabla R\!\left( {{\varvec{r}}}\right) /R\!\left( {{\varvec{r}}}\right) ]^{{2}}, \nonumber \\&I^{nclass.}\left( {{\varvec{r}}}\right) =4[\nabla \phi \!\left( {{\varvec{r}}}\right) ]^{{2}} =4\!\left( {m/\hbar } \right) ^{{2}}[{{\varvec{j}}}\!\left( {{\varvec{r}}}\right) /p \!\left( {{\varvec{r}}}\right) ]^{{2}}. \end{aligned}$$
(27)

The classical and non-classical densities-per-electron of these complementary measures of the information content are mutually related via the common-type dependence [9, 10]:

$$\begin{aligned} I^{class.}({{\varvec{r}}})&= [\nabla \hbox {ln}p({{\varvec{r}}})]^{2}= [\nabla S^{class.}({{\varvec{r}}})]^{2}\quad \hbox { and}\nonumber \\ I^{nclass.}\!\left( {{\varvec{r}}}\right)&= \!\left( {\frac{2m \,{{\varvec{j}}}({{\varvec{r}}})}{\hbar p({{\varvec{r}}})}} \right) ^{2}\equiv [\nabla S^{nclass.}\!\left( {{\varvec{r}}}\right) ]^{{2}}. \end{aligned}$$
(28)

Thus, the square of the gradient of the local Shannon probe of the state quantum “indeterminacy” (disorder) generates the density of the corresponding Fisher measure of the state quantum “determinacy” (order). Notice that the second of these relations determines the density \(S^{{ nclass}\cdot }({\varvec{r}})\) only up to its sign. Therefore, both positive and negative [see Eq. (18)] signs of this non-classical (phase-related) information density are admissible.
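The partition of Eqs. (21) and (26) is easily verified numerically. The following Python sketch (our illustration; it assumes atomic units, \(\hbar = m = 1\), and a model Gaussian wave packet with a linear phase) integrates the classical and non-classical Fisher densities on a grid and checks them against the kinetic-energy relation:

```python
import numpy as np

# Model 1D state (an assumed illustration): Gaussian modulus, linear phase,
# psi(x) = (2a/pi)^(1/4) exp(-a x^2) exp(i k x), in atomic units (hbar = m = 1).
a, k = 0.7, 1.3
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
R = (2.0 * a / np.pi) ** 0.25 * np.exp(-a * x**2)     # modulus R(x)
psi = R * np.exp(1j * k * x)                          # full wave function
p = R**2                                              # probability density

I_class = np.sum(np.gradient(p, x) ** 2 / p) * dx     # I[p],      Eq. (20)
I_nclass = np.sum(4.0 * p * k**2) * dx                # I[p, phi], Eq. (26)
grad_psi = np.gradient(psi, x)
I_total = np.sum(4.0 * np.abs(grad_psi) ** 2) * dx    # I[phi],    Eq. (21)
T = 0.5 * np.sum(np.abs(grad_psi) ** 2) * dx          # kinetic energy, Eq. (22)

print(I_class, 4 * a)               # classical term equals 4a
print(I_nclass, 4 * k**2)           # non-classical term equals 4k^2
print(I_total, I_class + I_nclass)  # additivity of Eq. (26)
print(I_total, 8.0 * T)             # I = (8m/hbar^2) T, Eq. (21)
```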

4 Comparing probability distributions

An important generalization of Shannon’s entropy concept, called the relative (cross) entropy, also known as the entropy deficiency, missing information or directed divergence, has been proposed by Kullback and Leibler [5, 6]. It measures the information “distance” between the two (normalized) probability distributions for the same set of events. For example, in the discrete probability scheme identified by events \({{\varvec{a}}} = \{a_{i}\}\) and their probabilities \({{\varvec{P}}}({{\varvec{a}}}) = \{P(a_{i})=p_{i}\} = {{\varvec{p}}}\), this discrimination information in \({{\varvec{p}}}\) with respect to the reference distribution \({{\varvec{P}}}({{\varvec{a}}}^{0}) = \{P(a_{i}^{0})=p_{i}^{0}\} = {{\varvec{p}}}^{0}\) reads:

$$\begin{aligned} \Delta S({{\varvec{p}}}|{{\varvec{p}}}^{0})=\sum _i p_i \log \!\left( {p_i /p_i ^{0}} \right) \ge 0. \end{aligned}$$
(29)

This quantity provides a measure of the information resemblance between the two compared probability schemes. The more the two distributions differ from one another, the larger the information distance. For individual events the logarithm of probability ratio \( I_{i} = \log (p_{i}/p_{i}^{0})\), called the probability surprisal, provides a measure of the event information in \({{\varvec{p}}}\) relative to that in the reference distribution \({{\varvec{p}}}^{0}\). Notice that the equality in the preceding equation takes place only for the vanishing surprisal for all events, i.e., when the two probability distributions are identical.
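A minimal numerical illustration (ours, with an assumed pair of three-event distributions) of Eq. (29) and of its non-negativity:

```python
import math

def kl_divergence(p, p0, base=2.0):
    """Entropy deficiency (directed divergence) of Eq. (29), in bits."""
    return sum(pi * math.log(pi / pi0, base) for pi, pi0 in zip(p, p0) if pi > 0)

p  = [0.5, 0.3, 0.2]
p0 = [1/3, 1/3, 1/3]
print(kl_divergence(p, p0))   # ~0.0995 bits > 0 for p != p0
print(kl_divergence(p, p))    # 0: vanishes only for identical distributions
```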

Similar classical concepts of the information distance can be advanced within the Fisher measure. The directed-divergence between the continuous probability density \(p({{\varvec{r}}}) = {\vert }\varphi {\vert }^{2 }\) and the reference distribution \(p^{0}({{\varvec{r}}}) = {\vert }\varphi ^{0}{\vert }^{2}\), measuring the average probability-surprisal \(I_{p}({{\varvec{r}}})\),

$$\begin{aligned} \Delta S[p|p^{0}]=\int \!\!{p\!\left( {{\varvec{r}}}\right) \hbox { log}[p\!\left( {{\varvec{r}}}\right) /p^{0}\!\left( {{\varvec{r}}}\right) ]\,\, d{{\varvec{r}}}} \equiv \int \!\!{p\!\left( {{\varvec{r}}}\right) I_p \!\left( {{\varvec{r}}}\right) }\,\, d{{\varvec{r}}}\equiv \Delta S^{class.}[\varphi |\varphi ^{0}], \end{aligned}$$
(30)

has been generalized into its gradient analog [11,22]:

$$\begin{aligned} \Delta I[p|p^{0}]&= \int \!\!\!{p\!\left( {{\varvec{r}}}\right) [\nabla I_p \!\left( {{\varvec{r}}}\right) ]^{{2}}\,\,d{{\varvec{r}}}} \equiv \Delta I^{class.} [\varphi |\varphi ^{0}] \nonumber \\&= \int \!\!{p\!\left( {{\varvec{r}}}\right) [p\!\left( {{\varvec{r}}}\right) ^{-{1}} \nabla p\!\left( {{\varvec{r}}}\right) -p^{0}\!\left( {{\varvec{r}}}\right) ^{-{1}} \nabla p^{0}\!\left( {{\varvec{r}}}\right) ]^{{2}}d{{\varvec{r}}}\ge 0.} \end{aligned}$$
(31)

One can also introduce measures of the non-classical information distances, related to the phase/current degrees-of-freedom of the two quantum states \(\varphi \) and \(\varphi ^{0}\), which generate the associated (probability, phase, current) components, (\(p, \phi , {{\varvec{j}}}\)) and (\(p^{0}, \phi ^{0}, {{\varvec{j}}}^{0})\), respectively. The non-classical Shannon measure \(S[p, \phi ]\) of Eq. (18) then generates the following information distance measuring the average phase-surprisal \(I_{\phi }({{\varvec{r}}})\):

$$\begin{aligned} \Delta S[\phi |\phi ^{0}]\equiv \Delta S^{nclass.}[\varphi |\varphi ^{0}]=\int \!\!{p\!\left( {{\varvec{r}}}\right) \hbox { log}|\phi \!\left( {{\varvec{r}}}\right) \!/\phi ^{0}\!\left( {{\varvec{r}}}\right) \!|\,\,d{{\varvec{r}}}} \equiv \int \!\! {p\!\left( {{\varvec{r}}}\right) I_\phi \!\left( {{\varvec{r}}}\right) \,d{{\varvec{r}}}.} \end{aligned}$$
(32)

The two components of Eqs. (30) and (32) thus determine the following resultant entropy deficiency between the two complex wave functions:

$$\begin{aligned} \Delta S[\varphi |\varphi ^{0}]&= \Delta S^{class.}[\varphi | \varphi ^{0}]+\Delta S^{nclass.}[\varphi |\varphi ^{0}]= \Delta S[p|p^{0}]+\Delta S[\phi |\phi ^{0}] \nonumber \\&= \int \!\! {p\!\left( {{\varvec{r}}}\right) [I_p \!\left( {{\varvec{r}}} \right) +I_\phi \!\left( {{\varvec{r}}}\right) ]\,\,d{{\varvec{r}}}.} \end{aligned}$$
(33)

In a search for the non-classical Fisher-information distance,

$$\begin{aligned} \Delta I^{nclass.}[\varphi |\varphi ^{0}]\equiv \int \!\! {p\!\left( {{\varvec{r}}}\right) \Delta I^{nclass.}\!\left( {{\varvec{r}}}\right) d{{\varvec{r}}},} \end{aligned}$$
(34)

we use Eq. (28), which establishes a general relation between the complementary Shannon and Fisher information densities-per-electron. For comparing the two quantum states we thus propose

$$\begin{aligned} \Delta I^{nclass.}\!\left( {{\varvec{r}}}\right)&= \{\nabla [\Delta S^{nclass.}\!\left( {{\varvec{r}}}\right) ]\}^{2} = [\nabla I_\phi \!\left( {{\varvec{r}}}\right) ]{^{{2}}} = [\nabla \hbox {ln}|\phi \!\left( {{\varvec{r}}}\right) /\phi ^{0} \!\left( {{\varvec{r}}}\right) \!|]^{{2}}\nonumber \\&= \{ \nabla \phi \! \left( {{\varvec{r}}}\right) /\phi \!\left( {{\varvec{r}}} \right) - \nabla \phi ^{0} \!\left( {{\varvec{r}}}\right) / \phi ^{0}\!\left( {{\varvec{r}}}\right) \}^{2}, \end{aligned}$$
(35)

where we have used the gradient identity

$$\begin{aligned} \nabla |\phi ({{\varvec{r}}})| = \nabla [\phi ^{{2}}({{\varvec{r}}})] {^{{1}\!/{2}}} = [\phi \!\left( {{\varvec{r}}}\right) \!/|\phi \!\left( {{\varvec{r}}}\right) \!|]\nabla \phi \!\left( {{\varvec{r}}}\right) = \hbox {sign}[\phi \!\left( {{\varvec{r}}}\right) ]\nabla \phi \!\left( {{\varvec{r}}}\right) . \end{aligned}$$

Since the phase gradients are related to the corresponding “velocities”, which measure the corresponding currents-per-particle [see Eq. (15)],

$$\begin{aligned} {{\varvec{V}}}= \frac{{{\varvec{j}}}}{p}=\frac{\hbar }{m}\nabla \phi \qquad \hbox {and}\qquad {{\varvec{V}}}^{0}= \frac{{{\varvec{j}}}^{0}}{p^{0}}=\frac{\hbar }{m}\nabla \phi ^{0}, \end{aligned}$$
(36)

the resulting non-classical contribution to the Fisher measure of the quantum information distance between the two electronic states [Eq. (34)],

$$\begin{aligned} \Delta I^{nclass.}[\varphi |\varphi ^{0}]&= \int \!\! {p\!\left( {{\varvec{r}}}\right) [\nabla \hbox {log}|\phi \!\left( {{\varvec{r}}}\right) \!/\phi ^{0}\!\left( {{\varvec{r}}}\right) \!|]^{{2}}\,\,d{{\varvec{r}}}} \nonumber \\&= \int \!\! {p\!\left( {{\varvec{r}}}\right) [\nabla I_\phi \!\left( {{\varvec{r}}}\right) ]^{{2}} \,\,d{{\varvec{r}}}\equiv \Delta I[\phi |\phi ^{0}],} \end{aligned}$$
(37)

represents the average value of the squared combination of currents (velocities) in the two states compared.

Together the two components of Eqs. (31) and (37) determine the resultant Fisher-information distance between the two complex wave functions:

$$\begin{aligned} \Delta I[\varphi |\varphi ^{0}]&= \Delta I^{class.}[\varphi |\varphi ^{0}]+\Delta I^{nclass.}[\varphi |\varphi ^{0}]=\Delta I[p|p^{0}]+\Delta I[\phi |\phi ^{0}] \nonumber \\&= \int \!\! p\!\left( {{\varvec{r}}}\right) \{[\nabla I_p \!\left( {{\varvec{r}}}\right) ]^{{2}}+[\nabla I_\phi \!\left( {{\varvec{r}}}\right) ]^{{2}}\}\,\,d{{\varvec{r}}}. \end{aligned}$$
(38)

The common amount of information in two dependent events \(a_{i}\) and \(b_{j}\), \(I(i:j)\), measuring the information about \(a_{i}\) provided by the occurrence of \(b_{j}\) or the information about \(b_{j}\) provided by the occurrence of \(a_{i}\), determines the mutual information in these two events,

$$\begin{aligned} I(i:j)&= \hbox { log}[P(a_i \wedge b_j )/P\!\left( {a_i } \right) P\!\left( {b_j } \right) ] = {\hbox {log}}[\pi _{i,j} /(p_i q_j )] \nonumber \\&\equiv -\hbox {log}p_i -\hbox {log}q_j +\hbox { log}\pi _{i,j} \equiv I\!\left( i \right) +I\!\left( j \right) -I(i\wedge j) \nonumber \\&\equiv \hbox {log}[P(i|j)/p_i ]=-\hbox {log}p_i +\hbox { log}P(i|j)=I\!\left( i \right) -I(i|j) \nonumber \\&\equiv \hbox {log}[P(j|i)/q_j ]=-\hbox {log}q_j +\hbox { log}P(j|i)=I\!\left( j \right) -I(j|i)=I\!\left( {j:i} \right) ;\nonumber \\ \end{aligned}$$
(39)

where \(P(a_{i}\wedge b_{j}) \equiv \pi _{i,j}\) stands for the probability of the joint event, of simultaneously observing \(a_{i}\) and \(b_{j}\), while the quantity \(I(i{\vert }j) = -\log P(i{\vert }j)\) measures the conditional entropy in \(a_{i}\) given the occurrence of \(b_{j}\), or the self-information in the conditional event of observing \(a_{i}\) given \(b_{j}\). The mutual information \(I(i:j)\) may take on any real value, positive, negative, or zero: it vanishes, when both events are independent, i.e., when the occurrence of one event does not influence (or condition) the probability of the occurrence of the other event, and it is negative, when the occurrence of one event makes a non-occurrence of the other event more likely. It also follows from the preceding equation that the self-information of the joint event \(I(i \wedge j)=-\log \pi _{i,j}\) reads:

$$\begin{aligned} I(i\wedge j)=I\!\left( i \right) +I\!\left( j \right) -I\!\left( {i:j} \right) . \end{aligned}$$
(40)

Thus, the information in the joint occurrence of two events \(a_{i}\) and \(b_{j}\) is the information in the occurrence of \(a_{i}\) plus that in the occurrence of \(b_{j}\) minus the mutual information. Clearly, for independent events, when \(\pi _{i,j}=\pi _{i,j}^{0}=p_{i}q_{j}, I(i:j) = 0\) and hence \(I(i \wedge j)=I(i)+I(j)\).
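A quick numerical check of these identities, with assumed event probabilities \(p_{i} = 0.4\), \(q_{j} = 0.5\) and \(\pi _{i,j} = 0.3\): \(I(i:j) = \log _{2}[0.3/(0.4\times 0.5)] = \log _{2}1.5 \approx 0.585\) bits \(> 0\), so the occurrence of one event makes the occurrence of the other more likely, while \(I(i \wedge j)=-\log _{2}0.3 \approx 1.737\) bits indeed equals \(I(i)+I(j)-I(i:j) \approx 1.322+1.000-0.585\) bits.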

The mutual information of an event with itself defines its self-information: \(I(i:i) \equiv I(i) = \log [P(i{\vert } i)/p_{i}] = -\log p_{i}\), since for a single event \(P(i{\vert }i) = 1\). It vanishes for \(p_{i} = 1\), i.e., when there is no uncertainty about the occurrence of \(a_{i}\), so that the occurrence of this event removes no uncertainty, hence conveys no information. This quantity provides a measure of the uncertainty about the occurrence of the event itself, i.e., the information received when the event actually takes place.

Consider now two mutually dependent (discrete) probability vectors for different sets of events, \({{\varvec{P}}}({{\varvec{a}}}) = \{P(a_{i})=p_{i}\} \equiv {{\varvec{p}}}\) and \({{\varvec{P}}}({{\varvec{b}}}) = \{P(b_{j})=q_{j}\} \equiv {{\varvec{q}}}\) (see Fig. 1). One decomposes the joint probabilities of the simultaneous events \({{\varvec{a}}}\wedge {{\varvec{b}}} = \{a_{i} \wedge b_{j}\}\) in these two schemes, \(\mathbf{P}({{\varvec{a}}}\wedge {{\varvec{b}}}) = \{P(a_{i} \wedge b_{j})=\pi _{i,j}\} \equiv {\varvec{\uppi }}\), as products of the “marginal” probabilities of events in one set, say \({{\varvec{P}}}({{\varvec{a}}})\), and the corresponding conditional probabilities \(\mathbf{P}({{\varvec{b}}}\vert {{\varvec{a}}}) = \{P(j{\vert }i)\}\) of outcomes in the other set \({{\varvec{b}}}\), given that events \({{\varvec{a}}}\) have already occurred [see Eq. (39)]:

$$\begin{aligned} {\varvec{\uppi }} =\{\pi _{i,j} =p_i P(j|i)\}. \end{aligned}$$
(41)

The relevant normalization conditions for such joint and conditional probabilities read:

$$\begin{aligned} \sum _j \pi _{i,j} =p_i ,\quad \sum _i \pi _{i,j} =q_j ,\quad \sum _i \sum _j \pi _{i,j} =1,\quad \sum _j P(j|i)=1,\quad i=1, 2,\ldots \end{aligned}$$
(42)

The Shannon entropy of the product distribution \({\varvec{\uppi }}, S({\varvec{\uppi }}) = -\sum _{i}\sum _{j} \pi _{i,j}\log \pi _{i,j}\),

$$\begin{aligned} S({\varvec{\uppi }})&= -\sum _i \sum _j p_i P(j|i)[\log p_i + \log P(j|i)] \nonumber \\&= -\left[ \sum _j P(j|i)\right] \sum _i p_i \log p_i -\sum _i \sum _j \pi _{i,j} \log P(j|i) \nonumber \\&\equiv S({{\varvec{p}}})+\sum _i p_i S({{\varvec{q}}}|i) \equiv S({{\varvec{p}}})+S({{\varvec{q}}}|{{\varvec{p}}}), \end{aligned}$$
(43)

can be thus expressed as the sum of the average entropy in the marginal probability distribution, \(S({{\varvec{p}}})\), and the average conditional entropy in \({{\varvec{q}}}\) given \({{\varvec{p}}}\),

$$\begin{aligned} S({{\varvec{q}}}|{{\varvec{p}}})=-\sum _i \sum _j \pi _{i,j} \log P(j|i)=-\sum _i p_i \!\left[ \sum _j P(j|i)\log P(j|i)\right] . \end{aligned}$$
(44)

The latter represents the extra amount of uncertainty/information about the occurrence of events \({{\varvec{b}}}\), given that the events \({{\varvec{a}}}\) are known to have occurred. In other words: the amount of information obtained from simultaneously observing the events \({{\varvec{a}}}\) and \({{\varvec{b}}}\) equals the amount of information in one set, say \({{\varvec{a}}}\), supplemented by the extra information provided by the occurrence of events in the other set \({{\varvec{b}}}\), when \({{\varvec{a}}}\) are known to have occurred already (see Fig. 1).

Fig. 1

Diagram of the conditional-entropy and mutual-information quantities for two dependent probability distributions \({{\varvec{p}}}\) and \({{\varvec{q}}}\). Two circles enclose areas representing the entropies \(S({{\varvec{p}}})\) and \(S({{\varvec{q}}})\) of two separate probability vectors, while their common (overlap) area corresponds to the mutual information \(I({{\varvec{p}}}:{{\varvec{q}}})\) in these two distributions. The remaining part of each circle represents the corresponding conditional entropy, \(S({{\varvec{p}}}{\vert }{{\varvec{q}}})\) or \(S({{\varvec{q}}}{\vert }{{\varvec{p}}})\), measuring the residual uncertainty/information about events in one set of outcomes, when one has the full knowledge of the occurrence of events in the other set. The area enclosed by the circle envelope then represents the entropy of the “product” (joint) distribution: \(S({\varvec{\uppi }}) = S(\mathbf{P}({{\varvec{a}}}\wedge {{\varvec{b}}})) = S({{\varvec{p}}}) + S({{\varvec{q}}}) - I({{\varvec{p}}}:{{\varvec{q}}}) = S({{\varvec{p}}}) + S({{\varvec{q}}}{\vert }{{\varvec{p}}}) = S({{\varvec{q}}}) + S({{\varvec{p}}}{\vert }{{\varvec{q}}})\)

The classical Shannon entropy [Eq. (16)] can be thus interpreted as the mean value of self-informations in all individual events: \(S({{\varvec{p}}}) = \sum _{i}p_{i}I(i)\). One similarly defines the average mutual information in two probability distributions as the (\({\varvec{\uppi }}\)-weighted) mean value of the mutual information quantities for individual joint events (see also Fig. 1):

$$\begin{aligned} I\!\left( {{{\varvec{p}}}:{{\varvec{q}}}} \right)&= \sum _i \sum _j \pi _{i,j} I\!\left( {i:j} \right) =\sum _i \sum _j \pi _{i,j} \hbox {log}( \pi _{i,j} /\pi _{i,j} ^{0}) \nonumber \\&= S\!\left( {{\varvec{p}}} \right) +S\!\left( {{\varvec{q}}} \right) -S({\varvec{\uppi }})=S\!\left( {{\varvec{p}}} \right) -S({{\varvec{p}}}|{{\varvec{q}}})=S\!\left( {{\varvec{q}}} \right) -S({{\varvec{q}}}|{{\varvec{p}}})\ge 0. \end{aligned}$$
(45)

The equality holds only for independent distributions, when \(\pi _{i,j}=p_{i} q_{j}\equiv \pi _{i,j}^{0}\). Indeed, the amount of uncertainty in \({{\varvec{q}}}\) can only decrease, when \({{\varvec{p}}}\) has been known beforehand, \(S({{\varvec{q}}}) \ge S({{\varvec{q}}}{\vert }{{\varvec{p}}}) = S({{\varvec{q}}})-I({{\varvec{p}}}:{{\varvec{q}}})\), with equality being observed only when the two sets of events are independent, thus giving non-overlapping entropy circles in Fig. 1.

The average mutual information is an example of the entropy deficiency, measuring the missing information between the joint probabilities \(\mathbf{P}({{\varvec{a}}}\wedge {{\varvec{b}}}) = {\varvec{\uppi }}\) of the dependent events \({{\varvec{a}}}\) and \({{\varvec{b}}}\), and the joint probabilities \(\mathbf{P}^{ind.}({{\varvec{a}}}^{0}\wedge {{\varvec{b}}}^{0}) = {\varvec{\uppi }}^{0} = {{\varvec{p}}}^{\mathrm{T}}{{\varvec{q}}}\) for the independent events: \(I({{\varvec{p}}}:{{\varvec{q}}}) = \Delta S( {\varvec{\uppi }}{\vert } {\varvec{\uppi }}^{0})\). The average mutual information thus reflects a dependence between events defining the two probability schemes. A similar information-distance interpretation can be attributed to the average conditional entropy: \(S({{\varvec{p}}}{\vert }{{\varvec{q}}}) = S({{\varvec{p}}}) - \Delta S({\varvec{\uppi }}{\vert }{\varvec{\uppi }}^{0})\).
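These distribution-level identities are easily checked on a small example. The sketch below (our illustration, using an assumed \(2\times 2\) joint-probability table) evaluates the quantities of Eqs. (43)–(45):

```python
import numpy as np

def H(dist):
    """Shannon entropy in bits, Eq. (16)."""
    d = np.asarray(dist).ravel()
    d = d[d > 0]
    return float(-(d * np.log2(d)).sum())

pi = np.array([[0.4, 0.1],      # assumed joint probabilities pi[i, j]
               [0.1, 0.4]])
p = pi.sum(axis=1)              # marginal distribution P(a), Eq. (42)
q = pi.sum(axis=0)              # marginal distribution P(b)

S_q_given_p = H(pi) - H(p)      # S(q|p), from Eq. (43)
I_pq = H(p) + H(q) - H(pi)      # mutual information, Eq. (45)

print(H(pi), H(p) + S_q_given_p)   # S(pi) = S(p) + S(q|p) ~ 1.722 bits
print(I_pq, H(q) - S_q_given_p)    # I(p:q) = S(q) - S(q|p) ~ 0.278 bits >= 0
```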

5 Communication channels

We continue this short IT overview with the entropy/information descriptors of the transmission of electron-assignment “signals” in molecular communication systems [11–13]. The classical orbital networks [3, 6, 11–13, 18] propagate probabilities of electron assignments to the basis functions of SCF MO calculations, while the quantum channels [13, 33] scatter wave functions, i.e., (complex) probability amplitudes, between such elementary states. The former lose memory of the phase aspect of this information propagation, which becomes crucial in the multi-stage (cascade, bridge) propagations [28]. In determining the underlying conditional probabilities of the output-orbital events, given the input-orbital events, or the scattering amplitudes of the emitting (input) states among the monitoring/receiving (output) states, one uses the (bond-projected) superposition principle (SP) of quantum mechanics [42] (see also the next section).

We begin with some rudiments of the classical information systems. The basic elements of such a “device” are shown in Fig. 2. The signal emitted from \(n\) “inputs” \({{\varvec{a}}} = (a_{1}, a_{2}, {\ldots }, a_{n})\) of the channel source A is characterized by the a priori probability distribution \({{\varvec{P}}}({{\varvec{a}}}) = {{\varvec{p}}} = (p_{1}, p_{2}, {\ldots }, p_{n})\), which describes the way the channel is exploited. It can be received at \(m\) “outputs” \({{\varvec{b}}} = (b_{1}, b_{2}, {\ldots }, b_{m})\) of the system receiver B. The transmission of signals in such a communication network is randomly disturbed, thus exhibiting a typical communication noise. Indeed, the signal sent at the given input can in general be received with a non-zero probability at several outputs. This feature of communication systems is described by the conditional probabilities of the outputs-given-inputs, \(\mathbf{P}({{\varvec{b}}}{\vert }{{\varvec{a}}}) = \{P(b_{j}{\vert }a_{i})=P(a_{i}\wedge b_{j})/P(a_{i}) \equiv P(j {\vert } i)\}\), where the probabilities of the joint occurrence of the specified pairs of input–output events, \(\{P(a_{i}\wedge b_{j})\equiv \pi _{i,j}\} = {\varvec{\uppi }}\), define the simultaneous probability matrix. The distribution of the output signal among the detection events \({{\varvec{b}}}\) is thus given by the a posteriori (output) probability distribution

$$\begin{aligned} {{\varvec{P}}}({{\varvec{b}}})={{\varvec{q}}}=(q_1 ,q_2 ,\ldots ,q_m )={{\varvec{p}}}\, \mathbf{P}({{\varvec{b}}}|{{\varvec{a}}}). \end{aligned}$$

The input probabilities reflect the way the channel is used (probed). The Shannon entropy \(S({{\varvec{p}}})\) of the source probabilities \({{\varvec{p}}}\) determines the channel a priori entropy. The average conditional entropy of the outputs-given-inputs, \(S({{\varvec{q}}}{\vert }{{\varvec{p}}}) \equiv H(\mathbf{B}{\vert }\mathbf{A})\), is determined by the scattering probabilities \(\mathbf{P}({{\varvec{b}}}{\vert }{{\varvec{a}}})\). It measures the average noise in the \({{\varvec{a}}}\rightarrow {{\varvec{b}}}\) transmission. The so-called a posteriori entropy of the inputs-given-outputs, \(H(\mathbf{A}{\vert }\mathbf{B})\equiv S({{\varvec{p}}}{\vert }{{\varvec{q}}})\), is similarly defined by the conditional probabilities of the \({{\varvec{b}}}\rightarrow {{\varvec{a}}}\) signals: \(\mathbf{P}({{\varvec{a}}}{\vert }{{\varvec{b}}}) = \{P(a_{i}{\vert }b_{j})=P(a_{i}\wedge b_{j})/P(b_{j})=P(i{\vert }j)\}\). It reflects the residual indeterminacy about the input signal, when the output signal has already been received. The average conditional entropy \(S({{\varvec{p}}}{\vert }{{\varvec{q}}})\) thus measures the indeterminacy of the source with respect to the receiver, while the conditional entropy \(S({{\varvec{q}}}{\vert }{{\varvec{p}}})\) reflects the uncertainty of the receiver relative to the source. An observation of the output signal thus provides on average the amount of information given by the difference between the a priori and a posteriori uncertainties, \(S({{\varvec{p}}}) - S({{\varvec{p}}}{\vert }{{\varvec{q}}}) = I({{\varvec{p}}}:{{\varvec{q}}})\), which defines the mutual information in the source and receiver. In other words, the mutual information measures the net amount of information transmitted through the communication channel, while the conditional entropy \(S({{\varvec{p}}}{\vert }{{\varvec{q}}})\) reflects the fraction of \(S({{\varvec{p}}})\) transformed into “noise” as a result of the input signal being scattered in the information channel. Accordingly, \(S({{\varvec{q}}}{\vert }{{\varvec{p}}})\) reflects the noise part of \(S({{\varvec{q}}}) = S({{\varvec{q}}}{\vert }{{\varvec{p}}}) + I({{\varvec{p}}}:{{\varvec{q}}})\) (see Fig. 1).

Fig. 2

Schematic diagram of the communication system characterized by two probability vectors: \({{\varvec{P}}}({{\varvec{a}}}) = \{P(a_{i})\} = {{\varvec{p}}} = (p_{1}, {\ldots }, p_{n})\), of the channel “input” events \({{\varvec{a}}} = (a_{1}, {\ldots }, a_{n})\) in the system source A, and \({{\varvec{P}}}(\mathbf{b }) = \{P(b_{j})\} = {{\varvec{q}}} = (q_{1}, {\ldots }, q_{m})\), of the “output” events \({{\varvec{b}}} = (b_{1}, {\ldots }, b_{m})\) in the system receiver B. The transmission of signals in this communication channel is described by the (\(n\times m\))-matrix of the conditional probabilities \(\mathbf{P}({{\varvec{b}}}{\vert }{{\varvec{a}}}) = \{P(b_{j}{\vert }a_{i})\equiv P(j{\vert }i)\}\), of observing different “outputs” (columns, \(j = 1, 2, {\ldots }, m)\), given the specified “inputs” (rows, \(i = 1, 2, {\ldots }, n\)). For clarity, only a single scattering \(a_{i} \rightarrow b_{j}\) is shown in the diagram
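As an elementary illustration (ours, not taken from the original text), consider a noisy binary channel with an assumed error rate \(\varepsilon \); the sketch below computes the output distribution \({{\varvec{q}}} = {{\varvec{p}}}\,\mathbf{P}({{\varvec{b}}}{\vert }{{\varvec{a}}})\), the communication noise \(S({{\varvec{q}}}{\vert }{{\varvec{p}}})\) and the information flow \(I({{\varvec{p}}}:{{\varvec{q}}})\):

```python
import numpy as np

def H(dist):
    """Shannon entropy in bits."""
    d = np.asarray(dist).ravel()
    d = d[d > 0]
    return float(-(d * np.log2(d)).sum())

eps = 0.1                                  # assumed channel error rate
p = np.array([0.5, 0.5])                   # a priori input probabilities
P_cond = np.array([[1 - eps, eps],         # conditional matrix P(b_j | a_i)
                   [eps, 1 - eps]])

q = p @ P_cond                             # a posteriori output distribution
pi = p[:, None] * P_cond                   # joint probabilities pi[i, j]

noise = H(pi) - H(p)                       # S(q|p): average communication noise
flow = H(p) + H(q) - H(pi)                 # I(p:q): net information flow
print(q)                                   # [0.5, 0.5]
print(noise, flow)                         # ~0.469 and ~0.531 bits
print(noise + flow, H(q))                  # S(q) = S(q|p) + I(p:q)
```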

In OCT the orbital channels [3, 6, 11–13] propagate probabilities of electron assignments to the basis functions of SCF MO calculations, e.g., atomic orbitals (AO) \({\varvec{\chi }} = (\chi _{1}, \chi _{2}, {\ldots }, \chi _{m})\). The underlying conditional probabilities of the output AO events, given the input AO events, \(\mathbf{P}({\varvec{\chi }}'{\vert }{\varvec{\chi }}) = \{P(\chi _{j}{\vert }\chi _{i})\equiv P(j{\vert }i) \equiv P_{i\rightarrow j } \equiv A(j{\vert }i)^{2}\equiv (A_{i\rightarrow j})^{2}\}\), or the associated scattering amplitudes \(\mathbf{A}({\varvec{\chi }}'{\vert }{\varvec{\chi }}) = \{A(j{\vert }i) = A_{i\rightarrow j}\}\) of the emitting (input) states \({{\varvec{a}}} = {\vert }{\varvec{\chi }}\rangle = \{{\vert }\chi _{i}\rangle \}\) among the monitoring/receiving (output) states \({{\varvec{b}}} = {\vert }{\varvec{\chi }}'\rangle = \{{\vert }\chi _{j}\rangle \}\), result from the (bond-projected) SP of quantum mechanics [42]. The local description (LCT) similarly invokes the basis functions \(\{{\vert }{{\varvec{r}}}\rangle \}\) of the position representation, identified by the continuous labels of the spatial coordinates determining the location \({{\varvec{r}}}\) of an electron. This complete basis set then determines both the input \({{\varvec{a}}} = \{{\vert }{{\varvec{r}}}\rangle \}\) and output \({{\varvec{b}}} = \{{\vert }{{\varvec{r}}}'\rangle \}\) events of the local molecular channel determined by the relevant kernel of conditional probabilities: \(P({{\varvec{r}}}'{\vert }{{\varvec{r}}})= P_{{{\varvec{r}}}\rightarrow {{\varvec{r}}}^{\prime }}= (A_{{{\varvec{r}}}\rightarrow {{\varvec{r}}}^{\prime }})^{2}\) [43].

In OCT the entropy/information indices of the covalent/ionic components of the system chemical bonds respectively represent the complementary descriptors of the average communication noise and amount of information flow in the molecular channel. One observes that the molecular input \({{\varvec{P}}}({{\varvec{a}}}) \equiv {{\varvec{p}}}\) generates the same distribution in the output of this network, \({{\varvec{q}}} = {{\varvec{p}}}\, \mathbf{P}({{\varvec{b}}}{\vert }{{\varvec{a}}}) = \{\sum _{i} p_{i }\,P(j{\vert }i)\equiv \sum _{i }P(i\wedge j)=p_{j}\} = {{\varvec{p}}}\), thus identifying \({{\varvec{p}}}\) as the stationary vector of AO-probabilities in the molecular ground state. This purely molecular communication channel is devoid of any reference (history) of the chemical bond formation and generates the average noise index of the IT bond-covalency measured by the average conditional-entropy of the system outputs-given-inputs: \(S({{\varvec{P}}}({{\varvec{b}}}){\vert }{{\varvec{P}}}({{\varvec{a}}})) = S({{\varvec{q}}}{\vert }{{\varvec{p}}}) \equiv S\).

The AO channel with the promolecular input signal, \({{\varvec{P}}}({{\varvec{a}}}_{0}) = {{\varvec{p}}}_{0}=\{p_{i}^{0}\}\), of AO in the system free constituent atoms, refers to the initial stage in the bond-formation process. It corresponds to the ground-state (fractional) occupations of the AO contributed by the system constituent atoms, before their mixing into MO. These input probabilities give rise to the average information flow index of the system IT bond-ionicity, given by the mutual-information in the channel promolecular inputs and molecular outputs:

$$\begin{aligned} I({{\varvec{P}}}({{\varvec{a}}}_{0}):{{\varvec{P}}}({{\varvec{b}}}))&= I({{\varvec{p}}}_{0}:{{\varvec{q}}}) = \sum \limits _i \sum \limits _j \pi _{i,j}\, \hbox {log}[\pi _{i,j}/(q_j\, p_i^{0})]\nonumber \\&= \sum \limits _i \sum \limits _j \pi _{i,j} [-\hbox {log}\,q_j +\hbox { log}\!\left( {p_i /p_i ^{0}} \right) +\hbox { log}P(j|i)] \nonumber \\&= S\!\left( {{\varvec{q}}} \right) +\Delta S({{\varvec{p}}}|{{\varvec{p}}}_0 )-S\equiv I_0. \end{aligned}$$
(46)

This amount of information reflects the fraction of the initial (promolecular) information content \(S({{\varvec{p}}}_{0})\) which has not been dissipated as noise in the molecular communication system. In particular, for the molecular input, when \({{\varvec{p}}}_{0}= {{\varvec{p}}}\) and the information distance \(\Delta S({{\varvec{p}}}{\vert }{{\varvec{p}}}_{0})\) thus vanishes,

$$\begin{aligned} I\!\left( {{{\varvec{p}}}:{{\varvec{q}}}} \right) =S\!\left( {{\varvec{q}}} \right) -S\equiv I. \end{aligned}$$
(47)

The sum of these two bond components, e.g.,

$$\begin{aligned} M\!\left( {{\varvec{P}}}\!\left( {{{\varvec{a}}}_0 } \right) ; {{{\varvec{P}}}\!\left( {{\varvec{b}}} \right) } \right) =M\!\left( {{{\varvec{p}}}_0 ;{{\varvec{q}}}} \right) = S+I_0 =S\!\left( {{\varvec{q}}} \right) +\Delta S({{\varvec{p}}}|{{\varvec{p}}}_0 )\equiv M_0 , \end{aligned}$$
(48)

measures the absolute overall IT bond-multiplicity index of all bonds in the molecular system under consideration, relative to the promolecular reference. For the molecular input this quantity preserves the Shannon entropy of the molecular input probabilities:

$$\begin{aligned} M\!\left( {{{\varvec{p}}} ;{{\varvec{q}}}} \right) =S({{\varvec{q}}}|{{\varvec{p}}})+I \!\left( {{{\varvec{p}}}:{{\varvec{q}}}} \right) =S\!\left( {{\varvec{q}}} \right) \equiv M. \end{aligned}$$
(49)

The relative index [43],

$$\begin{aligned} \Delta M=M-M_0 =\Delta S({{\varvec{p}}}|{{\varvec{p}}}_0 ), \end{aligned}$$
(50)

reflecting the multiplicity changes due to the chemical bonds alone, is then interaction dependent. It correctly vanishes in the dissociation limit of the separated atoms, when \({{\varvec{p}}}_{0}\) and \({{\varvec{p}}}\) become identical. The entropy deficiency index \(\Delta S({{\varvec{p}}}{\vert }{{\varvec{p}}}_{0})\), reflecting the information distance between the molecular probability distribution and that of the promolecule, generated by the constituent free atoms, thus represents the overall IT difference-index of the system chemical bonds.
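For the familiar two-AO model of the single covalent bond, e.g., in H\(_2\), these indices can be evaluated directly. The sketch below is our illustration, assuming the symmetric channel of a purely covalent bond, with the molecular input \({{\varvec{p}}} = {{\varvec{p}}}_{0} = (1/2, 1/2)\) and the conditional probabilities \(P(j{\vert }i) = 1/2\); it reproduces the limiting descriptors \(S = 1\) bit, \(I_{0} = 0\) and \(M_{0} = 1\) bit:

```python
import numpy as np

def H(dist):
    """Shannon entropy in bits."""
    d = np.asarray(dist).ravel()
    d = d[d > 0]
    return float(-(d * np.log2(d)).sum())

p0 = np.array([0.5, 0.5])            # promolecular AO input probabilities
p = np.array([0.5, 0.5])             # molecular AO probabilities (= p0 here)
P_cond = np.array([[0.5, 0.5],       # assumed symmetric covalent-bond channel
                   [0.5, 0.5]])

q = p @ P_cond                       # stationary output distribution
pi = p[:, None] * P_cond             # joint probabilities

S = H(pi) - H(p)                              # covalency S(q|p)
dS = float(np.sum(p * np.log2(p / p0)))       # entropy deficiency, Eq. (29)
I0 = H(q) + dS - S                            # ionicity, Eq. (46)
M0 = S + I0                                   # total bond index, Eq. (48)
print(q, S, I0, M0)                  # [0.5 0.5], 1.0 bit, 0.0, 1.0 bit
```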

6 Superposition principle, conditional probabilities and non-additive quantities

Let us recall SP of quantum mechanics [42]:

Any combination \({\vert }\psi \rangle = \sum _{k} C_{k}{\vert }\psi _{k}\rangle \) of the admissible (orthonormal) quantum states \(\{{\vert }\psi _{k}\rangle \}\), where the set \(\{C_{k}=\langle \psi _{k}{\vert }\psi \rangle \}\) denotes generally complex expansion coefficients, also represents a possible quantum state of the system under consideration. The projections determining the expansion coefficients represent quantum amplitudes \(\{C_{k }=A(\psi _{k}{\vert }\psi ) = \langle \psi _{k}{\vert }\psi \rangle \}\) of the conditional probabilities

$$\begin{aligned} P(\psi _k |\psi )&= |A(\psi _k |\psi )|^{{2}}=|C_k |^{{2}}=\langle \psi _k |\psi \rangle \langle \psi |\psi _k \rangle \equiv \langle \psi _k |\hat{\mathrm{P}}_\psi |\psi _k \rangle \nonumber \\&=\langle \psi |\psi _k \rangle \langle \psi _k |\psi \rangle \equiv \langle \psi |\hat{\mathrm{P}}_k |\psi \rangle , \end{aligned}$$
(51)

of observing \({\vert }\psi _{k}\rangle \) given \({\vert }\psi \rangle \). For the complete (variable) set \(\{{\vert }\psi _{k}\rangle \}\), when \(\sum _{k}{\vert }\psi _{k}\rangle \langle \psi _{k}{\vert } \equiv \sum _{k}\hat{\mathrm{P}}_k = 1\), these probabilities are normalized:

$$\begin{aligned} \sum _k P(\psi _k |\psi )=1. \end{aligned}$$
(52)

This axiom formally introduces the conditional probabilities between the specified quantum states, of observing the variable (monitoring) state \({\vert }\psi _{k}\rangle \) given the reference (parameter) state \({\vert }\psi \rangle \), which define the associated molecular communications. They are given by the expectation value in the variable state of the projection operator onto the reference state.

Let us now examine the time evolution of the representative conditional probability of observing \({\vert }\theta (t)\rangle \) given \({\vert }\psi (t)\rangle , P[\theta (t){\vert }\psi (t)] = \langle \theta (t)\vert \hat{\mathrm{P}}_{\psi }(t)\vert \theta (t)\rangle \). In the Schrödinger picture of quantum mechanics the dynamics of quantum states is governed by the system energy operator \(\hat{\mathrm{H}}\):

$$\begin{aligned} \hbox {i}\hbar \,\partial |\psi \!\left( t \right) \rangle /\partial t=\hat{\mathrm{H}}|\psi \!\left( t \right) \rangle . \end{aligned}$$
(53)

This Schrödinger equation then gives the following expression for the time derivative of the conditional probability \(P[\theta (t){\vert }\psi (t)]\) in question:

$$\begin{aligned}&\partial P[\theta \!\left( t \right) |\psi (t)]/\partial t=\{\langle \partial \theta \!\left( t \right) \!/\partial t|\hat{\mathrm{P}}_{\psi }(t)|\theta \!\left( t \right) \rangle +\langle \theta (t)|\hat{\mathrm{P}}_{\psi }(t)|\partial \theta (t)/\partial t\rangle \}\nonumber \\&\qquad +\langle \theta \!\left( t \right) |\partial \hat{\mathrm{P}}_\psi (t)/\partial t|\theta (t)\rangle \nonumber \\&\quad =\!\left( {\hbox {i}\hbar } \right) ^{-{1}}\langle \theta \!\left( t \right) \!|[\hat{\mathrm{P}}_{\psi }(t),\hat{\mathrm{H}}] + [\hat{\mathrm{H}},\hat{\mathrm{P}}_{\psi }(t)]|\theta \!\left( t \right) \rangle =0. \end{aligned}$$
(54)

Therefore, the conditional probabilities between general quantum states, which determine the associated information channels, remain conserved in time and so do their entropic descriptors of the bond multiplicity and composition.

The amplitude \(A(\psi _{k}{\vert }\psi )\) preserves the relative phase of the two states involved, which is responsible for the quantum-mechanical interference. As an illustration consider the two (complex, orthonormal) basis states:

$$\begin{aligned} {\varvec{\psi }} =\{\psi _k =R_k \hbox {exp}(\hbox {i}\phi _k), \ \langle \psi _k |\psi _l \rangle =\delta _{k,l} ;\,(k,l)\in \!\left( {{1}, \hbox {2}} \right) \}, \end{aligned}$$
(55)

in the combined molecular state

$$\begin{aligned} {\psi } = \hbox {2}^{-{1}/{2}}(\psi _{1} +\psi _{2} ). \end{aligned}$$
(56)

Expressing the probability density \(P = {\vert }\psi {\vert }^{2}\) in terms of probabilities \(\{p_{k} = {\vert }\psi _{k}{\vert }^{2}= R_{k}^{2 }=p[\psi _{k}]\}\) and phases \(\{\phi _{k}\}\) of two individual states in this combination gives the familiar result:

$$\begin{aligned} p=\frac{1}{2}\!\left( {p_{1} +p_{2} } \right) +\sqrt{p_1 p_2 } \hbox {cos}(\phi _{1} -\phi _{2} )\equiv p_{\varvec{\psi }} ^{add.} \;+\;p_{\varvec{\psi }}^{nadd.}. \end{aligned}$$
(57)

It identifies the superposition term \(p_{\varvec{\psi }}^{nadd.} \), depending upon the relative phase \(\phi _{rel.} = \phi _{1 }-\phi _{2 }\) of two functions in the combined state, as the non-additive probability contribution,

$$\begin{aligned} p_{\varvec{\psi }}^{nadd.} =p_{\varvec{\psi }}^{total} -p_{\varvec{\psi }}^{add.} , \end{aligned}$$
(58a)

expressed as the difference between the total probability in \(\psi \)-resolution,

$$\begin{aligned} p_{\varvec{\psi }}^{total} =p\equiv p[\psi ], \end{aligned}$$
(58b)

and its additive contribution given by the weighted average of probability distributions of individual states,

$$\begin{aligned} p_{\varvec{\psi }}^{add.} =P(\psi _{1} |\psi )p_{1} +P(\psi _{2} |\psi )p_{2} \equiv \frac{1}{2}(p[\psi _{1}]+p[\psi _{2}]). \end{aligned}$$
(58c)

In accordance with the SP, the probability “weights” in the preceding equation are provided by the squares of the combination coefficients, which define the relevant conditional probabilities \(P(\psi _{1}{\vert }\psi )=P(\psi _{2}{\vert }\psi ) = 1/2\):

$$\begin{aligned} \psi =\sqrt{P(\psi _1 \left| \psi \right. )}\psi _{1} +\sqrt{P(\psi _2 \left| \psi \right. )}\psi _{2} =\frac{1}{\sqrt{2}}(\psi _1 +\psi _2 ),\qquad P(\psi _{1} |\psi )+P(\psi _{2} |\psi )= \hbox {1}. \end{aligned}$$
(59)

The probability interference term

$$\begin{aligned} p_{\varvec{\psi }}^{nadd.} =\sqrt{P(\psi _1 \left| \psi \right. )}\sqrt{P(\psi _2 \left| \psi \right. )}\!\left[ {\psi _{1} ^{*}\psi _{2} +\psi _{2} ^{*}\psi _{1} } \right] =\sqrt{p_1 p_2 }\hbox {cos}\phi _{rel.} , \end{aligned}$$
(60)

is responsible for the direct chemical bond, say between two hydrogens, when the two (orthogonalized) 1\(s\) AO’s of the constituent atoms are mixed into the symmetric (doubly occupied) bonding MO. Clearly, in the real AO case, when \(\phi _{1 }=\phi _{2} = 0\), this contribution reduces to the geometric mean of the probability densities of the two components: \(p_{\varvec{\psi }}^{nadd.} =\sqrt{p_1 p_2 }\). It should also be observed that the overall probability \(p\) determines the resultant modulus factor \(R =\sqrt{p}\) of \(\psi =R \exp (\hbox {i}\varPhi )\), while its resultant phase follows from the equation:

$$\begin{aligned} \varPhi = \hbox {arctg}\{[R_{1} \hbox {sin}\phi _{1} +R_{2} \hbox {sin}\phi _{2}] / [R_{1} \hbox {cos}\phi _{1} +R_{2} \hbox {cos}\phi _{2} ]\}. \end{aligned}$$
(61)
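
The decomposition of Eqs. (57)–(61) is easily verified numerically. The following sketch, e.g., in Python with NumPy, checks the probability superposition and the resultant phase for two illustrative complex states on a one-dimensional grid; the Gaussian moduli and linear phases are arbitrary choices of this example, not functions appearing in the text.

```python
# Numerical check of Eqs. (57) and (61) for psi = (psi_1 + psi_2)/sqrt(2).
# The moduli R_k and phases phi_k below are illustrative assumptions only.
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
R1, phi1 = np.exp(-0.5 * (x - 1.0)**2), 0.3 * x
R2, phi2 = np.exp(-0.5 * (x + 1.0)**2), -0.2 * x
psi1, psi2 = R1 * np.exp(1j * phi1), R2 * np.exp(1j * phi2)

psi = (psi1 + psi2) / np.sqrt(2.0)
p = np.abs(psi)**2
p1, p2 = R1**2, R2**2

p_add = 0.5 * (p1 + p2)                           # Eq. (58c)
p_nadd = np.sqrt(p1 * p2) * np.cos(phi1 - phi2)   # Eqs. (57), (60)
assert np.allclose(p, p_add + p_nadd)             # Eq. (57)

# Resultant phase, Eq. (61); arctan2 resolves the quadrant ambiguity.
Phi = np.arctan2(R1 * np.sin(phi1) + R2 * np.sin(phi2),
                 R1 * np.cos(phi1) + R2 * np.cos(phi2))
assert np.allclose(np.angle(psi), Phi)
print("Eqs. (57) and (61) verified numerically.")
```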

Of interest also is the related partitioning of the probability-current density in this combined state, into the corresponding additive and non-additive contributions [33]:

$$\begin{aligned} {{\varvec{j}}}&= {{\varvec{j}}}[\psi ]\equiv {{\varvec{j}}}_{\varvec{\psi }} ^{total} ={{\varvec{j}}}_{\varvec{\psi }}^{add.} +{{\varvec{j}}}_{\varvec{\psi }}^{nadd.} ,\end{aligned}$$
(62)
$$\begin{aligned} {{\varvec{j}}}_{\varvec{\psi }}^{add.}&\equiv P(\psi _{1} |\psi ){{\varvec{j}}}_{1} +P(\psi _{2} |\psi ){{\varvec{j}}}_{2} =\frac{1}{2}({{\varvec{j}}}_{1} +{{\varvec{j}}}_{2} ),\nonumber \\ {{\varvec{j}}}_{\varvec{\psi }}^{nadd.}&= \sqrt{P(\psi _1 \left| \psi \right. )}\sqrt{P(\psi _2 \left| \psi \right. )}\frac{\hbar }{2m\hbox {i}}\!\left[ (\psi _{1} ^{*}\nabla \psi _{2} -\psi _{1} \nabla \psi _{2} ^{*})+\!\left( {\psi _{2} ^{*}\nabla \psi _{1} -\psi _{2} \nabla \psi _{1} ^{*}} \right) \right] \nonumber \\&= \frac{\hbar }{4m\hbox {i}}[(\psi _{1} ^{*}\nabla \psi _{2} -\psi _{1} \nabla \psi _{2} ^{*})+\!\left( {\psi _{2} ^{*} \nabla \psi _{1} -\psi _{2} \nabla \psi _{1} ^{*}} \right) ] \nonumber \\&= \frac{\hbar }{4m}\!\left[ (\sqrt{p_2 }\,\bar{{\nabla }}p_1 -\sqrt{p_1 }\,\bar{{\nabla }}p_2 ) \sin \phi _{rel.} +(\sqrt{p_2 }\,\bar{{{{\varvec{j}}}}}_1 +\sqrt{p_1 }\,\bar{{{{\varvec{j}}}}}_2 ) \cos \phi _{rel.} \right] ,\nonumber \\ \end{aligned}$$
(63)

where bars denote the reduced quantities:

$$\begin{aligned} \bar{{\nabla }}p_k =p_k^{-1/2} \nabla p_k \quad \hbox {and}\quad \bar{{{{\varvec{j}}}}}_k =[2m/(\hbar p_k^{1/2} )]\,{{\varvec{j}}}_k . \end{aligned}$$

This interference contribution to the resultant flow of the probability density again depends on the relative phase of the two combined states. For real member functions, whose phases and currents vanish identically, both the additive and the non-additive flows of the electron probability are seen to vanish as well.
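
The partitioning of Eqs. (62)–(63) admits the same kind of numerical check. A minimal sketch in atomic units (ħ = m = 1), with analytic gradients of the same illustrative states as above:

```python
# Numerical check of the current partitioning, Eqs. (62)-(63), hbar = m = 1.
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
R1, R2 = np.exp(-0.5 * (x - 1.0)**2), np.exp(-0.5 * (x + 1.0)**2)
dR1, dR2 = -(x - 1.0) * R1, -(x + 1.0) * R2
phi1, phi2 = 0.3 * x, -0.2 * x
dphi1, dphi2 = np.full_like(x, 0.3), np.full_like(x, -0.2)

psi1 = R1 * np.exp(1j * phi1); dpsi1 = (dR1 + 1j * R1 * dphi1) * np.exp(1j * phi1)
psi2 = R2 * np.exp(1j * phi2); dpsi2 = (dR2 + 1j * R2 * dphi2) * np.exp(1j * phi2)

def current(f, df):
    # j = (hbar/2mi)(f* f' - f f'*) = Im(f* f') for hbar = m = 1
    return np.imag(np.conj(f) * df)

j1, j2 = current(psi1, dpsi1), current(psi2, dpsi2)
p1, p2 = R1**2, R2**2
dp1, dp2 = 2.0 * R1 * dR1, 2.0 * R2 * dR2

psi = (psi1 + psi2) / np.sqrt(2.0)
dpsi = (dpsi1 + dpsi2) / np.sqrt(2.0)
j_add = 0.5 * (j1 + j2)                       # additive part, Eq. (63)

# Reduced quantities and the closed form of the non-additive current:
phi_rel = phi1 - phi2
gbar1, gbar2 = dp1 / np.sqrt(p1), dp2 / np.sqrt(p2)
jbar1, jbar2 = 2.0 * j1 / np.sqrt(p1), 2.0 * j2 / np.sqrt(p2)
j_nadd = 0.25 * ((np.sqrt(p2) * gbar1 - np.sqrt(p1) * gbar2) * np.sin(phi_rel)
                 + (np.sqrt(p2) * jbar1 + np.sqrt(p1) * jbar2) * np.cos(phi_rel))
assert np.allclose(current(psi, dpsi), j_add + j_nadd)   # Eq. (62)
print("Eqs. (62)-(63) verified numerically.")
```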

In a similar way one partitions other physical quantities, which generally depend on both \(p\) and  \({{\varvec{ j}}}\) (or \(\varPhi )\). Consider, e.g., the quantum extension of the gradient measure of the information content in state \(\psi \) [Eq. (21)]:

$$\begin{aligned} I[\psi ]&= \hbox { 4}\!\!\int \!\!{\vert }{\nabla \psi ({{\varvec{r}}}) |^{{2}}\,\, d{{\varvec{r}}}} \equiv \int \!\!\!{f\!\left( {{\varvec{r}}} \right) \,\,d{{\varvec{r}}}} \nonumber \\&= \int \!\! {[\nabla p ({\varvec{r}})]^{{2}}/{p({\varvec{r}})}\,\, d{{\varvec{r}}}}+ \hbox {4}\!\!\int \!\!{p({\varvec{r}})}[\nabla \varPhi ({\varvec{r}})]^{{2}}\,\,d{\varvec{r}} \nonumber \\&= I[p] +I [p,\varPhi ] = I[p]+\hbox { 4}\!\left( {m/\hbar } \right) ^{{2}}\int [{{\varvec{j}}}^{{2}}({{\varvec{r}}})/p({\varvec{r}})]\,\,d{{\varvec{r}}}=I[p] +I [p,{{\varvec{j}}}]\nonumber \\&= I^{class.}[\psi ] +I^{nclass.} [\psi ] , \end{aligned}$$
(64)

where the classical Fisher functional \(I^{class.}[\psi ] = I[p]\) represents the contribution due to the probability distribution alone, while the non-classical functional \(I^{nclass.}[\psi ] = I[p, {{\varvec{j}}}] = I[p, \varPhi ]\) carries the current(phase)-dependent information content.
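
The additive split of Eq. (64) into I[p] and I[p, j] can be confirmed by simple quadrature. A minimal one-dimensional sketch (ħ = m = 1); the normalized Gaussian modulus and the quadratic phase are illustrative assumptions only:

```python
# Quadrature check of Eq. (64): I[psi] = I[p] + I[p, j], hbar = m = 1.
import numpy as np

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
integrate = lambda f: f.sum() * dx          # rectangle-rule quadrature

R = np.pi**-0.25 * np.exp(-0.5 * x**2)      # normalized modulus, p = R^2
dR = -x * R
dPhi = 1.4 * x                              # gradient of the phase Phi = 0.7 x^2

p, dp = R**2, 2.0 * R * dR
j = p * dPhi                                # j = (hbar/m) p grad(Phi), Eq. (69)

I_total = 4.0 * integrate(dR**2 + (R * dPhi)**2)   # 4 * integral of |grad psi|^2
I_class = integrate(dp**2 / p)                     # classical term I[p]
I_nclass = 4.0 * integrate(j**2 / p)               # non-classical term I[p, j]
assert np.allclose(I_total, I_class + I_nclass)
print(I_class, I_nclass)                           # about 2.0 and 3.92 here
```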

Let us examine the superposition rule for the density \(f({{\varvec{r}}})\) of this quantum-generalized Fisher-information [32] in the combined state \(\psi \),

$$\begin{aligned} f\equiv f_{\varvec{\psi }}^{total} =f_{\varvec{\psi }}^{add.} +f_{\varvec{\psi }}^{nadd.} , \end{aligned}$$
(65)

proportional to the squared modulus of the wave function gradient:

$$\begin{aligned} f&\equiv f[\psi ]=\hbox { 4}\nabla \psi ^{*}\cdot \nabla \psi = \hbox {4}[(\nabla R)^{{2}}+R^{{2}}(\nabla \varPhi )^{{2}}]\nonumber \\&= (\nabla p)^{{2}}\!/p + \left( \frac{2m}{\hbar }\right) ^{\!2}\frac{{\varvec{j}}^{2}}{p} \equiv f\!\left[ p \right] +f[p,{{\varvec{j}}}]. \end{aligned}$$
(66)

This information density defines the total local contribution in the adopted \({\varvec{\psi }}\)-resolution, \(f_{\varvec{\psi }} ^{total} \equiv f[\psi ]\), while the weighted sum of the information content of the two individual states determines its additive contribution:

$$\begin{aligned} f_{\varvec{\psi }}^{add.} \equiv P(\psi _{1} |\psi )f[\psi _{1} ]+P(\psi _{2} |\psi )f[\psi _{2} ]=1/2\!\left( {f_{1} +f_{2} } \right) . \end{aligned}$$
(67)

The information density \(f_{k }\) in the complex member-state \(\psi _{k}\) similarly reads:

$$\begin{aligned} f_k&= f[\psi _k ]= \hbox {4}\nabla \psi _k ^{*}\cdot \nabla \psi _k = \hbox {4}[(\nabla R_k )^{{2}}+R_k ^{{2}}(\nabla \phi _k )^{{2}}] \nonumber \\&= (\nabla p_k )^{{2}}/p_k +\frac{4m^{2}}{\hbar ^{2}}\frac{{\varvec{j}}_{k}^{2}}{p_{k}} \equiv f_k \!\left[ {p_k } \right] +f_k [p_k ,{{\varvec{j}}}_ k], \end{aligned}$$
(68)

where \({{\varvec{j}}}_{k} \equiv {{\varvec{j}}}[\psi _{k}]\) denotes the probability-current density in \(\psi _{k}\):

$$\begin{aligned} {{\varvec{j}}}_k =\frac{\hbar }{2m\hbox {i}}(\psi _k^{*} \nabla \psi _k -\psi _k \nabla \psi _k^{*} )=\frac{\hbar }{m}\hbox {Im}(\psi _k^{*} \nabla \psi _k )=\frac{\hbar }{m}R_k^2 \nabla \phi _k =p_k \nabla \!\left[ {\frac{\hbar \phi _k }{m}} \right] . \end{aligned}$$
(69)

The non-additive (interference) information density, determined by both the probability amplitudes and phases of the two combined states, now reads:

$$\begin{aligned} f_{\varvec{\psi }}^{nadd.}&= \hbox { 4}\sqrt{P(\psi _1 \left| \psi \right. )} \sqrt{P(\psi _2 \left| \psi \right. )}[\nabla \psi _{1} ^{*}\cdot \nabla \psi _{2} \nonumber \\&+\nabla \psi _{2} ^{*}\cdot \nabla \psi _{1}] ={{2}} [\nabla \psi _{1} ^{*}\cdot \nabla \psi _{2} +\nabla \psi _{2} ^{*}\cdot \nabla \psi _{1} ] \nonumber \\&= 4[(\nabla R_{1} \cdot \nabla R_{2} +R_{1} R_{2} \nabla \phi _{1} \cdot \nabla \phi _{2} ) \cos \phi _{rel.} \nonumber \\&+(R_{2} \nabla R_{1} \cdot \nabla \phi _{2} -R_{1} \nabla R_{2} \cdot \nabla \phi _{1} ) \sin \phi _{rel.} ]. \end{aligned}$$
(70)

It again depends on the relative phase of the two constituent states. This expression simplifies somewhat when formulated in terms of the probability and current descriptors \(\{p_{k}, {{\varvec{j}}}_{k}\}\) of the individual states. More specifically, solving \(\nabla p_{k}=\psi _{k}\nabla \psi _{k}^{*}+\psi _{k}^{*}\nabla \psi _{k}\) and Eq. (69) for \(\nabla \psi _{k}\),

$$\begin{aligned} \nabla \psi _k =\frac{\psi _k }{2p_k }\!\left( {\nabla p_k +\frac{2m\hbox {i}}{\hbar }{{\varvec{j}}}_k } \right) ,k={1},{2}, \end{aligned}$$
(71)

gives:

$$\begin{aligned} f_{\varvec{\psi }}^{nadd.}&= \frac{1}{\sqrt{p_1 p_2 }}\!\left[ \!\left( \nabla p_1 \cdot \nabla p_2 +\frac{4m^{2}}{\hbar ^{2}}{{\varvec{j}}}_1 \cdot {{\varvec{j}}}_2\right) \cos \phi _{rel.}\right. \nonumber \\&\left. +\frac{2m}{\hbar }\!\left( {\nabla p_1 \cdot {{\varvec{j}}}_2 -\nabla p_2 \cdot {{\varvec{j}}}_1 } \right) \sin \phi _{rel.} \right] \nonumber \\&\equiv (\bar{{\nabla }}p_1 \cdot \bar{{\nabla }}p_2 +\bar{{{{\varvec{j}}}}}_1 \cdot \bar{{{{\varvec{j}}}}}_2 )\cos \phi _{rel.} +(\bar{{\nabla }}p_1 \cdot \bar{{{{\varvec{j}}}}}_2 -\bar{{\nabla }}p_2 \cdot \bar{{{{\varvec{j}}}}}_1 )\sin \phi _{rel.} . \end{aligned}$$
(72)

This equation expresses the change in the quantum Fisher information density, relative to the reference level of the additive contribution of Eq. (67), which accompanies the quantum-mechanical superposition of two individual states in the combination of Eq. (56).
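
Since Eqs. (70) and (72) are two forms of the same density, they can be checked against each other pointwise. A minimal sketch (ħ = m = 1), again for the illustrative states used earlier:

```python
# Cross-check of Eqs. (70) and (72): the non-additive Fisher-information
# density in the (R, phi) and (p, j) representations must agree pointwise.
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
R1, R2 = np.exp(-0.5 * (x - 1.0)**2), np.exp(-0.5 * (x + 1.0)**2)
dR1, dR2 = -(x - 1.0) * R1, -(x + 1.0) * R2
phi1, phi2 = 0.3 * x, -0.2 * x
dphi1, dphi2 = np.full_like(x, 0.3), np.full_like(x, -0.2)
phi_rel = phi1 - phi2

# Eq. (70): the (R, phi) form.
f70 = 4.0 * ((dR1 * dR2 + R1 * R2 * dphi1 * dphi2) * np.cos(phi_rel)
             + (R2 * dR1 * dphi2 - R1 * dR2 * dphi1) * np.sin(phi_rel))

# Eq. (72): the (p, j) form, via reduced gradients and currents.
p1, p2 = R1**2, R2**2
dp1, dp2 = 2.0 * R1 * dR1, 2.0 * R2 * dR2
j1, j2 = p1 * dphi1, p2 * dphi2             # j_k = p_k grad(phi_k)
gbar1, gbar2 = dp1 / np.sqrt(p1), dp2 / np.sqrt(p2)
jbar1, jbar2 = 2.0 * j1 / np.sqrt(p1), 2.0 * j2 / np.sqrt(p2)
f72 = ((gbar1 * gbar2 + jbar1 * jbar2) * np.cos(phi_rel)
       + (gbar1 * jbar2 - gbar2 * jbar1) * np.sin(phi_rel))
assert np.allclose(f70, f72)
print("Eqs. (70) and (72) agree pointwise.")
```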

For the stationary member states, when the current-dependent contributions identically vanish, e.g., when two real AO are combined into an MO, this information contribution is seen to be determined solely by the product of the reduced gradients of the particle probability distributions in the combined state. It is then related to the non-additive kinetic energy in the AO resolution [11–13, 44], which has been successfully employed as an efficient CG criterion for localizing chemical bonds in molecular systems [44]. The related quantity in the MO resolution [45] generates the key concept of ELF [46–48].

7 Phase relations in two-orbital model

In order to examine the phase aspect of orbital superposition and orbital communications in more detail we consider the 2-AO model of the preceding section, with each of the complex basis functions \({\varvec{\psi }}({{\varvec{r}}}) = [\psi _{1}({{\varvec{r}}}), \psi _{2}({{\varvec{r}}})]\) of the promolecular reference again contributing a single electron to this model two-electron system. These complex AO give rise to two MO: bonding,

$$\begin{aligned} \varphi _b \!\left( {{\varvec{r}}} \right) =\psi _{1} \!\left( {\varvec{r}}\right) U_{{1},b} +\psi _{2} \!\left( {{\varvec{r}}} \right) U_{{2},b} \equiv {\varvec{\psi }}\!\left( {\varvec{r}}\right) {{\varvec{U}}}_b, \end{aligned}$$
(73)

and anti-bonding,

$$\begin{aligned} \varphi _a \!\left( {{\varvec{r}}} \right) =\psi _{1} \!\left( {{\varvec{r}}} \right) U_{{1},a} +\psi _{2} \!\left( {{\varvec{r}}} \right) U_{{2},a} \equiv {\varvec{\psi }} \!\left( {{\varvec{r}}} \right) {{\varvec{U}}}_a. \end{aligned}$$
(74)

In a more compact, matrix notation, with \({\varvec{ \varphi }}({{\varvec{r}}}) = [\varphi _{b}({{\varvec{r}}}), \varphi _{a}({{\varvec{r}}})]\) grouping the orthonormal MO combinations, the preceding equations jointly read:

$$\begin{aligned} {\varvec{\varphi }} \!\left( {{\varvec{r}}} \right) ={\varvec{\psi }} \!\!\left( {{\varvec{r}}} \right) \, \left[ {{\varvec{U}}_b \vert {\varvec{U}}_a } \right] \equiv {\varvec{\psi }} \!\left( {{\varvec{r}}}\right) \mathbf{U}. \end{aligned}$$
(75)

These MO now combine the complex AO,

$$\begin{aligned} {\varvec{\psi }} \!\left( {{\varvec{r}}}\right) =\{\psi _k \!\left( {{\varvec{r}}} \right) =R_k \!\left( {{\varvec{r}}} \right) \hbox {exp}[\hbox {i}\phi _k \!\left( {{\varvec{r}}} \right) ],\qquad k= \hbox {1}, \hbox {2}\}, \end{aligned}$$

where we again assume their spatial orthonormality:

$$\begin{aligned} \langle \psi _k |\psi _l \rangle =\int {R_k \!\!\left( {{\varvec{r}}} \right) R_l \!\!\left( {{\varvec{r}}} \right) \hbox {exp}(\hbox {i}[\phi _l \!\!\left( {{\varvec{r}}} \right) -\phi _k \!\!\left( {{\varvec{r}}} \right) ])\,\,d{{\varvec{r}}}=\delta _{k,l}\qquad \hbox {or}\qquad \langle {\varvec{\psi }} |{\varvec{\psi }} \rangle =\mathbf{I}.} \end{aligned}$$
(76)

Hence, the MO-orthonormality condition implies the unitary character of the (complex) transformation matrix U:

$$\begin{aligned} \langle {\varvec{\varphi }} |{\varvec{\varphi }} \rangle =\mathbf{U}^{\dagger }\langle {\varvec{\psi }} |{\varvec{\psi }} \rangle \mathbf{U}=\mathbf{U}^{\dagger }\mathbf{U}=\mathbf{I}. \end{aligned}$$
(77)

The MO combinations can be similarly expressed in terms of their resultant moduli and phases:

$$\begin{aligned}&\varphi _s \!\left( {{\varvec{r}}} \right) =\psi _{1} \!\left( {{\varvec{r}}} \right) U_{{1},s} +\psi _{2} \!\left( {{\varvec{r}}} \right) U_{{2},s} \equiv R_s \!\left( {{\varvec{r}}} \right) \hbox { exp}[\hbox {i}\varPhi _s \!\left( {{\varvec{r}}} \right) ], \nonumber \\&U_{k,s} =\langle \psi _k |\varphi _s \rangle \equiv z_{k,s} \hbox {exp}\!\left( {\hbox {i}f_{k,s} } \right) , \nonumber \\&|U_{{1},s} |^{{2}}+|U_{{2},s} |^{{2}}=\!\left( {z_{{1},s} } \right) ^{{2}}+\!\left( {z_{{2},s} } \right) ^{{2}}\equiv P+Q= \hbox {1},\qquad s=b,a. \end{aligned}$$
(78)

The complementary conditional probabilities,

$$\begin{aligned} P=P(\psi _{1} |\varphi _b )=P(\psi _{2} |\varphi _a )\qquad \hbox { and}\qquad Q=P(\psi _{2} |\varphi _b )=P(\psi _{1} |\varphi _a )= \hbox {1}-P, \end{aligned}$$
(79)

which determine the weights of AO in MO, also reflect the bond polarization when two spin-paired electrons occupy \(\varphi _{b}\).

In terms of these resultant components the MO-orthonormality condition reads:

$$\begin{aligned} \langle \varphi _s |\varphi _{s^{\prime }} \rangle =\int \!\!\! {R_s \!\left( {{\varvec{r}}} \right) R_{s^{\prime }} \!\left( {{\varvec{r}}} \right) \hbox {exp}\{\hbox {i}[\varPhi _{s^{\prime }} \!\left( {{\varvec{r}}} \right) -\varPhi _s \!\left( {{\varvec{r}}} \right) ]\}\,\,d{{\varvec{r}}}=\delta _{s,{s^{\prime }}} }. \end{aligned}$$
(80)

The two diagonal (normalization) requirements are seen to be automatically satisfied by the normalization of the complementary conditional probabilities of Eq. (79):

$$\begin{aligned} P(\psi _{1} |\varphi _b )+P(\psi _{2} |\varphi _b )=P(\psi _{1} |\varphi _a )+P(\psi _{2} |\varphi _a)=P+Q= \hbox {1}. \end{aligned}$$
(81)

In order to satisfy the MO-orthogonality equation, e.g., \(\langle \varphi _{a}{\vert }\varphi _{b}\rangle = 0\), the resultant MO phases have to obey the off-diagonal constraint

$$\begin{aligned} \langle \varphi _a |\varphi _b \rangle =\langle \varphi _b |\varphi _a \rangle ^{*}=\!\left( {\mathbf{U}^{\dagger }\mathbf{U}} \right) _{a,b} =U_{{1},a} ^{*}U_{{1},b} +U_{{2},a} ^{*}U_{{2},b} \equiv Z\hbox {exp}\!\left( {\hbox {i}F} \right) =0. \end{aligned}$$
(82)

This equation is automatically fulfilled by the vanishing modulus \(Z\) of this scalar product, or equivalently by its vanishing square:

$$\begin{aligned} \!\left[ {Z^{({1})}} \right] ^{{2}}&= \!\left( {z_{{1},a} } \right) ^{{2}}\!\left( {z_{{1},b} } \right) ^{{2}}+\!\left( {z_{{2},a} } \right) ^{{2}}\!\left( {z_{{2},b} } \right) ^{{2}}\nonumber \\&-{2}z_{{1},a} z_{{1},b} z_{{2},a} z_{{2},b} \hbox {cos}\{\pi -[(f_{{2},b} -f_{{2},a} )-(f_{{1},b} -f_{{1},a} )]\} \nonumber \\&= \hbox {2}PQ\{\hbox {1}+\hbox {cos}[(f_{{2},b} -f_{{2},a} )-(f_{{1},b} -f_{{1},a} )]\}=0. \end{aligned}$$
(83)

Hence, the MO-orthogonality constraint imposes the following requirement to be satisfied by phases \(\mathbf{f} = \{f_{k,s}\}\) of the complex LCAO MO coefficients U [Eq. (78)]:

$$\begin{aligned}{}[f_{{2},b} -f_{{2},a} ]-[f_{{1},b} -f_{{1},a} ]\equiv \theta _{2} -\theta _{1} =\uppi . \end{aligned}$$
(84)

In other words, the differences between the phases of the expansion coefficients multiplying the two basis functions in the bonding and anti-bonding MO combinations, respectively, point in opposite directions in the complex plane. The preceding equation can also be interpreted in terms of the opposite directions corresponding to the differences between the phases of the expansion coefficients within the same MO:

$$\begin{aligned} (f_{{2},b} -f_{{1},b} )-(f_{{2},a} -f_{{1},a} )\equiv \varepsilon _b -\varepsilon _a =\uppi . \end{aligned}$$
(85)

Yet another interpretation of this phase relation involves the cross-phase sums, \(\vartheta _{2,1}\equiv f_{2,b}+f_{1,a}\) and \(\vartheta _{1,2}\equiv f_{1,b}+f_{2,a}\),

$$\begin{aligned} \vartheta _{{2},{1}} -\vartheta _{{1},{2}} =\uppi . \end{aligned}$$
(86)

For example, in the real AO case, for \({\varvec{\psi }} = {\varvec{ \chi }}\), the LCAO MO coefficients

$$\begin{aligned} \mathbf{U}=\mathbf{C}=\!\left[ {{\begin{array}{ll} {\sqrt{P}}&{}\quad {-\sqrt{Q}} \\ {\sqrt{Q}}&{}\quad {\sqrt{P}} \\ \end{array} }} \right] \; \equiv \left[ {{{\varvec{C}}}_b \vert {{\varvec{C}}}_a } \right] \end{aligned}$$
(87)

correspond to the following modulus and phase parts of Eq. (78):

$$\begin{aligned} \mathbf{z}=\left\{ {z_{k,s} } \right\} \equiv \!\left[ \begin{array}{ll} \sqrt{P} &{} \sqrt{Q}\\ \sqrt{Q} &{} \sqrt{P}\\ \end{array}\right] \qquad \hbox {and}\qquad \mathbf f =\left\{ {f_{k,s} } \right\} \equiv \left[ {{\begin{array}{ll} 0&{} \pi \\ 0&{} 0 \\ \end{array} }} \right] \equiv \!\left[ {{{\varvec{f}}}_b \vert {{\varvec{f}}}_a } \right] . \end{aligned}$$
(88)

In accordance with Eqs. (84)–(86) they determine the opposite phase differences:

$$\begin{aligned} \theta _{1}&= -\pi ,\quad \theta _{2} =0;\qquad \varepsilon _b =0,\quad \varepsilon _a =-\pi ;\nonumber \\ \theta _{{1},{2}}&\equiv f_{{1},b} -f_{{2},a} =0,\quad \theta _{{2},{1}} \equiv f_{{2},b} -f_{{1},a} =-\pi ,\quad \theta _{{1},{2}} -\theta _{{2},{1}} =\pi , \end{aligned}$$
(89)

and sums:

$$\begin{aligned} \vartheta _{{1},{2}} =0,\quad \vartheta _{{2},{1}} =\pi , \quad \vartheta _{{2},{1}} -\vartheta _{{1},{2}} =\pi . \end{aligned}$$
(90)
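
The phase constraints of Eqs. (82)–(86) are straightforward to verify for a generic complex 2×2 transformation. In the following minimal sketch the numerical values of P and of the phases f_{k,s} are arbitrary assumptions, chosen only to satisfy Eq. (84):

```python
# Check of the phase relations, Eqs. (82)-(86), for a complex 2x2 LCAO matrix
# U = {z_{k,s} exp(i f_{k,s})}; rows k = 1, 2, columns s = b, a.
import numpy as np

P = 0.7; Q = 1.0 - P
z = np.array([[np.sqrt(P), np.sqrt(Q)],
              [np.sqrt(Q), np.sqrt(P)]])          # moduli, Eq. (88) pattern
# Phases f[k, s], arbitrary except for the constraint of Eq. (84):
f = np.array([[0.4, 1.1],
              [0.9, 1.6 - np.pi]])
theta = f[:, 0] - f[:, 1]                         # theta_k = f_{k,b} - f_{k,a}
assert np.isclose(theta[1] - theta[0], np.pi)     # Eq. (84)

U = z * np.exp(1j * f)
# Orthogonality, Eq. (82): the off-diagonal element of U^dagger U vanishes,
# so U is unitary [Eq. (77)].
assert np.allclose(U.conj().T @ U, np.eye(2))

eps = f[1, :] - f[0, :]                           # eps_s = f_{2,s} - f_{1,s}
assert np.isclose(eps[0] - eps[1], np.pi)         # Eq. (85)
vartheta_21 = f[1, 0] + f[0, 1]                   # f_{2,b} + f_{1,a}
vartheta_12 = f[0, 0] + f[1, 1]                   # f_{1,b} + f_{2,a}
assert np.isclose(vartheta_21 - vartheta_12, np.pi)   # Eq. (86)
print("Phase relations (84)-(86) verified.")
```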

8 Molecular communications

Let us now turn to the conditional probabilities between AO events in molecules. For simplicity we focus on the probability and amplitude channels in a single electron configuration; for the multi-configuration extension the reader is referred to refs. [33, 43]. An exploration of the chemical bond system in the given ground state of a molecule indeed calls for the AO resolution determined by the basis functions \({\varvec{\chi }} = (\chi _{1}, \chi _{2}, {\ldots }, \chi _{m})\) of typical (HF or KS) SCF calculations. These functions span the bonding subspace of the singly occupied SMO,

$$\begin{aligned} {\varvec{\psi }} ={\varvec{\chi }} \mathbf{C}=\{\psi _k \!\left( {{\varvec{q}}} \right) =\psi _k ({{\varvec{r}}};\sigma )=\varphi _k \!\left( {{\varvec{r}}} \right) \zeta _k (\sigma ),\quad k=1,2,\ldots ,N\}, \end{aligned}$$

which define the molecular ground state \(\Psi (N)\), given by the Slater determinant built from the \(N\) lowest SMO:

$$\begin{aligned} \Psi \!\left( N \right) = \hbox {det}[\varvec{\psi }]\equiv |\psi _1 ,\psi _2 ,\ldots ,\psi _N |. \end{aligned}$$

Here \(\varphi _{k}({{\varvec{r}}})\) denotes the spatial function (MO) and \(\zeta _{k }= \{\alpha (\hbox {spin-}\textit{up}) \, \hbox {or} \, \beta (\hbox {spin-}\textit{down})\}\) stands for one of the two admissible spin states of an electron [see Eq. (9)].

In this simplest orbital approximation one thus takes into account only the physical (bond) subspace \({\varvec{\varphi }}\) of the occupied MO of the configuration, which defines the associated MO projector \(\hat{\mathrm{P}}_{\varvec{\varphi }} \equiv {\vert }{\varvec{\varphi }} \rangle \langle {\varvec{\varphi }}{\vert }\). It gives rise to the (idempotent) charge and bond-order (CBO) one-electron density matrix in the AO representation,

$$\begin{aligned} {\varvec{\upgamma }}&= \left\{ {\gamma _{i, j} } \right\} =\left\langle {\varvec{\chi }} | {\varvec{\varphi }} \right\rangle \left\langle {\varvec{\varphi }} | {\varvec{\chi }} \right\rangle \equiv \left\langle {\varvec{\chi }} \right| \hat{\mathrm{P}}_{\varvec{\varphi }} \left| {\varvec{\chi }} \right\rangle =\mathbf{CC}^{\dagger },\nonumber \\ {\varvec{\upgamma }}^{{2}}&= \left\langle {\varvec{\chi }} \right| \hat{\mathrm{P}}_{\varvec{\varphi }} \left| {\varvec{\chi }} \right\rangle \left\langle {\varvec{\chi }} \right| \hat{\mathrm{P}}_{\varvec{\varphi }} \left| {\varvec{\chi }} \right\rangle =\left\langle {\varvec{\chi }} \right| \hat{\mathrm{P}}_{\varvec{\varphi }} \hat{\mathrm{P}}_{\varvec{\chi }} \hat{\mathrm{P}}_{\varvec{\varphi }} \left| {\varvec{\chi }} \right\rangle =\mathbf{C}\!\left( {\mathbf{C}^{\dagger }\mathbf{C}} \right) \mathbf{C}^{\dagger }=\mathbf{CC}^{\dagger }={\varvec{\upgamma }},\nonumber \\ \end{aligned}$$
(91)

where we have used the AO orthonormality, \(\langle {\varvec{\chi }}{\vert }{\varvec{\chi }}\rangle = \mathbf{I}_{m}\), and that of the occupied MO expanded in this basis, \(\langle {\varvec{\varphi }}|{\varvec{\chi }}\rangle \langle {\varvec{\chi }}{\vert }{\varvec{\chi }}\rangle \langle {\varvec{\chi }}{\vert }{\varvec{\varphi }}\rangle = \mathbf{C}^{\dagger }\mathbf{C} = \mathbf{I}_{N}\), which further implies

$$\begin{aligned} \hat{\mathrm{P}}_{\varvec{\varphi }} \hat{{\hbox {P}}}_{\varvec{\chi }} \hat{\mathrm{P}}_{\varvec{\varphi }} =|{\varvec{\varphi }} \rangle \!\left( {\mathbf{C}^{\dagger }\mathbf{C}} \right) \langle {\varvec{\varphi }} |=\hat{\mathrm{P}}_{\varvec{\varphi }}. \end{aligned}$$
(92)

The molecular joint probabilities of the given pair of input and output AO in the bond system determined by \({\varvec{\varphi }}\) are then proportional to the square of the corresponding CBO matrix element:

$$\begin{aligned} P(\chi _i \wedge \chi _j )&= P(i\wedge j)=\gamma _{i,j} \gamma _{j,i} /N=|\gamma _{i,j}|^{{2}}/N,\nonumber \\ \sum _j P(i\wedge j)&= N^{-{1}}\sum _j \gamma _{i,j} \gamma _{j ,i} =\gamma _{i,i} /N=p_i . \end{aligned}$$
(93)

The conditional probabilities between AO, \(\mathbf{P}({\varvec{{\varvec{\chi }}}}'{\vert } {\varvec{{\varvec{\chi }}}}) = \{P(j {\vert }i)=P(i \wedge j)/p_{i}\}\),

$$\begin{aligned} \{P(j|i)\equiv P_{i \rightarrow j} =|A_{i{\rightarrow j}}|^{2}=N_i {|\langle \chi _{i}|\hat{\hbox {P}}_{\varvec{\varphi }}|\chi _{j}\rangle |}^{2} =|\gamma _{i,j} |^{2}/\gamma _{i,i} \}, \end{aligned}$$
(94)

reflect the electron delocalization in this MO system and identify the associated scattering amplitudes \(\mathbf{A}( {\varvec{\chi }}'\mathbf{{\vert }}{\varvec{\chi }}) = \{A(j\mathbf{{\vert }}i)=A_{i \rightarrow j}\}\):

$$\begin{aligned} A_{i \rightarrow j} =\gamma _{i,j} /[\gamma _{i,i} ]^{1/2}\equiv \!\left[ {N_i } \right] ^{{1}/{2}}\gamma _{i,j}. \end{aligned}$$
(95)

These amplitudes for the ground-state probability propagation are thus related to the corresponding elements of the CBO matrix \({\varvec{\upgamma }}=\left\langle {\varvec{\chi }} \right| \hat{\mathrm{P}}_{\varvec{\varphi }} \left| {\varvec{\chi }} \right\rangle \), the AO representation of the ground-state occupied SMO projector \(\hat{\mathrm{P}}_{\varvec{\varphi }}\).
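
The CBO construction of Eqs. (91)–(95) can be illustrated with a random single determinant. A minimal sketch assuming m = 6 orthonormal basis functions and N = 3 occupied SMO generated by a QR factorization; the dimensions and the random seed are arbitrary:

```python
# Check of Eqs. (91)-(95): gamma = C C^dagger is idempotent for orthonormal
# occupied SMO expanded in an orthonormal AO basis, and the conditional
# probabilities P(j|i) = |gamma_ij|^2 / gamma_ii are normalized.
import numpy as np

rng = np.random.default_rng(1)
m, N = 6, 3                                 # arbitrary illustrative dimensions
# Orthonormal occupied LCAO columns from a random complex matrix:
C, _ = np.linalg.qr(rng.normal(size=(m, N)) + 1j * rng.normal(size=(m, N)))
gamma = C @ C.conj().T                      # CBO matrix, Eq. (91)
assert np.allclose(gamma @ gamma, gamma)    # idempotency

p = np.real(np.diag(gamma)) / N             # AO probabilities, Eq. (93)
assert np.isclose(p.sum(), 1.0)

P_cond = np.abs(gamma)**2 / np.real(np.diag(gamma))[:, None]   # Eq. (94)
assert np.allclose(P_cond.sum(axis=1), 1.0)                    # rows sum to 1
A = gamma / np.sqrt(np.real(np.diag(gamma)))[:, None]          # Eq. (95)
assert np.allclose(np.abs(A)**2, P_cond)
print("Eqs. (91)-(95) verified for a random determinant.")
```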

In the closed-shell state, when the occupied spatial MO accommodate two spin-paired electrons each, the ground state is generated by the \(N\)/2 lowest MO, \({\varvec{\varphi }}^{o} = (\varphi _{1}, \varphi _{2}, {\ldots }, \varphi _{N/2}) ={\varvec{\chi }}\mathbf{C}^{o}\),

$$\begin{aligned} \Psi \!\left( N \right) =|\varphi _1 \alpha ,\varphi _1 \beta ,\ldots ,\varphi _{N/2} \alpha ,\varphi _{N/2} \beta |, \end{aligned}$$
(96)

and \(\hat{\mathrm{P}}_{\varvec{\varphi }} = {\vert } {\varvec{\varphi }}\rangle \langle {\varvec{\varphi }}{\vert } = 2{\vert } {\varvec{\varphi }}^{o}\rangle \langle {\varvec{\varphi }}^{o}{\vert }\equiv 2\hat{\mathrm{P}}_{\varvec{\varphi }}^o\); hence, the SMO idempotency then implies \((\hat{\mathrm{P}}_{\varvec{\varphi }})^{2} = [2{\vert }{\varvec{\varphi }}^{o}\rangle \langle {\varvec{\varphi }}^{o}{\vert }]^{2} = 4{\vert } {\varvec{\varphi }}^{o}\rangle \langle {\varvec{\varphi }}^{o}{\vert } = 2\hat{\mathrm{P}}_{\varvec{\varphi }}\) and [see Eqs. (91) and (92)]

$$\begin{aligned} {\varvec{\upgamma }}&= \left\langle {\varvec{\chi }} | {\varvec{\varphi }} \right\rangle \left\langle {\varvec{\varphi }} | {\varvec{\chi }} \right\rangle \nonumber \\&= 2\langle {\varvec{\chi }} | {{\varvec{\varphi }} ^{o}} \rangle \langle {{\varvec{\varphi }} ^{o}} | {\varvec{\chi }}\rangle \equiv 2\left\langle {\varvec{\chi }} \right| \hat{\mathrm{P}}_{\varvec{\varphi }} ^o \left| {\varvec{\chi }} \right\rangle = \hbox {2}\mathbf{C}^{o}\mathbf{C}^{o{\dagger }}= \hbox {2}{\varvec{\upgamma }}^{o},\nonumber \\ ({\varvec{\upgamma }}^{o})^{{2}}&= {\varvec{\upgamma }}^{o}=\left\{ {\gamma _{i,j}^{o}} \right\} ,\quad {\varvec{\upgamma }}^{{2}}= \hbox {4}({\varvec{\upgamma }}^{o})^{{2}}=\hbox {4}{\varvec{\upgamma }}^{o}= \hbox {2}{\varvec{\upgamma }}. \end{aligned}$$
(97)

For such molecular states the representative conditional probability of the molecular AO channel reads [12, 13]:

$$\begin{aligned} P(j|i)&\equiv P_{i \rightarrow j} =|A_{i \rightarrow j} |^{{2}}=\gamma _{i,j} \gamma _{j , i} /[{2}\gamma _{i,i} ]=\gamma _{i, j} ^{o}\gamma _{j , i} ^{o}/\gamma _{i,i} ^{o}\nonumber \\&= N_i ^{o}\langle \chi _i |\hat{\mathrm{P}}_{\varvec{\varphi }} ^o | \chi _j \rangle \langle \chi _j |\hat{\mathrm{P}}_{\varvec{\varphi }} ^o | \chi _i \rangle \equiv N_i ^{o}\left\langle \chi _i \right| \hat{\mathrm{P}}_{\varvec{\varphi }} ^o \hat{\mathrm{P}}_j \hat{{\hbox {P}}}_{\varvec{\varphi }} ^o \left| \chi _i \right\rangle \equiv N_i ^{o}\left\langle \chi _i \right| \hat{{\tilde{\hbox {P}}}}_j^o \left| {\chi }_i \right\rangle , \nonumber \\ A(j|i)&\equiv A_{i \rightarrow j} =\gamma _{i , j} /({2}\gamma _{i, i} )^{{1}/{2}}=\gamma _{i, j} ^{o}/(\gamma _{i,i} ^{o})^{{1}/{2}}\equiv \!\left( {N_i ^{o}} \right) ^{{1}/{2}}\gamma _{i, j} ^{o}. \end{aligned}$$
(98)

Here, \(N_{i}^{o} = (\gamma _{i,i}^{o})^{-1}\) stands for the multiplicative constant required to satisfy the appropriate normalization condition:

$$\begin{aligned} \sum _j P(j|i)=\sum _j \gamma _{i,j} ^{o}\gamma _{j,i} ^{o}/\gamma _{i, i} ^{o}=\sum _j \gamma _{i,j} \gamma _{j,i} /({2}\gamma _{i,i} )={1}, \end{aligned}$$
(99)

where we have used the assumed closed-shell idempotency of Eq. (97). In Eq. (98) the \(i \rightarrow j\) probability scattering has been expressed as the expectation value in the input \(\hbox {AO}\,\chi _{{i}}\) of the communication operator to the specified output AO \(\chi _{j}, \hat{{\tilde{P}}}_j^o \equiv \hat{\mathrm{P}}_{\varvec{\varphi }}^o \hat{\mathrm{P}}_j \hat{{P}}_{\varvec{\varphi }}^o\).
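
In the closed-shell 2-AO model the channel of Eqs. (96)–(99) takes a particularly simple form. A minimal sketch, assuming a doubly occupied bonding MO \(\varphi _{b} = \sqrt{P}\chi _{1} + \sqrt{Q}\chi _{2}\) with an arbitrary polarization P = 0.7:

```python
# Closed-shell AO channel of Eqs. (96)-(99) in the 2-AO model.
import numpy as np

P = 0.7; Q = 1.0 - P
C_o = np.array([[np.sqrt(P)],
                [np.sqrt(Q)]])               # occupied-MO LCAO column
gamma_o = C_o @ C_o.T                        # (gamma^o)^2 = gamma^o
gamma = 2.0 * gamma_o                        # CBO matrix, Eq. (97)
assert np.allclose(gamma @ gamma, 2.0 * gamma)

# Conditional probabilities P(j|i) = gamma_ij gamma_ji/(2 gamma_ii), Eq. (98):
P_cond = gamma * gamma.T / (2.0 * np.diag(gamma))[:, None]
assert np.allclose(P_cond.sum(axis=1), 1.0)  # normalization, Eq. (99)
print(P_cond)
# Each row equals [P, Q]: every AO input scatters with weights P and Q,
# reproducing the known communications of the 2-AO channel.
```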

It should again be stressed that the classical probability channel, determined by the conditional probabilities of the output AO events \({\varvec{\chi }}'\) given the input AO events \({\varvec{\chi }}\), \(\mathbf{P} ({\varvec{\chi }}'{\vert } {\varvec{\chi }}) = \{P(j {\vert }i)= P_{i\rightarrow j}\}\), which in short notation reads

$$\begin{aligned} {\varvec{\chi }} {-\!\!-}\mathbf{P}({\varvec{\chi }}^ {\prime }|{\varvec{\chi }} ){-\!\!\!\longrightarrow } {\varvec{\chi }}^ {\prime }, \end{aligned}$$
(100)

loses the memory of the phases of the scattering amplitudes \(\{A_{i \rightarrow j}\}\), which are preserved in the associated amplitude channel defined by the direct communications \(\mathbf{A}({\varvec{\chi }}'{\vert }{\varvec{\chi }}) = \{A(j{\vert }i)=A_{i\rightarrow j}\}\):

$$\begin{aligned} |{\varvec{\chi }} \rangle {-\!\!-}\mathbf{A}({\varvec{\chi }}^ {\prime }|{\varvec{\chi }}){-\!\!\!\longrightarrow }|{\varvec{\chi }}^ {\prime }\rangle . \end{aligned}$$
(101)

This observation also applies to the sequential (product) arrangements of several such (direct) channels, called “cascades”, for the indirect (bridge) communications between orbitals in molecules, since the modulus of a product of complex functions is given by the product of the moduli of its factors. For example, the single-AO intermediates \({\varvec{\chi }}''\) in the sequential three-orbital scatterings \({\varvec{\chi }} \rightarrow {\varvec{\chi }}''\rightarrow {\varvec{\chi }}'\) define the following probability and amplitude cascades:

$$\begin{aligned} {\varvec{\chi }} -[\mathbf{P}({\varvec{\chi }}^{{\prime }{\prime }}|{\varvec{\chi }} )\rightarrow {\varvec{\chi }}^{{\prime }{\prime }}-\mathbf{P}({\varvec{\chi }}^{\prime }|{\varvec{\chi }}^{{\prime }{\prime }})]\rightarrow {\varvec{\chi }}^{\prime }&\Rightarrow {\varvec{\chi }} -\mathbf{P}[({\varvec{\chi }}^{\prime }|{\varvec{\chi }} );{\varvec{\chi }}^{{\prime }{\prime }}]\rightarrow {\varvec{\chi }}^{\prime }, \nonumber \\ |{\varvec{\chi }} \rangle -[\mathbf{A}({\varvec{\chi }}^{{\prime }{\prime }}|{\varvec{\chi }} )\rightarrow |{\varvec{\chi }}^{{\prime }{\prime }}\rangle -\mathbf{A}({\varvec{\chi }}^{\prime }|{\varvec{\chi }}^{{\prime }{\prime }})]\rightarrow |{\varvec{\chi }}^{\prime }\rangle&\Rightarrow |{\varvec{\chi }} \rangle -\mathbf{A}[({\varvec{\chi }}^{\prime }|{\varvec{\chi }} );{\varvec{\chi }}^{{\prime }{\prime }}]\rightarrow |{\varvec{\chi }}^{\prime }\rangle .\nonumber \\ \end{aligned}$$
(102)

The associated (indirect) conditional probabilities between AO-events and their amplitudes are then given by products of the corresponding elementary two-orbital communications in each (direct) sub-channel:

$$\begin{aligned} P[(j|i);k]\equiv P_{i \rightarrow j ;k} =P_{i \rightarrow k} P_{k \rightarrow j} \;\hbox { and }\;A[(j|i);k]\equiv A_{i \rightarrow j;k} =A_{i \rightarrow k} A_{k \rightarrow j}. \end{aligned}$$
(103)

Therefore, such bridge probabilities can be straightforwardly derived from the direct probability and amplitude channels. They satisfy the relevant bridge-normalization sum-rules over the final and intermediate output AO events:

$$\begin{aligned} \sum _k \!\left( \sum _j P_{i \rightarrow j;k} \right) =\sum _k P_{i \rightarrow k} =1. \end{aligned}$$
(104)

This single-cascade development can be straightforwardly generalized to any bridge order. Consider the sequential \(t\)-cascade involving all basis functions at each propagation stage. Let us examine the resultant amplitudes, \(\mathbf{A}[( {\varvec{\chi }}'{\vert } {\varvec{\chi }}); t- {\varvec{\chi }}] = \{A_{i\rightarrow j}^{(t)}\}\), and probabilities, \(\mathbf{P}[({\varvec{\chi }}'{\vert } {\varvec{\chi }}); t\!-\!{\varvec{\chi }}] = \{{\vert }A _{i\rightarrow j}^{(t)}{\vert }^{2}\}\), of the multiple scatterings in the \(t\)-stage bridge involving sequential cascades via \(t\)-AO intermediates (orbital bridges) (\(k, l, {\ldots }, m, n)\), e.g., in the amplitude channel [31]:

$$\begin{aligned}&|{\varvec{\chi }} \rangle -[\mathbf{A}({\varvec{\chi }} ^{\!\left( {1} \right) }|{\varvec{\chi }} ){-\!\!\!\longrightarrow } |{\varvec{\chi }} ^{\!\left( {1} \right) }\rangle -\mathbf{A}({\varvec{\chi }} ^{\!\left( {2} \right) }|{\varvec{\chi }} ^{\!\left( {1} \right) }){-\!\!\!\longrightarrow } |{\varvec{\chi }} ^{\!\left( {2} \right) }\rangle \nonumber \\&\qquad -\ldots \rightarrow |{\varvec{\chi }} ^{\!\left( t \right) }\rangle -\mathbf{A}({\varvec{\chi }} {\prime }|{\varvec{\chi }} ^{\!\left( t \right) })]{-\!\!\!\longrightarrow } |{\varvec{\chi }} {\prime }\rangle \nonumber \\&\quad \Rightarrow |{\varvec{\chi }} \rangle -\mathbf{A}[({\varvec{\chi }}^{\prime }|{\varvec{\chi }} );t-{\varvec{\chi }} ]{-\!\!\!\longrightarrow } |{\varvec{\chi }}^{\prime } \rangle . \end{aligned}$$
(105)

Such \(t\)-cascade amplitudes \(\mathbf{A}[( {\varvec{\chi }}'{\vert }{\varvec{\chi }}); t-{\varvec{\chi }}] = \{A_{i\rightarrow j;k,l,\ldots ,m,n} \equiv A_{i\rightarrow j}^{(t)}\}\) are proportional to the corresponding matrix element of the (\(t+1\))-power of the (idempotent) projector onto the occupied SMO subspace,

$$\begin{aligned} A_{i \rightarrow j} ^{\!\left( t \right) }=\!\left( {N_{i \rightarrow j} ^{(t)}} \right) ^{{1}/{2}}\left\langle {\chi _i } \right| (\hat{\mathrm{P}}_{\varvec{\psi }})^{t+1}\left| {\chi _j } \right\rangle =\!\left( {N_i } \right) ^{{1}/{2}}\left\langle {\chi _i} \right| \hat{\mathrm{P}}_{\varvec{\psi }} \left| {\chi _j} \right\rangle =A_{i \rightarrow j} , \end{aligned}$$
(106)

where we have again used the idempotency property of the SMO projector \(\hat{\mathrm{P}}_{\varvec{\psi }}: (\hat{\mathrm{P}}_{\varvec{\psi }})^{n}=\hat{\mathrm{P}}_{\varvec{\psi }}\).

Hence, the amplitude \(A_{i\rightarrow j}^{(t)}\) for the complete consecutive \(t\)-cascade preserves the direct-scattering probabilities,

$$\begin{aligned} \mathbf{P}[({\varvec{\chi }}^{\prime }|{\varvec{\chi }} );t\!-\!{\varvec{\chi }}]=\{\left| {A_{i \rightarrow j} ^{(t)}} \right| ^{{2}}=\left| A_{i \rightarrow j} \right| ^{{2}}=P_{i \rightarrow j} \}=\mathbf{P}({\varvec{\chi }}^{\prime }|{\varvec{\chi }} ), \end{aligned}$$
(107)

thus satisfying the important consistency requirement of the stationary molecular channel [30, 31]. It also obeys the relevant bridge-normalization sum-rules:

$$\begin{aligned}&\sum _k \sum _l \ldots \sum _m \sum _n \!\left[ \sum _j P_{i \rightarrow j;k,l,\ldots ,m,n} \right] =\sum _k \sum _l \ldots \sum _m \!\left[ \sum _n P_{i \rightarrow n;k,l,\ldots ,m} \right] \nonumber \\&\quad =\sum _k \sum _l \ldots \!\left[ \sum _m P_{i \rightarrow m;k,l,\ldots } \right] =\cdots =\sum _k \!\left[ \sum _l P_{i \rightarrow l;k} \right] =\sum _k P_{i \rightarrow k} =1. \end{aligned}$$
(108)
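
The bridge rules of Eqs. (103)–(104) and the cascade stationarity of Eqs. (106)–(107) can be verified in the same 2-AO closed-shell model. A minimal sketch, with the polarization P again an arbitrary choice:

```python
# Bridge (cascade) communications, Eqs. (103)-(108): products of direct
# scatterings, their normalization, and the stationarity of the complete
# cascade, which rests on the idempotency of the occupied-subspace projector.
import numpy as np

P = 0.7; Q = 1.0 - P
gamma_o = np.array([[P, np.sqrt(P * Q)],
                    [np.sqrt(P * Q), Q]])        # idempotent gamma/2, Eq. (97)
P_dir = gamma_o * gamma_o.T / np.diag(gamma_o)[:, None]   # direct P(j|i)

# Single-bridge probabilities P_{i->j;k} = P_{i->k} P_{k->j}, Eq. (103),
# and the sum rule of Eq. (104):
P_bridge = np.einsum('ik,kj->ikj', P_dir, P_dir)
assert np.allclose(P_bridge.sum(axis=(1, 2)), 1.0)

# Complete t-cascade, Eqs. (105)-(107): (gamma^o)^{t+1} = gamma^o, so the
# renormalized cascade amplitudes reproduce the direct ones.
t = 5
assert np.allclose(np.linalg.matrix_power(gamma_o, t + 1), gamma_o)
print("Bridge sum rules and cascade stationarity verified.")
```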

For the specified pair of “terminal” AO, say \(\chi _{i}\in {\varvec{\chi }}\) and \(\chi _{j } \in {\varvec{\chi }}'\), one can similarly examine the indirect scatterings in the molecular bond system, via the incomplete cascades consisting of the remaining (“bridge”) functions \({\varvec{\chi }} ^{b} = \{{\varvec{\chi }}_{k\ne (i,j)}\}\), with the two terminal AO then being excluded from the set of admissible intermediate scatterers. The associated bridge communications give rise to the indirect (through-bridge) components of bond multiplicities [28–32], which complement the familiar direct (through-space) chemical “bond orders” [49–59] and provide a novel IT perspective on chemical interactions between more distant AIM, alternative to the fluctuational charge-shift mechanism [60].

9 Vertical equilibrium principles

The IT (entropic) representation in the theory of molecular electronic structure provides a thermodynamic-like outlook on molecular equilibria [16, 17]. The generally complex probability amplitudes of molecular quantum mechanics, the system electronic wave functions, require the generalized information measures of Sect. 3, which combine the classical and non-classical contributions due to the particle probability and current (or phase) distributions, respectively. Let us briefly comment on the density-constrained (“vertical”) variational principles for these quantum information functionals [17], which closely resemble their familiar entropy and energy analogs in the ordinary thermodynamics [23]. By the Hohenberg–Kohn (HK) theorem [35], the conserved values of the system ground-state energy or classical entropy in these searches are exactly determined by the corresponding DFT functionals of the system ground-state electron density \(\rho _{0}({{\varvec{r}}}) = { Np}_{0}({{\varvec{r}}})\). This density also fixes other classical measures of the state information content. Therefore, these “vertical” principles for determining molecular equilibria, for the fixed ground-state distribution of electrons, involve only displacements in the non-classical entropy/information complements, functionals of the system current/phase.

In Sect. 3 the non-classical information contributions for the constrained probability density \(p_{0}({{\varvec{r}}})\), in a trial state of a single particle [Eq. (12)], were shown to be given by the following functionals of the spatial phase function \(\phi ({{\varvec{r}}})\):

$$\begin{aligned} S[p_0 ,\phi ]&= -{2}\!\!\int \!\!{p_0 \!\left( {{\varvec{r}}} \right) [\phi ^{{2}}\!\left( {{\varvec{r}}} \right) ]^{{1}/{2}} \ d{{\varvec{r}}} } \;\qquad \hbox {and}\nonumber \\ I[p_0 ,\phi ]&= \hbox {4}\!\!\int \!\! p_0 \!\left( {{\varvec{r}}} \right) [\nabla \phi \!\left( {{\varvec{r}}} \right) ]^{{2}}\,\, d{{\varvec{r}}}=\hbox { 4}\!\left( {m/\hbar } \right) ^{{2}}\!\!\int \! {{\varvec{j}}}_0 [\phi ;{{\varvec{r}}} ]^{{2}}{/}p_0 \!\left( {{\varvec{r}}} \right) \ d{{\varvec{r}}} \equiv I[ {p_0 ,{{\varvec{j}}}_0 } [\phi ]],\nonumber \\ \end{aligned}$$
(109)

where \({{\varvec{j}}}_{0}[\phi ] \equiv {{\varvec{j}}}[p_{0},\phi ]\). The sign of the non-classical entropy term has been chosen to guarantee the maximum value \(S[p_{0},\phi =0]\) at the exact ground-state solution, when the spatial phase vanishes, for which the complementary non-classical Fisher information reaches its minimum value \(I[p_{0}, \phi = 0]=0\).

The combined classical and non-classical contributions determine the associated resultant entropy/information content of general (variational) quantum states:

$$\begin{aligned} S[\psi ] =S [p_0] +S [p_0 ,\phi ] \qquad \hbox {and}\qquad I[\psi ] =I [p_0 ] +I [p_0 ,\phi ]. \end{aligned}$$
(110)

The ground-state equilibrium of a molecule then alternatively results either from the entropy-constrained principle for the system minimum-energy, or from the energy-constrained searches for the maximum of the non-classical Shannon entropy or the minimum of the non-classical Fisher information [16, 17].

It should be realized that the general (trial) states of such entropic rules imply non-vanishing contributions from both the classical (probability) and non-classical (phase/current) functionals of the entropy/information content. Only in the final, exact (stationary) ground state, which exhibits a purely time-dependent phase, is the state information measured by the corresponding classical measure alone, since the space-dependent part of the wave-function phase, responsible for the current distribution, then vanishes exactly.

As an illustration consider again a single particle moving in an external potential \(v({{\varvec{r}}})\) due to the fixed nuclei (Born–Oppenheimer approximation), described by the Hamiltonian

$$\begin{aligned} \hat{{\hbox {H}}}({{\varvec{r}}})=-\!\left( {\hbar ^{{2}}/{2}m} \right) \!\nabla ^{{2}}+v\!\left( {{\varvec{r}}} \right) , \end{aligned}$$
(111)

in the complex variational state \(\varphi ({{\varvec{r}}}) = R({{\varvec{r}}}) \exp [\hbox {i}\phi ({{\varvec{r}}})]\). In the non-degenerate ground state the lowest-energy amplitude eigenfunction of the Hamiltonian is then described solely by the modulus factor: \(\varphi _{0}=R_{0}\). It specifies the equilibrium probability distribution \(p_{0}=R_{0}^{2}\), while the exactly vanishing spatial phase \(\phi _{0} = 0\) implies the vanishing current density in this stationary molecular state: \({{\varvec{j}}}_{0}({{\varvec{r}}}) = (\hbar /m) p_{0 }\nabla \phi _{0} = {{\varvec{0}}}\).

Let us examine the modulus-constrained (vertical) search for the unknown phase function in the trial state \(\varphi ^{0}({{\varvec{r}}}) = R_{0}({{\varvec{r}}}) \exp [\mathrm{i}\phi ({{\varvec{r}}})]\), where \(R_{0}({{\varvec{r}}}) = [p_{0}({{\varvec{r}}})]^{1/2}\). This search conserves the classical (phase-independent) entropy \(S^{class.}[\varphi _{0}] = S[p_{0}]\) of the system exact ground state, which is also marked by the vanishing spatial phase, \(\phi =\phi _{0}= 0\). The corresponding expression for the expectation value of the system energy in this probability-constrained state reads:

$$\begin{aligned} E_{v}[\varphi ^{0}]&= \left\langle {\varphi ^{0}} \right| \hat{{\hbox {H}}}\left| {\varphi ^{0}} \right\rangle =(\hbar ^{2}\hbox {/2}m)\int {[(\nabla R_0 )^{2}}+\,R_0^2 (\nabla \phi )^{2}] \,\, d{{\varvec{r}}}\nonumber \\&\qquad \qquad \qquad \qquad \qquad \qquad +\int {R_0^2 v}\,\, d{{\varvec{r}}}\equiv E_v[R_0 ,\phi ]. \end{aligned}$$
(112)

Its phase-minimum principle recovers the familiar (classical) DFT energy expression:

$$\begin{aligned} \mathop {\min }\limits _\phi E_v [R_0, \phi ]&= \left\langle {\varphi _0 } \right| \hat{\mathrm{H}}\left| {\varphi _0 } \right\rangle =(\hbar ^{2}\hbox {/2}m)\int \!\! {(\nabla R_0 )^{2}}\,\, d{{\varvec{r}}}+\int \!\! {R_0^2 v}\,\, d{{\varvec{r}}}\equiv E_v [R_0 ,\phi _ 0]\nonumber \\&= \!\left( {\frac{\hbar ^{2}}{\hbox {8}m}} \right) \int \!\! {\frac{(\nabla p_0 )^{2}}{p_0 }}\,\, d{{\varvec{r}}}+\int \!\! {p_0 v}\,\, d{{\varvec{r}}} =E_v [p_0 ]. \end{aligned}$$
(113)

The optimum solution \(\phi _{0} = 0\) marks the maximum of the quantum entropy, \(S[p_{0}, \phi _{0}\)] = 0, and the minimum of the associated (density-constrained) non-classical Fisher information \(I[p_{0}, \phi _{0}] = 0\). This eigenstate solution indeed corresponds exclusively to the classical measures of the entropy/information content: \(S[\varphi _{0}] = S[p_{0}]\) and \(I[\varphi _{0}] = I[p_{0}]\).

This optimum solution of the maximum quantum entropy also implies the minimum of the associated (density-constrained) quantum Fisher information:

$$\begin{aligned} \mathop {\hbox {min}}\limits _\phi I[\varphi ^{0}]=I[\rho _0 ] {+\mathop {\hbox {min}}\limits _\phi I} [\rho _0 ,\phi ] =I [\rho _0 ]. \end{aligned}$$
(114)

Therefore, the phase component, vital for distinguishing the trial (vertical) functions, identically vanishes for the exact ground-state phase \(\phi _{0}= 0\). Notice that only then can the classical DFT functionals of the system electron density be used to predict the state average energy and the information content of this stationary electron distribution.
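
The vertical principle of Eqs. (112)–(114) can be illustrated with a one-parameter family of trial phases. A minimal sketch for the harmonic-oscillator ground state (ħ = m = ω = 1); the linear trial phase \(\phi _{\lambda }(x) = \lambda x\) is a hypothetical choice made for this example:

```python
# Modulus-constrained (vertical) search of Eqs. (112)-(114): with the
# ground-state modulus R_0 frozen, both the energy and the non-classical
# Fisher term are minimized by a vanishing spatial phase (lambda = 0).
import numpy as np

x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
R0 = np.pi**-0.25 * np.exp(-0.5 * x**2)      # harmonic-oscillator modulus
p0 = R0**2
dR0 = -x * R0
v = 0.5 * x**2                               # external potential

def E_v(lam):                                # Eq. (112), with grad(phi) = lam
    kinetic = 0.5 * np.sum(dR0**2 + p0 * lam**2) * dx
    return kinetic + np.sum(p0 * v) * dx

def I_nclass(lam):                           # non-classical Fisher term
    return 4.0 * np.sum(p0 * lam**2) * dx    # = 4 lam^2 for normalized p0

for lam in (0.0, 0.5, 1.0):
    print(f"lambda = {lam:3.1f}:  E_v = {E_v(lam):.4f},"
          f"  I[p0, phi] = {I_nclass(lam):.4f}")
# E_v(0) = 0.5 recovers the exact ground-state energy of Eq. (113); both
# quantities grow monotonically with |lambda|, so phi = 0 is the optimum.
```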

We have thus arrived at a remarkable parallelism with the ordinary thermodynamics [23]: the ground-state equilibrium results from the equivalent vertical (density-constrained) variational principles, either for the system minimum energy, at the constrained ground-state entropy \(S[p_{0}]\) or classical information \(I[p_{0}]\), or, alternatively, for the extremum of the entropy/information, at the constrained ground-state energy \(E_{v}[p_{0}]\). One has to use the quantum measures of the system information content in order to distinguish the phase/current composition of the trial (vertical) states. The evolution of such an entropic search is then properly described by the current shape of the phase function, which ultimately vanishes in the optimum ground-state solution.

Consider next a general case of a trial quantum state of \(N\) electrons, \(\Psi (N)\), corresponding to the fixed ground-state electron density \(\rho = \rho _{0} = Np_{0}\). The energy variational principle now involves a search for the optimum (normalized) wave function, which minimizes the expectation value of the electronic Hamiltonian

$$\begin{aligned} \hat{\hbox {H}}(N) =\hat{\hbox {V}}_{ne} (N)+ [\hat{\hbox {T}}(N)+\hat{\hbox {V}}_{ee} (N)] \equiv \sum _{i=1}^N {v(i)} +\hat{\hbox {F}}(N), \end{aligned}$$
(115)

where \(\hat{\mathrm{F}}(N)\) combines the electron kinetic [\(\hat{\mathrm{T}}(N)\)] and repulsion [\(\hat{\mathrm{V}}_{ee} (N)\)] energy operators of \(N\) electrons. One recalls that the “entropic” interpretation has also been attributed previously [11, 13] to the density-constrained principles of modern DFT, in searches performed for the specified electron density \(\rho \). For example, in Levy’s constrained search [36] one defines the universal part of the density functional for the system electronic energy, which admits densities that are not necessarily \(v\)-representable, by the following (vertical) variational principle,

$$\begin{aligned} F[\rho ] = {T} [\rho ] +{V_{ee}} [\rho ] = \hbox {inf}_{\Psi \rightarrow \rho } \langle \Psi |\hat{\mathrm{F}}|\Psi \rangle \equiv \langle \Psi [\rho ]|\hat{\mathrm{F}}|\Psi [\rho ]\rangle ; \end{aligned}$$
(116)

here, one searches over wave functions which yield the given electron density \(\rho \), symbolically denoted by \(\Psi \!\!\rightarrow \!\!\rho \), and calculates the universal (\(v\)-independent) part \(F[\rho ]\) of the density functional for the system electronic energy,

$$\begin{aligned} E_v [\rho ] =F [\rho ]+\int {v\!\left( {{\varvec{r}}} \right) \,\rho \!\left( {{\varvec{r}}} \right) \,d {{\varvec{r}}},} \end{aligned}$$
(117)

as the lowest value (infimum) of the quantum expectation value of \(\hat{\mathrm{F}}(N)\). When this search is performed for the fixed ground-state density, \(\rho =\rho _{0}\), it also implies the fixed DFT value of the system electronic energy, by the first HK theorem [35]. This feature is thus reminiscent of the thermodynamic criterion for determining the equilibrium state formulated in the entropy representation. By analogy to the maximum-entropy principle for constant internal energy in the phenomenological thermodynamics, this DFT construction can be thus regarded as being also “entropic” in character [11].

It should be emphasized that the variational principle for determining the ground-state wave function, involving the constrained search for the minimum of the system energy, can also be interpreted as the DFT optimization over all admissible densities, in accordance with the second HK theorem [35]. It combines the external (“horizontal”) search, over trial electron densities, and the internal (vertical) search, over wave functions of \(N\) fermions that yield the density currently examined in the external search:

$$\begin{aligned} \hbox {min}_\Psi \langle \Psi {\vert }\hat{\mathrm{H}}{\vert }\Psi \rangle&= \hbox {min}_\rho E_{v}[\rho ]= \hbox {min}_\rho \{\smallint \,v({{\varvec{r}}})\,\rho ({{\varvec{r}}})\,d{{\varvec{r}}}\nonumber \\&+\hbox {inf}_{{\Psi }\!\rightarrow \!\rho } \langle \Psi {\vert }\hat{{\hbox {F}}} {\vert } \Psi \rangle \}= E_{v}[\rho _{0}]. \end{aligned}$$
(118)

Let us focus on the vertical (internal) optimization in the preceding equation. It corresponds to the fixed ground-state electron density, say, \(\rho =\rho _{0 }=\rho [v]\), identified in the (external) horizontal search. The internal, quantum-entropy rule thus involves the energy-constrained search over \(\Psi \rightarrow \rho _{0}\) for the optimum wave function \(\Psi [\rho _{0}]\) corresponding to the fixed, matching external potential \(v=v[\rho _{0}]\) due to the “frozen” nuclei:

$$\begin{aligned} E_{v}[\rho _{0}]&= \smallint v({{\varvec{r}}})\rho _{0}({ {\varvec{r}}})d{{\varvec{r}}} + \hbox {inf}_{\Psi \rightarrow \rho _0 } \langle \Psi {\vert }\hat{{\hbox {F}}} {\vert }\Psi \rangle =V_{ne}[\rho _{0}] + V_{ee}[\rho _{0}] +\hbox {inf}_{\Psi \rightarrow \rho _0 } \left\langle \Psi \right| \hat{{\hbox {T}}}\left| \Psi \right\rangle \nonumber \\&= V_{ne} [\rho _0] +V_{ee} [\rho _0] +T [\rho _0] =V_{ne} [\rho _0] +V_{ee} [\rho _0 ]+\!\left( {\frac{\hbar ^{2}}{\hbox {8}m}} \right) I[\rho _0 ]. \end{aligned}$$
(119)

One observes the presence of Levy’s functional \(F[\rho _{0}]\) as the crucial (entropic) part of this extremum principle of the system physical information. Notice that the external-potential and electron-repulsion energies are fixed by the frozen-density constraint, so that the optimum state also marks the infimum \(I[\rho _{0}] = I[\varphi _{0}]\) of the quantum Fisher measure of the information in the trial (vertical) wave functions, related to the system average kinetic energy \(T[\rho _{0}]\).

10 Conclusion

The quantum-generalized information measures and their vertical variational principles have been examined. This analysis of molecular equilibria has stressed the need for using the resultant information measures, which take into account both the classical and quantum contributions, due to the electronic probability and current (phase) distributions, respectively. The non-classical generalization of the gradient (Fisher) information introduces the information contribution due to the probability current. The proposed quantum-generalized Shannon entropy includes the additive contribution due to the average magnitude of the state phase. This extension has been accomplished by requiring that the relation between the classical Shannon and Fisher information densities-per-electron extends to their non-classical (quantum) analogs. A similar generalization of the information-distance (entropy-deficiency) concept for comparing spatial probability/current distributions has also been proposed, in both the Shannon cross-entropy and Fisher missing-information representations.

These quantum-information terms complement the classical Fisher and Shannon measures, functionals of the particle probability distribution alone. The resultant quantum measures are thus capable of extracting the full information content of the complex probability amplitudes (wave functions), due to both the probability and current distributions. Elsewhere, the associated continuity equations have been examined, the non-classical information sources, linked to the wave-function phase or the probability-current densities, have been identified, and the phase-current density has been introduced, which complements the familiar probability-current concept in quantum mechanics [10–13].

As in the ordinary thermodynamics, the equilibrium ground state of a molecule was shown to result alternatively either from the (entropy/information)-constrained principle for the system minimum energy, or from the energy-constrained search for the extremum of the information content: either the maximum of the non-classical Shannon entropy or the minimum of the quantum Fisher information. By the HK theorem the ground-state values of the density/probability functionals for the system energy and its entropy/information content uniquely identify the equilibrium distribution of electrons. Therefore, in vertical searches carried out for this fixed electron density, the vanishing spatial phase and probability current of the non-degenerate ground state are both determined by the variational principles for the non-classical entropy/information contributions, at the fixed values of the corresponding classical terms. The spatial-phase aspect identifies the trial function in these density-constrained (vertical) searches, and it ultimately vanishes in the final non-degenerate ground-state solution.

The SP of quantum mechanics introduces the conditional probabilities between quantum states, which generate a network of molecular communications. The non-additive contributions to the probability/current distributions and information densities have been identified, and the phase relations in the two-orbital model have been examined. The OCT of the chemical bond introduces the molecular information system transmitting “signals” of electron allocations to AO states, which define the set of elementary electronic events. The conditional probabilities between these basis functions, propagated via the network of the occupied molecular orbitals, determine the orbital communications in molecules, which are generated by the bond-projected SP. In the SCF MO theory their amplitudes are related to the corresponding elements of the CBO matrix. The standard conditional-entropy and mutual-information descriptors of this orbital network provide useful indices of the IT covalency and ionicity.