1 Introduction

Communication systems are designed to cope with the constraints of the physical medium. Previous works have shown that chaos has intrinsic properties that make it attractive for the design of modern communication systems.

Take x(t) to represent a controlled chaotic signal that encodes information from a single transmitter. Let r(t) represent the transformed signal that is received. Chaos has offered communication systems whose information capacity can remain invariant under a small increase in the noise level [1,2,3,4] and that can be robust to filtering [5,6,7] and to the multi-path propagation [7] intrinsically present in wireless communication. Decoding of r(t) can be trivial, with the use of a simple threshold technique [7, 8]. Chaos allows for simple controlling techniques to encode digital information [9, 10]. For the wonderful solvable systems proposed in [11, 12], simple analytical expressions to generate the controlled signal x(t) can be derived [13, 14]. Moreover, these systems have matched filters whose output maximizes the signal-to-noise ratio (SNR) of r(t), thus offering a practical and reliable way to decode transmitted information. Chaos allows for integrated communication protocols [15]; it offers viable solutions for wireless underwater [16, 17], digital [18] and optical [19] communication, for radar applications [20], and for simultaneous radar and communication [21]. Chaotic communication has been experimentally shown to achieve higher bit rates in a commercial wired fibre-optic channel [22] and lower bit error rates (BER) than conventional wireless non-chaotic baseband waveform methods. Moreover, chaos-based communication only requires equipment that is compatible with today’s commonly used hardware [14].

Several works on communication with chaos have focused on a system composed of two users, the transmitter and the receiver. Some works were motivated by the master–slave synchronization configuration [23], where the master (the transmitter) sends the information to the slave (the receiver) [1]. The understanding of how two users communicate cannot always capture the complexities involved in even simple networked communication systems. It is often more appropriate to break down this complex communication problem into a much simpler problem consisting of two configurations, the uplink and the downlink. The uplink configuration provides an understanding of how signals transmitted by several nodes, each carrying different information, can be processed at a unique central node. The downlink configuration provides an understanding of how a unique central node transmitting a single signal can distribute dedicated information to several other nodes. This strategy of breaking a complex network problem into several smaller networks described by the uplink and the downlink configurations is crucial for understanding very complex, technologically oriented flow networks, such as communication and power networks, and can also shed much light on the processing of information in networks as complex as the brain. The uplink configuration would contribute to a better understanding of how pre-synaptic neurons transmit information to a hub neuron, and the downlink configuration would contribute to a better understanding of how post-synaptic neurons can process information from a hub neuron. This paper focuses on information signals that are linearly composed, and thus this approach could in principle be used to explain communication in neurons coupled by electrical synapses. However, the main focus of the present paper is on understanding how superimposed chaotic signals can be robust to the non-ideal properties of the physical medium present in wireless communication networks.

A novelty of this work is to show that chaos can naturally allow for communication systems that operate in a multi-transmitter/receiver and multi-frequency environment. Consider a scenario where the received signal, r(t), is composed of a linear superposition of chaotic signals of two transmitters \(x^{(1)}\) and \(x^{(2)}\) (or more), as in \(r(t)={\tilde{\gamma }^{(1)}} x^{(1)}(t) + {\tilde{\gamma }^{(2)}}x^{(2)}(t) + w(t)\), each signal operating with a different frequency bandwidth and each encoding different information content with a different bit rate, with \({\tilde{\gamma }^{(i)}} \in \mathfrak {R}\) and w(t) representing additive white Gaussian noise (AWGN) modelling the action of the physical medium on the composed transmitted signal. Is it possible to decompose the source signals, \(x^{(1)}(t)\) and \(x^{(2)}(t)\), out of the received signal r(t), and recover (i.e. decode) their information content? My work explores the wonderful decomposability property of chaotic signals to answer this question positively, enabling a solution for multi-source and multi-frequency communication.

In this paper, I show that for the no-noise scenario, the spectrum of positive Lyapunov exponents (LEs) of r(t) is the union of the sets of positive Lyapunov exponents of both signals \(x^{(1)}(t)\) and \(x^{(2)}(t)\). This is demonstrated in the main manuscript in Sect. 2.2 for the system used to communicate. “Appendix C” generalizes this result to superimposed signals coming from arbitrary chaotic systems. What is more, for the system proposed in [11], the information content of the composed signal r(t) preserves the information carried by the source signals, this being linked to the preservation of the positive Lyapunov exponents. This result is fully explained in Sect. 2.3, where I present the information encoding capacity of the proposed communication system, or in other words, the rate of information contained in the linearly composed signal of several chaotic sources. I also discuss in this section how this result extends to communication systems whose users communicate with chaotic systems different from the one in Ref. [11]. Preservation of the Lyapunov exponents in the composed signals of arbitrary chaotic systems is demonstrated thanks to an equivalence principle of deterministic chaotic systems, which permits the composed signal to be effectively described by a signal departing from a single source but with time-delayed components. Moreover, when the physical medium where the composed signal is transmitted has noise, it is possible to determine appropriate linear coefficients \(\tilde{\gamma }^{(i)}\) (denoted as power gains, see Eq. (17) in Sect. 2.4), which depend on the natural frequency of the user, on the attenuation properties of the medium and on the number of users (end of Sect. 2), such that the information content carried by the composed signal r(t) can be trivially decomposed, or decoded, by a simple threshold (see Eq. (18)), with a low probability of errors, or no errors at all for sufficiently small noise levels. In the latter case, the information encoding capacity provides the information capacity of the system, or the rate of information received/decoded.

The scientific problem of decomposing a linear superposition of chaotic signals, which provides the mathematical support for the proposed communication system, is similar to that of blind source separation of mixed chaotic signals [24] or that of the separation of a signal composed of a linear superposition of independent signals [25]. However, these separation methods require long measurements, and additionally either several measurements of multiple linear combinations of the source signals, or source signals that have similar power spectra and that are independent. These requirements typically cannot be fulfilled in a wireless communication environment, where information must be decoded even when very few observations are made, signals are sent only once with constant power gains, source signals can have arbitrary natural frequencies, and they can be dependent.

I also show in Sect. 3 that in the single-user communication system proposed in Ref. [11], with a chaotic generator for the source signal and a matched filter to decode information from the received signal corrupted by noise, the chaotic generator has no negative LEs, which leads to a stable matched filter with no positive LEs that can therefore optimally filter noise. Moreover, I show that the single-user communication system formed by the chaotic generator plus the matched filter can be roughly approximated by the unfolded Baker’s map [26]. This understanding permits the conclusion that, in the multi-user environment, the matched filter that decomposes the source signal of a user from the received composed signal r(t) is the matched filter of that user alone.

I will then study, in Sect. 4, the information capacity of the proposed communication system in prototyped wireless network configurations, and in Sect. 4.1, I will compare its performance with a non-chaotic communication method that is the strongest candidate for the future 5G networks, non-orthogonal multiple access (NOMA), and will show that the proposed multi-user chaos-based communication system can (under certain configurations) communicate at higher bit rates for large noise levels in the physical medium.

In Sect. 5, I will discuss how communication with chaos can be made robust to other types of non-ideal physical media (also referred to as a “channel of communication”) [27] that present dispersion and whose signals interfere with other periodic (non-chaotic) signals.

Finally, for a succinct presentation on the historical developments of chaos for communication, see “Appendix D”.

2 Linear composition of chaotic signals, the preservation of the Lyapunov exponents and encoding for transmission

A wonderful chaotic oscillator for communication is the system proposed in Ref. [11]. With an appropriate rescaling of time to a new time frame \(\mathrm{d}t{^{\prime }}=\gamma \mathrm{d}t\), it can be rewritten as

$$\begin{aligned} \ddot{x}-2\beta (\gamma )\dot{x}+(\omega ^2+\beta (\gamma )^2)(x-s(t))=0, \end{aligned}$$
(1)

where \(s(t) \in \{-1,1\}\) is a 2-symbol alphabet discrete state that switches its value according to the signum function \(s(t)={x(t)/|x(t)|}\) whenever \(|x(t)|<1\) and \(\dot{x}=0\). If the information to be communicated is the binary stream \(\mathbf{b} =\{b_0,b_1,b_2, \ldots \}\) (\(b_n \in \{0,1\}\)), a signal can be created such that \(s(t) = (2b_n-1)\), for \(nT \le t < (n+1)T\) [13]. In this new time frame, the natural frequency is \(f(\gamma )=1/\gamma \) (\(\omega =2\pi f\)), the period is \(T(\gamma )=1/f(\gamma )=\gamma \), and \(\beta (\gamma ) =\beta (\gamma =1) f(\gamma )\), where \(0 < \beta (\gamma =1) \le \ln {(2)}\). More details can be seen in “Appendix A”. \(\beta (\gamma =1)\) is a parameter with an important physical meaning. It represents the Lyapunov exponent (LE) of the system in units of nepits per period (or per cycle), which is also equal to the rate of information produced by the chaotic trajectory in nepits per period. On the other hand, \(\beta (\gamma )\) represents the LE in units of nepits per unit of time, which is also equal to the rate of information produced by the chaotic trajectory in nepits per unit of time. See Sect. 2.3.

The received signal in the noiseless wireless channel from user k can be modelled by

$$\begin{aligned} r^{(k)}(t) = \sum _{l=0}^{L-1} \alpha _l \gamma ^{(k)} x(t-\tau _l) \end{aligned}$$
(2)

where there are L propagation paths, each with an attenuation factor \(\alpha _l\) and a time delay \(\tau _l\) for the signal to arrive at the receiver along path l (with \(0=\tau _0< \tau _1< \cdots < \tau _{L-1}\)), and \( \gamma ^{(k)}\) is an equalizing power gain to compensate for the amplitude decay due to the attenuation factor. The noisy channel can thus be modelled by \(r(t)+w(t)\), where w(t) is AWGN.

Let me consider the time-discrete dynamics of the signal generated by a single user, \(r^{(k)}(t) = r(t)\), sampled at frequency f, so that the values \(r_n=r(n/f)\) are collected. Assuming for simplicity that \( \gamma ^{(k)}=1\), the return map (see “Appendix B”) of the received signal is given by

$$\begin{aligned} r_{n+1}= & {} e^{\frac{\beta }{f}} r_n - \sum _{l=0}^{L-1} \alpha _l \left( e^{\beta /f} s_{n^{\prime }} - \mathcal {K}_l s_{n^{\prime }} \right. \nonumber \\&\left. - s_{n^{\prime }+1} + s_{n^{\prime }+1} \mathcal {K}_l \right) \end{aligned}$$
(3)

where \(n^{\prime } = n - \lceil f \tau _l \rceil \), \(\mathcal {K}_l = e^{-\beta (\tau _l - \lceil \tau _l/T \rceil T)}[\cos {\left( 2\pi \frac{\tau _l}{T}\right) } + \frac{\beta }{\omega }\sin {\left( 2\pi \frac{\tau _l}{T}\right) }]\), \(s_n\) represents the binary symbol associated with the time interval \(nT \le t <(n+1)T\) (so \(s_n=s(t=nT)\)), \(\lceil f \tau _l \rceil \) represents the ceiling integer of \(f \tau _l\), and \(\frac{\beta }{f}\) denotes \( \frac{\beta (\gamma )}{f(\gamma )} = \beta (\gamma =1)\). Equation (3) extends the result in [28], which is valid when \(\tau _l = mT\), with \(m \in \mathbb {N}\), in which case \(\mathcal {K}_l=1\).
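As a consistency check on Eq. (3), the coefficient \(\mathcal {K}_l\) can be evaluated numerically. The following minimal Python sketch (the function name and the parameter values are illustrative assumptions, not part of the original derivation) confirms that \(\mathcal {K}_l=1\) whenever \(\tau _l\) is an integer multiple of T, recovering the special case of Ref. [28]:

```python
import numpy as np

def K_l(tau_l, T, beta, omega):
    """Coefficient K_l of Eq. (3) for a propagation path with delay tau_l."""
    phase = 2 * np.pi * tau_l / T
    # tau_l - ceil(tau_l/T)*T is the (non-positive) offset of tau_l from the next multiple of T
    offset = tau_l - np.ceil(tau_l / T) * T
    return np.exp(-beta * offset) * (np.cos(phase) + (beta / omega) * np.sin(phase))

T = 1.0                       # period of the user (f = 1)
beta = np.log(2) / T          # beta(gamma) = beta(gamma=1) * f, with beta(gamma=1) = ln 2
omega = 2 * np.pi / T

print(K_l(3 * T, T, beta, omega))    # delay equal to a multiple of T: K_l = 1
print(K_l(0.3 * T, T, beta, omega))  # generic delay: K_l differs from 1
```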

The Lyapunov exponent (LE) of the 1-dimensional map in Eq. (3) in units of nepits per period for multi-path propagation, denoted by \(\chi \) (which is equal to the positive LE of the continuous dynamics; see Sect. I of Supplementary Material (SM)), is equal to \(\chi = \frac{\beta }{f} = \beta (\gamma =1)\) [nepits per period], since \(\chi = \lim _{n\rightarrow \infty } \frac{1}{n} \ln {\left| \prod _{i=0}^{n} \frac{dr_{n+1}}{dr_n} \right| }\). This LE can be expressed in nepits per unit of time by simply taking \(\frac{\chi }{T} = \beta \). The LE can be expressed in units of “bits per period” by using the binary logarithm instead of the natural logarithm. This is also equal to the LE of the return map

$$\begin{aligned} x_{n+1}=e^{\frac{\beta }{f}}[x_n - (1-e^{-\beta /f})s_n], \end{aligned}$$
(4)

obtained from Eq. (3) when there is only a direct path, \(L=1\). Notice also that the constant attenuation factor \(\alpha _l\) does not contribute to this LE, only acting on the value of the binary symbols. This is to be expected [29].
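The way Eq. (4) carries a bit stream can be illustrated with a short numerical sketch. It assumes \(\beta /f=\ln {(2)}\) (the choice also used in Sect. 2.1) and chooses the initial condition from the binary expansion of the bits, which is one possible way, not necessarily the analytic procedure of Refs. [13, 14], to obtain a bounded trajectory whose guard symbols reproduce the prescribed bits:

```python
bits = [1, 0, 1, 1, 0, 0, 1, 0]              # example bit stream b_n (an assumption of this sketch)
s = [2 * b - 1 for b in bits]                # s_n = 2 b_n - 1, as defined for Eq. (1)

# Initial condition consistent with the symbol sequence: x_0 = sum_j s_j 2^{-(j+1)}.
# With beta/f = ln 2, Eq. (4) reduces to x_{n+1} = 2 x_n - s_n.
x = sum(sj * 2.0 ** (-(j + 1)) for j, sj in enumerate(s))

decoded = []
for _ in bits:
    s_n = 1 if x >= 0 else -1                # guard condition: s_n = x_n / |x_n|
    decoded.append((s_n + 1) // 2)           # recover b_n by a simple threshold at zero
    x = 2.0 * x - s_n                        # Eq. (4) with e^{beta/f} = 2

print(decoded == bits)                       # True: the trajectory carries the bit stream
```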

2.1 Linear composition of chaotic signals for the uplink and the downlink communication configurations

The analysis will focus on two prototype wireless communication configurations: the uplink and the downlink. In the uplink communication, several users transmit signals that become linearly superimposed when they arrive at a base station antenna (BS). In the downlink communication, a BS sends one composed signal (a linear superposition of chaotic signals) containing information to be decomposed (or decoded) by several users.

I propose a chaos-based communication system, named “Wi-C1”, that allows for multi-user communication, where each of the N users operates with its own natural frequency. It is assumed that other constraints of the wireless medium are present, such as multi-path propagation and AWGN. Wi-C1 with 1 BS can be modelled by a linear superposition of chaotic signals as

$$\begin{aligned} O(t)_{u}= & {} \sum _{k=1}^N \sum _{l=0}^{L^{(k)}-1} \alpha _l^{(k)} \gamma ^{(k)} \tilde{\gamma }^{(k)} x^{(k)}(t-\tau _l^{(k)}) + w(t) \\ O^{(m)}(t)_{d}= & {} \sum _{l=0}^{L^{(m)}-1} \alpha _l^{(m)} \sum _{k=1}^N \gamma ^{(k)} \tilde{\gamma }^{(k)} x^{(k)}(t-\tau _l^{(m)}) \nonumber \\&+ w^{(m)}\nonumber \end{aligned}$$
(5)

\(O(t)_{u}\) in Eq. (5) represents the composed signal received at the BS from all users in the uplink. This signal will be the focus of the paper from now on. \(O^{(m)}(t)_{d}\) represents the signal received by user m from a composed signal transmitted by the BS in the downlink. w(t) represents AWGN at the base station, and \(w^{(m)}(t)\), for \(m=1,\ldots ,N\), represents AWGN at user m. \(\alpha _l^{(k)}\) is the attenuation factor between the BS and user k along path l, and \(\gamma ^{(k)}\) and \(\tilde{\gamma }^{(k)}\) are power gains. \(L^{(k)}\) is the number of propagation paths between user k and the BS. In this work, we will choose \(\gamma ^{(k)} = 1/\alpha _l^{(k)}\), to compensate for the attenuation of the medium, and \(\tilde{\gamma }^{(k)}\) is a power gain to be applied at the transmitter or BS, which can be identified as the linear coefficient of the superposition of chaotic signals.

I will now consider the uplink, where 2 users send signals that are linearly composed by a superposition that happens at the BS; each user or source signal is identified with an index \(k=\{1,2\}\). In most of the following results, I will neglect in Eq. (3) the contribution from propagation paths other than the direct one (\(L^{(1)}=L^{(2)}=1\)). Assume user 1 to operate at frequency \(f^{(1)}=f=1/T\) and user 2 at frequency \(f^{(2)}=2f=2/T\), and \(\gamma ^{(k)}=1\). In order to reduce the continuous mathematical description of the uplink communication, including the decoding phase, to the 2D unfolded Baker’s map, I will only treat cases for which the natural frequency of user k is given by \(f^{(k)} = 2^{m} f\), with \(m \in \mathbb {N}\), the parameter \(\beta ^{(k)}=f^{(k)}\ln {(2)}\), and f is the base frequency of user 1, which will be chosen to be 1. At time \((n+1)T\), the signal received by the BS from user k=1 as a function of the signal received at nT is described by

$$\begin{aligned} r_{n+1}^{(1)} = 2 r^{(1)}_n - \alpha _0^{(1)}s^{(1)}_n. \end{aligned}$$
(6)

At time \((n+1)T\), the signal received by BS from user k=2 as a function of the signal received at nT is

$$\begin{aligned} r_{2n+2}^{(2)} = 4 r^{(2)}_{2n} - \alpha _0^{(2)}[2s^{(2)}_{2n} + s^{(2)}_{2n+1}], \end{aligned}$$
(7)

where \(r_{2n}^{(2)}\) represents the value of \(r^{(2)}(t=nT)\) (recall that in each time interval T, user 2’s chaotic system completes two full cycles, each with period T/2). Notice that the LE of Eq. (7) provides a quantity in terms of 2 cycles of user 2, but 1 cycle of user 1. So, the LE of Eq. (7) is equal to \(\ln {(4)}\) nepits per period T, which is twice the LE of Eq. (6) for that same period T. Comparison of both LEs becomes easier if we calculate them in units of nepits per unit of time. The LE for user 1 is \(\beta ^{(1)}=f^{(1)}\ln {(2)}=\ln {(2)}\) and that for user 2 is \(\beta ^{(2)}=f^{(2)}\ln {(2)}=2\ln {(2)}\). This is because user 2 has a frequency twice that of user 1 [30]. Since these two maps are full shifts, their LEs equal their Shannon entropies, so their LEs represent the encoding capacity (in units of nepits). Doing the coordinate transformation \(r^{(1)}_{n}=2u_n^{(1)}-1\) (for the map in (6)) and \(r^{(2)}_{2n}=2u_n^{(2)}-1\) (for the map in (7)) and choosing \(\gamma ^{(k)}=1/\alpha ^{(k)}\), Eqs. (6) and (7) become, respectively,

$$\begin{aligned} u^{(1)}_{n+1}= & {} 2u^{(1)}_n-\lfloor 2u^{(1)}_n \rfloor \equiv 2u^{(1)}_n- b^{(1)}_n \end{aligned}$$
(8)
$$\begin{aligned} u^{(2)}_{n+1}= & {} 4u^{(2)}_n-\lfloor 4u^{(2)}_n \rfloor \equiv 4u^{(2)}_n- b^{(2)}_n, \end{aligned}$$
(9)

where \(u^{(k)}_n \in [0,1]\) (in contrast to \(r_n^{(k)} \in [-1,1]\)), \(b^{(1)}_n=\frac{1}{2}(s^{(1)}_n+1) \in \{0,1\}\), and \(b^{(2)}_n=\frac{1}{2}(2s^{(2)}_{2n}+s^{(2)}_{2n+1}+3) \in \{0,1,2,3\}\). Equation (8) is simply the Bernoulli shift map, representing the discrete dynamics of user 1 (the signal received after equalizing for the attenuation), and Eq. (9) is the second iteration of the shift map, representing the discrete dynamics of user 2 (after equalizing the attenuation, by doing \(\gamma ^{(k)}=1/\alpha ^{(k)}\)).
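A minimal numerical sketch of Eqs. (8) and (9) (the initial conditions below are arbitrary assumptions of this example) shows how the symbols \(b^{(k)}_n\) are read directly off the trajectories by the floor operation:

```python
import numpy as np

rng = np.random.default_rng(1)
u1, u2 = rng.random(), rng.random()   # initial conditions in [0, 1)

for n in range(5):
    b1 = int(np.floor(2 * u1))        # symbol of user 1, b_n^(1) in {0, 1}
    b2 = int(np.floor(4 * u2))        # symbol of user 2, b_n^(2) in {0, 1, 2, 3}
    print(n, b1, b2)
    u1 = 2 * u1 - b1                  # Eq. (8): Bernoulli shift map
    u2 = 4 * u2 - b2                  # Eq. (9): second iterate of the shift map
```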

Figure 1A, B shows in red dots solutions of Eqs. (8) and (9), respectively. The corresponding return maps of the discrete sets of points obtained directly from the continuous solution of Eq. (1) with frequency \(f^{(k)}=kf\), by taking points at times \(t=nT\) and normalizing as before so that the points lie in [0, 1], are shown by the black crosses.

Fig. 1

The return maps of Eqs. (8) and (9) are shown by red dots, and corresponding return maps of discrete sets obtained directly from the continuous solution of Eq. (1) are shown by black crosses. In A, discrete states for user \(k=1\), and in B, discrete states for user k=2. (Color figure online)

The composed received signal at discrete times nT, a linear superposition of 2 chaotic signals with different power spectra, is given by

$$\begin{aligned} O_{n}=\tilde{\gamma }^{(1)}u^{(1)}_n + \tilde{\gamma }^{(2)}u^{(2)}_{n}. \end{aligned}$$
(10)

Generalization to N source signals can be written as \(O_{n}=\sum _{k=1}^{N} \tilde{\gamma }^{(k)}u^{(k)}_n\). At the BS, the received signal is \(O_n + w_n\), so it is corrupted by AWGN \(w_n\) whose power defines a signal-to-noise ratio (SNR), in dB, with respect to the power of the signal \(O_n\). The received discrete-time return map, for \(w_n=0\), can be derived by putting Eqs. (8) and (9) into Eq. (10)

$$\begin{aligned} O_{n+1}= & {} 4 O_n - 2 \tilde{\gamma }^{(1)}u^{(1)}_n - \tilde{\gamma }^{(2)}b_n^{(2)} - \tilde{\gamma }^{(1)}b_n^{(1)} \end{aligned}$$
(11)
$$\begin{aligned} u^{(1)}_{n+1}= & {} 2u^{(1)}_n- b^{(1)}_n , \end{aligned}$$
(12)

where Eq. (12) is just Eq. (8).
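The consistency of Eqs. (11) and (12) with the direct superposition of Eq. (10) can be verified numerically. The minimal sketch below (initial conditions, gains and the number of iterations are assumptions of this example) iterates the source maps, composes \(O_n\) through Eq. (10), and checks that Eq. (11) reproduces \(O_{n+1}\) in the noiseless case:

```python
import numpy as np

g1, g2 = 0.2, 0.8                     # power gains (the values used later in Sect. 2.4)
u1, u2 = 0.137, 0.642                 # arbitrary initial conditions in [0, 1)

for n in range(20):
    b1 = int(np.floor(2 * u1))        # symbol of user 1
    b2 = int(np.floor(4 * u2))        # symbol of user 2
    O_n = g1 * u1 + g2 * u2           # Eq. (10), noiseless composition
    u1_next = 2 * u1 - b1             # Eq. (8) / Eq. (12)
    u2_next = 4 * u2 - b2             # Eq. (9)
    O_direct = g1 * u1_next + g2 * u2_next              # Eq. (10) evaluated at n+1
    O_map = 4 * O_n - 2 * g1 * u1 - g2 * b2 - g1 * b1   # Eq. (11)
    assert abs(O_direct - O_map) < 1e-12
    u1, u2 = u1_next, u2_next

print("Eq. (11) matches the direct composition of Eq. (10).")
```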

2.2 Preservation of LEs for linear compositions of chaotic source signals

The system of Eqs. (11) and (12) has two distinct positive LEs, one along the direction \({\varvec{v}}^{(1)} = (0 \,1)\) associated with user 1 and equal to \(\chi ^{(1)}=\ln {(2)}\) nepits per period T, and another along the direction \({\varvec{v}}^{(2)} = (1\, 0)\), which can be associated with user 2 and equals \(\chi ^{(2)}=\ln {(4)}=2\ln {(2)}\) nepits per period T.

To calculate the LEs of this 2-dimensional system (see [29, 31]), we consider the expansion of a unit basis of orthogonal perturbation vectors \(\mathbf {v}\) and calculate them by

$$\begin{aligned} {\varvec{\chi }} = \lim _{n\rightarrow \infty } \frac{1}{n} \ln {|| \varvec{M} \cdot \mathbf {v} ||} , \end{aligned}$$
(13)

where \(||\mathbf {v}||\) is the norm of vector \(\mathbf {v}\), \(\varvec{M}=\varvec{J}^n\), and \(\varvec{J}=\left( \begin{array}{cc} 4 &{} -2\tilde{\gamma }^{(1)} \\ 0 &{} 2\end{array} \right) \). Thus, combining chaotic signals with different frequencies as a linear superposition described by Eq. (10) preserves the spectra of LEs of the signals of the users alone. This is a hyperbolic map whose sum of positive Lyapunov exponents is equal to the Kolmogorov–Sinai entropy, which represents the information rate. Consequently, the information received is equal to the sum of the information transmitted by both users, for the no-noise scenario. More details about this relationship are presented in Sect. 2.3. In other words, a linear superposition of chaotic signals as represented by Eq. (10) does not destroy the information content of each source signal.

Preservation of the spectrum of the LEs in a signal that is a linear superposition of chaotic signals with different power spectra is a universal property of chaos. A demonstration is provided in “Appendix C”, where I study signals composed of two variables from the Rössler attractor, user 2 with a base frequency that is Q times that of user 1. This demonstration uses an equivalence principle: every wireless communication network with several users can be made equivalent to a single user in the presence of several imaginary propagating paths. Attenuation and power gain factors need to be recalculated to compensate for a signal that is in reality departing from user 2 but that is being effectively described as departing from user 1. Consider the case of 2 users, both with the same frequency \(f^{(k)}=f\), in the uplink scenario. The trajectory of user 2 at a given time t, \(x^{(2)}(t)\), can be described in terms of the trajectory of user 1 at a given time \(t-\tau \). So, the linear superposition of 2 source signals in Eq. (5) can be simply written as a single source with time-delayed components as

$$\begin{aligned} O(t)_{u}= & {} \sum _{l=0}^{L^{(k)}-1} [\alpha _l^{(1)} \gamma ^{(1)} \tilde{\gamma }^{(1)} x^{(1)}(t-\tau _l^{(1)}) \nonumber \\&+ \alpha _l^{(2)} \gamma ^{(2)} \tilde{\gamma }^{(2)} x^{(1)}(t-\tau _l^{(1)}-\tau )] + w(t). \end{aligned}$$
(14)

In practice, \(\tau \) can be very small, because of the sensitivity to initial conditions and the transitivity of chaos. For a small \(\tau \) and \(\epsilon \), it is true that \(|x^{(2)}(t) - x^{(1)}(t-\tau )| \le \epsilon \), regardless of t.

This property of chaos is extremely valuable, since when extending the ideas of this work to arbitrarily large and complex communicating networks, one might want to derive expressions such as in Eqs. (11) and (12) to decode the information arriving at the BS. Details of how to use this principle to derive these equations for two users with \(f^{(2)}=2f^{(1)}\) and also when \(f^{(2)}=f^{(1)}\) are shown in Sect. II of SM.
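Returning to the Jacobian \(\varvec{J}\) given below Eq. (13), the claim that the composed map of Eqs. (11) and (12) carries both exponents, \(\ln {(4)}\) and \(\ln {(2)}\), can be checked with a standard QR (Benettin-type) routine; the sketch below is a generic procedure, not taken from the paper:

```python
import numpy as np

g1 = 0.2                                # power gain of user 1
J = np.array([[4.0, -2.0 * g1],         # Jacobian of Eqs. (11)-(12), as given below Eq. (13)
              [0.0,  2.0]])

Q = np.eye(2)                           # orthogonal perturbation vectors
les = np.zeros(2)
n = 10000
for _ in range(n):
    Q, R = np.linalg.qr(J @ Q)          # re-orthonormalize the evolved perturbations
    les += np.log(np.abs(np.diag(R)))   # accumulate local expansion rates

print(les / n)   # approximately [ln 4, ln 2] = [1.386, 0.693] nepits per period T
```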

2.3 Lyapunov exponents, the information carried by chaotic signals and the information capacity of Wi-C1

Pesin’s equality relates the positive Lyapunov exponents (LEs) to the information rate of a chaotic trajectory [32]: the sum of positive LEs of a chaotic trajectory is equal to the Kolmogorov–Sinai entropy, denoted \(H_{KS}\) (a kind of Shannon entropy rate), a quantity that is considered to be the physical entropy of a chaotic system. This is always true for chaotic systems that possess the Sinai–Ruelle–Bowen (SRB) measure [33], or more precisely, that have absolutely continuous conditional measures on unstable manifolds. In this work, I have considered a parameter configuration such that the system used to generate chaotic signals is described by the shift map, a hyperbolic map, which has an SRB measure. Therefore, the amount of information transmitted by a user is given by the LE of the system in Eq. (1).

I have demonstrated that linearly composed chaotic signals with different natural frequencies preserve all the positive LEs of the source signals (“Appendix C”). By a chaotic signal, I mean a 1-dimensional scalar time series, or simply a single variable component of a higher-dimensional chaotic trajectory. If the chaotic signals are generated by Eq. (1), their linear composition in Eqs. (11) and (12) is still described by hyperbolic dynamics (possessing an SRB measure), thus leading to a trajectory whose information content is given by the sum of the positive LEs, which happens to be equal to the sum of the LEs of the source signals. So, the information encoding capacity of Wi-C1 in units of nepits per unit of time, denoted by \(\mathcal {C}_e\), when users use the system in Eq. (1) to generate chaotic signals, is given by the sum of the Lyapunov exponents of the source signals:

$$\begin{aligned} \mathcal {C}_e = \sum _k f^{(k)} \beta (\gamma =1)^{(k)} = \sum _k \beta (\gamma )^{(k)} \end{aligned}$$
(15)

where \( f^{(k)}\) and \(\beta (\gamma =1)^{(k)}\) are the natural frequency of the signal and the LE of user k (in units of nepits per period), respectively. By information encoding capacity, I mean the information rate of a signal that is obtained by a linear composition of chaotic signals. If the linear coefficients (power gains) are appropriately chosen (see Sect. 2.4) and noise is sufficiently low (see Sect. 4), then the information encoding capacity of Wi-C1 is equal to the information capacity of Wi-C1, or the total rate of information being received/decoded.
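As a concrete instance, for the two users of Sect. 2.1, with \(f^{(1)}=1\), \(f^{(2)}=2\) and \(\beta (\gamma =1)^{(k)}=\ln {(2)}\), Eq. (15) gives \(\mathcal {C}_e = 1\cdot \ln {(2)} + 2\cdot \ln {(2)} = 3\ln {(2)}\) nepits per period, i.e. 3 bits per period, which is the value recovered by the decoded information rate at low noise levels in Sect. 4.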

It is worth discussing, however, what would be the information capacity of Wi-C1 in case one considers users communicating with chaotic systems other than that described by Eq. (1). My result in “Appendix C” demonstrates that all the positive Lyapunov exponents of the chaotic source signals are present in the spectrum of the linearly composed signal constructed from different chaotic signals (that may have different natural frequencies) generated by the same chaotic system.

Recent work [34, 35] has shown that there is a strong link between the sum of the positive LEs and the topological entropy, denoted \(H_T\), of a chaotic system. The topological entropy measures the rate of exponential growth of the number of distinct orbits, as we consider orbits with growing periods. For Eq. (1), the topological entropy equals its positive LE and its Kolmogorov–Sinai entropy. So, \(H_T = \beta (\gamma )=H_{KS}\) (in units of nepits per unit of time). That is not always the case. Denoting the sum of the positive LEs of a chaotic system by \(\sum ^+ \), one would typically expect that \(H_T \ge H_{KS}\) and moreover that \(\sum ^+ \ge H_{KS}\). However, the recent works in Refs. [34, 35] have shown that there are chaotic systems for which \(H_T = \sum ^+ \).

This work considers that the proposed communication system Wi-C1 has users that use chaotic signals generated by means of controlling (class (i) discussed in Sect. 1), so that the trajectory can represent the desired information to be transmitted. The work in Ref. [9] has shown that the information encoding capacity of a chaotic trajectory produced by control is given by the topological entropy of the non-perturbed system, not by its Kolmogorov–Sinai entropy. Therefore, if only a single user is being considered in the communication (e.g. only one transmitter), and this user generates chaotic signals for which \(H_T = \sum ^+ \), the information encoding capacity of this communication system would be given by \(\sum ^+\).

Let us now discuss the multi-user scenario, still assuming that the users generate their source chaotic signals using systems for which \(H_T = \sum ^+ \). As demonstrated in “Appendix C”, all the positive Lyapunov exponents of chaotic source signals are preserved in a linearly composed signal. Moreover, since the LEs of a chaotic signal are preserved by linear transformations, and since a linear transformation of a signal does not alter its information content, it is suggestive to consider that the information capacity of this multi-user communication system would be given by the sum of the positive LEs of the chaotic source signals of each user. This, however, will require further analysis.

2.4 Preparing the signal to be transmitted (encoding): finding appropriate power gains

To avoid interference, or crossings of false near neighbours, in the received composed signal, so that the symbols \(b^{(1)}\) and \(b^{(2)}\) can be discovered only by observing the 2-dimensional return map of \(O_{n+1} \times O_{n}\), and so that the separation among the branches of the map is maximized to avoid mistakes induced by noise, we need to appropriately choose the power gains \(\tilde{\gamma }^{(k)}\). Looking at the mapping in Eq. (11), the term \(2^{f^{(2)}} O_n\) represents a piecewise linear map with \(2^{f^{(2)}}\) branches. The spatial domain for each piece has a length denoted by \(\zeta (f^{(2)})\). The term \((2^{f^{(2)}} - 2) \tilde{\gamma }^{(1)} u^{(1)}_{n}\), representing the dynamics for the smallest oscillatory frequency, is described by a piecewise linear map with \((2^{f^{(2)}} - 2)\) branches. To avoid interference, the return map for this term must occupy a length \(\zeta (f^{(1)})\) that is fully embedded within the domain of the dynamics representing higher-order frequencies. Assuming that, for a given number of users N, all frequencies \(f^{(i)}\) with \(i=1,\ldots ,N\) are used, this idea can be expressed in terms of an equation where

$$\begin{aligned} \zeta (f^{(i)}) = 2^{f^{(i)}} \zeta (f^{(i-1)}), ~ i=\{2, \ldots , N\}. \end{aligned}$$
(16)

Then, \(\tilde{\gamma }^{(k)} \propto \zeta (f^{(k)})\); for a received map within the interval [0, 1], the values of \(\tilde{\gamma }^{(k)}\) are normalized by

$$\begin{aligned} \tilde{\gamma }^{(k)} = \frac{\zeta (f^{(k)})}{\sum _{i=1}^N \zeta (f^{(i)})}. \end{aligned}$$
(17)

For 2 users (\(N=2\)) and \(\zeta (f^{(1)})=0.2\), the appropriate power gains to be chosen in the encoding phase, which allow for the decomposition (or decoding) of the information content of the composed received signal, are given by \(\tilde{\gamma }^{(1)}=0.2\) and \(\tilde{\gamma }^{(2)}=0.8\). Using these values for \(\tilde{\gamma }^{(1)}\) and \(\tilde{\gamma }^{(2)}\) in Eq. (10) and considering AWGN \(w_n\) with an SNR of 40 dB (with respect to the power of \(O_n\)) produces the return map shown by points in Fig. 2A, with 8 branches all aligned along the same direction (the branches would have the same derivative in the no-noise scenario), which therefore prevents crossings or false near neighbours, and which are also equally separated to avoid mistakes in the decoding of the information due to noise.
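A minimal sketch of this recursion (the function below and the 3-user frequency set are illustrative assumptions) reproduces the gains used here and shows how they extend to more users:

```python
import numpy as np

def power_gains(freqs):
    """Normalized power gains from the recursion of Eq. (16) and the normalization of Eq. (17)."""
    zeta = [1.0]                           # zeta for user 1; the overall scale cancels on normalization
    for f in freqs[1:]:
        zeta.append(2.0 ** f * zeta[-1])   # Eq. (16): zeta grows by 2**f^(i) from one user to the next
    zeta = np.array(zeta)
    return zeta / zeta.sum()               # Eq. (17)

print(power_gains([1, 2]))      # [0.2, 0.8], the gains used in the text
print(power_gains([1, 2, 4]))   # a hypothetical 3-user example with f = (1, 2, 4)
```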

The choice of the power gains for the downlink configuration is similarly done as in the uplink configuration, taking into consideration that each user has its own noise level. This is shown in Sect. III of SM.

3 Decomposing the linear superposition of chaotic signals, and the decoding of signals and their information content

3.1 Decomposition (decoding) by thresholding received signal

Communication based on chaos offers several alternatives for decoding, that is, for the process of obtaining the information conveyed by the received signal. Assuming the received signal is modelled by Eqs. (11) and (12), with the appropriate power gains as in Eq. (17), the optimal 2-dimensional partition to decode the digital information is described by the same map of Eqs. (11) and (12) with a translation. For the case of 2 users in the uplink scenario, this results in a 7-line partition

$$\begin{aligned} O^*_{n+1}(j)&= 4O^*_n(j) - T_j, \nonumber \\ T_j&= \frac{1}{2}\left[ 3 \tilde{\gamma }^{(1)} + (j-1)\tilde{\gamma }^{(2)} \right] , ~ j=\{1,\ldots ,7\}. \end{aligned}$$
(18)

These partition lines for \(\tilde{\gamma }^{(1)}=0.2\) and \(\tilde{\gamma }^{(2)}=0.8\) are shown by the coloured straight lines in Fig. 2A. They allow for the decomposition/decoding of the digital (symbolic) information contained in the composed received signal.
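A minimal decoding sketch based on this partition (the mapping from the region index to the symbol pair is an assumption of this example, chosen to be consistent with the branch ordering implied by Eq. (11)) is:

```python
import numpy as np

g1, g2 = 0.2, 0.8
T_j = 0.5 * (3 * g1 + (np.arange(1, 8) - 1) * g2)   # the seven thresholds of Eq. (18)

def decode_pair(O_n, O_np1):
    """Estimate (b1, b2) from two consecutive samples of the received signal."""
    d = 4 * O_n - O_np1           # effective branch offset; equals 2*g1*u1 + g1*b1 + g2*b2
                                  # in the noiseless case, by Eq. (11)
    m = int(np.sum(d > T_j))      # index of the region between consecutive partition lines, 0..7
    return m % 2, m // 2          # assumed ordering: b1 is the fast index, b2 the slow one

# self-test against the noiseless model of Eqs. (8)-(11)
u1, u2 = 0.137, 0.642
for _ in range(50):
    b1, b2 = int(np.floor(2 * u1)), int(np.floor(4 * u2))
    O_n = g1 * u1 + g2 * u2
    u1, u2 = 2 * u1 - b1, 4 * u2 - b2
    O_np1 = g1 * u1 + g2 * u2
    assert decode_pair(O_n, O_np1) == (b1, b2)
print("all symbol pairs decoded correctly in the noiseless case")
```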

3.2 Decomposition (decoding) by filtering received signal

A more sophisticated approach to decode information is based on a matched filter [11]. Here I show that the system formed by Eq. (1) and its matched filter can be approximately described by the unfolded Baker’s map, a result that allows us to understand that the recovery of the signal sent by a user from the composed signal solely depends on the inverse dynamics of this user. Details of the fundamentals presented in the following can be seen in Sect. IV of SM. If the equations describing the dynamics of the transmitted chaotic signal (in this case Eq. (1)) possess no negative Lyapunov exponents, as shown in Sect. I of SM, attractor estimation of a noise-corrupted signal can be done using its time-inverse dynamics, which is stable and possesses no positive LEs (shown in Sect. V of SM). The forward evolution of the time-inverse dynamics is described by a hybrid system of ODEs obtained by applying the time-rescaling \(\mathrm{d}/\mathrm{d}t^{\prime } = -\mathrm{d}/\mathrm{d}t\) to Eq. (1), resulting in

$$\begin{aligned} \ddot{y}+2\beta \dot{y}+(\omega ^2+\beta ^2)[y-\eta (t)]=0, \end{aligned}$$
(19)

where the variable y represents x in reverse time and, as shown in Sect. IV of SM, if \(\eta (t)\) is defined by \(\dot{\eta }(t)=x(t)-x(t-T)\) (defined as \(\dot{\eta }(t)=x(t+T)-x(t)\) in Ref. [11]), it can be roughly approximated by the symbol s(t).

Take the values of y at discrete times nT, writing \(y(nT)=y_n\), and define new variables for users 1 and 2 as before, \(y^{(1)}_n=2z^{(1)}_n -1\) and \(y^{(2)}_{2n}=2z^{(2)}_n -1\). If Eqs. (8) and (9) are the map solutions of Eq. (1) (in the rescaled coordinate system, with appropriate \(\gamma \) gains) for user k with frequency \(f^{(k)}=k\), their inverse mapping, the map solution of Eq. (19), is given by

$$\begin{aligned} z^{(k)}_{n+1}=2^{-k}\{z^{(k)}_n+\lfloor 2^{k}u^{(k)}_n \rfloor \}, \, \text{ and } \lfloor 2^{k}u^{(k)}_n \rfloor \equiv b^{(k)}_n, \end{aligned}$$
(20)

This map can be derived by simply defining \(z^{(k)}_{n+1} = u^{(k)}_{n}\) and \(z^{(k)}_{n} = u^{(k)}_{n+1}\). We always have that \(\lfloor 2^{k}u^{(k)}_n \rfloor = b^{(k)}_n\). So, for any \(z^{(k)}_n \in [0,1]\), which can simply be chosen to be equal to the received composed signal \(O_n\) (normalized such that \(O_n \in [0,1]\)), it is also true that

$$\begin{aligned} \lfloor 2^{k} z^{(k)}_{n+1} \rfloor = \lfloor 2^{k}u^{(k)}_n \rfloor = b^{(k)}_n. \end{aligned}$$
(21)

So, if we represent an estimation of the transmitted symbol of user k by \(\tilde{b}^{(k)}_n\), then decoding of the transmitted symbol of user k can be done by calculating \(z^{(k)}_{n+1}\) using the inverse dynamics of the user k

$$\begin{aligned} z^{(k)}_{n+1}=2^{-k}\{z^{(k)}_n + \tilde{b}^{(k)}_n\}, \end{aligned}$$
(22)

and applying this value to Eq. (21). This means that the system formed by the variables \(u^{(k)}_{n},z^{(k)}_{n}\) is a generalization (for \(k \ne 1\)) of the unfolded Baker’s map [26], being described by a time-forward variable \(u^{(k)}_{n}\) (the Bernoulli shift for k=1), and its backward variable component \(z^{(k)}_{n}\).

Figure 2B demonstrates that it is possible to extract the signal of a user (user k=2) from the composed signal \(O_n\) (Eq. (10)) by setting, in Eq. (22), \(z^{(2)}_n=O_n\) and \(\tilde{b}^{(k)}_n = b^{(k)}_n\). Even though \(u^{(2)}_n \ne z^{(2)}_n\), the decoding relation in Eq. (21) is satisfied. Therefore, the matched filter that decomposes the source signal of a user from the received composed signal is the matched filter of that user alone.
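A minimal numerical check of this claim (a sketch under the assumption that the inverse dynamics is written in the b-symbol coordinates obtained by substituting \(z^{(k)}_{n+1}=u^{(k)}_n\), \(z^{(k)}_n=u^{(k)}_{n+1}\) into Eq. (9), as in Eq. (22)) is:

```python
import numpy as np

g1, g2 = 0.2, 0.8
u1, u2 = 0.137, 0.642
ok = True
for _ in range(50):
    b1, b2 = int(np.floor(2 * u1)), int(np.floor(4 * u2))
    O_n = g1 * u1 + g2 * u2                  # composed signal of Eq. (10), already in [0, 1)
    z_next = (O_n + b2) / 4.0                # inverse dynamics of user 2 alone, Eq. (22) with k = 2
    ok = ok and (int(np.floor(4 * z_next)) == b2)   # decoding relation of Eq. (21)
    u1, u2 = 2 * u1 - b1, 4 * u2 - b2
print("user 2 symbols recovered from the composed signal:", ok)
```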

Fig. 2

In A, points show the return map of the received signal with \(\tilde{\gamma }^{(1)}=0.2\) and \(\tilde{\gamma }^{(2)}=0.8\), and the lines show the partitions from which received symbols are estimated. Inside the parentheses, the first symbol is from user 2 and the second symbol is from user 1. In B, one sees a solution of the unfolded Baker’s map, where the horizontal axis shows trajectory points from Eq. (9) and the vertical axis trajectory points from Eq. (21), for the user k=2. In C, the capacity C is shown against \(\sum I\), with respect to the signal-to-noise ratio (SNR)

4 Analysis of performance of Wi-C1, under noise constraints

I can now analyse the performance of Wi-C1, for both the uplink and the downlink configurations, for 2 users modelled by Eqs. (8) and (9) with power gains \(\tilde{\gamma }^{(1)}=0.2\) and \(\tilde{\gamma }^{(2)}=0.8\). The information capacity for both users (in bits per iteration, or bits per period) is given by

$$\begin{aligned} C= & {} 0.5\log _2{\left( 1 + \hbox {SNR} \right) }, \end{aligned}$$
(23)

where \(\hbox {SNR} = \frac{P}{P^{w}}\) (expressed in dB in the figures) is the signal-to-noise ratio, the ratio between the power P of the linearly composed signal \(\tilde{\gamma }^{(1)}u_n^{(1)} + \tilde{\gamma }^{(2)}u_n^{(2)}\) (arriving at the BS, in the uplink configuration, or departing from it, in the downlink configuration) and \(P^{w}\), the power of the noise \(w_n\) at the BS (for the uplink configuration) or at the users (for the downlink configuration, assumed to be the same). The total capacity of the communication, denoted by C, is calculated assuming that decoding of users 1 and 2 is done simultaneously from the noise-corrupted received signal \(O_n+w_n\) (see Eq. (10)); thus, decoding of the signal from user 2 does not treat the signal of user 1 as noise.

This capacity has to be compared to the actual rate of information being received at the BS (or at the receivers), quantified by the mutual information \(I(b_n^{(k)};\tilde{b}_n^{(k)})\) between the symbols transmitted (\(b_n^{(k)}\)) and the decoded symbols \(\tilde{b}_n^{(k)}\) estimated by using the partition in Eq. (18), defined as usual by \(I(b_n^{(k)};\tilde{b}_n^{(k)}) = H(b_n^{(k)}) - H(b_n^{(k)}|\tilde{b}_n^{(k)}) \)

where \(H(b_n^{(k)})\) denotes the Shannon entropy of user k, which is equal to the positive LE of user k for \(\beta (\gamma =1) = \ln {(2)}\), and \(H(b_n^{(k)}|\tilde{b}_n^{(k)})\) is the conditional entropy.
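A minimal sketch of how \(I(b_n^{(k)};\tilde{b}_n^{(k)})\) can be estimated from symbol sequences (the plug-in entropy estimator and the binary-symmetric test case below are standard, generic choices, assumed here rather than taken from the paper) is:

```python
import numpy as np

def mutual_information(b, b_tilde):
    """Plug-in estimate of I(b; b_tilde) in bits from two aligned symbol sequences."""
    b, b_tilde = np.asarray(b), np.asarray(b_tilde)
    symbols = np.unique(np.concatenate([b, b_tilde]))
    I = 0.0
    for x in symbols:
        for y in symbols:
            pxy = np.mean((b == x) & (b_tilde == y))
            px, py = np.mean(b == x), np.mean(b_tilde == y)
            if pxy > 0:
                I += pxy * np.log2(pxy / (px * py))
    return I

# generic illustration: a binary stream decoded with a 5% symbol error rate
rng = np.random.default_rng(0)
b = rng.integers(0, 2, 100_000)
flip = rng.random(100_000) < 0.05
b_tilde = np.where(flip, 1 - b, b)
print(mutual_information(b, b_tilde))   # about 1 - H2(0.05) = 0.71 bits per symbol
```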

Figure 2C shows in red squares the full theoretical capacity C and, in black circles, the rate of information decoded, \(\sum I = I(b_n^{(1)};\tilde{b}_n^{(1)}) + I(b_n^{(2)};\tilde{b}_n^{(2)})\), with respect to the SNR. As expected, the information rate received, \(\sum I\), is equal to the information encoding capacity \(\mathcal {C}_e\) that is transmitted (both equal to 3 bits/period) for low noise levels, though smaller than the theoretical limit.

Notice that this analysis was carried out using the map version of the matched filter [11] in Eq. (19), and as such it does not exploit the powerful noise filtering provided by the negativeness of the LEs. Moreover, decoding used the trivial 2D threshold of Eq. (18), and not higher-dimensional reconstructions.

4.1 Comparison of performance of Wi-C1 against NOMA

To cope with the expected demand in 5G wireless communication, non-orthogonal multiple access (NOMA) [36,37,38] was proposed to allow all users to use the whole available frequency spectrum. One of the most popular NOMA schemes allocates different power gains to the signal of each user. A full description of this scheme and its similarities with Wi-C1 is given in Sect. VI of SM.

The key concept behind NOMA is that users’ signals are superimposed with different power gains, and successive interference cancellation (SIC) is applied at the user with the better channel condition, in order to remove the other users’ signals before detecting its own signal [39]. In Wi-C1, as in NOMA, power gains are also applied to construct the linear superposition of signals. But in this work, I assume that the largest power gain is applied to the user with the largest frequency. Moreover, in this work, I have not done successive interference cancellation, since the information from all the users is simultaneously recovered by the thresholding technique, using the trivial 2D threshold of Eq. (18).

Comparison of the performance of Wi-C1 and NOMA is done considering the work in Ref. [40], which analysed the performance of NOMA for two users in the downlink configuration under partial channel knowledge. Partial channel knowledge means, in rough terms, that the “amplitude” of the signal arriving at a user from the BS is incorrectly estimated. In this sense, I have considered in Wi-C1 perfect channel knowledge, since my simulations in Fig. 2C based on Eq. (10) assume that \(\gamma ^{(k)}=\frac{1}{\alpha ^{(k)}}\) to compensate for the amplitude decay \(\alpha ^{(k)}\) in the physical medium (see Eq. (5)). More precisely, partial channel knowledge means that a Gaussian distribution describing the signal amplitudes departing from a user has a variance that decreases inversely with a power-law function of the distance between that user and the BS. The variance of the error of this estimation is denoted by \(\sigma _{\epsilon }\), an important parameter for understanding the results in Ref. [40]. Partial channel knowledge impacts the optimal SIC performed for the results in Ref. [40]. Recall again that, for Wi-C1, no SIC is performed.

In Fig. 3, the curve for \(\sum I\) (the rate of decoded information) in Fig. 2C is plotted in red circles and compared with data shown in Fig. 3 of Ref. [40] for the quantity “average sum rate”, where each dataset considers a different channel configuration. Blue down triangles show the quantity “average sum rate” for perfect channel knowledge (\(\sigma _{\epsilon }=0\)), and black squares represent the same quantity for partial channel knowledge (\(\sigma _{\epsilon }=0.0005\)). The data points in Fig. 3 of Ref. [40] were extracted by a digitization process. The quantity \(\sum I\) for Wi-C1 in Fig. 2C, in units of bits per period (or channel use), is compared with the quantity “average sum rate” (whose unit was given in bits per second per Hz) by assuming the period of signals in Ref. [40] is 1 s. The average value obtained in Ref. [40] takes into consideration Monte Carlo simulations of several configurations for 2 users that are uniformly distributed in a disk with the BS located at the centre.

Fig. 3

Red circles show \(\sum I\), blue down triangles show the average sum rate for partial channel knowledge (\(\sigma _{\epsilon }=0.0005\)), and black squares show the average sum rate for perfect channel knowledge, with respect to the signal-to-noise ratio (SNR). (Color figure online)

The results in Fig. 3 show that Wi-C1 has similar performance in terms of bit rate at 0 dB, better performance than NOMA (with respect to the average sum rate) for SNR \(\in ]0,30]\) dB under perfect channel knowledge, and better performance than NOMA (with respect to the average sum rate) for SNR \(\in ]0,40[\) dB under partial channel knowledge.

One needs to take into consideration that this outstanding performance of Wi-C1 against NOMA is preliminary and requires a deeper analysis, which is out of the scope of the present work.

5 Other non-ideal physical media

Previous sections of this work have tackled with great rigour and detail how chaotic signals are affected when travelling through a medium that presents non-ideal properties such as multi-path propagation, noise and chaotic interference (linear superposition), and how this impacts the proposed communication system. This section is dedicated to a conceptual discussion, with some mathematical support, of how chaotic signals and their information content are transformed by physical channels with other non-ideal properties (dispersion and interference with periodic signals), and how this impacts the proposed multi-user communication system.

For the following analysis, I will neglect the existence of multiple indirect paths of propagation and will consider that only the direct path contributes to the transmission of information, so \(L=1\). I will consider the uplink scenario, where users transmit to a BS. I will initially focus the analysis on the impact of the non-ideal physical medium on the signal of a single user, in particular the effect of the medium on the received discrete signal described by Eq. (3) and its Lyapunov exponent (LE), and will then briefly discuss the impact of the non-ideal medium on a communication configuration with multiple users.

5.1 Physical media with dispersion

Physical media with dispersion are those in which waves have their phase velocity altered as a function of the frequency of the signal. However, a dispersive medium does not alter the frequency of the signal, and therefore it does not alter its natural period, only its propagation velocity. As a consequence, the LEs of any arbitrary chaotic signal travelling in a dispersive medium are not modified. The information carried by this chaotic signal would also not be altered if it were generated by Eq. (1), or by a system whose chaotic trajectory possesses an SRB measure, or whose topological entropy \(H_{T}\) equals the sum of the positive LEs.

However, the travel time of a signal to arrive at the BS along the direct path, \(\tau _0\), is altered. This can impact the ability to decode, as can be seen from Eq. (3). Suppose that the travel time of user k, given by \(\tau ^{(k)}_0\), increases from 0 (as in the previous derivations) to a finite value that is still smaller than the period of that user, \(T^{(k)}\), so that \(n^{\prime }=n\) for that user. But \(\mathcal {K}_0\) would be different from 1, and as a consequence, the return map of the received signal would contain a term that is a function of the symbol \(s_{n+1}\). Extracting the symbols from the received discrete signal (decoding) would have to take into consideration this extra symbol, which represents a symbol 1 iteration (or period) in the future. Decoding the symbol \(s_n\) from the received signal would require knowledge of the symbol \(s_{n+1}\). So, to decode what is being received at a given moment in the present would require knowledge of the symbol that has just been sent. To circumvent this limitation, one could first send a dummy symbol known by both the transmitter and the receiver at the BS, and use it to decode the incoming symbol \(s_n\), which then could be used to decode \(s_{n-1}\), and so on. Noise could impact the decoding. Every new term that appears in Eq. (3) results in a new branch for this map. With noise, a branch in the return map that appears due to the symbol \(s_{n+1}\) could be misinterpreted as a branch for the symbol \(s_n\), causing errors in the decoding.

In a multi-user scenario, dispersion would only contribute to change the time delays \(\tau ^{(k)}_l\) for each user for each propagating path. As discussed, this will not affect the LEs of the source chaotic signals. Moreover, as demonstrated, the LEs of the source signals should be preserved by the linearly composed signal arriving at the BS, suggesting that the information encoding capacity given by Eq. (15) in the multi-user scenario could also be preserved for the systems for which \(H_T=\sum ^+\) or \(\sum ^+ = H_{KS}\) (as discussed in Sect. 2.3). Noise would, however, increase the chances of mistakes in the decoding of a multi-user configuration, thus impacting on the information capacity of the communication, since branches in the mapping of the received signal could overlap. At the overlap, one cannot discern which symbol was transmitted.

5.2 Physical media with interfering periodic (non-chaotic) signals

This case could be treated as a chaotic signal that is modulated by a periodic signal. Assuming no amplitude attenuation, the continuous signal arriving at the BS from user k can be described by

$$\begin{aligned} r^{(k)}(t) = x(t) + A\sin {(2\pi f_p t + \phi _0)} \end{aligned}$$
(24)

where \(f_p\) represents the frequency of the periodic signal and \(\phi _0\) its initial constant phase. Here, I analyse the simplest case, when \(f_p=f^{(k)}\), in which the discrete-time signal arriving at the BS at times \(t=nT\) from user k receives a constant contribution \(c^{(k)} = A\sin {(2\pi n + \phi _0)}\) due to the interfering periodic signal. If \(r^{(k)}_n\) and \(r^{(k)}_{n+1}\) denote the discrete-time signals arriving at the BS without periodic interference from user k at discrete times \(t=nT\) and \(t=(n+1)T\), respectively, then \(\tilde{r}^{(k)}_n\) and \(\tilde{r}^{(k)}_{n+1}\) described by

$$\begin{aligned} \tilde{r}^{(k)}_n= & {} r^{(k)}_n + c^{(k)} \end{aligned}$$
(25)
$$\begin{aligned} \tilde{r}^{(k)}_{n+1}= & {} r^{(k)}_{n+1} + c^{(k)} \end{aligned}$$
(26)

would represent the discrete time signals arriving at times \(t=nT\) and \(t=(n+1)T\) at the BS, respectively, after suffering interference from the periodic signal. Substituting these equations into the mapping in Eq. (3) would allow us to derive a mapping for the signal with interference

$$\begin{aligned} \tilde{r}^{(k)}_{n+1} = e^{\frac{\beta }{f}} \tilde{r}^{(k)}_n - (e^{\frac{\beta }{f}} -1)(c^{(k)} + \alpha _0 s_n). \end{aligned}$$
(27)

As expected, adding a constant term to a chaotic map does not alter its LE given by \(\frac{\beta }{f}\). Consequently, the information encoding capacity of this chaotic signal is also not altered, since it is generated by Eq. (1).

This constant addition results in a vertical displacement of the map by the constant value \(-(e^{\frac{\beta }{f}} -1) c^{(k)}\). So, added noise in the received signal with interference would not have a larger impact than noise in the signal without interference.
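A quick numerical check of Eq. (27) (parameter values are assumptions of this example) confirms that the interfered signal follows the same map, with its branches shifted by the constant \(-(e^{\beta /f}-1)c^{(k)}\):

```python
import numpy as np

E = 2.0                            # e^{beta/f} with beta/f = ln 2
alpha0, A, phi0 = 1.0, 0.15, 0.7   # direct-path attenuation and interference parameters (assumed)
c = A * np.sin(phi0)               # constant contribution A*sin(2*pi*n + phi0) of the interference

r = 0.2345                         # noiseless received signal without interference
for _ in range(10):
    s = 1.0 if r >= 0 else -1.0
    r_next = E * r - (E - 1.0) * alpha0 * s           # single-path map, Eq. (3) with L = 1
    lhs = r_next + c                                  # interfered signal at n+1, Eq. (26)
    rhs = E * (r + c) - (E - 1.0) * (c + alpha0 * s)  # Eq. (27) applied to the interfered signal at n
    assert abs(lhs - rhs) < 1e-12
    r = r_next
print("Eq. (27) verified; branch shift =", -(E - 1.0) * c)
```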

In a multi-user scenario, LEs of the linearly composed signal arriving at the BS should preserve all the LEs of the source chaotic signals, suggesting that the information encoding capacity in the multi-user scenario could also be preserved, for signals being generated by the chaotic systems discussed in Sect. 2.3. Noise would, however, increase the chances of mistakes in the decoding of a multi-user configuration, thus impacting on the information capacity of the communication, since for each user the branches of the mapping describing the received signal would be vertically shifted by a different constant, resulting in branches of the received signal that overlap. At the overlap, one cannot discern which symbol was transmitted.

6 Conclusions

In this work, I show with mathematical rigour that a linear superposition of chaotic signals with different natural frequencies fully preserves the spectra of Lyapunov exponents and the information content of the source signals. I also show that if each source signal is tuned with appropriate linear coefficients (or power gains), successful decomposition of the source signals and their information content out of the composed signal is possible. Driven by today’s huge demand for data, there is a desire to develop wireless communication systems that can handle several sources, each using different frequencies of the spectrum. As an application of this wonderful decomposability property that chaotic signals have, I propose a multi-user and multi-frequency communication system, Wi-C1, where the encoding phase (i.e. the preparation of the signal to be transmitted through a physical medium) is based on the correct choice of the linear coefficients, and the decoding phase (i.e. the recovery of the transmitted signals and their information content) is based on the decomposition of the received composed signal.

The information encoding capacity of Wi-C1, or the information rate of a signal that is obtained by a linear composition of chaotic signals, is demonstrated to be equal to the sum of positive Lyapunov exponents of the source signals of each user. If linear coefficients (power gains) are appropriately chosen, and noise is sufficiently low, then the information encoding capacity of Wi-C1 is equal to the information capacity of Wi-C1, or the total rate of information being received/decoded.

Further improvement of the rate of information could be achieved by adding more transmitters (or receivers), at the expense of reliability. One could also consider similar ideas as in [3, 4], which would involve more post-processing, at the expense of weight. Post-processing would involve repeatedly resetting the initial conditions in Eq. (20) and then using the inverse dynamics up to some specified number of backward iterations to estimate the past of \(u^{(k)}_n\). One could even think of constructing stochastic resonance detectors to extract the information of a specific user from the received composed signal [41]. These proposed analyses for the improvement of performance in speed, weight and reliability of the communication are out of the scope of this work.

I have compared the performance of Wi-C1 with a non-chaotic communication method that is the strongest candidate for the future 5G networks, non-orthogonal multiple access (NOMA), and have shown that Wi-C1 can communicate at higher bit rates for large noise levels in the channel.

The last section of this paper is dedicated to a conceptual discussion, with some mathematical support, of how chaotic signals and their information content are transformed by physical channels with other non-ideal properties (dispersion and interference with periodic signals), and of how this impacts the proposed multi-user communication system.