1 Introduction

In recent years, orthogonal frequency division multiplexing (OFDM) coding has been implemented in an enormous number of telecommunication systems. The method was developed more than 40 years ago [2, 13, 14], but for many years it was not used extensively, probably partly due to technical limitations. Currently it is exploited in systems such as wireless computer networks, mobile communication networks, terrestrial digital television, and optical fiber networks. The idea of the method is to encode the information on a plurality of subcarriers which are orthogonal on the time interval corresponding to a single OFDM symbol. This results in immunity to destructive interference in multipath propagation, which is greatly appreciated in indoor environments or cities, but it also involves specific problems, e.g., a high peak-to-average power ratio and sensitivity to nonlinear distortions. In particular, one source of nonlinear distortions can be the digital-to-analog (D/A) or analog-to-digital (A/D) converter.

The process of D/A or A/D conversion of an OFDM signal generally involves some inevitable signal degradation [3, 7, 9]. It can be analyzed and characterized by means of numerical simulations, as in [4, 8], but although this yields precise results for the analyzed cases, it obviously does not provide knowledge of the system’s behavior in general. For a wider understanding of the subject it is preferable to use an analytic model, showing explicitly the relations in the system. Such models have already been developed for signal clipping [6], quantization noise [10], and their joint effects [1, 5], and one important resulting conclusion was that degradation due to quantization noise can be reduced by oversampling the signal. However, little is still known about nonlinear distortions in the conversion process, which may be particularly important for systems using high-order modulations with constellations densely populated by symbols. For this reason, the aim of this paper is to provide a rigorous, analytic model of pseudo-random nonlinear distortions based on concepts described in [11, 12]. These two papers laid the foundations for this theoretical model by introducing the idea of the effective number of samples of a digital (discrete and quantized) signal, and in this paper this quantity is rigorously related to the degradation of the signal. The derived model shows that, like quantization noise, pseudo-random nonlinear distortions can be limited by oversampling, but that this mitigation is less efficient.

The organization of the paper is as follows. The definitions and assumptions used for the construction of the model are presented in Sect. 2. Section 3 briefly discusses the possibility of decreasing degradation by quantization noise by oversampling the signal. The assumed decomposition of nonlinear distortions into deterministic and pseudo-random parts is introduced in Sect. 4. Further, Sect. 5 presents the derivation of the transition probability for the OFDM signal, Sect. 6 defines the effective number of samples of the signal and derives an expression for this quantity, and Sect. 7 provides the theoretical description of the signal’s degradation by pseudo-random nonlinear distortions. Finally, Sect. 8 summarizes and concludes the paper.

2 Definitions and Assumptions

The subject of the considerations here is a real-valued digital signal with OFDM coding, converted by a D/A or A/D converter with a resolution of n bits. The signal is formed by discretization and quantization of an ideal OFDM symbol, given by a superposition of K modulated subcarriers:

$$ x ({t} ) = \sum_{k=1}^K A_k \cos ({\omega_k t + \phi_k} ). $$
(1)

In general, individual subcarriers represent symbols from arbitrary constellations, defined by amplitudes A k and phases ϕ k , constant during the whole time interval T S of the OFDM symbol. For simplicity, it is assumed that all A k and ϕ k are independent random variables, that the amplitudes \(A_{k} \in\mathbb{R}_{+}\) have the same probability distribution, and that each ϕ k satisfies \(\langle {\mathrm{e}^{{\,\mathrm{j}}\phi_{k}}}\rangle = \langle {\mathrm{e} ^{2{\mathrm{j} }\phi_{k}}}\rangle = 0\). These assumptions hold for the commonly used quadrature modulations, like M-QAM and M-PSK with M≥4 (for other constellations which do not meet the last assumption, like BPSK, some corrections of the numerical factors will be needed, but the presented derivations remain essentially valid). The OFDM symbol is assumed to have a mean value equal to 0, and the angular frequencies are chosen as

$$ \omega_k = k \omega_1, \quad\text{with}\ \omega_1 = \frac{2\pi }{T_\mathrm{S}}; $$
(2)

thus the modulated subcarriers are orthogonal on the time interval of the OFDM symbol. With these assumptions, the mean square \(\langle {A_{k}^{2}}\rangle \) is the same for each k, and so the mean power of the OFDM symbol is

$$ \sigma^2 = K \frac{\langle {A_k^2}\rangle }{2}. $$
(3)

The discretized form of the OFDM symbol is the sequence of N S samples

$$ x_i\equiv x ({iT} ), \quad i = 0, 1, \ldots, N_\mathrm{S}-1, $$
(4)

with i indexing consecutive samples and T=T S/N S being the sampling period. The discretization density (oversampling rate) is determined by the ratio N S/K, assumed to satisfy the Nyquist criterion: N S/K≥2. It follows from the central limit theorem that the signal x i can be well approximated by a Gaussian stochastic process with independent samples, taking value x with probability density determined by the mean power (3):

$$ \mathcal{P} ({x_i = x} ) = \frac{1}{\sqrt{2\pi\sigma ^2}} \exp \biggl({- \frac{x^2}{2\sigma^2}} \biggr). $$
(5)

Digital devices represent numbers with finite precision. In particular, the considered converter with a resolution of n bits requires rounding (quantization) of the x i values to integers from the set \(\mathbb{Z}_{n}= \{-2^{n-1},\ldots ,2^{n-1}-1 \}\), further called “levels.” According to the above description, the digital representation of the OFDM symbol is the sequence of samples

$$ x^{\mathrm{q}}_i = x_i + \varDelta^\mathrm{q}_i, \quad\text {with}\ x^{\mathrm{q}}_i \in \mathbb{Z}_n: \bigl \vert {x^{\mathrm{q}}_i -x_i}\bigr \vert = \min_{\xi\in\mathbb {Z}_n} \vert {\xi- x_i}\vert , $$
(6)

where \(\varDelta^{\mathrm{q}}_{i}\) is the quantization error (noise).

The highest and lowest values of the signal are limited, and hence clipping of the signal occurs [1, 5, 6]. It is convenient to relate the power of the signal to the converter’s dynamic range (clipping level) with the help of the coefficient

$$ \alpha= \frac{2^{n-1}}{\sigma}. $$
(7)

A reasonable practical value is α≈4. It is further assumed here that clipping is insignificant compared with the other degrading factors.
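As an illustration of the above definitions, the short sketch below generates one digital OFDM symbol according to (1)-(7). It is only a minimal numerical example: Python with NumPy, the parameter values K=64, N S=256, n=8, α=4, and the choice of equal unit amplitudes with QPSK-like phases are assumptions made here for concreteness, not part of the model.

```python
import numpy as np

def digital_ofdm_symbol(K=64, NS=256, n=8, alpha=4.0, rng=None):
    """Sketch of Eqs. (1)-(7): one real OFDM symbol, sampled and quantized.

    Illustrative assumptions (not from the model itself): equal unit
    amplitudes A_k, QPSK-like phases, and NumPy as the numerical backend.
    """
    rng = np.random.default_rng() if rng is None else rng
    phi = rng.integers(0, 4, size=(K, 1)) * np.pi / 2 + np.pi / 4   # <e^{j phi}> = <e^{2j phi}> = 0
    A = np.ones((K, 1))                                             # equal amplitudes A_k
    sigma = np.sqrt(K * np.mean(A**2) / 2)                          # Eq. (3), before scaling
    scale = (2**(n - 1) / alpha) / sigma                            # enforce alpha = 2^(n-1)/sigma, Eq. (7)
    i = np.arange(NS)
    k = np.arange(1, K + 1)[:, None]
    # Eq. (1) sampled at t = i*T, so that omega_k*t = 2*pi*k*i/N_S (Eqs. (2) and (4)).
    x = scale * (A * np.cos(2 * np.pi * k * i / NS + phi)).sum(axis=0)
    # Eq. (6): rounding to the nearest level, limited to Z_n (clipping).
    xq = np.clip(np.rint(x), -2**(n - 1), 2**(n - 1) - 1).astype(int)
    return x, xq

x, xq = digital_ofdm_symbol()
print(x.std() * 4 / 2**7)   # close to 1, since the rms value was scaled to 2^(n-1)/alpha with alpha = 4
```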

In real converters the levels differ from ideal values, and the differences are characterized by so-called differential and integral nonlinearity [9]. This causes nonlinear distortions of the converted signal. If the error for level p is Δ(p), then the signal with nonlinear distortions is

$$ y_i = x^{\mathrm{q}}_i + \varDelta _\mathrm {}\bigl(x^{\mathrm{q}}_i \bigr), $$
(8)

where the sequence \(\varDelta _{\mathrm{}}({x^{\mathrm{q}}_{i} } )\) depends on the imperfections of the converter and also on the converted signal itself. This has considerable consequences, qualitatively different from quantization, which result from the fact that, unlike quantization noise, consecutive samples of the nonlinear distortions \(\varDelta _{\mathrm{}}({x^{\mathrm{q}}_{i} } )\) can have the same value with nonzero probability.

3 Quantization Noise

It is known that (using the introduced notation) if σ≫1, then the quantization noise \(\varDelta^{\mathrm{q}}_{i}\) is very well approximated by white noise with a uniform distribution over the interval \([-\frac{1}{2},\frac{1}{2} ]\), i.e., of length equal to the level separation [15]. In this case, essentially by definition, the whole power of the quantization noise

$$ \bigl \langle { \bigl({\varDelta^\mathrm{q}_i} \bigr)^2}\bigr \rangle = \frac{1}{12} $$
(9)

spreads equally over all the N S samples of the spectrum, of which only 2K correspond to the signal. Thus, the total power of the quantization noise in the signal band is

$$ \sigma_\mathrm{q}^2 = \frac{2K}{N_\mathrm{S}} \bigl \langle { \bigl({\varDelta^\mathrm {q}_i} \bigr)^2}\bigr \rangle = \frac{1}{6} \frac {K}{N_\mathrm{S}}. $$
(10)

Hence, the signal-to-noise ratio for quantization noise,

$$ \mathrm{SNR}_\mathrm{q}= \frac{\sigma^2}{\sigma_\mathrm{q}^2} = \frac{3}{2} \biggl({ \frac {2^n}{\alpha}} \biggr)^2 \frac{N_\mathrm{S}}{K}, $$
(11)

depends directly on the oversampling factor N S/K, the scaling factor α, and the converter’s resolution n. In particular, this means that quantization noise can be reduced by increasing the number of signal samples, and that the improvement of SNRq is simply proportional to that increase.
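A minimal sketch of relation (11), assuming Python with NumPy and illustrative parameter values, shows the resulting gain of about 3 dB per doubling of the oversampling factor:

```python
import numpy as np

def snr_q_db(n, alpha, oversampling):
    """Eq. (11): quantization-noise-limited SNR in dB vs. the oversampling factor N_S/K."""
    return 10 * np.log10(1.5 * (2**n / alpha)**2 * oversampling)

for r in (2, 4, 8):                      # doubling N_S/K should add about 3 dB
    print(r, round(float(snr_q_db(n=8, alpha=4.0, oversampling=r)), 2))
```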

4 Decomposition of Nonlinear Distortions

The total error of a converter level can result from a constant offset and non-unity gain, which shift and change the slope of the converter’s transfer characteristic, and from differential and integral nonlinearities [7, 9]. Offset and non-unity gain cause no nonlinear distortions and can be relatively easily corrected; thus, these two error sources are ignored in the following. The remaining error results from the nonlinearities and can be decomposed into two components:

$$ \varDelta _\mathrm {}({p} ) = \varDelta _\mathrm {d} ({p} ) + \varDelta _\mathrm {s} ({p} ). $$
(12)

The first component Δ d(p) is the slowly varying or deterministic part of the nonlinear distortions. It can be regarded as a systematic or regular deviation of the converter’s transfer characteristic, approximately constant within adjacent levels, i.e., related mainly to the integral nonlinearity. The second component Δ s(p) is the quickly varying, pseudo-random or stochastic term, representing irregular deviations without any clearly visible pattern. It is defined to have vanishing mean value:

$$ \bigl \langle {\varDelta _\mathrm {s} ({p} )}\bigr \rangle = 0 $$
(13)

and no correlation between errors of particular levels:

$$ \bigl \langle {\varDelta _\mathrm {s} ({p} )\varDelta _\mathrm {s} \bigl({p'} \bigr)}\bigr \rangle = \varDelta _\mathrm {s}^2 \delta_{pp'}. $$
(14)

The stochastic part is naturally associated with the differential nonlinearity of the A/D or D/A converter. The model derived in this paper takes this source of degradation into account.

Consecutive samples of pseudo-random nonlinear distortions corresponding to digital signal \(x^{\mathrm{q}}_{i} \) form a stochastic process that can be written in the form

$$ \varDelta _\mathrm {s} \bigl({x^{\mathrm{q}}_i } \bigr) = \sum _{p\in\mathbb {Z}_n} \varDelta _\mathrm {s} ({p} ) \delta_{px^{\mathrm{q}}_i }. $$
(15)

The autocorrelation function of this stochastic process is not trivial, despite condition (14). In fact, using (15) and then (14) one obtains

$$ R_i = \bigl \langle {\varDelta _\mathrm {s} \bigl({x^{\mathrm{q}}_0} \bigr) \varDelta _\mathrm {s} \bigl({x^{\mathrm{q}}_i } \bigr)}\bigr \rangle = \varDelta _\mathrm {s}^2 \langle {\delta_{x^{\mathrm{q}}_0x^{\mathrm{q}}_i } }\rangle . $$
(16)

Obviously, for i=0 the delta is equal to 1 and hence \(R_{0} = \varDelta _{\mathrm{s}}^{2}\). For i≠0 and uncorrelated samples with values from a continuous set, the above delta would determine a zero-measure subset of the probability space; hence, R i =0 for i≠0. This is the case for quantization noise. However, the samples of a digital signal belong to the discrete set \(\mathbb{Z}_{n} \); thus, the probability that they are equal is greater than zero, and the autocorrelation function of samples of pseudo-random nonlinear distortions is more complicated. Its calculation is based on the derivation of the transition probability for the OFDM signal, which is presented in the next section.
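The effect of value repetitions on the correlation of consecutive distortion samples can also be observed numerically. The following sketch (Python with NumPy is assumed; the rms level error of 0.05 LSB and the remaining parameter values are illustrative choices, not taken from the model) draws random level errors satisfying (13) and (14), builds the process (15) along simulated OFDM symbols, and estimates the first few values of (16) by averaging:

```python
import numpy as np

rng = np.random.default_rng(0)
K, NS, n, alpha, trials = 64, 256, 8, 4.0, 400
sigma = 2**(n - 1) / alpha
levels = np.arange(-2**(n - 1), 2**(n - 1))
k = np.arange(1, K + 1)[:, None]
i = np.arange(NS)

acc = np.zeros(8)
for _ in range(trials):
    # Digital OFDM symbol, Eqs. (1)-(7); unit amplitudes and QPSK-like phases assumed.
    phi = rng.integers(0, 4, size=(K, 1)) * np.pi / 2 + np.pi / 4
    x = (sigma / np.sqrt(K / 2)) * np.cos(2 * np.pi * k * i / NS + phi).sum(axis=0)
    xq = np.clip(np.rint(x), -2**(n - 1), 2**(n - 1) - 1).astype(int)
    # Pseudo-random level errors Delta_s(p): zero mean, uncorrelated across levels, Eqs. (13)-(14).
    delta_s = rng.normal(0.0, 0.05, size=levels.size)
    dist = delta_s[xq + 2**(n - 1)]                 # Eq. (15): one distortion sample per signal sample
    acc += np.array([np.mean(dist * np.roll(dist, -lag)) for lag in range(8)])

R = acc / trials                                    # empirical autocorrelation, Eq. (16)
print(R / R[0])                                     # R_1/R_0 > 0, because sample values repeat
```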

5 Transition Probability for Digital OFDM Signal

The transition probability of a signal is defined as the probability (or probability density in the continuous case) of observing given values at two samples separated by a given delay. Exploiting stationarity, the transition probability for the digital signal \(x^{\mathrm{q}}_{i} \) can be defined as

$$ \mathcal{P}^{\mathrm{q}}_i \bigl(p,p'\bigr) = \mathcal{P} \bigl(x^{\mathrm{q}}_0 = p \wedge x^{\mathrm{q}}_i = p' \bigr). $$
(17)

For the analog (with continuous values) case, i.e., signal x i prior to quantization, the transition probability density is similarly defined:

$$ \mathcal{P}_i(p_0, p_1) = \mathcal{P} ({x_0 = p_0 \wedge x_i = p_1} ). $$
(18)

These two functions are related by the obvious formula

$$ \mathcal{P}^{\mathrm{q}}_i (p,p') = \int _{p-\frac{1}{2}}^{p+\frac{1}{2}}\mathrm{d} {p_0}\, \int _{p'-\frac {1}{2}}^{p'+\frac{1}{2}}\mathrm{d} {p_1}\, \mathcal{P}_i(p_0, p_1). $$
(19)

The probability of the conjunction can be expressed by means of a conditional probability; hence,

$$ \mathcal{P}_i(p_0, p_1) = \mathcal{P} ({{x_i = p_1}\vert x_0 = p_0} ) \mathcal{P} ({x_0 = p_0} ). $$
(20)

The factor \(\mathcal{P} ({x_{0} = p_{0}} )\) is given by (5), while \(\mathcal{P} ({{x_{i} = p_{1}}\vert x_{0} = p_{0}} )\) can be found starting from the definition

$$ w ({t} ) = x ({t} ) - x_0 = \sum_{k=1}^K \mathrm{Re} \bigl\{{A_k \mathrm{e}^{{\,\mathrm{j}}\phi_k} \bigl({\mathrm{e} ^{{\,\mathrm{j}}\omega _k t} - 1 } \bigr) } \bigr\}. $$
(21)

Then \(\mathcal{P} ({ {x_{i} = p_{1}}\vert x_{0} = p_{0}} ) = \mathcal{P} ({{w ({iT} ) = p_{1} - p_{0}}\vert x_{0} = p_{0}} )\). Because w(iT) is a linear combination of multiple independent random variables, the central limit theorem states that its probability distribution can be well approximated by the Gaussian

$$ \mathcal{P} \bigl({w ({iT} ) = x} \bigr) = \frac {1}{\sqrt{2 \pi \sigma _{w{i}}^2}} \exp \biggl({-\frac{x^2}{2 \sigma _{w{i}}^2}} \biggr), $$
(22)

with variance

$$ \sigma _{w{i}}^2 = \bigl \langle {w ({iT} )^2}\bigr \rangle = 2 \sigma^2 \biggl({1 - \frac{ \cos\frac{ ({K+1} )\pi i}{N_\mathrm {S}} \sin \frac{K \pi i}{N_\mathrm{S}} }{ K \sin\frac{\pi i}{N_\mathrm{S}} } } \biggr) $$
(23)

(for i=1 it will be abbreviated \(\sigma _{w{}}\equiv \sigma _{w{1}}\)). Note that \(\lim_{i\to0} \sigma _{w{i}}= 0\), as it should follow from the definition of w(t). Ignoring any statistical dependence between w(iT) and x 0, the sought conditional probability \(\mathcal{P} ({ {w ({iT} ) = p_{1} - p_{0}}\vert x_{0} = p_{0}} ) \approx\mathcal{P} ({w ({iT} ) = p_{1} - p_{0}} )\); then

$$ \mathcal{P}_i(p_0, p_1) \approx\frac{1}{2 \pi \sigma _{w{i}} \sigma} \exp \biggl({-\frac{ ({p_1 - p_0} )^2}{2 \sigma _{w{i}}^2}} \biggr) \exp \biggl({-\frac{p_0^2}{2\sigma^2}} \biggr) $$
(24)

and

$$ \mathcal{P}^{\mathrm{q}}_i (p,p') \approx\frac{1}{2 \pi \sigma _{w{i}} \sigma} \int _{p-\frac{1}{2}}^{p+\frac{1}{2}}\mathrm{d} {p_0}\, \int _{p'-\frac{1}{2}}^{p'+\frac{1}{2}}\mathrm{d} {p_1}\, \exp \biggl({-\frac{ ({p_1 - p_0} )^2}{2 \sigma _{w{i}}^2}} \biggr) \exp \biggl({-\frac{p_0^2}{2\sigma^2}} \biggr). $$
(25)

This result can be rewritten using the error function

$$ \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} \mathrm {d} {t}\, \mathrm{e}^{-t^2}, $$
(26)

and because σ≫1, the interval of integration over p 0 is very narrow, so it is a good approximation to treat the integrand as constant within these limits and to evaluate it at p 0=p. Then eventually,

$$ \mathcal{P}^{\mathrm{q}}_i (p,p') \approx\frac{1}{2\sqrt{2\pi\sigma^2}} \exp \biggl({-\frac{p^2}{2 \sigma^2}} \biggr) \biggl[{\operatorname{erf}\biggl(\frac{p'-p+\frac{1}{2}}{\sigma _{w{i}}\sqrt{2}}\biggr) - \operatorname{erf}\biggl(\frac{p'-p-\frac{1}{2}}{\sigma _{w{i}}\sqrt{2}}\biggr) } \biggr]. $$
(27)

Thus, the signal value is the same for both samples with probability

$$ \mathcal{P}^{\mathrm{q}}_i (p, p) \approx \frac{1}{\sqrt{2\pi\sigma^2}} \exp \biggl({-\frac{p^2}{2 \sigma ^2}} \biggr) \operatorname{erf}\biggl( \frac{1}{2 \sigma _{w{i}}\sqrt{2}}\biggr). $$
(28)

Note that within the used approximations

$$ \mathcal{P}^{\mathrm{q}}_i (p, p) \approx \frac{ \operatorname{erf}(\frac{1}{2 \sigma _{w{i}}\sqrt{2}}) }{ \operatorname{erf}(\frac{1}{2 \sigma _{w{}}\sqrt{2}}) } \mathcal {P}^{\mathrm{q}}_1(p, p) $$
(29)

and

$$ \mathcal{P}^{\mathrm{q}}_0(p, p) \approx \frac{1}{\sqrt{2\pi\sigma^2}} \exp \biggl({-\frac{p^2}{2 \sigma ^2}} \biggr) = \mathcal{P} ({x_i = p} ). $$
(30)

Assuming that summation over levels can be replaced by integration, the total probability \(\sum_{p} \mathcal{P}^{\mathrm{q}}_{0}(p, p) \approx1\), meaning that the derived expression is quite accurate.
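The result (28), summed over the levels p (with the summation replaced by integration, as in the next section), gives the probability that two consecutive quantized samples take the same value. The following Monte-Carlo sketch (Python with NumPy assumed; unit amplitudes, QPSK-like phases, and the listed parameter values are illustrative) estimates this probability directly and compares it with the derived expression:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)
K, NS, n, alpha, trials = 64, 256, 8, 4.0, 20000
sigma = 2**(n - 1) / alpha

# Monte Carlo: probability that two consecutive samples round to the same level,
# i.e. the sum over p of P^q_1(p, p).
phi = rng.integers(0, 4, size=(trials, K)) * np.pi / 2 + np.pi / 4
k = np.arange(1, K + 1)
scale = sigma / np.sqrt(K / 2)
x0 = scale * np.cos(phi).sum(axis=1)                       # x(0) for each trial
x1 = scale * np.cos(2 * np.pi * k / NS + phi).sum(axis=1)  # x(T), i.e. i = 1
p_equal_mc = np.mean(np.rint(x0) == np.rint(x1))

# Theory: Eq. (28) summed over levels (clipping neglected).
sigma_w2 = 2 * sigma**2 * (1 - np.cos((K + 1) * np.pi / NS)
                           * np.sin(K * np.pi / NS) / (K * np.sin(np.pi / NS)))
p_equal_th = erf(1 / (2 * np.sqrt(2 * sigma_w2))) * (erf(alpha / np.sqrt(2))
                                                     - 1 / np.sqrt(2 * np.pi * sigma**2))
print(p_equal_mc, p_equal_th)
```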

Having found the transition probability, the next step is to calculate the effective number of samples, defined further in the paper.

6 Effective Number of Samples of Digital OFDM Signal

Because the values of \(x^{\mathrm{q}}_{i} \) belong to a set with a finite number of elements, the probability that two different samples have the same value is greater than zero. This means that in the sequence \(x^{\mathrm{q}}_{i} \) there appear constant subsequences of two or more samples, effectively reducing the number of samples of the signal. Let 〈L〉 denote the mean length of a constant subsequence within signal \(x^{\mathrm{q}}_{i} \). The effective number of samples of the signal \(x^{\mathrm{q}}_{i} \) is hereby defined as

$$ N_\mathrm{eff}= \frac{N_\mathrm{S}}{\langle {L}\rangle }. $$
(31)

Thus, N eff denotes the number of changes of values of consecutive samples in the digital signal. The limiting values are N eff=N S if no sample is equal to the previous one, and N eff=1 if the signal is constant (all samples are equal). It is convenient to use the normalized effective number of samples N eff/N S, taking values from 1/N S up to 1.

From the definition it follows that the effective number of samples can be calculated by counting changes of values of consecutive samples. Therefore,

$$ N_\mathrm{eff}= \Biggl \langle {1 + \sum_{i=1}^{N_\mathrm{S}-1} \bigl({1 - \delta_{ x^{\mathrm{q}}_{i-1} x^{\mathrm{q}}_i } } \bigr) }\Biggr \rangle . $$
(32)
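Expression (32) translates directly into a counting procedure. A minimal sketch (Python with NumPy assumed) for a single realization of the signal is given below; the ensemble average in (32) then corresponds to averaging such counts over many symbols.

```python
import numpy as np

def n_eff(xq):
    """Eq. (32) for one realization: one plus the number of changes of value
    between consecutive samples of the digital signal."""
    return 1 + int(np.count_nonzero(np.diff(np.asarray(xq))))

print(n_eff([3, 3, 4, 4, 4, 5]))   # three runs of equal values -> N_eff = 3
```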

For a stationary stochastic process, the mean value of the delta in this expression is the same for each pair of samples; hence,

$$ \frac{N_\mathrm{eff}}{N_\mathrm{S}} = 1 - \biggl({1-\frac {1}{N_\mathrm{S}}} \biggr) \langle { \delta_{ x^{\mathrm{q}}_0 x^{\mathrm{q}}_1 } }\rangle . $$
(33)

The calculated transition probability allows us to express the mean value of the Kronecker delta:

$$ \langle {\delta_{ x^{\mathrm{q}}_0 x^{\mathrm{q}}_1 } }\rangle = \sum_{p\in\mathbb{Z}_n} \mathcal{P}^{\mathrm{q}}_1(p, p) \approx2\sum _{p=0}^{2^{n-1}} \mathcal{P}^{\mathrm{q}}_1(p, p) - \mathcal{P}^{\mathrm{q}}_1(0, 0). $$
(34)

Then

$$ \frac{N_\mathrm{eff}}{N_\mathrm{S}} \approx1 - \frac{1}{\sqrt {2\pi\sigma^2}} \operatorname{erf}\biggl( \frac{1}{2 \sigma _{w{}}\sqrt{2}}\biggr) \Biggl[{2 \sum_{p=0}^{2^{n-1}} \exp \biggl({-\frac{p^2}{2 \sigma^2}} \biggr) -1 } \Biggr]. $$
(35)

Again exploiting σ≫1 and approximating the summation over levels p by integration, one obtains the closed-form expression

$$ \frac{N_\mathrm{eff}}{N_\mathrm{S}} \approx1 - \operatorname{erf}\biggl(\frac{1}{2 \sigma _{w{}}\sqrt{2}}\biggr) \biggl[{\operatorname{erf}\biggl(\frac{\alpha}{\sqrt{2}}\biggr) - \frac {1}{\sqrt{2\pi \sigma^2}} } \biggr]. $$
(36)

This formula is compared with the results of numerical calculations in Fig. 1, where both are plotted against the resolution n and the “reduced resolution”

$$ \nu= n - \log_2\frac{N_\mathrm{S}}{K}. $$
(37)

As can be seen, the derived formula (36) reproduces the effective number of samples very accurately in most cases.

Fig. 1

Normalized effective number of samples of OFDM signal N eff/N S vs. (a) converter’s resolution n and (b) reduced resolution ν: comparison of numerical results from [12] and the derived formula (36)

The derived formula (36) can be approximated to show the relations between the various parameters more clearly. For the practically significant value of α≈4 one has \(\operatorname{erf}(\alpha/\sqrt{2})\approx1\). The second term in the brackets is inversely proportional to σ and thus decreases quickly, as \(2^{-n}\); it is small and can be ignored. Expanding the trigonometric functions in \(\sigma _{w{}}^{2}\) given by (23) into a Maclaurin series and keeping only the leading term, one obtains

$$ \sigma _{w{}}\approx\frac{ 2^n \pi}{ \alpha\sqrt{3} } \biggl({\frac {N_\mathrm{S}}{K}} \biggr)^{-1} = \frac{ 2^\nu\pi}{ \alpha\sqrt {3} }. $$
(38)

For small arguments \(\operatorname{erf}(x) \approx 2 x /\sqrt{\pi }\); therefore, for ν>0, the simplified formula is

$$ \frac{N_\mathrm{eff}}{N_\mathrm{S}} \approx1 - \frac{ \alpha\sqrt {3} }{ 2^n \sqrt {2\pi ^3}} \frac{N_\mathrm{S}}{K} = 1 - \frac{ \alpha\sqrt{3} }{ 2^\nu \sqrt {2\pi^3} }. $$
(39)

It can be seen that the only dependence on n and N S/K is through the reduced resolution ν, which explains the behavior of the results depicted in Fig. 1b.
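The following sketch (Python with NumPy assumed; the parameter values are illustrative) evaluates the full formula (36), with σ w taken from (23) at i=1, alongside the simplified formula (39), so the two can be compared directly and the dependence on ν alone can be verified:

```python
import numpy as np
from math import erf

def neff_ratio_exact(n, alpha, oversampling, K=64):
    """Eq. (36), with sigma_w computed from Eq. (23) at i = 1."""
    NS = oversampling * K
    sigma = 2**(n - 1) / alpha
    sigma_w2 = 2 * sigma**2 * (1 - np.cos((K + 1) * np.pi / NS)
                               * np.sin(K * np.pi / NS) / (K * np.sin(np.pi / NS)))
    return 1 - erf(1 / (2 * np.sqrt(2 * sigma_w2))) * (erf(alpha / np.sqrt(2))
                                                       - 1 / np.sqrt(2 * np.pi * sigma**2))

def neff_ratio_simple(n, alpha, oversampling):
    """Eq. (39): depends on n and N_S/K only through nu = n - log2(N_S/K)."""
    nu = n - np.log2(oversampling)
    return 1 - alpha * np.sqrt(3) / (2**nu * np.sqrt(2 * np.pi**3))

for n, r in [(8, 2), (8, 4), (10, 4)]:
    print(n, r, round(float(neff_ratio_exact(n, 4.0, r)), 4), round(float(neff_ratio_simple(n, 4.0, r)), 4))
```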

7 Signal Degradation by Pseudo-Random Nonlinear Distortions

The autocorrelation function R i given by (16) contains the mean value of the Kronecker delta, which can be calculated using the derived transition probability:

$$ \langle {\delta_{x^{\mathrm{q}}_0x^{\mathrm{q}}_i } }\rangle = \sum_{p\in\mathbb{Z}_n} \mathcal {P}^{\mathrm{q}}_i(p, p). $$
(40)

Using (29) the mean value for samples 0 and i can be expressed by the mean value for samples 0 and 1:

$$ \langle {\delta_{x^{\mathrm{q}}_0x^{\mathrm{q}}_i } }\rangle = \frac{ \operatorname{erf}(\frac{1}{2 \sigma _{w{i}}\sqrt{2}}) }{ \operatorname{erf}(\frac{1}{2 \sigma _{w{}}\sqrt{2}}) } \langle { \delta_{x^{\mathrm {q}}_0 x^{\mathrm{q}}_1} }\rangle . $$
(41)

Therefore, using (33) rearranged to express the mean value of delta by the effective number of samples, one finds:

$$ R_i \approx \biggl({1 - \frac{N_\mathrm{eff}}{N_\mathrm{S}} } \biggr) \varDelta _\mathrm {s}^2 \frac { \operatorname{erf}(\frac{1}{2 \sigma _{w{i}}\sqrt{2}}) }{ \operatorname{erf}(\frac{1}{2 \sigma _{w{}}\sqrt{2}})}. $$
(42)

Exemplary averaged autocorrelation functions obtained from numerical simulations and calculated with the help of the above formula are presented in Fig. 2. For each case the averaging was conducted over 301 randomly generated signals and randomly generated level errors Δ(p). It can be seen that the derived expression correctly reproduces the trends of the considered distortions. It is interesting to note that the autocorrelation function is generally a peak of a certain width, which can be characterized by the ratio

$$ \frac{R_1}{R_0} = 1 - \frac{N_\mathrm{eff}}{N_\mathrm{S}}, $$
(43)

and hence by the effective number of samples. This is the expected behavior, since a lower effective number of samples means more repetitions of values in the signal, and thus a higher correlation between consecutive samples.

Fig. 2

Normalized autocorrelation function of pseudo-random nonlinear distortions: comparison of numerical results and the derived formula (42) for three cases. Values are plotted only for a few of the smallest lags, which are the most interesting part of the autocorrelation function
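Expression (42) can be evaluated directly once σ wi and N eff/N S are known. The sketch below (Python with NumPy assumed; the parameter values and the normalization \(\varDelta _{\mathrm{s}}^{2}=1\) are illustrative) performs such an evaluation and exposes the ratio (43) as R[1]/R[0]:

```python
import numpy as np
from math import erf

def r_theory(NS, K, n, alpha, delta_s2=1.0):
    """Eq. (42) evaluated for all lags i, with sigma_wi from Eq. (23)
    and N_eff/N_S from Eq. (36); Delta_s^2 = 1 is an assumed normalization."""
    sigma = 2**(n - 1) / alpha
    i = np.arange(NS)
    ratio = np.ones(NS)
    ratio[1:] = (np.cos((K + 1) * np.pi * i[1:] / NS) * np.sin(K * np.pi * i[1:] / NS)
                 / (K * np.sin(np.pi * i[1:] / NS)))
    sigma_wi = np.sqrt(2 * sigma**2 * (1 - ratio))           # sigma_w0 = 0 by construction
    erf_i = np.array([1.0 if s == 0.0 else erf(1 / (2 * s * np.sqrt(2))) for s in sigma_wi])
    neff_ratio = 1 - erf_i[1] * (erf(alpha / np.sqrt(2)) - 1 / np.sqrt(2 * np.pi * sigma**2))
    return (1 - neff_ratio) * delta_s2 * erf_i / erf_i[1]    # Eq. (42)

R = r_theory(NS=256, K=64, n=8, alpha=4.0)
print(R[:4] / R[0])    # a narrow peak around lag 0; R[1]/R[0] is approximately 1 - N_eff/N_S, cf. Eq. (43)
```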

From the Wiener-Khinchin theorem, the power spectrum of the pseudo-random distortions can be calculated as a Fourier transform of R i :

$$ S_k = \frac{1}{N_\mathrm{S}}\sum_{i=0}^{N_\mathrm{S}-1} R_i \mathrm {e}^{-{\mathrm{j}}\omega _k i T} = \frac{\varDelta _\mathrm {s}^2}{N_\mathrm{S}} \biggl({1 - \frac{N_\mathrm {eff}}{N_\mathrm{S}} } \biggr) \sum_{i=0}^{N_\mathrm{S}-1} \frac{ \operatorname{erf}(\frac{1}{2 \sigma _{w{i}}\sqrt{2}}) }{ \operatorname{erf}(\frac{1}{2 \sigma _{w{}}\sqrt{2}}) } \mathrm {e}^{-{\mathrm{j}}\omega_k i T}. $$
(44)

The predictions given by (44) are compared with averaged numerical results in Fig. 3. Good agreement is observed, as the theoretical curves reproduce the power spectra with high accuracy. The only clearly visible discrepancy occurs at the frequencies with the lowest powers; however, it is possible that the fit would improve if the number of averaged realizations were increased. Still, the total contribution of this part of the spectrum is rather low, and even if the derived expression is inaccurate here, it has no significant impact.

Fig. 3

Normalized power spectrum of pseudo-random nonlinear distortions: comparison of numerical results and the derived formula (44) for three cases
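Relation (44) is a discrete Fourier transform of the autocorrelation sequence, so it can be evaluated with an FFT. A minimal sketch is given below (Python with NumPy assumed); the triangular input sequence is only a toy stand-in for R i, used to keep the example self-contained:

```python
import numpy as np

def distortion_spectrum(R):
    """Eq. (44): power spectrum of the pseudo-random distortions as the DFT
    of the autocorrelation sequence R_i (Wiener-Khinchin theorem)."""
    R = np.asarray(R, dtype=float)
    return np.fft.fft(R) / R.size          # S_k for k = 0, ..., N_S - 1

# Toy autocorrelation (assumed): a narrow triangular peak around lag 0, periodic in N_S.
NS = 256
lag = ((np.arange(NS) + NS // 2) % NS) - NS // 2
R = np.maximum(1 - np.abs(lag) / 4.0, 0.0)
print(np.real(distortion_spectrum(R))[:5])
```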

The total power of pseudo-random nonlinear distortions in the signal band is

$$ \sigma_\mathrm{nls}^2 = \sum_{\vert {k}\vert \leq K, k\neq0} S_k = \frac {1}{N_\mathrm{S}} \sum_{i=0}^{N_\mathrm{S}-1} R_i \frac{ \sin\frac{2\pi K i}{N_\mathrm {S}} }{ \sin\frac {2\pi i}{N_\mathrm{S}} }. $$
(45)

With R i given by the expression (42) this leads to

$$ \sigma_\mathrm{nls}^2 = \frac{\varDelta _\mathrm {s}^2}{N_\mathrm{S}} \biggl({1 - \frac{N_\mathrm{eff}}{N_\mathrm{S}} } \biggr) \sum_{i=0}^{N_\mathrm{S}-1} \frac{ \operatorname{erf}(\frac{1}{2 \sigma _{w{i}}\sqrt{2}}) }{ \operatorname{erf}(\frac{1}{2 \sigma _{w{}}\sqrt{2}}) } \frac{ \sin \frac{2\pi K i}{N_\mathrm {S}} }{ \sin \frac {2\pi i}{N_\mathrm{S}} }. $$
(46)

This expression for \(\sigma_{\mathrm{nls}}^{2}\) is complicated, and it would be beneficial to derive an approximation that, although less accurate, expresses the relations between the quantities more clearly. Because R i is mainly a peak of a certain width, it can be approximated in (45) with a triangular function:

$$ R_i \approx \begin{cases} R_0 - ({R_0 - R_1} ) \vert {i}\vert , & \vert {i}\vert < \frac {N_\mathrm{S} }{N_\mathrm{eff}}, \\ 0, & \text{otherwise}, \end{cases} $$
(47)

where the range of i is shifted by −N S/2, which is possible because R i is periodic. Then the nonzero terms of the sum lie only near i=0, so one can use sin(2πi/N S)≈2πi/N S, and the next step is to replace the summation with integration. In geometrical terms, summation corresponds to adding areas of rectangles, while integration corresponds to adding areas of triangles; since R 0 is expected to be the dominant component, a factor of 2 is introduced to compensate for the ignored area of the central step (peak): ∑→2∫. In this way one obtains

$$ \sigma_\mathrm{nls}^2 \approx\frac{2}{N_\mathrm{S}} \int_{-\frac{N_\mathrm{S}}{N_\mathrm{eff}}}^{\frac{N_\mathrm{S}}{N_\mathrm{eff}}}\mathrm{d} {i}\, R_i \frac{ \sin\frac{2\pi K i}{N_\mathrm{S}} }{ \frac{2\pi i}{N_\mathrm{S}} } = \frac{2 \varDelta _\mathrm {s}^2}{\pi} \biggl[{\operatorname{Si}\biggl(\frac{2\pi K}{N_\mathrm{eff}}\biggr) - \frac{N_\mathrm{eff}}{2\pi K} \biggl({1 - \cos\frac{2\pi K}{N_\mathrm{eff}}} \biggr) } \biggr], $$
(48)

where \(\operatorname{Si}(x) = x - x^{3}/(3\cdot3!) + x^{5}/(5\cdot5!) - x^{7}/(7\cdot7!) + \cdots\) is the integral sine function. Thus, in the rough approximation \(\sigma_{\mathrm{nls}}^{2} \approx2 K \varDelta _{\mathrm{s}}^{2} / N_{\mathrm{eff}}\) and the associated signal-to-noise ratio is

$$ \mathrm{SNR}_\mathrm{nls} = \frac{\sigma^2}{\sigma_\mathrm {nls}^2} \approx \frac{1}{8 \varDelta _\mathrm {s}^2} \biggl({\frac{2^n}{\alpha}} \biggr)^2 \frac{N_\mathrm{eff}}{K}. $$
(49)

This result has the same form as that for quantization noise, with the effective number of samples N eff used in place of the total number of samples N S.
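A short sketch comparing (49) with (11) (Python with NumPy assumed; the values n=8, α=4, N S/K=4, and the rms level error Δ s=0.05 LSB are illustrative, with N eff/N S taken from the simplified formula (39)) makes this point explicit: for pseudo-random nonlinear distortions the benefit of oversampling enters only through N eff, which grows more slowly than N S.

```python
import numpy as np

def snr_nls_db(n, alpha, neff_over_k, delta_s_rms):
    """Eq. (49): SNR limited by pseudo-random nonlinear distortions, in dB."""
    return 10 * np.log10((2**n / alpha)**2 * neff_over_k / (8 * delta_s_rms**2))

def snr_q_db(n, alpha, ns_over_k):
    """Eq. (11), for comparison: quantization-noise-limited SNR, in dB."""
    return 10 * np.log10(1.5 * (2**n / alpha)**2 * ns_over_k)

# Illustrative (assumed) values: n = 8, alpha = 4, N_S/K = 4, Delta_s = 0.05 LSB,
# with N_eff/N_S estimated from the simplified formula (39).
n, alpha, r, delta_s = 8, 4.0, 4, 0.05
neff_ratio = 1 - alpha * np.sqrt(3) / (2**(n - np.log2(r)) * np.sqrt(2 * np.pi**3))
print(float(snr_q_db(n, alpha, r)), float(snr_nls_db(n, alpha, r * neff_ratio, delta_s)))
```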

8 Summary

The expressions (42) and (44) presented in this paper accurately describe the autocorrelation function and power spectrum of the pseudo-random, i.e., irregular, nonlinear distortions introduced by an A/D or D/A converter into an OFDM signal, and thus extend the theoretical model of the conversion process available in the literature. Their derivation is based on the transition probability for the OFDM signal and on a newly introduced quantity, the effective number of samples, for which the expressions (27) and (36) have been found. It has been shown that the effective number of samples reveals certain information about the autocorrelation function and power spectrum of the considered distortions. In general, this quantity is important for digital signals, because it takes into account an important stochastic property, the repetition of sample values, which has significant consequences. It is expected that the effective number of samples defined here will find further applications in the description of digital signals.