
Optimal fronthaul compression for synchronization in the uplink of cloud radio access networks

  • Eunhye Heo
  • Osvaldo Simeone
  • Hyuncheol Park
Open Access
Research

Abstract

A key problem in the design of cloud radio access networks (CRANs) is to devise effective baseband compression strategies for transmission on the fronthaul links connecting a remote radio head (RRH) to the managing central unit (CU). Most theoretical works on the subject implicitly assume that the RRHs, and hence the CU, are able to perfectly recover time synchronization from the baseband signals received in the uplink, and focus on the compression of the data fields. This paper instead does not assume a priori synchronization of RRHs and CU, and considers the problem of fronthaul compression design at the RRHs with the aim of enhancing the performance of time and phase synchronization at the CU. The problem is tackled by analyzing the impact of the synchronization error on the performance of the link and by adopting information and estimation-theoretic performance metrics such as the rate-distortion function and the Cramer-Rao bound (CRB). The proposed algorithm is based on the Charnes-Cooper transformation and on the Difference of Convex (DC) approach, and is shown via numerical results to outperform conventional solutions.

Keywords

C-RAN · Fronthaul compression · Time and phase synchronization

1 Introduction

As mobile operators are faced with increasingly demanding requirements in terms of data rates and operational costs, the novel architecture of cloud radio access networks (C-RANs) has emerged as a promising solution [1, 2]. In a C-RAN, the baseband processing and higher-layer operations of the base stations are migrated to a central unit (CU) in the “cloud”, to which the base stations, typically referred to as remote radio heads (RRHs), are connected via fronthaul links; these links may in turn be realized via fiber optics, microwave, or mmWave technologies. By simplifying the network edge and by centralizing baseband processing, the C-RAN architecture is expected to provide significant benefits in energy efficiency, load balancing, and interference management (see the review in [2]).

A key issue in C-RANs is to devise effective methods for transporting digitized baseband signals over fronthaul links of limited capacity. The Common Public Radio Interface (CPRI) standard [3] defines the communication interface between the CU and the RRHs on the fronthaul network, including the use of sampling and scalar quantization for the digitization of the baseband signals. However, the basic approach prescribed by CPRI is bound to produce bit rates that are difficult to accommodate within the available fronthaul capacities. This has motivated the design of strategies that reduce the bit rate of the fronthaul data stream while limiting the distortion incurred on the quantized signal. Point-to-point compression techniques that reduce the CPRI fronthaul rate are based on principles such as filtering and downsampling [4], optimized non-uniform quantization [5], and lossless compression [6]. Beyond these point-to-point compression algorithms, other works tackle the design of fronthaul transmission strategies from a network-aware perspective (see, e.g., [7–10, 13]).

Most theoretical works on fronthaul compression for C-RAN implicitly assume perfect time synchronization and channel state information (CSI) at the RRHs and the CU. However, on the one hand, this assumption violates the C-RAN paradigm that minimal baseband processing should be carried out at the RRHs, and, on the other hand, the resulting design neglects the additional requirements on fronthaul processing at the RRHs that are imposed by synchronization and channel estimation. This limitation is alleviated by [10], which considers robust compression in the presence of imperfect CSI and by papers [11, 12], which study the impact of fronthaul compression on channel estimation. To the best of our knowledge, analyses that account for imperfect time synchronization are not available.

In this paper, we consider training-based synchronization for the uplink of a C-RAN cellular system. Specifically, we consider the system illustrated in Fig. 1, in which an RRH is connected to a CU in the cloud via a finite-capacity fronthaul link, as is by now standard in related investigations of C-RAN (see, e.g., [2]). We study the problem of optimal fronthaul compression of the training field with the aim of enhancing the performance of time and phase synchronization at the CU.
Fig. 1

Uplink communication between a number of MSs and an RRH. The RRH is connected via a finite-capacity fronthaul link to a CU that performs baseband processing, including synchronization

To this end, the effect of the synchronization error on the signal-to-noise ratio (SNR) is analyzed by adopting the Cramer-Rao bound (CRB) as the performance criterion of interest and by accounting for compression via information-theoretic tools. The resulting proposed algorithm is based on the Charnes-Cooper transformation [14] and the Difference of Convex (DC) approach [15]. Numerical results show that optimized fronthaul compression targeting enhanced synchronization performance outperforms conventional solutions that do not account for the impact of synchronization errors. The rest of the paper is organized as follows. Section 2 introduces the system model of the uplink C-RAN cellular system. The analytical study of performance and the optimization are presented in Section 3: the CRBs for the time and phase offset estimation carried out at the CU are derived in Section 3.1, the impact of the synchronization error on the effective SNR is analyzed in Section 3.2, and the optimization of fronthaul compression is addressed in Section 3.3. Finally, the performance is evaluated through simulations in Section 4, demonstrating the benefits of the proposed compression scheme.

1.1 Notation

Boldface lowercase letters denote column vectors and boldface uppercase letters denote matrices. The superscript (·)† denotes the conjugate transpose of its argument, and (·)−1 denotes the inverse of its argument. The determinant of a matrix A is denoted as |A|. The expectation with respect to x is denoted as \(\mathbb {E}_{x}[\!\cdot ]\); the correlation matrix of a random vector x is defined as \(\mathbf {K}_{\mathbf {x}}=\mathbb {E}[\!\mathbf {x}\mathbf {x}^{\dag }]\).

2 System model

In this paper, we consider training-based synchronization for the uplink of a C-RAN cellular system. We specifically focus on the operation of a single cell, as illustrated in Fig. 1, and assume that, as in current cellular implementations, the mobile stations (MSs) transmit over orthogonal time/frequency resources, so that we can focus on a single active MS in a given resource block. The MS transmits a frame consisting of a training field and a data field. We further assume that the active MS and the RRH each have a single antenna. The RRH is connected to a CU in the cloud via a fronthaul link that can deliver C bits per uplink sample to the CU. It is also assumed that the RRH is synchronized at the frame level, so as to be able to distinguish between the training and data fields that compose each transmitted frame.

2.1 Training phase

Assuming a flat-fading channel, the signal received at the RRH during the training, or pilot, field, is given as
$$\begin{array}{*{20}l}{} y_{p}(t) = Ae^{\,j\theta}\!\sum_{l=-L+1}^{N_{p}-1} \!x_{p}[\!l]g(t-lT-\!\tau) +z_{p}(t),~\!t\in\,[\!0,N_{p}T) \end{array} $$
(1)

where A is a positive amplitude that accounts for the attenuation due to fading; θ is the phase offset, which models the effect of the channel and of the phase mismatch between the oscillators at the MS and at the RRH; τ accounts for the residual timing offset between MS and RRH; T is the symbol period; \(x_p[l]\) is the lth pilot symbol transmitted by the MS; \(N_p\) is the number of pilot symbols; g(t) is the pulse shape, which includes the effect of the transmit and receive filters and is assumed to be supported in the interval [0,(L−1)T] for some integer L>1; and \(z_p(t)\) is complex additive white Gaussian noise with two-sided power spectral density \(N_0\). We assume that the RRH is able to estimate the channel amplitude A, for instance, by means of automatic gain control in the presence of constant-amplitude symbols. Instead, the time offset τ and phase offset θ need to be estimated from the received signal (1).

The training sequence is generated randomly such that the symbols \(x_p[l]\) for \(l=0,...,N_p-1\) are independent and distributed as \(\mathcal {CN}(0,E_{x_{p}})\). The training sequence is known to the CU, and the random generation is assumed here for the sake of simplifying the analysis in the spirit of Shannon’s random coding (see, e.g., [16]). We further assume that the pilot symbols are preceded by a cyclic prefix of duration (L−1)T. This implies that \(x_p[-l]=x_p[-l+N_p]\) for 1≤l≤L−1. Alternatively, as will be discussed, the analysis below holds as long as the number of training symbols \(N_p\) is sufficiently larger than the support L of the waveform g(t).

In order to potentially enhance the performance of phase and time synchronization, we allow the receiver to oversample the received signal at the RRH with a sampling period \(T_s=T/F\), where F is the oversampling factor. For simplicity of analysis, we consider a raised cosine pulse g(t) with zero excess bandwidth (i.e., a sinc function), so that the two-sided bandwidth is B=1/T. As a result, setting F=1, i.e., no oversampling, is an acceptable choice that leads to no spectral aliasing. However, as will be seen in Section 4, the selection F>1 may yield improved performance. Note that this is true even under the given assumption of zero excess bandwidth: collecting a larger number of samples enables the mitigation of the effect of the additive noise.

The resulting discrete-time signal \(y_p(mT+nT_s)\) can be expressed as the interleaving of the F polyphase sequences \({y^{n}_{p}}[m]=y_{p}(mT+{nT}_{s})\), with n=0,1,...,F−1 (see, e.g., [17]). Each sequence \({y^{n}_{p}}[m]\) can in turn be written as
$$\begin{array}{*{20}l}{} {y^{n}_{p}}[m] &= A x_{p}[m]\circledast g_{\tau,\theta}^{n}[m] + {z^{n}_{p}}[m], \ \ m=0,...,N_{p}-1, \end{array} $$
(2)

where we have defined \({z^{n}_{p}}[m]\triangleq z_{p}(mT+{nT}_{s})\) and \(g_{\tau,\theta }^{n}[m]\triangleq e^{j\theta }g(mT+{nT}_{s}-\tau)\), and \(\circledast \) denotes circular convolution. Assuming that the noise \(z_p(t)\) is white over the bandwidth \([-1/2T_s,1/2T_s]\), the discrete-time noise sequence \({z^{n}_{p}}[m]\) is an i.i.d. process with zero mean and power \(N_0/T_s\).
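As a concrete illustration of the polyphase model (2), the following sketch generates the F oversampled pilot sequences for a sinc pulse; all numerical values are illustrative choices, not the paper's experimental settings, and the circular convolution (valid thanks to the cyclic prefix) is implemented via the DFT.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the paper's experimental settings)
Np, F, T = 16, 2, 1.0            # pilot length, oversampling factor, symbol period
A, theta, tau = 0.7, 0.3, 0.25   # amplitude, phase offset, timing offset
N0 = 0.1                         # two-sided noise PSD
Ts = T / F                       # sampling period

# Random circularly-symmetric Gaussian training sequence with power Ex_p
Ex_p = 1.0
x_p = np.sqrt(Ex_p / 2) * (rng.standard_normal(Np) + 1j * rng.standard_normal(Np))

g = lambda t: np.sinc(t / T)     # zero-excess-bandwidth pulse (sinc)

m = np.arange(Np)
y_p = np.empty((F, Np), dtype=complex)
for n in range(F):
    # g_{tau,theta}^n[m] = e^{j theta} g(mT + n Ts - tau)
    g_n = np.exp(1j * theta) * g(m * T + n * Ts - tau)
    # circular convolution x_p (*) g^n, computed in the frequency domain
    s_n = np.fft.ifft(np.fft.fft(x_p) * np.fft.fft(g_n))
    # i.i.d. complex Gaussian noise of power N0/Ts
    z_n = np.sqrt(N0 / Ts / 2) * (rng.standard_normal(Np) + 1j * rng.standard_normal(Np))
    y_p[n] = A * s_n + z_n       # polyphase sequence y_p^n[m], eq. (2)
```

The array `y_p` holds one polyphase sequence per row, which is the form in which the signal is quantized in Section 2.3.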

Remark 1

The presampling filter has a cut-off frequency of 1/2T since it is matched to the signal waveform. As a result, the noise prior to sampling is bandlimited with two-sided bandwidth B=1/T. As such, it is correlated with auto-correlation function proportional to sinc(t/T). Therefore, with oversampling, the discrete-time noise samples, which are taken at times multiple of T/F, are more properly modelled as correlated if F>1. Here, following many related references (see, e.g., [18,19]), we instead make the simplifying assumption that the noise is white. This choice can be seen to lead to lower bounds on the actual system performance. □

2.2 Data phase

The signal received during the data field of a frame can be written, in an analogous fashion as (1), as
$$\begin{array}{*{20}l}{} y_{d}(t) &= Ae^{\,j\theta}\!\sum_{l=-L+1}^{N_{d}-1} \!x_{d}[\!l]g(t\!-lT-\!\tau\!) \!+z_{d}(t), \,t\in\, [\!0,N_{d}T), \end{array} $$
(3)

where \(x_d[l]\) is the lth data symbol transmitted by the MS, which is generated randomly from a constellation set \(\Omega_x\) with zero mean and power \(E_{x_{d}}\), and \(N_d\) is the number of data symbols. The other parameters are defined as in (1).

After sampling at baud rate for the data field, the discrete-time signal is given as
$$\begin{array}{*{20}l} y_{d}[\!m] &= Ae^{\,j\theta}\sum_{l=-L+1}^{N_{d}-1} x_{d}[\!l]g((m-l\,)T-\tau) +z_{d}[\!m],\\ &~~~~~~m=0,...,N_{d}-1, \end{array} $$
(4)

where the discrete-time noise sequence \(z_d[m]\) is an i.i.d. process with zero mean and power \(N_0/T\). Note that oversampling could also be adopted for the data field by following the same model used for the training field, but we do not pursue this further here in order to focus on training for synchronization.

2.3 Fronthaul compression

Following the C-RAN principle, compression is performed at the RRH in order to convey the baseband signal over the limited-capacity fronthaul link to the CU. For the training field, we assume the use of block quantizers that compress each nth polyphase sequence \(y^n_p[m]\), with n=0,...,F−1, separately for transmission over the fronthaul link. Note that, while joint compression of these sequences generally leads to improved compression efficiency, here we adopt separate compression both for its lower computational complexity and for its analytical tractability. In particular, each polyphase sequence is stationary and can hence be compressed by using standard compression strategies, including universal methods [15, Ch. 10]. Furthermore, the resulting compression rate can be computed using rate-distortion theory, as discussed next.

Using the standard additive quantization noise model, the resulting compressed signal for each nth polyphase sequence can be written as
$$\begin{array}{*{20}l} \hat{y}^{n}_{p}[\!m] &= {y^{n}_{p}}[\!m]+{q^{n}_{p}}[\!m], \ \ m=0,...,N_{p}-1, \end{array} $$
(5)

where \({q^{n}_{p}}[m]\) indicates the quantization noise, which is assumed to be complex Gaussian and generally correlated across the discrete-time index m. Due to the separate quantization of the polyphase sequences, the quantization noise is independent across the index n. From the covering lemma of rate-distortion theory [16], vector quantization schemes can be designed such that the joint (empirical) distribution of the input and output of the quantizer satisfies (5), as long as the rate is sufficiently large (see, e.g., [16]). Furthermore, the relationship (5) can in practice be approximated by high-dimensional dithered vector quantizers [20]. The practical relevance of the additive-noise quantization model for system design is further validated in Section 4 by means of numerical results.
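To see why the additive quantization-noise model (5) is reasonable, one can check numerically that a subtractive-dithered uniform scalar quantizer, a simple one-dimensional stand-in for the high-dimensional dithered vector quantizers of [20], produces an error that behaves as additive noise of variance Δ²/12, nearly uncorrelated with the input; the step size Δ below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def dithered_quantize(y, delta, rng):
    """Subtractive-dithered uniform quantizer with step size delta.

    With dither u ~ Unif(-delta/2, delta/2), the reconstruction error
    is uniform, independent of the input, with variance delta^2/12."""
    u = rng.uniform(-delta / 2, delta / 2, size=y.shape)
    return delta * np.round((y + u) / delta) - u

y = rng.standard_normal(100_000)      # stand-in for a baseband sample stream
delta = 0.5
y_hat = dithered_quantize(y, delta, rng)
q = y_hat - y                         # effective additive "quantization noise"

err_var = np.var(q)                   # empirically close to delta**2 / 12
```

This scalar experiment only illustrates the additive-noise behavior; the rate guarantees in the text rely on the vector (block) versions cited above.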

The covariance matrix \(\mathbf {K}_{\mathbf {q}^{n}_{p}}\) of the vector \(\mathbf {q}^{n}_{p}=\left [{q^{n}_{p}}[0],...,{q^{n}_{p}}[N_{p}-1]\right ]\) is taken to be circulant in order to facilitate its optimization in the frequency domain. This is done with the aim of reducing the number of degrees of freedom in the problem, hence enabling efficient and scalable optimization, as discussed in the next section. Taking the discrete Fourier transform (DFT) of (5) leads to the frequency-domain signals
$$\begin{array}{*{20}l}{} \hat{Y}^{n}_{p}[\!k] = A X_{p}[\!k]G^{n}_{\tau,\theta}[\!k]+ {Z^{n}_{p}}[k]+ {Q^{n}_{p}}[\!k],\,k=0,...,N_{p}-1, \end{array} $$
(6)

where \(X_{p}[k], G^{n}_{\tau,\theta }[k], {Z^{n}_{p}}[k]\), and \({Q^{n}_{p}}[k]\) are obtained by taking the DFT of the sequences \(\left \{x_{p}[m]\}_{m=0}^{N_{p}-1}, \{g^{n}_{\tau,\theta }[m]\}_{m=0}^{N_{p}-1}, \{{z^{n}_{p}}[m]\right \}_{m=0}^{N_{p}-1}\), and \(\{{q^{n}_{p}}[m]\}_{m=0}^{N_{p}-1}\), respectively. Due to the lack of spectral aliasing afforded by the chosen waveform and sampling frequency, we can write \(G^{n}_{\tau,\theta }[k]=G^{n}[k]e^{-j(2\pi \frac {k}{N_{p}T_{s}} \tau -\theta)}\).

From the mentioned covering lemma [16] (see also [20]), the fronthaul rate required to convey the compressed signals \(\hat {\mathbf {y}}_{p}=\left [\hat {\mathbf {y}}_{p}^{0},...,\hat {\mathbf {y}}_{p}^{F-1}\right ]\), where \(\hat {\mathbf {y}}_{p}^{n}=\left [\hat {y}^{n}_{p}[0],...,\hat {y}^{n}_{p}[N_{p}-1]\right ]\), from the RRH to the CU is given by the mutual information \(I(\mathbf {y}_{p};\hat {\mathbf {y}}_{p})\), with the vector \(\mathbf{y}_p\) being defined similarly. However, the mutual information \(I(\mathbf {y}_{p};\hat {\mathbf {y}}_{p})\) depends on the joint distribution of \(\mathbf{y}_p\) and \(\hat {\mathbf {y}}_{p}\), and hence on the timing offset τ and phase offset θ, which are not known at the RRH. Therefore, we adopt the worst-case rate \(R_{p}=\sup _{\tau,\theta }I(\mathbf {y}_{p};\hat {\mathbf {y}}_{p})\). This rate can be easily evaluated from the mutual information, which is given by
$$\begin{array}{*{20}l} I(\mathbf{y}_{p};\hat{\mathbf{y}}_{p})= \sum^{F-1}_{n=0} \log_{2}\frac{|\mathbf{K}_{\mathbf{y}^{n}_{p}}+\mathbf{K}_{\mathbf{q}^{n}_{p}}|}{|\mathbf{K}_{\mathbf{q}^{n}_{p}}|}, \end{array} $$
(7)
where \(\mathbf {y}^{n}_{p}=\left [{y^{n}_{p}}[0],...,{y^{n}_{p}}[N_{p}-1]\right ]\) and \(\mathbf {q}^{n}_{p}=\left [{q^{n}_{p}}[0],...,{q^{n}_{p}}[N_{p}-1]\right ]\). Since the covariance matrix of the quantization noise \(\mathbf {K}_{\mathbf {q}^{n}_{p}}\) is assumed to be circulant, by leveraging Szegő's theorem [21], we can write (7) as
$$\begin{array}{*{20}l}{} I(\mathbf{y}_{p} & ;\hat{\mathbf{y}}_{p}) = \sum_{n=0}^{F-1} \sum_{k=0}^{N_{p}-1} \log_{2} \left(1+ \frac{E_{x_{p}}A^{2} |G^{n}[k]|^{2}+ N_{0}/T_{s}}{S_{{Q^{n}_{p}}}[k]} \right), \end{array} $$
(8)
where \(S_{{Q^{n}_{p}}}[\!k]\), for \(k=0,...,N_p-1\), indicate the eigenvalues of the matrix \(\mathbf {K}_{\mathbf {q}^{n}_{p}}\). We will refer to \(S_{{Q^{n}_{p}}}[k]\) as the power spectral density (PSD) of the quantization noise \({q^{n}_{p}}[m]\). We observe that (8) does not depend on θ and τ. Therefore, the required fronthaul rate \(R_p\) is given by the right-hand side of (8). We will therefore impose the fronthaul capacity constraint as
$$\begin{array}{*{20}l} I(\mathbf{y}_{p};\hat{\mathbf{y}}_{p})\leq N_{p}C, \end{array} $$
(9)

where \(I(\mathbf {y}_{p};\hat {\mathbf {y}}_{p})\) is given in (8).
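For concreteness, the rate expression (8) and the constraint (9) can be evaluated directly once the quantization-noise PSDs are fixed. In the sketch below, the flat spectra \(|G^n[k]|^2\) and \(S_{Q^n_p}[k]\) are simplifying illustrative assumptions, not the optimized solution of Section 3.

```python
import numpy as np

# Illustrative parameters
Np, F = 16, 2
T = 1.0; Ts = T / F
A, Ex_p, N0, C = 0.7, 1.0, 0.1, 3.0

G2 = np.ones((F, Np))            # |G^n[k]|^2, flat for illustration
S_Q = np.ones((F, Np))           # quantization-noise PSD S_{Q_p^n}[k], flat

# Fronthaul rate of eq. (8), in bits per training block
rate_bits = np.sum(np.log2(1 + (Ex_p * A**2 * G2 + N0 / Ts) / S_Q))

# Fronthaul capacity constraint (9): at most Np*C bits for the training field
feasible = rate_bits <= Np * C
```

Raising `S_Q` (coarser quantization) lowers `rate_bits`, which is the basic trade-off the optimization in Section 3.3 exploits.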

The compressed data signal during the data field, similar to (5), can be written as
$$\begin{array}{*{20}l} \hat{y}_{d}[m] &= y_{d}[m]+ q_{d}[m],\ \ m=0,...,N_{d}-1, \end{array} $$
(10)
where \(q_d[m]\) indicates the quantization noise, which is assumed to be a white Gaussian process with zero mean and variance \(\sigma ^{2}_{q_{d}}\). We observe that an optimized correlation for the quantization noise in the data phase could also be designed, similarly to [10], but we leave this aspect to future work in order to concentrate on training for synchronization. Furthermore, following the discussion above, the fronthaul rate required to convey the compressed data signal \(\hat {\mathbf {y}}_{d}=[\hat {y}_{d}[0],...,\hat {y}_{d}[N_{d}-1]]\) from the RRH to the CU is given by \(R_{d}=\sup _{\tau,\theta }I(\mathbf {y}_{d};\hat {\mathbf {y}}_{d})\), with the vector \(\mathbf{y}_d\) being defined similarly, where
$$\begin{array}{*{20}l} I(\mathbf{y}_{d};\hat{\mathbf{y}}_{d}) &= \log_{2}\frac{|\mathbf{K}_{\mathbf{y}_{d}}+\mathbf{K}_{\mathbf{q}_{d}}|}{|\mathbf{K}_{\mathbf{q}_{d}}|}, \end{array} $$
(11a)
$$\begin{array}{*{20}l} &= \sum_{i=0}^{N_{d}-1}\log_{2} \left(1+ \frac{E_{x_{d}}A^{2} |G[i]|^{2}+ N_{0}}{\sigma^{2}_{q_{d}}} \right), \end{array} $$
(11b)
where (11b) follows from Szegő's theorem as in (8), and the fronthaul capacity constraint for the data phase is given as
$$\begin{array}{@{}rcl@{}} I(\mathbf{y}_{d};\hat{\mathbf{y}}_{d})\leq N_{d}C. \end{array} $$
(12)

3 Analysis and optimization

In this section, we analyze the performance of the C-RAN system introduced above by accounting for the impact of imperfect synchronization, with the aim of enabling the optimization of fronthaul quantization. We will first discuss the performance of time and phase synchronization at the CU in Section 3.1. Then, we study the impact of synchronization errors on the SNR in Section 3.2. Finally, we investigate the optimization of fronthaul compression in Section 3.3.

3.1 CRBs for the time and phase offset estimation

The CU estimates the time and phase offsets based on the compressed pilot signals \(\hat {\mathbf {y}}_{p}\), producing the estimates \(\hat {\tau }(\hat {\mathbf {y}}_{p},\mathbf {x}_{p})\) and \(\hat {\theta }(\hat {\mathbf {y}}_{p},\mathbf {x}_{p})\). The mean squared errors (MSEs) of these estimates can be bounded by the corresponding CRBs, i.e., by the inequalities \(\mathbb {E}_{\hat {\mathbf {{y}}}_{p},\mathbf {x}_{p}}[(\hat {\tau }(\hat {\mathbf {y}}_{p},\mathbf {x}_{p}) - \tau)^{2}]\geq \textrm {CRB}_{\tau }\) and \(\mathbb {E}_{\hat {\mathbf {{y}}}_{p},\mathbf {x}_{p}}[(\hat {\theta }(\hat {\mathbf {y}}_{p},\mathbf {x}_{p}) - \theta)^{2}]\geq \textrm {CRB}_{\theta }\). Note that the mentioned estimates depend on both the training sequence x p and the compressed received signal \(\hat {\mathbf {y}}_{p}\), and that the squared error is averaged over the joint distribution of x p and \(\hat {\mathbf {y}}_{p}\). To evaluate the CRBs, we assume that the relationship (5)-(6) is satisfied for the given vector quantizer. This is done for the sake of tractability and is motivated by the covering lemma and by the results in [20] as discussed in the previous section. The CRBs are given, respectively, as
$$\begin{array}{*{20}l} \textrm{CRB}_{\tau}&= \left(\left(\frac{2\pi}{N_{p}T_{s}}\right)^{2} \sum_{n=0}^{F-1}\sum_{k=0}^{N_{p}-1}\frac{E_{x_{p}} A^{2}k^{2}|G^{n}[k]|^{2}}{\frac{N_{0}}{T_{s}}+S_{{Q^{n}_{p}}}[k]}\right)^{-1}, \end{array} $$
(13)
and
$$\begin{array}{*{20}l} \textrm{CRB}_{\theta}&= \left(\sum_{n=0}^{F-1}\sum_{k=0}^{N_{p}-1}\frac{E_{x_{p}}|A|^{2}|G^{n}[k]|^{2}}{\frac{N_{0}}{T_{s}}+S_{{Q^{n}_{p}}}[k]}\right)^{-1}. \end{array} $$
(14)

The derivation of (13)–(14) is given in the Appendix. Note that the bounds (13) and (14) do not depend on the phase θ and delay τ.
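The bounds (13) and (14) are straightforward to evaluate numerically. The sketch below uses flat illustrative spectra (an assumption made only for this example) and verifies that coarser quantization, i.e., a larger quantization-noise PSD, loosens both bounds.

```python
import numpy as np

def crbs(S_Q, Np=16, F=2, T=1.0, A=0.7, Ex_p=1.0, N0=0.1):
    """Evaluate CRB_tau (13) and CRB_theta (14) for PSDs S_Q of shape (F, Np).

    |G^n[k]|^2 is taken flat (= 1) purely for illustration."""
    Ts = T / F
    k = np.arange(Np)
    G2 = np.ones((F, Np))
    den = N0 / Ts + S_Q
    # Sums over n = 0..F-1 and k = 0..Np-1, as in (13)-(14)
    fisher_tau = (2 * np.pi / (Np * Ts))**2 * np.sum(Ex_p * A**2 * k**2 * G2 / den)
    fisher_theta = np.sum(Ex_p * A**2 * G2 / den)
    return 1.0 / fisher_tau, 1.0 / fisher_theta

crb_tau_fine, crb_theta_fine = crbs(np.full((2, 16), 0.5))      # finer quantization
crb_tau_coarse, crb_theta_coarse = crbs(np.full((2, 16), 5.0))  # coarser quantization
```

Note how the \(k^2\) weighting in (13) makes high-frequency components dominate the timing bound, whereas all frequencies enter (14) equally; this asymmetry drives the optimized PSD shapes reported in Section 4.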

3.2 Impact of the synchronization error on the SNR

Having estimated the time and phase offsets \(\hat {\tau }\) and \(\hat {\theta }\), the CU compensates for these offsets in the received signal, obtaining the discrete-time signal
$$\begin{array}{*{20}l}{} y_{d}[m] &= Ae^{\,j\Delta \theta}\sum_{l=-L+1}^{N_{d}-1} x_{d}[l]g((m-l)T+\Delta\tau) +z_{d}[m], \\ &~~~~~m=0,...,N_{d}-1, \end{array} $$
(15)

where \(\Delta \tau =\hat {\tau }(\hat {\mathbf {y}},\mathbf {x}) -\tau \) and \(\Delta \theta =\hat {\theta }(\hat {\mathbf {y}},\mathbf {x})-\theta \) are the synchronization errors for timing and phase, respectively. We note that compensation of the time offset requires interpolation, which is possible given the lack of spectral aliasing. Moreover, under the mentioned assumption on the zero excess bandwidth waveform g(t), the statistics of the (white Gaussian) noise terms are unchanged by interpolation.

To account for the impact of the synchronization errors Δτ and Δθ, we follow the approach in [22], whereby the sinc waveform g(t) is approximated by retaining only three sidelobes on either side of the main lobe. Under this approximation, we can express (15) as
$$\begin{array}{*{20}l} y_{d}[m] = Ax_{d}[m]g(\Delta\tau)+z_{s}[m]+ z_{{isi}}[m] + z_{d}[m], \end{array} $$
(16)
where the terms in (16) are detailed below. First, the term \(z_s[m]=Ax_d[m]g(\Delta\tau)(e^{j\Delta\theta}-1)\) indicates the additional noise caused by the estimation error Δθ of the phase offset. The term \(z_{isi}[m]\) instead accounts for the inter-symbol interference caused by the time synchronization error and is given as
$$\begin{array}{*{20}l} z_{{isi}}[\!m] = Ae^{\,j\Delta \theta}\sum_{l=m-3, l\neq m }^{l=m+3}x_{d}[\!l]g((l-m)T+\Delta\tau). \end{array} $$
(17)
In order to evaluate the power of the noise terms \(z_s[m]\) and \(z_{isi}[m]\), we make the simplifying assumption that the estimation errors Δτ and Δθ are uniformly distributed on \(\left [-\frac {\Delta \tau _{\text {max}}}{2},\frac {\Delta \tau _{\text {max}}}{2}\right ]\) and \(\left [-\frac {\Delta \theta _{\text {max}}}{2},\frac {\Delta \theta _{\text {max}}}{2}\right ]\), respectively. We observe that this approximation is expected to be increasingly accurate in the regime of small synchronization errors. Moreover, we relate \(\Delta\tau_{\text{max}}\) and \(\Delta\theta_{\text{max}}\) to \(\textrm{CRB}_{\tau}\) (13) and \(\textrm{CRB}_{\theta}\) (14), respectively, by imposing the equalities \(\mathbb {E}[\!\Delta \tau ^{2}] = \textrm {CRB}_{\tau }\) and \(\mathbb {E}[\!\Delta \theta ^{2}] = \textrm {CRB}_{\theta }\), which yield \(\Delta \tau _{\text {max}}= \sqrt {12\textrm {CRB}_{\tau }}\) and \(\Delta \theta _{\text {max}}= \sqrt {12\textrm {CRB}_{\theta }}\). Finally, we adopt the piecewise linear approximation of the raised cosine pulse g(t) proposed in [22], whereby the pulse g(t) can be written as
$$\begin{array}{*{20}l} g((l-m)T+\Delta\tau) \approx \ &a_{l} \times \frac{\Delta\tau}{T}, \end{array} $$
(18a)
$$\begin{array}{*{20}l} \text{where} \ \ &a_{l}=a_{l}^{+} \ \ \text{if} \ \Delta\tau >0 \end{array} $$
(18b)
$$\begin{array}{*{20}l} \text{and} \ \ &a_{l}=a_{l}^{-} \ \ \text{if} \ \Delta\tau <0, \end{array} $$
(18c)
for lm and
$$\begin{array}{*{20}l} g(\Delta\tau)\approx \left(1-\eta\frac{|\Delta\tau|}{T}\right), \end{array} $$
(19)
where we have defined \(\eta =\frac {2T}{\Delta \tau _{\text {max}}}(1-g(\Delta \tau _{\text {max}}/2T))\) and the values of \(a_{l}^{+}\) and \(a_{l}^{-}\) are listed in Table 1, in which we have \(c_{1}= \frac {2T}{\Delta \tau _{\text {max}}}g(1-\frac {\Delta \tau _{\text {max}}}{2T}), c_{2}=\frac {2T}{\Delta \tau _{\text {max}}}|g(1+ \frac {\Delta \tau _{\text {max}}}{2T})|, c_{3}=\frac {2T}{\Delta \tau _{\text {max}}}|g(2-\frac {\Delta \tau _{\text {max}}}{2T})|, c_{4}=\frac {2T}{\Delta \tau _{\text {max}}}g(2+\frac {\Delta \tau _{\text {max}}}{2T})\), and \(c_{5}=\frac {2T}{\Delta \tau _{\text {max}}}g(3-\frac {\Delta \tau _{\text {max}}}{2T})\) [22].
Table 1

Coefficients in the piecewise linear approximation of the raised cosine pulse

| l | m−3 | m−2 | m−1 | m+1 | m+2 | m+3 |
| \(a^{+}_{l}\) | 0 | \(c_{4}\) | \(c_{2}\) | \(c_{1}\) | \(c_{3}\) | \(c_{5}\) |
| \(a^{-}_{l}\) | \(c_{5}\) | \(c_{3}\) | \(c_{1}\) | \(c_{2}\) | \(c_{4}\) | 0 |
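The quantities η and \(c_1,\dots,c_5\) entering (18)–(19) and Table 1 can be computed directly for the sinc pulse. The sketch below treats g as a function of t/T and uses an illustrative value of \(\Delta\tau_{\text{max}}\), so the numbers are examples rather than results from the paper.

```python
import numpy as np

g = np.sinc            # zero-excess-bandwidth pulse, as a function of t/T
d = 0.1                # d = dtau_max / (2T); illustrative value

# Slope of the main lobe, eq. (19): g(dtau) ~ 1 - eta |dtau|/T
eta = (1 - g(d)) / d

# Sidelobe coefficients c_1,...,c_5 entering Table 1
c1 = g(1 - d) / d
c2 = abs(g(1 + d)) / d
c3 = abs(g(2 - d)) / d
c4 = g(2 + d) / d
c5 = g(3 - d) / d

# a_l^+ for l = m-3,...,m+3 (l != m), as listed in Table 1; the a_l^- row
# contains the same values in reverse order, so a_bar = sum |a_l|^2 is the same.
a_plus = [0.0, c4, c2, c1, c3, c5]
a_bar = sum(a**2 for a in a_plus)
```

The constant `a_bar` is the quantity \(\bar{a}\) that appears in the ISI power approximation (22) below.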

To evaluate the effect of the synchronization error on the performance, we now compute an effective signal-to-noise ratio (SNR) that accounts for the estimation errors of the time and phase offsets. Based on the discussion above, the following approximations are derived in the Appendix. The power of the desired signal \(s_d[m]=Ax_d[m]g(\Delta\tau)\) in (16) is approximated as
$$\begin{array}{*{20}l} \mathbb{E}_{\Delta\tau,x_{d}}[|s_{d}[m]|^{2}] &\approx A^{2}E_{x_{d}} \left(1-\frac{\eta}{2T}\sqrt{12 \textrm{CRB}_{\tau}} \right). \end{array} $$
(20)
The power of \(z_s[m]\) in (16) is similarly approximated as
$$\begin{array}{*{20}l}{} \mathbb{E}_{\Delta\tau, \Delta\theta, x_{d}}[|z_{s}[m]|^{2}] &\approx A^{2}E_{x_{d}} \textrm{CRB}_{\theta} \left(1-\frac{\eta}{2T}\sqrt{12 \textrm{CRB}_{\tau}} \right), \end{array} $$
(21)
and the power of \(z_{isi}[m]\) in (17) as
$$\begin{array}{*{20}l} \mathbb{E}_{\Delta\tau,\bar{\mathbf{x}}_{d}}[|z_{{isi}}[m]|^{2}] &\approx \frac{A^{2}E_{x_{d}}\bar{a}}{T^{2}} \textrm{CRB}_{\tau}, \end{array} $$
(22)

where \(\bar {a}=\Sigma ^{l=m+3}_{l=m-3,l\neq m}|a_{l}|^{2} \) and \(\bar {\mathbf {x}}_{d}=[x_{d}[m-3]\ x_{d}[m-2] x_{d}[m-1]\ x_{d}[m+1]\ x_{d}[m+2]\ x_{d}[m+3]]^{T}\).

Using (20), (21), and (22), we obtain the approximate effective SNR expression
$$\begin{array}{*{20}l} \textrm{SNR}_{\text{eff}} &\approx \frac{A^{2}E_{x_{d}}\,f_{\tau} }{ A^{2}E_{x_{d}} \textrm{CRB}_{\theta} \,f_{\tau} + \frac{A^{2}E_{x_{d}}\bar{a}}{ T^{2}} \textrm{CRB}_{\tau} + \sigma^{2}_{z_{d}}+ \sigma^{2}_{q_{d}}} \end{array} $$
(23a)
$$\begin{array}{*{20}l} &\approx \frac{A^{2}E_{x_{d}}}{ A^{2}E_{x_{d}} \textrm{CRB}_{\theta} + \frac{A^{2}E_{x_{d}}\bar{a}}{ T^{2}} \textrm{CRB}_{\tau} + \sigma^{2}_{z_{d}}+ \sigma^{2}_{q_{d}}}, \end{array} $$
(23b)

where \(f_{\tau }= 1-\frac {\eta }{2T}\sqrt {12 \textrm {CRB}_{\tau }} \), and, for analytical tractability, we have made the further approximation \(f_\tau \approx 1\). We observe that the expression (23b) captures the effect of time and phase errors by means of additional noise terms in the denominator of the effective SNR. We remark that the approximations made in deriving (23b) will be validated in the numerical results of Section 4 by evaluating the performance of the proposed fronthaul compression optimization schemes, which are based on (23b) and discussed next.
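A quick numerical reading of (23b): the CRB terms act as extra noise powers, so the effective SNR always falls below the SNR achieved under perfect synchronization. All numbers below are illustrative placeholders, not values from the paper.

```python
# Illustrative values; a_bar is the constant from (22)
A, Ex_d, T = 0.7, 1.0, 1.0
a_bar = 1.3
crb_tau, crb_theta = 1e-3, 1e-2        # example CRB values
sigma2_z, sigma2_q = 0.1, 0.05         # channel- and quantization-noise powers

# Effective SNR of eq. (23b): synchronization errors appear as added noise
snr_eff = (A**2 * Ex_d) / (A**2 * Ex_d * crb_theta
                           + (A**2 * Ex_d * a_bar / T**2) * crb_tau
                           + sigma2_z + sigma2_q)

# With perfect synchronization (both CRBs -> 0), the usual SNR is recovered
snr_perfect = (A**2 * Ex_d) / (sigma2_z + sigma2_q)
```

Since both CRBs shrink when the training field is quantized more finely, while \(\sigma^2_{q_d}\) shrinks when the data field is quantized more finely, (23b) makes explicit the training/data fronthaul trade-off optimized next.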

3.3 Optimization of fronthaul compression

In the proposed design, we wish to maximize the effective SNR (23b) under the constraints (9) and (12) on the fronthaul capacity over the statistics of the quantization noises, namely over the PSDs \(S_{{Q^{n}_{p}}}[k]\) for the training field and over the variance \(\sigma ^{2}_{q_{d}}\) of the quantization noise for the data field. Accordingly, we have the following optimization problem:
$$\begin{array}{*{20}l}{} \underset{\{S_{{Q^{n}_{p}}}[k]\},\sigma^{2}_{q_{d}}}{\text{maximize}} &\textrm{SNR}_{\text{eff}} \end{array} $$
(24a)
$$\begin{array}{*{20}l} \textrm{s.t.} &\sum_{n=0}^{F-1}\sum_{k=0}^{N_{p}-1} \log_{2}\!\left(\!1+ \frac{E_{x_{p}}A^{2} |G^{n}[k]|^{2}+\frac{N_{0}}{T_{s}}}{S_{{Q^{n}_{p}}}[k]}\!\right)\!\leq N_{p} C, \end{array} $$
(24b)
$$\begin{array}{*{20}l} &\sum_{i=0}^{(N-N_{p})-1}\log_{2}\! \left(\! 1+\! \frac{E_{x_{d}}A^{2} |G[i]|^{2}+ N_{0}}{\sigma^{2}_{q_{d}}} \!\right)\! \leq\! (N-N_{p}) C, \end{array} $$
(24c)
$$\begin{array}{*{20}l} &S_{{Q^{n}_{p}}}[k]\geq 0,\ \ \ n=0,...,F-1,\ k=0,...,N_{p}-1, \end{array} $$
(24d)
$$\begin{array}{*{20}l} &\sigma^{2}_{q_{d}} \geq 0, N_{p} \geq 0, \end{array} $$
(24e)

where constraints (24b) and (24c) correspond to (9) and (12), respectively.

Towards solving problem (24), we first observe that the variance \(\sigma ^{2}_{q_{d}}\) can be obtained, without loss of optimality, by imposing equality in constraint (24c). This is because \(\textrm{SNR}_{\text{eff}}\) is monotonically decreasing in \(\sigma ^{2}_{q_{d}}\), while the left-hand side of (24c) is also monotonically decreasing in \(\sigma ^{2}_{q_{d}}\), so that the smallest feasible value of \(\sigma ^{2}_{q_{d}}\) is attained at equality. We then have the following equivalent problem
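Because the left-hand side of (24c) is strictly decreasing in \(\sigma^2_{q_d}\), the equality can be solved by a one-dimensional search. The sketch below uses geometric bisection with an illustrative flat spectrum and checks the result against the closed form \(\sigma^2 = (E_{x_d}A^2+N_0)/(2^C-1)\), which holds when \(|G[i]|^2\) is flat.

```python
import numpy as np

Nd, C = 84, 3.0                  # data length and fronthaul capacity (illustrative)
Ex_d, A, N0 = 1.0, 0.7, 0.1
G2 = np.ones(Nd)                 # |G[i]|^2, flat for illustration
c = Ex_d * A**2 * G2 + N0

def rate(s2):
    """Left-hand side of (24c); strictly decreasing in s2."""
    return np.sum(np.log2(1 + c / s2))

# Geometric bisection for the sigma^2_{q_d} achieving equality in (24c)
lo, hi = 1e-9, 1e9
for _ in range(200):
    mid = np.sqrt(lo * hi)       # bisect in log-scale over many decades
    if rate(mid) > Nd * C:       # rate too high -> quantize more coarsely
        lo = mid
    else:
        hi = mid
s2_qd = np.sqrt(lo * hi)
```

The same monotone search applies unchanged to non-flat spectra, for which no closed form is available.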
$$\begin{array}{*{20}l}{} \underset{S_{{Q^{n}_{p}}}[k]}{\text{minimize}} \ \ \ & A^{2}E_{x_{d}} \textrm{CRB}_{\theta} + \frac{A^{2}E_{x_{d}}\bar{a}}{ T^{2}} \textrm{CRB}_{\tau} \end{array} $$
(25a)
$$\begin{array}{*{20}l} \textrm{s.t.} & \sum_{n=0}^{F-1}\sum_{k=0}^{N_{p}-1} \log_{2}\! \left(\!1+ \frac{E_{x_{p}}A^{2} |G^{n}[k]|^{2}+\frac{N_{0}}{T_{s}}}{S_{{Q^{n}_{p}}}[\!k]}\! \right)\leq N_{p} C, \end{array} $$
(25b)
$$\begin{array}{*{20}l} &S_{Q^{n}}[k]\geq 0, \ \ n=0,...,F-1,\ k=0,...,N_{p}-1, \end{array} $$
(25c)
where the objective function (25a) can be rewritten, using (13) and (14), as
$$\begin{array}{*{20}l} &\frac{A^{2}E_{x_{d}} }{\sum_{n=0}^{F-1}\sum_{k=0}^{N_{p}-1}\frac{E_{x_{p}}A^{2}|G^{n}[k]|^{2}}{\frac{N_{0}}{T_{s}}+S_{{Q^{n}_{p}}}[k]}}\\ &+ \frac{A^{2}E_{x_{d}}\bar{a}/T^{2}}{\left(\frac{2\pi}{N_{p}T_{s}}\right)^{2} \sum_{n=0}^{F-1}\sum_{k=0}^{N_{p}-1}\frac{E_{x_{p}}A^{2}k^{2}|G^{n}[k]|^{2}}{\frac{N_{0}}{T_{s}}+S_{{Q^{n}_{p}}}[k]}}. \end{array} $$
(26)
To tackle the optimization problem (25), we first define the auxiliary variables \(u_{n,k}\triangleq (S_{Q^{n}}[k])^{-1}, a_{n,k}\triangleq \left (2\pi /(N_{p}T_{s})\right)^{2}k^{2}E_{x_{p}}|A|^{2}|G^{n}[k]|^{2}\), and \(b_{n,k}\triangleq E_{x_{p}}|A|^{2}|G^{n}[k]|^{2}\), and then use the Charnes-Cooper transformation [14], i.e., we set v n,k =(1+(N 0/T s )u n,k )−1, yielding the equivalent objective function
$$\begin{array}{*{20}l} &\frac{A^{2}E_{x_{d}}}{\sum_{n=0}^{F-1}\sum_{k=0}^{N_{p}-1} \frac{b_{n,k}}{N_{0}/T_{s}}(1-v_{n,k})}\\ &+\frac{A^{2}E_{x_{d}}\bar{a}/T^{2}}{\sum_{n=0}^{F-1}\sum_{k=0}^{N_{p}-1} \frac{a_{n,k}}{N_{0}/T_{s}}(1-v_{n,k})}. \end{array} $$
(27)
The objective function (27) is convex with respect to the variables \(v_{n,k}\), since the denominator of each term is an affine function of \(v_{n,k}\) and the function 1/g(x) is convex whenever g(x) is concave and positive. However, the constraint (25b) is still not convex in the variables \(v_{n,k}\) for \(n=0,...,F-1\), \(k=0,...,N_p-1\). Nevertheless, it can be expressed as the sum of a concave and a convex function, i.e.,
$$\begin{array}{*{20}l} &\sum_{n=0}^{F-1}\sum_{k=0}^{N_{p}-1} \left(\log_{2} \left(-b_{n,k}v_{n,k}+b_{n,k}+N_{0}/T_{s}\right)-\log_{2}\right.\\ &\left.\left((N_{0}/T_{s})v_{n,k}\right)\right) \leq N_{p}C. \end{array} $$
(29)
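As a sanity check on the change of variables, each per-frequency fronthaul rate term in (25b) coincides with its transformed counterpart in (29). The numpy sketch below verifies this identity for arbitrary placeholder values of \(b_{n,k}\), \(N_{0}/T_{s}\), and \(S_{Q^{n}}[k]\) (none taken from the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
N0_Ts = 0.5                      # channel noise level N_0/T_s (placeholder)
b = rng.uniform(0.1, 2.0, 8)     # b_{n,k} = E_{x_p} |A|^2 |G^n[k]|^2 (placeholder)
S = rng.uniform(0.2, 3.0, 8)     # quantization-noise PSD S_{Q^n}[k] (placeholder)

# Per-frequency rate term in the original constraint (25b)
lhs = np.log2(1 + (b + N0_Ts) / S)

# Charnes-Cooper change of variables: u = 1/S, v = 1/(1 + (N_0/T_s) u)
u = 1 / S
v = 1 / (1 + N0_Ts * u)

# Corresponding term in the transformed constraint (29)
rhs = np.log2(-b * v + b + N0_Ts) - np.log2(N0_Ts * v)
```

Both expressions evaluate to the same rate, confirming that (29) is an exact rewriting of (25b) rather than an approximation.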
Therefore, the Difference of Convex (DC) approach [15] can be leveraged to obtain an iterative optimization algorithm. This is done by linearizing the concave part of (29) at the current iterate \(v_{n,k}^{(i)}\), where i is the index of the current iteration, obtaining the locally tight convex upper bound
$$\begin{array}{*{20}l} \log_{2} (-b_{n,k}v_{n,k}+b_{n,k}+N_{0}/T_{s})\leq e_{n,k}^{(i)}v_{n,k}+f_{n,k}^{(i)}, \end{array} $$
(30)

where \(e_{n,k}^{(i)}=-b_{n,k}/(\ln (2)(\frac {N_{0}}{T_{s}}+b_{n,k}-b_{n,k}v_{n,k}^{(i)})), f_{n,k}^{(i)}=\log _{2}(-b_{n,k}v_{n,k}^{(i)}+b_{n,k}+\frac {N_{0}}{T_{s}})-e_{n,k}^{(i)}v_{n,k}^{(i)}\).

The DC algorithm performs successive optimization of the convex problem obtained by substituting the right-hand side of (30) for the concave part in (29) until convergence. Given the known properties of the DC algorithm [15], the proposed approach, summarized in Algorithm 1, provides a feasible solution at every iteration and converges to a local minimum of problem (25). Moreover, since it only requires the solution of convex problems, the algorithm has a polynomial complexity per iteration.
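The convergence argument rests on (30) being a tangent bound: tight at the current iterate and a global upper bound elsewhere by concavity of the log term. A minimal sketch, with placeholder scalar values for \(b_{n,k}\) and \(N_{0}/T_{s}\), illustrates both properties:

```python
import numpy as np

b, N0_Ts = 1.5, 0.5          # placeholder b_{n,k} and N_0/T_s
v_i = 0.6                    # current iterate v_{n,k}^{(i)}

def concave_part(v):
    # Concave term of the constraint (29)
    return np.log2(-b * v + b + N0_Ts)

# Tangent coefficients e^{(i)}, f^{(i)} from (30)
e = -b / (np.log(2) * (N0_Ts + b - b * v_i))
f = concave_part(v_i) - e * v_i

v = np.linspace(0.01, 0.99, 99)   # feasible range 0 < v < 1
gap = (e * v + f) - concave_part(v)   # nonnegative everywhere, zero at v_i
```

Since the linearized constraint is tighter than (29), every iterate of Algorithm 1 remains feasible for the original problem, which is what guarantees a feasible solution at every iteration.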

4 Numerical results

In this section, we present numerical results to give insight into optimal fronthaul compression for synchronization and to validate the analysis presented in the previous sections. Throughout, we set A=0.7, and the SNRs during the training and data phases are defined as \(\textrm {SNR}_{p} = E_{x_{p}}/(N_{0}/T_{s})\) and \(\textrm {SNR}_{d} = E_{x_{d}}/(N_{0}/T)\), respectively.

Figure 2 shows the inverse of the PSD of the quantization noise \(1/S_{{Q^{n}_{p}}}[k]\) obtained from Algorithm 1 for various values of \(\textrm{SNR}_{p}\) with C=3 bits/sample, N=100, \(N_{p}=16\), and F=2. Note that, for convenience of illustration, the frequency axis ranges from \(-N_{p}/2\) to \(N_{p}/2-1\) rather than over the interval \([0,N_{p}-1]\). Moreover, we emphasize that \(1/S_{{Q^{n}_{p}}}[k]\) is a measure of the accuracy of quantization at frequency k, with \(k=-N_{p}/2,\ldots,N_{p}/2-1\), so that a larger \(1/S_{{Q^{n}_{p}}}[k]\) implies a more refined quantization. We first observe that the optimized solution prescribes a more accurate quantization at higher frequencies, since these convey more information on the time delay, as per the CRB (13), while all frequencies contribute equally to the estimate of the phase offset, as per (14). Moreover, as \(\textrm{SNR}_{p}\) increases, lower frequencies tend to be neglected by the quantizer in the sense that, for such frequencies, we have \(1/S_{{Q^{n}_{p}}}[k]=0\), and hence the signals on these frequencies are not compressed and not transmitted to the CU.
Fig. 2

Inverse of the PSD of the quantization noise obtained from Algorithm 1 versus the frequency index k: C=3 bits/sample, F=2, A=0.7, N=100, and \(N_{p}=16\)

In order to validate the advantage of the proposed design, we now consider the synchronization performance under a conventional least-square joint phase and timing estimator operating on the compressed signal \(\hat {Y}^{n}[k], n=0,\ldots,F-1, k=0,\ldots,N_{p}-1\). The estimator is given as
$$\begin{array}{*{20}l} (\hat{\theta}, \hat{\tau}) = \arg \min_{\tilde{\theta}, \tilde{\tau}} \Phi(\tilde{\theta}, \tilde{\tau}), \end{array} $$
(31)

with \(\Phi (\tilde {\theta }, \tilde {\tau })= \sum _{n,k}|{r^{n}_{k}}-{r^{n}_{k}}(\tilde {\theta }, \tilde {\tau })|^{2}\), where \({r^{n}_{k}}=\arg (\hat {Y}^{n}[k]X^{*}[k])/{2\pi }\) and \({r^{n}_{k}}(\tilde {\theta }, \tilde {\tau })=\tilde {\theta }-k/N_{p}(n+\tilde {\tau })\). By applying the estimator (31), we evaluate the performance of the optimized compression scheme in terms of the MSEs of the time and phase offset estimates, as compared to a white-PSD compression scheme, whose quantization-noise PSD is constant across all frequencies. The white-PSD compression scheme is considered as a reference since it does not attempt to optimize quantization with the aim of enhancing synchronization.
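As an illustration of (31), the minimization can be carried out by a simple grid search over candidate offset pairs. The sketch below is a hypothetical noiseless instance: the true offsets, grid ranges, and resolutions are placeholder choices, not the paper's simulation setup.

```python
import numpy as np

Np, F = 16, 2
theta_true, tau_true = 0.10, 0.30        # placeholder phase and timing offsets

n, k = np.meshgrid(np.arange(F), np.arange(Np), indexing="ij")

def model(theta, tau):
    # r^n_k(theta, tau) = theta - (k / Np) * (n + tau), as in (31)
    return theta - k / Np * (n + tau)

r = model(theta_true, tau_true)          # noiseless observations r^n_k

# Grid search implementing the least-squares estimator (31)
thetas = np.linspace(-0.5, 0.5, 101)
taus = np.linspace(0.0, 1.0, 101)
Phi = np.array([[np.sum((r - model(th, ta)) ** 2) for ta in taus]
                for th in thetas])
i, j = np.unravel_index(np.argmin(Phi), Phi.shape)
theta_hat, tau_hat = thetas[i], taus[j]
```

In the noiseless case, the grid search recovers the true offsets exactly (up to grid resolution), since the phase model is identifiable: the k=0 terms pin down the phase offset, and the slope across k pins down the timing offset.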

Figure 3a, b illustrates the MSEs of the timing and phase offset estimates, respectively, as a function of \(\textrm{SNR}_{p}\) for C=1 bit/sample and C=3 bits/sample with F=1, A=0.7, N=100, and \(N_{p}=16\). In addition, we plot the MSEs of the timing and phase offset estimates for F=2 in Fig. 3c, d, respectively, under the same parameters. We observe that the proposed scheme significantly outperforms the conventional white-PSD strategy and that the gain of the proposed scheme is more pronounced for larger SNR values. This is because, as the SNR grows, the impact of the quantization noise becomes more relevant compared to the channel noise. Furthermore, a larger oversampling factor F yields an improved performance only for the proposed optimization scheme and not for the conventional white-PSD scheme. This is because, in the latter case, the performance benefits of a larger number of observations are offset by the increased fronthaul overhead, which leads to a more pronounced quantization noise.
Fig. 3

MSE for joint phase and timing estimation (31) versus \(\textrm{SNR}_{p}\): A=0.7, N=100, and \(N_{p}=16\). a F=1, MSE of timing offset. b F=1, MSE of phase offset. c F=2, MSE of timing offset. d F=2, MSE of phase offset

Adopting the same estimator for the time and phase offsets, the system performance in terms of uncoded SER during the data phase is shown in Figs. 4 and 5 for BPSK and QPSK modulation, respectively. We set equal SNRs for the training and data fields, i.e., \(\textrm{SNR}=\textrm{SNR}_{p}=\textrm{SNR}_{d}\), with F=2, A=0.7, N=100, and \(N_{p}=16\). Simulation results with perfect synchronization are also presented for reference. We note that, consistently with the results in Fig. 3, the proposed method outperforms the conventional white-PSD scheme more significantly as the SNR increases and as the fronthaul capacity C decreases. For instance, it is seen in Fig. 5 that the proposed approach has a gain of about 0.5 dB for C=5 bits/sample and of about 2 dB for C=3 bits/sample.
Fig. 4

SER with uncoded BPSK transmission versus SNR with joint phase and timing estimation (31): F=2, A=0.7, N=100, and \(N_{p}=16\)

Fig. 5

SER with uncoded QPSK transmission versus SNR with joint phase and timing estimation (31): F=2, A=0.7, N=100, and \(N_{p}=16\)

Finally, we elaborate on the performance of actual quantization by adopting a standard scalar uniform quantizer instead of the additive quantization model considered so far. In particular, we choose the step size Δ[k] of the quantizer used for frequency k based on the optimal PSD \(S_{q}[k]\) obtained from Algorithm 1 via the relationship \(S_{q}[k]=\frac {|\Delta [k]|^{2}}{12}\). This relationship is justified by the fact that, at high resolution, the quantization noise is approximately uniformly distributed. As a reference, we also consider the performance of a uniform quantizer in which the step size is the same for all frequencies k, i.e., Δ[k]=Δ, with the same dynamic range as for the optimized quantizer. Figure 6 presents the MSEs of the timing and phase offset estimates versus \(\textrm{SNR}_{p}\) with F=2, C=3, A=0.7, N=100, and \(N_{p}=16\). We observe that the proposed scheme outperforms the conventional uniform quantizer, with a gain of about 2 dB in the high-SNR regime.
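The step-size rule \(S_{q}[k]=|\Delta[k]|^{2}/12\) can be checked empirically: at high resolution, a scalar uniform quantizer with step \(\Delta=\sqrt{12S_{q}}\) produces noise of power close to \(S_{q}\). The sketch below uses placeholder input statistics (a uniform input whose range is an integer multiple of the step, so the high-resolution model applies cleanly):

```python
import numpy as np

rng = np.random.default_rng(1)
S_q = 0.03                         # target quantization-noise PSD (placeholder)
step = np.sqrt(12 * S_q)           # step size from S_q = |Delta|^2 / 12

# Uniform input spanning an integer number of quantization cells (8.4 / 0.6 = 14)
x = rng.uniform(-4.2, 4.2, 200_000)
x_hat = step * np.round(x / step)  # scalar uniform (mid-tread) quantizer
noise_var = np.var(x - x_hat)      # empirical quantization-noise power
```

The empirical noise power matches the target PSD to within Monte Carlo accuracy, supporting the mapping used to configure the per-frequency quantizers.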
Fig. 6

MSE of joint phase and timing estimation (31) versus \(\textrm{SNR}_{p}\) in the presence of scalar fronthaul quantization: F=2, C=3, A=0.7, N=100, and \(N_{p}=16\)

5 Conclusions

This paper tackles the problem of optimal fronthaul compression with the aim of enhancing the effective SNR in the presence of time and phase synchronization errors at the CU. The proposed algorithm optimizes the PSD of the quantization noise at the RRHs by using the Charnes-Cooper transformation and the DC approach, and is shown to outperform the conventional solution that applies the same quantizer at all frequencies. Numerical results validate the analysis by evaluating the performance of the proposed design under practical synchronization algorithms and with scalar quantization. An interesting direction for future research is the consideration of frequency-selective channels and of frequency synchronization.

6 Endnote

1 The more general case with spectral aliasing could be handled by using the analysis in [17] and is left as an open problem.

7 Appendix

7.1 Proof of the CRBs for time and phase offset estimates

In this appendix, we provide a brief derivation for the bounds (13) and (14), which follow from standard arguments (see, e.g., [23]). For the bound (13), we first have the chain of inequalities
$$\begin{array}{*{20}l} \mathbb{E}_{\hat{\mathbf{{y}}}_{p},\mathbf{x}_{p}}[\triangle \tau(\hat{\mathbf{y}}_{p})^{2}] &=\mathbb{E}_{\mathbf{x}_{p}}[\mathbb{E}_{\hat{\mathbf{{y}}}_{p}|\mathbf{x}_{p}}[\triangle \tau(\hat{\mathbf{y}}_{p})^{2}]] \end{array} $$
(32)
$$\begin{array}{*{20}l} &\geq \mathbb{E}_{\mathbf{x}_{p}} \left[ \frac{1}{\mathbb{E}_{\hat{\mathbf{{y}}}_{p}|\mathbf{x}_{p}}\left[ \left(\frac{\partial \ln p(\hat{\mathbf{y}}_{p}|\mathbf{x}_{p},\tau) }{\partial \tau} \right)^{2}\right]}\right] \end{array} $$
(33)
$$\begin{array}{*{20}l} &\geq \frac{1}{\mathbb{E}_{\mathbf{x}_{p}}\left[ \mathbb{E}_{\hat{\mathbf{{y}}}_{p}|\mathbf{x}_{p}}\left[ \left(\frac{\partial \ln p(\hat{\mathbf{y}}_{p}|\mathbf{x}_{p},\tau) }{\partial \tau} \right)^{2} \right]\right]}, \end{array} $$
(34)
where (33) follows from the CRB and (34) is a consequence of Jensen’s inequality and of the convexity of the function \(\frac {1}{x}\) for x>0. The Fisher information for a vector of correlated Gaussian observations can be calculated using [24, Ch. 3.9], which can be directly evaluated as
$$\begin{array}{*{20}l}{} &\mathbb{E}_{\hat{\mathbf{{y}}}_{p}|\mathbf{x}_{p}}\!\left[ \!\left(\frac{\partial \ln p(\hat{\mathbf{y}}_{p}|\mathbf{x}_{p},\tau) }{\partial \tau} \right)^{2} \!\right] \!= \sum_{n=0}^{F-1} \mathcal{R}e \left[\frac{\partial {\mathbf{s}^{n}}^{\dag}}{\partial \tau}\mathbf{K}_{\tilde{\mathbf{z}}^{n}}^{-1}\frac{\partial \mathbf{s}^{n}}{\partial \tau}\right] \end{array} $$
(35)
$$\begin{array}{*{20}l} &=\left(\frac{2\pi}{N_{p}T_{s}}\right)^{2}\sum_{n=0}^{F-1}\sum_{k=0}^{N_{p}-1}\frac{A^{2} k^{2}|X[k]|^{2}|G^{n}[k]|^{2}}{\frac{N_{0}}{T_{s}}+S_{Q^{n}}[k]}. \end{array} $$
(36)

The summation in (35) follows from the fact that the vectors \(\hat {\mathbf {{y}}}^{n}_{p}\) in \(\hat {\mathbf {{y}}}_{p}=[\hat {\mathbf {{y}}}^{0}_{p}, \cdots, \hat {\mathbf {{y}}}_{p}^{F-1}]\) are independent across n given the pilot signal \(\mathbf{x}_{p}\). Furthermore, in (35), \(\mathbf {K}_{\tilde {\mathbf {z}}^{n}}\) is the covariance matrix of the effective noise \(\tilde {\mathbf {z}}^{n}=\mathbf {z}^{n}_{p}+\mathbf {q}^{n}_{p}\), and we have defined \(\mathbf{s}^{n}=[s^{n}[0],\ldots,s^{n}[N_{p}-1]]^{T}\) with \(s^{n}[m]=Ax_{p}[m]\circledast g^{n}_{\tau,\theta }[m]\). Finally, the equality (36) follows from Szegő's theorem. By inserting (36) into (34), and noting that \(\mathbb {E}[|X[k]|^{2}]=E_{x_{p}}\), the proof of (13) is concluded. The proof of (14) can be obtained using similar steps and is omitted.
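The Fisher information (36) is a direct double sum over frequencies and oversampling branches. As an illustration, the sketch below evaluates it and the resulting CRB for the timing offset in (13), using flat placeholder values for \(|G^{n}[k]|^{2}\) and \(S_{Q^{n}}[k]\) (not taken from the paper's setup):

```python
import numpy as np

F, Np = 2, 16
Ts, N0 = 1.0, 0.5
A, E_xp = 0.7, 1.0
G2 = np.ones((F, Np))            # |G^n[k]|^2 (placeholder: flat response)
S_Q = 0.1 * np.ones((F, Np))     # quantization-noise PSD (placeholder)

k = np.arange(Np)
# Fisher information for tau, eq. (36), with E[|X[k]|^2] = E_xp
J_tau = (2 * np.pi / (Np * Ts)) ** 2 * np.sum(
    E_xp * A**2 * k**2 * G2 / (N0 / Ts + S_Q)
)
CRB_tau = 1.0 / J_tau            # bound (13) on the MSE of the timing estimate
```

Note that the weight \(k^{2}\) makes high frequencies dominate \(J_{\tau}\), which is the analytical reason why Algorithm 1 quantizes them more finely, and that increasing the quantization-noise PSD raises the bound.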

7.2 Derivation of (20), (21), and (22)

We compute the powers of the desired signal \(s_{d}[m]\) in (20), of the interference term \(z_{s}[m]\) in (21), and of \(z_{isi}[m]\) in (22). The power of the desired signal is approximated, using (19), as
$$\begin{array}{*{20}l}{} \mathbb{E}_{\Delta\tau,x_{d}}[|s_{d}[m]|^{2}] &\approx A^{2} \mathbb{E}_{\Delta\tau,x_{d}}\left[ |x_{d}[m]|^{2}\left(1-\frac{\eta}{T}|\Delta\tau|\right)^{2} \right] \end{array} $$
(37a)
$$\begin{array}{*{20}l} &= \!A^{2}E_{x_{d}}\! \left(\! 1\!-\frac{2\eta}{T}\mathbb{E}[\!|\Delta\tau|] + \frac{\eta^{2}}{ T^{2}}\mathbb{E}[\!|\Delta\tau|^{2}]\!\right) \end{array} $$
(37b)
$$\begin{array}{*{20}l} &= A^{2}E_{x_{d}} \left(1-\frac{\eta \Delta \tau_{\text{max}}}{2T} + \frac{\eta^{2}}{T^{2}}\frac{\Delta \tau_{\text{max}}^{2}}{12}\right) \end{array} $$
(37c)
$$\begin{array}{*{20}l} &\approx A^{2}E_{x_{d}} \left(1-\frac{\eta\Delta \tau_{\text{max}}}{2T} \right) \end{array} $$
(37d)
$$\begin{array}{*{20}l} &\approx A^{2}E_{x_{d}} \left(1-\frac{\eta}{2T}\sqrt{12 \textrm{CRB}_{\tau}} \right), \end{array} $$
(37e)

where in (37c) we used the assumption \(\Delta \tau \sim {U}[-\frac {\Delta \tau _{\text {max}}}{2},\frac {\Delta \tau _{\text {max}}}{2}]\), which implies \(\mathbb {E}[|\Delta \tau |]=\frac {\Delta \tau _{\text {max}}}{4}\) and \(\mathbb {E}[|\Delta \tau |^{2}]=\frac {\Delta \tau _{\text {max}}^{2}}{12}\); (37d) follows by removing higher-order terms in \(\Delta\tau_{\text{max}}\) under the assumption that \(\Delta\tau_{\text{max}}\) is small enough; and (37e) is a consequence of the approximation \(\mathbb {E}[\Delta \tau ^{2}] = \frac {\Delta \tau _{\text {max}}^{2}}{12} \approx \textrm {CRB}_{\tau }\).
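The two moments invoked in (37c) can be confirmed by a quick Monte Carlo draw from the assumed uniform distribution (the value of \(\Delta\tau_{\text{max}}\) is a placeholder):

```python
import numpy as np

rng = np.random.default_rng(2)
d_max = 0.4                                         # placeholder Delta tau_max
dt = rng.uniform(-d_max / 2, d_max / 2, 1_000_000)  # dt ~ U[-d_max/2, d_max/2]

mean_abs = np.mean(np.abs(dt))   # should approach E[|dt|]   = d_max / 4
mean_sq = np.mean(dt**2)         # should approach E[dt^2]   = d_max^2 / 12
```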

The power of z s [m] is similarly approximated, using (19), as
$$\begin{array}{*{20}l} &\!\!\!\!\!\!\!\mathbb{E}_{\Delta\tau, \Delta\theta, x_{d}}[|z_{s}[m]|^{2}] \\& \approx A^{2} \mathbb{E}_{\Delta\tau, \Delta\theta, x_{d}}\left[ |x_{d}[m]|^{2}|e^{-j\Delta\theta}-1|^{2}\left(1-\frac{\eta}{T}|\Delta\tau|\right)^{2} \right] \end{array} $$
(38a)
$$\begin{array}{*{20}l} &= A^{2}E_{x_{d}} \mathbb{E}_{\Delta\tau,\Delta\theta }\left[|e^{-j\Delta\theta}-1|^{2}\left(1-\frac{\eta}{T}|\Delta\tau|\right)^{2}\right] \end{array} $$
(38b)
$$\begin{array}{*{20}l} &\approx A^{2}E_{x_{d}}\textrm{CRB}_{\theta} \left(1-\frac{2\eta}{T}\mathbb{E}_{\Delta\tau}[|\Delta\tau|] + \frac{\eta^{2}}{T^{2}}\mathbb{E}_{\Delta\tau} [|\Delta\tau|^{2}]\right) \end{array} $$
(38c)
$$\begin{array}{*{20}l} &\approx A^{2}E_{x_{d}} \textrm{CRB}_{\theta} \left(1-\frac{\eta}{2T}\sqrt{12 \textrm{CRB}_{\tau}} \right), \end{array} $$
(38d)
where the approximation in (38c) follows as
$$\begin{array}{*{20}l} \mathbb{E}_{\Delta\theta}[|e^{-j\Delta\theta}-1|^{2}] & = 2-2\mathbb{E}_{\Delta\theta}[\cos(\Delta\theta)] \end{array} $$
(39a)
$$\begin{array}{*{20}l} &= 2-2\left(\frac{\sin(\Delta \theta_{\text{max}}/2)}{\Delta \theta_{\text{max}}/2}\right) \end{array} $$
(39b)
$$\begin{array}{*{20}l} &\approx 2-2\left(1- \frac{(\Delta \theta_{\text{max}}/2)^{2}}{3!}\right) \end{array} $$
(39c)
$$\begin{array}{*{20}l} &=\frac{\Delta \theta_{\text{max}}^{2}}{12} \end{array} $$
(39d)
$$\begin{array}{*{20}l} &\approx \textrm{CRB}_{\theta}, \end{array} $$
(39e)

where (39c) follows from the Taylor series of the sinc function up to the second order, and (39e) is a consequence of the approximation \(\mathbb {E}[\Delta \theta ^{2}] = \frac {\Delta \theta _{\text {max}}^{2}}{12} \approx \textrm {CRB}_{\theta }\).
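The small-angle approximation chain (39) can be validated in the same way: for uniform \(\Delta\theta\) with a small \(\Delta\theta_{\text{max}}\) (placeholder value below), \(\mathbb{E}[|e^{-j\Delta\theta}-1|^{2}]\) is close to \(\Delta\theta_{\text{max}}^{2}/12\):

```python
import numpy as np

rng = np.random.default_rng(3)
t_max = 0.2                                         # small Delta theta_max (rad)
dth = rng.uniform(-t_max / 2, t_max / 2, 1_000_000)

# Exact quantity on the left of (39a), estimated by Monte Carlo
exact = np.mean(np.abs(np.exp(-1j * dth) - 1) ** 2)
# Small-angle value from (39d)
approx = t_max**2 / 12
```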

Finally, using (18a), the power of z i s i [m] is approximated as
$$\begin{array}{*{20}l} \mathbb{E}_{\Delta\tau,\bar{\mathbf{x}}_{d}}[|z_{{isi}}[m]|^{2}] & \approx \frac{A^{2}}{ T^{2}}\mathbb{E}_{\Delta\tau,\bar{\mathbf{x}}_{d}}\left[ |\mathbf{a}^{T}\bar{\mathbf{x}}_{d}|^{2}\Delta\tau^{2} \right] \end{array} $$
(40a)
$$\begin{array}{*{20}l} &= \frac{A^{2}E_{x_{d}}\bar{a}}{T^{2}} \mathbb{E}_{\Delta\tau}[\Delta\tau^{2}] \end{array} $$
(40b)
$$\begin{array}{*{20}l} &\approx \frac{A^{2}E_{x_{d}}\bar{a}}{T^{2}} \textrm{CRB}_{\tau}. \end{array} $$
(40c)

Notes

Acknowledgements

The work of O. Simeone was partially supported by the U.S. NSF under grant CCF-1525629. This work was supported by 'The Cross-Ministry Giga KOREA Project' grant from the Ministry of Science, ICT, and Future Planning, Korea.

Competing interests

The authors declare that they have no competing interests.

References

  1. K Chen, R Duan, C-RAN: the road towards green RAN. China Mobile Research Institute, white paper, ver. 2 (2011).
  2. A Checko, HL Christiansen, Y Yan, L Scolari, G Kardaras, MS Berger, L Dittmann, Cloud RAN for mobile networks: a technology overview. IEEE Comm. Surv. Tutorials 17(1), 405–426 (2015).
  3. Ericsson AB, Huawei Technologies, NEC Corporation, Alcatel-Lucent, Nokia Siemens Networks, Common Public Radio Interface (CPRI); Interface Specification, v5.0 (2011).
  4. D Samardzija, J Pastalan, M MacDonald, S Walker, R Valenzuela, Compressed transport of baseband signals in radio access networks. IEEE Trans. Wirel. Comm. 11(9), 3216–3225 (2012).
  5. B Guo, W Cao, A Tao, D Samardzija, CPRI compression transport for LTE and LTE-A signal in C-RAN, in Proc. Int. ICST Conf. CHINACOM (2012), pp. 843–849.
  6. A Vosoughi, M Wu, JR Cavallaro, Baseband signal compression in wireless base stations, in Proc. IEEE Global Communications Conference (2012), pp. 4505–4511.
  7. A Sanderovich, O Somech, HV Poor, S Shamai, Uplink macro diversity of limited backhaul cellular network. IEEE Trans. Inf. Theory 55(8), 3457–3478 (2009).
  8. L Zhou, W Yu, Optimized backhaul compression for uplink cloud radio access network. IEEE J. Sel. Areas Comm. 32(6), 1295–1307 (2014).
  9. A del Coso, S Simoens, Distributed compression for MIMO coordinated networks with a backhaul constraint. IEEE Trans. Wirel. Comm. 8(9), 4698–4709 (2009).
  10. S-H Park, O Simeone, O Sahin, S Shamai, Robust and efficient distributed compression for cloud radio access networks. IEEE Trans. Veh. Techn. 62(2), 692–703 (2013).
  11. J Hoydis, M Kobayashi, M Debbah, Optimal channel training in uplink network MIMO systems. IEEE Trans. Sig. Proc. 59(6), 2824–2833 (2011).
  12. J Kang, O Simeone, J Kang, S Shamai, Joint signal and channel state information compression for the backhaul of uplink network MIMO systems. IEEE Trans. Wirel. Comm. 13(3), 1555–1567 (2014).
  13. S-H Park, O Simeone, O Sahin, S Shamai, Fronthaul compression for cloud radio access networks: signal processing advances inspired by network information theory. IEEE Signal Proc. Mag. 31(6), 69–79 (2014).
  14. A Charnes, WW Cooper, Programming with linear fractional functionals. Nav. Res. Logist. Q. 9(3–4), 181–186 (1962).
  15. R Horst, NV Thoai, DC programming: overview. J. Optim. Theory Appl. 103(1), 1–43 (1999).
  16. A El Gamal, Y-H Kim, Network Information Theory (Cambridge University Press, Cambridge, 2012).
  17. Y Chen, YC Eldar, AJ Goldsmith, Shannon meets Nyquist: capacity of sampled Gaussian channels. IEEE Trans. Inf. Theory 59(8), 4889–4914 (2013).
  18. L Tong, G Xu, B Hassibi, T Kailath, Blind channel identification based on second-order statistics: a frequency-domain approach. IEEE Trans. Inf. Theory 41(1), 329–334 (1995).
  19. S Gault, W Hachem, P Ciblat, Cramer-Rao bounds for data-aided sampling clock offset and channel estimation, in Proc. IEEE ICASSP, vol. 4 (2004), pp. iv-1029–iv-1032.
  20. R Zamir, M Feder, On lattice quantization noise. IEEE Trans. Inf. Theory 42(4), 1152–1159 (1996).
  21. RM Gray, Toeplitz and Circulant Matrices: A Review (Now Publishers, 2006).
  22. S Jagannathan, H Aghajan, A Goldsmith, The effect of time synchronization errors on the performance of cooperative MISO systems, in Proc. IEEE GLOBECOM 2004 (2004), pp. 102–107.
  23. F Gini, R Reggiannini, U Mengali, The modified Cramer-Rao bound in vector parameter estimation. IEEE Trans. Comm. 46(1), 52–60 (1998).
  24. SM Kay, Fundamentals of Statistical Signal Processing: Estimation Theory (Prentice-Hall, Englewood Cliffs, NJ, 1993).

Copyright information

© The Author(s) 2017

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  2. Center for Wireless Information Processing (CWIP), Electrical and Computer Engineering Department, New Jersey Institute of Technology (NJIT), Newark, USA
