
Low Complexity Time Synchronization Algorithm for OFDM Systems with Repetitive Preambles


Abstract

In this paper, a new time synchronization algorithm for OFDM systems with a repetitive preamble is proposed. The algorithm combines coarse and fine time estimation: the fine time estimation is performed using a cross-correlation similar to previous proposals in the literature, whereas the coarse time estimation uses a new metric and an iterative search for the last sample of the repetitive preamble. A complete analysis of the new metric is included, as well as a wide performance comparison, under multipath channels and carrier frequency offset, with the main time synchronization algorithms found in the literature. Finally, the complexity of the VLSI implementation of this proposal is discussed.


References

  1. IEEE 802.11a standard (1999). Wireless LAN medium access control (MAC) and physical layer (PHY) specifications: high-speed physical layer in the 5 GHz band.

  2. IEEE 802.11g standard (2003). Wireless LAN specifications: Further higher data rate extension in the 2.4 GHz band.

  3. IEEE 802.16-2004 (2004). Standard for local and metropolitan area networks, part 16: Air interface for fixed broadband wireless access systems.

  4. Lee, D., & Cheun, K. (2002). Coarse symbol synchronization algorithms for OFDM systems in multipath channels. IEEE Communications Letters, 6(10), 446–448.


  5. Park, B., Cheon, H., Ko, E., Kang, C., & Hong, D. (2004). A blind OFDM synchronization algorithm based on cyclic correlation. IEEE Signal Processing Letters, 11(2), 83–85.


  6. van de Beek, J. J., Sandell, M., & Börjesson, P. O. (1997). ML estimation of time and frequency offset in OFDM systems. IEEE Transactions on Signal Processing, 45(7), 1800–1805.


  7. Ma, S., Pan, X., Yang, G., & Ng, T. (2009). Blind symbol synchronization based on cyclic prefix for OFDM systems. IEEE Transactions on Vehicular Technology, 58(4), 1746–1751.


  8. Schmidl, T., & Cox, D. (1997). Robust frequency and timing synchronization for OFDM. IEEE Transactions on Communications, 45(12), 1613–1621.


  9. Coulson, A. J. (2001). Maximum likelihood synchronization for OFDM using a pilot symbol: Algorithms. IEEE Journal on Selected Areas in Communications, 19(12), 2495–2503.


  10. Tufvesson, F., Edfors, O., & Faulkner, M. (1999). Time and frequency synchronization for OFDM using PN-sequence preambles. Proceedings of the Vehicular Technology Conference (VTC), 4, 2203–2207.


  11. Shi, K., & Serpedin, E. (2004). Coarse frame and carrier synchronization of OFDM systems: a new metric and comparison. IEEE Transactions on Wireless Communications, 3(4), 1271–1284.


  12. Minn, H., Zeng, M., & Bhargava, V. K. (2000). On timing offset estimation for OFDM Systems. IEEE Communications Letters, 4, 242–244.


  13. Minn, H., Bhargava, V. K., & Letaief, K. B. (2003). A robust timing and frequency synchronization for OFDM systems. IEEE Transactions on Wireless Communications, 2(4), 822–839.


  14. Minn, H., Bhargava, V. K., & Letaief, K. B. (2006). A combined timing and frequency synchronization and channel estimation for OFDM. IEEE Transactions on Communications, 54(3), 416–422.


  15. Park, B., Cheon, H., Ko, E., Kang, C., & Hong, D. (2003). A novel timing estimation method for OFDM systems. IEEE Communications Letters, 7(5), 239–241.


  16. Chang, S., & Kelley, B. (2003). Time synchronization for OFDM-based WLAN systems. Electronics Letters, 39(13), 1024–1026.


  17. Wu, Y., Yip, K., Ng, T., & Serpedin, E. (2005). Maximum-likelihood symbol synchronization for IEEE 802.11a WLANs in unknown frequency-selective fading channels. IEEE Transactions on Wireless Communications, 4(6), 2751–2763.


  18. Larsson, E. G., Liu, G., Li, J., & Giannakis, G. B. (2001). Joint symbol timing and channel estimation for OFDM based WLANs. IEEE Communications Letters, 5(8), 325–327.


  19. Troya, A., Maharatna, K., Krstic, M., Grass, E., Jagdhold, U., & Kraemer, R. (2007). Efficient inner receiver design for OFDM-based WLAN systems: algorithm and architecture. IEEE Transactions on Wireless Communications, 6(4), 1374–1385.


  20. Yang, J., & Cheun, K. (2006). Improved symbol timing synchronization in IEEE 802.11a/g wireless LAN systems in multipath channels. International Conference on Consumer Electronics. doi:10.1109/ICCE.2006.1598425.

  21. Manusani, S. K., Kshetrimayum, R. S., & Bhattacharjee, R. (2006). Robust time and frequency synchronization in OFDM based 802.11a WLAN systems. Annual India Conference. doi:10.1109/INDCON.2006.302775.

  22. Zhou, L., & Saito, M. (2004). A new symbol timing synchronization for OFDM based WLANs under multipath fading channels. 15th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications. doi: 10.1109/PIMRC.2004.1373890.

  23. Kim, T., & Park, S.-C. (2007). A new symbol timing and frequency synchronization design for OFDM-based WLAN systems. 9th Conference on Advanced Communication Technology. doi:10.1109/ICACT.2007.358691.

  24. Baek, J. H., Kim, S. D., & Sunwoo, M. H. (2008). SPOCS: Application specific signal processor for OFDM communication systems. Journal of Signal Processing Systems, 53(3), 383–397.


  25. Van Kempen, G., & van Vliet, L. (2000). Mean and variance of ratio estimators used in fluorescence ratio imaging. Cytometry, 39(4), 300–305.


  26. Medbo, J., & Schramm, P. (1998). Channel models for HIPERLAN/2 in different indoor scenarios. 3ERI085B, HIPERLAN/2 ETSI/BRAN contribution.

  27. Abramowitz, M., & Stegun, I. A. (1972). Handbook of mathematical functions. Dover.

  28. López-Martínez, F. J., del Castillo-Sánchez, E., Entrambasaguas, J. T., & Martos-Naya, E. (2010). Iterative-gradient based complex divider FPGA core with dynamic configurability of accuracy and throughput. Journal of Signal Processing Systems. doi:10.1007/s11265-010-0464-y.

  29. Angarita, F., Canet, M. J., Sansaloni, T., Perez-Pascual, A., & Valls, J. (2008). Efficient mapping of CORDIC Algorithm for OFDM-based WLAN. Journal of Signal Processing Systems, 52(2), 181–191.



Acknowledgements

This work was supported by the Spanish Ministerio de Educación y Ciencia under grants TEC2006-14204-C02-01 and TEC2008-06787.

Author information

Correspondence to Maria Jose Canet.

Appendices

Appendix A. Statistical Properties of the Timing Metric at the Repetitive Part of the Preamble

Each received sample is composed of an OFDM signal and a noise component, \( r(n) = s_{SS}(n) + w(n) \), where \( s_{SS}(n) \) are samples of the received preamble during the repetitive part (short symbols) and \( w(n) \) are samples of complex additive white Gaussian noise with zero mean and variance \( \sigma_w^2 \). Using the common approach of taking the mathematical expectation of \( R_{SS}(n) \), the mean of the received signal power is:

$$ E\left\{ R_{SS}(n) \right\} = \sum\limits_{k=-L}^{L-1} \left| s_{SS}(n+k) \right|^2 + E\left\{ \sum\limits_{k=-L}^{L-1} \left| w(n+k) \right|^2 \right\} = \varepsilon_s(n) + 2L\,\sigma_w^2, $$

where \( \varepsilon_s(n) \) is the estimated local energy of the received preamble. As the short symbols are periodic with period \( L \), we can write \( \varepsilon_s(n) = 2\varepsilon_{SS} \), where \( \varepsilon_{SS} \) is the energy of one period of the received preamble. So, during the repetitive part of the preamble the mean of \( R_{SS}(n) \) is a constant due to this periodicity.
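
To make the behaviour of this term concrete, the following minimal NumPy sketch (an illustration only; the period L = 16, the unit-power random preamble and the noise variance are assumptions, not the paper's simulation setup) builds a noisy repetitive preamble and evaluates R(n) as a sliding sum of 2L squared magnitudes, which fluctuates around \( 2\varepsilon_{SS} + 2L\sigma_w^2 \):

```python
import numpy as np

rng = np.random.default_rng(0)

L = 16            # period of one short symbol (assumed, 802.11a-like)
n_periods = 10    # number of repeated periods (assumed)
sigma_w2 = 0.1    # total noise variance (assumed)

# One period of a hypothetical complex preamble, repeated to form s_SS(n)
ss_period = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
s_ss = np.tile(ss_period, n_periods)

# Complex AWGN with total variance sigma_w2 (sigma_w2/2 per real/imag component)
w = np.sqrt(sigma_w2 / 2) * (rng.standard_normal(s_ss.size)
                             + 1j * rng.standard_normal(s_ss.size))
r = s_ss + w

# R(n) = sum_{k=-L}^{L-1} |r(n+k)|^2, a sliding window of 2L samples
n = 5 * L                                  # a point inside the repetitive part
R_n = np.sum(np.abs(r[n - L:n + L]) ** 2)

eps_ss = np.sum(np.abs(ss_period) ** 2)    # energy of one period
print("R(n)              :", R_n)
print("2*eps_SS + 2*L*s2 :", 2 * eps_ss + 2 * L * sigma_w2)
```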

Another method to obtain this result is to take into account that \( r(n) \) is formed by noise (characterized as a complex Gaussian random variable) added to the received preamble samples (a deterministic signal). When the sum of the squared modulus of \( r(n) \) is computed, we get a non-central chi-squared random variable, denoted \( \chi_p^2 \), where \( p \) is the number of degrees of freedom (the number of real Gaussian random variables; in this case \( p = 2\cdot(2L) = 4L \), since the window contains \( 2L \) complex samples). A central chi-squared random variable \( G \) obtained as \( G = \sum\nolimits_{i=1}^{p} V_i^2 \), where the \( V_i \) are real Gaussian random variables with zero mean and variance \( \sigma_v^2 \), has a mean of \( p\sigma_v^2 \) and a variance of \( 2p\sigma_v^4 \) [27]. When the real Gaussian variables have means \( \mu_i \), a non-central chi-squared random variable is generated, with mean \( (p + \lambda)\,\sigma_v^2 \) and variance \( 2(p + 2\lambda)\,\sigma_v^4 \), where \( \lambda = \sum\nolimits_{i=1}^{p} \left( \mu_i/\sigma_v \right)^2 \). In our case, the real and imaginary components of the Gaussian noise have variance \( \sigma_w^2/2 \), and their means are the values of the real or imaginary parts of the received preamble: \( \mu_i = s_R(i) \) or \( \mu_i = s_I(i) \). Then, calculating the parameter \( \lambda \) we obtain:

$$ \lambda = \sum\limits_{i=-L}^{L-1} \left( \frac{s_R^2(i)}{\sigma_w^2/2} + \frac{s_I^2(i)}{\sigma_w^2/2} \right) = \frac{2\varepsilon_{SS}}{\sigma_w^2/2}. $$

So, the mean of R(n) can be obtained as:

$$ E\left\{ R_{SS} \right\} = (p + \lambda)\frac{\sigma_w^2}{2} = \left( 2\cdot(2L) + \frac{\varepsilon_s(n)}{\sigma_w^2/2} \right)\frac{\sigma_w^2}{2} = 2\varepsilon_{SS} + 2L\,\sigma_w^2. $$

By this method the variance can be easily obtained as:

$$ {\rm var}\left\{ R_{SS} \right\} = 4\varepsilon_{SS}\,\sigma_w^2 + 2L\,\sigma_w^4. $$
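
A quick Monte Carlo check of these two moments is sketched below (assumed parameters again; the 2L-sample window is held fixed and only the noise is redrawn, matching the non-central chi-squared argument above):

```python
import numpy as np

rng = np.random.default_rng(1)

L, sigma_w2, trials = 16, 0.1, 100_000      # assumed parameters

# Deterministic 2L-sample window of the repetitive preamble (periodic with L)
ss_period = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
window = np.tile(ss_period, 2)
eps_ss = np.sum(np.abs(ss_period) ** 2)

# Redraw only the complex AWGN in every trial
w = np.sqrt(sigma_w2 / 2) * (rng.standard_normal((trials, 2 * L))
                             + 1j * rng.standard_normal((trials, 2 * L)))
R = np.sum(np.abs(window + w) ** 2, axis=1)

print("mean:", R.mean(), " theory:", 2 * eps_ss + 2 * L * sigma_w2)
print("var :", R.var(),  " theory:", 4 * eps_ss * sigma_w2 + 2 * L * sigma_w2 ** 2)
```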

Next, we study the numerator when it is applied to samples from the short symbols; we call this \( P_{SS}(n) \). In this case, \( M \) must be chosen so that \( s(n) = s_{SS}(n-L) = s_{SS}(n-2L) = \cdots = s_{SS}(n - M\cdot L) \); then the signal component inside the squared modulus cancels out and only noise terms are left: \( w'(n) = w(n) - \left( \sum\nolimits_{m=2}^{M+1} w(n - L\cdot m) \right)/M \), where \( w'(n) \) is a white Gaussian random variable with variance \( \sigma_{w'}^2 = (M+1)\,\sigma_w^2/M \). Using the same reasoning as before, the numerator is a sum of squared moduli of Gaussian random variables, so it is a central chi-squared random variable with mean and variance:

$$ E\left\{ P_{SS} \right\} = 2L\,\frac{\sigma_{w'}^2}{2} = L\frac{M+1}{M}\sigma_w^2, $$
$$ {\rm var}\left\{ P_{SS} \right\} = 2\left( 2L \right)\left( \frac{\sigma_{w'}^2}{2} \right)^2 = L\left( \frac{M+1}{M} \right)^2\sigma_w^4. $$
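
The same can be checked for the numerator. The exact window of eq. (4) is defined in the main text, so the sketch below simply assumes an L-sample window that subtracts the average of the M periods taken at lags 2L, ..., (M+1)L, as written above; L, M and the noise variance are again arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

L, M, sigma_w2, trials = 16, 4, 0.1, 20_000     # assumed parameters

ss_period = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
s_ss = np.tile(ss_period, M + 2)                # enough periods for the averaging
N = s_ss.size

w = np.sqrt(sigma_w2 / 2) * (rng.standard_normal((trials, N))
                             + 1j * rng.standard_normal((trials, N)))
r = s_ss + w

n0 = (M + 1) * L                                # deep inside the repetitive part
idx = np.arange(n0, n0 + L)
# u(n) = r(n) - (1/M) sum_{m=2..M+1} r(n - L*m); the preamble term cancels here
u = r[:, idx] - sum(r[:, idx - L * m] for m in range(2, M + 2)) / M
P = np.sum(np.abs(u) ** 2, axis=1)

print("mean:", P.mean(), " theory:", L * (M + 1) / M * sigma_w2)
print("var :", P.var(),  " theory:", L * ((M + 1) / M) ** 2 * sigma_w2 ** 2)
```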

Finally, we need to calculate the covariance between \( P_{SS} \) and \( R_{SS} \). The covariance is:

$$ {\rm cov}\left\{ P_{SS}, R_{SS} \right\} = E\left\{ P_{SS}\,R_{SS} \right\} - E\left\{ P_{SS} \right\}\,E\left\{ R_{SS} \right\}. $$

After some mathematical manipulations, the first term is given by:

$$ E\left\{ P_{SS}\,R_{SS} \right\} = 2\varepsilon_{SS}L\,\sigma_w^2 + (2L^2 + L)\,\sigma_w^4 + 2\varepsilon_{SS}L\,\frac{\sigma_w^2}{M} + 2L^2\frac{\sigma_w^4}{M}. $$

Using the means of \( R_{SS} \) and \( P_{SS} \) calculated previously, we obtain:

$$ {\rm cov}\left\{ P_{SS}, R_{SS} \right\} = L\sigma_w^4. $$
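
This covariance can also be estimated jointly from simulation. The sketch below reuses the assumed windows of the previous snippets (an L-sample numerator window and a 2L-sample denominator window centred on the same instant); the covariance estimate converges slowly, so a fairly large noise variance and number of trials are used:

```python
import numpy as np

rng = np.random.default_rng(3)

L, M, sigma_w2, trials = 16, 4, 1.0, 50_000     # assumed parameters

ss_period = 0.5 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
s_ss = np.tile(ss_period, M + 2)
N = s_ss.size

w = np.sqrt(sigma_w2 / 2) * (rng.standard_normal((trials, N))
                             + 1j * rng.standard_normal((trials, N)))
r = s_ss + w

n0 = (M + 1) * L
kP = np.arange(n0, n0 + L)          # numerator window (L samples)
kR = np.arange(n0 - L, n0 + L)      # denominator window (2L samples)

u = r[:, kP] - sum(r[:, kP - L * m] for m in range(2, M + 2)) / M
P = np.sum(np.abs(u) ** 2, axis=1)
R = np.sum(np.abs(r[:, kR]) ** 2, axis=1)

# Sample covariance; expect roughly L * sigma_w^4 (a noisy estimate)
cov = np.mean(P * R) - P.mean() * R.mean()
print("cov:", cov, " theory:", L * sigma_w2 ** 2)
```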

Appendix B. Statistical Properties of the Timing Metric after the Repetitive Part of the Preamble

At the first sample of the GI, the subtraction in (4) can be expressed as \( u(n) = r_{GI}(n) - r_{SS}(n) \). We can decompose it into noise and a combination of samples from the preamble, \( u(n) = z(n) + w'(n) \), where the noise term is as in Appendix A, \( w'(n) = w(n) - \left( \sum\nolimits_{m=1}^{M} w(n - L\cdot m) \right)/M \), and the other term is \( z(n) = s_{GI}(n) - s_{SS}(n) \), where \( s_{SS}(n) \) is the received repetitive preamble and \( s_{GI}(n) \) is the received guard interval. To obtain \( E\left\{ P_{GI} \right\} \), that is, the mean of the numerator of the time metric at \( n = n_{GI} \), we can use the same reasoning as in Appendix A when we estimated \( E\left\{ R_{SS} \right\} \), since the signal \( u(n) \) is the sum of a deterministic signal and white Gaussian noise with variance \( \sigma_{w'}^2 = \sigma_w^2\left( M+1 \right)/M \). Therefore we obtain:

$$ E\left\{ P_{GI} \right\} = \varepsilon_z + L\left( \frac{M+1}{M} \right)\sigma_w^2, $$
$$ {\rm var}\left\{ P_{GI} \right\} = 2\varepsilon_z\left( \frac{M+1}{M} \right)\sigma_w^2 + L\left( \frac{M+1}{M} \right)^2\sigma_w^4, $$

where \( \varepsilon_z \) is:

$$ \begin{gathered} \varepsilon_z = \sum\limits_{n=0}^{L-1} \left| z(n) \right|^2 = \sum\limits_{n=0}^{L-1} \left| s_{GI}(n) - s_{SS}(n) \right|^2 \\ = \sum\limits_{n=0}^{L-1} \left| s_{GI}(n) \right|^2 + \sum\limits_{n=0}^{L-1} \left| s_{SS}(n) \right|^2 - 2\sum\limits_{n=0}^{L-1} {\rm Re}\left\{ s_{GI}(n)\,s_{SS}^{*}(n) \right\} \\ = \varepsilon_{GI} + \varepsilon_{SS} - 2C. \\ \end{gathered} $$

Since at this point the window of \( R(n) \) includes samples from both the SS and the GI, its mean becomes:

$$ E\left\{ R_{GI} \right\} = \varepsilon_{SS} + \varepsilon_{GI} + 2L\,\sigma_w^2, $$

and the variance:

$$ {\rm var}\left\{ R_{GI} \right\} = 2(\varepsilon_{SS} + \varepsilon_{GI})\,\sigma_w^2 + 2L\,\sigma_w^4. $$

The covariance between \( P_{GI} \) and \( R_{GI} \) can be obtained in a similar way to that of Appendix A, giving:

$$ {\rm cov}\left\{ P_{GI}, R_{GI} \right\} = L\sigma_w^4 + 2\sigma_w^2\left( \varepsilon_{GI} - C \right). $$
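
To see these boundary statistics numerically, the sketch below places a hypothetical guard-interval segment after the repetitive part and checks the means of P_GI and R_GI at n_GI (the GI samples, L, M and the noise variance are all assumptions; the averaging window is taken as in the expression for w'(n) above):

```python
import numpy as np

rng = np.random.default_rng(4)

L, M, sigma_w2, trials = 16, 4, 0.1, 20_000     # assumed parameters

ss_period = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
s_gi = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)  # hypothetical GI samples
s = np.concatenate([np.tile(ss_period, M + 1), s_gi])    # repetitive part, then GI
N = s.size
n_gi = (M + 1) * L                                       # first sample of the GI

eps_ss = np.sum(np.abs(ss_period) ** 2)
eps_gi = np.sum(np.abs(s_gi) ** 2)
C = np.sum(np.real(s_gi * np.conj(ss_period)))
eps_z = eps_gi + eps_ss - 2 * C

w = np.sqrt(sigma_w2 / 2) * (rng.standard_normal((trials, N))
                             + 1j * rng.standard_normal((trials, N)))
r = s + w

kP = np.arange(n_gi, n_gi + L)
u = r[:, kP] - sum(r[:, kP - L * m] for m in range(1, M + 1)) / M
P = np.sum(np.abs(u) ** 2, axis=1)
R = np.sum(np.abs(r[:, np.arange(n_gi - L, n_gi + L)]) ** 2, axis=1)

print("E{P_GI}:", P.mean(), " theory:", eps_z + L * (M + 1) / M * sigma_w2)
print("E{R_GI}:", R.mean(), " theory:", eps_ss + eps_gi + 2 * L * sigma_w2)
```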

Appendix C. Effect of the Carrier Frequency Offset on the Time Metric

The signal model that includes the CFO was defined as \( r(n) = s(n)\,e^{j\omega n} + w(n) \). As the CFO affects the signal term, during the repetitive part of the preamble the mean of (4) can be written as:

$$ E\left\{ P_{SS}(n) \right\} = \sum\limits_{k=0}^{L-1} \left| s_{SS}(n+k)\,e^{j\omega(n+k)} - \frac{1}{M}\sum\limits_{m=1}^{M} s_{SS}\left( n+k-L\cdot m \right) e^{j\omega(n+k-Lm)} \right|^2 + L\frac{M+1}{M}\sigma_w^2, $$

where the signal term does not cancel out as before. Now, the mean can be rewritten, due to the periodicity of \( s_{SS}(n) \), as:

$$ E\left\{ P_{SS}(n) \right\} = \sum\limits_{k=0}^{L-1} \left| s_{SS}(n+k)\,e^{j\omega(n+k)}\left( 1 - \frac{1}{M}\sum\limits_{m=1}^{M} e^{-j\omega Lm} \right) \right|^2 + L\frac{M+1}{M}\sigma_w^2. $$

The sum of complex exponentials can be expressed as:

$$ \sum\limits_{m=1}^{M} e^{-j\omega Lm} = \frac{\sin(M\omega L/2)}{\sin(\omega L/2)}\,e^{-j\omega L\frac{M+1}{2}}; $$

then, the signal term is given by:

$$ \begin{gathered} \sum\limits_{k=0}^{L-1} \left| s_{SS}(n+k)\,e^{j\omega(n+k)}\left( 1 - \frac{1}{M}\,\frac{\sin(M\omega L/2)}{\sin(\omega L/2)}\,e^{-j\omega L\frac{M+1}{2}} \right) \right|^2 = \\ = \sum\limits_{k=0}^{L-1} \left| s_{SS}(n+k) \right|^2 \left| e^{j\omega(n+k)} \right|^2 \left| 1 - \frac{1}{M}\,\frac{\sin(M\omega L/2)}{\sin(\omega L/2)}\,e^{-j\omega L\frac{M+1}{2}} \right|^2 = \\ = \gamma(\omega, L, M)\,\varepsilon_{SS}, \\ \end{gathered} $$

where \( \varepsilon_{SS} \) is the energy of one period of the received preamble and \( \gamma(\omega, L, M) \) is a CFO factor given by

$$ \gamma(\omega, L, M) = \left| 1 - \frac{1}{M}\,\frac{\sin(M\omega L/2)}{\sin(\omega L/2)}\,e^{-j\omega L\frac{M+1}{2}} \right|^2. $$

Finally, the mean of the estimator is:

$$ E\left\{ P_{SS} \right\} = \gamma(\omega, L, M)\,\varepsilon_{SS} + L\frac{M+1}{M}\sigma_w^2. $$
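
The CFO factor is easy to evaluate numerically. The sketch below (illustrative values only; L, M, the preamble energy, the noise variance and the normalized CFO are assumptions) compares the direct sum of exponentials with the closed form used above and then evaluates E{P_SS} under CFO:

```python
import numpy as np

def gamma_direct(omega, L, M):
    """|1 - (1/M) sum_{m=1..M} e^{-j w L m}|^2 evaluated term by term."""
    s = np.sum(np.exp(-1j * omega * L * np.arange(1, M + 1))) / M
    return np.abs(1 - s) ** 2

def gamma_closed(omega, L, M):
    """Same factor using the closed-form Dirichlet-kernel expression."""
    s = (np.sin(M * omega * L / 2) / np.sin(omega * L / 2)
         * np.exp(-1j * omega * L * (M + 1) / 2)) / M
    return np.abs(1 - s) ** 2

L, M = 16, 4                        # assumed parameters
eps_ss, sigma_w2 = 16.0, 0.1        # assumed preamble energy and noise variance
omega = 2 * np.pi * 0.05 / 64       # assumed CFO: 0.05 subcarrier spacings, 64-point FFT

g = gamma_closed(omega, L, M)
print("gamma direct vs closed:", gamma_direct(omega, L, M), g)
print("E{P_SS} with CFO      :", g * eps_ss + L * (M + 1) / M * sigma_w2)
```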

To evaluate the effect of the CFO on the mean of (4) at \( n_{GI} \), we must calculate how the CFO affects the signal term:

$$ \hat{z}(n) = s_{GI}(n)\,e^{j\omega n} - \frac{1}{M}\sum\limits_{k=1}^{M} s_{SS}(n-kL)\,e^{j\omega(n-kL)}. $$

Again, we consider that all the terms from the repetitive preamble \( s_{SS}(n-kL) \) are equal; then

$$ \hat{z}(n) = s_{GI}(n)\,e^{j\omega n} - s_{SS}(n)\,e^{j\omega n}\frac{1}{M}\sum\limits_{k=1}^{M} e^{-j\omega kL}, $$

where the sum of complex exponentials is:

$$ \kappa(\omega, M, L) = \frac{1}{M}\sum\limits_{k=1}^{M} e^{-j\omega Lk} = \frac{1}{M}\frac{\sin(M\omega L/2)}{\sin(\omega L/2)}\,e^{-j\omega L\frac{M+1}{2}}. $$

If we calculate the energy of the signal term:

$$ \begin{gathered} \varepsilon_{\hat{z}} = \sum\limits_{n=0}^{L-1} \left| \hat{z}(n) \right|^2 = \sum\limits_{n=0}^{L-1} \left| s_{GI}(n)\,e^{j\omega n} - s_{SS}(n)\,e^{j\omega n}\kappa(\omega, M, L) \right|^2 \\ = \sum\limits_{n=0}^{L-1} \left| s_{GI}(n)\,e^{j\omega n} \right|^2 + \kappa^2(\omega, M, L)\sum\limits_{n=0}^{L-1} \left| s_{SS}(n)\,e^{j\omega n} \right|^2 - 2\kappa(\omega, M, L)\sum\limits_{n=0}^{L-1} {\rm Re}\left\{ s_{GI}(n)\,s_{SS}^{*}(n) \right\} \\ = \varepsilon_{GI} + \kappa^2(\omega, M, L)\,\varepsilon_{SS} - 2\kappa(\omega, M, L)\,C. \\ \end{gathered} $$

Then, the mean of (4) at \( n_{GI} \) is:

$$ E\left\{ P_{GI} \right\} = \varepsilon_{\hat{z}} + L\left( \frac{M+1}{M} \right)\sigma_w^2. $$
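
Finally, this last mean is also simple to evaluate numerically. The sketch below (all signals and parameters are illustrative assumptions) computes kappa, the distorted difference term z_hat(n) directly from its definition, its energy, and the resulting E{P_GI}:

```python
import numpy as np

rng = np.random.default_rng(5)

L, M, sigma_w2 = 16, 4, 0.1                    # assumed parameters
omega = 2 * np.pi * 0.05 / 64                  # assumed CFO (64-point FFT)

# kappa: complex weight left on the repetitive term after averaging M periods
kappa = np.sum(np.exp(-1j * omega * L * np.arange(1, M + 1))) / M

# Hypothetical boundary samples: one period of the SS and the first L GI samples
s_ss = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
s_gi = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)

n = np.arange(L)
z_hat = (s_gi - kappa * s_ss) * np.exp(1j * omega * n)   # z_hat(n) from its definition
eps_z_hat = np.sum(np.abs(z_hat) ** 2)

print("eps_z_hat       :", eps_z_hat)
print("E{P_GI} with CFO:", eps_z_hat + L * (M + 1) / M * sigma_w2)
```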


About this article

Cite this article

Canet, M.J., Almenar, V., Flores, S.J. et al. Low Complexity Time Synchronization Algorithm for OFDM Systems with Repetitive Preambles. J Sign Process Syst 68, 287–301 (2012). https://doi.org/10.1007/s11265-011-0618-6
