A Phase Linearization Method for IF Sampling Digitizer
A digitally modulated signal carries information in the phase of the carrier as well as in its amplitude. If time-domain measurement or demodulation of the modulated signal is to be performed, the measurement system or receiver must not introduce phase distortion into the signal before the measurement process. Traditionally, the receiver IF bandwidth is made sufficiently wide relative to the signal bandwidth that the desired signal spectrum falls within the linear-phase portion of the receiver's IF response. This approach becomes impractical for newer multi-carrier modulations, such as the various OFDM schemes with signal bandwidths of tens or even hundreds of megahertz. With a limited digitizer sampling rate, the digitizer front-end response (RF + IF + anti-aliasing) cannot be made wideband enough to present a linear-phase channel to the received modulated signal. A post-ADC phase-linearizing scheme is therefore required before time-domain parameters of the signal can be measured accurately. A design procedure and algorithm are presented, with simulated results of a derived phase-linearizing FIR filter compensating the nonlinear phase response of a 9th-order Chebyshev BPF.
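The kind of compensator the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's own algorithm: the sample rate (100 MS/s), passband edges (15–35 MHz), tap count, and the weighted least-squares frequency-domain fit are all assumptions made here. The equalizer target is the channel's own magnitude combined with a pure delay, so only the phase response is altered.

```python
import numpy as np
from scipy import signal

fs = 100e6                      # assumed ADC sample rate (illustrative)
f1, f2 = 15e6, 35e6             # assumed IF passband edges (illustrative)

# Stand-in for the digitizer front end: 9th-order Chebyshev type-I bandpass
sos = signal.cheby1(9, 0.5, [f1, f2], btype="bandpass", fs=fs, output="sos")

# Channel frequency response on a dense grid
f = np.linspace(0.0, fs / 2, 1024)
_, H = signal.sosfreqz(sos, worN=f, fs=fs)

# Desired cascade response: the channel's magnitude with a pure delay D
ntaps = 201
D = (ntaps - 1) / 2             # equalizer bulk delay in samples
delay = np.exp(-2j * np.pi * f / fs * D)
inband = (f >= f1) & (f <= f2)

# Equalizer target: cancel the channel phase in-band, plain delay elsewhere
Heq = np.where(inband, np.abs(H) * delay / H, delay)

# Weighted complex least-squares FIR fit:  A @ h ~= Heq
n = np.arange(ntaps)
A = np.exp(-2j * np.pi * np.outer(f / fs, n))
wgt = np.where(inband, 1.0, 0.05)       # emphasize in-band accuracy
h = np.linalg.lstsq(
    np.vstack([(A * wgt[:, None]).real, (A * wgt[:, None]).imag]),
    np.concatenate([(Heq * wgt).real, (Heq * wgt).imag]),
    rcond=None,
)[0]

# Check phase linearity of channel + equalizer over the inner passband
_, Hc = signal.freqz(h, 1, worN=f, fs=fs)
core = (f >= 18e6) & (f <= 32e6)

def dev_from_linear(ph):
    """Peak deviation of an unwrapped phase from its best linear fit."""
    fit = np.polyval(np.polyfit(f[core], ph, 1), f[core])
    return np.max(np.abs(ph - fit))

resid_raw = dev_from_linear(np.unwrap(np.angle(H[core])))
resid_eq = dev_from_linear(np.unwrap(np.angle((H * Hc)[core])))
print(f"peak phase deviation: raw {resid_raw:.3f} rad, "
      f"equalized {resid_eq:.4f} rad")
```

Restricting the phase inversion to the passband (and fitting only a plain delay elsewhere) keeps the least-squares problem well conditioned and bounds the equalizer's out-of-band gain; the cascade's phase deviation from linear in the inner passband drops by orders of magnitude relative to the uncompensated channel.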
Keywords: Phase linearization · Phase compensation