This chapter is devoted to illustrating applications of the theory of weakly nonlinear time-varying systems with practical examples. After introducing the class of electrical networks called switched circuits, we analyse some instantiations in detail. We analyse a practical implementation of a quadrature modulator, including its distortion and other aspects of fundamental importance in applications. As another example of the usefulness of time-varying circuits, we illustrate how they allow the implementation of highly selective filters that are otherwise unfeasible in small integrated form.

1 Switched Circuits

An important class of circuits that finds many applications is that of switched circuits. These are circuits whose only time-varying components are switches. Despite the fact that switches are time-varying resistors, this class of systems can be analysed as a sequence of time-invariant circuits, each one valid over an interval during which all switches remain in the same state.

Let O denote an open interval of \({\mathbb {R}}\). The space  \({\mathcal {D}}_O\) of test functions \(\phi \) with support contained in O is a vector subspace of \({\mathcal {D}}\). A distribution on O is a distribution defined on \({\mathcal {D}}_O\). The vector space of distributions defined on O is denoted by  \({\mathcal {D'}}_O\).

Consider a linear switched circuit including at least one capacitor or inductor. Without loss of generality we can assume the network to be driven by a single independent source x (superposition principle). Let \(t_i, i \in {\mathbb {N}}\) denote the times at which any of the ideal switches changes state and \(O_i\) the open intervals \((t_i,t_{i+1})\). In any of these intervals the network is described by a system of first-order linear differential equations

$$\begin{aligned} Du = A_i u + B_i x \end{aligned}$$

with u the state of the network, which we can represent by the voltages across the capacitors and the currents through the inductors. Suppose that the network is in the zero state and that we apply a Dirac impulse at time \(\tau \) with \(t_i< \tau < t_{i+1}\). Then, for \(\tau < t < t_{i+1}\) the state u evolves as a continuous function. At time \(t_{i+1}\) some of the switches change state. If we exclude circuits including closed loops composed exclusively of ideal inductors, ideal voltage sources and (closed) ideal switches, then at this time one of the following happens:

  • If an ideal switch across a capacitor is closed, then the voltage across that capacitor immediately after \(t_{i+1}\) becomes zero: \(v_C(t_{i+1}+) = 0\). Since charge must be conserved, this change in state must be accompanied by a current impulse \(v_C(t_{i+1}-)C \delta (t - t_{i+1})\) discharging the capacitor through the switch.

  • If the closing of a switch forms a closed loop consisting exclusively of (ideal) capacitors then at time \(t_{i+1}\) the charge in the capacitors will instantly redistribute under the constraint of charge conservation. This will be accompanied by current impulses through the capacitors. For example, if at \(t_{i+1}\) two capacitors with capacitance \(C_1\) and \(C_2\) respectively are connected in parallel by an ideal switch then the voltage across the capacitors immediately after the closing of the switch will be

    $$\begin{aligned} v_1(t_{i+1}+) [C_1 + C_2] = v_2(t_{i+1}+) [C_1 + C_2] = v_1(t_{i+1}-) C_1 + v_2(t_{i+1}-) C_2\,. \end{aligned}$$
  • If an ideal switch in series with an inductor is opened then the current through the inductor immediately after \(t_{i+1}\) becomes zero: \(i_L(t_{i+1}+) = 0\). Faraday’s law implies that this change in state is accompanied by a voltage impulse \(-i_L(t_{i+1}-)L \delta (t - t_{i+1})\) across the inductor.

  • In all other cases the voltages across the capacitors and the currents through the inductors remain unchanged. In other words the state component \(u_m\) representing any of those quantities immediately after \(t_{i+1}\) must equal the state component before that time instant: \(u_m(t_{i+1}+) = u_m(t_{i+1}-)\).

These conditions specify initial conditions for the interval \(O_{i+1}\) that, together with the differential equation, allow us to calculate the evolution of the state u of the network in that interval. The same arguments apply to all subsequent switching times. The state u is therefore fully determined and can be extended to a distribution on the whole of \({\mathbb {R}}\). The fundamental kernel W is therefore well defined for \(\tau \in O_i, i \in {\mathbb {N}}\).
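As a concrete illustration of the second rule above, the following minimal Python sketch (with hypothetical component values) applies the charge-conservation constraint to two capacitors connected in parallel by an ideal switch and reports the common voltage used as initial condition for the next interval, together with the charge carried by the impulsive current.

```python
import numpy as np

# Minimal sketch (hypothetical values): two capacitors that get connected in
# parallel by an ideal switch at t_{i+1}.  Before the switching instant each
# capacitor evolves independently; at t_{i+1} the charge redistributes and the
# common voltage serves as initial condition for the next interval.
C1, C2 = 1e-12, 3e-12          # capacitances [F] (assumed)
v1_minus, v2_minus = 1.0, 0.2  # voltages just before the switch closes [V]

# charge conservation: (C1 + C2) * v(+) = C1 * v1(-) + C2 * v2(-)
v_plus = (C1 * v1_minus + C2 * v2_minus) / (C1 + C2)

# the impulsive current through the switch carries the charge difference
delta_q = C1 * (v1_minus - v_plus)   # charge moved from C1 to C2 [C]

print(f"common voltage after switching: {v_plus:.3f} V")
print(f"charge transferred through the switch: {delta_q:.3e} C")
```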

The only problematic cases are when \(\tau \) coincides with one of the switching times. To work around this problem we limit the set of allowed input signals to the set of regular bounded distributions. Then, assuming further x to be right-sided, the state of the circuit can be represented by the integral

$$\begin{aligned} u(t) = \int \limits _0^t W(t,\tau ) B(\tau ) x(\tau ) \text {d}\tau \,. \end{aligned}$$

Since the values of \(\tau \) at which W is not defined form a set of measure zero, the output is well-defined at all times.

The above discussion shows that for switched circuits the computation of the time-varying impulse response is a straightforward process. In addition, since the impulse response in each interval \(O_i\) corresponds to that of an LTI system, the computation of the time-varying frequency response by Fourier transformation of \(h(t,\xi )\) does not pose any problem.

Linear periodically switched circuits are linear switched circuits in which the operation of the switches is periodic. For these circuits the time-varying impulse response \(h(t,\xi )\) and the time-varying frequency response \(\hat{h}(t,\omega )\) are periodic in time.

In the following we illustrate the use of this technique to analyse idealised versions of some practical periodically switched circuits used in communication receivers and transmitters.

Fig. 14.1 Voltage-mode quadrature modulator

2 Voltage-Mode Quadrature Modulator

In this section we analyse an implementation of the quadrature modulator of Example 12.9 suitable for realisation in a CMOS technology and shown in Fig. 14.1. The input signals \(v_I\) and \(v_Q\) (called r and q in Example 12.9) are applied differentially. We assume the LO signals to be non-overlapping and to have very fast edges. In fact we model the gate signals as rectangular waveforms that are high 25% of the time, with a relative delay of \({\mathcal {T}}/4\) between successive signals. We assume further that when the corresponding LO signal is high each MOSFET can be modeled as a resistor of value \(r_{ON}\), while when the LO signal is low it can be modeled as an open circuit (infinite resistance). We are interested in the signal \(v_A\) at the input of the amplifier following the switching transistors and assume that the input impedance of the latter can be adequately modeled as a capacitor.

Fig. 14.2 Voltage-mode quadrature modulator model

Under these assumptions the circuit is linear. Hence, we can analyse the contribution to the output of each input signal independently. Figure 14.2 shows the model used to analyse the contribution of signal \(v_I^+\) where we have combined \(r_{ON}\) with the source resistance (assumed equal at all inputs)

$$\begin{aligned} r = r_{ON} + R_S \end{aligned}$$

and where we assume the switches to be closed when the corresponding LO control signal \(l_i,i=0,\dotsc ,3\) is high and open when low. The control signals are defined by

$$\begin{aligned} l_i(t) :=l(t - i {\mathcal {T}}/4) \qquad i=0,\dotsc ,3 \end{aligned}$$

with l the signal introduced in Example 12.10 and shown in Fig. 12.10a with \(\tau = {\mathcal {T}}/4\).

2.1 Single Input Response

In this subsection we analyse the response to \(v_I^+\). To compute its contribution to the output, we apply an input impulse at time \(\tau \). If the impulse is applied when the input switch is open, then its contribution is zero. If it’s applied when the switch is closed, \(\tau \in (-{\mathcal {T}}/8,{\mathcal {T}}/8) \mod {\mathcal {T}}\), its contribution is described by the differential equation

$$\begin{aligned} (1 + r C D) v_A = \delta (t - \tau )\,. \end{aligned}$$

Note that the capacitor C is connected in parallel with a resistor of value r at all times! The fundamental kernel is therefore

$$\begin{aligned} W(t, \tau ) = \omega _{3dB} \,\text {e}^{-\omega _{3dB}(t - \tau )}\, \textsf{1}_{+}(t - \tau )\, l_0(\tau )\,, \qquad \omega _{3dB} = \frac{1}{r C} \end{aligned}$$

and can be interpreted as the impulse response of an LTI system whose input signal is \(v_I^+(t) l_0(t)\).

The time-varying impulse response of the system can be obtained from the above fundamental kernel by applying the variable transformation \(\xi = t - \tau \)

$$\begin{aligned} h(t,\xi ) = \omega _{3dB} \,\text {e}^{-\omega _{3dB}\xi }\, \textsf{1}_{+}(\xi )\, l_0(t - \xi )\,. \end{aligned}$$

We compute the response of the system to a complex tone through the time-varying frequency response. The latter is obtained by Fourier transforming \(h(t,\xi )\) with respect to \(\xi \)

$$\begin{aligned} \hat{h}(t,\omega ) = \int \limits _{-\infty }^\infty \, h(t,\xi ) \,\text {e}^{-\jmath \omega \xi }\, \text {d}\xi \,. \end{aligned}$$

Using the Fourier series of \(l_0\)

$$\begin{aligned} l_0(t) &= \sum _{n=-\infty }^\infty a_n \text {e}^{\jmath n \omega _{\mathcal {T}}t}\,, & a_n &= a_{-n} = {\left\{ \begin{array}{ll} \frac{1}{4} &{} n = 0\\ \frac{1}{\pi n} \sin (n \frac{\pi }{4}) &{} n > 0 \end{array}\right. } \end{aligned}$$

where \(\omega _{\mathcal {T}}= 2\pi /{\mathcal {T}}\), we have

$$\begin{aligned} \hat{h}(t, \omega ) &= \omega _{3dB} \int \limits _0^\infty \text {e}^{-(\omega _{3dB} + \jmath \omega )\xi } \sum _{n=-\infty }^\infty a_n \text {e}^{\jmath n \omega _{\mathcal {T}}(t - \xi )} \,\text {d}\xi \nonumber \\ &= \omega _{3dB} \sum _{n=-\infty }^\infty a_n \int \limits _0^\infty \text {e}^{-[\omega _{3dB} + \jmath (\omega + n \omega _{\mathcal {T}})]\xi } \,\text {d}\xi \, \text {e}^{\jmath n \omega _{\mathcal {T}}t} \nonumber \\ & = \sum _{n=-\infty }^\infty \hat{h}_n(\omega )\, \text {e}^{\jmath n \omega _{\mathcal {T}}t} \end{aligned}$$
(14.1)

with

$$\begin{aligned} \hat{h}_n(\omega ) &= \omega _{3dB} a_n \int \limits _0^\infty \text {e}^{-[\omega _{3dB} + \jmath (\omega + n \omega _{\mathcal {T}})]\xi } \,\text {d}\xi \nonumber \\ & = \frac{a_n}{1 + \jmath \frac{\omega + n \omega _{\mathcal {T}}}{\omega _{3dB}}}\,. \end{aligned}$$
(14.2)

The response of the system to a complex tone of angular frequency \(\omega \) is thus

$$\begin{aligned} v_A(t) = \sum _{n=-\infty }^\infty \hat{h}_n(\omega )\, \text {e}^{\jmath (\omega + n \omega _{\mathcal {T}}) t} \end{aligned}$$
(14.3)

and, as remarked above, is seen to be equal to the response of an LTI system with transfer function

$$\begin{aligned} H(s) = \frac{1}{1 + \frac{s}{\omega _{3dB}}} \end{aligned}$$
(14.4)

to the input

$$\begin{aligned} v_I^+(t) l_0(t) = \sum _{n=-\infty }^\infty a_n \text {e}^{\jmath (\omega + n \omega _{\mathcal {T}}) t}\,. \end{aligned}$$
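A minimal Python sketch, with normalized and assumed parameter values (\(\omega _{3dB} = 5\omega _{\mathcal {T}}\), input tone at \(0.1\omega _{\mathcal {T}}\)), that evaluates the coefficients (14.2) and synthesizes the response (14.3) to a complex input tone:

```python
import numpy as np

# Sketch evaluating (14.2)-(14.3) numerically for illustrative parameters.
T = 1.0                      # LO period (normalized)
wT = 2 * np.pi / T           # omega_T
w3dB = 5.0 * wT              # assumed cut-off, omega_3dB = 1/(r*C)
w = 0.1 * wT                 # input tone frequency (assumed)

def a(n):
    """Fourier coefficients of the 25% duty-cycle LO pulse l_0."""
    n = np.asarray(n)
    return np.where(n == 0, 0.25,
                    np.sin(np.abs(n) * np.pi / 4) / (np.pi * np.maximum(np.abs(n), 1)))

def h_hat(n, w):
    """Coefficients (14.2) of the time-varying frequency response."""
    return a(n) / (1 + 1j * (w + n * wT) / w3dB)

n = np.arange(-50, 51)
t = np.linspace(0, 2 * T, 1000)
# response (14.3) to the complex tone exp(j*w*t)
vA = (h_hat(n, w)[None, :] * np.exp(1j * np.outer(t, w + n * wT))).sum(axis=1)
print("peak |v_A| over two LO periods:", np.abs(vA).max())
```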

2.2 Input Current and Switched Resistor

Before combining the outputs from the four input signals \(v_I^+, v_I^-, v_Q^+\) and \(v_Q^-\) we compute the input current drawn from the input \(v_I^+\) when the other sources are disabled. When the input switch is closed, the input current is equal to the current flowing into the capacitor, while when the switch is open, the current is zero

$$\begin{aligned} i_S(t) = l_0(t) i_C(t), \qquad i_C(t) = C Dv_A(t)\,. \end{aligned}$$

The current is therefore given by

$$\begin{aligned} i_S(t) &= \left[ \sum _{n=-\infty }^\infty a_n \text {e}^{\jmath n \omega _{\mathcal {T}}t} \right] \left[\sum _{n=-\infty }^\infty \frac{a_n}{r} \frac{\jmath \frac{(\omega + n \omega _{\mathcal {T}})}{\omega _{3dB}}}{1 + \jmath \frac{\omega + n \omega _{\mathcal {T}}}{\omega _{3dB}}} \, \text {e}^{\jmath (\omega + n \omega _{\mathcal {T}}) t} \right] \nonumber \\ &= \sum _{m=-\infty }^\infty \sum _{n=-\infty }^\infty \frac{a_m a_n}{r} \frac{\jmath \frac{(\omega + n \omega _{\mathcal {T}})}{\omega _{3dB}}}{1 + \jmath \frac{\omega + n \omega _{\mathcal {T}}}{\omega _{3dB}}} \, \text {e}^{\jmath [\omega + (n + m) \omega _{\mathcal {T}}] t} \nonumber \\ & = \sum _{k=-\infty }^\infty y_k(\omega )\, \text {e}^{\jmath (\omega + k \omega _{\mathcal {T}}) t} \end{aligned}$$
(14.5)

with

$$\begin{aligned} y_k(\omega ) :=\sum _{n=-\infty }^\infty a_{k-n} a_n \frac{\jmath (\omega + n \omega _{\mathcal {T}}) C}{1 + \jmath \frac{\omega + n \omega _{\mathcal {T}}}{\omega _{3dB}}} \end{aligned}$$
(14.6)

where in the last step we made the substitution \(k = m + n\).

Let’s consider more closely the term for \(k=0\). Substituting the expression for \(a_n\) we obtain

$$\begin{aligned} y_0(\omega ) = \jmath \omega \frac{C}{16} + \sum _{n \ne 0} \frac{\sin ^2(n\frac{\pi }{4})}{(\pi n)^2} \frac{\jmath (\omega + n \omega _{\mathcal {T}}) C}{1 + \jmath \frac{\omega + n \omega _{\mathcal {T}}}{\omega _{3dB}}}\,. \end{aligned}$$

To find an approximate value for this series it's useful to separate real and imaginary parts

$$\begin{aligned} y_0(\omega ) = g_0(\omega ) + \jmath b_0(\omega )\,. \end{aligned}$$

We start by simplifying the imaginary part

$$\begin{aligned} b_0(\omega ) = \omega \frac{C}{16} + \sum _{n \ne 0} \frac{\sin ^2(n\frac{\pi }{4})}{(\pi n)^2} \frac{(\omega + n \omega _{\mathcal {T}}) C}{1 + \Big (\frac{\omega + n \omega _{\mathcal {T}}}{\omega _{3dB}}\Big )^2} \end{aligned}$$

The first thing to note is that the terms proportional to \(n \omega _{\mathcal {T}}\) with \(n>0\) cancel with the ones for \(n<0\) so that we obtain

$$\begin{aligned} b_0(\omega ) = \omega C \Bigg [ \frac{1}{16} + \sum _{n \ne 0} \frac{\sin ^2(n\frac{\pi }{4})}{(\pi n)^2} \frac{1}{1 + \Big (\frac{\omega + n \omega _{\mathcal {T}}}{\omega _{3dB}}\Big )^2} \Bigg ]\,. \end{aligned}$$
(14.7)

All terms in the square bracket are positive. If we assume \(\left|\omega \right| \ll \omega _ {\mathcal {T}}< \omega _{3dB}\) the terms decrease as \(1/n^2\) for \(n < \omega _{3dB}/\omega _{\mathcal {T}}\) and as \(1/n^4\) for larger values of n. We can therefore bound the series by

$$\begin{aligned} \sum _{n \ne 0} \frac{\sin ^2(n\frac{\pi }{4})}{(\pi n)^2} \frac{1}{1 + \Big (\frac{\omega + n \omega _{\mathcal {T}}}{\omega _{3dB}}\Big )^2} < \sum _{n \ne 0} \frac{1}{(\pi n)^2}\,. \end{aligned}$$

Using the known result

$$\begin{aligned} \sum _{n=1}^\infty \frac{1}{n^2} = \frac{\pi ^2}{6} \end{aligned}$$

we thus obtain the upper bound

$$\begin{aligned} \frac{b_0(\omega )}{\omega C} < \frac{1}{16} + 2 \sum _{n = 1}^\infty \frac{1}{(\pi n)^2} = \frac{19}{48} \approx 0.40\,. \end{aligned}$$

This bound is tighter for large values of \(\omega _{3dB}/\omega _{\mathcal {T}}\).

To obtain a value closer to the actual value of the series we note that for \(N \gg 1\)

$$\begin{aligned} \sum _{n = 1}^N \sin ^2(n\frac{\pi }{4}) \approx \frac{N}{2}. \end{aligned}$$
(14.8)

Instead of bounding \(\sin ^2(n \pi /4)\) by 1, we approximate its value by 1/2 independently of n to obtain

$$\begin{aligned} \frac{b_0(\omega )}{\omega C} \approx \frac{1}{16} + \frac{1}{6} = \frac{11}{48} \approx 0.23\,. \end{aligned}$$

Figure 14.3 shows the normalized value of \(b_0\) as a function of \(\omega _{3dB}/\omega _{\mathcal {T}}\) computed from (14.7). We see that, while the argument to obtain the above approximate value is questionable, for large values of \(\omega _{3dB}/\omega _{\mathcal {T}}\) the approximation is remarkably close to the real value.

Fig. 14.3 Normalized values of the real and imaginary parts of \(y_0(0.1\omega _{\mathcal {T}})\) as a function of the ratio \(\omega _{3dB}/\omega _{\mathcal {T}}\)

We next turn to the real part of \(y_0(\omega )\)

$$\begin{aligned} g_0(\omega ) = \frac{1}{r} \sum _{n \ne 0} \frac{\sin ^2(n\frac{\pi }{4})}{(\pi n)^2} \frac{\Big (\frac{\omega + n \omega _{\mathcal {T}}}{\omega _{3dB}}\Big )^2}{1 + \Big (\frac{\omega + n \omega _{\mathcal {T}}}{\omega _{3dB}}\Big )^2} \end{aligned}$$
(14.9)

If we assume \(\left|\omega \right| \ll \omega _ {\mathcal {T}}< \omega _{3dB}\) then to a good approximation we have

$$\begin{aligned} g_0(\omega ) \approx \frac{2}{r} \sum _{n = 1}^\infty \frac{\sin ^2(n\frac{\pi }{4})}{\pi ^2} \frac{\Big (\frac{\omega _{\mathcal {T}}}{\omega _{3dB}}\Big )^2}{1 + \Big (\frac{n \omega _{\mathcal {T}}}{\omega _{3dB}}\Big )^2} \end{aligned}$$

If \(\omega _{\mathcal {T}}/ \omega _{3dB} \ll 1\) then the quadratic term in the denominator can be neglected in a large number of terms up to approximately \(N = \lceil \frac{\omega _{3dB}}{\omega _{\mathcal {T}}} \rceil \). The first N terms of the series contribute the largest part of its total value. Therefore, referring again to the approximation (14.8), we approximate again \(\sin ^2(n \pi /4)\) by 1/2. Instead of neglecting the terms for \(n > N\) we approximate the series by the integral

$$\begin{aligned} \frac{1}{\pi ^2 r} \frac{\omega _{\mathcal {T}}}{\omega _{3dB}} \int \limits _0^\infty \frac{1}{1 + x^2} \,\text {d}x \end{aligned}$$

with

$$\begin{aligned} n \frac{\omega _{\mathcal {T}}}{\omega _{3dB}} \rightarrow x, \qquad \frac{\omega _{\mathcal {T}}}{\omega _{3dB}} \rightarrow \text {d}x. \end{aligned}$$

This integral is easily solved and we finally obtain

$$\begin{aligned} g_0(\omega ) \approx \frac{1}{r \pi ^2} \frac{\omega _{\mathcal {T}}}{\omega _{3dB}} \frac{\pi }{2} = \frac{C}{{\mathcal {T}}}\,. \end{aligned}$$

Figure 14.3 shows the normalized value of \(g_0\) as a function of \(\omega _{3dB}/\omega _{\mathcal {T}}\) computed from Eq. (14.9). For \(\omega _{3dB}/\omega _{\mathcal {T}}> 3\) it’s in very good agreement with the given approximation.
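The two series (14.7) and (14.9) are easily evaluated numerically. The following Python sketch (normalized, with an assumed ratio \(\omega _{3dB}/\omega _{\mathcal {T}}= 10\)) compares them with the approximate values 11/48 and \(C/{\mathcal {T}}\) derived above:

```python
import numpy as np

# Sketch evaluating (14.7) and (14.9) and comparing them with the approximate
# values derived in the text.  All quantities are normalized; the ratio
# omega_3dB/omega_T and the truncation limit are assumptions for illustration.
T = 1.0
wT = 2 * np.pi / T
w = 0.1 * wT                  # evaluation frequency, as in Fig. 14.3
ratio = 10.0                  # omega_3dB / omega_T (assumed)
w3dB = ratio * wT
r = 1.0                       # normalized resistance
C = 1.0 / (r * w3dB)          # so that omega_3dB = 1/(r*C)

n = np.concatenate([np.arange(-2000, 0), np.arange(1, 2001)])   # n != 0
s2 = np.sin(n * np.pi / 4) ** 2 / (np.pi * n) ** 2
x = (w + n * wT) / w3dB

b0 = w * C * (1 / 16 + np.sum(s2 / (1 + x ** 2)))               # Eq. (14.7)
g0 = (1 / r) * np.sum(s2 * x ** 2 / (1 + x ** 2))               # Eq. (14.9)

print("b0/(w*C) =", b0 / (w * C), "  approximation 11/48 =", 11 / 48)
print("g0       =", g0, "  approximation C/T =", C / T)
```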

Fig. 14.4 Normalized \(i_S(t)\) for a cosinusoidal input with \(\omega = 0.1\omega _{\mathcal {T}}, \omega _{3dB} = 2 \omega _{\mathcal {T}}\) computed using (14.5) truncated at \(\left|k \right| = 60\) and \(\left|n \right| = 200\)

Figure 14.4 shows the normalized current \(i_S(t)\) for a cosinusoidal input with \(\omega = 0.1\omega _{\mathcal {T}}\) and \(\omega _{3dB} = 2 \omega _{\mathcal {T}}\). The curve consists of peaks coinciding with the closing instants of switch 0, followed by an exponential decay with a time constant of approximately \(1/\omega _{3dB}\) and a sudden jump to zero at the instants where switch 0 is opened. (The oscillations around the instants where switch 0 changes state are due to the Gibbs phenomenon of the Fourier series.) As the time constant is shortened by reducing the value of r, the curve converges to a series of Dirac pulses at the closing instants of switch 0. If we shift the closing instants of switch 0 to multiples of \({\mathcal {T}}\) we can express this behavior by

$$\begin{aligned} \lim _{\begin{array}{c} r \rightarrow 0\\ \omega \rightarrow 0 \end{array}} i_S\left( t - \frac{{\mathcal {T}}}{8}\right) = \sum _{k=-\infty }^\infty \frac{C}{{\mathcal {T}}} \text {e}^{\jmath k \omega _{\mathcal {T}}t}\,. \end{aligned}$$

The discrete spectrum of \(i_S(t - {\mathcal {T}}/8)\) for \(\omega _{3dB} = 3 \omega _{\mathcal {T}}, 20 \omega _{\mathcal {T}}\) and \(\omega = 0.1\omega _{\mathcal {T}}\) is shown in Fig. 14.5. The figure shows that as the value of \(\omega _{3dB} / \omega _{\mathcal {T}}\) is increased, an increasing number of coefficients \(y_k(\omega )\) tend to approach the value of \(C/{\mathcal {T}}\) as expected. At a value of \(k \approx \omega _{3dB} / \omega _{\mathcal {T}}\) the real and imaginary parts have roughly the same value and for larger values of k the magnitude of \(y_k(\omega )\) decreases.
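The spectral coefficients \(y_k(\omega )\) in (14.6) can be evaluated directly. The following Python sketch (with an assumed ratio \(\omega _{3dB}/\omega _{\mathcal {T}}= 20\) and truncation limits chosen for illustration) computes a few of them with the same phase reference as Fig. 14.5:

```python
import numpy as np

# Sketch evaluating the spectral coefficients y_k of the input current,
# Eq. (14.6), with a truncated inner sum.  omega = 0.1*omega_T as in Fig. 14.5;
# the cut-off ratio and truncation limits are assumptions.
T = 1.0
wT = 2 * np.pi / T
w = 0.1 * wT
w3dB = 20.0 * wT
C = 1.0                      # normalized capacitance

def a(n):
    n = np.asarray(n, dtype=float)
    out = np.sin(np.abs(n) * np.pi / 4) / (np.pi * np.where(n == 0, 1, np.abs(n)))
    return np.where(n == 0, 0.25, out)

n = np.arange(-2000, 2001)

def y(k):
    num = 1j * (w + n * wT) * C
    den = 1 + 1j * (w + n * wT) / w3dB
    return np.sum(a(k - n) * a(n) * num / den)

for k in (0, 1, 5, 20, 40):
    yk = y(k) * np.exp(-1j * (w + k * wT) * T / 8)   # phase reference of Fig. 14.5
    print(f"k={k:3d}  Re={yk.real/(C/T):7.3f}  Im={yk.imag/(C/T):7.3f}  (normalized to C/T)")
```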

Fig. 14.5 Normalized real and imaginary parts of \(y_k(\omega ) \text {e}^{-\jmath (\omega + k\omega _{\mathcal {T}}){\mathcal {T}}/8}\) for \(\omega = 0.1\omega _{\mathcal {T}}\) as a function of k computed with (14.6) truncated to \(\left|n \right| = 2000\). The points between the discrete values of k were joined to better highlight the trend

2.3 Full Response

We now go back to the voltage across the capacitor \(v_A\) and calculate the combined response to all four signals \(v_I^+, v_I^-, v_Q^+\) and \(v_Q^-\). To distinguish the four responses we will add a subscript equal to the one of the corresponding LO signal. Thus in the following we will denote the response to the signal \(v_I^+\) given in (14.3) by \(v_{A,0}\). The contribution of \(v_I^-\) differs from the one of \(v_I^+\) by (i) a shift by \({\mathcal {T}}/2\) in the LO waveform and (ii) a reversal of sign of the input signal. Its contribution to \(v_A\) is therefore

$$\begin{aligned} v_{A,2}(t) &= - \text {e}^{\jmath \omega t} \sum _{n=-\infty }^\infty \hat{h}_n(\omega )\, \text {e}^{\jmath n \omega _{\mathcal {T}}(t - {\mathcal {T}}/2)} \\ &= \sum _{n=-\infty }^\infty (-1)^{n+1} \hat{h}_n(\omega )\, \text {e}^{\jmath (\omega + n \omega _{\mathcal {T}}) t}\,. \end{aligned}$$

Note that the even harmonics have opposite sign compared to the ones of \(v_{A,0}\), while the odd ones have the same sign. Therefore the combined response of \(v_I^+\) and \(v_I^-\) consists of odd harmonics only

$$\begin{aligned} v_{A,0}(t) + v_{A,2}(t) = 2 \sum _{n\ \text {odd}} \hat{h}_n(\omega )\, \text {e}^{\jmath (\omega + n \omega _{\mathcal {T}}) t}\,. \end{aligned}$$

The response to the signal \(v_Q^+\) differs from the one to \(v_I^+\) by (i) a shift by \(-{\mathcal {T}}/4\) of the LO signal and (ii) a shift of \({\mathcal {T}}/4\) in the input signal

$$\begin{aligned} v_Q^+ = -\jmath \text {e}^{\jmath \omega t}\,. \end{aligned}$$

Its contribution to \(v_A\) is therefore

$$\begin{aligned} v_{A,3}(t) &= - \jmath \text {e}^{\jmath \omega t} \sum _{n=-\infty }^\infty \hat{h}_n(\omega )\, \text {e}^{\jmath n \omega _{\mathcal {T}}(t + {\mathcal {T}}/4)}\\ &= - \sum _{n=-\infty }^\infty \jmath ^{n+1} \hat{h}_n(\omega )\, \text {e}^{\jmath (\omega + n \omega _{\mathcal {T}}) t}\,. \end{aligned}$$

Similarly, the response to the signal \(v_Q^-\) differs from the one to \(v_I^+\) by (i) a shift by \({\mathcal {T}}/4\) of the LO signal and (ii) a shift of \(-{\mathcal {T}}/4\) in the input signal

$$\begin{aligned} v_{A,1}(t) &= \jmath \text {e}^{\jmath \omega t} \sum _{n=-\infty }^\infty \hat{h}_n(\omega )\, \text {e}^{\jmath n \omega _{\mathcal {T}}(t - {\mathcal {T}}/4)}\\ &= - \sum _{n=-\infty }^\infty (-\jmath )^{n+1} \hat{h}_n(\omega )\, \text {e}^{\jmath (\omega + n \omega _{\mathcal {T}}) t}\,. \end{aligned}$$

Again we note that odd harmonics of \(v_{A,3}\) and \(v_{A,1}\) have the same sign, while even ones have opposite sign. The combined response of these two signals is therefore also composed of odd harmonics only

$$\begin{aligned} v_{A,3}(t) + v_{A,1}(t) &= -2 \sum _{n\ \text {odd}} \jmath ^{n+1} \hat{h}_n(\omega )\, \text {e}^{\jmath (\omega + n \omega _{\mathcal {T}}) t}\\ &= -2 \sum _{n\ \text {odd}} (-1)^{(n+1)/2} \hat{h}_n(\omega )\, \text {e}^{\jmath (\omega + n \omega _{\mathcal {T}}) t}\,. \end{aligned}$$

We now combine the two partial sums \(v_{A,0} + v_{A,2}\) and \(v_{A,3} + v_{A,1}\). The terms for \(n = 1 + 4 m, m \in {\mathbb {Z}}\) have the same sign, while the terms at \(n = -1 + 4m\) have opposite sign. The total sum is therefore

$$\begin{aligned} v_A(t) &= v_{A,0}(t) + v_{A,2}(t) + v_{A,3}(t) + v_{A,1}(t)\\ &= 4 \sum _{m = -\infty }^\infty \hat{h}_{1+4\,m}(\omega )\, \text {e}^{\jmath [\omega + (1+4m) \omega _{\mathcal {T}}] t}\,. \end{aligned}$$

Note again that the response of the system is equal to the one of an LTI system with the transfer function given by (14.4) and driven by the input signal

$$\begin{aligned} x(t) = v_I^+(t) \, l_0(t) + v_I^-(t) \, l_2(t) + v_Q^+(t) \, l_3(t) + v_Q^-(t) \, l_1(t)\,. \end{aligned}$$

Using four signal paths (two differential) this quadrature modulator cancels three out of every four spurious emission tones. It's a transmitter implementation of the harmonic-reject mixer presented in Example 12.10 where we showed that suppressing more harmonics requires a larger number of signal paths.
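This harmonic cancellation is easy to verify numerically. The following Python sketch builds the four LO gating waveforms in the time domain for a unit complex input tone (taken at DC for clarity, an assumption), forms the effective input signal given above, and checks by FFT that only the components at \((1+4m)\omega _{\mathcal {T}}\) survive.

```python
import numpy as np

# Sketch verifying that the effective input
# x = vI+*l0 + vI-*l2 + vQ+*l3 + vQ-*l1 only contains components at
# (1+4m)*omega_T + omega.  Normalized values; the input tone is placed at DC
# so that all spectral lines fall exactly on FFT bins.
T = 1.0
wT = 2 * np.pi / T
w = 0.0                       # baseband tone frequency (assumed, for clarity)
N = 4096
t = np.arange(N) * T / N

def lo(i):
    """25% duty-cycle gate signal l_i(t) = l(t - i*T/4), centred on t = i*T/4."""
    phase = ((t - i * T / 4) / T + 0.5) % 1.0 - 0.5
    return (np.abs(phase) < 1 / 8).astype(float)

tone = np.exp(1j * w * t)
# vI+ = tone, vI- = -tone, vQ+ = -j*tone, vQ- = +j*tone
x = tone * lo(0) - tone * lo(2) - 1j * tone * lo(3) + 1j * tone * lo(1)

X = np.fft.fft(x) / N                     # coefficient of exp(j*n*wT*t) at index n
for n in range(-6, 7):
    print(f"n={n:3d}  |X_n|={abs(X[n]):.4f}")
# only n = ..., -7, -3, 1, 5, ... (i.e. n = 1+4m) should be non-negligible
```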

The above derivation of the output signal highlights the fact that suppression of harmonics relies on exact cancelling of strong tones. We will investigate some imperfections limiting the amount of cancelling achievable in practical implementations in later sections. Before turning to that question we investigate the effect called carrier leakage.

2.4 Carrier Leakage

Carrier leakage refers to the presence of a tone at \(\pm \omega _{\mathcal {T}}\) in the output spectrum of the modulator. In transmitters, it is one of the undesired tones close to the signal of interest (or, depending on the architecture, indeed overlapping with the modulated wanted signal) that can't be easily filtered. It is caused by the presence of small, DC offset voltages at the inputs of the modulator. These offset voltages are the result of mismatch in the driving circuits and can therefore be modeled as Gaussian random variables.

We denote the DC offset random variables by \(X_k, k=0,\dotsc ,3\) where the index matches the one of the corresponding switch. They form the following signal at the input of the equivalent LTI system

$$\begin{aligned} X_0 \, l_0(t) + X_1 \, l_1(t) &+ X_2 \, l_2(t) + X_3 \, l_3(t)\nonumber \\ &= \sum _{n = -\infty }^\infty \big ( X_0 + X_1 \text {e}^{-\jmath \frac{\pi }{2}n} + X_2 \text {e}^{-\jmath \pi n} + X_3 \text {e}^{-\jmath \frac{3\pi }{2}n} \big ) a_n \text {e}^{\jmath n \omega _{\mathcal {T}}t}\nonumber \\ &= \sum _{n = -\infty }^\infty \big ( X_0 + X_1 (-\jmath )^n + X_2 (-1)^n + X_3 (\jmath )^n \big ) a_n \text {e}^{\jmath n \omega _{\mathcal {T}}t}\,. \end{aligned}$$
(14.10)

Since usually the most problematic tone is the one at \(\pm \omega _{\mathcal {T}}\) we only consider the terms for \(n=-1,1\). The term for \(n=1\) is

$$\begin{aligned} \big [X_0 - X_2 + \jmath (X_3 - X_1)\big ] a_1 \text {e}^{\jmath \omega _{\mathcal {T}}t} \end{aligned}$$

and the one at \(n=-1\) is its complex conjugate. The sum of the two terms gives

$$\begin{aligned} 2 a_1 \big [ X_c \cos (\omega _{\mathcal {T}}t) - X_s \sin (\omega _{\mathcal {T}}t) \big ], \qquad X_c = X_0 - X_2\,,\quad X_s = X_3 - X_1\,. \end{aligned}$$

Linear combinations of independent Gaussian random variables are Gaussian. Therefore, if we assume \(X_k, k=0,\dotsc ,3\) to be independent of each other, \(X_c\) and \(X_s\) are independent Gaussian random variables as well. We denote the standard deviation of \(X_c\) and \(X_s\) by \(\sigma _X\). Their joint probability density function (PDF) is

$$\begin{aligned} p_{X_c,X_s}(x_c, x_s) = p_{X_c}(x_c) p_{X_s}(x_s) = \frac{1}{2\pi \sigma _X^2} \text {e}^{-\frac{x_c^2 + x_s^2}{2\sigma _X^2}}\,. \end{aligned}$$

It is now convenient to pass to polar random variables. Specifically, using the relation

$$\begin{aligned} \cos (\omega t + \phi ) = \cos (\phi )\cos (\omega t) - \sin (\phi )\sin (\omega t) \end{aligned}$$

the sum of the input terms for \(n=1\) and -1 can be rewritten as

$$\begin{aligned} 2 a_1 X_r \cos (\omega _{\mathcal {T}}t + X_\phi ) \end{aligned}$$

with the new polar random variables

$$\begin{aligned} X_r &= \sqrt{X_c^2 + X_s^2}\\ X_\phi &= \arctan \frac{X_s}{X_c}\,. \end{aligned}$$

Given that the probability density in terms of \(X_c, X_s\) must agree with the one in terms of \(X_r,X_\phi \), we must have

$$\begin{aligned} p_{X_c,X_s}(x_c,x_s) \text {d}x_c \text {d}x_s = p_{X_r,X_\phi }(x_r,x_\phi ) \text {d}x_r \text {d}x_\phi \,. \end{aligned}$$

From this equation and \(\text {d}x_c \text {d}x_s = x_r \text {d}x_r \text {d}x_\phi \) we therefore deduce

$$\begin{aligned} p_{X_r,X_\phi }(x_r, x_\phi ) = \frac{x_r}{2\pi \sigma _X^2} \text {e}^{-\frac{x_r^2}{2\sigma _X^2}}\,. \end{aligned}$$

This joint probability density function is easily factored

$$\begin{aligned} p_{X_r,X_\phi }(x_r, x_\phi ) = p_{X_r}(x_r) p_{X_\phi }(x_\phi ) \end{aligned}$$

which implies that \(X_r\) and \(X_\phi \) are independent random variables with the following probability density functions

$$\begin{aligned} p_{X_r}(x_r) &= \frac{1}{\sigma _X^2} x_r \text {e}^{-\frac{x_r^2}{2\sigma _X^2}}\,, \qquad x_r \ge 0\end{aligned}$$
(14.11)
$$\begin{aligned} p_{X_\phi }(x_\phi ) &= {\left\{ \begin{array}{ll} \frac{1}{2\pi } &{} 0 \le x_\phi < 2\pi \\ 0 &{} \text {otherwise}\,. \end{array}\right. } \end{aligned}$$
(14.12)

The phase random variable \(X_\phi \) is uniformly distributed over the full circle. The distribution of the variable \(X_r\) is called the Rayleigh distribution. Its PDF and complementary cumulative distribution function \(1 - F_{X_r}(x_r)\) are plotted in Fig. 14.6. The PDF assumes its maximum at \(x_r = \sigma _X\). The expected value and variance of \(X_r\) are

$$\begin{aligned} {\text {E}}[X_r] = \int \limits _0^\infty x_r p_{X_r}(x_r) dx_r = \sqrt{\frac{\pi }{2}} \sigma _X \end{aligned}$$

and

$$\begin{aligned} {\text {Var}}(X_r) = {\text {E}}[(X_r - {\text {E}}[X_r])^2] = \frac{4-\pi }{2} \sigma _X^2 \end{aligned}$$

respectively.

Fig. 14.6 Rayleigh distribution for \(\sigma _X = 1\). a Probability density function. b Complementary cumulative distribution function \(1 - F_{X_r}(x_r)\)

The carrier leakage of the modulator is therefore given by

$$\begin{aligned} X_r\frac{\sqrt{2}}{\pi } \Re \{ H(\jmath \omega _{\mathcal {T}}) \text {e}^{\jmath (\omega _{\mathcal {T}}t + X_\phi )} \} \end{aligned}$$

with the phase uniformly distributed over the full circle. The magnitude of the tone is Rayleigh distributed and, if \(\omega _{\mathcal {T}}\ll \omega _{3dB}\), has an expected value of

$$\begin{aligned} \frac{\sigma _X}{\sqrt{\pi }} \end{aligned}$$

From Fig. 14.6 we read that, under the same assumption, 0.1% of the modulators have a carrier leakage magnitude exceeding

$$\begin{aligned} 3.7 \frac{\sqrt{2}}{\pi } \sigma _X \approx 1.67 \sigma _X\,. \end{aligned}$$
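The statistics derived above are easy to check by simulation. The following Python sketch, with an assumed offset standard deviation, draws independent Gaussian offsets, forms the Rayleigh-distributed magnitude and compares its mean, variance and 99.9th percentile with the values quoted above (for \(\omega _{\mathcal {T}}\ll \omega _{3dB}\), so that \(\left|H(\jmath \omega _{\mathcal {T}})\right| \approx 1\)).

```python
import numpy as np

# Monte-Carlo sketch of the carrier-leakage statistics: X_c and X_s are
# independent zero-mean Gaussians with standard deviation sigma_X (assumed),
# X_r = sqrt(X_c^2 + X_s^2) is Rayleigh distributed.
rng = np.random.default_rng(0)
sigma_X = 1e-3                              # assumed offset std. dev. [V]
n = 1_000_000
Xc = rng.normal(0, sigma_X, n)
Xs = rng.normal(0, sigma_X, n)
Xr = np.hypot(Xc, Xs)

print("E[X_r]            :", Xr.mean(), " theory:", np.sqrt(np.pi / 2) * sigma_X)
print("Var(X_r)          :", Xr.var(), " theory:", (4 - np.pi) / 2 * sigma_X**2)
# leakage magnitude ~ X_r*sqrt(2)/pi for omega_T << omega_3dB
leak = Xr * np.sqrt(2) / np.pi
print("E[leakage]        :", leak.mean(), " theory:", sigma_X / np.sqrt(np.pi))
print("99.9th percentile :", np.quantile(leak, 0.999), " theory approx.:", 1.67 * sigma_X)
```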

2.5 Image-Rejection

In this subsection we come back to the imperfect cancelling of harmonics in practical implementations. We assume again the common case of a transmitter up-converting the input signal to \(\omega + \omega _{\mathcal {T}}\) with \(\left|\omega \right| \ll \omega _{\mathcal {T}}\lesssim \omega _{3dB}\).

In previous calculations we assumed perfectly balanced signals \(v_I^- = - v_I^+\), \(v_Q^- = - v_Q^+\), equal amplitudes for all signals, a phase difference between \(v_I^+\) and \(v_Q^+\) of exactly \(\pi /2\) and delays between the LO signals of exactly \({\mathcal {T}}/4\). If we now introduce small differences in the amplitudes

$$\begin{aligned} v_I^+(t) = (A_I + \Delta A_I/2) \text {e}^{\jmath \omega t}\,, \quad v_I^-(t) = -(A_I - \Delta A_I/2) \text {e}^{\jmath \omega t} \end{aligned}$$

the even harmonics of \(v_I^+(t) l_0(t) + v_I^-(t) l_2(t)\) do not cancel perfectly anymore

$$\begin{aligned} v_I^+(t) l_0(t) + v_I^-(t) l_2(t) = \sum _{n\ \text {odd}} 2 A_I a_n \text {e}^{\jmath (n\omega _{\mathcal {T}}+ \omega ) t} + \sum _{n\ \text {even}} \Delta A_I a_n \text {e}^{\jmath (n\omega _{\mathcal {T}}+ \omega ) t} \end{aligned}$$

and similarly for the signal \(v_Q^+(t) l_3(t) + v_Q^-(t) l_1(t)\)

$$\begin{aligned} v_Q^+(t) l_3(t) + v_Q^-(t) l_1(t) = &- \sum _{n\ \text {odd}} \jmath ^{n+1} 2 A_Q a_n \text {e}^{\jmath [(n\omega _{\mathcal {T}}+ \omega ) t - n\Delta \phi ]} \\ &- \sum _{n\ \text {even}} \jmath ^{n+1} \Delta A_Q a_n \text {e}^{\jmath [(n\omega _{\mathcal {T}}+ \omega ) t - n\Delta \phi ]} \end{aligned}$$

where in addition we have added a small delay error of \(\Delta \tau \) in \(l_3\) and \(l_1\) and set \(\Delta \phi = 2\pi \Delta \tau /{\mathcal {T}}\). If we now sum these partial sums, the complete cancelling that occurred for three harmonics out of four becomes only partial. In particular this partial cancelling causes the appearance of a tone at \(\left|\omega _{\mathcal {T}}- \omega \right|\) (for \(n=-1\)) which is difficult to filter as, in a single-sided representation, it appears very close to the wanted signal at \(\omega _{\mathcal {T}}+ \omega \). The tone at \(\left|\omega _{\mathcal {T}}- \omega \right|\) is called the image of the wanted signal. The ratio of the magnitude of the image to the one of the signal is called the image-reject ratio (IRR) and is given by

$$\begin{aligned} IRR &= \Bigg {|}{\frac{A_I - A_Q\text {e}^{\jmath \Delta \phi }}{A_I + A_Q\text {e}^{-\jmath \Delta \phi }}}\Bigg {|} = \Bigg {|}{\frac{A_I \text {e}^{-\jmath \Delta \phi /2} - A_Q \text {e}^{\jmath \Delta \phi /2}}{A_I \text {e}^{\jmath \Delta \phi /2} + A_Q \text {e}^{-\jmath \Delta \phi /2}}}\Bigg {|}\nonumber \\ &= \sqrt{ \frac{(A_I - A_Q)^2 \cos ^2(\Delta \phi /2) + (A_I + A_Q)^2 \sin ^2(\Delta \phi /2)}{(A_I + A_Q)^2 \cos ^2(\Delta \phi /2) + (A_I - A_Q)^2 \sin ^2(\Delta \phi /2)}}\nonumber \\ &\approx \sqrt{\Big (\frac{A_I - A_Q}{A_I + A_Q}\Big )^2 + \tan ^2(\Delta \phi /2)} \end{aligned}$$
(14.13)

where in the last step we neglected the term \((A_I - A_Q)^2 \sin ^2(\Delta \phi /2)\) in the denominator which is of second order in the errors. Note that part of the phase error could well come from the input signal. This is the IRR of the effective signal at the input of the LTI system H(s). It is plotted in Fig. 14.7.
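A small Python sketch (with assumed, illustrative gain and phase errors) evaluating both the exact and the approximate form of (14.13):

```python
import numpy as np

# Sketch evaluating the exact and approximate image-reject ratio (14.13).
def irr_exact(AI, AQ, dphi):
    num = (AI - AQ) ** 2 * np.cos(dphi / 2) ** 2 + (AI + AQ) ** 2 * np.sin(dphi / 2) ** 2
    den = (AI + AQ) ** 2 * np.cos(dphi / 2) ** 2 + (AI - AQ) ** 2 * np.sin(dphi / 2) ** 2
    return np.sqrt(num / den)

def irr_approx(AI, AQ, dphi):
    return np.sqrt(((AI - AQ) / (AI + AQ)) ** 2 + np.tan(dphi / 2) ** 2)

AI, AQ = 1.0, 1.02                      # 2% amplitude imbalance (assumption)
dphi = np.deg2rad(1.0)                  # 1 degree phase error (assumption)
print("IRR exact  :", irr_exact(AI, AQ, dphi))
print("IRR approx :", irr_approx(AI, AQ, dphi))
print("IRR in dB  :", 20 * np.log10(irr_exact(AI, AQ, dphi)))
```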

Fig. 14.7 Mixer image-reject ratio

2.6 Effect of Mismatch

In this subsection we investigate the effect of mismatch which, as we will see, is another phenomenon limiting the amount of harmonic cancelling achievable in practical implementations.

Due to mismatch, each of the four transistors with which the modulator is implemented (see Figs. 14.1 and 14.2) presents a slightly different \(r_{ON}\) resistance and similarly for the four source resistors. Therefore, the value of the resistance connected to the capacitor is not independent of time, but is time-varying

$$\begin{aligned} r(t) = \sum _{k = 0}^3 r_{k} \, l_k(t)\,, \qquad r_k \in {\mathbb {R}}, k=0,\dotsc ,3\,. \end{aligned}$$

The differential equation describing the system therefore becomes

$$\begin{aligned}\begin{gathered} [D+ \omega _{3dB}(t)] v_A = \omega _{3dB}(t) x(t)\,, \qquad \omega _{3dB}(t) :=\frac{1}{r(t) C}\\ x(t) :=v_I^+(t) \, l_0(t) + v_I^-(t) \, l_2(t) + v_Q^+(t) \, l_3(t) + v_Q^-(t) \, l_1(t) \end{gathered}\end{aligned}$$

where we denoted the sum of the input signals by x. Since the variation of the resistance from the nominal value is small, we can solve the equation using the perturbation method and proceed as in Example 12.8 (Fig. 14.8).

Fig. 14.8 Possible sample waveform of the time-varying cut-off frequency \(\omega _{3dB}(t)\). Variation greatly exaggerated for illustration

As a first step we develop \(\omega _{3dB}(t)\) in a Fourier series and decompose it in two parts

$$\begin{aligned} \omega _{3dB}(t) &= \omega _c(t) + \omega _s(t)\end{aligned}$$
(14.14)
$$\begin{aligned} \omega _c(t) = r_0 \, l_0(t) + r_2 \, l_2(t) & = \omega _{c,0} + X_c \, \sum _{n = 1}^\infty w_n \cos (n \omega _{\mathcal {T}}t)\end{aligned}$$
(14.15)
$$\begin{aligned} \omega _s(t) = r_1 \, l_1(t) + r_3 \, l_3(t) & = \omega _{s,0} - X_s \, \sum _{n = 1}^\infty w_n \sin (n \omega _{\mathcal {T}}t) \end{aligned}$$
(14.16)

with

$$\begin{aligned} w_n = \frac{4}{\pi n} \sin (n \frac{\pi }{4})\,, \qquad n > 0\,. \end{aligned}$$

\(\omega _c(t)\) corresponds to the curve of Fig. 12.10b with \(\tau = {\mathcal {T}}/4\) scaled by \(X_c\) plus a constant term. \(\omega _s(t)\) is constructed similarly, but with the curve of Fig. 12.10b shifted by \(-{\mathcal {T}}/4\). We use the symbols \(X_c\) and \(X_s\) to denote independent Gaussian random variables as in Sect. 14.2.4, but they are not related to the quantities of that section. We also denote again their standard deviation by \(\sigma _X\).

The sum of the constant terms (which are also random variables) is the average frequency

$$\begin{aligned} \omega _0 = \omega _{c,0} + \omega _{s,0}\,. \end{aligned}$$

The variable part of \(\omega _{3dB}(t)\) can be written as

$$\begin{aligned} \sum _{n = 1}^\infty w_n [X_c\cos (n \omega _{\mathcal {T}}t) - X_s\sin (n \omega _{\mathcal {T}}t)] \end{aligned}$$

Proceeding as in the analysis of carrier leakage we can express it in terms of the polar random variables \(X_r\) and \(X_\phi \)

$$\begin{aligned} \sum _{n = 1}^\infty w_n X_r\cos (n \omega _{\mathcal {T}}t + X_\phi )\,. \end{aligned}$$

Using this form for \(\omega _{3dB}(t)\) the differential equation can be written as

$$\begin{aligned} (D+ \omega _0) v_A = \omega _{3dB}(t) x(t) - \sum _{n = 1}^\infty w_n X_r\cos (n \omega _{\mathcal {T}}t + X_\phi ) v_A(t)\,. \end{aligned}$$

We solved this equation to first order in \(X_r\) for one input tone and one \(\cos \) term in Example 12.8. Referring to that example for details, we conclude that mismatch produces tones at all frequencies \(n\omega _{\mathcal {T}}+ \omega , n \in {\mathbb {Z}}\). The amplitude of these tones is proportional to the random variable \(X_r\) which is Rayleigh distributed. The phase is uniformly distributed over the full circle.

While in this subsection we focused on mismatch, the same method can be used to analyse other effects causing variations in the resistance such as overlapping LO signals.

2.7 Second-Order Distortion

In this subsection we analyse the second-order distortion introduced by the nonlinear characteristic of the MOSFETs. As discussed, if we neglect mismatch the circuit can be modeled as a time-invariant system driven by the input signal

$$\begin{aligned} x(t) = v_I^+(t) \, l_0(t) + v_I^-(t) \, l_2(t) + v_Q^+(t) \, l_3(t) + v_Q^-(t) \, l_1(t)\,. \end{aligned}$$
(14.17)

To simplify the calculations we discard the source resistors \(R_S\) and consider the situation shown in Fig. 14.9a. We assume the transistor to remain in the so-called linear region of its characteristic which is described by

$$\begin{aligned} i_D = \beta (v_G - V_T)(v_D - v_S) - \frac{\beta }{2}(v_D^2 - v_S^2)\,. \end{aligned}$$

In our model the gate voltage is assumed to be constant at a sufficiently high level \(V_G\), in which case the characteristic can be modeled by the linear resistor r that we used before and two nonlinear VCCS that we combine in a single one controlled by the two voltages \(v_D\) and \(v_S\)

$$\begin{aligned} i_D = g_1(v_D - v_S) + g_2(v_D^2 - v_S^2) \end{aligned}$$

with

$$\begin{aligned} g_1 = \frac{1}{r} = \beta (V_G - V_T) \quad \text {and}\quad g_2 = - \frac{\beta }{2} \end{aligned}$$

and represented in Fig. 14.9b. Using this transistor model the differential equation describing the system is

$$\begin{aligned} (1 + r C D) v_A = x + \frac{g_2}{g_1} (x^2 - v_A^2)\,. \end{aligned}$$
Fig. 14.9 a Equivalent WNTI schematic of the quadrature modulator of Fig. 14.1. b Equivalent WNTI circuit of the quadrature modulator of Fig. 14.1

The first-order response of the system is described by the transfer function H that we calculated before, which we repeat here for convenience and to which we add an index representing the order, as usual

$$\begin{aligned} H_1(s_1) = \frac{1}{1 + \frac{s_1}{\omega _{3dB}}}\,. \end{aligned}$$

We compute the higher order responses by Laplace transforming the differential equation and retaining only terms of the relevant order. To obtain the transfer functions directly we use a Dirac pulse as input. The Laplace transform of the second-order part of the differential equation is

$$\begin{aligned}{}[1 + r C (s_1 + s_2)] H_2(s_1,s_2) = \frac{g_2}{g_1}[1 - H_1(s_1) H_1(s_2)]\,. \end{aligned}$$

The second-order transfer function therefore is

$$\begin{aligned} H_2(s_1,s_2) = \frac{g_2}{g_1}H_1(s_1 + s_2) [1 - H_1(s_1) H_1(s_2)]\,. \end{aligned}$$
(14.18)

Consider the case in which the modulator is driven by two baseband (\(v_I^+,v_I^-,v_Q^+,v_Q^-\)) real tones at \(\omega _1\) and \(\omega _2\). The tones of interest in the effective input signal x of the WNTI model are at \(\pm (\omega _{\mathcal {T}}\pm \omega _i), i=1,2\). Under the assumption that \(\left|\omega _i \right| \ll \omega _{\mathcal {T}}\ll \omega _{3dB}\) we can approximate \(H_1\) at these frequencies by

$$\begin{aligned} H_1(\jmath \omega ) \approx 1 - \jmath \frac{\omega }{\omega _{3dB}}\,. \end{aligned}$$

Using this approximation in \(H_2\) we find

$$\begin{aligned} H_2(\jmath \omega _1,\jmath \omega _2) &\approx \frac{g_2}{g_1} \frac{1 - (1 - \jmath \omega _1/\omega _{3dB}) (1 - \jmath \omega _2/\omega _{3dB})}{1 + \jmath \frac{\omega _1 + \omega _2}{\omega _{3dB}}}\nonumber \\ &\approx \frac{-1}{2 (V_G - V_T)} \frac{\jmath \frac{\omega _1+\omega _2}{\omega _{3dB}}}{1 + \jmath \frac{\omega _1 + \omega _2}{\omega _{3dB}}}\,. \end{aligned}$$
(14.19)

where in the last step we have neglected the small quantity \(\omega _1\omega _2/\omega _{3dB}^2\). This expression shows that, to reduce the principal second-order distortion components under the given assumptions, it is more convenient to choose a voltage \(V_G - V_T\) as large as possible than to merely reduce r by using a wider transistor.
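As a quick check, the following Python sketch (with hypothetical values for \(\beta \), \(V_G - V_T\) and the frequency ratio) evaluates the exact second-order transfer function (14.18) and the approximation (14.19) for two tones near the LO frequency:

```python
import numpy as np

# Sketch comparing the exact H2 of (14.18) with the approximation (14.19).
# beta, V_G - V_T and the cut-off ratio are hypothetical values chosen only
# to exercise the formulas.
VGT = 0.5                       # V_G - V_T [V] (assumed)
beta = 1.0 / (20.0 * VGT)       # chosen so that r = 1/(beta*VGT) = 20 ohm
g1, g2 = beta * VGT, -beta / 2
wT = 2 * np.pi * 1e9            # LO angular frequency [rad/s] (assumed)
w3dB = 10 * wT                  # assumed cut-off

def H1(s):
    return 1 / (1 + s / w3dB)

def H2(s1, s2):                 # Eq. (14.18)
    return (g2 / g1) * H1(s1 + s2) * (1 - H1(s1) * H1(s2))

def H2_approx(w1, w2):          # Eq. (14.19)
    x = (w1 + w2) / w3dB
    return -1 / (2 * VGT) * 1j * x / (1 + 1j * x)

w1 = w2 = 0.9 * wT              # tones near the LO frequency (assumption)
print("exact  :", abs(H2(1j * w1, 1j * w2)))
print("approx :", abs(H2_approx(w1, w2)))
```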

2.8 Third-Order Distortion

We next compute the third-order transfer function. The third-order part of the Laplace transformed differential equation is

$$\begin{aligned}{}[1 + r C (s_1 + s_2 + s_3)] H_3(s_1,s_2,s_3) = -2 \frac{g_2}{g_1} \left[ H_1(s_1) H_2(s_2,s_3)\right] _{\text {sym}}\,. \end{aligned}$$

From it we immediately obtain

$$\begin{aligned} H_3(s_1,s_2,s_3) = -2 \frac{g_2}{g_1} H_1(s_1 + s_2 + s_3)\left[ H_1(s_1) H_2(s_2,s_3)\right] _{\text {sym}}\,. \end{aligned}$$
(14.20)

To gain some insight from this expression we assume again two real input tones with \(\left|\omega _i \right| \ll \omega _{\mathcal {T}}\ll \omega _{3dB}\). Under these assumptions we can expand \(H_3\) in a first order Taylor polynomial and obtain

$$\begin{aligned} H_3(\jmath \omega _1,\jmath \omega _2,\jmath \omega _3) &\approx -\jmath \frac{4}{3} \Big (\frac{g_2}{g_1} \Big )^2 \frac{\omega _1 + \omega _2 + \omega _3}{\omega _{3dB}}\nonumber \\ &= -\frac{\jmath (\omega _1 + \omega _2 + \omega _3) C}{3 \beta (V_G - V_T)^3}\,. \end{aligned}$$
(14.21)

As for second-order distortion we find that it’s more convenient to choose a large \(V_G - V_T\) than to increase the width of the transistor. Using this expression we can estimate the IP3 of the modulator as

$$\begin{aligned} A_{\text {IP3}} \approx \Bigg {|}\frac{g_1}{g_2}\Bigg {|}\sqrt{\frac{\omega _{3dB}}{\omega _{\mathcal {T}}}} = 2(V_G - V_T) \sqrt{\frac{\omega _{3dB}}{\omega _{\mathcal {T}}}}\,. \end{aligned}$$
(14.22)

The value of this approximation is compared with the value calculated from the full \(H_3\) (14.20) as a function of \(\omega _{3dB}/\omega _{\mathcal {T}}\) in Fig. 14.10 for \(\omega _1 = \omega _2 = (1+0.1)\omega _{\mathcal {T}}\) and \(\omega _3 = -(1+0.2)\omega _{\mathcal {T}}\). The approximation gives a reasonable value for \(\omega _{3dB}/\omega _{\mathcal {T}}\gtrsim 2\). For values of \(\omega _{3dB}/\omega _{\mathcal {T}}< 1\) the IP3 is seen to rise. This is however related to the fact that the wanted signals also experience substantial attenuation compared with the case of a large ratio \(\omega _{3dB}/\omega _{\mathcal {T}}\).
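A comparison along the lines of Fig. 14.10 can be sketched numerically as follows. The Python snippet below evaluates the full \(H_3\) of (14.20), derives an IP3 estimate from it taking the first-order response as unity (an assumption), normalizes to \(\left|g_1/g_2 \right|\) and compares with (14.22); the tone placement follows the text.

```python
import numpy as np
from itertools import permutations

# Sketch comparing IP3 derived from the full H3 (14.20) with the estimate
# (14.22), both normalized to |g1/g2|.  The IP3 definition used here,
# A_IP3 = sqrt(4/3 / |H3|), assumes a unity first-order response.
def ip3_normalized(ratio):
    """Approximate A_IP3 / |g1/g2| for omega_3dB = ratio * omega_T."""
    wT, w3dB = 1.0, ratio
    g2_over_g1 = -0.5                       # any value works after normalization
    H1 = lambda s: 1 / (1 + s / w3dB)
    H2 = lambda s1, s2: g2_over_g1 * H1(s1 + s2) * (1 - H1(s1) * H1(s2))
    def H3(s1, s2, s3):
        sym = np.mean([H1(a) * H2(b, c) for a, b, c in permutations((s1, s2, s3))])
        return -2 * g2_over_g1 * H1(s1 + s2 + s3) * sym
    s1 = s2 = 1j * 1.1 * wT                 # omega_1 = omega_2 = 1.1*omega_T
    s3 = -1j * 1.2 * wT                     # omega_3 = -1.2*omega_T
    a_ip3 = np.sqrt(4 / 3 / abs(H3(s1, s2, s3)))
    return a_ip3 * abs(g2_over_g1)

for ratio in (1, 2, 5, 10, 20):
    print(f"w3dB/wT = {ratio:2d}:  full H3 -> {ip3_normalized(ratio):5.2f},"
          f"  estimate (14.22) -> {np.sqrt(ratio):5.2f}")
```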

Fig. 14.10 Quadrature modulator IP3 as a function of \(\omega _{3dB}/\omega _{\mathcal {T}}\) normalized to \(\left|g_1/g_2 \right|\). Solid line: IP3 computed with the full \(H_3\) in (14.20); dashed line: IP3 computed with (14.22). \(\omega _1 = \omega _2 = (1+0.1)\omega _{\mathcal {T}}, \omega _3 = -(1+0.2)\omega _{\mathcal {T}}\)

It is important to realize that the effective input signal x of the model includes many tones that produce many intermodulation products. In particular, the tones at \(3\omega _{\mathcal {T}}- \omega _i, i=1,2\) together with the main tones at \(\omega _{\mathcal {T}}+ \omega _i\) produce third-order intermodulation products that fall close to the wanted signal and are difficult to suppress

$$\begin{aligned} 3\omega _{\mathcal {T}}- \omega _i - 2(\omega _{\mathcal {T}}+ \omega _i) = \omega _{\mathcal {T}}- 3\omega _i\,. \end{aligned}$$

These tones are called third-order counter-intermodulation products (CIM3). We saw in previous paragraphs that many practical imperfections introduce tones at the second harmonic of the input signals \(2(\omega _{\mathcal {T}}\pm \omega _i)\). In this case second-order distortion also produces tones around the signal of interest. In particular the combination

$$\begin{aligned} 2(\omega _{\mathcal {T}}- \omega _i) - (\omega _{\mathcal {T}}+ \omega _i) = \omega _{\mathcal {T}}- 3\omega _i \end{aligned}$$

results in a tone at the CIM3 frequency as does the second-order distortion between \(3\omega _{\mathcal {T}}- \omega _i\) and \(2(\omega _{\mathcal {T}}+ \omega _i)\)

$$\begin{aligned} (3\omega _{\mathcal {T}}- \omega _i) - 2(\omega _{\mathcal {T}}+ \omega _i) = \omega _{\mathcal {T}}- 3\omega _i\,. \end{aligned}$$

Depending on the details of the design, these second order distortion components may contribute significantly to the overall CIM3 level of the modulator.

Fig. 14.11 Single-sided output spectrum of the modulator simulated with accurate transistor models

Figure 14.11 shows part of the output spectrum magnitude obtained by numerical simulation of the modulator with accurate transistor models, a load capacitance of 1 pF, an LO frequency of 1 GHz, and two input tones given by

$$\begin{aligned} v_I^+(t) &= -v_I^-(t) = A \cos (\omega _1 t) + A \cos (\omega _2 t)\\ v_Q^+(t) &= -v_Q^-(t) = A \sin (\omega _1 t) + A \sin (\omega _2 t) \end{aligned}$$

with

$$\begin{aligned} A = 0.15\,\text {V}, \quad \omega _1 = \frac{\omega _{\mathcal {T}}}{8}\,, \quad \omega _2 = \omega _1 + \frac{\omega _{\mathcal {T}}}{64}\,. \end{aligned}$$

We used 22 nm FinFETs modelled with BSIM-CMG compact models [29] with technology parameters from [30]. The transistors were sized to have an \(r_{ON}\) of 20 \(\Omega \) at \(V_G - V_T = 0.5\) V. Since the threshold voltage of the transistors is 0.311 V, the LO voltage high level was chosen to be 0.811 V, while the low level was set to –0.189 V. To avoid overlap, the duration of the high pulses was reduced slightly from the nominal value of \({\mathcal {T}}/4\) to produce a crossing point between successive LO signals ca. 0.1 V below \(V_T\). The rise and fall times were set to 0.125 ns, giving an LO transient slope K of 8 GV/s. The LO signals used in the simulation are shown in Fig. 14.12.

Fig. 14.12 LO signal waveforms used in the simulation of the modulator

The spectral component levels obtained by simulation compare favorably with our analysis. The expected level of the main tones is

$$\begin{aligned} A \frac{4}{\sqrt{2} \pi } \approx 0.135\,\text {V} \end{aligned}$$

and is very close to the simulated one of 0.131 V. Our choice of parameters is such that \(\omega _{3dB}/\omega _{\mathcal {T}}\approx 7.9\). We can therefore estimate the expected second- and third-order intermodulation products from the approximate transfer functions given by (14.19) and (14.22) respectively. The expected level of the second-order tone at \(2(\omega _{\mathcal {T}}- \omega _1)\) is estimated to be

$$\begin{aligned} \left|{ \Big (A \frac{4}{\sqrt{2} \pi }\Big )^2 \frac{1}{2} H_2(\jmath (\omega _{\mathcal {T}}- \omega _1), \jmath (\omega _{\mathcal {T}}- \omega _1))} \right| \approx 2.29\,\text {mV} \end{aligned}$$

The expected IM3 level at \(\omega _{\mathcal {T}}+ 2\omega _1 - \omega _2\) is

$$\begin{aligned} \left|{ \Big (A \frac{4}{\sqrt{2} \pi }\Big )^3 \frac{3}{4} H_3(\jmath (\omega _{\mathcal {T}}+ \omega _1) , \jmath (\omega _{\mathcal {T}}+ \omega _1) , -\jmath (\omega _{\mathcal {T}}+ \omega _2))} \right| \approx 0.23\,\text {mV}\,. \end{aligned}$$

The simulated values are 5.44 and 0.35 mV respectively, reasonably close to the predicted values.

In our simplified analysis we assumed zero LO signal rise and fall times. It is interesting to investigate how fast the LO transients have to be for the intermodulation products not to deviate significantly from the predicted values. We investigated this question by simulation. The IM3 level is plotted as a function of the LO transient slope K in Fig. 14.13. For all values of K the crossing point was kept constant. Note that the LO waveforms corresponding to the lowest K values are essentially triangular with no flat high-level region. This simulation suggests that the analysis gives reasonable IM3 estimates for values of \(f_{\mathcal {T}}/K \lessapprox 0.14\).

Fig. 14.13 Simulated IM3 versus LO transient slope K

3 Sampling Mixer

A communication receiver should ideally be able to detect a single signal on a channel of a frequency band allocated to the service of interest and completely suppress all other signals. Due to limitations in the selectivity of filters and the difficulty of implementing tuneable filters, this can only be achieved approximately. Virtually all receivers are composed of a fixed, highly selective filter (the preselection filter), typically implemented with surface-acoustic wave (SAW) or bulk-acoustic wave (BAW) technologies, suppressing all signals outside the band of interest. This filter is then followed by some signal amplification and by a shift of the signal of interest to a lower fixed frequency with the help of a mixer. At this lower frequency another fixed filter (the channel filter) with a bandwidth corresponding to the bandwidth of a channel separates the wanted signal from other signals on adjacent channels. The channel of interest is selected by shifting the input spectrum in frequency in such a way that the desired signal falls in the passband of the channel filter. This is done by appropriately choosing the frequency of the so-called local oscillator (LO) driving the LO port of the mixer.

The sampling mixer analysed hereafter is an attempt to remove the preselection filter and implement a tuneable filter capable of selecting a single channel using only components available in a standard CMOS technology. This is driven by the desire for miniaturisation and cost reduction. While this type of circuit has its own drawbacks, it is a nice example showing some capabilities of time-varying systems that can't be matched by LTI ones.

Fig. 14.14 Linear periodic switched capacitor circuit that can be used as a sampling mixer or as an N-path filter

3.1 Time-Varying Impulse Response

Consider the highly idealised sampling mixer shown in Fig. 14.14. The input signal is represented by the voltage source \(V_s\) and the nodes labelled \(V_0,\dotsc ,V_{N-1}\) represent output signals. The ideal switches \(S_n, n = 0,\dots ,N-1\) are driven by the \({\mathcal {T}}\)-periodic clock signals \(\phi _n\). A switch is closed when the corresponding clock signal is high and open when low. We assume that the clock signals are non-overlapping. Since no reactive component is present on the source side of the switches the output signals can be analysed independently of each other. In the following we assume that each clock phase has the same duration so that

$$\begin{aligned} t_n = n \frac{{\mathcal {T}}}{N};\qquad n = 0,\dotsc ,N-1. \end{aligned}$$

In this case it’s enough to compute the time-varying impulse response of one output signal only. The other ones are then obtained by a simple translations in time. We will therefore compute the time-varying impulse response corresponding to the output \(V_0\) that in the following we will denote by y. Similarly, we will denote the source signal by x.

The circuit, having two phases, is described by two differential equations. For \(0< t < t_1\), when the switch is closed, it is described by

$$\begin{aligned} Dy + \omega _0 y = \omega _0 x, \qquad \omega _0 :=\frac{1}{RC} \end{aligned}$$

while for \(t_1 < t < {\mathcal {T}}\), when the switch is open, by

$$\begin{aligned} Dy = 0\,. \end{aligned}$$

We start by computing the fundamental kernel of the system \(W(t,\tau )\) which is the solution of the differential equation when driven by a Dirac impulse occurring at time \(\tau \). Since the circuit varies periodically in time, it’s enough to compute it for \(0 < \tau < {\mathcal {T}}\).

Fig. 14.15 Fundamental kernel of the sampling mixer for \(\tau =0.1{\mathcal {T}}, N=4, \omega _0 {\mathcal {T}}= 0.5\)

For \(0 < \tau < t_1\) the output is zero up to time \(\tau \) at which point it will jump to 1/RC and start to decay exponentially as in an LTI system. At time \(t_1\) the switch is opened, leaving the output capacitor floating. The output voltage will therefore remain constant up to time \({\mathcal {T}}\). At time \({\mathcal {T}}\), since there is a resistor between the capacitor and the source, the output will simply start to decrease exponentially again with the same time constant RC as during the first part of the response. Continuing this process we obtain (see Fig. 14.15)

$$\begin{aligned} \underset{0 < \tau < t_1}{W(t,\tau )}\ = {\left\{ \begin{array}{ll} 0 &{} t < \tau \\ \omega _0 \text {e}^{-\omega _0(t - \tau )} &{} \tau < t < t_1\\ \omega _0 B A^{k-1} \text {e}^{-\omega _0(t - k{\mathcal {T}})} &{} k{\mathcal {T}}< t < k{\mathcal {T}}+ t_1\,, k \ge 1\\ \omega _0 B A^k &{} t_1 + k {\mathcal {T}}< t < (k+1) {\mathcal {T}}\,, k \ge 0 \end{array}\right. } \end{aligned}$$

with

$$\begin{aligned} A :=\text {e}^{-\omega _0 t_1} \qquad B :=\text {e}^{-\omega _0 (t_1 - \tau )}\,. \end{aligned}$$

For \(t_1 < \tau < {\mathcal {T}}\), given that the output is disconnected from the input, the output remains zero

$$\begin{aligned} \underset{t_1 < \tau < {\mathcal {T}}}{W(t,\tau )}\ = 0\,. \end{aligned}$$
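The piecewise expression for the fundamental kernel is straightforward to evaluate numerically. The following Python sketch evaluates \(W(t,\tau )\) at a few time points for the illustrative parameters of Fig. 14.15:

```python
import numpy as np

# Sketch evaluating the fundamental kernel W(t, tau) of the sampling mixer for
# 0 < tau < t1, following the piecewise expression above.  Parameters mirror
# Fig. 14.15 and are purely illustrative.
T = 1.0
N = 4
t1 = T / N
w0 = 0.5 / T                      # omega_0 * T = 0.5
tau = 0.1 * T

def W(t):
    t = np.asarray(t, dtype=float)
    A = np.exp(-w0 * t1)
    B = np.exp(-w0 * (t1 - tau))
    out = np.zeros_like(t)
    k = np.floor(t / T).astype(int)          # index of the current period
    frac = t - k * T
    closed = frac < t1                       # switch S0 closed in this period
    # decay segments (switch closed), k >= 1
    m = closed & (k >= 1)
    out[m] = w0 * B * A ** (k[m] - 1) * np.exp(-w0 * frac[m])
    # first period, after the impulse
    m = closed & (k == 0) & (t >= tau)
    out[m] = w0 * np.exp(-w0 * (t[m] - tau))
    # hold segments (switch open): constant value omega_0 * B * A^k
    m = ~closed
    out[m] = w0 * B * A ** k[m]
    return out

t = np.linspace(0, 3 * T, 13)
for ti, wi in zip(t, W(t)):
    print(f"t = {ti:5.2f}  W = {wi:.4f}")
```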
Fig. 14.16 a Time-varying impulse response of the sampling mixer as a function of time for \(\xi =0.1{\mathcal {T}}\), \(N = 4, \omega _0 {\mathcal {T}}= 0.5\). b Time-varying impulse response of the sampling mixer as a function of \(\xi \) for \(t=0.1 {\mathcal {T}}\), \(N = 4, \omega _0 {\mathcal {T}}= 0.5\)

The time varying impulse response can be derived from the fundamental kernel with the help of the variable substitution \(\xi = t - \tau \) and by keeping in mind that the value of the impulse response \(h(t,\xi )\) is the value of the output at time t assuming that a Dirac impulse was applied \(\xi \) seconds in the past. As the impulse response is periodic, it’s enough to compute its value over the first period. For \(0 < t < t_1\) it is given by (see Fig. 14.16a)

$$\begin{aligned} \underset{0 < t < t_1}{h(t,\xi )}\ = {\left\{ \begin{array}{ll} \omega _0 \text {e}^{-\omega _0 \xi } &{} 0 < \xi < t\\ \omega _0 A^k \text {e}^{-\omega _0 (\xi - k{\mathcal {T}})} &{} t - t_1 + k {\mathcal {T}}< \xi < t + k {\mathcal {T}}, k \ge 1\\ 0 &{} \text {otherwise} \end{array}\right. } \end{aligned}$$

and for \(t_1 < t < {\mathcal {T}}\) by

$$\begin{aligned} \underset{t_1 < t < {\mathcal {T}}}{h(t,\xi )}\ = {\left\{ \begin{array}{ll} \omega _0 A^k \text {e}^{-\omega _0 [\xi - (k {\mathcal {T}}+ t - t_1)]} &{} t - t_1 + k {\mathcal {T}}< \xi < t + k {\mathcal {T}}, k \ge 0\\ 0 &{} \text {otherwise}\,. \end{array}\right. } \end{aligned}$$

3.2 Time-Varying Transfer Function

While the circuit is fully characterised by the above time-varying impulse response, its filtering characteristics are best understood by analysing its time-varying frequency response. This will allow us to easily obtain the output signal when the circuit is driven by a tone.

We compute the time-varying transfer function \(\hat{h}(t,\omega )\) by Fourier transforming \(h(t,\xi )\). For \(0 < t < t_1\) we have

$$\begin{aligned} \hat{h}(t,\omega ) = \int \limits _0^t \omega _0 \text {e}^{-\omega _0\xi } \text {e}^{-\jmath \omega \xi } \text {d}\xi + \sum _{k=1}^\infty \int \limits _{t-t_1+k{\mathcal {T}}}^{t+k{\mathcal {T}}} \omega _0 A^k \text {e}^{-\omega _0(\xi - k{\mathcal {T}})} \text {e}^{-\jmath \omega \xi } \text {d}\xi \,. \end{aligned}$$

The terms in the summation on the right contain the kth power of a constant and are reminiscent of a geometric series with a missing first term. As a first step we therefore add the missing term by adjusting the limits of the first integral

$$\begin{aligned} \hat{h}(t,\omega ) = -\int \limits _{t-t_1}^0 \omega _0 \text {e}^{-\omega _0\xi -\jmath \omega \xi } \text {d}\xi + \sum _{k=0}^\infty \omega _0 A^k \text {e}^{\omega _0 k{\mathcal {T}}} \int \limits _{t-t_1+k{\mathcal {T}}}^{t+k{\mathcal {T}}} \text {e}^{-\omega _0\xi -\jmath \omega \xi } \text {d}\xi \,. \end{aligned}$$

Evaluating the integrals and simplifying we find

$$\begin{aligned} \frac{1 - \text {e}^{-(\omega _0 + \jmath \omega )(t - t_1)}}{1 + \jmath \frac{\omega }{\omega _0}} + \text {e}^{-(\omega _0 + \jmath \omega )t} \frac{\text {e}^{(\omega _0 + \jmath \omega )t_1} - 1}{1 + \jmath \frac{\omega }{\omega _0}} \sum _{k=0}^\infty A^k \text {e}^{-\jmath \omega k{\mathcal {T}}}\,. \end{aligned}$$

Performing the summation of the geometric series we finally obtain

$$\begin{aligned} \underset{0 < t < t_1}{\hat{h}(t,\omega )} = \frac{1 - \text {e}^{-(\omega _0 + \jmath \omega )(t - t_1)}}{1 + \jmath \frac{\omega }{\omega _0}} + \frac{\bigl (\text {e}^{(\omega _0 + \jmath \omega )t_1} - 1\bigr ) \text {e}^{-(\omega _0 + \jmath \omega )t}}{\bigl (1 + \jmath \frac{\omega }{\omega _0}\bigr ) \bigl (1- \text {e}^{-\jmath \omega {\mathcal {T}}} \text {e}^{-\omega _0 t_1}\bigr )}\,. \end{aligned}$$
(14.23)

For \(t_1 < t < {\mathcal {T}}\) the time-varying frequency response is given by

$$\begin{aligned} \hat{h}(t,\omega ) = \sum _{k=0}^\infty \int \limits _{t-t_1+k{\mathcal {T}}}^{t+k{\mathcal {T}}} \omega _0 A^k \text {e}^{-\omega _0[\xi - (k{\mathcal {T}}+ t - t_1)]} \text {e}^{-\jmath \omega \xi } \text {d}\xi \,. \end{aligned}$$

This is again a geometric series and proceeding as above we obtain

$$\begin{aligned} \underset{t_1 < t < {\mathcal {T}}}{\hat{h}(t,\omega )} = \frac{\bigl (\text {e}^{\jmath \omega t_1} - \text {e}^{-\omega _0 t_1}\bigr ) \text {e}^{-\jmath \omega t}}{\bigl (1 + \jmath \frac{\omega }{\omega _0}\bigr ) \bigl (1- \text {e}^{-\jmath \omega {\mathcal {T}}} \text {e}^{-\omega _0 t_1}\bigr )}\,. \end{aligned}$$
(14.24)
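As a sanity check, (14.23) and (14.24) can be evaluated numerically. The sketch below is a minimal Python illustration, again assuming the normalised values \({\mathcal {T}}=1\), \(N=4\) and \(\omega _0 {\mathcal {T}}=0.5\) used in the figures.

```python
import numpy as np

# Minimal numerical sketch of (14.23)/(14.24); T = 1, N = 4 and
# omega_0 * T = 0.5 are assumptions matching the figures.
T, N = 1.0, 4
t1, w0 = T / N, 0.5 / T

def hhat_sm(t, w):
    """Time-varying frequency response of the sampling mixer."""
    t = t % T                              # hhat is T-periodic in t
    s = w0 + 1j * w                        # shorthand for omega_0 + j*omega
    den = (1 + 1j * w / w0) * (1 - np.exp(-1j * w * T) * np.exp(-w0 * t1))
    if t < t1:                             # tracking phase, Eq. (14.23)
        return (1 - np.exp(-s * (t - t1))) / (1 + 1j * w / w0) \
               + (np.exp(s * t1) - 1) * np.exp(-s * t) / den
    # hold phase, Eq. (14.24)
    return (np.exp(1j * w * t1) - np.exp(-w0 * t1)) * np.exp(-1j * w * t) / den

ws = 2 * np.pi / T                         # switching (sampling) frequency
for t in (0.05 * T, 0.2 * T, 0.5 * T, 0.9 * T):
    print(f"t/T = {t / T:.2f}   |hhat(t, 1.01*ws)| = {abs(hhat_sm(t, 1.01 * ws)):.4f}")
```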

3.3 Selectivity

With \(\hat{h}(t,\omega )\) at hand, the output of the circuit when driven by \(x(t) = \cos (\omega t)\) is immediately obtained

$$\begin{aligned} y(t) = \Re \{\hat{h}(t,\omega ) \text {e}^{\jmath \omega t}\}\,. \end{aligned}$$
Fig. 14.17

Sampling mixer output \(V_0\) when driven by \(\cos (\omega t)\) with \(N=4, \omega _0{\mathcal {T}}=0.5\). a \(\omega = 1.01 \omega _s\). b \(\omega = 1.2 \omega _s\)

The output is shown in Fig. 14.17 for two values of the input frequency and \(N=4\). During the time intervals \(t_1 + k{\mathcal {T}}< t < (k+1){\mathcal {T}}, k\in {\mathbb {Z}}\) the output is constant and assumes the value

$$\begin{aligned} y(t) = \Re \{\hat{h}(t,\omega ) \text {e}^{\jmath \omega t}\} = \Re \biggl \{ \frac{\bigl (\text {e}^{\jmath \omega t_1} - \text {e}^{-\omega _0 t_1}\bigr ) \text {e}^{\jmath \omega k{\mathcal {T}}}}{\bigl (1 + \jmath \frac{\omega }{\omega _0}\bigr ) \bigl (1- \text {e}^{-\jmath \omega {\mathcal {T}}} \text {e}^{-\omega _0 t_1}\bigr )} \biggr \}\,, \end{aligned}$$

where we used the periodicity in time of \(\hat{h}(t,\omega )\) and the previously computed expression valid for \(t_1 < t < {\mathcal {T}}\)

$$\begin{aligned} \hat{h}(t,\omega ) = \underset{t_1 < t - k{\mathcal {T}}< {\mathcal {T}}}{\hat{h}(t - k{\mathcal {T}},\omega )}\,. \end{aligned}$$

These values are the output sample values of the sampling mixer. Let us denote them by y[k] and set \(\omega = n \omega _s + \Delta \omega , n \in {\mathbb {Z}}\) with \(\omega _s = 2\pi /{\mathcal {T}}\) and \(|\Delta \omega | < \omega _s/2\). Then the above expression becomes

$$\begin{aligned} \begin{aligned} y[k] &= \Re \Big \{ h_{\text {eff}}(n \omega _s + \Delta \omega ) \text {e}^{\jmath \Delta \omega k{\mathcal {T}}} \Big \}\\ &:=\Re \bigg \{ \frac{\bigl (\text {e}^{\jmath (n\omega _s + \Delta \omega ) t_1} - \text {e}^{-\omega _0 t_1}\bigr ) \text {e}^{\jmath \Delta \omega k{\mathcal {T}}}}{\bigl (1 + \jmath \frac{n\omega _s + \Delta \omega }{\omega _0}\bigr ) \bigl (1- \text {e}^{-\jmath \Delta \omega {\mathcal {T}}} \text {e}^{-\omega _0 t_1}\bigr )} \bigg \}\,. \end{aligned} \end{aligned}$$
(14.25)

These are the samples of a sinusoid with angular frequency \(\Delta \omega \) and amplitude

$$\begin{aligned} \Bigg {|}{ \frac{\bigl (\text {e}^{\jmath (n\omega _s + \Delta \omega ) t_1} - \text {e}^{-\omega _0 t_1}\bigr )}{\bigl (1 + \jmath \frac{n\omega _s + \Delta \omega }{\omega _0}\bigr ) \bigl (1- \text {e}^{-\jmath \Delta \omega {\mathcal {T}}} \text {e}^{-\omega _0 t_1}\bigr )}}\Bigg {|}\,. \end{aligned}$$

The fact that the output samples correspond to samples of a sinusoid with a frequency independent of n is a manifestation of the aliasing inherent in every sampling process. The interesting aspect of the sampling mixer is that only samples of tones with a frequency very close to \(n \omega _s\) have a significant amplitude, while those of signals with frequencies at a distance larger than approximately \(\omega _0/N\) from \(n \omega _s\) are attenuated. This effect is due to the factor

$$\begin{aligned} 1- \text {e}^{-\jmath \Delta \omega {\mathcal {T}}} \text {e}^{-\omega _0 t_1} = 1- \text {e}^{-\jmath 2\pi \frac{\Delta \omega }{\omega _s}} \text {e}^{-\frac{2\pi }{N}\frac{\omega _0}{\omega _s}} \end{aligned}$$

in the denominator of the above expression. For \(\omega _0 \ll N \omega _s\) the last exponential on the right is only slightly smaller than 1. Therefore, for \(\Delta \omega < \omega _0/N\) this factor becomes small, thereby boosting the value of the samples around those frequencies (see Fig. 14.18). From this we conclude that the sampling mixer not only behaves as a sample and hold, but also acts as a highly selective (large quality factor) filter around the sampling frequency and its harmonics. The achievable selectivity is much higher than that of LTI RLC filters integrable in a standard CMOS technology.
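This behaviour can be checked directly from (14.25). The short Python sketch below evaluates \(|h_{\text {eff}}(n\omega _s + \Delta \omega )|\) for a few offsets \(\Delta \omega \); the normalised parameter values are again only assumptions matching the figures.

```python
import numpy as np

# Sample amplitude |h_eff(n*omega_s + dw)| from (14.25) versus the offset dw,
# illustrating the boost for offsets below roughly omega_0 / N.
# T = 1, N = 4, omega_0 * T = 0.5 are assumed values.
T, N = 1.0, 4
t1, w0, ws = T / N, 0.5 / T, 2 * np.pi / T

def h_eff(n, dw):
    """Effective sample gain of (14.25) for an input tone at n*omega_s + dw."""
    w = n * ws + dw
    num = np.exp(1j * w * t1) - np.exp(-w0 * t1)
    den = (1 + 1j * w / w0) * (1 - np.exp(-1j * dw * T) * np.exp(-w0 * t1))
    return num / den

n = 1
for dw in (0.0, 0.5 * w0 / N, w0 / N, 5 * w0 / N, 0.1 * ws):
    print(f"dw/ws = {dw / ws:6.4f}   |h_eff| = {abs(h_eff(n, dw)):7.3f}")
```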

Fig. 14.18

Equivalent magnitudes of the output samples of a sampling mixer with \(N=4, \omega _0{\mathcal {T}}=0.5\)

3.4 Even Harmonic Response Suppression

The response around the harmonics is generally undesired and can be suppressed by using weighted sums of the N outputs, as discussed in Example 12.10. In the following we assume N even and investigate the possibility of suppressing the responses at even harmonics by making use of the sample values on the capacitors as opposed to the full waveforms. Consider the sample value on capacitor \(i=N/2\)

$$\begin{aligned} V_{N/2}(t) = \Re \{\hat{h}(t-{\mathcal {T}}/2, \omega ) \text {e}^{\jmath \omega t}\} \end{aligned}$$

Denoting the sample value held by the capacitor during the interval \(t_1 + t_1N/2 + k{\mathcal {T}}< t < {\mathcal {T}}+ t_1N/2 + k{\mathcal {T}}\) by \(y_{N/2}[k]\) and using the periodicity in time of \(\hat{h}(t,\omega )\) as before we obtain

$$\begin{aligned} {\begin{matrix} y_{N/2}[k] &{}= \Re \bigg \{ \frac{\bigl (\text {e}^{\jmath \omega t_1} - \text {e}^{-\omega _0 t_1}\bigr ) \text {e}^{-\jmath \omega (t - k{\mathcal {T}}- {\mathcal {T}}/2)}}{\bigl (1 + \jmath \frac{\omega }{\omega _0}\bigr ) \bigl (1- \text {e}^{-\jmath \omega {\mathcal {T}}} \text {e}^{-\omega _0 t_1}\bigr )} \text {e}^{\jmath \omega t} \bigg \}\\ &{}= \Re \bigg \{ \frac{\bigl (\text {e}^{\jmath \omega t_1} - \text {e}^{-\omega _0 t_1}\bigr )}{\bigl (1 + \jmath \frac{\omega }{\omega _0}\bigr ) \bigl (1- \text {e}^{-\jmath \omega {\mathcal {T}}} \text {e}^{-\omega _0 t_1}\bigr )} \text {e}^{\jmath \omega (k{\mathcal {T}}+ {\mathcal {T}}/2)} \bigg \}\\ &{}= \Re \Big \{ h_{\text {eff}}(\omega ) \text {e}^{\jmath \omega k{\mathcal {T}}} \text {e}^{\jmath \omega {\mathcal {T}}/2} \Big \} \end{matrix}} \end{aligned}$$

If the frequency of the signal is close to the nth harmonic of the sampling frequency, \(\omega = n \omega _s + \Delta \omega \) with \(\Delta \omega \ll \omega _s\), the last expression becomes

$$\begin{aligned} {\begin{matrix} y_{N/2}[k] &= \Re \Big \{ h_{\text {eff}}(n \omega _s + \Delta \omega ) \text {e}^{\jmath \Delta \omega k{\mathcal {T}}} (-1)^n \text {e}^{\jmath \pi \frac{\Delta \omega }{\omega _s}} \Big \}\,. \end{matrix}} \end{aligned}$$

The difference between the sample values on capacitors \(C_0\) and \(C_{N/2}\) is thus given by

$$\begin{aligned} y[k] - y_{N/2}[k] = \Re \Big \{ h_{\text {eff}}(n \omega _s + \Delta \omega ) \text {e}^{\jmath \Delta \omega k{\mathcal {T}}} \big [ 1 - (-1)^n \text {e}^{\jmath \pi \frac{\Delta \omega }{\omega _s}} \big ] \Big \}\,. \end{aligned}$$

Under the assumption \(\Delta \omega \ll \omega _s\) the rightmost exponential is close to 1, so that

$$\begin{aligned} y[k] - y_{N/2}[k] \approx {\left\{ \begin{array}{ll} 0 &{} n\ \text {even}\\ 2 y[k] &{} n\ \text {odd}\,. \end{array}\right. } \end{aligned}$$

For \(N=4\) the four output samples can be combined in pairs forming signals corresponding to the in-phase (\(V_I[k] = V_0[k] - V_2[k]\)) and quadrature (\(V_Q[k] = V_1[k] - V_3[k]\)) output of a quadrature mixer.
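A quick numerical check of this cancellation, based on (14.25) and on the expression for \(y_{N/2}[k]\) derived above (a Python sketch with the same assumed normalised parameters):

```python
import numpy as np

# Check of the even-harmonic suppression: the difference of the samples on
# C_0 and C_{N/2} nearly cancels for n even and nearly doubles for n odd.
# T = 1, N = 4, omega_0 * T = 0.5 are assumed values.
T, N = 1.0, 4
t1, w0, ws = T / N, 0.5 / T, 2 * np.pi / T

def h_eff(n, dw):
    w = n * ws + dw
    num = np.exp(1j * w * t1) - np.exp(-w0 * t1)
    den = (1 + 1j * w / w0) * (1 - np.exp(-1j * dw * T) * np.exp(-w0 * t1))
    return num / den

dw, k = 0.02 * ws, 3                       # small frequency offset, sample index
for n in (1, 2):                           # odd and even harmonics of omega_s
    rot = np.exp(1j * dw * k * T)
    y0 = np.real(h_eff(n, dw) * rot)                                    # on C_0
    yN2 = np.real(h_eff(n, dw) * rot * (-1) ** n * np.exp(1j * np.pi * dw / ws))
    print(f"n = {n}:  y - y_N/2 = {y0 - yN2:+.4f}   2*y = {2 * y0:+.4f}")
```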

Before concluding this section we note that a loss of charge from the capacitor during the hold phase results in a lower boost of the samples around the frequencies \(n \omega _s, n\in {\mathbb {Z}}\). A loss of charge could be caused, for example, by a finite load resistance or by a switched-capacitor circuit following the sampling mixer. The reduction in the magnitude of the samples comes from the fact that the value of A appearing in the definition of \(h(t,\xi )\) becomes smaller, and as a consequence the boosting factor

$$\begin{aligned} 1- \text {e}^{-\jmath \Delta \omega {\mathcal {T}}} A \end{aligned}$$

in the denominator of \(\hat{h}(t,\omega )\) will not become as small as calculated above.

Fig. 14.19

Block diagram of a generic N-path filter

4 N-Path Filters

The block diagram of a general N-path filter is shown in Fig. 14.19 and can be thought of as the cascade of an N-path receiver and an N-path transmitter (compare with Example 12.10). Among other things, N-path filters make it possible to implement transfer functions that, under suitable assumptions, mimic those of LTI networks that are difficult or impossible to manufacture with RLC elements, either because of the limited range of practically implementable component values or because of limitations in their quality factor. The time-varying transfer function of a general N-path filter can be analysed using the same methods used to analyse the N-path receiver of Example 12.10. Here, instead of the general case, we analyse a concrete implementation that shows some useful applications.

In the following we analyse the simple case in which the N LTI subsystems are simple shunt capacitors and the periodic input and output functions are identical switching functions. Under these conditions, when the switches of path k are closed, the upper plate of the kth capacitor is simultaneously connected to both the input and the output, and we obtain the circuit shown in Fig. 14.14 where now the output is taken at the node labeled \(V_f\).

4.1 Time-Varying Frequency Response

During clock phase \(k, 0 \le k \le N-1\), in which the switch \(S_k\) is closed, the voltage \(V_f\) is equal to the voltage \(V_k\) of the kth capacitor. For this reason we can express \(V_f\) in terms of the time-varying frequency response obtained in the previous section. In particular, when the input is a complex tone \(\text {e}^{\jmath \omega t}\) the output is given by

$$\begin{aligned} V_f(t) = \text {e}^{\jmath \omega t} \sum _{k=0}^{N-1} \hat{h}_{sm}(t - k t_1,\omega ) 1_{t_1}(t - k t_1 \mod {\mathcal {T}}) \end{aligned}$$

with

$$\begin{aligned} 1_{t_1}(t) = {\left\{ \begin{array}{ll} 1 &{} 0 < t < t_1\\ 0 &{} \text {otherwise} \end{array}\right. } \end{aligned}$$

and where we denoted the time-varying frequency response of the sampling mixer by \(\hat{h}_{sm}\) to avoid confusion with that of the whole N-path filter, which we denote by \(\hat{h}\). For \(0 < t < {\mathcal {T}}\), using (14.23), the frequency response of the filter is thus given by

$$\begin{aligned} \hat{h}(t,\omega ) = & \frac{1}{1 + \jmath \frac{\omega }{\omega _0}} + \bigg [ \frac{- \text {e}^{(\omega _0 + \jmath \omega ) t_1}}{1 + \jmath \frac{\omega }{\omega _0}}\\ & \quad + \frac{\bigl (\text {e}^{(\omega _0 + \jmath \omega )t_1} - 1\bigr )}{\bigl (1 + \jmath \frac{\omega }{\omega _0}\bigr ) \bigl (1- \text {e}^{-\jmath \omega {\mathcal {T}}} \text {e}^{-\omega _0 t_1}\bigr )} \bigg ] \sum _{k=0}^{N-1} \text {e}^{-(\omega _0 + \jmath \omega )(t - k t_1)} 1_{t_1}(t - k t_1)\,. \end{aligned}$$

As the time-varying frequency response is \({\mathcal {T}}\)-periodic, we can expand it in a Fourier series. The nth Fourier coefficient of the summation on the right is

$$\begin{aligned} \begin{aligned} a_n &= \frac{1}{{\mathcal {T}}} \int \limits _0^{\mathcal {T}}\sum _{k=0}^{N-1} \text {e}^{-(\omega _0 + \jmath \omega )(t - k t_1)} 1_{t_1}(t - k t_1) \text {e}^{-\jmath n \omega _s t} \text {d}t\\ &= \frac{1}{{\mathcal {T}}} \sum _{k=0}^{N-1} \text {e}^{(\omega _0 + \jmath \omega ) k t_1} \int \limits _{k {\mathcal {T}}/N}^{(k+1){\mathcal {T}}/N} \text {e}^{-[\omega _0 + \jmath (\omega + n \omega _s)] t} \text {d}t\\ &= \frac{1}{{\mathcal {T}}} \frac{1 - \text {e}^{-[\omega _0 + \jmath (\omega + n \omega _s)] t_1}}{\omega _0 + \jmath (\omega + n \omega _s)} \sum _{k=0}^{N-1} \text {e}^{-\jmath n \omega _s k t_1}\,. \end{aligned} \end{aligned}$$

The last summation is zero unless n is a multiple of N, in which case it evaluates to N

$$\begin{aligned} a_n = {\left\{ \begin{array}{ll} \frac{N}{{\mathcal {T}}} \frac{1 - \text {e}^{-[\omega _0 + \jmath (\omega + n \omega _s)] t_1}}{\omega _0 + \jmath (\omega + n \omega _s)} &{} n = N m, m \in {\mathbb {Z}}\\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
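The selection rule used in the last step is the familiar sum of N equally spaced phasors; a one-line numerical check (with N = 4 chosen only as an example):

```python
import numpy as np

# The phase sum over the N paths equals N when n is a multiple of N and
# vanishes otherwise (N = 4 chosen here purely as an example).
N = 4
for n in range(9):
    s = sum(np.exp(-2j * np.pi * n * k / N) for k in range(N))
    print(f"n = {n}:  |sum| = {abs(s):.3f}")
```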

The time-varying frequency response of the filter is therefore given by

$$\begin{aligned} \hat{h}(t,\omega ) = \sum _{n = N m,\ m \in {\mathbb {Z}}} h_n(\omega ) \text {e}^{\jmath n \omega _s t} \end{aligned}$$
(14.26)

with

$$\begin{aligned} h_n(\omega ) = \frac{1}{1 + \jmath \frac{\omega }{\omega _0}} + \bigg [ \frac{- \text {e}^{(\omega _0 + \jmath \omega ) t_1}}{1 + \jmath \frac{\omega }{\omega _0}} + \frac{\bigl (\text {e}^{(\omega _0 + \jmath \omega )t_1} - 1\bigr )}{\bigl (1 + \jmath \frac{\omega }{\omega _0}\bigr ) \bigl (1- \text {e}^{-\jmath \omega {\mathcal {T}}} \text {e}^{-\omega _0 t_1}\bigr )} \bigg ] a_n \end{aligned}$$

or, after some simplification

$$\begin{aligned} h_n(\omega ) = \frac{1}{1 + \jmath \frac{\omega }{\omega _0}} \Bigg [ 1 + \frac{1}{t_1\omega _0} \frac{\bigl (\text {e}^{- \jmath \omega ({\mathcal {T}}- t_1)} - 1\bigr ) \bigl (1 - \text {e}^{-[\omega _0 + \jmath (\omega + n \omega _s)] t_1}\bigr )}{ \bigl (1- \text {e}^{-\jmath \omega {\mathcal {T}}} \text {e}^{-\omega _0 t_1}\bigr ) \bigl (1 + \jmath \frac{\omega + n \omega _s}{\omega _0}\bigr )} \Bigg ]\,. \end{aligned}$$
(14.27)

The last term of \(h_n(\omega )\) includes the same factor that we discussed in the analysis of the sampling mixer and which, under the condition \(\omega _0 t_1 \ll 1\), is responsible for boosting the response of the circuit at frequencies \(\omega = k\omega _s + \Delta \omega , k\in {\mathbb {Z}}, \Delta \omega < \omega _0/N\). Therefore, for \(\omega _0 t_1 \ll 1\) the transfer function \(h_0(\omega )\) represents a highly selective band-pass filter with pass bands centered at \(k\omega _s\) with \(k\ne N m, m\in {\mathbb {Z}}\) (see Fig. 14.20).
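A small numerical sketch of (14.27) makes these statements concrete: it evaluates \(|h_0(\omega )|\) at the first few harmonics of \(\omega _s\) and at a few offsets around \(\omega _s\). The parameter values are assumptions matching Fig. 14.20.

```python
import numpy as np

# Sketch of (14.27): pass-bands of h_0 at k*omega_s for k not a multiple of N,
# suppressed response at N*omega_s, and selectivity around omega_s.
# T = 1, N = 4, omega_0 * T = 0.5 are assumed values matching Fig. 14.20.
T, N = 1.0, 4
t1, w0, ws = T / N, 0.5 / T, 2 * np.pi / T

def h_n(n, w):
    """Fourier coefficient h_n(omega) of the N-path filter, Eq. (14.27)."""
    boost = ((np.exp(-1j * w * (T - t1)) - 1)
             * (1 - np.exp(-(w0 + 1j * (w + n * ws)) * t1))
             / ((1 - np.exp(-1j * w * T) * np.exp(-w0 * t1))
                * (1 + 1j * (w + n * ws) / w0)))
    return (1 + boost / (t1 * w0)) / (1 + 1j * w / w0)

for k in (1, 2, 3, 4):                     # response at the harmonics of omega_s
    print(f"|h_0({k}*ws)| = {abs(h_n(0, k * ws)):6.3f}")
for dw in (0.0, w0 / N, 5 * w0 / N):       # selectivity around omega_s
    print(f"|h_0(ws + {dw / ws:5.3f}*ws)| = {abs(h_n(0, ws + dw)):6.3f}")
```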

Fig. 14.20

Magnitude of the N-path filter transfer functions for \(N = 4, \omega _0 {\mathcal {T}}= 0.5\). a \(n = 0\). b Detail around \(f/f_s\approx 1\) for \(n = -4, 0, 4\)

The transfer functions \(h_n(\omega ), n\ne 0\), are also highly selective, but they additionally introduce a shift in frequency. In particular, an input tone at \(\omega - n\omega _s\) passing through \(h_n(\omega )\) results in an output tone at \(\omega \), which will overlap with the response of \(h_0(\omega )\) to an input tone at \(\omega \). Therefore, if this N-path filter is used in a receiver to suppress interfering signals, then, although it possesses undesired pass-bands, the closest interfering frequency whose output overlaps in frequency with the wanted signal lies \(N\omega _s\) away. The magnitudes of a few transfer functions producing an output tone at \(\omega \) are shown in Fig. 14.20.

4.2 Selectivity

By applying some approximations to \(h_0\) valid in the vicinity of \(\omega _s\) we can obtain a transfer function that can be implemented with fixed RLC components. This will allow us to quantify the selectivity of the filter in terms of standard metrics.

As a first step we set again \(\omega = \omega _s + \Delta \omega \) and use \(\Delta \omega \ll \omega _s\) to make the following approximation

$$\begin{aligned} \begin{aligned} \frac{1}{1 + \jmath \frac{\omega _s + \Delta \omega }{\omega _0}} \frac{1}{t_1\omega _0} \bigl (\text {e}^{- \jmath \omega ({\mathcal {T}}- t_1)} - 1\bigr ) &\approx \frac{1}{\jmath \omega _s t_1} \text {e}^{\jmath \omega _s t_1/2} \bigl ( \text {e}^{\jmath \omega _s t_1/2} - \text {e}^{-\jmath \omega _s t_1/2} \bigr )\\ &= \text {e}^{\jmath \pi /N} \frac{\sin (\pi /N)}{\pi /N}\,. \end{aligned} \end{aligned}$$

Similarly, using in addition \(\omega _0 \ll \omega _s\)

$$\begin{aligned} \frac{\bigl ( 1 - \text {e}^{-[\omega _0 + \jmath (\omega _s + \Delta \omega )] t_1} \bigr )}{ \bigl (1- \text {e}^{-\jmath (\omega _s + \Delta \omega ){\mathcal {T}}} \text {e}^{-\omega _0 t_1}\bigr ) \bigl ( 1 + \jmath \frac{\omega _s + \Delta \omega }{\omega _0} \bigr )} &\approx \text {e}^{-\jmath \pi /N} \frac{2\jmath \sin (\pi /N)}{(\jmath \Delta \omega {\mathcal {T}}+ \omega _0 t_1) \jmath \frac{\omega _s}{\omega _0}}\\ &\approx \text {e}^{-\jmath \pi /N} \frac{\sin (\pi /N)}{(1 + \jmath N \frac{\Delta \omega }{\omega _0})\pi /N }\,. \end{aligned}$$

Finally, using these approximations and noting that the first summand in \(h_0(\omega )\) is small compared to the second, we obtain

$$\begin{aligned} h_0(\omega ) \approx \Big (\frac{\sin (\pi /N)}{\pi /N}\Big )^2 \frac{1}{1 + \jmath N \frac{\Delta \omega }{\omega _0}}. \end{aligned}$$
(14.28)
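The quality of this approximation is easy to verify numerically by comparing (14.28) with the exact \(h_0\) of (14.27); the sketch below (assumed normalised parameters as before) anticipates the comparison shown in Fig. 14.22.

```python
import numpy as np

# Exact h_0 from (14.27) (n = 0) versus the approximation (14.28) near omega_s.
# T = 1, N = 4, omega_0 * T = 0.5 are assumed values.
T, N = 1.0, 4
t1, w0, ws = T / N, 0.5 / T, 2 * np.pi / T
K = (np.sin(np.pi / N) / (np.pi / N)) ** 2

def h0_exact(w):
    boost = ((np.exp(-1j * w * (T - t1)) - 1)
             * (1 - np.exp(-(w0 + 1j * w) * t1))
             / ((1 - np.exp(-1j * w * T) * np.exp(-w0 * t1))
                * (1 + 1j * w / w0)))
    return (1 + boost / (t1 * w0)) / (1 + 1j * w / w0)

def h0_approx(dw):
    return K / (1 + 1j * N * dw / w0)

for dw in (0.0, 0.5 * w0 / N, w0 / N, 2 * w0 / N):
    print(f"dw = {dw / w0:4.2f}*w0   exact {abs(h0_exact(ws + dw)):.3f}"
          f"   approx {abs(h0_approx(dw)):.3f}")
```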
Fig. 14.21

Model for the N-path filter \(h_0(\omega )\) valid around \(\omega _s\)

The impedance of a parallel LTI RLC resonator with a resonance frequency of \(\omega _s\) is given by

$$\begin{aligned} Z_r(\omega ) = \frac{\frac{\jmath \omega }{\omega _s} \frac{R_r}{q}}{\big (\frac{\jmath \omega }{\omega _s}\big )^2 + \frac{\jmath \omega }{\omega _s q} + 1} \end{aligned}$$

with q the quality factor of the resonator and \(R_r\) the impedance at resonance. Around the resonance frequency it can be approximated by

$$\begin{aligned} Z_r(\omega _s + \Delta \omega ) \approx \frac{R_r}{1 + \jmath 2 q \frac{\Delta \omega }{\omega _s}}\,. \end{aligned}$$

The transfer function to \(V_f\) around the resonance frequency of the circuit shown in Fig. 14.21 is therefore given by

$$\begin{aligned} \frac{R_r}{R + R_r} \cdot \frac{1}{1 + \jmath 2 q \frac{R}{R + R_r}\frac{\Delta \omega }{\omega _s}}\,, \quad \omega _s = \frac{1}{\sqrt{L_rC_r}}\,, \quad q = \frac{R_r}{\omega _sL_r} \end{aligned}$$

and has the same form as the approximation of \(h_0(\omega )\) given in (14.28). The two are equal if

$$\begin{aligned} \left\{ \begin{aligned} \frac{R_r}{R + R_r} &= K\\ 2 q \frac{R}{R + R_r}\frac{1}{\omega _s} &= \frac{N}{\omega _0} \end{aligned} \right. \end{aligned}$$

with

$$\begin{aligned} K :=\Big (\frac{\sin (\pi /N)}{\pi /N}\Big )^2\,. \end{aligned}$$

This shows that around \(\omega _s\), \(h_0(\omega )\) can be modelled as a parallel resonator with resonance frequency \(\omega _s\), characterised by

$$\begin{aligned} R_r = R \frac{K}{1 - K}\,, \qquad q = \frac{N \pi }{(1 - K)\omega _0 {\mathcal {T}}}\,. \end{aligned}$$
Fig. 14.22

Comparison between the N-path filter transfer function using the RLC model valid around the sampling frequency and the transfer function \(h_0\) for \(N = 4\) and \(\omega _0 {\mathcal {T}}= 0.5\)

The transfer function of this model is compared with the exact \(h_0(\omega )\) in Fig. 14.22. For \(N=4, \omega _0 {\mathcal {T}}= 0.5\) the quality factor has a value close to 133. For comparison, the highest resonance quality factor implementable at RF frequencies with inductors and capacitors available in standard CMOS technologies is in the range of 20, with typical values substantially lower than this.
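The quoted quality factor follows directly from the expressions for K and q; a short numerical check using only the stated values \(N = 4\) and \(\omega _0 {\mathcal {T}}= 0.5\):

```python
import numpy as np

# Numerical check of the equivalent-resonator parameters for the values
# quoted in the text: N = 4 and omega_0 * T = 0.5.
N = 4
w0T = 0.5                               # omega_0 * T
K = (np.sin(np.pi / N) / (np.pi / N)) ** 2
q = N * np.pi / ((1 - K) * w0T)
print(f"K    = {K:.4f}")                # ~ 0.811
print(f"q    = {q:.1f}")                # ~ 133, as stated above
print(f"Rr/R = {K / (1 - K):.2f}")      # equivalent resonance resistance over R
```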