Orthogonal Matched Wavelets with Vanishing Moments: A Sparsity Design Approach

This paper presents a novel approach to the design of orthogonal wavelets with compact support and vanishing moments, matched to a given signal. It provides a systematic and versatile framework for matching an orthogonal wavelet to a specific signal or application. The central idea is to select a wavelet by optimizing a criterion which promotes sparsity of the wavelet representation of a prototype signal. Optimization is performed over the space of orthogonal wavelet functions with compact support, coming from filter banks of a given order. Parameterizations of this space are accompanied by explicit conditions for extra vanishing moments, to be used for constrained optimization. The approach is developed first for critically sampled wavelet transforms, then for undecimated wavelet transforms. Two different optimization criteria are presented: to achieve sparsity by L^1-minimization or by L^4-norm maximization. Masking can be employed by introducing weighting factors. Examples are given to demonstrate and evaluate the approach.


Introduction
Wavelets have now been around for several decades and have well proven their usefulness in many areas of signal processing [20,30]. Unlike the Fourier transform, which exclusively employs harmonic functions, the wavelet framework allows for a choice of wave form. In practical applications, it is often not so clear how to make this choice [28]. Sometimes a choice is driven by theoretical considerations of a general nature, such as the wavelet being orthogonal and having as many vanishing moments as possible [7]. Other aspects may include the linear phase property of filters or symmetry of the wave form [23]. In practice, researchers may also first test a variety of wavelets and then proceed to use the few that initially provide the most promising results.
Moving away from the standard wavelet families and toolboxes, to take greater advantage of the freedom available in choosing or constructing a wavelet for the application at hand, is far from trivial, but some work has been done. In [6], the amplitude and phase spectrum of a wavelet are separately matched to those of a reference signal in the Fourier domain. In [27], a parameterized Klauder wavelet, resembling a QRS complex in a cardiac signal, is tuned to a measured signal. In [12], a design procedure for bi- and semi-orthogonal wavelets is discussed where the optimization criterion aims to maximize the energy in the approximation coefficients. In [21], a parameterization for orthogonal wavelets is used to design wavelets of which either the wavelet or the scaling function resembles a prototype signal. As a result, wavelet analysis then parallels template matching. How to enforce additional vanishing moments, and thus smoothness, in such approaches is an issue that has received little attention so far. Another approach is the lifting scheme [31]. This works in polyphase, like the current paper, but the parameterization is different, split into an updating and a prediction step, where designing the latter is most challenging. In [2], biorthogonal wavelets in the lifting scheme are matched to a signal, where attention is given to linear phase but not to vanishing moments.
In the current paper, an approach is discussed to design orthogonal wavelets matched to a prototype signal from a given application. We work with polyphase filters, and the orthogonal wavelets are required to have compact support and a user-specified number of built-in vanishing moments. Two design criteria are developed which promote sparsity and allow the wavelet to be matched to a prototype signal for the application at hand: one by minimizing an L^1-norm, the other by maximizing an L^4-norm. Sparsity is naturally motivated by compression and detection applications. The two criteria achieve sparsity by a shared underlying principle of maximization of variance. The method is first developed for the critically sampled wavelet transform and then extended to work with the undecimated wavelet transform too. Additionally, masking can be employed by introducing weighting factors. To build in orthogonality, compact support, and a few vanishing moments, a suitable parameterization is required. Here we provide a parameterization which has orthogonality and compact support built in, and for which the vanishing moment conditions take a fairly easy form as long as the filter order is not too high. This allows one to handle them conveniently with standard constrained optimization techniques.
The paper builds on preliminary research previously reported in [15,16] and is organized as follows. First, in Sect. 2, we introduce wavelets from filter banks, the polyphase representation, orthogonal wavelets, and their connection to lossless polyphase filters. Two (closely related) parameterizations of FIR lossless filters are at the core of the wavelet design procedure and are presented in Sect. 3. In Sect. 4, we express the vanishing moment conditions in terms of these parameterizations. Suitable criteria for measuring how well a certain orthogonal wavelet matches an application at hand are developed in Sect. 5, followed by a discussion of practical optimization considerations in Sect. 6. Several wavelet matching examples are presented in Sect. 7. Conclusions regarding the matching design procedure are discussed in Sect. 8. All proofs are collected in the "Appendix."

Filter Banks, Polyphase Representation, and Orthogonal Wavelets

Filter Banks for Signal Processing
An important approach to wavelets employs filter banks for digital signal processing; see [30], on which much of the material in this section is based. A filter bank consists of two discrete-time filters: a low-pass filter C(z) = Σ_{k∈Z} c_k z^{-k} and a high-pass filter D(z) = Σ_{k∈Z} d_k z^{-k}, where z is the complex variable of the z-transform [30, p. 5].
(This relates to the Fourier transform through z = e^{iω}, for a frequency ω.) They are used to process a given discrete-time signal s = {s_k}_{k∈Z} by first passing it through both filters separately, and then applying dyadic downsampling ↓2 (i.e., the odd-indexed values are discarded and the remaining subsequences are relabeled). This produces a level-1 approximation signal a = {a_k}_{k∈Z} and a level-1 detail signal b = {b_k}_{k∈Z}, given by

a_k = Σ_m c_m s_{2k-m},   b_k = Σ_m d_m s_{2k-m}.

The same procedure is then reapplied to the approximation signal a to obtain level-2 approximation and detail signals. Repeating the process, one obtains approximation and detail signals up to a user-selected level j_max. The approximations and details become coarser with each new iteration step. Note that in practical applications with a signal s of finite length, the approximation and detail signals a and b resulting from one iteration step are roughly only half as long as s, which induces a natural upper bound on j_max.
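As a quick illustration (not part of the paper's method), one analysis level, filtering followed by dyadic downsampling, can be sketched in a few lines; the Haar filter pair serves only as the simplest example.

```python
import math

def analysis_step(s, c, d):
    """One filter-bank level: convolve s with the low-pass c and high-pass d,
    then downsample by 2 (keep even-indexed outputs, discard odd-indexed)."""
    def filt(h):
        return [sum(h[m] * s[j - m] for m in range(len(h)) if 0 <= j - m < len(s))
                for j in range(len(s) + len(h) - 1)]
    return filt(c)[::2], filt(d)[::2]   # (approximation a, detail b)

# Haar filter pair as the simplest example; d is the alternating flip of c.
c = [1 / math.sqrt(2), 1 / math.sqrt(2)]
d = [1 / math.sqrt(2), -1 / math.sqrt(2)]
a, b = analysis_step([1.0, 1.0, 2.0, 2.0], c, d)
```

Each output sequence is roughly half as long as the input, matching the remark above.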

Polyphase Filtering
It is well known (see, e.g., [30, Chapter 4]) that the same level-1 signals a and b can alternatively be obtained by first splitting the given signal s into its two phases s_even = {s_{2k}}_{k∈Z} and s_odd = {s_{2k+1}}_{k∈Z}, and then jointly passing s_even and the delayed odd phase {s_{2k-1}}_{k∈Z} through a 2 × 2 polyphase filter H(z). It is defined by

H(z) = [ C_even(z)  C_odd(z) ; D_even(z)  D_odd(z) ],

with C_even(z) = Σ_k c_{2k} z^{-k} and C_odd(z) = Σ_k c_{2k+1} z^{-k}, and similarly for D_even(z) and D_odd(z). For the z-transforms, we equivalently have

[ A(z) ; B(z) ] = H(z) [ S_even(z) ; z^{-1} S_odd(z) ],

where A(z), B(z), S_even(z), and S_odd(z) denote the z-transforms of a, b, s_even, and s_odd, respectively. Note also that the filters C(z) and D(z) satisfy C(z) = C_even(z^2) + z^{-1} C_odd(z^2) and D(z) = D_even(z^2) + z^{-1} D_odd(z^2), so that the polyphase components are obtained by C_even(z^2) = (C(z) + C(-z))/2 and C_odd(z^2) = z(C(z) - C(-z))/2; see [30]. This will be used to explore vanishing moment conditions in terms of polyphase filters in Sect. 4.
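The equivalence of the direct route (filter, then downsample) and the polyphase route (split into phases, then filter) can be checked numerically; the sketch below is illustrative and uses the Daubechies-2 coefficients merely as a convenient orthogonal example.

```python
import math

def conv(h, x):
    """Full convolution y[j] = sum_m h[m] x[j-m] of finite sequences."""
    y = [0.0] * (len(h) + len(x) - 1)
    for m, hm in enumerate(h):
        for i, xi in enumerate(x):
            y[m + i] += hm * xi
    return y

# Illustrative 4-tap low-pass filter (Daubechies-2 analysis coefficients).
r3 = math.sqrt(3)
c = [(1 + r3) / (4 * math.sqrt(2)), (3 + r3) / (4 * math.sqrt(2)),
     (3 - r3) / (4 * math.sqrt(2)), (1 - r3) / (4 * math.sqrt(2))]

s = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]

# Direct route: filter, then downsample (keep even output indices).
a_direct = conv(c, s)[::2]

# Polyphase route: split s into phases, delay the odd phase by one sample,
# and filter each phase with the corresponding polyphase component.
s_even = s[0::2]                    # {s_{2k}}
s_odd_delayed = [0.0] + s[1::2]     # {s_{2k-1}}, with s_{-1} taken as 0
c_even, c_odd = c[0::2], c[1::2]
a_poly = [ce + co for ce, co in
          zip(conv(c_even, s_even) + [0.0], conv(c_odd, s_odd_delayed))]
```

Both routes produce the same approximation coefficients, up to rounding.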

Perfect Reconstruction, Power Complementarity and Orthogonal Wavelets
A polyphase filter H(z) allows for perfect reconstruction of every s if and only if H(z)^{-1} exists. When the filters C(z) and D(z) are power complementary, see [33] and Eq. (8), then this holds and the inverse is particularly easy to compute. Such power complementary filters are associated with orthogonal decompositions generated by the filter bank and with orthogonal wavelet representations of the signals [20,30]. The dilation equation and the wavelet equation serve to explore this connection. The dilation equation is a two-scale functional equation (a refinement equation), given by

φ(t) = √2 Σ_k c_k φ(2t + k),   (6)

for which we assume a non-trivial solution φ(t) to exist which is square integrable, which is normalized to unit L^2-norm, and which has a nonzero integral. The function φ(t) is called the scaling function, and it is used to build signal approximations. The wavelet function ψ(t) is used to represent signal details on different scales and is defined by the wavelet equation:

ψ(t) = √2 Σ_k d_k φ(2t + k).   (7)

When the filters C(z) and D(z) are power complementary, that is,

|C(z)|^2 + |D(z)|^2 = 2 for all |z| = 1,   (8)

it follows that the set {φ(t - m) | m ∈ Z} forms an orthonormal basis for the function space V_0 it spans [30]. Likewise, the set {ψ(t - m) | m ∈ Z} gives an orthonormal basis for its span W_0, while V_0 is orthogonal to W_0. The space V_{-1} spanned by {√2 φ(2t - m) | m ∈ Z} then satisfies V_{-1} = V_0 ⊕ W_0, as a consequence of the dilation and wavelet equations (6) and (7). Repeating the procedure, a useful multi-resolution structure is obtained [20,30].
To characterize orthogonality, it is helpful to observe that the filters C(z) and D(z) are power complementary if and only if the polyphase filter H(z) is an all-pass filter, i.e., if H(z) is unitary for all z on the complex unit circle. In digital signal processing applications, filters are often causal, stable, and of finite order. Causal stable all-pass filters are known as lossless filters. They have been studied extensively in the literature, see, e.g., [33], and can be conveniently parameterized, see Sect. 3. This gives a framework to carry out orthogonal wavelet design by optimizing a criterion function over a parameterized class of lossless polyphase filters. Incorporating orthogonality of the wavelets in this way saves us from imposing explicit nonlinear algebraic constraints, which otherwise tend to significantly deteriorate the performance of numerical optimization routines.

Causality, FIR Filters, Compact Support, and Wavelet Admissibility
A linear time-invariant filter F(z) is causal if it can be written as F(z) = Σ_{k≥0} f_k z^{-k}, i.e., with f_k = 0 for all k < 0. When processing a signal, such a filter only uses values from the present and past to compute the current filter output. To capture local features of a signal, wavelets are required to be well localized in both time and frequency, meaning that they should effectively have compact support and a bounded bandwidth. Here the choice is made for true compact support, for which the discrete-time filters must have a finite impulse response (FIR); they are moving average (MA) systems instead of the more general ARMA models. In this case, the filters C(z) and D(z) have all their poles at the origin; they take the form

C(z) = Σ_{k=0}^{N} c_k z^{-k},   D(z) = Σ_{k=0}^{N} d_k z^{-k},

where the maximum lag N denotes the filter order. For such an orthogonal FIR filter bank, it is well known that the orthogonality conditions imply that D(z) = ±(-z)^{-N} C(-z^{-1}), see [33]. Since the choice of sign for D(z) only corresponds to a sign choice for ψ(t) and is of no further consequence, we fix it to be positive. Then the coefficients of D(z) are related to those of C(z) by the 'alternating flip' construction [20]:

d_k = (-1)^k c_{N-k},  k = 0, . . . , N.   (11)

Also, since zero coefficients at the start or end of the sequence {c_k}, k = 0, . . . , N, can be compensated for either by shifting the signal or by reducing the filter order, it is natural to impose both c_0 ≠ 0 and c_N ≠ 0. Then, for an orthonormal basis, N is odd, say N = 2n - 1, and the polyphase filter H(z) is also FIR; it has order n - 1. The scaling function φ(t) and the wavelet ψ(t) both live on the interval [-N, 0] in this setup. Regarding orthonormality and losslessness of H(z), the filters C(z) and D(z) play a symmetric role. This symmetry is broken by their connection to the dilation and the wavelet equation: C(z) is low pass and generates an approximation signal, while D(z) is high pass and generates a detail signal. Moreover, the wavelet admissibility condition is imposed, which requires that ψ(t) has integral ∫_R ψ(t) dt = 0.
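The alternating flip construction and its basic consequences are easy to verify numerically; in this illustrative sketch the Daubechies-2 coefficients are used as an example (they are not taken from the paper).

```python
import math

def alternating_flip(c):
    """High-pass coefficients d_k = (-1)^k c_{N-k} from low-pass c, N = len(c)-1."""
    N = len(c) - 1
    return [(-1) ** k * c[N - k] for k in range(N + 1)]

# Daubechies-2 low-pass coefficients as an illustration.
r3 = math.sqrt(3)
c = [(1 + r3) / (4 * math.sqrt(2)), (3 + r3) / (4 * math.sqrt(2)),
     (3 - r3) / (4 * math.sqrt(2)), (1 - r3) / (4 * math.sqrt(2))]
d = alternating_flip(c)

# Consequences of orthonormality and admissibility that can be checked here:
# sum(c) = sqrt(2), sum(d) = 0, and c is orthogonal to its even shifts.
```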
The wavelet ψ(t) and the high-pass filter D(z) are both said to have a first vanishing moment (which is of order zero) [20].

Parameterizations of Lossless FIR Polyphase Filters

Parameterizations of lossless filters are given in [33]. Related parameterizations for the 2 × 2 FIR case are obtained by factoring H(z), of order n - 1, into a product of rotations and elementary single-channel delays:

H(z) = R(θ_n)Λ(z) R(θ_{n-1})Λ(z) · · · R(θ_2)Λ(z) R(θ_1)Λ(-1),   (12)

where

R(θ) = [ cos θ  -sin θ ; sin θ  cos θ ],   Λ(z) = [ 1  0 ; 0  z^{-1} ].   (13)

This parameterization involves n angular parameters θ_1, θ_2, . . . , θ_n, associated with the rotation matrices R(θ_k). The matrix Λ(-1) = [ 1  0 ; 0  -1 ] appears as the last factor of H(z) to comply with the sign convention of the alternating flip construction (11). Note that the factorization (12) makes it easy to compute the partial derivatives of H(z) with respect to the parameters, which can be an advantage when implementing optimization routines. Another nice feature of this parameterization is that it admits a lattice filter implementation [30,32], in which the initial section represents the constant transfer matrix R(θ_1)Λ(-1), while the other n - 1 lattices represent the first-order lossless transfer matrices R(θ_k)Λ(z), (k = 2, . . . , n); it is displayed in Fig. 1.
The factorization (12) also allows one to recursively compute the filter coefficients c_0, c_1, . . . , c_{2n-1} from a given set of parameters θ_1, . . . , θ_n. If the coefficients of a lossless FIR polyphase filter of order n - 2 with parameters θ_1, . . . , θ_{n-1} are denoted by ĉ_0, ĉ_1, . . . , ĉ_{2n-3}, then the coefficients c_0, . . . , c_{2n-1} follow by applying one more lattice section R(θ_n)Λ(z); see Eq. (14). Conversely, to compute θ_n from a given set of coefficients of a lossless FIR polyphase filter, note that θ_n must be chosen such that zeros appear at the specified locations in the coefficient matrix of the right-hand side expression of (14). To parameterize all the lossless FIR polyphase filters of order n - 1, it suffices to take θ_1 ∈ [-π, π) (any half-open interval of length 2π will do) and θ_2, . . . , θ_n ∈ [-π/2, π/2) (any half-open interval of length π will do). A set of filter coefficients then corresponds to a unique set of angular parameters (from the chosen intervals). Filters of order less than 2n - 1 are embedded (with additional delays) in multiple ways into the parameterization.
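A sketch of this computation, under the assumption that the factorization (12) takes the lattice form H(z) = R(θ_n)Λ(z) · · · R(θ_2)Λ(z) R(θ_1)Λ(-1) with Λ(z) = diag(1, z^{-1}); instead of the coefficient recursion (14), the filter coefficients are obtained here by explicit polynomial matrix multiplication.

```python
import math

def polymat_mul(A, B):
    """Multiply 2x2 matrix polynomials in z^{-1}, given as coefficient lists."""
    out = [[[0.0, 0.0], [0.0, 0.0]] for _ in range(len(A) + len(B) - 1)]
    for i, Ai in enumerate(A):
        for j, Bj in enumerate(B):
            for r in range(2):
                for s in range(2):
                    out[i + j][r][s] += sum(Ai[r][t] * Bj[t][s] for t in range(2))
    return out

def R(t):
    return [[[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]]

LAMBDA = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 1.0]]]  # diag(1, z^{-1})

def polyphase_from_angles(thetas):
    """Assumed lattice form: H(z) = R(theta_n) L(z) ... R(theta_1) diag(1,-1)."""
    H = [[[1.0, 0.0], [0.0, -1.0]]]          # diag(1, -1), rightmost factor
    H = polymat_mul(R(thetas[0]), H)
    for t in thetas[1:]:
        H = polymat_mul(polymat_mul(R(t), LAMBDA), H)
    return H

def lowpass_from_polyphase(H):
    """Interleave the top row: C(z) = C_even(z^2) + z^{-1} C_odd(z^2)."""
    c = []
    for Hk in H:
        c.extend([Hk[0][0], Hk[0][1]])
    return c

# Example: n = 2 with theta_1 + theta_2 = pi/4 (admissibility), here both pi/8.
c = lowpass_from_polyphase(polyphase_from_angles([math.pi / 8, math.pi / 8]))
```

Under this convention, the resulting coefficients satisfy the expected sanity checks: Σ c_k = √2, Σ c_k^2 = 1, and C(-1) = 0.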
As indicated, a first vanishing moment (of order zero) is needed for the wavelet interpretation of ψ(t), and it is implied by the dilation and wavelet equations. The (normalized) scaling function φ(t) has integral 1 and the wavelet ψ(t) has integral zero. By integrating (6) and (7), it follows that the low-pass filter coefficients add up to c_0 + c_1 + · · · + c_{2n-1} = √2 and the high-pass filter coefficients to d_0 + d_1 + · · · + d_{2n-1} = 0. For the polyphase filter H(z), this implies the following result.

Proposition 1 Let H(z) be a lossless FIR polyphase filter of order n - 1, parameterized as in (12). Then:

(a) The associated filter bank satisfies the wavelet admissibility condition if and only if

H(1) = (1/√2) [ 1  1 ; 1  -1 ].   (15)

(b) The condition under (a) is satisfied in terms of the angular parameters if and only if

θ_1 + θ_2 + · · · + θ_n = π/4 (mod 2π).   (16)
For an admissible lossless FIR polyphase filter of order n - 1, the condition of Proposition 1(b) leaves n - 1 degrees of freedom, which we normally assign to θ_2, . . . , θ_n by fixing θ_1 as θ_1 = π/4 - (θ_2 + · · · + θ_n) (mod 2π). For some computations in this paper, it will be convenient to fix θ_n instead, and to assign the n - 1 degrees of freedom to the parameters θ_1, . . . , θ_{n-1}. We then propose to re-parameterize the polyphase filters using a Potapov decomposition [26] in the following way. First, note that by Proposition 1 the matrix H(1) = H(1)^{-1} is symmetric and orthogonal. Let H̃(z) be defined by

H̃(z) = H(1)^{-1} H(z).   (17)

This is also a lossless FIR polyphase filter of order n - 1, now satisfying H̃(1) = I_2.
Therefore, H̃(z) admits a Potapov decomposition, which is a product of 'elementary' lossless factors of order 1:

H̃(z) = B_{φ_{n-1}}(z) B_{φ_{n-2}}(z) · · · B_{φ_1}(z),   (18)

with

B_φ(z) = I_2 + (z^{-1} - 1) v_φ v_φ^T,   v_φ = ( -sin φ, cos φ )^T.   (19)

It can be worked out directly that the parameters φ_k in this new setup are given in terms of θ_1, . . . , θ_{n-1} by:

φ_k = θ_1 + θ_2 + · · · + θ_k (mod π),  k = 1, . . . , n - 1.   (20)

Note that because each parameter θ_k can be freely chosen from an arbitrary half-open interval of length π, the same holds for each parameter φ_k. In addition, observe that B_{φ_k}(1) = I_2, which makes it easy to increase or decrease the order of the polyphase filter by merely adding or deleting parameters φ_k.

Conditions for Vanishing Moments
Vanishing moments are useful wavelet properties for several reasons. In this section, we aim to explain some of this background and also to express conditions for them in terms of the polyphase matrices H(z) and H̃(z), and in terms of the two parameterizations discussed in Sect. 3. A wavelet is said to have p vanishing moments (of orders 0, 1, . . . , p - 1) if

∫_R t^m ψ(t) dt = 0,  m = 0, 1, . . . , p - 1.

This implies that all polynomial time functions of degree ≤ p - 1 are orthogonal to ψ(t) and to its shifts and dilations. Because the scaling function and the wavelet have compact support, this interpretation also holds locally: local polynomial approximations up to degree p - 1 belong to the approximation spaces. Also, it is well known that smoothness of wavelets requires vanishing moments: orthogonal wavelets need p vanishing moments to be p - 1 times continuously differentiable [18].
The vanishing moments property of the wavelet ψ(t) can also be expressed as vanishing moment conditions on the high-pass filter D(z). It is well known that ψ(t) has p vanishing moments if and only if D(z) has a zero of multiplicity p at z = 1. Equivalently,

D^{(m)}(1) = 0,  m = 0, 1, . . . , p - 1,

where D^{(m)}(z) denotes the mth-order derivative of D(z). Because of the alternating flip construction, this is equivalent to related conditions on the low-pass filter:

C^{(m)}(-1) = 0,  m = 0, 1, . . . , p - 1.

This shows that the filters C(z) and D(z) have flatness at z = -1 and z = 1, respectively, up to order p - 1. See also, e.g., [7,30]. For the two parameterizations at hand, the question arises which conditions on the angular parameters θ_2, . . . , θ_n or φ_1, . . . , φ_{n-1}, respectively, enforce p - 1 extra vanishing moments (in addition to the wavelet admissibility condition). For a second vanishing moment (i.e., of order 1), we have the following result.
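Counting the vanishing moments of a given filter is straightforward, since D^{(m)}(1) = 0 for m = 0, . . . , p - 1 is equivalent to Σ_k k^m d_k = 0 for the same m. A small illustrative checker (with Daubechies-2 as example input, not taken from the paper):

```python
import math

def vanishing_moments(d, tol=1e-10):
    """Count p such that sum_k k^m d_k = 0 for m = 0..p-1
    (equivalent to D^{(m)}(1) = 0 for those m)."""
    p = 0
    while p < len(d) and abs(sum((k ** p) * dk for k, dk in enumerate(d))) < tol:
        p += 1
    return p

# Daubechies-2 filters as an illustration: designed to have p = 2.
r3 = math.sqrt(3)
c = [(1 + r3) / (4 * math.sqrt(2)), (3 + r3) / (4 * math.sqrt(2)),
     (3 - r3) / (4 * math.sqrt(2)), (1 - r3) / (4 * math.sqrt(2))]
d = [(-1) ** k * c[3 - k] for k in range(4)]  # alternating flip
```

For the Haar filter the same checker reports a single vanishing moment.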

Proposition 2
Let H(z) be a 2 × 2 lossless FIR polyphase matrix of order n - 1 satisfying the admissibility condition (15), ensuring a vanishing moment of order 0. Let H(z) be parameterized in terms of the angular parameters θ_1, . . . , θ_n according to Eq. (12), satisfying condition (16). Let H̃(z) be the associated lossless filter given by Eq. (17), parameterized in terms of the angular parameters φ_1, . . . , φ_{n-1} defined by Eq. (20). Then H(z) has a vanishing moment of order 1 if and only if one of the equivalent conditions (a)-(f) holds. Though these conditions follow directly from the well-known Eqs. (7) and (13), they are not trivially built into a parameterization. While condition (16) was easily handled by eliminating the parameter θ_1, condition (c) of Proposition 2 cannot always be rewritten to express θ_2 in terms of the other parameters: values for θ_3, . . . , θ_n can be chosen such that no feasible value for θ_2 exists which satisfies the condition. For n = 3 this is illustrated by Fig. 2a. In this case, because of the low order, the manifold of such filters is topologically equivalent to a circle, and one may in fact identify an interval of feasible values for θ_2; it then is possible to (generically) build in this property. The same problem exists for the parameterization in terms of the φ_k, as is illustrated in Fig. 2b, now relating to condition (f) of Proposition 2.
A third vanishing moment (of order 2) can be characterized by similar, but more involved, trigonometric conditions on the angular parameters (Proposition 3), involving terms of the form cos(2(θ_{k+1} + · · · + θ_n)). For n = 4, the parameter θ_1 can be eliminated as before, but handling θ_2, θ_3 and θ_4 conveniently is far more complicated. In Fig. 3a the manifolds of filters having 2 or 3 vanishing moments are displayed. For 3 vanishing moments, the manifold consists of two connected components (the green curves). Switching to the parameters φ_1, φ_2, φ_3, as presented in Fig. 3b, is more convenient, but likewise shows the complexity involved. Because periodicity holds (opposite sides are to be identified), the blue surfaces for 2 vanishing moments are closed and have genus 3.
In [25] a procedure was developed to build up to 4 vanishing moments into a different parameterization of 2 × 2 lossless systems of arbitrary (sufficiently large) order, using the tangential Schur algorithm with interpolation conditions at the stability boundary. However, that parameterization was not restricted to FIR filters. A parameterization which imposes both the FIR property and several vanishing moments is still lacking. So far, the parameterizations given here, in terms of θ_1, . . . , θ_n or in terms of φ_1, . . . , φ_{n-1}, with the accompanying constraints for vanishing moments up to order 2, seem to be the most practical currently available for numerical optimization.

Fig. 3 Surfaces and curves in the parameter space for n = 4, corresponding to orthogonal wavelets with two vanishing moments (blue surfaces) and three vanishing moments (green curves). The lossless polyphase filter satisfying the wavelet admissibility condition has n - 1 = 3 free parameters: θ_2, θ_3 and θ_4, or φ_1, φ_2 and φ_3, respectively, depending on the choice of parameterization. Clearly there are choices of θ_2 and θ_3 for which no θ_4 can be found to satisfy condition (c) of Proposition 2. Likewise, this holds for φ_1, φ_2 and φ_3 with respect to condition (f). The same holds for the green curves for conditions (c) and (f) of Proposition 3. The parameter spaces are periodic: opposite faces of the cubic parameter areas are to be identified. a Parameterization with θ_2, θ_3, θ_4. b Parameterization with φ_1, φ_2, φ_3 (Color figure online)

Measuring the Quality of a Wavelet Representation
As previously argued in [15,16], choosing a good wavelet is not trivial. Some vanishing moments and an optimal ratio of the Heisenberg uncertainty rectangle in time-frequency, see [22], are usually considered desirable properties for wavelets to have, for general reasons. One can also attempt to measure the quality of a wavelet in terms of the representation of a prototype signal in the wavelet domain. For compression purposes, it is intuitively beneficial if a prototype signal is represented sparsely in the wavelet domain: this yields a high compression rate. Similarly, sparsity can be useful for feature detection purposes, interpreting a sparse feature representation as a 'fingerprint'. Note that sparsity encompasses template matching, but is considerably more flexible: it allows a feature to be built from several copies of the wavelet at different positions and scales, while the values of the wavelet coefficients may still show some variation.
Here we advocate to design a matched wavelet for a given application by maximizing sparsity of the wavelet representation of a prototype signal. To represent a signal in the wavelet domain there are several options. First, one may use the 'critically sampled' discrete wavelet transform where dyadic downsampling takes place [19]. Second, one can employ the 'redundant' undecimated wavelet transform [24], to have time invariance and full resolution at every dyadic scale. Third, one could use the wavelet and the dilation equation to first find the wavelet and scaling function [30] and then employ the continuous wavelet transform [20]. We will first focus on a matching approach for the critically sampled wavelet transform and then, in the next section, indicate how this can be extended to accommodate the undecimated wavelet transform too.
To obtain a critically sampled signal decomposition, the wavelet filter bank (with downsampling) is repeatedly applied to the low-pass filter outputs [19]. At each level j, we denote the detail coefficients by b_k^{(j)} and the approximation coefficients by a_k^{(j)}. For orthonormal wavelets, a wavelet analogue of Parseval's identity holds. It states [20] that the energy of an input signal s of finite length m, which for convenience is assumed to be a power of two, is preserved in the approximation and detail signals [15,30]:

‖s‖_2^2 = Σ_{j=1}^{j_max} Σ_k |b_k^{(j)}|^2 + Σ_k |a_k^{(j_max)}|^2.   (26)

If a vector w is introduced to contain all the detail and approximation coefficients of a signal s, as in [15]:

w = (b^{(1)}, b^{(2)}, . . . , b^{(j_max)}, a^{(j_max)}),   (27)

then the L^2-norms of s and w are equal: ‖s‖_2 = ‖w‖_2. When performing wavelet design over the class of orthonormal wavelets with a fixed choice of the filter order N = 2n - 1, it is important to note that ‖w‖_2 is the same for every choice of the parameters. If the goal is to promote sparsity subject to this constraint on the L^2-norm of w, a conceptual approach is to maximize the variance. To make this concept operational, we propose two approaches, see also [15,16]:
1. maximize the variance of the absolute values of the wavelet coefficients;
2. maximize the variance of the squared wavelet coefficients.
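Energy preservation is easy to confirm numerically. The sketch below uses a periodized ('wrap around') implementation, assumed here so that the identity is exact on finite signals, with Daubechies-2 filters as an example.

```python
import math

def dwt_periodic(s, c, d, levels):
    """Critically sampled DWT with periodic ('wrap around') boundaries.
    Returns the stacked vector w = (b^(1), ..., b^(jmax), a^(jmax))."""
    a, w = list(s), []
    for _ in range(levels):
        n = len(a)
        w += [sum(d[m] * a[(2 * k - m) % n] for m in range(len(d)))
              for k in range(n // 2)]                      # details b^(j)
        a = [sum(c[m] * a[(2 * k - m) % n] for m in range(len(c)))
             for k in range(n // 2)]                       # next approximation
    return w + a

# Daubechies-2 filter pair as an illustrative orthonormal choice.
r3 = math.sqrt(3)
c = [(1 + r3) / (4 * math.sqrt(2)), (3 + r3) / (4 * math.sqrt(2)),
     (3 - r3) / (4 * math.sqrt(2)), (1 - r3) / (4 * math.sqrt(2))]
d = [(-1) ** k * c[3 - k] for k in range(4)]               # alternating flip

s = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]               # length m = 8
w = dwt_periodic(s, c, d, levels=2)
```

The stacked vector w has the same length and the same energy as s, as Eq. (26) asserts.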
The following theorem [15,16] shows that the former approach corresponds to minimization of an L^1-norm, which is a well-known criterion to achieve sparsity in various other contexts, see for example [4,5,9,10]. The latter approach maximizes the variance of the energy distribution over the approximation and detail coefficients at the various scales, and corresponds to maximization of an L^4-norm.

Theorem 1 Let w be the vector of the wavelet and the approximation coefficients as in (27), resulting from the processing of a signal s of length m by means of an orthogonal filter bank, depending on a set of free parameters to be optimized. Then:

(1) Maximization of the variance of the sequence of absolute values |w_k| is equivalent to minimization of the L^1-norm V_1 = Σ_{k=1}^{m} |w_k|.
(2) Maximization of the variance of the sequence of energies |w_k|^2 is equivalent to maximization of the L^4-norm V_4 = (Σ_{k=1}^{m} |w_k|^4)^{1/4}.
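Both criteria are cheap to evaluate. The toy vectors below (illustrative, not from the paper) have the same L^2-norm; in line with Theorem 1, the sparse one scores lower on V_1 and higher on V_4.

```python
def V1(w):
    """L1-norm of the coefficient vector; minimizing it maximizes the
    variance of the absolute values at fixed L2-norm."""
    return sum(abs(x) for x in w)

def V4(w):
    """L4-norm of the coefficient vector; maximizing it maximizes the
    variance of the squared values at fixed L2-norm."""
    return sum(x ** 4 for x in w) ** 0.25

spread = [0.5, 0.5, 0.5, 0.5]   # unit L2-norm, maximally spread out
sparse = [1.0, 0.0, 0.0, 0.0]   # unit L2-norm, maximally sparse
```

The equivalences hold because, at fixed ‖w‖_2, the variance of |w_k| differs from -(V_1/m)^2 by a constant, and the variance of |w_k|^2 differs from V_4^4/m by a constant.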
We have used both optimization criteria and explored their merits for wavelet matching. Their advantages and drawbacks are discussed, together with several examples, in the following two sections.

Practical Considerations for Wavelet Matching
The parameterization with angular parameters θ_1, . . . , θ_n developed first in Sect. 3 allows for a convenient design of orthogonal wavelets by means of a local search over the space [-π/2, π/2)^{n-1}, where either θ_1 or θ_n is fixed to enforce the wavelet admissibility condition. (Alternatively, the parameters φ_1, . . . , φ_{n-1} can be used in a similar manner.) For additional vanishing moments, some further parameters can be fixed, depending on the values of the remaining free parameters θ_k. During the iterative search, if fixing these parameters for a current set of free parameter values is not possible, then the optimization criterion V_1 or V_4 is augmented with a large penalty term (with an appropriate sign) to force the iterations back into the region where the constraints can be satisfied. Caution must be exercised due to the existence of local optima, as discussed in [15]. As an example, five local optima encountered for a case with two free parameters are shown in Fig. 4. Here the solutions labeled 1 and 3 are very similar, but they are located in two different valleys of the search space. Between solutions 1 and 3, there is an apparent time delay in effect, which by the alternating flip construction also produces a sign change in the wavelet function (Fig. 5).
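As an illustrative stand-in for such a local search, a coarse grid search over the single free parameter for n = 2 might look as follows. This is a sketch with a toy prototype signal; the closed-form filter family depends on the lattice factorization convention assumed here, with θ_1 eliminated by the admissibility condition θ_1 + θ_2 = π/4.

```python
import math

def lowpass_n2(theta2):
    """Order-3 low-pass filter from the two-rotation lattice (n = 2), with
    theta1 eliminated via theta1 + theta2 = pi/4. (Closed form derived under
    the factorization convention assumed in this sketch.)"""
    t1 = math.pi / 4 - theta2
    return [math.cos(t1) * math.cos(theta2), math.sin(t1) * math.cos(theta2),
            -math.sin(t1) * math.sin(theta2), math.cos(t1) * math.sin(theta2)]

def v1_level1(s, c):
    """V1 of a one-level periodic decomposition (details plus approximations)."""
    n, N = len(s), len(c) - 1
    d = [(-1) ** k * c[N - k] for k in range(N + 1)]        # alternating flip
    w = [sum(h[m] * s[(2 * k - m) % n] for m in range(len(h)))
         for h in (d, c) for k in range(n // 2)]
    return sum(abs(x) for x in w)

# Coarse grid search over the free parameter theta2, standing in for the
# local search described in the text; s is a toy prototype signal.
s = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]
grid = [i * math.pi / 200 - math.pi / 2 for i in range(200)]
best_v1, best_theta2 = min((v1_level1(s, lowpass_n2(t)), t) for t in grid)
```

In a real design, the grid evaluation would be replaced by the penalized local search described above, over all free parameters and all decomposition levels.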
A second point to consider when choosing between V_1 and V_4 is masking. When only a certain time span of the prototype signal (e.g., carrying a distinctive feature) or a certain frequency band corresponding to approximate wavelet scales is of interest, one can apply a time-frequency mask to the wavelet-domain representation of the prototype signal and evaluate the criteria on the masked representation. But when a minimization criterion is employed, the energy will simply tend to be forced outside the mask. This might easily result in an undesirable delay filter, a phenomenon that is not straightforward to counteract. Hence, when masking is used, it is advisable to use the maximization criterion V_4, for which this drawback does not occur. As an example, in Fig. 12 in Sect. 7, masking is used to force energy into the fine scales at the time locations of two QRS complexes in an ECG signal.
Third, an extension that might be beneficial, e.g., for detection purposes, is to employ an overcomplete representation. Retaining dyadic scales, but using uniform time sampling as in the undecimated [29] or the stationary wavelet transform [24], ensures that the representation becomes translation invariant. Since the coefficients of the critically sampled wavelet transform for different time shifts are all subsets of the undecimated coefficients, 'conservation of energy' still applies for any such selection of subsets. The overall energy therefore is a weighted sum of the energies per scale, where the weights decrease by a factor of two per scale:

‖s‖_2^2 = Σ_{j=1}^{j_max} 2^{-j} Σ_{k=1}^{m} |b_k^{(j)}|^2 + 2^{-j_max} Σ_{k=1}^{m} |a_k^{(j_max)}|^2.

Here s is the signal of length m, j_max is the number of scales (levels) of the wavelet decomposition, and every scale of the wavelet decomposition has m detail coefficients b_k^{(j)} and approximation coefficients a_k^{(j_max)} (k = 1, . . . , m). To avoid finite-length effects, we apply 'wrap around' to induce periodicity and to obtain wavelet coefficient sequences of length m at every scale. As a result, correspondingly adjusted criteria V_1 and V_4 can be used with the undecimated wavelet transform. We will refer to these adjusted criteria as the undecimated V_1 and V_4. The translation invariance of the undecimated criteria helps to avoid undesired phenomena and ensures that sparsity is more robustly built in.

Wavelet Matching Examples
Example 1 To compare wavelet matching for the critically sampled wavelet transform and the undecimated wavelet transform, we took an acceleration signal of a human subject, measured during a step in normal gait. It is displayed in Figs. 6 and 7.
A first matched orthogonal wavelet was designed by minimizing the criterion V_1, using the critically sampled wavelet transform. The polyphase filter order was chosen as n - 1 = 4, and one vanishing moment was enforced, leaving 4 free parameters for optimization. The low-pass filter coefficients are given in Table 1 and the results are shown in Fig. 6. Sparsity of the resulting wavelet representation is confirmed by the presence of a single large wavelet component on the detail scale D3. The morphology of the matched wavelet on this scale indeed closely mimics that of the dominant component of the prototype signal, as displayed in Fig. 8.
In Fig. 7, the results are shown for minimization of the undecimated V1 criterion using the same filter order, with 3 free parameters and two vanishing moments. Again a sparse representation is obtained, and the new orthogonal wavelet (green) turns out to be similar to the one computed earlier, while the new scaling function (blue) shows a time shift with respect to the previous one. The undecimated wavelet transform has the same high resolution on all scales and admits a more precise detection of the locations of the peaks in the signal. The low-pass filter coefficients are again given in Table 1.

Table 1 Low-pass filter coefficients for the matched wavelets

One can investigate the differences in sparsity obtained by the wavelet design by considering the L1-norm of the wavelet coefficients. This is illustrated in Table 2 for the two designed wavelets of Example 1 and for the Daubechies and Symlet wavelets, all with the same prototype signal, 10 filter coefficients and 4 scales. From these measurements, it is clear that the design of a wavelet with only one vanishing moment gives a considerable increase in sparsity (a lower L1-norm), which can also be witnessed in Fig. 6, where many of the coefficient values are close to zero. Adding an additional vanishing moment comes at the cost of a degree of freedom, and in this case the consequence is a smaller increase in sparsity. The sparse representation of the normal gait in the wavelet domain makes it possible to isolate and remove the gait pattern from other measured acceleration signals of the same subject. This approach, with the matched wavelet presented here, was successfully applied in [17] to identify stumbles in acceleration signals.

Fig. 9 Example 2, matched wavelet and scaling function for a blood pressure signal, minimizing the undecimated criterion V1 with 4 free parameters and 2 vanishing moments

Example 2

To compare the undecimated criteria V1 and V4, we took a quasi-periodic blood pressure signal from the 'time course data for blood pressure in Dahl SS and SSBN13 rats', see [3]. We designed two orthogonal matched wavelets, in both cases using a polyphase filter order n − 1 = 5 and enforcing two vanishing moments, leaving 4 free parameters for optimization. See Table 1 for their low-pass filter coefficients. The first matched wavelet was designed by minimizing the undecimated criterion V1, see Fig. 9; the second by maximizing the undecimated criterion V4, see Fig. 10. The first wavelet appears to achieve a better localization of the signal in the wavelet domain, as the signal peaks are slightly more pronounced on the detail scales D3 and D4. The second wavelet shows more oscillations and is less smooth than the first one.
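The opposite directions of the two criteria (minimize V1, maximize V4) can be illustrated on two hypothetical coefficient vectors of equal energy; the toy values below are our own and only serve to show why both criteria reward concentration of energy in few coefficients.

```python
import numpy as np

def V1(coeffs):
    """L1-norm of the coefficients: smaller means sparser."""
    return np.abs(coeffs).sum()

def V4(coeffs):
    """Fourth-power sum: for fixed energy, larger means sparser,
    since a few large coefficients dominate many small ones."""
    return (coeffs ** 4).sum()

sparse = np.array([2.0, 0.0, 0.0, 0.0])  # energy 4, concentrated
dense = np.array([1.0, 1.0, 1.0, 1.0])   # energy 4, spread out
```

Both vectors have the same energy, yet the concentrated one has the smaller L1-norm and the larger fourth-power sum, which is exactly the behavior the two criteria exploit.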

Example 3
To study the effects of masking, we took an ECG signal from the St. Petersburg INCART database, see [11]. It is displayed in Figs. 11 and 12. We chose a polyphase filter order n − 1 = 3 and enforced two vanishing moments, leaving 2 free parameters for optimization. First we designed a matched orthogonal wavelet by maximizing the undecimated criterion V4, see Fig. 11. The optimization procedure, employing multiple random starting points, terminated with the coefficients reported in Table 1. The Euclidean distance of the coefficient vector to that of the Daubechies 4 filter coefficients is only 0.0955.

Fig. 10 Example 2, matched wavelet and scaling function for a blood pressure signal, maximizing the undecimated criterion V4 with 4 free parameters and 2 vanishing moments

A second wavelet was designed using the undecimated criterion V4 with masking. Two masks were introduced to emphasize two selected QRS complexes in the fine scales, as illustrated in Fig. 12. This resulted in a wavelet that is close to the Daubechies 2 wavelet; the Euclidean distance of the coefficient vector to that of the Daubechies 2 filter coefficients is 0.1122. For this second wavelet, one can observe that the QRS complexes not used for masking become more pronounced in the fine scales too.
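Masking can be sketched as elementwise weights applied to the detail coefficients before the criterion is evaluated. The function name and the toy coefficients below are hypothetical; the idea is only that weights larger than one on a chosen region (such as a selected QRS complex) steer the optimization toward matching that region.

```python
import numpy as np

def masked_V4(details, masks):
    """V4 evaluated on weighted detail coefficients; a mask of
    ones with entries > 1 on a chosen region emphasizes that
    region in the criterion."""
    return sum(((w * d) ** 4).sum() for d, w in zip(details, masks))

# Hypothetical detail coefficients on two scales.
d1 = np.array([0.1, 0.1, 2.0, 0.1])
d2 = np.array([0.2, 1.5, 0.2, 0.2])

# Emphasize the third position on the fine scale; unit weight elsewhere.
m1 = np.array([1.0, 1.0, 3.0, 1.0])
m2 = np.ones(4)
```

With unit masks the weighted criterion reduces to the plain V4, so masking is a strict generalization of the unweighted design.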

Conclusions and Discussion
From the examples presented in the previous section, we conclude that the proposed approach for designing matched orthogonal wavelets is a feasible one, with a clear potential for applications. The availability of parametrizations which build in orthogonality and allow for explicit conditions on extra vanishing moments makes it relatively easy to compute matched wavelets through constrained optimization. Still, some user choices remain to be made, which are application dependent, so the method cannot be fully automated.
First, the user has to decide on the filter order n − 1 and the number of extra vanishing moments p − 1. The higher the value of n, the more complex the optimization problem becomes. A higher value of p reduces the degrees of freedom, but increases the complexity in a different way, as more complicated constraints then have to be satisfied. Here we considered p ≤ 3, though conditions can be derived for higher values too. When n − p (the degrees of freedom) does not exceed 3, optimization is still easily supported by visualization of the criterion, which may help to detect and avoid local optima. Otherwise one may have to use multiple starting points, for which one needs to decide how to choose them.

Fig. 11 Example 3, design of a matched wavelet with and without masking for an ECG signal. Shown is the optimization of the undecimated V4 criterion without masking, with 2 free parameters and two vanishing moments

Fig. 12 Example 3, as in Fig. 11 but with masking. The mask is indicated by the dashed boxes. It pronounces the QRS complexes in the fine scales
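The vanishing-moment constraints mentioned above can be checked numerically on a candidate high-pass filter: the wavelet has p vanishing moments when the discrete moments of g up to order p − 1 are zero. The following sketch counts them for two standard filters; the function name is ours.

```python
import numpy as np

def count_vanishing_moments(g, tol=1e-8):
    """Number of vanishing moments of the wavelet filter g,
    i.e. the largest p with sum_k k**m * g[k] = 0 for all m < p."""
    k = np.arange(len(g), dtype=float)
    p = 0
    while p < len(g) and abs((k ** p * g).sum()) < tol:
        p += 1
    return p

# Haar high-pass filter: one vanishing moment.
g_haar = np.array([1.0, -1.0]) / np.sqrt(2.0)

# Daubechies-2 high-pass filter: two vanishing moments.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))
g_db2 = ((-1) ** np.arange(4)) * h[::-1]
```

In a constrained design, these moment conditions (rather than a numerical check) are imposed explicitly on the parametrized filter coefficients, which is what reduces the degrees of freedom from n to n − p.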
Second, the user needs a prototype signal to match the wavelet to. For some applications, this may be more straightforward than for others. Obviously, the choice of the prototype signal may have an impact on the matched wavelet that is obtained, so it could be advisable to try a few different ones and compare the performance of the resulting matched wavelets.
Third, in our setup of orthogonal wavelets the user needs to decide whether to minimize V1 or to maximize V4, whether to use the critically sampled or the undecimated transform, and whether using weights (masking) seems useful. Though some of these choices might be clear from the application one has in mind, it may again require some trial and error before satisfactory results are obtained.
The approach revolves around building orthonormal bases, and as such the proposed design criteria assume conservation of energy. To apply similar ideas to the biorthogonal wavelet case, one would therefore have to come up with an optimization criterion which takes the non-conservation of energy into account and somehow compensates for it, for instance through appropriate weighting.
The approach also has some advantages that we would like to highlight. Most importantly, it offers more flexibility than restricting oneself to the standard classes available through software toolboxes, and it provides a method to explore this flexibility in a systematic way. Next, sparsity appears to be a natural criterion to aim for in compression and detection applications, and this framework works with it directly. In the examples we presented, the sparsity criterion indeed led to useful matched wavelets. Finally, because several well-known classes of orthogonal wavelets, such as Daubechies wavelets and Coiflets, are naturally included in this framework, it becomes easier to compare the performance of an optimized wavelet to one from such a class than merely by properties such as vanishing moments and their effect on polynomial signals. One may then, for instance, still decide to use a wavelet from a standard class when it is shown to be only slightly suboptimal.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.