Journal of Mathematical Imaging and Vision, Volume 44, Issue 3, pp 223–235

Geometric Moments and Their Invariants

Authors

    • M.S. Hickman, Department of Mathematics & Statistics, University of Canterbury

DOI: 10.1007/s10851-011-0323-x

Cite this article as:
Hickman, M.S. J Math Imaging Vis (2012) 44: 223. doi:10.1007/s10851-011-0323-x

Abstract

Moments and their invariants have been extensively used in computer vision and pattern recognition. There is an extensive and sometimes confusing literature on the computation of a basis of functionally independent moments up to a given order. Many approaches have been used to solve this problem albeit not entirely successfully. In this paper we present a (purely) matrix algebra approach to compute both orthogonal and affine invariants for planar objects that is ideally suited to both symbolic and numerical computation of the invariants. Furthermore we generate bases for both systems of invariants and, in addition, our approach generalises to higher dimensional cases.

Keywords

Orthogonal and affine transformations · Moments · Invariants · Covariants · Image recognition

1 Introduction

Moments are quantities that characterise the “shape” of an object. That object may be a physical shape, an image or something more abstract like a statistical distribution. Let the object of interest be described by ρ:ℝn→ℝm; that is, a vector-valued function defined on ℝn. We define a geometric moment of ρ by
$$\mathbf{m}_{\mathbf{a}} = \int \mathbf{x}^\mathbf{a}\rho(\mathbf{x})\,d{\varOmega} $$
(1)
where a=(a1,a2,…,an) is a multi-index (with non-negative components),
$$\mathbf{x}^\mathbf{a}\equiv x_1^{a_1} x_2^{a_2}\cdots x_n^{a_n}$$
and
$$d {\varOmega} = dx_1\wedge dx_2\wedge\cdots \wedge dx_n$$
is the volume form on ℝn. The order of the moment is given by
$$|\mathbf{a}|\equiv a_1+a_2+ \cdots + a_n.$$
We assume that ρ has compact support D and is bounded so that the moments (1) are finite.
In the application of shape recognition for a planar object, n=2 and ρ is usually a scalar-valued function (grey scale image) or takes values in ℝ3 (RGB image). Thus
$$m_{a,b} = \iint x^a y^b \rho(x, y)\,dx\wedge dy$$
is a moment of order a+b. It has a single component in the case of a grey scale image. For RGB images we will consider that there are three separate moments. Thus, for example, the red component is given by
$$m_{a,b}^R = \iint x^a y^b \rho^R(x, y)\,dx\wedge dy$$
and so we will always consider a moment to be a scalar quantity.
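A minimal Maple sketch of the discrete analogue of these moments may make this concrete. The procedure name, the N×N Array of grey values and the convention that pixel (i,j) sits at the point (i,j) are assumptions of this sketch rather than anything prescribed above.

    # Discrete version of m_{a,b} for an N x N grey scale image rho.
    moment := proc(rho, a, b, N)
        local i, j;
        add(add(i^a * j^b * rho[i, j], j = 1 .. N), i = 1 .. N)
    end proc:

    # Example: m_{0,0} of a constant 4 x 4 image simply counts the pixels.
    img := Array(1 .. 4, 1 .. 4, fill = 1):
    moment(img, 0, 0, 4);    # returns 16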

These moments were introduced to pattern recognition in the seminal paper of Hu [7]. In that paper he considered planar images and introduced a set of 7 rotational invariants constructed from moments up to order 3. This set has had a rather checkered history (see [5] for a brief account of the history of Hu’s invariants).

The motivation for Hu’s paper was to construct combinations of moments that are unchanged under a rotation. We will consider any group G with a linear action on ℝn; that is the group acts on ℝn via a matrix A
$$\overline{\mathbf{x}} = A \mathbf{x}. $$
(2)
Clearly the standard actions of both the orthogonal O(2) and affine GL(2) groups on ℝ2 are linear (they are also linear in higher dimensions). By contrast, the projective action of GL(3) on ℝ2 is not. The choice of a linear action may seem to be excessively restrictive in that it does not allow translations. However as we will see in Sect. 5, translations may be invariantly normalised.

There have been a number of approaches in the literature to the construction of orthogonal (rotational) and affine moment invariants. The methods employed have ranged from the use of Fourier-Mellin transforms [11, 22] to algebraic invariants [9, 19, 20, 23], orthogonal polynomials [2, 12, 16, 17, 21], complex moments [1, 3, 4] and graphical methods [14]. In this paper, inspired by the theory of binary invariants [6, 13], we develop a purely matrix algebraic approach to obtain both orthogonal and affine invariants.

In Sect. 2 we will examine the transformation of the monomials x^a under the action (2). In the following section, the action on moments is discussed. In Sect. 4 moment covariants and invariants are introduced, and in Sect. 5 translations are invariantly normalised. Invariants for the orthogonal group (and its conformal generalisation) are constructed in Sect. 6. The affine case is examined in Sect. 7. In the final section we make some concluding remarks.

2 Monomials

Given a group G with a linear action on ℝ2 (that is, the group acts via a matrix A), we wish to compute the induced action on the monomials x^a y^b. Our analysis is valid in any dimension and so for this and the next section we work in ℝn. We begin with a definition of the Kronecker product [18, p. 1867] (see [10, Chap. 13] for a recent discussion).

Definition 1

Let R be an m×n matrix and S be an arbitrary matrix. The Kronecker (or direct or tensor) product of R and S is the matrix given by
$$R\otimes S \equiv \left[ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c}R_{11} S & R_{12} S & \cdots & R_{1n} S \\R_{21} S & R_{22} S & \cdots & R_{2n} S \\\vdots & \vdots & \ddots & \vdots \\R_{m1} S & R_{m2} S & \cdots & R_{mn} S \end{array} \right].$$

The importance of this construction lies in the following observation.

Proposition 1

Let A, B, X and Y be matrices such that AX and BY are defined. Then
$$(A\otimes B) (X \otimes Y) = AX \otimes BY.$$
In addition
$$A^T \otimes B^T = (A \otimes B)^T$$
and
$$A^{-1} \otimes B^{-1} = (A \otimes B)^{-1}$$
whenever the inverses exist.

Proof

We have
$$(A\otimes B) (X \otimes Y) = \left[ \begin{array}{c@{\quad}c@{\quad}c}A_{11} B & A_{12} B & \cdots \\A_{21} B & A_{22} B & \cdots \\\vdots & \vdots & \ddots \end{array}\right] \left[ \begin{array}{c@{\quad}c@{\quad}c}X_{11} Y & X_{12} Y & \cdots \\X_{21} Y & X_{22} Y & \cdots \\\vdots & \vdots & \ddots \end{array}\right]$$
and the (i,j) block of this product is $\sum_k A_{ik} X_{kj}\, BY = (AX)_{ij}\, BY$; that is, $(A\otimes B)(X\otimes Y) = AX\otimes BY$.
Next we observe
$$A^T\otimes B^T = \left[ \begin{array}{c@{\quad}c@{\quad}c}A_{11} B^T & A_{21} B^T & \cdots \\A_{12} B^T & A_{22} B^T & \cdots \\\vdots & \vdots & \ddots \end{array}\right] = (A \otimes B)^T.$$
Finally note that
$$\bigl(A^{-1} \otimes B^{-1}\bigr) (A \otimes B) = A^{-1}A \otimes B^{-1}B = I \otimes I = I.$$
 □

Thus there is an induced action on X ⊗ Y given by A ⊗ B.

Let x(1)=x be the monomials of degree 1 in ℝn. Define
$$\mathbf{x}_{(d)} = \bigotimes_d \mathbf{x}_{(1)}.$$
For example, in ℝ2,
$$\mathbf{x}_{(2)} = \mathbf{x}_{(1)}\otimes \mathbf{x}_{(1)} =\left[ \begin{array}{c}x^2\\xy \\yx \\y^2 \end{array}\right], \qquad \mathbf{x}_{(3)} =\left[ \begin{array}{c}x^3 \\x^2y \\xyx \\xy^2 \\\vdots \\y^3\end{array}\right]$$
are the (with some redundancy) monomials of degree 2 and 3.

Proposition 2

Let G act linearly on ℝn. Then the action of A ∈ G on the monomials of degree d is given by
$$A_d = \bigotimes_d A;$$
that is
$$\overline{\mathbf{x}}_{(d)} = A_d \mathbf{x}_{(d)}.$$

Proof

The action of A on the monomials of degree d is given by
$$\underbrace{A \mathbf{x}_{(1)}\otimes A \mathbf{x}_{(1)}\otimes \cdots \otimes A \mathbf{x}_{(1)}}_{d\ \mathrm{factors}} =\Bigl(\bigotimes_d A \Bigr) \Bigl(\bigotimes_d \mathbf{x}_{(1)}\Bigr)$$
by Proposition 1. □

In [15] higher order monomials are constructed as matrices from outer products of lower order monomials; that is order c+d monomials are initially given by \(\mathbf{x}_{(c)} \mathbf{x}_{(d)}^{T}\). However these authors rely on a specific variable ordering and normalisation of the monomials in order to obtain a transformation matrix that will be orthogonal if A is orthogonal.

The Kronecker product is implemented in both symbolic and numerical mathematical packages. For example, it is given by KroneckerProduct in both the LinearAlgebra package of Maple and in Mathematica, and by kron in Matlab.
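As a small illustration of this machinery, the following Maple sketch builds x_(2) and A_2 with KroneckerProduct and checks Proposition 2 for a generic 2×2 matrix; the symbolic entries a11, …, a22 are placeholders introduced here.

    with(LinearAlgebra):
    A  := Matrix([[a11, a12], [a21, a22]]):      # a generic linear action on R^2
    x1 := Matrix([[x], [y]]):                    # x_(1)
    x2 := KroneckerProduct(x1, x1):              # x_(2) = [x^2, xy, yx, y^2]^T
    A2 := KroneckerProduct(A, A):                # induced action A_2 on degree 2 monomials

    # Proposition 2: the monomials built from xbar = A . x coincide with A_2 . x_(2).
    xbar2 := KroneckerProduct(A . x1, A . x1):
    map(simplify, xbar2 - A2 . x2);              # the zero 4 x 1 matrix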

3 Transformations of Moments

Under a linear transformation A, the zeroth order moment will transform
$$\overline{m}_{0,0} = \det A \iint \rho(A \mathbf{x}) \ dx \wedge dy.$$
Ideally we wish that moments transform under a so-called multiplier representation [13]; that is
$$\overline{m}_{0,0} = (\det A)^q m_{0,0} $$
(3)
for some q. This will not happen unless ρ has a very specific form. In particular, if ρ is the characteristic function then we get the desired transformation law (with q=1). Of course ρ=1 is not a particularly interesting case for pattern recognition. However we obtain a moment that transforms under a multiplier representation if we consider the parts of the image where the value of ρ is in a chosen range.

Definition 2

For α>0, let
$$\rho^{(k)}(\mathbf{x}) = \begin{cases}1 & \hbox{if}\ k \leq \rho(\mathbf{x}) < k + \alpha, \\0 & \hbox{otherwise}. \end{cases}$$
The threshold moments based on ρ(k) are given by
$$m_{\mathbf{a}}^{(k)} = \int \mathbf{x}^{\mathbf{a}} \rho^{(k)}(\mathbf{x}) d{\varOmega} =\int_{D_k} \mathbf{x}^{\mathbf{a}} d {\varOmega}$$
where Dk is the support of ρ(k).
In practice, ρ will take only a discrete range of values. For each of these values (fixing α) we have a complete set of threshold moments. For example, if the image of interest is an 8-bit grey scale image and we choose α=1 then each of the 256 threshold moments
$$m_{0,0}^{(k)} =\iint_{D_k} dx \wedge dy$$
amounts to counting the number of pixels whose density is k. All of these moments will transform under (3) with q=1. Therefore any weighted sum of these threshold moments will also transform under (3) with q=1.
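For an 8-bit image with α=1 the threshold moments therefore reduce to sums over the pixels at a single grey level. The Maple sketch below makes this explicit, with the same hypothetical pixel-to-coordinate convention as before.

    # Threshold moment m^{(k)}_{a,b} for integer grey levels and alpha = 1:
    # only pixels whose value equals k contribute.
    thresholdMoment := proc(rho, a, b, k, N)
        local i, j;
        add(add(`if`(rho[i, j] = k, i^a * j^b, 0), j = 1 .. N), i = 1 .. N)
    end proc:
    # With a = b = 0 this counts the pixels at grey level k, as noted above.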

Remark

The level sets of ρ may not be closed curves. For example, consider the unit square with ρ(x,y)=x. The level sets of ρ are line segments x=k, 0≤y≤1. This is the reason why, from a theoretical point of view, we do not choose threshold moments based on ρ(x)=k.

We define a “duality”
$${}^\ast : \mathbf{x}^{\mathbf{a}} \leftrightarrow m_{\mathbf{a}}$$
between monomials of degree d and moments of order d where d=|a|. Let
$$\mathcal{M}_{(d)} = {}^\ast \mathbf{x}_{(d)}.$$
For example, in ℝ2,
$$\mathcal{M}_{(2)} = \left[ \begin{array}{l}m_{2,0} \\m_{1,1} \\m_{1,1} \\m_{0,2}\end{array}\right].$$
For convenience we define
$$\mathcal{M}_{(0)} = [m_{\mathbf{0}}],\qquad A_0 = [1].$$

Proposition 3

Under a linear transformationA, we have
$$\overline{\mathcal{M}}_{(d)} = (\det A) A_d \mathcal{M}_{(d)}$$
where the components of \(\mathcal{M}_{(d)}\) are threshold moments.

The proof of this proposition follows immediately from Proposition 2 and the discussion above. In the light of this result, all moments discussed in the rest of this paper will be understood to be threshold moments.

Remark

In our definition of moments we used the volume form rather than the volume “element” $dx_1\,dx_2\cdots dx_n$; that is, we used the signed volume rather than the unsigned volume. If the unsigned volume is used then Proposition 3 becomes
$$\overline{\mathcal{M}}_{(d)} = |\mathrm{det}\, A| A_d \mathcal{M}_{(d)}.$$
While this choice makes no difference for the special orthogonal group SO(n) (that is, the group of rotations) or the special affine group SL(n), it does make a difference for the full orthogonal group O(n) (rotations plus reflections) and the general affine group GL(n).

4 Moment Invariants and Covariants

A moment invariant is a combination of moments that is unchanged under a group transformation. Formally we have

Definition 3

A (rational) function ι of moments up to order d is said to be a (rational) invariant of order d under a group action G if
$$\iota( \overline{\mathcal{M}}_{(0)},\ldots, \overline{\mathcal{M}}_{(d)}) =(\det A)^q \iota(\mathcal{M}_{(0)},\ldots, \mathcal{M}_{(d)}) $$
(4)
for all A ∈ G. q is called the weight of the invariant ι. In particular if q=0 then ι is called an absolute invariant. If q≠0 then ι is called a relative invariant.
Clearly products of invariants are also invariants and sums of invariants of the same weight are invariants. Therefore an absolute invariant may be constructed from two relative invariants ι1, ι2 of weights q1 and q2 respectively by
$$\iota_1^{q_2} \iota_2^{-q_1}.$$
For this reason, we restrict our attention to rational invariants.
The “volume” moment m0 is trivially an invariant of weight 1. For the orthogonal group, a less trivial example is given by \(\mathcal{M}_{(d)}^{T} \mathcal{M}_{(d)}\) since
$$\overline{\mathcal{M}}_{(d)}^T \overline{\mathcal{M}}_{(d)} =(\det A)^2 \mathcal{M}_{(d)}^T A_d^T A_d \mathcal{M}_{(d)} =\mathcal{M}_{(d)}^T \mathcal{M}_{(d)}$$
(it follows from Proposition 1 that, if A is orthogonal then A_d is also orthogonal, and for the orthogonal group (det A)^2=1). This absolute invariant will be a quadratic polynomial in the order d moments.
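A quick symbolic check of this invariance for d=2 and a rotation is easily carried out in Maple; the sketch below treats the order 2 moments m20, m11, m02 as free symbols.

    with(LinearAlgebra):
    R  := Matrix([[cos(t), -sin(t)], [sin(t), cos(t)]]):   # A in SO(2)
    R2 := KroneckerProduct(R, R):                          # A_2
    M2 := Matrix([[m20], [m11], [m11], [m02]]):            # M_(2)
    M2bar := Determinant(R) * (R2 . M2):                   # Proposition 3
    simplify(expand((Transpose(M2bar) . M2bar - Transpose(M2) . M2)[1, 1]));   # 0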

We extend this definition to allow dependence on the variable x (see, for example, [6, 13]).

Definition 4

A function η of x and moments up to order d is said to be a covariant of order d and weight q under a group G if
$$\eta(\overline{\mathbf{x}}, \overline{\mathcal{M}}_{(0)},\ldots,\overline{\mathcal{M}}_{(d)}) = (\det A)^q \eta(\mathbf{x},\mathcal{M}_{(0)},\ldots, \mathcal{M}_{(d)})$$
for all A ∈ G.

For the orthogonal group, the simplest example of a covariant is x^T x (in other words the square of the Euclidean length). Since both x_(d) and \(\mathcal{M}_{(d)}\) transform under A_d, \(\mathbf{x}_{(d)}^{T} \mathcal{M}_{(d)}\) will be a covariant of order d.

Remark

If the unsigned volume element is used then |det A|^q replaces (det A)^q in Definitions 3 and 4.

Whilst invariants are of primary interest in pattern recognition, covariants allow us to generate the invariants. The covariants will be the building blocks from which we obtain the invariants.

Let
$$\partial_{(1)} = \partial = \left[ \begin{array}{c}\partial_{x_1} \\\partial_{x_2} \\\vdots \\\partial_{x_n}\end{array}\right]$$
and
$$\partial_{(d)} = \bigotimes_d \partial_{(1)}$$
with
$$\partial_{(0)} = [1]$$
(that is, $\partial_{(0)} P=P$). $\partial_{(d)}$ is a vector of dth order partial derivatives. Under a linear transformation A, $\partial$ will transform
$$\overline{\partial} = A^{-T} \partial $$
and so, by Proposition 1,
$$\overline{\partial}_{(d)} = A_d^{-T} \partial_{(d)}. $$
(5)

Our strategy will be to construct a system of covariants and then use the derivative operator (d) to generate the invariants. It is clear that invariants that are not “independent” will yield no new information. Specifically

Definition 5

A set of invariants {ιk:k=1,…,m} is said to be (functionally) dependent if there exists a non-trivial function F such that
$$F(\iota_1, \iota_2,\ldots, \iota_m) = 0. $$
(6)
Conversely, if no such function exists then the ιk are said to be independent.

If the invariants ιk are dependent then, in principle at least, (6) may be solved to find one of the invariants in terms of the remaining m−1 invariants. Following [5], we define

Definition 6

A basis for invariants of order d or less is a set of independent invariants {ιk:k=1,…,m} such that for any invariant ι of order d or less, there exists a non-trivial function F such that
$$F(\iota, \iota_1, \iota_2,\ldots, \iota_m) = 0.$$
Thus a basis is a complete set of functionally independent invariants.

This definition does not require that a basis separates the orbits; that is, that the intersection of the level sets of all the basis elements is a single orbit of the group action. For example, an element ι in a separating basis could be replaced by ι^2. The elements would remain functionally independent and thus form a basis. However the new basis may not separate the orbits since ι^2=c^2 only implies ι=±c. The issue here is that there is no restriction on the function F. For the separation of the orbits, a basis must generate all rational invariants rationally. Thus

Definition 7

A rational basis for invariants of order d is a basis such that for any rational invariant ι of order d or less, there exists a non-trivial rational function F such that
$$F(\iota, \iota_1, \iota_2,\ldots, \iota_m) = 0.$$

The number of elements in a basis (rational or otherwise) will depend on d and the group G.

5 Translations

If the group G includes translations (for example, the Euclidean group SE(2) of rotations and translations on the plane) then the linear action (2) is replaced by an affine action
$$\overline{\mathbf{x}} = A \mathbf{x} + \boldsymbol{\xi}. $$
This action may be “converted” into a linear action on ℝn+1 via
$$\left[ \begin{array}{c}\overline{\mathbf{x}} \\1 \end{array}\right] = \left[ \begin{array}{c@{\quad}c}A & \boldsymbol{\xi} \\\mathbf{0}^T & 1 \end{array}\right] \left[ \begin{array}{c}\mathbf{x} \\1 \end{array}\right] $$
(7)
The analysis of the previous two sections may now be applied to this action. However x(d) will be a vector of mixed degree. For example, for an affine planar action
$$\mathbf{x}_{(1)} = \left[ \begin{array}{c}x \\y \\1 \end{array}\right],\qquad \mathbf{x}_{(2)} = \left[ \begin{array}{c}x^2 \\xy \\x \\\vdots \\1 \end{array}\right].$$
Similarly \(\mathcal{M}_{(d)}\) will be a vector of moments of mixed order. Therefore the transformation laws will no longer be homogeneous in the degree of the monomial or order of the moment.
The zeroth order moment, m0, will still be an invariant of weight 1 (the volume form is invariant under translations). For a pure translation (that is A=I) the first order moments transform
$$\overline{m}_{\mathbf{e}_i} = m_{\mathbf{e}_i} + \xi_i m_{\mathbf{0}}$$
where ei is the multi-index which is 1 in the ith position and 0 everywhere else and ξi is the ith component of ξ. Choosing \(\overline{m}_{\mathbf{e}_{i}}=0\); that is,
$$\xi_i = -\frac{m_{\mathbf{e}_i}}{m_{\mathbf{0}}} $$
(8)
will (invariantly) normalise the translation component of the group. The remaining part of the group will yield a linear action on ℝn. However, as we shall see, there is a cost in choosing this normalisation. In the literature the use of this normalisation is widespread and the resultant moments are called central moments [5].
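Carrying out the normalisation (8) is straightforward; the toy Maple sketch below uses a hypothetical object made of four unit point masses (the list pts is invented for illustration) and checks that the translated first order moments vanish.

    # Toy object: unit masses at four points; mom(a, b) is the raw moment m_{a,b}.
    pts := [[1, 2], [3, 5], [4, 1], [2, 2]]:
    mom := (a, b) -> add(p[1]^a * p[2]^b, p in pts):

    # Equation (8): the translation that kills the first order moments.
    xi1 := -mom(1, 0) / mom(0, 0):
    xi2 := -mom(0, 1) / mom(0, 0):

    # Central moments: recompute after translating by (xi1, xi2).
    mu := (a, b) -> add((p[1] + xi1)^a * (p[2] + xi2)^b, p in pts):
    mu(1, 0), mu(0, 1);                       # both evaluate to 0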

6 Moment Invariants of the (Conformal) Orthogonal Group

For the orthogonal group, invariants have either weight 0 or 1 (the so-called pseudoinvariants [5] that change sign under a reflection). If we restrict the group to the special orthogonal group (that is, rotations only) then all invariants are absolute. However, if we choose the unsigned volume element then all orthogonal invariants will be absolute. In this case the orthogonal case is identical to the special orthogonal case. On the other hand, if we add dilations, that is (uniform) scaling, to the orthogonal group we obtain the conformal orthogonal group also known as the similarity group. In this case invariants may have any weight.

A conformal orthogonal matrix has the form A=λB where B is orthogonal. Thus
$$A^T A = \lambda^2 I = (\det A)^{2/n} I$$
in n dimensions.

Lemma 1

Let A=λB be a conformal orthogonal matrix in ℝn. Then A_d = λ^d B_d and
$$A_d^T A_d =\lambda^{2d} I = (\det A)^{2d/n} I.$$
Let
$$\eta_0 = \mathbf{x}_{(1)}^T \mathbf{x}_{(1)}.$$
η0 is a covariant of weight 2/n for the conformal orthogonal group (and of weight 0 for the orthogonal group). For d>0, let
$$\eta_d = \mathbf{x}_{(d)}^T \mathcal{M}_{(d)}.$$
We have
$$\overline{\eta_d} = (\det A) \mathbf{x}_{(d)}^T A_d^T A_d \mathcal{M}_{(d)}$$
and so, by Lemma 1, ηd is a covariant of the conformal orthogonal group of weight 1+2d/n (and 1 in the orthogonal case).
For n=2, ηd has conformal weight d+1 and
$$\eta_d = \sum_{i=0}^{d} \binom{d}{i}\, x^{d-i} y^{i}\, m_{d-i,i}.$$

Inspired by the theory of binary forms [6, 13] we have

Definition 8

The rth transvectant of two (conformal) orthogonal covariants P and Q is given by
$${\varLambda}^r(P, Q) = (\partial_{(r)} P)^T \partial_{(r)} Q.$$

Note that Λ0(P,Q)=PQ and Λr(P,Q)=Λr(Q,P).
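In the planar case the transvectant can be evaluated without forming the Kronecker vectors explicitly: since partial derivatives commute, each mixed partial occurs in ∂_(r) with a binomial multiplicity. The Maple sketch below uses this collapse for n=2; it is my own reduction offered as a sketch, not the code of the Appendix.

    # r-th transvectant of Definition 8 for n = 2.
    transvectant := proc(P, Q, r)
        local i;
        if r = 0 then return P * Q end if;
        add(binomial(r, i)
            * diff(P, x $ (r - i), y $ i)
            * diff(Q, x $ (r - i), y $ i), i = 0 .. r)
    end proc:

    # eta_2 = x_(2)^T M_(2) written out for n = 2:
    eta2 := x^2 * m[2, 0] + 2 * x * y * m[1, 1] + y^2 * m[0, 2]:
    transvectant(eta2, eta2, 2);   # 4*m[2,0]^2 + 8*m[1,1]^2 + 4*m[0,2]^2, an orthogonal invariant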

Proposition 4

Let P and Q be conformal orthogonal covariants of weight p and q respectively. Then Λ^r(P,Q) is a covariant of weight p+q−2r/n. For the orthogonal group, its weight is (p+q) mod 2.

Proof

Under a linear transformation A, Λr(P,Q) will transform
$$(\det A)^{p+q}(\partial_{(r)} P)^T A_r^{-1} A_r^{-T} \partial_{(r)} Q.$$
For the orthogonal case, Proposition 1 implies that \(A_{r}^{-1}\) is orthogonal and so the result follows. For the conformal case, Lemma 1 gives
$$A_r^{-1} A_r^{-T} = (\det A)^{-2r/n} I$$
and so the transvectant will be a covariant of weight p+q−2r/n. □

Lemma 2

Let ζ be a moment. Then
$$\frac{\partial}{\partial \zeta} {\varLambda}^{r}(P, Q) ={\varLambda}^{r} \biggl(\frac{\partial P}{\partial \zeta}, Q\biggr) +{\varLambda}^{r}\biggl(P, \frac{\partial Q}{\partial \zeta}\biggr).$$

Proof

Since the transvectant is a matrix product and derivatives commute the result follows immediately. □

The transvectant allows invariants to be constructed from covariants. If P and Q have degree r (in the variables x_(1)) then Λ^r(P,Q) will be an invariant.
[the explicit examples for n=3 are not reproduced here]
Our strategy will be to choose invariants that depend on order d moments and, if possible, on only one other order. This choice will facilitate the proof of independence. Moreover we wish to construct a complete set of invariants in the n=2 case.
There is considerable freedom in making this choice. For example one could consider
[the family of invariants (9) is not reproduced here]
(9)
However these invariants grow in degree (with respect to the moments) alarmingly, particularly if i and d are relatively prime; in that case the invariant will be a polynomial of degree 2di. Instead we make the following choice, which has the advantage that the degree of the elements does not grow and is, in fact, bounded by 3; this choice is also motivated by the strategy of using the order 1 moments to normalise translations.

Proposition 5

Let G be the (conformal) orthogonal group and let
$$\eta_d = \mathbf{x}_{(d)}^T \mathcal{M}_{(d)}.$$
For d=2q, let
[the definitions of σ_{id} for i ≠ 1 in the case d even are not reproduced here]
with
$$s =2 \biggl\lfloor \frac{q+1}{2} \biggr\rfloor - 1$$
and
$$\sigma_{1d} = \begin{cases}{\varLambda}^{2}(\eta_1^2, \eta_2), & d=2, \\{\varLambda}^{d}(\eta_0^{q-2} \eta_2^2, \eta_d), & d > 2.\end{cases}$$
For d=2q+1, let
[the definitions of σ_{id} for i ≠ 1 in the case d odd are not reproduced here]
and
$$\sigma_{1d} = \begin{cases}{\varLambda}^{2}(\eta_2, {\varLambda}^{2}(\eta_0, \eta_3^2)), & d=3, \\{\varLambda}^{d}(\eta_0^{(d-3)/2} \eta_3, {\varLambda}^{1}(\eta_2, \eta_d)), & d>3.\end{cases}$$
Finally let
$${\varSigma}_d = \bigl\{ \sigma_{0d}, \sigma_{1d}, \sigma_{2d},\ldots, \sigma_{dd}\bigr\}.$$
For d>1, Σd is a functionally independent set of invariants.

Proof

Direct computation using Maple shows that Σd is functionally independent for d≤3. Let d>3 and suppose Σd is not functionally independent. Thus there exists F such that
$$F(\sigma_{0d}, \sigma_{1d}, \sigma_{2d},\ldots, \sigma_{dd}) = 0. $$
(10)
Let ζ be a moment of order i where 3 < i < d. Note that
$$\frac{\partial F}{\partial \zeta} = \frac{\partial F}{\partial \sigma_{id}}\frac{\partial \sigma_{id}}{\partial \zeta} = 0.$$
Therefore
$$\frac{\partial F}{\partial \sigma_{id}} = 0.$$
Thus F=F(σ0d,σ1d,σ2d,σ3d,σdd).
Note that
$$\frac{\partial \eta_d}{\partial m_{d,0}} = x^d,\qquad \frac{\partial \eta_d}{\partial m_{0,d}} = y^d$$
and so
$$\partial_{(d)} \frac{\partial \eta_d}{\partial m_{d,0}} =d! \mathbf{e}_1,\qquad \partial_{(d)} \frac{\partial \eta_d}{\partial m_{0,d}} = d! \mathbf{e}_{-1}$$
where e1 is the unit vector with 1 in the first position and e−1 is the unit vector with 1 in the last position.
For d even, since only σ3d depends on order 3 moments, F cannot depend on σ3d. By Lemma 2, we have
$$\frac{\partial \sigma_{0d}}{\partial m_{d,0}} = d! \bigl(\partial_{(d)} \eta_0^{d/2}\bigr)^T \mathbf{e}_1 =(d!)^2 = \frac{\partial \sigma_{0d}}{\partial m_{0,d}}.$$
Similarly we have
$$\frac{\partial \sigma_{dd}}{\partial m_{d,0}} = 2(d!)^2 m_{d,0},\qquad \frac{\partial \sigma_{dd}}{\partial m_{0,d}} = 2(d!)^2 m_{0,d}$$
and
$$\frac{\partial \sigma_{kd}}{\partial m_{d,0}} = (d!)^2 m_{2,0}^k,\qquad \frac{\partial \sigma_{kd}}{\partial m_{0,d}} = (d!)^2 m_{0,2}^k$$
for k=1,2. Also note that
$$\frac{\partial \sigma_{2d}}{\partial m_{2,0}} = \bigl(\partial_{(d)} \eta_0^{d/2-1} x^2\bigr)^T(\partial_{(d)} \eta_d) = (d!)^2 (m_{d,0} + \cdots)$$
where ⋯ indicates terms independent of md,0. Similar expressions hold for the remaining derivatives in this block [the explicit forms are not reproduced here]. Therefore the block in the Jacobian generated by these derivatives has the form (after removing the numerical factor of (d!)^2)
$$\left[ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c}0 & 0& 1 & 1 \\2m_{2,0} m_{d,0} + \cdots & 2m_{0,2} m_{0,d} + \cdots & m_{2,0}^2 & m_{0,2}^2 \\m_{d,0} + \cdots & m_{0,d} + \cdots & m_{2,0} & m_{0,2} \\0 & 0 & 2m_{d,0} & 2m_{0,d} \end{array}\right].$$
This matrix has rank 4 and so σ0d,σ1d,σ2d,σdd are functionally independent. Therefore F=0 and Σd is functionally independent for d even.
For d odd, differentiating (10) with respect to m2,0 and m0,2 yields the system
$$\left[ \begin{array}{c@{\quad}c}\frac{\partial \sigma_{1d}}{\partial m_{2,0}} & \frac{\partial \sigma_{2d}}{\partial m_{2,0}} \\[5pt]\frac{\partial \sigma_{1d}}{\partial m_{0,2}} & \frac{\partial \sigma_{2d}}{\partial m_{0,2}}\end{array} \right] \left[ \begin{array}{c}\frac{\partial F}{\partial \sigma_{1d}} \\[5pt]\frac{\partial F}{\partial \sigma_{2d}}\end{array}\right] = \left[ \begin{array}{c}0 \\0 \end{array}\right]. $$
(11)
Observe that
$$\frac{\partial \sigma_{2d}}{\partial m_{2,0}} ={\varLambda}^{d}(x^2 \eta_0^{(d-5)/2} \eta_3, \eta_d)$$
and so will not have any term that depends on m1,d−1 whereas
$$\frac{\partial \sigma_{1d}}{\partial m_{2,0}} = {\varLambda}^{d}\bigl(\eta_0^{(d-3)/2}\eta_3, {\varLambda}^{1}(x^2, \eta_d)\bigr)$$
will have a term that depends on m1,d−1. Similarly \(\frac{\partial \sigma_{1d}}{\partial m_{0,2}}\) will have a term that depends on md−1,1 whereas \(\frac{\partial \sigma_{2d}}{\partial m_{0,2}}\) will not. Therefore the coefficient matrix is non-singular and so
$$\frac{\partial F}{\partial \sigma_{1d}} = \frac{\partial F}{\partial \sigma_{2d}} = 0$$
is the only solution to (11). This clearly implies that F cannot depend on σ3d. Finally a similar argument shows that
$$\frac{\partial F}{\partial \sigma_{0d}} = \frac{\partial F}{\partial \sigma_{dd}} = 0$$
and so F=0. Therefore Σd is functionally independent. □
As we shall see shortly,
$$\mathcal{E}_d = \bigcup_j {\varSigma}_j$$
is also functionally independent. Moreover, in the case n=2, it is a complete set. For convenience, let
$${\varSigma}_0 = \{m_{\mathbf{0}}\},\qquad {\varSigma}_1 = \{\sigma_{01}\}$$
(remember that m0 is an invariant of weight 1).

Lemma 3

Let
$$\sigma = G(\sigma_{0d}, \sigma_{1d},\ldots, \sigma_{dd})$$
and suppose that
$$\frac{\partial \sigma}{\partial\zeta} = 0$$
for all order d moments ζ. Then σ=0.

Proof

Let ξ be an order i moment with i≠2,3,d. Note that
$$0 = \frac{\partial^{2} \sigma}{\partial \xi \partial \zeta} =\frac{\partial G}{\partial \sigma_{id}} \frac{\partial ^{2} \sigma_{id}}{\partial \xi \partial \zeta}$$
and so
$$\frac{\partial G}{\partial \sigma_{id}} = 0.$$
Next observe that, differentiating with respect to m2,0, md,0 and with respect to m0,2, m0,d yields the set of equations
$$\left[ \begin{array}{c@{\quad}c}\frac{\partial^{2} \sigma_{1d}}{\partial m_{2,0} \partial m_{d,0}}& \frac{\partial^{2} \sigma_{2d}}{\partial m_{2,0} \partial m_{d,0}} \\[5pt]\frac{\partial^{2} \sigma_{1d}}{\partial m_{0,2} \partial m_{0,d}}& \frac{\partial^{2} \sigma_{2d}}{\partial m_{0,2} \partial m_{0,d}} \end{array}\right]\left[ \begin{array}{c}\frac{\partial G}{\partial \sigma_{1d}} \\[5pt]\frac{\partial G}{\partial \sigma_{2d}}\end{array}\right] =\left[ \begin{array}{c}0 \\0 \end{array}\right].$$
For d even, the coefficient matrix is
$$(d!)^2 \left[ \begin{array}{c@{\quad}c}2m_{2,0} & 1 \\2m_{0,2} & 1 \end{array}\right]$$
which is non-singular. For d odd, note that
$$\frac{\partial^{2} \sigma}{\partial m_{2,0} \partial m_{1,d-1}} =\frac{\partial G}{\partial \sigma_{1d}}\frac{\partial^{2} \sigma_{1d}}{\partial m_{2,0} \partial m_{1,d-1}} = 0.$$
The proof of Proposition 5 shows that the second factor is non-zero. Thus, in both cases, we obtain
$$\frac{\partial G}{\partial \sigma_{1d}} = \frac{\partial G}{\partial \sigma_{2d}} = 0.$$
This immediately implies that
$$\frac{\partial G}{\partial \sigma_{3d}} = 0.$$
A similar argument shows that
$$\frac{\partial G}{\partial \sigma_{0d}} = \frac{\partial G}{\partial \sigma_{dd}} = 0.$$
Thus σ=0 as required. □

Theorem 1

The set
$$\mathcal{E}_d =\bigcup_{j=1}^d {\varSigma}_j$$
is functionally independent.

Proof

As noted before \(\mathcal{E}_{3}\) is functionally independent. Assume that \(\mathcal{E}_{d-1}\) is functionally independent. Suppose that \(\mathcal{E}_{d}\) is not functionally independent. Thus there exists a non-trivial invariant
$$\sigma = G(\sigma_{0d}, \sigma_{1d}, \sigma_{2d},\ldots, \sigma_{dd})$$
such that
$$\mathcal{E}'= \mathcal{E}_{d-1}\cup \{\sigma\}$$
is functionally dependent. Consequently a relationship F=0 exists between the elements of \(\mathcal{E}'\). Now for every order d moment ζ
$$0 = \frac{\partial F}{\partial \sigma} \frac{\partial \sigma}{\partial\zeta}.$$
Therefore either F does not depend on σ or, by Lemma 3, σ=0. In the first case F=0 is a non-trivial relationship between the elements of \(\mathcal{E}_{d-1}\), contradicting the inductive hypothesis; in the second case σ is trivial, again a contradiction. Thus \(\mathcal{E}_{d}\) is functionally independent. □
For the special orthogonal group, σij is an absolute invariant. For the orthogonal group, let
$$\sigma_{ij}' = \begin{cases}\sigma_{ij} & \hbox{if the weight of $\sigma_{ij}$ is $0$}, \\\sigma_{ij}^2 & \hbox{if the weight of $\sigma_{ij}$ is $1$}.\end{cases}$$
For the conformal group, let
$$\sigma_{ij}^* = m_{\mathbf{0}}^{-w_{ij}} \sigma_{ij}$$
where wij is the weight of σij. Clearly \(\sigma_{ij}'\) and \(\sigma_{ij}^{*}\) are absolute invariants with respect to their group actions. Moreover, let
$$\mathcal{E}'_d = \{\sigma_{ij}' : \sigma_{ij}\in \mathcal{E}_d\},\qquad \mathcal{E}^*_d = \{\sigma_{ij}^* : \sigma_{ij}\in \mathcal{E}_d\}.$$
These normalisations do not destroy the independence of the invariants and so \(\mathcal{E}'_{d}\) and \(\mathcal{E}^{*}_{d}\) are independent sets of absolute invariants.

We now restrict our attention to the n=2 case. The dimension of O(2) is 1 (and its conformal extension has dimension 2). There are d+1 moments of order d. Thus there are \(\frac{1}{2}(d+1)(d+2)\) moments of order up to d. Hence the orbits of the action of the orthogonal group on these moments have codimension \(\frac{1}{2}(d+1)(d+2)-1\). In the conformal case, the orbits have codimension \(\frac{1}{2}(d+1)(d+2)-2\). The orbits lie in the level sets of the absolute invariants and so their codimension gives the number of independent absolute invariants. In particular this shows that the set given in Proposition 5 is a complete set.

Theorem 2

Let G act on moments of order up to d.
  1. For G=SO(2), \(\{m_{0,0}\}\cup \mathcal{E}_{d}\) is a basis for the absolute invariants.

  2. For G=O(2), \(\{m_{0,0}^{2}\}\cup \mathcal{E}'_{d}\) is a basis for the absolute invariants.

  3. For the conformal orthogonal group, \(\mathcal{E}_{d}^{*}\) is a basis for the absolute invariants.

Proof

The independence of these sets has already been established. \(\mathcal{E}_{d}, \mathcal{E}'_{d}\) and \(\mathcal{E}^{*}_{d}\) each have
$$1 + 3 + 4 + \cdots + (d+1) = \frac{1}{2}(d+1)(d+2)-2$$
elements. Thus in each case, the given set has the same number of elements as the codimension of the orbits. The result therefore follows. □

If translations are normalised via (8) then the order 1 moments are zero and σ01=σ12=0. \(\mathcal{E}_{d}\) loses 2 elements. However orbit codimension is also reduced by 2. Therefore \(\mathcal{E}_{d}\) (with σ01 and σ12 removed) remains a basis.

These invariants are easily computed (only matrix algebra is required). Maple code for this computation is given in the Appendix. The elements of \(\mathcal{E}_{4}\) are (with common numerical factors removed)
[the explicit expressions for the elements of \(\mathcal{E}_{4}\) are not reproduced here]
By contrast, the i=3,d=4 case of (9) has 119 terms (the equivalent basis element in this choice has 12 terms).

Computations using the Maple package implementing the Hubert-Kogan algorithm [8] show that this is not a rational basis. For d=3 (with translations normalised) there are two elements in the rational basis which cannot be expressed rationally in terms of the above basis; their squares, however, do have rational expressions in this basis. Moreover the rational basis generated by the Hubert-Kogan algorithm has a linear element, a quadratic element and two elements each of degree 4 and 5, with up to 41 terms in each element. In contrast, the above basis \(\mathcal{E}_{3}\) (with translations normalised) has a linear element, 3 quadratic elements and two cubic elements, with up to 15 terms in each element. By comparison, the invariants derived by Hu [7] have degree at most 4 and up to 29 terms. It is well known that the Hu invariants are not functionally independent. The basis given by Flusser et al. [5] has degree at most 4 and each element has up to 16 terms.

An advantage of the basis \(\mathcal{E}_{d}\) is the relatively slow growth of the maximum number of terms in the invariants. The degree of the basis elements in the moments always remains bounded by 3. The maximum number of terms in a basis element of order d is

Order            5    6    7    8    9    10
Max # of terms  34   30   50   60   66   105

7 Affine Actions on the Plane

In this section we consider the linear actions of the affine group GL(2) and the special affine group SL(2) on ℝ2. Let
$$J = \left [ \begin{array}{c@{\quad}c}0 & 1 \\-1 & 0 \end{array}\right]$$
a so-called symplectic matrix with J^2=−I. A ∈ GL(2) is characterised [18, p. 2925] by
$$A^T J A =(\det A) J. $$
(12)
As before, let
$$J_d = \bigotimes_d J.$$
It follows from Proposition 1 that
$$J_d^T = (-1)^d J_d,\qquad A_d^T J_d A_d = (\det A)^d J_d$$
for any A ∈ GL(2). We immediately see that
$$\nu_d = \mathbf{x}_{(d)}^T J_d \mathcal{M}_{(d)} =\sum_{i=0}^d (-1)^i \binom{d}{i} x^{d-i} y^i m_{i, d-i}$$
are covariants of the GL(2) action. Since J is skew-symmetric, xTJx=0 and so we have d covariants in the moments up to order d.
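As a small check (with the order 2 moments as free symbols), the Maple fragment below builds ν_2 directly from the matrix definition and recovers the binomial expression above.

    with(LinearAlgebra):
    J  := Matrix([[0, 1], [-1, 0]]):
    x1 := Matrix([[x], [y]]):
    x2 := KroneckerProduct(x1, x1):                           # x_(2)
    J2 := KroneckerProduct(J, J):                             # J_2
    M2 := Matrix([[m[2, 0]], [m[1, 1]], [m[1, 1]], [m[0, 2]]]):
    nu2 := expand((Transpose(x2) . J2 . M2)[1, 1]);
    # nu2 = m[0,2]*x^2 - 2*m[1,1]*x*y + m[2,0]*y^2, matching the formula for nu_d above.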

The definition of the orthogonal transvectant is modified to give

Definition 9

The dth transvectant of two GL(2) covariants P and Q is given by
$${\varLambda}^{d}(P, Q) = (\partial_{(d)} P)^T J_d \partial_{(d)} Q.$$
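For n=2 this transvectant also collapses to a binomial sum, with the sign and the flip of the derivative indices coming from J_d. The Maple sketch below is again my own reduction rather than the code of the Appendix (it assumes r ≥ 1).

    # r-th affine transvectant of Definition 9 for n = 2 (r >= 1 assumed).
    affineTransvectant := proc(P, Q, r)
        local i;
        add((-1)^i * binomial(r, i)
            * diff(P, x $ (r - i), y $ i)
            * diff(Q, x $ i, y $ (r - i)), i = 0 .. r)
    end proc:

    # Example: Lambda^2(nu_2, nu_2) for nu_2 = x^2*m[0,2] - 2*x*y*m[1,1] + y^2*m[2,0].
    nu2 := x^2 * m[0, 2] - 2 * x * y * m[1, 1] + y^2 * m[2, 0]:
    affineTransvectant(nu2, nu2, 2);   # = 8*m[2,0]*m[0,2] - 8*m[1,1]^2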

Proposition 6

Let P and Q be GL(2) covariants of weight p and q respectively. Then Λ^d(P,Q) is a covariant of weight p+q−d. Furthermore
  1. Λ^d(P,Q) = (−1)^d Λ^d(Q,P),

  2. Λ^{2d+1}(P,P) = 0.
The proof of the first part of this proposition is the same argument as used in the proof of Proposition 4. The second part follows immediately from the skew-symmetry of Jd for d odd. The skew symmetry of odd order transvectants implies that σdd=0, for d odd, if we choose to define the invariants in the same manner as the orthogonal case. For convenience, let
$$\mathcal{D}^{r}(P) = {\varLambda}^{r}(P, P).$$
Clearly \(\mathcal{D}^{r}(\nu_{i})\) will be non-zero and quadratic in the order i moments (and of degree 2(i−r) in x and y) if r is even and r ≤ i.
Let d=2q≥4 be even. Define
$$\theta_{id} = \begin{cases}{\varLambda}^4(\nu_4, \mathcal{D}^{d-2}(\nu_d)), & i = 0, \\{\varLambda}^4(\nu_2^2, \mathcal{D}^{d-2}(\nu_d)), & i = 1, \\{\varLambda}^d(\nu_i \nu_{d-i}, \nu_d), & i=2, 3,\ldots, q, \\{\varLambda}^{d}(\nu_2 \mathcal{D}^2(\nu_{i}), \nu_d), & i = q+1, \\{\varLambda}^d(\mathcal{D}^{i-q}(\nu_i), \nu_d), & i = q+2, q+4,\ldots, d-1, \\{\varLambda}^d({\varLambda}^1(\nu_i, \nu_{d-i+2}), \nu_d), & i=q+3, q+5,\ldots, d-1, \\\mathcal{D}^d(\nu_d), & i=d.\end{cases}$$
For d=2q+1≥5, define
$$\theta_{id} = \begin{cases}{\varLambda}^2(\nu_2, \mathcal{D}^{d-1}(\nu_d)), & i = 0, \\{\varLambda}^2(\mathcal{D}^2(\nu_3), \mathcal{D}^{d-1}(\nu_d)), & i = 1, \\{\varLambda}^d(\nu_i \nu_{d-i}, \nu_d), & i=2, 3,\ldots, q, \\{\varLambda}^d({\varLambda}^2(\nu_2 \nu_q, \nu_{q+1}), \nu_d), & i = q+1, \\{\varLambda}^d({\varLambda}^1(\nu_i, \nu_{d-i+2}), \nu_d), & i=q+2,\ldots, d-1, \\\mathcal{D}^2(\mathcal{D}^{d-1}(\nu_d)), & i=d.\end{cases}$$
Finally, let
[the definitions for the low order cases d=2 and d=3 are not reproduced here]

Lemma 4

Ford>2, let
$${\varTheta}_d = \{\theta_{0d}, \theta_{1d},\ldots, \theta_{dd}\}$$
with
$${\varTheta}_2 = \{\theta_{12}, \theta_{22}\}.$$
Θd is a functionally independent set of GL(2) invariants.

Proof

We must first show that the θid are not constants; there is a potential that cancellation may occur. Note that, since J2s is symmetric and non-degenerate, Λ2s(P,P) is not zero for any covariant P of degree ≥2s. Furthermore Λj(P,Q) will be non-zero if P depends on order i moments but Q does not (and both P and Q have degrees ≥j). Thus θid≠0.

Direct calculation using Maple shows that \(\bigcup_{i=2}^{6}{\varTheta}_{i}\) is functionally independent. Consider the case d=2q>6, even. Assume, on the contrary, that there exists a non-trivial F such that
$$F(\theta_{0d}, \theta_{1d},\ldots, \theta_{dd}) = 0.$$
Since order d−1 moments only occur in θd−1,d, F cannot depend on θd−1,d. Next consider the terms depending on moments of order d−2; that is, θ2d and θd−2,d. Differentiating with respect to md−2,0 and m0,d−2 we obtain
$$\left[ \begin{array}{c@{\quad}c}\frac{\partial \theta_{2d}}{\partial m_{d-2,0}} & \frac{\partial \theta_{d-2,d}}{\partial m_{d-2,0}} \\[5pt]\frac{\partial \theta_{2d}}{\partial m_{0,d-2}} & \frac{\partial \theta_{d-2,d}}{\partial m_{0,d-2}}\end{array}\right] \left[ \begin{array}{c}\frac{\partial F}{\partial \theta_{2d}} \\[5pt]\frac{\partial F}{\partial \theta_{d-2,d}}\end{array}\right] =\left[ \begin{array}{c}0 \\0 \end{array}\right].$$
The coefficient matrix is either
$$\left[ \begin{array}{c@{\quad}c}{\varLambda}^d(x^{d-2} \nu_2, \nu_d) & 2{\varLambda}^d({\varLambda}^2(x^{d-2}, \nu_{d-2}), \nu_d) \\{\varLambda}^d(y^{d-2} \nu_2, \nu_d) & 2{\varLambda}^d({\varLambda}^2(y^{d-2}, \nu_{d-2}), \nu_d)\end{array}\right]$$
or
$$\left[ \begin{array}{c@{\quad}c}{\varLambda}^d(x^{d-2} \nu_2, \nu_d) & {\varLambda}^d({\varLambda}^1(x^{d-2}, \nu_{4}), \nu_d) \\{\varLambda}^d(y^{d-2} \nu_2, \nu_d) & {\varLambda}^d({\varLambda}^1(y^{d-2}, \nu_{4}), \nu_d)\end{array}\right].$$
In both cases this matrix is non-singular. Thus F cannot depend on θ2d or θd−2,d. In a similar manner, we see that F cannot depend on θi,d or θd−i,d for i=3,4,…,q with the exception of the case i=4, d=8. Thus, apart from this exceptional case, we have
$$F = F(\theta_{0d}, \theta_{1d}, \theta_{dd}).$$
However θ0d, θ1d and θdd are clearly independent and so F=0. In the exceptional case (d=8), we have
$$F = F(\theta_{08}, \theta_{18}, \theta_{48}, \theta_{88}).$$
A direct computation using Maple shows that F=0. Thus Θd is functionally independent for d even. A similar set of arguments may be used in the case of d odd. □

Theorem 3

The set
$$\mathcal{T}_d = \bigcup_{j=2}^d {\varTheta}_j$$
is functionally independent.

The proof of this result follows exactly the same arguments as used in Theorem 1.

In the case of SL(2) all the elements of \(\mathcal{T}_{d}\) are absolute invariants. In the case of GL(2) we normalise these relative invariants using m0,0 to obtain absolute invariants \(\mathcal{T}_{d}^{*}\). The dimension of SL(2) is 3 while that of GL(2) is 4. Therefore the codimension of orbits of their action on moments up to order d is \(\frac{1}{2}(d+1)(d+2)-3\) and \(\frac{1}{2}(d+1)(d+2)-4\) respectively. We see that \(\mathcal{T}_{d}\) has
$$2 + 4 + 5 + \cdots + (d+1) = \frac{1}{2}(d+1)(d+2) - 4$$
elements. Thus we obtain

Theorem 4

Let G act on the moments of order up to d.
  1. For G=SL(2), \(\{m_{0,0}\}\cup \mathcal{T}_{d}\) is a basis for the absolute invariants.

  2. For G=GL(2), \(\mathcal{T}_{d}^{*}\) is a basis for the absolute invariants.
Again, if translations are normalised via (8) then θ12=θ13=0 and the dimension of \(\mathcal{T}_{d}\) reduces by 2. However the orbit codimension also reduces by 2 and so we still have a maximal functionally independent set. In this case the remaining elements of \(\mathcal{T}_{3}\) are
[the explicit expressions for the elements of \(\mathcal{T}_{3}\) are not reproduced here]

8 Conclusions

The construction of orthogonal invariants using the transvectant of Definition 8 and the covariants ηi may be applied in any dimension. In higher dimensions the issue is whether one can generate enough functionally independent invariants to match the codimension of the group orbits. For example the orbits of the group SO(3) have codimension
$$\frac{1}{6}(d+1)(d+2)(d+3)-3$$
in the space of moments of order d or less. Thus the number of independent invariants grows as d^3 rather than as d^2 in the planar case. Whether a maximal independent set of invariants can be constructed from the covariants ηi remains open.

A more subtle question is whether a rational basis, that is, a set of invariants that separates the orbits of the group action, may be constructed from the covariants ηi. As noted above, the basis constructed in Sect. 6 does not separate the orbits. The computational work for the d=3 case suggests that a rational basis has significantly greater complexity than the basis given here. From an operational viewpoint, this added complexity may outweigh any benefits that a basis which separates the orbits may give.

In the case of the affine invariants, the move to higher dimensions will only work in even dimensions since there is no matrix J such that J^2=−I in odd dimensions. The identification of matrices that satisfy (12) with the general affine group is only valid for n=2. In general, matrices that satisfy (12) form a subgroup of GL(n) known as the symplectic group. Thus the invariants generated in the even dimensional case will be invariants of the symplectic group.

Acknowledgements

I would like to thank the Galaad group at INRIA Sophia Antipolis for hosting me while this work was completed. In particular, I would like to thank Evelyne Hubert for fruitful discussions on this material and for running her code and comparing its output with the basis given in this paper.

Copyright information

© Springer Science+Business Media, LLC 2011