# Image analysis using new set of separable two-dimensional discrete orthogonal moments based on Racah polynomials


## Abstract

In this paper, we propose three new separable two-dimensional discrete orthogonal moments, named RTM (Racah-Tchebichef moments), RKM (Racah-Krawtchouk moments), and RdHM (Racah-dual Hahn moments). We present a comparative study between the proposed separable two-dimensional discrete orthogonal moments and the classical ones in terms of gray-level image reconstruction accuracy, under both noisy and noise-free conditions. Furthermore, the local feature extraction capabilities of the proposed moments are described. Finally, a new set of RST (rotation, scaling, and translation) invariants, based on the proposed separable moments, is introduced in this paper for the first time, and their discrimination performance as pattern features for image classification is extensively tested against the traditional moment invariants. The experimental results show that the new set of moments is potentially useful in the field of image analysis.

### Keywords

Separable discrete orthogonal moments · Moment invariants · Gray-level image reconstruction · Local feature extraction · Image classification · Classical discrete orthogonal polynomials

### Abbreviations

- dHdHMI
Dual Hahn-dual Hahn moment invariants

- GMI
Geometric moment invariants

- KKMI
Krawtchouk-Krawtchouk moment invariants

- KRM
Krawtchouk-Racah moments

- MSE
Mean squared error

- PSNR
Peak signal-to-noise ratio

- RdHM
Racah-dual Hahn moments

- RdHMI
Racah-dual Hahn moment invariants

- RKM
Racah-Krawtchouk moments

- RKMI
Racah-Krawtchouk moment invariants

- ROI
Region of interest of an image

- RRMI
Racah-Racah moment invariants

- RST
Rotation, scaling, and translation

- RTM
Racah-Tchebichef moments

- RTMI
Racah-Tchebichef moment invariants

- SSIM
Structural SIMilarity

- TTMI
Tchebichef-Tchebichef moment invariants

## 1 Introduction

The theory of moments has been widely used in several fields of image processing, such as image analysis [1, 2, 3, 4, 5], image watermarking [6, 7], classification and pattern recognition [8, 9, 10], and *video coding* [11, 12], with considerable and important results. Historically, Hu in 1962 presented a set of geometric moment invariants [1], used particularly in pattern recognition. However, these moments suffer from high information redundancy due to their non-orthogonality [13]. To overcome this problem, Teague in 1980 introduced a set of continuous orthogonal moments [14], such as Zernike, pseudo-Zernike, and Legendre moments. This set of moments has been used as highly discriminative features in many fields [15]. Despite their usefulness and wide applicability, the computation of continuous orthogonal polynomials involves two major inconveniences: the discrete approximation of the continuous integration and the discretization of the continuous space [14]. To overcome these issues, a new set of discrete orthogonal moments has been proposed. Mukundan in 2001 was the first to introduce discrete Tchebichef moments in image analysis [16]. This study initiated several other types of discrete moments: Krawtchouk [17], Racah [18], and dual Hahn [19].

The majority of continuous and discrete orthogonal moments in 2D space have separable basis functions. This property means that the basis can be expressed as the tensor product of two classical orthogonal polynomials of one variable each [10]. Zhu in [20] proposed a set of bivariate discrete and continuous orthogonal polynomials in order to define a series of new separable orthogonal moments. In that study, the author describes different applications in image analysis, such as the reconstruction of noisy and noise-free images, local feature extraction, and object recognition using invariant geometric moments. Hmimid et al. in [10] introduced a new set of separable orthogonal moments based on the tensor product of Meixner polynomials with Tchebichef, Krawtchouk, and Hahn polynomials; that study focuses on the classification performance of geometric invariant moments.

In this paper, firstly, we introduce a new set of bivariate orthogonal polynomials, obtained by the tensor product of Racah polynomials, defined on a non-uniform lattice, with Tchebichef and Krawtchouk polynomials, both defined on a uniform lattice, and with dual Hahn polynomials, defined on a non-uniform lattice. Using this approach, we generate three separable 2D discrete orthogonal moments: RTM, RKM, and RdHM. Secondly, we provide the theoretical background for deriving their corresponding RST invariants RTMI (Racah-Tchebichef moment invariants), RKMI (Racah-Krawtchouk moment invariants), and RdHMI (Racah-dual Hahn moment invariants) with respect to rotation, scaling, and translation transforms. Finally, we evaluate the performance of this new set of separable discrete orthogonal moments and moment invariants in the field of image analysis, specifically in image reconstruction, local feature extraction, and image classification.

To demonstrate the usefulness of the proposed moments in image analysis, their accuracy as global descriptors is assessed by reconstructing whole gray-level images. We then compare the results with the most commonly used discrete orthogonal moments in the literature; our goal is to evaluate the combination of the three polynomials (Tchebichef, Krawtchouk, and dual Hahn) with Racah polynomials. Our study also investigates the robustness of the proposed moments against different types of noise. Besides, it should be highlighted that the locality parameter *p* of the Krawtchouk polynomials is exploited to enable local feature extraction with the two proposed separable orthogonal moments RKM and KRM (Krawtchouk-Racah moments), which provide the opportunity to extract a specific ROI (region of interest) of an image.

In the last decades, moment invariants have been extensively studied and widely applied in image analysis and pattern recognition, since they can extract shape features independently of geometric transformations. In this context, only a few papers have been published with the aim of constructing separable moment invariants for object recognition and image classification [10, 20]; however, all these works focus on the generation of separable moment invariants from bivariate polynomials defined only on a uniform lattice. To the best of our knowledge, no paper has yet been published that derives RST separable 2D moment invariants based on bivariate polynomials defined as a combination of polynomials on uniform and non-uniform lattices. Our objective is to extend the derivation process of moment invariants to include bivariate polynomials defined on different lattices (uniform and non-uniform) and to evaluate their performance on a real image classification problem in comparison with the traditional moment invariants.

In summary, the main contributions of our work include the following aspects: (1) the proposition of a new set of bivariate discrete orthogonal polynomials based on the tensor product of Racah polynomials, defined on a non-uniform lattice, with Tchebichef and Krawtchouk polynomials, both defined on a uniform lattice, and with dual Hahn polynomials, defined on a non-uniform lattice; (2) the application of the proposed methods in the field of image reconstruction, in the case of noisy and noise-free gray-level images; (3) the introduction of local feature extraction by specific separable discrete orthogonal moments, i.e., RKM and KRM; and (4) the proposition of new sets of moment invariants for object recognition and image classification.

The rest of this paper is structured as follows. Section 2 reviews the classical discrete orthogonal polynomials of one variable, which serve as the background for the rest of this work, and then introduces the proposed separable discrete orthogonal moments. Section 3 introduces their RST invariants. Results and discussion are provided in Section 4 to demonstrate their performance in image reconstruction, local feature extraction, and image classification. Finally, a brief summary and directions for future work conclude the paper.

## 2 Methods

### 2.1 Discrete classical orthogonal polynomial

In this section, we briefly present the most commonly used discrete orthogonal polynomials; they constitute the theoretical background for the rest of our work. The definition of the Tchebichef polynomials is provided first, followed by the Krawtchouk, dual Hahn, and Racah polynomials. All these polynomials are described in detail in [16, 17, 18, 19].

#### 2.1.1 Tchebichef discrete orthogonal polynomial

The *n*th order classical Tchebichef polynomial is defined as

$$t_{n}(x)=(1-N)_{n}\,{}_{3}F_{2}(-n,-x,1+n;1,1-N;1),\quad n,x=0,1,\ldots,N-1$$

where _{3}*F* _{2} represents the generalized hypergeometric function defined as follows:

$${}_{3}F_{2}(a_{1},a_{2},a_{3};b_{1},b_{2};z)=\sum_{k=0}^{\infty}\frac{(a_{1})_{k}(a_{2})_{k}(a_{3})_{k}}{(b_{1})_{k}(b_{2})_{k}}\frac{z^{k}}{k!}\qquad(2)$$

and (*a*) _{ k } expresses the Pochhammer symbol defined as follows:

$$(a)_{k}=a(a+1)\cdots(a+k-1)=\frac{\Gamma(a+k)}{\Gamma(a)}$$

where *Γ*(·) is the Gamma function.

The set {*t* _{ n }(*x*,*N*)} satisfies the orthogonality property

$$\sum_{x=0}^{N-1}w_{t}(x)\,t_{m}(x,N)\,t_{n}(x,N)=\beta(n,N)\,\delta_{mn}$$

where the weight function *w* _{ t }=1 and the squared norm *β*(*n*,*N*) is a suitable constant which is independent of *x*, as given in [16] by

$$\beta(n,N)=(2n)!\binom{N+n}{2n+1}=\frac{N(N^{2}-1)(N^{2}-2^{2})\cdots(N^{2}-n^{2})}{2n+1}\qquad(7)$$
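Since the ₃F₂ series above terminates at *k* = *n*, the polynomials can be evaluated by a direct sum. The following sketch (an illustrative implementation, not the paper's code) computes *t*ₙ(*x*) this way and checks the orthogonality relation numerically on a small lattice:

```python
import math

def pochhammer(a, k):
    """Rising factorial (a)_k = a(a+1)...(a+k-1), with (a)_0 = 1."""
    result = 1.0
    for i in range(k):
        result *= a + i
    return result

def tchebichef(n, x, N):
    """t_n(x) = (1-N)_n * 3F2(-n, -x, 1+n; 1, 1-N; 1); series terminates at k = n."""
    total = 0.0
    for k in range(n + 1):
        total += (pochhammer(-n, k) * pochhammer(-x, k) * pochhammer(1 + n, k)
                  / (pochhammer(1, k) * pochhammer(1 - N, k) * math.factorial(k)))
    return pochhammer(1 - N, n) * total

# Orthogonality check on an 8-point lattice: the cross product vanishes
# and the squared norm matches beta(3, 8) = 6! * C(11, 7) = 237600.
N = 8
dot = sum(tchebichef(2, x, N) * tchebichef(3, x, N) for x in range(N))
norm3 = sum(tchebichef(3, x, N) ** 2 for x in range(N))
```

For large *N* this direct evaluation becomes numerically delicate, which is why recurrence-based evaluation of the normalized polynomials is preferred in practice.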

#### 2.1.2 Krawtchouk discrete orthogonal polynomial

The *n*th order Krawtchouk polynomial is defined, for *x*,*n*=0,1,…,*N* and 0<*p*<1, as

$$K_{n}(x;p,N)={}_{2}F_{1}\left(-n,-x;-N;\frac{1}{p}\right)$$

where _{2}*F* _{1} expresses the hypergeometric function defined as follows:

$${}_{2}F_{1}(a,b;c;z)=\sum_{k=0}^{\infty}\frac{(a)_{k}(b)_{k}}{(c)_{k}}\frac{z^{k}}{k!}$$

To ensure numerical stability, the set of weighted Krawtchouk polynomials is used:

$$\bar{K}_{n}(x;p,N-1)=K_{n}(x;p,N-1)\sqrt{\frac{w_{k}(x;p,N-1)}{\rho_{k}(n;p,N-1)}}$$

where *w* _{ k }(*x*;*p*,*N*−1) is the weight function denoted by

$$w_{k}(x;p,N)=\binom{N}{x}p^{x}(1-p)^{N-x}$$

and *ρ* _{ k } is the squared norm:

$$\rho_{k}(n;p,N)=(-1)^{n}\left(\frac{1-p}{p}\right)^{n}\frac{n!}{(-N)_{n}}\qquad(13)$$

The weighted Krawtchouk polynomials satisfy a three-term recurrence relation

$$\bar{K}_{n}(x;p,N-1)=A_{n}\,\bar{K}_{n-1}(x;p,N-1)-B_{n}\,\bar{K}_{n-2}(x;p,N-1)$$

with \(A_{n}=\frac {(N-1)p-2(n-1)p+n-1-x}{\sqrt {p(1-p)n(N-n)}}\) and \(B_{n}=\sqrt {\frac {(n-1)(N-n+1)}{(N-n)n}}\).
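Under the definitions above, the weighted orthogonality of the Krawtchouk polynomials can be verified numerically. This is an illustrative sketch (not the authors' implementation); the parameter values are arbitrary:

```python
import math

def pochhammer(a, k):
    """Rising factorial (a)_k."""
    r = 1.0
    for i in range(k):
        r *= a + i
    return r

def krawtchouk(n, x, p, N):
    """K_n(x; p, N) = 2F1(-n, -x; -N; 1/p); the series terminates at k = n."""
    total = 0.0
    for k in range(n + 1):
        total += (pochhammer(-n, k) * pochhammer(-x, k)
                  / (pochhammer(-N, k) * math.factorial(k))) * (1.0 / p) ** k
    return total

def weight(x, p, N):
    """Binomial weight w(x; p, N) = C(N, x) p^x (1-p)^(N-x)."""
    return math.comb(N, x) * p**x * (1 - p) ** (N - x)

# Weighted orthogonality: sum_x w(x) K_1(x) K_2(x) = 0, and the squared norm
# of K_1 equals rho(1) = ((1-p)/p) * 1! / N = 7/30 for N = 10, p = 0.3.
N, p = 10, 0.3
cross = sum(weight(x, p, N) * krawtchouk(1, x, p, N) * krawtchouk(2, x, p, N)
            for x in range(N + 1))
norm1 = sum(weight(x, p, N) * krawtchouk(1, x, p, N) ** 2 for x in range(N + 1))
```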

#### 2.1.3 Dual Hahn discrete orthogonal polynomial

The dual Hahn polynomials have been introduced in image analysis by Zhu et al. in [19]. This family of discrete orthogonal polynomials is defined on the non-uniform lattice.

The dual Hahn polynomial of *n*th order is given by

$$W_{n}^{(c)}(s,a,b)=\frac{(a-b+1)_{n}(a+c+1)_{n}}{n!}\,{}_{3}F_{2}(-n,a-s,a+s+1;a-b+1,a+c+1;1)$$

where the parameters *a*, *b*, *c*, *n*, and *s* are restricted to \(-\frac {1}{2}<a<b\), *b*=*a*+*N*, |*c*|<*a*+1, *n*=0,1,…,*N*−1, and *s*=*a*,*a*+1,…,*b*−1. Also, _{3}*F* _{2} is the generalized hypergeometric function given in Eq. (2). The dual Hahn polynomials satisfy the following orthogonality property:

$$\sum_{s=a}^{b-1}W_{m}^{(c)}(s,a,b)\,W_{n}^{(c)}(s,a,b)\,w_{dh}(s)\,\Delta X\left(s-\frac{1}{2}\right)=\rho_{dh}(n)\,\delta_{mn}$$

where *Δ* *X*(*s*)=*X*(*s*+1)−*X*(*s*), with *X*(*s*)=*s*(*s*+1), and *w* _{dh} is the weight function:

$$w_{dh}(s)=\frac{\Gamma(a+s+1)\,\Gamma(c+s+1)}{\Gamma(s-a+1)\,\Gamma(b-s)\,\Gamma(b+s+1)\,\Gamma(s-c+1)}$$

*ρ* _{dh}(*n*) is the squared norm of order *n* proposed by Zhu et al. in [19], denoted by the following formula:

$$\rho_{dh}(n)=\frac{\Gamma(a+c+n+1)}{n!\,(b-a-n-1)!\,\Gamma(b-c-n)}\qquad(19)$$

In practice, the weighted dual Hahn polynomials are used:

$$\hat{W}_{n}^{(c)}(s,a,b)=W_{n}^{(c)}(s,a,b)\sqrt{\frac{w_{dh}(s)}{\rho_{dh}(n)}\,\Delta X\left(s-\frac{1}{2}\right)}$$

#### 2.1.4 Racah discrete orthogonal polynomial

The *n*th order Racah polynomial is defined as follows:

$$u_{n}^{(\alpha,\beta)}(s,a,b)=\frac{1}{n!}(a-b+1)_{n}(\beta+1)_{n}(a+b+\alpha+1)_{n}\;{}_{4}F_{3}(-n,\alpha+\beta+n+1,a-s,a+s+1;\beta+1,a+1-b,a+b+\alpha+1;1)$$

where the parameters *a*, *b*, *α*, *β*, *n*, and *s* are restricted to \(-\frac {1}{2}<a<b\), *α*>−1, −1<*β*<2*a*+1, *b*=*a*+*N*, *n*=0,1,…,*N*−1, and *s*=*a*,*a*+1,…,*b*−1, and _{4}*F* _{3} is the generalized hypergeometric function given by

$${}_{4}F_{3}(a_{1},a_{2},a_{3},a_{4};b_{1},b_{2},b_{3};z)=\sum_{k=0}^{\infty}\frac{(a_{1})_{k}(a_{2})_{k}(a_{3})_{k}(a_{4})_{k}}{(b_{1})_{k}(b_{2})_{k}(b_{3})_{k}}\frac{z^{k}}{k!}$$

The Racah polynomials satisfy the orthogonality property

$$\sum_{s=a}^{b-1}u_{m}^{(\alpha,\beta)}(s,a,b)\,u_{n}^{(\alpha,\beta)}(s,a,b)\,w_{r}(s)\,\Delta X\left(s-\frac{1}{2}\right)=\rho_{r}(n)\,\delta_{mn}$$

where *Δ* *X*(*s*)=*X*(*s*+1)−*X*(*s*), with *X*(*s*)=*s*(*s*+1), and *w* _{ r } is the weight function:

$$w_{r}(s)=\frac{\Gamma(a+s+1)\,\Gamma(s-a+\beta+1)\,\Gamma(b+\alpha-s)\,\Gamma(b+\alpha+s+1)}{\Gamma(a-\beta+s+1)\,\Gamma(s-a+1)\,\Gamma(b-s)\,\Gamma(b+s+1)}$$

*ρ* _{ r }(*n*) is the squared norm of order *n* given in [18], denoted by the following formula:

$$\rho_{r}(n)=\frac{\Gamma(\alpha+n+1)\,\Gamma(\beta+n+1)\,\Gamma(b-a+\alpha+\beta+n+1)\,\Gamma(a+b+\alpha+n+1)}{(\alpha+\beta+2n+1)\,n!\,(b-a-n-1)!\,\Gamma(\alpha+\beta+n+1)\,\Gamma(a+b-\beta-n)}\qquad(26)$$

In practice, the weighted Racah polynomials are used:

$$\hat{u}_{n}^{(\alpha,\beta)}(s,a,b)=u_{n}^{(\alpha,\beta)}(s,a,b)\sqrt{\frac{w_{r}(s)}{\rho_{r}(n)}\,\Delta X\left(s-\frac{1}{2}\right)}$$

### 2.2 Proposed new separable orthogonal discrete moments

This section presents a new set of bivariate discrete orthogonal polynomials built from the classical polynomials cited previously. Inspired by the method proposed by Xu in [22, 23], we can produce several new bivariate discrete orthogonal polynomials based on the tensor product of Racah polynomials with Tchebichef, Krawtchouk, and dual Hahn polynomials; this new series is presented in the following subsections.

#### 2.2.1 Separable Racah-Tchebichef orthogonal discrete moments

The bivariate Racah-Tchebichef polynomials are defined on the set *V*={(*i*,*j*):0≤*i*,*j*≤*N*−1} as the product of the weighted Racah and normalized Tchebichef polynomials, with respect to the weight function that is defined as follows:

$$w_{rt}(s,y)=w_{r}(s)\,w_{t}(y)$$

The (*n*+*m*)th order Racah-Tchebichef moment of an *N*×*N* image having intensity function *f*(*s*,*y*) is defined as follows:

$$RTM_{nm}=\sum_{s=a}^{b-1}\sum_{y=0}^{N-1}\hat{u}_{n}^{(\alpha,\beta)}(s,a,b)\,\hat{t}_{m}(y)\,f(s,y)$$

where \(\hat{u}_{n}^{(\alpha,\beta)}\) and \(\hat{t}_{m}\) denote the weighted Racah and normalized Tchebichef polynomials, respectively. The image can be reconstructed up to a maximum order *n* _{max} by applying the inverse moment formula, that is defined as follows:

$$\hat{f}(s,y)=\sum_{n=0}^{n_{\max}}\sum_{m=0}^{n_{\max}}RTM_{nm}\,\hat{u}_{n}^{(\alpha,\beta)}(s,a,b)\,\hat{t}_{m}(y)$$

#### 2.2.2 Separable Racah-Krawtchouk orthogonal discrete moments

The bivariate Racah-Krawtchouk polynomials are defined on the set *V*={(*i*,*j*):0≤*i*,*j*≤*N*−1} as the product of the weighted Racah and weighted Krawtchouk polynomials, where the weight function is defined as follows:

$$w_{rk}(s,x)=w_{r}(s)\,w_{k}(x;p,N-1)$$

The (*n*+*m*)th order Racah-Krawtchouk moment of an *N*×*N* image having intensity function *f*(*x*,*y*) is defined as follows:

$$RKM_{nm}=\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\hat{u}_{n}^{(\alpha,\beta)}(x,a,b)\,\bar{K}_{m}(y;p,N-1)\,f(x,y)$$

The reconstruction of the image up to a maximum order *n* _{max} can be done by applying the inverse moment formula that is defined as follows:

$$\hat{f}(x,y)=\sum_{n=0}^{n_{\max}}\sum_{m=0}^{n_{\max}}RKM_{nm}\,\hat{u}_{n}^{(\alpha,\beta)}(x,a,b)\,\bar{K}_{m}(y;p,N-1)$$

#### 2.2.3 Separable Racah-dual Hahn orthogonal discrete moments

The bivariate Racah-dual Hahn polynomials are defined on the set *V*={(*i*,*j*):0≤*i*,*j*≤*N*−1} as the product of the weighted Racah and weighted dual Hahn polynomials, with respect to the weight function that is defined as follows:

$$w_{rdh}(s,t)=w_{r}(s)\,w_{dh}(t)$$

The (*n*+*m*)th order Racah-dual Hahn moment of an *N*×*N* image having intensity function *f*(*s*,*t*) is given by

$$RdHM_{nm}=\sum_{s=a}^{b-1}\sum_{t=a}^{b-1}\hat{u}_{n}^{(\alpha,\beta)}(s,a,b)\,\hat{W}_{m}^{(c)}(t,a,b)\,f(s,t)$$

The image can be reconstructed up to a maximum order *n* _{max} by applying the inverse moment formula as follows:

$$\hat{f}(s,t)=\sum_{n=0}^{n_{\max}}\sum_{m=0}^{n_{\max}}RdHM_{nm}\,\hat{u}_{n}^{(\alpha,\beta)}(s,a,b)\,\hat{W}_{m}^{(c)}(t,a,b)$$

When *n* _{max}=*N*−1, the image reconstructed from the computed Racah-Tchebichef, Racah-Krawtchouk, and Racah-dual Hahn moments, by applying Eqs. (32), (36), and (40), is optimal, with a minimal reconstruction error.
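Because the 2D basis is separable, both the forward moments and the inverse reconstruction reduce to two matrix products. The sketch below illustrates this pattern with generic orthonormal 1-D bases, built here by QR factorization of a Vandermonde matrix as stand-ins for the normalized Racah, Tchebichef, Krawtchouk, or dual Hahn matrices; at full order *n* _{max}=*N*−1 the reconstruction is exact, as stated above.

```python
import numpy as np

def orthonormal_basis(N):
    """Orthonormal discrete polynomial basis on {0, ..., N-1}, obtained by QR
    factorization of a Vandermonde matrix. Row n holds the n-th basis
    polynomial sampled on the lattice (a stand-in for the normalized
    Racah/Tchebichef matrices; any orthonormal polynomial basis works here)."""
    V = np.vander(np.arange(N, dtype=float), N, increasing=True)  # 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)
    return Q.T  # shape (N, N), rows orthonormal

rng = np.random.default_rng(0)
N = 16
f = rng.random((N, N))                       # toy "image"
P1, P2 = orthonormal_basis(N), orthonormal_basis(N)

# Forward separable moments: M[n, m] = sum_{s,y} P1[n, s] * P2[m, y] * f[s, y]
M = P1 @ f @ P2.T
# Inverse moment formula at full order n_max = N - 1: exact reconstruction.
f_rec = P1.T @ M @ P2
```

This matrix form is also why separable moments are cheap to compute: two *N*×*N* matrix products instead of a quadruple sum.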

## 3 Moment invariants

The usual method for obtaining RST invariants is to express the image moments as a linear combination of geometric ones and then make use of RST geometric invariants in place of the geometric moments.

The geometric moments *G* _{ nm } of an image of size *N*×*M* pixels are defined using the discrete sum approximation as follows:

$$G_{nm}=\sum_{x=0}^{N-1}\sum_{y=0}^{M-1}x^{n}y^{m}f(x,y)$$

The central geometric moments *U* _{ nm } are defined by

$$U_{nm}=\sum_{x=0}^{N-1}\sum_{y=0}^{M-1}(x-\bar{x})^{n}(y-\bar{y})^{m}f(x,y)$$

with \(\bar {x}=\frac {G_{10}}{G_{00}}\) and \(\bar {y}=\frac {G_{01}}{G_{00}}\).

The geometric moment invariant of order *n*+*m*, noted *V* _{ nm }, which is independent of rotation, scaling, and translation, can be written as follows:

$$V_{nm}=G_{00}^{-\gamma}\sum_{x=0}^{N-1}\sum_{y=0}^{M-1}\left[(x-\bar{x})\cos\theta+(y-\bar{y})\sin\theta\right]^{n}\left[(y-\bar{y})\cos\theta-(x-\bar{x})\sin\theta\right]^{m}f(x,y)$$

with \(\gamma =\frac {n+m}{2}+1\) and \(\theta =\frac {1}{2} \text {tan}^{-1}\left (\frac {2U_{11}}{U_{20}-U_{02}}\right) \).
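The invariant *V* _{ nm } can be transcribed directly; the following sketch is illustrative (it uses `arctan2` in place of tan⁻¹ for quadrant stability) and checks translation invariance on a toy image, where the invariance is exact up to floating-point error:

```python
import numpy as np

def geometric_moment(f, n, m):
    """G_nm = sum_x sum_y x^n y^m f(x, y)."""
    N, M = f.shape
    x = np.arange(N, dtype=float)[:, None]
    y = np.arange(M, dtype=float)[None, :]
    return float(np.sum(x**n * y**m * f))

def rst_invariant(f, n, m):
    """V_nm: centroid-shifted, scale-normalized (G_00^-gamma), and rotated to
    the principal axis angle theta = 0.5 * atan(2 U11 / (U20 - U02))."""
    g00 = geometric_moment(f, 0, 0)
    xb = geometric_moment(f, 1, 0) / g00
    yb = geometric_moment(f, 0, 1) / g00
    N, M = f.shape
    x = np.arange(N, dtype=float)[:, None] - xb
    y = np.arange(M, dtype=float)[None, :] - yb
    u11 = float(np.sum(x * y * f))
    u20 = float(np.sum(x**2 * f))
    u02 = float(np.sum(y**2 * f))
    theta = 0.5 * np.arctan2(2 * u11, u20 - u02)
    gamma = (n + m) / 2 + 1
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = y * np.cos(theta) - x * np.sin(theta)
    return float(np.sum(xr**n * yr**m * f)) / g00**gamma

# Translation invariance check: the same small blob at two positions.
f = np.zeros((32, 32)); f[5:9, 6:11] = np.arange(20, dtype=float).reshape(4, 5) + 1
g = np.zeros((32, 32)); g[15:19, 12:17] = f[5:9, 6:11]
```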

### 3.1 Separable Racah-Tchebichef moment invariants

The Racah polynomials can be expressed as a polynomial of *x* ^{ r }: the *n*th order discrete Racah polynomial can be expanded into monomials *s* ^{ r } through the coefficients *B* _{ nt } and *C* _{ tr }, where *C* _{00}=1, *C* _{10}=0, *C* _{11}=*a* _{1}, and *C* _{12}=1 with *a* _{ n }=2*a*+*n*.

Here *s*(*t*,*r*) denotes the Stirling numbers of the first kind, obtained by the following recurrence relation:

$$s(t+1,r)=s(t,r-1)-t\,s(t,r)\qquad(47)$$

with *s*(*t*,0)=*s*(0,*r*)=0 and *s*(0,0)=1.

Based on this expansion, the Racah-Tchebichef moments of an image *f*(*x*,*y*) can be expressed as a linear combination of the geometric moments *G* _{ nm }, where *ρ* _{ r }(*n*) and *β*(*n*,*N*) are the normalization constants of the Racah and Tchebichef polynomials relative to Eq. (26) and Eq. (7), respectively.

Therefore, to obtain the Racah-Tchebichef moment invariants (RTMI) of order *n*+*m*, the geometric moments *G* _{ zr } in the previous equation can be replaced by the geometric moment invariants *V* _{ zr }.

### 3.2 Separable Racah-Krawtchouk moment invariants

Similarly, the Krawtchouk polynomial *k* _{ n }(*x*;*p*,*N*) can be expressed as a polynomial of *x* as follows:

$$k_{n}(x;p,N)=\sum_{t=0}^{n}\sum_{r=0}^{t}Q_{nt}(p,N)\,s(t,r)\,x^{r}$$

with \(Q_{nt}(p,N)=\frac {(-n)_{t}}{(-N)_{t}\, t!} \left (\frac {-1}{p}\right)^{t}\), and *s*(*t*,*r*) the Stirling numbers of the first kind from Eq. (47).

Based on this expansion, the Racah-Krawtchouk moments of an image *f*(*x*,*y*) can be written in terms of the geometric moments *G* _{ nm }, where *ρ* _{ r }(*n*) and *ρ* _{ k }(*n*,*p*,*N*) are the normalization constants of the Racah and Krawtchouk polynomials relative to Eq. (26) and Eq. (13), respectively.

By replacing the geometric moments *G* _{ zr } by the geometric moment invariants *V* _{ zr } in Eq. (51), we obtain the RKMI of order *n*+*m*.

### 3.3 Separable Racah-dual-Hahn moment invariants

In the same way, the *n*th order dual Hahn polynomial can be represented as a polynomial of *x* ^{ r }, with leading factor \(R_{n}^{(c)}(\mu,\vartheta)=\frac {(\mu-\vartheta +1)_{n}(\mu +c+1)_{n}}{n!}\) and coefficients \(B_{nm}^{(c)}(\mu,\vartheta)= B_{n(m-1)}^{(c)}(\mu,\vartheta)\,\frac {n-m+1}{(\mu-\vartheta+m)(\mu+c+m)\,m}\) for all *n*≥0, 1≤*m*≤*n*, with \(B_{00}^{(c)}(\mu,\vartheta)=1\).

And *C* _{ tr } is given by Eq. (45) with *a* _{ n }=2*μ*+*n*.

Based on this expansion, the Racah-dual Hahn moments of an image *f*(*x*,*y*) can be expanded in terms of geometric moments, where *ρ* _{ r }(*n*) and *ρ* _{dh}(*n*) are the normalization constants of the Racah and dual Hahn polynomials relative to Eqs. (26) and (19), respectively.

Therefore, to obtain the RdHMI of order *n*+*m*, the geometric moments *G* _{ zr } in the previous equation can be replaced by the geometric moment invariants *V* _{ zr }.

## 4 Results and discussion

In this section, several experimental results are provided to validate the theoretical study of our new separable discrete orthogonal moments developed in the previous sections. This section is organized into four subsections. In the first subsection, the reconstruction capability for whole noisy and noise-free images is addressed. The experimental study on local feature extraction is presented in the second subsection. Then, the invariability of the proposed moment invariants is examined under different geometric transforms, and their noise robustness is also investigated. Finally, in the fourth subsection, image classification accuracy is presented with a comparison between the new sets of separable moment invariants and the existing ones.

### 4.1 Global features reconstruction

To evaluate the reconstruction capability, the MSE (mean squared error) between an original image and its reconstruction, both of size *N*×*N*, is defined as follows:

$$MSE=\frac{1}{N^{2}}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\left[f(x,y)-\hat{f}(x,y)\right]^{2}$$

where *f*(*x*,*y*) and \(\hat {f}(x,y)\) denote the original and the reconstructed image, respectively, and the corresponding PSNR (peak signal-to-noise ratio) is \(\text{PSNR}=10\log_{10}\left(\frac{255^{2}}{MSE}\right)\). In order to complete this comparison, another measure index is used in the current work: the SSIM (Structural SIMilarity) index, which measures the change in luminance, contrast, and structure between two images. The SSIM was first presented by Z. Wang et al. in [27].
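The two scalar indices used throughout this section can be sketched as follows (an illustrative implementation assuming 8-bit images with peak value 255; the SSIM of [27] is omitted for brevity):

```python
import numpy as np

def mse(f, f_hat):
    """Mean squared error between the original and reconstructed image."""
    f = np.asarray(f, dtype=float)
    f_hat = np.asarray(f_hat, dtype=float)
    return float(np.mean((f - f_hat) ** 2))

def psnr(f, f_hat, peak=255.0):
    """PSNR (dB) = 10 log10(peak^2 / MSE); infinite for identical images."""
    m = mse(f, f_hat)
    return float("inf") if m == 0 else 10.0 * np.log10(peak**2 / m)
```

A lower MSE corresponds to a higher PSNR; the two rank reconstructions identically, but PSNR is the conventional reporting unit.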

The proposed methods are expected to achieve a better estimation of the original image using only a small number of moments, which should minimize the MSE value and, conversely, maximize the PSNR value. Moreover, the SSIM index is used to evaluate the preservation of structural information in the reconstructed image; high SSIM values indicate better reconstruction performance.

To exhibit a global comparison between the different sets of proposed separable discrete orthogonal moments, the Krawtchouk parameter *p* is set to 0.5, so that the reconstruction is taken from the image center, as presented by Yap et al. in [17]. The dual Hahn parameters are set to *μ*=8, *𝜗*=*N*+*μ*, and *c*=−8, and the Racah parameters to *a*=256, *α*=256, *β*=160, and *b*=*N*+*a*.

Comparative results in terms of PSNR (dB) and SSIM values for the Lena test image (each cell: PSNR / SSIM)

| Order | RTM | RdHM | RKM | TTM | dHdHM | RRM | KKM |
|---|---|---|---|---|---|---|---|
| 0 | 8.00 / 0.04 | 5.87 / 0.01 | 6.16 / 0.02 | 14.11 / 0.11 | 5.86 / 0.00 | 6.14 / 0.03 | 6.08 / 0.00 |
| 6 | 8.88 / 0.04 | 6.31 / 0.04 | 6.64 / 0.03 | 15.12 / 0.14 | 6.50 / 0.04 | 6.53 / 0.01 | 6.63 / 0.05 |
| 18 | 11.69 / 0.30 | 8.37 / 0.19 | 8.63 / 0.17 | 16.91 / 0.26 | 8.56 / 0.18 | 8.79 / 0.22 | 8.34 / 0.18 |
| 26 | 13.30 / 0.38 | 9.87 / 0.27 | 10.24 / 0.30 | 18.08 / 0.41 | 10.09 / 0.27 | 10.25 / 0.30 | 9.81 / 0.22 |
| 38 | 15.94 / 0.48 | 13.61 / 0.47 | 13.33 / 0.45 | 19.31 / 0.53 | 14.91 / 0.50 | 13.26 / 0.44 | 13.40 / 0.51 |
| 72 | 22.68 / 0.80 | 21.90 / 0.77 | 22.69 / 0.80 | 22.26 / 0.75 | 21.77 / 0.77 | 22.36 / 0.79 | 22.94 / 0.80 |
| 78 | 22.92 / 0.81 | 22.57 / 0.79 | 23.43 / 0.82 | 22.79 / 0.78 | 22.24 / 0.79 | 23.32 / 0.81 | 23.57 / 0.83 |
| 88 | 24.45 / 0.84 | 23.50 / 0.83 | 24.55 / 0.85 | 24.05 / 0.83 | 23.02 / 0.82 | 24.55 / 0.85 | 24.65 / 0.86 |
| 92 | 25.77 / 0.88 | 23.94 / 0.84 | 25.03 / 0.86 | 24.59 / 0.86 | 23.46 / 0.83 | 25.02 / 0.86 | 25.16 / 0.87 |
| 98 | 26.86 / 0.92 | 24.45 / 0.86 | 26.05 / 0.89 | 25.73 / 0.89 | 23.99 / 0.85 | 25.99 / 0.88 | 25.93 / 0.89 |
| 108 | 28.97 / 0.94 | 25.58 / 0.89 | 28.16 / 0.93 | 27.81 / 0.93 | 24.92 / 0.87 | 28.24 / 0.93 | 27.76 / 0.93 |
| 112 | 29.90 / 0.96 | 26.28 / 0.90 | 29.55 / 0.94 | 29.58 / 0.95 | 25.64 / 0.88 | 29.58 / 0.94 | 28.91 / 0.94 |

Comparative results in terms of PSNR (dB) and SSIM values for the Man test image (each cell: PSNR / SSIM)

| Order | RTM | RdHM | RKM | TTM | dHdHM | RRM | KKM |
|---|---|---|---|---|---|---|---|
| 0 | 8.62 / 0.04 | 6.56 / 0.00 | 6.85 / 0.00 | 14.20 / 0.08 | 6.63 / 0.01 | 6.88 / 0.00 | 6.72 / 0.00 |
| 6 | 9.92 / 0.07 | 7.28 / 0.06 | 7.60 / 0.02 | 15.36 / 0.11 | 7.03 / 0.01 | 7.67 / 0.05 | 7.51 / 0.04 |
| 18 | 11.82 / 0.30 | 9.35 / 0.26 | 10.01 / 0.19 | 16.99 / 0.20 | 8.89 / 0.15 | 10.16 / 0.25 | 9.87 / 0.22 |
| 26 | 13.20 / 0.39 | 11.01 / 0.28 | 11.75 / 0.35 | 17.91 / 0.29 | 11.43 / 0.29 | 11.70 / 0.33 | 11.48 / 0.34 |
| 38 | 15.58 / 0.44 | 14.45 / 0.44 | 14.34 / 0.51 | 19.22 / 0.47 | 15.78 / 0.46 | 14.16 / 0.48 | 14.05 / 0.51 |
| 72 | 22.15 / 0.74 | 21.40 / 0.73 | 21.82 / 0.76 | 21.80 / 0.71 | 21.44 / 0.73 | 21.52 / 0.75 | 22.22 / 0.77 |
| 78 | 22.65 / 0.76 | 22.02 / 0.76 | 22.63 / 0.78 | 22.44 / 0.76 | 21.81 / 0.75 | 22.44 / 0.78 | 22.85 / 0.79 |
| 88 | 23.90 / 0.83 | 23.04 / 0.80 | 23.70 / 0.83 | 23.56 / 0.82 | 22.44 / 0.78 | 23.79 / 0.83 | 23.81 / 0.83 |
| 92 | 24.19 / 0.85 | 23.41 / 0.82 | 24.17 / 0.85 | 24.14 / 0.84 | 22.78 / 0.80 | 24.28 / 0.85 | 24.29 / 0.85 |
| 98 | 25.26 / 0.88 | 23.96 / 0.84 | 25.05 / 0.88 | 24.97 / 0.87 | 23.27 / 0.82 | 25.16 / 0.88 | 24.99 / 0.87 |
| 108 | 27.52 / 0.93 | 25.14 / 0.87 | 26.64 / 0.91 | 26.88 / 0.92 | 24.15 / 0.85 | 26.80 / 0.92 | 26.73 / 0.91 |
| 112 | 28.23 / 0.95 | 25.86 / 0.89 | 27.66 / 0.93 | 27.81 / 0.94 | 24.73 / 0.86 | 27.80 / 0.93 | 27.82 / 0.93 |

Comparative results in terms of PSNR (dB) and SSIM values for the Texture test image (each cell: PSNR / SSIM)

| Order | RTM | RdHM | RKM | TTM | dHdHM | RRM | KKM |
|---|---|---|---|---|---|---|---|
| 0 | 7.20 / 0.01 | 5.14 / 0.00 | 5.36 / 0.00 | 11.55 / 0.01 | 5.08 / 0.00 | 5.37 / 0.00 | 5.34 / 0.00 |
| 6 | 7.98 / 0.01 | 5.55 / 0.00 | 5.86 / 0.00 | 11.83 / 0.01 | 5.51 / 0.00 | 5.88 / 0.01 | 5.78 / 0.00 |
| 18 | 10.72 / 0.02 | 6.94 / 0.01 | 7.34 / 0.01 | 11.92 / 0.02 | 6.92 / 0.01 | 7.47 / 0.01 | 7.11 / 0.01 |
| 26 | 10.68 / 0.02 | 8.13 / 0.04 | 8.48 / 0.02 | 11.96 / 0.02 | 8.10 / 0.03 | 8.62 / 0.06 | 8.12 / 0.02 |
| 38 | 11.84 / 0.05 | 10.26 / 0.07 | 9.89 / 0.02 | 12.08 / 0.04 | 10.64 / 0.08 | 10.09 / 0.07 | 9.70 / 0.07 |
| 72 | 12.90 / 0.23 | 12.57 / 0.20 | 12.59 / 0.21 | 13.09 / 0.23 | 12.46 / 0.15 | 12.60 / 0.23 | 12.68 / 0.24 |
| 78 | 13.91 / 0.29 | 12.76 / 0.24 | 12.87 / 0.28 | 13.31 / 0.28 | 12.60 / 0.19 | 12.90 / 0.29 | 13.18 / 0.34 |
| 88 | 13.98 / 0.41 | 13.62 / 0.43 | 13.98 / 0.46 | 13.91 / 0.40 | 13.24 / 0.35 | 13.79 / 0.45 | 14.01 / 0.47 |
| 92 | 14.34 / 0.49 | 14.09 / 0.51 | 14.25 / 0.51 | 14.24 / 0.46 | 13.39 / 0.37 | 14.31 / 0.54 | 14.21 / 0.50 |
| 98 | 15.13 / 0.62 | 14.79 / 0.59 | 14.93 / 0.59 | 14.78 / 0.54 | 13.74 / 0.43 | 15.10 / 0.62 | 14.88 / 0.58 |
| 108 | 18.96 / 0.87 | 17.80 / 0.83 | 19.27 / 0.88 | 17.41 / 0.78 | 14.31 / 0.52 | 18.93 / 0.87 | 19.19 / 0.88 |
| 112 | 20.52 / 0.92 | 18.54 / 0.86 | 20.34 / 0.91 | 19.61 / 0.88 | 14.64 / 0.55 | 19.76 / 0.90 | 20.38 / 0.91 |

As a main conclusion of these experiments, the proposed RTM and RKM perform competitively with the other methods in terms of gray-level image representation capability, which justifies their usefulness as global descriptors in the field of image reconstruction. On the other hand, the proposed RdHM does not perform as well in these experiments.

### 4.2 Robustness to different kind of noises

In this experiment, the robustness of the proposed moments against noise is investigated on the reconstructed images. Firstly, the test images are corrupted by Gaussian noise (*ν*=0.01), as shown in the first three columns of Fig. 5. Secondly, the effect of salt-and-pepper noise with a density of 3% is displayed in the last three columns of Fig. 5.

Comparative results of noisy image reconstruction in terms of PSNR (dB); left three columns: Gaussian noise (*ν*=0.01), right three columns: salt-and-pepper noise (3%)

| Methods | Cameraman | Mandrill | Peppers | Cameraman | Mandrill | Peppers |
|---|---|---|---|---|---|---|
| RTM | 20.2458 | 20.2244 | 20.8964 | 20.3689 | 20.4726 | 20.465 |
| RKM | 20.1001 | 20.2059 | 20.7613 | 20.2028 | 20.5158 | 20.4371 |
| RdHM | 20.3312 | 20.2564 | 21.0734 | 20.4353 | 20.564 | 20.5085 |
| RRM | 20.2346 | 20.1725 | 20.8658 | 20.324 | 20.4288 | 20.4396 |

### 4.3 Local feature extraction by RKM and KRM discrete orthogonal moments

In the following experiments, we investigate the capability of the proposed KRM and RKM to capture the local information of an image. This study is based on the ability of Krawtchouk moments to extract local features by adjusting the *p* parameter [17, 20]. This property can be very useful in the context of pattern classification in order to extract and recognize a part of a scene containing a specific object to classify [28]. Therefore, we focus in this subsection on the choice of adaptable parameters for the proposed separable discrete moments. In all the following settings, the remaining parameters are fixed at *a*=0, *b*=*N*, *α*=0, and *β*=0:

- RKM with *p*=0.1: the region of interest is extracted horizontally, from left to right, at the top of the image.
- RKM with *p*=0.9: the region of interest is extracted horizontally, from left to right, at the bottom of the image.
- RKM with *p*=0.5: the region of interest is extracted horizontally, from left to right, at the center of the image.
- KRM with *p*=0.1: the region of interest is extracted vertically, from top to bottom, on the left of the image.
- KRM with *p*=0.9: the region of interest is extracted vertically, from top to bottom, on the right of the image.
- KRM with *p*=0.5: the region of interest is extracted vertically, from top to bottom, at the center of the image.
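The locality behaviour described above can be traced back to the Krawtchouk weight function, whose mass concentrates around *x*=*N*·*p*. The following sketch (illustrative; the lattice size is an arbitrary choice) locates the peak of the weight for three settings of *p*:

```python
import math

def krawtchouk_weight(x, p, N):
    """w(x; p, N) = C(N, x) p^x (1-p)^(N-x): a binomial bell centered near N*p."""
    return math.comb(N, x) * p**x * (1 - p) ** (N - x)

# For each p, find where the weight (and hence the emphasis of the
# weighted Krawtchouk moments) is concentrated on a 101-point lattice.
N = 100
modes = {p: max(range(N + 1), key=lambda x: krawtchouk_weight(x, p, N))
         for p in (0.1, 0.5, 0.9)}
```

The peak sits near *x*=10, 50, and 90 for *p*=0.1, 0.5, and 0.9 respectively, which is why *p* steers the extracted region of interest toward one side or the center of the image.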

### 4.4 Invariability

To verify the invariability of the proposed moment invariants, the test image is transformed by rotation, scaling, and translation; the moment invariants up to order (*n*+*m*≤6) are computed using the proposed separable moment invariants, and the relative error of Eq. (58) between the moment invariant coefficients of the original image and the transformed one is computed:

$$E(f,g)=\frac{\|MI(f)-MI(g)\|}{\|MI(f)\|}$$

where ∥·∥ denotes the Euclidean norm and *f* and *g* denote the original and the transformed image, respectively; a low relative error indicates good precision.
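The relative error is a one-liner once the invariants of each image have been stacked into vectors; this sketch is illustrative (the argument names are hypothetical):

```python
import numpy as np

def relative_error(f_inv, g_inv):
    """E = ||f_inv - g_inv|| / ||f_inv|| (Euclidean norm), where f_inv and
    g_inv are vectors of moment invariants of the original and transformed
    image, respectively."""
    f_inv = np.asarray(f_inv, dtype=float)
    g_inv = np.asarray(g_inv, dtype=float)
    return float(np.linalg.norm(f_inv - g_inv) / np.linalg.norm(f_inv))
```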

Furthermore, to understand the effect of noise on the proposed moment invariants, and in a similar way to the previous experiment, the test image is corrupted by different kinds of noise: firstly, by salt-and-pepper noise with densities varying from 0% to 5% in steps of 0.25%; secondly, by Gaussian noise with zero mean and standard deviation varying between 0 and 0.5 in steps of 0.05.

It is clear from Figs. 7 and 8 that the relative error rate is very low (on the order of 10^{−10}), which indicates that the proposed moment invariants perform well and exhibit high numerical stability under different geometric transformations, as well as in the presence of noise. Therefore, the new set of invariants can be very useful in the field of pattern recognition and image classification.

### 4.5 Image classification

Three testing subsets of four, six, and ten classes have been extracted from each database in order to demonstrate the discrimination capability of the proposed RTMI, RKMI, and RdHMI in comparison with the existing moment invariants GMI, TTMI (Tchebichef-Tchebichef moment invariants), KKMI (Krawtchouk-Krawtchouk moment invariants), RRMI (Racah-Racah moment invariants), and dHdHMI (dual Hahn-dual Hahn moment invariants). Furthermore, we used the conventional 1-NN (*k*-nearest neighbors with *k*=1) classifier with 5-fold cross validation and moment invariants of order up to 10 (*n*≤5, *m*≤5).
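The evaluation protocol (1-NN with 5-fold cross-validation) is simple enough to reproduce with a few lines of NumPy. The sketch below is self-contained and illustrative; the synthetic feature vectors stand in for the moment-invariant features and the real databases:

```python
import numpy as np

def one_nn_cv(X, y, folds=5, seed=0):
    """Accuracy of a plain 1-NN classifier estimated by k-fold cross-validation.
    X: (n_samples, n_features) feature vectors (e.g. moment invariants); y: labels."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    accs = []
    for part in np.array_split(idx, folds):
        test = np.zeros(len(X), dtype=bool)
        test[part] = True
        Xtr, ytr, Xte, yte = X[~test], y[~test], X[test], y[test]
        # Nearest neighbour by Euclidean distance; predict its label.
        d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
        pred = ytr[np.argmin(d, axis=1)]
        accs.append(float(np.mean(pred == yte)))
    return float(np.mean(accs))

# Two well-separated synthetic classes: the estimated accuracy is ~1.0.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 4)), rng.normal(5, 0.1, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
```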

Image classification rate (%) using GMI, RTMI, RKMI, RdHMI, RRMI, TTMI, KKMI, and dHdHMI, for subsets of 4, 6, and 10 classes per database

| Method | Outex 4 | Outex 6 | Outex 10 | Caltech-101 4 | Caltech-101 6 | Caltech-101 10 | Corel 4 | Corel 6 | Corel 10 | Mean |
|---|---|---|---|---|---|---|---|---|---|---|
| GMI | 76.11 | 67.87 | 62.56 | 78.56 | 68.64 | 54.89 | 77.5 | 52.83 | 39.7 | 64.30 |
| RTMI | | | | | | | | | | |
| RKMI | | | | | | | | | | |
| RdHMI | | | | | | | | | | |
| RRMI | 75.69 | 66.57 | 58.72 | 81.49 | 70.61 | 59.13 | 89.25 | 77.5 | 55.7 | 70.52 |
| TTMI | 75.14 | 58.8 | 57.83 | 82.02 | 69.57 | 58.27 | 88.75 | 73.5 | 47.0 | 67.88 |
| KKMI | 77.64 | 59.17 | 58.5 | 82.15 | 68.75 | 57.2 | 89.25 | 74.17 | 48.6 | 68.38 |
| dHdHMI | 78.89 | 61.94 | 57.33 | 80.16 | 70.27 | 58.67 | 88.25 | 76.67 | 46.3 | 68.72 |

## 5 Conclusions

In this paper, we have proposed a new set of bivariate discrete orthogonal polynomials based on the product of Racah polynomials with Tchebichef, Krawtchouk, and dual Hahn polynomials. Using these bivariate discrete orthogonal polynomials, we have defined three new separable 2D discrete orthogonal moments, named RTM, RKM, and RdHM. Several experimental studies have been conducted to measure the performance of the proposed methods in comparison with the classical moments in terms of image reconstruction quality (under noisy and noise-free conditions), local feature extraction, and image classification accuracy. It should be highlighted that, in most experiments, the proposed moments provide better results than the classical methods, and their invariability is clearly confirmed.

In conclusion, considering the presented performance and robustness of this new set of moments, we are confident in their ability to give a better representation of image content, which can be extremely helpful in the field of image analysis. In future work, we will focus on improving the numerical stability of the proposed moments and on presenting a fast algorithm for the computation of large-size images, instead of the straightforward algorithm.

## Notes

### Acknowledgements

The authors would like to thank the Laboratory of Intelligent Systems and Applications for its support of this work.

### Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

### Authors’ contributions

All authors contributed equally to this work. IB and RB designed and performed the experiments and prepared the manuscript. KZ and HF supervised the work and contributed to the writing of the paper. All authors read and approved the final manuscript.

### Competing interests

The authors declare that they have no competing interests.

### References

- 1. MK Hu, Visual pattern recognition by moment invariants. IRE Trans. Inf. Theory **8**, 179–187 (1962).
- 2. SX Liao, M Pawlak, On image analysis by moments. IEEE Trans. Pattern Anal. Mach. Intell. **18**, 254–266 (1996).
- 3. H Zhu, M Liu, H Shu, H Zhang, L Luo, General form for obtaining discrete orthogonal moments. IET Image Process. **4**, 335–352 (2010).
- 4. M Sayyouri, A Hmimid, H Qjidaa, A fast computation of novel set of Meixner invariant moments for image analysis. Circ. Syst. Signal Process. **34**, 875–900 (2015).
- 5. H Shu, H Zhang, B Chen, P Haigron, L Luo, Fast computation of Tchebichef moments for binary and gray-scale images. IEEE Trans. Image Process. **19**, 3171–3180 (2010).
- 6. XY Wang, YP Yang, HY Yang, Invariant image watermarking using multi-scale Harris detector and wavelet moments. Comput. Electr. Eng. **36**, 31–44 (2010).
- 7. ED Tsougenis, GA Papakostas, DE Koulouriotis, Image watermarking via separable moments. Multimed. Tools Appl. **74**, 3985–4012 (2015).
- 8. GA Papakostas, EG Karakasis, DE Koulouriotis, Novel moment invariants for improved classification performance in computer vision applications. Pattern Recogn. **43**, 58–68 (2010).
- 9. J Flusser, Pattern recognition by affine moment invariants. Pattern Recogn. **26**, 167–174 (1993).
- 10. A Hmimid, M Sayyouri, H Qjidaa, Fast computation of separable two-dimensional discrete invariant moments for image classification. Pattern Recogn. **48**, 509–521 (2015).
- 11. C Yan, Y Zhang, J Xu, F Dai, L Li, Q Dai, F Wu, A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. IEEE Signal Process. Lett. **21**, 573–576 (2014).
- 12. C Yan, Y Zhang, J Xu, F Dai, J Zhang, Q Dai, F Wu, Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans. Circ. Syst. Video Technol. **24**, 2077–2089 (2014).
- 13. CH Teh, RT Chin, On image analysis by the methods of moments. IEEE Trans. Pattern Anal. Mach. Intell. **10**, 496–513 (1988).
- 14. MR Teague, Image analysis via the general theory of moments. J. Opt. Soc. Am. **70**, 920–930 (1980).
- 15. GA Papakostas, DE Koulouriotis, EG Karakasis, Computation strategies of orthogonal image moments: a comparative study. Appl. Math. Comput. **216**, 1–17 (2010).
- 16. R Mukundan, SH Ong, PA Lee, Image analysis by Tchebichef moments. IEEE Trans. Image Process. **10**, 1357–1364 (2001).
- 17. P-T Yap, R Paramesran, S-H Ong, Image analysis by Krawtchouk moments. IEEE Trans. Image Process. **12**, 1367–1377 (2003).
- 18. H Zhu, H Shu, J Liang, L Luo, J-L Coatrieux, Image analysis by discrete orthogonal Racah moments. Signal Process. **87**, 687–708 (2007).
- 19. H Zhu, H Shu, J Zhou, L Luo, J-L Coatrieux, Image analysis by discrete orthogonal dual Hahn moments. Pattern Recogn. Lett. **28**, 1688–1704 (2007).
- 20. H Zhu, Image representation using separable two-dimensional continuous and discrete orthogonal moments. Pattern Recogn. **45**, 1540–1558 (2012).
- 21. M Krawtchouk, On interpolation by means of orthogonal polynomials. Mem. Agric. Inst. Kyiv **4**, 21–28 (1929).
- 22. Y Xu, On discrete orthogonal polynomials of several variables. Adv. Appl. Math. **33**(3), 615–632 (2004).
- 23. Y Xu, Second order difference equations and discrete orthogonal polynomials of two variables. Int. Math. Res. Not. **8**, 449–475 (2005).
- 24. EG Karakasis, GA Papakostas, DE Koulouriotis, VD Tourassis, Generalized dual Hahn moment invariants. Pattern Recogn. **46**, 1998–2014 (2013).
- 25. L Fei-Fei, R Fergus, P Perona, One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell. **28**, 594–611 (2006).
- 26. JZ Wang, J Li, G Wiederhold, SIMPLIcity: semantics-sensitive integrated matching for picture libraries. IEEE Trans. Pattern Anal. Mach. Intell. **23**, 947–963 (2001).
- 27. Z Wang, AC Bovik, HR Sheikh, EP Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. **13**, 1–14 (2004).
- 28. SMM Rahman, T Howlader, D Hatzinakos, On the selection of 2D Krawtchouk moments for face recognition. Pattern Recogn. 1–32 (2016).

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.