1 Introduction

Congenital heart defects are reported to cause 26.5% of pregnancy-related deaths [45]. It has also been shown that improving maternal health care can prevent more than 80% of maternal fatalities and fetal distress. During pregnancy, women are periodically required to monitor fluctuations in fetal heart rate (FHR) and follow up with doctors regularly [53]. As a result, Electronic Fetal Monitoring (EFM) emerged in the 1960s as a powerful diagnostic technique for the fetal heart rate (FHR). Nowadays, continuous, long-term fetal heart rate surveillance is an essential approach for improving diagnostic accuracy [4]. The methods that can be used to monitor a fetus include cardiotocography (CTG) [35, 36], fetal electrocardiography (fECG) [43], fetal magnetocardiography (fMC) [50], and fetal phonocardiography (fPCG) [39, 48]. The fPCG approach appears to be one of the most effective prenatal methods based on fetal acoustic heart sounds (fHSs) [44]. Fetal phonocardiography (fPCG) is a non-invasive way to detect fetal heart sounds: it records vibrational acoustic signals (fHSs) from the surface of the maternal abdomen [10, 16]. The fPCG signal provides vital information on cardiac murmurs and the fetal heart rate (fHR), which is a crucial determinant of the fetus's health and well-being [7]. Because it is completely non-invasive, fPCG is a very effective instrument for clinical practice. Besides that, the fPCG approach has many further advantages: the examination is inexpensive, and no radiation is emitted to the mother or fetus.

Furthermore, the fPCG device is user-friendly, allowing even non-experts to record signals during the mother's long-term day or night recordings, which can then be analyzed to obtain a more complete picture of the fetus's functionality [8]. Ultrasound and ultrasonographic cardiotocography (CTG), two commonly used fetal monitoring techniques, may be detrimental to the fetus. The fECG can only be performed in an emergency, and it carries a risk of infection. Consequently, fPCG is the most recommended and risk-free alternative approach for recording fetal heart activity [14]. For more accurate diagnostic information, the fPCG signal must be recorded for an extended period of time at a high sampling rate and resolution, which requires a large amount of memory, long transmission times, and wide bandwidth [23]. Clinical care systems and electronic monitoring using Remote Healthcare Monitoring Systems (RHMs) have been gaining importance in recent years [13]. In RHMs, a sensor is placed on the mother's abdomen to record the fPCG signal and wirelessly send it to a smartphone application via Bluetooth. The application receives the fPCG signal and transmits it to a cloud server, where various algorithms are run to detect and diagnose disease. Remote Monitoring Systems are a highly effective instrument for transmitting data remotely and making rapid diagnoses. Unfortunately, these systems still face problems that impair their efficiency. Energy consumption is one of the most important problems faced by RHM sensors: since a sensor runs on batteries, its memory and processing capabilities are limited. Extending battery life by reducing energy usage is a necessary step in assuring continuous signal capture and monitoring [1, 30]. Conserving these sensors' energy is therefore a crucial problem facing RHMs.
Traditionally, energy consumption in sensors is caused by data sensing, data processing, and data transmission. Data transmission is the primary source of energy wastage, since power is consumed whenever data are sent and received [49]. The recorded signal data must therefore be reduced using efficient compression techniques to lower energy consumption. This motivated us to provide solutions to the problem of energy consumption in RHM devices. For this reason, we address fPCG signal compression to decrease the size of the signal being stored or transmitted.

In general, various algorithms have been introduced for compressing biomedical signals [26] such as electrocardiography (ECG) [37], electromyography (EMG) [54], electroencephalography (EEG) [9, 47], and salt-sensitive rat blood pressure signals [5]. Concerning PCG signals, several publications have appeared in recent years. A. Bendifallah et al. present a PCG signal compression method based on dictionary and bitmask techniques, applied to the bitstream produced by Set Partitioning [11]. Hong Tang et al. developed a novel PCG signal compression technique using sound repetition and vector quantization [51]. In a recent paper, Ying-Ren Chien et al. propose a deep convolutional autoencoder for PCG compression: at the encoder stage, seven convolutional layers compress the PCG signals into feature maps, and at the decoder stage another seven convolutional layers decompress the feature maps to obtain the reconstructed signals [15]. To the best of the authors' knowledge, very few publications in the literature deal with the issue of fPCG signal compression, although it has gained prominence in recent years [44].

Transform-based techniques are widely used in medical applications such as image watermarking [40,41,42, 58] and signal compression [26]. Vibha Aggarwal et al. [6] introduced a compression technique for fetal phonocardiography (PCG) signals based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The results show that DWT performed better than DCT at high spatial resolution for fetal PCG signals. In a recent paper by Samit Kumar Ghosh [29], a transform-based methodology for compressing the fetal phonocardiogram (fPCG) signal is proposed. Transform-based techniques such as the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), and the Fast Walsh Hadamard Transform (FWHT) are used to decompose the fPCG signal. The results demonstrate the efficiency of the transform strategies in compressing fPCG signals: FWHT achieves a higher CR, whereas DWT produces good fidelity parameters with a comparable CR.

In recent years, one-dimensional and two-dimensional discrete orthogonal moments have been gaining importance because of their ability to represent signals and images well in various fields. The applications of discrete orthogonal moments include signal and image reconstruction [17, 19, 31], face recognition [46], image classification [2, 12], image watermarking [57], image encryption [56], image compression [24, 33, 55], and signal compression [3, 21, 32]. Charlier moments (CMs) are one type of discrete orthogonal moment [18, 38]. Experimental results have demonstrated Charlier moments' efficacy as feature descriptors.

In our work, we propose a compression technique based on discrete Charlier moments for fPCG signals. Computing CMs requires computing the kernel discrete orthogonal polynomials. Two major problems constrain the high-order computation of Charlier polynomials. The first problem is the fluctuation of polynomial values due to the power, exponential, and factorial functions. To solve it, we use a recurrence relation for the Charlier polynomial computation that no longer depends on the factorial and power functions. The second problem is the propagation of numerical errors when computing high-order Charlier polynomials (CPs), which destroys the orthogonality property of these polynomials. To overcome it, we propose using the Householder orthonormalization method (HOM) to maintain the orthogonality property of high-order CPs. With HOM, the computation of Charlier coefficients at higher orders becomes numerically more stable. This paper includes a number of important contributions, which can be stated as follows:

  • Proposing an efficient fPCG signals compression technique to reduce energy consumption in Remote Healthcare Monitoring Systems.

  • Presenting a numerically stable computation of Charlier moments that avoids numerical error propagation and preserves the orthogonality property of high-order CPs.

The rest of the paper consists of seven sections: In Section 2, DOCMs and CPs will be discussed. Section 3 is devoted to describing the proposed procedure for maintaining the orthogonality property of CPs. The proposed algorithm is introduced in Section 4. Section 5 illustrates the compression performance indicators. The experimental results of this work are summarized in Section 6, while the discussion is reported in Section 7. In Section 8, we present the conclusion of this paper.

2 Material and methods

2.1 Discrete orthogonal Charlier moment

The set of discrete orthogonal one-dimensional (1D) Charlier Moment (CMs) are defined as follows [59]:

$$ C{M}_p={\sum}_{x=0}^{N-1}\kern0.1em {C}_p^{a_1}(x)s(x),\kern0.5em p\le N $$
(1)

where s(x) indicates a 1D signal of size 1 × N, \( {C}_p^{a_1}(x) \) are Charlier polynomials to order p and a1 denotes the parameter of Charlier polynomial, which must be a strictly positive real number (a1 > 0).

The reconstructed signal S(x) is calculated from the inverse transformation of Charlier Moment as follows:

$$ S(x)={\sum}_{p=0}^{P}\kern0.1em C{M}_p{C}_p^{a_1}(x),\kern0.5em x=0,1,2,\dots, N-1 $$
(2)

Using the following matrix form decreases the time and complexity of 1D Charlier moment computations significantly:

$$ C{M}_p={{\mathrm{C}}_p}^Ts=\left[\begin{array}{cccc}{C}_0(0)& {C}_0(1)& \dots & {C}_0\left(N-1\right)\\ {}{C}_1(0)& {C}_1(1)& \dots & {C}_1\left(N-1\right)\\ {}\vdots & \vdots & \vdots & \vdots \\ {}{C}_P(0)& {C}_P(1)& \dots & {C}_p\left(N-1\right)\end{array}\right]\times \left[\begin{array}{c}s(0)\\ {}s(1)\\ {}\vdots \\ {}s\left(\mathrm{N}-1\right)\end{array}\right] $$
(3)

where Cp indicates the Charlier polynomials of order p and s denotes the 1 × N signal vector. The following matrix relation determines the inverse transformation of CMs.

$$ \mathrm{S}=C{M}_p{\mathrm{C}}_p $$
(4)

where CMp and Cp are Charlier moments and Charlier Polynomials, respectively.
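In matrix form, the forward and inverse transforms of Eqs. (3) and (4) are plain matrix products. The following is a minimal NumPy sketch with hypothetical helper names; it assumes the polynomial matrix has orthonormal rows, which is exactly what the HOM correction of Section 3 is meant to guarantee:

```python
import numpy as np

def forward_cm(s, C):
    """Forward Charlier moments (Eq. 3): CM = C @ s.

    s : (N,) signal vector.
    C : (P+1, N) matrix whose row p holds C_p^{a1}(0..N-1),
        assumed orthonormal.
    """
    return C @ s

def inverse_cm(cm, C):
    """Inverse transform (Eq. 4): reconstruct the signal from its
    moments; for orthonormal C this is simply C^T @ CM."""
    return C.T @ cm
```

With a square orthonormal basis the round trip is exact; compression comes from keeping only the first P + 1 rows with P + 1 < N.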

2.2 Computation of Charlier polynomials

The Charlier polynomials of order (p) are formulated using hypergeometric function as follows [18]:

$$ {C}_p^{a_1}(x) {=}_2{F}_0\left(-p,-x;-\frac{1}{a_1}\right) $$
(5)

where x, p = 0, 1, 2, ..., ∞; p is the Charlier polynomial order; a1 is the parameter of the Charlier polynomial (a1 > 0); and 2F0(.) is defined as:

$$ _2{F}_0\left(a,b;z\right)={\sum}_{k=0}^{\infty}\kern0.1em {(a)}_k{(b)}_k\frac{z^k}{k!} $$
(6)

The symbol (a)k is defined as:

$$ {(a)}_k=a\left(a+1\right)\left(a+2\right)\dots \left(a+k-1\right)=\frac{\Gamma \left(a+k\right)}{\Gamma (a)} $$
(7)

Where Γ(.) indicates the gamma function.
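The Pochhammer symbol of Eq. (7) is a rising factorial; a small illustrative implementation (function name is ours) computes it by direct multiplication, avoiding the gamma-function ratio for integer k:

```python
def pochhammer(a, k):
    """Rising factorial (a)_k = a(a+1)...(a+k-1), Eq. (7).

    For k = 0 the empty product gives 1, matching Gamma(a+0)/Gamma(a).
    """
    result = 1.0
    for i in range(k):
        result *= a + i
    return result
```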

The Charlier polynomials satisfy an orthogonal relation of the form:

$$ {\sum}_{x=0}^N\kern0.1em {C}_n^{a_1}(x){C}_m^{a_1}(x)\omega (x)=\rho (n){\delta}_{nm} $$
(8)

Where ω(x) is the weight function of the DOCPs defined as:

$$ \omega (x)=\frac{e^{-{a}_1}{a}_1^x}{x!} $$
(9)

The squared norm of DOCPs is calculated as follows:

$$ \rho (p)=\frac{p!}{a_1^p} $$
(10)

The orthonormalized CPs are defined by the square norm ρ(p) and the weighted function ω(x) as follows:

$$ {C}_p^{a_1}(x)={C}_p^{a_1}(x)\sqrt{\frac{\omega (x)}{\rho (p)}} $$
(11)

Computing the Charlier polynomials directly via Eqs. (6) and (11) is time-consuming and leads to numerical instability. For this reason, researchers have developed recursive calculation methods to determine the polynomials efficiently.

The discrete orthogonal Charlier polynomials CPs of order (p) are presented using recursive relation as follows [18].

$$ {C}_p^{a_1}(x)=\frac{a_1-x+p-1}{a_1}\sqrt{\frac{\rho \left(p-1\right)}{\rho (p)}}{C}_{p-1}^{a_1}(x)-\frac{p-1}{a_1}\sqrt{\frac{\rho \left(p-2\right)}{\rho (p)}}{C}_{p-2}^{a_1}(x) $$
(12)

with \( {C}_0^{a_1}(x)=\sqrt{\frac{\omega (x)}{\rho (0)}}=\sqrt{\frac{e^{-{a}_1}{a}_1^x}{x!}} \)

$$ {C}_1^{a_1}(x)=\frac{a_1-x}{a_1}\sqrt{\frac{\omega (x)}{\rho (1)}}=\frac{a_1-x}{a_1}\sqrt{\frac{e^{-{a}_1}{a}_1^{x+1}}{x!}} $$
(13)

The power, exponential, and factorial functions that lead to numerical fluctuations in the Charlier polynomial coefficients are eliminated in what follows.

The following recurrence formula can be generated from the function ρ(p) given by Eq. (10).

$$ {\displaystyle \begin{array}{c}\rho \left(p+1\right)=\frac{\left(p+1\right)}{a_1}\rho (p)\kern0.75em \mathrm{with}\ \rho (0)=1,\\ {}\frac{\rho \left(p-1\right)}{\rho (p)}=\frac{a_1}{p}\ \mathrm{and}\ \frac{\rho \left(p-2\right)}{\rho (p)}=\frac{a_1^2}{p\left(p-1\right)}.\end{array}} $$
(14)

Using Eqs. (14) and (12), we get:

$$ {C}_p^{a_1}(x)=\frac{a_1-x+p-1}{a_1}\sqrt{\frac{a_1}{p}}{C}_{p-1}^{a_1}(x)-\sqrt{\frac{p-1}{p}}{C}_{p-2}^{a_1}(x) $$
(15)

with

$$ {\displaystyle \begin{array}{c}{C}_0^{a_1}(x)=\sqrt{\frac{e^{-{a}_1}{a}_1^x}{x!}}\\ {}{C}_1^{a_1}(x)=\frac{a_1-x}{a_1}\sqrt{\frac{e^{-{a}_1}{a}_1^{x+1}}{x!}}\end{array}} $$
(16)

We observe that Eq. (15) is a recurrence relation completely independent of the function ρ(p), which eliminates the terms that cause numerical fluctuations in the polynomial coefficients. However, the factorial, exponential, and power functions still appear in the initial conditions. To overcome this obstacle efficiently, the initial conditions \( {C}_0^{a_1}(x) \) and \( {C}_1^{a_1}(x) \) are calculated as follows:

  • for p = 0, we have from Eq. (16):

$$ {C}_0^{a_1}\left(x+1\right)=\sqrt{\frac{e^{-{a}_1}{a}_1^x}{x!}}\sqrt{\frac{a_1}{x+1}} $$
(17)

Based on Eqs. (16) and (17), we get:

$$ {C}_0^{a_1}\left(x+1\right)=\sqrt{\frac{a_1}{x+1}}{C}_0^{a_1}(x)\kern2.5em \mathrm{with}\kern1.5em {C}_0^{a_1}(0)={e}^{\frac{-{a}_1}{2}} $$
(18)
  • for p = 1, we have from Eq. (16):

$$ {C}_1^{a_1}\left(x+1\right)=\frac{a_1-x-1}{a_1}\sqrt{\frac{\omega \left(x+1\right)}{\rho (1)}}=\frac{a_1-x-1}{a_1}\sqrt{\frac{e^{-{a}_1}{a}_1^{x+2}}{\left(x+1\right)!}} $$
(19)

Based on Eqs. (16) and (19), we get:

$$ {C}_1^{a_1}\left(x+1\right)=\frac{a_1-x-1}{a_1-x}\sqrt{\frac{a_1}{x+1}}{C}_1^{a_1}(x)\ \mathrm{with}\ {C}_1^{a_1}(0)=\sqrt{e^{-{a}_1}{a}_1} $$
(20)

Determining the initial conditions with Eqs. (18) and (20) overcomes the numerical fluctuations, because these recurrences no longer depend on the factorial and power functions. During the recursive calculation of CPs, however, round-off errors propagate, and this error propagation leads to the loss of the orthogonality property of the CPs. To solve this problem, we propose in the next section to use the Householder orthonormalization method (HOM), which preserves the orthogonality property of high-order CPs during the recursive calculations.
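The initial conditions and the three-term recurrence above can be sketched as follows. This is an illustrative NumPy implementation (the function name is ours), building the (P + 1) × N polynomial matrix from Eqs. (18), (20), and (15):

```python
import numpy as np

def charlier_matrix(P, N, a1=140.0):
    """Charlier polynomial matrix of shape (P+1, N) built from the
    recurrences of Eqs. (15), (18), and (20), avoiding factorial and
    power evaluations beyond the two seed values.

    Assumes a1 is not an integer in [1, N-2], so the ratio in
    Eq. (20) never divides by zero (the paper uses a1 = 140 with
    block length N = 125, which satisfies this).
    """
    C = np.zeros((P + 1, N))
    x = np.arange(N)
    # Row 0: C_0(0) = exp(-a1/2), then Eq. (18) along x.
    C[0, 0] = np.exp(-a1 / 2.0)
    for xi in range(N - 1):
        C[0, xi + 1] = np.sqrt(a1 / (xi + 1)) * C[0, xi]
    if P >= 1:
        # Row 1: C_1(0) = sqrt(a1 * exp(-a1)), then Eq. (20) along x.
        C[1, 0] = np.sqrt(a1 * np.exp(-a1))
        for xi in range(N - 1):
            C[1, xi + 1] = ((a1 - xi - 1) / (a1 - xi)) \
                * np.sqrt(a1 / (xi + 1)) * C[1, xi]
    # Rows p >= 2: three-term recurrence of Eq. (15) along p.
    for p in range(2, P + 1):
        C[p] = ((a1 - x + p - 1) / a1) * np.sqrt(a1 / p) * C[p - 1] \
               - np.sqrt((p - 1) / p) * C[p - 2]
    return C
```

For moderate orders the rows come out orthonormal to high accuracy; at high orders the round-off propagation described above appears, motivating the HOM correction of the next section.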

3 The proposed procedure for maintaining the orthogonality property of CPs using the householder method

According to the orthogonality property, CPs matrix (Cn, x) satisfies the following relation [20]:

$$ {C_{n,x}}^T{C}_{n,x}={I}_n $$
(21)

Where In is the identity matrix.

To avoid numerical error propagation and keep the orthogonality property of the CPs, we present a method for re-orthonormalizing the columns of the CP matrix using QR decomposition. In QR decomposition, a matrix A = [u1, u2, …, un − 1, un] of size n × m is factored as A = QR, where Q is an n × m matrix with orthogonal columns (QTQ = I) and R is an m × m upper triangular matrix [28]. In our situation, the R matrix contains just the recursive computation errors. The primary purpose of the decomposition is to generate the orthogonal Q (n × m) matrix from Cn, x, which contains round-off errors.

Several methods can be used for QR decomposition, such as the Gram-Schmidt method, the Householder method, and the Givens rotations method. For QR decomposition of matrices, the Householder approach is numerically more stable and faster to execute than Gram-Schmidt and Givens rotations orthogonalization. As a result, the Householder approach is preferred for real-time applications [20]. The proposed algorithm for computing Charlier polynomials (CPs) using the Householder orthonormalization method (HOM) is illustrated in Algorithm 1:
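A sketch of the re-orthonormalization step using NumPy's QR factorization, which wraps LAPACK's Householder routine. The function name and the sign-fixing detail are our assumptions for illustration, not the paper's exact Algorithm 1:

```python
import numpy as np

def reorthonormalize(C):
    """Re-orthonormalize the rows of a CP matrix with Householder QR.

    C : (P+1, N) matrix of recursively computed polynomial values
        whose orthogonality has degraded through round-off.
    The R factor absorbs the accumulated errors; Q's columns are
    orthonormal to machine precision.
    """
    # np.linalg.qr orthonormalizes columns, so factor the transpose.
    Q, R = np.linalg.qr(C.T)
    # Keep each polynomial's sign consistent with the input rows
    # (QR may flip signs; diag(R) records the flips).
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0
    return (Q * signs).T
```

When the input rows are already nearly orthonormal, R is close to the identity and the corrected matrix stays close to the input, changing only the error terms.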

figure a

Algorithm 1 The Proposed Algorithm for computing CPs using HOM

4 The proposed compression algorithm

The proposed algorithm is described as follows: the original fPCG signal (1 × N) is the input, and it is subdivided into smaller blocks of size (1 × n); in the proposed algorithm we selected a block size of (1 × 125). The order (P) of the Charlier moment is selected according to the required compression factor (CF) as follows:

$$ \mathrm{Order}\left(\mathrm{P}\right)=\mathrm{Round}\left[\frac{\mathrm{n}}{100-\mathrm{CF}}\right], $$
(22)

where the compression factor (CF) indicates how much of the signal is compressed. After determining the order (P), the Charlier polynomials (CPs) are computed on the signal block using Eq. (15). Then, the Householder method (HOM) is applied to the Charlier polynomials (CPs) as illustrated in Algorithm 1. The forward Charlier moment is applied to the signal block to extract the discriminative features using Eq. (3). The features of the signal blocks are concatenated to obtain the compressed fPCG signal. In the decompression process, the compressed signal is subdivided into blocks, and the inverse transformation of CMs is applied to each block using Eq. (4). Then, the blocks are concatenated to obtain the reconstructed fPCG signal.

The main steps of the proposed compression algorithm are summarized as:

  • Step1: Input the fPCG signal to be compressed.

  • Step2: Input the desired CF, and Set block size (1 × n) = (1 × 125).

  • Step3: Determine the order (P) of the Charlier polynomials using Eq. (22).

  • Step4: Set the value of the parameter a1 =140.

  • Step5: For each block.

  • Step6: Compute Charlier polynomials of order (p) by Eq. (15).

  • Step7: Apply the Householder method (HOM) on the Charlier polynomials (CPs) using algorithm 1.

  • Step8: Compute the forward Charlier moment by Eq. (3).

  • Step9: Concatenate the features of each block.

  • Step10: Apply the inverse transformation of the Charlier moment to obtain the reconstructed signal by Eq. (4).

  • Step11: Calculate the efficiency of the proposed algorithm by computing PRD, SNR, and PSNR.
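The block/transform mechanics of these steps can be sketched end to end. In the sketch below, the per-block basis is a hypothetical orthonormal polynomial stand-in (built by QR of a Vandermonde matrix) rather than the actual HOM-corrected Charlier matrix, and the function names are ours; it shows only the pipeline structure:

```python
import numpy as np

def poly_basis(P, n):
    """Stand-in orthonormal basis: QR-orthonormalized polynomial
    columns up to degree P, returned as a (P+1, n) row matrix."""
    V = np.vander(np.linspace(-1.0, 1.0, n), P + 1, increasing=True)
    Q, _ = np.linalg.qr(V)
    return Q.T

def compress(signal, basis_fn, n=125, cf=92.0):
    """Steps 1-9: split into (1 x n) blocks, pick order P from the
    compression factor via Eq. (22), project each block."""
    P = int(round(n / (100.0 - cf)))          # Eq. (22)
    C = basis_fn(P, n)                        # (P+1, n), orthonormal rows
    n_blocks = len(signal) // n
    blocks = signal[: n_blocks * n].reshape(n_blocks, n)
    return blocks @ C.T, C                    # (n_blocks, P+1) features

def decompress(features, C):
    """Step 10: inverse transform per block, then concatenate."""
    return (features @ C).ravel()
```

With n = 125 and CF = 92, Eq. (22) gives P = 16, so each 125-sample block is represented by 17 coefficients.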

Figure 1 shows a diagram illustrating the compression and decompression processes. Figure 2 shows a flow chart of the proposed fPCG signal compression algorithm. The proposed algorithm for compressing fPCG signals is summarized in Algorithm 2.

Fig. 1
figure 1

Diagram of the proposed algorithm’s compression and decompression steps

Fig. 2
figure 2

Flow chart of the proposed fPCG signal compression algorithm

figure b

Algorithm 2 The proposed algorithm for compressing fPCG signals

5 Compression performance indicators

The effectiveness and efficiency of the presented compression algorithm are evaluated based on the following performance criteria [22, 25]:

  • Compression ratio (CR)

    $$ \mathrm{CR}=\frac{\mathrm{number}\ \mathrm{of}\ \mathrm{bits}\ \mathrm{for}\ \mathrm{the}\ \mathrm{original}\ \mathrm{signal}}{\mathrm{number}\ \mathrm{of}\ \mathrm{bits}\ \mathrm{for}\ \mathrm{the}\ \mathrm{compressed}\ \mathrm{signal}\ } $$
    (23)
  • Percent Root Mean Square Difference (PRD)

    $$ \mathrm{PRD}=\sqrt{\frac{\sum {\left(s(x)-S(x)\right)}^2}{\sum s{(x)}^2}\kern0.5em }\times 100 $$
    (24)

    where s(x), S(x) are the original and the reconstructed signal, respectively.

  • Quality Score (QS)

    QS is utilized to evaluate the overall performance of a compression technique. A high QS number indicates higher compression performance, as described below:

    $$ \mathrm{QS}=\frac{\mathrm{CR}}{\mathrm{PRD}} $$
    (25)
  • Signal to Noise Ratio (SNR)

    $$ \mathrm{SNR}=10\times {\log}_{10}\left(\ \frac{\sum {\left(s(x)-\overline{s(x)}\ \right)}^2}{\sum {\left(s(x)-S(x)\ \right)}^2}\right) $$
    (26)

    where \( \overline{s(x)} \) is the mean value of s(x).

  • Peak Signal to Noise Ratio (PSNR)

    $$ \mathrm{PSNR}=20\times {\log}_{10}\frac{\mathit{\max}\left|s(x)\right|}{\sqrt{MSE}} $$
    (27)

    where max|s(x)| is the maximum point in s(x), and MSE is the mean square error between the original signal s(x) and the reconstructed signal S(x).

  • Compression Speed

    $$ \mathrm{Compression}\ \mathrm{Speed}=\frac{\mathrm{uncompressed}\ \mathrm{bits}}{\mathrm{seconds}\ \mathrm{to}\ \mathrm{compress}} $$
    (28)
    $$ \mathrm{Decompression}\ \mathrm{Speed}=\frac{\mathrm{uncompressed}\ \mathrm{bits}}{\mathrm{seconds}\ \mathrm{to}\ \mathrm{decompress}} $$
    (29)
  • Computational efficiency (CE)

    Computational efficiency (CE) is the ratio between the compression ratio and the processing time. A high CE value indicates superior computational efficiency, i.e., a high CR achieved with a lower compression time.

    $$ \mathrm{CE}=\frac{\mathrm{CR}}{\mathrm{Compression}\ \mathrm{Time}} $$
    (30)
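The fidelity metrics above translate directly into code; a minimal NumPy sketch follows (the function names are ours):

```python
import numpy as np

def prd(s, r):
    """Percent Root-mean-square Difference, Eq. (24)."""
    return 100.0 * np.sqrt(np.sum((s - r) ** 2) / np.sum(s ** 2))

def snr(s, r):
    """Signal-to-Noise Ratio in dB, Eq. (26)."""
    return 10.0 * np.log10(np.sum((s - s.mean()) ** 2)
                           / np.sum((s - r) ** 2))

def psnr(s, r):
    """Peak Signal-to-Noise Ratio in dB, Eq. (27)."""
    mse = np.mean((s - r) ** 2)
    return 20.0 * np.log10(np.max(np.abs(s)) / np.sqrt(mse))

def quality_score(cr, prd_value):
    """Quality Score, Eq. (25): compression ratio over PRD."""
    return cr / prd_value
```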

6 Experimental results

The Fetal PCG Database was used to test the effectiveness of the proposed compression algorithm [34]. It contains 26 fPCG signals from pregnant women in the final months of physiological singleton pregnancies (weeks 31–40). The recorded samples were 1 minute (60 seconds) in duration, and the data were digitized at a sampling rate of 333 Hz with an 8-bit ADC. All of the women were healthy and between the ages of 25 and 35.

6.1 Results of the proposed algorithm

The compression performance of the proposed algorithm was evaluated using the Fetal PCG Database in terms of CR, PRD, SNR, PSNR, and QS. Table 1 summarizes the results obtained by the proposed algorithm on the whole (26-signal) Fetal PCG dataset. As Table 1 shows, the proposed method yields excellent results for all signals in the dataset: it produces large compression ratios (CR), excellent reconstruction quality (PRD), and high SNR and PSNR. For the fetal_PCG_p07_GW_38m signal in particular, the proposed algorithm achieves a large compression ratio (CR = 32) with excellent reconstruction quality (PRD = 0.17) and PSNR (PSNR = 46.01). The fetal_PCG_p07_GW_38m, fetal_PCG_p11_GW_37m, and fetal_PCG_p23_GW_38m signals have the best Quality Scores (QS) of 188.24, 168.42, and 145.45, respectively (a high QS means that the CR is high and the distortion rate is low). The proposed algorithm has an average performance of 18.33, 0.21, 48.85, 68.86, and 90.88 in terms of CR, PRD, SNR, PSNR, and QS, respectively, as depicted in Table 1. Figures 3, 4, 5 and 6 depict compressed and decompressed fPCG signals using the proposed method. Visual examination reveals that the reconstructed signals closely resemble the original signals.

Table 1 Compression performance indicators of the proposed algorithm for fPCG signals
Fig. 3
figure 3

Compression of fetal_PCG_p01_GW_36m signal with CR = 16 and PRD = 0.19

Fig. 4
figure 4

Compression of fetal_PCG_p07_GW_38m with CR = 32 and PRD = 0.17

Fig. 5
figure 5

Compression of fetal_PCG_p21_GW_39m with CR = 16 and PRD = 0.21

Fig. 6
figure 6

Compression of fetal_PCG_p26_GW_36m with CR = 10.6 and PRD = 0.13

6.2 Results of comparisons between the proposed technique and existent compression algorithms

For the sake of validating the efficiency of the proposed approach, a comparative investigation is performed with existing compression algorithms [6, 29] in terms of CR, PRD, and QS. The results of comparing the proposed approach with transform-based techniques such as the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), and the Fast Walsh Hadamard Transform (FWHT) are summarized in Table 2. As depicted in Table 2, the proposed method outperforms DCT and DWT in all performance indicators for every fPCG signal in the dataset. The proposed method has the highest QS because it combines a good CR with the lowest PRD. Compared to FWHT, the proposed method may not achieve the highest CR, but it achieves excellent PRD and the best QS.

Table 2 Compression performance comparison of the proposed algorithm with existing algorithms

Consequently, the proposed approach produces the highest-quality reconstructed signal compared to all other known techniques, which ensures that the reconstructed signal will contain all relevant diagnostic information. Figure 7 shows a comparison of the proposed approach with DCT, DWT, and FWHT in terms of SNR, PRD, CR, and QS, each of them independently. FWHT provides good results in CR, but the proposed algorithm has a much higher QS and better reconstruction quality (PRD). The graphs in Fig. 7 demonstrate the superiority of the proposed method over all other known techniques in SNR, PRD, and QS.

Fig. 7
figure 7

Performance comparison of the proposed algorithm with DCT, DWT, and FWHT in terms of (a) SNR, (b) PRD, (c) CR, and (d) QS

6.3 Compression speed and computational efficiency of the proposed algorithm

Table 3 summarises the compression time, compression speed, and computational efficiency of the proposed approach for various fPCG signals extracted from the dataset used in this paper. The proposed algorithm's compression time and compression speed are just as important as its compression performance: the compression speed of the technique is crucial for wireless biosensors in RHMs. As seen in Table 3, compression speed and computational efficiency increase as compression time decreases.

Table 3 The Compression time, Compression speed, and computational efficiency of the proposed algorithm

6.4 Energy consumption evaluation

This section evaluates the energy consumption of the proposed algorithm. The proposed compression algorithm is implemented on an STM32F429 Discovery development board. The STM32F429 Discovery system comprises an ARM Cortex-M4-based 32-bit microcontroller from STMicroelectronics. It operates at a maximum frequency of 180 MHz, which this study utilizes for data compression [27]. The development board offers direct memory access (DMA) so that fPCG measuring devices can access memory directly. It is connected to a Bluetooth module with UART as the compressed-data output. A Module-MD08R-C2A Bluetooth module supporting baud rates from 1.2 k to 921.6 k bps is employed. The Bluetooth module is linked to the development board through the UART interface, and the baud rate is set to 230,400 bps (28,800 bytes/s).

  • Wearable devices are in a “wake-up” state when processing or transmitting data; otherwise, they are in a “sleep” state.

  • While a wearable device is in ‘wake-up’ mode, it drains battery power.

  • Decreasing the transmitting data size reduces data transmission time, consequently reducing a ‘wake-up’ mode.

  • Battery consumption is reduced by decreasing a ‘wake-up’ mode for the wearable device.

  • According to the fPCG dataset mentioned above, the size of fetal_PCG_p01_GW_36m is (333 × 8 × 60) = 159,840 bytes.

  • Given the Bluetooth link capacity, the transmission time of the signal is (159,840 / 28,800) = 5.55 seconds.

  • Table 3 indicates that the signal is compressed with CR = 16 in a compression time of 1.47 seconds, which means that the size of the compressed signal is 9,990 bytes. The transmission time of the compressed signal is then 0.347 seconds.

Conditions            Compression Time (s)   Transmission Time (s)   Total time (s)
Without compression   –                      5.55                    5.55
With compression      1.47                   0.347                   1.817

From the previous data, we note that the uncompressed signal is transmitted in 5.55 seconds, whereas with compression the signal is compressed and transmitted in 1.47 + 0.347 = 1.817 seconds. Therefore, the proposed compression algorithm decreases the overall work time by about 5.55 − 1.817 ≈ 3.73 s.
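The timing arithmetic can be checked with a few lines (the figures are taken from the text; the variable names are ours):

```python
# Assumed figures from the text: a 159,840-byte signal, a
# 28,800 byte/s Bluetooth link, CR = 16, 1.47 s compression time.
signal_bytes = 159_840
link_rate = 28_800                               # bytes per second
cr, t_compress = 16, 1.47

t_raw = signal_bytes / link_rate                 # uncompressed send time
t_send = signal_bytes / cr / link_rate           # compressed send time
t_total = t_compress + t_send                    # compress + send
saving = t_raw - t_total                         # time saved per signal
```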

7 Discussions

The experiments and results above demonstrate that the proposed algorithm outperforms recent previous algorithms. Various performance metrics were tested in the experiments. First, we considered the compression ratio and the quality of the compressed signals. Second, we considered the compression time, compression speed, and computational efficiency. The last aspect is decreasing energy consumption. The tabular and graphical results demonstrate the efficacy of the suggested method in all of these performance measurements. The good compression ratio and compressed-signal quality of the proposed algorithm come from using the discrete Charlier moment as a feature descriptor together with the Householder orthonormalization method (HOM). HOM preserves the orthogonality property of the Charlier moment at high order, so numerical error propagation does not occur when computing high-order Charlier moments. As a result, the proposed algorithm can achieve a high compression ratio and high compressed-signal quality. The superiority in compression time, compression speed, and computational efficiency is due to using a recurrence relation for the Charlier polynomial computation that no longer depends on the factorial and power functions. In addition, using HOM speeds up the compression, since among QR decomposition methods it offers the best combination of signal reconstruction quality and execution time [27].

The average values of the performance metrics are CR = 18.33, PRD = 0.21, SNR = 48.85, PSNR = 68.86, and QS = 90.88. Even though the proposed algorithm did not provide the highest CR, it did provide the best PRD, SNR, PSNR, and QS. This guarantees that the reconstructed signal has all relevant diagnostic information, which is important when compressing medical signals since losing diagnostic information could lead to a wrong diagnosis. Additionally, the superiority of the proposed approach in terms of compression time, compression speed, and computational efficiency makes it more efficient at decreasing energy consumption.

The superiority in the performance of the proposed approach in compressing fPCG signals can be attributed to the following factors:

  • Charlier moments have orthogonal basis functions. Each Charlier moment coefficient can therefore capture distinct and unique components of the signal with no information redundancy.

  • Depending on the order value, the basis functions of Charlier moments can extract diverse forms of information from the signal. This is made possible by using the Householder orthonormalization method (HOM) to maintain the orthogonality property of the Charlier polynomials.

  • Moments generated from discrete orthogonal polynomials are effective at compressing signals. This is because they have a higher efficiency of energy compression for common signals. If the discrete orthogonal moment is correctly specified, the signal’s energy is concentrated on a small fraction of the moment coefficients; these coefficients are then saved and used to reconstruct the signal.

  • The power of Charlier moments to extract both local and global features.

  • The higher compression speed and computational efficiency result from using recursive formulas that compute each polynomial from lower-order polynomials instead of computing it directly.

There has been much advancement in compression techniques; however, many obstacles remain. Computational complexity and memory management are two important factors in the development of compression techniques. Most current methods have high computational complexity, making them unsuitable for real-time applications like Remote Monitoring Systems. With the use of a compression approach, memory management becomes more complicated: when the amount of memory needed for the compression method exceeds what is available on the device, efficient compression cannot be performed. Even if certain compression approaches obtain a higher CR, they do not manage memory effectively. So another interesting research topic is the study of memory management in compression algorithms.

8 Conclusion

The present article proposes a new compression technique for fPCG signals to reduce energy consumption in Remote Healthcare Monitoring Systems (RHMS). The proposed compression algorithm employs the Discrete Orthogonal Charlier Moment (DOCM) as a feature extractor due to the orthogonality of its basis functions. We determined and eliminated the terms responsible for the numerical fluctuations in the values of the Charlier polynomials by using a recurrence relation for the construction of CPs that is independent of the factorial and power functions. To preserve the orthogonality property of the Charlier polynomials, the Householder orthonormalization method (HOM) is used. We evaluated the efficiency of the proposed algorithm on the Fetal PCG Database, which contains 26 fPCG signals from pregnant women in the final months of physiological singleton pregnancies (weeks 31–40). We used several metrics to compare the proposed algorithm to other well-known compression algorithms. The evaluation of the compression algorithm covers three aspects: first, the compression ratio together with the reconstruction quality; second, the compression speed and computational efficiency; third, the energy consumption. The comparison between the proposed algorithm and recent previous algorithms demonstrates that the proposed algorithm outperforms those used in previous works in all three aspects. In addition, the evaluation of energy consumption indicates that the proposed compression algorithm decreases the overall work time of devices in RHMS. In terms of the three aspects mentioned, the proposed algorithm achieves a significant compression ratio, respects the quality of the reconstructed signal, and decreases energy consumption. As a result, it is well-suited for wearable devices, long-term data storage, massive databases, and RHMs.

In future work, we may use new discrete orthogonal moments as feature extractors to improve the proposed algorithm for fPCG compression. Moreover, using various orthonormalization methods to maintain the orthogonality property in the high-order computation of Charlier polynomials could also increase the effectiveness of the proposed algorithm.