1 Introduction

The past few years have seen a rise in Artificial Intelligence (AI) services and applications, ranging from audio and video surveillance to personal assistants and recommendation engines, thanks to advances in deep learning [1]. As mobile computing and the Internet of Things (IoT) have become pervasive, millions of IoT and mobile devices have been connected to the Internet, producing enormous volumes of data at the network edge [2]. Motivated by this trend, it is imperative to push the boundaries of AI to the edge of the network to fully realize the potential of edge big data [3]. It is acknowledged that Edge Computing (EC), a new paradigm that shifts computing services and workloads from the network core to the edge, is a promising way to meet this demand [4]. Privacy as a social and legal issue has historically been addressed by social scientists, philosophers, and lawyers [5]. However, the widespread use of software programs within fundamental e-services necessitates additional technological safeguards to protect people's electronic privacy [6]. Today, maintaining privacy involves minimizing the amount of data gathered and stored and erasing it as soon as it is no longer needed [7]. Most e-services available today rely on stored data to identify customers, their preferences, and their past transaction history. Integrating such data, however, frequently results in a privacy violation [8].

A significant step toward the next revolution in the IoT space is believed to be the intelligent edge, i.e., the combination of EC with digital innovations such as AI, data analytics, machine learning (ML), big data, and cloud computing [9]. To realize real-time data analytics with minimal delay, Edge AI (EAI) is used for intelligently researching, gathering, storing, and processing massive amounts of IoT data [10]. Furthermore, intelligent edge systems can handle erratic and imprecise problems such as mobility, security, and dependability, in addition to lowering bandwidth usage and speeding up response times [11]. Soft Computing (SC), which aims to provide tractable, reliable, and affordable solutions, encompasses numerous methods that can facilitate the shift to intelligent edge systems. Smart speakers, head-mounted displays, car sensors, drones, security cameras, business and consumer robots, PCs, smartphones, and tablets are expected to be the most numerous edge devices driven by AI and SC [12, 13]. SC approaches can handle various EC issues, including mobility, data accumulation, interoperability, and security, by combining algorithms, approximate models, and randomized algorithmic behavior [14]. The use of SC methods is becoming more popular across academic and industrial domains. One of their most noteworthy features is the capacity to make decisions from vague or inaccurate information [15, 16]; similar circumstances frequently arise in requirements engineering. Through data processing at the source, edge-based SC can offer increased privacy and security in IoT [17]. The most challenging tasks in data mining are high-dimensionality datasets, incomplete and non-standard data, changing data, overfitting, integration, mixed media data, and knowledge extraction [18]. Due to their dispersed nature and frequent placement in public or semi-public areas, edge AI devices are vulnerable to physical and cyberattacks. Furthermore, the handling of sensitive data by these devices raises the possibility of data breaches. Concerns about data leaks and exploitation are prevalent when using AI systems: because an AI model needs large volumes of personal data to be trained, this data, if it ends up in the wrong hands, can be exploited for malicious purposes such as identity theft.

Cloud computing suffers from limitations such as computational complexity and high delay, so EC has overtaken it for efficient and fair allocation of resources such as power and battery lifetime in IoT-based industrial applications [19]. However, for power-hungry, delay-intolerant portable devices with short battery lifetimes, classical approaches to fair resource allotment remain inadequate [20]. In this paper, we therefore design an Ant Lion Optimization assisted Diffie–Hellman-based Twofish cryptography (ALO-DHT) scheme to enhance security and privacy. The developed model reduces computational cost and latency while providing better security. Its primary goal is to establish a shared secret without sending it over the Internet; the two parties can then encrypt and decrypt data or conversations using symmetric cryptography under the shared key, protecting the data from attacks and unlawful access. The main motive is to enable effective data transmission in edge AI networks with enhanced security and privacy. To this end, the developed model preserves security and privacy during data transmission using an optimized hybrid cryptographic algorithm built from soft computing and cryptographic techniques. The key contributions of the developed model are detailed below:

  • Design an optimized hybrid cryptographic algorithm for preserving security and privacy during data transmission.

  • Combines the Ant-lion optimization with the hybrid Diffie–Hellman-based Twofish approach (ALO-DHT) for enhancing security.

  • An edge AI environment is created with edge computing devices and AI-enabled sensors.

  • To check the malicious data entry in the created environment, an autoencoder neural network-based attack prediction module was designed.

  • Further, the processed data is encrypted using the proposed cryptography to ensure privacy and security during transmission.

  • The Diffie–Hellman approach facilitates the secure key exchange process between the source and destination devices, while the Twofish algorithm performs encryption using the shared key.

  • Finally, the performance of the developed model is validated against other prevailing techniques in terms of accuracy, recall, time, delay, precision, and energy consumption.

The organization of the paper is as follows: Sect. 2 details the literature survey, Sect. 3 describes the proposed methodology, Sect. 4 details the results and discussion of the developed model, and Sect. 5 ends with a conclusion.

2 Related Works

Shuiguang et al. [21] created a new computing paradigm that divides Edge Intelligence (EI) into AI for edge and AI on edge: the former concentrates on offering more optimal solutions to significant EC problems with the aid of popular and effective AI technologies, while the latter investigates how to carry out the entire AI model-building process, that is, model training and inference, on the edge. Taking a comprehensive viewpoint, the paper offers insights into this emerging interdisciplinary field, and its discussion of the fundamental ideas and the research roadmap gives prospective EI researchers the necessary background knowledge.

Meanwhile, AI-based precise and intelligent resource management has gained significant attention, particularly in industrial applications. Integrating AI at the edge increases the availability and computation speed of IoT-based devices in industry. A forward central dynamic and available approach (FCDAA) was proposed by Hassan et al. [22] by adapting the running times of the sensing and transfer processes in portable IoT devices, together with a data reliability model for edge AI-powered IoT devices. Two key cases, product processing and noise/fault diagnosis, are presented for appropriate monitoring of industrial platforms. According to an experimental test-bed, the suggested FCDAA improves energy utilization and battery lifetime.

Sheikh et al. [23] suggest a novel EI computing system for privacy-preserving image classification in the IoT. In particular, an autoencoder is trained independently at each edge device, without involving the server, and the edge server receives only the latent vectors for classification training. By sending latent vectors rather than raw images, the suggested method protects end users' confidentiality and avoids a high communication cost. The experimental results show how well images are classified under different design parameters, such as the autoencoder's data compression ratio and complexity.

Because of distributed denial of service (DDoS) attacks, the improper use of cloud services and resources has become a regular problem in day-to-day operations. Since DDoS attacks are highly sophisticated and expanding quickly, identifying and implementing countermeasures is difficult. Mugunthan [24] created autonomous detection with an SC method to counteract low-rate DDoS assaults on cloud infrastructure. The suggested approach uses a Random Forest to classify detected attacks, and a Hidden Markov Model is developed to observe network flow. The method is assessed to gauge the level of performance enhancement.

The utilization of SC techniques facilitates the identification of malicious activities within social networks; SC dominates research in this field because of its robustness and low-cost solutions for identifying undesired activities. Sathish [25] employs a combination of SC methodologies and an improved SC method to identify intrusions that cause security problems within social networks. The suggested approach uses an improved SC procedure involving feature reduction, pre-processing, clustering, and classification to create a security model that is more successful than conventional computations at recognizing social network misuse.

Homomorphic encryption is an encoding method that helps protect data that must be stored in the cloud. The Modified Particle Swarm Optimization Algorithm, which Adwait et al. [26] presented, is a unique encryption technique that helps generate the optimal key while reducing resource consumption and encryption execution time. The research aims to improve the existing encryption technique for faster encryption by merging the Ant Lion Optimization algorithm with the PSO algorithm; additionally, the RandomWalkOfAnts concept is utilized to identify the ideal encryption key. The suggested research is carried out in MATLAB, and the devised technique improved execution time and resource utilization.

Hybrid Rivest–Shamir–Adleman (RSA) and ALO algorithms were created by Khalid et al. [27] to increase the safety of wireless sensor networks. Over the past few decades, two significant advancements have been made in the specifications for information security inside a wireless sensor network. The first employs hash functions each time Rebalanced RSA is used to encrypt a message; the second, also investigated, involves transforming the ciphertext into binary format. A new index, the “Threat Severity Index” (TSI), has been created to assess the overall security of each particular system. Furthermore, the decryption performance of RSA-type cryptosystems is investigated and compared against other RSA-type cryptosystems.

Performance and scalability problems may arise with current decentralized AI technology because each transaction needs to be verified and documented on several nodes. Traditional blockchain architectures typically require sacrificing security, decentralization, or both to achieve scale, and malicious nodes on the network can jeopardize the security of the stored data. Decentralized storage systems also depend largely on the network infrastructure for their performance; data stored on the network may not always be accessible during a network interruption. Common edge AI defenses include data encryption using well-known and robust algorithms such as AES or secure channels such as SSL/TLS and VPNs; model protection using homomorphic encryption, federated learning, or secure multiparty computation; antivirus software and firewalls; device security through regular updates; and network monitoring.

The summary of the literature survey is detailed in Table 1.

Table 1 Summary of the literature survey

3 Proposed ALO-DHT for Secure Transmission in Edge AI Networks

In this work, we designed an optimized hybrid cryptographic algorithm for preserving security and privacy during data transmission. The proposed method combines Ant Lion Optimization with the hybrid Diffie–Hellman-based Twofish approach (ALO-DHT). Initially, an edge AI environment is created with edge computing devices and AI-enabled sensors. To check for malicious data entry in the built environment, an autoencoder neural network-based attack prediction module was designed. Further, the processed data is encrypted using the proposed cryptography to ensure privacy and security during transmission. In the presented cryptographic scheme, the Diffie–Hellman approach facilitates the secure key exchange process between the source and destination devices, while the Twofish algorithm performs encryption using the shared key.

The ALO technique in the presented framework optimizes the key generation phase, thereby reducing energy consumption and processing time. The proposed methodology is shown in Fig. 1.

Fig. 1 Proposed methodology of the ALO-DHT for secure transmission in the edge AI network

3.1 Edge AI Network

The proposed framework commences with the collection of the dataset from the edge AI network. An edge AI network is a distributed computing infrastructure that combines the efficiency of edge computing (EC) with artificial intelligence; it is created by deploying AI approaches on the edge devices of the IoT network. Commonly used edge devices include sensors and edge-computing devices such as routers and switches. This architecture offers real-time data processing and analysis. In the proposed work, the data was collected from a medical edge AI network, which includes different kinds of sensors and EC devices for collecting details from patients. This network ensures real-time monitoring of patient health and assists in healthcare decision-making. The data collected from the network includes electronic health records (EHRs) covering blood pressure, cholesterol, and blood glucose levels, as well as body temperature, oxygen saturation, heart rate, and respiratory rate. This collected information is embedded into a comprehensive database. Since this database was collected using IoT devices, sensors, EC devices, etc., it is prone to security threats; protecting the data from cyber threats is therefore essential for effective decision-making in healthcare units.
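For concreteness, the sketch below shows one plausible shape for such a record as it might leave an edge device. The field names, units, and simulated value ranges are illustrative assumptions, not the schema of the dataset actually used in this work.

```python
from dataclasses import dataclass, asdict
import random
import time

@dataclass
class EdgeHealthRecord:
    """One hypothetical EHR sample emitted by an edge device (illustrative fields only)."""
    device_id: str
    timestamp: float
    systolic_bp: float        # mmHg
    diastolic_bp: float       # mmHg
    blood_glucose: float      # mg/dL
    cholesterol: float        # mg/dL
    body_temperature: float   # degrees Celsius
    oxygen_saturation: float  # SpO2, percent
    heart_rate: float         # beats per minute
    respiratory_rate: float   # breaths per minute

def read_sensors(device_id: str) -> EdgeHealthRecord:
    """Stand-in for a real sensor read; values are randomly simulated."""
    return EdgeHealthRecord(
        device_id=device_id,
        timestamp=time.time(),
        systolic_bp=random.gauss(120, 10),
        diastolic_bp=random.gauss(80, 8),
        blood_glucose=random.gauss(100, 15),
        cholesterol=random.gauss(180, 25),
        body_temperature=random.gauss(36.8, 0.4),
        oxygen_saturation=random.gauss(97.5, 1.0),
        heart_rate=random.gauss(72, 8),
        respiratory_rate=random.gauss(16, 2),
    )

record = read_sensors("edge-node-01")
print(asdict(record))
```

Records of this kind form the stream that the attack prediction module of Sect. 3.2 screens before encryption.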

3.2 Autoencoder Neural Network

An autoencoder is an artificial neural network that uses unsupervised learning to capture the patterns and interconnections in data. In the proposed work, the autoencoder was employed to identify and eliminate malicious data entries in the edge AI network [28]. The architecture of an autoencoder consists of an encoder component and a decoder component, giving two phases: encoding and decoding. During encoding, the network compresses the data into a lower-dimensional representation, capturing the important features and patterns present in the input. During decoding, the network is trained to reconstruct the input data. In the training phase, the network adjusts its parameters to reduce the deviation between the input data and the reconstructed data, effectively learning the patterns of the data. Initially, we use only normal (non-malicious) data for both encoding and decoding, so that the system learns the patterns of normal data. In the attack detection phase, we compare new data from the edge AI network against the patterns identified by the autoencoder during training: if the deviation between the new input and its reconstruction is high, the input is identified as an attack or malicious data. Training on non-malicious data alone enables the system to flag any input that deviates from normal behavior, irrespective of the attack type. The encoder module consists of multiple layers of neurons that map the input data into a lower-dimensional representation. The encoder is expressed in Eq. (1).

$$Y = A_{ct} \left( {w_{gt} X + B_{s} } \right)$$
(1)

where \(Y\) indicates the encoded data representation, \(A_{ct}\) denotes the activation function, \(w_{gt}\) represents the weight matrix, \(B_{s}\) refers to the bias vector, and \(X\) defines the input data. While compressing the data into the lower-dimensional latent space, the encoder intends to capture the important features and data patterns; in malicious data prediction, it captures the features that distinguish normal from malicious data. The decoder, in turn, takes the latent representation as input and aims to reconstruct the original data. The decoder module performs the reverse operation of the encoder, expressed in Eq. (2).

$$X^{\prime } = A_{ct}^{\prime } \left( {w_{gt}^{\prime } Y + B_{s}^{\prime } } \right)$$
(2)

where \(A_{ct}^{\prime }\) denotes the decoder's activation function, \(w_{gt}^{\prime }\) represents the weight matrix of the decoder, \(B_{s}^{\prime }\) refers to the bias vector of the decoder, and \(X^{\prime }\) defines the reconstructed data. The decoder performs a reconstruction function and produces data that resembles the input. Malicious data entries are identified by evaluating the reconstruction error, measured using Eq. (3).

$$L_{fn} = \frac{1}{m}\sum\nolimits_{i = 1}^{m} {\left( {X_{i} - X_{i}^{\prime } } \right)^{2} }$$
(3)

where \(L_{fn}\) defines the reconstruction error and \(m\) denotes the number of data samples. The system predicts malicious data based on the reconstruction error: if the error exceeds a threshold value, the input is flagged as malicious. Once malicious data is detected, the scheme eliminates it from the network to provide maximum security and privacy to the data.
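The detection logic of Eqs. (1)–(3) can be sketched as follows. This is a minimal NumPy illustration, not the implementation used in the experiments: the single tanh hidden layer, learning rate, epoch count, and the 99th-percentile threshold are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, hidden=4, lr=0.1, epochs=500):
    """Train a one-hidden-layer autoencoder on normal data only.
    X: (n_samples, n_features), assumed scaled to roughly [-1, 1]."""
    n, d = X.shape
    W = rng.normal(0, 0.1, (hidden, d)); b = np.zeros(hidden)    # encoder, Eq. (1)
    W2 = rng.normal(0, 0.1, (d, hidden)); b2 = np.zeros(d)       # decoder, Eq. (2)
    for _ in range(epochs):
        Y = np.tanh(X @ W.T + b)          # encode: Y = A(WX + B)
        Xr = Y @ W2.T + b2                # decode: X' = A'(W'Y + B'), linear A'
        E = (Xr - X) / n                  # gradient of the mean squared error
        dW2 = E.T @ Y; db2 = E.sum(axis=0)
        dZ = (E @ W2) * (1 - Y**2)        # backprop through tanh
        dW = dZ.T @ X; db = dZ.sum(axis=0)
        W -= lr * dW; b -= lr * db
        W2 -= lr * dW2; b2 -= lr * db2
    return W, b, W2, b2

def reconstruction_error(X, params):
    W, b, W2, b2 = params
    Xr = np.tanh(X @ W.T + b) @ W2.T + b2
    return ((X - Xr) ** 2).mean(axis=1)   # per-sample error, Eq. (3)

# Train on normal traffic only; flag anything that reconstructs poorly.
normal = rng.normal(0, 0.3, (500, 8))
params = train_autoencoder(normal)
threshold = np.percentile(reconstruction_error(normal, params), 99)

suspect = rng.normal(2.0, 0.5, (5, 8))    # out-of-distribution "malicious" samples
print(reconstruction_error(suspect, params) > threshold)  # expected: mostly True
```

Because training sees only normal data, any input whose reconstruction error exceeds the threshold is flagged, regardless of which attack produced it.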

3.3 Cryptography

After attack prediction and elimination, the data is passed to the cryptography block. Cryptography is the process of securing data during transmission by converting or encoding the raw data into another form. In this work, we propose a cryptographic algorithm that combines Ant Lion Optimization, Diffie–Hellman, and the Twofish algorithm. Typically, an edge AI network collects information from different sources such as sensors, EC devices, and IoT devices, and securely storing this information is challenging. The proposed work aims to secure the data during both transmission and storage using the developed mechanism. The ALO approach integrated into the cryptographic scheme optimizes the key exchange process by optimally selecting the initial parameters of Diffie–Hellman cryptography. In the Diffie–Hellman approach, parameters such as the prime modulus and the primitive root directly influence the key generation process, and their choice plays an important role in determining key strength; the optimal selection of these parameters is therefore significant for generating strong keys. The ALO process aims to maximize key strength by refining and fine-tuning the DH parameters. The key generated using the optimized DH mechanism is used for data encryption with the Twofish algorithm; a stronger key makes it more difficult for third parties to decode the data during transmission. Since ALO is an iterative process, the parameter values change at each iteration, ensuring adaptability and improved security in the network: at each iteration, the ALO selects parameter values that increase the strength of the key. Thus, the hybrid cryptographic mechanism resolves the security and privacy challenges associated with the edge AI network.

3.3.1 Diffie–Hellman Key Exchange

The DH key exchange is a cryptographic mechanism that enables two parties (source and destination) to establish a shared secret key securely over an insecure communication medium. The Twofish algorithm then uses this shared secret key for data encryption. In the proposed work, the DH mechanism utilizes the optimal parameters from the ALO for key generation, ensuring adaptability and high security in the network. In addition, it allows numerous participants to share keys without increasing the computational complexity, thus offering scalability in the network.

In the key exchange process, both parties (sender and receiver) agree on the public parameters and each selects a private key. Using the selected parameters, they independently generate public keys and exchange them; each party then computes the shared secret key from the received public key and its own private key. This key is integrated into the Twofish algorithm for encryption, providing improved security during data transmission. In the DH mechanism, the source and the destination initially agree to establish a key exchange over the communication medium [29]. Then, both agree on two public parameters \(u\) and \(v\), in which \(u\) is a large prime number and \(v\) is a primitive root modulo \(u\). Each participant selects a private key, a random integer that is kept secret. They then determine their public keys and share them over the insecure communication medium. The calculation of the public keys of both parties is expressed in Eqs. (4) and (5).

$$P_{ka} = v^{P_{ra}} \,\bmod \,u$$
(4)
$$P_{kb} = v^{P_{rb}} \,\bmod \,u$$
(5)

where \(P_{ka}\) indicates the public key of the sender, \(P_{ra}\) represents the private key of the sender, \(P_{kb}\) denotes the public key of the receiver, and \(P_{rb}\) defines the private key of the receiver. The public keys \(P_{ka}\) and \(P_{kb}\) are exchanged over the communication channel. Then, both parties calculate the same secret key, as expressed in Eqs. (6) and (7).

$$S_{k} = P_{ka}^{P_{rb}} \,\bmod \,u$$
(6)
$$S_{k} = P_{kb}^{P_{ra}} \,\bmod \,u$$
(7)

where \(S_{k}\) is the shared secret key, which is utilized by the Twofish approach for the data encryption operation; Eq. (6) is computed by the receiver and Eq. (7) by the sender, and both yield the same value. In the Diffie–Hellman key exchange, selecting the initial parameters \(u\) and \(v\) is crucial for effective secret key generation; hence, the ALO algorithm is employed for their optimal selection.
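A minimal sketch of the exchange in Eqs. (4)–(7) is given below. The modulus and generator are toy values chosen only for readability; real deployments would use a standardized group of 2048 bits or more, and the sketch does not verify that \(v\) is actually a primitive root. It is illustrative, not secure.

```python
import secrets

# Toy Diffie-Hellman exchange following Eqs. (4)-(7).
u = 0xFFFFFFFB  # public prime modulus (small, illustrative only; INSECURE)
v = 5           # public generator (assumed primitive root modulo u)

P_ra = secrets.randbelow(u - 2) + 1   # sender's private key
P_rb = secrets.randbelow(u - 2) + 1   # receiver's private key

P_ka = pow(v, P_ra, u)                # Eq. (4): sender's public key
P_kb = pow(v, P_rb, u)                # Eq. (5): receiver's public key

# Each side combines the *received public key* with its own private key.
S_k_receiver = pow(P_ka, P_rb, u)     # Eq. (6): computed by the receiver
S_k_sender = pow(P_kb, P_ra, u)       # Eq. (7): computed by the sender

assert S_k_sender == S_k_receiver     # both derive the same shared secret
print(hex(S_k_sender))
```

Since \(v^{P_{ra} P_{rb}} \bmod u\) is obtained on both sides, the secret never crosses the channel; only the public keys do.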

3.3.2 Ant Lion Optimization

The ALO is a meta-heuristic approach inspired by the hunting behavior of antlions [30]. In the proposed method, the ALO is employed to optimize the selection of the initial parameters of the Diffie–Hellman (DH) key exchange, including the large prime modulus and the primitive root. This optimization improves the efficiency of the DH algorithm by dynamically tuning its parameters to changing environmental conditions. Thus, the ALO enables the system to adapt to real-time changes in the edge AI environment and improves security by iteratively tuning the key exchange process, making the system more effective in handling threats and offering a greater level of security during transmission. The ALO takes the parameters of the DH key exchange as input, and the objective function is to find the optimal values for those key generation parameters. The approach explores the parameter space and updates the parameters based on their fitness values, which are determined by the performance of the DH key exchange for particular parameter values, measured in terms of generation time, complexity, level of security, etc. After exploration, the parameter set with the greatest fitness value is selected for key generation in the DH mechanism. This iterative mechanism improves the efficiency of parameter selection and ensures a greater level of security. The optimization process begins with initializing the population of DH key exchange parameters, as indicated in Eq. (8).

$$Pa_{p} = \begin{bmatrix} pa_{11} & pa_{12} & pa_{13} & \cdots & pa_{1w} \\ pa_{21} & pa_{22} & pa_{23} & \cdots & pa_{2w} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ pa_{z1} & pa_{z2} & pa_{z3} & \cdots & pa_{zw} \end{bmatrix}$$
(8)

where \(Pa_{p}\) denotes the parameter population and each \(pa\) indicates a parameter set in the parameter space, containing parameters such as \(u\), \(v\), \(P_{ra}\), and \(P_{rb}\) with different values. The fitness value of each parameter set is then estimated; it depends on the efficiency of the DH key exchange process and is directly proportional to the performance of the DH mechanism. If the DH algorithm attains improved security and lower computational error for a parameter group [31], that set has an excellent fitness value. The parameters then move through the parameter space randomly, and the random walk is represented in Eq. (9).

$$P_{pa} \left( t \right) = \left[ {0,\,{\text{cumsum}}\,\left( {2s\left( {t_{1} } \right) - 1} \right),\,{\text{cumsum}}\,\left( {2s\left( {t_{2} } \right) - 1} \right),\, \ldots ,{\text{cumsum}}\,\left( {2s\left( {t_{n} } \right) - 1} \right)} \right]$$
(9)

where \(P_{pa} \left( t \right)\) indicates the position of the parameters at time \(t\), \({\text{cumsum}}\) represents the cumulative sum, and \(s\left( t \right)\) defines the stochastic step function (1 if a random number exceeds 0.5 and 0 otherwise), which determines the direction of the random walk. After the exploration phase, the positions of the parameters are updated toward the optimal solutions, as expressed in Eq. (10).

$$P_{pa} \left( {t + 1} \right) = \frac{{\left( {P_{pa} \left( t \right) - m_{r} \left( t \right)} \right) \times \left( {mxf_{pa} \left( t \right) - mif_{pa} \left( t \right)} \right)}}{{\left( {mxf_{pa} \left( t \right) - m_{r} \left( t \right)} \right)}}$$
(10)

where \(P_{pa} \left( {t + 1} \right)\) indicates the updated position of the parameter at time \(t + 1\), \(m_{r} \left( t \right)\) denotes the minimum of the random walk, \(mxf_{pa} \left( t \right)\) represents the maximum fitness of the parameter, and \(mif_{pa} \left( t \right)\) indicates the minimum fitness value of the parameter at time \(t\). After the position update, the fitness of the parameters is re-evaluated at their new positions. Finally, the initial parameters with the greatest fitness value are chosen for the DH key exchange process. Using these initial parameters, the DH approach generates the shared secret key, which is utilized by the Twofish cryptography for data encryption. Moreover, it is important to note that these initial parameters change iteratively, ensuring adaptability to the network.
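The following sketch illustrates the flavor of this search: ALO-style random walks (Eq. (9)), min–max scaling of each walk into the parameter bounds (in the spirit of Eq. (10)), and elite selection by fitness. The fitness function, bounds, population size, and iteration count are stand-in assumptions; in particular, the real objective combining key strength, generation time, and complexity is not reproduced, and a production system would additionally test candidate moduli for primality.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_walk(T):
    """Eq. (9): random walk as the cumulative sum of +/-1 stochastic steps."""
    steps = 2 * (rng.random(T) > 0.5) - 1   # s(t) in {0,1} mapped to {-1,+1}
    return np.concatenate(([0.0], np.cumsum(steps)))

def scale(walk, lo, hi):
    """Eq. (10)-style min-max normalization of a walk into [lo, hi]."""
    w = walk - walk.min()
    return lo + w * (hi - lo) / (w.max() + 1e-12)

def fitness(u, v):
    """Stand-in objective: larger moduli approximate stronger keys, with a
    small penalty as a mock key-generation cost. The paper's actual fitness
    (security level, generation time, complexity) is not reproduced here."""
    return np.log2(u) - 1e-4 * v

bounds = [(1e5, 1e7),   # search range for the prime-like modulus u
          (2.0, 1e4)]   # search range for the generator v

best, best_fit = None, -np.inf
for iteration in range(50):          # ALO iterations
    for _ in range(10):              # candidate parameter sets per iteration
        u, v = (scale(random_walk(200), lo, hi)[-1] for lo, hi in bounds)
        u = int(u) | 1               # force odd; a real system must check primality
        if (f := fitness(u, v)) > best_fit:
            best, best_fit = (u, int(v)), f

print("selected DH parameters (u, v):", best, "fitness:", round(best_fit, 3))
```

The elite parameter set found this way plays the role of the optimized \(u\) and \(v\) handed to the DH key exchange of Sect. 3.3.1.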

3.3.3 Twofish Algorithm

The Twofish technique is a symmetric-key cryptographic mechanism employed for data encryption [32]; it uses a single key for both encryption and decryption. In the presented work, the primary role of the Twofish algorithm is to provide security and privacy to the data during communication. It takes the shared secret key from the optimized DH key exchange and encodes the data into another form, protecting it from network threats and offering a high level of security. The algorithm works by splitting the data into fixed-size 128-bit blocks and then encrypting those blocks with the shared secret key generated by the optimized DH approach, combining substitution, permutation, and confusion–diffusion mechanisms to transform plaintext into ciphertext. Initially, the shared secret key is expanded into a sequence of subkeys for the encryption rounds using the key expansion algorithm. Each data block then undergoes whitening, substitution, permutation, and Feistel network-based processing. In whitening, the input data block is combined with the whitening subkey through an XOR operation, expressed in Eq. (11).

$$D_{wh} = D_{ip} \oplus S_{whk}$$
(11)

where \(D_{wh}\) denotes the whitened data, \(D_{ip}\) indicates the input data, and \(S_{whk}\) refers to the whitening subkey. The Twofish approach then applies Feistel network-based processing, where the data block is divided into two halves (left and right) that go through multiple rounds of processing. In each round, the halves undergo the operations in Eqs. (12) and (13).

$$L_{h}^{\prime } = R_{h}$$
(12)
$$R_{h}^{\prime } = L_{h} \oplus F_{e} \left( {R_{h} ,R_{sbk} } \right)$$
(13)

where \(L_{h}\) and \(R_{h}\) indicate the left and right halves of the data entering the round, \(F_{e}\) represents the Feistel round function, \(R_{sbk}\) defines the round subkey, and \(L_{h}^{\prime }\) and \(R_{h}^{\prime }\) indicate the new left and right halves after the round: the old right half becomes the new left half, while the new right half mixes the old left half with the keyed round function. In real-time implementations, operations such as the permutations and substitutions are nontrivial, so a proper understanding of the algorithm is required for its accurate implementation. The pseudo-code of the proposed approach is presented in Algorithm 1.

Algorithm 1 ALO-DHT

In each round, the new left and right halves become the inputs to the next round. After a series of encryption rounds, combining the final left and right halves of the data produces the ciphertext, which is transmitted over the communication channel to the receiver (cloud storage). At the receiver end, decryption is performed by reversing the order of the subkeys and inverting the Feistel rounds [33]. A simplified sketch of this block structure is given below.
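The following toy sketch makes the block structure concrete: 128-bit whitening (Eq. (11)), the Feistel rounds of Eqs. (12)–(13), and decryption by reversing the subkey order. It is emphatically not Twofish: the real cipher's key schedule, key-dependent S-boxes, MDS matrix, and pseudo-Hadamard transform are replaced here by a hash-based stand-in round function, so the sketch illustrates only the structure and must not be used for real encryption.

```python
import hashlib

BLOCK_BYTES = 16  # Twofish operates on 128-bit blocks

def toy_F(half: int, subkey: int) -> int:
    """Toy stand-in for Twofish's F function (the real one uses key-dependent
    S-boxes, an MDS matrix, and a pseudo-Hadamard transform)."""
    digest = hashlib.sha256(f"{half}:{subkey}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def subkeys(secret: int, rounds: int = 16):
    """Toy key schedule: expand the DH shared secret into round subkeys."""
    return [toy_F(secret, r) for r in range(rounds)]

def whitening_key(secret: int) -> int:
    return int.from_bytes(hashlib.sha256(str(secret).encode()).digest()[:16], "big")

def encrypt_block(block: bytes, secret: int) -> bytes:
    ks, white = subkeys(secret), whitening_key(secret)
    x = int.from_bytes(block, "big") ^ white            # whitening, Eq. (11)
    L, R = x >> 64, x & (2**64 - 1)                     # split into halves
    for k in ks:                                        # Feistel rounds, Eqs. (12)-(13)
        L, R = R, L ^ toy_F(R, k)
    return (((L << 64) | R) ^ white).to_bytes(BLOCK_BYTES, "big")

def decrypt_block(block: bytes, secret: int) -> bytes:
    ks, white = subkeys(secret), whitening_key(secret)
    x = int.from_bytes(block, "big") ^ white
    L, R = x >> 64, x & (2**64 - 1)
    for k in reversed(ks):                              # reverse the subkey order
        L, R = R ^ toy_F(L, k), L                       # invert each round
    return (((L << 64) | R) ^ white).to_bytes(BLOCK_BYTES, "big")

shared_secret = 0x1234ABCD                              # from the optimized DH exchange
plain = b"patient-rec-0001"                             # exactly one 16-byte block
cipher = encrypt_block(plain, shared_secret)
assert decrypt_block(cipher, shared_secret) == plain
```

The final assertion demonstrates the Feistel property exploited at the receiver: running the same structure with the subkeys in reverse order inverts every round exactly.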

4 Results and Discussion

Protecting data security and privacy in decentralized networks such as edge AI systems is critical. This work presented a new hybrid cryptographic mechanism that combines DH-based Twofish with ALO to provide secure data transmission. The framework gathers data from the developed edge AI system and processes it with the autoencoder, which learns patterns in the data to detect malicious entries. The DH key exchange creates shared secret keys for encryption, while the ALO enhances security performance and optimizes the key exchange. Additionally, to guard against security risks during transmission, the Twofish algorithm encrypts data using the secret key generated by DH. The developed model is implemented in MATLAB, and its performance is validated against other prevailing techniques.

4.1 Simulation Details

The suggested investigation was implemented in MATLAB, which makes it easy to test various methods, such as Particle Swarm Optimization and Ant Lion Optimization. Simulation also aids in identifying mistakes that can then be prevented when the implementation is deployed in real time. The simulation was run on a 64-bit Windows machine with an eight-core AMD Ryzen 7 CPU, with components of the proposed case implemented in the C programming language alongside MATLAB.

4.2 Performance Analysis

The performance of the developed ALO-DHT model is validated against other conventional models regarding accuracy, recall, precision, F1 score, time consumption, delay, and energy consumption. To prove the reliability of the developed model, its outcomes are compared with techniques such as the EC model with AI mechanism (EC-AI) [22], the EI computing framework in IoT (EI-IoT) [23], the SC-based Attack Detection and Security model (SC-ADS) [24], and the enhanced SC approach for Intrusion Detection (SC-ID) [25].

4.2.1 Accuracy

The accuracy of the developed ALO-DHT model is validated against other prevailing techniques, namely EC-AI, EI-IoT, SC-ADS, and SC-ID, at five data rates: 50, 100, 150, 200, and 250 Kbps. The accuracy comparison is shown in Fig. 2.

Fig. 2 Accuracy comparison of the proposed method with other prevailing techniques

The existing models reached lower accuracy than the proposed technique: at a 50 Kbps data rate, the EC-AI model gained 95%, the EI-IoT technique 85.6%, the SC-ADS model 97.34%, and the SC-ID technique 90%. The designed ALO-DHT model achieved accuracies of 99.45%, 99.32%, 99.11%, 99%, and 98.86% for the 50, 100, 150, 200, and 250 Kbps data rates, respectively, showing the effectiveness and reliability of the model in securing the edge AI network.

4.2.2 Precision

The precision of the developed ALO-DHT model is validated against the same prevailing techniques at the five data rates of 50, 100, 150, 200, and 250 Kbps. The precision comparison is shown in Fig. 3.

Fig. 3 Precision comparison of the proposed method with other prevailing techniques

The existing models reached lower precision than the proposed technique: at a 50 Kbps data rate, the EC-AI model gained 93.64%, the EI-IoT technique 83%, the SC-ADS model 95.45%, and the SC-ID technique 90.86%. The designed ALO-DHT model achieved precisions of 99.15%, 99.03%, 98.74%, 98.12%, and 97.98% for the 50, 100, 150, 200, and 250 Kbps data rates, respectively.

4.2.3 Recall

The recall of the developed ALO-DHT model is validated against the same prevailing techniques at the five data rates. The recall comparison is shown in Fig. 4.

Fig. 4 Recall comparison of the proposed method with other prevailing techniques

The existing models reached lower recall than the proposed technique: at a 50 Kbps data rate, the EC-AI model gained 96.54%, the EI-IoT technique 84%, the SC-ADS model 95.45%, and the SC-ID technique 91.2%. The designed ALO-DHT model achieved recalls of 99.77%, 99.52%, 99.12%, 98.68%, and 98.12% for the 50, 100, 150, 200, and 250 Kbps data rates, respectively.

4.2.4 F1 Score

The F1 score of the developed ALO-DHT model is validated against the same prevailing techniques at the five data rates. The F1 score comparison is shown in Fig. 5.

Fig. 5 F1 score comparison of the proposed method with other prevailing techniques

The existing models reached lower F1 scores than the proposed technique: at a 50 Kbps data rate, the EC-AI model gained 93.87%, the EI-IoT technique 90%, the SC-ADS model 75.6%, and the SC-ID technique 85.7%. The designed ALO-DHT model achieved 98.3%, 98.03%, 97.86%, 97.34%, and 97% for the 50, 100, 150, 200, and 250 Kbps data rates, respectively.

4.2.5 Time Consumption

The time consumption of the developed ALO-DHT model is validated against the same prevailing techniques. The time comparison is shown in Fig. 6.

Fig. 6 Time comparison of the proposed method with other prevailing techniques

The existing models required more time to execute the process than the proposed technique: the EC-AI model took 5 s, the EI-IoT technique 2.7 s, the SC-ADS model 3 s, and the SC-ID technique 4.5 s to secure the edge AI system. The developed method required only 2 s to secure the data, which also saves energy.

4.2.6 Delay

The delay of the developed ALO-DHT model is validated against the same prevailing techniques. The delay comparison is shown in Fig. 7.

Fig. 7 Delay comparison of the proposed method with other prevailing techniques

The existing models incurred larger delays than the proposed technique: the EC-AI model took 3 s, the EI-IoT technique 2.5 s, the SC-ADS model 1.8 s, and the SC-ID technique 5 s to secure the data. The developed method incurred a delay of only 0.8 s.

4.2.7 Energy Consumption

The energy consumption of the developed ALO-DHT model is validated against the same prevailing techniques. The energy consumption comparison is shown in Fig. 8.

Fig. 8 Energy consumption comparison of the proposed method with other prevailing techniques

The existing models consumed more energy than the proposed technique: the EC-AI model consumed 6.5 mJ, the EI-IoT technique 8.4 mJ, the SC-ADS model 11.6 mJ, and the SC-ID technique 7 mJ to secure the data. The developed method required only 3.2 mJ.

4.2.8 Security and Confidentiality

Security and privacy goals are supported by the fundamental idea of security, and data security and confidentiality are frequently used interchangeably. While privacy safeguards allow individuals to regulate the information that is collected, maintained, and shared about them, confidentiality controls guard against unauthorized use of information already held by an edge AI system. The security and confidentiality achieved by the developed ALO-DHT model are validated against other prevailing techniques, namely FHE, AES, RSA, and PHE. The security comparison is shown in Fig. 9.

Fig. 9 Security comparison of the proposed method with other prevailing techniques

The existing models achieved lower security and confidentiality rates than the proposed technique. For security, the FHE model gained 88%, the AES technique 81%, the RSA model 92%, and the PHE technique 94%, while the designed ALO-DHT model achieved 99.74%. For confidentiality, the FHE model gained 85%, the AES technique 83%, the RSA model 90%, and the PHE technique 96%, while the designed ALO-DHT model achieved 99%.

4.3 Discussion

Cryptography does not mitigate the risks and vulnerabilities resulting from poor system, protocol, and procedure design; these must be resolved by carefully planning and establishing a defensive infrastructure, and adding cryptographic techniques to information processing introduces delay. Many cryptographic methods have previously been employed in edge AI systems for data security, such as fully homomorphic encryption (FHE) [9], MSIAP [34], Convergent and Modified Elliptic Curve Cryptography (MECC) [35], Paillier Homomorphic Encryption (PHE) [36], lightweight Speck encryption [37], Shamir threshold cryptography [38], and ciphertext policy attribute-based encryption [39, 40]. The strengths and weaknesses of these cryptographic techniques are detailed in Table 2.

Table 2 Cryptographic techniques with their strengths and weaknesses

The suggested technique combines the hybrid Diffie–Hellman-based Twofish approach with Ant Lion optimization (ALO-DHT). First, AI-enabled sensors and edge computing devices are used to create an edge AI environment, and an attack prediction module based on an autoencoder neural network is created to detect malicious data entry in the built environment. The processed data is then encrypted using the suggested cryptography to guarantee security and privacy during transmission. The developed model thus enhances security using cryptographic and optimization techniques; its security performance is validated against other models and detailed in Table 3.

Table 3 Performance comparison of proposed and existing cryptographic models

Table 3 compares the performance of the proposed ALO-DHT technique with the prevailing methods concerning encryption time, decryption time, key generation time, and security level. The proposed ALO-DHT model takes 5.64 s to encrypt the data, 4.28 s to decrypt the data, and 347.21 ms to generate a key, whereas the existing FHE, AES, RSA, and PHE methods take 612.54 ms, 873.76 ms, 569.34 ms, and 773.29 ms, respectively, to create a key. Compared to these techniques, the proposed ALO-DHT therefore takes the least time to create a key. Furthermore, the security levels of the methods are compared: the proposed ALO-DHT gives the highest security value (99.74%), while the existing FHE, AES, RSA, and PHE methods give 88%, 81%, 92%, and 94%. It is therefore inferred that the proposed ALO-DHT offers high performance in both key generation and security.

Efficiency is the capacity to achieve a desired outcome with the least waste of time, effort, or resources; effectiveness is the capacity to produce a better result, one that adds more value or accomplishes a better goal. The proposed applications are fast, with minimal latency, robust security, better privacy management, and low power consumption. Additionally, edge AI helps businesses maximize their energy, network, and computing resources, raising the overall cost-effectiveness of AI applications, as reflected in the security performance, execution time, and encryption and decryption results. Better efficiency and effectiveness yield excellent security and privacy for a decentralized edge AI system.

The stable performance of the system over increasing data sizes demonstrates that the proposed method is capable of handling large databases, ensuring its scalability for growing networks, and indicates that the system maintains its efficiency and responsiveness over extensive networks. This outcome illustrates the proposed system's robustness and its capacity to handle complex edge AI networks. However, the system's performance has not been evaluated under varying network conditions such as different network threats, bandwidth variations, and timing constraints, which may reduce its applicability in real-time environments and limit its scalability. Moreover, the developed model is limited by its high computational cost and large data requirements.

Many technologies, including Computer Vision, Multi-modal AI, AutoML, Digital Twins, Explainable AI, Neural Networks, and more, are part of current AI trends. These trends are revolutionizing how we engage with and utilize technology while fostering sustainable growth for companies across all industries. With low latency and fast cellular speeds, 5G offers robust wireless connectivity to EC, opening up exciting new possibilities such as autonomous drones, remote telesurgery, smart city projects, and much more.

4.4 Limitations of the Study

The limitations of the study are listed below:

  • Since the key generation process is optimized by the ALO mechanism, it indirectly increases latency: the iterative refinement of DH parameters potentially increases the key generation time, inducing latency in real-time networks.

  • Combining different algorithms, such as the autoencoder neural network, Ant Lion Optimization, Diffie–Hellman, and the Twofish algorithm, into a single framework introduces computational complexity.

  • The DH parameter optimization through ALO demands a significant amount of computational resources, which makes the system less effective in resource-constrained edge AI networks.

  • Since the system's adaptability relies on the iterative nature of the ALO approach, it faces difficulty in adapting to rapidly changing network conditions.

  • Although the Twofish algorithm is robust and offers security, it limits the system's interpretability and demands domain expertise for precise implementation in real-world applications.

5 Conclusion

In this study, an optimized hybrid cryptographic scheme (ALO-DHT) is presented to ensure security and privacy during data transmission. The proposed model utilizes the data collected from the edge AI network as input and processes it using the autoencoder for malicious data prediction. Further, to ensure data security, the processed data is encrypted using the proposed ALO-DHT approach: the DH key exchange produces the shared secret key, the Twofish approach performs data encryption using that key, and the ALO optimizes the key exchange and encryption process, thereby improving system performance. The presented framework is implemented in a MATLAB environment, and the results are examined in terms of accuracy, precision, delay, time consumption, recall, f-measure, and energy consumption. The experimental results illustrate that the proposed method acquired an improved accuracy of 99.45%, precision of 99.15%, recall of 99.77%, and f-measure of 98.3%. In addition, the energy consumption, delay, and time consumption are minimized to 3.2 mJ, 0.8 s, and 2 s, respectively. The proposed methodology also obtained a minimum encryption time of 5.64 s, a key generation time of 347.21 ms, a decryption time of 4.28 s, and a maximum security rate of 99.74%. Furthermore, we made a comparative study with conventional algorithms, such as EC-AI, EI-IoT, SC-ADS, and SC-ID, to validate the effectiveness of the proposed framework; the comparative analysis highlights that the proposed framework outperformed the existing techniques in terms of accuracy, precision, recall, and f-measure.

Although the proposed ALO-DHT method offers greater security in data transmission, it still faces some demerits, such as computational overhead, high cost, and limited scalability. Future work should therefore concentrate on developing an intelligent security framework that works optimally under resource-constrained and dynamic network conditions, and on reducing the computational complexity using compression mechanisms, making the approach more effective and reliable in real-time edge AI applications.