Introduction

Ensuring the security of information is of great importance in the modern age, and a variety of strategies can accomplish this goal. Symmetric key cryptography and asymmetric key cryptography are the most widely used, and asymmetric cryptography is less efficient than symmetric cryptography. Therefore, a neural synchronization can be built and the neural key it produces used for encryption and decryption with a symmetric algorithm. Conventional key exchange techniques, however, depend mostly on number-theoretic computation, which demands considerable memory and processing. Neural networks offer a fresh way to obtain a symmetric key, known as the neural key exchange. Through mutual learning, two neural networks can achieve synchronization: they receive a shared input vector and send their respective outputs to each other. After synchronization, the identical weight vectors of both networks are utilized as a symmetric key that can be used to encrypt and decrypt. One of the significant issues in the neural key exchange is determining the degree to which the two networks are synchronized without the other party’s weight information. Methods that compute the cosine similarity or the Euclidean distance between the weights cannot be used, because the weight vector of the other party is not known. Dolecki and Kozera [9] proposed a method for assessing the degree of synchronization over a fixed number of prior learning steps by measuring the frequency of identical outputs. However, this method detects complete synchronization with a delay, so the two networks continue to learn even after they are entirely synchronized, and a successful intruder can extract additional information from these extra learning steps. Liu et al. [18] suggested a method for evaluating neural network synchronization with a hash. Their method finds complete synchronization precisely, but it increases the communication traffic; in exceptional cases the traffic becomes considerable if the hash function is used inappropriately. The synchronization of the two networks must therefore be detected early and the learning stopped.

The main problem domain is stated as follows:

  1. The method for evaluating the synchronization of the neural key exchange protocol must be established to improve security and reliability.

  2. In conventional synchronization evaluation techniques, such as the hashing algorithm and the frequency solution, there is a significant lag in detecting full synchronization. The geometric attacker can disrupt neural synchronization during this long delay.

  3. Long delays primarily harm the efficacy of neural synchronization. A better solution is needed to reduce both the delay and the error rate.

  4. The frequency approach still carries a risk of misjudgment. If either communicating partner misjudges, the neural key exchange protocol must be performed again.

The contributions of this paper are thus:

  1. This paper aims to increase protection and reliability by developing a procedure for assessing the synchronization of the neural key exchange protocol.

  2. The synchronization assessment method therefore plays a key role in the key exchange.

  3. This paper proposes a new method for reducing both the delay and the error rate.

  4. The merits of the hashing technique and the efficiency of the neural network are combined.

  5. It offers a more accurate way of assessing neural network synchronization. The proposed method achieves full synchronization rapidly and efficiently, and the improved technique reduces delays.

  6. It introduces a technique for finding an ideal parameter for the enhanced assessment process. The optimum parameter minimizes the improved method’s misjudgment rate. The experimental findings show that the proposed process has fewer delayed steps than existing approaches and a nearly negligible error rate. The security and privacy of neural synchronization can be enhanced by adopting this efficient algorithm.

The rest of this paper is organized as follows. Section 2 deals with related works. Neural synchronization is explained in Sect. 3. Section 4 deals with the proposed methodology. Sections 5 and 6 deal with the security analysis and the results, respectively. Conclusions are given in Sect. 7, and references are given at the end.

Table 1 The strengths and drawbacks

Related works

Artificial neural networks have been used for cryptography Zçakmak et al. [39], Alani [2], Abdalrdha et al. [1], Protic [26], Hadke and Kale [12], and a considerable amount of work has been done in this area. Neural synchronization was first proposed in Kanter et al. [14] and implemented with simple parameters based on neural networks. The authors analyzed the security of the protocol against naïve attackers, introduced bit-package techniques, and extended the protocol to speed up the synchronization of the neural networks. Klimov et al. [16] reviewed the protocol proposed in Kanter et al. [14] and described why the two communicating parties can reach complete weight synchronization. Their findings reveal that although the same network configuration, input, and learning rules are implemented, a naïve attacker is unable to reach absolute synchronization with the two communicating partners. They then introduced three kinds of attacks: genetic, geometric, and probabilistic. The security of the protocol therefore relies on the structural parameters of the neural network. Shacham et al. [37] suggested a cooperative technique against genetic, geometric, and probabilistic attacks, preventing the depth of the synapse from influencing the attackers’ likelihood of success. Ruttor et al. [28] performed comparative analyses of genetic, geometric, and majority attacks. Their simulation results show that the security of neural synchronization improves as the depth of the synapse increases. Instead of a random common input, they suggested a system using queries that can boost the security of the key exchange protocol. Cosine similarity was used by Ruttor et al. [29] to assess the degree of synchronization between two communicating neural networks. The protocol proposed in Kanter et al. [14] was adopted by Desai et al. [8] to perform secret sharing for encoding and decoding images. An authenticated key exchange protocol was suggested by Allam et al. [5], using a shared key as a learning boundary. This protocol adopts a dynamic learning rate and random learning rules, with the advantage that the security of the neural key exchange can be strengthened without compromising efficiency. Its efficiency was later evaluated by Santhanalakshmi et al. [31]. An effective and novel solution for malware identification was suggested by Alazab et al. [3]. For malware feature discovery, their methodology uses a hybrid wrapper-filter model that incorporates Maximal Importance filter heuristics and the Artificial Neural Net Input Gain Measurement Approximation (ANNIGMA) wrapper heuristic for subset selection, capitalizing on the strengths of each classifier. It injects the intrinsic features of the data collected by the filter into the wrapper process and integrates them with the heuristic score of the wrapper. In turn, this decreases the search space and directs the search toward the malware features most relevant to detection. Dolecki and Kozera [9] showed that synchronization can be measured using the frequency at which both communicating parties have the same output. Their simulation results show very strong relationships between the frequency measure, cosine similarity, and Euclidean distance. For the synchronization of neural networks, Santhanalakshmi et al. [32] implemented a genetic algorithm to find optimal weights as initial weights.
By reducing learning time and phases, this approach accelerates synchronization. The genetic algorithm and tree parity machine parameters were studied in Santhanalakshmi et al. [32]; their findings show that increasing the number of hidden layers of the TPM boosts security. Chourasia et al. [7] suggested a vectored neural synchronization for key exchange. Pal et al. [24] suggested a learning rule that can speed up synchronization and improve key randomness. Dong and Huang [10] proposed a complex-valued neural network for neural cryptography, in which all inputs and outputs are complex values; however, this technique takes a significant amount of time to complete the synchronization process. Salguero Dorokhin et al. [30] proposed a method for finding the optimal TPM configuration, allowing a more effective and stable neural key exchange protocol. It assesses complete synchronization by using Alice’s and Bob’s weights; yet the parties have no real information about each other’s weight vectors, so measuring the distance between weights with the Euclidean rule is not technically applicable. Sarkar and Mandal [34], Sarkar [33], Sarkar and Mandal [35], Mandal and Sarkar [19], and Sarkar et al. [36] proposed schemes that enhanced the security of the protocol by increasing the synaptic depth of the TPM, thereby counteracting the brute-force attacks of the attacker. It is found that the level of security provided by TPM synchronization can also be improved by introducing a large set of neurons and entries per neuron in the hidden layers. To forecast smart grid reliability, a new Multidirectional Long Short-Term Memory (MLSTM) model was proposed by Alazab et al. [4], and its output was compared with conventional Machine Learning (ML) models such as Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Recurrent Neural Networks (RNN). The proposed model attained 99.07% accuracy in training and testing, 3% better than the other conventional deep learning models. Gadekallu et al. [11] proposed a method to proactively take the necessary steps to tackle agricultural crises, with emphasis on applying a machine learning model to a tomato disease image dataset gathered from the freely accessible plant-village dataset; the essential features are extracted using a hybrid principal component analysis-Whale optimization algorithm. To classify intrusion detection system (IDS) datasets, a hybrid principal component analysis (PCA)-firefly based machine learning model was suggested by Bhattacharya et al. [6]. The dataset used in the analysis is obtained from Kaggle. The model first performs One-Hot encoding to transform the IDS datasets; the hybrid PCA-firefly algorithm is then used for dimensionality reduction, and the XGBoost algorithm is applied to the reduced dataset for classification. Shishniashvili et al. [38] suggested a strategy that uses portions of the synchronized weights as a session key rather than the whole weight vector. Mehic et al. [20] showed that the synchronization of two neural networks can also serve as error reconciliation in quantum key distribution protocols. Niemiec [21] suggested a new approach of using the tree parity machine to address transmission errors in the quantum key distribution protocol.
The effect of parameter variance on security, and the performance of the various learning rules in terms of efficiency, were analyzed in Niemiec et al. [22].

Four methods of synchronization assessment were mainly implemented in these schemes: the frequency of identical outputs from the two partners, cosine similarity, Euclidean distance, and the hash function. Table 1 lists the strengths and drawbacks of the four approaches.

The methods were evaluated on four aspects: additional traffic, misjudgment, practicality, and delay. Here, the delay indicator means that complete synchronization cannot be identified at the step it first occurs; detection is deferred, and the neural key exchange is only accomplished several steps after true synchronization. The additional-traffic indicator means that the delay increases the communication traffic. The misjudgment indicator means that synchronization may be declared while true synchronization has not been accomplished, so a valid key cannot be exchanged. The practicality indicator reflects whether the approach can be used in practice.

The methods that compute the cosine similarity and the Euclidean distance between the weights suffer no misjudgment, delay, or additional traffic. Despite these benefits, their decisive drawback is that they are impractical, because the other network’s weights are not accessible. The frequency and hashing schemes are practical, but both lead to delays and additional traffic. The important problem with the hash function is deciding when to use it: applied too early, it causes inefficiency; applied too late, it reduces the security of the neural synchronization. The frequency approach also involves misjudgment.
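For reference, writing \(w_A \) and \(w_B \) for the two parties’ weight vectors, the two geometric measures are the standard definitions:

\(\rho (w_A,w_B)=\frac{w_A\cdot w_B}{\left\Vert w_A\right\Vert \left\Vert w_B\right\Vert },\qquad d(w_A,w_B)=\left\Vert w_A-w_B\right\Vert =\sqrt{{\textstyle \sum _{i}}\left( w_{A,i}-w_{B,i}\right) ^{2}} \)

Complete synchronization corresponds to \(\rho =1 \) and \(d=0 \); both quantities require the other party’s weights, which is exactly what is unavailable in practice.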

Finally, the assessment of neural network synchronization is a critical matter for the security of the neural key exchange. This paper integrates the advantage of the hash function with the performance of the neural networks.

Python is used for the implementation of the technique and statistical analysis is done using R.

Neural synchronization

Fig. 1 Neural synchronization process

Two neural networks participate in the synchronization procedure. Both possess a special neural network structure called the tree parity machine (TPM) Ruttor [27]. The two tree parity machines can be completely synchronized by mutual learning. The resulting identical weight vector of the TPM can then be utilized as a seed for secret key generation, for encryption/decryption, or as a session key. The neural synchronization protocol includes five important phases: initialization, calculation, update, evaluation, and completion. Figure 1 displays the flow diagram of neural synchronization; the details of each step are as follows.

  1. Initialization step: Arindam and Moumita initialize their tree parity machines with the same parameters; the weights are then generated randomly and used as the initial weights.

  2. Estimation step: at each learning step, Arindam and Moumita obtain the identical input vector. They calculate the outputs of their tree parity machines and send the results to each other.

  3. Updating step: after receiving the other party’s output, each checks whether the two outputs are identical. If they are, Arindam and Moumita adjust their respective weight vectors according to the learning rule; otherwise, the protocol returns to the estimation step.

  4. Evaluation step: once the weights have been updated, Arindam and Moumita determine whether complete synchronization has been achieved. If it has, the protocol goes to the next step; otherwise, it returns to the estimation step.

  5. Finishing step: Arindam and Moumita derive their keys from the synchronized weights and complete the neural key exchange process.
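As an illustration of these five phases, the following minimal Python sketch synchronizes two single-layer tree parity machines with the Hebbian learning rule. It is a simplified sketch, not the TLTPM proposed later in this paper (which has three hidden layers); the parameter values and the direct weight comparison in the evaluation phase are for demonstration only, since in a real exchange the parties cannot inspect each other’s weights.

import numpy as np

H, M, L = 3, 101, 3  # hidden units, inputs per hidden unit, weight range (illustrative)
rng = np.random.default_rng()

def output(w, x):
    # Calculation phase: hidden outputs are signs of the local fields;
    # the TPM output is their product
    sigma = np.where(np.sum(w * x, axis=1) >= 0, 1, -1)
    return sigma, int(np.prod(sigma))

def update(w, x, sigma, tau):
    # Update phase (Hebbian rule): only hidden units agreeing with tau move,
    # and weights stay clipped to [-L, L]
    for k in range(H):
        if sigma[k] == tau:
            w[k] = np.clip(w[k] + tau * x[k], -L, L)

# Initialization phase: both parties draw random integer weights in [-L, L]
wA = rng.integers(-L, L + 1, size=(H, M))
wB = rng.integers(-L, L + 1, size=(H, M))

steps = 0
while not np.array_equal(wA, wB):            # evaluation phase (demo shortcut)
    x = rng.choice([-1, 1], size=(H, M))     # common public input vector
    sA, tA = output(wA, x)
    sB, tB = output(wB, x)
    if tA == tB:                             # update only when the outputs match
        update(wA, x, sA, tA)
        update(wB, x, sB, tB)
    steps += 1

print("synchronized after", steps, "steps")  # completion phase: weights form the key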

Proposed methodology

In this section, the method for evaluating synchronization and the strategy for finding the optimum parameter are presented.

Proposed method of evaluation of synchronization

Four main methods can determine synchronization: the cosine of the weight vectors, the Euclidean distance between the weight vectors, the frequency of identical outputs, and the hash of the weight vector. Table 1 summarizes the drawbacks of these methods, which are assessed by simulation in the results and analysis section. It is therefore essential to assess the synchronization of the neural networks in a more suitable and timely manner. An improved approach that can detect complete synchronization as soon as it occurs is presented here. Table 2 lists the parameters used in this article.

Table 2 Symbol description

The core principle of the proposed solution is to roughly determine the degree of synchronization by estimating the frequency of identical outputs for Arindam and Moumita. Once this degree is high enough, the hash value of the weights is computed, and it is then determined precisely, by comparing hash values, whether the weights are equal.

The phase of rough assessment

Both Arindam and Moumita do not know each other’s weight vectors. The only information available is the output transmitted over the public channel. The frequency at which both parties produce identical outputs is therefore used to assess the degree of synchronization. Algorithm 1 illustrates the estimation. It takes z, n, and \(a_n \) as input and produces the frequency \(b_n \) as output. The parameter z is a natural number that must be set to an appropriate value, and n is the learning step. If Arindam and Moumita have the same output at the nth step, then \(a_n=1 \); otherwise \(a_n=0 \). If the learning step n is not greater than z, the frequency is determined by \(b_n={\frac{1}{z}}{\textstyle \sum _{j=1}^{n}}a_j \); if n is greater than z, it is determined by \(b_n={\frac{1}{z}}{\textstyle \sum _{j=n-z+1}^{n}}a_j \).

The algorithm compares \(b_n \) with the threshold value t. If \(b_n<t \), the precise assessment is skipped and the status is set to \(\mathrm{syn}=-1 \), denoting that both parties have not yet attained complete synchronization and need to continue learning. \(\mathrm{syn}=1 \) indicates that both parties have achieved complete synchronization, and the synchronization procedure then stops. If \(b_n\ge t \), the protocol proceeds to the precise assessment phase. Algorithm 1 illustrates the rough assessment process.

Algorithm 1: Rough assessment

Input: z, n, \(a_n \)

Output: \(b_n \)

if \(z<n \) then
    \(b_n=\frac{1}{z}{\textstyle \sum _{j=n-z+1}^{n}}a_j \)
else
    \(b_n=\frac{1}{z}{\textstyle \sum _{j=1}^{n}}a_j \)
end if
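A direct Python transcription of Algorithm 1 might look as follows (a sketch; the list a holds the indicators \(a_j \), with a[j-1] corresponding to step j):

def rough_assessment(z, n, a):
    # a[j-1] = 1 if Arindam's and Moumita's outputs agreed at step j, else 0
    if z < n:
        b_n = sum(a[n - z:n]) / z   # average over the most recent z steps
    else:
        b_n = sum(a[:n]) / z        # fewer than z steps so far, still normalized by z
    return b_n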

The precise process of assessment

This phase begins only after the degree of synchronization crosses the preset threshold. With the aid of the hash function, both parties then decide whether they have reached complete synchronization. Arindam is used here as an illustration of how full synchronization is tested; it is assumed that the other party, Moumita, conducts the same protocol. Algorithm 2 shows the assessment. It accepts t, \(b_n \), W, and \(h_\mathrm{B} \) as input and outputs the synchronization status. The factor t is a preset value that must be chosen appropriately. \(b_n \) is the outcome of Algorithm 1, W denotes Arindam’s weight vector, and \(h_\mathrm{B} \) is the hash value of Moumita’s weight vector. At the start of Algorithm 2, Arindam compares the outcome of Algorithm 1 with the threshold value. Algorithm 2 illustrates the precise assessment process.

Algorithm 2: Precise assessment

Input: t, \(b_n \), W, \(h_\mathrm{B} \)

Output: syn

if \(b_n\ge t \) then
    \(h_A=H(W) \)
    if \(h_A=h_B \) then
        syn = 1
    else
        syn = -1
    end if
else
    syn = -1
end if
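Algorithm 2 can be sketched in Python as follows; SHA-256 stands in for the hash H, since the paper does not fix a concrete hash function, and \(h_\mathrm{B} \) is assumed to have been received from Moumita over the public channel (here W is a NumPy array):

import hashlib

def precise_assessment(t, b_n, W, h_B):
    if b_n < t:
        return -1                    # rough phase: synchronization not yet likely
    h_A = hashlib.sha256(W.tobytes()).hexdigest()  # hash of Arindam's weights
    return 1 if h_A == h_B else -1   # equal hashes signal complete synchronization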

The optimum parameter finding algorithm

Two conditions must be met for the optimum z. First, it must cause no misjudgment. Second, it must detect complete synchronization with as little delay as possible. The following section demonstrates that the number of delayed steps in detecting complete synchronization increases linearly with z, while the number of misjudgments decreases as z increases. Therefore, the minimum z that does not lead to misjudgment can be used as the ideal z. An algorithm is proposed here to search for the optimal z. The algorithm has two stages: the boundary search phase and the binary search phase. Each phase is described in detail as follows.

Boundary search process

Let z start at 25. As long as the number of misjudgments for the current z is greater than 0, z is increased by 50; the search stops once the number of misjudgments equals 0. Algorithm 3 gives the details. It takes no input and produces the low and high values used in the next phase as output. Algorithm 3 illustrates the boundary search process.

Algorithm 3: Boundary search

Input: none

Output: low, high

Let z = 25, low = 25
Calculate the total number of misjudgments
while number > 0 do
    low = z
    z = z + 50
    syn = -1
    Calculate the total number of misjudgments
end while
high = z

The binary search process

The binary search method is then used to find the best z. Details are given in Algorithm 4. It takes the low and high parameters as input and generates the optimum z as output. The procedure searches between low and high for the optimum value that does not cause misjudgment. Algorithm 4 illustrates the binary search process.

Algorithm 4: Binary search

Input: low, high

Output: optimal z

while \(high-low>1 \) do
    \(z=\frac{(low+high)}{2} \)
    Calculate the total number of misjudgments
    if number of misjudgments > 0 then
        low = z
    else
        high = z
    end if
end while
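Algorithms 3 and 4 combine into the following sketch, assuming a hypothetical helper count_misjudgments(z) that simulates the protocol for a given z and returns how many runs were misjudged (the start value 25 and step 50 are those given above):

def optimal_z(count_misjudgments):
    # Boundary search (Algorithm 3): grow z until no misjudgment occurs
    z = low = 25
    while count_misjudgments(z) > 0:
        low = z
        z += 50
    high = z
    # Binary search (Algorithm 4): narrow [low, high] to the smallest error-free z
    while high - low > 1:
        z = (low + high) // 2
        if count_misjudgments(z) > 0:
            low = z
        else:
            high = z
    return high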

This special neural network, TLTPM, is composed of M input neurons for each of its H hidden neurons, and has only one neuron in its output layer. TLTPM works with binary inputs, \(xinput_{u,v}\in \{-1,+1\} \). The mapping between input and output is described by discrete weight values between \(-Lrange \) and \(+Lrange \), \(weight_{u,v}\in \{-Lrange,-Lrange+1,\;\dots ,\;+Lrange\} \).

In the TLTPM, the u-th hidden unit is indexed by \(u=1,\ldots ,H \), and \(v=1,\ldots ,M \) denotes the input neurons of the u-th hidden neuron. Consider \(H_{1}, H_{2}, H_{3} \) hidden units in the first, second, and third hidden layers, respectively. Each hidden unit of the first layer computes its output as the weighted sum over the current inputs assigned to it. Each hidden unit of the second layer computes its output as the weighted sum over the outputs of the first-layer hidden units, and similarly each hidden unit of the third layer computes the weighted sum over the outputs of the second-layer hidden units.
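A sketch of this layered output computation in Python is shown below. The fully connected wiring between consecutive hidden layers, the mapping of a zero local field to +1, and taking the single output neuron as the product of the third-layer outputs (by analogy with the standard TPM) are assumptions for illustration:

import numpy as np

def tltpm_output(x, W1, W2, W3):
    # x, W1: (H1, M); W2: (H2, H1); W3: (H3, H2); integer weights in [-Lrange, +Lrange]
    sgn = lambda v: np.where(v >= 0, 1, -1)
    s1 = sgn(np.sum(W1 * x, axis=1))  # first layer: weighted sums over its own inputs
    s2 = sgn(W2 @ s1)                 # second layer: weighted sums over first-layer outputs
    s3 = sgn(W3 @ s2)                 # third layer: weighted sums over second-layer outputs
    return int(np.prod(s3))           # single output neuron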

Table 3 Delayed steps comparisons with proposed and existing techniques
Table 4 Geometric attack’s success probability

Time complexity analysis

In the neural synchronization technique, initialization of the weight vector takes \((M\;\times \;H_{1}\;+\;H_{1}\;\times \;H_{2}\;+\;H_{2}\;\times \;H_{3}) \) computations. For example, if \(M=2,\;H_{1}=2,\;H_{2}=3, \) and \(H_{3}=2 \), the total number of synaptic links (weights) is (2 \(\times \) 2 + 2 \(\times \) 3 + 3 \(\times \) 2) = 16, so initialization takes sixteen computations. Generation of M input values for each of the \(H_{1} \) hidden neurons takes \((M\times H_{1}) \) computations. Computing the hidden neuron outputs takes \(\left( H_{1}+H_{2}+H_{3}\right) \) computations, where \(H_{1},\;H_{2}, \) and \(H_{3} \) are the numbers of hidden units in the first, second, and third layers, respectively. Computing the final output value takes one unit of computation, since only a single operation is needed.

In the best case of the neural synchronization algorithm, Arindam’s and Moumita’s arbitrarily chosen weight vectors are identical, so the networks are synchronized at the initial stage and do not need to update the weights using the learning rule. The best case therefore needs \((\;(M\;\times \;H_{1})\;+\;(M\;\times \;H_{1}\;+\;H_{1}\;\times \;H_{2}\;+\;H_{2}\;\times \;H_{3})\;+\;(H_{1}\;+\;H_{2}\;+\;H_{3})) \) computations, which is of the form \(O\)(generation of common seed value + initialization of input vector + initialization of weight vector + computation of the hidden neuron outputs). If the arbitrarily chosen weight vectors are not identical, then in each iteration the weight vectors of the hidden units whose values equal the final output are updated according to the learning rule. This leads to the average and worst case, where \(I\) iterations are performed to generate identical weight vectors at both ends. The total computation for the average and worst case is therefore \((\;(M\;\times \;H_{1})\;+\;(M\;\times \;H_{1}\;+\;H_{1}\;\times \;H_{2}+\;H_{2}\;\times \;H_{3})+ (H_{1}\;+\;H_{2}\;+\;H_{3})) \) plus (number of iterations \(\times \) number of weight updates), which can be expressed as \(O\)(time complexity of the first iteration + (No. of iterations \(\times \) No. of weight updates)).
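The operation counts above can be reproduced directly; a small sketch with the example values from the text:

def tltpm_op_counts(M, H1, H2, H3):
    weights = M * H1 + H1 * H2 + H2 * H3  # weight-vector initialization
    inputs = M * H1                       # common input generation
    hidden = H1 + H2 + H3                 # hidden-neuron output computations
    return weights, inputs, hidden

print(tltpm_op_counts(2, 2, 3, 2))  # (16, 4, 7) for M=2, H1=2, H2=3, H3=2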

Fig. 2 Number of misjudgments

Fig. 3 Delay using \(H_{1}\) = 3, \(H_{2}\) = 3, \(H_{3}\) = 3, M = 101, and z = 75

Security analysis

The delay in detecting complete neural network synchronization may result in a security problem for the neural key exchange: the more delayed steps, the less secure the exchange, since delayed steps increase the attacker’s opportunity. In this section, the delayed steps and security of the existing approaches in Dolecki and Kozera [9] and Liu et al. [18] are contrasted with those of this paper.

The solution in Liu et al. [18] is to apply the hash function once the learning steps surpass a step threshold, i.e. the expected number of learning steps for the specified \(H_{1},\;H_{2},\;H_{3},\;M \), and Lrange. The proposed solution also utilizes the hashing technique, but triggers it by the degree threshold, an average level determined by the rough synchronization assessment.

Table 3 compares the delayed steps of each method. The parameters \(H_{1},\;H_{2} \), and \(H_{3} \) are all set to 3, and M is set to 101. For all three strategies, the number of delayed steps increases as Lrange increases. For each Lrange, the proposed method has the fewest delayed steps (Table 3).

The geometric attack is then considered to measure the security of the synchronization protocol under the assessment approaches suggested in Dolecki and Kozera [9], Liu et al. [18], and this paper, respectively. For each method, 10,000 simulations have been performed.

Fig. 4 Delay using \(H_{1}\) = 3, \(H_{2}\) = 3, \(H_{3}\) = 3, M = 101, and z = 125

Fig. 5 Average delayed steps

Table 4 indicates the success probability of a geometric attack. For each method, the probability of success decreases as Lrange increases, and becomes very low as Lrange rises to 4. If Lrange is more than four, the neural synchronization protocol cannot be broken by the geometric attack. For a fixed Lrange, the proposed method’s probability is the lowest, whereas that of Dolecki and Kozera [9] is the highest. For instance, the proposed method’s probability is 28.32% lower than in Dolecki and Kozera [9] and 4.12% lower than in Liu et al. [18]. However, as Lrange increases, the probability difference between the proposed method and the method of Dolecki and Kozera [9] decreases; the same phenomenon occurs between the approach of Liu et al. [18] and the proposed method.

This is consistent with the observation in Kanter et al. [14] that the success probability of a geometric attack decreases exponentially as Lrange rises. The primary reason is that each approach’s mean delayed steps increase much more slowly than the expected number of learning steps needed to reach complete synchronization: as Lrange is raised from 1 to 7, the mean delayed steps (seen earlier in Fig. 5) grow only linearly, whereas the expected learning steps required for the two neural networks to reach complete synchronization grow exponentially.

Consider n cascading encryption/decryption techniques used to encrypt/decrypt the plaintext with the help of a neurally synchronized session key. Then the session key length is [(number of cascaded encryption techniques in bits) + (three-bit combinations of encryption/decryption technique indices) + (length of the n encryption/decryption keys in bits) + (length of the n session keys in bits)], i.e. from \([\;8+\;\left( 3\times n\right) +\;\left( 128\times n\right) +\left( 128\times n\right) \;]\) bits to \([\;8+\;\left( 3\times n\right) +\;\left( 256\times n\right) +(256\times n)]\) bits. So, \(\frac{[\;8+\;\left( 3\times n\right) +\;\left( 128\times n\right) +(128\times n)]}{8} \) \(=\left[ 1+\frac{3n}{8}+16n+16n\right] \approx 32n \) to \(\frac{[\;8+\;\left( 3\times n\right) +\;\left( 256\times n\right) +(256\times n)]}{8} \) \(=\left[ 1+\frac{3n}{8}+32n+32n\right] \approx 64n \) characters.

Therefore, the total number of keys is \({256}^{64n} \). On average an attacker checks half of the possible keys, so at 1 decryption/\(\upmu \)s the time needed is \(0.5\times {256}^{64n} \) \(\upmu \)s = \(0.5\times 2^{8\times 64n} \) \(\upmu \)s = \(0.5\times 2^{512n} \) \(\upmu \)s = \(2^{(512n-1)} \) \(\upmu \)s.

Consider a single encryption using a neural key of size 512 bits, analyzed in terms of the time taken to crack a ciphertext with the fastest supercomputer available at present. To crack a ciphertext in this neural technique, the number of permutation combinations of the neural key is \(2^{512}\;=\;1.340780\;\times \;{10}^{154} \) trials for a size of 512 bits. IBM’s Summit at Oak Ridge, U.S., the fastest supercomputer in the world, delivers 148.6 PFLOPS, i.e. \(148.6\;\times \;{10}^{15} \) floating-point operations per second. Assuming each trial requires 1000 FLOPs, the number of trials per second is \(148.6\;\times \;{10}^{12} \). A year has \(365\;\times \;24\;\times \;60\;\times \;60\;=\;31{,}536{,}000 \) s. The total number of years for a brute-force attack is therefore \((1.340780\;\times {10}^{154})/(148.6\;\times \;{10}^{12}\times 31{,}536{,}000)=2.86109\;\times \;{10}^{132}\) years.
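The brute-force estimate can be verified with a few lines of Python, using exact integer arithmetic for the key space:

key_space = 2 ** 512                   # trials for a 512-bit neural key (~1.3408e154)
trials_per_sec = 148.6e15 / 1000       # 148.6 PFLOPS, assuming 1000 FLOPs per trial
seconds_per_year = 365 * 24 * 60 * 60  # 31,536,000 s
years = key_space / (trials_per_sec * seconds_per_year)
print(f"{years:.5e} years")            # approximately 2.86109e+132 years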

Results and analysis

For the results and simulations, an Intel Core i7 10th Generation 2.6 GHz processor with 16 GB RAM is used. The analysis concentrates on the security and robustness of the proposed scheme. Arithmetic operations use a precision of \({10}^{-15} \) in accordance with the IEEE 754 standard. In this section, both the method proposed in Dolecki and Kozera [9] and the method proposed in Liu et al. [18] are examined using simulation experiments, and a comparative analysis is carried out between the proposed and existing approaches. All algorithms have been implemented in Python.

The idea of the approach proposed by Dolecki and Kozera [9] is to quantify the frequency at which previous steps produced the very same output for Arindam and Moumita. If the parameter z is not properly chosen, misjudgment can appear, since there is some likelihood that all outputs of the two parties coincide over the previous steps even though Arindam and Moumita are not fully synchronized. Moreover, the smaller the z, the higher the number of misjudgments. First, 10,000 samples were simulated for different combinations of \(H_{1},\;H_{2},\;H_{3},\;M \), and Lrange as z rises from 25 to 225: 3-3-3-101-1, 3-3-3-101-2, 3-3-3-101-3, 3-3-3-101-4, 3-3-3-101-5, 3-3-3-101-6, and 3-3-3-101-7 (Fig. 2). For each combination, the number of misjudgments decreases as z increases; this improves the efficiency of the proposed technique. For a given z, however, the number of misjudgments grows as Lrange increases.

It has also been observed that the technique proposed by Dolecki and Kozera [9] is delayed in detecting complete synchronization of the two networks. If complete synchronization is reached at the current step but the two communicating parties had one or more differing outputs in the previous steps, the frequency value is smaller than one, and both parties continue learning until the previous steps contain no differing outputs. For example, for \(H_{1}=3,\;H_{2}=3,\;H_{3}=3,\;M=101 \), \(Lrange=3 \), and \(z=75 \), synchronization detected via the cosine of the weight vectors takes 558 steps, while the frequency method takes 603 steps: 45 delayed steps (Fig. 3).

If \(z=125 \), synchronization detected via the weight-vector cosine takes 397 steps, but the frequency method takes 496 steps; the total number of delayed steps is 99 (Fig. 4).

In the experiments, the cosine method is used first to locate complete synchronization and record the step at which it occurs; the step at which the evaluated method judges the parties fully synchronized is also recorded, and the delay is computed as the difference between the two. Five values of z were set: 25, 75, 125, 175, and 225, and the combinations of \(H_{1},\;H_{2},\;H_{3},\;M \), and Lrange were 3-3-3-101-1, 3-3-3-101-2, 3-3-3-101-3, 3-3-3-101-4, 3-3-3-101-5, 3-3-3-101-6, and 3-3-3-101-7. The mean delayed steps of each case are determined using 10,000 samples. As Fig. 5 shows, when \(H_{1},\;H_{2},\;H_{3},\;M \), and z are fixed, the mean number of delayed steps decreases as Lrange increases; this is due to the misjudgments analyzed previously. The mean number of delayed steps increases linearly with z when \(H_{1},\;H_{2},\;H_{3},\;M \), and Lrange are fixed.

Therefore, it is necessary to select an optimal z for synchronization assessment. The optimum z for each combination of \(H_{1},\;H_{2},\;H_{3},\;M \), and Lrange are chosen using the proposed algorithm. The findings are presented in Table 5.

Real randomness of the generated random input vector was assured by passing the fifteen tests of the NIST statistical test suite NIST [23]; these tests are extremely helpful for a technique intended to have the high level of robustness proposed here. A probability value (p\(\_\)Value) determines the acceptance or rejection of the input vector. Table 6 contains the results of the NIST statistical tests on the generated random input vector and compares the p\(\_\)Values of the proposed TLTPM and the existing CVTPM technique Dong and Huang [10]. Table 6 shows that the proposed TLTPM outperforms CVTPM on the p\(\_\)Values of the NIST statistical tests.

Table 5 The optimal value of z
Table 6 Comparison of p\(\_\)Value between proposed TLTPM and the existing CVTPM

The frequency test measures the proportion of ones and zeros in the synchronized neural session key. The TLTPM procedure achieves a frequency test result of 0.696413, compared to 0.512374 for Kanso and Smaoui [13], 0.1329 for Karakaya et al. [15], 0.632558 for Patidar et al. [25], and 0.629806 for Liu et al. [17]. The p\(\_\)Value analysis of the frequency measure is shown in Table 7.

Table 7 Comparison of p\(\_\)Value of NIST Frequency Test
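For reference, the NIST frequency (monobit) test computes its p\(\_\)Value from the normalized excess of ones over zeros; a minimal sketch following the test’s definition in NIST SP 800-22, with the session key supplied as a sequence of bits:

import math

def frequency_test_p_value(bits):
    s = sum(2 * b - 1 for b in bits)        # map {0,1} -> {-1,+1} and sum
    s_obs = abs(s) / math.sqrt(len(bits))   # normalized deviation
    return math.erfc(s_obs / math.sqrt(2))  # p_Value >= 0.01 passes the test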

The results of simulations for different \(H_{1}\)-\(H_{2}\)-\(H_{3}\)-M-Lrange combinations are shown in Table 8. The minimum and maximum synchronization-step columns give the fewest and most steps needed to synchronize the weights of the two networks, and the average-steps column gives the average over all simulations conducted. The minimum and maximum synchronization-time columns indicate the minimum and maximum time, in seconds, required to synchronize the weights of the two networks. The “Successful synchronization of the attacker E” column shows in how many of the simulations the attacking network imitated the behavior of the two communicating networks. Finally, the “percentage of successful synchronization of the attacker E (%)” column expresses the previous column as a percentage of the total simulations.

As shown in Table 8, the combinations that best defend against the attacking network are (8-8-8-16-8) and (8-8-8-8-128): E did not mimic the behavior of A’s and B’s TLTPMs in any of the 700,000 simulations. The results of 1,200,000 simulations for the (8-8-8-16-8) and (8-8-8-8-128) configurations are shown in Table 9.

Table 8 Results after 700,000 simulations with different values of \(H_{1}\)-\(H_{2}\)-\(H_{3}\)-M-Lrange

Table 10 compares the synchronization time for fixed network size with variable learning rules and synaptic depth in the proposed TLTPM and the existing CVTPM method. Table 10 shows a trend toward more synchronization steps as the range of weight values (Lrange) increases, for all three learning rules. For small Lrange values, in the range 5–15, the Hebbian rule takes fewer synchronization steps than the other two learning rules, but as Lrange increases the Hebbian rule takes more steps to synchronize than the other two. The Anti-Hebbian rule takes the least time in the 20–30 range, and Random Walk outperforms both from 35 upward. As a result, increasing Lrange increases the security of the system.

Table 9 Results after 1,200,000 simulations for the (8-8-8-16-8) and (8-8-8-8-128) combinations
Table 10 Comparison of synchronization time for fixed network size and variable learning rules and synaptic depth in proposed TLTPM and existing CVTPM method
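For reference, the three learning rules compared here are the standard TPM rules; written for hidden unit u, with \(\Theta \) the Heaviside step function and \(g(\cdot ) \) clipping the weights to \([-Lrange,+Lrange] \):

Hebbian: \(weight_u^{+}=g\left( weight_u+xinput_u\,\tau \,\Theta (\sigma _u\tau )\,\Theta (\tau ^{A}\tau ^{B})\right) \)

Anti-Hebbian: \(weight_u^{+}=g\left( weight_u-xinput_u\,\tau \,\Theta (\sigma _u\tau )\,\Theta (\tau ^{A}\tau ^{B})\right) \)

Random Walk: \(weight_u^{+}=g\left( weight_u+xinput_u\,\Theta (\sigma _u\tau )\,\Theta (\tau ^{A}\tau ^{B})\right) \)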

Conclusion and future scope

The assessment of synchronization is a significant aspect of neural synchronization, and the time taken to detect complete synchronization has a major security impact. This paper has proposed an advanced approach for evaluating the synchronization between two neural networks. For generating different key lengths using the neural synchronization process, various combinations of \(H_{1},\;H_{2},\;H_{3},\;M \), and Lrange with variable network size are considered. This paper provides the method for the evaluation of synchronization and the optimum parameter finding algorithm. The proposed approach has been introduced and evaluated under neural synchronization. The results suggest that the proposed method incurs less delay in achieving complete synchronization, and that neural synchronization using the proposed approach is much more robust to geometric attacks than the existing methods. Therefore, the security of neural synchronization can be improved by using the proposed method. The method has several practical implications: this key exchange technique can be used in wireless communication for the secure exchange of information. To validate the experimental findings, several results and analyses were conducted. A small delay in achieving absolute synchronization remains. To extend or improve the current state of knowledge in this field, further research is needed to optimize the weights of the neural network for faster synchronization; any nature-inspired optimization algorithm can be considered for this purpose. The study’s limitation is that it does not consider a strong PRNG with good statistical properties for input vector generation to boost the protection of the neural synchronization method. Two key things require further development in the future: one is to decrease the delay further, and the other is to start a fresh synchronization instead of carrying out a lengthy one.