Generative adversarial network guided mutual learning based synchronization of cluster of neural networks

Neural synchronization is a technique for establishing a cryptographic key exchange protocol over a public channel. Two neural networks receive common inputs and exchange their outputs. After a number of steps this leads to full synchronization, with the discrete weights set according to a specific learning rule. The synchronized weights are used as a common secret session key. However, little research has been done on the synchronization of a cluster of neural networks. In this paper, a Generative Adversarial Network (GAN)-based synchronization of a cluster of neural networks with three hidden layers is proposed for the development of a public-key exchange protocol. This paper highlights a variety of interesting improvements to the traditional GAN architecture. Here the GAN is used as a Pseudo-Random Number Generator (PRNG) for neural synchronization. Each neural network is considered a node of a binary tree framework. When the i-th and j-th nodes of the binary tree are synchronized, one of these two nodes is elected as a leader. This leader node then synchronizes with the leader of the other branch. After completion of this process the synchronized weight becomes the session key for the whole cluster. The proposed technique has several advantages: (1) There is no need to synchronize each neural network with every other in the cluster; instead, the entire cluster can share the same secret key by synchronizing the elected leader nodes, requiring only a logarithmic number of synchronization steps. (2) The proposed technique provides a GAN-based PRNG that is very sensitive to the initial seed value. (3) Three hidden layers lead to a complex internal architecture of the Tree Parity Machine (TPM), so it is difficult for the attacker to guess the internal architecture. (4) An increase in the weight range of the neural network increases the complexity of a successful attack exponentially, while the effort to build the neural key grows only polynomially. (5) The proposed technique also offers synchronization and authentication steps in parallel; it is difficult for the attacker to distinguish between synchronization and authentication steps. The proposed technique has passed different parametric tests, and simulations show its effectiveness in terms of the results cited in the paper.


Introduction
Cryptography is the art of converting plaintext into ciphertext using a secret key, and vice versa. It aims to secure information from eavesdroppers, interceptors, rivals, intruders, enemies, and assailants Bauer [5]. The prime concern is with frameworks for ensuring confidentiality. If the size of the prime numbers used is increased, the security of the encryption algorithm also increases; however, this also increases the computational complexity and cost.
Diffie and Hellman [11]'s public-key exchange algorithm is a key distribution algorithm. It enables two communicating devices to agree upon a common encryption key over an insecure medium. This exchange process and the resulting secret key are used in several fields, such as identity authentication, data encryption, and data privacy protection. The scheme suffers from the Man-In-The-Middle (MITM) attack.
It is, therefore, necessary to search for innovative methods of secure and low-cost protocols for the generation and exchange of cryptographic keys. This is a big challenge, as attacker E must not be able to deduce the key even though the attacker may be able to follow the algorithm framework. The process of synchronization of neural networks provides an ability to address such key exchange problems Chen et al. [7], Liu et al. [31], Chen et al. [6], Wang et al. [52], Wang et al. [53], Xiao et al. [54], Zhang and Cao [55], Wang et al. [51], Dong et al. [15]. Neural cryptography has recently been recognized as capable of achieving this aim through neural synchronization Rosen-Zvi et al. [40] and through in-depth integration and analysis of Artificial Neural Networks (ANN) Lakshmanan et al. [28], Ni and Paul [35].
A special Artificial Neural Network (ANN) framework called the Tree Parity Machine (TPM) is used in the proposed technique. On both the sender and the receiver end, two TPM networks with similar configurations are used. These two networks are synchronized by generating random input vectors, exchanging the outputs of the networks, and keeping the synaptic weights secret. Two users A and B can generate a cryptographic key that is difficult for the attacker to infer, even though the attacker is aware of the algorithm structure and the communication channel. In this paper, a Generative Adversarial Network (GAN)-based PRNG is proposed for generating the input vectors of the neural synchronization process. This research is motivated by the work of Abadi and Andersen [1] on neural networks learning encoding methods, which suggests that even a neural network can represent a good PRNG function. The intention is also derived from security requirements: a hypothetical neural-network-based PRNG has some potentially useful characteristics. It offers the ability, through further training, to make ad hoc alterations to the generator, which can constitute the backbone of strategies for dealing with the type of non-statistical threats presented by Kelsey et al. [25].
The rest of this paper is organized accordingly. Section 2 deals with related works. Section 3 deals with the proposed methodology. Sections 4 and 5 deal with security analysis and the results respectively. Conclusions are given in Sect. 6 and references are given at the end.

Related works
Several efforts have been made to produce PRNG sequences with neural networks, by Desai et al. [10], Desai et al. [9], Tirdad and Sadeghian [50], and Jeong et al. [21]. Tirdad and Sadeghian [50] and Jeong et al. [21] have described the most effective methods. The former used Hopfield neural networks to avoid convergence and promote dynamic behavior, whereas the latter used an LSTM trained on random data samples to acquire indices into the digits of pi. In statistical randomness tests, these two papers reported excellent results. However, neither method sought to train a PRNG neural network end-to-end; rather, the networks were used as modules of more complicated algorithms. Rosen-Zvi et al. [40] and Kanter et al. [23] successfully created equal states of the internal synaptic weights when two ANNs were trained using a particular learning rule. Kinzel and Kanter [26] and Ruttor et al. [42] described how the chance of a successful attack declines as the network's weight range increases: the attacker's computational cost grows exponentially, while the users' effort grows only polynomially. Sarkar and Mandal [44], Sarkar et al. [48], Sarkar et al. [46], Sarkar and Mandal [45], and Sarkar et al. [47] proposed schemes which enhanced the security of the protocol by enhancing the synaptic depth of the TPM, thereby counteracting brute-force attacks. It is found that the amount of security provided by TPM synchronization can also be improved by introducing a larger set of neurons and entries per neuron in the hidden layers. Allam et al. [2] described an authentication algorithm using previously shared secrets. As a result, the algorithm attains a very high degree of defense without increasing synchronization time. Ruttor [41] described that lower values of the hidden units have negative safety implications. Klimov et al. [27] computed whether or not two networks synchronize their weights.
In connection with performance measurements of TPM networks, Dolecki and Kozera [12] proposed a frequency analysis technique that permits the assessment, before completion, of the synchronization rate of two TPM networks with a determined value that is not related to their differences in synaptic weights. Santhanalakshmi et al. [43] and Dolecki and Kozera [13] evaluated the efficiency of the coordinated usage of a genetic algorithm and the Gaussian distribution respectively. As a result, the replacement of random weights with optimum weights decreases the synchronization time. Pu et al. [39] carried out an algorithm which combines "true random sequences" with a TPM network that shows more complex dynamic behaviors, which enhances encryption efficiency and attack resistance. According to the analysis, a safe, uniform distribution for the synaptic weight values of the TPM networks is produced. Further, the Poisson distribution changes in each simulation run as the steps move ahead towards synchronization. In the light of rules leading to the creation of TPM synchronization, Mu and Liao [33] and Mu et al. [34] describe a heuristic of minimum Hamming distance. The proposed heuristic rule was used to test the security level of the final TPM network structure. Concerning improvements to the initial TPM network infrastructure, Gomez et al. [19] observed that the synchronization period is reduced from 1.25 ms to less than 0.7 ms with an initial assignment of 15 to 20% of the weights. The number of steps is also decreased from 220 to less than 100. Niemiec [36] proposed a new concept for the key reconciliation process using TPM networks. Dong and Huang [14] proposed a complex-valued neural network for neural cryptography, in which all the inputs and outputs are complex values. However, this technique takes a significant amount of time to complete the synchronization process.
From this survey, it has been observed that existing work in neural cryptography is based on the mutual synchronization of two neural networks. Synchronizing n neural networks pairwise needs on the order of n² synchronizations. Also, the generation of good random input vectors through a robust pseudo-random number generator in the neural synchronization process has rarely been studied. Existing TPMs do not perform authentication steps in parallel with synchronization. The proposed technique performs cluster synchronization in logarithmic time instead of generating separate keys for individual communications in quadratic time. By implementing a deep learning technique such as a GAN Goodfellow et al. [20] to design a PRNG that produces PRNG sequences explicitly, this paper develops this specific task. Using the NIST Bassham et al. [4] test suite, this paper introduces two simple structures and tests their power as PRNGs. A framework that is simple, conceptually efficient, and reliable is the general result of these modifications. Nearly 95% of NIST tests can be consistently passed by the trained generator, indicating that the adversarial procedure is extremely effective in training the network to act as a PRNG. Findings are comparable to those of Tirdad and Sadeghian [50] and Jeong et al. [21], outperforming a range of regular PRNGs. The proposed cluster synchronization has several advantages, as follows: (1) The proposed technique provides GAN-based robust random input vectors for the synchronization process. (2) To enhance security, a complex internal architecture of the TPM is used. (3) An increase in the weight range of the neural network increases the complexity of a successful attack exponentially, while the effort to build the neural key grows only polynomially. (4) The proposed technique also offers synchronization and authentication steps in parallel.

Proposed methodology
A key aspect of many security algorithms is the Pseudo-Random Number Generator (PRNG). One focus here is to determine how a GAN system can learn to produce randomized sequences of numbers, and whether such a system might be used in a security context as a PRNG. By suggesting the use of a GAN to train an ANN to act as a PRNG, this paper presents a new approach to its implementation. Consequently, the generator learns to generate values that cannot be predicted by the attacker. This paper demonstrates that even a tiny fully connected feed-forward ANN can be efficiently trained by a GAN to generate PRNG sequences with strong statistical characteristics. In this paper, GAN-based common input vector generation for neural synchronization is proposed. The proposed technique considers the range of input and output values as a set U. Table 1 lists the variables used in this GAN-based PRNG for neural synchronization.
The members of set U are all unsigned 16-bit integers. The PRNG can be represented as in the following Eq. 1.
Here, s is the seed value. The seed value is random, which helps to generate a completely random common input vector that in turn satisfies the randomness requirements of the NIST test suite. Here n can be considered a very large number. The model can also be characterized by individual outputs using Eq. 2.
Here C_i is the present internal state, and T represents the set of all tuples (s, C_i). To approximate φ(s), a function G(s) is used. Consider an input O_t that leads to the following Eq. 3, which approximates φ∇(s, C_i) with the network output dimension.
For any fixed s value, the model G(s) is represented by concatenating the generator's output sequences ∀t G∇(s, O_t).
The length of the full sequence G(s) follows Eq. 4. The model reduces the adversary's probability of accurately predicting future outputs from previous outputs. This model can be subdivided into a discriminative and a predictive model.
In the predictive approach, each n-bit output produced by the generator is divided so that the first (n − 1) bits act as the input for the predictor, and the n-th bit is designated as the label. Figure 1 describes the discriminative approach, and Fig. 2 describes the predictive approach.
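As a concrete illustration of this split, an n = 8 bit generator output would be divided as follows (the helper name is our own, not from the paper):

```python
def split_for_predictor(bits):
    """Split one n-bit generator output: the first n-1 bits become the
    predictor's input, the n-th bit becomes the training label."""
    if len(bits) < 2:
        raise ValueError("need at least 2 bits")
    return bits[:-1], bits[-1]

sample = [1, 0, 1, 1, 0, 0, 1, 0]   # one 8-bit generator output
x, y = split_for_predictor(sample)  # x holds the first 7 bits, y the label
```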
The generator consists of a fully connected feed-forward neural network, which is shown in the Eq. 5.
The input vector contains an s value and a scalar O_1. In this model, four hidden layers of thirty units and one output layer of eight units are used, as shown in Fig. 3. The Leaky ReLU activation function is used in the input layer and the hidden layers. A mod activation function is applied to the output layer, mapping values into the desired range while avoiding the pitfalls of sigmoid and tanh.
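A minimal NumPy sketch of this generator architecture is given below. The random weight initialization, the leaky-ReLU slope, and the 2^16 modulus (matching the unsigned 16-bit set U) are our own illustrative assumptions; a trained generator would of course use learned weights.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def mod_activation(x, modulus=2**16):
    # map raw outputs into the unsigned 16-bit range [0, 2**16)
    return np.mod(x, modulus)

# layer sizes: (seed, counter) input -> 4 hidden layers of 30 -> 8 outputs
sizes = [2, 30, 30, 30, 30, 8]
rng = np.random.default_rng(0)
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def generator(seed, counter):
    h = np.array([seed, counter], dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = leaky_relu(h @ W + b)          # leaky ReLU on the hidden layers
    return mod_activation(h @ weights[-1] + biases[-1])

out = generator(seed=12345.0, counter=1.0)  # one 8-value pseudo-random output
```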
A Convolution Neural Network (CNN) is used as a discriminator using the following Eq. 6.
where r is an 8-long vector produced by the generator. The discriminator outputs a scalar p(true) in [0, 1], the likelihood that the input belongs to one class or the other. The discriminator is made up of four stacked CNN layers, each having four filters, a kernel size of two, and a stride of one, followed by a max-pooling layer and two fully connected layers of four and one units respectively. The stack of convolution layers allows the network to identify complex input patterns. The least-squares loss is used for the discriminative purpose. Figure 4 shows the structure of the discriminator.
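The shapes through this discriminator stack can be checked with a small helper; the max-pooling window of size 2 is an assumption of ours, since the paper does not state it:

```python
def conv1d_out_len(n, kernel=2, stride=1):
    """Output length of a 1-D convolution without padding."""
    return (n - kernel) // stride + 1

n = 8                       # the 8-long generator output r
for _ in range(4):          # four stacked conv layers, kernel 2, stride 1
    n = conv1d_out_len(n)   # 8 -> 7 -> 6 -> 5 -> 4
pooled = n // 2             # assumed pool size 2: 4 -> 2 features per filter
```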
A Convolution Neural Network (CNN) is used as a predictor using the following Eq. 7.
where r_split is the output vector of the generator with the last element discarded; the discarded n-th bit is designated as the label. The discriminator and predictor have the same design, but their input sizes and the meanings of their results differ. An absolute difference loss is used for the predictive purpose. The NIST statistical test for randomness is performed on a variety of outputs generated for a particular seed, both before and after training. Next, a predefined evaluation data set D consisting of input vectors is initialized. Generator performance is assessed before and after training using the NIST test. The procedure is repeated at least twenty times for the discriminative and predictive approaches.
Here, a complete binary tree framework is used to synchronize the cluster of TLTPMs. Each TLTPM is considered a node of the tree framework shown in Fig. 5. In a complete binary tree, a node with no child is called a leaf node. Consider j = 1 in the initial round (j is the round number) and leaves = 8. The binary tree is then divided into leaves/2^j subtrees, each with 2^j leaves, i.e., four subtrees with two leaves each. Siblings are involved in mutual learning at height four.
In round 1 siblings become synchronized using the mutual learning step shown in Fig. 6. From each subtree, a node is nominated as a leader among the nodes having the same parents to perform the next round of operation.
In round 2 when j = 2 and leaves = 8, the tree gets divided into two subtrees with four leaves at height 3 shown in Fig. 7. In this way next rounds are performed until the root node at height 1 is synchronized.
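The round structure above can be sketched as follows. The `min` call is a stand-in for one mutual-learning synchronization between two siblings followed by leader election; the function name and the election criterion are illustrative, not from the paper.

```python
def cluster_sync_rounds(nodes):
    """Count the rounds needed to synchronize a cluster arranged as a
    complete binary tree: in each round siblings synchronize pairwise
    and one node of each pair is elected leader for the next round."""
    rounds = 0
    while len(nodes) > 1:
        # pair up siblings; the surviving node stands in for the leader
        nodes = [min(a, b) for a, b in zip(nodes[::2], nodes[1::2])]
        rounds += 1
    return rounds

rounds = cluster_sync_rounds(list(range(8)))   # 8 leaf TLTPMs -> 3 rounds
```

For 2^k leaves this is exactly k = log2(leaves) rounds, in contrast with the O(n²) pairwise synchronizations noted in the survey section.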
On completion of the synchronization process, all the nodes in the cluster share a common cluster session key. Table 2 lists the parameters used in this article.
This special neural network, the TLTPM, is composed of M input neurons for each of the H hidden neurons. The TLTPM has a single output neuron. The TLTPM works with binary inputs, α_{u,v} ∈ {−1, +1}. The mapping between input and output is described by the discrete weight values between −κ and +κ, β_{u,v} ∈ {−κ, −κ + 1, …, +κ}.
In the TLTPM, the u-th hidden unit is described by the index u = 1, …, H, and v = 1, …, M denotes the input neuron corresponding to the u-th hidden neuron. Consider H1, H2, and H3 hidden units in the first, second, and third hidden layers respectively. Each hidden unit of the first layer calculates its output by performing a weighted sum over the present state of the inputs on that particular hidden unit. Each hidden unit of the second layer calculates its output by performing a weighted sum over the outputs of the first-layer hidden units, and similarly each hidden unit of the third layer performs a weighted sum over the outputs of the second-layer hidden units. The calculation for the first hidden layer is given by Eq. 8.
signum(h_u) defines the output γ_u of the u-th hidden unit (in Eq. 9). If h_u = 0 then γ_u is set to −1 to keep the output binary. If h_u > 0 then γ_u is mapped to +1, which indicates that the hidden unit is active. If γ_u = −1, the hidden unit is inactive (in Eq. 10).
The product of the outputs of the third-layer hidden neurons gives the final output of the TLTPM, represented by ζ (in Eq. 11). The value of ζ is mapped as follows (in Eq. 12): ζ = γ_1 if there is only one hidden unit (H = 1). The same ζ value can arise from 2^(H−1) different representations (γ_1, γ_2, …, γ_H).
If the outputs of the two parties disagree, ζ^A ≠ ζ^B, then no update of the weights is allowed. Otherwise, one of the following rules is applied. Both TPMs can be trained from each other using the Hebbian learning rule Kinzel and Kanter [26] (in Eq. 13).
In the Anti-Hebbian learning rule, both TPMs learn with the reverse of their own output Kinzel and Kanter [26] (in Eq. 14).
If the particular value of the output is not important for tuning, given that it is the same for all participating TPMs, then the random-walk learning rule Kinzel and Kanter [26] is used (in Eq. 15).
Only the weights of hidden units with γ_u = ζ are updated. The bounding function f_n(β) is applied in each learning rule to keep the weights in range (in Eq. 16).
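A toy sketch of this update condition and the Hebbian rule with the bounding step is shown below. It is simplified to a single hidden layer, unlike the paper's three-layer TLTPM, and all parameter values (κ = 3, H = 3, M = 4, the step budget) are illustrative.

```python
import numpy as np

def tpm_output(W, X):
    """W: (H, M) integer weights in [-kappa, kappa]; X: (H, M) inputs in {-1, +1}.
    Returns the hidden-unit outputs gamma and the overall output zeta."""
    h = np.sum(W * X, axis=1)
    gamma = np.where(h > 0, 1, -1)       # signum with h = 0 mapped to -1
    return gamma, int(np.prod(gamma))

def hebbian_update(W, X, gamma, zeta, kappa):
    """Update only hidden units with gamma_u == zeta, then bound the
    weights to the range [-kappa, kappa] (the f_n(beta) step)."""
    mask = (gamma == zeta)[:, None]
    return np.clip(W + mask * X * zeta, -kappa, kappa)

rng = np.random.default_rng(1)
kappa, H, M = 3, 3, 4
WA = rng.integers(-kappa, kappa + 1, size=(H, M))
WB = rng.integers(-kappa, kappa + 1, size=(H, M))
for _ in range(5000):
    X = rng.choice([-1, 1], size=(H, M))     # common input vector
    gA, zA = tpm_output(WA, X)
    gB, zB = tpm_output(WB, X)
    if zA == zB:                             # update only when outputs agree
        WA = hebbian_update(WA, X, gA, zA, kappa)
        WB = hebbian_update(WB, X, gB, zB, kappa)
    if np.array_equal(WA, WB):
        break
synced = np.array_equal(WA, WB)              # synchronized weights = shared key
```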
The likelihood distribution of the weight values in the u-th hidden neuron of the two TLTPMs is represented by (2κ + 1) × (2κ + 1) variables p_u^{a,b} (in Eq. 17).
The standard order parameters [18] can be calculated as functions of p_u^{a,b}, as shown in Eqs. 18, 19, and 20.
Tuning is represented by the normalized overlap [18] given in Eq. 21.
Using Eqs. 22, 23, and 24, the mutual information Cover and Thomas [8] of A's and B's TLTPMs is represented using Eq. 25.
The likelihoods of observing γ_u α_{u,v} = +1 or γ_u α_{u,v} = −1 are not equal; they depend on the corresponding weight β_{u,v} (in Eq. 26).
Since γ_u α_{u,v} = signum(β_{u,v}) occurs more frequently than γ_u α_{u,v} = −signum(β_{u,v}), the stationary likelihood distribution of the weights for t → ∞ is computed using Eq. 19 for the transition likelihood Ruttor et al. [42]. This is represented using Eq. 27.
Here the normalization constant p 0 is given by Eq. 28.
For M → ∞, the argument of the error functions vanishes, so that the weights stay uniformly distributed, as shown in Eq. 29.
Otherwise, if M is finite, the likelihood distribution represented using order parameter Q u , shown in the following Eq. 30.
In the case of 1 ≪ κ ≪ √M, the asymptotic behavior of the order parameter is represented using Eq. 32.
First-order approximation [42] of Q u is given by Eq. 33.
If ζ^A = ζ^B but γ_u^A ≠ γ_u^B, the weight of one hidden neuron is changed. The weights execute an anisotropic diffusion in the case of attractive steps (in Eq. 35).
Repulsive steps, as an alternative, are equal to normal diffusion steps on the same lattice (in Eq. 36). Δρ_attr(ρ) and Δρ_repu(ρ) are random variables; the step sizes for attractive and repulsive steps are shown in Eqs. 37 and 38.
At the initial state of the synchronization, the effect is at its maximum Klimov et al. [27] (in Eq. 39), since the weights are uncorrelated, as shown in Eq. 40. The maximum effect (in Eq. 41) is achieved for completely harmonized weights (in Eq. 42).
The weights are updated if ζ^A = ζ^B. ε_u is the generalization error. For a common overlap of the H hidden neurons, ε_u = ε, and the likelihood is represented using Eq. 43.
For the synchronization of TLTPMs with H > 1, the likelihoods of attractive and repulsive steps are represented by Eqs. 44 and 45.
The harmonization time T^m_{r,s} for two random walks beginning at position s with distance r is given by Eq. 48. R^f_{r,s} is the time of the first reflection Ruttor et al. [42].
The tuning time T^m for arbitrarily selected beginning positions of the two random walkers is represented using Eqs. 50 and 51. The mean number of attractive steps needed to achieve a harmonized state increases nearly proportionally to m², as given by Eq. 52.
This result is consistent with the scaling behavior t_synch ∝ κ² found for TPM harmonization Kanter et al. [23]. The standard deviation of T^m is shown in Eq. 53.
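The κ² scaling can be illustrated with a toy simulation of two random walkers on {−κ, …, +κ} driven by common moves, where the reflecting boundaries provide the attractive effect; this is a simplification of the weight dynamics above, with illustrative parameters.

```python
import random

def sync_time(kappa, rng):
    """Steps until two walkers driven by identical moves coincide;
    reflection at the boundaries -kappa and +kappa shrinks their distance."""
    a = rng.randint(-kappa, kappa)
    b = rng.randint(-kappa, kappa)
    steps = 0
    while a != b:
        move = rng.choice([-1, 1])
        a = max(-kappa, min(kappa, a + move))
        b = max(-kappa, min(kappa, b + move))
        steps += 1
    return steps

rng = random.Random(42)
mean2 = sum(sync_time(2, rng) for _ in range(500)) / 500
mean4 = sum(sync_time(4, rng) for _ in range(500)) / 500
# the ratio mean4 / mean2 grows roughly like (4 / 2)**2, reflecting T ∝ kappa**2
```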

Geometric attack
In the proposed technique, a geometric attack on the TLTPM is considered. The likelihood of γ_u^E ≠ γ_u^A is represented by the prediction error Ein-Dor and Kanter [17] of the perceptron, using Eq. 55. When the u-th hidden neuron disagrees and all the other hidden units satisfy γ_v^E = γ_v^A, then the likelihood Ruttor et al. [42] of a modification by the geometric attack is shown using Eq. 56.
For identical order parameters Q_v = Q_v^E and R_v = R_v^{AE} and different outputs γ_v^A ≠ γ_v^E, the likelihood of a modification to γ_u^E = γ_u^A is shown using Eq. 57.
Using the same equation, the likelihood of an incorrect modification, leading to γ_u^E ≠ γ_u^A, is shown with the help of Eq. 58.
If this condition is satisfied and, in total, an even number of hidden units satisfy it, then no geometric modification is done. Equation 59 represents this.
The second part of P_r^E can be represented using Eq. 60. The third part of P_r^E can be represented using Eq. 61.
If H > 1 then the probability value of attractive steps and repulsive steps in the u-th hidden unit are represented using Eqs. 62 and 63.
Attractive steps are performed when H = 1. If H = 3, the probability values can be calculated using Eq. 56, which yields Eqs. 64 and 65.

Authentication Steps
In the TLTPM, the inputs of both parties A and B act as a common secret. The probability of an input vector α^{A/B}(t) having a particular parity p ∈ {0, 1} is 0.5. For authentication purposes, this parity is transmitted as the output bit ζ^{A/B}. At any given time t, with common inputs for both parties, the probability of identical outputs is given in Eq. 66.
Given a number n (1 ≤ n ≤ α) of pure authentication steps, in which one party transmits the parity of the corresponding input vector directly as its output ζ^{A/B}, the probability that the two parties produce the same output n times by chance (and thus are likely to have the same n inputs) decreases exponentially with n, i.e. P(ζ^A(t) = p = ζ^B(t)) = (1/2)^n. For a statistical security level ε ∈ [0, 1], select n = α authentication steps such that 1 − (1/2)^α ≥ ε, which can be computed as α = ⌈log₂(1/(1 − ε))⌉. With α = 14, the achievable statistical security is ε = 1 − 2^{−14} ≈ 0.99994, i.e. 99.994%. The synchronization period for this technique therefore increases by α authentication steps, depending on the necessary level of security ε. A certain bit sub-pattern in the input vector can be reserved for authentication only, such that the security threshold is reached soon enough with a certain probability. Inputs are uniformly distributed, so the last m bits are also uniformly distributed. Now select those inputs that possess a defined bit sub-pattern (e.g. 0101 for m = 4). The probability of such a fixed m-bit sub-pattern occurring is (1/2)^m, because each bit has a fixed value with probability 0.5. Thus, for four bits, on average every sixteenth input would be used for authentication. An authentication step is performed when the sub-pattern arises; then one of the parties sends out the parity of the corresponding input vector as its output ζ^{A/B}. The other party obtains the same parity only if it has the same inputs. Such authentication does not manipulate the learning process at all. Because the inputs are secret, an attacker cannot know when exactly such an authentication procedure takes place. Therefore, the total number of keys is 256^{64n}. An attacker checks, on average, half of the possible keys; the time needed at 1 decryption/μs is 0.5 × 256^{64n} μs = 0.5 × 2^{8×64n} μs = 0.5 × 2^{512n} μs = 2^{512n−1} μs.
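The α and sub-pattern arithmetic above can be reproduced directly; note that with α = 14 the achieved security is 1 − 2^{−14} ≈ 0.99994, i.e. about 99.994%:

```python
import math

def auth_steps(epsilon):
    """Smallest alpha satisfying 1 - (1/2)**alpha >= epsilon."""
    return math.ceil(math.log2(1.0 / (1.0 - epsilon)))

alpha = auth_steps(0.9999)        # 14 authentication steps, as in the text
achieved = 1.0 - 0.5 ** alpha     # statistical security actually reached
m = 4
subpattern_rate = 0.5 ** m        # a fixed 4-bit sub-pattern: every 16th input
```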

Secret key space analysis
Consider any single encryption using a neural key of size 512 bits, which hypothetically needs to be analyzed in the context of the time taken to crack the ciphertext with the fastest supercomputers available at present. In this neural technique, the number of permutation combinations of the neural key to be searched is 2^512 ≈ 1.34 × 10^154.
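At the 1 decryption/μs rate used in the authentication section, the exhaustive-search estimate works out as follows:

```python
keyspace = 2 ** 512                    # possible 512-bit neural keys
avg_tries = keyspace // 2              # attacker checks half the keys on average
avg_seconds = avg_tries / 10**6        # at one decryption per microsecond
avg_years = avg_seconds / (3600 * 24 * 365)
# avg_years is on the order of 10**140 -- far beyond any feasible computation
```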

Results and analysis
For result and simulation purposes, an Intel Core i7 10th Generation 2.6 GHz processor with 16 GB RAM is used. Comprehensive security aspects have been examined to address key security and robustness issues. A precision of 10^{−15} has been used in arithmetic operations, in accordance with the IEEE 754 standard.
True randomness was ensured in the proposed technique by passing the fifteen tests contained in the NIST test suite. These tests are very useful for a proposed technique requiring high robustness. A probability value (p_Value) determines the acceptance or rejection of the input vector generated by the GAN. Table 3 contains the results compared with [14]: the p_Values of the proposed TLTPM averaged over 5 and 10 iterations, and the p_Value of the existing CVTPM averaged over 10 iterations. From Table 3, it can be seen that in the NIST statistical tests the p_Value of the proposed TLTPM has outperformed that of the CVTPM. This confirms that the GAN-generated input vector has better randomness than the existing CVTPM. The result of the frequency test indicates the ratio of 0s and 1s in the generated random sequence. Here, the value of the frequency test is 0.696413, which is quite reasonable Kanso and Smaoui [22] and better than the frequency test results of 0.1329 in Karakaya et al. [24], 0.632558 in Patidar et al. [38], and 0.629806 in Liu et al. [30]. A comparison of the p_Values of the NIST frequency test is given in Table 4.
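For reference, the NIST frequency (monobit) test that produces these p-values computes p = erfc(|S| / √(2n)), where S is the sum of the sequence's bits mapped to ±1; a minimal implementation:

```python
import math

def monobit_p_value(bits):
    """NIST frequency (monobit) test: a p-value near 1 means the
    proportion of 0s and 1s is close to balanced."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

p_balanced = monobit_p_value([0, 1] * 500)   # perfectly balanced sequence
p_biased = monobit_p_value([1] * 100)        # all ones: near-zero p, rejected
```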
The results of simulations for different combinations of network parameters are shown in Table 5. The minimum and maximum synchronization step columns indicate the minimum and maximum numbers of steps needed to synchronize the weights of the two networks. The average steps column indicates the average over all simulations conducted. The minimum and maximum synchronization time columns indicate the minimum and maximum time in seconds required to synchronize the weights of the two networks. The successful synchronization of the attacker E column shows in how many instances, out of the total simulations performed, the attacking network successfully imitated the behavior of the other two networks. Finally, the percentage of successful synchronization of the attacker E (%) column expresses the earlier column as a percentage of the total simulations. As shown in Table 5, the best results for defense against the attacking network are the combinations (8-8-8-16-8) and (8-8-8-8-128): E did not mimic the behavior of A's and B's TLTPMs in any of the 500,000 simulations. Table 6 shows that the best combination of values is (8-8-8-16-8). Table 7 shows the comparison of synchronization time for fixed network size and variable learning rules and synaptic depths in the proposed TLTPM and the existing CVTPM method. From the table, a trend towards an increase in synchronization steps can be seen as the range of weight values κ increases, for all three learning rules. For small κ values, in the range of 5 to 15, Hebbian takes fewer synchronization steps than the other two learning rules, but as κ increases the Hebbian rule takes more steps to synchronize than the other two. The Anti-Hebbian rule takes less time than the other two learning rules in the 20 to 30 range, and Random Walk outperforms both at 35 and beyond. As a result, by increasing the value of κ, the security of the system can be increased.

Conclusion and future scope
In this paper, a GAN-based public-key exchange protocol is proposed. By presenting several alterations to the GAN system, this research makes a number of innovative contributions. This paper presents a rationalization of the GAN framework applicable to this work, in which the GAN does not include a reference dataset that the generator should learn to mimic. Also, instead of a recurrent network, this work realizes the statefulness of the PRNG using a feed-forward neural network with additional non-random "counter" inputs. For generating different key lengths using the neural synchronization process, various combinations of H1-H2-H3-M-κ with variable network sizes are considered. The security and synchronization time of the GAN-based TLTPM are also investigated. A geometric attack is considered, and it has been shown to have a lower success rate. GAN-based TLTPM security is found to be higher than that of the TPM with the same set of network parameters. Finally, a variety of results and analyses are presented to confirm the experimental findings. As future work, a more comprehensive security analysis is planned. Also, different nature-inspired optimization algorithms will be considered for the optimization of weight values for faster synchronization.

Competing interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.