1 Introduction

Proteins are the macromolecules responsible for nearly all of the functions required to sustain life, including cell structural support, immune defense, enzymatic catalysis, cell signal transduction, and translational control. These functions are made possible by the distinctive three-dimensional structures adopted by individual protein molecules. The objective of protein structure prediction methods is to use computational models to determine the spatial location of every atom in a protein molecule starting from only its amino acid sequence. Depending on whether homologous structures are available in the Protein Data Bank (PDB), protein structure prediction methods are generally classified as template-based modeling (TBM) or template-free modeling (FM) approaches. Template-based modeling (TBM) was proposed in [1] using deep learning techniques, significantly increasing prediction precision. Even so, this precision comes at a computational cost. Nevertheless, the rapid improvement observed over the past few years suggests that the holistic protein structure prediction problem may be addressed by deep learning, with predictions that may eventually attain accuracies on par with experimental methods.

The structure and function of a protein are determined by the arrangement of its linear sequence of amino acids in 3D space. Protein design using deep graph neural networks (PD-DGNN) was proposed in [2] by utilizing energy-based scores and molecular dynamics. As a proof-of-principle validation, ProteinSolver was then used to generate sequences matching the fold of serum albumin; the top-scoring designs were synthesized and validated in vitro using circular dichroism, thereby contributing to accuracy. Although accuracy was improved, the specificity and recall measures were not reported. One limitation of PD-DGNN methods for protein design is the steep learning curve and the considerable domain expertise required to produce rational and logical predictions.

A deep learning-based S-glutathionylation site prediction tool, referred to as a computational framework (CF), was proposed in [3] to identify species-specific S-glutathionylation sites. In that study, species-specific prediction was performed using deep learning combined with particle swarm optimization, significantly improving the prediction results. Despite the improvement in prediction accuracy, prediction time was not addressed. Although good performance is achieved by the DeepGSH tool, numerous characteristics remain to be included, such as evolutionary information, protein-protein interactions, and secondary structures, which may further improve accuracy.

Deep learning (DL) [4] has been shown to handle data structures of various sizes, cope with noisy data, learn raw features without the need for feature engineering, and generalize sensibly to data not used during training. Moreover, the ultimate objective of bioinformatics is not only prediction accuracy but also a thorough understanding of the fundamental biological processes at work. Each member of a protein structure family may differ slightly in shape from every other member, which places an intrinsic accuracy constraint on deep learning-based modeling. This highlights the increased significance of structure cleansing for subsequent protein structure prediction.

A distance-based protein structure prediction method using deep learning, called DPSP, was proposed in [5]. The method used a distance geometry algorithm to improve protein threading when no good templates are present in the Protein Data Bank, achieving a significant gain in accuracy while concentrating the errors. Although prediction via predicted distance distributions performed well, the protocol was not found to be optimal for constructing 3D models.

1.1 Contributions

Motivated by the above state-of-the-art methods for secondary protein structure prediction, in this work an Energy Profile Bayes and Thompson Optimized Convolutional Neural Network (EPB-OCNN) method is proposed. The major contributions of the EPB-OCNN method are given below.

  • A novel secondary protein structure prediction method is proposed based on the Energy Profile Bayes and Thompson Optimized Convolutional Neural Network model, which offers maximum accuracy and precision with minimum time, thereby contributing to overall prediction accuracy.

  • Separate algorithms are designed for Energy Profile Bayes and for the Thompson Optimized Convolutional Neural Network, respectively, thereby addressing protein structure prediction time, accuracy, precision, specificity, recall, and MCC.

  • An integrated theoretical and qualitative analysis, together with experimental results, is given to validate the proposed method.

  • The performance was evaluated through extensive simulations on the Protein Data Bank dataset. In comparison with TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], the EPB-OCNN method is superior in terms of protein structure prediction time, accuracy, precision, specificity, and recall.

1.2 Contribution explanation

The Energy Profile Legion-Class Bayes Protein Structure Identification algorithm and the Thompson Optimized CNN Protein Secondary Structure Prediction algorithm are explained in detail below.

1.2.1 Energy Profile Legion-Class Bayes Protein Structure Identification algorithm

The Energy Profile Legion-Class Bayes Protein Structure Identification algorithm differs from existing methods in that it maintains energy profiles for two atoms of given types that depend on the sequence-profile context of each atom in the protein. Next, the Legion Class Bayes function, applied via the Location-Specific Resultant Matrix, reveals patterns with maximum precision.

1.2.2 Thompson Optimized CNN Protein Secondary Structure Prediction algorithm

In the Thompson Optimized CNN Protein Secondary Structure Prediction algorithm, the optimization function in the pooling layer performs nonlinear down-sampling, reducing the dimensionality of the features and parameters to obtain relevant and precise protein structure predictions. In addition, the learning rate hyperparameter controls how much the model changes in response to the estimated error at each step of the prediction process.
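As a minimal illustration (not the EPB-OCNN implementation itself), the two ideas above, nonlinear down-sampling in the pooling layer and a learning-rate-scaled weight update, can be sketched in plain Python; the window size and learning rate used here are illustrative assumptions:

```python
def max_pool_1d(features, window=2):
    """Nonlinear down-sampling: keep the strongest activation in each
    window, reducing the dimensionality of the feature map."""
    return [max(features[i:i + window])
            for i in range(0, len(features) - window + 1, window)]

def sgd_step(weights, gradients, learning_rate=0.01):
    """The learning-rate hyperparameter scales how far each weight moves
    in response to the estimated error at every prediction step."""
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

pooled = max_pool_1d([0.2, 0.9, 0.1, 0.4, 0.7, 0.3])   # -> [0.9, 0.4, 0.7]
updated = sgd_step([1.0, -0.5], [0.2, -0.1])           # small step toward lower error
```

A smaller learning rate makes each update more conservative, trading training speed for stability.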

1.3 Organization of the paper

The fundamental materials and methods connected with this study, together with the related works, are briefly discussed in Sect. 2. The pseudo-code representation, with the aid of a block diagram, of the proposed Energy Profile Bayes and Thompson Optimized Convolutional Neural Network (EPB-OCNN) method is outlined in Sect. 3. The experimental setup and results are summarized and contextualized in Sect. 4. Finally, the study is concluded in Sect. 5.

2 Related works

Secondary protein structure prediction is a paramount issue in structural biology and structural bioinformatics. Although enormous progress has been seen in structural bioinformatics in recent years, the accuracy of predicted structures tends to vary widely, depending on the availability of supplementary information and the frequency of homologous structures and sequences in databases. Soluble and membrane protein design was proposed in [6] for fine-tuning low-resolution protein structures, accurately modeling drug binding sites, and modeling solvent-mediated protein catalysis. However, the process was laborious and remained difficult to resolve at high resolution. A spherical graph convolutional network, in which a protein is denoted as a molecular graph, was designed in [7] for accurate structure prediction via angular information. The spherical convolution technique can also be integrated with other methods to assess protein model quality, and it can accommodate additional input features; even better prediction results may be attainable by including biological and chemical information in the input graphs.

Protein loop modeling by means of deep learning was presented in [8]. Owing to limitations in computing capability, simulation experiments were performed only for a limited set of distinct network configurations; additional configurations could further improve overall loop modeling performance. The objective of protein structure prediction is to employ computational models to estimate the spatial location of every atom in a protein starting from its amino acid sequence. Despite the stagnation observed over the past two decades, the recent application of deep neural networks to spatial prediction and end-to-end modeling has greatly improved protein structure prediction accuracy.

The incorporation of deep learning techniques into numerous steps of protein folding and design was proposed in [9]. While many open issues remain in the domain, the advances of the past few years give hope that one of the most difficult and significant biological problems, predicting the stable structure of a protein from its amino acid sequence alone, could be addressed via deep learning in the near future. Owing to a lack of sufficiently many solved structures, a high-throughput deep transfer learning model facilitating drug discovery was designed in [10]. One crucial disadvantage of deep learning techniques is that the resulting models are difficult to interpret. Although machine learning approaches have offered solutions to this issue, extracting simple rules from a deep learning model that explain why a particular pair of residues is predicted to form a contact remains hard; alternatively, a simpler contact prediction model could be developed.

A neural network model was presented in [11] for dynamics analysis and function prediction. Protein structure and function prediction was performed for dynamic analysis, comprising prediction from amino acid sequences using multiple sequence alignment, utilization of assay data for protein-compound interaction prediction, and application of molecular dynamics simulation for protein detection. Although the study presented only a single approach, room for improvement remains. Machine learning techniques were applied in [12] for AlphaFold2 protein structure prediction, involving a foundational reconfiguration of biomolecular modeling. Although AlphaFold2 is unquestionably a milestone in the long history of protein structure prediction, high accuracy is achieved only for single-domain prediction, and predictions sensitive to minor sequence changes that cause crucial structural alterations remain uneven and untested.

In [13], deep learning methods such as convolutional neural networks and recurrent networks were applied to enhance protein structure prediction accuracy. More specifically, although these networks rely heavily on sensitive models to detect similarity among dissimilar structures and sequences, it is not clear whether the predictions accurately denote low-energy arrangements beyond their correctness; further enhancement would require additional computational resources and a high volume of data. Different machine learning methods using a support vector machine and a decision tree were applied in [14] to enhance the stability of protein structure forecasting. The proposed support vector machine and decision tree model was advantageous for analyzing protein sequences with unknown structures; however, these machine learning methods could be applied to other protein datasets to ensure a more rigorous analysis of protein structure prediction.

The biological functions of a protein are fundamentally connected with its structure. Hence, over the last few years, protein structure identification has been a hot research topic in bioinformatics. Accurate protein structure identification may help research communities evaluate numerous protein functions. The primary structure of a protein is a polymer built from 20 amino acid types, which are responsible for numerous functions, and these functions largely depend on the corresponding structures. Hence, protein structure information provides indicators for secondary and tertiary protein structure prediction.

A framework called protein distance net was proposed in [15] for protein structure prediction, training and testing on real-valued distances. Protein distance can also be analyzed by testing the importance of additional features via the covariance and precision matrices, and an in-depth analysis can focus on the loss function used specifically for distance prediction. A deep structural inference method for proteins that integrated deep learning with template-based structural modeling was designed in [16] to address the protein structure prediction problem; large-scale tertiary structure prediction was performed for over 1200 single-domain proteins, successfully predicting four times as many tertiary structures as before. In [17], a clustering recurrent neural network was proposed for predicting distance matrices, torsion angles, and secondary structures. However, the method was highly expensive compared with shallow learning methods, and measures to speed up the overall learning process while maintaining prediction accuracy were not considered.

Despite the improvement observed in accuracy, the method was not computationally efficient. Another novel computational method using deep learning was investigated in [18] to achieve secondary structure prediction accuracy. Moreover, a multi-task learning strategy was utilized to predict both secondary structures and transmembrane helices. The method was trained and tested on an independent, non-redundant dataset; the resulting secondary structure prediction accuracy was 78% for the non-transmembrane region and 90% for the transmembrane region. A linear predictive coding model using position-specific score matrices was proposed in [19] for predicting protein structural class. The method was satisfactory in comparison with other methods on a single type of feature, thereby ensuring a cost-effective mechanism for predicting protein structural class.

A secondary structure prediction method based on position-specific scoring values using a matrix representation was proposed in [20]. Although numerous machine learning algorithms have been proposed for predicting secondary structures, the rate of improvement has been comparatively small. This is because the amino acid feature set was already introduced around 2000, and there have since been only elementary shifts in the features used for prediction; to improve accuracy, the foundation itself must be enhanced. In [21], an integration of physical, chemical, statistical, and biological characteristics of proteins was employed as features for a novel mechanism to predict post-translational modification sites, contributing to accuracy. Despite the improvement in accuracy, the numerous types of protein post-translational modification must be elaborated in more biological detail, and specialized structural information should be utilized for feature prediction.

Another deep learning approach based on position-specific scoring matrices employing a deep network architecture was investigated in [22]. Considering the enormous effort required of researchers to achieve small enhancements, a realistic objective is to concentrate on improving the overall protein prediction. A review of deep learning for protein structural modeling was presented in [23]. To gain a deeper view into the fundamental science of biomolecules, artificial intelligence (AI) will need to be associated with biochemical and biophysical properties; a holistic approach to the underlying strategies and hidden patterns that lead to the development of therapeutics is also needed. A state-of-the-art machine learning method was proposed in [24] by building on the AlphaFold model, thereby ensuring high-quality predictions. Key features of AlphaFold2, in particular its attention mechanism, were examined in [25], ensuring computational capability.

Protein model quality assessments were made in [26] by employing spherical convolutions via rotation-equivariant spherical filters, ensuring a critical assessment against structure prediction benchmarks. Despite the critical assessment, the precision with which the benchmark results were obtained was not discussed. Gaining a thorough insight into protein structure is considered a laborious step toward the design and development of new drugs; a thorough knowledge of protein functionality would provide a good understanding of the machinery and organization of life, paving the way for great social impact.

In [27], protein engineering was performed by means of deep learning with only a small proportion of protein sequence descriptors, addressing protein redesign problems in the pharmaceutical industry in a precise manner. Although an extensive protein sequence selection and state-of-the-art machine learning techniques were provided in detail, further enhancement would still improve overall performance; one objective would be to enhance the learning rate policy, minimizing training time and thereby improving overall performance. A critical assessment of protein structure prediction using a simple gradient descent algorithm was proposed in [28] with increased accuracy. From the analysis results, it can be inferred that the overall process can be optimized by means of a simple gradient descent algorithm, which would obtain structures without complicated sampling procedures.

An ensemble of deep convolutional neural networks for protein function prediction was investigated in [29] to address time analysis. The proposed method enables swift prediction of protein function, making room for pertinent applications such as target identification for pharmacological purposes; a thorough analysis can also be made of hierarchical function prediction and enzyme annotation in the enzyme classification system. Automated protein structure prediction using I-TASSER was proposed in [30]. From the amino acid sequence of the target protein, I-TASSER first generated full-length atomic structural models based on multiple threading alignments, followed by iterative assembly simulation with atomic-level structure refinement. Finally, the biological functions of the protein, comprising ligand-binding sites and enzyme commission numbers, were derived from protein function databases based on sequence and comparative structure profiles. A prediction model of a protein consisting of a Kunitz-type trypsin inhibitor from the seeds of Acacia nilotica (L.), based on antimicrobial and insecticidal activity, was proposed in [31]; two generations of progeny were studied, reducing the mean percent mortality.

Protein family identification and classification are among the most important issues in bioinformatics and protein studies. The protein family is of particular interest because it plays a central role in smart drug therapies, protein function analysis, and so on; however, determining these families via sequencing still consumes an enormous amount of time. A novel protein mapping method based on Fibonacci numbers and a hashing table, called FIBHASH, was designed in [32]. A Fibonacci number was assigned to each amino acid code based on its integer representation; these codes were then inserted into a hashing table for classification using recurrent neural networks, improving protein mapping accuracy to a great extent. At present, the novel coronavirus (COVID-19) is a swiftly proliferating disease with a high mortality rate.

In [33], interactions between specific flavonols and the 2019-nCoV receptor binding domain (RBD) and cathepsins (CatB and CatL) were analyzed. Based on the Relative Binding Capacity Index (RBCI), estimated from the free energy of binding and the calculated inhibition constants, robinin (ROB) and gossypetin (GOS) were determined to be the most significant flavonols among all the targets. Biological sequence data such as nucleotides and amino acids are stored in databases comprising billions of records; to process such enormous data in a comparatively short time, high-performance analysis models have been designed.

In [34], pairwise and multiple sequence alignment operations were proposed to perform sequence alignment for bioinformatics in minimal time. For uncharacterized protein sequences, automated prediction of protein function is a critical issue. Over the past few years, deep learning-based algorithms have outperformed existing methods, despite issues concerning overfitting and the cost of training. DEEPred, proposed in [35], involved multi-task feed-forward deep neural networks in a hierarchical, stacked structure for protein function prediction; with this organization, protein function prediction performance was found to be good.

The present-day research works explored above clearly establish the necessity of a novel secondary protein structure prediction method. Thus, this work proposes an Energy Profile Bayes and Thompson Optimized Convolutional Neural Network (EPB-OCNN) method that provides precise and accurate protein secondary structure prediction with minimum time and maximum improvement in specificity, recall, and F-measure.

3 Energy Profile Bayes and Thompson Optimized Convolutional Neural Network (EPB-OCNN) protein secondary structure prediction

This section presents the proposed Energy Profile Bayes and Thompson Optimized Convolutional Neural Network (EPB-OCNN) method for protein secondary structure prediction from a broader point of view. The EPB-OCNN method is divided into three parts. The first part gives the problem definition; the second presents protein structure identification using Energy Profile Legion-Class Bayes; and the third details protein secondary structure prediction using the Thompson Optimized Convolutional Neural Network model. Figure 1 shows the block diagram of the EPB-OCNN method.

Fig. 1
figure 1

Block diagram of Energy Profile Bayes and Thompson Optimized Convolutional Neural Network (EPB-OCNN) method

As shown in the above figure, the protein sequences are obtained from the Protein Data Bank (PDB) [20] dataset. Next, a protein structure identification stage called Energy Profile Legion-Class Bayes is designed based on the energy profiles of corresponding atom pairs, which describe the protein energy and accordingly rank different conformations by energy. The Thompson Optimized Convolutional Neural Network model is then designed by extracting features between amino acids and the associations within the protein sequence, and by optimizing the parameters involved in predicting the protein structure via the CNN.
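The energy-based ranking of conformations mentioned above can be sketched minimally as follows; the dictionary fields `id` and `energy` are hypothetical names used only for illustration:

```python
def rank_conformations(conformations):
    """Order candidate conformations by total energy, lowest first,
    so the most favorable conformation comes out on top."""
    return sorted(conformations, key=lambda c: c["energy"])

# hypothetical candidate conformations with illustrative energy scores
candidates = [
    {"id": "conf_a", "energy": -120.4},
    {"id": "conf_b", "energy": -250.1},
    {"id": "conf_c", "energy": -87.3},
]
best = rank_conformations(candidates)[0]   # conf_b has the lowest energy
```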

3.1 Problem definition

Protein structure prediction from amino acid sequence has been a striking challenge for decades. Atomic-level structures of proteins are frequently the starting point for understanding and engineering them. Naturally occurring proteins represent only a minuscule subset of possible amino acid sequences, selected by the evolutionary process to carry out a biological function. The state-of-the-art methodology for protein structure prediction is based on the thermodynamic hypothesis, which states that the native protein structure must possess the lowest free energy. Identifying this lowest-energy state is demanding, as it lies within an enormous space of possible conformations available to a protein. Considering these challenges, in this work Legion-Class Bayes Protein Structure Identification is proposed. Next, with the identified secondary protein structure and optimized parameter learning, protein structure prediction employing the Thompson function is made for robust and accurate prediction.

3.2 Energy Profile Legion-Class Bayes Protein Structure Identification

To start with, in this section, protein structure identification using Energy Profile Legion-Class Bayes is designed. First, a protein- and location-based analytical Evolutionary Duo Distance Reliance Potential (EDDRP) is obtained. The perceived probability in the EDDRP is configured by evolutionary information in addition to atom types. The EDDRP differs from other potentials in that it maintains numerous energy profiles for two atoms of given types, according to the protein under consideration and the sequence-profile context of each atom. With this EDDRP, Energy Profile Legion-Class Bayes Protein Structure Identification is made. Figure 2 shows the block diagram of Energy Profile Legion-Class Bayes Protein Structure Identification.

Fig. 2
figure 2

Block diagram of Energy Profile Legion-Class Bayes Protein Structure identification

As shown in the above figure, with protein sequences obtained as input, initially, distance reliance analytical potential with energy profiles is measured. Second, the posterior probability at each location is measured to form Location-Specific Resultant Matrix. Finally, with the obtained Location-Specific Resultant Matrix, accurate and relevant protein structures are identified. To start with, distance reliance analytical potential and energy profiles for atom pairs are mathematically represented as given below.

$$ U\left( {{\text{Dis}}| A_{i} , A_{j} , {\text{Area}}_{i} ,{\text{Area}}_{j} ,{\text{Rad}}_{{\text{G}}} } \right) = T{\text{Log}} \left( {\frac{{{\text{Prob}}\left( {{\text{Dis}}| A_{i} , A_{j} , {\text{Area}}_{i} ,{\text{Area}}_{j} ,{\text{Rad}}_{{\text{G}}} } \right)}}{{{\text{Ref}}\left( {{\text{Dis}}|{\text{Rad}}_{{\text{G}}} } \right)}}} \right) $$
(1)

From Eq. (1), ‘\(T\)’ is the temperature factor corresponding to each amino acid, ‘\(\mathrm{Ref}\left(\mathrm{Dis}|{\mathrm{Rad}}_{\mathrm{G}}\right)\)’ denotes the reference state ‘\(\mathrm{Ref}\)’ with respect to distance ‘\(\mathrm{Dis}\)’ and gyration radius ‘\({\mathrm{Rad}}_{\mathrm{G}}\)’ of protein considered for simulation. In addition, ‘\(\mathrm{Prob}\left(\mathrm{Dis}| {A}_{i}, {A}_{j}, { \mathrm{Area}}_{i},{\mathrm{Area}}_{j},{\mathrm{Rad}}_{\mathrm{G}}\right)\)’ denotes the discovered probability of two atoms ‘\({A}_{i}\),’ ‘\({A}_{j}\)’ linked within a distance ‘\(\mathrm{Dis}\)’ and gyration radius ‘\({\mathrm{Rad}}_{G}\).’
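A minimal numeric sketch of the potential in Eq. (1); the probability and reference-state values passed in are illustrative, not taken from the method:

```python
import math

def pair_potential(prob_observed, ref_state, temperature=1.0):
    """Distance reliance analytical potential per Eq. (1):
    U = T * log(Prob(Dis | A_i, A_j, Area_i, Area_j, Rad_G)
                / Ref(Dis | Rad_G)).
    prob_observed is the discovered probability of the atom pair being
    linked within distance Dis; ref_state is the reference-state value."""
    return temperature * math.log(prob_observed / ref_state)

# when the observed probability equals the reference state, U is zero
u_neutral = pair_potential(0.05, 0.05)
# an atom-pair distance seen more often than the reference shifts U
u_enriched = pair_potential(0.08, 0.05)
```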

Using the above distance reliance analytical potential model, accurate energy functions are evolved for describing protein physics and sampling protein sequences. With the Legion Class Bayes concept employed in this work and the energy profiles for pairs of atoms, consider ‘\(P=\left\{{P}_{1}, {P}_{2}, \dots ,{P}_{n}{P}_{n+1}\left[{D}_{1}\right], {P}_{n+2}, \dots ,{P}_{2n}\left[{D}_{2}\right], {P}_{m+1},{P}_{m+2}, \dots ,{P}_{mn}\left[{D}_{n}\right]\right\}\),’ where ‘\({P}_{i}=\left\{1,2,\dots ,m\right\}\)’ with ‘\(i\)’ denoting the amino acid position in the protein sequence. In the Legion-class problem, the trial sample ‘\(S\)’ comprises ‘\(\left({\mathrm{CL}}_{1}, {\mathrm{CL}}_{2}, \dots ,{\mathrm{CL}}_{n}\right)\),’ where ‘\({\mathrm{CL}}_{1}\)’ is the first class, ‘\({\mathrm{CL}}_{2}\)’ the second class, and ‘\({\mathrm{CL}}_{n}\)’ the last class, respectively. Then, the posterior probability for each class is estimated employing the probability of the amino acid at each location in the trial samples. With this, the Legion Class Bayes is mathematically formulated as given below.

$$ {\text{Prob}}\left( {{\text{CL}}_{i} |S} \right) = \frac{{{\text{Prob}}\left( {S|{\text{CL}}_{i} } \right){\text{Prob}}\left( {{\text{CL}}_{i} } \right)}}{{{\text{Prob}}\left( S \right)}} $$
(2)

From Eq. (2), ‘\(\mathrm{Prob}\left({\mathrm{CL}}_{i}|\mathrm{S}\right)\)’ represents posterior probabilities of amino acid at each location with respect to each class ‘\({\mathrm{CL}}_{i}\)’ and samples ‘\(S\),’ ‘\(\mathrm{Prob}\left({\mathrm{CL}}_{i}\right)\)’ denotes the prior probabilities of amino acid at each location for the corresponding class ‘\(\left({\mathrm{CL}}_{i}\right)\),’ ‘\(\mathrm{Prob}\left(S|{\mathrm{CL}}_{i}\right)\)’ represents the likelihood of amino acid at each location and ‘\(\mathrm{Prob}\left(S\right)\)’ denotes the probability of the overall trial sample taken into consideration for simulation. The likelihood of each sample ‘\(S\)’ with respect to distance ‘\({D}_{i}\)’ is mathematically formulated as given below.

$$ {\text{Prob}} \left( {S|D_{i} } \right) = \prod\nolimits_{j = 1}^{m} {{\text{Prob}}\left( {S_{j} |D_{i} } \right)} $$
(3)
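Equations (2) and (3) together amount to a naive-Bayes-style posterior over classes: the per-location probabilities are multiplied into a likelihood, weighted by the prior, and normalized by the total sample probability. A minimal sketch, using a hypothetical two-class, two-location probability table:

```python
def class_likelihood(sample, cond_probs):
    """Eq. (3): the likelihood of a trial sample is the product of the
    per-location amino-acid probabilities for the class."""
    p = 1.0
    for j, residue in enumerate(sample):
        p *= cond_probs[j][residue]
    return p

def posterior(sample, classes):
    """Eq. (2): Prob(CL_i | S) = Prob(S | CL_i) * Prob(CL_i) / Prob(S)."""
    joint = {c: class_likelihood(sample, m["cond"]) * m["prior"]
             for c, m in classes.items()}
    prob_s = sum(joint.values())          # Prob(S), total probability
    return {c: j / prob_s for c, j in joint.items()}

# hypothetical classes and per-location probabilities (illustrative only)
classes = {
    "helix": {"prior": 0.5, "cond": [{"A": 0.6, "G": 0.4}, {"A": 0.7, "G": 0.3}]},
    "sheet": {"prior": 0.5, "cond": [{"A": 0.2, "G": 0.8}, {"A": 0.3, "G": 0.7}]},
}
post = posterior(["A", "A"], classes)     # helix dominates for sample "AA"
```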

With the resultant Legion Class Bayes values, the Location-Specific Resultant Matrix (LSRM) is formed, from which the evolutionary energy profiles for pairs of atoms are estimated. The LSRM is mathematically formulated for a protein sequence ‘\(P\)’ possessing area ‘\(\mathrm{Area}\)’ as given below.

$$ P_{{{\text{LSRM}}}} = \left[ {\begin{array}{*{20}l} {R\left( {1 \to 1} \right)} \hfill & {\quad R\left( {1 \to 2} \right)} \hfill & {\quad \ldots } \hfill & {\quad R\left( {1 \to i} \right)} \hfill & {\quad \ldots } \hfill & {\quad R\left( {1 \to 20} \right)} \hfill \\ {R\left( {2 \to 1} \right)} \hfill & {\quad R\left( {2 \to 2} \right)} \hfill & {\quad \ldots } \hfill & {\quad R\left( {2 \to i} \right)} \hfill & {\quad \ldots } \hfill & {\quad R\left( {2 \to 20} \right)} \hfill \\ \ldots \hfill & {\quad \ldots } \hfill & {\quad \ldots } \hfill & {\quad \ldots } \hfill & {\quad \ldots } \hfill & {\quad \ldots } \hfill \\ {R\left( {j \to 1} \right)} \hfill & {\quad R\left( {j \to 2} \right)} \hfill & {\quad \ldots } \hfill & {\quad R\left( {j \to i} \right)} \hfill & {\quad \ldots } \hfill & {\quad R\left( {j \to 20} \right)} \hfill \\ \ldots \hfill & {\quad \ldots } \hfill & {\quad \ldots } \hfill & {\quad \ldots } \hfill & {\quad \ldots } \hfill & {\quad \ldots } \hfill \\ {R\left( {{\text{Area}} \to 1} \right)} \hfill & {\quad R\left( {{\text{Area}} \to 2} \right)} \hfill & {\quad \ldots } \hfill & {\quad R\left( {{\text{Area}} \to i} \right)} \hfill & {\quad \ldots } \hfill & {\quad R\left( {{\text{Area}} \to 20} \right)} \hfill \\ \end{array} } \right] $$
(4)

From Eq. (4), ‘\(R\left(i\to j\right)\)’ corresponds to the resultant score of the ‘i-th’ amino acid location being substituted by the ‘j-th’ amino acid in the protein sequence during the process of evolution. The ‘\({P}_{\mathrm{LSRM}}\)’ is produced for the multiple sequences of protein sequence ‘\(P\),’ possessing ‘\(\mathrm{Area}*20\)’ resultant outcomes. Finally, the probability of evolution ‘\(\mathrm{Evol}\)’ (i.e., protein structure identification) from the ‘p-th’ to the ‘q-th’ amino acid, ‘\({\mathrm{Evol}}_{pq}\),’ is mathematically formulated as given below.

$$ {\text{Evol}}_{pq} = {\text{Prob}}_{ip} {\text{Prob}}_{jq} \left[ {1 \le p \le 20;\; 1 \le q \le 20} \right] $$
(5)

From Eq. (5), the probability of evolution from ‘p-th’ to ‘q-th’ amino acid ‘\({\mathrm{Evol}}_{pq}\)’ is obtained for 20 resultant outcomes (i.e., sliding window ranging between 13 and 19). The pseudo-code representation of Energy Profile Legion-Class Bayes Protein Structure Identification is given below.

figure a

As given in the above Energy Profile Legion-Class Bayes Protein Structure Identification algorithm, the objective is to obtain accurate and reliable protein data utilizing the Legion-Class Bayes function in addition to the higher-resolution energy profiles. Initially, with the PDB dataset provided as input, the distance-reliance analytical potential with energy profiles is measured for each amino acid. With this function, optimal molecules are arrived at in minimum time. Next, the Legion-Class Bayes function is applied to the energy profile values to obtain the Location-Specific Resultant Matrix, therefore revealing patterns with maximum precision. Finally, utilizing the probability of evolution, protein structure identification is made with an improved true positive rate.
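The final step, the evolution probability of Eq. (5), is the outer product of two per-amino-acid probability vectors; a minimal sketch, with uniform placeholder distributions rather than probabilities derived from the PDB data:

```python
def evolution_matrix(prob_p, prob_q):
    """Eq. (5): Evol_pq = Prob_ip * Prob_jq for 1 <= p, q <= 20,
    i.e., the outer product of two 20-entry probability vectors."""
    assert len(prob_p) == len(prob_q) == 20
    return [[pp * qq for qq in prob_q] for pp in prob_p]

# Placeholder uniform distributions over the 20 amino acids
uniform = [1.0 / 20] * 20
evol = evolution_matrix(uniform, uniform)
print(len(evol), len(evol[0]))  # 20 20
```

Since each input vector sums to 1, the 400 entries of the resulting matrix also sum to 1.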

Initially, with the PDB dataset provided as input, the distance-reliance analytical potential with energy profiles is measured for each amino acid using the features amino acid name/residue name, residue number, X, Y, Z coordinates, and occupancy. Following this, the posterior probability of the amino acid at each location is measured employing the record type, atom number, atom name, amino acid name, chain name, residue number, X, Y, Z coordinates, occupancy, temperature factor, and element symbol, respectively. Next, the Location-Specific Resultant Matrix is estimated based on the record type, atom number, atom name, residue number, chain name, residue name, X, Y, Z coordinates, occupancy, temperature factor, and element symbol, respectively. Finally, the probability of evolution is obtained to acquire the final features, namely amino acid name/residue name, residue number, X, Y, Z coordinates, and occupancy, respectively.
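The per-atom features listed above map directly onto the fixed-column fields of a standard wwPDB `ATOM` record; a minimal parsing sketch (the example line is illustrative, not taken from the paper's dataset):

```python
def parse_atom_record(line):
    """Extract the features listed above from one fixed-column
    PDB ATOM record (standard wwPDB column layout)."""
    return {
        "record_type": line[0:6].strip(),
        "atom_number": int(line[6:11]),
        "atom_name": line[12:16].strip(),
        "residue_name": line[17:20].strip(),
        "chain_name": line[21].strip(),
        "residue_number": int(line[22:26]),
        "x": float(line[30:38]),
        "y": float(line[38:46]),
        "z": float(line[46:54]),
        "occupancy": float(line[54:60]),
        "temperature_factor": float(line[60:66]),
        "element_symbol": line[76:78].strip(),
    }

# A representative ATOM record (illustrative values)
rec = parse_atom_record(
    "ATOM      1  N   ALA A   1      11.104   6.134  -6.504  1.00 10.00           N"
)
print(rec["residue_name"], rec["x"])  # ALA 11.104
```

Production code would normally use an established parser (e.g., Biopython's `PDBParser`) rather than hand-slicing columns.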

3.3 Thompson Optimized Convolutional Neural Network Protein Secondary Structure Prediction model

Accurate protein secondary structure prediction not only helps us to understand the complicated association between protein sequence and protein structure, but also assists in analyzing the functioning of the protein. In this work, a deep learning algorithm based on a convolutional neural network, called the Thompson Optimized Convolutional Neural Network Protein Secondary Structure Prediction model, has been applied to protein secondary structure prediction. The objective of designing this model is to optimize the network parameters and speed up the overall process. The structure of the Thompson Optimized Convolutional Neural Network Protein Secondary Structure Prediction model is shown in Fig. 3.

Fig. 3
figure 3

Structure of Thompson Optimized Convolutional Neural Network protein secondary structure prediction

The Thompson Optimized Convolutional Neural Network extracts features between amino acids. The input of each neuron in the convolutional layer comes from the Location-Specific Resultant Matrix (LSRM) in a definite area. In addition, the size of this definite area is determined by a convolution kernel. The initial feature map ‘\({\mathrm{FM}}^{i}\)’ is formulated as given below.

$$ {\text{FM}}^{i} = \left( {{\text{CK}}_{1}^{i} , {\text{CK}}_{2}^{i} , {\text{CK}}_{3}^{i} , \ldots ,{\text{CK}}_{n}^{i} } \right) $$
(6)

From Eq. (6), ‘\({\mathrm{CK}}_{n}^{i}\)’ corresponds to the convolution kernel of the ‘i-th’ layer, with ‘\(n\)’ representing the number of convolution kernels. The function of convolution is to perform the convolution operation via a feature extraction filter on the input LSRM matrix. Each region, in turn, is obtained by multiplying the input ‘\(\mathrm{LSRM}\)’ matrix by the weights and then adding the offset constant ‘\(\mathrm{Off}\)’ to produce the feature map.

$$ {\text{FM}}_{l}^{i} = {\text{fun}}\left( {\mathop \sum \limits_{l} {\text{LSRM}}\left[ {{\text{CK}}_{l}^{i} } \right]*W_{l}^{i} + {\text{off}}} \right) $$
(7)
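The convolution of Eq. (7) can be sketched as follows; the toy LSRM region, the kernel weights, and the choice of sigmoid for the activation ‘fun’ are illustrative assumptions:

```python
import math

def convolve_feature_map(lsrm, kernel, off):
    """Eq. (7): slide the kernel over the LSRM matrix, multiply each
    covered region elementwise by the weights, add the offset 'off',
    and pass the sum through an activation ('fun'; sigmoid here)."""
    kh, kw = len(kernel), len(kernel[0])
    rows, cols = len(lsrm), len(lsrm[0])
    fmap = []
    for r in range(rows - kh + 1):
        out_row = []
        for c in range(cols - kw + 1):
            s = sum(lsrm[r + i][c + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            out_row.append(1.0 / (1.0 + math.exp(-(s + off))))
        fmap.append(out_row)
    return fmap

# Toy 3x3 LSRM region and 2x2 kernel (illustrative values only)
lsrm = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]]
kernel = [[1.0, 0.0], [0.0, 1.0]]
fm = convolve_feature_map(lsrm, kernel, off=0.0)
print(len(fm), len(fm[0]))  # 2 2
```

A real LSRM input would be an Area × 20 matrix, and a framework convolution (e.g., a 2-D convolution layer) would replace the explicit loops.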

From Eq. (7), ‘\(\mathrm{LSRM}\left[{\mathrm{CK}}_{l}^{i}\right]\)’ denotes the feature map obtained by the convolution kernel ‘\(\mathrm{CK}\)’ of the input ‘\(\mathrm{LSRM}\)’ matrix data, with the weight of the ‘i-th’ convolution kernel represented by ‘\({W}_{l}^{i}\)’ and the offset value denoted by ‘\(\mathrm{off}\)’ for ‘\(l\)’ convolution kernels, respectively. Next, the pooling layer, known as nonlinear down-sampling, aids in reducing the feature dimensionality and the number of parameters, therefore minimizing the amount of computation. To adjust the weights during the training or learning process, our work uses a Thompson function. Finally, the output stage of our network consists of a fully connected layer and a SoftMax layer. The SoftMax layer in our protein secondary structure prediction employs the activation function defined below.

$$ {\text{Prob}}\left( {\Pr_{i} , {\text{CL}}} \right) = \frac{{{\text{Prob}}\left( {{\text{CL}}/\Pr_{i} } \right) {\text{Prob }}\left( {\Pr_{i} } \right)}}{{\mathop \sum \nolimits_{j = 1}^{{{\text{CL}}}} {\text{Prob }}\left( {{\text{CL}}/\Pr_{j} } \right) {\text{Prob}} \left( {\Pr_{j} } \right)}} $$
(8)
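Equation (8) is a Bayes-normalized class score over the four secondary-structure classes; a minimal sketch with illustrative likelihoods and uniform priors:

```python
def class_posterior(likelihoods, priors):
    """Eq. (8): Prob(Pr_i, CL) = Prob(CL/Pr_i) * Prob(Pr_i),
    normalized by the sum of the same products over all classes j."""
    scores = [l * p for l, p in zip(likelihoods, priors)]
    total = sum(scores)
    return [s / total for s in scores]

# Illustrative class likelihoods and uniform priors over the four
# classes (all-alpha, all-beta, alpha+beta, alpha/beta)
post = class_posterior([0.4, 0.3, 0.2, 0.1], [0.25] * 4)
print(post)  # normalized probabilities summing to 1
```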

From Eq. (8), ‘\(\mathrm{Prob}\left(\mathrm{CL}/{\mathrm{Pr}}_{i}\right)\)’ refers to the probability of the given class sample (i.e., from the four different classes) and ‘\(\mathrm{Prob} \left({\mathrm{Pr}}_{i}\right)\)’ denotes the prior probability of the protein secondary structure class. Owing to the increasing number of protein sequences present in a protein data bank, a considerable amount of time is consumed while fitting the CNN to the protein secondary structure prediction. This is because it takes a significant amount of time to modify the hyperparameters of the CNN. This work employs a Thompson optimization algorithm to optimize the hyperparameters of the CNN, namely the learning rate (i.e., 0.01), impulse, and regularization factor, respectively. First, let us assume that the Gaussian kernel function is selected as the acquisition function to obtain consecutive sampling points. Thompson optimization of the hyperparameters performs Gaussian prior modeling of the loss function ‘\(f\left(pd\right)\)’ over the hyperparameters of the corresponding protein data ‘\({\mathrm{pd}}_{i}\).’

$$ L\left( {{\text{OP}}_{i} , V_{j} } \right) = {\text{CL}}\left[ {{\text{OP}}_{i} \left( {pd_{i} } \right)Y_{i} } \right] $$
(9)

As the observations on secondary protein structure prediction involve a considerable amount of noise, Gaussian noise ‘\(\alpha \)’ is added to each observation sample ‘\(L\left({\mathrm{OP}}_{i}, {V}_{j}\right)\)’ of the objective function ‘\(\mathrm{CL}\left[{\mathrm{OP}}_{i}\left({pd}_{i}\right){Y}_{i}\right]\).’

$$ f\left( {pd} \right) = L\left( {{\text{OP}}_{i} , V_{j} } \right) + \alpha $$
(10)

Let us further consider that the input hyperparameters ‘\(pd=\left({pd}_{1}, {pd}_{2}, {pd}_{3}, \dots ,{pd}_{n}\right)\)’ yield the output ‘\(Y=\mathrm{CL}\left({pd}_{i}, {V}_{j}\right)\).’ Then, the Thompson optimization for the hyperparameters ‘\({f}_{pd}\)’ is mathematically formulated as given below.

$$ f_{pd} = \int {{\text{IF}} \left[ {{\text{Exp}}\left( {r| pd, \alpha } \right) = \max \,{\text{Exp}} \left( {r| pd, \alpha } \right)} \right] {\text{Prob}} \left( {\alpha | D} \right)d\alpha } $$
(11)

The expected improvement based on the above Thompson optimization is given below.

$$ {\text{EI}}\left( {pd|D} \right) = {\text{Exp}} \left[ {\max \left( {\alpha ,f_{pd} - f_{{{\text{best}}}} } \right)} \right] $$
(12)

From Eq. (12), ‘\({f}_{\mathrm{best}}\)’ forms the optimal solution for the hyperparameters, namely the learning rate, impulse, and regularization factor, respectively. The pseudo-code representation of Thompson Optimized CNN Protein Secondary Structure Prediction is given below.

figure b

As given in the above Thompson Optimized CNN Protein Secondary Structure Prediction algorithm, the objective is to predict the secondary protein structure sequence with maximum precision and accuracy, therefore contributing to robustness. To achieve this, the Location-Specific Resultant Matrix is employed as input to the CNN. Following this, for each protein data item, the convolved feature map is evolved by means of the kernel weight and offset parameter. Next, the Thompson-optimized learning rate, impulse, and regularization factor are estimated and applied to the pooled layer, therefore obtaining a robust protein structure prediction sequence. Also, in our work, first, the loss function is evaluated via Thompson optimization of the hyperparameters. Second, if the detected loss starts to increase, the weights are reset, based on the Gaussian noise ‘\(\alpha \),’ back to where the minimum occurred. This ensures that the proposed Thompson Optimized CNN Protein Secondary Structure Prediction algorithm does not continue to learn noise and overfit the data. In this manner, the algorithm addresses overfitting, hence guaranteeing accurate results.
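One round of the Thompson-style hyperparameter selection described above can be sketched as follows; the candidate grid, the quadratic surrogate loss, and the noise level are illustrative assumptions, not the paper's actual training loss:

```python
import random

def thompson_pick(candidates, observe_loss, noise_alpha, rng):
    """One round of Thompson-style selection: draw a noisy loss
    sample f(pd) = L(...) + alpha for each candidate hyperparameter
    setting (Eq. (10)) and keep the setting whose sampled loss is
    smallest."""
    best, best_loss = None, float("inf")
    for pd in candidates:
        sampled = observe_loss(pd) + rng.gauss(0.0, noise_alpha)
        if sampled < best_loss:
            best, best_loss = pd, sampled
    return best, best_loss

# Hypothetical surrogate loss with its optimum at learning rate 0.01;
# the grid over the learning rate stands in for the full (learning
# rate, impulse, regularization factor) search space
loss = lambda pd: (pd["lr"] - 0.01) ** 2
grid = [{"lr": lr} for lr in (0.001, 0.01, 0.1, 0.5)]
best, _ = thompson_pick(grid, loss, noise_alpha=1e-6,
                        rng=random.Random(42))
print(best["lr"])  # 0.01
```

In a full Bayesian loop, the noisy observation would update a Gaussian-process posterior and the next candidate would be chosen by the expected improvement of Eq. (12).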

4 Experimental setting and qualitative analysis

In this section, initially, dataset details are provided. Subsequent subsections contain the performance metrics analysis and discussion, comparison with the state-of-the-art methods, and the statistical analysis, respectively.

4.1 Dataset details

The Protein Data Bank (PDB) [21] used in our work corresponds to a database containing experimentally determined three-dimensional protein structures. The PDB dataset consists of protein structural domains that have been categorized based on structural similarities, and the amino acid sequences provide a detailed and comprehensive description of the structural and evolutionary relationships between proteins. However, for practical applications, only four classes, ‘\(\mathrm{all}-\alpha \),’ ‘\(\mathrm{all}-\beta \),’ ‘\(\alpha +\beta \)’ and ‘\(\alpha /\beta \),’ are considered. With an overall ‘\(1673\)’ protein sequences, they are classified as ‘\(443 \left(\mathrm{all}-\alpha \right)\),’ ‘\(443 \left(\mathrm{all}-\beta \right)\),’ ‘\(441 \left(\alpha +\beta \right)\)’ and ‘\(346 \left(\alpha /\beta \right)\),’ respectively (as given in Table 1), with a typical 80% training data size and 20% testing data size. The dataset is a list of protein sequences, each an arrangement of amino acids, with 1673 as the sample size. Of these, 80% forms the training data (i.e., 1339 sequences) and 20% (i.e., 334.6 ≈ 335 sequences) forms the testing data. With 335 sequences in the test list, protein data ranging between 500 and 5000 are considered for simulation.
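The class counts in Table 1 can be checked directly, together with the 80/20 split (a sketch; rounding the two shares up independently reproduces the quoted 1339/335 figures):

```python
import math

# Class composition of the PDB subset described above (Table 1)
classes = {"all-alpha": 443, "all-beta": 443,
           "alpha+beta": 441, "alpha/beta": 346}
total = sum(classes.values())

# 80/20 split; ceiling each share reproduces the quoted sizes
# (1338.4 -> 1339 training, 334.6 -> 335 testing)
train = math.ceil(total * 0.80)
test = math.ceil(total * 0.20)
print(total, train, test)  # 1673 1339 335
```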

Table 1 Typical PDB dataset with four categories utilized for benchmarking

4.2 Performance metrics analysis and discussion

In this section, the performance metrics, namely prediction time, accuracy, ROC curve, precision, specificity, recall, F-measure, Matthews Correlation Coefficient, and the precision–recall curve, are analyzed and discussed.

Table 2 lists the hyperparameters and their description employed in our proposed method.

Table 2 Hyperparameters and description

4.2.1 Performance analysis of protein structure prediction time

In this section, a detailed analysis of protein structure prediction time is made. During prediction of secondary protein structure, a considerable amount of time is said to be consumed. This is mathematically expressed as given below.

$$ {\text{PSP}}_{{{\text{time}}}} = \sum\nolimits_{i = 1}^{n} {P_{i} } \,*{\text{Time}} \left[ {{\text{Prob}}\left( {\Pr_{i} , {\text{CL}}} \right)} \right] $$
(13)

From Eq. (13), the protein structure prediction time ‘\({\mathrm{PSP}}_{\mathrm{time}}\)’ is measured based on the number of protein data considered for simulation ‘\({P}_{i}\)’ and time consumed in analyzing protein secondary structure prediction using SoftMax function ‘\(\mathrm{Time} \left[\mathrm{Prob}\left({\mathrm{Pr}}_{i}, \mathrm{CL}\right)\right]\).’ It is measured in terms of milliseconds (ms). Table 3 provides the protein structure prediction time of the proposed method EPB-OCNN method and the state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively, for hyperparameter with a sliding window of 13.

Table 3 Tabulation for protein structure prediction time

Figure 4 shows a graphical representation of the secondary protein structure prediction time analysis of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively. As shown in the figure, with the x axis representing the number of protein data and the y axis representing the protein structure prediction time, increasing the protein data results in an increase in the secondary protein structure prediction time as well. This is because, with a large number of protein data sequences considered for simulation, the protein structure modeling and designing effort also increases. This, in turn, causes an increase in the corresponding time. But simulations show that the proposed method EPB-OCNN achieved betterment in comparison with the five state-of-the-art methods. The single protein structure prediction time of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], was observed to be 0.110 ms, 0.120 ms, 0.165 ms, 0.185 ms, 0.195 ms, and 0.200 ms, respectively. With this, the overall prediction time of the proposed method EPB-OCNN and the five state-of-the-art methods was observed to be 55 ms, 60 ms, 82.5 ms, 92.5 ms, 97.5 ms, and 100 ms, respectively, for 500 protein data, therefore reducing the time using the EPB-OCNN method. The reason was the application of the distance-reliance analytical potential with precise energy for describing protein and protein sequence sampling. As a result, the protein structure prediction time of the proposed method EPB-OCNN, in comparison with the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], was observed to be reduced by 23%, 38%, 49%, 55%, and 60%, respectively.

Fig. 4
figure 4

Protein structure prediction time analyses

4.2.2 Performance analysis of protein structure prediction accuracy

The second parameter of significance is the accuracy involved during the prediction of secondary protein structure. This is mathematically stated as given below.

$$ {\text{PSP}}_{{{\text{acc}}}} = \sum\nolimits_{i = 1}^{n} {\frac{{{\text{PS}}_{{{\text{CP}}}} \left[ {{\text{Prob}}\left( {\Pr_{i} , {\text{CL}}} \right)} \right]}}{{P_{i} }}*100} $$
(14)

From Eq. (14), protein structure prediction accuracy ‘\({\mathrm{PSP}}_{\mathrm{acc}}\)’ is measured based on protein structure correctly predicted using Softmax function ‘\({\mathrm{PS}}_{\mathrm{CP}}\left[\mathrm{Prob}\left({\mathrm{Pr}}_{i}, \mathrm{CL}\right)\right]\)’ and the overall number of protein data considered for simulation ‘\({P}_{i}\).’ It is measured in terms of percentage (%). Table 4 provides the protein structure prediction accuracy values of proposed method EPB-OCNN, and the state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4] and DPSP [5], respectively, for hyperparameters with a sliding window of 13.

Table 4 Tabulation for protein structure prediction accuracy

Figure 5 shows a graphical representation of the secondary protein structure prediction accuracy analysis of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively. As shown in the figure, the protein structure prediction accuracy is found to be inversely proportional to the number of protein data considered for simulation. This is because, with an increasing number of protein data, a large number of protein sequences to be analyzed are kept in a stack, and this, in turn, results in a small number of wrong predictions. However, simulation analysis made with 500 protein data shows that the accuracy of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], was observed to be 98.8%, 98%, 97%, 96%, 95%, and 94%, respectively. With this analysis, the secondary structure prediction accuracy of the proposed method EPB-OCNN was found to be better than that of the other five state-of-the-art methods. The betterment was due to the application of the Energy Profile Legion-Class Bayes Protein Structure Identification algorithm. The Legion-Class Bayes function was applied along with the higher-resolution energy profiles to model the secondary protein structure prediction. As a result, the secondary protein structure prediction accuracy of the proposed method EPB-OCNN, in comparison with the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], was found to be improved by 10%, 15%, 16%, 18%, and 21%, respectively.

Fig. 5
figure 5

Protein structure prediction accuracy analyses

4.2.3 Performance analysis of ROC curve

In this section, receiver operating characteristic (ROC) curves are analyzed to measure the prediction rate performance. With the assistance of the ROC curve, a binary classification is made based on whether the protein structure is correctly or wrongly predicted. The ROC curve is measured based on the true positives, false positives, true negatives, and false negatives. With these four probable outcomes, the ROC curve is constructed, where the false positive rate is plotted on the x axis and the true positive rate is plotted on the y axis. Table 5 provides the ROC curve analysis values of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively, for hyperparameters with a sliding window of 19.
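The four outcomes above suffice to trace the ROC curve; a minimal sketch that computes one (false-positive-rate, true-positive-rate) point per cut point of a scored binary predictor, with illustrative scores and labels:

```python
def roc_points(scores, labels):
    """Compute (FPR, TPR) pairs for every cut point of a binary
    predictor, as plotted in a ROC curve."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for score, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# Illustrative scores: higher score = predicted correct structure
pts = roc_points([0.9, 0.8, 0.4, 0.2], [1, 1, 0, 0])
print(pts[-1])  # (1.0, 1.0)
```

A perfect predictor, as here, passes through (0, 1); the diagonal from (0, 0) to (1, 1) corresponds to random guessing.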

Table 5 Tabulation for ROC curve

Figure 6 illustrates the graphical representation of the ROC curve analysis of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively. In the graphical representation, the x axis denotes the false positive rate, whereas the y axis denotes the true positive rate. A diagonal line from (0, 0) in the lower left-hand corner to (1, 1) in the upper right-hand corner is drawn as the reference against which the protein structure prediction test results are displayed. Also, the ROC analysis of the protein structure prediction is based on the true positive rate and false positive rate for each possible cut point value of the test. From the figure, it is evident that the ROC curve of the EPB-OCNN method is comparatively better than that of the five state-of-the-art methods, therefore corroborating the secondary protein structure prediction rate.

Fig. 6
figure 6

ROC curve analyses

4.2.4 Performance analysis of precision

The next parameter of significance is precision. While predicting the secondary protein structure, the precision measurement has to be evaluated. This is mathematically expressed as given below.

$$ P = \left[ {\frac{{t_{p} }}{{t_{p} + f_{p} }}} \right] $$
(15)

From Eq. (15), precision ‘\(P\)’ is measured based on the true positives ‘\({t}_{p}\)’ (i.e., protein structures correctly predicted) and the false positives ‘\({f}_{p}\)’ (i.e., protein structures incorrectly predicted). It is measured in terms of percentage (%). Table 6 gives the analysis of the precision of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively, for hyperparameters with a sliding window of 19.

Table 6 Tabulation for precision

Figure 7 illustrates a graphical representation of the precision analyses of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively. To analyze the precision factor, protein sequence data in the range of 500 to 5000 are taken into consideration. From the figure, it is inferred that the precision of the proposed method EPB-OCNN is comparatively higher than that of the five state-of-the-art methods. The reason behind the improvement was the application of the Legion-Class Bayes function employed for obtaining the energy profile resultant values via the Location-Specific Resultant Matrix. With this pattern, maximum precision is revealed. The precision of the proposed EPB-OCNN method in comparison with the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], is improved by 8%, 14%, 26%, 30%, and 31%, respectively.

Fig. 7
figure 7

Precision analyses

4.2.5 Performance analysis of specificity, recall and F-measure

Specificity denotes the percentage of negatives that are correctly identified as not possessing the actual secondary protein structure. On the other hand, recall denotes the percentage of relevant protein sequence instances that were retrieved as such. Finally, F-measure represents the harmonic mean of precision and recall. Table 7 provides the specificity, recall, and F-measure analyses of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively, for hyperparameters with a sliding window of 13.
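These three measures follow directly from the confusion-matrix counts; a minimal sketch (the counts are illustrative, and precision is included because the F-measure requires it):

```python
def classification_metrics(tp, fn, fp, tn):
    """Specificity, recall, and F-measure from the confusion-matrix
    counts; precision is as defined in Eq. (15)."""
    specificity = tn / (tn + fp)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return specificity, recall, f_measure

# Illustrative counts for a binary structure-prediction outcome
spec, rec, f1 = classification_metrics(tp=80, fn=20, fp=10, tn=90)
print(spec, rec)  # 0.9 0.8
```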

Table 7 Tabulation for specificity, recall and F-measure

Figure 8 shows the graphical representation of the specificity, recall, and F-measure analysis of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively. From the figure, the specificity rate of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], is inferred to be 88.25%, 86.45%, 84.25%, 76.75%, 71.25%, and 67.32%, respectively. In addition, the recall rate of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], was found to be 87.65%, 85.25%, 83.55%, 77.15%, 72.65%, and 68.65%, respectively. From the specificity and recall results, it is identified that the specificity, recall, and F-measure of the proposed method EPB-OCNN were better than those of the five state-of-the-art methods. The reason behind the improvement was the application of the Thompson-optimized function to obtain the learning rate, impulse, and regularization factor. With this, an improvement in the true positive rate was observed, in turn reducing the false negative rate, therefore contributing to specificity, recall, and F-measure.

Fig. 8
figure 8

Specificity, recall, and F-measure analyses

4.2.6 Performance analysis of precision–recall curve

In this section, the precision–recall curve is analyzed. The precision–recall curve shows a graphical representation of the secondary protein structure prediction at different threshold values. This curve plots two different parameters, namely precision and recall. Table 8 makes a detailed analysis of the precision–recall curve of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively, for hyperparameters with a sliding window of 13.

Table 8 Tabulation for precision–recall curve

Figure 9 illustrates the graphical representation of precision–recall analysis of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively. In this study, simulation was performed for all the six methods with unique recall values ranging between 0.1 and 1. For these simulation values, the precision–recall value of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5] was observed to be 0.26, 0.23, 0.21, 0.18, 0.17, and 0.15, respectively. From the simulation results, optimality between precision and recall was found by the comparison of the precision–recall value of the proposed method with that of the five other existing methods. The precision–recall improvement was observed owing to the application of Thompson Optimized Convolutional Neural Network Protein Secondary Structure Prediction model. With this model, optimizing the network parameters, in turn, resulted in speeding up of overall process and hence causing an improvement in the precision–recall curve of the proposed method EPB-OCNN over the five other existing methods.

Fig. 9
figure 9

Precision–recall analyses

4.2.7 Performance analysis of MCC coefficient

The Matthews Correlation Coefficient considers true positives, true negatives, false positives, and false negatives, and is hence considered a balanced measure even when it is used with classes of distinct sizes. The MCC is a correlation coefficient between the observed and predicted binary classifications, returning a value between − 1 and + 1. A coefficient of + 1 denotes a perfect prediction, 0 denotes a prediction no better than random, and − 1 denotes total disagreement between prediction and observation. This is mathematically formulated as given below.

$$ {\text{MCC}} = \frac{{\left( {{\text{TP}}*{\text{TN}}} \right) - \left( {{\text{FP}}*{\text{FN}}} \right)}}{{\sqrt {\left( {{\text{TP}} + {\text{FP}}} \right)\left( {{\text{TP}} + {\text{FN}}} \right)\left( {{\text{TN}} + {\text{FP}}} \right)\left( {{\text{TN}} + {\text{FN}}} \right)} }} $$
(16)

From Eq. (16), the Matthews Correlation Coefficient ‘\(\mathrm{MCC}\)’ is measured using the true positive ‘\(\mathrm{TP}\),’ true negative ‘\(\mathrm{TN}\),’ false positive ‘\(\mathrm{FP}\)’ and false negative ‘\(\mathrm{FN}\)’ counts, respectively. Table 9 provides the MCC resultant values of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively.
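Equation (16) can be sketched directly, including the two boundary cases described above (the counts are illustrative):

```python
import math

def mcc(tp, tn, fp, fn):
    """Eq. (16): Matthews Correlation Coefficient; returns 0.0 when
    any denominator factor is zero (the usual convention)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

print(mcc(tp=50, tn=50, fp=0, fn=0))   # 1.0  (perfect prediction)
print(mcc(tp=0, tn=0, fp=50, fn=50))   # -1.0 (total disagreement)
```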

Table 9 Tabulation for MCC

Finally, Fig. 10 shows a graphical representation of MCC analysis of the proposed method EPB-OCNN and the five state-of-the-art methods, TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively. As illustrated in the above figure, with the increase in the number of protein data, a significant amount of decrease in the MCC is noted. This is because increasing the number of protein data sequences considered for simulation compromises the results of classifications being made for secondary protein structure prediction. This, in turn, reduces the entire MCC values for all the six methods. However, simulation analysis showed betterment of the proposed method EPB-OCNN method over the existing five state-of-the-art methods. The reason behind the improvement was the incorporation of Thompson Optimized CNN Protein Secondary Structure Prediction algorithm. By applying this algorithm, secondary protein structure prediction was made using Location-Specific Resultant Matrix as input to the CNN. Next, the convolved feature map was obtained using kernel weight and offset parameter for each protein data based on the Thompson optimization function, therefore corroborating the result.

Fig. 10
figure 10

MCC analyses

4.2.8 Comparison of algorithms

The objective of the proposed Energy Profile Bayes and Thompson Optimized Convolutional Neural Network (EPB-OCNN) method is to utilize the protein structure sequence features and to focus on both the loss and the precision function so that the secondary protein structure prediction can be made in an accurate and precise manner. A comprehensive comparative analysis is provided in Table 10.

Table 10 Comparison of algorithms

With the purpose of improving the precision involved in protein structure prediction, the spatial locations of every atom in a protein molecule were analyzed in [1]. Though precision was analyzed, the error or loss function involved was not considered. Deep graph neural networks were employed in [2], where the analysis of both loss and accuracy was made while designing proteins. However, the true and false positive rates were not considered during the validation for proof-of-principle. Deep learning and particle swarm optimization algorithms were integrated in [3] to identify species-specific S-glutathionylation with maximum accuracy. However, the precision involved in the optimization was not included. Along with the accuracy, the noise factor was analyzed in [4] by means of feature engineering. Distance-based protein structure prediction via deep learning was made in [5], therefore contributing to the accuracy factor. Table 10 lists the comparison of algorithms made with the different methods.

Feature selection As we have applied the energy profiles for pairs of atoms, protein data were found to be highly relevant for identification, and prediction was then made using the proposed EPB-OCNN method; this was not performed in [3,4,5]. However, local structure and structural features were utilized in [1, 2].

Linear/nonlinear/collinear data In the proposed EPB-OCNN method, energy profiles were applied to the raw PDB dataset, and hence, relevant protein structures were identified. Hence, irrespective of the type of data, by applying energy profiles, further processing was ensured. In the case of [1, 2, 4, 5], only linear data were applied. However, in [3], the data type was not mentioned. Hence, upon comparison with the other state-of-the-art methods, the proposed EPB-OCNN method performed well for both linear and nonlinear data.

Optimization algorithm The Thompson optimization function is used to fine-tune the parameters in the proposed EPB-OCNN method, which in turn reduces the processing time and hence speeds up the entire process. In the case of the torsion optimization applied in [1], any small error at a local residue may result in a big RMSE, therefore compromising the protein structure prediction accuracy. In the case of the gradient descent applied in [3], redundant computation results, therefore increasing the protein structure prediction time. Though the deep learning and deep neural networks applied in [4, 5] learnt the network parameters by fine-tuning them, this resulted in a higher convergence rate.

Activation function In the proposed EPB-OCNN method, the sigmoid activation function is utilized, which controls the balance between exploitation and exploration during optimization. With the residue network activation used in [1], premature convergence was said to occur. Also, with the linear function utilized in [3,4,5] and the absence of both exploitation and exploration, premature convergence resulted.

Hyperparameters Optimization of the hyperparameters was made using the proposed EPB-OCNN method, whereas the learning rates used in the three existing methods [3,4,5] were found to be 0.6, 0.6, and 0.5, respectively. Also, with the optimized learning rates used in our work, optimal protein structure identification with minimum time is achieved by the proposed EPB-OCNN method upon comparison with the state-of-the-art methods.

Neural network construction method In the proposed EPB-OCNN method, a convolutional model was used that in turn updated the protein amino acid sequencing based on the optimization process, therefore discarding premature convergence. Though deep residual and deep graph networks were utilized in [1,2,3,4,5], with the lack of a precise optimization model, optimized results were not arrived at.

Weight calculation of nodes In the proposed EPB-OCNN method, the score values are obtained solely from the Thompson optimization model, therefore ensuring optimization. In the other state-of-the-art methods, no such weighting was available; hence the proposed EPB-OCNN method contributes better precision–recall and ROC curves.

Error rate In the proposed EPB-OCNN method, Gaussian prior modeling was applied based on the LSRM matrix, which in turn minimized the error or loss rate. No such provision to address the error aspect was included in [2, 4]. An absolute calculation was done in [5], and zero training error was reported in [4].

4.2.9 Statistical test/analysis

The statistical test for protein secondary structure prediction is performed using the McNemar test (Table 11). The McNemar test is employed to identify a change in proportion for paired protein structure data. To evaluate the McNemar test, the protein structure data are placed in a 2 × 2 contingency table, with the cell frequencies equaling the number of pairs. The McNemar test statistic is then measured as given below, where \(b\) and \(c\) denote the off-diagonal (discordant) cell frequencies of the table.

$$ \chi^{2} = \frac{{\left( {b - c} \right)^{2} }}{b + c} $$
(17)
Table 11 Tabulation for McNemar test
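As a minimal sketch, the statistic in Eq. (17) can be computed directly from the two discordant cell counts (the counts below are illustrative, not taken from Table 11):

```python
# b = pairs the first method predicts correctly and the second does not,
# c = pairs the second method predicts correctly and the first does not.
def mcnemar_statistic(b, c):
    """McNemar chi-squared statistic, Eq. (17): (b - c)^2 / (b + c)."""
    if b + c == 0:
        raise ValueError("b + c must be positive")
    return (b - c) ** 2 / (b + c)

# Illustrative discordant counts:
stat = mcnemar_statistic(25, 10)  # (25 - 10)^2 / 35
```

The concordant cells of the contingency table (both methods right, or both wrong) do not enter the statistic; only the disagreements matter.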

Figure 11 shows the McNemar test (M-test) results of the proposed EPB-OCNN method and the five existing state-of-the-art methods. From the figure, it is evident that, with the M-test analysis performed for simulations ranging between 500 and 5000 protein data, an increasing trend was found. Even with simulations conducted for only 500 protein data, a comparative improvement was observed in the proposed EPB-OCNN method over the five state-of-the-art methods. The reason behind the improvement is the application of the optimization function for fine-tuning the hyperparameters. With this, the M-test result of the proposed EPB-OCNN method, in comparison with the five state-of-the-art methods TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], is improved by 6%, 7%, 9%, 10%, and 12%, respectively.

Fig. 11

M-test analysis

4.2.10 L2 Loss function

The L2 loss function is applied to reduce the error. It is measured as the sum of the squared differences between the true and predicted values, and is mathematically evaluated as given below.

$$ L2\,{\text{Loss}}\,{\text{Function}} = \sum\nolimits_{i = 1}^{n} {\left( {y_{{\text{true}},i} - y_{{\text{predicted}},i} } \right)^{2} } $$
(18)
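A direct sketch of Eq. (18) over paired value sequences (the values shown are illustrative, not taken from Table 12):

```python
def l2_loss(y_true, y_predicted):
    """Sum of squared differences between true and predicted values, Eq. (18)."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_predicted))

# Illustrative values: only the last pair differs, by 2, contributing 4.
l2_loss([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # = 4.0
```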

From Eq. (18), the L2 loss function is evaluated. Table 12 provides the L2 loss function values of the proposed EPB-OCNN method and the state-of-the-art methods TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively, for hyperparameters with a sliding window of 13.

Table 12 Tabulation for L2 loss function

Figure 12 demonstrates the L2 loss function of the proposed EPB-OCNN method and the five existing state-of-the-art methods TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively. The number of protein data is taken along the horizontal axis, and the L2 loss function is observed along the vertical axis. The number of protein data is considered in the range of 500 to 5000 to conduct the simulation. The reason behind the improvement is the application of the L2 loss function for fine-tuning the hyperparameters with the aid of the Thompson optimization algorithm. With this, the L2 loss function of the proposed EPB-OCNN method provides minimal loss in comparison with the five state-of-the-art methods. Consider 500 protein data for conducting the experiments in the first iteration. By applying the proposed EPB-OCNN, 494 data are correctly predicted and the L2 loss function is 36, whereas the L2 loss functions of the existing TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5] are 100, 225, 400, 625, and 900, respectively, following which various performance results are observed for each method. For each method, ten different results are observed. The proposed EPB-OCNN has achieved a better L2 loss function result than the other existing methods.

Fig. 12

L2 loss function

4.2.11 Root mean square error

The root mean square error (RMSE) is measured by taking the square root of the above-mentioned L2 loss function. This is mathematically computed as given below.

$$ {\text{RMSE}} = \sqrt {\sum\nolimits_{i = 1}^{n} {\left( {y_{{\text{true}},i} - y_{{\text{predicted}},i} } \right)^{2} } } $$
(19)
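Following Eq. (19) as defined here, the square root of the summed squared errors (the values shown are illustrative, not taken from Table 13):

```python
import math

def rmse(y_true, y_predicted):
    """Square root of the summed squared errors, following Eq. (19)."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_predicted)))

# Illustrative values: the summed squared error is 4, so the result is 2.
rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # = 2.0
```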

From Eq. (19), the root mean square error (RMSE) is computed. Table 13 provides the RMSE values of the proposed EPB-OCNN method and the state-of-the-art methods TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5], respectively, for hyperparameters with a sliding window of 13.

Table 13 Tabulation for RMSE function

Figure 13 displays the root mean square error of the proposed EPB-OCNN method and the five existing state-of-the-art methods with respect to the number of protein data. The x-axis denotes the number of protein data, and the y-axis represents the root mean square error. In the experimentation process, different numbers of protein data are taken as input in the range of 500 to 5000. From the observed results, the RMSE is minimized using the introduced EPB-OCNN method. This is because of the implementation of the loss function to optimize the hyperparameters by using the Thompson optimization algorithm. In the first iteration, 500 protein data are used for the experiments. The RMSE of the proposed EPB-OCNN is 6, whereas the RMSEs of the existing TBM [1], PD-DGNN [2], CF [3], DL [4], and DPSP [5] are 10, 15, 20, 25, and 30, respectively. The proposed EPB-OCNN obtained good results for RMSE compared to the state-of-the-art methods.

Fig. 13

Root mean square error loss function

5 Conclusion

In bioinformatics, protein secondary structure prediction is a very significant task, and fully realizing the relationship between protein sequences and their structural formations is therefore mandatory. To that end, we propose an Energy Profile Bayes and Thompson Optimized Convolutional Neural Network (EPB-OCNN) method. In this work, the Energy Profile Legion-Class Bayes Protein Structure Identification and Thompson Optimized Convolutional Neural Network Protein Secondary Structure Prediction models are combined to predict protein secondary structure. The Energy Profile Legion-Class Bayes first measures energy profiles and extracts protein sequence features for identifying secondary protein structures. Next, the Thompson Optimized Convolutional Neural Network uses the Location-Specific Resultant Matrix as the input of a convolutional neural network, with optimization performed via the Thompson optimization function, to predict protein secondary structure. Upon comparison with the prediction results of state-of-the-art methods, the protein structure prediction accuracy, time, and precision of the proposed EPB-OCNN method are relatively strong, and the method accomplishes very consequential effects with good precision. Additional protein descriptors employing regularization techniques may also be explored. The inclusion of categorical variables to produce amino acid descriptors is also worth further investigation. Future versions of quantum computers, with their potential to simulate quantum-chemical systems, may also shed light on the protein structure prediction problem.