From zero-shot machine learning to zero-day attack detection

Machine learning (ML) models have proved effective in classifying data samples into their respective categories. The standard ML evaluation methodology assumes that the test data samples are drawn from classes already observed during the training phase. However, in applications such as Network Intrusion Detection Systems (NIDSs), obtaining data samples of every attack class that will eventually be observed is challenging. ML-based NIDSs face new attack traffic, known as zero-day attacks, that is absent from training because it did not exist at the time. Therefore, this paper proposes a novel zero-shot learning methodology to evaluate the performance of ML-based NIDSs in recognising zero-day attack scenarios. In the attribute learning stage, the learning models map network data features to semantic attributes that distinguish known attacks from benign behaviour. In the inference stage, the models construct the relationships between known and zero-day attacks in order to detect the latter as malicious. A new evaluation metric, the Zero-day Detection Rate (Z-DR), is defined to measure the effectiveness of the learning model in detecting unknown attacks. The proposed framework is evaluated using two key ML models and two modern NIDS datasets. The results demonstrate that ML-based NIDSs are ineffective in detecting certain zero-day attack groups identified in this paper. Further analysis shows that attacks with a low Z-DR have a significantly distinct feature distribution, and a higher Wasserstein Distance range, than the other attack classes.


Introduction
Over the past few years, Machine Learning (ML) capabilities have been utilised to enhance the performance and efficiency of various technological applications [1]. ML is a subset of Artificial Intelligence (AI) [2], involving a set of statistical algorithms that can learn from data without being explicitly programmed [3]. ML models are recognised for their superior ability to extract and learn complex data patterns that are not feasibly realisable by domain experts [4]. The learnt patterns are used to predict and classify future events and scenarios. This has been a disruptive innovation [5] in multiple industries where operational automation and efficiency are required. Therefore, ML models have been widely deployed across multiple domains, proving highly successful compared with traditional computing algorithms in settings where it is challenging to perform the required operations. The same motivation has led to the implementation of ML models in the cybersecurity domain [6], to further enhance and strengthen the security posture of organisations. The intelligence of ML models has been taken advantage of in securing computer networks [7] against advanced threats. Adding this intelligence element to an organisation's security strategy provides sophisticated layers of defence [8] that, if designed efficiently, can limit the number of internal and external threats. ML is capable of detecting complicated modern attacks that require advanced detection capabilities [9].
Network Intrusion Detection Systems (NIDSs) are essential security tools that detect threats as they penetrate an organisation's network environment [10]. There are two main types of NIDS. Signature-based NIDSs scan incoming network traffic for any Indicator of Compromise (IOC), also known as an attack signature, such as source IPs, domain names, and hash values, that might indicate malicious traffic [11]. One of the main ongoing challenges of securing computer networks with signature-based NIDSs is the detection of zero-day attacks [12]. A zero-day attack is a new kind of threat that has not been seen before [13], designed to infiltrate or disrupt network communications. It exploits a vulnerability unknown to security administrators, which hackers can take advantage of before it has been remediated. A recent example is a zero-day vulnerability discovered in Microsoft Windows in June 2019 that targeted local privilege escalation [14]. Generally, when a zero-day attack is discovered, it is added to the publicly shared Common Vulnerabilities and Exposures (CVE) list [15]. Known security vulnerabilities are defined using a CVE code and severity level [15], and shared amongst the community for immediate action. The remediation of zero-day attacks is generally conducted by adding IOCs related to the threat to the detection databases [16] used by signature-based NIDSs. As such, signature-based NIDSs are deemed unreliable in the detection of zero-day attacks, simply because the complete set of IOCs has not been discovered or registered for monitoring at the time of penetration.
It is very challenging to identify zero-day attacks using signature-based NIDSs, as it takes an average of 312 days [13] to obtain the full set of attack IOCs. Meanwhile, organisations protected by signature-based NIDSs are vulnerable to such attacks. Therefore, the focus has shifted towards building ML-based NIDSs [12], an enhanced modern edition of traditional NIDSs designed to overcome the limitations faced in the detection of zero-day or unseen attacks. ML-based NIDSs are designed and deployed to scan and analyse incoming network traffic for any anomalies or malicious intent [12]. The analysis process compares the incoming network behaviour with the learnt behaviour of safe and intrusive traffic [17]. During the design process, the ML model is trained using a set of benign and attack samples, from which the hidden complex patterns of traffic are learned. Unlike signature-based NIDSs, which rely solely on IOCs for detection, ML-based NIDSs utilise the learnt behavioural patterns to detect network attacks [17]. This has great potential for detecting zero-day attacks, as the requirement of obtaining IOCs becomes obsolete [18]. Zero-day attacks can be detected by ML-based NIDSs using the learned attack behaviour, which has attracted great attention towards the development of such models. Most of the available research has aimed at the design and evaluation of ML-based NIDSs in the detection of known attack groups. However, a limited amount of research has focused on the evaluation of zero-day attack detection to measure the benefits of ML-based NIDSs over signature-based NIDSs. As such, a large number of proposed ML-based NIDSs do not consider the likely recurring scenario of zero-day attacks, where a new attack class appears after the learning stage of the ML model.
Zero-shot Learning (ZSL) is an emerging methodology used to evaluate and improve the generalisability of ML models to new or unseen data classes [19]. This technique follows the assumption that the training dataset might not include the entire set of classes that the ML model could observe once deployed in the real world. As such, ZSL addresses the ever-growing set of classes that might render it infeasible to collect training samples for each of them [20]. ZSL involves the recognition of new data samples derived from previously unseen classes. In the attribute learning stage, the model is provided with distinguishing semantics of the missing class. In the inference stage, ZSL applies the learnt attributes to predict or classify samples belonging to the missing data class [19]. ZSL addresses one of the main challenges in building a reliable ML-based NIDS, namely the evaluation of detecting new attack classes that are not available in the training phase, such as zero-day attacks [21]. This directly applies to ML-based NIDSs, as new attack classes that the learning model did not train on emerge and are observed in the real world post-deployment. This includes zero-day attacks, which could lead to severe consequences for the adopting organisation if undetected [13]. Therefore, a reliable ML-based NIDS needs to be evaluated not only across a set of known attacks but also against unknown attacks that were missing from the training dataset, simulating the likely scenario of a zero-day threat.
In this paper, a new zero-day evaluation methodology, inspired by ZSL, is proposed. The framework measures how well an ML-based NIDS can detect unseen attacks using a set of semantic attributes learnt from seen attacks. There are two main stages in the proposed setup. In the attribute learning stage, the models extract and map the network data features to the unique attributes of known attacks. In the inference phase, the model associates the relationships between known attacks and zero-day attacks to assist in their discovery and classification as malicious. Unlike traditional evaluation methods, the proposed setup evaluates ML-based NIDSs using a new metric, named the Zero-day Detection Rate (Z-DR), that measures how well a learning model can reconstruct the distinguishing semantics learnt from known attack classes to detect unknown attack classes. The proposed methodology has been implemented using two key NIDS datasets, each consisting of a broad range of modern attacks, and two widely used ML models in the research field. Additionally, the achieved results have been analysed using the Wasserstein Distance to explain the variation of the Z-DR across different attack groups. The key contribution of this paper is the adoption of a ZSL-based problem setup to propose a reliable evaluation methodology for ML-based NIDSs in the detection of new or unseen attack types, mimicking the most likely occurrences of zero-day attacks post-deployment. In Section 2 key related works are discussed, followed by a detailed explanation of the proposed ZSL-based methodology in Section 3. The experimental methodology followed in this paper and the results obtained are discussed in Sections 4 and 5 respectively, before concluding the paper.

Related Works
In this section, key related papers that aimed to evaluate NIDSs for the detection of zero-day attacks are discussed. While most papers aimed to design sophisticated ML-based NIDSs [22], the focus has been on the evaluation of the proposed systems across a range of known attacks, where traditional signature-based NIDSs have achieved satisfactory performance throughout the years. It is therefore surprising that only a few papers have aimed to challenge ML-based NIDSs in the detection of unknown or zero-day attacks. In the case of unsupervised anomaly detection systems, where the model only learns the behaviour of benign traffic, the NIDS fundamentally works by treating each attack type as an unknown attack. However, such models lead to a large number of false alarms and consequent alert fatigue [23], as they do not consider the attack behaviour. Overall, there is a limited number of papers following a zero-shot learning methodology to detect zero-day attacks. Of these works, none, to the best of our knowledge, have aimed to utilise modern network datasets, which represent current network traffic characteristics, to evaluate their approach.
In [24], the author evaluated zero-day attack detection performance using a signature-based NIDS. The paper studies the frequent claim that such systems are not capable of detecting zero-day attacks. The experiment studies 356 network attacks, of which 183 are unknown (zero-day) to the rule set. The paper utilised the Snort tool, a well-known signature-based NIDS in the industry. The Metasploit Framework is utilised to simulate the attack scenarios. The detection rate is calculated by applying a Snort ruleset which does not disclose the vulnerabilities relevant to the attack. The results show that Snort has an unreliable detection rate of 17% against zero-day attacks. The paper argues that the frequent claim that signature-based NIDSs are not capable of detecting zero-day attacks is incorrect, as 17% is significantly larger than zero. The author mentions that further mechanisms should be implemented to complement signature-based NIDSs in the detection of unregistered attacks, and that the results of the paper can be seen as a baseline for zero-day attack detection.
In [25], Hindy et al. aimed to improve unsupervised outlier-based detection systems, which usually suffer from a high False Alarm Rate (FAR). The paper explored an autoencoder for the detection of zero-day attacks, to maintain a high detection rate while lowering the FAR. The system is evaluated across two key datasets: CICIDS2017 and NSL-KDD. The methodology involved training the classifiers using the benign data samples and evaluating the detection of zero-day attacks. The results are compared to a one-class support vector machine, over which the autoencoder is superior. The results demonstrate a zero-day detection accuracy of 89-99% for the NSL-KDD dataset and 75-98% for the CICIDS2017 dataset. However, the proposed models do not take the attack behaviour into consideration, and the numbers of undetected attacks and false alarms are unmeasured.
Zhang et al. [21] evaluated ML-based NIDS detection performance against zero-day attacks. The authors used zero-shot learning to simulate the occurrence of zero-day attack scenarios. The ML models learn the distinguishing information between the attack and benign classes by mapping the feature space to the attribute space. The authors utilised a sparse autoencoder model that projects the features of known attacks onto a semantic space and establishes a feature-to-semantic mapping to detect the unknown attacks. The paper utilised the attacks present in the NSL-KDD dataset, released in 1998, to simulate a zero-day scenario; the dataset contains four attack scenarios. The results demonstrate an average accuracy of 88.3% across the available attacks in the dataset.
Li et al. [26] focused on attribute learning methods to detect unknown attack types. The authors followed a zero-shot learning method to design a NIDS that overcomes the anomaly detection limitations faced by current methods. The architecture involves a pipeline using Random Forest (RF) feature selection and a spatial clustering attribute conversion method. The attribute learning framework converts network data samples into unsupervised cluster attributes. The results demonstrate that the proposed method outperforms state-of-the-art approaches in anomaly detection. The NSL-KDD dataset was utilised to evaluate the proposed framework, where it was able to detect the DoS (apache2) and Probe (saint) attacks, achieving an overall accuracy of 34.71%. The authors compared its performance with a decision tree classifier, which achieved a poor overall accuracy of 13.59%.
Overall, significant contributions have been made by research works aiming to evaluate the performance of ML-based NIDSs in the detection of unknown attacks. However, only a very small number adopted a ZSL-based setup to simulate the occurrence of zero-day attacks. Moreover, there has been very limited experimental work on modern zero-day attack scenarios with recent datasets and attack types, which limits the identification of sophisticated attacks that cannot be detected in zero-day scenarios. In addition, it is surprising that some recent works still utilise the NSL-KDD dataset for evaluation purposes, given that it is more than 20 years old. The attack scenarios available in that dataset do not represent modern network traffic characteristics and threats, which limits the reliability of the proposed methodologies and their evaluation [27].

Proposed Methodology
In a traditional ML evaluation methodology, the learning model is trained and tested on the same set of data classes. In the training stage, the model learns to identify patterns directly from each data class. In the testing stage, the model applies the learnt patterns to identify data samples derived from the same data classes used in the training stage. In an experimental setup, the utilised dataset is split into training and testing partitions, where both sets have the same number and type of classes. The learning model is trained on the training set using the complete set of classes that also forms the test set used in the evaluation stage. This approach to evaluation follows the assumption that the dataset collected for the training of ML models includes the full set of classes that the model will observe post-deployment in production. In the case of an ML-based NIDS, the model is trained and tested using a set of known attack classes. The model is evaluated on how well it can detect data samples derived from known attack groups as malicious.
The training set D_tr and testing set D_tst of an NIDS dataset can be represented as follows:

D_tr = {(x, y) : x ∈ X_tr, y ∈ Y},  D_tst = {(x, y) : x ∈ X_tst, y ∈ Y},  where X_tr ⊂ X, X_tst ⊂ X

and Y = {b, a_1, ..., a_n} is the full set of classes, consisting of benign traffic b and the n attack classes. The traditional ML setup has been commonly used in the ML-based NIDS evaluation process, proving effective in measuring the detection rate of the known attack groups used in the training set. However, obtaining data samples for each attack class is very challenging for several reasons. For instance, zero-day attacks have emerged repeatedly over the past few decades and present a serious risk to organisations and their computer networks. A zero-day attack can be a new or modified threat that has not been seen before, designed solely to infiltrate and disrupt network communications [13]. Therefore, the traditional ML evaluation setup cannot support the conclusion that ML-based NIDSs are effective in the detection of zero-day attack scenarios, since such attacks are unavailable at the time of training. ZSL techniques have been adopted to address this shortcoming when evaluating systems that are required to detect a larger set of classes than the one used in training. Unlike traditional ML methods, which focus on evaluating the generalisation of the model to new data samples derived from pre-observed classes, the objective of ZSL is to improve the detection of unseen classes.
ZSL is essentially performed in two stages [19]: an attribute learning stage, where distinguishing knowledge, also known as auxiliary information, is captured, followed by an inference stage, where the learnt semantics are utilised to categorise data samples that belong to a new set of classes. ZSL is a promising approach to leveraging supervised learning for the detection of classes with unavailable training data samples, as it was principally developed to overcome the situation where no training samples of a class are available. This approach overcomes the limitation of evaluating ML-based NIDSs in the detection of zero-day attacks, as the collection and labelling of training data samples of zero-day attacks remains impossible simply due to their absence during the development and training phases of the ML-based NIDS. Overall, ZSL removes the necessity of collecting training data samples of all the attack classes that the model will observe post-deployment, including zero-day attacks.

Fig. 1: Proposed Methodology
In this paper, we propose a ZSL-based methodology, illustrated in Figure 1, to evaluate ML-based NIDSs in the detection of zero-day attacks. In the attribute learning stage, the model captures the semantic attributes of the attack behaviour using a set of known attack and benign data samples. The attributes hold the distinguishing vectors between attack and benign network traffic. In the inference stage, the learnt knowledge is utilised to reconstruct the relationship between known attacks and the zero-day attack, in order to classify the unseen zero-day attack group as malicious. Three main data concepts exist as part of the proposed methodology:
1) Known attacks: precedent attacks for which labelled data samples are available during training.
2) Zero-day attacks: unknown attacks that emerge post-deployment and for which labelled data samples are unavailable during training.
3) Semantic attributes: the distinguishing information that the ML model learns from the known attacks in order to detect the zero-day attacks.
The proposed methodology assumes that, at the testing stage, the model is evaluated using zero-shot samples derived from attack classes that were not available during the training stage. The model is required to detect the unseen class as malicious by associating the distinguishing information learnt from the observed classes with the unobserved class.
Given an NIDS dataset, we can define a ZSL training set D^z_tr for an attack class z as follows:

D^z_tr = {(x, y) : x ∈ X_tr, y ∈ Y^z_tr},  where X_tr ⊂ X, X_tst ⊂ X

That is, the set of training classes Y^z_tr consists of benign traffic b and the n attack classes a_1, ..., a_n, but importantly, minus the attack class a_z. In contrast, the test dataset D_tst always consists of samples of all classes, i.e., without the removal of any attack class. By excluding an attack class a_z from the training phase, we essentially simulate a zero-day attack scenario, as the ML model has not been trained on the respective attack class and a new attack class has emerged after the training phase. The concept of our ZSL evaluation scenario is also illustrated in Figure 2.
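The construction of this training set can be sketched in Python; this is a minimal illustration assuming a labelled dataset held in NumPy arrays, with all variable and parameter names chosen for illustration rather than taken from the paper's implementation:

```python
# Sketch of the zero-day split: the class held out as "zero-day" (a_z) is
# removed from the training partition only; the test partition keeps all classes.
import numpy as np
from sklearn.model_selection import train_test_split

def zero_day_split(X, y, zero_day_class, test_size=0.3, seed=42):
    """Return (X_tr, y_tr, X_tst, y_tst) with `zero_day_class` absent from training."""
    X_tr, X_tst, y_tr, y_tst = train_test_split(
        X, y, test_size=test_size, random_state=seed, stratify=y)
    keep = y_tr != zero_day_class          # drop the simulated zero-day class
    return X_tr[keep], y_tr[keep], X_tst, y_tst

# toy demonstration with three classes: 0 = benign, 1 and 2 = attack classes
X = np.random.rand(300, 4)
y = np.repeat([0, 1, 2], 100)
X_tr, y_tr, X_tst, y_tst = zero_day_split(X, y, zero_day_class=2)
assert 2 not in y_tr and 2 in y_tst        # class 2 is "zero-day" to the model
```

Repeating this split once per attack class yields the full leave-one-attack-out evaluation described above.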
The purpose of the attribute learning stage is for the model to map the network data feature space to the relevant attribute space defining the attack behaviour. The learnt semantics can then be used to distinguish a zero-day attack from benign traffic. Assume S is the set of known attack classes that can be used to train an ML model, each known attack data sample is denoted by x, its respective label by y, and the semantic information of the attack behaviour by h. Therefore, x and y take the values of one of the known attack samples and classes, respectively. The stage can be represented as learning a mapping

f : X → H,  h = f(x),  y ∈ S

where H is the set of learnt attributes that can be used to predict zero-day attacks.
During the testing or inference phase, the zero-day attack traffic class a_z is added back to the test set, in order to measure the zero-day detection accuracy. For this purpose, we have defined a new evaluation metric, which is discussed in Section 4.2. This follows the generalised ZSL setting, where the test samples may belong to the seen (known attacks and benign traffic) or unseen (zero-day attack) classes [28]. This has proven to be a more practical scenario than the conventional ZSL setting, in which the test set only includes samples from the unseen class, which is difficult to guarantee from a network security perspective. The main goal in this setting is to reconstruct the knowledge learnt from the known attacks so that the model can detect the zero-day attack as malicious. This is accomplished by associating the relationships between known attacks and zero-day attacks. For each of the available attack classes in the dataset, we simulate the zero-day attack scenario by removing that class from the corresponding training set and consider the ability of a model trained on all the other attack classes to detect the unseen attack class.
The proposed zero-day detection evaluation methodology aims to evaluate ML-based NIDSs with regard to their ability to generalise and detect new and unseen attack classes post-deployment, which is a very realistic and relevant scenario. Information gained from our evaluation methodology can be used to further improve the robustness of ML-based NIDSs against zero-day attacks.

Experimental Setup
Evaluating the capability of an ML-based NIDS to detect zero-day attacks using a set of attributes learnt from known attacks is crucial, as this performance motivates the usage and development of ML in the detection of zero-day attacks. In this paper, two ML models commonly used in the design of ML-based NIDSs are adopted: Random Forest (RF) [29] and Multi-Layer Perceptron (MLP) [30]. Both are deemed to achieve reliable performance and typically attain a high detection accuracy. The RF classifier is designed using 50 randomised decision tree classifiers in the forest, each following the Gini impurity function [31] to measure the quality of a split. The MLP neural network model is structured with two hidden layers of 100 neurons, each performing the Rectified Linear Unit (ReLU) [32] activation function. A stochastic gradient-based optimiser is utilised to optimise the model's weights and parameters. In the evaluation, a five-fold cross-validation method is adopted to calculate the mean results.
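The two classifiers described above can be configured with scikit-learn roughly as follows. This is a sketch, not the paper's exact implementation: it assumes 100 neurons per hidden layer and the Adam optimiser as the stochastic gradient-based method, and uses synthetic stand-in data in place of the NIDS datasets:

```python
# RF with 50 Gini-based trees and an MLP with two ReLU hidden layers,
# scored with five-fold cross-validation as described in the text.
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

rf = RandomForestClassifier(n_estimators=50, criterion="gini", random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(100, 100),   # assumed: 100 neurons per layer
                    activation="relu", solver="adam",
                    max_iter=300, random_state=0)

# synthetic stand-in data; the paper uses the NIDS datasets instead
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
for name, model in [("RF", rf), ("MLP", mlp)]:
    scores = cross_val_score(model, X, y, cv=5)   # five-fold cross-validation
    print(name, round(scores.mean(), 3))
```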

Datasets
In this paper, two NIDS datasets are used to evaluate the ML models following the proposed methodology: UNSW-NB15 [33] and NF-UNSW-NB15-v2 [34]. Both datasets are synthetic and were created using virtual network testbeds representing modern network structures. In designing such datasets, certain attack scenarios are conducted and the corresponding network traffic is captured and labelled with the respective attack type. In addition, normal network traffic representing benign behaviour is generated, captured, and labelled accordingly. Both the malicious and non-malicious traffic is captured in the native packet capture (pcap) format, and certain data features are extracted to represent explicit information regarding the data flow. The chosen datasets include a variety of modern network attacks, such that each can be used to simulate the arrival of a zero-day attack. Such datasets have been widely used in the literature, as they do not present the limitations faced by the collection and labelling of real-world production network traffic.
The complete set of network data samples in each dataset is used in this paper. Initially, flow identifiers such as the sample ID, source/destination IPs, source/destination ports, and timestamps are dropped to avoid learning bias towards the attacking and victim endpoints. This is required because distinct nodes in the testbed were used to launch attack scenarios targeting certain network ports. Moreover, all categorical features are converted to numerical values using the label encoding technique, where each label is assigned a unique integer. Once a fully numerical dataset is obtained, the min-max scaling technique is applied to normalise all values between 0 and 1. This is necessary to prevent the learning model from assigning higher weights to features holding larger numerical values. This completes the preprocessing stage, after which the data is ready for the ML training and testing stages following the proposed setup.
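The preprocessing steps above can be sketched with pandas and scikit-learn; the column names used here are illustrative and do not match the actual dataset schemas:

```python
# Drop flow identifiers, label-encode categorical columns, then min-max
# scale all values into [0, 1], as described in the preprocessing stage.
import pandas as pd
from sklearn.preprocessing import LabelEncoder, MinMaxScaler

def preprocess(df, id_cols=("id", "src_ip", "dst_ip", "src_port", "dst_port", "timestamp")):
    df = df.drop(columns=[c for c in id_cols if c in df.columns])
    for col in df.select_dtypes(include="object").columns:
        df[col] = LabelEncoder().fit_transform(df[col])   # each label -> unique integer
    scaled = MinMaxScaler().fit_transform(df)             # normalise into [0, 1]
    return pd.DataFrame(scaled, columns=df.columns)

# toy flow records (illustrative columns)
df = pd.DataFrame({"src_ip": ["10.0.0.1", "10.0.0.2"],
                   "proto": ["tcp", "udp"],
                   "bytes": [1200, 40]})
out = preprocess(df)
print(out.min().min(), out.max().max())   # all values now lie in [0, 1]
```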

Zero-day Detection Rate
We use the common classification performance metrics of Accuracy, Detection Rate (DR), False Alarm Rate (FAR), Area Under the ROC Curve (AUC), and F1 Score for our evaluation. These metrics are defined based on the numbers of True Positives (TP), False Positives (FP), True Negatives (TN), and False Negatives (FN), as shown in Table 1. In our evaluation scenarios, these metrics are calculated for the binary classification case, where the classifier distinguishes between benign and attack traffic, and are hence equivalent to the micro-average in the multiclass classification scenario. In addition to these standard metrics, we define a new evaluation metric called the Zero-Day Detection Rate (Z-DR_z), also shown in Table 1: the percentage of correctly classified zero-day attack samples in the test set, i.e., the specific detection rate of the zero-day attack class a_z, which was excluded from the training dataset (Equation 6).

Z-DR_z = TP_az / (TP_az + FN_az) × 100    (6)

Here, TP_az and FN_az are the numbers of True Positives and False Negatives calculated specifically for the samples of the zero-day attack class a_z. The new metric measures how well the ML model can detect zero-day attacks of class a_z, i.e., based solely on the training information provided by the other attack classes. It therefore indicates the ability of the ML model to generalise the information learnt from the other attack classes to the new, unseen (zero-day) class.
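Equation 6 translates directly into code. A minimal sketch, with illustrative class labels and a binary attack/benign prediction vector (1 = attack, 0 = benign):

```python
# Z-DR restricted to the held-out attack class, computed from binary predictions.
import numpy as np

def zero_day_detection_rate(y_true_class, y_pred_binary, zero_day_class):
    """Percentage of zero-day samples flagged as attack: TP_az / (TP_az + FN_az) * 100."""
    mask = np.asarray(y_true_class) == zero_day_class     # zero-day samples only
    tp = np.sum(np.asarray(y_pred_binary)[mask] == 1)     # detected as malicious
    fn = np.sum(np.asarray(y_pred_binary)[mask] == 0)     # missed (flagged benign)
    return 100.0 * tp / (tp + fn)

y_true = ["benign", "fuzzers", "fuzzers", "dos", "fuzzers"]
y_pred = [0, 1, 0, 1, 1]
print(zero_day_detection_rate(y_true, y_pred, "fuzzers"))  # roughly 66.7: 2 of 3 detected
```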

Evaluation
In this section, two ML models, Multi-Layer Perceptron (MLP) and Random Forest (RF), are utilised to evaluate the detection of zero-day attacks using our proposed ZSL evaluation scenario. Two synthetic NIDS datasets (UNSW-NB15 and NF-UNSW-NB15-v2) are used in the experiments. Each available attack in the datasets is considered in turn to simulate a zero-day attack incident. The models are evaluated based on the Z-DR, as well as the overall detection accuracy on the test set that includes known attacks, the zero-day attack, and benign data samples. This represents a generalised ZSL setup, where the test set includes both known and unknown data samples, which is appropriate for ML-based NIDS evaluation.

Results
The goal of the experiments is to evaluate the performance of ML-based NIDSs in a realistic network environment, where attacks not used in the training of the model are likely to be observed post-deployment. Tables 2 to 5 display the complete set of results collected. Each table represents a unique combination of an ML model and a dataset. In each table, the first column lists the attacks used to simulate a zero-day attack incident. The second column displays the corresponding Z-DR value, and the remaining columns present the other evaluation metrics collected over the complete test set, which includes the zero-day attack, known attacks, and benign data samples.
Tables 2 and 3 present the performance of the MLP and RF classifiers when evaluated using the UNSW-NB15 dataset. During the simulation of zero-day attacks, the Exploits, Reconnaissance, and DoS attacks are detected at a rate of around 90% using the MLP classifier. The RF classifier is more effective in the detection of the Exploits and DoS attacks. However, the MLP and RF models detect only 20% and 15% of the Fuzzers attack data samples, respectively. This attack group therefore presents a severe risk to organisations protected by such ML-based NIDSs, in the scenario where a zero-day attack similar to Fuzzers is launched.
The MLP model is superior to RF in the detection of the Generic and Shellcode attack types, achieving high detection rates of 96% and 97% compared to 59% and 91%, respectively. The Analysis attack type is deemed complex to detect as a zero-day attack, with the MLP model achieving 84% and the RF model 81%. Other attack types, such as Backdoor and Worms, were almost fully detected by both ML models when observed as zero-day attacks.
The performance of both ML models depended on the complexity of the incoming zero-day attacks. The models successfully detected 95% or more of attacks such as Generic, DoS, Backdoor, Shellcode, and Worms. However, Exploits, Reconnaissance, and Analysis proved harder to detect, with both models achieving around 90% detection rates. In the likely scenario of the models observing attacks related to the Fuzzers group as a zero-day attack, ML-based NIDSs would be extremely vulnerable, as more than 80% of the group's data samples were undetected and classified as benign. Further analysis is required to investigate the complexity of the Fuzzers attack group that causes its zero-day attack samples to go undetected by the ML models. Overall, the MLP classifier achieved an average Z-DR of 85.5% across the zero-day attacks. The RF classifier was slightly inferior, with an average of 80.67%.
In Figure 3, the detection rate of each attack group in the UNSW-NB15 dataset is measured in the (traditional) known attack and zero-day attack scenarios. In the known attack scenario, the model has observed the attack in the training set. In the zero-day attack scenario, the attack is not available in the training set for the model to observe. Figures 3a and 3b represent the performance using the MLP and RF models, respectively. The drop in detection rate is highly notable for certain attack types such as Fuzzers and Reconnaissance, for which the DR value dropped by around 70% and 10% respectively for the two ML models. Furthermore, there are distinct differences in the performance of the two models. The MLP model was more successful in the detection of zero-day Generic attacks, at a detection rate of 95.90% compared to 59.06% achieved by RF. Both models achieved a 100% detection rate when the attack class was observed in the training set.

In Tables 4 and 5, the ML models' zero-day attack detection performance is evaluated using NF-UNSW-NB15-v2, the NetFlow-based edition of the UNSW-NB15 dataset. The MLP model is superior to the RF model in the detection of the Exploits and Fuzzers zero-day attack groups, with detection rates of 82% and 76% compared to 59% and 51%, respectively. The ML models did not successfully apply the learnt semantic attributes of the attack behaviour to relate the Exploits and Fuzzers zero-day attacks to malicious traffic. Based on the results of our considered scenario, attacks such as Generic, Reconnaissance, Backdoor, Shellcode, and Worms present a significantly lower cybersecurity risk to organisations protected by ML-based NIDSs when they are observed for the first time as zero-day attacks; the utilised models correctly detected close to 100% of their data samples as intrusive traffic. Moreover, the DoS and Analysis attack groups were slightly harder to detect, as both ML models detected around 90% of their data samples.
Most of the attacks present in the NF-UNSW-NB15-v2 dataset were reliably detected by the two considered ML models in the zero-day attack scenario. The learning models successfully utilised the information learnt from known attacks to detect the zero-day attack types. However, the Exploits and Fuzzers attack groups proved harder to detect when the ML models did not observe them during training and encountered them as zero-day attacks. Overall, the MLP and RF models achieved average Z-DR values of 92.45% and 87.37%, respectively. The UNSW-NB15 and NF-UNSW-NB15-v2 datasets contain the same attack groups and differ only in their respective feature sets. The NetFlow-based feature set of NF-UNSW-NB15-v2 increased the detection rate by around 7% for each of the two ML models, demonstrating the advantage of NetFlow-based features in the detection of zero-day attack scenarios.
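The zero-day evaluation protocol described above (hold out one attack class from training, then measure the fraction of its samples flagged as malicious) can be sketched as follows. This is an illustrative sketch, not the authors' original code: the data is synthetic, the function and variable names are hypothetical, and scikit-learn's RandomForestClassifier stands in for the RF model.

```python
# Sketch of the Z-DR protocol: train a binary classifier on benign traffic
# plus all attacks EXCEPT the zero-day class, then report the fraction of the
# held-out class's samples that are still classified as malicious.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def zero_day_dr(X, y_binary, attack_labels, zero_day):
    """y_binary: 1 = attack, 0 = benign; attack_labels: per-sample class name."""
    held_out = attack_labels == zero_day
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[~held_out], y_binary[~held_out])   # zero-day class never observed
    preds = clf.predict(X[held_out])             # only the zero-day samples
    return preds.mean()                          # fraction detected as malicious

# Synthetic toy data: one benign cluster and two statistically similar attacks.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 1, (200, 2)),   # benign
                    rng.normal(5, 1, (100, 2)),   # attack "A" (known)
                    rng.normal(6, 1, (100, 2))])  # attack "B" (zero-day)
y = np.array([0] * 200 + [1] * 200)
labels = np.array(["benign"] * 200 + ["A"] * 100 + ["B"] * 100)

# "B" resembles the known attack "A", so its Z-DR should be high.
print(zero_day_dr(X, y, labels, "B"))
```

Because the held-out class here overlaps statistically with a known attack, the classifier generalises to it; a zero-day class with a very different feature distribution would yield a much lower value, mirroring the Fuzzers and Exploits results.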
In Figure 4, the detection rate of each attack group in the NF-UNSW-NB15-v2 dataset is shown for both the known attack and zero-day attack scenarios. Figures 4a and 4b display the performance of the MLP and RF models, respectively. In this dataset, a large drop in detection rate is noted for the Exploits and Fuzzers attack groups, with an average decrease of 28% and 35%, respectively, for the two ML models in a zero-day attack scenario. For the rest of the attack groups, the ML models successfully detected the attacks; however, DoS and Analysis remained slightly harder to detect, even in a known attack scenario.

Analysis
To investigate the results provided in the previous subsection, especially the low Z-DRs of particular attacks, the feature distribution of each attack group is studied in both datasets. The main objective of this analysis is to find any possible differences between the various ZSL training and testing sets. As such, several statistical measures that could identify dissimilarities between feature distributions were explored. The Wasserstein Distance (WD) metric, which is commonly used in the ML/AI community, has been successfully used in [35] for quantifying feature distribution distances. The (first) Wasserstein Distance, also known as the Earth Mover's Distance, is a distance function between two probability distributions u and v, defined as follows [36]:

W(u, v) = inf_{γ ∈ Γ(u, v)} ∫_{R×R} |x − y| dγ(x, y)    (7)

Here, Γ(u, v) is the set of (probability) distributions on R × R whose first and second factor marginals are u and v, respectively. γ(x, y) can be interpreted as a transport plan that gives the amount of mass to move from each x to y in order to transport u to v, subject to the following constraints:

∫ γ(x, y) dy = u(x),    ∫ γ(x, y) dx = v(y)

This indicates that, for an infinitesimal region around x, the total mass moved out must equal u(x)dx, and similarly, for an infinitesimal region around y, the total mass moved in must equal v(y)dy.
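As an illustration of the metric (not part of the paper's evaluation code), the first Wasserstein Distance between two empirical samples can be computed with SciPy's `wasserstein_distance`. The distributions below are synthetic:

```python
# For two samples from Gaussians with equal scale, the first Wasserstein
# Distance is close to the absolute difference of the means.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
u = rng.normal(loc=0.0, scale=1.0, size=10_000)  # e.g. one feature's values in the training set
v = rng.normal(loc=0.5, scale=1.0, size=10_000)  # the same feature's values in the testing set

print(wasserstein_distance(u, v))  # close to 0.5
print(wasserstein_distance(u, u))  # identical samples -> 0.0
```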
Using WD as the comparison metric, a set of experiments was conducted to investigate the differences in the feature distributions of 9 different zero-day scenarios (one per attack class in each dataset). After selecting the training (D^z_tr) and testing (D^z_tst) sets, the distribution of each data feature (except the flow identifier features removed in the pre-processing stage) was compared between the two sets using the WD metric, i.e., W(D^z_tr, D^z_tst) in the notation of Equation 7. The measurement is performed between the set of known attacks and the set including the zero-day attack, which is the same setup as used in the zero-shot learning and inference phases. Hence, in each zero-day attack scenario, a WD value corresponding to each feature of the dataset was obtained. A higher WD value for a feature indicates a more distinctive distribution between the training (D^z_tr) and testing (D^z_tst) sets of the corresponding zero-day attack. Figures 5a and 5b show the WD value of each zero-day attack scenario, averaged over all features, for the UNSW-NB15 and NF-UNSW-NB15-v2 datasets, respectively. As can be observed, most attacks have a low WD value of around 0.2, which indicates that the overall feature distributions are similar between the training and testing sets in these zero-day attack scenarios. These attacks, which mostly include the Analysis, Backdoor, DoS, Reconnaissance, Shellcode, and Worms classes, are thus statistically similar to the rest of the attacks, and a higher zero-day detection performance is expected. Considering Tables 2, 3, 4 and 5 and the Z-DR values of these attacks, we see that they are indeed detected with a high detection rate in a zero-day attack scenario. Our results show only a minor degradation in their Z-DR values compared to their normal (non-zero-day) detection rate using the same ML model.
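The per-feature comparison and averaging described above can be sketched as follows. This is a minimal sketch under assumptions: the pre-processed sets are held in pandas DataFrames, and the column names are hypothetical, not the datasets' actual feature names.

```python
# Compute W(D^z_tr, D^z_tst) per feature and average over all features,
# as in the analysis above. Feature names below are illustrative only.
import numpy as np
import pandas as pd
from scipy.stats import wasserstein_distance

def mean_feature_wd(d_tr: pd.DataFrame, d_tst: pd.DataFrame) -> float:
    """Average Wasserstein Distance over the features shared by both sets."""
    feats = [c for c in d_tr.columns if c in d_tst.columns]
    wds = [wasserstein_distance(d_tr[f], d_tst[f]) for f in feats]
    return float(np.mean(wds))

# Toy example with two hypothetical features: "pkt_len" is shifted by 1
# between the sets (WD = 1.0), "duration" is identical (WD = 0.0).
d_tr = pd.DataFrame({"pkt_len": [1.0, 2.0, 3.0], "duration": [0.1, 0.2, 0.3]})
d_tst = pd.DataFrame({"pkt_len": [2.0, 3.0, 4.0], "duration": [0.1, 0.2, 0.3]})
print(mean_feature_wd(d_tr, d_tst))  # (1.0 + 0.0) / 2 = 0.5
```

In the paper's setting, this average would be computed once per zero-day scenario, yielding the per-attack values plotted in Figures 5a and 5b.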
The correspondence between the WD values and the Z-DR performance indicates that the similarity of the feature distributions between the zero-day training and testing sets makes it possible for the model to learn useful patterns about the zero-day attack from the features of the known attack classes. Hence, as long as the feature distributions are not significantly different between the training and test sets in a zero-day scenario, the ML-based NIDS is expected to classify the unseen attack classes with a classification performance similar to that achieved had they been seen in the training stage. Such attacks represent a relatively lower risk to organisations adopting an ML-based NIDS, due to the similar statistical patterns that ML models can utilise in their detection.
These findings are further confirmed by the results related to the three remaining attack groups, i.e., Exploits, Fuzzers, and Generic. For the UNSW-NB15 dataset, the high WD values for Fuzzers and Exploits, with an average of 0.5, and for the Generic class, with an average WD value of 1.0, indicate that the overall feature distributions are much more distinctive between the zero-day training and testing sets. Accordingly, we can expect a greater degradation of their Z-DR compared to known attack scenarios, which is confirmed by the results shown in Tables 2 and 3. Similarly, for the NF-UNSW-NB15-v2 dataset, Fuzzers and Exploits stand out with very high average WD values of 0.9 and 1.0, respectively, and their corresponding Z-DR values are much lower than those of the other attack classes, as per Tables 4 and 5. The findings are consistent, since the distinguishing attributes of such attack groups could not be learnt via the seen classes.
Overall, the WD metric has identified several attack groups whose malicious patterns are unique compared to the remainder of the attacks. Therefore, from an ML perspective, their detection as zero-day attacks using an ML-based NIDS will be challenging. This matches the results in this paper, as there is a significant difference between their Z-DR and their known (seen) detection rate. Further studies are required to improve ML-based NIDSs in the detection of the unique attack behaviour of such sophisticated attacks.
While not perfect, our analysis using the Wasserstein Distance (WD) between the feature distributions of the different attack classes provides a solid explanation of the results presented in Sub-Section 5.1 and is consistent with the main findings of this paper.

Conclusion
ZSL is a key technique used to improve an ML model's ability to classify data samples derived from classes that were not accessible during the training stage. In this paper, a ZSL-based methodology has been proposed to evaluate the performance of an ML-based NIDS in the detection of unseen, also known as zero-day, attacks. In the attribute learning stage, the model learns the distinguishing attributes of attack traffic using a set of known attacks. This is accomplished by mapping the network data feature space to the semantic space. In the inference stage, the model is required to associate the learnt attributes with the attack behaviour to detect a zero-day attack that was not observed during the training stage. Using our proposed methodology, two well-known ML models have been evaluated on their ability to detect each attack present in the UNSW-NB15 and NF-UNSW-NB15-v2 datasets as a zero-day attack. The results show that an ML-based NIDS is, somewhat surprisingly, capable of securing organisations against most zero-day attacks to a significant extent.
However, certain attack groups, such as Fuzzers, Exploits, and Analysis, have shown a much lower zero-day detection rate, and the ML model trained on all other attack types was unable to generalise to these attacks. Our analysis using the Wasserstein Distance (WD) metric of the feature distributions is able to explain this to some extent. We observed that the WD metric of the different attack types correlates strongly with their zero-day detection rate: the statistical feature distributions of these attacks are too different from those of the attack types observed during the training stage, which prevents the model from generalising and detecting them in the test phase of the experiment. The ability to generalise and detect new, unseen types of attacks is an important feature of ML-based NIDSs and is critical for increased practical deployment in production networks. This important issue has so far attracted only relatively limited attention in the research literature. We hope that the work presented in this paper provides a basis and motivation for further research into this important aspect.
Here, x represents a data sample (flow) chosen from the training data X_tr or the test data X_tst, and X represents all the data samples. y represents the corresponding label, Y_tr represents the set of class labels observed in the training phase, and Y_tst represents the set of class labels used in the test phase. In traditional machine learning, we have Y_tr = Y_tst, i.e., the set of classes observed during training is identical to the set of classes encountered by the model during testing.
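The distinction between the traditional and zero-day label sets can be illustrated with a toy example; the class names below are hypothetical stand-ins for the datasets' attack groups:

```python
# In the zero-day (ZSL) setting, Y_tst contains a class never observed in
# training, so Y_tr != Y_tst; traditional ML assumes Y_tr == Y_tst.
y_tr = {"benign", "DoS", "Backdoor", "Shellcode"}               # Y_tr
y_tst = {"benign", "DoS", "Backdoor", "Shellcode", "Fuzzers"}   # Y_tst

print(y_tr == y_tst)   # False in the zero-day setting
print(y_tst - y_tr)    # the unseen (zero-day) class
```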

Fig. 2: Venn diagram of the training (D^z_tr) and test (D^z_tst) sets

Fig. 5: Wasserstein distances of feature distributions between the train and test sets, W(D^z_tr, D^z_tst), of each zero-day attack, averaged over all features, a) UNSW-NB15 dataset, and b) NF-UNSW-NB15-v2 dataset

• UNSW-NB15 - A well-known and widely used NIDS dataset in the research community, released in 2015 by the Cyber Range Lab of the Australian Centre for Cyber Security (ACCS). The synthetic dataset is designed using the IXIA Perfect Storm tool to generate benign network activities and premeditated attack scenarios. The dataset contains 49 features extracted by the Argus and Bro-IDS tools and twelve additional algorithms. The dataset consists of 2,218,761 (87.35%) benign samples and 321,283 (12.65%) attack samples, that is, 2,540,044 network data samples in total.
• NF-UNSW-NB15-v2 [34] - A dataset released in 2021 by the University of Queensland, generated by extracting 43 NetFlow-based features from the pcap files of the UNSW-NB15 dataset. The nprobe feature extraction tool is utilised to extract the network data flows from the pcap files. The total number of data flows is 2,390,275, of which 95,053 (3.98%) are attack samples and 2,295,222 (96.02%) are benign.