1 Introduction

During the last decade, secure software management has progressively relied on industrial management processes and guidelines aimed at framing cybersecurity as a production function (Viega and McGraw 2001). Industrial secure software lifecycle processes, such as the Microsoft Security Development Lifecycle (Microsoft 2019), Cigital's Software Security Touchpoints (McGraw 2006), and the Software Assurance Forum for Excellence in Code (SafeCODE 2018), all define requirements for secure software development and make cybersecurity risk assessment one of the pillars of their approach, whether it is applied to the code, the architecture, or the third-party components of a software project.

In this work, we focus on vulnerability assessment as a part of the overall cybersecurity risk assessment process (ISO 2008) and on the use of metrics in the security development lifecycle (Morrison et al. 2018). Our overall goal is to study to what extent the accuracy of the assessment of software vulnerabilities according to a technical methodology depends on the assessor's background knowledge and expertise (MSc students and security professionals). We further aim at analyzing whether such accuracy may vary with respect to different facets of vulnerabilities (e.g. complexity of exploitation vs the need for users to 'click something' for the exploit to succeed) and different facets of expertise (e.g. years of experience vs knowledge of attacks).

A key issue in this respect is the selection of the technical methodology. Some specific approaches for software vulnerability risk assessment have been developed by large corporations (Microsoft 2019), specialized companies (Tripwire 2019), and open source communities (OWASP 2019), but eventually the sector as a whole coalesced around the Common Vulnerability Scoring System (CVSSFootnote 1) (Mell et al. 2007) as a simple, clear and structured methodology that could be fruitfully adopted in cybersecurity risk assessments. CVSS alone only allows scoring some important general characteristics of a vulnerability by evaluating its severity, which only broadly approximates the requirements of a risk metric. A lively debate is taking place in the cybersecurity community on the definition of a CVSS-based risk assessment (Doynikova and Kotenko 2017; Spring et al. 2018; Allodi and Massacci 2014), and we review some of this discussion in Section 5.

Nevertheless, the simplicity of CVSS has made it ubiquitous as a partial solution. For example, if you are a merchant accepting credit cards, your software environment should be free from vulnerabilities with a CVSS base score higher than four (see (PCI-DSS 2018), Testing Procedure 11.2.2.b). This is an arbitrary threshold, far from optimal from the perspective of risk assessment, as recent works have shown (Allodi and Massacci 2014; Jacobs et al. 2019), but easy to define as a standard requirement for a broad industry. Similarly, US Federal agencies leverage CVSS for their software security assessment.Footnote 2

Given the importance, ubiquity and practical impact of the CVSS software vulnerability scoring methodology, it is somewhat surprising that the extent of evaluation errors, and the overall effect of those errors on the final assessment of a vulnerability are still largely untested. The goal of this paper is to provide a first answer in this direction.

For publicly known vulnerabilities, CVSS scores are assigned by experts (e.g. at the FIRST Special Interest Group (SIG) (FIRST 2015)), but even within expert groups differences often arise. Before a vulnerability becomes public, it must still be scored (for example for bounty programs such as BugcrowdFootnote 3), and there is limited empirical evidence on how such scoring is influenced by the competences of the assessor. Our tests revealed that assessment variability could be high, a result that commonly emerges in studies on opinion formation in a pool of experts and that motivated the development of methods for opinion debiasing (Kretz 2018), calibrating (Camerer and Johnson 1991; Lichtenstein et al. 1982), and pooling (Dietrich and List 2017).

Methods

Following a long tradition of controlled experiments, we recruited MSc students and security professionals with the aim of comparing their performance in evaluating vulnerabilities according to CVSS (Acar et al. 2017; Arkin et al. 2005; Katsantonis et al. 2017; Labunets et al. 2017; Workman 2008). Students were divided between those with and without specific security education, whereas professionals have a median of six years of working experience in the cybersecurity field (ranging between two and fifteen years) but no specific security education at the academic level.

CVSS has been selected as the methodology for conducting the experiments because: i) it is the industrial standard and its usage is not reserved for software vulnerability experts; ii) its evaluation criteria are simple and the steps to perform are clearly structured; iii) it is decomposable into 'atomic tasks' corresponding to different technical competences. In this setting, accuracy is determined as the number of correct CVSS scores each participant produces for each evaluated vulnerability, with respect to the scores assigned by the experts of the FIRST Special Interest Group (SIG).

This work focuses on three specific issues:

  1. We first consider the effects that different educational background and practical experience may have on the accuracy of vulnerability evaluation.

  2. Secondly, we consider whether specific vulnerability characteristics (i.e. individual components of the CVSS scores) are more (or less) accurately scored by different types of assessors.

  3. Finally, we consider which facets of professional experience (e.g. years vs knowledge of attacks) yield a more accurate assessment.

The rationale is that, to improve software security management processes, one may not necessarily need full-fledged security expertise. Rather, a general knowledge of the field might just be complemented by specific training.

Summary of Contributions

The experimental task we considered in this paper (i.e., CVSS assessments) has already been recognized as part of the standard risk assessment processes that companies and organizations of all types should carry out as normal management practice. The outcome of our work provides a much-needed measure of the variability of vulnerability assessment scores when assessors' profiles vary across educational background and working experience. In addition, this study answers the call for objective and evidence-based analyses of the quality of software security expert assessments, including cognitive and professional biases. Finally, by recognizing the greater effectiveness of mixed training and education with respect to 'vertical' competences (Joint Task Force on Cybersecurity Education 2017), our work also contributes to the debate concerning the definition of meaningful evaluation methodologies and metrics for advanced education in cybersecurity and software security professional training (McGettrick 2013; Conklin et al. 2014; Santos et al. 2017).

An initial finding is that being competent in security (either through education or experience) improves the overall accuracy of vulnerability scoring. This result confirms similar findings in software engineering studies and for more specific security problems (Edmundson et al. 2013; Acar et al. 2017). In addition, by quantifying this effect, our study lays the basis for future cost/benefit analyses, for example to evaluate investments in security training. On the other hand, we find that, under our experimental settings, experienced security professionals showed no clear advantage over students with a security specialization. This lack of a clear difference between students and professionals has also been detected in previous experiments in software engineering, which have shown that the performance of experts can become similar to that of novices when problems are framed in novel situations for which 'precompiled' rules are not helpful (Singh 2002). An expertise reversal effect has also been observed when experts decided to ignore new instructional procedures in favor of familiar mental schemes (Kalyuga et al. 2003).

The work is organized as follows: Section 2 presents an overview of related work. In Section 3, the study design is described, first by presenting our research questions and some details about the CVSS standard, followed by a description of our sample of participants, the data collection procedure, and the analysis methodology. Section 4 analyzes the results obtained from the experiment, while in Section 5 we discuss the possible consequences for the software security development lifecycle and management that our research may suggest. Finally, some conclusions are presented.

2 Related Work

Software development and security practices. Security principles and practices are increasingly incorporated into software development processes with the improvement of the industry's maturity, the approval of regulations and laws that include severe sanctions for damages caused by inadequate cybersecurity measures, and the diffusion of secure software development guidelines (Colesky et al. 2016; Islam et al. 2011). In Morrison et al. (2017), the authors surveyed security practices in open-source projects. Among others, vulnerability tracking and resolution prioritization turned out to be two of the security practices most often reported as daily practices. On the other hand, for tracking and prioritizing vulnerabilities, as well as for several other surveyed security practices, the authors found that Ease of use varies negatively with Usage. We could probably conclude that tracking and prioritizing vulnerabilities looks easier than it actually turns out to be. This confirms research and analyses (Bozorgi et al. 2010; Allodi and Massacci 2014; 2017) regarding the difficulty of risk-ranking software vulnerabilities, due to the often unclear likelihood and consequences when the assessment has to be specific to a certain organization.

The same survey (Morrison et al. 2017) also provides an anecdotal confirmation of our hypothesis that, up to now, there has been a lack of analytical studies and experiments aimed at evaluating how cybersecurity skills are formed. The survey reports the opinion of a participant, not unusual in cybersecurity professional circles, expressing his/her disdain for security training because it is associated with useless classroom lessons, and suggesting instead to include other hands-on, informal types of training. In our work, we explicitly considered this issue and designed a natural experiment to obtain evidence from students and professionals. Our results do not support the belief that practical experience always makes a better cybersecurity expert than formal education. Morrison et al. also conducted a second, more recent survey (Morrison et al. 2018), this time on security metrics and covering scientific papers, which reveals an unsatisfactory scenario concerning the analysis criteria for software vulnerabilities. In that survey, the authors called the metric related to vulnerabilities the Incident metric and found that papers could be divided into two subgroups: those that focused on quantifying vulnerabilities, a goal more difficult than it may seem (Geer 2015) and a poor inference method for evaluating risk, and those that discussed CVSS. With respect to our work, this survey confirms the prevalence of CVSS as the reference methodology for vulnerability scoring and therefore our motivation for using it in the experiment, despite the limitations that we acknowledge and take into account in our analysis.

Professionalization

Relative to the professionalization of cybersecurity, important issues are still debated, like the definition of the standards needed to establish a curriculum or certification (Burley et al. 2014; Conklin et al. 2014), or the best way for governments to encourage cybersecurity professionalization (Reece and Stahl 2015). These works are connected to ours because, in presenting different initiatives, for example regarding new curricula or the suitability of licenses and certifications, they also witness the scarcity of experimental studies on which skills are most useful, and to what extent, for solving relevant security problems.

Experiments with Students

Among controlled experiments involving students and professionals, some are closely related to our work. In Wermke and Mazurek (2017), a sample of developers recruited from GitHub was asked to complete simple security programming tasks. It turns out that the only statistically significant difference was determined by the years of programming experience, which indicates that familiarity with the task is the main driver, rather than professional or educational level. This is in line with the results in Edmundson et al. (2013), where security professionals did not outperform students in identifying security vulnerabilities. The usability of cryptographic libraries has been studied in Acar et al. (2017). Relevant to our work is the fact that different groups of individuals, with different levels of education or experience, were recruited. They found that participants' programming skill, security background, and experience with the given library did not significantly predict the code's overall functionality. Instead, participants with security skills produced more secure software, although neither as good as expected nor as good as self-evaluated by participants. We have found compatible, although more nuanced, results under very different settings. All these works differ from ours in that they study the performance of individuals with respect to a specific technical security skill or tool, as opposed to studying how education, experience, and the combination of subject skills correlate with accuracy in solving a more general software security problem. One study is closer to ours (Acar et al. 2016), where Android developers' decision-making performance is analyzed with respect to education and experience. The experiment was based on observing how developers with different backgrounds perform when provided with different types of documentation, and it found an appreciable difference between professionals and students.

3 Study Design

3.1 Analysis Goals and Research Questions

In this study we evaluate the effect of different subject characteristics on technical, user and system-oriented, and managerial aspects of a vulnerability assessment. Specifically, our study aims at the following two goals:

Goal 1:

The effect of the assessor's security knowledge on the accuracy of software vulnerability assessments should be evaluated. We should further determine whether such accuracy varies for different facets of a software vulnerability (e.g. the complexity of exploitation or the need for a software user to 'click on something' for the vulnerability to be exploitable).

To address this goal, we distinguish between two broad classes of knowledge: knowledge acquired through formal security education, meaning academic-level specialized security courses, and knowledge acquired through professional experience. In general, saying that an individual exhibits a technical skill means that s/he has acquired an adequate level of proficiency in mastering the technical issue as required by the industry. Since the quality of technical knowledge is extremely variable among industrial sectors, we consider that an individual possesses a certain skill if, for a student, his/her academic curriculum included a corresponding Knowledge Unit (e.g., we consider a student skilled in code security if s/he attended a Secure Programming course), while, for a professional, we relied on the self-evaluations provided in the questionnaire we asked them to fill in before the test. We split Goal 1 into two experimental research questions (i.e., RQ1.1 and RQ1.2), as reported in Table 1.

Table 1 Experimental research questions
Goal 2:

Professional experience has different facets (years of experience, specific knowledge of standards, or attacks, etc.) and we want to understand whether they have an effect on the accuracy of the vulnerability assessment. We also would like to know whether such accuracy varies for different facets of a vulnerability.

In other words, we would like to understand whether, to recognize the severity of a software vulnerability, one needs to be an expert in everything that is security related, or whether only a few things make a difference. A case study of the Boeing Company (Burley and Lewis 2019) showed, for example, that the role of Incident Response Specialist first requires a general knowledge of system security, and only secondly specific knowledge units related to Organisational Security (i.e., Business Continuity, Disaster Recovery, and Incident Management). In contrast, for the role of Network Security Specialist a larger set of knowledge areas is required: in addition to the general knowledge of system security, knowledge units related to Connection, Component, Organizational, Data, and Software Security are included. Also the broad comparison of the available frameworks for the definition of cybersecurity foundational concepts, organizational roles, and knowledge units provided in Hudnall (2019) shows that not all knowledge units are necessary for each skill. This represents a different approach with respect to the provision of a 'standard' and very broad portfolio of security competences suggested by some academic curricula (McGettrick 2013) and industries (Von Solms 2005), which may not fit well with the requirements of modern cybersecurity.

Hence we put forth the idea that the accuracy of a security assessment should be analyzed with respect to the specific facets of experience of the assessor, with a granularity in the definition of technical competences similar to that defined by cybersecurity curricular frameworks like the CSEC or the CAE (Conklin et al. 2018).

Therefore we have the final experimental research question RQ2.1 as presented in Table 1. The very same question could be asked about formal education and our student subjects. However, for privacy reasons we could not collect information on the grades that students obtained in the various courses corresponding to different knowledge units (see Table 3).

3.2 Task Mapping and Vulnerability Selection

The CVSS v3 framework provides a natural mapping of different vulnerability metrics onto aspects of the larger spectrum of security competencies we are considering: technical, user-oriented, and management-oriented. Using CVSS, the assessor performs an evaluation of the vulnerability based on the available information. Table 2 provides a summary of CVSS's Base metrics (columns CVSS and Metric description) with the possible values an assessor could choose (column Values). These are all the metrics used by CVSS to assess a vulnerability, with the exception of Scope.Footnote 4 In addition, we added a short description of the technical skills related to each specific metric (Skill set) and their mapping to the Knowledge Units formally defined by the ACM Joint Task Force on Cybersecurity Education (2017).

Table 2 Summary of considered CVSS v3 Base metrics, mapping to relevant skill sets, and JTF knowledge areas and knowledge units
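To make the relation between the Base metrics of Table 2 and the resulting severity score concrete, the following minimal Python sketch reproduces the CVSS v3.0 base-score equations for the common case of an unchanged Scope (the Scope metric is excluded from our experiment, see Footnote 4). The metric weights are those listed in the CVSS v3.0 specification; the example vector at the bottom is purely illustrative and does not come from our dataset.

```python
import math

# CVSS v3.0 numeric weights for the Base metrics (Scope assumed Unchanged).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},   # Attack Vector
    "AC": {"L": 0.77, "H": 0.44},                          # Attack Complexity
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},               # Privileges Required
    "UI": {"N": 0.85, "R": 0.62},                          # User Interaction
    "C":  {"H": 0.56, "L": 0.22, "N": 0.0},                # Confidentiality impact
    "I":  {"H": 0.56, "L": 0.22, "N": 0.0},                # Integrity impact
    "A":  {"H": 0.56, "L": 0.22, "N": 0.0},                # Availability impact
}

def roundup(x: float) -> float:
    """Ceiling to one decimal place, as required by the CVSS specification."""
    return math.ceil(x * 10) / 10

def base_score(vector: dict) -> float:
    """CVSS v3.0 Base score for a vector of metric values (Scope Unchanged)."""
    w = {m: WEIGHTS[m][vector[m]] for m in WEIGHTS}
    iss = 1 - (1 - w["C"]) * (1 - w["I"]) * (1 - w["A"])
    impact = 6.42 * iss
    exploitability = 8.22 * w["AV"] * w["AC"] * w["PR"] * w["UI"]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Illustrative vector: network-reachable, low complexity, no privileges required,
# no user interaction, high impact on confidentiality, integrity and availability.
print(base_score({"AV": "N", "AC": "L", "PR": "N", "UI": "N",
                  "C": "H", "I": "H", "A": "H"}))  # -> 9.8
```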

For our experiment, 30 vulnerabilities were randomly chosen among the 100 used by the CVSS Special Interest Group (SIG) to define the CVSS standard. We did not consider it relevant to strictly maintain in our reduced sample the same distribution of CVSS scores as the original SIG sample, just as the SIG sample does not reflect the distribution of scores of the whole NVD,Footnote 5 because the distribution of scores does not represent different difficulty levels in evaluating vulnerabilities, nor does it reflect any relevant technical feature that may bias the result of the test (Scarfone and Mell 2009; Allodi and Massacci 2014). In Appendix A, it is possible to find an example of assessment for three vulnerabilities, with their descriptions and the results expressed as the error frequency of test participants (see also Fig. 4 in Appendix A.1).

3.3 Participants and Recruiting Procedure

We followed Meyer (1995) and performed an experiment recruiting three groups of individuals (total n = 73 participants): 35 MSc students with no training in security; 19 MSc students with three to four years of specific training in security; 19 security professionals with a median of six years of professional experience. Some participants knew what CVSS is used for and had seen its scores associated with CVE vulnerabilities,Footnote 6 but none had experience with CVSS v3 vulnerability assessment or knew the specific metrics used to produce the score.

With regard to ethical concerns, no personally identifiable information was collected and participant answers were anonymous. For students, the IRB of the departments involved confirmed that no formal ethical approval was required, and students were informed that participation in the test was voluntary. Participating professionals were informed that their participation was anonymous with respect to information about their professional experience and that their CVSS evaluations were in no way linkable to their identity.

Unfortunately, recruiting subjects with very different profiles makes it hard to control for possible confounding factors; for example, some professionals may have received an education equivalent to that of (a group of) student subjects, or some students may have changed masters during their student career. As these effects are impossible to reliably measure, we explicitly account for the (unmeasured) within-subject variability in the analysis methodology and report the corresponding estimates.

3.3.1 Students

Students participating in our study are MSc students of two Italian universities, both requiring proficiency in English and a background in computer science. The first group, SEC, is enrolled in the Information Security MSc of the University of Milan and has already completed a BSc in Information Security. The second group, CS, is composed of students enrolled in a Computer Science MSc at the University of Trento. SEC subjects were recruited during the Risk Analysis and Management course in the first year of their MSc; CS students were recruited during the initial classes of the course Security and Risk Management, the first security-specific course available in their MSc curriculum. Table 3 provides information about the specific skills acquired by the two groups of students in their BSc programs. Here skills are reported as core Knowledge Units defined according to the categories of the U.S. Center for Academic Excellence (CAE).Footnote 7 Footnote 8 In particular, we see from Table 3 that the two groups of students, CS and SEC, share at least ten core Knowledge Units, representing fundamental computer science competences (e.g., networking, operating systems, programming, etc.). With respect to security Knowledge Units, while CS students do not have any, the SEC students have attended at least five classes dedicated to security fundamentals (e.g., secure design, cryptography, secure networks, etc.). Specific student information, such as exam grades, possibly useful to infer the degree of knowledge of each topic, obviously could not be accessed as the trial was anonymous.Footnote 9

Table 3 Core knowledge units for CS and SEC students

3.3.2 Professionals

Subjects in the PRO group are members of a professional security community led by representatives of the Italian headquarters of a major US corporation in the IT sector. Participants in our study were recruited through an advertisement in the Community's programme for a training course on CVSS v3. Participants in the PRO group have different seniority in security, and all professional profiles focus on security-oriented problems, technologies, and regulations. To characterize PRO experience, we asked them to complete a questionnaire detailing job classification and years of experience, education level, experience in vulnerability assessment, and expertise level in system security/hardening, network security, cryptography, and attack techniques. Of the 19 members of the PRO group, 13 agreed to fill in the questionnaire. No motivation was provided by those who preferred not to disclose any personal information. The median subject in the PRO group has six years of expertise in the security field, and roles comprise Security Analysts, Computer Emergency Response Team members, penetration testers, and IT auditors. A detailed characterization of PRO subjects over the other dimensions is given in Section 4.2.

3.4 Data Collection

Ahead of the experiment, participants attended an introductory seminar on CVSS v3 held by one of the authors. Content and delivery of the seminar were identical for the three groups. The experiment replicates the procedure performed by the security experts represented in the CVSS Special Interest Group (SIG), who perform assessments over a set of vulnerabilities in their organization by relying on vulnerability descriptions and the CVSS v3 official documentation. These assessments are usually performed in two to five minutes each, due to the limited information available to the assessors (Holm and Afridi 2015). Similarly, we asked each participant to complete 30 vulnerability assessments in 90 minutes by relying only on the CVE descriptions, a summary description of the CVSS v3 metrics, and a scoring rubric reporting the standard value definitions for CVSS.Footnote 10 All participants completed the assessment in the assigned time, except for seven students of the CS group.

To evaluate the quality of the assessments, we considered the evaluation of a metric as wrong if it differed from the corresponding evaluation produced by the SIG for the same vulnerability. Then, for each participant and each vulnerability assessed, we counted the number of correct answers and, for the wrong ones, we also kept track of the severity of the errors, which we used for a qualitative evaluation of the errors made by the participants.
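As an illustration, a minimal sketch of this counting step (metric abbreviations as in Table 2; the two assessments below are hypothetical, not taken from our dataset):

```python
CVSS_METRICS = ["AV", "AC", "PR", "UI", "C", "I", "A"]

def count_correct(participant: dict, sig: dict) -> int:
    """Number of CVSS Base metrics on which a participant agrees with the SIG reference."""
    return sum(participant[m] == sig[m] for m in CVSS_METRICS)

# Hypothetical assessments of one vulnerability.
sig_reference = {"AV": "N", "AC": "L", "PR": "N", "UI": "R", "C": "L", "I": "L", "A": "N"}
participant   = {"AV": "N", "AC": "H", "PR": "N", "UI": "R", "C": "L", "I": "L", "A": "N"}
print(count_correct(participant, sig_reference))  # -> 6 correct metrics, 1 error (on AC)
```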

Table 4 reports an example of vulnerability assessment. The answers of one randomly chosen participant from each group are shown together with the reference evaluations produced by the SIG (bottom row). In this particular case, the CS student had all answers wrong and, despite this, declared to be confident in his/her evaluation. The SEC student and the PRO professional, instead, both made one mistake, but exhibited different degrees of confidence in their evaluation.

Table 4 Example of assessment by randomly selected participants for each CS, SEC, PRO groups compared to SIG’s assessment for CVE 2010-3974

3.5 Analysis Methodology

We formalize a CVSS assessment by assuming that there exists a function ai(vj) representing the assessment produced by assessor i ∈ {CS∪SEC∪PRO} of vulnerability v, represented as the vector of CVSS metrics to be evaluated (j ∈ {AV, AC, UI, PR, C, I, A}). We further define a function e(ai(vj)) that detects the error on metric j by assessor i on vulnerability v by comparing the subject's assessment ai∈{CS,SEC,PRO}(vj) with the assessment aSIG(vj) provided by the SIG on the same vulnerability. We observe subjects in our study multiple times (once per vulnerability). As each observation is not independent and subjects may learn or understand each vulnerability differently, a formal analysis of our data requires accounting for the variance in the observations caused by subject (e.g. rate of learning or pre-existing knowledge) and vulnerability characteristics (e.g. clarity of description). To evaluate the effect rigorously, we adopt a set of mixed-effect regression models that account for two sources of variation: the vulnerability and the subject (Agresti and Kateri 2011). The general form of the models is:

$$ g(y^{j}_{iv}) = \boldsymbol{x}_{iv}\boldsymbol{\beta} + \boldsymbol{z}_{i}\boldsymbol{u}_{i} + \boldsymbol{h}_{v}\boldsymbol{k}_{v} + \epsilon_{iv}, $$
(1)

where g(⋅) is the link function, and \(y^{j}_{iv}\) denotes the observation on CVSS metric j performed by subject i on vulnerability v. \(\boldsymbol{x}_{iv}\) is the vector of fixed effects with coefficients β. The vectors ui and kv capture the shared variability at the subject and vulnerability levels that induces the association between responses (i.e. assessment errors on CVSS metric j) within each observation level (i.e. subject i and vulnerability v). 𝜖iv is the leftover error. We report regression results alongside a pseudo-R2 estimation of the explanatory power of the model for the fixed-effect part, as well as for the full model, as specified in Nakagawa and Schielzeth (2013). We report odds ratios (exponentiated regression coefficients) and confidence intervals (via robust profile-likelihood estimations (Murphy and Van der Vaart 2000)) for a more immediate model interpretation. Odds lower than one (with 0 ≤ C.I. < 1) indicate a significant decrease in error rates. These are indicated in Tables 5 and 6 with a ∗ next to the estimate. Borderline results are those whose C.I. only marginally crosses unity, up to 5% (i.e. 0 ≤ C.I. ≤ 1.05).
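As an illustration of how such a crossed random-effects model can be fitted, the sketch below uses the Bayesian mixed GLM available in statsmodels (a variational approximation, shown here only as one possible way to estimate Eq. 1; it is not necessarily the exact estimator used to produce Tables 5 and 6). The file and column names (error, confident, group, vulntype, subject, cve) are hypothetical stand-ins for our dataset.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical long-format data: one row per (subject, vulnerability) pair, with a
# binary error indicator for a single CVSS metric (e.g. AC).
# Assumed columns: error (0/1), confident (0/1), group (CS/SEC/PRO),
# vulntype (Input/Information/...), subject (anonymous id), cve (CVE id).
df = pd.read_csv("assessments_ac.csv")

# Fixed effects: confidence, subject group, vulnerability category.
# Crossed random intercepts: one variance component per subject and one per CVE.
model = BinomialBayesMixedGLM.from_formula(
    "error ~ confident + C(group) + C(vulntype)",
    vc_formulas={"subject": "0 + C(subject)", "cve": "0 + C(cve)"},
    data=df,
)
result = model.fit_vb()   # variational Bayes fit of the mixed-effects logit
print(result.summary())   # fixed effects are on the log-odds scale: exponentiating a
                          # coefficient gives an odds ratio as reported in the tables
```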

4 Empirical Results

Our data collection comprises 2190 assessments performed by 73 subjects over 30 vulnerabilities. We consider an assessment as valid if it is a) complete (i.e., the whole CVSS vector is compiled), and b) meaningful (i.e. the assessment is made by assigning a valid value to each CVSS metric). This leaves us with 1924 observations across 71 subjects. The 244 observations excluded from the dataset correspond to incomplete or invalid records not matching the CVSS specification, and cannot therefore be interpreted for the analysis.

4.1 Effect of Security Knowledge

4.1.1 Assessment Confidence

We start our analysis by evaluating the level of scoring confidence for the three groups for each vulnerability. Table 7 shows the results for the subjects’ reported confidence in the assessments (See Table 4 for an example).

Overall, subjects declared to have been confident in their assessment in 39% (757) of the cases, and non-confident in 48% (922). In the remaining 13% of cases, subjects left the field blank. Looking at the different groups, a significant majority of the scorings in the CS group (64%) was rated as low confidence, while for the SEC and PRO groups approximately 50% were confident assessments. Even when considering 'Blank' confidence as low confidence, the figures for the SEC and PRO groups are statistically indistinguishable (p = 1 for a Fisher exact testFootnote 11), whereas the difference between the CS and SEC+PRO confidence levels is significant (p = 0.017).
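The group comparison above relies on a standard Fisher exact test over a 2×2 contingency table; a minimal sketch follows, with placeholder counts rather than the actual figures behind Table 7:

```python
from scipy.stats import fisher_exact

# Rows: CS vs SEC+PRO; columns: confident vs non-confident assessments
# ('Blank' counted as non-confident). The counts below are placeholders.
table = [[250, 450],   # CS
         [500, 520]]   # SEC + PRO
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```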

4.1.2 Severity Estimations

Whereas technical details may vary significantly between vulnerabilities, for simplicity we grouped the vulnerabilities assessed into six macro-categories whose definitions have been derived from the Common Weakness Enumeration (CWE) as provided by NIST/MITRE:Footnote 12

  • input: vulnerabilities caused by flawed or missing validation (e.g. code injection);

  • information: vulnerabilities regarding system- or process-specific information (e.g. info disclosure);

  • resource access: vulnerabilities granting the attacker access to otherwise unauthorized resources (e.g. path traversal);

  • crypto: vulnerabilities affecting cryptographic protocols or systems;

  • other: vulnerabilities that do not belong to specific CWE classes (taken as is from NVD);

  • insufficient information: vulnerabilities for which there is not enough information to provide a classification (taken as is from NVD).

The mapping has been directly derived from the MITRE CWE classification, and has been performed by one author of this study and independently verified by two others. Table 8 in the Appendix details the mapping between the CWEs in our dataset and the defined categories.

Figure 1 reports how severity estimations of vulnerabilities vary, w.r.t. the reference score computed by the SIG, between the three groups of participants and for each vulnerability category. A positive difference indicates an overestimation (i.e. participants attributed a higher severity score); a negative value indicates an underestimation. We observe that the Cryptographic Issues and Insufficient information categories were perceived as more severe by all participant groups than by the SIG, whereas for the other categories the results are mixed. Following the CVSS v3 specification (FIRST 2015) (Section 8.5), an over- or under-estimation of two points may result in an important mis-categorization of the vulnerability, whereas an error of ± 0.5 points is within accepted tolerance levels. Overall, we find that the experiment subjects' estimations of vulnerability severity are only marginally off with respect to the SIG estimations.

Fig. 1 Distribution of difference in severity estimation by vulnerability type and subject group
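The quantity plotted in Fig. 1, i.e. the per-assessment difference between the severity score derived from a participant's vector and the SIG reference score, can be summarized with a simple aggregation; a sketch under hypothetical file and column names:

```python
import pandas as pd

# Hypothetical per-assessment table with the Base score derived from each participant's
# vector and the corresponding SIG reference score.
df = pd.read_csv("assessment_scores.csv")  # columns: group, category,
                                           # participant_score, sig_score

df["score_diff"] = df["participant_score"] - df["sig_score"]  # >0 over-, <0 under-estimation
summary = df.groupby(["category", "group"])["score_diff"].describe()[["mean", "50%", "std"]]
print(summary)  # deviations within +/-0.5 are tolerable; +/-2 may change the severity class
```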

4.1.3 Assessment Errors

In Fig. 2 we take a more detailed look at the scoring errors of the three groups by considering the specific CVSS metrics rather than the total score computed by CVSS for a vulnerability. We first evaluate the sign and the size of the errors. With regard to the sign of an error, consider for instance the PR metric, which can take three values (High, Low, None; see Table 2). Assuming that the SIG attributed the value Low for a certain vulnerability, if a participant selects High the error is an overestimation (positive error, +1); if he or she selects None it is an underestimation (negative error, -1). Errors may also have different sizes, which depend on the specific metric and the specific SIG evaluation. In the previous example, the size of the error is at most 1. However, for a different vulnerability the SIG could have assigned High to the PR metric. In that case, if a participant selects Low the result is a negative error of size 1 (i.e., -1), while if s/he selects None the error size is 2 (i.e., -2), with different consequences on the overall scoring error for the vulnerability.

Fig. 2 Distribution of assessment errors over the CVSS metrics
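A minimal sketch of this sign-and-size computation, using the value orderings of the example above (only the metrics whose ordering is unambiguous in our discussion are listed; metric abbreviations as in Table 2):

```python
# Ordinal ranking of metric values used to compute signed errors
# (e.g. for PR: None < Low < High, as in the example above).
METRIC_ORDER = {
    "PR": ["N", "L", "H"],
    "C":  ["N", "L", "H"],
    "I":  ["N", "L", "H"],
    "A":  ["N", "L", "H"],
}

def signed_error(metric: str, participant_value: str, sig_value: str) -> int:
    """Positive = overestimation, negative = underestimation, 0 = correct answer.
    The absolute value is the size of the error."""
    order = METRIC_ORDER[metric]
    return order.index(participant_value) - order.index(sig_value)

print(signed_error("PR", "H", "L"))  # +1: overestimation of size 1
print(signed_error("PR", "N", "H"))  # -2: underestimation of size 2
```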

Given this computation of the errors' sign and size, we observe that the frequency of large errors (defined as errors with size greater than 1) is small. This indicates that, in general, subjects did not 'reverse' the evaluation by completely missing the correct answer (e.g. assessing a High Confidentiality impact as None), a situation that might have led to a severely mistaken vulnerability assessment. Whereas a detailed analysis of error margins is outside the scope of this study, we observe that, overall, most subjects in all groups showed a good grasp of the task at hand.

The large error rates we observe on certain metrics (between 30% and 60% of tests, depending on the group of respondents and the metric, as discussed in the following) are mostly produced by errors of size 1. Error rates of this magnitude are to be expected in similar experimental circumstances (Onarlioglu et al. (2012) find errors in the 30-40% range over a binomial outcome); considering that the participants in our experiment were explicitly selected for having no previous experience in CVSS assessment, the limited amount of time, and the fact that the CVE description was the only technical documentation available, this rate of small errors is unsurprising.

Overall, we observe that there is a clear difference in accuracy between the security-unskilled CS and the security-skilled SEC+PRO for all metrics. This is particularly evident in the AV, AC and PR metrics, and in all CIA impact metrics. This effect is also present in the UI metric, but here the CS and SEC students perform similarly, whereas professionals in the PRO group achieve higher accuracy. As UI depends specifically on a user's interaction with the vulnerable component, the greater operative and domain-specific experience of the PRO group may explain this difference (e.g. regarding the appearance of warning dialogs on a certificate error). We observe an overall tendency toward over-estimating PR and UI, and under-estimating AC, which may indicate that relevant information for the assessment of these metrics is missing, a relevant problem already noted in the industrial sector as well (see for example the recent 'call for action' from NIST (2018)). Conversely, the difference between SEC students and PRO professionals seems less pronounced, if present at all. The tendency of the error does not appear to differ meaningfully between groups, indicating no specific bias toward over- or underestimation.

As each metric has a different set of possible values, to simplify the interpretation of results, we here consider the binary response of presence or absence of error in the assessment. We define a set of regression equations for each CVSS metric j of the form:

$$ g(e_{vi}^{j}) = c + \beta_{1} CONF_{vi} + \boldsymbol{\beta}_{2} \boldsymbol{GROUP}_{i} + \boldsymbol{\beta}_{3} \boldsymbol{VULNTYPE}_{v} + \ldots $$

where g(⋅) is the logit link function, \(e_{vi}^{j}\) is the binary response on presence or absence of error on metric j for subject i and vulnerability v, and \(\boldsymbol{GROUP}_{i}\) and \(\boldsymbol{VULNTYPE}_{v}\) (with coefficient vectors \(\boldsymbol{\beta}_{2}\) and \(\boldsymbol{\beta}_{3}\)) represent respectively the vector of subject groups (CS, SEC, PRO) and the vector of vulnerability categories.Footnote 13

Table 5 reports the regression results. We conservatively consider assessments with a ‘Blank’ level of confidence (ref. Table 7) as non-confident. Effects for the group variables SEC and PRO are with respect to the baseline category CS. We report the estimated change in odds of error and confidence intervals of the estimation.

Table 5 Effect of security education on odds of error

In general, our results show that subjects with security knowledge, i.e. SEC+PRO, produce significantly more accurate assessments than subjects with no security knowledge, i.e. CS, on all metrics. Overall, SEC+PRO is between 30% and 60% less likely than CS to make an error.


It is interesting to note that the SEC group tends to perform better than CS on metrics requiring technical and formal knowledge of system properties, for example to correctly evaluate the complexity of a vulnerability exploit. Whereas both groups are acquainted with concepts such as Confidentiality, Integrity, and Availability, the formal application of these concepts to the security domain provides a clear advantage in terms of accuracy for the SEC group when compared to CS students. Security knowledge appears to have a less decisive effect on network (AV) and access (PR) aspects; whereas training on networks is common to both groups (ref. Table 3), the application of security aspects appears to be beneficial, albeit only marginally. Perhaps more surprisingly, one would expect assessments on the PR metric to benefit from knowledge of access control and policies (foundational aspects of security design taught to SEC, ref. Table 3). Yet, this difference appears to be only marginal, suggesting that other factors, such as experience with software and systems, may fill the educational gap between the two groups in this respect.


These findings indicate that the professional expertise that characterizes the PRO group does not necessarily improve the accuracy of the assessment over that of subjects with security knowledge but limited or no professional expertise. PRO appears to have a slight advantage over SEC for the AV metric; in line with the findings on SEC vs CS, this again underlines the importance of experience in applying general concepts like networking to the security domain when performing security tasks.

The effect of confidence on the assessment is relevant for the CIA impact metrics, indicating that a significant source of uncertainty may emerge from the effect of the vulnerability on the system. Interestingly, we found that some vulnerability types (Information and Resource access) are likely to induce errors on the A metric, suggesting that specific knowledge or expertise may be needed to discern, for example, between information and service availability. By contrast, the vulnerability category Input is related to a significant reduction in error rates for UI; this is expected, as Input vulnerabilities generally require user interaction to provide input to an application, such as opening an infected file or clicking on a rogue link. Similarly, Resource Access significantly reduces errors on the PR metric, as vulnerabilities of this category explicitly involve considerations on the attacker's existing permissions on the vulnerable system. We did not find other specific effects of vulnerability categories on the measured outcomes, suggesting that the results are largely independent of the specific vulnerability types.

The variance by subject (Var(c|ID)) and by vulnerability (Var(c|CVE)) indicates that the intercept of the model may vary significantly for each observation (i.e. both different subjects and different vulnerabilities have different 'baseline' error rates). This is interesting because it indicates that neither the subject variables (GROUPi) nor the vulnerability variables (VULNTYPEv), while significant in explaining part of the observed error, could fully characterize the effect. For example, specific user characteristics or the thoroughness of the vulnerability description may play a substantial role in determining assessment accuracy. Along the same line, it is interesting to observe that the overall explicative power of the model is relatively small for all the considered metrics. This can be expected for random processes in natural experiments where the environment cannot be fully controlled by the experimenter (Agresti and Kateri 2011) (as exemplified by the variance explained by the full model as opposed to that of the fixed effects); still, the small R2 values for the fixed-effect parameters suggest that the sole presence of security knowledge, even when confounded with assessment confidence and vulnerability type, does not satisfactorily characterize the observations. This further supports the idea that other subject-specific characteristics may drive the occurrence of an error. We investigate this in the following.

4.2 Effect of Subject Characteristics

To analyze the results in finer detail, we turned to the answers to the questionnaire that characterizes the PRO subjects, described in Section 3.3.2. This allowed us to focus on the target group of professionals who eventually perform such analyses in the real world (Salman et al. 2015).

The median subject in the PRO group has six years of professional expertise in the security field, in a range between two and 15 years (μ = 5.79, σ = 3.83).

Figure 3 reports the distribution of the levels of each measured variable. All factors are reported on an ordinal scale (with the exception of CVSS experience, for which we have a nominal scale), codified in levels \(1\rightarrow 3\), where Education: 1=High School; 2=BSc degree; 3=MSc degree. Previous CVSS experience: 1=None; 2=Yes; 3=Non-CVSS metric. System security/Network Security/Cryptography/Attacks: 1=Novice; 2=Knowledgeable; 3=Expert (a fourth level, 'None', is not reported, as no participant rated him or herself less than novice on any of these dimensions). Most subjects obtained at least a BSc degree. From discussion during the initial CVSS training it emerged that none of the participants in the PRO group had a formal specialization in security at the university level. The group is evenly split between participants who do and do not have previous experience in vulnerability measurement (earlier versions of CVSS or other methods); most participants rated themselves as 'Knowledgeable' or 'Expert' in Network Security, and are equally split between the levels 'Novice' and 'Knowledgeable or Expert' for all other variables.

Fig. 3 Education and expertise profile of professionals in the PRO group

To evaluate the effect of subject characteristics on the odds of error, we considered the following. First, the subject distribution is skewed toward the presence or absence of expertise or education rather than being meaningfully distributed across all levels. For example, most subjects attended university, with only a handful interrupting their studies after high school; similarly, few subjects rated themselves as 'Expert' in any dimension, with most subjects being either 'Novice' or 'Knowledgeable' on the subject matter. We therefore collapsed the levels to 'novice' or 'not novice' to represent this distinction. Secondly, some subject characteristics may show high levels of correlation: for example, subjects competent in system security are likely to be competent in network security as well. Similarly, highly educated professionals may be (negatively) correlated with years of experience (as more time would be spent on one's studies than on the profession). We checked for multicollinearity problems by calculating the Variance Inflation Factor of the categorical variables defined above, and dropped the variables that showed evidence of correlation. Education, CVSSExp, NetSec, and Crypto have been dropped because they are highly correlated with other factors. This broad correlation shows that the current expectation at the professional level is for experts to have a broad spectrum of skills, as we discussed above for RQ2.1 (McGettrick 2013; Von Solms 2005). As a result we have kept: years (of experience), (knowledge of) attacks, and (knowledge of) system security. We then define the following regression equation:

$$ g(e_{vi}^{j}) = c + \beta_{1} Years_{i} + \beta_{2} Attacks_{i} + \beta_{3} SysSec_{i} + \boldsymbol{\beta} \boldsymbol{VULNTYPE}_{v} + \ldots $$
(3)
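A minimal sketch of the multicollinearity screening described above (performed before fitting Eq. 3), applied to dummy-coded questionnaire variables; the file and column names are hypothetical, and the VIF threshold of 5 is a common rule of thumb rather than a value prescribed by our analysis:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical questionnaire data, already collapsed to binary 'novice'/'not novice' levels.
pro = pd.read_csv("pro_questionnaire.csv")  # columns: years, education, cvss_exp,
                                            # syssec, netsec, crypto, attacks

# Dummy-code the categorical variables and add a constant term.
X = sm.add_constant(pd.get_dummies(pro, drop_first=True).astype(float))
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.drop("const"))  # predictors with a large VIF (e.g. > 5) are candidates for removal
```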

Table 6 reports the results. In general, we observe that not all expertise dimensions are relevant for all metrics. This is to be expected as, for example, knowledge of attack techniques may have an impact on evaluating attack complexity, but may make little difference for other more system-oriented aspects, like requirements on user interaction.

Table 6 Effect of subject characteristics on odds of error in the PRO group
Table 7 Confidence assessments for the groups

Results for vulnerability type are qualitatively equivalent to those reported for the evaluation by group in Table 5. Interestingly, the overall explanatory power of the model (accounting for both fixed and random effects) remains satisfactory, and the subject characteristics are clearly effective in explaining the variance for most metrics. The only low (< 10%) fixed-effect R2 value is for AV, and it can be explained by the low incidence of error in this metric, which may then simply be driven by random fluctuations. This contrasts with, for example, the AC metric, which is characterized by a high variability in error (ref. Fig. 2) and for which more than 20% of the variance is explained by the measured fixed effects. This is in sharp contrast with the results in Table 5, where most of the variance was absorbed by the random effects.

5 Discussion

Implications for Software Security Lifecycle and the Cybersecurity Job Market

That information security knowledge significantly improves the accuracy of a vulnerability assessment comes as no surprise. However, the actual magnitude of the improvement and the relation between the skill set of the assessors and the production of reliable security assessments are oftentimes left uncertain. In other words, the employability and relevance of security skills is seldom empirically investigated, and is more often left to anecdotes or to political discourse.

According to our study, the gain produced by security knowledge appears remarkable: security experts (SEC and PRO groups) show error rates reduced by approximately 20% (see Fig. 2). A second result appears by looking at the average confidence declared by participants: not only does assessment accuracy improve with knowledge, but so does confidence in the assessments. In fact, the unskilled students in CS are mostly not confident, while the skilled participants SEC+PRO declare higher confidence.

What we also observe is that the combination of skills explains most of the subjects' variance. This is another observation often made anecdotally, but seldom empirically tested so that it can be translated into operational policies and tools useful to better support software development and management, or to define recruiting and training plans.

Given the growth of cybersecurity competence areas and the increasing segmentation of technical skills, there is an increasing need for a better knowledge of how professional skills should be mixed for accurate security assessments, particularly in the software engineering domain, where a relatively narrow skill-set is oftentimes available. On the cybersecurity job market and in corporate human resource procedures, profiles for Technical Specialists are commonly identified and sought. Those represent vertical definitions of narrowly correlated skills, often tied to a certain technology. Much less common are profiles with horizontal definitions of skills bringing together more heterogeneous competences, despite the recurrent calls for more transversal technical skills. To this end, a recent study (Van Laar et al. 2017) has surveyed a large number of works to understand the relation between so-called 21st-century skills (Binkley et al. 2012) and digital skills (van Laar et al. 2018). One observation made by that survey is that, while digital skills are moving towards knowledge-related skills, they do not cover the broad spectrum of 21st-century skills. These observations are consistent with ours: there is a lack of analytical studies regarding the composition and the effect of the workforce's skills, and a better knowledge of transversal skill compositions may lead to appreciable improvements in the accuracy of security assessments and in software development.

Beyond Base Scores and Towards Full Software Risk Assessment

The result of our experiments is that evaluating the CVSS Base metrics given a software vulnerability description is difficult in practice but potentially viable, given the clear meaning of the metrics and the limited set of admissible answers.

A problem on a different scale of complexity is to produce a risk-based assessment of a software vulnerability with respect to the specific operational context of the software (e.g., including the software's technical environment, the organization's characteristics, industry sector, geographical position, market, and geopolitical scenario). Even for a purely technical analysis, many more aspects must be considered, and in particular the CVSS Environmental and Temporal metrics, which are aimed at modifying the scores assigned with the Base metrics. These additional metrics consider the importance, to the assessor's organization, of the IT asset affected by a vulnerability, and the vulnerability lifecycle phase (e.g., whether or not the vulnerability is patched or an exploit has been released) (FIRST 2015).

Running an experiment with the CVSS Environmental metrics would be an experiment in itself, as it would introduce a further confounding factor: the choice of the concrete software deployment scenario. A preliminary experiment has been reported in Allodi et al. (2017), where students were given the Base metrics of a set of vulnerabilities and asked to identify the Environmental metrics in a number of credit card compliance scenarios (Williams and Chuvakin 2012). However, significantly more analyses are needed before it can be concluded whether a software deployment scenario is a valid empirical benchmark. Evaluating the relevance of IT assets for an organization, and correctly estimating the current state of exploit techniques or the uncertainty of a vulnerability definition, requires not only data sources (difficult to obtain, maintain, and update), but also involves business strategies and corporate decisions that are hard to formalize manually.

Hence, one question that is likely to arise from the error rates, the uncertainty in assessment quality, and the intricate dependencies between assessors' profiles and their performance, is whether this problem could be a good candidate for automation, possibly supported by artificial intelligence (AI) techniques.

The idea of mitigating the uncertainty of human-driven security assessments through AI techniques appears to be gaining traction in the cybersecurity field, with several attempts to automate security decisions, from software development to maintenance and deployment (Conti et al. 2018; Morel 2011; Buczak and Guven 2016). With respect to vulnerability assessments, few attempts have been made so far at automating CVSS Base metric scoring. To the best of our knowledge, the most developed one is currently undertaken by NIST, which aims to employ AI-based automatic techniques to support the analysts in charge of deciding CVSS scores for the NVD (Marks 2018); it is still unclear which accuracy levels have been achieved so far. Furthermore, the applicability of unsupervised models to a wide range of cybersecurity issues remains an open issue, particularly for new projects for which only a few (and probably biased towards certain classes of bugs) 'ground truth' data points are available. With respect to the problem we are considering in this work, automatic AI-based solutions seem still far from practical utility at the moment.

Governance, Risks, and Compliance

Another important and still overlooked problem that arises from the empirical measurement of vulnerability assessments regards compliance with regulations. In the EU, both the GDPR (2016b) and the NIS Directive (2016a) require systematic risk assessments and adequate risk management processes. Sanctions can be imposed by the EU on organizations with poor or insufficient procedures in case of security breaches involving data loss. In the same vein, ENISA, the EU agency for information security, lists as priorities: risk management and governance, threat intelligence, and vulnerability testing (ENISA 2017). However, having observed how difficult it currently is to produce consistent and accurate vulnerability assessments, and how those assessments depend on professional skills that are seldom analyzed, questions inevitably arise about how mandatory security risk assessments are performed. Our study suggests that spending more effort on systematic analyses of the workforce's security skills, not just as vertical specializations, could benefit the security sector typically called "Governance, Risk, and Compliance", to which many consulting companies, IT auditors, and experts of IT processes belong.

6 Threats to Validity

We here identify and discuss Construct, Internal, and External threats to validity (Wohlin et al. 2012) of our study.

Construct

The application of the CVSS methodology as a security assessment task can only provide an approximation of the complexity and variety of real world scenarios in the software engineering domain. On the other hand, CVSS offers a single framework involving abstract as well as technical reasoning on (security) properties of a software artifact, engaging the different skills needed in the field (see Table 2 for reference). The specific vulnerabilities used for the assessment may bias specific competences over others; since there is no reference distribution of vulnerabilities in specific software projects (e.g. a web application likely has very different software vulnerabilities from the underlying webserver), we control for possible noise by breaking the analysis down over the single CVSS dimensions, and by accounting for the effect of the specific vulnerability types on the observed outcomes.

Internal

Subjects in all groups were given an introductory lecture on vulnerability assessment and scoring with CVSS prior to the exercise. The exercise was conducted in class by the same lecturer, using the same material. Another factor that may influence subjects' assessments is the learning factor: "early assessments" might be less precise than "late assessments". All subjects performed the assessment following a fixed vulnerability order. We address possible biases in the methodology by considering the within-observation variance on the single vulnerabilities (Agresti and Kateri 2011). Further, the use of students as subjects of an experiment can be controversial, especially when the matter of the study is directly related to a course that the students are attending (Sjøberg et al. 2003). Following the guidelines given in Sjøberg et al. (2003), we made clear to all students that their consent to use their assessment for experimental purposes would not influence their student career, and that the assessments would only be used to provide feedback to the CVSS SIG to improve the scoring instructions.

A potential limitation of our approach is that we consider the CVSS SIG assessment as the ground truth. This may introduce some bias, as some CVSS evaluations of CVE vulnerabilities performed by the SIG have been criticized by the security community. However, for the purpose of establishing a benchmark for CVSS evaluation in industry, the SIG is considered authoritative (as we mentioned in the introduction, at least by the credit card companies, the energy companies, the US Federal government, and several other governments). This is a partially unsatisfactory answer but, in the absence of a more solid scientific alternative, it provides a benchmark for the evaluations that secure software experts are expected to reach by their industry peers. Even challenging the credit card companies' assessment (which some of us did in Allodi and Massacci (2014)) would pose the question of how we know that one assessment is better than the SIG's assessment. More analysis is needed on building a ground truth that does not depend on pooling expert judgments and its limitations (Dietrich and List 2017).

External

A software engineer investigating a vulnerability in a real scenario can take into account additional information beside the vulnerability description when performing the analysis. This additional information was not provided in the context of our experiment. For this reason we consider our accuracy estimates as conservative (worst-case). On the other hand, the limited number of participants in the SEC and PRO groups, and the difficulty associated with recruiting large sets of professionals (Salman et al. 2015), call for further studies on the subject. At the end of the day, we have only experimented with fewer than twenty professionals. Specific high-complexity security and software engineering tasks may require highly specialized expertise, for which the diversity of tasks accounted for in our analysis may not be representative. Different settings may also have an impact on the applicability of our results; for example, repetitive operations or operations without strict time constraints may stress different sets of skills or competences. Similarly, our results have limited applicability for professionals with limited security experience, or with significantly different skill-sets and expertise from those we employed in this study.

7 Conclusions

A reliable software vulnerability assessment process is instrumental for software risk prioritization, for secure software development, and in general for a full risk assessment process and IT governance. With this work, we aimed at understanding to what extent software vulnerability assessments can be expected to be consistent and accurate, and how the results of the assessments are related to the skills of the assessor.

As the testing methodology, we chose CVSS because it is the industrial standard for scoring vulnerability severity and has a relatively simple structure. We conducted a natural experiment with three groups of individuals having different technical skills and professional profiles. Even experienced professionals in the security field may produce evaluations with high variance, in some cases not dissimilar to the evaluations produced by students trained in security. This behavior is similar to what is traditionally discussed in the scientific literature about pooling expert opinions and about expert performance on some software engineering problems. Moreover, we could observe how the accuracy of vulnerability assessments is related to the skills, and combinations of skills, of the assessors.

Overall, our work suggests some further directions with a high potential for practical impact. The first is that more analytic and empirical studies are needed, focused on measuring software vulnerability assessment accuracy as part of the risk assessment process. Just saying 'we follow an industry standard' is not enough to warrant accurate assessment results according to that very standard. The lack of empirical tests has been overlooked so far, but with the mounting pressure on organizations for better cybersecurity management and the liability deriving from recent regulations, we believe it is no longer possible to dismiss it. On the contrary, the results of the software vulnerability assessments used by companies should always be complemented with an analysis of their accuracy. Together with the need for measuring software vulnerability assessment accuracy, organizations should better manage the training of the workforce, not only with respect to vertical specializations, but also with respect to the transversal skills that are so often claimed to be needed. This is challenging for human resource departments and educational institutions, but with a better understanding of the relation between skills and performance, it could be achieved.

Future work to improve the practical impact of vulnerability assessment is to extend the study to the full set of CVSS metrics (FIRST 2015), including the Temporal and Environmental metrics, aiming at capturing the ability of assessors to evaluate the concrete operational environment and the vulnerability lifecycle. We are particularly interested in cooperating with other researchers to replicate our study in different national and educational contexts, as the results might have important policy implications for university education in software security and eventually for cybersecurity in the field.