Security measures on devices are often seen as restrictive and obtrusive by end-users, potentially limiting users’ ability to perform tasks. To circumvent these measures, users may engage in behaviours which are deemed to be risky, placing their devices at risk of compromise.
This section explores previous research, highlighting risky security behaviours users may inadvertently engage in, and users’ perception of risk. Previous attempts at educating the end-user are then discussed, before the concept of affective feedback is proposed as a possible method of educating the end-user.
2.1 Risky Security Behaviour
What constitutes risky behaviour is not necessarily obvious to all end-users and can be difficult to recognise. In the context of a browser-based environment there are multiple examples of behaviour which could be perceived as risky, e.g., creating weak passwords/sharing passwords with colleagues [2, 3], downloading data from unsafe websites [4] or interacting with a website containing coding vulnerabilities [5].
Attempts have been made to categorise user behaviours which could be classified as risky, including a 2005 paper by Stanton et al. [2]. Following interviews with security and IT experts, and a study of US end-users across a range of professions, a taxonomy of six behaviours was defined: intentional destruction, detrimental misuse, dangerous tinkering, naïve mistakes, aware assurance and basic hygiene.
Padayachee [6] discussed compliant security behaviours whilst investigating whether some users have a predisposition to adhering to security policy. The resulting taxonomy highlighted elements with the potential to influence users’ security behaviours, i.e. extrinsic motivation, identification, awareness and organisational commitment. The paper acknowledges that the taxonomy does not present a complete overview of all possible motivational factors regarding compliance with security policies; despite this, it may provide a basis from which companies could start to improve the security education of employees.
Weak passwords are associated with poor security behaviour, and a trade-off exists between the usability of passwords and the level of security they provide [3]. Whilst exploring the issue of security hygiene, Stanton et al. [2] touched on the subject of passwords, noting that 27.9% of participants wrote their passwords down and 23% revealed their passwords to colleagues. Others have explored the usability of passwords and acknowledged the difficulties end-users can experience in choosing a password; it was determined that “length requirements alone are not sufficient for usable and secure passwords” [7].
Another risky behaviour category relates to how users perceive technology flaws, e.g. vulnerability to XSS attacks or session hijacking. Social engineering can also be considered to fall into this category: e.g. an attacker could potentially clone a profile on a social networking site and utilise the information to engineer an attack against a target (e.g. via a malicious link) [5]. Such attacks can be facilitated by revealing too much personal information on social networking sites [8].
A paper by Milne et al. [9] also investigated risky behaviours and compared these with self-efficacy in a web-based study of 449 participants. During the survey, participants were asked whether they had engaged in specific risky behaviours online, drawn from previous research into risky behaviours [10, 11]. The paper concludes that different types of online behaviour are exhibited depending on the demographic and self-efficacy of the end-user.
Specific behaviours users were asked about in the survey included using private email addresses to register for contests on websites, selecting passwords consisting of dictionary words, and accepting unknown friends on social networking sites. The most common risky behaviour was allowing the computer to save passwords, which 56% of participants admitted to.
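The dictionary-word and weak-password habits described above can be illustrated with a small sketch. The word list, length threshold, and check names below are invented for demonstration and do not come from any of the cited studies:

```python
# Illustrative sketch: flagging some of the risky password habits noted
# in the surveys above. COMMON_WORDS and the length threshold are
# arbitrary choices for demonstration, not values from the literature.

COMMON_WORDS = {"password", "dragon", "monkey", "sunshine", "letmein"}

def is_risky_password(password: str, min_length: int = 8) -> list[str]:
    """Return the reasons a password would count as risky."""
    reasons = []
    if len(password) < min_length:
        reasons.append("too short")
    if password.lower() in COMMON_WORDS:
        reasons.append("dictionary word")
    if password.isalpha() or password.isdigit():
        reasons.append("single character class")
    return reasons

print(is_risky_password("dragon"))
# → ['too short', 'dictionary word', 'single character class']
print(is_risky_password("c0rrect-h0rse-battery"))  # → []
```

A real checker would draw on far larger cracking dictionaries and entropy estimates; the point here is only that the behaviours surveyed are mechanically detectable.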
Whilst there have been a number of attempts to categorise risky security behaviours, users may also lack an accurate perception of risk.
2.2 Perception of Risk
A number of research papers have explored techniques to gauge the perception of risk. Farahmand et al. [12] explored the possibility of using a psychometric model originally developed by Fischhoff et al. in 1978 [13] in conjunction with questionnaires, allowing users to reflect on their actions and providing a qualitative overview of their risk perception.
Takemura [14] also used questionnaires to measure perception of risk when investigating factors determining the likelihood of workers complying with information security policies defined within a company. Participants were asked a hypothetical question regarding whether or not they would implement an anti-virus solution on their computer if there was a risk of being infected by a virus. Results revealed that 52.7% of users would implement an antivirus solution if the risk was only 1%; however, 3% of respondents still refused to implement antivirus even when the risk was 99%. This displays a wide range of attitudes towards risk.
San-José and Rodriguez [15] used a multimodal approach to measure perception of risk. In a study of over 3000 households with PCs connected to the internet, users were given an antivirus program which scanned their machines on a monthly basis. The software was supplemented by quarterly questionnaires, allowing levels of perception to be measured and compared with virus scan results. Results showed that the antivirus software created a false sense of security: users were unaware of how serious certain risks could be.
In a different study, Hill and Donaldson [16] proposed a methodology integrating models of behaviour and perception. The research attempted to assess system administrators’ perception of security, and created a trust model to reduce the threat from malicious software. The methodology engaged system administrators in developing the threat-modelling process and quantified the risk of threats, essentially creating a triage system for dealing with issues.
Understanding the level of risk perception a user possesses can help identify the best methods to educate users regarding security behaviour.
2.3 Tools to Educate End-Users
Since there is the potential for end-users to inadvertently engage in behaviours deemed risky, many tools have been developed to help users.
Furnell et al. [17] conducted a study in 2006, to gain an insight into how end-users deal with passwords. The survey found that 22% of participants said they lacked security awareness, with 13% of people admitting they required security training. Participants also found browser security dialogs confusing and in some cases, misunderstood the warnings they were provided with. The majority of participants considered themselves as above average in terms of their understanding of technology, yet many struggled with basic security.
Much of the research into keeping users safe online and educating them about risky security behaviour revolves around phishing attacks. Various solutions have been developed to educate users about the dangers of phishing, with the view that education will reduce engagement in risky security behaviours.
Dhamija and Tygar [18] proposed a method enabling users to distinguish spoofed websites from genuine sites. A Firefox extension was developed providing users with a trusted window in which to enter login details. A remote server generates a unique image used to customise the web page the user is visiting, whilst the browser detects the image and displays it in the trusted window, e.g. as a background image on the page. Content from the server is authenticated via the Secure Remote Password protocol. If the images match, the website is genuine, providing a simple way for users to verify its authenticity.
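The matching step in this scheme can be sketched in simplified form. In the actual proposal the comparison is visual, performed by the user, and the channel is authenticated with SRP; the hash comparison below is an illustrative stand-in for the idea that only a site holding the per-session image can produce a match:

```python
# Highly simplified sketch of the image-matching idea: the trusted
# window's image and the page's image must agree before the site is
# treated as genuine. Hashing stands in for the user's visual check.

import hashlib

def image_fingerprint(image_bytes: bytes) -> str:
    """Digest of the rendered image, used as a comparable fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def site_is_genuine(trusted_image: bytes, page_image: bytes) -> bool:
    # A spoofed site cannot reproduce the unique per-session image,
    # so its fingerprint will not match the trusted window's.
    return image_fingerprint(trusted_image) == image_fingerprint(page_image)

session_skin = b"unique-per-session-image-bytes"
print(site_is_genuine(session_skin, session_skin))   # → True
print(site_is_genuine(session_skin, b"spoofed"))     # → False
```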
Sheng et al. [19] tried a different approach to reducing risky behaviour, gamifying the subject of phishing with a tool named Anti-Phishing Phil. The game involves a fish named Phil who must catch worms whilst avoiding those on the end of fishermen’s hooks (the phishing attempts). The study compared three approaches to teaching users about phishing: playing the Anti-Phishing Phil game, reading a tutorial developed by the authors, or reading existing online information. After playing the game, 41% of participants viewed the URL of the web page to check whether it was genuine. However, the game produced some unwanted results in that participants became overly cautious, producing a number of false positives during the experimental phase.
PhishGuru is another training tool, designed by Kumaraguru et al. [20] to discourage people from revealing information in phishing attacks. When a user clicks on a link in a suspicious email, they are presented with a cartoon message warning them of the dangers of phishing and how they can avoid becoming a victim. The cartoon proved effective: participants retained the information after 28 days, and the training did not cause them to become overly cautious.
Similarly, an Android app called NoPhish has been developed to educate users about phishing on mobile devices [21]. The game features multiple levels where users are presented with a URL and asked whether it is a legitimate link or a phishing attempt. In a study conducted after playing the game, participants gave significantly more correct answers when asked about phishing. A further study was conducted five months later; participants still performed well, although their overall performance had decreased.
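The kind of URL scrutiny such tools train users to perform can be sketched as a set of heuristics. The specific rules below are illustrative assumptions, not NoPhish's actual checks:

```python
# Sketch of common phishing-awareness URL heuristics. The rules and
# their wording are invented for illustration.

from urllib.parse import urlparse

def suspicious_signals(url: str) -> list[str]:
    """Return heuristic warning signs found in a URL."""
    signals = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        signals.append("no HTTPS")
    if host.replace(".", "").isdigit():
        signals.append("raw IP address instead of a domain name")
    if host.count(".") >= 3:
        signals.append("many subdomains (brand name may be buried)")
    if "@" in url:
        signals.append("'@' can hide the real destination")
    return signals

# A classic lure: the brand name appears, but only as a subdomain.
print(suspicious_signals("http://paypal.com.secure-login.example.net/update"))
# → ['no HTTPS', 'many subdomains (brand name may be buried)']
```

Real phishing pages routinely pass such simple checks, which is one reason the cited studies focus on training human judgement rather than automated filtering alone.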
Besmer et al. [22] acknowledged that various applications may place users at risk by revealing personal information. A tool was developed and tested on Facebook to present a simpler way of informing the user about who could view their information. A prototype user interface highlighted the information the site required, optional information, the profile data the user had provided and the percentage of the users’ friends who could see the information entered. The study showed that those who were already interested in protecting their information found the interface useful in viewing how applications handled the data.
In addition to security tools which have been developed to target privacy issues on social networking sites, studies have also focused on more general warning tools for the web. A Firefox extension developed by Maurer [23] attempts to provide alert dialogs when users are entering sensitive data such as credit card information. The extension seeks to raise security awareness, providing large JavaScript dialogs to warn users, noting that the use of certain colours made the user feel more secure.
More recently, Volkamer et al. [24] developed a Firefox add-on called PassSec in an attempt to help users detect websites which provide insecure environments for entering a password. The extension successfully raised security awareness and significantly reduced the number of insecure logins.
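Conceptually, the core check behind such an add-on is whether a page asks for a password that will travel over an unencrypted connection. The following sketch uses Python's standard HTML parser as a simplified stand-in for the add-on's actual page inspection:

```python
# Simplified sketch: flag password fields whose enclosing form would
# submit over plain HTTP. This is an illustration of the concept, not
# PassSec's implementation.

from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class PasswordFormChecker(HTMLParser):
    def __init__(self, page_url: str):
        super().__init__()
        self.page_url = page_url
        self.form_action = None
        self.insecure_forms = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            # Resolve the form target relative to the page URL.
            self.form_action = urljoin(self.page_url, attrs.get("action", ""))
        elif tag == "input" and attrs.get("type") == "password":
            target = self.form_action or self.page_url
            if urlparse(target).scheme != "https":
                self.insecure_forms.append(target)

checker = PasswordFormChecker("http://example.org/login")
checker.feed('<form action="/do-login"><input type="password" name="pw"></form>')
print(checker.insecure_forms)  # → ['http://example.org/do-login']
```

A production tool must also handle scripted form submission and mixed-content pages, which is where much of the real engineering effort lies.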
Despite the number of tools created to help protect users online, users continue to engage in risky security behaviour. These tools span a number of years, yet users still require security education, suggesting that a different approach is needed when conveying information to end-users. The ongoing research described here explores the use of affective feedback as a method of educating the end-user and raising security awareness.
2.4 Affective Feedback
Affective computing is defined as “computing that relates to, arises from, or deliberately influences emotions” [25]. Types of affective feedback include specific text or phrases, and avatars with subtle facial cues. Such feedback has previously proved beneficial in educational environments [26, 27, 28].
Several methods can be employed to inform users that they are exhibiting risky behaviour. Ur et al. [29] investigated ways in which feedback could be given to users in the context of choosing a more secure password. The research found that users could be influenced to increase their password security if terms such as “weak” were used to describe their current attempt. Colour was also used as a feedback factor: as test subjects entered passwords, a bar meter was shown next to the input field. Depending upon the complexity of the password, the meter displayed a scale ranging from green/blue for a strong password to red for a simplistic, easy-to-crack password. The affective properties of colour were highlighted by Osgood and Adams in 1973 [30]; colours such as red signify danger in Western culture. Data gathered from the experiments showed that the meters had an effect on users, prompting them to choose stronger passwords.
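The meter described above can be sketched as a mapping from a crude strength score to a label and an affectively loaded colour. The scoring rules, thresholds, and labels below are invented for demonstration and are far simpler than the meters Ur et al. actually studied:

```python
# Illustrative password meter in the spirit of the study above: a crude
# score drives both a verbal label ("weak") and a colour, exploiting the
# affective associations of red vs. green. All rules are invented.

def password_score(password: str) -> int:
    score = min(len(password), 12)              # reward length, capped
    if any(c.isdigit() for c in password):
        score += 3                              # digits present
    if any(c.isupper() for c in password):
        score += 3                              # mixed case
    if any(not c.isalnum() for c in password):
        score += 3                              # symbols
    return score

def meter(password: str) -> tuple[str, str]:
    """Map a password to a (label, colour) pair for the feedback bar."""
    score = password_score(password)
    if score < 10:
        return ("weak", "red")
    elif score < 15:
        return ("fair", "amber")
    return ("strong", "green")

print(meter("cat"))          # → ('weak', 'red')
print(meter("Tr0ub4dor&3"))  # → ('strong', 'green')
```

The affective element is the pairing itself: the same score could be reported as a bare number, but the red/green colouring and the word “weak” are what the study found nudged users towards stronger choices.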
Multimedia content such as colour and sound can also be used to provide feedback to the user. In a game named “Brainchild” developed by McDarby et al. [26], users must gain control over their bio-signals by relaxing. To help users relax, an affective feedback mechanism has been implemented whereby the sounds, colours and dialogues used provide a calming mechanism.
Textual information provided via the GUI can be used to communicate feedback to the user. Dehn and Van Mulken [31] conducted an empirical review of ways in which animated agents could interact with users, comparing the role of avatars and textual information in human-computer interaction. It was hypothesised that textual information provided more direct feedback to users; however, avatars could be used to provide more subtle information via gestures or eye contact. Ultimately, it was noted that multimodal interaction could provide users with a greater level of communication with the computer system.
Previous research has indicated that affective feedback could be utilised when aiding users in considering their security behaviour online, since it can detect and help users alter their internal states [26]. Work conducted by Robison et al. [27] used avatars in an intelligent tutoring system to provide support to users, noting that such agents have to decide whether to intervene when a user is working, to provide affective feedback.
Hall et al. [28] concur with the notion of using avatars to provide affective feedback, indicating that they influence the emotional state of the end-user. Avatars were deployed in a personal, social and health education environment to educate children about bullying. Studies showed that the avatars produced an empathetic effect in children, indicating that the same type of feedback could potentially achieve the same result in adults.
2.5 The Relationship Between Security Behaviour, Education, and Affective Feedback
Although a number of security tools have been designed to help the end-user, people are still falling victim to online attacks, suggesting that a different approach is required. The ongoing research discussed in the following sections explores the application of affective feedback in a browser-based environment, in an attempt to raise the security awareness of end-users.