A few years ago, one of our senior managers began bringing his corporate laptop into the cafeteria at lunchtime. Typically, he’d find an empty table, set down the laptop, and then walk out of sight to get his lunch. As he perused the salads and main courses, made selections, and paid for his food, his laptop sat unattended in plain view of hundreds of people using the large cafeteria.

My security team noticed the neglected laptop and pointed it out to me. I discussed the issue with the manager a few times, but he continued leaving the laptop unattended. So eventually, I began taking the laptop and leaving my business card in its place.

Not surprisingly, the manager became somewhat annoyed. “Nobody’s going to steal the laptop because there are all these people around,” he said.

“Okay,” I responded. “I’ll never take your laptop or complain again on one condition. If you really trust everybody here, you’ll take off your wedding ring and leave it on top of the laptop. If you do that, you’ll never hear from me again.”

He thought about this for a while. Then he said, “You made your point.” And he never again left the laptop unattended.

The Shifting Perimeter

This incident helped crystallize in my mind a new perspective about how we should approach information security. It occurred soon after we transitioned from desktop to laptop computers within Intel, and it demonstrated how each person’s daily decisions can affect the risk dynamics of the company overall.

The traditional enterprise security paradigm, often expressed in castle-and-drawbridge terms, described a wall of technology that isolated and completely protected the workers behind it. To protect our people and information assets, we focused our efforts on fortifying the network perimeter and the physical perimeter of our buildings.

Today, however, a growing number of user interactions with the outside world bypass the physical and network perimeters and the security controls these perimeters offer. They take place on external web sites and social networks, on laptops in coffee shops and homes, and on personal devices such as smartphones. The laptop left unattended in the cafeteria was clearly inside the physical perimeter, but the corporate information it contained was still potentially at risk due to the manager’s actions.

This changing environment doesn’t mean the security perimeter has vanished. Instead, it has shifted to the user. People have become part of the perimeter. Every day, users make decisions that can have as much impact on security as the technical controls we use. Do I leave my computer unattended or not? Do I post this information online or not? Do I install this software on my device? Do I report this suspicious-looking e-mail? When I’m in a coffee shop, do I connect to the corporate infrastructure via a secure virtual private network, or do I engage directly over the Internet?

We could view each of these decisions purely in terms of the potential for increased risk. However, there’s also a positive side. If users become more aware of security and make better decisions, they can strengthen the organization’s defenses by helping identify threats and prevent impact.

Therefore, as information security professionals, we are in the behavior modification business. Our goals include creating a more security-conscious workforce so that users are more aware of threats and vulnerabilities and make better security decisions. Furthermore, we need to influence employees’ behavior both within the workplace and when they are home or traveling.

If the manager was comfortable leaving his laptop unattended in our cafeteria, would he also leave it unattended at the local coffee shop? At the airport? Or somewhere else where the risk of loss was even greater? My belief is he probably would. When trying to influence this person’s behavior, I wanted to achieve more than a level of compliance. I wanted to initiate a feeling of commitment.

The term compliant behavior implies making the minimum effort necessary to achieve good performance to a predefined standard. It’s like checking boxes on a list of security compliance items. Ultimately, employees feel they are being compelled to follow someone else’s list of instructions. Because of this, compliance requires supervision and policing, and employees may sometimes engage in lengthy recreational complaining. If employees are simply following a checklist, what happens when they encounter a situation that’s not on the list? They stop and await further instructions, or perhaps they are even unaware of the threat or ignore it.

In contrast, committed behavior is intrinsically motivated and self-directed. Being committed implies that people are emotionally impelled to invest in security—they take responsibility and ownership. When people feel committed, they tend to deliver above and beyond the bare minimum. Rather than simply following a predefined list of instructions, they are empowered to make decisions and judgment calls in real time, with a focus on how their actions affect others as well as themselves.

If we can create this sense of commitment in our users, we can implement security not as a wall but as a collective security force that permeates the entire organization. Individually and as a group, every person in the corporation uses their skills in security to protect the organization, handling known attacks today as well as quickly adapting to new threats tomorrow.

When I needed to influence the manager’s behavior, I looked for a way to establish this level of commitment. I sought to change the way he felt about the laptop, and to do this I tapped into his emotional connection to his wedding ring.

Creating a culture of self-motivated commitment rather than compliance can make a big difference, as shown in studies by management guru Dov Seidman. His group looked at behavioral differences between businesses with a culture of self-governance, in which an organization’s purpose and values inform employee decision-making and behavior, and those with a culture of blind obedience based on command-and-control and coercion. Organizations based on self-governance experienced three times more employee loyalty and half as many incidents of misconduct, compared with organizations based on blind obedience (Seidman 2011).

The implications for enterprise security are clear. As the boundaries between personal and corporate computing dissolve, employees may be accessing information from any location, on any device. If users behave in an insecure way while they are in the office, it’s likely they will also exhibit insecure behavior when they’re elsewhere. Conversely, if we can create a feeling of commitment that causes them to own responsibility for security, there’s a better chance they will behave more securely both within the workplace and when they are outside our physical perimeter. This change in behavior improves the security of the device they are using, the information they are accessing, their personal lives, and the enterprise.

Examining the Risks

Before discussing ways that we can modify user behavior, I’d like to briefly mention an example of what can happen if we don’t influence the ways that users think and act.

As an experiment, the US Department of Homeland Security secretly dropped disks and thumb drives in the parking lots of government and private contractors’ buildings. Their goal was to see whether people would pick them up and plug them into their computers. As reported by Bloomberg News (Edwards et al. 2011), up to 60 percent of the people who picked up the items inserted them into their office computers. That number rose to 90 percent if the item included an official-looking logo. Clearly, the security behavior of employees at these facilities left quite a bit to be desired.

If that’s what happens with technologies that have been around for decades, think about what can happen with newer, more sophisticated exploits. Today, threats may arrive in the form of carefully crafted personalized communications designed to win the trust of targeted users. These users then unwittingly provide access to the information the attackers want.

Let’s say a company is looking to hire a credit analyst with a very specific set of skills. Attackers notice this and apply online, using a résumé that lists the exact skills required for the job and contains the terms the company’s résumé-scanning software is likely to be looking for. Suitably impressed, the company’s human-resources specialists forward the application to the company’s credit-department manager, who has access to all the systems storing customer financial data. The manager trusts this communication because it’s sent from another department within the same company. So she clicks on the link to the résumé. Unfortunately, that action triggers the execution of malicious code. The human-resources team effectively acts as an infection agent, ensuring the attack reaches its real target.

Careless behavior outside the enterprise can create other risks. In a blog post, software engineer Gary LosHuertos (2010) described how, while sitting in a café, he used a freely available packet-sniffing tool to obtain the social media identities of around 40 people who were using the café’s Wi-Fi network. Some of these people remained logged into the social media site even after he sent them messages informing them that he had just collected their login information. As he explained, a compromised account doesn’t just provide access to the social media site; it can also be used to perform social engineering attacks and gain access to a wide range of other resources.

Social media accounts can become sources of risk even when they haven’t been compromised. Users frequently post information on external social-media sites that attracts the attention of competitors or the media. To boost their job prospects, interns mention product features they helped develop during their summer job at a well-known company; sales representatives reveal the names of major clients; even senior executives have been known to unintentionally disclose key corporate strategies. In fact, services exist that specialize in aggregating apparently minor snippets of information from social-media and other web sites to build an accurate view of a company’s size, geographical distribution, and business strategy, including hiring patterns that indicate whether the company is expanding and which new areas it is moving into.

Adjusting Behavior

To counter these new risks, we need to make employees aware and empowered, so they act as an effective part of the security perimeter.

At Intel, we have focused for several years on building security and privacy protection into the corporate culture, getting employees to own responsibility for protecting enterprise and personal information. Achieving this has required a lot of effort, and we’ve realized that it takes just as much work to maintain a culture of security and privacy as to build it.

Training is a key part of our efforts. We have found training is particularly effective when general security training, which fulfills most legal requirements, is supplemented by targeted training. Some roles and job duties pose a greater risk to data than others; employees who have access to sensitive information receive specialized courses that focus on their specific needs.

We’ve found that another effective technique is to embed security and privacy training into business processes. When an employee requests access to an application that handles sensitive information, they are automatically prompted to take training that focuses on the related security and privacy concerns. We have also moved toward online training, including video and other visually stimulating material as well as entertaining, interactive tools to help engage users. In the “Find the Phish” game, for example, users learn how to spot fake web sites designed to lure them into revealing personal information (see Figure 5-1).

Figure 5-1. Intel’s internal “Find the Phish” interactive training tool helps employees spot web scams. Source: Intel Corporation, 2012
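
As a rough illustration of how such a business-process hook might work, the following Python sketch gates an access request on completion of a role-specific course. The application names, course names, and one-year validity period are hypothetical placeholders, not Intel’s actual tooling or policy.

from datetime import datetime, timedelta

# Hypothetical mapping of sensitive applications to the course that must be
# completed before access is granted.
REQUIRED_TRAINING = {
    "customer-finance-db": "privacy-and-data-handling",
    "hr-records": "handling-employee-data",
}

# Hypothetical training records: user -> {course: completion date}.
COMPLETED = {
    "jdoe": {"privacy-and-data-handling": datetime.now() - timedelta(days=30)},
}

TRAINING_VALIDITY = timedelta(days=365)  # assume an annual refresh is required


def handle_access_request(user, application):
    """Grant access only if the role-specific training is current;
    otherwise point the requester to the training first."""
    course = REQUIRED_TRAINING.get(application)
    if course is None:
        return "granted"  # no targeted training tied to this application

    completed_on = COMPLETED.get(user, {}).get(course)
    if completed_on and datetime.now() - completed_on < TRAINING_VALIDITY:
        return "granted"
    return "pending: complete '%s' training before access is provisioned" % course


if __name__ == "__main__":
    print(handle_access_request("jdoe", "customer-finance-db"))    # granted
    print(handle_access_request("asmith", "customer-finance-db"))  # pending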

However, it is not enough to create good training. If nobody takes the training, the effort is wasted. We have found incentives such as public recognition, combined with a training and awareness campaign, can help ensure employees undergo training and absorb the lessons. We promote training through positive messages, sometimes associated with themes such as online scavenger hunts. Ultimately, if people continue to avoid security training, we escalate compliance efforts by directly contacting them and their managers.

We’ve also found we can help maintain and increase awareness by publishing security-related articles on Intel’s primary employee portal (see Figure 5-2 and the sidebar article). Many of these articles include a personal aspect, such as preventing identity theft, keeping children safe online, and securing home wireless networks. We believe this personal aspect helps continuously reinforce our connection with employees and helps them more easily absorb the messages. The focus on personal concerns is also a recognition that the way employees behave outside the office is as important to enterprise security as their behavior in the office. These articles are also a good way to keep employees abreast of trends such as the growth of fake antivirus software. Other articles remind people of basic security issues, such as why it’s important to avoid downloading applications from the Web, clicking mysterious attachments, and using weak passwords.

Figure 5-2. Some of Intel’s internal security-related articles for employees include a personal aspect, to encourage secure behavior both within and outside the workplace. Source: Intel Corporation, 2012

How to Make Sure No One Can Access Your Intel or Personal Data on Your Smartphone

Ensure private information and Intel IP are properly protected with these simple steps

Story by: Secure Intel

Ahh, the good old days—when the only way to access your Intel e-mail account was through your laptop or desktop PC. But now, you can read corporate e-mail on your personal handheld device. Gone are the late call-ins to meetings because you had to wait for the computer holding the conference-bridge details to boot up.

But with new conveniences come new necessary precautions. Because Intel e-mail is now available on our personal smartphones, it’s important to take steps to protect the personal information and intellectual property we may be carrying on our handheld devices. How can you go about doing this? Easy:

  1. Set a password on your iPhone (or other smartphone).

  2. “Remote wipe” your smartphone if it’s ever lost or stolen.

Just as we take steps to protect information on our laptops by installing encryption, we need to safeguard data on our smartphones. Password protection offers the first line of defense. But if you leave your phone in a restaurant or on a plane, a “remote wipe” can help increase the chances that your personal information—and Intel’s intellectual property—doesn’t fall into the wrong hands.

The Payoff

How do we know our security efforts pay off? We’ve accumulated a variety of evidence. This includes independent benchmark results from the Information Risk Executive Council (2011), which indicate that over the last five years, Intel has consistently ranked in the top 10 percent of companies for secure employee behavior.

We also experience laptop loss rates that are substantially lower than industry averages. Among more than 300 companies studied by the Ponemon Institute, the average rate of lost and stolen laptops ranged from 5 to 10 percent over a laptop’s three-year lifespan. For the past several years, Intel’s laptop loss rates have consistently been much lower than this, below 1 percent annually. I attribute this largely to our employees’ level of commitment and sense of responsibility, in addition to the fact that we allow reasonable personal use of the laptop, as I’ll discuss later.

We continue to observe examples of employees acting as part of the security perimeter. Recently, dozens of users alerted us to a suspicious text message they’d received via their personal or corporate smartphones. They thought the message looked odd and was potentially fraudulent. At the time, we didn’t know whether this message was an exploit specifically targeting Intel, or a more widespread scam aimed at taking advantage of consumers in general. In a sense, it didn’t really matter, because a compromised personal environment can affect the security of the enterprise. Because we require employees to register their personal and corporate mobile devices, we have a database of all these devices, so we were able to send an alert to all of these users warning them of the problem.
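
As a simplified sketch of how a device registry enables that kind of broadcast, the following Python fragment walks a hypothetical registry and pushes a warning to each registered device; the registry contents and the notification channel are placeholders rather than a description of Intel’s actual systems.

# Hypothetical device registry: employee -> registered device identifiers.
DEVICE_REGISTRY = {
    "jdoe": ["jdoe-personal-phone", "jdoe-corp-phone"],
    "asmith": ["asmith-corp-phone"],
}


def broadcast_alert(message, send):
    """Send a security alert to every registered device and return the count.

    `send` is whatever notification channel is available (SMS gateway, push
    service, e-mail); it is passed in so the sketch stays generic.
    """
    count = 0
    for devices in DEVICE_REGISTRY.values():
        for device in devices:
            send(device, message)
            count += 1
    return count


if __name__ == "__main__":
    sent = broadcast_alert(
        "Ignore the text message claiming your bank account is locked.",
        send=lambda device, msg: print("alert -> %s: %s" % (device, msg)),
    )
    print("%d alerts sent" % sent)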

This incident also illustrates how intruders are shifting to exploit new areas of vulnerability. As e-mail filtering improves, threats move to less-protected newer channels such as phone texting and instant messages.

In another less serious case, our human-resources group wanted to survey a broad cross section of Intel employees to gather their opinions of the company. They hired an outside agency, which dutifully e-mailed the survey to thousands of employees. Within minutes, our help desk phones lit up, as employees called to say they were receiving suspicious e-mails from outside the company. Administrative coordinators warned teams not to open the messages, and my security group began blocking the e-mails. Soon after this, we received an anguished call from the frantic manager who had funded the survey. Though this was frustrating for HR, it helped validate our security awareness work. After we had invested significantly in awareness campaigns, the employees’ responses provided supporting evidence that we really had influenced behavior.

Roundabouts and Stop Signs

To try to reduce driving accidents at a dangerous curve in Chicago, the city painted a series of white lines across the road. As drivers approached the sharpest point of the curve, the spacing between the lines progressively decreased, giving the drivers the illusion they were speeding up, and nudging them to tap their brakes. The result was a 36 percent drop in crashes, as described by Richard Thaler and Cass Sunstein in their book Nudge (Yale University Press, 2008).

This traffic-control method succeeded in making drivers more aware and improving safety while keeping the traffic flowing with minimum disruption. I think this example provides a useful metaphor for information security. Some security controls are like stop signs or barriers: we simply block access to technology or data. But if we can shape the behavior of employees rather than blocking them altogether, we’ll allow employees, and therefore the company, to move faster.

To use another traffic metaphor, a roundabout at an intersection typically results in more efficient traffic flow than an intersection with stop signs, because drivers don’t have to come to a complete halt. The roundabout increases drivers’ awareness, but they can proceed without stopping if the way is clear. Studies have shown that roundabouts are often safer than intersections controlled by stop signs or traffic signals.

Of course, we need to block access in some situations, such as with illegal web sites. But there are cases where it’s more efficient and productive to make users aware of the risks, yet leave them empowered to make the decisions themselves. For example, it might make sense to warn users working in certain countries that material they are about to access may be considered unacceptable there. Here’s a hypothetical example. A US employee traveling on business might be working in the local office of a country with strict religious guidelines. The employee has a daughter who’s in a beauty pageant—so it would be natural to check the pageant web site from time to time. But the images could be considered offensive in that country, so it makes sense to warn the employee to exercise caution. At Intel, we’ve found that when we warn users in this way about potentially hazardous sites, the vast majority heed the warnings and don’t access the web sites.
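
A minimal sketch of this “roundabout” approach, assuming a web proxy that classifies sites by category: clearly prohibited categories are blocked outright, while risky-but-permissible categories trigger a warning the user can acknowledge and proceed past. The categories and policy shown are illustrative only.

# Illustrative web-site categories and the action a proxy might take for each.
BLOCK, WARN, ALLOW = "block", "warn", "allow"

POLICY = {
    "illegal-content": BLOCK,
    "known-malware": BLOCK,
    "locally-sensitive": WARN,   # acceptable at home, risky in some regions
    "personal-webmail": WARN,
    "general": ALLOW,
}


def decide(category, user_acknowledged_warning=False):
    """Return the proxy's decision for a request in the given category."""
    action = POLICY.get(category, WARN)  # default to a warning for unknown categories
    if action == WARN and user_acknowledged_warning:
        return ALLOW  # the user was made aware of the risk and chose to proceed
    return action


if __name__ == "__main__":
    print(decide("locally-sensitive"))                                  # warn
    print(decide("locally-sensitive", user_acknowledged_warning=True))  # allow
    print(decide("illegal-content"))                                    # block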

In the case of information security, there’s an additional benefit of making controls as streamlined as possible. We all know if controls are too cumbersome or unreasonable, users may simply find ways around them.

We kept this concern in mind when developing a social media strategy at Intel IT (Buczek and Harkins 2009). We were well aware of the risks associated with social media, but attempting to stop the use of external social media web sites would have been counterproductive and, in any case, impossible. We realized that if we did not embrace social media and define ways to use it, we would lose the opportunity to shape employee behavior.

As part of our initial investigation into this area, we conducted a social media risk assessment. We found social media does not create new risks, but can increase existing ones. For example, there’s always been a risk that information can be sent to inappropriate people outside the organization. However, posting the same information on a blog or forum increases the risk by immediately exposing the information to a much wider audience. We also determined that we could reduce risk by implementing social media tools within the organization.

So we developed a social media strategy that included several key elements. We deployed internal social media capabilities, such as wikis, forums, and blogs. Initially, these were mostly standalone tools, and employees used them mainly to connect socially rather than for core business functions. Since then, our use has evolved to include more enterprise-focused tools, and we have integrated the tools into line-of-business applications to achieve project and business goals. We’ve also added social media tools tailored for specific business groups, such as a secure collaboration solution used by design teams to simplify real-time sharing of confidential project information across geographically dispersed teams.

As we designed our internal social media capabilities, we also worked with Intel’s human-resources groups to develop guidelines for employee participation in external social media sites. Intel then developed an instructional video that was posted externally on a public video-sharing site. The video candidly explains Intel’s goals and concerns, as well as providing guidance for employees. It explains that Intel wants to use social media to open communications channels with customers, partners, and influencers, to encourage people to adopt the technology, and to close the feedback loop. The video also includes guidance on how to create successful content, along with general usage guidelines such as the need to be transparent, respect confidentiality, distinguish between opinion and fact, and admit mistakes.

We also use technology to help ensure that employees follow the guidelines. We monitor the Internet for posts containing information that could expose us to risks, and we also monitor internal social media sites to detect exposure of sensitive information and violations of workplace ethics or privacy.
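
Such monitoring can start as simply as scanning posts for markers of sensitive material and routing matches to a human reviewer. The following sketch uses hypothetical keyword patterns; production tooling would be considerably more sophisticated.

import re

# Hypothetical markers suggesting a post may expose sensitive information.
SENSITIVE_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\bunreleased\b", re.IGNORECASE),
    re.compile(r"\bcustomer list\b", re.IGNORECASE),
    re.compile(r"\bcode\s?name\s+\w+", re.IGNORECASE),
]


def flag_post(author, text):
    """Return a review ticket if the post matches any sensitive marker, else None."""
    hits = [p.pattern for p in SENSITIVE_PATTERNS if p.search(text)]
    if not hits:
        return None
    return {"author": author, "matched": hits, "excerpt": text[:80]}


if __name__ == "__main__":
    post = "Demoed the unreleased features to everyone on the customer list today!"
    print(flag_post("intern42", post))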

The Security Benefits of Personal Use

When it comes to technology consumerization, information security specialists tend to focus on the security risks. As I discussed earlier in the book, we’ve found that the productivity benefits easily outweigh the risks. But even the security implications are not as one-sided as they might seem at first glance. I believe that, in some respects, allowing personal use may actually encourage better security.

In general, people are likely to take better care of their own possessions than someone else’s. They feel a stronger connection to their own car than to one provided by their employer. If people are using their own computing device, they may take better precautions against theft or loss. And they may feel the same way if they are storing personal information on a corporate device. At Intel, we allow reasonable personal use of corporate laptops, and therefore many employees store personal as well as corporate information on their laptops. Because of this, they have a personal stake in ensuring the devices don’t get lost or stolen.

I believe this sense of ownership contributes to our lower-than-average laptop loss rates. And recently, another company’s experience provided some empirical evidence supporting this idea. The company conducted a computing tablet pilot deployment in which, for the first time, it allowed personal use of corporate devices. At the end of the pilot, the company found that breakage and loss rates were dramatically reduced compared to its past experience with mobile devices. The CIO’s conclusion was that employees simply take better care of devices when they use them for personal purposes. Due to the lower loss rates, the company saved money.

It may also be worthwhile to reexamine other assumptions about the security implications of personal devices. Some companies have policies forbidding the use of cameras in their offices. However, a smartphone includes a camera that employees can use to capture the off-the-cuff design sketches often scrawled on whiteboards during brainstorming sessions. This intellectual property can then be stored on a hard drive within the enterprise and encrypted. Is it safer to allow employees to photograph the image, or to copy it onto a piece of paper, or to leave it on the whiteboard where anyone might see it? Companies may come to different conclusions, depending on their culture and appetite for risk. But this is another illustration of the importance of considering all the possible business benefits, as well as the risks when making technology decisions.

Sealing the Gaps

Many organizations, including Intel, use disk encryption on laptops to protect data in the event the laptop is lost or stolen. Adoption of disk encryption accelerated when states began passing privacy protection laws and the consequences of data theft grew. However, with some disk encryption software, the latest data isn’t encrypted until the user shuts down the PC or puts it into hibernate mode. If users simply put the PC into standby by closing the lid, the system may contain recently created data that is still unencrypted and vulnerable. If the PC is stolen at that point, the thief still has to penetrate the usual login access controls, but that’s much easier than figuring out how to decrypt the data.

I realized many IT professionals were unaware of this when I spoke at a CIO conference soon after the first major privacy legislation was passed. I asked the audience how many of them had deployed disk encryption. Most raised their hands. I asked how many had experienced lost or stolen laptops since deploying encryption. Again, nearly all raised their hands. Then I asked how many of them had established a process for evaluating the state of a lost system to determine if the data on it was truly encrypted. This time, nearly all the hands stayed down.

I then explained why some laptops might contain unencrypted data, and asked how many of the audience thought they should issue a breach notification. At this point there was a silence, followed by a buzz of activity as attendees rushed off to make calls to security specialists at their companies.

When our security group analyzed this data encryption issue, we decided that we needed to be careful about how we addressed it. We wanted to ensure data on laptops was protected, but we didn’t want to disrupt the users’ experience by forcing them to shut down their laptops more frequently, and then endure the subsequent lengthy reboots. So we adjusted the system settings to initiate encryption whenever the laptop was left unused for a specific length of time. Now, if a laptop is lost or stolen, based on the time that elapsed since the employee last used it, we can determine the likelihood that it contains unencrypted data. While making this change to technical security controls, we also increased our efforts to educate employees about secure behavior.
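
The assessment logic can be sketched in a few lines, under the assumption that the laptop returns to a fully encrypted state after a fixed idle period; the 30-minute threshold below is illustrative, not Intel’s actual setting.

from datetime import datetime, timedelta

# Assumed policy: the laptop returns to a fully protected state after this
# much idle time. The value is illustrative only.
IDLE_ENCRYPTION_DELAY = timedelta(minutes=30)


def may_contain_unencrypted_data(last_used, reported_lost):
    """Estimate whether a lost laptop could still hold unencrypted data.

    If less time than the idle-encryption delay elapsed between last use and
    the loss, recently created data may not yet have been protected.
    """
    return (reported_lost - last_used) < IDLE_ENCRYPTION_DELAY


if __name__ == "__main__":
    last_used = datetime(2012, 6, 1, 12, 0)
    print(may_contain_unencrypted_data(last_used, datetime(2012, 6, 1, 12, 10)))  # True
    print(may_contain_unencrypted_data(last_used, datetime(2012, 6, 1, 14, 0)))   # False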

The IT Professional

So far, in discussing the people perimeter, I’ve focused mainly on the security roles of end users. But let’s not forget that IT professionals are also a part of the people perimeter, and that their actions can have major positive or negative effects.

IT professionals manage almost every element of the technology spanning our networks, data centers, and users’ computing devices. They develop and install software. They configure, administer, and monitor systems. Their actions or inaction can make the difference between a system that is vulnerable and one that is reasonably secure.

Servers, which are typically managed by IT professionals, are still the IT assets most commonly attacked and robbed of data. An attacker may initially gain access to your company by compromising a user’s laptop, but the biggest prize—databases of corporate intellectual property and personal information—still resides on the enterprise servers. To steal that information, the attacker may use a compromised end-user device to search the network for servers with inadequately configured access controls. Surveys show most attacks continue to exploit security holes that organizations could easily have fixed. Among organizations surveyed for the 2011 Data Breach Investigations Report, an astonishing 96 percent of breaches could have been avoided using simple or intermediate controls, and 92 percent of attacks were not categorized as highly difficult (Verizon 2011). “Every year that we study threat actions leading to data breaches, the story is the same; most victims aren’t overpowered by unknowable and unstoppable attacks. For the most part, we know them [the attacks] well enough and we also know how to stop them,” the authors concluded.

Similar trends can be seen in the incidence of software errors. Many of the most serious, frequently exploited vulnerabilities in software are due to well-known errors that are “often easy to find, and easy to exploit,” as noted in the 2011 CWE/SANS Top 25 Most Dangerous Software Errors (CWE/SANS 2011). Furthermore, the situation does not seem to be improving. As David Rice, author of Geekonomics (Addison-Wesley Professional, 2007), puts it, most software is not sufficiently engineered to fulfill its designated role as the foundation for our products, services, and infrastructure (Rice 2007). This is partly due to the fact that incentives to improve quality are “missing, ineffectual, or even distorted,” he concluded. To compete, suppliers focus on bringing products to market faster and adding new features, rather than on improving quality. Rice estimated, based on government data, that “bad” error-ridden software cost the United States a staggering USD 180 billion even back in 2007.

Not surprisingly, the typical recommendations for improving IT security often sound remarkably familiar. That’s because they address problems already known to most organizations, but not fully addressed. For example, the recommendations of the Data Breach Investigations Report include ensuring passwords are unique; regularly reviewing user accounts to ensure they are valid and properly configured; securing remote access; increasing employee awareness using methods such as training; and conducting application testing and code review to prevent exploits such as SQL injection attacks and cross-site scripting, which take advantage of common software errors.
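
To make one of those recommendations concrete, the sketch below contrasts a string-concatenated SQL query, which an attacker can manipulate, with a parameterized query, using Python’s standard sqlite3 module; the table and data are hypothetical.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'analyst')")

user_input = "bob' OR '1'='1"  # a classic injection attempt

# Vulnerable: attacker-controlled input is concatenated into the statement,
# so the OR clause matches every row.
vulnerable = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: the driver binds the value as data, so the injection attempt
# simply matches no rows.
parameterized = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()

print("concatenated query returned:", vulnerable)      # both rows leak
print("parameterized query returned:", parameterized)  # []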

The fact that these measures do not appear to be rigorously applied at many organizations takes us back to a key theme of this chapter: that the commitment of employees is as important as the policies and procedures you have in place. If IT administrators and enterprise developers are committed rather than just following directives, if they feel personally responsible for the security of the enterprise, they will be more conscientious about ensuring the right technical controls are in place.

Insider Threats

It’s an unfortunate reality that many intentional threats originate within the organization. In the 2011 Cybersecurity Watch Survey of about 600 organizations (CSO et al. 2011), roughly 20 percent of attacks were attributed to insiders.

The damage can be substantial. One employee working for a manufacturer stole blueprints containing trade secrets worth USD 100 million, and sold them to a Taiwanese competitor in hopes of obtaining a new job with them. Insider attacks also cause additional harm that can be hard to quantify and recoup, such as damage to an organization’s reputation. Insiders have a significant advantage because they can bypass physical and technical security measures such as firewalls and intrusion detection systems that were designed to prevent unauthorized access.

Yet surveys have also suggested that many insider attacks are opportunistic, rather than highly planned affairs. Many insiders take data after they’ve already accepted a job offer from a competitor or another company, and steal data to which they already have authorized access. In some cases, misguided employees may simply feel they’re entitled to take information related to their job.

It may not be possible to thwart all insider exploits, but we can take action to deter the more opportunistic attacks. Perhaps the biggest step we can take is to try to instill a culture of commitment. But we can also use technology to help defend against insider attacks.

As part of our security strategy at Intel, we’re implementing monitoring technology that tracks users’ logins and access attempts. At many companies, IT organizations treat such login data as information that should be closely held and not revealed to users. However, our strategy is to make login information available to users so that they can act as part of the perimeter, helping to spot anomalous access attempts. Let’s say an employee’s log indicates that he accessed the network from Asia yesterday, when in fact he was in Europe. The security organization might be unaware that anything untoward has occurred. But it’s obvious to the employee that someone stole his smartphone or his access information, and he can alert us to the breach.
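
A simplified sketch of how login history might be surfaced to the employee, with logins from unusual locations flagged for follow-up; the data and rules are illustrative and not a description of Intel’s monitoring system.

from datetime import datetime


def summarize_logins(events, usual_countries):
    """Build a login summary for the employee, flagging logins from countries
    they don't normally work in so they can report anything anomalous.

    Each event is a (timestamp, country, device) tuple; the country would
    typically be derived from the source IP address.
    """
    lines = []
    for timestamp, country, device in sorted(events):
        note = ""
        if country not in usual_countries:
            note = "  <-- unusual location, report this if it wasn't you"
        lines.append("%s  %-12s %s%s" % (timestamp.strftime("%Y-%m-%d %H:%M"),
                                         country, device, note))
    return lines


if __name__ == "__main__":
    history = [
        (datetime(2012, 5, 1, 9, 5), "Germany", "laptop"),
        (datetime(2012, 5, 1, 23, 40), "Singapore", "smartphone"),
    ]
    for line in summarize_logins(history, usual_countries={"Germany", "France"}):
        print(line)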

Providing this login information to users can also help deter insider attacks. If unscrupulous insiders know they’re being watched, they’re less likely to take advantage. It’s like the corner store that invested in a CCTV camera; when you walk up to the counter, you see yourself in the display. Now consider the store on the next corner that lacks a camera. Which one is more likely to be robbed?

Finding the Balance

Whether we like it or not, people are already part of the perimeter. Technical controls alone are no longer able to keep pace with rapidly changing attacks, especially when those attacks are combined with sophisticated social engineering. It’s up to us, as security professionals, to recognize that people, policy, and technology are all fundamental components of any security system, and to create strategies that balance these components. Above all, we need to create a sense of personal commitment and security ownership among our employees. If we succeed in this goal, we will empower employees to help protect the enterprise by making better security decisions both within and outside the workplace.