To continue the exploration of why I believe security and privacy are a matter of corporate social responsibility, here’s another quick historical perspective, this time examining the emergence of information risk in the context of technology’s evolution.
The march of technology can be viewed as a succession of major waves, each lasting roughly 100 years (Rifkin 2013). Each wave has brought transformative benefits to society, but also significant challenges. The first wave, starting in the 1760s, included steam power, railways, and early factories, as well as mass education and printing. The second wave, starting roughly in the 1860s and continuing well past the mid-1900s, included automobiles, electricity, and mass production, and had an even bigger effect on society. Many of today’s corporate social responsibility issues are the negative impacts of those first two waves of technology: examples are environmental impacts due to industrial production, mining, and oil drilling; factory working conditions; and the safety of mass-produced items.
Table 9-1. The March of Technology
The third wave began in the 1960s, with early computers, but only really gained momentum in the 1990s. It includes the Internet and smart “things,” molecular biology and genetic engineering, and renewable energy. Arguably, this technology wave may have the broadest impact on society of any to date. Each previous wave lasted about 100 years, so history suggests that we are far from reaching the crest. If this wave were a movie, we’d still be watching the opening credits.
If the opportunities presented by this third wave of technology are unparalleled, so are the risks to society. As I’ve argued in earlier chapters, as technology has spread exponentially, so have the threats and their impacts, while security controls have progressed at a more linear, incremental rate. As a result, there’s a continually widening gap between the capabilities of the controls and the impact of exploits. If the impact of security breaches seems big now, consider what it will be in 10, 20, or 50 years, when technology is even more pervasive throughout society.
Let’s consider some of the potential impacts by reiterating two examples from Chapter 6. Last year, doctors for the first time inserted an artificial “eye” that enabled a blind person to see. The device is a retinal implant that receives signals from a video camera integrated into eyeglasses. Think ahead a few years, to a time when the implants are more sophisticated, can see in much higher resolution, and include software to automatically interpret visual information such as QR codes. Then imagine that a malicious actor creates a QR code that triggers the vision system to download malware. Like the PC malware that paralyzed Sony’s network in 2014, the malware then demands a ransom to re-enable the person’s vision. Now consider the example of a cement company that embeds sensors in the concrete mix used to build a new road, enabling local authorities to monitor traffic patterns and adjust signals to optimize the flow of vehicles. If the technology is not securely designed and implemented, a malicious person who gains the ability to execute code on the system could falsify the traffic pattern so that vehicles converge on the scene of a planned bomb attack.
Here’s an example of a real-life attack that unfortunately has already occurred. Over a four-day period in November 2008, members of an Islamic militant organization carried out a series of 12 coordinated shooting and bombing attacks across Mumbai. The attacks killed 164 people and wounded at least 308. Of the funding that enabled the attack, $2 million was raised through cybercrime (Goodman 2015). Think about how cybercrime works. Typically, the cybercrime cycle starts with stealing someone’s identity by installing malicious code on a device or by taking advantage of insecure behavior. So ask yourself: If I don’t keep my systems up to date, if I don’t design and implement them well and educate employees to ensure they are security-aware, am I indirectly contributing to terrorism? The answer is that you might be, although in most cases you won’t even know it.
As I discussed in Chapter 6, four motivations account for the majority of serious exploits. Terrorism is one. The others are financial gain, warfare, and hacktivism. Each of these motivations can result in consequences with broad impacts across society: economic damage, loss of services, damage to morale, degradation of government services, and even human casualties.
As all companies become technology companies, the technology they create and deploy may be exposed to exploits with potential impact on society. The same applies, of course, to public-sector organizations. Even though this idea is becoming more widely accepted, I occasionally encounter people who don’t believe it applies to their organization. Recently, as I fielded questions after giving a talk, an audience member commented that she was on the board of a local school and definitely didn’t see the school as a technology organization. “Does your school have a web site that parents and kids can use to view and update information?” I asked. She said yes. Then I asked, “Does your school have an app that lets parents check whether their kids attend class?” No, she said, but the school was considering it. “Let’s imagine you have a web site that’s not well designed, and a malicious person decides to take advantage of that with a zero-day exploit,” I said. “He can compromise the site and the personal information of the parents and children who use it.” I added that if a school takes its technology to the next level by making an app available to parents or kids, it becomes even more clearly a technology supplier, and its security concerns now include product vulnerabilities. By the time I’d finished explaining, the audience member asked me if I could come and explain the issues to her board, which of course I agreed to do.
Here’s another school example, one that highlights the risks of failing to consider all the ethical implications: A Pennsylvania school district issued laptops to some 2,300 students, then, without informing the students, remotely activated the laptops’ webcams and used them to secretly photograph students at home, including in their bedrooms. Surveillance software on the laptops also tracked students’ chat logs and the web sites they visited, and then transmitted the data to servers, where school authorities reviewed and shared the information and in at least one case used it to discipline a student. Ultimately, the school district was forced to settle a class-action lawsuit charging that it had infringed on the students’ privacy rights (Bonus 2010).