1 Alien Computer Viruses

Early one morning in the not-too-distant future, SETI locates and archives a bona fide artificial signal of extraterrestrial origin. When the alien signal is loaded into a researcher’s computer for analysis, the computer appears to crash. Shortly thereafter, computers across the Internet begin to crash in a suspiciously similar manner. Within a matter of hours, the Internet has been crippled. Stock markets are unable to function. Global communication is destroyed. Power grids begin to falter. Travel and commerce have been brought to a halt, and panic ensues. The cause is ultimately traced back to executable code embedded in the alien signal that managed to penetrate the SETI researcher’s computer and spread worldwide.

This type of scenario is similar to that envisioned by Richard Carrigan (2006). Though the possibility of this happening is remote, it is, as Carrigan argues, not zero. Let us consider, therefore, the conditions that must hold for such a thing to occur. First, the extraterrestrial intelligences (ETIs) must have some knowledge of the current state of our computer technology. That includes knowing that our computers operate on binary data, that those computers use instructions defined by binary strings of a certain length, that those instructions take specific operands, that the processor uses a memory pointer and an instruction pointer, that the computer uses both a form of temporary and a form of persistent storage, and how those two types of storage are accessed and used.
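To make concrete what “instructions defined by binary strings” means, here is a minimal Python sketch. The opcode values are real single-byte x86 opcodes, but the “disassembler” is of course a toy: the point is that a program is nothing more than a string of bytes whose meanings are fixed by tables baked into a particular processor.

```python
# A machine-code program is just a string of bits; its meaning exists
# only in the decoding tables of a specific processor. These are real
# single-byte x86 opcodes, labeled by a toy lookup.

X86_OPCODES = {
    0x90: "NOP   (do nothing)",
    0xC3: "RET   (return from procedure)",
    0xCC: "INT3  (breakpoint trap)",
}

program = bytes([0x90, 0x90, 0xCC, 0xC3])  # four bytes "on the wire"

for offset, byte in enumerate(program):
    meaning = X86_OPCODES.get(byte, "undefined on this processor")
    print(f"byte {offset}: {byte:#04x} -> {meaning}")
```

An attacker who does not know this byte-to-meaning table, for the exact processor family the victim is running, cannot even begin.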

Assume that an alien civilization inhabits a planet orbiting Proxima Centauri, a star just over 4 light years distant. This civilization is near enough to Earth to have received information regarding relatively current terrestrial computer technology via our noisy television and radio broadcasts. Ignore for the moment our rapid adoption of satellite and fiber-optic transmission technologies, which render much of our old broadcast system obsolete and thus reduce the likelihood that any such information would ever reach Proxima Centauri. This civilization could quite possibly possess all of the information I list above as required for such an attack to occur. At least, they would possess the signals containing that information.

Also assume that this alien civilization is not only quite a bit more advanced than ours, but that these aliens are, on average, more intelligent than humans. They are able to decode the signals, decipher whatever human language the information has been presented in, and comprehend everything conveyed therein in a relatively short period of time: one Earth year. They have the material, financial, and political resources to react to and subsequently act on these terrestrial signals in a global, coordinated manner without significant societal impact or upheaval. They see us as a threat to their continued existence in and dominance of this sector of the galaxy. Thus, they devise a plan to leave us defenseless in the face of an alien invasion. They construct the facilities and equipment necessary to transmit a signal containing their computer virus back to Earth, hidden in what seems to be a message of peace and goodwill. The signal is sent, coincident with the launch of an armada of alien vessels that travel quite near the speed of light, designed to arrive in the solar system very soon after the signal reaches Earth.

Four more years pass, and the aliens have arrived in our solar system. They park their battleships behind Jupiter and begin surveillance of Earth, waiting for the telltale signs of societal collapse. Eventually, a SETI researcher or research team picks up the extraterrestrial signal containing the cleverly hidden virus, and…nothing happens. The virus relied on information almost a decade out of date: four years for our broadcasts to arrive, a year of study, and four more years for the virus to travel back. It also relied on a computer chip architecture completely different from the one the researcher is using: the aliens received information about the top-of-the-line computer processor of 2002, the Intel Pentium 4, while the researcher is using a modern SPARC T3 processor. The age isn’t of as much significance as the two CPUs in question: one is a CISC-based processor, the other a RISC-based processor. The acronyms aren’t particularly relevant here; the salient point is that the two are as different as night and day when it comes to the instructions they are designed to execute. Trying to get CISC code to run at the binary level on a RISC processor would be akin to expecting someone who only speaks Swahili to discuss the finer points of Shakespeare with someone who only speaks Tagalog.
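A minimal sketch of just one dimension of that difference: x86 instructions vary in length from 1 to 15 bytes, while SPARC instructions are always exactly 4 bytes, so the two processors do not even agree on where one instruction ends and the next begins. (The x86 lengths below are illustrative, not a real decode.)

```python
# x86 (CISC) instructions are 1 to 15 bytes long; SPARC (RISC)
# instructions are always exactly 4 bytes. Before any opcodes are
# even compared, the two processors carve the same byte stream into
# entirely different instruction boundaries.

stream = bytes(range(1, 13))  # 12 arbitrary bytes arriving "from the sky"

def sparc_boundaries(code):
    """SPARC: rigid 4-byte instruction words."""
    return [code[i:i + 4] for i in range(0, len(code), 4)]

def x86_boundaries(code, lengths=(1, 3, 2, 6)):
    """x86: each instruction's length depends on its opcode;
    the lengths used here are hypothetical."""
    chunks, i = [], 0
    for n in lengths:
        chunks.append(code[i:i + n])
        i += n
    return chunks

print("SPARC sees:", sparc_boundaries(stream))
print("x86 sees:  ", x86_boundaries(stream))
```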

The time difference does become an issue if we instead consider alien civilizations around more distant stars. Suddenly, the number of years that pass between the transmission of the information the ETIs rely on to build their virus and the subsequent reception of said virus back here on Earth can span entire evolutions and revolutions in computer technology and processor architecture. Go from just 4 light years out to 10, and our malevolent neighbors are suddenly at least two decades behind in computer knowledge: ten years for our broadcasts to reach them, and another ten for their virus to reach us. That’s long enough to mean the difference between a modern PC and a 486-based machine without any recognizable modern form of Internet access.

But let’s get back to our Centauran virus: it’s been loaded onto a SETI researcher’s Sun workstation, and has done precisely nothing. Does this invalidate Carrigan’s scenario? Not necessarily. That data could be sent to other machines, ones using a processor architecture more similar to the one the ETIs targeted when designing their virus. Will it therefore wreak untold havoc on our modern, technology-reliant society? Well, assume the ETIs were working at the binary level; that is, they sent a long string of ones and zeroes representing a sequence of instructions for the processor they learned about. Assume further that the virus eventually landed on a computer using an Intel processor sufficiently old, or sufficiently compatible, that those instructions were still valid. (The x86 architecture goes out of its way to preserve as much backwards compatibility as possible in its instruction set, though this can’t always be done; this is one reason higher-level programming languages exist: to abstract away the actual processor instructions.) Even then, the instructions may or may not do anything, because they would still need to be executed: the virus would have to be “run” on the computer. This is usually done via direct user intervention: clicking an icon or typing a command. That, however, allows the computer’s operating system (OS) to decide what to do with the file, a decision usually dictated by the file’s type, which is in turn usually defined by the file’s extension. SETI researchers aren’t in the habit of trying to execute files containing signals received from other stars, and even if they were, the OS wouldn’t treat such a file as executable.
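A rough sketch of the kind of check that stands in the way, assuming nothing beyond well-known file signatures: operating systems identify executables by “magic numbers” in a file’s first bytes (Windows additionally leans on the extension), and raw signal data matches none of them.

```python
# Operating systems decide how to treat a file before executing a
# single byte of it. Unix-like loaders check "magic numbers" at the
# start of the file; a raw SETI data capture matches none of these,
# so the loader simply refuses it.

MAGIC = {
    b"\x7fELF": "ELF executable (Linux, Solaris, ...)",
    b"MZ":      "PE executable (Windows)",
    b"#!":      "interpreter script",
}

def classify(first_bytes: bytes) -> str:
    for magic, kind in MAGIC.items():
        if first_bytes.startswith(magic):
            return kind
    return "not an executable format -- treated as plain data"

print(classify(b"\x7fELF\x02\x01"))           # a real program
print(classify(b"\x1c\x8a\x00\xf3 signal"))   # arbitrary signal data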

There is another way that a computer virus can be executed on a computer without human intervention: the virus tricks the computer into running it. This, however, requires a detailed knowledge of the computer hardware, the operating system, and the particular piece of software being run within the operating system (a web browser, a signal detection and analysis program, a statistical package, etc.). The virus would exploit weaknesses in that piece of software, typically discovered by benevolent or malevolent researchers through months of painstaking investigation and reverse-engineering, guided by a detailed understanding of the kinds of weaknesses typically found in computer software. Those weaknesses would then be leveraged to execute the virus payload: in this case, the malicious Centauran code.
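To illustrate the class of weakness involved, here is a deliberately contrived, hypothetical Python sketch: an analysis tool that lets untrusted input cross the line from data into executed code. Real exploits, such as buffer overflows, operate at a much lower level, but the principle is identical.

```python
# A contrived, hypothetical example of the *class* of weakness a
# virus needs: software that lets untrusted input become code. Here
# a careless analysis tool eval()s a scaling expression embedded in
# a file header that travels with the untrusted data.

def load_samples(header: str, samples: list[float]) -> list[float]:
    # BUG: the header comes from the data source, yet it is handed
    # directly to the interpreter. Whoever writes the header now
    # runs code on this machine.
    scale = eval(header)            # never do this with untrusted input
    return [s * scale for s in samples]

benign = load_samples("1.0e-6", [3.0, 4.0])
print(benign)  # [3e-06, 4e-06] -- the intended use

# A hostile header executes arbitrary commands during "loading":
load_samples("__import__('os').system('echo pwned') or 1", [3.0, 4.0])
```

Finding such a flaw requires intimate, current knowledge of the specific program being attacked, which is exactly what a decade-old snapshot of our broadcasts cannot supply.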

Let’s say this happens, as vanishingly unlikely as the scenario has become: the Centauran virus finds a suitable host and executes. It still has to do something more malignant than crash the one computer it’s on. It needs to spread, and spread quickly, if it’s going to cause the kind of worldwide catastrophe the Centaurans are counting on. Relying on random manual infection of this sort is far too inefficient to pose any real threat. The virus needs to get onto the Internet without human intervention. It needs either to be autonomous or to hide itself in something so wildly popular that it entices people in all major sectors of our infrastructure to install and run it themselves. The latter would require even more knowledge about us in terms of economics, sociology, politics, and so forth, and that knowledge would need to be much more current than what the Centaurans possessed when they created the virus. It would be wonderful if the Centaurans could figure out a way to get a virus to execute simply by viewing captioned images of cats, but if that were a viable attack vector, I can promise you it would already be in use by far more domestic sources. So let’s examine the possibility of autonomy.

For the Centauran virus to propagate, it needs to be able to detect and use modern networking hardware. And here the Centauran virus is likely to be stopped in its tracks: the odds of any two computers having an identical hardware configuration, including the networking hardware, are small. The odds of enough computers on the planet sharing that same configuration are essentially zero. This is relevant because we assumed at the beginning that the virus was working at the level of “bare metal”: that is, it was using raw binary data to represent a string of instructions hard-wired into the computer’s processor. At this level, the virus authors would need to know about things like networking drivers, and how to interface with them. It’s not out of the question that such an advanced tidbit of knowledge could have been broadcast in 2002. What begins to stretch credulity is that said knowledge would still be relevant a decade later. Not only has the hardware in use changed in that period of time, so too have the drivers necessary to interface with it. In addition, there are many different makes and models of networking hardware in use at any given time throughout the world. So the Centaurans couldn’t possibly hope to have their virus communicate directly with the networking hardware at that level. That means they would instead have to rely on the computer’s operating system.

The operating system of a computer provides a useful abstraction layer for tasks such as communicating with a network card. This allows software writers to ignore all of the details outlined above and focus on the more important aspects of their task. Rather than walking into a deli and instructing the employees in every minute detail of building the sandwich you’d like, from slicing the meat, cheeses, and bread to placing them, along with your chosen condiments and additions, in the proper order on a plate, you simply give the counter person the name of the sandwich you want and get on with the more important task of lunch. The Centauran virus could conceivably tell the OS it wants to connect to the Internet, assuming several more critical events have occurred: the Centaurans have received enough low-level programming information about an OS to accomplish this, the OS in question is still the dominant OS back on Earth, and the Centaurans have received an expert-level education in how the Internet operates.
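In programming terms, the deli counter looks something like the following sketch, which uses the standard sockets interface (the host and port are arbitrary): a few lines stand in for the entire driver stack, and they work unchanged on any network card the OS supports. Of course, this convenience is exactly the knowledge burden at issue; the sketch works only because we know which OS, and which interface, to ask.

```python
# The deli-counter version of networking: instead of programming a
# specific NIC's registers, the program names what it wants and the
# OS handles every hardware detail underneath.

import socket

with socket.create_connection(("example.com", 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    print(conn.recv(200).decode(errors="replace"))
```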

In 2002, Windows XP was just becoming the dominant PC OS. Windows 2000 was still quite prevalent in business environments, and Windows 98 or Windows ME was still to be found in great numbers in homes. Not to mention the resurgence of Apple with OS X, and the vast array of Unix-based OSes in universities and research organizations around the world. A decade later, it’s Windows Server 2003 and Windows 7, and newer versions of OS X and all those Unix OSes. The diversity of operating systems alone precludes the type of worldwide takeover this virus was meant to achieve. Even if we were a monoculture of a single Windows operating system, and that OS remained dominant for the decade necessary for the Centaurans to achieve their aim of accessing the networking hardware in computers worldwide, they would still need to know that the Internet existed, let alone understand its inner workings. We take all these things for granted, and even with the entire Internet at our fingertips, most of us would struggle to learn enough of this information to achieve what the Centaurans intend. Now imagine trying to do it from a sparse collection of 10-year-old broadcasts you cannot control, search, or request follow-up information on.

As unlikely as this scenario is, it raises an interesting question: if we did receive a signal from an ETI, would we be able to determine the sender’s intent? Would we somehow be able to tell that the Centaurans had hidden a virus in the signal, meant to bring society to its knees, or are we at the mercy of our assumptions about extraterrestrial altruism?

2 Inferring Altruistic or Selfish Intent

We are all here on earth to help others. What I can’t figure out is what the others are here for.

W. H. Auden (2002, 347)

What is altruism and how does it apply to SETI? Altruism is commonly defined in two ways: the belief in or practice of disinterested and selfless concern for the well-being of others, and the behavior of an animal that benefits another at its own expense. Either definition could be applied to a signal sent into the cosmos by an alien race. But would we be correct in assuming any such act was an altruistic one? The concept of altruism may be universal (Minsky 1985), so it’s not unreasonable to assume that an alien civilization would be acquainted with the idea.

A computer is completely trusting of its input, and disambiguates mercilessly. Its interpretation of input is based on a finite set of well-understood rules. Thus, the sender’s message is interpreted plainly within that ruleset: if an executable instruction is found in the proper context, the computer assumes it was meant to be executed, and that instruction is totally unambiguous to the computer: there can be only one way in which that particular instruction may be interpreted in a given context. This property allows the creation of astoundingly complex software. However, it is exactly this clarity and brittleness that gives rise to “bugs” in computer programs: unintended effects of well-intentioned but poorly-structured code. The benefit of computers is that they do exactly what you tell them to do. The drawback of computers is that they do exactly what you tell them to do.
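A small illustration of that merciless literalism: the same four bytes below are an integer under one rule and a floating-point number under another, and the machine applies whichever rule the context names, with no notion of intent.

```python
# One bit pattern, one interpretation per context -- never a judgment
# call. The context (here, the format string) fully determines the
# meaning of the bytes.

import struct

raw = b"\x00\x00\x80\x3f"
(as_int,)   = struct.unpack("<I", raw)   # little-endian unsigned int
(as_float,) = struct.unpack("<f", raw)   # IEEE 754 single precision

print(as_int)    # 1065353216
print(as_float)  # 1.0
```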

In his paper “Science and Linguistics” Benjamin Lee Whorf (1940, 229) wrote, “We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organize it in this way—an agreement that holds throughout our speech community and is codified in the patterns of our language. The agreement is, of course, an implicit and unstated one, but its terms are absolutely obligatory; we cannot talk at all except by subscribing to the organization and classification of data which the agreement decrees.” Computers may interpret a signal in one way. Humans, on the other hand, may have quite a different interpretation of that same signal. In addition, we humans have the added benefit of considering the intent behind the signal itself; we can take the signaler into consideration.

With respect to the content of a signal, communication as consensus presents interesting problems when applied to interstellar distances. At the beginning of this chapter, we presented a scenario in which an alien race was able to communicate with our terrestrial computer systems in those computers’ own language. This scenario makes many assumptions, including that the signal was intended only for receivers on Earth. A more likely scenario may posit a signal that is not language-specific, species-specific, or even location-specific. How does one decode a signal written in a language so totally foreign that no assumptions whatsoever can be made about its content? Various solutions have been proposed (Freudenthal 1960; Sagan 1973) in which the foundations for this agreement are present within the message itself, defining the terms through which the sender is attempting to communicate with the recipient. In this situation, the sender assumes both parties will come to consensus and agree to use the terms in the primer properly. This technique has been demonstrated in various works of fiction, perhaps most notably the book Contact (Sagan 1997). This allows us some hope that we may be able to establish the linguistic common ground necessary to interpret the contents of a signal. However, such a primer would by necessity provide only the bare minimum necessary to interpret the contents of the message. What we would lack are the cultural contexts in which the concepts being communicated arose, and any knowledge of the sociopolitical forces that led to the transmission of the signal. A simple “Hi, we’re over here!” could mean anything from “WARNING: Avoid at all costs. Imminent collapse into ultra-massive black hole in progress,” to “We just wanted to let you know where we were before we annihilate you. In our society, it’s considered polite.” We cannot rely on the content of the signal by itself when determining whether or not the signal is intended to be altruistic.
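A toy sketch of the primer idea (the glyphs and layout are invented purely for illustration): the message first pairs unfamiliar symbols with unmistakable tallies, and only then uses those symbols, so later statements can be read, and checked, using nothing outside the message itself.

```python
# A toy version of the Freudenthal/Sagan "primer": the message
# teaches its own vocabulary before using it. A real primer would
# build from counting to arithmetic to logic over thousands of
# such statements.

teaching = [
    ("|",   "A"),   # the glyph "A" is paired with one tally mark
    ("||",  "B"),   # "B" with two marks
    ("|||", "C"),   # "C" with three marks
]
# The recipient induces the vocabulary from the pairings alone:
vocab = {glyph: len(marks) for marks, glyph in teaching}

# Later statements reuse the taught glyphs. "A B C" can now be read,
# and verified, as the claim 1 + 2 = 3:
claim = ("A", "B", "C")
assert vocab[claim[0]] + vocab[claim[1]] == vocab[claim[2]]
print("claim verified using only terms defined inside the message")
```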

To explore the question of altruism in an extraterrestrial signal, the signal may be considered with respect to both the act of signaling itself, and the signal’s content. Either component may be altruistic or selfish, resulting in four possibilities:

  • Beacon altruism: The act of signaling is intended as an act of altruism, to inform other civilizations of the signaler’s existence.

  • Content altruism: The signal contains a message designed to impart some benefit to the recipients.

  • Beacon selfishness: The signaling act is intended to harm the recipient. Perhaps it was meant as a lure to entice other civilizations to expend significant resources attempting to contact and/or visit the signaler, disrupting the recipient’s civilization and leaving them vulnerable to attack. Or perhaps it was meant to throw the recipient civilization’s religious beliefs into question or incite panic in the populace.

  • Content selfishness: The signal contains a message designed to harm the recipient. Perhaps the scenario described at the beginning of this chapter was the intended effect of the data contained within the signal. Perhaps the signal contains instructions for building a doomsday device that, when activated, destroys the recipient’s world, thereby eliminating the signaler’s competition in the long term.

The time and distance involved in interstellar communication make any such act a one-way affair. Therefore, it may be assumed that the signaler will neither reap any immediate benefit nor incur any immediate harm due to the actions of the signal recipients.

Under these assumptions, acts of beacon altruism would seem to provide the least benefit to the signaler. There can be no short-term gains for the signaling civilization, except perhaps the social and political benefits certain individuals derive from the decision and the act. The costs of beacon altruism are quite high in comparison, owing to the strength and duration of the signal necessary to produce a significant likelihood of reception. Content altruism would seem to suffer from the same problem: high costs with little to no short-term gain. Furthermore, altruistic signaling may produce long-term harm, in that a recipient civilization may view the signal source as a threat and act to minimize or eliminate that threat.

Selfish acts, on the other hand, may provide a greater benefit to the signaling civilization. The ability to remotely hobble or destroy a potential competitor at a comparatively negligible cost may seem attractive to a xenophobic and/or highly competitive civilization. As described above, these effects are possible either through the content of a signal or through the act of signaling itself. With this potentially greater reward, however, comes increased risk: if the signal recipient is similarly competitive but more technologically advanced than the signaler, the recipient could use the signal itself to locate the signaling civilization and destroy it.

Given the relative costs and benefits of altruistic versus selfish signaling, which is more likely? The question presents a classic Prisoner’s Dilemma scenario (Axelrod 1984). The basic Prisoner’s Dilemma structure is one of cooperation versus betrayal: two parties are faced with possible harm. If one party cooperates and the other betrays them, the betrayer is not harmed, while the cooperator receives the greatest amount of harm. If both parties cooperate, both are minimally harmed. If both parties betray each other, both are harmed slightly more than they would have been had both cooperated, but neither is harmed to the full extent possible. If we consider alien signaling in the Prisoner’s Dilemma framework, it would be of greatest benefit to the signaling civilization to be selfish while assuming that the recipient would infer altruism on the part of the signaler. This isn’t an unreasonable assumption to make: Fehr and Fischbacher (2003) have demonstrated that we humans tend toward cooperation in such situations, even though each party, acting purely in its own interest, would logically choose betrayal.
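Putting this harm-based framing into illustrative numbers (0 for no harm, 3 for maximum harm; the exact values are assumptions for the sketch) makes the dilemma’s logic mechanical: whichever move the other party makes, betrayal leaves you less harmed, which is precisely why self-interest steers both parties toward the mutually worse outcome.

```python
# The chapter's harm-based Prisoner's Dilemma in numbers. Each entry
# maps (my_choice, their_choice) to the harm *I* receive.

HARM = {
    ("cooperate", "cooperate"): 1,  # both minimally harmed
    ("cooperate", "betray"):    3,  # I trusted, they didn't
    ("betray",    "cooperate"): 0,  # I betrayed a trusting partner
    ("betray",    "betray"):    2,  # mutual betrayal: bad, not worst
}

# Whatever the other party does, betrayal leaves me better off --
# which drives both parties to the mutually worse (2, 2) outcome
# instead of (1, 1).
for theirs in ("cooperate", "betray"):
    best = min(("cooperate", "betray"), key=lambda m: HARM[(m, theirs)])
    print(f"if they {theirs}: my least-harm move is to {best}")
```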

Based on this information, should we, the recipients of an extraterrestrial signal, view that signal and its contents as an altruistic act, or an act of selfishness? Should we trust that the signaling civilization has our best interests at heart, or should we assume the worst and act accordingly? We have no means of evaluating the cultural context that gave rise to the alien signal. We have no knowledge of the social and political pressures that drove the signaling civilization to send a signal in the first place, nor of those that shaped the signal’s content. Indeed, we have no knowledge of whether social interaction as we know it even exists on the signalers’ world; we merely assume that concepts like “social pressure” and “politics” exist there as we know them.

Lacking any context in which to place the act, we must infer the signaler’s intent. Is the signal meant as an interstellar “Hello, neighbor”? Is it a tantalizing lure meant to exhaust our resources, or to provide the signalers’ descendants with flash-frozen culinary delights? Are the contents of the signal an Encyclopedia Galactica? Or are they an insidious computer virus designed to destroy any sufficiently advanced technology with which it comes into contact? We must ask these and similar questions once the initial excitement of signal discovery has died down.

The problem with doing this is that we may never know the correct answers. The principle of cultural relativism holds that concepts like morality and hostility vary among cultures. As such, what the recipients deem an immoral act (e.g., sending a signal that contains instructions for incubating a fast-acting virus that destroys all biological life on the recipients’ world, disguised as a cure for all disease) may be viewed as not only moral but highly desirable by the senders (e.g., a culture whose religion holds in the highest regard the act of introducing others to the afterlife quickly and efficiently). Western culture is full of examples of such erroneous inferences. In J. Michael Straczynski’s Babylon 5 television series, an interplanetary war is predicated on a cultural misunderstanding: an alien race makes first contact with a human warship with its gun ports open, a sign of strength and respect in the alien culture. The humans mistake this gesture for an act of aggression and open fire. In American culture, the “thumbs-up” gesture (fist outstretched and vertical with the thumb extended upward) is typically interpreted as “I’m okay” or “everything is great.” In many Arab cultures, however, the gesture is considered offensive, equivalent to the “middle finger” gesture in American culture (Axtell 1998).

Without knowledge of the signalers’ culture, it is impossible to intuit the original intent of the signal. Even an instance of beacon altruism, where the signalers’ only intent was a friendly, content-free “hello,” could be construed as an untoward act by the recipients. If the signal were such that it caused significant problems with the receiving equipment, or with sensitive but otherwise completely unrelated equipment, the recipients might interpret the act as hostile. (Perhaps the aliens deliver messages using small amounts of hyper-dense material accelerated to nearly the speed of light, meant to be deciphered on arrival after being decelerated and captured by a special orbiting apparatus that we sadly lack. It would be the interstellar equivalent of tying a message to a rock and hurling it through a priceless stained-glass window!) If public knowledge of the signal were to cause a social and/or political upheaval in the recipients’ civilization, there would be no way to determine whether those effects were intended or accidental.

On the other hand, a signal originally intended as selfish may be interpreted as altruistic by the recipients. An act of beacon selfishness may be viewed as nothing more than a friendly “hello,” with the recipient civilization failing to exhibit the reaction the signalers had hoped for. In this instance, it is the signaler that has made faulty assumptions about the recipients’ culture. Likewise, an act of content selfishness, though the signal may contain instructions for constructing a dangerous and ultimately deadly device, may provide the recipient civilization with precious knowledge of advanced technology or manufacturing techniques, and the recipients may determine the true nature of the device before it becomes active. Hostile computer code meant to wreak havoc with the recipients’ computers may in fact alter them in such a way as to make them self-aware, or may provide the recipients with enlightening insights into new programming techniques. In each case, the signalers’ intended ill effects not only fail to manifest, but the recipient civilization benefits in some way from the content of the message.

These acts of accidental altruism highlight the difficulties a recipient civilization may face once an extraterrestrial signal has been detected. The odds of correctly interpreting the sender’s intent and full meaning are small. According to Wiio’s First Law, communication usually fails, except by accident (Wiio 1978). Ultimately, the determination of intent must be made with respect to our own culture’s concepts, values, and mores, while remembering that we know nothing of the social, political, or cultural contexts in which the alien signal originated.

In the face of all this uncertainty, how should we proceed? The Prisoner’s Dilemma suggests that we should be trusting only if we believe the signaling civilization’s intent is altruistic. However, the Prisoner’s Dilemma holds only if the actions of both entities have known interpretations by and effects on the other party. As demonstrated here, this is not the case: it is entirely possible for the signaling civilization to act altruistically yet have that act interpreted as selfish (in this case, dangerous or untrustworthy) by the recipients. It is also possible for the signalers to act selfishly only to be viewed as interstellar altruists by grateful recipients. Thus, the Prisoner’s Dilemma scenario cannot be relied upon to inform our reactions to an alien signal. Perhaps the best course of action is a prudent one that embodies guarded optimism: prepare for the worst and hope for the best.