When the first warm breezes of April blow in from Buzzard’s Bay, the South End of New Bedford, near the elbow of the Massachusetts coast as it extends eastward out to Cape Cod, changes character entirely. Not that the fast-food places, convenience stores, double-decker tenements, or boarded-up businesses transform magically into some sort of suburban Valhalla, but the change of season at least encourages people to go out on the streets and makes everything seem a little brighter.

Like many other small to mid-sized New England cities, New Bedford was once a vibrant manufacturing center. From the whaling industry of the eighteenth and early nineteenth centuries to the textile industry of the latter portion of the nineteenth and first half of the twentieth centuries, the city thrived. But New Bedford, like Fall River, Taunton, Lawrence, Lowell, Nashua, New Haven, Pawtucket, and other bygone industrial cities of the northeastern United States, has been down on its luck since the exodus of manufacturing jobs to the cheaper southern states and overseas. That, coupled with the devastation wrought by the influx of heroin and then crack cocaine in the 1970s and 1980s, respectively, virtually ensured the cumulative, continuous cycle of poverty that has gripped the population there ever since. Nonetheless, despite all the despair, the transition from winter to spring always brings the promise of a new beginning.

The first sign that change is afoot can be seen on the ball fields and playgrounds. Children and teenagers emerge as if responding to an unseen force. Impromptu baseball games appear, despite the nearly unusable, muddy infields and the swamp-like outfields caused by the recently melted snows and months of sun deprivation. The basketball courts, too, are beehives of activity, with groups of kids trying to make up for the time lost to the long, exhausting New England winter.

Michael Gomes, a reserve forward on the local high school basketball team, had seen little playing time during his sophomore season, which ended when the team lost in the first round of the state tournament in March. Although he had been good enough to make the varsity team, he knew that it would take an entire off-season of work to reach his goal of being a starter the next year. It was for this reason that he decided to try to forget about the throbbing ache in his head and take advantage of the first decent day of early spring to join a pick-up game with some of the older neighborhood boys at the city courts.

He had awakened with a dull pain over the back of his head and neck and assumed that he had slept in an unusual position, causing muscle spasms. He took Advil, an anti-inflammatory medicine that had always worked for aches, pains, and the occasional headache he had suffered in the past. This seemed to have alleviated the pain somewhat and allowed him to sleep for a few extra hours. But he still had the headache when he woke up; he was not a kid who usually got headaches.

During the pick-up game, Michael felt cold, something distinctly unusual as he was usually too warm during games. He put on a sweatshirt even though the other players were all in either T-shirts or bare-chested. Nonetheless, the game helped him forget about his headache temporarily. He played reasonably well and felt that it was a pretty good start to what would be a busy spring and summer of playing. On the short walk home around noon, he felt the sudden urge to vomit and did so on the sidewalk; he had never done that before. The headache was back in full force, throbbing in the back of his head. The sunlight bothered his eyes and made the headache seem worse. He found it difficult to bend his neck or turn from side to side. When he got home, he went straight to his bedroom, closed the shades, turned off the lights, and tried to sleep, hoping that he would feel better after a few hours.

When his mother came home from work at 5 o’clock, she went into Michael’s room, surprised to find him asleep in his bed at that hour of the day. When she could not awaken him, getting only moans and unintelligible words in response to shaking him and shouting his name, she knew that something was wrong and called 9-1-1 for help. In the hospital’s emergency room, the nurses found that Michael had a high fever and a stiff, rigid neck, and when they removed his clothes, the doctors observed faint, purplish spots on his skin around the elastic waistband of his gym shorts; a few areas on his belly looked like small bruises. He seemed to be drifting in and out of consciousness; when arousable, he would try to answer their questions, but for the most part his answers did not make any sense. He was becoming increasingly agitated.

Michael’s mother gave permission for them to perform a spinal tap, a test in which a needle is inserted into the space between the bones of the lower back in order to take a sample of the fluid that surrounds the spinal cord and brain. The fluid the doctors removed, normally thin and crystal clear, was cloudy, white, and thick. Michael had pyogenic meningitis, a bacterial infection that affects the tissues—the meninges—that cover the brain and spinal cord in the central nervous system. Despite the best efforts of modern medicine and powerful treatments that were brought to bear in the case of Michael Gomes, he died within 24 hours. A healthy, 16-year-old boy playing basketball with friends one day, gone the next—in the blink of an eye—another victim of a devastating disease that kills or permanently disables many thousands of individuals worldwide each year in its sporadic—episodic—form and has the potential to kill orders of magnitude more than that in its epidemic form. The disease has earned its status as one of the most dreaded contagious diseases of nature.

Outbreaks of infectious diseases were probably not a major concern of our earliest ancestors. The intimate, complex relationship between human beings and infectious diseases developed only as a consequence of human social evolution. Early humans lived in small, scattered bands of hunter-gatherers; their primary concern was their own survival. And this essentially meant two things: figuring out where their next meal would come from while at the same time avoiding becoming someone else’s meal or the unfortunate victim of some other deadly accident.

The relatively short life spans of our earliest ancestors were the result of starvation, predation, environmental exposures, and lethal trauma rather than of epidemic infections.1 Infectious illnesses can flourish in human groups only when they can be passed from one member to another—a chain of transmission. Hence, infectious diseases are also known as “transmissible” or “communicable” illnesses. Because early humans traveled and lived only in small groups, the chain was, by definition, a short one. It could support neither the persistence nor the amplification of infection. Thus, the spread of any infectious disease would have been extinguished along with its human hosts—preventing the development of epidemics.
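The burning out of a short transmission chain can be illustrated with a toy simulation (a sketch of my own, not drawn from the source; the contact rate, transmission probability, and band size are arbitrary assumptions chosen for illustration). Each infected person meets a few others per generation, recovered members become immune, and in a closed hunter-gatherer-sized band the germ soon runs out of susceptible hosts:

```python
import random

def outbreak_duration(population, p_transmit=0.3, contacts=3, seed=1):
    """Toy generation-by-generation outbreak in a small closed group.

    Each infected person meets `contacts` others per generation; a contact
    is susceptible with probability S/N and, if so, is infected with
    probability p_transmit. Infectors then recover with immunity.
    Returns (generations_until_extinction, total_ever_infected).
    """
    rng = random.Random(seed)
    susceptible = population - 1  # one index case to start
    infected = 1
    total = 1
    generations = 0
    while infected > 0:
        generations += 1
        new_cases = 0
        for _ in range(infected):
            for _ in range(contacts):
                # contact is susceptible with probability S/N, then may be infected
                if rng.random() < susceptible / population and rng.random() < p_transmit:
                    if susceptible > 0:
                        susceptible -= 1
                        new_cases += 1
        infected = new_cases  # the previous generation recovers, now immune
        total += new_cases
    return generations, total
```

With these assumed parameters each case produces fewer than one new case on average, so in a band of a few dozen people the chain is extinguished within a handful of generations and can never infect more people than the band contains: the persistence and amplification the text describes are impossible without a larger pool of susceptible hosts.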

It was only with the advent of food-producing, large, dense, immobile, agricultural societies that conditions were created in which epidemics of infectious diseases could be maintained.1 Social urbanization created a climate in which populations increased exponentially; as a result, societies evolved into complex structures and became self-sustaining. Such societies not only developed governing organizations and well-defined social strata, but they also provided the ideal breeding ground for germs. Contagious infections flourished under such conditions. Disease transmission was facilitated by the crowded, unsanitary living conditions that characterized ancient societies, and their epidemic potential became magnified. It should therefore come as no surprise that infections such as smallpox, plague, tuberculosis, dysentery, and pneumonia were primarily responsible for the limited life expectancy and the death of a significant proportion of the population in early modern Europe.2

One of the most significant ‘episodes’ in the development of human societies occurred nearly 10,000 years ago, when inhabitants of the Fertile Crescent, in the modern-day Middle East, first successfully domesticated plants and animals. These actions benefited humans in a variety of important ways—and altered our collective fate. Domestication reduced the risk of starvation by providing a ready and regenerative supply of protein and other necessary nutrients. With the improved nutritional status of women, fecundity rates steadily rose, leading to favorable and sustained effects on childbearing and thus promoting the further growth of communities. Domestication of animals also provided humans with easily accessible sources of transportation and work. The cultivation of crops led to the establishment of permanent housing close to agricultural fields and livestock. Each of these factors improved the odds of survival of the human species.

But the successful domestication of animals was accompanied by a significant downside; the increasing proximity of humans to animals had the unintended effect of exposing humans to infectious diseases of animals—“zoonoses.” Although crossing the species barrier is a difficult process for bacteria, viruses, parasites, and other infection-causing “pathogens,” once accomplished, the germs enjoy unfettered access to a new host species, unencumbered by any of their new host’s preexisting immunologic experiences. This can set the stage for epidemic diseases. Ancient examples abound: camelpox in domesticated camels became human smallpox, the greatest disease scourge of all time; bovine rinderpest became epidemic human measles; bovine tuberculosis became human tuberculosis, still a problem to this day; swine influenza became human influenza; and so on. More recent historical examples include the cross-species adaptation of human immunodeficiency virus type 1 (HIV-1) from simian immunodeficiency virus (SIV) of nonhuman primates;3 spongiform encephalopathy from sheep to cattle and on to humans as mad cow disease;4 avian influenza from water fowl to humans through stops in chickens and pigs;5 and severe acute respiratory syndrome (SARS) from civet cats to humans.6

The events of 10,000 years ago put humans on a path of accelerated social evolution. Adaptation from a nomadic, hunter-gatherer existence to a stable agrarian society with ample food supplies spawned a massive population explosion. A division of labor ensued, resulting in the blossoming of civilization: science, innovation, government, and the arts.1 But the gains in terms of civilized society were not without consequences with regard to disease—and infectious diseases reaped the benefits. The rapid expansion of densely populated human communities with their attendant poor sanitation, absent sewage disposal, proximity to domesticated animals, and lack of understanding about the spread of contagious diseases created favorable conditions for outbreaks of infectious diseases—a pattern that continues unabated today in many parts of the world.

Other elements in the transmission of disease also acquired importance as human societies evolved. Rats, mice, and their rodent cousins, emboldened by feeding upon the enormous amounts of refuse generated by large, urban population centers, became efficient reservoirs—sources of germs—and vectors—transmission vehicles—for infections such as epidemic typhus and plague. Large groups of humans living in stable, agrarian communities provided a fertile environment for the airborne transmission of respiratory pathogens and also for the exchange of sexually transmitted infections—such as gonorrhea and syphilis—between intimate partners.7 Human infectious diseases became a part of the fabric of civilization.

Epidemic infectious diseases in ancient cultures were believed to be of divine etiology.2, 8 Many had a profound effect upon civilizations. Ancient Greek hegemony never recovered from the devastation wrought by the plague of Athens that began in 430 bc—early in the Peloponnesian War—and was caused by measles or perhaps another highly contagious infectious disease.9 The Antonine plague of 165–169 ad, likely due to smallpox, originated in the eastern reaches of the Roman Empire—modern-day Iraq—before it spread widely and played a significant role in the inexorable decline of that superpower.10

Recurring pandemics—epidemics occurring over broad geographic areas—and sporadic—episodic—focal outbreaks of infectious diseases have played important roles in shaping the course of human history.1, 11, 12 The Justinian plague of 541–544 ad was simply the opening salvo of 11 bubonic and pneumonic plague epidemics that disseminated and resurged in cycles throughout the known world of that time over a period of two centuries.2, 13 It has been estimated that up to fifty percent of the population perished, contributing to major sociopolitical changes in the Byzantine Empire and leading Europe into the Middle Ages.14

Plague—the “Black Death”—arrived again in Sicily in 1347 via the trade routes from Asia, devastating the population of Europe and likely changing the course of history through its impact on geopolitics, armies, medieval commerce, and almost all aspects of daily and cultural life.1, 15 The impact of the epidemic in Europe may have extended to the very core of humans—their genetic structure—altering the predisposition to future infectious diseases in that population via gene mutations, themselves driven by the selective pressure to evolve and thus survive in the face of the plague.16

Repeated exposure to infections had other effects as well. It allowed humans to develop highly evolved immune systems—defensive weapons in the battle against germs—that became a major advantage for survival. Because communicable diseases were so prevalent, European societies became immunologically experienced with many pathogens; their bodies’ defense mechanisms learned how to fight off germs that they had previously encountered. With each round of epidemic infection—measles, smallpox, plague, or other scourge—enlightened observers noted the phenomenon of “resistance” to sickness upon reexposure to the same disease process. Hence, over time, those who had been previously exposed were able to resist many infectious diseases—if they survived the first encounter. These infections, therefore, became a part of society’s morbid landscape, causing episodic eruptions of disease in nonimmune people but no longer carrying the same explosive mortality for the whole community.2

However, circumstances were entirely different when populations were exposed to infectious germs for the first time. In this setting, some pathogens behaved differently—and particularly badly. Many infections were much more aggressive—deadly—when they encountered populations without any previous exposure to the germ. Immunologically naïve societies were therefore much more vulnerable as compared to immunologically experienced ones.17

The historical record is replete with vivid examples of the consequences of an infectious germ entering a population that had not previously experienced its wrath. Columbus’ first voyage across the Atlantic in 1492 unleashed Europe’s repertoire of epidemic infectious diseases on the immunologically virginal population of the New World—a dynamic that continued with successive Old World incursions into the Americas over the next 150 years. Indigenous populations were decimated as smallpox epidemics ravaged the island of Hispaniola in the first quarter of the sixteenth century, reducing the population by more than ninety-five percent.18

Other Native American societies of the Caribbean basin and later Mexico, Guatemala, and Brazil fell victim to multiple infections imported from the Old World: dysentery, influenza, malaria, and measles among them. With epidemic smallpox in tow, introduced by Spanish forces rampaging through the Indian population of central Mexico, Hernán Cortés easily subjugated the immense Aztec Empire with fewer than five hundred men in 1521.18, 19 His compatriot, Francisco Pizarro, was the beneficiary of a similar result against the Incas in Peru a decade later.19

An analogous fate was met by other immunologically naïve populations when novel diseases were introduced via friendly or hostile visitors from areas known to harbor the pathogens. Yellow fever virus entered the New World through the transatlantic slave trade from Africa.20 It caused recurrent, highly lethal epidemics in coastal areas of the Americas from the seventeenth century through the early part of the twentieth century. In Philadelphia, the disease killed ten percent of the city’s population in 1793.21 A decade later yellow fever decimated Napoleon’s expeditionary forces in Haiti, forcing the “Little Corporal” to abandon his imperial plans for the Americas and to sell the Louisiana Territory to the newly independent United States.20 A century later the disease again altered history by driving the French out of the Panama Canal development process and later almost derailing the American effort there.22 Measles, imported into the isolated Faroe Islands in the North Atlantic by an infected carpenter in 1846, caused an epidemic that infected nearly eighty percent of the population within six months.23

Within their evolutionary framework, epidemic infectious diseases of numerous varieties were well established in human society by the second millennium. Informed by the burgeoning of scientific thought of the eighteenth century’s Age of Enlightenment, hypotheses were beginning to be formulated regarding transmissible infectious diseases and their causative agents. By the nineteenth century, scientific knowledge and technology had developed to such an extent that some of these theories could finally be formally tested—and either confirmed or refuted. The lethal infection that killed young Michael Gomes would be among those whose mystery would begin to be unraveled.