Introduction: Historicizing meat culture

Meat and livestock have played so fundamental a role in human societies that meat production and consumption are often seen as being the natural way of things and hence taken for granted, much like contemporary differences between men and women are often seen as a reflection of core biological dissimilarities. In this paper, we review the available evidence on two widespread explanations for the importance of meat in Western history and culture: biophysical and political-economic. Biophysical explanations for meat consumption are based upon the premise that meat has been and continues to be essential to human nutrition and agricultural sustainability. In contrast, political–economic explanations for meat consumption assert that consumers’ behavior is largely determined by their relationship to the means of production and the overarching power of government and corporations. Despite their distinct emphases on biological and social processes, respectively, the biophysical and political-economic approaches are both materialistic explanations of human beliefs and behaviors (see Harris and Ross 2009). Notably, while sociobiological explanations in general have been thoroughly challenged by social scientists (see Freese et al. 2003), the broader premise that meat consumption is driven by materialist imperatives has yet to be challenged by a sustained and comprehensive historical critique. This paper provides a genealogy of that premise, along with arguments against its veracity.

In order to further explore these arguments, we conducted an extensive genealogical analysis of meat production and meat culture across thousands of years of Western history. This type of approach was ideal for our purposes here because it is first and foremost “concerned with the processes, procedures and apparatuses by which truth and knowledge are produced” (Tamboukou 1999, p. 202). Such analyses focus on historical and cultural practices, paying particular attention to relations of power (see Foucault 1971, 1991, 1995). Our investigation thus involved a comprehensive examination of meat-related secondary literature in the fields of anthropology, zooarchaeology, agricultural and environmental history, public health and nutrition, rural and environmental sociology, gender studies, consumer culture, science and technology studies, and religious studies.

In reviewing literature from a wide array of data sources and methodologies, we ultimately reached a series of conclusions that were informed and corroborated by a breadth and depth of perspectives. Our underlying argument throughout this paper can be summed up as follows: Except under conditions of environmental scarcity, the meaning, value, and legitimacy of meat cannot be attributed to intrinsic biophysical value or to the political-economic actors who materially benefit from it. This argument is premised upon the assertion that it is impossible to understand meat in America, or anywhere for that matter, without understanding its cultural roots. By historicizing the apparent naturalness of meat consumption, our analysis undermines the essentialist tendencies on which biophysical explanations for meat consumption rest. Further, by demonstrating the importance of cultural meaning in shaping meat consumption, we challenge the reductionist tendencies of much scholarship in the political-economy tradition.

In what follows, we begin by briefly discussing the biophysical and political-economic paradigms. We then look at the circumstances that encouraged Paleolithic people to become hunter-gatherers. While meat was an indispensable source of sustenance for Paleolithic communities, we critique the argument that the everyday lives of hunter-gatherers somehow legitimate meat consumption as an essential, intrinsic element of human behavior more broadly. Next, we discuss how changing biophysical, political-economic, and cultural contexts during the Neolithic period helped meat to become venerated in the name of food security, social status, and religious expression. We then look at how political-economic, gastronomic, and religious imperatives shaped the legitimacy of meat from antiquity through modernity, up to and including early US history.Footnote 1 Following this discussion, we examine how the legitimacy of twentieth century industrialized meat production and consumption emerged in alignment with corporate interests, urban political priorities, and consumers’ desire for convenient and affordable meat products. The paper closes with a discussion of the implications of the constructed naturalness/necessity, and thereby legitimacy, of meat.

Demythologizing man, meat, and materialism

Numerous popular and academic discourses on environmental nutrition have heralded the Paleolithic era and its attendant meat consumption as a kind of “Golden Age” of human health and sustainability. Advocates of what Knight (2011, p. 706) describes as “evolutionary nutrition” (also known as the “discordance hypothesis”), for example, legitimate meat eating by arguing that humans are genetically programmed to require it (Eaton and Konner 1985; Konner and Eaton 2010). According to this perspective, humans were genetically ill equipped to consume foods that were produced with the advent of agriculture, namely, grains, refined sugars, and highly processed foods. This argument emerged over 25 years ago, but various versions of it—the Atkins diet, South Beach diet, the Zone, and the Paleo diet—reintroduced evolutionary nutrition to popular culture in the late 1990s (Knight 2011). Proponents of evolutionary nutrition also argue that human brain development can largely be attributed to meat eating, that dental hygiene wasn’t a problem until people began to consume grains, and that diseases of civilization like diabetes, cancer, and cardiovascular disease did not start to appear until humans stopped engaging in hunting-gathering.Footnote 2

There are numerous limitations with the biophysical perspective. It presumes that early human diets were generally universal, that post-Paleolithic evolutionary changes are insignificant, and that genetics is the most important indicator as to what constitutes normal human eating patterns (Turner and Thompson 2013). To the contrary, anthropological research suggests that early human diets were quite flexible and varied according to “geography, food availability, seasonality, and climatic conditions” (Turner and Thompson 2013, p. 502). Human genetics further evolved throughout the Paleolithic and Neolithic periods, well into the introduction of agriculture, as evidenced by varying levels of lactose and grain tolerance among different populations (Mathieson et al. 2015). Genetics is also only one of many aspects of human dietary propensities and sensitivities—other influences include fetal nutrition, exposure to different types of foods, the socialization of children to appreciate certain tastes, the absence or introduction of gut microbiomes, and the manipulation of eating environments through cooking, decontamination, and other processes (Korsmeyer 2002). Evolutionary nutrition also ignores the socioeconomic factors that drive contemporary health disparities (Nestle 2002; Knight 2011; Turner and Thompson 2013). Other proponents of the biophysical perspective argue that meat’s legitimacy stems from a symbiotic relationship between livestock and the earth (see for example Pollan 2006), an argument that will be addressed more extensively later in the paper.

Claims that there are nutritional and/or environmental imperatives that compel people to eat meat all share the same basic premise, namely, that meat eating aligns with the natural order of the world and is therefore immutable. This premise has widespread appeal, and has been, to varying degrees, explicitly and implicitly adopted by academics and the general public alike. For instance, in an article in the Journal of Nutrition comparing the nutritional profiles of vegetarian and meat-based diets, the authors state that diets free of meat can be ‘nutritionally adequate’ but nonetheless go on to advocate for “the advantages of combining plant-based diets with ASF [animal source foods]” (Murphy and Allen 2003, p. 3932S). The implication is that even though diets without animal products can be nutritionally sound, ‘animal source foods’ are nonetheless preferable and even necessary. This assumption extends beyond the field of nutrition. Arcari (2016) reports that all 15 of the most prominent reports on climate change, sustainability, and food security she analyzed describe meat as a natural and necessary part of the human diet.

With respect to the broader public, Joy (2011) has identified three primary discourses that legitimate meat consumption, collectively referred to as the 3Ns: the belief that eating meat is natural, normal, and necessary. Piazza et al. (2015) subsequently added nice to this list of justifications. For example, when university students and contract employees at a large internet company were asked to provide three reasons why they think “eating meat is ok,” the 4Ns were among the most frequently cited justifications (Piazza et al. 2015).Footnote 3 A study of residents in Victoria, Australia also found that a notable proportion of the sample of 415 people agreed with the statement that “humans are meant to eat lots of meat” (Lea et al. 2006).Footnote 4 Discourse analyses of advertisements for animal products and red meat exporter marketing materials similarly indicate that the romanticization of ‘naturalness’ is one of the three most common themes in these materials (Fitzgerald and Taylor 2014).Footnote 5

Besides the biophysical paradigm, which naturalizes meat consumption, traditional political economy scholarship provides another influential approach to meat. Both paradigms converge in dismissing the cultural domain and valorizing the material realm. Marx, for example, famously argued that the material basis of human life is what determines which discourses take shape and are adopted by social actors (Marx 1978 [1859]). More recently, Gould et al. (2004) have similarly argued that producers’ ability to dictate the use of technology, manage labor, and shape consumer ideologies vastly outmatches consumer activists’ ability to create alternatives to the productivist system or slow its velocity. By this logic, people consume meat specifically because large corporations have encouraged them to.

Economic determinism is evinced in many criticisms of large meat companies by food scholars and activists of all stripes, who argue that these companies have outsized political, economic, and cultural influence. These critics point out, for example, that the price of meat has been artificially lowered by agricultural subsidies (Winders and Nibert 2004); that fast food advertisements bombard children at vulnerable ages (Delahoyde and Despenich 1994; Nestle 2002); that geographically distant facilities actively deter consumers from learning about how meat is produced (Vialles 1994); and that economic inequality leaves the poor with little option but to eat fast food and other low-nutrition meat items. Simon (2013, p. xvi) puts it bluntly: “Americans have, to a great extent, become puppets of the animal food industry. We eat what and how much we are told to, and we exercise little informed, independent judgment.”

To be sure, the means of production provide the “base” upon which material lives are lived and goods are introduced, but the meanings of these goods are far from pre-ordained (Smith 1998; Hall 1986). In recent decades, anthropologists and environmental sociologists have sought to avoid purely materialist origin stories by examining how the biophysical world sets parameters within which culture can develop, without absolutely determining it (Shanklin 1985; Mullin 1999; Pellow and Brehm 2013; Fischler 1980; Fiddes 1991). Drawing on and extending these scholars’ insights, in the following sections, we observe the evolution of meat through different epochs of Western history. For each historical era, we describe meat’s political-economic and biophysical value, the practical drawbacks of meat production-consumption during the era in question, and the cultural imperatives that ensured meat’s eminence.

Origins (200,000 YA–10,000 YA)

The political economy of early human societies was based upon basic provisioning. While there was a division of labor and inequality between the sexes, there was no private property or accumulation of wealth that could establish meaningful class stratification. Archeological discovery of early tools accompanying the remains of animals led to the now-refuted conclusion that our early ancestors sustained themselves through hunting. More recent analyses have concluded that the tools were used for scavenging (Mithen 1999; Cartmill 1993). In order to secure sustenance, the earliest humans likely engaged primarily in gathering insects and scavenging the remains of larger animals which had been left behind by predators (Mithen 1999).

Clear evidence of hunting has only been dated to between four and five hundred thousand years ago (Cartmill 1993; Kalof 2007). At that point, technological advances in weaponry and other tools allowed humans to hunt megafauna, which provided more meat and higher quality fats with less labor (Smil 2013). Thus, contrary to popular narratives, meat was eventually integrated into human diets “despite a strongly herbivorous ancestry” (Longo and Malone 2006, p. 115). Meat developed a communal significance because the killing of a large animal produced more meat than one person, or often even one family, could consume. The excess was often shared with others, which made meat a focal point for gift exchange and social gatherings (Franklin 1999; Rozin 2003). It would be unwise, however, to attribute this communal esteem to the long-term environmental and nutritional indispensability of meat-centric living.

First, even among Paleolithic societies, hunting often came with a steep ecological cost. Kalof (2007, p. 7) observes that “Paleolithic hunters killed randomly, slaughtering more animals than they needed for survival.” Across several continents, Paleolithic hunters drove “kangaroos, giant wombats, large ground sloths, the mammoth, the mastodon, the cave bear and rhinoceros” to extinction (Kalof 2007, p. 7). In North America, the overhunting of mammoth, mastodon, bison, camel, ground sloth, horse, shrub oxen, and tapirs contributed to the mass extinction of species as well. This had a cascading effect on biodiversity, as predators that depended on these herds—such as the saber-toothed cat, the dire wolf, and the hyena—also died off (Smith 1975). Moreover, Paleolithic hunter-gatherers used grass fires to manage animal herds, and this radically altered local landscapes and habitats as well (Redman 1999). Second, as discussed previously, early hunter-gatherer diets were diverse and highly variable.Footnote 6 The wear and tear on the fossilized teeth of early hominids from 2 to 3 million years ago indicates that a primarily plant-based diet (including grain) was consumed, and that animal protein played only a complementary role (Perlès 1999; Flandrin 1999).

In sum, while there is an understandable lure to look favorably upon Paleolithic lifestyles as compared with the challenges of modern industrial societies (see for example Sahlins 1968), we would be mistaken to gloss over the shortcomings, diversities, and cultural preferences of these early human societies. Their consumption of meat was limited, and it was certainly not a panacea. Life expectancies were short.Footnote 7 While it is certainly evident that the onset of agriculture brought a decline in nutrition, catalyzed negative environmental impacts, and ushered in class stratification (Diamond 1987), claims that the hunter-gatherer diet and lifestyle are an ideal solution for modern challenges are dubious at best.

The dawn of animal agriculture (10,000 YA–2500 BCE)

Early Neolithic

Climatic changes, the extinction of large game animals, and expanding sedentary populations put critical constraints on the continued viability of many hunter-gatherer communities. These factors encouraged a search for alternative food sources, and the domestication of plants and animals began to emerge in nine different regions on four continents between 8500 and 2500 BCE (Diamond 2002; Bar-Yosef 2002).Footnote 8

Many archaeologists have argued that increased proportions of sheep, goat, and cattle bones, combined with decreased proportions of bones from wild animals, provide evidence that livestock played an increasingly vital political-economic and nutritional role in early Neolithic societies. However, as noted by Marciniak (2005), early Neolithic societies did not have enough cattle to provide a substantive quantity of meat, obtaining fodder required too much land and labor, and winter housing for livestock was in short supply. Tending to livestock was also more labor-intensive than obtaining wild foods. With respect to the argument that meat played an indispensable nutritional role, bone records in Central Europe show that the beef and pork cuts which offered the highest fat and muscle content were not the most consumed (Marciniak 2005).

Pastoralism also frequently brought negative environmental consequences, namely soil erosion and deforestation (Williams 2005; Redman 1999). Grazing also required large tracts of land. In order to maintain a Neolithic settlement for 30 people, 40 cattle, and 40 goats or sheep, the required land for housing, wheat fields, garden plots, woodlots, pasture, meadow, and forest browse would have amounted to approximately 6 km² of woodland, or 20 hectares per person—equivalent to more than twice the size of Lower Manhattan (Williams 2005, p. 32).Footnote 9 In order to protect domesticated herds, predators were dispersed or hunted to extinction (Redman 1999), while other wild animals were lost due to agricultural expansion and/or habitat destruction (Verhoeven 2004).Footnote 10

The purpose of this critique is not to suggest that crop agriculture offered a magic elixir—far from it. Crop agriculture, when done poorly, also caused extensive damage. Our point is simply that it would be a mistake to trace meat’s cultural legitimacy to some kind of “golden age” of sustainable livestock production. Primitive agricultural societies were far from idyllic settings, and even those settlements that produced high yields, recycled nutrients, and irrigated their fields “experienced a high incidence of malnutrition and could not escape a recurrence of major famines” (Smil 2013, p. 52). Humans’ closer proximity to domesticated animals also increased their exposure to zoonotic diseases, including smallpox, malaria, tuberculosis, measles, and influenza (Redman 1999).

In light of these numerous downsides, the Neolithic pursuit of animal husbandry begins to make more sense when situated in cultural context. Domestication was first and foremost a spiritual process, and livestock ownership also signified social status and group identity (Simoons 1994; Goudie 2000; Marciniak 2005). In the Eastern Mediterranean, archaeological digs at some of the world’s oldest villages show the deliberate placement of animal bones in human burial, the combined burial of human and dog remains, and decorated bone sickle hafts that depicted half human/half animal figures (Verhoeven 2004). Through ritual, livestock ownership, agricultural experimentation, tool development, herding, building homes, and other everyday activities, humans co-evolved over generations with and through nature to strengthen communal identities and create new environments.

Middle/late Neolithic

Primitive agricultural communities eventually gained strategic and competitive advantages over their hunter-gatherer counterparts.Footnote 11 As domestication continued into the Middle Neolithic, the economic functionality of livestock rose in importance (Marciniak 2005). Animals were increasingly used for draft labor, eggs, milk, wool, and soil fertilization, more so than meat or ritual, and food crops remained too valuable to feed to animals (Verhoeven 2004).

When it came to livestock ownership and meat consumption, the political-economic and cultural dimensions of Middle Neolithic life were deeply enmeshed. Feasting was a highly competitive activity that was used to showcase status as well as group solidarity (Marciniak 2005). Eventually, with increasing human populations, pastures were overgrazed by sheep and goats and soil quality diminished (Sherman 2002). Such was the case at Belderg Beg—a Middle Neolithic era Irish settlement that was abandoned due to “year-round grazing, soil exhaustion and erosion” (Verrill and Tipping 2010, p. 1018).

At the same time, it is important to recognize that not all cultural aspects of meat consumption during the Middle Neolithic were related to political economy. Prohibitions against pork, for example, date back to 1400 BCE in Mesopotamia. In Ancient Egypt, pigs slowly lost their social significance, and evolved from being seen as a symbol of conquest to being met with “indifference, avoidance and prohibition, and finally taboo” (Marciniak 2005, p. 222). Camel flesh was similarly proscribed in the Middle East, prior to the emergence of Islam (Simoons 1994). These taboos demonstrate the limitations of purely biophysical or political-economic explanations of meat eating during the Middle and Late Neolithic.

In sum, the lesson from the Neolithic is that the consumption of meat from domesticated livestock has a deep cultural history that is distinct from utilitarian function. Meat from livestock served a peripheral role—at best—in providing for communal nutrition, and livestock usage frequently resulted in serious political-economic and environmental consequences. As the domestication and usage of livestock expanded, the environmental, political-economic, and religious/ethical tensions which began to unsettle the increasingly complex societies of the Neolithic would also confront later civilizations.

Meat in antiquity (2500 BCE–550 CE)

Bronze age

Human societies in the Bronze Age were characterized by larger and increasingly complex social structures, a greater degree of social stratification, written language, and the use of metals. Politically, livestock symbolized wealth, and meat was the diet of heroes in Homer’s Iliad (Spencer 1993). Again, however, the political-economic benefits of meat and livestock often came at a significant ecological cost. In ancient Greece, soil erosion was not apparent during the age of Neolithic farming, but it began to appear during the late Bronze Age (1400–1200 BCE). Much of this was due to grazing on poor quality fields, upland farms, and terraces (Redman 1999). While goats were valued for their fecundity, they also consumed young shoots and seedlings. The result was widespread damage to local plants, which were succeeded in turn by low-quality woodland and ultimately poor pasture (Williams 2005). This degradation may have been a significant contributor to the decline of ancient Greece (Williams 2005).

Despite these widespread impacts, meat actually provided only a nominal contribution to the ancient Greeks’ diets, and possibly as little as 1 or 2 kg/year was consumed—generally at public festivals, political events, and religious ceremonies (Montanari 1999). People who did not participate in these events were considered non-members of the community (Preece 2009). The Greeks also used philosophy and sacrificial rituals to legitimate the killing of animals for food, build community, and communicate with deities (Montanari 1999, p. 74). While there were many ancient Greek philosophers who endorsed ethical vegetarianism, including Pythagoras, Theophrastus, Empedocles, Dicaerchus, Plutarch, Porphyry, and Seneca, they were in the minority (Preece 2009).

Iron age

During the Iron Age, growing populations coincided with agricultural expansion and intensification. There was increased reliance upon livestock and a decreased dependence upon hunting game (Raish 1992). Livestock production in the dry Mediterranean climate proved to be environmentally challenging. Drought, combined with deforestation, reduced the availability of summer forage, and so livestock were moved to mountainous pastures during these months. In doing so, however, Iron Age farmers prevented manure from fertilizing their primary arable lands, and they accordingly ran the risk of overgrazing the mountain regions (Hoffmann 2014). For those who could overcome these obstacles, there were clear political-economic rewards. In ancient Rome (753 BCE–476 CE), large estates, powered by both slaves and free workers, produced specialized grain and livestock products that were demanded by urban centers and governments (Tauger 2013).

Even so, the political-economic role of meat was embedded within cultural contexts. Grain was necessary for survival, and hence sellers were not permitted to obtain a large profit from it. Given that meat was seen as a luxury, no such restrictions were put in place, and livestock production hence became a strategy for wealth creation. Essentially, there were two agricultural systems: one for nourishment, and one for luxury and celebration (DuPont 1999). Roman emperors distributed both pork and bread, however, “to maintain public order as well as underscore the privilege accorded to Roman citizens” (Montanari 1999, p. 74).

The Romans’ relationship with meat was also broadly influenced by their desire to distinguish themselves from the Germans, whom they regarded as savage barbarians (DuPont 1999). The Romans sought to domesticate hunting and animal husbandry by setting aside semi-wild reserves, and hunters and shepherds were looked down upon as uncivilized nomads. Nonetheless, the commodity of meat itself was held in deep esteem, and exotic game was brought to Rome from the fringes of the empire for the banquet halls of elite citizens (DuPont 1999).Footnote 12

Like the Greeks, the Romans also used a legitimating ideology to justify the killing of animals, and they drew heavily upon the Stoic/Aristotelian view that animals were inferior beings that existed for human purposes. Even those Roman figures who wrote more favorably on behalf of animals—Pliny the Elder, Marcus Aurelius, and Sextus Empiricus—did not object to killing animals for food (Preece 2009).

In hindsight, many of the tensions that characterize contemporary meat production and consumption were already evident in antiquity. Like modern societies, the Greeks and Romans used meat production and consumption to accumulate wealth and showcase social status. Their usage of livestock also overtaxed the environment, and they used a legitimating ideology in order to justify the killing of animals. The Romans moreover used meat to signify their imperial superiority over “the other.”

Meat in ancient Israel and early Christian societies (1550 BCE–379 CE)

Ancient Israel emerged in Palestine during the end of the Bronze Age and the beginning of the Iron Age (Miller 1986).Footnote 13 Livestock had both political-economic and nutritional value in ancient Israel. Examinations of sheep and goat fossil records suggest that the majority of livestock were exported, and that local people primarily consumed meat from older/non-productive animals, albeit rarely (Miller 1986; Borowski 1998). The environmental legacy of livestock in the Levant is less clear, and scholars have reached differing conclusions. The Israelites warned one another against excessive herd size in order to prevent overgrazing, a concern also reflected in Genesis 13:5–7 (Borowski 1998, p. 46). Nonetheless, grazing in the Levant likely damaged farmland all the same. This is because the local climate allowed for grazing year-round, which inhibited soil recovery and displaced natural oak woodlands (Redman 1999).Footnote 14

When taken as a whole, the meaning and importance of livestock in ancient Hebrew and Christian societies transcended their economic and environmental usage. This is evidenced by the usage of meat as a cultural resource, the reliance upon scripture and ritual as a justification for killing animals, and the ostracizing of those who refrained from meat eating.

Even after the ancient Egyptians no longer controlled Palestine, they continued to wield considerable influence over this region (Miller 1986). The Egyptians worshiped animals, and the Hebrews wanted to politically distance themselves from this practice. Sapir-Hen et al. (2013) argue that the people who lived in Judah had already stopped raising pigs for geographic reasons, and that the taboo may have evolved as a way for Israelites living in Judah to differentiate themselves from the Philistines in the southern lowlands. Moreover, Semitic tribes united around a shared belief that there was one God who had created a specific order, and lifestyle and dietary rules were needed to reflect that natural order. Castrated animals, carnivores, blood, and animals that blurred categories (e.g., land animals with cloven hooves who did not chew the cud) could not be eaten. Thus, ancient Hebrews arguably rejected pork for reasons of cultural identity, not food safety. If food safety were the main reason why pork was proscribed, the same fears of parasitic disease likely would have been shown towards other meats as well. Further evidence against the food safety argument comes from fossil records, as “remains of domesticated pig were found at several Chalcolithic, Early, Middle, and Late Bronze Age sites in Israel” (Borowski 1998, p. 140). Ancient Egyptians, Mesopotamians, Iraqis, and Greeks consumed pork without worry. Moreover, the ancient Hebrews would not have had access to knowledge about the medical dangers of undercooked meat (Soler 1999).Footnote 15

At the same time, pre-Christian Judaism, according to the Torah, had several rules against animal cruelty. It prohibited the infliction of pain on animals, including those without owners, and another Hebrew doctrine dictated that animals should not be sold to cruel people (Preece 2009). Nonetheless, Genesis 1:26–28 asserts that God bequeathed animals to humans to be used as food and fiber, and killing animals for food was justified by the ancient Hebrews through the practice of ritual sacrifice (Spencer 1993; Soler 1999; Cockburn 1996).

The early history of Christianity serves as further testament to the role of culture in shaping the meaning and importance of meat eating. For example, early Christian sects that practiced vegetarianism for ascetic and/or ethical reasons were often denounced as heretics (Spencer 1993; Preece 2009). During his rule as the Patriarch of Alexandria (380–385 AD), Timothy I gave food tests to his clergy and interrogated those who would not eat meat (Spencer 1993, p. 142).

In sum, like the Romans, the Israelites likely used their dietary rules and preferences for certain types of meat as a means by which to designate themselves as culturally superior. Also, like the Greeks and Romans, the Israelites and Early Christians used a legitimating ideology to justify the killing of animals. Christian sects who dissented from this ideology were persecuted. The ideology of dominion and the teachings of St. Augustine—who, echoing Aristotle, made the forceful argument that animals could not reason and therefore were subject to slaughter (Preece 2009)Footnote 16—would leave a particularly enduring legacy.

Meat in medieval Europe (476 CE–1400 CE)

Early middle ages

After the fall of Rome, Christianity continued to spread, Germanic cultivation of the land and exploitation of the forest came to be seen as profitable rather than barbaric and backwards, and urban populations dispersed into small towns (Montanari 1999).Footnote 17 Meat production continued to have a serious environmental impact, and ever more land was transformed into pasture. Bears, wolves, and other predators became extinct in areas where humans resided, as they were a threat to livestock. Overall, however, meat did not serve as a primary food source, and “was eaten as much for reasons of taste and social prestige as on nutritional grounds” (Hoffmann 2014, p. 137).

Some cultural and institutional controls were placed on meat consumption. Religious avoidance of terrestrial meat on Fridays, before feasts, and during Lent increased the religious and symbolic importance of meat, while also triggering a major commercial system of trade in preserved fish from coastal to inland Europe (Spencer 1993; Hoffmann 2014). In Britain more specifically, the church and the government endeavored to limit meat consumption in an attempt to promote self-control among the citizenry, as there was a concern that overconsumption could feed animalistic tendencies and result in an inability to control one’s passions (Franklin 1999).

Later middle ages

Eventually, the mixed forestry/pastoral agriculture that typified feudal communities began to decline as populations grew. As this occurred, there was more of a shift to a market economy and direct cultivation through deforestation, planted fields, and plowing. The primary use of converted woodland was grazing (Williams 2005). By 1300, feudalism had “overstepped the socio-ecological limits to continued expansion” (Moore 2003, p. 313) as lands became increasingly exhausted and less productive from overuse. As a result, by 1400, urban consumers in central Europe had become dependent upon a long-distance cattle trade with eastern and northern Europe for beef. This had severe environmental consequences in Denmark, where overgrazing and the loss of regular manure deposits resulted in declines in soil fertility, sand drift, and the emergence of bogs (Hoffmann 2014).

Land scarcity and deforestation made it more difficult for peasants to grow cereal crops, which could provide more food than hunting or grazing livestock (Hoffmann 2014). The cultural value of meat helps to explain why—despite its contribution to deforestation and rural poverty—its consumption persisted. Meat consumption—particularly of fresh, prized cuts—was central to the cultural identity of the aristocracy and the emerging urban middle class (Montanari 1999). As noted by Hoffmann (2014, p. 115), “In agricultural society diet is the main driver of land-use, but the intermediary between diet and land use is power.” These economic relations were legitimated by a cosmology and a dietary science which proclaimed a “great chain of being,” whereby birds were closest to the heavens and thus should be consumed by the upper classes, while plant-based foods were rooted in the earth and should hence be consumed by the serfs (Grieco 1999). One doctor wrote in 1583 that “partridges are only unhealthy for country people,” and medical professionals concurred in general that the peasantry needed to eat more vegetables for physiological rather than economic reasons (Grieco 1999, p. 311). The nutritional science of the medieval era moreover valorized meat consumption as essential to male virility (Montanari 1999, p. 179).

There was also a continued effort to label vegetarians as heretics during the Middle Ages. St. Thomas Aquinas (1225–1274), in the tradition of Aristotle and Augustine, argued that animals “were without rational souls, were therefore imperfect and could not be immortal; they might therefore be killed and eaten” (Spencer 1993, p. 176). These principles would be tested by the Cathar movement, which embraced asceticism and opposed traditional baptism, the usage of the cross, marriage, and meat consumption. In 1243, French troops forced the Cathars to surrender, and over 200 were burned to death.

In sum, much of the medieval-era schism between meat production and consumption was driven by ideology, as cultural demands began to increase for specific types of meat products. This left a deep legacy, as “medieval frontiers provided distant zones of surplus and ‘abundance’, which let consumers avoid changing their own cultural preferences and social practices by externalizing, even forgetting, the social and environmental costs of satisfying them” (Hoffmann 2014, p. 155). Meat consumption was used during the Middle Ages to pursue and demonstrate social status, vitality, and godliness, and those who rejected this premise were persecuted.

Renaissance, reformation, and the emergence of early modern Europe (1400–1800)

The Black Death of the fourteenth century had a marked impact on European agriculture. Lowered population levels shrank both food demand and the rural labor pool, which took many fields out of cultivation (Edwards 2011). As a result, “landowners increasingly turned to raising animals as a way to profit from their land” (Kalof 2007, p. 78). Livestock thus became progressively more important to the rural economy in the fifteenth century, and their main usage revolved around draft labor, byproducts, and subsistence household production. Labor shortages also reduced unemployment, which boosted wages and allowed more people to eat meat—“in some areas the amount of meat consumed more than doubled between the fourteenth and fifteenth centuries” (Kalof 2007, p. 78).

The market for meat further expanded with the rebounding of Europe’s population, particularly in the cities, which resulted in higher livestock densities and increased sales for export (Edwards 2011; Raber 2007). In the wake of an economic depression in the fifteenth century, some European countries decided to devote more land to grazing because the value of wool and meat had proven more resistant to economic decline than crops (Wallerstein 2011). Further, ambitious English aristocrats came to realize that they could better enrich themselves through the livestock trade than by taxing serfs, as had been the case under feudalism (Moore 2003). Through a series of sixteenth century parliamentary laws known as the Enclosure Acts, wealthy landlords privatized the peasants’ common lands, used the land to graze sheep, and then sold the wool to continental Europe. No longer able to engage in subsistence farming, the peasantry dissolved into wage laborers and the era of capitalism was born. This paved the way for land, animals, meat, and literally everything else deemed to be of value to be privatized, commoditized, and sold for profit. As such, it has been suggested that the origins of capitalism can be found in the keeping of livestock and agriculture more generally, rather than with the merchants of urban areas, on whom many historians have focused (Rifkin 1992; Wood 2002).

To be sure, there were dissident voices. Sir Thomas More, for example, criticized the use of land for grazing cattle and sheep on pasture. He argued that livestock “consume, destroy, and devour whole fields, houses, and cities” and that the nobility “leave no land for tillage—they close all into pasture, they throw down houses, they pluck down towns and leave nothing standing” (More, quoted by Spencer 1993, p. 186). Moreover, as noted by Moore (2003, p. 317), “the resulting widespread displacement of cereal agriculture by animal husbandry not only entailed a deepening of the world-economy’s division of labor, but biased it in favor of further expansion.” The enclosures also foreshadowed future systems of intensive livestock confinement (Spencer 1993).

The political-economic importance of meat was further reinforced by the scientific discourses of the Enlightenment, particularly as concerned nutrition, philosophy, and agricultural science. Dietary science of the era considered meat to be vital for good health, as animal muscle was seen as analogous to human muscle and therefore nourishing (Albala 2003). While interest in meatless diets began spreading in the 1700s, as sufficient vegetables and cereals became more widely accessible, the dominant view continued to be that a meatless diet would endanger one’s health (Spencer 1993).

Scientific discourses also legitimated the exploitation and killing of animals. Descartes, for example, famously declared in 1637 that animals were nothing but fancy machines, and he routinely dissected live animals (a common practice at the Royal Society in London). While his views were not universally shared at the time, they were nonetheless highly influential (Spencer 1993; Preece 2009). During this same period, livestock were increasingly objectified through the discourses of specialized breeding, cataloguing, veterinary medicine, experimentation, accounting, and dietary science (Raber 2007).

In sum, while scientific authority gradually replaced ecclesiastical authority, the cultural consensus around the importance and necessity of meat eating continued (Spencer 1993). Nation states turned to livestock as a relatively stable good in the context of economic uncertainties, and wealthy landowners found that they could profit more from rearing livestock than they could from taxing the peasantry. As with previous historical epochs, cultural imperatives helped to stoke meat consumption throughout the emergence of modernity.

Meat in colonial America (1607–1776)

As capitalism swept Europe, wealth and new professional occupations began to emerge in cities, expanding urban populations increasingly enjoyed higher standards of living, and the land required to sustain these lifestyles became increasingly scarce and degraded by overgrazing. Hence, the colonization of America was driven largely by the desire for cheap land. Colonial America became a patchwork of small-plot family farmers living materially primitive lives, and livestock played a key political-economic and nutritional role in the lives of subsistence settlers (Cronon 2011; Anderson 2004). Growing vegetables was laborious, and meat was believed to be healthier (Ogle 2013). However, even as most Americans engaged in subsistence family farming to support their homesteads, the early American economy was primarily based upon agricultural specialization, trade, and profit. Beef emerged as a critically important export, and livestock were being fattened on corn and perishable grains for decades before the American Revolution (Ogle 2013).

Nonetheless, “the people of plenty were a people of waste” (Cronon 2011, p. 170). In total, colonists’ livestock

required more land than all other agricultural activities put together. It was too labor-intensive to actively tend to livestock, and so they were allowed to roam freely. In a typical town, the land allocated to them was from 2 to 10 times greater than that used for tillage (Cronon 2011, pp. 138–139).

Woodland grazing also compressed the soil and constrained the available oxygen for the roots of higher plants while limiting moisture absorption. Pigs killed trees by gnawing on roots (Anderson 2004). After livestock had devastated the vegetation in one area, colonists would simply “open new pastures, create additional hay meadows, or cultivate more grain crops,” and this resulted in further deforestation (Cronon 2011, p. 146). Soils became exhausted after 1–2 years, and minerals were washed out into waterways, particularly during floods. Beaches and sandy areas were particularly prone to erosion from grazing and plowing (Cronon 2011). Beaver, deer, bear, turkey, and wolf disappeared, and were replaced by English domesticated animals. Free-range cattle and pigs also destroyed indigenous peoples’ food sources: shellfish, nuts and berries, cultivated fields, and buried grain baskets (Cronon 2011; Anderson 2004). Livestock, moreover, helped to spread tuberculosis and influenza, which were particularly deadly for indigenous communities (Anderson 2004).

Beyond the motives of profit and subsistence, colonial settlers’ profligate usage of livestock was also driven by racist and imperialist ideologies. Colonists either ignored or dismissed the notion that their grazing practices were destructive, either to the land or its original inhabitants. To the contrary, they firmly believed that meat production “improved” and “civilized” the land, and that this gave them a right to it. For example, in his semi-autobiographical Letters from an American Farmer (1782), the French-born settler J. Hector St. John de Crèvecoeur wrote:

Every year I kill from 1500–2000 weight of pork, 1200 of beef, half a dozen of good wethers in harvest; of fowls my wife has always a great stock; what can I wish more? My negroes are tolerably faithful and healthy… This formerly rude soil has been converted by my father into a pleasant farm, and, in return, it has established all our rights. On it is founded our rank, our freedom, our power, as citizens; our importance as inhabitants of such a district (Hagenstein et al. 2011, pp. 17–18).

Meat production and consumption was deeply taken for granted as the godliest way of life by colonial settlers, and alternatives to this lifestyle were nearly unthinkable—despite the fact that indigenous populations managed quite well without rearing livestock (Anderson 2004).

Meat consumption also provided an important psychological wage for the colonists. Whereas only half of England’s population could afford meat, Chesapeake settlers consumed “nearly 150 pounds annually per person … even servants expected meat in their rations and complained to county courts when their masters failed to provide it” (Anderson 2004, p. 112). Upon visiting Pennsylvania in the 1750s, one man wrote “[E]ven in the humblest or poorest houses, no meals are served without a meat course” (Ogle 2013, p. 4). Meanwhile, only 15% of the protein being consumed in Europe was in the form of meat (Dauvergne 2008; Franklin 1999).

All too often, the sustainability of pre-industrial American meat production is taken for granted and treated as self-evident. However, when looking more closely at what it took for early colonial settlers to make a profit from meat production, these assumptions clearly merit deeper reflection. Unfortunately, the environmental and social problems associated with meat production in colonial America were not historically unusual, nor were the ideologies of dominion and imperial superiority used to justify these problems.

Meat and the American frontier (1776–1890)

In the aftermath of the American Revolution, delivering meat and other foodstuffs to swelling urban populations in New England and Europe required massive infrastructure investments and new lands. The federal government, eager to improve its international competitiveness, hastened this transition by encouraging export, developing canals and railways, and acquiring Native American territories for agricultural expansion. Grazing and cattle feeding began to extend far into the Ohio River Valley, through Ohio, Pennsylvania, and Illinois. An extensive byproducts industry developed, and previously wasted carcass remains were converted into sausage, head-cheese, lard, cosmetics, soap, and ink. By the 1840s, Cincinnati emerged as the center of a nationwide pork packing industry, though it would soon be eclipsed by Chicago in the 1850s (Danbom 2006; Ogle 2013).

We cannot know for certain how much meat was being consumed at this point in time because formal consumption records were not kept; however, we can glean some insight from records of widow and slave allowances. Widow allowances contained 200 pounds of meat by the early nineteenth century and slaves reportedly were to receive 150 pounds of meat each (Horowitz 2006). Pork was the most commonly consumed meat not due to taste preferences but because it could be cured, thus enabling year-round consumption. Pork was surpassed by beef consumption once refrigeration technology made beef more accessible (beef retained the title of favorite meat through much of the twentieth century); preferred cuts were also shaped by external factors and therefore also changed over time (Horowitz 2006). Wage labor and disposable income unleashed consumer demand for choice cuts of meat, and the infrastructure was now in place to deliver on these requests. Initially, consumer preference for fresh steaks dictated the delivery of live cattle to urban markets (Cronon 1996). Eventually, however, meat packers enticed consumers with the low prices of dressed and boxed beef—products which relied upon centralized facilities for processing and refrigeration technology for delivery (Ogle 2013; Stull and Broadway 2004). Perhaps more significantly, refrigerated meat exacerbated a growing conceptual and logistical distancing between meat production, processing, and consumption, in addition to the impacts of all of these processes (Fitzgerald 2015).

The expansion of meat production and consumption had a cumulatively devastating environmental and social impact. Ogle (2013, p. 52) observes that “in 1870, a steer could survive on 5 acres of land; by 1880, thanks to overgrazing and grass depletion, those same animals needed 50–90 acres.” Prior to industrialization, the Chicago area was covered by tallgrass prairie, which settlers plowed up for crops (wheat, corn) or fenced off for pasture. Where crops failed, farmers turned to grazing, and cattle grazing required the displacement of the Plains Indians and buffalo. The prairie was transformed into pasture, and the pasture was ultimately converted to grain production and feedlots (Cronon 1992).

With respect to the grazing that occurred in western states, Sheridan (2007, p. 126) notes that there is “an inescapable fact about stock raising in semiarid lands. Despite the cultural, legal, and economic differences among pastoralists, grazers, and ranchers across the world, all confront one overarching ecological imperative: the need for access to large tracts of land.” Overgrazing in Western states was encouraged by competition among ranchers, none of whom wanted to be left out in the race for land and profit (Sheridan 2007).

While producers promoted meat in the interest of profit, for consumers, powerful discourses further encouraged the desire for meat. Central among these were dietary beliefs, as the virtues of meat consumption were also extolled by the medical establishment. In 1832, in response to a cholera outbreak, the federal government encouraged people to eat fewer fruits and vegetables, and instead recommended people consume more meat and alcohol (Preece 2009). Several cities went so far as to ban the sales of fruit, salad, and uncooked vegetables, and the US Board of Health nationalized the ban. To take another example, in the Boston Medical and Surgical Journal (which would eventually become the New England Journal of Medicine), one author wrote in 1838 that

It is almost universally the fact that the vegetable-eaters among us… are a very complaining sort of people. They are delicate, nervous, dyspeptic, and universally susceptible to all sorts of influences. They have sallow or pale countenances, a lax fibre, little muscular strength, and are incapable of any fatiguing or laborious employment. These characteristics of the vegetable-livers are, doubtless, very often the result of inadequate nourishment, depending on feeble digestive powers and innutritious diet (quoted in Iacobbo and Iacobbo 2004, p. 42).

In short, during the nineteenth century, people who refrained from meat were routinely heckled, ridiculed, labeled as heretics, and ostracized (Iacobbo and Iacobbo 2004).

The denigration of vegetarians was further intertwined with nationalism and racism (Ogle 2013). George Beard, a prominent nineteenth century physician, argued that civilized people consumed more animal proteins, and offered by way of explanation that, “the rice eating Hindoo and Chinese, and the potato-eating Irish are kept in subjection by the well-fed English” (Beard, quoted in Belasco 2006, p. 9). A 1909 medical publication more bluntly stated that “white bread, red meat, and blue blood make the tricolor flag of conquest” (Belasco 2006, p. 9). Food reformers’ efforts to reduce meat consumption at the turn of the century accordingly fell on deaf ears, particularly among many working class immigrants who came to America in search of the good life (Levenstein 1999, p. 519).

In sum, the frontier era of meat production resulted in further ecological degradation and the emergence of mass consumption. Meatless diets continued to be ridiculed, for both dietary as well as racist reasons. In hindsight, given the strong cultural preference for meat throughout the course of Western history, the popular acceptance of these claims during the nineteenth century is quite understandable.

Modern meat (1890–present)

The twentieth century bore witness to a sea change in global agricultural production, ownership, and practices (Danbom 2006). While the diversity of regional landscapes and the dissimilarity between agriculture and other sectors of the economy made food production something of a “late comer” to the industrial revolution, new productionist technologies, pro-capitalist technocratic discourses, and consolidations in ownership eventually brought agriculture into the twentieth-century global economy (Beardsworth and Keil 1997; Fitzgerald 2003; Heffernan 2000). Corporate power would later come to wield increasing leverage over farmers and small producers through horizontal integration, expansions in market share, vertical integration, and global conglomeration (Heffernan 2000). In order to amass wealth in an industry characterized by thin profit margins, corporate packers sought to produce meat at the largest scale possible (Stull and Broadway 2004; Ogle 2013).

There was also a political motivation for industrializing the production of meat and other foodstuffs. Policymakers wanted to push for increased economic growth, and this required a reliable supply of affordable food for factory workers and other city dwellers. Shifting the economic base of the nation from farming to industry and service required a Herculean effort, providing inspiration for the Department of Agriculture, agricultural subsidies for commodity crops, and colleges of agriculture at land-grant universities (Roberts 2009). The techniques and technologies that resulted from these efforts included intensive tillage, monoculture, the application of synthetic fertilizer, irrigation, chemical pest and weed control, genetic engineering of crops and livestock, and factory farming (Gliessman 2007). Small-scale farms based on animal power, mixed plant/animal husbandry, and diversification were more laborious and knowledge-intensive, and hence less competitive, than larger producers (Hagenstein et al. 2011).

The most revolutionary changes were realized in the poultry industry, where vertical integration, antibiotics, and the use of vitamin D in poultry feed drastically reduced production costs and consumer prices. By the late 1970s, further gains in the poultry industry were made through the promotion of value-added products like chicken breast patties, “and consumers, eager to have the specialty products that matched the convenience-based lifestyle, willingly paid for the food preparation work they no longer chose to do” (Schwartz 1991, p. 46). Chicken meat sales statistics dramatically illustrate this shift: By the mid-1990s, 86% of sales of chicken were for processed meat, such as nuggets. Only 30 years earlier, 83% of sales had been for whole chickens (Horowitz 2006).

Even though the industry could charge more for value-added cuts, meat was still relatively inexpensive. In fact, the twentieth century could be considered the century that meat consumption became democratized: increased production and reduced prices, coupled with increased wages (for at least part of the century), made it possible for a growing portion of the population to eat more meat (Fitzgerald 2015). Consumption had been relatively low during the Great Depression, at between 85 and 112 pounds per capita (Horowitz 2006; Skaggs 1986), but it rebounded between the Great Depression and World War II, to the point that the industry could not meet the demand and war-time rationing was implemented (Skaggs 1986). During this time, meat was extensively promoted as fuel for soldiers by the US military. According to a pamphlet produced by the Office of Price Administration, meat was “an important part of a military man’s diet, [it gives] him the energy to out fight the enemy” (Belasco 2006, p. 11). Women, in turn, were seen as incapable of handling too much of these types of foods, and were expected to do without. The rationing policies at this time make explicit the role meat has played as an enduring symbol of masculinity and racial/national superiority.

With the end of the war and meat rationing, meat consumption levels soared in the 1950s and 1960s to the highest levels ever recorded (Horowitz 2006). Income increasingly became detached from meat consumption rates, as illustrated by the fact that by the mid-1960s two-thirds of families in the US could afford to consume the most coveted cut: steaks (Horowitz 2006). This democratization of consumption had significant symbolic implications: “Meat, particularly the more expensive cuts, became a symbol of the growing prosperity in the United States and elsewhere” (Fitzgerald 2015, p. 69). Meat also became tightly tethered to notions of progress and family values during this time of post-war prosperity and veneration of the nuclear household unit.

The cultural and political-economic influences on meat consumption are also evident in the changing ‘tastes’ for different types (i.e., species) of meat during this period. Again, poultry consumption trends are illustrative. As discussed earlier, poultry was historically considered flesh to be consumed by elites (and priced as such) and consumption levels were low relative to pork and beef. Mid-century, however, the perception of poultry and its consumption changed rather dramatically. Certainly, the industrialization of poultry production made it more affordable and popular, but other factors were also involved in the surge in poultry consumption. The poultry industry began to market its product as meat; up to this point in time it had not been considered a meat (and still is not by some people today). Additionally, poultry was not rationed during World War II, and therefore people had become more familiar with cooking it when their go-to meats were restricted. It was also increasingly popularized by the growing fast food industry, which exploited its dropping price. Finally, poultry was also depicted as a healthier alternative to the higher fat meats that had been favored. As a result, by the 1980s, poultry had reached the consumption levels of beef, and by the 1990s it had surpassed beef (Horowitz 2006; Striffler 2005). Chicken remains in the lead: the average American consumes approximately 43 pounds of pork, 56 pounds of beef, and 72 pounds of poultry per year (Earth Policy Institute 2014; United States Department of Agriculture Economic Research Service 2013).

The twentieth century also witnessed changing preferences for cuts of beef and pork. Like the chicken nugget, standardized cuts in the beef and pork industries were also popularized; one only need look to the current virtual celebrity status of bacon as evidence. In many ways, the shift to processed, standardized cuts has been a win–win for both consumers and the industry. In addition to making it possible for the industry to charge more for value-added products, companies were able to build brand loyalty by displaying their company logo on the packaging (Horowitz 2006). Processed cuts were also more accessible and convenient for consumers, who eschewed laborious cooking in favor of convenience, yet still saw the value (particularly of the familial variety) in consuming meat.

Faced with diminishing take-home pay in the late twentieth century, Americans came to rely on cheap meat and other inexpensive consumer goods to avoid poverty, remain in the middle class, and eat affluently (at least as compared with non-industrialized societies). The industrialization of production and processing, coupled with favorable government policies, has kept meat (even value-added cuts) relatively inexpensive. Adjusting for inflation, the cost of meat actually declined throughout the twentieth century, reaching its lowest price at the end of the century (Stull and Broadway 2004). Due to this affordability, the discourse of family values that asserts that the American family comes together when it enjoys meat together has remained intact.

The modern affordability and convenience of meat has also contributed to its enduring association with (hegemonic) masculinity. As discussed throughout this paper, meat has long been culturally associated with power and vitality, and this has made the conspicuous consumption of meat an effective resource for performing masculinity (Adams 1990/2000; Franklin 1999). This meat-masculinity connection has actually enjoyed a resurgence in the US in recent years in the context of uncertainty stemming from shifting gender roles (Rothgerber 2013). Empirical analyses of media and advertisements have documented not only the pervasiveness of the association of meat with masculinity, but also its use as a way to reestablish “traditional” and “natural” gender roles that are felt to be under attack (Parry 2010; Rogers 2008).

The cultural power of meat is perhaps most vivid when meat is absent. Throughout the twentieth and twenty-first centuries, abstention from meat consumption—particularly by men—has long been perceived by many to be a threat to the dominant culture. For example, in 1927, despite the increasing evidence that plant-based diets could provide for adequate nutrition, a magazine published by the American Medical Association pejoratively described vegetarians as overly sentimental, fanatical, and ignorant about science (Iacobbo and Iacobbo 2004). In 1954, an official from the National Institutes of Health, W. H. Sebrell, called vegetarian diets a fad based on “ancient superstition” (Iacobbo and Iacobbo 2004). Meatless diets were also attacked in the entertainment industry. For example,

An August 1951 episode of the George Burns and Gracie Allen show featured a vegetarian meal as appetizing as a plate of cardboard… The episode ends when Gracie and Blanche dump their new diet and instead dine on steak (Iacobbo and Iacobbo 2004, p. 164).

Nearly 50 years later, a 1998 episode of the sitcom Everybody Loves Raymond followed a nearly identical plotline. After the family reacts to her tofu turkey with revulsion, Raymond’s mother abandons her diet program and instead prepares a more traditional Thanksgiving meal. At the end of the episode, she rhetorically asks, “What’s the point of living longer if you’re miserable, dear?” In spite of this history of negative depictions of those who refrain from consuming animal products, the twenty-first century has witnessed an increase in the number of people adopting plant-based diets, as well as those simply reducing the amount of meat they consume, such as through participation in “Meatless Mondays” and other strategic initiatives.

Discussion: the discursive legacy of meat

Over the past century, Americans have built upon and extended the myths and discourses pertaining to masculinity, vitality, and racial superiority that have cumulatively been associated with meat over the long arc of Western history (see Adams 1990/2000; Willard 2002). The end product of these discourses has been a form of “naturalism” that normalizes humans’ use of animals for food as being somehow innate (Peñaloza 2000, p. 99) and as necessary by extension (Joy 2011). This normalization/naturalization of meat consumption is not merely an abstraction. The empirical research reviewed earlier in this paper documents the ways in which academics, the general public, advertisers, and industry promulgate and popularize meat as natural and necessary. The foundation of these assumptions can be found in the cultural-historical antecedents that have been detailed throughout this paper.

By providing a detailed genealogical analysis of meat production and consumption, this paper demonstrates that as biophysical, political-economic, and cultural contexts have changed, the source of meat’s legitimacy has changed as well; that is, its legitimacy is not ‘material.’ Subsistence societies turned to animal agriculture to meet their nutritional needs. As societies became progressively more complex, meat became increasingly important for political-economic reasons. It would be a mistake, however, to attribute meat’s legitimacy exclusively to material sources. Rituals, habits, traditions, and desires have stoked the demand for meat as well. Contemporary arguments that insist that the demand for meat stems primarily from the material necessities of political economy, environmental utility, or nutrition are thus only telling half the story. Our analysis begins to tell the other half of the story: it indicates that during many historical periods, much less meat was consumed than is generally assumed; that characterizations of pre-industrial meat production as sustainable are problematic; and that meat consumption is not tied exclusively—and perhaps not even mostly—to economic, nutritional, and environmental utility. Rather, meat consumption has been leveraged for notable social purposes, such as to demonstrate cultural status and superiority. In short, the normalization of meat consumption via assumptions that it is natural and rooted in material necessity is flawed.

Thus, the primary contribution of this paper is that in deconstructing the normalized/naturalized materialist assumptions surrounding meat consumption throughout history, it clears the way for a more nuanced appreciation of the role that culture has played in the demand for and legitimation of meat. Such an understanding is important because it makes it possible to envision new ways forward. Consistent with the method employed herein, this “genealogy is attempting to go further by tracing possible ways of thinking differently, instead of accepting and legitimating what are already the ‘truths’ of our world” (Tamboukou 1999, p. 203). While the material world provides the sustenance upon which life can be lived, culture bestows legitimacy upon certain lifestyles while condemning others. It dictates how the material world should be interpreted, what should be valued, what should be avoided, how the world ought to be, and even what individuals should eat. These cultural processes, in turn, shape the material world. And, to be fair, at the current point in human history, the material is poised to shape the cultural in a profound way: the impacts of the normalization of meat consumption on the environment, not to mention human and non-human animals, are staggering, and these material impacts provide ample grounds for questioning the cultural legitimacy of one of our most deeply ingrained traditions.